ABOUT
OSCAR (Odometric Sensor-based Cart for Autonomous Routing)
This project presents the design and implementation of an autonomous mobile robot capable of point-to-point navigation. Unlike systems relying on expensive Computer Vision (CV) or LiDAR, this prototype utilizes Sensor Fusion—integrating high-resolution wheel encoders for odometry with ultrasonic sensors for environment mapping and obstacle avoidance. The robot builds a map of its surroundings from user-entered Cartesian coordinates, then uses its ultrasonic sensors to navigate to a target location while avoiding any obstacles along the path.
TEAM
Saptaparna Deb
Project Designer
Josh Andrew Saldanha
Hands-On Lead
INTRODUCTION
Computer Vision based systems and compass modules are complex, requiring extensive coding and computational resources that many cannot afford. To keep the project cost-effective and durable, we use the Arduino platform along with ultrasonic sensors to measure distances and avoid obstacles. This project aims to create a transparent, educational, and cost-effective navigation pipeline using an Arduino-based architecture.
PROBLEM STATEMENT
Mobile robots typically rely on highly complex systems such as Computer Vision (CV), Light Detection and Ranging (LiDAR), or IMU-based compass modules for spatial awareness and localization. These systems are often resource-intensive, cost-prohibitive, or susceptible to environmental interference.
We referred to Noon Minutes, a popular quick commerce platform in the United Arab Emirates as our case study. Noon Minutes utilizes a network of dark stores within a 2 kilometer radius of its catchment areas, significantly reducing picking and delivery times to a maximum of 15 minutes. However, replicating this exact model in geographically vast regions is challenging; increased travel distances not only extend delivery windows but also jeopardize rider safety as they face pressure to meet strict deadlines.
This problem could be mitigated by minimizing the time required to pack an order, which leaves more of the delivery window for riders and would even allow such quick-commerce companies to serve low-population-density regions in other countries.
Currently, Noon Minutes, for example, requires 2 minutes to pack and dispatch an order with a rider. A robot that can autonomously and efficiently pack these items reduces manual labour, removes the chance of human error, and shortens the packing time as much as possible.
We address the problem of designing and implementing an autonomous cart for inventory management and order collection that is capable of point-to-point navigation in a dynamic, constantly changing indoor environment without using camera-based vision or electronic compasses. The system must accurately detect the position of an object, compute the shortest path to reach it, and collect it carefully from the shelves. It must maintain positional accuracy and avoid obstacles while collecting the required items without human intervention.
OBJECTIVES
Developing a control algorithm that enables the mobile platform to navigate from a defined starting point to a target destination in an indoor environment without human intervention.
Integrating high-resolution wheel encoders to track the robot’s displacement and orientation, establishing a reliable "dead reckoning" baseline for motion control.
Creating a responsive control loop capable of detecting, identifying, and navigating around transient obstacles (such as moving individuals) while maintaining the robot's primary path.
EXISTING SYSTEMS
While autonomous carts for inventory management exist, most of them use Computer Vision based models or compass modules for navigation. Both of these solutions are resource-intensive or require extensive coding that may not be feasible for many small-scale shop owners. To make our project accessible and easy to maintain, we instead use encoder-based odometry together with ultrasonic sensors to detect and avoid obstacles and navigate to a target area.
High-end industrial Automated Guided Vehicles (AGVs) that utilize external infrastructure to maintain precise localization already exist and are under active research. These systems employ grid-based markers, such as RFID tags embedded in the floor or QR-code patterns placed on ceilings. When the robot passes over or under a marker, it retrieves a unique ID that provides an "absolute" coordinate fix, allowing the robot to reset its internal navigation data and eliminate any cumulative errors. While this approach offers high reliability, it requires significant investment in facility modification and maintenance, making it impractical for dynamic, unstructured environments.
There are also sophisticated indoor robots that utilize Simultaneous Localization and Mapping (SLAM). These systems typically integrate LiDAR (Light Detection and Ranging) or high-resolution depth cameras to create a real-time, high-fidelity 2D or 3D map of the environment. By continuously comparing incoming sensor data against this map, the robot determines its exact position within the room. While these systems are highly autonomous and effective in complex, cluttered spaces, they are computationally intensive and often rely on expensive sensor arrays that exceed the scope of a cost-effective, Arduino-based educational prototype.
The "Dead Reckoning" and "Wall-Referencing" approach closely aligns with, and served as inspiration for, our project. These robots rely on proprioceptive sensors (which measure a system's internal state—such as position, velocity, acceleration, and force—enabling self-awareness in robots and biological systems)—primarily wheel encoders—to track movement relative to a starting position. By using ultrasonic proximity sensors to measure distances to known architectural features (such as walls), the robot performs periodic "re-zeroing" of its coordinates. This approach does not require environmental modifications or expensive imaging hardware, instead prioritizing a compact, state-machine-based logic to maintain positional accuracy through clever, low-cost sensor integration.
SYSTEM OVERVIEW
The system operates through an integrated loop that maps inventory data to physical movement, enabling the robot to autonomously locate and retrieve items.
The robot begins by loading the inventory database. Each item is assigned a specific coordinate (x, y, z) within a 3D Cartesian grid stored in the system’s memory array.
Upon start-up, the robot calibrates its position at the origin point (0, 0, 20), with 20 cm being the height of the gripper arm from the ground.
The user inputs a list of requested items. The robot retrieves the corresponding coordinates and sorts them to optimize the path, ensuring the most efficient traversal through the workspace.
Navigation is handled by a differential drive system that converts spatial distance into motor-encoder ticks.
To ensure the robot maintains safe proximity to shelves, the navigation controller adds a 50cm safety buffer to the x-coordinate before executing the approach.
A critical design feature is the dynamic sensor re-mapping. As the robot executes a 90° turn, the system performs a coordinate frame shift. The sensors effectively "switch" their roles, ensuring that the sensor previously measuring the side distance now correctly calibrates the robot's orientation to the new wall.
An ultrasonic sensor mounted on the gripper assembly monitors the distance to the ground. This allows the arm to precisely extend to the target height—for instance, 50cm—to achieve contact with the shelf.
The gripper secures the item and deposits it into the integrated storage bin.
After the retrieval, the robot computes the shortest distance to the next item and follows the path in order to retrieve it.
MATERIALS USED
Our robotic cart is constructed on a durable chassis 3D-printed in PLA, designed with a three-layer structure to give adequate strength and stability while keeping the net weight low. It is equipped with four ultrasonic sensors for distance measurement and obstacle detection, enabling accurate environmental awareness. Movement is provided by four wheels driven by four DC motors, which are controlled via a motor driver to ensure precise movement and speed regulation. The entire system is managed by an ESP32 microcontroller, which coordinates sensor input and motor control efficiently. Power is supplied by a battery pack of three lithium-ion cells delivering approximately 14 V, and a switch allows easy control of the system's power supply.
MECHANICAL DESIGN
SOFTWARE AND PROGRAMMING
SYSTEM INTEGRATION
The entire platform is programmed using the Arduino IDE, which provides a flexible and user-friendly environment for writing, testing, and uploading code to the ESP32 microcontroller. The ESP32 serves as the central processing unit, where all sensor inputs and motor control commands are processed and executed. The code is structured to continuously read data from the ultrasonic sensors, measure distances, and make real-time decisions to control the direction and speed of the DC motors through the motor driver. Additionally, a complete circuit design has been developed and implemented to ensure proper connectivity between all components, including the sensors, motors, power supply, and control unit. This circuit design helps maintain stable power distribution and reliable signal transmission across the system. Overall, the integration of hardware and software enables smooth communication between components, allowing the robot to operate efficiently and respond dynamically to its surroundings.
RESULT AND PERFORMANCE EVALUATION
The results and performance evaluation of the system demonstrated that the robot operates reliably and effectively to detect and avoid obstacles. The ultrasonic sensors were able to measure distances with consistent accuracy, allowing the ESP32 to make timely decisions for navigation. The motor driver and DC motors responded well to control signals, resulting in smooth and stable movement across different surfaces. The three-layer PLA chassis provided sufficient structural and mechanical support, maintaining balance and durability during operation. Additionally, the power supply from the lithium-ion battery pack ensured consistent performance without significant voltage drops. Overall, the system showed efficient real-time responsiveness, good obstacle avoidance capability, and stable mobility, indicating that the design and integration of both hardware and software were successful.
ADVANTAGES & LIMITATIONS
Advantages
- By utilizing Encoder-Ultrasonic Sensor Fusion, the robot maintains sub-5cm positional accuracy. The "re-zeroing" technique ensures that even if the robot experiences minor wheel slip, it corrects its path against permanent architectural walls.
- The use of a State-Machine architecture ensures the robot's behavior is predictable. Unlike AI-driven models that can be difficult to debug, this rule-based system allows for easy troubleshooting and performance verification.
- The system uses standard, off-the-shelf microcontrollers and proximity sensors. This reduces the total project cost significantly compared to LiDAR-equipped AGVs while providing sufficient navigation capability for simple point-to-point inventory retrieval.
- The non-blocking safety loop—which continuously monitors for transient obstacles—ensures the robot can operate in shared spaces with humans without requiring complex vision-processing or cloud connectivity.
Limitations
- The navigation logic assumes the environment is largely static (walls do not move). If furniture or shelves are relocated, the "coordinate array" would require manual updates, as the system does not perform real-time SLAM (Simultaneous Localization and Mapping).
- The HC-05 Bluetooth module uses a short-range, non-encrypted serial connection. This is sufficient for a classroom prototype but would be vulnerable to interference or unauthorized access in a high-density industrial facility.
- Because the robot follows a pre-defined path based on coordinates, it cannot dynamically calculate an "optimal" route around new, permanent obstacles. It simply treats any non-wall obstacle as a transient object to wait for.
- The vertical lift assembly is designed for light items. Heavy inventory loads would increase the Center of Gravity (CoG) and potentially compromise the stability of the chassis during retrieval.
POTENTIAL IMPROVEMENTS
Adding a 6-axis gyro/accelerometer would allow the system to detect and correct for "fishtailing" or rotational drift during turns. The IMU provides an independent check on the wheel encoders, creating a tri-sensor fusion (Encoders + IMU + Ultrasonics).
While complex, upgrading to a 2D LiDAR sensor would allow the robot to build its own map. Instead of hard-coding coordinates into an array, the robot could explore the room and build a structural map of the shelving units autonomously.
The current system uses a static "shortest distance" calculation. Integrating an A* (A-Star) Pathfinding Algorithm would allow the robot to navigate complex mazes or obstacles by dynamically calculating the most efficient path around obstacles rather than just waiting for them to move.
While currently operating on pre-set coordinates, integrating a local NLP engine (e.g., VOSK) would allow for voice-based inventory queries ("Find the nearest box of screws") rather than relying on manual coordinate inputs.
Adding a load cell (force sensor) to the gripper would allow the robot to "feel" the item. This ensures that the robot doesn't accidentally crush fragile items or drop heavy ones due to a weak grip.
To make the robot truly autonomous for 24/7 warehouse operation, a "Return to Base" protocol should be added. The robot would track its battery voltage and, when low, navigate to a designated charging dock without human intervention.
Adding photovoltaic sensors and solar panels on top of our robotic cart offers a sophisticated leap towards operational autonomy. By utilizing the sensors to detect high-intensity light sources, the robot can autonomously navigate to sunlit areas during idle periods, transforming downtime into a proactive recharging phase. This reduces the reliance on manual tethering or frequent battery swaps, thus increasing the system’s endurance and making it ideal for long-term deployment in remote or outdoor environments.
By integrating kinetic energy harvesting through the robot’s wheels, we can make it a completely self-sustaining machine that can generate its own energy on the go. By utilizing regenerative braking or electromagnetic induction, the mechanical energy generated during movement can be converted into electrical energy to recharge the onboard batteries.
This dual-source approach ensures the robotic cart is constantly capturing energy: harvesting solar power while stationary in the sun and reclaiming kinetic energy while in transit, thus effectively maximizing operational uptime and minimizing the need for any external charging or batteries.
The robotic cart has strong potential to support sustainable development in the future. By contributing to more efficient and automated systems, it aligns with the goals set by the United Nations under Sustainable Development Goal 7, which focuses on affordable and clean energy, and Sustainable Development Goal 11, which promotes sustainable cities and communities. Such technologies can play an important role in building smarter, more energy efficient infrastructure for the future.
FUTURE SCOPE
While the current system calculates the shortest distance between coordinates, it lacks global path optimization. Future work will involve implementing the A* (A-Star) search algorithm which will allow the robot to dynamically navigate around complex architectural clusters or permanent obstacles, enabling it to compute the most power-efficient path rather than following a rigid, linear trajectory.
The existing architecture is designed for single-node operation. Future iterations will focus on a multi-agent communication protocol, where multiple OSCAR carts communicate their status via a centralized hub (using MQTT or Wi-Fi). This would enable "fleet management," where one robot could handle heavy inventory while another manages lightweight, high-frequency items, significantly increasing warehouse throughput.
To overcome the current dependency on a static, pre-defined coordinate array, future development will integrate Visual Odometry using low-power, lightweight cameras. By using algorithms like ORB-SLAM (Oriented FAST and Rotated BRIEF Simultaneous Localization and Mapping), the robot will be able to recognize its environment visually. This would remove the need for hard-coding coordinates, allowing the robot to "learn" the warehouse layout on the fly.
For true 24/7 autonomy, the robot requires a self-sustained energy loop. Future work will include the design of an autonomous docking station. By installing IR beacons on a charging dock, the robot can detect its battery level, navigate to the dock, and perform precise "reverse-alignment" to initiate contact charging, eliminating the need for human intervention to power the unit.
To expand the variety of inventory the robot can handle, the mechanical gripper will be upgraded from a basic servo-claw to a force-feedback gripper. By integrating load cells, the robot will be able to detect the weight and fragility of an object, adjusting its grip pressure automatically to prevent damage to sensitive items.
A future application of this project may include an automated luggage carrier for airports, wherein the robot would automatically scan RFID codes on luggage tags and sort checked-in luggage according to flight number. It would carry the checked-in luggage to the respective planes on the runway as well as load and unload them from the aircraft. This would significantly reduce manual labour, mitigate human error, and free up many workers for more productive tasks.
CONCLUSION
The development of the Odometric Sensor-based Cart for Autonomous Routing (OSCAR) demonstrates that sophisticated, high-precision robotics can be achieved through clever sensor fusion and structured logic rather than relying solely on expensive, proprietary hardware. By transitioning from a reliance on computer vision and external absolute-positioning systems to a model based on proprioceptive odometry and ultrasonic wall-referencing, this project has successfully established a scalable and cost-effective framework for autonomous indoor logistics.
The system’s core achievement lies in its deterministic State-Machine architecture. By decomposing complex navigation tasks into defined operational states, the robot can maintain orientation and positional accuracy even in dynamic, unstructured environments. The integration of dynamic frame-transformation logic ensures that the robot maintains its spatial awareness relative to its surroundings, regardless of its directional heading.
Furthermore, this project validates that a "blind" navigation system, when combined with temporal filtering and obstacle-clearance geometry, is capable of safe and reliable operation in shared human workspaces. While the prototype currently functions as a single-node system within a defined coordinate grid, the modular nature of its hardware and software provides a robust foundation for future expansions, such as fleet-based multi-agent coordination and autonomous energy management.
Ultimately, this project highlights the essential synergy between mechanical stability, intelligent firmware design, and low-level sensor fusion. It provides a transparent, modifiable, and educational platform that serves as a blueprint for modern mobile robotics, proving that precision autonomy is not just a function of sensor density, but a result of elegant, well-integrated system design.
FINAL OUTPUT
The autonomous obstacle-avoiding robot is designed to navigate safely in all directions using four ultrasonic sensors mounted at the front, back, left, and right. The robot continuously measures distances in each direction and moves forward when the path ahead is clear. If an obstacle is detected in front, it evaluates the left and right sides and turns toward the direction with more available space. If both sides are blocked, the robot checks behind and reverses only if it is safe to do so, preventing collisions with obstacles or people. In situations where all directions are obstructed, the robot stops completely to ensure safety. The motor control system allows independent speed adjustments for left and right wheels, enabling smooth forward, backward, and turning movements. Real-time sensor readings are printed via the serial monitor for monitoring and debugging, making the robot a reliable and responsive platform for safe navigation in dynamic environments.
REFERENCES
[1] Josh Coder, “Noon delivery riders know Dubai better than Google, says Chief of Staff”, Caterer Middle East, Available: https://www.caterermiddleeast.com/delivery/noon-delivery-rider-times#:~:text=Noon%20has%20a%2015%2Dminute,They%20are%20just%20delivering.%E2%80%9D
[2] noon, “noon Minutes: How we deliver in 15 minutes or less”, Available: https://www.linkedin.com/pulse/noon-minutes-how-we-deliver-15-less-nooncom#:~:text=Published%20Jun%204%2C%202023,'
[3] ScienceDirect, “Regenerative Braking”, ScienceDirect Topics, Available: https://www.sciencedirect.com/topics/engineering/regenerative-braking