RobAIoT: Overview, Roadmap and Challenges of Robotics Integration with AI & IoT


The future belongs to connecting, collecting, and computing data. IoT connects devices and collects data, while AI performs intelligent computation on that data. Robotics stands as a platform for integrating IoT with AI. Here, we discuss a roadmap for building an AI-integrated IoT system through a prototype named “RobAIoT”, a fusion of IoT components, an AI model, and robotics.

RobAIoT is an IoT-enabled robotic car capable of performing object detection through deep learning models. Onboard sensors and actuators help RobAIoT sense ambient data and actuate motors according to the decisions made by the deep learning models, without human intervention.

In this phase, the primary decisions made by the car are turning right, turning left, or coming to a complete halt, based on detecting external sign boards through an onboard camera. In this blog, Archana Moturi and Bala Subrahmanya, Data Scientists at INSOFE, deep dive into the phases involved in the development of this car and also look at the challenges in integration and execution while maintaining optimum performance.

Phases of Development

The development can be divided into three phases:

  1. Hardware System Set-up
  2. Deep Learning Model Development
  3. System Integration

High-Level Architecture:

Fig 1: Basic RobAIoT Architecture

Phase 1: Hardware System Set-up

This phase primarily involves the IoT system setup, which includes setting up the hardware components and installing the necessary software.

The components used are:

    • Raspberry Pi
    • Pi Camera v1.3
    • Sensors
    • Motor Driver
    • DC Motors
    • LEDs
    • Car Chassis
    • Power supply and other components related to connections

Fig 2: Sensors and hardware of RobAIoT

In the case of RobAIoT, as the RPi should be capable of running an AI model, a Raspberry Pi with 32 GB of storage (microSD card) has been chosen.

In this phase, each IoT component is set up, and the necessary software packages, such as TensorFlow and OpenCV, are installed on the RPi along with their dependent libraries.
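As a quick sanity check after installation, both packages can be imported directly in Python. This is a minimal sketch, assuming TensorFlow and OpenCV were installed via pip on the RPi:

```python
# Post-installation sanity check on the RPi: if both imports succeed and
# print a version string, the core software stack is in place.
import tensorflow as tf
import cv2

print("TensorFlow version:", tf.__version__)
print("OpenCV version:", cv2.__version__)
```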

The detailed steps of setting up the IoT system and installing the software on the RPi will be discussed in a future blog post. Stay tuned!

Phase 2: Development of a Custom Object Detection Model with a Novel Data Collection and Annotation Approach

A custom object detection model is developed to detect right/left/stop signs. The following are the high-level steps to develop a custom object detection model:

    • Data collection: real images of various traffic signs involving the right, left, and stop signs.
    • Data labeling: annotating each image with a bounding box, object label, and image size.
    • Generation of the input-specific record format (TFRecord, in the case of the TensorFlow API; see the sketch after this list).
    • Identification of the object detection architecture and Convolutional Neural Network model.
    • Model development, training, and testing.
    • Model implementation and deployment.
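To illustrate the record-generation step, below is a minimal sketch of writing one annotated image as a TFRecord example, following the TensorFlow Object Detection API's standard feature schema. The file names, image size, box coordinates, and label IDs are illustrative assumptions:

```python
# Minimal sketch: serialize one annotated image into a TFRecord file using
# the feature keys expected by the TensorFlow Object Detection API
# (TF 1.14+ / TF 2.x).
import tensorflow as tf

def _bytes(value):
    return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))

def _floats(values):
    return tf.train.Feature(float_list=tf.train.FloatList(value=values))

def _ints(values):
    return tf.train.Feature(int64_list=tf.train.Int64List(value=values))

def make_example(image_bytes, width, height,
                 xmins, xmaxs, ymins, ymaxs, class_texts, class_ids):
    # Box coordinates are normalized to [0, 1], as the API expects.
    return tf.train.Example(features=tf.train.Features(feature={
        'image/encoded': _bytes(image_bytes),
        'image/format': _bytes(b'jpeg'),
        'image/width': _ints([width]),
        'image/height': _ints([height]),
        'image/object/bbox/xmin': _floats(xmins),
        'image/object/bbox/xmax': _floats(xmaxs),
        'image/object/bbox/ymin': _floats(ymins),
        'image/object/bbox/ymax': _floats(ymaxs),
        'image/object/class/text': tf.train.Feature(
            bytes_list=tf.train.BytesList(value=class_texts)),
        'image/object/class/label': _ints(class_ids),
    }))

with tf.io.TFRecordWriter('train.record') as writer:
    with open('right_sign_001.jpg', 'rb') as f:       # illustrative file name
        example = make_example(f.read(), 640, 480,
                               [0.30], [0.70], [0.25], [0.75],
                               [b'right'], [1])        # label map: right -> 1
        writer.write(example.SerializeToString())
```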

Here, a custom approach is used in the data collection and data annotation steps. A flexible and unique approach is developed such that the data simulates the real-time environment and can be scaled easily. This automated method eliminates the need for manual annotation of labels, which is the most time-consuming step in custom object detection model development. It also resulted in higher accuracy in the detection of both bounding boxes and labels.

A separate article about the novel data annotation approach and the development of the custom object detection model will be published soon. Stay tuned!

The TensorFlow Object Detection API [1] is used for model development. The detection model is trained on the following object detection architectures, on a computer with an NVIDIA GTX 1080 Ti GPU and 16 GB of RAM:

    • SSD_mobilenet
    • Faster_RCNN_Resnet50

The reason for choosing the above models is essentially to trade off speed against accuracy. Detailed speed-versus-accuracy metrics for various object detection architectures are presented in the TensorFlow detection model zoo [2].

Weights pre-trained on the COCO image dataset are chosen for weight initialization. mAP (mean Average Precision) is chosen as the metric for evaluating model performance, since it balances both bounding-box accuracy and label-prediction accuracy. Various IoU thresholds were tested, but the results did not differ much; hence, the architecture's default IoU threshold is used.
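For reference, IoU (Intersection over Union) is simply the overlap area of the predicted and ground-truth boxes divided by the area of their union. A minimal sketch with illustrative box coordinates:

```python
# IoU between two axis-aligned boxes given as (xmin, ymin, xmax, ymax),
# with coordinates normalized to [0, 1].
def iou(box_a, box_b):
    xa, ya = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    xb, yb = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, xb - xa) * max(0.0, yb - ya)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

print(iou((0.30, 0.25, 0.70, 0.75), (0.35, 0.30, 0.75, 0.80)))  # ~0.65
```

A prediction counts as correct only when its IoU with the ground-truth box exceeds the chosen threshold, which is why the threshold affects the reported mAP.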

Below are the observations after training both model architectures:

Metric             | SSD_Mobilenet | Faster_RCNN_Resnet50
Loss               | 0.25          | 0.09
Convergence steps  | 3,842         | 1,200
Time taken         | 0.35 s/step   | 0.15 s/step
Accuracy           | Accurate in most cases, but the right direction is predicted incorrectly in some specific scenarios | Highly accurate in detecting both labels and boxes
Speed of detection | Fast          | Slow; a noticeable lag is observed

As the objective is to run the model on the RPi, the SSD model is preferred over Faster R-CNN to minimize the lag in detection. The inference graph of the trained model is generated, then deployed on the RPi and used for the detection of left/right/stop signs.
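Below is a minimal sketch of loading the exported inference graph and running detection on a single frame, using the standard input/output tensor names exposed by graphs exported with the TensorFlow Object Detection API (TF 1.x style). The graph path, test image, label-map IDs, and confidence threshold are illustrative assumptions:

```python
# Load a frozen inference graph on the RPi and run detection on one frame.
import numpy as np
import tensorflow as tf
import cv2

GRAPH_PATH = 'frozen_inference_graph.pb'      # exported at the end of Phase 2
LABELS = {1: 'right', 2: 'left', 3: 'stop'}   # assumed label map

graph = tf.Graph()
with graph.as_default():
    graph_def = tf.compat.v1.GraphDef()
    with tf.io.gfile.GFile(GRAPH_PATH, 'rb') as f:
        graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name='')

with tf.compat.v1.Session(graph=graph) as sess:
    frame = cv2.imread('test_frame.jpg')               # or a PiCamera capture
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)       # model expects RGB
    boxes, scores, classes = sess.run(
        ['detection_boxes:0', 'detection_scores:0', 'detection_classes:0'],
        feed_dict={'image_tensor:0': np.expand_dims(rgb, axis=0)})
    if scores[0][0] > 0.5:                             # confidence threshold
        print('Detected:', LABELS[int(classes[0][0])])
```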

Phase 3: Integration of the Object Detection Model into the IoT System

The following are the activities involved in this phase (a motor-actuation sketch follows the list):

    • Setup of the necessary object detection libraries and their dependencies on the RPi.
    • Modification of the object detection model's dependency code to work on the RPi.
    • Hardware integration of the RPi with sensors, camera, and motor driver.
    • Integration of the RPi and the object detection code.
    • PiCamera module setup.
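To give a feel for how detections drive actuation, here is a minimal sketch that maps a detected sign label to motor-driver outputs via RPi.GPIO. The pin numbers, wiring, and turn-by-driving-one-side strategy are illustrative assumptions, not the exact RobAIoT implementation:

```python
# Map a detected sign label to motor-driver outputs (differential drive).
import time
import RPi.GPIO as GPIO

LEFT_MOTOR, RIGHT_MOTOR = 17, 27   # assumed BCM pins wired to the motor driver

GPIO.setmode(GPIO.BCM)
GPIO.setup([LEFT_MOTOR, RIGHT_MOTOR], GPIO.OUT)

def act_on(label):
    # Turn by driving only one side; stop by switching both motors off.
    if label == 'right':
        GPIO.output(LEFT_MOTOR, GPIO.HIGH)
        GPIO.output(RIGHT_MOTOR, GPIO.LOW)
    elif label == 'left':
        GPIO.output(LEFT_MOTOR, GPIO.LOW)
        GPIO.output(RIGHT_MOTOR, GPIO.HIGH)
    elif label == 'stop':
        GPIO.output(LEFT_MOTOR, GPIO.LOW)
        GPIO.output(RIGHT_MOTOR, GPIO.LOW)

act_on('right')      # e.g., after the model detects a right-turn sign
time.sleep(1.0)
act_on('stop')
GPIO.cleanup()
```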

< Here’s a video of our working demo >

An article discussing the detailed steps of the integration and the challenges faced along the way will be published soon. Stay tuned!

References:

  1. https://github.com/tensorflow/models/tree/master/research/object_detection
  2. https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md
  3. https://github.com/EdjeElectronics/TensorFlow-Object-Detection-API-Tutorial-Train-Multiple-Objects-Windows-10
