Advanced Driver Assistance Systems

Combining a LIDAR sensor with the YADO-VR software results in 3D object detection, classification and tracking.

Dynamic LIDAR processing into 3D models and classification

A LIDAR-based perception system for ground robot mobility, providing 3D object detection, classification and tracking

Also have a look at our 3D Base Map section, which is essential for navigation.

RADAR

LIDAR

PASSIVE VISUAL

ULTRASONIC

Each individual sensor technology has its own strengths and weaknesses; combining the separate representations of reality from multiple sensors is essential to avoid both false positives and false negatives.
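
As a minimal illustration of this idea, an object can be required to be confirmed by more than one sensor before it is accepted. The sensor names, gating distance and threshold below are assumptions for the example, not the platform's actual fusion logic:

```python
# Minimal multi-sensor confirmation sketch (assumed sensors and thresholds).
from dataclasses import dataclass

@dataclass
class Detection:
    sensor: str       # e.g. "lidar", "radar", "camera", "ultrasonic"
    x: float          # position in the vehicle frame, meters
    y: float
    confidence: float

def fuse(detections, gate=1.5, min_sensors=2):
    """Keep an object only if at least `min_sensors` different sensors
    report a detection within `gate` meters of it."""
    confirmed = []
    for d in detections:
        supporters = {o.sensor for o in detections
                      if abs(o.x - d.x) < gate and abs(o.y - d.y) < gate}
        if len(supporters) >= min_sensors:
            confirmed.append(d)
    return confirmed

# The camera-only ghost at 40 m is rejected as a likely false positive,
# while the object seen by both LIDAR and RADAR is kept.
detections = [
    Detection("lidar", 12.0, 0.4, 0.9),
    Detection("radar", 12.3, 0.6, 0.8),
    Detection("camera", 40.0, -3.0, 0.5),
]
print(fuse(detections))
```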

The strength of the YADO-VR software platform lies in the automated detection and classification of LIDAR-observed objects, its hardware-agnostic approach and its ability to process data quickly using the patented YADO-VR algorithms. Our deep learning mechanisms allow an object library to be built up rapidly for use on board.
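
Such a pipeline can be pictured roughly as: cluster the LIDAR points belonging to one object, extract simple geometric features, and match them against the on-board object library. The sketch below uses invented bounding-box features and library entries purely for illustration; it is not the patented YADO-VR algorithm.

```python
# Schematic LIDAR object classification: cluster -> features -> library lookup.
# The feature choice and library values are illustrative assumptions only.
import numpy as np

OBJECT_LIBRARY = {
    # class name: (typical length, width, height) in meters, assumed values
    "car":        (4.5, 1.8, 1.5),
    "pedestrian": (0.5, 0.5, 1.7),
    "cyclist":    (1.8, 0.6, 1.7),
}

def bbox_features(points):
    """Axis-aligned bounding-box dimensions of a point cluster (N x 3 array)."""
    return points.max(axis=0) - points.min(axis=0)

def classify(points):
    """Match a cluster against the library by nearest bounding-box size."""
    dims = bbox_features(points)
    return min(OBJECT_LIBRARY,
               key=lambda name: np.linalg.norm(dims - np.array(OBJECT_LIBRARY[name])))

# Example: a roughly car-sized cluster of points is labelled "car".
cluster = np.random.rand(200, 3) * np.array([4.3, 1.7, 1.4])
print(classify(cluster))
```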

The different sensors explained:

LIDAR — a surveying technology that measures distance by illuminating a target with laser light. LIDAR is an acronym of Light Detection And Ranging (sometimes Light Imaging, Detection, And Ranging) and was originally created as a portmanteau of “light” and “radar.”

Radar — an object-detection system that uses radio waves to determine the range, angle, or velocity of objects.

Ultrasonic — an object detection system that emits ultrasonic sound waves and detects their return to determine distance.

Passive Visual — the use of passive cameras and sophisticated object detection algorithms to understand what is visible from the cameras.
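
All of the ranging sensors above rely on the same time-of-flight principle: the distance to an object is half the round-trip time of the emitted wave, multiplied by its propagation speed. A minimal illustration (constants rounded, the function name is our own):

```python
# Time-of-flight ranging, common to LIDAR, RADAR and ultrasonic sensors.
# Constants are rounded; this is an illustration, not a sensor driver.
SPEED_OF_LIGHT = 3.0e8   # m/s, for LIDAR and RADAR
SPEED_OF_SOUND = 343.0   # m/s in air at ~20 degrees C, for ultrasonic

def range_from_echo(round_trip_time_s, wave_speed):
    """Distance to the target is half the round trip, times the wave speed."""
    return wave_speed * round_trip_time_s / 2.0

# A LIDAR echo returning after 667 nanoseconds -> roughly 100 m.
print(range_from_echo(667e-9, SPEED_OF_LIGHT))   # ~100 m
# An ultrasonic echo returning after 12 milliseconds -> about 2 m.
print(range_from_echo(0.012, SPEED_OF_SOUND))    # ~2.06 m
```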

Deep learning and artificial intelligence will play a vital role in imitating human neural networks. Deep learning uses algorithms to analyze data and solve the problems that arise in the operation of autonomous vehicles.

The YADO-VR algorithms are more accurate at object recognition than humans and will, in turn, help open the roads to fully autonomous vehicles: they allow detection and classification of multiple objects, improve perception, reduce power consumption and enable identification and prediction of actions.

The most important input for self-driving vehicles is geo-data, and this data about the environment is gathered using multiple sensors. These sensors are used for mapping, localization and obstacle avoidance. The main sensor used to gather geo-data is the LIDAR, with ranges up to 100 meters.

The LIDAR is used to build 3D maps and allows the car to foresee potential hazards by bouncing laser beams off the surfaces surrounding the car to accurately determine the distance and profile of each object.
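
Each laser return becomes a point in the 3D map by converting the measured range and beam angles into Cartesian coordinates; a small sketch, assuming an x-forward, y-left, z-up sensor frame:

```python
# Convert one LIDAR return (range, azimuth, elevation) into a 3D point in the
# sensor frame. The axis convention is an assumption for this example.
import math

def lidar_return_to_point(range_m, azimuth_rad, elevation_rad):
    """Spherical-to-Cartesian conversion for a single laser return."""
    xy = range_m * math.cos(elevation_rad)
    return (
        xy * math.cos(azimuth_rad),        # x: forward
        xy * math.sin(azimuth_rad),        # y: left
        range_m * math.sin(elevation_rad)  # z: up
    )

# A return at 25 m, 10 degrees to the left, 2 degrees below the horizon.
print(lidar_return_to_point(25.0, math.radians(10), math.radians(-2)))
```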

While LIDAR is used to accurately map the surroundings, RADAR is used to monitor the speed of the surrounding vehicles and to avoid potential accidents, detours, traffic delays and other obstacles by signalling the on-board processor to apply the brakes or to move out of the way.
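
The speed measurement comes from the Doppler shift of the returned radio signal. A simplified calculation, using a 77 GHz carrier as a typical automotive example:

```python
# Radial velocity of a target from the Doppler shift of a RADAR return.
# 77 GHz is a common automotive RADAR carrier; the values are examples.
SPEED_OF_LIGHT = 3.0e8  # m/s

def radial_velocity(doppler_shift_hz, carrier_hz=77e9):
    """v = f_d * c / (2 * f_0); positive means the target is approaching."""
    return doppler_shift_hz * SPEED_OF_LIGHT / (2.0 * carrier_hz)

# A Doppler shift of about 5.13 kHz corresponds to roughly 10 m/s (36 km/h).
print(radial_velocity(5.13e3))
```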

Modern self-driving vehicles rely on both LIDAR and RADAR to cross-validate what is seen and how motion is predicted.

The base map is at the center of efficient and easy navigation for self-driving vehicles: even though the sensors on the car detect objects in real time, prior information is necessary to evaluate what exists.

High-precision base maps are built by leveraging aerial imagery, on-board sensors, vehicle-mounted mobile LIDAR and aerial LIDAR data, specifically for self-driving vehicle models and markets. The base map cannot be static; it needs to be updated reliably and regularly.