The road to autonomy

As vehicles become increasingly autonomous, they need to understand precisely where they are, what objects surround them, and how those objects are likely to move. Situational awareness is built on sensor data: from ultrasonic proximity sensors and video cameras to lidar and long-range radar. Multiple sensors of the same type extend coverage around the vehicle, while sensors of different types contribute complementary strengths that increase accuracy in challenging driving conditions.

Enter deep learning

The almost infinite permutations of roadways, vehicles, obstacles and environmental conditions make safe driving too complex to program procedurally. Fortunately, advances in deep learning mean that systems can be trained on the road and with simulations so that they take appropriate steering, throttle and braking action based on sensor data.

So where’s my autonomous car?

Attaching vehicle sensors to a central deep neural network is both conceptually straightforward and attractive. So, where’s my autonomous car? Unfortunately, making autonomous systems safe and reliable in real-world conditions at mass-market prices presents serious technical and financial challenges. Invision.ai is built on the idea that pushing artificial intelligence to the edge makes sense for most applications. We believe this is particularly true for autonomous vehicles. Let’s take a look at how this approach plays out in practice.

Automotive AI

Sensor fusion

Training a single powerful deep learning system on raw sensor data streams introduces a combinatorial problem. Sensors will go offline due to defective hardware, software bugs, physical damage or environmental conditions, yet there are too many sensors to train the system on every possible combination of failures, so an unseen failure mode can cause the entire system to fail. Invision avoids this problem because classifiers, rather than raw data, are fused centrally. A low-confidence classifier (e.g. a video camera at night) or an offline sensor does not prevent the fusion of higher-confidence classifiers.
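A minimal Python sketch of this idea (the sensor names, the Detection type and the max-confidence fusion rule are illustrative assumptions, not Invision’s actual pipeline): when classifier outputs are fused, an offline sensor simply contributes nothing, rather than presenting the network with an input combination it was never trained on.

```python
from __future__ import annotations
from dataclasses import dataclass
from typing import Optional

@dataclass
class Detection:
    label: str         # e.g. "pedestrian"
    confidence: float  # classifier confidence in [0, 1]

def fuse(reports: dict[str, Optional[Detection]]) -> Optional[Detection]:
    """Combine per-sensor classifier outputs; offline sensors (None) are skipped."""
    live = [d for d in reports.values() if d is not None]
    if not live:
        return None  # every sensor offline: no opinion, but no system failure
    # Deliberately simple fusion rule for illustration: trust the most
    # confident classifier. A real fusion stage would be far richer.
    return max(live, key=lambda d: d.confidence)

reports = {
    "front_camera": Detection("pedestrian", 0.35),  # low confidence at night
    "front_radar": Detection("pedestrian", 0.90),
    "roof_lidar": None,                             # sensor offline
}
print(fuse(reports))
```

Note that the lidar going offline changes nothing structurally: `fuse` sees one fewer report, not a malformed input.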

Network traffic

To be useful for fast-moving vehicles, sensors need high resolution, long range, and high sampling/frame rates. This massive stream would quickly overwhelm standard data buses, necessitating costly dedicated high-speed connections to numerous sensors. By processing the raw data stream on the sensor, Invision.ai passes only lightweight metadata to be fused at a central location. This enables less expensive networking technology to provide sufficient bandwidth to integrate both today’s high-resolution sensors and tomorrow’s even more powerful ones.
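To see the scale of the difference, here is a back-of-envelope comparison. All figures are illustrative assumptions (an uncompressed 1080p RGB camera versus a bounding-box-style report of up to 100 detections per frame), not measured numbers from any particular sensor.

```python
# Illustrative arithmetic only: compare an uncompressed raw video stream
# with per-frame detection metadata. All figures are assumptions.
RAW_BYTES_PER_S = 1920 * 1080 * 3 * 30   # 1080p RGB at 30 fps, uncompressed
META_BYTES_PER_S = 100 * 64 * 30         # up to 100 detections/frame, ~64 bytes each

print(f"raw:       {RAW_BYTES_PER_S / 1e6:7.1f} MB/s")   # ~186.6 MB/s
print(f"metadata:  {META_BYTES_PER_S / 1e6:7.1f} MB/s")  # ~0.2 MB/s
print(f"reduction: ~{RAW_BYTES_PER_S // META_BYTES_PER_S}x")
```

Under these assumptions the metadata stream is roughly three orders of magnitude lighter, which is what allows commodity automotive networking to carry it.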

Explainable AI

Deep learning systems taking many raw feeds are the proverbial black box, providing few clues as to why a course of action was taken. This causes consternation for development engineers and does little to engender passenger trust. Attaching a confidence score to each classifier enables our software to explain its reasoning. For example, if multiple sensors report the presence of a cyclist at a specific location with high confidence, the system would discount lower-confidence reports that nothing, or a different object, occupies the same location.
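The cyclist example can be sketched as confidence-weighted voting over one spatial cell. This is a hypothetical simplification, not Invision’s algorithm: the point is that the winning label’s accumulated weight is itself the explanation.

```python
from collections import defaultdict

def resolve(reports):
    """reports: (label, confidence) pairs for one location.

    Returns the winning label and its normalised support, which doubles
    as a human-readable explanation of the decision.
    """
    score = defaultdict(float)
    for label, conf in reports:
        score[label] += conf
    winner = max(score, key=score.get)
    return winner, score[winner] / sum(score.values())

# Two high-confidence cyclist reports outweigh one low-confidence
# "nothing there" report at the same location.
reports = [("cyclist", 0.9), ("cyclist", 0.8), ("nothing", 0.2)]
print(resolve(reports))
```

An engineer reviewing a decision then sees not just “cyclist”, but which sensors voted for it and with what combined weight.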

V2X collaboration

DSRC and 5G will deliver reliable, low-latency vehicle-to-vehicle, vehicle-to-infrastructure and vehicle-to-cloud communication. Invision’s classifier fusion approach enables a vehicle’s sensor event horizon to be extended to include surrounding vehicles. Safety is enhanced as potential hazards can be detected more quickly, the projected motion of objects can be estimated more accurately, and the performance of onboard sensors can be correlated with those in other vehicles.
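Because what travels over the air is the same lightweight classifier metadata, extending the event horizon is conceptually just merging another vehicle’s reports into your own. The message fields, coordinate frame and 150 m horizon below are hypothetical choices for illustration only.

```python
from dataclasses import dataclass

@dataclass
class V2XDetection:
    label: str
    x: float          # position in a shared map frame, metres from ego vehicle
    y: float
    confidence: float
    source: str       # ID of the reporting vehicle

def merge(local, remote, horizon_m=150.0):
    """Keep local detections, plus remote ones beyond our own sensor range."""
    fused = list(local)
    for det in remote:
        if (det.x ** 2 + det.y ** 2) ** 0.5 > horizon_m:
            fused.append(det)  # beyond our sensors: pure gain from V2X
    return fused

local = [V2XDetection("car", 40.0, 2.0, 0.95, "ego")]
remote = [V2XDetection("stalled_truck", 220.0, -1.0, 0.88, "veh_17")]
print([d.label for d in merge(local, remote)])
```

The stalled truck at 220 m is invisible to the ego vehicle’s own sensors, but arrives as a ready-made classifier report from a vehicle ahead.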

Packaging

A centralized deep learning system needs to be powerful, likely requires active thermal management and introduces packaging challenges. Invision.ai deep learning systems are orders of magnitude more compute-efficient than the alternatives. This enables us to distribute processing across low-power generic processors that can be passively cooled and located almost anywhere on the vehicle. It also eliminates the guessing game of sizing central processing needs for new vehicle platforms.

Innovation

Rapid advances in sensor technologies such as solid-state lidar are set to continue as suppliers vie for a slice of the autonomous vehicle market. But tightly coupling raw sensor feeds to millions of miles of training data makes it expensive and time-consuming to add new sensor technology to a vehicle. By decoupling sensor hardware from the autonomous driving brain, suppliers are free to innovate while OEMs benefit from reduced cycle times and get one step closer to the modular, field-upgradable vehicle.

We can help

We are working on a variety of automotive applications: from obstacle classification in rear-view and surround video streams to sophisticated multi-band infrared and high resolution radar systems. We look forward to discussing your needs and exploring ways that we can help.