Written by OurCrowd

It was once only the stuff of science fiction movies and futuristic cartoons. But there is now greater recognition and anticipation than ever that the launch of fully Autonomous Vehicles (AVs) will be as transformative and revolutionary as the assembly line was to manufacturing in 1913, and as innovative and competitive as the Space Race of the 1960s. Autonomous vehicles, sometimes referred to as driverless cars or self-driving cars, are becoming capable of reading and sensing their environment and operating without human input, moving us into a brave new world.

Billions of dollars are being invested by companies across the AV value chain, from traditional and Tier-1 automakers to innovative software startups, not to mention the tech giants, all with the goal of being first to market. More and more players are developing semi-autonomous and fully autonomous vehicles, including Google (Waymo), Intel, Apple, Huawei, Continental, General Motors, Volkswagen, BMW, Ford Motor Company, Baidu, Toyota, Tesla, Audi and Jaguar, among others, all trying to keep up with a changing marketplace. While we are clearly past the initial stage of ‘early adoption’ of the concept of AVs, consumers are watching closely to see when the industry will achieve the critical mass and safety standards needed to use AVs comfortably.

“[We are on the] verge of the most significant transformation since the introduction of the automobile,” U.S. Secretary of Transportation Elaine L. Chao told the Detroit Auto Show last year. “Automated or self-driving vehicles are about to change the way we travel and connect with one another.”[1] This is seen as the next major disruption in the auto industry, as well as a huge source of revenue growth. Intel predicts the driverless economy will grow to $800B by 2035 and reach $7T by 2050.[2]

So what’s holding back this industry breakthrough?

A safe form of mobility 

For AVs to move beyond proof-of-concept to mass production, they must be proven to be safe, dependable and affordable. Driven by immense advancements in Artificial Intelligence (AI), the advent of Autonomous Vehicles will offer a new form of mobility that will not just improve lives, but will save them too.

The World Economic Forum anticipates driverless vehicles will generate $1 trillion in “economic benefit to consumers and society” over the next 10 years, and that autonomous driving features will help prevent 9 percent of accidents by 2025, with the potential to save 900,000 lives in the next 10 years.[3]  Yet the highest barrier to autonomous vehicle deployment is consumer fear,[4] particularly in light of headline-grabbing accidents involving autonomous vehicles.[5]

“Clearly people like being in control of their cars, which is why at Mobileye (along with many car companies) we are taking a phased approach to deploying autonomous features,” former Mobileye executive Yonah Lloyd told OurCrowd in 2016. “With our system the driver can always take control of the wheel, or choose to let the car drive itself.[6]  Over time, people will likely become more comfortable with this experience, and society will appreciate the safety benefits, as accident numbers decrease.” As proven in many of the technological revolutions of the past decade, if the added value of a new technology is significant and its cost is in line, new industry standards will be adopted.

In addition to AI, many different technologies go into the autonomous vehicle technology stack, such as sensors and computational software and hardware, that enable an AV to perceive its environment, make sense of it all and decide how to act. By fusing real-time sensor data about its surroundings with an always-alert, automated, split-second decision-making mechanism, an AV will make roads safer for passengers and pedestrians alike. Significant impairments that affect human drivers, such as inebriation, distraction or exhaustion, will become a thing of the past.

What's the difference between a semi-autonomous and a fully autonomous vehicle?

According to Amir Freund, who served as a Ford Motors Israel executive (2015-2017), “self-driving cars employ some combination of sensors – cameras, radar, high-performance Global Navigation Satellite Systems (“GNSS”), and LiDAR (Light Detection and Ranging, a remote sensing method that uses light in the form of a pulsed laser to measure ranges), together with AI and machine learning to achieve their respective levels of autonomy.”  It is the combined coverage these sensors provide, and the ability to process their data in real time to make decisions, that determines a vehicle’s level of autonomy.

The Society of Automotive Engineers (SAE) has issued a standard that provides a coherent classification system for driving automation. It defines six levels, from “no automation” to “full automation,” distinguished by functionality, technology and the division of responsibility between vehicle and driver. The classifications are as follows:

Level 0: No Automation

Level 1: Vehicle performs minor steering or acceleration tasks; all other operations are under full human control.

Level 2: Vehicle can control steering and acceleration simultaneously, but the driver must remain alert and responsive.

Level 3: Vehicle performs all “safety-critical functions” under certain traffic or environmental conditions, with the driver on standby to intervene.

Level 4: Vehicle can operate without requiring human input within a defined operating domain.

Level 5: Vehicle operates with full automation in any environment (weather or traffic).[7]
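For readers who think in code, the six levels can be sketched as a simple enumeration. This is a toy illustration only; the constant names and the helper function below are our own shorthand, not part of the SAE standard.

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """The six SAE driving-automation levels (illustrative names)."""
    NO_AUTOMATION = 0           # human does everything
    DRIVER_ASSISTANCE = 1       # minor steering or acceleration tasks
    PARTIAL_AUTOMATION = 2      # combined control; driver stays alert
    CONDITIONAL_AUTOMATION = 3  # safety-critical functions in some conditions
    HIGH_AUTOMATION = 4         # no human input within a defined domain
    FULL_AUTOMATION = 5         # any environment, weather or traffic

def driver_must_monitor(level: SAELevel) -> bool:
    """Below Level 3, the human driver must supervise at all times."""
    return level < SAELevel.CONDITIONAL_AUTOMATION

print(driver_must_monitor(SAELevel.PARTIAL_AUTOMATION))  # True
print(driver_must_monitor(SAELevel.HIGH_AUTOMATION))     # False
```

The key practical boundary sits between Levels 2 and 3: below it the human is the fallback, above it the system is.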

Full autonomy, Level 5 technology, is projected to arrive by 2030 at the earliest.[8] 

Key functions autonomous cars must perform to ensure a safe ride

A variety of different sensor technologies are required to enable cars to “take control of the wheel.” It is the fusion of data from these sensor technologies that will make autonomous driving a reality. Sensors must enable detection and classification of objects in every weather and lighting condition, and their inputs must be ultra-reliable to ensure safety-critical functionality. Consequently, both sensor redundancy and sensor fusion are essential.[9]

Freund identifies six crucial functions that autonomous vehicles must be able to perform based on input from sensors.


  • Navigation
  • Localization (e.g., turning corners at a safe distance from the curb)
  • Obeying traffic rules and signs
  • Perfect braking (safety)
  • Obstacle avoidance, even when unexpected
  • Crossing intersections, especially when not all cars are autonomous

Today, navigation is largely provided by GPS, but its accuracy is insufficient for AVs. Autonomous vehicles also need to know their precise location, both for decision making and for path planning.[10] Many AVs rely on GPS signals for positioning, but these measurements can be off by as much as 1-2 meters, too large an error given that an entire bike lane is only about 1.2 meters wide on average.[11]  AVs must learn how to negotiate driving patterns involving both human drivers and other AVs, localizing themselves with a very high degree of accuracy.
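The scale of that positioning problem is easy to check with a little arithmetic, using only the figures cited above:

```python
gps_errors_m = (1.0, 2.0)  # typical GPS positioning error range cited above
bike_lane_m = 1.2          # average bike-lane width cited above

# Express each error as a fraction of a full bike lane: even the
# best case is most of a lane, and the worst case exceeds one entirely.
for err in gps_errors_m:
    print(f"{err} m error = {err / bike_lane_m:.0%} of a bike lane")
```

A vehicle that may be "anywhere within a bike lane or two" of its reported position clearly cannot plan a safe path on its own, which is why AVs fuse GPS with other localization sources.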

Sensors: The all-seeing AV eyes 

In the AV sector, sensors like radar, sonar, cameras and LiDAR are often combined to allow automobiles to perceive and react to the world around them.[12] They are considered the key elements in developing AVs, as well as current Advanced Driver Assistance Systems (“ADAS”) for enhanced safety and security.  According to Freund, solving these challenges requires not only significant upfront R&D but also long test and validation periods.

ADAS applications detect potentially dangerous situations and warn drivers,[13] with the goal of minimizing accidents. Among commonly used ADAS applications available today are automatic emergency braking, blind-spot detection, lane-change assist, vehicle-exit assist and pre-crash warning.[14] Current ADAS applications combine several sensor systems to perceive the car’s surroundings as accurately as possible.[15]

The four main groups of sensor systems in AVs are camera, radar, LiDAR and IR, and different car manufacturers utilize different combinations.


No single type of sensor can provide enough coverage, so data from multiple sensors needs to be fused and layered together; that said, all autonomous vehicle manufacturers use cameras as one of their main sensors. According to a 2017 McKinsey report, manufacturers take different approaches, each playing to the advantages of a different sensor type.

  • Camera over radar relies predominantly on camera systems, supplementing them with radar data.
  • Radar over camera relies primarily on radar sensors, supplementing them with information from cameras.
  • The hybrid approach combines LiDAR, radar, camera systems, and sensor-fusion algorithms to understand the environment at a more granular level.[16] 
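As a rough illustration of what “sensor fusion” means in practice, independent range estimates from different sensors can be combined by inverse-variance weighting, a standard way to give more trust to less noisy measurements. The sensor names and readings below are entirely made up for the sketch:

```python
def fuse_estimates(estimates):
    """Fuse independent distance estimates by inverse-variance weighting.

    `estimates` maps sensor name -> (distance_m, variance_m2).
    Sensors with lower variance (less noise) get more weight.
    """
    weights = {name: 1.0 / var for name, (_, var) in estimates.items()}
    total = sum(weights.values())
    return sum(w * estimates[name][0] for name, w in weights.items()) / total

# Hypothetical readings of the same obstacle (illustrative values only):
readings = {
    "camera": (52.0, 4.0),  # good resolution, noisier range estimate
    "radar":  (50.0, 1.0),  # precise range, poor object classification
    "lidar":  (51.0, 2.0),
}
print(round(fuse_estimates(readings), 2))  # fused estimate near the radar's
```

Real AV stacks fuse far richer data (object classes, velocities, full 3D maps), but the principle of weighting sensors by their reliability is the same.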

“Google (Waymo) and Ford are using a camera and LiDAR combination, while Tesla is relying on cameras mainly, with radar added, which is probably the original view of Mobileye,” Freund said. “GM is relying on radar more significantly.”  Improving and extending the range of current sensors is an ongoing challenge.

What is the "sweet spot" of sensor coverage? 

The optimal distance for clear detection of objects is 200 meters. Freund explains why: if an AV is merging onto a highway and a motorcycle is approaching at 120 km/h (roughly 75 mph), the merge takes about four or five seconds, and the motorcycle will travel close to 200 meters in that time. Factor in inclement weather (rain, fog or night), and the sensors must be capable of reliable detection at that range despite these obstacles.
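The back-of-the-envelope arithmetic behind that 200-meter figure is easy to verify; the merge times below are rough estimates in the spirit of the example above:

```python
speed_ms = 120 / 3.6  # 120 km/h converted to meters per second (~33.3 m/s)

# Distance an approaching motorcycle covers during plausible merge durations:
for merge_time_s in (4, 5, 6):
    print(f"{merge_time_s} s merge: motorcycle covers "
          f"{speed_ms * merge_time_s:.0f} m")
```

A 4-5 second merge gives 130-170 meters of closing distance, so a 200-meter detection range builds in a safety margin for slower merges or faster traffic.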

“Cameras have very good resolution but have range limitations, as well as difficulties with night imagery and glare from the sun. LiDAR has a detection range of up to 100 meters, and next year we’ll see LiDAR that can reach 150-180 meters,” according to Freund. “Thermal and IR cameras and radar are also used, and have a very long range but very low resolution, making it difficult to identify whether an object is a motorcycle or a truck from long distances.”

Israeli startup and OurCrowd portfolio company Arbe has developed proprietary chipset technology that it claims can overcome radar’s resolution limitation.  Arbe’s product is an end-to-end 4D (distance, height, depth and speed) ultra-high-resolution imaging radar system that can maintain detection accuracy at long, mid and short ranges under all weather and lighting conditions, providing a highly detailed image of the environment across a wide field of view.  The company intends to ship its beta system to its first customers, including OEMs and Tier-1 companies, this year.

Sensory perception and fusion: The brains of the AV 

The real-time data generated by an AV’s multiple sensor layers needs to be fused together so the vehicle can localize itself, relate to the different objects it detects and make decisions.

This data fusion and detection requires high processor capacity and efficiency, and size and cost are crucial factors for implementation into autonomous vehicles, according to Freund.  Major chip manufacturers like Nvidia, Huawei, Intel, and Qualcomm are competing with startups like OurCrowd-funded Hailo, as well as other venture-backed companies like ThinCI, Mythic, Cerebras Systems, and Horizon Robotics, in the multi-domain controller sector.

Hailo is developing a deep learning processor with the performance of a data center-class computer, operating in real time at minimal power consumption, size and cost.

Monetization and user experience 

Like the Space Race over 60 years ago, the Autonomous Vehicle market is driving technology innovation in a number of different spheres that have the potential to transform economies and improve lives.  The main overall challenge, according to Freund, is not to create an AV for its own sake, but to monetize it and provide a safer, more comfortable experience for passengers.

While it is an axiom for all new drivers to keep both hands on the wheel, as investment pours into these emerging technologies, it’s no longer hard to imagine a future, a few years from now, where all drivers might be twiddling their thumbs.

In the meantime, keep both hands near the wheel and your eyes on the road ahead.


[12] Elisabeth, Stéphane, and Malaquim, Cédric. 2018. “How Will Radar Sensor Technology Shape Cars of the Future.” Electronic Design, April 27. See link.

[13] Heineke, Kersten, Kampshoff, Philipp, Mkrtchyen, Armen, and Shao, Emily. 2017. “Self-Driving Car Technology: When Will the Robots Hit the Road?” McKinsey & Company, May. See link.

[14] Elisabeth, Stéphane, and Malaquim, Cédric. 2018. “How Will Radar Sensor Technology Shape Cars of the Future.” Electronic Design, April 27. See link.

[15] Elisabeth, Stéphane, and Malaquim, Cédric. 2018. “How Will Radar Sensor Technology Shape Cars of the Future.” Electronic Design, April 27. See link.


