Autonomous Driving: Collision Avoidance and Decision Making in Smart Cars

By Len Wegner | August 14, 2017

Disruptive technologies, such as mobile-influenced applications, are rapidly changing our way of life and the way we look at everyday things. Today, smart homes, smart cities, smart factories and, of course, self-driving smart cars are becoming a reality, thanks to the revolutionary advancements in sensor-enabled technologies.

At the heart of these technologies is machine vision, the primary driver of advancements in self-learning, decision-making autonomous systems.

What Exactly Is a Machine Vision System?

Engineers and scientists are utilizing machine vision systems to make machines smarter. Common examples of machine vision include depth or distance measurement using stereo vision, facial or object recognition, factory-floor monitoring across the electromagnetic spectrum, and the collision avoidance systems deployed in new-generation smart cars.
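As a concrete illustration of the first of these examples, the short Python sketch below recovers depth from a rectified stereo pair using the classic pinhole relation Z = f·B/d. The focal length, baseline, and file names are illustrative assumptions, not values tied to any particular camera; this is a minimal sketch, not a production pipeline.

    import cv2
    import numpy as np

    # Illustrative camera parameters (assumptions, not from a real calibration)
    FOCAL_LENGTH_PX = 700.0   # focal length in pixels
    BASELINE_M = 0.12         # distance between the two cameras, in meters

    def depth_from_stereo(left_gray, right_gray):
        """Estimate a depth map (in meters) from a rectified 8-bit stereo pair."""
        # Block matching; numDisparities must be a multiple of 16
        matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
        # OpenCV returns fixed-point disparity scaled by 16
        disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
        disparity[disparity <= 0] = np.nan  # mask pixels with no valid match
        # Pinhole stereo relation: Z = f * B / d
        return FOCAL_LENGTH_PX * BASELINE_M / disparity

    # Usage with hypothetical file names:
    # left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
    # right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)
    # depth_m = depth_from_stereo(left, right)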

The processing unit integrated into a machine vision system is typically an advanced image-processing and decision-making unit. It is capable of making decisions independently on captured raw image data, and it can also perform a range of operations on the processed image, such as image stitching and filtering, white-balance correction, object edge detection, and HDR adjustment.
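To make one of those stages concrete, object edge detection often reduces to a pair of small convolutions. The Python sketch below applies Sobel filters with OpenCV; it is an assumption-level illustration of such a pipeline stage, not the firmware of any particular vision unit.

    import cv2
    import numpy as np

    def sobel_edges(gray):
        """Return an 8-bit edge-magnitude image from a grayscale input."""
        gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)  # horizontal gradient
        gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)  # vertical gradient
        magnitude = np.sqrt(gx ** 2 + gy ** 2)
        # Saturate back to 8 bits for display or downstream thresholding
        return cv2.convertScaleAbs(magnitude)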

Machine Vision in Automotive Applications

The advent and evolution of smart automotive technologies have significantly accelerated over the last few years. Rapid innovation is shaping fully autonomous, state-of-the-art smart cars driven by on-board sensors and autopilot systems. Machine vision plays a key role in these developments, enabling cars to learn and make decisions through integrated sensor technologies.

Machine vision systems in automotive applications are complex architectures, especially when deployed in safety-critical roles like ADAS or autonomous driving. This is because the integrated system has to perceive the environment much as a human driver does, in effect giving the car the ability to see, hear, and feel.

This technology enables cars to perform a host of operations on the go, such as gathering vital data, identifying hazards, and making crucial decisions. Machine vision, when coupled with machine learning, paves the way for fully autonomous smart cars.
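To give the decision-making side a concrete shape, a collision avoidance stage frequently boils range and closing-speed measurements down to a time-to-collision (TTC) check. The Python sketch below shows the idea under simple assumptions (constant closing speed and an illustrative 2-second braking threshold); a real system would fuse many sensors and apply far more sophisticated logic.

    def time_to_collision(range_m, closing_speed_mps):
        """Seconds until impact, assuming constant closing speed; inf if opening."""
        if closing_speed_mps <= 0:
            return float("inf")  # the gap is growing, so no collision course
        return range_m / closing_speed_mps

    def should_brake(range_m, closing_speed_mps, ttc_threshold_s=2.0):
        """Return True if emergency braking should begin (illustrative threshold)."""
        return time_to_collision(range_m, closing_speed_mps) < ttc_threshold_s

    # Example: an obstacle 25 m ahead, closing at 15 m/s -> TTC ~ 1.7 s -> brake
    assert should_brake(25.0, 15.0)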

Recent innovations in vision technology are presenting wide-ranging opportunities to expand the capabilities of automotive safety by incorporating features that are more responsive and intelligent than before. Nonetheless, the success of smart cars will depend heavily on the quality of the data generated and conveyed by the sensors installed in them. Auto manufacturers will have to combine smart sensors with advanced analytics to create genuinely effective smart cars.

Lattice Semiconductor's Cutting-Edge Smart Car Solutions

Lattice’s Embedded Vision Development Kit assists in the development of machine vision applications. The broad product portfolio offers flexible solutions expressly designed to address the diverse requirements of embedded vision engineers, such as emerging interface standards, hardware acceleration, and energy-efficient image signal processing.

Key Features of Lattice’s Embedded Vision Development Kit

The state-of-the-art Embedded Vision Development Kit integrates leading-edge devices, namely CrossLink™, ECP5™, and SiI1136, along with Sony IMX dual-camera-to-HDMI® bridging. The kit creates a seamless development platform for machine vision while offering a unique modular design. Prime features of Lattice’s Embedded Vision Development Kit include:

  • Freescale i.MX 6 Solo: an ARM Cortex-A9 at 1 GHz with 512 KB L2 cache, a MIPI CSI-2 interface, and PCIe 2.0 x1
  • ECP5-85K device: 85K LUTs, 365 I/Os, four high-speed SERDES input/output channels, and 400 MHz LPDDR3 memory support
  • External connectors for standard interfaces such as HDMI®, Gigabit Ethernet, USB OTG, camera, FMC, LVDS, and GPIO
  • Support for the OmniVision OV5640 camera; an SFP module enables camera expansion over GigE or fiber
  • ECP5’s high-performance DSP blocks can be used for image processing, object analytics, algorithm development, and machine learning (a sketch of this kind of workload follows this list)
  • Linux kernel with device drivers, applications/services, libraries, GNU tools (compilers, linkers, etc.), as well as deployment mechanisms
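As a host-side illustration of the kind of workload the ECP5’s DSP blocks would accelerate, the Python snippet below applies a 3x3 sharpening convolution, the same per-pixel multiply-accumulate pattern an FPGA would evaluate in hardware. The kernel values are illustrative assumptions, not a Lattice reference design, and mapping this onto the FPGA fabric is of course a separate exercise.

    import numpy as np
    from scipy.signal import convolve2d

    # Illustrative 3x3 sharpening kernel (an assumption for demonstration only)
    KERNEL = np.array([[ 0, -1,  0],
                       [-1,  5, -1],
                       [ 0, -1,  0]], dtype=np.float32)

    def sharpen(gray):
        """Convolve an 8-bit grayscale image with KERNEL and clamp to 8 bits."""
        out = convolve2d(gray.astype(np.float32), KERNEL, mode="same", boundary="symm")
        return np.clip(out, 0, 255).astype(np.uint8)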

Lattice’s Embedded Vision Development Kit, equipped with two built-in MIPI D-PHY interfaces, offers a highly optimized platform for critical applications like image sensor interface conversion and sensor aggregation.

Decision-Making and Collision Avoidance System Market is Witnessing Rapid Growth

Collision avoidance and decision-making systems are currently positioned to transform the automotive sector. With adoption expanding beyond top-of-the-line vehicles to small and mid-sized cars, the market outlook for such systems is extremely positive.

Ground-breaking R&D by leading market players, stringent legal and regulatory standards, increasing consumer awareness, and the rising demand for enhanced in-car security are all acting as strong growth drivers for the collision avoidance and decision-making systems market.

According to a January 2017 study by Grand View Research Inc., the global collision avoidance and decision-making systems market is estimated to reach $18.97 billion by 2025. New safety-agency ratings will drive the integration of anti-collision systems into mass-market models, strengthening the market and propelling growth over the next six to eight years.

Check out Lattice's Embedded Vision Development Kit now, and contact WPGA Intelligent Connectivity Solutions Director Len Wegner with product inquiries or questions.