US-based autonomous vehicle (AV) company Insight LiDAR has developed a new gesture-detection technology for AV light detection and ranging (LiDAR) systems.

The Insight 1600 is designed to assess the environment with greater sensitivity, using frequency modulated continuous wave (FMCW) technology.

It uses a low-power continuous wave of light, instead of high-power laser pulses, to sense its surroundings.

Combined with Insight’s highly sensitive FMCW architecture, the Insight 1600 captures enough pixels for AV software to detect and identify small, low-reflectivity objects at ranges beyond 200m.
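Because an FMCW LiDAR measures a continuous chirped wave rather than pulse round-trip time, each return carries both range and Doppler (velocity) information. The sketch below shows the textbook triangular-chirp maths for recovering the two quantities from the up-slope and down-slope beat frequencies; the wavelength and chirp parameters are illustrative assumptions, not Insight's published design.

```python
# Textbook FMCW range/velocity recovery from triangular-chirp beat
# frequencies. Illustrative only -- not Insight LiDAR's implementation.
C = 3.0e8             # speed of light, m/s
WAVELENGTH = 1.55e-6  # assumed telecom-band laser wavelength, m

def range_and_velocity(f_beat_up, f_beat_down, chirp_bandwidth, chirp_duration):
    """Recover target range (m) and radial velocity (m/s) from the beat
    frequencies (Hz) measured on the up- and down-slopes of a triangular
    frequency chirp of the given bandwidth (Hz) and slope duration (s)."""
    # The range delay shifts both slopes equally; Doppler shifts them
    # in opposite directions, so sum/difference separates the two.
    f_range = (f_beat_up + f_beat_down) / 2.0    # range-induced component
    f_doppler = (f_beat_down - f_beat_up) / 2.0  # Doppler-induced component
    rng = C * chirp_duration * f_range / (2.0 * chirp_bandwidth)
    vel = WAVELENGTH * f_doppler / 2.0           # positive = approaching
    return rng, vel
```

A low-power continuous wave suffices here because the measurement rides on beat-frequency detection against a local reference beam, which is what gives coherent FMCW designs their sensitivity at long range.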

The solution is claimed to be the industry’s first LiDAR to recognise pedestrian gestures.

Insight LiDAR Business Development vice-president Greg Smolka said: “When humans drive, we’re constantly scanning the environment around us.

“We’re watching for cars moving into our lane and looking at nearby pedestrians to see what they might do. For example, if a pedestrian looks both ways at an intersection, drivers understand that that person intends to cross the street.”

The new product combines high resolution with velocity detection, capabilities that AV perception teams can use to swiftly and accurately predict the actions of pedestrians.
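One way per-point velocity can feed intent prediction is by checking whether a pedestrian's point cluster is, on average, moving toward the sensor. The sketch below is a hypothetical illustration of that idea; the `Point` type, threshold, and function are assumptions for this example and not Insight's software.

```python
# Hypothetical use of per-point FMCW radial velocity to flag a pedestrian
# moving toward the vehicle. Types and thresholds are illustrative only.
from dataclasses import dataclass

@dataclass
class Point:
    x: float
    y: float
    z: float
    radial_velocity: float  # m/s, positive = moving toward the sensor

def pedestrian_approaching(cluster, v_threshold=0.5):
    """Return True if the point cluster's mean radial velocity indicates
    movement toward the sensor faster than v_threshold m/s."""
    if not cluster:
        return False
    mean_v = sum(p.radial_velocity for p in cluster) / len(cluster)
    return mean_v > v_threshold
```

Because FMCW returns velocity directly per pixel, a perception stack can make this kind of judgement from a single frame, rather than differencing object positions across several frames as pulsed-LiDAR pipelines typically must.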

Insight LiDAR CEO Michael Minneman said: “The Insight 1600, with its ultra-high resolution and advanced FMCW technology, opens the door to substantially better advanced driving-assistance systems (ADAS), as well as more capable and safer AVs.

“What’s key here is both the quality and the amount of data the Insight 1600 generates. More data makes the artificial intelligence (AI) easier and ultimately drives safety. As we drive, we’re all used to watching pedestrians to understand their intent. Now for the first time, LiDAR can do the same thing.”

Last January, Insight LiDAR announced what it claimed was the highest-resolution FMCW LiDAR sensor for autonomous vehicles.

The Digital Coherent LiDAR integrates critical technologies to detect low-reflectivity objects at ranges beyond 200m and puts 10 to 20 times more pixels on objects, allowing fast and accurate identification.