Chinese autonomous driving start-up Pony.ai has debuted its first product-ready PonyAlpha autonomous vehicle system, marking a significant milestone for the autonomous driving industry in the country.

The new self-driving system features increased sensor coverage provided by additional LiDARs, radars, and cameras.

To support these additional sensors, the system pairs an optimised hardware platform with a tightly integrated full-stack software system.

Overall, the full sensor suite can see to a range of approximately 200m, supporting a wider variety of driving scenarios at higher speeds.

Pony.ai’s sensor fusion technology can intelligently use the most reliable data from multiple sensors depending on the environmental or driving scenario. It offers a complete and accurate understanding of the surrounding environment while keeping on-vehicle processing efficient.
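The idea of favouring the most reliable sensor per scenario can be sketched as confidence-weighted fusion. This is an illustrative sketch only, not Pony.ai's actual algorithm; the sensor names, confidence scores, and the `fuse_estimates` helper are hypothetical:

```python
# Illustrative confidence-weighted sensor fusion (hypothetical example,
# not Pony.ai's implementation): each sensor reports a value (e.g. a
# distance to an object in metres) plus a confidence score, and the
# fused estimate weights each reading by its confidence.

def fuse_estimates(readings):
    """Combine per-sensor (value, confidence) pairs into one estimate.

    readings: dict mapping sensor name -> (value, confidence in [0, 1]).
    Returns the confidence-weighted average of the values.
    """
    total_weight = sum(conf for _, conf in readings.values())
    if total_weight == 0:
        raise ValueError("no usable sensor readings")
    return sum(val * conf for val, conf in readings.values()) / total_weight

# In fog, for example, the camera's confidence might drop, so the
# LiDAR and radar readings dominate the fused distance estimate.
readings = {
    "lidar":  (48.2, 0.9),
    "radar":  (47.5, 0.8),
    "camera": (55.0, 0.1),
}
fused = fuse_estimates(readings)
```

Weighting by confidence rather than hard-switching between sensors lets the estimate degrade gracefully as any one sensor becomes less trustworthy.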


Pony.ai co-founder and CEO James Peng said: “I am incredibly proud of what we have accomplished. In less than two years, we are now announcing our third-generation self-driving system.

“PonyAlpha is a product of truly extensive testing in challenging environments, culminating in a safer and more stable system that we look forward to sharing with all of you.”

In addition to launching PonyAlpha, the Pony.ai team has been working on numerous other critical fleet capabilities such as the Pony.ai Vehicle Control Center and its in-vehicle interface PonyHI.

The Pony.ai Vehicle Control Center supports centralised management, tracking, and dispatching of all vehicles in its fleet.

PonyHI provides passengers with a direct window into the brain of the vehicle, letting them follow its every move and decision as it identifies, classifies, and plans based on its surrounding environment.

Passengers can see road agents such as vehicles, cyclists, or pedestrians, as well as transportation infrastructure such as road signs or lane markings on the screen.

As these objects appear and move on the screen, passengers can also observe the ‘driver’ take accurate and timely action, an approach intended to establish transparency and trust between human and machine.