Massachusetts Institute of Technology (MIT) and Toyota have released an innovative dataset to accelerate autonomous driving research.

The dataset, DriveSeg, includes precise, pixel-level representations of common road objects, captured through the lens of a continuous video driving scene.

Researchers from the AgeLab at the MIT Center for Transportation and Logistics and the Toyota Collaborative Safety Research Center (CSRC) will study whether computers can learn from past experience to recognise future patterns and safely navigate new and unpredictable situations.

Toyota CSRC senior principal engineer Rini Sherony said: “Predictive power is an important part of human intelligence.

“Whenever we drive, we are always tracking the movements of the environment around us to identify potential risks and make safer decisions.

“By sharing this dataset, we hope to accelerate research into autonomous driving systems and advanced safety features that are more attuned to the complexity of the environment around them.”

Sherony explained that video-based driving scene perception provides a flow of data that more closely resembles dynamic, real-world driving situations.

With the release of the dataset, MIT and Toyota aim to advance research into autonomous driving systems that, much like human perception, interpret the driving environment as a continuous flow of visual information.

MIT principal researcher Bryan Reimer said: “In sharing this dataset, we hope to encourage researchers, the industry, and other innovators to develop new insight and direction into temporal AI modelling that enables the next generation of assisted driving and automotive safety technologies.

“Our longstanding working relationship with Toyota CSRC has enabled our research efforts to impact future safety technologies.”

Last January, scientists at Massachusetts Institute of Technology (MIT) and Microsoft developed an artificial intelligence (AI)-based technology to cover ‘blind spots’ in self-driving cars.

The model leverages human input to help self-driving cars avoid dangerous errors in the real world.