Scientists at the Massachusetts Institute of Technology (MIT) and Microsoft have developed an artificial intelligence (AI)-based technology to cover ‘blind spots’ in self-driving cars.

The new model leverages human inputs and helps self-driving cars avoid dangerous errors in the real world.

According to researchers, this model would enable engineers to enhance the safety of AI systems such as self-driving cars and autonomous robots.

As in traditional approaches, the scientists put an AI system through simulation training. However, a human closely monitors the system’s actions in the real world and gives feedback whenever the system makes, or is about to make, a mistake.


After this, the training data is combined with the human feedback data. Machine-learning techniques are then used to produce a model that precisely identifies the situations in which the system most likely needs more information about how to act correctly.
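The learning step described above can be illustrated with a minimal sketch. Everything here is an assumption for illustration, not the researchers' actual code: states are reduced to toy feature vectors, and the blind-spot model is approximated with a simple nearest-neighbour estimate over the human-labelled data.

```python
import math

# Hypothetical feedback data: each entry pairs a simplified state feature
# vector with a human label (1 = the human flagged the system's action in
# that state as a mistake, 0 = the action was acceptable).
feedback = [
    ([0.1, 0.9], 1),
    ([0.2, 0.8], 1),
    ([0.8, 0.2], 0),
    ([0.9, 0.1], 0),
]

def blind_spot_probability(state, k=3):
    """Estimate the probability that a state is a blind spot as the
    fraction of the k nearest human-labelled states flagged as mistakes."""
    dists = sorted((math.dist(state, s), label) for s, label in feedback)
    nearest = dists[:k]
    return sum(label for _, label in nearest) / len(nearest)
```

A state close to previously flagged states gets a high blind-spot probability, mirroring how the learned model generalises from human corrections to nearby situations.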

MIT Computer Science and Artificial Intelligence Laboratory graduate student Ramya Ramakrishnan said: “The model helps autonomous systems better know what they don’t know.

“Many times, when these systems are deployed, their trained simulations don’t match the real-world setting [and] they could make mistakes, such as getting into accidents. The idea is to use humans to bridge that gap between simulation and the real world, in a safe way, so we can reduce some of those errors.

“When the system is deployed into the real world, it can use this learned model to act more cautiously and intelligently. If the learned model predicts a state to be a blind spot with high probability, the system can query a human for the acceptable action, allowing for safer execution.”
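The deployment-time behaviour Ramakrishnan describes, acting cautiously and querying a human in likely blind spots, could be sketched as follows. The threshold, function names, and callbacks are illustrative assumptions, not part of the published system.

```python
# Assumed cutoff above which the system defers to a human.
BLIND_SPOT_THRESHOLD = 0.5

def choose_action(state, policy_action, blind_spot_probability, ask_human):
    """Execute the learned policy only when the state is unlikely to be a
    blind spot; otherwise query a human for the acceptable action."""
    if blind_spot_probability(state) >= BLIND_SPOT_THRESHOLD:
        return ask_human(state)    # defer to the human in likely blind spots
    return policy_action(state)    # otherwise act autonomously
```

In a likely blind spot the human's answer overrides the policy, which is what allows the system to avoid repeating mistakes its simulation training never covered.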