A group of researchers has developed a photorealistic simulator capable of creating highly realistic environments for training autonomous vehicles. Scientists at the Massachusetts Institute of Technology (MIT) Computer Science and Artificial Intelligence Laboratory (CSAIL) have released the VISTA 2.0 engine in open-source form, allowing other researchers to teach autonomous vehicles to drive themselves in real-world scenarios, without the limitations of real-world data sets.
The simulation engine developed by researchers at CSAIL, known as VISTA 2.0, is not the first hyper-realistic driving simulation trainer for AI. “Today, only companies have software with the type and capabilities of VISTA 2.0’s simulation environment, and this software is proprietary,” said Daniela Rus, MIT professor and CSAIL director.
“We are excited to release VISTA 2.0 to help the community collect their own datasets and transform them into virtual worlds,” said CSAIL PhD student Alexander Amini.
Rus added that with the release of VISTA 2.0, other researchers will finally have access to a powerful new tool for research and development of autonomous vehicles. Unlike other similar models, VISTA 2.0 has a distinct advantage: it is built with real-world data while still being photorealistic.
The team built on the foundations of their previous engine, VISTA, and mapped out photorealistic simulations using the data available to them. This allowed them to enjoy the benefits of real data points while also creating photorealistic simulations for more complex training.
The engine helped train autonomous-vehicle AI in a variety of complex scenarios, such as overtaking, following, negotiating, and multi-agent interactions, all in a photorealistic environment and in real time. The work showed immediate results: AVs trained using VISTA 2.0 were far more robust than those trained on earlier models that relied only on real-world data.