NavLab - The Self-Driving Car of the '80s
By: Thomas Kay
Self-driving cars are becoming a reality on our streets, and they come with the promise of revolutionizing the way we live. When we discuss self-driving cars, we talk about industry giants like Google or Uber. You rarely hear the name NavLab, one of the first autonomous vehicles, built at Carnegie Mellon University.
In 1984, the Carnegie Mellon University (CMU) Robotics Institute started the NavLab project, with the mission of using computer vision to achieve autonomous navigation. The work was funded by the Defense Advanced Research Projects Agency (DARPA) as part of its Autonomous Land Vehicle (ALV) program.
When the NavLab project started, the technology of the day only allowed for indoor robots, tethered to their computers by long cables. Even with the best computers, the robots were slow decision makers. The Stanford Cart, one of the early examples, paused for about 15 minutes between moves to process images and plan its route, then crept forward in one-meter spurts.
The Terregator was the NavLab project's first venture into the outdoor environment with a robot. Simply moving continuously at a few centimeters per second while avoiding obstacles counted as an advancement. Instead of one image at a time, it processed a number of images together, and additional sensors gave it better obstacle avoidance. The robot was still not self-contained, though: a long cable ran from the processing computers to the moving machine. From 1984 to 1992, the Terregator served as a testbed for the innovations to come, laying down the blueprint for technology such as the driverless Uber.
Built in the summer of 1986, NavLab 1 was the first self-contained vehicle: a blue Chevy van that could carry generators, computers, sensors, and even a few graduate students. The cost of this setup was an estimated $1 million. Researchers riding onboard could monitor navigation problems in real time, which meant better debugging of the navigation programs. NavLab 1 eventually achieved autonomous driving at 20 mph.
The next iteration was NavLab 2, a converted Army ambulance Humvee. It carried all the sensors of NavLab 1 plus additional cameras, could navigate rough terrain, and drove at 70 mph on the highway. Its software was built from complex modules designed to work with each other seamlessly. ALVINN, a neural network, learned how to drive on the streets. RANGER handled trails through rugged terrain. GANESHA kept track of obstacles and helped with tasks like parallel parking. D-star created maps and paths. STRIPE allowed remote-controlled navigation of the vehicle. The Distributed Architecture for Mobile Navigation (DAMN) was the central controller, ensuring harmony between the modules.
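To get a feel for how a DAMN-style arbiter keeps modules like these in harmony, here is a minimal sketch in Python. It assumes a simplified voting scheme in which each behavior scores a set of candidate steering arcs and a weighted tally picks the winner; the behavior names, weights, and scoring rules here are illustrative assumptions, not CMU's actual code.

```python
# Minimal sketch of a DAMN-style command arbiter (illustrative only).
# Assumption: each behavior votes on candidate steering curvatures in
# [-1, 1]; the arbiter sums weighted votes and picks the best arc.

CANDIDATE_ARCS = [i / 10.0 for i in range(-10, 11)]  # hypothetical steering options

def obstacle_avoidance(arc):
    """Pretend module: strongly dislikes arcs that steer toward an obstacle on the right."""
    return -1.0 if arc > 0.5 else 0.5

def road_following(arc):
    """Pretend module: prefers gentle arcs that keep the vehicle near the lane center."""
    return 1.0 - abs(arc - 0.1)

BEHAVIORS = [
    (obstacle_avoidance, 2.0),  # safety-related behaviors get a higher weight
    (road_following, 1.0),
]

def arbitrate(behaviors, arcs):
    """Return the arc with the highest weighted vote across all behaviors."""
    def total_vote(arc):
        return sum(weight * vote(arc) for vote, weight in behaviors)
    return max(arcs, key=total_vote)

if __name__ == "__main__":
    chosen = arbitrate(BEHAVIORS, CANDIDATE_ARCS)
    print(f"Chosen steering curvature: {chosen:+.1f}")
```

The idea is that no single module drives the vehicle; each one simply casts votes, and the arbiter blends them into a single command, which is roughly the role DAMN played for NavLab 2.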
The early vehicles did not have today's sensor sensitivity, processing power, storage capacity, or network connectivity to help them navigate. Even so, they achieved operational speeds on highways and rough terrain. Since 1984, there have been 11 generations of NavLab vehicles, and CMU has filed more than 140 invention disclosures. A number of engineers who worked on the CMU NavLab project went on to Google and other companies and contributed to the development of today's self-driving cars.
So, when you get your first self-driving car, remember there was once a smart car called NavLab that paved the way.