Inside an unassuming building on Georgia Tech's campus, a group of researchers is probing potential cyber security vulnerabilities in autonomous vehicles.
An entire room is dedicated to testing how humans might react if a driverless vehicle is hacked.
“We primarily look at the interface between the human, the vehicle and the infrastructure,” said Srinivas Peeta, who leads the university's Autonomous & Connected Transportation (ACT) laboratory.

Inside the lab is a full-sized vehicle surrounded by large screens. The car moves slightly to immerse the test subject in an eerily authentic environment where reactions are observed and recorded.
I participated in a simulation of a vehicle behaving strangely due to interference from hackers. A vehicle on the screen inside the car began moving erratically, and after cautiously keeping my distance behind it, I hit the accelerator and passed it.
All of my reactions were carefully scrutinised by the researchers, who measured everything from eye movements to how I rotated the steering wheel.

Concerns about cyber security and driverless cars are not without precedent.
Waymo, which has permission to operate dozens of its autonomous vehicles in parts of Atlanta, Georgia, and several other US cities, touches on cyber security on its website.
“Protecting the Waymo driver from malicious activity is paramount,” a section on the website reads.
“Waymo has developed a robust process to identify, prioritise, and mitigate cyber security threats in alignment with industry and government-defined security best practices.” It does not go into great detail about what the mitigation process involves.

The continuing research at Georgia Tech and elsewhere into the potential ramifications of autonomous vehicles being hacked is taking place as individuals try to test the limits of autonomous and driverless technology security.
During an appearance at the 2022 World Government Summit in Dubai, David Colombo, who was 19 at the time, told officials how he managed to hack 25 Teslas across 13 countries, urging governments to work more closely with the private sector to strengthen cyber security.
There have also been reports of others managing to hack Tesla vehicles and overcome their speed restrictions.
Widely documented attacks on keyless entry systems have also shown how vulnerable the technology is, with various demonstrations revealing how criminals can exploit its loopholes.
Yangjiao Chen, a doctoral student at Georgia Tech, said that although hacking is not necessarily high on the list of concerns when it comes to autonomous vehicles, it should not be overlooked.
“Connectivity is one of the basic attack vectors for hackers,” she explained.
Ms Chen added that the end result of a hack could be something as simple as a small traffic jam, or it could escalate into something far more consequential.

“It all depends on the different motivations … a hack could manipulate vehicles and cause collisions,” she said.
While the onus of cyber security currently rests with the autonomous vehicle manufacturers and regulators, Ms Chen said that her research looks at the potential roles that humans can play, even on a subconscious level.
“How will human drivers react to the anomalies when they're driving and other vehicles, autonomous vehicles, behave strangely?” Ms Chen said.
So far, her tests and research indicate that once humans are aware that other vehicles might have been hacked, they are significantly more likely to be cautious and keep a safe distance.
Ms Chen added that even without a warning provided before the tests begin, some results show that humans are able to adjust relatively well to autonomous vehicles behaving strangely.
“They can mostly feel the differences,” she said.

Yongyang Liu, another doctoral student at Georgia Tech who researches transport technology, said that as driverless vehicles become more common, there will be a transition period that might be uncomfortable for drivers.
“Autonomous vehicles and human-driven vehicles look the same at first glance, but humans have social intuitions and react to social cues,” he explained, adding that the disconnect between humans and autonomous vehicles sharing the road might cause crashes for a multitude of reasons.
Yet hope is far from lost, he said.
“Humans have an ability to learn, so we can make autonomous vehicles with increasingly human-centric designs so the driverless vehicles improve their interactions.”
Meanwhile, despite driverless technology making incredible progress in the last decade, polling indicates that humans are still uneasy about autonomous vehicles.

As of 2025, only 13 per cent of US drivers said they trusted self-driving vehicles, according to polling from the American Automobile Association – only a slight increase on the previous year's survey.
But there is some consensus among those polled, with 78 per cent supporting the idea of prioritising the advancement of autonomous vehicle safety systems.
Those numbers give credence to the work being done by Mr Peeta and his team at Georgia Tech.
“The trust is a fundamental issue, and the more trust we get, the more people will be comfortable with these vehicles,” he said.