“Hackers get more and more sophisticated, and they only have to find one vulnerability to be successful.”
To Catch a Threat
A Changing Game
Cybersecurity has evolved over the years from focusing on blocking digital intruders at a main entry point to tracking their movements throughout a computer network if they break past security measures, said Dr. Kangkook Jee, assistant professor of computer science. “The game is changing right now,” he said. “You can’t just have a lock on the front door. You need security cameras inside the house.”

New threats continue to emerge. The increase in the number of people working from home due to the COVID-19 pandemic opened an untapped entry point for attackers, exposing weak security measures in some organizations. While all types of cyberattacks were up in 2020 and 2021, ransomware attacks made the biggest headlines.

“The game is changing right now. You can’t just have a lock on the front door. You need security cameras inside the house.”
Outsmarting Cyber Spies
As the Internet of Things – the vast network of connected devices, from smartwatches to home security systems – rapidly grows, so too does the potential for criminals to use the technology to spy, cause physical harm, or steal information for financial gain or to use as blackmail. The number of computing devices connected by software and sensors nearly doubled to more than 38 billion from 2018 to 2020, according to Juniper Research. The firm, which provides research and analysis to the global high-tech communications sector, predicts that by 2024 that number will surpass 83 billion.

In a study published in the September-October 2019 issue of the journal IEEE Security & Privacy, UT Dallas researchers tested home security systems, drones and children’s smart toys to demonstrate just a few of the many ways common personal devices can be hacked. The team found several different types of vulnerabilities, which they reported to the manufacturers. One of the most eye-opening examples involved a children’s toy. The stuffed animal contained a microphone through which an attacker could inject audio into the device and have conversations with the child, perhaps even telling the child to open the door to the home.
“If AI systems are attacked, there are going to be all kinds of crazy repercussions.”
Countering Attacks on Artificial Intelligence
The increasing use of artificial intelligence (AI) poses new types of security risks as well. Massive amounts of data are used to train the software that controls autonomous vehicles and other AI systems. Machine learning, an AI technique, involves feeding millions of real-life examples into a computer to teach a self-driving car, for example, how to respond to a stop sign. A subset of machine learning, called deep learning, analyzes layers of information, paving the way for AI to perform tasks such as evaluating mammograms to flag tumors. But what if online vandals access and tamper with the data?

“Artificial intelligence is affecting every aspect of our lives, from health care to finance to driving to managing the home,” Thuraisingham said. “Sophisticated machine-learning techniques with a focus on deep learning are being applied successfully to detect cancer, to make the best choices for investments and to determine the most suitable routes for driving, as well as efficiently managing the electricity in our homes.”

The threat of attacks on AI systems has fueled one of the hottest areas of cybersecurity research. “If AI systems are attacked, there are going to be all kinds of crazy repercussions,” Thuraisingham said. “Imagine financial organizations that depend on AI giving you messed up results and advice, or a medical provider giving the wrong diagnosis.” Current driver-assist technology also could be vulnerable. Consider the sensors used to alert drivers when it is unsafe to change lanes. “What if it doesn’t detect another vehicle, and the driver thinks it’s safe to change lanes?” she asked.

“We are looking at how … we can make machine-learning models more robust against … attacks.”
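To give a concrete sense of the kind of attack researchers defend against, here is a minimal sketch of an evasion attack on a toy linear classifier, in the spirit of the fast-gradient-sign method from the adversarial machine-learning literature. The weights, input and function names are illustrative inventions, not taken from any system described in this article:

```python
import numpy as np

# Toy linear classifier: predicts class 1 when w.x + b > 0.
# Weights are illustrative, not from any real model.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x):
    return int(w @ x + b > 0)

def fgsm_perturb(x, epsilon):
    """For a linear model the gradient of the score w.x + b with
    respect to the input x is simply w, so stepping against sign(w)
    lowers the score, flipping the prediction with a small, bounded
    change to each input feature."""
    push_score_down = predict(x) == 1
    direction = -np.sign(w) if push_score_down else np.sign(w)
    return x + epsilon * direction

x = np.array([2.0, 0.3, 0.2])        # clean input, classified as 1
x_adv = fgsm_perturb(x, epsilon=1.0) # slightly perturbed copy

print(predict(x), predict(x_adv))    # the perturbed input is misclassified
```

A per-feature change of at most epsilon is enough to flip the decision, which is why robustness research focuses on bounding how much a model's output can move under small input perturbations.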
Crook Sourcing
While the growing use of AI poses ever-changing cybersecurity threats, the technology also brings new ways to detect, disable and even learn from attacks. A team of researchers, including Dr. Kevin Hamlen, the Louis Beecherl Jr. Distinguished Professor of computer science, and Dr. Latifur Khan, professor of computer science, developed a cyberthreat detection system that uses AI to fight attacks. The method, called DEEP-Dig (DEcEPtion DIGging), ushers intruders into a decoy website so the computer can learn from hackers’ tactics. The information is then used to train the computer to recognize and stop future attacks.

DEEP-Dig advances a rapidly growing cybersecurity field known as deception technology, or “crook sourcing,” which involves setting traps for hackers. Researchers hope that the approach can be especially useful for defense organizations.

“There are criminals trying to attack our networks all the time, and normally we view that as a negative thing,” said Hamlen, who succeeded Thuraisingham as CSI executive director. “Instead of blocking them, maybe what we could be doing is viewing these attackers as a source of free labor. They’re providing us data about what malicious attacks look like. It’s a free source of highly prized data.”
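The core idea — divert attackers to a decoy rather than block them, and harvest their inputs as labeled training data — can be sketched in a few lines. This is only an illustration of the general deception-technology pattern, not the researchers’ actual DEEP-Dig system; the pattern list and function names are invented for the example:

```python
# Minimal sketch of deception-based data gathering: suspicious requests
# are routed to a decoy instead of being dropped, and each one is kept
# as a labeled example for training an attack classifier.

SUSPICIOUS_PATTERNS = ("../", "<script", "' OR 1=1")  # toy signatures

def looks_suspicious(request_path: str) -> bool:
    return any(p in request_path for p in SUSPICIOUS_PATTERNS)

training_data = []  # (raw request, label) pairs harvested from intruders

def route(request_path: str) -> str:
    if looks_suspicious(request_path):
        # Serve a decoy page instead of blocking; the attacker keeps
        # probing, and every probe becomes free labeled data.
        training_data.append((request_path, "malicious"))
        return "decoy"
    return "real"

route("/index.html")             # benign traffic reaches the real site
route("/search?q=' OR 1=1 --")   # the attacker is quietly diverted
print(training_data)
```

In a real deployment the toy signature list would be replaced by a learned detector, and the harvested examples would feed back into retraining it — the “free source of highly prized data” Hamlen describes.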
There are criminals trying to attack our networks all the time.