Cybersecurity firms are using AI and machine learning to prevent attacks — but what’s to stop criminals using these technologies for ill?
Despite spending more on security than ever, organisations struggling with a widespread cybersecurity skills gap are often told that technologies like big data, analytics, machine learning, and artificial intelligence can help them protect their data and critical infrastructure from attackers.
Organisations ranging from startups to large established corporations are investing in building AI systems that bolster defences by analysing vast amounts of data, helping cybersecurity professionals identify far more threats than they could if left to do it manually.
But the same technologies that improve corporate defences could also be used to attack them.
Take phishing. It’s the simplest method of cyberattack available — and there are schemes on the dark web which put all the tools required to go phishing into anyone’s hands. It’s simply a case of taking an email address, scraping some publicly available personal data to make the phishing email seem convincing, then sending it to the victim and waiting for them to bite. That could become even more effective if AI is added.
“Spear phishing is going to become really, really good when machine learning is incorporated into it on the attacking side,” says Dave Palmer, director of technology at Darktrace, a cybersecurity firm which deploys machine learning in its technology.
The machine learning algorithms wouldn't even need to be very advanced; relatively simple sequence-to-sequence machine learning could be installed on an infected device to monitor the emails and conversations of a compromised victim. After a period of monitoring, the AI could tailor phishing messages that mimic the victim's writing style to particular contacts in their address book, in order to convince them to click on a malicious link.
“If I were emailing someone outside the company, I’d probably be polite and formal, but if I were emailing a close colleague, I’d be more jokey as I email them all the time. Maybe I’d sign off my emails to them in a certain way. That would all be easily replicated by machine learning and it’s not hard to envision an email mimicking my style with a malicious attachment,” Palmer explains.
“It’s come from me, it sounds like me, it talks about the things we usually talk about. I expect they’d open it,” he adds.
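The style-mimicry Palmer describes doesn't require cutting-edge models. As a crude, hypothetical illustration (the messages and names below are invented), even a first-order Markov chain trained on a victim's sent mail picks up their characteristic phrasings and sign-offs; a real sequence-to-sequence model would do far better, but the principle is the same:

```python
import random
from collections import defaultdict

# Hypothetical corpus standing in for a compromised victim's sent mail.
sent_messages = [
    "hey mate, quick one, can you check the deck before the standup? cheers, dave",
    "hey mate, fancy a coffee after the standup? cheers, dave",
    "quick one, can you send me the budget sheet? cheers, dave",
]

def train(messages):
    # Map each word to the words observed to follow it.
    model = defaultdict(list)
    for msg in messages:
        words = msg.split()
        for a, b in zip(words, words[1:]):
            model[a].append(b)
    return model

def generate(model, start, max_words=12):
    # Walk the chain, sampling each next word from observed successors.
    word, out = start, [start]
    while word in model and len(out) < max_words:
        word = random.choice(model[word])
        out.append(word)
    return " ".join(out)

model = train(sent_messages)
print(generate(model, "hey"))  # e.g. "hey mate, quick one, can you check ..."
```

The output reuses the victim's own greetings and sign-offs, which is precisely what makes a message "sound like" its supposed sender.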
It isn’t just emails that artificial intelligence could monitor and learn from; the increasingly public nature of social media profiles, photo-sharing accounts, video streaming and even online shopping accounts could put them all in the cross-hairs of malicious machine learning algorithms. And the more information available to sift through, the easier it would be for an AI to learn a victim's behaviours and habits and exploit them to steal data, or even whole accounts.
“Imagine if it could predict our likely answers to security questions in order to reset passwords for us automatically to hijack accounts without having to steal the data from the source,” says Jonathan Sander, VP of product strategy at Lieberman Software, a security management firm. “Imagine if it could even text us and pretend to be our kid asking for the Netflix password because they forgot it.”
Much like its human criminal counterpart, an AI with the right information about a target could ultimately trick them into clicking anything or handing over any data desired. Using AI to sift through that information means more people could be targeted in a much shorter space of time.
More targets means more victims, putting more individuals — and the organisations they work for — at risk of having data stolen, which in turn puts more information into the public domain to be exploited.
While phishing for data is a simple but effective attack, artificial intelligence could also keep the developers of malware and ransomware one step ahead of those attempting to shut them down, by continually altering the malicious code to avoid detection or equipping it with more effective means of attack.
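To see why automated code alteration troubles defenders, consider how a simple hash-based signature works. This is a minimal, benign sketch (the "payload" is just a placeholder string, not real malware): appending a single semantically inert byte sequence changes the file's hash completely while leaving its behaviour untouched, so an attacker who can generate variants automatically outpaces signature writers.

```python
import hashlib

# A stand-in for any program an antivirus product might fingerprint.
payload = b"print('hello')"

# A hash-based "signature" of the original payload.
signature = hashlib.sha256(payload).hexdigest()

# Appending junk that never executes changes the hash entirely,
# while the program's behaviour is identical.
variant = payload + b"\n# inert padding"

assert hashlib.sha256(variant).hexdigest() != signature
print("variant no longer matches the hash signature")
```

Modern detection relies on behavioural analysis partly because of this brittleness; machine learning on the attacking side simply automates the search for variants that slip past whatever the defence is.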
Indeed, AI has already been used to exploit vulnerabilities, although the incident itself took place at a Defense Advanced Research Projects Agency (DARPA)-sponsored hacking and defence tournament at the DEF CON security conference in Las Vegas in August 2016.
The DARPA Cyber Grand Challenge was designed to accelerate the development of advanced autonomous systems capable of detecting, evaluating, and patching software vulnerabilities before adversaries have a chance to exploit them. Seven teams competed, and the contest was won by a computer system dubbed Mayhem, created by a team known as ForAllSecure, who walked away with $2m for their efforts.
In addition to patching their own security holes, the teams' automated systems were actively encouraged to find weaknesses in their opponents' code and exploit them before they could be patched. While the aim of the event was to demonstrate how this strategy of taking the fight to opponents could benefit cybersecurity professionals, it's not hard to see how malicious actors could use the same methods for nefarious ends.
“While this was a research tournament to help the ‘good guys’, the contest proved that machines can automatically find and exploit new vulnerabilities rather quickly. In other words, it illustrated one way malicious threat actors might leverage AI for an attack as well as how defenders can leverage it for defence,” says Corey Nachreiner, CTO of network security firm WatchGuard Technologies.
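At its very simplest, the automated vulnerability hunting Nachreiner describes looks like fuzzing: throwing large numbers of generated inputs at a program and recording which ones crash it. The sketch below is a toy version with an invented, deliberately buggy target function; the Cyber Grand Challenge systems combined far more sophisticated fuzzing with symbolic execution, but the core loop is recognisable.

```python
import random

def target(data: bytes):
    # Hypothetical buggy parser: crashes on any input containing a magic byte.
    if b"\x7f" in data:
        raise ValueError("parser crash")

def fuzz(rounds=10_000, max_len=8, seed=1):
    # Generate random byte strings and collect every input that crashes the target.
    rng = random.Random(seed)
    crashes = []
    for _ in range(rounds):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(1, max_len)))
        try:
            target(data)
        except Exception:
            crashes.append(data)
    return crashes

found = fuzz()
print(f"{len(found)} crashing inputs found")
```

Each crashing input is a lead: a human (or, in the Cyber Grand Challenge, another automated component) then works out whether the crash is exploitable and either patches it or weaponises it.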
But while the likes of artificial intelligence and machine learning could prove useful to cybercriminals and hackers, such systems require significant investment and development time to build, at a time when people are still falling victim to even the most basic forms of cybercrime.
Millions are regularly falling for basic phishing campaigns, and it only takes one click for a whole target organisation to be breached. So why would cybercriminal organisations bother to invest in advanced techniques when they’re already winning the fight?
“If I was an attacker, why would I develop a deep learning system like Google is building when I can send 10,000 emails and have one person click on them? It can be around any subject: Donald Trump, the Super Bowl, a coupon advert. People will click. People in security often hype sophistication, but there’s no reason for attackers to do that,” says Oren Falkowitz, CEO and co-founder of Area 1 Security.
Nonetheless, it’s still possible that like any legitimate organisation, hacking gangs will look to exploit machine learning and artificial intelligence tools to augment their operations, if not replace their manual tasks. “Hackers will always look for things to make their processes work smoother,” Falkowitz says.