Machine learning (ML) and artificial intelligence (AI) are reshaping more of the world with each passing day. With automation and efficiency taking center stage, AI now plays a growing role in every industry, and that applies especially to cybersecurity. When ML and AI can be harnessed to make criminal cyber-attacks many times more effective and harder to trace, cybersecurity needs to evolve enough to fight fire with fire.
Malicious actors can harness machine learning to construct complex algorithms and attack patterns that wreak all kinds of havoc across the global cyberspace. Experts suggest that, beyond cracking passwords, AI-backed cybercriminals can now create complex malware capable of hiding from detection entirely.
That, unfortunately, is only the tip of the iceberg: AI is progressing rapidly, and experts can only hope the good guys keep up.
The 3 Huge Dangers
Evading detection is instrumental to hackers’ success, since it allows them to bypass any countermeasures put in place by the authorities and even paves the way for adapting to future cybersecurity barriers. Experts believe that although cybersecurity needs to arm itself with equally advanced technology to combat the looming threat of cybercrime, it will always take human minds to build the most robust defenses, the kind that can resist all manner of attacks.
However, humans need to understand the threat itself before taking the challenge head-on. The following are the three major ways cybercriminals orchestrate AI-backed cyber-attacks.
1. Data Poisoning
Data is the lifeblood of ML and AI. AI systems are built around models that learn from large reserves of data known as ‘training sets.’ By corrupting or manipulating these training sets, attackers adversely affect the models trained on them.
This degrades the models’ accuracy, and the effects can cascade into other models that depend on the same data. Because the attack targets the model’s prediction behavior directly, the errors multiply. Poisoning as little as 3% of a training set via a backdoor attack can cut a model’s accuracy by roughly 11%, which could produce disastrous results.
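To make the mechanics concrete, here is a minimal sketch of the simplest form of poisoning, label flipping, using scikit-learn on a synthetic dataset. The dataset, model, and 3% poisoning rate are illustrative assumptions; real backdoor attacks are far more targeted than this, and the actual accuracy drop varies with the model and data.

```python
# A minimal sketch of label-flipping data poisoning (illustrative only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic binary-classification dataset (an assumption for this demo).
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline: train on clean data.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("clean accuracy:", clean_model.score(X_test, y_test))

# Poison ~3% of the training labels by flipping them.
rng = np.random.default_rng(0)
n_poison = int(0.03 * len(y_train))
idx = rng.choice(len(y_train), size=n_poison, replace=False)
y_poisoned = y_train.copy()
y_poisoned[idx] = 1 - y_poisoned[idx]  # flip 0 <-> 1

# Retrain on the poisoned set and compare accuracy on the same test data.
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
print("poisoned accuracy:", poisoned_model.score(X_test, y_test))
```

The point of the sketch is the attack surface, not the numbers: the attacker never touches the model itself, only the data it learns from.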
2. Manipulation of Bots
Bots are algorithms programmed to make decisions, so simply forcing them to make the wrong decisions is of great value to cybercriminals. Bots can even be turned against the very systems they operate in.
Greg Foss, a senior cybersecurity strategist at VMware Carbon Black, explained at a cybersecurity summit that attackers can abuse decision-making models once they understand how they work. He cited an attack on a cryptocurrency system in which, once the hackers had figured out the trading bots’ patterns, they manipulated the bots into tricking the system’s own algorithm.
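As an illustration of the idea, here is a hypothetical sketch of a naive trading bot whose decision rule an attacker has reverse-engineered. The bot, its momentum rule, and the spoofed price feed are all invented for this example; they do not describe the specific incident Foss referenced.

```python
# A hypothetical, simplified trading bot whose rule an attacker has inferred.

def bot_decision(prices):
    """A naive momentum rule: buy after three consecutive price rises."""
    if len(prices) >= 4 and prices[-4] < prices[-3] < prices[-2] < prices[-1]:
        return "BUY"
    return "HOLD"

# An attacker who knows this rule only needs to manufacture the pattern
# that triggers it: small "pump" trades that create three consecutive rises.
spoofed_feed = [100.0, 100.1, 100.2, 100.3]  # attacker-driven micro-pumps

print(bot_decision(spoofed_feed))  # "BUY" - the bot buys into the inflated
                                   # price, letting the attacker sell high.
```

Once the pattern that triggers a decision is known, the attacker merely supplies that pattern; the bot does the rest.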
3. GANs – Generative Adversarial Networks
GANs are essentially pairs of AI systems that generate data and learn from each other. During training, one network produces the data while the other points out its errors. The result is synthetic content convincingly similar to the original.
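Below is a minimal GAN training loop in PyTorch, where a generator learns to mimic samples from a one-dimensional Gaussian. The architecture, hyperparameters, and toy data are illustrative assumptions chosen for brevity; real GANs used for content forgery are vastly larger, but the adversarial loop is the same.

```python
# A minimal GAN sketch: generator vs. discriminator on toy 1-D data.
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(),
                              nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 1.5 + 4.0  # "real" data: Normal(4, 1.5)
    noise = torch.randn(64, 8)
    fake = generator(noise)

    # Discriminator step: label real samples 1, generated samples 0.
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator step: try to make the discriminator label fakes as real.
    g_loss = loss_fn(discriminator(generator(noise)), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

# After training, generated samples should cluster near the real mean of 4.
print(generator(torch.randn(1000, 8)).mean().item())
```

The same adversarial dynamic is what lets criminal uses of GANs produce forgeries that fool detectors: the generator is literally trained against a critic until the critic can no longer tell the difference.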
GANs can be used in heist-like cybercrime to emulate regular traffic activity during a cyberattack, effectively hiding the criminals and their malware. They can also be used to break passwords and deceive facial-recognition algorithms.