Report sees peril in cybercriminals’ looming use of AI
By Paul Gillen in SiliconAngle
A new report this week by anti-malware vendor Malwarebytes Inc. paints an ominous picture of the potential impact of artificial intelligence technologies such as machine learning and deep learning once criminals have the skills and incentive to use them.
That hasn’t happened yet, but the report’s authors suggest it could be as little as a year or two before AI-powered malware makes its way into the wild.
“Almost by definition, cybercriminals are opportunistic,” the report noted. “You only need one smart cybercriminal to develop malicious AI in an attack for this method to catch on.”
Malwarebytes Lab Director Adam Kujawa drew an analogy to ransomware, which was detected as early as 2010 but was considered only a screen-locking nuisance until 2013, when CryptoLocker debuted with the ability to encrypt files. “Suddenly we saw a lot of variants emerging,” Kujawa said. “For the most part we don’t see a move by criminals en masse until one version completely destroys its target.”
In the short term, the advantage is to the good guys, who are using AI to supplement human labor. In the field of malware, for example, machine learning can be used to create “smart detections that can capture future versions of the same malware, or other variants in the same malware family,” the report’s authors note.
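To illustrate the idea (this is a toy sketch, not Malwarebytes' actual system, and the feature names are hypothetical): a model trained on known samples of a family can flag previously unseen variants because those variants still share most of their behavioral features. Here that is reduced to a nearest-centroid detector over hashed API-call features.

```python
# Toy sketch of family-based malware detection (illustrative only):
# each sample is a list of API-call names, hashed into a fixed-size
# feature vector; a family is summarized by the centroid of its known
# samples, and a new sample is flagged if it is close enough to a
# centroid, which lets the detector catch unseen variants.
import math
import zlib


def featurize(api_calls, dim=1024):
    """Hash each API-call name into a bucket of a fixed-size vector."""
    vec = [0.0] * dim
    for call in api_calls:
        vec[zlib.crc32(call.encode()) % dim] += 1.0
    return vec


def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0


class FamilyDetector:
    """One centroid per known malware family; a sample is attributed
    to the nearest centroid if similarity exceeds the threshold."""

    def __init__(self, threshold=0.8):
        self.centroids = {}
        self.threshold = threshold

    def train(self, family, samples):
        vecs = [featurize(s) for s in samples]
        dim = len(vecs[0])
        self.centroids[family] = [
            sum(v[i] for v in vecs) / len(vecs) for i in range(dim)
        ]

    def classify(self, api_calls):
        vec = featurize(api_calls)
        best, score = None, 0.0
        for family, centroid in self.centroids.items():
            sim = cosine(vec, centroid)
            if sim > score:
                best, score = family, sim
        return best if score >= self.threshold else None
```

A variant that reuses most of a family's behavior (say, the same file-enumeration and encryption calls with one routine swapped) lands near that family's centroid and is caught without ever having been seen, which is the "capture future versions" property the report describes; a production system would of course use far richer features and a learned model rather than hand-set thresholds.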