Researchers from Microsoft and leading global universities study the ‘offensive AI’ threat

A group of researchers from Microsoft and seven leading global universities has conducted an industry study into the threat that offensive AI poses to organisations.

AI is a beneficial tool, but it is indiscriminate: it will just as readily assist individuals and groups that set out to cause harm.

The researchers’ study into offensive AI drew on both existing research into the subject and responses from organisations including Airbus, Huawei, and IBM.

The study highlights three core motivations for an adversary to turn to AI: coverage, speed, and success.

While offensive AI threats come in many shapes, it’s the use of the technology for impersonation that has both academia and industry highly concerned. Deepfakes, for example, are growing in prevalence for purposes ranging from relatively innocuous comedy to far more sinister ends: fraud, blackmail, exploitation, defamation, and the spread of misinformation.

In the past, running similar campaigns with fake or manipulated content was a slow and arduous process with little chance of success. AI not only makes such content easier to create, it also allows organisations to be bombarded with phishing attacks at a scale that greatly increases the chance of success.

Tools such as Microsoft’s Video Authenticator are helping to counter deepfakes, but keeping up with their increasing sophistication will be an ongoing battle.
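Detection tools of this kind generally work by scoring each frame of a video with the probability that it has been artificially manipulated, then flagging the video when those scores cross a threshold. As a rough illustration only – Video Authenticator’s actual interface is not public, and every name and number below is an assumption – a minimal sketch of that aggregation step might look like this:

```python
# Hypothetical sketch of aggregating per-frame manipulation scores into a
# deepfake verdict. The detector, function names, and thresholds are all
# illustrative assumptions, not a real tool's API.
from statistics import mean

def frame_scores(video_path: str) -> list[float]:
    """Placeholder for a real detector that returns, per frame, the
    probability (0.0-1.0) that the frame was artificially manipulated."""
    raise NotImplementedError("plug in an actual deepfake detector here")

def looks_like_deepfake(scores: list[float],
                        mean_threshold: float = 0.5,
                        spike_threshold: float = 0.9) -> bool:
    """Flag the video if scores are high on average, or if any single frame
    spikes - manipulated media often betrays itself in a few frames."""
    return mean(scores) > mean_threshold or max(scores) > spike_threshold

# Synthetic example: mostly clean frames with one suspicious spike.
print(looks_like_deepfake([0.10, 0.20, 0.15, 0.95, 0.10]))  # True
```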

When Google announced its Duplex service – which sounds like a real human while booking appointments on a person’s behalf – concerns were raised that similar technology could be used to automate fraud. The researchers expect bots to gain the ability to make convincing deepfake phishing calls.

The researchers also predict that offensive AI will become increasingly prevalent in “data collection, model development, training, and evaluation” over the coming years.

The study also ranks the top 10 offensive AI concerns from the perspectives of both industry and academia.

Very few organisations are currently investing in ways to counter, or mitigate the fallout of, an offensive AI attack such as a deepfake phishing campaign.

The researchers recommend more research into post-processing tools that can protect software from analysis after development (i.e. anti-vulnerability detection). They also recommend that organisations extend the current MLOps paradigm to encompass ML security (MLSecOps), incorporating security testing, protection, and monitoring of AI/ML models.
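To make the MLSecOps idea concrete, here is a minimal sketch of the kind of automated robustness check such a pipeline could run before deployment: a toy logistic-regression model is evaluated against FGSM-style adversarial perturbations, and the run fails if accuracy degrades past a floor. The model, data, epsilon, and threshold are illustrative assumptions, not anything specified in the paper.

```python
# Illustrative MLSecOps-style robustness gate: how badly does a model degrade
# under small adversarial input perturbations (FGSM)? Everything here (model,
# data, epsilon, threshold) is a toy assumption for the sketch.
import numpy as np

rng = np.random.default_rng(0)

# Toy binary-classification data and a linear model that fits it perfectly.
X = rng.normal(size=(200, 10))
w = rng.normal(size=10)
y = (X @ w > 0).astype(float)

def predict(X: np.ndarray, w: np.ndarray) -> np.ndarray:
    return (X @ w > 0).astype(float)

def fgsm(X: np.ndarray, y: np.ndarray, w: np.ndarray, eps: float) -> np.ndarray:
    """Fast Gradient Sign Method for logistic regression: the gradient of the
    logistic loss w.r.t. the input x is (sigmoid(w.x) - y) * w, so stepping
    eps in its sign direction pushes inputs toward misclassification."""
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    grad = (p - y)[:, None] * w[None, :]
    return X + eps * np.sign(grad)

clean_acc = (predict(X, w) == y).mean()
adv_acc = (predict(fgsm(X, y, w, eps=0.1), w) == y).mean()
print(f"clean accuracy: {clean_acc:.2f}, adversarial accuracy: {adv_acc:.2f}")

# The gate itself: fail the pipeline run if robustness drops below the floor.
floor = 0.5  # illustrative threshold
print("robustness gate:", "PASS" if adv_acc >= floor else "FAIL")
```

In a real pipeline, a gate like this would run alongside the usual MLOps checks, with the monitoring half of MLSecOps watching production traffic for the same kinds of manipulated inputs.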

The full paper, The Threat of Offensive AI to Organizations, is available on arXiv.

(Photo by Dan Dimmock on Unsplash)
