Applying Security Principles to AI Production

AI is becoming more common, which of course means that malicious actors are going to try to attack it. It’s a tale as old as time, but are AI companies doing enough to protect themselves and their products?

There are two main ways to attack an AI. The first is to show it something it isn’t trained to recognize. If an AI encounters an image or a data pattern it can’t interpret, it’s likely to react unpredictably, perhaps even dangerously. Pulling off this kind of attack, however, requires a detailed understanding of the target model.

The easier way to get into an AI is through its supply chain: the tools and data that developers use to train and update the AI. From there, attackers can manipulate the AI directly, with extremely undesirable results. These attacks are entirely plausible. A major supply chain attack just occurred in the form of the SolarWinds breach, which affected up to 18,000 customers.

In short, someone could tamper with the entire AI development process. At best, this results in an AI that simply fails to work as intended. At worst, you could end up with an AI that actively causes harm. You could also lose intellectual property or customer information, and, to add insult to injury, face fines if you’re found to have inadequately protected that information. The bottom line: if you’re an AI company, it’s time to update your threat model.

Fooling an AI with a Single Pixel

You may have seen images in the past of people using elaborate makeup to fool facial recognition cameras. AI has long since learned to compensate for this kind of camouflage, but it’s still possible to fool AI, and it might take a lot less than face paint.

A 2017 study, “One Pixel Attack for Fooling Deep Neural Networks,” shows that image recognition algorithms can be fooled by changing just a single pixel in the target image. According to the abstract, “67.97% of the natural images in Kaggle CIFAR-10 test dataset and 16.04% of the ImageNet (ILSVRC 2012) test images can be perturbed to at least one target class by modifying just one pixel with 74.03% and 22.91% confidence on average.”
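
The paper’s authors search for the pixel to modify using differential evolution, but even a crude random search illustrates the idea. The sketch below is only illustrative: predict_proba is a hypothetical stand-in for your model’s inference call, and the image is assumed to be an H x W x 3 array of RGB values.

```python
import numpy as np

def one_pixel_attack(image, true_label, predict_proba, trials=500, rng=None):
    """Randomly try single-pixel changes until the predicted class flips."""
    if rng is None:
        rng = np.random.default_rng()
    height, width, _ = image.shape
    for _ in range(trials):
        candidate = image.copy()
        y = int(rng.integers(0, height))
        x = int(rng.integers(0, width))
        candidate[y, x] = rng.integers(0, 256, size=3)  # overwrite one pixel with a random color
        probs = predict_proba(candidate)
        if int(np.argmax(probs)) != true_label:
            return candidate, (y, x)  # adversarial image plus the pixel that flipped it
    return None, None  # no misclassification found within the trial budget
```

If a loop this simple can flip predictions on even a fraction of inputs, a motivated attacker with a smarter search strategy will do far better.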

In addition, you can fool image recognition algorithms by distorting images in other relatively simple ways. Google’s image recognition AI will misidentify objects if they’re rotated or flipped horizontally, for example. Although this kind of hack seems trivial, imagine a self-driving car that doesn’t recognize an upside-down stop sign or one that’s been marred with a single drop of paint. Developers need to ensure that their AI systems are robust enough to withstand simple tampering.
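
One low-cost defense is a robustness smoke test: run the model on simple transformations of each test image and flag any that change the prediction. The sketch below is a minimal version of that idea; predict is a hypothetical wrapper around your classifier that returns a label.

```python
import numpy as np

def simple_robustness_check(image, predict):
    """Return the names of simple transforms that change the model's prediction."""
    baseline = predict(image)
    transforms = {
        "flip_horizontal": np.fliplr(image),
        "flip_vertical": np.flipud(image),
        "rotate_90": np.rot90(image, k=1),
        "rotate_180": np.rot90(image, k=2),
    }
    return [name for name, transformed in transforms.items()
            if predict(transformed) != baseline]
```

Transforms that genuinely change an image’s meaning (a rotated 6 becomes a 9) would need to be excluded, but for most object categories a stable model should shrug these off.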

The SolarWinds Hack Foreshadows AI Data Poisoning Attacks

The SolarWinds hack worked because so many companies use the tool. Attackers broke into SolarWinds and injected malicious code into the company’s software build process. When victims downloaded the next SolarWinds update, they unwittingly infected themselves with the malware.

This attack is extremely plausible from an AI standpoint because developers don’t (and can’t) create AI using just a single tool. Companies might use Informatica for ETL, Google BigQuery for data warehousing, OpenNN to implement neural networks, and so on. As we’ve seen with SolarWinds, attackers can weaponize any tool you’re using by poisoning a software update. If you create an AI solution with a poisoned tool, the resulting solution may be compromised in several ways.
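
One basic mitigation, whatever your toolchain, is to verify every artifact you pull into the pipeline against a checksum published through a separate, trusted channel before you run it. The sketch below shows the idea; the path and expected hash are placeholders, and in practice hash pinning in your package manager or a signed release manifest does the same job.

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifact(path, expected_sha256):
    """Refuse to use a downloaded tool, model, or dataset whose hash doesn't match."""
    actual = sha256_of(path)
    if actual != expected_sha256.lower():
        raise RuntimeError(f"Checksum mismatch for {path}: got {actual}")
```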

For example, if an attacker compromises your training data, they can introduce correlations that don’t exist in real life. This could induce you to create an AI solution that is biased against certain groups. If you don’t detect the compromise, this bias could extend into production, where it can harm your customers materially.
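
Catching this kind of poisoning means comparing the current training set against statistics captured from a known-good snapshot, such as how strongly each feature correlates with the label. The sketch below illustrates one such check; the pandas interface, column names, and the 0.10 tolerance are assumptions, not prescribed values.

```python
import pandas as pd

def correlation_drift(df, label_col, baseline_corr, tolerance=0.10):
    """Flag numeric features whose correlation with the label shifted more than `tolerance`."""
    flagged = {}
    for col, old_corr in baseline_corr.items():
        new_corr = df[col].corr(df[label_col])
        if abs(new_corr - old_corr) > tolerance:
            flagged[col] = (old_corr, new_corr)
    return flagged  # an empty dict means no suspicious correlation shifts
```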

Alternatively, an attacker could compromise the training tool that builds your AI model. With that foothold, they could train the model to ignore correlations it would otherwise detect. In other words, you could end up with an AI fraud detection tool that doesn’t detect fraud, a car-mounted LIDAR system that doesn’t detect pedestrians, or a medical screening app that doesn’t detect cancer.
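
A practical safeguard here is an acceptance gate between training and deployment: the model must reach a minimum detection rate on a curated set of known-positive cases (confirmed fraud, labeled pedestrians, verified tumors) before it ships. The sketch below assumes a scikit-learn-style predict interface and an illustrative 90% threshold.

```python
import numpy as np

def acceptance_gate(model, known_positive_features, min_recall=0.90):
    """Block deployment if the model misses too many known-positive cases."""
    preds = np.asarray(model.predict(known_positive_features))  # expects 1 for the positive class
    recall = float(np.mean(preds == 1))
    if recall < min_recall:
        raise RuntimeError(
            f"Model rejected: recall on known positives is {recall:.2%}, below {min_recall:.2%}"
        )
    return recall
```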

To summarize, AI developers need to keep an eye on several new aspects of security. Not only do they need to ensure that their supply chain is safe, but they also need to make sure their AI is robust against simple tampering. Lastly, they should test their AI before production to confirm that its training data doesn’t include any deliberately introduced biases. It doesn’t sound simple, but it’s the only way to keep attackers from taking advantage of the burgeoning AI industry.

Read our white paper below to learn more about what Bitvore can do for you.

Tractable Understanding of the World Using AI + NLP
