IBM Designs a New AI Toolbox to Safeguard AI Systems

IBM, the global technology company, has released a new open-source artificial intelligence (AI) toolbox intended to deliver practical defences for real-world AI systems, built around an understanding of how threat actors can attack AI models.

With artificial intelligence one of the key topics at this year's (2018) RSA Conference, IBM stands apart from vendors that simply use AI to enhance traditional security: the company is focused on protecting the AI models themselves.

To that end, the company has unveiled an open-source initiative dubbed the Adversarial Robustness Toolbox, a platform-agnostic AI toolbox designed both to protect AI models against various threat actors and to mitigate attacks against them. The new toolbox is a library comprising defences, attacks, and benchmarks for implementing enhanced security.
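The toolbox is distributed as a Python library (the PyPI package is adversarial-robustness-toolbox, imported as art). As a rough orientation, those three kinds of components map onto imports like the following; the module layout shown here reflects the toolbox's current public API, which may differ from the 2018 release:

```python
# Orientation only: module paths follow ART's current public API and may
# differ from the version demonstrated at RSA Conference 2018.
from art.attacks.evasion import FastGradientMethod   # attack implementations
from art.defences.trainer import AdversarialTrainer  # defence implementations
from art.metrics import empirical_robustness         # benchmarks/metrics
```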

Adversarial attacks pose a real threat to the deployment of AI systems in security-critical applications. Virtually imperceptible alterations to images, video, speech, and other data have been crafted to confuse AI systems. Worse, such alterations can be crafted even when the attacker has no particular knowledge of the targeted deep neural network's (DNN's) architecture. To make matters worse still, adversarial attacks can be executed in the physical world: rather than altering the pixels of a digital image, an attacker can manipulate the objects themselves.
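The mechanics behind such attacks can be illustrated in a few lines of code. The sketch below is a minimal, self-contained toy, not IBM's demo: a logistic-regression "model" with random weights stands in for a real neural network, and the gradient-sign step (the idea behind the well-known FGSM attack) shows how a perturbation bounded to a small eps per feature can flip a confident prediction.

```python
# A self-contained toy illustration of the gradient-sign attack idea: a tiny
# per-feature perturbation, bounded by eps, flips a confident prediction.
# The logistic-regression "model" and its weights are illustrative stand-ins.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = rng.normal(size=100)                             # toy model weights
b = 0.0
x = rng.normal(size=100) * 0.1 + 0.05 * np.sign(w)   # confidently class 1

print("confidence before attack:", sigmoid(w @ x + b))

# For true label y=1, the cross-entropy gradient w.r.t. the input is
# (p - y) * w; stepping along its sign maximizes the loss under an
# L-infinity budget of eps per feature.
eps = 0.08
p = sigmoid(w @ x + b)
x_adv = x + eps * np.sign((p - 1.0) * w)

print("confidence after attack: ", sigmoid(w @ x_adv + b))
print("max per-feature change:  ", np.abs(x_adv - x).max())  # equals eps
```

Against a real image classifier, the same principle produces pixel changes far too small for a human to notice.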

Adversaries could evade facial recognition systems by wearing specially crafted glasses, or fool the visual recognition systems of self-driving vehicles simply by sticking patches onto traffic signs.

Maria-Irina Nicolae, a research scientist at IBM Security, gave a demonstration in an RSA Conference 2018 session in which an image previously recognized by an AI as an eagle 99% of the time was altered by an attacker so that the AI registered it as another object with 71% confidence. Using the defences in the new IBM AI toolbox, however, such attacks can be averted.

Koos Lodewijkx, IBM's VP and CTO of security operations and response, said that threat actors can attack artificial intelligence (AI) models directly; researchers have, for instance, shown the ability to extract credit card numbers that were included in a model's training data. He added that threat actors do not need access to the same data that was fed into an AI model in order to find new ways of attacking it.

The Adversarial Robustness Toolbox can measure a deep neural network's robustness by determining its accuracy after an attack has occurred. Furthermore, it can harden a DNN by training it on adversarial examples, and it can apply runtime detection to flag inputs that an adversary may have altered. A sketch of this workflow appears below.
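The following is a hedged sketch of how those steps fit together in code. Class names and module paths follow the toolbox's current public API and may differ across versions; the PyTorch model and the random data are hypothetical stand-ins for a real classifier and test set.

```python
# A rough sketch of the attack / measure / harden workflow described above.
# Class names and module paths follow ART's current public API and may
# differ from the 2018 release; the model and data are hypothetical stand-ins.
import numpy as np
import torch
import torch.nn as nn

from art.attacks.evasion import FastGradientMethod
from art.defences.trainer import AdversarialTrainer
from art.estimators.classification import PyTorchClassifier

# Toy classifier over 28x28 grayscale inputs (an MNIST-like placeholder).
model = nn.Sequential(
    nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU(), nn.Linear(128, 10)
)
classifier = PyTorchClassifier(
    model=model,
    loss=nn.CrossEntropyLoss(),
    optimizer=torch.optim.Adam(model.parameters(), lr=1e-3),
    input_shape=(1, 28, 28),
    nb_classes=10,
    clip_values=(0.0, 1.0),
)

# Random stand-in data; a real evaluation would use a held-out test set.
x = np.random.rand(64, 1, 28, 28).astype(np.float32)
y = np.eye(10, dtype=np.float32)[np.random.randint(0, 10, 64)]
classifier.fit(x, y, nb_epochs=1, batch_size=16)

# 1. Attack: craft adversarial examples, then measure accuracy after attack.
attack = FastGradientMethod(estimator=classifier, eps=0.1)
x_adv = attack.generate(x=x)
acc = np.mean(np.argmax(classifier.predict(x_adv), 1) == np.argmax(y, 1))
print(f"accuracy on adversarial inputs: {acc:.2%}")

# 2. Defence: harden the model by mixing adversarial examples into training.
trainer = AdversarialTrainer(classifier, attacks=attack, ratio=0.5)
trainer.fit(x, y, nb_epochs=1, batch_size=16)

# 3. Runtime detection: ART also ships input detectors (under
#    art.defences.detector) for flagging inputs that may have been altered.
```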

As an open-source project, the Adversarial Robustness Toolbox aims to build a vibrant ecosystem of contributors drawn from both academia and industry.

According to an IBM blog post, the key difference from similar ongoing efforts is the project's focus on defence methods and on the composability of practical defence systems. The company added that it hopes the Adversarial Robustness Toolbox will help stimulate research and development around the adversarial robustness of DNNs, while boosting the integration of secure artificial intelligence (AI) in real-world applications.

Source: SearchSecurity