Intel develops defense technology for the U.S. Department of Defense to prevent AI systems from being attacked/deceived

Publisher: ping777 | Last updated: 2020-04-14 | Source: Gasgoo (盖世汽车) | Keywords: Intel

According to foreign media reports, on April 9 local time, Intel Corp. and the Georgia Institute of Technology announced that they will lead a project team for the U.S. Defense Advanced Research Projects Agency (DARPA) to develop defense technology that prevents artificial intelligence systems from being attacked and deceived.


(Image source: Intel)


The project, called Guaranteeing Artificial Intelligence Robustness against Deception (GARD), will run for four years and cost millions of dollars. Intel is the prime contractor and will work to improve cybersecurity defenses against deception attacks on machine learning models.


While adversarial attacks on machine learning (ML) systems are still rare, the number of systems running on ML and AI continues to grow. Because these technologies are an essential part of semi-autonomous and autonomous systems such as self-driving cars, there is a continuing need to keep AI from misclassifying real-world objects (for example, it must reliably distinguish a stop sign from a person): inserting harmful information into a model's training data or slightly modifying the object itself can mislead object recognition.
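To make that "slight modification" concrete, below is a minimal sketch of one widely studied perturbation method, the Fast Gradient Sign Method (FGSM). The model, inputs, and epsilon value are illustrative assumptions; this is not code from the GARD project.

    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model, image, label, epsilon=0.03):
        # `model` is any differentiable classifier; `image` is a batched
        # tensor with values in [0, 1]; `label` holds true class indices.
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), label)
        loss.backward()
        # Nudge every pixel by `epsilon` in the direction that most
        # increases the loss; the change is barely visible to a human
        # but can flip the model's predicted class.
        adversarial = image + epsilon * image.grad.sign()
        return adversarial.clamp(0.0, 1.0).detach()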


The GARD team aims to equip such systems with tools and enhancements to better help them defend against such attacks.


Such attacks have been around for years. For example, in March 2019, Tencent's Keen Security Lab published a report showing that researchers were able to use stickers placed on the road to trick a Tesla Model S into changing lanes into oncoming traffic. In 2017, researchers 3D-printed a turtle with a specially designed shell pattern to trick Google's AI image recognition algorithm into thinking the turtle was a rifle.


Since then, researchers have been studying ways to trick self-driving vehicles with modified road signs, such as using a piece of black tape to fool a semi-autonomous car's camera into reading a speed limit higher than it actually is.


The vision of GARD, therefore, is to leverage prior and ongoing research on flaws and vulnerabilities in AI and ML computer vision systems to build a theoretical foundation that helps identify system vulnerabilities and characterize edge cases, making such systems more resilient to errors and attacks.


In the first phase of the GARD project, Intel and Georgia Tech will improve object recognition by enforcing spatial, temporal, and semantic consistency in both static images and video. This means AI and ML systems can be trained with additional context, learn what should happen in a given situation, and be designed to label scenes or reject improbable ones. The idea in this phase is to develop AI with better "judgment" by bringing more environmental factors to bear when interpreting an image.
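As a rough illustration of the temporal-consistency idea (a hedged sketch, not Intel's or Georgia Tech's actual method): a label that flickers in and out across video frames is less trustworthy than one that persists, so a simple majority vote over recent frames can reject a one-frame misclassification.

    from collections import Counter

    def consistent_label(frame_labels, min_agreement=0.6):
        # frame_labels: per-frame predictions, e.g.
        #     ["stop_sign", "stop_sign", "bird", "stop_sign"]
        # A label is accepted only if it appears in at least
        # `min_agreement` of the frames; otherwise the detection is
        # treated as unreliable and rejected (None).
        if not frame_labels:
            return None
        label, count = Counter(frame_labels).most_common(1)[0]
        return label if count / len(frame_labels) >= min_agreement else None

Here, consistent_label(["stop_sign"] * 4 + ["bird"]) still returns "stop_sign", while a label seen in only a single frame is rejected.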


For example, an attacker could slightly distort a stop sign to trick the AI into thinking it is not a stop sign. To a human, the sign would still look red and octagonal, even if the word "Stop" were altered or removed, so most human drivers would still stop. The AI, however, might misinterpret the sign and keep driving. If the AI is given additional contextual awareness, that is, trained to recognize intersections, sign shapes, sign colors, and so on, it can reject the misclassification just as a human driver would.
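In code, that contextual cross-check might look like the hypothetical sketch below; the cue names (is_octagonal, is_red, at_intersection) are invented for illustration and do not come from the GARD project.

    def resolve_sign(predicted_label, is_octagonal, is_red, at_intersection):
        # Override a suspicious "not a stop sign" verdict using context.
        context_says_stop = is_octagonal and is_red and at_intersection
        if predicted_label != "stop_sign" and context_says_stop:
            # The classifier contradicts strong independent evidence, so
            # the detection is treated as adversarial and the safe label
            # is used instead, as a cautious human driver would.
            return "stop_sign"
        return predicted_label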


All in all, the Intel and Georgia Tech research aims to build better image recognition systems, improving the safety of self-driving vehicles and enabling ML systems across industries to classify objects more reliably.

