Online Learning Update

May 30, 2019

How we might protect ourselves from malicious AI

Filed under: Online Learning News — Ray Schroeder @ 12:10 am

Karen Hao, MIT Technology Review
We’ve touched previously on the concept of adversarial examples—the class of tiny changes that, when fed into a deep-learning model, cause it to misbehave. In recent years, as deep-learning systems have grown more and more pervasive in our lives, researchers have demonstrated how adversarial examples can affect everything from simple image classifiers to cancer diagnosis systems, leading to consequences that range from the benign to the life-threatening.  A new paper from MIT now points toward a possible path to overcoming this challenge. It could allow us to create far more robust deep-learning models that would be much harder to manipulate in malicious ways.

https://www.technologyreview.com/s/613555/how-we-might-protect-ourselves-from-malicious-ai/
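The "tiny perturbations" the excerpt describes can be made concrete with a short sketch. Below is an illustration of the classic fast-gradient-sign attack on a toy linear classifier; the model, weights, and epsilon are our own assumptions for illustration, not anything from the MIT paper. The point is that many per-feature nudges, each small on its own, add up in high dimensions to flip the model's prediction.

```python
import numpy as np

# A minimal sketch of an adversarial example via the Fast Gradient Sign
# Method (FGSM). The linear "model", input, and epsilon are illustrative
# assumptions, not taken from the paper discussed above.

d = 100
w = np.where(np.arange(d) % 2 == 0, 1.0, -1.0)  # fixed toy weights

def predict(x):
    """Class-1 probability under a toy logistic model."""
    return 1.0 / (1.0 + np.exp(-(w @ x)))

def fgsm(x, epsilon):
    """Perturb each feature by at most epsilon against the gradient sign.

    For a linear logit w @ x, the gradient w.r.t. x is just w, so
    stepping by -epsilon * sign(w) lowers the class-1 score as fast as
    possible under an L-infinity budget of epsilon.
    """
    return x - epsilon * np.sign(w)

x = 0.05 * w                      # an input the model is very confident about
x_adv = fgsm(x, epsilon=0.1)      # per-feature change of only 0.1

print(round(predict(x), 3))       # ~0.993: confidently class 1
print(round(predict(x_adv), 3))   # ~0.007: flipped to class 0
```

Robust training of the kind the article describes aims to make models whose predictions cannot be moved this far by perturbations this small.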


