Educational Technology

April 30, 2021

How To Ensure Your Machine Learning Models Aren’t Fooled

Filed under: Educational Technology — admin @ 12:30 am

Alex Saad-Falcon, Information Week

Machine learning models are not infallible. To keep attackers from exploiting a model, researchers have designed various techniques that make machine learning models more robust.
All neural networks are susceptible to “adversarial attacks,” in which an attacker supplies an input deliberately crafted to fool the network. Any system built on a neural network can therefore be exploited. Fortunately, there are known techniques that can mitigate, or in some cases entirely prevent, adversarial attacks. The field of adversarial machine learning is growing rapidly as companies come to recognize these dangers.
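To make the idea concrete, here is a minimal sketch of one of the best-known adversarial attacks, the fast gradient sign method (FGSM). The linked article does not include code, so this is an illustration under simplifying assumptions: the "model" is a toy logistic regression in plain NumPy, and the weights and input values are invented for demonstration. The same principle applies to deep networks, where the gradient is computed by backpropagation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(w, b, x):
    """Probability that input x belongs to class 1."""
    return sigmoid(np.dot(w, x) + b)

def fgsm_perturb(w, b, x, y_true, epsilon):
    """Nudge x by epsilon in the sign of the loss gradient w.r.t. x.

    For logistic regression with cross-entropy loss, the gradient of
    the loss with respect to the input is (p - y_true) * w, so the
    perturbation pushes the input in the direction that increases loss.
    """
    p = predict(w, b, x)
    grad_x = (p - y_true) * w
    return x + epsilon * np.sign(grad_x)

# Toy model: weights and input chosen by hand for illustration only.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([0.3, 0.1])   # clean input, confidently classified as class 1
y_true = 1.0

x_adv = fgsm_perturb(w, b, x, y_true, epsilon=0.5)

print(predict(w, b, x))      # clean prediction is above 0.5 (class 1)
print(predict(w, b, x_adv))  # adversarial prediction drops below 0.5
```

A small, targeted perturbation flips the model's decision even though the input barely changed. Defenses such as adversarial training work by generating examples like `x_adv` during training and teaching the model to classify them correctly.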

https://www.informationweek.com/big-data/ai-machine-learning/how-to-ensure-your-machine-learning-models-arent-fooled/a/d-id/1340630

