Online Learning Update

July 19, 2019

The push for explainable AI

Filed under: Online Learning News — Ray Schroeder @ 12:10 am



While organizations are ultimately legally responsible for the ways their products, including algorithms, behave, many encounter what is known as the "black box" problem: the decisions made by a machine learning algorithm become more opaque to human managers over time as it takes in more data and makes increasingly complex inferences. This challenge has led experts to champion "explainability" as a key factor for regulators to assess the ethical and legal use of algorithms: essentially, the ability to demonstrate that an organization has insight into what information its algorithm is using to arrive at the conclusions it produces.

The Algorithmic Accountability Act would give the Federal Trade Commission two years to develop regulations requiring large companies to conduct automated decision system impact assessments of their algorithms and to treat discrimination resulting from those decisions as "unfair or deceptive acts and practices," opening those firms up to civil lawsuits.
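As a rough illustration of what "insight into what information an algorithm is using" can look like in practice, here is a minimal sketch of permutation importance, one common model-agnostic explainability technique. Everything here is invented for illustration: the toy "loan approval" model, the feature names, and the thresholds are hypothetical, not drawn from any real system or from the legislation discussed above.

```python
import random

# Hypothetical toy model whose decisions we want to explain.
# By construction it only looks at income; age and favorite_color
# are ignored, and a good explanation method should reveal that.
def model(income, age, favorite_color):
    return 1 if income > 50 else 0

random.seed(0)
data = [
    (random.uniform(0, 100), random.uniform(18, 80), random.choice([0, 1, 2]))
    for _ in range(200)
]
labels = [model(*row) for row in data]

def accuracy(rows):
    return sum(model(*row) == y for row, y in zip(rows, labels)) / len(rows)

baseline = accuracy(data)  # 1.0 by construction

# Permutation importance: shuffle one feature column at a time and
# measure the drop in accuracy. Features the model actually relies on
# cause a large drop; ignored features cause none.
importances = {}
for i, name in enumerate(["income", "age", "favorite_color"]):
    col = [row[i] for row in data]
    random.shuffle(col)
    permuted = [row[:i] + (col[j],) + row[i + 1:] for j, row in enumerate(data)]
    importances[name] = baseline - accuracy(permuted)

print(importances)  # income should dominate; age and color near zero
```

Even this crude sketch demonstrates the kind of evidence regulators might ask for: a quantitative statement of which inputs drive a model's decisions, which is a precondition for spotting, say, a protected attribute exerting hidden influence.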
