Scaling AI: 3 Reasons Why Explainability Matters 

NextGov: As artificial intelligence and machine learning-based systems become more ubiquitous in decision-making, should we expect our confidence in the outcomes to remain as high as it is in their human counterparts? When humans make decisions, we’re able to rationalize the outcomes through inquiry and conversation around how expert judgment, experience and use of available information led to the decision. Unfortunately, engaging in a similar conversation with a machine isn’t possible yet. To borrow the words of former Secretary of Defense Ash Carter, speaking at a 2019 SXSW panel about post-analysis of an AI-enabled decision, “'the machine did it' won’t fly.”
