By Morten Hansen, Marketing Manager, Camo Analytics
As more and more decisions are made by advanced analytics software, people need explanations that offer at least a quick glance into the algorithms. The EU supports this “right to explanation” in its GDPR initiative. In a strictly business context, managers and decision makers should ride this trend and not relax until they have an explanation. This is the best defense against “artificial stupidity” arising from AI in a black box.
When the bank turns down your request for a loan, you are entitled to an explanation — even if the credit scoring process is run entirely by more or less intelligent software. This claim to a right to an explanation is put forward by the EU in the context of privacy protection, and the EU plays a leading role in the current historic battles between citizens and corporations. The value of data, and the rights related to personal data, are a key concern in an era of data capitalism. Some years ago – maybe even only months ago – hostility toward social networks like Facebook was limited to a few data privacy fanatics. Then it suddenly went mainstream, with Zuckerberg proclaiming support for the EU’s GDPR legislation from his trial-like Senate hearing. It was a turn of the tide in the global privacy discourse, and this tidal wave could also do some good inside companies.
In pharma and science there is a long tradition of rigorous, scientific use of data. Advanced analytics and supervised machine learning methods are applied to vast amounts of data, and conclusions are reached carefully by evaluating and documenting outcomes. Data scientists at the core of these processes deliver the results and the value. With the wave of digital transformation sweeping all industries comes the potential to automate and to apply analytics in real time to production processes, making analytics an interesting, growing and increasingly valuable contributor. The need for analytics, and the fight to attract analysts, are both growing — which makes off-the-shelf black-box AI solutions a welcome shortcut for staying ahead of the competition. The new kid on the block enters on a red carpet rolled out by disruption-seeking top-level managers. But alarm bells should ring if we accept results from software just because it is labelled AI. We should not ignore the risk of “artificial stupidity” if we automate using black-box AI with no questions asked.
In terms of method, it is key to open up the black box of the algorithms to understand how they work and what they will do to your process and value creation. This is where managers should ride the tidal wave of the right to explanation put forward by consumers. It is fact-based decision making applied to advanced software, and it is healthy. For example, it can be dangerous to overlook inherent traits in the data material that lead to biases, or to settle for one seemingly decisive variable. This may lead to wrong conclusions and destruction of value. The old saying “garbage in, garbage out” is especially true of the data fed to an algorithm. Do not trust an algorithm just because it arrives with flying colors and big fanfare. You need to understand the software, the modelling and the data to be able to trust it.
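A minimal sketch can illustrate the danger of settling for one seemingly decisive variable. In this hypothetical loan-approval data set (all names and numbers are invented for illustration), approval is truly driven by income, but a proxy variable — living in a certain zip code — happens to correlate with income in the sample. Both one-variable models look equally good on the training data, so a black box could silently latch onto the proxy; only by opening the box and checking which variable the model actually uses would a manager catch it.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Hypothetical data: approval is truly driven by income ...
income = rng.normal(50, 10, n)
# ... but a zip-code proxy correlates with income by construction in this sample.
zip_a = (income > 50).astype(float)
approved = (income + rng.normal(0, 1, n) > 50).astype(float)

def train_accuracy(x, y):
    """Fit a one-variable least-squares model and report accuracy on the training data."""
    X = np.column_stack([x, np.ones_like(x)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    pred = (X @ coef > 0.5).astype(float)
    return (pred == y).mean()

# Both models score well in-sample, yet only one reflects the real mechanism.
acc_income = train_accuracy(income, approved)
acc_proxy = train_accuracy(zip_a, approved)
print(f"income model accuracy: {acc_income:.2f}")
print(f"zip-code proxy model accuracy: {acc_proxy:.2f}")
```

In production, where the correlation between zip code and income need not hold, the proxy model would quietly fail — the kind of “artificial stupidity” a demand for explanation is meant to catch.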
To me it is obvious why managers should demand an explanation of AI decision making in their organization: they are ultimately responsible for human decisions, software decisions and everything in between. Taking charge here is an integral part of management. Blind trust in black-box AI is a no-go. Managers, demand your right to an explanation!