Why it is extremely difficult to fix accountability in Artificial Intelligence systems



AI systems are extremely powerful and are reshaping our lives. They are proving extremely useful and are being used everywhere: as customer-support chatbots for online retailers, in social media to suggest audio or video to users, and so on. AI systems are also being used by banks to sanction loans, and by private and government agencies to hire employees. But at the same time, they pose dangers.

For instance, Amazon abandoned an AI recruiting system after finding that it was unfairly discriminating against potential employees based on gender and social background. Whenever an AI system fails, it is extremely difficult to pinpoint the root cause of the problem. AI systems are like black boxes: their inner workings are difficult to trace even for the developers who built the algorithm.

By contrast, it is relatively easy to attach accountability when a conventional software system fails. Why, then, is it so difficult to hold an AI system to account when it fails?

The accountability case for conventional software systems

Suppose a software program is built to calculate electricity charges for a customer of an electricity distribution company. Due to faulty logic in the program, the electricity consumption is reported wrongly. After realising the mistake, the distribution company approaches the developer who built the program, and the developer investigates immediately. In a conventional system, the problem can lie in only two places: the data or the business logic.

The developer first searched the database for the specific line item and found the data to be correct, so they investigated the source code, where they found the problem. The consumption was saved in the database as a floating-point number with two decimal places; in this case, 100.20 units of energy for the month. But the source code performed the calculation with whole numbers, so it computed the charge on 100 units instead of 100.20. Once found, the bug was fixed immediately: the source code was changed to handle floating-point consumption values.
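The billing bug described above can be sketched in a few lines. This is a minimal illustration, not the company's actual code: the function names and the tariff rate are invented for the example.

```python
RATE_PER_UNIT = 5.0  # hypothetical tariff, for illustration only

def buggy_charge(units: float) -> float:
    # Bug: int() drops the fractional part, so 100.20 becomes 100.
    return int(units) * RATE_PER_UNIT

def fixed_charge(units: float) -> float:
    # Fix: keep the consumption as a float throughout the calculation.
    return units * RATE_PER_UNIT

print(buggy_charge(100.20))  # 500.0 -- the customer is undercharged
print(fixed_charge(100.20))  # 501.0
```

The point is that the fault is fully traceable: one line of code truncates the value, and changing that line fixes the system.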

Why are AI systems black boxes?

But what about an AI system? An AI system does not store its logic as inspectable records. It is trained on training data and evaluated on testing data; once that is done, it computes its answers from the parameters (the "memory") learned during training, not from rows saved in a database. If such a system were asked to calculate energy consumption, it would do so from that trained memory. So if it produces a wrong calculation, how do you find where the problem is? Was the issue in the training data, the testing data, or the algorithm itself? It is almost impossible to tell. A difficult nut to crack!
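A toy example makes the contrast concrete. Below, a tiny model learns to predict billed units from meter readings using plain gradient descent; the data and numbers are invented. After training, the only artifact left to inspect is a pair of learned weights, with no per-customer record a developer could point to as "the bug".

```python
# Hypothetical training pairs: (meter reading, billed units)
data = [(1.0, 5.1), (2.0, 9.9), (3.0, 15.2), (4.0, 19.8)]

w, b = 0.0, 0.0   # model parameters: the "trained memory"
lr = 0.01         # learning rate

for _ in range(5000):          # plain gradient descent on squared error
    for x, y in data:
        err = (w * x + b) - y
        w -= lr * err * x
        b -= lr * err

# The model answers queries from its weights alone:
print(round(w * 2.5 + b, 2))     # prediction for an unseen reading
print(round(w, 2), round(b, 2))  # all we can inspect: two opaque numbers
```

If this model mispredicted, there would be no single line item or line of code to correct; the error is diffused across the training data and the learned parameters.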

How to establish transparency in AI systems?

Transparency in a conventional software system is visible, so accountability is easy to establish. Not so in an AI system. The question, then, is: how do we establish transparency in an AI system so that accountability becomes possible?

The first nut to crack is the data used to train and test the AI model. If the data is not clean, or contains bias, the model will produce wrong results. The next thing to check is the algorithm used to build the model. Understanding the algorithm is difficult even for its own developers, so good documentation, written in plain language, must be maintained so that even non-technical people can understand how the model performs its calculations or makes its decisions.
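One simple form the data check above can take is measuring outcome rates per group in the historical records before training on them. This is a hedged sketch with an invented dataset and field names, echoing the hiring example from earlier:

```python
from collections import Counter

# Hypothetical historical hiring records that would feed a model
records = [
    {"gender": "male", "hired": True},
    {"gender": "male", "hired": True},
    {"gender": "male", "hired": False},
    {"gender": "female", "hired": False},
    {"gender": "female", "hired": False},
    {"gender": "female", "hired": True},
]

def hire_rate_by_group(rows, key):
    # Count records and positive outcomes per group, then compute rates.
    totals, hires = Counter(), Counter()
    for r in rows:
        totals[r[key]] += 1
        hires[r[key]] += r["hired"]
    return {g: hires[g] / totals[g] for g in totals}

rates = hire_rate_by_group(records, "gender")
print(rates)  # a large gap between groups is a red flag before training
```

A model trained on skewed records like these will simply reproduce the skew, which is why auditing the data comes before auditing the algorithm.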

Explainability is a must for all AI models. Without proper documentation, it will be difficult to trace the root cause of any wrong decision or calculation an AI model makes.

Only once an AI model is fully transparent will it be possible to establish accountability.





Disclaimer

Views expressed above are the author’s own.








