
Responsible AI: Because the Individual Matters



There’s an inherent tension in applying AI to individuals. A Machine Learning (ML) model identifies correlations in the data it is trained on, and gives results with quantifiable accuracy. But as we know, correlation is not causation, and getting it wrong for an individual, even an outlier, has an impact on a real life. As practitioners, we need to be responsible in how we use AI, and we also need a framework for creating Responsible AI.

As I wrote in my paper Engineering Explainability into Oscar Enterprise AI, explainability is how we create Responsible AI. This blog post discusses why we need Responsible AI.


Introduction: Why we still distrust AI


Ask the reasonable man on the Clapham omnibus whether they’d trust an AI robot to perform surgery on them or whether they’d prefer a human, and they’d probably choose the human. They might think the human is better at making the right judgement in the moment, or worry that the AI might get it wrong, or do something we didn’t want it to do, or they might just not like robots. Ask the same question again, this time adding that past data shows humans have a 90% success rate whereas robots have a 98% success rate, and they might still choose the human over the AI. I think I would. Our distrust of AI is deep-rooted.


The reasons boil down to three broad categories:

· AI might get it wrong = AI might be biased, maybe unintentionally but systemically.

· AI might do something that we don’t want it to do = we cannot fully control autonomous AI.

· Only humans should do the work = pre-judgement of AI without considering its benefit-cost trade-off.


Explainable AI is the most important feature we need to engineer into our AI platforms to address these reasons for distrust, but we need to do more than just software development; we need to be responsible in how we use AI.


Three themes of Responsible AI


HAL isn’t a good role model for Responsible AI
  1. Responsible AI is not, and should not become, Artificial General Intelligence. Instead, Responsible AI is the creation of an ML model to solve a specific problem.

  2. Responsible AI should communicate with humans. Practitioners should be able to convincingly describe how the AI application addresses concerns about privacy, bias and unfairness. Such communication should be accessible to all users, not just other inculcated peer practitioners.

  3. Responsible AI should be accountable and governable. We should be confident that we understand the trade-offs that practitioners have made in creating the AI application, such as between accuracy and explainability (as in the surgery example earlier). Users should be able to audit these trade-offs in their own context, and have an opportunity for redress.


Tools of Responsible AI


Happily, these themes have already been examined in software development. Data privacy laws around the globe have quantified the obligations upon software developers for accountability and governance. We can follow their example to see the tools that AI practitioners will need to develop.

  • Separate treatment of intent and data: Even if the End User License Agreement is clear and accepted, the user still has inviolable rights over the use of their data. Intent and data should be separately accountable and governable. An ML system therefore divides into three parts, not two: the model (e.g. GPT-3 Powers the Next Generation of Apps), its configuration after training (the individual values of all 175 billion parameters), and the data the model uses to make predictions (see the first sketch after this list).

  • Separate treatment of privacy, security and fairness. Most legal jurisdictions treat these separately; for example, in the UK the UK GDPR, the Data Protection Act and the Equality Act are separate pieces of legislation. ML models must be separately auditable on these three axes.

  • There should be opportunity for redress of individual cases. ML models should describe saliency: why did an individual receive that judgement from the model? They should also posit counterfactual data: what could the individual change in their submission in order to change the judgement? Put together, this information gives each user the opportunity to change the outcome (see the second sketch after this list).

  • Not every AI app is suitable for every individual; there’s no need to shy away from this as it’s a corollary to having choice. However, the reasons why should be measurable.
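
To make the first point concrete, here is a minimal sketch, assuming a scikit-learn workflow with invented file names and synthetic data, of keeping the three parts of an ML system separately accountable: the model definition, its trained configuration, and the data it is asked to judge.

```python
# A minimal sketch (hypothetical names, synthetic data) of the three
# separately accountable parts of an ML system: the model, its trained
# configuration, and the prediction-time data.
import pickle

import numpy as np
from sklearn.linear_model import LogisticRegression

# 1. The model: the intent, expressed as code and hyperparameters.
model = LogisticRegression(max_iter=1000)

# 2. The configuration after training: parameter values learned from
#    (here, synthetic) training data, serialised so they can be versioned
#    and audited independently of the code that defines the model.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 3))
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)
model.fit(X_train, y_train)
with open("model_v1.pkl", "wb") as f:   # hypothetical artefact name
    pickle.dump(model, f)

# 3. The data used to make a prediction: one individual's submission,
#    governed by its own privacy and consent obligations.
individual = np.array([[0.4, -1.2, 0.7]])
print("judgement:", model.predict(individual)[0])
```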
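
And for the redress point, a second sketch, again assuming a simple linear model with invented feature names, of the two explanations a user needs: saliency (which inputs drove this judgement) and a counterfactual (what change would flip it).

```python
# A minimal sketch of saliency and counterfactuals for one individual,
# using a linear model so contributions can be read straight off the
# coefficients. Feature names and data are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
features = ["income", "debt", "years_employed"]
X = rng.normal(size=(500, 3))
y = (X[:, 0] - X[:, 1] + 0.5 * X[:, 2] > 0).astype(int)
model = LogisticRegression().fit(X, y)

individual = np.array([-0.2, 0.9, 0.3])
judgement = model.predict([individual])[0]
print("judgement:", judgement)

# Saliency: for a linear model, each feature's contribution to the
# decision score is its coefficient multiplied by the individual's value.
for name, contribution in zip(features, model.coef_[0] * individual):
    print(f"  {name}: {contribution:+.2f}")

# Counterfactual: nudge the most influential feature in the direction
# that opposes the current judgement until the decision flips.
i = int(np.abs(model.coef_[0]).argmax())
step = 0.1 * np.sign(model.coef_[0][i]) * (1 if judgement == 0 else -1)
counterfactual = individual.copy()
while model.predict([counterfactual])[0] == judgement:
    counterfactual[i] += step
print(f"to change the outcome, move {features[i]} from "
      f"{individual[i]:.2f} to {counterfactual[i]:.2f}")
```

In practice, saliency would come from dedicated explainability tooling such as SHAP or integrated gradients, and a counterfactual search would only vary features the individual can actually change; the loop above is just a stand-in for the idea.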


Conclusion: The challenge faced by Responsible AI


In many cases, it’s already too late. There are billions of pictures sloshing around the internet, freely shared and viewed. Social media platforms, with their inadequate Responsible AI safeguards, have allowed companies like Clearview AI to train their ML facial recognition models on these pictures. Clearview has done such a good job that its service is used by over 600 law enforcement agencies globally. It’s also such an egregious violation of individual liberty that nations are taking action (Database firm Clearview AI told to remove photos taken in Australia).


The challenge for all of us practitioners is to convince the next Clearview that Responsible AI is good for business; that just because the data is out there doesn’t make it fair game; and that it’s worth spending the effort and money to make every individual matter.


Coda: New laws of AI


Time to overhaul the original AI rules of conduct

This blog wouldn’t be complete without a nod to the original laws on Responsible AI. Asimov created his three laws of robotics by imagining robots to be tools. The laws seem perfectly obvious when viewed through this lens:

  1. A tool must not be unsafe to use.

  2. A tool must perform its function efficiently unless this would harm the user.

  3. A tool must remain intact during its use unless its destruction is required for its use or for safety.

AI is simply the latest, and most complex, tool that humans have made. Its complexity, and consequent opaqueness, mean that additional laws are now needed. I like the list that New Scientist published, and I close this blog with it:

  • AI may not injure a human being or allow a human being to come to harm (through action or inaction) unless it is being supervised by another human

  • AI must be able to explain itself

  • AI must treat all human beings equally

  • AI must not impersonate a human

  • AI should always have an off switch
