“Universal human rights must be embedded in the design of AI technology frameworks.” That, it seems, is the view of Dr Eileen Donahoe, Executive Director of the Global Digital Policy Incubator at Stanford. Doesn’t that sound wonderful? A failsafe mechanism in every AI programme that shuts it down, or prevents robots and computers from taking over humanity or outsmarting us when we don’t want them to. Problem solved. Or is it?

What is AI? And who can create it?

AI is, at its simplest, the combination of an algorithm and data: simple maths with rules created by humans, but designed to learn on its own.
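To make that concrete, here is a minimal sketch, in Python with made-up numbers, of what “learning on its own” can look like: a few lines of human-written arithmetic that repeatedly nudge a parameter until the predictions fit the data. It is not tied to any real AI framework; it simply illustrates the principle of rules written by a person, values learned from data.

```python
# A toy "AI": fit y = w * x to a few data points by gradient descent.
# The rules (the update step) are written by a human; the value of w
# is learned from the data, not hard-coded.

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 7.8)]  # (x, y) pairs

w = 0.0             # the single parameter the program "learns"
learning_rate = 0.01

for step in range(1000):
    # Gradient of the average squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= learning_rate * grad   # nudge w to reduce the error

print(f"learned w = {w:.2f}")   # ends up close to 2, the slope hidden in the data
```

Scale the same idea up to millions of parameters and oceans of data and you have, in essence, modern AI.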

And anyone with enough brainpower and computing power can create one. Maybe not in 2018, but probably by 2023.

So if we’re talking frameworks, let’s be realistic. We struggle to control the internet and what people create there. How do we propose to implant these global failsafe mechanisms, which hackers will most likely be able to circumvent anyway? And who will be checking and enforcing that every AI framework created adheres to human rights?

Without wanting to sound all doom and gloom, or like a DC Superman movie in an age plagued by terrorism, AI can very quickly become a very powerful weapon. AI will not necessarily be the cold, calculating physical robot the movies would have us believe (although it very well may be). But as masses of data are collected and made accessible to anyone with coffers large enough to buy or gather it, the manipulation of economies and of people is the most likely first case of AI abuse. No humanitarian or academic will be able to prevent that with imaginary human rights protocols written into code.

How are we going to prevent humans from creating AI protocols that could potentially cause harm?

So the real issue is: how are we going to prevent humans from creating AI protocols that could potentially cause harm, when we are only just beginning to dabble with the technology? These are complex global issues that we are absolutely not prepared for.

So tell me – whose responsibility is it to police AI and enforce the rules?