Ethics and Artificial Intelligence: the new green

Just three years ago, few people were thinking about the wide-scale impact of artificial intelligence on the world. Now everyone is talking about artificial intelligence, and about the fact that businesses which do not adopt it will become the dinosaurs of our age.

However, along with this understanding of the importance of AI in our world has come the realization that AI is such a powerful tool that it can cause problems as well as solve them. Stephen Hawking and others warned, in 2014, that AI may be our greatest invention or our last, and since then a plethora of organizations has grown up to think about how we can ensure that AI is used in the best possible way for humanity.

This is all the more important in a world where fake news can be made believable by synthesizing the voice of a politician and combining it with altered footage of that politician, so that she seamlessly appears to say words she has never spoken. The impact of such utterly believable and hard-to-disprove news could be catastrophic.

A quick survey of the organizations involved in thinking about AI ethics shows that the field is not limited to academic institutions but includes business, governments and inter-governmental agencies; a non-exhaustive list includes the following:

  • Centre for the Study of Existential Risk – University of Cambridge

  • Future of Humanity Institute – University of Oxford

  • Centre for the Future of Intelligence – Universities of Oxford, Cambridge, Berkeley and Imperial College

  • Machine Intelligence Research Institute – California

  • OpenAI – worldwide but based in US

  • Campaign to Stop Killer Robots – worldwide but based in UK

  • Campaign against Sex Robots - UK

  • Partnership on AI – members include Google, DeepMind, IBM, Amazon, Apple, Facebook

  • Singularity University - US

  • UN AI Centre, The Hague

  • Future of Life Institute – MIT, funded by Elon Musk

  • Foundation for Responsible Robotics - European

  • Institute of Electrical and Electronics Engineers (IEEE) – Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems – worldwide

These sit alongside various centres and consortia based at universities such as Carnegie Mellon, Harvard, MIT and the University of Texas.

Additionally, reports on AI and its impact have been produced by the Obama administration in the US, the UK Government and the EU. Both Japan and Korea have developed codes of ethics for their robots.

It is obvious that business cannot afford to ignore this wellspring of interest in AI and its impact on future society. Studies from numerous research groups agree that automation will eliminate about 50% of jobs in the USA within the next eight years, and none of those studies expects that the same number will be created; some of this change will be driven by AI. The impact will be felt in business, in the way the economy works and in our political institutions.

Two pieces of work are of particular note: the IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems, and the 23 Principles created at Asilomar by the attendees at a Future of Life Institute event in January 2017.

The IEEE Initiative has been working since December 2015 and, by the time it published its report in December 2016, had gathered input from over 100 experts from across the spectrum who are working to create standards for the ethical design of AI and autonomous systems. It has 12 subject-matter committees, each working on a different area of AI design.

Additionally, it has now proposed 10 ideas for IEEE standards, which are in the pipeline towards becoming formal standards. A brief overview can be seen here, and the whole report is open for responses until the middle of May 2017.

The Asilomar meeting brought together a similar number of experts working in this area for discussions about AI, the future economy, law, ethics and lethal autonomous weapons. During the meeting we created 23 Principles for the ethical design of AI, on which over 95% of attendees agreed. The overlap with the work of the IEEE is obvious and deliberate.

Ethics is a much misunderstood word and tends to carry negative connotations of ‘impeding innovation’, but in fact responsible innovation is to the benefit of everyone working in AI. Good policy and governance of emerging technology helps acceptance; one only has to look at the debacle of GMOs in the EU for an example of what can go wrong when the public turns against a technology. Unless we embrace responsible development and use of this technology, the short-term negative effects could outweigh the benefits and may have profound effects on the institutions we hold dear.

 

Kay is an international authority on law and ethics in AI. For the past three decades, Kay has worked as a barrister, mediator, arbitrator, professor and judge. In her previous role as Chief Officer of the Ethics Advisory Panel of Lucid.ai, she led the ethical design, development and use of AI.

Kay is co-founder of the Consortium for Law and Policy of Artificial Intelligence and Robotics at the Robert E. Strauss Center, University of Texas and teaches a course at the UT Law School for the Consortium: "Artificial Intelligence and Emerging Technologies: Law and Policy".

Kay is also a Distinguished Scholar of the Robert E. Strauss Center at the University of Texas and Vice Chair of the IEEE Industry Connections "Global Initiative for Ethical Considerations in the Design of Autonomous Systems". Kay is a regular speaker and special policy adviser, with upcoming and previous work with the Max Planck Institute, the British Academy, the EU Parliament, the British Parliament, the Royal Society, the National Academies of Science and the White House.
