Artificial Intelligence is the technology which, both as software in computers and embodied in devices such as robots, is the engine of the Fourth Industrial Revolution; it will change everything and challenge us to understand what it is to be human.
It is a complex, fast, tireless and transformative technology which is currently concentrated in the hands of a few. This increases power imbalances within nations and globally. China recently announced its AI strategy and is investing billions in AI, leaving the investments made by all other governments far behind. It may be inevitable that this investment will take China to the top of the AI tree and give it an unassailable lead. Kai-Fu Lee worries that this will happen and has written a dire warning in the New York Times that unless AI can be spread widely across the globe, most nations will simply become vassals of those which have AI industries. Additionally, AI is largely unregulated, which has prompted calls, including in China, for ethical and governance structures.
We will be interacting with machines which at least appear intelligent. What will be the effect on us as our co-workers are increasingly non-human? These questions and more have brought groups together to think about the ethical challenge, for example the World Economic Forum, the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems and the Centre for the Future of Intelligence.
At the World Economic Forum the work will be centered on building trust in AI: finding ways to address the big-picture issues of transparency, bias, privacy and accountability. To do so we need to think about the technology holistically – human wellbeing, human-centred design, ethics and values, and social inclusion as well as profit. This technology challenges our traditional economic measures.
My current thinking for such projects, which will be developed with partners from government, business, academia and civil society, is as follows:
AI Board Leadership Toolkit: As AI increasingly becomes an imperative for business models across all industries, corporate leaders are often ill-equipped to identify the specific benefits this complex technology can bring to their businesses, or to address the need to design, develop and deploy it responsibly. A practical set of tools can assist board members and decision-makers in asking the right questions, understanding key trade-offs and meeting the needs of diverse stakeholders, as well as in considering and optimizing approaches such as appointing a Chief Values Officer or creating an Ethics Advisory Board. This project will partner with the Forum’s Community of Chairmen.
Unlocking Public Sector AI: Although AI holds significant potential for vastly improving government operations, many public institutions are cautious about harnessing it due to concerns over bias, privacy, accountability, transparency and overall complexity. Baseline standards for effective and responsible procurement and deployment of AI by the public sector can help overcome these concerns, opening the door to completely new ways for governments to better interact with and serve their citizens. Also, as a softer alternative to regulation, governments’ significant buying power and public credibility can drive adoption of these standards by the private sector.
Generation AI: Standards for Protecting Children: AI is increasingly being embedded in children’s toys, tools and classrooms, creating sophisticated new approaches to education and child development tailored to the specific needs of each user. However, special precautions must be taken to protect society’s most vulnerable demographic. Actionable guidelines can help address privacy and security concerns arising from data unknowingly collected from children; enable parents to have agency in understanding the design and values of these algorithms; and prevent biases in AI training data and algorithms from undermining educational objectives. Transparency and accountability can build the trust necessary to accelerate the positive societal benefit of these technologies for all.
National Centers for AI Governance: An increasing number of governments around the world are establishing national AI centers or commissions to address the rapidly expanding impacts of AI on their citizens, but they often lack the tools or expertise to drive tangible impact. A practical playbook can help governments evaluate different models and approaches, set concrete goals and select tools to achieve them, learn from existing examples in diverse contexts, and connect with a broader network of similar endeavors for shared learning and collaboration.
Currently Scoping:
The Ethics Switch: Just as electricity needs a circuit breaker, AI-enabled devices need a control switch to prevent them from committing actions which are unethical in the jurisdiction in which they are being used. This project will explore different models to accomplish this and collect insights from current efforts.
Teaching Responsible AI: Co-developing a university-level ethics curriculum for students of AI, and helping to scale efforts to educate students from diverse backgrounds in the use of AI, can help to address both the bias created when AI is coded by people of similar backgrounds and the shortage of skilled, diverse AI developers.
Additionally, it is particularly important to work with partners in the developing world to ensure that AI brings its benefits to all, changing our lives for the better and increasing our abilities, our horizons and our capacity to care for the planet.
In addition to my work at the World Economic Forum, I have been Vice-Chair of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems for two years. This has been a huge effort and is the brainchild of Konstantinos Karachalios, Managing Director of IEEE Standards, and John Havens, now Executive Director of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. They began working together out of their mutual concern about the need for such systems to be designed ethically and with a human-centred approach.
In December 2015, they created the Initiative and asked 12 people to join them. I was very honoured to be asked to serve as Vice-Chair. By April 2016, some 80 more academic and technical experts had joined the work. We held a conference in The Hague in the summer of 2016, and in December 2016 the first report arising from the work was released. It comprised the work of about 120 volunteer experts who had worked in eight different committees on aspects of AI and ethics, including the one which I co-chaired on law and governance of the technology.
In 2017, the work expanded to 13 different committees and over 250 experts, with a conference in Austin, Texas. The second report was released on 12 December 2017, and the work will continue through 2018 until the release of the final report in December. Currently, 11 putative IEEE standards arising from the work of the Initiative are in the process of discussion. They cover areas such as algorithmic bias and transparency, together with privacy and AI.
The reports have contributors from across the globe and have been translated into nine languages, including Brazilian Portuguese, Russian, Chinese and Japanese.