
How to govern AI?

Throughout history, people have adversely reacted to the introduction of new technology, from books and the telephone to cars and the personal computer. While new technologies like AI promise to improve efficiency and competitiveness for countries, citizens, and companies, the negative impacts on society should be minimized through governance.


Many participants underscored that regulation and testing are essential to fully understand an AI system's implications before it is rolled out in society, though such a policy is easier to demand of companies and governments than to implement. In Germany, participants argued that the rapid introduction of AI should be accompanied by oversight of the technology from both companies and government.


Central Europe and Mexico, for example, favor the quick application and adoption of AI in order to remain competitive with technology leaders such as the US and China. The challenge of balancing the benefits of AI against fears about its repercussions is not limited to any one country. All governments would benefit from examining and accurately conveying how AI will change the delivery of services for citizens, rather than presenting it only as a technology that will disrupt daily life.


By dispelling misconceptions and gathering feedback, governments could adopt AI more successfully, and citizens could ensure that the decision-making process is transparent and guards against unethical practices and biases. Regulation in the EU has been largely successful because of existing trust in public institutions. Building on that trust, novel quasi-governmental institutions, as suggested in Germany and Central Europe, could help vet new AI technologies and develop policies with companies to ensure the technologies align with core European values.


These policies could then be transposed into the national legislation of EU Member States, supporting consistency and data protection. An EU-wide labeling system, similar to ecological standards for household appliances or organic foods, could inform consumers about a technology's creation, contents, and capabilities, including how consumers' data would be used. Given that the EU has some of the world's strictest consumer protection regulations, actions such as these would help it become a leader in mitigating ambivalence surrounding AI.


In the UK, these boards could play a key role in shaping decisions on best practices and systems to regulate new technologies. Similarly, in France, incorporating feedback from groups adversely affected by automation, disenfranchisement, and inequality could inform the development of AI, ensuring that technological innovation is tied closely to social impact and mitigates negative disruptions to society. Establishing rules for the design and development of AI is important, but participants in Central Europe recommended that the region avoid over-regulation to prevent stifling innovation and competitiveness.


India was the only country whose participants suggested governing AI through an international lens, proposing that the Universal Declaration of Human Rights serve as an anchor for the governance of ethical AI, given the socioeconomic challenges unique to the Indian context. Alternatively, each country could develop its own AI systems with little regulation, or countries could export AI technologies to others in a fair and ethical way. From these discussions, it was apparent that an international solution to govern and regulate AI would not be politically feasible, or necessarily desired, for all the NextGen countries.


Source: NextGen Network Report (Aspen Institute)




