What are the key trends for achieving accountability and #AIforGood while preventing negative impacts on human rights? The EU Delegation, in collaboration with OHCHR, the Global Network Initiative (GNI) and Humane Intelligence, organized an innovative event to discuss this important issue in more detail.
A number of technology regulations are emerging to put safeguards in place across the technology lifecycle, with the aim of effectively addressing and preventing online human rights violations and abuses, which are often facilitated by increasingly powerful AI systems. Companies will need to carry out human rights due diligence and risk assessments, in addition to meeting transparency and audit requirements relating to digital technologies, including AI.
The event brought together more than 70 experts from international organizations, diplomatic missions, private technology companies and NGOs working at the nexus between human rights and technology.
“Through this multi-stakeholder approach, we can not only address the potential harms of these new technologies, but also ensure that they truly empower individuals. Today we heard how important it is to establish AI guardrails, and that we do not have to choose between safety and innovation. They should go hand in hand! Only when society trusts AI and other new technologies can we scale them up.”

Ambassador Lotte Knudsen, Head of the EU Delegation
The EU’s Digital Services Act (DSA) holds large digital services accountable in a way that protects fundamental rights, based on risk assessment, risk mitigation, auditing and data transparency practices. Following a similar risk-based approach, the recently adopted EU AI Act, the world’s first comprehensive legal framework on AI, sets rules to promote trustworthy AI by ensuring that AI systems respect fundamental rights, safety and ethical principles, and by addressing the risks of very powerful and influential AI models. Similar efforts are intensifying in other regions: several countries in Latin America are starting to prepare their own AI regulations, and in Africa, the African Union Commission continues its work on AI.
Ideally, these new regulatory frameworks would build on decades of voluntary practices, such as transparency reporting, human rights risk assessments, and audits, developed to promote responsible corporate behavior in line with the United Nations Guiding Principles on Business and Human Rights (UNGPs). These regulatory developments, however, require blending traditional audit and evaluation processes with technical audits. For oversight and enforcement, companies are increasingly being asked to share data and code so that auditors can evaluate algorithms and datasets. This is a promising development for enabling accountability and leveraging AI for good while preventing negative impacts on human rights.
However, many questions and challenges remain about how these regulations will be implemented, verified and enforced in practice, in a way that protects people’s fundamental rights and is compatible with technical requirements. In particular, there is a lack of guidance on how companies and assessors should implement risk assessment and audit mechanisms in line with the UNGPs, and on how civil society and academia can most meaningfully engage in these processes.
The UN Human Rights B-Tech Project, in collaboration with BSR, GNI, and Shift, has contributed to the development of several papers summarizing and explaining how international human rights and responsible business frameworks should guide approaches to risk management related to generative AI. Further work is needed to understand how business and human rights practices can inform and bridge AI-focused risk assessments in the context of regulations such as the DSA and the EU AI Act, and to engage with the tech community on these implications.
The event explored the following questions:
- What are the key global trends regarding regulations mandating human rights risk assessments for technology companies?
- How can stakeholders (including engineers) facilitate benchmarking of comparable AI risk assessments and audits?
- What is the appropriate methodology for AI audits, and what data is needed to conduct accountable AI audits?
- What is the role of enforcement and oversight mechanisms?
- How can civil society and academia most meaningfully engage in these processes?
- How can companies and external stakeholders use AI risk assessments and audits to ensure accountability and drive change?
Speakers:
Juha Heikkilä, Adviser for Artificial Intelligence, European Commission Directorate-General for Communications Networks, Content and Technology (DG CNECT)
Rumman Chowdhury, CEO, Humane Intelligence
Lene Wendland, Chief of Business and Human Rights, UN Human Rights Office (OHCHR)
Mariana Valente, Deputy Director, InternetLab (Brazil), Professor of Law at the University of St. Gallen, and Member of the Commission of Jurists on the Brazilian AI Bill
Alex Walden, Global Head of Human Rights, Google
Jason Pielemeier, Executive Director, Global Network Initiative