
On May 30, the Center for AI Safety released a public warning about the dangers that artificial intelligence poses to humanity. The one-sentence statement, signed by more than 350 scientists, business executives and public figures, declared: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
The stark double irony of this announcement is hard to miss.
First, some of the signatories warning of the end of civilization – including the CEOs of Google DeepMind and OpenAI – represent the very companies responsible for creating this technology in the first place. Second, these same companies have the power to ensure that AI actually benefits humanity, or at least does not harm it.
They should heed the advice of the human rights community and immediately adopt a due diligence framework that helps identify, prevent and mitigate potential negative impacts of their products.
Although scientists have long warned about the dangers of AI, until the recent release of new generative AI tools, much of the general public remained unaware of its potential negative consequences.
Generative AI is a broad term describing “creative” algorithms that can generate new content on their own, including images, text, audio, video and even computer code. These algorithms are trained on huge datasets and then use that training to produce output that is often indistinguishable from “real” data – making it difficult, if not impossible, to tell whether a piece of content was created by a person or by an algorithm.
To date, generative AI products have taken three main forms: tools like ChatGPT that generate text; tools like Dall-E, Midjourney and Stable Diffusion that generate images; and tools like Codex and Copilot that generate computer code.
The sudden rise of new generative AI tools is unprecedented. The ChatGPT chatbot developed by OpenAI took less than two months to reach 100 million users, surpassing the initial growth of even wildly popular platforms like TikTok, which took nine months to reach the same number of users.
Throughout history, technology has advanced human rights but has also harmed them, often in unexpected ways. When internet search tools, social media and mobile technology were first released and grew into widespread adoption and accessibility, it was almost impossible to predict the many troubling ways in which these transformative technologies would become drivers and multipliers of human rights violations around the world.
Meta’s role in the 2017 ethnic cleansing of the Rohingya in Myanmar, for example, or the use of nearly undetectable spyware deployed to turn mobile phones into 24-hour surveillance machines targeting journalists and human rights defenders, are both consequences of introducing disruptive technologies whose social and political implications had not been seriously considered.
Learning from these developments, the human rights community is calling on companies developing generative AI products to act immediately to prevent any negative consequences for human rights.
So what might a human rights-based approach to generative AI look like? There are three steps we propose, based on evidence and examples from the recent past.
First, to meet their responsibility to respect human rights, these companies must immediately implement a rigorous human rights due diligence framework, as outlined in the UN Guiding Principles on Business and Human Rights. This includes proactive and ongoing due diligence to identify actual and potential harms, transparency about those harms, and mitigation and remediation where appropriate.
Second, organizations developing these technologies must actively engage with academics, civil society actors, and community organizations, especially those representing traditionally marginalized communities.
While we cannot predict all the ways these new technologies may cause or contribute to harm, there is extensive evidence that marginalized communities are the most likely to suffer. Early versions of ChatGPT exhibited racial and gender biases, suggesting, for example, that Indigenous women are “worth” less than people of other races and genders.
Active engagement with marginalized communities must be part of the product design and policy development processes to better understand the potential impact of these new tools. This cannot happen after companies have already caused or contributed to harm.
Third, the human rights community itself must step up. In the absence of regulation to prevent and mitigate the potentially dangerous effects of generative AI, human rights organizations should take the lead in identifying actual and potential harm. This means human rights organizations should help build a deep understanding of these tools and develop research, advocacy and engagement that anticipate the transformative power of generative AI.
Complacency is not an option in the face of this revolutionary moment – but in this case, neither is cynicism. We all have a part to play in ensuring that this powerful new technology is used to benefit humanity. Applying a human rights-based approach to identifying and responding to harm is an important first step in this process.
The views expressed in this article are the author’s own and do not necessarily reflect the editorial position of Al Jazeera.