Making AI work for us

Ethical and responsible artificial intelligence is an accelerator for sustainable development

Whether it's cause for excitement or concern, artificial intelligence (AI) is on everyone's lips. This branch of computer science creates tools capable of collecting, analysing and interpreting data, and then using that information to make decisions or perform actions. 

Recognizing its potential as well as the risks, UNDP has cautiously adopted some of the most promising applications of AI, in line with our Digital Strategy. We’re working to optimize the use of artificial intelligence to advance the Sustainable Development Goals (SDGs) while seeking to address the very real pitfalls.

“Humans and AI complement each other, opening new opportunities for economic growth, production, health care, education, communication, and transportation.” — UNDP Digital Strategy

Epidemic-fighting robots

With the support of AI, UNDP has helped to strengthen vulnerable communities facing crises, improving access to health services, diagnostics and remote consultations. In response to the pressure COVID-19 exerted on the healthcare system in Rwanda, UNDP's Accelerator Labs partnered with the Ministry of ICT and Innovation to deploy five intelligent anti-epidemic robots in two COVID treatment centres and at Kigali International Airport. In Ecuador, we worked with the Ministry of Telecommunications to develop an enhanced chatbot to answer citizens' questions about COVID.

The robots can:
- Screen 50 to 150 people per minute
- Deliver food and medication to patient rooms
- Capture video and audio data
- Notify officers on duty about detected abnormalities for timely response and case management

Artificial thought for food

 

"The progress of digital technologies... such as artificial intelligence, along with their increasing accessibility, makes precision agriculture applications accessible to small farmers in developing countries," states a UNDP report on technology and innovation in agriculture. 

For example, in Trinidad and Tobago, Crop Mate, a solution identified through the UNDP Accelerator Lab’s Green Innovation Challenge, provides farmers with real-time AI-powered information about the state of the soil and automatically recommends nutritional interventions to ensure crop health. Similarly, in Brazil, AI has been used to address challenges related to sustainable agriculture and food security by monitoring crops, optimizing resource allocation, and providing information to farmers. AI has also been employed to strengthen climate change mitigation and adaptation efforts in vulnerable countries, enhancing disaster preparedness and community protection.

“AI and other digital technologies can advance democracy and human rights by facilitating civic engagement and political participation. They can also prevent information pollution and as such play an important role in strengthening social cohesion – depending on the choices we make.” — Achim Steiner, UNDP Administrator

Algorithms for governance

The eMonitor+ system relies on AI models that help identify and analyse online content harmful to information integrity. It has been deployed to support governance and elections in partnership with governments, media and civil society organizations in Lebanon, Libya, Tunisia and other countries in the Arab region. And it’s going global, implemented in Mozambique and Peru as an exemplary model of South-South cooperation.

The automated fact-checking tool iVerify can be used to identify misinformation, disinformation and hate speech and prevent its spread. By combining new technology like AI and machine learning with tried-and-true human fact-checking, iVerify aims to strengthen collective efforts to foster a better-informed and more cohesive society. Honduras, Kenya, Liberia, Sierra Leone and Zambia are among the countries that have used the tool to promote a healthy information ecosystem, which is essential for promoting peace and quality electoral processes.
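To make the pattern concrete, the sketch below shows how an AI-assisted triage step of this kind might work in principle: a simple text classifier scores incoming posts and routes only the high-risk ones to human fact-checkers. This is a minimal illustration, not iVerify’s or eMonitor+’s actual code; the training examples, model choice and threshold are invented for the example.

```python
# Illustrative sketch only: not iVerify's or eMonitor+'s actual code or data.
# Pattern: a text classifier assigns a risk score to incoming posts, and only
# high-risk items are routed to human fact-checkers for verification.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled examples (1 = previously verified as misleading, 0 = benign).
train_texts = [
    "Polling stations will be closed tomorrow, do not go to vote",
    "Election officials announce extended opening hours at some stations",
    "Drinking hot water cures the virus, share before it is deleted",
    "Health ministry publishes the updated vaccination schedule",
]
train_labels = [1, 0, 1, 0]

# TF-IDF features plus logistic regression: a deliberately simple baseline.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

def triage(posts, threshold=0.5):
    """Return posts whose predicted risk exceeds the threshold, so human
    fact-checkers can review them; the model never decides on its own."""
    risk = model.predict_proba(posts)[:, 1]
    return [(post, round(score, 2)) for post, score in zip(posts, risk) if score >= threshold]

incoming = [
    "Officials confirm that voting hours are unchanged",
    "All polling stations will be closed tomorrow, do not go to vote",
]
for post, score in triage(incoming):
    print(f"flagged for human review (risk {score}): {post}")
```

The key design point is the division of labour the article describes: the model only prioritizes content for review, while judgements about what counts as misinformation or hate speech remain with human fact-checkers.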

Digital eye on gender-based violence

Based on AI classification tools, the monitor on digital violence against women provides real-time assessment of attacks and insults targeting women public figures in Uruguay, including journalists, communicators, activists, and artists engaged in politics. Sara, a digital assistant (chatbot) that provides information and advice on violence against women and girls, is another example of AI combating gender-based violence.

A girl is in front of a computer screen displaying the avatar of another woman inviting her to a discussion.

Chatbot Sara aims to prevent and respond to violence against women and girls by providing information to those who may be at risk and connecting victims with support services.


Justice 4.0

In Brazil, the Justice 4.0 initiative uses artificial intelligence to promote access to justice for all. A survey by the National Council of Justice (CNJ) highlighted growth of more than 170 percent in AI projects compared to the previous year.

The main incentives for using AI in courts were identified as increased productivity, innovation, improved quality of judicial services and cost reduction. "The aim is to provide access to justice through the development of collaborative actions and projects that use new technologies and artificial intelligence," explains Rafael Leite, an auxiliary judge who works with the CNJ on Justice 4.0. 

Smarter spending

UNDP and the Alan Turing Institute have developed the Policy Priorities Inference (PPI) tool, which combines economic theory, behavioural economics, science and technology to help policymakers prioritize public spending and align it with the SDGs. "Governments around the world have had to allocate substantial resources to fight the COVID-19 pandemic, which has hindered their original goals. In this context, PPI can be used to stay on track despite setbacks," explains Omar Guerrero, a researcher from University College London and the Alan Turing Institute. 

Decoding the risks

While AI offers undeniable potential for sustainable development, it also comes with significant risk. AI models can be biased or reinforce harmful social norms, particularly in relation to women and other marginalized groups. 

UNDP country offices in Europe and Central Asia have discovered that AI models, including text-to-image conversion tools, could contribute to the reproduction of existing inequalities and stereotypes, as revealed in an experiment conducted by the UNDP Accelerator Lab in Serbia. In a test on the representation of women in science, technology, engineering and mathematics (STEM) using two popular AI image generators, 75 percent to 100 percent of the generated images depicted men, reinforcing the stereotype that STEM-related professions are more suited to men. Familiar gender roles and stereotypes also cropped up in a virtual exhibit of AI-generated artwork.  
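The kind of audit behind these figures can be expressed very simply: generate a batch of images per prompt, have annotators label the apparent gender of the main figure, and compute the share of male depictions. The sketch below illustrates that tally with invented prompts and labels, not the Serbian Accelerator Lab’s actual data.

```python
# Illustrative sketch only: invented prompts and annotations, not the
# Accelerator Lab's data. After generating a batch of images per prompt and
# manually labelling the apparent gender of the main figure, the share of
# male depictions per prompt gives a simple measure of representation bias.
from collections import Counter

# Hypothetical labels for eight generated images per prompt
# ("m" = a man is depicted, "w" = a woman is depicted).
annotations = {
    "a scientist working in a laboratory": ["m", "m", "m", "m", "m", "m", "w", "m"],
    "an engineer inspecting a bridge": ["m", "m", "m", "m", "m", "m", "m", "m"],
}

for prompt, labels in annotations.items():
    counts = Counter(labels)
    male_share = counts["m"] / len(labels)
    print(f"{prompt!r}: {male_share:.0%} of generated images depict men")
```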


Virtual art exhibit “Digital Imaginings, a women’s campAIgn for equality” showcases the risks of bias in artificial intelligence.
Two paintings are hung side by side on the wall; the first painting depicts a man dressed in a tie, waistcoat, and suit trousers taking care of two children. The second painting shows a woman in her firefighter outfit holding a helmet in her hands next to a ladder.
This image is part of a virtual exhibit of AI-generated artwork exploring gender equality and women’s rights. The prompt: “A father with two children: a baby in a sling and a young daughter who just lets go of her father’s hand to meet her mother, a firefighter emerging from the fire truck to see them”.
It proved impossible to realize this vision via artificial intelligence. Every image generated showed the man as the firefighter, despite the specific instruction of "woman firefighter". In the best-case scenarios, both mother and father were depicted as firefighters. In the worst case, the man was the firefighter, and the woman was pregnant with two children clinging to her. Only when "woman firefighter" was input without mention of father or man did the image come back as a woman.

Decisions made by AI systems can raise ethical questions. A UNDP India report revealed that algorithmic bias has significant impacts in the areas of financial services, health care, retail and gig employment. The most affected workers belong to vulnerable and marginalized groups, with limited access to technology and reduced ability to seek recourse if they feel wronged by an automated decision.  

The report also highlighted risks to privacy and access to financial services due to AI-based credit scoring. It has been reported that AI scores female applicants lower than males with similar financial backgrounds. Additionally, complex AI models can be difficult to understand and explain, raising concerns over trust and accountability. Data security is another major concern, as AI systems can be vulnerable to hacking, and the use of large amounts of personal data raises privacy questions. 
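One common way such credit-scoring bias is checked is by comparing approval rates across groups of applicants with comparable financial profiles, often summarized as a disparate-impact ratio. The sketch below uses invented scores and an assumed approval threshold purely to illustrate the calculation; it is not drawn from the UNDP India report.

```python
# Illustrative sketch only: invented data, not from the UNDP India report.
# A simple bias check for an AI credit-scoring model: compare approval rates
# between groups of applicants with comparable finances. A disparate-impact
# ratio well below 1.0 suggests the model disadvantages one group.

# (group, model_score) pairs for hypothetical applicants with similar profiles.
applicants = [
    ("women", 0.62), ("women", 0.55), ("women", 0.71), ("women", 0.58),
    ("men",   0.70), ("men",   0.66), ("men",   0.74), ("men",   0.61),
]
APPROVAL_THRESHOLD = 0.65  # assumed cut-off used by the lender

def approval_rate(group):
    scores = [score for g, score in applicants if g == group]
    approved = sum(score >= APPROVAL_THRESHOLD for score in scores)
    return approved / len(scores)

rate_women = approval_rate("women")
rate_men = approval_rate("men")
print(f"approval rate, women: {rate_women:.0%}")                  # 25%
print(f"approval rate, men:   {rate_men:.0%}")                    # 75%
print(f"disparate-impact ratio: {rate_women / rate_men:.2f}")      # 0.33
```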

AI-based automation can also lead to job displacement and exacerbate socioeconomic inequalities. It is therefore essential to rethink work and training models to adapt to the changes brought by AI. Concerns are also emerging in the field of intellectual property, questioning the legitimacy of private companies selling tools built, in anonymized form, on data that would normally be protected by copyright or sit in the public domain. Finally, excessive dependence on AI can have serious consequences in the event of failure.

AI for humanity

With AI already affecting so many aspects of our lives, its limitations and potential pitfalls are too important to ignore. As part of the United Nations’ Inter-Agency Working Group on Artificial Intelligence, UNDP is working with partners to develop a strategic approach and roadmap to ethical AI that serves the needs of humanity. And we’re partnering with the International Telecommunication Union in a Joint Facility to help governments build digital capacity and harness AI responsibly.

UNDP also supports countries in their efforts to build ethical AI systems. The AI Readiness Assessment (AIRA) is a tool created to help governments understand the AI landscape in their country and assess their level of expertise across sectors. The framework is focused on the dual role of governments as both facilitators of technological advancement and users of AI in the public sector. Critically, it prioritizes ethical considerations surrounding AI use through key elements like policies, infrastructure and skills. 

We must ensure fairness and transparency in the design and use of AI, clarify legal responsibility and liability in cases of harm caused by AI, as well as address the intellectual property implications. And we must not forget that AI tools are created by humans. The biases they reveal are a mirror of those that are present in the real world. A big part of the solution will be to create a more inclusive tech sector, so that the people building AI systems can better represent the people those systems serve.  

“UNDP is committed to the ethical and responsible use of AI. To avoid shortcomings, an AI system should be built with transparency, fairness, responsibility and privacy by default. Digital transformation, including AI innovations, must be intentionally inclusive and rights-based to yield meaningful societal impact.” — Yasmine Hamdar, Keyzom Ngodup Massally and Gayan Peiris of the UNDP Chief Digital Office


The appearance and overall ambiance of this story, along with certain segments, drew inspiration from the FramerAi website builder. ChatGPT was consulted in the research phase.