Scholarship on Artificial Intelligence writ large has moved from laudatory excitement to vast critique. Recent work has demonstrated the racist and sexist biases embedded within algorithmic systems (Benjamin, 2019; Browne, 2015; Noble, 2018). More recently, scholars have situated AI within longer histories of colonial exploitation and extraction (Couldry and Mejias, 2019). Others have argued for postcolonial or decolonial AI, which is “about interrogating who is doing computing, where they are doing it, and, thereby, what computing means both epistemologically (that is, in relation to knowing) and ontologically (that is, in relation to being)” (Ali, 2016, 20). Geographer Louise Amoore likewise defines AI not as the objective, all-encompassing thinking machine its proponents claim, but as an always already partial aperture. This method of doing ethics is not about claiming transparency, but about acknowledging the ways in which ethics, for humans and algorithms, is always emplaced and partial (Amoore, 2020).
In response, international development agencies such as Canada’s International Development Research Centre (IDRC) and the UN are taking critiques of AI seriously. In 2021 UNESCO adopted the Recommendation on the Ethics of Artificial Intelligence, which advocates for countrywide policies and compliance as the way to produce Ethical AI. UNESCO also produced its Readiness Assessment Methodology (RAM), a tool meant to enable countries to assess their progress towards Ethical AI. These activities illuminate an emerging attempt to create internationally legible guidelines. However, the resulting recommendations almost always posit that Responsible AI is achievable mainly through national policies and their enforcement. Both the Recommendation and the RAM prescribe that nations create regulatory frameworks. For example, the Recommendation states, “specifically, Member States, international organizations and other relevant bodies should develop international standards that describe measurable, testable levels of safety and transparency, so that systems can be objectively assessed and levels of compliance determined” (“Recommendation on the Ethics of Artificial Intelligence,” 2021, 2). The RAM offers a series of questions about whether a country has developed policies for AI in sectors ranging from the infrastructural to the economic to the legal. With these compliance tools in place, international development organizations argue that developing countries can use AI tools to achieve the UN Sustainable Development Goals (SDGs).

Image of bundles of paper. Image by Sear Greyson on Unsplash
Shaping Responsible AI
One outgrowth of this push is the AI for Development in Africa program. The program was founded by the IDRC and the Swedish International Development Cooperation Agency (Sida) and has since received funding from the Gates Foundation and the UK’s Foreign, Commonwealth and Development Office (FCDO). Phases I and II have pledged CAD $120 million. The program hopes to address development challenges by “supporting African-led development of responsible and inclusive AI” (Artificial Intelligence for Development Africa, 2024). For these international actors, mitigating the harms of AI lies in a triumph of governance. However, in Cloud Ethics, Amoore argues that AI is not “transgressing settled societal norms”; rather, AI is inserted into ongoing ethicopolitical and social debates (Amoore, 2020, 6). Relying heavily on policies and compliance regimes can therefore obfuscate the ongoing societal discussions and histories that animate Responsible AI in action. Using the example of the Plant Disease Detection Tool, which is being developed at the Responsible Artificial Intelligence Lab (RAIL) at the Kwame Nkrumah University of Science and Technology in Kumasi, Ghana, I illuminate how researchers on the ground are negotiating inclusion and exclusion as well as dealing with the afterlives of colonialism.
Founded in 2019, RAIL is funded by the AI for Development in Africa program as well as Germany’s development agency GIZ. The lab’s principal investigator (PI), Jerry Kponyo, states that its aim is “to deepen the understanding of how to develop and apply responsible AI tools for the advancement of computer, biomedical, agricultural, and ecological sciences; strengthen the national and international collaboration of public universities with the private sector; and strengthen capacities in the responsible utilisation [sic] of AI in support of most vulnerable communities in Ghana, Senegal, Cape Verde, Gambia, and the sub-region” (“RAIL-KNUST Launches Its Inception Workshop,” 2022).
Colonial Afterlives
In January 2024, I was sitting in a large and bright conference room at RAIL. Researchers from the lab were presenting their progress to the PI and one another. Although some projects had a team, one person was chosen as their spokesperson. A researcher in his early twenties wearing a t-shirt and loose pants walked to the large screen at the front of the room and began his presentation on the Plant Disease Detection Tool. It was my first update meeting at the lab, so it is possible that he began with a narrative for my own edification. He asked us to imagine a farmer somewhere in Ghana noticing a disease sweeping across and destroying his crops. The farmer contacts the Ministry of Food and Agriculture (MOFA), which comes and examines the plants, taking samples back to its labs to be tested. However, despite the ministry’s diligence, by the time it has a definitive answer about the disease and how to possibly combat it, the farmer’s crops have largely succumbed and failed. Sometimes this delay leads farmers to abandon their farms. An AI-powered Plant Disease Detection Tool would therefore enable farmers to get faster answers when their plants are under attack and potentially save their remaining crop. The researcher explained that the tool will have multiple components: 1) enable farmers to take pictures of diseased crops and quickly get a diagnosis; 2) make those diagnoses available in four different Ghanaian languages; 3) provide the option for those diagnoses to be spoken aloud for farmers who cannot read. His presentation was full of pictures of fieldwork: other researchers working on the project, the PI Jerry, MOFA workers, and farmers. We applauded, and the next researcher walked to the front to begin a different presentation.
The objectives of the Plant Disease Detection Tool are to quickly identify plant diseases, minimize crop losses, increase coordination with MOFA, develop a model that can cover more diseases, and create an accessible app for farmers. RAIL researchers partnered with MOFA for their initial fieldwork, where they met with farmers, noted the most common diseases, and photographed crops with those diseases. Although seemingly straightforward, the project requires researchers to make decisions about inclusion and exclusion that illuminate the afterlives of colonialism. For example, there is a divide between the north and south of Ghana, where the south is seen as more developed, with more access and amenities than the north. This schism was identified as an “inheritance of colonial rule” in a 1962 development plan for the country (PRAAD-Accra, 1962, 3). During an interview, one researcher working on the project said, “we talked to people at the agricultural ministry, and one of them was from the northern part of Ghana and he was reluctant, he wasn’t too happy about the work we were doing because he felt we would exclude the northern region again.” This issue came to the fore because Ghana has over 80 recognized indigenous languages, and especially in the north there are linguistic islands where communities living within 5-10 miles of one another find each other unintelligible. When building the tool, researchers at this stage had to decide which languages would serve the greatest number of people. They also had to consider which languages had large language models (LLMs) available for use. Researchers settled on Hausa, Ewe, Ashanti Twi, and Akuapem Twi, meaning they did not choose any languages specific to the north. In the same interview, the researcher mentioned that one reason they chose Hausa was that another researcher on the team recommended it.
In a mainly Christian nation, Hausa speakers tend to be Muslim. Thus, even as researchers made one point of exclusion by choosing languages mainly spoken in the south, they were also including a minoritized group.
Researchers on this team also had to negotiate which diseases to focus on. Plants can be affected by a wide range of diseases, and in an interview, a researcher told me, “When we went into the field we realized that most crops had mosaic and blight, but blight was not represented in the dataset and mosaic was not well represented in the data. Then I think that the particular crops we wanted were not there.” After speaking to farmers and MOFA, the researchers focused on blight and mosaic diseases and on the crops seen as most important, such as maize, tomatoes, peppers, and beans. Technology for development, with AI as its latest iteration, has often been criticized for its one-size-fits-all approach. Proponents of AI tout its supposed universal applicability, but as scholars of decolonial computing argue, AI is always specific to place (Chan, 2014; Mohamed et al., 2020; Prieto-Nanez, 2016).
The global conversation surrounding AI has shifted from one of technological salvation to a more nuanced discussion of how AI is defined and deployed. In response, international funders like the UN and IDRC have developed recommendations for how to do AI responsibly or ethically. However, these recommendations focus on national policy compliance, which rarely captures the reality of Responsible AI in action. Researchers at RAIL are acting in good faith, and their research requires them to negotiate and make choices that result in both inclusion and exclusion. They are also making choices that have been structured by colonial legacies.
References
Ali, S. M. (2016). A brief introduction to decolonial computing. XRDS: Crossroads, The ACM Magazine for Students, 22(4), 16-21.
Amoore, L. (2020). Cloud ethics: Algorithms and the attributes of ourselves and others. Duke University Press.
Artificial Intelligence for Development Africa. (2024, May 2). https://web.archive.org/web/20240502012235/https://africa.ai4d.ai/about-ai4d/
Benjamin, R. (2019). Race after technology: Abolitionist tools for the new Jim Code. Polity Press.
Browne, S. (2015). Dark matters: On the surveillance of blackness. Duke University Press.
Chan, A. S. (2014). Networking peripheries: Technological futures and the myth of digital universalism. MIT Press.
Couldry, N., & Mejias, U. A. (2019). The costs of connection: How data are colonizing human life and appropriating it for capitalism. Stanford University Press.
Mohamed, S., Png, M.-T., & Isaac, W. (2020). Decolonial AI: Decolonial theory as sociotechnical foresight in artificial intelligence. Philosophy & Technology, 33(4), 659-684.
Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. New York University Press.
PRAAD-Accra. (1962). Drawing up of the Seven Year Development Plan (RG.3/6/1159).
Prieto-Nanez, F. (2016). Postcolonial histories of computing. IEEE Annals of the History of Computing, 38(2), 2-4.
RAIL-KNUST launches its inception workshop. (2022, May 13). https://rail.knust.edu.gh/2022/05/13/rail-knust-launches-its-inception-workshop/
United Nations Educational, Scientific and Cultural Organization. (2021, November 23). Recommendation on the Ethics of Artificial Intelligence. https://unesdoc.unesco.org/ark:/48223/pf0000381137
United Nations Educational, Scientific and Cultural Organization. (2023). Readiness Assessment Methodology: A tool of the Recommendation on the Ethics of Artificial Intelligence. https://doi.org/10.54678/YHAA4429