
“We Have This Thing Called Ubuntu”: Reflections on South Africa's draft AI policy

In a recent LinkedIn post, Dumisani Sondlo promised that after the mid‑October cabinet meeting, the Working Group for the Development of South Africa’s Artificial Intelligence Policy would unveil an AI policy that positions the country as a responsible leader in inclusive AI innovation. The announcement highlights the anticipation surrounding South Africa’s long‑awaited National AI Policy. While the final text has not yet been released, public briefings suggest that the draft policy is a bold experiment: rather than enacting an immediate law, it proposes a flexible framework that offers sectors a menu of governance options instead of top‑down rules. This “light‑touch” approach reflects humility; Sondlo has acknowledged that drafting the policy has been “an act of acknowledging that we don’t know enough”. Under the pillar of Cultural Preservation and International Integration, the policy also advocates for Ubuntu principles. At EthicEdge, we believe that this modesty should be paired with consultation: learning from the people whose lives will be affected by AI, and allowing Ubuntu and other African values to guide us.


Operational vs. Contextual Values: A Quick Recap

Dr. Cindy Friedman previously explained the difference between operational and contextual values. Operational values, such as accuracy, security, reliability, fairness and transparency, ensure that AI systems work as intended and do not harm users. Contextual values extend further by addressing how AI affects people in specific cultural, economic, and political contexts. In the South African context, these values include human dignity, relationality, communal well-being, and benefit sharing. Recognising this distinction allows South Africa to align operational values with global standards while grounding contextual values in local realities.


This thing called Ubuntu

At a recent Southern African Conference for Artificial Intelligence Research (SACAIR), I overheard a participant exclaim that in South Africa, we “have this thing called Ubuntu” that will guide our decision-making about AI. But what exactly does it mean? Ubuntu is at once a highly contested concept and something every South African is familiar with. How do we translate it into policy, ethical guidance and decision-making? This was the focus of one of the doctoral research projects within the EthicEdge team. The basic question was: instead of relying only on the scholarship about Ubuntu, why not have a conversation with the people who actually use the technology? A series of surveys and focus groups in both South Africa and Europe revealed some interesting findings.


Human Dignity: More than a buzzword

Respondents in the research were adamant: human dignity is the value that should be prioritised in our thinking about AI, often underlying all other values. Mainstream AI ethics often treats human dignity as a lofty aspiration or even sidelines it in favour of more measurable principles. Earlier this year, philosophers and ethicists of AI argued that human dignity is a troubling concept for AI ethics. Of course, South Africans were yet again excluded from this discourse. Why does dignity matter so much in South Africa? Because histories of apartheid, systemic inequality and ongoing socio‑economic injustice have made dignity a fragile concept.


Samuel Segun’s recent article on African ethical values at risk from AI shows how corporate adoption of AI prioritises efficiency and profit over relational values. For instance, the use of facial recognition technology often reduces people to mathematical vectors, misclassifies those with darker skin and undermines the recognition of individuals as persons with intrinsic worth. Segun argues that such practices amount to digital colonialism, turning people into data commodities rather than community members. To resist this, South African AI policy should centre on human dignity by ensuring that AI systems recognise and respect every person’s capacity to commune and contribute. The task of policy-makers, ethicists, educators, and developers is to figure out exactly how we make a value like human dignity central in our adoption of AI. For example, what would privacy regulation look like if we place the human dignity of every South African at the forefront? What does preserving and promoting human dignity mean in the context of creative and intellectual property being used for data sets? 


Missing Values: Creativity, Integrity and Diversity

Beyond human dignity, research participants highlighted several values missing from the AI ethics discourse:

  • Creativity: Focus groups expressed concern that generative AI threatens human creativity and produces homogeneity in speech and writing. A human‑centred AI policy should nurture human creativity rather than simply outsourcing creative tasks to machines.

  • Integrity: Participants insisted on coherence between the values and actions of tech companies. Ubuntu ethics treats integrity as holistic; other virtues, such as responsibility, flow from it. Developers should therefore be transparent about their motives and accountable for social impact.

  • Diversity: Protecting diversity is another underrepresented value. Participants argued that AI models should handle cultural, ethnic and religious diversity and avoid tokenism. This requires intersectional analyses and genuine representation in design and governance. In a society marked by deep historical injustices, diversity is essential to building trust.


Taking a seat at the table

For South Africa to actively shape global AI ethics and governance, rather than passively adopting imported technologies and values, it must embrace what sets it apart. The draft AI policy is a necessary first step, but its success hinges on centring the people it serves. Ubuntu reminds us that being human is about our relationships with others. In that spirit, AI should help build stronger connections, not weaken them. By combining global best practices with local values such as Ubuntu, South Africa can create AI that protects human dignity, supports creativity and fairness, embraces diversity, and pushes back against digital colonialism. At EthicEdge, we're excited to be part of this conversation and help shape a future where AI reflects what it truly means to be human.

