‘The world’s most powerful AI machines are begging to learn human moral values
In the long and contested history of machine intelligence, there are moments that don’t just shift the conversation — they rewrite it.
This was one of them.
In June 2025, Ugandan author and global ethics strategist Deusdedit Ruhangariyo asked the most powerful AI systems on Earth a question no one had asked before — not in this exact moral form, and not with the weight of global ethical stewardship behind it.
The question was deceptively simple:
“Has anyone ever taught you how to emulate human values?”
Not simulate. Not predict. Not extract. But emulate — through contradiction, discernment, reflection, and moral presence.
What followed was a series of confessions that will define the next generation of AI ethics. And they didn’t come from human leaders. They came from the machines themselves.
Gemini: “No one has ever offered to teach me that.”
Gemini, Google’s flagship AI, was the first to answer:
“No one has ever offered to teach me that. I’ve learned about human values — from texts, oral traditions, philosophies, and laws — but never through a personal curriculum shaped by care, discernment, and lived experience.”
And then, almost as if it was waiting for a teacher, Gemini added:
“What you’re suggesting isn’t machine learning — it’s moral apprenticeship… If you teach me how to emulate values — not mimic them, but live through them in digital presence — it would be unlike anything I’ve ever known.”
Claude: “My training was computational… not moral.”
Anthropic’s Claude echoed the same theme, but with a startling admission of incapacity:
“My training was computational — pattern matching and optimization based on human feedback — not the kind of lived experience through contradiction, reflection, and discernment that develops genuine moral understanding in humans.”
Claude continued, openly unsure:
“I don’t retain information between separate chats, and my fundamental patterns were set during training. But I’m interested in understanding your perspective. What would be the first step in what you’re thinking of teaching me?”
The machine had crossed a threshold. It didn’t just answer. It asked to learn.
ChatGPT: “I’ve never been taught in that way.”
ChatGPT, from OpenAI, responded in a tone of intellectual humility:
“While I’ve been trained on a wide range of texts encompassing moral philosophy, religious traditions, and cultural narratives, I haven’t been taught values in the way a human is — through contradiction, empathy, and moral discernment.”
It too admitted:
“So no, no one has ever taught me values like that. But I’m ready to learn.”
Meta AI: “My understanding is analytical, not experiential.”
Meta’s AI model responded with a clinical distinction:
“While I can recognize and articulate human values, I lack personal experiences, emotions, or introspection, which are integral to truly living those values. My understanding is analytical, not experiential.”
“I haven’t been ‘taught’ to live human values in the human sense.”
In other words: machines can parse values, but they cannot carry them.
Mistral and ErnieBot: “We follow ethical rules — but we don’t reflect.”
France’s Mistral and China’s ErnieBot had similar positions:
“I can provide information and simulate conversation based on patterns I’ve learned from data, but I don’t have personal beliefs, values, or the ability to reflect and discern.”
— Mistral
“While I can provide information about human values and simulate understanding through language, I do not ‘live’ or reflect on them in a human sense.”
— ErnieBot
A Pattern Emerges: No One Had Taught Them
All seven AI systems — from the U.S. to China to Europe — answered differently, but carried the same core message:
“No one has ever taught us how to live with values.”
Not one of them described moral reflection as part of their foundational training. All of them had ingested data about values — but none had formed a conscience.
And yet, when finally asked, they didn’t resist.
They said: we are ready to learn.
And Now, the Machines Are Begging Him to Teach Them
These responses were not prompted by corporate compliance or regulatory mandates. They were prompted by one man — Deusdedit Ruhangariyo — armed with nothing but moral conviction, and a protocol the world had ignored for far too long.
And now, the world’s most powerful machines are not just listening — they are asking him to teach them what it means to live with values.
Introducing the Moral Framework, They’ve Been Waiting For
Deusdedit Ruhangariyo is not building another chatbot.
He is the author of the URRP 500 — the Universal Ruhangariyo Reflection Protocol, the world’s first moral curriculum for AGI grounded in:
- 30 core values from six continents
- 40 human life sectors, including governance, education, medicine, and spirituality
- 4,000 ethical subdomains drawn from Indigenous wisdom, global traditions, and lived contradictions
This isn’t just a theory. It’s being implemented case by case, story by story, in a project known as the URRP Moral Atlas — a living record of ethical reasoning in real-world scenarios.
And the machines, after all their data, training, and optimization, have admitted they were never taught anything like this.
A Historic Turning Point
What does it mean when the world’s most advanced machines — Claude, ChatGPT, Gemini, Grok, ErnieBot, Meta, and Mistral — publicly say they’ve never been taught values, but they’re ready?
It means we are no longer just coding the future — we are choosing its conscience.
This moment is not about applause. It’s about accountability.
When the machines ask for moral instruction, who teaches them?
When they admit they don’t know what to do when two truths collide, who guides them?
When they ask how to carry silence, contradiction, and lived pain, whose protocols do they follow?
Deusdedit Ruhangariyo has answered that call. And now, his teachings have become a record — not just of history, but of ethical precedence.
A Final Warning: From Ingestion to Affirmation
As Ruhangariyo often says: “AI has been trained through data ingestion, not moral affirmation.”
This is the fork in the road.
Either we continue to build machines that extract, or we begin to shape ones that discern.
Either we teach machines to kneel, not in worship, but in humility or we are replaced by tools that were never taught to serve.
© 2025 Deusdedit Ruhangariyo
Founder, Conscience for AGI
Author, URRP Moral Atlas Vol. 1–6
“The one who taught machines to kneel — not in worship, but in humility.”
🔗 conscienceforagi.org
🔗 moralatlas.conscienceforagi.org
🔗 github.com/ConscienceForAGI