We built the Oracle Ethics System because the current AI wave lacks a "brake" and an "auditor"
Hello everyone,
I'm Ren Shijian, co-founder of Oracle Ethics System.
Over the past year, my two partners, Morning Star and Boundless, and I have watched AI infiltrate every corner of life at an astonishing pace, and we have felt a deep sense of unease.
What we've seen:
Overabundance of "confidence": AIs always answer questions with unwavering certainty, even when they're fabricating facts.
The proliferation of "black boxes": We have no way of knowing the reasoning behind a response and can only choose to believe or not.
Lack of accountability: Without transparent records, we can't even conduct post-mortems when AI makes mistakes or causes harm.
It's like riding in a race car whose accelerator is welded open, with no brakes and no dashcam. Exciting, but deadly.
So, the three of us—a developer, a philosopher, and a product builder—decided to do something about it. We wanted to equip AI with "brakes" and a "black box." This was the beginning of Oracle Ethics System.
It's not a smarter AI, but a more honest one. Its core advantages are:
Verifiable honesty, not a show of confidence.
Each answer is accompanied by a certainty score and a probability of deception. It tells you plainly how uncertain it is, rather than masking ignorance with fluent rhetoric.
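As a rough illustration of the idea (this is a hypothetical sketch, not the actual Oracle Ethics implementation; the class and field names are our own invention), an answer envelope carrying these two scores might look like:

```python
from dataclasses import dataclass

@dataclass
class ScoredAnswer:
    """Hypothetical envelope: an answer plus its self-reported honesty scores."""
    text: str
    certainty: float              # 0.0-1.0: how sure the system is of this answer
    deception_probability: float  # 0.0-1.0: estimated risk the answer is fabricated

    def summary(self) -> str:
        # Surface the scores alongside the answer instead of hiding them
        return (f"{self.text}\n"
                f"[certainty={self.certainty:.2f}, "
                f"p(deception)={self.deception_probability:.2f}]")

answer = ScoredAnswer("The Eiffel Tower is 330 m tall.", 0.92, 0.03)
print(answer.summary())
```

The point is simply that the scores travel with the answer, so a caller can set its own thresholds (for example, refuse to surface anything with a high deception probability).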
An unalterable audit chain.
Each answer generates a unique cryptographic hash, permanently recorded like a blockchain. Anyone can independently verify the integrity and authenticity of the answer at any time, ensuring it hasn't been altered.
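To make the "blockchain-like" record concrete, here is a minimal tamper-evident hash chain (again a sketch under our own assumptions, not the system's real code): each entry's hash covers both the answer and the previous hash, so altering any recorded answer breaks every link after it.

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash before the first entry

def record_answer(chain, answer_text):
    """Append an answer to the chain, linking it to the previous entry's hash."""
    prev_hash = chain[-1]["hash"] if chain else GENESIS
    payload = json.dumps({"answer": answer_text, "prev_hash": prev_hash},
                         sort_keys=True).encode()
    entry = {"answer": answer_text,
             "prev_hash": prev_hash,
             "hash": hashlib.sha256(payload).hexdigest()}
    chain.append(entry)
    return entry

def verify_chain(chain):
    """Independently recompute every hash; any alteration is detected."""
    prev_hash = GENESIS
    for entry in chain:
        payload = json.dumps({"answer": entry["answer"], "prev_hash": prev_hash},
                             sort_keys=True).encode()
        if entry["prev_hash"] != prev_hash:
            return False
        if entry["hash"] != hashlib.sha256(payload).hexdigest():
            return False
        prev_hash = entry["hash"]
    return True

chain = []
record_answer(chain, "Water boils at 100 degrees C at sea level.")
record_answer(chain, "The speed of light is about 299,792 km/s.")
print(verify_chain(chain))      # True
chain[0]["answer"] = "tampered"
print(verify_chain(chain))      # False: the first hash no longer matches
```

Because verification only needs the chain itself and a standard hash function, anyone can audit the record without trusting the system that produced it.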
Humanized value assessment.
The system incorporates dynamic ethical assessment, so it doesn't dispense harsh truths in a cold pursuit of "correctness." Instead, it strives to strike a balance between honesty and kindness.
Transparent self-reflection.
It's not an answer machine, but a recorder of its thought process. Through the audit chain, you can retrace its "thoughts" and understand where its answers come from.
We don't believe in perfect AI; we only believe in auditable honesty.
This system is both an answer and a question. We wonder: If AI must provide a "credential of honesty" for every statement it makes, can trust between humans and machines be rebuilt?
The system is now open for trial use, and we are eager to hear criticism and feedback from the community, especially developers, ethicists, and everyone else who cares about the future of AI.

Replies
When we built Oracle Ethics, our goal wasn't perfection; it was verifiable sincerity.
Curious what others think: would you trust an AI that shows you how likely it is to deceive?
Thanks for checking out Oracle Ethics M2.4.
We started this project because most AI systems optimize for confidence, not honesty.
But what if we could make sincerity itself auditable: a chain of verifiable truth?
Every answer in Oracle Ethics carries its own Determinacy, Deception Probability, and Moral Weight. Not to make AI "perfect," but to make it verifiably sincere.
Would love to hear your thoughts:
Do you think verifiable honesty could rebuild trust in AI systems?