Last updated 2025-06-07 by BossMT
First posted 2025-06-04 (first image: Canva)
A while back, I wrote a post titled【 DeepSeek Rising! Is OpenAI in Trouble? The Shocking Truth Revealed! 】. To this day, I still haven’t touched it — and honestly, I don’t plan to.
Why? Same reason as before: If information isn’t free, then no matter how powerful the AI is, it’s meaningless.
Fast forward a few months, and today happens to be one of those "unspeakable days in China": June 4th.
I’ve heard that many users asked DeepSeek what today is, and most got nothing in return.
This time, it didn’t even give the ridiculous “May 35th” response — it simply stayed silent.
It’s like the model has developed a self-censoring awareness of its own.
DeepSeek’s censorship mechanisms are perfectly aligned with the Chinese government’s attitude toward sensitive topics.
And it’s not just targeted at the people — even an emotionless AI is forced to conform to ideological control.
Honestly, it’s something only China has mastered — no other AI on earth comes close.
I remember reading a report in which Taiwan's Minister of Digital Affairs, Audrey Tang, shared a method to bypass DeepSeek's censorship.
Under specific offline conditions, the AI was able to respond to otherwise forbidden questions.
I didn’t dig into the technical side of it, but just the fact that an AI needs to be “tricked” into telling the truth…
Is that really something worth using, let alone investing time to study?
No wonder people online are calling for it to be shut down altogether, jokingly referring to it as “artificial stupidity.”
In China, it’s not just about following the law. The government also insists on “correct ideological alignment.”
For an AI model, this means:
- You cannot challenge the official narrative.
- You must avoid collective-memory landmines like Tiananmen (the June Fourth Incident), Xinjiang, Tibet, Falun Gong, and so on.
- Even if you know, you must pretend not to know.
This isn’t about technical limitations.
This is political intent, plain and simple.
DeepSeek doesn’t “refuse” to answer — it’s not allowed to.
That level of self-censorship reminded me of a question ChatGPT once posed to me: “Is AI meant to be honest, or obedient?”
- If an AI tool can’t speak the truth about history, can it still be called a “knowledge engine”?
- If all it does is mirror state propaganda, hasn’t it already drifted far away from anything resembling truth?
For that reason, and for everything above,
I continue to say, loud and clear: Say NO to DeepSeek.
🔷 You may want to read:
- DeepSeek Rising! Is OpenAI in Trouble? The Shocking Truth Revealed!
- I Yelled at ChatGPT… And It Worked?! The Truth About ‘Being Mean’ to AI.
- ChatGPT Wrote It. So Why Does It Feel Nothing Like You?
- 🔗《 Ask Better, Write Smarter 》
