moderndiplomacy.eu

DeepSeek’s Censorship Controversy: A Global Shake-Up in AI Development

In January 2025, the release of DeepSeek, a Chinese-developed AI chatbot, sent shockwaves through the global tech community, especially in the U.S. DeepSeek, developed by Chinese entrepreneur Liang Wenfeng, has surpassed some of the most advanced AI models in the world, including OpenAI’s ChatGPT and Google’s Gemini Advanced. The success of DeepSeek, particularly its stellar performance on math and reasoning benchmarks, has unsettled the American tech industry and stock markets. Amidst all the praise for its technical prowess, DeepSeek’s controversial censorship practices have raised serious concerns about the AI’s ability to provide impartial and open responses.

**A Breakthrough in AI Performance**

When DeepSeek R1 was launched in January 2025, it was immediately praised as a game-changer in AI development. Unlike its American equivalents, which required billions of dollars in investment, DeepSeek was developed with just $5.6 million in funding.[1] With only 200 employees, DeepSeek was able to challenge giants like OpenAI and Google by providing an AI model that was faster, more efficient, and far less costly to operate. Its debut caused a significant stir, with the app quickly becoming the most downloaded on platforms like the App Store and Google Play, even eclipsing ChatGPT in popularity.

This massive success stunned the stock market, wiping out $1 trillion from the U.S. tech index. Notably, the market capitalization of NVIDIA, a key player in producing computer chips for AI models, fell by $589 billion in a single day—a historic loss for the company.[2]

**The Innovator Behind DeepSeek**

Liang Wenfeng, who founded High Flyer in 2015 and later High Flyer AI in 2019, originally had no intention of creating a product for commercial gain. His goal was purely scientific: to create an AI model that outperformed all existing models.[3] Unlike other AI developers who employ large engineering teams, Wenfeng’s approach was unconventional. He relied on PhD students from top Chinese universities.

DeepSeek’s Chain of Thought approach, which mimics human reasoning, has been a significant factor in its success. This method allows the AI to “think” before answering, refining its responses through step-by-step logical reasoning. The AI also excels in showing users its thought process, offering insights into the steps it takes to arrive at an answer—something other AI models do not provide in as much detail.

However, despite its technological advancements, DeepSeek’s ability to give accurate and unbiased answers has been compromised due to its censorship practices, which have become a point of intense criticism.

**Censorship Concerns: A Dark Side to DeepSeek**

One of the most controversial aspects of DeepSeek is its censorship of politically sensitive topics, particularly those related to China. Users have reported that when questions about the Chinese government or politically sensitive issues such as the 1989 Tiananmen Square protests or Taiwan’s independence are posed, DeepSeek responds with generic and evasive answers, often apologizing and redirecting the conversation to safer topics like math or logic. This is in stark contrast to how the AI responds to inquiries about leaders or issues outside of China, where it offers detailed responses.

[1] Bailey, Liam. “DeepSeek and Data Centres: Revisiting the Investment Case.” Knight Frank, The Intelligence Lab, Global Property Market Insight.

[2] Friesen, Garth. “What DeepSeek’s AI Innovation Means for Investors and Big Tech.”

[3] Tunik, Tamar. “The Mastermind Behind DeepSeek: Who Is Liang Wenfeng?” January 28, 2025.

This censorship is not a coincidence. Chinese AI models, including DeepSeek, are subjected to rigorous tests by China’s Cyberspace Administration to ensure that they provide “safe” responses to politically controversial topics. For DeepSeek, this means that any criticism of Chinese leadership or politically sensitive events is carefully avoided. This form of censorship appears to be part of an effort to prevent the spread of information or viewpoints that could challenge China’s narrative on the international stage. The strategic avoidance of politically sensitive topics raises questions about the platform’s credibility and whether it can truly serve as an unbiased source of information. It may also have a chilling effect on users seeking unfettered discussions of global events and history. Delays in answering questions, or outright suspension of the ability to ask them, further underscore the platform’s tight control over the information users can access.

The issue deepens with the revelation that the AI’s code is open-source. This means that anyone with the technical ability can download the model and modify it. While this offers a level of transparency and freedom that many American AI companies, such as OpenAI, have been criticized for lacking, it also exposes DeepSeek to further manipulation, potentially allowing for even more extensive censorship or bias to be introduced into the model by outside parties.

**Global Impact and the Response from Western Companies**

Despite the concerns surrounding censorship, DeepSeek’s technical capabilities have impressed many, leading to its adoption by major Western tech companies. Microsoft, for instance, has integrated DeepSeek’s R1 model into its Azure platform, making it available for use in applications like Copilot. This openness has drawn attention to the contrast between DeepSeek’s open-source model and the more closed systems of companies like OpenAI, which have faced criticism for not keeping their AI models fully open and accessible to the public.

In fact, many users have started to mock American companies like OpenAI for their lack of transparency, as DeepSeek’s open-source nature makes it more accessible and adaptable to a global audience. Some argue that despite its Chinese origins, DeepSeek’s more open approach to AI development is a step forward compared to American companies that have failed to live up to their promises of public service.

**The Ethical Dilemma: Should We Trust DeepSeek?**

The biggest question arising from the DeepSeek controversy is whether users should trust an AI model that is fundamentally shaped by political censorship. While the technology behind DeepSeek is unquestionably impressive, its ability to provide accurate and unbiased information is compromised by its built-in censorship filters. This raises concerns about the ethical implications of using an AI that can selectively withhold or manipulate information based on political agendas.

Some critics argue that the global AI race is now not just about technology but also about ethics, with different countries imposing their values and biases on AI systems. The increasing reliance on AI models like DeepSeek, which are subject to government influence, highlights the need for more transparent, independent, and unbiased systems in the future.

**The Future of AI: A Global Challenge**

The rise of DeepSeek signals the beginning of a new era in AI development where innovation is no longer solely driven by Western companies. As AI becomes an increasingly integral part of everyday life, the global implications of censorship and transparency will be central to the ongoing development of these technologies. Countries like India are now being urged to invest in their own AI models, which could offer a more balanced and open alternative to models shaped by political and corporate interests.

The DeepSeek controversy underlines the importance of ethical considerations in AI development. While its performance and innovation are impressive, the question remains whether it can be trusted as a truly unbiased source of information or if it is simply another tool in a larger geopolitical game.
