DeepSeek: 2-minute briefing
DeepSeek has shaken financial markets, but what does it mean for your business?
Over the past three weeks, Chinese AI startup DeepSeek has upended the global tech industry with its new model, DeepSeek-R1. Launched in late January, it rivals leading U.S. models at a fraction of the cost. The shockwaves were immediate: U.S. tech stocks tumbled, with Nvidia suffering the largest single-day loss of market value on record, close to $600 billion. The model's success has intensified concerns over U.S. tech dominance and national security, while scientists worldwide are racing to test DeepSeek's open-source capabilities. In response, Texas has banned DeepSeek from government devices, while other regions have issued advisories or are weighing similar action over security risks.
Why this matters
DeepSeek's breakthrough appears to challenge three assumptions that have shaped AI strategy:
Only tech giants can build competitive AI models
Hundreds of millions of dollars in capital are required
Closed, proprietary systems are superior to open-source approaches
DeepSeek's success, built on its own openly released models (with distilled versions of R1 based on Meta's open-source Llama), suggests we're entering a new phase where AI development could become dramatically more accessible and competitive.
What DeepSeek actually does
At its core, DeepSeek is a chatbot that rivals ChatGPT, Claude, and Gemini. It offers two modes:
Standard chat for everyday queries
"DeepThink" mode that shows its step-by-step reasoning process
The platform is accessible via web browser or iPhone app, though its rapid rise to #1 in many countries' App Stores has raised both excitement and security concerns.
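For technical teams curious about what the two modes look like in practice, here is a minimal sketch using DeepSeek's OpenAI-compatible API. Treat the details as assumptions to verify against DeepSeek's current documentation (the endpoint URL, the model names deepseek-chat and deepseek-reasoner, and the reasoning_content field); the API key and prompts are placeholders.

```python
# Minimal sketch: DeepSeek's two modes via its OpenAI-compatible API.
# Assumptions to verify against DeepSeek's current docs: endpoint URL,
# model names "deepseek-chat" / "deepseek-reasoner", and the
# reasoning_content field. API key and prompts are placeholders.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_DEEPSEEK_API_KEY",      # placeholder
    base_url="https://api.deepseek.com",  # OpenAI-compatible endpoint
)

# Standard chat: a direct answer, much like any other chatbot API.
chat = client.chat.completions.create(
    model="deepseek-chat",
    messages=[{"role": "user", "content": "Summarise the main supply-chain risks for a mid-size retailer."}],
)
print(chat.choices[0].message.content)

# Reasoning ("DeepThink"-style) mode: the response also carries a visible
# step-by-step reasoning trace alongside the final answer.
reasoned = client.chat.completions.create(
    model="deepseek-reasoner",
    messages=[{"role": "user", "content": "Should a pilot project prioritise cost or speed, and why?"}],
)
print(reasoned.choices[0].message.reasoning_content)  # the reasoning trace
print(reasoned.choices[0].message.content)            # the final answer
```

The practical point for non-technical readers: switching between models is a one-line change for developers, which is one reason a new entrant can gain traction so quickly.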
The open-source advantage
DeepSeek's approach lends weight to the case long made by Meta's chief AI scientist, Yann LeCun, for open-source AI development. By building on openly published research and releasing its own weights, including distilled versions based on Meta's freely available Llama, DeepSeek achieved in months what traditionally required years and massive investment.
This success challenges the ‘closed’ proprietary models of OpenAI, Google, and Anthropic. More importantly, it suggests future AI breakthroughs might come from unexpected players using openly available tools.
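To make "openly available" concrete: the distilled R1 weights DeepSeek published can be downloaded and run locally with standard open-source tooling. A minimal sketch follows, assuming the Hugging Face transformers library (plus accelerate for device placement) and the publicly listed deepseek-ai/DeepSeek-R1-Distill-Llama-8B checkpoint; check the model ID, licence, and hardware requirements before relying on it.

```python
# Minimal sketch: running one of DeepSeek's openly released distilled
# models locally. Assumes Hugging Face `transformers` (plus `accelerate`
# for device_map) and the publicly listed
# deepseek-ai/DeepSeek-R1-Distill-Llama-8B checkpoint; verify model ID,
# licence, and hardware requirements before relying on this.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="deepseek-ai/DeepSeek-R1-Distill-Llama-8B",
    device_map="auto",  # place the model on whatever GPU/CPU is available
)

messages = [{"role": "user", "content": "In two sentences, what is an open-weight AI model?"}]
result = generator(messages, max_new_tokens=256)

# The pipeline returns the full chat; the last message is the model's reply.
print(result[0]["generated_text"][-1]["content"])
```

The point is not that your leadership team should run models locally; it is that anyone with a modest GPU can, which is exactly what makes the open-source route hard for closed providers to contain.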
Security and risk assessment
Should organisations switch to DeepSeek? Not yet. Here's why:
Security concerns
Limited transparency about data handling; DeepSeek's own privacy policy states that user data is stored on servers in China
Potential surveillance capabilities (though no evidence yet)
Chinese ownership raises data sovereignty questions
Bias considerations
Reports from AI transparency watchdogs and independent tests by academic researchers show clear bias on sensitive topics. Ask about Taiwan or Tiananmen Square, for example, and you'll get a heavily one-sided response or the question will be dodged altogether (with a kind of charming "let's talk about something else").
However, it’s worth noting that all AI models reflect their training data and cultural context.
Action items for leaders
Don't ban it, but don't endorse it either
Acknowledge employees might already be using various LLMs on their personal devices
Establish clear guidelines about using unauthorised AI tools with company data
Set up a testing protocol for AI models
Task innovation teams to test and evaluate new AI models
Use dedicated test devices without access to sensitive networks
Document findings to inform future policy
Strategic planning
Review assumptions about AI development costs and timeline
Consider implications for your organisation's AI strategy
The bigger picture
DeepSeek's emergence reminds us that in AI, today's certainties can become tomorrow's outdated assumptions.
For leaders, the key lesson isn't just about DeepSeek itself – it's about maintaining strategic flexibility in a landscape where disruption can come from anywhere, at any time.
It's like early computing, when it seemed unfathomable that anyone other than IBM could lead the industry. Until along came Microsoft. Then Intel. Then Apple.
For now, the priority isn't learning to use any particular tool. It's understanding how AI works and where it can help you, while accepting that the tools themselves will keep changing.