The Story Behind How DeepSeek Became the #1 AI App with 10M Downloads, Challenging OpenAI & Nvidia
DeepSeek AI Disruption: What OpenAI Didn't See Coming

Welcome to The Growth Getter, a monthly newsletter with curated insights that will challenge your thinking as an aspiring creator, marketer, or tech enthusiast.
Read time: 8-10 mins
Today at a Glance:
Storytime: DeepSeek shows us why you don't need to reinvent the wheel to be a rockstar
Case Study: DeepSeek's Growth Playbook
Self-development: 3 career lessons I learned from DeepSeek in navigating this AI hype cycle.
Hint: Don't fall for the gold rush. Keep swimming.
Storytime: You don't need to reinvent the wheel to be a rockstar
DeepSeek this, DeepSeek that
You see, there are two kinds of AI models:
(1) AI models trained on web-scraped data -> OpenAI's GPT, Meta's Llama, Google's Gemini
(2) AI models trained on (AI models trained on web-scraped data) -> DeepSeek
Lesson: You don't need to reinvent the wheel to be a rockstar.
DeepSeek: the Chinese AI model that is currently #1 on the App Store
Last month, my LinkedIn feed exploded with headlines about DeepSeek, and my tech stock portfolio was all in the red.
DeepSeek climbed to #1 on the App Store, surpassing OpenAI, with a total of 10M app downloads, 16% of which came from the US.
I mean, SF engineers are obsessed with a Chinese AI model.
Tell me something wilder.
WTF is DeepSeek?

DeepSeek is a Chinese AI company that spun out of a hedge fund (a team of ~200 people, as of Jan 2025).
Its release of the R1 large language model on January 20th was a breakthrough: the model performs on par with OpenAI's o1 at roughly 1/10 of the cost.
What's super cool about R1 isn't merely that it matches OpenAI's o1 in quality; it's that it is ~90% cheaper and nearly twice as fast. Speed counts.
Here is a one-pager of DeepSeek (Source: ByteByteGo)
DeepSeek's competitive edge: an ~11x reduction in compute versus OpenAI, Meta, and Anthropic at the same performance level (a reported training cost of only $5.6M).
I am not an AI nerd, so I will keep this high-level, from a growth and marketing POV.
In a saturated market of LLMs, three fundamental building blocks made DeepSeek a breakthrough.
Block #1: Cheaper tokens = cost breakthrough
WTF is a "token"?
A token refers to a unit of text used by AI language models to process and generate language.
The sentence "AI is amazing!" might be split into four tokens: ["AI", "is", "amazing", "!"]
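To make that concrete, here is a minimal sketch using the open-source tiktoken tokenizer (my choice for illustration; DeepSeek and every other model family ship their own tokenizers, so the exact splits and IDs will differ):

```python
# Tokenize a sentence with tiktoken, OpenAI's open-source BPE tokenizer.
# Illustrative only: other models (including DeepSeek's) split text differently.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by GPT-4-era models
token_ids = enc.encode("AI is amazing!")
tokens = [enc.decode([tid]) for tid in token_ids]

print(tokens)     # something like ['AI', ' is', ' amazing', '!']
print(token_ids)  # the integer IDs the model actually consumes
```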
What does this chart mean?
One key metric of an LLM's efficiency is its token rate (output speed) versus its price. On this measure, R1 smashes it!
As a reference point, humans speak at around 2-3 tokens per second (roughly 150 words per minute).
R1 generates 275 tokens per second, which is over 100 times faster than human speech.
The gap in the chart shows that R1 (DeepSeek) is significantly more efficient than o1 (OpenAI): it can process and generate tokens much faster, leading to lower latency in responses.
"Damn, what makes DeepSeek so damn good?"
The #1 reason DeepSeek is so good is that it challenges the widely held assumption that you need massive datasets to create good AI models.
Through a technique called reinforcement learning, DeepSeek's AI learns through feedback and interaction, allowing it to produce human-like responses while using fewer chips. (A toy sketch of the idea follows the table below.)
Example: OpenAI's ChatGPT (traditional training) vs. DeepSeek (reinforcement learning)
| Feature | Traditional Training (GPT, Llama) | Reinforcement Learning (DeepSeek) |
|---|---|---|
| Learning style | Learns from a static dataset | Learns through experience & feedback |
| Adaptability | Needs retraining when new data emerges | Self-improves based on real-world use |
| Speed & efficiency | Expensive & slow (relies on massive GPU fleets) | Cheaper & faster (uses fewer GPUs) |
| Analogy | Memorizes past cases like a law student | Experiments & adapts like a street-smart entrepreneur |
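Here is the promised toy sketch of learning from feedback, written as a hypothetical multi-armed bandit in Python. This is not DeepSeek's actual training loop (R1's RL runs at vastly larger scale over reasoning tasks); it just shows the core loop the table describes: act, get a reward, update.

```python
# Toy "learn from feedback" loop (epsilon-greedy bandit), not DeepSeek's pipeline.
# The learner never sees a labeled dataset; it improves purely from reward signals.
import random

values = [0.0, 0.0, 0.0]  # estimated reward for 3 candidate behaviors
counts = [0, 0, 0]

def reward(action: int) -> float:
    """Stand-in for human/automated feedback; behavior 2 is secretly best."""
    return random.gauss([0.2, 0.5, 0.8][action], 0.1)

for _ in range(1000):
    # Mostly exploit the best-looking behavior, occasionally explore others.
    if random.random() < 0.1:
        action = random.randrange(3)
    else:
        action = values.index(max(values))
    r = reward(action)
    counts[action] += 1
    values[action] += (r - values[action]) / counts[action]  # running mean

print(values)  # estimates drift toward [0.2, 0.5, 0.8]: learned from feedback alone
```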
And the best part?
R1 (DeepSeek) costs roughly 10% of OpenAI's o1, enabling affordable, high-quality AI.
The spicy note is that OpenAI alleges the Chinese AI firm may have abused its API privileges and stolen some of OpenAI's IP.
How?
By using a process called distillation, "a common technique developers use to train AI models by extracting data from larger, more capable ones" - in this case, OpenAI's models or even Meta's Llama.
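For the curious, here is what distillation looks like at its simplest, as a hedged PyTorch sketch (toy models and made-up dimensions; this illustrates the general technique, not a claim about what DeepSeek actually did):

```python
# Minimal knowledge-distillation sketch: a small "student" model is trained to
# match a larger "teacher's" output distribution. Toy stand-ins, not real LLMs.
import torch
import torch.nn.functional as F

teacher = torch.nn.Linear(16, 100)  # stand-in for a large, capable model
student = torch.nn.Linear(16, 100)  # smaller model we want to train cheaply
opt = torch.optim.Adam(student.parameters(), lr=1e-3)
T = 2.0  # temperature softens the teacher's distribution

for step in range(100):
    x = torch.randn(32, 16)  # in practice: prompts sent to the teacher/API
    with torch.no_grad():
        teacher_probs = F.softmax(teacher(x) / T, dim=-1)
    student_logp = F.log_softmax(student(x) / T, dim=-1)
    # KL divergence pulls the student toward the teacher's soft labels
    loss = F.kl_div(student_logp, teacher_probs, reduction="batchmean")
    opt.zero_grad()
    loss.backward()
    opt.step()
```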
Food for thought: while traditional AI labs like OpenAI held that better AI is trained on massive datasets, DeepSeek leaned on reinforcement learning (RL), which allows learning through interaction and feedback. It then maxed out its chips with an efficient, targeted architecture - aka becoming "Nvidia wizards".
If you are curious about the technicality behind DeepSeek, check out the concise explainer from Vishal Misra, the Vice Dean of Computing and AI at Columbia University.
Block #2: Open-source models [DeepSeek] > Proprietary [OpenAI]
As someone learning to code, I find it interesting that DeepSeek's code repositories are public on GitHub, with 64.5k followers.
There are two schools of thought here: (1) play in the shadows and maintain a moat, or (2) build in public.
DeepSeek chose the latter.
DeepSeek has profited from open-source code and research. Its community-driven approach to coding on GitHub sparked rapid developer adoption and innovation at minimal cost.
Even Sam Altman was surprised!
He mentioned that OpenAI may have been "on the wrong side of history" with its proprietary, closed-source approach (and will think about a new open-source strategy for the firm).
Food for thought: in an age of AI democratisation, where innovation = speed, building in public is the way to go.
Block #3: Industry disruption
DeepSeek has changed the whole maths game.
Big Tech (OpenAI, Google, Meta) and Nvidiaās GPU business face pressure as low-cost models challenge closed, expensive systems.
Short term: a red ocean if you own NASDAQ tech stocks (Nvidia -17%)
Last Monday, Nvidia fell by 17% and lost ~$600B of market cap. The main narrative was that DeepSeek sank the tech leader.
Another factor in the heavy Nvidia sell-off was Trump's threatened tariffs on Taiwanese semiconductors.
What does all this chaos mean?
Noah Smith, who writes the Noahpinion Substack, summed up the chaos in four clear takeaways:
1. LLMs don't have very much of a "moat": a lot of people are going to be able to make very good AI of this type, no matter what anyone does.
2. The idea that America can legislate "AI safety" by slowing down progress in the field is now doomed.
3. Competing with China by denying them the intangible parts of LLMs (algorithmic secrets and model weights) is not going to work.
4. Export controls actually are effective, but China will try to use the hype over DeepSeek to give Trump the political cover to cancel export controls.
Longer term: increased use of LLMs as computing costs go down
Ultimately, AI computing follows the gas law: usage expands to fill whatever capacity (and budget) becomes available.
A few years ago, AI used to be reserved for the elite [aka specialized AI labs]. But now things are changing.
It is quickly becoming a commodity, accelerating its usage and accessibility. In the long term, it will become a resource we cannot get enough of.
TL;DR: Expect an acceleration of AI adoption.
TL;DR: DeepSeek's Growth Playbook
1. The Humble Beginnings of DeepSeek
DeepSeek's modest beginnings are its ultimate edge in a world that thinks bigger = better.
DeepSeek's founder is Liang Wenfeng. He grew up in a modest region of China's Guangdong province, and his electrical engineering background spurred an adjacent interest in machine vision.
Using funds from his semi-successful High-Flyer fund, he purchased thousands of Nvidia GPUs and dabbled in programming with his friends. Liang started hiring local talent, including Math Olympiad medalists and PhDs, all within China.
"Necessity is the mother of invention."
With no financial motive, Liang's self-funded creative tinkering became the AI spin-off now known as DeepSeek. I believe his modest circumstances enabled a level of resourcefulness that made him the innovator he is.
"I'm unsure if it's madness, but many inexplicable phenomena exist in this world. Take many programmers, for example: they're passionate contributors to open-source communities. Even after an exhausting day, they still dedicate time to contributing code... It's like walking 50 kilometers: your body is completely exhausted, but your spirit feels deeply fulfilled...
Not everyone can stay passionate their entire life. But most people, in their younger years, can wholeheartedly dedicate themselves to something without any materialistic aims."
2. How Did DeepSeek Get Its First Customers?
Initial Traction:
Remember the 64.5k followers of DeepSeek's open-source GitHub repositories?
Starting with technical engineers and innovators interested in LLMs is low-hanging fruit.
No paid ads or fancy podcasts. Period.
3. What Is the Evidence of PMF?
PMF = going from earned media attention to #1 on the App Store.
Like Notionās viral user-generated content strategy, DeepSeek leveraged open-source adoption to hit PMF fast.
DeepSeek has averaged 1.8 million daily active users since its global launch on January 20, 2025 - a strong sign of PMF.

4. User Acquisition: Building in Public + Media

Here is a quick timeline of DeepSeek's AI model releases:
November 2023: One year after ChatGPT's launch, DeepSeek released a coding model and an LLM.
May 2024: DeepSeek released its V2 LLM (a ChatGPT-like model).
December 2024: On December 26, DeepSeek released its V3 LLM, which is competitive with leading models from OpenAI and Anthropic.
The related paper highlights the famous "$6M" training-run figure, which caught the attention of OpenAI and Tesla's AI team as they began questioning the traditional assumption that "more $$$ means better AI model performance."
Fast forward to Monday, January 20th, 2025. On the same day as President Donald Trump's inauguration, DeepSeek released its R1 reasoning model, a more advanced model built on the V3 release from the month before.
The timing was so damn good.
Community-led growth, aka open source, is the ultimate innovation edge.
5. Scaling Strategy: How Did DeepSeek Scale Its User Base?
Honestly, DeepSeek's scaling strategy comes down to an earned-media growth engine, as it fundamentally challenges many tech leaders' "go big or go home" worldview on AI and GPU chips.
A media storm on channels like LinkedIn and X aroused public curiosity as tech professionals began debating widely held assumptions about token rate, speed, performance, etc.
This drove total search volume for "DeepSeek" to a spike of 9.3 million in January 2025.
More eyeballs = more attention = more growth.
Also, R1's strategic release on the same day as Trump's inauguration symbolizes how tech, culture, and politics are merging into a combined growth lever.
TL;DR:
As the divide between technology and culture narrows, a product that challenges the cultural zeitgeist or a core assumption [e.g. "LLMs need massive datasets"] will earn attention and drive product growth.
TL;DR: DeepSeek's Growth Playbook
1. Embrace constraints in the beginning
2. Build in public [Open source] and co-create with innovators and early adopters [community-led]
3. Stand for something unique, as this will lead to media attention.
4. Be strategic with product timing.
3 Takeaways I learned from DeepSeek in navigating this AI hype cycle
Honestly, it is hard not to get overwhelmed by the AI gold rush hype.

How can you best position yourself without drowning in the mess?
Lesson #1: DeepSeek signals a huge shift in the AI wave: democratisation
DeepSeekās ability to deliver high-quality AI at a fraction of the cost broke the internet.
"Okay cool, but what does that mean for me?"
Short answer: AI is getting democratised, and you don't need to be a tech nerd.
You don't need to be a Big Tech insider. Look for pockets of white space where you can use AI to serve underserved audiences that big players like OpenAI cannot serve.
Lesson #2: Focus on the problem, not the trend
Don't obsess over integrating AI. Think about real use cases that are evergreen.
DeepSeek was invested in AI research for years and did not care about monetization for a long time.
This helped them build an edge that even larger, better-resourced companies cannot match.
Lesson #3: Open-sourcing your life > building in secret
In wild times, become more collaborative. Rising tides lift all boats.
While OpenAI builds proprietary technology, DeepSeek solved important problems through open-source code, creating deep network effects that cannot be easily replicated.