How to Download DeepSeek: Apps, Weights and API Access

DeepSeek download guide for iOS, Android, desktop, open weights and the V4 API. Verify official sources, pick the right path, get started today.

Guides·April 23, 2026·By DS Guide Editorial

If you have landed here looking for a DeepSeek download, the honest answer depends on what you actually want. A phone app? Local model weights? API access for code? The word “download” covers at least four very different things, and plenty of shady sites mix them up. I run DeepSeek V4-Pro and V4-Flash in production and have done the same with V3.2 and R1 before that, so this guide focuses on exactly where the official files live, how to verify you are installing the real app, and when you should skip downloading anything at all and use the web chat or API instead. You will get step-by-step instructions for every platform, plus the trade-offs for each route.

Which DeepSeek download do you actually need?

Before you click anything, pick a lane. The four paths below cover every legitimate way to get DeepSeek onto a device.

Path | Who it is for | Size | Cost
Official mobile app (iOS/Android) | Most people — chat, DeepThink toggle, file upload | ~90–200 MB | Free
Web chat (no download) | Desktop users who want zero install | 0 MB | Free
Open-weight models (Hugging Face / Ollama) | Developers running inference locally | 160 GB – 865 GB | Free weights, your hardware
API key (no download) | Developers building apps against DeepSeek V4 | 0 MB | Pay per token

There is no official standalone Windows or macOS executable from DeepSeek. If a website promises a “DeepSeek.exe” or “DeepSeek.dmg”, treat it as suspect. Desktop users either run the web app in a browser or run the open weights locally through a tool like Ollama or LM Studio.

Option 1: Download the official DeepSeek mobile app

The mobile app is the main consumer-facing DeepSeek download. It is free to use and offers cross-platform chat history sync, web search, DeepThink mode, and file upload with text extraction. It is distributed through the App Store, Google Play, and major Android markets; the official download page is download.deepseek.com/app/. As of April 2026 the app serves DeepSeek V4 by default on the back end, and the DeepThink toggle switches V4 between non-thinking and thinking mode.

On iPhone and iPad

  1. Open the App Store and search “DeepSeek”.
  2. Tap Get on the listing published by DeepSeek (not a third-party clone).
  3. Sign in with email, Google, or Apple ID — the same account works across platforms.

A handful of clones appear in search results. If you want a canonical starting point, go to download.deepseek.com/app/ and follow the iOS link. Our verify official DeepSeek app guide walks through the publisher checks and icon details.

On Android

  1. Open the Google Play Store and search “DeepSeek”.
  2. Confirm the publisher is DeepSeek before tapping install.
  3. If the app is not available in your region, go to download.deepseek.com/app/ for the official APK and enable “install from unknown sources” for your browser.

A word on the APK route: install it only from DeepSeek’s own domain. “MOD APK” sites and mirror hosts frequently bundle trackers or adware. For platform-specific walkthroughs see DeepSeek on Android and DeepSeek on iPhone.

What the app actually gives you

  • V4-powered chat with a DeepThink toggle for thinking mode.
  • 1,000,000-token context for long documents and codebases.
  • File upload with text extraction (PDFs, spreadsheets, code).
  • Session history that persists across devices once you are signed in — unlike the API, which is stateless.

Option 2: Skip the download and use DeepSeek online

For desktop use, the browser remains the path of least resistance. Point Chrome, Safari, Edge, or Firefox at chat.deepseek.com, sign in, and you are done. There is no native Windows or Mac client today. If you want an app-like window, most modern browsers will let you install the site as a Progressive Web App — a two-click operation that gives you a dock or taskbar icon without any real download.

For more on the web surface and its differences from the app, see DeepSeek online and DeepSeek browser vs app.

Option 3: Download the open model weights

This is the route most people do not actually want, but the one that gets searched the most after a major release. On April 24, 2026, DeepSeek released preview versions of its V4 series, an open-source Mixture-of-Experts family with two models and a native one-million-token context window: DeepSeek-V4-Pro has 1.6 trillion total parameters with 49 billion activated per token, and DeepSeek-V4-Flash has 284 billion total parameters with 13 billion activated. Both are MIT-licensed, which means you can fine-tune, redistribute, and use them commercially.

Where the files live

The canonical repositories are on Hugging Face:

  • huggingface.co/deepseek-ai/DeepSeek-V4-Pro
  • huggingface.co/deepseek-ai/DeepSeek-V4-Flash

These are not “an app you install.” They are raw model weights you load into an inference engine.

Be realistic about file sizes

Practitioner Simon Willison summarised the download footprint clearly: V4-Pro is the new largest open-weights model, larger than Kimi K2.6 (1.1T) and GLM-5.1 (754B), with Pro weighing 865GB on Hugging Face and Flash at 160GB. Unless you have a workstation with multiple H100s or a very patient M-series Mac, Pro is not a realistic local download. Flash, once the community ships GGUF quantisations, is tractable on a high-memory Apple Silicon laptop.

The easiest way to run weights locally: Ollama

If you want the “download and go” experience for a smaller DeepSeek model, Ollama remains the simplest pipeline. After Ollama is installed, the pull command is the only step you run:

ollama pull deepseek-v3.2
ollama run deepseek-v3.2 "Summarise this article in five bullet points."

Expect Ollama and llama.cpp to need an update before V4 Flash lands as a quantised pull — V4 ships with a new attention stack. Our running DeepSeek on Ollama tutorial tracks which tags are live, and the DeepSeek hardware calculator estimates VRAM and RAM requirements per quantisation. If you want a full walkthrough of the local route including offline use, see install DeepSeek locally.

Option 4: Skip weights entirely and use the API

For most developers, downloading 160 GB is a waste. The API gives you the same models at a tiny fraction of the effort and zero hardware cost.

A minimal Python example

Chat requests hit POST /chat/completions, the OpenAI-compatible endpoint. Swap base_url and api_key in any existing OpenAI SDK integration and it works unmodified:

from openai import OpenAI

client = OpenAI(
    base_url="https://api.deepseek.com",
    api_key="<your DeepSeek API key>",
)

resp = client.chat.completions.create(
    model="deepseek-v4-flash",
    messages=[{"role": "user", "content": "Explain MoE routing in two sentences."}],
    temperature=1.3,
    max_tokens=512,
)
print(resp.choices[0].message.content)

To switch on thinking mode, add reasoning_effort="high" and extra_body={"thinking": {"type": "enabled"}}. The response then returns reasoning_content alongside the final content. Thinking is a request parameter on either V4 tier, not a separate model ID. DeepSeek also exposes an Anthropic-compatible surface on the same base URL if you prefer that SDK.
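As a concrete illustration, here is a sketch of the extra request parameters, using the names described above. Treat the exact shapes as assumptions and confirm them against the official API docs before shipping:

```python
# Sketch only: parameter names follow this guide's description of
# thinking mode; verify against the official DeepSeek API docs.
request_kwargs = {
    "model": "deepseek-v4-flash",  # thinking is a request flag, not a model ID
    "messages": [{"role": "user", "content": "Explain MoE routing in two sentences."}],
    "reasoning_effort": "high",
    "extra_body": {"thinking": {"type": "enabled"}},
}

# With a configured client, the call would be:
#   resp = client.chat.completions.create(**request_kwargs)
# and resp.choices[0].message would carry reasoning_content
# alongside the final content.
print(sorted(request_kwargs))
```

Because this reuses the OpenAI SDK's `extra_body` escape hatch, no SDK changes are needed to pass the vendor-specific thinking flag.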

Model IDs you can send today

  • deepseek-v4-pro — 1.6T / 49B active, frontier tier.
  • deepseek-v4-flash — 284B / 13B active, the default cost-efficient tier.
  • deepseek-chat and deepseek-reasoner — legacy IDs currently routing to deepseek-v4-flash non-thinking and thinking respectively, fully retired after July 24, 2026, 15:59 UTC. Migrating is a one-line model= change; base_url does not move.
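The migration described above fits in a few lines. The helper below is a hypothetical sketch restating the routing table, not an official shim; note that replacing deepseek-reasoner also means enabling thinking via request parameters, since thinking is no longer a separate model ID:

```python
# Restates the legacy-ID routing described above; hypothetical helper,
# not an official migration shim.
LEGACY_TO_V4 = {
    "deepseek-chat": "deepseek-v4-flash",      # legacy non-thinking tier
    "deepseek-reasoner": "deepseek-v4-flash",  # legacy thinking tier: also enable
                                               # thinking via request parameters
}

def migrate_model_id(model: str) -> str:
    """Map a legacy model ID to its V4 replacement; pass V4 IDs through."""
    return LEGACY_TO_V4.get(model, model)

print(migrate_model_id("deepseek-reasoner"))  # → deepseek-v4-flash
```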

Pricing snapshot (V4-Flash, as of April 2026)

Bucket | Rate per 1M tokens
Input, cache hit | $0.028
Input, cache miss | $0.14
Output | $0.28

V4-Pro sits at $0.145 / $1.74 / $3.48 for the same three buckets — roughly twelve times the output cost of Flash, and worth it only when the benchmark lift justifies the spend. Always check the DeepSeek API pricing page before committing to a tier; prices during the V4 Preview window can change. Off-peak discounts remain discontinued (they ended 2025-09-05).

Worked example: 1,000,000 chat calls on V4-Flash

Assume a 2,000-token cached system prompt, 200-token user message, and 300-token response per call:

  • Cached input: 2,000,000,000 tokens × $0.028/M = $56.00
  • Uncached input: 200,000,000 tokens × $0.14/M = $28.00
  • Output: 300,000,000 tokens × $0.28/M = $84.00
  • Total: $168.00

The user message on each call is always an uncached miss against the cached prefix — do not skip that line. For more detailed cost models, try the DeepSeek pricing calculator.
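The arithmetic above is easy to script. This sketch simply replays the worked example at the quoted April 2026 V4-Flash preview rates, which are a snapshot, not a live pricing feed:

```python
# Replays the worked example above at the quoted V4-Flash preview rates
# (USD per 1M tokens); rates are a snapshot, not a live pricing feed.
RATE_CACHE_HIT = 0.028    # input, cache hit
RATE_CACHE_MISS = 0.14    # input, cache miss
RATE_OUTPUT = 0.28        # output

CALLS = 1_000_000
TOKENS_PER_CALL = {"cached": 2_000, "uncached": 200, "output": 300}

def bucket_cost(tokens_per_call: int, rate_per_million: float) -> float:
    """Total USD for one billing bucket across all calls."""
    return CALLS * tokens_per_call * rate_per_million / 1_000_000

total = (bucket_cost(TOKENS_PER_CALL["cached"], RATE_CACHE_HIT)
         + bucket_cost(TOKENS_PER_CALL["uncached"], RATE_CACHE_MISS)
         + bucket_cost(TOKENS_PER_CALL["output"], RATE_OUTPUT))
print(f"${total:.2f}")  # → $168.00
```

Swapping in the V4-Pro rates for the same three buckets is a three-constant change, which makes tier comparisons for your own traffic shape straightforward.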

Verify you are downloading the real DeepSeek

Fake DeepSeek downloads exploded during the January 2025 app-store surge and have not stopped. A simple checklist keeps you out of trouble:

  • Publisher name: the iOS and Android listings must be published by DeepSeek, not “DeepSeek AI Pro”, “DeepSeek Plus”, or similar.
  • Official domains only: deepseek.com, chat.deepseek.com, api.deepseek.com, download.deepseek.com, and huggingface.co/deepseek-ai. Anything else is third-party.
  • No paid “Premium APK”: the official mobile app is free. Anyone selling a premium unlock is running a scam.
  • Hash-check open weights: Hugging Face provides SHA256 sums alongside every file. Verify before loading.
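
The hash check in the last item is a few lines of standard-library code. The filename and expected digest below are placeholders, not real DeepSeek values:

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA256 so multi-GB weight shards fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# expected = "<sum copied from the file's page on Hugging Face>"  # placeholder
# assert sha256_of("model-shard.safetensors") == expected         # placeholder name
```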

If you have already installed something suspect and want a clean start, see DeepSeek troubleshooting.

Platform-specific notes

Windows and macOS

No native installer. Use the browser at chat.deepseek.com, or run weights through Ollama or LM Studio. Platform guides: DeepSeek on Windows, DeepSeek on Mac, DeepSeek on Linux.

Chrome and Edge

The Chrome Web Store lists several DeepSeek extensions of varying quality; a community-maintained option is covered in our DeepSeek Chrome extension walkthrough.

Countries with restrictions

App availability varies. Italy’s Garante ordered blocking of the DeepSeek app in January 2025 over data-protection concerns, and several US states have restricted DeepSeek on government devices. The app-store listing may not appear in some regions; see DeepSeek availability by country.

Which download should you pick?

If you are a normal user who wants to chat, summarise documents, or use DeepThink on the go: install the official mobile app and skip everything else. If you work from a desktop all day: use the web app in a browser. If you are building a product: get an API key and leave the weights alone unless you have an explicit reason (air-gapped deployment, custom fine-tunes, or data-residency rules) to host yourself. The open weights are genuinely open and genuinely useful, but the hardware bill for V4-Pro is not.

For a broader walkthrough of first steps once you have installed, head to how to use DeepSeek or the DeepSeek beginner guides.

Last verified: 2026-04-24. DeepSeek AI Guide is an independent resource and is not affiliated with DeepSeek or its parent company. Model IDs, pricing and API behaviour change; check the official DeepSeek documentation and pricing page before committing to a production decision.

Is the DeepSeek download free?

Yes — the official mobile app is free, the web chat is free, and the open-weight V4-Pro and V4-Flash models are released under the MIT license for commercial and research use. You only pay when you use the API, which bills per token. For a full breakdown of what costs money and what does not, see our guide on whether DeepSeek is free.

How do I know the DeepSeek app I downloaded is official?

Check three things: the publisher on the App Store or Google Play listing must be “DeepSeek”; the domain for an APK download should be download.deepseek.com; and there is no paid premium version. Clone apps often ask for excessive permissions or charge a subscription. Our verify official DeepSeek app checklist covers the icon, bundle ID, and other signals.

Can I download DeepSeek V4 and run it on my laptop?

V4-Pro weights are around 865 GB — not realistic locally. V4-Flash is roughly 160 GB and becomes tractable once the community ships quantised GGUF builds; a 128 GB Apple Silicon machine or a dual-3090 rig is the entry point. If you only want to try V4 today, the API or web chat are faster. See DeepSeek system requirements for the numbers.

What happened to the deepseek-chat and deepseek-reasoner downloads?

Those are API model IDs, not downloads. They currently route to deepseek-v4-flash in non-thinking and thinking mode respectively, and are fully retired after 2026-07-24 at 15:59 UTC. Migrate to deepseek-v4-pro or deepseek-v4-flash with a one-line model swap. Full details are in the DeepSeek API documentation.

Does DeepSeek have an official Windows or Mac app?

No. DeepSeek does not publish a native desktop client. On Windows, macOS, or Linux, use the web interface at chat.deepseek.com (which you can install as a PWA), or run models locally through Ollama or LM Studio. Our install DeepSeek locally tutorial walks through the local options end to end.
