How do you install and run LLMs such as deepseek-r1 locally?
1 min read
Summary
The article advises installing and running large language models (LLMs) locally, rather than relying on AI providers' platforms or mobile apps, for greater control over and customization of the models.
It outlines the requirements for installing and running LLMs locally: a capable GPU, sufficient RAM, and several software applications, as in the quick hardware check sketched below.
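The article does not prescribe exact commands for this check; the following is a minimal sketch assuming a Linux machine with an NVIDIA GPU and drivers installed:

```bash
# Check total and available system RAM
free -h

# Check GPU model and VRAM (assumes NVIDIA drivers and nvidia-smi are present)
nvidia-smi --query-gpu=name,memory.total,memory.free --format=csv
```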
Just as Linux open-sourced its operating system code, DeepSeek has open-sourced its AI algorithms, models, and training procedure details; the article takes this as its starting point and states that it will guide readers through installing and running LLMs locally.
The latter part of the article walks step by step through installing and running an LLM with a script manager, Docker, and the fast, efficient Ollama CLI, along the lines of the command sketch below.
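For concreteness, here is a minimal sketch of the kind of commands such a walkthrough typically uses; the exact steps and the model tag (deepseek-r1:7b) are assumptions for illustration, not quoted from the article:

```bash
# Install Ollama via its official install script (Linux/macOS)
curl -fsSL https://ollama.com/install.sh | sh

# Alternatively, run Ollama inside a Docker container
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

# Pull and run a DeepSeek-R1 model; choose a tag that fits your GPU/RAM
ollama run deepseek-r1:7b
```

Once the model is running, Ollama also serves a local HTTP API on port 11434, so other tools on the machine can query the model outside the interactive prompt.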
Finally, the writer warns against using the DeepSeek app, claiming it is not secure even though it is the most popular free AI app.