Large Language Models: Running One Locally

  • Download LM Studio.
  • Open the app and download a model.
  • I downloaded the DeepSeek R1 Distill Qwen 7B model, and it gives good enough results.
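Beyond the chat UI, LM Studio can also serve the downloaded model over an OpenAI-compatible local HTTP API. As a sketch, here is how a chat-completion request body for that server could be built; the endpoint URL and the model identifier are assumptions (LM Studio shows the actual values in its local server tab), so adjust them to match your setup:

```python
import json

# Assumed default address of LM Studio's local server; check the
# server tab in LM Studio for the real port on your machine.
ENDPOINT = "http://localhost:1234/v1/chat/completions"


def build_request(prompt: str, model: str = "deepseek-r1-distill-qwen-7b") -> str:
    """Return the JSON body for a single-turn chat completion
    in the OpenAI-compatible format LM Studio's server accepts."""
    body = {
        "model": model,  # assumed identifier; copy the exact name from LM Studio
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }
    return json.dumps(body)


if __name__ == "__main__":
    # Send this payload with any HTTP client, e.g.
    # curl -H "Content-Type: application/json" -d @- $ENDPOINT
    print(build_request("Why is the sky blue?"))
```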

Resources used on my machine:

| State                    | RAM          | CPU  |
|--------------------------|--------------|------|
| Idle                     | up to 3.5 GB | 0%   |
| Generating output tokens | up to 4 GB   | ~14% |