LM Studio: Run AI Locally on Your Computer (Install Guide, Best Models, Hardware, and Real Projects)



LM Studio: Run AI Locally (Beginner → Power User Guide)

Start from zero. Install LM Studio, pick the right model, tune speed/quality, and run a local API server for apps, agents, coding, and creative tools.
Last updated: January 2026 • Estimated read: 10–14 minutes
Screenshot: Discover tab, where you search, compare, and download models.

1) The basics (what AI is in plain language)

An “AI model” (an LLM, or large language model) is a text engine trained on huge amounts of writing. It predicts the next chunk of text, and that prediction loop is what produces answers, plans, code, and stories. LM Studio is a desktop app that lets you download these models and run them on your own computer.

Three beginner terms that matter

  • Tokens: chunks of text. More tokens = more memory and slower responses.
  • Context length: how much the model can “remember” at once (chat + pasted text).
  • Quantization (Q4/Q5/Q8): smaller/faster model files with a quality trade-off.
Reality check: LM Studio doesn’t browse the internet on its own. To search the web or control other apps, you normally add tools (Tool Use / MCP) or run a small bridge service.
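
To make tokens and context length concrete, here is a tiny Python sketch. It uses the common rough rule of thumb of about four characters per English token; real tokenizers vary by model, so treat the numbers as planning estimates only.
def rough_tokens(text: str, chars_per_token: float = 4.0) -> int:
  # Very rough planning estimate, not a real tokenizer.
  return int(len(text) / chars_per_token)

def fits_in_context(text: str, context_length: int, reply_budget: int = 512) -> bool:
  # Leave room for the model's reply, not just your pasted text.
  return rough_tokens(text) + reply_budget <= context_length

pasted = "word " * 3000  # stand-in for a long document you paste into chat
print(rough_tokens(pasted), "tokens (rough estimate)")
print("Fits in a 4096-token context?", fits_in_context(pasted, 4096))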

2) Download + install LM Studio (step-by-step)

LM Studio is available for Windows, macOS, and Linux. Use the official download page below, then follow the steps for your operating system.

Official download

Go to lmstudio.ai and grab the build for your operating system.

Tip: If you ever want to double-check you’re on the right site, the official domain is lmstudio.ai.

Install steps

  • Windows: download the installer → run it → open LM Studio from the Start Menu.
  • macOS: download the DMG → drag LM Studio to Applications → open it (allow “Open” if macOS warns you).
  • Linux: download the AppImage → make it executable → run it:
    chmod +x LM_Studio*.AppImage
    ./LM_Studio*.AppImage
What you should see: once installed, LM Studio opens with a sidebar including Discover, Chat, and Developer. You’ll use Discover to download a model, Chat to talk to it, and Developer to run the API server.

3) Quick start (first model + first prompts)

  1. Open Discover.
  2. Search for an Instruct model (general chat) and click Download.
  3. Go to Chat → load the model.
  4. Try the prompts below so you know it’s working.

Three test prompts (copy/paste)

1) Explain tokens + context length like I'm 12.
2) Summarise this paragraph in 5 bullets: [paste any paragraph]
3) Give me a step-by-step checklist to do X, then ask me 2 questions.

4) Download your first model (what to choose)

Choose models by job and hardware. In LM Studio you’ll typically see: Instruct (general assistant), Coder (programming), and sometimes Vision (images).

Beginner picks (by hardware)

  • Older PC / CPU-only: 3B–7B Instruct in Q4.
  • Mid-range (16–32GB RAM, modest GPU): 7B–14B in Q4/Q5.
  • High-end GPU: 14B–34B+ models, higher quant, larger context.
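
Not sure which bucket you fall into? A rough rule of thumb (an approximation, not an exact figure) is that a quantized model needs about parameters × bits-per-weight ÷ 8 of RAM/VRAM for its weights, plus headroom for the context. A quick planning sketch in Python:
def rough_model_gb(params_billion: float, bits_per_weight: float) -> float:
  # Approximate memory for the weights alone; ignores context/KV-cache overhead.
  return params_billion * bits_per_weight / 8

# Ballpark figures for planning; actual model files differ by quant scheme.
for label, params, bits in [("7B @ Q4", 7, 4), ("14B @ Q5", 14, 5), ("34B @ Q8", 34, 8)]:
  print(f"{label}: ~{rough_model_gb(params, bits):.1f} GB for weights")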

5) Best settings (low-end vs high-end)

Most “why is this slow?” problems come down to two things: model size and context length. Use these as your starting defaults.

Setting          | Low-end / laptop                  | High-end / GPU box
Quant            | Q4 (fast + fits memory)           | Q5/Q8 (better quality)
Context length   | 4k–8k (stable)                    | 8k–32k+ (if you have RAM/VRAM)
Temperature      | 0.2–0.6 (more precise)            | 0.7–1.0 (more creative)
GPU offload      | Some offload if VRAM is limited   | Max offload (fastest)
Starter rule: If you don’t know what to change, set Context to 4096–8192 and Temperature to 0.4.

6) Developer tab: run a local API server (localhost)

LM Studio can expose an OpenAI-compatible API so your apps/bots can talk to your local model. The base URL is usually: http://localhost:1234/v1

  1. Load a model in LM Studio.
  2. Open Developer → start the server.
  3. Copy the base URL.
  4. Test it quickly:
curl http://localhost:1234/v1/models
Advanced: minimal Python client (OpenAI-compatible)
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed")

resp = client.chat.completions.create(
  model="YOUR_MODEL_ID",
  messages=[{"role":"user","content":"Explain LM Studio in 2 sentences."}],
  temperature=0.4
)

print(resp.choices[0].message.content)
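
Optional: streaming. Because the server is OpenAI-compatible, the same client can stream the reply token by token, which feels much snappier in chat-style apps. A short sketch (the model ID is a placeholder):
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed")

# stream=True yields chunks as the model generates them.
stream = client.chat.completions.create(
  model="YOUR_MODEL_ID",
  messages=[{"role":"user","content":"Give me 3 ideas for local-AI projects."}],
  temperature=0.4,
  stream=True
)

for chunk in stream:
  delta = chunk.choices[0].delta.content
  if delta:
    print(delta, end="", flush=True)
print()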

7) Serve on your local network (LAN)

Want your phone/laptop to use your main PC as the AI server? In Server Settings, enable Serve on Local Network, then use: http://YOUR_LAN_IP:1234/v1

Screenshot: Developer tab, where you start the local server and copy the base URL.
Security: keep it LAN-only. Don’t expose your LM Studio server directly to the public internet unless you know how to secure it.
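
From another device on the same network, the client code is identical to the localhost example; only the base URL changes. A minimal sketch (YOUR_LAN_IP and YOUR_MODEL_ID are placeholders for your own values):
from openai import OpenAI

# Point the client at your main PC's LAN address instead of localhost.
client = OpenAI(base_url="http://YOUR_LAN_IP:1234/v1", api_key="not-needed")

print([m.id for m in client.models.list().data])  # quick connectivity check

resp = client.chat.completions.create(
  model="YOUR_MODEL_ID",
  messages=[{"role":"user","content":"Reply with OK if you can hear me."}]
)
print(resp.choices[0].message.content)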

8) Real ways to use LM Studio

Project A — Local “ChatGPT-style” assistant

Prompt template:
Goal: (1 sentence)
Context: (2–6 bullets)
Constraints: (bullets)
Output: (exact format)
Ask me 2 questions if anything is missing.

Project B — Coding assistant (reliable edits)

You are my coding assistant.
- Ask clarifying questions if requirements are missing.
- Make the smallest change first.
- Output a unified diff only.
- After the diff: explain how to test and how to roll back.
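
One way to wire these rules into the local API is to send them as the system message so they apply to every request, with the actual task in the user message. A minimal sketch (model ID and task are placeholders):
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed")

coding_rules = """You are my coding assistant.
- Ask clarifying questions if requirements are missing.
- Make the smallest change first.
- Output a unified diff only.
- After the diff: explain how to test and how to roll back."""

resp = client.chat.completions.create(
  model="YOUR_MODEL_ID",
  messages=[
    {"role":"system","content":coding_rules},  # rules apply to the whole session
    {"role":"user","content":"Rename foo() to fetch_user() in app.py."}
  ],
  temperature=0.2  # keep edits precise
)
print(resp.choices[0].message.content)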

Project C — Creative writing

Write in: punchy cinematic voice, short paragraphs, strong imagery.
Length: 600–900 words.
Structure: hook → escalation → cliffhanger.

Project D — Documents

Do this:
1) Summarise in 8 bullets
2) Extract action items + dates
3) List risks/unknowns
4) Executive brief in 120 words

Project E — Tools + browser-like research (agent style)

  1. User asks a question
  2. Model requests a tool action (search/open/screenshot)
  3. Your tool runs and returns results
  4. Model answers using those results
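
A minimal sketch of that loop using OpenAI-style tool calling. Whether it works depends on your LM Studio version and on the loaded model supporting tool use; the search_web function here is a stand-in you would replace with your own search/open/screenshot code.
import json
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed")

def search_web(query: str) -> str:
  # Stand-in tool: replace with your real search/open/screenshot logic.
  return f"Pretend search results for: {query}"

tools = [{
  "type": "function",
  "function": {
    "name": "search_web",
    "description": "Search the web and return a short summary of results.",
    "parameters": {
      "type": "object",
      "properties": {"query": {"type": "string"}},
      "required": ["query"]
    }
  }
}]

messages = [{"role": "user", "content": "What's new in local LLM runtimes?"}]
reply = client.chat.completions.create(
  model="YOUR_MODEL_ID", messages=messages, tools=tools
).choices[0].message

if reply.tool_calls:  # the model asked for a tool action
  messages.append(reply.model_dump(exclude_none=True))  # keep its request in the history
  for call in reply.tool_calls:
    result = search_web(**json.loads(call.function.arguments))
    messages.append({"role": "tool", "tool_call_id": call.id, "content": result})
  reply = client.chat.completions.create(
    model="YOUR_MODEL_ID", messages=messages, tools=tools
  ).choices[0].message

print(reply.content)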

Project F — Minecraft + Discord (bridge app)

Related guide: Ai Chat Bot In Minecraft (connects Minecraft to LM Studio through a small Python bridge).

9) Troubleshooting

  • Model won’t load / out of memory: smaller model → lower quant → lower context.
  • It’s slow: reduce context to 4096, use Q4, and offload to GPU if possible.
  • LAN devices can’t connect: check firewall + correct LAN IP + port 1234.
  • Web app can’t call the server: enable CORS in server settings (dev only).
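
If a device can’t reach the server, a quick check like the one below (assumes the requests package is installed; the IP is a placeholder) helps separate networking problems from LM Studio problems:
import requests

BASE_URL = "http://YOUR_LAN_IP:1234/v1"  # use http://localhost:1234/v1 on the host PC

try:
  r = requests.get(f"{BASE_URL}/models", timeout=5)
  r.raise_for_status()
  print("Server reachable. Models:", [m["id"] for m in r.json()["data"]])
except requests.Timeout:
  print("Timed out: likely a firewall rule or the wrong network.")
except requests.ConnectionError:
  print("Could not connect: check the IP, that the server is started, and port 1234.")
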
Screenshot (optional advanced): searchable notes, docs, and screenshots (RAG-style workflows).

