March 17, 2026 · 2 min read

Introducing Local AI Models in Workshop Desktop

Try local AI in Workshop Desktop: run models on your own computer for private, lower-cost, and offline coding help with a simple in-app setup.

AI Engineering · Local AI · AI Coding

Want faster feedback, more privacy, and fewer API costs while building?

With local AI models in Workshop Desktop, you can run AI directly on your own computer. That means the same Workshop coding agent you already use can help you plan, code, and debug using on-device AI.

Why try local AI?

For most people, it comes down to three practical benefits:

  • More private AI workflows: your prompts and project context stay on your machine
  • Lower AI costs: no per-request API charges when using local models
  • Offline AI coding support: keep working even without an internet connection (after the initial model download)

If you’ve been curious about self-hosted AI or on-device AI assistants, this is the easiest way to try it.

Getting started is simple

Workshop’s default setup is guided and built into the app.

Go to Agent Settings → Local Models, then:

  1. Choose a recommendation in Guided (Fast / Balanced / Genius)
  2. Download the model in-app
  3. Activate it
  4. Pick Local in the model selector

That’s it.

No separate local server is required for the standard setup.

What Workshop handles for you

Workshop Desktop takes care of the setup behind the scenes:

  • Prepares the local runtime
  • Downloads and stores model files
  • Shows status and progress
  • Starts your selected local model
  • Routes the Local option in the model selector to the active model

You’ll also find:

  • All Models to browse the full catalog
  • Downloaded to manage models already on your machine

Advanced option: use your own local server

If you already run your own AI stack, that path is still supported.

In Local Models, open Advanced: Connect your own server and enter the base URL of an endpoint compatible with the Anthropic Messages API (/v1/messages).
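To illustrate what "Messages API-compatible" means, here is a minimal sketch of the request body Workshop would POST to your server's /v1/messages route. The base URL and model name below are placeholders, not values Workshop ships with:

```python
import json

# Hypothetical base URL for a self-hosted server; your actual host,
# port, and model name will differ.
BASE_URL = "http://localhost:8080"

def build_messages_request(prompt: str, model: str = "my-local-model",
                           max_tokens: int = 1024) -> dict:
    """Build a minimal Anthropic Messages API-style request body."""
    return {
        "model": model,
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }

body = build_messages_request("Explain this function.")
# The client would POST this as JSON to f"{BASE_URL}/v1/messages":
print(json.dumps(body, indent=2))
```

Any server that accepts this shape and returns a Messages-style response (for example, a llama.cpp or similar gateway you run yourself) should work as the endpoint.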

So you get both:

  • an easy default path for most users
  • a bring-your-own-server path for advanced users

What to expect

As with any local AI setup, speed and quality depend on your hardware and model choice.

  • Smaller models are usually faster
  • Larger models are usually stronger, but need more memory
  • Your mileage will vary by machine

If you’re trying unfamiliar models, keep code execution approvals on so you can review actions before they run.

Learn more

For setup guides, hardware recommendations, and full bring-your-own-server documentation:

Running Local AI Models in Workshop Desktop →