Build a Local AI Automation Stack on Windows with Docker, n8n, and LM Studio
A step-by-step guide
✨ Why Go Local?
Owning your AI workflows is a power move. Self-hosting n8n isn’t just about keeping things private. It’s about controlling your tech stack, dodging API limits, and running unlimited queries without a dime in fees. You get to experiment, automate, and scale on your terms. Best part? It’s all humming on your machine, no internet required.
This guide breaks down how to set up a local AI environment using Docker, n8n, and LM Studio on a beefy Windows rig. Want to take it further? Future posts will dive into the top remote servers and cloud platforms to level up your workflows when you’re ready to scale beyond local.
Pros of Self-Hosting n8n
Full Control: You own the stack. Customize workflows, tweak settings, and run things exactly how you want.
Cost-Free Queries: No per-query fees or subscriptions. Run as many tasks as your hardware can handle.
Offline Power: No internet? No problem. Your workflows keep running, perfect for secure or remote setups.
Privacy First: Keep sensitive data on your machine. No third-party servers snooping on your automations.
Scalable Playground: Experiment freely. Test new workflows without worrying about API limits or vendor lock-in.
Cons of Self-Hosting n8n
Hardware Demands: Needs a solid rig. Weak machines will choke on complex workflows or large models.
Setup Hustle: Docker and n8n require some tech chops. Expect a learning curve if you’re new to self-hosting.
Maintenance Load: Updates, backups, and troubleshooting are on you. No hand-holding from a SaaS provider.
Limited Cloud Perks: Local setups lack the elasticity of cloud platforms. Scaling up means more manual work.
Security Responsibility: You’re the gatekeeper. Misconfigure something, and your system could be vulnerable.
🖥️ Hardware for Self-Hosting n8n
For this guide, I’m running n8n, Docker, and LM Studio on a setup that handles complex workflows like a champ 💪. Here’s what I use and what works best for top performance:
CPU:
My Setup: Intel Core i9 (16 cores, high clock speed).
Best Choice: A high-core-count CPU (e.g., Intel Core i9 or AMD Ryzen 9 with 12+ cores). More cores speed up parallel tasks and heavy automations.
RAM:
My Setup: 64GB.
Best Choice: 32GB minimum, 64GB ideal. Local LLMs and n8n workflows can be memory hogs, especially with multiple processes.
GPU:
My Setup: NVIDIA RTX 3080 (12GB VRAM).
Best Choice: A modern GPU with 8GB+ VRAM (e.g., NVIDIA RTX 3060 or better) for AI tasks. Optional but accelerates model processing significantly.
Storage:
My Setup: 2× 500GB NVMe SSDs.
Best Choice: 500GB+ NVMe SSD for fast read/write speeds. Quick storage keeps workflows responsive, especially with large models.
This setup powers my local AI environment efficiently, letting me experiment and automate without bottlenecks. For best results, prioritize a strong CPU and ample RAM. A GPU is a bonus for AI-heavy tasks, but not mandatory. If your hardware’s weaker, expect slower performance on complex workflows.
⚙️ Prerequisites
Here's how to set up each component:
1. Install Docker Desktop
Download: Docker Desktop
Run the installer and follow the instructions.
Once installed, open Docker Desktop and ensure it’s running.
Verify via terminal:
docker -v
2. Install LM Studio
Download: LM Studio
Install and launch the app.
Go to the Discover tab.
Search and install a model that’s compatible with your hardware. In my case I’m using:
deepseek-r1-distill-qwen-7b
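When picking a model for your hardware, a rough rule of thumb is: memory footprint ≈ parameter count × bytes per weight (set by the quantization level), plus some runtime overhead for the KV cache and runtime. Here's a minimal sketch of that back-of-the-envelope math (the flat 1 GB overhead figure is an assumption, not an LM Studio number):

```python
def approx_model_memory_gb(params_billion: float, bits_per_weight: int,
                           overhead_gb: float = 1.0) -> float:
    """Rough VRAM/RAM estimate: weights only, plus a flat overhead
    for KV cache and runtime (the overhead value is a guess)."""
    weights_gb = params_billion * bits_per_weight / 8  # 1B params at 8 bits ~ 1 GB
    return weights_gb + overhead_gb

# A 7B model at Q4 (4-bit) quantization: about 4.5 GB
print(round(approx_model_memory_gb(7, 4), 1))
```

By this estimate, a 4-bit 7B model fits comfortably in the 12GB of VRAM on my RTX 3080, with room left for longer contexts.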
3. Set up n8n via Docker
Create a working directory:
mkdir n8n-llm && cd n8n-llm
Create the docker-compose.yml file (I'm on Windows so I'll use Notepad 🤣):
notepad docker-compose.yml
Paste the YAML config:

services:
  n8n:
    image: n8nio/n8n
    ports:
      - "5678:5678"
    environment:
      - N8N_BASIC_AUTH_ACTIVE=true
      - N8N_BASIC_AUTH_USER=[CHOOSE A USERNAME]
      - N8N_BASIC_AUTH_PASSWORD=[CHOOSE A PASSWORD]
    volumes:
      - ./n8n-data:/home/node/.n8n
Start n8n:
docker-compose up -d
Now visit http://localhost:5678 to access the n8n UI.
🤖 Set Up LM Studio with a DeepSeek Model
In LM Studio, load the model you downloaded, then start the local server from the Developer tab. LM Studio exposes an OpenAI-compatible API, listening on http://localhost:1234 by default.
🔗 Connect n8n to LM Studio
Add an HTTP Request node in n8n and configure it:
Method: POST
URL: http://host.docker.internal:1234/v1/chat/completions (since n8n runs inside Docker, localhost would point at the container itself; host.docker.internal reaches the Windows host where LM Studio is listening)
Body (JSON):

{
  "model": "deepseek-r1-distill-qwen-7b",
  "messages": [
    {
      "role": "user",
      "content": "Tell me a dad joke."
    }
  ]
}
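If you want to sanity-check the request and response shapes outside n8n, here's a small Python sketch. The URL and model name match this setup, and `build_chat_payload` / `extract_reply` are illustrative helpers, not part of any library; LM Studio follows the OpenAI chat-completions response format:

```python
import json

# Endpoint as seen from inside the n8n container in this setup
LMSTUDIO_URL = "http://host.docker.internal:1234/v1/chat/completions"

def build_chat_payload(prompt: str,
                       model: str = "deepseek-r1-distill-qwen-7b") -> str:
    """Build the same JSON body the n8n HTTP Request node sends."""
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })

def extract_reply(response_body: str) -> str:
    """Pull the assistant's text out of an OpenAI-style chat response."""
    return json.loads(response_body)["choices"][0]["message"]["content"]

# Offline check against a mock response in the OpenAI format:
mock = json.dumps(
    {"choices": [{"message": {"role": "assistant", "content": "Hi!"}}]}
)
print(extract_reply(mock))  # Hi!
```

In n8n, the same extraction maps to the expression `{{ $json.choices[0].message.content }}` on the node's output.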
And voilà. There you have it.
Let me know if you have any questions in the comments and we’ll help you out 🛠️