Why VPS Rocks for Quick Deployments: My Story + Build an LLM-over-DNS Agent in Under 30 Mins!
My most valuable skill as a hacker/entrepreneur? Being confident enough to deploy random stuff that works on my laptop to the actual internet. Sounds simple, but this superpower literally got me into Y-Combinator and helped me raise a seed round!
The Struggle Bus Was Real
Picture this: teenage me, fresh off a Christmas Raspberry Pi gift, building my very first “real” project. A Twitter weather bot that would read from the API firehose and reply with weather conditions when you @’ed it. I followed a tutorial, got it working locally, and then… complete brain freeze on deployment.
Obviously I should just use my Pi as a server, right? WRONG. My code had more bugs than a summer camping trip, crashing whenever I wasn’t around to babysit it. Then I couldn’t even SSH back in because our house didn’t have a static IP (and Tailscale wasn’t a thing yet). It was like having a pet that only worked when you were home.
Welcome to PaaS Hell
Fast forward to building web apps in college. Somehow I completely skipped VPS and went straight to Platform as a Service hell. I’d Google “how do I deploy my create react app” and the top answer was always some third-party service with build steps, managed SSL, and enough complexity to make my head spin.
There was ALWAYS some weird limitation:
- Memory constraints during build
- Puppeteer couldn’t run because of missing apt packages
- Stuck configuring Docker images (pre-AI era, so no ChatGPT to save me)
I spent more time fighting deployment than actually building my janky React apps. Not a good look!
My VPS Salvation
During college I got lucky and met this hacky startup entrepreneur who was hiring. The whole operation seemed barely legitimate, but I took the leap anyway.
Going in, I assumed the “proper” way to deploy was AWS or some other hyperscaler. But this guy? Total VPS maximalist with the most beautifully simple philosophy I’d ever encountered:
Rent a VPS, SSH in, do the same thing you did locally (yarn dev or whatever), throw up a reverse proxy, call it a day.
Watching him deploy like this over and over was like seeing magic. Everything was small, learnable, and confidence-building. That nagging voice in my head saying “I can’t build this because I won’t be able to deploy it” just… disappeared.
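For the curious, the “throw up a reverse proxy” part really is a one-liner these days. A minimal sketch with Caddy (assuming your app listens on port 3000 and yourdomain.com already points at the box; swap both in for your own setup):

# Caddy grabs a TLS cert automatically and forwards traffic to your local dev server
sudo caddy reverse-proxy --from yourdomain.com --to localhost:3000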
Paying It Forward
I’ve become a total evangelist for this approach, but never knew how to write about it entertainingly. Then I saw levelsio’s tweet about deploying a DNS server that lets you talk to an LLM, and I knew I had my hook!
Want to see it in action? Try this magic:
dig @llm.skeptrune.com "what is the meaning of life?" TXT +short
Pretty wild, right? Let’s build our own in less than 30 minutes with nothing but a rented server!
The Build: LLM-over-DNS in 30 Minutes
Step 1: Get Into Your VPS
After purchasing your VPS (I love Hetzner, but any provider works), you’ll get an IP and login creds via email:
ssh root@<your-vps-ip>
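Optional but handy: copy your SSH public key over so you aren’t pasting the emailed password every time (this assumes you already have a keypair on your laptop):

ssh-copy-id root@<your-vps-ip>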
Step 2: Clear the DNS Decks
Most VPS images come with DNS stuff pre-installed. Let’s clean house:
# Check what's running
systemctl list-units --type=service | grep -E 'bind|dns|systemd-resolved'
# Stop the usual suspects
systemctl stop systemd-resolved
systemctl disable systemd-resolved
# Bye bye bind9
apt-get remove --purge bind9 -y
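One gotcha worth a quick check before moving on: stopping systemd-resolved can leave the box itself unable to resolve hostnames, and our script still needs to reach openrouter.ai. A sanity check plus a fallback resolver (1.1.1.1 is just an example; any public resolver works):

# Confirm nothing is still squatting on port 53
ss -ulpn | grep ':53' || echo "port 53 is free"
# Give the box a working upstream resolver now that systemd-resolved is gone
echo "nameserver 1.1.1.1" > /etc/resolv.conf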
Step 3: Install Our Tools
Just need a couple Python packages:
pip install dnslib requests
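Heads up: on newer Debian/Ubuntu images, pip may refuse to install system-wide packages (“externally managed environment”). If you hit that, a throwaway venv sorts it out (the ~/llm-dns path is just an example):

apt-get install -y python3-pip python3-venv
python3 -m venv ~/llm-dns
. ~/llm-dns/bin/activate
pip install dnslib requests

If you go this route, remember to launch the script later with the venv’s interpreter (e.g. sudo ~/llm-dns/bin/python llm_dns.py), since plain sudo python3 will use the system Python.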
Step 4: The Magic Script
Here’s where the fun happens. Create a Python script that listens for DNS queries, treats them as prompts, hits the OpenRouter API, and returns the response as a TXT record:
from dnslib.server import DNSServer, BaseResolver
from dnslib import RR, QTYPE, TXT
import requests
import codecs

OPENROUTER_API_KEY = ""  # Add your OpenRouter API key here
LLM_API_URL = "https://openrouter.ai/api/v1/chat/completions"


class LLMResolver(BaseResolver):
    def resolve(self, request, handler):
        qname = request.q.qname
        qtype = QTYPE[request.q.qtype]
        prompt = str(qname).rstrip('.')

        # Forward the query name to the LLM as a prompt
        try:
            response = requests.post(
                LLM_API_URL,
                headers={
                    "Authorization": f"Bearer {OPENROUTER_API_KEY}",
                    "Content-Type": "application/json"
                },
                json={
                    "model": "openai/gpt-3.5-turbo",
                    "messages": [{"role": "user", "content": prompt}]
                },
                timeout=10
            )
            response.raise_for_status()
            raw_answer = response.json()["choices"][0]["message"]["content"]
        except Exception as e:
            raw_answer = f"Error: {str(e)}"

        # Unescape any literal \n sequences the model returns; fall back to the raw text
        try:
            answer = codecs.decode(raw_answer.encode('utf-8'), 'unicode_escape')
        except Exception:
            answer = raw_answer

        reply = request.reply()
        if qtype == "TXT":
            # Split long responses into chunks (a single TXT string tops out at 255 bytes)
            chunk_size = 200
            if len(answer) > chunk_size:
                chunks = [answer[i:i + chunk_size] for i in range(0, len(answer), chunk_size)]
                for i, chunk in enumerate(chunks):
                    reply.add_answer(RR(qname, QTYPE.TXT, rdata=TXT(f"[{i+1}/{len(chunks)}] {chunk}")))
            else:
                reply.add_answer(RR(qname, QTYPE.TXT, rdata=TXT(answer)))
        return reply


if __name__ == "__main__":
    resolver = LLMResolver()
    server = DNSServer(resolver, port=53, address="0.0.0.0")
    server.start_thread()

    # Keep the main thread alive while the DNS server runs in the background
    import time
    while True:
        time.sleep(1)
Save this as llm_dns.py. Don’t forget to grab an OpenRouter API key and paste it in!
Quick Note: This is totally a proof-of-concept. For real production stuff, you’d want proper process management, logging, rate limiting, and definitely not storing API keys in plaintext.
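If you want to at least keep the key out of the source file, a minimal tweak is to read it from the environment. I’m assuming here that you export OPENROUTER_API_KEY in your shell before starting the script (and remember that sudo drops most environment variables unless you pass -E):

import os

# Pull the key from the environment instead of hardcoding it
OPENROUTER_API_KEY = os.environ.get("OPENROUTER_API_KEY", "")

Then export OPENROUTER_API_KEY=<your-key> and start the server with sudo -E python3 llm_dns.py.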
Step 5: Fire It Up!
Start your DNS-LLM hybrid (needs root for port 53):
sudo python3 llm_dns.py
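If you want it to keep running after you close your laptop, the scrappy option is nohup (a proper systemd unit or a tmux session is the nicer move, but this works for a weekend hack):

# Run in the background and keep a log you can tail
sudo nohup python3 llm_dns.py > llm_dns.log 2>&1 &
tail -f llm_dns.log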
Step 6: Test Your Creation
From another machine, send a DNS query and watch the magic:
dig @<your-vps-ip> "what is the meaning of life" TXT +short
You should see the LLM’s response come back through DNS. How cool is that?!
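No dig on hand? Since you already have dnslib, a quick sanity check from any machine with Python works too (a rough sketch; swap in your VPS IP):

from dnslib import DNSRecord

# Build a TXT query, send it over UDP, and print whatever comes back
query = DNSRecord.question("what is the meaning of life", "TXT")
response = DNSRecord.parse(query.send("<your-vps-ip>", 53, timeout=15))
for rr in response.rr:
    print(rr.rdata)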
When Things Go Wrong
Common hiccups:
- Permission denied: remember that sudo for port 53!
- Connection timeout: check your firewall settings
- API errors: double-check that OpenRouter key
- No response: make sure systemd-resolved is actually disabled (and try the quick local check below)
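A debugging trick that saves a lot of head-scratching: query the server from the VPS itself first. If this works but remote queries don’t, the problem is almost certainly a firewall (or your provider blocking port 53) rather than the script:

# Run this on the VPS while llm_dns.py is running
dig @127.0.0.1 "say hi in five words" TXT +short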
Lock It Down (Optional but Smart)
Want to add some basic security? UFW to the rescue:
ufw allow ssh
ufw allow 53
ufw enable
This keeps SSH open (don’t lock yourself out!) and allows DNS queries while blocking everything else.
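To double-check what’s actually exposed after enabling it:

ufw status verbose

(dig uses UDP by default, and a plain ufw allow 53 opens both UDP and TCP, so your queries will get through.)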
The VPS Philosophy
This whole exercise demonstrates why I’m such a VPS evangelist. No complex infrastructure, no weird platform limitations, no mysterious build failures. Just:
- SSH into a server
- Install what you need
- Run your code
- It works!
That’s it. That’s the tweet.
Sure, you might need more sophisticated setups for massive scale, but for 99% of side projects and early-stage startups? VPS all the way. The confidence boost alone is worth it.
Now go forth and deploy weird stuff to the internet! Your future entrepreneurial self will thank you.
What’s the weirdest thing you’ve deployed to a VPS? Drop it in the comments!