<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Self-Hosting on Tomas Tech Lab</title><link>http://tomastechlab.com/tags/self-hosting/</link><description>Recent content in Self-Hosting on Tomas Tech Lab</description><generator>Hugo</generator><language>en-us</language><managingEditor>tomas@tomastechlab.com (Tomas)</managingEditor><webMaster>tomas@tomastechlab.com (Tomas)</webMaster><lastBuildDate>Tue, 21 Apr 2026 17:21:33 +0200</lastBuildDate><atom:link href="http://tomastechlab.com/tags/self-hosting/index.xml" rel="self" type="application/rss+xml"/><item><title>Why I Run AI in My Garden Shed</title><link>http://tomastechlab.com/posts/why-i-run-ai-in-my-garden-shed/</link><pubDate>Tue, 21 Apr 2026 17:21:33 +0200</pubDate><author>tomas@tomastechlab.com (Tomas)</author><guid>http://tomastechlab.com/posts/why-i-run-ai-in-my-garden-shed/</guid><description>&lt;p>Let me start with a confession: I have three NVIDIA Tesla P100 GPUs sitting in an old Dell workstation in my garden shed.&lt;/p>
&lt;p>Yes, a garden shed. Not a datacenter. Not a climate-controlled server room with redundant power supplies and backup generators. A garden shed. With possibly the most uninsulated walls you can imagine.&lt;/p>
&lt;p>And you know what? It works. Surprisingly well.&lt;/p>
&lt;p>The question I get asked most often (or should get asked; most people just assume I&amp;rsquo;m using ChatGPT like everyone else) is why I bother with all this when OpenAI, Anthropic, and the other big players exist. Their models are better. They&amp;rsquo;re faster. And on the surface, they cost pennies per API call.&lt;/p></description></item></channel></rss>