<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>AI on Home</title>
    <link>https://www.stephan.michard.io/tags/ai/</link>
    <description>Recent content in AI on Home</description>
    <generator>Hugo -- gohugo.io</generator>
    <language>en</language>
    <lastBuildDate>Sat, 07 Mar 2026 00:00:00 +0000</lastBuildDate><atom:link href="https://www.stephan.michard.io/tags/ai/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Extending the Local AI Stack with On-Demand GPU Inference on RunPod</title>
      <link>https://www.stephan.michard.io/2026/extending-the-local-ai-stack-with-on-demand-gpu-inference-on-runpod/</link>
      <pubDate>Sat, 07 Mar 2026 00:00:00 +0000</pubDate>
      
      <guid>https://www.stephan.michard.io/2026/extending-the-local-ai-stack-with-on-demand-gpu-inference-on-runpod/</guid>
      <description>In this post, I want to describe how I extended the local AI stack I built in my homelab with on-demand GPU-backed model inference, without adding any GPU hardware to the lab itself.
The two previous posts in this series provide the context for what follows. The homelab post covers the base infrastructure: thin clients, Docker Compose, Traefik, and internal DNS.</description>
    </item>
    
    <item>
      <title>My Local AI Stack: Open WebUI, LiteLLM, SearXNG, and Docling</title>
      <link>https://www.stephan.michard.io/2026/my-local-ai-stack-open-webui-litellm-searxng-and-docling/</link>
      <pubDate>Sat, 14 Feb 2026 00:00:00 +0000</pubDate>
      
      <guid>https://www.stephan.michard.io/2026/my-local-ai-stack-open-webui-litellm-searxng-and-docling/</guid>
      <description>In my previous post about my homelab, I described the foundation I use for self-hosted services: a small set of low-power machines, Docker Compose for deployment, Traefik as the reverse proxy, and internal DNS to expose services with clean HTTPS hostnames. I have been running this setup for several years with very little maintenance overhead. That setup turned out to be a good base not only for classic self-hosting, but also for local AI workloads.</description>
    </item>
    
  </channel>
</rss>
