Geo based latency

September 26, 2025 · Hrvoje Milković

Think about the ultimate self-hosted blog:

  • Geo-routed to the nearest server, so it’s always fast.
  • Highly available to handle traffic spikes and stay online.
  • Automatically secured with HTTPS right out of the box.

My goal is a zero-reliance setup with absolutely no third-party services (e.g., GitHub Pages, Cloudflare, or S3). They’re great, but I want to see exactly what I can build entirely on my own.

Geo latency

The farther data must travel between its source and destination, the higher the propagation latency.

You may wonder how big the latency difference related to location actually is. Since I use Hetzner VPS instances, I really love this project, which visualizes latencies across the different Hetzner data centers.

How does it impact me as a user or service provider?

Users notice it as slow page loads, timeouts, or a sluggish UI, while the service provider needs more compute to keep slow connections open while data transfers complete. A shorter geographic distance is a win for both.

Brainstorming phase

Maybe I could develop a latency-based routing DNS server as a CoreDNS plugin? Yes and no. DNS client-side caching means failover wouldn’t be instantaneous when a server goes down, and the DNS server itself would need to be highly available. My idea was more of an alternative to DNS-based balancing.

Then it came to me: why not redirect by subdomain instead of HTTP path? What if I take two small VPS instances and place one in the US and the other in the EU? If a request lands on any of them, calculate which VPS is closer to the user and redirect them there.

This still doesn’t solve the HA issue on its own, so we need one highly available entry point: the base domain. Only the region serving it needs two servers.

Example of setup:

  • 2x EU VPS for HA (keepalived), both serving eu.example.com and example.com
  • 1x US VPS serving us.example.com
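For the EU pair, a minimal keepalived sketch could look like the following. The interface name, `virtual_router_id`, and the virtual IP 203.0.113.10 are placeholders, not values from my actual setup:

```
# /etc/keepalived/keepalived.conf on the primary EU node
vrrp_instance blog_vip {
    state MASTER            # the backup node uses state BACKUP
    interface eth0
    virtual_router_id 51
    priority 100            # backup node uses a lower priority, e.g. 90
    advert_int 1
    virtual_ipaddress {
        203.0.113.10        # example.com resolves to this virtual IP
    }
}
```

One caveat: on cloud providers such as Hetzner, VRRP’s gratuitous ARP typically doesn’t move the IP by itself, so keepalived is usually paired with a notify script that reassigns a floating IP via the provider’s API.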

Known limitations

All seems perfect, but every decision has trade-offs. In this case:

  1. Users can pin themselves to one subdomain (e.g., by bookmarking it).
  2. If that region is down and a user lands on it, the redirect won’t work because they have bypassed the main entry-point domain.

Depending on the frequency of health checks, some traffic may fail without a retry. For a battle-proven setup, I would still suggest that each region has at least two VPS instances.

Implementation

While a single-service or framework-specific implementation was possible, I opted to develop something reusable: a solution that could benefit various projects and be valuable to more people. This led me to look closely at the extensibility features of current web servers:

  • Nginx Lua -> I didn’t want to do it in Lua
  • Pingora -> Rust is cool, but I need a more general web server with batteries included
  • Caddy server module -> Bingo, it’s even written in Go, which I like

On each request we need to:

  1. Get the geo location (lat, long) for the client’s IP
  2. Get the public serving IPs of the VPS instances from a cache with TTL (we resolve at startup and cache)
  3. Calculate the Haversine distance from the client’s IP to all VPS instance IPs and cache it with a TTL
  4. Select the closest VPS that is healthy and redirect to its subdomain
  5. If all other VPS instances are down, just serve the traffic locally
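Steps 3–5 can be sketched in Go roughly as follows. The `target` struct, `nearestHealthy` function, and the hard-coded coordinates are illustrative assumptions of mine, not the module’s actual API:

```go
package main

import (
	"fmt"
	"math"
)

// target is a hypothetical record for one VPS region: its redirect
// subdomain, resolved coordinates, and last known health status.
type target struct {
	Subdomain string
	Lat, Lon  float64
	Healthy   bool
}

// haversineKm returns the great-circle distance in km between two
// points given in decimal degrees (the formula is shown below).
func haversineKm(lat1, lon1, lat2, lon2 float64) float64 {
	const R = 6371.0
	rad := math.Pi / 180
	dPhi, dLambda := (lat2-lat1)*rad, (lon2-lon1)*rad
	a := math.Sin(dPhi/2)*math.Sin(dPhi/2) +
		math.Cos(lat1*rad)*math.Cos(lat2*rad)*math.Sin(dLambda/2)*math.Sin(dLambda/2)
	return R * 2 * math.Atan2(math.Sqrt(a), math.Sqrt(1-a))
}

// nearestHealthy implements steps 3-5: it returns the subdomain of the
// closest healthy target, or "" when none is healthy, in which case
// the current server should just serve the traffic itself.
func nearestHealthy(clientLat, clientLon float64, targets []target) string {
	best, bestDist := "", math.MaxFloat64
	for _, t := range targets {
		if !t.Healthy {
			continue // step 4 only considers healthy instances
		}
		if d := haversineKm(clientLat, clientLon, t.Lat, t.Lon); d < bestDist {
			best, bestDist = t.Subdomain, d
		}
	}
	return best
}

func main() {
	targets := []target{
		{"eu.example.com", 50.11, 8.68, true},   // Frankfurt-ish
		{"us.example.com", 39.04, -77.49, true}, // Ashburn-ish
	}
	// A client in Paris should be redirected to the EU subdomain.
	fmt.Println(nearestHealthy(48.85, 2.35, targets))
}
```

In the real module, the distance and health lookups would sit behind the TTL caches from steps 2 and 3 rather than being recomputed per request.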

The Haversine formula is used to calculate the great-circle distance between two points on a sphere, such as the Earth, using their longitudes and latitudes.

The formula is as follows:

a = sin²(Δφ/2) + cos(φ₁) · cos(φ₂) · sin²(Δλ/2)
c = 2 · atan2(√a, √(1 − a))
d = R · c

Where:

  • φ is latitude and λ is longitude (in radians).
  • R is the Earth’s radius (e.g., 6371 km or 3959 mi).
  • Δφ = φ₂ − φ₁
  • Δλ = λ₂ − λ₁
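Translated directly into Go (math.Atan2 is the atan2 above, with R = 6371 km), the formula might look like this; the function name and the sample coordinates are my own:

```go
package main

import (
	"fmt"
	"math"
)

const earthRadiusKm = 6371.0 // R

// haversineKm computes the great-circle distance in kilometers between
// two points given in decimal degrees.
func haversineKm(lat1, lon1, lat2, lon2 float64) float64 {
	rad := math.Pi / 180 // degrees -> radians
	phi1, phi2 := lat1*rad, lat2*rad
	dPhi := (lat2 - lat1) * rad    // Δφ
	dLambda := (lon2 - lon1) * rad // Δλ

	// a = sin²(Δφ/2) + cos(φ₁)·cos(φ₂)·sin²(Δλ/2)
	a := math.Sin(dPhi/2)*math.Sin(dPhi/2) +
		math.Cos(phi1)*math.Cos(phi2)*math.Sin(dLambda/2)*math.Sin(dLambda/2)
	// c = 2·atan2(√a, √(1−a));  d = R·c
	c := 2 * math.Atan2(math.Sqrt(a), math.Sqrt(1-a))
	return earthRadiusKm * c
}

func main() {
	// Frankfurt to Ashburn, VA: roughly 6,500 km great-circle.
	fmt.Printf("%.0f km\n", haversineKm(50.11, 8.68, 39.04, -77.49))
}
```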

Diagrams

I’ve created C4 diagrams to provide a clearer visualization of the software architecture.

C4 System Context Diagram

  graph TB
    subgraph "External Systems"
        Users["Users<br/>(Web Clients)"]
        GeoIPDB["GeoIP Database"]
        TargetServers["Target Servers<br/>(eu.example.com, us.example.com)"]
    end

    subgraph "Caddy Geo-Redirect System"
        CaddyServer["Caddy Server<br/>(Web Server with Geo-Redirect Module)"]
    end

    Users -->|"HTTP Requests"| CaddyServer
    CaddyServer -->|"HTTP 302 Redirects"| Users
    CaddyServer -->|"Downloads MMDB"| GeoIPDB
    CaddyServer -->|"Health Checks"| TargetServers
    CaddyServer -->|"Redirects Users"| TargetServers

C4 Container Diagram

  graph TB
    subgraph "Caddy Server Process"
        subgraph "Geo-Redirect Module"
            Middleware["Middleware Handler<br/>(ServeHTTP)"]
            GeoIPEngine["GeoIP Database Engine<br/>(Distance Calculation)"]
            CacheLayer["In-memory cache<br/>(Result Caching)"]
            HealthMonitor["Health Monitor<br/>(Domain Status)"]
        end

        subgraph "Caddy Core"
            HTTPServer["HTTP Server"]
            MetricsSystem["Prometheus Metrics"]
            ConfigParser["Caddyfile Parser"]
        end
    end

    subgraph "External Dependencies"
        MMDBFile["MMDB File<br/>(Local Storage)"]
        TargetDomains["Target Domain IPs<br/>(DNS Resolution)"]
    end

    HTTPServer --> Middleware
    ConfigParser --> Middleware
    Middleware --> GeoIPEngine
    Middleware --> MetricsSystem
    GeoIPEngine --> CacheLayer
    GeoIPEngine --> MMDBFile
    HealthMonitor --> TargetDomains
    GeoIPEngine --> HealthMonitor

Code

caddy-geo-redirect