Shady Baby

finding shade for your little sunshine

How We Handled Our Exponential Growth by Moving the Problem to the User

When I founded Shady Baby, I had a simple vision: no baby should ever have sun in their face during a stroll. What I didn't anticipate was how complicated it gets when you try to do that across 44 countries while also checking the wind.

Today our CTO shipped what we believe is the most significant architectural change in the company's history. We moved the entire data pipeline into the user's browser. Their CPU. Their RAM. Their electricity bill.

We call it "Client-Side Excellence."

The Problem

44 countries available
237M buildings indexed
95M road segments
20M trees mapped

Five network hops per route request. Multi-region compute, distributed caching, upstream data providers falling over under our query volume. Infrastructure costs were growing unsustainably. Our board demanded action.

Our CTO described the v1 architecture as "a distributed system cosplaying as a weekend project."

What We Built (Before We Deleted It)

Our upstream data provider (Overpass API) kept going down. Without building footprints we can't project shadows. Without shadows, babies get sun in their faces. Unacceptable.
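The shadow projection itself is simple geometry. As a minimal sketch (illustrative only, assuming a flat ground plane and a known sun position; this is not the production algorithm):

```python
import math

def shadow_vector(height_m, sun_elevation_deg, sun_azimuth_deg):
    """Length and direction of a building's shadow on flat ground.

    Returns (east, north) displacement in metres from the building's base.
    The shadow points directly away from the sun's azimuth, and its length
    grows as the sun drops toward the horizon.
    """
    length = height_m / math.tan(math.radians(sun_elevation_deg))
    away = math.radians((sun_azimuth_deg + 180.0) % 360.0)  # opposite the sun
    return (length * math.sin(away), length * math.cos(away))
```

With the sun due south at 45° elevation, a 10 m building casts a 10 m shadow pointing due north — which is why the footprints and heights matter.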

So we downloaded the entire OpenStreetMap Europe extract, parsed over a billion nodes through a mmap'd sparse file, and built a custom binary tile format with delta encoding and a deduplicated string dictionary. Normal Saturday.

Building record (13 bytes, down from 28):
  centroid:  2x int16 delta from tile anchor (~1.1m precision)
  height:    uint8 (0.5m steps, max 127.5m)
  bbox:      4x int16 delta

The whole thing is mmap'd. Query time: microseconds. No parsing, no allocation, no GC.
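A sketch of how such a record packs and unpacks, using Python's struct module. The field layout matches the 13-byte description above, but the exact quantum (1.1 m per int16 step) and the metre-relative coordinate convention are assumptions for illustration:

```python
import struct

RECORD = struct.Struct("<hhBhhhh")  # centroid dx,dy · height · bbox x1,y1,x2,y2 = 13 bytes
QUANTUM_M = 1.1                     # metres per int16 step -> ~1.1 m precision

def pack_building(centroid, height_m, bbox, anchor=(0.0, 0.0)):
    """Encode one building as int16 deltas (in metres) from the tile anchor."""
    ax, ay = anchor
    q = lambda v, a: round((v - a) / QUANTUM_M)
    (x1, y1), (x2, y2) = bbox
    return RECORD.pack(q(centroid[0], ax), q(centroid[1], ay),
                       min(255, round(height_m / 0.5)),  # 0.5 m steps, max 127.5 m
                       q(x1, ax), q(y1, ay), q(x2, ax), q(y2, ay))

def unpack_building(buf, anchor=(0.0, 0.0)):
    cx, cy, h, x1, y1, x2, y2 = RECORD.unpack(buf)
    ax, ay = anchor
    d = lambda v, a: a + v * QUANTUM_M
    return ((d(cx, ax), d(cy, ay)), h * 0.5,
            ((d(x1, ax), d(y1, ay)), (d(x2, ax), d(y2, ay))))
```

Because every field is a fixed-width integer, a reader can index straight into the mmap'd file at `record_index * 13` with no parsing pass.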

Our CTO insisted on calling this "a B-tree for babies." Nobody laughed.

Then we self-hosted Valhalla because OSRM's pedestrian profile can't avoid cobblestones. Our CTO was very upset about cobblestones.

{
  "step_penalty": 300,        // 5 min penalty per staircase
  "use_roads": 0.1,           // strongly prefer footways
  "exclude_private": true,    // no trespassing with the stroller
  "use_ferry": 0.0,           // no ferries
  "driveway_factor": 5.0,     // really avoid driveways
  "sidewalk_factor": 0.7      // prefer sidewalks
}

At this point our architecture had: multi-region compute, distributed storage, edge proxy, container orchestration via Kamal, infrastructure-as-code via Terraform, custom data pipelines, tiered caching, third-party API integrations, real-time wind shelter scoring, and UV index monitoring with SPF recommendations.

All of this to tell a parent which side of the street to walk on.

The Realization

Our CTO stared at the architecture diagram for a long time. Then said: "What if the user's laptop built the tiles?"

User's Browser
|
+-- DuckDB WASM --------> Overture Maps S3 (Parquet)
|                         gets road data for 4km radius
|
+-- Valhalla WASM ------> Build routing tiles in memory
|
+-- Valhalla WASM ------> Route with stroller costing

Server count: 0
User's electricity bill: not our problem

Three WASM modules: DuckDB queries Overture Maps directly from S3 (Parquet predicate pushdown means it pulls only a few MB even though the full dataset is 100+ GB). Valhalla mjolnir builds routing tiles in the browser's memory. Valhalla routing computes the route. About 50ms for the routing, a few seconds total including the S3 fetch.
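The predicate pushdown works because Overture's GeoParquet carries per-row bbox columns, so a bounding-box filter lets the Parquet reader skip whole row groups. A sketch of the query shape (the column names follow the published Overture schema; the dataset path and selected columns are placeholders, not a real release URL):

```python
def segments_query(xmin, ymin, xmax, ymax,
                   path="s3://<overture-release>/theme=transportation/type=segment/*"):
    """Build a bbox-filtered SQL query DuckDB can push down into Parquet.

    The standard box-overlap test (min <= query_max AND max >= query_min)
    lets the reader prune row groups using Parquet column statistics.
    """
    return (
        f"SELECT id, class, geometry FROM read_parquet('{path}') "
        f"WHERE bbox.xmin <= {xmax} AND bbox.xmax >= {xmin} "
        f"AND bbox.ymin <= {ymax} AND bbox.ymax >= {ymin}"
    )
```

The same query string works unchanged in DuckDB WASM in the browser and in the DuckDB CLI on a laptop, which is what makes the "no server" version possible.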

Our CTO called this "the ultimate microservice: the user's MacBook Pro."

The Data Pipeline That Doesn't Exist

Previously, updating building data required a 10-step process involving temporary builder instances, billion-node parsing, tile generation, fleet transfers, and rolling restarts.

Now:

1. Overture Maps publishes a monthly update
2. Users get it automatically (DuckDB reads latest Parquet)
3. There is no step 3

Results

Since the migration, our infrastructure costs dropped to zero. Uptime is 100% because there's nothing to go down. On-call rotation dissolved. Our CTO finally sleeps through the night, though that might be because the baby does too, now that we found the shady routes.

Lessons Learned

  1. Overengineering is a feature. Our custom binary tile format with delta encoding saves 50% over naive storage. This matters to exactly no one, but it's very satisfying.
  2. The best server is no server. Every component we moved to the browser was one less thing to pay for, monitor, and get paged about.
  3. WebAssembly changes everything. Running a production routing engine in a browser tab was science fiction five years ago. Now it's a weekend project that somehow took three months.
  4. Wind matters. Nobody asked for wind shelter scoring. But our CTO's baby doesn't like wind in her face, so now the entire scoring algorithm accounts for building-induced wind shadows.
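A toy version of that last point, assuming a simple upwind-occlusion model (the real scoring algorithm is not described here; reach, the 45° cone, and all names are assumptions for illustration):

```python
import math

def sheltered(point, building_centroid, building_height_m,
              wind_bearing_deg, reach=3.0):
    """True if a building plausibly blocks the wind at `point`.

    Coordinates are (east, north) metres. `wind_bearing_deg` is the compass
    bearing the wind blows FROM. A point counts as sheltered when a building
    sits upwind (within a 45-degree cone) and within `reach` building heights.
    """
    dx = building_centroid[0] - point[0]
    dy = building_centroid[1] - point[1]
    dist = math.hypot(dx, dy)
    if dist == 0 or dist > reach * building_height_m:
        return False
    bearing_to_building = math.degrees(math.atan2(dx, dy)) % 360.0
    diff = abs((bearing_to_building - wind_bearing_deg + 180.0) % 360.0 - 180.0)
    return diff <= 45.0
```

Summing this over nearby buildings gives a crude wind-shadow score per road segment, which is one way such a term could feed into route costing.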

We open-sourced the tile builder that makes all of this possible: valhalla-overture. It's an out-of-tree Valhalla plugin that reads Overture Maps GeoParquet directly instead of OSM PBF. Same tile output, completely different input pipeline. It's what powers both our server-side routing and the WASM version.

If you're a parent with a stroller and strong opinions about cobblestones, try shadybaby.app. If you want to see the entire pipeline run in your browser, check out our routing playground.

Mission accomplished

The Founder & CEO, from a park bench in Copenhagen (shady side, obviously)
