Glinto Blog

Why We Built Glinto on the Cloudflare Edge

How running analytics at the edge cuts latency, reduces infrastructure costs, and keeps data within European borders.

Glinto Engineering

When we started building Glinto, the first architectural decision we faced was where the analytics pipeline should live. The obvious choice was a traditional server stack — a load balancer, a few application servers, a message queue, and a time-series database somewhere in a central European data centre. It is a proven pattern, but it is also a pattern that introduces latency, complexity, and a single point of failure we were not willing to accept.

Instead, we chose to run Glinto entirely on the Cloudflare Edge. Every request — every page view, every event, every aggregation — is handled by a Worker running in one of Cloudflare’s 300+ points of presence. The data never has to travel to a central server before it is processed. For a privacy-first analytics product, that decision turned out to be more impactful than we initially expected.
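The shape of that collector is simple. Below is a minimal sketch, loosely modelled on a Cloudflare Worker fetch handler — the `/px` endpoint name and payload fields are hypothetical, not Glinto's actual API:

```typescript
// Minimal edge-collector sketch. In a deployed Worker this object would be
// the module's default export; the /px route and payload shape are
// illustrative assumptions.
type EventPayload = { path: string; referrer?: string };

const collector = {
  async fetch(request: Request): Promise<Response> {
    const url = new URL(request.url);
    if (url.pathname !== "/px" || request.method !== "POST") {
      return new Response("not found", { status: 404 });
    }
    const event = (await request.json()) as EventPayload;
    // A real Worker would persist the event in the same point of presence
    // (e.g. into KV or a queue); here we only acknowledge receipt.
    console.log("event:", event.path);
    // 204 No Content keeps the pixel response as small as possible.
    return new Response(null, { status: 204 });
  },
};
```

Because the handler runs in whichever point of presence received the request, there is no hop to a central collector before the response goes out.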

Latency Is a Feature

Traditional analytics scripts load from a central CDN, then send data back to a central collector. Even with a fast CDN, that round-trip can add 50–150 ms of latency for users outside the data centre’s region. For a lightweight analytics pixel, that is unacceptable. Our edge-first approach means the collector is literally in the same data centre as the user’s ISP. In practice, the Glinto pixel responds in under 10 ms for the vast majority of requests. That speed is not just a nice-to-have — it is the difference between a pixel that blocks rendering and one that is invisible to the user experience.

Data Residency by Design

GDPR compliance is not just about consent banners and privacy policies. It is about where data lives, who can access it, and how long it is retained. By leveraging Cloudflare’s European edge nodes and KV stores with regional restrictions, we can guarantee that raw event data never leaves the EU unless explicitly configured otherwise. There is no central database in the US. There is no cross-border replication by default. The edge is not just fast — it is geographically precise.
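As one concrete mechanism: Cloudflare Durable Objects accept a jurisdiction option (e.g. `namespace.newUniqueId({ jurisdiction: "eu" })`) that pins an object's state to EU data centres. The routing helper below is a hypothetical sketch of how a collector might decide when to request that pinning — the country list is illustrative, not exhaustive:

```typescript
// Hypothetical helper: decide whether event state should be pinned to the
// EU jurisdiction. A real implementation would cover all EU/EEA countries;
// this set is deliberately abbreviated for illustration.
const EU_COUNTRIES = new Set(["DE", "FR", "NL", "SE", "IE", "ES", "IT", "PL"]);

function pickJurisdiction(countryCode: string): "eu" | null {
  // In a Worker, the country code typically comes from request.cf.country.
  return EU_COUNTRIES.has(countryCode.toUpperCase()) ? "eu" : null;
}
```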

Cost Structure That Scales Linearly

Cloudflare Workers are billed per request, not per hour of uptime. For an analytics product, that is a perfect match. A small blog might send a few thousand requests per month. A high-traffic e-commerce site might send millions. In both cases, the cost scales with actual usage, not with provisioned capacity. We have run load tests simulating ten million events per day and the infrastructure cost stays in the double-digit euro range. That efficiency lets us offer generous free tiers without subsidising them with venture capital.
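The arithmetic is easy to sanity-check. The sketch below assumes Cloudflare's published Workers paid-plan pricing at the time of writing (US$5/month including 10 million requests, then US$0.30 per additional million) — check current pricing before relying on these numbers:

```typescript
// Rough monthly cost estimate for per-request billing.
// Assumed pricing (verify against Cloudflare's current price list):
//   base fee: $5/month, includes 10M requests
//   overage:  $0.30 per additional million requests
function estimateMonthlyCostUSD(requestsPerMonth: number): number {
  const BASE_FEE = 5;
  const INCLUDED = 10_000_000;
  const PER_MILLION = 0.3;
  const extra = Math.max(0, requestsPerMonth - INCLUDED);
  const cost = BASE_FEE + (extra / 1_000_000) * PER_MILLION;
  return Math.round(cost * 100) / 100;
}
```

Ten million events per day is roughly 300 million requests per month, which under these assumptions lands in the low-to-mid double digits of dollars — consistent with what we saw in our load tests.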

What We Gave Up

No architecture is without trade-offs. Running on the edge means we cannot rely on long-running processes or in-memory state. Every Worker invocation is stateless and has a strict CPU limit. That forced us to design our aggregation pipeline around idempotent, parallelisable operations. We use KV for ephemeral counters, Durable Objects for session-level state, and R2 for long-term archival. It is a different mental model from a traditional server, but once the patterns click, the resulting system is simpler and more resilient.
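The key property of those aggregation operations is idempotence: if the edge retries an event, the count must not change. A minimal sketch of the idea, with an in-memory `Set` standing in for the deduplication state we actually keep at the edge:

```typescript
// Idempotent counter sketch: recording the same event ID twice leaves the
// count unchanged, so retries and at-least-once delivery are safe.
// The in-memory Set is a stand-in for real edge storage.
class IdempotentCounter {
  private count = 0;
  private seen = new Set<string>();

  record(eventId: string): number {
    if (!this.seen.has(eventId)) {
      this.seen.add(eventId);
      this.count += 1;
    }
    return this.count;
  }
}
```

Because each operation is keyed by event ID, many Workers can apply the same stream of events in parallel without double counting.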

The Road Ahead

We are now experimenting with edge-side real-time dashboards. The idea is to pre-compute aggregations inside the Worker itself and stream them to connected clients via WebSockets, again without ever touching a central server. If it works — and early prototypes are promising — it will mean sub-second dashboard updates for every Glinto customer, regardless of scale.
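The core of that idea is folding each event into a running snapshot as it arrives, so a dashboard update is just the current snapshot rather than a database query. A sketch under the assumption of a very simple snapshot shape (the field names are hypothetical):

```typescript
// Fold an incoming page view into a running aggregation snapshot.
// Returning a new object keeps the fold side-effect free, which makes it
// easy to reason about under concurrent Worker invocations.
type Snapshot = { pageViews: number; byPath: Record<string, number> };

function fold(snap: Snapshot, path: string): Snapshot {
  return {
    pageViews: snap.pageViews + 1,
    byPath: { ...snap.byPath, [path]: (snap.byPath[path] ?? 0) + 1 },
  };
}
```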

Building on the edge felt like a gamble two years ago. Today, it feels like the only sensible way to build infrastructure that is fast, private, and affordable. We are glad we bet on it.