Day 250: Edge Functions - Compute at the CDN Edge

Lesson 022 · 30 min · Intermediate
Module: Caching, Workers, and Performance

Edge compute is valuable when the decision is small, latency-sensitive, and worth moving closer to the request.


Today's "Aha!" Moment

The insight: A CDN caches content near users; edge functions push some logic there too. The important design question is not "Can we run code at the edge?" but "Which decisions become materially better if they happen before the request reaches origin?"

Why this matters: Teams often hear "compute at the edge" and imagine the edge as a smaller server region. That is the wrong model. Edge runtimes are valuable precisely because they are constrained, close to users, and integrated into the request path at many PoPs.

The universal pattern: latency-sensitive request/response decision -> move lightweight logic toward the requester -> avoid origin round trips for work that does not need origin authority.

Concrete anchor: A request arrives from a user in Madrid. Before the origin is contacted, the edge can already decide whether to redirect, normalize headers, enforce a country policy, personalize by lightweight cookie state, or choose a cache key variant. If that logic waited for origin, the system would spend a full remote round trip on a decision that could have been taken locally.
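The anchor above can be sketched as a single local decision function. This is a platform-neutral illustration, not a real edge runtime API: the request is a plain object, and the country code and paths are assumptions for the example.

```javascript
// Sketch of an edge-side decision taken before any origin contact.
// Inputs are plain objects; country/path values are illustrative assumptions.
function decideAtEdge(request) {
  // Enforce a country policy without an origin round trip.
  if (request.country === "ES" && request.path === "/promo") {
    return { action: "redirect", location: "/es/promo" };
  }
  // Normalize a header so downstream caching sees fewer variants.
  if (request.headers["accept-language"]) {
    const lang = request.headers["accept-language"].split(",")[0].split("-")[0];
    return { action: "forward", normalizedLanguage: lang };
  }
  // No local decision applies: pass through to cache/origin as-is.
  return { action: "forward" };
}
```

Every branch here uses only data that arrived with the request, which is exactly what makes the decision safe to take before origin is contacted.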

How to recognize when this applies:

  1. The decision is small and fast, not a full business workflow.
  2. The decision is latency-sensitive: it sits on the request path for every user.
  3. The decision does not need origin authority: inputs that arrive with the request, such as headers, cookies, or geo information, are enough.

Common misconceptions:

  1. "The edge is just a smaller server region." It is a constrained, widely replicated request-path runtime, not a mini origin.
  2. "Moving logic closer always makes it faster." If the logic still calls back to origin state, proximity buys little.

Real-world examples:

  1. Request shaping: Header rewrites, redirects, A/B assignment, geo/device policy, and auth prechecks.
  2. Response shaping: Injecting lightweight headers, selecting cache variants, or applying safe personalization before content returns to the user.
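Both shaping directions can be shown as plain functions. This is a minimal sketch: the cookie-derived bucket value, the hash scheme, and the `x-ab-bucket` header name are assumptions for illustration, not a platform convention.

```javascript
// Request shaping: deterministic A/B assignment from a stable cookie value,
// so the same user always lands in the same bucket without origin help.
function assignBucket(cookieValue, buckets = 2) {
  let hash = 0;
  for (const ch of cookieValue) hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  return hash % buckets === 0 ? "A" : "B";
}

// Response shaping: inject a lightweight header before the response
// returns to the user, leaving the body untouched.
function shapeResponse(headers, bucket) {
  return { ...headers, "x-ab-bucket": bucket };
}
```

The assignment is deterministic, which matters at the edge: with many PoPs and no shared state, the same user must get the same answer from every location.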

Why This Matters

The problem: Many requests reach origin only to perform a small decision that could have been taken much earlier. That wastes distance, increases origin load, and makes the system pay global latency for local logic.

Before: Every request travels all the way to origin, which makes a small decision (a redirect, a policy check, a variant choice) and only then serves or fetches content. The user pays a full remote round trip for logic that needed no origin data.

After: The edge takes the small decision locally, and origin is contacted only when its authority or its data is actually required.

Real-world impact: Done well, edge functions reduce latency, protect origin, and make global behavior more consistent. Done badly, they create duplicated business logic, hidden platform constraints, and hard-to-debug multi-layer behavior.


Learning Objectives

By the end of this session, you will be able to:

  1. Explain why edge compute exists - Connect request-path latency and policy locality to moving logic out of origin.
  2. Describe how edge functions fit into the CDN path - Reason about request interception, cache interaction, and constrained runtimes.
  3. Evaluate practical trade-offs - Decide which logic belongs at the edge, which belongs at origin, and what risks appear when the boundary is wrong.

Core Concepts Explained

Concept 1: Edge Functions Are About Decision Placement, Not Just Faster Code

The key design shift is this: stop asking "can we run this code at the edge?" and start asking "does this decision get materially better by happening before the request reaches origin?"

That means edge compute is fundamentally about where a decision lives.

Typical good fits include:

  1. Redirects and URL rewrites.
  2. Header normalization and lightweight auth prechecks.
  3. A/B bucket assignment and geo/device policy.
  4. Cache key and variant selection.

These are all cases where:

  1. The decision is small and cheap to compute.
  2. The inputs arrive with the request itself.
  3. Waiting for origin would cost a round trip without adding any authority.

The edge is therefore not valuable because it offers huge compute power. It is valuable because it offers proximity.

The trade-off is immediate: in exchange for proximity, you accept a constrained runtime with tight limits on compute, memory, and state, and you give up easy access to authoritative origin data.

This is why edge functions should be seen as request-path policy engines, not as general-purpose backend replacement.

Concept 2: The Edge Runtime Sits in a Constrained Part of the Request Pipeline

An edge function typically executes around a request/response event:

user -> edge request phase -> cache lookup / origin decision -> origin if needed -> edge response phase -> user
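The pipeline above can be sketched as a single handler. This is a platform-neutral sketch under stated assumptions: `cache` and `fetchOrigin` are injected stand-ins (a `Map` and an async function in the test), not a real edge platform's cache or fetch API, and the `/old` redirect is an illustrative rule.

```javascript
// Minimal sketch of the edge pipeline: request phase, cache lookup,
// origin-if-needed, response phase. `cache` and `fetchOrigin` are stand-ins.
async function handle(request, cache, fetchOrigin) {
  // Edge request phase: cheap, local decisions first.
  if (request.path === "/old") {
    return { status: 301, headers: { location: "/new" }, body: "" };
  }

  // Cache lookup / origin decision.
  const key = request.path;
  let response = cache.get(key);
  if (!response) {
    // Origin is contacted only when the edge cannot answer locally.
    response = await fetchOrigin(request);
    cache.set(key, response);
  }

  // Edge response phase: lightweight response shaping before returning.
  return { ...response, headers: { ...response.headers, "x-served-by": "edge" } };
}
```

Note that the redirect branch returns before the cache is even consulted: a local decision short-circuits the rest of the pipeline entirely.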

Different platforms expose slightly different hooks, but the important ideas are stable:

  1. Code can run on the request before the cache lookup, shaping the key or short-circuiting with a local response.
  2. Code can run on the response before it returns to the user, adjusting headers or selecting a variant.
  3. The function influences whether origin is contacted at all.

This placement is powerful, but it comes with strong constraints, which on most platforms include:

  1. Tight limits on CPU time and memory per invocation.
  2. Restricted APIs and no durable local state.
  3. Limits on how long a request may be held open.

Those constraints are not accidental. They are what let the platform run code across many PoPs safely and quickly.

The good mental model is: a fast, stateless policy hook on the request path, not a small origin server that happens to be nearby.

That means any heavy work becomes expensive fast. If the function blocks on deep origin state or performs large computation, it loses the very advantage edge placement was supposed to buy.

This is also where the relationship to caching becomes interesting:

  1. Edge code can choose which cache key variant a request maps to.
  2. It can set or rewrite the headers that control cacheability.
  3. It can decide whether a request bypasses the cache entirely.

So edge functions are often not separate from caching. They are the logic that decides how the cache behaves for a given request class.
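A concrete way to see "edge code decides how the cache behaves" is a function that computes the cache key itself. This is a sketch under assumptions: the device-class regex and the `beta=1` cookie flag are illustrative dimensions, chosen to show deliberately coarse keying.

```javascript
// Sketch: edge logic picks the cache key variant for a request class.
// Coarse dimensions keep the key space (and variant count) small.
function cacheKeyFor(request) {
  const parts = [request.path];
  // Vary on coarse device class, not the full User-Agent string.
  parts.push(/Mobile/.test(request.headers["user-agent"] || "") ? "mobile" : "desktop");
  // Vary on a single feature-flag cookie, ignoring all other cookies.
  if ((request.headers.cookie || "").includes("beta=1")) parts.push("beta");
  return parts.join("|");
}
```

Keying on the full User-Agent or the whole cookie jar would fragment the cache into near-unique entries; collapsing them to two or three coarse dimensions is the kind of decision this layer exists to make.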

Concept 3: The Real Question Is What Must Stay Authoritative at Origin

The edge can be close, but it is not usually the source of truth.

That means the best edge functions are the ones that can act correctly with:

  1. The request itself: path, headers, cookies, and geo information.
  2. Small, replicated configuration that tolerates being slightly stale.
  3. Self-validating inputs, such as signed tokens, that need no origin lookup.

The edge is a poor fit when the decision requires:

  1. Authoritative writes or state transitions.
  2. Strongly consistent reads of origin data.
  3. Large data fetches or complex, multi-step business workflows.

This is the crucial boundary: the edge may decide, but origin remains the source of truth. Edge logic should act on inputs it already holds; anything that must consult or mutate authoritative state belongs behind that boundary at origin.

When teams ignore that boundary, two failure modes appear:

  1. They duplicate business logic across edge and origin.
  2. They make the edge wait on origin so often that the edge stops buying real latency improvement.

This is why edge architecture is mostly about subtraction. The goal is not to move "as much as possible" to the edge. It is to move only the work whose value clearly comes from proximity.
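The subtraction discipline can be made explicit as a placement check: a decision qualifies for the edge only if every input it needs is locally available. The input names below are illustrative assumptions, not a platform feature.

```javascript
// Sketch: a decision runs at the edge only when all of its required
// inputs are locally available. Input names are illustrative.
const LOCAL_INPUTS = new Set(["headers", "cookies", "geo", "path", "cachedConfig"]);

function canRunAtEdge(requiredInputs) {
  // Anything needing origin-authoritative state (writes, strongly
  // consistent reads, large data) disqualifies edge placement.
  return requiredInputs.every((input) => LOCAL_INPUTS.has(input));
}
```

Teams rarely encode this as literal code, but asking the question per decision, rather than per endpoint, is what keeps edge and origin logic from duplicating each other.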

That perspective also sets up the next lessons: once edge logic shapes cache keys and variants, invalidation has more copies, and more shapes of copies, to manage, which is exactly where cache purging strategies come in.


Troubleshooting

Issue: "We moved logic to the edge, but latency barely improved."

Why it happens / is confusing: Teams assume proximity alone guarantees a win.

Clarification / Fix: Check whether the function still depends on origin or expensive remote state. If the logic cannot decide locally, the edge may simply add another layer rather than remove real latency.

Issue: "Edge functions can replace most backend endpoints."

Why it happens / is confusing: Marketing around edge platforms emphasizes flexibility.

Clarification / Fix: The edge is strongest for lightweight request-path decisions. Authoritative writes, large data fetches, and complex business workflows usually still belong at origin or in regional services.

Issue: "Cache bugs and edge-function bugs are unrelated."

Why it happens / is confusing: One sounds like compute and the other like storage.

Clarification / Fix: Edge code often changes cache keys, headers, routing, and cacheability. A bad edge decision can therefore create cache fragmentation, leaks, or stale behavior indirectly.


Advanced Connections

Connection 1: Edge Functions <-> CDN Fundamentals

The parallel: The previous lesson explained the CDN as a global shared cache and routing layer. Edge functions add programmability to that same layer so it can make request-specific decisions before origin.

Real-world case: A CDN that only caches can improve static reuse; a CDN with edge compute can also tailor routing, keys, and policy per request class.

Connection 2: Edge Functions <-> Cache Purging and Invalidation

The parallel: Once logic at the edge influences cache keys and content variants, invalidation becomes harder because the system has more copies and more shapes of copies to reason about.

Real-world case: A small header rewrite in an edge function can change the effective key space and therefore the purge blast radius later.
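The header-rewrite case can be made concrete. This sketch assumes a short supported-language allow-list and an `en` default bucket; bucketing `Accept-Language` this way bounds the number of cached variants, and therefore the purge blast radius later.

```javascript
// Sketch: normalizing Accept-Language at the edge shrinks the effective
// cache key space from thousands of raw header values to a few buckets.
const SUPPORTED = ["en", "es", "fr"]; // illustrative allow-list

function normalizeLanguage(acceptLanguage) {
  // Keep only the primary subtag of the first listed language.
  const primary = (acceptLanguage || "").split(",")[0].split("-")[0].toLowerCase();
  return SUPPORTED.includes(primary) ? primary : "en"; // assumed default bucket
}
```

The inverse is the trap: an edge function that varies on the raw header value silently multiplies the variants every later purge has to reason about.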




Key Insights

  1. Edge compute is about placement of logic - The main win comes from deciding close to the user, not from raw compute power.
  2. The edge is a constrained policy surface - Its limits are part of the value because they keep request-path execution fast and globally deployable.
  3. Origin still owns deep truth - The best edge functions remove only the work that benefits from proximity and does not require authoritative state transitions.

PREVIOUS CDN Fundamentals - Global-Scale Content Delivery NEXT Cache Purging Strategies - CDN Cache Invalidation
