Day 067: CDN and HTTP Caching Layers

Some of the highest-value cache hits happen before the backend runs at all, which is why HTTP caching and CDN layers are really about using the network and the protocol as part of your cache strategy.


Today's "Aha!" Moment

After talking about in-process caches and Redis, it is easy to think caching is something the backend does for itself. HTTP caching changes the perspective. Here the cache may live in the browser, in a proxy, or in a CDN edge location far away from your origin. The backend still controls the policy, but the actual reuse happens out in the protocol and the network.

That matters because many requests should never have to reach your application servers in the first place. If a user revisits the learning platform homepage, the browser may already have the JavaScript bundle. A nearby CDN edge may already have the hero image. Even some semi-static HTML or public API responses may be reusable for a short time or revalidated cheaply. In those cases the fastest backend request is the one that never gets sent to the origin at all.

That is the aha. HTTP caching is not "Redis but in the browser." It is a contract between origin, intermediaries, and clients about what may be reused, by whom, for how long, and under which validation rules. CDN caching is the same idea pushed outward to geographically distributed edge nodes.

Once you see that, the important design questions become much sharper. Is this content public or user-specific? Is it immutable or just slow-changing? Can it be cached for a while, or only revalidated? Those questions matter more here than raw memory performance, because the cache lives in the delivery path itself.


Why This Matters

The problem: Many backends spend origin CPU, bandwidth, and latency budget serving bytes that could have been reused safely much closer to the user.

Before: Every request, including static assets and public pages, travels all the way to the origin, and the backend spends CPU and bandwidth re-serving identical bytes.

After: The browser and the CDN edge reuse responses that are safe to share, so only requests that genuinely need origin work reach the application servers.

Real-world impact: Better user-perceived speed, lower origin load, lower bandwidth cost, and a more scalable delivery path for public content and static assets.


Learning Objectives

By the end of this session, you will be able to:

  1. Explain what makes HTTP caching different - Understand why browser and edge caches are part of the delivery path, not just backend internals.
  2. Reason about cacheability and validation - Distinguish immutable caching, freshness windows, and validation-based reuse.
  3. Choose the right layer for reuse - Decide which responses belong in the browser, at the CDN edge, or only at the origin.

Core Concepts Explained

Concept 1: Browser, CDN, and Origin Form a Delivery Hierarchy with Reuse at Each Layer

Start with the path of one request for the homepage. The browser may already hold the stylesheet locally. If not, the request may hit a CDN edge that has the image or JavaScript bundle cached. Only if those layers cannot satisfy the request does the origin backend need to serve the bytes itself.

That means HTTP delivery naturally creates a cache hierarchy:

browser cache
   -> CDN / edge cache
      -> origin backend

Each layer that can reuse a valid response saves two things at once:

  1. Latency - The response comes from somewhere closer to the user.
  2. Origin work - The request and its bytes never have to cross the expensive path to the backend.

That is why CDN and browser caching are so powerful for public content. They are not just faster storage. They are fewer origin round trips and fewer bytes crossing the expensive path to your backend.

The trade-off is that you now need to think carefully about who is allowed to reuse the response and what "fresh enough" means at multiple layers, not just at the origin.
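The cascade above can be sketched as a lookup that tries each layer in order and populates the closer layers on the way back. This is a minimal illustration of the hierarchy, not a real browser or CDN API; all names here are made up for the example.

```python
# Sketch of the browser -> CDN edge -> origin lookup cascade.
# Caches are plain dicts; a real cache would also track freshness.

def fetch(url, browser_cache, edge_cache, origin):
    """Return (body, layer_that_served_it), checking caches in order."""
    if url in browser_cache:
        return browser_cache[url], "browser"
    if url in edge_cache:
        body = edge_cache[url]
        browser_cache[url] = body      # populate the closer layer on the way back
        return body, "edge"
    body = origin(url)                 # only now does the origin do any work
    edge_cache[url] = body
    browser_cache[url] = body
    return body, "origin"

browser, edge = {}, {}
origin_calls = []

def origin(url):
    origin_calls.append(url)           # count how often the backend is hit
    return f"<bytes for {url}>"

fetch("/app.js", browser, edge, origin)   # first request: served by the origin
fetch("/app.js", browser, edge, origin)   # repeat: browser hit, origin untouched
```

Notice that after the first request, the origin is never contacted again for that URL; both the latency saving and the load saving fall out of the same lookup order.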

Concept 2: HTTP Headers Are the Policy Language That Governs Reuse and Validation

In Redis, the application controls the cache directly. In HTTP caching, the origin expresses rules through headers and validators. The browser and intermediaries then enforce those rules.

The main ideas are:

  1. Freshness - Cache-Control directives such as max-age define how long a response may be reused without asking the origin.
  2. Sharability - public and private control whether shared caches like CDNs may store the response, or only the user's own browser.
  3. Validation - Validators such as ETag and Last-Modified let a cache ask "is this still current?" and receive a cheap 304 Not Modified instead of the full body.

For example, a versioned JavaScript bundle might be sent with a very long lifetime because the file name changes on deploy. A homepage HTML response might get a short freshness window or rely more on validation. A personalized account page may be marked as private or non-cacheable for shared layers.

Cache-Control: public, max-age=300
ETag: "homepage-v18"

That tiny header pair expresses an important contract. For five minutes, shared caches may reuse the response. After that, they may need to validate whether the origin has changed it. The browser does not need to guess. The policy is explicit.

This is the key shift from backend-local caching: the protocol itself becomes the control surface. Good HTTP caching depends on intentionally classifying responses and attaching the right reuse semantics to each one.

The trade-off is explicitness versus convenience. If you do not think through the headers, caches will behave poorly or conservatively. If you do think them through, the network starts doing useful work for you.
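The freshness-then-validation contract behind that header pair can be sketched in a few lines. This is an illustrative model, not a real HTTP cache: the helper names are invented, and a real implementation would parse max-age out of the Cache-Control header rather than hard-code it.

```python
# Toy model of "Cache-Control: public, max-age=300" plus an ETag validator.
import time

def make_response(etag):
    return {
        "headers": {"Cache-Control": "public, max-age=300", "ETag": etag},
        "stored_at": time.time(),
    }

def is_fresh(cached, now=None):
    """Within max-age, a cache may reuse the response without contacting the origin."""
    now = now if now is not None else time.time()
    max_age = 300  # hard-coded here; parsed from Cache-Control in a real cache
    return (now - cached["stored_at"]) < max_age

def revalidate(cached, current_etag):
    """After expiry, a conditional request (If-None-Match) can yield a cheap
    304 Not Modified instead of a full redownload."""
    if cached["headers"]["ETag"] == current_etag:
        return 304   # origin says: keep using the stored body
    return 200       # origin sends a new body and a new ETag

resp = make_response('"homepage-v18"')
assert is_fresh(resp)                           # reusable for five minutes
assert revalidate(resp, '"homepage-v18"') == 304
assert revalidate(resp, '"homepage-v19"') == 200
```

The point of the sketch is the two-phase structure: during the freshness window no request is sent at all, and after it expires the worst case is usually a tiny 304 exchange rather than the full payload.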

Concept 3: Cacheability Comes from Content Semantics, Especially Public vs Private and Immutable vs Changing

The most important cache design question here is not "Can the CDN store this?" It is "What kind of content is this, and who may safely reuse it?"

That leads to a very practical classification:

  1. Public and immutable - Versioned assets whose file names change on deploy; safe to cache for a very long time.
  2. Public but slow-changing - Homepages or public read endpoints; give them a short freshness window or rely on validation.
  3. Private or personalized - Account and billing pages; mark them private or non-cacheable so shared layers never reuse them.

Take one learning-platform example: a versioned JavaScript bundle, the public homepage, and a signed-in account page all travel over the same protocol, but they carry different content semantics and therefore deserve different cache policies.

This is where students often overgeneralize. If one piece of content can change, they conclude nothing should be cached. If one page is public, they conclude everything may be shared at the edge. Both mistakes come from ignoring content meaning.

The trade-off is reuse versus correctness and privacy. CDN and HTTP caching work best when content classes are explicit and the policy follows those classes closely.
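One way to make that classification explicit is to map each content class to a header value and decide the class per route. This is a toy classifier under invented rules (the filename convention and route names are assumptions for the example); real systems usually attach the policy per endpoint or response type.

```python
# Illustrative mapping from content class to Cache-Control policy.
POLICIES = {
    "public-immutable":     "public, max-age=31536000, immutable",  # versioned bundle
    "public-slow-changing": "public, max-age=300",                  # homepage HTML
    "private":              "private, no-store",                    # account page
}

def policy_for(path, signed_in_user=None):
    """Toy rules: personalized responses are private; versioned assets
    (e.g. app.v18.js) are immutable; everything else gets a short window."""
    if signed_in_user is not None:
        return POLICIES["private"]
    if path.endswith((".js", ".css")) and ".v" in path:
        return POLICIES["public-immutable"]
    return POLICIES["public-slow-changing"]

assert policy_for("/assets/app.v18.js") == "public, max-age=31536000, immutable"
assert policy_for("/") == "public, max-age=300"
assert policy_for("/account", signed_in_user="u42") == "private, no-store"
```

The key design choice is that the policy follows the content class, not the other way around: once a response is classified, its headers fall out mechanically.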


Troubleshooting

Issue: Treating all HTTP responses as if they deserve the same cache policy.

Why it happens / is confusing: The protocol looks uniform, so it is easy to forget that assets, pages, and personalized data have very different reuse semantics.

Clarification / Fix: Classify responses first by public/private, immutable/changing, and user-visible freshness needs. Then assign headers and CDN behavior to each class deliberately.

Issue: Thinking CDN caching is just "edge Redis."

Why it happens / is confusing: Both are caches outside the database, so they can sound like the same idea with different branding.

Clarification / Fix: Remember the difference: Redis is an application-managed shared cache; HTTP/CDN caching is governed by protocol semantics and sits in the delivery path between origin and client.


Advanced Connections

Connection 1: HTTP Caching ↔ Backend Load Reduction

The parallel: Every valid browser or CDN hit is one request and one payload the origin no longer needs to serve.

Real-world case: Public landing pages, media-heavy catalogs, and documentation sites often scale more through edge reuse than through origin compute growth alone.

Connection 2: HTTP Caching ↔ API Design

The parallel: Stable resource semantics and deliberate response classes make cache policy much easier to express cleanly.

Real-world case: Versioned assets, public thumbnails, and clearly scoped public read endpoints often lead to far cleaner CDN and browser-cache behavior than overly mixed or personalized responses.




Key Insights

  1. HTTP caching moves reuse into the delivery path - The browser and the edge can often satisfy requests before the origin is involved.
  2. Headers are the cache policy language - Freshness, validation, and sharability are expressed through explicit HTTP semantics.
  3. Content meaning decides policy - Public, immutable, slow-changing, and personalized responses should not be treated as the same cache class.

Knowledge Check (Test Questions)

  1. Why can a CDN improve performance even when the origin is already optimized?

    • A) Because it can satisfy reusable responses closer to the user and reduce trips to the origin.
    • B) Because it removes the need to think about freshness rules.
    • C) Because it automatically makes all content safe to share publicly.
  2. What is one useful role of an ETag?

    • A) It lets a client or intermediary validate whether a cached response is still current without always redownloading the full content.
    • B) It tells the CDN to ignore cache-control rules.
    • C) It guarantees a response may be cached forever.
  3. Which response is usually the worst candidate for aggressive public caching?

    • A) A personalized billing page for a signed-in user.
    • B) A versioned JavaScript bundle.
    • C) A public course thumbnail image.

Answers

1. A: A CDN helps because it can answer some requests from a closer location, which improves latency and reduces origin work.

2. A: ETag is a validator, which makes cheap revalidation possible when a cache wants to confirm freshness.

3. A: Personalized or sensitive responses usually should not be reusable by shared public caches because privacy and correctness risks are much higher.


