WebAssembly Beyond the Browser: A 2021 Progress Report

webassembly wasm edge-computing serverless

Sixteen months after my first Wasm post, here's what's actually moved. WASI is still early, but edge computing and plugin systems are turning into real use cases.

Quick take

Wasm outside the browser has moved from “interesting idea” to “early production use” in about a year. Edge platforms like Cloudflare Workers and Fastly Compute@Edge are the clearest wins. Plugin systems (Envoy filters, extensible platforms) are real. WASI is still immature – no networking, limited filesystem. Don’t use Wasm to replace containers. Use it where sandboxing, fast startup, and portability actually matter.


I wrote about WebAssembly in March 2020 when the server-side story was mostly theoretical. Sixteen months later, it’s time for an honest progress report. Some things moved faster than I expected. Some haven’t moved at all.

What’s actually changed

The biggest shift is that Wasm is showing up in production, not just conference demos. Not everywhere. Not for everything. But in specific niches where the tradeoffs make sense.

Edge computing is the clearest win. Cloudflare Workers has been running Wasm workloads for a while, and Fastly’s Compute@Edge is built on Wasmtime. The value proposition is straightforward: Wasm modules start in microseconds, use minimal memory, and run in a sandbox. For edge compute – where you need code running close to users, starting fast, and tearing down quickly – that beats spinning up a container.

I’ve been using Compute@Edge for a few side projects. The developer experience is decent. You write Rust or AssemblyScript, compile to Wasm, deploy. Cold starts are genuinely fast. The limitation is the restricted API surface – you can handle HTTP request/response flows, but anything much beyond that gets painful.

Plugin systems are the second real use case. Envoy’s Wasm filter support is now production-grade. Teams can ship custom proxy logic – rate limiting, header manipulation, auth checks – without rebuilding the Envoy binary. At one company, we evaluated this for custom traffic-routing rules that varied per tenant. The alternative was maintaining a fork of Envoy. Wasm filters were dramatically simpler to deploy and update.

The pattern works for any platform that needs user-extensible logic in a safe sandbox. Think database UDFs, SaaS platform plugins, policy engines. The host controls resources and data access. The plugin runs in a box with no ambient authority. Language choice is flexible because Wasm doesn’t care what you compiled from.
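The shape of that host/plugin contract is easy to sketch in plain Go. Everything below is illustrative and the names are hypothetical — in a real system the `Plugin` call would cross into a Wasm module via an embedded runtime rather than a Go function value — but the capability structure is the same: the host hands the plugin one narrow object, and that object is the plugin's entire world.

```go
package main

import (
	"errors"
	"fmt"
)

// HostAPI is the narrow surface the host chooses to expose. A plugin
// can call nothing else: no filesystem, no network, no ambient authority.
type HostAPI struct {
	allowedKeys map[string]string
}

// GetConfig is the only capability granted here: read-only access to a
// host-approved set of keys.
func (h *HostAPI) GetConfig(key string) (string, error) {
	v, ok := h.allowedKeys[key]
	if !ok {
		return "", errors.New("key not exposed to plugin: " + key)
	}
	return v, nil
}

// Plugin is the contract tenant-specific logic must satisfy. With Wasm,
// this boundary is enforced by the runtime, not by convention.
type Plugin func(host *HostAPI, request string) (verdict string, err error)

// tenantPlugin stands in for user-provided logic: allow a request only
// when it matches the tenant the host configured for this plugin.
func tenantPlugin(h *HostAPI, req string) (string, error) {
	tenant, err := h.GetConfig("tenant")
	if err != nil {
		return "", err
	}
	if req == tenant {
		return "allow", nil
	}
	return "deny", nil
}

func main() {
	host := &HostAPI{allowedKeys: map[string]string{"tenant": "acme"}}
	v, _ := tenantPlugin(host, "acme")
	fmt.Println(v) // allow
}
```

The design point is that the plugin receives capabilities as explicit arguments; it cannot reach for globals the host didn't pass in. Wasm makes that guarantee structural instead of a code-review convention.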

WASI: still early, still important

WASI preview1 is the standard everyone targets. It gives you filesystem access (capability-based, not ambient), clocks, random number generation, and environment variables. That’s enough for CLI tools, data transformation, and controlled plugin workloads.

What’s still missing: networking. No sockets, no HTTP client, no DNS. This is the single biggest limitation for server-side Wasm. If your workload needs outbound calls, you’re either working around it through host-provided functions or you’re waiting. The WASI networking proposal exists, but it’s still in early design.

For Go developers specifically, the story is mixed. TinyGo compiles to Wasm and targets WASI, but you lose chunks of the standard library. The main Go compiler has experimental Wasm support, but it’s browser-focused and the binaries are large. I’ve had reasonable results with TinyGo for small, focused modules. I wouldn’t build a full service with it.
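Here's the kind of "small, focused module" that has worked for me: a line-oriented transformer that stays entirely within WASI preview1's surface (stdin/stdout plus environment variables), so TinyGo can compile it without hitting the missing parts of the standard library. The transform itself is a toy; the build commands in the comment reflect the TinyGo `-target=wasi` flag as of mid-2021.

```go
// A line-oriented transformer that stays within WASI preview1's API
// surface: stdin/stdout and environment variables. No sockets, no
// outbound calls — those don't exist in preview1.
//
// Build for WASI with TinyGo:
//   tinygo build -o transform.wasm -target=wasi .
// Run under a WASI runtime:
//   wasmtime transform.wasm < input.txt
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// transform uppercases a line and applies an optional prefix. Behavior
// is tuned per deployment via an env var, since the module can't phone
// home for configuration.
func transform(line, prefix string) string {
	return prefix + strings.ToUpper(line)
}

func main() {
	prefix := os.Getenv("PREFIX") // plain env access works under WASI
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		fmt.Println(transform(sc.Text(), prefix))
	}
}
```

The same source compiles with the standard Go toolchain, which makes testing easy: develop and test natively, ship as Wasm.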

Runtimes are getting solid

Wasmtime (from the Bytecode Alliance) and Wasmer are both maturing. Wasmtime in particular has become the reference implementation that edge platforms build on. The embedding story is good – you can load a Wasm module into a Go, Rust, or Python host, expose a narrow API, and let the module run in a sandbox.

I’ve been embedding Wasmtime in a Go service for a side project where users can define custom validation rules. The module loads in milliseconds, executes deterministically, and can’t touch anything the host doesn’t explicitly allow. Compared to running user code in a Docker container or subprocess, the operational story is dramatically simpler.

The debugging experience, though, is still rough. Print debugging works (you can wire up stdout through WASI). Proper step-through debugging with source maps is technically possible but clunky. Profiling is even worse. This matters less for small, plugin-style modules, but it would be a problem for anything substantial.

Where it doesn’t make sense

I need to be clear about this because the hype cycle is doing its thing.

Wasm isn’t replacing containers. It’s not a general-purpose server runtime. If your workload needs full OS access, mature networking, established tooling, and rich library ecosystems, containers are still the answer. Docker is boring and it works.

Wasm also struggles with anything that needs high-performance I/O or tight integration with native libraries. The cross-boundary calling overhead is real. Data copying between the host and the Wasm module can become the bottleneck. For compute-heavy, data-light workloads this is fine. For data-heavy workloads, measure before committing.
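You can get a rough feel for the floor of that cost with a plain-Go stand-in: time a raw copy of payloads at the sizes you'd push across the boundary. This deliberately understates the real overhead — actual host-to-guest transfers add call dispatch and marshaling on top of the copy — but if the raw copy alone is already significant relative to your per-request budget, Wasm's boundary will hurt.

```go
package main

import (
	"fmt"
	"time"
)

// copyCost measures the average time to copy a payload of the given
// size — a rough lower bound for moving data into a module's linear
// memory. Real guest calls add dispatch and marshaling on top.
func copyCost(size, iters int) time.Duration {
	src := make([]byte, size)
	dst := make([]byte, size)
	start := time.Now()
	for i := 0; i < iters; i++ {
		copy(dst, src)
	}
	return time.Since(start) / time.Duration(iters)
}

func main() {
	for _, size := range []int{1 << 10, 1 << 20, 16 << 20} {
		fmt.Printf("%9d bytes: ~%v per copy\n", size, copyCost(size, 100))
	}
}
```

A 1 KB header-rewrite payload is nothing; a 16 MB batch per call is a different conversation. Measure with your sizes before committing.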

Where I’d use it today

Three scenarios where Wasm outside the browser makes sense right now:

  1. Edge compute where startup time and memory footprint matter. Cloudflare Workers and Fastly Compute@Edge are the obvious platforms.

  2. Plugin systems where you need safe, sandboxed execution of user-provided or tenant-specific logic. Envoy filters are the most mature example.

  3. CLI tools and data transformers where portability across platforms matters and the workload fits within WASI’s current API surface.

For everything else, I’d wait. Not because Wasm isn’t promising – it is. But the tooling, the system interface, and the component model all need another year or two of work before server-side Wasm becomes a general-purpose option.

The trajectory

The direction is right. The Bytecode Alliance is doing real work. WASI is evolving (slowly). Edge platforms are proving the model. The component model proposal could eventually solve the cross-language composition problem in a way that nothing else does.

But right now, in July 2021, Wasm beyond the browser is a sharp tool for specific problems. Not a platform. Use it where the constraints fit. Wait where they don’t.