Rust for Cloud Services: A Go Developer's Honest Take


I write Go for a living. Rust is not replacing it. But I have to be honest about where Rust wins.

I’m tired of the Rust discourse.

Every week there’s a new blog post about how Rust is going to replace everything: Go, Java, Python, C++, your kitchen appliance firmware. The Rust evangelism strike force is relentless. As someone who writes Go professionally, contributes to Go projects, and has built production systems in Go for years, my default reaction is to roll my eyes.

But I’ve been writing some Rust lately, and I have to be honest: for certain things, it’s genuinely better than what I use.

That sentence hurt to type.

Where Go wins and it’s not close

Go wins on velocity. I can write a production-ready HTTP service in Go in an afternoon. The standard library is excellent. The tooling is fast. go build gives me a static binary. go test just works. The language is intentionally simple, which means new team members are productive in days, not weeks.

At Decloud, at the fintech startup, in everything I’ve built, Go is the default. It’s boring in the best possible way. The compile times are fast. The deployment story is trivial. The hiring pool is large and growing.

For the vast majority of cloud services – API servers, background workers, CLI tools, infrastructure automation – Go is the right choice. I’ll die on this hill.

Where Rust wins and I have to admit it

Tail latency. That’s the killer argument.

Go’s garbage collector has gotten dramatically better over the years. Sub-millisecond pauses in most cases. But “most cases” isn’t good enough when you have a service with strict p99 latency targets and you’re processing thousands of requests per second. Those GC pauses show up in your tail latency, and no amount of tuning eliminates them completely.

Rust doesn’t have a garbage collector. Its ownership model resolves memory lifetimes at compile time, so allocations are freed at deterministic points in the code rather than whenever a background collector decides to run. The result is predictable latency with no runtime surprises. For services where p99 matters as much as p50 – high-frequency data processing, real-time bidding, network proxies – that’s a legitimate advantage.

Memory footprint is the other one. I’ve got a Go service that idles at 40MB of RSS. The equivalent Rust service? 8MB. For edge deployments or anything running thousands of instances, that difference translates directly to infrastructure cost.

And then there’s safety. Go has data race detection with -race, but only at runtime, and only for the interleavings your tests happen to exercise. Rust rejects data races at compile time. For security-sensitive code that processes untrusted input, having the compiler do that work for you is genuinely valuable.

The Rust ecosystem in 2021

Tokio 1.0 landed. That’s a big deal: it means the async runtime is stable and you aren’t going to have the rug pulled out from under you on a major API change. Hyper, Actix Web, and Warp are all viable for HTTP services. Serde is excellent for serialization. The tracing crate is the right approach to structured observability.

It’s usable. It’s not turnkey.

Want an ORM? Diesel exists, but async support is clunky. SQLx is better for async, but newer. Want something like Go’s net/http, where you import one package and have a production-ready server? You’re assembling it from five crates and hoping version compatibility holds.

My actual problem with Rust adoption

It’s not the language. The language is well designed. My problem is the adoption pattern I keep seeing.

A team has a Go service. The service works fine. Someone reads a blog post about Rust performance. The team rewrites the service in Rust. It takes three months instead of two weeks. The performance improvement is 15% on a service that wasn’t performance-constrained. The team now has one person who can maintain the Rust code and four who can’t.

That’s not a Rust problem. That’s a decision-making problem. But Rust’s community actively encourages this pattern by framing everything as “rewrite it in Rust” without asking whether the rewrite solves a real problem.

If you’re considering Rust for a cloud service, answer these questions first:

  • Is there a measured performance or safety problem that Go (or whatever you use) can’t solve?
  • Do you have at least two people who can write and review Rust code?
  • Have you accounted for compile times in your CI pipeline? A Rust build from scratch takes minutes, not seconds.
  • Can you hire for Rust in your market?

If any answer is no, profile your existing code first. You’ll probably find that the bottleneck is a bad algorithm or an unnecessary allocation, not the language runtime.

Where I would actually use Rust

I would use Rust for a network proxy that needs microsecond-level latency consistency. I would use it for a data processing pipeline that’s CPU-bound and memory-constrained. I would use it for anything running on embedded hardware or at the edge, where every megabyte counts. I would use it for security-critical parsers that handle untrusted input.

I wouldn’t use it for a CRUD API. I wouldn’t use it for a CLI tool. I wouldn’t use it for a service where time-to-market matters more than raw performance.

Go is my tool. Rust is a tool I respect. The trick is knowing which problem you actually have before you pick the tool.