Your API Is a Contract You Can't Take Back

4 min read · api design · rest · backend

Hard-won lessons on designing HTTP APIs that survive real integrations, drawn from building fintech and mobility platforms.

Quick take

Stop mirroring your database in your API. Design for the people calling it, version before you think you need to, and accept that every field you ship is a promise you’re stuck with.


I’ve been building APIs since before I knew enough to be scared of them. At the fintech startup I inherited an API surface that financial data consumers depended on daily. At Dropbyke we had mobile clients on two platforms hitting endpoints that had to be rock solid on sketchy Seoul cell connections. Now at EF, starting Decloud from scratch, I finally get to apply all the scar tissue from day one.

Here’s what I wish someone had told me earlier.

Your internals aren’t your API

This is the mistake I see most often. Someone maps their Postgres schema straight to a JSON response and calls it a day. Three months later, you rename a column and half your integrations break.

// This is your database leaking into the world
{
  "usr_id": 12345,
  "usr_nm": "Alice",
  "crt_ts": 1551052800
}

// This is an API response
{
  "id": "usr_12345",
  "name": "Alice",
  "createdAt": "2019-02-25T00:00:00Z"
}

Clients care about meaning. They don’t care about your storage layer. Prefixed IDs (usr_12345) are a small investment that pays off immediately in debugging and log searching.
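The translation layer is cheap to write. Here’s a minimal sketch in Python, assuming the hypothetical internal column names (usr_id, usr_nm, crt_ts) from the example above — the point is that the mapping lives in one function at the boundary, not scattered across handlers:

```python
from datetime import datetime, timezone

def to_api_user(row: dict) -> dict:
    """Map an internal storage row to the public API shape.

    The row keys here are the illustrative internal column names from
    the example above, not a real schema.
    """
    return {
        # Prefixed ID: greppable in logs, unambiguous across resources.
        "id": f"usr_{row['usr_id']}",
        "name": row["usr_nm"],
        # Epoch seconds become ISO 8601 UTC at the boundary.
        "createdAt": datetime.fromtimestamp(row["crt_ts"], tz=timezone.utc)
                             .strftime("%Y-%m-%dT%H:%M:%SZ"),
    }

user = to_api_user({"usr_id": 12345, "usr_nm": "Alice", "crt_ts": 1551052800})
```

Rename a column later and you change one function, not every consumer.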

Be boring and consistent

I know this sounds obvious. It isn’t, in practice. The number of APIs I’ve seen where GET /users returns a list but GET /user/{id} returns a single record, or where half the endpoints use camelCase and the other half use snake_case - it’s depressing.

Pick a pattern. Stick to it everywhere.

GET    /users          # List
POST   /users          # Create
GET    /users/{id}     # Read
PUT    /users/{id}     # Replace
PATCH  /users/{id}     # Partial update
DELETE /users/{id}     # Delete

Boring is good. Boring means a new developer can guess your endpoint structure before reading the docs.

Error responses are a feature

At the fintech startup, our error responses used to be a single string message with a 400 status code. That’s fine until you have a mobile client trying to highlight which form field failed validation, or a partner integration trying to programmatically retry on specific error types.

{
  "error": {
    "code": "validation_error",
    "message": "Request validation failed",
    "details": [
      {
        "field": "email",
        "code": "invalid_format",
        "message": "Must be a valid email address"
      }
    ]
  },
  "meta": {
    "requestId": "req_abc123"
  }
}

The requestId alone will save you hours of debugging. Stable error codes let clients react without parsing human-readable messages. Field-level details let UIs be smart about what they show.
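Building this shape by hand at every call site invites drift. A small helper keeps it consistent — a Python sketch with illustrative names, not any particular framework:

```python
def validation_error(field_errors, request_id):
    """Build the error envelope shown above.

    field_errors is a list of (field, code, message) tuples; the
    function and parameter names are illustrative, not a library API.
    """
    return {
        "error": {
            "code": "validation_error",
            "message": "Request validation failed",
            "details": [
                {"field": f, "code": c, "message": m}
                for f, c, m in field_errors
            ],
        },
        "meta": {"requestId": request_id},
    }

body = validation_error(
    [("email", "invalid_format", "Must be a valid email address")],
    "req_abc123",
)
```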

Wrap everything in an envelope

Use a consistent response shape. Always. The reason is simple: if you ever need to add pagination metadata, rate limit info, or deprecation warnings, you don’t have to restructure your entire response.

{
  "data": { "id": "usr_123", "name": "Alice" },
  "meta": { "requestId": "req_abc123" }
}

I’ve seen teams ship a bare object as their response, then bolt on metadata by adding top-level fields next to the resource fields. Messy. An envelope avoids that from the start.
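One way to enforce this is to make the envelope the only way out of your handlers. A sketch (the names are mine, not a library):

```python
def envelope(data, *, request_id, **meta):
    """Wrap a resource (or a list of them) in the standard response shape.

    Extra keyword arguments land in meta, so pagination info or
    deprecation warnings can be added later without ever touching the
    resource fields in data.
    """
    return {"data": data, "meta": {"requestId": request_id, **meta}}

user_resp = envelope({"id": "usr_123", "name": "Alice"}, request_id="req_abc123")
list_resp = envelope([{"id": "usr_123"}], request_id="req_def456",
                     nextCursor="cursor_abc123")
```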

Version before you need to

You will make breaking changes. Accepting this early is cheaper than pretending you won’t.

GET /v1/users
GET /v2/users

URL-based versioning is dead simple. It shows up in logs, caches, and monitoring dashboards. Header-based versioning is technically cleaner but practically harder to debug and easier to screw up. I’ll take debuggability over purity every time.

The rule I follow: additive changes (new optional fields, new endpoints) don’t need a version bump. Removing fields, renaming fields, or changing types - that’s a new version. No exceptions.
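That rule is mechanical enough to check in code. A toy sketch — real schema diffing is more involved, but the shape of the decision is this:

```python
def needs_version_bump(old_fields: dict, new_fields: dict) -> bool:
    """Apply the rule above to simple name -> type maps.

    Removals, renames, and type changes are breaking; pure additions
    are not. This is a sketch, not a full schema differ.
    """
    for name, typ in old_fields.items():
        if name not in new_fields:   # removed or renamed: breaking
            return True
        if new_fields[name] != typ:  # type changed: breaking
            return True
    return False                     # only additions: safe

v1 = {"id": "string", "name": "string"}
v1_plus_email = {"id": "string", "name": "string", "email": "string"}
v1_int_id = {"id": "int", "name": "string"}
```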

Pagination: choose and commit

For the Dropbyke API, we started with offset pagination (?page=2&limit=20) because it was simple. Worked great until our ride data grew and pages started shifting under users mid-scroll. Cursor pagination fixed that.

The heuristic is straightforward: offset for small, mostly-static datasets. Cursors for anything that’s growing or changing frequently.

# Offset - simple, good for admin dashboards
GET /orders?page=2&limit=20

# Cursor - stable, good for feeds and timelines
GET /orders?limit=20&after=cursor_abc123
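A cursor is just an opaque token encoding a stable position. One common implementation — a sketch, and the exact encoding is an implementation detail clients must never parse — is base64 over a small JSON blob:

```python
import base64
import json

def encode_cursor(last_id: int) -> str:
    """Encode 'the position after last_id' as an opaque cursor."""
    raw = json.dumps({"after_id": last_id}).encode()
    # URL-safe base64, padding stripped so it's clean in query strings.
    return base64.urlsafe_b64encode(raw).decode().rstrip("=")

def decode_cursor(cursor: str) -> int:
    """Recover the position; re-add the padding stripped at encode time."""
    padded = cursor + "=" * (-len(cursor) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))["after_id"]

cursor = encode_cursor(42)
```

Because the cursor pins a position by ID rather than by offset, new rows inserted ahead of it can’t shift the page under the user.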

Timestamps: ISO 8601 in UTC, end of discussion

2019-02-25T14:30:00Z

Not unix timestamps. Not localized formats. Not “seconds since epoch as a string.” One format, UTC, everywhere. This one decision prevents a class of bugs that are genuinely miserable to track down.
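In Python terms, that means normalizing to UTC at the serialization boundary — a sketch:

```python
from datetime import datetime, timezone

def api_timestamp(dt: datetime) -> str:
    """Serialize any datetime as ISO 8601 UTC with a trailing Z."""
    if dt.tzinfo is None:
        # Decide explicitly what naive datetimes mean; here we assume UTC.
        dt = dt.replace(tzinfo=timezone.utc)
    return dt.astimezone(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")

stamp = api_timestamp(datetime(2019, 2, 25, 14, 30, tzinfo=timezone.utc))
```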

Rate limiting is communication

Expose your limits in headers. When you reject a request, tell the client exactly when they can retry. This is basic courtesy and it prevents retry storms.

X-RateLimit-Limit: 1000
X-RateLimit-Remaining: 998
Retry-After: 60
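The client side of the contract is just as simple: on a 429, sleep for exactly what Retry-After says instead of guessing. A sketch, where `send` is a hypothetical callable that performs the request and returns a (status, headers) pair:

```python
import time

def send_with_backoff(send):
    """If the first attempt is rejected with 429, honor Retry-After
    before a single retry. `send` is a hypothetical request callable
    returning (status_code, headers)."""
    status, headers = send()
    if status == 429:
        # The server told us exactly when retrying is safe; believe it.
        time.sleep(int(headers.get("Retry-After", "1")))
        status, headers = send()
    return status
```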

After a few hundred endpoints

Security, stability, performance - in that order. That’s my priority stack for everything, and APIs are no different. Use TLS everywhere. Use OAuth 2.0 or API keys with proper scoping (read:orders, write:orders). Don’t invent your own auth scheme.

An API is a promise. Every field, every status code, every error shape becomes something another team depends on. The best time to think about that is before you ship. The second best time is right now.