Quick take
If an attacker gets your OAuth token, they are your user. Defense means making tokens short-lived, narrowly scoped, encrypted at rest, and monitored in real time. Everything else is wishful thinking.
The Heroku/Travis CI OAuth token leak earlier this year was a masterclass in how badly things go wrong when token hygiene is poor. GitHub OAuth tokens – some with broad repo scope – were exfiltrated and used to access private repositories of major organizations. The tokens had been sitting in databases for months, some for years. Long-lived, over-privileged, and unmonitored.
This wasn’t a novel attack. It was the predictable consequence of treating OAuth tokens as fire-and-forget credentials. I’ve seen the same pattern at multiple organizations: tokens issued during an integration setup, granted broad scopes “to make it work,” and then never reviewed again. At one enterprise, we found CI tokens with full write access to production repositories that had been active for over two years. Nobody knew they existed until we audited.
OAuth isn’t broken. The way most teams handle tokens is.
Bearer tokens are bearer weapons
An OAuth access token is a bearer token. Whoever holds it can use it. The server can’t distinguish between the legitimate client and an attacker who stole the token. This isn’t a bug – it’s the design. The implication is that every security control around tokens is about reducing the window and impact of theft, because theft will happen.
The attack surface:
- Stolen from storage: databases, environment variables, CI/CD secrets, log files.
- Intercepted in transit: unencrypted connections, misconfigured proxies, debug endpoints.
- Leaked accidentally: committed to version control, included in error messages, logged in request parameters.
Once stolen, the token works until it expires or is revoked. If the token has no expiration and no one is watching, it works forever.
Short lifetimes are your primary defense
Access tokens should expire in minutes. Not hours, not days. The shorter the lifetime, the smaller the window an attacker has to use a stolen token.
In Go, when issuing tokens from your own authorization server:
func issueAccessToken(claims Claims) (string, error) {
    now := time.Now()
    token := jwt.NewWithClaims(jwt.SigningMethodRS256, jwt.MapClaims{
        "sub":   claims.Subject,
        "scope": claims.Scope,
        "iat":   now.Unix(),
        "exp":   now.Add(15 * time.Minute).Unix(), // 15 minutes, not 15 days
        "jti":   uuid.NewString(),
    })
    return token.SignedString(privateKey)
}
Fifteen minutes is aggressive but reasonable for most web applications. The client refreshes when the token expires. The user doesn’t notice. An attacker who steals a 15-minute token has a much smaller window than one who steals a token that lives for 30 days.
Refresh tokens can have longer lifetimes, but they must rotate on use. When a client uses a refresh token to get a new access token, the old refresh token should be invalidated immediately:
func refreshAccessToken(refreshToken string) (*TokenPair, error) {
    stored, err := tokenStore.Get(refreshToken)
    if err != nil {
        return nil, fmt.Errorf("invalid refresh token: %w", err)
    }
    // Invalidate the old refresh token immediately
    if err := tokenStore.Revoke(refreshToken); err != nil {
        return nil, fmt.Errorf("revocation failed: %w", err)
    }
    // Issue new pair
    accessToken, err := issueAccessToken(stored.Claims)
    if err != nil {
        return nil, err
    }
    newRefresh, err := issueRefreshToken(stored.Claims)
    if err != nil {
        return nil, err
    }
    return &TokenPair{Access: accessToken, Refresh: newRefresh}, nil
}
If someone tries to use a refresh token that has already been rotated, that’s a signal of theft. Revoke the entire token family and force re-authentication.
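That reuse check can be sketched with a token-family model. This is a minimal illustration, not a production store: the `refreshRecord`, `tokenStore`, and field names are all hypothetical, assuming each refresh token carries a family ID tying it back to the original grant.

```go
package main

import "fmt"

// Hypothetical record kept per refresh token. FamilyID ties together
// every token descended from the same initial authorization grant.
type refreshRecord struct {
	FamilyID string
	Rotated  bool // set once this token has been exchanged
}

type tokenStore struct {
	records map[string]*refreshRecord
}

// checkRefresh marks a token as spent on first use, and treats a second
// use as the theft signal: revoke the whole family, force re-auth.
func (s *tokenStore) checkRefresh(token string) error {
	rec, ok := s.records[token]
	if !ok {
		return fmt.Errorf("unknown refresh token")
	}
	if rec.Rotated {
		// Reuse detected: either the client replayed an old token or an
		// attacker did. Either way, kill every token in the family.
		s.revokeFamily(rec.FamilyID)
		return fmt.Errorf("refresh token reuse detected; family %s revoked", rec.FamilyID)
	}
	rec.Rotated = true // normal path: mark spent before issuing a new pair
	return nil
}

func (s *tokenStore) revokeFamily(familyID string) {
	for token, rec := range s.records {
		if rec.FamilyID == familyID {
			delete(s.records, token)
		}
	}
}
```

The important design choice is revoking by family, not by individual token: once reuse is detected you can't tell which holder of the chain is legitimate, so the only safe move is to invalidate all of them.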
Scopes: the principle of least privilege
The Heroku incident was devastating partly because the stolen tokens had repo scope – full read and write access to all repositories. If those tokens had been scoped to repo:read or limited to specific repositories, the blast radius would have been dramatically smaller.
Scope discipline is straightforward but rarely practiced:
- Request the minimum scope the feature needs. If your integration reads commit statuses, it doesn’t need repo:write.
- Separate read and write scopes. A monitoring dashboard shouldn’t have the same permissions as a deployment tool.
- Review scopes when features change. An integration that started as read-only and grew to handle deployments might still be running on the original broad scope from the initial setup.
- Show users what access they’re granting. The consent screen should be explicit and understandable. “This application will access all your private repositories” should make someone pause.
When building your own OAuth server, enforce scope validation at every resource endpoint:
func requireScope(required string, handler http.HandlerFunc) http.HandlerFunc {
    return func(w http.ResponseWriter, r *http.Request) {
        claims, ok := claimsFromContext(r.Context())
        if !ok {
            http.Error(w, "unauthorized", http.StatusUnauthorized)
            return
        }
        if !claims.HasScope(required) {
            http.Error(w, "insufficient scope", http.StatusForbidden)
            return
        }
        handler(w, r)
    }
}

// Usage
mux.HandleFunc("/api/repos", requireScope("repo:read", listRepos))
mux.HandleFunc("/api/deploy", requireScope("deploy:write", triggerDeploy))
Don’t just validate scopes at token issuance. Validate them at every request. Scopes can be misassigned, tokens can be recycled across services, and bugs happen.
Token storage: assume compromise
If your tokens are stored in a database, assume that database will be compromised. The question isn’t “will it happen” but “when it happens, how bad is it?”
Encrypt at rest. Tokens stored in any persistent medium should be encrypted. Not hashed – encrypted, because you need to use them. Use envelope encryption with a KMS:
func encryptToken(plaintext string) (string, error) {
    key, err := kms.GenerateDataKey(context.Background(), masterKeyID)
    if err != nil {
        return "", err
    }
    block, err := aes.NewCipher(key.Plaintext)
    if err != nil {
        return "", err
    }
    gcm, err := cipher.NewGCM(block)
    if err != nil {
        return "", err
    }
    nonce := make([]byte, gcm.NonceSize())
    if _, err := io.ReadFull(rand.Reader, nonce); err != nil {
        return "", err
    }
    ciphertext := gcm.Seal(nonce, nonce, []byte(plaintext), nil)
    // Store encrypted data key alongside ciphertext
    return encode(key.EncryptedKey, ciphertext), nil
}
Never log tokens. This sounds obvious, but I’ve found tokens in application logs, access logs, error tracking systems, and even monitoring dashboards. Scrub tokens from any logging pipeline. If you use structured logging, redact fields that contain tokens.
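One way to enforce that redaction is a denylist pass over log fields before they are emitted. A minimal sketch; the field names are illustrative and should match your own logging schema:

```go
package main

import "strings"

// Illustrative denylist of field names that suggest credentials.
var sensitiveFields = []string{"token", "access_token", "refresh_token", "authorization", "secret"}

// redactSensitive replaces credential-like fields with a placeholder
// before the entry reaches any log sink.
func redactSensitive(fields map[string]any) map[string]any {
	out := make(map[string]any, len(fields))
	for k, v := range fields {
		redacted := false
		for _, s := range sensitiveFields {
			if strings.Contains(strings.ToLower(k), s) {
				out[k] = "[REDACTED]"
				redacted = true
				break
			}
		}
		if !redacted {
			out[k] = v
		}
	}
	return out
}
```

Wire this into the logging pipeline itself, not individual call sites: the whole point is that nobody has to remember to do it.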
Keep refresh tokens server-side. For web applications, the refresh token should never reach the browser. Store it in a server-side session. For mobile apps, use the platform’s secure storage (Keychain on iOS, Keystore on Android).
Never put tokens in URLs. Query parameters get logged by proxies, CDNs, browser history, and analytics tools. Use the Authorization header.
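In Go that looks like the following; the URL and token values are placeholders:

```go
package main

import "net/http"

// buildAPIRequest puts the token in the Authorization header, never in
// the URL, so it stays out of proxy logs, CDN logs, and browser history.
func buildAPIRequest(url, token string) (*http.Request, error) {
	req, err := http.NewRequest(http.MethodGet, url, nil)
	if err != nil {
		return nil, err
	}
	req.Header.Set("Authorization", "Bearer "+token)
	return req, nil
}
```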
Monitoring: the last line of defense
Tokens will be stolen despite your best efforts. Detection is what turns a breach into an incident instead of a catastrophe.
What to monitor:
- New IP addresses. If a token that has always been used from AWS us-east-1 suddenly appears from a residential IP in a different country, that deserves an alert.
- Usage spikes. A token that makes 10 API calls per day suddenly making 10,000 is suspicious.
- Scope usage patterns. If a token with repo:read and repo:write has only ever used read operations and suddenly starts writing, investigate.
- Failed requests. A burst of 403s from a single token might indicate an attacker testing what access they have.
Build an anomaly detection pipeline. It doesn’t need to be fancy. At one financial services company, we built a simple system that tracked per-token request rates and geographic locations, and alerted when either deviated significantly from the 30-day baseline. It caught two incidents in its first quarter.
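The rate-deviation half of such a system fits in a few lines. This is not the system we built, just the shape of it; the deviation factor is illustrative and needs tuning against real traffic:

```go
package main

// rateAnomaly flags a token whose request count today exceeds its
// trailing baseline mean by more than `factor`. Baseline would be the
// token's daily counts over the last 30 days.
func rateAnomaly(baseline []float64, today float64, factor float64) bool {
	if len(baseline) == 0 {
		return false // no history yet; nothing to compare against
	}
	var sum float64
	for _, v := range baseline {
		sum += v
	}
	mean := sum / float64(len(baseline))
	if mean == 0 {
		return today > 0 // any traffic on a previously silent token is notable
	}
	return today > mean*factor
}
```

The previously-silent-token case matters: a long-dormant token suddenly making calls is one of the strongest theft signals, and a naive ratio check would divide by zero and miss it.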
Provide users with visibility into their authorized applications. A “connected apps” page that shows which applications have tokens, what scopes they have, and when they were last used. Make it easy to revoke individual tokens.
Support mass revocation. When an integration provider is compromised (like Heroku was), you need the ability to revoke every token associated with that provider in one operation. This should be a well-tested capability, not something you build during the incident.
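The core of mass revocation is a single pass keyed on provider. A sketch over an in-memory slice; the `storedToken` type and field names are hypothetical, and a real system would do this as one transactional update in the token store:

```go
package main

// Token metadata kept server-side. ProviderID identifies the integration
// (e.g. a CI vendor) the token was issued to; names are illustrative.
type storedToken struct {
	ID         string
	ProviderID string
	Revoked    bool
}

// revokeByProvider kills every live token issued to one provider in a
// single operation — the capability you want rehearsed before an
// incident, not built during one. Returns the number revoked.
func revokeByProvider(tokens []*storedToken, providerID string) int {
	n := 0
	for _, t := range tokens {
		if t.ProviderID == providerID && !t.Revoked {
			t.Revoked = true
			n++
		}
	}
	return n
}
```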
Sender-constrained tokens: the future
Bearer tokens have an inherent weakness: possession equals authorization. Sender-constrained tokens fix this by binding the token to a specific client, usually through mutual TLS or proof-of-possession (DPoP).
With DPoP (draft-ietf-oauth-dpop at the time of the incident, since published as RFC 9449), the client generates a key pair and includes a signed proof in each request. If someone steals the token but not the private key, the token is useless.
Adoption is still early. Most OAuth providers don’t support DPoP yet. But for high-value systems – financial APIs, infrastructure management, healthcare data – it’s worth investigating. The standard is moving in the right direction.
What matters
Tokens are credentials. Treat them with the same discipline you would treat a database password or a production SSH key:
- Short lifetimes. 15 minutes for access tokens. Rotate refresh tokens on every use.
- Minimum scopes. Request only what you need. Validate scopes on every request.
- Encrypted storage. Envelope encryption. Never in logs. Never in URLs.
- Active monitoring. Anomaly detection on IP, geography, and usage patterns.
- Fast revocation. Per-token, per-user, and per-provider. Tested before you need it.
The teams that handled the Heroku incident with minimal damage were the ones that had short-lived tokens, narrow scopes, and the ability to revoke at scale. Everyone else spent a week doing forensics. Build the defenses now, while it isn’t an emergency.