Developer Portals: The Thing Nobody Wants to Build But Everyone Needs

5 min read
developer-portal platform-engineering backstage developer-experience

What I learned helping large telecoms build internal developer portals, and why the service catalog is the only part that actually matters on day one.

Quick take

An internal developer portal solves one problem: “who owns this thing and where do I find information about it?” If it does that well, everything else follows. If it doesn’t, you’ve built another abandoned wiki. Start with the service catalog. Automate it. Make it the default place people go during incidents. Everything else is a feature request.


I’ve been working with large telecoms this year. Verizon and AT&T scale. Thousands of engineers. Hundreds of services. Multiple deployment platforms.

You’d think organizations that size would have this figured out. They don’t.

The question that comes up in almost every engagement is some version of: “We have a problem. Nobody knows who owns what.” A service is on fire. The dashboard shows errors. Who do you page? The team that built it reorganized six months ago. The runbook is in a Confluence space that references a Slack channel that no longer exists. The person who actually understands the service is on a different team now.

This isn’t a technology problem. This is an information discovery problem. And the solution is embarrassingly simple: build a service catalog that people actually use.

The service catalog is the whole product

I know Backstage exists. I know it has a plugin system and documentation tooling and scaffolding templates and a growing ecosystem. That’s all great. But every time I see a team plan a developer portal, they start with the feature list instead of the problem.

The problem is: during an incident at 3am, can an on-call engineer find the owner, the repo, the dashboard, and the runbook for the service that’s failing? In under two minutes?

If the answer is no, build the catalog first. Everything else can wait.

A catalog entry needs exactly these fields:

  • Service name and a one-sentence description
  • Owner team and escalation contact
  • Source repository link
  • Links to monitoring dashboards and runbooks
  • Upstream and downstream dependencies

That’s it. Not a comprehensive documentation platform. Not a service mesh visualizer. A searchable list of services with accurate ownership and links.
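To make the shape concrete, here is the five-field entry as a Python dataclass. Field names are illustrative, not a standard schema:

```python
from dataclasses import dataclass, field

@dataclass
class CatalogEntry:
    """One service in the catalog -- the five fields and nothing more."""
    name: str                    # service name
    description: str             # one-sentence description
    owner_team: str              # owning team
    escalation_contact: str      # who to page at 3am
    repo_url: str                # source repository link
    dashboards: list[str] = field(default_factory=list)  # monitoring links
    runbooks: list[str] = field(default_factory=list)    # runbook links
    depends_on: list[str] = field(default_factory=list)  # upstream deps
    depended_on_by: list[str] = field(default_factory=list)  # downstream deps

# Hypothetical example entry
entry = CatalogEntry(
    name="billing-api",
    description="Customer billing HTTP API.",
    owner_team="payments",
    escalation_contact="#payments-oncall",
    repo_url="https://git.example.com/payments/billing-api",
)
```

Anything that doesn't fit these fields is a feature request, not a catalog requirement.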

At one telecom client, we built this as a static site generated from YAML files in each service’s repository. CI runs on every merge, pulls the catalog files, and rebuilds the portal. Total development time: about a week. Adoption was immediate because the alternative was asking in Slack and hoping someone knew.
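The aggregation step in that CI job is small. A minimal sketch, using JSON catalog files to keep it stdlib-only (the client setup used YAML, which just swaps the parser); file layout and names are assumptions:

```python
import json
from pathlib import Path

def build_index(checkout_root: str) -> list[dict]:
    """Collect every per-repo catalog.json under the checkout into one
    sorted index -- the input to the static-site build."""
    entries = []
    for path in sorted(Path(checkout_root).rglob("catalog.json")):
        entry = json.loads(path.read_text())
        entry["source_file"] = str(path)  # provenance, for debugging bad entries
        entries.append(entry)
    return sorted(entries, key=lambda e: e.get("name", ""))
```

CI runs this on every merge and feeds the result to the site generator, so the portal is never more than one merge behind reality.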

Stale data will kill it

The number one reason developer portals fail: the data goes stale and nobody trusts it anymore. Once engineers learn that the ownership information is wrong, they stop checking the portal entirely. You get one shot at credibility.

The fix is automation. Don’t ask humans to manually update a catalog. Pull the data from sources of truth.

  • Ownership comes from your team directory or CODEOWNERS files.
  • Repository links come from your Git hosting provider’s API.
  • Deployment status comes from your CD system.
  • Dashboard links get validated weekly – if a link 404s, the team gets pinged.

At another engagement, we set up a weekly job that checked every catalog entry against reality. Broken links, missing owners, archived repos – all flagged automatically. Within a month, catalog accuracy went from roughly 60% to over 95%. Not because anyone was doing extra work. Because the automation caught the drift before it accumulated.
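The core of such a drift check is a per-entry validator that returns problems instead of raising. A sketch, with the HTTP status lookup injected so the link check can be stubbed or rate-limited (field names follow the illustrative schema above, not a standard):

```python
from typing import Callable

def check_entry(entry: dict, fetch_status: Callable[[str], int]) -> list[str]:
    """Return the list of problems for one catalog entry (empty = healthy)."""
    problems = []
    if not entry.get("owner_team"):
        problems.append("missing owner")
    if not entry.get("escalation_contact"):
        problems.append("missing escalation contact")
    for url in entry.get("dashboards", []) + entry.get("runbooks", []):
        if fetch_status(url) == 404:
            problems.append(f"broken link: {url}")
    return problems

def drift_report(entries: list[dict], fetch_status) -> dict[str, list[str]]:
    """Map service name -> problems, only for entries that have any."""
    report = {}
    for entry in entries:
        problems = check_entry(entry, fetch_status)
        if problems:
            report[entry.get("name", "<unnamed>")] = problems
    return report
```

The weekly job runs `drift_report` over the full catalog and pings each owning team with its own slice of the report.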

Golden paths, but keep it simple

Templates for new services are the second most valuable feature. When a team needs a new service, the portal should give them a one-click (or one-command) scaffold that sets up the repo, the CI pipeline, the monitoring, and the catalog entry automatically.

At scale, this matters enormously. Without a golden path, every new service is a snowflake. Different CI configs. Different monitoring setups. Different directory structures. Multiplied across hundreds of services, that inconsistency makes cross-team debugging a nightmare.

But don’t over-engineer the templates. Start with one template for your most common service type. If 80% of your services are Go HTTP APIs on Kubernetes, build that template first. Add more later when someone asks.
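A scaffold for that one common service type can start as little more than a rendered skeleton that includes the catalog entry from day one. A minimal sketch; every file name and template string here is illustrative:

```python
import json
from pathlib import Path

# Skeleton for the one most common service type. catalog.json is rendered
# as structured data below rather than from a string template.
SKELETON = {
    "README.md": "# {name}\n\n{description}\n",
    ".ci/pipeline.yaml": "service: {name}\nsteps: [build, test, deploy]\n",
}

def scaffold(root: str, name: str, description: str, owner_team: str) -> Path:
    """Create a new service skeleton with its catalog entry included."""
    base = Path(root) / name
    for rel, template in SKELETON.items():
        path = base / rel
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_text(template.format(name=name, description=description))
    (base / "catalog.json").write_text(json.dumps(
        {"name": name, "description": description, "owner_team": owner_team},
        indent=2))
    return base
```

Because the scaffold writes the catalog entry itself, every service created through the golden path is discoverable the moment it exists.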

Adoption is earned, not mandated

I’ve seen organizations try to mandate portal usage through policy documents. It doesn’t work. Engineers use tools that save them time. They ignore tools that add friction.

The trick is to make the portal the path of least resistance. If finding an owner through the portal is faster than asking in Slack, people will use the portal. If creating a new service through the portal is faster than copying another team’s repo, people will use the portal.

At one client, the tipping point was incident response. We integrated the catalog with PagerDuty. When an alert fired, the incident channel automatically got populated with owner information, dashboard links, and runbook links pulled from the catalog. Suddenly the catalog wasn’t a “developer experience nice-to-have.” It was critical incident infrastructure. Adoption followed immediately.
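The enrichment logic behind that integration is just a lookup and a format. A sketch of the catalog side only; the PagerDuty and chat API calls are omitted, and the field names follow the illustrative schema above:

```python
def incident_summary(service: str, catalog: dict[str, dict]) -> str:
    """Build the message posted to an incident channel when `service` alerts."""
    entry = catalog.get(service)
    if entry is None:
        # A missing entry during an incident is itself worth surfacing.
        return f"{service}: not in catalog -- file a catalog gap."
    lines = [
        f"Service: {entry['name']} -- {entry.get('description', '')}",
        f"Owner: {entry.get('owner_team', '?')} "
        f"(escalate: {entry.get('escalation_contact', '?')})",
        f"Repo: {entry.get('repo_url', '?')}",
    ]
    for url in entry.get("dashboards", []):
        lines.append(f"Dashboard: {url}")
    for url in entry.get("runbooks", []):
        lines.append(f"Runbook: {url}")
    return "\n".join(lines)
```

Wired to the alerting webhook, this puts owner, dashboards, and runbooks in front of the on-call engineer before they have typed anything.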

Do you need Backstage?

Maybe. If you have a platform team that can maintain it, Backstage is a solid foundation. The plugin ecosystem is growing. The community is active.

But if you’re a smaller organization, or if you just need the catalog, you don’t need Backstage. A static site with good search, generated from repository metadata, is enough to start. You can always migrate to something more sophisticated later.

The portal is a product. Treat it like one. It needs an owner, a feedback loop, and a willingness to cut features that nobody uses. The teams I work with that succeed with portals are the ones that ship a minimal catalog fast and iterate based on what engineers actually need – not the ones that spend six months building a comprehensive platform nobody asked for.