OpenTelemetry in Late 2021: What's Ready and What's Not


Tracing is ready. Metrics are getting there. Logs are not. Here's a practical adoption path and the code to back it up.

Quick take

OpenTelemetry tracing hit 1.0 this year and it’s the real deal. Adopt it now for tracing. Be cautious with metrics – the API isn’t finalized. Ignore logs for now. Deploy the Collector as your telemetry gateway, standardize your resource attributes on day one, and configure sampling before you drown in data. The vendor lock-in argument alone makes this worth the migration effort.


I’m tired of observability vendor lock-in. Every organization I work with has a different combination of Datadog, New Relic, Jaeger, Prometheus, and three logging tools. Switching backends means rewriting instrumentation across dozens of services. Correlating traces with metrics requires duct tape and prayer.

OpenTelemetry fixes this. One instrumentation standard, any backend. It’s the most important infrastructure project nobody is talking about enough.

What Is Actually Stable Right Now

As of November 2021, here is the honest state:

Tracing: production-ready. The API and SDKs hit 1.0. Go, Java, Python, JavaScript, .NET all have stable implementations. Context propagation works across HTTP, gRPC, and most messaging systems. This is safe to adopt today.

Metrics: getting there. The metrics API isn’t finalized. SDKs are in various stages of beta. You can start experimenting but I wouldn’t bet a production monitoring pipeline on it yet. Give it six months.

Logs: early. On the roadmap. Not a reason to adopt OTel today. Keep your existing log pipeline.

This maturity gap matters for planning. Don’t try to adopt all three signals at once. Start with tracing, add metrics when the API stabilizes, leave logs alone.

Start With the Collector

The single best decision you can make is deploying the OpenTelemetry Collector before you instrument a single service. The Collector sits between your applications and your backends. Applications export to it via OTLP. It forwards to whatever backend you use.

Why this matters: when you inevitably switch observability vendors (and you will), you change the Collector config. Not your application code. Not a hundred services. One config file.

A basic Collector config for forwarding traces to Jaeger:

receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318

processors:
  batch:
    timeout: 5s
    send_batch_size: 1024

exporters:
  jaeger:
    endpoint: jaeger-collector:14250
    tls:
      insecure: true

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [jaeger]

Deploy this as a sidecar or a standalone service. I prefer standalone in most cases – easier to manage, easier to scale, and you avoid coupling the Collector lifecycle to your application pods.
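To try the config locally, something like the following works, assuming you save it as otel-collector.yaml. The image tag and in-container config path are my assumptions here; check the image's docs for the version you pull.

```shell
docker run --rm \
  -p 4317:4317 -p 4318:4318 \
  -v "$(pwd)/otel-collector.yaml:/etc/otelcol/config.yaml" \
  otel/opentelemetry-collector:0.38.0
```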

Instrumenting a Go Service

Here is what basic OTel tracing looks like in Go. I use this as my starting template for new projects.

Set up the trace provider at application startup:

func initTracer(ctx context.Context) (*sdktrace.TracerProvider, error) {
    exporter, err := otlptracegrpc.New(ctx,
        otlptracegrpc.WithEndpoint("otel-collector:4317"),
        otlptracegrpc.WithInsecure(),
    )
    if err != nil {
        return nil, fmt.Errorf("creating OTLP exporter: %w", err)
    }

    tp := sdktrace.NewTracerProvider(
        sdktrace.WithBatcher(exporter),
        sdktrace.WithResource(resource.NewWithAttributes(
            semconv.SchemaURL,
            semconv.ServiceNameKey.String("orders-api"),
            semconv.ServiceVersionKey.String("1.4.2"),
            attribute.String("environment", "production"),
        )),
        sdktrace.WithSampler(
            sdktrace.ParentBased(sdktrace.TraceIDRatioBased(0.1)),
        ),
    )
    otel.SetTracerProvider(tp)
    otel.SetTextMapPropagator(propagation.TraceContext{})
    return tp, nil
}
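Wiring this into main looks roughly like the sketch below. The deferred Shutdown matters: the batch processor buffers spans in memory, so exiting without flushing silently drops your last few seconds of traces. The timeout value is my choice, not a recommended default.

```go
func main() {
    ctx := context.Background()
    tp, err := initTracer(ctx)
    if err != nil {
        log.Fatalf("init tracer: %v", err)
    }
    // Flush buffered spans before the process exits; the batcher
    // holds spans in memory until the next export tick.
    defer func() {
        shutdownCtx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()
        if err := tp.Shutdown(shutdownCtx); err != nil {
            log.Printf("tracer shutdown: %v", err)
        }
    }()

    http.HandleFunc("/orders", handleOrder)
    log.Fatal(http.ListenAndServe(":8080", nil))
}
```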

Then instrument your HTTP handlers:

func handleOrder(w http.ResponseWriter, r *http.Request) {
    ctx, span := otel.Tracer("orders-api").Start(r.Context(), "handleOrder")
    defer span.End()

    span.SetAttributes(
        attribute.String("order.customer_id", r.Header.Get("X-Customer-ID")),
    )

    order, err := processOrder(ctx, r)
    if err != nil {
        span.RecordError(err)
        span.SetStatus(codes.Error, err.Error())
        http.Error(w, "order processing failed", http.StatusInternalServerError)
        return
    }

    span.SetAttributes(attribute.String("order.id", order.ID))
    json.NewEncoder(w).Encode(order)
}

The ctx parameter carries trace context through your call stack. Pass it everywhere. Every downstream call that receives this context becomes part of the same trace. This is how you get end-to-end visibility across services.
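As an illustration, a processOrder that opens its own child span might look like this. The function body and the saveOrder helper are hypothetical; the point is that starting a span from the incoming ctx ties it into the same trace.

```go
func processOrder(ctx context.Context, r *http.Request) (*Order, error) {
    // Starting a span from the incoming ctx makes this a child of
    // the handleOrder span, so both appear in the same trace.
    ctx, span := otel.Tracer("orders-api").Start(ctx, "processOrder")
    defer span.End()

    // Any call that takes ctx from here (database client, downstream
    // HTTP via otelhttp, queue publish) extends the same trace.
    return saveOrder(ctx, r)
}
```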

Sampling: Configure This on Day One

I can’t stress this enough. Default sampling is 100% – every request gets traced. That’s fine for development. In production with any real traffic, you’ll generate terabytes of trace data and your observability bill will make your CFO cry.

Set up parent-based sampling with a ratio. 10% is a good starting point for most services. Critical paths can be sampled at higher rates.

OTEL_TRACES_SAMPLER=parentbased_traceidratio
OTEL_TRACES_SAMPLER_ARG=0.1

Parent-based means if an upstream service already decided to sample this request, downstream services honor that decision. This keeps traces complete instead of fragmented.
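The reason ratio sampling composes cleanly is that the decision is derived from the trace ID itself, so every service computes the same answer for the same trace without coordination. A dependency-free sketch of that idea (this illustrates the determinism, not the SDK's exact algorithm):

```go
package main

import (
	"encoding/binary"
	"encoding/hex"
	"fmt"
	"math"
)

// shouldSample makes a deterministic sampling decision from the trace
// ID: interpret the leading bytes as a number and compare against the
// ratio. Same trace ID in, same decision out, on every service.
func shouldSample(traceIDHex string, ratio float64) bool {
	b, err := hex.DecodeString(traceIDHex)
	if err != nil || len(b) < 8 {
		return false
	}
	v := binary.BigEndian.Uint64(b[:8])
	return float64(v) < ratio*float64(math.MaxUint64)
}

func main() {
	id := "4bf92f3577b34da6a3ce929d0e0e4736"
	// ratio 1.0 keeps everything, ratio 0.0 keeps nothing
	fmt.Println(shouldSample(id, 1.0), shouldSample(id, 0.0))
}
```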

Resource Attributes: The Thing Everyone Gets Wrong

Resource attributes are metadata attached to every span your service produces. Service name, environment, version. They’re how you filter and correlate traces across your entire fleet.

Get these right from day one. I’ve seen migrations stall for months because teams used inconsistent service names and nobody could query across services.

Standardize at minimum:

  • service.name – unique, lowercase, hyphenated
  • service.version – semver, from your build
  • deployment.environment – production, staging, development
  • service.namespace – team or domain grouping

Enforce these through the Collector. The resource processor can inject defaults and overwrite noncompliant values, so every span arrives at your backend with a consistent baseline even when a team forgets to set something.
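A sketch of that enforcement in the Collector, using the resource processor. The attribute values here are examples; the actions are real: insert only fills in missing attributes, upsert always overwrites.

```yaml
processors:
  resource:
    attributes:
      - key: deployment.environment
        value: production
        action: insert   # only sets the value if the attribute is absent
      - key: service.namespace
        value: payments
        action: upsert   # always overwrites, enforcing the standard value

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [resource, batch]
      exporters: [jaeger]
```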

The Migration Path

Don’t try to migrate everything at once. This is the sequence I follow:

Week 1-2: Deploy the Collector. Configure it to export to your current backend. No application changes yet.

Week 3-4: Instrument one new service (or a non-critical existing one) with OTel. Verify traces show up in your backend. Fix any context propagation gaps at service boundaries.

Month 2: Migrate 2-3 critical services from vendor SDKs to OTel. Run both in parallel for a week to verify data parity.

Month 3+: Expand coverage. Set up dashboards and alerts on the new pipeline. Once confident, remove vendor SDK instrumentation.

This sequence keeps your existing observability intact while you build confidence in the new pipeline. Nobody loses visibility during the migration.

Pitfalls I’ve Hit

Inconsistent service naming. One team calls it orders-api, another calls it OrdersAPI, a third calls it orders. Now you can’t query across services. Solve this with a naming convention doc and Collector-level enforcement.

Missing context propagation at message queues. HTTP propagation works out of the box. Kafka, RabbitMQ, SQS – you need to manually inject and extract trace context from message headers. If you skip this, your traces end at the queue boundary and you lose visibility into async processing.
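What inject and extract mean concretely: the producer writes a W3C traceparent header into the message, and the consumer reads it back so its processing span joins the producer's trace. In real code you would use otel.GetTextMapPropagator() with a carrier over your queue's headers; here is a dependency-free sketch of the header round-trip itself:

```go
package main

import (
	"fmt"
	"strings"
)

// injectTraceparent writes a W3C traceparent header into message
// headers, as a propagator would on the producer side.
// Format: version-traceID-spanID-flags (00-<32 hex>-<16 hex>-<2 hex>).
func injectTraceparent(headers map[string]string, traceID, spanID string, sampled bool) {
	flags := "00"
	if sampled {
		flags = "01"
	}
	headers["traceparent"] = fmt.Sprintf("00-%s-%s-%s", traceID, spanID, flags)
}

// extractTraceparent parses the header on the consumer side so the
// processing span can continue the producer's trace.
func extractTraceparent(headers map[string]string) (traceID, spanID string, sampled bool, ok bool) {
	parts := strings.Split(headers["traceparent"], "-")
	if len(parts) != 4 || len(parts[1]) != 32 || len(parts[2]) != 16 {
		return "", "", false, false
	}
	return parts[1], parts[2], parts[3] == "01", true
}

func main() {
	headers := map[string]string{} // stands in for Kafka/RabbitMQ/SQS message headers
	injectTraceparent(headers, "4bf92f3577b34da6a3ce929d0e0e4736", "00f067aa0ba902b7", true)
	traceID, spanID, sampled, ok := extractTraceparent(headers)
	fmt.Println(traceID, spanID, sampled, ok)
}
```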

High-cardinality attributes. Putting user IDs, request IDs, or full URLs as span attributes sounds useful until your trace backend is indexing millions of unique values and your storage costs explode. Use low-cardinality attributes for filtering. Put high-cardinality data in span events or logs.

OpenTelemetry is the right bet for 2021 and beyond. The tracing story is solid. The Collector architecture is sound. Adopt it incrementally, get your conventions right early, and you’ll never have to rewrite instrumentation for a vendor switch again.