I have a client and a server, both instrumented with `tracing`, and as far as I can tell the `opentelemetry_otlp` exporter also works correctly. To test this, I have set up the docker-compose example of Grafana Tempo as a receiver, and the traces show up correctly. However, a call from the client to the server results in two distinct traces, one for the client and one for the server, and I'd like to associate them into a single trace by propagating the trace context from the client and extracting it on the server.
My problem, however, is that there appear to be hundreds of ways to get this done, and they all contradict one another; my understanding of the interaction between `tracing` and `opentelemetry` is also a bit nebulous. I would expect that registering the `OpenTelemetryLayer` with the `tracing` subscriber would result in the OpenTelemetry span being managed alongside the `tracing` span, but whenever I try to extract the `SpanContext` in order to serialize it and attach it to the `tonic` request as an HTTP header, the `SpanContext` is uninitialized (i.e., it has the invalid `trace_id=00000000000000000000000000000000` and `span_id=0000000000000000`, which isn't allowed).
The following minimal example shows what will eventually become the client, attempting to extract the `SpanContext`; until I can get that working, there is no point in continuing.
```rust
use opentelemetry::{
    runtime,
    sdk::{
        trace::{BatchConfig, RandomIdGenerator, Tracer},
        Resource,
    },
    trace::TraceContextExt,
    KeyValue,
};
use opentelemetry_otlp::WithExportConfig;
use opentelemetry_semantic_conventions::{resource::SERVICE_NAME, SCHEMA_URL};
use tracing_opentelemetry::OpenTelemetryLayer;
use tracing_subscriber::{layer::SubscriberExt, util::SubscriberInitExt};

fn init_tracer(resource: Resource, endpoint: &str) -> Tracer {
    opentelemetry_otlp::new_pipeline()
        .tracing()
        .with_trace_config(
            opentelemetry::sdk::trace::Config::default()
                .with_id_generator(RandomIdGenerator::default())
                .with_resource(resource),
        )
        .with_exporter(
            opentelemetry_otlp::new_exporter()
                .http()
                .with_timeout(std::time::Duration::from_secs(1))
                .with_endpoint(endpoint),
        )
        .with_batch_config(BatchConfig::default())
        .install_batch(runtime::Tokio)
        .unwrap()
}

fn init_tracing_subscriber(service_name: &str, endpoint: &str) -> OtelGuard {
    eprintln!("Initializing tracing with endpoint {}", endpoint);
    let resource = Resource::from_schema_url(
        [KeyValue::new(SERVICE_NAME, service_name.to_owned())],
        SCHEMA_URL,
    );
    tracing_subscriber::registry()
        .with(tracing_subscriber::filter::EnvFilter::from_default_env())
        .with(tracing_subscriber::fmt::layer().compact())
        .with(OpenTelemetryLayer::new(init_tracer(resource, endpoint)))
        .try_init()
        .unwrap();
    OtelGuard {}
}

fn init() -> OtelGuard {
    let service_name = dbg!(env!("CARGO_PKG_NAME"));
    let endpoint = "http://172.18.0.2:4318/v1/traces";
    init_tracing_subscriber(service_name, endpoint)
}

pub struct OtelGuard {}

impl Drop for OtelGuard {
    fn drop(&mut self) {
        opentelemetry::global::shutdown_tracer_provider();
    }
}

#[tokio::main]
async fn main() {
    let _guard = init();
    a().await;
}

#[tracing::instrument]
async fn a() {
    aa();
    bb()
}

#[tracing::instrument]
fn aa() {}

#[tracing::instrument]
fn bb() {
    dbg!(opentelemetry::Context::current().span());
}
```
What am I missing? How can I get the current OpenTelemetry `SpanContext` so that I can propagate it to the server?
You're almost there! Check the `OpenTelemetrySpanExt` trait from the `tracing_opentelemetry` crate. It provides `context()` on the sending side:
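Something along these lines should work, reusing your `bb` (a sketch, not tested against your exact crate versions). The key difference from your code is that the OpenTelemetry context is read off the current `tracing` span rather than from `opentelemetry::Context::current()`:

```rust
use opentelemetry::trace::TraceContextExt;
use tracing_opentelemetry::OpenTelemetrySpanExt;

#[tracing::instrument]
fn bb() {
    // Ask the *tracing* span for its OpenTelemetry context. The layer keeps
    // this in sync, whereas opentelemetry::Context::current() stays empty.
    let cx = tracing::Span::current().context();
    // This should now print a valid (non-zero) trace_id and span_id.
    dbg!(cx.span().span_context());
}
```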
The receiving side uses `set_parent` from the same trait:
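For example, in the server's request handler (again a sketch; `handle_request` and the `HashMap` carrier are placeholders for however the headers actually reach you from `tonic`):

```rust
use std::collections::HashMap;

use opentelemetry::global;
use tracing_opentelemetry::OpenTelemetrySpanExt;

#[tracing::instrument(skip(headers))]
fn handle_request(headers: HashMap<String, String>) {
    // Rebuild the remote context from the incoming `traceparent`/`tracestate`
    // headers via the globally registered propagator.
    let parent_cx = global::get_text_map_propagator(|prop| prop.extract(&headers));
    // Attach the server-side tracing span to the client's trace.
    tracing::Span::current().set_parent(parent_cx);
}
```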
See the opentelemetry-remote-context example for a complete demonstration.
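One thing that trips people up: both the `extract` above and the corresponding `inject` on the client go through the globally registered propagator, which is a no-op unless you register one yourself (e.g. in your `init`). With the W3C `TraceContextPropagator` registered, the client can serialize the context into a header map; a sketch using a plain `HashMap` as the carrier (copying those entries into the outgoing `tonic` request's metadata is a separate step):

```rust
use std::collections::HashMap;

use opentelemetry::{global, sdk::propagation::TraceContextPropagator};
use tracing_opentelemetry::OpenTelemetrySpanExt;

fn init_propagator() {
    // Without this, the global propagator is a no-op and nothing is injected
    // or extracted.
    global::set_text_map_propagator(TraceContextPropagator::new());
}

#[tracing::instrument]
fn send_request() {
    let cx = tracing::Span::current().context();
    let mut headers = HashMap::new();
    // Writes the W3C `traceparent` (and `tracestate`) headers into the map.
    global::get_text_map_propagator(|prop| prop.inject_context(&cx, &mut headers));
    // `headers` can now be attached to the outgoing request.
    dbg!(headers);
}
```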