Operating a Prose Pod

Telemetry

Updated on May 31, 2025

If you would like to integrate Prose into your OpenTelemetry pipeline, the Prose Pod API supports OpenTelemetry out of the box. To enable it, you just need to set three environment variables:

# For gRPC:
OTEL_EXPORTER_OTLP_TRACES_ENDPOINT='http://otel-collector:4317'
OTEL_EXPORTER_OTLP_TRACES_PROTOCOL='grpc'
OTEL_TRACES_SAMPLER='always_on'

# For HTTP:
OTEL_EXPORTER_OTLP_TRACES_ENDPOINT='http://otel-collector:4318/v1/traces'
OTEL_EXPORTER_OTLP_TRACES_PROTOCOL='http/protobuf'
OTEL_TRACES_SAMPLER='always_on'
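If you run your Prose Pod with Docker Compose, one way to pass these variables is directly in the Compose file. The sketch below is illustrative only: the service name, image name, and collector hostname are assumptions, not part of an official configuration.

```yaml
# Hypothetical Compose snippet — service and image names are assumptions.
services:
  prose-pod-api:
    image: proseim/prose-pod-api # adjust to the image you actually deploy
    environment:
      # Point traces at your OpenTelemetry collector (gRPC variant shown).
      OTEL_EXPORTER_OTLP_TRACES_ENDPOINT: 'http://otel-collector:4317'
      OTEL_EXPORTER_OTLP_TRACES_PROTOCOL: 'grpc'
      OTEL_TRACES_SAMPLER: 'always_on'
```

The `otel-collector` hostname assumes the collector runs as another service on the same Compose network; replace it with whatever resolves to your collector.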

For more information, see “Configuration based on the environment variables” in davidB/tracing-opentelemetry-instrumentation-sdk/init-tracing-opentelemetry/README.md.

You can also have a look at prose-pod-api/local-run/otel-collector-config.yaml and prose-pod-api/local-run/scenarios/default/local-run.env for an example of the configuration we use when developing the Prose Pod API. The OpenTelemetry collector is defined in prose-pod-api/local-run/compose.yaml, and receivers are started using prose-pod-api/local-run/scripts/otlp. It isn’t production-ready, but it’s a good starting point for understanding how the pieces fit together.
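For orientation, here is a minimal sketch of what an OpenTelemetry Collector configuration accepting both of the endpoints above typically looks like. This is not the contents of the repository’s otel-collector-config.yaml, just a generic example; the `debug` exporter is a stand-in for wherever you actually ship traces.

```yaml
# Minimal illustrative collector config — not the one shipped in prose-pod-api.
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317 # matches the gRPC endpoint above
      http:
        endpoint: 0.0.0.0:4318 # matches the HTTP endpoint above

exporters:
  # Prints received traces to the collector's logs; swap for your real backend.
  debug:
    verbosity: detailed

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [debug]
```

In a real pipeline you would replace the `debug` exporter with an exporter for your tracing backend and add processors (batching, filtering) as needed.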

If you need help setting up telemetry for your self-hosted Prose Pod, feel free to contact our technical support team, who will gladly help you set it up.