Logging
The use of log/slog gives us easy access to structured logs with no additional dependencies. It allows us to pass around a *slog.Logger that’s pre-configured with various log attributes, and a small helper function gets all the details of log/trace/metric linking out of the way so that application code can focus on logging useful messages.
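As a bare-bones illustration of that pattern (the handler choice and attribute values here are placeholders rather than what the example project necessarily uses):

package main

import (
    "log/slog"
    "os"
)

func main() {
    // Build one *slog.Logger up front, pre-configured with shared attributes...
    logger := slog.New(slog.NewTextHandler(os.Stdout, nil)).
        With("service_name", "MyBoomerServer")

    // ...then hand it to request handlers (directly or via the context) so that
    // application code only has to log useful messages.
    logger.Info("listening", "addr", ":50051")
}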
Capturing logs
Promtail is probably the most commonly-used container log shipper, and that’s what we use here. However, there are (of course) multiple other ways to do this. Here are a few of them:
- You could use a Loki slog handler that sends logs directly from the application
- If you’re using Docker, the Loki Docker driver plugin can be configured either as the default log driver on your host, or as part of the container instantiation.
- Vector is also a very capable component, which can pull logs from Docker, Kubernetes or various other sources.
However, Promtail works very well, and it means you can easily route all logs from third-party services via the same mechanism used for the application. In this example, we use docker_sd_configs to discover containers. There’s nothing too involved in the setup; probably the main thing to highlight is the addition of some labels via relabel_configs. This means we can search for logs based on these labels, without adding any other clutter to the log line itself:
relabel_configs:
  # The docker container name comes through with a leading '/', so capture
  # just the name itself
  - source_labels: ['__meta_docker_container_name']
    regex: '/(.*)'
    target_label: 'container'
  # The compose service label has no leading '/', so the default regex
  # (matching the whole value) is what we want
  - source_labels: ['__meta_docker_container_label_com_docker_compose_service']
    target_label: 'service'
  - source_labels: ['__meta_docker_container_label_app']
    target_label: 'app'
You can see all the source labels available for this kind of relabelling on the Promtail web interface at http://localhost:9080/targets. Check the full config here.
Application code
We want to make it easy for application developers to get all the signal correlation handled for them. In our gRPC method, we simply call:
func (s *server) Boom(ctx context.Context, req *pb.BoomRequest) (*pb.BoomResponse, error) {
    logger := util.LoggerFromContext(ctx)
    logger.Info("boom", "boomer_name", req.GetName())
This actually logs something like:
time=2023-12-30T10:51:41.680Z level=INFO msg=boom hostname=1984ea724676 service_name=MyBoomerServer trace_id=25bb0819a73da590ee2c533162b4fcfa span_id=3c0b58ad9671b7b3 boomer_name="old dude"
Attributes
Notice above that some of the log attributes got added automatically. Of particular importance here are the trace_id and span_id attributes, which are pulled from the request context by the util.LoggerFromContext helper function:
spanContext := trace.SpanContextFromContext(ctx)
if spanContext.HasTraceID() {
    logger = logger.With("trace_id", spanContext.TraceID().String())
}
if spanContext.HasSpanID() {
    logger = logger.With("span_id", spanContext.SpanID().String())
}
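For context, a fuller sketch of what such a helper might look like is below; the context key and the fallback to slog.Default() are assumptions for illustration rather than the example project’s exact code:

package util

import (
    "context"
    "log/slog"

    "go.opentelemetry.io/otel/trace"
)

type loggerKey struct{} // assumed context key, for illustration only

// LoggerFromContext returns the request-scoped logger (falling back to the
// process-wide default) and decorates it with the current trace/span IDs so
// that every log line can be correlated with its trace.
func LoggerFromContext(ctx context.Context) *slog.Logger {
    logger, ok := ctx.Value(loggerKey{}).(*slog.Logger)
    if !ok {
        logger = slog.Default()
    }
    spanContext := trace.SpanContextFromContext(ctx)
    if spanContext.HasTraceID() {
        logger = logger.With("trace_id", spanContext.TraceID().String())
    }
    if spanContext.HasSpanID() {
        logger = logger.With("span_id", spanContext.SpanID().String())
    }
    return logger
}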
For gRPC, you set up the necessary context when you instantiate the grpc.Server; see grpc.NewServer, grpc.StatsHandler and otelgrpc.NewServerHandler for reference.
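A minimal sketch of that wiring, assuming the otelgrpc stats handler is the only server option needed here:

package main

import (
    "google.golang.org/grpc"

    "go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc"
)

func newGRPCServer() *grpc.Server {
    // The otelgrpc stats handler starts a span for each RPC and puts the span
    // context on the request context, which is where LoggerFromContext later
    // picks up the trace_id/span_id from.
    return grpc.NewServer(
        grpc.StatsHandler(otelgrpc.NewServerHandler()),
    )
}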
Note that otelgrpc.UnaryServerInterceptor and the corresponding stream server interceptor are now deprecated; more generally though, gRPC unary/stream interceptors are still supported and can be useful for passing additional context values (more on that later).
The hostname and service_name attributes in the log line got added as part of the logger instantiation:
if hostName := GetResourceHostName(r); hostName != "" {
    logger = logger.With("hostname", hostName)
}
if serviceName := GetResourceServiceName(r); serviceName != "" {
    logger = logger.With("service_name", serviceName)
}
In practice, you might prefer the hostname/servicename there to be added by the Promtail relabel_configs, but sometimes it can be useful to set up some global attributes like that.
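As an aside, here’s roughly how a helper like GetResourceHostName could be implemented, assuming r is the OpenTelemetry *resource.Resource that’s also used when setting up traces and metrics (an assumption for illustration, not necessarily the project’s actual code):

package util

import (
    "go.opentelemetry.io/otel/attribute"
    "go.opentelemetry.io/otel/sdk/resource"
)

// GetResourceHostName pulls the host.name attribute out of the OTel resource,
// returning "" if it isn't set (hypothetical implementation).
func GetResourceHostName(r *resource.Resource) string {
    if v, ok := r.Set().Value(attribute.Key("host.name")); ok {
        return v.AsString()
    }
    return ""
}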