Observability - Go SDK
This page covers the many ways to view the current state of your Temporal Application—that is, ways to view which Workflow Executions are tracked by the Temporal Platform and the state of any specified Workflow Execution, either currently or at points of an execution.
This section covers features related to viewing the state of the application, including:

- Emitting metrics
- Tracing
- Logging
- Visibility APIs and Search Attributes
How to emit metrics
How to emit application metrics using the Temporal Go SDK.
Each Temporal SDK is capable of emitting an optional set of metrics from either the Client or the Worker process. For a complete list, see the SDK metrics reference.
- For an overview of Prometheus and Grafana integration, refer to the Monitoring guide.
- For a list of metrics, see the SDK metrics reference.
- For an end-to-end example that exposes metrics with the Go SDK, refer to the samples-go repo.
To emit metrics from the Temporal Client in Go, create a metrics handler from the Client Options and specify a listener address to be used by Prometheus.
clientOptions := client.Options{
	MetricsHandler: sdktally.NewMetricsHandler(newPrometheusScope(prometheus.Configuration{
		ListenAddress: "0.0.0.0:9090",
		TimerType:     "histogram",
	})),
}
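The snippet above references a newPrometheusScope helper. Here is a sketch of that helper, adapted from the metrics sample in the samples-go repo; error handling is simplified for brevity:

```go
import (
	"log"
	"time"

	prom "github.com/prometheus/client_golang/prometheus"
	"github.com/uber-go/tally/v4"
	"github.com/uber-go/tally/v4/prometheus"
	sdktally "go.temporal.io/sdk/contrib/tally"
)

// newPrometheusScope builds a Tally scope backed by a Prometheus reporter
// that serves metrics on the configured listen address.
func newPrometheusScope(c prometheus.Configuration) tally.Scope {
	reporter, err := c.NewReporter(
		prometheus.ConfigurationOptions{
			Registry: prom.NewRegistry(),
			OnError: func(err error) {
				log.Println("error in prometheus reporter", err)
			},
		},
	)
	if err != nil {
		log.Fatalln("error creating prometheus reporter", err)
	}
	scopeOpts := tally.ScopeOptions{
		CachedReporter:  reporter,
		Separator:       prometheus.DefaultSeparator,
		SanitizeOptions: &sdktally.PrometheusSanitizeOptions,
	}
	scope, _ := tally.NewRootScope(scopeOpts, time.Second)
	// Rename some metrics to match Prometheus naming conventions.
	return sdktally.NewPrometheusNamingScope(scope)
}
```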
The Go SDK currently supports the Tally metrics library. Tally offers extensible custom metrics reporting, which is exposed through the WithCustomMetricsReporter API.
For more information, see the Go sample for metrics.
Tracing
Tracing allows you to view the call graph of a Workflow along with its Activities, Nexus Operations, and Child Workflows.
The Go SDK provides tracing interceptors for OpenTelemetry, OpenTracing, and Datadog. Create a tracing interceptor and pass it to ClientOptions:
// OpenTelemetry
tracingInterceptor, err := opentelemetry.NewTracingInterceptor(opentelemetry.TracerOptions{})
// OpenTracing
tracingInterceptor, err := opentracing.NewInterceptor(opentracing.TracerOptions{})
// Datadog
tracingInterceptor, err := tracing.NewTracingInterceptor(tracing.TracerOptions{})
c, err := client.Dial(client.Options{
Interceptors: []interceptor.ClientInterceptor{tracingInterceptor},
})
The interceptor automatically propagates trace context across Workflow, Activity, and Child Workflow boundaries using Temporal headers. You can also register interceptors through a Plugin if you’re building a reusable library.
For more information, see the documentation for OpenTelemetry, OpenTracing, and Datadog.
Context Propagation
Context propagation lets you pass custom key-value data from a Client to Workflows, and from Workflows to Activities and Child Workflows, without threading it through every function signature. Common use cases include propagating tracing IDs, tenant IDs, auth tokens, or other request-scoped metadata.
The mechanism works through Temporal headers: when a call crosses a boundary (Client to Workflow, Workflow to Activity, etc.), the SDK serializes values from the caller’s context into headers, carries them through the Temporal Server, and deserializes them into the callee’s context.
How it works
- Register - A context propagator is registered on the Client via ContextPropagators in ClientOptions
- Inject - On outbound calls, the SDK calls Inject (from context.Context) or InjectFromWorkflow (from workflow.Context) to serialize values into Temporal headers
- Extract - On inbound calls, the SDK calls Extract (into context.Context) or ExtractToWorkflow (into workflow.Context) to deserialize headers back into the context
- Access - Your Workflow and Activity code reads values from the context as usual
Implement a context propagator
A context propagator implements the ContextPropagator interface:
type ContextPropagator interface {
// Inject writes values from a Go context.Context into headers (Client/Activity side)
Inject(context.Context, HeaderWriter) error
// Extract reads headers into a Go context.Context (Client/Activity side)
Extract(context.Context, HeaderReader) (context.Context, error)
// InjectFromWorkflow writes values from a workflow.Context into headers
InjectFromWorkflow(Context, HeaderWriter) error
// ExtractToWorkflow reads headers into a workflow.Context
ExtractToWorkflow(Context, HeaderReader) (Context, error)
}
There are two pairs of methods because Go uses context.Context in non-Workflow code (Client, Activities) and workflow.Context inside Workflows. You must implement all four methods for values to propagate across every boundary (Client → Workflow → Activity/Child Workflow).
Here is a propagator that carries a custom key-value pair from the Client to Workflows and Activities (from the context propagation sample):
const HeaderKey = "custom-header"

// Values and PropagateKey, as defined in the sample:
type (
	// Values is the struct carried through the headers.
	Values struct {
		Key   string
		Value string
	}
	contextKey struct{}
)

// PropagateKey is the context key under which Values are stored.
var PropagateKey = contextKey{}

type propagator struct{}

func (s *propagator) Inject(ctx context.Context, writer workflow.HeaderWriter) error {
	value := ctx.Value(PropagateKey)
	payload, err := converter.GetDefaultDataConverter().ToPayload(value)
	if err != nil {
		return err
	}
	writer.Set(HeaderKey, payload)
	return nil
}

func (s *propagator) Extract(ctx context.Context, reader workflow.HeaderReader) (context.Context, error) {
	if value, ok := reader.Get(HeaderKey); ok {
		var values Values
		if err := converter.GetDefaultDataConverter().FromPayload(value, &values); err != nil {
			return ctx, nil
		}
		ctx = context.WithValue(ctx, PropagateKey, values)
	}
	return ctx, nil
}
// InjectFromWorkflow and ExtractToWorkflow are similar but operate on workflow.Context.
// See the full sample for details.
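That Workflow-side pair can be sketched as follows, mirroring the Client/Activity-side methods and adapted from the context propagation sample:

```go
// InjectFromWorkflow serializes the value from the workflow.Context into headers.
func (s *propagator) InjectFromWorkflow(ctx workflow.Context, writer workflow.HeaderWriter) error {
	value := ctx.Value(PropagateKey)
	payload, err := converter.GetDefaultDataConverter().ToPayload(value)
	if err != nil {
		return err
	}
	writer.Set(HeaderKey, payload)
	return nil
}

// ExtractToWorkflow deserializes headers back into the workflow.Context.
func (s *propagator) ExtractToWorkflow(ctx workflow.Context, reader workflow.HeaderReader) (workflow.Context, error) {
	if value, ok := reader.Get(HeaderKey); ok {
		var values Values
		if err := converter.GetDefaultDataConverter().FromPayload(value, &values); err != nil {
			return ctx, nil
		}
		ctx = workflow.WithValue(ctx, PropagateKey, values)
	}
	return ctx, nil
}
```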
Register the propagator and set context values
Register the propagator on the Client. Then set context values before starting a Workflow:
c, err := client.Dial(client.Options{
ContextPropagators: []workflow.ContextPropagator{NewContextPropagator()},
})
// Set a value in context before starting the Workflow
ctx := context.Background()
ctx = context.WithValue(ctx, PropagateKey, &Values{Key: "test", Value: "tested"})
we, err := c.ExecuteWorkflow(ctx, workflowOptions, MyWorkflow)
You can also register context propagators through a Plugin if you are building a reusable library.
Access propagated values
In your Workflow, the propagated values are available on the workflow.Context. When the Workflow starts an Activity, the SDK automatically propagates the same values:
func MyWorkflow(ctx workflow.Context) error {
// Read propagated value in the Workflow
if val := ctx.Value(PropagateKey); val != nil {
vals := val.(Values)
workflow.GetLogger(ctx).Info("propagated to workflow", vals.Key, vals.Value)
}
// The value is automatically propagated to Activities
var result Values
err := workflow.ExecuteActivity(ctx, SampleActivity).Get(ctx, &result)
return err
}
func SampleActivity(ctx context.Context) (*Values, error) {
	// Read the propagated value in the Activity.
	// Extract stores a Values struct (not a pointer), so assert accordingly.
	if val := ctx.Value(PropagateKey); val != nil {
		vals := val.(Values)
		return &vals, nil
	}
	return nil, nil
}
You can configure multiple context propagators on a single Client, each responsible for its own set of keys.
Context propagation over Nexus
Nexus does not use the ContextPropagator interface. It relies on a Temporal-agnostic protocol with its own header format (nexus.Header, a wrapper around map[string]string).
To propagate context over Nexus Operation calls, use interceptors to explicitly serialize and deserialize context into the Nexus header. See the Nexus Context Propagation sample.
Log from a Workflow
How to log from a Workflow using the Go SDK.
Send logs and errors to a logging service, so that when things go wrong, you can see what happened.
Loggers create an audit trail and capture information about your Workflow's operation. An appropriate logging level depends on your specific needs. During development or troubleshooting, you might use debug or even trace. In production, you might use info or warn to avoid excessive log volume.
The logger supports the following logging levels:
| Level | Use |
|---|---|
| TRACE | The most detailed level of logging, used for very fine-grained information. |
| DEBUG | Detailed information, typically useful for debugging purposes. |
| INFO | General information about the application's operation. |
| WARN | Indicates potentially harmful situations or minor issues that don't prevent the application from working. |
| ERROR | Indicates error conditions that might still allow the application to continue running. |
The Temporal SDK core normally uses WARN as its default logging level.
In Workflow Definitions you can use workflow.GetLogger(ctx) to write logs.
import (
	"time"

	"go.temporal.io/sdk/workflow"
)

// Workflow is a standard Workflow Definition that writes logs
// through the logger provided by the Workflow context.
func Workflow(ctx workflow.Context, name string) (string, error) {
	ao := workflow.ActivityOptions{
		StartToCloseTimeout: 10 * time.Second,
	}
	ctx = workflow.WithActivityOptions(ctx, ao)

	// Get the logger from the Workflow context.
	logger := workflow.GetLogger(ctx)

	// Log a message with the key-value pair "name" and its value.
	logger.Info("Workflow started", "name", name)

	// YourActivity is assumed to be registered elsewhere.
	var result string
	err := workflow.ExecuteActivity(ctx, YourActivity, name).Get(ctx, &result)
	if err != nil {
		logger.Error("Activity failed.", "Error", err)
		return "", err
	}

	logger.Info("Workflow completed.", "result", result)
	return result, nil
}
Provide a custom logger
How to provide a custom logger to the Temporal Client using the Go SDK.
This field sets a custom Logger that is used for all logging actions of the instance of the Temporal Client.
Although the Go SDK does not support most third-party logging solutions natively, our friends at Banzai Cloud built the adapter package Logur, which makes it possible to use third-party loggers with minimal overhead. Most of the popular logging solutions already have adapters in Logur, and you can find the full list in the Logur GitHub project.
Here is an example of using Logur to support Logrus:
package main
import (
"go.temporal.io/sdk/client"
"github.com/sirupsen/logrus"
logrusadapter "logur.dev/adapter/logrus"
"logur.dev/logur"
)
func main() {
// ...
logger := logur.LoggerToKV(logrusadapter.New(logrus.New()))
clientOptions := client.Options{
Logger: logger,
}
temporalClient, err := client.Dial(clientOptions)
// ...
}
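If you are on Go 1.21+ and a recent Go SDK release, you may not need a third-party adapter at all: the SDK's log package ships a NewStructuredLogger adapter for the standard library's log/slog. A minimal sketch, assuming an SDK version that exports log.NewStructuredLogger:

```go
package main

import (
	"log/slog"
	"os"

	"go.temporal.io/sdk/client"
	tlog "go.temporal.io/sdk/log"
)

func main() {
	// Wrap a standard-library slog.Logger for use as the Temporal Client logger.
	logger := tlog.NewStructuredLogger(
		slog.New(slog.NewJSONHandler(os.Stdout, nil)),
	)

	temporalClient, err := client.Dial(client.Options{
		Logger: logger,
	})
	if err != nil {
		// ...
	}
	defer temporalClient.Close()
}
```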
Visibility APIs
The term Visibility, within the Temporal Platform, refers to the subsystems and APIs that enable an operator to view Workflow Executions that currently exist within a Temporal Service.
Search Attributes
How to use Search Attributes using the Go SDK.
The typical method of retrieving a Workflow Execution is by its Workflow Id.
However, sometimes you'll want to retrieve one or more Workflow Executions based on another property. For example, imagine you want to get all Workflow Executions of a certain type that have failed within a time range, so that you can start new ones with the same arguments.
You can do this with Search Attributes.
- Default Search Attributes like WorkflowType, StartTime, and ExecutionStatus are automatically added to Workflow Executions.
- Custom Search Attributes can contain their own domain-specific data (like customerId or numItems).
The steps to using custom Search Attributes are:
1. Create a new Search Attribute in your Temporal Service using temporal operator search-attribute create or the Cloud UI.
2. Set the value of the Search Attribute for a Workflow Execution:
   - On the Client, by including it as an option when starting the Execution.
   - In the Workflow, by calling UpsertSearchAttributes.
3. Read the value of the Search Attribute:
   - On the Client, by calling DescribeWorkflow.
   - In the Workflow, by looking at WorkflowInfo.
4. Query Workflow Executions by the Search Attribute using a List Filter:
   - In the Temporal CLI.
   - In code, by calling ListWorkflowExecutions.
Here is how to query Workflow Executions:
The ListWorkflow() function retrieves a list of Workflow Executions that match the Search Attributes of a given List Filter. The metadata returned from the Visibility store can be used to get a Workflow Execution's history and details from the Persistence store.
Use a List Filter to define a request to pass into ListWorkflow().
request := &workflowservice.ListWorkflowExecutionsRequest{ Query: "CloseTime = missing" }
This List Filter returns only open Workflow Executions.
For more List Filter examples, see the examples provided for List Filters in the Temporal Visibility guide.
resp, err := temporalClient.ListWorkflow(context.Background(), request)
if err != nil {
return err
}
fmt.Println("First page of results:")
for _, exec := range resp.Executions {
fmt.Printf("Workflow ID %v\n", exec.Execution.WorkflowId)
}
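ListWorkflow returns one page of results at a time. To walk all matching Executions, feed the response's NextPageToken back into the next request. A sketch (production code should bound the loop and handle rate limits):

```go
// listAllOpenWorkflows pages through every Workflow Execution that matches
// the List Filter, printing each Workflow Id.
func listAllOpenWorkflows(ctx context.Context, temporalClient client.Client) error {
	var nextPageToken []byte
	for {
		resp, err := temporalClient.ListWorkflow(ctx, &workflowservice.ListWorkflowExecutionsRequest{
			Query:         "CloseTime = missing",
			NextPageToken: nextPageToken,
		})
		if err != nil {
			return err
		}
		for _, exec := range resp.Executions {
			fmt.Printf("Workflow ID %v\n", exec.Execution.WorkflowId)
		}
		// An empty token means there are no more pages.
		if len(resp.NextPageToken) == 0 {
			return nil
		}
		nextPageToken = resp.NextPageToken
	}
}
```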
Set custom Search Attributes
How to set custom Search Attributes using the Go SDK.
After you've created custom Search Attributes in your Temporal Service (using the temporal operator search-attribute create command or the Cloud UI), you can set the values of the custom Search Attributes when starting a Workflow.
Provide key-value pairs in StartWorkflowOptions.SearchAttributes.
Search Attributes are represented as map[string]interface{}.
The values in the map must correspond to the Search Attribute's value type:
- Bool = bool
- Datetime = time.Time
- Double = float64
- Int = int64
- Keyword = string
- Text = string
If you had custom Search Attributes CustomerId of type Keyword and MiscData of type Text, you would provide string values:
func (c *Client) CallYourWorkflow(ctx context.Context, workflowID string, payload map[string]interface{}) error {
	// ...
	searchAttributes := map[string]interface{}{
		"CustomerId": payload["customer"],
		"MiscData":   payload["miscData"],
	}
	options := client.StartWorkflowOptions{
		SearchAttributes: searchAttributes,
		// ...
	}
	we, err := c.Client.ExecuteWorkflow(ctx, options, app.YourWorkflow, payload)
	// ...
}
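Newer Go SDK releases also offer a type-safe alternative to the stringly-typed map via TypedSearchAttributes. A sketch, assuming an SDK version that exports temporal.NewSearchAttributeKeyKeyword and related constructors:

```go
// Type-safe Search Attribute keys; the key names must match attributes
// registered with your Temporal Service.
var (
	customerIDKey = temporal.NewSearchAttributeKeyKeyword("CustomerId")
	miscDataKey   = temporal.NewSearchAttributeKeyString("MiscData")
)

options := client.StartWorkflowOptions{
	TypedSearchAttributes: temporal.NewSearchAttributes(
		customerIDKey.ValueSet("customer-123"),
		miscDataKey.ValueSet("some text"),
	),
}
```

The typed API catches type mismatches at compile time instead of at Workflow start.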
Upsert Search Attributes
How to upsert Search Attributes using the Go SDK.
You can add or update Search Attributes from within Workflow code by calling UpsertSearchAttributes. This is useful in advanced cases where you want to update attributes dynamically as the Workflow progresses.
UpsertSearchAttributes merges the given attributes into the Workflow's existing Search Attribute map.
Consider this example Workflow code:
func YourWorkflow(ctx workflow.Context, input string) error {
	attr1 := map[string]interface{}{
		"CustomIntField":  1,
		"CustomBoolField": true,
	}
	workflow.UpsertSearchAttributes(ctx, attr1)

	attr2 := map[string]interface{}{
		"CustomIntField":     2,
		"CustomKeywordField": "seattle",
	}
	workflow.UpsertSearchAttributes(ctx, attr2)
	return nil
}
After the second call to UpsertSearchAttributes, the map will contain:
map[string]interface{}{
"CustomIntField": 2, // last update wins
"CustomBoolField": true,
"CustomKeywordField": "seattle",
}
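The merge semantics can be illustrated without the SDK: keys in the upserted map overwrite existing keys, and all other keys are preserved. A minimal, dependency-free sketch, not the SDK's actual implementation:

```go
package main

import "fmt"

// merge mimics UpsertSearchAttributes semantics: keys in update overwrite
// existing keys; keys absent from update are left untouched.
func merge(existing, update map[string]interface{}) map[string]interface{} {
	for k, v := range update {
		existing[k] = v
	}
	return existing
}

func main() {
	attrs := map[string]interface{}{"CustomIntField": 1, "CustomBoolField": true}
	attrs = merge(attrs, map[string]interface{}{"CustomIntField": 2, "CustomKeywordField": "seattle"})
	fmt.Println(attrs["CustomIntField"], attrs["CustomBoolField"], attrs["CustomKeywordField"])
	// prints: 2 true seattle
}
```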
Remove a Search Attribute from a Workflow
How to remove a Search Attribute from a Workflow using the Go SDK.
To remove a Search Attribute that was previously set, set it to an empty array: []. This clears the value, but there is no support for fully removing the field itself.
To achieve a similar effect when querying, set the field to a placeholder value instead.
For example, you could set CustomKeywordField to impossibleVal.
Then searching CustomKeywordField != 'impossibleVal' will match Workflows with CustomKeywordField not equal to impossibleVal, which includes Workflows without the CustomKeywordField set.