Logging can help you understand what's happening under the hood of your Coder deployment and is useful for debugging and monitoring the health of your cluster.
## Accessing logs
You can access your logs at any time by running:
```console
kubectl -n coder logs <podname>
```
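If you're not sure of the pod name, you can list the pods in the namespace first. A quick sketch (the pod name below is an example):

```console
# List pods in the coder namespace to find the pod's name
kubectl -n coder get pods

# Stream a pod's logs as they are written (-f follows the log output)
kubectl -n coder logs -f coderd-7f8b9c5d4-abcde
```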
## Exporting logs
The following sections show how to change your Helm chart values to export logs. If you're unfamiliar with the process, see our guide to updating your Helm chart.
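For example, a minimal sketch of applying changed values to an existing release; the release name, chart reference, and values file here are assumptions that may differ in your deployment:

```console
helm upgrade coder coder/coder \
  --namespace coder \
  --values values.yaml
```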
Please note that:

- You can disable an output by setting its value (for example, `/dev/stdout` or `/dev/stderr`) to an empty string.
- You can use `/dev/stdout` and `/dev/stderr` interchangeably, since Kubernetes captures both streams in the pod's logs.
- Coder supports writing logs to multiple output targets.
### Human-readable logs
This is the default value that's set in the Helm chart:

```yaml
logging:
  human: /dev/stderr
```

When set, logs will be sent to the `/dev/stderr` file path and formatted for human readability.
### JSON-formatted logs
You can get JSON-formatted logs by setting the `json` value:

```yaml
logging:
  json: /dev/stderr
```
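Because Coder supports multiple output targets, you can enable both formats at once. A minimal sketch combining the two values shown above (the choice of streams is illustrative):

```yaml
logging:
  # Human-readable logs on stderr, JSON logs on stdout;
  # Kubernetes captures both in the pod's logs
  human: /dev/stderr
  json: /dev/stdout
```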
### Sending logs to Google Stackdriver
If you deployed your Kubernetes cluster to Google Cloud, you can send logs to Stackdriver:

```yaml
logging:
  stackdriver: /dev/stderr
```
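Once exported, these logs appear in Google Cloud's logging console. As a hedged sketch, assuming a GKE cluster and the `coder` namespace, you could query recent entries with:

```console
gcloud logging read \
  'resource.type="k8s_container" AND resource.labels.namespace_name="coder"' \
  --limit 10
```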
### Sending logs to Splunk
Coder can send logs directly to Splunk. Splunk uses the HTTP Event Collector (HEC) to receive data and application logs. See Splunk's docs for information on configuring an HEC.

Once you've configured an HEC, you'll need to update your Helm chart with your HEC endpoint and your HEC token.
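Before updating your chart, you can verify the endpoint and token by sending a test event with curl; a minimal sketch, where the host and token are placeholders:

```console
curl https://hec.example.com:8088/services/collector/event \
  -H "Authorization: Splunk <your-hec-token>" \
  -d '{"event": "coder hec test"}'
```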
To provide your HEC endpoint:

```yaml
logging:
  splunk:
    url: ""
```
To provide your HEC token:

```yaml
logging:
  splunk:
    token: ""
```
Optionally, you can specify the Splunk channel that you'd like associated with your messages. Channels allow logs to be segmented by client, preventing Coder application logs from affecting other client logs in your Splunk deployment.
```yaml
logging:
  splunk:
    channel: ""
```
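Putting it together, a sketch of a complete `splunk` block; the URL, token, and channel below are placeholder values, and your HEC endpoint may differ depending on how Splunk is deployed:

```yaml
logging:
  splunk:
    # Placeholder values; substitute your own HEC endpoint, token, and channel
    url: "https://hec.example.com:8088"
    token: "00000000-0000-0000-0000-000000000000"
    channel: "coder"
```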