Kubewarden: Deep Dive into Policy Logging
However, policies run in a confined WebAssembly environment. For this usual mechanism to work, Kubewarden would need to set up the runtime environment so the policy can write to the stdout and stderr file descriptors; upon policy completion, Kubewarden could check them – or stream log messages as they are produced.
Given that Kubewarden uses waPC to allow intercommunication between the guest (the policy) and the host (Kubewarden – the policy-server, or kwctl if we are running policies manually), we have extended our language SDKs so that they can log messages by using waPC internally.
Kubewarden has defined a contract between policies (guests) and the host (Kubewarden) for performing policy settings validation, policy validation, policy mutation, and logging.
The waPC interface used for logging is part of this contract: once you have built a policy, it should remain possible to run it in future Kubewarden versions. Kubewarden keeps this contract behind the SDK of your preferred language, so you don’t have to deal with the details of how logging is implemented in Kubewarden. You just use your logging library of choice for the language you are working with.
Let’s look at how to take advantage of logging in Kubewarden in specific languages!
For Policy Authors
Go
We are going to use the Go policy template as a starting point.
Our Go SDK provides integration with the onelog library. When our policy is built for the WebAssembly target, it will send the logs to the host through waPC. Otherwise, it will just print them to stderr – but this is only relevant if you run your policy outside a Kubewarden runtime environment.
One of the first things our policy does in its main.go file is to initialize the logger:
var (
	logWriter = kubewarden.KubewardenLogWriter{}
	logger    = onelog.New(
		&logWriter,
		onelog.ALL, // shortcut for onelog.DEBUG|onelog.INFO|onelog.WARN|onelog.ERROR|onelog.FATAL
	)
)
We are then able to use the onelog API to produce log messages. We could, for example, perform structured logging at debug level:
logger.DebugWithFields("validating object", func(e onelog.Entry) {
	e.String("name", gjson.GetBytes(payload, "request.object.metadata.name").String())
	e.String("namespace", gjson.GetBytes(payload, "request.object.metadata.namespace").String())
})
Or, with info level:
logger.InfoWithFields("validating object", func(e onelog.Entry) {
	e.String("name", gjson.GetBytes(payload, "request.object.metadata.name").String())
	e.String("namespace", gjson.GetBytes(payload, "request.object.metadata.namespace").String())
})
What happens under the covers is that our Go SDK sends every log event to the Kubewarden host through waPC.
Rust
Let’s use the Rust policy template as our guide.
Our Rust SDK implements an integration with the slog crate. This crate exposes the concept of drains, so we have to define a global drain that we will use throughout our policy code:
use kubewarden::logging;
use lazy_static::lazy_static;
use slog::{o, Logger};

lazy_static! {
    static ref LOG_DRAIN: Logger = Logger::root(
        logging::KubewardenDrain::new(),
        o!("some-key" => "some-value") // this key-value pair will be shared by all
                                       // logging events that use this logger
    );
}
Then, we can use the macros provided by slog to log at different levels:
use slog::{crit, debug, error, info, trace, warn};
Let’s log an info-level message:
info!(
    LOG_DRAIN,
    "rejecting resource";
    "resource_name" => &resource_name
);
As with the Go SDK implementation, our Rust implementation of the slog drain sends these log events to the host by using waPC.
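For the curious, here is a minimal sketch of what such a drain could look like, assuming the wapc-guest crate’s host_call function. The waPC binding, namespace and operation names below are illustrative assumptions – the real details are encapsulated by kubewarden::logging::KubewardenDrain:

use slog::{Drain, OwnedKVList, Record};
use wapc_guest::host_call;

struct WapcDrain;

impl Drain for WapcDrain {
    type Ok = ();
    type Err = slog::Never;

    fn log(&self, record: &Record, _values: &OwnedKVList) -> Result<(), slog::Never> {
        // Serialize the record minimally; the real drain builds a richer,
        // structured payload.
        let payload = format!("{} {}", record.level(), record.msg());
        // Forward the log event to the host over waPC; the call names here
        // are illustrative.
        let _ = host_call("kubewarden", "tracing", "log", payload.as_bytes());
        Ok(())
    }
}

The point of routing everything through a drain is that policy code keeps using plain slog macros, while the decision of where log events end up stays with the host.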
You can read more about slog in its documentation.
Swift
We will be looking at the Swift policy template for this example.
As happens with the Go and Rust SDKs, the Swift SDK is instrumented to use Swift’s LogHandler from the swift-log project, so our policy only has to initialize it. In our Sources/Policy/main.swift file:
import kubewardenSdk
import Logging
LoggingSystem.bootstrap(PolicyLogHandler.init)
Then, in our policy business logic, under Sources/BusinessLogic/validate.swift, we can log at different levels:
import Logging
public func validate(payload: String) -> String {
    // ...
    logger.info("validating object",
                metadata: [
                    "some-key": "some-value",
                ])
    // ...
}
Following the same strategy as the Go and Rust SDKs, the Swift SDK can push log events to the host through waPC.
For Cluster Administrators
Being able to log from within a policy is only half of the story: we also have to be able to read and potentially collect these logs.
As we have seen, Kubewarden policies support structured logging that is then forwarded to the component running the policy. Usually, this is kwctl if you are executing the policy manually, or policy-server if the policy is running in a Kubernetes environment.
Both kwctl and policy-server use the tracing crate to produce log events: both the events generated by the application itself and those generated by policies running in WebAssembly runtime environments.
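To give a flavor of the host side, here is a minimal, self-contained sketch of producing structured events with the tracing crate. The subscriber setup and field names are illustrative – this is not how policy-server is actually configured:

use tracing::{info, warn};

fn main() {
    // A subscriber decides where events end up: plain text, JSON, or an
    // OpenTelemetry exporter. Here we print human-readable text to stderr.
    tracing_subscriber::fmt()
        .with_writer(std::io::stderr)
        .init();

    // Structured fields travel alongside the message.
    info!(policy = "example-policy", "policy evaluation started");
    warn!(policy = "example-policy", "resource rejected");
}

Swapping the subscriber is what lets the same events be rendered as plain text, JSON, or OpenTelemetry data, which is exactly the choice the --log-fmt flag exposes, as we will see next.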
kwctl
The kwctl CLI tool takes a very straightforward approach to logging from policies: it will print them to the standard error file descriptor.
policy-server
The policy-server supports different log formats: json, text and otlp.
otlp? I hear you ask. It stands for OpenTelemetry Protocol. We will look into that in a bit.
If the policy-server is run with the --log-fmt argument set to json or text, the output will be printed to the standard error file descriptor in JSON or plain text format. These messages can be read using kubectl logs <policy-server-pod>.
If --log-fmt is set to otlp, the policy-server will use OpenTelemetry to report logs and traces.
OpenTelemetry
Kubewarden is instrumented with OpenTelemetry, so it’s possible for the policy-server to send trace events to an OpenTelemetry collector by using the OpenTelemetry Protocol (otlp).
Our official Kubewarden Helm Chart has certain values that allow you to deploy Kubewarden with OpenTelemetry support, reporting logs and traces to, for example, a Jaeger instance:
telemetry:
  enabled: True
  tracing:
    jaeger:
      endpoint: "all-in-one-collector.jaeger.svc.cluster.local:14250"
This functionality closes the gap on logging and tracing, given the flexibility the OpenTelemetry collector gives us in deciding what to do with these logs and traces.
You can read more about Kubewarden’s integration with OpenTelemetry in our documentation.
But this is a big enough topic on its own and worth a future blog post. Stay logged!