Send SUSE Security (NeuVector) events to AWS CloudTrail Lake


Guest writer: Dominik Wombacher, Sr. Partner Solutions Architect, AWS

In this blog, you’ll learn how to send SUSE Security (NeuVector) events and reports to AWS CloudTrail Lake. Storing alert data immutably and securely for years is a common use case: it helps fulfill audit and compliance requirements and keeps the ingested data accessible at any time, for example to recognize patterns, to perform near real-time troubleshooting or alerting via Amazon CloudWatch, or to run long-term retrospective analysis.

Walkthrough

The heart of the solution is Fluent Bit, a fast, lightweight, and highly scalable logging processor and forwarder. It uses the concept of Inputs and Outputs, and one of the included Inputs is Syslog; SUSE Security can send events to external systems via Syslog. The open source project Fluent Bit: Output Plugin for AWS CloudTrail Data Service performs the data output, and the Helm Chart: Fluent Bit Syslog to AWS CloudTrail Data is used to deploy the solution.

Let’s examine an Architecture overview before we dive into the configuration and deployment. 

Architecture

Figure 1: SUSE Security (NeuVector) Syslog to AWS CloudTrail Lake architecture. Workloads run on Amazon EKS and get temporary credentials via IAM roles for service accounts (IRSA). Syslog messages are received from SUSE Security and pushed to AWS CloudTrail Lake.

The data flow shown in Figure 1:

  1. Fluent Bit pod requests temporary AWS credentials
  2. SUSE Security (NeuVector) sends events via Syslog to Fluent Bit
  3. Fluent Bit receives events via Syslog
  4. Fluent Bit processes the messages with the syslog-rfc5424 parser
  5. The Fluent Bit CloudTrail output plugin calls PutAuditEvents (see the CLI sketch after this list)
  6. Users query the AWS CloudTrail Lake event data store for audit records
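
To make step 5 concrete, here is a minimal, hedged sketch of what a PutAuditEvents call looks like when issued manually with the AWS CLI. The event ID and the eventData payload are placeholders, and the exact fields CloudTrail Lake expects inside eventData are described in the PutAuditEvents documentation:


aws cloudtrail-data put-audit-events \
  --region REGION \
  --channel-arn "arn:aws:cloudtrail:REGION:ACCOUNT:channel/INTEGRATION" \
  --audit-events '[{"id": "test-event-1", "eventData": "{\"version\": \"1.0\", \"userIdentity\": {\"type\": \"fluent-bit\", \"principalId\": \"syslog\"}, \"eventSource\": \"neuvector\", \"eventName\": \"test-event\", \"eventTime\": \"2024-10-29 10:00:00\", \"UID\": \"test-event-1\"}"}]'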

The architecture used to demonstrate the implementation in this blog relies on Amazon EKS and temporary security credentials via IAM roles for service accounts (IRSA). With slight adjustments, it also works on self-hosted Kubernetes clusters, for example RKE2 or K3s. Even though temporary credentials are highly recommended, you can alternatively inject AWS access and secret keys as environment variables or provide a credentials file; that approach is out of scope for this blog.

Prerequisites

An intermediate (200-level) understanding of AWS IAM, Amazon EKS, AWS CloudTrail and SUSE Security (NeuVector), plus hands-on experience with Kubernetes and Helm. References are linked throughout this blog.

If you want to perform your own deployment based on the shared examples, you need:

 

Important: Please be aware that deploying AWS resources, for example Amazon EKS, will incur costs to your AWS Account. After you complete the evaluation of the described solution, don’t forget to delete all resources to avoid any additional charges.

Identity and Access Management (IAM)

For IRSA, you have to create an IAM Role with a custom trust relationship and attach a policy that grants access to CloudTrail Data. Have a look at Create an IAM OIDC provider for your cluster and Assign IAM roles to Kubernetes service accounts to learn more about the necessary steps.
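
If your Amazon EKS cluster is managed with eksctl, associating the OIDC provider can be done with a single command. This is a hedged sketch; CLUSTER_NAME is a placeholder for your own cluster name:


eksctl utils associate-iam-oidc-provider --region REGION --cluster CLUSTER_NAME --approve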

For this blog, a Role called nv-logging-fluent-bit-syslog-to-aws-cloudtrail-data was created with the following Policy and Trust Relationship.


policy.json


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "cloudtrail-data:PutAuditEvents"
            ],
            "Resource": "arn:aws:cloudtrail:REGION:ACCOUNT:channel/INTEGRATION"
        }
    ]
}

trust-relationship.json


{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::ACCOUNT:oidc-provider/OIDCPROVIDER"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "OIDCPROVIDER:aud": "sts.amazonaws.com",
          "OIDCPROVIDER:sub": "system:serviceaccount:NAMESPACE:SERVICEACCOUNT"
        }
      }
    }
  ]
}
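
If you prefer the command line over the console, the following hedged sketch shows how the Role and inline Policy could be created with the AWS CLI, assuming the two JSON documents above are saved as policy.json and trust-relationship.json and that the policy name nv-logging-cloudtrail-data is your own choice. Replace the ACCOUNT, REGION, OIDCPROVIDER, NAMESPACE and SERVICEACCOUNT placeholders in both files before running the commands:


aws iam create-role \
  --role-name nv-logging-fluent-bit-syslog-to-aws-cloudtrail-data \
  --assume-role-policy-document file://trust-relationship.json

aws iam put-role-policy \
  --role-name nv-logging-fluent-bit-syslog-to-aws-cloudtrail-data \
  --policy-name nv-logging-cloudtrail-data \
  --policy-document file://policy.json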

 

Installation

Helm, the package manager for Kubernetes, is used to install the Syslog to AWS CloudTrail Chart. It creates and configures the deployment, serviceaccount, service and configmap resources.

Before you can deploy the Chart, you have to prepare a values file with your customizations. To get a full list of available values, you can run:

helm show values oci://quay.io/wombelix/fluent-bit-syslog-to-aws-cloudtrail-data --version VERSION > values.yaml

At the time of writing, the latest version of the Helm Chart on Quay.io is 0.3.0.

The following example contains the minimum set of changes needed to deploy the Chart:

values.yaml


channelArn: "arn:aws:cloudtrail:REGION:ACCOUNT:channel/INTEGRATION"
awsRegion: "REGION"
enableStandardOutput: true
serviceAccount:
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::ACCOUNT:role/nv-logging-fluent-bit-syslog-to-aws-cloudtrail-data

The channelArn is the ARN of your AWS CloudTrail Lake integration. Set awsRegion to the region the integration lives in. You can set enableStandardOutput to false to reduce the amount of Kubernetes logs; it’s turned on in this example for demonstration purposes. The service account annotation is used by IRSA to inject temporary AWS credentials.
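
If you don't have the channel ARN of your integration at hand, a hedged way to look it up, assuming the AWS CLI is configured for the account and region of your CloudTrail Lake integration, is:


aws cloudtrail list-channels --region REGION

The output lists the channels in that region together with their ARNs; pick the one that belongs to your CloudTrail Lake integration.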

In this example, SUSE Security is installed in the namespace neuvector, the Helm release is called nv-logging, the customized settings are in a file called values.yaml, and version 0.3.0 of the chart is used. This results in a default name of nv-logging-fluent-bit-syslog-to-aws-cloudtrail-data for the various Kubernetes resources.

When the values.yaml file is ready and you have kubectl access to the cluster, run the installation:


helm upgrade --install nv-logging oci://quay.io/wombelix/fluent-bit-syslog-to-aws-cloudtrail-data --version 0.3.0 -f values.yaml --namespace neuvector

This will produce an output like this:


Pulled: quay.io/wombelix/fluent-bit-syslog-to-aws-cloudtrail-data:0.3.0-main_ef6e36b
Digest: sha256:6b593874e58d55e4b3f13839cbff379e59192cde57dd7a8dd7b67915cdc75561
quay.io/wombelix/fluent-bit-syslog-to-aws-cloudtrail-data:0.3.0-main_ef6e36b contains an underscore.

OCI artifact references (e.g. tags) do not support the plus sign (+). To support
storing semantic versions, Helm adopts the convention of changing plus (+) to
an underscore (_) in chart version tags when pushing to a registry and back to
a plus (+) when pulling from a registry.

Release "nv-logging" has been upgraded. Happy Helming!
NAME: nv-logging
LAST DEPLOYED: Tue Oct 29 11:04:33 2024
NAMESPACE: neuvector
STATUS: deployed
REVISION: 3
TEST SUITE: None
NOTES:
To get the syslog service address please run the following command:
  export SVC_NAME=$(kubectl get svc --namespace neuvector nv-logging-fluent-bit-syslog-to-aws-cloudtrail-data -o jsonpath="{.metadata.name}")
  export SVC_SYSLOG_PORT=$(kubectl get svc --namespace neuvector nv-logging-fluent-bit-syslog-to-aws-cloudtrail-data -o jsonpath="{.spec.ports[0].port}")
  echo $SVC_NAME:$SVC_SYSLOG_PORT

You can run the commands listed in the Notes section of the Helm output to get the Syslog service URL and port. By default, the URL is set to the Helm release name plus the Chart name, in this example nv-logging-fluent-bit-syslog-to-aws-cloudtrail-data, and the port to tcp/5140.

Verification

You can perform the following tests to verify that the deployment was successful.

 

1/ The Helm-triggered deployment should complete successfully after a couple of minutes. Command:


kubectl rollout status deployment -n neuvector nv-logging-fluent-bit-syslog-to-aws-cloudtrail-data

Output:


deployment "nv-logging-fluent-bit-syslog-to-aws-cloudtrail-data" successfully rolled out

 

2/ One pod is in the Running state. Command:


kubectl get pods -n neuvector | grep nv-logging

Output:


nv-logging-fluent-bit-syslog-to-aws-cloudtrail-data-7c6bbd5wd55   1/1     Running     0          7m13s

 

3/ The service account is linked to the pod, which is in the Running state and Ready. Environment variables for the Channel, Region, STS, Role and Token file are injected. Mounts exist for the custom Fluent Bit ConfigMap and for the projected service account token used by IRSA (eks.amazonaws.com). Command:


kubectl describe pod -n neuvector nv-logging-fluent-bit-syslog-to-aws-cloudtrail-data-7c6bbd5wd55

Output:


Name:             nv-logging-fluent-bit-syslog-to-aws-cloudtrail-data-7c6bbd5wd55
Namespace:        neuvector
[...]
Service Account:  nv-logging-fluent-bit-syslog-to-aws-cloudtrail-data
[...]
Status:           Running
[...]
Containers:
  fluent-bit-syslog-to-aws-cloudtrail-data:
[...]
    Image:         quay.io/wombelix/fluent-bit-aws-cloudtrail-data:v0.2.0
[...]
    State:          Running
[...]
    Ready:          True
[...]
    Environment:
      AWS_CLOUDTRAIL_DATA_CHANNELARN:  arn:aws:cloudtrail:REGION:ACCOUNT:channel/INTEGRATION
      AWS_REGION:                      REGION
      AWS_STS_REGIONAL_ENDPOINTS:      regional
      AWS_ROLE_ARN:                    arn:aws:iam::ACCOUNT:role/nv-logging-fluent-bit-syslog-to-aws-cloudtrail-data
      AWS_WEB_IDENTITY_TOKEN_FILE:     /var/run/secrets/eks.amazonaws.com/serviceaccount/token
    Mounts:
      /etc/fluent-bit/ from configmap-nv-logging-fluent-bit-syslog-to-aws-cloudtrail-data (ro)
      /var/run/secrets/eks.amazonaws.com/serviceaccount from aws-iam-token (ro)
[...]
Volumes:
  aws-iam-token:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  86400
  configmap-nv-logging-fluent-bit-syslog-to-aws-cloudtrail-data:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      nv-logging-fluent-bit-syslog-to-aws-cloudtrail-data
    Optional:  false
[...]
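
As an additional, optional check, you can confirm that the IRSA role annotation really ended up on the service account used by the pod. A hedged sketch with kubectl and grep:


kubectl get serviceaccount -n neuvector nv-logging-fluent-bit-syslog-to-aws-cloudtrail-data -o yaml | grep role-arn

The output should contain the eks.amazonaws.com/role-arn annotation pointing at the IAM Role created earlier.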

 

4/ Log messages come in from SUSE Security and are pushed to AWS CloudTrail Lake. Get the pod name by running:


kubectl get pods -n neuvector | grep nv-logging

Show the pod logs with the following command:


kubectl logs -n neuvector nv-logging-fluent-bit-syslog-to-aws-cloudtrail-data-7c6bbd5wd55 -f

The -f parameter lets kubectl keep running and show new log lines as they arrive; you can cancel it with CTRL+C. Leave the terminal open and the command running for now. Because enableStandardOutput was set to true in the Helm Chart values file, processed messages will show up in the pod logs as well. The output will look like this:


Fluent Bit v3.1.9
* Copyright (C) 2015-2024 The Fluent Bit Authors
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io

______ _                  _    ______ _ _           _____  __  
|  ___| |                | |   | ___ (_) |         |____ |/  | 
| |_  | |_   _  ___ _ __ | |_  | |_/ /_| |_  __   __   / /`| | 
|  _| | | | | |/ _ \ '_ \| __| | ___ \ | __| \ \ / /   \ \ | | 
| |   | | |_| |  __/ | | | |_  | |_/ / | |_   \ V /.___/ /_| |_
\_|   |_|\__,_|\___|_| |_|\__| \____/|_|\__|   \_/ \____(_)___/

[2024/10/29 10:04:34] [ info] [fluent bit] version=3.1.9, commit=431fa79ae2, pid=1
[2024/10/29 10:04:34] [ info] [storage] ver=1.5.2, type=memory, sync=normal, checksum=off, max_chunks_up=128
[2024/10/29 10:04:34] [ info] [cmetrics] version=0.9.6
[2024/10/29 10:04:34] [ info] [ctraces ] version=0.5.6
[2024/10/29 10:04:34] [ info] [input:syslog:syslog.0] initializing
[2024/10/29 10:04:34] [ info] [input:syslog:syslog.0] storage_strategy='memory' (memory only)
[2024/10/29 10:04:34] [ info] [in_syslog] TCP server binding 0.0.0.0:5140
[2024/10/29 10:04:34] [ info] [output:stdout:stdout.1] worker #0 started
[2024/10/29 10:04:34] [ info] [http_server] listen iface=0.0.0.0 tcp_port=2020
[2024/10/29 10:04:34] [ info] [sp] stream processor started
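
Optionally, before wiring up SUSE Security, you can push a hand-crafted RFC 5424 message into the pipeline to confirm that the Syslog input is reachable. The following is a hedged sketch using a throwaway busybox pod and its built-in nc; the message content is made up purely for testing:


kubectl run -n neuvector syslog-test --rm -i --restart=Never --image=busybox -- \
  sh -c "echo '<134>1 2024-10-29T12:00:00Z test-host test-app 1 - - Hello from a manual test' | nc nv-logging-fluent-bit-syslog-to-aws-cloudtrail-data 5140"

If everything is wired up correctly, the message shows up in the pod logs you are already tailing.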

 

The next step is to configure SUSE Security (NeuVector) to push Syslog messages to the nv-logging deployment.

SUSE Security (NeuVector)

Now that the nv-logging release is deployed and running, let’s send Syslog messages from SUSE Security to it. Log in to the SUSE Security WebUI and navigate to Settings > Configuration.


Figure 2: SUSE Security (NeuVector) Notification Configuration Settings options.

 

As shown in Figure 2, select Notification Configuration and activate Syslog. Set Server to the Syslog service name, in this example nv-logging-fluent-bit-syslog-to-aws-cloudtrail-data, Protocol to TCP and Port to 5140. Keep Event, Security Event and Risk Report selected in the Categories section. Select the checkboxes In JSON and Send individual log events per vulnerability.

Figure 3: SUSE Security (NeuVector) Notification Configuration Settings. Adjusted to send messages to the nv-logging deployment.

 

After you have changed all configuration settings, scroll down a bit and select the Submit button to apply them. 

Go back to the previous terminal window with the pod logs to verify that messages arrive. You don’t need to study the content of the incoming messages in detail. What’s relevant is that every received syslog message starts with [#] syslog.0 and that, after a stream of messages has been processed, there is a line that shows Successful processed Audit Events: #.


[0] syslog.0: [[1730197072.783363332, {}], {"pri"=>"134", "host"=>"neuvector-controller-pod-6bcd5d4b4-5ckjt", "ident"=>"/usr/local/bin/controller", "pid"=>"6", "msgid"=>"neuvector", "extradata"=>"-", "message"=>"notification=event,name=RESTful.Write,level=Info,reported_timestamp=1730197068,reported_at=2024-10-29T10:17:48Z,cluster_name=cluster.local,host_id=ip-10-0-141-134.eu-central-1.compute.internal:ec28d2ba-521d-8663-87ad-9ec115d9c894,host_name=ip-10-0-141-134.eu-central-1.compute.internal,enforcer_id=,enforcer_name=,controller_id=bb5a2b3b2b7bb097eb239d07531f1ef7d6c721506c1e39a28728d4f2d5839e67,controller_name=neuvector-controller-pod-6bcd5d4b4-5m69d,workload_id=,workload_name=,workload_domain=,workload_image=,workload_service=,category=RESTFUL,user=admin,user_roles=map[:admin],user_addr=10.0.141.134,user_session=6f98963f8b55,rest_method=PATCH,rest_request=https://neuvector-svc-controller.neuvector:10443/v2/system/config,rest_body={"atmo_config":{"mode_auto_d2m":false,"mode_auto_d2m_duration":3600,"mode_auto_m2p":false,"mode_auto_m2p_duration":3600},"config":{"cluster_name":"cluster.local","ibmsa_ep_dashboard_url":"https://neuvec-awsel-zgl7edwnu5hm-35ac3042b1cb8442.elb.eu-central-1.amazonaws.com:8443/","ibmsa_ep_enabled":false,"new_service_policy_mode":"Discover","new_service_profile_baseline":"zero-drift","no_telemetry_report":true,"output_event_to_logs":false,"registry_http_proxy":{"password":"The value is masked","url":"","username":""},"registry_http_proxy_status":false,"registry_https_proxy":{"password":"The value is masked","url":"","username":""},"registry_https_proxy_status":false,"scanner_autoscale":{"max_pods":1,"min_pods":1,"strategy":""},"single_cve_per_syslog":true,"syslog_categories":["event","security-event","audit"],"syslog_cve_in_layers":true,"syslog_in_json":false,"syslog_ip":"nv-logging-fluent-bit-syslog-to-aws-cloudtrail-data","syslog_ip_proto":6,"syslog_level":"Info","syslog_port":5140,"syslog_status":true,"unused_,message=Configure system settings"}]
time="2024-10-29T10:17:53Z" level=info msg="Successful processed Audit Events: 1"

This confirms that the input via Syslog and the output to AWS CloudTrail Lake work.
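
You can also double-check ingestion from the AWS side. A hedged sketch, assuming the AWS CLI is configured for the account that owns the integration, is to inspect the channel's ingestion status:


aws cloudtrail get-channel --region REGION --channel "arn:aws:cloudtrail:REGION:ACCOUNT:channel/INTEGRATION"

The response should include an ingestion status section whose latest ingestion success timestamp advances as new events arrive.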

Query data

Pushing data to AWS CloudTrail Lake for immutable and secure storage is the first step. The next is to run SQL-based queries to retrieve the data. Log in to your AWS account in the AWS Console and switch to the region your AWS CloudTrail Lake event data store is located in. Then navigate to CloudTrail > Lake > Query.

Figure 4: AWS CloudTrail Lake Query Editor, select your event data store and query data.

 

On the left side of the Query Editor, select the event data store and copy its ID; you need it to run a query against the event data store. The following simplified example query retrieves the event message that came from SUSE Security. Replace the event data store ID in the FROM clause with yours and select Run:


SELECT 
	json_extract(eventJson, '$.eventData.additionalEventData.message') AS additionalEventData
FROM e34ef727-a7c5-42a9-af49-d0be5211b245

Figure 5: Example results after submitting the Query against the event data store.

 

Learn more about CloudTrail Lake SQL constraints and Create CloudTrail Lake queries from English language prompts to design advanced queries for your data.
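
Queries don't have to be run from the console. The following hedged sketch runs the same statement with the AWS CLI, assuming the example event data store ID from above:


QUERY_ID=$(aws cloudtrail start-query --region REGION \
  --query-statement "SELECT json_extract(eventJson, '$.eventData.additionalEventData.message') AS additionalEventData FROM e34ef727-a7c5-42a9-af49-d0be5211b245" \
  --query 'QueryId' --output text)

aws cloudtrail get-query-results --region REGION --query-id "$QUERY_ID"

Re-run get-query-results until the query has finished; the result rows then contain the same additionalEventData values as in the console.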

The following are two example records, picked at random from the sample results. The first contains a SUSE Security RESTful Write event: the change of system settings by the user admin, more specifically changing the Syslog server. The second is a Warning about a netcat process inside a pod that was flagged as suspicious:


[
  {
    "notification": "event",
    "name": "RESTful.Write",
    "level": "Info",
    "reported_timestamp": 1730195043,
    "reported_at": "2024-10-29T09:44:03Z",
    "cluster_name": "cluster.local",
    "host_id": "ip-10-0-141-134.eu-central-1.compute.internal:ec28d2ba-521d-8663-87ad-9ec115d9c894",
    "host_name": "ip-10-0-141-134.eu-central-1.compute.internal",
    "enforcer_id": "",
    "enforcer_name": "",
    "controller_id": "bb5a2b3b2b7bb097eb239d07531f1ef7d6c721506c1e39a28728d4f2d5839e67",
    "controller_name": "neuvector-controller-pod-6bcd5d4b4-5m69d",
    "workload_id": "",
    "workload_name": "",
    "workload_domain": "",
    "workload_image": "",
    "workload_service": "",
    "category": "RESTFUL",
    "user": "admin",
    "user_roles": {
      "": "admin"
    },
    "user_addr": "10.0.141.134",
    "user_session": "b9755676d458",
    "rest_method": "PATCH",
    "rest_request": "https://neuvector-svc-controller.neuvector:10443/v2/system/config",
    "rest_body": "{\"atmo_config\":{\"mode_auto_d2m\":false,\"mode_auto_d2m_duration\":3600,\"mode_auto_m2p\":false,\"mode_auto_m2p_duration\":3600},\"config\":{\"cluster_name\":\"cluster.local\",\"ibmsa_ep_dashboard_url\":\"https://neuvec-awsel-zgl7enudw5hm-35ac4230cbb14284.elb.eu-central-1.amazonaws.com:8443/\",\"ibmsa_ep_enabled\":false,\"new_service_policy_mode\":\"Discover\",\"new_service_profile_baseline\":\"zero-drift\",\"no_telemetry_report\":true,\"output_event_to_logs\":false,\"registry_http_proxy\":{\"password\":\"The value is masked\",\"url\":\"\",\"username\":\"\"},\"registry_http_proxy_status\":false,\"registry_https_proxy\":{\"password\":\"The value is masked\",\"url\":\"\",\"username\":\"\"},\"registry_https_proxy_status\":false,\"scanner_autoscale\":{\"max_pods\":1,\"min_pods\":1,\"strategy\":\"\"},\"single_cve_per_syslog\":true,\"syslog_categories\":[\"event\",\"security-event\",\"audit\"],\"syslog_cve_in_layers\":true,\"syslog_in_json\":true,\"syslog_ip\":\"nv-logging-fluent-bit-syslog-to-aws-cloudtrail-data\",\"syslog_ip_proto\":6,\"syslog_level\":\"Info\",\"syslog_port\":5140,\"syslog_status\":true,\"unused_g",
    "message": "Configure system settings"
  },
  {
    "notification": "incident",
    "name": "Container.Suspicious.Process",
    "level": "Warning",
    "reported_timestamp": 1728559008,
    "reported_at": "2024-10-10T11:16:48Z",
    "cluster_name": "cluster.local",
    "host_id": "ip-10-0-0-60:ec2762b0-e7fc-8ece-9a0a-9e421712cfc1",
    "host_name": "ip-10-0-0-60",
    "enforcer_id": "77a95fdf8019aa962eba5ca1d7758fd35e1c2168241483b2c7648b1485cbeeb4",
    "enforcer_name": "neuvector-enforcer-pod-mh9gk",
    "id": "680d3d31-ba0a-4a87-b6ab-799ef8615a76",
    "workload_id": "8ce5890a508ec41cdd66eb6bae66f315c59faccdece8aaafe1b9621c45464f78",
    "workload_name": "instance-manager-4e77d195a50f5adbdaf84e668fde5082",
    "workload_domain": "longhorn-system",
    "workload_image": "docker.io/rancher/mirrored-longhornio-longhorn-instance-manager:v1.6.2",
    "workload_service": "instance-manager.longhorn-system",
    "proc_name": "nc",
    "proc_path": "/usr/bin/nc",
    "proc_cmd": "nc -zv localhost 8503",
    "proc_effective_user": "root",
    "proc_parent_name": "sh",
    "proc_parent_path": "/usr/bin/bash",
    "action": "violate",
    "group": "nv.instance-manager.longhorn-system",
    "rule_id": "00000000-0000-0000-0000-000000000001",
    "aggregation_from": 1728558943,
    "count": 12,
    "message": "Risky application: netcat process"
  }
]

The JSON format allows you to filter, enrich and further process the data based on your individual needs. The provided example is the starting point of your journey to integrate SUSE Security in your audit and compliance stack.

Cleaning up

After you have completed the evaluation of the described solution, don’t forget to delete all AWS resources you created to avoid any additional charges. The following command uninstalls the Helm Chart; nv-logging is the release name and neuvector the namespace used during the installation:


helm uninstall nv-logging -n neuvector
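
The AWS-side resources have to be removed separately. The following is a hedged sketch of typical cleanup steps, assuming the IAM Role and inline Policy were created as shown earlier (the policy name nv-logging-cloudtrail-data matches the earlier sketch) and that the CloudTrail Lake channel is no longer needed:


aws iam delete-role-policy \
  --role-name nv-logging-fluent-bit-syslog-to-aws-cloudtrail-data \
  --policy-name nv-logging-cloudtrail-data
aws iam delete-role \
  --role-name nv-logging-fluent-bit-syslog-to-aws-cloudtrail-data

aws cloudtrail delete-channel --region REGION \
  --channel "arn:aws:cloudtrail:REGION:ACCOUNT:channel/INTEGRATION"

If the event data store and the Amazon EKS cluster were created only for this evaluation, remove them as well; the event data store requires termination protection to be disabled before it can be deleted.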

Conclusion

This blog demonstrated how to store SUSE Security alert data in AWS CloudTrail Lake to complement your compliance stack and fulfill audit and regulatory requirements. It also highlighted the benefits of running SQL-based queries on the ingested data to recognize patterns, perform near real-time troubleshooting, or do long-term retrospective analysis.

The sample solution, based on the Fluent Bit output plugin for AWS CloudTrail Data, isn’t limited to SUSE Security. It allows you to leverage Fluent Bit to ingest data from over 40 sources, including Kernel Logs, Kubernetes Events, systemd and Prometheus Remote Write. This opens up use cases beyond SUSE Security and outside Kubernetes; Fluent Bit runs on SUSE Linux Enterprise Server (SLES) as well.

Now it’s your turn: test the integration with AWS CloudTrail Lake yourself, and feel free to comment on and share this blog.

Contribute

The heart of the solution is the still-young open source project Fluent Bit: Output Plugin for AWS CloudTrail Data Service and its sibling Helm Chart: Fluent Bit Syslog to AWS CloudTrail Data. Join the community, contribute, share your experience and use case, provide feedback, open an issue or create a pull/merge request.

Further reading

 

Author

Dominik Wombacher

Dominik works as Sr. Partner Solutions Architect, with a focus on SUSE products, in the Linux Partner Team at AWS. He is an Open Source Enthusiast and Contributor, Dog Person, Passionate Engineer who loves solving tricky issues and is always eager to learn new things. His professional career started in 2002 and has always been IT-centric, distinguished by broad knowledge of different technologies and fields. At AWS, he helps Partners and Customers to optimize existing and to migrate new workloads to AWS.
