Try with Kubernetes

This guide will walk you through how Camblet can be used with Kubernetes. We are going to use a single-node Kubernetes environment with a simple curl client pod and an echo server deployment.

Enter into the Lima VM

Terminal window
limactl shell quickstart

Configure Camblet agent

Camblet consists of two building blocks:

  • Kernel module: Handles transparent TLS and enforces policies.
  • Agent: Signs certificates and collects metadata for processes.

Camblet supports various metadata sources, which can be configured under the metadataCollectors block of the configuration. Camblet can utilize metadata from these sources to identify processes and enforce policies. The procfs, linuxos, and sysfsdmi collectors are enabled by default.

The Kubernetes metadata collector gathers data from the kubelet that runs on the same node as the agent, but it is not enabled by default. Proper authentication credentials are necessary to enable that collector.

The agent configuration resides in /etc/camblet/config.yaml; modify it to enable the Kubernetes metadata collector.

The config should resemble the following:

agent:
  trustDomain: acme.corp
  defaultCertTTL: 2h
  metadataCollectors:
    kubernetes:
      enabled: true
      kubeletCA: /var/lib/rancher/k3s/server/tls/server-ca.crt
      credentials: /var/lib/rancher/k3s/server/tls/client-admin.crt,/var/lib/rancher/k3s/server/tls/client-admin.key

The Camblet agent must be restarted after the configuration change.

Terminal window
sudo systemctl restart camblet.service
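
To confirm that the agent came back up and loaded the new configuration, a quick look at the service status and recent journal entries helps (standard systemd tooling; the exact log lines depend on the agent version):

Terminal window
sudo systemctl status camblet.service
sudo journalctl -u camblet.service --since "5 minutes ago"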

Let’s check if the agent is indeed able to collect metadata from Kubernetes for a process that runs within a pod. K3s comes with Traefik installed, so the following command can be used as a test.

Terminal window
camblet agent augment $(pidof traefik)
output
k8s:annotation:kubernetes.io/config.seen:2024-01-26T14:56:05.047670838Z
k8s:annotation:kubernetes.io/config.source:api
k8s:annotation:prometheus.io/path:/metrics
k8s:annotation:prometheus.io/port:9100
k8s:annotation:prometheus.io/scrape:true
k8s:container:image:id:docker.io/rancher/mirrored-library-traefik@sha256:ca9c8fbe001070c546a75184e3fd7f08c3e47dfc1e89bff6fe2edd302accfaec
k8s:container:name:traefik
k8s:label:app.kubernetes.io/instance:traefik-kube-system
k8s:label:app.kubernetes.io/managed-by:Helm
k8s:label:app.kubernetes.io/name:traefik
k8s:label:helm.sh/chart:traefik-25.0.2_up25.0.0
k8s:label:pod-template-hash:f4564c4f4
k8s:node:name:lima-quickstart
k8s:pod:ephemeral-image:count:0
k8s:pod:image:count:1
k8s:pod:image:id:docker.io/rancher/mirrored-library-traefik@sha256:ca9c8fbe001070c546a75184e3fd7f08c3e47dfc1e89bff6fe2edd302accfaec
k8s:pod:image:name:rancher/mirrored-library-traefik:2.10.5
k8s:pod:init-image:count:0
k8s:pod:name:traefik-f4564c4f4-fqhqm
k8s:pod:namespace:kube-system
k8s:pod:owner:kind:replicaset
k8s:pod:owner:kind-with-version:apps/v1/replicaset
k8s:pod:serviceaccount:traefik
...

After this reconfiguration, Kubernetes-associated labels can be used for process identification as well.
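
Since the Kubernetes-derived entries are all prefixed with k8s:, plain shell filtering is enough to inspect just the labels; for example (standard grep, nothing Camblet-specific):

Terminal window
camblet agent augment $(pidof traefik) | grep '^k8s:label'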

Deploy workloads to Kubernetes

An echo server running as a Kubernetes deployment and a simple Alpine pod with cURL are going to be used to showcase how Camblet integrates with Kubernetes.

First, install the echo server using kubectl.

Terminal window
kubectl create -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo
  labels:
    k8s-app: echo
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: echo
  template:
    metadata:
      labels:
        k8s-app: echo
    spec:
      terminationGracePeriodSeconds: 2
      containers:
        - name: echo-service
          image: ghcr.io/cisco-open/nasp-echo-server:main
          ports:
            - containerPort: 8080
          resources:
            limits:
              cpu: 1000m
              memory: 128Mi
            requests:
              cpu: 500m
              memory: 64Mi
---
apiVersion: v1
kind: Service
metadata:
  name: echo
  labels:
    k8s-app: echo
spec:
  ports:
    - name: http
      port: 80
      targetPort: 8080
  selector:
    k8s-app: echo
EOF

Let’s wait for the echo pod to be up and running.

Terminal window
kubectl wait --for=condition=ready pod -l k8s-app=echo
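
Once the wait returns, a quick sanity check with plain kubectl (nothing Camblet-specific) confirms the Deployment and its Service were created:

Terminal window
kubectl get deployment,service echo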

Now create a simple Alpine pod which will host our cURL client.

Terminal window
kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: alpine
spec:
  containers:
    - name: alpine
      image: alpine
      # Just spin & wait forever
      command: [ "/bin/sh", "-c", "--" ]
      args: [ "while true; do sleep 3000; done;" ]
EOF

Let’s wait for the alpine pod to come up.

Terminal window
kubectl wait --for=condition=ready pod alpine

Create policy for the echo server

It is time to assign strong identities to the workloads and transparently establish mTLS connections.

The policy can be written by hand, but the Camblet CLI can do the work for you with the generate-policy command. There are two mandatory parameters: the PID of the process and the workload ID.

Terminal window
camblet agent generate-policy $(pidof server) echo-server | sudo tee /etc/camblet/policies/echo-server.yaml
output
- certificate:
    ttl: 86400s
    workloadID: echo-server
  connection:
    mtls: STRICT
  selectors:
    - k8s:container:name: echo-service
      k8s:pod:name: echo-54c896dd86-tnvpq
      k8s:pod:namespace: default
      k8s:pod:serviceaccount: default
      process:binary:path: /server
      process:gid: "65532"
      process:name: server
      process:uid: "65532"

Camblet will use these selectors to identify the echo server. In a well-written policy the selectors should describe a particular process as precisely as possible; the various metadata collectors exist to make that possible. The connection part configures the TLS settings, where the STRICT mTLS value means that only clients with trusted certificates can communicate with the workload.
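
One thing worth noting: the generated k8s:pod:name selector pins the policy to the exact pod name, which changes whenever the Deployment rolls out a new ReplicaSet. A minimal sketch of a hand-trimmed policy that drops that selector and keeps the more stable fields (same schema and values as the generated file above; whether this is precise enough depends on your environment):

- certificate:
    ttl: 86400s
    workloadID: echo-server
  connection:
    mtls: STRICT
  selectors:
    - k8s:container:name: echo-service
      k8s:pod:namespace: default
      process:binary:path: /server
      process:name: server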

To verify the STRICT mTLS enforcement, let’s try it with cURL from the Alpine container.

Terminal window
kubectl exec -it alpine -- sh

Inside the Alpine container, we first have to install cURL (and OpenSSL).

Terminal window
apk add curl openssl

Next, try to connect to the echo server through its echo Service name.

Terminal window
curl echo
output
curl: (56) Recv failure: Connection reset by peer

The connection failed, as it was supposed to. Camblet now protects the echo-server workload with mTLS; nothing can communicate with it without a trusted client certificate.

The protection can be checked with openssl s_client.

Terminal window
openssl s_client -connect echo:80 2>/dev/null
output
CONNECTED(00000003)
---
Certificate chain
 0 s:CN = camblet-protected-workload
   i:CN = Camblet root CA
   a:PKEY: rsaEncryption, 2048 (bit); sigalg: RSA-SHA256
   v:NotBefore: Feb 21 13:44:20 2024 GMT; NotAfter: Feb 22 13:44:20 2024 GMT
---
Server certificate
-----BEGIN CERTIFICATE-----
MIIDSDCCAjCgAwIBAgIUGBEwCjEpqr+6IuiCLtk6TLIh2oIwDQYJKoZIhvcNAQEL
BQAwGjEYMBYGA1UEAxMPQ2FtYmxldCByb290IENBMB4XDTI0MDIyMTEzNDQyMFoX
DTI0MDIyMjEzNDQyMFowJTEjMCEGA1UEAxMaY2FtYmxldC1wcm90ZWN0ZWQtd29y
a2xvYWQwggEgMA0GCSqGSIb3DQEBAQUAA4IBDQAwggEIAoIBAQDCgZnRjiiSlrAZ
QqOr0C4o+2uWKqPQdFENNqp3qZAMkq6tSYlkEyf6q9KPFI+V9gOxwPkAFPamizTH
3d57974oBmNK2BwtYiyhLus0AnAVahSYgqefCJ7EyYXw62Ip93Bqjdvu+P4KLLHe
2Vdy/tT3T05aQTn9yguvhdQjIncUoPleU+UFFatv+/qkBUKQnr284ado+YQvTD2U
jCL/s/yhyhbZ0M3NugZn+dZuqDRB7dt6/rZOQ/McPJ0FzxZDK53dWz6xQb+5yGHm
YbJKNcGtPyPycKjhnmcEzowBT4L+uTnVPxSr8kP8KqzspfIBRLPC4vURyYwl9to9
fJBkY7AFAgEDo30wezAOBgNVHQ8BAf8EBAMCBaAwHQYDVR0lBBYwFAYIKwYBBQUH
AwIGCCsGAQUFBwMBMB8GA1UdIwQYMBaAFKGmr3y8dJsskFwuowJaVVzBru2kMCkG
A1UdEQQiMCCGHnNwaWZmZTovL2FjbWUuY29ycC9lY2hvLXNlcnZlcjANBgkqhkiG
9w0BAQsFAAOCAQEAFm5svzOUQAMR79d+MbCOHtrixGzipPCuIa2locGo7vLk/mdk
PYSU1dQsR0pCBpXuSfuTIhLG1WvSN6weIHisVhikWKKq0WCw02MwhyyZoh0E3PXu
3iE+nY2w6RUCAR6Ok2goH0bw6vVpmHklFJl4XHSzmzVbvNwqg3IL1+lR2cf67Feo
N3Wmi0wD0Pe0E5Z5ObQ1uHaxOvoDl4OqBfu7M5HOyq1eUcxWyj15Zbp/aUq/Uil0
5YYSndYJ49nDAki5jWVjXPlnehtvVV0pHtSzuOcXDlOaBEpuAiadO6/gqPQFm4++
0uQsSZHkbGhwqHCfsBVudroOusDXAKvDIuLmwA==
-----END CERTIFICATE-----
subject=CN = camblet-protected-workload
issuer=CN = Camblet root CA
---
Acceptable client certificate CA names
CN = Camblet root CA
Client Certificate Types: RSA sign
Requested Signature Algorithms: RSA+SHA256:RSA+SHA384:RSA+SHA512:RSA+SHA224:RSA+SHA1
Shared Requested Signature Algorithms: RSA+SHA256:RSA+SHA384:RSA+SHA512:RSA+SHA224
Peer signing digest: SHA256
Peer signature type: RSA
Server Temp Key: X25519, 253 bits
---
SSL handshake has read 1294 bytes and written 398 bytes
Verification error: unable to verify the first certificate
---
New, TLSv1.2, Cipher is ECDHE-RSA-AES256-GCM-SHA384
Server public key is 2048 bit
Secure Renegotiation IS supported
No ALPN negotiated
SSL-Session:
    Protocol : TLSv1.2
    Cipher : ECDHE-RSA-AES256-GCM-SHA384
    Session-ID: 64217D9B0EE7BE52F229F9D30BD7949C3494019E957A76C9385F60C146030F09
    Session-ID-ctx:
    Master-Key: 0C1F47031A5C1408C3B85D09F492FAC3086112F5A8AA800145AB1E49163A446A09CD56F7EC59EF195453E1E158EB0A25
    PSK identity: None
    PSK identity hint: None
    SRP username: None
    Start Time: 1708523445
    Timeout : 7200 (sec)
    Verify return code: 21 (unable to verify the first certificate)
    Extended master secret: no
---

Create policy for cURL

Now exit from the Alpine container and create a policy for cURL as well. The easiest way to generate one with the Camblet CLI is to point it at a running cURL process. Since cURL does not run continuously like the server, a “dummy” command is needed to keep it running long enough to generate the policy.

The following command runs a cURL process for up to 30 seconds: the 1.2.3.4 IP address is non-routable, so the connection to it is going to time out. This leaves plenty of time to generate the policy.

Terminal window
kubectl exec alpine -- sh -c "curl --connect-timeout 30 1.2.3.4 >/dev/null 2>&1 &"

Let’s generate a policy for cURL.

Terminal window
camblet agent generate-policy $(pidof curl) curl | sudo tee /etc/camblet/policies/curl.yaml
output
- certificate:
    ttl: 86400s
    workloadID: curl
  connection:
    mtls: STRICT
  selectors:
    - k8s:container:name: alpine
      k8s:pod:name: alpine
      k8s:pod:namespace: default
      k8s:pod:serviceaccount: default
      process:binary:path: /usr/bin/curl
      process:gid: "0"
      process:name: curl
      process:uid: "0"

Using this policy, cURL gets an identity and will use mTLS. Let’s try to communicate with the echo server once again.

Try to connect to the echo server with a certificate

Terminal window
kubectl exec -it alpine -- curl echo
output
curl: (56) Recv failure: Connection reset by peer

It still doesn’t work; one last piece of the puzzle is missing. Camblet must know which target destinations the policies should be applied to: enforcing policies on every egress connection would mean that cURL could no longer reach destinations outside the Camblet-managed environment. Services therefore have to be registered in the service registry through service definitions.

Sample Service discovery configuration file

Terminal window
- addresses:
    - address: localhost
      port: 80
  labels:
    app:label: app

Let’s ignore the labels part for now; it is meant for more advanced configuration. The addresses section specifies the registered destination addresses to which the policies should be applied.

Let’s create a service registry entry for the echo-server service with the following command.

Terminal window
SVC_IP=$(kubectl get svc echo -o jsonpath='{@.spec.clusterIP}')
envsubst <<EOF | sudo tee /etc/camblet/services/echo-service.yaml >/dev/null
- addresses:
    - address: $SVC_IP
      port: 80
  labels:
    app:label: echo-server
EOF
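
To double-check the substitution, print the generated file; the address field should contain the cluster IP of the echo Service:

Terminal window
cat /etc/camblet/services/echo-service.yaml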

Try to connect to the echo server with a certificate and a configured service registry

With the service registry entry for the echo-server service in place, let’s check whether it indeed fixed the error. To do that, run cURL from the Alpine container once again.

Terminal window
kubectl exec -it alpine -- curl echo
output
Hostname: echo-54c896dd86-x9tdj
Pod Information:
    -no pod information available-
Request Information:
    client_address=10.42.0.23:55956
    method=GET
    real path=/
    query=
    request_version=1.1
    request_scheme=http
    request_url=http://echo/
Request Headers:
    accept=*/*
    user-agent=curl/8.5.0
Request Body:

It finally works.
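
Note that cURL itself is still speaking plain HTTP; the TLS handshake happens transparently in the Camblet kernel module on both sides. A simple, hedged way to observe this is to run cURL in verbose mode and note that no TLS handshake shows up in the client-side output:

Terminal window
kubectl exec -it alpine -- curl -v echo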

Cleanup

Exit from the Lima VM and delete it to clean up.

Terminal window
limactl delete quickstart --force