
BehaviorModeling Mode

Introduction

The BehaviorModeling mode is an experimental feature. You can use it to gather behavior data from the target workloads over a specified duration. Once modeling is complete, vArmor will generate an ArmorProfileModel object to store the model of the target workloads.

The model generated by the BehaviorModeling mode can also be used to analyze which built-in rules can be applied to harden the target application, or to guide you in minimizing the securityContext configuration of the workloads.

Currently, only the AppArmor and Seccomp enforcers support the BehaviorModeling mode.

Requirements

vArmor currently leverages a built-in BPF tracer and the Linux audit system to capture application behavior.

The requirements for the BehaviorModeling mode are as follows.

  1. containerd v1.6.0 and above.

  2. BTF (BPF Type Format) must be enabled. (See the verification commands at the end of this section.)

  3. Upgrade vArmor

    • Enable the BehaviorModeling feature with --set behaviorModeling.enabled=true

    • [Optional] Use the --set "agent.args={--auditLogPaths=FILE_PATH|FILE_PATH}" argument to specify the audit log files and the order in which vArmor searches them.

    helm upgrade varmor varmor-0.6.2.tgz \
    --namespace varmor --create-namespace \
    --set image.registry="elkeid-cn-beijing.cr.volces.com" \
    --set behaviorModeling.enabled=true
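
    For example, if auditd runs on every node and you only want vArmor to consume its log file, the same upgrade could also pass the optional flag as sketched below (the log path is illustrative; adjust it to your environment):

    helm upgrade varmor varmor-0.6.2.tgz \
    --namespace varmor --create-namespace \
    --set image.registry="elkeid-cn-beijing.cr.volces.com" \
    --set behaviorModeling.enabled=true \
    --set "agent.args={--auditLogPaths=/var/log/audit/audit.log}"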

    Note:

    • vArmor sequentially checks whether the files /var/log/audit/audit.log and /var/log/kern.log exist, and monitors the first valid file to consume AppArmor and Seccomp audit events for violation auditing and behavioral modeling. If you are using auditd, the AppArmor and Seccomp audit events are stored in /var/log/audit/audit.log by default; otherwise, they are stored in /var/log/kern.log.

    • The varmor-agent requires additional resources, as shown below, when the BehaviorModeling feature is enabled. Another component, varmor-classifier, which identifies random patterns in paths, will also be deployed.

      resources:
        limits:
          cpu: 2
          memory: 2Gi
        requests:
          cpu: 500m
          memory: 500Mi
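
A quick way to verify requirements 1 and 2 on a node (assuming you have shell access to it) is sketched below. The presence of /sys/kernel/btf/vmlinux indicates that the kernel was built with BTF support.

# Check the containerd version (must be v1.6.0 or above).
containerd --version
# Check that BTF is enabled; the command fails if the file does not exist.
ls /sys/kernel/btf/vmlinux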

Use Case

1. Deploy target workloads

cat << EOF | kubectl create -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-4
  namespace: default
  labels:
    app: demo-4
    # This label is required for target workloads.
    # You can disable the feature with --set 'manager.args={--webhookMatchLabel=}'
    sandbox.varmor.org/enable: "true"
spec:
  replicas: 2
  selector:
    matchLabels:
      app: demo-4
  template:
    metadata:
      labels:
        app: demo-4
    spec:
      containers:
      - name: c0
        image: debian:10
        command: ["/bin/sh", "-c", "sleep infinity"]
        imagePullPolicy: IfNotPresent
EOF
cat << EOF | kubectl create -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-4
  namespace: demo
  labels:
    app: demo-4
    # This label is required for target workloads.
    # You can disable the feature with --set 'manager.args={--webhookMatchLabel=}'
    sandbox.varmor.org/enable: "true"
spec:
  replicas: 2
  selector:
    matchLabels:
      app: demo-4
  template:
    metadata:
      labels:
        app: demo-4
      annotations:
        # Use these annotations to explicitly disable protection for the container named c0.
        # They always take precedence over the '.spec.target.containers' field of the VarmorPolicy
        # or VarmorClusterPolicy object.
        container.apparmor.security.beta.varmor.org/c0: unconfined
        container.seccomp.security.beta.varmor.org/c0: unconfined
    spec:
      shareProcessNamespace: true
      containers:
      - name: c0
        image: curlimages/curl:7.87.0
        command: ["/bin/sh", "-c", "sleep infinity"]
        imagePullPolicy: IfNotPresent
      - name: c1
        image: debian:10
        command: ["/bin/sh", "-c", "sleep infinity"]
        imagePullPolicy: IfNotPresent
EOF

2. Create a policy to model

Create a policy with BehaviorModeling mode. You can set the modeling duration in the .spec.policy.modelingOptions.duration field.

The target workloads will be updated once the policy is created. You can also create a policy before deploying target workloads.

cat << EOF | kubectl create -f -
apiVersion: crd.varmor.org/v1beta1
kind: VarmorClusterPolicy
metadata:
  name: demo-4
spec:
  # Perform a rolling update on existing workloads.
  # It's disabled by default.
  updateExistingWorkloads: true
  target:
    kind: Deployment
    selector:
      matchLabels:
        app: demo-4
  policy:
    enforcer: AppArmorSeccomp
    # Switching the mode from BehaviorModeling to others is prohibited, and vice versa.
    # You need to recreate the policy to switch the mode from BehaviorModeling to DefenseInDepth.
    # mode: DefenseInDepth
    mode: BehaviorModeling
    modelingOptions:
      # 30 minutes
      duration: 30
EOF

3. Inspect the status

Check out the policy object. If everything is working correctly, the policy will be ready and in the Modeling status, as shown below.

$ kubectl get vcpol demo-4
NAME     ENFORCER          MODE               TARGET-KIND   TARGET-NAME   TARGET-SELECTOR                    PROFILE-NAME                   READY   STATUS     AGE
demo-4   AppArmorSeccomp   BehaviorModeling   Deployment                  {"matchLabels":{"app":"demo-4"}}   varmor-cluster-varmor-demo-4   true    Modeling   2s

Check out the target workloads. If the policy was created after the target applications were deployed, the workloads will be updated.

$ kubectl get Pods -A -l app=demo-4
NAMESPACE   NAME                      READY   STATUS              RESTARTS   AGE
default     demo-4-6b98965dc-5xfqn    1/1     Running             0          49s
default     demo-4-6b98965dc-kmpbn    1/1     Terminating         0          50s
default     demo-4-b4d56646c-b82hw    0/1     ContainerCreating   0          1s
default     demo-4-b4d56646c-bdk56    1/1     Running             0          3s
demo        demo-4-5f4d94f7d9-5st8f   2/2     Running             0          3s
demo        demo-4-5f4d94f7d9-8k6r6   0/2     ContainerCreating   0          1s
demo        demo-4-9b8848dbc-84qwf    2/2     Running             0          49s
demo        demo-4-9b8848dbc-bs5jr    2/2     Terminating         0          50s

4. Do something

Run the commands shown below after the workloads have finished the rolling update. Please make sure that there are no Pods in the Terminating status.
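
One simple way to confirm this (a convenience check, not part of the official workflow) is to filter the Pod list for Terminating entries; the command prints nothing once the old Pods are gone.

$ kubectl get Pods -A -l app=demo-4 | grep Terminating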

$ pod_name=$(kubectl get Pods -n default -l app=demo-4 -o jsonpath='{.items[0].metadata.name}')
$ kubectl exec -n default $pod_name -c c0 -it -- cat /etc/shadow
$ kubectl exec -n default $pod_name -c c0 -it -- bash -c "unshare -Un id"

$ pod_name=$(kubectl get Pods -n demo -l app=demo-4 -o jsonpath='{.items[1].metadata.name}')
$ kubectl exec -n demo $pod_name -c c0 -it -- bash -c "echo $pod_name/c0 > /root/c0"
$ kubectl exec -n demo $pod_name -c c0 -it -- cat /root/c0

$ kubectl exec -n demo $pod_name -c c1 -it -- bash -c "echo $pod_name/c1 > /root/c1"
$ kubectl exec -n demo $pod_name -c c1 -it -- cat /root/c1

5. Stop modeling

Adjust the duration to end the modeling process, and wait for the status of the VarmorClusterPolicy object to change to Completed.

$ kubectl patch vcpol demo-4 --type='json' -p='[{"op": "replace", "path": "/spec/policy/modelingOptions/duration", "value":1}]'

$ kubectl get vcpol demo-4
NAME     ENFORCER          MODE               TARGET-KIND   TARGET-NAME   TARGET-SELECTOR                    PROFILE-NAME                   READY   STATUS      AGE
demo-4   AppArmorSeccomp   BehaviorModeling   Deployment                  {"matchLabels":{"app":"demo-4"}}   varmor-cluster-varmor-demo-4   true    Completed   3m32s

6. Check out the result

All behavior data of the target workloads will be processed and saved in an ArmorProfileModel object, in the same namespace as the ArmorProfile object.

Use the following command to check them out.

$ profile_name=$(kubectl get vcpol demo-4 -o jsonpath='{.status.profileName}')
$ kubectl get ArmorProfileModel -n varmor $profile_name -o yaml

vArmor can also generate an AppArmor profile and a Seccomp profile in a "Deny by Default" manner with the behavior data.

Use the following command to print the AppArmor profile.

$ kubectl get ArmorProfileModel -n varmor varmor-cluster-varmor-demo-4 -o jsonpath='{.data.profile.content}' | base64 -d

## == Managed by vArmor == ##

abi <abi/3.0>,
#include <tunables/global>

profile varmor-cluster-varmor-demo-4 flags=(attach_disconnected,mediate_deleted) {

  #include <abstractions/base>

  # ---- EXEC ----
  /usr/bin/id ix,
  /usr/bin/sleep ix,
  /usr/bin/unshare ix,

  # ---- FILE ----
  owner /dev/tty rw,
  owner /etc/group r,
  owner /etc/ld.so.cache r,
  owner /etc/nsswitch.conf r,
  owner /etc/passwd r,
  owner /etc/shadow r,
  owner /proc/filesystems r,
  owner /proc/sys/kernel/ngroups_max r,
  owner /root/c1 rw,
  owner /usr/bin/id r,
  owner /usr/bin/sleep r,
  owner /usr/bin/unshare r,
  owner /usr/lib/x86_64-linux-gnu/** mr,

  # ---- CAPABILITY ----
  capability sys_admin,

  # ---- NETWORK ----
  network,

  # ---- PTRACE ----
  ## suppress ptrace denials when using 'docker ps' or using 'ps' inside a container
  ptrace (trace,read,tracedby,readby) peer=varmor-cluster-varmor-demo-4,

  # ---- SIGNAL ----
  ## host (privileged) processes may send signals to container processes.
  signal (receive) peer=unconfined,
  ## container processes may send signals amongst themselves.
  signal (send,receive) peer=varmor-cluster-varmor-demo-4,

  # ---- ADDITIONAL ----
  umount,

}

Use the following command to print the Seccomp profile.

$ kubectl get ArmorProfileModel -n varmor varmor-cluster-varmor-demo-4 -o jsonpath='{.data.profile.seccompContent}' | base64 -d | jq
{
  "defaultAction": "SCMP_ACT_ERRNO",
  "syscalls": [
    {
      "names": [
        "open",
        "openat",
        "openat2",
        "close",
        "read",
        "write"
      ],
      "action": "SCMP_ACT_ALLOW"
    },
    {
      "names": [
        "fcntl",
        "epoll_ctl",
        "fstatfs",
        "getdents64",
        "chdir",
        "capget",
        "prctl",
        "mmap",
        "newfstatat",
        "fstat",
        "futex",
        "setgroups",
        "setgid",
        "setuid",
        "getcwd",
        "rt_sigreturn",
        "capset",
        "getppid",
        "faccessat2",
        "getpid",
        "execve",
        "brk",
        "arch_prctl",
        "access",
        "pread64",
        "mprotect",
        "set_tid_address",
        "set_robust_list",
        "rseq",
        "prlimit64",
        "munmap",
        "getuid",
        "getgid",
        "rt_sigaction",
        "geteuid",
        "getrandom",
        "getegid",
        "rt_sigprocmask",
        "vfork",
        "wait4",
        "pause",
        "fadvise64",
        "exit_group",
        "ioctl",
        "sysinfo",
        "uname",
        "socket",
        "connect",
        "lseek",
        "getpgrp",
        "getpeername",
        "unshare",
        "statfs",
        "getgroups",
        "dup2"
      ],
      "action": "SCMP_ACT_ALLOW"
    }
  ]
}

You may have noticed that the read and write permissions for the /root/c0 file do not appear in the AppArmor profile generated by vArmor. This is because the demo/demo-4 Deployment explicitly declares the container.apparmor.security.beta.varmor.org/c0: unconfined annotation, which tells vArmor not to apply any policy to the c0 container.
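
As the comments in the policy above note, an existing policy cannot be switched out of BehaviorModeling mode; you need to recreate it. If you later want to use the modeled profiles to protect the target workloads, a minimal sketch is shown below, assuming the collected behavior data is complete; consult the DefenseInDepth documentation for its mode-specific options.

# Delete the BehaviorModeling policy, then recreate it in DefenseInDepth mode.
kubectl delete vcpol demo-4

cat << EOF | kubectl create -f -
apiVersion: crd.varmor.org/v1beta1
kind: VarmorClusterPolicy
metadata:
  name: demo-4
spec:
  updateExistingWorkloads: true
  target:
    kind: Deployment
    selector:
      matchLabels:
        app: demo-4
  policy:
    enforcer: AppArmorSeccomp
    mode: DefenseInDepth
EOF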