Use Cases

Standalone Decryptor

This configuration is for cases where users want to run a SKI Decryptor on one node and a security tool on another node. In this configuration, the decryptor receives encrypted traffic and keys in the same manner as described in Getting Started or SKI Decryptor. The only difference is that the traffic from nurx0 (the SKI Decryptor output) is VXLAN-encapsulated and forwarded to the node running the security tool.

[Figure: Standalone Decryptor deployment - ../_images/SDI.png]
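
Once one of the configurations below is in place, you can confirm the decrypted feed is reaching the tool node with a packet capture. A minimal check, assuming the tool node receives traffic on eth0 (adjust the interface name to your environment):

# on the security tool node: VXLAN-encapsulated traffic from the decryptor arrives on UDP 4789
tcpdump -ni eth0 udp port 4789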

Note

If you are running on AWS, you can deploy this configuration using the following CloudFormation template (CFT): https://nubevalabs.s3.amazonaws.com/sdi_ref_arch/nubeva-sdi.master.template.yaml. The CFT deploys two containers on the SKI Decryptor node. These containers are installed in the same manner as described in Deploying a FastKey Buffer and Deploying a SKI Decryptor. The only difference is that /etc/hosts is configured to map key.nubedge.com to 127.0.0.1. The key buffer is installed with the default key.nubedge.com certificate.
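
For illustration, the resulting /etc/hosts entry on the SKI Decryptor node would look like:

127.0.0.1   key.nubedge.com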

Network Configuration - Dual Interface

The CFT sets up the following network if you choose to use two network interfaces on the SKI Decryptor node:

[Figure: dual-interface network configuration - ../_images/SDI_multi.png]
  1. The Key Buffer listens on TCP/UDP 4433.
  2. The SKI Decryptor receives VXLAN traffic on UDP 4789.
  3. The SKI Decryptor outputs the decapsulated original traffic and the decrypted traffic to nurx0.
  4. Create an outbound namespace, add the eth1 interface to it, and get another IP address via DHCP:
ip netns add outbound
ip link set eth1 netns outbound
ip netns exec outbound ip link set eth1 up
ip netns exec outbound dhclient eth1
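
To verify the namespace, check that eth1 was moved and picked up an address from DHCP:

# eth1 should now be visible only inside the outbound namespace
ip netns exec outbound ip addr show eth1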
  5. Configure paired virtual Ethernet interfaces - sdi0 and sdi1:
ip link add sdi0 type veth peer name sdi1

# set sdi1 in the outbound network namespace
ip link set sdi1 netns outbound

# enable both interfaces
ip link set sdi0 up
ip netns exec outbound ip link set sdi1 up

# set the mtu of both interfaces to 9001
ip link set sdi0 mtu 9001
ip netns exec outbound ip link set sdi1 mtu 9001
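
A quick sanity check that both ends of the veth pair are UP with the expected MTU:

ip link show sdi0
ip netns exec outbound ip link show sdi1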
  6. Create a VXLAN interface, vxlan2, to send encapsulated traffic to the security tool:
# create a vxlan link over eth1 with a remote IP address of the security tool.
ip netns exec outbound ip link add vxlan2 type vxlan id 2 remote <<Tool_IP_Address>> dev eth1 dstport 4789
ip netns exec outbound ip link set vxlan2 up
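
You can inspect the VXLAN parameters (VNI, remote address, destination port) with:

ip netns exec outbound ip -d link show vxlan2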
  7. Mirror traffic from the nurx0 interface to the sdi0 interface using tc:
tc qdisc add dev nurx0 ingress
tc filter add dev nurx0 parent ffff: protocol all prio 2 u32 \
match u32 0 0 flowid 1:1 \
action mirred egress mirror dev sdi0

tc qdisc replace dev nurx0 parent root handle 10: prio
tc filter add dev nurx0 parent 10: protocol all prio 2 u32 \
match u32 0 0 flowid 10:1 \
action mirred egress mirror dev sdi0
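
With both qdiscs in place, the filter counters should increase as traffic crosses nurx0:

# -s shows statistics, including how many packets each mirror action has handled
tc -s filter show dev nurx0 parent ffff:
tc -s filter show dev nurx0 parent 10: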
  8. Similarly, mirror traffic from sdi1 to vxlan2 using tc (see the sketch below).
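These commands match step 8 of the single-interface configuration below, run inside the outbound namespace:
ip netns exec outbound tc qdisc add dev sdi1 ingress
ip netns exec outbound tc filter add dev sdi1 parent ffff: protocol all prio 2 u32 \
match u32 0 0 flowid 1:1 \
action mirred egress mirror dev vxlan2

ip netns exec outbound tc qdisc replace dev sdi1 parent root handle 10: prio
ip netns exec outbound tc filter add dev sdi1 parent 10: protocol all prio 2 u32 \
match u32 0 0 flowid 10:1 \
action mirred egress mirror dev vxlan2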

Network Configuration - Single Interface

The CFT sets up the following network if you choose to use one network interface on the SKI Decryptor node:

[Figure: single-interface network configuration - ../_images/SDI_single.png]
  1. The Key Buffer listens on TCP/UDP 4433.
  2. The SKI Decryptor receives VXLAN traffic on UDP 4789.
  3. The SKI Decryptor outputs the decapsulated original traffic and the decrypted traffic to nurx0.
  4. Create a container called outbound that sends decrypted traffic to the security tool via VXLAN:
docker run -dti --name outbound --cap-add NET_ADMIN alpine sh
docker exec outbound apk add bash iptables iproute2
mkdir -p /var/run/netns/ && pid=$(docker inspect -f '{{.State.Pid}}' outbound) && ln -sfT /proc/$pid/ns/net /var/run/netns/outbound
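
The symlink exposes the container's network namespace to the ip netns tooling; you can confirm it with:

# "outbound" should be listed, and its interfaces visible from the host
ip netns list
ip netns exec outbound ip link show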
  5. Configure paired virtual Ethernet interfaces - sdi0 and sdi1:
ip link add sdi0 type veth peer name sdi1
ip link set sdi1 netns outbound
ip link set sdi0 up
ip netns exec outbound ip link set sdi1 up
# set the MTU of sdi0 to 9001
ip link set sdi0 mtu 9001

# set the MTU of every docker veth interface to 9001
ifconfig | grep veth | awk '{split($0, a, ":"); system("ip link set mtu 9001 " a[1])}'

# set MTU of the outbound container interfaces
docker exec outbound ifconfig eth0 mtu 9001
docker exec outbound ifconfig sdi1 mtu 9001
  6. Configure VXLAN encapsulation in the outbound container over eth0:
docker exec outbound ip link add vxlan2 type vxlan id 2 remote <<Tool_IP_Address>> dev eth0 dstport 4789
docker exec outbound ip link set vxlan2 up
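
As in the dual-interface setup, the VXLAN parameters can be inspected from inside the container:

docker exec outbound ip -d link show vxlan2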
  7. Mirror traffic from the nurx0 interface to the sdi0 interface using tc:
tc qdisc add dev nurx0 ingress
tc filter add dev nurx0 parent ffff: protocol all prio 2 u32 \
match u32 0 0 flowid 1:1 \
action mirred egress mirror dev sdi0

tc qdisc replace dev nurx0 parent root handle 10: prio
tc filter add dev nurx0 parent 10: protocol all prio 2 u32 \
match u32 0 0 flowid 10:1 \
action mirred egress mirror dev sdi0
  8. Mirror traffic from the sdi1 interface to the vxlan2 interface using tc:
ip netns exec outbound tc qdisc add dev sdi1 ingress
ip netns exec outbound tc filter add dev sdi1 parent ffff: protocol all prio 2 u32 \
match u32 0 0 flowid 1:1 \
action mirred egress mirror dev vxlan2

ip netns exec outbound tc qdisc replace dev sdi1 parent root handle 10: prio
ip netns exec outbound tc filter add dev sdi1 parent 10: protocol all prio 2 u32 \
match u32 0 0 flowid 10:1 \
action mirred egress mirror dev vxlan2
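
Once both mirrors are in place, a simple end-to-end check is to watch the vxlan2 packet counters climb while decrypted traffic flows:

# TX counters on vxlan2 should increase as mirrored traffic is sent to the tool
docker exec outbound ip -s link show vxlan2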