Deploying Security Tools

This guide shows how to add five open-source security and networking tools to your AWS Cloud environment. It deploys Moloch, ntopng, Suricata, Wireshark, and Zeek, which provide advanced security visibility in your cloud. These tools are integrated with the Nubeva TLS Decryption solution, which provides deeper visibility into all TLS traffic, including TLS 1.2 with PFS and TLS 1.3. You can choose to create a new VPC environment for your open-source tools or deploy them into your existing VPC environment. After you deploy the Quick Start, you can add other AWS services, infrastructure components, and software layers to complete your test.

../_images/ToolsArch.png

Tool Launcher

Please follow the steps below to launch cloud tools.

../_images/ToolLauncher.png
  1. Click on the “Wrench” (middle) icon on the top left of the Destination Group box. This will display the popup depicted in the figure below.
../_images/LaunchCloudTools.png
  2. Select the region and the VPC options.
  3. Click the copy button to copy the Nubeva Token to the clipboard. This token is required by the CloudFormation template that orchestrates tool launches.
  4. Click the Launch Tool button. This loads a CloudFormation template that lets you select which tools you would like to launch. (An equivalent launch from the AWS CLI is sketched below.)
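If you prefer to work from the AWS CLI, a minimal stack launch might look like the following sketch. The template URL, stack name, and parameter keys (NubevaToken, VPCID) are assumptions for illustration only; use the values presented by the Launch Tool workflow.

# Hypothetical CLI launch of the tools template; URL, stack name, and parameter keys are placeholders
aws cloudformation create-stack \
  --stack-name nubeva-tools \
  --template-url https://<bucket>.s3.amazonaws.com/<tools-template>.yaml \
  --parameters ParameterKey=NubevaToken,ParameterValue=<token-from-clipboard> \
               ParameterKey=VPCID,ParameterValue=<vpc-id> \
  --capabilities CAPABILITY_IAM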

The following sections describe each tool deployment in more detail.

Moloch

Moloch is a large-scale, open-source, indexed packet capture and search system. Moloch augments your current security infrastructure to store and index network traffic in standard PCAP format, providing fast, indexed access. An intuitive and simple web interface is provided for PCAP browsing, searching, and exporting. Moloch exposes APIs which allow PCAP data and JSON-formatted session data to be downloaded and consumed directly. Moloch stores and exports all packets in standard PCAP format, allowing you to also use your favorite PCAP-ingesting tools, such as Wireshark, during your analysis workflow.

For additional information, please refer to the Moloch documentation as well as the Moloch GitHub repository.

Moloch Architecture Details

As part of the Nubeva Tools automated deployment, Moloch is deployed following AWS Well-Architected best practices. The figure below depicts the complete, highly scalable Moloch architecture.

../_images/Moloch.png
  • Moloch EC2 instances are built from code; see the CloudFormation template (CFT) for more details.
  • Each Moloch EC2 instance contains an active Moloch Viewer and Moloch Capture
  • Moloch is installed at /data/moloch
  • All Moloch components start automatically with the /data/moloch/start_moloch.sh script.
  • Moloch logs are located at /data/moloch/logs (a quick on-instance check using these paths is sketched after this list).
  • Moloch is configured to use the username & password defined as part of the CFT creation.
  • All Moloch EC2 instances use the AWS ElasticSearch service, moloch-es.
    • VPC access w/security group restrictions (see below for more info)
    • Uses same machine type for ES cluster nodes
    • Uses same node count for ES cluster nodes
    • Uses port 80/http for all ES communication (can be changed after install to 443/https)
  • More Moloch details are in the config file /data/moloch/etc/config.ini.
  • Moloch only monitors nurx0
  • Network Elastic Load Balancers front-end all communications to the Moloch instances
    • UDP port 4789 is forwarded to all targets for Amazon VPC traffic mirrors
    • TCP port 8005 is forwarded to all targets for Moloch Viewer
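To confirm that a Moloch instance is healthy from a shell session, a minimal check using the paths listed above might look like this. The *.log filename pattern is an assumption about how the capture and viewer name their log files.

# Review the Moloch configuration referenced above
less /data/moloch/etc/config.ini

# Watch capture and viewer logs (the *.log pattern is an assumption)
tail -f /data/moloch/logs/*.log

# (Re)start all Moloch components with the deployment's start script
sudo /data/moloch/start_moloch.sh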

Operating Moloch

  • Connect to the MolochELB on port 8005 using HTTP. Log in with the tooladmin username & password.
  • Point all VPC traffic mirroring sessions at the Traffic Mirror Target (TMT) for the MolochELB. This ensures that the mirrors are sent to an active Moloch capture point. This leverages UDP load balancing, so flows will stick to the same Moloch capture engine. (A CLI sketch for creating a mirror session follows this list.)
  • All Moloch instances will then store packet information in the AWS ElasticSearch Service that is created during the CFT process.
  • The final traffic PCAPs are stored in the S3 bucket created during the CFT process.
  • To view the Moloch data, use a web browser to connect to the load balancer URL on port 8005.
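Creating a traffic mirror session from the AWS CLI might look like the sketch below; the ENI, target, and filter IDs are placeholders, and an existing traffic mirror filter is assumed.

# Mirror a source ENI's traffic to the MolochELB traffic mirror target (all IDs are placeholders)
aws ec2 create-traffic-mirror-session \
  --network-interface-id eni-0123456789abcdef0 \
  --traffic-mirror-target-id tmt-0123456789abcdef0 \
  --traffic-mirror-filter-id tmf-0123456789abcdef0 \
  --session-number 1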

Moloch Security Details

  • Each Moloch EC2 instance allows TCP port 22 (ssh) and 8005 (Moloch viewer using http) from the Remote Access CIDR specified at the CFT launch.
  • Each Moloch EC2 instance allows UDP port 4789 (vxlan) from any source in the VPC.
  • Each Moloch EC2 instance has unlimited outbound access
  • The AWS ElasticSearch service allows TCP port 80 (http) from any source in the VPC. (A quick way to verify these rules is sketched below.)
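To verify that the deployed security groups match the rules above, you can inspect them from the AWS CLI; the security group ID below is a placeholder.

# List protocol, port range, and allowed CIDRs for a Moloch instance security group (placeholder ID)
aws ec2 describe-security-groups \
  --group-ids sg-0123456789abcdef0 \
  --query 'SecurityGroups[].IpPermissions[].[IpProtocol,FromPort,ToPort,IpRanges[].CidrIp]'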

ntopng

ntopng is a free and open-source network traffic monitoring tool. It is used for network troubleshooting, traffic analysis, and monitoring network usage. For more details, refer to the ntop documentation.

ntopng Architecture Details

As part of the Nubeva Tools automated deployment, ntopng is deployed following AWS Well-Architected best practices. The figure below depicts the complete, highly scalable ntopng architecture.

../_images/Ntopng.png
  • ntopng instances are built from code; see the CloudFormation template (CFT) for more details.
  • ntopng is installed using yum with default settings (a quick service check is sketched after this list).
  • ntopng only monitors nurx0.
  • Any browser within the Remote Access CIDR can be used to connect to the ntopng console on port 3000.
  • Network Elastic Load Balancers front-end all communications to the ntopng instances
    • UDP port 4789 is forwarded to all targets for Amazon VPC traffic mirrors
    • TCP port 3000 is forwarded to all targets for incoming console connections.
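To confirm an ntopng instance is up after the yum install, a minimal check might look like the following; the systemd unit name ntopng is an assumption about the packaged install.

# Check that the packaged ntopng service is running (unit name "ntopng" is an assumption)
sudo systemctl status ntopng

# Confirm the console is listening on TCP 3000
sudo ss -tlnp | grep ':3000'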

Operating ntopng

  • Connect to the NtopELB on port 3000 using HTTP. Log in with the default username & password for ntopng.
  • Point all VPC traffic mirroring sessions at the Traffic Mirror Target (TMT) for the ntopELB. This ensures that the mirrors are sent to an active ntopng instance. This leverages UDP load balancing, so flows will stick to the same ntopng instance.
  • To view the ntopng UI, connect to the load balancer URL on port 3000.

ntopng Security Details

  • Each ntopng EC2 instance allows TCP port 22 (ssh) and 3000 (ntopng console using http) from the Remote Access CIDR specified at the CFT launch.
  • Each ntopng EC2 instance allows UDP port 4789 (vxlan) from any source in the VPC.
  • Each ntopng EC2 instance has unlimited outbound access

Suricata

Suricata is a high-performance network IDS, IPS, and network security monitoring engine. It is open source and is developed and owned by the Open Information Security Foundation (OISF), a community-run non-profit foundation.

For additional information, please refer to the Suricata documentation as well as the Suricata GitHub repository.

Suricata Architecture Details

As part of the Nubeva Tools automated deployment, Suricata is deployed following AWS Well-Architected best practices. The figure below depicts the complete, highly scalable Suricata architecture.

../_images/Suricata.png
  • Suricata EC2 instances are built from code; see the CloudFormation template (CFT) for more details.
  • Each Suricata EC2 instance contains an active Suricata worker and Logstash for log storage.
  • Suricata is installed via yum with all defaults
  • All Suricata components start automatically as a service.
  • Suricata logs are located at /var/log/suricata/ (a quick way to follow alerts from these logs is sketched after this list).
  • Suricata only monitors nurx0
  • All Suricata EC2 instances use the AWS ElasticSearch service, suricata-es.
    • VPC access w/security group restrictions (see below for more info)
    • Uses same machine type for ES cluster nodes
    • Uses same node count for ES cluster nodes
    • Uses port 80/http for all ES communication (can be changed after install to 443/https)
  • More Suricata details are in the config file /etc/suricata/suricata.yaml
  • Suricata alerts can be viewed through the Kibana UI integrated with the AWS ES service. See the output of the Suricata CFT for the exact URL.
  • Network Elastic Load Balancers front-end all communications to the Suricata instances
    • UDP port 4789 is forwarded to all targets for Amazon VPC traffic mirrors
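To watch alerts directly on a Suricata instance, you can follow the EVE JSON log under /var/log/suricata/; the eve.json filename is Suricata's default EVE output, and jq is assumed to be available on the instance.

# Follow alert events from Suricata's EVE JSON log
# (eve.json assumes the default EVE output; jq is assumed to be installed)
sudo tail -f /var/log/suricata/eve.json | jq -c 'select(.event_type == "alert")'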

Operating Suricata

  • Connect to the Kibana link in the Suricata CFT Output section for access.
  • Point all VPC traffic mirroring sessions at the Traffic Mirror Target (TMT) for the SuricataELB. This ensures that the mirrors are sent to an active Suricata worker. This leverages UDP load balancing, so flows will stick to the same Suricata worker.
  • All Suricata logs are sent to the AWS ElasticSearch Service that is created during the CFT process.
  • To view the Suricata data, use a web browser to connect to the Kibana URL specified in the output of the Suricata CFT. (You can also query the ES endpoint directly, as sketched below.)
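Because the ES domain accepts HTTP on port 80 from inside the VPC (see the security details below), you can query it directly from any instance in the VPC. The endpoint and index pattern below are placeholders; check the CFT outputs for the actual ES endpoint.

# List the indices Logstash has created (endpoint is a placeholder from the CFT outputs)
curl -s "http://<suricata-es-endpoint>/_cat/indices?v"

# Count alert documents (the logstash-* index pattern is an assumption)
curl -s "http://<suricata-es-endpoint>/logstash-*/_count" \
  -H 'Content-Type: application/json' \
  -d '{"query": {"term": {"event_type": "alert"}}}'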

Suricata Security Details

  • Each Suricata EC2 instance allows TCP port 22 (ssh) from the Remote Access CIDR specified at the CFT launch.
  • Each Suricata EC2 instance allows UDP port 4789 (vxlan) from any source in the VPC.
  • Each Suricata EC2 instance has unlimited outbound access
  • The AWS ElasticSearch service allows TCP port 80 (http) from any source in the VPC and TCP port 443 (https) from the Remote Access CIDR specified at launch.

Wireshark

Wireshark is a free and open-source packet analyzer. It is used for network troubleshooting, analysis, software and communications protocol development, and education. For more information, refer to the Wireshark documentation.

Wireshark Architecture Details

As part of the Nubeva Tools automated deployment, Wireshark is deployed following AWS Well-Architected best practices. The figure below depicts the complete, highly scalable Wireshark architecture.

../_images/Wireshark.png
  • Wireshark instances are built from code; see the CloudFormation template (CFT) for more details.
  • Each Wireshark EC2 instance is built on Ubuntu 18.04 LTS with an LXDE GUI.
  • Start Wireshark with the command “sudo wireshark” in the run box or a terminal window.
  • The RDP connection uses the username & password defined as part of the CFT creation and set in the post-install configuration steps below.
  • Any RDP client can be used to connect to the Wireshark instances.
  • Select the nurx0 interface to see the decapsulated and decrypted packets (a terminal-based alternative is sketched after this list).
  • Network Elastic Load Balancers front-end all communications to the Wireshark instances
    • UDP port 4789 is forwarded to all targets for Amazon VPC traffic mirrors
    • TCP port 3389 is forwarded to all targets for incoming RDP connections.
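For a quick terminal-based look at the mirrored traffic without opening the GUI, tshark can read the same interface; tshark is assumed to be installed alongside Wireshark on the instance.

# Capture a few decapsulated, decrypted packets from the nurx0 interface
# (tshark is assumed to be installed alongside Wireshark)
sudo tshark -i nurx0 -c 20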

Post-Install Config Steps

  • SSH to the EC2 instance
  • Set the password for the “tooladmin” user (or the username you specified).
# This will be the username & password for RDP connectivity to use Wireshark.
sudo passwd tooladmin

Operating Wireshark

  • Connect to the Wireshark instance using SSH; ssh ubuntu@[ip.address]. Set the password for the ubuntu user:
sudo passwd ubuntu
  • Connect to the WiresharkELB on port 3389 using RDP. Log in with the ubuntu username & password from step 1.
  • Point all VPC traffic mirroring sessions at the Traffic Mirror Target (TMT) for the WiresharkELB. This ensures that the mirrors are sent to an active Wireshark instance. This leverages UDP load balancing, so flows will stick to the same Wireshark instance.
  • To view the Wireshark UI, use any RDP client to connect to the load balancer URL, located on the console or in the output of the CFT.

Wireshark Security Details

  • Each Wireshark EC2 instance allows TCP port 22 (ssh) and 3389 (RDP) from the Remote Access CIDR specified at the CFT launch.
  • Each Wireshark EC2 instance allows UDP port 4789 (vxlan) from any source in the VPC.
  • Each Wireshark EC2 instance has unlimited outbound access

Zeek

Zeek is a powerful network analysis framework that is much different from the typical IDS you may know. While focusing on network security monitoring, Zeek provides a comprehensive platform for more general network traffic analysis as well. Well grounded in more than 20 years of research, Zeek has successfully bridged the traditional gap between academia and operations since its inception. Today, it is relied upon operationally by both major companies and numerous educational and scientific institutions for securing their cyberinfrastructure.

For additional information, please refer to the Zeek documentation as well as the Zeek GitHub repository.

Zeek Architecture Details

As part of the Nubeva Tools automated deployment, Zeek is deployed following AWS Well-Architected best practices. The figure below depicts the complete, highly scalable Zeek architecture.

../_images/Zeek.png
  • Zeek EC2 instances are built from code; see the CloudFormation template (CFT) for more details.
  • Each Zeek EC2 instance contains an active Zeek worker and Logstash for log storage.
  • Zeek is installed at /opt/zeek
  • All Zeek components start automatically with the /opt/zeek/start_zeek.sh script (a quick status check is sketched after this list).
  • Zeek logs are located at /opt/zeek/logs
  • Zeek only monitors nurx0
  • All Zeek EC2 instances use the AWS ElasticSearch service, zeek-es.
    • VPC access w/security group restrictions (see below for more info)
    • Uses same machine type for ES cluster nodes
    • Uses same node count for ES cluster nodes
    • Uses port 80/http for all ES communication (can be changed after install to 443/https)
  • Zeek alerts can be viewed through the Kibana UI integrated with the AWS ES service. See the output of the Zeek CFT for the exact URL.
  • Network Elastic Load Balancers front-end all communications to the Zeek instances
    • UDP port 4789 is forwarded to all targets for Amazon VPC traffic mirrors
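To confirm a Zeek worker is running on an instance, you can use Zeek's control utility under the install prefix listed above (zeekctl ships with Zeek):

# Check worker status with Zeek's control utility
sudo /opt/zeek/bin/zeekctl status

# If the worker is not running, the deployment's start script listed above brings it up
sudo /opt/zeek/start_zeek.sh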

Operating Zeek

  • Connect to the Kibana link in the Zeek CFT Output section for access.
  • Point all VPC traffic mirroring sessions at the Traffic Mirror Target (TMT) for the ZeekELB. This ensures that the mirrors are sent to an active Zeek worker. This leverages UDP load balancing, so flows will stick to the same Zeek worker.
  • All Zeek logs are sent to the AWS ElasticSearch Service that is created during the CFT process.
  • To view the Zeek data, use a web browser to connect to the Kibana URL specified in the output of the Zeek CFT. (You can also inspect the raw logs on a worker, as sketched below.)
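For a quick look at the raw logs on a worker, zeek-cut (which ships with Zeek) can pull selected columns; the logs/current path is an assumption based on the default zeekctl layout under /opt/zeek/logs.

# Summarize recent connections from the current connection log
# (logs/current is an assumption based on the default zeekctl layout)
cat /opt/zeek/logs/current/conn.log | /opt/zeek/bin/zeek-cut id.orig_h id.resp_h id.resp_p service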

Zeek Security Details

  • Each Zeek EC2 instance allows TCP port 22 (ssh) from the Remote Access CIDR specified at the CFT launch.
  • Each Zeek EC2 instance allows UDP port 4789 (vxlan) from any source in the VPC.
  • Each Zeek EC2 instance has unlimited outbound access
  • The AWS ElasticSearch service allows TCP port 80 (http) from any source in the VPC and TCP port 443 (https) from the Remote Access CIDR specified at launch.