diff --git a/.github/ISSUE_TEMPLATE/general-issue-for-t-pot.md b/.github/ISSUE_TEMPLATE/general-issue-for-t-pot.md index 631ce307..e86a858b 100644 --- a/.github/ISSUE_TEMPLATE/general-issue-for-t-pot.md +++ b/.github/ISSUE_TEMPLATE/general-issue-for-t-pot.md @@ -7,6 +7,8 @@ assignees: '' --- +🗨️ Please post your questions in [Discussions](https://github.com/telekom-security/tpotce/discussions) and keep the issues for **issues**. Thank you 😁.
+ Before you post your issue, make sure it has not already been answered, and provide `basic support information` if you conclude it is a new issue. - 🔍 Use the [search function](https://github.com/dtag-dev-sec/tpotce/issues?utf8=%E2%9C%93&q=) first diff --git a/CHANGELOG.md b/CHANGELOG.md index 6b572d8d..afe8a43f 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,5 +1,15 @@ # Changelog +## 20200904 +- **Release T-Pot 20.06.1** + - GitHub offers a free Docker container registry for public packages. For our open source projects we want to have everything in one place, and are thus moving from Docker Hub to the GitHub Container Registry. +- **Bump Elastic Stack** + - Update the Elastic Stack to 7.9.1. +- **Rebuild Images** + - All docker images were rebuilt based on the latest stable versions of the tools and honeypots and have been pinned to specific Alpine / Debian versions and git commits, so rebuilds are less likely to fail. +- **Cleaning up** + - Clean up old references and links. + ## 20200630 - **Release T-Pot 20.06** - After 4 months of public testing with the NextGen edition T-Pot 20.06 can finally be released. @@ -51,7 +61,7 @@ - **Update ISO image to fix upstream bug of missing kernel modules** - **Include dashboards for CitrixHoneypot** - Please run `/opt/tpot/update.sh` for the necessary modifications, omit the reboot and run `/opt/tpot/bin/tped.sh` to (re-)select the NextGen installation type. - - This update requires the latest Kibana objects as well. Download the latest from https://raw.githubusercontent.com/dtag-dev-sec/tpotce/master/etc/objects/kibana_export.json.zip, unzip and import the objects within Kibana WebUI > Management > Saved Objects > Export / Import". All objects will be overwritten upon import, make sure to run an export first. + - This update requires the latest Kibana objects as well. 
Download the latest from https://raw.githubusercontent.com/telekom-security/tpotce/master/etc/objects/kibana_export.json.zip, unzip and import the objects within Kibana WebUI > Management > Saved Objects > Export / Import. All objects will be overwritten upon import, so make sure to run an export first. ## 20200115 - **Prepare integration of CitrixHoneypot** diff --git a/README.md b/README.md index 078e13d5..e6fcbe6b 100644 --- a/README.md +++ b/README.md @@ -40,7 +40,7 @@ Furthermore T-Pot includes the following tools # TL;DR 1. Meet the [system requirements](#requirements). The T-Pot installation needs at least 8 GB RAM and 128 GB free disk space as well as a working (outgoing non-filtered) internet connection. -2. Download the T-Pot ISO from [GitHub](https://github.com/dtag-dev-sec/tpotce/releases) or [create it yourself](#createiso). +2. Download the T-Pot ISO from [GitHub](https://github.com/telekom-security/tpotce/releases) or [create it yourself](#createiso). 3. Install the system in a [VM](#vm) or on [physical hardware](#hw) with [internet access](#placement). 4. Enjoy your favorite beverage - [watch](https://sicherheitstacho.eu) and [analyze](#kibana). @@ -132,7 +132,7 @@ The T-Pot project provides all the tools and documentation necessary to build yo The source code and configuration files are fully stored in the T-Pot GitHub repository. The docker images are preconfigured for the T-Pot environment. If you want to run the docker images separately, make sure you study the docker-compose configuration (`/opt/tpot/etc/tpot.yml`) and the T-Pot systemd script (`/etc/systemd/system/tpot.service`), as they provide a good starting point for implementing changes. -The individual docker configurations are located in the [docker folder](https://github.com/dtag-dev-sec/tpotce/tree/master/docker). +The individual docker configurations are located in the [docker folder](https://github.com/telekom-security/tpotce/tree/master/docker). 
# System Requirements @@ -183,18 +183,18 @@ There are prebuilt installation types available each focussing on different aspe # Installation The installation of T-Pot is straightforward and heavily depends on a working, transparent, non-proxied and up-and-running internet connection. Otherwise the installation **will fail!** -Firstly, decide if you want to download the prebuilt installation ISO image from [GitHub](https://github.com/dtag-dev-sec/tpotce/releases), [create it yourself](#createiso) ***or*** [post-install on an existing Debian 10 (Buster)](#postinstall). +Firstly, decide if you want to download the prebuilt installation ISO image from [GitHub](https://github.com/telekom-security/tpotce/releases), [create it yourself](#createiso) ***or*** [post-install on an existing Debian 10 (Buster)](#postinstall). Secondly, decide where you want the system to run: on [real hardware](#hardware) or in a [virtual machine](#vm)? ## Prebuilt ISO Image -An installation ISO image is available for download (~50MB), which is created by the [ISO Creator](https://github.com/dtag-dev-sec/tpotce) you can use yourself in order to create your own image. It will basically just save you some time downloading components and creating the ISO image. -You can download the prebuilt installation ISO from [GitHub](https://github.com/dtag-dev-sec/tpotce/releases) and jump to the [installation](#vm) section. +An installation ISO image is available for download (~50MB), created by the [ISO Creator](https://github.com/telekom-security/tpotce), which you can also use to build your own image. It basically just saves you the time of downloading components and creating the ISO image yourself. +You can download the prebuilt installation ISO from [GitHub](https://github.com/telekom-security/tpotce/releases) and jump to the [installation](#vm) section. 
## Create your own ISO Image -For transparency reasons and to give you the ability to customize your install you use the [ISO Creator](https://github.com/dtag-dev-sec/tpotce) that enables you to create your own ISO installation image. +For transparency reasons and to give you the ability to customize your install, you can use the [ISO Creator](https://github.com/telekom-security/tpotce) to create your own ISO installation image. **Requirements to create the ISO image:** - Debian 10 as host system (others *may* work, but *remain* untested) @@ -206,7 +206,7 @@ For transparency reasons and to give you the ability to customize your install y 1. Clone the repository and enter it. ``` -git clone https://github.com/dtag-dev-sec/tpotce +git clone https://github.com/telekom-security/tpotce cd tpotce ``` 2. Run the `makeiso.sh` script to build the ISO image. @@ -237,7 +237,7 @@ You can now jump [here](#firstrun). If you decide to run T-Pot on dedicated hardware, just follow these steps: 1. Burn a CD from the ISO image or make a bootable USB stick using the image.
-Whereas most CD burning tools allow you to burn from ISO images, the procedure to create a bootable USB stick from an ISO image depends on your system. There are various Windows GUI tools available, e.g. [this tip](http://www.ubuntu.com/download/desktop/create-a-usb-stick-on-windows) might help you.
On [Linux](http://askubuntu.com/questions/59551/how-to-burn-a-iso-to-a-usb-device) or [MacOS](http://www.ubuntu.com/download/desktop/create-a-usb-stick-on-mac-osx) you can use the tool *dd* or create the USB stick with T-Pot's [ISO Creator](https://github.com/dtag-dev-sec). +Whereas most CD burning tools allow you to burn from ISO images, the procedure to create a bootable USB stick from an ISO image depends on your system. There are various Windows GUI tools available, e.g. [this tip](http://www.ubuntu.com/download/desktop/create-a-usb-stick-on-windows) might help you.
On [Linux](http://askubuntu.com/questions/59551/how-to-burn-a-iso-to-a-usb-device) or [MacOS](http://www.ubuntu.com/download/desktop/create-a-usb-stick-on-mac-osx) you can use the tool *dd* or create the USB stick with T-Pot's [ISO Creator](https://github.com/telekom-security). 2. Boot from the USB stick and install. *Please note*: Limited tests are performed for the Intel NUC platform; other hardware platforms **remain untested**. There is no hardware support provided of any kind. @@ -255,7 +255,7 @@ The T-Pot Universal Installer will upgrade the system and install all required T Just follow these steps: ``` -git clone https://github.com/dtag-dev-sec/tpotce +git clone https://github.com/telekom-security/tpotce cd tpotce/iso/installer/ ./install.sh --type=user ``` @@ -269,7 +269,7 @@ You can also let the installer run automatically if you provide your own `tpot.c Just follow these steps while adjusting `tpot.conf` to your needs: ``` -git clone https://github.com/dtag-dev-sec/tpotce +git clone https://github.com/telekom-security/tpotce cd tpotce/iso/installer/ cp tpot.conf.dist tpot.conf ./install.sh --type=auto --conf=tpot.conf @@ -436,7 +436,7 @@ You may opt out of the submission by removing the `# Ewsposter service` from `/o restart: always networks: - ewsposter_local - image: "dtagdevsec/ewsposter:2006" + image: "ghcr.io/telekom-security/ewsposter:2006" volumes: - /data:/data - /data/ews/conf/ews.ip:/opt/ewsposter/ews.ip @@ -466,7 +466,7 @@ As with every development there is always room for improvements ... Some features may be provided with updated docker images, others may require some hands-on work from your side. -You are always invited to participate in development on our [GitHub](https://github.com/dtag-dev-sec/tpotce) page. +You are always invited to participate in development on our [GitHub](https://github.com/telekom-security/tpotce) page. 
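The *dd* approach mentioned above is easy to get wrong, since dd overwrites whatever device it is pointed at. As a hedged illustration (the ISO filename and `/dev/sdX` are placeholders, not names from this repository), here is a dry-run sketch that only prints the command it would execute:

```shell
# Dry-run sketch of writing the T-Pot ISO to a USB stick with dd.
# "tpot.iso" and "/dev/sdX" are placeholders -- verify the real device
# with lsblk first, because dd overwrites the target without asking.
iso="tpot.iso"
device="/dev/sdX"
cmd="dd if=$iso of=$device bs=4M status=progress conv=fsync"
# Print instead of executing; drop the echo once you are sure of $device.
echo "would run: sudo $cmd"
```

On macOS the raw device (`/dev/rdiskN`) is usually faster, and `status=progress` may not be supported by the BSD dd that ships with the system.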
# Disclaimer @@ -478,18 +478,18 @@ You are always invited to participate in development on our [GitHub](https://git # FAQ -Please report any issues or questions on our [GitHub issue list](https://github.com/dtag-dev-sec/tpotce/issues), so the community can participate. +Please report any issues or questions on our [GitHub issue list](https://github.com/telekom-security/tpotce/issues), so the community can participate. # Contact The software is provided **as is** in a Community Edition format. T-Pot is designed to run out of the box and with zero maintenance involved.
-We hope you understand that we cannot provide support on an individual basis. We will try to address questions, bugs and problems on our [GitHub issue list](https://github.com/dtag-dev-sec/tpotce/issues). +We hope you understand that we cannot provide support on an individual basis. We will try to address questions, bugs and problems on our [GitHub issue list](https://github.com/telekom-security/tpotce/issues). # Licenses The software that T-Pot is built on uses the following licenses.
GPLv2: [conpot](https://github.com/mushorg/conpot/blob/master/LICENSE.txt), [dionaea](https://github.com/DinoTools/dionaea/blob/master/LICENSE), [honeysap](https://github.com/SecureAuthCorp/HoneySAP/blob/master/COPYING), [honeypy](https://github.com/foospidy/HoneyPy/blob/master/LICENSE), [honeytrap](https://github.com/armedpot/honeytrap/blob/master/LICENSE), [suricata](http://suricata-ids.org/about/open-source/) -
GPLv3: [adbhoney](https://github.com/huuck/ADBHoney), [elasticpot](https://gitlab.com/bontchev/elasticpot/-/blob/master/LICENSE), [ewsposter](https://github.com/dtag-dev-sec/ews/), [fatt](https://github.com/0x4D31/fatt/blob/master/LICENSE), [rdpy](https://github.com/citronneur/rdpy/blob/master/LICENSE), [heralding](https://github.com/johnnykv/heralding/blob/master/LICENSE.txt), [ipphoney](https://gitlab.com/bontchev/ipphoney/-/blob/master/LICENSE), [snare](https://github.com/mushorg/snare/blob/master/LICENSE), [tanner](https://github.com/mushorg/snare/blob/master/LICENSE) +
GPLv3: [adbhoney](https://github.com/huuck/ADBHoney), [elasticpot](https://gitlab.com/bontchev/elasticpot/-/blob/master/LICENSE), [ewsposter](https://github.com/telekom-security/ews/), [fatt](https://github.com/0x4D31/fatt/blob/master/LICENSE), [rdpy](https://github.com/citronneur/rdpy/blob/master/LICENSE), [heralding](https://github.com/johnnykv/heralding/blob/master/LICENSE.txt), [ipphoney](https://gitlab.com/bontchev/ipphoney/-/blob/master/LICENSE), [snare](https://github.com/mushorg/snare/blob/master/LICENSE), [tanner](https://github.com/mushorg/snare/blob/master/LICENSE)
Apache 2 License: [cyberchef](https://github.com/gchq/CyberChef/blob/master/LICENSE), [dicompot](https://github.com/nsmfoo/dicompot/blob/master/LICENSE), [elasticsearch](https://github.com/elasticsearch/elasticsearch/blob/master/LICENSE.txt), [logstash](https://github.com/elasticsearch/logstash/blob/master/LICENSE), [kibana](https://github.com/elasticsearch/kibana/blob/master/LICENSE.md), [docker](https://github.com/docker/docker/blob/master/LICENSE), [elasticsearch-head](https://github.com/mobz/elasticsearch-head/blob/master/LICENCE)
MIT license: [ciscoasa](https://github.com/Cymmetria/ciscoasa_honeypot/blob/master/LICENSE), [glutton](https://github.com/mushorg/glutton/blob/master/LICENSE)
Other: [citrixhoneypot](https://github.com/MalwareTech/CitrixHoneypot#licencing-agreement-malwaretech-public-licence), [cowrie](https://github.com/micheloosterhof/cowrie/blob/master/LICENSE.md), [mailoney](https://github.com/awhitehatter/mailoney), [Debian licensing](https://www.debian.org/legal/licenses/) diff --git a/bin/change_ews_config.sh b/bin/change_ews_config.sh index 6f9c25ba..5b660656 100755 --- a/bin/change_ews_config.sh +++ b/bin/change_ews_config.sh @@ -60,7 +60,7 @@ fi echo "" echo "[+] Creating config file with API UserID '$apiUser' and API Token '$apiToken'." echo "[+] Fetching config file from github. Outgoing https requests must be enabled!" -wget -q https://raw.githubusercontent.com/dtag-dev-sec/tpotce/master/docker/ews/dist/ews.cfg -O ews.cfg.dist +wget -q https://raw.githubusercontent.com/telekom-security/tpotce/master/docker/ews/dist/ews.cfg -O ews.cfg.dist if [[ -f "ews.cfg.dist" ]]; then echo "[+] Successfully downloaded ews.cfg from github." else diff --git a/bin/updateip.sh b/bin/updateip.sh index 992844e0..28f83ffe 100755 --- a/bin/updateip.sh +++ b/bin/updateip.sh @@ -2,6 +2,7 @@ # Let's add the first local ip to the /etc/issue and external ip to ews.ip file # If the external IP cannot be detected, the internal IP will be inherited. source /etc/environment +myUUID=$(lsblk -o MOUNTPOINT,UUID | grep "/" | awk '{ print $2 }') myLOCALIP=$(hostname -I | awk '{ print $1 }') myEXTIP=$(/opt/tpot/bin/myip.sh) if [ "$myEXTIP" = "" ]; @@ -26,6 +27,7 @@ tee /data/ews/conf/ews.ip << EOF ip = $myEXTIP EOF tee /opt/tpot/etc/compose/elk_environment << EOF +HONEY_UUID=$myUUID MY_EXTIP=$myEXTIP MY_INTIP=$myLOCALIP MY_HOSTNAME=$HOSTNAME diff --git a/cloud/ansible/README.md b/cloud/ansible/README.md index 15aed061..72fb7026 100644 --- a/cloud/ansible/README.md +++ b/cloud/ansible/README.md @@ -36,6 +36,8 @@ Ansible works over the SSH Port, so you don't have to add any special rules to y ## Ansible Installation +:warning: Ansible 2.10 or newer is required! 
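The 2.10 requirement can be verified before running the playbook. Below is a minimal sketch that uses `sort -V` for the version comparison; the `installed` value is hardcoded here for illustration, and the `ansible --version` parsing shown in the comment is an assumption about its output format:

```shell
# Sketch: check that the installed Ansible satisfies the >= 2.10 requirement.
required="2.10"
installed="2.10.5"   # in practice e.g.: ansible --version | head -n1 | awk '{ print $NF }'
# sort -V orders version strings numerically; if the required version sorts
# first (or equal), the installed one is new enough.
lowest=$(printf '%s\n%s\n' "$required" "$installed" | sort -V | head -n1)
if [ "$lowest" = "$required" ]; then
  echo "ok: Ansible $installed satisfies >= $required"
else
  echo "too old: Ansible $installed, need >= $required" >&2
fi
```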
+ Example for Ubuntu 18.04: At first we update the system: @@ -48,6 +50,12 @@ Then we need to add the repository and install Ansible: For other OSes and Distros have a look at the official [Ansible Documentation](https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html). +If your OS does not offer a recent version of Ansible (>= 2.10) you should consider [installing Ansible with pip](https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html#installing-ansible-with-pip). +In short (if you already have Python3/pip3 installed): +``` +pip3 install ansible +``` + ## Agent Forwarding If you run the Ansible Playbook remotely on your Ansible Master Server, Agent Forwarding must be enabled in order to let Ansible connect to newly created machines. @@ -96,7 +104,7 @@ Import your SSH public key. # Clone Git Repository Clone the `tpotce` repository to your Ansible Master: -`git clone https://github.com/dtag-dev-sec/tpotce.git` +`git clone https://github.com/telekom-security/tpotce.git` All Ansible related files are located in the [`cloud/ansible/openstack`](openstack) folder. @@ -160,14 +168,6 @@ Here you can choose: - a username for the web interface - a password for the web interface (**you should definitely change that**) -``` -# tpot configuration file -# myCONF_TPOT_FLAVOR=[STANDARD, SENSOR, INDUSTRIAL, COLLECTOR, NEXTGEN] -myCONF_TPOT_FLAVOR='STANDARD' -myCONF_WEB_USER='webuser' -myCONF_WEB_PW='w3b$ecret' -``` - ## Optional: Custom `ews.cfg` Enable this by uncommenting the role in the [deploy_tpot.yaml](openstack/deploy_tpot.yaml) playbook. @@ -226,7 +226,7 @@ If you are running on a machine which asks for a sudo password, you can use: The Playbook will first install required packages on the Ansible Master and then deploy a new server instance. After that, T-Pot gets installed and configured on the newly created host, optionally custom configs are applied and finally it reboots. 
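As an aside on the `updateip.sh` hunk earlier in this diff, which derives `HONEY_UUID` from `lsblk` output: here is a small hedged sketch of that parsing on canned output (so it runs without `lsblk`). Note that the hunk's `grep "/"` matches every mountpoint containing a slash, so matching the first column exactly is slightly safer on multi-partition systems:

```shell
# Sketch of extracting the root filesystem UUID, as updateip.sh does.
# Canned lsblk-style output stands in for: lsblk -o MOUNTPOINT,UUID
sample='MOUNTPOINT UUID
/boot 1111-2222
/ abcd-ef01-2345'
# awk $1 == "/" picks only the root mountpoint; the original grep "/"
# would also match /boot here.
myUUID=$(printf '%s\n' "$sample" | awk '$1 == "/" { print $2 }')
echo "$myUUID"
```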
-Once this is done, you can proceed with connecting/logging in to the T-Pot according to the [documentation](https://github.com/dtag-dev-sec/tpotce#ssh-and-web-access). +Once this is done, you can proceed with connecting/logging in to the T-Pot according to the [documentation](https://github.com/telekom-security/tpotce#ssh-and-web-access). # Further documentation diff --git a/cloud/ansible/openstack/clouds.yaml b/cloud/ansible/openstack/clouds.yaml index fd0b2831..c16bfcf3 100644 --- a/cloud/ansible/openstack/clouds.yaml +++ b/cloud/ansible/openstack/clouds.yaml @@ -1,6 +1,7 @@ clouds: open-telekom-cloud: profile: otc + region_name: eu-de auth: project_name: eu-de_your_project username: your_api_user diff --git a/cloud/ansible/openstack/roles/check/tasks/main.yaml b/cloud/ansible/openstack/roles/check/tasks/main.yaml index 385be4dc..d9483ef4 100644 --- a/cloud/ansible/openstack/roles/check/tasks/main.yaml +++ b/cloud/ansible/openstack/roles/check/tasks/main.yaml @@ -1,14 +1,17 @@ - name: Install dependencies package: name: + - gcc - pwgen - - python-setuptools - - python-pip + - python3-dev + - python3-setuptools + - python3-pip state: present - name: Install openstacksdk pip: name: openstacksdk + executable: pip3 - name: Check if agent forwarding is enabled fail: diff --git a/cloud/ansible/openstack/roles/install/tasks/main.yaml b/cloud/ansible/openstack/roles/install/tasks/main.yaml index 40977347..173c4f08 100644 --- a/cloud/ansible/openstack/roles/install/tasks/main.yaml +++ b/cloud/ansible/openstack/roles/install/tasks/main.yaml @@ -6,7 +6,7 @@ - name: Cloning T-Pot install directory git: - repo: "https://github.com/dtag-dev-sec/tpotce.git" + repo: "https://github.com/telekom-security/tpotce.git" dest: /root/tpot - name: Prepare to set user password diff --git a/cloud/terraform/cloud-init.yaml b/cloud/terraform/cloud-init.yaml index 123e1612..18d6621a 100644 --- a/cloud/terraform/cloud-init.yaml +++ b/cloud/terraform/cloud-init.yaml @@ -5,7 +5,7 @@ packages: - 
git runcmd: - - git clone https://github.com/dtag-dev-sec/tpotce /root/tpot + - git clone https://github.com/telekom-security/tpotce /root/tpot - /root/tpot/iso/installer/install.sh --type=auto --conf=/root/tpot.conf - rm /root/tpot.conf - /sbin/shutdown -r now diff --git a/cloud/terraform/otc/clouds.yaml b/cloud/terraform/otc/clouds.yaml index 742ceb4b..5eefd562 100644 --- a/cloud/terraform/otc/clouds.yaml +++ b/cloud/terraform/otc/clouds.yaml @@ -1,5 +1,6 @@ clouds: open-telekom-cloud: + region_name: eu-de auth: project_name: eu-de_your_project username: your_api_user diff --git a/doc/architecture.png b/doc/architecture.png index 2bebdf2c..51348088 100644 Binary files a/doc/architecture.png and b/doc/architecture.png differ diff --git a/docker/adbhoney/Dockerfile b/docker/adbhoney/Dockerfile index ba9a4a0f..e249b746 100644 --- a/docker/adbhoney/Dockerfile +++ b/docker/adbhoney/Dockerfile @@ -1,4 +1,4 @@ -FROM alpine:latest +FROM alpine:3.12 # # Include dist ADD dist/ /root/dist/ @@ -13,7 +13,9 @@ RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \ python3-dev && \ # # Install adbhoney from git - git clone --depth=1 https://github.com/huuck/ADBHoney /opt/adbhoney && \ + git clone https://github.com/huuck/ADBHoney /opt/adbhoney && \ + cd /opt/adbhoney && \ + git checkout ad7c17e78d01f6860d58ba826a4b6a4e4f83acbd && \ cp /root/dist/adbhoney.cfg /opt/adbhoney && \ sed -i 's/dst_ip/dest_ip/' /opt/adbhoney/adbhoney/core.py && \ sed -i 's/dst_port/dest_port/' /opt/adbhoney/adbhoney/core.py && \ diff --git a/docker/adbhoney/docker-compose.yml b/docker/adbhoney/docker-compose.yml index 58e62f11..03fb50f2 100644 --- a/docker/adbhoney/docker-compose.yml +++ b/docker/adbhoney/docker-compose.yml @@ -14,7 +14,7 @@ services: - adbhoney_local ports: - "5555:5555" - image: "dtagdevsec/adbhoney:2006" + image: "ghcr.io/telekom-security/adbhoney:2006" read_only: true volumes: - /data/adbhoney/log:/opt/adbhoney/log diff --git a/docker/ciscoasa/Dockerfile 
b/docker/ciscoasa/Dockerfile index 85dcaa71..57d7100f 100644 --- a/docker/ciscoasa/Dockerfile +++ b/docker/ciscoasa/Dockerfile @@ -1,4 +1,4 @@ -FROM alpine:latest +FROM alpine:3.12 # # Include dist ADD dist/ /root/dist/ @@ -23,8 +23,9 @@ RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \ # Get and install packages mkdir -p /opt/ && \ cd /opt/ && \ - git clone --depth=1 https://github.com/cymmetria/ciscoasa_honeypot && \ + git clone https://github.com/cymmetria/ciscoasa_honeypot && \ cd ciscoasa_honeypot && \ + git checkout d6e91f1aab7fe6fc01fabf2046e76b68dd6dc9e2 && \ pip3 install --no-cache-dir -r requirements.txt && \ cp /root/dist/asa_server.py /opt/ciscoasa_honeypot && \ chown -R ciscoasa:ciscoasa /opt/ciscoasa_honeypot && \ diff --git a/docker/ciscoasa/docker-compose.yml b/docker/ciscoasa/docker-compose.yml index bf85bc48..bb2a466f 100644 --- a/docker/ciscoasa/docker-compose.yml +++ b/docker/ciscoasa/docker-compose.yml @@ -13,7 +13,7 @@ services: ports: - "5000:5000/udp" - "8443:8443" - image: "dtagdevsec/ciscoasa:2006" + image: "ghcr.io/telekom-security/ciscoasa:2006" read_only: true volumes: - /data/ciscoasa/log:/var/log/ciscoasa diff --git a/docker/citrixhoneypot/Dockerfile b/docker/citrixhoneypot/Dockerfile index 4326568a..7416f480 100644 --- a/docker/citrixhoneypot/Dockerfile +++ b/docker/citrixhoneypot/Dockerfile @@ -1,4 +1,4 @@ -FROM alpine:latest +FROM alpine:3.12 # # Install packages RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \ @@ -15,7 +15,9 @@ RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \ # Install CitrixHoneypot from GitHub # git clone --depth=1 https://github.com/malwaretech/citrixhoneypot /opt/citrixhoneypot && \ # git clone --depth=1 https://github.com/vorband/CitrixHoneypot /opt/citrixhoneypot && \ - git clone --depth=1 https://github.com/t3chn0m4g3/CitrixHoneypot /opt/citrixhoneypot && \ + git clone https://github.com/t3chn0m4g3/CitrixHoneypot /opt/citrixhoneypot && \ + cd /opt/citrixhoneypot && \ + git checkout 
f59ad7320dc5bbb8c23c8baa5f111b52c52fbef3 && \ # # Setup user, groups and configs mkdir -p /opt/citrixhoneypot/logs /opt/citrixhoneypot/ssl && \ diff --git a/docker/citrixhoneypot/docker-compose.yml b/docker/citrixhoneypot/docker-compose.yml index 16eea88f..dd2c5d6c 100644 --- a/docker/citrixhoneypot/docker-compose.yml +++ b/docker/citrixhoneypot/docker-compose.yml @@ -14,7 +14,7 @@ services: - citrixhoneypot_local ports: - "443:443" - image: "dtagdevsec/citrixhoneypot:2006" + image: "ghcr.io/telekom-security/citrixhoneypot:2006" read_only: true volumes: - /data/citrixhoneypot/logs:/opt/citrixhoneypot/logs diff --git a/docker/conpot/Dockerfile b/docker/conpot/Dockerfile index e16be97e..24d71f3a 100644 --- a/docker/conpot/Dockerfile +++ b/docker/conpot/Dockerfile @@ -28,7 +28,7 @@ RUN apk -U add \ # Setup ConPot git clone https://github.com/mushorg/conpot /opt/conpot && \ cd /opt/conpot/ && \ - git checkout 7a77329cd99cee9c37ee20e2f05a48952d8eece9 && \ + git checkout ff09e009d10d953aa7dcff2c06b7c890e6ffd4b7 && \ # Change template default ports if <1024 sed -i 's/port="2121"/port="21"/' /opt/conpot/conpot/templates/default/ftp/ftp.xml && \ sed -i 's/port="8800"/port="80"/' /opt/conpot/conpot/templates/default/http/http.xml && \ diff --git a/docker/conpot/docker-compose.yml b/docker/conpot/docker-compose.yml index 4b315497..57c7fd39 100644 --- a/docker/conpot/docker-compose.yml +++ b/docker/conpot/docker-compose.yml @@ -35,7 +35,7 @@ services: - "2121:21" - "44818:44818" - "47808:47808" - image: "dtagdevsec/conpot:2006" + image: "ghcr.io/telekom-security/conpot:2006" read_only: true volumes: - /data/conpot/log:/var/log/conpot @@ -58,7 +58,7 @@ services: ports: # - "161:161" - "2404:2404" - image: "dtagdevsec/conpot:2006" + image: "ghcr.io/telekom-security/conpot:2006" read_only: true volumes: - /data/conpot/log:/var/log/conpot @@ -80,7 +80,7 @@ services: - conpot_local_guardian_ast ports: - "10001:10001" - image: "dtagdevsec/conpot:2006" + image: 
"ghcr.io/telekom-security/conpot:2006" read_only: true volumes: - /data/conpot/log:/var/log/conpot @@ -102,7 +102,7 @@ services: - conpot_local_ipmi ports: - "623:623" - image: "dtagdevsec/conpot:2006" + image: "ghcr.io/telekom-security/conpot:2006" read_only: true volumes: - /data/conpot/log:/var/log/conpot @@ -125,7 +125,7 @@ services: ports: - "1025:1025" - "50100:50100" - image: "dtagdevsec/conpot:2006" + image: "ghcr.io/telekom-security/conpot:2006" read_only: true volumes: - /data/conpot/log:/var/log/conpot diff --git a/docker/cowrie/Dockerfile b/docker/cowrie/Dockerfile index d3aa058e..aa8f533b 100644 --- a/docker/cowrie/Dockerfile +++ b/docker/cowrie/Dockerfile @@ -1,4 +1,4 @@ -FROM alpine:latest +FROM alpine:3.12 # # Include dist ADD dist/ /root/dist/ @@ -31,9 +31,9 @@ RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \ # Install cowrie mkdir -p /home/cowrie && \ cd /home/cowrie && \ - git clone --depth=1 https://github.com/micheloosterhof/cowrie -b v2.1.0 && \ + git clone --depth=1 https://github.com/micheloosterhof/cowrie -b v2.2.0 && \ cd cowrie && \ - sed -i s/logfile.DailyLogFile/logfile.LogFile/g src/cowrie/python/logfile.py && \ +# sed -i s/logfile.DailyLogFile/logfile.LogFile/g src/cowrie/python/logfile.py && \ mkdir -p log && \ cp /root/dist/requirements.txt . 
&& \ pip3 install -r requirements.txt && \ diff --git a/docker/cowrie/docker-compose.yml b/docker/cowrie/docker-compose.yml index 181a9bd7..1d232138 100644 --- a/docker/cowrie/docker-compose.yml +++ b/docker/cowrie/docker-compose.yml @@ -18,7 +18,7 @@ services: ports: - "22:22" - "23:23" - image: "dtagdevsec/cowrie:2006" + image: "ghcr.io/telekom-security/cowrie:2006" read_only: true volumes: - /data/cowrie/downloads:/home/cowrie/cowrie/dl diff --git a/docker/cyberchef/Dockerfile b/docker/cyberchef/Dockerfile index 90258091..abc36bd7 100644 --- a/docker/cyberchef/Dockerfile +++ b/docker/cyberchef/Dockerfile @@ -13,7 +13,7 @@ RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \ # # Install CyberChef cd /root && \ - git clone https://github.com/gchq/cyberchef --depth=1 && \ + git clone https://github.com/gchq/cyberchef -b v9.21.0 && \ chown -R nobody:nobody cyberchef && \ cd cyberchef && \ npm install && \ diff --git a/docker/cyberchef/docker-compose.yml b/docker/cyberchef/docker-compose.yml index 6bb8c3b9..e8a16d07 100644 --- a/docker/cyberchef/docker-compose.yml +++ b/docker/cyberchef/docker-compose.yml @@ -14,5 +14,5 @@ services: - cyberchef_local ports: - "127.0.0.1:64299:8000" - image: "dtagdevsec/cyberchef:2006" + image: "ghcr.io/telekom-security/cyberchef:2006" read_only: true diff --git a/docker/deprecated/elasticpot.old/README.md b/docker/deprecated/elasticpot.old/README.md index cbe64597..3556bc04 100644 --- a/docker/deprecated/elasticpot.old/README.md +++ b/docker/deprecated/elasticpot.old/README.md @@ -1,10 +1,10 @@ -[![](https://images.microbadger.com/badges/version/dtagdevsec/elasticpot:1903.svg)](https://microbadger.com/images/dtagdevsec/elasticpot:1903 "Get your own version badge on microbadger.com") [![](https://images.microbadger.com/badges/image/dtagdevsec/elasticpot:1903.svg)](https://microbadger.com/images/dtagdevsec/elasticpot:1903 "Get your own image badge on microbadger.com") 
+[![](https://images.microbadger.com/badges/version/ghcr.io/telekom-security/elasticpot:1903.svg)](https://microbadger.com/images/ghcr.io/telekom-security/elasticpot:1903 "Get your own version badge on microbadger.com") [![](https://images.microbadger.com/badges/image/ghcr.io/telekom-security/elasticpot:1903.svg)](https://microbadger.com/images/ghcr.io/telekom-security/elasticpot:1903 "Get your own image badge on microbadger.com")
 # elasticpot
 [elasticpot](https://github.com/schmalle/ElasticPot) is a simple elastic search honeypot.
-This dockerized version is part of the **[T-Pot community honeypot](http://dtag-dev-sec.github.io/)** of Deutsche Telekom AG.
+This dockerized version is part of the **[T-Pot community honeypot](http://telekom-security.github.io/)** of Deutsche Telekom AG.
 The `Dockerfile` contains the blueprint for the dockerized elasticpot and will be used to setup the docker image.
diff --git a/docker/deprecated/elasticpot.old/docker-compose.yml b/docker/deprecated/elasticpot.old/docker-compose.yml
index a8fd3547..60992d17 100644
--- a/docker/deprecated/elasticpot.old/docker-compose.yml
+++ b/docker/deprecated/elasticpot.old/docker-compose.yml
@@ -14,7 +14,7 @@ services:
      - elasticpot_local
     ports:
      - "9200:9200"
-    image: "dtagdevsec/elasticpot:2006"
+    image: "ghcr.io/telekom-security/elasticpot:2006"
     read_only: true
     volumes:
      - /data/elasticpot/log:/opt/ElasticpotPY/log
diff --git a/docker/deprecated/glastopf/README.md b/docker/deprecated/glastopf/README.md
index 166c6998..1adf6c61 100644
--- a/docker/deprecated/glastopf/README.md
+++ b/docker/deprecated/glastopf/README.md
@@ -1,10 +1,10 @@
-[![](https://images.microbadger.com/badges/version/dtagdevsec/glastopf:1903.svg)](https://microbadger.com/images/dtagdevsec/glastopf:1903 "Get your own version badge on microbadger.com") [![](https://images.microbadger.com/badges/image/dtagdevsec/glastopf:1903.svg)](https://microbadger.com/images/dtagdevsec/glastopf:1903 "Get your own image badge on microbadger.com")
+[![](https://images.microbadger.com/badges/version/ghcr.io/telekom-security/glastopf:1903.svg)](https://microbadger.com/images/ghcr.io/telekom-security/glastopf:1903 "Get your own version badge on microbadger.com") [![](https://images.microbadger.com/badges/image/ghcr.io/telekom-security/glastopf:1903.svg)](https://microbadger.com/images/ghcr.io/telekom-security/glastopf:1903 "Get your own image badge on microbadger.com")
 # glastopf (deprecated)
 [glastopf](https://github.com/mushorg/glastopf) is a python web application honeypot.
-This dockerized version is part of the **[T-Pot community honeypot](http://dtag-dev-sec.github.io/)** of Deutsche Telekom AG.
+This dockerized version is part of the **[T-Pot community honeypot](http://telekom-security.github.io/)** of Deutsche Telekom AG.
 The `Dockerfile` contains the blueprint for the dockerized glastopf and will be used to setup the docker image.
diff --git a/docker/deprecated/glastopf/docker-compose.yml b/docker/deprecated/glastopf/docker-compose.yml
index 5d67d6fc..bb14a6d0 100644
--- a/docker/deprecated/glastopf/docker-compose.yml
+++ b/docker/deprecated/glastopf/docker-compose.yml
@@ -16,7 +16,7 @@ services:
      - glastopf_local
     ports:
      - "8081:80"
-    image: "dtagdevsec/glastopf:1903"
+    image: "ghcr.io/telekom-security/glastopf:1903"
     read_only: true
     volumes:
      - /data/glastopf/db:/tmp/glastopf/db
diff --git a/docker/deprecated/hpfeeds/docker-compose.yml b/docker/deprecated/hpfeeds/docker-compose.yml
index da104895..ce7bbaf5 100644
--- a/docker/deprecated/hpfeeds/docker-compose.yml
+++ b/docker/deprecated/hpfeeds/docker-compose.yml
@@ -16,4 +16,4 @@ services:
      - hpfeeds_local
     ports:
      - "20000:20000"
-    image: "dtagdevsec/hpfeeds:latest"
+    image: "ghcr.io/telekom-security/hpfeeds:latest"
diff --git a/docker/deprecated/nginx/docker-compose.yml b/docker/deprecated/nginx/docker-compose.yml
index 2443efe7..46430307 100644
--- a/docker/deprecated/nginx/docker-compose.yml
+++ b/docker/deprecated/nginx/docker-compose.yml
@@ -17,7 +17,7 @@ services:
     network_mode: "host"
     ports:
      - "64297:64297"
-    image: "dtagdevsec/nginx:1903"
+    image: "ghcr.io/telekom-security/nginx:1903"
     read_only: true
     volumes:
      - /data/nginx/cert/:/etc/nginx/cert/:ro
diff --git a/docker/dicompot/Dockerfile b/docker/dicompot/Dockerfile
index 7fc9c2b3..51a1fcd2 100644
--- a/docker/dicompot/Dockerfile
+++ b/docker/dicompot/Dockerfile
@@ -1,4 +1,4 @@
-FROM alpine:latest
+FROM alpine:3.12
 #
 # Setup apk
 RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
@@ -14,6 +14,7 @@ RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
     cd /opt/go/ && \
     git clone https://github.com/nsmfoo/dicompot.git && \
     cd dicompot && \
+    git checkout 41331194156bbb17078bcc1594f4952ac06a731e && \
     go mod download && \
     go install -a -x github.com/nsmfoo/dicompot/server && \
 #
diff --git a/docker/dicompot/docker-compose.yml b/docker/dicompot/docker-compose.yml
index e06a4fad..5ae13067 100644
--- a/docker/dicompot/docker-compose.yml
+++ b/docker/dicompot/docker-compose.yml
@@ -17,7 +17,7 @@ services:
      - dicompot_local
     ports:
      - "11112:11112"
-    image: "dtagdevsec/dicompot:2006"
+    image: "ghcr.io/telekom-security/dicompot:2006"
     read_only: true
     volumes:
      - /data/dicompot/log:/var/log/dicompot
diff --git a/docker/dionaea/Dockerfile b/docker/dionaea/Dockerfile
index 25e457a6..e6028f1b 100644
--- a/docker/dionaea/Dockerfile
+++ b/docker/dionaea/Dockerfile
@@ -36,7 +36,7 @@ RUN apt-get update -y && \
 #
 # Get and install dionaea
 # Latest master is unstable, SIP causes crashing
-    git clone --depth=1 https://github.com/dinotools/dionaea -b 0.8.0 /root/dionaea/ && \
+    git clone --depth=1 https://github.com/dinotools/dionaea -b 0.11.0 /root/dionaea/ && \
     cd /root/dionaea && \
     #git checkout 1426750b9fd09c5bfeae74d506237333cd8505e2 && \
     mkdir build && \
diff --git a/docker/dionaea/docker-compose.yml b/docker/dionaea/docker-compose.yml
index 07bd6336..372934aa 100644
--- a/docker/dionaea/docker-compose.yml
+++ b/docker/dionaea/docker-compose.yml
@@ -31,7 +31,7 @@ services:
      - "5060:5060/udp"
      - "5061:5061"
      - "27017:27017"
-    image: "dtagdevsec/dionaea:2006"
+    image: "ghcr.io/telekom-security/dionaea:2006"
     read_only: true
     volumes:
      - /data/dionaea/roots/ftp:/opt/dionaea/var/dionaea/roots/ftp
diff --git a/docker/docker-compose.yml b/docker/docker-compose.yml
index 3bb1f328..bc6d9df1 100644
--- a/docker/docker-compose.yml
+++ b/docker/docker-compose.yml
@@ -10,98 +10,98 @@ services:
 # Adbhoney service
   adbhoney:
     build: adbhoney/.
-    image: "dtagdevsec/adbhoney:2006"
+    image: "ghcr.io/telekom-security/adbhoney:2006"
 # Ciscoasa service
   ciscoasa:
     build: ciscoasa/.
-    image: "dtagdevsec/ciscoasa:2006"
+    image: "ghcr.io/telekom-security/ciscoasa:2006"
 # CitrixHoneypot service
   citrixhoneypot:
     build: citrixhoneypot/.
-    image: "dtagdevsec/citrixhoneypot:2006"
+    image: "ghcr.io/telekom-security/citrixhoneypot:2006"
 # Conpot IEC104 service
   conpot_IEC104:
     build: conpot/.
-    image: "dtagdevsec/conpot:2006"
+    image: "ghcr.io/telekom-security/conpot:2006"
 # Cowrie service
   cowrie:
     build: cowrie/.
-    image: "dtagdevsec/cowrie:2006"
+    image: "ghcr.io/telekom-security/cowrie:2006"
 # Dicompot service
   dicompot:
     build: dicompot/.
-    image: "dtagdevsec/dicompot:2006"
+    image: "ghcr.io/telekom-security/dicompot:2006"
 # Dionaea service
   dionaea:
     build: dionaea/.
-    image: "dtagdevsec/dionaea:2006"
+    image: "ghcr.io/telekom-security/dionaea:2006"
 # ElasticPot service
   elasticpot:
     build: elasticpot/.
-    image: "dtagdevsec/elasticpot:2006"
+    image: "ghcr.io/telekom-security/elasticpot:2006"
 # Glutton service
   glutton:
     build: glutton/.
-    image: "dtagdevsec/glutton:2006"
+    image: "ghcr.io/telekom-security/glutton:2006"
 # Heralding service
   heralding:
     build: heralding/.
-    image: "dtagdevsec/heralding:2006"
+    image: "ghcr.io/telekom-security/heralding:2006"
 # HoneyPy service
   honeypy:
     build: honeypy/.
-    image: "dtagdevsec/honeypy:2006"
+    image: "ghcr.io/telekom-security/honeypy:2006"
 # Honeytrap service
   honeytrap:
     build: honeytrap/.
-    image: "dtagdevsec/honeytrap:2006"
+    image: "ghcr.io/telekom-security/honeytrap:2006"
 # Mailoney service
   mailoney:
     build: mailoney/.
-    image: "dtagdevsec/mailoney:2006"
+    image: "ghcr.io/telekom-security/mailoney:2006"
 # Medpot service
   medpot:
     build: medpot/.
-    image: "dtagdevsec/medpot:2006"
+    image: "ghcr.io/telekom-security/medpot:2006"
 # Rdpy service
   rdpy:
     build: rdpy/.
-    image: "dtagdevsec/rdpy:2006"
+    image: "ghcr.io/telekom-security/rdpy:2006"
 #### Snare / Tanner
 ## Tanner Redis Service
   tanner_redis:
     build: tanner/redis/.
-    image: "dtagdevsec/redis:2006"
+    image: "ghcr.io/telekom-security/redis:2006"
 ## PHP Sandbox service
   tanner_phpox:
     build: tanner/phpox/.
-    image: "dtagdevsec/phpox:2006"
+    image: "ghcr.io/telekom-security/phpox:2006"
 ## Tanner API Service
   tanner_api:
     build: tanner/tanner/.
-    image: "dtagdevsec/tanner:2006"
+    image: "ghcr.io/telekom-security/tanner:2006"
 ## Snare Service
   snare:
     build: tanner/snare/.
-    image: "dtagdevsec/snare:2006"
+    image: "ghcr.io/telekom-security/snare:2006"
 ##################
@@ -111,17 +111,17 @@ services:
 # Fatt service
   fatt:
     build: fatt/.
-    image: "dtagdevsec/fatt:2006"
+    image: "ghcr.io/telekom-security/fatt:2006"
 # P0f service
   p0f:
     build: p0f/.
-    image: "dtagdevsec/p0f:2006"
+    image: "ghcr.io/telekom-security/p0f:2006"
 # Suricata service
   suricata:
     build: suricata/.
-    image: "dtagdevsec/suricata:2006"
+    image: "ghcr.io/telekom-security/suricata:2006"
 ##################
@@ -131,40 +131,40 @@ services:
 # Cyberchef service
   cyberchef:
     build: cyberchef/.
-    image: "dtagdevsec/cyberchef:2006"
+    image: "ghcr.io/telekom-security/cyberchef:2006"
 #### ELK
 ## Elasticsearch service
   elasticsearch:
     build: elk/elasticsearch/.
-    image: "dtagdevsec/elasticsearch:2006"
+    image: "ghcr.io/telekom-security/elasticsearch:2006"
 ## Kibana service
   kibana:
     build: elk/kibana/.
-    image: "dtagdevsec/kibana:2006"
+    image: "ghcr.io/telekom-security/kibana:2006"
 ## Logstash service
   logstash:
     build: elk/logstash/.
-    image: "dtagdevsec/logstash:2006"
+    image: "ghcr.io/telekom-security/logstash:2006"
 ## Elasticsearch-head service
   head:
     build: elk/head/.
-    image: "dtagdevsec/head:2006"
+    image: "ghcr.io/telekom-security/head:2006"
 # Ewsposter service
   ewsposter:
     build: ews/.
-    image: "dtagdevsec/ewsposter:2006"
+    image: "ghcr.io/telekom-security/ewsposter:2006"
 # Nginx service
   nginx:
     build: heimdall/.
-    image: "dtagdevsec/nginx:2006"
+    image: "ghcr.io/telekom-security/nginx:2006"
 # Spiderfoot service
   spiderfoot:
     build: spiderfoot/.
-    image: "dtagdevsec/spiderfoot:2006"
+    image: "ghcr.io/telekom-security/spiderfoot:2006"
diff --git a/docker/elasticpot/Dockerfile b/docker/elasticpot/Dockerfile
index 52d74478..ad935053 100644
--- a/docker/elasticpot/Dockerfile
+++ b/docker/elasticpot/Dockerfile
@@ -1,4 +1,4 @@
-FROM alpine:latest
+FROM alpine:3.12
 #
 # Include dist
 ADD dist/ /root/dist/
@@ -20,8 +20,9 @@ RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
     python3-dev && \
     mkdir -p /opt && \
     cd /opt/ && \
-    git clone --depth=1 https://gitlab.com/bontchev/elasticpot.git/ && \
+    git clone https://gitlab.com/bontchev/elasticpot.git/ && \
     cd elasticpot && \
+    git checkout d12649730d819bd78ea622361b6c65120173ad45 && \
     pip3 install -r requirements.txt && \
 #
 # Setup user, groups and configs
diff --git a/docker/elasticpot/docker-compose.yml b/docker/elasticpot/docker-compose.yml
index 16ce22cf..e8d3e67d 100644
--- a/docker/elasticpot/docker-compose.yml
+++ b/docker/elasticpot/docker-compose.yml
@@ -14,7 +14,7 @@ services:
      - elasticpot_local
     ports:
      - "9200:9200"
-    image: "dtagdevsec/elasticpot:2006"
+    image: "ghcr.io/telekom-security/elasticpot:2006"
     read_only: true
     volumes:
      - /data/elasticpot/log:/opt/elasticpot/log
diff --git a/docker/elk/docker-compose.yml b/docker/elk/docker-compose.yml
index 09d59dbb..c49be155 100644
--- a/docker/elk/docker-compose.yml
+++ b/docker/elk/docker-compose.yml
@@ -24,7 +24,7 @@ services:
     mem_limit: 4g
     ports:
      - "127.0.0.1:64298:9200"
-    image: "dtagdevsec/elasticsearch:2006"
+    image: "ghcr.io/telekom-security/elasticsearch:2006"
     volumes:
      - /data:/data
@@ -39,7 +39,7 @@ services:
       condition: service_healthy
     ports:
      - "127.0.0.1:64296:5601"
-    image: "dtagdevsec/kibana:2006"
+    image: "ghcr.io/telekom-security/kibana:2006"
 ## Logstash service
   logstash:
@@ -53,7 +53,7 @@ services:
       condition: service_healthy
     env_file:
      - /opt/tpot/etc/compose/elk_environment
-    image: "dtagdevsec/logstash:2006"
+    image: "ghcr.io/telekom-security/logstash:2006"
     volumes:
      - /data:/data
 #    - /root/tpotce/docker/elk/logstash/dist/logstash.conf:/etc/logstash/conf.d/logstash.conf
@@ -68,5 +68,5 @@ services:
       condition: service_healthy
     ports:
      - "127.0.0.1:64302:9100"
-    image: "dtagdevsec/head:2006"
+    image: "ghcr.io/telekom-security/head:2006"
     read_only: true
diff --git a/docker/elk/elasticsearch/Dockerfile b/docker/elk/elasticsearch/Dockerfile
index 89d19c4c..9b691408 100644
--- a/docker/elk/elasticsearch/Dockerfile
+++ b/docker/elk/elasticsearch/Dockerfile
@@ -1,7 +1,7 @@
 FROM alpine:3.12
 #
 # VARS
-ENV ES_VER=7.9.0 \
+ENV ES_VER=7.10.1 \
     JAVA_HOME=/usr/lib/jvm/java-11-openjdk
 # Include dist
 ADD dist/ /root/dist/
diff --git a/docker/elk/elasticsearch/docker-compose.yml b/docker/elk/elasticsearch/docker-compose.yml
index 3f51dcb5..0cf2ccf6 100644
--- a/docker/elk/elasticsearch/docker-compose.yml
+++ b/docker/elk/elasticsearch/docker-compose.yml
@@ -24,6 +24,6 @@ services:
     mem_limit: 2g
     ports:
      - "127.0.0.1:64298:9200"
-    image: "dtagdevsec/elasticsearch:2006"
+    image: "ghcr.io/telekom-security/elasticsearch:2006"
     volumes:
      - /data:/data
diff --git a/docker/elk/head/Dockerfile b/docker/elk/head/Dockerfile
index e1022f55..83399b97 100644
--- a/docker/elk/head/Dockerfile
+++ b/docker/elk/head/Dockerfile
@@ -10,7 +10,9 @@ RUN apk -U add \
 #
 # Get and install packages
     mkdir -p /usr/src/app/ && \
     cd /usr/src/app/ && \
-    git clone --depth=1 https://github.com/mobz/elasticsearch-head . && \
+    git clone https://github.com/mobz/elasticsearch-head . && \
+#    git checkout d0a25608854479f0b3f2dca24e8039a2fd66b0e2 && \
+    git checkout 2932af571b84017f87bc1c5beee5b6dfbf11b0a5 && \
     npm install http-server && \
     sed -i "s#\"http\:\/\/localhost\:9200\"#window.location.protocol \+ \'\/\/\' \+ window.location.hostname \+ \'\:\' \+ window.location.port \+ \'\/es\/\'#" /usr/src/app/_site/app.js && \
 #
diff --git a/docker/elk/head/docker-compose.yml b/docker/elk/head/docker-compose.yml
index 5cfaafdb..3c0bf2a3 100644
--- a/docker/elk/head/docker-compose.yml
+++ b/docker/elk/head/docker-compose.yml
@@ -12,5 +12,5 @@ services:
 #      condition: service_healthy
     ports:
      - "127.0.0.1:64302:9100"
-    image: "dtagdevsec/head:2006"
+    image: "ghcr.io/telekom-security/head:2006"
     read_only: true
diff --git a/docker/elk/kibana/Dockerfile b/docker/elk/kibana/Dockerfile
index 3c7d9db9..ebd68d7d 100644
--- a/docker/elk/kibana/Dockerfile
+++ b/docker/elk/kibana/Dockerfile
@@ -1,7 +1,7 @@
-FROM node:10.21.0-alpine
+FROM node:10.22.1-alpine
 #
 # VARS
-ENV KB_VER=7.9.0
+ENV KB_VER=7.10.1
 #
 # Include dist
 ADD dist/ /root/dist/
diff --git a/docker/elk/kibana/docker-compose.yml b/docker/elk/kibana/docker-compose.yml
index 2f464089..e00ddc33 100644
--- a/docker/elk/kibana/docker-compose.yml
+++ b/docker/elk/kibana/docker-compose.yml
@@ -12,4 +12,4 @@ services:
 #      condition: service_healthy
     ports:
      - "127.0.0.1:64296:5601"
-    image: "dtagdevsec/kibana:2006"
+    image: "ghcr.io/telekom-security/kibana:2006"
diff --git a/docker/elk/logstash/Dockerfile b/docker/elk/logstash/Dockerfile
index 16e22035..fdf35632 100644
--- a/docker/elk/logstash/Dockerfile
+++ b/docker/elk/logstash/Dockerfile
@@ -1,7 +1,7 @@
 FROM alpine:3.12
 #
 # VARS
-ENV LS_VER=7.9.0
+ENV LS_VER=7.10.1
 # Include dist
 ADD dist/ /root/dist/
 #
@@ -25,8 +25,9 @@ RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
     bunzip2 *.bz2 && \
     cd /root/dist/ && \
     mkdir -p /usr/share/logstash/ && \
-    aria2c -s 16 -x 16 https://artifacts.elastic.co/downloads/logstash/logstash-$LS_VER.tar.gz && \
-    tar xvfz logstash-$LS_VER.tar.gz --strip-components=1 -C /usr/share/logstash/ && \
+    aria2c -s 16 -x 16 https://artifacts.elastic.co/downloads/logstash/logstash-$LS_VER-linux-x86_64.tar.gz && \
+    tar xvfz logstash-$LS_VER-linux-x86_64.tar.gz --strip-components=1 -C /usr/share/logstash/ && \
+    rm -rf /usr/share/logstash/jdk && \
     /usr/share/logstash/bin/logstash-plugin install logstash-filter-translate && \
     /usr/share/logstash/bin/logstash-plugin install logstash-output-syslog && \
 #
diff --git a/docker/elk/logstash/dist/logstash.conf b/docker/elk/logstash/dist/logstash.conf
index 2e486f34..549ece19 100644
--- a/docker/elk/logstash/dist/logstash.conf
+++ b/docker/elk/logstash/dist/logstash.conf
@@ -321,6 +321,7 @@ filter {
     }
     mutate {
       rename => {
+        "ID" => "id"
         "IP" => "src_ip"
         "Port" => "src_port"
         "AETitle" => "aetitle"
@@ -542,6 +543,11 @@ if "_grokparsefailure" in [tags] { drop {} }
       convert => { "status" => "integer" }
     }
   }
+  if [id] {
+    mutate {
+      convert => { "id" => "string" }
+    }
+  }
 # Add T-Pot hostname and external IP
 if [type] == "Adbhoney" or [type] == "Ciscoasa" or [type] == "CitrixHoneypot" or [type] == "ConPot" or [type] == "Cowrie" or [type] == "Dicompot" or [type] == "Dionaea" or [type] == "ElasticPot" or [type] == "Fatt" or [type] == "Glutton" or [type] == "Honeysap" or [type] == "Honeytrap" or [type] == "Heralding" or [type] == "Honeypy" or [type] == "Ipphoney" or [type] == "Mailoney" or [type] == "Medpot" or [type] == "P0f" or [type] == "Rdpy" or [type] == "Suricata" or [type] == "Tanner" {
diff --git a/docker/elk/logstash/docker-compose.yml b/docker/elk/logstash/docker-compose.yml
index ed94864b..187a30bb 100644
--- a/docker/elk/logstash/docker-compose.yml
+++ b/docker/elk/logstash/docker-compose.yml
@@ -14,7 +14,7 @@ services:
 #      condition: service_healthy
     env_file:
      - /opt/tpot/etc/compose/elk_environment
-    image: "dtagdevsec/logstash:2006"
+    image: "ghcr.io/telekom-security/logstash:2006"
     volumes:
      - /data:/data
 #    - /root/tpotce/docker/elk/logstash/dist/logstash.conf:/etc/logstash/conf.d/logstash.conf
diff --git a/docker/ews/Dockerfile b/docker/ews/Dockerfile
index 27cee956..51f616e1 100644
--- a/docker/ews/Dockerfile
+++ b/docker/ews/Dockerfile
@@ -1,4 +1,4 @@
-FROM alpine:latest
+FROM alpine:3.12
 #
 # Include dist
 ADD dist/ /root/dist/
@@ -23,7 +23,9 @@ RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
     pip3 install --no-cache-dir configparser hpfeeds3 pyOpenSSL xmljson && \
 #
 # Setup ewsposter
-    git clone --depth=1 https://github.com/dtag-dev-sec/ewsposter /opt/ewsposter && \
+    git clone https://github.com/telekom-security/ewsposter /opt/ewsposter && \
+    cd /opt/ewsposter && \
+    git checkout 09508938de3a28856114f6ea5f3f529fdb776d79 && \
     mkdir -p /opt/ewsposter/spool /opt/ewsposter/log && \
 #
 # Setup user and groups
diff --git a/docker/ews/dist/ews.cfg b/docker/ews/dist/ews.cfg
index 44fc9e7d..96c8732f 100644
--- a/docker/ews/dist/ews.cfg
+++ b/docker/ews/dist/ews.cfg
@@ -4,10 +4,11 @@ spooldir = /opt/ewsposter/spool/
 logdir = /opt/ewsposter/log/
 del_malware_after_send = false
 send_malware = false
-sendlimit = 500
+sendlimit = 5000
 contact = your_email_address
-proxy =
-ip =
+proxy = None
+ip_int = None
+ip_ext = None
 [EWS]
 ews = true
@@ -39,24 +40,6 @@ nodeid = glastopfv3-community-01
 sqlitedb = /data/glastopf/db/glastopf.db
 malwaredir = /data/glastopf/data/files/
-[GLASTOPFV2]
-glastopfv2 = false
-nodeid =
-mysqlhost =
-mysqldb =
-mysqluser =
-mysqlpw =
-malwaredir =
-
-[KIPPO]
-kippo = false
-nodeid =
-mysqlhost =
-mysqldb =
-mysqluser =
-mysqlpw =
-malwaredir =
-
 [COWRIE]
 cowrie = true
 nodeid = cowrie-community-01
@@ -75,12 +58,6 @@ newversion = true
 payloaddir = /data/honeytrap/attacks/
 attackerfile = /data/honeytrap/log/attacker.log
-[RDPDETECT]
-rdpdetect = false
-nodeid =
-iptableslog =
-targetip =
-
 [EMOBILITY]
 eMobility = false
 nodeid = emobility-community-01
@@ -135,3 +112,18 @@ logfile = /data/tanner/log/tanner_report.json
 glutton = true
 nodeid = glutton-community-01
 logfile = /data/glutton/log/glutton.log
+
+[HONEYSAP]
+honeysap = true
+nodeid = honeysap-community-01
+logfile = /data/honeysap/log/honeysap-external.log
+
+[ADBHONEY]
+adbhoney = true
+nodeid = adbhoney-community-01
+logfile = /data/adbhoney/log/adbhoney.json
+
+[FATT]
+fatt = true
+nodeid = fatt-community-01
+logfile = /data/fatt/log/fatt.log
diff --git a/docker/ews/docker-compose.yml b/docker/ews/docker-compose.yml
index 1900e1d3..b2c4dc30 100644
--- a/docker/ews/docker-compose.yml
+++ b/docker/ews/docker-compose.yml
@@ -23,8 +23,7 @@ services:
      - EWS_HPFEEDS_FORMAT=json
     env_file:
      - /opt/tpot/etc/compose/elk_environment
-    image: "dtagdevsec/ewsposter:2006"
+    image: "ghcr.io/telekom-security/ewsposter:2006"
     volumes:
      - /data:/data
-     - /data/ews/conf/ews.ip:/opt/ewsposter/ews.ip
-
+#     - /data/ews/conf/ews.ip:/opt/ewsposter/ews.ip
diff --git a/docker/fatt/Dockerfile b/docker/fatt/Dockerfile
index 30864c2c..66ae4480 100644
--- a/docker/fatt/Dockerfile
+++ b/docker/fatt/Dockerfile
@@ -1,4 +1,4 @@
-FROM alpine:latest
+FROM alpine:3.12
 #
 # Include dist
 #ADD dist/ /root/dist/
@@ -21,8 +21,9 @@ RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
 # Install fatt
     mkdir -p /opt && \
     cd /opt && \
-    git clone --depth=1 https://github.com/0x4D31/fatt && \
+    git clone https://github.com/0x4D31/fatt && \
     cd fatt && \
+    git checkout 314cd1ff7873b5a145a51ec4e85f6107828a2c79 && \
     mkdir -p log && \
     pip3 install pyshark==0.4.2.2 && \
 #
@@ -39,4 +40,4 @@ RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
 STOPSIGNAL SIGINT
 ENV PYTHONPATH /opt/fatt
 WORKDIR /opt/fatt
-CMD python3 fatt.py -i $(/sbin/ip address | grep '^2: ' | awk '{ print $2 }' | tr -d [:punct:]) --print_output --json_logging -o log/fatt.log
+CMD python3 fatt.py -i $(/sbin/ip address show | /usr/bin/awk '/inet.*brd/{ print $NF; exit }') --print_output --json_logging -o log/fatt.log
diff --git a/docker/fatt/docker-compose.yml b/docker/fatt/docker-compose.yml
index 1550ed3a..39ad84f8 100644
--- a/docker/fatt/docker-compose.yml
+++ b/docker/fatt/docker-compose.yml
@@ -12,6 +12,6 @@ services:
      - NET_ADMIN
      - SYS_NICE
      - NET_RAW
-    image: "dtagdevsec/fatt:2006"
+    image: "ghcr.io/telekom-security/fatt:2006"
     volumes:
      - /data/fatt/log:/opt/fatt/log
diff --git a/docker/glutton/Dockerfile b/docker/glutton/Dockerfile
index 34c51835..7b0f141d 100644
--- a/docker/glutton/Dockerfile
+++ b/docker/glutton/Dockerfile
@@ -1,4 +1,4 @@
-FROM alpine:latest
+FROM alpine:3.12
 #
 # Include dist
 ADD dist/ /root/dist/
@@ -22,6 +22,7 @@ RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
     cd /opt/go/ && \
     git clone https://github.com/mushorg/glutton && \
     cd /opt/go/glutton/ && \
+    git checkout 08f364fff489a82667866ecff2bcc4815569a0c8 && \
     mv /root/dist/system.go /opt/go/glutton/ && \
     go mod download && \
     make build && \
@@ -52,4 +53,4 @@ RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
 #
 # Start glutton
 WORKDIR /opt/glutton
 USER glutton:glutton
-CMD exec bin/server -i $(/sbin/ip address | grep '^2: ' | awk '{ print $2 }' | tr -d [:punct:]) -l /var/log/glutton/glutton.log > /dev/null 2>&1
+CMD exec bin/server -i $(/sbin/ip address show | /usr/bin/awk '/inet.*brd/{ print $NF; exit }') -l /var/log/glutton/glutton.log > /dev/null 2>&1
diff --git a/docker/glutton/docker-compose.yml b/docker/glutton/docker-compose.yml
index 68843e9d..3d050516 100644
--- a/docker/glutton/docker-compose.yml
+++ b/docker/glutton/docker-compose.yml
@@ -13,7 +13,7 @@ services:
     network_mode: "host"
     cap_add:
      - NET_ADMIN
-    image: "dtagdevsec/glutton:2006"
+    image: "ghcr.io/telekom-security/glutton:2006"
     read_only: true
     volumes:
      - /data/glutton/log:/var/log/glutton
diff --git a/docker/heimdall/Dockerfile b/docker/heimdall/Dockerfile
index cc5154d6..f3d01ab9 100644
--- a/docker/heimdall/Dockerfile
+++ b/docker/heimdall/Dockerfile
@@ -1,4 +1,4 @@
-FROM alpine:latest
+FROM alpine:3.12
 #
 # Include dist
 ADD dist/ /root/dist/
@@ -28,6 +28,9 @@ RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
 #
 # Clone and setup Heimdall, Nginx
     git clone https://github.com/linuxserver/heimdall && \
+    cd heimdall && \
+    git checkout 3a9bdd2c431d70803b259990fa4d81db4b06dba4 && \
+    cd .. && \
     cp -R heimdall/. /var/lib/nginx/html && \
     rm -rf heimdall && \
     cd /var/lib/nginx/html && \
diff --git a/docker/heimdall/docker-compose.yml b/docker/heimdall/docker-compose.yml
index 98346f10..a879a991 100644
--- a/docker/heimdall/docker-compose.yml
+++ b/docker/heimdall/docker-compose.yml
@@ -26,7 +26,7 @@ services:
     ports:
      - "64297:64297"
      - "127.0.0.1:64304:64304"
-    image: "dtagdevsec/nginx:2006"
+    image: "ghcr.io/telekom-security/nginx:2006"
     read_only: true
     volumes:
      - /data/nginx/cert/:/etc/nginx/cert/:ro
diff --git a/docker/heralding/Dockerfile b/docker/heralding/Dockerfile
index ce3eb6ea..a41f1f14 100644
--- a/docker/heralding/Dockerfile
+++ b/docker/heralding/Dockerfile
@@ -1,4 +1,4 @@
-FROM alpine:latest
+FROM alpine:3.12.1
 #
 # Include dist
 ADD dist/ /root/dist/
@@ -21,8 +21,9 @@ RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
 # Setup heralding
     mkdir -p /opt && \
     cd /opt/ && \
-    git clone --depth=1 https://github.com/johnnykv/heralding && \
+    git clone https://github.com/johnnykv/heralding && \
     cd heralding && \
+    git checkout 9e9e9218f053c515ebb234667fb5575e6154ffa5 && \
     pip3 install --no-cache-dir -r requirements.txt && \
     pip3 install --no-cache-dir . && \
 #
diff --git a/docker/heralding/docker-compose.yml b/docker/heralding/docker-compose.yml
index 15f92661..945cb0c3 100644
--- a/docker/heralding/docker-compose.yml
+++ b/docker/heralding/docker-compose.yml
@@ -30,7 +30,7 @@ services:
      - "3389:3389"
      - "5432:5432"
      - "5900:5900"
-    image: "dtagdevsec/heralding:2006"
+    image: "ghcr.io/telekom-security/heralding:2006"
     read_only: true
     volumes:
      - /data/heralding/log:/var/log/heralding
diff --git a/docker/honeypy/Dockerfile b/docker/honeypy/Dockerfile
index 833aa2e4..e796f446 100644
--- a/docker/honeypy/Dockerfile
+++ b/docker/honeypy/Dockerfile
@@ -17,8 +17,9 @@ RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
     pip install --no-cache-dir virtualenv && \
 #
 # Clone honeypy from git
-    git clone --depth=1 https://github.com/foospidy/HoneyPy /opt/honeypy && \
+    git clone https://github.com/foospidy/HoneyPy /opt/honeypy && \
     cd /opt/honeypy && \
+    git checkout feccab56ca922bcab01cac4ffd82f588d61ab1c5 && \
     sed -i 's/local_host/dest_ip/g' /opt/honeypy/loggers/file/honeypy_file.py && \
     sed -i 's/local_port/dest_port/g' /opt/honeypy/loggers/file/honeypy_file.py && \
     sed -i 's/remote_host/src_ip/g' /opt/honeypy/loggers/file/honeypy_file.py && \
diff --git a/docker/honeypy/docker-compose.yml b/docker/honeypy/docker-compose.yml
index dd12fa2d..caa6c928 100644
--- a/docker/honeypy/docker-compose.yml
+++ b/docker/honeypy/docker-compose.yml
@@ -20,7 +20,7 @@ services:
      - "2324:2324"
      - "4096:4096"
      - "9200:9200"
-    image: "dtagdevsec/honeypy:2006"
+    image: "ghcr.io/telekom-security/honeypy:2006"
     read_only: true
     volumes:
      - /data/honeypy/log:/opt/honeypy/log
diff --git a/docker/honeysap/Dockerfile b/docker/honeysap/Dockerfile
index 01c280a6..d6c2e4d1 100644
--- a/docker/honeysap/Dockerfile
+++ b/docker/honeysap/Dockerfile
@@ -18,6 +18,7 @@ RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
 #    git clone --depth=1 https://github.com/SecureAuthCorp/HoneySAP /opt/honeysap && \
     git clone --depth=1 https://github.com/t3chn0m4g3/HoneySAP /opt/honeysap && \
     cd /opt/honeysap && \
+    git checkout a3c355a710d399de9d543659a685effaa70e683d && \
     mkdir conf && \
     cp /root/dist/* conf/ && \
     python setup.py install && \
diff --git a/docker/honeysap/docker-compose.yml b/docker/honeysap/docker-compose.yml
index 830a8c0b..032f5607 100644
--- a/docker/honeysap/docker-compose.yml
+++ b/docker/honeysap/docker-compose.yml
@@ -14,6 +14,6 @@ services:
      - honeysap_local
     ports:
      - "3299:3299"
-    image: "dtagdevsec/honeysap:2006"
+    image: "ghcr.io/telekom-security/honeysap:2006"
     volumes:
      - /data/honeysap/log:/opt/honeysap/log
diff --git a/docker/honeytrap/Dockerfile b/docker/honeytrap/Dockerfile
index 80df2fdd..e2507ffb 100644
--- a/docker/honeytrap/Dockerfile
+++ b/docker/honeytrap/Dockerfile
@@ -29,6 +29,7 @@ RUN apt-get update -y && \
     git clone https://github.com/armedpot/honeytrap /root/honeytrap && \
 #    git clone https://github.com/t3chn0m4g3/honeytrap /root/honeytrap && \
     cd /root/honeytrap/ && \
+    git checkout 9aa4f734f2ea2f0da790b02d79afe18204a23982 && \
     autoreconf -vfi && \
     ./configure \
     --with-stream-mon=nfq \
diff --git a/docker/honeytrap/docker-compose.yml b/docker/honeytrap/docker-compose.yml
index 7573b3d5..e049e86e 100644
--- a/docker/honeytrap/docker-compose.yml
+++ b/docker/honeytrap/docker-compose.yml
@@ -12,7 +12,7 @@ services:
     network_mode: "host"
     cap_add:
      - NET_ADMIN
-    image: "dtagdevsec/honeytrap:2006"
+    image: "ghcr.io/telekom-security/honeytrap:2006"
     read_only: true
     volumes:
      - /data/honeytrap/attacks:/opt/honeytrap/var/attacks
diff --git a/docker/ipphoney/Dockerfile b/docker/ipphoney/Dockerfile
index dfad9560..a81a44ca 100644
--- a/docker/ipphoney/Dockerfile
+++ b/docker/ipphoney/Dockerfile
@@ -1,4 +1,4 @@
-FROM alpine:latest
+FROM alpine:3.12.1
 #
 # Include dist
 ADD dist/ /root/dist/
@@ -21,8 +21,9 @@ RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
     python3-dev && \
     mkdir -p /opt && \
     cd /opt/ && \
-    git clone --depth=1 https://gitlab.com/bontchev/ipphoney.git/ && \
+    git clone https://gitlab.com/bontchev/ipphoney.git/ && \
     cd ipphoney && \
+    git checkout 7ab1cac437baba17cb2cd25d5bb1400327e1bb79 && \
     pip3 install -r requirements.txt && \
     setcap cap_net_bind_service=+ep /usr/bin/python3.8 && \
 #
diff --git a/docker/ipphoney/docker-compose.yml b/docker/ipphoney/docker-compose.yml
index 69328fc0..53f7e681 100644
--- a/docker/ipphoney/docker-compose.yml
+++ b/docker/ipphoney/docker-compose.yml
@@ -14,7 +14,7 @@ services:
      - ipphoney_local
     ports:
      - "631:631"
-    image: "dtagdevsec/ipphoney:2006"
+    image: "ghcr.io/telekom-security/ipphoney:2006"
     read_only: true
     volumes:
      - /data/ipphoney/log:/opt/ipphoney/log
diff --git a/docker/mailoney/Dockerfile b/docker/mailoney/Dockerfile
index 2c6efd6b..2376f854 100644
--- a/docker/mailoney/Dockerfile
+++ b/docker/mailoney/Dockerfile
@@ -13,8 +13,9 @@ RUN apk -U --no-cache add \
     python-dev && \
 #
 # Install libemu
-    git clone --depth=1 https://github.com/buffer/libemu /root/libemu/ && \
+    git clone https://github.com/buffer/libemu /root/libemu/ && \
     cd /root/libemu/ && \
+    git checkout e2624361e13588da74a2ce3e1dea0abb59dcf1d0 && \
     autoreconf -vi && \
     ./configure && \
     make && \
@@ -26,7 +27,9 @@ RUN apk -U --no-cache add \
     pylibemu && \
 #
 # Install mailoney from git
-    git clone --depth=1 https://github.com/t3chn0m4g3/mailoney /opt/mailoney && \
+    git clone https://github.com/t3chn0m4g3/mailoney /opt/mailoney && \
+    cd /opt/mailoney && \
+    git checkout 85c37649a99e1cec3f8d48d509653c9a8127ea4f && \
 #
 # Setup user, groups and configs
     addgroup -g 2000 mailoney && \
diff --git a/docker/mailoney/docker-compose.yml b/docker/mailoney/docker-compose.yml
index c5979e6b..5b131acd 100644
--- a/docker/mailoney/docker-compose.yml
+++ b/docker/mailoney/docker-compose.yml
@@ -20,7 +20,7 @@ services:
      - mailoney_local
     ports:
      - "25:25"
-    image: "dtagdevsec/mailoney:2006"
+    image: "ghcr.io/telekom-security/mailoney:2006"
     read_only: true
     volumes:
      - /data/mailoney/log:/opt/mailoney/logs
diff --git a/docker/medpot/Dockerfile b/docker/medpot/Dockerfile
index 05ea54d6..8dd1a1d4 100644
--- a/docker/medpot/Dockerfile
+++ b/docker/medpot/Dockerfile
@@ -1,4 +1,4 @@
-FROM alpine:latest
+FROM alpine:3.12
 #
 # Setup apk
 RUN apk -U --no-cache add \
@@ -12,6 +12,9 @@ RUN apk -U --no-cache add \
     mkdir -p /opt/go/src && \
     cd /opt/go/src && \
     git clone https://github.com/schmalle/medpot && \
+    cd medpot && \
+    git checkout 75a2e6134cf926c35b6017d62542274434c87388 && \
+    cd .. && \
     go get -d -v github.com/davecgh/go-spew/spew && \
     go get -d -v github.com/go-ini/ini && \
     go get -d -v github.com/mozillazg/request && \
diff --git a/docker/medpot/docker-compose.yml b/docker/medpot/docker-compose.yml
index a5565475..6d6490b1 100644
--- a/docker/medpot/docker-compose.yml
+++ b/docker/medpot/docker-compose.yml
@@ -14,7 +14,7 @@ services:
      - medpot_local
     ports:
      - "2575:2575"
-    image: "dtagdevsec/medpot:2006"
+    image: "ghcr.io/telekom-security/medpot:2006"
     read_only: true
     volumes:
      - /data/medpot/log/:/var/log/medpot
diff --git a/docker/p0f/Dockerfile b/docker/p0f/Dockerfile
index 6568b41f..3694bc06 100644
--- a/docker/p0f/Dockerfile
+++ b/docker/p0f/Dockerfile
@@ -1,4 +1,4 @@
-FROM alpine:latest
+FROM alpine:3.12
 #
 # Add source
 ADD . /opt/p0f
@@ -29,7 +29,7 @@ RUN apk -U --no-cache add \
     rm -rf /root/* && \
     rm -rf /var/cache/apk/*
 #
-# Start suricata
+# Start p0f
 WORKDIR /opt/p0f
 USER p0f:p0f
-CMD exec /opt/p0f/p0f -u p0f -j -o /var/log/p0f/p0f.json -i $(/sbin/ip address | grep '^2: ' | awk '{ print $2 }' | tr -d [:punct:]) > /dev/null
+CMD exec /opt/p0f/p0f -u p0f -j -o /var/log/p0f/p0f.json -i $(/sbin/ip address show | /usr/bin/awk '/inet.*brd/{ print $NF; exit }') > /dev/null
diff --git a/docker/p0f/README.md b/docker/p0f/README.md
deleted file mode 100644
index c3af5e3c..00000000
--- a/docker/p0f/README.md
+++ /dev/null
@@ -1,11 +0,0 @@
-[![](https://images.microbadger.com/badges/version/dtagdevsec/p0f:1804.svg)](https://microbadger.com/images/dtagdevsec/p0f:1804 "Get your own version badge on microbadger.com") [![](https://images.microbadger.com/badges/image/dtagdevsec/p0f:1804.svg)](https://microbadger.com/images/dtagdevsec/p0f:1804 "Get your own image badge on microbadger.com")
-
-# p0f
-
-[p0f](http://lcamtuf.coredump.cx/p0f3/) P0f is a tool that utilizes an array of sophisticated, purely passive traffic fingerprinting mechanisms to identify the players behind any incidental TCP/IP communications (often as little as a single normal SYN) without interfering in any way.
-
-This dockerized version is part of the **[T-Pot community honeypot](http://dtag-dev-sec.github.io/)** of Deutsche Telekom AG.
-
-The `Dockerfile` contains the blueprint for the dockerized p0f and will be used to setup the docker image.
-
-The `docker-compose.yml` contains the necessary settings to test p0f using `docker-compose`. This will ensure to start the docker container with the appropriate permissions and port mappings.
diff --git a/docker/p0f/docker-compose.yml b/docker/p0f/docker-compose.yml
index 0b1329b8..f3f18081 100644
--- a/docker/p0f/docker-compose.yml
+++ b/docker/p0f/docker-compose.yml
@@ -8,7 +8,7 @@ services:
     container_name: p0f
     restart: always
     network_mode: "host"
-    image: "dtagdevsec/p0f:2006"
+    image: "ghcr.io/telekom-security/p0f:2006"
     read_only: true
     volumes:
      - /data/p0f/log:/var/log/p0f
diff --git a/docker/rdpy/Dockerfile b/docker/rdpy/Dockerfile
index 700039f9..c15b58f0 100644
--- a/docker/rdpy/Dockerfile
+++ b/docker/rdpy/Dockerfile
@@ -34,8 +34,9 @@ RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
 # Install rdpy from git
     mkdir -p /opt && \
     cd /opt && \
-    git clone --depth=1 https://github.com/t3chn0m4g3/rdpy && \
+    git clone https://github.com/t3chn0m4g3/rdpy && \
     cd rdpy && \
+    git checkout 1d2a4132aefe0637d09cac1a6ab83ec5391f40ca && \
     python setup.py install && \
 #
 # Setup user, groups and configs
diff --git a/docker/rdpy/docker-compose.yml b/docker/rdpy/docker-compose.yml
index c991c270..8912b3f1 100644
--- a/docker/rdpy/docker-compose.yml
+++ b/docker/rdpy/docker-compose.yml
@@ -22,7 +22,7 @@ services:
      - rdpy_local
     ports:
      - "3389:3389"
-    image: "dtagdevsec/rdpy:2006"
+    image: "ghcr.io/telekom-security/rdpy:2006"
     read_only: true
     volumes:
      - /data/rdpy/log:/var/log/rdpy
diff --git a/docker/spiderfoot/Dockerfile b/docker/spiderfoot/Dockerfile
index 5462e68a..8b540c38 100644
--- a/docker/spiderfoot/Dockerfile
+++ b/docker/spiderfoot/Dockerfile
@@ -1,4 +1,4 @@
-FROM alpine:latest
+FROM alpine:3.12
 #
 # Get and install dependencies & packages
 RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
@@ -33,7 +33,7 @@ RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
     adduser -S -s /bin/ash -u 2000 -D -g 2000 spiderfoot && \
 #
 # Install spiderfoot
-    git clone --depth=1 -b v3.1 https://github.com/smicallef/spiderfoot /home/spiderfoot && \
+    git clone --depth=1 -b v3.2.1 https://github.com/smicallef/spiderfoot /home/spiderfoot && \
     cd /home/spiderfoot && \
     pip3 install --no-cache-dir wheel && \
     pip3 install --no-cache-dir -r requirements.txt && \
diff --git a/docker/spiderfoot/docker-compose.yml b/docker/spiderfoot/docker-compose.yml
index efc808c9..0e90c8ba 100644
--- a/docker/spiderfoot/docker-compose.yml
+++ b/docker/spiderfoot/docker-compose.yml
@@ -14,6 +14,6 @@ services:
      - spiderfoot_local
     ports:
      - "127.0.0.1:64303:8080"
-    image: "dtagdevsec/spiderfoot:2006"
+    image: "ghcr.io/telekom-security/spiderfoot:2006"
     volumes:
      - /data/spiderfoot/spiderfoot.db:/home/spiderfoot/spiderfoot.db
diff --git a/docker/suricata/Dockerfile b/docker/suricata/Dockerfile
index 3d9196cb..965600e0 100644
--- a/docker/suricata/Dockerfile
+++ b/docker/suricata/Dockerfile
@@ -1,30 +1,31 @@
-FROM alpine:latest
+FROM alpine:edge
 #
 # Include dist
 ADD dist/ /root/dist/
 #
 # Install packages
-RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
-    apk -U --no-cache add \
+RUN apk -U --no-cache add \
     ca-certificates \
     curl \
     file \
+    hiredis \
     libcap \
-    wget && \
-    apk -U add --repository http://dl-cdn.alpinelinux.org/alpine/edge/community \
+    wget \
     suricata && \
 #
 # Setup user, groups and configs
     addgroup -g 2000 suri && \
     adduser -S -H -u 2000 -D -g 2000 suri && \
     chmod 644 /etc/suricata/*.config && \
-    cp /root/dist/suricata.yaml /etc/suricata/suricata.yaml && \
+    cp /root/dist/*.yaml /etc/suricata/ && \
+    cp /root/dist/*.conf /etc/suricata/ && \
     cp /root/dist/*.bpf /etc/suricata/ && \
 #
-# Download the latest EmergingThreats ruleset, replace rulebase and enable all rules
+# Download the latest EmergingThreats OPEN ruleset
     cp /root/dist/update.sh /usr/bin/ && \
     chmod 755 /usr/bin/update.sh && \
-    update.sh OPEN && \
+    suricata-update update-sources && \
+    suricata-update --no-reload && \
 #
 # Clean up
     rm -rf /root/* && \
@@ -33,4 +34,4 @@ RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
 #
 # Start suricata
 STOPSIGNAL SIGINT
-CMD SURICATA_CAPTURE_FILTER=$(update.sh $OINKCODE) && exec suricata -v -F $SURICATA_CAPTURE_FILTER -i $(/sbin/ip address | grep '^2: ' | awk '{ print $2 }' | tr -d [:punct:])
+CMD SURICATA_CAPTURE_FILTER=$(update.sh $OINKCODE) && exec suricata -v -F $SURICATA_CAPTURE_FILTER -i $(/sbin/ip address show | /usr/bin/awk '/inet.*brd/{ print $NF; exit }')
diff --git a/docker/suricata/Dockerfile.from.source b/docker/suricata/Dockerfile.from.source
index 59c2687a..215e49ed 100644
--- a/docker/suricata/Dockerfile.from.source
+++ b/docker/suricata/Dockerfile.from.source
@@ -1,7 +1,7 @@
 FROM alpine
 #
 # VARS
-ENV VER=5.0.2
+ENV VER=6.0.0
 #
 # Include dist
 ADD dist/ /root/dist/
@@ -59,8 +59,7 @@ RUN apk -U add \
     libhtp \
     libhtp-dev && \
 #
-# Upgrade pip, install suricata-update to meet deps, however we will not be using it
-# to reduce image (no python needed) and use the update script.
+# Upgrade pip, install suricata-update to meet deps
     pip3 install --no-cache-dir --upgrade pip && \
     pip3 install --no-cache-dir suricata-update && \
 #
@@ -93,15 +92,17 @@ RUN apk -U add \
     addgroup -g 2000 suri && \
     adduser -S -H -u 2000 -D -g 2000 suri && \
     chmod 644 /etc/suricata/*.config && \
-    cp /root/dist/suricata.yaml /etc/suricata/suricata.yaml && \
+    cp /root/dist/*.yaml /etc/suricata/ && \
+    cp /root/dist/*.conf /etc/suricata/ && \
     cp /root/dist/*.bpf /etc/suricata/ && \
     mkdir -p /etc/suricata/rules && \
     cp /opt/builder/rules/* /etc/suricata/rules/ && \
 #
-# Download the latest EmergingThreats ruleset, replace rulebase and enable all rules
+# Download the latest EmergingThreats OPEN ruleset
    cp /root/dist/update.sh /usr/bin/ && \
     chmod 755 /usr/bin/update.sh && \
-    update.sh OPEN && \
+    suricata-update update-sources && \
+    suricata-update --no-reload && \
 #
 # Clean up
     apk del --purge \
@@ -126,8 +127,6 @@ RUN apk -U add \
     nss-dev \
     nspr-dev \
     pcre-dev \
-    python3 \
-    rust \
     yaml-dev && \
     rm -rf /opt/builder && \
     rm -rf /root/* && \
@@ -136,4 +135,4 @@ RUN apk -U add \
 #
 # Start suricata
 STOPSIGNAL SIGINT
-CMD SURICATA_CAPTURE_FILTER=$(update.sh $OINKCODE) && exec suricata -v -F
$SURICATA_CAPTURE_FILTER -i $(/sbin/ip address | grep '^2: ' | awk '{ print $2 }' | tr -d [:punct:]) +CMD SURICATA_CAPTURE_FILTER=$(update.sh $OINKCODE) && exec suricata -v -F $SURICATA_CAPTURE_FILTER -i $(/sbin/ip address show | /usr/bin/awk '/inet.*brd/{ print $NF; exit }') diff --git a/docker/suricata/dist/capture-filter.bpf b/docker/suricata/dist/capture-filter.bpf index d43d7d6e..582729ca 100644 --- a/docker/suricata/dist/capture-filter.bpf +++ b/docker/suricata/dist/capture-filter.bpf @@ -1,3 +1,5 @@ not (host sicherheitstacho.eu or community.sicherheitstacho.eu or listbot.sicherheitstacho.eu) and +not (host rules.emergingthreats.net or rules.emergingthreatspro.com) and not (host deb.debian.org) and +not (host ghcr.io) and not (host index.docker.io or docker.io) diff --git a/docker/suricata/dist/disable.conf b/docker/suricata/dist/disable.conf new file mode 100644 index 00000000..e69de29b diff --git a/docker/suricata/dist/enable.conf b/docker/suricata/dist/enable.conf new file mode 100644 index 00000000..2a0a3dc0 --- /dev/null +++ b/docker/suricata/dist/enable.conf @@ -0,0 +1,3 @@ +# Since honeypot traffic is usually low, we can afford to enable +# all the rules that are normally disabled for performance reasons. +re:. 
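The updated Suricata `CMD` above stops hard-coding interface index 2 and instead picks the first interface that carries a broadcast-capable inet address. A quick sketch of that awk logic against canned `ip address show` output (interface names and addresses are illustrative):

```shell
# Simulated `ip address show` output; only the line containing both "inet"
# and "brd" (a broadcast-capable IPv4 address) should select the interface.
sample='1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536
    inet 127.0.0.1/8 scope host lo
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
    inet 192.168.1.10/24 brd 192.168.1.255 scope global eth0'

# Same expression as the CMD: print the last field of the first match.
iface=$(printf '%s\n' "$sample" | awk '/inet.*brd/{ print $NF; exit }')
echo "$iface"   # eth0
```

The loopback `inet` line has no `brd` token, so the old failure mode (grabbing `lo` or a renamed first interface) is avoided.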
diff --git a/docker/suricata/dist/modify.conf b/docker/suricata/dist/modify.conf new file mode 100644 index 00000000..e69de29b diff --git a/docker/suricata/dist/suricata.yaml b/docker/suricata/dist/suricata.yaml index 90acad75..0bf81036 100644 --- a/docker/suricata/dist/suricata.yaml +++ b/docker/suricata/dist/suricata.yaml @@ -6,7 +6,7 @@ # https://suricata.readthedocs.io/en/latest/configuration/suricata-yaml.html ## -## Step 1: inform Suricata about your network +## Step 1: Inform Suricata about your network ## vars: @@ -44,24 +44,26 @@ vars: MODBUS_PORTS: 502 FILE_DATA_PORTS: "[$HTTP_PORTS,110,143]" FTP_PORTS: 21 + GENEVE_PORTS: 6081 VXLAN_PORTS: 4789 + TEREDO_PORTS: 3544 ## -## Step 2: select outputs to enable +## Step 2: Select outputs to enable ## # The default logging directory. Any log or output file will be -# placed here if its not specified with a full path name. This can be +# placed here if it's not specified with a full path name. This can be # overridden with the -l command line parameter. default-log-dir: /var/log/suricata/ -# global stats configuration +# Global stats configuration stats: enabled: no - # The interval field (in seconds) controls at what interval - # the loggers are invoked. + # The interval field (in seconds) controls the interval at + # which stats are updated in the log. interval: 8 - # Add decode events as stats. + # Add decode events to stats. #decoder-events: true # Decoder event prefix in stats. Has been 'decoder' before, but that leads # to missing events in the eve.stats records. See issue #2225. 
@@ -83,12 +85,16 @@ outputs: enabled: yes filetype: regular #regular|syslog|unix_dgram|unix_stream|redis filename: eve.json + # Enable for multi-threaded eve.json output; output files are amended + # with an identifier, e.g., eve.9.json + #threaded: false #prefix: "@cee: " # prefix to prepend to each log entry # the following are valid when type: syslog above #identity: "suricata" #facility: local5 #level: Info ## possible levels: Emergency, Alert, Critical, ## Error, Warning, Notice, Info, Debug + #ethernet: no # log ethernet header in events when available #redis: # server: 127.0.0.1 # port: 6379 @@ -100,10 +106,10 @@ outputs: # Redis pipelining set up. This will enable to only do a query every # 'batch-size' events. This should lower the latency induced by network # connection at the cost of some memory. There is no flushing implemented - # so this setting as to be reserved to high traffic suricata. + # so this setting should be reserved to high traffic Suricata deployments. # pipelining: # enabled: yes ## set enable to yes to enable query pipelining - # batch-size: 10 ## number of entry to keep in buffer + # batch-size: 10 ## number of entries to keep in buffer # Include top level metadata. Default yes. #metadata: no @@ -113,8 +119,8 @@ outputs: # Community Flow ID # Adds a 'community_id' field to EVE records. These are meant to give - # a records a predictable flow id that can be used to match records to - # output of other tools such as Bro. + # records a predictable flow ID that can be used to match records to + # output of other tools such as Zeek (Bro). # # Takes a 'seed' that needs to be same across sensors and tools # to make the id less predictable. @@ -131,13 +137,13 @@ outputs: # or forward proxied. xff: enabled: yes - # Two operation modes are available, "extra-data" and "overwrite". + # Two operation modes are available: "extra-data" and "overwrite". mode: extra-data - # Two proxy deployments are supported, "reverse" and "forward".
In + # Two proxy deployments are supported: "reverse" and "forward". In # a "reverse" deployment the IP address used is the last one, in a # "forward" deployment the first IP address is used. deployment: reverse - # Header name where the actual IP address will be reported, if more + # Header name where the actual IP address will be reported. If more # than one IP address is present, the last IP address will be the # one taken into consideration. header: X-Forwarded-For @@ -148,9 +154,9 @@ outputs: payload-buffer-size: 4kb # max size of payload buffer to output in eve-log payload-printable: yes # enable dumping payload in printable (lossy) format # packet: yes # enable dumping of packet (without stream segments) - http-body: yes # enable dumping of http body in Base64 - http-body-printable: yes # enable dumping of http body in printable format # metadata: no # enable inclusion of app layer metadata with alert. Default yes + http-body: yes # Requires metadata; enable dumping of HTTP body in Base64 + http-body-printable: yes # Requires metadata; enable dumping of HTTP body in printable format # Enable the logging of tagged packets for rules using the # "tag" keyword. @@ -177,9 +183,9 @@ outputs: # specific conditions that are unexpected, invalid or are # unexpected given the application monitoring state. # - # By default, anomaly logging is disabled. When anomaly + # By default, anomaly logging is enabled. When anomaly # logging is enabled, applayer anomaly reporting is - # enabled. + # also enabled. enabled: yes # # Choose one or more types of anomaly logging and whether to enable @@ -191,9 +197,12 @@ outputs: #packethdr: no - http: extended: yes # enable this for extended logging information - # custom allows additional http fields to be included in eve-log + # custom allows additional HTTP fields to be included in eve-log. 
# the example below adds three additional fields when uncommented custom: [Accept-Encoding, Accept-Language, Authorization, Forwarded, From, Referer, Via] + # set this value to one and only one from {both, request, response} + # to dump all HTTP headers for every HTTP request and/or response + # dump-all-headers: none - dns: # This configuration uses the new DNS logging format, # the old configuration is still available: @@ -201,7 +210,7 @@ outputs: # As of Suricata 5.0, version 2 of the eve dns output # format is the default. - version: 2 + #version: 2 # Enable/disable this logger. Default: enabled. #enabled: yes @@ -219,7 +228,7 @@ outputs: # Default: all #formats: [detailed, grouped] - # Types to log, based on the query type. + # DNS record types to log, based on the query type. # Default: all. #types: [a, aaaa, cname, mx, ns, ptr, txt] - tls: @@ -227,8 +236,7 @@ outputs: # output TLS transaction where the session is resumed using a # session id #session-resumption: no - # custom allows to control which tls fields that are included - # in eve-log + # custom controls which TLS fields that are included in eve-log custom: [subject, issuer, session_resumed, serial, fingerprint, sni, version, not_before, not_after, certificate, ja3, ja3s] - files: force-magic: yes # force logging magic on all logged files @@ -259,11 +267,12 @@ outputs: - smb - tftp - ikev2 + - dcerpc - krb5 - snmp + - rfb - sip - dhcp: - # DHCP logging requires Rust. enabled: no # When extended mode is on, all DHCP messages are logged # with full detail. When extended mode is off (the @@ -271,10 +280,16 @@ outputs: # to an IP address is logged. extended: no - ssh - - stats: - totals: yes # stats for all threads merged together - threads: no # per thread stats - deltas: no # include delta values + - mqtt: + passwords: yes # enable output of passwords + # HTTP2 logging. HTTP2 support is currently experimental and + # disabled by default. 
To enable, uncomment the following line + # and be sure to enable http2 in the app-layer section. + #- http2 + #- stats: + #totals: yes # stats for all threads merged together + #threads: no # per thread stats + #deltas: no # include delta values # bi-directional flows #- flow # uni-directional flows @@ -285,19 +300,13 @@ outputs: # flowints. #- metadata - # deprecated - unified2 alert format for use with Barnyard2 - - unified2-alert: - enabled: no - # for further options see: - # https://suricata.readthedocs.io/en/suricata-5.0.0/configuration/suricata-yaml.html#alert-output-for-use-with-barnyard2-unified2-alert - # a line based log of HTTP requests (no alerts) - http-log: enabled: no filename: http.log append: yes #extended: yes # enable this for extended logging information - #custom: yes # enabled the custom logging format (defined by customformat) + #custom: yes # enable the custom logging format (defined by customformat) #customformat: "%{%D-%H:%M:%S}t.%z %{X-Forwarded-For}i %H %m %h %u %s %B %a:%p -> %A:%P" #filetype: regular # 'regular', 'unix_stream' or 'unix_dgram' @@ -323,7 +332,7 @@ outputs: # "multi" and "sguil". # # In normal mode a pcap file "filename" is created in the default-log-dir, - # or are as specified by "dir". + # or as specified by "dir". # In multi mode, a file is created per thread. This will perform much # better, but will create multiple files where 'normal' would create one. # In multi mode the filename takes a few special variables: @@ -341,7 +350,7 @@ outputs: # is: 8*1000*2000 ~ 16TiB. # # In Sguil mode "dir" indicates the base directory. In this base dir the - # pcaps are created in th directory structure Sguil expects: + # pcaps are created in the directory structure Sguil expects: # # $sguil-base-dir/YYYY-MM-DD/$filename. # @@ -357,7 +366,8 @@ outputs: # is parsed as bytes. limit: 1000mb - # If set to a value will enable ring buffer mode. 
Will keep Maximum of "max-files" of size "limit" + # If set to a value, ring buffer mode is enabled. Will keep maximum of + # "max-files" of size "limit" max-files: 2000 # Compression algorithm for pcap files. Possible values: none, lz4. @@ -379,9 +389,9 @@ outputs: #ts-format: usec # sec or usec second format (default) is filename.sec usec is filename.sec.usec use-stream-depth: no #If set to "yes" packets seen after reaching stream inspection depth are ignored. "no" logs all packets - honor-pass-rules: no # If set to "yes", flows in which a pass rule matched will stopped being logged. + honor-pass-rules: no # If set to "yes", flows in which a pass rule matched will stop being logged. - # a full alerts log containing much information for signature writers + # a full alert log containing much information for signature writers # or for investigating suspected false positives. - alert-debug: enabled: no @@ -404,51 +414,44 @@ outputs: append: yes # append to file (yes) or overwrite it (no) totals: yes # stats for all threads merged together threads: no # per thread stats - #null-values: yes # print counters that have value 0 + #null-values: yes # print counters that have value 0. Default: no # a line based alerts log similar to fast.log into syslog - syslog: enabled: no - # reported identity to syslog. If ommited the program name (usually + # reported identity to syslog. If omitted the program name (usually # suricata) will be used. #identity: "suricata" facility: local5 #level: Info ## possible levels: Emergency, Alert, Critical, ## Error, Warning, Notice, Info, Debug - # deprecated a line based information for dropped packets in IPS mode - - drop: - enabled: no - # further options documented at: - # https://suricata.readthedocs.io/en/suricata-5.0.0/configuration/suricata-yaml.html#drop-log-a-line-based-information-for-dropped-packets - - # Output module for storing files on disk. Files are stored in a + # Output module for storing files on disk. 
Files are stored in # directory names consisting of the first 2 characters of the # SHA256 of the file. Each file is given its SHA256 as a filename. # - # When a duplicate file is found, the existing file is touched to - # have its timestamps updated. + # When a duplicate file is found, the timestamps on the existing file + # are updated. # - # Unlike the older filestore, metadata is not written out by default + # Unlike the older filestore, metadata is not written by default # as each file should already have a "fileinfo" record in the - # eve.log. If write-fileinfo is set to yes, the each file will have - # one more associated .json files that consists of the fileinfo + # eve-log. If write-fileinfo is set to yes, then each file will have + # one more associated .json files that consist of the fileinfo # record. A fileinfo file will be written for each occurrence of the # file seen using a filename suffix to ensure uniqueness. # # To prune the filestore directory see the "suricatactl filestore # prune" command which can delete files over a certain age. - - file-store: version: 2 enabled: no - # Set the directory for the filestore. If the path is not - # absolute will be be relative to the default-log-dir. + # Set the directory for the filestore. Relative pathnames + # are contained within the "default-log-dir". #dir: filestore - # Write out a fileinfo record for each occurrence of a - # file. Disabled by default as each occurrence is already logged + # Write out a fileinfo record for each occurrence of a file. + # Disabled by default as each occurrence is already logged # as a fileinfo record to the main eve-log. #write-fileinfo: yes @@ -456,15 +459,16 @@ outputs: #force-filestore: yes # Override the global stream-depth for sessions in which we want - # to perform file extraction. Set to 0 for unlimited. + # to perform file extraction. Set to 0 for unlimited; otherwise, + # must be greater than the global stream-depth value to be used. 
#stream-depth: 0 # Uncomment the following variable to define how many files can # remain open for filestore by Suricata. Default value is 0 which - # means files get closed after each write + # means files get closed after each write to the file. #max-open-files: 1000 - # Force logging of checksums, available hash functions are md5, + # Force logging of checksums: available hash functions are md5, # sha1 and sha256. Note that SHA256 is automatically forced by # the use of this output module as it uses the SHA256 as the # file naming scheme. @@ -483,32 +487,30 @@ outputs: # a "reverse" deployment the IP address used is the last one, in a # "forward" deployment the first IP address is used. deployment: reverse - # Header name where the actual IP address will be reported, if more + # Header name where the actual IP address will be reported. If more # than one IP address is present, the last IP address will be the # one taken into consideration. header: X-Forwarded-For - # deprecated - file-store v1 - - file-store: - enabled: no - # further options documented at: - # https://suricata.readthedocs.io/en/suricata-5.0.0/file-extraction/file-extraction.html#file-store-version-1 - # Log TCP data after stream normalization - # 2 types: file or dir. File logs into a single logfile. Dir creates - # 2 files per TCP session and stores the raw TCP data into them. - # Using 'both' will enable both file and dir modes. + # Two types: file or dir: + # - file logs into a single logfile. + # - dir creates 2 files per TCP session and stores the raw TCP + # data into them. + # Use 'both' to enable both file and dir modes. # - # Note: limited by stream.depth + # Note: limited by "stream.reassembly.depth" - tcp-data: enabled: no type: file filename: tcp-data.log - # Log HTTP body data after normalization, dechunking and unzipping. - # 2 types: file or dir. File logs into a single logfile. Dir creates - # 2 files per HTTP session and stores the normalized data into them. 
- # Using 'both' will enable both file and dir modes. + # Log HTTP body data after normalization, de-chunking and unzipping. + # Two types: file or dir. + # - file logs into a single logfile. + # - dir creates 2 files per HTTP session and stores the + # normalized data into them. + # Use 'both' to enable both file and dir modes. # # Note: limited by the body limit settings - http-body-data: @@ -529,7 +531,7 @@ outputs: # Logging configuration. This is not about logging IDS alerts/events, but # output about what Suricata is doing, like startup messages, errors, etc. logging: - # The default log level, can be overridden in an output section. + # The default log level: can be overridden in an output section. # Note that debug level logging will only be emitted if Suricata was # compiled with the --enable-debug configure option. # @@ -550,7 +552,7 @@ logging: default-output-filter: # Define your logging outputs. If none are defined, or they are all - # disabled you will get the default - console output. + # disabled you will get the default: console output. outputs: - console: enabled: yes @@ -568,9 +570,9 @@ logging: ## -## Step 4: configure common capture settings +## Step 3: Configure common capture settings ## -## See "Advanced Capture Options" below for more options, including NETMAP +## See "Advanced Capture Options" below for more options, including Netmap ## and PF_RING. ## @@ -584,39 +586,30 @@ af-packet: # Default AF_PACKET cluster type. AF_PACKET can load balance per flow or per hash. 
# This is only supported for Linux kernel > 3.1 # possible value are: - # * cluster_round_robin: round robin load balancing - # * cluster_flow: all packets of a given flow are send to the same socket - # * cluster_cpu: all packets treated in kernel by a CPU are send to the same socket + # * cluster_flow: all packets of a given flow are sent to the same socket + # * cluster_cpu: all packets treated in kernel by a CPU are sent to the same socket # * cluster_qm: all packets linked by network card to a RSS queue are sent to the same # socket. Requires at least Linux 3.14. - # * cluster_random: packets are sent randomly to sockets but with an equipartition. - # Requires at least Linux 3.14. - # * cluster_rollover: kernel rotates between sockets filling each socket before moving - # to the next. Requires at least Linux 3.10. # * cluster_ebpf: eBPF file load balancing. See doc/userguide/capture-hardware/ebpf-xdp.rst for # more info. # Recommended modes are cluster_flow on most boxes and cluster_cpu or cluster_qm on system - # with capture card using RSS (require cpu affinity tuning and system irq tuning) + # with capture card using RSS (requires cpu affinity tuning and system IRQ tuning) cluster-type: cluster_flow - # In some fragmentation case, the hash can not be computed. If "defrag" is set + # In some fragmentation cases, the hash can not be computed. If "defrag" is set # to yes, the kernel will do the needed defragmentation before sending the packets. defrag: yes - # After Linux kernel 3.10 it is possible to activate the rollover option: if a socket is - # full then kernel will send the packet on the next socket with room available. This option - # can minimize packet drop and increase the treated bandwidth on single intensive flow. - #rollover: yes # To use the ring feature of AF_PACKET, set 'use-mmap' to yes #use-mmap: yes - # Lock memory map to avoid it goes to swap. 
Be careful that over subscribing could lock - # your system + # Lock memory map to avoid it being swapped. Be careful that over + # subscribing could lock your system #mmap-locked: yes # Use tpacket_v3 capture mode, only active if use-mmap is true # Don't use it in IPS or TAP mode as it causes severe latency #tpacket-v3: yes - # Ring size will be computed with respect to max_pending_packets and number + # Ring size will be computed with respect to "max-pending-packets" and number # of threads. You can set manually the ring size in number of packets by setting - # the following value. If you are using flow cluster-type and have really network - # intensive single-flow you could want to set the ring-size independently of the number + # the following value. If you are using flow "cluster-type" and have really network + # intensive single-flow you may want to set the "ring-size" independently of the number # of threads: #ring-size: 2048 # Block size is used by tpacket_v3 only. It should set to a value high enough to contain @@ -626,25 +619,25 @@ af-packet: # tpacket_v3 block timeout: an open block is passed to userspace if it is not # filled after block-timeout milliseconds. #block-timeout: 10 - # On busy system, this could help to set it to yes to recover from a packet drop - # phase. This will result in some packets (at max a ring flush) being non treated. + # On busy systems, set it to yes to help recover from a packet drop + # phase. This will result in some packets (at max a ring flush) not being inspected. #use-emergency-flush: yes - # recv buffer size, increase value could improve performance + # recv buffer size, increased value could improve performance # buffer-size: 32768 # Set to yes to disable promiscuous mode # disable-promisc: no # Choose checksum verification mode for the interface. At the moment - # of the capture, some packets may be with an invalid checksum due to - # offloading to the network card of the checksum computation. 
+ # of the capture, some packets may have an invalid checksum due to + # the checksum computation being offloaded to the network card. # Possible values are: # - kernel: use indication sent by kernel for each packet (default) # - yes: checksum validation is forced # - no: checksum validation is disabled - # - auto: suricata uses a statistical approach to detect when + # - auto: Suricata uses a statistical approach to detect when # checksum off-loading is used. - # Warning: 'checksum-validation' must be set to yes to have any validation + # Warning: 'capture.checksum-validation' must be set to yes to have any validation #checksum-checks: kernel - # BPF filter to apply to this interface. The pcap filter syntax apply here. + # BPF filter to apply to this interface. The pcap filter syntax applies here. #bpf-filter: port 80 or udp # You can use the following variables to activate AF_PACKET tap or IPS mode. # If copy-mode is set to ips or tap, the traffic coming to the current @@ -654,35 +647,34 @@ af-packet: #copy-mode: ips #copy-iface: eth1 # For eBPF and XDP setup including bypass, filter and load balancing, please - # see doc/userguide/capture/ebpf-xdt.rst for more info. + # see doc/userguide/capture-hardware/ebpf-xdp.rst for more info. # Put default values here. These will be used for an interface that is not # in the list above. - interface: default #threads: auto #use-mmap: no - #rollover: yes #tpacket-v3: yes # Cross platform libpcap capture support pcap: - interface: eth0 - # On Linux, pcap will try to use mmaped capture and will use buffer-size - # as total of memory used by the ring. So set this to something bigger + # On Linux, pcap will try to use mmap'ed capture and will use "buffer-size" + # as total memory used by the ring. So set this to something bigger # than 1% of your bandwidth. #buffer-size: 16777216 #bpf-filter: "tcp and port 25" # Choose checksum verification mode for the interface. 
At the moment - # of the capture, some packets may be with an invalid checksum due to - # offloading to the network card of the checksum computation. + # of the capture, some packets may have an invalid checksum due to + # the checksum computation being offloaded to the network card. # Possible values are: # - yes: checksum validation is forced # - no: checksum validation is disabled # - auto: Suricata uses a statistical approach to detect when # checksum off-loading is used. (default) - # Warning: 'checksum-validation' must be set to yes to have any validation + # Warning: 'capture.checksum-validation' must be set to yes to have any validation #checksum-checks: auto - # With some accelerator cards using a modified libpcap (like myricom), you + # With some accelerator cards using a modified libpcap (like Myricom), you # may want to have the same number of capture threads as the number of capture # rings. In this case, set up the threads variable to N to start N threads # listening on the same interface. @@ -706,15 +698,15 @@ pcap-file: # Warning: 'checksum-validation' must be set to yes to have checksum tested checksum-checks: auto -# See "Advanced Capture Options" below for more options, including NETMAP +# See "Advanced Capture Options" below for more options, including Netmap # and PF_RING. ## -## Step 5: App Layer Protocol Configuration +## Step 4: App Layer Protocol configuration ## -# Configure the app-layer parsers. The protocols section details each +# Configure the app-layer parsers. The protocol's section details each # protocol. # # The option "enabled" takes 3 values - "yes", "no", "detection-only". @@ -722,6 +714,14 @@ pcap-file: # "detection-only" enables protocol detection only (parser disabled). app-layer: protocols: + rfb: + enabled: yes + detection-ports: + dp: 5900, 5901, 5902, 5903, 5904, 5905, 5906, 5907, 5908, 5909 + # MQTT, disabled by default. 
+ mqtt: + enabled: yes + max-msg-length: 1mb krb5: enabled: yes snmp: @@ -733,7 +733,8 @@ app-layer: detection-ports: dp: 443 - # Generate JA3 fingerprint from client hello + # Generate JA3 fingerprint from client hello. If not specified it + # will be disabled by default, but enabled if rules require it. ja3-fingerprints: yes # What to do when the encrypted communications start: @@ -748,7 +749,7 @@ app-layer: # # For best performance, select 'bypass'. # - #encrypt-handling: default + #encryption-handling: default dcerpc: enabled: yes @@ -759,17 +760,22 @@ app-layer: enabled: yes ssh: enabled: yes + hassh: yes + # HTTP2: Experimental HTTP 2 support. Disabled by default. + http2: + enabled: no smtp: enabled: yes + raw-extraction: no # Configure SMTP-MIME Decoder mime: # Decode MIME messages from SMTP transactions # (may be resource intensive) - # This field supercedes all others because it turns the entire + # This field supersedes all others because it turns the entire # process on or off decode-mime: yes - # Decode MIME entity bodies (ie. base64, quoted-printable, etc.) + # Decode MIME entity bodies (ie. Base64, quoted-printable, etc.) decode-base64: yes decode-quoted-printable: yes @@ -789,8 +795,6 @@ app-layer: content-inspect-window: 4096 imap: enabled: detection-only - # Note: --enable-rust is required for full SMB1/2 support. W/o rust - # only minimal SMB1 support is available. smb: enabled: yes detection-ports: @@ -799,21 +803,11 @@ app-layer: # Stream reassembly size for SMB streams. By default track it completely. #stream-depth: 0 - # Note: NFS parser depends on Rust support: pass --enable-rust - # to configure. nfs: enabled: yes tftp: enabled: yes dns: - # memcaps. Globally and per flow/state. - global-memcap: 16mb - state-memcap: 512kb - - # How many unreplied DNS requests are considered a flood. - # If the limit is reached, app-layer-event:dns.flooded; will match. 
- request-flood: 500 - tcp: enabled: yes detection-ports: @@ -824,8 +818,8 @@ app-layer: dp: 53 http: enabled: yes - # memcap: Maximum memory capacity for http - # Default is unlimited, value can be such as 64mb + # memcap: Maximum memory capacity for HTTP + # Default is unlimited; a value such as 64mb can be set # default-config: Used when no server-config matches # personality: List of personalities used by default @@ -839,7 +833,7 @@ app-layer: # server-config: List of server configurations to use if address matches # address: List of IP addresses or networks for this block - # personalitiy: List of personalities used by this block + # personality: List of personalities used by this block # # Then, all the fields from default-config can be overloaded # @@ -868,7 +862,7 @@ app-layer: http-body-inline: auto # Decompress SWF files. - # 2 types: 'deflate', 'lzma', 'both' will decompress deflate and lzma + # Two types: 'deflate', 'lzma', 'both' will decompress deflate and lzma # compress-depth: # Specifies the maximum amount of data to decompress, # set 0 for unlimited. @@ -881,20 +875,29 @@ app-layer: compress-depth: 0 decompress-depth: 0 - # Take a random value for inspection sizes around the specified value. - # This lower the risk of some evasion technics but could lead - # detection change between runs. It is set to 'yes' by default. + # Use a random value for inspection sizes around the specified value. + # This lowers the risk of some evasion techniques but could lead + # to detection change between runs. It is set to 'yes' by default. #randomize-inspection-sizes: yes - # If randomize-inspection-sizes is active, the value of various - # inspection size will be choosen in the [1 - range%, 1 + range%] + # If "randomize-inspection-sizes" is active, the value of various + # inspection sizes will be chosen from the [1 - range%, 1 + range%] # range - # Default value of randomize-inspection-range is 10. + # Default value of "randomize-inspection-range" is 10.
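The `[1 - range%, 1 + range%]` window mentioned above is simple shell arithmetic; a sketch with an assumed inspection size of 4096 bytes and the default range of 10:

```shell
# Bounds of the randomized inspection window (integer arithmetic);
# size=4096 is an assumed example value, range=10 is the documented default.
size=4096; range=10
low=$(( size * (100 - range) / 100 ))
high=$(( size * (100 + range) / 100 ))
echo "$low $high"   # 3686 4505
```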
#randomize-inspection-range: 10 # decoding double-decode-path: no double-decode-query: no + # Can enable LZMA decompression + #lzma-enabled: false + # Memory limit usage for LZMA decompression dictionary + # Data is decompressed until dictionary reaches this size + #lzma-memlimit: 1mb + # Maximum decompressed size with a compression ratio + # above 2048 (only LZMA can reach this ratio, deflate cannot) + #compression-bomb-limit: 1mb + server-config: #- apache: @@ -919,14 +922,14 @@ app-layer: # double-decode-path: no # double-decode-query: no - # Note: Modbus probe parser is minimalist due to the poor significant field + # Note: Modbus probe parser is minimalist due to the limited usage in the field. # Only Modbus message length (greater than Modbus header length) - # And Protocol ID (equal to 0) are checked in probing parser + # and protocol ID (equal to 0) are checked in probing parser # It is important to enable detection port and define Modbus port - # to avoid false positive + # to avoid false positives modbus: - # How many unreplied Modbus requests are considered a flood. - # If the limit is reached, app-layer-event:modbus.flooded; will match. + # How many unanswered Modbus requests are considered a flood. + # If the limit is reached, the app-layer-event:modbus.flooded; will match. #request-flood: 500 enabled: yes @@ -954,21 +957,25 @@ app-layer: dp: 44818 sp: 44818 - # Note: parser depends on Rust support ntp: enabled: yes dhcp: enabled: no - # SIP, disabled by default. sip: enabled: yes - # Limit for the maximum number of asn1 frames to decode (default 256) asn1-max-frames: 256 +# Datasets default settings +# datasets: +# # Default fallback memcap and hashsize values for datasets in case these +# # were not explicitly defined. +# defaults: +# memcap: 100mb +# hashsize: 2048 ############################################################################## ## @@ -980,12 +987,12 @@ asn1-max-frames: 256 ## Run Options ## -# Run suricata as user and group. 
+# Run Suricata with a specific user-id and group-id: #run-as: # user: suri # group: suri -# Some logging module will use that name in event as identifier. The default +# Some logging modules will use that name in event as identifier. The default # value is the hostname #sensor-name: suricata @@ -1016,9 +1023,9 @@ asn1-max-frames: 256 coredump: max-dump: unlimited -# If Suricata box is a router for the sniffed networks, set it to 'router'. If +# If the Suricata box is a router for the sniffed networks, set it to 'router'. If # it is a pure sniffing setup, set it to 'sniffer-only'. -# If set to auto, the variable is internally switch to 'router' in IPS mode +# If set to auto, the variable is internally switched to 'router' in IPS mode # and 'sniffer-only' in IDS mode. # This feature is currently only used by the reject* keywords. host-mode: auto @@ -1029,41 +1036,42 @@ host-mode: auto #max-pending-packets: 1024 # Runmode the engine should use. Please check --list-runmodes to get the available -# runmodes for each packet acquisition method. Defaults to "autofp" (auto flow pinned -# load balancing). +# runmodes for each packet acquisition method. Default depends on selected capture +# method. 'workers' generally gives best performance. #runmode: autofp # Specifies the kind of flow load balancer used by the flow pinned autofp mode. # # Supported schedulers are: # -# round-robin - Flows assigned to threads in a round robin fashion. -# active-packets - Flows assigned to threads that have the lowest number of -# unprocessed packets (default). -# hash - Flow allocated using the address hash. More of a random -# technique. Was the default in Suricata 1.2.1 and older. +# hash - Flow assigned to threads using the 5-7 tuple hash. +# ippair - Flow assigned to threads using addresses only. # -#autofp-scheduler: active-packets +#autofp-scheduler: hash -# Preallocated size for packet. Default is 1514 which is the classical -# size for pcap on ethernet. 
You should adjust this value to the highest +# Preallocated size for each packet. Default is 1514 which is the classical +# size for pcap on Ethernet. You should adjust this value to the highest # packet size (MTU + hardware header) on your system. #default-packet-size: 1514 -# Unix command socket can be used to pass commands to Suricata. +# Unix command socket that can be used to pass commands to Suricata. # An external tool can then connect to get information from Suricata # or trigger some modifications of the engine. Set enabled to yes # to activate the feature. In auto mode, the feature will only be # activated in live capture mode. You can use the filename variable to set # the file name of the socket. unix-command: - enabled: no + enabled: yes #filename: custom.socket # Magic file. The extension .mgc is added to the value here. #magic-file: /usr/share/file/magic magic-file: /usr/share/misc/magic.mgc +# GeoIP2 database file. Specify path and filename of GeoIP2 database +# if using rules with "geoip" rule option. +#geoip-database: /usr/local/share/GeoLite2/GeoLite2-Country.mmdb + legacy: uricontent: enabled @@ -1152,19 +1160,19 @@ defrag: # By default, the reserved memory (memcap) for flows is 32MB. This is the limit # for flow allocation inside the engine. You can change this value to allow # more memory usage for flows. -# The hash-size determine the size of the hash used to identify flows inside +# The hash-size determines the size of the hash used to identify flows inside # the engine, and by default the value is 65536. -# At the startup, the engine can preallocate a number of flows, to get a better +# At startup, the engine can preallocate a number of flows, to get better # performance. The number of flows preallocated is 10000 by default. -# emergency-recovery is the percentage of flows that the engine need to -# prune before unsetting the emergency state. 
The emergency state is activated -# when the memcap limit is reached, allowing to create new flows, but +# emergency-recovery is the percentage of flows that the engine needs to +# prune before clearing the emergency state. The emergency state is activated +# when the memcap limit is reached, allowing new flows to be created, but # pruning them with the emergency timeouts (they are defined below). # If the memcap is reached, the engine will try to prune flows # with the default timeouts. If it doesn't find a flow to prune, it will set # the emergency bit and it will try again with more aggressive timeouts. -# If that doesn't work, then it will try to kill the last time seen flows -# not in use. +# If that doesn't work, then it will try to kill the oldest flows based +# on the last time they were seen. # The memcap can be specified in kb, mb, gb. Just a number indicates it's # in bytes. @@ -1176,20 +1184,20 @@ flow: #managers: 1 # default to one flow manager #recyclers: 1 # default to one flow recycler thread -# This option controls the use of vlan ids in the flow (and defrag) +# This option controls the use of VLAN IDs in the flow (and defrag) # hashing. Normally this should be enabled, but in some (broken) -# setups where both sides of a flow are not tagged with the same vlan -# tag, we can ignore the vlan id's in the flow hashing. +# setups where both sides of a flow are not tagged with the same VLAN +# tag, we can ignore the VLAN IDs in the flow hashing. vlan: use-for-tracking: true # Specific timeouts for flows. Here you can specify the timeouts that the # active flows will wait to transit from the current state to another, on each -# protocol. 
The value of "new" determines the seconds to wait after a handshake or +# stream startup before the engine frees the data of that flow if it doesn't # change the state to established (usually if we don't receive more packets # of that flow). The value of "established" is the amount of -# seconds that the engine will wait to free the flow if it spend that amount +# seconds that the engine will wait to free the flow if that time elapses # without receiving new packets or closing the connection. "closed" is the # amount of time to wait after a flow is closed (usually zero). "bypassed" # timeout controls locally bypassed flows. For these flows we don't do any other @@ -1244,7 +1252,7 @@ flow-timeouts: # # number indicates it's in bytes. # checksum-validation: yes # To validate the checksum of received # # packet. If csum validation is specified as -# # "yes", then packet with invalid csum will not +# # "yes", then packets with invalid csum values will not # # be processed by the engine stream/app layer. # # Warning: locally generated traffic can be # # generated without checksum due to hardware offload @@ -1257,7 +1265,9 @@ flow-timeouts: # inline: no # stream inline mode # drop-invalid: yes # in inline mode, drop packets that are invalid with regards to streaming engine # max-synack-queued: 5 # Max different SYN/ACKs to queue -# bypass: no # Bypass packets when stream.depth is reached +# bypass: no # Bypass packets when stream.reassembly.depth is reached. +# # Warning: first side to reach this triggers +# # the bypass. # # reassembly: # memcap: 64mb # Can be specified in kb, mb, gb. Just a number @@ -1271,8 +1281,8 @@ flow-timeouts: # # this size. Can be specified in kb, mb, # # gb. Just a number indicates it's in bytes. # randomize-chunk-size: yes # Take a random value for chunk size around the specified value. -# # This lower the risk of some evasion technics but could lead -# # detection change between runs. It is set to 'yes' by default. 
+# # This lowers the risk of some evasion techniques but could lead +# # to detection change between runs. It is set to 'yes' by default. # randomize-chunk-range: 10 # If randomize-chunk-size is active, the value of chunk-size is # # a random value between (1 - randomize-chunk-range/100)*toserver-chunk-size # # and (1 + randomize-chunk-range/100)*toserver-chunk-size and the same @@ -1295,7 +1305,7 @@ flow-timeouts: # stream: memcap: 64mb - checksum-validation: yes # reject wrong csums + checksum-validation: yes # reject incorrect csums inline: auto # auto will use inline mode in IPS mode, yes or no set it statically reassembly: memcap: 256mb @@ -1310,7 +1320,7 @@ stream: # Host table: # -# Host table is used by tagging and per host thresholding subsystems. +# Host table is used by the tagging and per host thresholding subsystems. # host: hash-size: 4096 @@ -1330,20 +1340,34 @@ host: decoder: # Teredo decoder is known to not be completely accurate - # it will sometimes detect non-teredo as teredo. + # as it will sometimes detect non-teredo as teredo. teredo: enabled: true + # ports to look for Teredo. Max 4 ports. If no ports are given, or + # the value is set to 'any', Teredo detection runs on _all_ UDP packets. + ports: $TEREDO_PORTS # syntax: '[3544, 1234]' or '3533' or 'any'. + # VXLAN decoder is assigned to up to 4 UDP ports. By default only the + # IANA assigned port 4789 is enabled. + vxlan: + enabled: true + ports: $VXLAN_PORTS # syntax: '[8472, 4789]' or '4789'. + + # Geneve decoder is assigned to up to 4 UDP ports. By default only the + # IANA assigned port 6081 is enabled. + geneve: + enabled: true + ports: $GENEVE_PORTS # syntax: '[6081, 1234]' or '6081'. ## ## Performance tuning and profiling ## # The detection engine builds internal groups of signatures. The engine -# allow us to specify the profile to use for them, to manage memory on an -# efficient way keeping a good performance. 
For the profile keyword you -# can use the words "low", "medium", "high" or "custom". If you use custom -# make sure to define the values at "- custom-values" as your convenience. +# allows us to specify the profile to use for them, to manage memory in an +# efficient way keeping good performance. For the profile keyword you +# can use the words "low", "medium", "high" or "custom". If you use custom, +# make sure to define the values in the "custom-values" section. # Usually you would prefer medium/high/low. # # "sgh mpm-context", indicates how the staging should allot mpm contexts for @@ -1357,7 +1381,7 @@ decoder: # in the content inspection code. For certain payload-sig combinations, we # might end up taking too much time in the content inspection code. # If the argument specified is 0, the engine uses an internally defined -# default limit. On not specifying a value, we use no limits on the recursion. +# default limit. When a value is not specified, there are no limits on the recursion. detect: profile: medium custom-values: @@ -1376,7 +1400,7 @@ detect: default: mpm # the grouping values above control how many groups are created per - # direction. Port whitelisting forces that port to get it's own group. + # direction. Port whitelisting forces that port to get its own group. # Very common ports will benefit, as well as ports with many expensive # rules. grouping: @@ -1410,8 +1434,8 @@ detect: # signature groups, specified by the conf - "detect.sgh-mpm-context". # Selecting "ac" as the mpm would require "detect.sgh-mpm-context" # to be set to "single", because of ac's memory requirements, unless the -# ruleset is small enough to fit in one's memory, in which case one can -# use "full" with "ac". Rest of the mpms can be run in "full" mode. +# ruleset is small enough to fit in memory, in which case one can +# use "full" with "ac". The rest of the mpms can be run in "full" mode. 
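For orientation only: if "custom" were selected instead of "medium", a matching custom-values block must be defined as the comment above notes. A minimal sketch of the stanza shape, printed from shell so it stands apart from the diff; the group counts are illustrative placeholders, not tuning advice:

```shell
# Emit a hypothetical 'custom' detect-profile stanza.
# The group counts below are illustrative placeholders, not recommendations.
cat <<'EOF'
detect:
  profile: custom
  custom-values:
    toclient-groups: 3
    toserver-groups: 25
EOF
```

The point is simply that "profile: custom" and "custom-values" travel together; omitting the latter defeats the purpose of selecting the former.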
mpm-algo: auto @@ -1428,7 +1452,7 @@ spm-algo: auto threading: set-cpu-affinity: no # Tune cpu affinity of threads. Each family of threads can be bound - # on specific CPUs. + # to specific CPUs. # # These 2 apply to the all runmodes: # management-cpu-set is used for flow timeout handling, counters @@ -1446,7 +1470,7 @@ threading: - worker-cpu-set: cpu: [ "all" ] mode: "exclusive" - # Use explicitely 3 threads and don't compute number by using + # Use explicitly 3 threads and don't compute number by using # detect-thread-ratio variable: # threads: 3 prio: @@ -1469,7 +1493,7 @@ threading: # detect-thread-ratio: 1.0 -# Luajit has a strange memory requirement, it's 'states' need to be in the +# Luajit has a strange memory requirement, its 'states' need to be in the # first 2G of the process' memory. # # 'luajit.states' is used to control how many states are preallocated. @@ -1478,11 +1502,11 @@ threading: luajit: states: 128 -# Profiling settings. Only effective if Suricata has been built with the +# Profiling settings. Only effective if Suricata has been built with # the --enable-profiling configure flag. # profiling: - # Run profiling for every xth packet. The default is 1, which means we + # Run profiling for every X-th packet. The default is 1, which means we # profile every packet. If set to 1000, one packet is profiled for every # 1000 received. #sample-rate: 1000 @@ -1558,15 +1582,15 @@ profiling: # When running in NFQ inline mode, it is possible to use a simulated # non-terminal NFQUEUE verdict. -# This permit to do send all needed packet to Suricata via this a rule: +# This permits sending all needed packet to Suricata via this rule: # iptables -I FORWARD -m mark ! --mark $MARK/$MASK -j NFQUEUE # And below, you can have your standard filtering ruleset. To activate # this mode, you need to set mode to 'repeat' -# If you want packet to be sent to another queue after an ACCEPT decision -# set mode to 'route' and set next-queue value. 
-# On linux >= 3.1, you can set batchcount to a value > 1 to improve performance +# If you want a packet to be sent to another queue after an ACCEPT decision +# set the mode to 'route' and set next-queue value. +# On Linux >= 3.1, you can set batchcount to a value > 1 to improve performance # by processing several packets before sending a verdict (worker runmode only). -# On linux >= 3.6, you can set the fail-open option to yes to have the kernel +# On Linux >= 3.6, you can set the fail-open option to yes to have the kernel # accept the packet if Suricata is not able to keep pace. # bypass mark and mask can be used to implement NFQ bypass. If bypass mark is # set then the NFQ bypass is activated. Suricata will set the bypass mark/mask @@ -1592,9 +1616,9 @@ nflog: buffer-size: 18432 # put default value here - group: default - # set number of packet to queue inside kernel + # set number of packets to queue inside kernel qthreshold: 1 - # set the delay before flushing packet in the queue inside kernel + # set the delay before flushing packets in the kernel's queue qtimeout: 100 # netlink max buffer size max-size: 20000 @@ -1603,7 +1627,7 @@ nflog: ## Advanced Capture Options ## -# general settings affecting packet capture +# General settings affecting packet capture capture: # disable NIC offloading. It's restored when Suricata exits. # Enabled by default. @@ -1615,19 +1639,21 @@ capture: # Netmap support # -# Netmap operates with NIC directly in driver, so you need FreeBSD which have -# built-in netmap support or compile and install netmap module and appropriate -# NIC driver on your Linux system. +# Netmap operates with NIC directly in driver, so you need FreeBSD 11+ which has +# built-in Netmap support or compile and install the Netmap module and appropriate +# NIC driver for your Linux system. # To reach maximum throughput disable all receive-, segmentation-, -# checksum- offloadings on NIC. 
-# Disabling Tx checksum offloading is *required* for connecting OS endpoint +# checksum- offloading on your NIC (using ethtool or similar). +# Disabling TX checksum offloading is *required* for connecting OS endpoint # with NIC endpoint. # You can find more information at https://github.com/luigirizzo/netmap # netmap: # To specify OS endpoint add plus sign at the end (e.g. "eth0+") - interface: eth2 - # Number of receive threads. "auto" uses number of RSS queues on interface. + # Number of capture threads. "auto" uses number of RSS queues on interface. + # Warning: unless the RSS hashing is symmetrical, this will lead to + # accuracy issues. #threads: auto # You can use the following variables to activate netmap tap or IPS mode. # If copy-mode is set to ips or tap, the traffic coming to the current @@ -1645,8 +1671,8 @@ netmap: # Set to yes to disable promiscuous mode # disable-promisc: no # Choose checksum verification mode for the interface. At the moment - # of the capture, some packets may be with an invalid checksum due to - # offloading to the network card of the checksum computation. + # of the capture, some packets may have an invalid checksum due to + # the checksum computation being offloaded to the network card. # Possible values are: # - yes: checksum validation is forced # - no: checksum validation is disabled @@ -1663,7 +1689,7 @@ netmap: # Put default values here - interface: default -# PF_RING configuration. for use with native PF_RING support +# PF_RING configuration: for use with native PF_RING support # for more info see http://www.ntop.org/products/pf_ring/ pfring: - interface: eth0 @@ -1684,13 +1710,13 @@ pfring: #bpf-filter: tcp # If bypass is set then the PF_RING hw bypass is activated, when supported - # by the interface in use. Suricata will instruct the interface to bypass + # by the network interface. Suricata will instruct the interface to bypass # all future packets for a flow that need to be bypassed. 
#bypass: yes # Choose checksum verification mode for the interface. At the moment - # of the capture, some packets may be with an invalid checksum due to - # offloading to the network card of the checksum computation. + # of the capture, some packets may have an invalid checksum due to + # the checksum computation being offloaded to the network card. # Possible values are: # - rxonly: only compute checksum for packets received by network card. # - yes: checksum validation is forced @@ -1716,8 +1742,8 @@ pfring: # # ipfw add 100 divert 8000 ip from any to any # -# The 8000 above should be the same number you passed on the command -# line, i.e. -d 8000 +# N.B. This example uses "8000" -- this number must match the value +# you passed on the command line, i.e., -d 8000 # ipfw: @@ -1740,34 +1766,73 @@ napatech: # (-1 = OFF, 1 - 100 = percentage of the host buffer that can be held back) # This may be enabled when sharing streams with another application. # Otherwise, it should be turned off. - hba: -1 + #hba: -1 - # use_all_streams set to "yes" will query the Napatech service for all configured - # streams and listen on all of them. When set to "no" the streams config array - # will be used. - use-all-streams: yes + # When use_all_streams is set to "yes" the initialization code will query + # the Napatech service for all configured streams and listen on all of them. + # When set to "no" the streams config array will be used. + # + # This option necessitates running the appropriate NTPL commands to create + # the desired streams prior to running Suricata. + #use-all-streams: no - # The streams to listen on. This can be either: - # a list of individual streams (e.g. streams: [0,1,2,3]) + # The streams to listen on when auto-config is disabled or when threading + # cpu-affinity is disabled. This can be either: + # an individual stream (e.g. streams: [0]) # or # a range of streams (e.g. 
streams: ["0-3"]) + # streams: ["0-3"] + # Stream stats can be enabled to provide fine grain packet and byte counters + # for each thread/stream that is configured. + # + enable-stream-stats: no + # When auto-config is enabled the streams will be created and assigned # automatically to the NUMA node where the thread resides. If cpu-affinity # is enabled in the threading section. Then the streams will be created - # according to the number of worker threads specified in the worker cpu set. + # according to the number of worker threads specified in the worker-cpu-set. # Otherwise, the streams array is used to define the streams. # - # This option cannot be used simultaneous with "use-all-streams". + # This option is intended primarily to support legacy configurations. + # + # This option cannot be used simultaneously with either "use-all-streams" + # or "hardware-bypass". # auto-config: yes - # Ports indicates which napatech ports are to be used in auto-config mode. - # these are the port ID's of the ports that will be merged prior to the + # Enable hardware level flow bypass. + # + hardware-bypass: yes + + # Enable inline operation. When enabled traffic arriving on a given port is + # automatically forwarded out its peer port after analysis by Suricata. + # + inline: no + + # Ports indicates which Napatech ports are to be used in auto-config mode. + # these are the port IDs of the ports that will be merged prior to the # traffic being distributed to the streams. # - # This can be specified in any of the following ways: + # When hardware-bypass is enabled the ports must be configured as a segment. + # specify the port(s) on which upstream and downstream traffic will arrive. + # This information is necessary for the hardware to properly process flows. + # + # When using a tap configuration one of the ports will receive inbound traffic + # for the network and the other will receive outbound traffic. 
The two ports on a + # given segment must reside on the same network adapter. + # + # When using a SPAN-port configuration the upstream and downstream traffic + # arrives on a single port. This is configured by setting the two sides of the + # segment to reference the same port. (e.g. 0-0 to configure a SPAN port on + # port 0). + # + # port segments are specified in the form: + # ports: [0-1,2-3,4-5,6-6,7-7] + # + # For legacy systems when hardware-bypass is disabled this can be specified in any + # of the following ways: # # a list of individual ports (e.g. ports: [0,1,2,3]) # @@ -1776,9 +1841,9 @@ napatech: # "all" to indicate that all ports are to be merged together # (e.g. ports: [all]) # - # This has no effect if auto-config is disabled. + # This parameter has no effect if auto-config is disabled. # - ports: [all] + ports: [0-1,2-3] # When auto-config is enabled the hashmode specifies the algorithm for # determining to which stream a given packet is to be delivered. @@ -1789,100 +1854,23 @@ napatech: # # See Napatech NTPL documentation other hashmodes and details on their use. # - # This has no effect if auto-config is disabled. + # This parameter has no effect if auto-config is disabled. # hashmode: hash5tuplesorted ## ## Configure Suricata to load Suricata-Update managed rules. ## -## If this section is completely commented out move down to the "Advanced rule -## file configuration". -## - -#default-rule-path: /var/lib/suricata/rules -#rule-files: -# - suricata.rules - -## -## Advanced rule file configuration. -## -## If this section is completely commented out then your configuration -## is setup for suricata-update as it was most likely bundled and -## installed with Suricata. 
-## - -default-rule-path: /etc/suricata/rules +default-rule-path: /var/lib/suricata/rules rule-files: - - botcc.rules - - botcc.portgrouped.rules - - ciarmy.rules - - compromised.rules - - drop.rules - - dshield.rules - - emerging-activex.rules - - emerging-adware_pup.rules - - emerging-attack_response.rules - - emerging-chat.rules - - emerging-coinminer.rules - - emerging-current_events.rules - - emerging-dns.rules - - emerging-dos.rules - - emerging-exploit.rules - - emerging-exploit_kit.rules - - emerging-ftp.rules - - emerging-games.rules - - emerging-hunting.rules - - emerging-icmp_info.rules - - emerging-icmp.rules - - emerging-imap.rules - - emerging-inappropriate.rules - - emerging-info.rules - - emerging-ja3.rules - - emerging-malware.rules - - emerging-misc.rules - - emerging-mobile_malware.rules - - emerging-netbios.rules - - emerging-p2p.rules - - emerging-phishing.rules - - emerging-policy.rules - - emerging-pop3.rules - - emerging-rpc.rules - - emerging-scada.rules - - emerging-scan.rules - - emerging-shellcode.rules - - emerging-smtp.rules - - emerging-snmp.rules - - emerging-sql.rules - - emerging-telnet.rules - - emerging-tftp.rules -# - emerging-trojan.rules - - emerging-user_agents.rules - - emerging-voip.rules - - emerging-web_client.rules - - emerging-web_server.rules - - emerging-web_specific_apps.rules - - emerging-worm.rules - - tor.rules - - decoder-events.rules # available in suricata sources under rules dir - - stream-events.rules # available in suricata sources under rules dir - - http-events.rules # available in suricata sources under rules dir - - smtp-events.rules # available in suricata sources under rules dir - - dns-events.rules # available in suricata sources under rules dir - - tls-events.rules # available in suricata sources under rules dir - - modbus-events.rules # available in suricata sources under rules dir - - app-layer-events.rules # available in suricata sources under rules dir - - dnp3-events.rules # available in 
suricata sources under rules dir - - ntp-events.rules # available in suricata sources under rules dir - - ipsec-events.rules # available in suricata sources under rules dir - - kerberos-events.rules # available in suricata sources under rules dir + - suricata.rules ## ## Auxiliary configuration files. ## -classification-file: /etc/suricata/rules/classification.config +classification-file: /var/lib/suricata/rules/classification.config reference-config-file: /etc/suricata/reference.config # threshold-file: /etc/suricata/threshold.config @@ -1890,7 +1878,10 @@ reference-config-file: /etc/suricata/reference.config ## Include other configs ## -# Includes. Files included here will be handled as if they were -# inlined in this configuration file. +# Includes: Files included here will be handled as if they were in-lined +# in this configuration file. Files with relative pathnames will be +# searched for in the same directory as this configuration file. You may +# use absolute pathnames too. +# You can specify more than 2 configuration files, if needed. #include: include1.yaml #include: include2.yaml diff --git a/docker/suricata/dist/update.sh b/docker/suricata/dist/update.sh index fcb5d21a..c9ca30ad 100755 --- a/docker/suricata/dist/update.sh +++ b/docker/suricata/dist/update.sh @@ -9,24 +9,6 @@ trap fuCLEANUP EXIT ### Vars myOINKCODE="$1" -function fuDLRULES { -### Check if args are present then download rules, if not throw error -if [ "$myOINKCODE" != "" ] && [ "$myOINKCODE" == "OPEN" ]; - then - echo "Downloading ET open ruleset." - wget -q --tries=2 --timeout=2 https://rules.emergingthreats.net/open/suricata-5.0/emerging.rules.tar.gz -O /tmp/rules.tar.gz - else - if [ "$myOINKCODE" != "" ]; - then - echo "Downloading ET pro ruleset with Oinkcode $myOINKCODE." 
- wget -q --tries=2 --timeout=2 https://rules.emergingthreatspro.com/$myOINKCODE/suricata-5.0/etpro.rules.tar.gz -O /tmp/rules.tar.gz - else - echo "Usage: update.sh <[OPEN, OINKCODE]>" - exit - fi -fi -} - # Check internet availability function fuCHECKINET () { mySITES=$1 @@ -46,9 +28,14 @@ for i in $mySITES; myCHECK=$(fuCHECKINET "rules.emergingthreatspro.com rules.emergingthreats.net") if [ "$myCHECK" == "0" ]; then - fuDLRULES 2>&1 > /dev/null - tar xvfz /tmp/rules.tar.gz -C /etc/suricata/ 2>&1 > /dev/null - sed -i s/^#alert/alert/ /etc/suricata/rules/*.rules 2>&1 > /dev/null + if [ "$myOINKCODE" != "" ] && [ "$myOINKCODE" != "OPEN" ]; + then + suricata-update -q enable-source et/pro secret-code=$myOINKCODE > /dev/null + else + # suricata-update uses et/open ruleset by default if not configured + rm -f /var/lib/suricata/update/sources/et-pro.yaml > /dev/null 2>&1 + fi + suricata-update -q --no-test --no-reload > /dev/null echo "/etc/suricata/capture-filter.bpf" else echo "/etc/suricata/null.bpf" diff --git a/docker/suricata/dist/update.yaml b/docker/suricata/dist/update.yaml new file mode 100644 index 00000000..8780931c --- /dev/null +++ b/docker/suricata/dist/update.yaml @@ -0,0 +1,12 @@ +disable-conf: /etc/suricata/disable.conf +enable-conf: /etc/suricata/enable.conf +#drop-conf: /etc/suricata/drop.conf +modify-conf: /etc/suricata/modify.conf + +ignore: + - "*deleted.rules" + - "dhcp-events.rules" # DHCP is disabled in suricata.yaml + - "files.rules" # file-store is disabled in suricata.yaml + +reload-command: suricatasc -c ruleset-reload-rules + diff --git a/docker/suricata/docker-compose.yml b/docker/suricata/docker-compose.yml index 4568fba9..9b7434c4 100644 --- a/docker/suricata/docker-compose.yml +++ b/docker/suricata/docker-compose.yml @@ -15,6 +15,6 @@ services: - NET_ADMIN - SYS_NICE - NET_RAW - image: "dtagdevsec/suricata:2006" + image: "ghcr.io/telekom-security/suricata:2006" volumes: - /data/suricata/log:/var/log/suricata diff --git 
a/docker/tanner/docker-compose.yml b/docker/tanner/docker-compose.yml index b70977a3..ff2e4bec 100644 --- a/docker/tanner/docker-compose.yml +++ b/docker/tanner/docker-compose.yml @@ -14,7 +14,7 @@ services: tty: true networks: - tanner_local - image: "dtagdevsec/redis:2006" + image: "ghcr.io/telekom-security/redis:2006" read_only: true # PHP Sandbox service @@ -28,7 +28,7 @@ services: tty: true networks: - tanner_local - image: "dtagdevsec/phpox:2006" + image: "ghcr.io/telekom-security/phpox:2006" read_only: true # Tanner API Service @@ -42,7 +42,7 @@ services: tty: true networks: - tanner_local - image: "dtagdevsec/tanner:2006" + image: "ghcr.io/telekom-security/tanner:2006" read_only: true volumes: - /data/tanner/log:/var/log/tanner @@ -63,7 +63,7 @@ services: - tanner_local # ports: # - "127.0.0.1:8091:8091" - image: "dtagdevsec/tanner:2006" + image: "ghcr.io/telekom-security/tanner:2006" command: tannerweb read_only: true volumes: @@ -82,7 +82,7 @@ services: tty: true networks: - tanner_local - image: "dtagdevsec/tanner:2006" + image: "ghcr.io/telekom-security/tanner:2006" command: tanner read_only: true volumes: @@ -104,6 +104,6 @@ services: - tanner_local ports: - "80:80" - image: "dtagdevsec/snare:2006" + image: "ghcr.io/telekom-security/snare:2006" depends_on: - tanner diff --git a/docker/tanner/phpox/Dockerfile b/docker/tanner/phpox/Dockerfile index 621f4495..c3a4eb70 100644 --- a/docker/tanner/phpox/Dockerfile +++ b/docker/tanner/phpox/Dockerfile @@ -15,8 +15,9 @@ RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \ re2c && \ # # Install bfr sandbox from git - git clone --depth=1 https://github.com/mushorg/BFR /opt/BFR && \ + git clone https://github.com/mushorg/BFR /opt/BFR && \ cd /opt/BFR && \ + git checkout 508729202428a35bcc6bb27dd97b831f7e5009b5 && \ phpize7 && \ ./configure \ --with-php-config=/usr/bin/php-config7 \ @@ -28,8 +29,9 @@ RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \ echo "zend_extension = "$(find /usr -name bfr.so) >> 
/etc/php7/php.ini && \ # # Install PHP Sandbox - git clone --depth=1 https://github.com/mushorg/phpox /opt/phpox && \ + git clone https://github.com/mushorg/phpox /opt/phpox && \ cd /opt/phpox && \ + git checkout 001437b9ed3e228fac3828e18fe90991a330578d && \ pip3 install -r requirements.txt && \ make && \ # diff --git a/docker/tanner/snare/Dockerfile b/docker/tanner/snare/Dockerfile index 6dfe6375..cd462496 100644 --- a/docker/tanner/snare/Dockerfile +++ b/docker/tanner/snare/Dockerfile @@ -13,8 +13,9 @@ RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \ python3-dev && \ # # Setup Snare - git clone --depth=1 https://github.com/mushorg/snare /opt/snare && \ + git clone https://github.com/mushorg/snare /opt/snare && \ cd /opt/snare/ && \ + git checkout 7762b762b272f0599c16e11ef997c37d2899d33e && \ pip3 install --no-cache-dir setuptools && \ pip3 install --no-cache-dir -r requirements.txt && \ python3 setup.py install && \ diff --git a/docker/tanner/tanner/Dockerfile b/docker/tanner/tanner/Dockerfile index cdc1885a..6badbd0c 100644 --- a/docker/tanner/tanner/Dockerfile +++ b/docker/tanner/tanner/Dockerfile @@ -1,4 +1,4 @@ -FROM alpine:latest +FROM alpine:3.12 # # Include dist ADD dist/ /root/dist/ @@ -18,10 +18,11 @@ RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \ python3-dev && \ # # Setup Tanner - git clone --depth=1 https://github.com/mushorg/tanner /opt/tanner && \ + git clone https://github.com/mushorg/tanner /opt/tanner && \ cd /opt/tanner/ && \ # git fetch origin pull/364/head:test && \ # git checkout test && \ + git checkout 40e2357119065445cbb06234e953a95e5a73ce93 && \ cp /root/dist/config.yaml /opt/tanner/tanner/data && \ pip3 install --no-cache-dir setuptools && \ pip3 install --no-cache-dir -r requirements.txt && \ diff --git a/etc/objects/elkbase.tgz b/etc/objects/elkbase.tgz index f6680f23..a99a0e80 100644 Binary files a/etc/objects/elkbase.tgz and b/etc/objects/elkbase.tgz differ diff --git a/etc/objects/kibana-objects.tgz 
b/etc/objects/kibana-objects.tgz index a0c076b8..6eae1893 100644 Binary files a/etc/objects/kibana-objects.tgz and b/etc/objects/kibana-objects.tgz differ diff --git a/etc/objects/kibana_export.ndjson.zip b/etc/objects/kibana_export.ndjson.zip index db915169..d5212657 100644 Binary files a/etc/objects/kibana_export.ndjson.zip and b/etc/objects/kibana_export.ndjson.zip differ diff --git a/iso/installer/install.sh b/iso/installer/install.sh index fc43b8f5..62777ea2 100755 --- a/iso/installer/install.sh +++ b/iso/installer/install.sh @@ -16,13 +16,13 @@ fi myBACKTITLE="T-Pot-Installer" myCONF_FILE="/root/installer/iso.conf" myPROGRESSBOXCONF=" --backtitle "$myBACKTITLE" --progressbox 24 80" -mySITES="https://hub.docker.com https://github.com https://pypi.python.org https://debian.org" +mySITES="https://ghcr.io https://github.com https://pypi.python.org https://debian.org" myTPOTCOMPOSE="/opt/tpot/etc/tpot.yml" myLSB_STABLE_SUPPORTED="stretch buster" myLSB_TESTING_SUPPORTED="stable" myREMOTESITES="https://hub.docker.com https://github.com https://pypi.python.org https://debian.org https://listbot.sicherheitstacho.eu" -myPREINSTALLPACKAGES="aria2 apache2-utils cracklib-runtime curl dialog figlet fuse grc libcrack2 libpq-dev lsb-release netselect-apt net-tools software-properties-common toilet" -myINSTALLPACKAGES="aria2 apache2-utils apparmor apt-transport-https aufs-tools bash-completion build-essential ca-certificates cgroupfs-mount cockpit cockpit-docker console-setup console-setup-linux cracklib-runtime curl debconf-utils dialog dnsutils docker.io docker-compose ethtool fail2ban figlet genisoimage git glances grc haveged html2text htop iptables iw jq kbd libcrack2 libltdl7 libpam-google-authenticator man mosh multitail netselect-apt net-tools npm ntp openssh-server openssl pass pigz prips software-properties-common syslinux psmisc pv python3-pip toilet unattended-upgrades unzip vim wget wireless-tools wpasupplicant" +myPREINSTALLPACKAGES="aria2 apache2-utils 
cracklib-runtime curl dialog figlet fuse grc libcrack2 libpq-dev lsb-release net-tools software-properties-common toilet" +myINSTALLPACKAGES="aria2 apache2-utils apparmor apt-transport-https aufs-tools bash-completion build-essential ca-certificates cgroupfs-mount cockpit cockpit-docker console-setup console-setup-linux cracklib-runtime curl debconf-utils dialog dnsutils docker.io docker-compose ethtool fail2ban figlet genisoimage git glances grc haveged html2text htop iptables iw jq kbd libcrack2 libltdl7 libpam-google-authenticator man mosh multitail net-tools npm ntp openssh-server openssl pass pigz prips software-properties-common syslinux psmisc pv python3-pip toilet unattended-upgrades unzip vim wget wireless-tools wpasupplicant" myINFO="\ ########################################### ### T-Pot Installer for Debian (Stable) ### @@ -290,21 +290,6 @@ function fuCHECKNET { # Install T-Pot dependencies function fuGET_DEPS { export DEBIAN_FRONTEND=noninteractive - # Determine fastest mirror - echo - echo "### Determine fastest mirror for your location." - echo - netselect-apt -n -a amd64 stable && cp sources.list /etc/apt/ - mySOURCESCHECK=$(cat /etc/apt/sources.list | grep -c stable) - if [ "$mySOURCESCHECK" == "0" ] - then - echo "### Automatic mirror selection failed, using main mirror." - # Point to Debian (stable) -tee /etc/apt/sources.list < Management > Saved Objects > Export / Import" echo "### Or use the command:" diff --git a/version b/version index a30c04d4..8ce48caa 100644 --- a/version +++ b/version @@ -1 +1 @@ -20.06.0 +20.06.1
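Editor's note: the update.sh hunk above replaces the manual wget/tar rule download with suricata-update. A minimal standalone sketch of that selection logic follows; the function name `select_ruleset` is made up for illustration, and the commands are echoed rather than executed so the branching can be traced without Suricata installed.

```shell
#!/usr/bin/env bash
# Sketch of the ruleset selection in the new update.sh: a non-empty oinkcode
# that is not "OPEN" enables the ET Pro source; otherwise the et-pro source
# file is removed so suricata-update falls back to its et/open default.
select_ruleset() {
  local myOINKCODE="$1"
  if [ "$myOINKCODE" != "" ] && [ "$myOINKCODE" != "OPEN" ]; then
    echo "suricata-update enable-source et/pro secret-code=$myOINKCODE"
  else
    # et/open is the suricata-update default when no other source is enabled
    echo "rm -f /var/lib/suricata/update/sources/et-pro.yaml"
  fi
  echo "suricata-update --no-test --no-reload"
}

select_ruleset "OPEN"
select_ruleset "1234567890"
```

Either branch ends by running `suricata-update`, which is what actually fetches and installs the rules under /var/lib/suricata.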
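Editor's note: the Dockerfile hunks above swap `git clone --depth=1` for a full clone plus `git checkout <commit>`. A self-contained demo of why that pins the build is sketched below; the temporary repository, commit messages, and paths are invented for illustration and do not come from the diff.

```shell
#!/usr/bin/env bash
# Demo of the commit-pinning pattern from the Dockerfiles: a full clone
# followed by checking out a fixed commit stays reproducible even after
# the upstream default branch has moved on.
set -e
SRC=$(mktemp -d)
DST=$(mktemp -d)
git -C "$SRC" init -q
git -C "$SRC" -c user.email=t@t -c user.name=t commit -q --allow-empty -m "pinned"
PIN=$(git -C "$SRC" rev-parse HEAD)      # the known-good commit
git -C "$SRC" -c user.email=t@t -c user.name=t commit -q --allow-empty -m "newer"
git clone -q "$SRC" "$DST/repo"          # full clone, as in the Dockerfiles
git -C "$DST/repo" checkout -q "$PIN"    # pin to the known-good commit
git -C "$DST/repo" log -1 --format=%s    # -> pinned
```

With `--depth=1` the pinned commit would often not even be present in the clone, which is why the Dockerfiles drop that flag when adding the checkout.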
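Editor's note: install.sh now probes https://ghcr.io in `mySITES` instead of https://hub.docker.com. The body of `fuCHECKINET` is not shown in this diff; the sketch below is an assumed implementation of such a reachability check, with the curl flags and the 0/1 return convention chosen for illustration only.

```shell
#!/usr/bin/env bash
# Assumed sketch of a fuCHECKINET-style check: prints "0" if every site in
# the space-separated list answers, "1" otherwise. Not copied from install.sh.
fuCHECKINET() {
  local mySITES="$1" myOK=0
  for i in $mySITES; do
    curl -sf -m 10 -o /dev/null "$i" || myOK=1
  done
  echo "$myOK"
}

fuCHECKINET "file:///dev/null"   # prints 0 when every listed URL is reachable
```

In the installer the result gates the whole run, so adding ghcr.io to the list means an unreachable container registry is caught before any packages are installed.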