mirror of https://github.com/telekom-security/tpotce.git
synced 2025-04-20 06:02:24 +00:00

Add T-Pot Technical Preview

This commit is contained in:
parent 87ef005c17
commit 00d6d1b4c7

20 changed files with 3546 additions and 0 deletions

52  preview/.env  Normal file
@@ -0,0 +1,52 @@
# T-Pot config file. Do not remove.

# Set Web username and password here, only required for first run
# Removing the password after first run is recommended
# You can always add or remove users as you see fit using htpasswd:
# htpasswd -b -c /<data_folder>/nginx/conf/nginxpasswd <username> <password>
WEB_USER=changeme
WEB_PW=changeme

# T-Pot Blackhole
# ENABLED: T-Pot will download a db of known mass scanners and nullroute them
# Be aware, this will put T-Pot off the map for stealth reasons and
# you will get less traffic. Routes will stay active until reboot and will
# be re-added with every T-Pot start until disabled.
# DISABLED: This is the default and no stealth efforts are in place.
TPOT_BLACKHOLE=DISABLED

###################################################################################
# NEVER MAKE CHANGES TO THIS SECTION UNLESS YOU REALLY KNOW WHAT YOU ARE DOING!!! #
###################################################################################

# T-Pot Landing page provides Cockpit Link
COCKPIT=false

# docker.sock Path
TPOT_DOCKER_SOCK=/var/run/docker.sock

# docker compose .env
TPOT_DOCKER_ENV=./.env

# Docker-Compose file
TPOT_DOCKER_COMPOSE=./docker-compose.yml

# T-Pot Repo
TPOT_REPO=dtagdevsec

# T-Pot Version Tag
TPOT_VERSION=2204

# T-Pot Pull Policy
# always: (T-Pot default) Compose implementations SHOULD always pull the image from the registry.
# never: Compose implementations SHOULD NOT pull the image from a registry and SHOULD rely on the platform cached image.
# missing: Compose implementations SHOULD pull the image only if it's not available in the platform cache.
# build: Compose implementations SHOULD build the image. Compose implementations SHOULD rebuild the image if already present.
TPOT_PULL_POLICY=always

# T-Pot Data Path
TPOT_DATA_PATH=./data

# OSType (linux, mac, win)
# Most docker features are available on linux
TPOT_OSTYPE=linux
196  preview/README.md  Normal file

@@ -0,0 +1,196 @@
# T-Pot - Technical Preview

T-Pot will turn 10 years old next year. That milestone will be celebrated when the time comes, which makes today the best time to reflect on how the technology has advanced, what this means for the project and how we can ensure T-Pot meets the current and future requirements of the community.
<br><br>

# TL;DR
1. [Download](#choose-your-distro) or use a running, supported distribution
2. Install from the ISO with as few packages / services as possible (SSH is required!)
3. Clone T-Pot: `$ git clone https://github.com/telekom-security/tpotce`
4. Locate the installer for your distribution: `$ cd tpotce/preview/installer/<distro>`
5. Run the installer as non-root: `$ ./install.sh`
   * Follow the instructions, read the messages, check for possible port conflicts and reboot
6. [Set](#t-pot-config-file) the username and password in the config `.env`: `vi preview/.env`
7. [Start](#start-t-pot) T-Pot for the first time:
```
$ cd tpotce/preview/
$ docker compose up
```

# Table of Contents
- [Disclaimer](#disclaimer)
- [Last Time Departed](#last-time-departed)
- [Present Time](#present-time)
- [Destination Time](#destination-time)
  - [Technical Preview](#technical-preview)
  - [Architecture](#architecture)
- [Installation](#installation)
  - [Choose your distro](#choose-your-distro)
  - [Get and Install T-Pot](#get-and-install-t-pot)
- [T-Pot Config File](#t-pot-config-file)
- [macOS & Windows](#macos--windows)
- [Start T-Pot](#start-t-pot)
- [Stop T-Pot](#stop-t-pot)
- [Uninstall T-Pot](#uninstall-t-pot)
- [Feedback](#feedback)

<br><br>

# Disclaimer
- This is a Technical Preview, a very, very early stage in the development of T-Pot. You have been warned - there will be dragons steering flying time machines, possibly causing paradoxes.
- The T-Pot [disclaimer](https://github.com/telekom-security/tpotce/blob/master/README.md#disclaimer) and [documentation](https://github.com/telekom-security/tpotce/blob/master/README.md) apply.
<br><br>

# Last Time Departed
Jumping back to 2014: T-Pot was born as the direct ancestor of the Raspberry Pi images we used to offer for download (which by now probably only insiders will remember 😅). Docker was just the new kid on the block, the shiny new container engine everyone had unknowingly been waiting for, taking the dev world by storm. At that point we wanted T-Pot to be something tangible, tethered to a physical device (Hello NUC, my old friend 👋), while using the latest technologies to ensure an easy transition should we ever leave hardware based installations (or VMs for that matter) behind. And Oh-My-Zsh, as you all know, that day came faster than anticipated! (Special thanks @vorband, @shaderecker and @tmariuss for all of their contributions!)
<br><br>

# Present Time
Flash forward to today: T-Pot offers support for Debian, both as an ISO based installation and as a post-installation method (install your own Debian Server), and supports OTC, AWS and other clouds through Ansible and Terraform. All of this comes in many different flavors and even a distributed installation. At the same time we still rely on the same base concept we originally started with, which does not seem fit for the foreseeable future.<br>
Over the last couple of years, independence from a particular platform was the one requested feature that stood out by far. The reason, to this day, is the simple fact that T-Pot, although relying heavily on Docker, still requires a fully controlled environment. This has its advantages, but it cannot meet a demand where cloud based installations need different settings than we can provide (we can only run limited platform tests), companies follow different guidelines for allowed distributions, or hosters simply offer Debian images slightly adjusted to their environments, causing issues with the settings T-Pot relies on. Roll the dice or ask the Magic 8-Ball.
<br><br>

# Destination Time
Back to the future of T-Pot. For a brief time we had the idea of a T-Pot Light, which would compensate for the missing platform support. A concept was whipped up to support all of T-Pot's dockered services on minimal installations of Debian, Fedora, OpenSuse and Ubuntu Server. And it worked! It worked so well that we have almost achieved feature parity for this Technical Preview and decided that this is the best candidate for the future development of T-Pot.<br>
We are thrilled to share this now, so you can test, provide us with feedback, open issues and discussions, and give us the chance to make the next T-Pot the best T-Pot we have ever released!
<br><br>

## Technical Preview
For the purpose of the Technical Preview T-Pot will still use the 22.04 images and for a great part rely on the 22.04 release. It lays the groundwork for the next T-Pot release, though, by relying only on Docker's latest package repositories (yes, the distros mostly do not offer Docker's bleeding edge features), making some tiny modifications on the host (installer and uninstaller provided!) and moving all of T-Pot's core into its own Docker image with a simple, user adjustable configuration.<br>
<br><br>

## Architecture
While the basic architecture remains, the Technical Preview of T-Pot is mostly independent of the underlying OS, with only some basic requirements:
1. The underlying OS is available as a supported distribution:
   * Only the bare minimum of services and packages is installed to avoid possible port conflicts with T-Pot's services
   * Debian, Fedora, OpenSuse and Ubuntu Server are currently supported; others might follow if the requirements are met
2. The latest Docker Engine from Docker's repositories is supported:
   * Only the latest Docker Engine packages offer all the features needed by T-Pot
   * Docker Desktop does not offer host network capabilities and thus only a limited T-Pot experience (not available for the Technical Preview, but planned, to get you started even faster!)
3. Changes to the host:
   * Some changes to the host are necessary, but they will be kept as minimal as possible - just enough for T-Pot to be able to run
   * There are uninstallers available this time 😁
<br><br>

# System Requirements
The known T-Pot hardware (CPU, RAM, SSD) requirements and recommendations still apply.
<br><br>

# Installation
[Download](#choose-your-distro) one of the supported Linux distro images, `git clone` the T-Pot repository and run the installer specific to your system. Running T-Pot on top of an already running and supported Linux system is possible, but a clean installation is recommended to avoid port conflicts with running services.
<br><br>

## Choose your distro
Choose a supported distro of your choice. It is recommended to use the minimal / netinstall images linked below and to only install a minimal set of packages. SSH is mandatory, or you will not be able to connect to the machine remotely.

| Distribution Name | x64 | arm64 |
|:-----------------------------------------------|:----|:------|
| [Debian](https://www.debian.org/index.en.html) | [download](http://ftp.debian.org/debian/dists/stable/main/installer-amd64/current/images/netboot/mini.iso) | [download](http://ftp.debian.org/debian/dists/stable/main/installer-arm64/current/images/netboot/mini.iso) |
| [Fedora](https://fedoraproject.org) | [download](https://download.fedoraproject.org/pub/fedora/linux/releases/38/Server/x86_64/iso/Fedora-Server-netinst-x86_64-38-1.6.iso) | [download](https://download.fedoraproject.org/pub/fedora/linux/releases/38/Server/aarch64/iso/Fedora-Server-netinst-aarch64-38-1.6.iso) |
| [OpenSuse](https://www.opensuse.org) | [download](https://download.opensuse.org/tumbleweed/iso/openSUSE-Tumbleweed-NET-x86_64-Current.iso) | [download](https://download.opensuse.org/ports/aarch64/tumbleweed/iso/openSUSE-Tumbleweed-NET-aarch64-Current.iso) |
| [Ubuntu](https://ubuntu.com) | [download](https://releases.ubuntu.com/22.04.2/ubuntu-22.04.2-live-server-amd64.iso) | [download](https://cdimage.ubuntu.com/releases/22.04/release/ubuntu-22.04.2-live-server-arm64.iso) |

<br><br>

## Get and install T-Pot
1. Clone the GitHub repository: `$ git clone https://github.com/telekom-security/tpotce`
2. Change into the **tpotce/preview/installer** folder: `$ cd tpotce/preview/installer`
3. Locate your distribution, i.e. `fedora`: `$ cd fedora`
4. Run the installer as non-root: `$ ./install.sh`:
   * ⚠️ ***Depending on your Linux distribution of choice the installer will:***
     * Change the SSH port to `tcp/64295`
     * Disable the DNS Stub Listener to avoid port conflicts with honeypots
     * Set SELinux to Monitor Mode
     * Set the firewall target for the public zone to ACCEPT
     * Add Docker's repository and install Docker
     * Install recommended packages
     * Remove packages known to cause issues
     * Add the current user to the docker group (to allow docker interaction without `sudo`)
     * Add the `dps` and `dpsw` aliases (`grc docker ps -a`, `watch -c "grc --colour=on docker ps -a"`)
     * Display open ports on the host (compare these with T-Pot's [required](https://github.com/telekom-security/tpotce#required-ports) ports)
5. Follow the installer instructions; you will have to enter your password at least once
6. Check the installer messages for errors and open ports that might cause port conflicts
7. Reboot: `$ sudo reboot`
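The installer's final open-port listing can be reproduced manually at any time to compare against T-Pot's required ports. Below is a minimal sketch; the `ss -tuln` invocation named in the comment is an assumption (any listing of listening sockets works), and an embedded sample stands in for live output so the pipeline itself can be tried anywhere.

```shell
# Reduce a `ss -tuln` style listing to sorted, unique port numbers.
# The embedded sample stands in for live output (e.g. `ss -tuln | tail -n +2`).
sample='tcp   LISTEN 0 128  0.0.0.0:64295  0.0.0.0:*
udp   UNCONN 0 0    127.0.0.53%lo:53  0.0.0.0:*'
printf '%s\n' "$sample" \
  | awk '{n = split($5, a, ":"); print a[n]}' \
  | sort -un
```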
<br><br>

## T-Pot Config File
T-Pot offers a configuration file providing environment variables not only for the docker services (i.e. honeypots and tools) but also for the docker compose environment. The configuration file is hidden in the `preview` folder and is called `.env`. There is however an example file (`env.example`) which holds the default configuration.<br> Before the first start set the `WEB_USER` and `WEB_PW`. Once T-Pot has been initialized it is recommended to remove the password and set `WEB_PW=<changeme>`. Other settings are also available; these however should only be changed if you are comfortable with possible errors 🫠 as some of the features are not fully integrated and tested yet.
```
# T-Pot config file. Do not remove.

# Set Web username and password here, only required for first run
# Removing the password after first run is recommended
# You can always add or remove users as you see fit using htpasswd:
# htpasswd -b -c /<data_folder>/nginx/conf/nginxpasswd <username> <password>
WEB_USER=<changeme>
WEB_PW=<changeme>

# T-Pot Blackhole
# ENABLED: T-Pot will download a db of known mass scanners and nullroute them
# Be aware, this will put T-Pot off the map for stealth reasons and
# you will get less traffic. Routes will stay active until reboot and will
# be re-added with every T-Pot start until disabled.
# DISABLED: This is the default and no stealth efforts are in place.
TPOT_BLACKHOLE=DISABLED
```
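If `htpasswd` (from the apache2-utils / httpd-tools packages) is not installed, an equivalent entry can be generated with `openssl`. This is a sketch only: the user name, password and output file name below are illustrative, not part of T-Pot; in practice you would append to `/<data_folder>/nginx/conf/nginxpasswd` as shown in the config comments.

```shell
# Append an htpasswd-compatible entry (Apache APR1/MD5) for a new web user.
# USER_NAME, USER_PASS and the output path are assumptions - adjust to taste.
USER_NAME="analyst"
USER_PASS="changeme-please"
printf '%s:%s\n' "$USER_NAME" "$(openssl passwd -apr1 "$USER_PASS")" >> nginxpasswd
```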

## macOS & Windows
Sometimes it is just nice if you can spin up a T-Pot instance on macOS or Windows, i.e. for development, testing or just the fun of it. Since Docker Desktop is rather limited, not all honeypot types and T-Pot features are supported. Also remember that by default the macOS and Windows firewalls block access from remote machines, so testing is limited to the host. For production it is recommended to run T-Pot on Linux.<br>
To get things up and running just follow these steps:
1. Install Docker Desktop for [macOS](https://docs.docker.com/desktop/install/mac-install/) or [Windows](https://docs.docker.com/desktop/install/windows-install/)
2. Clone the GitHub repository: `$ git clone https://github.com/telekom-security/tpotce`
3. Change into the **tpotce/preview/compose** folder: `$ cd tpotce/preview/compose`
4. Copy **mac_win.yml** to the **tpotce/preview** folder, overwriting **docker-compose.yml**: `$ cp mac_win.yml ../docker-compose.yml`
5. Adjust the **.env** file by changing **TPOT_OSTYPE** to either **mac** or **win**:
```
# OSType (linux, mac, win)
# Most docker features are available on linux
TPOT_OSTYPE=mac
```
6. You have to ensure on your own that there are no port conflicts keeping T-Pot from starting up.

You can follow the README on how to [Start T-Pot](#start-t-pot); however, you may skip the **crontab**.
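The `TPOT_OSTYPE` change from step 5 can also be made non-interactively with `sed`. A sketch, demonstrated on a scratch copy; for the real file you would target `preview/.env` relative to the repository root.

```shell
# Switch TPOT_OSTYPE in place; sed keeps a .bak copy of the original.
# Demonstrated on a scratch copy; for the real file use preview/.env instead.
printf 'TPOT_OSTYPE=linux\n' > .env.demo
sed -i.bak 's/^TPOT_OSTYPE=.*/TPOT_OSTYPE=mac/' .env.demo
cat .env.demo
```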

# Start T-Pot
1. Change into the **tpotce/preview/** folder: `$ cd tpotce/preview/`
2. Run: `$ docker compose up` (notice the missing dash; `docker-compose` no longer exists with the latest Docker installation)
   * You can also run `$ docker compose -f /<path_to_tpot>/tpotce/preview/docker-compose.yml up` directly if you want to avoid changing into the `preview` folder, or add an alias of your choice.
3. `docker compose` will now download all the images necessary to run the T-Pot Docker containers
4. On the first run T-Pot (`tpotinit`) will initialize and create the `data` folder in the path specified (by default it is located in `tpotce/preview/data/`):
   * It takes about 2-3 minutes to bring all the containers up (should port conflicts arise, `docker compose` will simply abort)
   * Once all containers have started successfully for the first time you can access T-Pot as described [here](https://github.com/telekom-security/tpotce#remote-access-and-tools) or cancel with `CTRL-C` ...
5. ... and run T-Pot in the background: `$ docker compose up -d`
   * Unless you run `docker compose down -v`, T-Pot's Docker service will remain persistent and restart after a reboot
   * You can however add a crontab entry with `crontab -e` which will also add some container and image management:
```
@reboot docker compose -f /<path_to_tpot>/tpotce/preview/docker-compose.yml down -v; \
  docker container prune -f; \
  docker image prune -f; \
  docker compose -f /<path_to_tpot>/tpotce/preview/docker-compose.yml up -d
```
6. By default Docker will always check if the local and remote docker images match; if not, Docker will either revert to a matching locally cached image or download the image from remote. This ensures T-Pot images will always be up-to-date.
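For the "alias of your choice" mentioned above, the two compose calls can be wrapped in small shell functions for your `~/.bashrc` or similar. A sketch only: `TPOT_COMPOSE` and the function names are assumptions, and the path must point at your actual clone.

```shell
# Convenience wrappers for starting/stopping T-Pot from any directory.
# TPOT_COMPOSE and the function names are assumptions - adjust to your clone.
TPOT_COMPOSE="$HOME/tpotce/preview/docker-compose.yml"
tpot_up()   { docker compose -f "$TPOT_COMPOSE" up -d; }
tpot_down() { docker compose -f "$TPOT_COMPOSE" down -v; }
```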

# Stop T-Pot
1. Change into the **tpotce/preview/** folder: `$ cd tpotce/preview/`
2. Run: `$ docker compose down -v` (notice the missing dash; `docker-compose` no longer exists with the latest Docker installation)
3. Docker will now stop all running T-Pot containers and disable reboot persistence (unless you made a [crontab entry](#start-t-pot))
   * You can also run `$ docker compose -f /<path_to_tpot>/tpotce/preview/docker-compose.yml down -v` directly if you want to avoid changing into the `preview` folder, or add an alias of your choice.

# Uninstall T-Pot
1. Change into the **tpotce/preview/uninstaller/** folder: `$ cd tpotce/preview/uninstaller/`
2. Locate your distribution, i.e. `fedora`: `$ cd fedora`
3. Run the uninstaller as non-root: `$ ./uninstall.sh`:
   * The uninstaller will reverse the installation steps
4. Follow the uninstaller instructions; you will have to enter your password at least once
5. Check the uninstaller messages for errors
6. Reboot: `$ sudo reboot`

<br><br>

# Feedback
To ensure the next T-Pot release will be everything we and you - the T-Pot community - have in mind, please feel free to leave comments in the `Technical Preview` [discussion](https://github.com/telekom-security/tpotce/discussions/1325) pinned in our GitHub [Discussions](https://github.com/telekom-security/tpotce/discussions) section. Please bear in mind that this Technical Preview is made public at the earliest stage of the T-Pot development process so that we can gather ***your*** valuable input.
<br><br>
Thank you for testing 💖

Special thanks to all the [contributors](https://github.com/telekom-security/tpotce/graphs/contributors) and [developers](https://github.com/telekom-security/tpotce#credits) making this project possible!
9  preview/compose/README  Normal file

@@ -0,0 +1,9 @@
This folder contains docker-compose.yml files, basically referring to installation types.

Just copy the .yml file you want to use over the one in the parent folder, but shut docker compose down first:
> $ docker compose down -v
> $ cd compose
> $ cp standard.yml ../docker-compose.yml

For Docker Desktop on macOS and Windows machines only one .yml is available, "mac_win.yml", which is a stripped down
version of T-Pot able to run within the constraints of Docker Desktop.
804  preview/compose/mac_win.yml  Normal file

@@ -0,0 +1,804 @@
version: '3.9'

networks:
  tpotinit_local:
  adbhoney_local:
  ciscoasa_local:
  citrixhoneypot_local:
  conpot_local_IEC104:
  conpot_local_guardian_ast:
  conpot_local_ipmi:
  conpot_local_kamstrup_382:
  cowrie_local:
  ddospot_local:
  dicompot_local:
  dionaea_local:
  elasticpot_local:
  heralding_local:
  ipphoney_local:
  mailoney_local:
  medpot_local:
  redishoneypot_local:
  sentrypeer_local:
  tanner_local:
  nginx_local:
  ewsposter_local:

services:

#############################################
#### DEV
##############################################
#### T-Pot Light Init - Never delete this!
##############################################

# T-Pot Init Service
  tpotinit:
    container_name: tpotinit
    build: docker/.
    env_file:
      - .env
    restart: always
    tmpfs:
      - /tmp/etc:uid=2000,gid=2000
      - /tmp/:uid=2000,gid=2000
    networks:
      - tpotinit_local
    image: ${TPOT_REPO}/tpotinit:${TPOT_VERSION}
    pull_policy: ${TPOT_PULL_POLICY}
    volumes:
      - ${TPOT_DOCKER_COMPOSE}:/tmp/tpot/docker-compose.yml:ro
      - ${TPOT_DATA_PATH}/blackhole:/etc/blackhole
      - ${TPOT_DATA_PATH}:/data


##################
#### Honeypots
##################

# Adbhoney service
  adbhoney:
    container_name: adbhoney
    restart: always
    depends_on:
      tpotinit:
        condition: service_healthy
    networks:
      - adbhoney_local
    ports:
      - "5555:5555"
    image: ${TPOT_REPO}/adbhoney:${TPOT_VERSION}
    pull_policy: ${TPOT_PULL_POLICY}
    read_only: true
    volumes:
      - ${TPOT_DATA_PATH}/adbhoney/log:/opt/adbhoney/log
      - ${TPOT_DATA_PATH}/adbhoney/downloads:/opt/adbhoney/dl

# Ciscoasa service
  ciscoasa:
    container_name: ciscoasa
    restart: always
    depends_on:
      tpotinit:
        condition: service_healthy
    tmpfs:
      - /tmp/ciscoasa:uid=2000,gid=2000
    networks:
      - ciscoasa_local
    ports:
      - "5000:5000/udp"
      - "8443:8443"
    image: ${TPOT_REPO}/ciscoasa:${TPOT_VERSION}
    pull_policy: ${TPOT_PULL_POLICY}
    read_only: true
    volumes:
      - ${TPOT_DATA_PATH}/ciscoasa/log:/var/log/ciscoasa

# CitrixHoneypot service
  citrixhoneypot:
    container_name: citrixhoneypot
    restart: always
    depends_on:
      tpotinit:
        condition: service_healthy
    networks:
      - citrixhoneypot_local
    ports:
      - "443:443"
    image: ${TPOT_REPO}/citrixhoneypot:${TPOT_VERSION}
    pull_policy: ${TPOT_PULL_POLICY}
    read_only: true
    volumes:
      - ${TPOT_DATA_PATH}/citrixhoneypot/logs:/opt/citrixhoneypot/logs

# Conpot IEC104 service
  conpot_IEC104:
    container_name: conpot_iec104
    restart: always
    depends_on:
      tpotinit:
        condition: service_healthy
    environment:
      - CONPOT_CONFIG=/etc/conpot/conpot.cfg
      - CONPOT_JSON_LOG=/var/log/conpot/conpot_IEC104.json
      - CONPOT_LOG=/var/log/conpot/conpot_IEC104.log
      - CONPOT_TEMPLATE=IEC104
      - CONPOT_TMP=/tmp/conpot
    tmpfs:
      - /tmp/conpot:uid=2000,gid=2000
    networks:
      - conpot_local_IEC104
    ports:
      - "161:161/udp"
      - "2404:2404"
    image: ${TPOT_REPO}/conpot:${TPOT_VERSION}
    pull_policy: ${TPOT_PULL_POLICY}
    read_only: true
    volumes:
      - ${TPOT_DATA_PATH}/conpot/log:/var/log/conpot

# Conpot guardian_ast service
  conpot_guardian_ast:
    container_name: conpot_guardian_ast
    restart: always
    depends_on:
      tpotinit:
        condition: service_healthy
    environment:
      - CONPOT_CONFIG=/etc/conpot/conpot.cfg
      - CONPOT_JSON_LOG=/var/log/conpot/conpot_guardian_ast.json
      - CONPOT_LOG=/var/log/conpot/conpot_guardian_ast.log
      - CONPOT_TEMPLATE=guardian_ast
      - CONPOT_TMP=/tmp/conpot
    tmpfs:
      - /tmp/conpot:uid=2000,gid=2000
    networks:
      - conpot_local_guardian_ast
    ports:
      - "10001:10001"
    image: ${TPOT_REPO}/conpot:${TPOT_VERSION}
    pull_policy: ${TPOT_PULL_POLICY}
    read_only: true
    volumes:
      - ${TPOT_DATA_PATH}/conpot/log:/var/log/conpot

# Conpot ipmi
  conpot_ipmi:
    container_name: conpot_ipmi
    restart: always
    depends_on:
      tpotinit:
        condition: service_healthy
    environment:
      - CONPOT_CONFIG=/etc/conpot/conpot.cfg
      - CONPOT_JSON_LOG=/var/log/conpot/conpot_ipmi.json
      - CONPOT_LOG=/var/log/conpot/conpot_ipmi.log
      - CONPOT_TEMPLATE=ipmi
      - CONPOT_TMP=/tmp/conpot
    tmpfs:
      - /tmp/conpot:uid=2000,gid=2000
    networks:
      - conpot_local_ipmi
    ports:
      - "623:623/udp"
    image: ${TPOT_REPO}/conpot:${TPOT_VERSION}
    pull_policy: ${TPOT_PULL_POLICY}
    read_only: true
    volumes:
      - ${TPOT_DATA_PATH}/conpot/log:/var/log/conpot

# Conpot kamstrup_382
  conpot_kamstrup_382:
    container_name: conpot_kamstrup_382
    restart: always
    depends_on:
      tpotinit:
        condition: service_healthy
    environment:
      - CONPOT_CONFIG=/etc/conpot/conpot.cfg
      - CONPOT_JSON_LOG=/var/log/conpot/conpot_kamstrup_382.json
      - CONPOT_LOG=/var/log/conpot/conpot_kamstrup_382.log
      - CONPOT_TEMPLATE=kamstrup_382
      - CONPOT_TMP=/tmp/conpot
    tmpfs:
      - /tmp/conpot:uid=2000,gid=2000
    networks:
      - conpot_local_kamstrup_382
    ports:
      - "1025:1025"
      - "50100:50100"
    image: ${TPOT_REPO}/conpot:${TPOT_VERSION}
    pull_policy: ${TPOT_PULL_POLICY}
    read_only: true
    volumes:
      - ${TPOT_DATA_PATH}/conpot/log:/var/log/conpot

# Cowrie service
  cowrie:
    container_name: cowrie
    restart: always
    depends_on:
      tpotinit:
        condition: service_healthy
    tmpfs:
      - /tmp/cowrie:uid=2000,gid=2000
      - /tmp/cowrie/data:uid=2000,gid=2000
    networks:
      - cowrie_local
    ports:
      - "22:22"
      - "23:23"
    image: ${TPOT_REPO}/cowrie:${TPOT_VERSION}
    pull_policy: ${TPOT_PULL_POLICY}
    read_only: true
    volumes:
      - ${TPOT_DATA_PATH}/cowrie/downloads:/home/cowrie/cowrie/dl
      - ${TPOT_DATA_PATH}/cowrie/keys:/home/cowrie/cowrie/etc
      - ${TPOT_DATA_PATH}/cowrie/log:/home/cowrie/cowrie/log
      - ${TPOT_DATA_PATH}/cowrie/log/tty:/home/cowrie/cowrie/log/tty

# Ddospot service
  ddospot:
    container_name: ddospot
    restart: always
    depends_on:
      tpotinit:
        condition: service_healthy
    networks:
      - ddospot_local
    ports:
      - "19:19/udp"
      - "53:53/udp"
      - "123:123/udp"
      # - "161:161/udp"
      - "1900:1900/udp"
    image: ${TPOT_REPO}/ddospot:${TPOT_VERSION}
    pull_policy: ${TPOT_PULL_POLICY}
    read_only: true
    volumes:
      - ${TPOT_DATA_PATH}/ddospot/log:/opt/ddospot/ddospot/logs
      - ${TPOT_DATA_PATH}/ddospot/bl:/opt/ddospot/ddospot/bl
      - ${TPOT_DATA_PATH}/ddospot/db:/opt/ddospot/ddospot/db

# Dicompot service
# Get the Horos Client for testing: https://horosproject.org/
# Get Dicom images (CC BY 3.0): https://www.cancerimagingarchive.net/collections/
# Put images (which must be in Dicom DCM format or it will not work!) into /data/dicompot/images
  dicompot:
    container_name: dicompot
    restart: always
    depends_on:
      tpotinit:
        condition: service_healthy
    networks:
      - dicompot_local
    ports:
      - "11112:11112"
    image: ${TPOT_REPO}/dicompot:${TPOT_VERSION}
    pull_policy: ${TPOT_PULL_POLICY}
    read_only: true
    volumes:
      - ${TPOT_DATA_PATH}/dicompot/log:/var/log/dicompot
      # - ${TPOT_DATA_PATH}/dicompot/images:/opt/dicompot/images

# Dionaea service
  dionaea:
    container_name: dionaea
    stdin_open: true
    tty: true
    restart: always
    depends_on:
      tpotinit:
        condition: service_healthy
    networks:
      - dionaea_local
    ports:
      - "20:20"
      - "21:21"
      - "42:42"
      - "69:69/udp"
      - "81:81"
      - "135:135"
      # - "443:443"
      - "445:445"
      - "1433:1433"
      - "1723:1723"
      - "1883:1883"
      - "3306:3306"
      # - "5060:5060"
      # - "5060:5060/udp"
      # - "5061:5061"
      - "27017:27017"
    image: ${TPOT_REPO}/dionaea:${TPOT_VERSION}
    pull_policy: ${TPOT_PULL_POLICY}
    read_only: true
    volumes:
      - ${TPOT_DATA_PATH}/dionaea/roots/ftp:/opt/dionaea/var/dionaea/roots/ftp
      - ${TPOT_DATA_PATH}/dionaea/roots/tftp:/opt/dionaea/var/dionaea/roots/tftp
      - ${TPOT_DATA_PATH}/dionaea/roots/www:/opt/dionaea/var/dionaea/roots/www
      - ${TPOT_DATA_PATH}/dionaea/roots/upnp:/opt/dionaea/var/dionaea/roots/upnp
      - ${TPOT_DATA_PATH}/dionaea:/opt/dionaea/var/dionaea
      - ${TPOT_DATA_PATH}/dionaea/binaries:/opt/dionaea/var/dionaea/binaries
      - ${TPOT_DATA_PATH}/dionaea/log:/opt/dionaea/var/log
      - ${TPOT_DATA_PATH}/dionaea/rtp:/opt/dionaea/var/dionaea/rtp

# ElasticPot service
  elasticpot:
    container_name: elasticpot
    restart: always
    depends_on:
      tpotinit:
        condition: service_healthy
    networks:
      - elasticpot_local
    ports:
      - "9200:9200"
    image: ${TPOT_REPO}/elasticpot:${TPOT_VERSION}
    pull_policy: ${TPOT_PULL_POLICY}
    read_only: true
    volumes:
      - ${TPOT_DATA_PATH}/elasticpot/log:/opt/elasticpot/log

# Heralding service
  heralding:
    container_name: heralding
    restart: always
    depends_on:
      tpotinit:
        condition: service_healthy
    tmpfs:
      - /tmp/heralding:uid=2000,gid=2000
    networks:
      - heralding_local
    ports:
      # - "21:21"
      # - "22:22"
      # - "23:23"
      # - "25:25"
      # - "80:80"
      - "110:110"
      - "143:143"
      # - "443:443"
      - "465:465"
      - "993:993"
      - "995:995"
      # - "3306:3306"
      # - "3389:3389"
      - "1080:1080"
      - "5432:5432"
      - "5900:5900"
    image: ${TPOT_REPO}/heralding:${TPOT_VERSION}
    pull_policy: ${TPOT_PULL_POLICY}
    read_only: true
    volumes:
      - ${TPOT_DATA_PATH}/heralding/log:/var/log/heralding

# Ipphoney service
  ipphoney:
    container_name: ipphoney
    restart: always
    depends_on:
      tpotinit:
        condition: service_healthy
    networks:
      - ipphoney_local
    ports:
      - "631:631"
    image: ${TPOT_REPO}/ipphoney:${TPOT_VERSION}
|
||||||
|
pull_policy: ${TPOT_PULL_POLICY}
|
||||||
|
read_only: true
|
||||||
|
volumes:
|
||||||
|
- ${TPOT_DATA_PATH}/ipphoney/log:/opt/ipphoney/log
|
||||||
|
|
||||||
|
# Mailoney service
|
||||||
|
mailoney:
|
||||||
|
container_name: mailoney
|
||||||
|
restart: always
|
||||||
|
depends_on:
|
||||||
|
tpotinit:
|
||||||
|
condition: service_healthy
|
||||||
|
environment:
|
||||||
|
- HPFEEDS_SERVER=
|
||||||
|
- HPFEEDS_IDENT=user
|
||||||
|
- HPFEEDS_SECRET=pass
|
||||||
|
- HPFEEDS_PORT=20000
|
||||||
|
- HPFEEDS_CHANNELPREFIX=prefix
|
||||||
|
networks:
|
||||||
|
- mailoney_local
|
||||||
|
ports:
|
||||||
|
- "25:25"
|
||||||
|
image: ${TPOT_REPO}/mailoney:${TPOT_VERSION}
|
||||||
|
pull_policy: ${TPOT_PULL_POLICY}
|
||||||
|
read_only: true
|
||||||
|
volumes:
|
||||||
|
- ${TPOT_DATA_PATH}/mailoney/log:/opt/mailoney/logs
|
||||||
|
|
||||||
|
# Medpot service
|
||||||
|
medpot:
|
||||||
|
container_name: medpot
|
||||||
|
restart: always
|
||||||
|
depends_on:
|
||||||
|
tpotinit:
|
||||||
|
condition: service_healthy
|
||||||
|
networks:
|
||||||
|
- medpot_local
|
||||||
|
ports:
|
||||||
|
- "2575:2575"
|
||||||
|
image: ${TPOT_REPO}/medpot:${TPOT_VERSION}
|
||||||
|
pull_policy: ${TPOT_PULL_POLICY}
|
||||||
|
read_only: true
|
||||||
|
volumes:
|
||||||
|
- ${TPOT_DATA_PATH}/medpot/log/:/var/log/medpot
|
||||||
|
|
||||||
|
# Redishoneypot service
|
||||||
|
redishoneypot:
|
||||||
|
container_name: redishoneypot
|
||||||
|
restart: always
|
||||||
|
depends_on:
|
||||||
|
tpotinit:
|
||||||
|
condition: service_healthy
|
||||||
|
networks:
|
||||||
|
- redishoneypot_local
|
||||||
|
ports:
|
||||||
|
- "6379:6379"
|
||||||
|
image: ${TPOT_REPO}/redishoneypot:${TPOT_VERSION}
|
||||||
|
pull_policy: ${TPOT_PULL_POLICY}
|
||||||
|
read_only: true
|
||||||
|
volumes:
|
||||||
|
- ${TPOT_DATA_PATH}/redishoneypot/log:/var/log/redishoneypot
|
||||||
|
|
||||||
|
# SentryPeer service
|
||||||
|
sentrypeer:
|
||||||
|
container_name: sentrypeer
|
||||||
|
restart: always
|
||||||
|
depends_on:
|
||||||
|
tpotinit:
|
||||||
|
condition: service_healthy
|
||||||
|
# SentryPeer offers to exchange bad actor data via DHT / P2P mode by setting the ENV to true (1)
|
||||||
|
# In some cases (i.e. internally deployed T-Pots) this might be confusing as SentryPeer will show
|
||||||
|
# the bad actors in its logs. Therefore this option is opt-in based.
|
||||||
|
# environment:
|
||||||
|
# - SENTRYPEER_PEER_TO_PEER=0
|
||||||
|
networks:
|
||||||
|
- sentrypeer_local
|
||||||
|
ports:
|
||||||
|
# - "4222:4222/udp"
|
||||||
|
- "5060:5060/udp"
|
||||||
|
# - "127.0.0.1:8082:8082"
|
||||||
|
image: ${TPOT_REPO}/sentrypeer:${TPOT_VERSION}
|
||||||
|
pull_policy: ${TPOT_PULL_POLICY}
|
||||||
|
read_only: true
|
||||||
|
volumes:
|
||||||
|
- ${TPOT_DATA_PATH}/sentrypeer/log:/var/log/sentrypeer
|
||||||
|
|
||||||
|
#### Snare / Tanner
|
||||||
|
## Tanner Redis Service
|
||||||
|
tanner_redis:
|
||||||
|
container_name: tanner_redis
|
||||||
|
restart: always
|
||||||
|
depends_on:
|
||||||
|
tpotinit:
|
||||||
|
condition: service_healthy
|
||||||
|
tty: true
|
||||||
|
networks:
|
||||||
|
- tanner_local
|
||||||
|
image: ${TPOT_REPO}/redis:${TPOT_VERSION}
|
||||||
|
pull_policy: ${TPOT_PULL_POLICY}
|
||||||
|
read_only: true
|
||||||
|
|
||||||
|
## PHP Sandbox service
|
||||||
|
tanner_phpox:
|
||||||
|
container_name: tanner_phpox
|
||||||
|
restart: always
|
||||||
|
depends_on:
|
||||||
|
tpotinit:
|
||||||
|
condition: service_healthy
|
||||||
|
tty: true
|
||||||
|
networks:
|
||||||
|
- tanner_local
|
||||||
|
image: ${TPOT_REPO}/phpox:${TPOT_VERSION}
|
||||||
|
pull_policy: ${TPOT_PULL_POLICY}
|
||||||
|
read_only: true
|
||||||
|
|
||||||
|
## Tanner API Service
|
||||||
|
tanner_api:
|
||||||
|
container_name: tanner_api
|
||||||
|
restart: always
|
||||||
|
depends_on:
|
||||||
|
- tanner_redis
|
||||||
|
tmpfs:
|
||||||
|
- /tmp/tanner:uid=2000,gid=2000
|
||||||
|
tty: true
|
||||||
|
networks:
|
||||||
|
- tanner_local
|
||||||
|
image: ${TPOT_REPO}/tanner:${TPOT_VERSION}
|
||||||
|
pull_policy: ${TPOT_PULL_POLICY}
|
||||||
|
read_only: true
|
||||||
|
volumes:
|
||||||
|
- ${TPOT_DATA_PATH}/tanner/log:/var/log/tanner
|
||||||
|
command: tannerapi
|
||||||
|
|
||||||
|
## Tanner Service
|
||||||
|
tanner:
|
||||||
|
container_name: tanner
|
||||||
|
restart: always
|
||||||
|
depends_on:
|
||||||
|
- tanner_api
|
||||||
|
- tanner_phpox
|
||||||
|
tmpfs:
|
||||||
|
- /tmp/tanner:uid=2000,gid=2000
|
||||||
|
tty: true
|
||||||
|
networks:
|
||||||
|
- tanner_local
|
||||||
|
image: ${TPOT_REPO}/tanner:${TPOT_VERSION}
|
||||||
|
pull_policy: ${TPOT_PULL_POLICY}
|
||||||
|
command: tanner
|
||||||
|
read_only: true
|
||||||
|
volumes:
|
||||||
|
- ${TPOT_DATA_PATH}/tanner/log:/var/log/tanner
|
||||||
|
- ${TPOT_DATA_PATH}/tanner/files:/opt/tanner/files
|
||||||
|
|
||||||
|
## Snare Service
|
||||||
|
snare:
|
||||||
|
container_name: snare
|
||||||
|
restart: always
|
||||||
|
depends_on:
|
||||||
|
- tanner
|
||||||
|
tty: true
|
||||||
|
networks:
|
||||||
|
- tanner_local
|
||||||
|
ports:
|
||||||
|
- "80:80"
|
||||||
|
image: ${TPOT_REPO}/snare:${TPOT_VERSION}
|
||||||
|
pull_policy: ${TPOT_PULL_POLICY}
|
||||||
|
|
||||||
|
|
||||||
|
##################
|
||||||
|
#### NSM
|
||||||
|
##################
|
||||||
|
|
||||||
|
# Fatt service
|
||||||
|
fatt:
|
||||||
|
container_name: fatt
|
||||||
|
restart: always
|
||||||
|
depends_on:
|
||||||
|
tpotinit:
|
||||||
|
condition: service_healthy
|
||||||
|
network_mode: "host"
|
||||||
|
cap_add:
|
||||||
|
- NET_ADMIN
|
||||||
|
- SYS_NICE
|
||||||
|
- NET_RAW
|
||||||
|
image: ${TPOT_REPO}/fatt:${TPOT_VERSION}
|
||||||
|
pull_policy: ${TPOT_PULL_POLICY}
|
||||||
|
volumes:
|
||||||
|
- ${TPOT_DATA_PATH}/fatt/log:/opt/fatt/log
|
||||||
|
|
||||||
|
# P0f service
|
||||||
|
p0f:
|
||||||
|
container_name: p0f
|
||||||
|
restart: always
|
||||||
|
depends_on:
|
||||||
|
tpotinit:
|
||||||
|
condition: service_healthy
|
||||||
|
network_mode: "host"
|
||||||
|
cap_add:
|
||||||
|
- NET_ADMIN
|
||||||
|
- SYS_NICE
|
||||||
|
- NET_RAW
|
||||||
|
image: ${TPOT_REPO}/p0f:${TPOT_VERSION}
|
||||||
|
pull_policy: ${TPOT_PULL_POLICY}
|
||||||
|
read_only: true
|
||||||
|
volumes:
|
||||||
|
- ${TPOT_DATA_PATH}/p0f/log:/var/log/p0f
|
||||||
|
|
||||||
|
# Suricata service
|
||||||
|
suricata:
|
||||||
|
container_name: suricata
|
||||||
|
restart: always
|
||||||
|
depends_on:
|
||||||
|
tpotinit:
|
||||||
|
condition: service_healthy
|
||||||
|
environment:
|
||||||
|
# For ET Pro ruleset replace "OPEN" with your OINKCODE
|
||||||
|
- OINKCODE=OPEN
|
||||||
|
# Loading externel Rules from URL
|
||||||
|
# - FROMURL="https://username:password@yoururl.com|https://username:password@otherurl.com"
|
||||||
|
network_mode: "host"
|
||||||
|
cap_add:
|
||||||
|
- NET_ADMIN
|
||||||
|
- SYS_NICE
|
||||||
|
- NET_RAW
|
||||||
|
image: ${TPOT_REPO}/suricata:${TPOT_VERSION}
|
||||||
|
pull_policy: ${TPOT_PULL_POLICY}
|
||||||
|
volumes:
|
||||||
|
- ${TPOT_DATA_PATH}/suricata/log:/var/log/suricata
|
||||||
|
|
||||||
|
|
||||||
|
##################
|
||||||
|
#### Tools
|
||||||
|
##################
|
||||||
|
|
||||||
|
#### ELK
|
||||||
|
## Elasticsearch service
|
||||||
|
elasticsearch:
|
||||||
|
container_name: elasticsearch
|
||||||
|
restart: always
|
||||||
|
depends_on:
|
||||||
|
tpotinit:
|
||||||
|
condition: service_healthy
|
||||||
|
environment:
|
||||||
|
- bootstrap.memory_lock=true
|
||||||
|
- ES_JAVA_OPTS=-Xms2048m -Xmx2048m
|
||||||
|
- ES_TMPDIR=/tmp
|
||||||
|
networks:
|
||||||
|
- nginx_local
|
||||||
|
cap_add:
|
||||||
|
- IPC_LOCK
|
||||||
|
ulimits:
|
||||||
|
memlock:
|
||||||
|
soft: -1
|
||||||
|
hard: -1
|
||||||
|
nofile:
|
||||||
|
soft: 65536
|
||||||
|
hard: 65536
|
||||||
|
mem_limit: 4g
|
||||||
|
ports:
|
||||||
|
- "127.0.0.1:64298:9200"
|
||||||
|
image: ${TPOT_REPO}/elasticsearch:${TPOT_VERSION}
|
||||||
|
pull_policy: ${TPOT_PULL_POLICY}
|
||||||
|
volumes:
|
||||||
|
- ${TPOT_DATA_PATH}:/data
|
||||||
|
|
||||||
|
## Kibana service
|
||||||
|
kibana:
|
||||||
|
container_name: kibana
|
||||||
|
restart: always
|
||||||
|
depends_on:
|
||||||
|
elasticsearch:
|
||||||
|
condition: service_healthy
|
||||||
|
networks:
|
||||||
|
- nginx_local
|
||||||
|
mem_limit: 1g
|
||||||
|
ports:
|
||||||
|
- "127.0.0.1:64296:5601"
|
||||||
|
image: ${TPOT_REPO}/kibana:${TPOT_VERSION}
|
||||||
|
pull_policy: ${TPOT_PULL_POLICY}
|
||||||
|
|
||||||
|
## Logstash service
|
||||||
|
logstash:
|
||||||
|
container_name: logstash
|
||||||
|
restart: always
|
||||||
|
depends_on:
|
||||||
|
elasticsearch:
|
||||||
|
condition: service_healthy
|
||||||
|
networks:
|
||||||
|
- nginx_local
|
||||||
|
environment:
|
||||||
|
- LS_JAVA_OPTS=-Xms1024m -Xmx1024m
|
||||||
|
mem_limit: 2g
|
||||||
|
image: ${TPOT_REPO}/logstash:${TPOT_VERSION}
|
||||||
|
pull_policy: ${TPOT_PULL_POLICY}
|
||||||
|
volumes:
|
||||||
|
- ${TPOT_DATA_PATH}:/data
|
||||||
|
|
||||||
|
## Map Redis Service
|
||||||
|
map_redis:
|
||||||
|
container_name: map_redis
|
||||||
|
restart: always
|
||||||
|
depends_on:
|
||||||
|
tpotinit:
|
||||||
|
condition: service_healthy
|
||||||
|
networks:
|
||||||
|
- nginx_local
|
||||||
|
stop_signal: SIGKILL
|
||||||
|
tty: true
|
||||||
|
image: ${TPOT_REPO}/redis:${TPOT_VERSION}
|
||||||
|
pull_policy: ${TPOT_PULL_POLICY}
|
||||||
|
read_only: true
|
||||||
|
|
||||||
|
## Map Web Service
|
||||||
|
map_web:
|
||||||
|
container_name: map_web
|
||||||
|
restart: always
|
||||||
|
depends_on:
|
||||||
|
tpotinit:
|
||||||
|
condition: service_healthy
|
||||||
|
networks:
|
||||||
|
- nginx_local
|
||||||
|
environment:
|
||||||
|
- MAP_COMMAND=AttackMapServer.py
|
||||||
|
stop_signal: SIGKILL
|
||||||
|
tty: true
|
||||||
|
ports:
|
||||||
|
- "127.0.0.1:64299:64299"
|
||||||
|
image: ${TPOT_REPO}/map:${TPOT_VERSION}
|
||||||
|
pull_policy: ${TPOT_PULL_POLICY}
|
||||||
|
|
||||||
|
## Map Data Service
|
||||||
|
map_data:
|
||||||
|
container_name: map_data
|
||||||
|
restart: always
|
||||||
|
depends_on:
|
||||||
|
elasticsearch:
|
||||||
|
condition: service_healthy
|
||||||
|
networks:
|
||||||
|
- nginx_local
|
||||||
|
environment:
|
||||||
|
- MAP_COMMAND=DataServer_v2.py
|
||||||
|
stop_signal: SIGKILL
|
||||||
|
tty: true
|
||||||
|
image: ${TPOT_REPO}/map:${TPOT_VERSION}
|
||||||
|
pull_policy: ${TPOT_PULL_POLICY}
|
||||||
|
#### /ELK
|
||||||
|
|
||||||
|
# Ewsposter service
|
||||||
|
ewsposter:
|
||||||
|
container_name: ewsposter
|
||||||
|
restart: always
|
||||||
|
depends_on:
|
||||||
|
tpotinit:
|
||||||
|
condition: service_healthy
|
||||||
|
networks:
|
||||||
|
- ewsposter_local
|
||||||
|
environment:
|
||||||
|
- EWS_HPFEEDS_ENABLE=false
|
||||||
|
- EWS_HPFEEDS_HOST=host
|
||||||
|
- EWS_HPFEEDS_PORT=port
|
||||||
|
- EWS_HPFEEDS_CHANNELS=channels
|
||||||
|
- EWS_HPFEEDS_IDENT=user
|
||||||
|
- EWS_HPFEEDS_SECRET=secret
|
||||||
|
- EWS_HPFEEDS_TLSCERT=false
|
||||||
|
- EWS_HPFEEDS_FORMAT=json
|
||||||
|
image: ${TPOT_REPO}/ewsposter:${TPOT_VERSION}
|
||||||
|
pull_policy: ${TPOT_PULL_POLICY}
|
||||||
|
volumes:
|
||||||
|
- ${TPOT_DATA_PATH}:/data
|
||||||
|
- ${TPOT_DATA_PATH}/ews/conf/ews.ip:/opt/ewsposter/ews.ip
|
||||||
|
|
||||||
|
# Nginx service
|
||||||
|
nginx:
|
||||||
|
container_name: nginx
|
||||||
|
restart: always
|
||||||
|
environment:
|
||||||
|
- COCKPIT=${COCKPIT}
|
||||||
|
- TPOT_OSTYPE=${TPOT_OSTYPE}
|
||||||
|
depends_on:
|
||||||
|
tpotinit:
|
||||||
|
condition: service_healthy
|
||||||
|
tmpfs:
|
||||||
|
- /var/tmp/nginx/client_body
|
||||||
|
- /var/tmp/nginx/proxy
|
||||||
|
- /var/tmp/nginx/fastcgi
|
||||||
|
- /var/tmp/nginx/uwsgi
|
||||||
|
- /var/tmp/nginx/scgi
|
||||||
|
- /run
|
||||||
|
- /var/lib/nginx/tmp:uid=100,gid=82
|
||||||
|
networks:
|
||||||
|
- nginx_local
|
||||||
|
ports:
|
||||||
|
- "64297:64297"
|
||||||
|
- "127.0.0.1:64304:64304"
|
||||||
|
image: ${TPOT_REPO}/nginx:${TPOT_VERSION}
|
||||||
|
pull_policy: ${TPOT_PULL_POLICY}
|
||||||
|
read_only: true
|
||||||
|
volumes:
|
||||||
|
- ${TPOT_DATA_PATH}/nginx/cert/:/etc/nginx/cert/:ro
|
||||||
|
- ${TPOT_DATA_PATH}/nginx/conf/nginxpasswd:/etc/nginx/nginxpasswd:ro
|
||||||
|
- ${TPOT_DATA_PATH}/nginx/log/:/var/log/nginx/
|
||||||
|
|
||||||
|
# Spiderfoot service
|
||||||
|
spiderfoot:
|
||||||
|
container_name: spiderfoot
|
||||||
|
restart: always
|
||||||
|
depends_on:
|
||||||
|
tpotinit:
|
||||||
|
condition: service_healthy
|
||||||
|
networks:
|
||||||
|
- nginx_local
|
||||||
|
ports:
|
||||||
|
- "127.0.0.1:64303:8080"
|
||||||
|
image: ${TPOT_REPO}/spiderfoot:${TPOT_VERSION}
|
||||||
|
pull_policy: ${TPOT_PULL_POLICY}
|
||||||
|
volumes:
|
||||||
|
- ${TPOT_DATA_PATH}/spiderfoot:/home/spiderfoot/.spiderfoot
|
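As a side note, any of the commented-out bind mounts above (for example, dicompot's images directory) can be enabled without editing the tracked compose file: Docker Compose automatically merges a `docker-compose.override.yml` placed next to the main file. A minimal sketch, assuming the same `${TPOT_DATA_PATH}` variable from the `.env` file (the override file itself is hypothetical and not shipped with T-Pot):

```yaml
# docker-compose.override.yml (hypothetical, user-supplied)
# Compose merges this with docker-compose.yml; list entries under
# volumes are combined with those in the main file.
services:
  dicompot:
    volumes:
      - ${TPOT_DATA_PATH}/dicompot/images:/opt/dicompot/images
```

This keeps local customizations out of the version-controlled file, so updates to the main compose file apply cleanly.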
807  preview/compose/standard.yml  Normal file

@@ -0,0 +1,807 @@
|
||||||
|
version: '3.9'
|
||||||
|
|
||||||
|
networks:
|
||||||
|
adbhoney_local:
|
||||||
|
ciscoasa_local:
|
||||||
|
citrixhoneypot_local:
|
||||||
|
conpot_local_IEC104:
|
||||||
|
conpot_local_guardian_ast:
|
||||||
|
conpot_local_ipmi:
|
||||||
|
conpot_local_kamstrup_382:
|
||||||
|
cowrie_local:
|
||||||
|
ddospot_local:
|
||||||
|
dicompot_local:
|
||||||
|
dionaea_local:
|
||||||
|
elasticpot_local:
|
||||||
|
heralding_local:
|
||||||
|
ipphoney_local:
|
||||||
|
mailoney_local:
|
||||||
|
medpot_local:
|
||||||
|
redishoneypot_local:
|
||||||
|
sentrypeer_local:
|
||||||
|
tanner_local:
|
||||||
|
spiderfoot_local:
|
||||||
|
ewsposter_local:
|
||||||
|
|
||||||
|
services:
|
||||||
|
|
||||||
|
##############################################
|
||||||
|
#### DEV
|
||||||
|
##############################################
|
||||||
|
#### T-Pot Light Init - Never delete this!
|
||||||
|
##############################################
|
||||||
|
|
||||||
|
# T-Pot Init Service
|
||||||
|
tpotinit:
|
||||||
|
container_name: tpotinit
|
||||||
|
build: docker/.
|
||||||
|
env_file:
|
||||||
|
- .env
|
||||||
|
restart: always
|
||||||
|
tmpfs:
|
||||||
|
- /tmp/etc:uid=2000,gid=2000
|
||||||
|
- /tmp/:uid=2000,gid=2000
|
||||||
|
network_mode: "host"
|
||||||
|
cap_add:
|
||||||
|
- NET_ADMIN
|
||||||
|
image: ${TPOT_REPO}/tpotinit:${TPOT_VERSION}
|
||||||
|
pull_policy: ${TPOT_PULL_POLICY}
|
||||||
|
volumes:
|
||||||
|
- ${TPOT_DOCKER_COMPOSE}:/tmp/tpot/docker-compose.yml:ro
|
||||||
|
- ${TPOT_DATA_PATH}/blackhole:/etc/blackhole
|
||||||
|
- ${TPOT_DATA_PATH}:/data
|
||||||
|
|
||||||
|
|
||||||
|
##################
|
||||||
|
#### Honeypots
|
||||||
|
##################
|
||||||
|
|
||||||
|
# Adbhoney service
|
||||||
|
adbhoney:
|
||||||
|
container_name: adbhoney
|
||||||
|
restart: always
|
||||||
|
depends_on:
|
||||||
|
tpotinit:
|
||||||
|
condition: service_healthy
|
||||||
|
networks:
|
||||||
|
- adbhoney_local
|
||||||
|
ports:
|
||||||
|
- "5555:5555"
|
||||||
|
image: ${TPOT_REPO}/adbhoney:${TPOT_VERSION}
|
||||||
|
pull_policy: ${TPOT_PULL_POLICY}
|
||||||
|
read_only: true
|
||||||
|
volumes:
|
||||||
|
- ${TPOT_DATA_PATH}/adbhoney/log:/opt/adbhoney/log
|
||||||
|
- ${TPOT_DATA_PATH}/adbhoney/downloads:/opt/adbhoney/dl
|
||||||
|
|
||||||
|
# Ciscoasa service
|
||||||
|
ciscoasa:
|
||||||
|
container_name: ciscoasa
|
||||||
|
restart: always
|
||||||
|
depends_on:
|
||||||
|
tpotinit:
|
||||||
|
condition: service_healthy
|
||||||
|
tmpfs:
|
||||||
|
- /tmp/ciscoasa:uid=2000,gid=2000
|
||||||
|
networks:
|
||||||
|
- ciscoasa_local
|
||||||
|
ports:
|
||||||
|
- "5000:5000/udp"
|
||||||
|
- "8443:8443"
|
||||||
|
image: ${TPOT_REPO}/ciscoasa:${TPOT_VERSION}
|
||||||
|
pull_policy: ${TPOT_PULL_POLICY}
|
||||||
|
read_only: true
|
||||||
|
volumes:
|
||||||
|
- ${TPOT_DATA_PATH}/ciscoasa/log:/var/log/ciscoasa
|
||||||
|
|
||||||
|
# CitrixHoneypot service
|
||||||
|
citrixhoneypot:
|
||||||
|
container_name: citrixhoneypot
|
||||||
|
restart: always
|
||||||
|
depends_on:
|
||||||
|
tpotinit:
|
||||||
|
condition: service_healthy
|
||||||
|
networks:
|
||||||
|
- citrixhoneypot_local
|
||||||
|
ports:
|
||||||
|
- "443:443"
|
||||||
|
image: ${TPOT_REPO}/citrixhoneypot:${TPOT_VERSION}
|
||||||
|
pull_policy: ${TPOT_PULL_POLICY}
|
||||||
|
read_only: true
|
||||||
|
volumes:
|
||||||
|
- ${TPOT_DATA_PATH}/citrixhoneypot/logs:/opt/citrixhoneypot/logs
|
||||||
|
|
||||||
|
# Conpot IEC104 service
|
||||||
|
conpot_IEC104:
|
||||||
|
container_name: conpot_iec104
|
||||||
|
restart: always
|
||||||
|
depends_on:
|
||||||
|
tpotinit:
|
||||||
|
condition: service_healthy
|
||||||
|
environment:
|
||||||
|
- CONPOT_CONFIG=/etc/conpot/conpot.cfg
|
||||||
|
- CONPOT_JSON_LOG=/var/log/conpot/conpot_IEC104.json
|
||||||
|
- CONPOT_LOG=/var/log/conpot/conpot_IEC104.log
|
||||||
|
- CONPOT_TEMPLATE=IEC104
|
||||||
|
- CONPOT_TMP=/tmp/conpot
|
||||||
|
tmpfs:
|
||||||
|
- /tmp/conpot:uid=2000,gid=2000
|
||||||
|
networks:
|
||||||
|
- conpot_local_IEC104
|
||||||
|
ports:
|
||||||
|
- "161:161/udp"
|
||||||
|
- "2404:2404"
|
||||||
|
image: ${TPOT_REPO}/conpot:${TPOT_VERSION}
|
||||||
|
pull_policy: ${TPOT_PULL_POLICY}
|
||||||
|
read_only: true
|
||||||
|
volumes:
|
||||||
|
- ${TPOT_DATA_PATH}/conpot/log:/var/log/conpot
|
||||||
|
|
||||||
|
# Conpot guardian_ast service
|
||||||
|
conpot_guardian_ast:
|
||||||
|
container_name: conpot_guardian_ast
|
||||||
|
restart: always
|
||||||
|
depends_on:
|
||||||
|
tpotinit:
|
||||||
|
condition: service_healthy
|
||||||
|
environment:
|
||||||
|
- CONPOT_CONFIG=/etc/conpot/conpot.cfg
|
||||||
|
- CONPOT_JSON_LOG=/var/log/conpot/conpot_guardian_ast.json
|
||||||
|
- CONPOT_LOG=/var/log/conpot/conpot_guardian_ast.log
|
||||||
|
- CONPOT_TEMPLATE=guardian_ast
|
||||||
|
- CONPOT_TMP=/tmp/conpot
|
||||||
|
tmpfs:
|
||||||
|
- /tmp/conpot:uid=2000,gid=2000
|
||||||
|
networks:
|
||||||
|
- conpot_local_guardian_ast
|
||||||
|
ports:
|
||||||
|
- "10001:10001"
|
||||||
|
image: ${TPOT_REPO}/conpot:${TPOT_VERSION}
|
||||||
|
pull_policy: ${TPOT_PULL_POLICY}
|
||||||
|
read_only: true
|
||||||
|
volumes:
|
||||||
|
- ${TPOT_DATA_PATH}/conpot/log:/var/log/conpot
|
||||||
|
|
||||||
|
# Conpot ipmi
|
||||||
|
conpot_ipmi:
|
||||||
|
container_name: conpot_ipmi
|
||||||
|
restart: always
|
||||||
|
depends_on:
|
||||||
|
tpotinit:
|
||||||
|
condition: service_healthy
|
||||||
|
environment:
|
||||||
|
- CONPOT_CONFIG=/etc/conpot/conpot.cfg
|
||||||
|
- CONPOT_JSON_LOG=/var/log/conpot/conpot_ipmi.json
|
||||||
|
- CONPOT_LOG=/var/log/conpot/conpot_ipmi.log
|
||||||
|
- CONPOT_TEMPLATE=ipmi
|
||||||
|
- CONPOT_TMP=/tmp/conpot
|
||||||
|
tmpfs:
|
||||||
|
- /tmp/conpot:uid=2000,gid=2000
|
||||||
|
networks:
|
||||||
|
- conpot_local_ipmi
|
||||||
|
ports:
|
||||||
|
- "623:623/udp"
|
||||||
|
image: ${TPOT_REPO}/conpot:${TPOT_VERSION}
|
||||||
|
pull_policy: ${TPOT_PULL_POLICY}
|
||||||
|
read_only: true
|
||||||
|
volumes:
|
||||||
|
- ${TPOT_DATA_PATH}/conpot/log:/var/log/conpot
|
||||||
|
|
||||||
|
# Conpot kamstrup_382
|
||||||
|
conpot_kamstrup_382:
|
||||||
|
container_name: conpot_kamstrup_382
|
||||||
|
restart: always
|
||||||
|
depends_on:
|
||||||
|
tpotinit:
|
||||||
|
condition: service_healthy
|
||||||
|
environment:
|
||||||
|
- CONPOT_CONFIG=/etc/conpot/conpot.cfg
|
||||||
|
- CONPOT_JSON_LOG=/var/log/conpot/conpot_kamstrup_382.json
|
||||||
|
- CONPOT_LOG=/var/log/conpot/conpot_kamstrup_382.log
|
||||||
|
- CONPOT_TEMPLATE=kamstrup_382
|
||||||
|
- CONPOT_TMP=/tmp/conpot
|
||||||
|
tmpfs:
|
||||||
|
- /tmp/conpot:uid=2000,gid=2000
|
||||||
|
networks:
|
||||||
|
- conpot_local_kamstrup_382
|
||||||
|
ports:
|
||||||
|
- "1025:1025"
|
||||||
|
- "50100:50100"
|
||||||
|
image: ${TPOT_REPO}/conpot:${TPOT_VERSION}
|
||||||
|
pull_policy: ${TPOT_PULL_POLICY}
|
||||||
|
read_only: true
|
||||||
|
volumes:
|
||||||
|
- ${TPOT_DATA_PATH}/conpot/log:/var/log/conpot
|
||||||
|
|
||||||
|
# Cowrie service
|
||||||
|
cowrie:
|
||||||
|
container_name: cowrie
|
||||||
|
restart: always
|
||||||
|
depends_on:
|
||||||
|
tpotinit:
|
||||||
|
condition: service_healthy
|
||||||
|
tmpfs:
|
||||||
|
- /tmp/cowrie:uid=2000,gid=2000
|
||||||
|
- /tmp/cowrie/data:uid=2000,gid=2000
|
||||||
|
networks:
|
||||||
|
- cowrie_local
|
||||||
|
ports:
|
||||||
|
- "22:22"
|
||||||
|
- "23:23"
|
||||||
|
image: ${TPOT_REPO}/cowrie:${TPOT_VERSION}
|
||||||
|
pull_policy: ${TPOT_PULL_POLICY}
|
||||||
|
read_only: true
|
||||||
|
volumes:
|
||||||
|
- ${TPOT_DATA_PATH}/cowrie/downloads:/home/cowrie/cowrie/dl
|
||||||
|
- ${TPOT_DATA_PATH}/cowrie/keys:/home/cowrie/cowrie/etc
|
||||||
|
- ${TPOT_DATA_PATH}/cowrie/log:/home/cowrie/cowrie/log
|
||||||
|
- ${TPOT_DATA_PATH}/cowrie/log/tty:/home/cowrie/cowrie/log/tty
|
||||||
|
|
||||||
|
# Ddospot service
|
||||||
|
ddospot:
|
||||||
|
container_name: ddospot
|
||||||
|
restart: always
|
||||||
|
depends_on:
|
||||||
|
tpotinit:
|
||||||
|
condition: service_healthy
|
||||||
|
networks:
|
||||||
|
- ddospot_local
|
||||||
|
ports:
|
||||||
|
- "19:19/udp"
|
||||||
|
- "53:53/udp"
|
||||||
|
- "123:123/udp"
|
||||||
|
# - "161:161/udp"
|
||||||
|
- "1900:1900/udp"
|
||||||
|
image: ${TPOT_REPO}/ddospot:${TPOT_VERSION}
|
||||||
|
pull_policy: ${TPOT_PULL_POLICY}
|
||||||
|
read_only: true
|
||||||
|
volumes:
|
||||||
|
- ${TPOT_DATA_PATH}/ddospot/log:/opt/ddospot/ddospot/logs
|
||||||
|
- ${TPOT_DATA_PATH}/ddospot/bl:/opt/ddospot/ddospot/bl
|
||||||
|
- ${TPOT_DATA_PATH}/ddospot/db:/opt/ddospot/ddospot/db
|
||||||
|
|
||||||
|
# Dicompot service
|
||||||
|
# Get the Horos Client for testing: https://horosproject.org/
|
||||||
|
# Get Dicom images (CC BY 3.0): https://www.cancerimagingarchive.net/collections/
|
||||||
|
# Put images (which must be in Dicom DCM format or it will not work!) into /data/dicompot/images
|
||||||
|
dicompot:
|
||||||
|
container_name: dicompot
|
||||||
|
restart: always
|
||||||
|
depends_on:
|
||||||
|
tpotinit:
|
||||||
|
condition: service_healthy
|
||||||
|
networks:
|
||||||
|
- dicompot_local
|
||||||
|
ports:
|
||||||
|
- "11112:11112"
|
||||||
|
image: ${TPOT_REPO}/dicompot:${TPOT_VERSION}
|
||||||
|
pull_policy: ${TPOT_PULL_POLICY}
|
||||||
|
read_only: true
|
||||||
|
volumes:
|
||||||
|
- ${TPOT_DATA_PATH}/dicompot/log:/var/log/dicompot
|
||||||
|
# - ${TPOT_DATA_PATH}/dicompot/images:/opt/dicompot/images
|
||||||
|
|
||||||
|
# Dionaea service
|
||||||
|
dionaea:
|
||||||
|
container_name: dionaea
|
||||||
|
stdin_open: true
|
||||||
|
tty: true
|
||||||
|
restart: always
|
||||||
|
depends_on:
|
||||||
|
tpotinit:
|
||||||
|
condition: service_healthy
|
||||||
|
networks:
|
||||||
|
- dionaea_local
|
||||||
|
ports:
|
||||||
|
- "20:20"
|
||||||
|
- "21:21"
|
||||||
|
- "42:42"
|
||||||
|
- "69:69/udp"
|
||||||
|
- "81:81"
|
||||||
|
- "135:135"
|
||||||
|
# - "443:443"
|
||||||
|
- "445:445"
|
||||||
|
- "1433:1433"
|
||||||
|
- "1723:1723"
|
||||||
|
- "1883:1883"
|
||||||
|
- "3306:3306"
|
||||||
|
# - "5060:5060"
|
||||||
|
# - "5060:5060/udp"
|
||||||
|
# - "5061:5061"
|
||||||
|
- "27017:27017"
|
||||||
|
image: ${TPOT_REPO}/dionaea:${TPOT_VERSION}
|
||||||
|
pull_policy: ${TPOT_PULL_POLICY}
|
||||||
|
read_only: true
|
||||||
|
volumes:
|
||||||
|
- ${TPOT_DATA_PATH}/dionaea/roots/ftp:/opt/dionaea/var/dionaea/roots/ftp
|
||||||
|
- ${TPOT_DATA_PATH}/dionaea/roots/tftp:/opt/dionaea/var/dionaea/roots/tftp
|
||||||
|
- ${TPOT_DATA_PATH}/dionaea/roots/www:/opt/dionaea/var/dionaea/roots/www
|
||||||
|
- ${TPOT_DATA_PATH}/dionaea/roots/upnp:/opt/dionaea/var/dionaea/roots/upnp
|
||||||
|
- ${TPOT_DATA_PATH}/dionaea:/opt/dionaea/var/dionaea
|
||||||
|
- ${TPOT_DATA_PATH}/dionaea/binaries:/opt/dionaea/var/dionaea/binaries
|
||||||
|
- ${TPOT_DATA_PATH}/dionaea/log:/opt/dionaea/var/log
|
||||||
|
- ${TPOT_DATA_PATH}/dionaea/rtp:/opt/dionaea/var/dionaea/rtp
|
||||||
|
|
||||||
|
# ElasticPot service
|
||||||
|
elasticpot:
|
||||||
|
container_name: elasticpot
|
||||||
|
restart: always
|
||||||
|
depends_on:
|
||||||
|
tpotinit:
|
||||||
|
condition: service_healthy
|
||||||
|
networks:
|
||||||
|
- elasticpot_local
|
||||||
|
ports:
|
||||||
|
- "9200:9200"
|
||||||
|
image: ${TPOT_REPO}/elasticpot:${TPOT_VERSION}
|
||||||
|
pull_policy: ${TPOT_PULL_POLICY}
|
||||||
|
read_only: true
|
||||||
|
volumes:
|
||||||
|
- ${TPOT_DATA_PATH}/elasticpot/log:/opt/elasticpot/log
|
||||||
|
|
||||||
|
# Heralding service
|
||||||
|
heralding:
|
||||||
|
container_name: heralding
|
||||||
|
restart: always
|
||||||
|
depends_on:
|
||||||
|
tpotinit:
|
||||||
|
condition: service_healthy
|
||||||
|
tmpfs:
|
||||||
|
- /tmp/heralding:uid=2000,gid=2000
|
||||||
|
networks:
|
||||||
|
- heralding_local
|
||||||
|
ports:
|
||||||
|
# - "21:21"
|
||||||
|
# - "22:22"
|
||||||
|
# - "23:23"
|
||||||
|
# - "25:25"
|
||||||
|
# - "80:80"
|
||||||
|
- "110:110"
|
||||||
|
- "143:143"
|
||||||
|
# - "443:443"
|
||||||
|
- "465:465"
|
||||||
|
- "993:993"
|
||||||
|
- "995:995"
|
||||||
|
# - "3306:3306"
|
||||||
|
# - "3389:3389"
|
||||||
|
- "1080:1080"
|
||||||
|
- "5432:5432"
|
||||||
|
- "5900:5900"
|
||||||
|
image: ${TPOT_REPO}/heralding:${TPOT_VERSION}
|
||||||
|
pull_policy: ${TPOT_PULL_POLICY}
|
||||||
|
read_only: true
|
||||||
|
volumes:
|
||||||
|
- ${TPOT_DATA_PATH}/heralding/log:/var/log/heralding
|
||||||
|
|
||||||
|
# Honeytrap service
|
||||||
|
honeytrap:
|
||||||
|
container_name: honeytrap
|
||||||
|
restart: always
|
||||||
|
depends_on:
|
||||||
|
tpotinit:
|
||||||
|
condition: service_healthy
|
||||||
|
tmpfs:
|
||||||
|
- /tmp/honeytrap:uid=2000,gid=2000
|
||||||
|
network_mode: "host"
|
||||||
|
cap_add:
|
||||||
|
- NET_ADMIN
|
||||||
|
image: ${TPOT_REPO}/honeytrap:${TPOT_VERSION}
|
||||||
|
pull_policy: ${TPOT_PULL_POLICY}
|
||||||
|
read_only: true
|
||||||
|
volumes:
|
||||||
|
- ${TPOT_DATA_PATH}/honeytrap/attacks:/opt/honeytrap/var/attacks
|
||||||
|
- ${TPOT_DATA_PATH}/honeytrap/downloads:/opt/honeytrap/var/downloads
|
||||||
|
- ${TPOT_DATA_PATH}/honeytrap/log:/opt/honeytrap/var/log
|
||||||
|
|
||||||
|
# Ipphoney service
|
||||||
|
ipphoney:
|
||||||
|
container_name: ipphoney
|
||||||
|
restart: always
|
||||||
|
depends_on:
|
||||||
|
tpotinit:
|
||||||
|
condition: service_healthy
|
||||||
|
networks:
|
||||||
|
- ipphoney_local
|
||||||
|
ports:
|
||||||
|
- "631:631"
|
||||||
|
image: ${TPOT_REPO}/ipphoney:${TPOT_VERSION}
|
||||||
|
pull_policy: ${TPOT_PULL_POLICY}
|
||||||
|
read_only: true
|
||||||
|
volumes:
|
||||||
|
- ${TPOT_DATA_PATH}/ipphoney/log:/opt/ipphoney/log
|
||||||
|
|
||||||
|
# Mailoney service
|
||||||
|
mailoney:
|
||||||
|
container_name: mailoney
|
||||||
|
restart: always
|
||||||
|
depends_on:
|
||||||
|
tpotinit:
|
||||||
|
condition: service_healthy
|
||||||
|
environment:
|
||||||
|
- HPFEEDS_SERVER=
|
||||||
|
- HPFEEDS_IDENT=user
|
||||||
|
- HPFEEDS_SECRET=pass
|
||||||
|
- HPFEEDS_PORT=20000
|
||||||
|
- HPFEEDS_CHANNELPREFIX=prefix
|
||||||
|
networks:
|
||||||
|
- mailoney_local
|
||||||
|
ports:
|
||||||
|
- "25:25"
|
||||||
|
image: ${TPOT_REPO}/mailoney:${TPOT_VERSION}
|
||||||
|
pull_policy: ${TPOT_PULL_POLICY}
|
||||||
|
read_only: true
|
||||||
|
volumes:
|
||||||
|
- ${TPOT_DATA_PATH}/mailoney/log:/opt/mailoney/logs
|
||||||
|
|
||||||
|
# Medpot service
|
||||||
|
medpot:
|
||||||
|
container_name: medpot
|
||||||
|
restart: always
|
||||||
|
depends_on:
|
||||||
|
tpotinit:
|
||||||
|
condition: service_healthy
|
||||||
|
networks:
|
||||||
|
- medpot_local
|
||||||
|
ports:
|
||||||
|
- "2575:2575"
|
||||||
|
image: ${TPOT_REPO}/medpot:${TPOT_VERSION}
|
||||||
|
pull_policy: ${TPOT_PULL_POLICY}
|
||||||
|
read_only: true
|
||||||
|
volumes:
|
||||||
|
- ${TPOT_DATA_PATH}/medpot/log/:/var/log/medpot
|
||||||
|
|
||||||
|
# Redishoneypot service
|
||||||
|
redishoneypot:
|
||||||
|
container_name: redishoneypot
|
||||||
|
restart: always
|
||||||
|
depends_on:
|
||||||
|
tpotinit:
|
||||||
|
condition: service_healthy
|
||||||
|
networks:
|
||||||
|
- redishoneypot_local
|
||||||
|
ports:
|
||||||
|
- "6379:6379"
|
||||||
|
image: ${TPOT_REPO}/redishoneypot:${TPOT_VERSION}
|
||||||
|
pull_policy: ${TPOT_PULL_POLICY}
|
||||||
|
read_only: true
|
||||||
|
volumes:
|
||||||
|
- ${TPOT_DATA_PATH}/redishoneypot/log:/var/log/redishoneypot
|
||||||
|
|
||||||
|
# SentryPeer service
|
||||||
|
sentrypeer:
|
||||||
|
container_name: sentrypeer
|
||||||
|
restart: always
|
||||||
|
depends_on:
|
||||||
|
tpotinit:
|
||||||
|
condition: service_healthy
|
||||||
|
# SentryPeer offers to exchange bad actor data via DHT / P2P mode by setting the ENV to true (1)
|
||||||
|
# In some cases (i.e. internally deployed T-Pots) this might be confusing as SentryPeer will show
|
||||||
|
# the bad actors in its logs. Therefore this option is opt-in based.
|
||||||
|
# environment:
|
||||||
|
# - SENTRYPEER_PEER_TO_PEER=0
|
||||||
|
networks:
|
||||||
|
- sentrypeer_local
|
||||||
|
ports:
|
||||||
|
# - "4222:4222/udp"
|
||||||
|
- "5060:5060/udp"
|
||||||
|
# - "127.0.0.1:8082:8082"
|
||||||
|
image: ${TPOT_REPO}/sentrypeer:${TPOT_VERSION}
|
||||||
|
pull_policy: ${TPOT_PULL_POLICY}
|
||||||
|
read_only: true
|
||||||
|
volumes:
|
||||||
|
- ${TPOT_DATA_PATH}/sentrypeer/log:/var/log/sentrypeer
|
||||||
|
|
||||||
|
#### Snare / Tanner
## Tanner Redis Service
  tanner_redis:
    container_name: tanner_redis
    restart: always
    depends_on:
      tpotinit:
        condition: service_healthy
    tty: true
    networks:
     - tanner_local
    image: ${TPOT_REPO}/redis:${TPOT_VERSION}
    pull_policy: ${TPOT_PULL_POLICY}
    read_only: true

## PHP Sandbox service
  tanner_phpox:
    container_name: tanner_phpox
    restart: always
    depends_on:
      tpotinit:
        condition: service_healthy
    tty: true
    networks:
     - tanner_local
    image: ${TPOT_REPO}/phpox:${TPOT_VERSION}
    pull_policy: ${TPOT_PULL_POLICY}
    read_only: true

## Tanner API Service
  tanner_api:
    container_name: tanner_api
    restart: always
    depends_on:
     - tanner_redis
    tmpfs:
     - /tmp/tanner:uid=2000,gid=2000
    tty: true
    networks:
     - tanner_local
    image: ${TPOT_REPO}/tanner:${TPOT_VERSION}
    pull_policy: ${TPOT_PULL_POLICY}
    read_only: true
    volumes:
     - ${TPOT_DATA_PATH}/tanner/log:/var/log/tanner
    command: tannerapi

## Tanner Service
  tanner:
    container_name: tanner
    restart: always
    depends_on:
     - tanner_api
     - tanner_phpox
    tmpfs:
     - /tmp/tanner:uid=2000,gid=2000
    tty: true
    networks:
     - tanner_local
    image: ${TPOT_REPO}/tanner:${TPOT_VERSION}
    pull_policy: ${TPOT_PULL_POLICY}
    command: tanner
    read_only: true
    volumes:
     - ${TPOT_DATA_PATH}/tanner/log:/var/log/tanner
     - ${TPOT_DATA_PATH}/tanner/files:/opt/tanner/files

## Snare Service
  snare:
    container_name: snare
    restart: always
    depends_on:
     - tanner
    tty: true
    networks:
     - tanner_local
    ports:
     - "80:80"
    image: ${TPOT_REPO}/snare:${TPOT_VERSION}
    pull_policy: ${TPOT_PULL_POLICY}


##################
#### NSM
##################

# Fatt service
  fatt:
    container_name: fatt
    restart: always
    depends_on:
      tpotinit:
        condition: service_healthy
    network_mode: "host"
    cap_add:
     - NET_ADMIN
     - SYS_NICE
     - NET_RAW
    image: ${TPOT_REPO}/fatt:${TPOT_VERSION}
    pull_policy: ${TPOT_PULL_POLICY}
    volumes:
     - ${TPOT_DATA_PATH}/fatt/log:/opt/fatt/log

# P0f service
  p0f:
    container_name: p0f
    restart: always
    depends_on:
      tpotinit:
        condition: service_healthy
    network_mode: "host"
    image: ${TPOT_REPO}/p0f:${TPOT_VERSION}
    pull_policy: ${TPOT_PULL_POLICY}
    read_only: true
    volumes:
     - ${TPOT_DATA_PATH}/p0f/log:/var/log/p0f

# Suricata service
  suricata:
    container_name: suricata
    restart: always
    depends_on:
      tpotinit:
        condition: service_healthy
    environment:
    # For the ET Pro ruleset replace "OPEN" with your OINKCODE
     - OINKCODE=OPEN
    # Loading external rules from URL
    # - FROMURL="https://username:password@yoururl.com|https://username:password@otherurl.com"
    network_mode: "host"
    cap_add:
     - NET_ADMIN
     - SYS_NICE
     - NET_RAW
    image: ${TPOT_REPO}/suricata:${TPOT_VERSION}
    pull_policy: ${TPOT_PULL_POLICY}
    volumes:
     - ${TPOT_DATA_PATH}/suricata/log:/var/log/suricata


##################
#### Tools
##################

#### ELK
## Elasticsearch service
  elasticsearch:
    container_name: elasticsearch
    restart: always
    depends_on:
      tpotinit:
        condition: service_healthy
    environment:
     - bootstrap.memory_lock=true
     - ES_JAVA_OPTS=-Xms2048m -Xmx2048m
     - ES_TMPDIR=/tmp
    cap_add:
     - IPC_LOCK
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    mem_limit: 4g
    ports:
     - "127.0.0.1:64298:9200"
    image: ${TPOT_REPO}/elasticsearch:${TPOT_VERSION}
    pull_policy: ${TPOT_PULL_POLICY}
    volumes:
     - ${TPOT_DATA_PATH}:/data

## Kibana service
  kibana:
    container_name: kibana
    restart: always
    depends_on:
      elasticsearch:
        condition: service_healthy
    mem_limit: 1g
    ports:
     - "127.0.0.1:64296:5601"
    image: ${TPOT_REPO}/kibana:${TPOT_VERSION}
    pull_policy: ${TPOT_PULL_POLICY}

## Logstash service
  logstash:
    container_name: logstash
    restart: always
    depends_on:
      elasticsearch:
        condition: service_healthy
    environment:
     - LS_JAVA_OPTS=-Xms1024m -Xmx1024m
    mem_limit: 2g
    image: ${TPOT_REPO}/logstash:${TPOT_VERSION}
    pull_policy: ${TPOT_PULL_POLICY}
    volumes:
     - ${TPOT_DATA_PATH}:/data

## Map Redis Service
  map_redis:
    container_name: map_redis
    restart: always
    depends_on:
      tpotinit:
        condition: service_healthy
    stop_signal: SIGKILL
    tty: true
    image: ${TPOT_REPO}/redis:${TPOT_VERSION}
    pull_policy: ${TPOT_PULL_POLICY}
    read_only: true

## Map Web Service
  map_web:
    container_name: map_web
    restart: always
    depends_on:
      tpotinit:
        condition: service_healthy
    environment:
     - MAP_COMMAND=AttackMapServer.py
    stop_signal: SIGKILL
    tty: true
    ports:
     - "127.0.0.1:64299:64299"
    image: ${TPOT_REPO}/map:${TPOT_VERSION}
    pull_policy: ${TPOT_PULL_POLICY}

## Map Data Service
  map_data:
    container_name: map_data
    restart: always
    depends_on:
      elasticsearch:
        condition: service_healthy
    environment:
     - MAP_COMMAND=DataServer_v2.py
    stop_signal: SIGKILL
    tty: true
    image: ${TPOT_REPO}/map:${TPOT_VERSION}
    pull_policy: ${TPOT_PULL_POLICY}
#### /ELK

# Ewsposter service
  ewsposter:
    container_name: ewsposter
    restart: always
    depends_on:
      tpotinit:
        condition: service_healthy
    networks:
     - ewsposter_local
    environment:
     - EWS_HPFEEDS_ENABLE=false
     - EWS_HPFEEDS_HOST=host
     - EWS_HPFEEDS_PORT=port
     - EWS_HPFEEDS_CHANNELS=channels
     - EWS_HPFEEDS_IDENT=user
     - EWS_HPFEEDS_SECRET=secret
     - EWS_HPFEEDS_TLSCERT=false
     - EWS_HPFEEDS_FORMAT=json
    image: ${TPOT_REPO}/ewsposter:${TPOT_VERSION}
    pull_policy: ${TPOT_PULL_POLICY}
    volumes:
     - ${TPOT_DATA_PATH}:/data
     - ${TPOT_DATA_PATH}/ews/conf/ews.ip:/opt/ewsposter/ews.ip

# Nginx service
  nginx:
    container_name: nginx
    restart: always
    environment:
     - COCKPIT=${COCKPIT}
     - TPOT_OSTYPE=${TPOT_OSTYPE}
    depends_on:
      tpotinit:
        condition: service_healthy
    tmpfs:
     - /var/tmp/nginx/client_body
     - /var/tmp/nginx/proxy
     - /var/tmp/nginx/fastcgi
     - /var/tmp/nginx/uwsgi
     - /var/tmp/nginx/scgi
     - /run
     - /var/lib/nginx/tmp:uid=100,gid=82
    network_mode: "host"
    ports:
     - "64297:64297"
     - "127.0.0.1:64304:64304"
    image: ${TPOT_REPO}/nginx:${TPOT_VERSION}
    pull_policy: ${TPOT_PULL_POLICY}
    read_only: true
    volumes:
     - ${TPOT_DATA_PATH}/nginx/cert/:/etc/nginx/cert/:ro
     - ${TPOT_DATA_PATH}/nginx/conf/nginxpasswd:/etc/nginx/nginxpasswd:ro
     - ${TPOT_DATA_PATH}/nginx/log/:/var/log/nginx/

# Spiderfoot service
  spiderfoot:
    container_name: spiderfoot
    restart: always
    depends_on:
      tpotinit:
        condition: service_healthy
    networks:
     - spiderfoot_local
    ports:
     - "127.0.0.1:64303:8080"
    image: ${TPOT_REPO}/spiderfoot:${TPOT_VERSION}
    pull_policy: ${TPOT_PULL_POLICY}
    volumes:
     - ${TPOT_DATA_PATH}/spiderfoot:/home/spiderfoot/.spiderfoot
807 preview/docker-compose.yml Normal file
@@ -0,0 +1,807 @@
version: '3.9'

networks:
  adbhoney_local:
  ciscoasa_local:
  citrixhoneypot_local:
  conpot_local_IEC104:
  conpot_local_guardian_ast:
  conpot_local_ipmi:
  conpot_local_kamstrup_382:
  cowrie_local:
  ddospot_local:
  dicompot_local:
  dionaea_local:
  elasticpot_local:
  heralding_local:
  ipphoney_local:
  mailoney_local:
  medpot_local:
  redishoneypot_local:
  sentrypeer_local:
  tanner_local:
  spiderfoot_local:
  ewsposter_local:

services:

##############################################
#### DEV
##############################################
#### T-Pot Light Init - Never delete this!
##############################################

# T-Pot Init Service
  tpotinit:
    container_name: tpotinit
    build: docker/.
    env_file:
     - .env
    restart: always
    tmpfs:
     - /tmp/etc:uid=2000,gid=2000
     - /tmp/:uid=2000,gid=2000
    network_mode: "host"
    cap_add:
     - NET_ADMIN
    image: ${TPOT_REPO}/tpotinit:${TPOT_VERSION}
    pull_policy: ${TPOT_PULL_POLICY}
    volumes:
     - ${TPOT_DOCKER_COMPOSE}:/tmp/tpot/docker-compose.yml:ro
     - ${TPOT_DATA_PATH}/blackhole:/etc/blackhole
     - ${TPOT_DATA_PATH}:/data


##################
#### Honeypots
##################

# Adbhoney service
  adbhoney:
    container_name: adbhoney
    restart: always
    depends_on:
      tpotinit:
        condition: service_healthy
    networks:
     - adbhoney_local
    ports:
     - "5555:5555"
    image: ${TPOT_REPO}/adbhoney:${TPOT_VERSION}
    pull_policy: ${TPOT_PULL_POLICY}
    read_only: true
    volumes:
     - ${TPOT_DATA_PATH}/adbhoney/log:/opt/adbhoney/log
     - ${TPOT_DATA_PATH}/adbhoney/downloads:/opt/adbhoney/dl

# Ciscoasa service
  ciscoasa:
    container_name: ciscoasa
    restart: always
    depends_on:
      tpotinit:
        condition: service_healthy
    tmpfs:
     - /tmp/ciscoasa:uid=2000,gid=2000
    networks:
     - ciscoasa_local
    ports:
     - "5000:5000/udp"
     - "8443:8443"
    image: ${TPOT_REPO}/ciscoasa:${TPOT_VERSION}
    pull_policy: ${TPOT_PULL_POLICY}
    read_only: true
    volumes:
     - ${TPOT_DATA_PATH}/ciscoasa/log:/var/log/ciscoasa

# CitrixHoneypot service
  citrixhoneypot:
    container_name: citrixhoneypot
    restart: always
    depends_on:
      tpotinit:
        condition: service_healthy
    networks:
     - citrixhoneypot_local
    ports:
     - "443:443"
    image: ${TPOT_REPO}/citrixhoneypot:${TPOT_VERSION}
    pull_policy: ${TPOT_PULL_POLICY}
    read_only: true
    volumes:
     - ${TPOT_DATA_PATH}/citrixhoneypot/logs:/opt/citrixhoneypot/logs

# Conpot IEC104 service
  conpot_IEC104:
    container_name: conpot_iec104
    restart: always
    depends_on:
      tpotinit:
        condition: service_healthy
    environment:
     - CONPOT_CONFIG=/etc/conpot/conpot.cfg
     - CONPOT_JSON_LOG=/var/log/conpot/conpot_IEC104.json
     - CONPOT_LOG=/var/log/conpot/conpot_IEC104.log
     - CONPOT_TEMPLATE=IEC104
     - CONPOT_TMP=/tmp/conpot
    tmpfs:
     - /tmp/conpot:uid=2000,gid=2000
    networks:
     - conpot_local_IEC104
    ports:
     - "161:161/udp"
     - "2404:2404"
    image: ${TPOT_REPO}/conpot:${TPOT_VERSION}
    pull_policy: ${TPOT_PULL_POLICY}
    read_only: true
    volumes:
     - ${TPOT_DATA_PATH}/conpot/log:/var/log/conpot

# Conpot guardian_ast service
  conpot_guardian_ast:
    container_name: conpot_guardian_ast
    restart: always
    depends_on:
      tpotinit:
        condition: service_healthy
    environment:
     - CONPOT_CONFIG=/etc/conpot/conpot.cfg
     - CONPOT_JSON_LOG=/var/log/conpot/conpot_guardian_ast.json
     - CONPOT_LOG=/var/log/conpot/conpot_guardian_ast.log
     - CONPOT_TEMPLATE=guardian_ast
     - CONPOT_TMP=/tmp/conpot
    tmpfs:
     - /tmp/conpot:uid=2000,gid=2000
    networks:
     - conpot_local_guardian_ast
    ports:
     - "10001:10001"
    image: ${TPOT_REPO}/conpot:${TPOT_VERSION}
    pull_policy: ${TPOT_PULL_POLICY}
    read_only: true
    volumes:
     - ${TPOT_DATA_PATH}/conpot/log:/var/log/conpot

# Conpot ipmi
  conpot_ipmi:
    container_name: conpot_ipmi
    restart: always
    depends_on:
      tpotinit:
        condition: service_healthy
    environment:
     - CONPOT_CONFIG=/etc/conpot/conpot.cfg
     - CONPOT_JSON_LOG=/var/log/conpot/conpot_ipmi.json
     - CONPOT_LOG=/var/log/conpot/conpot_ipmi.log
     - CONPOT_TEMPLATE=ipmi
     - CONPOT_TMP=/tmp/conpot
    tmpfs:
     - /tmp/conpot:uid=2000,gid=2000
    networks:
     - conpot_local_ipmi
    ports:
     - "623:623/udp"
    image: ${TPOT_REPO}/conpot:${TPOT_VERSION}
    pull_policy: ${TPOT_PULL_POLICY}
    read_only: true
    volumes:
     - ${TPOT_DATA_PATH}/conpot/log:/var/log/conpot

# Conpot kamstrup_382
  conpot_kamstrup_382:
    container_name: conpot_kamstrup_382
    restart: always
    depends_on:
      tpotinit:
        condition: service_healthy
    environment:
     - CONPOT_CONFIG=/etc/conpot/conpot.cfg
     - CONPOT_JSON_LOG=/var/log/conpot/conpot_kamstrup_382.json
     - CONPOT_LOG=/var/log/conpot/conpot_kamstrup_382.log
     - CONPOT_TEMPLATE=kamstrup_382
     - CONPOT_TMP=/tmp/conpot
    tmpfs:
     - /tmp/conpot:uid=2000,gid=2000
    networks:
     - conpot_local_kamstrup_382
    ports:
     - "1025:1025"
     - "50100:50100"
    image: ${TPOT_REPO}/conpot:${TPOT_VERSION}
    pull_policy: ${TPOT_PULL_POLICY}
    read_only: true
    volumes:
     - ${TPOT_DATA_PATH}/conpot/log:/var/log/conpot

# Cowrie service
  cowrie:
    container_name: cowrie
    restart: always
    depends_on:
      tpotinit:
        condition: service_healthy
    tmpfs:
     - /tmp/cowrie:uid=2000,gid=2000
     - /tmp/cowrie/data:uid=2000,gid=2000
    networks:
     - cowrie_local
    ports:
     - "22:22"
     - "23:23"
    image: ${TPOT_REPO}/cowrie:${TPOT_VERSION}
    pull_policy: ${TPOT_PULL_POLICY}
    read_only: true
    volumes:
     - ${TPOT_DATA_PATH}/cowrie/downloads:/home/cowrie/cowrie/dl
     - ${TPOT_DATA_PATH}/cowrie/keys:/home/cowrie/cowrie/etc
     - ${TPOT_DATA_PATH}/cowrie/log:/home/cowrie/cowrie/log
     - ${TPOT_DATA_PATH}/cowrie/log/tty:/home/cowrie/cowrie/log/tty

# Ddospot service
  ddospot:
    container_name: ddospot
    restart: always
    depends_on:
      tpotinit:
        condition: service_healthy
    networks:
     - ddospot_local
    ports:
     - "19:19/udp"
     - "53:53/udp"
     - "123:123/udp"
    # - "161:161/udp"
     - "1900:1900/udp"
    image: ${TPOT_REPO}/ddospot:${TPOT_VERSION}
    pull_policy: ${TPOT_PULL_POLICY}
    read_only: true
    volumes:
     - ${TPOT_DATA_PATH}/ddospot/log:/opt/ddospot/ddospot/logs
     - ${TPOT_DATA_PATH}/ddospot/bl:/opt/ddospot/ddospot/bl
     - ${TPOT_DATA_PATH}/ddospot/db:/opt/ddospot/ddospot/db

# Dicompot service
# Get the Horos Client for testing: https://horosproject.org/
# Get DICOM images (CC BY 3.0): https://www.cancerimagingarchive.net/collections/
# Put images (which must be in DICOM DCM format or they will not work!) into /data/dicompot/images
  dicompot:
    container_name: dicompot
    restart: always
    depends_on:
      tpotinit:
        condition: service_healthy
    networks:
     - dicompot_local
    ports:
     - "11112:11112"
    image: ${TPOT_REPO}/dicompot:${TPOT_VERSION}
    pull_policy: ${TPOT_PULL_POLICY}
    read_only: true
    volumes:
     - ${TPOT_DATA_PATH}/dicompot/log:/var/log/dicompot
    # - ${TPOT_DATA_PATH}/dicompot/images:/opt/dicompot/images

# Dionaea service
  dionaea:
    container_name: dionaea
    stdin_open: true
    tty: true
    restart: always
    depends_on:
      tpotinit:
        condition: service_healthy
    networks:
     - dionaea_local
    ports:
     - "20:20"
     - "21:21"
     - "42:42"
     - "69:69/udp"
     - "81:81"
     - "135:135"
    # - "443:443"
     - "445:445"
     - "1433:1433"
     - "1723:1723"
     - "1883:1883"
     - "3306:3306"
    # - "5060:5060"
    # - "5060:5060/udp"
    # - "5061:5061"
     - "27017:27017"
    image: ${TPOT_REPO}/dionaea:${TPOT_VERSION}
    pull_policy: ${TPOT_PULL_POLICY}
    read_only: true
    volumes:
     - ${TPOT_DATA_PATH}/dionaea/roots/ftp:/opt/dionaea/var/dionaea/roots/ftp
     - ${TPOT_DATA_PATH}/dionaea/roots/tftp:/opt/dionaea/var/dionaea/roots/tftp
     - ${TPOT_DATA_PATH}/dionaea/roots/www:/opt/dionaea/var/dionaea/roots/www
     - ${TPOT_DATA_PATH}/dionaea/roots/upnp:/opt/dionaea/var/dionaea/roots/upnp
     - ${TPOT_DATA_PATH}/dionaea:/opt/dionaea/var/dionaea
     - ${TPOT_DATA_PATH}/dionaea/binaries:/opt/dionaea/var/dionaea/binaries
     - ${TPOT_DATA_PATH}/dionaea/log:/opt/dionaea/var/log
     - ${TPOT_DATA_PATH}/dionaea/rtp:/opt/dionaea/var/dionaea/rtp

# ElasticPot service
  elasticpot:
    container_name: elasticpot
    restart: always
    depends_on:
      tpotinit:
        condition: service_healthy
    networks:
     - elasticpot_local
    ports:
     - "9200:9200"
    image: ${TPOT_REPO}/elasticpot:${TPOT_VERSION}
    pull_policy: ${TPOT_PULL_POLICY}
    read_only: true
    volumes:
     - ${TPOT_DATA_PATH}/elasticpot/log:/opt/elasticpot/log

# Heralding service
  heralding:
    container_name: heralding
    restart: always
    depends_on:
      tpotinit:
        condition: service_healthy
    tmpfs:
     - /tmp/heralding:uid=2000,gid=2000
    networks:
     - heralding_local
    ports:
    # - "21:21"
    # - "22:22"
    # - "23:23"
    # - "25:25"
    # - "80:80"
     - "110:110"
     - "143:143"
    # - "443:443"
     - "465:465"
     - "993:993"
     - "995:995"
    # - "3306:3306"
    # - "3389:3389"
     - "1080:1080"
     - "5432:5432"
     - "5900:5900"
    image: ${TPOT_REPO}/heralding:${TPOT_VERSION}
    pull_policy: ${TPOT_PULL_POLICY}
    read_only: true
    volumes:
     - ${TPOT_DATA_PATH}/heralding/log:/var/log/heralding

# Honeytrap service
  honeytrap:
    container_name: honeytrap
    restart: always
    depends_on:
      tpotinit:
        condition: service_healthy
    tmpfs:
     - /tmp/honeytrap:uid=2000,gid=2000
    network_mode: "host"
    cap_add:
     - NET_ADMIN
    image: ${TPOT_REPO}/honeytrap:${TPOT_VERSION}
    pull_policy: ${TPOT_PULL_POLICY}
    read_only: true
    volumes:
     - ${TPOT_DATA_PATH}/honeytrap/attacks:/opt/honeytrap/var/attacks
     - ${TPOT_DATA_PATH}/honeytrap/downloads:/opt/honeytrap/var/downloads
     - ${TPOT_DATA_PATH}/honeytrap/log:/opt/honeytrap/var/log

# Ipphoney service
  ipphoney:
    container_name: ipphoney
    restart: always
    depends_on:
      tpotinit:
        condition: service_healthy
    networks:
     - ipphoney_local
    ports:
     - "631:631"
    image: ${TPOT_REPO}/ipphoney:${TPOT_VERSION}
    pull_policy: ${TPOT_PULL_POLICY}
    read_only: true
    volumes:
     - ${TPOT_DATA_PATH}/ipphoney/log:/opt/ipphoney/log

# Mailoney service
  mailoney:
    container_name: mailoney
    restart: always
    depends_on:
      tpotinit:
        condition: service_healthy
    environment:
     - HPFEEDS_SERVER=
     - HPFEEDS_IDENT=user
     - HPFEEDS_SECRET=pass
     - HPFEEDS_PORT=20000
     - HPFEEDS_CHANNELPREFIX=prefix
    networks:
     - mailoney_local
    ports:
     - "25:25"
    image: ${TPOT_REPO}/mailoney:${TPOT_VERSION}
    pull_policy: ${TPOT_PULL_POLICY}
    read_only: true
    volumes:
     - ${TPOT_DATA_PATH}/mailoney/log:/opt/mailoney/logs

# Medpot service
  medpot:
    container_name: medpot
    restart: always
    depends_on:
      tpotinit:
        condition: service_healthy
    networks:
     - medpot_local
    ports:
     - "2575:2575"
    image: ${TPOT_REPO}/medpot:${TPOT_VERSION}
    pull_policy: ${TPOT_PULL_POLICY}
    read_only: true
    volumes:
     - ${TPOT_DATA_PATH}/medpot/log/:/var/log/medpot

# Redishoneypot service
  redishoneypot:
    container_name: redishoneypot
    restart: always
    depends_on:
      tpotinit:
        condition: service_healthy
    networks:
     - redishoneypot_local
    ports:
     - "6379:6379"
    image: ${TPOT_REPO}/redishoneypot:${TPOT_VERSION}
    pull_policy: ${TPOT_PULL_POLICY}
    read_only: true
    volumes:
     - ${TPOT_DATA_PATH}/redishoneypot/log:/var/log/redishoneypot

# SentryPeer service
  sentrypeer:
    container_name: sentrypeer
    restart: always
    depends_on:
      tpotinit:
        condition: service_healthy
    # SentryPeer offers to exchange bad actor data via DHT / P2P mode; enable it by setting the ENV to true (1).
    # In some cases (e.g. internally deployed T-Pots) this might be confusing, as SentryPeer will show
    # the bad actors in its logs. Therefore this option is opt-in.
    # environment:
    #  - SENTRYPEER_PEER_TO_PEER=0
    networks:
     - sentrypeer_local
    ports:
    # - "4222:4222/udp"
     - "5060:5060/udp"
    # - "127.0.0.1:8082:8082"
    image: ${TPOT_REPO}/sentrypeer:${TPOT_VERSION}
    pull_policy: ${TPOT_PULL_POLICY}
    read_only: true
    volumes:
     - ${TPOT_DATA_PATH}/sentrypeer/log:/var/log/sentrypeer

#### Snare / Tanner
## Tanner Redis Service
  tanner_redis:
    container_name: tanner_redis
    restart: always
    depends_on:
      tpotinit:
        condition: service_healthy
    tty: true
    networks:
     - tanner_local
    image: ${TPOT_REPO}/redis:${TPOT_VERSION}
    pull_policy: ${TPOT_PULL_POLICY}
    read_only: true

## PHP Sandbox service
  tanner_phpox:
    container_name: tanner_phpox
    restart: always
    depends_on:
      tpotinit:
        condition: service_healthy
    tty: true
    networks:
     - tanner_local
    image: ${TPOT_REPO}/phpox:${TPOT_VERSION}
    pull_policy: ${TPOT_PULL_POLICY}
    read_only: true

## Tanner API Service
  tanner_api:
    container_name: tanner_api
    restart: always
    depends_on:
     - tanner_redis
    tmpfs:
     - /tmp/tanner:uid=2000,gid=2000
    tty: true
    networks:
     - tanner_local
    image: ${TPOT_REPO}/tanner:${TPOT_VERSION}
    pull_policy: ${TPOT_PULL_POLICY}
    read_only: true
    volumes:
     - ${TPOT_DATA_PATH}/tanner/log:/var/log/tanner
    command: tannerapi

## Tanner Service
  tanner:
    container_name: tanner
    restart: always
    depends_on:
     - tanner_api
     - tanner_phpox
    tmpfs:
     - /tmp/tanner:uid=2000,gid=2000
    tty: true
    networks:
     - tanner_local
    image: ${TPOT_REPO}/tanner:${TPOT_VERSION}
    pull_policy: ${TPOT_PULL_POLICY}
    command: tanner
    read_only: true
    volumes:
     - ${TPOT_DATA_PATH}/tanner/log:/var/log/tanner
     - ${TPOT_DATA_PATH}/tanner/files:/opt/tanner/files

## Snare Service
  snare:
    container_name: snare
    restart: always
    depends_on:
     - tanner
    tty: true
    networks:
     - tanner_local
    ports:
     - "80:80"
    image: ${TPOT_REPO}/snare:${TPOT_VERSION}
    pull_policy: ${TPOT_PULL_POLICY}


##################
#### NSM
##################

# Fatt service
  fatt:
    container_name: fatt
    restart: always
    depends_on:
      tpotinit:
        condition: service_healthy
    network_mode: "host"
    cap_add:
     - NET_ADMIN
     - SYS_NICE
     - NET_RAW
    image: ${TPOT_REPO}/fatt:${TPOT_VERSION}
    pull_policy: ${TPOT_PULL_POLICY}
    volumes:
     - ${TPOT_DATA_PATH}/fatt/log:/opt/fatt/log

# P0f service
  p0f:
    container_name: p0f
    restart: always
    depends_on:
      tpotinit:
        condition: service_healthy
    network_mode: "host"
    image: ${TPOT_REPO}/p0f:${TPOT_VERSION}
    pull_policy: ${TPOT_PULL_POLICY}
    read_only: true
    volumes:
     - ${TPOT_DATA_PATH}/p0f/log:/var/log/p0f

# Suricata service
  suricata:
    container_name: suricata
    restart: always
    depends_on:
      tpotinit:
        condition: service_healthy
    environment:
    # For the ET Pro ruleset replace "OPEN" with your OINKCODE
     - OINKCODE=OPEN
    # Loading external rules from URL
    # - FROMURL="https://username:password@yoururl.com|https://username:password@otherurl.com"
    network_mode: "host"
    cap_add:
     - NET_ADMIN
     - SYS_NICE
     - NET_RAW
    image: ${TPOT_REPO}/suricata:${TPOT_VERSION}
    pull_policy: ${TPOT_PULL_POLICY}
    volumes:
     - ${TPOT_DATA_PATH}/suricata/log:/var/log/suricata


##################
#### Tools
##################

#### ELK
## Elasticsearch service
  elasticsearch:
    container_name: elasticsearch
    restart: always
    depends_on:
      tpotinit:
        condition: service_healthy
    environment:
     - bootstrap.memory_lock=true
     - ES_JAVA_OPTS=-Xms2048m -Xmx2048m
     - ES_TMPDIR=/tmp
    cap_add:
     - IPC_LOCK
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    mem_limit: 4g
    ports:
     - "127.0.0.1:64298:9200"
    image: ${TPOT_REPO}/elasticsearch:${TPOT_VERSION}
    pull_policy: ${TPOT_PULL_POLICY}
    volumes:
     - ${TPOT_DATA_PATH}:/data

## Kibana service
  kibana:
    container_name: kibana
    restart: always
    depends_on:
      elasticsearch:
        condition: service_healthy
    mem_limit: 1g
    ports:
     - "127.0.0.1:64296:5601"
    image: ${TPOT_REPO}/kibana:${TPOT_VERSION}
    pull_policy: ${TPOT_PULL_POLICY}

## Logstash service
  logstash:
    container_name: logstash
    restart: always
    depends_on:
      elasticsearch:
        condition: service_healthy
    environment:
     - LS_JAVA_OPTS=-Xms1024m -Xmx1024m
    mem_limit: 2g
    image: ${TPOT_REPO}/logstash:${TPOT_VERSION}
    pull_policy: ${TPOT_PULL_POLICY}
    volumes:
     - ${TPOT_DATA_PATH}:/data

## Map Redis Service
  map_redis:
    container_name: map_redis
    restart: always
    depends_on:
      tpotinit:
        condition: service_healthy
    stop_signal: SIGKILL
    tty: true
    image: ${TPOT_REPO}/redis:${TPOT_VERSION}
    pull_policy: ${TPOT_PULL_POLICY}
    read_only: true

## Map Web Service
  map_web:
    container_name: map_web
    restart: always
    depends_on:
      tpotinit:
        condition: service_healthy
    environment:
     - MAP_COMMAND=AttackMapServer.py
    stop_signal: SIGKILL
    tty: true
    ports:
     - "127.0.0.1:64299:64299"
    image: ${TPOT_REPO}/map:${TPOT_VERSION}
    pull_policy: ${TPOT_PULL_POLICY}

## Map Data Service
  map_data:
    container_name: map_data
    restart: always
    depends_on:
      elasticsearch:
        condition: service_healthy
    environment:
     - MAP_COMMAND=DataServer_v2.py
    stop_signal: SIGKILL
    tty: true
    image: ${TPOT_REPO}/map:${TPOT_VERSION}
    pull_policy: ${TPOT_PULL_POLICY}
#### /ELK

# Ewsposter service
|
||||||
|
ewsposter:
|
||||||
|
container_name: ewsposter
|
||||||
|
restart: always
|
||||||
|
depends_on:
|
||||||
|
tpotinit:
|
||||||
|
condition: service_healthy
|
||||||
|
networks:
|
||||||
|
- ewsposter_local
|
||||||
|
environment:
|
||||||
|
- EWS_HPFEEDS_ENABLE=false
|
||||||
|
- EWS_HPFEEDS_HOST=host
|
||||||
|
- EWS_HPFEEDS_PORT=port
|
||||||
|
- EWS_HPFEEDS_CHANNELS=channels
|
||||||
|
- EWS_HPFEEDS_IDENT=user
|
||||||
|
- EWS_HPFEEDS_SECRET=secret
|
||||||
|
- EWS_HPFEEDS_TLSCERT=false
|
||||||
|
- EWS_HPFEEDS_FORMAT=json
|
||||||
|
image: ${TPOT_REPO}/ewsposter:${TPOT_VERSION}
|
||||||
|
pull_policy: ${TPOT_PULL_POLICY}
|
||||||
|
volumes:
|
||||||
|
- ${TPOT_DATA_PATH}:/data
|
||||||
|
- ${TPOT_DATA_PATH}/ews/conf/ews.ip:/opt/ewsposter/ews.ip
|
||||||
|
|
||||||
|
# Nginx service
|
||||||
|
nginx:
|
||||||
|
container_name: nginx
|
||||||
|
restart: always
|
||||||
|
environment:
|
||||||
|
- COCKPIT=${COCKPIT}
|
||||||
|
- TPOT_OSTYPE=${TPOT_OSTYPE}
|
||||||
|
depends_on:
|
||||||
|
tpotinit:
|
||||||
|
condition: service_healthy
|
||||||
|
tmpfs:
|
||||||
|
- /var/tmp/nginx/client_body
|
||||||
|
- /var/tmp/nginx/proxy
|
||||||
|
- /var/tmp/nginx/fastcgi
|
||||||
|
- /var/tmp/nginx/uwsgi
|
||||||
|
- /var/tmp/nginx/scgi
|
||||||
|
- /run
|
||||||
|
- /var/lib/nginx/tmp:uid=100,gid=82
|
||||||
|
network_mode: "host"
|
||||||
|
ports:
|
||||||
|
- "64297:64297"
|
||||||
|
- "127.0.0.1:64304:64304"
|
||||||
|
image: ${TPOT_REPO}/nginx:${TPOT_VERSION}
|
||||||
|
pull_policy: ${TPOT_PULL_POLICY}
|
||||||
|
read_only: true
|
||||||
|
volumes:
|
||||||
|
- ${TPOT_DATA_PATH}/nginx/cert/:/etc/nginx/cert/:ro
|
||||||
|
- ${TPOT_DATA_PATH}/nginx/conf/nginxpasswd:/etc/nginx/nginxpasswd:ro
|
||||||
|
- ${TPOT_DATA_PATH}/nginx/log/:/var/log/nginx/
|
||||||
|
|
||||||
|
# Spiderfoot service
|
||||||
|
spiderfoot:
|
||||||
|
container_name: spiderfoot
|
||||||
|
restart: always
|
||||||
|
depends_on:
|
||||||
|
tpotinit:
|
||||||
|
condition: service_healthy
|
||||||
|
networks:
|
||||||
|
- spiderfoot_local
|
||||||
|
ports:
|
||||||
|
- "127.0.0.1:64303:8080"
|
||||||
|
image: ${TPOT_REPO}/spiderfoot:${TPOT_VERSION}
|
||||||
|
pull_policy: ${TPOT_PULL_POLICY}
|
||||||
|
volumes:
|
||||||
|
- ${TPOT_DATA_PATH}/spiderfoot:/home/spiderfoot/.spiderfoot
|
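Note that every dashboard service above (Kibana, map, Spiderfoot) binds to 127.0.0.1 and is only reachable through the nginx reverse proxy on tcp/64297. A minimal sketch for auditing which host ports a compose file actually publishes, e.g. to spot conflicts before startup; the helper name and regex are illustrative, not part of T-Pot:

```shell
# Hypothetical helper (not shipped with T-Pot): list the host ports a compose
# file publishes, for comparison against ports already in use on the host.
list_published_ports() {
  # match quoted port mappings such as "127.0.0.1:64296:5601" or "64297:64297",
  # then keep the host-side port (second-to-last colon-separated field)
  grep -Eo '"([0-9.]+:)?[0-9]+:[0-9]+"' "$1" | tr -d '"' \
    | awk -F: '{print $(NF-1)}' | sort -n | uniq
}
```

Running it against the compose file and comparing with `ss -tulpen` output is one way to catch a conflicting service before `docker compose up`.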
61
preview/docker/Dockerfile
Normal file
@@ -0,0 +1,61 @@
FROM alpine:edge
#
# Include dist
COPY dist/ /root/dist/
#
# Get and install dependencies & packages
RUN apk --no-cache -U add \
      aria2 \
      apache2-utils \
      bash \
      bind-tools \
      conntrack-tools \
      curl \
      ethtool \
      figlet \
      git \
      grep \
      iproute2 \
      iptables \
      jq \
      logrotate \
      lsblk \
      net-tools \
      openssl \
      pigz \
      tar \
      uuidgen && \
    apk --no-cache -U add --repository=https://dl-cdn.alpinelinux.org/alpine/edge/community \
      yq && \
#
# Setup user
    addgroup -g 2000 tpot && \
    adduser -S -s /bin/ash -u 2000 -D -g 2000 tpot && \
#
# Install tpot
    git clone --depth=1 https://github.com/telekom-security/tpotce /opt/tpot && \
    cd /opt/tpot && \
    sed -i "s#/opt/tpot/etc/logrotate/status#/data/tpot/etc/logrotate/status#g" bin/clean.sh && \
    sed -i "s#/opt/tpot/etc/compose/elk_environment#/data/tpot/etc/compose/elk_environment#g" bin/clean.sh && \
    sed -i "s#/usr/sbin/iptables-legacy#/sbin/iptables-legacy#g" bin/rules.sh && \
    sed -i "s/tr -d '\", '/tr -d '\", ,#,-'/g" bin/rules.sh && \
    sed -i "s#/opt/tpot/etc/compose/elk_environment#/data/tpot/etc/compose/elk_environment#g" bin/updateip.sh && \
    sed -i "s#.*myLOCALIP=.*#myLOCALIP=\$(/sbin/ip address show | awk '/inet .*brd/{split(\$2,a,\"/\"); print a[1]; exit}')#" bin/updateip.sh && \
    sed -i "s#.*myUUID=.*#myUUID=\$(cat /data/uuid)#" bin/updateip.sh && \
    sed -i "s#/etc/issue#/tmp/etc/issue#g" bin/updateip.sh && \
    sed -i "/toilet/d" bin/updateip.sh && \
    sed -i "/source \/etc\/environment/d" bin/updateip.sh && \
    touch /opt/tpot/etc/tpot.yml && \
    cp /root/dist/entrypoint.sh . && \
#
# Clean up
    apk del --purge git && \
    rm -rf /root/* /tmp/* && \
    rm -rf /root/.cache /opt/tpot/.git && \
    rm -rf /var/cache/apk/*
#
# Run tpotinit
WORKDIR /opt/tpot
HEALTHCHECK --retries=1000 --interval=5s CMD test -f /tmp/success || exit 1
STOPSIGNAL SIGKILL
CMD ["/opt/tpot/entrypoint.sh"]
168
preview/docker/dist/entrypoint.sh
vendored
Executable file
@@ -0,0 +1,168 @@
#!/bin/bash

VERSION="T-Pot $(cat /opt/tpot/version)"
COMPOSE="/tmp/tpot/docker-compose.yml"

# Check for compatible OSType
echo
echo "# Checking if OSType is compatible."
echo
myOSTYPE=$(uname -a | grep -Eo "linuxkit")
if [ "${myOSTYPE}" == "linuxkit" ] && [ "${TPOT_OSTYPE}" == "linux" ];
  then
    echo "# Docker Desktop for macOS or Windows detected."
    echo "# 1. You need to adjust the OSType in the hidden \".env\" file."
    echo "# 2. You need to use the macos or win docker compose file."
    echo
    echo "# Aborting."
    echo
    exit 1
fi

# Data folder management
if [ -f "/data/uuid" ];
  then
    figlet "Initializing ..."
    figlet "${VERSION}"
    echo
    echo "# Data folder is present, just cleaning up, please be patient ..."
    echo
    /opt/tpot/bin/clean.sh on
    echo
  else
    figlet "Setting up ..."
    figlet "${VERSION}"
    echo
    echo "# Checking for default user."
    if [ "${WEB_USER}" == "changeme" ] || [ "${WEB_PW}" == "changeme" ];
      then
        echo "# Please change WEB_USER and WEB_PW in the hidden \".env\" file."
        echo "# Aborting."
        echo
        exit 1
    fi
    echo
    echo "# Setting up data folder structure ..."
    echo
    mkdir -vp /data/ews/conf \
              /data/nginx/{cert,conf,log} \
              /data/tpot/etc/compose/ \
              /data/tpot/etc/logrotate/ \
              /tmp/etc/
    echo
    echo "# Generating self signed certificate ..."
    echo
    myINTIP=$(/sbin/ip address show | awk '/inet .*brd/{split($2,a,"/"); print a[1]; exit}')
    openssl req \
            -nodes \
            -x509 \
            -sha512 \
            -newkey rsa:8192 \
            -keyout "/data/nginx/cert/nginx.key" \
            -out "/data/nginx/cert/nginx.crt" \
            -days 3650 \
            -subj '/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd' \
            -addext "subjectAltName = IP:${myINTIP}"
    echo
    echo "# Creating web user from tpot.env, make sure to erase the password from the .env ..."
    echo
    htpasswd -b -c /data/nginx/conf/nginxpasswd "${WEB_USER}" "${WEB_PW}"
    echo
    echo "# Extracting objects, final touches and permissions ..."
    echo
    tar xvfz /opt/tpot/etc/objects/elkbase.tgz -C /
    /opt/tpot/bin/clean.sh off
    uuidgen > /data/uuid
fi

# Check if TPOT_BLACKHOLE is enabled
if [ "${myOSTYPE}" == "linuxkit" ];
  then
    echo
    echo "# Docker Desktop for macOS or Windows detected, Blackhole feature is not supported."
    echo
  else
    if [ "${TPOT_BLACKHOLE}" == "ENABLED" ] && [ ! -f "/etc/blackhole/mass_scanner.txt" ];
      then
        echo "# Adding Blackhole routes."
        /opt/tpot/bin/blackhole.sh add
        echo
    fi
    if [ "${TPOT_BLACKHOLE}" == "DISABLED" ] && [ -f "/etc/blackhole/mass_scanner.txt" ];
      then
        echo "# Removing Blackhole routes."
        /opt/tpot/bin/blackhole.sh del
        echo
      else
        echo "# Blackhole is not active."
    fi
fi

# Get IP
echo
echo "# Updating IP Info ..."
echo
/opt/tpot/bin/updateip.sh

# Update permissions
echo
echo "# Updating permissions ..."
echo
chown -R tpot:tpot /data
chmod -R 777 /data
#chmod 644 -R /data/nginx/conf
#chmod 644 -R /data/nginx/cert

# Update interface settings (p0f and Suricata) and setup iptables to support NFQ based honeypots (glutton, honeytrap)
### This is currently not supported on Docker for Desktop, only on Docker Engine for Linux
if [ "${myOSTYPE}" != "linuxkit" ] && [ "${TPOT_OSTYPE}" == "linux" ];
  then
    echo
    echo "# Get IF, disable offloading, enable promiscuous mode for p0f and suricata ..."
    echo
    ethtool --offload $(/sbin/ip address | grep "^2: " | awk '{ print $2 }' | tr -d [:punct:]) rx off tx off
    ethtool -K $(/sbin/ip address | grep "^2: " | awk '{ print $2 }' | tr -d [:punct:]) gso off gro off
    ip link set $(/sbin/ip address | grep "^2: " | awk '{ print $2 }' | tr -d [:punct:]) promisc on
    echo
    echo "# Adding firewall rules ..."
    echo
    /opt/tpot/bin/rules.sh ${COMPOSE} set
fi

# Display open ports
if [ "${myOSTYPE}" != "linuxkit" ];
  then
    echo
    echo "# This is a list of open ports on the host (netstat -tulpen)."
    echo "# Make sure there are no conflicting ports by checking the docker compose file."
    echo "# Conflicting ports will prevent the startup of T-Pot."
    echo
    netstat -tulpen | grep -Eo ':([0-9]+)' | cut -d ":" -f 2 | uniq
    echo
  else
    echo
    echo "# Docker Desktop for macOS or Windows detected, cannot show open ports on the host."
    echo
fi

# Done
echo
figlet "Starting ..."
figlet "${VERSION}"
echo
touch /tmp/success

# We want to see true source for UDP packets in container (https://github.com/moby/libnetwork/issues/1994)
if [ "${myOSTYPE}" != "linuxkit" ];
  then
    sleep 60
    /usr/sbin/conntrack -D -p udp
  else
    echo
    echo "# Docker Desktop for macOS or Windows detected, Conntrack feature is not supported."
    echo
fi

# Keep the container running ...
sleep infinity
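The entrypoint derives the host's primary IPv4 address (`myINTIP`, and via the Dockerfile's sed patches also `myLOCALIP` in updateip.sh) with a single awk expression over `ip address show` output. A sketch isolating that expression against canned input; the wrapper function name is illustrative:

```shell
# The awk expression used for myINTIP, wrapped for illustration:
# take the first "inet ... brd ..." line, strip the /prefix from field 2, stop.
get_first_ip() {
  awk '/inet .*brd/{split($2,a,"/"); print a[1]; exit}'
}
```

Because of the `exit`, only the first broadcast-scoped address is printed even when several interfaces are up, which is why the generated certificate carries exactly one `subjectAltName` IP.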
15
preview/docker/docker-compose.yml
Normal file
@@ -0,0 +1,15 @@
version: '3.9'

services:

# T-Pot Init Service
  tpotinit:
    build: .
    container_name: tpotinit
    restart: "no"
    image: "dtagdevsec/tpotinit:2204"
#    volumes:
#      - /var/run/docker.sock:/var/run/docker.sock:ro
    network_mode: "host"
    cap_add:
      - NET_ADMIN
35
preview/docker/macvlan/docker-compose.yml
Normal file
@@ -0,0 +1,35 @@
### This is an example on how to use macvlan driver
### which results in bridging the container fully
### to your network.

version: '3.9'

services:
  nginx:
    container_name: nginx
    restart: always
    image: nginx
    networks:
      mynet:
        ### unassigned ip from your local network
        ipv4_address: 1.2.3.10

  nginx2:
    container_name: nginx2
    restart: always
    image: nginx
    networks:
      mynet:
        ### unassigned ip from your local network
        ipv4_address: 1.2.3.11

networks:
  mynet:
    driver: macvlan
    driver_opts:
      parent: eth0
    ipam:
      config:
        ### your local network with netmask and gateway
        - subnet: 1.2.3.0/24
          gateway: 1.2.3.1
52
preview/env.example
Normal file
@@ -0,0 +1,52 @@
# T-Pot Light config file. Do not remove.

# Set Web username and password here, only required for first run
# Removing the password after first run is recommended
# You can always add or remove users as you see fit using htpasswd:
# htpasswd -b -c /<data_folder>/nginx/conf/nginxpasswd <username> <password>
WEB_USER=changeme
WEB_PW=changeme

# T-Pot Blackhole
# ENABLED: T-Pot will download a db of known mass scanners and nullroute them
# Be aware, this will put T-Pot off the map for stealth reasons and
# you will get less traffic. Routes will be active until reboot and will
# be re-added with every T-Pot start until disabled.
# DISABLED: This is the default and no stealth efforts are in place.
TPOT_BLACKHOLE=DISABLED

###################################################################################
# NEVER MAKE CHANGES TO THIS SECTION UNLESS YOU REALLY KNOW WHAT YOU ARE DOING!!! #
###################################################################################

# T-Pot Landing page provides Cockpit Link
COCKPIT=false

# docker.sock Path
TPOT_DOCKER_SOCK=/var/run/docker.sock

# docker compose .env
TPOT_DOCKER_ENV=./.env

# Docker-Compose file
TPOT_DOCKER_COMPOSE=./docker-compose.yml

# T-Pot Repo
TPOT_REPO=dtagdevsec

# T-Pot Version Tag
TPOT_VERSION=2204

# T-Pot Pull Policy
# always: (T-Pot default) Compose implementations SHOULD always pull the image from the registry.
# never: Compose implementations SHOULD NOT pull the image from a registry and SHOULD rely on the platform cached image.
# missing: Compose implementations SHOULD pull the image only if it's not available in the platform cache.
# build: Compose implementations SHOULD build the image. Compose implementations SHOULD rebuild the image if already present.
TPOT_PULL_POLICY=always

# T-Pot Data Path
TPOT_DATA_PATH=./data

# OSType (linux, mac, win)
# Most docker features are available on linux
TPOT_OSTYPE=linux
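The entrypoint refuses to start while WEB_USER or WEB_PW still carry the `changeme` defaults from this file. A minimal sketch of that first-run check, reading the values by sourcing the env file; the function is illustrative and not shipped with T-Pot:

```shell
# Illustrative sketch of the entrypoint's default-credential check,
# applied to an env file like preview/env.example.
check_web_creds() {
  # source the env file; '#' comment lines are ignored by the shell
  . "$1"
  if [ "${WEB_USER}" = "changeme" ] || [ "${WEB_PW}" = "changeme" ]; then
    echo "change-required"
  else
    echo "ok"
  fi
}
```

This mirrors why the comments above recommend setting real credentials before the first run and erasing the password afterwards.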
71
preview/installer/debian/install.sh
Executable file
@@ -0,0 +1,71 @@
#!/bin/bash

# Needs to run as non-root
myWHOAMI=$(whoami)
if [ "$myWHOAMI" == "root" ]
  then
    echo "Need to run as user ..."
    exit
fi

# Check if running on Debian
if ! grep -q 'ID=debian' /etc/os-release; then
  echo "This script is designed to run on Debian. Aborting."
  exit 1
fi

if [ -f /var/log/debian-install-lock ]; then
  echo "Error: The installer has already been run on this system. If you wish to run it again, please run the uninstall.sh first."
  exit 1
fi

# Create installer lock file
sudo touch /var/log/debian-install-lock

# Update SSH config
echo "Updating SSH config..."
sudo bash -c 'echo "Port 64295" >> /etc/ssh/sshd_config'

# Install recommended packages
echo "Installing recommended packages..."
sudo apt-get -y update
sudo apt-get -y install bash-completion git grc neovim net-tools

# Remove old Docker
echo "Removing old docker packages..."
sudo apt-get -y remove docker docker-engine docker.io containerd runc

# Add Docker to repositories, install latest docker
echo "Adding Docker to repositories and installing..."
sudo apt-get -y update
sudo apt-get -y install ca-certificates curl gnupg
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/debian/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg
echo \
  "deb [arch="$(dpkg --print-architecture)" signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/debian \
  "$(. /etc/os-release && echo "$VERSION_CODENAME")" stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get -y update
sudo apt-get -y install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
sudo systemctl enable docker
sudo systemctl stop docker
sudo systemctl start docker

# Add user to Docker group
echo "Adding user to Docker group..."
sudo usermod -aG docker $(whoami)

# Add aliases
echo "Adding aliases..."
echo "alias dps='grc docker ps -a'" >> ~/.bashrc
echo "alias dpsw='watch -c \"grc --colour=on docker ps -a\"'" >> ~/.bashrc

# Show running services
sudo grc netstat -tulpen
echo "Please review for possible honeypot port conflicts."
echo "While SSH is taken care of, other services such as"
echo "SMTP, HTTP, etc. might prevent T-Pot from starting."

echo "Done. Please reboot and re-connect via SSH on tcp/64295."
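The installer prints raw `netstat -tulpen` output for the manual port-conflict review, while entrypoint.sh condenses the same output to bare port numbers with a grep/cut pipeline. That pipeline can be exercised in isolation against canned netstat lines; the wrapper function name is illustrative:

```shell
# The port-extraction pipeline from entrypoint.sh, wrapped for illustration:
# keep ':<digits>' tokens, drop the colon, collapse adjacent duplicates.
bare_ports() {
  grep -Eo ':([0-9]+)' | cut -d ":" -f 2 | uniq
}
```

Any port this prints that also appears in the T-Pot compose file is a conflict that will prevent startup, which is exactly what the review step above is meant to catch.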
10
preview/installer/debian/sudo-install.sh
Executable file
@@ -0,0 +1,10 @@
#!/bin/bash

if ! command -v sudo &> /dev/null
  then
    echo "sudo is not installed. Installing now..."
    su -c "apt-get -y update && apt-get -y install sudo"
    su -c "/usr/sbin/usermod -aG sudo $(whoami)"
  else
    echo "sudo is already installed."
fi
52
preview/installer/debian/uninstall.sh
Executable file
@@ -0,0 +1,52 @@
#!/bin/bash

# Needs to run as non-root
myWHOAMI=$(whoami)
if [ "$myWHOAMI" == "root" ]
  then
    echo "Need to run as user ..."
    exit
fi

# Check if running on Debian
if ! grep -q 'ID=debian' /etc/os-release; then
  echo "This script is designed to run on Debian. Aborting."
  exit 1
fi

# Check if installer lock file exists
if [ ! -f /var/log/debian-install-lock ]; then
  echo "Error: The installer has not been run on this system. Aborting."
  exit 1
fi

# Remove SSH config changes
echo "Removing SSH config changes..."
sudo sed -i '/Port 64295/d' /etc/ssh/sshd_config

# Uninstall Docker
echo "Stopping and removing all containers ..."
docker stop $(docker ps -aq)
docker rm $(docker ps -aq)
echo "Uninstalling Docker..."
sudo systemctl stop docker
sudo systemctl disable docker
sudo apt-get -y remove docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
sudo apt-get -y autoremove
sudo rm -rf /etc/apt/sources.list.d/docker.list
sudo rm -rf /etc/apt/keyrings/docker.gpg

# Remove user from Docker group
echo "Removing user from Docker group..."
sudo deluser $(whoami) docker

# Remove aliases
echo "Removing aliases..."
sed -i '/alias dps=/d' ~/.bashrc
sed -i '/alias dpsw=/d' ~/.bashrc

# Remove installer lock file
sudo rm -f /var/log/debian-install-lock

echo "Done. Please reboot and re-connect via SSH on tcp/22"
79
preview/installer/fedora/install.sh
Executable file
@@ -0,0 +1,79 @@
#!/bin/bash

# Needs to run as non-root
myWHOAMI=$(whoami)
if [ "$myWHOAMI" == "root" ]
  then
    echo "Need to run as user ..."
    exit
fi

# Check if running on Fedora
if ! grep -q 'ID=fedora' /etc/os-release; then
  echo "This script is designed to run on Fedora. Aborting."
  exit 1
fi

if [ -f /var/log/fedora-install-lock ]; then
  echo "Error: The installer has already been run on this system. If you wish to run it again, please run the uninstall.sh first."
  exit 1
fi

# Create installer lock file
sudo touch /var/log/fedora-install-lock

# Update SSH config
echo "Updating SSH config..."
sudo bash -c 'echo "Port 64295" >> /etc/ssh/sshd_config'

# Update DNS config
echo "Updating DNS config..."
sudo bash -c "sed -i 's/^.*DNSStubListener=.*/DNSStubListener=no/' /etc/systemd/resolved.conf"
sudo systemctl restart systemd-resolved.service

# Update SELinux config
echo "Updating SELinux config..."
sudo sed -i s/SELINUX=enforcing/SELINUX=permissive/g /etc/selinux/config

# Update Firewall rules
echo "Updating Firewall rules..."
sudo firewall-cmd --permanent --add-port=64295/tcp
sudo firewall-cmd --permanent --zone=public --set-target=ACCEPT
#sudo firewall-cmd --reload
sudo firewall-cmd --list-all

# Load kernel modules
echo "Loading kernel modules..."
sudo modprobe -v iptable_filter
echo "iptable_filter" | sudo tee /etc/modules-load.d/iptables.conf

# Add Docker to repositories, install latest docker
echo "Adding Docker to repositories and installing..."
sudo dnf -y update
sudo dnf -y install dnf-plugins-core
sudo dnf -y config-manager --add-repo https://download.docker.com/linux/fedora/docker-ce.repo
sudo dnf -y install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
sudo systemctl enable docker
sudo systemctl start docker

# Install recommended packages
echo "Installing recommended packages..."
sudo dnf -y install bash-completion git grc net-tools

# Add user to Docker group
echo "Adding user to Docker group..."
sudo usermod -aG docker $(whoami)

# Add aliases
echo "Adding aliases..."
echo "alias dps='grc docker ps -a'" >> ~/.bashrc
echo "alias dpsw='watch -c \"grc --colour=on docker ps -a\"'" >> ~/.bashrc

# Show running services
sudo grc netstat -tulpen
echo "Please review for possible honeypot port conflicts."
echo "While SSH is taken care of, other services such as"
echo "SMTP, HTTP, etc. might prevent T-Pot from starting."

echo "Done. Please reboot and re-connect via SSH on tcp/64295."
71
preview/installer/fedora/uninstall.sh
Executable file
@@ -0,0 +1,71 @@
#!/bin/bash

# Needs to run as non-root
myWHOAMI=$(whoami)
if [ "$myWHOAMI" == "root" ]
  then
    echo "Need to run as user ..."
    exit
fi

# Check if running on Fedora
if ! grep -q 'ID=fedora' /etc/os-release; then
  echo "This script is designed to run on Fedora. Aborting."
  exit 1
fi

if [ ! -f /var/log/fedora-install-lock ]; then
  echo "Error: The installer has not been run on this system. Aborting uninstallation."
  exit 1
fi

# Remove SSH config changes
echo "Removing SSH config changes..."
sudo sed -i '/Port 64295/d' /etc/ssh/sshd_config

# Remove DNS config changes
echo "Updating DNS config..."
sudo bash -c "sed -i 's/^.*DNSStubListener=.*/#DNSStubListener=yes/' /etc/systemd/resolved.conf"
sudo systemctl restart systemd-resolved.service

# Restore SELinux config
echo "Restoring SELinux config..."
sudo sed -i s/SELINUX=permissive/SELINUX=enforcing/g /etc/selinux/config

# Remove Firewall rules
echo "Removing Firewall rules..."
sudo firewall-cmd --permanent --remove-port=64295/tcp
sudo firewall-cmd --permanent --zone=public --set-target=default
#sudo firewall-cmd --reload
sudo firewall-cmd --list-all

# Unload kernel modules
echo "Unloading kernel modules..."
sudo modprobe -rv iptable_filter
sudo rm /etc/modules-load.d/iptables.conf

# Uninstall Docker
echo "Stopping and removing all containers ..."
docker stop $(docker ps -aq)
docker rm $(docker ps -aq)
echo "Uninstalling Docker..."
sudo systemctl stop docker
sudo systemctl disable docker
sudo dnf -y remove docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
sudo dnf config-manager --disable docker-ce-stable
sudo rm /etc/yum.repos.d/docker-ce.repo

# Remove user from Docker group
echo "Removing user from Docker group..."
sudo gpasswd -d $(whoami) docker

# Remove aliases
echo "Removing aliases..."
sed -i '/alias dps=/d' ~/.bashrc
sed -i '/alias dpsw=/d' ~/.bashrc

# Remove installer lock file
sudo rm /var/log/fedora-install-lock

echo "Done. Please reboot and re-connect via SSH on tcp/22"
63
preview/installer/suse/install.sh
Executable file
@@ -0,0 +1,63 @@
#!/bin/bash

# Needs to run as non-root
myWHOAMI=$(whoami)
if [ "$myWHOAMI" == "root" ]
  then
    echo "Need to run as user ..."
    exit
fi

# Check if running on OpenSuse Tumbleweed
if ! grep -q 'ID="opensuse-tumbleweed"' /etc/os-release; then
  echo "This script is designed to run on OpenSuse Tumbleweed. Aborting."
  exit 1
fi

if [ -f /var/log/suse-install-lock ]; then
  echo "Error: The installer has already been run on this system. If you wish to run it again, please run the uninstall.sh first."
  exit 1
fi

# Create installer lock file
sudo touch /var/log/suse-install-lock

# Update SSH config
echo "Updating SSH config..."
sudo bash -c 'echo "Port 64295" >> /etc/ssh/sshd_config.d/port.conf'

# Update Firewall rules
echo "Updating Firewall rules..."
sudo firewall-cmd --permanent --add-port=64295/tcp
sudo firewall-cmd --permanent --zone=public --set-target=ACCEPT
#sudo firewall-cmd --reload
sudo firewall-cmd --list-all

# Install docker and recommended packages
echo "Installing recommended packages..."
sudo zypper -n update
sudo zypper -n remove cups net-tools postfix yast2-auth-client yast2-auth-server
sudo zypper -n install bash-completion docker docker-compose git grc busybox-net-tools

# Enable and start docker
echo "Enabling and starting docker..."
sudo systemctl enable docker
sudo systemctl start docker

# Add user to Docker group
echo "Adding user to Docker group..."
sudo usermod -aG docker $(whoami)

# Add aliases
echo "Adding aliases..."
echo "alias dps='grc docker ps -a'" >> ~/.bashrc
echo "alias dpsw='watch -c \"grc --colour=on docker ps -a\"'" >> ~/.bashrc

# Show running services
sudo grc netstat -tulpen
echo "Please review for possible honeypot port conflicts."
echo "While SSH is taken care of, other services such as"
echo "SMTP, HTTP, etc. might prevent T-Pot from starting."

echo "Done. Please reboot and re-connect via SSH on tcp/64295."
56
preview/installer/suse/uninstall.sh
Executable file
56
preview/installer/suse/uninstall.sh
Executable file
@@ -0,0 +1,56 @@
#!/bin/bash

# Needs to run as non-root
myWHOAMI=$(whoami)
if [ "$myWHOAMI" == "root" ]
then
echo "Need to run as user ..."
exit
fi

# Check if running on OpenSuse Tumbleweed
if ! grep -q 'ID="opensuse-tumbleweed"' /etc/os-release; then
echo "This script is designed to run on OpenSuse Tumbleweed. Aborting."
exit 1
fi

if [ ! -f /var/log/suse-install-lock ]; then
echo "Error: The installer has not been run on this system. Aborting uninstallation."
exit 1
fi

# Remove SSH config changes
echo "Removing SSH config changes..."
sudo sed -i '/Port 64295/d' /etc/ssh/sshd_config.d/port.conf

# Remove Firewall rules
echo "Removing Firewall rules..."
sudo firewall-cmd --permanent --remove-port=64295/tcp
sudo firewall-cmd --permanent --zone=public --set-target=default
#sudo firewall-cmd --reload
sudo firewall-cmd --list-all

# Uninstall Docker
echo "Stopping and removing all containers ..."
docker stop $(docker ps -aq)
docker rm $(docker ps -aq)
echo "Uninstalling Docker..."
sudo systemctl stop docker
sudo systemctl disable docker
sudo zypper -n remove docker docker-compose
sudo zypper -n install cups postfix

# Remove user from Docker group
echo "Removing user from Docker group..."
sudo gpasswd -d $(whoami) docker

# Remove aliases
echo "Removing aliases..."
sed -i '/alias dps=/d' ~/.bashrc
sed -i '/alias dpsw=/d' ~/.bashrc

# Remove installer lock file
sudo rm /var/log/suse-install-lock

echo "Done. Please reboot and re-connect via SSH on tcp/22"
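The alias cleanup in the uninstaller relies on `sed`'s delete command matching only the `dps`/`dpsw` lines. A minimal sketch of that behavior, run against a throwaway copy of `.bashrc` so nothing real is touched:

```shell
# Sketch: the uninstaller's sed alias cleanup, on a temp file.
tmp=$(mktemp)
printf '%s\n' \
  "alias ll='ls -l'" \
  "alias dps='grc docker ps -a'" \
  "alias dpsw='watch -c \"grc --colour=on docker ps -a\"'" > "$tmp"
sed -i '/alias dps=/d' "$tmp"    # matches only 'dps=', not 'dpsw='
sed -i '/alias dpsw=/d' "$tmp"
remaining=$(cat "$tmp")          # only the unrelated alias survives
rm -f "$tmp"
echo "$remaining"
```

Note that `/alias dps=/` does not match the `dpsw` line, because the literal `=` after `dps` anchors the pattern; unrelated aliases are left intact.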
79
preview/installer/ubuntu/install.sh
Executable file
@@ -0,0 +1,79 @@
#!/bin/bash

# Needs to run as non-root
myWHOAMI=$(whoami)
if [ "$myWHOAMI" == "root" ]
then
echo "Need to run as user ..."
exit
fi

# Check if running on Ubuntu
if ! grep -q 'ID=ubuntu' /etc/os-release; then
echo "This script is designed to run on Ubuntu. Aborting."
exit 1
fi

if [ -f /var/log/ubuntu-install-lock ]; then
echo "Error: The installer has already been run on this system. If you wish to run it again, please run the uninstall.sh first."
exit 1
fi

# Create installer lock file
sudo touch /var/log/ubuntu-install-lock

# Update SSH config
echo "Updating SSH config..."
sudo bash -c 'echo "Port 64295" >> /etc/ssh/sshd_config'
sudo systemctl disable ssh.socket
sudo rm /etc/systemd/system/ssh.service.d/00-socket.conf
sudo systemctl enable ssh.service

# Update DNS config
echo "Updating DNS config..."
sudo bash -c "sed -i 's/^.*DNSStubListener=.*/DNSStubListener=no/' /etc/systemd/resolved.conf"
sudo systemctl restart systemd-resolved.service

# Install recommended packages
echo "Installing recommended packages..."
sudo apt-get -y update
sudo apt-get -y install bash-completion git grc net-tools vim

# Remove old Docker
echo "Removing old docker packages..."
sudo apt-get -y remove docker docker-engine docker.io containerd runc

# Add Docker to repositories, install latest docker
echo "Adding Docker to repositories and installing..."
sudo apt-get -y update
sudo apt-get -y install ca-certificates curl gnupg
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg
echo \
"deb [arch="$(dpkg --print-architecture)" signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
"$(. /etc/os-release && echo "$VERSION_CODENAME")" stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get -y update
sudo apt-get -y install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
sudo systemctl enable docker
sudo systemctl stop docker
sudo systemctl start docker

# Add user to Docker group
echo "Adding user to Docker group..."
sudo usermod -aG docker $(whoami)

# Add aliases
echo "Adding aliases..."
echo "alias dps='grc docker ps -a'" >> ~/.bashrc
echo "alias dpsw='watch -c \"grc --colour=on docker ps -a\"'" >> ~/.bashrc

# Show running services
sudo grc netstat -tulpen
echo "Please review for possible honeypot port conflicts."
echo "While SSH is taken care of, other services such as"
echo "SMTP, HTTP, etc. might prevent T-Pot from starting."

echo "Done. Please reboot and re-connect via SSH on tcp/64295."
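The multi-line `echo ... | sudo tee` in the installer composes the Docker apt source entry from `dpkg --print-architecture` and `VERSION_CODENAME`. A sketch of the resulting line, with hypothetical example values substituted for the live lookups:

```shell
# Sketch: the docker.list entry the installer writes, using stand-in
# values ("amd64", "jammy" are hypothetical examples, not lookups).
arch="amd64"        # stand-in for $(dpkg --print-architecture)
codename="jammy"    # stand-in for $VERSION_CODENAME from /etc/os-release
deb_line="deb [arch=${arch} signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu ${codename} stable"
echo "$deb_line"
```

The `signed-by=` attribute pins the repository to the keyring the script installed with `gpg --dearmor`, so apt will only trust packages signed by Docker's key.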
59
preview/installer/ubuntu/uninstall.sh
Executable file
@@ -0,0 +1,59 @@
#!/bin/bash

# Needs to run as non-root
myWHOAMI=$(whoami)
if [ "$myWHOAMI" == "root" ]
then
echo "Need to run as user ..."
exit
fi

# Check if running on Ubuntu
if ! grep -q 'ID=ubuntu' /etc/os-release; then
echo "This script is designed to run on Ubuntu. Aborting."
exit 1
fi

# Check if installer lock file exists
if [ ! -f /var/log/ubuntu-install-lock ]; then
echo "Error: The installer has not been run on this system. Aborting."
exit 1
fi

# Remove SSH config changes
echo "Removing SSH config changes..."
sudo sed -i '/Port 64295/d' /etc/ssh/sshd_config
sudo systemctl disable ssh.service
sudo systemctl enable ssh.socket

# Remove DNS config changes
echo "Updating DNS config..."
sudo bash -c "sed -i 's/^.*DNSStubListener=.*/#DNSStubListener=yes/' /etc/systemd/resolved.conf"
sudo systemctl restart systemd-resolved.service

# Uninstall Docker
echo "Stopping and removing all containers ..."
docker stop $(docker ps -aq)
docker rm $(docker ps -aq)
echo "Uninstalling Docker..."
sudo systemctl stop docker
sudo systemctl disable docker
sudo apt-get -y remove docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
sudo apt-get -y autoremove
sudo rm -rf /etc/apt/sources.list.d/docker.list
sudo rm -rf /etc/apt/keyrings/docker.gpg

# Remove user from Docker group
echo "Removing user from Docker group..."
sudo deluser $(whoami) docker

# Remove aliases
echo "Removing aliases..."
sed -i '/alias dps=/d' ~/.bashrc
sed -i '/alias dpsw=/d' ~/.bashrc

# Remove installer lock file
sudo rm -f /var/log/ubuntu-install-lock

echo "Done. Please reboot and re-connect via SSH on tcp/22"
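Both installer/uninstaller pairs hinge on a lock file under `/var/log`: install.sh refuses to run if it exists and creates it on success; uninstall.sh refuses to run if it is absent and removes it on success. A minimal sketch of that guard, using a temp path instead of `/var/log` so no sudo is needed:

```shell
# Sketch: the install-lock handshake, against a throwaway path.
lock=$(mktemp -u)                    # path only; file not created yet
if [ -f "$lock" ]; then state_before="locked"; else state_before="unlocked"; fi
touch "$lock"                        # what install.sh does on first run
if [ -f "$lock" ]; then state_after="locked"; else state_after="unlocked"; fi
rm -f "$lock"                        # what uninstall.sh does on cleanup
echo "$state_before -> $state_after"
```

The guard makes install/uninstall idempotent in the weak sense: a second install attempt fails fast instead of re-applying SSH, DNS, and Docker changes on top of themselves.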