Merge pull request #329 from dtag-dev-sec/debian

Prepare for T-Pot 19.03 release
Marco Ochse 2019-04-01 14:54:22 +02:00 committed by GitHub
commit ecb2b4a587
No known key found for this signature in database
GPG key ID: 4AEE18F83AFDEB23
101 changed files with 2153 additions and 6655 deletions

172
README.md
View file

@ -1,6 +1,6 @@
# T-Pot 18.11
# T-Pot 19.03
T-Pot 18.11 runs on the latest 18.04.x LTS Ubuntu Server Network Installer image, is based on
T-Pot 19.03 runs on Debian (Sid), is based heavily on
[docker](https://www.docker.com/), [docker-compose](https://docs.docker.com/compose/)
@ -9,12 +9,13 @@ and includes dockerized versions of the following honeypots
* [adbhoney](https://github.com/huuck/ADBHoney),
* [ciscoasa](https://github.com/Cymmetria/ciscoasa_honeypot),
* [conpot](http://conpot.org/),
* [cowrie](http://www.micheloosterhof.com/cowrie/),
* [cowrie](https://github.com/cowrie/cowrie),
* [dionaea](https://github.com/DinoTools/dionaea),
* [elasticpot](https://github.com/schmalle/ElasticPot),
* [elasticpot](https://github.com/schmalle/ElasticpotPY),
* [glastopf](http://mushmush.org/),
* [glutton](https://github.com/mushorg/glutton),
* [heralding](https://github.com/johnnykv/heralding),
* [honeypy](https://github.com/foospidy/HoneyPy),
* [honeytrap](https://github.com/armedpot/honeytrap/),
* [mailoney](https://github.com/awhitehatter/mailoney),
* [medpot](https://github.com/schmalle/medpot),
@ -60,6 +61,7 @@ Furthermore we use the following tools
- [Tools](#tools)
- [Maintenance](#maintenance)
- [Community Data Submission](#submission)
- [Opt-In HPFEEDS Data Submission](#hpfeeds-optin)
- [Roadmap](#roadmap)
- [Disclaimer](#disclaimer)
- [FAQ](#faq)
@ -67,58 +69,54 @@ Furthermore we use the following tools
- [Licenses](#licenses)
- [Credits](#credits)
- [Stay tuned](#staytuned)
- [Testimonial](#testimonial)
- [Fun Fact](#funfact)
<a name="changelog"></a>
# Changelog
- **New honeypots**
- *Adbhoney* Low interaction honeypot designed for Android Debug Bridge over TCP/IP.
- *Ciscoasa* a low interaction honeypot for the Cisco ASA component capable of detecting CVE-2018-0101, a DoS and remote code execution vulnerability.
- *Glutton* (NextGen) is the all-eating honeypot.
- *Heralding* a credentials catching honeypot.
- *Medpot* is a HL7 / FHIR honeypot.
- *Snare* is a web application honeypot sensor and the successor of Glastopf. SNARE has feature parity with Glastopf and allows you to convert existing web pages into attack surfaces.
- *Tanner* is SNARE's "brain". Every event is sent from SNARE to TANNER, gets evaluated, and TANNER decides how SNARE should respond to the client. This allows us to change the behaviour of many sensors on the fly. We are providing a TANNER instance for your use, but there is nothing stopping you from setting up your own instance.
- **New tools**
- *Cockpit* is an interactive server admin interface. It is easy to use and very lightweight. Cockpit interacts directly with the operating system from a real Linux session in a browser.
- *Cyberchef* is the Cyber Swiss Army Knife - a web app for encryption, encoding, compression and data analysis.
- *grc* (commandline) is yet another colouriser (written in python) for beautifying your logfiles or output of commands.
- *multitail* (commandline) allows you to monitor logfiles and command output in multiple windows in a terminal, colorize, filter and merge.
- *tped.sh* (commandline) allows you to switch between T-Pot Editions after installation.
- **Deprecated tools**
- *Netdata*, *Portainer* and *WeTTY* were superseded by *Cockpit*, which is much more lightweight, integrates perfectly well into Ubuntu 18.04 LTS and of course covers the same ground, albeit with a more basic feature set.
- **New Standard Installation**
- The new standard installation is now running a whopping *14* honeypot instances.
- **T-Pot Universal Installer**
- The T-Pot installer now also includes the option to install on an existing machine; the T-Pot-Autoinstaller is no longer necessary.
- **Tighten Security**
- The docker containers are now running mostly with a read-only file system
- Where possible `setcap` is used to start daemons without root, or privileges are dropped
- Introducing `fail2ban` eases up on the `authorized_keys` requirement, which is no longer necessary for `SSH`, and further helps prevent brute-force attacks on `Cockpit` and `NGINX`, allowing for faster load times of the WebUI.
- **Iptables exceptions for NFQ based honeypots**
- In previous versions `iptables` rules had to be maintained manually; now a script parses `/opt/tpot/etc/tpot.yml` and extracts port information to automatically generate exceptions for ports that should not be forwarded to NFQ (see the sketch after this changelog).
- **CI**
- The Kibana UI now uses a magenta theme.
- **ES HEAD**
- A JavaScript snippet now automatically enters the correct FQDN / IP. A manual step is no longer required.
- **ELK STACK**
- The ELK Stack was updated to the latest 6.x versions.
- This also means you can now expect the availability of basic *X-Pack features*; the full feature set, however, is only available to users with a valid license.
- **Dashboards Makeover**
- Because Kibana 6.x introduced so much whitespace, the dashboards and some of the visualizations needed an overhaul. While it probably takes some getting used to, the key was to display as much information as possible without compromising on clarity.
- Because of the new honeypots we now have more than **200 visualizations** pre-configured and compiled into 16 individual **Kibana dashboards**. Monitor all *honeypot events* locally on your T-Pot installation. Aside from *honeypot events* you can also view *Suricata NSM and NGINX* events for a quick overview of wire events.
- **Honeypot updates and improvements**
- All honeypots were updated to their latest stable versions.
- Docker images were mostly overhauled to tighten security even further
- Some of the honeypot configurations were modified to keep things fresh
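As mentioned under *Iptables exceptions for NFQ based honeypots* above, the port exceptions are now derived from `/opt/tpot/etc/tpot.yml` instead of being maintained by hand. A minimal sketch of that idea, assuming simple `host:container` port mappings and the `iptables-legacy` binary used elsewhere in this release; the actual rules script shipped with T-Pot may differ:

```bash
#!/bin/bash
# Hypothetical sketch: collect host ports from the compose file and accept them
# before the catch-all NFQUEUE rule that feeds honeytrap / glutton.
myDOCKERCOMPOSEYML="/opt/tpot/etc/tpot.yml"

# Assumes port entries look like:  - "5555:5555"  (optionally with /udp)
myRULESPORTS=$(grep -E '^\s*- "[0-9]+:' "$myDOCKERCOMPOSEYML" | cut -d '"' -f2 | cut -d ':' -f1 | sort -nu)

for myPORT in $myRULESPORTS; do
  /usr/sbin/iptables-legacy -w -A INPUT -p tcp --dport "$myPORT" -j ACCEPT
done
# Everything else that is a new TCP SYN gets handed to NFQUEUE (as in the rules script further below).
/usr/sbin/iptables-legacy -w -A INPUT -p tcp --syn -m state --state NEW -j NFQUEUE
```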
# Release Notes
- **Move from Ubuntu 18.04 to Debian (Sid)**
- For almost 5 years Ubuntu LTS versions were our distributions of choice. Last year we made a design choice for T-Pot to be closer to a rolling release model, thus allowing us to issue smaller changes and releases in a more timely manner. The distribution of choice is Debian (Sid / unstable), which will provide us with the latest advancements in a Debian based distribution.
- **Include HoneyPy honeypot**
- *HoneyPy* is now included in the NEXTGEN installation type
- **Include Suricata 4.1.3**
- Building *Suricata 4.1.3* from scratch to enable JA3 and overall better protocol support.
- **Update tools to the latest versions**
- ELK Stack 6.6.2
- CyberChef 8.27.0
- SpiderFoot v3.0
- Cockpit 188
- NGINX is now built to enforce TLS 1.3 on the T-Pot WebUI (see the check after these release notes)
- **Update honeypots**
- Where possible / feasible the honeypots have been updated to their latest versions.
- *Cowrie* now supports *HASSH* generated hashes, which allows for easier identification of an attacker across IP addresses.
- *Heralding* now supports *SOCKS5* emulation.
- **Update Dashboards & Visualizations**
- *Offset Dashboard* added to easily spot changes in attacks on a single dashboard within a 24h time window.
- *Cowrie Dashboard* modified to integrate *HASSH* support / visualizations.
- *HoneyPy Dashboard* added to support latest honeypot addition.
- *Suricata Dashboard* modified to integrate *JA3* support / visualizations.
- **Debian mirror selection**
- During base install you now have to manually select a mirror.
- Upon T-Pot install the mirror closest to you will be determined automatically.
- This solves peering problems for most of the users speeding up installation and updates.
- **Bugs**
- Fixed issue #298 where the import and export of objects on the shell did not work.
- Fixed issue #313 where Spiderfoot raised a KeyError, which was previously fixed in upstream.
- Fixed error in Suricata where path for reference.config changed.
- **Release Cycle**
- As far as possible we will now integrate changes into the master branch faster, eliminating the need for monolithic releases. The update feature will be continuously improved to that end. However, this might not account for all feature changes.
- **HPFEEDS Opt-In**
- If you want to share your T-Pot data with a 3rd party HPFEEDS broker such as [SISSDEN](https://sissden.eu) you can do so by creating an account at the SISSDEN portal and running `hpfeeds_optin.sh` on T-Pot.
- **Update Feature**
- For the ones who like to live on the bleeding edge of T-Pot development there is now a update script available in `/opt/tpot/update.sh`.
- This feature is now in beta and is mostly intended to provide you with the latest development advances without the need of reinstalling T-Pot.
- For the ones who like to live on the bleeding edge of T-Pot development there is now an update script available in `/opt/tpot/update.sh`.
- This feature is beta and is mostly intended to provide you with the latest development advances without the need of reinstalling T-Pot.
- **Deprecated tools**
- *ctop* will no longer be part of T-Pot.
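The release notes above mention that NGINX now enforces TLS 1.3 on the T-Pot WebUI. A quick, hedged way to verify this from a client with OpenSSL 1.1.1 or newer; replace `<your.ip>` with your T-Pot address:

```bash
# Should report TLSv1.3 in the session summary ...
openssl s_client -connect <your.ip>:64297 -tls1_3 < /dev/null 2>/dev/null | grep -i "protocol"
# ... while forcing TLS 1.2 should fail with a handshake alert.
openssl s_client -connect <your.ip>:64297 -tls1_2 < /dev/null 2>&1 | grep -iE "protocol|alert"
```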
<a name="concept"></a>
# Technical Concept
T-Pot is based on the network installer of Ubuntu Server 18.04.x LTS.
T-Pot is based on the network installer of Debian (Stretch). During installation the whole system will be updated to Debian (Sid).
The honeypot daemons as well as other support components being used have been containerized using [docker](http://docker.io).
This allows us to run multiple honeypot daemons on the same network interface while maintaining a small footprint and constraining each honeypot within its own environment.
@ -132,6 +130,7 @@ In T-Pot we combine the dockerized honeypots ...
* [glastopf](http://mushmush.org/),
* [glutton](https://github.com/mushorg/glutton),
* [heralding](https://github.com/johnnykv/heralding),
* [honeypy](https://github.com/foospidy/HoneyPy),
* [honeytrap](https://github.com/armedpot/honeytrap/),
* [mailoney](https://github.com/awhitehatter/mailoney),
* [medpot](https://github.com/schmalle/medpot),
@ -147,11 +146,11 @@ In T-Pot we combine the dockerized honeypots ...
* [Spiderfoot](https://github.com/smicallef/spiderfoot) an open source intelligence automation tool.
* [Suricata](http://suricata-ids.org/) a Network Security Monitoring engine.
... to give you the best out-of-the-box experience possible and a easy-to-use multi-honeypot appliance.
... to give you the best out-of-the-box experience possible and an easy-to-use multi-honeypot appliance.
![Architecture](doc/architecture.png)
While data within docker containers is volatile we do now ensure a default 30 day persistence of all relevant honeypot and tool data in the well known `/data` folder and sub-folders. The persistence configuration may be adjusted in `/opt/tpot/etc/logrotate/logrotate.conf`. Once a docker container crashes, all other data produced within its environment is erased and a fresh instance is started from the corresponding docker image.<br>
While data within docker containers is volatile we do ensure a default 30 day persistence of all relevant honeypot and tool data in the well known `/data` folder and sub-folders. The persistence configuration may be adjusted in `/opt/tpot/etc/logrotate/logrotate.conf`. Once a docker container crashes, all other data produced within its environment is erased and a fresh instance is started from the corresponding docker image.<br>
Basically, what happens when the system is booted up is the following:
@ -170,7 +169,7 @@ The individual docker configurations are located in the [docker folder](https://
Depending on your installation type, whether you install on [real hardware](#hardware) or in a [virtual machine](#vm), make sure your designated T-Pot system meets the following requirements:
##### Standard Installation
- Honeypots: adbhoney, ciscoasa, conpot, cowrie, dionaea, elasticpot, heralding, honeytrap, mailoney, rdpy, snare, tanner and vnclowpot
- Honeypots: adbhoney, ciscoasa, conpot, cowrie, dionaea, elasticpot, heralding, honeytrap, mailoney, medpot, rdpy, snare & tanner
- Tools: cockpit, cyberchef, ELK, elasticsearch head, ewsposter, NGINX, spiderfoot, p0f and suricata
- 6-8 GB RAM (less RAM is possible but might introduce swapping)
@ -179,7 +178,7 @@ Depending on your installation type, whether you install on [real hardware](#har
- A working, non-proxied, internet connection
##### Sensor Installation
- Honeypots: adbhoney, ciscoasa, conpot, cowrie, dionaea, elasticpot, heralding, honeytrap, mailoney, rdpy, snare, tanner and vnclowpot
- Honeypots: adbhoney, ciscoasa, conpot, cowrie, dionaea, elasticpot, heralding, honeytrap, mailoney, medpot, rdpy, snare & tanner
- Tools: cockpit
- 6-8 GB RAM (less RAM is possible but might introduce swapping)
@ -188,7 +187,7 @@ Depending on your installation type, whether you install on [real hardware](#har
- A working, non-proxied, internet connection
##### Industrial Installation
- Honeypots: conpot, rdpy, vnclowpot
- Honeypots: conpot, cowrie, heralding, medpot, rdpy
- Tools: cockpit, cyberchef, ELK, elasticsearch head, ewsposter, NGINX, spiderfoot, p0f and suricata
- 6-8 GB RAM (less RAM is possible but might introduce swapping)
@ -205,17 +204,8 @@ Depending on your installation type, whether you install on [real hardware](#har
- Network via DHCP
- A working, non-proxied, internet connection
##### NextGen Installation (Glutton instead of Honeytrap)
- Honeypots: adbhoney, ciscoasa, conpot, cowrie, dionaea, elasticpot, glutton, heralding, mailoney, rdpy, snare, tanner and vnclowpot
- Tools: cockpit, cyberchef, ELK, elasticsearch head, ewsposter, NGINX, spiderfoot, p0f and suricata
- 6-8 GB RAM (less RAM is possible but might introduce swapping)
- 128 GB SSD (smaller is possible but limits the capacity of storing events)
- Network via DHCP
- A working, non-proxied, internet connection
##### Legacy Installation (honeypots based on Standard Installation of T-Pot 17.10)
- Honeypots: cowrie, dionaea, elasticpot, glastopf, honeytrap, mailoney, rdpy and vnclowpot
##### NextGen Installation (Glutton replacing Honeytrap, HoneyPy replacing Elasticpot)
- Honeypots: adbhoney, ciscoasa, conpot, cowrie, dionaea, glutton, heralding, honeypy, mailoney, rdpy, snare & tanner
- Tools: cockpit, cyberchef, ELK, elasticsearch head, ewsposter, NGINX, spiderfoot, p0f and suricata
- 6-8 GB RAM (less RAM is possible but might introduce swapping)
@ -227,7 +217,7 @@ Depending on your installation type, whether you install on [real hardware](#har
# Installation
The installation of T-Pot is straightforward and heavily depends on a working, transparent, non-proxied and up-and-running internet connection. Otherwise the installation **will fail!**
Firstly, decide if you want to download our prebuilt installation ISO image from [GitHub](https://github.com/dtag-dev-sec/tpotce/releases), [create it yourself](#createiso) ***or*** [post-install on a existing Ubuntu Server 18.04 LTS](#postinstall).
Firstly, decide if you want to download our prebuilt installation ISO image from [GitHub](https://github.com/dtag-dev-sec/tpotce/releases), [create it yourself](#createiso) ***or*** [post-install on an existing Debian 9.7 (Stretch)](#postinstall).
Secondly, decide where you want to let the system run: [real hardware](#hardware) or in a [virtual machine](#vm)?
@ -241,7 +231,7 @@ You can download the prebuilt installation image from [GitHub](https://github.co
For transparency reasons and to give you the ability to customize your install, we provide you the [ISO Creator](https://github.com/dtag-dev-sec/tpotce) that enables you to create your own ISO installation image.
**Requirements to create the ISO image:**
- Ubuntu 18.04 LTS or newer as host system (others *may* work, but *remain* untested)
- Debian 9.7 or newer as host system (others *may* work, but *remain* untested)
- 4GB of free memory
- 32GB of free storage
- A working internet connection
@ -284,17 +274,17 @@ If you decide to run T-Pot on dedicated hardware, just follow these steps:
Whereas most CD burning tools allow you to burn from ISO images, the procedure to create a bootable USB stick from an ISO image depends on your system. There are various Windows GUI tools available, e.g. [this tip](http://www.ubuntu.com/download/desktop/create-a-usb-stick-on-windows) might help you.<br> On [Linux](http://askubuntu.com/questions/59551/how-to-burn-a-iso-to-a-usb-device) or [MacOS](http://www.ubuntu.com/download/desktop/create-a-usb-stick-on-mac-osx) you can use the tool *dd* or create the USB stick with T-Pot's [ISO Creator](https://github.com/dtag-dev-sec).
2. Boot from the USB stick and install.
*Please note*: We will ensure the compatibility with the Intel NUC platform, as we really like the form factor, looks and build quality. Other platforms **remain untested**.
*Please note*: While we are performing limited tests with the Intel NUC platform, other hardware platforms **remain untested**. We cannot provide hardware support of any kind.
<a name="postinstall"></a>
## Post-Install User
In some cases it is necessary to install Ubuntu Server 18.04 LTS on your own:
In some cases it is necessary to install Debian 9.7 (Stretch) on your own:
- Cloud provider does not offer mounting ISO images.
- Hardware setup needs special drivers and / or kernels.
- Within your company you have to setup special policies, software etc.
- You just like to stay on top of things.
While the T-Pot-Autoinstaller served us perfectly well in the past, we decided to include the feature directly in T-Pot and its Universal Installer.
The T-Pot Universal Installer will upgrade the system to Debian (Sid) and install all required T-Pot dependencies.
Just follow these steps:
@ -308,7 +298,7 @@ The installer will now start and guide you through the install process.
<a name="postinstallauto"></a>
## Post-Install Auto
You can also let the installer run automatically if you provide your own `tpot.conf`. A example is available in `tpotce/iso/installer/tpot.conf.dist`. This should make things easier in case you want to automate the installation i.e. with **Ansible**.
You can also let the installer run automatically if you provide your own `tpot.conf`. An example is available in `tpotce/iso/installer/tpot.conf.dist`. This should make things easier in case you want to automate the installation, e.g. with **Ansible**.
Just follow these steps while adjusting `tpot.conf` to your needs:
@ -344,7 +334,7 @@ You can also login from your browser and access the Web UI: `https://<your.ip>:6
<a name="placement"></a>
# System Placement
Make sure your system is reachable through the internet. Otherwise it will not capture any attacks, other than the ones from your internal network! We recommend you put it in an unfiltered zone, where all TCP and UDP traffic is forwarded to T-Pot's network interface. However to avoid fingerprinting you can put T-Pot behind a firewall and forward all TCP / UDP traffic in the port range of 1-64000 to T-Pot while allowing access to ports > 64000 only from trusted IPs.
Make sure your system is reachable through a network you suspect intruders in / from (i.e. the internet). Otherwise T-Pot will most likely not capture any attacks, other than the ones from your internal network! We recommend you put it in an unfiltered zone, where all TCP and UDP traffic is forwarded to T-Pot's network interface. However to avoid fingerprinting you can put T-Pot behind a firewall and forward all TCP / UDP traffic in the port range of 1-64000 to T-Pot while allowing access to ports > 64000 only from trusted IPs.
A list of all relevant ports is available as part of the [Technical Concept](#concept)
<br>
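A minimal sketch of such a port range forward on a separate Linux firewall in front of T-Pot; the interface name `eth0` and the T-Pot address `192.168.1.10` are assumptions, and dedicated firewall products will have their own syntax:

```bash
# Forward all TCP / UDP traffic on ports 1-64000 to the T-Pot host ...
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 1:64000 -j DNAT --to-destination 192.168.1.10
iptables -t nat -A PREROUTING -i eth0 -p udp --dport 1:64000 -j DNAT --to-destination 192.168.1.10
iptables -A FORWARD -d 192.168.1.10 -p tcp --dport 1:64000 -j ACCEPT
iptables -A FORWARD -d 192.168.1.10 -p udp --dport 1:64000 -j ACCEPT
# ... and allow ports > 64000 (SSH, Admin UI, Web UI) only from trusted IPs.
```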
@ -355,7 +345,7 @@ In case you need external Admin UI access, forward TCP port 64294 to T-Pot, see
In case you need external SSH access, forward TCP port 64295 to T-Pot, see below.
In case you need external Web UI access, forward TCP port 64297 to T-Pot, see below.
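With the forwards in place, remote access follows the scheme printed in T-Pot's `/etc/issue` login banner (see the banner script later in this commit), for example:

```bash
# SSH on the forwarded port, using the user shown in the login banner
ssh -l tsec -p 64295 <your.ip>
# Web UI and Admin UI in the browser
# https://<your.ip>:64297
# https://<your.ip>:64294
```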
T-Pot requires outgoing git, http, https connections for updates (Ubuntu, Docker, GitHub, PyPi) and attack submission (ewsposter, hpfeeds). Ports and availability may vary based on your geographical location.
T-Pot requires outgoing git, http, https connections for updates (Debian, Docker, GitHub, PyPi) and attack submission (ewsposter, hpfeeds). Ports and availability may vary based on your geographical location.
<a name="updates"></a>
# Updates
@ -363,10 +353,9 @@ For the ones of you who want to live on the bleeding edge of T-Pot development w
**If you made any relevant changes to the T-Pot config files make sure to create a backup first.**
- The Update script will
- **mercilessly** overwrite local changes to be in sync with the T-Pot master branch
- upgrade the system to the latest kernel within Ubuntu 18.04.x LTS
- upgrade the system to the latest packages available within Ubuntu 18.04.x LTS
- update all resources to be en par with the T-Pot master branch
- ensure all T-Pot relevant system files will be patched / copied into original T-Pot state
- upgrade the system to the packages available in Debian (Sid)
- update all resources to be in-sync with the T-Pot master branch
- ensure all T-Pot relevant system files will be patched / copied into the original T-Pot state
You simply run the update script:
```
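# update script location as noted in the release notes above
/opt/tpot/update.sh
```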
@ -428,7 +417,7 @@ If new versions of the components involved appear, we will test them and build n
<a name="submission"></a>
## Community Data Submission
We provide T-Pot in order to make it accessible to all parties interested in honeypot deployment. By default, the captured data is submitted to a community backend. This community backend uses the data to feed [Sicherheitstacho](https://sicherheitstacho.eu.
We provide T-Pot in order to make it accessible to all parties interested in honeypot deployment. By default, the captured data is submitted to a community backend. This community backend uses the data to feed [Sicherheitstacho](https://sicherheitstacho.eu).
You may opt out of the submission by removing the `# Ewsposter service` from `/opt/tpot/etc/tpot.yml`:
1. Stop T-Pot services: `systemctl stop tpot`
2. Remove Ewsposter service: `vi /opt/tpot/etc/tpot.yml`
@ -440,7 +429,7 @@ You may opt out of the submission by removing the `# Ewsposter service` from `/o
restart: always
networks:
- ewsposter_local
image: "dtagdevsec/ewsposter:1810"
image: "dtagdevsec/ewsposter:1903"
volumes:
- /data:/data
- /data/ews/conf/ews.ip:/opt/ewsposter/ews.ip
@ -451,6 +440,11 @@ Data is submitted in a structured ews-format, an XML structure. Hence, you can par
We encourage you not to disable the data submission as it is the main purpose of the community approach - as you all know **sharing is caring** 😍
<a name="hpfeeds-optin"></a>
## Opt-In HPFEEDS Data Submission
As an Opt-In it is now possible to also share T-Pot data with 3rd party HPFEEDS brokers, such as [SISSDEN](https://sissden.eu).
If you want to share your T-Pot data you simply have to register an account with a 3rd party broker, which comes with its own benefits for the community. Once registered you will receive your credentials to share events with the broker. In T-Pot you simply run `hpfeeds_optin.sh`, which will ask for your credentials; in the case of SISSDEN this is just `Ident` and `Secret`, everything else is pre-configured. It will automatically update `/opt/tpot/etc/tpot.yml` to deliver events to your desired broker.
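A short usage sketch; the script is added as `bin/hpfeeds_optin.sh` in this commit, so the path below assumes the usual `/opt/tpot` install location:

```bash
# Run as root: the script stops T-Pot, rewrites the EWS_HPFEEDS_* settings in
# /opt/tpot/etc/tpot.yml with your broker credentials and starts T-Pot again.
sudo /opt/tpot/bin/hpfeeds_optin.sh
```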
<a name="roadmap"></a>
# Roadmap
As with every development there is always room for improvements ...
@ -479,15 +473,15 @@ We hope you understand that we cannot provide support on an individual basis. We
<a name="licenses"></a>
# Licenses
The software that T-Pot is built on uses the following licenses.
<br>GPLv2: [conpot)](https://github.com/mushorg/conpot/blob/master/LICENSE.txt), [dionaea](https://github.com/DinoTools/dionaea/blob/master/LICENSE), [honeytrap](https://github.com/armedpot/honeytrap/blob/master/LICENSE), [suricata](http://suricata-ids.org/about/open-source/)
<br>GPLv2: [conpot](https://github.com/mushorg/conpot/blob/master/LICENSE.txt), [dionaea](https://github.com/DinoTools/dionaea/blob/master/LICENSE), [honeypy](https://github.com/foospidy/HoneyPy/blob/master/LICENSE), [honeytrap](https://github.com/armedpot/honeytrap/blob/master/LICENSE), [suricata](http://suricata-ids.org/about/open-source/)
<br>GPLv3: [adbhoney](https://github.com/huuck/ADBHoney), [elasticpot](https://github.com/schmalle/ElasticPot), [ewsposter](https://github.com/dtag-dev-sec/ews/), [glastopf](https://github.com/glastopf/glastopf/blob/master/GPL), [rdpy](https://github.com/citronneur/rdpy/blob/master/LICENSE), [heralding](https://github.com/johnnykv/heralding/blob/master/LICENSE.txt), [snare](https://github.com/mushorg/snare/blob/master/LICENSE), [tanner](https://github.com/mushorg/snare/blob/master/LICENSE)
<br>Apache 2 License: [cyberchef](https://github.com/gchq/CyberChef/blob/master/LICENSE), [elasticsearch](https://github.com/elasticsearch/elasticsearch/blob/master/LICENSE.txt), [logstash](https://github.com/elasticsearch/logstash/blob/master/LICENSE), [kibana](https://github.com/elasticsearch/kibana/blob/master/LICENSE.md), [docker](https://github.com/docker/docker/blob/master/LICENSE), [elasticsearch-head](https://github.com/mobz/elasticsearch-head/blob/master/LICENCE)
<br>MIT license: [ciscoasa](https://github.com/Cymmetria/ciscoasa_honeypot/blob/master/LICENSE), [ctop](https://github.com/bcicen/ctop/blob/master/LICENSE), [glutton](https://github.com/mushorg/glutton/blob/master/LICENSE)
<br> Other: [cowrie](https://github.com/micheloosterhof/cowrie/blob/master/LICENSE.md), [mailoney](https://github.com/awhitehatter/mailoney), [Ubuntu licensing](http://www.ubuntu.com/about/about-ubuntu/licensing)
<br>MIT license: [ciscoasa](https://github.com/Cymmetria/ciscoasa_honeypot/blob/master/LICENSE), [glutton](https://github.com/mushorg/glutton/blob/master/LICENSE)
<br> Other: [cowrie](https://github.com/micheloosterhof/cowrie/blob/master/LICENSE.md), [mailoney](https://github.com/awhitehatter/mailoney), [Debian licensing](https://www.debian.org/legal/licenses/)
<a name="credits"></a>
# Credits
Without open source and the fruitful development community we are proud to be a part of, T-Pot would not have been possible! Our thanks are extended but not limited to the following people and organizations:
Without open source and the fruitful development community (which we are proud to be a part of), T-Pot would not have been possible! Our thanks are extended but not limited to the following people and organizations:
### The developers and development communities of
@ -496,15 +490,17 @@ Without open source and the fruitful development community we are proud to be a
* [cockpit](https://github.com/cockpit-project/cockpit/graphs/contributors)
* [conpot](https://github.com/mushorg/conpot/graphs/contributors)
* [cowrie](https://github.com/micheloosterhof/cowrie/graphs/contributors)
* [debian](http://www.debian.org/)
* [dionaea](https://github.com/DinoTools/dionaea/graphs/contributors)
* [docker](https://github.com/docker/docker/graphs/contributors)
* [elasticpot](https://github.com/schmalle/ElasticPot/graphs/contributors)
* [elasticpot](https://github.com/schmalle/ElasticpotPY/graphs/contributors)
* [elasticsearch](https://github.com/elastic/elasticsearch/graphs/contributors)
* [elasticsearch-head](https://github.com/mobz/elasticsearch-head/graphs/contributors)
* [ewsposter](https://github.com/armedpot/ewsposter/graphs/contributors)
* [glastopf](https://github.com/mushorg/glastopf/graphs/contributors)
* [glutton](https://github.com/mushorg/glutton/graphs/contributors)
* [heralding](https://github.com/johnnykv/heralding/graphs/contributors)
* [honeypy](https://github.com/foospidy/HoneyPy/graphs/contributors)
* [honeytrap](https://github.com/armedpot/honeytrap/graphs/contributors)
* [kibana](https://github.com/elastic/kibana/graphs/contributors)
* [logstash](https://github.com/elastic/logstash/graphs/contributors)
@ -516,10 +512,9 @@ Without open source and the fruitful development community we are proud to be a
* [snare](https://github.com/mushorg/snare/graphs/contributors)
* [tanner](https://github.com/mushorg/tanner/graphs/contributors)
* [suricata](https://github.com/inliniac/suricata/graphs/contributors)
* [ubuntu](http://www.ubuntu.com/)
### The following companies and organizations
* [canonical](http://www.canonical.com/)
* [debian](https://www.debian.org/)
* [docker](https://www.docker.com/)
* [elastic.io](https://www.elastic.co/)
* [honeynet project](https://www.honeynet.org/)
@ -531,7 +526,12 @@ Without open source and the fruitful development community we are proud to be a
# Stay tuned ...
We will be releasing a new version of T-Pot about every 6-12 months.
<a name="testimonial"></a>
# Testimonial
Some of the greatest feedback we have gotten so far comes from one of the Conpot developers:<br>
***"[...] I highly recommend T-Pot which is ... it's not exactly a swiss army knife .. it's more like a swiss army soldier, equipped with a swiss army knife. Inside a tank. A swiss tank. [...]"***
<a name="funfact"></a>
# Fun Fact
In an effort of saving the environment we are now brewing our own Mate Ice Tea and consumed 241 liters so far for the T-Pot 18.11 development 😇
In an effort of saving the environment we are now brewing our own Mate Ice Tea and consumed 73 liters so far for the T-Pot 19.03 development 😇

View file

@ -1,4 +1,12 @@
#!/bin/bash
# Run as root only.
myWHOAMI=$(whoami)
if [ "$myWHOAMI" != "root" ]
then
echo "Need to run as root ..."
exit
fi
# Backup all ES relevant folders
# Make sure ES is available
myES="http://127.0.0.1:64298/"
@ -16,7 +24,7 @@ fi
myCOUNT=1
myDATE=$(date +%Y%m%d%H%M)
myELKPATH="/data/elk/data"
myKIBANAINDEXNAME=$(curl -s -XGET ''$myES'_cat/indices/' | grep -w ".kibana_1" | awk '{ print $4 }')
myKIBANAINDEXNAME=$(curl -s -XGET ''$myES'_cat/indices/.kibana' | awk '{ print $4 }')
myKIBANAINDEXPATH=$myELKPATH/nodes/0/indices/$myKIBANAINDEXNAME
# Let's ensure normal operation on exit or if interrupted ...

View file

@ -1,6 +1,5 @@
#!/bin/bash
# T-Pot Container Data Cleaner & Log Rotator
# Set colors
myRED=""
myGREEN=""
@ -154,6 +153,14 @@ fuHERALDING () {
chown tpot:tpot /data/heralding -R
}
# Let's create a function to clean up and prepare honeypy data
fuHONEYPY () {
if [ "$myPERSISTENCE" != "on" ]; then rm -rf /data/honeypy/*; fi
mkdir -p /data/honeypy/log
chmod 760 /data/honeypy -R
chown tpot:tpot /data/honeypy -R
}
# Let's create a function to clean up and prepare honeytrap data
fuHONEYTRAP () {
if [ "$myPERSISTENCE" != "on" ]; then rm -rf /data/honeytrap/*; fi
@ -258,6 +265,7 @@ if [ "$myPERSISTENCE" = "on" ];
fuGLASTOPF
fuGLUTTON
fuHERALDING
fuHONEYPY
fuHONEYTRAP
fuMAILONEY
fuMEDPOT

View file

@ -1,4 +1,13 @@
#!/bin/bash
# Run as root only.
myWHOAMI=$(whoami)
if [ "$myWHOAMI" != "root" ]
then
echo "Need to run as root ..."
exit
fi
# Show current status of T-Pot containers
myPARAM="$1"
myCONTAINERS="$(cat /opt/tpot/etc/tpot.yml | grep -v '#' | grep container_name | cut -d: -f2 | sort | tr -d " ")"
@ -9,14 +18,13 @@ myWHITE=""
myMAGENTA=""
function fuGETSTATUS {
grc docker ps -f status=running -f status=exited --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}" | grep -v "NAME" | sort
grc --colour=on docker ps -f status=running -f status=exited --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}" | grep -v "NAME" | sort
}
function fuGETSYS {
printf "========| System |========\n"
printf "%+10s %-20s\n" "Date: " "$(date)"
printf "%+10s %-20s\n" "Uptime: " "$(uptime | cut -b 2-)"
printf "%+10s %-20s\n" "CPU temp: " "$(sensors | grep 'Physical' | awk '{ print $4" " }' | tr -d [:cntrl:])"
echo
}

View file

@ -2,10 +2,10 @@
# Dump all ES data
# Make sure ES is available
myES="http://127.0.0.1:64298/"
myESSTATUS=$(curl -s -XGET ''$myES'_cluster/health' | jq '.' | grep -c green)
myESSTATUS=$(curl -s -XGET ''$myES'_cluster/health' | jq '.' | grep -c "green\|yellow")
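# Note: a single-node setup cannot allocate replicas, so "yellow" is as healthy as the cluster gets here.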
if ! [ "$myESSTATUS" = "1" ]
then
echo "### Elasticsearch is not available, try starting via 'systemctl start elk'."
echo "### Elasticsearch is not available, try starting via 'systemctl start tpot'."
exit
else
echo "### Elasticsearch is available, now continuing."
@ -20,12 +20,12 @@ trap fuCLEANUP EXIT
# Set vars
myDATE=$(date +%Y%m%d%H%M)
myINDICES=$(curl -s -XGET ''$myES'_cat/indices/' | awk '{ print $3 }' | sort | grep -v 1970)
myES="http://127.0.0.1:64298/"
myINDICES=$(curl -s -XGET ''$myES'_cat/indices/logstash-*' | awk '{ print $3 }' | sort | grep -v 1970)
myINDICES+=" .kibana"
myCOL1=""
myCOL0=""
# Dumping all ES data
# Dumping Kibana and Logstash data
echo $myCOL1"### The following indices will be dumped: "$myCOL0
echo $myINDICES
echo

124
bin/hpfeeds_optin.sh Executable file
View file

@ -0,0 +1,124 @@
#!/bin/bash
# Run as root only.
myWHOAMI=$(whoami)
if [ "$myWHOAMI" != "root" ]
then
echo "Need to run as root ..."
exit
fi
myTPOTYMLFILE="/opt/tpot/etc/tpot.yml"
function fuSISSDEN () {
echo
echo "You chose SISSDEN, you just need to provide ident and secret"
echo
myENABLE="true"
myHOST="hpfeeds.sissden.eu"
myPORT="10000"
myCHANNEL="t-pot.events"
myCERT="/opt/ewsposter/sissden.pem"
read -p "Ident: " myIDENT
read -p "Secret: " mySECRET
myFORMAT="json"
}
function fuGENERIC () {
echo
echo "You chose generic, please provide all the details of the broker"
echo
myENABLE="true"
read -p "Host URL: " myHOST
read -p "Port: " myPORT
read -p "Channel: " myCHANNEL
echo "For generic providers set this to 'false'"
echo "If you received a CA certficate mount it into the ewsposter container by modifying $myTPOTYMLFILE"
read -p "TLS - 'false' or path to CA in container: " myCERT
read -p "Ident: " myIDENT
read -p "Secret: " mySECRET
read -p "Format ews (xml) or json: " myFORMAT
}
function fuOPTOUT () {
echo
while [ 1 != 2 ]
do
read -s -n 1 -p "You chose to opt out (y/n)? " mySELECT
echo $mySELECT
case "$mySELECT" in
[y,Y])
echo "Opt out."
break
;;
[n,N])
echo "Aborted."
exit
;;
esac
done
myENABLE="false"
myHOST="host"
myPORT="port"
myCHANNEL="channels"
myCERT="false"
myIDENT="user"
mySECRET="secret"
myFORMAT="json"
}
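# Persist the chosen settings: stop T-Pot, rewrite the EWS_HPFEEDS_* variables in tpot.yml, then start T-Pot again so ewsposter picks them up.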
function fuAPPLY () {
echo "Now stopping T-Pot ..."
systemctl stop tpot
echo "Applying your settings ... "
sed --follow-symlinks -i "s/EWS_HPFEEDS_ENABLE.*/EWS_HPFEEDS_ENABLE=${myENABLE}/g" "$myTPOTYMLFILE"
sed --follow-symlinks -i "s/EWS_HPFEEDS_HOST.*/EWS_HPFEEDS_HOST=${myHOST}/g" "$myTPOTYMLFILE"
sed --follow-symlinks -i "s/EWS_HPFEEDS_PORT.*/EWS_HPFEEDS_PORT=${myPORT}/g" "$myTPOTYMLFILE"
sed --follow-symlinks -i "s/EWS_HPFEEDS_CHANNELS.*/EWS_HPFEEDS_CHANNELS=${myCHANNEL}/g" "$myTPOTYMLFILE"
sed --follow-symlinks -i "s#EWS_HPFEEDS_TLSCERT.*#EWS_HPFEEDS_TLSCERT=${myCERT}#g" "$myTPOTYMLFILE"
sed --follow-symlinks -i "s/EWS_HPFEEDS_IDENT.*/EWS_HPFEEDS_IDENT=${myIDENT}/g" "$myTPOTYMLFILE"
sed --follow-symlinks -i "s/EWS_HPFEEDS_SECRET.*/EWS_HPFEEDS_SECRET=${mySECRET}/g" "$myTPOTYMLFILE"
sed --follow-symlinks -i "s/EWS_HPFEEDS_FORMAT.*/EWS_HPFEEDS_FORMAT=${myFORMAT}/g" "$myTPOTYMLFILE"
echo "Now starting T-Pot ..."
systemctl start tpot
echo "You can always change or review your settings in the ewsposter section of $myTPOTYMLFILE"
echo "Done."
}
echo "HPFEEDS Delivery Opt-In for T-Pot"
echo "---------------------------------"
echo "By running this script you agree to share your data with a 3rd party and agree to their corresponding sharing terms."
echo
echo
echo "Please choose your broker"
echo "---------------------------"
echo "[1] - SISSDEN"
echo "[2] - Generic (enter details manually)"
echo "[0] - Opt out of HPFEEDS"
echo "[q] - Do not agree end exit"
echo
while [ 1 != 2 ]
do
read -s -n 1 -p "Your choice: " mySELECT
echo $mySELECT
case "$mySELECT" in
[1])
fuSISSDEN
break
;;
[2])
fuGENERIC
break
;;
[0])
fuOPTOUT
break
;;
[q,Q])
echo "Aborted."
exit
;;
esac
done
fuAPPLY

View file

@ -2,10 +2,10 @@
# Restore folder based ES backup
# Make sure ES is available
myES="http://127.0.0.1:64298/"
myESSTATUS=$(curl -s -XGET ''$myES'_cluster/health' | jq '.' | grep -c green)
myESSTATUS=$(curl -s -XGET ''$myES'_cluster/health' | jq '.' | grep -c "green\|yellow")
if ! [ "$myESSTATUS" = "1" ]
then
echo "### Elasticsearch is not available, try starting via 'systemctl start elk'."
echo "### Elasticsearch is not available, try starting via 'systemctl start tpot'."
exit
else
echo "### Elasticsearch is available, now continuing."
@ -41,13 +41,27 @@ echo $myCOL1"### Now unpacking tar archive: "$myDUMP $myCOL0
tar xvf $myDUMP
# Build indices list
myINDICES=$(ls tmp/logstash*.gz | cut -c 5- | rev | cut -c 4- | rev)
myINDICES="$(ls tmp/logstash*.gz | cut -c 5- | rev | cut -c 4- | rev)"
myINDICES+=" .kibana"
echo $myCOL1"### The following indices will be restored: "$myCOL0
echo $myINDICES
echo
# Force single seat template for everything
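# (one shard, no replicas: on a single-node instance this keeps restored indices from staying yellow)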
echo -n $myCOL1"### Forcing single seat template: "$myCOL0
curl -s -XPUT ''$myES'_template/.*' -H 'Content-Type: application/json' -d'
{ "index_patterns": ".*",
"order": 1,
"settings":
{
"number_of_shards": 1,
"number_of_replicas": 0
}
}'
echo
# Restore indices
curl -s -X DELETE ''$myES'.kibana*' > /dev/null
for i in $myINDICES;
do
# Delete index if it already exists

View file

@ -23,10 +23,10 @@ function fuNFQCHECK {
myNFQCHECK=$(grep -e '^\s*honeytrap:\|^\s*glutton:' $myDOCKERCOMPOSEYML | tr -d ': ' | uniq)
if [ "$myNFQCHECK" == "" ];
then
echo "No NFQ related honeypot detected, no iptables rules needed. Exiting."
echo "No NFQ related honeypot detected, no iptables-legacy rules needed. Exiting."
exit
else
echo "Detected $myNFQCHECK as NFQ based honeypot, iptables rules needed. Continuing."
echo "Detected $myNFQCHECK as NFQ based honeypot, iptables-legacy rules needed. Continuing."
fi
}
@ -41,54 +41,54 @@ echo "$myRULESPORTS"
}
function fuSETRULES {
### Setting up iptables rules for honeytrap
### Setting up iptables-legacy rules for honeytrap
if [ "$myNFQCHECK" == "honeytrap" ];
then
/sbin/iptables -w -A INPUT -s 127.0.0.1 -j ACCEPT
/sbin/iptables -w -A INPUT -d 127.0.0.1 -j ACCEPT
/usr/sbin/iptables-legacy -w -A INPUT -s 127.0.0.1 -j ACCEPT
/usr/sbin/iptables-legacy -w -A INPUT -d 127.0.0.1 -j ACCEPT
for myPORT in $myRULESPORTS; do
/sbin/iptables -w -A INPUT -p tcp --dport $myPORT -j ACCEPT
/usr/sbin/iptables-legacy -w -A INPUT -p tcp --dport $myPORT -j ACCEPT
done
/sbin/iptables -w -A INPUT -p tcp --syn -m state --state NEW -j NFQUEUE
/usr/sbin/iptables-legacy -w -A INPUT -p tcp --syn -m state --state NEW -j NFQUEUE
fi
### Setting up iptables rules for glutton
### Setting up iptables-legacy rules for glutton
if [ "$myNFQCHECK" == "glutton" ];
then
/sbin/iptables -w -t raw -A PREROUTING -s 127.0.0.1 -j ACCEPT
/sbin/iptables -w -t raw -A PREROUTING -d 127.0.0.1 -j ACCEPT
/usr/sbin/iptables-legacy -w -t raw -A PREROUTING -s 127.0.0.1 -j ACCEPT
/usr/sbin/iptables-legacy -w -t raw -A PREROUTING -d 127.0.0.1 -j ACCEPT
for myPORT in $myRULESPORTS; do
/sbin/iptables -w -t raw -A PREROUTING -p tcp --dport $myPORT -j ACCEPT
/usr/sbin/iptables-legacy -w -t raw -A PREROUTING -p tcp --dport $myPORT -j ACCEPT
done
# No need for NFQ forwarding, such rules are set up by glutton
fi
}
function fuUNSETRULES {
### Removing iptables rules for honeytrap
### Removing iptables-legacy rules for honeytrap
if [ "$myNFQCHECK" == "honeytrap" ];
then
/sbin/iptables -w -D INPUT -s 127.0.0.1 -j ACCEPT
/sbin/iptables -w -D INPUT -d 127.0.0.1 -j ACCEPT
/usr/sbin/iptables-legacy -w -D INPUT -s 127.0.0.1 -j ACCEPT
/usr/sbin/iptables-legacy -w -D INPUT -d 127.0.0.1 -j ACCEPT
for myPORT in $myRULESPORTS; do
/sbin/iptables -w -D INPUT -p tcp --dport $myPORT -j ACCEPT
/usr/sbin/iptables-legacy -w -D INPUT -p tcp --dport $myPORT -j ACCEPT
done
/sbin/iptables -w -D INPUT -p tcp --syn -m state --state NEW -j NFQUEUE
/usr/sbin/iptables-legacy -w -D INPUT -p tcp --syn -m state --state NEW -j NFQUEUE
fi
### Removing iptables rules for glutton
### Removing iptables-legacy rules for glutton
if [ "$myNFQCHECK" == "glutton" ];
then
/sbin/iptables -w -t raw -D PREROUTING -s 127.0.0.1 -j ACCEPT
/sbin/iptables -w -t raw -D PREROUTING -d 127.0.0.1 -j ACCEPT
/usr/sbin/iptables-legacy -w -t raw -D PREROUTING -s 127.0.0.1 -j ACCEPT
/usr/sbin/iptables-legacy -w -t raw -D PREROUTING -d 127.0.0.1 -j ACCEPT
for myPORT in $myRULESPORTS; do
/sbin/iptables -w -t raw -D PREROUTING -p tcp --dport $myPORT -j ACCEPT
/usr/sbin/iptables-legacy -w -t raw -D PREROUTING -p tcp --dport $myPORT -j ACCEPT
done
# No need for removing NFQ forwarding, such rules are removed by glutton
fi

View file

@ -1,5 +1,13 @@
#!/bin/bash
# Run as root only.
myWHOAMI=$(whoami)
if [ "$myWHOAMI" != "root" ]
then
echo "Need to run as root ..."
exit
fi
# set backtitle, get filename
myBACKTITLE="T-Pot Edition Selection Tool"
myYMLS=$(cd /opt/tpot/etc/compose/ && ls -1 *.yml)
@ -21,7 +29,7 @@ for i in $myYMLS;
do
myITEMS+="$i $(echo $i | cut -d "." -f1 | tr [:lower:] [:upper:]) "
done
myEDITION=$(dialog --backtitle "$myBACKTITLE" --menu "Select T-Pot Edition" 13 50 6 $myITEMS 3>&1 1>&2 2>&3 3>&-)
myEDITION=$(dialog --backtitle "$myBACKTITLE" --menu "Select T-Pot Edition" 12 50 5 $myITEMS 3>&1 1>&2 2>&3 3>&-)
if [ "$myEDITION" == "" ];
then
echo "Have a nice day!"

View file

@ -9,10 +9,18 @@ if [ "$myEXTIP" = "" ];
myEXTIP=$myLOCALIP
fi
mySSHUSER=$(cat /etc/passwd | grep 1000 | cut -d ':' -f1)
sed -i "s#IP:.*#IP: $myLOCALIP ($myEXTIP)#" /etc/issue
sed -i "s#SSH:.*#SSH: ssh -l tsec -p 64295 $myLOCALIP#" /etc/issue
sed -i "s#WEB:.*#WEB: https://$myLOCALIP:64297#" /etc/issue
sed -i "s#ADMIN:.*#ADMIN: https://$myLOCALIP:64294#" /etc/issue
echo "" > /etc/issue
toilet -f ivrit -F metal --filter border:metal "T-Pot 19.03" | sed 's/\\/\\\\/g' >> /etc/issue
echo >> /etc/issue
echo ",---- [ \n ] [ \d ] [ \t ]" >> /etc/issue
echo "|" >> /etc/issue
echo "| IP: $myLOCALIP ($myEXTIP)" >> /etc/issue
echo "| SSH: ssh -l tsec -p 64295 $myLOCALIP" >> /etc/issue
echo "| WEB: https://$myLOCALIP:64297" >> /etc/issue
echo "| ADMIN: https://$myLOCALIP:64294" >> /etc/issue
echo "|" >> /etc/issue
echo "\`----" >> /etc/issue
echo >> /etc/issue
tee /data/ews/conf/ews.ip << EOF
[MAIN]
ip = $myEXTIP

Binary file not shown (image updated; 336 KiB before, 374 KiB after).

View file

@ -14,7 +14,7 @@ services:
- adbhoney_local
ports:
- "5555:5555"
image: "dtagdevsec/adbhoney:1811"
image: "dtagdevsec/adbhoney:1903"
read_only: true
volumes:
- /data/adbhoney/log:/opt/adbhoney/log

View file

@ -1,4 +1,4 @@
[![](https://images.microbadger.com/badges/version/dtagdevsec/ciscoasa:1811.svg)](https://microbadger.com/images/dtagdevsec/ciscoasa:1811 "Get your own version badge on microbadger.com") [![](https://images.microbadger.com/badges/image/dtagdevsec/ciscoasa:1811.svg)](https://microbadger.com/images/dtagdevsec/ciscoasa:1811 "Get your own image badge on microbadger.com")
[![](https://images.microbadger.com/badges/version/dtagdevsec/ciscoasa:1903.svg)](https://microbadger.com/images/dtagdevsec/ciscoasa:1903 "Get your own version badge on microbadger.com") [![](https://images.microbadger.com/badges/image/dtagdevsec/ciscoasa:1903.svg)](https://microbadger.com/images/dtagdevsec/ciscoasa:1903 "Get your own image badge on microbadger.com")
# ciscoasa

View file

@ -13,7 +13,7 @@ services:
ports:
- "5000:5000/udp"
- "8443:8443"
image: "dtagdevsec/ciscoasa:1811"
image: "dtagdevsec/ciscoasa:1903"
read_only: true
volumes:
- /data/ciscoasa/log:/var/log/ciscoasa

View file

@ -1,4 +1,4 @@
[![](https://images.microbadger.com/badges/version/dtagdevsec/conpot:1811.svg)](https://microbadger.com/images/dtagdevsec/conpot:1811 "Get your own version badge on microbadger.com") [![](https://images.microbadger.com/badges/image/dtagdevsec/conpot:1811.svg)](https://microbadger.com/images/dtagdevsec/conpot:1811 "Get your own image badge on microbadger.com")
[![](https://images.microbadger.com/badges/version/dtagdevsec/conpot:1903.svg)](https://microbadger.com/images/dtagdevsec/conpot:1903 "Get your own version badge on microbadger.com") [![](https://images.microbadger.com/badges/image/dtagdevsec/conpot:1903.svg)](https://microbadger.com/images/dtagdevsec/conpot:1903 "Get your own image badge on microbadger.com")
# conpot

View file

@ -35,7 +35,7 @@ services:
- "2121:21"
- "44818:44818"
- "47808:47808"
image: "dtagdevsec/conpot:1811"
image: "dtagdevsec/conpot:1903"
read_only: true
volumes:
- /data/conpot/log:/var/log/conpot
@ -58,7 +58,7 @@ services:
ports:
# - "161:161"
- "2404:2404"
image: "dtagdevsec/conpot:1811"
image: "dtagdevsec/conpot:1903"
read_only: true
volumes:
- /data/conpot/log:/var/log/conpot
@ -80,7 +80,7 @@ services:
- conpot_local_guardian_ast
ports:
- "10001:10001"
image: "dtagdevsec/conpot:1811"
image: "dtagdevsec/conpot:1903"
read_only: true
volumes:
- /data/conpot/log:/var/log/conpot
@ -102,7 +102,7 @@ services:
- conpot_local_ipmi
ports:
- "623:623"
image: "dtagdevsec/conpot:1811"
image: "dtagdevsec/conpot:1903"
read_only: true
volumes:
- /data/conpot/log:/var/log/conpot
@ -125,7 +125,7 @@ services:
ports:
- "1025:1025"
- "50100:50100"
image: "dtagdevsec/conpot:1811"
image: "dtagdevsec/conpot:1903"
read_only: true
volumes:
- /data/conpot/log:/var/log/conpot

View file

@ -5,6 +5,7 @@ ADD dist/ /root/dist/
# Get and install dependencies & packages
RUN apk -U --no-cache add \
bash \
build-base \
git \
gmp-dev \
@ -12,6 +13,7 @@ RUN apk -U --no-cache add \
libffi-dev \
mpc1-dev \
mpfr-dev \
openssl \
openssl-dev \
python \
python-dev \
@ -24,11 +26,14 @@ RUN apk -U --no-cache add \
addgroup -g 2000 cowrie && \
adduser -S -s /bin/ash -u 2000 -D -g 2000 cowrie && \
# Install cowrie from git
git clone --depth=1 https://github.com/micheloosterhof/cowrie /home/cowrie/cowrie/ -b v1.3.0 && \
cd /home/cowrie/cowrie && \
pip install --no-cache-dir --upgrade cffi pip && \
pip install --no-cache-dir --upgrade -r requirements.txt && \
# Install cowrie
mkdir -p /home/cowrie && \
cd /home/cowrie && \
git clone --depth=1 https://github.com/micheloosterhof/cowrie -b 1.5.3 && \
cd cowrie && \
mkdir -p log && \
pip install --upgrade pip && \
pip install --upgrade -r requirements.txt && \
# Setup configs
setcap cap_net_bind_service=+ep /usr/bin/python2.7 && \
@ -36,7 +41,7 @@ RUN apk -U --no-cache add \
chown cowrie:cowrie -R /home/cowrie/* /usr/lib/python2.7/site-packages/twisted/plugins && \
# Start Cowrie once to prevent dropin.cache errors upon container start caused by read-only filesystem
su - cowrie -c "export PYTHONPATH=/home/cowrie/cowrie && \
su - cowrie -c "export PYTHONPATH=/home/cowrie/cowrie:/home/cowrie/cowrie/src && \
cd /home/cowrie/cowrie && \
/usr/bin/twistd --uid=2000 --gid=2000 -y cowrie.tac --pidfile cowrie.pid cowrie &" && \
sleep 10 && \
@ -49,6 +54,7 @@ RUN apk -U --no-cache add \
libffi-dev \
mpc1-dev \
mpfr-dev \
openssl-dev \
python-dev \
py-mysqldb \
py-pip && \
@ -57,7 +63,7 @@ RUN apk -U --no-cache add \
rm -rf /home/cowrie/cowrie/cowrie.pid
# Start cowrie
ENV PYTHONPATH /home/cowrie/cowrie
ENV PYTHONPATH /home/cowrie/cowrie:/home/cowrie/cowrie/src
WORKDIR /home/cowrie/cowrie
USER cowrie:cowrie
CMD ["/usr/bin/twistd", "--nodaemon", "-y", "cowrie.tac", "--pidfile", "/tmp/cowrie/cowrie.pid", "cowrie"]

View file

@ -1,4 +1,4 @@
[![](https://images.microbadger.com/badges/version/dtagdevsec/cowrie:1811.svg)](https://microbadger.com/images/dtagdevsec/cowrie:1811 "Get your own version badge on microbadger.com") [![](https://images.microbadger.com/badges/image/dtagdevsec/cowrie:1811.svg)](https://microbadger.com/images/dtagdevsec/cowrie:1811 "Get your own image badge on microbadger.com")
[![](https://images.microbadger.com/badges/version/dtagdevsec/cowrie:1903.svg)](https://microbadger.com/images/dtagdevsec/cowrie:1903 "Get your own version badge on microbadger.com") [![](https://images.microbadger.com/badges/image/dtagdevsec/cowrie:1903.svg)](https://microbadger.com/images/dtagdevsec/cowrie:1903 "Get your own image badge on microbadger.com")
# cowrie

View file

@ -1,14 +1,44 @@
[honeypot]
hostname = ubuntu
log_path = log
download_path = dl
report_public_ip = true
share_path= share/cowrie
state_path = /tmp/cowrie/data
etc_path = etc
contents_path = honeyfs
txtcmds_path = txtcmds
ttylog = true
ttylog_path = log/tty
interactive_timeout = 180
authentication_timeout = 120
backend = shell
auth_class = AuthRandom
auth_class_parameters = 2, 5, 10
reported_ssh_port = 22
data_path = /tmp/cowrie/data
[shell]
filesystem = share/cowrie/fs.pickle
processes = share/cowrie/cmdoutput.json
arch = linux-x64-lsb
kernel_version = 3.2.0-4-amd64
kernel_build_string = #1 SMP Debian 3.2.68-1+deb7u1
hardware_platform = x86_64
operating_system = GNU/Linux
[ssh]
enabled = true
rsa_public_key = etc/ssh_host_rsa_key.pub
rsa_private_key = etc/ssh_host_rsa_key
dsa_public_key = etc/ssh_host_dsa_key.pub
dsa_private_key = etc/ssh_host_dsa_key
version = SSH-2.0-OpenSSH_7.2p2 Ubuntu-4ubuntu2.2
listen_endpoints = tcp:22:interface=0.0.0.0
sftp_enabled = true
forwarding = true
forward_redirect = false
forward_tunnel = false
[telnet]
enabled = true
@ -18,8 +48,10 @@ reported_port = 23
[output_jsonlog]
enabled = true
logfile = log/cowrie.json
epoch_timestamp = false
[output_textlog]
enabled = false
logfile = log/cowrie-textlog.log
format = text

View file

@ -18,7 +18,7 @@ services:
ports:
- "22:22"
- "23:23"
image: "dtagdevsec/cowrie:1811"
image: "dtagdevsec/cowrie:1903"
read_only: true
volumes:
- /data/cowrie/downloads:/home/cowrie/cowrie/dl

View file

@ -1,4 +1,4 @@
FROM alpine
FROM alpine:3.8
# Get and install dependencies & packages
RUN apk -U --no-cache add \
@ -12,7 +12,7 @@ RUN apk -U --no-cache add \
# Install CyberChef
cd /root && \
git clone https://github.com/gchq/cyberchef -b v8.20.0 --depth=1 && \
git clone https://github.com/gchq/cyberchef --depth=1 && \
chown -R nobody:nobody cyberchef && \
cd cyberchef && \
npm install && \

View file

@ -1,4 +1,4 @@
[![](https://images.microbadger.com/badges/version/dtagdevsec/cyberchef:1811.svg)](https://microbadger.com/images/dtagdevsec/cyberchef:1811 "Get your own version badge on microbadger.com") [![](https://images.microbadger.com/badges/image/dtagdevsec/cyberchef:1811.svg)](https://microbadger.com/images/dtagdevsec/cyberchef:1811 "Get your own image badge on microbadger.com")
[![](https://images.microbadger.com/badges/version/dtagdevsec/cyberchef:1903.svg)](https://microbadger.com/images/dtagdevsec/cyberchef:1903 "Get your own version badge on microbadger.com") [![](https://images.microbadger.com/badges/image/dtagdevsec/cyberchef:1903.svg)](https://microbadger.com/images/dtagdevsec/cyberchef:1903 "Get your own image badge on microbadger.com")
# cyberchef

View file

@ -14,5 +14,5 @@ services:
- cyberchef_local
ports:
- "127.0.0.1:64299:8000"
image: "dtagdevsec/cyberchef:1811"
image: "dtagdevsec/cyberchef:1903"
read_only: true

View file

@ -1,4 +1,4 @@
[![](https://images.microbadger.com/badges/version/dtagdevsec/dionaea:1811.svg)](https://microbadger.com/images/dtagdevsec/dionaea:1811 "Get your own version badge on microbadger.com") [![](https://images.microbadger.com/badges/image/dtagdevsec/dionaea:1811.svg)](https://microbadger.com/images/dtagdevsec/dionaea:1811 "Get your own image badge on microbadger.com")
[![](https://images.microbadger.com/badges/version/dtagdevsec/dionaea:1903.svg)](https://microbadger.com/images/dtagdevsec/dionaea:1903 "Get your own version badge on microbadger.com") [![](https://images.microbadger.com/badges/image/dtagdevsec/dionaea:1903.svg)](https://microbadger.com/images/dtagdevsec/dionaea:1903 "Get your own image badge on microbadger.com")
# dionaea

View file

@ -27,7 +27,7 @@ services:
- "5060:5060/udp"
- "5061:5061"
- "27017:27017"
image: "dtagdevsec/dionaea:1811"
image: "dtagdevsec/dionaea:1903"
read_only: true
volumes:
- /data/dionaea/roots/ftp:/opt/dionaea/var/dionaea/roots/ftp

View file

@ -1,4 +1,4 @@
[![](https://images.microbadger.com/badges/version/dtagdevsec/elasticpot:1811.svg)](https://microbadger.com/images/dtagdevsec/elasticpot:1811 "Get your own version badge on microbadger.com") [![](https://images.microbadger.com/badges/image/dtagdevsec/elasticpot:1811.svg)](https://microbadger.com/images/dtagdevsec/elasticpot:1811 "Get your own image badge on microbadger.com")
[![](https://images.microbadger.com/badges/version/dtagdevsec/elasticpot:1903.svg)](https://microbadger.com/images/dtagdevsec/elasticpot:1903 "Get your own version badge on microbadger.com") [![](https://images.microbadger.com/badges/image/dtagdevsec/elasticpot:1903.svg)](https://microbadger.com/images/dtagdevsec/elasticpot:1903 "Get your own image badge on microbadger.com")
# elasticpot

View file

@ -14,7 +14,7 @@ services:
- elasticpot_local
ports:
- "9200:9200"
image: "dtagdevsec/elasticpot:1811"
image: "dtagdevsec/elasticpot:1903"
read_only: true
volumes:
- /data/elasticpot/log:/opt/ElasticpotPY/log

View file

@ -1,11 +1,11 @@
# Elasticsearch
[![](https://images.microbadger.com/badges/version/dtagdevsec/elasticsearch:1811.svg)](https://microbadger.com/images/dtagdevsec/elasticsearch:1811 "Get your own version badge on microbadger.com") [![](https://images.microbadger.com/badges/image/dtagdevsec/elasticsearch:1811.svg)](https://microbadger.com/images/dtagdevsec/elasticsearch:1811 "Get your own image badge on microbadger.com")
[![](https://images.microbadger.com/badges/version/dtagdevsec/elasticsearch:1903.svg)](https://microbadger.com/images/dtagdevsec/elasticsearch:1903 "Get your own version badge on microbadger.com") [![](https://images.microbadger.com/badges/image/dtagdevsec/elasticsearch:1903.svg)](https://microbadger.com/images/dtagdevsec/elasticsearch:1903 "Get your own image badge on microbadger.com")
# Logstash
[![](https://images.microbadger.com/badges/version/dtagdevsec/logstash:1811.svg)](https://microbadger.com/images/dtagdevsec/logstash:1811 "Get your own version badge on microbadger.com") [![](https://images.microbadger.com/badges/image/dtagdevsec/logstash:1811.svg)](https://microbadger.com/images/dtagdevsec/logstash:1811 "Get your own image badge on microbadger.com")
[![](https://images.microbadger.com/badges/version/dtagdevsec/logstash:1903.svg)](https://microbadger.com/images/dtagdevsec/logstash:1903 "Get your own version badge on microbadger.com") [![](https://images.microbadger.com/badges/image/dtagdevsec/logstash:1903.svg)](https://microbadger.com/images/dtagdevsec/logstash:1903 "Get your own image badge on microbadger.com")
# Kibana
[![](https://images.microbadger.com/badges/version/dtagdevsec/kibana:1811.svg)](https://microbadger.com/images/dtagdevsec/kibana:1811 "Get your own version badge on microbadger.com") [![](https://images.microbadger.com/badges/image/dtagdevsec/kibana:1811.svg)](https://microbadger.com/images/dtagdevsec/kibana:1811 "Get your own image badge on microbadger.com")
[![](https://images.microbadger.com/badges/version/dtagdevsec/kibana:1903.svg)](https://microbadger.com/images/dtagdevsec/kibana:1903 "Get your own version badge on microbadger.com") [![](https://images.microbadger.com/badges/image/dtagdevsec/kibana:1903.svg)](https://microbadger.com/images/dtagdevsec/kibana:1903 "Get your own image badge on microbadger.com")
# elk stack

View file

@ -24,7 +24,7 @@ services:
mem_limit: 4g
ports:
- "127.0.0.1:64298:9200"
image: "dtagdevsec/elasticsearch:1811"
image: "dtagdevsec/elasticsearch:1903"
volumes:
- /data:/data
@ -39,7 +39,7 @@ services:
condition: service_healthy
ports:
- "127.0.0.1:64296:5601"
image: "dtagdevsec/kibana:1811"
image: "dtagdevsec/kibana:1903"
## Logstash service
logstash:
@ -51,7 +51,7 @@ services:
condition: service_healthy
env_file:
- /opt/tpot/etc/compose/elk_environment
image: "dtagdevsec/logstash:1811"
image: "dtagdevsec/logstash:1903"
volumes:
- /data:/data
- /root/tpotce/docker/elk/logstash/dist/logstash.conf:/etc/logstash/conf.d/logstash.conf
@ -66,5 +66,5 @@ services:
condition: service_healthy
ports:
- "127.0.0.1:64302:9100"
image: "dtagdevsec/head:1811"
image: "dtagdevsec/head:1903"
read_only: true

View file

@ -4,17 +4,19 @@ FROM alpine
ADD dist/ /root/dist/
# Setup env and apt
RUN apk -U add \
RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
apk -U --no-cache add \
aria2 \
bash \
curl \
openjdk8-jre \
wget && \
nss \
openjdk8-jre && \
# Get and install packages
cd /root/dist/ && \
mkdir -p /usr/share/elasticsearch/ && \
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.5.4.tar.gz && \
tar xvfz elasticsearch-6.5.4.tar.gz --strip-components=1 -C /usr/share/elasticsearch/ && \
aria2c -s 16 -x 16 https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.6.2.tar.gz && \
tar xvfz elasticsearch-6.6.2.tar.gz --strip-components=1 -C /usr/share/elasticsearch/ && \
# Add and move files
cd /root/dist/ && \
@ -28,7 +30,7 @@ RUN apk -U add \
rm -rf /usr/share/elasticsearch/modules/x-pack-ml && \
# Clean up
apk del --purge wget && \
apk del --purge aria2 && \
rm -rf /root/* && \
rm -rf /tmp/* && \
rm -rf /var/cache/apk/*

View file

@ -24,6 +24,6 @@ services:
mem_limit: 2g
ports:
- "127.0.0.1:64298:9200"
image: "dtagdevsec/elasticsearch:1811"
image: "dtagdevsec/elasticsearch:1903"
volumes:
- /data:/data

View file

@ -1,19 +1,19 @@
FROM alpine
FROM node:10.15.2-alpine
# Include dist
ADD dist/ /root/dist/
# Setup env and apt
RUN apk -U add \
curl \
nodejs \
wget && \
RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
apk -U --no-cache add \
aria2 \
curl && \
# Get and install packages
cd /root/dist/ && \
mkdir -p /usr/share/kibana/ && \
wget https://artifacts.elastic.co/downloads/kibana/kibana-6.5.4-linux-x86_64.tar.gz && \
tar xvfz kibana-6.5.4-linux-x86_64.tar.gz --strip-components=1 -C /usr/share/kibana/ && \
aria2c -s 16 -x 16 https://artifacts.elastic.co/downloads/kibana/kibana-6.6.2-linux-x86_64.tar.gz && \
tar xvfz kibana-6.6.2-linux-x86_64.tar.gz --strip-components=1 -C /usr/share/kibana/ && \
# Kibana's bundled node does not work in alpine
rm /usr/share/kibana/node/bin/node && \
@ -26,38 +26,29 @@ RUN apk -U add \
cp elk.ico /usr/share/kibana/src/ui/public/assets/favicons/favicon.ico && \
cp elk.ico /usr/share/kibana/src/ui/public/assets/favicons/favicon-16x16.png && \
cp elk.ico /usr/share/kibana/src/ui/public/assets/favicons/favicon-32x32.png && \
cp create_kibana_index.js /usr/share/kibana/src/core_plugins/elasticsearch/lib/ && \
# Setup plugins, rebuild bundle
#cd /usr/share/kibana/plugins && \
#wget https://github.com/dlumbrer/kbn_radar/releases/download/Kibana-6.X/kbn_radar.tar.gz && \
#wget https://github.com/dlumbrer/kbn_network/releases/download/6.0.X-1/network_vis.tar.gz && \
#tar xvfz kbn_radar.tar.gz && \
#tar xvfz network_vis.tar.gz && \
#rm *.tar.gz && \
rm -rf /usr/share/kibana/optimize/bundles/* && \
# Setup user, groups and configs
sed -i 's/#server.basePath: ""/server.basePath: "\/kibana"/' /usr/share/kibana/config/kibana.yml && \
sed -i 's/#kibana.defaultAppId: "home"/kibana.defaultAppId: "dashboards"/' /usr/share/kibana/config/kibana.yml && \
sed -i 's/#server.host: "localhost"/server.host: "0.0.0.0"/' /usr/share/kibana/config/kibana.yml && \
sed -i 's/#elasticsearch.url: "http:\/\/localhost:9200"/elasticsearch.url: "http:\/\/elasticsearch:9200"/' /usr/share/kibana/config/kibana.yml && \
sed -i 's/#elasticsearch.hosts: \["http:\/\/localhost:9200"\]/elasticsearch.hosts: \["http:\/\/elasticsearch:9200"\]/' /usr/share/kibana/config/kibana.yml && \
sed -i 's/#server.rewriteBasePath: false/server.rewriteBasePath: false/' /usr/share/kibana/config/kibana.yml && \
sed -i "s/#005571/#e20074/g" /usr/share/kibana/src/ui/public/chrome/directives/global_nav/global_nav.less && \
sed -i "s/globalColorBlue/globalColorMagenta/g" /usr/share/kibana/src/ui/public/chrome/directives/global_nav/global_nav_link/global_nav_link.less && \
echo "@globalColorMagenta: #9E0051;" >> /usr/share/kibana/src/ui/public/styles/variables/colors.less && \
sed -i "s/#005571/#e20074/g" /usr/share/kibana/src/legacy/core_plugins/kibana/public/index.css && \
sed -i "s/#007ba4/#9e0051/g" /usr/share/kibana/src/legacy/core_plugins/kibana/public/index.css && \
sed -i "s/#00465d/#4f0028/g" /usr/share/kibana/src/legacy/core_plugins/kibana/public/index.css && \
echo "xpack.infra.enabled: false" >> /usr/share/kibana/config/kibana.yml && \
echo "xpack.logstash.enabled: false" >> /usr/share/kibana/config/kibana.yml && \
echo "xpack.canvas.enabled: false" >> /usr/share/kibana/config/kibana.yml && \
echo "xpack.spaces.enabled: false" >> /usr/share/kibana/config/kibana.yml && \
echo "xpack.apm.enabled: false" >> /usr/share/kibana/config/kibana.yml && \
rm -rf /usr/share/kibana/optimize/bundles/* && \
/usr/share/kibana/bin/kibana --optimize && \
addgroup -g 2000 kibana && \
adduser -S -H -s /bin/ash -u 2000 -D -g 2000 kibana && \
chown -R kibana:kibana /usr/share/kibana/ && \
# Clean up
apk del --purge wget && \
apk del --purge aria2 && \
rm -rf /root/* && \
rm -rf /tmp/* && \
rm -rf /var/cache/apk/*

View file

@ -1,38 +0,0 @@
'use strict';
var _setup_error = require('./setup_error');
var _setup_error2 = _interopRequireDefault(_setup_error);
function _interopRequireDefault(obj) { return obj && obj.__esModule ? obj : { default: obj }; }
module.exports = function (server, mappings) {
var _server$plugins$elast = server.plugins.elasticsearch.getCluster('admin');
const callWithInternalUser = _server$plugins$elast.callWithInternalUser;
const index = server.config().get('kibana.index');
function handleError(message) {
return function (err) {
throw new _setup_error2.default(server, message, err);
};
}
return callWithInternalUser('indices.create', {
index: index,
body: {
settings: {
number_of_shards: 1,
number_of_replicas: 0,
'index.mapper.dynamic': false
},
mappings
}
}).catch(handleError('Unable to create Kibana index "<%= kibana.index %>"')).then(function () {
return callWithInternalUser('cluster.health', {
waitForStatus: 'yellow',
index: index
}).catch(handleError('Waiting for Kibana index "<%= kibana.index %>" to come online failed.'));
});
};

View file

@ -12,4 +12,4 @@ services:
# condition: service_healthy
ports:
- "127.0.0.1:64296:5601"
image: "dtagdevsec/kibana:1811"
image: "dtagdevsec/kibana:1903"

View file

@ -4,25 +4,27 @@ FROM alpine
ADD dist/ /root/dist/
# Setup env and apt
RUN apk -U add \
RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
apk -U --no-cache add \
aria2 \
bash \
curl \
git \
libc6-compat \
libzmq \
openjdk8-jre \
wget && \
nss \
openjdk8-jre && \
# Get and install packages
git clone --depth=1 https://github.com/dtag-dev-sec/listbot /etc/listbot && \
cd /root/dist/ && \
mkdir -p /usr/share/logstash/ && \
wget https://artifacts.elastic.co/downloads/logstash/logstash-6.5.4.tar.gz && \
wget http://geolite.maxmind.com/download/geoip/database/GeoLite2-ASN.tar.gz && \
tar xvfz logstash-6.5.4.tar.gz --strip-components=1 -C /usr/share/logstash/ && \
aria2c -s 16 -x 16 https://artifacts.elastic.co/downloads/logstash/logstash-6.6.2.tar.gz && \
tar xvfz logstash-6.6.2.tar.gz --strip-components=1 -C /usr/share/logstash/ && \
/usr/share/logstash/bin/logstash-plugin install logstash-filter-translate && \
/usr/share/logstash/bin/logstash-plugin install logstash-output-syslog && \
tar xvfz GeoLite2-ASN.tar.gz --strip-components=1 -C /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-filter-geoip-5.0.3-java/vendor/ && \
aria2c -s 16 -x 16 -o GeoLite2-ASN.tar.gz http://geolite.maxmind.com/download/geoip/database/GeoLite2-ASN.tar.gz && \
tar xvfz GeoLite2-ASN.tar.gz --strip-components=1 -C /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-filter-geoip-5.0.3-java/vendor && \
# Add and move files
cd /root/dist/ && \
@ -30,7 +32,7 @@ RUN apk -U add \
chmod u+x /usr/bin/update.sh && \
mkdir -p /etc/logstash/conf.d && \
cp logstash.conf /etc/logstash/conf.d/ && \
cp elasticsearch-template-es6x.json /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.2.4-java/lib/logstash/outputs/elasticsearch/ && \
cp elasticsearch-template-es6x.json /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.3.2-java/lib/logstash/outputs/elasticsearch/ && \
# Setup user, groups and configs
addgroup -g 2000 logstash && \
@ -40,7 +42,7 @@ RUN apk -U add \
chmod 755 /usr/bin/update.sh && \
# Clean up
apk del --purge wget && \
apk del --purge aria2 && \
rm -rf /root/* && \
rm -rf /tmp/* && \
rm -rf /var/cache/apk/*
@ -50,4 +52,4 @@ HEALTHCHECK --retries=10 CMD curl -s -XGET 'http://127.0.0.1:9600'
# Start logstash
#USER logstash:logstash
CMD update.sh && exec /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/logstash.conf --java-execution
CMD update.sh && exec /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/logstash.conf --config.reload.automatic --java-execution

View file

@ -76,6 +76,13 @@ input {
type => "Heralding"
}
# Honeypy
file {
path => ["/data/honeypy/log/json.log"]
codec => json
type => "Honeypy"
}
# Honeytrap
file {
path => ["/data/honeytrap/log/attackers.json"]
@ -131,6 +138,7 @@ filter {
field => "[alert][signature_id]"
destination => "[alert][cve_id]"
dictionary_path => "/etc/listbot/cve.yaml"
# fallback => "-"
}
}
@ -265,6 +273,17 @@ filter {
}
}
# Honeypy
if [type] == "Honeypy" {
date {
match => [ "timestamp", "ISO8601" ]
remove_field => ["timestamp"]
remove_field => ["date"]
remove_field => ["time"]
remove_field => ["millisecond"]
}
}
# Honeytrap
if [type] == "Honeytrap" {
date {
@ -387,7 +406,7 @@ if "_grokparsefailure" in [tags] { drop {} }
}
# Add T-Pot hostname and external IP
if [type] == "Adbhoney" or [type] == "Ciscoasa" or [type] == "ConPot" or [type] == "Cowrie" or [type] == "Dionaea" or [type] == "ElasticPot" or [type] == "Glastopf" or [type] == "Glutton" or [type] == "Honeytrap" or [type] == "Heralding" or [type] == "Mailoney" or [type] == "Medpot" or [type] == "P0f" or [type] == "Rdpy" or [type] == "Suricata" or [type] == "Tanner" {
if [type] == "Adbhoney" or [type] == "Ciscoasa" or [type] == "ConPot" or [type] == "Cowrie" or [type] == "Dionaea" or [type] == "ElasticPot" or [type] == "Glastopf" or [type] == "Glutton" or [type] == "Honeytrap" or [type] == "Heralding" or [type] == "Honeypy" or [type] == "Mailoney" or [type] == "Medpot" or [type] == "P0f" or [type] == "Rdpy" or [type] == "Suricata" or [type] == "Tanner" {
mutate {
add_field => {
"t-pot_ip_ext" => "${MY_EXTIP}"
@ -406,12 +425,12 @@ output {
# document_type => "doc"
}
if [type] == "Suricata" {
file {
file_mode => 0760
path => "/data/suricata/log/suricata_ews.log"
}
}
#if [type] == "Suricata" {
# file {
# file_mode => 0760
# path => "/data/suricata/log/suricata_ews.log"
# }
#}
# Debug output
#if [type] == "XYZ" {
# stdout {

View file

@ -12,7 +12,7 @@ services:
# condition: service_healthy
env_file:
- /opt/tpot/etc/compose/elk_environment
image: "dtagdevsec/logstash:1811"
image: "dtagdevsec/logstash:1903"
volumes:
- /data:/data
- /root/tpotce/docker/elk/logstash/dist/logstash.conf:/etc/logstash/conf.d/logstash.conf

View file

@ -20,7 +20,7 @@ RUN apk -U --no-cache add \
py-requests \
py-setuptools && \
pip install --no-cache-dir -U pip && \
pip install --no-use-pep517 --no-cache-dir pyOpenSSL && \
pip install --no-cache-dir pyOpenSSL xmljson && \
# Setup ewsposter
git clone --depth=1 https://github.com/rep/hpfeeds /opt/hpfeeds && \
@ -36,6 +36,7 @@ RUN apk -U --no-cache add \
# Supply configs
mv /root/dist/ews.cfg /opt/ewsposter/ && \
mv /root/dist/*.pem /opt/ewsposter/ && \
# Clean up
apk del build-base \

View file

@ -1,4 +1,4 @@
[![](https://images.microbadger.com/badges/version/dtagdevsec/ewsposter:1811.svg)](https://microbadger.com/images/dtagdevsec/ewsposter:1811 "Get your own version badge on microbadger.com") [![](https://images.microbadger.com/badges/image/dtagdevsec/ewsposter:1811.svg)](https://microbadger.com/images/dtagdevsec/ewsposter:1811 "Get your own image badge on microbadger.com")
[![](https://images.microbadger.com/badges/version/dtagdevsec/ewsposter:1903.svg)](https://microbadger.com/images/dtagdevsec/ewsposter:1903 "Get your own version badge on microbadger.com") [![](https://images.microbadger.com/badges/image/dtagdevsec/ewsposter:1903.svg)](https://microbadger.com/images/dtagdevsec/ewsposter:1903 "Get your own image badge on microbadger.com")
# ewsposter

View file

@ -18,12 +18,16 @@ rhost_second = https://community.sicherheitstacho.eu/ews-0.1/alert/postSimpleMes
ignorecert = false
[HPFEED]
hpfeed = false
host = 0.0.0.0
port = 0
channels = 0
ident = 0
secret= 0
hpfeed = %(EWS_HPFEEDS_ENABLE)s
host = %(EWS_HPFEEDS_HOST)s
port = %(EWS_HPFEEDS_PORT)s
channels = %(EWS_HPFEEDS_CHANNELS)s
ident = %(EWS_HPFEEDS_IDENT)s
secret= %(EWS_HPFEEDS_SECRET)s
# path/to/certificate for tls broker - or "false" for non-tls broker
tlscert = %(EWS_HPFEEDS_TLSCERT)s
# hpfeeds submission format: "ews" (xml) or "json"
hpfformat = %(EWS_HPFEEDS_FORMAT)s
[EWSJSON]
json = false
@ -95,7 +99,7 @@ logfile = /data/elasticpot/log/elasticpot.log
[SURICATA]
suricata = true
nodeid = suricata-community-01
logfile = /data/suricata/log/suricata_ews.log
logfile = /data/suricata/log/eve.json
[MAILONEY]
mailoney = true
@ -126,3 +130,8 @@ logfile = /data/ciscoasa/log/ciscoasa.log
tanner = true
nodeid = tanner-community-01
logfile = /data/tanner/log/tanner_report.json
[GLUTTON]
glutton = true
nodeid = glutton-community-01
logfile = /data/glutton/log/glutton.log
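The `%(...)s` values above follow Python ConfigParser-style interpolation. A minimal sketch of how such placeholders could be resolved from the container environment (variable names taken from the ewsposter compose service further below; the loader itself is an assumption for illustration, not ewsposter's actual code):

```python
# Minimal sketch, not ewsposter's real loader: resolve the %(EWS_HPFEEDS_*)s
# placeholders above from the container environment via ConfigParser interpolation.
import os
from configparser import ConfigParser

def load_ews_cfg(path="/opt/ewsposter/ews.cfg"):
    # Environment variables (e.g. EWS_HPFEEDS_HOST, set in the ewsposter compose
    # service further below) become interpolation defaults, so a line like
    # "host = %(EWS_HPFEEDS_HOST)s" resolves to the value passed to the container.
    cfg = ConfigParser(defaults=os.environ)
    cfg.read(path)
    return cfg

if __name__ == "__main__":
    cfg = load_ews_cfg()
    if cfg.get("HPFEED", "hpfeed").lower() == "true":
        print("HPFEEDS broker:", cfg.get("HPFEED", "host"), cfg.get("HPFEED", "port"))
```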

70
docker/ews/dist/sissden.pem vendored Normal file
View file

@ -0,0 +1,70 @@
-----BEGIN CERTIFICATE-----
MIIGBDCCA+ygAwIBAgIBATANBgkqhkiG9w0BAQsFADCBnTEYMBYGA1UEAwwPU0lT
U0RFTiBSb290IENBMQswCQYDVQQGEwJQTDERMA8GA1UEBwwIV2Fyc3phd2ExLjAs
BgNVBAoMJU5hdWtvd2EgaSBBa2FkZW1pY2thIFNpZWMgS29tcHV0ZXJvd2ExEDAO
BgNVBAsMB1NJU1NERU4xHzAdBgkqhkiG9w0BCQEWEGFkbWluQHNpc3NkZW4uZXUw
HhcNMTcwNDExMTMxNDE2WhcNMjcwNDA5MTMxNDE2WjCBjTEbMBkGA1UEAwwSU0lT
U0RFTiBTZXJ2aWNlIENBMQswCQYDVQQGEwJQTDEfMB0GCSqGSIb3DQEJARYQYWRt
aW5Ac2lzc2Rlbi5ldTEuMCwGA1UECgwlTmF1a293YSBpIEFrYWRlbWlja2EgU2ll
YyBLb21wdXRlcm93YTEQMA4GA1UECwwHU0lTU0RFTjCCAiIwDQYJKoZIhvcNAQEB
BQADggIPADCCAgoCggIBAPFLjU6cLQoGz1s73QMPiRxYISCMUh3CXFe52Uim9a60
nkBDLfjMFW87MNhFCcE2xmxwdPPTz4+f5+DsEV3eZf0y63NxWx+RFV+UpODuEW5n
tWPFUDxmgKx6iAR/tyeLVNqmgtCnWzSthE0cg71dlil6onWvkMc+Wn5Kv6aXoz4e
5YVVhNsymhhrR0BntospY8EvtPm70hHAzOty957/zixOQ/MM+4SHRsWXTlKqv0K2
udWpkUy1Ihs3bpea2KAvn9bBWejFwy7K4q3LyhSyqwpVCYjNi+s+9z4ipSMfvAlT
FvHrMrODv/Iz/TQOfypYSlpX2gBP9WKLgOQj3wulJnMDQlvG1XNgOAqKfEF52YGF
eUu21UraRgDAguIIhWxRwgXenmRo8ngWjfk9Q8734PzzXt8cwzbxJWiJLMew1SiW
I+Kg8uYNGNT4mdBeUMo92S17ZNMXVnkt1TYfxT0A0ZlTCrhXPiWITtsVZXAdqFtl
j5hASmEcRYNgXEUQHBn13O9IinEmks2PEcqbbbKbs2Je0DS/JvxBkqES51UdsaVQ
zITKw3deCk0pISG8WDWZ97LEeDCvAKA5l/ooKjDwfS5vWw11mTUCOdhCoF0m8Lao
TwE1fzzNbSaqMsT6JF/n0ACabfuvF2aqCmWsZC/Hpw8LQQS62zOouCLdcqizL9+z
AgMBAAGjXTBbMAwGA1UdEwQFMAMBAf8wCwYDVR0PBAQDAgHuMB0GA1UdDgQWBBQ4
nurxBppBA5PTNvFFU/vhDr/NFzAfBgNVHSMEGDAWgBSDpRyQSgaBD5XvyFOA8YHH
tbUAbzANBgkqhkiG9w0BAQsFAAOCAgEAIvA2gkYsIVH7FGuoIo9RIxgwy7G/SHNC
Xllz6hyTx10UwbttJ+o4gdNt8WPuGnkmywFgsjL1//bFw2+fUO5IRvWKSmXzwx9N
faRJAjQT4JNx2uOW0ctw4USngPrLjXr3UrIQQlJFtZnEyT9u5VJXX8zkhfNJudyJ
N88YVrPEf6Gh1Q0P+yCX0rDEb3PlP2jsYyXZtcYA5kDQ6Qq7jpLT/zrjJdaPTmzh
2NUe7jJOBfZxPCoeev7meafY2vVOgqRqMz1+DZRoOgwq+ysczzRaXmd5a2p9Tabc
L1w5FXKNJQ4apszA0cEScI+4mBIIQ7VFT3GO098GOcYsC2MelRkgONAIyamm66AP
tvLQAKoiK/xz3sEHN4zaZvN/YVHaSYZEXUP0QHdyL62P62a92aCNyrHpzKURhEDA
n8cs6icxKrS4xuVa517m53zun0brjrfeltfbO7z1A2TstFYu9BHKzRuhwV9cGRHP
EDcb7PkfA/08sDHsyfsWtzIysNo3hwCmQ6gtOW5xlrGplFfwSsXmPG4SR3ByW379
RA5h3zzrO0g7iCvbLclqHoqLTJTMS+6U43qXjnQ7DJ+mcbhRGcMHcZVKqO3QmLm+
mmkDNzNYfTgY52D5mXJqUK50750mQ8dwMSkD2TufSAPmAPUp90LdQ8u9CIv6gQ+x
A08hDHJ1cdY=
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
MIIGHDCCBASgAwIBAgIJAPZqsOOroxaHMA0GCSqGSIb3DQEBCwUAMIGdMRgwFgYD
VQQDDA9TSVNTREVOIFJvb3QgQ0ExCzAJBgNVBAYTAlBMMREwDwYDVQQHDAhXYXJz
emF3YTEuMCwGA1UECgwlTmF1a293YSBpIEFrYWRlbWlja2EgU2llYyBLb21wdXRl
cm93YTEQMA4GA1UECwwHU0lTU0RFTjEfMB0GCSqGSIb3DQEJARYQYWRtaW5Ac2lz
c2Rlbi5ldTAeFw0xNzA0MTExMzA3NTZaFw0yNzA0MDkxMzA3NTZaMIGdMRgwFgYD
VQQDDA9TSVNTREVOIFJvb3QgQ0ExCzAJBgNVBAYTAlBMMREwDwYDVQQHDAhXYXJz
emF3YTEuMCwGA1UECgwlTmF1a293YSBpIEFrYWRlbWlja2EgU2llYyBLb21wdXRl
cm93YTEQMA4GA1UECwwHU0lTU0RFTjEfMB0GCSqGSIb3DQEJARYQYWRtaW5Ac2lz
c2Rlbi5ldTCCAiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBANKT77EYYEhV
tJUnfnvQtGttfgqIzKIV2W6nPK9aDsKRTX5BVDHF6P5ZAF1u/52ATwdyTK7+LD66
Q/nCzyyA2kqTgdruX6VGucpD2DVVSVF6nZhV9PcISNaMXytoG2HHlqrim53E/rVa
rskColfs7oCxama6lPKZ/rqrJlVjA1Pl5ZtxR0IORjpOyZjSbSzKQwLp/JxHPMCU
2cVirS7aEu5UGj+Q7Ibg0AEyoAu5tnHBKun4hmIoo7LtKWNEe1TdboxOSboGJ5wd
UTEmNH+7izZ5FAogTUINjubkf2zZ65xEnN7DT/zFS30vYU1EclqCTp96EKPANogV
ZeBKntEN6M5azM6Q6+nFI56TV5DWHTIXm85zzeDj5JM7TQlIGTh8A5APHpr0YyUP
AiIUrixV2lqSDrjewey5qQcWV6WbjMS72OFKh/x7+UJICJhoUw+KwnPmWSq1WAlt
n7C+W0raSQzt7puI30LUkInKL6iEQebMoYg0eDRI5vsRIpbo+PzflIuk/Vea/D1Y
twgRc8ujoKI9GpPJyP4yO4nY7BkShLqKJ251lEJZnxq8LiFVi8aN6ZHt//OGEtVs
6L97cPzqFx7qx8vnyLBFk23lb8pilHK1G0nqxCCjakTruT/JgkLXnZcLu/IDSqd3
QLjJL0rmU9q6+RTH8A782pcBUNzeLKnlAgMBAAGjXTBbMAwGA1UdEwQFMAMBAf8w
CwYDVR0PBAQDAgHuMB0GA1UdDgQWBBSDpRyQSgaBD5XvyFOA8YHHtbUAbzAfBgNV
HSMEGDAWgBSDpRyQSgaBD5XvyFOA8YHHtbUAbzANBgkqhkiG9w0BAQsFAAOCAgEA
IA0U6znfPykr5PoQlXb/Wr4L5mY/ZtNAJsvJ8jwNMsj3ZlqLOJfnHHoG5LHkb2b/
xfM1Ee2ojmYBt4VDARqrHLLbup38Ivqt0aEco3Qx/WqbIR4IlvZBF+/qKF/wIUuc
CuBYNIy12PcLzafT+SJosj1BJ+XiUCj/RsVXIT5CxsdXIABWC+5b3T3/PrAtKk+C
sVjA/ck1KAHDd+3VUyRjLAAekYWA9C/hek3YwWQ3OvmyHos5gxifqMMDj6bx5qgv
AuIs4mYJlBlHE19GxRmo2TDwE0eZiUoUdavdRBbl9v7dex+AF2GegmnC1ouYc9kv
9moNBcuPFXuJMCOCU44aTpgEKRm3QTZTvVcUza251T+4kgT2wlFyzPqQ8hcpih4t
knlqHhNc9ibL3/qzWr093AgC9uNaNRqmqu1WAu3vs9g3DVb/RSMrUG/V0YS1GgPq
E+nVJ1AIJoee8YaxHztRfjPsmu1R3pp633lfcRPUKCkz52dZDFRPuQP36DuJzl2M
itTra0MtDUuRCsuJfVGe1op2wFprswLI0qy7O9N21D4Ab8g0ik+lhmpOf5DpYxmx
C2Xpe4d/5Xlg3wIYhEs5MnfeEy4lSMA4cxwJs11gVYHba62L7/5lqzpPmHdRYHu3
Vf0pM/6zniQpy58Pf9+9CNU15I3iWF5K3zmevFArd6s=
-----END CERTIFICATE-----

View file

@ -12,9 +12,19 @@ services:
restart: always
networks:
- ewsposter_local
environment:
- EWS_HPFEEDS_ENABLE=false
- EWS_HPFEEDS_HOST=host
- EWS_HPFEEDS_PORT=port
- EWS_HPFEEDS_CHANNELS=channels
- EWS_HPFEEDS_IDENT=user
- EWS_HPFEEDS_SECRET=secret
- EWS_HPFEEDS_TLSCERT=/opt/ewsposter/ca.pem
- EWS_HPFEEDS_FORMAT=json
env_file:
- /opt/tpot/etc/compose/elk_environment
image: "dtagdevsec/ewsposter:1811"
image: "dtagdevsec/ewsposter:1903"
volumes:
- /data:/data
- /data/ews/conf/ews.ip:/opt/ewsposter/ews.ip

View file

@ -8,7 +8,7 @@ RUN apk -U --no-cache add \
autoconf \
bind-tools \
build-base \
cython \
# cython \
git \
libffi \
libffi-dev \
@ -17,6 +17,7 @@ RUN apk -U --no-cache add \
make \
php7 \
php7-dev \
openssl-dev \
py-mysqldb \
py-openssl \
py-pip \

View file

@ -1,4 +1,4 @@
[![](https://images.microbadger.com/badges/version/dtagdevsec/glastopf:1811.svg)](https://microbadger.com/images/dtagdevsec/glastopf:1811 "Get your own version badge on microbadger.com") [![](https://images.microbadger.com/badges/image/dtagdevsec/glastopf:1811.svg)](https://microbadger.com/images/dtagdevsec/glastopf:1811 "Get your own image badge on microbadger.com")
[![](https://images.microbadger.com/badges/version/dtagdevsec/glastopf:1903.svg)](https://microbadger.com/images/dtagdevsec/glastopf:1903 "Get your own version badge on microbadger.com") [![](https://images.microbadger.com/badges/image/dtagdevsec/glastopf:1903.svg)](https://microbadger.com/images/dtagdevsec/glastopf:1903 "Get your own image badge on microbadger.com")
# glastopf

View file

@ -15,8 +15,8 @@ services:
networks:
- glastopf_local
ports:
- "80:80"
image: "dtagdevsec/glastopf:1811"
- "8081:80"
image: "dtagdevsec/glastopf:1903"
read_only: true
volumes:
- /data/glastopf/db:/tmp/glastopf/db

View file

@ -1,4 +1,4 @@
[![](https://images.microbadger.com/badges/version/dtagdevsec/glutton:1811.svg)](https://microbadger.com/images/dtagdevsec/glutton:1811 "Get your own version badge on microbadger.com") [![](https://images.microbadger.com/badges/image/dtagdevsec/glutton:1811.svg)](https://microbadger.com/images/dtagdevsec/glutton:1811 "Get your own image badge on microbadger.com")
[![](https://images.microbadger.com/badges/version/dtagdevsec/glutton:1903.svg)](https://microbadger.com/images/dtagdevsec/glutton:1903 "Get your own version badge on microbadger.com") [![](https://images.microbadger.com/badges/image/dtagdevsec/glutton:1903.svg)](https://microbadger.com/images/dtagdevsec/glutton:1903 "Get your own image badge on microbadger.com")
# glutton

View file

@ -12,7 +12,7 @@ services:
network_mode: "host"
cap_add:
- NET_ADMIN
image: "dtagdevsec/glutton:1811"
image: "dtagdevsec/glutton:1903"
read_only: true
volumes:
- /data/glutton/log:/var/log/glutton

View file

@ -9,7 +9,7 @@ RUN apk -U --no-cache add \
git \
libcap \
libffi-dev \
libressl-dev \
openssl-dev \
libzmq \
postgresql-dev \
python3 \

View file

@ -1,4 +1,4 @@
[![](https://images.microbadger.com/badges/version/dtagdevsec/heralding:1811.svg)](https://microbadger.com/images/dtagdevsec/heralding:1811 "Get your own version badge on microbadger.com") [![](https://images.microbadger.com/badges/image/dtagdevsec/heralding:1811.svg)](https://microbadger.com/images/dtagdevsec/heralding:1811 "Get your own image badge on microbadger.com")
[![](https://images.microbadger.com/badges/version/dtagdevsec/heralding:1903.svg)](https://microbadger.com/images/dtagdevsec/heralding:1903 "Get your own version badge on microbadger.com") [![](https://images.microbadger.com/badges/image/dtagdevsec/heralding:1903.svg)](https://microbadger.com/images/dtagdevsec/heralding:1903 "Get your own image badge on microbadger.com")
# heralding

View file

@ -150,3 +150,8 @@ capabilities:
enabled: true
port: 5900
timeout: 30
socks5:
enabled: true
port: 1080
timeout: 30

View file

@ -25,9 +25,10 @@ services:
- "443:443"
- "993:993"
- "995:995"
- "1080:1080"
- "5432:5432"
- "5900:5900"
image: "dtagdevsec/heralding:1811"
image: "dtagdevsec/heralding:1903"
read_only: true
volumes:
- /data/heralding/log:/var/log/heralding

54
docker/honeypy/Dockerfile Normal file
View file

@ -0,0 +1,54 @@
FROM alpine
# Include dist
ADD dist/ /root/dist/
# Install packages
RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
apk -U --no-cache add \
build-base \
git \
libcap \
python2 \
python2-dev \
py2-pip && \
# Upgrade pip, install virtualenv
pip install --no-cache-dir --upgrade pip && \
pip install --no-cache-dir virtualenv && \
# Clone honeypy from git
git clone --depth=1 https://github.com/foospidy/HoneyPy /opt/honeypy && \
cd /opt/honeypy && \
sed -i 's/local_host/dest_ip/g' /opt/honeypy/loggers/file/honeypy_file.py && \
sed -i 's/local_port/dest_port/g' /opt/honeypy/loggers/file/honeypy_file.py && \
sed -i 's/remote_host/src_ip/g' /opt/honeypy/loggers/file/honeypy_file.py && \
sed -i 's/remote_port/src_port/g' /opt/honeypy/loggers/file/honeypy_file.py && \
sed -i 's/service/proto/g' /opt/honeypy/loggers/file/honeypy_file.py && \
sed -i 's/event/event_type/g' /opt/honeypy/loggers/file/honeypy_file.py && \
sed -i 's/bytes/size/g' /opt/honeypy/loggers/file/honeypy_file.py && \
sed -i 's/date_time/timestamp/g' /opt/honeypy/loggers/file/honeypy_file.py && \
sed -i 's/data,/data.decode("hex"),/g' /opt/honeypy/loggers/file/honeypy_file.py && \
virtualenv env && \
cp /root/dist/services.cfg /opt/honeypy/etc && \
cp /root/dist/honeypy.cfg /opt/honeypy/etc && \
/opt/honeypy/env/bin/pip install -r /opt/honeypy/requirements.txt && \
# Setup user, groups and configs
addgroup -g 2000 honeypy && \
adduser -S -H -s /bin/ash -u 2000 -D -g 2000 honeypy && \
chown -R honeypy:honeypy /opt/honeypy && \
setcap cap_net_bind_service=+ep /opt/honeypy/env/bin/python2 && \
# Clean up
apk del --purge build-base \
git \
python2-dev \
py2-pip && \
rm -rf /root/* && \
rm -rf /var/cache/apk/*
# Set workdir and start HoneyPy
USER honeypy:honeypy
WORKDIR /opt/honeypy
CMD ["/opt/honeypy/env/bin/python2", "/opt/honeypy/Honey.py", "-d"]
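The sed calls above rename HoneyPy's file-logger fields so the JSON log lines match what logstash.conf expects. The same mapping, expressed as a small helper purely for illustration (not part of the image):

```python
# Minimal sketch of the field renaming the sed substitutions above apply to
# HoneyPy's file logger before the log is shipped to logstash.
FIELD_MAP = {
    "local_host": "dest_ip",
    "local_port": "dest_port",
    "remote_host": "src_ip",
    "remote_port": "src_port",
    "service": "proto",
    "event": "event_type",
    "bytes": "size",
    "date_time": "timestamp",
}

def rename_fields(record: dict) -> dict:
    # Keys not present in FIELD_MAP are passed through unchanged.
    return {FIELD_MAP.get(key, key): value for key, value in record.items()}
```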

117
docker/honeypy/dist/honeypy.cfg vendored Normal file
View file

@ -0,0 +1,117 @@
# HoneyPy/etc/honeypy.cfg
# https://github.com/foospidy/HoneyPy
[honeypy]
# select any name for this HoneyPy node, it can be anything you want (default is: honeypy).
# It will be displayed in tweets, Slack messages, and other integrations.
nodename = honeypy
#add a comma separated list of ip addresses to suppress logging of your local scanners
#whitelist = 192.168.0.5, 192.168.0.21
#include the following service profiles (comma separated), all services will be combined.
#enabling this will disable the use of service.cfg, which will not be processed
#service_profiles = services.databases.profile, services.linux.profile
# Limit internal log files to a single day. Useful for deployments with limited disk space.
limit_internal_logs = No
# Directory for internal HoneyPy logs (not external loggers).
# Use leading slash for absolute path, or omit for relative path
internal_log_dir = log/
# Tweet events on Twitter. Having a dedicated Twitter account for this purpose is recommended.
# You will need Twitter API credentials for this to work. See https://dev.twitter.com/oauth/application-only
[twitter]
enabled = No
consumerkey =
consumersecret =
oauthtoken =
oauthsecret =
########################################################################################################
# Animus is dead! (http://morris.guru/the-life-and-death-of-animus/) This feature should not be used any more.
# enable tweets to include querying Animus Threat Bot (https://github.com/threatbot)
# ask_animus = No
########################################################################################################
#
# Animus rises from the ashes! https://animus.io/
#
########################################################################################################
#
# Animus falls again. https://github.com/hslatman/awesome-threat-intelligence/pull/101
#
########################################################################################################
# Post your events to HoneyDB. Your HoneyPy honeypots can contribute threat information to HoneyDB.
# You will need to create API credentials for this to work. See https://riskdiscovery.com/honeydb/#threats
[honeydb]
enabled = No
api_id =
api_key =
# Post your events to a Slack channel. Having a dedicated Slack channel for this is recommended.
# For setting up your Slack webhook see https://api.slack.com/incoming-webhooks
[slack]
enabled = No
webhook_url =
[logstash]
enabled = No
host =
port =
[elasticsearch]
enabled = No
# Elasticsearch url should include ":port/index/type"
# example: http://localhost:9200/honeypot/honeypy
es_url =
[telegram]
# You need to add your bot to a channel or group and get the bot token, see https://core.telegram.org/bots
enabled = No
# Telegram bot HTTP API Token
bot_id =
[sumologic]
enabled = No
# create an HTTP collector source and use the url provided
# https://help.sumologic.com/Send-Data/Sources/02Sources-for-Hosted-Collectors/HTTP-Source/Upload-Data-to-an-HTTP-Source
url =
custom_source_host =
custom_source_name =
custom_source_category =
[splunk]
enabled = No
# /services/receivers/simple api endpoint
url = https://localhost:8089/services/receivers/simple
username =
password =
[rabbitmq]
enabled = No
# Here you need to create the rabbitmq config url to be used with the pika python lib
# For ex. 1) amqp://username:password@rabbitmq_host/%2f
# 2) amqp://username:password@127.0.0.1/%2f
url_param =
# Name of the Rabbitmq Exchange
# Ex. mycoolexchange
exchange =
# Rabbitmq routing key; if not configured in rabbitmq, leave it empty
# Ex. honeypy
routing_key =
[file]
enabled = Yes
filename = log/json.log
[hpfeeds]
enabled = No
persistent = Yes
server = 127.0.0.1
port = 20000
ident = ident
secret = secret
channel = channel
serverid = id

67
docker/honeypy/dist/services.cfg vendored Normal file
View file

@ -0,0 +1,67 @@
# HoneyPy Copyright (C) 2013-2017 foospidy
# services.default.profile
# Important: service names must not contain spaces.
# Important: use port redirecting for services that listen on ports below 1024 (see https://github.com/foospidy/ipt-kit).
[Echo]
plugin = Echo
low_port = tcp:7
port = tcp:7
description = Echo back data received via tcp.
enabled = Yes
[Echo.udp]
plugin = Echo_udp
low_port = udp:7
port = udp:7
description = Echo back data received via udp.
enabled = Yes
[MOTD]
plugin = MOTD
low_port = tcp:8
port = tcp:8
description = Send a message via tcp and close connection.
enabled = Yes
[MOTD.udp]
plugin = MOTD_udp
low_port = udp:8
port = udp:8
description = Send a message via udp.
enabled = Yes
[Telnet]
plugin = TelnetUnix
low_port = tcp:2323
port = tcp:2323
description = Emulate Debian telnet login via tcp.
enabled = Yes
[Telnet.Windows]
plugin = TelnetWindows
low_port = tcp:2324
port = tcp:2324
description = Emulate Windows telnet login via tcp.
enabled = Yes
[Random]
plugin = Random
low_port = tcp:2048
port = tcp:2048
description = Send random data via tcp.
enabled = Yes
[HashCountRandom]
plugin = HashCountRandom
low_port = tcp:4096
port = tcp:4096
description = Send random data prefixed with a hash of a counter via tcp.
enabled = Yes
[Elasticsearch]
plugin = Elasticsearch
low_port = tcp:9200
port = tcp:9200
description = Send basic Elasticsearch-like replies
enabled = Yes
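A quick way to sanity-check one of the services above once the container is running, assuming the Echo plugin is reachable on tcp/7 as mapped in the compose file that follows (host and payload here are placeholders, not part of T-Pot):

```python
# Minimal sketch: verify the HoneyPy Echo service answers on tcp/7.
# Host, port and payload are assumptions based on the compose port mapping below.
import socket

def check_echo(host="127.0.0.1", port=7, payload=b"ping\n", timeout=5):
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.sendall(payload)
        reply = s.recv(len(payload))
    return reply == payload

if __name__ == "__main__":
    print("Echo service OK" if check_echo() else "Unexpected reply")
```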

View file

@ -0,0 +1,26 @@
version: '2.3'
networks:
honeypy_local:
services:
# HoneyPy service
honeypy:
build: .
container_name: honeypy
restart: always
networks:
- honeypy_local
ports:
- "7:7"
- "8:8"
- "2048:2048"
- "2323:2323"
- "2324:2324"
- "4096:4096"
- "9200:9200"
image: "dtagdevsec/honeypy:1903"
read_only: true
volumes:
- /data/honeypy/log:/opt/honeypy/log

View file

@ -12,7 +12,7 @@ services:
network_mode: "host"
cap_add:
- NET_ADMIN
image: "dtagdevsec/honeytrap:1811"
image: "dtagdevsec/honeytrap:1903"
read_only: true
volumes:
- /data/honeytrap/attacks:/opt/honeytrap/var/attacks

View file

@ -1,4 +1,4 @@
[![](https://images.microbadger.com/badges/version/dtagdevsec/mailoney:1811.svg)](https://microbadger.com/images/dtagdevsec/mailoney:1811 "Get your own version badge on microbadger.com") [![](https://images.microbadger.com/badges/image/dtagdevsec/mailoney:1811.svg)](https://microbadger.com/images/dtagdevsec/mailoney:1811 "Get your own image badge on microbadger.com")
[![](https://images.microbadger.com/badges/version/dtagdevsec/mailoney:1903.svg)](https://microbadger.com/images/dtagdevsec/mailoney:1903 "Get your own version badge on microbadger.com") [![](https://images.microbadger.com/badges/image/dtagdevsec/mailoney:1903.svg)](https://microbadger.com/images/dtagdevsec/mailoney:1903 "Get your own image badge on microbadger.com")
# mailoney

View file

@ -20,7 +20,7 @@ services:
- mailoney_local
ports:
- "25:25"
image: "dtagdevsec/mailoney:1811"
image: "dtagdevsec/mailoney:1903"
read_only: true
volumes:
- /data/mailoney/log:/opt/mailoney/logs

View file

@ -17,6 +17,7 @@ RUN apk -U --no-cache add \
go get -d -v github.com/mozillazg/request && \
go get -d -v go.uber.org/zap && \
cd medpot && \
cp dist/etc/ews.cfg /etc/ && \
go build medpot && \
# Setup medpot

View file

@ -1,4 +1,4 @@
[![](https://images.microbadger.com/badges/version/dtagdevsec/medpot:1811.svg)](https://microbadger.com/images/dtagdevsec/medpot:1811 "Get your own version badge on microbadger.com") [![](https://images.microbadger.com/badges/image/dtagdevsec/medpot:1811.svg)](https://microbadger.com/images/dtagdevsec/medpot:1811 "Get your own image badge on microbadger.com")
[![](https://images.microbadger.com/badges/version/dtagdevsec/medpot:1903.svg)](https://microbadger.com/images/dtagdevsec/medpot:1903 "Get your own version badge on microbadger.com") [![](https://images.microbadger.com/badges/image/dtagdevsec/medpot:1903.svg)](https://microbadger.com/images/dtagdevsec/medpot:1903 "Get your own image badge on microbadger.com")
# Medpot

View file

@ -14,7 +14,7 @@ services:
- medpot_local
ports:
- "2575:2575"
image: "dtagdevsec/medpot:1811"
image: "dtagdevsec/medpot:1903"
read_only: true
volumes:
- /data/medpot/log/:/var/log/medpot

View file

@ -1,26 +0,0 @@
FROM alpine
# Include dist
ADD dist/ /root/dist/
# Get and install dependencies & packages
RUN rm -rf /etc/ssl/openssl.cnf && \
apk add --no-cache -U -X http://dl-3.alpinelinux.org/alpine/edge/testing/ \
nginx \
nginx-mod-http-headers-more \
openssl1.1 || : && \
# Setup configs
mkdir -p /run/nginx && \
rm -rf /etc/nginx/conf.d/* /usr/share/nginx/html/* && \
cp /root/dist/conf/nginx.conf /etc/nginx/ && \
cp -R /root/dist/conf/ssl /etc/nginx/ && \
cp /root/dist/conf/tpotweb.conf /etc/nginx/conf.d/ && \
cp -R /root/dist/html/ /var/lib/nginx/ && \
# Clean up
rm -rf /root/* && \
rm -rf /var/cache/apk/*
# Start nginx
CMD exec nginx -g 'daemon off;'

View file

@ -31,7 +31,8 @@ http {
# SSL Settings
##
ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
#ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
ssl_protocols TLSv1.2 TLSv1.3; # Dropping SSLv3, ref: POODLE
ssl_prefer_server_ciphers on;
##
@ -73,25 +74,3 @@ http {
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
#mail {
# # See sample authentication script at:
# # http://wiki.nginx.org/ImapAuthenticateWithApachePhpScript
#
# # auth_http localhost/auth.php;
# # pop3_capabilities "TOP" "USER";
# # imap_capabilities "IMAP4rev1" "UIDPLUS";
#
# server {
# listen localhost:110;
# protocol pop3;
# proxy on;
# }
#
# server {
# listen localhost:143;
# protocol imap;
# proxy on;
# }
#}

View file

@ -9,7 +9,7 @@ server {
#########################
listen 64297 ssl http2;
index tpotweb.html;
ssl_protocols TLSv1.2;
ssl_protocols TLSv1.3;
server_name example.com;
error_page 300 301 302 400 401 402 403 404 500 501 502 503 504 /error.html;
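With the server block now limited to TLS 1.3, a short standard-library sketch like the following could confirm the negotiated protocol on port 64297 (the hostname is a placeholder; certificate verification is disabled because T-Pot ships a self-signed certificate):

```python
# Minimal sketch: confirm the nginx listener on 64297 negotiates TLS 1.3.
import socket
import ssl

def negotiated_protocol(host="my-tpot.example.com", port=64297):
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE   # self-signed certificate on the T-Pot host
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()       # e.g. "TLSv1.3"

if __name__ == "__main__":
    print(negotiated_protocol())
```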

View file

@ -17,7 +17,7 @@ services:
network_mode: "host"
ports:
- "64297:64297"
image: "dtagdevsec/nginx:1811"
image: "dtagdevsec/nginx:1903"
read_only: true
volumes:
- /data/nginx/cert/:/etc/nginx/cert/:ro

View file

@ -8,7 +8,7 @@ services:
container_name: p0f
restart: always
network_mode: "host"
image: "dtagdevsec/p0f:1811"
image: "dtagdevsec/p0f:1903"
read_only: true
volumes:
- /data/p0f/log:/var/log/p0f

View file

@ -34,7 +34,7 @@ RUN apk -U --no-cache add \
pyasn1 && \
# Install rdpy from git
mkdir /opt && \
mkdir -p /opt && \
cd /opt && \
git clone --depth=1 https://github.com/t3chn0m4g3/rdpy && \
cd rdpy && \

View file

@ -1,4 +1,4 @@
[![](https://images.microbadger.com/badges/version/dtagdevsec/rdpy:1811.svg)](https://microbadger.com/images/dtagdevsec/rdpy:1811 "Get your own version badge on microbadger.com") [![](https://images.microbadger.com/badges/image/dtagdevsec/rdpy:1811.svg)](https://microbadger.com/images/dtagdevsec/rdpy:1811 "Get your own image badge on microbadger.com")
[![](https://images.microbadger.com/badges/version/dtagdevsec/rdpy:1903.svg)](https://microbadger.com/images/dtagdevsec/rdpy:1903 "Get your own version badge on microbadger.com") [![](https://images.microbadger.com/badges/image/dtagdevsec/rdpy:1903.svg)](https://microbadger.com/images/dtagdevsec/rdpy:1903 "Get your own image badge on microbadger.com")
# rdpy

View file

@ -22,7 +22,7 @@ services:
- rdpy_local
ports:
- "3389:3389"
image: "dtagdevsec/rdpy:1811"
image: "dtagdevsec/rdpy:1903"
read_only: true
volumes:
- /data/rdpy/log:/var/log/rdpy

View file

@ -1,7 +1,8 @@
FROM alpine
# Get and install dependencies & packages
RUN apk -U --no-cache add \
RUN sed -i 's/dl-cdn/dl-4/g' /etc/apk/repositories && \
apk -U --no-cache add \
build-base \
curl \
git \
@ -13,26 +14,21 @@ RUN apk -U --no-cache add \
openssl-dev \
python \
python-dev \
py-lxml \
py-netaddr \
py-mako \
py-markupsafe \
py-future \
py-pip \
py-setuptools \
py-requests \
swig && \
pip install --no-cache-dir --upgrade pip && \
pip install --no-cache-dir \
cherrypy \
bs4 \
m2crypto && \
# Setup user
addgroup -g 2000 spiderfoot && \
adduser -S -s /bin/ash -u 2000 -D -g 2000 spiderfoot && \
# Install spiderfoot
git clone --depth=1 https://github.com/smicallef/spiderfoot -b v2.12.0-final /home/spiderfoot && \
# git clone --depth=1 https://github.com/smicallef/spiderfoot -b v2.12.0-final /home/spiderfoot && \
git clone --depth=1 https://github.com/smicallef/spiderfoot /home/spiderfoot && \
cd /home/spiderfoot && \
pip install --no-cache-dir --upgrade pip && \
pip install --no-cache-dir wheel && \
pip install --no-cache-dir -r requirements.txt && \
chown -R spiderfoot:spiderfoot /home/spiderfoot && \
sed -i "s#'__docroot': ''#'__docroot': '\/spiderfoot'#" /home/spiderfoot/sf.py && \
sed -i 's#raise cherrypy.HTTPRedirect("\/")#raise cherrypy.HTTPRedirect("\/spiderfoot")#' /home/spiderfoot/sfwebui.py && \

View file

@ -1,4 +1,4 @@
[![](https://images.microbadger.com/badges/version/dtagdevsec/spiderfoot:1811.svg)](https://microbadger.com/images/dtagdevsec/spiderfoot:1811 "Get your own version badge on microbadger.com") [![](https://images.microbadger.com/badges/image/dtagdevsec/spiderfoot:1811.svg)](https://microbadger.com/images/dtagdevsec/spiderfoot:1811 "Get your own image badge on microbadger.com")
[![](https://images.microbadger.com/badges/version/dtagdevsec/spiderfoot:1903.svg)](https://microbadger.com/images/dtagdevsec/spiderfoot:1903 "Get your own version badge on microbadger.com") [![](https://images.microbadger.com/badges/image/dtagdevsec/spiderfoot:1903.svg)](https://microbadger.com/images/dtagdevsec/spiderfoot:1903 "Get your own image badge on microbadger.com")
[spiderfoot](https://github.com/smicallef/spiderfoot) the open source footprinting and intelligence-gathering tool.

View file

@ -14,6 +14,6 @@ services:
- spiderfoot_local
ports:
- "127.0.0.1:64303:8080"
image: "dtagdevsec/spiderfoot:1811"
image: "dtagdevsec/spiderfoot:1903"
volumes:
- /data/spiderfoot/spiderfoot.db:/home/spiderfoot/spiderfoot.db

View file

@ -4,20 +4,91 @@ FROM alpine
ADD dist/ /root/dist/
# Install packages
RUN apk -U --no-cache add \
RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
apk -U --no-cache add \
ca-certificates \
curl \
file \
libcap \
wget && \
apk -U add --repository http://dl-cdn.alpinelinux.org/alpine/edge/community \
suricata && \
geoip \
hiredis \
jansson \
libcap-ng \
libhtp \
libmagic \
libnet \
libnetfilter_queue \
libnfnetlink \
libpcap \
luajit \
lz4-libs \
musl \
nspr \
nss \
pcre \
yaml \
wget \
automake \
autoconf \
build-base \
cargo \
file-dev \
geoip-dev \
hiredis-dev \
jansson-dev \
libtool \
libhtp-dev \
libcap-ng-dev \
luajit-dev \
libpcap-dev \
libnet-dev \
libnetfilter_queue-dev \
libnfnetlink-dev \
lz4-dev \
nss-dev \
nspr-dev \
pcre-dev \
python2 \
py2-pip \
rust \
yaml-dev && \
# Upgrade pip, install virtualenv
pip install --no-cache-dir --upgrade pip && \
pip install --no-cache-dir suricata-update && \
# Get and build Suricata
mkdir -p /opt/builder/ && \
wget https://www.openinfosecfoundation.org/download/suricata-4.1.3.tar.gz && \
tar xvfz suricata-4.1.3.tar.gz --strip-components=1 -C /opt/builder/ && \
rm suricata-4.1.3.tar.gz && \
cd /opt/builder && \
./configure \
--prefix=/usr \
--sysconfdir=/etc \
--mandir=/usr/share/man \
--localstatedir=/var \
--enable-non-bundled-htp \
--enable-nfqueue \
--enable-rust \
--disable-gccmarch-native \
--enable-hiredis \
--enable-geoip \
--enable-gccprotect \
--enable-pie \
--enable-luajit && \
make && \
make check && \
make install && \
make install-full && \
# Setup user, groups and configs
addgroup -g 2000 suri && \
adduser -S -H -u 2000 -D -g 2000 suri && \
chmod 644 /etc/suricata/*.config && \
cp /root/dist/suricata.yaml /etc/suricata/suricata.yaml && \
cp /root/dist/*.bpf /etc/suricata/ && \
mkdir -p /etc/suricata/rules && \
cp /opt/builder/rules/* /etc/suricata/rules/ && \
# Download the latest EmergingThreats ruleset, replace rulebase and enable all rules
cp /root/dist/update.sh /usr/bin/ && \
@ -25,6 +96,32 @@ RUN apk -U --no-cache add \
update.sh OPEN && \
# Clean up
apk del --purge \
automake \
autoconf \
build-base \
cargo \
file-dev \
geoip-dev \
hiredis-dev \
jansson-dev \
libtool \
libhtp-dev \
libcap-ng-dev \
luajit-dev \
libpcap-dev \
libnet-dev \
libnetfilter_queue-dev \
libnfnetlink-dev \
lz4-dev \
nss-dev \
nspr-dev \
pcre-dev \
python2 \
py2-pip \
rust \
yaml-dev && \
rm -rf /opt/builder && \
rm -rf /root/* && \
rm -rf /var/cache/apk/*

View file

@ -1,4 +1,4 @@
[![](https://images.microbadger.com/badges/version/dtagdevsec/suricata:1811.svg)](https://microbadger.com/images/dtagdevsec/suricata:1811 "Get your own version badge on microbadger.com") [![](https://images.microbadger.com/badges/image/dtagdevsec/suricata:1811.svg)](https://microbadger.com/images/dtagdevsec/suricata:1811 "Get your own image badge on microbadger.com")
[![](https://images.microbadger.com/badges/version/dtagdevsec/suricata:1903.svg)](https://microbadger.com/images/dtagdevsec/suricata:1903 "Get your own version badge on microbadger.com") [![](https://images.microbadger.com/badges/image/dtagdevsec/suricata:1903.svg)](https://microbadger.com/images/dtagdevsec/suricata:1903 "Get your own image badge on microbadger.com")
# dockerized suricata

View file

@ -1,3 +1,4 @@
not (host sicherheitstacho.eu or community.sicherheitstacho.eu) and
not (host archive.ubuntu.com or security.ubuntu.com) and
not (host index.docker.io or docker.io)
not (host index.docker.io or docker.io) and
not (host hpfeeds.sissden.eu)

File diff suppressed because it is too large

View file

@ -15,6 +15,6 @@ services:
- NET_ADMIN
- SYS_NICE
- NET_RAW
image: "dtagdevsec/suricata:1811"
image: "dtagdevsec/suricata:1903"
volumes:
- /data/suricata/log:/var/log/suricata

View file

@ -1,4 +1,4 @@
[![](https://images.microbadger.com/badges/version/dtagdevsec/tanner:1811.svg)](https://microbadger.com/images/dtagdevsec/tanner:1811 "Get your own version badge on microbadger.com") [![](https://images.microbadger.com/badges/image/dtagdevsec/tanner:1811.svg)](https://microbadger.com/images/dtagdevsec/tanner:1811 "Get your own image badge on microbadger.com")
[![](https://images.microbadger.com/badges/version/dtagdevsec/tanner:1903.svg)](https://microbadger.com/images/dtagdevsec/tanner:1903 "Get your own version badge on microbadger.com") [![](https://images.microbadger.com/badges/image/dtagdevsec/tanner:1903.svg)](https://microbadger.com/images/dtagdevsec/tanner:1903 "Get your own image badge on microbadger.com")
# Snare / Tanner

View file

@ -14,7 +14,7 @@ services:
tty: true
networks:
- tanner_local
image: "dtagdevsec/redis:1811"
image: "dtagdevsec/redis:1903"
read_only: true
# PHP Sandbox service
@ -26,7 +26,7 @@ services:
tty: true
networks:
- tanner_local
image: "dtagdevsec/phpox:1811"
image: "dtagdevsec/phpox:1903"
read_only: true
# Tanner API Service
@ -40,7 +40,7 @@ services:
tty: true
networks:
- tanner_local
image: "dtagdevsec/tanner:1811"
image: "dtagdevsec/tanner:1903"
read_only: true
volumes:
- /data/tanner/log:/var/log/tanner
@ -59,7 +59,7 @@ services:
tty: true
networks:
- tanner_local
image: "dtagdevsec/tanner:1811"
image: "dtagdevsec/tanner:1903"
command: tannerweb
read_only: true
volumes:
@ -78,7 +78,7 @@ services:
tty: true
networks:
- tanner_local
image: "dtagdevsec/tanner:1811"
image: "dtagdevsec/tanner:1903"
command: tanner
read_only: true
volumes:
@ -100,6 +100,6 @@ services:
- tanner_local
ports:
- "80:80"
image: "dtagdevsec/snare:1811"
image: "dtagdevsec/snare:1903"
depends_on:
- tanner

View file

@ -33,9 +33,10 @@ services:
- "443:443"
- "993:993"
- "995:995"
- "1080:1080"
- "5432:5432"
- "5900:5900"
image: "dtagdevsec/heralding:1811"
image: "dtagdevsec/heralding:1903"
read_only: true
volumes:
- /data/heralding/log:/var/log/heralding
@ -49,7 +50,7 @@ services:
network_mode: "host"
cap_add:
- NET_ADMIN
image: "dtagdevsec/honeytrap:1811"
image: "dtagdevsec/honeytrap:1903"
read_only: true
volumes:
- /data/honeytrap/attacks:/opt/honeytrap/var/attacks
@ -66,7 +67,7 @@ services:
container_name: p0f
restart: always
network_mode: "host"
image: "dtagdevsec/p0f:1811"
image: "dtagdevsec/p0f:1903"
read_only: true
volumes:
- /data/p0f/log:/var/log/p0f
@ -83,7 +84,7 @@ services:
- NET_ADMIN
- SYS_NICE
- NET_RAW
image: "dtagdevsec/suricata:1811"
image: "dtagdevsec/suricata:1903"
volumes:
- /data/suricata/log:/var/log/suricata
@ -100,7 +101,7 @@ services:
- cyberchef_local
ports:
- "127.0.0.1:64299:8000"
image: "dtagdevsec/cyberchef:1811"
image: "dtagdevsec/cyberchef:1903"
read_only: true
#### ELK
@ -124,7 +125,7 @@ services:
mem_limit: 4g
ports:
- "127.0.0.1:64298:9200"
image: "dtagdevsec/elasticsearch:1811"
image: "dtagdevsec/elasticsearch:1903"
volumes:
- /data:/data
@ -137,7 +138,7 @@ services:
condition: service_healthy
ports:
- "127.0.0.1:64296:5601"
image: "dtagdevsec/kibana:1811"
image: "dtagdevsec/kibana:1903"
## Logstash service
logstash:
@ -148,7 +149,7 @@ services:
condition: service_healthy
env_file:
- /opt/tpot/etc/compose/elk_environment
image: "dtagdevsec/logstash:1811"
image: "dtagdevsec/logstash:1903"
volumes:
- /data:/data
@ -161,7 +162,7 @@ services:
condition: service_healthy
ports:
- "127.0.0.1:64302:9100"
image: "dtagdevsec/head:1811"
image: "dtagdevsec/head:1903"
read_only: true
# Ewsposter service
@ -170,9 +171,18 @@ services:
restart: always
networks:
- ewsposter_local
environment:
- EWS_HPFEEDS_ENABLE=false
- EWS_HPFEEDS_HOST=host
- EWS_HPFEEDS_PORT=port
- EWS_HPFEEDS_CHANNELS=channels
- EWS_HPFEEDS_IDENT=user
- EWS_HPFEEDS_SECRET=secret
- EWS_HPFEEDS_TLSCERT=false
- EWS_HPFEEDS_FORMAT=json
env_file:
- /opt/tpot/etc/compose/elk_environment
image: "dtagdevsec/ewsposter:1811"
image: "dtagdevsec/ewsposter:1903"
volumes:
- /data:/data
- /data/ews/conf/ews.ip:/opt/ewsposter/ews.ip
@ -191,7 +201,7 @@ services:
network_mode: "host"
ports:
- "64297:64297"
image: "dtagdevsec/nginx:1811"
image: "dtagdevsec/nginx:1903"
read_only: true
volumes:
- /data/nginx/cert/:/etc/nginx/cert/:ro
@ -206,6 +216,6 @@ services:
- spiderfoot_local
ports:
- "127.0.0.1:64303:8080"
image: "dtagdevsec/spiderfoot:1811"
image: "dtagdevsec/spiderfoot:1903"
volumes:
- /data/spiderfoot/spiderfoot.db:/home/spiderfoot/spiderfoot.db

View file

@ -47,7 +47,7 @@ services:
- "21:21"
- "44818:44818"
- "47808:47808"
image: "dtagdevsec/conpot:1811"
image: "dtagdevsec/conpot:1903"
read_only: true
volumes:
- /data/conpot/log:/var/log/conpot
@ -69,7 +69,7 @@ services:
ports:
# - "161:161"
- "2404:2404"
image: "dtagdevsec/conpot:1811"
image: "dtagdevsec/conpot:1903"
read_only: true
volumes:
- /data/conpot/log:/var/log/conpot
@ -90,7 +90,7 @@ services:
- conpot_local_guardian_ast
ports:
- "10001:10001"
image: "dtagdevsec/conpot:1811"
image: "dtagdevsec/conpot:1903"
read_only: true
volumes:
- /data/conpot/log:/var/log/conpot
@ -111,7 +111,7 @@ services:
- conpot_local_ipmi
ports:
- "623:623"
image: "dtagdevsec/conpot:1811"
image: "dtagdevsec/conpot:1903"
read_only: true
volumes:
- /data/conpot/log:/var/log/conpot
@ -133,7 +133,7 @@ services:
ports:
- "1025:1025"
- "50100:50100"
image: "dtagdevsec/conpot:1811"
image: "dtagdevsec/conpot:1903"
read_only: true
volumes:
- /data/conpot/log:/var/log/conpot
@ -150,7 +150,7 @@ services:
ports:
- "22:22"
- "23:23"
image: "dtagdevsec/cowrie:1811"
image: "dtagdevsec/cowrie:1903"
read_only: true
volumes:
- /data/cowrie/downloads:/home/cowrie/cowrie/dl
@ -179,7 +179,7 @@ services:
# - "995:995"
# - "5432:5432"
- "5900:5900"
image: "dtagdevsec/heralding:1811"
image: "dtagdevsec/heralding:1903"
read_only: true
volumes:
- /data/heralding/log:/var/log/heralding
@ -193,7 +193,7 @@ services:
network_mode: "host"
cap_add:
- NET_ADMIN
image: "dtagdevsec/honeytrap:1811"
image: "dtagdevsec/honeytrap:1903"
read_only: true
volumes:
- /data/honeytrap/attacks:/opt/honeytrap/var/attacks
@ -208,7 +208,7 @@ services:
- medpot_local
ports:
- "2575:2575"
image: "dtagdevsec/medpot:1811"
image: "dtagdevsec/medpot:1903"
read_only: true
volumes:
- /data/medpot/log/:/var/log/medpot
@ -229,7 +229,7 @@ services:
- rdpy_local
ports:
- "3389:3389"
image: "dtagdevsec/rdpy:1811"
image: "dtagdevsec/rdpy:1903"
read_only: true
volumes:
- /data/rdpy/log:/var/log/rdpy
@ -244,7 +244,7 @@ services:
container_name: p0f
restart: always
network_mode: "host"
image: "dtagdevsec/p0f:1811"
image: "dtagdevsec/p0f:1903"
read_only: true
volumes:
- /data/p0f/log:/var/log/p0f
@ -261,7 +261,7 @@ services:
- NET_ADMIN
- SYS_NICE
- NET_RAW
image: "dtagdevsec/suricata:1811"
image: "dtagdevsec/suricata:1903"
volumes:
- /data/suricata/log:/var/log/suricata
@ -278,7 +278,7 @@ services:
- cyberchef_local
ports:
- "127.0.0.1:64299:8000"
image: "dtagdevsec/cyberchef:1811"
image: "dtagdevsec/cyberchef:1903"
read_only: true
#### ELK
@ -302,7 +302,7 @@ services:
mem_limit: 4g
ports:
- "127.0.0.1:64298:9200"
image: "dtagdevsec/elasticsearch:1811"
image: "dtagdevsec/elasticsearch:1903"
volumes:
- /data:/data
@ -315,7 +315,7 @@ services:
condition: service_healthy
ports:
- "127.0.0.1:64296:5601"
image: "dtagdevsec/kibana:1811"
image: "dtagdevsec/kibana:1903"
## Logstash service
logstash:
@ -326,7 +326,7 @@ services:
condition: service_healthy
env_file:
- /opt/tpot/etc/compose/elk_environment
image: "dtagdevsec/logstash:1811"
image: "dtagdevsec/logstash:1903"
volumes:
- /data:/data
@ -339,7 +339,7 @@ services:
condition: service_healthy
ports:
- "127.0.0.1:64302:9100"
image: "dtagdevsec/head:1811"
image: "dtagdevsec/head:1903"
read_only: true
# Ewsposter service
@ -348,9 +348,18 @@ services:
restart: always
networks:
- ewsposter_local
environment:
- EWS_HPFEEDS_ENABLE=false
- EWS_HPFEEDS_HOST=host
- EWS_HPFEEDS_PORT=port
- EWS_HPFEEDS_CHANNELS=channels
- EWS_HPFEEDS_IDENT=user
- EWS_HPFEEDS_SECRET=secret
- EWS_HPFEEDS_TLSCERT=false
- EWS_HPFEEDS_FORMAT=json
env_file:
- /opt/tpot/etc/compose/elk_environment
image: "dtagdevsec/ewsposter:1811"
image: "dtagdevsec/ewsposter:1903"
volumes:
- /data:/data
- /data/ews/conf/ews.ip:/opt/ewsposter/ews.ip
@ -369,7 +378,7 @@ services:
network_mode: "host"
ports:
- "64297:64297"
image: "dtagdevsec/nginx:1811"
image: "dtagdevsec/nginx:1903"
read_only: true
volumes:
- /data/nginx/cert/:/etc/nginx/cert/:ro
@ -384,6 +393,6 @@ services:
- spiderfoot_local
ports:
- "127.0.0.1:64303:8080"
image: "dtagdevsec/spiderfoot:1811"
image: "dtagdevsec/spiderfoot:1903"
volumes:
- /data/spiderfoot/spiderfoot.db:/home/spiderfoot/spiderfoot.db

View file

@ -1,329 +0,0 @@
# T-Pot (Legacy)
# Do not erase ports sections, these are used by /opt/tpot/bin/rules.sh to setup iptables ACCEPT rules for NFQ (honeytrap / glutton)
version: '2.3'
networks:
cowrie_local:
elasticpot_local:
glastopf_local:
heralding_local:
mailoney_local:
rdpy_local:
ewsposter_local:
spiderfoot_local:
services:
##################
#### Honeypots
##################
# Cowrie service
cowrie:
container_name: cowrie
restart: always
tmpfs:
- /tmp/cowrie:uid=2000,gid=2000
- /tmp/cowrie/data:uid=2000,gid=2000
networks:
- cowrie_local
ports:
- "22:22"
- "23:23"
image: "dtagdevsec/cowrie:1811"
read_only: true
volumes:
- /data/cowrie/downloads:/home/cowrie/cowrie/dl
- /data/cowrie/keys:/home/cowrie/cowrie/etc
- /data/cowrie/log:/home/cowrie/cowrie/log
- /data/cowrie/log/tty:/home/cowrie/cowrie/log/tty
# Dionaea service
dionaea:
container_name: dionaea
stdin_open: true
tty: true
restart: always
network_mode: "host"
ports:
- "20:20"
- "21:21"
- "42:42"
- "69:69/udp"
- "81:81"
- "135:135"
- "443:443"
- "445:445"
- "1433:1433"
- "1723:1723"
- "1883:1883"
- "3306:3306"
- "5060:5060"
- "5060:5060/udp"
- "5061:5061"
- "27017:27017"
image: "dtagdevsec/dionaea:1811"
read_only: true
volumes:
- /data/dionaea/roots/ftp:/opt/dionaea/var/dionaea/roots/ftp
- /data/dionaea/roots/tftp:/opt/dionaea/var/dionaea/roots/tftp
- /data/dionaea/roots/www:/opt/dionaea/var/dionaea/roots/www
- /data/dionaea/roots/upnp:/opt/dionaea/var/dionaea/roots/upnp
- /data/dionaea:/opt/dionaea/var/dionaea
- /data/dionaea/binaries:/opt/dionaea/var/dionaea/binaries
- /data/dionaea/log:/opt/dionaea/var/log
- /data/dionaea/rtp:/opt/dionaea/var/dionaea/rtp
# Elasticpot service
elasticpot:
container_name: elasticpot
restart: always
networks:
- elasticpot_local
ports:
- "9200:9200"
image: "dtagdevsec/elasticpot:1811"
read_only: true
volumes:
- /data/elasticpot/log:/opt/ElasticpotPY/log
# Glastopf service
glastopf:
container_name: glastopf
tmpfs:
- /tmp/glastopf:uid=2000,gid=2000
restart: always
networks:
- glastopf_local
ports:
- "80:80"
image: "dtagdevsec/glastopf:1811"
read_only: true
volumes:
- /data/glastopf/db:/tmp/glastopf/db
- /data/glastopf/log:/tmp/glastopf/log
# Heralding service
heralding:
container_name: heralding
restart: always
tmpfs:
- /tmp/heralding:uid=2000,gid=2000
networks:
- heralding_local
ports:
# - "21:21"
# - "22:22"
# - "23:23"
# - "25:25"
# - "80:80"
# - "110:110"
# - "143:143"
# - "443:443"
# - "993:993"
# - "995:995"
# - "5432:5432"
- "5900:5900"
image: "dtagdevsec/heralding:1811"
read_only: true
volumes:
- /data/heralding/log:/var/log/heralding
# Honeytrap service
honeytrap:
container_name: honeytrap
restart: always
tmpfs:
- /tmp/honeytrap:uid=2000,gid=2000
network_mode: "host"
cap_add:
- NET_ADMIN
image: "dtagdevsec/honeytrap:1811"
read_only: true
volumes:
- /data/honeytrap/attacks:/opt/honeytrap/var/attacks
- /data/honeytrap/downloads:/opt/honeytrap/var/downloads
- /data/honeytrap/log:/opt/honeytrap/var/log
# Mailoney service
mailoney:
container_name: mailoney
restart: always
environment:
- HPFEEDS_SERVER=
- HPFEEDS_IDENT=user
- HPFEEDS_SECRET=pass
- HPFEEDS_PORT=20000
- HPFEEDS_CHANNELPREFIX=prefix
networks:
- mailoney_local
ports:
- "25:25"
image: "dtagdevsec/mailoney:1811"
read_only: true
volumes:
- /data/mailoney/log:/opt/mailoney/logs
# Rdpy service
rdpy:
container_name: rdpy
extra_hosts:
- hpfeeds.example.com:127.0.0.1
restart: always
environment:
- HPFEEDS_SERVER=hpfeeds.example.com
- HPFEEDS_IDENT=user
- HPFEEDS_SECRET=pass
- HPFEEDS_PORT=65000
- SERVERID=id
networks:
- rdpy_local
ports:
- "3389:3389"
image: "dtagdevsec/rdpy:1811"
read_only: true
volumes:
- /data/rdpy/log:/var/log/rdpy
##################
#### NSM
##################
# P0f service
p0f:
container_name: p0f
restart: always
network_mode: "host"
image: "dtagdevsec/p0f:1811"
read_only: true
volumes:
- /data/p0f/log:/var/log/p0f
# Suricata service
suricata:
container_name: suricata
restart: always
environment:
# For ET Pro ruleset replace "OPEN" with your OINKCODE
- OINKCODE=OPEN
network_mode: "host"
cap_add:
- NET_ADMIN
- SYS_NICE
- NET_RAW
image: "dtagdevsec/suricata:1811"
volumes:
- /data/suricata/log:/var/log/suricata
##################
#### Tools
##################
#### ELK
## Elasticsearch service
elasticsearch:
container_name: elasticsearch
restart: always
environment:
- bootstrap.memory_lock=true
- ES_JAVA_OPTS=-Xms1024m -Xmx1024m
- ES_TMPDIR=/tmp
cap_add:
- IPC_LOCK
ulimits:
memlock:
soft: -1
hard: -1
nofile:
soft: 65536
hard: 65536
mem_limit: 4g
ports:
- "127.0.0.1:64298:9200"
image: "dtagdevsec/elasticsearch:1811"
volumes:
- /data:/data
## Kibana service
kibana:
container_name: kibana
restart: always
depends_on:
elasticsearch:
condition: service_healthy
ports:
- "127.0.0.1:64296:5601"
image: "dtagdevsec/kibana:1811"
## Logstash service
logstash:
container_name: logstash
restart: always
depends_on:
elasticsearch:
condition: service_healthy
env_file:
- /opt/tpot/etc/compose/elk_environment
image: "dtagdevsec/logstash:1811"
volumes:
- /data:/data
## Elasticsearch-head service
head:
container_name: head
restart: always
depends_on:
elasticsearch:
condition: service_healthy
ports:
- "127.0.0.1:64302:9100"
image: "dtagdevsec/head:1811"
read_only: true
# Ewsposter service
ewsposter:
container_name: ewsposter
restart: always
networks:
- ewsposter_local
env_file:
- /opt/tpot/etc/compose/elk_environment
image: "dtagdevsec/ewsposter:1811"
volumes:
- /data:/data
- /data/ews/conf/ews.ip:/opt/ewsposter/ews.ip
# Nginx service
nginx:
container_name: nginx
restart: always
tmpfs:
- /var/tmp/nginx/client_body
- /var/tmp/nginx/proxy
- /var/tmp/nginx/fastcgi
- /var/tmp/nginx/uwsgi
- /var/tmp/nginx/scgi
- /run
network_mode: "host"
ports:
- "64297:64297"
image: "dtagdevsec/nginx:1811"
read_only: true
volumes:
- /data/nginx/cert/:/etc/nginx/cert/:ro
- /data/nginx/conf/nginxpasswd:/etc/nginx/nginxpasswd:ro
- /data/nginx/log/:/var/log/nginx/
# Spiderfoot service
spiderfoot:
container_name: spiderfoot
restart: always
networks:
- spiderfoot_local
ports:
- "127.0.0.1:64303:8080"
image: "dtagdevsec/spiderfoot:1811"
volumes:
- /data/spiderfoot/spiderfoot.db:/home/spiderfoot/spiderfoot.db

View file

@ -10,8 +10,8 @@ networks:
conpot_local_kamstrup_382:
cowrie_local:
cyberchef_local:
elasticpot_local:
heralding_local:
honeypy_local:
mailoney_local:
medpot_local:
rdpy_local:
@ -33,7 +33,7 @@ services:
- adbhoney_local
ports:
- "5555:5555"
image: "dtagdevsec/adbhoney:1811"
image: "dtagdevsec/adbhoney:1903"
read_only: true
volumes:
- /data/adbhoney/log:/opt/adbhoney/log
@ -49,7 +49,7 @@ services:
ports:
- "5000:5000/udp"
- "8443:8443"
image: "dtagdevsec/ciscoasa:1811"
image: "dtagdevsec/ciscoasa:1903"
read_only: true
volumes:
- /data/ciscoasa/log:/var/log/ciscoasa
@ -71,7 +71,7 @@ services:
ports:
- "161:161"
- "2404:2404"
image: "dtagdevsec/conpot:1811"
image: "dtagdevsec/conpot:1903"
read_only: true
volumes:
- /data/conpot/log:/var/log/conpot
@ -92,7 +92,7 @@ services:
- conpot_local_guardian_ast
ports:
- "10001:10001"
image: "dtagdevsec/conpot:1811"
image: "dtagdevsec/conpot:1903"
read_only: true
volumes:
- /data/conpot/log:/var/log/conpot
@ -113,7 +113,7 @@ services:
- conpot_local_ipmi
ports:
- "623:623"
image: "dtagdevsec/conpot:1811"
image: "dtagdevsec/conpot:1903"
read_only: true
volumes:
- /data/conpot/log:/var/log/conpot
@ -135,7 +135,7 @@ services:
ports:
- "1025:1025"
- "50100:50100"
image: "dtagdevsec/conpot:1811"
image: "dtagdevsec/conpot:1903"
read_only: true
volumes:
- /data/conpot/log:/var/log/conpot
@ -152,7 +152,7 @@ services:
ports:
- "22:22"
- "23:23"
image: "dtagdevsec/cowrie:1811"
image: "dtagdevsec/cowrie:1903"
read_only: true
volumes:
- /data/cowrie/downloads:/home/cowrie/cowrie/dl
@ -184,7 +184,7 @@ services:
- "5060:5060/udp"
- "5061:5061"
- "27017:27017"
image: "dtagdevsec/dionaea:1811"
image: "dtagdevsec/dionaea:1903"
read_only: true
volumes:
- /data/dionaea/roots/ftp:/opt/dionaea/var/dionaea/roots/ftp
@ -196,18 +196,22 @@ services:
- /data/dionaea/log:/opt/dionaea/var/log
- /data/dionaea/rtp:/opt/dionaea/var/dionaea/rtp
# Elasticpot service
elasticpot:
container_name: elasticpot
# Glutton service
glutton:
build: .
container_name: glutton
restart: always
networks:
- elasticpot_local
ports:
- "9200:9200"
image: "dtagdevsec/elasticpot:1811"
tmpfs:
- /var/lib/glutton:uid=2000,gid=2000
- /run:uid=2000,gid=2000
network_mode: "host"
cap_add:
- NET_ADMIN
image: "dtagdevsec/glutton:1903"
read_only: true
volumes:
- /data/elasticpot/log:/opt/ElasticpotPY/log
- /data/glutton/log:/var/log/glutton
# - /root/tpotce/docker/glutton/dist/rules.yaml:/opt/glutton/rules/rules.yaml
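The commented volume mapping above shows where a custom Glutton rule set would be mounted. A minimal sketch of the volumes section with that mapping enabled is shown below; it assumes a rules.yaml has been placed at the host path used in the comment, which may differ on an actual install:

```yaml
    # Sketch only: enables the custom rules mount from the commented example above.
    # The host-side path is an assumption taken from that comment; adjust as needed.
    volumes:
      - /data/glutton/log:/var/log/glutton
      - /root/tpotce/docker/glutton/dist/rules.yaml:/opt/glutton/rules/rules.yaml
```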
# Heralding service
heralding:
@ -228,29 +232,33 @@ services:
# - "443:443"
- "993:993"
- "995:995"
- "1080:1080"
- "5432:5432"
- "5900:5900"
image: "dtagdevsec/heralding:1811"
image: "dtagdevsec/heralding:1903"
read_only: true
volumes:
- /data/heralding/log:/var/log/heralding
# Glutton service
glutton:
# HoneyPy service
honeypy:
build: .
container_name: glutton
container_name: honeypy
restart: always
tmpfs:
- /var/lib/glutton:uid=2000,gid=2000
- /run:uid=2000,gid=2000
network_mode: "host"
cap_add:
- NET_ADMIN
image: "dtagdevsec/glutton:1811"
networks:
- honeypy_local
ports:
- "7:7"
- "8:8"
- "2048:2048"
- "2323:2323"
- "2324:2324"
- "4096:4096"
- "9200:9200"
image: "dtagdevsec/honeypy:1903"
read_only: true
volumes:
- /data/glutton/log:/var/log/glutton
# - /root/tpotce/docker/glutton/dist/rules.yaml:/opt/glutton/rules/rules.yaml
- /data/honeypy/log:/opt/honeypy/log
# Mailoney service
mailoney:
@ -266,7 +274,7 @@ services:
- mailoney_local
ports:
- "25:25"
image: "dtagdevsec/mailoney:1811"
image: "dtagdevsec/mailoney:1903"
read_only: true
volumes:
- /data/mailoney/log:/opt/mailoney/logs
@ -279,7 +287,7 @@ services:
- medpot_local
ports:
- "2575:2575"
image: "dtagdevsec/medpot:1811"
image: "dtagdevsec/medpot:1903"
read_only: true
volumes:
- /data/medpot/log/:/var/log/medpot
@ -300,7 +308,7 @@ services:
- rdpy_local
ports:
- "3389:3389"
image: "dtagdevsec/rdpy:1811"
image: "dtagdevsec/rdpy:1903"
read_only: true
volumes:
- /data/rdpy/log:/var/log/rdpy
@ -313,7 +321,7 @@ services:
tty: true
networks:
- tanner_local
image: "dtagdevsec/redis:1811"
image: "dtagdevsec/redis:1903"
read_only: true
## PHP Sandbox service
@ -323,7 +331,7 @@ services:
tty: true
networks:
- tanner_local
image: "dtagdevsec/phpox:1811"
image: "dtagdevsec/phpox:1903"
read_only: true
## Tanner API Service
@ -335,7 +343,7 @@ services:
tty: true
networks:
- tanner_local
image: "dtagdevsec/tanner:1811"
image: "dtagdevsec/tanner:1903"
read_only: true
volumes:
- /data/tanner/log:/var/log/tanner
@ -352,7 +360,7 @@ services:
tty: true
networks:
- tanner_local
image: "dtagdevsec/tanner:1811"
image: "dtagdevsec/tanner:1903"
command: tannerweb
read_only: true
volumes:
@ -369,7 +377,7 @@ services:
tty: true
networks:
- tanner_local
image: "dtagdevsec/tanner:1811"
image: "dtagdevsec/tanner:1903"
command: tanner
read_only: true
volumes:
@ -389,7 +397,7 @@ services:
- tanner_local
ports:
- "80:80"
image: "dtagdevsec/snare:1811"
image: "dtagdevsec/snare:1903"
depends_on:
- tanner
@ -403,7 +411,7 @@ services:
container_name: p0f
restart: always
network_mode: "host"
image: "dtagdevsec/p0f:1811"
image: "dtagdevsec/p0f:1903"
read_only: true
volumes:
- /data/p0f/log:/var/log/p0f
@ -420,7 +428,7 @@ services:
- NET_ADMIN
- SYS_NICE
- NET_RAW
image: "dtagdevsec/suricata:1811"
image: "dtagdevsec/suricata:1903"
volumes:
- /data/suricata/log:/var/log/suricata
@ -437,7 +445,7 @@ services:
- cyberchef_local
ports:
- "127.0.0.1:64299:8000"
image: "dtagdevsec/cyberchef:1811"
image: "dtagdevsec/cyberchef:1903"
read_only: true
#### ELK
@ -461,7 +469,7 @@ services:
mem_limit: 4g
ports:
- "127.0.0.1:64298:9200"
image: "dtagdevsec/elasticsearch:1811"
image: "dtagdevsec/elasticsearch:1903"
volumes:
- /data:/data
@ -474,7 +482,7 @@ services:
condition: service_healthy
ports:
- "127.0.0.1:64296:5601"
image: "dtagdevsec/kibana:1811"
image: "dtagdevsec/kibana:1903"
## Logstash service
logstash:
@ -485,7 +493,7 @@ services:
condition: service_healthy
env_file:
- /opt/tpot/etc/compose/elk_environment
image: "dtagdevsec/logstash:1811"
image: "dtagdevsec/logstash:1903"
volumes:
- /data:/data
@ -498,7 +506,7 @@ services:
condition: service_healthy
ports:
- "127.0.0.1:64302:9100"
image: "dtagdevsec/head:1811"
image: "dtagdevsec/head:1903"
read_only: true
# Ewsposter service
@ -507,9 +515,18 @@ services:
restart: always
networks:
- ewsposter_local
environment:
- EWS_HPFEEDS_ENABLE=false
- EWS_HPFEEDS_HOST=host
- EWS_HPFEEDS_PORT=port
- EWS_HPFEEDS_CHANNELS=channels
- EWS_HPFEEDS_IDENT=user
- EWS_HPFEEDS_SECRET=secret
- EWS_HPFEEDS_TLSCERT=false
- EWS_HPFEEDS_FORMAT=json
env_file:
- /opt/tpot/etc/compose/elk_environment
image: "dtagdevsec/ewsposter:1811"
image: "dtagdevsec/ewsposter:1903"
volumes:
- /data:/data
- /data/ews/conf/ews.ip:/opt/ewsposter/ews.ip
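The environment block introduced above is what drives the new opt-in HPFEEDS submission for ewsposter. A minimal sketch of how the values might look when opting in follows; the broker host, port, channel, identity and secret are placeholders for whatever an HPFEEDS provider hands out, not values shipped with T-Pot:

```yaml
    # Hypothetical opt-in example only - broker details below are placeholders,
    # not defaults. Obtain real values from your HPFEEDS provider.
    environment:
      - EWS_HPFEEDS_ENABLE=true
      - EWS_HPFEEDS_HOST=hpfeeds.example.org
      - EWS_HPFEEDS_PORT=10000
      - EWS_HPFEEDS_CHANNELS=ews.json
      - EWS_HPFEEDS_IDENT=my-ident
      - EWS_HPFEEDS_SECRET=my-secret
      - EWS_HPFEEDS_TLSCERT=false
      - EWS_HPFEEDS_FORMAT=json
```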
@ -528,7 +545,7 @@ services:
network_mode: "host"
ports:
- "64297:64297"
image: "dtagdevsec/nginx:1811"
image: "dtagdevsec/nginx:1903"
read_only: true
volumes:
- /data/nginx/cert/:/etc/nginx/cert/:ro
@ -543,6 +560,6 @@ services:
- spiderfoot_local
ports:
- "127.0.0.1:64303:8080"
image: "dtagdevsec/spiderfoot:1811"
image: "dtagdevsec/spiderfoot:1903"
volumes:
- /data/spiderfoot/spiderfoot.db:/home/spiderfoot/spiderfoot.db


@ -32,7 +32,7 @@ services:
- adbhoney_local
ports:
- "5555:5555"
image: "dtagdevsec/adbhoney:1811"
image: "dtagdevsec/adbhoney:1903"
read_only: true
volumes:
- /data/adbhoney/log:/opt/adbhoney/log
@ -48,7 +48,7 @@ services:
ports:
- "5000:5000/udp"
- "8443:8443"
image: "dtagdevsec/ciscoasa:1811"
image: "dtagdevsec/ciscoasa:1903"
read_only: true
volumes:
- /data/ciscoasa/log:/var/log/ciscoasa
@ -70,7 +70,7 @@ services:
ports:
- "161:161"
- "2404:2404"
image: "dtagdevsec/conpot:1811"
image: "dtagdevsec/conpot:1903"
read_only: true
volumes:
- /data/conpot/log:/var/log/conpot
@ -91,7 +91,7 @@ services:
- conpot_local_guardian_ast
ports:
- "10001:10001"
image: "dtagdevsec/conpot:1811"
image: "dtagdevsec/conpot:1903"
read_only: true
volumes:
- /data/conpot/log:/var/log/conpot
@ -112,7 +112,7 @@ services:
- conpot_local_ipmi
ports:
- "623:623"
image: "dtagdevsec/conpot:1811"
image: "dtagdevsec/conpot:1903"
read_only: true
volumes:
- /data/conpot/log:/var/log/conpot
@ -134,7 +134,7 @@ services:
ports:
- "1025:1025"
- "50100:50100"
image: "dtagdevsec/conpot:1811"
image: "dtagdevsec/conpot:1903"
read_only: true
volumes:
- /data/conpot/log:/var/log/conpot
@ -151,7 +151,7 @@ services:
ports:
- "22:22"
- "23:23"
image: "dtagdevsec/cowrie:1811"
image: "dtagdevsec/cowrie:1903"
read_only: true
volumes:
- /data/cowrie/downloads:/home/cowrie/cowrie/dl
@ -183,7 +183,7 @@ services:
- "5060:5060/udp"
- "5061:5061"
- "27017:27017"
image: "dtagdevsec/dionaea:1811"
image: "dtagdevsec/dionaea:1903"
read_only: true
volumes:
- /data/dionaea/roots/ftp:/opt/dionaea/var/dionaea/roots/ftp
@ -203,7 +203,7 @@ services:
- elasticpot_local
ports:
- "9200:9200"
image: "dtagdevsec/elasticpot:1811"
image: "dtagdevsec/elasticpot:1903"
read_only: true
volumes:
- /data/elasticpot/log:/opt/ElasticpotPY/log
@ -227,9 +227,10 @@ services:
# - "443:443"
- "993:993"
- "995:995"
- "1080:1080"
- "5432:5432"
- "5900:5900"
image: "dtagdevsec/heralding:1811"
image: "dtagdevsec/heralding:1903"
read_only: true
volumes:
- /data/heralding/log:/var/log/heralding
@ -243,7 +244,7 @@ services:
network_mode: "host"
cap_add:
- NET_ADMIN
image: "dtagdevsec/honeytrap:1811"
image: "dtagdevsec/honeytrap:1903"
read_only: true
volumes:
- /data/honeytrap/attacks:/opt/honeytrap/var/attacks
@ -264,7 +265,7 @@ services:
- mailoney_local
ports:
- "25:25"
image: "dtagdevsec/mailoney:1811"
image: "dtagdevsec/mailoney:1903"
read_only: true
volumes:
- /data/mailoney/log:/opt/mailoney/logs
@ -277,7 +278,7 @@ services:
- medpot_local
ports:
- "2575:2575"
image: "dtagdevsec/medpot:1811"
image: "dtagdevsec/medpot:1903"
read_only: true
volumes:
- /data/medpot/log/:/var/log/medpot
@ -298,7 +299,7 @@ services:
- rdpy_local
ports:
- "3389:3389"
image: "dtagdevsec/rdpy:1811"
image: "dtagdevsec/rdpy:1903"
read_only: true
volumes:
- /data/rdpy/log:/var/log/rdpy
@ -311,7 +312,7 @@ services:
tty: true
networks:
- tanner_local
image: "dtagdevsec/redis:1811"
image: "dtagdevsec/redis:1903"
read_only: true
## PHP Sandbox service
@ -321,7 +322,7 @@ services:
tty: true
networks:
- tanner_local
image: "dtagdevsec/phpox:1811"
image: "dtagdevsec/phpox:1903"
read_only: true
## Tanner API Service
@ -333,7 +334,7 @@ services:
tty: true
networks:
- tanner_local
image: "dtagdevsec/tanner:1811"
image: "dtagdevsec/tanner:1903"
read_only: true
volumes:
- /data/tanner/log:/var/log/tanner
@ -350,7 +351,7 @@ services:
tty: true
networks:
- tanner_local
image: "dtagdevsec/tanner:1811"
image: "dtagdevsec/tanner:1903"
command: tannerweb
read_only: true
volumes:
@ -367,7 +368,7 @@ services:
tty: true
networks:
- tanner_local
image: "dtagdevsec/tanner:1811"
image: "dtagdevsec/tanner:1903"
command: tanner
read_only: true
volumes:
@ -387,7 +388,7 @@ services:
- tanner_local
ports:
- "80:80"
image: "dtagdevsec/snare:1811"
image: "dtagdevsec/snare:1903"
depends_on:
- tanner
@ -401,7 +402,7 @@ services:
container_name: p0f
restart: always
network_mode: "host"
image: "dtagdevsec/p0f:1811"
image: "dtagdevsec/p0f:1903"
read_only: true
volumes:
- /data/p0f/log:/var/log/p0f
@ -418,7 +419,7 @@ services:
- NET_ADMIN
- SYS_NICE
- NET_RAW
image: "dtagdevsec/suricata:1811"
image: "dtagdevsec/suricata:1903"
volumes:
- /data/suricata/log:/var/log/suricata
@ -433,9 +434,18 @@ services:
restart: always
networks:
- ewsposter_local
environment:
- EWS_HPFEEDS_ENABLE=false
- EWS_HPFEEDS_HOST=host
- EWS_HPFEEDS_PORT=port
- EWS_HPFEEDS_CHANNELS=channels
- EWS_HPFEEDS_IDENT=user
- EWS_HPFEEDS_SECRET=secret
- EWS_HPFEEDS_TLSCERT=false
- EWS_HPFEEDS_FORMAT=json
env_file:
- /opt/tpot/etc/compose/elk_environment
image: "dtagdevsec/ewsposter:1811"
image: "dtagdevsec/ewsposter:1903"
volumes:
- /data:/data
- /data/ews/conf/ews.ip:/opt/ewsposter/ews.ip


@ -33,7 +33,7 @@ services:
- adbhoney_local
ports:
- "5555:5555"
image: "dtagdevsec/adbhoney:1811"
image: "dtagdevsec/adbhoney:1903"
read_only: true
volumes:
- /data/adbhoney/log:/opt/adbhoney/log
@ -49,7 +49,7 @@ services:
ports:
- "5000:5000/udp"
- "8443:8443"
image: "dtagdevsec/ciscoasa:1811"
image: "dtagdevsec/ciscoasa:1903"
read_only: true
volumes:
- /data/ciscoasa/log:/var/log/ciscoasa
@ -71,7 +71,7 @@ services:
ports:
- "161:161"
- "2404:2404"
image: "dtagdevsec/conpot:1811"
image: "dtagdevsec/conpot:1903"
read_only: true
volumes:
- /data/conpot/log:/var/log/conpot
@ -92,7 +92,7 @@ services:
- conpot_local_guardian_ast
ports:
- "10001:10001"
image: "dtagdevsec/conpot:1811"
image: "dtagdevsec/conpot:1903"
read_only: true
volumes:
- /data/conpot/log:/var/log/conpot
@ -113,7 +113,7 @@ services:
- conpot_local_ipmi
ports:
- "623:623"
image: "dtagdevsec/conpot:1811"
image: "dtagdevsec/conpot:1903"
read_only: true
volumes:
- /data/conpot/log:/var/log/conpot
@ -135,7 +135,7 @@ services:
ports:
- "1025:1025"
- "50100:50100"
image: "dtagdevsec/conpot:1811"
image: "dtagdevsec/conpot:1903"
read_only: true
volumes:
- /data/conpot/log:/var/log/conpot
@ -152,7 +152,7 @@ services:
ports:
- "22:22"
- "23:23"
image: "dtagdevsec/cowrie:1811"
image: "dtagdevsec/cowrie:1903"
read_only: true
volumes:
- /data/cowrie/downloads:/home/cowrie/cowrie/dl
@ -184,7 +184,7 @@ services:
- "5060:5060/udp"
- "5061:5061"
- "27017:27017"
image: "dtagdevsec/dionaea:1811"
image: "dtagdevsec/dionaea:1903"
read_only: true
volumes:
- /data/dionaea/roots/ftp:/opt/dionaea/var/dionaea/roots/ftp
@ -204,7 +204,7 @@ services:
- elasticpot_local
ports:
- "9200:9200"
image: "dtagdevsec/elasticpot:1811"
image: "dtagdevsec/elasticpot:1903"
read_only: true
volumes:
- /data/elasticpot/log:/opt/ElasticpotPY/log
@ -228,9 +228,10 @@ services:
# - "443:443"
- "993:993"
- "995:995"
- "1080:1080"
- "5432:5432"
- "5900:5900"
image: "dtagdevsec/heralding:1811"
image: "dtagdevsec/heralding:1903"
read_only: true
volumes:
- /data/heralding/log:/var/log/heralding
@ -244,7 +245,7 @@ services:
network_mode: "host"
cap_add:
- NET_ADMIN
image: "dtagdevsec/honeytrap:1811"
image: "dtagdevsec/honeytrap:1903"
read_only: true
volumes:
- /data/honeytrap/attacks:/opt/honeytrap/var/attacks
@ -265,7 +266,7 @@ services:
- mailoney_local
ports:
- "25:25"
image: "dtagdevsec/mailoney:1811"
image: "dtagdevsec/mailoney:1903"
read_only: true
volumes:
- /data/mailoney/log:/opt/mailoney/logs
@ -278,7 +279,7 @@ services:
- medpot_local
ports:
- "2575:2575"
image: "dtagdevsec/medpot:1811"
image: "dtagdevsec/medpot:1903"
read_only: true
volumes:
- /data/medpot/log/:/var/log/medpot
@ -299,7 +300,7 @@ services:
- rdpy_local
ports:
- "3389:3389"
image: "dtagdevsec/rdpy:1811"
image: "dtagdevsec/rdpy:1903"
read_only: true
volumes:
- /data/rdpy/log:/var/log/rdpy
@ -312,7 +313,7 @@ services:
tty: true
networks:
- tanner_local
image: "dtagdevsec/redis:1811"
image: "dtagdevsec/redis:1903"
read_only: true
## PHP Sandbox service
@ -322,7 +323,7 @@ services:
tty: true
networks:
- tanner_local
image: "dtagdevsec/phpox:1811"
image: "dtagdevsec/phpox:1903"
read_only: true
## Tanner API Service
@ -334,7 +335,7 @@ services:
tty: true
networks:
- tanner_local
image: "dtagdevsec/tanner:1811"
image: "dtagdevsec/tanner:1903"
read_only: true
volumes:
- /data/tanner/log:/var/log/tanner
@ -351,7 +352,7 @@ services:
tty: true
networks:
- tanner_local
image: "dtagdevsec/tanner:1811"
image: "dtagdevsec/tanner:1903"
command: tannerweb
read_only: true
volumes:
@ -368,7 +369,7 @@ services:
tty: true
networks:
- tanner_local
image: "dtagdevsec/tanner:1811"
image: "dtagdevsec/tanner:1903"
command: tanner
read_only: true
volumes:
@ -388,7 +389,7 @@ services:
- tanner_local
ports:
- "80:80"
image: "dtagdevsec/snare:1811"
image: "dtagdevsec/snare:1903"
depends_on:
- tanner
@ -402,7 +403,7 @@ services:
container_name: p0f
restart: always
network_mode: "host"
image: "dtagdevsec/p0f:1811"
image: "dtagdevsec/p0f:1903"
read_only: true
volumes:
- /data/p0f/log:/var/log/p0f
@ -419,7 +420,7 @@ services:
- NET_ADMIN
- SYS_NICE
- NET_RAW
image: "dtagdevsec/suricata:1811"
image: "dtagdevsec/suricata:1903"
volumes:
- /data/suricata/log:/var/log/suricata
@ -436,7 +437,7 @@ services:
- cyberchef_local
ports:
- "127.0.0.1:64299:8000"
image: "dtagdevsec/cyberchef:1811"
image: "dtagdevsec/cyberchef:1903"
read_only: true
#### ELK
@ -460,7 +461,7 @@ services:
mem_limit: 4g
ports:
- "127.0.0.1:64298:9200"
image: "dtagdevsec/elasticsearch:1811"
image: "dtagdevsec/elasticsearch:1903"
volumes:
- /data:/data
@ -473,7 +474,7 @@ services:
condition: service_healthy
ports:
- "127.0.0.1:64296:5601"
image: "dtagdevsec/kibana:1811"
image: "dtagdevsec/kibana:1903"
## Logstash service
logstash:
@ -484,7 +485,7 @@ services:
condition: service_healthy
env_file:
- /opt/tpot/etc/compose/elk_environment
image: "dtagdevsec/logstash:1811"
image: "dtagdevsec/logstash:1903"
volumes:
- /data:/data
@ -497,7 +498,7 @@ services:
condition: service_healthy
ports:
- "127.0.0.1:64302:9100"
image: "dtagdevsec/head:1811"
image: "dtagdevsec/head:1903"
read_only: true
# Ewsposter service
@ -506,9 +507,18 @@ services:
restart: always
networks:
- ewsposter_local
environment:
- EWS_HPFEEDS_ENABLE=false
- EWS_HPFEEDS_HOST=host
- EWS_HPFEEDS_PORT=port
- EWS_HPFEEDS_CHANNELS=channels
- EWS_HPFEEDS_IDENT=user
- EWS_HPFEEDS_SECRET=secret
- EWS_HPFEEDS_TLSCERT=false
- EWS_HPFEEDS_FORMAT=json
env_file:
- /opt/tpot/etc/compose/elk_environment
image: "dtagdevsec/ewsposter:1811"
image: "dtagdevsec/ewsposter:1903"
volumes:
- /data:/data
- /data/ews/conf/ews.ip:/opt/ewsposter/ews.ip
@ -527,7 +537,7 @@ services:
network_mode: "host"
ports:
- "64297:64297"
image: "dtagdevsec/nginx:1811"
image: "dtagdevsec/nginx:1903"
read_only: true
volumes:
- /data/nginx/cert/:/etc/nginx/cert/:ro
@ -542,6 +552,6 @@ services:
- spiderfoot_local
ports:
- "127.0.0.1:64303:8080"
image: "dtagdevsec/spiderfoot:1811"
image: "dtagdevsec/spiderfoot:1903"
volumes:
- /data/spiderfoot/spiderfoot.db:/home/spiderfoot/spiderfoot.db


@ -22,6 +22,7 @@
/data/glutton/log/*.err
/data/heralding/log/*.log
/data/heralding/log/*.csv
/data/honeypy/log/*.log
/data/honeytrap/log/*.log
/data/honeytrap/log/*.json
/data/honeytrap/attacks.tgz

Binary file not shown.

Binary file not shown.

File diff suppressed because one or more lines are too long

Binary file not shown.


@ -1,144 +0,0 @@
#
# Run-time configuration file for dialog
#
# Automatically generated by "dialog --create-rc <file>"
#
#
# Types of values:
#
# Number - <number>
# String - "string"
# Boolean - <ON|OFF>
# Attribute - (foreground,background,highlight?)
# Set aspect-ratio.
aspect = 0
# Set separator (for multiple widgets output).
separate_widget = ""
# Set tab-length (for textbox tab-conversion).
tab_len = 0
# Make tab-traversal for checklist, etc., include the list.
visit_items = OFF
# Shadow dialog boxes? This also turns on color.
use_shadow = ON
# Turn color support ON or OFF
use_colors = ON
# Screen color
screen_color = (WHITE,MAGENTA,ON)
# Shadow color
shadow_color = (BLACK,BLACK,ON)
# Dialog box color
dialog_color = (BLACK,WHITE,OFF)
# Dialog box title color
title_color = (MAGENTA,WHITE,OFF)
# Dialog box border color
border_color = (WHITE,WHITE,ON)
# Active button color
button_active_color = (WHITE,MAGENTA,OFF)
# Inactive button color
button_inactive_color = dialog_color
# Active button key color
button_key_active_color = button_active_color
# Inactive button key color
button_key_inactive_color = (RED,WHITE,OFF)
# Active button label color
button_label_active_color = (YELLOW,MAGENTA,ON)
# Inactive button label color
button_label_inactive_color = (BLACK,WHITE,OFF)
# Input box color
inputbox_color = dialog_color
# Input box border color
inputbox_border_color = dialog_color
# Search box color
searchbox_color = dialog_color
# Search box title color
searchbox_title_color = title_color
# Search box border color
searchbox_border_color = border_color
# File position indicator color
position_indicator_color = title_color
# Menu box color
menubox_color = dialog_color
# Menu box border color
menubox_border_color = border_color
# Item color
item_color = dialog_color
# Selected item color
item_selected_color = button_active_color
# Tag color
tag_color = title_color
# Selected tag color
tag_selected_color = button_label_active_color
# Tag key color
tag_key_color = button_key_inactive_color
# Selected tag key color
tag_key_selected_color = (RED,MAGENTA,ON)
# Check box color
check_color = dialog_color
# Selected check box color
check_selected_color = button_active_color
# Up arrow color
uarrow_color = (MAGENTA,WHITE,ON)
# Down arrow color
darrow_color = uarrow_color
# Item help-text color
itemhelp_color = (WHITE,BLACK,OFF)
# Active form text color
form_active_text_color = button_active_color
# Form text color
form_text_color = (WHITE,CYAN,ON)
# Readonly form item color
form_item_readonly_color = (CYAN,WHITE,ON)
# Dialog box gauge color
gauge_color = title_color
# Dialog box border2 color
border2_color = dialog_color
# Input box border2 color
inputbox_border2_color = dialog_color
# Search box border2 color
searchbox_border2_color = dialog_color
# Menu box border2 color
menubox_border2_color = dialog_color


@ -1,21 +0,0 @@

┌────────────────────────────────────────────┐
│ _____ ____ _ _ ___ _ _ │
│|_ _| | _ \\ ___ | |_ / |( _ ) / / |│
│ | |_____| |_) / _ \\| __| | |/ _ \\ | | |│
│ | |_____| __/ (_) | |_ | | (_) || | |│
│ |_| |_| \\___/ \\__| |_|\\___(_)_|_|│
│ │
└────────────────────────────────────────────┘
,---- [ \n ] [ \d ] [ \t ]
|
| IP:
| SSH:
| WEB:
| ADMIN:
|
`----


@ -1,144 +0,0 @@
#
# Run-time configuration file for dialog
#
# Automatically generated by "dialog --create-rc <file>"
#
#
# Types of values:
#
# Number - <number>
# String - "string"
# Boolean - <ON|OFF>
# Attribute - (foreground,background,highlight?)
# Set aspect-ratio.
aspect = 0
# Set separator (for multiple widgets output).
separate_widget = ""
# Set tab-length (for textbox tab-conversion).
tab_len = 0
# Make tab-traversal for checklist, etc., include the list.
visit_items = OFF
# Shadow dialog boxes? This also turns on color.
use_shadow = ON
# Turn color support ON or OFF
use_colors = ON
# Screen color
screen_color = (WHITE,MAGENTA,ON)
# Shadow color
shadow_color = (BLACK,BLACK,ON)
# Dialog box color
dialog_color = (BLACK,WHITE,OFF)
# Dialog box title color
title_color = (MAGENTA,WHITE,OFF)
# Dialog box border color
border_color = (WHITE,WHITE,ON)
# Active button color
button_active_color = (WHITE,MAGENTA,OFF)
# Inactive button color
button_inactive_color = dialog_color
# Active button key color
button_key_active_color = button_active_color
# Inactive button key color
button_key_inactive_color = (RED,WHITE,OFF)
# Active button label color
button_label_active_color = (YELLOW,MAGENTA,ON)
# Inactive button label color
button_label_inactive_color = (BLACK,WHITE,OFF)
# Input box color
inputbox_color = dialog_color
# Input box border color
inputbox_border_color = dialog_color
# Search box color
searchbox_color = dialog_color
# Search box title color
searchbox_title_color = title_color
# Search box border color
searchbox_border_color = border_color
# File position indicator color
position_indicator_color = title_color
# Menu box color
menubox_color = dialog_color
# Menu box border color
menubox_border_color = border_color
# Item color
item_color = dialog_color
# Selected item color
item_selected_color = button_active_color
# Tag color
tag_color = title_color
# Selected tag color
tag_selected_color = button_label_active_color
# Tag key color
tag_key_color = button_key_inactive_color
# Selected tag key color
tag_key_selected_color = (RED,MAGENTA,ON)
# Check box color
check_color = dialog_color
# Selected check box color
check_selected_color = button_active_color
# Up arrow color
uarrow_color = (MAGENTA,WHITE,ON)
# Down arrow color
darrow_color = uarrow_color
# Item help-text color
itemhelp_color = (WHITE,BLACK,OFF)
# Active form text color
form_active_text_color = button_active_color
# Form text color
form_text_color = (WHITE,CYAN,ON)
# Readonly form item color
form_item_readonly_color = (CYAN,WHITE,ON)
# Dialog box gauge color
gauge_color = title_color
# Dialog box border2 color
border2_color = dialog_color
# Input box border2 color
inputbox_border2_color = dialog_color
# Search box border2 color
searchbox_border2_color = dialog_color
# Menu box border2 color
menubox_border2_color = dialog_color

File diff suppressed because it is too large


@ -1,3 +1,3 @@
#!/bin/bash
plymouth --quit
#plymouth --quit
openvt -f -w -s /root/installer/wrapper.sh


@ -1,6 +1,6 @@
default install
label install
menu label ^T-Pot 18.11
menu label ^T-Pot 19.03 (based on Debian Sid)
menu default
kernel linux
append vga=788 initrd=initrd.gz console-setup/ask_detect=true --


@ -13,7 +13,7 @@ d-i localechooser/preferred-locale string en_US.UTF-8
######################
### Keyboard Selection
######################
#d-i console-setup/ask_detect boolean true
d-i console-setup/ask_detect boolean true
#d-i keyboard-configuration/layoutcode string de
d-i console-setup/detected note
@ -25,10 +25,10 @@ d-i console-setup/detected note
#########################
### Network Configuration
#########################
d-i netcfg/do_not_use_netplan true
#d-i netcfg/choose_interface select auto
#d-i netcfg/dhcp_timeout string 60
d-i netcfg/choose_interface select auto
d-i netcfg/dhcp_timeout string 60
d-i netcfg/get_hostname string t-pot
d-i netcfg/get_domain string
###############
### Disk Layout
@ -70,10 +70,17 @@ d-i user-setup/encrypt-home boolean false
########################################
### Country Mirror & Proxy Configuration
########################################
d-i mirror/country string manual
d-i mirror/http/hostname string archive.ubuntu.com
d-i mirror/http/directory string /ubuntu
d-i mirror/http/proxy string
#d-i mirror/country string manual
#d-i mirror/http/hostname string deb.debian.org
#d-i mirror/http/directory string /debian
#d-i mirror/http/proxy string
###################
# Suite to install
###################
#d-i mirror/suite string unstable
#d-i mirror/suite string testing
#d-i mirror/udeb/suite string testing
###########################
### Skip Grub Configuration
@ -81,6 +88,7 @@ d-i mirror/http/proxy string
#d-i grub-installer/confirm boolean true
#d-i grub-installer/only_debian boolean true
#d-i grub-installer/with_other_os boolean true
#d-i grub-installer/bootdev string default
d-i grub-installer/skip boolean true
d-i lilo-installer/skip boolean true
@ -91,17 +99,18 @@ d-i lilo-installer/skip boolean true
d-i clock-setup/utc boolean true
d-i time/zone string UTC
d-i clock-setup/ntp boolean true
d-i clock-setup/ntp-server string ntp.ubuntu.com
d-i clock-setup/ntp-server string debian.pool.ntp.org
##################
### Package Groups
##################
tasksel tasksel/first multiselect ubuntu-server
tasksel tasksel/first multiselect ssh-server
########################
### Package Installation
########################
d-i pkgsel/include string apache2-utils apparmor apt-transport-https aufs-tools bash-completion build-essential ca-certificates cgroupfs-mount cockpit cockpit-docker curl debconf-utils dialog dnsutils docker.io docker-compose dstat ethtool fail2ban genisoimage git glances grc html2text htop ifupdown iptables iw jq libcrack2 libltdl7 lm-sensors man mosh multitail net-tools npm ntp openssh-server openssl pass prips software-properties-common syslinux psmisc pv python-pip unzip vim wireless-tools wpasupplicant
d-i pkgsel/include string apache2-utils curl dialog figlet git grc libcrack2 libpq-dev lsb-release netselect-apt net-tools software-properties-common toilet
popularity-contest popularity-contest/participate boolean false
#################
### Update Policy
@ -120,8 +129,12 @@ d-i debian-installer/splash boolean false
d-i preseed/late_command string \
in-target apt-get -y install grub-pc; \
in-target grub-install --force $(debconf-get partman-auto/disk); \
update-dev; \
in-target update-grub; \
in-target git clone https://github.com/dtag-dev-sec/tpotce /opt/tpot; \
in-target git clone --depth=1 https://github.com/dtag-dev-sec/tpotce /opt/tpot; \
in-target sed -i 's/allow-hotplug/auto/g' /etc/network/interfaces; \
#in-target apt-get -y remove exim4-base; \
#in-target apt-get -y autoremove; \
cp /target/opt/tpot/iso/installer/rc.local.install /target/etc/rc.local; \
cp /target/opt/tpot/iso/installer -R /target/root/;


@ -2,14 +2,14 @@
# Set TERM, DIALOGRC
export TERM=linux
export DIALOGRC=/etc/dialogrc
# Let's define some global vars
myBACKTITLE="T-Pot - ISO Creator"
# If you need latest hardware support, try using the hardware enablement (hwe) ISO, usually released later in time
# myUBUNTULINK="http://archive.ubuntu.com/ubuntu/dists/bionic/main/installer-amd64/current/images/hwe-netboot/mini.iso"
myUBUNTULINK="http://archive.ubuntu.com/ubuntu/dists/bionic/main/installer-amd64/current/images/netboot/mini.iso"
myUBUNTUISO="mini.iso"
#myMINIISOLINK="http://ftp.debian.org/debian/dists/testing/main/installer-amd64/current/images/netboot/mini.iso"
#myMINIISOLINK="https://d-i.debian.org/daily-images/amd64/daily/netboot/mini.iso"
# For stability reasons Debian Sid installation is built on a stable installer
myMINIISOLINK="http://ftp.debian.org/debian/dists/stretch/main/installer-amd64/current/images/netboot/mini.iso"
myMINIISO="mini.iso"
myTPOTISO="tpot.iso"
myTPOTDIR="tpotiso"
myTPOTSEED="iso/preseed/tpot.seed"
@ -49,9 +49,6 @@ if [ "$myINST" != "" ]
done
fi
# Let's load dialog color theme
cp host/etc/dialogrc /etc/
# Let's clean up at the end or if something goes wrong ...
function fuCLEANUP {
rm -rf $myTMP $myTPOTDIR $myPFXFILE $myNTPCONFFILE $myCONF_FILE
@ -81,7 +78,7 @@ function valid_ip()
}
# Let's ask if the user wants to run the script ...
dialog --backtitle "$myBACKTITLE" --title "[ Continue? ]" --yesno "\nDownload latest supported Ubuntu Mini ISO and build the T-Pot Install Image." 8 50
dialog --backtitle "$myBACKTITLE" --title "[ Continue? ]" --yesno "\nDownload latest supported Debian Mini ISO and build the T-Pot Install Image." 8 50
mySTART=$?
if [ "$mySTART" = "1" ];
then
@ -207,18 +204,18 @@ if [ "$myCONF_PROXY_USE" == "0" ] || [ "$myCONF_PFX_USE" == "0" ] || [ "$myCONF_
echo "myCONF_NTP_CONF_FILE=\"/root/installer/ntp.conf\"" >> $myCONF_FILE
fi
# Let's download Ubuntu Minimal ISO
if [ ! -f $myUBUNTUISO ]
# Let's download Debian Minimal ISO
if [ ! -f $myMINIISO ]
then
wget $myUBUNTULINK --progress=dot 2>&1 | awk '{print $7+0} fflush()' | dialog --backtitle "$myBACKTITLE" --title "[ Downloading Ubuntu ... ]" --gauge "" 5 70;
echo 100 | dialog --backtitle "$myBACKTITLE" --title "[ Downloading Ubuntu ... Done! ]" --gauge "" 5 70;
wget $myMINIISOLINK --progress=dot 2>&1 | awk '{print $7+0} fflush()' | dialog --backtitle "$myBACKTITLE" --title "[ Downloading Debian ... ]" --gauge "" 5 70;
echo 100 | dialog --backtitle "$myBACKTITLE" --title "[ Downloading Debian ... Done! ]" --gauge "" 5 70;
else
dialog --infobox "Using previously downloaded .iso ..." 3 50;
fi
# Let's loop mount it and copy all contents
mkdir -p $myTMP $myTPOTDIR
mount -o loop $myUBUNTUISO $myTMP
mount -o loop $myMINIISO $myTMP
rsync -a $myTMP/ $myTPOTDIR
umount $myTMP
@ -279,4 +276,6 @@ do
fi
done
dialog --clear
exit 0


@ -76,8 +76,8 @@ echo
# Let's check for version
function fuCHECK_VERSION () {
local myMINVERSION="18.04.0"
local myMASTERVERSION="18.11.0"
local myMINVERSION="19.03.0"
local myMASTERVERSION="19.03.0"
echo
echo "### Checking for version tag ..."
if [ -f "version" ];
@ -168,7 +168,8 @@ echo
}
function fuUPDATER () {
local myPACKAGES="apache2-utils apparmor apt-transport-https aufs-tools bash-completion build-essential ca-certificates cgroupfs-mount cockpit cockpit-docker curl debconf-utils dialog dnsutils docker.io docker-compose dstat ethtool fail2ban genisoimage git glances grc html2text htop ifupdown iptables iw jq libcrack2 libltdl7 lm-sensors man mosh multitail net-tools npm ntp openssh-server openssl pass prips software-properties-common syslinux psmisc pv python-pip unattended-upgrades unzip vim wireless-tools wpasupplicant"
export DEBIAN_FRONTEND=noninteractive
local myPACKAGES="apache2-utils apparmor apt-transport-https aufs-tools bash-completion build-essential ca-certificates cgroupfs-mount cockpit cockpit-docker console-setup console-setup-linux curl debconf-utils dialog dnsutils docker.io docker-compose dstat ethtool fail2ban figlet genisoimage git glances grc haveged html2text htop iptables iw jq kbd libcrack2 libltdl7 man mosh multitail netselect-apt net-tools npm ntp openssh-server openssl pass prips software-properties-common syslinux psmisc pv python-pip toilet unattended-upgrades unzip vim wget wireless-tools wpasupplicant"
echo "### Now upgrading packages ..."
dpkg --configure -a
apt-get -y autoclean
@ -185,18 +186,17 @@ npm install "https://github.com/taskrabbit/elasticsearch-dump" -g
pip install --upgrade pip
hash -r
pip install --upgrade elasticsearch-curator yq
wget https://github.com/bcicen/ctop/releases/download/v0.7.1/ctop-0.7.1-linux-amd64 -O /usr/bin/ctop && chmod +x /usr/bin/ctop
apt-get -y purge exim4-base mailutils
apt-mark hold exim4-base mailutils
echo
echo "### Now replacing T-Pot related config files on host"
cp host/etc/systemd/* /etc/systemd/system/
cp host/etc/issue /etc/
systemctl daemon-reload
echo
# Ensure some defaults
echo "### Ensure some T-Pot defaults with regard to some folders, permissions and configs."
sed -i 's#ListenStream=9090#ListenStream=64294#' /lib/systemd/system/cockpit.socket
sed -i '/^port/Id' /etc/ssh/sshd_config
echo "Port 64295" >> /etc/ssh/sshd_config
echo
@ -213,6 +213,7 @@ mkdir -p /data/adbhoney/downloads /data/adbhoney/log \
/data/honeytrap/log/ /data/honeytrap/attacks/ /data/honeytrap/downloads/ \
/data/glutton/log \
/data/heralding/log \
/data/honeypy/log \
/data/mailoney/log \
/data/medpot/log \
/data/nginx/log \
@ -234,10 +235,17 @@ echo "### Now pulling latest docker images"
echo "######$myBLUE This might take a while, please be patient!$myWHITE"
fuPULLIMAGES 2>&1>/dev/null
fuREMOVEOLDIMAGES "1804"
#fuREMOVEOLDIMAGES "1804"
echo "### If you made changes to tpot.yml please ensure to add them again."
echo "### We stored the previous version as backup in /root/."
echo "### Done, please reboot."
echo "### Some updates may need an import of the latest Kibana objects as well."
echo "### Download the latest objects here if they recently changed:"
echo "### https://raw.githubusercontent.com/dtag-dev-sec/tpotce/master/etc/objects/kibana_export.json.zip"
echo "### Export and import the objects easily through the Kibana WebUI:"
echo "### Go to Kibana > Management > Saved Objects > Export / Import"
echo "### All objects will be overwritten upon import, make sure to run an export first."
echo
echo "### Please reboot."
echo
}
@ -267,7 +275,7 @@ fi
fuCHECK_VERSION
fuCONFIGCHECK
fuCHECKINET "https://index.docker.io https://github.com https://pypi.python.org https://ubuntu.com"
fuCHECKINET "https://index.docker.io https://github.com https://pypi.python.org https://debian.org"
fuSTOP_TPOT
fuBACKUP
fuSELFUPDATE "$0" "$@"

Some files were not shown because too many files have changed in this diff