Release 22.04.0 code to master
Prepping for T-Pot 22.04.0 release.
356
CHANGELOG.md
|
@ -1,323 +1,45 @@
|
|||
# Changelog
|
||||
# Release Notes / Changelog
|
||||
T-Pot 22.04.0 is probably the most feature-rich release ever provided, with long awaited (wanted!) features readily available after installation.
|
||||
|
||||
## 20210222
|
||||
- **New Release 20.06.2**
|
||||
- **Countless Cloud Contributions**
|
||||
- Thanks to @shaderecker
|
||||
## New Features
|
||||
* **Distributed** Installation with **HIVE** and **HIVE_SENSOR**
|
||||
* **ARM64** support for all provided Docker images
|
||||
* **GeoIP Attack Map** visualizing Live Attacks on a dedicated webpage
|
||||
* **Kibana Live Attack Map** visualizing Live Attacks from different **HIVE_SENSORS**
|
||||
* **Blackhole** is a script that adds blackhole routes for known mass scanners in an attempt to avoid detection
|
||||
* **Elasticvue** is a web front end for browsing and interacting with an Elasticsearch cluster
|
||||
* **Ddospot** a honeypot for tracking and monitoring UDP-based Distributed Denial of Service (DDoS) attacks
|
||||
* **Endlessh** is an SSH tarpit that very slowly sends an endless, random SSH banner
|
||||
* **HellPot** is an endless honeypot based on Heffalump that sends unruly HTTP bots to hell
|
||||
* **qHoneypots** 25 honeypots in a single container for monitoring network traffic, bot activities, and username / password credentials
|
||||
* **Redishoneypot** is a honeypot mimicking some of Redis' functions
|
||||
* **SentryPeer** a dedicated SIP honeypot
|
||||
* **Index Lifecycle Management** for Elasticsearch indices is now being used
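Index Lifecycle Management replaces the Curator-based housekeeping of earlier releases. As a quick way to see it in action you can query Elasticsearch directly; a minimal sketch, assuming Elasticsearch is reachable on T-Pot's default port 64298 and `jq` is installed:

```bash
# List all ILM policies known to the cluster
curl -s http://127.0.0.1:64298/_ilm/policy | jq 'keys'
# Show which ILM phase each logstash index is currently in
curl -s "http://127.0.0.1:64298/logstash-*/_ilm/explain?pretty"
```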
|
||||
|
||||
## 20210219
|
||||
- **Rebuild Snare, Tanner, Redis, Phpox**
|
||||
- Rebuild images to their latest masters and upgrade Alpine OS to 3.13 where possible.
|
||||
- **Bump Elastic Stack to 7.11.1**
|
||||
- Upgrade Elastic Stack Images to 7.11.1 and update License Info to reflect the new Elastic License.
|
||||
- Prepare for new release.
|
||||
## Upgrades
|
||||
* **Debian 11.x** is now being used for the T-Pot ISO images and is required for post-installs
|
||||
* **Elastic Stack 8.x** is now provided as Docker images
|
||||
|
||||
## 20210218
|
||||
- **Rebuild Conpot, EWSPoster, Cowrie, Glutton, Dionaea**
|
||||
- Rebuild images to their latest masters and upgrade Alpine OS to 3.13 where possible.
|
||||
## Updates
|
||||
* **Honeypots** and **tools** were updated to their latest masters and releases
|
||||
* Updates will be provided continuously through Docker image updates
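Since updates now arrive as rebuilt Docker images, refreshing a running installation does not require a reinstall; a minimal sketch, assuming the default paths and the `tpot` systemd service:

```bash
# Stop T-Pot, pull the latest images referenced by the active compose file, restart
systemctl stop tpot
docker-compose -f /opt/tpot/etc/tpot.yml pull
systemctl start tpot
```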
|
||||
|
||||
## 20210216
|
||||
- **Bump Heralding to 1.0.7**
|
||||
- Rebuild and upgrade image to 1.0.7 and upgrade Alpine OS to 3.13.
|
||||
- Enable SMTPS for Heralding.
|
||||
- **Rebuild IPPHoney, Fatt, EWSPoster, Spiderfoot**
|
||||
- Rebuild images to their latest masters and upgrade Alpine OS to 3.13 where possible.
|
||||
- Upgrade Spiderfoot to 3.3
|
||||
## Breaking Changes
|
||||
* For security reasons all Py2.x honeypots requiring PyPI packages have been removed: **HoneyPy**, **HoneySAP** and **RDPY**
|
||||
* If you are upgrading from a previous version of T-Pot (20.06.x) you need to import the new Kibana objects or some of the functionality will be broken or unavailable
|
||||
* **Cyberchef** is now part of the Nginx Docker image and no longer provided as an individual image
|
||||
* **ElasticSearch Head** is superseded by **Elasticvue** and is now part of the Nginx Docker image
|
||||
* **Heimdall** is no longer supported and superseded by a new Bento-based landing page
|
||||
* **Elasticsearch Curator** is no longer supported and superseded by **Index Lifecycle Policies**, available through Kibana.
|
||||
|
||||
## 20210215
|
||||
- **Rebuild Dicompot, p0f, Medpot, Honeysap, Heimdall, Elasticpot, Citrixhoneypot, Ciscoasa**
|
||||
- Rebuild images to their latest masters and upgrade Alpine OS to 3.13 where possible.
|
||||
# Thanks & Credits
|
||||
* @ghenry, for some fun late night debugging and of course SentryPeer!
|
||||
* @giga-a, for adding much appreciated features (i.e. JSON logging, X-Forwarded-For, etc.) and of course qHoneypots!
|
||||
* @sp3t3rs, @trixam, for their backend and ews support!
|
||||
* @tadashi-oya, for spotting some errors and propose fixes!
|
||||
* @tmariuss, @shaderecker for their cloud contributions!
|
||||
* @vorband, for much appreciated and helpful insights regarding the GeoIP Attack Map!
|
||||
* @yunginnanet, for not giving up on squashing a bug and of course Hellpot!
|
||||
|
||||
## 20210212
|
||||
- **Rebuild Cyberchef, Adbhoney, Elastic Stack**
|
||||
- Rebuild images to their latest masters and upgrade Alpine OS to 3.13 where possible.
|
||||
- Bump Elastic Stack to 7.11.0
|
||||
- Bump Cyberchef to 9.27.0
|
||||
|
||||
## 20210119
|
||||
- **Bump Dionaea to 0.11.0**
|
||||
- Upgrade Dionaea to 0.11.0, rebuild image and upgrade Alpine OS to 3.13.
|
||||
|
||||
## 20210106
|
||||
- **Update Internet IF retrieval**
|
||||
- To be consistent with @adepasquale PR #746 fatt, glutton and p0f Dockerfiles were updated accordingly.
|
||||
- Merge PR #746 from @adepasquale, thank you!
|
||||
|
||||
## 20201228
|
||||
- **Fix broken SQlite DB**
|
||||
- Fix a broken `app.sqlite` in Heimdall
|
||||
- **Avoid ghcr.io because of slow transfers**
|
||||
- **Remove netselect-apt**
|
||||
- Causes too many unpredictable errors, with #733 as the latest example
|
||||
|
||||
## 20201210
|
||||
- **Bump Elastic Stack 7.10.1, EWSPoster to 1.12**
|
||||
|
||||
## 20201202
|
||||
- **Update Elastic Stack to 7.10.0**
|
||||
|
||||
## 20201130
|
||||
- **Suricata, use suricata-update for rule management**
|
||||
- As a bonus we can now run "suricata-update" using docker-exec, triggering both a rule update and a Suricata rule reload.
|
||||
- Thanks to @adepasquale!
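A minimal sketch of the workflow described in the 20201130 entry above, assuming the Suricata container is simply named `suricata`:

```bash
# Fetch the latest rule sets inside the running container; Suricata reloads its rules afterwards
docker exec -it suricata suricata-update
```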
|
||||
|
||||
## 20201126
|
||||
- **Suricata, update suricata.yaml for 6.x**
|
||||
- Merge in the latest updates from suricata-6.0.x while at the same time keeping the custom T-Pot configuration.
|
||||
- Thanks to @adepasquale!
|
||||
- **Bump Cowrie to 2.2.0**
|
||||
|
||||
## 20201028
|
||||
- **Bump Suricata to 5.0.4, Spiderfoot to 3.2.1, Dionaea to 0.9.2, IPPHoney, Heralding, Conpot to latest masters**
|
||||
|
||||
## 20201027
|
||||
- **Bump Dicompot to latest master, Elastic Stack to 7.9.3**
|
||||
|
||||
## 20201005
|
||||
- **Bump Elastic Stack to 7.9.2**
|
||||
- @brianlechthaler, thanks for PR #706, which had issues regarding Elastic Stack and resulted in reverting to 7.9.1
|
||||
|
||||
## 20200904
|
||||
- **Release T-Pot 20.06.1**
|
||||
- Github offers a free Docker Container Registry for public packages. For our Open Source projects we want to make sure to have everything in one place and are thus moving from Docker Hub to the GitHub Container Registry.
|
||||
- **Bump Elastic Stack**
|
||||
- Update the Elastic Stack to 7.9.1.
|
||||
- **Rebuild Images**
|
||||
- All docker images were rebuilt based on the latest (and stable running) versions of the tools and honeypots and have been pinned to specific Alpine / Debian versions and git commits so rebuilds are less likely to fail.
|
||||
- **Cleaning up**
|
||||
- Clean up old references and links.
|
||||
|
||||
## 20200630
|
||||
- **Release T-Pot 20.06**
|
||||
- After 4 months of public testing with the NextGen edition T-Pot 20.06 can finally be released.
|
||||
- **Debian Buster**
|
||||
- With the release of Debian Buster T-Pot now has access to all packages required right out of the box.
|
||||
- **Add new honeypots**
|
||||
- [Dicompot](https://github.com/nsmfoo/dicompot) by @nsmfoo is a low interaction honeypot for the DICOM protocol, the international standard for processing medical imaging information. Together with Medpot, which supports the HL7 protocol, T-Pot now offers a Medical installation type.
|
||||
- [Honeysap](https://github.com/SecureAuthCorp/HoneySAP) by SecureAuthCorp is a low interaction honeypot for SAP services, in T-Pot's case configured for the SAP router.
|
||||
- [Elasticpot](https://gitlab.com/bontchev/elasticpot) by Vesselin Bontchev replaces ElasticpotPY as a low interaction honeypot for Elasticsearch with more features, plugins and scripted responses.
|
||||
- **Rebuild Images**
|
||||
- All docker images were rebuilt based on the latest (and stable running) versions of the tools and honeypots. Most of the images now run on Alpine 3.12 / Debian Buster. However some honeypots / tools still require Alpine 3.11 / 3.10 to run properly.
|
||||
- **Install Types**
|
||||
- All docker-compose files (`/opt/tpot/etc/compose`) were remixed and most of the NextGen honeypots are now available in Standard.
|
||||
- There is now a **Medical** Installation Type with Dicompot and Medpot which will be of most interest for medical institutions to get started with T-Pot.
|
||||
- **Update Tools**
|
||||
- Connecting to T-Pot via `https://<ip>:64297` now brings you to the T-Pot Landing Page, which is based on Heimdall and the latest NGINX enforcing TLS 1.3.
|
||||
- The ELK stack was updated to 7.8.0 and stripped down to the necessary core functions (where possible) for T-Pot while keeping ELK RAM requirements to a minimum (8GB of RAM is recommended now). The number of index pattern fields was reduced to **697** which increases performance significantly. There are **22** Kibana Dashboards, **397** Kibana Visualizations and **24** Kibana Searches readily available to cover all your needs to get started and familiar with T-Pot.
|
||||
- Cyberchef was updated to 9.21.0.
|
||||
- Elasticsearch Head was updated to the latest version available on GitHub.
|
||||
- Spiderfoot was updated to latest 3.1 dev.
|
||||
- **Landing Page**
|
||||
- After logging into T-Pot via web you are now greeted with a beautifully designed landing page.
|
||||
- **Countless Tweaks and improvements**
|
||||
- Under the hood lots of tiny tweaks, improvements and a few bugfixes will increase your overall experience with T-Pot.
|
||||
|
||||
## 20200316
|
||||
- **Move from Sid to Stable**
|
||||
- Debian Stable now has all the packages and versions we need for T-Pot. As a consequence we can now move to the `stable` branch.
|
||||
|
||||
## 20200310
|
||||
- **Add 2FA to Cockpit**
|
||||
- Just run `2fa.sh` to enable two factor authentication in Cockpit.
|
||||
- **Find fastest mirror with netselect-apt**
|
||||
- Netselect-apt will find the fastest mirror close to you (outgoing ICMP required).
|
||||
|
||||
## 20200309
|
||||
- **Bump Nextgen to 20.06**
|
||||
- All NextGen images have been rebuilt to their latest master.
|
||||
- Elastic Stack bumped to 7.6.1 (Elasticsearch will need at least 2048MB of RAM now, T-Pot at least 8GB of RAM) and tweaked to accommodate the changes of 7.x.
|
||||
- Fixed errors in Tanner / Snare which will now handle downloads of malware via SSL and store them correctly (thanks to @afeena).
|
||||
- Fixed errors in Heralding which will now improve on RDP connections (thanks to @johnnykv, @realsdx).
|
||||
- Fixed error in honeytrap which will now build on Debian Buster (thanks to @tillmannw).
|
||||
- Mailoney is now logging in JSON format (thanks to @monsherko).
|
||||
- Base T-Pot landing page on Heimdall.
|
||||
- Tweaking of tools and some minor bug fixing
|
||||
|
||||
## 20200116
|
||||
- **Bump ELK to latest 6.8.6**
|
||||
- **Update ISO image to fix upstream bug of missing kernel modules**
|
||||
- **Include dashboards for CitrixHoneypot**
|
||||
- Please run `/opt/tpot/update.sh` for the necessary modifications, omit the reboot and run `/opt/tpot/bin/tped.sh` to (re-)select the NextGen installation type.
|
||||
- This update requires the latest Kibana objects as well. Download the latest from https://raw.githubusercontent.com/telekom-security/tpotce/master/etc/objects/kibana_export.json.zip, unzip and import the objects within Kibana WebUI > Management > Saved Objects > Export / Import. All objects will be overwritten upon import, so make sure to run an export first.
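The download part of that procedure can be done on the T-Pot host itself; a minimal sketch, assuming `wget` and `unzip` are available (the import still happens in the Kibana WebUI):

```bash
# Fetch and unpack the latest Kibana objects, then import them via
# Kibana WebUI > Management > Saved Objects
wget https://raw.githubusercontent.com/telekom-security/tpotce/master/etc/objects/kibana_export.json.zip
unzip kibana_export.json.zip
```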
|
||||
|
||||
## 20200115
|
||||
- **Prepare integration of CitrixHoneypot**
|
||||
- Prepare integration of [CitrixHoneypot](https://github.com/MalwareTech/CitrixHoneypot) by MalwareTech
|
||||
- Integration into ELK is still open
|
||||
- Please run `/opt/tpot/update.sh` for the necessary modifications, omit the reboot and run `/opt/tpot/bin/tped.sh` to (re-)select the NextGen installation type.
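For reference, the steps mentioned above boil down to two commands; a sketch assuming a standard installation under `/opt/tpot`:

```bash
# Apply the necessary modifications (omit the reboot when asked) ...
/opt/tpot/update.sh
# ... then (re-)select the NextGen installation type
/opt/tpot/bin/tped.sh
```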
|
||||
|
||||
## 20191224
|
||||
- **Use pigz, optimize logrotate.conf**
|
||||
- Use `pigz` for faster archiving, especially with regard to high volumes of logs - Thanks to @workandresearchgithub!
|
||||
- Optimize `logrotate.conf` to improve archiving speed and get rid of multiple compression, also introduce `pigz`.
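Both changes revolve around pigz being a drop-in, multi-threaded replacement for gzip; a minimal sketch of the archiving side, assuming pigz is installed:

```bash
# Archive the /data folder using all CPU cores for compression
tar -I pigz -cf "data_$(date +%Y%m%d).tgz" /data
```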
|
||||
|
||||
## 20191121
|
||||
- **Bump ADBHoney to latest master**
|
||||
- Use latest version of ADBHoney, which now fully supports Python 3.x - Thanks to @huuck!
|
||||
|
||||
## 20191113, 20191104, 20191103, 20191028
|
||||
- **Switch to Debian 10 on OTC, Ansible Improvements**
|
||||
- OTC now supporting Debian 10 - Thanks to @shaderecker!
|
||||
|
||||
## 20191028
|
||||
- **Fix an issue with pip3, yq**
|
||||
- `yq` needs rehashing.
|
||||
|
||||
## 20191026
|
||||
- **Remove cockpit-pcp**
|
||||
- `cockpit-pcp` floods swap for some reason - removing for now.
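On existing installations the package can be dropped manually; a minimal sketch, assuming a Debian-based host:

```bash
# Remove cockpit-pcp and any now-unused dependencies
apt-get -y remove --purge cockpit-pcp
apt-get -y autoremove
```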
|
||||
|
||||
## 20191022
|
||||
- **Bump Suricata to 5.0.0**
|
||||
|
||||
## 20191021
|
||||
- **Bump Cowrie to 2.0.0**
|
||||
|
||||
## 20191016
|
||||
- **Tweak installer, pip3, Heralding**
|
||||
- Install `cockpit-pcp` right from the start for machine monitoring in cockpit.
|
||||
- Move installer and update script to use pip3.
|
||||
- Bump heralding to latest master (1.0.6) - Thanks @johnnykv!
|
||||
|
||||
## 20191015
|
||||
- **Tweaking, Bump glutton, unlock ES script**
|
||||
- Add `unlock.sh` to unlock ES indices in case of lockdown after disk quota has been reached (see the sketch after this list).
|
||||
- Prevent too much terminal logging from p0f and glutton since `daemon.log` was filled up.
|
||||
- Bump glutton to latest master now supporting payload_hex. Thanks to @glaslos.
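Conceptually, `unlock.sh` lifts the read-only block Elasticsearch typically sets once its flood-stage disk watermark is hit; a minimal sketch of that operation (an illustration, not the script itself), assuming Elasticsearch listens on T-Pot's default port 64298:

```bash
# After freeing disk space, remove the read-only block from all indices
curl -s -XPUT "http://127.0.0.1:64298/_all/_settings" \
  -H 'Content-Type: application/json' \
  -d '{"index.blocks.read_only_allow_delete": null}'
```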
|
||||
|
||||
## 20191002
|
||||
- **Merge**
|
||||
- Support Debian Buster images for AWS #454
|
||||
- Thank you @piffey
|
||||
|
||||
## 20190924
|
||||
- **Bump EWSPoster**
|
||||
- Supports Python 3.x
|
||||
- Thank you @Trixam
|
||||
|
||||
## 20190919
|
||||
- **Merge**
|
||||
- Handle non-interactive shells #454
|
||||
- Thank you @Oogy
|
||||
|
||||
## 20190907
|
||||
- **Logo tweaking**
|
||||
- Add QR logo
|
||||
|
||||
## 20190828
|
||||
- **Upgrades and rebuilds**
|
||||
- Bump Medpot, Nginx and Adbhoney to latest master
|
||||
- Bump ELK stack to 6.8.2
|
||||
- Rebuild Mailoney, Honeytrap, Elasticpot and Ciscoasa
|
||||
- Add 1080p T-Pot wallpaper for download
|
||||
|
||||
## 20190824
|
||||
- **Add some logo work**
|
||||
- Thanks to @thehadilps's suggestion adjusted social preview
|
||||
- Added 4k T-Pot wallpaper for download
|
||||
|
||||
## 20190823
|
||||
- **Fix for broken Fuse package**
|
||||
- Fuse package in upstream is broken
|
||||
- Adjust installer as workaround, fixes #442
|
||||
|
||||
## 20190816
|
||||
- **Upgrades and rebuilds**
|
||||
- Adjust Dionaea to avoid nmap detection, fixes #435 (thanks @iukea1)
|
||||
- Bump Tanner, Cyberchef, Spiderfoot and ES Head to latest master
|
||||
|
||||
## 20190815
|
||||
- **Bump ELK stack to 6.7.2**
|
||||
- Transition to 7.x must iterate slowly through previous versions to prevent changes breaking T-Pots
|
||||
|
||||
## 20190814
|
||||
- **Logstash Translation Maps improvement**
|
||||
- Download translation maps rather than running a git pull
|
||||
- Translation maps will now be bzip2 compressed to reduce traffic to a minimum
|
||||
- Fixes #432
|
||||
|
||||
## 20190802
|
||||
- **Add support for Buster as base image**
|
||||
- Install ISO is now based on Debian Buster
|
||||
- Installation upon Debian Buster is now supported
|
||||
|
||||
## 20190701
|
||||
- **Reworked Ansible T-Pot Deployment**
|
||||
- Transitioned from bash script to all Ansible
|
||||
- Reusable Ansible Playbook for OpenStack clouds
|
||||
- Example Showcase with our Open Telekom Cloud
|
||||
- Adaptable for other cloud providers
|
||||
|
||||
## 20190626
|
||||
- **HPFEEDS Opt-In commandline option**
|
||||
- Pass a hpfeeds config file as a commandline argument
|
||||
- hpfeeds config is saved in `/data/ews/conf/hpfeeds.cfg`
|
||||
- Update script restores hpfeeds config
|
||||
|
||||
## 20190604
|
||||
- **Finalize Fatt support**
|
||||
- Build visualizations, searches, dashboards
|
||||
- Rebuild index patterns
|
||||
- Some finishing touches
|
||||
|
||||
## 20190601
|
||||
- **Start supporting Fatt, remove Glastopf**
|
||||
- Build Dockerfile, Adjust logstash, installer, update and such.
|
||||
- Glastopf is no longer supported within T-Pot
|
||||
|
||||
## 20190528+20190531
|
||||
- **Increase total number of fields**
|
||||
- Adjust total number of fields for the logstash template from 1000 to 2000.
|
||||
|
||||
## 20190526
|
||||
- **Fix build for Cowrie**
|
||||
- Upstream changes required a new package `py-bcrypt`.
|
||||
|
||||
## 20190525
|
||||
- **Fix build for RDPY**
|
||||
- Building was prevented due to a cache error which lately occurs on Alpine if `apk` is run with the `--no-cache` option.
|
||||
|
||||
## 20190520
|
||||
- **Adjust permissions for /data folder**
|
||||
- Now it is possible to download files from `/data` using SCP, WINSCP or CyberDuck.
|
||||
|
||||
## 20190513
|
||||
- **Added Ansible T-Pot Deployment on Open Telekom Cloud**
|
||||
- Reusable Ansible Playbooks for all cloud providers
|
||||
- Example Showcase with our Open Telekom Cloud
|
||||
|
||||
## 20190511
|
||||
- **Add hptest script**
|
||||
- Quickly test if the honeypots are working with `hptest.sh <[ip,host]>` based on nmap.
|
||||
|
||||
## 20190508
|
||||
- **Add tsec / install user to tpot group**
|
||||
- So that users can easily download logs from the /data folder, the installer now adds the `tpot` user or the logged in user (`who am i`) to the tpot group via `usermod -a -G tpot <user>`. Also /data permissions will now be enforced to `770`, which is necessary for directory listings.
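On existing installations the same can be done by hand; a minimal sketch using the commands mentioned above:

```bash
# Add the currently logged in user to the tpot group and enforce /data permissions
usermod -a -G tpot "$(who am i | awk '{ print $1 }')"
chmod 770 -R /data
```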
|
||||
|
||||
## 20190502
|
||||
- **Fix KVPs**
|
||||
- Some KVPs for Cowrie changed and the tagcloud was not showing any values in the Cowrie dashboard.
|
||||
- New installations are not affected, however existing installations need to import the objects from /opt/tpot/etc/objects/kibana-objects.json.zip.
|
||||
- **Makeiso**
|
||||
- Move to Xorriso for building the ISO image.
|
||||
- This allows supporting most of the Debian based distros, i.e. Debian, MxLinux and Ubuntu.
|
||||
|
||||
## 20190428
|
||||
- **Rebuild ISO**
|
||||
- The install ISO needed a rebuild after some changes in the Debian mirrors.
|
||||
- **Disable Netselect**
|
||||
- After some reports in the issues that some Debian mirrors were not fully synced and thus some packages were unavailable, the netselect-apt feature was disabled.
|
||||
|
||||
## 20190406
|
||||
- **Fix for SSH**
|
||||
- In some situations the SSH Port was not written to a new line (thanks to @dpisano for reporting).
|
||||
- **Fix race condition for apt-fast**
|
||||
- Curl and wget need to be installed before apt-fast installation.
|
||||
|
||||
## 20190404
|
||||
- **Fix #332**
|
||||
- If T-Pot, contrary to the requirements, does not have full internet access, netselect-apt fails to determine the fastest mirror as it needs outgoing ICMP and UDP. Should netselect-apt fail, the default mirrors will be used.
|
||||
- **Improve install speed with apt-fast**
|
||||
- Migrating from a stable base install to Debian (Sid) requires downloading lots of packages. Depending on your geo location the download speed was already improved by introducing netselect-apt to determine the fastest mirror. With apt-fast the downloads will be even faster, as packages are downloaded not only in parallel but also with multiple connections per package.
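apt-fast is a drop-in wrapper around apt-get, so it is used with the familiar verbs; a minimal illustrative sketch (the installer invokes it for you):

```bash
# Same verbs as apt-get, but with parallel, multi-connection downloads
apt-fast update
apt-fast -y dist-upgrade
```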
|
||||
|
||||
`git log --date=format:"## %Y%m%d" --pretty=format:"%ad %n- **%s**%n - %b"`
|
||||
... and many others from the T-Pot community by opening valued issues and discussions, suggesting ideas and thus helping to improve T-Pot!
|
|
@ -1,12 +1,21 @@
|
|||
#!/bin/bash
|
||||
# Run as root only.
|
||||
myWHOAMI=$(whoami)
|
||||
if [ "$myWHOAMI" != "root" ]
|
||||
if [ "$myWHOAMI" != "root" ];
|
||||
then
|
||||
echo "Need to run as root ..."
|
||||
exit
|
||||
fi
|
||||
|
||||
if [ "$1" == "" ] || [ "$1" != "all" ] && [ "$1" != "base" ];
|
||||
then
|
||||
echo "Usage: backup_es_folders [all, base]"
|
||||
echo " all = backup all ES folder"
|
||||
echo " base = backup only Kibana index".
|
||||
echo
|
||||
exit
|
||||
fi
|
||||
|
||||
# Backup all ES relevant folders
|
||||
# Make sure ES is available
|
||||
myES="http://127.0.0.1:64298/"
|
||||
|
@ -25,7 +34,7 @@ myCOUNT=1
|
|||
myDATE=$(date +%Y%m%d%H%M)
|
||||
myELKPATH="/data/elk/data"
|
||||
myKIBANAINDEXNAME=$(curl -s -XGET ''$myES'_cat/indices/.kibana' | awk '{ print $4 }')
|
||||
myKIBANAINDEXPATH=$myELKPATH/nodes/0/indices/$myKIBANAINDEXNAME
|
||||
myKIBANAINDEXPATH=$myELKPATH/indices/$myKIBANAINDEXNAME
|
||||
|
||||
# Let's ensure normal operation on exit or if interrupted ...
|
||||
function fuCLEANUP {
|
||||
|
@ -42,5 +51,11 @@ sleep 2
|
|||
|
||||
# Backup DB in 2 flavors
|
||||
echo "### Now backing up Elasticsearch folders ..."
|
||||
tar cvfz "elkall_"$myDATE".tgz" $myELKPATH
|
||||
tar cvfz "elkbase_"$myDATE".tgz" $myKIBANAINDEXPATH
|
||||
if [ "$1" == "all" ];
|
||||
then
|
||||
tar cvfz "elkall_"$myDATE".tgz" $myELKPATH
|
||||
elif [ "$1" == "base" ];
|
||||
then
|
||||
tar cvfz "elkbase_"$myDATE".tgz" $myKIBANAINDEXPATH
|
||||
fi
|
||||
|
||||
|
|
109
bin/blackhole.sh
Executable file
|
@ -0,0 +1,109 @@
|
|||
#!/bin/bash
|
||||
|
||||
# Run as root only.
|
||||
myWHOAMI=$(whoami)
|
||||
if [ "$myWHOAMI" != "root" ]
|
||||
then
|
||||
echo "### Need to run as root ..."
|
||||
echo
|
||||
exit
|
||||
fi
|
||||
|
||||
# Disclaimer
|
||||
if [ "$1" == "" ];
|
||||
then
|
||||
echo "### Warning!"
|
||||
echo "### This script will download and add blackhole routes for known mass scanners in an attempt to decrease the chance of detection."
|
||||
echo "### IPs are neither curated or verified, use at your own risk!"
|
||||
echo "###"
|
||||
echo "### As long as <blackhole.sh del> is not executed the routes will be re-added on T-Pot start through </opt/tpot/bin/updateip.sh>."
|
||||
echo "### Check with <ip r> or <dps.sh> if blackhole is enabled."
|
||||
echo
|
||||
echo "Usage: blackhole.sh add (add blackhole routes)"
|
||||
echo " blackhole.sh del (delete blackhole routes)"
|
||||
echo
|
||||
exit
|
||||
fi
|
||||
|
||||
# QnD paths, files
|
||||
mkdir -p /etc/blackhole
|
||||
cd /etc/blackhole
|
||||
myFILE="mass_scanner.txt"
|
||||
myURL="https://raw.githubusercontent.com/stamparm/maltrail/master/trails/static/mass_scanner.txt"
|
||||
myBASELINE="500"
|
||||
# Alternatively, using less routes, but blocking complete /24 networks
|
||||
#myFILE="mass_scanner_cidr.txt"
|
||||
#myURL="https://raw.githubusercontent.com/stamparm/maltrail/master/trails/static/mass_scanner_cidr.txt"
|
||||
|
||||
# Calculate age of downloaded list, read IPs
|
||||
if [ -f "$myFILE" ];
|
||||
then
|
||||
myNOW=$(date +%s)
|
||||
myOLD=$(date +%s -r "$myFILE")
|
||||
myDAYS=$(( ($myNOW-$myOLD) / (60*60*24) ))
|
||||
echo "### Downloaded $myFILE list is $myDAYS days old."
|
||||
myBLACKHOLE_IPS=$(grep -o -P "\b(?:\d{1,3}\.){3}\d{1,3}\b" "$myFILE" | sort -u)
|
||||
fi
|
||||
|
||||
# Let's load ip list
|
||||
if [[ ! -f "$myFILE" && "$1" == "add" || "$myDAYS" -gt 30 ]];
|
||||
then
|
||||
echo "### Downloading $myFILE list."
|
||||
aria2c --allow-overwrite -s16 -x 16 "$myURL" && \
|
||||
myBLACKHOLE_IPS=$(grep -o -P "\b(?:\d{1,3}\.){3}\d{1,3}\b" "$myFILE" | sort -u)
|
||||
fi
|
||||
|
||||
myCOUNT=$(echo $myBLACKHOLE_IPS | wc -w)
|
||||
# Let's extract mass scanner IPs
|
||||
if [ "$myCOUNT" -lt "$myBASELINE" ] && [ "$1" == "add" ];
|
||||
then
|
||||
echo "### Something went wrong. Please check contents of /etc/blackhole/$myFILE."
|
||||
echo "### Aborting."
|
||||
echo
|
||||
exit
|
||||
elif [ "$(ip r | grep 'blackhole' -c)" -gt "$myBASELINE" ] && [ "$1" == "add" ];
|
||||
then
|
||||
echo "### Blackhole already enabled."
|
||||
echo "### Aborting."
|
||||
echo
|
||||
exit
|
||||
fi
|
||||
|
||||
# Let's add blackhole routes for all mass scanner IPs
|
||||
if [ "$1" == "add" ];
|
||||
then
|
||||
echo
|
||||
echo -n "Now adding $myCOUNT IPs to blackhole."
|
||||
for i in $myBLACKHOLE_IPS;
|
||||
do
|
||||
ip route add blackhole "$i"
|
||||
echo -n "."
|
||||
done
|
||||
echo
|
||||
echo "Added $(ip r | grep "blackhole" -c) IPs to blackhole."
|
||||
echo
|
||||
echo "### Remember!"
|
||||
echo "### As long as <blackhole.sh del> is not executed the routes will be re-added on T-Pot start through </opt/tpot/bin/updateip.sh>."
|
||||
echo "### Check with <ip r> or <dps.sh> if blackhole is enabled."
|
||||
echo
|
||||
exit
|
||||
fi
|
||||
|
||||
# Let's delete blackhole routes for all mass scanner IPs
|
||||
if [ "$1" == "del" ] && [ "$myCOUNT" -gt "$myBASELINE" ];
|
||||
then
|
||||
echo
|
||||
echo -n "Now deleting $myCOUNT IPs from blackhole."
|
||||
for i in $myBLACKHOLE_IPS;
|
||||
do
|
||||
ip route del blackhole "$i"
|
||||
echo -n "."
|
||||
done
|
||||
echo
|
||||
echo "$(ip r | grep 'blackhole' -c) IPs remaining in blackhole."
|
||||
echo
|
||||
rm "$myFILE"
|
||||
else
|
||||
echo "### Blackhole already disabled."
|
||||
echo
|
||||
fi
|
18
bin/clean.sh
|
@ -205,14 +205,6 @@ fuHONEYPOTS () {
|
|||
chown tpot:tpot /data/honeypots -R
|
||||
}
|
||||
|
||||
# Let's create a function to clean up and prepare honeypy data
|
||||
fuHONEYPY () {
|
||||
if [ "$myPERSISTENCE" != "on" ]; then rm -rf /data/honeypy/*; fi
|
||||
mkdir -p /data/honeypy/log
|
||||
chmod 770 /data/honeypy -R
|
||||
chown tpot:tpot /data/honeypy -R
|
||||
}
|
||||
|
||||
# Let's create a function to clean up and prepare honeysap data
|
||||
fuHONEYSAP () {
|
||||
if [ "$myPERSISTENCE" != "on" ]; then rm -rf /data/honeysap/*; fi
|
||||
|
@ -285,6 +277,14 @@ fuREDISHONEYPOT () {
|
|||
chown tpot:tpot /data/redishoneypot -R
|
||||
}
|
||||
|
||||
# Let's create a function to clean up and prepare sentrypeer data
|
||||
fuSENTRYPEER () {
|
||||
if [ "$myPERSISTENCE" != "on" ]; then rm -rf /data/sentrypeer/log; fi
|
||||
mkdir -p /data/sentrypeer/log
|
||||
chmod 770 /data/sentrypeer -R
|
||||
chown tpot:tpot /data/sentrypeer -R
|
||||
}
|
||||
|
||||
# Let's create a function to prepare spiderfoot db
|
||||
fuSPIDERFOOT () {
|
||||
mkdir -p /data/spiderfoot
|
||||
|
@ -356,7 +356,6 @@ if [ "$myPERSISTENCE" = "on" ];
|
|||
fuHELLPOT
|
||||
fuHONEYSAP
|
||||
fuHONEYPOTS
|
||||
fuHONEYPY
|
||||
fuHONEYTRAP
|
||||
fuIPPHONEY
|
||||
fuLOG4POT
|
||||
|
@ -365,6 +364,7 @@ if [ "$myPERSISTENCE" = "on" ];
|
|||
fuNGINX
|
||||
fuREDISHONEYPOT
|
||||
fuRDPY
|
||||
fuSENTRYPEER
|
||||
fuSPIDERFOOT
|
||||
fuSURICATA
|
||||
fuP0F
|
||||
|
|
|
@ -15,7 +15,7 @@ if [ "$(whoami)" != "root" ];
|
|||
fi
|
||||
}
|
||||
|
||||
function fuDEPLOY_POT () {
|
||||
function fuDEPLOY_SENSOR () {
|
||||
echo
|
||||
echo "###############################"
|
||||
echo "# Deploying to T-Pot Hive ... #"
|
||||
|
@ -24,7 +24,7 @@ echo
|
|||
sshpass -e ssh -4 -t -T -l "$MY_TPOT_USERNAME" -p 64295 "$MY_HIVE_IP" << EOF
|
||||
echo "$SSHPASS" | sudo -S bash -c 'useradd -m -s /sbin/nologin -G tpotlogs "$MY_HIVE_USERNAME";
|
||||
mkdir -p /home/"$MY_HIVE_USERNAME"/.ssh;
|
||||
echo "$MY_POT_PUBLICKEY" >> /home/"$MY_HIVE_USERNAME"/.ssh/authorized_keys;
|
||||
echo "$MY_SENSOR_PUBLICKEY" >> /home/"$MY_HIVE_USERNAME"/.ssh/authorized_keys;
|
||||
chmod 600 /home/"$MY_HIVE_USERNAME"/.ssh/authorized_keys;
|
||||
chmod 755 /home/"$MY_HIVE_USERNAME"/.ssh;
|
||||
chown "$MY_HIVE_USERNAME":"$MY_HIVE_USERNAME" -R /home/"$MY_HIVE_USERNAME"/.ssh'
|
||||
|
@ -72,8 +72,8 @@ if [ $? -eq 0 ];
|
|||
echo "######################################################"
|
||||
echo
|
||||
kill -9 $(pidof ssh)
|
||||
rm $MY_POT_PUBLICKEYFILE
|
||||
rm $MY_POT_PRIVATEKEYFILE
|
||||
rm $MY_SENSOR_PUBLICKEYFILE
|
||||
rm $MY_SENSOR_PRIVATEKEYFILE
|
||||
rm $MY_LS_ENVCONFIGFILE
|
||||
exit 1
|
||||
fi;
|
||||
|
@ -84,8 +84,8 @@ if [ $? -eq 0 ];
|
|||
echo "# Aborting. #"
|
||||
echo "#################################################################"
|
||||
echo
|
||||
rm $MY_POT_PUBLICKEYFILE
|
||||
rm $MY_POT_PRIVATEKEYFILE
|
||||
rm $MY_SENSOR_PUBLICKEYFILE
|
||||
rm $MY_SENSOR_PRIVATEKEYFILE
|
||||
rm $MY_LS_ENVCONFIGFILE
|
||||
exit 1
|
||||
fi;
|
||||
|
@ -105,12 +105,12 @@ echo
|
|||
export SSHPASS
|
||||
read -p "IP / FQDN: " MY_HIVE_IP
|
||||
MY_HIVE_USERNAME="$(hostname)"
|
||||
MY_TPOT_TYPE="POT"
|
||||
MY_TPOT_TYPE="SENSOR"
|
||||
MY_LS_ENVCONFIGFILE="/data/elk/logstash/ls_environment"
|
||||
|
||||
MY_POT_PUBLICKEYFILE="/data/elk/logstash/$MY_HIVE_USERNAME.pub"
|
||||
MY_POT_PRIVATEKEYFILE="/data/elk/logstash/$MY_HIVE_USERNAME"
|
||||
if ! [ -s "$MY_POT_PRIVATEKEYFILE" ] && ! [ -s "$MY_POT_PUBLICKEYFILE" ];
|
||||
MY_SENSOR_PUBLICKEYFILE="/data/elk/logstash/$MY_HIVE_USERNAME.pub"
|
||||
MY_SENSOR_PRIVATEKEYFILE="/data/elk/logstash/$MY_HIVE_USERNAME"
|
||||
if ! [ -s "$MY_SENSOR_PRIVATEKEYFILE" ] && ! [ -s "$MY_SENSOR_PUBLICKEYFILE" ];
|
||||
then
|
||||
echo
|
||||
echo "##############################"
|
||||
|
@ -118,8 +118,8 @@ if ! [ -s "$MY_POT_PRIVATEKEYFILE" ] && ! [ -s "$MY_POT_PUBLICKEYFILE" ];
|
|||
echo "##############################"
|
||||
echo
|
||||
mkdir -p /data/elk/logstash
|
||||
ssh-keygen -f "$MY_POT_PRIVATEKEYFILE" -N "" -C "$MY_HIVE_USERNAME"
|
||||
MY_POT_PUBLICKEY="$(cat "$MY_POT_PUBLICKEYFILE")"
|
||||
ssh-keygen -f "$MY_SENSOR_PRIVATEKEYFILE" -N "" -C "$MY_HIVE_USERNAME"
|
||||
MY_SENSOR_PUBLICKEY="$(cat "$MY_SENSOR_PUBLICKEYFILE")"
|
||||
else
|
||||
echo
|
||||
echo "#############################################"
|
||||
|
@ -137,7 +137,7 @@ echo "###########################################################"
|
|||
echo
|
||||
tee $MY_LS_ENVCONFIGFILE << EOF
|
||||
MY_TPOT_TYPE=$MY_TPOT_TYPE
|
||||
MY_POT_PRIVATEKEYFILE=$MY_POT_PRIVATEKEYFILE
|
||||
MY_SENSOR_PRIVATEKEYFILE=$MY_SENSOR_PRIVATEKEYFILE
|
||||
MY_HIVE_USERNAME=$MY_HIVE_USERNAME
|
||||
MY_HIVE_IP=$MY_HIVE_IP
|
||||
EOF
|
||||
|
@ -171,7 +171,7 @@ while [ 1 != 2 ]
|
|||
[c,C])
|
||||
fuGET_DEPLOY_DATA
|
||||
fuCHECK_HIVE
|
||||
fuDEPLOY_POT
|
||||
fuDEPLOY_SENSOR
|
||||
break
|
||||
;;
|
||||
[q,Q])
|
||||
|
|
122
bin/deprecated/hptest.sh
Executable file
|
@ -0,0 +1,122 @@
|
|||
#!/bin/bash
|
||||
|
||||
myHOST="$1"
|
||||
myPACKAGES="dcmtk netcat nmap"
|
||||
myMEDPOTPACKET="
|
||||
MSH|^~\&|ADT1|MCM|LABADT|MCM|198808181126|SECURITY|ADT^A01|MSG00001-|P|2.6
|
||||
EVN|A01|198808181123
|
||||
PID|||PATID1234^5^M11^^AN||JONES^WILLIAM^A^III||19610615|M||2106-3|677 DELAWARE AVENUE^^EVERETT^MA^02149|GL|(919)379-1212|(919)271-3434~(919)277-3114||S||PATID12345001^2^M10^^ACSN|123456789|9-87654^NC
|
||||
NK1|1|JONES^BARBARA^K|SPO|||||20011105
|
||||
NK1|1|JONES^MICHAEL^A|FTH
|
||||
PV1|1|I|2000^2012^01||||004777^LEBAUER^SIDNEY^J.|||SUR||-||ADM|A0
|
||||
AL1|1||^PENICILLIN||CODE16~CODE17~CODE18
|
||||
AL1|2||^CAT DANDER||CODE257
|
||||
DG1|001|I9|1550|MAL NEO LIVER, PRIMARY|19880501103005|F
|
||||
PR1|2234|M11|111^CODE151|COMMON PROCEDURES|198809081123
|
||||
ROL|45^RECORDER^ROLE MASTER LIST|AD|RO|KATE^SMITH^ELLEN|199505011201
|
||||
GT1|1122|1519|BILL^GATES^A
|
||||
IN1|001|A357|1234|BCMD|||||132987
|
||||
IN2|ID1551001|SSN12345678
|
||||
ROL|45^RECORDER^ROLE MASTER LIST|AD|RO|KATE^ELLEN|199505011201"
|
||||
|
||||
function fuGOTROOT {
|
||||
myWHOAMI=$(whoami)
|
||||
if [ "$myWHOAMI" != "root" ]
|
||||
then
|
||||
echo "Need to run as root ..."
|
||||
exit
|
||||
fi
|
||||
}
|
||||
|
||||
function fuCHECKDEPS {
|
||||
myINST=""
|
||||
for myDEPS in $myPACKAGES;
|
||||
do
|
||||
myOK=$(dpkg -s $myDEPS | grep ok | awk '{ print $3 }');
|
||||
if [ "$myOK" != "ok" ]
|
||||
then
|
||||
myINST=$(echo $myINST $myDEPS)
|
||||
fi
|
||||
done
|
||||
if [ "$myINST" != "" ]
|
||||
then
|
||||
apt-get update -y
|
||||
for myDEPS in $myINST;
|
||||
do
|
||||
apt-get install $myDEPS -y
|
||||
done
|
||||
fi
|
||||
}
|
||||
|
||||
function fuCHECKFORARGS {
|
||||
if [ "$myHOST" != "" ];
|
||||
then
|
||||
echo "All arguments met. Continuing."
|
||||
else
|
||||
echo "Usage: hp_test.sh <[host or ip]>"
|
||||
exit
|
||||
fi
|
||||
}
|
||||
|
||||
function fuGETPORTS {
|
||||
myDOCKERCOMPOSEPORTS=$(cat $myDOCKERCOMPOSEYML | yq -r '.services[].ports' | grep ':' | sed -e s/127.0.0.1// | tr -d '", ' | sed -e s/^:// | cut -f1 -d ':' | grep -v "6429\|6430" | sort -gu)
|
||||
myPORTS=$(for i in $myDOCKERCOMPOSEPORTS; do echo "$i"; done)
|
||||
echo "Found these ports enabled:"
|
||||
echo "$myPORTS"
|
||||
exit
|
||||
}
|
||||
|
||||
function fuSCAN {
|
||||
local myTIMEOUT="$1"
|
||||
local mySCANPORT="$2"
|
||||
local mySCANIP="$3"
|
||||
local mySCANOPTS="$4"
|
||||
|
||||
timeout --foreground ${myTIMEOUT} nmap ${mySCANOPTS} -T4 -v -p ${mySCANPORT} ${mySCANIP} &
|
||||
}
|
||||
|
||||
# Main
|
||||
fuGOTROOT
|
||||
fuCHECKDEPS
|
||||
fuCHECKFORARGS
|
||||
|
||||
echo "Starting scans ..."
|
||||
echo "$myMEDPOTPACKET" | nc "$myHOST" 2575 &
|
||||
curl -XGET "http://$myHOST:9200/logstash-*/_search" &
|
||||
curl -XPOST -H "Content-Type: application/json" -d '{"name":"test","email":"test@test.com"}' "http://$myHOST:9200/test" &
|
||||
echo "I20100" | timeout --foreground 3 nc "$myHOST" 10001 &
|
||||
findscu -P -k PatientName="*" $myHOST 11112 &
|
||||
getscu -P -k PatientName="*" $myHOST 11112 &
|
||||
telnet $myHOST 3299 &
|
||||
fuSCAN "180" "7,8,102,135,161,1025,1080,5000,9200" "$myHOST" "-sC -sS -sU -sV"
|
||||
fuSCAN "180" "2048,4096,5432" "$myHOST" "-sC -sS -sU -sV --version-light"
|
||||
fuSCAN "120" "20,21" "$myHOST" "--script=ftp* -sC -sS -sV"
|
||||
fuSCAN "120" "22" "$myHOST" "--script=ssh2-enum-algos,ssh-auth-methods,ssh-hostkey,ssh-publickey-acceptance,sshv1 -sC -sS -sV"
|
||||
fuSCAN "30" "22" "$myHOST" "--script=ssh-brute"
|
||||
fuSCAN "120" "23,2323,2324" "$myHOST" "--script=telnet-encryption,telnet-ntlm-info -sC -sS -sV --version-light"
|
||||
fuSCAN "120" "25" "$myHOST" "--script=smtp* -sC -sS -sV"
|
||||
fuSCAN "180" "42" "$myHOST" "-sC -sS -sV"
|
||||
fuSCAN "120" "69" "$myHOST" "--script=tftp-enum -sU"
|
||||
fuSCAN "120" "80,81,8080,8443" "$myHOST" "-sC -sS -sV"
|
||||
fuSCAN "120" "110,995" "$myHOST" "--script=pop3-capabilities,pop3-ntlm-info -sC -sS -sV --version-light"
|
||||
fuSCAN "30" "110,995" "$myHOST" "--script=pop3-brute -sS"
|
||||
fuSCAN "120" "143,993" "$myHOST" "--script=imap-capabilities,imap-ntlm-info -sC -sS -sV --version-light"
|
||||
fuSCAN "30" "143,993" "$myHOST" "--script=imap-brute -sS"
|
||||
fuSCAN "240" "445" "$myHOST" "--script=smb-vuln* -sS -sU"
|
||||
fuSCAN "120" "502" "$myHOST" "--script=modbus-discover -sS -sU"
|
||||
fuSCAN "120" "623" "$myHOST" "--script=ipmi-cipher-zero,ipmi-version,supermicro-ipmi -sS -sU"
|
||||
fuSCAN "30" "623" "$myHOST" "--script=ipmi-brute -sS -sU"
|
||||
fuSCAN "120" "1433" "$myHOST" "--script=ms-sql* -sS"
|
||||
fuSCAN "120" "1723" "$myHOST" "--script=pptp-version -sS"
|
||||
fuSCAN "120" "1883" "$myHOST" "--script=mqtt-subscribe -sS"
|
||||
fuSCAN "120" "2404" "$myHOST" "--script=iec-identify -sS"
|
||||
fuSCAN "120" "3306" "$myHOST" "--script=mysql-vuln* -sC -sS -sV"
|
||||
fuSCAN "120" "3389" "$myHOST" "--script=rdp* -sC -sS -sV"
|
||||
fuSCAN "120" "5000" "$myHOST" "--script=*upnp* -sS -sU"
|
||||
fuSCAN "120" "5060,5061" "$myHOST" "--script=sip-call-spoof,sip-enum-users,sip-methods -sS -sU"
|
||||
fuSCAN "120" "5900" "$myHOST" "--script=vnc-info,vnc-title,realvnc-auth-bypass -sS"
|
||||
fuSCAN "120" "27017" "$myHOST" "--script=mongo* -sS"
|
||||
fuSCAN "120" "47808" "$myHOST" "--script=bacnet* -sS"
|
||||
wait
|
||||
reset
|
||||
echo "Done."
|
45
bin/dps.sh
|
@ -8,8 +8,14 @@ if [ "$myWHOAMI" != "root" ]
|
|||
exit
|
||||
fi
|
||||
|
||||
# Show current status of T-Pot containers
|
||||
myPARAM="$1"
|
||||
if [[ $myPARAM =~ ^([1-9]|[1-9][0-9]|[1-9][0-9][0-9])$ ]];
|
||||
then
|
||||
watch --color -n $myPARAM "dps.sh"
|
||||
exit
|
||||
fi
|
||||
|
||||
# Show current status of T-Pot containers
|
||||
myCONTAINERS="$(cat /opt/tpot/etc/tpot.yml | grep -v '#' | grep container_name | cut -d: -f2 | sort | tr -d " ")"
|
||||
myRED="[1;31m"
|
||||
myGREEN="[1;32m"
|
||||
|
@ -17,19 +23,39 @@ myBLUE="[1;34m"
|
|||
myWHITE="[0;0m"
|
||||
myMAGENTA="[1;35m"
|
||||
|
||||
# Blackhole Status
|
||||
myBLACKHOLE_STATUS=$(ip r | grep "blackhole" -c)
|
||||
if [ "$myBLACKHOLE_STATUS" -gt "500" ];
|
||||
then
|
||||
myBLACKHOLE_STATUS="${myGREEN}ENABLED"
|
||||
else
|
||||
myBLACKHOLE_STATUS="${myRED}DISABLED"
|
||||
fi
|
||||
|
||||
function fuGETTPOT_STATUS {
|
||||
# T-Pot Status
|
||||
myTPOT_STATUS=$(systemctl status tpot | grep "Active" | awk '{ print $2 }')
|
||||
if [ "$myTPOT_STATUS" == "active" ];
|
||||
then
|
||||
echo "${myGREEN}ACTIVE"
|
||||
else
|
||||
echo "${myRED}INACTIVE"
|
||||
fi
|
||||
}
|
||||
|
||||
function fuGETSTATUS {
|
||||
grc --colour=on docker ps -f status=running -f status=exited --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}" | grep -v "NAME" | sort
|
||||
}
|
||||
|
||||
function fuGETSYS {
|
||||
printf "========| System |========\n"
|
||||
printf "%+10s %-20s\n" "Date: " "$(date)"
|
||||
printf "%+10s %-20s\n" "Uptime: " "$(uptime | cut -b 2-)"
|
||||
printf "[ ========| System |======== ]\n"
|
||||
printf "${myBLUE}%+11s ${myWHITE}%-20s\n" "DATE: " "$(date)"
|
||||
printf "${myBLUE}%+11s ${myWHITE}%-20s\n" "UPTIME: " "$(grc --colour=on uptime)"
|
||||
printf "${myMAGENTA}%+11s %-20s\n" "T-POT: " "$(fuGETTPOT_STATUS)"
|
||||
printf "${myMAGENTA}%+11s %-20s\n" "BLACKHOLE: " "$myBLACKHOLE_STATUS${myWHITE}"
|
||||
echo
|
||||
}
|
||||
|
||||
while true
|
||||
do
|
||||
myDPS=$(fuGETSTATUS)
|
||||
myDPSNAMES=$(echo "$myDPS" | awk '{ print $1 }' | sort)
|
||||
fuGETSYS
|
||||
|
@ -45,10 +71,3 @@ while true
|
|||
printf "%-28s %-28s\n" "$myRED$i" "DOWN$myWHITE"
|
||||
fi
|
||||
done
|
||||
if [[ $myPARAM =~ ^([1-9]|[1-9][0-9]|[1-9][0-9][0-9])$ ]];
|
||||
then
|
||||
sleep "$myPARAM"
|
||||
else
|
||||
break
|
||||
fi
|
||||
done
|
||||
|
|
|
@ -1,23 +1,8 @@
|
|||
#!/bin/bash
|
||||
|
||||
myHOST="$1"
|
||||
myPACKAGES="dcmtk netcat nmap"
|
||||
myMEDPOTPACKET="
|
||||
MSH|^~\&|ADT1|MCM|LABADT|MCM|198808181126|SECURITY|ADT^A01|MSG00001-|P|2.6
|
||||
EVN|A01|198808181123
|
||||
PID|||PATID1234^5^M11^^AN||JONES^WILLIAM^A^III||19610615|M||2106-3|677 DELAWARE AVENUE^^EVERETT^MA^02149|GL|(919)379-1212|(919)271-3434~(919)277-3114||S||PATID12345001^2^M10^^ACSN|123456789|9-87654^NC
|
||||
NK1|1|JONES^BARBARA^K|SPO|||||20011105
|
||||
NK1|1|JONES^MICHAEL^A|FTH
|
||||
PV1|1|I|2000^2012^01||||004777^LEBAUER^SIDNEY^J.|||SUR||-||ADM|A0
|
||||
AL1|1||^PENICILLIN||CODE16~CODE17~CODE18
|
||||
AL1|2||^CAT DANDER||CODE257
|
||||
DG1|001|I9|1550|MAL NEO LIVER, PRIMARY|19880501103005|F
|
||||
PR1|2234|M11|111^CODE151|COMMON PROCEDURES|198809081123
|
||||
ROL|45^RECORDER^ROLE MASTER LIST|AD|RO|KATE^SMITH^ELLEN|199505011201
|
||||
GT1|1122|1519|BILL^GATES^A
|
||||
IN1|001|A357|1234|BCMD|||||132987
|
||||
IN2|ID1551001|SSN12345678
|
||||
ROL|45^RECORDER^ROLE MASTER LIST|AD|RO|KATE^ELLEN|199505011201"
|
||||
myPACKAGES="nmap"
|
||||
myDOCKERCOMPOSEYML="/opt/tpot/etc/tpot.yml"
|
||||
|
||||
function fuGOTROOT {
|
||||
myWHOAMI=$(whoami)
|
||||
|
@ -52,71 +37,32 @@ function fuCHECKFORARGS {
|
|||
if [ "$myHOST" != "" ];
|
||||
then
|
||||
echo "All arguments met. Continuing."
|
||||
echo
|
||||
else
|
||||
echo "Usage: hp_test.sh <[host or ip]>"
|
||||
echo "Usage: hptest.sh <[host or ip]>"
|
||||
echo
|
||||
exit
|
||||
fi
|
||||
}
|
||||
|
||||
function fuGETPORTS {
|
||||
myDOCKERCOMPOSEUDPPORTS=$(cat $myDOCKERCOMPOSEYML | grep "udp" | tr -d '"\|#\-' | cut -d ":" -f2 | cut -d "/" -f1 | sort -gu)
|
||||
myDOCKERCOMPOSEPORTS=$(cat $myDOCKERCOMPOSEYML | yq -r '.services[].ports' | grep ':' | sed -e s/127.0.0.1// | tr -d '", ' | sed -e s/^:// | cut -f1 -d ':' | grep -v "6429\|6430" | sort -gu)
|
||||
myPORTS=$(for i in $myDOCKERCOMPOSEPORTS; do echo "$i"; done)
|
||||
echo "Found these ports enabled:"
|
||||
echo "$myPORTS"
|
||||
exit
|
||||
}
|
||||
|
||||
function fuSCAN {
|
||||
local myTIMEOUT="$1"
|
||||
local mySCANPORT="$2"
|
||||
local mySCANIP="$3"
|
||||
local mySCANOPTS="$4"
|
||||
|
||||
timeout --foreground ${myTIMEOUT} nmap ${mySCANOPTS} -T4 -v -p ${mySCANPORT} ${mySCANIP} &
|
||||
myUDPPORTS=$(for i in $myDOCKERCOMPOSEUDPPORTS; do echo -n "U:$i,"; done)
|
||||
myPORTS=$(for i in $myDOCKERCOMPOSEPORTS; do echo -n "T:$i,"; done)
|
||||
}
|
||||
|
||||
# Main
|
||||
fuGETPORTS
|
||||
fuGOTROOT
|
||||
fuCHECKDEPS
|
||||
fuCHECKFORARGS
|
||||
|
||||
echo "Starting scans ..."
|
||||
echo "$myMEDPOTPACKET" | nc "$myHOST" 2575 &
|
||||
curl -XGET "http://$myHOST:9200/logstash-*/_search" &
|
||||
curl -XPOST -H "Content-Type: application/json" -d '{"name":"test","email":"test@test.com"}' "http://$myHOST:9200/test" &
|
||||
echo "I20100" | timeout --foreground 3 nc "$myHOST" 10001 &
|
||||
findscu -P -k PatientName="*" $myHOST 11112 &
|
||||
getscu -P -k PatientName="*" $myHOST 11112 &
|
||||
telnet $myHOST 3299 &
|
||||
fuSCAN "180" "7,8,102,135,161,1025,1080,5000,9200" "$myHOST" "-sC -sS -sU -sV"
|
||||
fuSCAN "180" "2048,4096,5432" "$myHOST" "-sC -sS -sU -sV --version-light"
|
||||
fuSCAN "120" "20,21" "$myHOST" "--script=ftp* -sC -sS -sV"
|
||||
fuSCAN "120" "22" "$myHOST" "--script=ssh2-enum-algos,ssh-auth-methods,ssh-hostkey,ssh-publickey-acceptance,sshv1 -sC -sS -sV"
|
||||
fuSCAN "30" "22" "$myHOST" "--script=ssh-brute"
|
||||
fuSCAN "120" "23,2323,2324" "$myHOST" "--script=telnet-encryption,telnet-ntlm-info -sC -sS -sV --version-light"
|
||||
fuSCAN "120" "25" "$myHOST" "--script=smtp* -sC -sS -sV"
|
||||
fuSCAN "180" "42" "$myHOST" "-sC -sS -sV"
|
||||
fuSCAN "120" "69" "$myHOST" "--script=tftp-enum -sU"
|
||||
fuSCAN "120" "80,81,8080,8443" "$myHOST" "-sC -sS -sV"
|
||||
fuSCAN "120" "110,995" "$myHOST" "--script=pop3-capabilities,pop3-ntlm-info -sC -sS -sV --version-light"
|
||||
fuSCAN "30" "110,995" "$myHOST" "--script=pop3-brute -sS"
|
||||
fuSCAN "120" "143,993" "$myHOST" "--script=imap-capabilities,imap-ntlm-info -sC -sS -sV --version-light"
|
||||
fuSCAN "30" "143,993" "$myHOST" "--script=imap-brute -sS"
|
||||
fuSCAN "240" "445" "$myHOST" "--script=smb-vuln* -sS -sU"
|
||||
fuSCAN "120" "502" "$myHOST" "--script=modbus-discover -sS -sU"
|
||||
fuSCAN "120" "623" "$myHOST" "--script=ipmi-cipher-zero,ipmi-version,supermicro-ipmi -sS -sU"
|
||||
fuSCAN "30" "623" "$myHOST" "--script=ipmi-brute -sS -sU"
|
||||
fuSCAN "120" "1433" "$myHOST" "--script=ms-sql* -sS"
|
||||
fuSCAN "120" "1723" "$myHOST" "--script=pptp-version -sS"
|
||||
fuSCAN "120" "1883" "$myHOST" "--script=mqtt-subscribe -sS"
|
||||
fuSCAN "120" "2404" "$myHOST" "--script=iec-identify -sS"
|
||||
fuSCAN "120" "3306" "$myHOST" "--script=mysql-vuln* -sC -sS -sV"
|
||||
fuSCAN "120" "3389" "$myHOST" "--script=rdp* -sC -sS -sV"
|
||||
fuSCAN "120" "5000" "$myHOST" "--script=*upnp* -sS -sU"
|
||||
fuSCAN "120" "5060,5061" "$myHOST" "--script=sip-call-spoof,sip-enum-users,sip-methods -sS -sU"
|
||||
fuSCAN "120" "5900" "$myHOST" "--script=vnc-info,vnc-title,realvnc-auth-bypass -sS"
|
||||
fuSCAN "120" "27017" "$myHOST" "--script=mongo* -sS"
|
||||
fuSCAN "120" "47808" "$myHOST" "--script=bacnet* -sS"
|
||||
echo
|
||||
echo "Starting scan on all UDP / TCP ports defined in /opt/tpot/etc/tpot.yml ..."
|
||||
nmap -sV -sC -v -p $myPORTS $1 &
|
||||
nmap -sU -sV -sC -v -p $myUDPPORTS $1 &
|
||||
echo
|
||||
wait
|
||||
reset
|
||||
echo "Done."
|
||||
echo
|
||||
|
||||
|
|
45
bin/setup_builder.sh
Executable file
|
@ -0,0 +1,45 @@
|
|||
#!/bin/bash
|
||||
|
||||
# Got root?
|
||||
myWHOAMI=$(whoami)
|
||||
if [ "$myWHOAMI" != "root" ]
|
||||
then
|
||||
echo "Need to run as root ..."
|
||||
exit
|
||||
fi
|
||||
|
||||
# Only run with command switch
|
||||
if [ "$1" != "-y" ]; then
|
||||
echo "### Setting up docker for Multi Arch Builds."
|
||||
echo "### Use on x64 only!"
|
||||
echo "### Run with -y to install!"
|
||||
echo
|
||||
exit
|
||||
fi
|
||||
|
||||
# Main
|
||||
mkdir -p /root/.docker/cli-plugins/
|
||||
cd /root/.docker/cli-plugins/
|
||||
wget https://github.com/docker/buildx/releases/download/v0.8.1/buildx-v0.8.1.linux-amd64 -O docker-buildx
|
||||
chmod +x docker-buildx
|
||||
|
||||
docker buildx ls
|
||||
|
||||
# We need to create a new builder as the default one cannot handle multi-arch builds
|
||||
# https://docs.docker.com/desktop/multi-arch/
|
||||
docker buildx create --name mybuilder
|
||||
|
||||
# Set as default
|
||||
docker buildx use mybuilder
|
||||
|
||||
# We need to install emulators, arm64 should be fine for now
|
||||
# https://github.com/tonistiigi/binfmt/
|
||||
docker run --privileged --rm tonistiigi/binfmt --install arm64
|
||||
|
||||
# Check if everything is setup correctly
|
||||
docker buildx inspect --bootstrap
|
||||
echo
|
||||
echo "### Done."
|
||||
echo
|
||||
echo "Example: docker buildx build --platform linux/amd64,linux/arm64 -t username/demo:latest --push ."
|
||||
echo "Docs: https://docs.docker.com/desktop/multi-arch/"
|
29
bin/tpdclean.sh
Executable file
|
@ -0,0 +1,29 @@
|
|||
#!/bin/bash
|
||||
# T-Pot Compose and Container Cleaner
|
||||
# Set colors
|
||||
myRED="[0;31m"
|
||||
myGREEN="[0;32m"
|
||||
myWHITE="[0;0m"
|
||||
|
||||
# Only run with command switch
|
||||
if [ "$1" != "-y" ]; then
|
||||
echo $myRED"### WARNING"$myWHITE
|
||||
echo ""
|
||||
echo $myRED"###### This script is only intended for the tpot.service."$myWHITE
|
||||
echo $myRED"###### Run <systemctl stop tpot> first and then <tpdclean.sh -y>."$myWHITE
|
||||
echo $myRED"###### Be aware, all T-Pot container volumes and images will be removed."$myWHITE
|
||||
echo ""
|
||||
echo $myRED"### WARNING "$myWHITE
|
||||
echo
|
||||
exit
|
||||
fi
|
||||
|
||||
# Remove old containers, images and volumes
|
||||
docker-compose -f /opt/tpot/etc/tpot.yml down -v >> /dev/null 2>&1
|
||||
docker-compose -f /opt/tpot/etc/tpot.yml rm -v >> /dev/null 2>&1
|
||||
docker network rm $(docker network ls -q) >> /dev/null 2>&1
|
||||
docker volume rm $(docker volume ls -q) >> /dev/null 2>&1
|
||||
docker rm -v $(docker ps -aq) >> /dev/null 2>&1
|
||||
docker rmi $(docker images | grep "<none>" | awk '{print $3}') >> /dev/null 2>&1
|
||||
docker rmi $(docker images | grep "2203" | awk '{print $3}') >> /dev/null 2>&1
|
||||
exit 0
|
|
@ -29,7 +29,7 @@ for i in $myYMLS;
|
|||
do
|
||||
myITEMS+="$i $(echo $i | cut -d "." -f1 | tr [:lower:] [:upper:]) "
|
||||
done
|
||||
myEDITION=$(dialog --backtitle "$myBACKTITLE" --menu "Select T-Pot Edition" 17 50 10 $myITEMS 3>&1 1>&2 2>&3 3>&-)
|
||||
myEDITION=$(dialog --backtitle "$myBACKTITLE" --menu "Select T-Pot Edition" 18 50 1 $myITEMS 3>&1 1>&2 2>&3 3>&-)
|
||||
if [ "$myEDITION" == "" ];
|
||||
then
|
||||
echo "Have a nice day!"
|
||||
|
|
|
@ -2,23 +2,62 @@
|
|||
# Let's add the first local ip to the /etc/issue and external ip to ews.ip file
|
||||
# If the external IP cannot be detected, the internal IP will be inherited.
|
||||
source /etc/environment
|
||||
myCHECKIFSENSOR=$(head -n 1 /opt/tpot/etc/tpot.yml | grep "Sensor" | wc -l)
|
||||
myUUID=$(lsblk -o MOUNTPOINT,UUID | grep "/" | awk '{ print $2 }')
|
||||
myLOCALIP=$(hostname -I | awk '{ print $1 }')
|
||||
myEXTIP=$(/opt/tpot/bin/myip.sh)
|
||||
if [ "$myEXTIP" = "" ];
|
||||
then
|
||||
myEXTIP=$myLOCALIP
|
||||
myEXTIP_LAT="49.865835022498125"
|
||||
myEXTIP_LONG="8.62606472775735"
|
||||
else
|
||||
myEXTIP_LOC=$(curl -s ipinfo.io/$myEXTIP/loc)
|
||||
myEXTIP_LAT=$(echo "$myEXTIP_LOC" | cut -f1 -d",")
|
||||
myEXTIP_LONG=$(echo "$myEXTIP_LOC" | cut -f2 -d",")
|
||||
fi
|
||||
|
||||
# Load Blackhole routes if enabled
|
||||
myBLACKHOLE_FILE1="/etc/blackhole/mass_scanner.txt"
|
||||
myBLACKHOLE_FILE2="/etc/blackhole/mass_scanner_cidr.txt"
|
||||
if [ -f "$myBLACKHOLE_FILE1" ] || [ -f "$myBLACKHOLE_FILE2" ];
|
||||
then
|
||||
/opt/tpot/bin/blackhole.sh add
|
||||
fi
|
||||
|
||||
myBLACKHOLE_STATUS=$(ip r | grep "blackhole" -c)
|
||||
if [ "$myBLACKHOLE_STATUS" -gt "500" ];
|
||||
then
|
||||
myBLACKHOLE_STATUS="| [1;34mBLACKHOLE: [ [0;37mENABLED[1;34m ][0m"
|
||||
else
|
||||
myBLACKHOLE_STATUS="| [1;34mBLACKHOLE: [ [1;30mDISABLED[1;34m ][0m"
|
||||
fi
|
||||
|
||||
mySSHUSER=$(cat /etc/passwd | grep 1000 | cut -d ':' -f1)
|
||||
|
||||
# Export
|
||||
export myUUID
|
||||
export myLOCALIP
|
||||
export myEXTIP
|
||||
export myEXTIP_LAT
|
||||
export myEXTIP_LONG
|
||||
export myBLACKHOLE_STATUS
|
||||
export mySSHUSER
|
||||
|
||||
# Build issue
|
||||
echo "[H[2J" > /etc/issue
|
||||
toilet -f ivrit -F metal --filter border:metal "T-Pot 20.06" | sed 's/\\/\\\\/g' >> /etc/issue
|
||||
toilet -f ivrit -F metal --filter border:metal "T-Pot 22.04" | sed 's/\\/\\\\/g' >> /etc/issue
|
||||
echo >> /etc/issue
|
||||
echo ",---- [ [1;34m\n[0m ] [ [0;34m\d[0m ] [ [1;30m\t[0m ]" >> /etc/issue
|
||||
echo "|" >> /etc/issue
|
||||
echo "| [1;34mIP: $myLOCALIP ($myEXTIP)[0m" >> /etc/issue
|
||||
echo "| [0;34mSSH: ssh -l tsec -p 64295 $myLOCALIP[0m" >> /etc/issue
|
||||
echo "| [1;30mWEB: https://$myLOCALIP:64297[0m" >> /etc/issue
|
||||
if [ "$myCHECKIFSENSOR" == "0" ];
|
||||
then
|
||||
echo "| [1;30mWEB: https://$myLOCALIP:64297[0m" >> /etc/issue
|
||||
fi
|
||||
echo "| [0;37mADMIN: https://$myLOCALIP:64294[0m" >> /etc/issue
|
||||
echo "$myBLACKHOLE_STATUS" >> /etc/issue
|
||||
echo "|" >> /etc/issue
|
||||
echo "\`----" >> /etc/issue
|
||||
echo >> /etc/issue
|
||||
|
@ -29,6 +68,8 @@ EOF
|
|||
tee /opt/tpot/etc/compose/elk_environment << EOF
|
||||
HONEY_UUID=$myUUID
|
||||
MY_EXTIP=$myEXTIP
|
||||
MY_EXTIP_LAT=$myEXTIP_LAT
|
||||
MY_EXTIP_LONG=$myEXTIP_LONG
|
||||
MY_INTIP=$myLOCALIP
|
||||
MY_HOSTNAME=$HOSTNAME
|
||||
EOF
|
||||
|
@ -38,7 +79,7 @@ if [ -s "/data/elk/logstash/ls_environment" ];
|
|||
source /data/elk/logstash/ls_environment
|
||||
tee -a /opt/tpot/etc/compose/elk_environment << EOF
|
||||
MY_TPOT_TYPE=$MY_TPOT_TYPE
|
||||
MY_POT_PRIVATEKEYFILE=$MY_POT_PRIVATEKEYFILE
|
||||
MY_SENSOR_PRIVATEKEYFILE=$MY_SENSOR_PRIVATEKEYFILE
|
||||
MY_HIVE_USERNAME=$MY_HIVE_USERNAME
|
||||
MY_HIVE_IP=$MY_HIVE_IP
|
||||
EOF
|
||||
|
|
|
@ -28,31 +28,31 @@ variable "ec2_instance_type" {
|
|||
default = "t3.large"
|
||||
}
|
||||
|
||||
# Refer to https://wiki.debian.org/Cloud/AmazonEC2Image/Buster
|
||||
# Refer to https://wiki.debian.org/Cloud/AmazonEC2Image/Bullseye
|
||||
variable "ec2_ami" {
|
||||
type = map(string)
|
||||
default = {
|
||||
"af-south-1" = "ami-0272d4f5fb1b98a0d"
|
||||
"ap-east-1" = "ami-00d242e2f23abf6d2"
|
||||
"ap-northeast-1" = "ami-001c6b4d627e8be53"
|
||||
"ap-northeast-2" = "ami-0d841ed4bf80e764c"
|
||||
"ap-northeast-3" = "ami-01b0a01d770321320"
|
||||
"ap-south-1" = "ami-04ba7e5bd7c6f6929"
|
||||
"ap-southeast-1" = "ami-0dca3eabb09c32ae2"
|
||||
"ap-southeast-2" = "ami-03ff8684dc585ddae"
|
||||
"ca-central-1" = "ami-08af22d7c0382fd83"
|
||||
"eu-central-1" = "ami-0f41e297b3c53fab8"
|
||||
"eu-north-1" = "ami-0bbc6a00971c77d6d"
|
||||
"eu-south-1" = "ami-03ff8684dc585ddae"
|
||||
"eu-west-1" = "ami-080684ad73d431a05"
|
||||
"eu-west-2" = "ami-04b259723891dfc53"
|
||||
"eu-west-3" = "ami-00662eead74f66895"
|
||||
"me-south-1" = "ami-021a6c6047091ab5b"
|
||||
"sa-east-1" = "ami-0aac091cce68a049c"
|
||||
"us-east-1" = "ami-05ad4ed7f9c48178b"
|
||||
"us-east-2" = "ami-07640f3f27c0ad3d3"
|
||||
"us-west-1" = "ami-0c053f1d5f22eb09f"
|
||||
"us-west-2" = "ami-090cd3aed687b1ee1"
|
||||
"af-south-1" = "ami-0c372f041acae6d49"
|
||||
"ap-east-1" = "ami-079b8d011d4655385"
|
||||
"ap-northeast-1" = "ami-08dbbf1c0485a4aa8"
|
||||
"ap-northeast-2" = "ami-0269fe7d013b8e2dd"
|
||||
"ap-northeast-3" = "ami-0848d1e5fb6e3e3da"
|
||||
"ap-south-1" = "ami-020d429f17c9f1d0a"
|
||||
"ap-southeast-1" = "ami-09625a221230d9fe6"
|
||||
"ap-southeast-2" = "ami-03cbc6cddb06af2c2"
|
||||
"ca-central-1" = "ami-09125623b02302014"
|
||||
"eu-central-1" = "ami-00c36c60f07e21791"
|
||||
"eu-north-1" = "ami-052bea934e2d9dbfe"
|
||||
"eu-south-1" = "ami-04e2bb16d37324719"
|
||||
"eu-west-1" = "ami-0f87948fe2cf1b2a4"
|
||||
"eu-west-2" = "ami-02ed1bc837487d535"
|
||||
"eu-west-3" = "ami-080efd2add7e29430"
|
||||
"me-south-1" = "ami-0dbde382c834c4a72"
|
||||
"sa-east-1" = "ami-0a0792814cb068077"
|
||||
"us-east-1" = "ami-05dd1b6e7ef6f8378"
|
||||
"us-east-2" = "ami-04dd0542609808c50"
|
||||
"us-west-1" = "ami-07af5f877b3db9f73"
|
||||
"us-west-2" = "ami-0d0d8694ba492c02b"
|
||||
}
|
||||
}
|
||||
|
||||
|
@ -74,7 +74,7 @@ variable "linux_password" {
|
|||
## These will go in the generated tpot.conf file ##
|
||||
variable "tpot_flavor" {
|
||||
default = "STANDARD"
|
||||
description = "Specify your tpot flavor [STANDARD, SENSOR, INDUSTRIAL, COLLECTOR, NEXTGEN, MEDICAL]"
|
||||
description = "Specify your tpot flavor [STANDARD, HIVE, HIVE_SENSOR, INDUSTRIAL, LOG4J, MEDICAL, MINI, SENSOR]"
|
||||
}
|
||||
|
||||
variable "web_user" {
|
||||
|
|
9
cloud/terraform/aws_multi_region/_provider.tf
Normal file
|
@ -0,0 +1,9 @@
|
|||
provider "aws" {
|
||||
alias = "eu-west-2"
|
||||
region = "eu-west-2"
|
||||
}
|
||||
|
||||
provider "aws" {
|
||||
alias = "us-west-1"
|
||||
region = "us-west-1"
|
||||
}
|
27
cloud/terraform/aws_multi_region/main.tf
Normal file
|
@ -0,0 +1,27 @@
|
|||
module "eu-west-2" {
|
||||
source = "./modules/multi-region"
|
||||
ec2_vpc_id = "vpc-xxxxxxxx"
|
||||
ec2_subnet_id = "subnet-xxxxxxxx"
|
||||
ec2_region = "eu-west-2"
|
||||
tpot_name = "T-Pot Honeypot"
|
||||
|
||||
linux_password = var.linux_password
|
||||
web_password = var.web_password
|
||||
providers = {
|
||||
aws = aws.eu-west-2
|
||||
}
|
||||
}
|
||||
|
||||
module "us-west-1" {
|
||||
source = "./modules/multi-region"
|
||||
ec2_vpc_id = "vpc-xxxxxxxx"
|
||||
ec2_subnet_id = "subnet-xxxxxxxx"
|
||||
ec2_region = "us-west-1"
|
||||
tpot_name = "T-Pot Honeypot"
|
||||
|
||||
linux_password = var.linux_password
|
||||
web_password = var.web_password
|
||||
providers = {
|
||||
aws = aws.us-west-1
|
||||
}
|
||||
}
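A minimal sketch of rolling out the two regions defined above, assuming AWS credentials are configured and the `vpc-xxxxxxxx` / `subnet-xxxxxxxx` placeholders have been replaced with real IDs:

```bash
cd cloud/terraform/aws_multi_region
terraform init
# linux_password and web_password have no defaults and must be supplied
terraform apply -var 'linux_password=<secret>' -var 'web_password=<secret>'
```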
|
|
@ -0,0 +1,69 @@
|
|||
variable "ec2_vpc_id" {}
|
||||
variable "ec2_subnet_id" {}
|
||||
variable "ec2_region" {}
|
||||
variable "linux_password" {}
|
||||
variable "web_password" {}
|
||||
variable "tpot_name" {}
|
||||
|
||||
resource "aws_security_group" "tpot" {
|
||||
name = "T-Pot"
|
||||
description = "T-Pot Honeypot"
|
||||
vpc_id = var.ec2_vpc_id
|
||||
ingress {
|
||||
from_port = 0
|
||||
to_port = 64000
|
||||
protocol = "tcp"
|
||||
cidr_blocks = ["0.0.0.0/0"]
|
||||
}
|
||||
ingress {
|
||||
from_port = 0
|
||||
to_port = 64000
|
||||
protocol = "udp"
|
||||
cidr_blocks = ["0.0.0.0/0"]
|
||||
}
|
||||
ingress {
|
||||
from_port = 64294
|
||||
to_port = 64294
|
||||
protocol = "tcp"
|
||||
cidr_blocks = var.admin_ip
|
||||
}
|
||||
ingress {
|
||||
from_port = 64295
|
||||
to_port = 64295
|
||||
protocol = "tcp"
|
||||
cidr_blocks = var.admin_ip
|
||||
}
|
||||
ingress {
|
||||
from_port = 64297
|
||||
to_port = 64297
|
||||
protocol = "tcp"
|
||||
cidr_blocks = var.admin_ip
|
||||
}
|
||||
egress {
|
||||
from_port = 0
|
||||
to_port = 0
|
||||
protocol = "-1"
|
||||
cidr_blocks = ["0.0.0.0/0"]
|
||||
}
|
||||
tags = {
|
||||
Name = "T-Pot"
|
||||
}
|
||||
}
|
||||
|
||||
resource "aws_instance" "tpot" {
|
||||
ami = var.ec2_ami[var.ec2_region]
|
||||
instance_type = var.ec2_instance_type
|
||||
key_name = var.ec2_ssh_key_name
|
||||
subnet_id = var.ec2_subnet_id
|
||||
tags = {
|
||||
Name = var.tpot_name
|
||||
}
|
||||
root_block_device {
|
||||
volume_type = "gp2"
|
||||
volume_size = 128
|
||||
delete_on_termination = true
|
||||
}
|
||||
user_data = templatefile("../cloud-init.yaml", { timezone = var.timezone, password = var.linux_password, tpot_flavor = var.tpot_flavor, web_user = var.web_user, web_password = var.web_password })
|
||||
vpc_security_group_ids = [aws_security_group.tpot.id]
|
||||
associate_public_ip_address = true
|
||||
}
|
|
@ -0,0 +1,12 @@
|
|||
output "Admin_UI" {
|
||||
value = "https://${aws_instance.tpot.public_dns}:64294/"
|
||||
}
|
||||
|
||||
output "SSH_Access" {
|
||||
value = "ssh -i {private_key_file} -p 64295 admin@${aws_instance.tpot.public_dns}"
|
||||
}
|
||||
|
||||
output "Web_UI" {
|
||||
value = "https://${aws_instance.tpot.public_dns}:64297/"
|
||||
}
|
||||
|
@@ -0,0 +1,57 @@
variable "admin_ip" {
  default     = ["127.0.0.1/32"]
  description = "admin IP addresses in CIDR format"
}

variable "ec2_ssh_key_name" {
  default = "default"
}

# https://aws.amazon.com/ec2/instance-types/
variable "ec2_instance_type" {
  default = "t3.xlarge"
}

# Refer to https://wiki.debian.org/Cloud/AmazonEC2Image/Bullseye
variable "ec2_ami" {
  type = map(string)
  default = {
    "af-south-1"     = "ami-0c372f041acae6d49"
    "ap-east-1"      = "ami-079b8d011d4655385"
    "ap-northeast-1" = "ami-08dbbf1c0485a4aa8"
    "ap-northeast-2" = "ami-0269fe7d013b8e2dd"
    "ap-northeast-3" = "ami-0848d1e5fb6e3e3da"
    "ap-south-1"     = "ami-020d429f17c9f1d0a"
    "ap-southeast-1" = "ami-09625a221230d9fe6"
    "ap-southeast-2" = "ami-03cbc6cddb06af2c2"
    "ca-central-1"   = "ami-09125623b02302014"
    "eu-central-1"   = "ami-00c36c60f07e21791"
    "eu-north-1"     = "ami-052bea934e2d9dbfe"
    "eu-south-1"     = "ami-04e2bb16d37324719"
    "eu-west-1"      = "ami-0f87948fe2cf1b2a4"
    "eu-west-2"      = "ami-02ed1bc837487d535"
    "eu-west-3"      = "ami-080efd2add7e29430"
    "me-south-1"     = "ami-0dbde382c834c4a72"
    "sa-east-1"      = "ami-0a0792814cb068077"
    "us-east-1"      = "ami-05dd1b6e7ef6f8378"
    "us-east-2"      = "ami-04dd0542609808c50"
    "us-west-1"      = "ami-07af5f877b3db9f73"
    "us-west-2"      = "ami-0d0d8694ba492c02b"
  }
}

## cloud-init configuration ##
variable "timezone" {
  default = "UTC"
}

## These will go in the generated tpot.conf file ##
variable "tpot_flavor" {
  default     = "STANDARD"
  description = "Specify your tpot flavor [STANDARD, HIVE, HIVE_SENSOR, INDUSTRIAL, LOG4J, MEDICAL, MINI, SENSOR]"
}

variable "web_user" {
  default     = "webuser"
  description = "Set a username for the web user"
}
@@ -0,0 +1,9 @@
terraform {
  required_version = ">= 0.13"
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "3.72.0"
    }
  }
}
cloud/terraform/aws_multi_region/outputs.tf (new file)
@@ -0,0 +1,7 @@
output "eu-west-2_Web_UI" {
  value = module.eu-west-2.Web_UI
}

output "us-west-1_Web_UI" {
  value = module.us-west-1.Web_UI
}
cloud/terraform/aws_multi_region/variables.tf (new file)
@@ -0,0 +1,19 @@
variable "linux_password" {
  #default     = "LiNuXuSeRP4Ss!"
  description = "Set a password for the default user"

  validation {
    condition     = length(var.linux_password) > 0
    error_message = "Please specify a password for the default user."
  }
}

variable "web_password" {
  #default     = "w3b$ecret20"
  description = "Set a password for the web user"

  validation {
    condition     = length(var.web_password) > 0
    error_message = "Please specify a password for the web user."
  }
}
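A minimal usage sketch for the new multi-region configuration (the aliased provider definitions for eu-west-2/us-west-1 and the SSH key referenced by ec2_ssh_key_name are assumptions not shown in this change; the variable and output names come from the files above):

# Both passwords are required; the validation blocks above reject empty values.
export TF_VAR_linux_password='<linux-password>'
export TF_VAR_web_password='<web-password>'

cd cloud/terraform/aws_multi_region
terraform init                       # pulls hashicorp/aws 3.72.0 as pinned above
terraform apply                      # creates one T-Pot instance per module/region
terraform output eu-west-2_Web_UI
terraform output us-west-1_Web_UI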
@@ -79,7 +79,7 @@ variable "eip_size" {
## These will go in the generated tpot.conf file ##
variable "tpot_flavor" {
  default     = "STANDARD"
  description = "Specify your tpot flavor [STANDARD, SENSOR, INDUSTRIAL, COLLECTOR, NEXTGEN, MEDICAL]"
  description = "Specify your tpot flavor [STANDARD, HIVE, HIVE_SENSOR, INDUSTRIAL, LOG4J, MEDICAL, MINI, SENSOR]"
}

variable "web_user" {
Binary image changes under doc/ (diff not rendered):
Added: attackmap.png, cockpit_a.png, cockpit_b.png, elasticvue.png, kibana_a.png, kibana_b.png, kibana_c.png, tpotwebui.png
Removed: cockpit1.png, cockpit2.png, cockpit3.png, dockerui.png, heimdall.png, kibana.png, netdata.png, webssh.png
Updated: several existing screenshots (only size metadata shown)
|
@ -1,20 +1,19 @@
|
|||
FROM alpine:3.14
|
||||
FROM alpine:3.15
|
||||
#
|
||||
# Include dist
|
||||
ADD dist/ /root/dist/
|
||||
COPY dist/ /root/dist/
|
||||
#
|
||||
# Install packages
|
||||
RUN apk -U add \
|
||||
RUN apk --no-cache -U add \
|
||||
git \
|
||||
libcap \
|
||||
py3-pip \
|
||||
python3 \
|
||||
python3-dev && \
|
||||
procps \
|
||||
python3 && \
|
||||
#
|
||||
# Install adbhoney from git
|
||||
git clone https://github.com/huuck/ADBHoney /opt/adbhoney && \
|
||||
cd /opt/adbhoney && \
|
||||
git checkout ad7c17e78d01f6860d58ba826a4b6a4e4f83acbd && \
|
||||
# git checkout ad7c17e78d01f6860d58ba826a4b6a4e4f83acbd && \
|
||||
git checkout 2417a7a982f4fd527b3a048048df9a23178767ad && \
|
||||
cp /root/dist/adbhoney.cfg /opt/adbhoney && \
|
||||
sed -i 's/dst_ip/dest_ip/' /opt/adbhoney/adbhoney/core.py && \
|
||||
sed -i 's/dst_port/dest_port/' /opt/adbhoney/adbhoney/core.py && \
|
||||
|
@ -23,16 +22,15 @@ RUN apk -U add \
|
|||
addgroup -g 2000 adbhoney && \
|
||||
adduser -S -H -s /bin/ash -u 2000 -D -g 2000 adbhoney && \
|
||||
chown -R adbhoney:adbhoney /opt/adbhoney && \
|
||||
setcap cap_net_bind_service=+ep /usr/bin/python3.9 && \
|
||||
#
|
||||
# Clean up
|
||||
apk del --purge git \
|
||||
python3-dev && \
|
||||
rm -rf /root/* && \
|
||||
rm -rf /var/cache/apk/*
|
||||
apk del --purge git && \
|
||||
rm -rf /root/* /opt/adbhoney/.git /var/cache/apk/*
|
||||
#
|
||||
# Set workdir and start adbhoney
|
||||
STOPSIGNAL SIGINT
|
||||
# Adbhoney sometimes hangs at 100% CPU usage, if detected process will be killed and container restarts per docker-compose settings
|
||||
HEALTHCHECK CMD if [ $(ps -C mpv -p 1 -o %cpu | tail -n 1 | cut -f 1 -d ".") -gt 75 ]; then kill -2 1; else exit 0; fi
|
||||
USER adbhoney:adbhoney
|
||||
WORKDIR /opt/adbhoney/
|
||||
CMD nohup /usr/bin/python3 run.py
|
||||
CMD /usr/bin/python3 run.py
|
||||
|
|
|
@ -10,12 +10,13 @@ services:
|
|||
build: .
|
||||
container_name: adbhoney
|
||||
restart: always
|
||||
# cpu_count: 1
|
||||
# cpus: 0.25
|
||||
networks:
|
||||
- adbhoney_local
|
||||
ports:
|
||||
- "5555:5555"
|
||||
# image: "dtagdevsec/adbhoney:2006"
|
||||
image: "dtagdevsec/adbhoney:2006"
|
||||
image: "dtagdevsec/adbhoney:2204"
|
||||
read_only: true
|
||||
volumes:
|
||||
- /data/adbhoney/log:/opt/adbhoney/log
|
||||
|
|
docker/builder.sh (new executable file)
@@ -0,0 +1,79 @@
#!/bin/bash

# Setup Vars
myPLATFORMS="linux/amd64,linux/arm64"
myHUBORG="dtagdevsec"
myTAG="2204"
myIMAGESBASE="adbhoney ciscoasa citrixhoneypot conpot cowrie ddospot dicompot dionaea elasticpot endlessh ewsposter fatt glutton hellpot heralding honeypots honeytrap ipphoney log4pot mailoney medpot nginx p0f redishoneypot sentrypeer spiderfoot suricata wordpot"
myIMAGESELK="elasticsearch kibana logstash map"
myIMAGESTANNER="phpox redis snare tanner"
myBUILDERLOG="builder.log"
myBUILDERERR="builder.err"
myBUILDCACHE="/buildcache"

# Got root?
myWHOAMI=$(whoami)
if [ "$myWHOAMI" != "root" ]
  then
    echo "Need to run as root ..."
    exit
fi

# Check for Buildx
docker buildx > /dev/null 2>&1
if [ "$?" == "1" ];
  then
    echo "### Build environment not setup. Run bin/setup_builder.sh"
fi

# Only run with command switch
if [ "$1" == "" ]; then
  echo "### T-Pot Multi Arch Image Builder."
  echo "## Usage: builder.sh [build, push]"
  echo "## build - Just build images, do not push."
  echo "## push - Build and push images."
  echo "## Pushing requires an active docker login."
  exit
fi

fuBUILDIMAGES () {
local myPATH="$1"
local myIMAGELIST="$2"
local myPUSHOPTION="$3"

for myREPONAME in $myIMAGELIST;
  do
    echo -n "Now building: $myREPONAME in $myPATH$myREPONAME/."
    docker buildx build --cache-from "type=local,src=$myBUILDCACHE" --cache-to "type=local,dest=$myBUILDCACHE" --platform $myPLATFORMS -t $myHUBORG/$myREPONAME:$myTAG $myPUSHOPTION $myPATH$myREPONAME/. >> $myBUILDERLOG 2>&1
    if [ "$?" != "0" ];
      then
        echo " [ ERROR ] - Check logs!"
        echo "Error building $myREPONAME" >> "$myBUILDERERR"
      else
        echo " [ OK ]"
    fi
  done
}

# Just build images
if [ "$1" == "build" ];
  then
    mkdir -p $myBUILDCACHE
    rm -f "$myBUILDERLOG" "$myBUILDERERR"
    echo "### Building images ..."
    fuBUILDIMAGES "" "$myIMAGESBASE" ""
    fuBUILDIMAGES "elk/" "$myIMAGESELK" ""
    fuBUILDIMAGES "tanner/" "$myIMAGESTANNER" ""
fi

# Build and push images
if [ "$1" == "push" ];
  then
    mkdir -p $myBUILDCACHE
    rm -f "$myBUILDERLOG" "$myBUILDERERR"
    echo "### Building and pushing images ..."
    fuBUILDIMAGES "" "$myIMAGESBASE" "--push"
    fuBUILDIMAGES "elk/" "$myIMAGESELK" "--push"
    fuBUILDIMAGES "tanner/" "$myIMAGESTANNER" "--push"
fi
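A usage sketch for the new multi-arch builder, assuming a Docker Buildx builder with amd64/arm64 support has already been created (the script only warns and points to bin/setup_builder.sh if it has not):

# Run as root from the docker/ directory; "build" only builds, "push" also pushes
# and assumes an active docker login for the dtagdevsec organisation.
cd docker
sudo ./builder.sh build
# Build output is appended to builder.log, failed images are listed in builder.err
tail -f builder.log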
|
@ -1,11 +1,11 @@
|
|||
FROM alpine:3.14
|
||||
FROM alpine:3.15
|
||||
#
|
||||
# Include dist
|
||||
ADD dist/ /root/dist/
|
||||
COPY dist/ /root/dist/
|
||||
#
|
||||
# Setup env and apt
|
||||
RUN apk -U upgrade && \
|
||||
apk add build-base \
|
||||
RUN apk --no-cache -U upgrade && \
|
||||
apk --no-cache add build-base \
|
||||
git \
|
||||
libffi \
|
||||
libffi-dev \
|
||||
|
@ -26,6 +26,7 @@ RUN apk -U upgrade && \
|
|||
git clone https://github.com/cymmetria/ciscoasa_honeypot && \
|
||||
cd ciscoasa_honeypot && \
|
||||
git checkout d6e91f1aab7fe6fc01fabf2046e76b68dd6dc9e2 && \
|
||||
sed -i "s/git+git/git+https/g" requirements.txt && \
|
||||
pip3 install --no-cache-dir -r requirements.txt && \
|
||||
cp /root/dist/asa_server.py /opt/ciscoasa_honeypot && \
|
||||
chown -R ciscoasa:ciscoasa /opt/ciscoasa_honeypot && \
|
||||
|
@ -37,6 +38,7 @@ RUN apk -U upgrade && \
|
|||
openssl-dev \
|
||||
python3-dev && \
|
||||
rm -rf /root/* && \
|
||||
rm -rf /opt/ciscoasa_honeypot/.git && \
|
||||
rm -rf /var/cache/apk/*
|
||||
#
|
||||
# Start ciscoasa
|
||||
|
|
|
@ -9,11 +9,14 @@ services:
|
|||
restart: always
|
||||
tmpfs:
|
||||
- /tmp/ciscoasa:uid=2000,gid=2000
|
||||
network_mode: "host"
|
||||
# cpu_count: 1
|
||||
# cpus: 0.25
|
||||
networks:
|
||||
- ciscoasa_local
|
||||
ports:
|
||||
- "5000:5000/udp"
|
||||
- "8443:8443"
|
||||
image: "dtagdevsec/ciscoasa:2006"
|
||||
image: "dtagdevsec/ciscoasa:2204"
|
||||
read_only: true
|
||||
volumes:
|
||||
- /data/ciscoasa/log:/var/log/ciscoasa
|
||||
|
|
|
@ -1,13 +1,12 @@
|
|||
FROM alpine:3.14
|
||||
FROM alpine:3.15
|
||||
#
|
||||
# Install packages
|
||||
RUN apk -U add \
|
||||
RUN apk --no-cache -U add \
|
||||
git \
|
||||
libcap \
|
||||
openssl \
|
||||
py3-pip \
|
||||
python3 \
|
||||
python3-dev && \
|
||||
python3 && \
|
||||
#
|
||||
pip3 install --no-cache-dir python-json-logger && \
|
||||
#
|
||||
|
@ -33,9 +32,9 @@ RUN apk -U add \
|
|||
#
|
||||
# Clean up
|
||||
apk del --purge git \
|
||||
openssl \
|
||||
python3-dev && \
|
||||
openssl && \
|
||||
rm -rf /root/* && \
|
||||
rm -rf /opt/citrixhoneypot/.git && \
|
||||
rm -rf /var/cache/apk/*
|
||||
#
|
||||
# Set workdir and start citrixhoneypot
|
||||
|
|
|
@ -10,11 +10,13 @@ services:
|
|||
build: .
|
||||
container_name: citrixhoneypot
|
||||
restart: always
|
||||
# cpu_count: 1
|
||||
# cpus: 0.25
|
||||
networks:
|
||||
- citrixhoneypot_local
|
||||
ports:
|
||||
- "443:443"
|
||||
image: "dtagdevsec/citrixhoneypot:2006"
|
||||
image: "dtagdevsec/citrixhoneypot:2204"
|
||||
read_only: true
|
||||
volumes:
|
||||
- /data/citrixhoneypot/logs:/opt/citrixhoneypot/logs
|
||||
|
|
|
@ -1,11 +1,12 @@
|
|||
FROM alpine:3.14
|
||||
FROM alpine:3.15
|
||||
#
|
||||
# Include dist
|
||||
ADD dist/ /root/dist/
|
||||
COPY dist/ /root/dist/
|
||||
#
|
||||
# Setup apt
|
||||
RUN apk -U add \
|
||||
RUN apk --no-cache -U add \
|
||||
build-base \
|
||||
cython \
|
||||
file \
|
||||
git \
|
||||
libev \
|
||||
|
@ -16,36 +17,53 @@ RUN apk -U add \
|
|||
libxslt-dev \
|
||||
mariadb-dev \
|
||||
pkgconfig \
|
||||
procps \
|
||||
python3 \
|
||||
python3-dev \
|
||||
py3-cffi \
|
||||
py3-cryptography \
|
||||
py3-cffi \
|
||||
py3-cryptography \
|
||||
py3-freezegun \
|
||||
py3-gevent \
|
||||
py3-lxml \
|
||||
py3-natsort \
|
||||
py3-pip \
|
||||
tcpdump \
|
||||
py3-ply \
|
||||
py3-psutil \
|
||||
py3-pycryptodomex \
|
||||
py3-pytest \
|
||||
py3-requests \
|
||||
py3-pyserial \
|
||||
py3-setuptools \
|
||||
py3-slugify \
|
||||
py3-snmp \
|
||||
py3-sphinx \
|
||||
py3-wheel \
|
||||
py3-zope-event \
|
||||
py3-zope-interface \
|
||||
wget && \
|
||||
#
|
||||
# Setup ConPot
|
||||
git clone https://github.com/mushorg/conpot /opt/conpot && \
|
||||
cd /opt/conpot/ && \
|
||||
git checkout 804fd65aa3b7ffa31c07fd4e863d4a5500414cf3 && \
|
||||
git checkout b3740505fd26d82473c0d7be405b372fa0f82575 && \
|
||||
#git checkout 1c2382ea290b611fdc6a0a5f9572c7504bcb616e && \
|
||||
# Change template default ports if <1024
|
||||
sed -i 's/port="2121"/port="21"/' /opt/conpot/conpot/templates/default/ftp/ftp.xml && \
|
||||
sed -i 's/port="8800"/port="80"/' /opt/conpot/conpot/templates/default/http/http.xml && \
|
||||
sed -i 's/port="6230"/port="623"/' /opt/conpot/conpot/templates/default/ipmi/ipmi.xml && \
|
||||
sed -i 's/port="5020"/port="502"/' /opt/conpot/conpot/templates/default/modbus/modbus.xml && \
|
||||
sed -i 's/port="10201"/port="102"/' /opt/conpot/conpot/templates/default/s7comm/s7comm.xml && \
|
||||
sed -i 's/port="16100"/port="161"/' /opt/conpot/conpot/templates/default/snmp/snmp.xml && \
|
||||
sed -i 's/port="6969"/port="69"/' /opt/conpot/conpot/templates/default/tftp/tftp.xml && \
|
||||
sed -i 's/port="16100"/port="161"/' /opt/conpot/conpot/templates/IEC104/snmp/snmp.xml && \
|
||||
sed -i 's/port="6230"/port="623"/' /opt/conpot/conpot/templates/ipmi/ipmi/ipmi.xml && \
|
||||
pip3 install --no-cache-dir -U setuptools && \
|
||||
sed -i 's/port="2121"/port="21"/' /opt/conpot/conpot/templates/default/ftp/ftp.xml && \
|
||||
sed -i 's/port="8800"/port="80"/' /opt/conpot/conpot/templates/default/http/http.xml && \
|
||||
sed -i 's/port="6230"/port="623"/' /opt/conpot/conpot/templates/default/ipmi/ipmi.xml && \
|
||||
sed -i 's/port="5020"/port="502"/' /opt/conpot/conpot/templates/default/modbus/modbus.xml && \
|
||||
sed -i 's/port="10201"/port="102"/' /opt/conpot/conpot/templates/default/s7comm/s7comm.xml && \
|
||||
sed -i 's/port="16100"/port="161"/' /opt/conpot/conpot/templates/default/snmp/snmp.xml && \
|
||||
sed -i 's/port="6969"/port="69"/' /opt/conpot/conpot/templates/default/tftp/tftp.xml && \
|
||||
sed -i 's/port="16100"/port="161"/' /opt/conpot/conpot/templates/IEC104/snmp/snmp.xml && \
|
||||
sed -i 's/port="6230"/port="623"/' /opt/conpot/conpot/templates/ipmi/ipmi/ipmi.xml && \
|
||||
cp /root/dist/requirements.txt . && \
|
||||
pip3 install --no-cache-dir --upgrade pip && \
|
||||
pip3 install --no-cache-dir . && \
|
||||
pip3 install --no-cache-dir pysnmp-mibs && \
|
||||
cd / && \
|
||||
rm -rf /opt/conpot /tmp/* /var/tmp/* && \
|
||||
setcap cap_net_bind_service=+ep /usr/bin/python3.9 && \
|
||||
#
|
||||
#
|
||||
# Get wireshark manuf db for scapy, setup configs, user, groups
|
||||
mkdir -p /etc/conpot /var/log/conpot /usr/share/wireshark && \
|
||||
wget https://github.com/wireshark/wireshark/raw/master/manuf -o /usr/share/wireshark/manuf && \
|
||||
|
@ -66,7 +84,6 @@ RUN apk -U add \
|
|||
mariadb-dev \
|
||||
pkgconfig \
|
||||
python3-dev \
|
||||
py-cffi \
|
||||
wget && \
|
||||
rm -rf /root/* && \
|
||||
rm -rf /tmp/* && \
|
||||
|
@ -74,5 +91,7 @@ RUN apk -U add \
|
|||
#
|
||||
# Start conpot
|
||||
STOPSIGNAL SIGINT
|
||||
# Conpot sometimes hangs at 100% CPU usage, if detected process will be killed and container restarts per docker-compose settings
|
||||
HEALTHCHECK CMD if [ $(ps -C mpv -p 1 -o %cpu | tail -n 1 | cut -f 1 -d ".") -gt 75 ]; then kill -2 1; else exit 0; fi
|
||||
USER conpot:conpot
|
||||
CMD exec /usr/bin/conpot --mibcache $CONPOT_TMP --temp_dir $CONPOT_TMP --template $CONPOT_TEMPLATE --logfile $CONPOT_LOG --config $CONPOT_CONFIG
|
||||
|
|
docker/conpot/dist/command_responder.py (vendored; diff not shown, 1123 lines)
docker/conpot/dist/requirements.txt (vendored, new file)
@@ -0,0 +1,20 @@
pysnmp-mibs
pysmi
libtaxii>=1.1.0
crc16
scapy==2.4.3rc1
hpfeeds3
modbus-tk
stix-validator
stix
cybox
bacpypes==0.17.0
pyghmi==1.4.1
mixbox
modbus-tk
cpppo
fs==2.3.0
tftpy
# some freezegun versions broken
pycrypto
sphinx_rtd_theme
16
docker/conpot/dist/templates/IEC104/template.xml
vendored
|
@ -91,19 +91,19 @@
|
|||
<value type="value">1</value>
|
||||
</key>
|
||||
<key name="ifInOctets">
|
||||
<value type="value">1618895</value>
|
||||
<value type="function">conpot.emulators.misc.sysinfo.BytesRecv</value>
|
||||
</key>
|
||||
<key name="ifInUcastPkts">
|
||||
<value type="value">7018</value>
|
||||
<value type="function">conpot.emulators.misc.sysinfo.PacketsRecv</value>
|
||||
</key>
|
||||
<key name="ifInNUcastPkts">
|
||||
<value type="value">291</value>
|
||||
</key>
|
||||
<key name="ifOutOctets">
|
||||
<value type="value">455107</value>
|
||||
<value type="function">conpot.emulators.misc.sysinfo.BytesSent</value>
|
||||
</key>
|
||||
<key name="ifOutUcastPkts">
|
||||
<value type="value">872264</value>
|
||||
<value type="function">conpot.emulators.misc.sysinfo.PacketsSent</value>
|
||||
</key>
|
||||
<key name="ifOutUNcastPkts">
|
||||
<value type="value">143</value>
|
||||
|
@ -168,7 +168,7 @@
|
|||
<value type="value">0</value>
|
||||
</key>
|
||||
<key name="ipAdEntAddr">
|
||||
<value type="value">"217.172.190.137"</value>
|
||||
<value type="function">conpot.emulators.misc.sysinfo.LocalIP</value>
|
||||
</key>
|
||||
<key name="ipAdEntIfIndex">
|
||||
<value type="value">1</value>
|
||||
|
@ -290,7 +290,7 @@
|
|||
<value type="value">45</value>
|
||||
</key>
|
||||
<key name="tcpCurrEstab">
|
||||
<value type="value">0</value>
|
||||
<value type="function">conpot.emulators.misc.sysinfo.TcpCurrEstab</value>
|
||||
</key>
|
||||
<key name="tcpInSegs">
|
||||
<value type="value">30321</value>
|
||||
|
@ -305,7 +305,7 @@
|
|||
<value type="value">2</value>
|
||||
</key>
|
||||
<key name="tcpConnLocalAddress">
|
||||
<value type="value">"217.172.190.137"</value>
|
||||
<value type="function">conpot.emulators.misc.sysinfo.LocalIP</value>
|
||||
</key>
|
||||
<key name="tcpConnLocalPort">
|
||||
<value type="value">2404</value>
|
||||
|
@ -336,7 +336,7 @@
|
|||
<value type="value">47</value>
|
||||
</key>
|
||||
<key name="udpLocalAddress">
|
||||
<value type="value">"217.172.190.137"</value>
|
||||
<value type="value">"163.172.189.137"</value>
|
||||
</key>
|
||||
<key name="udpLocalPort">
|
||||
<value type="value">161</value>
|
||||
|
|
|
@ -11,7 +11,7 @@
|
|||
<!-- Core value that can be retrieved from the databus by key -->
|
||||
<key_value_mappings>
|
||||
<key name="power_simulator">
|
||||
<value type="function">conpot.protocols.kamstrup.usage_simulator.UsageSimulator</value>
|
||||
<value type="function">conpot.emulators.kamstrup.usage_simulator.UsageSimulator</value>
|
||||
</key>
|
||||
<key name="register_1024">
|
||||
<value type="value">0</value>
|
||||
|
|
|
@ -23,6 +23,8 @@ services:
|
|||
- CONPOT_TMP=/tmp/conpot
|
||||
tmpfs:
|
||||
- /tmp/conpot:uid=2000,gid=2000
|
||||
# cpu_count: 1
|
||||
# cpus: 0.25
|
||||
networks:
|
||||
- conpot_local_default
|
||||
ports:
|
||||
|
@ -35,14 +37,13 @@ services:
|
|||
- "2121:21"
|
||||
- "44818:44818"
|
||||
- "47808:47808/udp"
|
||||
image: "dtagdevsec/conpot:2006"
|
||||
image: "dtagdevsec/conpot:2204"
|
||||
read_only: true
|
||||
volumes:
|
||||
- /data/conpot/log:/var/log/conpot
|
||||
|
||||
# Conpot IEC104 service
|
||||
conpot_IEC104:
|
||||
build: .
|
||||
container_name: conpot_IEC104
|
||||
restart: always
|
||||
environment:
|
||||
|
@ -53,19 +54,20 @@ services:
|
|||
- CONPOT_TMP=/tmp/conpot
|
||||
tmpfs:
|
||||
- /tmp/conpot:uid=2000,gid=2000
|
||||
# cpu_count: 1
|
||||
# cpus: 0.25
|
||||
networks:
|
||||
- conpot_local_IEC104
|
||||
ports:
|
||||
# - "161:161/udp"
|
||||
- "2404:2404"
|
||||
image: "dtagdevsec/conpot:2006"
|
||||
image: "dtagdevsec/conpot:2204"
|
||||
read_only: true
|
||||
volumes:
|
||||
- /data/conpot/log:/var/log/conpot
|
||||
|
||||
# Conpot guardian_ast service
|
||||
conpot_guardian_ast:
|
||||
build: .
|
||||
container_name: conpot_guardian_ast
|
||||
restart: always
|
||||
environment:
|
||||
|
@ -76,18 +78,19 @@ services:
|
|||
- CONPOT_TMP=/tmp/conpot
|
||||
tmpfs:
|
||||
- /tmp/conpot:uid=2000,gid=2000
|
||||
# cpu_count: 1
|
||||
# cpus: 0.25
|
||||
networks:
|
||||
- conpot_local_guardian_ast
|
||||
ports:
|
||||
- "10001:10001"
|
||||
image: "dtagdevsec/conpot:2006"
|
||||
image: "dtagdevsec/conpot:2204"
|
||||
read_only: true
|
||||
volumes:
|
||||
- /data/conpot/log:/var/log/conpot
|
||||
|
||||
# Conpot ipmi
|
||||
conpot_ipmi:
|
||||
build: .
|
||||
container_name: conpot_ipmi
|
||||
restart: always
|
||||
environment:
|
||||
|
@ -98,18 +101,19 @@ services:
|
|||
- CONPOT_TMP=/tmp/conpot
|
||||
tmpfs:
|
||||
- /tmp/conpot:uid=2000,gid=2000
|
||||
# cpu_count: 1
|
||||
# cpus: 0.25
|
||||
networks:
|
||||
- conpot_local_ipmi
|
||||
ports:
|
||||
- "623:623/udp"
|
||||
image: "dtagdevsec/conpot:2006"
|
||||
image: "dtagdevsec/conpot:2204"
|
||||
read_only: true
|
||||
volumes:
|
||||
- /data/conpot/log:/var/log/conpot
|
||||
|
||||
# Conpot kamstrup_382
|
||||
conpot_kamstrup_382:
|
||||
build: .
|
||||
container_name: conpot_kamstrup_382
|
||||
restart: always
|
||||
environment:
|
||||
|
@ -120,12 +124,14 @@ services:
|
|||
- CONPOT_TMP=/tmp/conpot
|
||||
tmpfs:
|
||||
- /tmp/conpot:uid=2000,gid=2000
|
||||
# cpu_count: 1
|
||||
# cpus: 0.25
|
||||
networks:
|
||||
- conpot_local_kamstrup_382
|
||||
ports:
|
||||
- "1025:1025"
|
||||
- "50100:50100"
|
||||
image: "dtagdevsec/conpot:2006"
|
||||
image: "dtagdevsec/conpot:2204"
|
||||
read_only: true
|
||||
volumes:
|
||||
- /data/conpot/log:/var/log/conpot
|
||||
|
|
|
@ -1,10 +1,10 @@
|
|||
FROM alpine:3.14
|
||||
FROM alpine:3.15
|
||||
#
|
||||
# Include dist
|
||||
ADD dist/ /root/dist/
|
||||
COPY dist/ /root/dist/
|
||||
#
|
||||
# Get and install dependencies & packages
|
||||
RUN apk -U add \
|
||||
RUN apk --no-cache -U add \
|
||||
bash \
|
||||
build-base \
|
||||
git \
|
||||
|
@ -15,7 +15,21 @@ RUN apk -U add \
|
|||
mpfr-dev \
|
||||
openssl \
|
||||
openssl-dev \
|
||||
py3-appdirs \
|
||||
py3-asn1-modules \
|
||||
py3-attrs \
|
||||
py3-bcrypt \
|
||||
py3-cryptography \
|
||||
py3-dateutil \
|
||||
py3-greenlet \
|
||||
py3-mysqlclient \
|
||||
py3-openssl \
|
||||
py3-packaging \
|
||||
py3-parsing \
|
||||
py3-pip \
|
||||
py3-service_identity \
|
||||
py3-treq \
|
||||
py3-twisted \
|
||||
python3 \
|
||||
python3-dev && \
|
||||
#
|
||||
|
@ -29,9 +43,8 @@ RUN apk -U add \
|
|||
git clone --depth=1 https://github.com/micheloosterhof/cowrie -b v2.3.0 && \
|
||||
cd cowrie && \
|
||||
# git checkout 6b1e82915478292f1e77ed776866771772b48f2e && \
|
||||
# sed -i s/logfile.DailyLogFile/logfile.LogFile/g src/cowrie/python/logfile.py && \
|
||||
mkdir -p log && \
|
||||
sed -i '/packaging.*/d' requirements.txt && \
|
||||
cp /root/dist/requirements.txt . && \
|
||||
pip3 install --upgrade pip && \
|
||||
pip3 install -r requirements.txt && \
|
||||
#
|
||||
|
@ -61,6 +74,7 @@ RUN apk -U add \
|
|||
rm -rf /root/* /tmp/* && \
|
||||
rm -rf /var/cache/apk/* && \
|
||||
rm -rf /home/cowrie/cowrie/cowrie.pid && \
|
||||
rm -rf /home/cowrie/cowrie/.git && \
|
||||
unset PYTHON_DIR
|
||||
#
|
||||
# Start cowrie
|
||||
|
|
docker/cowrie/dist/requirements.txt (vendored, new file)
@@ -0,0 +1,2 @@
configparser==5.2.0
tftpy==0.8.2
|
@ -13,12 +13,14 @@ services:
|
|||
tmpfs:
|
||||
- /tmp/cowrie:uid=2000,gid=2000
|
||||
- /tmp/cowrie/data:uid=2000,gid=2000
|
||||
# cpu_count: 1
|
||||
# cpus: 0.25
|
||||
networks:
|
||||
- cowrie_local
|
||||
ports:
|
||||
- "22:22"
|
||||
- "23:23"
|
||||
image: "dtagdevsec/cowrie:2006"
|
||||
image: "dtagdevsec/cowrie:2204"
|
||||
read_only: true
|
||||
volumes:
|
||||
- /data/cowrie/downloads:/home/cowrie/cowrie/dl
|
||||
|
|
|
@ -1,11 +1,20 @@
|
|||
FROM alpine:3.14
|
||||
FROM alpine:3.15
|
||||
#
|
||||
# Include dist
|
||||
COPY dist/ /root/dist/
|
||||
#
|
||||
# Install packages
|
||||
RUN apk -U add \
|
||||
RUN apk --no-cache -U add \
|
||||
build-base \
|
||||
git \
|
||||
libcap \
|
||||
py3-colorama \
|
||||
py3-greenlet \
|
||||
py3-pip \
|
||||
py3-schedule \
|
||||
py3-sqlalchemy \
|
||||
py3-twisted \
|
||||
py3-wheel \
|
||||
python3 \
|
||||
python3-dev && \
|
||||
#
|
||||
|
@ -30,6 +39,7 @@ RUN apk -U add \
|
|||
sed -i "s#rotate_size = 10#rotate_size = 9999#g" /opt/ddospot/ddospot/pots/generic/genericpot.conf && \
|
||||
sed -i "s#rotate_size = 10#rotate_size = 9999#g" /opt/ddospot/ddospot/pots/ntp/ntpot.conf && \
|
||||
sed -i "s#rotate_size = 10#rotate_size = 9999#g" /opt/ddospot/ddospot/pots/ssdp/ssdpot.conf && \
|
||||
cp /root/dist/requirements.txt . && \
|
||||
pip3 install -r ddospot/requirements.txt && \
|
||||
setcap cap_net_bind_service=+ep /usr/bin/python3.9 && \
|
||||
#
|
||||
|
@ -43,6 +53,7 @@ RUN apk -U add \
|
|||
git \
|
||||
python3-dev && \
|
||||
rm -rf /root/* && \
|
||||
rm -rf /opt/ddospot/.git && \
|
||||
rm -rf /var/cache/apk/*
|
||||
#
|
||||
# Start ddospot
|
||||
|
|
docker/ddospot/dist/requirements.txt (vendored, new file)
@@ -0,0 +1,4 @@
git+https://github.com/hpfeeds/hpfeeds
tabulate
python-geoip
python-geoip-geolite2
|
@ -10,6 +10,8 @@ services:
|
|||
build: .
|
||||
container_name: ddospot
|
||||
restart: always
|
||||
# cpu_count: 1
|
||||
# cpus: 0.25
|
||||
networks:
|
||||
- ddospot_local
|
||||
ports:
|
||||
|
@ -18,7 +20,7 @@ services:
|
|||
- "123:123/udp"
|
||||
# - "161:161/udp"
|
||||
- "1900:1900/udp"
|
||||
image: "dtagdevsec/ddospot:2006"
|
||||
image: "dtagdevsec/ddospot:2204"
|
||||
read_only: true
|
||||
volumes:
|
||||
- /data/ddospot/log:/opt/ddospot/ddospot/logs
|
||||
|
|
|
@ -12,7 +12,7 @@ RUN npm install
|
|||
RUN grunt prod
|
||||
#
|
||||
# Move from builder
|
||||
FROM alpine:3.14
|
||||
FROM alpine:3.15
|
||||
#
|
||||
RUN apk -U --no-cache add \
|
||||
curl \
|
|
@ -14,5 +14,5 @@ services:
|
|||
- cyberchef_local
|
||||
ports:
|
||||
- "127.0.0.1:64299:8000"
|
||||
image: "dtagdevsec/cyberchef:2006"
|
||||
image: "dtagdevsec/cyberchef:2204"
|
||||
read_only: true
|
|
@ -1,4 +1,4 @@
|
|||
FROM alpine:3.14
|
||||
FROM alpine:3.15
|
||||
#
|
||||
# Setup env and apt
|
||||
RUN apk -U add \
|
|
@ -12,5 +12,5 @@ services:
|
|||
# condition: service_healthy
|
||||
ports:
|
||||
- "127.0.0.1:64302:9100"
|
||||
image: "dtagdevsec/head:2006"
|
||||
image: "dtagdevsec/head:2204"
|
||||
read_only: true
|
|
@ -20,7 +20,7 @@ services:
|
|||
- "2324:2324"
|
||||
- "4096:4096"
|
||||
- "9200:9200"
|
||||
image: "dtagdevsec/honeypy:2006"
|
||||
image: "dtagdevsec/honeypy:2204"
|
||||
read_only: true
|
||||
volumes:
|
||||
- /data/honeypy/log:/opt/honeypy/log
|
|
@ -14,6 +14,6 @@ services:
|
|||
- honeysap_local
|
||||
ports:
|
||||
- "3299:3299"
|
||||
image: "dtagdevsec/honeysap:2006"
|
||||
image: "dtagdevsec/honeysap:2204"
|
||||
volumes:
|
||||
- /data/honeysap/log:/opt/honeysap/log
|
|
@ -22,7 +22,7 @@ services:
|
|||
- rdpy_local
|
||||
ports:
|
||||
- "3389:3389"
|
||||
image: "dtagdevsec/rdpy:2006"
|
||||
image: "dtagdevsec/rdpy:2204"
|
||||
read_only: true
|
||||
volumes:
|
||||
- /data/rdpy/log:/var/log/rdpy
|
|
@ -1,11 +1,11 @@
|
|||
FROM alpine:3.14
|
||||
FROM alpine:3.15
|
||||
#
|
||||
# Setup apk
|
||||
RUN apk -U add --no-cache \
|
||||
build-base \
|
||||
git \
|
||||
g++ && \
|
||||
apk -U add go --repository http://dl-3.alpinelinux.org/alpine/edge/community && \
|
||||
apk -U add --no-cache go --repository http://dl-3.alpinelinux.org/alpine/edge/community && \
|
||||
#
|
||||
# Setup go, build dicompot
|
||||
mkdir -p /opt/go && \
|
||||
|
|
|
@ -13,11 +13,13 @@ services:
|
|||
build: .
|
||||
container_name: dicompot
|
||||
restart: always
|
||||
# cpu_count: 1
|
||||
# cpus: 0.25
|
||||
networks:
|
||||
- dicompot_local
|
||||
ports:
|
||||
- "11112:11112"
|
||||
image: "dtagdevsec/dicompot:2006"
|
||||
image: "dtagdevsec/dicompot:2204"
|
||||
read_only: true
|
||||
volumes:
|
||||
- /data/dicompot/log:/var/log/dicompot
|
||||
|
|
|
@ -2,14 +2,20 @@ FROM ubuntu:20.04
|
|||
ENV DEBIAN_FRONTEND noninteractive
|
||||
#
|
||||
# Include dist
|
||||
ADD dist/ /root/dist/
|
||||
COPY dist/ /root/dist/
|
||||
#
|
||||
# Install dependencies and packages
|
||||
RUN apt-get update -y && \
|
||||
# Determine arch, get and install packages
|
||||
RUN ARCH=$(arch) && \
|
||||
if [ "$ARCH" = "x86_64" ]; then ARCH="amd64"; fi && \
|
||||
if [ "$ARCH" = "aarch64" ]; then ARCH="arm64"; fi && \
|
||||
echo "$ARCH" && \
|
||||
cd /root/dist/ && \
|
||||
apt-get update -y && \
|
||||
apt-get install wget -y && \
|
||||
wget http://archive.ubuntu.com/ubuntu/pool/universe/libe/libemu/libemu2_0.2.0+git20120122-1.2build1_amd64.deb http://archive.ubuntu.com/ubuntu/pool/universe/libe/libemu/libemu-dev_0.2.0+git20120122-1.2build1_amd64.deb && \
|
||||
apt install ./libemu2_0.2.0+git20120122-1.2build1_amd64.deb ./libemu-dev_0.2.0+git20120122-1.2build1_amd64.deb -y && \
|
||||
apt-get dist-upgrade -y && \
|
||||
wget http://ftp.us.debian.org/debian/pool/main/libe/libemu/libemu2_0.2.0+git20120122-1.2+b1_$ARCH.deb \
|
||||
http://ftp.us.debian.org/debian/pool/main/libe/libemu/libemu-dev_0.2.0+git20120122-1.2+b1_$ARCH.deb && \
|
||||
apt install ./libemu2_0.2.0+git20120122-1.2+b1_$ARCH.deb \
|
||||
./libemu-dev_0.2.0+git20120122-1.2+b1_$ARCH.deb -y && \
|
||||
apt-get install -y --no-install-recommends \
|
||||
build-essential \
|
||||
ca-certificates \
|
||||
|
@ -19,7 +25,6 @@ RUN apt-get update -y && \
|
|||
git \
|
||||
libcap2-bin \
|
||||
libcurl4-openssl-dev \
|
||||
# libemu-dev \
|
||||
libev-dev \
|
||||
libglib2.0-dev \
|
||||
libloudmouth1-dev \
|
||||
|
@ -97,14 +102,16 @@ RUN apt-get update -y && \
|
|||
libnetfilter-queue1 \
|
||||
libnl-3-200 \
|
||||
libpcap0.8 \
|
||||
# libpython3.6 \
|
||||
libpython3.8 \
|
||||
libudns0 && \
|
||||
#
|
||||
apt-get autoremove --purge -y && \
|
||||
apt-get clean && \
|
||||
rm -rf /root/* /var/lib/apt/lists/* /tmp/* /var/tmp/*
|
||||
rm -rf /root/* /var/lib/apt/lists/* /tmp/* /var/tmp/* /root/.cache /opt/dionaea/.git
|
||||
#
|
||||
# Start dionaea
|
||||
STOPSIGNAL SIGINT
|
||||
# Dionaea sometimes hangs at 100% CPU usage, if detected process will be killed and container restarts per docker-compose settings
|
||||
HEALTHCHECK CMD if [ $(ps -C mpv -p 1 -o %cpu | tail -n 1 | cut -f 1 -d ".") -gt 75 ]; then kill -2 1; else exit 0; fi
|
||||
USER dionaea:dionaea
|
||||
CMD ["/opt/dionaea/bin/dionaea", "-u", "dionaea", "-g", "dionaea", "-c", "/opt/dionaea/etc/dionaea/dionaea.cfg"]
|
||||
|
|
|
@ -12,6 +12,8 @@ services:
|
|||
stdin_open: true
|
||||
tty: true
|
||||
restart: always
|
||||
# cpu_count: 1
|
||||
# cpus: 0.25
|
||||
networks:
|
||||
- dionaea_local
|
||||
ports:
|
||||
|
@ -27,11 +29,11 @@ services:
|
|||
- "1723:1723"
|
||||
- "1883:1883"
|
||||
- "3306:3306"
|
||||
- "5060:5060"
|
||||
- "5060:5060/udp"
|
||||
- "5061:5061"
|
||||
# - "5060:5060"
|
||||
# - "5060:5060/udp"
|
||||
# - "5061:5061"
|
||||
- "27017:27017"
|
||||
image: "dtagdevsec/dionaea:2006"
|
||||
image: "dtagdevsec/dionaea:2204"
|
||||
read_only: true
|
||||
volumes:
|
||||
- /data/dionaea/roots/ftp:/opt/dionaea/var/dionaea/roots/ftp
|
||||
|
|
|
@ -10,98 +10,128 @@ services:
|
|||
# Adbhoney service
|
||||
adbhoney:
|
||||
build: adbhoney/.
|
||||
image: "dtagdevsec/adbhoney:2006"
|
||||
image: "dtagdevsec/adbhoney:2204"
|
||||
|
||||
# Ciscoasa service
|
||||
ciscoasa:
|
||||
build: ciscoasa/.
|
||||
image: "dtagdevsec/ciscoasa:2006"
|
||||
image: "dtagdevsec/ciscoasa:2204"
|
||||
|
||||
# CitrixHoneypot service
|
||||
citrixhoneypot:
|
||||
build: citrixhoneypot/.
|
||||
image: "dtagdevsec/citrixhoneypot:2006"
|
||||
image: "dtagdevsec/citrixhoneypot:2204"
|
||||
|
||||
# Conpot IEC104 service
|
||||
conpot_IEC104:
|
||||
build: conpot/.
|
||||
image: "dtagdevsec/conpot:2006"
|
||||
image: "dtagdevsec/conpot:2204"
|
||||
|
||||
# Cowrie service
|
||||
cowrie:
|
||||
build: cowrie/.
|
||||
image: "dtagdevsec/cowrie:2006"
|
||||
image: "dtagdevsec/cowrie:2204"
|
||||
|
||||
# Ddospot service
|
||||
ddospot:
|
||||
build: ddospot/.
|
||||
image: "dtagdevsec/ddospot:2204"
|
||||
|
||||
# Dicompot service
|
||||
dicompot:
|
||||
build: dicompot/.
|
||||
image: "dtagdevsec/dicompot:2006"
|
||||
image: "dtagdevsec/dicompot:2204"
|
||||
|
||||
# Dionaea service
|
||||
dionaea:
|
||||
build: dionaea/.
|
||||
image: "dtagdevsec/dionaea:2006"
|
||||
image: "dtagdevsec/dionaea:2204"
|
||||
|
||||
# ElasticPot service
|
||||
elasticpot:
|
||||
build: elasticpot/.
|
||||
image: "dtagdevsec/elasticpot:2006"
|
||||
image: "dtagdevsec/elasticpot:2204"
|
||||
|
||||
# Endlessh service
|
||||
endlessh:
|
||||
build: endlessh/.
|
||||
image: "dtagdevsec/endlessh:2204"
|
||||
|
||||
# Glutton service
|
||||
glutton:
|
||||
build: glutton/.
|
||||
image: "dtagdevsec/glutton:2006"
|
||||
image: "dtagdevsec/glutton:2204"
|
||||
|
||||
# Hellpot service
|
||||
hellpot:
|
||||
build: hellpot/.
|
||||
image: "dtagdevsec/hellpot:2204"
|
||||
|
||||
# Heralding service
|
||||
heralding:
|
||||
build: heralding/.
|
||||
image: "dtagdevsec/heralding:2006"
|
||||
image: "dtagdevsec/heralding:2204"
|
||||
|
||||
# HoneyPy service
|
||||
honeypy:
|
||||
build: honeypy/.
|
||||
image: "dtagdevsec/honeypy:2006"
|
||||
# Honeypots service
|
||||
honeypots:
|
||||
build: honeypots/.
|
||||
image: "dtagdevsec/honeypots:2204"
|
||||
|
||||
# Honeytrap service
|
||||
honeytrap:
|
||||
build: honeytrap/.
|
||||
image: "dtagdevsec/honeytrap:2006"
|
||||
image: "dtagdevsec/honeytrap:2204"
|
||||
|
||||
# IPPHoney service
|
||||
ipphoney:
|
||||
build: ipphoney/.
|
||||
image: "dtagdevsec/ipphoney:2204"
|
||||
|
||||
# Log4Pot service
|
||||
log4pot:
|
||||
build: log4pot/.
|
||||
image: "dtagdevsec/log4pot:2204"
|
||||
|
||||
# Mailoney service
|
||||
mailoney:
|
||||
build: mailoney/.
|
||||
image: "dtagdevsec/mailoney:2006"
|
||||
image: "dtagdevsec/mailoney:2204"
|
||||
|
||||
# Medpot service
|
||||
medpot:
|
||||
build: medpot/.
|
||||
image: "dtagdevsec/medpot:2006"
|
||||
image: "dtagdevsec/medpot:2204"
|
||||
|
||||
# Rdpy service
|
||||
rdpy:
|
||||
build: rdpy/.
|
||||
image: "dtagdevsec/rdpy:2006"
|
||||
# Redishoneypot service
|
||||
redishoneypot:
|
||||
build: redishoneypot/.
|
||||
image: "dtagdevsec/redishoneypot:2204"
|
||||
|
||||
# Sentrypeer service
|
||||
sentrypeer:
|
||||
build: sentrypeer/.
|
||||
image: "dtagdevsec/sentrypeer:2204"
|
||||
|
||||
#### Snare / Tanner
|
||||
## Tanner Redis Service
|
||||
tanner_redis:
|
||||
build: tanner/redis/.
|
||||
image: "dtagdevsec/redis:2006"
|
||||
image: "dtagdevsec/redis:2204"
|
||||
|
||||
## PHP Sandbox service
|
||||
tanner_phpox:
|
||||
build: tanner/phpox/.
|
||||
image: "dtagdevsec/phpox:2006"
|
||||
image: "dtagdevsec/phpox:2204"
|
||||
|
||||
## Tanner API Service
|
||||
tanner_api:
|
||||
build: tanner/tanner/.
|
||||
image: "dtagdevsec/tanner:2006"
|
||||
image: "dtagdevsec/tanner:2204"
|
||||
|
||||
## Snare Service
|
||||
snare:
|
||||
build: tanner/snare/.
|
||||
image: "dtagdevsec/snare:2006"
|
||||
image: "dtagdevsec/snare:2204"
|
||||
|
||||
|
||||
##################
|
||||
|
@ -111,60 +141,55 @@ services:
|
|||
# Fatt service
|
||||
fatt:
|
||||
build: fatt/.
|
||||
image: "dtagdevsec/fatt:2006"
|
||||
image: "dtagdevsec/fatt:2204"
|
||||
|
||||
# P0f service
|
||||
p0f:
|
||||
build: p0f/.
|
||||
image: "dtagdevsec/p0f:2006"
|
||||
image: "dtagdevsec/p0f:2204"
|
||||
|
||||
# Suricata service
|
||||
suricata:
|
||||
build: suricata/.
|
||||
image: "dtagdevsec/suricata:2006"
|
||||
image: "dtagdevsec/suricata:2204"
|
||||
|
||||
|
||||
##################
|
||||
#### Tools
|
||||
##################
|
||||
|
||||
# Cyberchef service
|
||||
cyberchef:
|
||||
build: cyberchef/.
|
||||
image: "dtagdevsec/cyberchef:2006"
|
||||
|
||||
#### ELK
|
||||
## Elasticsearch service
|
||||
elasticsearch:
|
||||
build: elk/elasticsearch/.
|
||||
image: "dtagdevsec/elasticsearch:2006"
|
||||
image: "dtagdevsec/elasticsearch:2204"
|
||||
|
||||
## Kibana service
|
||||
kibana:
|
||||
build: elk/kibana/.
|
||||
image: "dtagdevsec/kibana:2006"
|
||||
image: "dtagdevsec/kibana:2204"
|
||||
|
||||
## Logstash service
|
||||
logstash:
|
||||
build: elk/logstash/.
|
||||
image: "dtagdevsec/logstash:2006"
|
||||
|
||||
## Elasticsearch-head service
|
||||
head:
|
||||
build: elk/head/.
|
||||
image: "dtagdevsec/head:2006"
|
||||
image: "dtagdevsec/logstash:2204"
|
||||
|
||||
# Ewsposter service
|
||||
ewsposter:
|
||||
build: ews/.
|
||||
image: "dtagdevsec/ewsposter:2006"
|
||||
build: ewsposter/.
|
||||
image: "dtagdevsec/ewsposter:2204"
|
||||
|
||||
# Nginx service
|
||||
nginx:
|
||||
build: heimdall/.
|
||||
image: "dtagdevsec/nginx:2006"
|
||||
build: nginx/.
|
||||
image: "dtagdevsec/nginx:2204"
|
||||
|
||||
# Spiderfoot service
|
||||
spiderfoot:
|
||||
build: spiderfoot/.
|
||||
image: "dtagdevsec/spiderfoot:2006"
|
||||
image: "dtagdevsec/spiderfoot:2204"
|
||||
|
||||
# Map Web Service
|
||||
map_web:
|
||||
build: elk/map/.
|
||||
image: "dtagdevsec/map:2204"
|
||||
|
|
|
@ -1,10 +1,10 @@
|
|||
FROM alpine:3.14
|
||||
FROM alpine:3.15
|
||||
#
|
||||
# Include dist
|
||||
ADD dist/ /root/dist/
|
||||
COPY dist/ /root/dist/
|
||||
#
|
||||
# Install packages
|
||||
RUN apk -U add \
|
||||
RUN apk -U --no-cache add \
|
||||
build-base \
|
||||
ca-certificates \
|
||||
git \
|
||||
|
@ -13,9 +13,19 @@ RUN apk -U add \
|
|||
openssl-dev \
|
||||
postgresql-dev \
|
||||
py3-cryptography \
|
||||
py3-elasticsearch \
|
||||
py3-geoip2 \
|
||||
py3-maxminddb \
|
||||
py3-mysqlclient \
|
||||
py3-packaging \
|
||||
py3-psycopg2 \
|
||||
py3-redis \
|
||||
py3-requests \
|
||||
py3-service_identity \
|
||||
py3-setuptools \
|
||||
py3-pip \
|
||||
py3-twisted \
|
||||
py3-wheel \
|
||||
python3 \
|
||||
python3-dev && \
|
||||
mkdir -p /opt && \
|
||||
|
@ -23,6 +33,7 @@ RUN apk -U add \
|
|||
git clone https://gitlab.com/bontchev/elasticpot.git/ && \
|
||||
cd elasticpot && \
|
||||
git checkout d12649730d819bd78ea622361b6c65120173ad45 && \
|
||||
cp /root/dist/requirements.txt . && \
|
||||
pip3 install -r requirements.txt && \
|
||||
#
|
||||
# Setup user, groups and configs
|
||||
|
@ -38,7 +49,7 @@ RUN apk -U add \
|
|||
postgresql-dev \
|
||||
python3-dev && \
|
||||
rm -rf /root/* && \
|
||||
rm -rf /var/cache/apk/*
|
||||
rm -rf /var/cache/apk/* /opt/elasticpot/.git
|
||||
#
|
||||
# Start elasticpot
|
||||
STOPSIGNAL SIGINT
|
||||
|
|
docker/elasticpot/dist/requirements.txt (vendored, new file)
@@ -0,0 +1,6 @@
configparser>=3.5.0
couchdb
hpfeeds>=3.0.0
influxdb
pymongo
rethinkdb>=2.4
|
@ -10,11 +10,13 @@ services:
|
|||
build: .
|
||||
container_name: elasticpot
|
||||
restart: always
|
||||
# cpu_count: 1
|
||||
# cpus: 0.25
|
||||
networks:
|
||||
- elasticpot_local
|
||||
ports:
|
||||
- "9200:9200"
|
||||
image: "dtagdevsec/elasticpot:2006"
|
||||
image: "dtagdevsec/elasticpot:2204"
|
||||
read_only: true
|
||||
volumes:
|
||||
- /data/elasticpot/log:/opt/elasticpot/log
|
||||
|
|
|
@ -10,7 +10,7 @@ services:
|
|||
restart: always
|
||||
environment:
|
||||
- bootstrap.memory_lock=true
|
||||
# - ES_JAVA_OPTS=-Xms2048m -Xmx2048m
|
||||
- ES_JAVA_OPTS=-Xms2048m -Xmx2048m
|
||||
- ES_TMPDIR=/tmp
|
||||
cap_add:
|
||||
- IPC_LOCK
|
||||
|
@ -21,10 +21,10 @@ services:
|
|||
nofile:
|
||||
soft: 65536
|
||||
hard: 65536
|
||||
# mem_limit: 4g
|
||||
mem_limit: 4g
|
||||
ports:
|
||||
- "127.0.0.1:64298:9200"
|
||||
image: "dtagdevsec/elasticsearch:2006"
|
||||
image: "dtagdevsec/elasticsearch:2204"
|
||||
volumes:
|
||||
- /data:/data
|
||||
|
||||
|
@ -39,7 +39,7 @@ services:
|
|||
condition: service_healthy
|
||||
ports:
|
||||
- "127.0.0.1:64296:5601"
|
||||
image: "dtagdevsec/kibana:2006"
|
||||
image: "dtagdevsec/kibana:2204"
|
||||
|
||||
## Logstash service
|
||||
logstash:
|
||||
|
@ -53,20 +53,49 @@ services:
|
|||
condition: service_healthy
|
||||
env_file:
|
||||
- /opt/tpot/etc/compose/elk_environment
|
||||
image: "dtagdevsec/logstash:2006"
|
||||
image: "dtagdevsec/logstash:2204"
|
||||
volumes:
|
||||
- /data:/data
|
||||
# - /root/tpotce/docker/elk/logstash/dist/logstash.conf:/etc/logstash/conf.d/logstash.conf
|
||||
|
||||
## Elasticsearch-head service
|
||||
head:
|
||||
build: head/.
|
||||
container_name: head
|
||||
# Map Redis Service
|
||||
map_redis:
|
||||
container_name: map_redis
|
||||
restart: always
|
||||
depends_on:
|
||||
elasticsearch:
|
||||
condition: service_healthy
|
||||
stop_signal: SIGKILL
|
||||
tty: true
|
||||
ports:
|
||||
- "127.0.0.1:64302:9100"
|
||||
image: "dtagdevsec/head:2006"
|
||||
- "127.0.0.1:6379:6379"
|
||||
image: "dtagdevsec/redis:2204"
|
||||
read_only: true
|
||||
|
||||
# Map Web Service
|
||||
map_web:
|
||||
build: .
|
||||
container_name: map_web
|
||||
restart: always
|
||||
environment:
|
||||
- MAP_COMMAND=AttackMapServer.py
|
||||
env_file:
|
||||
- /opt/tpot/etc/compose/elk_environment
|
||||
stop_signal: SIGKILL
|
||||
tty: true
|
||||
ports:
|
||||
- "127.0.0.1:64299:64299"
|
||||
image: "dtagdevsec/map:2204"
|
||||
depends_on:
|
||||
- map_redis
|
||||
|
||||
# Map Data Service
|
||||
map_data:
|
||||
container_name: map_data
|
||||
restart: always
|
||||
environment:
|
||||
- MAP_COMMAND=DataServer_v2.py
|
||||
env_file:
|
||||
- /opt/tpot/etc/compose/elk_environment
|
||||
stop_signal: SIGKILL
|
||||
tty: true
|
||||
image: "dtagdevsec/map:2204"
|
||||
depends_on:
|
||||
- map_redis
|
||||
|
|
|
@ -1,44 +1,42 @@
|
|||
FROM alpine:3.14
|
||||
FROM ubuntu:20.04
|
||||
#
|
||||
# VARS
|
||||
ENV ES_VER=7.17.0 \
|
||||
ES_JAVA_HOME=/usr/lib/jvm/java-16-openjdk
|
||||
|
||||
ENV ES_VER=8.0.1
|
||||
#
|
||||
# Include dist
|
||||
ADD dist/ /root/dist/
|
||||
COPY dist/ /root/dist/
|
||||
#
|
||||
RUN apk -U --no-cache add \
|
||||
aria2 \
|
||||
bash \
|
||||
curl \
|
||||
nss && \
|
||||
apk add --no-cache -X http://dl-cdn.alpinelinux.org/alpine/edge/community openjdk16-jre && \
|
||||
RUN apt-get update -y && \
|
||||
apt-get install -y \
|
||||
aria2 \
|
||||
curl && \
|
||||
#
|
||||
# Get and install packages
|
||||
# Determine arch, get and install packages
|
||||
ARCH=$(arch) && \
|
||||
if [ "$ARCH" = "x86_64" ]; then ES_ARCH="amd64"; fi && \
|
||||
if [ "$ARCH" = "aarch64" ]; then ES_ARCH="arm64"; fi && \
|
||||
echo "$ARCH" && \
|
||||
cd /root/dist/ && \
|
||||
mkdir -p /usr/share/elasticsearch/ && \
|
||||
aria2c -s 16 -x 16 https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-$ES_VER-linux-x86_64.tar.gz && \
|
||||
tar xvfz elasticsearch-$ES_VER-linux-x86_64.tar.gz --strip-components=1 -C /usr/share/elasticsearch/ && \
|
||||
rm -rf /usr/share/elasticsearch/jdk && \
|
||||
rm -rf /usr/share/elasticsearch/modules/x-pack-ml && \
|
||||
# For some reason Alpine 3.14 does not report the -x flag correctly and thus elasticsearch does not find java
|
||||
sed -i 's/! -x/! -e/g' /usr/share/elasticsearch/bin/elasticsearch-env && \
|
||||
aria2c -s 16 -x 16 https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-$ES_VER-$ES_ARCH.deb && \
|
||||
dpkg -i elasticsearch-$ES_VER-$ES_ARCH.deb && \
|
||||
#
|
||||
# Add and move files
|
||||
cd /root/dist/ && \
|
||||
# rm -rf /usr/share/elasticsearch/modules/x-pack-ml && \
|
||||
mkdir -p /usr/share/elasticsearch/config && \
|
||||
cp elasticsearch.yml /usr/share/elasticsearch/config/ && \
|
||||
cp elasticsearch.yml /etc/elasticsearch/ && \
|
||||
#
|
||||
# Setup user, groups and configs
|
||||
addgroup -g 2000 elasticsearch && \
|
||||
adduser -S -H -s /bin/ash -u 2000 -D -g 2000 elasticsearch && \
|
||||
chown -R elasticsearch:elasticsearch /usr/share/elasticsearch/ && \
|
||||
groupmod -g 2000 elasticsearch && \
|
||||
usermod -u 2000 elasticsearch && \
|
||||
chown -R root:elasticsearch /etc/default/elasticsearch \
|
||||
/etc/elasticsearch && \
|
||||
chown -R elasticsearch:elasticsearch /var/lib/elasticsearch \
|
||||
/var/log/elasticsearch && \
|
||||
#
|
||||
# Clean up
|
||||
apk del --purge aria2 && \
|
||||
rm -rf /root/* && \
|
||||
rm -rf /tmp/* && \
|
||||
rm -rf /var/cache/apk/*
|
||||
apt-get purge aria2 -y && \
|
||||
apt-get autoremove -y --purge && \
|
||||
apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/* /root/.cache /root/*
|
||||
#
|
||||
# Healthcheck
|
||||
HEALTHCHECK --retries=10 CMD curl -s -XGET 'http://127.0.0.1:9200/_cat/health'
|
||||
|
|
|
@@ -2,6 +2,8 @@ cluster.name: tpotcluster
node.name: "tpotcluster-node-01"
xpack.ml.enabled: false
xpack.security.enabled: false
xpack.security.transport.ssl.enabled: false
xpack.security.http.ssl.enabled: false
path:
  logs: /data/elk/log
  data: /data/elk/data
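Elastic Stack 8.x enables security by default, so the added xpack.security settings switch it off for this localhost-only setup. A quick sanity check, as a sketch assuming the 127.0.0.1:64298:9200 port mapping used by the Elasticsearch compose service elsewhere in this change:

# Should respond without credentials and report the tpotcluster health
curl -s http://127.0.0.1:64298/_cat/health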
|
|
@ -24,6 +24,6 @@ services:
|
|||
mem_limit: 2g
|
||||
ports:
|
||||
- "127.0.0.1:64298:9200"
|
||||
image: "dtagdevsec/elasticsearch:2006"
|
||||
image: "dtagdevsec/elasticsearch:2204"
|
||||
volumes:
|
||||
- /data:/data
|
||||
|
|
|
@ -1,30 +1,28 @@
|
|||
FROM node:16.13.2-alpine3.14
|
||||
FROM ubuntu:20.04
|
||||
#
|
||||
# VARS
|
||||
ENV KB_VER=7.17.0
|
||||
#
|
||||
ENV KB_VER=8.0.1
|
||||
# Include dist
|
||||
ADD dist/ /root/dist/
|
||||
COPY dist/ /root/dist/
|
||||
#
|
||||
RUN apk -U --no-cache add \
|
||||
RUN apt-get update -y && \
|
||||
apt-get install -y \
|
||||
aria2 \
|
||||
curl \
|
||||
gcompat && \
|
||||
curl && \
|
||||
#
|
||||
# Get and install packages
|
||||
cd /root/dist/ && \
|
||||
mkdir -p /usr/share/kibana/ && \
|
||||
aria2c -s 16 -x 16 https://artifacts.elastic.co/downloads/kibana/kibana-$KB_VER-linux-x86_64.tar.gz && \
|
||||
tar xvfz kibana-$KB_VER-linux-x86_64.tar.gz --strip-components=1 -C /usr/share/kibana/ && \
|
||||
#
|
||||
# Kibana's bundled node does not work in alpine
|
||||
rm /usr/share/kibana/node/bin/node && \
|
||||
ln -s /usr/local/bin/node /usr/share/kibana/node/bin/node && \
|
||||
#
|
||||
# Add and move files
|
||||
# Determine arch, get and install packages
|
||||
ARCH=$(arch) && \
|
||||
if [ "$ARCH" = "x86_64" ]; then KB_ARCH="amd64"; fi && \
|
||||
if [ "$ARCH" = "aarch64" ]; then KB_ARCH="arm64"; fi && \
|
||||
echo "$ARCH" && \
|
||||
cd /root/dist/ && \
|
||||
aria2c -s 16 -x 16 https://artifacts.elastic.co/downloads/kibana/kibana-$KB_VER-$KB_ARCH.deb && \
|
||||
dpkg -i kibana-$KB_VER-$KB_ARCH.deb && \
|
||||
#
|
||||
# Setup user, groups and configs
|
||||
mkdir -p /usr/share/kibana/config \
|
||||
/usr/share/kibana/data && \
|
||||
cp /etc/kibana/kibana.yml /usr/share/kibana/config && \
|
||||
sed -i 's/#server.basePath: ""/server.basePath: "\/kibana"/' /usr/share/kibana/config/kibana.yml && \
|
||||
sed -i 's/#server.host: "localhost"/server.host: "0.0.0.0"/' /usr/share/kibana/config/kibana.yml && \
|
||||
sed -i 's/#elasticsearch.hosts: \["http:\/\/localhost:9200"\]/elasticsearch.hosts: \["http:\/\/elasticsearch:9200"\]/' /usr/share/kibana/config/kibana.yml && \
|
||||
|
@ -36,15 +34,19 @@ RUN apk -U --no-cache add \
|
|||
echo "kibana.autocompleteTerminateAfter: 1000000" >> /usr/share/kibana/config/kibana.yml && \
|
||||
rm -rf /usr/share/kibana/optimize/bundles/* && \
|
||||
/usr/share/kibana/bin/kibana --optimize --allow-root && \
|
||||
addgroup -g 2000 kibana && \
|
||||
adduser -S -H -s /bin/ash -u 2000 -D -g 2000 kibana && \
|
||||
chown -R kibana:kibana /usr/share/kibana/ && \
|
||||
groupmod -g 2000 kibana && \
|
||||
usermod -u 2000 kibana && \
|
||||
chown -R root:kibana /etc/kibana && \
|
||||
chown -R kibana:kibana /usr/share/kibana/data \
|
||||
/run/kibana \
|
||||
/var/log/kibana \
|
||||
/var/lib/kibana && \
|
||||
chmod 755 -R /usr/share/kibana/config && \
|
||||
#
|
||||
# Clean up
|
||||
apk del --purge aria2 && \
|
||||
rm -rf /root/* && \
|
||||
rm -rf /tmp/* && \
|
||||
rm -rf /var/cache/apk/*
|
||||
apt-get purge aria2 -y && \
|
||||
apt-get autoremove -y --purge && \
|
||||
apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/* /root/.cache /root/*
|
||||
#
|
||||
# Healthcheck
|
||||
HEALTHCHECK --retries=10 CMD curl -s -XGET 'http://127.0.0.1:5601'
|
||||
|
|
|
@ -12,4 +12,4 @@ services:
|
|||
# condition: service_healthy
|
||||
ports:
|
||||
- "127.0.0.1:64296:5601"
|
||||
image: "dtagdevsec/kibana:2006"
|
||||
image: "dtagdevsec/kibana:2204"
|
||||
|
|
|
@ -1,72 +1,66 @@
|
|||
FROM alpine:3.14
|
||||
FROM ubuntu:20.04
|
||||
#
|
||||
# VARS
|
||||
ENV LS_VER=7.17.0
|
||||
ENV LS_VER=8.0.1
|
||||
# Include dist
|
||||
ADD dist/ /root/dist/
|
||||
COPY dist/ /root/dist/
|
||||
#
|
||||
# Setup env and apt
|
||||
#RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
|
||||
RUN apk -U --no-cache add \
|
||||
RUN apt-get update -y && \
|
||||
apt-get install -y \
|
||||
aria2 \
|
||||
autossh \
|
||||
bash \
|
||||
bzip2 \
|
||||
curl \
|
||||
libc6-compat \
|
||||
libzmq \
|
||||
nss \
|
||||
openssh && \
|
||||
apk add --no-cache -X http://dl-cdn.alpinelinux.org/alpine/edge/community openjdk16-jre && \
|
||||
openssh-client && \
|
||||
#
|
||||
# Get and install packages
|
||||
# Determine arch, get and install packages
|
||||
ARCH=$(arch) && \
|
||||
if [ "$ARCH" = "x86_64" ]; then LS_ARCH="amd64"; fi && \
|
||||
if [ "$ARCH" = "aarch64" ]; then LS_ARCH="arm64"; fi && \
|
||||
echo "$ARCH" && \
|
||||
mkdir -p /etc/listbot && \
|
||||
cd /etc/listbot && \
|
||||
aria2c -s16 -x 16 https://listbot.sicherheitstacho.eu/cve.yaml.bz2 && \
|
||||
aria2c -s16 -x 16 https://listbot.sicherheitstacho.eu/iprep.yaml.bz2 && \
|
||||
bunzip2 *.bz2 && \
|
||||
cd /root/dist/ && \
|
||||
mkdir -p /usr/share/logstash/ && \
|
||||
aria2c -s 16 -x 16 https://artifacts.elastic.co/downloads/logstash/logstash-$LS_VER-linux-x86_64.tar.gz && \
|
||||
tar xvfz logstash-$LS_VER-linux-x86_64.tar.gz --strip-components=1 -C /usr/share/logstash/ && \
|
||||
rm -rf /usr/share/logstash/jdk && \
|
||||
# For some reason Alpine 3.14 does not report the -x flag correctly and thus elasticsearch does not find java
|
||||
sed -i 's/! -x/! -e/g' /usr/share/logstash/bin/logstash.lib.sh && \
|
||||
/usr/share/logstash/bin/logstash-plugin install --preserve --no-verify logstash-filter-translate && \
|
||||
/usr/share/logstash/bin/logstash-plugin install --preserve --no-verify logstash-input-http && \
|
||||
/usr/share/logstash/bin/logstash-plugin install --preserve --no-verify logstash-output-gelf && \
|
||||
/usr/share/logstash/bin/logstash-plugin install --preserve --no-verify logstash-output-http && \
|
||||
/usr/share/logstash/bin/logstash-plugin install --preserve --no-verify logstash-output-syslog && \
|
||||
aria2c -s 16 -x 16 https://artifacts.elastic.co/downloads/logstash/logstash-$LS_VER-$LS_ARCH.deb && \
|
||||
dpkg -i logstash-$LS_VER-$LS_ARCH.deb && \
|
||||
# /usr/share/logstash/bin/logstash-plugin install logstash-output-gelf logstash-output-syslog && \
|
||||
#
|
||||
# Add and move files
|
||||
cd /root/dist/ && \
|
||||
cp update.sh /usr/bin/ && \
|
||||
chmod u+x /usr/bin/update.sh && \
|
||||
mkdir -p /etc/logstash/conf.d && \
|
||||
cp logstash.conf /etc/logstash/conf.d/ && \
|
||||
cp http_input.conf /etc/logstash/conf.d/ && \
|
||||
cp http_output.conf /etc/logstash/conf.d/ && \
|
||||
cp entrypoint.sh /usr/bin/ && \
|
||||
chmod u+x /usr/bin/entrypoint.sh && \
|
||||
mkdir -p /usr/share/logstash/config && \
|
||||
cp logstash.conf /etc/logstash/ && \
|
||||
cp http_input.conf /etc/logstash/ && \
|
||||
cp http_output.conf /etc/logstash/ && \
|
||||
cp pipelines.yml /usr/share/logstash/config/pipelines.yml && \
|
||||
cp pipelines_pot.yml /usr/share/logstash/config/pipelines_pot.yml && \
|
||||
cp tpot_es_template.json /etc/logstash/ && \
|
||||
cp pipelines_sensor.yml /usr/share/logstash/config/pipelines_sensor.yml && \
|
||||
cp tpot-template.json /etc/logstash/ && \
|
||||
rm /etc/logstash/pipelines.yml && \
|
||||
rm /etc/logstash/logstash.yml && \
|
||||
#
|
||||
# Setup user, groups and configs
|
||||
addgroup -g 2000 logstash && \
|
||||
adduser -S -H -s /bin/bash -u 2000 -D -g 2000 logstash && \
|
||||
chown -R logstash:logstash /usr/share/logstash && \
|
||||
chown -R logstash:logstash /etc/listbot && \
|
||||
chmod 755 /usr/bin/update.sh && \
|
||||
groupmod -g 2000 logstash && \
|
||||
usermod -u 2000 logstash && \
|
||||
chown -R logstash:logstash /etc/listbot \
|
||||
/var/log/logstash/ \
|
||||
/var/lib/logstash \
|
||||
/usr/share/logstash/data \
|
||||
/usr/share/logstash/config/pipelines* && \
|
||||
chmod 755 /usr/bin/entrypoint.sh && \
|
||||
#
|
||||
# Clean up
|
||||
rm -rf /root/* && \
|
||||
rm -rf /tmp/* && \
|
||||
rm -rf /var/cache/apk/*
|
||||
apt-get autoremove -y --purge && \
|
||||
apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/* /root/.cache /root/*
|
||||
#
|
||||
# Healthcheck
|
||||
HEALTHCHECK --retries=10 CMD curl -s -XGET 'http://127.0.0.1:9600'
|
||||
#
|
||||
# Start logstash
|
||||
#USER logstash:logstash
|
||||
#CMD update.sh && exec /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/logstash.conf --config.reload.automatic --java-execution --log.level debug
|
||||
#CMD update.sh && exec /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/http_output.conf --config.reload.automatic --java-execution
|
||||
CMD update.sh && exec /usr/share/logstash/bin/logstash --config.reload.automatic --java-execution
|
||||
USER logstash:logstash
|
||||
CMD entrypoint.sh && exec /usr/share/logstash/bin/logstash --config.reload.automatic
|
||||
|
|
|
@ -1,68 +0,0 @@
FROM alpine:3.14
#
# VARS
ENV LS_VER=7.15.1
# Include dist
ADD dist/ /root/dist/
#
# Setup env and apt
#RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
RUN apk -U --no-cache add \
    aria2 \
    bash \
    bzip2 \
    curl \
    libc6-compat \
    libzmq \
    nss && \
  apk add --no-cache -X http://dl-cdn.alpinelinux.org/alpine/edge/community openjdk16-jre && \
#
# Get and install packages
  mkdir -p /etc/listbot && \
  cd /etc/listbot && \
  aria2c -s16 -x 16 https://listbot.sicherheitstacho.eu/cve.yaml.bz2 && \
  aria2c -s16 -x 16 https://listbot.sicherheitstacho.eu/iprep.yaml.bz2 && \
  bunzip2 *.bz2 && \
  cd /root/dist/ && \
  mkdir -p /usr/share/logstash/ && \
  aria2c -s 16 -x 16 https://artifacts.elastic.co/downloads/logstash/logstash-$LS_VER-linux-x86_64.tar.gz && \
  tar xvfz logstash-$LS_VER-linux-x86_64.tar.gz --strip-components=1 -C /usr/share/logstash/ && \
  rm -rf /usr/share/logstash/jdk && \
# For some reason Alpine 3.14 does not report the -x flag correctly and thus elasticsearch does not find java
  sed -i 's/! -x/! -e/g' /usr/share/logstash/bin/logstash.lib.sh && \
  /usr/share/logstash/bin/logstash-plugin install logstash-filter-translate && \
  /usr/share/logstash/bin/logstash-plugin install logstash-input-http && \
  /usr/share/logstash/bin/logstash-plugin install logstash-output-gelf && \
  /usr/share/logstash/bin/logstash-plugin install logstash-output-http && \
  /usr/share/logstash/bin/logstash-plugin install logstash-output-syslog && \
#
# Add and move files
  cd /root/dist/ && \
  cp update.sh /usr/bin/ && \
  chmod u+x /usr/bin/update.sh && \
  mkdir -p /etc/logstash/conf.d && \
  cp logstash.conf /etc/logstash/conf.d/ && \
  cp http.conf /etc/logstash/conf.d/ && \
  cp pipelines.yml /usr/share/logstash/config/pipelines.yml && \
  cp tpot_es_template.json /etc/logstash/ && \
#
# Setup user, groups and configs
  addgroup -g 2000 logstash && \
  adduser -S -H -s /bin/bash -u 2000 -D -g 2000 logstash && \
  chown -R logstash:logstash /usr/share/logstash && \
  chown -R logstash:logstash /etc/listbot && \
  chmod 755 /usr/bin/update.sh && \
#
# Clean up
  rm -rf /root/* && \
  rm -rf /tmp/* && \
  rm -rf /var/cache/apk/*
#
# Healthcheck
HEALTHCHECK --retries=10 CMD curl -s -XGET 'http://127.0.0.1:9600'
#
# Start logstash
#USER logstash:logstash
#CMD update.sh && exec /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/logstash.conf --config.reload.automatic --java-execution --log.level debug
#CMD update.sh && exec /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/logstash.conf --config.reload.automatic --java-execution
CMD update.sh && exec /usr/share/logstash/bin/logstash --config.reload.automatic --java-execution
87
docker/elk/logstash/dist/entrypoint.sh
vendored
Normal file
@ -0,0 +1,87 @@
#!/bin/bash

# Let's ensure normal operation on exit or if interrupted ...
function fuCLEANUP {
  exit 0
}
trap fuCLEANUP EXIT

# Check internet availability
function fuCHECKINET () {
  mySITES=$1
  error=0
  for i in $mySITES;
    do
      curl --connect-timeout 5 -Is $i 2>&1 > /dev/null
      if [ $? -ne 0 ];
        then
          let error+=1
      fi;
  done;
  echo $error
}

# Check for connectivity and download latest translation maps
myCHECK=$(fuCHECKINET "listbot.sicherheitstacho.eu")
if [ "$myCHECK" == "0" ];
  then
    echo "Connection to Listbot looks good, now downloading latest translation maps."
    cd /etc/listbot
    aria2c -s16 -x 16 https://listbot.sicherheitstacho.eu/cve.yaml.bz2 && \
    aria2c -s16 -x 16 https://listbot.sicherheitstacho.eu/iprep.yaml.bz2 && \
    bunzip2 -f *.bz2
    cd /
  else
    echo "Cannot reach Listbot, starting Logstash without latest translation maps."
fi

# Distributed T-Pot installation needs a different pipeline config and autossh tunnel.
if [ "$MY_TPOT_TYPE" == "SENSOR" ];
  then
    echo
    echo "Distributed T-Pot setup, sending T-Pot logs to $MY_HIVE_IP."
    echo
    echo "T-Pot type: $MY_TPOT_TYPE"
    echo "Keyfile used: $MY_SENSOR_PRIVATEKEYFILE"
    echo "Hive username: $MY_HIVE_USERNAME"
    echo "Hive IP: $MY_HIVE_IP"
    echo
    # Ensure correct file permissions for private keyfile or SSH will ask for password
    chmod 600 $MY_SENSOR_PRIVATEKEYFILE
    cp /usr/share/logstash/config/pipelines_sensor.yml /usr/share/logstash/config/pipelines.yml
    autossh -f -M 0 -4 -l $MY_HIVE_USERNAME -i $MY_SENSOR_PRIVATEKEYFILE -p 64295 -N -L64305:127.0.0.1:64305 $MY_HIVE_IP -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" -o "StrictHostKeyChecking=no" -o "UserKnownHostsFile=/dev/null"
    exit 0
fi

# Index Management is happening through ILM, but we need to put T-Pot ILM setting on ES.
myTPOTILM=$(curl -s -XGET "http://elasticsearch:9200/_ilm/policy/tpot" | grep "Lifecycle policy not found: tpot" -c)
if [ "$myTPOTILM" == "1" ];
  then
    echo "T-Pot ILM template not found on ES, putting it on ES now."
    curl -XPUT "http://elasticsearch:9200/_ilm/policy/tpot" -H 'Content-Type: application/json' -d'
    {
      "policy": {
        "phases": {
          "hot": {
            "min_age": "0ms",
            "actions": {}
          },
          "delete": {
            "min_age": "30d",
            "actions": {
              "delete": {
                "delete_searchable_snapshot": true
              }
            }
          }
        },
        "_meta": {
          "managed": true,
          "description": "T-Pot ILM policy with a retention of 30 days"
        }
      }
    }'
  else
    echo "T-Pot ILM already configured or ES not available."
fi
echo
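Since entrypoint.sh only creates the `tpot` ILM policy when it is missing, a quick way to confirm the 30 day retention policy actually landed on Elasticsearch might look like the sketch below. It reuses the endpoint the script itself queries and assumes `elasticsearch:9200` is reachable and `jq` is installed.

```bash
#!/bin/bash
# Sketch: confirm the T-Pot ILM policy exists and inspect its phases.
# Assumes the Elasticsearch host "elasticsearch:9200" from entrypoint.sh and an available jq.
curl -s -XGET "http://elasticsearch:9200/_ilm/policy/tpot" | jq '.tpot.policy.phases'
```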
8
docker/elk/logstash/dist/http_input.conf
vendored
@ -3,7 +3,8 @@ input {
  http {
    id => "tpot"
    host => "0.0.0.0"
    port => "80"
    port => "64305"
    ecs_compatibility => disabled
  }
}

@ -11,9 +12,10 @@ input {
output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    # With templates now being legacy and ILM in place we need to set the daily index with its template manually. Otherwise a new index might be created with differents settings configured through Kibana.
    # With templates now being legacy we need to set the daily index with its template manually. Otherwise a new index might be created with differents settings configured through Kibana.
    index => "logstash-%{+YYYY.MM.dd}"
    template => "/etc/logstash/tpot_es_template.json"
    template => "/etc/logstash/tpot-template.json"
    template_overwrite => "true"
  }
}
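The http input now listens on port 64305 instead of 80, which matches the reverse end of the autossh tunnel a HIVE_SENSOR opens in entrypoint.sh (`-L64305:127.0.0.1:64305`). A rough way to check that chain from a sensor is sketched below; the sample payload and field values are purely illustrative, and the input's exact response body may differ.

```bash
#!/bin/bash
# Sketch: from a HIVE_SENSOR, confirm the autossh tunnel is running and that the
# HIVE's Logstash http input on 64305 accepts a POST. Payload is illustrative only.
pgrep -af autossh
curl -s -XPOST 'http://127.0.0.1:64305' \
     -H 'Content-Type: application/json' \
     -d '{"type": "Test", "src_ip": "192.0.2.1"}'
```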
140
docker/elk/logstash/dist/http_output.conf
vendored
@ -1,12 +1,12 @@
# Input section
input {

# Fatt
  file {
  # Fatt
  file {
    path => ["/data/fatt/log/fatt.log"]
    codec => json
    codec => json
    type => "Fatt"
  }
  }

# Suricata
  file {
@ -119,20 +119,6 @@ input {
    type => "Honeypots"
  }

# Honeypy
  file {
    path => ["/data/honeypy/log/json.log"]
    codec => json
    type => "Honeypy"
  }

# Honeysap
  file {
    path => ["/data/honeysap/log/honeysap-external.log"]
    codec => json
    type => "Honeysap"
  }

# Honeytrap
  file {
    path => ["/data/honeytrap/log/attackers.json"]
@ -168,12 +154,6 @@ input {
    type => "Medpot"
  }

# Rdpy
  file {
    path => ["/data/rdpy/log/rdpy.log"]
    type => "Rdpy"
  }

# Redishoneypot
  file {
    path => ["/data/redishoneypot/log/redishoneypot.log"]
@ -188,6 +168,13 @@ input {
    type => "NGINX"
  }

# Sentrypeer
  file {
    path => ["/data/sentrypeer/log/sentrypeer.json"]
    codec => json
    type => "Sentrypeer"
  }

# Tanner
  file {
    path => ["/data/tanner/log/tanner_report.json"]
@ -228,8 +215,8 @@ filter {
  }
  translate {
    refresh_interval => 86400
    field => "[alert][signature_id]"
    destination => "[alert][cve_id]"
    source => "[alert][signature_id]"
    target => "[alert][cve_id]"
    dictionary_path => "/etc/listbot/cve.yaml"
    # fallback => "-"
  }
@ -279,13 +266,13 @@ filter {

# CitrixHoneypot
  if [type] == "CitrixHoneypot" {
    grok {
      match => {
        "message" => [ "\A\(%{IPV4:src_ip:string}:%{INT:src_port:integer}\): %{JAVAMETHOD:http.http_method:string}%{SPACE}%{CISCO_REASON:fileinfo.state:string}: %{UNIXPATH:fileinfo.filename:string}",
        "\A\(%{IPV4:src_ip:string}:%{INT:src_port:integer}\): %{JAVAMETHOD:http.http_method:string}%{SPACE}%{CISCO_REASON:fileinfo.state:string}: %{GREEDYDATA:payload:string}",
    grok {
      match => {
        "message" => [ "\A\(%{IPV4:src_ip:string}:%{INT:src_port:integer}\): %{JAVAMETHOD:http.http_method:string}%{SPACE}%{CISCO_REASON:fileinfo.state:string}: %{UNIXPATH:fileinfo.filename:string}",
        "\A\(%{IPV4:src_ip:string}:%{INT:src_port:integer}\): %{JAVAMETHOD:http.http_method:string}%{SPACE}%{CISCO_REASON:fileinfo.state:string}: %{GREEDYDATA:payload:string}",
        "\A\(%{IPV4:src_ip:string}:%{INT:src_port:integer}\): %{S3_REQUEST_LINE:msg:string} %{CISCO_REASON:fileinfo.state:string}: %{GREEDYDATA:payload:string:string}",
        "\A\(%{IPV4:src_ip:string}:%{INT:src_port:integer}\): %{GREEDYDATA:msg:string}" ]
      }
        "\A\(%{IPV4:src_ip:string}:%{INT:src_port:integer}\): %{GREEDYDATA:msg:string}" ]
      }
    }
    date {
      match => [ "asctime", "ISO8601" ]
@ -301,18 +288,18 @@ filter {
      }
    }
  }

# Conpot
  if [type] == "ConPot" {
    date {
      match => [ "timestamp", "ISO8601" ]
    }
    mutate {
      rename => {
        "dst_port" => "dest_port"
        "dst_ip" => "dest_ip"
      }
    }
    mutate {
      rename => {
        "dst_port" => "dest_port"
        "dst_ip" => "dest_ip"
      }
    }
  }

# Cowrie
@ -439,7 +426,7 @@ filter {
# Example: 2021-10-29T21:08:31.026Z CLOSE host=1.2.3.4 port=12345 fd=4 time=20.015 bytes=24
# Example: 2021-10-29T21:08:11.011Z ACCEPT host=1.2.3.4 port=12346 fd=4 n=1/4096
  if [type] == "Endlessh" {
    grok { match => { "message" => [ "\A%{TIMESTAMP_ISO8601:timestamp}%{SPACE}%{WORD:reason}%{SPACE}host=%{IPV4:src_ip}%{SPACE}port=%{INT:src_port}%{SPACE}fd=%{INT}%{SPACE}time=%{SECOND:duration}%{SPACE}bytes=%{NUMBER:bytes}", "\A%{TIMESTAMP_ISO8601:timestamp}%{SPACE}%{WORD:reason}%{SPACE}host=%{IPV4:src_ip}%{SPACE}port=%{INT:src_port}%{SPACE}fd=%{INT}%{SPACE}n=%{INT}/%{INT}" ] } }
    grok { match => { "message" => [ "\A%{TIMESTAMP_ISO8601:timestamp}%{SPACE}%{WORD:reason}%{SPACE}host=%{IPV4:src_ip}%{SPACE}port=%{INT:src_port}%{SPACE}fd=%{INT}%{SPACE}time=%{SECOND:duration}%{SPACE}bytes=%{NUMBER:bytes}", "\A%{TIMESTAMP_ISO8601:timestamp}%{SPACE}%{WORD:reason}%{SPACE}host=%{IPV4:src_ip}%{SPACE}port=%{INT:src_port}%{SPACE}fd=%{INT}%{SPACE}n=%{INT}/%{INT}" ] } }
    date {
      match => [ "timestamp", "ISO8601" ]
      remove_field => ["timestamp"]
@ -494,17 +481,6 @@ filter {
    }
  }

# Honeypy
  if [type] == "Honeypy" {
    date {
      match => [ "timestamp", "ISO8601" ]
      remove_field => ["timestamp"]
      remove_field => ["date"]
      remove_field => ["time"]
      remove_field => ["millisecond"]
    }
  }

# Honeypots
  if [type] == "Honeypots" {
    date {
@ -512,31 +488,6 @@ filter {
    }
  }

# Honeysap
  if [type] == "Honeysap" {
    date {
      match => [ "timestamp", "yyyy-MM-dd HH:mm:ss.SSSSSS" ]
      remove_field => ["timestamp"]
    }
    mutate {
      rename => {
        "[data][error_msg]" => "event_type"
        "service" => "sensor"
        "source_port" => "src_port"
        "source_ip" => "src_ip"
        "target_port" => "dest_port"
        "target_ip" => "dest_ip"
      }
      remove_field => "event"
      remove_field => "return_code"
    }
    if [data] {
      mutate {
        remove_field => "[data]"
      }
    }
  }

# Honeytrap
  if [type] == "Honeytrap" {
    date {
@ -609,18 +560,6 @@ filter {
    }
  }

# Rdpy
  if [type] == "Rdpy" {
    grok { match => { "message" => [ "\A%{TIMESTAMP_ISO8601:timestamp},domain:%{CISCO_REASON:domain},username:%{CISCO_REASON:username},password:%{CISCO_REASON:password},hostname:%{GREEDYDATA:hostname}", "\A%{TIMESTAMP_ISO8601:timestamp},Connection from %{IPV4:src_ip}:%{INT:src_port:integer}" ] } }
    date {
      match => [ "timestamp", "ISO8601" ]
      remove_field => ["timestamp"]
    }
    mutate {
      add_field => { "dest_port" => "3389" }
    }
  }

# Redishoneypot
  if [type] == "Redishoneypot" {
    date {
@ -630,8 +569,8 @@ filter {
    }
    mutate {
      split => { "addr" => ":" }
      add_field => {
        "src_ip" => "%{[addr][0]}"
      add_field => {
        "src_ip" => "%{[addr][0]}"
        "src_port" => "%{[addr][1]}"
        "dest_port" => "6379"
        "dest_ip" => "${MY_EXTIP}"
@ -652,6 +591,21 @@ filter {
    }
  }

# Sentrypeer
  if [type] == "Sentrypeer" {
    date {
      match => [ "event_timestamp", "yyyy-MM-dd HH:mm:ss.SSSSSSSSS" ]
      remove_field => ["event_timestamp"]
    }
    mutate {
      rename => {
        "source_ip" => "src_ip"
        "destination_ip" => "dest_ip"
      }
      add_field => { "dest_port" => "5060" }
    }
  }

# Tanner
  if [type] == "Tanner" {
    date {
@ -680,7 +634,7 @@ if "_jsonparsefailure" in [tags] { drop {} }
  }

# Add geo coordinates / ASN info / IP rep.
  if [src_ip] {
  if [src_ip] {
    geoip {
      cache_size => 10000
      source => "src_ip"
@ -693,8 +647,8 @@ if "_jsonparsefailure" in [tags] { drop {} }
    }
    translate {
      refresh_interval => 86400
      field => "src_ip"
      destination => "ip_rep"
      source => "src_ip"
      target => "ip_rep"
      dictionary_path => "/etc/listbot/iprep.yaml"
    }
  }
106
docker/elk/logstash/dist/logstash.conf
vendored
@ -119,20 +119,6 @@ input {
    type => "Honeypots"
  }

# Honeypy
  file {
    path => ["/data/honeypy/log/json.log"]
    codec => json
    type => "Honeypy"
  }

# Honeysap
  file {
    path => ["/data/honeysap/log/honeysap-external.log"]
    codec => json
    type => "Honeysap"
  }

# Honeytrap
  file {
    path => ["/data/honeytrap/log/attackers.json"]
@ -168,12 +154,6 @@ input {
    type => "Medpot"
  }

# Rdpy
  file {
    path => ["/data/rdpy/log/rdpy.log"]
    type => "Rdpy"
  }

# Redishoneypot
  file {
    path => ["/data/redishoneypot/log/redishoneypot.log"]
@ -181,6 +161,13 @@ input {
    type => "Redishoneypot"
  }

# Sentrypeer
  file {
    path => ["/data/sentrypeer/log/sentrypeer.json"]
    codec => json
    type => "Sentrypeer"
  }

# Host NGINX
  file {
    path => ["/data/nginx/log/access.log"]
@ -228,8 +215,8 @@ filter {
  }
  translate {
    refresh_interval => 86400
    field => "[alert][signature_id]"
    destination => "[alert][cve_id]"
    source => "[alert][signature_id]"
    target => "[alert][cve_id]"
    dictionary_path => "/etc/listbot/cve.yaml"
    # fallback => "-"
  }
@ -494,17 +481,6 @@ filter {
    }
  }

# Honeypy
  if [type] == "Honeypy" {
    date {
      match => [ "timestamp", "ISO8601" ]
      remove_field => ["timestamp"]
      remove_field => ["date"]
      remove_field => ["time"]
      remove_field => ["millisecond"]
    }
  }

# Honeypots
  if [type] == "Honeypots" {
    date {
@ -512,31 +488,6 @@ filter {
    }
  }

# Honeysap
  if [type] == "Honeysap" {
    date {
      match => [ "timestamp", "yyyy-MM-dd HH:mm:ss.SSSSSS" ]
      remove_field => ["timestamp"]
    }
    mutate {
      rename => {
        "[data][error_msg]" => "event_type"
        "service" => "sensor"
        "source_port" => "src_port"
        "source_ip" => "src_ip"
        "target_port" => "dest_port"
        "target_ip" => "dest_ip"
      }
      remove_field => "event"
      remove_field => "return_code"
    }
    if [data] {
      mutate {
        remove_field => "[data]"
      }
    }
  }

# Honeytrap
  if [type] == "Honeytrap" {
    date {
@ -609,18 +560,6 @@ filter {
    }
  }

# Rdpy
  if [type] == "Rdpy" {
    grok { match => { "message" => [ "\A%{TIMESTAMP_ISO8601:timestamp},domain:%{CISCO_REASON:domain},username:%{CISCO_REASON:username},password:%{CISCO_REASON:password},hostname:%{GREEDYDATA:hostname}", "\A%{TIMESTAMP_ISO8601:timestamp},Connection from %{IPV4:src_ip}:%{INT:src_port:integer}" ] } }
    date {
      match => [ "timestamp", "ISO8601" ]
      remove_field => ["timestamp"]
    }
    mutate {
      add_field => { "dest_port" => "3389" }
    }
  }

# Redishoneypot
  if [type] == "Redishoneypot" {
    date {
@ -652,6 +591,21 @@ filter {
    }
  }

# Sentrypeer
  if [type] == "Sentrypeer" {
    date {
      match => [ "event_timestamp", "yyyy-MM-dd HH:mm:ss.SSSSSSSSS" ]
      remove_field => ["event_timestamp"]
    }
    mutate {
      rename => {
        "source_ip" => "src_ip"
        "destination_ip" => "dest_ip"
      }
      add_field => { "dest_port" => "5060" }
    }
  }

# Tanner
  if [type] == "Tanner" {
    date {
@ -680,7 +634,7 @@ if "_jsonparsefailure" in [tags] { drop {} }
  }

# Add geo coordinates / ASN info / IP rep.
  if [src_ip] {
  if [src_ip] {
    geoip {
      cache_size => 10000
      source => "src_ip"
@ -693,8 +647,8 @@ if "_jsonparsefailure" in [tags] { drop {} }
    }
    translate {
      refresh_interval => 86400
      field => "src_ip"
      destination => "ip_rep"
      source => "src_ip"
      target => "ip_rep"
      dictionary_path => "/etc/listbot/iprep.yaml"
    }
  }
@ -746,10 +700,10 @@ if "_jsonparsefailure" in [tags] { drop {} }
output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    # With templates now being legacy and ILM in place we need to set the daily index with its template manually. Otherwise a new index might be created with differents settings configured through Kibana.
    # With templates now being legacy we need to set the daily index with its template manually. Otherwise a new index might be created with differents settings configured through Kibana.
    index => "logstash-%{+YYYY.MM.dd}"
    template => "/etc/logstash/tpot_es_template.json"
    #document_type => "doc"
    template => "/etc/logstash/tpot-template.json"
    template_overwrite => "true"
  }

#if [type] == "Suricata" {
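With the elasticsearch output pushing the legacy template from `/etc/logstash/tpot-template.json` and ILM handling retention, one way to sanity-check the output side once the stack is up might be the sketch below. It assumes `elasticsearch:9200` is reachable and that daily indices follow the `logstash-%{+YYYY.MM.dd}` pattern set above; `jq` is assumed to be available.

```bash
#!/bin/bash
# Sketch: confirm daily logstash-* indices exist and list installed legacy templates.
curl -s 'http://elasticsearch:9200/_cat/indices/logstash-*?v'
curl -s 'http://elasticsearch:9200/_template' | jq 'keys'
```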