diff --git a/CHANGELOG.md b/CHANGELOG.md
index aca4a2bd..cb5fcd5d 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -1,323 +1,45 @@
-# Changelog
+# Release Notes / Changelog
+T-Pot 22.04.0 is probably the most feature-rich release ever, providing long-awaited (and wanted!) features readily available after installation.
 
-## 20210222
-- **New Release 20.06.2**
-- **Countless Cloud Contributions**
-  - Thanks to @shaderecker
+## New Features
+* **Distributed** Installation with **HIVE** and **HIVE_SENSOR**
+* **ARM64** support for all provided Docker images
+* **GeoIP Attack Map** visualizing Live Attacks on a dedicated webpage
+* **Kibana Live Attack Map** visualizing Live Attacks from different **HIVE_SENSORS**
+* **Blackhole**, a script that tries to avoid detection by mass scanners
+* **Elasticvue**, a web front end for browsing and interacting with an Elasticsearch cluster
+* **Ddospot**, a honeypot for tracking and monitoring UDP-based Distributed Denial of Service (DDoS) attacks
+* **Endlessh**, an SSH tarpit that very slowly sends an endless, random SSH banner
+* **HellPot**, an endless honeypot based on Heffalump that sends unruly HTTP bots to hell
+* **qHoneypots**, 25 honeypots in a single container for monitoring network traffic, bot activities, and username / password credentials
+* **Redishoneypot**, a honeypot mimicking some of Redis' functions
+* **SentryPeer**, a dedicated SIP honeypot
+* **Index Lifecycle Management** is now being used for Elasticsearch indices
 
-## 20210219
-- **Rebuild Snare, Tanner, Redis, Phpox**
-  - Rebuild images to their latest masters and upgrade Alpine OS to 3.13 where possible.
-- **Bump Elastic Stack to 7.11.1**
-  - Updgrade Elastic Stack Images to 7.11.1 and update License Info to reflect new Elastic License.
-  - Prepare for new release.
+## Upgrades
+* **Debian 11.x** is now being used for the T-Pot ISO images and is required for post installs
+* **Elastic Stack 8.x** is now provided as Docker images
 
-## 20210218
-- **Rebuild Conpot, EWSPoster, Cowrie, Glutton, Dionaea**
-  - Rebuild images to their latest masters and upgrade Alpine OS to 3.13 where possible.
+## Updates
+* **Honeypots** and **tools** were updated to their latest masters and releases
+* Updates will be provided continuously through Docker image updates
 
-## 20210216
-- **Bump Heralding to 1.0.7**
-  - Rebuild and upgrade image to 1.0.7 and upgrade Alpine OS to 3.13.
-  - Enable SMTPS for Heralding.
-- **Rebuild IPPHoney, Fatt, EWSPoster, Spiderfoot**
-  - Rebuild images to their latest masters and upgrade Alpine OS to 3.13 where possible.
-  - Upgrade Spiderfoot to 3.3
+## Breaking Changes
+* For security reasons, all Py2.x honeypots that require PyPi packages have been removed: **HoneyPy**, **HoneySAP** and **RDPY**
+* If you are upgrading from a previous version of T-Pot (20.06.x) you need to import the new Kibana objects or some of the functionality will be broken or unavailable
+* **Cyberchef** is now part of the Nginx Docker image and no longer provided as an individual image
+* **ElasticSearch Head** is superseded by **Elasticvue** and part of the Nginx Docker image
+* **Heimdall** is no longer supported and superseded by a new Bento based landing page
+* **Elasticsearch Curator** is no longer supported and superseded by **Index Lifecycle Policies** available through Kibana
 
-## 20210215
-- **Rebuild Dicompot, p0f, Medpot, Honeysap, Heimdall, Elasticpot, Citrixhoneypot, Ciscoasa**
-  - Rebuild images to their latest masters and upgrade Alpine OS to 3.13 where possible.
+# Thanks & Credits
+* @ghenry, for some fun late night debugging and of course SentryPeer!
+* @giga-a, for adding much appreciated features (i.e. JSON logging, X-Forwarded-For, etc.) and of course qHoneypots!
+* @sp3t3rs, @trixam, for their backend and ews support!
+* @tadashi-oya, for spotting some errors and proposing fixes!
+* @tmariuss, @shaderecker for their cloud contributions!
+* @vorband, for much appreciated and helpful insights regarding the GeoIP Attack Map!
+* @yunginnanet, for not giving up on squashing a bug and of course Hellpot!
 
-## 20210212
-- **Rebuild Cyberchef, Adbhoney, Elastic Stack**
-  - Rebuild images to their latest masters and upgrade Alpine OS to 3.13 where possible.
-  - Bump Elastic Stack to 7.11.0
-  - Bump Cyberchef to 9.27.0
-
-## 20210119
-- **Bump Dionaea to 0.11.0**
-  - Upgrade Dionaea to 0.11.0, rebuild image and upgrade Alpine OS to 3.13.
-
-## 20210106
-- **Update Internet IF retrieval**
-  - To be consistent with @adepasquale PR #746 fatt, glutton and p0f Dockerfiles were updated accordingly.
-  - Merge PR #746 from @adepasquale, thank you!
-
-## 20201228
-- **Fix broken SQlite DB**
-  - Fix a broken `app.sqlite` in Heimdall
-- **Avoid ghcr.io because of slow transfers**
-- **Remove netselect-apt**
-  - causes too many unpredictable errors #733 as the latest example
-
-## 20201210
-- **Bump Elastic Stack 7.10.1, EWSPoster to 1.12**
-
-## 20201202
-- **Update Elastic Stack to 7.10.0**
-
-## 20201130
-- **Suricata, use suricata-update for rule management**
-  - As a bonus we can now run "suricata-update" using docker-exec, triggering both a rule update and a Suricata rule reload.
-  - Thanks to @adepasquale!
-
-## 20201126
-- **Suricata, update suricata.yaml for 6.x**
-  - Merge in the latest updates from suricata-6.0.x while at the same time keeping the custom T-Pot configuration.
-  - Thanks to @adepasquale!
-- **Bump Cowrie to 2.2.0**
-
-## 20201028
-- **Bump Suricata to 5.0.4, Spiderfoot to 3.2.1, Dionaea to 0.9.2, IPPHoney, Heralding, Conpot to latest masters**
-
-## 20201027
-- **Bump Dicompot to latest master, Elastic Stack to 7.9.3**
-
-## 20201005
-- **Bump Elastic Stack to 7.9.2**
-  - @brianlechthaler, thanks for PR #706, which had issues regarding Elastic Stack and resulted in reverting to 7.9.1
-
-## 20200904
-- **Release T-Pot 20.06.1**
-  - Github offers a free Docker Container Registry for public packages. For our Open Source projects we want to make sure to have everything in one place and thus moving from Docker Hub to the GitHub Container Registry.
-- **Bump Elastic Stack**
-  - Update the Elastic Stack to 7.9.1.
-- **Rebuild Images**
-  - All docker images were rebuilt based on the latest (and stable running) versions of the tools and honeypots and have been pinned to specific Alpine / Debian versions and git commits so rebuilds will less likely fail.
-- **Cleaning up**
-  - Clean up old references and links.
-
-## 20200630
-- **Release T-Pot 20.06**
-  - After 4 months of public testing with the NextGen edition T-Pot 20.06 can finally be released.
-- **Debian Buster**
-  - With the release of Debian Buster T-Pot now has access to all packages required right out of the box.
-- **Add new honeypots**
-  - [Dicompot](https://github.com/nsmfoo/dicompot) by @nsmfoo is a low interaction honeypot for the Dicom protocol which is the international standard to process medical imaging information. Together with Medpot which supports the HL7 protocol T-Pot is now offering a Medical Installation type.
-  - [Honeysap](https://github.com/SecureAuthCorp/HoneySAP) by SecureAuthCorp is a low interaction honeypot for the SAP services, in case of T-Pot configured for the SAP router.
-  - [Elasticpot](https://gitlab.com/bontchev/elasticpot) by Vesselin Bontchev replaces ElasticpotPY as a low interaction honeypot for Elasticsearch with more features, plugins and scripted responses.
-- **Rebuild Images**
-  - All docker images were rebuilt based on the latest (and stable running) versions of the tools and honeypots. Mostly the images now run on Alpine 3.12 / Debian Buster. However some honeypots / tools still reuire Alpine 3.11 / 3.10 to run properly.
-- **Install Types**
-  - All docker-compose files (`/opt/tpot/etc/compose`) were remixed and most of the NextGen honeypots are now available in Standard.
-  - There is now a **Medical** Installation Type with Dicompot and Medpot which will be of most interest for medical institutions to get started with T-Pot.
-- **Update Tools**
-  - Connecting to T-Pot via `https://<ip>:64297` brings you to the T-Pot Landing Page now which is based on Heimdall and the latest NGINX enforcing TLS 1.3.
-  - The ELK stack was updated to 7.8.0 and stripped down to the necessary core functions (where possible) for T-Pot while keeping ELK RAM requirements to a minimum (8GB of RAM is recommended now). The number of index pattern fields was reduced to **697** which increases performance significantly. There are **22** Kibana Dashboards, **397** Kibana Visualizations and **24** Kibana Searches readily available to cover all your needs to get started and familiar with T-Pot.
-  - Cyberchef was updated to 9.21.0.
-  - Elasticsearch Head was updated to the latest version available on GitHub.
-  - Spiderfoot was updated to latest 3.1 dev.
-- **Landing Page**
-  - After logging into T-Pot via web you are now greeted with a beautifully designed landing page.
-- **Countless Tweaks and improvements**
-  - Under the hood lots of tiny tweaks, improvements and a few bugfixes will increase your overall experience with T-Pot.
-
-## 20200316
-- **Move from Sid to Stable**
-  - Debian Stable has now all the packages and versions we need for T-Pot. As a consequence we can now move to the `stable` branch.
-
-## 20200310
-- **Add 2FA to Cockpit**
-  - Just run `2fa.sh` to enable two factor authentication in Cockpit.
-- **Find fastest mirror with netselect-apt**
-  - Netselect-apt will find the fastest mirror close to you (outgoing ICMP required).
-
-## 20200309
-- **Bump Nextgen to 20.06**
-  - All NextGen images have been rebuilt to their latest master.
-  - ElasticStack bumped to 7.6.1 (Elasticsearch will need at least 2048MB of RAM now, T-Pot at least 8GB of RAM) and tweak to accomodate changes of 7.x.
-  - Fixed errors in Tanner / Snare which will now handle downloads of malware via SSL and store them correctly (thanks to @afeena).
-  - Fixed errors in Heralding which will now improve on RDP connections (thanks to @johnnykv, @realsdx).
-  - Fixed error in honeytrap which will now build in Debian/Buster (thanks to @tillmannw).
-  - Mailoney is now logging in JSON format (thanks to @monsherko).
-  - Base T-Pot landing page on Heimdall.
-  - Tweaking of tools and some minor bug fixing
-
-## 20200116
-- **Bump ELK to latest 6.8.6**
-- **Update ISO image to fix upstream bug of missing kernel modules**
-- **Include dashboards for CitrixHoneypot**
-  - Please run `/opt/tpot/update.sh` for the necessary modifications, omit the reboot and run `/opt/tpot/bin/tped.sh` to (re-)select the NextGen installation type.
-  - This update requires the latest Kibana objects as well. Download the latest from https://raw.githubusercontent.com/telekom-security/tpotce/master/etc/objects/kibana_export.json.zip, unzip and import the objects within Kibana WebUI > Management > Saved Objects > Export / Import". All objects will be overwritten upon import, make sure to run an export first.
-
-## 20200115
-- **Prepare integration of CitrixHoneypot**
-  - Prepare integration of [CitrixHoneypot](https://github.com/MalwareTech/CitrixHoneypot) by MalwareTech
-  - Integration into ELK is still open
-  - Please run `/opt/tpot/update.sh` for the necessary modifications, omit the reboot and run `/opt/tpot/bin/tped.sh` to (re-)select the NextGen installation type.
-
-## 20191224
-- **Use pigz, optimize logrotate.conf**
-  - Use `pigz` for faster archiving, especially with regard to high volumes of logs - Thanks to @workandresearchgithub!
-  - Optimize `logrotate.conf` to improve archiving speed and get rid of multiple compression, also introduce `pigz`.
-
-## 20191121
-- **Bump ADBHoney to latest master**
-  - Use latest version of ADBHoney, which now fully support Python 3.x - Thanks to @huuck!
-
-## 20191113, 20191104, 20191103, 20191028
-- **Switch to Debian 10 on OTC, Ansible Improvements**
-  - OTC now supporting Debian 10 - Thanks to @shaderecker!
-
-## 20191028
-- **Fix an issue with pip3, yq**
-  - `yq` needs rehashing.
-
-## 20191026
-- **Remove cockpit-pcp**
-  - `cockpit-pcp` floods swap for some reason - removing for now.
-
-## 20191022
-- **Bump Suricata to 5.0.0**
-
-## 20191021
-- **Bump Cowrie to 2.0.0**
-
-## 20191016
-- **Tweak installer, pip3, Heralding**
-  - Install `cockpit-pcp` right from the start for machine monitoring in cockpit.
-  - Move installer and update script to use pip3.
-  - Bump heralding to latest master (1.0.6) - Thanks @johnnykv!
-
-## 20191015
-- **Tweaking, Bump glutton, unlock ES script**
-  - Add `unlock.sh` to unlock ES indices in case of lockdown after disk quota has been reached.
-  - Prevent too much terminal logging from p0f and glutton since `daemon.log` was filled up.
-  - Bump glutton to latest master now supporting payload_hex. Thanks to @glaslos.
-
-## 20191002
-- **Merge**
-  - Support Debian Buster images for AWS #454
-  - Thank you @piffey
-
-## 20190924
-- **Bump EWSPoster**
-  - Supports Python 3.x
-  - Thank you @Trixam
-
-## 20190919
-- **Merge**
-  - Handle non-interactive shells #454
-  - Thank you @Oogy
-
-## 20190907
-- **Logo tweaking**
-  - Add QR logo
-
-## 20190828
-- **Upgrades and rebuilds**
-  - Bump Medpot, Nginx and Adbhoney to latest master
-  - Bump ELK stack to 6.8.2
-  - Rebuild Mailoney, Honeytrap, Elasticpot and Ciscoasa
-  - Add 1080p T-Pot wallpaper for download
-
-## 20190824
-- **Add some logo work**
-  - Thanks to @thehadilps's suggestion adjusted social preview
-  - Added 4k T-Pot wallpaper for download
-
-## 20190823
-- **Fix for broken Fuse package**
-  - Fuse package in upstream is broken
-  - Adjust installer as workaround, fixes #442
-
-## 20190816
-- **Upgrades and rebuilds**
-  - Adjust Dionaea to avoid nmap detection, fixes #435 (thanks @iukea1)
-  - Bump Tanner, Cyberchef, Spiderfoot and ES Head to latest master
-
-## 20190815
-- **Bump ELK stack to 6.7.2**
-  - Transition to 7.x must iterate slowly through previous versions to prevent changes breaking T-Pots
-
-## 20190814
-- **Logstash Translation Maps improvement**
-  - Download translation maps rather than running a git pull
-  - Translation maps will now be bzip2 compressed to reduce traffic to a minimum
-  - Fixes #432
-
-## 20190802
-- **Add support for Buster as base image**
-  - Install ISO is now based on Debian Buster
-  - Installation upon Debian Buster is now supported
-
-## 20190701
-- **Reworked Ansible T-Pot Deployment**
-  - Transitioned from bash script to all Ansible
-  - Reusable Ansible Playbook for OpenStack clouds
-  - Example Showcase with our Open Telekom Cloud
-  - Adaptable for other cloud providers
-
-## 20190626
-- **HPFEEDS Opt-In commandline option**
-  - Pass a hpfeeds config file as a commandline argument
-  - hpfeeds config is saved in `/data/ews/conf/hpfeeds.cfg`
-  - Update script restores hpfeeds config
-
-## 20190604
-- **Finalize Fatt support**
-  - Build visualizations, searches, dashboards
-  - Rebuild index patterns
-  - Some finishing touches
-
-## 20190601
-- **Start supporting Fatt, remove Glastopf**
-  - Build Dockerfile, Adjust logstash, installer, update and such.
-  - Glastopf is no longer supported within T-Pot
-
-## 20190528+20190531
-- **Increase total number of fields**
-  - Adjust total number of fileds for logstash templae from 1000 to 2000.
-
-## 20190526
-- **Fix build for Cowrie**
-  - Upstream changes required a new package `py-bcrypt`.
-
-## 20190525
-- **Fix build for RDPY**
-  - Building was prevented due to cache error which occurs lately on Alpine if `apk` is using `--no-ache' as options.
-
-## 20190520
-- **Adjust permissions for /data folder**
-  - Now it is possible to download files from `/data` using SCP, WINSCP or CyberDuck.
-
-## 20190513
-- **Added Ansible T-Pot Deployment on Open Telekom Cloud**
-  - Reusable Ansible Playbooks for all cloud providers
-  - Example Showcase with our Open Telekom Cloud
-
-## 20190511
-- **Add hptest script**
-  - Quickly test if the honeypots are working with `hptest.sh <[ip,host]>` based on nmap.
-
-## 20190508
-- **Add tsec / install user to tpot group**
-  - For users being able to easily download logs from the /data folder the installer now adds the `tpot` or the logged in user (`who am i`) via `usermod -a -G tpot <user>` to the tpot group. Also /data permissions will now be enforced to `770`, which is necessary for directory listings.
-
-## 20190502
-- **Fix KVPs**
-  - Some KVPs for Cowrie changed and the tagcloud was not showing any values in the Cowrie dashboard.
-  - New installations are not affected, however existing installations need to import the objects from /opt/tpot/etc/objects/kibana-objects.json.zip.
-- **Makeiso**
-  - Move to Xorriso for building the ISO image.
-  - This allows to support most of the Debian based distros, i.e. Debian, MxLinux and Ubuntu.
-
-## 20190428
-- **Rebuild ISO**
-  - The install ISO needed a rebuilt after some changes in the Debian mirrors.
-- **Disable Netselect**
-  - After some reports in the issues that some Debian mirrors were not fully synced and thus some packages were unavailable the netselect-apt feature was disabled.
-
-## 20190406
-- **Fix for SSH**
-  - In some situations the SSH Port was not written to a new line (thanks to @dpisano for reporting).
-- **Fix race condition for apt-fast**
-  - Curl and wget need to be installed before apt-fast installation.
-
-## 20190404
-- **Fix #332**
-  - If T-Pot, opposed to the requirements, does not have full internet access netselect-apt fails to determine the fastest mirror as it needs ICMP and UDP outgoing. Should netselect-apt fail the default mirrors will be used.
-- **Improve install speed with apt-fast**
-  - Migrating from a stable base install to Debian (Sid) requires downloading lots of packages. Depending on your geo location the download speed was already improved by introducing netselect-apt to determine the fastest mirror. With apt-fast the downloads will be even faster by downloading packages not only in parallel but also with multiple connections per package.
-
-`git log --date=format:"## %Y%m%d" --pretty=format:"%ad %n- **%s**%n  - %b"`
+... and many others from the T-Pot community who helped to improve T-Pot by opening valued issues and discussions and suggesting ideas!
\ No newline at end of file
diff --git a/README.md b/README.md
index 5111aa6b..21346a44 100644
--- a/README.md
+++ b/README.md
@@ -1,98 +1,98 @@
+# T-Pot - The All In One Multi Honeypot Platform
+
 ![T-Pot](doc/tpotsocial.png)
 
-T-Pot 20.06 runs on Debian (Stable), is based heavily on
-
-[docker](https://www.docker.com/), [docker-compose](https://docs.docker.com/compose/)
-
-and includes dockerized versions of the following honeypots
-
-* [adbhoney](https://github.com/huuck/ADBHoney),
-* [ciscoasa](https://github.com/Cymmetria/ciscoasa_honeypot),
-* [citrixhoneypot](https://github.com/MalwareTech/CitrixHoneypot),
-* [conpot](http://conpot.org/),
-* [cowrie](https://github.com/cowrie/cowrie),
-* [ddospot](https://github.com/aelth/ddospot),
-* [dicompot](https://github.com/nsmfoo/dicompot),
-* [dionaea](https://github.com/DinoTools/dionaea),
-* [elasticpot](https://gitlab.com/bontchev/elasticpot),
-* [endlessh](https://github.com/skeeto/endlessh),
-* [glutton](https://github.com/mushorg/glutton),
-* [heralding](https://github.com/johnnykv/heralding),
-* [hellpot](https://github.com/yunginnanet/HellPot),
-* [honeypots](https://github.com/qeeqbox/honeypots),
-* [honeypy](https://github.com/foospidy/HoneyPy),
-* [honeysap](https://github.com/SecureAuthCorp/HoneySAP),
-* [honeytrap](https://github.com/armedpot/honeytrap/),
-* [ipphoney](https://gitlab.com/bontchev/ipphoney),
-* [log4pot](https://github.com/thomaspatzke/Log4Pot),
-* [mailoney](https://github.com/awhitehatter/mailoney),
-* [medpot](https://github.com/schmalle/medpot),
-* [rdpy](https://github.com/citronneur/rdpy),
-* [redishoneypot](https://github.com/cypwnpwnsocute/RedisHoneyPot),
-* [snare](http://mushmush.org/),
-* [tanner](http://mushmush.org/)
-
-
-Furthermore T-Pot includes the following tools
-
-* [Cockpit](https://cockpit-project.org/running) for a lightweight, webui for docker, os, real-time performance monitoring and web terminal.
-* [Cyberchef](https://gchq.github.io/CyberChef/) a web app for encryption, encoding, compression and data analysis.
-* [ELK stack](https://www.elastic.co/videos) to beautifully visualize all the events captured by T-Pot.
-* [Elasticsearch Head](https://mobz.github.io/elasticsearch-head/) a web front end for browsing and interacting with an Elastic Search cluster.
-* [Fatt](https://github.com/0x4D31/fatt) a pyshark based script for extracting network metadata and fingerprints from pcap files and live network traffic.
-* [Spiderfoot](https://github.com/smicallef/spiderfoot) a open source intelligence automation tool.
-* [Suricata](http://suricata-ids.org/) a Network Security Monitoring engine.
-
+T-Pot is the all in one, optionally distributed, multiarch (amd64, arm64) honeypot platform, supporting 20+ honeypots and countless visualization options using the Elastic Stack, animated live attack maps and lots of security tools to further improve the deception experience.
+<br><br>
 
 # TL;DR
-1. Meet the [system requirements](#requirements). The T-Pot installation needs at least 8 GB RAM and 128 GB free disk space as well as a working (outgoing non-filtered) internet connection.
-2. Download the T-Pot ISO from [GitHub](https://github.com/telekom-security/tpotce/releases) or [create it yourself](#createiso).
-3. Install the system in a [VM](#vm) or on [physical hardware](#hw) with [internet access](#placement).
-4. Enjoy your favorite beverage - [watch](https://sicherheitstacho.eu) and [analyze](#kibana).
-
+1. Meet the [system requirements](#system-requirements). The T-Pot installation needs at least 8-16 GB RAM and 128 GB free disk space as well as a working (outgoing non-filtered) internet connection.
+2. Download the T-Pot ISO from [GitHub](https://github.com/telekom-security/tpotce/releases) according to your architecture (amd64, arm64) or [create it yourself](#create-your-own-iso-image).
+3. Install the system in a [VM](#running-in-a-vm) or on [physical hardware](#running-on-hardware) with [internet access](#system-placement).
+4. Enjoy your favorite beverage - [watch](https://sicherheitstacho.eu) and [analyze](#kibana-dashboard).
+<br><br>
 
 # Table of Contents
-- [Technical Concept](#concept)
-- [System Requirements](#requirements)
-- [Installation Types](#types)
-- [Installation](#installation)
-  - [Prebuilt ISO Image](#prebuilt)
-  - [Create your own ISO Image](#createiso)
-  - [Running in a VM](#vm)
-  - [Running on Hardware](#hardware)
-  - [Post Install User](#postinstall)
-  - [Post Install Auto](#postinstallauto)
-  - [Cloud Deployments](#cloud)
-    - [Ansible](#ansible)
-    - [Terraform](#terraform)
-  - [First Run](#firstrun)
-  - [System Placement](#placement)
-- [Updates](#updates)
-- [Options](#options)
-  - [SSH and web access](#ssh)
-  - [T-Pot Landing Page](#heimdall)
-  - [Kibana Dashboard](#kibana)
-  - [Tools](#tools)
-  - [Maintenance](#maintenance)
-  - [Community Data Submission](#submission)
-  - [Opt-In HPFEEDS Data Submission](#hpfeeds-optin)
-- [Roadmap](#roadmap)
 - [Disclaimer](#disclaimer)
-- [FAQ](#faq)
+- [Technical Concept](#technical-concept)
+  - [Technical Architecture](#technical-architecture)
+  - [Services](#services)
+  - [User Types](#user-types)
+- [System Requirements](#system-requirements)
+  - [Running in a VM](#running-in-a-vm)
+  - [Running on Hardware](#running-on-hardware)
+  - [Running in a Cloud](#running-in-a-cloud)
+  - [Required Ports](#required-ports)
+- [System Placement](#system-placement)
+- [Installation](#installation)
+  - [ISO Based](#iso-based)
+    - [Download ISO Image](#download-iso-image)
+    - [Create your own ISO Image](#create-your-own-iso-image)
+  - [Post Install](#post-install)
+    - [Download Debian Netinstall Image](#download-debian-netinstall-image)
+    - [User](#post-install-user-method)
+    - [Auto](#post-install-auto)
+  - [T-Pot Installer](#t-pot-installer)
+    - [Installation Types](#installation-types)
+    - [Standalone](#standalone)
+    - [Distributed](#distributed)
+  - [Cloud Deployments](#cloud-deployments)
+    - [Ansible](#ansible-deployment)
+    - [Terraform](#terraform-configuration)
+- [First Start](#first-start)
+  - [Standalone Start](#standalone-first-start)
+  - [Distributed Deployment](#distributed-deployment)
+  - [Community Data Submission](#community-data-submission)
+  - [Opt-In HPFEEDS Data Submission](#opt-in-hpfeeds-data-submission)
+- [Remote Access and Tools](#remote-access-and-tools)
+  - [SSH and Cockpit](#ssh-and-cockpit)
+  - [T-Pot Landing Page](#t-pot-landing-page)
+  - [Kibana Dashboard](#kibana-dashboard)
+  - [Attack Map](#attack-map)
+  - [Cyberchef](#cyberchef)
+  - [Elasticvue](#elasticvue)
+  - [Spiderfoot](#spiderfoot)
+- [Maintenance](#maintenance)
+  - [Updates](#updates)
+  - [Start T-Pot](#start-t-pot)
+  - [Stop T-Pot](#stop-t-pot)
+  - [T-Pot Data Folder](#t-pot-data-folder)
+  - [Log Persistence](#log-persistence)
+  - [Clean Up](#clean-up)
+  - [Show Containers](#show-containers)
+  - [Blackhole](#blackhole)
+  - [Add Users to Nginx (T-Pot WebUI)](#add-users-to-nginx-t-pot-webui)
+  - [Import and Export Kibana Objects](#import-and-export-kibana-objects)
+  - [Switch Editions](#switch-editions)
+  - [Redeploy Hive Sensor](#redeploy-hive-sensor)
+  - [Adjust tpot.yml](#adjust-tpotyml)
+  - [Enable Cockpit 2FA](#enable-cockpit-2fa)
+- [Troubleshooting](#troubleshooting)
+  - [Logging](#logging)
+  - [Fail2Ban](#fail2ban)
+  - [RAM](#ram-and-storage)
 - [Contact](#contact)
+  - [Issues](#issues)
+  - [Discussions](#discussions)
 - [Licenses](#licenses)
 - [Credits](#credits)
-- [Stay tuned](#staytuned)
-- [Testimonial](#testimonial)
+- [Testimonials](#testimonials)
+<br><br>
+
+# Disclaimer
+- You install and run T-Pot at your own responsibility. Choose your deployment wisely as a system compromise can never be ruled out.
+- For fast help research the [Issues](https://github.com/telekom-security/tpotce/issues) and [Discussions](https://github.com/telekom-security/tpotce/discussions).
+- The software is designed and offered with best effort in mind. As a community and open source project it uses lots of other open source software and may contain bugs and issues. Report responsibly.
+- Honeypots - by design - should not host any sensitive data. Make sure you don't add any.
+- By default, your data is submitted to [Sicherheitstacho](https://www.sicherheitstacho.eu/start/main). You can disable this in the config (`/opt/tpot/etc/tpot.yml`) by [removing](#community-data-submission) the ewsposter section (see the sketch after this list). But in this case sharing really is caring!
+<br><br>
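+For orientation, a hedged sketch of what disabling the submission amounts to; the exact service definition lives in `/opt/tpot/etc/tpot.yml` and the keys below are illustrative only:
+```
+# /opt/tpot/etc/tpot.yml (docker-compose format):
+# comment out or delete the whole ewsposter service block, e.g.
+#
+#  ewsposter:
+#    container_name: ewsposter
+#    ...
+```
+<br><br>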
 
-<a name="concept"></a>
 # Technical Concept
+T-Pot is based on the Debian 11 (Bullseye) Netinstaller and utilizes 
+[docker](https://www.docker.com/) and [docker-compose](https://docs.docker.com/compose/) to reach its goal of running as many tools as possible simultaneously and thus utilizing the host's hardware to its maximum.
+<br><br>
 
-T-Pot is based on the Debian (Stable) network installer.
-The honeypot daemons as well as other support components are [dockered](http://docker.io).
-This allows T-Pot to run multiple honeypot daemons and tools on the same network interface while maintaining a small footprint and constrain each honeypot within its own environment.
-
-In T-Pot we combine the dockerized honeypots ...
+T-Pot offers docker images for the following honeypots ...
 * [adbhoney](https://github.com/huuck/ADBHoney),
 * [ciscoasa](https://github.com/Cymmetria/ciscoasa_honeypot),
 * [citrixhoneypot](https://github.com/MalwareTech/CitrixHoneypot),
@@ -107,164 +107,220 @@ In T-Pot we combine the dockerized honeypots ...
 * [heralding](https://github.com/johnnykv/heralding),
 * [hellpot](https://github.com/yunginnanet/HellPot),
 * [honeypots](https://github.com/qeeqbox/honeypots),
-* [honeypy](https://github.com/foospidy/HoneyPy),
-* [honeysap](https://github.com/SecureAuthCorp/HoneySAP),
 * [honeytrap](https://github.com/armedpot/honeytrap/),
 * [ipphoney](https://gitlab.com/bontchev/ipphoney),
 * [log4pot](https://github.com/thomaspatzke/Log4Pot),
 * [mailoney](https://github.com/awhitehatter/mailoney),
 * [medpot](https://github.com/schmalle/medpot),
 * [redishoneypot](https://github.com/cypwnpwnsocute/RedisHoneyPot),
-* [rdpy](https://github.com/citronneur/rdpy),
+* [sentrypeer](https://github.com/SentryPeer/SentryPeer),
 * [snare](http://mushmush.org/),
 * [tanner](http://mushmush.org/)
 
-... with the following tools ...
-* [Cockpit](https://cockpit-project.org/running) for a lightweight, webui for docker, os, real-time performance monitoring and web terminal.
+... alongside the following tools ...
+* [Cockpit](https://cockpit-project.org/running) for a lightweight and secure web management interface and web terminal.
 * [Cyberchef](https://gchq.github.io/CyberChef/) a web app for encryption, encoding, compression and data analysis.
-* [ELK stack](https://www.elastic.co/videos) to beautifully visualize all the events captured by T-Pot.
-* [Elasticsearch Head](https://mobz.github.io/elasticsearch-head/) a web front end for browsing and interacting with an Elastic Search cluster.
+* [Elastic Stack](https://www.elastic.co/videos) to beautifully visualize all the events captured by T-Pot.
+* [Elasticvue](https://github.com/cars10/elasticvue/) a web front end for browsing and interacting with an Elastic Search cluster.
 * [Fatt](https://github.com/0x4D31/fatt) a pyshark based script for extracting network metadata and fingerprints from pcap files and live network traffic.
+* [Geoip-Attack-Map](https://github.com/eddie4/geoip-attack-map) a beautifully animated attack map [optimized](https://github.com/t3chn0m4g3/geoip-attack-map) for T-Pot.
+* [P0f](https://lcamtuf.coredump.cx/p0f3/) is a tool for purely passive traffic fingerprinting.
 * [Spiderfoot](https://github.com/smicallef/spiderfoot) a open source intelligence automation tool.
 * [Suricata](http://suricata-ids.org/) a Network Security Monitoring engine.
 
 ... to give you the best out-of-the-box experience possible and an easy-to-use multi-honeypot appliance.
+<br><br>
 
+
+## Technical Architecture
 ![Architecture](doc/architecture.png)
 
-While data within docker containers is volatile T-Pot ensures a default 30 day persistence of all relevant honeypot and tool data in the well known `/data` folder and sub-folders. The persistence configuration may be adjusted in `/opt/tpot/etc/logrotate/logrotate.conf`. Once a docker container crashes, all other data produced within its environment is erased and a fresh instance is started from the corresponding docker image.<br>
+The source code and configuration files are fully stored in the T-Pot GitHub repository. The docker images are built and preconfigured for the T-Pot environment. 
 
-Basically, what happens when the system is booted up is the following:
+The individual Dockerfiles and configurations are located in the [docker folder](https://github.com/telekom-security/tpotce/tree/master/docker).
+<br><br>
 
-- start host system
-- start all the necessary services (i.e. cockpit, docker, etc.)
-- start all docker containers via docker-compose (honeypots, nms, elk, etc.)
+## Services
+T-Pot offers a number of services which are basically divided into five groups:
+1. System services provided by the OS
+    * SSH for secure remote access.
+    * Cockpit for web based remote access, management and web terminal.
+2. Elastic Stack
+    * Elasticsearch for storing events.
+    * Logstash for ingesting, receiving and sending events to Elasticsearch.
+    * Kibana for displaying events on beautifully rendered dashboards.
+3. Tools
+    * NGINX for providing secure remote access (reverse proxy) to Kibana, CyberChef, Elasticvue, GeoIP AttackMap and Spiderfoot.
+    * CyberChef a web app for encryption, encoding, compression and data analysis.
+    * Elasticvue a web front end for browsing and interacting with an Elastic Search cluster.
+    * Geoip Attack Map a beautifully animated attack map for T-Pot.
+    * Spiderfoot an open source intelligence automation tool.
+4. Honeypots
+    * A selection of the 22 available honeypots based on the selected edition and / or setup.
+5. Network Security Monitoring (NSM)
+    * Fatt a pyshark based script for extracting network metadata and fingerprints from pcap files and live network traffic.
+    * P0f is a tool for purely passive traffic fingerprinting.
+    * Suricata a Network Security Monitoring engine.
+<br><br>
 
-The T-Pot project provides all the tools and documentation necessary to build your own honeypot system and contribute to our [Sicherheitstacho](https://sicherheitstacho.eu).
+## User Types
+During the installation and usage of T-Pot you will be working with two different types of accounts. Make sure you know the differences between the account types, since mixing them up is **by far** the most common reason for authentication errors and `fail2ban` lockouts.
 
-The source code and configuration files are fully stored in the T-Pot GitHub repository. The docker images are preconfigured for the T-Pot environment. If you want to run the docker images separately, make sure you study the docker-compose configuration (`/opt/tpot/etc/tpot.yml`) and the T-Pot systemd script (`/etc/systemd/system/tpot.service`), as they provide a good starting point for implementing changes.
+| Service             | Account Type | Username / Group | Description                                                             |
+| :---                | :---         | :---             | :---                                                                    |
+| SSH, Cockpit        | OS           | `tsec`           | On ISO based installations the user `tsec` is predefined.               |
+| SSH, Cockpit        | OS           | `<os_username>`  | Any other installation, the `<username>` you chose during installation. |
+| Nginx               | BasicAuth    | `<web_user>`     | `<web_user>` you chose during the installation of T-Pot.                |
+| CyberChef           | BasicAuth    | `<web_user>`     | `<web_user>` you chose during the installation of T-Pot.                |
+| Elasticvue          | BasicAuth    | `<web_user>`     | `<web_user>` you chose during the installation of T-Pot.                |
+| Geoip Attack Map    | BasicAuth    | `<web_user>`     | `<web_user>` you chose during the installation of T-Pot.                |
+| Spiderfoot          | BasicAuth    | `<web_user>`     | `<web_user>` you chose during the installation of T-Pot.                |
+| T-Pot               | OS           | `tpot`           | This user / group is always reserved by the T-Pot services.             |
+| T-Pot Logs          | OS           | `tpotlogs`       | This group is always reserved by the T-Pot services.                    |
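+
+To illustrate the difference, a minimal sketch of the two login paths, assuming an ISO based installation (user `tsec`) and the `<web_user>` created during setup; IP and names are placeholders:
+```
+# OS account (tsec / <os_username>): SSH on the T-Pot management port
+ssh -l tsec -p 64295 <your.tpot.ip>
+
+# BasicAuth account (<web_user>): used by the NGINX reverse proxy,
+# your browser will prompt for it at https://<your.tpot.ip>:64297
+```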
 
-The individual docker configurations are located in the [docker folder](https://github.com/telekom-security/tpotce/tree/master/docker).
 
-<a name="requirements"></a>
+<br><br>
+
 # System Requirements
-Depending on the installation type, whether installing on [real hardware](#hardware) or in a [virtual machine](#vm), make sure the designated system meets the following requirements:
 
-- 8 GB RAM (less RAM is possible but might introduce swapping / instabilities)
-- 128 GB SSD (smaller is possible but limits the capacity of storing events)
-- Network via DHCP
-- A working, non-proxied, internet connection
+Depending on the installation setup and edition, whether installing on [real hardware](#running-on-hardware), in a [virtual machine](#running-in-a-vm) or in the [cloud](#running-in-a-cloud), there are different kinds of requirements to be met regarding OS, RAM, storage and network for a successful installation of T-Pot (you can always adjust `/opt/tpot/etc/tpot.yml` to your needs to overcome these requirements).
+<br><br>
+| T-Pot Type  | RAM          | Storage         | Description                                                                              |
+| :---        | :---         | :---            | :---                                                                                     |
+| Standalone  | 8-16GB       | >=128GB SSD     | RAM requirements depend on the edition,<br> storage on how much data you want to persist.    |
+| Hive        | >=8GB        | >=256GB SSD     | As a rule of thumb, the more sensors & data,<br> the more RAM and storage is needed.         |
+| Hive_Sensor | >=8GB        | >=128GB SSD     | Since honeypot logs are persisted (/data)<br> for 30 days, storage depends on attack volume. |
+
+All T-Pot installations will require ...
+- an IP address via DHCP
+- a working, non-proxied, internet connection
+
+... for an installation to succeed.
+<br><br>
+*If you need proxy support or static IP addresses please review the [Debian](https://www.debian.org/doc/index.en.html) and / or [Docker documentation](https://docs.docker.com/).*
+<br><br>
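+As a rough sketch (interface name and addresses are assumptions, adjust to your network), a static IPv4 address on Debian can be configured in `/etc/network/interfaces` before installing T-Pot:
+```
+# /etc/network/interfaces - example static configuration for the primary NIC
+auto eth0
+iface eth0 inet static
+    address 192.168.1.10
+    netmask 255.255.255.0
+    gateway 192.168.1.1
+```
+<br><br>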
+
+## Running in a VM
+T-Pot is reported to run with the following hypervisors; however, not each and every combination is tested.
+* [UTM (Intel & Apple Silicon)](https://mac.getutm.app/)
+* [VirtualBox](https://www.virtualbox.org/)
+* [VMWare vSphere / ESXi](https://kb.vmware.com/s/article/2107518)
+* [VMWare Fusion](https://www.vmware.com/products/fusion/fusion-evaluation.html) and [VMWare Workstation](https://www.vmware.com/products/workstation-pro.html)
+* KVM is reported to work as well.
+
+***Some configuration hints:***
+- While Intel versions run stable, Apple Silicon (arm64) support for Debian has known issues which in UTM may require switching `Display` to `Console Only` during initial installation of T-Pot / Debian and afterwards back to `Full Graphics`.
+- During configuration you may need to enable promiscuous mode for the network interface in order for fatt, suricata and p0f to work properly (see the sketch after this list).
+- If you want to use a wifi card as a primary NIC for T-Pot, please be aware that not all network interface drivers support all wireless cards. In VirtualBox e.g. you have to choose the *"MT SERVER"* model of the NIC.
+<br><br>
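+A minimal sketch for enabling promiscuous mode manually on a Linux host (the interface name `eth0` is an assumption; in hypervisors like VirtualBox this is a setting on the virtual NIC instead):
+```
+# let the interface see all traffic on the wire (required by fatt, suricata, p0f)
+ip link set dev eth0 promisc on
+# verify the PROMISC flag is set
+ip link show dev eth0
+```
+<br><br>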
+
+## Running on Hardware
+T-Pot is tested on and known to run with ...
+* Intel NUC series (only some tested)
+* Some generic Intel hardware
+
+The number of possible hardware combinations is too high to make general recommendations. If you are unsure, you should test the hardware with the T-Pot ISO image or use the post install method.
+<br><br>
+
+## Running in a Cloud
+T-Pot is tested on and known to run on ...
+* Telekom OTC using the post install method
+* Amazon AWS using the post install method (somewhat limited)
+
+Some users report working installations on other clouds and hosting providers, e.g. Azure and GCP. Hardware requirements may differ. If you are unsure you should research [issues](https://github.com/telekom-security/tpotce/issues) and [discussions](https://github.com/telekom-security/tpotce/discussions) and run some functional tests. Cloud support is a community developed feature and hyperscalers are known to adjust Linux images, so expect some necessary adjustments on your end.
+<br><br>
+
+## Required Ports
+Besides the ports generally needed by the OS, i.e. for obtaining a DHCP lease, DNS, etc., T-Pot will require the following ports for incoming / outgoing connections. Review the [T-Pot Architecture](#technical-architecture) for a visual representation. Some ports will show up as duplicates, which is fine since they are used in different editions.
+| Port        | Protocol | Direction | Description                                                   |
+| :---        | :---     | :---      | :---                                                          |
+| 80, 443     | tcp      | outgoing  | T-Pot Management: Install, Updates, Logs (i.e. Debian, GitHub, DockerHub, PyPi, Sicherheitstacho, etc.) |
+| 64294       | tcp      | incoming  | T-Pot Management: Access to Cockpit                           |
+| 64295       | tcp      | incoming  | T-Pot Management: Access to SSH                               |
+| 64297       | tcp      | incoming  | T-Pot Management: Access to NGINX reverse proxy               |
+| 5555        | tcp      | incoming  | Honeypot: ADBHoney                                            |
+| 5000        | udp      | incoming  | Honeypot: CiscoASA                                            |
+| 8443        | tcp      | incoming  | Honeypot: CiscoASA                                            |
+| 443         | tcp      | incoming  | Honeypot: CitrixHoneypot                                      |
+| 80, 102, 502, 1025, 2404, 10001, 44818, 47808, 50100  | tcp      | incoming          | Honeypot: Conpot            |
+| 161, 623    | udp      | incoming  | Honeypot: Conpot                                              |
+| 22, 23      | tcp      | incoming  | Honeypot: Cowrie                                              |
+| 19, 53, 123, 1900 | udp | incoming  | Honeypot: Ddospot                                           |
+| 11112       | tcp      | incoming  | Honeypot: Dicompot                                            |
+| 21, 42, 135, 443, 445, 1433, 1723, 1883, 3306, 8081 | tcp        | incoming          | Honeypot: Dionaea           |
+| 69          | udp      | incoming  | Honeypot: Dionaea                                             |
+| 9200        | tcp      | incoming  | Honeypot: Elasticpot                                          |
+| 22          | tcp      | incoming  | Honeypot: Endlessh                                            |
+| 21, 22, 23, 25, 80, 110, 143, 443, 993, 995, 1080, 5432, 5900          | tcp      | incoming  | Honeypot: Heralding  |
+| 21, 22, 23, 25, 80, 110, 143, 389, 443, 445, 1080, 1433, 1521, 3306, 5432, 5900, 6379, 8080, 9200, 11211 | tcp | incoming  | Honeypot: qHoneypots |
+| 53, 123, 161| udp      | incoming  | Honeypot: qHoneypots                                          |
+| 631         | tcp      | incoming  | Honeypot: IPPHoney                                            |
+| 80, 443, 8080, 9200, 25565 | tcp      | incoming  | Honeypot: Log4Pot                              |
+| 25          | tcp      | incoming  | Honeypot: Mailoney                                            |
+| 2575        | tcp      | incoming  | Honeypot: Medpot                                              |
+| 6379        | tcp      | incoming  | Honeypot: Redishoneypot                                       |
+| 5060        | udp      | incoming  | Honeypot: SentryPeer                                          |
+| 80          | tcp      | incoming  | Honeypot: Snare (Tanner)                                      |
 
 
-<a name="types"></a>
-# Installation Types
-There are prebuilt installation types available each focussing on different aspects to get you started right out of the box. The docker-compose files are located in `/opt/tpot/etc/compose`. If you want to build your own compose file just create a new one (based on the layout and settings of the prebuilds) in `/opt/tpot/etc/compose` and run `tped.sh` afterwards to point T-Pot to the new compose file and run you personalized edition.
+Ports and availability of SaaS services may vary based on your geographical location. During the first install, outgoing ICMP / TRACEROUTE is additionally required to find the closest and fastest mirror.
 
-##### Standard
-- Honeypots: adbhoney, ciscoasa, citrixhoneypot, conpot, cowrie, dicompot, dionaea, elasticpot, heralding, honeysap, honeytrap, mailoney, medpot, rdpy, snare & tanner
-- Tools: cockpit, cyberchef, ELK, fatt, elasticsearch head, ewsposter, nginx / heimdall, spiderfoot, p0f & suricata
+For some honeypots to reach full functionality (i.e. Cowrie or Log4Pot) outgoing connections are necessary as well, in order for them to download the attacker's malware. Please see the individual honeypot's documentation to learn more by following the [links](#technical-concept) to their repositories.
 
+<br><br>
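+To sanity check that honeypot ports are reachable you can probe a few of them from another machine; `nmap` is only an assumption here, any port scanner will do (and only scan systems you own):
+```
+# probe a selection of the honeypot ports listed above
+nmap -p 22,23,80,443,9200 <your.tpot.ip>
+```
+<br><br>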
 
-##### Sensor
-- Honeypots: adbhoney, ciscoasa, citrixhoneypot, conpot, cowrie, dicompot, dionaea, elasticpot, heralding, honeypy, honeysap, honeytrap, mailoney, medpot, rdpy, snare & tanner
-- Tools: cockpit, ewsposter, fatt, p0f & suricata
-- Since there is no ELK stack provided the Sensor Installation only requires 4 GB of RAM.
+# System Placement
+It is recommended to get yourself familiar with how T-Pot and the honeypots work before you start exposing it towards the internet. For a quick start run a T-Pot installation in a virtual machine.
+<br><br>
+Once you are familiar with how things work you should choose a network you suspect intruders in or from (i.e. the internet). Otherwise T-Pot will most likely not capture any attacks (unless you want to prove a point)! For starters it is recommended to put T-Pot in an unfiltered zone, where all TCP and UDP traffic is forwarded to T-Pot's network interface. To avoid probing for T-Pot's management ports you can put T-Pot behind a firewall and forward all TCP / UDP traffic in the port range of 1-64000 to T-Pot, while allowing access to ports > 64000 only from trusted IPs and / or only exposing the [ports](#required-ports) relevant to your use-case. If you wish to catch malware traffic on unknown ports you should not limit the forwarded ports, since glutton and honeytrap dynamically bind any TCP port that is not covered by other honeypot daemons and thus give you a better representation of the risks your setup is exposed to. An example forwarding setup is sketched below.
+<br><br>
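+A hedged sketch of such a forwarding setup using iptables DNAT on an upstream Linux firewall; the interface name and the T-Pot address 192.168.1.10 are assumptions, and management access to ports > 64000 must be restricted by separate rules:
+```
+# forward the honeypot port range 1-64000 (TCP and UDP) to T-Pot
+iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 1:64000 -j DNAT --to-destination 192.168.1.10
+iptables -t nat -A PREROUTING -i eth0 -p udp --dport 1:64000 -j DNAT --to-destination 192.168.1.10
+```
+<br><br>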
 
-
-##### Industrial
-- Honeypots: conpot, cowrie, dicompot, heralding, honeysap, honeytrap, medpot & rdpy
-- Tools: cockpit, cyberchef, ELK, fatt, elasticsearch head, ewsposter, nginx / heimdall, spiderfoot, p0f & suricata
-
-
-##### Collector
-- Honeypots: heralding & honeytrap
-- Tools: cockpit, cyberchef, fatt, ELK, elasticsearch head, ewsposter, nginx / heimdall, spiderfoot, p0f & suricata
-
-
-##### NextGen
-- Honeypots: adbhoney, ciscoasa, citrixhoneypot, conpot, cowrie, dicompot, dionaea, glutton, heralding, honeypy, honeysap, ipphoney, mailoney, medpot, rdpy, snare & tanner
-- Tools: cockpit, cyberchef, ELK, fatt, elasticsearch head, ewsposter, nginx / heimdall, spiderfoot, p0f & suricata
-
-
-##### Medical
-- Honeypots: dicompot & medpot
-- Tools: cockpit, cyberchef, ELK, fatt, elasticsearch head, ewsposter, nginx / heimdall, spiderfoot, p0f & suricata
-
-
-<a name="installation"></a>
 # Installation
-The installation of T-Pot is straight forward and heavily depends on a working, transparent and non-proxied up and running internet connection. Otherwise the installation **will fail!**
+The T-Pot installation is offered in different variations. While the overall installation of T-Pot is straightforward, it heavily depends on a working, non-proxied (unless you made modifications), up and running internet connection (also see the [required outgoing ports](#required-ports)). If these conditions are not met, the installation **will fail**, either during the execution of the Debian installer, after the first reboot before the T-Pot installer starts up, or while the T-Pot installer is trying to download all the necessary dependencies.
+<br><br>
 
-Firstly, decide if you want to download the prebuilt installation ISO image from [GitHub](https://github.com/telekom-security/tpotce/releases), [create it yourself](#createiso) ***or*** [post-install on an existing Debian 10 (Buster)](#postinstall).
+## ISO Based
+Installing T-Pot from an ISO image is basically the same routine as with any other ISO based Linux distribution. Running on hardware, you copy the ISO file to a USB drive (i.e. with [Etcher](https://github.com/balena-io/etcher)), boot into the Debian installer and choose to install **T-Pot**; running in a VM, you mount the ISO image as a virtual drive in one of the supported [hypervisors](#running-in-a-vm).
+<br><br> 
 
-Secondly, decide where you the system to run: [real hardware](#hardware) or in a [virtual machine](#vm)?
+### **Download ISO Image**
+On the [T-Pot release page](https://github.com/telekom-security/tpotce/releases) you will find two prebuilt ISO images for download, `tpot_amd64.iso` and `tpot_arm64.iso`. Both are based on Debian 11 for amd64 / arm64 based hardware. So far ARM64 support is limited, but works mostly fine with [UTM](#running-in-a-vm) based VMs on Apple Silicon (M1x) Macs.
+<br><br>
 
-<a name="prebuilt"></a>
-## Prebuilt ISO Image
-An installation ISO image is available for download (~50MB), which is created by the [ISO Creator](https://github.com/telekom-security/tpotce) you can use yourself in order to create your own image. It will basically just save you some time downloading components and creating the ISO image.
-You can download the prebuilt installation ISO from [GitHub](https://github.com/telekom-security/tpotce/releases) and jump to the [installation](#vm) section.
-
-<a name="createiso"></a>
-## Create your own ISO Image
-For transparency reasons and to give you the ability to customize your install you use the [ISO Creator](https://github.com/telekom-security/tpotce) that enables you to create your own ISO installation image.
+### **Create your own ISO Image**
+In case you want to modify T-Pot for your environment or simply want to take things into your own hands you can use the [ISO Creator](https://github.com/telekom-security/tpotce) to build your own ISO image.
 
 **Requirements to create the ISO image:**
-- Debian 10 as host system (others *may* work, but *remain* untested)
-- 4GB of free memory  
+- Debian 11 as host system (others *may* work, but *remain* untested)
+- 4GB of free RAM
 - 32GB of free storage
 - A working internet connection
 
-**How to create the ISO image:**
+**Steps to create the ISO image:**
 
 1. Clone the repository and enter it.
 ```
 git clone https://github.com/telekom-security/tpotce
 cd tpotce
 ```
-2. Run the `makeiso.sh` script to build the ISO image.
-The script will download and install dependencies necessary to build the image on the invoking machine. It will further download the ubuntu network installer image (~50MB) which T-Pot is based on.
+2. Run `makeiso.sh` to build the ISO image.
+The script will download and install dependencies necessary to build the image. It will further download the Debian Netinstall image (~50-150MB) which T-Pot is based on.
 ```
 sudo ./makeiso.sh
 ```
-After a successful build, you will find the ISO image `tpot.iso` along with a SHA256 checksum `tpot.sha256` in your folder.
+3. After a successful build, you will find the ISO image `tpot_[amd64,arm64].iso` along with a SHA256 checksum `tpot_[amd64,arm64].sha256` based on your architecture choice in your folder.
+<br><br>
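+Optionally verify the image against its checksum file (assuming the `.sha256` file is in standard `sha256sum` format; the file name depends on your architecture choice):
+```
+sha256sum -c tpot_amd64.sha256
+```
+<br><br>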
 
-<a name="vm"></a>
-## Running in VM
-You may want to run T-Pot in a virtualized environment. The virtual system configuration depends on your virtualization provider.
+## Post Install
+In some cases it is necessary to install T-Pot after you have installed Debian, i.e. if your provider does not offer an ISO based installation, you need special drivers for your hardware to work, or you want to experiment with ARM64 hardware that is not supported by the ISO image. In that case you can clone the T-Pot repository yourself. Make sure you understand the different [user types](#user-types) before setting up your OS.
+<br><br>
 
-T-Pot is successfully tested with [VirtualBox](https://www.virtualbox.org) and [VMWare](http://www.vmware.com) with just little modifications to the default machine configurations.
+### **Download Debian Netinstall Image**
+Since T-Pot is based on the Debian Netinstall image ([amd64](http://ftp.debian.org/debian/dists/bullseye/main/installer-amd64/current/images/netboot/mini.iso), [arm64](http://ftp.debian.org/debian/dists/bullseye/main/installer-arm64/current/images/netboot/mini.iso)) it is strongly recommended you use this image, too, if possible. It is very lightweight and only installs core services.
+<br><br>
 
-It is important to make sure you meet the [system requirements](#requirements) and assign virtual harddisk and RAM according to the requirements while making sure networking is bridged.
-
-You need to enable promiscuous mode for the network interface for fatt, suricata and p0f to work properly. Make sure you enable it during configuration.
-
-If you want to use a wifi card as a primary NIC for T-Pot, please be aware that not all network interface drivers support all wireless cards. In VirtualBox e.g. you have to choose the *"MT SERVER"* model of the NIC.
-
-Lastly, mount the `tpot.iso` ISO to the VM and continue with the installation.<br>
-
-You can now jump [here](#firstrun).
-
-<a name="hardware"></a>
-## Running on Hardware
-If you decide to run T-Pot on dedicated hardware, just follow these steps:
-
-1. Burn a CD from the ISO image or make a bootable USB stick using the image. <br>
-Whereas most CD burning tools allow you to burn from ISO images, the procedure to create a bootable USB stick from an ISO image depends on your system. There are various Windows GUI tools available, e.g. [this tip](http://www.ubuntu.com/download/desktop/create-a-usb-stick-on-windows) might help you.<br> On [Linux](http://askubuntu.com/questions/59551/how-to-burn-a-iso-to-a-usb-device) or [MacOS](http://www.ubuntu.com/download/desktop/create-a-usb-stick-on-mac-osx) you can use the tool *dd* or create the USB stick with T-Pot's [ISO Creator](https://github.com/telekom-security).
-2. Boot from the USB stick and install.
-
-*Please note*: Limited tests are performed for the Intel NUC platform other hardware platforms **remain untested**. There is no hardware support provided of any kind.
-
-<a name="postinstall"></a>
-## Post-Install User
-In some cases it is necessary to install Debian 10 (Buster) on your own:
- - Cloud provider does not offer mounting ISO images.
- - Hardware setup needs special drivers and / or kernels.
- - Within your company you have to setup special policies, software etc.
- - You just like to stay on top of things.
-
-The T-Pot Universal Installer will upgrade the system and install all required T-Pot dependencies.
-
-Just follow these steps:
+### **Post Install User Method**
+The post install method must be executed as `root` (`sudo su -`, `su -`). Just follow these steps:
 
 ```
 git clone https://github.com/telekom-security/tpotce
@@ -272,10 +328,10 @@ cd tpotce/iso/installer/
 ./install.sh --type=user
 ```
 
-The installer will now start and guide you through the install process.
+The installation will now start; you can move on to the [T-Pot Installer](#t-pot-installer) section.
+<br><br>
 
-<a name="postinstallauto"></a>
-## Post-Install Auto
+### **Post-Install Auto**
 You can also let the installer run automatically if you provide your own `tpot.conf`. An example is available in `tpotce/iso/installer/tpot.conf.dist`. This should make things easier in case you want to automate the installation i.e. with **Ansible**.
 
 Just follow these steps while adjusting `tpot.conf` to your needs:
@@ -286,19 +342,38 @@ cd tpotce/iso/installer/
 cp tpot.conf.dist tpot.conf
 ./install.sh --type=auto --conf=tpot.conf
 ```
+<br><br>
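+For orientation, a hypothetical `tpot.conf` could look like the sketch below; variable names and values are illustrative assumptions, the authoritative template is `tpot.conf.dist`:
+```
+# hypothetical example - always derive your real config from tpot.conf.dist
+myCONF_TPOT_FLAVOR='STANDARD'
+myCONF_WEB_USER='webuser'
+myCONF_WEB_PW='w3b$ecret'
+```
+<br><br>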
 
-The installer will start automatically and guide you through the install process.
+# T-Pot Installer
+Usage of the T-Pot Installer is mostly self explanatory, since the installer will guide you through the setup process. Depending on your installation method, [ISO Based](#iso-based) or [Post Install](#post-install), you will be asked to create a password for the user `tsec` and / or create a `<web_user>` and password. Make sure to remember the usernames and passwords and that you understand their meanings as outlined in [User Types](#user-types).
+<br><br>
+
+## Installation Types
+In the past T-Pot was only available as a [standalone](#standalone) solution with all services, tools, honeypots, etc. installed onto a single machine. Based on demand T-Pot now also offers a [distributed](#distributed) solution. While the standalone solution does not require additional explanation, the distributed option requires you to select different editions (or flavors).
+<br><br>
+
+### **Standalone**
+With T-Pot Standalone all services, tools, honeypots, etc. will be installed onto a single host. Make sure to meet the [system requirements](#system-requirements). You can choose from various pre-defined T-Pot editions (or flavors) depending on your personal use-case (you can always adjust `/opt/tpot/etc/tpot.yml` to your needs).
+Once the installation is finished you can proceed to [First Start](#first-start).
+<br><br>
+
+### **Distributed**
+The distributed version of T-Pot requires at least two hosts:
+- the T-Pot **HIVE**, which will host the Elastic Stack and T-Pot tools (install this first!),
+- and a T-Pot **HIVE_SENSOR**, which will host the honeypots and transmit log data to the **HIVE's** Elastic Stack.
+
+To finalize the **HIVE_SENSOR** installation continue to [Distributed Deployment](#distributed-deployment).
+<br><br>
 
-<a name="cloud"></a>
 ## Cloud Deployments
 Located in the [`cloud`](cloud) folder.  
-Currently there are examples with Ansible & Terraform.  
+Currently there are examples for Ansible & Terraform.  
 If you would like to contribute, you can add other cloud deployments like Chef or Puppet or extend current methods with other cloud providers.
-
+<br><br>
 *Please note*: Cloud providers usually offer adjusted Debian OS images, which might not be compatible with T-Pot. There is no cloud provider support provided of any kind.
+<br><br>
 
-<a name="ansible"></a>
-### Ansible Deployment
+### **Ansible Deployment**
 You can find an [Ansible](https://www.ansible.com/) based T-Pot deployment in the [`cloud/ansible`](cloud/ansible) folder.  
 The Playbook in the [`cloud/ansible/openstack`](cloud/ansible/openstack) folder is reusable for all **OpenStack** clouds out of the box.
 
@@ -307,140 +382,59 @@ It first creates all resources (security group, network, subnet, router), deploy
 You can have a look at the Playbook and easily adapt the deploy role for other [cloud providers](https://docs.ansible.com/ansible/latest/scenario_guides/cloud_guides.html). Check out [Ansible Galaxy](https://galaxy.ansible.com/search?keywords=&order_by=-relevance&page=1&deprecated=false&type=collection&tags=cloud) for more cloud collections.
 
 *Please note*: Cloud providers usually offer adjusted Debian OS images, which might not be compatible with T-Pot. There is no cloud provider support provided of any kind.
+<br><br>
 
-<a name="terraform"></a>
-### Terraform Configuration
-
-You can find [Terraform](https://www.terraform.io/) configuration in the [`cloud/terraform`](cloud/terraform) folder.
-
+### **Terraform Configuration**
+You can find a [Terraform](https://www.terraform.io/) configuration in the [`cloud/terraform`](cloud/terraform) folder.
 This can be used to launch a virtual machine, bootstrap any dependencies and install T-Pot in a single step.
 
-Configuration for **Amazon Web Services** (AWS) and **Open Telekom Cloud** (OTC) is currently included.  
+Configurations for **Amazon Web Services** (AWS) and **Open Telekom Cloud** (OTC) are currently included.  
 This can easily be extended to support other [Terraform providers](https://registry.terraform.io/browse/providers?category=public-cloud%2Ccloud-automation%2Cinfrastructure).
 
 *Please note*: Cloud providers usually offer adjusted Debian OS images, which might not be compatible with T-Pot. There is no cloud provider support provided of any kind.
-
-<a name="firstrun"></a>
-## First Run
-The installation requires very little interaction, only a locale and keyboard setting have to be answered for the basic linux installation. While the system reboots maintain the active internet connection. The T-Pot installer will start and ask you for an installation type, password for the **tsec** user and credentials for a **web user**. Everything else will be configured automatically. All docker images and other componenents will be downloaded. Depending on your network connection and the chosen installation type, the installation may take some time. With 250Mbit down / 40Mbit up the installation is usually finished within 15-30 minutes.
-
-Once the installation is finished, the system will automatically reboot and you will be presented with the T-Pot login screen. On the console you may login with:
-
-- user: **[tsec or user]** *you chose during one of the post install methods*
-- pass: **[password]** *you chose during the installation*
-
-All honeypot services are preconfigured and are starting automatically.
-
-You can login from your browser and access the Admin UI: `https://<your.ip>:64294` or via SSH to access the command line: `ssh -l tsec -p 64295 <your.ip>`
-
-- user: **[tsec or user]** *you chose during one of the post install methods*
-- pass: **[password]** *you chose during the installation*
-
-You can also login from your browser and access the Web UI: `https://<your.ip>:64297`
-- user: **[user]** *you chose during the installation*
-- pass: **[password]** *you chose during the installation*
+<br><br>
 
 
-<a name="placement"></a>
-# System Placement
-Make sure your system is reachable through a network you suspect intruders in / from (i.e. the internet). Otherwise T-Pot will most likely not capture any attacks, other than the ones from your internal network! For starters it is recommended to put T-Pot in an unfiltered zone, where all TCP and UDP traffic is forwarded to T-Pot's network interface. However to avoid fingerprinting you can put T-Pot behind a firewall and forward all TCP / UDP traffic in the port range of 1-64000 to T-Pot while allowing access to ports > 64000 only from trusted IPs.
+# First Start
+Once the T-Pot Installer successfully finishes, the system will automatically reboot and you will be presented with the T-Pot login screen. Logins are according to the [User Types](#user-types):
 
-A list of all relevant ports is available as part of the [Technical Concept](#concept)
-<br>
+- user: **[`tsec` or `<os_username>`]**
+- pass: **[password]**
 
-Basically, you can forward as many TCP ports as you want, as glutton & honeytrap dynamically bind any TCP port that is not covered by the other honeypot daemons.
+You can login from your browser and access Cockpit: `https://<your.ip>:64294` or via SSH to access the command line: `ssh -l [tsec,<os_username>] -p 64295 <your.ip>`:
 
-In case you need external Admin UI access, forward TCP port 64294 to T-Pot, see below.
-In case you need external SSH access, forward TCP port 64295 to T-Pot, see below.
-In case you need external Web UI access, forward TCP port 64297 to T-Pot, see below.
+- user: **[`tsec` or `<os_username>`]**
+- pass: **[password]**
 
-T-Pot requires outgoing git, http, https connections for updates (Debian, Docker, GitHub, PyPi), attack submission (ewsposter, hpfeeds) and CVE / IP reputation translation map updates (logstash, listbot). Ports and availability may vary based on your geographical location. Also during first install outgoing ICMP / TRACEROUTE is required additionally to find the closest and fastest mirror to you.
+You can also login from your browser and access the Nginx (T-Pot Web UI and tools): `https://<your.ip>:64297`
+- user: **[`<web_user>`]**
+- pass: **[password]**
+<br><br>
 
-<a name="updates"></a>
-# Updates
-For the ones of you who want to live on the bleeding edge of T-Pot development we introduced an update feature which will allow you to update all T-Pot relevant files to be up to date with the T-Pot master branch.
-**If you made any relevant changes to the T-Pot relevant config files make sure to create a backup first.**
+## Standalone First Start
+There is not much to do except log in and check via `dps.sh` that all services and honeypots are starting up correctly, then log in to Kibana and / or the GeoIP Attack Map to monitor the attacks.
+<br><br>
 
-The Update script will:
- - **mercilessly** overwrite local changes to be in sync with the T-Pot master branch
- - upgrade the system to the packages available in Debian (Stable)
- - update all resources to be in-sync with the T-Pot master branch
- - ensure all T-Pot relevant system files will be patched / copied into the original T-Pot state
- - restore your custom ews.cfg and HPFEED settings from `/data/ews/conf`
+## Distributed Deployment
+With the distributed deployment, first log in to the **HIVE** and the **HIVE_SENSOR** and check via `dps.sh` that all services and honeypots are starting up correctly. Once you have confirmed everything is working fine, you need to deploy the **HIVE_SENSOR** to the **HIVE** in order to transmit honeypot logs to the Elastic Stack.
+<br><br>
 
-You simply run the update script:
+For **deployment** simply keep the **HIVE** login data ready and follow these steps; the `deploy.sh` script will set up the **HIVE** and **HIVE_SENSOR** for securely shipping and receiving logs:
 ```
 sudo su -
-cd /opt/tpot/
-./update.sh
+deploy.sh
 ```
 
-**Despite all testing efforts please be reminded that updates sometimes may have unforeseen consequences. Please create a backup of the machine or the files with the most value to your work.**  
-
-<a name="options"></a>
-# Options
-The system is designed to run without any interaction or maintenance and automatically contributes to the community.<br>
-For some this may not be enough. So here some examples to further inspect the system and change configuration parameters.
-
-<a name="ssh"></a>
-## SSH and web access
-By default, the SSH daemon allows access on **tcp/64295** with a user / password combination and prevents credential brute forcing attempts using `fail2ban`. This also counts for Admin UI (**tcp/64294**) and Web UI (**tcp/64297**) access.<br>
-
-If you do not have a SSH client at hand and still want to access the machine via command line you can do so by accessing the Admin UI from `https://<your.ip>:64294`, enter
-
-- user: **[tsec or user]** *you chose during one of the post install methods*
-- pass: **[password]** *you chose during the installation*
-
-You can also add two factor authentication to Cockpit just by running `2fa.sh` on the command line.
-
-![Cockpit Terminal](doc/cockpit3.png)
-
-<a name="heimdall"></a>
-## T-Pot Landing Page 
-Just open a web browser and connect to `https://<your.ip>:64297`, enter
-
-- user: **[user]** *you chose during the installation*
-- pass: **[password]** *you chose during the installation*
-
-and the **Landing Page** will automagically load. Now just click on the tool / link you want to start.
-
-![Dashbaord](doc/heimdall.png)
-
-<a name="kibana"></a>
-## Kibana Dashboard
-
-![Dashbaord](doc/kibana.png)
-
-<a name="tools"></a>
-## Tools
-The following web based tools are included to improve and ease up daily tasks.
-
-![Cockpit Overview](doc/cockpit1.png)
-
-![Cockpit Containers](doc/cockpit2.png)
-
-![Cyberchef](doc/cyberchef.png)
-
-![ES Head Plugin](doc/headplugin.png)
-
-![Spiderfoot](doc/spiderfoot.png)
+The script will ask for the **HIVE** login data and the **HIVE** IP address, create SSH keys accordingly and deploy them securely over an SSH connection to the **HIVE**. On the **HIVE** machine a user with the **HIVE_SENSOR** hostname is created, belonging to a user group `tpotlogs` which may only open an SSH tunnel via port `64295` and transmit Logstash logs to port `127.0.0.1:64305`, with no permission to log in to a shell. You may review the config in `/etc/ssh/sshd_config` and the corresponding `autossh` settings in `docker/elk/logstash/dist/entrypoint.sh`. Settings and keys are stored in `/data/elk/logstash` and loaded as part of `/opt/tpot/etc/tpot.yml`.
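+
+Conceptually the sensor ships its logs through an SSH local port forward kept alive by `autossh`, roughly equivalent to the following sketch (simplified and hypothetical; the actual invocation with all options lives in `docker/elk/logstash/dist/entrypoint.sh`):
+```
+# sketch only - forward the sensor's local port 64305 to 127.0.0.1:64305 on the
+# HIVE, where Logstash listens for incoming sensor logs
+autossh -M 0 -f -N \
+  -L 64305:127.0.0.1:64305 \
+  -p 64295 \
+  -i /data/elk/logstash/<hive_sensor_hostname> \
+  <hive_sensor_hostname>@<hive_ip>
+```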
+<br><br> 
 
 
-<a name="maintenance"></a>
-## Maintenance
-T-Pot is designed to be low maintenance. Basically, there is nothing you have to do but let it run.
-
-If you run into any problems, a reboot may fix it :bowtie:
-
-If new versions of the components involved appear new docker images will be created and distributed. New images will be available from docker hub and downloaded automatically to T-Pot and activated accordingly.  
-
-<a name="submission"></a>
 ## Community Data Submission
 T-Pot is provided in order to make it accessible to all interested in honeypots. By default, the captured data is submitted to a community backend. This community backend uses the data to feed [Sicherheitstacho](https://sicherheitstacho.eu).
-You may opt out of the submission by removing the `# Ewsposter service` from `/opt/tpot/etc/tpot.yml`:
+You may opt out of the submission by removing the `# Ewsposter service` section from `/opt/tpot/etc/tpot.yml` following these steps:
 1. Stop T-Pot services: `systemctl stop tpot`
-2. Remove Ewsposter service: `vi /opt/tpot/etc/tpot.yml`
-3. Remove the following lines, save and exit vi (`:x!`):<br>
+2. Open `tpot.yml`: `vi /opt/tpot/etc/tpot.yml`
+3. Remove the following lines, save and exit vi (`:x!`):
 ```
 # Ewsposter service
   ewsposter:
@@ -448,18 +442,27 @@ You may opt out of the submission by removing the `# Ewsposter service` from `/o
     restart: always
     networks:
      - ewsposter_local
-    image: "ghcr.io/telekom-security/ewsposter:2006"
+    environment:
+     - EWS_HPFEEDS_ENABLE=false
+     - EWS_HPFEEDS_HOST=host
+     - EWS_HPFEEDS_PORT=port
+     - EWS_HPFEEDS_CHANNELS=channels
+     - EWS_HPFEEDS_IDENT=user
+     - EWS_HPFEEDS_SECRET=secret
+     - EWS_HPFEEDS_TLSCERT=false
+     - EWS_HPFEEDS_FORMAT=json
+    env_file:
+     - /opt/tpot/etc/compose/elk_environment
+    image: "dtagdevsec/ewsposter:2203"
     volumes:
      - /data:/data
      - /data/ews/conf/ews.ip:/opt/ewsposter/ews.ip
 ```
 4. Start T-Pot services: `systemctl start tpot`
 
-Data is submitted in a structured ews-format, a XML stucture. Hence, you can parse out the information that is relevant to you.
-
 It is encouraged not to disable the data submission as it is the main purpose of the community approach - as you all know **sharing is caring** 😍
+<br><br>
 
-<a name="hpfeeds-optin"></a>
 ## Opt-In HPFEEDS Data Submission
 As an Opt-In it is now possible to also share T-Pot data with 3rd party HPFEEDS brokers.  
 If you want to share your T-Pot data you simply have to register an account with a 3rd party broker with its own benefits towards the community. You simply run `hpfeeds_optin.sh` which will ask for your credentials. It will automatically update `/opt/tpot/etc/tpot.yml` to deliver events to your desired broker.
@@ -471,45 +474,305 @@ Be sure to apply any changes by running `./hpfeeds_optin.sh --conf=/data/ews/con
 No worries: Your old config gets backed up in `/data/ews/conf/hpfeeds.cfg.old`
 
 Of course you can also rerun the `hpfeeds_optin.sh` script to change and apply your settings interactively.
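+
+After opting in, the `# Ewsposter service` section of `/opt/tpot/etc/tpot.yml` carries your broker settings in the `EWS_HPFEEDS_*` environment variables shown earlier, e.g. (the broker values below are placeholders):
+```
+    environment:
+     - EWS_HPFEEDS_ENABLE=true
+     - EWS_HPFEEDS_HOST=broker.example.com
+     - EWS_HPFEEDS_PORT=10000
+     - EWS_HPFEEDS_CHANNELS=tpot.events
+     - EWS_HPFEEDS_IDENT=your_ident
+     - EWS_HPFEEDS_SECRET=your_secret
+     - EWS_HPFEEDS_TLSCERT=false
+     - EWS_HPFEEDS_FORMAT=json
+```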
+<br><br>
 
-<a name="roadmap"></a>
-# Roadmap
-As with every development there is always room for improvements ...
 
-Some features may be provided with updated docker images, others may require some hands on from your side.
+# Remote Access and Tools
+T-Pot comes with some pre-installed services and tools which will make some of your research tasks and accessing T-Pot remotely a lot easier.
+<br><br>
 
-You are always invited to participate in development on our [GitHub](https://github.com/telekom-security/tpotce) page.
+## SSH and Cockpit
+According to the [User Types](#user-types) you can login from your browser and access Cockpit: `https://<your.ip>:64294` or via SSH to access the command line: `ssh -l [tsec,<os_username>] -p 64295 <your.ip>`:
 
-<a name="disclaimer"></a>
-# Disclaimer
-- We don't have access to your system. So we cannot remote-assist when you break your configuration. But you can simply reinstall.
-- The software was designed with best effort security, not to be in stealth mode. Because then, we probably would not be able to provide those kind of honeypot services.
-- You install and you run within your responsibility. Choose your deployment wisely as a system compromise can never be ruled out.
-- Honeypots - by design - should not host any sensitive data. Make sure you don't add any.
-- By default, your data is submitted to [SecurityMeter](https://www.sicherheitstacho.eu/start/main). You can disable this in the config. But hey, wouldn't it be better to contribute to the community?
+- user: **[`tsec` or `<os_username>`]**
+- pass: **[password]**
 
-<a name="faq"></a>
-# FAQ
-Please report any issues or questions on our [GitHub issue list](https://github.com/telekom-security/tpotce/issues), so the community can participate.
+Especially if you do not have an SSH client at hand and still want to access the machine via command line you can do so through Cockpit. You can also add two factor authentication to Cockpit just by running `2fa.sh` on the command line.
+
+![Cockpit Overview](doc/cockpit_a.png)
+![Cockpit Terminal](doc/cockpit_b.png)
+<br><br>
+
+## T-Pot Landing Page 
+According to the [User Types](#user-types) you can open the T-Pot Landing Page from your browser via `https://<your.ip>:64297`:
+
+- user: **[`<web_user>`]**
+- pass: **[password]**
+
+![T-Pot-WebUI](doc/tpotwebui.png)
+<br><br>
+
+## Kibana Dashboard
+On the T-Pot Landing Page just click on `Kibana` and you will be forwarded to Kibana. You can select from a large variety of dashboards and visualizations all tailored to the T-Pot supported honeypots.
+
+![Dashboard](doc/kibana_a.png)
+<br><br>
+
+## Attack Map
+On the T-Pot Landing Page just click on `Attack Map` and you will be forwarded to the Attack Map. Since the Attack Map utilizes web sockets you need to re-enter the `<web_user>` credentials.
+
+![AttackMap](doc/attackmap.png)
+<br><br>
+
+## Cyberchef
+On the T-Pot Landing Page just click on `Cyberchef` and you will be forwarded to Cyberchef.
+
+![Cyberchef](doc/cyberchef.png)
+<br><br>
+
+## Elasticvue
+On the T-Pot Landing Page just click on `Elasticvue` and you will be forwarded to Elasticvue.
+
+![Elasticvue](doc/elasticvue.png)
+<br><br>
+
+## Spiderfoot
+On the T-Pot Landing Page just click on `Spiderfoot` and you will be forwarded to Spiderfoot.
+
+![Spiderfoot](doc/spiderfoot.png)
+<br><br>
+
+
+# Maintenance
+T-Pot is designed to be low maintenance. Basically there is nothing you have to do but let it run; however, you should read this section closely.
+<br><br>
+
+## Updates
+While security updates are installed automatically by the OS and Docker images are pulled once per day (`/etc/crontab`) to check for updated images, T-Pot offers the option to be updated to the latest master and / or to upgrade a previous version. Updating and upgrading always introduces the risk of losing your data, so you are heavily encouraged to back up your machine before proceeding.
+<br><br>
+Should an update fail, opening an issue or a discussion will help to improve things in the future, but the solution will always be to perform a ***fresh install*** as we simply ***cannot*** provide any support for lost data!
+<br>
+## ***If you made any relevant changes to the T-Pot config files make sure to create a backup first!***
+## ***Updates may have unforeseen consequences. Create a backup of the machine or the files with the most value to your work!*** 
+<br>
+
+The update script will ...
+ - ***mercilessly*** overwrite local changes to be in sync with the T-Pot master branch
+ - upgrade the system to the latest packages available for the installed Debian version
+ - update all resources to be in sync with the T-Pot master branch
+ - ensure all T-Pot relevant system files will be patched / copied into the original T-Pot state
+ - restore your custom ews.cfg and HPFEED settings from `/data/ews/conf`
+
+You simply run the update script ***after you backed up any relevant data***:
+```
+sudo su -
+cd /opt/tpot/
+./update.sh
+```
+
+## Start T-Pot
+The T-Pot service automatically starts and stops on each reboot (which occurs once on a daily basis as set up in `/etc/crontab` during installation).
+<br>
+If you want to manually start the T-Pot service you can do so via `systemctl start tpot` and observe via `dps.sh 1` the startup of the containers.
+<br><br>
+
+## Stop T-Pot
+The T-Pot service automatically starts and stops on each reboot (which occurs once on a daily basis as set up in `/etc/crontab` during installation).
+<br>
+If you want to manually stop the T-Pot service you can do so via `systemctl stop tpot` and observe via `dps.sh 1` the shutdown of the containers.
+<br><br>
+
+## T-Pot Data Folder
+All persistent log files from the honeypots, tools and T-Pot related services are stored in `/data`. This includes collected artifacts which are not transmitted to the Elastic Stack.
+<br><br>
+
+## Log Persistence
+All log data stored in the [T-Pot Data Folder](#t-pot-data-folder) will be persisted for 30 days by default. The persistence for the log files can be changed in `/opt/tpot/etc/logrotate/logrotate.conf`.
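+
+For illustration, the retention knob in a logrotate config is the `rotate` count. A hypothetical excerpt (paths and options are examples, not the shipped config) could look like this:
+```
+/data/cowrie/log/cowrie.json {
+  daily
+  rotate 30       # keep 30 rotations, i.e. roughly 30 days of logs
+  compress
+  missingok
+  notifempty
+}
+```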
+<br>
+Elasticsearch indices are handled by the `tpot` Index Lifecycle Policy which can be adjusted directly in Kibana.
+![IndexManagement1](doc/kibana_b.png)
+<br><br>
+
+
+By default the `tpot` Index Lifecycle Policy keeps the indices for 30 days. This offers a good balance between storage and speed. However, you may adjust the policy to your needs.
+![IndexManagement2](doc/kibana_c.png)
+<br><br>
+
+## Clean Up
+All log data stored in the [T-Pot Data Folder](#t-pot-data-folder) (except for Elasticsearch indices, of course) can be erased by running `clean.sh`.
+<br><br>
+
+## Show Containers
+You can show all T-Pot relevant containers by running `dps.sh` or `dps.sh [interval]`. The `interval` (in seconds) will re-run `dps.sh` automatically. You may also run `glances`, which will give you more insight into system usage and available resources while still showing the running containers.
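+
+For example, to refresh the container overview every 5 seconds:
+```
+dps.sh 5
+```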
+<br><br>
+
+## Blackhole
+Some users reported they wanted the option to run T-Pot in some sort of stealth mode, without the permanent visits of publicly known scanners, thus reducing the possibility of being exposed. While this is of course always a cat and mouse game, T-Pot now offers a blackhole feature that null routes all requests from [known mass scanners](https://raw.githubusercontent.com/stamparm/maltrail/master/trails/static/mass_scanner.txt) while still catching the events through Suricata.
+<br>
+The feature is activated by running `blackhole.sh add`, which will download the mass scanner IP list, add the blackhole routes and keep them active (they are re-added on every T-Pot start) until `blackhole.sh del` permanently removes them.
+<br>
+Enabling this feature will drastically reduce some attackers' visibility and consequently result in less activity. However, as already mentioned, it is neither a guarantee of being completely stealthy nor will it prevent fingerprinting of some honeypot services.
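+
+A typical round trip looks like this (counting the blackhole routes is just a quick way to verify the result):
+```
+sudo su -
+blackhole.sh add          # download the mass scanner list and add the routes
+ip r | grep -c blackhole  # verify: should report several hundred routes
+blackhole.sh del          # permanently remove the routes again
+```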
+<br><br>
+
+## Add Users to Nginx (T-Pot WebUI)
+Nginx (T-Pot WebUI) allows you to add as many `<web_user>` accounts as you want (according to the [User Types](#user-types)).
+
+To add a new user just follow these steps:
+```
+sudo su -
+systemctl stop tpot
+htpasswd /data/nginx/conf/nginxpasswd <username>
+> New password:
+> Re-type new password:
+> Adding password for user <username>
+systemctl start tpot
+```
+If you want to remove users you just modify `nginxpasswd` with `vi` or any other editor, remove the corresponding line and restart T-Pot again.
+<br><br>
+
+## Import and Export Kibana Objects
+Some T-Pot updates will require you to update the Kibana objects, either to support new honeypots or to improve existing dashboards or visualizations. Make sure to ***export*** first so you do not lose any of your adjustments.
+
+### **Export**
+1. Go to Kibana
+2. Click on "Stack Management"
+3. Click on "Saved Objects"
+4. Click on "Export <no.> objetcs"
+5. Click on "Export all"
+
+This will export an NDJSON file with all your objects. Always run a full export to make sure all references are included.
+
+### **Import**
+1. [Download the NDJSON file](https://github.com/dtag-dev-sec/tpotce/blob/master/etc/objects/kibana_export.ndjson.zip) and unzip it.
+2. Go to Kibana
+3. Click on "Stack Management"
+4. Click on "Saved Objects"
+5. Click on "Import" and leave the defaults (check for existing objects and automatically overwrite conflicts) if you did not make personal changes to the Kibana objects.
+6. Browse NDJSON file
+When asked: "If any of the objects already exist, do you want to automatically overwrite them?" you answer with "Yes, overwrite all".
+<br><br>
+
+## Switch Editions
+You can switch between T-Pot editions (flavors) by running `tped.sh`.
+<br><br>
+
+## Redeploy Hive Sensor
+In case you need to re-deploy your Hive Sensor, i.e. the IP of your Hive has changed or you want to move the Hive Sensor to a new Hive, simply run these commands:
+```
+sudo su -
+systemctl stop tpot
+rm /data/elk/logstash/*
+deploy.sh
+reboot
+```
+<br><br>
+
+## Adjust tpot.yml
+Maybe the available T-Pot editions do not apply to your use-case or you need a different set of honeypots. You can adjust `/opt/tpot/etc/tpot.yml` to your own preference. If you need examples of how this works, just follow the configuration of the existing editions (docker-compose files) in `/opt/tpot/etc/compose` and the [Docker Compose Specification](https://docs.docker.com/compose/compose-file/).
+```
+sudo su -
+systemctl stop tpot
+vi /opt/tpot/etc/tpot.yml
+docker-compose -f /opt/tpot/etc/tpot.yml up    # check if everything works, then CTRL+C
+docker-compose -f /opt/tpot/etc/tpot.yml down -v
+systemctl start tpot 
+```
+<br><br>
+
+## Enable Cockpit 2FA
+You can enable two-factor-authentication for Cockpit by running `2fa.sh`.
+<br><br>
+
+# Troubleshooting
+Generally T-Pot is offered ***as is*** without any commitment regarding support. Issues and discussions can be opened, but be prepared to include the basic necessary info, so the community is able to help.
+<br><br>
+
+## Logging
+* Check if your containers are running correctly: `dps.sh`
+
+* Check that your system resources are not exhausted: `htop`, `glances`
+
+* Check if there is a port conflict:
+```
+systemctl stop tpot
+grc netstat -tulpen
+vi /opt/tpot/etc/tpot.yml
+docker-compose -f /opt/tpot/etc/tpot.yml up
+CTRL+C
+docker-compose -f /opt/tpot/etc/tpot.yml down -v
+```
+
+* Check container logs: `docker logs -f <container_name>`
+
+* Check if you were locked out by [fail2ban](#fail2ban).
+<br><br>
+
+## Fail2Ban
+If you cannot log in, there are three likely reasons:
+1. You need to review [User Types](#user-types) and understand the different users.
+2. You are trying to SSH into T-Pot, but used `tcp/22` instead of `tcp/64295`, or used the incorrect user for Cockpit or Nginx (T-Pot WebUI).
+3. You had too many wrong attempts from the above and got locked out by `fail2ban`.
+
+To resolve Fail2Ban lockouts run `fail2ban-client status`:
+
+```
+fail2ban-client status
+Status
+|- Number of jail:	3
+`- Jail list:	nginx-http-auth, pam-generic, sshd
+```
+
+`nginx-http-auth` refers to missed BasicAuth login attempts (Nginx / T-Pot WebUI) on `tcp/64297`
+
+`sshd` refers to missed OS SSH login attempts on `tcp/64295`
+
+`pam-generic` refers to missed OS Cockpit login attempts on `tcp/64294`
+
+Check the individual jails, e.g. `sshd`:
+
+```
+fail2ban-client status sshd
+Status for the jail: sshd
+|- Filter
+|  |- Currently failed:	0
+|  |- Total failed:	0
+|  `- File list:	/var/log/auth.log
+`- Actions
+   |- Currently banned:	0
+   |- Total banned:	0
+   `- Banned IP list:
+```
+
+If there are any banned IPs you can unban these with `fail2ban-client unban --all` or `fail2ban-client unban <ip>`.
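+
+For example:
+```
+fail2ban-client unban --all            # lift all bans across all jails
+fail2ban-client unban 203.0.113.42     # or unban a single (example) IP
+```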
+<br><br>
+
+## RAM and Storage
+The Elastic Stack is hungry for RAM, specifically `logstash` and `elasticsearch`. If the Elastic Stack is unavailable, does not receive any logs or simply keeps crashing, it is most likely a RAM or storage issue.
+While T-Pot keeps trying to restart the services / containers, run `docker logs -f <container_name>` (either `logstash` or `elasticsearch`) and check if there are any warnings or failures involving RAM.
+
+Storage failures can be identified more easily via `htop` or `glances`.
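+
+A few commands that may help narrow the issue down (a sketch; container names are listed by `dps.sh`):
+```
+docker logs -f elasticsearch   # watch for heap / OutOfMemory warnings
+docker logs -f logstash
+free -m                        # check available RAM
+df -h /data                    # a full /data partition will likely stall indexing
+```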
+<br><br>
 
-<a name="contact"></a>
 # Contact
-The software is provided **as is** in a Community Edition format. T-Pot is designed to run out of the box and with zero maintenance involved. <br>
-We hope you understand that we cannot provide support on an individual basis. We will try to address questions, bugs and problems on our [GitHub issue list](https://github.com/telekom-security/tpotce/issues).
+T-Pot is provided ***as is***, open source, ***without*** any commitment regarding support ([see the disclaimer](#disclaimer)).
+
+If you are a company or institution and wish for a personal contact aside from [issues](#issues) and [discussions](#discussions), please get in touch with our [sales team](https://www.t-systems.com/de/en/security).
+
+If you are a security researcher and want to responsibly report an issue please get in touch with our [CERT](https://www.telekom.com/en/corporate-responsibility/data-protection-data-security/security/details/introducing-deutsche-telekom-cert-358316).
+<br><br>
+
+## Issues
+Please report issues (errors) on our [GitHub Issues](https://github.com/telekom-security/tpotce/issues), but [troubleshoot](#troubleshooting) first. Issues not providing information to address the error will be closed or converted into [discussions](#discussions).
+
+Feel free to use the search function; it is possible a similar issue has been addressed already, with the solution just a search away.
+<br><br>
+
+## Discussions
+General questions, ideas, show & tell, etc. can be addressed on our [GitHub Discussions](https://github.com/telekom-security/tpotce/discussions).
+
+Feel free to use the search function; it is possible a similar discussion has been opened already, with an answer just a search away.
+<br><br>
 
-<a name="licenses"></a>
 # Licenses
 The software that T-Pot is built on uses the following licenses.
-<br>GPLv2: [conpot](https://github.com/mushorg/conpot/blob/master/LICENSE.txt), [dionaea](https://github.com/DinoTools/dionaea/blob/master/LICENSE), [honeysap](https://github.com/SecureAuthCorp/HoneySAP/blob/master/COPYING), [honeypy](https://github.com/foospidy/HoneyPy/blob/master/LICENSE), [honeytrap](https://github.com/armedpot/honeytrap/blob/master/LICENSE), [suricata](http://suricata-ids.org/about/open-source/)
-<br>GPLv3: [adbhoney](https://github.com/huuck/ADBHoney), [elasticpot](https://gitlab.com/bontchev/elasticpot/-/blob/master/LICENSE), [ewsposter](https://github.com/telekom-security/ews/), [log4pot](https://github.com/thomaspatzke/Log4Pot/blob/master/LICENSE), [fatt](https://github.com/0x4D31/fatt/blob/master/LICENSE), [rdpy](https://github.com/citronneur/rdpy/blob/master/LICENSE), [heralding](https://github.com/johnnykv/heralding/blob/master/LICENSE.txt), [ipphoney](https://gitlab.com/bontchev/ipphoney/-/blob/master/LICENSE), [redishoneypot](https://github.com/cypwnpwnsocute/RedisHoneyPot/blob/main/LICENSE), [snare](https://github.com/mushorg/snare/blob/master/LICENSE), [tanner](https://github.com/mushorg/snare/blob/master/LICENSE)
-<br>Apache 2 License: [cyberchef](https://github.com/gchq/CyberChef/blob/master/LICENSE), [dicompot](https://github.com/nsmfoo/dicompot/blob/master/LICENSE), [elasticsearch](https://github.com/elasticsearch/elasticsearch/blob/master/LICENSE.txt), [logstash](https://github.com/elasticsearch/logstash/blob/master/LICENSE), [kibana](https://github.com/elasticsearch/kibana/blob/master/LICENSE.md), [docker](https://github.com/docker/docker/blob/master/LICENSE), [elasticsearch-head](https://github.com/mobz/elasticsearch-head/blob/master/LICENCE)
-<br>MIT license: [ciscoasa](https://github.com/Cymmetria/ciscoasa_honeypot/blob/master/LICENSE), [ddospot](https://github.com/aelth/ddospot/blob/master/LICENSE), [glutton](https://github.com/mushorg/glutton/blob/master/LICENSE), [hellpot](https://github.com/yunginnanet/HellPot/blob/master/LICENSE)
+<br>GPLv2: [conpot](https://github.com/mushorg/conpot/blob/master/LICENSE.txt), [dionaea](https://github.com/DinoTools/dionaea/blob/master/LICENSE), [honeytrap](https://github.com/armedpot/honeytrap/blob/master/LICENSE), [suricata](http://suricata-ids.org/about/open-source/)
+<br>GPLv3: [adbhoney](https://github.com/huuck/ADBHoney), [elasticpot](https://gitlab.com/bontchev/elasticpot/-/blob/master/LICENSE), [ewsposter](https://github.com/telekom-security/ews/), [log4pot](https://github.com/thomaspatzke/Log4Pot/blob/master/LICENSE), [fatt](https://github.com/0x4D31/fatt/blob/master/LICENSE), [heralding](https://github.com/johnnykv/heralding/blob/master/LICENSE.txt), [ipphoney](https://gitlab.com/bontchev/ipphoney/-/blob/master/LICENSE), [redishoneypot](https://github.com/cypwnpwnsocute/RedisHoneyPot/blob/main/LICENSE), [sentrypeer](https://github.com/SentryPeer/SentryPeer/blob/main/LICENSE.GPL-3.0-only), [snare](https://github.com/mushorg/snare/blob/master/LICENSE), [tanner](https://github.com/mushorg/snare/blob/master/LICENSE)
+<br>Apache 2 License: [cyberchef](https://github.com/gchq/CyberChef/blob/master/LICENSE), [dicompot](https://github.com/nsmfoo/dicompot/blob/master/LICENSE), [elasticsearch](https://github.com/elasticsearch/elasticsearch/blob/master/LICENSE.txt), [logstash](https://github.com/elasticsearch/logstash/blob/master/LICENSE), [kibana](https://github.com/elasticsearch/kibana/blob/master/LICENSE.md), [docker](https://github.com/docker/docker/blob/master/LICENSE)
+<br>MIT license: [ciscoasa](https://github.com/Cymmetria/ciscoasa_honeypot/blob/master/LICENSE), [ddospot](https://github.com/aelth/ddospot/blob/master/LICENSE), [elasticvue](https://github.com/cars10/elasticvue/blob/master/LICENSE), [glutton](https://github.com/mushorg/glutton/blob/master/LICENSE), [hellpot](https://github.com/yunginnanet/HellPot/blob/master/LICENSE), [maltrail](https://github.com/stamparm/maltrail/blob/master/LICENSE)
 <br> Unlicense: [endlessh](https://github.com/skeeto/endlessh/blob/master/UNLICENSE)
 <br> Other: [citrixhoneypot](https://github.com/MalwareTech/CitrixHoneypot#licencing-agreement-malwaretech-public-licence), [cowrie](https://github.com/micheloosterhof/cowrie/blob/master/LICENSE.md), [mailoney](https://github.com/awhitehatter/mailoney), [Debian licensing](https://www.debian.org/legal/licenses/), [Elastic License](https://www.elastic.co/licensing/elastic-license)
 <br> AGPL-3.0: [honeypots](https://github.com/qeeqbox/honeypots/blob/main/LICENSE)
+<br><br>
 
-
-<a name="credits"></a>
 # Credits
 Without open source and the fruitful development community (we are proud to be a part of), T-Pot would not have been possible! Our thanks are extended but not limited to the following people and organizations:
 
@@ -517,6 +780,7 @@ Without open source and the fruitful development community (we are proud to be a
 
 * [adbhoney](https://github.com/huuck/ADBHoney/graphs/contributors)
 * [apt-fast](https://github.com/ilikenwf/apt-fast/graphs/contributors)
+* [bento](https://github.com/migueravila/Bento/graphs/contributors)
 * [ciscoasa](https://github.com/Cymmetria/ciscoasa_honeypot/graphs/contributors)
 * [citrixhoneypot](https://github.com/MalwareTech/CitrixHoneypot/graphs/contributors)
 * [cockpit](https://github.com/cockpit-project/cockpit/graphs/contributors)
@@ -529,7 +793,7 @@ Without open source and the fruitful development community (we are proud to be a
 * [docker](https://github.com/docker/docker/graphs/contributors)
 * [elasticpot](https://gitlab.com/bontchev/elasticpot/-/project_members)
 * [elasticsearch](https://github.com/elastic/elasticsearch/graphs/contributors)
-* [elasticsearch-head](https://github.com/mobz/elasticsearch-head/graphs/contributors)
+* [elasticvue](https://github.com/cars10/elasticvue/graphs/contributors)
 * [endlessh](https://github.com/skeeto/endlessh/graphs/contributors)
 * [ewsposter](https://github.com/armedpot/ewsposter/graphs/contributors)
 * [fatt](https://github.com/0x4D31/fatt/graphs/contributors)
@@ -537,39 +801,37 @@ Without open source and the fruitful development community (we are proud to be a
 * [hellpot](https://github.com/yunginnanet/HellPot/graphs/contributors)
 * [heralding](https://github.com/johnnykv/heralding/graphs/contributors)
 * [honeypots](https://github.com/qeeqbox/honeypots/graphs/contributors)
-* [honeypy](https://github.com/foospidy/HoneyPy/graphs/contributors)
-* [honeysap](https://github.com/SecureAuthCorp/HoneySAP/graphs/contributors)
 * [honeytrap](https://github.com/armedpot/honeytrap/graphs/contributors)
 * [ipphoney](https://gitlab.com/bontchev/ipphoney/-/project_members)
 * [kibana](https://github.com/elastic/kibana/graphs/contributors)
 * [logstash](https://github.com/elastic/logstash/graphs/contributors)
 * [log4pot](https://github.com/thomaspatzke/Log4Pot/graphs/contributors)
 * [mailoney](https://github.com/awhitehatter/mailoney)
+* [maltrail](https://github.com/stamparm/maltrail/graphs/contributors)
 * [medpot](https://github.com/schmalle/medpot/graphs/contributors)
 * [p0f](http://lcamtuf.coredump.cx/p0f3/)
-* [rdpy](https://github.com/citronneur/rdpy)
 * [redishoneypot](https://github.com/cypwnpwnsocute/RedisHoneyPot/graphs/contributors)
+* [sentrypeer](https://github.com/SentryPeer/SentryPeer/graphs/contributors)
 * [spiderfoot](https://github.com/smicallef/spiderfoot)
 * [snare](https://github.com/mushorg/snare/graphs/contributors)
 * [tanner](https://github.com/mushorg/tanner/graphs/contributors)
 * [suricata](https://github.com/inliniac/suricata/graphs/contributors)
 
-### The following companies and organizations
+**The following companies and organizations**
 * [debian](https://www.debian.org/)
 * [docker](https://www.docker.com/)
 * [elastic.io](https://www.elastic.co/)
 * [honeynet project](https://www.honeynet.org/)
 * [intel](http://www.intel.com)
 
-### ... and of course ***you*** for joining the community!
+**... and of course ***you*** for joining the community!**
+<br><br>
 
-<a name="staytuned"></a>
-# Stay tuned ...
-A new version of T-Pot is released about every 6-12 months, development has shifted more and more towards rolling releases and the usage of `/opt/tpot/update.sh`.
-
-<a name="testimonial"></a>
 # Testimonials
 One of the greatest pieces of feedback we have received so far is from one of the Conpot developers:<br>
-***"[...] I highly recommend T-Pot which is ... it's not exactly a swiss army knife .. it's more like a swiss army soldier, equipped with a swiss army knife. Inside a tank. A swiss tank. [...]"***<br>
+***"[...] I highly recommend T-Pot which is ... it's not exactly a swiss army knife .. it's more like a swiss army soldier, equipped with a swiss army knife. Inside a tank. A swiss tank. [...]"***
+<br><br>
 And from @robcowart (creator of [ElastiFlow](https://github.com/robcowart/elastiflow)):<br>
 ***"#TPot is one of the most well put together turnkey honeypot solutions. It is a must-have for anyone wanting to analyze and understand the behavior of malicious actors and the threat they pose to your organization."***
+<br><br>
+**Thank you!**
diff --git a/bin/backup_es_folders.sh b/bin/backup_es_folders.sh
index 88a279be..3d15261b 100755
--- a/bin/backup_es_folders.sh
+++ b/bin/backup_es_folders.sh
@@ -1,12 +1,21 @@
 #!/bin/bash
 # Run as root only.
 myWHOAMI=$(whoami)
-if [ "$myWHOAMI" != "root" ]
+if [ "$myWHOAMI" != "root" ];
   then
     echo "Need to run as root ..."
     exit
 fi
 
+if [ "$1" == "" ] || [ "$1" != "all" ] && [ "$1" != "base" ];
+  then
+    echo "Usage: backup_es_folders [all, base]"
+    echo "       all  = backup all ES folder"
+    echo "       base = backup only Kibana index".
+    echo
+    exit
+fi
+
 # Backup all ES relevant folders
 # Make sure ES is available
 myES="http://127.0.0.1:64298/"
@@ -25,7 +34,7 @@ myCOUNT=1
 myDATE=$(date +%Y%m%d%H%M)
 myELKPATH="/data/elk/data"
 myKIBANAINDEXNAME=$(curl -s -XGET ''$myES'_cat/indices/.kibana' | awk '{ print $4 }')
-myKIBANAINDEXPATH=$myELKPATH/nodes/0/indices/$myKIBANAINDEXNAME
+myKIBANAINDEXPATH=$myELKPATH/indices/$myKIBANAINDEXNAME
 
 # Let's ensure normal operation on exit or if interrupted ...
 function fuCLEANUP {
@@ -42,5 +51,11 @@ sleep 2
 
 # Backup DB in 2 flavors
 echo "### Now backing up Elasticsearch folders ..."
-tar cvfz "elkall_"$myDATE".tgz" $myELKPATH
-tar cvfz "elkbase_"$myDATE".tgz" $myKIBANAINDEXPATH
+if [ "$1" == "all" ];
+  then
+    tar cvfz "elkall_"$myDATE".tgz" $myELKPATH
+elif [ "$1" == "base" ];
+  then
+    tar cvfz "elkbase_"$myDATE".tgz" $myKIBANAINDEXPATH
+fi
+
diff --git a/bin/blackhole.sh b/bin/blackhole.sh
new file mode 100755
index 00000000..e2a51af0
--- /dev/null
+++ b/bin/blackhole.sh
@@ -0,0 +1,109 @@
+#!/bin/bash
+
+# Run as root only.
+myWHOAMI=$(whoami)
+if [ "$myWHOAMI" != "root" ]
+  then
+    echo "### Need to run as root ..."
+    echo
+    exit
+fi
+
+# Disclaimer
+if [ "$1" == "" ];
+  then
+    echo "### Warning!"
+    echo "### This script will download and add blackhole routes for known mass scanners in an attempt to decrease the chance of detection."
+    echo "### IPs are neither curated or verified, use at your own risk!"
+    echo "###"
+    echo "### As long as <blackhole.sh del> is not executed the routes will be re-added on T-Pot start through </opt/tpot/bin/updateip.sh>."
+    echo "### Check with <ip r> or <dps.sh> if blackhole is enabled."
+    echo
+    echo "Usage: blackhole.sh add (add blackhole routes)" 
+    echo "       blackhole.sh del (delete blackhole routes)"
+    echo
+    exit
+fi
+
+# QnD paths, files
+mkdir -p /etc/blackhole
+cd /etc/blackhole
+myFILE="mass_scanner.txt"
+myURL="https://raw.githubusercontent.com/stamparm/maltrail/master/trails/static/mass_scanner.txt"
+myBASELINE="500"
+# Alternatively, using fewer routes, but blocking complete /24 networks
+#myFILE="mass_scanner_cidr.txt"
+#myURL="https://raw.githubusercontent.com/stamparm/maltrail/master/trails/static/mass_scanner_cidr.txt"
+
+# Calculate age of downloaded list, read IPs
+if [ -f "$myFILE" ];
+  then
+    myNOW=$(date +%s)
+    myOLD=$(date +%s -r "$myFILE")
+    myDAYS=$(( ($myNOW-$myOLD) / (60*60*24) ))
+    echo "### Downloaded $myFILE list is $myDAYS days old."
+    myBLACKHOLE_IPS=$(grep -o -P "\b(?:\d{1,3}\.){3}\d{1,3}\b" "$myFILE" | sort -u)
+fi
+
+# Let's load ip list
+if [[ ! -f "$myFILE" && "$1" == "add" || "$myDAYS" -gt 30 ]];
+  then
+    echo "### Downloading $myFILE list."
+    aria2c --allow-overwrite -s16 -x 16 "$myURL" && \
+    myBLACKHOLE_IPS=$(grep -o -P "\b(?:\d{1,3}\.){3}\d{1,3}\b" "$myFILE" | sort -u) 
+fi
+
+myCOUNT=$(echo $myBLACKHOLE_IPS | wc -w)
+# Sanity check: enough mass scanner IPs loaded and blackhole not already enabled
+if [ "$myCOUNT" -lt "$myBASELINE" ] && [ "$1" == "add" ];
+  then
+    echo "### Something went wrong. Please check contents of /etc/blackhole/$myFILE."
+    echo "### Aborting."
+    echo
+    exit
+elif [ "$(ip r | grep 'blackhole' -c)" -gt "$myBASELINE" ] && [ "$1" == "add" ];
+  then
+    echo "### Blackhole already enabled."
+    echo "### Aborting."
+    echo
+    exit
+fi
+
+# Let's add blackhole routes for all mass scanner IPs
+if [ "$1" == "add" ];
+  then
+    echo
+    echo -n "Now adding $myCOUNT IPs to blackhole."
+    for i in $myBLACKHOLE_IPS;
+      do
+        ip route add blackhole "$i"
+	echo -n "."
+    done
+    echo
+    echo "Added $(ip r | grep "blackhole" -c) IPs to blackhole."
+    echo
+    echo "### Remember!"
+    echo "### As long as <blackhole.sh del> is not executed the routes will be re-added on T-Pot start through </opt/tpot/bin/updateip.sh>."
+    echo "### Check with <ip r> or <dps.sh> if blackhole is enabled."
+    echo
+    exit
+fi
+
+# Let's delete blackhole routes for all mass scanner IPs
+if [ "$1" == "del" ] && [ "$myCOUNT" -gt "$myBASELINE" ];
+  then
+    echo
+    echo -n "Now deleting $myCOUNT IPs from blackhole."
+      for i in $myBLACKHOLE_IPS;
+        do
+          ip route del blackhole "$i"
+	  echo -n "."
+      done
+      echo
+      echo "$(ip r | grep 'blackhole' -c) IPs remaining in blackhole."
+      echo
+      rm "$myFILE"
+  else
+    echo "### Blackhole already disabled."
+    echo
+fi
diff --git a/bin/clean.sh b/bin/clean.sh
index 494e4575..b5c71668 100755
--- a/bin/clean.sh
+++ b/bin/clean.sh
@@ -205,14 +205,6 @@ fuHONEYPOTS () {
   chown tpot:tpot /data/honeypots -R
 }
 
-# Let's create a function to clean up and prepare honeypy data
-fuHONEYPY () {
-  if [ "$myPERSISTENCE" != "on" ]; then rm -rf /data/honeypy/*; fi
-  mkdir -p /data/honeypy/log
-  chmod 770 /data/honeypy -R
-  chown tpot:tpot /data/honeypy -R
-}
-
 # Let's create a function to clean up and prepare honeysap data
 fuHONEYSAP () {
   if [ "$myPERSISTENCE" != "on" ]; then rm -rf /data/honeysap/*; fi
@@ -285,6 +277,14 @@ fuREDISHONEYPOT () {
   chown tpot:tpot /data/redishoneypot -R
 }
 
+# Let's create a function to clean up and prepare sentrypeer data
+fuSENTRYPEER () {
+  if [ "$myPERSISTENCE" != "on" ]; then rm -rf /data/sentrypeer/log; fi
+  mkdir -p /data/sentrypeer/log
+  chmod 770 /data/sentrypeer -R
+  chown tpot:tpot /data/sentrypeer -R
+}
+
 # Let's create a function to prepare spiderfoot db
 fuSPIDERFOOT () {
   mkdir -p /data/spiderfoot
@@ -356,7 +356,6 @@ if [ "$myPERSISTENCE" = "on" ];
     fuHELLPOT
     fuHONEYSAP
     fuHONEYPOTS
-    fuHONEYPY
     fuHONEYTRAP
     fuIPPHONEY
     fuLOG4POT
@@ -365,6 +364,7 @@ if [ "$myPERSISTENCE" = "on" ];
     fuNGINX
     fuREDISHONEYPOT
     fuRDPY
+    fuSENTRYPEER
     fuSPIDERFOOT
     fuSURICATA
     fuP0F
diff --git a/bin/deploy.sh b/bin/deploy.sh
index f9e82bc6..e1d5af4b 100755
--- a/bin/deploy.sh
+++ b/bin/deploy.sh
@@ -15,7 +15,7 @@ if [ "$(whoami)" != "root" ];
 fi
 }
 
-function fuDEPLOY_POT () {
+function fuDEPLOY_SENSOR () {
 echo
 echo "###############################"
 echo "# Deploying to T-Pot Hive ... #"
@@ -24,7 +24,7 @@ echo
 sshpass -e ssh -4 -t -T -l "$MY_TPOT_USERNAME" -p 64295 "$MY_HIVE_IP" << EOF
 echo "$SSHPASS" | sudo -S bash -c 'useradd -m -s /sbin/nologin -G tpotlogs "$MY_HIVE_USERNAME";
 mkdir -p /home/"$MY_HIVE_USERNAME"/.ssh;
-echo "$MY_POT_PUBLICKEY" >> /home/"$MY_HIVE_USERNAME"/.ssh/authorized_keys;
+echo "$MY_SENSOR_PUBLICKEY" >> /home/"$MY_HIVE_USERNAME"/.ssh/authorized_keys;
 chmod 600 /home/"$MY_HIVE_USERNAME"/.ssh/authorized_keys;
 chmod 755 /home/"$MY_HIVE_USERNAME"/.ssh;
 chown "$MY_HIVE_USERNAME":"$MY_HIVE_USERNAME" -R /home/"$MY_HIVE_USERNAME"/.ssh'
@@ -72,8 +72,8 @@ if [ $? -eq 0 ];
         echo "######################################################"
         echo
         kill -9 $(pidof ssh)
-	rm $MY_POT_PUBLICKEYFILE
-	rm $MY_POT_PRIVATEKEYFILE
+	rm $MY_SENSOR_PUBLICKEYFILE
+	rm $MY_SENSOR_PRIVATEKEYFILE
 	rm $MY_LS_ENVCONFIGFILE
 	exit 1
     fi;
@@ -84,8 +84,8 @@ if [ $? -eq 0 ];
     echo "# Aborting.                                                     #"
     echo "#################################################################"
     echo
-    rm $MY_POT_PUBLICKEYFILE
-    rm $MY_POT_PRIVATEKEYFILE
+    rm $MY_SENSOR_PUBLICKEYFILE
+    rm $MY_SENSOR_PRIVATEKEYFILE
     rm $MY_LS_ENVCONFIGFILE
     exit 1
 fi;
@@ -105,12 +105,12 @@ echo
 export SSHPASS
 read -p "IP / FQDN: " MY_HIVE_IP
 MY_HIVE_USERNAME="$(hostname)"
-MY_TPOT_TYPE="POT"
+MY_TPOT_TYPE="SENSOR"
 MY_LS_ENVCONFIGFILE="/data/elk/logstash/ls_environment"
 
-MY_POT_PUBLICKEYFILE="/data/elk/logstash/$MY_HIVE_USERNAME.pub"
-MY_POT_PRIVATEKEYFILE="/data/elk/logstash/$MY_HIVE_USERNAME"
-if ! [ -s "$MY_POT_PRIVATEKEYFILE" ] && ! [ -s "$MY_POT_PUBLICKEYFILE" ];
+MY_SENSOR_PUBLICKEYFILE="/data/elk/logstash/$MY_HIVE_USERNAME.pub"
+MY_SENSOR_PRIVATEKEYFILE="/data/elk/logstash/$MY_HIVE_USERNAME"
+if ! [ -s "$MY_SENSOR_PRIVATEKEYFILE" ] && ! [ -s "$MY_SENSOR_PUBLICKEYFILE" ];
   then
     echo
     echo "##############################"
@@ -118,8 +118,8 @@ if ! [ -s "$MY_POT_PRIVATEKEYFILE" ] && ! [ -s "$MY_POT_PUBLICKEYFILE" ];
     echo "##############################"
     echo
     mkdir -p /data/elk/logstash
-    ssh-keygen -f "$MY_POT_PRIVATEKEYFILE" -N "" -C "$MY_HIVE_USERNAME"
-    MY_POT_PUBLICKEY="$(cat "$MY_POT_PUBLICKEYFILE")"
+    ssh-keygen -f "$MY_SENSOR_PRIVATEKEYFILE" -N "" -C "$MY_HIVE_USERNAME"
+    MY_SENSOR_PUBLICKEY="$(cat "$MY_SENSOR_PUBLICKEYFILE")"
   else
     echo
     echo "#############################################"
@@ -137,7 +137,7 @@ echo "###########################################################"
 echo
 tee $MY_LS_ENVCONFIGFILE << EOF
 MY_TPOT_TYPE=$MY_TPOT_TYPE
-MY_POT_PRIVATEKEYFILE=$MY_POT_PRIVATEKEYFILE
+MY_SENSOR_PRIVATEKEYFILE=$MY_SENSOR_PRIVATEKEYFILE
 MY_HIVE_USERNAME=$MY_HIVE_USERNAME
 MY_HIVE_IP=$MY_HIVE_IP
 EOF
@@ -171,7 +171,7 @@ while [ 1 != 2 ]
         [c,C])
           fuGET_DEPLOY_DATA
           fuCHECK_HIVE
-	  fuDEPLOY_POT
+	  fuDEPLOY_SENSOR
           break
           ;;
         [q,Q])
diff --git a/bin/deprecated/hptest.sh b/bin/deprecated/hptest.sh
new file mode 100755
index 00000000..94806a71
--- /dev/null
+++ b/bin/deprecated/hptest.sh
@@ -0,0 +1,122 @@
+#!/bin/bash
+
+myHOST="$1"
+myPACKAGES="dcmtk netcat nmap"
+myMEDPOTPACKET="
+MSH|^~\&|ADT1|MCM|LABADT|MCM|198808181126|SECURITY|ADT^A01|MSG00001-|P|2.6
+EVN|A01|198808181123
+PID|||PATID1234^5^M11^^AN||JONES^WILLIAM^A^III||19610615|M||2106-3|677 DELAWARE AVENUE^^EVERETT^MA^02149|GL|(919)379-1212|(919)271-3434~(919)277-3114||S||PATID12345001^2^M10^^ACSN|123456789|9-87654^NC
+NK1|1|JONES^BARBARA^K|SPO|||||20011105
+NK1|1|JONES^MICHAEL^A|FTH
+PV1|1|I|2000^2012^01||||004777^LEBAUER^SIDNEY^J.|||SUR||-||ADM|A0
+AL1|1||^PENICILLIN||CODE16~CODE17~CODE18
+AL1|2||^CAT DANDER||CODE257
+DG1|001|I9|1550|MAL NEO LIVER, PRIMARY|19880501103005|F
+PR1|2234|M11|111^CODE151|COMMON PROCEDURES|198809081123
+ROL|45^RECORDER^ROLE MASTER LIST|AD|RO|KATE^SMITH^ELLEN|199505011201
+GT1|1122|1519|BILL^GATES^A
+IN1|001|A357|1234|BCMD|||||132987
+IN2|ID1551001|SSN12345678
+ROL|45^RECORDER^ROLE MASTER LIST|AD|RO|KATE^ELLEN|199505011201"
+
+function fuGOTROOT {
+myWHOAMI=$(whoami)
+if [ "$myWHOAMI" != "root" ]
+  then
+    echo "Need to run as root ..."
+    exit
+fi
+}
+
+function fuCHECKDEPS {
+myINST=""
+for myDEPS in $myPACKAGES;
+do
+  myOK=$(dpkg -s $myDEPS | grep ok | awk '{ print $3 }');
+  if [ "$myOK" != "ok" ]
+    then
+      myINST=$(echo $myINST $myDEPS)
+  fi
+done
+if [ "$myINST" != "" ]
+  then
+    apt-get update -y
+    for myDEPS in $myINST;
+    do
+      apt-get install $myDEPS -y
+    done
+fi
+}
+
+function fuCHECKFORARGS {
+if [ "$myHOST" != "" ];
+  then
+    echo "All arguments met. Continuing."
+  else
+    echo "Usage: hp_test.sh <[host or ip]>"
+    exit
+fi
+}
+
+function fuGETPORTS {
+myDOCKERCOMPOSEPORTS=$(cat $myDOCKERCOMPOSEYML | yq -r '.services[].ports' | grep ':' | sed -e s/127.0.0.1// | tr -d '", ' | sed -e s/^:// | cut -f1 -d ':' | grep -v "6429\|6430" | sort -gu)
+myPORTS=$(for i in $myDOCKERCOMPOSEPORTS; do echo "$i"; done)
+echo "Found these ports enabled:"
+echo "$myPORTS"
+exit
+}
+
+function fuSCAN {
+local myTIMEOUT="$1"
+local mySCANPORT="$2"
+local mySCANIP="$3"
+local mySCANOPTS="$4"
+
+timeout --foreground ${myTIMEOUT} nmap ${mySCANOPTS} -T4 -v -p ${mySCANPORT} ${mySCANIP} &
+}
+
+# Main
+fuGOTROOT
+fuCHECKDEPS
+fuCHECKFORARGS
+
+echo "Starting scans ..."
+echo "$myMEDPOTPACKET" | nc "$myHOST" 2575 &
+curl -XGET "http://$myHOST:9200/logstash-*/_search" &
+curl -XPOST -H "Content-Type: application/json" -d '{"name":"test","email":"test@test.com"}' "http://$myHOST:9200/test" &
+echo "I20100" | timeout --foreground 3 nc "$myHOST" 10001 &
+findscu -P -k PatientName="*" $myHOST 11112 &
+getscu -P -k PatientName="*" $myHOST 11112 &
+telnet $myHOST 3299 &
+fuSCAN "180" "7,8,102,135,161,1025,1080,5000,9200" "$myHOST" "-sC -sS -sU -sV"
+fuSCAN "180" "2048,4096,5432" "$myHOST" "-sC -sS -sU -sV --version-light"
+fuSCAN "120" "20,21" "$myHOST" "--script=ftp* -sC -sS -sV"
+fuSCAN "120" "22" "$myHOST" "--script=ssh2-enum-algos,ssh-auth-methods,ssh-hostkey,ssh-publickey-acceptance,sshv1 -sC -sS -sV"
+fuSCAN "30" "22" "$myHOST" "--script=ssh-brute"
+fuSCAN "120" "23,2323,2324" "$myHOST" "--script=telnet-encryption,telnet-ntlm-info -sC -sS -sV --version-light"
+fuSCAN "120" "25" "$myHOST" "--script=smtp* -sC -sS -sV"
+fuSCAN "180" "42" "$myHOST" "-sC -sS -sV"
+fuSCAN "120" "69" "$myHOST" "--script=tftp-enum -sU"
+fuSCAN "120" "80,81,8080,8443" "$myHOST" "-sC -sS -sV"
+fuSCAN "120" "110,995" "$myHOST" "--script=pop3-capabilities,pop3-ntlm-info -sC -sS -sV --version-light"
+fuSCAN "30" "110,995" "$myHOST" "--script=pop3-brute -sS"
+fuSCAN "120" "143,993" "$myHOST" "--script=imap-capabilities,imap-ntlm-info -sC -sS -sV --version-light"
+fuSCAN "30" "143,993" "$myHOST" "--script=imap-brute -sS"
+fuSCAN "240" "445" "$myHOST" "--script=smb-vuln* -sS -sU"
+fuSCAN "120" "502" "$myHOST" "--script=modbus-discover -sS -sU"
+fuSCAN "120" "623" "$myHOST" "--script=ipmi-cipher-zero,ipmi-version,supermicro-ipmi -sS -sU"
+fuSCAN "30" "623" "$myHOST" "--script=ipmi-brute -sS -sU"
+fuSCAN "120" "1433" "$myHOST" "--script=ms-sql* -sS"
+fuSCAN "120" "1723" "$myHOST" "--script=pptp-version -sS"
+fuSCAN "120" "1883" "$myHOST" "--script=mqtt-subscribe -sS"
+fuSCAN "120" "2404" "$myHOST" "--script=iec-identify -sS"
+fuSCAN "120" "3306" "$myHOST" "--script=mysql-vuln* -sC -sS -sV"
+fuSCAN "120" "3389" "$myHOST" "--script=rdp* -sC -sS -sV"
+fuSCAN "120" "5000" "$myHOST" "--script=*upnp* -sS -sU"
+fuSCAN "120" "5060,5061" "$myHOST" "--script=sip-call-spoof,sip-enum-users,sip-methods -sS -sU"
+fuSCAN "120" "5900" "$myHOST" "--script=vnc-info,vnc-title,realvnc-auth-bypass -sS"
+fuSCAN "120" "27017" "$myHOST" "--script=mongo* -sS"
+fuSCAN "120" "47808" "$myHOST" "--script=bacnet* -sS"
+wait
+reset
+echo "Done."
diff --git a/bin/dps.sh b/bin/dps.sh
index d3274ab1..06b6eefd 100755
--- a/bin/dps.sh
+++ b/bin/dps.sh
@@ -8,8 +8,14 @@ if [ "$myWHOAMI" != "root" ]
     exit
 fi
 
-# Show current status of T-Pot containers
 myPARAM="$1"
+if [[ $myPARAM =~ ^([1-9]|[1-9][0-9]|[1-9][0-9][0-9])$ ]];
+  then
+    watch --color -n $myPARAM "dps.sh"
+    exit
+fi
+
+# Show current status of T-Pot containers
 myCONTAINERS="$(cat /opt/tpot/etc/tpot.yml | grep -v '#' | grep container_name | cut -d: -f2 | sort | tr -d " ")"
 myRED=""
 myGREEN=""
@@ -17,19 +23,39 @@ myBLUE=""
 myWHITE=""
 myMAGENTA=""
 
+# Blackhole Status
+myBLACKHOLE_STATUS=$(ip r | grep "blackhole" -c)
+if [ "$myBLACKHOLE_STATUS" -gt "500" ];
+  then
+    myBLACKHOLE_STATUS="${myGREEN}ENABLED"
+  else
+    myBLACKHOLE_STATUS="${myRED}DISABLED"
+fi
+
+function fuGETTPOT_STATUS {
+# T-Pot Status
+myTPOT_STATUS=$(systemctl status tpot | grep "Active" | awk '{ print $2 }')
+if [ "$myTPOT_STATUS" == "active" ];
+  then
+    echo "${myGREEN}ACTIVE"
+  else
+    echo "${myRED}INACTIVE"
+fi
+}
+
 function fuGETSTATUS {
 grc --colour=on docker ps -f status=running -f status=exited --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}" | grep -v "NAME" | sort
 }
 
 function fuGETSYS {
-printf "========| System |========\n"
-printf "%+10s %-20s\n" "Date: " "$(date)"
-printf "%+10s %-20s\n" "Uptime: " "$(uptime | cut -b 2-)"
+printf "[ ========| System |======== ]\n"
+printf "${myBLUE}%+11s ${myWHITE}%-20s\n" "DATE: " "$(date)"
+printf "${myBLUE}%+11s ${myWHITE}%-20s\n" "UPTIME: " "$(grc --colour=on uptime)"
+printf "${myMAGENTA}%+11s %-20s\n" "T-POT: " "$(fuGETTPOT_STATUS)"
+printf "${myMAGENTA}%+11s %-20s\n" "BLACKHOLE: " "$myBLACKHOLE_STATUS${myWHITE}"
 echo
 }
 
-while true
-  do
     myDPS=$(fuGETSTATUS)
     myDPSNAMES=$(echo "$myDPS" | awk '{ print $1 }' | sort)
     fuGETSYS
@@ -45,10 +71,3 @@ while true
 	  printf "%-28s %-28s\n" "$myRED$i" "DOWN$myWHITE"
       fi
     done
-    if [[ $myPARAM =~ ^([1-9]|[1-9][0-9]|[1-9][0-9][0-9])$ ]];
-      then 
-        sleep "$myPARAM"
-      else 
-        break
-    fi
-done
diff --git a/bin/hptest.sh b/bin/hptest.sh
index 94806a71..9410cbba 100755
--- a/bin/hptest.sh
+++ b/bin/hptest.sh
@@ -1,23 +1,8 @@
 #!/bin/bash
 
 myHOST="$1"
-myPACKAGES="dcmtk netcat nmap"
-myMEDPOTPACKET="
-MSH|^~\&|ADT1|MCM|LABADT|MCM|198808181126|SECURITY|ADT^A01|MSG00001-|P|2.6
-EVN|A01|198808181123
-PID|||PATID1234^5^M11^^AN||JONES^WILLIAM^A^III||19610615|M||2106-3|677 DELAWARE AVENUE^^EVERETT^MA^02149|GL|(919)379-1212|(919)271-3434~(919)277-3114||S||PATID12345001^2^M10^^ACSN|123456789|9-87654^NC
-NK1|1|JONES^BARBARA^K|SPO|||||20011105
-NK1|1|JONES^MICHAEL^A|FTH
-PV1|1|I|2000^2012^01||||004777^LEBAUER^SIDNEY^J.|||SUR||-||ADM|A0
-AL1|1||^PENICILLIN||CODE16~CODE17~CODE18
-AL1|2||^CAT DANDER||CODE257
-DG1|001|I9|1550|MAL NEO LIVER, PRIMARY|19880501103005|F
-PR1|2234|M11|111^CODE151|COMMON PROCEDURES|198809081123
-ROL|45^RECORDER^ROLE MASTER LIST|AD|RO|KATE^SMITH^ELLEN|199505011201
-GT1|1122|1519|BILL^GATES^A
-IN1|001|A357|1234|BCMD|||||132987
-IN2|ID1551001|SSN12345678
-ROL|45^RECORDER^ROLE MASTER LIST|AD|RO|KATE^ELLEN|199505011201"
+myPACKAGES="nmap"
+myDOCKERCOMPOSEYML="/opt/tpot/etc/tpot.yml"
 
 function fuGOTROOT {
 myWHOAMI=$(whoami)
@@ -52,71 +37,32 @@ function fuCHECKFORARGS {
 if [ "$myHOST" != "" ];
   then
     echo "All arguments met. Continuing."
+    echo
   else
-    echo "Usage: hp_test.sh <[host or ip]>"
+    echo "Usage: hptest.sh <[host or ip]>"
+    echo
     exit
 fi
 }
 
 function fuGETPORTS {
+myDOCKERCOMPOSEUDPPORTS=$(cat $myDOCKERCOMPOSEYML | grep "udp" | tr -d '"\|#\-' | cut -d ":" -f2 | cut -d "/" -f1 | sort -gu)
 myDOCKERCOMPOSEPORTS=$(cat $myDOCKERCOMPOSEYML | yq -r '.services[].ports' | grep ':' | sed -e s/127.0.0.1// | tr -d '", ' | sed -e s/^:// | cut -f1 -d ':' | grep -v "6429\|6430" | sort -gu)
-myPORTS=$(for i in $myDOCKERCOMPOSEPORTS; do echo "$i"; done)
-echo "Found these ports enabled:"
-echo "$myPORTS"
-exit
-}
-
-function fuSCAN {
-local myTIMEOUT="$1"
-local mySCANPORT="$2"
-local mySCANIP="$3"
-local mySCANOPTS="$4"
-
-timeout --foreground ${myTIMEOUT} nmap ${mySCANOPTS} -T4 -v -p ${mySCANPORT} ${mySCANIP} &
+myUDPPORTS=$(for i in $myDOCKERCOMPOSEUDPPORTS; do echo -n "U:$i,"; done)
+myPORTS=$(for i in $myDOCKERCOMPOSEPORTS; do echo -n "T:$i,"; done)
 }
 
 # Main
+fuGETPORTS
 fuGOTROOT
 fuCHECKDEPS
 fuCHECKFORARGS
-
-echo "Starting scans ..."
-echo "$myMEDPOTPACKET" | nc "$myHOST" 2575 &
-curl -XGET "http://$myHOST:9200/logstash-*/_search" &
-curl -XPOST -H "Content-Type: application/json" -d '{"name":"test","email":"test@test.com"}' "http://$myHOST:9200/test" &
-echo "I20100" | timeout --foreground 3 nc "$myHOST" 10001 &
-findscu -P -k PatientName="*" $myHOST 11112 &
-getscu -P -k PatientName="*" $myHOST 11112 &
-telnet $myHOST 3299 &
-fuSCAN "180" "7,8,102,135,161,1025,1080,5000,9200" "$myHOST" "-sC -sS -sU -sV"
-fuSCAN "180" "2048,4096,5432" "$myHOST" "-sC -sS -sU -sV --version-light"
-fuSCAN "120" "20,21" "$myHOST" "--script=ftp* -sC -sS -sV"
-fuSCAN "120" "22" "$myHOST" "--script=ssh2-enum-algos,ssh-auth-methods,ssh-hostkey,ssh-publickey-acceptance,sshv1 -sC -sS -sV"
-fuSCAN "30" "22" "$myHOST" "--script=ssh-brute"
-fuSCAN "120" "23,2323,2324" "$myHOST" "--script=telnet-encryption,telnet-ntlm-info -sC -sS -sV --version-light"
-fuSCAN "120" "25" "$myHOST" "--script=smtp* -sC -sS -sV"
-fuSCAN "180" "42" "$myHOST" "-sC -sS -sV"
-fuSCAN "120" "69" "$myHOST" "--script=tftp-enum -sU"
-fuSCAN "120" "80,81,8080,8443" "$myHOST" "-sC -sS -sV"
-fuSCAN "120" "110,995" "$myHOST" "--script=pop3-capabilities,pop3-ntlm-info -sC -sS -sV --version-light"
-fuSCAN "30" "110,995" "$myHOST" "--script=pop3-brute -sS"
-fuSCAN "120" "143,993" "$myHOST" "--script=imap-capabilities,imap-ntlm-info -sC -sS -sV --version-light"
-fuSCAN "30" "143,993" "$myHOST" "--script=imap-brute -sS"
-fuSCAN "240" "445" "$myHOST" "--script=smb-vuln* -sS -sU"
-fuSCAN "120" "502" "$myHOST" "--script=modbus-discover -sS -sU"
-fuSCAN "120" "623" "$myHOST" "--script=ipmi-cipher-zero,ipmi-version,supermicro-ipmi -sS -sU"
-fuSCAN "30" "623" "$myHOST" "--script=ipmi-brute -sS -sU"
-fuSCAN "120" "1433" "$myHOST" "--script=ms-sql* -sS"
-fuSCAN "120" "1723" "$myHOST" "--script=pptp-version -sS"
-fuSCAN "120" "1883" "$myHOST" "--script=mqtt-subscribe -sS"
-fuSCAN "120" "2404" "$myHOST" "--script=iec-identify -sS"
-fuSCAN "120" "3306" "$myHOST" "--script=mysql-vuln* -sC -sS -sV"
-fuSCAN "120" "3389" "$myHOST" "--script=rdp* -sC -sS -sV"
-fuSCAN "120" "5000" "$myHOST" "--script=*upnp* -sS -sU"
-fuSCAN "120" "5060,5061" "$myHOST" "--script=sip-call-spoof,sip-enum-users,sip-methods -sS -sU"
-fuSCAN "120" "5900" "$myHOST" "--script=vnc-info,vnc-title,realvnc-auth-bypass -sS"
-fuSCAN "120" "27017" "$myHOST" "--script=mongo* -sS"
-fuSCAN "120" "47808" "$myHOST" "--script=bacnet* -sS"
+echo
+echo "Starting scan on all UDP / TCP ports defined in /opt/tpot/etc/tpot.yml ..."
+nmap -sV -sC -v -p $myPORTS "$myHOST" &
+nmap -sU -sV -sC -v -p $myUDPPORTS "$myHOST" &
+echo
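+# Both nmap scans run in the background; wait for them to finish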
 wait
-reset
 echo "Done."
+echo
+
diff --git a/bin/setup_builder.sh b/bin/setup_builder.sh
new file mode 100755
index 00000000..a8057549
--- /dev/null
+++ b/bin/setup_builder.sh
@@ -0,0 +1,45 @@
+#!/bin/bash
+
+# Got root?
+myWHOAMI=$(whoami)
+if [ "$myWHOAMI" != "root" ]
+  then
+    echo "Need to run as root ..."
+    exit
+fi
+
+# Only run with command switch
+if [ "$1" != "-y" ]; then
+  echo "### Setting up docker for Multi Arch Builds."
+  echo "### Use on x64 only!"
+  echo "### Run with -y to install!"
+  echo
+  exit
+fi
+
+# Main
+mkdir -p /root/.docker/cli-plugins/
+cd /root/.docker/cli-plugins/
+wget https://github.com/docker/buildx/releases/download/v0.8.1/buildx-v0.8.1.linux-amd64 -O docker-buildx
+chmod +x docker-buildx
+
+docker buildx ls
+
+# We need to create a new builder as the default one cannot handle multi-arch builds
+# https://docs.docker.com/desktop/multi-arch/
+docker buildx create --name mybuilder
+
+# Set as default
+docker buildx use mybuilder
+
+# We need to install emulators, arm64 should be fine for now
+# https://github.com/tonistiigi/binfmt/
+docker run --privileged --rm tonistiigi/binfmt --install arm64
+
+# Check if everything is set up correctly
+docker buildx inspect --bootstrap
+echo
+echo "### Done."
+echo
+echo "Example: docker buildx build --platform linux/amd64,linux/arm64 -t username/demo:latest --push ."
+echo "Docs: https://docs.docker.com/desktop/multi-arch/"
diff --git a/bin/tpdclean.sh b/bin/tpdclean.sh
new file mode 100755
index 00000000..7ae50398
--- /dev/null
+++ b/bin/tpdclean.sh
@@ -0,0 +1,29 @@
+#!/bin/bash
+# T-Pot Compose and Container Cleaner
+# Set colors
+myRED=""
+myGREEN=""
+myWHITE=""
+
+# Only run with command switch
+if [ "$1" != "-y" ]; then
+  echo $myRED"### WARNING"$myWHITE
+  echo ""
+  echo $myRED"###### This script is only intended for the tpot.service."$myWHITE
+  echo $myRED"###### Run <systemctl stop tpot> first and then <tpdclean.sh -y>."$myWHITE
+  echo $myRED"###### Be aware, all T-Pot container volumes and images will be removed."$myWHITE
+  echo ""
+  echo $myRED"### WARNING "$myWHITE
+  echo
+  exit
+fi
+
+# Remove old containers, images and volumes
+docker-compose -f /opt/tpot/etc/tpot.yml down -v >> /dev/null 2>&1
+docker-compose -f /opt/tpot/etc/tpot.yml rm -v >> /dev/null 2>&1
+docker network rm $(docker network ls -q) >> /dev/null 2>&1
+docker volume rm $(docker volume ls -q) >> /dev/null 2>&1
+docker rm -v $(docker ps -aq) >> /dev/null 2>&1
+docker rmi $(docker images | grep "<none>" | awk '{print $3}') >> /dev/null 2>&1
+docker rmi $(docker images | grep "2203" | awk '{print $3}') >> /dev/null 2>&1
+exit 0
diff --git a/bin/tped.sh b/bin/tped.sh
index cc5c7d01..1eadbdff 100755
--- a/bin/tped.sh
+++ b/bin/tped.sh
@@ -29,7 +29,7 @@ for i in $myYMLS;
   do
     myITEMS+="$i $(echo $i | cut -d "." -f1 | tr [:lower:] [:upper:]) " 
 done
-myEDITION=$(dialog --backtitle "$myBACKTITLE" --menu "Select T-Pot Edition" 17 50 10 $myITEMS 3>&1 1>&2 2>&3 3>&-)
+myEDITION=$(dialog --backtitle "$myBACKTITLE" --menu "Select T-Pot Edition" 18 50 1 $myITEMS 3>&1 1>&2 2>&3 3>&-)
 if [ "$myEDITION" == "" ];
   then
     echo "Have a nice day!"
diff --git a/bin/updateip.sh b/bin/updateip.sh
index 09784501..da1aca96 100755
--- a/bin/updateip.sh
+++ b/bin/updateip.sh
@@ -2,23 +2,62 @@
 # Let's add the first local ip to the /etc/issue and external ip to ews.ip file
 # If the external IP cannot be detected, the internal IP will be inherited.
 source /etc/environment
+myCHECKIFSENSOR=$(head -n 1 /opt/tpot/etc/tpot.yml | grep "Sensor" | wc -l)
 myUUID=$(lsblk -o MOUNTPOINT,UUID | grep "/" | awk '{ print $2 }')
 myLOCALIP=$(hostname -I | awk '{ print $1 }')
 myEXTIP=$(/opt/tpot/bin/myip.sh)
 if [ "$myEXTIP" = "" ];
   then
     myEXTIP=$myLOCALIP
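+    # No external IP detected: fall back to fixed coordinates (these appear to be Darmstadt, Germany)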
+    myEXTIP_LAT="49.865835022498125"
+    myEXTIP_LONG="8.62606472775735"
+  else
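+    # Geolocate the external IP via ipinfo.io, which returns "lat,long"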
+    myEXTIP_LOC=$(curl -s ipinfo.io/$myEXTIP/loc)
+    myEXTIP_LAT=$(echo "$myEXTIP_LOC" | cut -f1 -d",")
+    myEXTIP_LONG=$(echo "$myEXTIP_LOC" | cut -f2 -d",")
 fi
+
+# Load Blackhole routes if enabled 
+myBLACKHOLE_FILE1="/etc/blackhole/mass_scanner.txt"
+myBLACKHOLE_FILE2="/etc/blackhole/mass_scanner_cidr.txt"
+if [ -f "$myBLACKHOLE_FILE1" ] || [ -f "$myBLACKHOLE_FILE2" ];
+  then
+    /opt/tpot/bin/blackhole.sh add
+fi
+
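+# Heuristic: more than 500 blackhole routes means the mass scanner lists were loaded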
+myBLACKHOLE_STATUS=$(ip r | grep "blackhole" -c)
+if [ "$myBLACKHOLE_STATUS" -gt "500" ];
+  then
+    myBLACKHOLE_STATUS="| BLACKHOLE: [ ENABLED ]"
+  else
+    myBLACKHOLE_STATUS="| BLACKHOLE: [ DISABLED ]"
+fi
+
 mySSHUSER=$(cat /etc/passwd | grep 1000 | cut -d ':' -f1)
+
+# Export
+export myUUID
+export myLOCALIP
+export myEXTIP
+export myEXTIP_LAT
+export myEXTIP_LONG
+export myBLACKHOLE_STATUS
+export mySSHUSER
+
+# Build issue
 echo "" > /etc/issue
-toilet -f ivrit -F metal --filter border:metal "T-Pot   20.06" | sed 's/\\/\\\\/g' >> /etc/issue
+toilet -f ivrit -F metal --filter border:metal "T-Pot   22.04" | sed 's/\\/\\\\/g' >> /etc/issue
 echo >> /etc/issue
 echo ",---- [ \n ] [ \d ] [ \t ]" >> /etc/issue
 echo "|" >> /etc/issue
 echo "| IP: $myLOCALIP ($myEXTIP)" >> /etc/issue
 echo "| SSH: ssh -l tsec -p 64295 $myLOCALIP" >> /etc/issue 
-echo "| WEB: https://$myLOCALIP:64297" >> /etc/issue
+if [ "$myCHECKIFSENSOR" == "0" ];
+  then
+    echo "| WEB: https://$myLOCALIP:64297" >> /etc/issue
+fi
 echo "| ADMIN: https://$myLOCALIP:64294" >> /etc/issue
+echo "$myBLACKHOLE_STATUS" >> /etc/issue
 echo "|" >> /etc/issue
 echo "\`----" >> /etc/issue
 echo >> /etc/issue
@@ -29,6 +68,8 @@ EOF
 tee /opt/tpot/etc/compose/elk_environment << EOF
 HONEY_UUID=$myUUID
 MY_EXTIP=$myEXTIP
+MY_EXTIP_LAT=$myEXTIP_LAT
+MY_EXTIP_LONG=$myEXTIP_LONG
 MY_INTIP=$myLOCALIP
 MY_HOSTNAME=$HOSTNAME
 EOF
@@ -38,7 +79,7 @@ if [ -s "/data/elk/logstash/ls_environment" ];
     source /data/elk/logstash/ls_environment
     tee -a /opt/tpot/etc/compose/elk_environment << EOF
 MY_TPOT_TYPE=$MY_TPOT_TYPE
-MY_POT_PRIVATEKEYFILE=$MY_POT_PRIVATEKEYFILE
+MY_SENSOR_PRIVATEKEYFILE=$MY_SENSOR_PRIVATEKEYFILE
 MY_HIVE_USERNAME=$MY_HIVE_USERNAME
 MY_HIVE_IP=$MY_HIVE_IP
 EOF
diff --git a/cloud/terraform/aws/variables.tf b/cloud/terraform/aws/variables.tf
index 7e6aed67..6b4ff656 100644
--- a/cloud/terraform/aws/variables.tf
+++ b/cloud/terraform/aws/variables.tf
@@ -28,31 +28,31 @@ variable "ec2_instance_type" {
   default = "t3.large"
 }
 
-# Refer to https://wiki.debian.org/Cloud/AmazonEC2Image/Buster
+# Refer to https://wiki.debian.org/Cloud/AmazonEC2Image/Bullseye
 variable "ec2_ami" {
   type = map(string)
   default = {
-    "af-south-1"     = "ami-0272d4f5fb1b98a0d"
-    "ap-east-1"      = "ami-00d242e2f23abf6d2"
-    "ap-northeast-1" = "ami-001c6b4d627e8be53"
-    "ap-northeast-2" = "ami-0d841ed4bf80e764c"
-    "ap-northeast-3" = "ami-01b0a01d770321320"
-    "ap-south-1"     = "ami-04ba7e5bd7c6f6929"
-    "ap-southeast-1" = "ami-0dca3eabb09c32ae2"
-    "ap-southeast-2" = "ami-03ff8684dc585ddae"
-    "ca-central-1"   = "ami-08af22d7c0382fd83"
-    "eu-central-1"   = "ami-0f41e297b3c53fab8"
-    "eu-north-1"     = "ami-0bbc6a00971c77d6d"
-    "eu-south-1"     = "ami-03ff8684dc585ddae"
-    "eu-west-1"      = "ami-080684ad73d431a05"
-    "eu-west-2"      = "ami-04b259723891dfc53"
-    "eu-west-3"      = "ami-00662eead74f66895"
-    "me-south-1"     = "ami-021a6c6047091ab5b"
-    "sa-east-1"      = "ami-0aac091cce68a049c"
-    "us-east-1"      = "ami-05ad4ed7f9c48178b"
-    "us-east-2"      = "ami-07640f3f27c0ad3d3"
-    "us-west-1"      = "ami-0c053f1d5f22eb09f"
-    "us-west-2"      = "ami-090cd3aed687b1ee1"
+    "af-south-1"     = "ami-0c372f041acae6d49"
+    "ap-east-1"      = "ami-079b8d011d4655385"
+    "ap-northeast-1" = "ami-08dbbf1c0485a4aa8"
+    "ap-northeast-2" = "ami-0269fe7d013b8e2dd"
+    "ap-northeast-3" = "ami-0848d1e5fb6e3e3da"
+    "ap-south-1"     = "ami-020d429f17c9f1d0a"
+    "ap-southeast-1" = "ami-09625a221230d9fe6"
+    "ap-southeast-2" = "ami-03cbc6cddb06af2c2"
+    "ca-central-1"   = "ami-09125623b02302014"
+    "eu-central-1"   = "ami-00c36c60f07e21791"
+    "eu-north-1"     = "ami-052bea934e2d9dbfe"
+    "eu-south-1"     = "ami-04e2bb16d37324719"
+    "eu-west-1"      = "ami-0f87948fe2cf1b2a4"
+    "eu-west-2"      = "ami-02ed1bc837487d535"
+    "eu-west-3"      = "ami-080efd2add7e29430"
+    "me-south-1"     = "ami-0dbde382c834c4a72"
+    "sa-east-1"      = "ami-0a0792814cb068077"
+    "us-east-1"      = "ami-05dd1b6e7ef6f8378"
+    "us-east-2"      = "ami-04dd0542609808c50"
+    "us-west-1"      = "ami-07af5f877b3db9f73"
+    "us-west-2"      = "ami-0d0d8694ba492c02b"
   }
 }
 
@@ -74,7 +74,7 @@ variable "linux_password" {
 ## These will go in the generated tpot.conf file ##
 variable "tpot_flavor" {
   default     = "STANDARD"
-  description = "Specify your tpot flavor [STANDARD, SENSOR, INDUSTRIAL, COLLECTOR, NEXTGEN, MEDICAL]"
+  description = "Specify your tpot flavor [STANDARD, HIVE, HIVE_SENSOR, INDUSTRIAL, LOG4J, MEDICAL, MINI, SENSOR]"
 }
 
 variable "web_user" {
diff --git a/cloud/terraform/aws_multi_region/_provider.tf b/cloud/terraform/aws_multi_region/_provider.tf
new file mode 100644
index 00000000..53b015f6
--- /dev/null
+++ b/cloud/terraform/aws_multi_region/_provider.tf
@@ -0,0 +1,9 @@
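+# One aliased AWS provider per target region; main.tf passes these aliases to the per-region modules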
+provider "aws" {
+  alias  = "eu-west-2"
+  region = "eu-west-2"
+}
+
+provider "aws" {
+  alias  = "us-west-1"
+  region = "us-west-1"
+}
diff --git a/cloud/terraform/aws_multi_region/main.tf b/cloud/terraform/aws_multi_region/main.tf
new file mode 100644
index 00000000..e3655383
--- /dev/null
+++ b/cloud/terraform/aws_multi_region/main.tf
@@ -0,0 +1,27 @@
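+# One T-Pot deployment per region; replace the vpc-xxxxxxxx / subnet-xxxxxxxx placeholders with existing IDs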
+module "eu-west-2" {
+  source = "./modules/multi-region"
+  ec2_vpc_id = "vpc-xxxxxxxx"
+  ec2_subnet_id = "subnet-xxxxxxxx"
+  ec2_region = "eu-west-2"
+  tpot_name = "T-Pot Honeypot"
+  
+  linux_password = var.linux_password
+  web_password = var.web_password
+  providers = {
+    aws = aws.eu-west-2
+  }
+}
+
+module "us-west-1" {
+  source = "./modules/multi-region"
+  ec2_vpc_id = "vpc-xxxxxxxx"
+  ec2_subnet_id = "subnet-xxxxxxxx"
+  ec2_region = "us-west-1"
+  tpot_name = "T-Pot Honeypot"
+
+  linux_password = var.linux_password
+  web_password = var.web_password
+  providers = {
+    aws = aws.us-west-1
+  }
+}
diff --git a/cloud/terraform/aws_multi_region/modules/multi-region/main.tf b/cloud/terraform/aws_multi_region/modules/multi-region/main.tf
new file mode 100644
index 00000000..18ad1f40
--- /dev/null
+++ b/cloud/terraform/aws_multi_region/modules/multi-region/main.tf
@@ -0,0 +1,69 @@
+variable "ec2_vpc_id" {}
+variable "ec2_subnet_id" {}
+variable "ec2_region" {}
+variable "linux_password" {}
+variable "web_password" {}
+variable "tpot_name" {}
+
+resource "aws_security_group" "tpot" {
+  name        = "T-Pot"
+  description = "T-Pot Honeypot"
+  vpc_id      = var.ec2_vpc_id
+  ingress {
+    from_port   = 0
+    to_port     = 64000
+    protocol    = "tcp"
+    cidr_blocks = ["0.0.0.0/0"]
+  }
+  ingress {
+    from_port   = 0
+    to_port     = 64000
+    protocol    = "udp"
+    cidr_blocks = ["0.0.0.0/0"]
+  }
+  ingress {
+    from_port   = 64294
+    to_port     = 64294
+    protocol    = "tcp"
+    cidr_blocks = var.admin_ip
+  }
+  ingress {
+    from_port   = 64295
+    to_port     = 64295
+    protocol    = "tcp"
+    cidr_blocks = var.admin_ip
+  }
+  ingress {
+    from_port   = 64297
+    to_port     = 64297
+    protocol    = "tcp"
+    cidr_blocks = var.admin_ip
+  }
+  egress {
+    from_port   = 0
+    to_port     = 0
+    protocol    = "-1"
+    cidr_blocks = ["0.0.0.0/0"]
+  }
+  tags = {
+    Name = "T-Pot"
+  }
+}
+
+resource "aws_instance" "tpot" {
+  ami           = var.ec2_ami[var.ec2_region]
+  instance_type = var.ec2_instance_type
+  key_name      = var.ec2_ssh_key_name
+  subnet_id     = var.ec2_subnet_id
+  tags = {
+    Name = var.tpot_name
+  }
+  root_block_device {
+    volume_type           = "gp2"
+    volume_size           = 128
+    delete_on_termination = true
+  }
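+  # templatefile() resolves the relative path against the working directory, assuming terraform runs from the aws_multi_region/ root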
+  user_data                   = templatefile("../cloud-init.yaml", { timezone = var.timezone, password = var.linux_password, tpot_flavor = var.tpot_flavor, web_user = var.web_user, web_password = var.web_password })
+  vpc_security_group_ids      = [aws_security_group.tpot.id]
+  associate_public_ip_address = true
+}
diff --git a/cloud/terraform/aws_multi_region/modules/multi-region/outputs.tf b/cloud/terraform/aws_multi_region/modules/multi-region/outputs.tf
new file mode 100644
index 00000000..753a893b
--- /dev/null
+++ b/cloud/terraform/aws_multi_region/modules/multi-region/outputs.tf
@@ -0,0 +1,12 @@
+output "Admin_UI" {
+  value = "https://${aws_instance.tpot.public_dns}:64294/"
+}
+
+output "SSH_Access" {
+  value = "ssh -i {private_key_file} -p 64295 admin@${aws_instance.tpot.public_dns}"
+}
+
+output "Web_UI" {
+  value = "https://${aws_instance.tpot.public_dns}:64297/"
+}
+
diff --git a/cloud/terraform/aws_multi_region/modules/multi-region/variables.tf b/cloud/terraform/aws_multi_region/modules/multi-region/variables.tf
new file mode 100644
index 00000000..26a31b66
--- /dev/null
+++ b/cloud/terraform/aws_multi_region/modules/multi-region/variables.tf
@@ -0,0 +1,57 @@
+variable "admin_ip" {
+  default     = ["127.0.0.1/32"]
+  description = "admin IP addresses in CIDR format"
+}
+
+variable "ec2_ssh_key_name" {
+  default = "default"
+}
+
+# https://aws.amazon.com/ec2/instance-types/
+variable "ec2_instance_type" {
+  default = "t3.xlarge"
+}
+
+# Refer to https://wiki.debian.org/Cloud/AmazonEC2Image/Bullseye
+variable "ec2_ami" {
+  type = map(string)
+  default = {
+    "af-south-1"     = "ami-0c372f041acae6d49"
+    "ap-east-1"      = "ami-079b8d011d4655385"
+    "ap-northeast-1" = "ami-08dbbf1c0485a4aa8"
+    "ap-northeast-2" = "ami-0269fe7d013b8e2dd"
+    "ap-northeast-3" = "ami-0848d1e5fb6e3e3da"
+    "ap-south-1"     = "ami-020d429f17c9f1d0a"
+    "ap-southeast-1" = "ami-09625a221230d9fe6"
+    "ap-southeast-2" = "ami-03cbc6cddb06af2c2"
+    "ca-central-1"   = "ami-09125623b02302014"
+    "eu-central-1"   = "ami-00c36c60f07e21791"
+    "eu-north-1"     = "ami-052bea934e2d9dbfe"
+    "eu-south-1"     = "ami-04e2bb16d37324719"
+    "eu-west-1"      = "ami-0f87948fe2cf1b2a4"
+    "eu-west-2"      = "ami-02ed1bc837487d535"
+    "eu-west-3"      = "ami-080efd2add7e29430"
+    "me-south-1"     = "ami-0dbde382c834c4a72"
+    "sa-east-1"      = "ami-0a0792814cb068077"
+    "us-east-1"      = "ami-05dd1b6e7ef6f8378"
+    "us-east-2"      = "ami-04dd0542609808c50"
+    "us-west-1"      = "ami-07af5f877b3db9f73"
+    "us-west-2"      = "ami-0d0d8694ba492c02b"
+  }
+}
+
+## cloud-init configuration ##
+variable "timezone" {
+  default = "UTC"
+}
+
+## These will go in the generated tpot.conf file ##
+variable "tpot_flavor" {
+  default     = "STANDARD"
+  description = "Specify your tpot flavor [STANDARD, HIVE, HIVE_SENSOR, INDUSTRIAL, LOG4J, MEDICAL, MINI, SENSOR]"
+}
+
+variable "web_user" {
+  default     = "webuser"
+  description = "Set a username for the web user"
+}
diff --git a/cloud/terraform/aws_multi_region/modules/multi-region/versions.tf b/cloud/terraform/aws_multi_region/modules/multi-region/versions.tf
new file mode 100644
index 00000000..5699714f
--- /dev/null
+++ b/cloud/terraform/aws_multi_region/modules/multi-region/versions.tf
@@ -0,0 +1,9 @@
+terraform {
+  required_version = ">= 0.13"
+  required_providers {
+    aws = {
+      source  = "hashicorp/aws"
+      version = "3.72.0"
+    }
+  }
+}
diff --git a/cloud/terraform/aws_multi_region/outputs.tf b/cloud/terraform/aws_multi_region/outputs.tf
new file mode 100644
index 00000000..845637d4
--- /dev/null
+++ b/cloud/terraform/aws_multi_region/outputs.tf
@@ -0,0 +1,7 @@
+output "eu-west-2_Web_UI" {
+  value = module.eu-west-2.Web_UI
+}
+
+output "us-west-1_Web_UI" {
+  value = module.us-west-1.Web_UI
+}
diff --git a/cloud/terraform/aws_multi_region/variables.tf b/cloud/terraform/aws_multi_region/variables.tf
new file mode 100644
index 00000000..beb671a8
--- /dev/null
+++ b/cloud/terraform/aws_multi_region/variables.tf
@@ -0,0 +1,19 @@
+variable "linux_password" {
+  #default = "LiNuXuSeRP4Ss!"
+  description = "Set a password for the default user"
+
+  validation {
+    condition     = length(var.linux_password) > 0
+    error_message = "Please specify a password for the default user."
+  }
+}
+
+variable "web_password" {
+  #default = "w3b$ecret20"
+  description = "Set a password for the web user"
+
+  validation {
+    condition     = length(var.web_password) > 0
+    error_message = "Please specify a password for the web user."
+  }
+}
diff --git a/cloud/terraform/otc/variables.tf b/cloud/terraform/otc/variables.tf
index e70c89eb..384ea00e 100644
--- a/cloud/terraform/otc/variables.tf
+++ b/cloud/terraform/otc/variables.tf
@@ -79,7 +79,7 @@ variable "eip_size" {
 ## These will go in the generated tpot.conf file ##
 variable "tpot_flavor" {
   default     = "STANDARD"
-  description = "Specify your tpot flavor [STANDARD, SENSOR, INDUSTRIAL, COLLECTOR, NEXTGEN, MEDICAL]"
+  description = "Specify your tpot flavor [STANDARD, HIVE, HIVE_SENSOR, INDUSTRIAL, LOG4J, MEDICAL, MINI, SENSOR]"
 }
 
 variable "web_user" {
diff --git a/doc/architecture.png b/doc/architecture.png
index 51348088..3f02eaaa 100644
Binary files a/doc/architecture.png and b/doc/architecture.png differ
diff --git a/doc/attackmap.png b/doc/attackmap.png
new file mode 100644
index 00000000..55dc26e5
Binary files /dev/null and b/doc/attackmap.png differ
diff --git a/doc/cockpit1.png b/doc/cockpit1.png
deleted file mode 100644
index 3f154faa..00000000
Binary files a/doc/cockpit1.png and /dev/null differ
diff --git a/doc/cockpit2.png b/doc/cockpit2.png
deleted file mode 100644
index d1dcd0e9..00000000
Binary files a/doc/cockpit2.png and /dev/null differ
diff --git a/doc/cockpit3.png b/doc/cockpit3.png
deleted file mode 100644
index 09a34d2a..00000000
Binary files a/doc/cockpit3.png and /dev/null differ
diff --git a/doc/cockpit_a.png b/doc/cockpit_a.png
new file mode 100644
index 00000000..12331090
Binary files /dev/null and b/doc/cockpit_a.png differ
diff --git a/doc/cockpit_b.png b/doc/cockpit_b.png
new file mode 100644
index 00000000..3a66cdde
Binary files /dev/null and b/doc/cockpit_b.png differ
diff --git a/doc/cyberchef.png b/doc/cyberchef.png
index d295a551..04f6d28e 100644
Binary files a/doc/cyberchef.png and b/doc/cyberchef.png differ
diff --git a/doc/dashboard.png b/doc/dashboard.png
deleted file mode 100644
index ad60dd00..00000000
Binary files a/doc/dashboard.png and /dev/null differ
diff --git a/doc/dockerui.png b/doc/dockerui.png
deleted file mode 100644
index 8c6aa2dd..00000000
Binary files a/doc/dockerui.png and /dev/null differ
diff --git a/doc/elasticvue.png b/doc/elasticvue.png
new file mode 100644
index 00000000..83af72c1
Binary files /dev/null and b/doc/elasticvue.png differ
diff --git a/doc/headplugin.png b/doc/headplugin.png
deleted file mode 100644
index d6d611d6..00000000
Binary files a/doc/headplugin.png and /dev/null differ
diff --git a/doc/heimdall.png b/doc/heimdall.png
deleted file mode 100644
index 96fb494e..00000000
Binary files a/doc/heimdall.png and /dev/null differ
diff --git a/doc/kibana.png b/doc/kibana.png
deleted file mode 100644
index ad60dd00..00000000
Binary files a/doc/kibana.png and /dev/null differ
diff --git a/doc/kibana_a.png b/doc/kibana_a.png
new file mode 100644
index 00000000..5e993e06
Binary files /dev/null and b/doc/kibana_a.png differ
diff --git a/doc/kibana_b.png b/doc/kibana_b.png
new file mode 100644
index 00000000..e1b21660
Binary files /dev/null and b/doc/kibana_b.png differ
diff --git a/doc/kibana_c.png b/doc/kibana_c.png
new file mode 100644
index 00000000..d2886b5d
Binary files /dev/null and b/doc/kibana_c.png differ
diff --git a/doc/netdata.png b/doc/netdata.png
deleted file mode 100644
index 43cb2729..00000000
Binary files a/doc/netdata.png and /dev/null differ
diff --git a/doc/spiderfoot.png b/doc/spiderfoot.png
index b9abe17a..138561b8 100644
Binary files a/doc/spiderfoot.png and b/doc/spiderfoot.png differ
diff --git a/doc/tpotwebui.png b/doc/tpotwebui.png
new file mode 100644
index 00000000..f9bfc2dc
Binary files /dev/null and b/doc/tpotwebui.png differ
diff --git a/doc/webssh.png b/doc/webssh.png
deleted file mode 100644
index 62b52f22..00000000
Binary files a/doc/webssh.png and /dev/null differ
diff --git a/docker/adbhoney/Dockerfile b/docker/adbhoney/Dockerfile
index 606f1895..d6808eb4 100644
--- a/docker/adbhoney/Dockerfile
+++ b/docker/adbhoney/Dockerfile
@@ -1,20 +1,19 @@
-FROM alpine:3.14
+FROM alpine:3.15
 #
 # Include dist
-ADD dist/ /root/dist/
+COPY dist/ /root/dist/
 #
 # Install packages
-RUN apk -U add \
+RUN apk --no-cache -U add \
             git \
-            libcap \
-	    py3-pip \
-            python3 \
-            python3-dev && \
+	    procps \
+            python3 && \
 #
 # Install adbhoney from git
     git clone https://github.com/huuck/ADBHoney /opt/adbhoney && \
     cd /opt/adbhoney && \
-    git checkout ad7c17e78d01f6860d58ba826a4b6a4e4f83acbd && \
+#    git checkout ad7c17e78d01f6860d58ba826a4b6a4e4f83acbd && \
+    git checkout 2417a7a982f4fd527b3a048048df9a23178767ad && \
     cp /root/dist/adbhoney.cfg /opt/adbhoney && \
     sed -i 's/dst_ip/dest_ip/' /opt/adbhoney/adbhoney/core.py && \
     sed -i 's/dst_port/dest_port/' /opt/adbhoney/adbhoney/core.py && \
@@ -23,16 +22,15 @@ RUN apk -U add \
     addgroup -g 2000 adbhoney && \
     adduser -S -H -s /bin/ash -u 2000 -D -g 2000 adbhoney && \
     chown -R adbhoney:adbhoney /opt/adbhoney && \
-    setcap cap_net_bind_service=+ep /usr/bin/python3.9 && \
 #
 # Clean up
-    apk del --purge git \
-                    python3-dev && \
-    rm -rf /root/* && \
-    rm -rf /var/cache/apk/*
+    apk del --purge git && \
+    rm -rf /root/* /opt/adbhoney/.git /var/cache/apk/*
 #
 # Set workdir and start adbhoney
 STOPSIGNAL SIGINT
+# Adbhoney sometimes hangs at 100% CPU usage; if detected, the process is killed and the container restarts per docker-compose settings
+HEALTHCHECK CMD if [ $(ps -p 1 -o %cpu | tail -n 1 | cut -f 1 -d ".") -gt 75 ]; then kill -2 1; else exit 0; fi
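+# Note: kill -2 sends SIGINT to PID 1 (matching STOPSIGNAL), so docker-compose's restart policy recycles the container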
 USER adbhoney:adbhoney
 WORKDIR /opt/adbhoney/
-CMD nohup /usr/bin/python3 run.py
+CMD /usr/bin/python3 run.py
diff --git a/docker/adbhoney/docker-compose.yml b/docker/adbhoney/docker-compose.yml
index 1c720021..7f2139f3 100644
--- a/docker/adbhoney/docker-compose.yml
+++ b/docker/adbhoney/docker-compose.yml
@@ -10,12 +10,13 @@ services:
     build: .
     container_name: adbhoney
     restart: always
+    #    cpu_count: 1
+    #    cpus: 0.25
     networks:
      - adbhoney_local
     ports:
      - "5555:5555"
-#    image: "dtagdevsec/adbhoney:2006"
-    image: "dtagdevsec/adbhoney:2006"
+    image: "dtagdevsec/adbhoney:2204"
     read_only: true
     volumes:
      - /data/adbhoney/log:/opt/adbhoney/log
diff --git a/docker/builder.sh b/docker/builder.sh
new file mode 100755
index 00000000..10582f03
--- /dev/null
+++ b/docker/builder.sh
@@ -0,0 +1,79 @@
+#!/bin/bash
+
+# Setup Vars
+myPLATFORMS="linux/amd64,linux/arm64"
+myHUBORG="dtagdevsec"
+myTAG="2204"
+myIMAGESBASE="adbhoney ciscoasa citrixhoneypot conpot cowrie ddospot dicompot dionaea elasticpot endlessh ewsposter fatt glutton hellpot heralding honeypots honeytrap ipphoney log4pot mailoney medpot nginx p0f redishoneypot sentrypeer spiderfoot suricata wordpot"
+myIMAGESELK="elasticsearch kibana logstash map"
+myIMAGESTANNER="phpox redis snare tanner"
+myBUILDERLOG="builder.log"
+myBUILDERERR="builder.err"
+myBUILDCACHE="/buildcache"
+
+# Got root?
+myWHOAMI=$(whoami)
+if [ "$myWHOAMI" != "root" ]
+  then
+    echo "Need to run as root ..."
+    exit
+fi
+
+# Check for Buildx
+docker buildx > /dev/null 2>&1 
+if [ "$?" == "1" ];
+  then
+    echo "### Build environment not setup. Run bin/setup_builder.sh"
+fi
+
+# Only run with command switch
+if [ "$1" == "" ]; then
+  echo "### T-Pot Multi Arch Image Builder."
+  echo "## Usage: builder.sh [build, push]"
+  echo "## build - Just build images, do not push."
+  echo "## push - Build and push images."
+  echo "## Pushing requires an active docker login."
+  exit
+fi
+
+fuBUILDIMAGES () {
+local myPATH="$1"
+local myIMAGELIST="$2"
+local myPUSHOPTION="$3"
+
+for myREPONAME in $myIMAGELIST;
+  do
+    echo -n "Now building: $myREPONAME in $myPATH$myREPONAME/."
+    docker buildx build --cache-from "type=local,src=$myBUILDCACHE" --cache-to "type=local,dest=$myBUILDCACHE" --platform $myPLATFORMS -t $myHUBORG/$myREPONAME:$myTAG $myPUSHOPTION $myPATH$myREPONAME/. >> $myBUILDERLOG 2>&1
+    if [ "$?" != "0" ];
+      then
+	echo " [ ERROR ] - Check logs!"
+	echo "Error building $myREPONAME" >> "$myBUILDERERR"
+      else
+	echo " [ OK ]"
+    fi
+done
+}
+
+# Just build images
+if [ "$1" == "build" ];
+  then
+    mkdir -p $myBUILDCACHE
+    rm -f "$myBUILDERLOG" "$myBUILDERERR" 
+    echo "### Building images ..."
+    fuBUILDIMAGES "" "$myIMAGESBASE" ""
+    fuBUILDIMAGES "elk/" "$myIMAGESELK" ""
+    fuBUILDIMAGES "tanner/" "$myIMAGESTANNER" ""
+fi
+
+# Build and push images
+if [ "$1" == "push" ];
+  then
+    mkdir -p $myBUILDCACHE
+    rm -f "$myBUILDERLOG" "$myBUILDERERR" 
+    echo "### Building and pushing images ..."
+    fuBUILDIMAGES "" "$myIMAGESBASE" "--push"
+    fuBUILDIMAGES "elk/" "$myIMAGESELK" "--push"
+    fuBUILDIMAGES "tanner/" "$myIMAGESTANNER" "--push"
+fi
+
diff --git a/docker/ciscoasa/Dockerfile b/docker/ciscoasa/Dockerfile
index 49233257..ad64bfb1 100644
--- a/docker/ciscoasa/Dockerfile
+++ b/docker/ciscoasa/Dockerfile
@@ -1,11 +1,11 @@
-FROM alpine:3.14
+FROM alpine:3.15
 #
 # Include dist
-ADD dist/ /root/dist/
+COPY dist/ /root/dist/
 #
 # Setup env and apt
-RUN apk -U upgrade && \
-    apk add build-base \
+RUN apk --no-cache -U upgrade && \
+    apk --no-cache add build-base \
             git \
             libffi \
             libffi-dev \
@@ -26,6 +26,7 @@ RUN apk -U upgrade && \
     git clone https://github.com/cymmetria/ciscoasa_honeypot && \
     cd ciscoasa_honeypot && \
     git checkout d6e91f1aab7fe6fc01fabf2046e76b68dd6dc9e2 && \
+    sed -i "s/git+git/git+https/g" requirements.txt && \
     pip3 install --no-cache-dir -r requirements.txt && \
     cp /root/dist/asa_server.py /opt/ciscoasa_honeypot && \
     chown -R ciscoasa:ciscoasa /opt/ciscoasa_honeypot && \
@@ -37,6 +38,7 @@ RUN apk -U upgrade && \
                     openssl-dev \
                     python3-dev && \
     rm -rf /root/* && \
+    rm -rf /opt/ciscoasa_honeypot/.git && \
     rm -rf /var/cache/apk/*
 #
 # Start ciscoasa
diff --git a/docker/ciscoasa/docker-compose.yml b/docker/ciscoasa/docker-compose.yml
index bf85bc48..2aab5b40 100644
--- a/docker/ciscoasa/docker-compose.yml
+++ b/docker/ciscoasa/docker-compose.yml
@@ -9,11 +9,14 @@ services:
     restart: always
     tmpfs:
      - /tmp/ciscoasa:uid=2000,gid=2000
-    network_mode: "host"
+#    cpu_count: 1
+#    cpus: 0.25
+    networks:
+     - ciscoasa_local
     ports:
      - "5000:5000/udp"
      - "8443:8443"
-    image: "dtagdevsec/ciscoasa:2006"
+    image: "dtagdevsec/ciscoasa:2204"
     read_only: true
     volumes:
      - /data/ciscoasa/log:/var/log/ciscoasa
diff --git a/docker/citrixhoneypot/Dockerfile b/docker/citrixhoneypot/Dockerfile
index 39f7c1b4..b6788840 100644
--- a/docker/citrixhoneypot/Dockerfile
+++ b/docker/citrixhoneypot/Dockerfile
@@ -1,13 +1,12 @@
-FROM alpine:3.14
+FROM alpine:3.15
 #
 # Install packages
-RUN apk -U add \
+RUN apk --no-cache -U add \
             git \
             libcap \
 	    openssl \
             py3-pip \
-            python3 \
-            python3-dev && \
+            python3 && \
 #
     pip3 install --no-cache-dir python-json-logger && \
 #
@@ -33,9 +32,9 @@ RUN apk -U add \
 #
 # Clean up
     apk del --purge git \
-                    openssl \
-                    python3-dev && \
+                    openssl && \
     rm -rf /root/* && \
+    rm -rf /opt/citrixhoneypot/.git && \
     rm -rf /var/cache/apk/*
 #
 # Set workdir and start citrixhoneypot
diff --git a/docker/citrixhoneypot/docker-compose.yml b/docker/citrixhoneypot/docker-compose.yml
index 16eea88f..7e3383f3 100644
--- a/docker/citrixhoneypot/docker-compose.yml
+++ b/docker/citrixhoneypot/docker-compose.yml
@@ -10,11 +10,13 @@ services:
     build: .
     container_name: citrixhoneypot
     restart: always
+#    cpu_count: 1
+#    cpus: 0.25
     networks:
      - citrixhoneypot_local
     ports:
      - "443:443"
-    image: "dtagdevsec/citrixhoneypot:2006"
+    image: "dtagdevsec/citrixhoneypot:2204"
     read_only: true
     volumes:
      - /data/citrixhoneypot/logs:/opt/citrixhoneypot/logs
diff --git a/docker/conpot/Dockerfile b/docker/conpot/Dockerfile
index f537fba8..feb4fd33 100644
--- a/docker/conpot/Dockerfile
+++ b/docker/conpot/Dockerfile
@@ -1,11 +1,12 @@
-FROM alpine:3.14
+FROM alpine:3.15
 #
 # Include dist
-ADD dist/ /root/dist/
+COPY dist/ /root/dist/
 #
 # Setup apt
-RUN apk -U add \
+RUN apk --no-cache -U add \
              build-base \
+	     cython \
              file \
              git \
              libev \
@@ -16,36 +17,53 @@ RUN apk -U add \
              libxslt-dev \
              mariadb-dev \
              pkgconfig \
+	     procps \
              python3 \
              python3-dev \
-             py3-cffi \
-             py3-cryptography \
+	     py3-cffi \
+	     py3-cryptography \
+	     py3-freezegun \
 	     py3-gevent \
+	     py3-lxml \
+	     py3-natsort \
 	     py3-pip \
-             tcpdump \
+	     py3-ply \
+	     py3-psutil \
+	     py3-pycryptodomex \
+	     py3-pytest \
+	     py3-requests \
+             py3-pyserial \
+	     py3-setuptools \
+	     py3-slugify \
+	     py3-snmp \
+	     py3-sphinx \
+	     py3-wheel \
+	     py3-zope-event \
+	     py3-zope-interface \
              wget && \
 #
 # Setup ConPot
     git clone https://github.com/mushorg/conpot /opt/conpot && \
     cd /opt/conpot/ && \
-    git checkout 804fd65aa3b7ffa31c07fd4e863d4a5500414cf3 && \
+    git checkout b3740505fd26d82473c0d7be405b372fa0f82575 && \
+    #git checkout 1c2382ea290b611fdc6a0a5f9572c7504bcb616e && \
     # Change template default ports if <1024
-    sed -i 's/port="2121"/port="21"/' /opt/conpot/conpot/templates/default/ftp/ftp.xml && \ 
-    sed -i 's/port="8800"/port="80"/' /opt/conpot/conpot/templates/default/http/http.xml && \ 
-    sed -i 's/port="6230"/port="623"/' /opt/conpot/conpot/templates/default/ipmi/ipmi.xml && \ 
-    sed -i 's/port="5020"/port="502"/' /opt/conpot/conpot/templates/default/modbus/modbus.xml && \ 
-    sed -i 's/port="10201"/port="102"/' /opt/conpot/conpot/templates/default/s7comm/s7comm.xml && \ 
-    sed -i 's/port="16100"/port="161"/' /opt/conpot/conpot/templates/default/snmp/snmp.xml && \ 
-    sed -i 's/port="6969"/port="69"/' /opt/conpot/conpot/templates/default/tftp/tftp.xml && \ 
-    sed -i 's/port="16100"/port="161"/' /opt/conpot/conpot/templates/IEC104/snmp/snmp.xml && \ 
-    sed -i 's/port="6230"/port="623"/' /opt/conpot/conpot/templates/ipmi/ipmi/ipmi.xml && \ 
-    pip3 install --no-cache-dir -U setuptools && \
+    sed -i 's/port="2121"/port="21"/' /opt/conpot/conpot/templates/default/ftp/ftp.xml && \
+    sed -i 's/port="8800"/port="80"/' /opt/conpot/conpot/templates/default/http/http.xml && \
+    sed -i 's/port="6230"/port="623"/' /opt/conpot/conpot/templates/default/ipmi/ipmi.xml && \
+    sed -i 's/port="5020"/port="502"/' /opt/conpot/conpot/templates/default/modbus/modbus.xml && \
+    sed -i 's/port="10201"/port="102"/' /opt/conpot/conpot/templates/default/s7comm/s7comm.xml && \
+    sed -i 's/port="16100"/port="161"/' /opt/conpot/conpot/templates/default/snmp/snmp.xml && \
+    sed -i 's/port="6969"/port="69"/' /opt/conpot/conpot/templates/default/tftp/tftp.xml && \
+    sed -i 's/port="16100"/port="161"/' /opt/conpot/conpot/templates/IEC104/snmp/snmp.xml && \
+    sed -i 's/port="6230"/port="623"/' /opt/conpot/conpot/templates/ipmi/ipmi/ipmi.xml && \
+    cp /root/dist/requirements.txt . && \
+    pip3 install --no-cache-dir --upgrade pip && \
     pip3 install --no-cache-dir . && \
-    pip3 install --no-cache-dir pysnmp-mibs && \
     cd / && \
     rm -rf /opt/conpot /tmp/* /var/tmp/* && \
     setcap cap_net_bind_service=+ep /usr/bin/python3.9 && \
-#    
+#
 # Get wireshark manuf db for scapy, setup configs, user, groups
     mkdir -p /etc/conpot /var/log/conpot /usr/share/wireshark && \
     wget https://github.com/wireshark/wireshark/raw/master/manuf -O /usr/share/wireshark/manuf && \
@@ -66,7 +84,6 @@ RUN apk -U add \
             mariadb-dev \
             pkgconfig \
             python3-dev \
-            py-cffi \
             wget && \
     rm -rf /root/* && \
     rm -rf /tmp/* && \
@@ -74,5 +91,7 @@ RUN apk -U add \
 #
 # Start conpot
 STOPSIGNAL SIGINT
+# Conpot sometimes hangs at 100% CPU usage; if detected, the process is killed and the container restarts per docker-compose settings
+HEALTHCHECK CMD if [ $(ps -p 1 -o %cpu | tail -n 1 | cut -f 1 -d ".") -gt 75 ]; then kill -2 1; else exit 0; fi
 USER conpot:conpot
 CMD exec /usr/bin/conpot --mibcache $CONPOT_TMP --temp_dir $CONPOT_TMP --template $CONPOT_TEMPLATE --logfile $CONPOT_LOG --config $CONPOT_CONFIG
diff --git a/docker/conpot/dist/command_responder.py b/docker/conpot/dist/command_responder.py
deleted file mode 100644
index 74cabca2..00000000
--- a/docker/conpot/dist/command_responder.py
+++ /dev/null
@@ -1,1123 +0,0 @@
-# Copyright (C) 2013  Daniel creo Haslinger <creo-conpot@blackmesa.at>
-#
-# This program is free software; you can redistribute it and/or
-# modify it under the terms of the GNU General Public License
-# as published by the Free Software Foundation; either version 2
-# of the License, or (at your option) any later version.
-#
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with this program; if not, write to the Free Software
-# Foundation, Inc.,
-# 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.
-
-import logging
-import time
-import random
-
-from datetime import datetime
-
-from html.parser import HTMLParser
-from socketserver import ThreadingMixIn
-
-import http.server
-import http.client
-import os
-from lxml import etree
-from conpot.helpers import str_to_bytes
-import conpot.core as conpot_core
-import gevent
-
-
-logger = logging.getLogger(__name__)
-
-
-class HTTPServer(http.server.BaseHTTPRequestHandler):
-
-    def log(self, version, request_type, addr, request, response=None):
-
-        session = conpot_core.get_session('http', addr[0], addr[1], self.connection._sock.getsockname()[0], self.connection._sock.getsockname()[1])
-
-        log_dict = {'remote': addr,
-                    'timestamp': datetime.utcnow(),
-                    'data_type': 'http',
-                    'dst_port': self.server.server_port,
-                    'data': {0: {'request': '{0} {1}: {2}'.format(version, request_type, request)}}}
-
-        logger.info('%s %s request from %s: %s. %s', version, request_type, addr, request, session.id)
-
-        if response:
-            logger.info('%s response to %s: %s. %s', version, addr, response, session.id)
-            log_dict['data'][0]['response'] = '{0} response: {1}'.format(version, response)
-            session.add_event({'request': str(request), 'response': str(response)})
-        else:
-            session.add_event({'request': str(request)})
-
-        # FIXME: Proper logging
-
-    def get_entity_headers(self, rqfilename, headers, configuration):
-
-        xml_headers = configuration.xpath(
-            '//http/htdocs/node[@name="' + rqfilename + '"]/headers/*'
-        )
-
-        if xml_headers:
-
-            # retrieve all headers assigned to this entity
-            for header in xml_headers:
-                headers.append((header.attrib['name'], header.text))
-
-        return headers
-
-    def get_trigger_appendix(self, rqfilename, rqparams, configuration):
-
-        xml_triggers = configuration.xpath(
-            '//http/htdocs/node[@name="' + rqfilename + '"]/triggers/*'
-        )
-
-        if xml_triggers:
-            paramlist = rqparams.split('&')
-
-            # retrieve all subselect triggers assigned to this entity
-            for triggers in xml_triggers:
-
-                triggerlist = triggers.text.split(';')
-                trigger_missed = False
-
-                for trigger in triggerlist:
-                    if not trigger in paramlist:
-                        trigger_missed = True
-
-                if not trigger_missed:
-                    return triggers.attrib['appendix']
-
-        return None
-
-    def get_entity_trailers(self, rqfilename, configuration):
-
-        trailers = []
-        xml_trailers = configuration.xpath(
-            '//http/htdocs/node[@name="' + rqfilename + '"]/trailers/*'
-        )
-
-        if xml_trailers:
-
-            # retrieve all headers assigned to this entity
-            for trailer in xml_trailers:
-                trailers.append((trailer.attrib['name'], trailer.text))
-
-        return trailers
-
-    def get_status_headers(self, status, headers, configuration):
-
-        xml_headers = configuration.xpath('//http/statuscodes/status[@name="' +
-                                          str(status) + '"]/headers/*')
-
-        if xml_headers:
-
-            # retrieve all headers assigned to this status
-            for header in xml_headers:
-                headers.append((header.attrib['name'], header.text))
-
-        return headers
-
-    def get_status_trailers(self, status, configuration):
-
-        trailers = []
-        xml_trailers = configuration.xpath(
-            '//http/statuscodes/status[@name="' + str(status) + '"]/trailers/*'
-        )
-
-        if xml_trailers:
-
-            # retrieve all trailers assigned to this status
-            for trailer in xml_trailers:
-                trailers.append((trailer.attrib['name'], trailer.text))
-
-        return trailers
-
-    def send_response(self, code, message=None):
-        """Send the response header and log the response code.
-        This function is overloaded to change the behaviour when
-        loggers and sending default headers.
-        """
-
-        # replace integrated loggers with conpot logger..
-        # self.log_request(code)
-
-        if message is None:
-            if code in self.responses:
-                message = self.responses[code][0]
-            else:
-                message = ''
-
-        if self.request_version != 'HTTP/0.9':
-            msg = str_to_bytes("{} {} {}\r\n".format(self.protocol_version, code, message))
-            self.wfile.write(msg)
-
-        # the following two headers are omitted, which is why we override
-        # send_response() at all. We do this one on our own...
-
-        # - self.send_header('Server', self.version_string())
-        # - self.send_header('Date', self.date_time_string())
-
-    def substitute_template_fields(self, payload):
-
-        # initialize parser with our payload
-        parser = TemplateParser(payload)
-
-        # triggers the parser, just in case of open / incomplete tags..
-        parser.close()
-
-        # retrieve and return (substituted) payload
-        return parser.payload
-
-    def load_status(self, status, requeststring, requestheaders, headers, configuration, docpath, method='GET', body=None):
-        """Retrieves headers and payload for a given status code.
-           Certain status codes can be configured to forward the
-           request to a remote system. If not available, generate
-           a minimal response"""
-
-        # handle PROXY tag
-        entity_proxy = configuration.xpath('//http/statuscodes/status[@name="' +
-                                           str(status) +
-                                           '"]/proxy')
-
-        if entity_proxy:
-            source = 'proxy'
-            target = entity_proxy[0].xpath('./text()')[0]
-        else:
-            source = 'filesystem'
-
-        # handle TARPIT tag
-        entity_tarpit = configuration.xpath(
-            '//http/statuscodes/status[@name="' + str(status) + '"]/tarpit'
-        )
-
-        if entity_tarpit:
-            tarpit = self.server.config_sanitize_tarpit(entity_tarpit[0].xpath('./text()')[0])
-        else:
-            tarpit = None
-
-        # check if we have to delay further actions due to global or local TARPIT configuration
-        if tarpit is not None:
-            # this node has its own delay configuration
-            self.server.do_tarpit(tarpit)
-        else:
-            # no delay configuration for this node. check for global latency
-            if self.server.tarpit is not None:
-                # fall back to the globally configured latency
-                self.server.do_tarpit(self.server.tarpit)
-
-        # If the requested resource resides on our filesystem,
-        # we try retrieve all metadata and the resource itself from there.
-        if source == 'filesystem':
-
-            # retrieve headers from entities configuration block
-            headers = self.get_status_headers(status, headers, configuration)
-
-            # retrieve headers from entities configuration block
-            trailers = self.get_status_trailers(status, configuration)
-
-            # retrieve payload directly from filesystem, if possible.
-            # If this is not possible, return an empty, zero sized string.
-            try:
-                if not isinstance(status, int):
-                    status = status.value
-                with open(os.path.join(docpath, 'statuscodes', str(int(status)) + '.status'), 'rb') as f:
-                    payload = f.read()
-
-            except IOError as e:
-                logger.exception('%s', e)
-                payload = ''
-
-            # there might be template data that can be substituted within the
-            # payload. We only substitute data that is going to be displayed
-            # by the browser:
-
-            # perform template substitution on payload
-            payload = self.substitute_template_fields(payload)
-
-            # How do we transport the content?
-            chunked_transfer = configuration.xpath('//http/htdocs/node[@name="' +
-                                                   str(status) + '"]/chunks')
-
-            if chunked_transfer:
-                # Append a chunked transfer encoding header
-                headers.append(('Transfer-Encoding', 'chunked'))
-                chunks = str(chunked_transfer[0].xpath('./text()')[0])
-            else:
-                # Calculate and append a content length header
-                headers.append(('Content-Length', payload.__len__()))
-                chunks = '0'
-
-            return status, headers, trailers, payload, chunks
-
-        # the requested status code is configured to forward the
-        # originally targeted resource to a remote system.
-
-        elif source == 'proxy':
-
-            # open a connection to the remote system.
-            # If something goes wrong, fall back to 503.
-
-            # NOTE: we use try:except here because there is no perfect
-            # platform independent way to check file accessibility.
-
-            trailers = []
-            chunks = '0'
-
-            try:
-                # Modify a few headers to fit our new destination and the fact
-                # that we're proxying while being unaware of any session foo..
-                requestheaders['Host'] = target
-                requestheaders['Connection'] = 'close'
-
-                remotestatus = 0
-                conn = http.client.HTTPConnection(target)
-                conn.request(method, requeststring, body, dict(requestheaders))
-                response = conn.getresponse()
-
-                remotestatus = int(response.status)
-                headers = response.getheaders()   # We REPLACE the headers to avoid duplicates!
-                payload = response.read()
-
-                # WORKAROUND: to get around a strange httplib-behaviour when it comes
-                # to chunked transfer encoding, we replace the chunked-header with a
-                # valid Content-Length header:
-
-                for i, header in enumerate(headers):
-
-                    if header[0].lower() == 'transfer-encoding' and header[1].lower() == 'chunked':
-                        del headers[i]
-                        break
-
-                status = remotestatus
-
-            except:
-
-                # before falling back to 503, we check if we are ALREADY dealing with a 503
-                # to prevent an infinite request handling loop...
-
-                if status != 503:
-
-                    # we're handling another error here.
-                    # generate a 503 response from configuration.
-                    (status, headers, trailers, payload, chunks) = self.load_status(503,
-                                                                                    requeststring,
-                                                                                    self.headers,
-                                                                                    headers,
-                                                                                    configuration,
-                                                                                    docpath)
-
-                else:
-
-                    # oops, we're heading towards an infinite loop here,
-                    # generate a minimal 503 response regardless of the configuration.
-                    status = 503
-                    payload = ''
-                    chunks = '0'
-                    headers.append(('Content-Length', 0))
-
-            return status, headers, trailers, payload, chunks
-
-    def load_entity(self, requeststring, headers, configuration, docpath):
-        """
-        Retrieves status, headers and payload for a given entity, that
-        can be stored either local or on a remote system
-        """
-
-        # extract filename and GET parameters from request string
-        rqfilename = requeststring.partition('?')[0]
-        rqparams = requeststring.partition('?')[2]
-
-        # handle ALIAS tag
-        entity_alias = configuration.xpath(
-            '//http/htdocs/node[@name="' + rqfilename + '"]/alias'
-        )
-        if entity_alias:
-            rqfilename = entity_alias[0].xpath('./text()')[0]
-
-        # handle SUBSELECT tag
-        rqfilename_appendix = self.get_trigger_appendix(rqfilename, rqparams, configuration)
-        if rqfilename_appendix:
-            rqfilename += '_' + rqfilename_appendix
-
-        # handle PROXY tag
-        entity_proxy = configuration.xpath(
-            '//http/htdocs/node[@name="' + rqfilename + '"]/proxy'
-        )
-        if entity_proxy:
-            source = 'proxy'
-            target = entity_proxy[0].xpath('./text()')[0]
-        else:
-            source = 'filesystem'
-
-        # handle TARPIT tag
-        entity_tarpit = configuration.xpath(
-            '//http/htdocs/node[@name="' + rqfilename + '"]/tarpit'
-        )
-        if entity_tarpit:
-            tarpit = self.server.config_sanitize_tarpit(entity_tarpit[0].xpath('./text()')[0])
-        else:
-            tarpit = None
-
-        # check if we have to delay further actions due to global or local TARPIT configuration
-        if tarpit is not None:
-            # this node has its own delay configuration
-            self.server.do_tarpit(tarpit)
-        else:
-            # no delay configuration for this node. check for global latency
-            if self.server.tarpit is not None:
-                # fall back to the globally configured latency
-                self.server.do_tarpit(self.server.tarpit)
-
-        # If the requested resource resides on our filesystem,
-        # we try retrieve all metadata and the resource itself from there.
-        if source == 'filesystem':
-
-            # handle STATUS tag
-            # ( filesystem only, since proxied requests come with their own status )
-            entity_status = configuration.xpath(
-                '//http/htdocs/node[@name="' + rqfilename + '"]/status'
-            )
-            if entity_status:
-                status = int(entity_status[0].xpath('./text()')[0])
-            else:
-                status = 200
-
-            # retrieve headers from entities configuration block
-            headers = self.get_entity_headers(rqfilename, headers, configuration)
-
-            # retrieve trailers from entities configuration block
-            trailers = self.get_entity_trailers(rqfilename, configuration)
-
-            # retrieve payload directly from filesystem, if possible.
-            # If this is not possible, return an empty, zero sized string.
-            if os.path.isabs(rqfilename):
-                relrqfilename = rqfilename[1:]
-            else:
-                relrqfilename = rqfilename
-
-            try:
-                with open(os.path.join(docpath, 'htdocs', relrqfilename), 'rb') as f:
-                    payload = f.read()
-
-            except IOError as e:
-                if not os.path.isdir(os.path.join(docpath, 'htdocs', relrqfilename)):
-                    logger.error('Failed to get template content: %s', e)
-                payload = ''
-
-            # there might be template data that can be substituted within the
-            # payload. We only substitute data that is going to be displayed
-            # by the browser:
-
-            templated = False
-            for header in headers:
-                if header[0].lower() == 'content-type' and header[1].lower() == 'text/html':
-                    templated = True
-
-            if templated:
-                # perform template substitution on payload
-                payload = self.substitute_template_fields(payload)
-
-            # How do we transport the content?
-            chunked_transfer = configuration.xpath(
-                '//http/htdocs/node[@name="' + rqfilename + '"]/chunks'
-            )
-
-            if chunked_transfer:
-                # Calculate and append a chunked transfer encoding header
-                headers.append(('Transfer-Encoding', 'chunked'))
-                chunks = str(chunked_transfer[0].xpath('./text()')[0])
-            else:
-                # Calculate and append a content length header
-                headers.append(('Content-Length', payload.__len__()))
-                chunks = '0'
-
-            return status, headers, trailers, payload, chunks
-
-        # the requested resource resides on another server,
-        # so we act as a proxy between client and target system
-
-        elif source == 'proxy':
-
-            # open a connection to the remote system.
-            # If something goes wrong, fall back to 503
-
-            trailers = []
-
-            try:
-                conn = http.client.HTTPConnection(target)
-                conn.request("GET", requeststring)
-                response = conn.getresponse()
-
-                status = int(response.status)
-                headers = response.getheaders()    # We REPLACE the headers to avoid duplicates!
-                payload = response.read()
-                chunks = '0'
-
-            except:
-                status = 503
-                (status, headers, trailers, payload, chunks) = self.load_status(status,
-                                                                                requeststring,
-                                                                                self.headers,
-                                                                                headers,
-                                                                                configuration,
-                                                                                docpath)
-
-            return status, headers, trailers, payload, chunks
-
-    def send_chunked(self, chunks, payload, trailers):
-        """Send payload via chunked transfer encoding to the
-        client, followed by eventual trailers."""
-
-        chunk_list = chunks.split(',')
-        pointer = 0
-        for cwidth in chunk_list:
-            cwidth = int(cwidth)
-            # send chunk length indicator
-            self.wfile.write(format(cwidth, 'x').upper() + "\r\n")
-            # send chunk payload
-            self.wfile.write(payload[pointer:pointer + cwidth] + "\r\n")
-            pointer += cwidth
-
-        # is there another chunk that has not been configured? Send it anyway for the sake of completeness..
-        if len(payload) > pointer:
-            # send chunk length indicator
-            self.wfile.write(format(len(payload) - pointer, 'x').upper() + "\r\n")
-            # send chunk payload
-            self.wfile.write(payload[pointer:] + "\r\n")
-
-        # we're done with the payload. Send a zero chunk as EOF indicator
-        self.wfile.write('0'+"\r\n")
-
-        # if there are trailing headers :-) we send them now..
-        for trailer in trailers:
-            self.wfile.write("%s: %s\r\n" % (trailer[0], trailer[1]))
-
-        # and finally, the closing ceremony...
-        self.wfile.write("\r\n")
-
-    def send_error(self, code, message=None):
-        """Send and log an error reply.
-        This method is overloaded to make use of load_status()
-        to allow handling of "Unsupported Method" errors.
-        """
-
-        headers = []
-        headers.extend(self.server.global_headers)
-        configuration = self.server.configuration
-        docpath = self.server.docpath
-
-        if not hasattr(self, 'headers'):
-            self.headers = self.MessageClass(self.rfile, 0)
-
-        trace_data_length = self.headers.get('content-length')
-        unsupported_request_data = None
-
-        if trace_data_length:
-            unsupported_request_data = self.rfile.read(int(trace_data_length))
-
-        # there are certain situations where variables are (not yet) registered
-        # ( e.g. corrupted request syntax ). In this case, we set them manually.
-        if hasattr(self, 'path') and self.path is not None:
-            requeststring = self.path
-        else:
-            requeststring = ''
-            self.path = None
-            if message is not None:
-                logger.info(message)
-
-        # generate the appropriate status code, header and payload
-        (status, headers, trailers, payload, chunks) = self.load_status(code,
-                                                                        requeststring.partition('?')[0],
-                                                                        self.headers,
-                                                                        headers,
-                                                                        configuration,
-                                                                        docpath)
-
-        # send http status to client
-        self.send_response(status)
-
-        # send all headers to client
-        for header in headers:
-            self.send_header(header[0], header[1])
-
-        self.end_headers()
-
-        # decide upon sending content as a whole or chunked
-        if chunks == '0':
-            # send payload as a whole to the client
-            if not isinstance(payload, bytes):
-                payload = payload.encode()
-            self.wfile.write(payload)
-        else:
-            # send payload in chunks to the client
-            self.send_chunked(chunks, payload, trailers)
-
-        # loggers
-        self.log(self.request_version, self.command, self.client_address, (self.path,
-                                                                           self.headers._headers,
-                                                                           unsupported_request_data), status)
-
-    def do_TRACE(self):
-        """Handle TRACE requests."""
-
-        # fetch configuration dependent variables from server instance
-        headers = []
-        headers.extend(self.server.global_headers)
-        configuration = self.server.configuration
-        docpath = self.server.docpath
-
-        # retrieve TRACE body data
-        # ( per the HTTP protocol there should be no body in TRACE requests,
-        #   but an attacker could use a body to inject data if it is not flushed correctly;
-        #   reading the data, as we do here, flushes the buffer - just to be safe )
-
-        trace_data_length = self.headers.get('content-length')
-        trace_data = None
-
-        if trace_data_length:
-            trace_data = self.rfile.read(int(trace_data_length))
-
-        # check configuration: are we allowed to use this method?
-        if self.server.disable_method_trace is True:
-
-            # Method disabled by configuration. Fall back to 501.
-            status = 501
-            (status, headers, trailers, payload, chunks) = self.load_status(status,
-                                                                            self.path,
-                                                                            self.headers,
-                                                                            headers,
-                                                                            configuration,
-                                                                            docpath)
-
-        else:
-
-            # Method is enabled
-            status = 200
-            payload = ''
-            headers.append(('Content-Type', 'message/http'))
-
-            # Gather all request data and return it to sender..
-            for rqheader in self.headers:
-                payload = payload + str(rqheader) + ': ' + self.headers.get(rqheader) + "\n"
-
-        # send initial HTTP status line to client
-        self.send_response(status)
-
-        # send all headers to client
-        for header in headers:
-            self.send_header(header[0], header[1])
-
-        self.end_headers()
-
-        # send payload (the actual content) to client
-        if not isinstance(payload, bytes):
-            payload = payload.encode()
-        self.wfile.write(payload)
-
-        # loggers
-        self.log(self.request_version,
-                 self.command,
-                 self.client_address,
-                 (self.path, self.headers._headers, trace_data),
-                 status)
-
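A quick way to observe the echo behaviour above from the client side; the host and probe header are invented, and TRACE must not be disabled in the template:

```python
import http.client

conn = http.client.HTTPConnection("192.0.2.10", 80)  # hypothetical honeypot
conn.request("TRACE", "/", headers={"X-Probe": "1"})
resp = conn.getresponse()
print(resp.status, resp.getheader("Content-Type"))  # expect: 200 message/http
print(resp.read().decode())  # request headers echoed back, one per line
```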
-    def do_HEAD(self):
-        """Handle HEAD requests."""
-
-        # fetch configuration dependent variables from server instance
-        headers = list()
-        headers.extend(self.server.global_headers)
-        configuration = self.server.configuration
-        docpath = self.server.docpath
-
-        # retrieve HEAD body data
-        # ( per the HTTP protocol there should be no body in HEAD requests,
-        #   but an attacker could use a body to inject data if it is not flushed correctly;
-        #   reading the data, as we do here, flushes the buffer - just to be safe )
-
-        head_data_length = self.headers.get('content-length')
-        head_data = None
-
-        if head_data_length:
-            head_data = self.rfile.read(int(head_data_length))
-
-        # check configuration: are we allowed to use this method?
-        if self.server.disable_method_head is True:
-
-            # Method disabled by configuration. Fall back to 501.
-            status = 501
-            (status, headers, trailers, payload, chunks) = self.load_status(status,
-                                                                            self.path,
-                                                                            self.headers,
-                                                                            headers,
-                                                                            configuration,
-                                                                            docpath)
-
-        else:
-
-            # try to find a configuration item for this HEAD request
-            entity_xml = configuration.xpath(
-                '//http/htdocs/node[@name="'
-                + self.path.partition('?')[0] + '"]'
-            )
-
-            if entity_xml:
-                # A config item exists for this entity. Handle it..
-                (status, headers, trailers, payload, chunks) = self.load_entity(self.path,
-                                                                                headers,
-                                                                                configuration,
-                                                                                docpath)
-
-            else:
-                # No config item could be found. Fall back to a standard 404..
-                status = 404
-                (status, headers, trailers, payload, chunks) = self.load_status(status,
-                                                                                self.path,
-                                                                                self.headers,
-                                                                                headers,
-                                                                                configuration,
-                                                                                docpath)
-
-        # send initial HTTP status line to client
-        self.send_response(status)
-
-        # send all headers to client
-        for header in headers:
-            self.send_header(header[0], header[1])
-
-        self.end_headers()
-
-        # loggers
-        self.log(self.request_version,
-                 self.command,
-                 self.client_address,
-                 (self.path, self.headers._headers, head_data),
-                 status)
-
-    def do_OPTIONS(self):
-        """Handle OPTIONS requests."""
-
-        # fetch configuration dependent variables from server instance
-        headers = []
-        headers.extend(self.server.global_headers)
-        configuration = self.server.configuration
-        docpath = self.server.docpath
-
-        # retrieve OPTIONS body data
-        # ( per the HTTP protocol there should be no body in OPTIONS requests,
-        #   but an attacker could use a body to inject data if it is not flushed correctly;
-        #   reading the data, as we do here, flushes the buffer - just to be safe )
-
-        options_data_length = self.headers.get('content-length')
-        options_data = None
-
-        if options_data_length:
-            options_data = self.rfile.read(int(options_data_length))
-
-        # check configuration: are we allowed to use this method?
-        if self.server.disable_method_options is True:
-
-            # Method disabled by configuration. Fall back to 501.
-            status = 501
-            (status, headers, trailers, payload, chunks) = self.load_status(status,
-                                                                            self.path,
-                                                                            self.headers,
-                                                                            headers,
-                                                                            configuration,
-                                                                            docpath)
-
-        else:
-
-            status = 200
-            payload = ''
-
-            # Build the Allow header: GET, POST and OPTIONS are always advertised; HEAD and TRACE depend on configuration
-            allowed_methods = 'GET'
-
-            if self.server.disable_method_head is False:
-                # add head to list of allowed methods
-                allowed_methods += ',HEAD'
-
-            allowed_methods += ',POST,OPTIONS'
-
-            if self.server.disable_method_trace is False:
-                allowed_methods += ',TRACE'
-
-            headers.append(('Allow', allowed_methods))
-
-            # Calculate and append a content length header
-            headers.append(('Content-Length', len(payload)))
-
-            # Append Connection header
-            headers.append(('Connection', 'close'))
-
-            # Append Content-Type header
-            headers.append(('Content-Type', 'text/html'))
-
-        # send initial HTTP status line to client
-        self.send_response(status)
-
-        # send all headers to client
-        for header in headers:
-            self.send_header(header[0], header[1])
-
-        self.end_headers()
-
-        # loggers
-        self.log(self.request_version,
-                 self.command,
-                 self.client_address,
-                 (self.path, self.headers._headers, options_data),
-                 status)
-
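The Allow value assembled above depends only on the two method toggles; a standalone sketch of that assembly:

```python
def build_allow_header(disable_head: bool, disable_trace: bool) -> str:
    """Reproduce the Allow header assembly from do_OPTIONS()."""
    methods = ['GET']
    if not disable_head:
        methods.append('HEAD')
    methods += ['POST', 'OPTIONS']
    if not disable_trace:
        methods.append('TRACE')
    return ','.join(methods)

assert build_allow_header(False, False) == 'GET,HEAD,POST,OPTIONS,TRACE'
assert build_allow_header(True, True) == 'GET,POST,OPTIONS'
```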
-    def do_GET(self):
-        """Handle GET requests"""
-
-        # fetch configuration dependent variables from server instance
-        headers = []
-        headers.extend(self.server.global_headers)
-        configuration = self.server.configuration
-        docpath = self.server.docpath
-
-        # retrieve GET body data
-        # ( per the HTTP protocol there should be no body in GET requests,
-        #   but an attacker could use a body to inject data if it is not flushed correctly;
-        #   reading the data, as we do here, flushes the buffer - just to be safe )
-
-        get_data_length = self.headers.get('content-length')
-        get_data = None
-
-        if get_data_length:
-            get_data = self.rfile.read(int(get_data_length))
-
-        # try to find a configuration item for this GET request
-        logger.debug('Trying to handle GET to resource <%s>, initiated by %s', self.path, self.client_address)
-        entity_xml = configuration.xpath(
-            '//http/htdocs/node[@name="' + self.path.partition('?')[0] + '"]'
-        )
-
-        if entity_xml:
-            # A config item exists for this entity. Handle it..
-            (status, headers, trailers, payload, chunks) = self.load_entity(self.path,
-                                                                            headers,
-                                                                            configuration,
-                                                                            docpath)
-
-        else:
-            # No config item could be found. Fall back to a standard 404..
-            status = 404
-            (status, headers, trailers, payload, chunks) = self.load_status(status,
-                                                                            self.path,
-                                                                            self.headers,
-                                                                            headers,
-                                                                            configuration,
-                                                                            docpath,
-                                                                            'GET')
-
-        # send initial HTTP status line to client
-        self.send_response(status)
-
-        # send all headers to client
-        for header in headers:
-            self.send_header(header[0], header[1])
-
-        self.end_headers()
-
-        # decide upon sending content as a whole or chunked
-        if chunks == '0':
-            # send payload as a whole to the client
-            self.wfile.write(str_to_bytes(payload))
-        else:
-            # send payload in chunks to the client
-            self.send_chunked(chunks, payload, trailers)
-
-        # loggers
-        self.log(self.request_version,
-                 self.command,
-                 self.client_address,
-                 (self.path, self.headers._headers, get_data),
-                 status)
-
-    def do_POST(self):
-        """Handle POST requests"""
-
-        # fetch configuration dependent variables from server instance
-        headers = list()
-        headers.extend(self.server.global_headers)
-        configuration = self.server.configuration
-        docpath = self.server.docpath
-
-        # retrieve POST data ( important to flush request buffers )
-        post_data_length = self.headers.get('content-length')
-        post_data = None
-
-        if post_data_length:
-            post_data = self.rfile.read(int(post_data_length))
-
-        # try to find a configuration item for this POST request
-        entity_xml = configuration.xpath(
-            '//http/htdocs/node[@name="' + self.path.partition('?')[0] + '"]'
-        )
-
-        if entity_xml:
-            # A config item exists for this entity. Handle it..
-            (status, headers, trailers, payload, chunks) = self.load_entity(self.path,
-                                                                            headers,
-                                                                            configuration,
-                                                                            docpath)
-
-        else:
-            # No config item could be found. Fall back to a standard 404..
-            status = 404
-            (status, headers, trailers, payload, chunks) = self.load_status(status,
-                                                                            self.path,
-                                                                            self.headers,
-                                                                            headers,
-                                                                            configuration,
-                                                                            docpath,
-                                                                            'POST',
-                                                                            post_data)
-
-        # send initial HTTP status line to client
-        self.send_response(status)
-
-        # send all headers to client
-        for header in headers:
-            self.send_header(header[0], header[1])
-
-        self.end_headers()
-
-        # decide upon sending content as a whole or chunked
-        if chunks == '0':
-            # send payload as a whole to the client
-            if not isinstance(payload, bytes):
-                payload = payload.encode()
-            self.wfile.write(payload)
-        else:
-            # send payload in chunks to the client
-            self.send_chunked(chunks, payload, trailers)
-
-        # loggers
-        self.log(self.request_version,
-                 self.command,
-                 self.client_address,
-                 (self.path, self.headers._headers, post_data),
-                 status)
-
-
-class TemplateParser(HTMLParser):
-    def __init__(self, data):
-        self.databus = conpot_core.get_databus()
-        if isinstance(data, bytes):
-            data = data.decode()
-        self.data = data
-        HTMLParser.__init__(self)
-        self.payload = self.data
-        self.feed(self.data)
-
-    def handle_startendtag(self, tag, attrs):
-        """ Handles template tags provided in XHTML notation.
-
-            Expected format:    <condata source="(engine)" key="(descriptor)" />
-            Example:            <condata source="databus" key="SystemDescription" />
-
-            At the moment the parser is space- and case-sensitive(!);
-            this could be improved by using regular expressions to replace
-            the template tags with actual values.
-        """
-
-        source = ''
-        key = ''
-
-        # only parse tags that are conpot template tags ( <condata /> )
-        if tag == 'condata':
-
-            # initialize original tag (needed for value replacement)
-            origin = '<' + tag
-
-            for attribute in attrs:
-
-                # extend original tag
-                origin = origin + ' ' + attribute[0] + '="' + attribute[1] + '"'
-
-                # fill variables with all meta information needed to
-                # gather actual data from the other engines (databus, modbus, ..)
-                if attribute[0] == 'source':
-                    source = attribute[1]
-                elif attribute[0] == 'key':
-                    key = attribute[1]
-
-            # finalize original tag
-            origin += ' />'
-
-            # we really need a key in order to do our work..
-            if key:
-                # deal with databus powered tags:
-                if source == 'databus':
-                    self.result = self.databus.get_value(key)
-                    self.payload = self.payload.replace(origin, str(self.result))
-
-                # deal with eval powered tags:
-                elif source == 'eval':
-                    result = ''
-                    # evaluate key
-                    try:
-                        result = eval(key)
-                    except Exception as e:
-                        logger.exception(e)
-                    self.payload = self.payload.replace(origin, str(result))
-
-
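To make the tag substitution concrete, here is a self-contained sketch of the databus branch, with a plain dict standing in for conpot_core.get_databus() (the page content and value are invented):

```python
from html.parser import HTMLParser

class MiniTemplateParser(HTMLParser):
    """Simplified model of TemplateParser: a dict replaces the databus."""
    def __init__(self, data, values):
        super().__init__()
        self.payload, self.values = data, values
        self.feed(data)

    def handle_startendtag(self, tag, attrs):
        if tag != 'condata':
            return
        attrs = dict(attrs)
        # rebuild the original tag exactly as it appeared (space-sensitive!)
        origin = '<condata source="%s" key="%s" />' % (attrs['source'], attrs['key'])
        if attrs['source'] == 'databus' and attrs['key'] in self.values:
            self.payload = self.payload.replace(origin, str(self.values[attrs['key']]))

page = '<h1><condata source="databus" key="SystemDescription" /></h1>'
print(MiniTemplateParser(page, {'SystemDescription': 'Technodrome'}).payload)
# -> <h1>Technodrome</h1>
```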
-class ThreadedHTTPServer(ThreadingMixIn, http.server.HTTPServer):
-    """Handle requests in a separate thread."""
-
-
-class SubHTTPServer(ThreadedHTTPServer):
-    """This class is necessary to allow passing the template and docpath
-       settings through to the RequestHandlerClass."""
-    daemon_threads = True
-
-    def __init__(self, server_address, RequestHandlerClass, template, docpath):
-        http.server.HTTPServer.__init__(self, server_address, RequestHandlerClass)
-
-        self.docpath = docpath
-
-        # default configuration
-        self.update_header_date = True             # this preserves authenticity
-        self.disable_method_head = False
-        self.disable_method_trace = False
-        self.disable_method_options = False
-        self.tarpit = '0'
-
-        # load the configuration from the template and parse it once
-        # up front to reduce per-request handling
-        self.configuration = etree.parse(template)
-
-        xml_config = self.configuration.xpath('//http/global/config/*')
-        if xml_config:
-
-            # retrieve all global configuration entities
-            for entity in xml_config:
-
-                if entity.attrib['name'] == 'protocol_version':
-                    RequestHandlerClass.protocol_version = entity.text
-
-                elif entity.attrib['name'] == 'update_header_date':
-                    if entity.text.lower() == 'false':
-                        # DATE header auto update disabled by configuration
-                        self.update_header_date = False
-                    elif entity.text.lower() == 'true':
-                        # DATE header auto update enabled by configuration
-                        self.update_header_date = True
-
-                elif entity.attrib['name'] == 'disable_method_head':
-                    if entity.text.lower() == 'false':
-                        # HEAD method enabled by configuration
-                        self.disable_method_head = False
-                    elif entity.text.lower() == 'true':
-                        # HEAD method disabled by configuration
-                        self.disable_method_head = True
-
-                elif entity.attrib['name'] == 'disable_method_trace':
-                    if entity.text.lower() == 'false':
-                        # TRACE method enabled by configuration
-                        self.disable_method_trace = False
-                    elif entity.text.lower() == 'true':
-                        # TRACE method disabled by configuration
-                        self.disable_method_trace = True
-
-                elif entity.attrib['name'] == 'disable_method_options':
-                    if entity.text.lower() == 'false':
-                        # OPTIONS method enabled by configuration
-                        self.disable_method_options = False
-                    elif entity.text.lower() == 'true':
-                        # OPTIONS method disabled by configuration
-                        self.disable_method_options = True
-
-                elif entity.attrib['name'] == 'tarpit':
-                    if entity.text:
-                        self.tarpit = self.config_sanitize_tarpit(entity.text)
-
-        # load global headers from XML
-        self.global_headers = []
-        xml_headers = self.configuration.xpath('//http/global/headers/*')
-        if xml_headers:
-
-            # retrieve all globally defined headers
-            for header in xml_headers:
-                if header.attrib['name'].lower() == 'date' and self.update_header_date is True:
-                    # All HTTP date/time stamps MUST be represented in Greenwich Mean Time (GMT),
-                    # without exception ( RFC-2616 )
-                    self.global_headers.append((header.attrib['name'],
-                                                time.strftime('%a, %d %b %Y %H:%M:%S GMT', time.gmtime())))
-                else:
-                    self.global_headers.append((header.attrib['name'], header.text))
-
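For orientation, a hypothetical template fragment exercising the parsing above; the element names inside &lt;config&gt; and &lt;headers&gt; are illustrative, since only the name attributes and text values matter to this code:

```python
from lxml import etree

template = etree.fromstring("""
<http>
  <global>
    <config>
      <entity name="protocol_version">HTTP/1.1</entity>
      <entity name="disable_method_trace">true</entity>
      <entity name="tarpit">0.5;3.0</entity>
    </config>
    <headers>
      <entity name="Server">Apache/2.2.15</entity>
    </headers>
  </global>
</http>
""")
for entity in template.xpath('//http/global/config/*'):
    print(entity.attrib['name'], '=', entity.text)
# protocol_version = HTTP/1.1
# disable_method_trace = true
# tarpit = 0.5;3.0
```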
-    def config_sanitize_tarpit(self, value):
-
-        # Checks the tarpit value for being either a single int or float,
-        # or a pair of ints/floats separated by a semicolon, and returns
-        # either the (sanitized) value or zero.
-
-        if value is not None:
-
-            x, _, y = value.partition(';')
-
-            try:
-                _ = float(x)
-            except ValueError:
-                # first value is invalid, ignore the whole setting.
-                logger.error("Invalid tarpit value: '%s'. Assuming no latency.", value)
-                return '0;0'
-
-            try:
-                _ = float(y)
-                # both values are fine.
-                return value
-            except ValueError:
-                # second value is invalid, use the first one.
-                return x
-
-        else:
-            return '0;0'
-
-    def do_tarpit(self, delay):
-
-        # Sleeps the thread for $delay: either a single float for a static
-        # period of time, or two floats separated by a semicolon to sleep a
-        # randomized period of time determined by rand[x;y].
-
-        lbound, _, ubound = delay.partition(";")
-
-        if not lbound:
-            # no lower boundary found. Assume zero latency
-            pass
-        elif not ubound:
-            # no upper boundary found. Assume static latency
-            gevent.sleep(float(lbound))
-        else:
-            # both boundaries found. Assume random latency between lbound and ubound
-            gevent.sleep(random.uniform(float(lbound), float(ubound)))
-
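Taken together, the two methods above give a template value the following behaviour (a standalone model of the delay interpretation, using random.uniform exactly as the code does):

```python
import random

def tarpit_delay(setting: str) -> float:
    """Model of do_tarpit(): '' -> none, 'x' -> static, 'x;y' -> random."""
    lbound, _, ubound = setting.partition(';')
    if not lbound:
        return 0.0                      # no latency configured
    if not ubound:
        return float(lbound)            # static latency
    return random.uniform(float(lbound), float(ubound))

print(tarpit_delay('2.5'))       # always 2.5
print(tarpit_delay('0.5;3.0'))   # uniformly random within [0.5, 3.0]
```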
-
-class CommandResponder(object):
-
-    def __init__(self, host, port, template, docpath):
-
-        # Create HTTP server class
-        self.httpd = SubHTTPServer((host, port), HTTPServer, template, docpath)
-        self.server_port = self.httpd.server_port
-
-    def serve_forever(self):
-        self.httpd.serve_forever()
-
-    def stop(self):
-        logging.info("HTTP server will shut down gracefully as soon as all connections are closed.")
-        self.httpd.shutdown()
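For completeness, a minimal sketch of how the responder deleted above would be driven; host, port and paths are invented, and conpot's actual wiring differs:

```python
responder = CommandResponder('0.0.0.0', 8800,
                             'templates/default/http/http.xml',  # invented path
                             '/opt/honeypot/docroot')            # invented path
try:
    responder.serve_forever()   # blocks until interrupted
except KeyboardInterrupt:
    responder.stop()
```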
diff --git a/docker/conpot/dist/requirements.txt b/docker/conpot/dist/requirements.txt
new file mode 100644
index 00000000..c9ef466b
--- /dev/null
+++ b/docker/conpot/dist/requirements.txt
@@ -0,0 +1,20 @@
+pysnmp-mibs
+pysmi
+libtaxii>=1.1.0
+crc16
+scapy==2.4.3rc1
+hpfeeds3
+modbus-tk
+stix-validator
+stix
+cybox
+bacpypes==0.17.0
+pyghmi==1.4.1
+mixbox
+modbus-tk
+cpppo
+fs==2.3.0
+tftpy
+# some freezegun versions broken
+pycrypto
+sphinx_rtd_theme
diff --git a/docker/conpot/dist/templates/IEC104/template.xml b/docker/conpot/dist/templates/IEC104/template.xml
index c5a19edc..9e29d28c 100644
--- a/docker/conpot/dist/templates/IEC104/template.xml
+++ b/docker/conpot/dist/templates/IEC104/template.xml
@@ -91,19 +91,19 @@
                 <value type="value">1</value>
             </key>
             <key name="ifInOctets">
-                <value type="value">1618895</value>
+                <value type="function">conpot.emulators.misc.sysinfo.BytesRecv</value>		    
             </key>
             <key name="ifInUcastPkts">
-                <value type="value">7018</value>
+                <value type="function">conpot.emulators.misc.sysinfo.PacketsRecv</value> 
             </key>
             <key name="ifInNUcastPkts">
                 <value type="value">291</value>
             </key>
             <key name="ifOutOctets">
-                <value type="value">455107</value>
+                <value type="function">conpot.emulators.misc.sysinfo.BytesSent</value>
             </key>
             <key name="ifOutUcastPkts">
-                <value type="value">872264</value>
+                <value type="function">conpot.emulators.misc.sysinfo.PacketsSent</value> 
             </key>
             <key name="ifOutUNcastPkts">
                 <value type="value">143</value>
@@ -168,7 +168,7 @@
                 <value type="value">0</value>
             </key>
             <key name="ipAdEntAddr">
-                <value type="value">"217.172.190.137"</value>
+                <value type="function">conpot.emulators.misc.sysinfo.LocalIP</value>
             </key>
             <key name="ipAdEntIfIndex">
                 <value type="value">1</value>
@@ -290,7 +290,7 @@
                 <value type="value">45</value>
             </key>
             <key name="tcpCurrEstab">
-                <value type="value">0</value>
+                <value type="function">conpot.emulators.misc.sysinfo.TcpCurrEstab</value>
             </key>
             <key name="tcpInSegs">
                 <value type="value">30321</value>
@@ -305,7 +305,7 @@
                 <value type="value">2</value>
             </key>
             <key name="tcpConnLocalAddress">
-                <value type="value">"217.172.190.137"</value>
+                <value type="function">conpot.emulators.misc.sysinfo.LocalIP</value>
             </key>
             <key name="tcpConnLocalPort">
                 <value type="value">2404</value>
@@ -336,7 +336,7 @@
                 <value type="value">47</value>
             </key>
             <key name="udpLocalAddress">
-                <value type="value">"217.172.190.137"</value>
+                <value type="value">"163.172.189.137"</value>
             </key>
             <key name="udpLocalPort">
                 <value type="value">161</value>
diff --git a/docker/conpot/dist/templates/kamstrup_382/template.xml b/docker/conpot/dist/templates/kamstrup_382/template.xml
index 376cf9c6..9d7e835a 100644
--- a/docker/conpot/dist/templates/kamstrup_382/template.xml
+++ b/docker/conpot/dist/templates/kamstrup_382/template.xml
@@ -11,7 +11,7 @@
         <!-- Core value that can be retrieved from the databus by key -->
         <key_value_mappings>
             <key name="power_simulator">
-                <value type="function">conpot.protocols.kamstrup.usage_simulator.UsageSimulator</value>
+                <value type="function">conpot.emulators.kamstrup.usage_simulator.UsageSimulator</value>
             </key>
             <key name="register_1024">
                 <value type="value">0</value>
diff --git a/docker/conpot/docker-compose.yml b/docker/conpot/docker-compose.yml
index 297488c0..3e21b2b1 100644
--- a/docker/conpot/docker-compose.yml
+++ b/docker/conpot/docker-compose.yml
@@ -23,6 +23,8 @@ services:
      - CONPOT_TMP=/tmp/conpot
     tmpfs:
      - /tmp/conpot:uid=2000,gid=2000
+#    cpu_count: 1
+#    cpus: 0.25    
     networks:
      - conpot_local_default
     ports:
@@ -35,14 +37,13 @@ services:
      - "2121:21"
      - "44818:44818"
      - "47808:47808/udp"
-    image: "dtagdevsec/conpot:2006"
+    image: "dtagdevsec/conpot:2204"
     read_only: true
     volumes:
      - /data/conpot/log:/var/log/conpot
 
 # Conpot IEC104 service
   conpot_IEC104:
-    build: .
     container_name: conpot_IEC104
     restart: always
     environment:
@@ -53,19 +54,20 @@ services:
      - CONPOT_TMP=/tmp/conpot
     tmpfs:
      - /tmp/conpot:uid=2000,gid=2000
+#    cpu_count: 1
+#    cpus: 0.25
     networks:
      - conpot_local_IEC104
     ports:
 #     - "161:161/udp"
      - "2404:2404"
-    image: "dtagdevsec/conpot:2006"
+    image: "dtagdevsec/conpot:2204"
     read_only: true
     volumes:
      - /data/conpot/log:/var/log/conpot
 
 # Conpot guardian_ast service
   conpot_guardian_ast:
-    build: .
     container_name: conpot_guardian_ast
     restart: always
     environment:
@@ -76,18 +78,19 @@ services:
      - CONPOT_TMP=/tmp/conpot
     tmpfs:
      - /tmp/conpot:uid=2000,gid=2000
+#    cpu_count: 1
+#    cpus: 0.25
     networks:
      - conpot_local_guardian_ast
     ports:
      - "10001:10001"
-    image: "dtagdevsec/conpot:2006"
+    image: "dtagdevsec/conpot:2204"
     read_only: true
     volumes:
      - /data/conpot/log:/var/log/conpot
 
 # Conpot ipmi
   conpot_ipmi:
-    build: .
     container_name: conpot_ipmi
     restart: always
     environment:
@@ -98,18 +101,19 @@ services:
      - CONPOT_TMP=/tmp/conpot
     tmpfs:
      - /tmp/conpot:uid=2000,gid=2000
+#    cpu_count: 1
+#    cpus: 0.25
     networks:
      - conpot_local_ipmi
     ports:
      - "623:623/udp"
-    image: "dtagdevsec/conpot:2006"
+    image: "dtagdevsec/conpot:2204"
     read_only: true
     volumes:
      - /data/conpot/log:/var/log/conpot
 
 # Conpot kamstrup_382
   conpot_kamstrup_382:
-    build: .
     container_name: conpot_kamstrup_382
     restart: always
     environment:
@@ -120,12 +124,14 @@ services:
      - CONPOT_TMP=/tmp/conpot
     tmpfs:
      - /tmp/conpot:uid=2000,gid=2000
+#    cpu_count: 1
+#    cpus: 0.25
     networks:
      - conpot_local_kamstrup_382
     ports:
      - "1025:1025"
      - "50100:50100"
-    image: "dtagdevsec/conpot:2006"
+    image: "dtagdevsec/conpot:2204"
     read_only: true
     volumes:
      - /data/conpot/log:/var/log/conpot
diff --git a/docker/cowrie/Dockerfile b/docker/cowrie/Dockerfile
index 4d536348..8b12e318 100644
--- a/docker/cowrie/Dockerfile
+++ b/docker/cowrie/Dockerfile
@@ -1,10 +1,10 @@
-FROM alpine:3.14
+FROM alpine:3.15
 #
 # Include dist
-ADD dist/ /root/dist/
+COPY dist/ /root/dist/
 #
 # Get and install dependencies & packages
-RUN apk -U add \
+RUN apk --no-cache -U add \
              bash \
              build-base \
              git \
@@ -15,7 +15,21 @@ RUN apk -U add \
              mpfr-dev \
              openssl \
              openssl-dev \
+	     py3-appdirs \
+	     py3-asn1-modules \
+	     py3-attrs \
+	     py3-bcrypt \
+	     py3-cryptography \
+	     py3-dateutil \
+	     py3-greenlet \
+	     py3-mysqlclient \
+	     py3-openssl \
+	     py3-packaging \
+	     py3-parsing \
              py3-pip \
+	     py3-service_identity \
+	     py3-treq \
+	     py3-twisted \
              python3 \
              python3-dev && \
 #
@@ -29,9 +43,8 @@ RUN apk -U add \
     git clone --depth=1 https://github.com/micheloosterhof/cowrie -b v2.3.0 && \
     cd cowrie && \
 #    git checkout 6b1e82915478292f1e77ed776866771772b48f2e && \
-#    sed -i s/logfile.DailyLogFile/logfile.LogFile/g src/cowrie/python/logfile.py && \
     mkdir -p log && \
-    sed -i '/packaging.*/d' requirements.txt && \
+    cp /root/dist/requirements.txt . && \
     pip3 install --upgrade pip && \
     pip3 install -r requirements.txt && \
 #
@@ -61,6 +74,7 @@ RUN apk -U add \
     rm -rf /root/* /tmp/* && \
     rm -rf /var/cache/apk/* && \
     rm -rf /home/cowrie/cowrie/cowrie.pid && \
+    rm -rf /home/cowrie/cowrie/.git && \
     unset PYTHON_DIR
 #
 # Start cowrie
diff --git a/docker/cowrie/dist/requirements.txt b/docker/cowrie/dist/requirements.txt
new file mode 100644
index 00000000..91efe5ff
--- /dev/null
+++ b/docker/cowrie/dist/requirements.txt
@@ -0,0 +1,2 @@
+configparser==5.2.0
+tftpy==0.8.2
diff --git a/docker/cowrie/docker-compose.yml b/docker/cowrie/docker-compose.yml
index 181a9bd7..c0261fd3 100644
--- a/docker/cowrie/docker-compose.yml
+++ b/docker/cowrie/docker-compose.yml
@@ -13,12 +13,14 @@ services:
     tmpfs:
      - /tmp/cowrie:uid=2000,gid=2000
      - /tmp/cowrie/data:uid=2000,gid=2000
+#    cpu_count: 1
+#    cpus: 0.25
     networks:
      - cowrie_local
     ports:
      - "22:22"
      - "23:23"
-    image: "dtagdevsec/cowrie:2006"
+    image: "dtagdevsec/cowrie:2204"
     read_only: true
     volumes:
      - /data/cowrie/downloads:/home/cowrie/cowrie/dl
diff --git a/docker/ddospot/Dockerfile b/docker/ddospot/Dockerfile
index f9437697..c76469b6 100644
--- a/docker/ddospot/Dockerfile
+++ b/docker/ddospot/Dockerfile
@@ -1,11 +1,20 @@
-FROM alpine:3.14
+FROM alpine:3.15
+#
+# Include dist
+COPY dist/ /root/dist/
 #
 # Install packages
-RUN apk -U add \
+RUN apk --no-cache -U add \
              build-base \
              git \
 	     libcap \
+	     py3-colorama \
+	     py3-greenlet \
 	     py3-pip \
+	     py3-schedule \
+	     py3-sqlalchemy \
+	     py3-twisted \
+	     py3-wheel \
              python3 \
              python3-dev && \
 #	     
@@ -30,6 +39,7 @@ RUN apk -U add \
     sed -i "s#rotate_size = 10#rotate_size = 9999#g" /opt/ddospot/ddospot/pots/generic/genericpot.conf && \
     sed -i "s#rotate_size = 10#rotate_size = 9999#g" /opt/ddospot/ddospot/pots/ntp/ntpot.conf && \
     sed -i "s#rotate_size = 10#rotate_size = 9999#g" /opt/ddospot/ddospot/pots/ssdp/ssdpot.conf && \
+    cp /root/dist/requirements.txt ddospot/ && \
     pip3 install -r ddospot/requirements.txt && \
     setcap cap_net_bind_service=+ep /usr/bin/python3.9 && \
 #
@@ -43,6 +53,7 @@ RUN apk -U add \
                     git \
 		    python3-dev && \
     rm -rf /root/* && \
+    rm -rf /opt/ddospot/.git && \
     rm -rf /var/cache/apk/*
 #
 # Start ddospot
diff --git a/docker/ddospot/dist/requirements.txt b/docker/ddospot/dist/requirements.txt
new file mode 100644
index 00000000..7b0191a2
--- /dev/null
+++ b/docker/ddospot/dist/requirements.txt
@@ -0,0 +1,4 @@
+git+https://github.com/hpfeeds/hpfeeds
+tabulate
+python-geoip
+python-geoip-geolite2
diff --git a/docker/ddospot/docker-compose.yml b/docker/ddospot/docker-compose.yml
index cfeaf7db..935aaa41 100644
--- a/docker/ddospot/docker-compose.yml
+++ b/docker/ddospot/docker-compose.yml
@@ -10,6 +10,8 @@ services:
     build: .
     container_name: ddospot
     restart: always
+#    cpu_count: 1
+#    cpus: 0.25
     networks:
      - ddospot_local
     ports:
@@ -18,7 +20,7 @@ services:
      - "123:123/udp"
 #     - "161:161/udp"
      - "1900:1900/udp"
-    image: "dtagdevsec/ddospot:2006"
+    image: "dtagdevsec/ddospot:2204"
     read_only: true
     volumes:
      - /data/ddospot/log:/opt/ddospot/ddospot/logs
diff --git a/docker/cyberchef/Dockerfile b/docker/deprecated/cyberchef/Dockerfile
similarity index 97%
rename from docker/cyberchef/Dockerfile
rename to docker/deprecated/cyberchef/Dockerfile
index 8b994ada..20d9038d 100644
--- a/docker/cyberchef/Dockerfile
+++ b/docker/deprecated/cyberchef/Dockerfile
@@ -12,7 +12,7 @@ RUN npm install
 RUN grunt prod
 #
 # Move from builder
-FROM alpine:3.14
+FROM alpine:3.15
 #
 RUN apk -U --no-cache add \
       curl \
diff --git a/docker/cyberchef/docker-compose.yml b/docker/deprecated/cyberchef/docker-compose.yml
similarity index 86%
rename from docker/cyberchef/docker-compose.yml
rename to docker/deprecated/cyberchef/docker-compose.yml
index 6bb8c3b9..45bd3291 100644
--- a/docker/cyberchef/docker-compose.yml
+++ b/docker/deprecated/cyberchef/docker-compose.yml
@@ -14,5 +14,5 @@ services:
      - cyberchef_local
     ports:
      - "127.0.0.1:64299:8000"
-    image: "dtagdevsec/cyberchef:2006"
+    image: "dtagdevsec/cyberchef:2204"
     read_only: true
diff --git a/docker/elk/head/Dockerfile b/docker/deprecated/head/Dockerfile
similarity index 98%
rename from docker/elk/head/Dockerfile
rename to docker/deprecated/head/Dockerfile
index 8844e536..7ed772e9 100644
--- a/docker/elk/head/Dockerfile
+++ b/docker/deprecated/head/Dockerfile
@@ -1,4 +1,4 @@
-FROM alpine:3.14
+FROM alpine:3.15
 #
 # Setup env and apt
 RUN apk -U add \
diff --git a/docker/elk/head/docker-compose.yml b/docker/deprecated/head/docker-compose.yml
similarity index 88%
rename from docker/elk/head/docker-compose.yml
rename to docker/deprecated/head/docker-compose.yml
index 5cfaafdb..57c7591f 100644
--- a/docker/elk/head/docker-compose.yml
+++ b/docker/deprecated/head/docker-compose.yml
@@ -12,5 +12,5 @@ services:
     #        condition: service_healthy
     ports:
      - "127.0.0.1:64302:9100"
-    image: "dtagdevsec/head:2006"
+    image: "dtagdevsec/head:2204"
     read_only: true
diff --git a/docker/honeypy/Dockerfile b/docker/deprecated/honeypy/Dockerfile
similarity index 100%
rename from docker/honeypy/Dockerfile
rename to docker/deprecated/honeypy/Dockerfile
diff --git a/docker/honeypy/dist/honeypy.cfg b/docker/deprecated/honeypy/dist/honeypy.cfg
similarity index 100%
rename from docker/honeypy/dist/honeypy.cfg
rename to docker/deprecated/honeypy/dist/honeypy.cfg
diff --git a/docker/honeypy/dist/services.cfg b/docker/deprecated/honeypy/dist/services.cfg
similarity index 100%
rename from docker/honeypy/dist/services.cfg
rename to docker/deprecated/honeypy/dist/services.cfg
diff --git a/docker/honeypy/docker-compose.yml b/docker/deprecated/honeypy/docker-compose.yml
similarity index 91%
rename from docker/honeypy/docker-compose.yml
rename to docker/deprecated/honeypy/docker-compose.yml
index dd12fa2d..4dc581fa 100644
--- a/docker/honeypy/docker-compose.yml
+++ b/docker/deprecated/honeypy/docker-compose.yml
@@ -20,7 +20,7 @@ services:
      - "2324:2324"
      - "4096:4096"
      - "9200:9200"
-    image: "dtagdevsec/honeypy:2006"
+    image: "dtagdevsec/honeypy:2204"
     read_only: true
     volumes:
      - /data/honeypy/log:/opt/honeypy/log
diff --git a/docker/honeysap/Dockerfile b/docker/deprecated/honeysap/Dockerfile
similarity index 100%
rename from docker/honeysap/Dockerfile
rename to docker/deprecated/honeysap/Dockerfile
diff --git a/docker/honeysap/dist/external_route_table.yml b/docker/deprecated/honeysap/dist/external_route_table.yml
similarity index 100%
rename from docker/honeysap/dist/external_route_table.yml
rename to docker/deprecated/honeysap/dist/external_route_table.yml
diff --git a/docker/honeysap/dist/honeysap.yml b/docker/deprecated/honeysap/dist/honeysap.yml
similarity index 100%
rename from docker/honeysap/dist/honeysap.yml
rename to docker/deprecated/honeysap/dist/honeysap.yml
diff --git a/docker/honeysap/docker-compose.yml b/docker/deprecated/honeysap/docker-compose.yml
similarity index 87%
rename from docker/honeysap/docker-compose.yml
rename to docker/deprecated/honeysap/docker-compose.yml
index 830a8c0b..26a46456 100644
--- a/docker/honeysap/docker-compose.yml
+++ b/docker/deprecated/honeysap/docker-compose.yml
@@ -14,6 +14,6 @@ services:
      - honeysap_local
     ports:
      - "3299:3299"
-    image: "dtagdevsec/honeysap:2006"
+    image: "dtagdevsec/honeysap:2204"
     volumes:
      - /data/honeysap/log:/opt/honeysap/log
diff --git a/docker/rdpy/Dockerfile b/docker/deprecated/rdpy/Dockerfile
similarity index 100%
rename from docker/rdpy/Dockerfile
rename to docker/deprecated/rdpy/Dockerfile
diff --git a/docker/rdpy/dist/1 b/docker/deprecated/rdpy/dist/1
similarity index 100%
rename from docker/rdpy/dist/1
rename to docker/deprecated/rdpy/dist/1
diff --git a/docker/rdpy/dist/2 b/docker/deprecated/rdpy/dist/2
similarity index 100%
rename from docker/rdpy/dist/2
rename to docker/deprecated/rdpy/dist/2
diff --git a/docker/rdpy/dist/3 b/docker/deprecated/rdpy/dist/3
similarity index 100%
rename from docker/rdpy/dist/3
rename to docker/deprecated/rdpy/dist/3
diff --git a/docker/rdpy/docker-compose.yml b/docker/deprecated/rdpy/docker-compose.yml
similarity index 93%
rename from docker/rdpy/docker-compose.yml
rename to docker/deprecated/rdpy/docker-compose.yml
index c991c270..d14c2592 100644
--- a/docker/rdpy/docker-compose.yml
+++ b/docker/deprecated/rdpy/docker-compose.yml
@@ -22,7 +22,7 @@ services:
      - rdpy_local
     ports:
      - "3389:3389"
-    image: "dtagdevsec/rdpy:2006"
+    image: "dtagdevsec/rdpy:2204"
     read_only: true
     volumes:
      - /data/rdpy/log:/var/log/rdpy
diff --git a/docker/dicompot/Dockerfile b/docker/dicompot/Dockerfile
index 68c56a88..886fc587 100644
--- a/docker/dicompot/Dockerfile
+++ b/docker/dicompot/Dockerfile
@@ -1,11 +1,11 @@
-FROM alpine:3.14
+FROM alpine:3.15
 #
 # Setup apk
 RUN apk -U add --no-cache \
                    build-base \
                    git \
                    g++ && \
-    apk -U add go --repository http://dl-3.alpinelinux.org/alpine/edge/community && \
+    apk -U add --no-cache go --repository http://dl-3.alpinelinux.org/alpine/edge/community && \
 #
 # Setup go, build dicompot 
     mkdir -p /opt/go && \
diff --git a/docker/dicompot/docker-compose.yml b/docker/dicompot/docker-compose.yml
index e06a4fad..c40f83fe 100644
--- a/docker/dicompot/docker-compose.yml
+++ b/docker/dicompot/docker-compose.yml
@@ -13,11 +13,13 @@ services:
     build: .
     container_name: dicompot
     restart: always
+#    cpu_count: 1
+#    cpus: 0.25
     networks:
      - dicompot_local
     ports:
      - "11112:11112"
-    image: "dtagdevsec/dicompot:2006"
+    image: "dtagdevsec/dicompot:2204"
     read_only: true
     volumes:
      - /data/dicompot/log:/var/log/dicompot
diff --git a/docker/dionaea/Dockerfile b/docker/dionaea/Dockerfile
index 281e085c..8bc5b0df 100644
--- a/docker/dionaea/Dockerfile
+++ b/docker/dionaea/Dockerfile
@@ -2,14 +2,20 @@ FROM ubuntu:20.04
 ENV DEBIAN_FRONTEND noninteractive
 #
 # Include dist
-ADD dist/ /root/dist/
+COPY dist/ /root/dist/
 #
-# Install dependencies and packages
-RUN apt-get update -y && \
+# Determine arch, get and install packages
+RUN ARCH=$(arch) && \
+      if [ "$ARCH" = "x86_64" ]; then ARCH="amd64"; fi && \
+      if [ "$ARCH" = "aarch64" ]; then ARCH="arm64"; fi && \
+    echo "$ARCH" && \
+    cd /root/dist/ && \
+    apt-get update -y && \
     apt-get install wget -y && \
-    wget http://archive.ubuntu.com/ubuntu/pool/universe/libe/libemu/libemu2_0.2.0+git20120122-1.2build1_amd64.deb http://archive.ubuntu.com/ubuntu/pool/universe/libe/libemu/libemu-dev_0.2.0+git20120122-1.2build1_amd64.deb && \
-    apt install ./libemu2_0.2.0+git20120122-1.2build1_amd64.deb ./libemu-dev_0.2.0+git20120122-1.2build1_amd64.deb -y && \
-    apt-get dist-upgrade -y && \
+    wget http://ftp.us.debian.org/debian/pool/main/libe/libemu/libemu2_0.2.0+git20120122-1.2+b1_$ARCH.deb \
+         http://ftp.us.debian.org/debian/pool/main/libe/libemu/libemu-dev_0.2.0+git20120122-1.2+b1_$ARCH.deb && \
+    apt install ./libemu2_0.2.0+git20120122-1.2+b1_$ARCH.deb \
+                ./libemu-dev_0.2.0+git20120122-1.2+b1_$ARCH.deb -y && \
     apt-get install -y --no-install-recommends \
 	build-essential \
 	ca-certificates \
@@ -19,7 +25,6 @@ RUN apt-get update -y && \
 	git \
         libcap2-bin \
 	libcurl4-openssl-dev \
-#	libemu-dev \
 	libev-dev \
 	libglib2.0-dev \
 	libloudmouth1-dev \
@@ -97,14 +102,16 @@ RUN apt-get update -y && \
       libnetfilter-queue1 \
       libnl-3-200 \
       libpcap0.8 \
-#      libpython3.6 \
       libpython3.8 \
       libudns0 && \
 #
     apt-get autoremove --purge -y && \
     apt-get clean && \
-    rm -rf /root/* /var/lib/apt/lists/* /tmp/* /var/tmp/*
+    rm -rf /root/* /var/lib/apt/lists/* /tmp/* /var/tmp/* /root/.cache /opt/dionaea/.git
 #
 # Start dionaea
+STOPSIGNAL SIGINT
+# Dionaea sometimes hangs at 100% CPU usage; if that is detected, the process is killed and the container restarts per the docker-compose settings
+HEALTHCHECK CMD if [ $(ps -C mpv -p 1 -o %cpu | tail -n 1 | cut -f 1 -d ".") -gt 75 ]; then kill -2 1; else exit 0; fi
 USER dionaea:dionaea
 CMD ["/opt/dionaea/bin/dionaea", "-u", "dionaea", "-g", "dionaea", "-c", "/opt/dionaea/etc/dionaea/dionaea.cfg"]
diff --git a/docker/dionaea/docker-compose.yml b/docker/dionaea/docker-compose.yml
index 07bd6336..96389316 100644
--- a/docker/dionaea/docker-compose.yml
+++ b/docker/dionaea/docker-compose.yml
@@ -12,6 +12,8 @@ services:
     stdin_open: true
     tty: true
     restart: always
+#    cpu_count: 1
+#    cpus: 0.25
     networks:
      - dionaea_local
     ports:
@@ -27,11 +29,11 @@ services:
      - "1723:1723"
      - "1883:1883"
      - "3306:3306"
-     - "5060:5060"
-     - "5060:5060/udp"
-     - "5061:5061"
+#     - "5060:5060"
+#     - "5060:5060/udp"
+#     - "5061:5061"
      - "27017:27017"
-    image: "dtagdevsec/dionaea:2006"
+    image: "dtagdevsec/dionaea:2204"
     read_only: true
     volumes:
      - /data/dionaea/roots/ftp:/opt/dionaea/var/dionaea/roots/ftp
diff --git a/docker/docker-compose.yml b/docker/docker-compose.yml
index 3bb1f328..48612492 100644
--- a/docker/docker-compose.yml
+++ b/docker/docker-compose.yml
@@ -10,98 +10,128 @@ services:
 # Adbhoney service
   adbhoney:
     build: adbhoney/.
-    image: "dtagdevsec/adbhoney:2006"
+    image: "dtagdevsec/adbhoney:2204"
 
 # Ciscoasa service
   ciscoasa:
     build: ciscoasa/.
-    image: "dtagdevsec/ciscoasa:2006"
+    image: "dtagdevsec/ciscoasa:2204"
 
 # CitrixHoneypot service
   citrixhoneypot:
     build: citrixhoneypot/.
-    image: "dtagdevsec/citrixhoneypot:2006"
+    image: "dtagdevsec/citrixhoneypot:2204"
 
 # Conpot IEC104 service
   conpot_IEC104:
     build: conpot/.
-    image: "dtagdevsec/conpot:2006"
+    image: "dtagdevsec/conpot:2204"
 
 # Cowrie service
   cowrie:
     build: cowrie/.
-    image: "dtagdevsec/cowrie:2006"
+    image: "dtagdevsec/cowrie:2204"
+
+# Ddospot service
+  ddospot:
+    build: ddospot/.
+    image: "dtagdevsec/ddospot:2204"
 
 # Dicompot service
   dicompot:
     build: dicompot/.
-    image: "dtagdevsec/dicompot:2006"
+    image: "dtagdevsec/dicompot:2204"
 
 # Dionaea service
   dionaea:
     build: dionaea/.
-    image: "dtagdevsec/dionaea:2006"
+    image: "dtagdevsec/dionaea:2204"
 
 # ElasticPot service
   elasticpot:
     build: elasticpot/.
-    image: "dtagdevsec/elasticpot:2006"
+    image: "dtagdevsec/elasticpot:2204"
+
+# Endlessh service
+  endlessh:
+    build: endlessh/.
+    image: "dtagdevsec/endlessh:2204"
 
 # Glutton service
   glutton:
     build: glutton/.
-    image: "dtagdevsec/glutton:2006"
+    image: "dtagdevsec/glutton:2204"
+
+# Hellpot service
+  hellpot:
+    build: hellpot/.
+    image: "dtagdevsec/hellpot:2204"
 
 # Heralding service
   heralding:
     build: heralding/.
-    image: "dtagdevsec/heralding:2006"
+    image: "dtagdevsec/heralding:2204"
 
-# HoneyPy service
-  honeypy:
-    build: honeypy/.
-    image: "dtagdevsec/honeypy:2006"
+# Honeypots service
+  honeypots:
+    build: honeypots/.
+    image: "dtagdevsec/honeypots:2204"
 
 # Honeytrap service
   honeytrap:
     build: honeytrap/.
-    image: "dtagdevsec/honeytrap:2006"
+    image: "dtagdevsec/honeytrap:2204"
+
+# IPPHoney service
+  ipphoney:
+    build: ipphoney/.
+    image: "dtagdevsec/ipphoney:2204"
+
+# Log4Pot service
+  log4pot:
+    build: log4pot/.
+    image: "dtagdevsec/log4pot:2204"
 
 # Mailoney service
   mailoney:
     build: mailoney/.
-    image: "dtagdevsec/mailoney:2006"
+    image: "dtagdevsec/mailoney:2204"
 
 # Medpot service
   medpot:
     build: medpot/.
-    image: "dtagdevsec/medpot:2006"
+    image: "dtagdevsec/medpot:2204"
 
-# Rdpy service
-  rdpy:
-    build: rdpy/.
-    image: "dtagdevsec/rdpy:2006"
+# Redishoneypot service
+  redishoneypot:
+    build: redishoneypot/.
+    image: "dtagdevsec/redishoneypot:2204"
+
+# Sentrypeer service
+  sentrypeer:
+    build: sentrypeer/.
+    image: "dtagdevsec/sentrypeer:2204"
 
 #### Snare / Tanner
 ## Tanner Redis Service
   tanner_redis:
     build: tanner/redis/.
-    image: "dtagdevsec/redis:2006"
+    image: "dtagdevsec/redis:2204"
 
 ## PHP Sandbox service
   tanner_phpox:
     build: tanner/phpox/.
-    image: "dtagdevsec/phpox:2006"
+    image: "dtagdevsec/phpox:2204"
 
 ## Tanner API Service
   tanner_api:
     build: tanner/tanner/.
-    image: "dtagdevsec/tanner:2006"
+    image: "dtagdevsec/tanner:2204"
 
 ## Snare Service
   snare:
     build: tanner/snare/.
-    image: "dtagdevsec/snare:2006"
+    image: "dtagdevsec/snare:2204"
 
 
 ##################
@@ -111,60 +141,55 @@ services:
 # Fatt service
   fatt:
     build: fatt/.
-    image: "dtagdevsec/fatt:2006"
+    image: "dtagdevsec/fatt:2204"
 
 # P0f service
   p0f:
     build: p0f/.
-    image: "dtagdevsec/p0f:2006"
+    image: "dtagdevsec/p0f:2204"
 
 # Suricata service
   suricata:
     build: suricata/.
-    image: "dtagdevsec/suricata:2006"
+    image: "dtagdevsec/suricata:2204"
 
 
 ##################
 #### Tools
 ##################
 
-# Cyberchef service
-  cyberchef:
-    build: cyberchef/.
-    image: "dtagdevsec/cyberchef:2006"
-
 #### ELK
 ## Elasticsearch service
   elasticsearch:
     build: elk/elasticsearch/.
-    image: "dtagdevsec/elasticsearch:2006"
+    image: "dtagdevsec/elasticsearch:2204"
 
 ## Kibana service
   kibana:
     build: elk/kibana/.
-    image: "dtagdevsec/kibana:2006"
+    image: "dtagdevsec/kibana:2204"
 
 ## Logstash service
   logstash:
     build: elk/logstash/.
-    image: "dtagdevsec/logstash:2006"
-
-## Elasticsearch-head service
-  head:
-    build: elk/head/.
-    image: "dtagdevsec/head:2006"
+    image: "dtagdevsec/logstash:2204"
 
 # Ewsposter service
   ewsposter:
-    build: ews/.
-    image: "dtagdevsec/ewsposter:2006"
+    build: ewsposter/.
+    image: "dtagdevsec/ewsposter:2204"
 
 # Nginx service
   nginx:
-    build: heimdall/.
-    image: "dtagdevsec/nginx:2006"
+    build: nginx/.
+    image: "dtagdevsec/nginx:2204"
 
 # Spiderfoot service
   spiderfoot:
     build: spiderfoot/.
-    image: "dtagdevsec/spiderfoot:2006"
+    image: "dtagdevsec/spiderfoot:2204"
+
+# Map Web Service
+  map_web:
+    build: elk/map/.
+    image: "dtagdevsec/map:2204"
diff --git a/docker/elasticpot/Dockerfile b/docker/elasticpot/Dockerfile
index 6b399690..6be72abc 100644
--- a/docker/elasticpot/Dockerfile
+++ b/docker/elasticpot/Dockerfile
@@ -1,10 +1,10 @@
-FROM alpine:3.14
+FROM alpine:3.15
 #
 # Include dist
-ADD dist/ /root/dist/
+COPY dist/ /root/dist/
 #
 # Install packages
-RUN apk -U add \
+RUN apk -U --no-cache add \
              build-base \
 	     ca-certificates \
              git \
@@ -13,9 +13,19 @@ RUN apk -U add \
              openssl-dev \
 	     postgresql-dev \
              py3-cryptography \
+	     py3-elasticsearch \
+	     py3-geoip2 \
+	     py3-maxminddb \
              py3-mysqlclient \
+	     py3-packaging \
+	     py3-psycopg2 \
+	     py3-redis \
              py3-requests \
+	     py3-service_identity \
+	     py3-setuptools \
 	     py3-pip \
+	     py3-twisted \
+	     py3-wheel \
              python3 \
              python3-dev && \
     mkdir -p /opt && \
@@ -23,6 +33,7 @@ RUN apk -U add \
     git clone https://gitlab.com/bontchev/elasticpot.git/ && \
     cd elasticpot && \
     git checkout d12649730d819bd78ea622361b6c65120173ad45 && \
+    cp /root/dist/requirements.txt . && \
     pip3 install -r requirements.txt && \
 #
 # Setup user, groups and configs
@@ -38,7 +49,7 @@ RUN apk -U add \
 		    postgresql-dev \
 		    python3-dev && \
     rm -rf /root/* && \
-    rm -rf /var/cache/apk/*
+    rm -rf /var/cache/apk/* /opt/elasticpot/.git
 #
 # Start elasticpot
 STOPSIGNAL SIGINT
diff --git a/docker/elasticpot/dist/requirements.txt b/docker/elasticpot/dist/requirements.txt
new file mode 100644
index 00000000..189bc74b
--- /dev/null
+++ b/docker/elasticpot/dist/requirements.txt
@@ -0,0 +1,6 @@
+configparser>=3.5.0
+couchdb
+hpfeeds>=3.0.0
+influxdb
+pymongo
+rethinkdb>=2.4
diff --git a/docker/elasticpot/docker-compose.yml b/docker/elasticpot/docker-compose.yml
index 16ce22cf..66e968ea 100644
--- a/docker/elasticpot/docker-compose.yml
+++ b/docker/elasticpot/docker-compose.yml
@@ -10,11 +10,13 @@ services:
     build: .
     container_name: elasticpot
     restart: always
+#    cpu_count: 1
+#    cpus: 0.25
     networks:
      - elasticpot_local
     ports:
      - "9200:9200"
-    image: "dtagdevsec/elasticpot:2006"
+    image: "dtagdevsec/elasticpot:2204"
     read_only: true
     volumes:
      - /data/elasticpot/log:/opt/elasticpot/log
diff --git a/docker/elk/docker-compose.yml b/docker/elk/docker-compose.yml
index 155ff483..d749c646 100644
--- a/docker/elk/docker-compose.yml
+++ b/docker/elk/docker-compose.yml
@@ -10,7 +10,7 @@ services:
     restart: always
     environment:
      - bootstrap.memory_lock=true
-#     - ES_JAVA_OPTS=-Xms2048m -Xmx2048m
+     - ES_JAVA_OPTS=-Xms2048m -Xmx2048m
      - ES_TMPDIR=/tmp
     cap_add:
      - IPC_LOCK
@@ -21,10 +21,10 @@ services:
       nofile:
         soft: 65536
         hard: 65536
-#    mem_limit: 4g
+    mem_limit: 4g
     ports:
      - "127.0.0.1:64298:9200"
-    image: "dtagdevsec/elasticsearch:2006"
+    image: "dtagdevsec/elasticsearch:2204"
     volumes:
      - /data:/data
 
@@ -39,7 +39,7 @@ services:
         condition: service_healthy
     ports:
      - "127.0.0.1:64296:5601"
-    image: "dtagdevsec/kibana:2006"
+    image: "dtagdevsec/kibana:2204"
 
 ## Logstash service
   logstash:
@@ -53,20 +53,49 @@ services:
         condition: service_healthy
     env_file:
      - /opt/tpot/etc/compose/elk_environment
-    image: "dtagdevsec/logstash:2006"
+    image: "dtagdevsec/logstash:2204"
     volumes:
      - /data:/data
 #     - /root/tpotce/docker/elk/logstash/dist/logstash.conf:/etc/logstash/conf.d/logstash.conf
 
-## Elasticsearch-head service
-  head:
-    build: head/.
-    container_name: head
+# Map Redis Service
+  map_redis:
+    container_name: map_redis
     restart: always
-    depends_on:
-      elasticsearch:
-        condition: service_healthy
+    stop_signal: SIGKILL
+    tty: true
     ports:
-     - "127.0.0.1:64302:9100"
-    image: "dtagdevsec/head:2006"
+      - "127.0.0.1:6379:6379"
+    image: "dtagdevsec/redis:2204"
     read_only: true
+
+# Map Web Service
+  map_web:
+    build: map/.
+    container_name: map_web
+    restart: always
+    environment:
+     - MAP_COMMAND=AttackMapServer.py
+    env_file:
+     - /opt/tpot/etc/compose/elk_environment
+    stop_signal: SIGKILL
+    tty: true
+    ports:
+     - "127.0.0.1:64299:64299"
+    image: "dtagdevsec/map:2204"
+    depends_on:
+     - map_redis
+
+# Map Data Service
+  map_data:
+    container_name: map_data
+    restart: always
+    environment:
+     - MAP_COMMAND=DataServer_v2.py
+    env_file:
+     - /opt/tpot/etc/compose/elk_environment
+    stop_signal: SIGKILL
+    tty: true
+    image: "dtagdevsec/map:2204"
+    depends_on:
+     - map_redis
diff --git a/docker/elk/elasticsearch/Dockerfile b/docker/elk/elasticsearch/Dockerfile
index 329774df..03b8408e 100644
--- a/docker/elk/elasticsearch/Dockerfile
+++ b/docker/elk/elasticsearch/Dockerfile
@@ -1,44 +1,42 @@
-FROM alpine:3.14
+FROM ubuntu:20.04
 #
 # VARS
-ENV ES_VER=7.17.0 \
-    ES_JAVA_HOME=/usr/lib/jvm/java-16-openjdk
-
+ENV ES_VER=8.0.1
+#
 # Include dist
-ADD dist/ /root/dist/
+COPY dist/ /root/dist/
 #
-RUN apk -U --no-cache add \
-             aria2 \
-             bash \
-             curl \
-             nss && \
-    apk add --no-cache -X http://dl-cdn.alpinelinux.org/alpine/edge/community openjdk16-jre && \
+RUN apt-get update -y && \
+    apt-get install -y \
+            aria2 \
+            curl && \
 #
-# Get and install packages
+# Determine arch, get and install packages
+    ARCH=$(arch) && \
+      if [ "$ARCH" = "x86_64" ]; then ES_ARCH="amd64"; fi && \
+      if [ "$ARCH" = "aarch64" ]; then ES_ARCH="arm64"; fi && \
+    echo "$ARCH" && \
     cd /root/dist/ && \
-    mkdir -p /usr/share/elasticsearch/ && \
-    aria2c -s 16 -x 16 https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-$ES_VER-linux-x86_64.tar.gz && \
-    tar xvfz elasticsearch-$ES_VER-linux-x86_64.tar.gz --strip-components=1 -C /usr/share/elasticsearch/ && \
-    rm -rf /usr/share/elasticsearch/jdk && \
-    rm -rf /usr/share/elasticsearch/modules/x-pack-ml && \
-    # For some reason Alpine 3.14 does not report the -x flag correctly and thus elasticsearch does not find java
-    sed -i 's/! -x/! -e/g' /usr/share/elasticsearch/bin/elasticsearch-env && \
+    aria2c -s 16 -x 16 https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-$ES_VER-$ES_ARCH.deb && \
+    dpkg -i elasticsearch-$ES_VER-$ES_ARCH.deb && \
 #
 # Add and move files
-    cd /root/dist/ && \
+#    rm -rf /usr/share/elasticsearch/modules/x-pack-ml && \
     mkdir -p /usr/share/elasticsearch/config && \
-    cp elasticsearch.yml /usr/share/elasticsearch/config/ && \
+    cp elasticsearch.yml /etc/elasticsearch/ && \
 #
 # Setup user, groups and configs
-    addgroup -g 2000 elasticsearch && \
-    adduser -S -H -s /bin/ash -u 2000 -D -g 2000 elasticsearch && \
-    chown -R elasticsearch:elasticsearch /usr/share/elasticsearch/ && \
+    groupmod -g 2000 elasticsearch && \
+    usermod -u 2000 elasticsearch && \
+    chown -R root:elasticsearch /etc/default/elasticsearch \
+                                /etc/elasticsearch && \
+    chown -R elasticsearch:elasticsearch /var/lib/elasticsearch \
+                                         /var/log/elasticsearch && \
 #
 # Clean up
-    apk del --purge aria2 && \
-    rm -rf /root/* && \
-    rm -rf /tmp/* && \
-    rm -rf /var/cache/apk/*
+    apt-get purge aria2 -y && \
+    apt-get autoremove -y --purge && \
+    apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/* /root/.cache /root/*
 #
 # Healthcheck
 HEALTHCHECK --retries=10 CMD curl -s -XGET 'http://127.0.0.1:9200/_cat/health'
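Note: the arch detection above is the pattern this release uses across the Elastic Stack Dockerfiles to support ARM64: map the kernel's machine name onto Elastic's Debian package suffix and download the matching .deb. A standalone sketch of the same logic, runnable outside a build (an unsupported architecture simply leaves the variable empty, as in the Dockerfile):

    #!/bin/sh
    # map uname machine names onto Elastic's .deb arch suffixes
    ES_VER=8.0.1
    ARCH=$(arch)
    if [ "$ARCH" = "x86_64" ]; then ES_ARCH="amd64"; fi
    if [ "$ARCH" = "aarch64" ]; then ES_ARCH="arm64"; fi
    echo "https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-$ES_VER-$ES_ARCH.deb"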
diff --git a/docker/elk/elasticsearch/dist/elasticsearch.yml b/docker/elk/elasticsearch/dist/elasticsearch.yml
index a5ccd137..35d79569 100644
--- a/docker/elk/elasticsearch/dist/elasticsearch.yml
+++ b/docker/elk/elasticsearch/dist/elasticsearch.yml
@@ -2,6 +2,8 @@ cluster.name: tpotcluster
 node.name: "tpotcluster-node-01"
 xpack.ml.enabled: false
 xpack.security.enabled: false
+xpack.security.transport.ssl.enabled: false
+xpack.security.http.ssl.enabled: false
 path:
     logs: /data/elk/log
     data: /data/elk/data
diff --git a/docker/elk/elasticsearch/docker-compose.yml b/docker/elk/elasticsearch/docker-compose.yml
index 3f51dcb5..a4081e12 100644
--- a/docker/elk/elasticsearch/docker-compose.yml
+++ b/docker/elk/elasticsearch/docker-compose.yml
@@ -24,6 +24,6 @@ services:
     mem_limit: 2g
     ports:
      - "127.0.0.1:64298:9200"
-    image: "dtagdevsec/elasticsearch:2006"
+    image: "dtagdevsec/elasticsearch:2204"
     volumes:
      - /data:/data
diff --git a/docker/elk/kibana/Dockerfile b/docker/elk/kibana/Dockerfile
index d38d672e..48da6b3a 100644
--- a/docker/elk/kibana/Dockerfile
+++ b/docker/elk/kibana/Dockerfile
@@ -1,30 +1,28 @@
-FROM node:16.13.2-alpine3.14
+FROM ubuntu:20.04
 #
 # VARS
-ENV KB_VER=7.17.0
-# 
+ENV KB_VER=8.0.1
 # Include dist
-ADD dist/ /root/dist/
+COPY dist/ /root/dist/
 #
-RUN apk -U --no-cache add \
+RUN apt-get update -y && \
+    apt-get install -y \
             aria2 \
-            curl \
-            gcompat && \
+            curl && \
 #
-# Get and install packages
-    cd /root/dist/ && \
-    mkdir -p /usr/share/kibana/ && \
-    aria2c -s 16 -x 16 https://artifacts.elastic.co/downloads/kibana/kibana-$KB_VER-linux-x86_64.tar.gz && \
-    tar xvfz kibana-$KB_VER-linux-x86_64.tar.gz --strip-components=1 -C /usr/share/kibana/ && \
-#
-# Kibana's bundled node does not work in alpine
-    rm /usr/share/kibana/node/bin/node && \
-    ln -s /usr/local/bin/node /usr/share/kibana/node/bin/node && \
-#
-# Add and move files
+# Determine arch, get and install packages
+    ARCH=$(arch) && \
+      if [ "$ARCH" = "x86_64" ]; then KB_ARCH="amd64"; fi && \
+      if [ "$ARCH" = "aarch64" ]; then KB_ARCH="arm64"; fi && \
+    echo "$ARCH" && \
     cd /root/dist/ && \
+    aria2c -s 16 -x 16 https://artifacts.elastic.co/downloads/kibana/kibana-$KB_VER-$KB_ARCH.deb && \
+    dpkg -i kibana-$KB_VER-$KB_ARCH.deb && \
 #
 # Setup user, groups and configs
+    mkdir -p /usr/share/kibana/config \
+             /usr/share/kibana/data && \
+    cp /etc/kibana/kibana.yml /usr/share/kibana/config && \
     sed -i 's/#server.basePath: ""/server.basePath: "\/kibana"/' /usr/share/kibana/config/kibana.yml && \
     sed -i 's/#server.host: "localhost"/server.host: "0.0.0.0"/' /usr/share/kibana/config/kibana.yml && \
     sed -i 's/#elasticsearch.hosts: \["http:\/\/localhost:9200"\]/elasticsearch.hosts: \["http:\/\/elasticsearch:9200"\]/' /usr/share/kibana/config/kibana.yml && \
@@ -36,15 +34,19 @@ RUN apk -U --no-cache add \
     echo "kibana.autocompleteTerminateAfter: 1000000" >> /usr/share/kibana/config/kibana.yml && \
     rm -rf /usr/share/kibana/optimize/bundles/* && \
     /usr/share/kibana/bin/kibana --optimize --allow-root && \
-    addgroup -g 2000 kibana && \
-    adduser -S -H -s /bin/ash -u 2000 -D -g 2000 kibana && \
-    chown -R kibana:kibana /usr/share/kibana/ && \
+    groupmod -g 2000 kibana && \
+    usermod -u 2000 kibana && \
+    chown -R root:kibana /etc/kibana && \
+    chown -R kibana:kibana /usr/share/kibana/data \
+                           /run/kibana \
+                           /var/log/kibana \
+                           /var/lib/kibana && \
+    chmod 755 -R /usr/share/kibana/config && \
 #
 # Clean up
-    apk del --purge aria2 && \
-    rm -rf /root/* && \
-    rm -rf /tmp/* && \
-    rm -rf /var/cache/apk/*
+    apt-get purge aria2 -y && \
+    apt-get autoremove -y --purge && \
+    apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/* /root/.cache /root/*
 #
 # Healthcheck
 HEALTHCHECK --retries=10 CMD curl -s -XGET 'http://127.0.0.1:5601'
diff --git a/docker/elk/kibana/docker-compose.yml b/docker/elk/kibana/docker-compose.yml
index 2f464089..cad163be 100644
--- a/docker/elk/kibana/docker-compose.yml
+++ b/docker/elk/kibana/docker-compose.yml
@@ -12,4 +12,4 @@ services:
 #        condition: service_healthy
     ports:
      - "127.0.0.1:64296:5601"
-    image: "dtagdevsec/kibana:2006"
+    image: "dtagdevsec/kibana:2204"
diff --git a/docker/elk/logstash/Dockerfile b/docker/elk/logstash/Dockerfile
index 13260c1b..08a6281b 100644
--- a/docker/elk/logstash/Dockerfile
+++ b/docker/elk/logstash/Dockerfile
@@ -1,72 +1,66 @@
-FROM alpine:3.14
+FROM ubuntu:20.04
 #
 # VARS
-ENV LS_VER=7.17.0
+ENV LS_VER=8.0.1
 # Include dist
-ADD dist/ /root/dist/
+COPY dist/ /root/dist/
 #
 # Setup env and apt
-#RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
-RUN apk -U --no-cache add \
+RUN apt-get update -y && \
+    apt-get install -y \
              aria2 \
 	     autossh \
              bash \
              bzip2 \
 	     curl \
-             libc6-compat \
-             libzmq \
-             nss \
-             openssh && \
-    apk add --no-cache -X http://dl-cdn.alpinelinux.org/alpine/edge/community openjdk16-jre && \
+             openssh-client && \
 #
-# Get and install packages
+# Determine arch, get and install packages
+    ARCH=$(arch) && \
+      if [ "$ARCH" = "x86_64" ]; then LS_ARCH="amd64"; fi && \
+      if [ "$ARCH" = "aarch64" ]; then LS_ARCH="arm64"; fi && \
+    echo "$ARCH" && \
     mkdir -p /etc/listbot && \
     cd /etc/listbot && \
     aria2c -s16 -x 16 https://listbot.sicherheitstacho.eu/cve.yaml.bz2 && \
     aria2c -s16 -x 16 https://listbot.sicherheitstacho.eu/iprep.yaml.bz2 && \
     bunzip2 *.bz2 && \
     cd /root/dist/ && \
-    mkdir -p /usr/share/logstash/ && \
-    aria2c -s 16 -x 16 https://artifacts.elastic.co/downloads/logstash/logstash-$LS_VER-linux-x86_64.tar.gz && \
-    tar xvfz logstash-$LS_VER-linux-x86_64.tar.gz --strip-components=1 -C /usr/share/logstash/ && \
-    rm -rf /usr/share/logstash/jdk && \
-    # For some reason Alpine 3.14 does not report the -x flag correctly and thus elasticsearch does not find java
-    sed -i 's/! -x/! -e/g' /usr/share/logstash/bin/logstash.lib.sh && \
-    /usr/share/logstash/bin/logstash-plugin install --preserve --no-verify logstash-filter-translate && \
-    /usr/share/logstash/bin/logstash-plugin install --preserve --no-verify logstash-input-http && \
-    /usr/share/logstash/bin/logstash-plugin install --preserve --no-verify logstash-output-gelf && \
-    /usr/share/logstash/bin/logstash-plugin install --preserve --no-verify logstash-output-http && \
-    /usr/share/logstash/bin/logstash-plugin install --preserve --no-verify logstash-output-syslog && \
+    aria2c -s 16 -x 16 https://artifacts.elastic.co/downloads/logstash/logstash-$LS_VER-$LS_ARCH.deb && \
+    dpkg -i logstash-$LS_VER-$LS_ARCH.deb && \
+#    /usr/share/logstash/bin/logstash-plugin install logstash-output-gelf logstash-output-syslog && \
 #
 # Add and move files
     cd /root/dist/ && \
-    cp update.sh /usr/bin/ && \
-    chmod u+x /usr/bin/update.sh && \
-    mkdir -p /etc/logstash/conf.d && \
-    cp logstash.conf /etc/logstash/conf.d/ && \
-    cp http_input.conf /etc/logstash/conf.d/ && \
-    cp http_output.conf /etc/logstash/conf.d/ && \
+    cp entrypoint.sh /usr/bin/ && \
+    chmod u+x /usr/bin/entrypoint.sh && \
+    mkdir -p /usr/share/logstash/config && \
+    cp logstash.conf /etc/logstash/ && \
+    cp http_input.conf /etc/logstash/ && \
+    cp http_output.conf /etc/logstash/ && \
     cp pipelines.yml /usr/share/logstash/config/pipelines.yml && \
-    cp pipelines_pot.yml /usr/share/logstash/config/pipelines_pot.yml && \
-    cp tpot_es_template.json /etc/logstash/ && \
+    cp pipelines_sensor.yml /usr/share/logstash/config/pipelines_sensor.yml && \
+    cp tpot-template.json /etc/logstash/ && \
+    rm /etc/logstash/pipelines.yml && \
+    rm /etc/logstash/logstash.yml && \
 #
 # Setup user, groups and configs
-    addgroup -g 2000 logstash && \
-    adduser -S -H -s /bin/bash -u 2000 -D -g 2000 logstash && \
-    chown -R logstash:logstash /usr/share/logstash && \
-    chown -R logstash:logstash /etc/listbot && \
-    chmod 755 /usr/bin/update.sh && \
+    groupmod -g 2000 logstash && \
+    usermod -u 2000 logstash && \
+    chown -R logstash:logstash /etc/listbot \
+                               /var/log/logstash/ \
+                               /var/lib/logstash \
+                               /usr/share/logstash/data \
+                               /usr/share/logstash/config/pipelines* && \
+    chmod 755 /usr/bin/entrypoint.sh && \
 #
 # Clean up
-    rm -rf /root/* && \
-    rm -rf /tmp/* && \
-    rm -rf /var/cache/apk/*
+    apt-get autoremove -y --purge && \
+    apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/* /root/.cache /root/*
 #
 # Healthcheck
 HEALTHCHECK --retries=10 CMD curl -s -XGET 'http://127.0.0.1:9600'
 #
 # Start logstash
-#USER logstash:logstash
-#CMD update.sh && exec /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/logstash.conf --config.reload.automatic --java-execution --log.level debug
-#CMD update.sh && exec /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/http_output.conf --config.reload.automatic --java-execution
-CMD update.sh && exec /usr/share/logstash/bin/logstash --config.reload.automatic --java-execution
+USER logstash:logstash
+CMD entrypoint.sh && exec /usr/share/logstash/bin/logstash --config.reload.automatic
diff --git a/docker/elk/logstash/Dockerfile.new b/docker/elk/logstash/Dockerfile.new
deleted file mode 100644
index 72cf3fd2..00000000
--- a/docker/elk/logstash/Dockerfile.new
+++ /dev/null
@@ -1,68 +0,0 @@
-FROM alpine:3.14
-#
-# VARS
-ENV LS_VER=7.15.1
-# Include dist
-ADD dist/ /root/dist/
-#
-# Setup env and apt
-#RUN sed -i 's/dl-cdn/dl-2/g' /etc/apk/repositories && \
-RUN apk -U --no-cache add \
-             aria2 \
-             bash \
-             bzip2 \
-	     curl \
-             libc6-compat \
-             libzmq \
-             nss && \
-    apk add --no-cache -X http://dl-cdn.alpinelinux.org/alpine/edge/community openjdk16-jre && \
-#
-# Get and install packages
-    mkdir -p /etc/listbot && \
-    cd /etc/listbot && \
-    aria2c -s16 -x 16 https://listbot.sicherheitstacho.eu/cve.yaml.bz2 && \
-    aria2c -s16 -x 16 https://listbot.sicherheitstacho.eu/iprep.yaml.bz2 && \
-    bunzip2 *.bz2 && \
-    cd /root/dist/ && \
-    mkdir -p /usr/share/logstash/ && \
-    aria2c -s 16 -x 16 https://artifacts.elastic.co/downloads/logstash/logstash-$LS_VER-linux-x86_64.tar.gz && \
-    tar xvfz logstash-$LS_VER-linux-x86_64.tar.gz --strip-components=1 -C /usr/share/logstash/ && \
-    rm -rf /usr/share/logstash/jdk && \
-    # For some reason Alpine 3.14 does not report the -x flag correctly and thus elasticsearch does not find java
-    sed -i 's/! -x/! -e/g' /usr/share/logstash/bin/logstash.lib.sh && \
-    /usr/share/logstash/bin/logstash-plugin install logstash-filter-translate && \
-    /usr/share/logstash/bin/logstash-plugin install logstash-input-http && \
-    /usr/share/logstash/bin/logstash-plugin install logstash-output-gelf && \
-    /usr/share/logstash/bin/logstash-plugin install logstash-output-http && \
-    /usr/share/logstash/bin/logstash-plugin install logstash-output-syslog && \
-#
-# Add and move files
-    cd /root/dist/ && \
-    cp update.sh /usr/bin/ && \
-    chmod u+x /usr/bin/update.sh && \
-    mkdir -p /etc/logstash/conf.d && \
-    cp logstash.conf /etc/logstash/conf.d/ && \
-    cp http.conf /etc/logstash/conf.d/ && \
-    cp pipelines.yml /usr/share/logstash/config/pipelines.yml && \
-    cp tpot_es_template.json /etc/logstash/ && \
-#
-# Setup user, groups and configs
-    addgroup -g 2000 logstash && \
-    adduser -S -H -s /bin/bash -u 2000 -D -g 2000 logstash && \
-    chown -R logstash:logstash /usr/share/logstash && \
-    chown -R logstash:logstash /etc/listbot && \
-    chmod 755 /usr/bin/update.sh && \
-#
-# Clean up
-    rm -rf /root/* && \
-    rm -rf /tmp/* && \
-    rm -rf /var/cache/apk/*
-#
-# Healthcheck
-HEALTHCHECK --retries=10 CMD curl -s -XGET 'http://127.0.0.1:9600'
-#
-# Start logstash
-#USER logstash:logstash
-#CMD update.sh && exec /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/logstash.conf --config.reload.automatic --java-execution --log.level debug
-#CMD update.sh && exec /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/logstash.conf --config.reload.automatic --java-execution
-CMD update.sh && exec /usr/share/logstash/bin/logstash --config.reload.automatic --java-execution
diff --git a/docker/elk/logstash/dist/entrypoint.sh b/docker/elk/logstash/dist/entrypoint.sh
new file mode 100644
index 00000000..936c9932
--- /dev/null
+++ b/docker/elk/logstash/dist/entrypoint.sh
@@ -0,0 +1,87 @@
+#!/bin/bash
+
+# Let's ensure normal operation on exit or if interrupted ...
+function fuCLEANUP {
+  exit 0
+}
+trap fuCLEANUP EXIT
+
+# Check internet availability 
+function fuCHECKINET () {
+mySITES=$1
+error=0
+for i in $mySITES;
+  do
+    curl --connect-timeout 5 -Is $i > /dev/null 2>&1
+      if [ $? -ne 0 ];
+        then
+          let error+=1
+      fi;
+  done;
+  echo $error
+}
+
+# Check for connectivity and download latest translation maps
+myCHECK=$(fuCHECKINET "listbot.sicherheitstacho.eu")
+if [ "$myCHECK" == "0" ];
+  then
+    echo "Connection to Listbot looks good, now downloading latest translation maps."
+    cd /etc/listbot 
+    aria2c -s16 -x 16 https://listbot.sicherheitstacho.eu/cve.yaml.bz2 && \
+    aria2c -s16 -x 16 https://listbot.sicherheitstacho.eu/iprep.yaml.bz2 && \
+    bunzip2 -f *.bz2
+    cd /
+  else
+    echo "Cannot reach Listbot, starting Logstash without latest translation maps."
+fi
+
+# Distributed T-Pot installation needs a different pipeline config and autossh tunnel. 
+if [ "$MY_TPOT_TYPE" == "SENSOR" ];
+  then
+    echo
+    echo "Distributed T-Pot setup, sending T-Pot logs to $MY_HIVE_IP."
+    echo
+    echo "T-Pot type: $MY_TPOT_TYPE"
+    echo "Keyfile used: $MY_SENSOR_PRIVATEKEYFILE"
+    echo "Hive username: $MY_HIVE_USERNAME"
+    echo "Hive IP: $MY_HIVE_IP"
+    echo
+    # Ensure correct file permissions for private keyfile or SSH will ask for password
+    chmod 600 $MY_SENSOR_PRIVATEKEYFILE
+    cp /usr/share/logstash/config/pipelines_sensor.yml /usr/share/logstash/config/pipelines.yml
+    autossh -f -M 0 -4 -l $MY_HIVE_USERNAME -i $MY_SENSOR_PRIVATEKEYFILE -p 64295 -N -L64305:127.0.0.1:64305 $MY_HIVE_IP -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" -o "StrictHostKeyChecking=no" -o "UserKnownHostsFile=/dev/null"
+    exit 0
+fi
+
+# Index management is handled through ILM, but we still need to put the T-Pot ILM policy on ES.
+myTPOTILM=$(curl -s -XGET "http://elasticsearch:9200/_ilm/policy/tpot" | grep "Lifecycle policy not found: tpot" -c)
+if [ "$myTPOTILM" == "1" ];
+  then
+    echo "T-Pot ILM template not found on ES, putting it on ES now."
+    curl -XPUT "http://elasticsearch:9200/_ilm/policy/tpot" -H 'Content-Type: application/json' -d'
+    {
+      "policy": {
+        "phases": {
+          "hot": {
+            "min_age": "0ms",
+            "actions": {}
+          },
+          "delete": {
+            "min_age": "30d",
+            "actions": {
+              "delete": {
+                "delete_searchable_snapshot": true
+              }
+            }
+          }
+        },
+        "_meta": {
+          "managed": true,
+          "description": "T-Pot ILM policy with a retention of 30 days"
+        }
+      }
+    }'
+  else
+    echo "T-Pot ILM already configured or ES not available."
+fi
+echo
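Note: after the entrypoint has run on a HIVE, the policy it created can be checked from the host. A hedged example, assuming the loopback port mapping from the ELK compose file (64298 -> 9200) is in place:

    # expect the tpot policy with its 30 day delete phase
    curl -s -XGET "http://127.0.0.1:64298/_ilm/policy/tpot"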
diff --git a/docker/elk/logstash/dist/http_input.conf b/docker/elk/logstash/dist/http_input.conf
index 43773654..005be8c7 100644
--- a/docker/elk/logstash/dist/http_input.conf
+++ b/docker/elk/logstash/dist/http_input.conf
@@ -3,7 +3,8 @@ input {
   http {
     id => "tpot"
     host => "0.0.0.0"
-    port => "80"
+    port => "64305"
+    ecs_compatibility => disabled
   }
 }
 
@@ -11,9 +12,10 @@ input {
 output {
   elasticsearch {
     hosts => ["elasticsearch:9200"]
-    # With templates now being legacy and ILM in place we need to set the daily index with its template manually. Otherwise a new index might be created with differents settings configured through Kibana.
+    # With templates now being legacy we need to set the daily index with its template manually. Otherwise a new index might be created with different settings configured through Kibana.
     index => "logstash-%{+YYYY.MM.dd}"
-    template => "/etc/logstash/tpot_es_template.json"
+    template => "/etc/logstash/tpot-template.json"
+    template_overwrite => "true"
   }
 
 }
diff --git a/docker/elk/logstash/dist/http_output.conf b/docker/elk/logstash/dist/http_output.conf
index 02418b04..abd92051 100644
--- a/docker/elk/logstash/dist/http_output.conf
+++ b/docker/elk/logstash/dist/http_output.conf
@@ -1,12 +1,12 @@
 # Input section
 input {
 
-# Fatt                                 
-  file {                                   
+# Fatt
+  file {
     path => ["/data/fatt/log/fatt.log"]
-    codec => json     
+    codec => json
     type => "Fatt"
-  }  
+  }
 
 # Suricata
   file {
@@ -119,20 +119,6 @@ input {
     type => "Honeypots"
   }
 
-# Honeypy
-  file {
-    path => ["/data/honeypy/log/json.log"]
-    codec => json
-    type => "Honeypy"
-  }
-
-# Honeysap
-  file {
-    path => ["/data/honeysap/log/honeysap-external.log"]
-    codec => json
-    type => "Honeysap"
-  }
-
 # Honeytrap
   file {
     path => ["/data/honeytrap/log/attackers.json"]
@@ -168,12 +154,6 @@ input {
     type => "Medpot"
   }
 
-# Rdpy
-  file {
-    path => ["/data/rdpy/log/rdpy.log"]
-    type => "Rdpy"
-  }
-
 # Redishoneypot
   file {
     path => ["/data/redishoneypot/log/redishoneypot.log"]
@@ -188,6 +168,13 @@ input {
     type => "NGINX"
   }
 
+# Sentrypeer
+  file {
+    path => ["/data/sentrypeer/log/sentrypeer.json"]
+    codec => json
+    type => "Sentrypeer"
+  }
+
 # Tanner
   file {
     path => ["/data/tanner/log/tanner_report.json"]
@@ -228,8 +215,8 @@ filter {
     }
     translate {
       refresh_interval => 86400
-      field => "[alert][signature_id]"
-      destination => "[alert][cve_id]"
+      source => "[alert][signature_id]"
+      target => "[alert][cve_id]"
       dictionary_path => "/etc/listbot/cve.yaml"
 #      fallback => "-"
     }
@@ -279,13 +266,13 @@ filter {
 
 # CitrixHoneypot
   if [type] == "CitrixHoneypot" {
-    grok { 
-      match => { 
-        "message" => [ "\A\(%{IPV4:src_ip:string}:%{INT:src_port:integer}\): %{JAVAMETHOD:http.http_method:string}%{SPACE}%{CISCO_REASON:fileinfo.state:string}: %{UNIXPATH:fileinfo.filename:string}", 
-	               "\A\(%{IPV4:src_ip:string}:%{INT:src_port:integer}\): %{JAVAMETHOD:http.http_method:string}%{SPACE}%{CISCO_REASON:fileinfo.state:string}: %{GREEDYDATA:payload:string}", 
+    grok {
+      match => {
+        "message" => [ "\A\(%{IPV4:src_ip:string}:%{INT:src_port:integer}\): %{JAVAMETHOD:http.http_method:string}%{SPACE}%{CISCO_REASON:fileinfo.state:string}: %{UNIXPATH:fileinfo.filename:string}",
+	               "\A\(%{IPV4:src_ip:string}:%{INT:src_port:integer}\): %{JAVAMETHOD:http.http_method:string}%{SPACE}%{CISCO_REASON:fileinfo.state:string}: %{GREEDYDATA:payload:string}",
 		       "\A\(%{IPV4:src_ip:string}:%{INT:src_port:integer}\): %{S3_REQUEST_LINE:msg:string} %{CISCO_REASON:fileinfo.state:string}: %{GREEDYDATA:payload:string:string}",
-		       "\A\(%{IPV4:src_ip:string}:%{INT:src_port:integer}\): %{GREEDYDATA:msg:string}" ] 
-      } 
+		       "\A\(%{IPV4:src_ip:string}:%{INT:src_port:integer}\): %{GREEDYDATA:msg:string}" ]
+      }
     }
     date {
       match => [ "asctime", "ISO8601" ]
@@ -301,18 +288,18 @@ filter {
       }
     }
   }
-  
+
 # Conpot
   if [type] == "ConPot" {
     date {
       match => [ "timestamp", "ISO8601" ]
     }
-    mutate { 
-      rename => { 
-        "dst_port" => "dest_port" 
-        "dst_ip" => "dest_ip" 
-      } 
-    } 
+    mutate {
+      rename => {
+        "dst_port" => "dest_port"
+        "dst_ip" => "dest_ip"
+      }
+    }
   }
 
 # Cowrie
@@ -439,7 +426,7 @@ filter {
 # Example: 2021-10-29T21:08:31.026Z CLOSE host=1.2.3.4 port=12345 fd=4 time=20.015 bytes=24
 # Example: 2021-10-29T21:08:11.011Z ACCEPT host=1.2.3.4 port=12346 fd=4 n=1/4096
   if [type] == "Endlessh" {
-    grok { match => { "message" => [ "\A%{TIMESTAMP_ISO8601:timestamp}%{SPACE}%{WORD:reason}%{SPACE}host=%{IPV4:src_ip}%{SPACE}port=%{INT:src_port}%{SPACE}fd=%{INT}%{SPACE}time=%{SECOND:duration}%{SPACE}bytes=%{NUMBER:bytes}", "\A%{TIMESTAMP_ISO8601:timestamp}%{SPACE}%{WORD:reason}%{SPACE}host=%{IPV4:src_ip}%{SPACE}port=%{INT:src_port}%{SPACE}fd=%{INT}%{SPACE}n=%{INT}/%{INT}" ] } }    
+    grok { match => { "message" => [ "\A%{TIMESTAMP_ISO8601:timestamp}%{SPACE}%{WORD:reason}%{SPACE}host=%{IPV4:src_ip}%{SPACE}port=%{INT:src_port}%{SPACE}fd=%{INT}%{SPACE}time=%{SECOND:duration}%{SPACE}bytes=%{NUMBER:bytes}", "\A%{TIMESTAMP_ISO8601:timestamp}%{SPACE}%{WORD:reason}%{SPACE}host=%{IPV4:src_ip}%{SPACE}port=%{INT:src_port}%{SPACE}fd=%{INT}%{SPACE}n=%{INT}/%{INT}" ] } }
     date {
       match => [ "timestamp", "ISO8601" ]
       remove_field => ["timestamp"]
@@ -494,17 +481,6 @@ filter {
     }
   }
 
-# Honeypy
-  if [type] == "Honeypy" {
-    date {
-      match => [ "timestamp", "ISO8601" ]
-      remove_field => ["timestamp"]
-      remove_field => ["date"]
-      remove_field => ["time"]
-      remove_field => ["millisecond"]
-    }
-  }
-
 # Honeypots
   if [type] == "Honeypots" {
     date {
@@ -512,31 +488,6 @@ filter {
     }
   }
 
-# Honeysap
-  if [type] == "Honeysap" {
-    date {
-      match => [ "timestamp", "yyyy-MM-dd HH:mm:ss.SSSSSS" ]
-      remove_field => ["timestamp"]
-    }
-    mutate {
-      rename => {
-        "[data][error_msg]" => "event_type"
-        "service" => "sensor"
-        "source_port" => "src_port"
-        "source_ip" => "src_ip"
-        "target_port" => "dest_port"
-        "target_ip" => "dest_ip"
-      }
-      remove_field => "event"
-      remove_field => "return_code"
-    }
-    if [data] {
-      mutate {
-	remove_field => "[data]"
-      }
-    }    
-  }
-
 # Honeytrap
   if [type] == "Honeytrap" {
     date {
@@ -609,18 +560,6 @@ filter {
     }
   }
 
-# Rdpy
-  if [type] == "Rdpy" {
-    grok { match => { "message" => [ "\A%{TIMESTAMP_ISO8601:timestamp},domain:%{CISCO_REASON:domain},username:%{CISCO_REASON:username},password:%{CISCO_REASON:password},hostname:%{GREEDYDATA:hostname}", "\A%{TIMESTAMP_ISO8601:timestamp},Connection from %{IPV4:src_ip}:%{INT:src_port:integer}" ] } }
-    date {
-      match => [ "timestamp", "ISO8601" ]
-      remove_field => ["timestamp"]
-    }
-    mutate {
-      add_field => { "dest_port" => "3389" }
-    }
-  }
-
 # Redishoneypot
   if [type] == "Redishoneypot" {
     date {
@@ -630,8 +569,8 @@ filter {
     }
     mutate {
       split => { "addr" => ":" }
-      add_field => { 
-        "src_ip" => "%{[addr][0]}" 
+      add_field => {
+        "src_ip" => "%{[addr][0]}"
         "src_port" => "%{[addr][1]}"
         "dest_port" => "6379"
         "dest_ip" => "${MY_EXTIP}"
@@ -652,6 +591,21 @@ filter {
     }
   }
 
+# Sentrypeer
+  if [type] == "Sentrypeer" {
+    date {
+      match => [ "event_timestamp", "yyyy-MM-dd HH:mm:ss.SSSSSSSSS" ]
+      remove_field => ["event_timestamp"]
+    }
+    mutate {
+      rename => {
+        "source_ip" => "src_ip"
+        "destination_ip" => "dest_ip"
+      }
+      add_field => { "dest_port" => "5060" }
+    }
+  }
+
 # Tanner
   if [type] == "Tanner" {
     date {
@@ -680,7 +634,7 @@ if "_jsonparsefailure" in [tags] { drop {} }
   }
 
 # Add geo coordinates / ASN info / IP rep.
-  if [src_ip]  {
+  if [src_ip] {
     geoip {
       cache_size => 10000
       source => "src_ip"
@@ -693,8 +647,8 @@ if "_jsonparsefailure" in [tags] { drop {} }
     }
     translate {
       refresh_interval => 86400
-      field => "src_ip"
-      destination => "ip_rep"
+      source => "src_ip"
+      target => "ip_rep"
       dictionary_path => "/etc/listbot/iprep.yaml"
     }
   }
diff --git a/docker/elk/logstash/dist/logstash.conf b/docker/elk/logstash/dist/logstash.conf
index 8224f24d..7bd1b1ea 100644
--- a/docker/elk/logstash/dist/logstash.conf
+++ b/docker/elk/logstash/dist/logstash.conf
@@ -119,20 +119,6 @@ input {
     type => "Honeypots"
   }
 
-# Honeypy
-  file {
-    path => ["/data/honeypy/log/json.log"]
-    codec => json
-    type => "Honeypy"
-  }
-
-# Honeysap
-  file {
-    path => ["/data/honeysap/log/honeysap-external.log"]
-    codec => json
-    type => "Honeysap"
-  }
-
 # Honeytrap
   file {
     path => ["/data/honeytrap/log/attackers.json"]
@@ -168,12 +154,6 @@ input {
     type => "Medpot"
   }
 
-# Rdpy
-  file {
-    path => ["/data/rdpy/log/rdpy.log"]
-    type => "Rdpy"
-  }
-
 # Redishoneypot
   file {
     path => ["/data/redishoneypot/log/redishoneypot.log"]
@@ -181,6 +161,13 @@ input {
     type => "Redishoneypot"
   }
 
+# Sentrypeer 
+  file {
+    path => ["/data/sentrypeer/log/sentrypeer.json"]
+    codec => json
+    type => "Sentrypeer"
+  }
+
 # Host NGINX
   file {
     path => ["/data/nginx/log/access.log"]
@@ -228,8 +215,8 @@ filter {
     }
     translate {
       refresh_interval => 86400
-      field => "[alert][signature_id]"
-      destination => "[alert][cve_id]"
+      source => "[alert][signature_id]"
+      target => "[alert][cve_id]"
       dictionary_path => "/etc/listbot/cve.yaml"
 #      fallback => "-"
     }
@@ -494,17 +481,6 @@ filter {
     }
   }
 
-# Honeypy
-  if [type] == "Honeypy" {
-    date {
-      match => [ "timestamp", "ISO8601" ]
-      remove_field => ["timestamp"]
-      remove_field => ["date"]
-      remove_field => ["time"]
-      remove_field => ["millisecond"]
-    }
-  }
-
 # Honeypots
   if [type] == "Honeypots" {
     date {
@@ -512,31 +488,6 @@ filter {
     }
   }
 
-# Honeysap
-  if [type] == "Honeysap" {
-    date {
-      match => [ "timestamp", "yyyy-MM-dd HH:mm:ss.SSSSSS" ]
-      remove_field => ["timestamp"]
-    }
-    mutate {
-      rename => {
-        "[data][error_msg]" => "event_type"
-        "service" => "sensor"
-        "source_port" => "src_port"
-        "source_ip" => "src_ip"
-        "target_port" => "dest_port"
-        "target_ip" => "dest_ip"
-      }
-      remove_field => "event"
-      remove_field => "return_code"
-    }
-    if [data] {
-      mutate {
-	remove_field => "[data]"
-      }
-    }    
-  }
-
 # Honeytrap
   if [type] == "Honeytrap" {
     date {
@@ -609,18 +560,6 @@ filter {
     }
   }
 
-# Rdpy
-  if [type] == "Rdpy" {
-    grok { match => { "message" => [ "\A%{TIMESTAMP_ISO8601:timestamp},domain:%{CISCO_REASON:domain},username:%{CISCO_REASON:username},password:%{CISCO_REASON:password},hostname:%{GREEDYDATA:hostname}", "\A%{TIMESTAMP_ISO8601:timestamp},Connection from %{IPV4:src_ip}:%{INT:src_port:integer}" ] } }
-    date {
-      match => [ "timestamp", "ISO8601" ]
-      remove_field => ["timestamp"]
-    }
-    mutate {
-      add_field => { "dest_port" => "3389" }
-    }
-  }
-
 # Redishoneypot
   if [type] == "Redishoneypot" {
     date {
@@ -652,6 +591,21 @@ filter {
     }
   }
 
+# Sentrypeer
+  if [type] == "Sentrypeer" {
+    date {
+      match => [ "event_timestamp", "yyyy-MM-dd HH:mm:ss.SSSSSSSSS" ]
+      remove_field => ["event_timestamp"]
+    }
+    mutate {
+      rename => {
+        "source_ip" => "src_ip"
+        "destination_ip" => "dest_ip"
+      }
+      add_field => { "dest_port" => "5060" }
+    }
+  }
+
 # Tanner
   if [type] == "Tanner" {
     date {
@@ -680,7 +634,7 @@ if "_jsonparsefailure" in [tags] { drop {} }
   }
 
 # Add geo coordinates / ASN info / IP rep.
-  if [src_ip]  {
+  if [src_ip] {
     geoip {
       cache_size => 10000
       source => "src_ip"
@@ -693,8 +647,8 @@ if "_jsonparsefailure" in [tags] { drop {} }
     }
     translate {
       refresh_interval => 86400
-      field => "src_ip"
-      destination => "ip_rep"
+      source => "src_ip"
+      target => "ip_rep"
       dictionary_path => "/etc/listbot/iprep.yaml"
     }
   }
@@ -746,10 +700,10 @@ if "_jsonparsefailure" in [tags] { drop {} }
 output {
   elasticsearch {
     hosts => ["elasticsearch:9200"]
-    # With templates now being legacy and ILM in place we need to set the daily index with its template manually. Otherwise a new index might be created with differents settings configured through Kibana.
+    # With templates now being legacy we need to set the daily index with its template manually. Otherwise a new index might be created with different settings configured through Kibana.
     index => "logstash-%{+YYYY.MM.dd}"
-    template => "/etc/logstash/tpot_es_template.json"
-    #document_type => "doc"
+    template => "/etc/logstash/tpot-template.json"
+    template_overwrite => "true"
   }
 
   #if [type] == "Suricata" {
diff --git a/docker/elk/logstash/dist/logstash.yml b/docker/elk/logstash/dist/logstash.yml
new file mode 100644
index 00000000..c8be53b5
--- /dev/null
+++ b/docker/elk/logstash/dist/logstash.yml
@@ -0,0 +1 @@
+path.config: "/usr/share/logstash/config/pipelines.yml"
diff --git a/docker/elk/logstash/dist/pipelines.yml b/docker/elk/logstash/dist/pipelines.yml
index 41883e78..1e7e638f 100644
--- a/docker/elk/logstash/dist/pipelines.yml
+++ b/docker/elk/logstash/dist/pipelines.yml
@@ -1,4 +1,6 @@
 - pipeline.id: logstash
-  path.config: "/etc/logstash/conf.d/logstash.conf"
+  path.config: "/etc/logstash/logstash.conf"
+  pipeline.ecs_compatibility: disabled
 - pipeline.id: http_input
-  path.config: "/etc/logstash/conf.d/http_input.conf"
+  path.config: "/etc/logstash/http_input.conf"
+  pipeline.ecs_compatibility: disabled
diff --git a/docker/elk/logstash/dist/pipelines_pot.yml b/docker/elk/logstash/dist/pipelines_pot.yml
deleted file mode 100644
index cf6201a1..00000000
--- a/docker/elk/logstash/dist/pipelines_pot.yml
+++ /dev/null
@@ -1,2 +0,0 @@
-- pipeline.id: http_output
-  path.config: "/etc/logstash/conf.d/http_output.conf"
diff --git a/docker/elk/logstash/dist/pipelines_sensor.yml b/docker/elk/logstash/dist/pipelines_sensor.yml
new file mode 100644
index 00000000..4e5ca5a7
--- /dev/null
+++ b/docker/elk/logstash/dist/pipelines_sensor.yml
@@ -0,0 +1,3 @@
+- pipeline.id: http_output
+  path.config: "/etc/logstash/http_output.conf"
+  pipeline.ecs_compatibility: disabled
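Note: whether this file becomes the active pipelines.yml is decided by entrypoint.sh: on a SENSOR it copies pipelines_sensor.yml over pipelines.yml and opens the autossh tunnel to the HIVE. A sketch of the sensor-side environment the script expects; the variable names are taken from entrypoint.sh, the values are purely hypothetical (the real ones are written by the T-Pot installer):

    export MY_TPOT_TYPE=SENSOR
    export MY_HIVE_IP=192.0.2.10
    export MY_HIVE_USERNAME=sensor01
    export MY_SENSOR_PRIVATEKEYFILE=/data/elk/logstash/sensor01.key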
diff --git a/docker/elk/logstash/dist/tpot-template.json b/docker/elk/logstash/dist/tpot-template.json
new file mode 100644
index 00000000..375f8d7d
--- /dev/null
+++ b/docker/elk/logstash/dist/tpot-template.json
@@ -0,0 +1,97 @@
+{
+  "template": {
+    "settings": {
+      "index": {
+        "lifecycle": {
+          "name": "tpot"
+        },
+        "mapping": {
+          "total_fields": {
+            "limit": "2000"
+          }
+        },
+        "refresh_interval": "5s",
+        "number_of_shards": "1",
+        "number_of_replicas": "0",
+        "query": {
+          "default_field": "*"
+        }
+      }
+    },
+    "mappings": {
+      "_source": {
+        "excludes": [],
+        "includes": [],
+        "enabled": true
+      },
+      "_routing": {
+        "required": false
+      },
+      "dynamic": true,
+      "numeric_detection": false,
+      "date_detection": true,
+      "dynamic_date_formats": [
+        "strict_date_optional_time",
+        "yyyy/MM/dd HH:mm:ss Z||yyyy/MM/dd Z"
+      ],
+      "dynamic_templates": [
+        {
+          "message_field": {
+            "path_match": "message",
+            "mapping": {
+              "norms": false,
+              "type": "text"
+            },
+            "match_mapping_type": "string"
+          }
+        },
+        {
+          "string_fields": {
+            "mapping": {
+              "norms": false,
+              "fields": {
+                "keyword": {
+                  "ignore_above": 256,
+                  "type": "keyword"
+                }
+              },
+              "type": "text"
+            },
+            "match_mapping_type": "string",
+            "match": "*"
+          }
+        }
+      ],
+      "properties": {
+        "geoip.ip": {
+          "type": "ip"
+        },
+        "geoip.latitude": {
+          "type": "half_float"
+        },
+        "geoip.location": {
+          "type": "geo_point"
+        },
+        "geoip.longitude": {
+          "type": "half_float"
+        },
+        "geoip_ext.ip": {
+          "type": "ip"
+        },
+        "geoip_ext.latitude": {
+          "type": "half_float"
+        },
+        "geoip_ext.location": {
+          "type": "geo_point"
+        },
+        "geoip_ext.longitude": {
+          "type": "half_float"
+        }
+      }
+    }
+  },
+  "index_patterns": [
+    "logstash-*"
+  ],
+  "composed_of": []
+}
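Note: Logstash pushes this file via the template / template_overwrite options of its elasticsearch outputs, so the daily logstash-* indices pick up the ILM policy and mappings above. To preview what a new daily index would actually receive, Elasticsearch's simulate API can be used; the index name below is only an example:

    # dry-run: show the merged settings and mappings for a logstash-* index
    curl -s -XPOST "http://127.0.0.1:64298/_index_template/_simulate_index/logstash-2022.04.01"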
diff --git a/docker/elk/logstash/dist/tpot_es_template.json b/docker/elk/logstash/dist/tpot_es_template.json
deleted file mode 100644
index 0ee8dd62..00000000
--- a/docker/elk/logstash/dist/tpot_es_template.json
+++ /dev/null
@@ -1,58 +0,0 @@
-{
-  "index_patterns" : "logstash-*",
-  "version" : 60001,
-  "settings" : {
-    "index.refresh_interval" : "5s",
-    "number_of_shards" : 1,
-    "index.number_of_replicas" : "0",
-    "index.mapping.total_fields.limit" : "2000",
-    "index.query": {
-      "default_field": "*"
-     }
-  },
-  "mappings" : {
-    "dynamic_templates" : [ {
-      "message_field" : {
-        "path_match" : "message",
-        "match_mapping_type" : "string",
-        "mapping" : {
-          "type" : "text",
-          "norms" : false
-        }
-      }
-    }, {
-      "string_fields" : {
-        "match" : "*",
-        "match_mapping_type" : "string",
-        "mapping" : {
-          "type" : "text", "norms" : false,
-          "fields" : {
-            "keyword" : { "type": "keyword", "ignore_above": 256 }
-          }
-        }
-      }
-    } ],
-    "properties" : {
-      "@timestamp": { "type": "date"},
-      "@version": { "type": "keyword"},
-      "geoip"  : {
-        "dynamic": true,
-        "properties" : {
-          "ip": { "type": "ip" },
-          "location" : { "type" : "geo_point" },
-          "latitude" : { "type" : "half_float" },
-          "longitude" : { "type" : "half_float" }
-        }
-      },
-      "geoip_ext"  : {
-        "dynamic": true,
-        "properties" : {
-          "ip": { "type": "ip" },
-          "location" : { "type" : "geo_point" },
-          "latitude" : { "type" : "half_float" },
-          "longitude" : { "type" : "half_float" }
-        }
-      }
-    }
-  }
-}
diff --git a/docker/elk/logstash/dist/update.sh b/docker/elk/logstash/dist/update.sh
deleted file mode 100644
index 0ec6f57f..00000000
--- a/docker/elk/logstash/dist/update.sh
+++ /dev/null
@@ -1,122 +0,0 @@
-#!/bin/bash
-
-# Let's ensure normal operation on exit or if interrupted ...
-function fuCLEANUP {
-  exit 0
-}
-trap fuCLEANUP EXIT
-
-# Check internet availability 
-function fuCHECKINET () {
-mySITES=$1
-error=0
-for i in $mySITES;
-  do
-    curl --connect-timeout 5 -Is $i 2>&1 > /dev/null
-      if [ $? -ne 0 ];
-        then
-          let error+=1
-      fi;
-  done;
-  echo $error
-}
-
-# Check for connectivity and download latest translation maps
-myCHECK=$(fuCHECKINET "listbot.sicherheitstacho.eu")
-if [ "$myCHECK" == "0" ];
-  then
-    echo "Connection to Listbot looks good, now downloading latest translation maps."
-    cd /etc/listbot 
-    aria2c -s16 -x 16 https://listbot.sicherheitstacho.eu/cve.yaml.bz2 && \
-    aria2c -s16 -x 16 https://listbot.sicherheitstacho.eu/iprep.yaml.bz2 && \
-    bunzip2 -f *.bz2
-    cd /
-  else
-    echo "Cannot reach Listbot, starting Logstash without latest translation maps."
-fi
-
-# Distributed T-Pot installation needs a different pipeline config and autossh tunnel. 
-if [ "$MY_TPOT_TYPE" == "POT" ];
-  then
-    echo
-    echo "Distributed T-Pot setup, sending T-Pot logs to $MY_HIVE_IP."
-    echo
-    echo "T-Pot type: $MY_TPOT_TYPE"
-    echo "Keyfile used: $MY_POT_PRIVATEKEYFILE"
-    echo "Hive username: $MY_HIVE_USERNAME"
-    echo "Hive IP: $MY_HIVE_IP"
-    echo
-    cp /usr/share/logstash/config/pipelines_pot.yml /usr/share/logstash/config/pipelines.yml
-    autossh -f -M 0 -4 -l $MY_HIVE_USERNAME -i $MY_POT_PRIVATEKEYFILE -p 64295 -N -L64305:127.0.0.1:64305 $MY_HIVE_IP -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" -o "StrictHostKeyChecking=no" -o "UserKnownHostsFile=/dev/null"
-    exit 0
-fi
-
-# We do want to enforce our es_template thus we always need to delete the default template, putting our default afterwards
-# This is now done via common_configs.rb => overwrite default logstash template
-echo "Removing logstash template."
-curl -s -XDELETE http://elasticsearch:9200/_template/logstash
-echo
-echo "Checking if empty."
-curl -s -XGET http://elasticsearch:9200/_template/logstash
-echo
-echo "Putting default template."
-curl -XPUT "http://elasticsearch:9200/_template/logstash" -H 'Content-Type: application/json' -d'
-{
-  "index_patterns" : "logstash-*",
-  "version" : 60001,
-  "settings" : {
-    "index.refresh_interval" : "5s",
-    "number_of_shards" : 1,
-    "index.number_of_replicas" : "0",
-    "index.mapping.total_fields.limit" : "2000",
-    "index.query": {
-      "default_field": "*"
-     }
-  },
-  "mappings" : {
-    "dynamic_templates" : [ {
-      "message_field" : {
-        "path_match" : "message",
-        "match_mapping_type" : "string",
-        "mapping" : {
-          "type" : "text",
-          "norms" : false
-        }
-      }
-    }, {
-      "string_fields" : {
-        "match" : "*",
-        "match_mapping_type" : "string",
-        "mapping" : {
-          "type" : "text", "norms" : false,
-          "fields" : {
-            "keyword" : { "type": "keyword", "ignore_above": 256 }
-          }
-        }
-      }
-    } ],
-    "properties" : {
-      "@timestamp": { "type": "date"},
-      "@version": { "type": "keyword"},
-      "geoip"  : {
-        "dynamic": true,
-        "properties" : {
-          "ip": { "type": "ip" },
-          "location" : { "type" : "geo_point" },
-          "latitude" : { "type" : "half_float" },
-          "longitude" : { "type" : "half_float" }
-        }
-      },
-      "geoip_ext"  : {
-        "dynamic": true,
-        "properties" : {
-          "ip": { "type": "ip" },
-          "location" : { "type" : "geo_point" },
-          "latitude" : { "type" : "half_float" },
-          "longitude" : { "type" : "half_float" }
-        }
-      }
-    }
-  }
-}'
-echo
diff --git a/docker/elk/logstash/docker-compose.yml b/docker/elk/logstash/docker-compose.yml
index b6c71354..1b641069 100644
--- a/docker/elk/logstash/docker-compose.yml
+++ b/docker/elk/logstash/docker-compose.yml
@@ -15,9 +15,10 @@ services:
     env_file:
      - /opt/tpot/etc/compose/elk_environment
     ports:
-     - "127.0.0.1:64305:80"
-    image: "dtagdevsec/logstash:2006"
+     - "127.0.0.1:64305:64305"
+    image: "dtagdevsec/logstash:2204"
     volumes:
      - /data:/data
 #     - /root/tpotce/docker/elk/logstash/dist/logstash.conf:/etc/logstash/conf.d/logstash.conf
 #     - /root/tpotce/docker/elk/logstash/dist/http.conf:/etc/logstash/conf.d/http.conf
+#     - /root/tpotce/docker/elk/logstash/dist/logstash.yml:/etc/logstash/conf.d/logstash.yml
diff --git a/docker/elk/map/Dockerfile b/docker/elk/map/Dockerfile
new file mode 100644
index 00000000..9176fe99
--- /dev/null
+++ b/docker/elk/map/Dockerfile
@@ -0,0 +1,41 @@
+FROM alpine:3.15
+#
+# Include dist
+COPY dist/ /root/dist/
+#
+# Install packages
+RUN apk -U --no-cache add \
+             build-base \
+             git \
+             libcap \
+             py3-pip \
+             python3 \
+             python3-dev && \
+#
+# Install GeoIP Attack Map server from GitHub and setup
+    mkdir -p /opt && \
+    cd /opt/ && \
+    git clone https://github.com/t3chn0m4g3/geoip-attack-map && \
+    cd geoip-attack-map && \
+#    git checkout 4dae740178455f371b667ee095f824cb271f07e8 && \
+    cp /root/dist/* . && \
+    pip3 install -r requirements.txt && \
+    pip3 install flask && \
+    setcap cap_net_bind_service=+ep /usr/bin/python3.9 && \
+#
+# Setup user, groups and configs
+    addgroup -g 2000 map && \
+    adduser -S -H -s /bin/ash -u 2000 -D -g 2000 map && \
+    chown map:map -R /opt/geoip-attack-map && \
+#
+# Clean up
+    apk del --purge build-base \
+                    git \
+		    python3-dev && \
+    rm -rf /root/* /var/cache/apk/* /opt/geoip-attack-map/.git
+#
+# Start map
+STOPSIGNAL SIGINT
+USER map:map
+WORKDIR /opt/geoip-attack-map
+CMD ./entrypoint.sh && exec /usr/bin/python3 $MAP_COMMAND
diff --git a/docker/elk/map/dist/entrypoint.sh b/docker/elk/map/dist/entrypoint.sh
new file mode 100755
index 00000000..a63bb29e
--- /dev/null
+++ b/docker/elk/map/dist/entrypoint.sh
@@ -0,0 +1,2 @@
+#!/bin/ash
+sed -i "s/var hqLatLng = new L.LatLng(52.3058, 4.932);/var hqLatLng = new L.LatLng($MY_EXTIP_LAT, $MY_EXTIP_LONG);/g" /opt/geoip-attack-map/static/map.js
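Note: the substitution can be dry-run outside the container to preview the map's home marker; the coordinates below are hypothetical examples:

    MY_EXTIP_LAT=50.1109 MY_EXTIP_LONG=8.6821
    echo 'var hqLatLng = new L.LatLng(52.3058, 4.932);' | \
      sed "s/var hqLatLng = new L.LatLng(52.3058, 4.932);/var hqLatLng = new L.LatLng($MY_EXTIP_LAT, $MY_EXTIP_LONG);/g"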
diff --git a/docker/elk/map/docker-compose.yml b/docker/elk/map/docker-compose.yml
new file mode 100644
index 00000000..247ff7a3
--- /dev/null
+++ b/docker/elk/map/docker-compose.yml
@@ -0,0 +1,46 @@
+version: '2.3'
+
+#networks:
+#  map_local:
+
+services:
+
+# Map Redis Service
+  map_redis:
+    container_name: map_redis
+    restart: always
+    stop_signal: SIGKILL
+    tty: true
+    image: "dtagdevsec/redis:2204"
+    read_only: true
+
+# Map Web Service
+  map_web:
+    build: .
+    container_name: map_web
+    restart: always
+    environment:
+     - MAP_COMMAND=AttackMapServer.py
+    env_file:
+     - /opt/tpot/etc/compose/elk_environment
+    stop_signal: SIGKILL
+    tty: true
+    ports:
+     - "127.0.0.1:64299:64299"
+    image: "dtagdevsec/map:2204"
+    depends_on:
+     - map_redis
+
+# Map Data Service
+  map_data:
+    container_name: map_data
+    restart: always
+    environment:
+     - MAP_COMMAND=DataServer_v2.py
+    env_file:
+     - /opt/tpot/etc/compose/elk_environment
+    stop_signal: SIGKILL
+    tty: true
+    image: "dtagdevsec/map:2204"
+    depends_on:
+     - map_redis
diff --git a/docker/endlessh/Dockerfile b/docker/endlessh/Dockerfile
index b1ef42f8..72e13475 100644
--- a/docker/endlessh/Dockerfile
+++ b/docker/endlessh/Dockerfile
@@ -16,7 +16,7 @@ RUN apk -U add --no-cache \
     make && \
     mv /opt/endlessh/endlessh /root/dist
 #
-FROM alpine:3.14
+FROM alpine:3.15
 #
 COPY --from=builder /root/dist/* /opt/endlessh/
 #
diff --git a/docker/endlessh/docker-compose.yml b/docker/endlessh/docker-compose.yml
index eb2359dd..d0bef565 100644
--- a/docker/endlessh/docker-compose.yml
+++ b/docker/endlessh/docker-compose.yml
@@ -10,11 +10,13 @@ services:
     build: .
     container_name: endlessh
     restart: always
+#    cpu_count: 1
+#    cpus: 0.25
     networks:
      - endlessh_local
     ports:
      - "22:2222"
-    image: "dtagdevsec/endlessh:2006"
+    image: "dtagdevsec/endlessh:2204"
     read_only: true
     volumes:
      - /data/endlessh/log:/var/log/endlessh
diff --git a/docker/ews/Dockerfile b/docker/ewsposter/Dockerfile
similarity index 84%
rename from docker/ews/Dockerfile
rename to docker/ewsposter/Dockerfile
index a58be85f..c34da45c 100644
--- a/docker/ews/Dockerfile
+++ b/docker/ewsposter/Dockerfile
@@ -1,7 +1,7 @@
-FROM alpine:3.14
+FROM alpine:3.15
 #
 # Include dist
-ADD dist/ /root/dist/
+COPY dist/ /root/dist/
 #
 # Install packages
 RUN apk -U --no-cache add \
@@ -17,10 +17,13 @@ RUN apk -U --no-cache add \
             py3-ipaddress \
 	    py3-lxml \
 	    py3-mysqlclient \
+	    py3-openssl \
             py3-requests \
 	    py3-pip \
-            py3-setuptools && \
-    pip3 install --no-cache-dir configparser hpfeeds3 influxdb influxdb-client pyOpenSSL xmljson && \
+            py3-setuptools \
+            py3-wheel && \
+    pip3 install --upgrade pip && \
+    pip3 install --no-cache-dir configparser hpfeeds3 influxdb influxdb-client xmljson && \
 #
 # Setup ewsposter
     git clone https://github.com/telekom-security/ewsposter /opt/ewsposter && \
@@ -44,8 +47,7 @@ RUN apk -U --no-cache add \
             openssl-dev \
             python3-dev \
             py-setuptools && \
-    rm -rf /root/* && \
-    rm -rf /var/cache/apk/*
+    rm -rf /root/* /var/cache/apk/* /opt/ewsposter/.git
 #
 # Run ewsposter
 STOPSIGNAL SIGINT
diff --git a/docker/ews/dist/ews.cfg b/docker/ewsposter/dist/ews.cfg
similarity index 99%
rename from docker/ews/dist/ews.cfg
rename to docker/ewsposter/dist/ews.cfg
index 8e6badad..95da250d 100644
--- a/docker/ews/dist/ews.cfg
+++ b/docker/ewsposter/dist/ews.cfg
@@ -154,7 +154,7 @@ nodeid = medpot-community-01
 logfile = /data/medpot/log/medpot.log
 
 [HONEYPY]
-honeypy = true
+honeypy = false
 nodeid = honeypy-community-01
 logfile = /data/honeypy/log/json.log
 
diff --git a/docker/ews/docker-compose.yml b/docker/ewsposter/docker-compose.yml
similarity index 89%
rename from docker/ews/docker-compose.yml
rename to docker/ewsposter/docker-compose.yml
index f172fb28..003597e6 100644
--- a/docker/ews/docker-compose.yml
+++ b/docker/ewsposter/docker-compose.yml
@@ -10,6 +10,8 @@ services:
     build: .
     container_name: ewsposter
     restart: always
+#    cpu_count: 1
+#    cpus: 0.75
     networks:
      - ewsposter_local
     environment:
@@ -23,7 +25,7 @@ services:
      - EWS_HPFEEDS_FORMAT=json
     env_file:
      - /opt/tpot/etc/compose/elk_environment
-    image: "dtagdevsec/ewsposter:2006"
+    image: "dtagdevsec/ewsposter:2204"
     volumes:
      - /data:/data
 #     - /data/ews/conf/ews.ip:/opt/ewsposter/ews.ip
diff --git a/docker/fatt/Dockerfile b/docker/fatt/Dockerfile
index 9dec2ae4..ace00609 100644
--- a/docker/fatt/Dockerfile
+++ b/docker/fatt/Dockerfile
@@ -1,8 +1,9 @@
-FROM alpine:3.14
+FROM alpine:3.15
 #
 # Get and install dependencies & packages
-RUN apk -U add \
+RUN apk -U --no-cache add \
               git \
+	      libcap \
               py3-libxml2 \
               py3-lxml \
 	      py3-pip \
@@ -19,22 +20,25 @@ RUN apk -U add \
     cd /opt && \
     git clone https://github.com/0x4D31/fatt && \
     cd fatt && \
-    git checkout 314cd1ff7873b5a145a51ec4e85f6107828a2c79 && \
+    git checkout 45cabf0b8b59162b99a1732d853efb01614563fe && \
+    #git checkout 314cd1ff7873b5a145a51ec4e85f6107828a2c79 && \
     mkdir -p log && \
     # pyshark >= 0.4.3 breaks fatt
     pip3 install pyshark==0.4.2.11 && \
 #
 # Setup configs
+    chgrp fatt /usr/bin/dumpcap && \
+    setcap cap_net_raw,cap_net_admin=+eip /usr/bin/dumpcap && \
     chown fatt:fatt -R /opt/fatt/* && \
 #
 # Clean up
     apk del --purge git \
                     python3-dev && \
-    rm -rf /root/* && \
-    rm -rf /var/cache/apk/* 
+    rm -rf /root/* /var/cache/apk/* /opt/fatt/.git
 #
 # Start fatt
 STOPSIGNAL SIGINT
 ENV PYTHONPATH /opt/fatt
 WORKDIR /opt/fatt
+USER fatt:fatt
 CMD python3 fatt.py -i $(/sbin/ip address show | /usr/bin/awk '/inet.*brd/{ print $NF; exit }') --print_output --json_logging -o log/fatt.log
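Note: with the image now dropping to USER fatt:fatt, capture rights come solely from the file capabilities granted to dumpcap above. Assuming the libcap tools from the package list are still present in the image, this can be verified in the running container:

    # expect something like: /usr/bin/dumpcap cap_net_admin,cap_net_raw=eip
    docker exec fatt getcap /usr/bin/dumpcap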
diff --git a/docker/fatt/docker-compose.yml b/docker/fatt/docker-compose.yml
index 1550ed3a..01a1f67b 100644
--- a/docker/fatt/docker-compose.yml
+++ b/docker/fatt/docker-compose.yml
@@ -7,11 +7,13 @@ services:
     build: .
     container_name: fatt
     restart: always
+#    cpu_count: 1
+#    cpus: 0.75
     network_mode: "host"
     cap_add:
      - NET_ADMIN
      - SYS_NICE
      - NET_RAW
-    image: "dtagdevsec/fatt:2006"
+    image: "dtagdevsec/fatt:2204"
     volumes:
      - /data/fatt/log:/opt/fatt/log
diff --git a/docker/glutton/Dockerfile b/docker/glutton/Dockerfile
index 71689fc2..41f37935 100644
--- a/docker/glutton/Dockerfile
+++ b/docker/glutton/Dockerfile
@@ -1,7 +1,7 @@
-FROM alpine:3.13
+FROM alpine:3.15
 #
 # Include dist
-ADD dist/ /root/dist/
+COPY dist/ /root/dist/
 # 
 # Setup apk
 RUN apk -U --no-cache add \
@@ -47,7 +47,7 @@ RUN apk -U --no-cache add \
                     g++ && \
     rm -rf /var/cache/apk/* \
            /opt/go \
-           /root/dist
+           /root/*
 #
 # Start glutton 
 WORKDIR /opt/glutton
diff --git a/docker/glutton/docker-compose.yml b/docker/glutton/docker-compose.yml
index 68843e9d..2f14b8b3 100644
--- a/docker/glutton/docker-compose.yml
+++ b/docker/glutton/docker-compose.yml
@@ -10,10 +10,12 @@ services:
     tmpfs:
      - /var/lib/glutton:uid=2000,gid=2000
      - /run:uid=2000,gid=2000
+#    cpu_count: 1
+#    cpus: 0.75
     network_mode: "host"
     cap_add:
      - NET_ADMIN
-    image: "dtagdevsec/glutton:2006"
+    image: "dtagdevsec/glutton:2204"
     read_only: true
     volumes:
      - /data/glutton/log:/var/log/glutton
diff --git a/docker/heimdall/Dockerfile b/docker/heimdall/Dockerfile
deleted file mode 100644
index 06758b85..00000000
--- a/docker/heimdall/Dockerfile
+++ /dev/null
@@ -1,83 +0,0 @@
-FROM alpine:3.14
-#
-# Include dist
-ADD dist/ /root/dist/
-#
-# Get and install dependencies & packages
-RUN apk -U --no-cache add \
-      git \
-      nginx \
-      nginx-mod-http-headers-more \
-      php7 \
-      php7-cgi \
-      php7-ctype \
-      php7-fileinfo \
-      php7-fpm \
-      php7-json \
-      php7-mbstring \
-      php7-openssl \
-      php7-pdo \
-      php7-pdo_pgsql \
-      php7-pdo_sqlite \
-      php7-session \
-      php7-sqlite3 \
-      php7-tokenizer \
-      php7-xml \
-      php7-zip && \
-#
-# Clone and setup Heimdall, Nginx
-    git clone https://github.com/linuxserver/heimdall && \
-    cd heimdall && \
-    git checkout 61a5a1a8b023771e0ff7c056add5537d20737e51 && \
-    cd .. && \
-    cp -R heimdall/. /var/lib/nginx/html && \
-    rm -rf heimdall && \
-    cd /var/lib/nginx/html && \
-    cp .env.example .env && \
-    # Fix error for ArrayInput in smyfony with regard to PHP7.4 (https://github.com/symfony/symfony/pull/32806/files)
-    sed -i "135s/.*/} elseif (0 === strpos(\$key, '-')) {/" /var/lib/nginx/html/vendor/symfony/console/Input/ArrayInput.php && \
-    php7 artisan key:generate && \
-#
-## Add previously configured content
-    mkdir -p /var/lib/nginx/html/storage/app/public/backgrounds/ && \
-    cp /root/dist/app/bg1.jpg /var/lib/nginx/html/public/img/bg1.jpg && \
-    cp /root/dist/app/t-pot.png /var/lib/nginx/html/public/img/heimdall-icon-small.png && \
-    cp /root/dist/app/app.sqlite /var/lib/nginx/html/database/app.sqlite && \
-    cp /root/dist/app/cyberchef.png /var/lib/nginx/html/storage/app/public/icons/ZotKKZA2QKplZhdoF3WLx4UdKKhLFamf3lSMcLkr.png && \
-    cp /root/dist/app/eshead.png /var/lib/nginx/html/storage/app/public/icons/77KqFv4YIshXUDLDoOvZ1NUbsKDtsMAjJvg4sYqN.png && \
-    cp /root/dist/app/tsec.png /var/lib/nginx/html/storage/app/public/icons/RHwXCfCeGNDdhYgzlShL9o4NBFL2LHZWajgyeL0a.png && \
-    cp /root/dist/app/spiderfoot.png /var/lib/nginx/html/storage/app/public/icons/s7uPe1frJqjv76oI6SNqNbWUsgU1GHYqRALMlwYb.png && \
-    cp /root/dist/html/*.html /var/lib/nginx/html/public/ && \
-    cp /root/dist/html/favicon.ico /var/lib/nginx/html/public/favicon-16x16.png && \
-    cp /root/dist/html/favicon.ico /var/lib/nginx/html/public/favicon-32x32.png && \
-    cp /root/dist/html/favicon.ico /var/lib/nginx/html/public/favicon-96x96.png && \
-    cp /root/dist/html/favicon.ico /var/lib/nginx/html/public/favicon.ico && \
-#
-## Change ownership, permissions
-    chown root:www-data -R /var/lib/nginx/html && \
-    chmod 775 -R /var/lib/nginx/html/storage && \
-    chmod 775 -R /var/lib/nginx/html/database && \
-    sed -i "s/user = nobody/user = nginx/g" /etc/php7/php-fpm.d/www.conf && \
-    sed -i "s/group = nobody/group = nginx/g" /etc/php7/php-fpm.d/www.conf && \
-    sed -i "s#;upload_tmp_dir =#upload_tmp_dir = /var/lib/nginx/tmp#g" /etc/php7/php.ini && \
-    sed -i "s/9000/64304/g" /etc/php7/php-fpm.d/www.conf && \
-    sed -i "s/APP_NAME=Heimdall/APP_NAME=T-Pot/g" /var/lib/nginx/html/.env && \
-## Add Nginx / T-Pot specific configs
-    rm -rf /etc/nginx/conf.d/* /usr/share/nginx/html/* && \
-    mkdir -p /etc/nginx/conf.d && \
-    cp /root/dist/conf/nginx.conf /etc/nginx/ && \
-    cp -R /root/dist/conf/ssl /etc/nginx/ && \
-    cp /root/dist/conf/tpotweb.conf /etc/nginx/conf.d/ && \
-    cp /root/dist/start.sh / && \
-## Pack database for first time usage
-    cd /var/lib/nginx && \
-    tar cvfz first.tgz /var/lib/nginx/html/database /var/lib/nginx/html/storage && \
-#
-# Clean up
-    apk del --purge \
-      git && \
-    rm -rf /root/* && \
-    rm -rf /var/cache/apk/*
-#
-# Start nginx
-CMD /start.sh && php-fpm7 && exec nginx -g 'daemon off;'
diff --git a/docker/heimdall/dist/app/app.sqlite b/docker/heimdall/dist/app/app.sqlite
deleted file mode 100755
index 5447bd06..00000000
Binary files a/docker/heimdall/dist/app/app.sqlite and /dev/null differ
diff --git a/docker/heimdall/dist/app/bg1.jpg b/docker/heimdall/dist/app/bg1.jpg
deleted file mode 100644
index 1130ed2e..00000000
Binary files a/docker/heimdall/dist/app/bg1.jpg and /dev/null differ
diff --git a/docker/heimdall/dist/app/cyberchef.png b/docker/heimdall/dist/app/cyberchef.png
deleted file mode 100644
index eb182286..00000000
Binary files a/docker/heimdall/dist/app/cyberchef.png and /dev/null differ
diff --git a/docker/heimdall/dist/app/eshead.png b/docker/heimdall/dist/app/eshead.png
deleted file mode 100644
index 55cf04c5..00000000
Binary files a/docker/heimdall/dist/app/eshead.png and /dev/null differ
diff --git a/docker/heimdall/dist/app/spiderfoot.png b/docker/heimdall/dist/app/spiderfoot.png
deleted file mode 100644
index f2ac38f5..00000000
Binary files a/docker/heimdall/dist/app/spiderfoot.png and /dev/null differ
diff --git a/docker/heimdall/dist/app/t-pot.png b/docker/heimdall/dist/app/t-pot.png
deleted file mode 100644
index 0349351c..00000000
Binary files a/docker/heimdall/dist/app/t-pot.png and /dev/null differ
diff --git a/docker/heimdall/dist/conf/nginx.conf b/docker/heimdall/dist/conf/nginx.conf
deleted file mode 100644
index 70c4d552..00000000
--- a/docker/heimdall/dist/conf/nginx.conf
+++ /dev/null
@@ -1,76 +0,0 @@
-user nginx;
-worker_processes auto;
-pid /run/nginx.pid;
-load_module /usr/lib/nginx/modules/ngx_http_headers_more_filter_module.so;
-
-events {
-	worker_connections 768;
-	# multi_accept on;
-}
-
-http {
-
-	##
-	# Basic Settings
-	##
-
-	sendfile on;
-	tcp_nopush on;
-	tcp_nodelay on;
-	keepalive_timeout 65;
-	types_hash_max_size 2048;
-	# server_tokens off;
-
-	# server_names_hash_bucket_size 64;
-	# server_name_in_redirect off;
-
-	include /etc/nginx/mime.types;
-	default_type application/octet-stream;
-
-	##
-	# SSL Settings
-	##
-
-	#ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
-	ssl_protocols TLSv1.2 TLSv1.3; # Dropping SSLv3, ref: POODLE
-	ssl_prefer_server_ciphers on;
-
-	##
-	# Logging Settings
-	##
-
-        log_format le_json '{ "timestamp": "$time_iso8601", '
- 	'"src_ip": "$remote_addr", '
- 	'"remote_user": "$remote_user", '
- 	'"body_bytes_sent": "$body_bytes_sent", '
- 	'"request_time": "$request_time", '
- 	'"status": "$status", '
- 	'"request": "$request", '
- 	'"request_method": "$request_method", '
- 	'"http_referrer": "$http_referer", '
- 	'"http_user_agent": "$http_user_agent" }';
- 
- 	access_log /var/log/nginx/access.log le_json;
-	error_log /var/log/nginx/error.log;
-
-	##
-	# Gzip Settings
-	##
-
-	gzip on;
-	gzip_disable "msie6";
-
-	# gzip_vary on;
-	# gzip_proxied any;
-	# gzip_comp_level 6;
-	# gzip_buffers 16 8k;
-	# gzip_http_version 1.1;
-	# gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
-
-	##
-	# Virtual Host Configs
-	##
-
-	include /etc/nginx/conf.d/*.conf;
-	include /etc/nginx/sites-enabled/*;
-}
diff --git a/docker/heimdall/dist/start.sh b/docker/heimdall/dist/start.sh
deleted file mode 100755
index 6e986628..00000000
--- a/docker/heimdall/dist/start.sh
+++ /dev/null
@@ -1,10 +0,0 @@
-#!/bin/ash
-if [ "$(ls /var/lib/nginx/html/database)" = "" ] && [ "$HEIMDALL_PERSIST" = "YES" ];
-  then
-    tar xvfz /var/lib/nginx/first.tgz -C /
-fi
-if [ "$HEIMDALL_PERSIST" = "YES" ];
-  then
-    chmod 770 -R /var/lib/nginx/html/database /var/lib/nginx/html/storage
-    chown root:www-data -R /var/lib/nginx/html/database /var/lib/nginx/html/storage
-fi
diff --git a/docker/heimdall/docker-compose.yml b/docker/heimdall/docker-compose.yml
deleted file mode 100644
index 98346f10..00000000
--- a/docker/heimdall/docker-compose.yml
+++ /dev/null
@@ -1,37 +0,0 @@
-version: '2.3'
-
-services:
-
-# nginx service
-  nginx:
-    build: .
-    container_name: nginx
-    restart: always
-    environment:
-    ### If set to YES all changes within Heimdall will remain for the next start
-    ### Make sure to uncomment the corresponding volume statements below, or the setting will prevent a successful start of T-Pot.
-     - HEIMDALL_PERSIST=NO
-    tmpfs:
-     - /var/tmp/nginx/client_body
-     - /var/tmp/nginx/proxy
-     - /var/tmp/nginx/fastcgi
-     - /var/tmp/nginx/uwsgi
-     - /var/tmp/nginx/scgi
-     - /run
-     - /var/log/php7/
-     - /var/lib/nginx/tmp:uid=100,gid=82
-     - /var/lib/nginx/html/storage/logs:uid=100,gid=82
-     - /var/lib/nginx/html/storage/framework/views:uid=100,gid=82
-    network_mode: "host"
-    ports:
-     - "64297:64297"
-     - "127.0.0.1:64304:64304"
-    image: "dtagdevsec/nginx:2006"
-    read_only: true
-    volumes:
-     - /data/nginx/cert/:/etc/nginx/cert/:ro
-     - /data/nginx/conf/nginxpasswd:/etc/nginx/nginxpasswd:ro
-     - /data/nginx/log/:/var/log/nginx/
-    ### Enable the following volumes if you set HEIMDALL_PERSIST=YES
-    # - /data/nginx/heimdall/database:/var/lib/nginx/html/database
-    # - /data/nginx/heimdall/storage:/var/lib/nginx/html/storage
diff --git a/docker/hellpot/Dockerfile b/docker/hellpot/Dockerfile
index a9975bd8..2d50aae1 100644
--- a/docker/hellpot/Dockerfile
+++ b/docker/hellpot/Dockerfile
@@ -1,7 +1,7 @@
-FROM alpine:3.14
+FROM alpine:3.15
 #
 # Include dist
-ADD dist/ /root/dist/
+COPY dist/ /root/dist/
 #
 # Setup apk
 RUN apk -U --no-cache add \
@@ -17,10 +17,11 @@ RUN apk -U --no-cache add \
     mkdir -p /opt/go && \ 
     git clone https://github.com/yunginnanet/HellPot && \
     cd HellPot && \
-    git checkout f87b1f17e21b36edae41b7f49d4a54ae420a9bf8 && \
-  # Hellpot ignores setting the logpath, need to this hardcoded :(
+    git checkout 1312f20e719223099af8aad80f316420ee3dfcb1 && \
+  # HellPot ignores the configured log path, so hardcode it via sed ...
     sed -i 's#logDir = snek.GetString("logger.directory")#logDir = "/var/log/hellpot/"#g' config/logger.go && \
     sed -i 's#tnow := "HellPot"#tnow := "hellpot"#g' config/logger.go && \
+    sed -i 's#logFileName := "HellPot"#logFileName := "hellpot"#g' config/logger.go && \
     go build cmd/HellPot/HellPot.go && \
     mv /root/HellPot/HellPot /opt/hellpot/ && \
 #
@@ -40,7 +41,7 @@ RUN apk -U --no-cache add \
                     g++ && \
     rm -rf /var/cache/apk/* \
            /opt/go \
-           /root/dist
+           /root/*
 #
 # Start hellpot
 WORKDIR /opt/hellpot
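
The three `sed` calls above exist because the pinned HellPot revision ignores its `logger.directory` setting, so the log directory and file names are baked in at build time. A minimal sketch of the same rewrite against a local checkout (the checkout path is an assumption):

```bash
# Sketch: reproduce the Dockerfile's hardcoding on a local HellPot checkout.
cd HellPot
sed -i 's#logDir = snek.GetString("logger.directory")#logDir = "/var/log/hellpot/"#g' config/logger.go
sed -i 's#tnow := "HellPot"#tnow := "hellpot"#g' config/logger.go
sed -i 's#logFileName := "HellPot"#logFileName := "hellpot"#g' config/logger.go
grep -n '/var/log/hellpot/' config/logger.go   # confirm the path is now fixed
```
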
diff --git a/docker/hellpot/docker-compose.yml b/docker/hellpot/docker-compose.yml
index 150520db..0fcb3b92 100644
--- a/docker/hellpot/docker-compose.yml
+++ b/docker/hellpot/docker-compose.yml
@@ -10,11 +10,13 @@ services:
     build: .
     container_name: hellpot
     restart: always
+#    cpu_count: 1
+#    cpus: 0.25
     networks:
      - hellpot_local
     ports:
      - "80:8080"    
-    image: "dtagdevsec/hellpot:2006"
+    image: "dtagdevsec/hellpot:2204"
     read_only: true
     volumes:
      - /data/hellpot/log:/var/log/hellpot
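
HellPot's whole point is to answer unruly bots with an endless stream, so an unbounded request against the published port never finishes on its own. A bounded smoke test (the trapped path is an assumption based on HellPot's default config):

```bash
# Cap the transfer; host port 80 maps to container port 8080 above.
curl -s --max-time 5 http://localhost/wp-login.php | head -c 200
```
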
diff --git a/docker/heralding/Dockerfile b/docker/heralding/Dockerfile
index 90438a02..9972ff75 100644
--- a/docker/heralding/Dockerfile
+++ b/docker/heralding/Dockerfile
@@ -1,7 +1,7 @@
-FROM alpine:3.14
+FROM alpine:3.15
 #  
 # Include dist
-ADD dist/ /root/dist/
+COPY dist/ /root/dist/
 #
 # Install packages
 RUN apk -U --no-cache add \ 
@@ -12,7 +12,19 @@ RUN apk -U --no-cache add \
             		openssl-dev \
                         py3-pyzmq \
             		postgresql-dev \
+			py3-attrs \
+			py3-mysqlclient \
+			py3-nose \
+			py3-openssl \
 			py3-pip \
+			py3-psycopg2 \
+			py3-pycryptodome \
+			py3-pyzmq \
+			py3-requests \
+			py3-rsa \
+			py3-typing-extensions \
+			py3-wheel \
+			py3-yaml \
             		python3 \
             		python3-dev && \
 #
@@ -22,6 +34,7 @@ RUN apk -U --no-cache add \
     git clone https://github.com/johnnykv/heralding && \
     cd heralding && \
     git checkout c31f99c55c7318c09272d8d9998e560c3d4de9aa && \
+    cp /root/dist/requirements.txt . && \
     pip3 install --upgrade pip && \
     pip3 install --no-cache-dir -r requirements.txt && \
     pip3 install --no-cache-dir . && \
diff --git a/docker/heralding/dist/requirements.txt b/docker/heralding/dist/requirements.txt
new file mode 100644
index 00000000..21336f71
--- /dev/null
+++ b/docker/heralding/dist/requirements.txt
@@ -0,0 +1,4 @@
+aiosmtpd
+asyncssh>=2.0.0
+pyaml
+hpfeeds3
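
The overlaid requirements.txt leaves pip with only the four packages Alpine 3.15 does not provide; everything else now arrives as prebuilt apk `py3-*` packages, avoiding compilation inside the image. A rough check that nothing extra would be resolved:

```bash
# Resolve the slimmed requirements without installing; besides dependencies,
# only aiosmtpd, asyncssh, pyaml and hpfeeds3 should be downloaded.
pip3 download -r docker/heralding/dist/requirements.txt -d /tmp/heralding-wheels
ls /tmp/heralding-wheels
```
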
diff --git a/docker/heralding/docker-compose.yml b/docker/heralding/docker-compose.yml
index 9df2f1e7..774fa687 100644
--- a/docker/heralding/docker-compose.yml
+++ b/docker/heralding/docker-compose.yml
@@ -12,6 +12,8 @@ services:
     restart: always
     tmpfs:
      - /tmp/heralding:uid=2000,gid=2000
+#    cpu_count: 1
+#    cpus: 0.25
     networks:
      - heralding_local
     ports:
@@ -31,7 +33,7 @@ services:
      - "3389:3389"
      - "5432:5432"
      - "5900:5900"
-    image: "dtagdevsec/heralding:2006"
+    image: "dtagdevsec/heralding:2204"
     read_only: true
     volumes:
      - /data/heralding/log:/var/log/heralding
diff --git a/docker/honeypots/Dockerfile b/docker/honeypots/Dockerfile
index b07ac6e3..6eea4b9c 100644
--- a/docker/honeypots/Dockerfile
+++ b/docker/honeypots/Dockerfile
@@ -1,10 +1,10 @@
-FROM alpine:3.14
+FROM alpine:3.15
 #
 # Include dist
-ADD dist/ /root/dist/
+COPY dist/ /root/dist/
 #
 # Install packages
-RUN apk -U add \
+RUN apk -U --no-cache add \
              build-base \
 	     freetds \
 	     freetds-dev \
@@ -19,7 +19,31 @@ RUN apk -U add \
              openssl \
              openssl-dev \
 	     postgresql-dev \
+	     py3-chardet \
+	     py3-click \
+	     py3-cryptography \
+	     py3-dnspython \
+	     py3-flask \
+	     py3-future \
+	     py3-hiredis \
+	     py3-impacket \
+	     py3-itsdangerous \
+	     py3-jinja2 \
+	     py3-ldap3 \
+	     py3-markupsafe \
+	     py3-netifaces \
+	     py3-openssl \
+	     py3-packaging \
+	     py3-paramiko \
 	     py3-pip \
+	     py3-psutil \
+	     py3-psycopg2 \
+	     py3-pycryptodomex \
+	     py3-requests \
+	     py3-service_identity \
+	     py3-twisted \
+	     py3-werkzeug \
+	     py3-wheel \
              python3 \
              python3-dev \
              zlib-dev && \
@@ -28,12 +52,11 @@ RUN apk -U add \
     mkdir -p /opt \
              /var/log/honeypots && \
     cd /opt/ && \
-    #git clone https://github.com/qeeqbox/honeypots && \
-    git clone https://github.com/t3chn0m4g3/honeypots && \
+    git clone https://github.com/qeeqbox/honeypots && \
     cd honeypots && \
-    #git checkout 7c654a3ef2c564ae6f1247bf302d652037080163 && \
+    git checkout bee3147cf81837ba7639f1e27fe34d717ecccf29 && \
+    cp /root/dist/setup.py . && \
     pip3 install --upgrade pip && \
-    pip3 install --ignore-installed hiredis packaging && \
     pip3 install . && \
     setcap cap_net_bind_service=+ep /usr/bin/python3.9 && \
 #
@@ -54,12 +77,11 @@ RUN apk -U add \
 		    postgresql-dev \
 		    python3-dev \
 		    zlib-dev && \
-    rm -rf /root/* && \
-    rm -rf /var/cache/apk/*
+    rm -rf /root/* /var/cache/apk/* /opt/honeypots/.git
+
 #
 # Start honeypots 
 STOPSIGNAL SIGINT
 USER honeypots:honeypots
 WORKDIR /opt/honeypots/
-CMD python3 -m honeypots --setup all --config config.json
-#CMD python3 -m honeypots --setup telnet --config config.json
+CMD python3 -E -m honeypots --setup all --config config.json
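
The new `CMD` passes `-E` so the interpreter ignores `PYTHONPATH`/`PYTHONHOME` from the environment before qHoneypots starts every service defined in config.json. The same invocation can be narrowed to a single service for testing, e.g.:

```bash
# Sketch: run one qHoneypots service the way the container runs them all.
cd /opt/honeypots
python3 -E -m honeypots --setup ssh --config config.json
```
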
diff --git a/docker/honeypots/dist/config.json b/docker/honeypots/dist/config.json
index 648e583c..4bc9b287 100644
--- a/docker/honeypots/dist/config.json
+++ b/docker/honeypots/dist/config.json
@@ -1,144 +1,246 @@
 {
-    "logs":"file,terminal",
-    "logs_location":"/var/log/honeypots/",
-    "honeypots": {
-        "dns": {
-            "port": 53,
-            "ip": "0.0.0.0",
-            "username": "administrator",
-            "password": "123456"
-            },
-        "ftp": {
-            "port": 21,
-            "ip": "0.0.0.0",
-            "username": "ftp",
-            "password": "anonymous"
-            },
-        "httpproxy": {
-            "port": 8080,
-            "ip": "0.0.0.0",
-            "username": "admin",
-            "password": "admin"
-            },
-        "http": {
-            "port": 80,
-            "ip": "0.0.0.0",
-            "username": "admin",
-            "password": "admin"
-            },
-        "https": {
-            "port": 443,
-            "ip": "0.0.0.0",
-            "username": "admin",
-            "password": "admin"
-            },
-        "imap": {
-            "port": 143,
-            "ip": "0.0.0.0",
-            "username": "root",
-            "password": "123456"
-            },
-        "mysql": {
-            "port": 3306,
-            "ip": "0.0.0.0",
-            "username": "root",
-            "password": "123456"
-            },
-        "pop3": {
-            "port": 110,
-            "ip": "0.0.0.0",
-            "username": "root",
-            "password": "123456"
-            },
-        "postgres": {
-            "port": 5432,
-            "ip": "0.0.0.0",
-            "username": "postgres",
-            "password": "123456"
-            },
-        "redis": {
-            "port": 6379,
-            "ip": "0.0.0.0",
-            "username": "root",
-            "password": ""
-            },
-        "smb": {
-            "port": 445,
-            "ip": "0.0.0.0",
-            "username": "administrator",
-            "password": "123456"
-            },
-        "smtp": {
-            "port": 25,
-            "ip": "0.0.0.0",
-            "username": "root",
-            "password": "123456"
-            },
-        "socks5": {
-            "port": 1080,
-            "ip": "0.0.0.0",
-            "username": "admin",
-            "password": "admin"
-            },
-        "ssh": {
-            "port": 22,
-            "ip": "0.0.0.0",
-            "username": "root",
-            "password": "123456"
-            },
-        "telnet": {
-            "port": 23,
-            "ip": "0.0.0.0",
-            "username": "root",
-            "password": "123456"
-            },
-        "vnc": {
-            "port": 5900,
-            "ip": "0.0.0.0",
-            "username": "administrator",
-            "password": "123456"
-            },
-        "elastic": {
-            "port": 9200,
-            "ip": "0.0.0.0",
-            "username": "elastic",
-            "password": "123456"
-            },
-        "mssql": {
-            "port": 1433,
-            "ip": "0.0.0.0",
-            "username": "sa",
-            "password": ""
-            },
-        "ldap": {
-            "port": 389,
-            "ip": "0.0.0.0",
-            "username": "administrator",
-            "password": "123456"
-            },
-        "ntp": {
-            "port": 123,
-            "ip": "0.0.0.0",
-            "username": "administrator",
-            "password": "123456"
-            },
-        "memcache": {
-            "port": 11211,
-            "ip": "0.0.0.0",
-            "username": "admin",
-            "password": "123456"
-            },
-        "oracle": {
-            "port": 1521,
-            "ip": "0.0.0.0",
-            "username": "bi",
-            "password": "123456"
-            },
-        "snmp": {
-            "port": 161,
-            "ip": "0.0.0.0",
-            "username": "privUser",
-            "password": "123456"
-            }
-        }
+   "logs":"file,terminal,json,tpot",
+   "logs_location":"/var/log/honeypots/",
+   "syslog_address":"",
+   "syslog_facility":0,
+   "postgres":"",
+   "db_options":[
+      
+   ],
+   "filter":"",
+   "interface":"",
+   "honeypots":{
+      "dns":{
+         "port":53,
+         "ip":"0.0.0.0",
+         "username":"administrator",
+         "password":"123456",
+         "log_file_name":"dns.log",
+         "max_bytes":0,
+         "backup_count":10
+      },
+      "ftp":{
+         "port":21,
+         "ip":"0.0.0.0",
+         "username":"ftp",
+         "password":"anonymous",
+         "log_file_name":"ftp.log",
+         "max_bytes":0,
+         "backup_count":10
+      },
+      "httpproxy":{
+         "port":8080,
+         "ip":"0.0.0.0",
+         "username":"admin",
+         "password":"admin",
+         "log_file_name":"httpproxy.log",
+         "max_bytes":0,
+         "backup_count":10
+      },
+      "http":{
+         "port":80,
+         "ip":"0.0.0.0",
+         "username":"admin",
+         "password":"admin",
+         "log_file_name":"http.log",
+         "max_bytes":0,
+         "backup_count":10,
+	 "options":"fix_get_client_ip"
+      },
+      "https":{
+         "port":443,
+         "ip":"0.0.0.0",
+         "username":"admin",
+         "password":"admin",
+         "log_file_name":"https.log",
+         "max_bytes":0,
+         "backup_count":10
+      },
+      "imap":{
+         "port":143,
+         "ip":"0.0.0.0",
+         "username":"root",
+         "password":"123456",
+         "log_file_name":"imap.log",
+         "max_bytes":0,
+         "backup_count":10
+      },
+      "mysql":{
+         "port":3306,
+         "ip":"0.0.0.0",
+         "username":"root",
+         "password":"123456",
+         "log_file_name":"mysql.log",
+         "max_bytes":0,
+         "backup_count":10
+      },
+      "pop3":{
+         "port":110,
+         "ip":"0.0.0.0",
+         "username":"root",
+         "password":"123456",
+         "log_file_name":"pop3.log",
+         "max_bytes":0,
+         "backup_count":10
+      },
+      "postgres":{
+         "port":5432,
+         "ip":"0.0.0.0",
+         "username":"postgres",
+         "password":"123456",
+         "log_file_name":"postgres.log",
+         "max_bytes":0,
+         "backup_count":10
+      },
+      "redis":{
+         "port":6379,
+         "ip":"0.0.0.0",
+         "username":"root",
+         "password":"",
+         "log_file_name":"redis.log",
+         "max_bytes":0,
+         "backup_count":10
+      },
+      "smb":{
+         "port":445,
+         "ip":"0.0.0.0",
+         "username":"administrator",
+         "password":"123456",
+         "log_file_name":"smb.log",
+         "max_bytes":0,
+         "backup_count":10
+      },
+      "smtp":{
+         "port":25,
+         "ip":"0.0.0.0",
+         "username":"root",
+         "password":"123456",
+         "log_file_name":"smtp.log",
+         "max_bytes":0,
+         "backup_count":10
+      },
+      "socks5":{
+         "port":1080,
+         "ip":"0.0.0.0",
+         "username":"admin",
+         "password":"admin",
+         "log_file_name":"socks5.log",
+         "max_bytes":0,
+         "backup_count":10
+      },
+      "ssh":{
+         "port":22,
+         "ip":"0.0.0.0",
+         "username":"root",
+         "password":"123456",
+         "log_file_name":"ssh.log",
+         "max_bytes":0,
+         "backup_count":10
+      },
+      "telnet":{
+         "port":23,
+         "ip":"0.0.0.0",
+         "username":"root",
+         "password":"123456",
+         "log_file_name":"telnet.log",
+         "max_bytes":0,
+         "backup_count":10
+      },
+      "vnc":{
+         "port":5900,
+         "ip":"0.0.0.0",
+         "username":"administrator",
+         "password":"123456",
+         "log_file_name":"vnc.log",
+         "max_bytes":0,
+         "backup_count":10
+      },
+      "elastic":{
+         "port":9200,
+         "ip":"0.0.0.0",
+         "username":"elastic",
+         "password":"123456",
+         "log_file_name":"elastic.log",
+         "max_bytes":0,
+         "backup_count":10
+      },
+      "mssql":{
+         "port":1433,
+         "ip":"0.0.0.0",
+         "username":"sa",
+         "password":"",
+         "log_file_name":"mssql.log",
+         "max_bytes":0,
+         "backup_count":10
+      },
+      "ldap":{
+         "port":389,
+         "ip":"0.0.0.0",
+         "username":"administrator",
+         "password":"123456",
+         "log_file_name":"ldap.log",
+         "max_bytes":0,
+         "backup_count":10
+      },
+      "ntp":{
+         "port":123,
+         "ip":"0.0.0.0",
+         "username":"administrator",
+         "password":"123456",
+         "log_file_name":"ntp.log",
+         "max_bytes":0,
+         "backup_count":10
+      },
+      "memcache":{
+         "port":11211,
+         "ip":"0.0.0.0",
+         "username":"admin",
+         "password":"123456",
+         "log_file_name":"memcache.log",
+         "max_bytes":0,
+         "backup_count":10
+      },
+      "oracle":{
+         "port":1521,
+         "ip":"0.0.0.0",
+         "username":"bi",
+         "password":"123456",
+         "log_file_name":"oracle.log",
+         "max_bytes":0,
+         "backup_count":10
+      },
+      "snmp":{
+         "port":161,
+         "ip":"0.0.0.0",
+         "username":"privUser",
+         "password":"123456",
+         "log_file_name":"snmp.log",
+         "max_bytes":0,
+         "backup_count":10
+      }
+   },
+   "custom_filter":{
+      "honeypots":{
+         "change":{
+            "server":"protocol"
+         },
+         "contains":[
+            "protocol",
+            "action",
+            "src_ip",
+            "src_port",
+            "dest_ip",
+            "dest_port"
+         ],
+         "remove":[
+            
+         ],
+         "options":[
+            "remove_errors",
+            "remove_init",
+            "remove_word_server"
+         ]
+      }
+   }
 }
+
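
Every honeypot entry now carries its own `log_file_name`, `max_bytes` and `backup_count`, so log rotation is tunable per service (`max_bytes: 0` leaves the size uncapped while still keeping up to `backup_count` files), and the new `custom_filter` block normalizes events before they are shipped. One way to eyeball a single service's settings, assuming `jq` is available:

```bash
# Show the logging knobs for the ssh honeypot from the new config layout.
jq '.honeypots.ssh | {port, log_file_name, max_bytes, backup_count}' \
   docker/honeypots/dist/config.json
```
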
diff --git a/docker/honeypots/dist/setup.py b/docker/honeypots/dist/setup.py
new file mode 100644
index 00000000..d63ab76b
--- /dev/null
+++ b/docker/honeypots/dist/setup.py
@@ -0,0 +1,39 @@
+from setuptools import setup
+
+with open("README.rst", "r") as f:
+    long_description = f.read()
+
+setup(
+    name='honeypots',
+    author='QeeqBox',
+    author_email='gigaqeeq@gmail.com',
+    description=r"23 different honeypots in a single pypi package! (dns, ftp, httpproxy, http, https, imap, mysql, pop3, postgres, redis, smb, smtp, socks5, ssh, telnet, vnc, mssql, elastic, ldap, ntp, memcache, snmp, oracle, sip and irc) ",
+    long_description=long_description,
+    version='0.51',
+    license="AGPL-3.0",
+    license_files=('LICENSE'),
+    url="https://github.com/qeeqbox/honeypots",
+    packages=['honeypots'],
+    entry_points={
+        "console_scripts": [
+            'honeypots=honeypots.__main__:main_logic'
+        ]
+    },
+    include_package_data=True,
+    install_requires=[
+        'pycrypto',
+        'scapy',
+        'twisted',
+        'psutil',
+        'psycopg2-binary',
+        'requests',
+        'impacket',
+        'paramiko',
+        'service_identity',
+        'netifaces'
+    ],
+    extras_require={
+        'test': ['redis', 'mysql-connector', 'elasticsearch', 'pymssql', 'ldap3', 'pysnmp']
+    },
+    python_requires='>=3.5'
+)
diff --git a/docker/honeypots/docker-compose.yml b/docker/honeypots/docker-compose.yml
index 7bf3df65..bf8d61a3 100644
--- a/docker/honeypots/docker-compose.yml
+++ b/docker/honeypots/docker-compose.yml
@@ -14,6 +14,8 @@ services:
     restart: always
     tmpfs:
      - /tmp:uid=2000,gid=2000
+#    cpu_count: 1
+#    cpus: 0.75
     networks:
      - honeypots_local
     ports:
@@ -36,7 +38,7 @@ services:
      - "6379:6379"
      - "8080:8080"
      - "9200:9200"
-    image: "dtagdevsec/honeypots:2006"
+    image: "dtagdevsec/honeypots:2204"
     read_only: true
     volumes:
      - /data/honeypots/log:/var/log/honeypots
diff --git a/docker/honeytrap/Dockerfile b/docker/honeytrap/Dockerfile
index 780696f7..5af73372 100644
--- a/docker/honeytrap/Dockerfile
+++ b/docker/honeytrap/Dockerfile
@@ -2,12 +2,11 @@ FROM ubuntu:20.04
 ENV DEBIAN_FRONTEND noninteractive
 #
 # Include dist
-ADD dist/ /root/dist/
+COPY dist/ /root/dist/
 #
 # Setup apt
 RUN apt-get update && \
     apt-get update -y && \
-    apt-get dist-upgrade -y && \
 #
 # Install packages
     apt-get install -y autoconf \
@@ -56,7 +55,7 @@ RUN apt-get update && \
                      libnetfilter-queue-dev \
                      libpq-dev && \
     apt-get autoremove -y --purge && \
-    apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
+    apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/* /root/.cache /root/* /opt/honeytrap/.git
 #
 # Start honeytrap
 USER honeytrap:honeytrap
diff --git a/docker/honeytrap/docker-compose.yml b/docker/honeytrap/docker-compose.yml
index 7573b3d5..252897ee 100644
--- a/docker/honeytrap/docker-compose.yml
+++ b/docker/honeytrap/docker-compose.yml
@@ -9,10 +9,12 @@ services:
     restart: always
     tmpfs:
      - /tmp/honeytrap:uid=2000,gid=2000
+#    cpu_count: 1
+#    cpus: 0.75
     network_mode: "host"
     cap_add:
      - NET_ADMIN
-    image: "dtagdevsec/honeytrap:2006"
+    image: "dtagdevsec/honeytrap:2204"
     read_only: true
     volumes:
      - /data/honeytrap/attacks:/opt/honeytrap/var/attacks
diff --git a/docker/ipphoney/Dockerfile b/docker/ipphoney/Dockerfile
index e6529348..489364c3 100644
--- a/docker/ipphoney/Dockerfile
+++ b/docker/ipphoney/Dockerfile
@@ -1,10 +1,10 @@
-FROM alpine:3.13
+FROM alpine:3.15
 #
 # Include dist
-ADD dist/ /root/dist/
+COPY dist/ /root/dist/
 #
 # Install packages
-RUN apk -U add \
+RUN apk -U --no-cache add \
              build-base \
 	     ca-certificates \
              git \
@@ -14,9 +14,20 @@ RUN apk -U add \
              openssl-dev \
 	     postgresql-dev \
 	     py3-cryptography \
+	     py3-elasticsearch \
+	     py3-geoip2 \
+	     py3-maxminddb \
              py3-mysqlclient \
              py3-requests \
+	     py3-packaging \
 	     py3-pip \
+	     py3-psycopg2 \
+	     py3-redis \
+	     py3-requests \
+	     py3-service_identity \
+	     py3-setuptools \
+	     py3-twisted \
+	     py3-wheel \
              python3 \
              python3-dev && \
     mkdir -p /opt && \
@@ -24,8 +35,9 @@ RUN apk -U add \
     git clone https://gitlab.com/bontchev/ipphoney.git/ && \
     cd ipphoney && \
     git checkout 7ab1cac437baba17cb2cd25d5bb1400327e1bb79 && \
+    cp /root/dist/requirements.txt . && \
     pip3 install -r requirements.txt && \
-    setcap cap_net_bind_service=+ep /usr/bin/python3.8 && \
+    setcap cap_net_bind_service=+ep /usr/bin/python3.9 && \
 #
 # Setup user, groups and configs
     addgroup -g 2000 ipphoney && \
@@ -39,8 +51,7 @@ RUN apk -U add \
 		    openssl-dev \
 		    postgresql-dev \
 		    python3-dev && \
-    rm -rf /root/* && \
-    rm -rf /var/cache/apk/*
+    rm -rf /root/* /var/cache/apk/* /opt/ipphoney/.git
 #
 # Start ipphoney
 STOPSIGNAL SIGINT
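
The `setcap` target moves from python3.8 to python3.9 to match Alpine 3.15's interpreter; without it the unprivileged `ipphoney` user could not bind the privileged IPP port 631. A quick way to confirm the capability inside the built image:

```bash
# Verify the capability on the interpreter the Dockerfile grants it to.
# (Output formatting varies slightly between libcap versions.)
getcap /usr/bin/python3.9
```
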
diff --git a/docker/ipphoney/dist/requirements.txt b/docker/ipphoney/dist/requirements.txt
new file mode 100644
index 00000000..882d8f6f
--- /dev/null
+++ b/docker/ipphoney/dist/requirements.txt
@@ -0,0 +1,4 @@
+configparser>=3.5.0
+couchdb
+hpfeeds>=3.0.0
+pymongo
diff --git a/docker/ipphoney/docker-compose.yml b/docker/ipphoney/docker-compose.yml
index 69328fc0..dbe4c94e 100644
--- a/docker/ipphoney/docker-compose.yml
+++ b/docker/ipphoney/docker-compose.yml
@@ -10,11 +10,13 @@ services:
     build: .
     container_name: ipphoney
     restart: always
+#    cpu_count: 1
+#    cpus: 0.25
     networks:
      - ipphoney_local
     ports:
      - "631:631"
-    image: "dtagdevsec/ipphoney:2006"
+    image: "dtagdevsec/ipphoney:2204"
     read_only: true
     volumes:
      - /data/ipphoney/log:/opt/ipphoney/log
diff --git a/docker/log4pot/Dockerfile b/docker/log4pot/Dockerfile
index 3d6aab31..6dde6ab3 100644
--- a/docker/log4pot/Dockerfile
+++ b/docker/log4pot/Dockerfile
@@ -2,9 +2,7 @@ FROM ubuntu:20.04
 ENV DEBIAN_FRONTEND noninteractive
 #
 # Install packages
-RUN apt-get update && \
-    apt-get update -y && \
-    apt-get dist-upgrade -y && \
+RUN apt-get update -y && \
     apt-get install -y \
              build-essential \
 	     cargo \
@@ -29,9 +27,9 @@ RUN apt-get update && \
     cd /opt/ && \
     git clone https://github.com/thomaspatzke/Log4Pot && \
     cd Log4Pot && \
-#    git checkout 4269bf4a91457328fb64c3e7941cb2f520e5e911 && \
-    git checkout 4e9bac32605e4d2dd4bbc6df56365988b4815c4a && \
-    sed -i 's#"type": logtype,#"reason": logtype,#g' log4pot.py && \
+#    git checkout b163858649801974e9b86cea397f5eb137c7c01b && \
+    git checkout fac539f470217347e51127c635f16749a887c0ac && \
+    sed -i 's#"type": logtype,#"reason": logtype,#g' log4pot-server.py && \
     poetry install && \
     setcap cap_net_bind_service=+ep /usr/bin/python3.8 && \
 #
@@ -49,10 +47,10 @@ RUN apt-get update && \
 		    python3-dev \
 		    rust-all && \
     apt-get autoremove -y --purge && \
-    apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
+    apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/* /root/.cache /opt/Log4Pot/.git
 #
 # Start log4pot
 STOPSIGNAL SIGINT
 USER log4pot:log4pot
 WORKDIR /opt/Log4Pot/
-CMD ["/usr/bin/python3","log4pot.py","--port","8080","--log","/var/log/log4pot/log/log4pot.log","--download-dir","/var/log/log4pot/payloads/","--download-class","--download-payloads"]
+CMD ["/usr/bin/python3","log4pot-server.py","--port","8080","--log","/var/log/log4pot/log/log4pot.log","--payloader","--download-dir","/var/log/log4pot/payloads/","--download-timeout","15","--response","/opt/Log4Pot/responses/sap-netweaver.html"]
diff --git a/docker/log4pot/docker-compose.yml b/docker/log4pot/docker-compose.yml
index 408129e0..54992265 100644
--- a/docker/log4pot/docker-compose.yml
+++ b/docker/log4pot/docker-compose.yml
@@ -12,6 +12,8 @@ services:
     restart: always
     tmpfs:
      - /tmp:uid=2000,gid=2000
+#    cpu_count: 1
+#    cpus: 0.25
     networks:
      - log4pot_local
     ports:
@@ -20,7 +22,7 @@ services:
      - "8080:8080"
      - "9200:8080"
      - "25565:8080"
-    image: "dtagdevsec/log4pot:2006"
+    image: "dtagdevsec/log4pot:2204"
     read_only: true
     volumes:
      - /data/log4pot/log:/var/log/log4pot/log
diff --git a/docker/mailoney/Dockerfile b/docker/mailoney/Dockerfile
index 2376f854..b1314269 100644
--- a/docker/mailoney/Dockerfile
+++ b/docker/mailoney/Dockerfile
@@ -1,35 +1,14 @@
-FROM alpine:3.11
+FROM alpine:3.15
 #
 # Install packages
 RUN apk -U --no-cache add \
-            autoconf \
-            automake \
-            build-base \
             git \
             libcap \
-            libtool \
-            py-pip \
-            python \
-            python-dev && \
-#
-# Install libemu    
-    git clone https://github.com/buffer/libemu /root/libemu/ && \
-    cd /root/libemu/ && \
-    git checkout e2624361e13588da74a2ce3e1dea0abb59dcf1d0 && \
-    autoreconf -vi && \
-    ./configure && \
-    make && \
-    make install && \
-#
-# Install libemu python wrapper
-    pip install --no-cache-dir \ 
-                        hpfeeds \
-                        pylibemu && \ 
+            python2 && \
 #
 # Install mailoney from git
     git clone https://github.com/t3chn0m4g3/mailoney /opt/mailoney && \
     cd /opt/mailoney && \
-    git checkout 85c37649a99e1cec3f8d48d509653c9a8127ea4f && \
 #
 # Setup user, groups and configs
     addgroup -g 2000 mailoney && \
@@ -38,14 +17,8 @@ RUN apk -U --no-cache add \
     setcap cap_net_bind_service=+ep /usr/bin/python2.7 && \
 #
 # Clean up
-    apk del --purge autoconf \
-                    automake \
-                    build-base \
-                    git \
-                    py-pip \
-                    python-dev && \
-    rm -rf /root/* && \
-    rm -rf /var/cache/apk/*
+    apk del --purge git && \
+    rm -rf /root/* /var/cache/apk/* /opt/mailoney/.git
 #
 # Set workdir and start mailoney
 STOPSIGNAL SIGINT
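
With libemu and its Python wrapper removed, the image boils down to Python 2 plus the mailoney checkout. A minimal SMTP smoke test against the published port, assuming netcat on the host:

```bash
# Speak just enough SMTP to confirm the honeypot answers on port 25.
printf 'EHLO test\r\nQUIT\r\n' | nc -w 5 localhost 25
```
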
diff --git a/docker/mailoney/docker-compose.yml b/docker/mailoney/docker-compose.yml
index c5979e6b..4e221c79 100644
--- a/docker/mailoney/docker-compose.yml
+++ b/docker/mailoney/docker-compose.yml
@@ -16,11 +16,13 @@ services:
      - HPFEEDS_SECRET=pass
      - HPFEEDS_PORT=20000
      - HPFEEDS_CHANNELPREFIX=prefix
+#    cpu_count: 1
+#    cpus: 0.25
     networks:
      - mailoney_local
     ports:
      - "25:25"
-    image: "dtagdevsec/mailoney:2006"
+    image: "dtagdevsec/mailoney:2204"
     read_only: true
     volumes:
      - /data/mailoney/log:/opt/mailoney/logs
diff --git a/docker/medpot/Dockerfile b/docker/medpot/Dockerfile
index c5ea415b..6cf8527c 100644
--- a/docker/medpot/Dockerfile
+++ b/docker/medpot/Dockerfile
@@ -1,4 +1,4 @@
-FROM alpine:3.14
+FROM alpine:3.15
 #
 # Setup apk
 RUN apk -U --no-cache add \
diff --git a/docker/medpot/docker-compose.yml b/docker/medpot/docker-compose.yml
index a5565475..f4aaf5d8 100644
--- a/docker/medpot/docker-compose.yml
+++ b/docker/medpot/docker-compose.yml
@@ -10,11 +10,13 @@ services:
     build: .
     container_name: medpot
     restart: always
+#    cpu_count: 1
+#    cpus: 0.25
     networks:
      - medpot_local
     ports:
      - "2575:2575"
-    image: "dtagdevsec/medpot:2006"
+    image: "dtagdevsec/medpot:2204"
     read_only: true
     volumes:
      - /data/medpot/log/:/var/log/medpot
diff --git a/docker/nginx/Dockerfile b/docker/nginx/Dockerfile
new file mode 100644
index 00000000..5244ac49
--- /dev/null
+++ b/docker/nginx/Dockerfile
@@ -0,0 +1,36 @@
+FROM alpine:3.15
+#
+# Include dist
+COPY dist/ /root/dist/
+#
+# Get and install dependencies & packages
+RUN apk -U --no-cache add \
+      nginx \
+      nginx-mod-http-headers-more && \
+#
+## Setup T-Pot Landing Page, Elasticvue, Cyberchef
+    cp -R /root/dist/html/* /var/lib/nginx/html/ && \
+    cd /var/lib/nginx/html/esvue && \
+    tar xvfz esvue.tgz && \
+    rm esvue.tgz && \
+    cd /var/lib/nginx/html/cyberchef && \
+    tar xvfz cyberchef.tgz && \
+    rm cyberchef.tgz && \
+#
+## Change ownership, permissions
+    chown root:www-data -R /var/lib/nginx/html && \
+    chmod 755 -R /var/lib/nginx/html && \
+#
+## Add Nginx / T-Pot specific configs
+    rm -rf /etc/nginx/conf.d/* /usr/share/nginx/html/* && \
+    mkdir -p /etc/nginx/conf.d && \
+    cp /root/dist/conf/nginx.conf /etc/nginx/ && \
+    cp -R /root/dist/conf/ssl /etc/nginx/ && \
+    cp /root/dist/conf/tpotweb.conf /etc/nginx/conf.d/ && \
+#
+# Clean up
+    rm -rf /root/* && \
+    rm -rf /var/cache/apk/*
+#
+# Start nginx
+CMD nginx -g 'daemon off;'
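
The new image only unpacks the prebuilt Elasticvue and CyberChef tarballs and drops in the T-Pot configs, so a local build can be sanity-checked quickly (the tag below is an example):

```bash
# Build and confirm the web apps were unpacked next to the landing page.
cd docker/nginx
docker build -t tpot-nginx-test .
docker run --rm tpot-nginx-test ls /var/lib/nginx/html
```
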
diff --git a/docker/nginx/builder/cyberchef/Dockerfile b/docker/nginx/builder/cyberchef/Dockerfile
new file mode 100644
index 00000000..3d05b307
--- /dev/null
+++ b/docker/nginx/builder/cyberchef/Dockerfile
@@ -0,0 +1,18 @@
+FROM node:10.24.1-alpine3.11 AS builder
+#
+# Prep and build Cyberchef 
+RUN apk -U --no-cache add git && \
+    chown -R node:node /srv && \
+    npm install -g grunt-cli
+WORKDIR /srv
+USER node
+RUN git clone https://github.com/gchq/cyberchef -b v9.32.3 . && \
+    NODE_OPTIONS=--max_old_space_size=2048 && \
+    npm install && \
+    grunt prod && \
+    cd build/prod && \
+    rm CyberChef_v9.32.3.zip && \
+    tar cvfz cyberchef.tgz *
+#    
+FROM scratch AS exporter
+COPY --from=builder /srv/build/prod/cyberchef.tgz /
diff --git a/docker/nginx/builder/cyberchef/build.sh b/docker/nginx/builder/cyberchef/build.sh
new file mode 100755
index 00000000..ccf3660b
--- /dev/null
+++ b/docker/nginx/builder/cyberchef/build.sh
@@ -0,0 +1,3 @@
+#!/bin/bash
+# Needs buildx to build. Run tpotce/bin/setup-builder.sh first
+docker buildx build --output ../../dist/html/cyberchef/ .
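
Both builder Dockerfiles end in a `FROM scratch AS exporter` stage whose only content is the tarball, so `docker buildx build --output <dir>` (which defaults to the local exporter when given a bare directory) writes the artifact straight to the host instead of producing an image:

```bash
# The scratch stage's cyberchef.tgz lands directly in dist/html/cyberchef/.
cd docker/nginx/builder/cyberchef
docker buildx build --output ../../dist/html/cyberchef/ .
ls -lh ../../dist/html/cyberchef/cyberchef.tgz
```
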
diff --git a/docker/nginx/builder/esvue/Dockerfile b/docker/nginx/builder/esvue/Dockerfile
new file mode 100644
index 00000000..6c153ba6
--- /dev/null
+++ b/docker/nginx/builder/esvue/Dockerfile
@@ -0,0 +1,21 @@
+FROM node:14.18-alpine AS builder
+#
+# Prep and build Elasticvue 
+RUN apk -U --no-cache add git && \
+    git clone https://github.com/cars10/elasticvue /opt/src && \
+# We need to adjust consts.js so the user gets a connection suggestion for the reverse-proxied ES
+    sed -i "s#export const DEFAULT_HOST = 'http://localhost:9200'#export const DEFAULT_HOST = window.location.origin + '/es'#g" /opt/src/src/consts.js && \
+    sed -i 's#href="/images/logo/favicon.ico"#href="images/logo/favicon.ico"#g' /opt/src/public/index.html && \
+    mkdir /opt/app && \
+    cd /opt/app && \
+    cp /opt/src/package.json . && \
+    cp /opt/src/yarn.lock . && \
+    yarn install && \
+    cp -R /opt/src/* . && \
+# We need to set this ENV so Elasticvue can run from its own location rather than /
+    VUE_APP_PUBLIC_PATH=/elasticvue/ yarn build && \
+    cd dist && \
+    tar cvfz esvue.tgz *
+#    
+FROM scratch AS exporter
+COPY --from=builder /opt/app/dist/esvue.tgz /
diff --git a/docker/nginx/builder/esvue/build.sh b/docker/nginx/builder/esvue/build.sh
new file mode 100755
index 00000000..07a37c14
--- /dev/null
+++ b/docker/nginx/builder/esvue/build.sh
@@ -0,0 +1,3 @@
+#!/bin/bash
+# Needs buildx to build. Run tpotce/bin/setup-builder.sh first
+docker buildx build --output ../../dist/html/esvue/ .
diff --git a/docker/nginx/dist/conf/nginx.conf b/docker/nginx/dist/conf/nginx.conf
new file mode 100644
index 00000000..dbe2d6f9
--- /dev/null
+++ b/docker/nginx/dist/conf/nginx.conf
@@ -0,0 +1,105 @@
+user nginx;
+worker_processes auto;
+pid /run/nginx.pid;
+load_module /usr/lib/nginx/modules/ngx_http_headers_more_filter_module.so;
+
+events {
+	worker_connections 768;
+	# multi_accept on;
+}
+
+http {
+
+	##
+	# Basic Settings
+	##
+
+	sendfile on;
+	tcp_nopush on;
+	tcp_nodelay on;
+	keepalive_timeout 65;
+	types_hash_max_size 2048;
+	# server_tokens off;
+
+	# server_names_hash_bucket_size 64;
+	# server_name_in_redirect off;
+
+	include /etc/nginx/mime.types;
+	default_type application/octet-stream;
+
+	##
+	# SSL Settings
+	##
+
+	#ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
+	ssl_protocols TLSv1.2 TLSv1.3; # Dropping SSLv3, ref: POODLE
+	ssl_prefer_server_ciphers on;
+
+	##
+	# Logging Settings
+	##
+
+      log_format main_json escape=json '{'
+        '"msec": "$msec", ' # request unixtime in seconds with a milliseconds resolution
+        '"connection_serial": "$connection", ' # connection serial number
+        '"connection_requests": "$connection_requests", ' # number of requests made in connection
+        '"pid": "$pid", ' # process pid
+        '"request_id": "$request_id", ' # the unique request id
+        '"request_length": "$request_length", ' # request length (including headers and body)
+        '"src_ip": "$remote_addr", ' # client IP
+        '"remote_user": "$remote_user", ' # client HTTP username
+        '"src_port": "$remote_port", ' # client port
+        '"time_local": "$time_local", '
+        '"time_iso8601": "$time_iso8601", ' # local time in the ISO 8601 standard format
+        '"request_data": "$request", ' # full path no arguments if the request
+        '"request_uri": "$request_uri", ' # full path and arguments if the request
+        '"args": "$args", ' # args
+        '"status": "$status", ' # response status code
+        '"body_bytes_sent": "$body_bytes_sent", ' # the number of body bytes exclude headers sent to a client
+        '"bytes_sent": "$bytes_sent", ' # the number of bytes sent to a client
+        '"http_referer": "$http_referer", ' # HTTP referer
+        '"http_user_agent": "$http_user_agent", ' # user agent
+        '"http_x_forwarded_for": "$http_x_forwarded_for", ' # http_x_forwarded_for
+        '"http_host": "$http_host", ' # the request Host: header
+        '"server_name": "$server_name", ' # the name of the vhost serving the request
+        '"request_time": "$request_time", ' # request processing time in seconds with msec resolution
+        '"upstream": "$upstream_addr", ' # upstream backend server for proxied requests
+        '"upstream_connect_time": "$upstream_connect_time", ' # upstream handshake time incl. TLS
+        '"upstream_header_time": "$upstream_header_time", ' # time spent receiving upstream headers
+        '"upstream_response_time": "$upstream_response_time", ' # time spend receiving upstream body
+        '"upstream_response_length": "$upstream_response_length", ' # upstream response length
+        '"upstream_cache_status": "$upstream_cache_status", ' # cache HIT/MISS where applicable
+        '"ssl_protocol": "$ssl_protocol", ' # TLS protocol
+        '"ssl_cipher": "$ssl_cipher", ' # TLS cipher
+        '"scheme": "$scheme", ' # http or https
+        '"request_method": "$request_method", ' # request method
+        '"server_protocol": "$server_protocol", ' # request protocol, like HTTP/1.1 or HTTP/2.0
+        '"pipe": "$pipe", ' # “p” if request was pipelined, “.” otherwise
+        '"gzip_ratio": "$gzip_ratio", '
+        '"http_cf_ray": "$http_cf_ray"'
+      '}';
+
+ 	access_log /var/log/nginx/access.log main_json;
+	error_log /var/log/nginx/error.log;
+
+	##
+	# Gzip Settings
+	##
+
+	gzip on;
+	gzip_disable "msie6";
+
+	# gzip_vary on;
+	# gzip_proxied any;
+	# gzip_comp_level 6;
+	# gzip_buffers 16 8k;
+	# gzip_http_version 1.1;
+	# gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
+
+	##
+	# Virtual Host Configs
+	##
+
+	include /etc/nginx/conf.d/*.conf;
+	include /etc/nginx/sites-enabled/*;
+}
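
Because `escape=json` makes every access-log line a self-contained JSON object, downstream tooling can parse it without grok patterns. A quick field check on the newest line, assuming the host-side log mount from the compose file and `jq`:

```bash
# Pull a few fields from the most recent JSON access-log entry.
tail -n 1 /data/nginx/log/access.log | jq '{src_ip, request_uri, status}'
```
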
diff --git a/docker/heimdall/dist/conf/ssl/dhparam4096.pem b/docker/nginx/dist/conf/ssl/dhparam4096.pem
similarity index 100%
rename from docker/heimdall/dist/conf/ssl/dhparam4096.pem
rename to docker/nginx/dist/conf/ssl/dhparam4096.pem
diff --git a/docker/heimdall/dist/conf/ssl/gen-cert.sh b/docker/nginx/dist/conf/ssl/gen-cert.sh
similarity index 100%
rename from docker/heimdall/dist/conf/ssl/gen-cert.sh
rename to docker/nginx/dist/conf/ssl/gen-cert.sh
diff --git a/docker/heimdall/dist/conf/ssl/gen-dhparam.sh b/docker/nginx/dist/conf/ssl/gen-dhparam.sh
similarity index 100%
rename from docker/heimdall/dist/conf/ssl/gen-dhparam.sh
rename to docker/nginx/dist/conf/ssl/gen-dhparam.sh
diff --git a/docker/heimdall/dist/conf/tpotweb.conf b/docker/nginx/dist/conf/tpotweb.conf
similarity index 74%
rename from docker/heimdall/dist/conf/tpotweb.conf
rename to docker/nginx/dist/conf/tpotweb.conf
index 42473407..d41a720e 100644
--- a/docker/heimdall/dist/conf/tpotweb.conf
+++ b/docker/nginx/dist/conf/tpotweb.conf
@@ -8,12 +8,11 @@ server {
     ### Basic server settings
     #########################
     listen 64297 ssl http2;
-    #index tpotweb.html;
-    index index.php;
+    index index.html;
     ssl_protocols TLSv1.3;
     server_name example.com;
     error_page 300 301 302 400 401 402 403 404 500 501 502 503 504 /error.html;
-    root /var/lib/nginx/html/public;
+    root /var/lib/nginx/html;
 
 
     ##############################################
@@ -28,7 +27,7 @@ server {
     ##############################################
     ssl_certificate /etc/nginx/cert/nginx.crt;
     ssl_certificate_key /etc/nginx/cert/nginx.key;
-    
+
     ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH:!DHE:!SHA:!SHA256';
     ssl_ecdh_curve secp384r1;
     ssl_dhparam /etc/nginx/ssl/dhparam4096.pem;
@@ -41,8 +40,8 @@ server {
     ### OWASP recommendations / settings
     ####################################
 
-    ### Size Limits & Buffer Overflows 
-    ### the size may be configured based on the needs. 
+    ### Size Limits & Buffer Overflows
+    ### the size may be configured based on the needs.
     client_body_buffer_size  128k;
     client_header_buffer_size 1k;
     client_max_body_size 2M;
@@ -66,7 +65,7 @@ server {
 
     ### This will enforce HTTP browsing into HTTPS and avoid ssl stripping attack
     add_header Strict-Transport-Security "max-age=31536000; includeSubdomains;";
-
+#    add_header 'Content-Security-Policy' 'upgrade-insecure-requests';
 
     ##################################
     ### Restrict access and basic auth
@@ -86,24 +85,29 @@ server {
     auth_basic_user_file /etc/nginx/nginxpasswd;
 
 
-    ############
-    ### Heimdall
-    ############
+    #############################
+    ### T-Pot Landing Page & Apps
+    #############################
 
     location / {
         auth_basic           "closed site";
         auth_basic_user_file /etc/nginx/nginxpasswd;
-        try_files $uri $uri/ /index.php?$query_string;
+        try_files $uri $uri/ /index.html?$args;
     }
 
-    location ~ \.php$ {
-        fastcgi_split_path_info ^(.+\.php)(/.+)$;
-        fastcgi_pass 127.0.0.1:64304;
-        fastcgi_index index.php;
-        include fastcgi_params;
-        fastcgi_param  SCRIPT_FILENAME $document_root$fastcgi_script_name;
+    location ^~ /cyberchef {
+        index index.html;
+        alias /var/lib/nginx/html/cyberchef;
+        try_files $uri $uri/ /index.html?$args;
     }
 
+    location ^~ /elasticvue {
+        index index.html;
+        alias /var/lib/nginx/html/esvue;
+        try_files $uri $uri/ /index.html?$args;
+    }
+
+
     #################
     ### Proxied sites
     #################
@@ -114,22 +118,28 @@ server {
         rewrite /kibana/(.*)$ /$1 break;
     }
 
-    ### ES 
+    ### ES
     location /es/ {
         proxy_pass http://127.0.0.1:64298/;
         rewrite /es/(.*)$ /$1 break;
     }
 
-    ### head standalone 
-    location /myhead/ {
-        proxy_pass http://127.0.0.1:64302/;
-        rewrite /myhead/(.*)$ /$1 break;
+    ### Map
+    location /map/ {
+        proxy_pass http://127.0.0.1:64299/;
+        rewrite /map/(.*)$ /$1 break;
+	proxy_http_version 1.1;
+        proxy_set_header Upgrade $http_upgrade;
+        proxy_set_header Connection "Upgrade";
+	proxy_set_header Host $host;
     }
-
-    ### CyberChef
-    location /cyberchef {
+    location /websocket {
         proxy_pass http://127.0.0.1:64299;
-        rewrite ^/cyberchef(.*)$ /$1 break;
+        proxy_read_timeout 3600s;
+        proxy_http_version 1.1;
+        proxy_set_header Upgrade $http_upgrade;
+        proxy_set_header Connection "Upgrade";
+        proxy_set_header Host $host;
     }
 
     ### spiderfoot
@@ -144,7 +154,7 @@ server {
         location /scanviz {
             proxy_pass http://127.0.0.1:64303/spiderfoot/scanviz;
         }
-        
+
         location /scandelete {
             proxy_pass http://127.0.0.1:64303/spiderfoot/scandelete;
         }
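
The `/map/` and `/websocket` locations add the `Upgrade`/`Connection` headers the attack map needs; without them nginx would proxy the stream as plain HTTP and the handshake would fail. A hedged handshake test through the reverse proxy (host, port and credentials are examples):

```bash
# A 101 response means nginx proxied the WebSocket upgrade through.
curl -sk --http1.1 -o /dev/null -w '%{http_code}\n' -u admin:changeme \
     -H 'Connection: Upgrade' -H 'Upgrade: websocket' \
     -H 'Sec-WebSocket-Version: 13' \
     -H 'Sec-WebSocket-Key: x3JJHMbDL1EzLkh9GBhXDw==' \
     https://localhost:64297/websocket
```
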
diff --git a/docker/nginx/dist/html/License b/docker/nginx/dist/html/License
new file mode 100644
index 00000000..61d18602
--- /dev/null
+++ b/docker/nginx/dist/html/License
@@ -0,0 +1,674 @@
+                    GNU GENERAL PUBLIC LICENSE
+                       Version 3, 29 June 2007
+
+ Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
+ Everyone is permitted to copy and distribute verbatim copies
+ of this license document, but changing it is not allowed.
+
+                            Preamble
+
+  The GNU General Public License is a free, copyleft license for
+software and other kinds of works.
+
+  The licenses for most software and other practical works are designed
+to take away your freedom to share and change the works.  By contrast,
+the GNU General Public License is intended to guarantee your freedom to
+share and change all versions of a program--to make sure it remains free
+software for all its users.  We, the Free Software Foundation, use the
+GNU General Public License for most of our software; it applies also to
+any other work released this way by its authors.  You can apply it to
+your programs, too.
+
+  When we speak of free software, we are referring to freedom, not
+price.  Our General Public Licenses are designed to make sure that you
+have the freedom to distribute copies of free software (and charge for
+them if you wish), that you receive source code or can get it if you
+want it, that you can change the software or use pieces of it in new
+free programs, and that you know you can do these things.
+
+  To protect your rights, we need to prevent others from denying you
+these rights or asking you to surrender the rights.  Therefore, you have
+certain responsibilities if you distribute copies of the software, or if
+you modify it: responsibilities to respect the freedom of others.
+
+  For example, if you distribute copies of such a program, whether
+gratis or for a fee, you must pass on to the recipients the same
+freedoms that you received.  You must make sure that they, too, receive
+or can get the source code.  And you must show them these terms so they
+know their rights.
+
+  Developers that use the GNU GPL protect your rights with two steps:
+(1) assert copyright on the software, and (2) offer you this License
+giving you legal permission to copy, distribute and/or modify it.
+
+  For the developers' and authors' protection, the GPL clearly explains
+that there is no warranty for this free software.  For both users' and
+authors' sake, the GPL requires that modified versions be marked as
+changed, so that their problems will not be attributed erroneously to
+authors of previous versions.
+
+  Some devices are designed to deny users access to install or run
+modified versions of the software inside them, although the manufacturer
+can do so.  This is fundamentally incompatible with the aim of
+protecting users' freedom to change the software.  The systematic
+pattern of such abuse occurs in the area of products for individuals to
+use, which is precisely where it is most unacceptable.  Therefore, we
+have designed this version of the GPL to prohibit the practice for those
+products.  If such problems arise substantially in other domains, we
+stand ready to extend this provision to those domains in future versions
+of the GPL, as needed to protect the freedom of users.
+
+  Finally, every program is threatened constantly by software patents.
+States should not allow patents to restrict development and use of
+software on general-purpose computers, but in those that do, we wish to
+avoid the special danger that patents applied to a free program could
+make it effectively proprietary.  To prevent this, the GPL assures that
+patents cannot be used to render the program non-free.
+
+  The precise terms and conditions for copying, distribution and
+modification follow.
+
+                       TERMS AND CONDITIONS
+
+  0. Definitions.
+
+  "This License" refers to version 3 of the GNU General Public License.
+
+  "Copyright" also means copyright-like laws that apply to other kinds of
+works, such as semiconductor masks.
+
+  "The Program" refers to any copyrightable work licensed under this
+License.  Each licensee is addressed as "you".  "Licensees" and
+"recipients" may be individuals or organizations.
+
+  To "modify" a work means to copy from or adapt all or part of the work
+in a fashion requiring copyright permission, other than the making of an
+exact copy.  The resulting work is called a "modified version" of the
+earlier work or a work "based on" the earlier work.
+
+  A "covered work" means either the unmodified Program or a work based
+on the Program.
+
+  To "propagate" a work means to do anything with it that, without
+permission, would make you directly or secondarily liable for
+infringement under applicable copyright law, except executing it on a
+computer or modifying a private copy.  Propagation includes copying,
+distribution (with or without modification), making available to the
+public, and in some countries other activities as well.
+
+  To "convey" a work means any kind of propagation that enables other
+parties to make or receive copies.  Mere interaction with a user through
+a computer network, with no transfer of a copy, is not conveying.
+
+  An interactive user interface displays "Appropriate Legal Notices"
+to the extent that it includes a convenient and prominently visible
+feature that (1) displays an appropriate copyright notice, and (2)
+tells the user that there is no warranty for the work (except to the
+extent that warranties are provided), that licensees may convey the
+work under this License, and how to view a copy of this License.  If
+the interface presents a list of user commands or options, such as a
+menu, a prominent item in the list meets this criterion.
+
+  1. Source Code.
+
+  The "source code" for a work means the preferred form of the work
+for making modifications to it.  "Object code" means any non-source
+form of a work.
+
+  A "Standard Interface" means an interface that either is an official
+standard defined by a recognized standards body, or, in the case of
+interfaces specified for a particular programming language, one that
+is widely used among developers working in that language.
+
+  The "System Libraries" of an executable work include anything, other
+than the work as a whole, that (a) is included in the normal form of
+packaging a Major Component, but which is not part of that Major
+Component, and (b) serves only to enable use of the work with that
+Major Component, or to implement a Standard Interface for which an
+implementation is available to the public in source code form.  A
+"Major Component", in this context, means a major essential component
+(kernel, window system, and so on) of the specific operating system
+(if any) on which the executable work runs, or a compiler used to
+produce the work, or an object code interpreter used to run it.
+
+  The "Corresponding Source" for a work in object code form means all
+the source code needed to generate, install, and (for an executable
+work) run the object code and to modify the work, including scripts to
+control those activities.  However, it does not include the work's
+System Libraries, or general-purpose tools or generally available free
+programs which are used unmodified in performing those activities but
+which are not part of the work.  For example, Corresponding Source
+includes interface definition files associated with source files for
+the work, and the source code for shared libraries and dynamically
+linked subprograms that the work is specifically designed to require,
+such as by intimate data communication or control flow between those
+subprograms and other parts of the work.
+
+  The Corresponding Source need not include anything that users
+can regenerate automatically from other parts of the Corresponding
+Source.
+
+  The Corresponding Source for a work in source code form is that
+same work.
+
+  2. Basic Permissions.
+
+  All rights granted under this License are granted for the term of
+copyright on the Program, and are irrevocable provided the stated
+conditions are met.  This License explicitly affirms your unlimited
+permission to run the unmodified Program.  The output from running a
+covered work is covered by this License only if the output, given its
+content, constitutes a covered work.  This License acknowledges your
+rights of fair use or other equivalent, as provided by copyright law.
+
+  You may make, run and propagate covered works that you do not
+convey, without conditions so long as your license otherwise remains
+in force.  You may convey covered works to others for the sole purpose
+of having them make modifications exclusively for you, or provide you
+with facilities for running those works, provided that you comply with
+the terms of this License in conveying all material for which you do
+not control copyright.  Those thus making or running the covered works
+for you must do so exclusively on your behalf, under your direction
+and control, on terms that prohibit them from making any copies of
+your copyrighted material outside their relationship with you.
+
+  Conveying under any other circumstances is permitted solely under
+the conditions stated below.  Sublicensing is not allowed; section 10
+makes it unnecessary.
+
+  3. Protecting Users' Legal Rights From Anti-Circumvention Law.
+
+  No covered work shall be deemed part of an effective technological
+measure under any applicable law fulfilling obligations under article
+11 of the WIPO copyright treaty adopted on 20 December 1996, or
+similar laws prohibiting or restricting circumvention of such
+measures.
+
+  When you convey a covered work, you waive any legal power to forbid
+circumvention of technological measures to the extent such circumvention
+is effected by exercising rights under this License with respect to
+the covered work, and you disclaim any intention to limit operation or
+modification of the work as a means of enforcing, against the work's
+users, your or third parties' legal rights to forbid circumvention of
+technological measures.
+
+  4. Conveying Verbatim Copies.
+
+  You may convey verbatim copies of the Program's source code as you
+receive it, in any medium, provided that you conspicuously and
+appropriately publish on each copy an appropriate copyright notice;
+keep intact all notices stating that this License and any
+non-permissive terms added in accord with section 7 apply to the code;
+keep intact all notices of the absence of any warranty; and give all
+recipients a copy of this License along with the Program.
+
+  You may charge any price or no price for each copy that you convey,
+and you may offer support or warranty protection for a fee.
+
+  5. Conveying Modified Source Versions.
+
+  You may convey a work based on the Program, or the modifications to
+produce it from the Program, in the form of source code under the
+terms of section 4, provided that you also meet all of these conditions:
+
+    a) The work must carry prominent notices stating that you modified
+    it, and giving a relevant date.
+
+    b) The work must carry prominent notices stating that it is
+    released under this License and any conditions added under section
+    7.  This requirement modifies the requirement in section 4 to
+    "keep intact all notices".
+
+    c) You must license the entire work, as a whole, under this
+    License to anyone who comes into possession of a copy.  This
+    License will therefore apply, along with any applicable section 7
+    additional terms, to the whole of the work, and all its parts,
+    regardless of how they are packaged.  This License gives no
+    permission to license the work in any other way, but it does not
+    invalidate such permission if you have separately received it.
+
+    d) If the work has interactive user interfaces, each must display
+    Appropriate Legal Notices; however, if the Program has interactive
+    interfaces that do not display Appropriate Legal Notices, your
+    work need not make them do so.
+
+  A compilation of a covered work with other separate and independent
+works, which are not by their nature extensions of the covered work,
+and which are not combined with it such as to form a larger program,
+in or on a volume of a storage or distribution medium, is called an
+"aggregate" if the compilation and its resulting copyright are not
+used to limit the access or legal rights of the compilation's users
+beyond what the individual works permit.  Inclusion of a covered work
+in an aggregate does not cause this License to apply to the other
+parts of the aggregate.
+
+  6. Conveying Non-Source Forms.
+
+  You may convey a covered work in object code form under the terms
+of sections 4 and 5, provided that you also convey the
+machine-readable Corresponding Source under the terms of this License,
+in one of these ways:
+
+    a) Convey the object code in, or embodied in, a physical product
+    (including a physical distribution medium), accompanied by the
+    Corresponding Source fixed on a durable physical medium
+    customarily used for software interchange.
+
+    b) Convey the object code in, or embodied in, a physical product
+    (including a physical distribution medium), accompanied by a
+    written offer, valid for at least three years and valid for as
+    long as you offer spare parts or customer support for that product
+    model, to give anyone who possesses the object code either (1) a
+    copy of the Corresponding Source for all the software in the
+    product that is covered by this License, on a durable physical
+    medium customarily used for software interchange, for a price no
+    more than your reasonable cost of physically performing this
+    conveying of source, or (2) access to copy the
+    Corresponding Source from a network server at no charge.
+
+    c) Convey individual copies of the object code with a copy of the
+    written offer to provide the Corresponding Source.  This
+    alternative is allowed only occasionally and noncommercially, and
+    only if you received the object code with such an offer, in accord
+    with subsection 6b.
+
+    d) Convey the object code by offering access from a designated
+    place (gratis or for a charge), and offer equivalent access to the
+    Corresponding Source in the same way through the same place at no
+    further charge.  You need not require recipients to copy the
+    Corresponding Source along with the object code.  If the place to
+    copy the object code is a network server, the Corresponding Source
+    may be on a different server (operated by you or a third party)
+    that supports equivalent copying facilities, provided you maintain
+    clear directions next to the object code saying where to find the
+    Corresponding Source.  Regardless of what server hosts the
+    Corresponding Source, you remain obligated to ensure that it is
+    available for as long as needed to satisfy these requirements.
+
+    e) Convey the object code using peer-to-peer transmission, provided
+    you inform other peers where the object code and Corresponding
+    Source of the work are being offered to the general public at no
+    charge under subsection 6d.
+
+  A separable portion of the object code, whose source code is excluded
+from the Corresponding Source as a System Library, need not be
+included in conveying the object code work.
+
+  A "User Product" is either (1) a "consumer product", which means any
+tangible personal property which is normally used for personal, family,
+or household purposes, or (2) anything designed or sold for incorporation
+into a dwelling.  In determining whether a product is a consumer product,
+doubtful cases shall be resolved in favor of coverage.  For a particular
+product received by a particular user, "normally used" refers to a
+typical or common use of that class of product, regardless of the status
+of the particular user or of the way in which the particular user
+actually uses, or expects or is expected to use, the product.  A product
+is a consumer product regardless of whether the product has substantial
+commercial, industrial or non-consumer uses, unless such uses represent
+the only significant mode of use of the product.
+
+  "Installation Information" for a User Product means any methods,
+procedures, authorization keys, or other information required to install
+and execute modified versions of a covered work in that User Product from
+a modified version of its Corresponding Source.  The information must
+suffice to ensure that the continued functioning of the modified object
+code is in no case prevented or interfered with solely because
+modification has been made.
+
+  If you convey an object code work under this section in, or with, or
+specifically for use in, a User Product, and the conveying occurs as
+part of a transaction in which the right of possession and use of the
+User Product is transferred to the recipient in perpetuity or for a
+fixed term (regardless of how the transaction is characterized), the
+Corresponding Source conveyed under this section must be accompanied
+by the Installation Information.  But this requirement does not apply
+if neither you nor any third party retains the ability to install
+modified object code on the User Product (for example, the work has
+been installed in ROM).
+
+  The requirement to provide Installation Information does not include a
+requirement to continue to provide support service, warranty, or updates
+for a work that has been modified or installed by the recipient, or for
+the User Product in which it has been modified or installed.  Access to a
+network may be denied when the modification itself materially and
+adversely affects the operation of the network or violates the rules and
+protocols for communication across the network.
+
+  Corresponding Source conveyed, and Installation Information provided,
+in accord with this section must be in a format that is publicly
+documented (and with an implementation available to the public in
+source code form), and must require no special password or key for
+unpacking, reading or copying.
+
+  7. Additional Terms.
+
+  "Additional permissions" are terms that supplement the terms of this
+License by making exceptions from one or more of its conditions.
+Additional permissions that are applicable to the entire Program shall
+be treated as though they were included in this License, to the extent
+that they are valid under applicable law.  If additional permissions
+apply only to part of the Program, that part may be used separately
+under those permissions, but the entire Program remains governed by
+this License without regard to the additional permissions.
+
+  When you convey a copy of a covered work, you may at your option
+remove any additional permissions from that copy, or from any part of
+it.  (Additional permissions may be written to require their own
+removal in certain cases when you modify the work.)  You may place
+additional permissions on material, added by you to a covered work,
+for which you have or can give appropriate copyright permission.
+
+  Notwithstanding any other provision of this License, for material you
+add to a covered work, you may (if authorized by the copyright holders of
+that material) supplement the terms of this License with terms:
+
+    a) Disclaiming warranty or limiting liability differently from the
+    terms of sections 15 and 16 of this License; or
+
+    b) Requiring preservation of specified reasonable legal notices or
+    author attributions in that material or in the Appropriate Legal
+    Notices displayed by works containing it; or
+
+    c) Prohibiting misrepresentation of the origin of that material, or
+    requiring that modified versions of such material be marked in
+    reasonable ways as different from the original version; or
+
+    d) Limiting the use for publicity purposes of names of licensors or
+    authors of the material; or
+
+    e) Declining to grant rights under trademark law for use of some
+    trade names, trademarks, or service marks; or
+
+    f) Requiring indemnification of licensors and authors of that
+    material by anyone who conveys the material (or modified versions of
+    it) with contractual assumptions of liability to the recipient, for
+    any liability that these contractual assumptions directly impose on
+    those licensors and authors.
+
+  All other non-permissive additional terms are considered "further
+restrictions" within the meaning of section 10.  If the Program as you
+received it, or any part of it, contains a notice stating that it is
+governed by this License along with a term that is a further
+restriction, you may remove that term.  If a license document contains
+a further restriction but permits relicensing or conveying under this
+License, you may add to a covered work material governed by the terms
+of that license document, provided that the further restriction does
+not survive such relicensing or conveying.
+
+  If you add terms to a covered work in accord with this section, you
+must place, in the relevant source files, a statement of the
+additional terms that apply to those files, or a notice indicating
+where to find the applicable terms.
+
+  Additional terms, permissive or non-permissive, may be stated in the
+form of a separately written license, or stated as exceptions;
+the above requirements apply either way.
+
+  8. Termination.
+
+  You may not propagate or modify a covered work except as expressly
+provided under this License.  Any attempt otherwise to propagate or
+modify it is void, and will automatically terminate your rights under
+this License (including any patent licenses granted under the third
+paragraph of section 11).
+
+  However, if you cease all violation of this License, then your
+license from a particular copyright holder is reinstated (a)
+provisionally, unless and until the copyright holder explicitly and
+finally terminates your license, and (b) permanently, if the copyright
+holder fails to notify you of the violation by some reasonable means
+prior to 60 days after the cessation.
+
+  Moreover, your license from a particular copyright holder is
+reinstated permanently if the copyright holder notifies you of the
+violation by some reasonable means, this is the first time you have
+received notice of violation of this License (for any work) from that
+copyright holder, and you cure the violation prior to 30 days after
+your receipt of the notice.
+
+  Termination of your rights under this section does not terminate the
+licenses of parties who have received copies or rights from you under
+this License.  If your rights have been terminated and not permanently
+reinstated, you do not qualify to receive new licenses for the same
+material under section 10.
+
+  9. Acceptance Not Required for Having Copies.
+
+  You are not required to accept this License in order to receive or
+run a copy of the Program.  Ancillary propagation of a covered work
+occurring solely as a consequence of using peer-to-peer transmission
+to receive a copy likewise does not require acceptance.  However,
+nothing other than this License grants you permission to propagate or
+modify any covered work.  These actions infringe copyright if you do
+not accept this License.  Therefore, by modifying or propagating a
+covered work, you indicate your acceptance of this License to do so.
+
+  10. Automatic Licensing of Downstream Recipients.
+
+  Each time you convey a covered work, the recipient automatically
+receives a license from the original licensors, to run, modify and
+propagate that work, subject to this License.  You are not responsible
+for enforcing compliance by third parties with this License.
+
+  An "entity transaction" is a transaction transferring control of an
+organization, or substantially all assets of one, or subdividing an
+organization, or merging organizations.  If propagation of a covered
+work results from an entity transaction, each party to that
+transaction who receives a copy of the work also receives whatever
+licenses to the work the party's predecessor in interest had or could
+give under the previous paragraph, plus a right to possession of the
+Corresponding Source of the work from the predecessor in interest, if
+the predecessor has it or can get it with reasonable efforts.
+
+  You may not impose any further restrictions on the exercise of the
+rights granted or affirmed under this License.  For example, you may
+not impose a license fee, royalty, or other charge for exercise of
+rights granted under this License, and you may not initiate litigation
+(including a cross-claim or counterclaim in a lawsuit) alleging that
+any patent claim is infringed by making, using, selling, offering for
+sale, or importing the Program or any portion of it.
+
+  11. Patents.
+
+  A "contributor" is a copyright holder who authorizes use under this
+License of the Program or a work on which the Program is based.  The
+work thus licensed is called the contributor's "contributor version".
+
+  A contributor's "essential patent claims" are all patent claims
+owned or controlled by the contributor, whether already acquired or
+hereafter acquired, that would be infringed by some manner, permitted
+by this License, of making, using, or selling its contributor version,
+but do not include claims that would be infringed only as a
+consequence of further modification of the contributor version.  For
+purposes of this definition, "control" includes the right to grant
+patent sublicenses in a manner consistent with the requirements of
+this License.
+
+  Each contributor grants you a non-exclusive, worldwide, royalty-free
+patent license under the contributor's essential patent claims, to
+make, use, sell, offer for sale, import and otherwise run, modify and
+propagate the contents of its contributor version.
+
+  In the following three paragraphs, a "patent license" is any express
+agreement or commitment, however denominated, not to enforce a patent
+(such as an express permission to practice a patent or covenant not to
+sue for patent infringement).  To "grant" such a patent license to a
+party means to make such an agreement or commitment not to enforce a
+patent against the party.
+
+  If you convey a covered work, knowingly relying on a patent license,
+and the Corresponding Source of the work is not available for anyone
+to copy, free of charge and under the terms of this License, through a
+publicly available network server or other readily accessible means,
+then you must either (1) cause the Corresponding Source to be so
+available, or (2) arrange to deprive yourself of the benefit of the
+patent license for this particular work, or (3) arrange, in a manner
+consistent with the requirements of this License, to extend the patent
+license to downstream recipients.  "Knowingly relying" means you have
+actual knowledge that, but for the patent license, your conveying the
+covered work in a country, or your recipient's use of the covered work
+in a country, would infringe one or more identifiable patents in that
+country that you have reason to believe are valid.
+
+  If, pursuant to or in connection with a single transaction or
+arrangement, you convey, or propagate by procuring conveyance of, a
+covered work, and grant a patent license to some of the parties
+receiving the covered work authorizing them to use, propagate, modify
+or convey a specific copy of the covered work, then the patent license
+you grant is automatically extended to all recipients of the covered
+work and works based on it.
+
+  A patent license is "discriminatory" if it does not include within
+the scope of its coverage, prohibits the exercise of, or is
+conditioned on the non-exercise of one or more of the rights that are
+specifically granted under this License.  You may not convey a covered
+work if you are a party to an arrangement with a third party that is
+in the business of distributing software, under which you make payment
+to the third party based on the extent of your activity of conveying
+the work, and under which the third party grants, to any of the
+parties who would receive the covered work from you, a discriminatory
+patent license (a) in connection with copies of the covered work
+conveyed by you (or copies made from those copies), or (b) primarily
+for and in connection with specific products or compilations that
+contain the covered work, unless you entered into that arrangement,
+or that patent license was granted, prior to 28 March 2007.
+
+  Nothing in this License shall be construed as excluding or limiting
+any implied license or other defenses to infringement that may
+otherwise be available to you under applicable patent law.
+
+  12. No Surrender of Others' Freedom.
+
+  If conditions are imposed on you (whether by court order, agreement or
+otherwise) that contradict the conditions of this License, they do not
+excuse you from the conditions of this License.  If you cannot convey a
+covered work so as to satisfy simultaneously your obligations under this
+License and any other pertinent obligations, then as a consequence you may
+not convey it at all.  For example, if you agree to terms that obligate you
+to collect a royalty for further conveying from those to whom you convey
+the Program, the only way you could satisfy both those terms and this
+License would be to refrain entirely from conveying the Program.
+
+  13. Use with the GNU Affero General Public License.
+
+  Notwithstanding any other provision of this License, you have
+permission to link or combine any covered work with a work licensed
+under version 3 of the GNU Affero General Public License into a single
+combined work, and to convey the resulting work.  The terms of this
+License will continue to apply to the part which is the covered work,
+but the special requirements of the GNU Affero General Public License,
+section 13, concerning interaction through a network will apply to the
+combination as such.
+
+  14. Revised Versions of this License.
+
+  The Free Software Foundation may publish revised and/or new versions of
+the GNU General Public License from time to time.  Such new versions will
+be similar in spirit to the present version, but may differ in detail to
+address new problems or concerns.
+
+  Each version is given a distinguishing version number.  If the
+Program specifies that a certain numbered version of the GNU General
+Public License "or any later version" applies to it, you have the
+option of following the terms and conditions either of that numbered
+version or of any later version published by the Free Software
+Foundation.  If the Program does not specify a version number of the
+GNU General Public License, you may choose any version ever published
+by the Free Software Foundation.
+
+  If the Program specifies that a proxy can decide which future
+versions of the GNU General Public License can be used, that proxy's
+public statement of acceptance of a version permanently authorizes you
+to choose that version for the Program.
+
+  Later license versions may give you additional or different
+permissions.  However, no additional obligations are imposed on any
+author or copyright holder as a result of your choosing to follow a
+later version.
+
+  15. Disclaimer of Warranty.
+
+  THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
+APPLICABLE LAW.  EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
+HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
+OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
+THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+PURPOSE.  THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
+IS WITH YOU.  SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
+ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
+
+  16. Limitation of Liability.
+
+  IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
+WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
+THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
+GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
+USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
+DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
+PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
+EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
+SUCH DAMAGES.
+
+  17. Interpretation of Sections 15 and 16.
+
+  If the disclaimer of warranty and limitation of liability provided
+above cannot be given local legal effect according to their terms,
+reviewing courts shall apply local law that most closely approximates
+an absolute waiver of all civil liability in connection with the
+Program, unless a warranty or assumption of liability accompanies a
+copy of the Program in return for a fee.
+
+                     END OF TERMS AND CONDITIONS
+
+            How to Apply These Terms to Your New Programs
+
+  If you develop a new program, and you want it to be of the greatest
+possible use to the public, the best way to achieve this is to make it
+free software which everyone can redistribute and change under these terms.
+
+  To do so, attach the following notices to the program.  It is safest
+to attach them to the start of each source file to most effectively
+state the exclusion of warranty; and each file should have at least
+the "copyright" line and a pointer to where the full notice is found.
+
+    <one line to give the program's name and a brief idea of what it does.>
+    Copyright (C) <year>  <name of author>
+
+    This program is free software: you can redistribute it and/or modify
+    it under the terms of the GNU General Public License as published by
+    the Free Software Foundation, either version 3 of the License, or
+    (at your option) any later version.
+
+    This program is distributed in the hope that it will be useful,
+    but WITHOUT ANY WARRANTY; without even the implied warranty of
+    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+    GNU General Public License for more details.
+
+    You should have received a copy of the GNU General Public License
+    along with this program.  If not, see <https://www.gnu.org/licenses/>.
+
+Also add information on how to contact you by electronic and paper mail.
+
+  If the program does terminal interaction, make it output a short
+notice like this when it starts in an interactive mode:
+
+    <program>  Copyright (C) <year>  <name of author>
+    This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
+    This is free software, and you are welcome to redistribute it
+    under certain conditions; type `show c' for details.
+
+The hypothetical commands `show w' and `show c' should show the appropriate
+parts of the General Public License.  Of course, your program's commands
+might be different; for a GUI interface, you would use an "about box".
+
+  You should also get your employer (if you work as a programmer) or school,
+if any, to sign a "copyright disclaimer" for the program, if necessary.
+For more information on this, and how to apply and follow the GNU GPL, see
+<https://www.gnu.org/licenses/>.
+
+  The GNU General Public License does not permit incorporating your program
+into proprietary programs.  If your program is a subroutine library, you
+may consider it more useful to permit linking proprietary applications with
+the library.  If this is what you want to do, use the GNU Lesser General
+Public License instead of this License.  But first, please read
+<https://www.gnu.org/licenses/why-not-lgpl.html>.
\ No newline at end of file
diff --git a/docker/nginx/dist/html/README-ES-MX.md b/docker/nginx/dist/html/README-ES-MX.md
new file mode 100644
index 00000000..b3086690
--- /dev/null
+++ b/docker/nginx/dist/html/README-ES-MX.md
@@ -0,0 +1,248 @@
+![image](assets/img/header.png)
+
+
+<p style="margin: -20px 0 30px">
+  <a href="https://www.buymeacoffee.com/migueravila" target="_blank" style='margin-right:0px; margin-top:5px'>
+    <img align="center" src="https://github.com/migueravila/Bento/blob/master/assets/img/donation.png" alt="donation" height="35px" />
+  </a>
+
+  <a href="https://migueravila.github.io/Bento/" target="_blank" style='margin-right:0px; margin-top:5px'>
+    <img align="center" src="https://github.com/migueravila/Bento/blob/master/assets/img/live.png" alt="live-preview" height="35px" />
+  </a> 
+</p>
+
+<br />
+
+## 👇 Index
+- [👇 Index](#-index)
+- [✨ Features](#-features)
+- [🚀 Usage](#-usage)
+    - [As Home Page](#as-home-page)
+    - [As New Tab](#as-new-tab)
+- [🎨 Customization](#-customization)
+  - [👋 General: Name, Image Background and Greetings](#-general-name-image-background-and-greetings)
+  - [🏷️ Button Links](#️-button-links)
+  - [📑 List Links](#-list-links)
+  - [⛈️ Weather: API Key, Icons and Unit](#️-weather-api-key-icons-and-unit)
+  - [💛 Colors](#-colors)
+  - [🌑 Auto change theme](#-auto-change-theme)
+
+
+## ✨ Features
+
+- **Easy configuration** file.
+- **Dark/Light mode** that you can toggle and that is saved to local storage.
+- **Clock and Date** in 24-hour (default) or 12-hour format.
+- **Greetings** that are easy to modify.
+- **Variables** for custom colors and font sizes in the `style.css` code.
+- **Icons** from [Feather Icons](https://feathericons.com/) (a few I made myself, using the Feather Icons as a base).
+- **Modular** JavaScript files that are easy to read.
+
+## 🚀 Usage
+
+#### As Home Page
+
+1. Fork this repository
+2. Enable the GitHub Pages service: `Settings > GitHub Pages > Source [master branch] > Save`
+3. Set it as your home page:
+   - Click the menu button and select Options, then Preferences.
+   - Click the Home panel.
+   - Click the menu next to Homepage and new windows, choose to show custom URLs, then add your GitHub Pages link.
+
+#### As New Tab
+
+You can use different add-ons/extensions for this:
+
+- If you use Firefox: [Custom New Tab Page](https://addons.mozilla.org/en-US/firefox/addon/custom-new-tab-page/?src=search)
+- If you use Chromium (Brave, Vivaldi, Chrome): [Custom New Tab URL](https://chrome.google.com/webstore/detail/custom-new-tab-url/mmjbdbjnoablegbkcklggeknkfcjkjia)
+
+## 🎨 Customization
+
+Almost all customization can be managed in the `config.js` file:
+
+### 👋 General: Name, Image Background and Greetings
+
+To change the default name, the greetings, and whether you want an image background or links opening in a new tab, edit the first settings in the `config.js` file.
+
+```js
+ // General
+  name: 'John',
+  imageBackground: false,
+  openInNewTab: true,
+
+  // Greetings
+  greetingMorning: 'Good morning!',
+  greetingAfternoon: 'Good afternoon,',
+  greetingEvening: 'Good evening,',
+  greetingNight: 'Go to Sleep!',
+
+```
+
+> You can change the background by substituting the `background.jpg` file in the `assets` folder.
+
+![](assets/img/previewbg.png)
+
+### 🏷️ Button Links
+
+To edit the buttons you just need to change the following list in the `config.js` file, choosing a link, an icon from [Feather Icons](https://feathericons.com/) and a name:
+
+```js
+cards: [
+    {
+      id: '1',
+      name: 'Github',
+      icon: 'github',
+      link: 'https://github.com/',
+    },
+    {
+      id: '2',
+      name: 'Mail',
+      icon: 'mail',
+      link: 'https://mail.protonmail.com/',
+    },
+    {
+      id: '3',
+      name: 'Todoist',
+      icon: 'trello',
+      link: 'https://calendar.google.com/calendar/r',
+    },
+    {
+      id: '4',
+      name: 'Calendar',
+      icon: 'calendar',
+      link: 'https://calendar.google.com/calendar/r',
+    },
+    {
+      id: '5',
+      name: 'Reddit',
+      icon: 'bookmark',
+      link: 'https://reddit.com',
+    },
+    {
+      id: '6',
+      name: 'Odysee',
+      icon: 'youtube',
+      link: 'https://odysee.com/',
+    },
+  ],
+```
+
+### 📑 List Links
+
+The same applies to the link lists: you can change the list icons (also from [Feather Icons](https://feathericons.com/)) and the links:
+
+```js
+  // Icons
+  firstListIcon: 'music',
+  secondListIcon: 'coffee',
+
+  // Links
+  lists: {
+    firstList: [
+      {
+        name: 'Inspirational',
+        link: 'https://www.youtube.com/watch?v=dQw4w9WgXcQ',
+      },
+      {
+        name: 'Classic',
+        link: 'https://www.youtube.com/watch?v=dQw4w9WgXcQ',
+      },
+      {
+        name: 'Oldies',
+        link: 'https://www.youtube.com/watch?v=dQw4w9WgXcQ',
+      },
+      {
+        name: 'Rock',
+        link: 'https://www.youtube.com/watch?v=dQw4w9WgXcQ',
+      },
+    ],
+    secondList: [
+      {
+        name: 'Linkedin',
+        link: 'https://linkedin.com/',
+      },
+      {
+        name: 'Figma',
+        link: 'https://figma.com/',
+      },
+      {
+        name: 'Dribbble',
+        link: 'https://dribbble.com',
+      },
+      {
+        name: 'Telegram',
+        link: 'https://webk.telegram.org',
+      },
+    ],
+  },
+```
+
+### ⛈️ Weather: API Key, Icons and Unit
+
+To set up the weather widget you will need an API key from `https://openweathermap.org/`. Once you have your key, set your latitude and longitude; you can use `https://www.latlong.net/` to find them.
+
+Next, choose an icon set:
+
+![](assets/img/icons.png)
+
+- **Nord** Uses the Nord color scheme and its easy-on-the-eyes colors.
+- **OneDark** (_default_) Uses the One Dark Pro color scheme.
+- **Dark** For users of light themes who want a minimalist look.
+- **White** For users of dark themes who want a minimalist look.
+
+Finally, just add the values to the `config.js` file:
+
+```js
+  // Weather
+  weatherKey: 'InsertYourAPIKeyHere123456',
+  weatherIcons: 'OneDark',
+  weatherUnit: 'C',
+  weatherLatitude: '37.774929',
+  weatherLongitude: '-122.419418',
+```
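+
+Under the hood, the widget can pass exactly these values to OpenWeatherMap's current-weather endpoint. The following is only a minimal sketch of such a request, assuming `CONFIG` is already loaded from `config.js`; the `renderWeather` helper is hypothetical and not part of Bento:
+
+```js
+// Sketch: query OpenWeatherMap with the config values from above.
+// renderWeather() is a hypothetical stand-in for the widget's DOM update.
+const unit = CONFIG.weatherUnit === 'F' ? 'imperial' : 'metric';
+const url =
+  'https://api.openweathermap.org/data/2.5/weather' +
+  `?lat=${CONFIG.weatherLatitude}&lon=${CONFIG.weatherLongitude}` +
+  `&units=${unit}&appid=${CONFIG.weatherKey}`;
+
+fetch(url)
+  .then((response) => response.json())
+  .then((data) => renderWeather(data.weather[0].main, data.main.temp));
+```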
+
+### 💛 Colors
+
+In the `app.css` file you can change the variables for either theme (Dark and Light):
+
+```css
+/* Light theme  */
+
+:root {
+  --accent: #61b0f1; /* Hover color */
+  --bg: #f5f5f5; /* Background color */
+  --sbg: #e4e6e6; /* Cards color */
+  --fg: #3a3a3a; /* Foreground color */
+  --sfg: #3a3a3a; /* Secondary Foreground color */
+}
+
+/* Dark theme  */
+
+.darktheme {
+  --accent: #61b0f1; /* Hover color */
+  --bg: #19171a; /* Background color */
+  --sbg: #201e21; /* Cards color */
+  --fg: #d8dee9; /* Foreground color */
+  --sfg: #3a3a3a; /* Secondary Foreground color */
+}
+```
+
+### 🌑 Auto change theme
+
+The theme can change automatically, following either the OS's current theme or custom hours
+that you can set in the `config.js` file:
+
+```js
+  // Autochange
+  autoChangeTheme: true,
+
+  // Autochange by OS
+  changeThemeByOS: false, 
+
+  // Autochange by hour options (24hrs format, string must be in: hh:mm)
+  changeThemeByHour: true, // If it's true, it will use the values below:
+  hourDarkThemeActive: '18:30', // Turn on the dark theme after this hour
+  hourDarkThemeInactive: '07:00', // Turn off the dark theme after this hour and before the above hour
+```
+
+![](assets/img/subheader.png)
diff --git a/docker/nginx/dist/html/README.md b/docker/nginx/dist/html/README.md
new file mode 100644
index 00000000..b68014e0
--- /dev/null
+++ b/docker/nginx/dist/html/README.md
@@ -0,0 +1,252 @@
+![image](assets/img/header.png)
+
+
+<p style="margin: -20px 0 30px">
+  <a href="https://www.buymeacoffee.com/migueravila" target="_blank" style='margin-right:0px; margin-top:5px'>
+    <img align="center" src="https://github.com/migueravila/Bento/blob/master/assets/img/donation.png" alt="donation" height="35px" />
+  </a>
+
+  <a href="https://migueravila.github.io/Bento/" target="_blank" style='margin-right:0px; margin-top:5px'>
+    <img align="center" src="https://github.com/migueravila/Bento/blob/master/assets/img/live.png" alt="live-preview" height="35px" />
+  </a> 
+
+  <a href="https://github.com/migueravila/Bento/blob/master/README-ES-MX.md" target="_blank" style='margin-right:0px; margin-top:5px'>
+    <img align="center" src="https://github.com/migueravila/Bento/blob/master/assets/img/spanish.png" alt="live-preview" height="35px" />
+  </a> 
+</p>
+
+<br />
+
+## 👇 Index
+- [👇 Index](#-index)
+- [✨ Features](#-features)
+- [🚀 Usage](#-usage)
+    - [As Home Page](#as-home-page)
+    - [As New Tab](#as-new-tab)
+- [🎨 Customization](#-customization)
+  - [👋 General: Name, Image Background and Greetings](#-general-name-image-background-and-greetings)
+  - [🏷️ Button Links](#️-button-links)
+  - [📑 List Links](#-list-links)
+  - [⛈️ Weather: API Key, Icons and Unit](#️-weather-api-key-icons-and-unit)
+  - [💛 Colors](#-colors)
+  - [🌑 Auto change theme](#-auto-change-theme)
+
+
+## ✨ Features
+
+- **Easy configuration** file.
+- **Dark/Light mode** that you can toggle and that is saved to local storage.
+- **Clock and Date** in 24-hour (default) or 12-hour format.
+- **Greetings** that are easy to modify.
+- **Variables** for custom colors and font sizes in the `style.css` code.
+- **Icons** from [Feather Icons](https://feathericons.com/) (a few I made myself, using the Feather Icons as a base).
+- **Modular** JavaScript files that are easy to read.
+
+## 🚀 Usage
+
+#### As Home Page
+
+1. Fork this repo
+2. Enable the GitHub Pages service: `Settings > GitHub Pages > Source [master branch] > Save`
+3. Set it as your home page:
+   - Click the menu button and select Options, then Preferences.
+   - Click the Home panel.
+   - Click the menu next to Homepage and new windows, choose to show custom URLs, then add your GitHub Pages link.
+
+#### As New Tab
+
+You can use different add-ons/extensions for this:
+
+- If you use Firefox: [Custom New Tab Page](https://addons.mozilla.org/en-US/firefox/addon/custom-new-tab-page/?src=search)
+- If you use Chromium (Brave, Vivaldi, Chrome): [Custom New Tab URL](https://chrome.google.com/webstore/detail/custom-new-tab-url/mmjbdbjnoablegbkcklggeknkfcjkjia)
+
+## 🎨 Customization
+
+Almost all customization can be managed in the `config.js` file:
+
+### 👋 General: Name, Image Background and Greetings
+
+To change the default name, the greetings, and whether you want an image background or links opening in a new tab, edit the first settings in the `config.js` file.
+
+```js
+ // General
+  name: 'John',
+  imageBackground: false,
+  openInNewTab: true,
+
+  // Greetings
+  greetingMorning: 'Good morning!',
+  greetingAfternoon: 'Good afternoon,',
+  greetingEvening: 'Good evening,',
+  greetingNight: 'Go to Sleep!',
+
+```
+
+> You can change the background by substituting the `background.jpg` file in the `assets` folder.
+
+![](assets/img/previewbg.png)
+
+### 🏷️ Button Links
+
+To edit the buttons you just need to change the following list in the `config.js` file, choosing a link, an icon from [Feather Icons](https://feathericons.com/) and a name:
+
+```js
+cards: [
+    {
+      id: '1',
+      name: 'Github',
+      icon: 'github',
+      link: 'https://github.com/',
+    },
+    {
+      id: '2',
+      name: 'Mail',
+      icon: 'mail',
+      link: 'https://mail.protonmail.com/',
+    },
+    {
+      id: '3',
+      name: 'Todoist',
+      icon: 'trello',
+      link: 'https://calendar.google.com/calendar/r',
+    },
+    {
+      id: '4',
+      name: 'Calendar',
+      icon: 'calendar',
+      link: 'https://calendar.google.com/calendar/r',
+    },
+    {
+      id: '5',
+      name: 'Reddit',
+      icon: 'bookmark',
+      link: 'https://reddit.com',
+    },
+    {
+      id: '6',
+      name: 'Odysee',
+      icon: 'youtube',
+      link: 'https://odysee.com/',
+    },
+  ],
+```
+
+### 📑 List Links
+
+The same applies to the link lists: you can change the list icons (also from [Feather Icons](https://feathericons.com/)) and the links:
+
+```js
+  //Icons
+  firstListIcon: 'music',
+  secondListIcon: 'coffee',
+
+  // Links
+  lists: {
+    firstList: [
+      {
+        name: 'Inspirational',
+        link: 'https://www.youtube.com/watch?v=dQw4w9WgXcQ',
+      },
+      {
+        name: 'Classic',
+        link: 'https://www.youtube.com/watch?v=dQw4w9WgXcQ',
+      },
+      {
+        name: 'Oldies',
+        link: 'https://www.youtube.com/watch?v=dQw4w9WgXcQ',
+      },
+      {
+        name: 'Rock',
+        link: 'https://www.youtube.com/watch?v=dQw4w9WgXcQ',
+      },
+    ],
+    secondList: [
+      {
+        name: 'Linkedin',
+        link: 'https://linkedin.com/',
+      },
+      {
+        name: 'Figma',
+        link: 'https://figma.com/',
+      },
+      {
+        name: 'Dribbble',
+        link: 'https://dribbble.com',
+      },
+      {
+        name: 'Telegram',
+        link: 'https://webk.telegram.org',
+      },
+    ],
+  },
+```
+
+### ⛈️ Weather: API Key, Icons and Unit
+
+To set up the weather widget you will need an API key from `https://openweathermap.org/`. Once you have your key, set your latitude and longitude; you can use `https://www.latlong.net/` to find them.
+
+Next, choose an icon set:
+
+![](assets/img/icons.png)
+
+- **Nord** Uses the Nord color scheme and its easy-on-the-eyes colors.
+- **OneDark** (_default_) Uses the One Dark Pro color scheme.
+- **Dark** For users of light themes who want a minimalist look.
+- **White** For users of dark themes who want a minimalist look.
+
+Finally, just add the values to the `config.js` file:
+
+```js
+  // Weather
+  weatherKey: 'InsertYourAPIKeyHere123456',
+  weatherIcons: 'OneDark',
+  weatherUnit: 'C',
+  weatherLatitude: '37.774929',
+  weatherLongitude: '-122.419418',
+```
+
+### 💛 Colors
+
+In the `app.css` file you can change the variables for both themes (Dark and Light):
+
+```css
+/* Light theme  */
+
+:root {
+  --accent: #61b0f1; /* Hover color */
+  --bg: #f5f5f5; /* Background color */
+  --sbg: #e4e6e6; /* Cards color */
+  --fg: #3a3a3a; /* Foreground color */
+  --sfg: #3a3a3a; /* Secondary Foreground color */
+}
+
+/* Dark theme  */
+
+.darktheme {
+  --accent: #61b0f1; /* Hover color */
+  --bg: #19171a; /* Background color */
+  --sbg: #201e21; /* Cards color */
+  --fg: #d8dee9; /* Foreground color */
+  --sfg: #3a3a3a; /* Secondary Foreground color */
+}
+```
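+
+The dark theme is simply the `.darktheme` class on the `<body>` element overriding the `:root` variables, so a theme switch boils down to toggling that class. A minimal sketch (Bento's own theme script handles this, possibly differently):
+
+```js
+// Sketch: apply or remove the dark palette by toggling the class
+// that carries the overriding CSS variables (see app.css).
+const setDarkTheme = (on) => document.body.classList.toggle('darktheme', on);
+
+setDarkTheme(true);  // use the .darktheme variables
+setDarkTheme(false); // back to the :root (light) variables
+```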
+
+### 🌑 Auto change theme
+
+The theme can change automatically, following either the OS's current theme or custom hours
+that you can set in the `config.js` file:
+
+```js
+  // Autochange
+  autoChangeTheme: true,
+
+  // Autochange by OS
+  changeThemeByOS: false, 
+
+  // Autochange by hour options (24hrs format, string must be in: hh:mm)
+  changeThemeByHour: true, // If it's true, it will use the values below:
+  hourDarkThemeActive: '18:30', // Turn on the dark theme after this hour
+  hourDarkThemeInactive: '07:00', // Turn off the dark theme after this hour and before the above hour
+```
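+
+For reference, both triggers fit in a few lines of plain JavaScript. This is an illustrative sketch only, reusing the hypothetical `setDarkTheme` helper from the Colors section above, not the exact upstream implementation:
+
+```js
+// Sketch: decide whether the dark theme should be active right now.
+const pad = (n) => String(n).padStart(2, '0');
+const now = new Date();
+const hhmm = `${pad(now.getHours())}:${pad(now.getMinutes())}`;
+
+// Zero-padded 'hh:mm' strings compare correctly as plain strings.
+const inDarkHours =
+  hhmm >= CONFIG.hourDarkThemeActive || hhmm < CONFIG.hourDarkThemeInactive;
+const osPrefersDark = window.matchMedia('(prefers-color-scheme: dark)').matches;
+
+if (CONFIG.autoChangeTheme) {
+  if (CONFIG.changeThemeByOS) setDarkTheme(osPrefersDark);
+  else if (CONFIG.changeThemeByHour) setDarkTheme(inDarkHours);
+}
+```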
+
+![](assets/img/subheader.png)
\ No newline at end of file
diff --git a/docker/nginx/dist/html/app.css b/docker/nginx/dist/html/app.css
new file mode 100644
index 00000000..98015e20
--- /dev/null
+++ b/docker/nginx/dist/html/app.css
@@ -0,0 +1,174 @@
+/* 
+// ╔╗ ╔═╗╔╗╔╔╦╗╔═╗
+// ╠╩╗║╣ ║║║ ║ ║ ║
+// ╚═╝╚═╝╝╚╝ ╩ ╚═╝ 
+*/
+
+@import url('https://fonts.googleapis.com/css2?family=Open+Sans:wght@300;400;700&display=swap');
+
+/* V A R I A B L E S */
+
+:root {
+  /* Fonts  */
+  --fsg: 12vh; /* Time and Greetings */
+  --fsm: 8vh; /* Date */
+  --fss: 3vh; /* Greetings and Weather widget */
+  --fses: 2vh; /* Links List */
+
+  --iconsize: 3vh;
+
+  /* Dark theme  */
+  --accent: #E20074; /* Hover color */
+  --bg: #19171a; /* Background color */
+  --sbg: #201e21a4; /* Cards color */
+  --fg: #d8dee9; /* Foreground color */
+  --sfg: #ffffff; /* Secondary Foreground color */
+
+  /* Image background  */
+  --imgbg: url(assets/background.jpg);
+  --imgcol: linear-gradient(
+    rgba(0, 0, 0, 0.1),
+    rgba(0, 0, 0, 0.1)
+  ); /* Filter color */
+}
+
+
+/* S T Y L E S */
+
+* {
+  margin: 0;
+  padding: 0;
+  box-sizing: border-box;
+  font-family: 'Open Sans', sans-serif;
+  transition: 0.2s ease-in-out;
+}
+
+.notransition {
+  -webkit-transition: none;
+  -moz-transition: none;
+  -o-transition: none;
+  transition: none;
+}
+
+.withImageBackground {
+  background-image: var(--imgcol), var(--imgbg);
+  background-size: cover;
+}
+
+body {
+  width: 100vw;
+  height: 100vh;
+  background-color: var(--bg);
+  display: flex;
+  align-items: center;
+  justify-content: center;
+}
+
+.container {
+  width: 145vh;
+  height: 85vh;
+  display: grid;
+  grid-template-columns: repeat(4, 1fr);
+  grid-template-rows: repeat(4, 1fr);
+  grid-gap: 30px;
+  padding: 20px;
+}
+
+.card {
+  background-color: var(--sbg);
+  box-shadow: 0 5px 7px rgba(0, 0, 0, 0.4);
+  border-radius: 5px;
+}
+
+.card:hover {
+  transform: translateY(-0.5rem);
+  box-shadow: 0 10px 10px rgba(226, 0, 116, 0.4);
+}
+
+.timeBlock {
+  grid-row: 1 / span 2;
+  grid-column: 1 / span 2;
+  display: flex;
+  flex-direction: column;
+  align-items: center;
+  justify-content: center;
+}
+
+.clock {
+  display: flex;
+  align-items: center;
+  justify-content: center;
+}
+
+#hour,
+#separator,
+#minutes {
+  font-size: var(--fsg);
+  font-weight: bolder;
+  color: var(--fg);
+}
+
+#greetings {
+  font-size: var(--fss);
+  color: var(--fg);
+}
+
+.list {
+  display: flex;
+  align-items: center;
+  flex-direction: column;
+}
+
+.list__1 {
+  grid-column: 1;
+  grid-row: 3 / span 2;
+}
+.list__2 {
+  grid-column: 2;
+  grid-row: 3 / span 2;
+}
+.list__head {
+  margin-top: 3vh;
+  margin-bottom: 2vh;
+  color: var(--fg);
+  width: var(--iconsize);
+  height: var(--iconsize);
+}
+.list__link {
+  text-decoration: none;
+  font-size: var(--fses);
+  color: var(--fg);
+  margin-top: 1vh;
+  padding: 8px 12px;
+  border-radius: 5px;
+  font-weight: bolder;
+  text-align: center;
+  width: 80%;
+}
+.list__link:hover {
+  background-color: var(--accent);
+  color: var(--sfg);
+}
+
+/* M E D I A - Q U E R I E S */
+
+@media only screen and (max-width: 68.75em) {
+  .container {
+    grid-gap: 20px;
+    padding: 40px;
+  }
+
+  .timeBlock {
+    grid-row: 1 / span 2;
+    grid-column: 1 / span 4;
+  }
+
+  #greetings {
+    font-size: var(--fss);
+  }
+
+  .list {
+    display: none;
+  }
+
+}
diff --git a/docker/nginx/dist/html/assets/.DS_Store b/docker/nginx/dist/html/assets/.DS_Store
new file mode 100644
index 00000000..26858459
Binary files /dev/null and b/docker/nginx/dist/html/assets/.DS_Store differ
diff --git a/docker/nginx/dist/html/assets/background.jpg b/docker/nginx/dist/html/assets/background.jpg
new file mode 100644
index 00000000..a0be4e9b
Binary files /dev/null and b/docker/nginx/dist/html/assets/background.jpg differ
diff --git a/docker/nginx/dist/html/assets/icons/.DS_Store b/docker/nginx/dist/html/assets/icons/.DS_Store
new file mode 100644
index 00000000..42c5e8de
Binary files /dev/null and b/docker/nginx/dist/html/assets/icons/.DS_Store differ
diff --git a/docker/heimdall/dist/app/tsec.png b/docker/nginx/dist/html/assets/icons/favicon.png
similarity index 100%
rename from docker/heimdall/dist/app/tsec.png
rename to docker/nginx/dist/html/assets/icons/favicon.png
diff --git a/docker/nginx/dist/html/assets/img/donation.png b/docker/nginx/dist/html/assets/img/donation.png
new file mode 100644
index 00000000..43701588
Binary files /dev/null and b/docker/nginx/dist/html/assets/img/donation.png differ
diff --git a/docker/nginx/dist/html/assets/img/header.png b/docker/nginx/dist/html/assets/img/header.png
new file mode 100644
index 00000000..a526f83e
Binary files /dev/null and b/docker/nginx/dist/html/assets/img/header.png differ
diff --git a/docker/nginx/dist/html/assets/img/icons.png b/docker/nginx/dist/html/assets/img/icons.png
new file mode 100644
index 00000000..6432f35f
Binary files /dev/null and b/docker/nginx/dist/html/assets/img/icons.png differ
diff --git a/docker/nginx/dist/html/assets/img/live.png b/docker/nginx/dist/html/assets/img/live.png
new file mode 100644
index 00000000..2686929a
Binary files /dev/null and b/docker/nginx/dist/html/assets/img/live.png differ
diff --git a/docker/nginx/dist/html/assets/img/previewbg.png b/docker/nginx/dist/html/assets/img/previewbg.png
new file mode 100644
index 00000000..465d616a
Binary files /dev/null and b/docker/nginx/dist/html/assets/img/previewbg.png differ
diff --git a/docker/nginx/dist/html/assets/img/spanish.png b/docker/nginx/dist/html/assets/img/spanish.png
new file mode 100644
index 00000000..42c90c39
Binary files /dev/null and b/docker/nginx/dist/html/assets/img/spanish.png differ
diff --git a/docker/nginx/dist/html/assets/img/subheader.png b/docker/nginx/dist/html/assets/img/subheader.png
new file mode 100644
index 00000000..1148be3d
Binary files /dev/null and b/docker/nginx/dist/html/assets/img/subheader.png differ
diff --git a/docker/nginx/dist/html/assets/js/greeting.js b/docker/nginx/dist/html/assets/js/greeting.js
new file mode 100644
index 00000000..8793d391
--- /dev/null
+++ b/docker/nginx/dist/html/assets/js/greeting.js
@@ -0,0 +1,27 @@
+// ┌─┐┬─┐┌─┐┌─┐┌┬┐┬┌┐┌┌─┐┌─┐
+// │ ┬├┬┘├┤ ├┤  │ │││││ ┬└─┐
+// └─┘┴└─└─┘└─┘ ┴ ┴┘└┘└─┘└─┘
+
+// Get the hour
+const today = new Date();
+const hour = today.getHours();
+
+// Here you can change your name
+const name = CONFIG.name;
+
+// Here you can change your greetings
+const gree1 = `${CONFIG.greetingNight}\xa0`;
+const gree2 = `${CONFIG.greetingMorning}\xa0`;
+const gree3 = `${CONFIG.greetingAfternoon}\xa0`;
+const gree4 = `${CONFIG.greetingEvening}\xa0`;
+
+// Define the hours of the greetings
+if (hour >= 23 || hour < 6) {
+  document.getElementById('greetings').innerText = gree1;
+} else if (hour >= 6 && hour < 12) {
+  document.getElementById('greetings').innerText = gree2;
+} else if (hour >= 12 && hour < 17) {
+  document.getElementById('greetings').innerText = gree3;
+} else {
+  document.getElementById('greetings').innerText = gree4;
+}
diff --git a/docker/nginx/dist/html/assets/js/lists.js b/docker/nginx/dist/html/assets/js/lists.js
new file mode 100644
index 00000000..ab2ee020
--- /dev/null
+++ b/docker/nginx/dist/html/assets/js/lists.js
@@ -0,0 +1,46 @@
+// ┬  ┬┌─┐┌┬┐┌─┐
+// │  │└─┐ │ └─┐
+// ┴─┘┴└─┘ ┴ └─┘
+
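+// list_1 and list_2 below are the <div id="list_1"> / <div id="list_2">
+// elements from index.html: browsers expose elements carrying an id as
+// same-named window globals, which is what this file relies on.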
+// Print the first List
+const printFirstList = () => {
+  let icon = `<i class="list__head" icon-name="${CONFIG.firstListIcon}"></i>`;
+  const position = 'beforeend';
+  list_1.insertAdjacentHTML(position, icon);
+  for (const link of CONFIG.lists.firstList) {
+    // List item
+    let item = `
+        <a
+        target="${CONFIG.openInNewTab ? '_blank' : ''}"
+        href="${link.link}"
+        class="list__link"
+        >${link.name}</a
+        >
+    `;
+    const position = 'beforeend';
+    list_1.insertAdjacentHTML(position, item);
+  }
+};
+
+// Print the second List
+const printSecondList = () => {
+  let icon = `<i class="list__head" icon-name="${CONFIG.secondListIcon}"></i>`;
+  const position = 'beforeend';
+  list_2.insertAdjacentHTML(position, icon);
+  for (const link of CONFIG.lists.secondList) {
+    // List item
+    let item = `
+          <a
+          target="${CONFIG.openInNewTab ? '_blank' : ''}"
+          href="${link.link}"
+          class="list__link"
+          >${link.name}</a
+          >
+      `;
+    const position = 'beforeend';
+    list_2.insertAdjacentHTML(position, item);
+  }
+};
+
+printFirstList();
+printSecondList();
diff --git a/docker/nginx/dist/html/assets/js/theme.js b/docker/nginx/dist/html/assets/js/theme.js
new file mode 100644
index 00000000..d8d9c936
--- /dev/null
+++ b/docker/nginx/dist/html/assets/js/theme.js
@@ -0,0 +1,7 @@
+// ┌┬┐┬ ┬┌─┐┌┬┐┌─┐
+//  │ ├─┤├┤ │││├┤
+//  ┴ ┴ ┴└─┘┴ ┴└─┘
+
+if (CONFIG.imageBackground) {
+  document.body.classList.add('withImageBackground');
+}
\ No newline at end of file
diff --git a/docker/nginx/dist/html/assets/js/time.js b/docker/nginx/dist/html/assets/js/time.js
new file mode 100644
index 00000000..71f3c21f
--- /dev/null
+++ b/docker/nginx/dist/html/assets/js/time.js
@@ -0,0 +1,36 @@
+// ┌┬┐┬┌┬┐┌─┐
+//  │ ││││├┤
+//  ┴ ┴┴ ┴└─┘
+
+window.onload = displayClock;
+// Clock function
+function displayClock() {
+  const monthNames = [
+    'Jan',
+    'Feb',
+    'Mar',
+    'Apr',
+    'May',
+    'Jun',
+    'Jul',
+    'Aug',
+    'Sep',
+    'Oct',
+    'Nov',
+    'Dec',
+  ];
+
+  // Get clock elements
+  var d = new Date();
+  var mm = monthNames[d.getMonth()];
+  var dd = d.getDate();
+  var min = ('0' + d.getMinutes()).slice(-2); // zero-padded minutes
+  var hh = d.getHours();
+  var ampm = '';
+  // Honor the twelveHourFormat flag from config.js (false = 24-hour clock).
+  if (typeof CONFIG !== 'undefined' && CONFIG.twelveHourFormat) {
+    ampm = hh >= 12 ? ' PM' : ' AM';
+    hh = hh % 12 || 12;
+  }
+
+  // Display clock elements
+  document.getElementById('hour').innerText = hh;
+  document.getElementById('separator').innerHTML = ' : ';
+  document.getElementById('minutes').innerText = min + ampm;
+  setTimeout(displayClock, 1000);
+}
diff --git a/docker/heimdall/dist/html/cockpit.html b/docker/nginx/dist/html/cockpit.html
similarity index 100%
rename from docker/heimdall/dist/html/cockpit.html
rename to docker/nginx/dist/html/cockpit.html
diff --git a/docker/nginx/dist/html/config.js b/docker/nginx/dist/html/config.js
new file mode 100644
index 00000000..a98788a5
--- /dev/null
+++ b/docker/nginx/dist/html/config.js
@@ -0,0 +1,75 @@
+// ╔╗ ╔═╗╔╗╔╔╦╗╔═╗
+// ╠╩╗║╣ ║║║ ║ ║ ║
+// ╚═╝╚═╝╝╚╝ ╩ ╚═╝
+// ┌─┐┌─┐┌┐┌┌─┐┬┌─┐┬ ┬┬─┐┌─┐┌┬┐┬┌─┐┌┐┌
+// │  │ ││││├┤ ││ ┬│ │├┬┘├─┤ │ ││ ││││
+// └─┘└─┘┘└┘└  ┴└─┘└─┘┴└─┴ ┴ ┴ ┴└─┘┘└┘
+
+const CONFIG = {
+  // ┌┐ ┌─┐┌─┐┬┌─┐┌─┐
+  // ├┴┐├─┤└─┐││  └─┐
+  // └─┘┴ ┴└─┘┴└─┘└─┘
+
+  // General
+  imageBackground: true,
+  openInNewTab: true,
+  twelveHourFormat: false,
+
+  // Greetings
+  greetingMorning: 'Good morning ☕',
+  greetingAfternoon: 'Good afternoon 🍯',
+  greetingEvening: 'Good evening 😁',
+  greetingNight: 'Go to Sleep 🥱',
+
+  // ┬  ┬┌─┐┌┬┐┌─┐
+  // │  │└─┐ │ └─┐
+  // ┴─┘┴└─┘ ┴ └─┘
+
+  //Icons
+  firstListIcon: 'home',
+  secondListIcon: 'external-link',
+
+  // Links
+  lists: {
+    firstList: [
+      {
+        name: 'Cockpit',
+        link: '/cockpit.html',
+      },
+      {
+        name: 'Cyberchef',
+        link: '/cyberchef/',
+      },
+      {
+        name: 'Elasticvue',
+        link: '/elasticvue/',
+      },
+      {
+        name: 'Kibana',
+        link: '/kibana/',
+      },
+      {
+        name: 'Spiderfoot',
+        link: '/spiderfoot/',
+      },
+    ],
+    secondList: [
+      {
+        name: 'Attack Map',
+        link: '/map/',
+      },
+      {
+        name: 'SecurityMeter',
+        link: 'https://sicherheitstacho.eu',
+      },
+      {
+        name: 'T-Pot @ GitHub',
+        link: 'https://github.com/dtag-dev-sec/tpotce/',
+      },
+      {
+        name: 'T-Pot ReadMe',
+        link: 'https://github.com/telekom-security/tpotce/blob/master/README.md',
+      },      
+    ],
+  },
+};
diff --git a/docker/nginx/dist/html/cyberchef/cyberchef.tgz b/docker/nginx/dist/html/cyberchef/cyberchef.tgz
new file mode 100644
index 00000000..93595ab1
Binary files /dev/null and b/docker/nginx/dist/html/cyberchef/cyberchef.tgz differ
diff --git a/docker/heimdall/dist/html/error.html b/docker/nginx/dist/html/error.html
similarity index 100%
rename from docker/heimdall/dist/html/error.html
rename to docker/nginx/dist/html/error.html
diff --git a/docker/nginx/dist/html/esvue/esvue.tgz b/docker/nginx/dist/html/esvue/esvue.tgz
new file mode 100644
index 00000000..9a364e8c
Binary files /dev/null and b/docker/nginx/dist/html/esvue/esvue.tgz differ
diff --git a/docker/heimdall/dist/html/favicon.ico b/docker/nginx/dist/html/favicon.ico
similarity index 100%
rename from docker/heimdall/dist/html/favicon.ico
rename to docker/nginx/dist/html/favicon.ico
diff --git a/docker/nginx/dist/html/index.html b/docker/nginx/dist/html/index.html
new file mode 100644
index 00000000..00eb6881
--- /dev/null
+++ b/docker/nginx/dist/html/index.html
@@ -0,0 +1,61 @@
+<!DOCTYPE html>
+<html lang="en">
+  <head>
+    <meta charset="UTF-8" />
+    <title>T-Pot</title>
+    <link
+      rel="shortcut icon"
+      type="image/png"
+      href="assets/icons/favicon.png"
+    />
+    <link rel="stylesheet" href="app.css" />
+    <script src="https://unpkg.com/lucide@latest"></script>
+  </head>
+
+  <!-- 
+      ╔╗ ╔═╗╔╗╔╔╦╗╔═╗
+      ╠╩╗║╣ ║║║ ║ ║ ║
+      ╚═╝╚═╝╝╚╝ ╩ ╚═╝
+      -->
+
+  <body class="">
+    <div class="container">
+      <!-- Clock and Greetings  -->
+
+      <div class="timeBlock">
+        <div class="clock">
+          <div id="hour" class=""></div>
+          <div id="separator" class=""></div>
+          <div id="minutes" class=""></div>
+        </div>
+        <div id="greetings"></div>
+      </div>
+
+      <!-- 
+        ┬  ┬┌─┐┌┬┐┌─┐
+        │  │└─┐ │ └─┐
+        ┴─┘┴└─┘ ┴ └─┘
+        -->
+
+      <div class="card list list__1" id="list_1"></div>
+
+      <div class="card list list__2" id="list_2"></div>
+    </div>
+
+    <!-- Config  -->
+    <script src="config.js"></script>
+
+    <!-- Scripts  -->
+    <script src="assets/js/time.js"></script>
+    <script src="assets/js/theme.js"></script>
+    <script src="assets/js/greeting.js"></script>
+    <script src="assets/js/cards.js"></script>
+    <script src="assets/js/lists.js"></script>
+    <script>
+      lucide.createIcons();
+    </script>
+  </body>
+
+  <!-- Developed and designed by Miguel R. Ávila: -->
+  <!-- https://github.com/migueravila -->
+</html>
diff --git a/docker/nginx/dist/html/package.json b/docker/nginx/dist/html/package.json
new file mode 100644
index 00000000..868e8c82
--- /dev/null
+++ b/docker/nginx/dist/html/package.json
@@ -0,0 +1,7 @@
+{
+  "name": "bento",
+  "version": "1.2.0",
+  "description": "🍱 Minimalist, elegant and hackable startpage inspired by the Bento box!",
+  "author": "Miguel Ávila",
+  "license": "ISC"
+}
\ No newline at end of file
diff --git a/docker/nginx/docker-compose.yml b/docker/nginx/docker-compose.yml
new file mode 100644
index 00000000..74193a08
--- /dev/null
+++ b/docker/nginx/docker-compose.yml
@@ -0,0 +1,29 @@
+version: '2.3'
+
+services:
+
+# nginx service
+  nginx:
+    build: .
+    container_name: nginx
+    restart: always
+    tmpfs:
+     - /var/tmp/nginx/client_body
+     - /var/tmp/nginx/proxy
+     - /var/tmp/nginx/fastcgi
+     - /var/tmp/nginx/uwsgi
+     - /var/tmp/nginx/scgi
+     - /run
+     - /var/lib/nginx/tmp:uid=100,gid=82
+#    cpu_count: 1
+#    cpus: 0.75
+    network_mode: "host"
+    ports:
+     - "64297:64297"
+     - "127.0.0.1:64304:64304"
+    image: "dtagdevsec/nginx:2204"
+    read_only: true
+    volumes:
+     - /data/nginx/cert/:/etc/nginx/cert/:ro
+     - /data/nginx/conf/nginxpasswd:/etc/nginx/nginxpasswd:ro
+     - /data/nginx/log/:/var/log/nginx/
diff --git a/docker/p0f/Dockerfile b/docker/p0f/Dockerfile
index 5fcc89cd..4bb900a2 100644
--- a/docker/p0f/Dockerfile
+++ b/docker/p0f/Dockerfile
@@ -1,9 +1,9 @@
 # In case of problems Alpine 3.13 needs to be used:
 # https://wiki.alpinelinux.org/wiki/Release_Notes_for_Alpine_3.14.0#faccessat2
-FROM alpine:3.14
+FROM alpine:3.15
 #
 # Add source
-ADD . /opt/p0f
+COPY . /opt/p0f
 #
 # Install packages
 RUN apk -U --no-cache add \
diff --git a/docker/p0f/docker-compose.yml b/docker/p0f/docker-compose.yml
index 0b1329b8..14139d5d 100644
--- a/docker/p0f/docker-compose.yml
+++ b/docker/p0f/docker-compose.yml
@@ -7,8 +7,10 @@ services:
     build: .
     container_name: p0f
     restart: always
+#    cpu_count: 1
+#    cpus: 0.75
     network_mode: "host"
-    image: "dtagdevsec/p0f:2006"
+    image: "dtagdevsec/p0f:2204"
     read_only: true
     volumes:
      - /data/p0f/log:/var/log/p0f
diff --git a/docker/redishoneypot/Dockerfile b/docker/redishoneypot/Dockerfile
index bdce15e9..a5aa187e 100644
--- a/docker/redishoneypot/Dockerfile
+++ b/docker/redishoneypot/Dockerfile
@@ -1,7 +1,7 @@
-FROM alpine:3.14
+FROM alpine:3.15
 #
 # Include dist
-ADD dist/ /root/dist/
+COPY dist/ /root/dist/
 #
 # Setup apk
 RUN apk -U --no-cache add \
@@ -35,7 +35,8 @@ RUN apk -U --no-cache add \
                     g++ && \
     rm -rf /var/cache/apk/* \
            /opt/go \
-           /root/dist
+           /root/* \
+	   /opt/redishoneypot/.git
 #
 # Start redishoneypot
 WORKDIR /opt/redishoneypot
diff --git a/docker/redishoneypot/docker-compose.yml b/docker/redishoneypot/docker-compose.yml
index f06e1bd4..93b9f61a 100644
--- a/docker/redishoneypot/docker-compose.yml
+++ b/docker/redishoneypot/docker-compose.yml
@@ -10,11 +10,13 @@ services:
     build: .
     container_name: redishoneypot
     restart: always
+#    cpu_count: 1
+#    cpus: 0.25
     networks:
      - redishoneypot_local
     ports:
      - "6379:6379"    
-    image: "dtagdevsec/redishoneypot:2006"
+    image: "dtagdevsec/redishoneypot:2204"
     read_only: true
     volumes:
      - /data/redishoneypot/log:/var/log/redishoneypot
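With the honeypot exposed on 6379/tcp, a probe from the host should behave like a real Redis server and land in the mapped log directory. A minimal check, assuming `redis-cli` is installed on the host and the mimicked command set answers `PING`:

```bash
# Talk to the honeypot the way a scanner would:
redis-cli -h 127.0.0.1 -p 6379 ping
# The session should be recorded under /data/redishoneypot/log/ via the volume above.
```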
diff --git a/docker/sentrypeer/Dockerfile b/docker/sentrypeer/Dockerfile
new file mode 100644
index 00000000..d923c43d
--- /dev/null
+++ b/docker/sentrypeer/Dockerfile
@@ -0,0 +1,66 @@
+FROM alpine:3.15 as builder
+#
+RUN apk -U add --no-cache \
+            autoconf \
+	    automake \
+	    autoconf-archive \
+	    build-base \
+	    curl-dev \
+	    cmocka-dev \
+	    git \
+	    jansson-dev \
+	    libmicrohttpd-dev \
+            pcre2-dev \
+	    sqlite-dev \
+	    util-linux-dev
+#
+RUN apk -U add --no-cache -X http://dl-cdn.alpinelinux.org/alpine/edge/testing \
+            libosip2-dev
+#
+# Download SentryPeer sources and build
+RUN git clone https://github.com/SentryPeer/SentryPeer -b v1.2.0
+#
+WORKDIR /SentryPeer
+#
+RUN ./bootstrap.sh
+RUN ./configure --disable-opendht --disable-zyre
+RUN make
+RUN make check
+RUN make install
+#RUN tar cvfz sp.tgz /SentryPeer/* && \
+#    mv sp.tgz /
+#
+FROM alpine:3.15
+#
+#COPY --from=builder /sp.tgz /root
+COPY --from=builder /SentryPeer/sentrypeer /opt/sentrypeer/
+#
+# Install packages
+RUN apk -U add --no-cache \
+            jansson \
+            libmicrohttpd \
+	    libuuid \
+            pcre2 \
+	    sqlite-libs && \
+    apk -U add --no-cache -X http://dl-cdn.alpinelinux.org/alpine/edge/testing \
+            libosip2 && \
+#
+# Extract from builder
+#    mkdir /opt/sentrypeer && \
+#    tar xvfz /root/sp.tgz --strip-components=1 -C /opt/sentrypeer/ && \
+#
+# Setup user, groups and configs
+    mkdir -p /var/log/sentrypeer && \
+    addgroup -g 2000 sentrypeer && \
+    adduser -S -H -s /bin/ash -u 2000 -D -g 2000 sentrypeer && \
+    chown -R sentrypeer:sentrypeer /opt/sentrypeer && \
+#
+# Clean up
+    rm -rf /root/* && \
+    rm -rf /var/cache/apk/*
+#
+# Set workdir and start sentrypeer
+STOPSIGNAL SIGKILL
+USER sentrypeer:sentrypeer
+WORKDIR /opt/sentrypeer/
+CMD ./sentrypeer -jar -f /var/log/sentrypeer/sentrypeer.db -l /var/log/sentrypeer/sentrypeer.json

diff --git a/docker/sentrypeer/Dockerfile.alpine.keep b/docker/sentrypeer/Dockerfile.alpine.keep
new file mode 100644
index 00000000..bb04a4da
--- /dev/null
+++ b/docker/sentrypeer/Dockerfile.alpine.keep
@@ -0,0 +1,96 @@
+FROM alpine:3.15 as builder
+#
+RUN apk -U add --no-cache \
+            argon2-dev \
+            autoconf \
+	    automake \
+	    autoconf-archive \
+	    build-base \
+	    curl-dev \
+	    cmocka-dev \
+	    czmq-dev \
+	    git \
+	    jansson-dev \
+	    libtool \
+	    libmicrohttpd-dev \
+            pcre2-dev \
+	    readline-dev \
+	    sqlite-dev \
+	    util-linux-dev \
+	    zeromq-dev
+#
+RUN apk -U add --no-cache -X http://dl-cdn.alpinelinux.org/alpine/edge/testing \
+            libosip2-dev
+RUN apk -U add --no-cache -X http://dl-cdn.alpinelinux.org/alpine/edge/community \
+            asio-dev \
+            msgpack-c-dev \
+	    msgpack-cxx-dev
+#
+# Download and build OpenDHT
+WORKDIR /tmp
+RUN git clone https://github.com/savoirfairelinux/opendht dht 
+WORKDIR /tmp/dht
+RUN ./autogen.sh
+RUN ./configure
+RUN make
+RUN make install
+RUN ldconfig /etc/ld.so.conf.d
+#
+WORKDIR /tmp
+RUN git clone --quiet https://github.com/zeromq/zyre zyre
+WORKDIR /tmp/zyre
+RUN ./autogen.sh 2> /dev/null
+RUN ./configure --quiet --without-docs
+RUN make
+RUN make install
+RUN ldconfig /etc/ld.so.conf.d
+#
+# Download SentryPeer sources and build
+WORKDIR /
+RUN git clone https://github.com/SentryPeer/SentryPeer.git
+#
+WORKDIR /SentryPeer
+#
+RUN cp -R /tmp/dht/* .
+RUN ./bootstrap.sh
+RUN ./configure
+RUN make CPPFLAGS=-D_POSIX_C_SOURCE=199309L
+RUN make check
+RUN make install
+RUN tar cvfz sp.tgz /SentryPeer/* && \
+    mv sp.tgz /
+#
+FROM alpine:3.15
+#
+#COPY --from=builder /sp.tgz /root
+COPY --from=builder /SentryPeer/sentrypeer /opt/sentrypeer/
+#
+# Install packages
+RUN apk -U add --no-cache \
+            jansson \
+            libmicrohttpd \
+	    libuuid \
+            pcre2 \
+	    sqlite-libs && \
+    apk -U add --no-cache -X http://dl-cdn.alpinelinux.org/alpine/edge/testing \
+            libosip2 && \
+#
+# Extract from builder
+#    mkdir /opt/sentrypeer && \
+#    tar xvfz /root/sp.tgz --strip-components=1 -C /opt/sentrypeer/ && \
+#
+# Setup user, groups and configs
+    mkdir -p /var/log/sentrypeer && \
+    addgroup -g 2000 sentrypeer && \
+    adduser -S -H -s /bin/ash -u 2000 -D -g 2000 sentrypeer && \
+    chown -R sentrypeer:sentrypeer /opt/sentrypeer && \
+#
+# Clean up
+    rm -rf /root/* && \
+    rm -rf /var/cache/apk/*
+#
+# Set workdir and start sentrypeer
+STOPSIGNAL SIGKILL
+USER sentrypeer:sentrypeer
+WORKDIR /opt/sentrypeer/
+CMD ./sentrypeer -draws
diff --git a/docker/sentrypeer/Dockerfile.debian.keep b/docker/sentrypeer/Dockerfile.debian.keep
new file mode 100644
index 00000000..8eba1d12
--- /dev/null
+++ b/docker/sentrypeer/Dockerfile.debian.keep
@@ -0,0 +1,95 @@
+FROM debian:bullseye as builder
+ENV DEBIAN_FRONTEND noninteractive
+#
+RUN apt-get update
+RUN apt-get dist-upgrade -y && apt-get install -y \
+            autoconf \
+	    automake \
+	    autoconf-archive \
+	    build-essential \
+	    git \
+	    libcmocka-dev \
+	    libcurl4-gnutls-dev \
+	    libczmq-dev \
+	    libjansson-dev \
+	    libmicrohttpd-dev \
+	    libopendht-dev \
+            libosip2-dev \
+	    libpcre2-dev \
+	    libsqlite3-dev \
+	    libtool
+#
+# Download and build OpenDHT
+WORKDIR /tmp
+RUN git clone https://github.com/savoirfairelinux/opendht opendht
+WORKDIR /tmp/opendht
+RUN ./autogen.sh
+RUN ./configure
+RUN make
+RUN make install
+RUN ldconfig
+#
+# Download and build Zyre
+WORKDIR /tmp
+RUN git clone https://github.com/zeromq/zyre -b v2.0.1 zyre
+WORKDIR /tmp/zyre
+RUN ./autogen.sh
+RUN ./configure --without-docs
+RUN make
+RUN make install
+RUN ldconfig
+#
+# Download and build SentryPeer
+WORKDIR /
+RUN git clone https://github.com/SentryPeer/SentryPeer -b v1.0.0
+#
+WORKDIR /SentryPeer
+#
+RUN cp -r /tmp/opendht .
+RUN ./bootstrap.sh
+RUN ./configure
+RUN make
+RUN make check
+RUN make install
+#RUN tar cvfz sp.tgz /SentryPeer/* && \
+#    mv sp.tgz /
+#RUN exit 1
+#
+FROM debian:bullseye
+#
+#COPY --from=builder /sp.tgz /root
+COPY --from=builder /SentryPeer/sentrypeer /opt/sentrypeer/
+#
+# Install packages
+RUN apt-get update && \
+    apt-get dist-upgrade -y && \
+    apt-get install -y \
+            libcmocka0 \
+            libcurl4 \
+            libczmq4 \
+            libjansson4 \
+            libmicrohttpd12 \
+            libosip2-11 \
+            libsqlite3-0 \
+            pcre2-utils && \
+#
+# Extract from builder
+#    mkdir /opt/sentrypeer && \
+#    tar xvfz /root/sp.tgz --strip-components=1 -C /opt/sentrypeer/ && \
+#
+# Setup user, groups and configs
+    mkdir -p /var/log/sentrypeer && \
+    addgroup --gid 2000 sentrypeer && \
+    adduser --system --no-create-home --shell /bin/bash --uid 2000 --disabled-password --disabled-login --gid 2000 sentrypeer && \
+    chown -R sentrypeer:sentrypeer /opt/sentrypeer && \
+#
+# Clean up
+    rm -rf /root/* && \
+    apt-get autoremove -y --purge && \
+    apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
+#
+# Set workdir and start sentrypeer
+STOPSIGNAL SIGKILL
+USER sentrypeer:sentrypeer
+WORKDIR /opt/sentrypeer/
+CMD ./sentrypeer -draws
diff --git a/docker/sentrypeer/docker-compose.yml b/docker/sentrypeer/docker-compose.yml
new file mode 100644
index 00000000..9b376434
--- /dev/null
+++ b/docker/sentrypeer/docker-compose.yml
@@ -0,0 +1,23 @@
+version: '2.3'
+
+networks:
+  sentrypeer_local:
+
+services:
+
+# SentryPeer service
+  sentrypeer:
+    build: .
+    container_name: sentrypeer
+    restart: always
+#    cpu_count: 1
+#    cpus: 0.25
+    networks:
+     - sentrypeer_local
+    ports:
+     - "5060:5060/udp"
+    # - "127.0.0.1:8082:8082"
+    image: "dtagdevsec/sentrypeer:2204"
+    read_only: true
+    volumes:
+     - /data/sentrypeer/log:/var/log/sentrypeer
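SentryPeer only listens on 5060/udp here, so a hand-rolled SIP OPTIONS probe is enough to confirm logging works. A sketch using netcat (the message fields are arbitrary test values, not anything SentryPeer requires):

```bash
# Send a minimal SIP OPTIONS request over UDP and give up after one second:
printf 'OPTIONS sip:100@127.0.0.1 SIP/2.0\r\nVia: SIP/2.0/UDP 127.0.0.1:5061\r\nFrom: <sip:probe@127.0.0.1>\r\nTo: <sip:100@127.0.0.1>\r\nCall-ID: probe-1\r\nCSeq: 1 OPTIONS\r\nContent-Length: 0\r\n\r\n' \
  | nc -u -w1 127.0.0.1 5060
# The attempt should then appear in /data/sentrypeer/log/sentrypeer.json.
```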
diff --git a/docker/spiderfoot/Dockerfile b/docker/spiderfoot/Dockerfile
index ae058415..9b2f845c 100644
--- a/docker/spiderfoot/Dockerfile
+++ b/docker/spiderfoot/Dockerfile
@@ -1,4 +1,7 @@
-FROM alpine:3.14
+FROM alpine:3.15
+#
+# Include dist
+COPY dist/ /root/dist/
 #
 # Get and install dependencies & packages
 RUN apk -U --no-cache add \
@@ -18,7 +21,31 @@ RUN apk -U --no-cache add \
             openssl-dev \
             python3 \
             python3-dev \
+	    py3-cryptography \
+	    py3-ipaddr \
+	    py3-beautifulsoup4 \
+	    py3-dnspython \
+	    py3-exifread \
+	    py3-future \
+	    py3-jaraco.classes \
+	    py3-jaraco.context \
+	    py3-jaraco.functools \
+	    py3-lxml \
+	    py3-mako \
+	    py3-more-itertools \
+	    py3-netaddr \
+	    py3-networkx \
+	    py3-openssl \
+	    py3-pillow \
+	    py3-portend \
+	    py3-pypdf2 \
+	    py3-phonenumbers \
             py3-pip \
+	    py3-pysocks \
+	    py3-requests \
+	    py3-tempora \
+	    py3-wheel \
+	    py3-xlsxwriter \
             swig \
 	    tinyxml \
 	    tinyxml-dev \
@@ -29,11 +56,12 @@ RUN apk -U --no-cache add \
     adduser -S -s /bin/ash -u 2000 -D -g 2000 spiderfoot && \
 #
 # Install spiderfoot 
-    git clone --depth=1 -b v3.4 https://github.com/smicallef/spiderfoot /home/spiderfoot && \
+    git clone --depth=1 -b v3.5 https://github.com/smicallef/spiderfoot /home/spiderfoot && \
     cd /home/spiderfoot && \
     pip3 install --upgrade pip && \
-    pip3 install --no-cache-dir wheel && \ 
+    cp /root/dist/requirements.txt . && \
     pip3 install --no-cache-dir -r requirements.txt && \ 
+    mkdir -p /home/spiderfoot/.spiderfoot/logs && \
     chown -R spiderfoot:spiderfoot /home/spiderfoot && \
     sed -i "s#'root': '\/'#'root': '\/spiderfoot'#" /home/spiderfoot/sf.py && \
     sed -i "s#'root', '\/'#'root', '\/spiderfoot'#" /home/spiderfoot/sf.py && \
@@ -50,13 +78,12 @@ RUN apk -U --no-cache add \
                     python3-dev \
 		    swig \
 		    tinyxml-dev && \
-    rm -rf /var/cache/apk/*
+    rm -rf /var/cache/apk/* /home/spiderfoot/.git
 #
 # Healthcheck
-#HEALTHCHECK --retries=10 CMD curl -s -XGET 'http://127.0.0.1:8080'
 HEALTHCHECK --retries=10 CMD curl -s -XGET 'http://127.0.0.1:8080/spiderfoot/'
 #
 # Set user, workdir and start spiderfoot
 USER spiderfoot:spiderfoot
 WORKDIR /home/spiderfoot
-CMD ["/usr/bin/python3.9", "sf.py","-l", "0.0.0.0:8080"]
+CMD echo -n >> /home/spiderfoot/.spiderfoot/spiderfoot.db && exec /usr/bin/python3.9 sf.py -l 0.0.0.0:8080
diff --git a/docker/spiderfoot/dist/requirements.txt b/docker/spiderfoot/dist/requirements.txt
new file mode 100644
index 00000000..3da7bd68
--- /dev/null
+++ b/docker/spiderfoot/dist/requirements.txt
@@ -0,0 +1,11 @@
+adblockparser>=0.7,<1
+CherryPy>=18.6.1,<19
+cherrypy-cors>=1.6,<2
+ipwhois>=1.1.0,<1.2.0
+pygexf>=0.2.2,<0.3
+python-whois>=0.7.3,<0.8
+secure>=0.3.0,<0.4.0
+python-docx>=0.8.11,<0.9
+python-pptx>=0.6.21,<0.7
+publicsuffixlist>=0.7.9,<0.8
+openpyxl>=3.0.9,<4
diff --git a/docker/spiderfoot/docker-compose.yml b/docker/spiderfoot/docker-compose.yml
index efc808c9..30a60696 100644
--- a/docker/spiderfoot/docker-compose.yml
+++ b/docker/spiderfoot/docker-compose.yml
@@ -10,10 +10,12 @@ services:
     build: .
     container_name: spiderfoot
     restart: always
+#    cpu_count: 1
+#    cpus: 0.75
     networks:
      - spiderfoot_local
     ports:
      - "127.0.0.1:64303:8080"
-    image: "dtagdevsec/spiderfoot:2006"
+    image: "dtagdevsec/spiderfoot:2204"
     volumes:
-     - /data/spiderfoot/spiderfoot.db:/home/spiderfoot/spiderfoot.db
+     - /data/spiderfoot:/home/spiderfoot/.spiderfoot
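Spiderfoot now keeps its whole `.spiderfoot` state directory (database plus logs) on the host instead of a single db file. The Dockerfile's healthcheck hits the `/spiderfoot/` web root, which can be reproduced from the host (port mapping as declared above):

```bash
# Should print 200 once the container is healthy:
curl -s -o /dev/null -w '%{http_code}\n' http://127.0.0.1:64303/spiderfoot/
```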
diff --git a/docker/suricata/Dockerfile b/docker/suricata/Dockerfile
index 1e9f5171..dac3172d 100644
--- a/docker/suricata/Dockerfile
+++ b/docker/suricata/Dockerfile
@@ -1,7 +1,7 @@
-FROM alpine:3.14
+FROM alpine:edge
 #
 # Include dist
-ADD dist/ /root/dist/
+COPY dist/ /root/dist/
 #
 # Install packages
 RUN apk -U --no-cache add \
@@ -16,16 +16,18 @@ RUN apk -U --no-cache add \
 # Setup user, groups and configs
     addgroup -g 2000 suri && \
     adduser -S -H -u 2000 -D -g 2000 suri && \
-    chmod 644 /etc/suricata/*.config && \
     cp /root/dist/*.yaml /etc/suricata/ && \
     cp /root/dist/*.conf /etc/suricata/ && \
     cp /root/dist/*.bpf /etc/suricata/ && \
+    cp /root/dist/update.sh /usr/bin/ && \
+    chmod 644 /etc/suricata/*.config && \
+    chmod 755 -R /var/lib/suricata && \
+    chmod 755 /usr/bin/update.sh && \
+    chown -R root:suri /tmp /run && \
 #
 # Download the latest EmergingThreats OPEN ruleset
-    cp /root/dist/update.sh /usr/bin/ && \
-    chmod 755 /usr/bin/update.sh && \
     suricata-update update-sources && \
-    suricata-update --no-reload && \
+    suricata-update --no-test --no-reload && \
 #
 # Clean up
     rm -rf /root/* && \
diff --git a/docker/suricata/dist/suricata.yaml b/docker/suricata/dist/suricata.yaml
index 0bf81036..bb523417 100644
--- a/docker/suricata/dist/suricata.yaml
+++ b/docker/suricata/dist/suricata.yaml
@@ -988,9 +988,9 @@ asn1-max-frames: 256
 ##
 
 # Run Suricata with a specific user-id and group-id:
-#run-as:
-#  user: suri
-#  group: suri
+run-as:
+  user: suri
+  group: suri
 
 # Some logging modules will use that name in event as identifier. The default
 # value is the hostname
diff --git a/docker/suricata/docker-compose.yml b/docker/suricata/docker-compose.yml
index 4568fba9..b9eed19c 100644
--- a/docker/suricata/docker-compose.yml
+++ b/docker/suricata/docker-compose.yml
@@ -10,11 +10,13 @@ services:
     environment:
     # For ET Pro ruleset replace "OPEN" with your OINKCODE
      - OINKCODE=OPEN
+    # Load external rules from a URL
+    # - FROMURL="https://username:password@yoururl.com|https://username:password@otherurl.com"
     network_mode: "host"
     cap_add:
      - NET_ADMIN
      - SYS_NICE
      - NET_RAW
-    image: "dtagdevsec/suricata:2006"
+    image: "dtagdevsec/suricata:2204"
     volumes:
      - /data/suricata/log:/var/log/suricata
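The new `FROMURL` option accepts one or more `|`-separated URLs, optionally with basic-auth credentials, for loading external rulesets. Before wiring a URL into the compose file it is worth checking it resolves and authenticates; a hedged example (URL, path and credentials are placeholders mirroring the comment above):

```bash
# Expect an HTTP 200 status line if the ruleset URL and credentials are valid:
curl -u username:password -sI https://yoururl.com/rules.tar.gz | head -n 1
```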
diff --git a/docker/tanner/docker-compose.yml b/docker/tanner/docker-compose.yml
index b70977a3..b477a845 100644
--- a/docker/tanner/docker-compose.yml
+++ b/docker/tanner/docker-compose.yml
@@ -12,9 +12,11 @@ services:
     restart: always
     stop_signal: SIGKILL
     tty: true
+#    cpu_count: 1
+#    cpus: 0.25
     networks:
      - tanner_local
-    image: "dtagdevsec/redis:2006"
+    image: "dtagdevsec/redis:2204"
     read_only: true
 
 # PHP Sandbox service
@@ -26,9 +28,11 @@ services:
     tmpfs:
      - /tmp:uid=2000,gid=2000
     tty: true
+#    cpu_count: 1
+#    cpus: 0.25
     networks:
      - tanner_local
-    image: "dtagdevsec/phpox:2006"
+    image: "dtagdevsec/phpox:2204"
     read_only: true
 
 # Tanner API Service
@@ -40,9 +44,11 @@ services:
     tmpfs:
      - /tmp/tanner:uid=2000,gid=2000
     tty: true
+#    cpu_count: 1
+#    cpus: 0.25
     networks:
      - tanner_local
-    image: "dtagdevsec/tanner:2006"
+    image: "dtagdevsec/tanner:2204"
     read_only: true
     volumes:
      - /data/tanner/log:/var/log/tanner
@@ -51,25 +57,25 @@ services:
      - tanner_redis
 
 # Tanner WEB Service
-  tanner_web:
-    build: ./tanner
-    container_name: tanner_web
-    restart: always
-    stop_signal: SIGKILL
-    tmpfs:
-     - /tmp/tanner:uid=2000,gid=2000
-    tty: true
-    networks:
-     - tanner_local
+#  tanner_web:
+#    build: ./tanner
+#    container_name: tanner_web
+#    restart: always
+#    stop_signal: SIGKILL
+#    tmpfs:
+#     - /tmp/tanner:uid=2000,gid=2000
+#    tty: true
+#    networks:
+#     - tanner_local
 #    ports:
 #     - "127.0.0.1:8091:8091"
-    image: "dtagdevsec/tanner:2006"
-    command: tannerweb
-    read_only: true
-    volumes:
-     - /data/tanner/log:/var/log/tanner
-    depends_on:
-     - tanner_redis
+#    image: "dtagdevsec/tanner:2204"
+#    command: tannerweb
+#    read_only: true
+#    volumes:
+#     - /data/tanner/log:/var/log/tanner
+#    depends_on:
+#     - tanner_redis
 
 # Tanner Service
   tanner:
@@ -80,9 +86,11 @@ services:
     tmpfs:
      - /tmp/tanner:uid=2000,gid=2000
     tty: true
+#    cpu_count: 1
+#    cpus: 0.25
     networks:
      - tanner_local
-    image: "dtagdevsec/tanner:2006"
+    image: "dtagdevsec/tanner:2204"
     command: tanner
     read_only: true
     volumes:
@@ -90,7 +98,7 @@ services:
      - /data/tanner/files:/opt/tanner/files
     depends_on:
      - tanner_api
-     - tanner_web
+#     - tanner_web
      - tanner_phpox
 
 # Snare Service
@@ -100,10 +108,12 @@ services:
     restart: always
     stop_signal: SIGKILL
     tty: true
+#    cpu_count: 1
+#    cpus: 0.25
     networks:
      - tanner_local
     ports:
      - "80:80"
-    image: "dtagdevsec/snare:2006"
+    image: "dtagdevsec/snare:2204"
     depends_on:
      - tanner
diff --git a/docker/tanner/phpox/Dockerfile b/docker/tanner/phpox/Dockerfile
index 59fa7184..cebf5591 100644
--- a/docker/tanner/phpox/Dockerfile
+++ b/docker/tanner/phpox/Dockerfile
@@ -1,4 +1,4 @@
-FROM alpine:3.13
+FROM alpine:3.15
 #
 # Install packages
 RUN apk -U --no-cache add \
@@ -8,7 +8,7 @@ RUN apk -U --no-cache add \
                make \
                php7 \
                php7-dev \
-	       py3-pip \
+	       py3-aiohttp \
                python3 \
                python3-dev \
                re2c && \
@@ -31,7 +31,6 @@ RUN apk -U --no-cache add \
     git clone https://github.com/mushorg/phpox /opt/phpox && \
     cd /opt/phpox && \
     git checkout a62c8136ec7b3ebab0c989f4235e2960175121f8 && \
-    pip3 install -r requirements.txt && \
     make && \
 #
 # Clean up
@@ -39,8 +38,7 @@ RUN apk -U --no-cache add \
                     git \
                     php7-dev \
                     python3-dev && \
-    rm -rf /root/* && \
-    rm -rf /var/cache/apk/*
+    rm -rf /root/* /var/cache/apk/* /opt/phpox/.git
 #
 # Set workdir and start phpsandbox
 STOPSIGNAL SIGKILL
diff --git a/docker/tanner/redis/Dockerfile b/docker/tanner/redis/Dockerfile
index 83b862d8..3ac962b0 100644
--- a/docker/tanner/redis/Dockerfile
+++ b/docker/tanner/redis/Dockerfile
@@ -1,18 +1,24 @@
-FROM redis:alpine
+FROM alpine:3.15
 #
 # Include dist
-ADD dist/ /root/dist/
+COPY dist/ /root/dist/
 #
-# Setup apt
-RUN apk -U --no-cache add redis && \ 
+# Setup apk and redis
+RUN apk -U --no-cache add redis shadow && \
     cp /root/dist/redis.conf /etc && \
 #
+# Setup user and group
+    groupmod -g 2000 redis && \
+    usermod -u 2000 redis && \
+#
 # Clean up
+    apk del --purge \ 
+            shadow && \
     rm -rf /root/* && \
     rm -rf /tmp/* /var/tmp/* && \
     rm -rf /var/cache/apk/*
 #
 # Start redis
 STOPSIGNAL SIGKILL
-USER nobody:nobody
+USER redis:redis
 CMD redis-server /etc/redis.conf
diff --git a/docker/tanner/snare/Dockerfile b/docker/tanner/snare/Dockerfile
index 0157f6bb..1f8db9a4 100644
--- a/docker/tanner/snare/Dockerfile
+++ b/docker/tanner/snare/Dockerfile
@@ -1,7 +1,7 @@
-FROM alpine:3.14
+FROM alpine:3.15
 #
 # Include dist
-ADD dist/ /root/dist/
+COPY dist/ /root/dist/
 #
 # Setup apt
 RUN apk -U --no-cache add \
@@ -9,14 +9,22 @@ RUN apk -U --no-cache add \
                git \
                linux-headers \
                python3 \
-               python3-dev \ 
-               py3-pip && \ 
+               python3-dev \
+	       py3-aiohttp \
+	       py3-beautifulsoup4 \
+	       py3-gitpython \
+	       py3-jinja2 \
+	       py3-markupsafe \
+	       py3-setuptools \
+               py3-pip \
+	       py3-pycodestyle \
+	       py3-wheel && \
 #
 # Setup Snare 
     git clone https://github.com/mushorg/snare /opt/snare && \
     cd /opt/snare/ && \
     git checkout 0919a80838eb0823a3b7029b0264628ee0a36211 && \
-    pip3 install --no-cache-dir setuptools && \
+    cp /root/dist/requirements.txt . && \
     pip3 install --no-cache-dir -r requirements.txt && \
     python3 setup.py install && \
     cd / && \
@@ -24,6 +32,12 @@ RUN apk -U --no-cache add \
     mkdir -p /opt/snare/pages && \
 #    clone --target http://example.com && \
     mv /root/dist/pages/* /opt/snare/pages/ && \
+#
+# Setup configs, user, groups
+    addgroup -g 2000 snare && \
+    adduser -S -s /bin/ash -u 2000 -D -g 2000 snare && \
+    mkdir /var/log/tanner && \
+    chown -R snare:snare /opt/snare && \
 #   
 # Clean up
     apk del --purge \
@@ -36,4 +50,6 @@ RUN apk -U --no-cache add \
 #
 # Start snare
 STOPSIGNAL SIGKILL
-CMD snare --tanner tanner --debug true --no-dorks true --auto-update false --host-ip 0.0.0.0 --port 80 --page-dir $(shuf -i 1-10 -n 1)
+USER snare:snare
+#CMD snare --tanner tanner --debug true --no-dorks true --auto-update false --host-ip 0.0.0.0 --port 80 --page-dir $(shuf -i 1-10 -n 1)
+CMD snare --tanner tanner --debug true --auto-update false --host-ip 0.0.0.0 --port 80 --page-dir $(shuf -i 1-10 -n 1)
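The `--page-dir $(shuf -i 1-10 -n 1)` in the CMD picks one of the ten bundled page sets at random on every container start, so repeated deployments don't all serve the identical site. The selection itself is plain coreutils:

```bash
# Prints a single random integer between 1 and 10 (inclusive):
shuf -i 1-10 -n 1
```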
diff --git a/docker/tanner/snare/dist/requirements.txt b/docker/tanner/snare/dist/requirements.txt
new file mode 100644
index 00000000..42765b8e
--- /dev/null
+++ b/docker/tanner/snare/dist/requirements.txt
@@ -0,0 +1,2 @@
+aiohttp_jinja2==1.1.0
+cssutils==1.0.2
diff --git a/docker/tanner/tanner/Dockerfile b/docker/tanner/tanner/Dockerfile
index 204ebe13..76e6d042 100644
--- a/docker/tanner/tanner/Dockerfile
+++ b/docker/tanner/tanner/Dockerfile
@@ -1,7 +1,7 @@
-FROM alpine:3.13
+FROM alpine:3.15
 #
 # Include dist
-ADD dist/ /root/dist/
+COPY dist/ /root/dist/
 #
 # Setup apt
 RUN apk -U --no-cache add \
@@ -11,7 +11,21 @@ RUN apk -U --no-cache add \
                libffi-dev \
                openssl-dev \
                linux-headers \
+	       py3-aiohttp \
+	       py3-geoip2 \
+	       py3-jinja2 \
+	       py3-jwt \
+	       py3-mako \
+	       py3-mysqlclient \
+	       py3-packaging \
 	       py3-pip \
+	       py3-redis \
+	       py3-pycodestyle \
+	       py3-setuptools \
+	       py3-tornado \
+	       py3-websocket-client \
+	       py3-wheel \
+	       py3-yaml \
                py3-yarl \
                python3 \
                python3-dev && \ 
@@ -21,11 +35,12 @@ RUN apk -U --no-cache add \
     cd /opt/tanner/ && \
 #    git fetch origin pull/364/head:test && \	
 #    git checkout test && \
-    git checkout 20dabcbccc50f8878525677b925a4c9abcaf9f54 && \
-    sed -i 's/aioredis/aioredis==1.3.1/g' requirements.txt && \
-    sed -i 's/^aiohttp$/aiohttp==3.7.4/g' requirements.txt && \
+#    git checkout 20dabcbccc50f8878525677b925a4c9abcaf9f54 && \
+    git checkout 2fdce2e2ad7e125012c7e6dcbfa02b50f73c128e && \
+#    sed -i 's/aioredis/aioredis==1.3.1/g' requirements.txt && \
+#    sed -i 's/^aiohttp$/aiohttp==3.7.4/g' requirements.txt && \
     cp /root/dist/config.yaml /opt/tanner/tanner/data && \
-    pip3 install --no-cache-dir setuptools && \
+    cp /root/dist/requirements.txt . && \
     pip3 install --no-cache-dir -r requirements.txt && \
     python3 setup.py install && \
     rm -rf .coveragerc \
@@ -57,8 +72,7 @@ RUN apk -U --no-cache add \
             linux-headers \
             python3-dev && \
     rm -rf /root/* && \
-    rm -rf /tmp/* /var/tmp/* && \
-    rm -rf /var/cache/apk/*
+    rm -rf /tmp/* /var/tmp/* /var/cache/apk/* /opt/tanner/.git
 #
 # Start tanner
 STOPSIGNAL SIGKILL
diff --git a/docker/tanner/tanner/dist/requirements.txt b/docker/tanner/tanner/dist/requirements.txt
new file mode 100644
index 00000000..162e6e82
--- /dev/null
+++ b/docker/tanner/tanner/dist/requirements.txt
@@ -0,0 +1,8 @@
+aiomysql
+aiohttp_jinja2==1.1.0
+docker<2.6
+mimesis<3.0.0
+aioredis
+pymongo
+pylibinjection
+aiodocker
diff --git a/docker/wordpot/Dockerfile b/docker/wordpot/Dockerfile
new file mode 100644
index 00000000..ea80eb11
--- /dev/null
+++ b/docker/wordpot/Dockerfile
@@ -0,0 +1,47 @@
+FROM alpine:3.15
+#
+# Include dist
+COPY dist/ /root/dist/
+#
+# Install packages
+RUN apk -U --no-cache add \
+             build-base \
+             git \
+	     libcap \
+	     py3-click \
+	     py3-flask \
+	     py3-itsdangerous \
+	     py3-jinja2 \
+	     py3-markupsafe \
+	     py3-pip \
+	     py3-werkzeug \
+             python3 \
+             python3-dev && \
+#
+# Install wordpot from GitHub and setup
+    mkdir -p /opt && \
+    cd /opt/ && \
+    git clone https://github.com/Will-777/wordpot2 && \
+    cd wordpot2 && \
+    git checkout e93a2e00d84d280b0acd58ba6889b4bee8a6e4d2 && \
+#    cp /root/dist/views.py /opt/wordpot2/wordpot/views.py && \
+    cp /root/dist/requirements.txt . && \
+    pip3 install -r requirements.txt && \
+    setcap cap_net_bind_service=+ep /usr/bin/python3.9 && \
+#
+# Setup user, groups and configs
+    addgroup -g 2000 wordpot && \
+    adduser -S -H -s /bin/ash -u 2000 -D -g 2000 wordpot && \
+    chown wordpot:wordpot -R /opt/wordpot2 && \
+#
+# Clean up
+    apk del --purge build-base \
+                    git \
+		    python3-dev && \
+    rm -rf /root/* /var/cache/apk/* /opt/wordpot2/.git
+#
+# Start wordpot
+STOPSIGNAL SIGINT
+USER wordpot:wordpot
+WORKDIR /opt/wordpot2
+CMD ["/usr/bin/python3","wordpot2.py", "--host", "0.0.0.0", "--port", "80", "--title", "Wordpress"]
diff --git a/docker/wordpot/dist/requirements.txt b/docker/wordpot/dist/requirements.txt
new file mode 100644
index 00000000..b2378c53
--- /dev/null
+++ b/docker/wordpot/dist/requirements.txt
@@ -0,0 +1 @@
+hpfeeds-threatstream==1.1
diff --git a/docker/wordpot/docker-compose.yml b/docker/wordpot/docker-compose.yml
new file mode 100644
index 00000000..fc16d0a0
--- /dev/null
+++ b/docker/wordpot/docker-compose.yml
@@ -0,0 +1,22 @@
+version: '2.3'
+
+networks:
+  wordpot_local:
+
+services:
+
+# Wordpot service
+  wordpot:
+    build: .
+    container_name: wordpot
+    restart: always
+#    cpu_count: 1
+#    cpus: 0.25
+    networks:
+     - wordpot_local
+    ports:
+     - "80:80"
+    image: "dtagdevsec/wordpot:2204"
+#    read_only: true
+#    volumes:
+#     - /data/wordpot/log:/opt/wordpot2/log
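Once up, Wordpot should answer like a WordPress install on port 80. A quick probe from the host (the path is the classic scanner target; the exact output depends on wordpot2's templates):

```bash
# A fake login page indicates the honeypot is serving:
curl -s http://127.0.0.1:80/wp-login.php | grep -i -m1 'wordpress'
```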
diff --git a/etc/compose/collector.yml b/etc/compose/collector.yml
index b20c5125..2e8134e8 100644
--- a/etc/compose/collector.yml
+++ b/etc/compose/collector.yml
@@ -3,7 +3,6 @@
 version: '2.3'
 
 networks:
-  cyberchef_local:
   heralding_local:
   ewsposter_local:
   spiderfoot_local:
@@ -39,7 +38,7 @@ services:
      - "3389:3389"
      - "5432:5432"
      - "5900:5900"
-    image: "dtagdevsec/heralding:2006"
+    image: "dtagdevsec/heralding:2204"
     read_only: true
     volumes:
      - /data/heralding/log:/var/log/heralding
@@ -53,7 +52,7 @@ services:
     network_mode: "host"
     cap_add:
      - NET_ADMIN
-    image: "dtagdevsec/honeytrap:2006"
+    image: "dtagdevsec/honeytrap:2204"
     read_only: true
     volumes:
      - /data/honeytrap/attacks:/opt/honeytrap/var/attacks
@@ -74,7 +73,7 @@ services:
      - NET_ADMIN
      - SYS_NICE
      - NET_RAW
-    image: "dtagdevsec/fatt:2006"
+    image: "dtagdevsec/fatt:2204"
     volumes:
      - /data/fatt/log:/opt/fatt/log
 
@@ -83,7 +82,7 @@ services:
     container_name: p0f
     restart: always
     network_mode: "host"
-    image: "dtagdevsec/p0f:2006"
+    image: "dtagdevsec/p0f:2204"
     read_only: true
     volumes:
      - /data/p0f/log:/var/log/p0f
@@ -95,12 +94,14 @@ services:
     environment:
     # For ET Pro ruleset replace "OPEN" with your OINKCODE
      - OINKCODE=OPEN
+    # Load external rules from a URL
+    # - FROMURL="https://username:password@yoururl.com|https://username:password@otherurl.com"
     network_mode: "host"
     cap_add:
      - NET_ADMIN
      - SYS_NICE
      - NET_RAW
-    image: "dtagdevsec/suricata:2006"
+    image: "dtagdevsec/suricata:2204"
     volumes:
      - /data/suricata/log:/var/log/suricata
 
@@ -109,17 +110,6 @@ services:
 #### Tools
 ##################
 
-# Cyberchef service
-  cyberchef:
-    container_name: cyberchef
-    restart: always
-    networks:
-     - cyberchef_local
-    ports:
-     - "127.0.0.1:64299:8000"
-    image: "dtagdevsec/cyberchef:2006"
-    read_only: true
-
 #### ELK
 ## Elasticsearch service
   elasticsearch:
@@ -127,7 +117,7 @@ services:
     restart: always
     environment:
      - bootstrap.memory_lock=true
-#     - ES_JAVA_OPTS=-Xms2048m -Xmx2048m
+     - ES_JAVA_OPTS=-Xms2048m -Xmx2048m
      - ES_TMPDIR=/tmp
     cap_add:
      - IPC_LOCK
@@ -138,10 +128,10 @@ services:
       nofile:
         soft: 65536
         hard: 65536
-#    mem_limit: 4g
+    mem_limit: 4g
     ports:
      - "127.0.0.1:64298:9200"
-    image: "dtagdevsec/elasticsearch:2006"
+    image: "dtagdevsec/elasticsearch:2204"
     volumes:
      - /data:/data
 
@@ -152,36 +142,65 @@ services:
     depends_on:
       elasticsearch:
         condition: service_healthy
+    mem_limit: 1g
     ports:
      - "127.0.0.1:64296:5601"
-    image: "dtagdevsec/kibana:2006"
+    image: "dtagdevsec/kibana:2204"
 
 ## Logstash service
   logstash:
     container_name: logstash
     restart: always
-#    environment:
-#     - LS_JAVA_OPTS=-Xms2048m -Xmx2048m
+    environment:
+     - LS_JAVA_OPTS=-Xms1024m -Xmx1024m
     depends_on:
       elasticsearch:
         condition: service_healthy
     env_file:
      - /opt/tpot/etc/compose/elk_environment
-    image: "dtagdevsec/logstash:2006"
+    mem_limit: 2g
+    image: "dtagdevsec/logstash:2204"
     volumes:
      - /data:/data
 
-## Elasticsearch-head service
-  head:
-    container_name: head
+## Map Redis Service
+  map_redis:
+    container_name: map_redis
+    restart: always
+    stop_signal: SIGKILL
+    tty: true
+    image: "dtagdevsec/redis:2204"
+    read_only: true
+
+## Map Web Service
+  map_web:
+    container_name: map_web
+    restart: always
+    environment:
+     - MAP_COMMAND=AttackMapServer.py
+    env_file:
+     - /opt/tpot/etc/compose/elk_environment
+    stop_signal: SIGKILL
+    tty: true
+    ports:
+     - "127.0.0.1:64299:64299"
+    image: "dtagdevsec/map:2204"
+
+## Map Data Service
+  map_data:
+    container_name: map_data
     restart: always
     depends_on:
       elasticsearch:
         condition: service_healthy
-    ports:
-     - "127.0.0.1:64302:9100"
-    image: "dtagdevsec/head:2006"
-    read_only: true
+    environment:
+     - MAP_COMMAND=DataServer_v2.py
+    env_file:
+     - /opt/tpot/etc/compose/elk_environment
+    stop_signal: SIGKILL
+    tty: true
+    image: "dtagdevsec/map:2204"
+#### /ELK
 
 # Ewsposter service
   ewsposter:
@@ -200,7 +219,7 @@ services:
      - EWS_HPFEEDS_FORMAT=json
     env_file:
      - /opt/tpot/etc/compose/elk_environment
-    image: "dtagdevsec/ewsposter:2006"
+    image: "dtagdevsec/ewsposter:2204"
     volumes:
      - /data:/data
      - /data/ews/conf/ews.ip:/opt/ewsposter/ews.ip
@@ -209,34 +228,24 @@ services:
   nginx:
     container_name: nginx
     restart: always
-    environment:
-    ### If set to YES all changes within Heimdall will remain for the next start
-    ### Make sure to uncomment the corresponding volume statements below, or the setting will prevent a successful start of T-Pot.
-     - HEIMDALL_PERSIST=NO
     tmpfs:
      - /var/tmp/nginx/client_body
      - /var/tmp/nginx/proxy
      - /var/tmp/nginx/fastcgi
      - /var/tmp/nginx/uwsgi
-     - /var/tmp/nginx/scgi 
+     - /var/tmp/nginx/scgi
      - /run
-     - /var/log/php7/
-     - /var/lib/nginx/tmp:uid=100,gid=82 
-     - /var/lib/nginx/html/storage/logs:uid=100,gid=82
-     - /var/lib/nginx/html/storage/framework/views:uid=100,gid=82
+     - /var/lib/nginx/tmp:uid=100,gid=82
     network_mode: "host"
     ports:
      - "64297:64297"
      - "127.0.0.1:64304:64304"
-    image: "dtagdevsec/nginx:2006"
+    image: "dtagdevsec/nginx:2204"
     read_only: true
     volumes:
      - /data/nginx/cert/:/etc/nginx/cert/:ro
      - /data/nginx/conf/nginxpasswd:/etc/nginx/nginxpasswd:ro
      - /data/nginx/log/:/var/log/nginx/
-    ### Enable the following volumes if you set HEIMDALL_PERSIST=YES
-    # - /data/nginx/heimdall/database:/var/lib/nginx/html/database
-    # - /data/nginx/heimdall/storage:/var/lib/nginx/html/storage
 
 # Spiderfoot service
   spiderfoot:
@@ -246,6 +255,6 @@ services:
      - spiderfoot_local
     ports:
      - "127.0.0.1:64303:8080"
-    image: "dtagdevsec/spiderfoot:2006"
+    image: "dtagdevsec/spiderfoot:2204"
     volumes:
-     - /data/spiderfoot/spiderfoot.db:/home/spiderfoot/spiderfoot.db
+     - /data/spiderfoot:/home/spiderfoot/.spiderfoot
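The three `map_*` services implement the new GeoIP attack map: `map_redis` buffers events, `map_data` (running `DataServer_v2.py`) pulls them from Elasticsearch, and `map_web` serves the live map on loopback port 64299, normally proxied through nginx. A local reachability check, as a sketch:

```bash
# Expect an HTTP response code from the attack map web service on loopback:
curl -s -o /dev/null -w '%{http_code}\n' http://127.0.0.1:64299/
```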
diff --git a/etc/compose/hive.yml b/etc/compose/hive.yml
index 32011ec3..29825486 100644
--- a/etc/compose/hive.yml
+++ b/etc/compose/hive.yml
@@ -3,7 +3,6 @@
 version: '2.3'
 
 networks:
-  cyberchef_local:
   spiderfoot_local:
 
 services:
@@ -12,17 +11,6 @@ services:
 #### Tools
 ##################
 
-# Cyberchef service
-  cyberchef:
-    container_name: cyberchef
-    restart: always
-    networks:
-     - cyberchef_local
-    ports:
-     - "127.0.0.1:64299:8000"
-    image: "dtagdevsec/cyberchef:2006"
-    read_only: true
-
 #### ELK
 ## Elasticsearch service
   elasticsearch:
@@ -30,7 +18,7 @@ services:
     restart: always
     environment:
      - bootstrap.memory_lock=true
-#     - ES_JAVA_OPTS=-Xms2048m -Xmx2048m
+     - ES_JAVA_OPTS=-Xms2048m -Xmx2048m
      - ES_TMPDIR=/tmp
     cap_add:
      - IPC_LOCK
@@ -44,7 +32,7 @@ services:
 #    mem_limit: 4g
     ports:
      - "127.0.0.1:64298:9200"
-    image: "dtagdevsec/elasticsearch:2006"
+    image: "dtagdevsec/elasticsearch:2204"
     volumes:
      - /data:/data
 
@@ -55,71 +43,90 @@ services:
     depends_on:
       elasticsearch:
         condition: service_healthy
+#    mem_limit: 1g
     ports:
      - "127.0.0.1:64296:5601"
-    image: "dtagdevsec/kibana:2006"
+    image: "dtagdevsec/kibana:2204"
 
 ## Logstash service
   logstash:
     container_name: logstash
     restart: always
-#    environment:
-#     - LS_JAVA_OPTS=-Xms2048m -Xmx2048m
+    environment:
+     - LS_JAVA_OPTS=-Xms2048m -Xmx2048m
     depends_on:
       elasticsearch:
         condition: service_healthy
     env_file:
      - /opt/tpot/etc/compose/elk_environment
     ports:
-     - "127.0.0.1:64305:80"
-    image: "dtagdevsec/logstash:2006"
+     - "127.0.0.1:64305:64305"
+#    mem_limit: 2g
+    image: "dtagdevsec/logstash:2204"
     volumes:
      - /data:/data
 
-## Elasticsearch-head service
-  head:
-    container_name: head
+## Map Redis Service
+  map_redis:
+    container_name: map_redis
+    restart: always
+    stop_signal: SIGKILL
+    tty: true
+    image: "dtagdevsec/redis:2204"
+    read_only: true
+
+## Map Web Service
+  map_web:
+    container_name: map_web
+    restart: always
+    environment:
+     - MAP_COMMAND=AttackMapServer.py
+    env_file:
+     - /opt/tpot/etc/compose/elk_environment
+    stop_signal: SIGKILL
+    tty: true
+    ports:
+     - "127.0.0.1:64299:64299"
+    image: "dtagdevsec/map:2204"
+
+## Map Data Service
+  map_data:
+    container_name: map_data
     restart: always
     depends_on:
       elasticsearch:
         condition: service_healthy
-    ports:
-     - "127.0.0.1:64302:9100"
-    image: "dtagdevsec/head:2006"
-    read_only: true
+    environment:
+     - MAP_COMMAND=DataServer_v2.py
+    env_file:
+     - /opt/tpot/etc/compose/elk_environment
+    stop_signal: SIGKILL
+    tty: true
+    image: "dtagdevsec/map:2204"
+#### /ELK
 
 # Nginx service
   nginx:
     container_name: nginx
     restart: always
-    environment:
-    ### If set to YES all changes within Heimdall will remain for the next start
-    ### Make sure to uncomment the corresponding volume statements below, or the setting will prevent a successful start of T-Pot.
-     - HEIMDALL_PERSIST=NO
     tmpfs:
      - /var/tmp/nginx/client_body
      - /var/tmp/nginx/proxy
      - /var/tmp/nginx/fastcgi
      - /var/tmp/nginx/uwsgi
-     - /var/tmp/nginx/scgi 
+     - /var/tmp/nginx/scgi
      - /run
-     - /var/log/php7/
-     - /var/lib/nginx/tmp:uid=100,gid=82 
-     - /var/lib/nginx/html/storage/logs:uid=100,gid=82
-     - /var/lib/nginx/html/storage/framework/views:uid=100,gid=82
+     - /var/lib/nginx/tmp:uid=100,gid=82
     network_mode: "host"
     ports:
      - "64297:64297"
      - "127.0.0.1:64304:64304"
-    image: "dtagdevsec/nginx:2006"
+    image: "dtagdevsec/nginx:2204"
     read_only: true
     volumes:
      - /data/nginx/cert/:/etc/nginx/cert/:ro
      - /data/nginx/conf/nginxpasswd:/etc/nginx/nginxpasswd:ro
      - /data/nginx/log/:/var/log/nginx/
-    ### Enable the following volumes if you set HEIMDALL_PERSIST=YES
-    # - /data/nginx/heimdall/database:/var/lib/nginx/html/database
-    # - /data/nginx/heimdall/storage:/var/lib/nginx/html/storage
 
 # Spiderfoot service
   spiderfoot:
@@ -129,6 +136,6 @@ services:
      - spiderfoot_local
     ports:
      - "127.0.0.1:64303:8080"
-    image: "dtagdevsec/spiderfoot:2006"
+    image: "dtagdevsec/spiderfoot:2204"
     volumes:
-     - /data/spiderfoot/spiderfoot.db:/home/spiderfoot/spiderfoot.db
+     - /data/spiderfoot:/home/spiderfoot/.spiderfoot
diff --git a/etc/compose/pot.yml b/etc/compose/hive_sensor.yml
similarity index 80%
rename from etc/compose/pot.yml
rename to etc/compose/hive_sensor.yml
index 3d53bd36..a43c8dec 100644
--- a/etc/compose/pot.yml
+++ b/etc/compose/hive_sensor.yml
@@ -1,26 +1,29 @@
-# T-Pot (Pot)
+# T-Pot (Hive_Sensor)
 # Do not erase ports sections, these are used by /opt/tpot/bin/rules.sh to setup iptables ACCEPT rules for NFQ (honeytrap / glutton)
 version: '2.3'
 
 networks:
   adbhoney_local:
+  ciscoasa_local:
   citrixhoneypot_local:
   conpot_local_IEC104:
   conpot_local_guardian_ast:
   conpot_local_ipmi:
   conpot_local_kamstrup_382:
   cowrie_local:
+  ddospot_local:
   dicompot_local:
   dionaea_local:
   elasticpot_local:
   heralding_local:
-  honeysap_local:
-  logstash_local:
+  ipphoney_local:
   mailoney_local:
   medpot_local:
-  rdpy_local:
+  redishoneypot_local:
   tanner_local:
   ewsposter_local:
+  sentrypeer_local:
+  spiderfoot_local:
 
 services:
 
@@ -36,7 +39,7 @@ services:
      - adbhoney_local
     ports:
      - "5555:5555"
-    image: "dtagdevsec/adbhoney:2006"
+    image: "dtagdevsec/adbhoney:2204"
     read_only: true
     volumes:
      - /data/adbhoney/log:/opt/adbhoney/log
@@ -48,11 +51,12 @@ services:
     restart: always
     tmpfs:
      - /tmp/ciscoasa:uid=2000,gid=2000
-    network_mode: "host"
+    networks:
+     - ciscoasa_local
     ports:
      - "5000:5000/udp"
      - "8443:8443"
-    image: "dtagdevsec/ciscoasa:2006"
+    image: "dtagdevsec/ciscoasa:2204"
     read_only: true
     volumes:
      - /data/ciscoasa/log:/var/log/ciscoasa
@@ -65,7 +69,7 @@ services:
      - citrixhoneypot_local
     ports:
      - "443:443"
-    image: "dtagdevsec/citrixhoneypot:2006"
+    image: "dtagdevsec/citrixhoneypot:2204"
     read_only: true
     volumes:
      - /data/citrixhoneypot/logs:/opt/citrixhoneypot/logs
@@ -87,7 +91,7 @@ services:
     ports:
      - "161:161/udp"
      - "2404:2404"
-    image: "dtagdevsec/conpot:2006"
+    image: "dtagdevsec/conpot:2204"
     read_only: true
     volumes:
      - /data/conpot/log:/var/log/conpot
@@ -108,7 +112,7 @@ services:
      - conpot_local_guardian_ast
     ports:
      - "10001:10001"
-    image: "dtagdevsec/conpot:2006"
+    image: "dtagdevsec/conpot:2204"
     read_only: true
     volumes:
      - /data/conpot/log:/var/log/conpot
@@ -129,7 +133,7 @@ services:
      - conpot_local_ipmi
     ports:
      - "623:623/udp"
-    image: "dtagdevsec/conpot:2006"
+    image: "dtagdevsec/conpot:2204"
     read_only: true
     volumes:
      - /data/conpot/log:/var/log/conpot
@@ -151,7 +155,7 @@ services:
     ports:
      - "1025:1025"
      - "50100:50100"
-    image: "dtagdevsec/conpot:2006"
+    image: "dtagdevsec/conpot:2204"
     read_only: true
     volumes:
      - /data/conpot/log:/var/log/conpot
@@ -168,7 +172,7 @@ services:
     ports:
      - "22:22"
      - "23:23"
-    image: "dtagdevsec/cowrie:2006"
+    image: "dtagdevsec/cowrie:2204"
     read_only: true
     volumes:
      - /data/cowrie/downloads:/home/cowrie/cowrie/dl
@@ -176,6 +180,25 @@ services:
      - /data/cowrie/log:/home/cowrie/cowrie/log
      - /data/cowrie/log/tty:/home/cowrie/cowrie/log/tty
 
+# Ddospot service
+  ddospot:
+    container_name: ddospot
+    restart: always
+    networks:
+     - ddospot_local
+    ports:
+     - "19:19/udp"
+     - "53:53/udp"
+     - "123:123/udp"
+#     - "161:161/udp"
+     - "1900:1900/udp"
+    image: "dtagdevsec/ddospot:2204"
+    read_only: true
+    volumes:
+     - /data/ddospot/log:/opt/ddospot/ddospot/logs
+     - /data/ddospot/bl:/opt/ddospot/ddospot/bl
+     - /data/ddospot/db:/opt/ddospot/ddospot/db
+
 # Dicompot service
 # Get the Horos Client for testing: https://horosproject.org/
 # Get Dicom images (CC BY 3.0): https://www.cancerimagingarchive.net/collections/
@@ -187,7 +210,7 @@ services:
      - dicompot_local
     ports:
      - "11112:11112"
-    image: "dtagdevsec/dicompot:2006"
+    image: "dtagdevsec/dicompot:2204"
     read_only: true
     volumes:
      - /data/dicompot/log:/var/log/dicompot
@@ -214,11 +237,11 @@ services:
      - "1723:1723"
      - "1883:1883"
      - "3306:3306"
-     - "5060:5060"
-     - "5060:5060/udp"
-     - "5061:5061"
+     # - "5060:5060"
+     # - "5060:5060/udp"
+     # - "5061:5061"
      - "27017:27017"
-    image: "dtagdevsec/dionaea:2006"
+    image: "dtagdevsec/dionaea:2204"
     read_only: true
     volumes:
      - /data/dionaea/roots/ftp:/opt/dionaea/var/dionaea/roots/ftp
@@ -238,7 +261,7 @@ services:
      - elasticpot_local
     ports:
      - "9200:9200"
-    image: "dtagdevsec/elasticpot:2006"
+    image: "dtagdevsec/elasticpot:2204"
     read_only: true
     volumes:
      - /data/elasticpot/log:/opt/elasticpot/log
@@ -268,23 +291,11 @@ services:
      - "1080:1080"
      - "5432:5432"
      - "5900:5900"
-    image: "dtagdevsec/heralding:2006"
+    image: "dtagdevsec/heralding:2204"
     read_only: true
     volumes:
      - /data/heralding/log:/var/log/heralding
 
-# HoneySAP service
-  honeysap:
-    container_name: honeysap
-    restart: always
-    networks:
-     - honeysap_local
-    ports:
-     - "3299:3299"
-    image: "dtagdevsec/honeysap:2006"
-    volumes:
-     - /data/honeysap/log:/opt/honeysap/log
-
 # Honeytrap service
   honeytrap:
     container_name: honeytrap
@@ -294,13 +305,26 @@ services:
     network_mode: "host"
     cap_add:
      - NET_ADMIN
-    image: "dtagdevsec/honeytrap:2006"
+    image: "dtagdevsec/honeytrap:2204"
     read_only: true
     volumes:
      - /data/honeytrap/attacks:/opt/honeytrap/var/attacks
      - /data/honeytrap/downloads:/opt/honeytrap/var/downloads
      - /data/honeytrap/log:/opt/honeytrap/var/log
 
+# Ipphoney service
+  ipphoney:
+    container_name: ipphoney
+    restart: always
+    networks:
+     - ipphoney_local
+    ports:
+     - "631:631"
+    image: "dtagdevsec/ipphoney:2204"
+    read_only: true
+    volumes:
+     - /data/ipphoney/log:/opt/ipphoney/log
+
 # Mailoney service
   mailoney:
     container_name: mailoney
@@ -315,7 +339,7 @@ services:
      - mailoney_local
     ports:
      - "25:25"
-    image: "dtagdevsec/mailoney:2006"
+    image: "dtagdevsec/mailoney:2204"
     read_only: true
     volumes:
      - /data/mailoney/log:/opt/mailoney/logs
@@ -328,31 +352,36 @@ services:
      - medpot_local
     ports:
      - "2575:2575"
-    image: "dtagdevsec/medpot:2006"
+    image: "dtagdevsec/medpot:2204"
     read_only: true
     volumes:
      - /data/medpot/log/:/var/log/medpot
 
-# Rdpy service
-  rdpy:
-    container_name: rdpy
-    extra_hosts:
-     - hpfeeds.example.com:127.0.0.1
+# Redishoneypot service
+  redishoneypot:
+    container_name: redishoneypot
     restart: always
-    environment:
-     - HPFEEDS_SERVER=hpfeeds.example.com
-     - HPFEEDS_IDENT=user
-     - HPFEEDS_SECRET=pass
-     - HPFEEDS_PORT=65000
-     - SERVERID=id
     networks:
-     - rdpy_local
+     - redishoneypot_local
     ports:
-     - "3389:3389"
-    image: "dtagdevsec/rdpy:2006"
+     - "6379:6379"
+    image: "dtagdevsec/redishoneypot:2204"
     read_only: true
     volumes:
-     - /data/rdpy/log:/var/log/rdpy
+     - /data/redishoneypot/log:/var/log/redishoneypot
+
+# SentryPeer service
+  sentrypeer:
+    container_name: sentrypeer
+    restart: always
+    networks:
+     - sentrypeer_local
+    ports:
+     - "5060:5060/udp"
+    image: "dtagdevsec/sentrypeer:2204"
+    read_only: true
+    volumes:
+     - /data/sentrypeer/log:/var/log/sentrypeer
 
 #### Snare / Tanner
 ## Tanner Redis Service
@@ -362,7 +391,7 @@ services:
     tty: true
     networks:
      - tanner_local
-    image: "dtagdevsec/redis:2006"
+    image: "dtagdevsec/redis:2204"
     read_only: true
 
 ## PHP Sandbox service
@@ -372,7 +401,7 @@ services:
     tty: true
     networks:
      - tanner_local
-    image: "dtagdevsec/phpox:2006"
+    image: "dtagdevsec/phpox:2204"
     read_only: true
 
 ## Tanner API Service
@@ -384,7 +413,7 @@ services:
     tty: true
     networks:
      - tanner_local
-    image: "dtagdevsec/tanner:2006"
+    image: "dtagdevsec/tanner:2204"
     read_only: true
     volumes:
      - /data/tanner/log:/var/log/tanner
@@ -401,7 +430,7 @@ services:
     tty: true
     networks:
      - tanner_local
-    image: "dtagdevsec/tanner:2006"
+    image: "dtagdevsec/tanner:2204"
     command: tanner
     read_only: true
     volumes:
@@ -421,7 +450,7 @@ services:
      - tanner_local
     ports:
      - "80:80"
-    image: "dtagdevsec/snare:2006"
+    image: "dtagdevsec/snare:2204"
     depends_on:
      - tanner
 
@@ -439,7 +468,7 @@ services:
      - NET_ADMIN
      - SYS_NICE
      - NET_RAW
-    image: "dtagdevsec/fatt:2006"
+    image: "dtagdevsec/fatt:2204"
     volumes:
      - /data/fatt/log:/opt/fatt/log
 
@@ -448,7 +477,7 @@ services:
     container_name: p0f
     restart: always
     network_mode: "host"
-    image: "dtagdevsec/p0f:2006"
+    image: "dtagdevsec/p0f:2204"
     read_only: true
     volumes:
      - /data/p0f/log:/var/log/p0f
@@ -460,12 +489,14 @@ services:
     environment:
     # For ET Pro ruleset replace "OPEN" with your OINKCODE
      - OINKCODE=OPEN
+    # Load external rules from a URL
+    # - FROMURL="https://username:password@yoururl.com|https://username:password@otherurl.com"
     network_mode: "host"
     cap_add:
      - NET_ADMIN
      - SYS_NICE
      - NET_RAW
-    image: "dtagdevsec/suricata:2006"
+    image: "dtagdevsec/suricata:2204"
     volumes:
      - /data/suricata/log:/var/log/suricata
 
@@ -474,17 +505,16 @@ services:
 #### Tools
 ##################
 
-# Logstash service
+## Logstash service
   logstash:
     container_name: logstash
     restart: always
-    networks:
-     - logstash_local
-#    environment:
-#     - LS_JAVA_OPTS=-Xms2048m -Xmx2048m
+    environment:
+     - LS_JAVA_OPTS=-Xms1024m -Xmx1024m
     env_file:
      - /opt/tpot/etc/compose/elk_environment
-    image: "dtagdevsec/logstash:2006"
+    mem_limit: 2g
+    image: "dtagdevsec/logstash:2204"
     volumes:
      - /data:/data
 
@@ -505,7 +535,7 @@ services:
      - EWS_HPFEEDS_FORMAT=json
     env_file:
      - /opt/tpot/etc/compose/elk_environment
-    image: "dtagdevsec/ewsposter:2006"
+    image: "dtagdevsec/ewsposter:2204"
     volumes:
      - /data:/data
      - /data/ews/conf/ews.ip:/opt/ewsposter/ews.ip
diff --git a/etc/compose/industrial.yml b/etc/compose/industrial.yml
index 22839aa7..15478286 100644
--- a/etc/compose/industrial.yml
+++ b/etc/compose/industrial.yml
@@ -9,12 +9,9 @@ networks:
   conpot_local_ipmi:
   conpot_local_kamstrup_382:
   cowrie_local:
-  cyberchef_local:
   dicompot_local:
   heralding_local:
-  honeysap_local:
   medpot_local:
-  rdpy_local:
   ewsposter_local:
   spiderfoot_local:
 
@@ -48,7 +45,7 @@ services:
      - "21:21"
      - "44818:44818"
      - "47808:47808/udp"
-    image: "dtagdevsec/conpot:2006"
+    image: "dtagdevsec/conpot:2204"
     read_only: true
     volumes:
      - /data/conpot/log:/var/log/conpot
@@ -70,7 +67,7 @@ services:
     ports:
 #     - "161:161/udp"
      - "2404:2404"
-    image: "dtagdevsec/conpot:2006"
+    image: "dtagdevsec/conpot:2204"
     read_only: true
     volumes:
      - /data/conpot/log:/var/log/conpot
@@ -91,7 +88,7 @@ services:
      - conpot_local_guardian_ast
     ports:
      - "10001:10001"
-    image: "dtagdevsec/conpot:2006"
+    image: "dtagdevsec/conpot:2204"
     read_only: true
     volumes:
      - /data/conpot/log:/var/log/conpot
@@ -112,7 +109,7 @@ services:
      - conpot_local_ipmi
     ports:
      - "623:623/udp"
-    image: "dtagdevsec/conpot:2006"
+    image: "dtagdevsec/conpot:2204"
     read_only: true
     volumes:
      - /data/conpot/log:/var/log/conpot
@@ -134,7 +131,7 @@ services:
     ports:
      - "1025:1025"
      - "50100:50100"
-    image: "dtagdevsec/conpot:2006"
+    image: "dtagdevsec/conpot:2204"
     read_only: true
     volumes:
      - /data/conpot/log:/var/log/conpot
@@ -151,7 +148,7 @@ services:
     ports:
      - "22:22"
      - "23:23"
-    image: "dtagdevsec/cowrie:2006"
+    image: "dtagdevsec/cowrie:2204"
     read_only: true
     volumes:
      - /data/cowrie/downloads:/home/cowrie/cowrie/dl
@@ -170,7 +167,7 @@ services:
      - dicompot_local
     ports:
      - "11112:11112"
-    image: "dtagdevsec/dicompot:2006"
+    image: "dtagdevsec/dicompot:2204"
     read_only: true
     volumes:
      - /data/dicompot/log:/var/log/dicompot
@@ -200,23 +197,11 @@ services:
     # - "3389:3389"
     # - "5432:5432"
      - "5900:5900"
-    image: "dtagdevsec/heralding:2006"
+    image: "dtagdevsec/heralding:2204"
     read_only: true
     volumes:
      - /data/heralding/log:/var/log/heralding
 
-# HoneySAP service
-  honeysap:
-    container_name: honeysap
-    restart: always
-    networks:
-     - honeysap_local
-    ports:
-     - "3299:3299"
-    image: "dtagdevsec/honeysap:2006"
-    volumes:
-     - /data/honeysap/log:/opt/honeysap/log
-
 # Honeytrap service
   honeytrap:
     container_name: honeytrap
@@ -226,7 +211,7 @@ services:
     network_mode: "host"
     cap_add:
      - NET_ADMIN
-    image: "dtagdevsec/honeytrap:2006"
+    image: "dtagdevsec/honeytrap:2204"
     read_only: true
     volumes:
      - /data/honeytrap/attacks:/opt/honeytrap/var/attacks
@@ -241,33 +226,11 @@ services:
      - medpot_local
     ports:
      - "2575:2575"
-    image: "dtagdevsec/medpot:2006"
+    image: "dtagdevsec/medpot:2204"
     read_only: true
     volumes:
      - /data/medpot/log/:/var/log/medpot
 
-# Rdpy service
-  rdpy:
-    container_name: rdpy
-    extra_hosts:
-     - hpfeeds.example.com:127.0.0.1
-    restart: always
-    environment:
-     - HPFEEDS_SERVER=hpfeeds.example.com
-     - HPFEEDS_IDENT=user
-     - HPFEEDS_SECRET=pass
-     - HPFEEDS_PORT=65000
-     - SERVERID=id
-    networks:
-     - rdpy_local
-    ports:
-     - "3389:3389"
-    image: "dtagdevsec/rdpy:2006"
-    read_only: true
-    volumes:
-     - /data/rdpy/log:/var/log/rdpy
-
-
 ##################
 #### NSM
 ##################
@@ -281,7 +244,7 @@ services:
      - NET_ADMIN
      - SYS_NICE
      - NET_RAW
-    image: "dtagdevsec/fatt:2006"
+    image: "dtagdevsec/fatt:2204"
     volumes:
      - /data/fatt/log:/opt/fatt/log
 
@@ -290,7 +253,7 @@ services:
     container_name: p0f
     restart: always
     network_mode: "host"
-    image: "dtagdevsec/p0f:2006"
+    image: "dtagdevsec/p0f:2204"
     read_only: true
     volumes:
      - /data/p0f/log:/var/log/p0f
@@ -302,12 +265,14 @@ services:
     environment:
     # For ET Pro ruleset replace "OPEN" with your OINKCODE
      - OINKCODE=OPEN
+    # Load external rules from a URL
+    # - FROMURL="https://username:password@yoururl.com|https://username:password@otherurl.com"
     network_mode: "host"
     cap_add:
      - NET_ADMIN
      - SYS_NICE
      - NET_RAW
-    image: "dtagdevsec/suricata:2006"
+    image: "dtagdevsec/suricata:2204"
     volumes:
      - /data/suricata/log:/var/log/suricata
 
@@ -316,17 +281,6 @@ services:
 #### Tools
 ##################
 
-# Cyberchef service
-  cyberchef:
-    container_name: cyberchef
-    restart: always
-    networks:
-     - cyberchef_local
-    ports:
-     - "127.0.0.1:64299:8000"
-    image: "dtagdevsec/cyberchef:2006"
-    read_only: true
-
 #### ELK
 ## Elasticsearch service
   elasticsearch:
@@ -334,7 +288,7 @@ services:
     restart: always
     environment:
      - bootstrap.memory_lock=true
-#     - ES_JAVA_OPTS=-Xms2048m -Xmx2048m
+     - ES_JAVA_OPTS=-Xms2048m -Xmx2048m
      - ES_TMPDIR=/tmp
     cap_add:
      - IPC_LOCK
@@ -345,10 +299,10 @@ services:
       nofile:
         soft: 65536
         hard: 65536
-#    mem_limit: 4g
+    mem_limit: 4g
     ports:
      - "127.0.0.1:64298:9200"
-    image: "dtagdevsec/elasticsearch:2006"
+    image: "dtagdevsec/elasticsearch:2204"
     volumes:
      - /data:/data
 
@@ -359,36 +313,65 @@ services:
     depends_on:
       elasticsearch:
         condition: service_healthy
+    mem_limit: 1g
     ports:
      - "127.0.0.1:64296:5601"
-    image: "dtagdevsec/kibana:2006"
+    image: "dtagdevsec/kibana:2204"
 
 ## Logstash service
   logstash:
     container_name: logstash
     restart: always
-#    environment:
-#     - LS_JAVA_OPTS=-Xms2048m -Xmx2048m
+    environment:
+     - LS_JAVA_OPTS=-Xms1024m -Xmx1024m
     depends_on:
       elasticsearch:
         condition: service_healthy
     env_file:
      - /opt/tpot/etc/compose/elk_environment
-    image: "dtagdevsec/logstash:2006"
+    mem_limit: 2g
+    image: "dtagdevsec/logstash:2204"
     volumes:
      - /data:/data
 
-## Elasticsearch-head service
-  head:
-    container_name: head
+## Map Redis Service
+  map_redis:
+    container_name: map_redis
+    restart: always
+    stop_signal: SIGKILL
+    tty: true
+    image: "dtagdevsec/redis:2204"
+    read_only: true
+
+## Map Web Service
+  map_web:
+    container_name: map_web
+    restart: always
+    environment:
+     - MAP_COMMAND=AttackMapServer.py
+    env_file:
+     - /opt/tpot/etc/compose/elk_environment
+    stop_signal: SIGKILL
+    tty: true
+    ports:
+     - "127.0.0.1:64299:64299"
+    image: "dtagdevsec/map:2204"
+
+## Map Data Service
+  map_data:
+    container_name: map_data
     restart: always
     depends_on:
       elasticsearch:
         condition: service_healthy
-    ports:
-     - "127.0.0.1:64302:9100"
-    image: "dtagdevsec/head:2006"
-    read_only: true
+    environment:
+     - MAP_COMMAND=DataServer_v2.py
+    env_file:
+     - /opt/tpot/etc/compose/elk_environment
+    stop_signal: SIGKILL
+    tty: true
+    image: "dtagdevsec/map:2204"
+#### /ELK
 
 # Ewsposter service
   ewsposter:
@@ -407,7 +390,7 @@ services:
      - EWS_HPFEEDS_FORMAT=json
     env_file:
      - /opt/tpot/etc/compose/elk_environment
-    image: "dtagdevsec/ewsposter:2006"
+    image: "dtagdevsec/ewsposter:2204"
     volumes:
      - /data:/data
      - /data/ews/conf/ews.ip:/opt/ewsposter/ews.ip
@@ -416,34 +399,24 @@ services:
   nginx:
     container_name: nginx
     restart: always
-    environment:
-    ### If set to YES all changes within Heimdall will remain for the next start
-    ### Make sure to uncomment the corresponding volume statements below, or the setting will prevent a successful start of T-Pot.
-     - HEIMDALL_PERSIST=NO
     tmpfs:
      - /var/tmp/nginx/client_body
      - /var/tmp/nginx/proxy
      - /var/tmp/nginx/fastcgi
      - /var/tmp/nginx/uwsgi
-     - /var/tmp/nginx/scgi 
+     - /var/tmp/nginx/scgi
      - /run
-     - /var/log/php7/
-     - /var/lib/nginx/tmp:uid=100,gid=82 
-     - /var/lib/nginx/html/storage/logs:uid=100,gid=82
-     - /var/lib/nginx/html/storage/framework/views:uid=100,gid=82
+     - /var/lib/nginx/tmp:uid=100,gid=82
     network_mode: "host"
     ports:
      - "64297:64297"
      - "127.0.0.1:64304:64304"
-    image: "dtagdevsec/nginx:2006"
+    image: "dtagdevsec/nginx:2204"
     read_only: true
     volumes:
      - /data/nginx/cert/:/etc/nginx/cert/:ro
      - /data/nginx/conf/nginxpasswd:/etc/nginx/nginxpasswd:ro
      - /data/nginx/log/:/var/log/nginx/
-    ### Enable the following volumes if you set HEIMDALL_PERSIST=YES
-    # - /data/nginx/heimdall/database:/var/lib/nginx/html/database
-    # - /data/nginx/heimdall/storage:/var/lib/nginx/html/storage
 
 # Spiderfoot service
   spiderfoot:
@@ -453,6 +426,6 @@ services:
      - spiderfoot_local
     ports:
      - "127.0.0.1:64303:8080"
-    image: "dtagdevsec/spiderfoot:2006"
+    image: "dtagdevsec/spiderfoot:2204"
     volumes:
-     - /data/spiderfoot/spiderfoot.db:/home/spiderfoot/spiderfoot.db
+     - /data/spiderfoot:/home/spiderfoot/.spiderfoot
diff --git a/etc/compose/log4j.yml b/etc/compose/log4j.yml
index 30dd9ccd..9d6b9179 100644
--- a/etc/compose/log4j.yml
+++ b/etc/compose/log4j.yml
@@ -3,7 +3,6 @@
 version: '2.3'
 
 networks:
-  cyberchef_local:
   log4pot_local:
   ewsposter_local:
   spiderfoot_local:
@@ -28,7 +27,7 @@ services:
      - "8080:8080"
      - "9200:8080"
      - "25565:8080"
-    image: "dtagdevsec/log4pot:2006"
+    image: "dtagdevsec/log4pot:2204"
     read_only: true
     volumes:
      - /data/log4pot/log:/var/log/log4pot/log
@@ -43,7 +42,7 @@ services:
     network_mode: "host"
     cap_add:
      - NET_ADMIN
-    image: "dtagdevsec/honeytrap:2006"
+    image: "dtagdevsec/honeytrap:2204"
     read_only: true
     volumes:
      - /data/honeytrap/attacks:/opt/honeytrap/var/attacks
@@ -64,7 +63,7 @@ services:
      - NET_ADMIN
      - SYS_NICE
      - NET_RAW
-    image: "dtagdevsec/fatt:2006"
+    image: "dtagdevsec/fatt:2204"
     volumes:
      - /data/fatt/log:/opt/fatt/log
 
@@ -73,7 +72,7 @@ services:
     container_name: p0f
     restart: always
     network_mode: "host"
-    image: "dtagdevsec/p0f:2006"
+    image: "dtagdevsec/p0f:2204"
     read_only: true
     volumes:
      - /data/p0f/log:/var/log/p0f
@@ -85,12 +84,14 @@ services:
     environment:
     # For ET Pro ruleset replace "OPEN" with your OINKCODE
      - OINKCODE=OPEN
+    # Loading external rules from URL
+    # - FROMURL="https://username:password@yoururl.com|https://username:password@otherurl.com"
     network_mode: "host"
     cap_add:
      - NET_ADMIN
      - SYS_NICE
      - NET_RAW
-    image: "dtagdevsec/suricata:2006"
+    image: "dtagdevsec/suricata:2204"
     volumes:
      - /data/suricata/log:/var/log/suricata
 
@@ -99,17 +100,6 @@ services:
 #### Tools
 ##################
 
-# Cyberchef service
-  cyberchef:
-    container_name: cyberchef
-    restart: always
-    networks:
-     - cyberchef_local
-    ports:
-     - "127.0.0.1:64299:8000"
-    image: "dtagdevsec/cyberchef:2006"
-    read_only: true
-
 #### ELK
 ## Elasticsearch service
   elasticsearch:
@@ -117,7 +107,7 @@ services:
     restart: always
     environment:
      - bootstrap.memory_lock=true
-#     - ES_JAVA_OPTS=-Xms2048m -Xmx2048m
+     - ES_JAVA_OPTS=-Xms2048m -Xmx2048m
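+     # heap is pinned at 2g, half of the 4g mem_limit below, in line with
+     # Elastic's guidance to size the heap at no more than 50% of available memory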
      - ES_TMPDIR=/tmp
     cap_add:
      - IPC_LOCK
@@ -128,10 +118,10 @@ services:
       nofile:
         soft: 65536
         hard: 65536
-#    mem_limit: 4g
+    mem_limit: 4g
     ports:
      - "127.0.0.1:64298:9200"
-    image: "dtagdevsec/elasticsearch:2006"
+    image: "dtagdevsec/elasticsearch:2204"
     volumes:
      - /data:/data
 
@@ -142,36 +132,65 @@ services:
     depends_on:
       elasticsearch:
         condition: service_healthy
+    mem_limit: 1g
     ports:
      - "127.0.0.1:64296:5601"
-    image: "dtagdevsec/kibana:2006"
+    image: "dtagdevsec/kibana:2204"
 
 ## Logstash service
   logstash:
     container_name: logstash
     restart: always
-#    environment:
-#     - LS_JAVA_OPTS=-Xms2048m -Xmx2048m
+    environment:
+     - LS_JAVA_OPTS=-Xms1024m -Xmx1024m
     depends_on:
       elasticsearch:
         condition: service_healthy
     env_file:
      - /opt/tpot/etc/compose/elk_environment
-    image: "dtagdevsec/logstash:2006"
+    mem_limit: 2g
+    image: "dtagdevsec/logstash:2204"
     volumes:
      - /data:/data
 
-## Elasticsearch-head service
-  head:
-    container_name: head
+## Map Redis Service
+  map_redis:
+    container_name: map_redis
+    restart: always
+    stop_signal: SIGKILL
+    tty: true
+    image: "dtagdevsec/redis:2204"
+    read_only: true
+
+## Map Web Service
+  map_web:
+    container_name: map_web
+    restart: always
+    environment:
+     - MAP_COMMAND=AttackMapServer.py
+    env_file:
+     - /opt/tpot/etc/compose/elk_environment
+    stop_signal: SIGKILL
+    tty: true
+    ports:
+     - "127.0.0.1:64299:64299"
+    image: "dtagdevsec/map:2204"
+
+## Map Data Service
+  map_data:
+    container_name: map_data
     restart: always
     depends_on:
       elasticsearch:
         condition: service_healthy
-    ports:
-     - "127.0.0.1:64302:9100"
-    image: "dtagdevsec/head:2006"
-    read_only: true
+    environment:
+     - MAP_COMMAND=DataServer_v2.py
+    env_file:
+     - /opt/tpot/etc/compose/elk_environment
+    stop_signal: SIGKILL
+    tty: true
+    image: "dtagdevsec/map:2204"
+#### /ELK
 
 # Ewsposter service
   ewsposter:
@@ -190,7 +209,7 @@ services:
      - EWS_HPFEEDS_FORMAT=json
     env_file:
      - /opt/tpot/etc/compose/elk_environment
-    image: "dtagdevsec/ewsposter:2006"
+    image: "dtagdevsec/ewsposter:2204"
     volumes:
      - /data:/data
      - /data/ews/conf/ews.ip:/opt/ewsposter/ews.ip
@@ -199,34 +218,24 @@ services:
   nginx:
     container_name: nginx
     restart: always
-    environment:
-    ### If set to YES all changes within Heimdall will remain for the next start
-    ### Make sure to uncomment the corresponding volume statements below, or the setting will prevent a successful start of T-Pot.
-     - HEIMDALL_PERSIST=NO
     tmpfs:
      - /var/tmp/nginx/client_body
      - /var/tmp/nginx/proxy
      - /var/tmp/nginx/fastcgi
      - /var/tmp/nginx/uwsgi
-     - /var/tmp/nginx/scgi 
+     - /var/tmp/nginx/scgi
      - /run
-     - /var/log/php7/
-     - /var/lib/nginx/tmp:uid=100,gid=82 
-     - /var/lib/nginx/html/storage/logs:uid=100,gid=82
-     - /var/lib/nginx/html/storage/framework/views:uid=100,gid=82
+     - /var/lib/nginx/tmp:uid=100,gid=82
     network_mode: "host"
     ports:
      - "64297:64297"
      - "127.0.0.1:64304:64304"
-    image: "dtagdevsec/nginx:2006"
+    image: "dtagdevsec/nginx:2204"
     read_only: true
     volumes:
      - /data/nginx/cert/:/etc/nginx/cert/:ro
      - /data/nginx/conf/nginxpasswd:/etc/nginx/nginxpasswd:ro
      - /data/nginx/log/:/var/log/nginx/
-    ### Enable the following volumes if you set HEIMDALL_PERSIST=YES
-    # - /data/nginx/heimdall/database:/var/lib/nginx/html/database
-    # - /data/nginx/heimdall/storage:/var/lib/nginx/html/storage
 
 # Spiderfoot service
   spiderfoot:
@@ -236,6 +245,6 @@ services:
      - spiderfoot_local
     ports:
      - "127.0.0.1:64303:8080"
-    image: "dtagdevsec/spiderfoot:2006"
+    image: "dtagdevsec/spiderfoot:2204"
     volumes:
-     - /data/spiderfoot/spiderfoot.db:/home/spiderfoot/spiderfoot.db
+     - /data/spiderfoot:/home/spiderfoot/.spiderfoot
diff --git a/etc/compose/medical.yml b/etc/compose/medical.yml
index a51a6e86..f2c966f4 100644
--- a/etc/compose/medical.yml
+++ b/etc/compose/medical.yml
@@ -3,7 +3,6 @@
 version: '2.3'
 
 networks:
-  cyberchef_local:
   dicompot_local:
   medpot_local:
   ewsposter_local:
@@ -26,7 +25,7 @@ services:
      - dicompot_local
     ports:
      - "11112:11112"
-    image: "dtagdevsec/dicompot:2006"
+    image: "dtagdevsec/dicompot:2204"
     read_only: true
     volumes:
      - /data/dicompot/log:/var/log/dicompot
@@ -40,7 +39,7 @@ services:
      - medpot_local
     ports:
      - "2575:2575"
-    image: "dtagdevsec/medpot:2006"
+    image: "dtagdevsec/medpot:2204"
     read_only: true
     volumes:
      - /data/medpot/log/:/var/log/medpot
@@ -58,7 +57,7 @@ services:
      - NET_ADMIN
      - SYS_NICE
      - NET_RAW
-    image: "dtagdevsec/fatt:2006"
+    image: "dtagdevsec/fatt:2204"
     volumes:
      - /data/fatt/log:/opt/fatt/log
 
@@ -67,7 +66,7 @@ services:
     container_name: p0f
     restart: always
     network_mode: "host"
-    image: "dtagdevsec/p0f:2006"
+    image: "dtagdevsec/p0f:2204"
     read_only: true
     volumes:
      - /data/p0f/log:/var/log/p0f
@@ -79,12 +78,14 @@ services:
     environment:
     # For ET Pro ruleset replace "OPEN" with your OINKCODE
      - OINKCODE=OPEN
+    # Loading external rules from URL
+    # - FROMURL="https://username:password@yoururl.com|https://username:password@otherurl.com"
     network_mode: "host"
     cap_add:
      - NET_ADMIN
      - SYS_NICE
      - NET_RAW
-    image: "dtagdevsec/suricata:2006"
+    image: "dtagdevsec/suricata:2204"
     volumes:
      - /data/suricata/log:/var/log/suricata
 
@@ -93,17 +94,6 @@ services:
 #### Tools
 ##################
 
-# Cyberchef service
-  cyberchef:
-    container_name: cyberchef
-    restart: always
-    networks:
-     - cyberchef_local
-    ports:
-     - "127.0.0.1:64299:8000"
-    image: "dtagdevsec/cyberchef:2006"
-    read_only: true
-
 #### ELK
 ## Elasticsearch service
   elasticsearch:
@@ -111,7 +101,7 @@ services:
     restart: always
     environment:
      - bootstrap.memory_lock=true
-#     - ES_JAVA_OPTS=-Xms2048m -Xmx2048m
+     - ES_JAVA_OPTS=-Xms2048m -Xmx2048m
      - ES_TMPDIR=/tmp
     cap_add:
      - IPC_LOCK
@@ -122,10 +112,10 @@ services:
       nofile:
         soft: 65536
         hard: 65536
-#    mem_limit: 4g
+    mem_limit: 4g
     ports:
      - "127.0.0.1:64298:9200"
-    image: "dtagdevsec/elasticsearch:2006"
+    image: "dtagdevsec/elasticsearch:2204"
     volumes:
      - /data:/data
 
@@ -136,36 +126,65 @@ services:
     depends_on:
       elasticsearch:
         condition: service_healthy
+    mem_limit: 1g
     ports:
      - "127.0.0.1:64296:5601"
-    image: "dtagdevsec/kibana:2006"
+    image: "dtagdevsec/kibana:2204"
 
 ## Logstash service
   logstash:
     container_name: logstash
     restart: always
-#    environment:
-#     - LS_JAVA_OPTS=-Xms2048m -Xmx2048m
+    environment:
+     - LS_JAVA_OPTS=-Xms1024m -Xmx1024m
     depends_on:
       elasticsearch:
         condition: service_healthy
     env_file:
      - /opt/tpot/etc/compose/elk_environment
-    image: "dtagdevsec/logstash:2006"
+    mem_limit: 2g
+    image: "dtagdevsec/logstash:2204"
     volumes:
      - /data:/data
 
-## Elasticsearch-head service
-  head:
-    container_name: head
+## Map Redis Service
+  map_redis:
+    container_name: map_redis
+    restart: always
+    stop_signal: SIGKILL
+    tty: true
+    image: "dtagdevsec/redis:2204"
+    read_only: true
+
+## Map Web Service
+  map_web:
+    container_name: map_web
+    restart: always
+    environment:
+     - MAP_COMMAND=AttackMapServer.py
+    env_file:
+     - /opt/tpot/etc/compose/elk_environment
+    stop_signal: SIGKILL
+    tty: true
+    ports:
+     - "127.0.0.1:64299:64299"
+    image: "dtagdevsec/map:2204"
+
+## Map Data Service
+  map_data:
+    container_name: map_data
     restart: always
     depends_on:
       elasticsearch:
         condition: service_healthy
-    ports:
-     - "127.0.0.1:64302:9100"
-    image: "dtagdevsec/head:2006"
-    read_only: true
+    environment:
+     - MAP_COMMAND=DataServer_v2.py
+    env_file:
+     - /opt/tpot/etc/compose/elk_environment
+    stop_signal: SIGKILL
+    tty: true
+    image: "dtagdevsec/map:2204"
+#### /ELK
 
 # Ewsposter service
   ewsposter:
@@ -184,7 +203,7 @@ services:
      - EWS_HPFEEDS_FORMAT=json
     env_file:
      - /opt/tpot/etc/compose/elk_environment
-    image: "dtagdevsec/ewsposter:2006"
+    image: "dtagdevsec/ewsposter:2204"
     volumes:
      - /data:/data
      - /data/ews/conf/ews.ip:/opt/ewsposter/ews.ip
@@ -193,34 +212,24 @@ services:
   nginx:
     container_name: nginx
     restart: always
-    environment:
-    ### If set to YES all changes within Heimdall will remain for the next start
-    ### Make sure to uncomment the corresponding volume statements below, or the setting will prevent a successful start of T-Pot.
-     - HEIMDALL_PERSIST=NO
     tmpfs:
      - /var/tmp/nginx/client_body
      - /var/tmp/nginx/proxy
      - /var/tmp/nginx/fastcgi
      - /var/tmp/nginx/uwsgi
-     - /var/tmp/nginx/scgi 
+     - /var/tmp/nginx/scgi
      - /run
-     - /var/log/php7/
-     - /var/lib/nginx/tmp:uid=100,gid=82 
-     - /var/lib/nginx/html/storage/logs:uid=100,gid=82
-     - /var/lib/nginx/html/storage/framework/views:uid=100,gid=82
+     - /var/lib/nginx/tmp:uid=100,gid=82
     network_mode: "host"
     ports:
      - "64297:64297"
      - "127.0.0.1:64304:64304"
-    image: "dtagdevsec/nginx:2006"
+    image: "dtagdevsec/nginx:2204"
     read_only: true
     volumes:
      - /data/nginx/cert/:/etc/nginx/cert/:ro
      - /data/nginx/conf/nginxpasswd:/etc/nginx/nginxpasswd:ro
      - /data/nginx/log/:/var/log/nginx/
-    ### Enable the following volumes if you set HEIMDALL_PERSIST=YES
-    # - /data/nginx/heimdall/database:/var/lib/nginx/html/database
-    # - /data/nginx/heimdall/storage:/var/lib/nginx/html/storage
 
 # Spiderfoot service
   spiderfoot:
@@ -230,6 +239,6 @@ services:
      - spiderfoot_local
     ports:
      - "127.0.0.1:64303:8080"
-    image: "dtagdevsec/spiderfoot:2006"
+    image: "dtagdevsec/spiderfoot:2204"
     volumes:
-     - /data/spiderfoot/spiderfoot.db:/home/spiderfoot/spiderfoot.db
+     - /data/spiderfoot:/home/spiderfoot/.spiderfoot
diff --git a/etc/compose/mini.yml b/etc/compose/mini.yml
index 5c69d754..88c2406b 100644
--- a/etc/compose/mini.yml
+++ b/etc/compose/mini.yml
@@ -3,7 +3,6 @@
 version: '2.3'
 
 networks:
-  cyberchef_local:
   honeypots_local:
   ewsposter_local:
   spiderfoot_local:
@@ -14,7 +13,7 @@ services:
 #### Honeypots
 ##################
 
-# Honeypots service
+# qHoneypots service
   honeypots:
     container_name: honeypots
     stdin_open: true
@@ -48,7 +47,7 @@ services:
      - "8080:8080"
      - "9200:9200"
      - "11211:11211"
-    image: "dtagdevsec/honeypots:2006"
+    image: "dtagdevsec/honeypots:2204"
     read_only: true
     volumes:
      - /data/honeypots/log:/var/log/honeypots
@@ -62,7 +61,7 @@ services:
     network_mode: "host"
     cap_add:
      - NET_ADMIN
-    image: "dtagdevsec/honeytrap:2006"
+    image: "dtagdevsec/honeytrap:2204"
     read_only: true
     volumes:
      - /data/honeytrap/attacks:/opt/honeytrap/var/attacks
@@ -83,7 +82,7 @@ services:
      - NET_ADMIN
      - SYS_NICE
      - NET_RAW
-    image: "dtagdevsec/fatt:2006"
+    image: "dtagdevsec/fatt:2204"
     volumes:
      - /data/fatt/log:/opt/fatt/log
 
@@ -92,7 +91,7 @@ services:
     container_name: p0f
     restart: always
     network_mode: "host"
-    image: "dtagdevsec/p0f:2006"
+    image: "dtagdevsec/p0f:2204"
     read_only: true
     volumes:
      - /data/p0f/log:/var/log/p0f
@@ -104,12 +103,14 @@ services:
     environment:
     # For ET Pro ruleset replace "OPEN" with your OINKCODE
      - OINKCODE=OPEN
+    # Loading external rules from URL
+    # - FROMURL="https://username:password@yoururl.com|https://username:password@otherurl.com"
     network_mode: "host"
     cap_add:
      - NET_ADMIN
      - SYS_NICE
      - NET_RAW
-    image: "dtagdevsec/suricata:2006"
+    image: "dtagdevsec/suricata:2204"
     volumes:
      - /data/suricata/log:/var/log/suricata
 
@@ -118,17 +119,6 @@ services:
 #### Tools
 ##################
 
-# Cyberchef service
-  cyberchef:
-    container_name: cyberchef
-    restart: always
-    networks:
-     - cyberchef_local
-    ports:
-     - "127.0.0.1:64299:8000"
-    image: "dtagdevsec/cyberchef:2006"
-    read_only: true
-
 #### ELK
 ## Elasticsearch service
   elasticsearch:
@@ -136,7 +126,7 @@ services:
     restart: always
     environment:
      - bootstrap.memory_lock=true
-#     - ES_JAVA_OPTS=-Xms2048m -Xmx2048m
+     - ES_JAVA_OPTS=-Xms2048m -Xmx2048m
      - ES_TMPDIR=/tmp
     cap_add:
      - IPC_LOCK
@@ -147,10 +137,10 @@ services:
       nofile:
         soft: 65536
         hard: 65536
-#    mem_limit: 4g
+    mem_limit: 4g
     ports:
      - "127.0.0.1:64298:9200"
-    image: "dtagdevsec/elasticsearch:2006"
+    image: "dtagdevsec/elasticsearch:2204"
     volumes:
      - /data:/data
 
@@ -161,36 +151,65 @@ services:
     depends_on:
       elasticsearch:
         condition: service_healthy
+    mem_limit: 1g
     ports:
      - "127.0.0.1:64296:5601"
-    image: "dtagdevsec/kibana:2006"
+    image: "dtagdevsec/kibana:2204"
 
 ## Logstash service
   logstash:
     container_name: logstash
     restart: always
-#    environment:
-#     - LS_JAVA_OPTS=-Xms2048m -Xmx2048m
+    environment:
+     - LS_JAVA_OPTS=-Xms1024m -Xmx1024m
     depends_on:
       elasticsearch:
         condition: service_healthy
     env_file:
      - /opt/tpot/etc/compose/elk_environment
-    image: "dtagdevsec/logstash:2006"
+    mem_limit: 2g
+    image: "dtagdevsec/logstash:2204"
     volumes:
      - /data:/data
 
-## Elasticsearch-head service
-  head:
-    container_name: head
+## Map Redis Service
+  map_redis:
+    container_name: map_redis
+    restart: always
+    stop_signal: SIGKILL
+    tty: true
+    image: "dtagdevsec/redis:2204"
+    read_only: true
+
+## Map Web Service
+  map_web:
+    container_name: map_web
+    restart: always
+    environment:
+     - MAP_COMMAND=AttackMapServer.py
+    env_file:
+     - /opt/tpot/etc/compose/elk_environment
+    stop_signal: SIGKILL
+    tty: true
+    ports:
+     - "127.0.0.1:64299:64299"
+    image: "dtagdevsec/map:2204"
+
+## Map Data Service
+  map_data:
+    container_name: map_data
     restart: always
     depends_on:
       elasticsearch:
         condition: service_healthy
-    ports:
-     - "127.0.0.1:64302:9100"
-    image: "dtagdevsec/head:2006"
-    read_only: true
+    environment:
+     - MAP_COMMAND=DataServer_v2.py
+    env_file:
+     - /opt/tpot/etc/compose/elk_environment
+    stop_signal: SIGKILL
+    tty: true
+    image: "dtagdevsec/map:2204"
+#### /ELK
 
 # Ewsposter service
   ewsposter:
@@ -209,7 +228,7 @@ services:
      - EWS_HPFEEDS_FORMAT=json
     env_file:
      - /opt/tpot/etc/compose/elk_environment
-    image: "dtagdevsec/ewsposter:2006"
+    image: "dtagdevsec/ewsposter:2204"
     volumes:
      - /data:/data
      - /data/ews/conf/ews.ip:/opt/ewsposter/ews.ip
@@ -218,34 +237,24 @@ services:
   nginx:
     container_name: nginx
     restart: always
-    environment:
-    ### If set to YES all changes within Heimdall will remain for the next start
-    ### Make sure to uncomment the corresponding volume statements below, or the setting will prevent a successful start of T-Pot.
-     - HEIMDALL_PERSIST=NO
     tmpfs:
      - /var/tmp/nginx/client_body
      - /var/tmp/nginx/proxy
      - /var/tmp/nginx/fastcgi
      - /var/tmp/nginx/uwsgi
-     - /var/tmp/nginx/scgi 
+     - /var/tmp/nginx/scgi
      - /run
-     - /var/log/php7/
-     - /var/lib/nginx/tmp:uid=100,gid=82 
-     - /var/lib/nginx/html/storage/logs:uid=100,gid=82
-     - /var/lib/nginx/html/storage/framework/views:uid=100,gid=82
+     - /var/lib/nginx/tmp:uid=100,gid=82
     network_mode: "host"
     ports:
      - "64297:64297"
      - "127.0.0.1:64304:64304"
-    image: "dtagdevsec/nginx:2006"
+    image: "dtagdevsec/nginx:2204"
     read_only: true
     volumes:
      - /data/nginx/cert/:/etc/nginx/cert/:ro
      - /data/nginx/conf/nginxpasswd:/etc/nginx/nginxpasswd:ro
      - /data/nginx/log/:/var/log/nginx/
-    ### Enable the following volumes if you set HEIMDALL_PERSIST=YES
-    # - /data/nginx/heimdall/database:/var/lib/nginx/html/database
-    # - /data/nginx/heimdall/storage:/var/lib/nginx/html/storage
 
 # Spiderfoot service
   spiderfoot:
@@ -255,6 +264,6 @@ services:
      - spiderfoot_local
     ports:
      - "127.0.0.1:64303:8080"
-    image: "dtagdevsec/spiderfoot:2006"
+    image: "dtagdevsec/spiderfoot:2204"
     volumes:
-     - /data/spiderfoot/spiderfoot.db:/home/spiderfoot/spiderfoot.db
+     - /data/spiderfoot:/home/spiderfoot/.spiderfoot
diff --git a/etc/compose/nextgen.yml b/etc/compose/nextgen.yml
index 37929a7e..75ddc90e 100644
--- a/etc/compose/nextgen.yml
+++ b/etc/compose/nextgen.yml
@@ -10,20 +10,16 @@ networks:
   conpot_local_guardian_ast:
   conpot_local_ipmi:
   conpot_local_kamstrup_382:
-  cyberchef_local:
+  ddospot_local:
   dicompot_local:
   dionaea_local:
-  ddospot_local:
   elasticpot_local:
   endlessh_local:
   hellpot_local:
   heralding_local:
-  honeypy_local:
-  honeysap_local:
   ipphoney_local:
   mailoney_local:
   medpot_local:
-  rdpy_local:
   redishoneypot_local:
   ewsposter_local:
   spiderfoot_local:
@@ -42,7 +38,7 @@ services:
      - adbhoney_local
     ports:
      - "5555:5555"
-    image: "dtagdevsec/adbhoney:2006"
+    image: "dtagdevsec/adbhoney:2204"
     read_only: true
     volumes:
      - /data/adbhoney/log:/opt/adbhoney/log
@@ -52,14 +48,14 @@ services:
   ciscoasa:
     container_name: ciscoasa
     restart: always
-    networks:
-      - ciscoasa_local
     tmpfs:
      - /tmp/ciscoasa:uid=2000,gid=2000
+    networks:
+     - ciscoasa_local
     ports:
      - "5000:5000/udp"
      - "8443:8443"
-    image: "dtagdevsec/ciscoasa:2006"
+    image: "dtagdevsec/ciscoasa:2204"
     read_only: true
     volumes:
      - /data/ciscoasa/log:/var/log/ciscoasa
@@ -72,7 +68,7 @@ services:
      - citrixhoneypot_local
     ports:
      - "443:443"
-    image: "dtagdevsec/citrixhoneypot:2006"
+    image: "dtagdevsec/citrixhoneypot:2204"
     read_only: true
     volumes:
      - /data/citrixhoneypot/logs:/opt/citrixhoneypot/logs       
@@ -94,7 +90,7 @@ services:
     ports:
      - "161:161/udp"
      - "2404:2404"
-    image: "dtagdevsec/conpot:2006"
+    image: "dtagdevsec/conpot:2204"
     read_only: true
     volumes:
      - /data/conpot/log:/var/log/conpot
@@ -115,7 +111,7 @@ services:
      - conpot_local_guardian_ast
     ports:
      - "10001:10001"
-    image: "dtagdevsec/conpot:2006"
+    image: "dtagdevsec/conpot:2204"
     read_only: true
     volumes:
      - /data/conpot/log:/var/log/conpot
@@ -136,7 +132,7 @@ services:
      - conpot_local_ipmi
     ports:
      - "623:623/udp"
-    image: "dtagdevsec/conpot:2006"
+    image: "dtagdevsec/conpot:2204"
     read_only: true
     volumes:
      - /data/conpot/log:/var/log/conpot
@@ -158,7 +154,7 @@ services:
     ports:
      - "1025:1025"
      - "50100:50100"
-    image: "dtagdevsec/conpot:2006"
+    image: "dtagdevsec/conpot:2204"
     read_only: true
     volumes:
      - /data/conpot/log:/var/log/conpot
@@ -175,7 +171,7 @@ services:
      - "123:123/udp"
 #     - "161:161/udp"
      - "1900:1900/udp"
-    image: "dtagdevsec/ddospot:2006"
+    image: "dtagdevsec/ddospot:2204"
     read_only: true
     volumes:
      - /data/ddospot/log:/opt/ddospot/ddospot/logs
@@ -193,7 +189,7 @@ services:
      - dicompot_local
     ports:
      - "11112:11112"
-    image: "dtagdevsec/dicompot:2006"
+    image: "dtagdevsec/dicompot:2204"
     read_only: true
     volumes:
      - /data/dicompot/log:/var/log/dicompot
@@ -220,11 +216,11 @@ services:
      - "1723:1723"
      - "1883:1883"
      - "3306:3306"
-     - "5060:5060"
-     - "5060:5060/udp"
-     - "5061:5061"
+     # - "5060:5060"
+     # - "5060:5060/udp"
+     # - "5061:5061"
      - "27017:27017"
-    image: "dtagdevsec/dionaea:2006"
+    image: "dtagdevsec/dionaea:2204"
     read_only: true
     volumes:
      - /data/dionaea/roots/ftp:/opt/dionaea/var/dionaea/roots/ftp
@@ -244,7 +240,7 @@ services:
      - elasticpot_local
     ports:
      - "9200:9200"
-    image: "dtagdevsec/elasticpot:2006"
+    image: "dtagdevsec/elasticpot:2204"
     read_only: true
     volumes:
      - /data/elasticpot/log:/opt/elasticpot/log
@@ -257,7 +253,7 @@ services:
      - endlessh_local
     ports:
      - "22:2222"
-    image: "dtagdevsec/endlessh:2006"
+    image: "dtagdevsec/endlessh:2204"
     read_only: true
     volumes:
      - /data/endlessh/log:/var/log/endlessh
@@ -272,7 +268,7 @@ services:
     network_mode: "host"
     cap_add:
      - NET_ADMIN
-    image: "dtagdevsec/glutton:2006"
+    image: "dtagdevsec/glutton:2204"
     read_only: true
     volumes:
      - /data/glutton/log:/var/log/glutton
@@ -303,42 +299,11 @@ services:
      - "1080:1080"
      - "5432:5432"
      - "5900:5900"
-    image: "dtagdevsec/heralding:2006"
+    image: "dtagdevsec/heralding:2204"
     read_only: true
     volumes:
      - /data/heralding/log:/var/log/heralding
 
-# HoneyPy service
-  honeypy:
-    container_name: honeypy
-    restart: always
-    networks:
-     - honeypy_local
-    ports:
-     - "7:7"
-     - "8:8"
-     - "2048:2048"
-     - "2323:2323"
-     - "2324:2324"
-     - "4096:4096"
-    # - "9200:9200"
-    image: "dtagdevsec/honeypy:2006"
-    read_only: true
-    volumes:
-     - /data/honeypy/log:/opt/honeypy/log
-
-# HoneySAP service
-  honeysap:
-    container_name: honeysap
-    restart: always
-    networks:
-     - honeysap_local
-    ports:
-     - "3299:3299"
-    image: "dtagdevsec/honeysap:2006"
-    volumes:
-     - /data/honeysap/log:/opt/honeysap/log
-
 # Ipphoney service
   ipphoney:
     container_name: ipphoney
@@ -347,7 +312,7 @@ services:
      - ipphoney_local
     ports:
      - "631:631"
-    image: "dtagdevsec/ipphoney:2006"
+    image: "dtagdevsec/ipphoney:2204"
     read_only: true
     volumes:
      - /data/ipphoney/log:/opt/ipphoney/log
@@ -366,7 +331,7 @@ services:
      - mailoney_local
     ports:
      - "25:25"
-    image: "dtagdevsec/mailoney:2006"
+    image: "dtagdevsec/mailoney:2204"
     read_only: true
     volumes:
      - /data/mailoney/log:/opt/mailoney/logs
@@ -379,32 +344,11 @@ services:
      - medpot_local
     ports:
      - "2575:2575"
-    image: "dtagdevsec/medpot:2006"
+    image: "dtagdevsec/medpot:2204"
     read_only: true
     volumes:
      - /data/medpot/log/:/var/log/medpot
 
-# Rdpy service
-  rdpy:
-    container_name: rdpy
-    extra_hosts:
-     - hpfeeds.example.com:127.0.0.1
-    restart: always
-    environment:
-     - HPFEEDS_SERVER=hpfeeds.example.com
-     - HPFEEDS_IDENT=user
-     - HPFEEDS_SECRET=pass
-     - HPFEEDS_PORT=65000
-     - SERVERID=id
-    networks:
-     - rdpy_local
-    ports:
-     - "3389:3389"
-    image: "dtagdevsec/rdpy:2006"
-    read_only: true
-    volumes:
-     - /data/rdpy/log:/var/log/rdpy
-
 # Redishoneypot service
   redishoneypot:
     container_name: redishoneypot
@@ -413,7 +357,7 @@ services:
      - redishoneypot_local
     ports:
      - "6379:6379"
-    image: "dtagdevsec/redishoneypot:2006"
+    image: "dtagdevsec/redishoneypot:2204"
     read_only: true
     volumes:
      - /data/redishoneypot/log:/var/log/redishoneypot
@@ -426,7 +370,7 @@ services:
      - hellpot_local
     ports:
      - "80:8080"
-    image: "dtagdevsec/hellpot:2006"
+    image: "dtagdevsec/hellpot:2204"
     read_only: true
     volumes:
      - /data/hellpot/log:/var/log/hellpot
@@ -444,7 +388,7 @@ services:
      - NET_ADMIN
      - SYS_NICE
      - NET_RAW
-    image: "dtagdevsec/fatt:2006"
+    image: "dtagdevsec/fatt:2204"
     volumes:
      - /data/fatt/log:/opt/fatt/log
 
@@ -453,7 +397,7 @@ services:
     container_name: p0f
     restart: always
     network_mode: "host"
-    image: "dtagdevsec/p0f:2006"
+    image: "dtagdevsec/p0f:2204"
     read_only: true
     volumes:
      - /data/p0f/log:/var/log/p0f
@@ -465,12 +409,14 @@ services:
     environment:
     # For ET Pro ruleset replace "OPEN" with your OINKCODE
      - OINKCODE=OPEN
+    # Loading external rules from URL
+    # - FROMURL="https://username:password@yoururl.com|https://username:password@otherurl.com"
     network_mode: "host"
     cap_add:
      - NET_ADMIN
      - SYS_NICE
      - NET_RAW
-    image: "dtagdevsec/suricata:2006"
+    image: "dtagdevsec/suricata:2204"
     volumes:
      - /data/suricata/log:/var/log/suricata
 
@@ -479,17 +425,6 @@ services:
 #### Tools
 ##################
 
-# Cyberchef service
-  cyberchef:
-    container_name: cyberchef
-    restart: always
-    networks:
-     - cyberchef_local
-    ports:
-     - "127.0.0.1:64299:8000"
-    image: "dtagdevsec/cyberchef:2006"
-    read_only: true
-
 #### ELK
 ## Elasticsearch service
   elasticsearch:
@@ -497,7 +432,7 @@ services:
     restart: always
     environment:
      - bootstrap.memory_lock=true
-#     - ES_JAVA_OPTS=-Xms2048m -Xmx2048m
+     - ES_JAVA_OPTS=-Xms2048m -Xmx2048m
      - ES_TMPDIR=/tmp
     cap_add:
      - IPC_LOCK
@@ -508,10 +443,10 @@ services:
       nofile:
         soft: 65536
         hard: 65536
-#    mem_limit: 4g
+    mem_limit: 4g
     ports:
      - "127.0.0.1:64298:9200"
-    image: "dtagdevsec/elasticsearch:2006"
+    image: "dtagdevsec/elasticsearch:2204"
     volumes:
      - /data:/data
 
@@ -522,36 +457,65 @@ services:
     depends_on:
       elasticsearch:
         condition: service_healthy
+    mem_limit: 1g
     ports:
      - "127.0.0.1:64296:5601"
-    image: "dtagdevsec/kibana:2006"
+    image: "dtagdevsec/kibana:2204"
 
 ## Logstash service
   logstash:
     container_name: logstash
     restart: always
-#    environment:
-#     - LS_JAVA_OPTS=-Xms2048m -Xmx2048m
+    environment:
+     - LS_JAVA_OPTS=-Xms1024m -Xmx1024m
     depends_on:
       elasticsearch:
         condition: service_healthy
     env_file:
      - /opt/tpot/etc/compose/elk_environment
-    image: "dtagdevsec/logstash:2006"
+    mem_limit: 2g
+    image: "dtagdevsec/logstash:2204"
     volumes:
      - /data:/data
 
-## Elasticsearch-head service
-  head:
-    container_name: head
+## Map Redis Service
+  map_redis:
+    container_name: map_redis
+    restart: always
+    stop_signal: SIGKILL
+    tty: true
+    image: "dtagdevsec/redis:2204"
+    read_only: true
+
+## Map Web Service
+  map_web:
+    container_name: map_web
+    restart: always
+    environment:
+     - MAP_COMMAND=AttackMapServer.py
+    env_file:
+     - /opt/tpot/etc/compose/elk_environment
+    stop_signal: SIGKILL
+    tty: true
+    ports:
+     - "127.0.0.1:64299:64299"
+    image: "dtagdevsec/map:2204"
+
+## Map Data Service
+  map_data:
+    container_name: map_data
     restart: always
     depends_on:
       elasticsearch:
         condition: service_healthy
-    ports:
-     - "127.0.0.1:64302:9100"
-    image: "dtagdevsec/head:2006"
-    read_only: true
+    environment:
+     - MAP_COMMAND=DataServer_v2.py
+    env_file:
+     - /opt/tpot/etc/compose/elk_environment
+    stop_signal: SIGKILL
+    tty: true
+    image: "dtagdevsec/map:2204"
+#### /ELK
 
 # Ewsposter service
   ewsposter:
@@ -570,7 +534,7 @@ services:
      - EWS_HPFEEDS_FORMAT=json
     env_file:
      - /opt/tpot/etc/compose/elk_environment
-    image: "dtagdevsec/ewsposter:2006"
+    image: "dtagdevsec/ewsposter:2204"
     volumes:
      - /data:/data
      - /data/ews/conf/ews.ip:/opt/ewsposter/ews.ip
@@ -579,34 +543,24 @@ services:
   nginx:
     container_name: nginx
     restart: always
-    environment:
-    ### If set to YES all changes within Heimdall will remain for the next start
-    ### Make sure to uncomment the corresponding volume statements below, or the setting will prevent a successful start of T-Pot.
-     - HEIMDALL_PERSIST=NO
     tmpfs:
      - /var/tmp/nginx/client_body
      - /var/tmp/nginx/proxy
      - /var/tmp/nginx/fastcgi
      - /var/tmp/nginx/uwsgi
-     - /var/tmp/nginx/scgi 
+     - /var/tmp/nginx/scgi
      - /run
-     - /var/log/php7/
-     - /var/lib/nginx/tmp:uid=100,gid=82 
-     - /var/lib/nginx/html/storage/logs:uid=100,gid=82
-     - /var/lib/nginx/html/storage/framework/views:uid=100,gid=82
+     - /var/lib/nginx/tmp:uid=100,gid=82
     network_mode: "host"
     ports:
      - "64297:64297"
      - "127.0.0.1:64304:64304"
-    image: "dtagdevsec/nginx:2006"
+    image: "dtagdevsec/nginx:2204"
     read_only: true
     volumes:
      - /data/nginx/cert/:/etc/nginx/cert/:ro
      - /data/nginx/conf/nginxpasswd:/etc/nginx/nginxpasswd:ro
      - /data/nginx/log/:/var/log/nginx/
-    ### Enable the following volumes if you set HEIMDALL_PERSIST=YES
-    # - /data/nginx/heimdall/database:/var/lib/nginx/html/database
-    # - /data/nginx/heimdall/storage:/var/lib/nginx/html/storage
 
 # Spiderfoot service
   spiderfoot:
@@ -616,6 +570,6 @@ services:
      - spiderfoot_local
     ports:
      - "127.0.0.1:64303:8080"
-    image: "dtagdevsec/spiderfoot:2006"
+    image: "dtagdevsec/spiderfoot:2204"
     volumes:
-     - /data/spiderfoot/spiderfoot.db:/home/spiderfoot/spiderfoot.db
+     - /data/spiderfoot:/home/spiderfoot/.spiderfoot
diff --git a/etc/compose/sensor.yml b/etc/compose/sensor.yml
index 14d7f70a..b8a13cda 100644
--- a/etc/compose/sensor.yml
+++ b/etc/compose/sensor.yml
@@ -11,17 +11,19 @@ networks:
   conpot_local_ipmi:
   conpot_local_kamstrup_382:
   cowrie_local:
+  ddospot_local:
   dicompot_local:
   dionaea_local:
   elasticpot_local:
   heralding_local:
-  honeypy_local:
-  honeysap_local:
+  ipphoney_local:
   mailoney_local:
   medpot_local:
-  rdpy_local:
+  redishoneypot_local:
   tanner_local:
   ewsposter_local:
+  sentrypeer_local:
+  spiderfoot_local:
 
 services:
 
@@ -37,7 +39,7 @@ services:
      - adbhoney_local
     ports:
      - "5555:5555"
-    image: "dtagdevsec/adbhoney:2006"
+    image: "dtagdevsec/adbhoney:2204"
     read_only: true
     volumes:
      - /data/adbhoney/log:/opt/adbhoney/log
@@ -47,14 +49,14 @@ services:
   ciscoasa:
     container_name: ciscoasa
     restart: always
-    networks:
-      - ciscoasa_local
     tmpfs:
      - /tmp/ciscoasa:uid=2000,gid=2000
+    networks:
+     - ciscoasa_local
     ports:
      - "5000:5000/udp"
      - "8443:8443"
-    image: "dtagdevsec/ciscoasa:2006"
+    image: "dtagdevsec/ciscoasa:2204"
     read_only: true
     volumes:
      - /data/ciscoasa/log:/var/log/ciscoasa
@@ -67,7 +69,7 @@ services:
      - citrixhoneypot_local
     ports:
      - "443:443"
-    image: "dtagdevsec/citrixhoneypot:2006"
+    image: "dtagdevsec/citrixhoneypot:2204"
     read_only: true
     volumes:
      - /data/citrixhoneypot/logs:/opt/citrixhoneypot/logs
@@ -89,7 +91,7 @@ services:
     ports:
      - "161:161/udp"
      - "2404:2404"
-    image: "dtagdevsec/conpot:2006"
+    image: "dtagdevsec/conpot:2204"
     read_only: true
     volumes:
      - /data/conpot/log:/var/log/conpot
@@ -110,7 +112,7 @@ services:
      - conpot_local_guardian_ast
     ports:
      - "10001:10001"
-    image: "dtagdevsec/conpot:2006"
+    image: "dtagdevsec/conpot:2204"
     read_only: true
     volumes:
      - /data/conpot/log:/var/log/conpot
@@ -131,7 +133,7 @@ services:
      - conpot_local_ipmi
     ports:
      - "623:623/udp"
-    image: "dtagdevsec/conpot:2006"
+    image: "dtagdevsec/conpot:2204"
     read_only: true
     volumes:
      - /data/conpot/log:/var/log/conpot
@@ -153,7 +155,7 @@ services:
     ports:
      - "1025:1025"
      - "50100:50100"
-    image: "dtagdevsec/conpot:2006"
+    image: "dtagdevsec/conpot:2204"
     read_only: true
     volumes:
      - /data/conpot/log:/var/log/conpot
@@ -170,7 +172,7 @@ services:
     ports:
      - "22:22"
      - "23:23"
-    image: "dtagdevsec/cowrie:2006"
+    image: "dtagdevsec/cowrie:2204"
     read_only: true
     volumes:
      - /data/cowrie/downloads:/home/cowrie/cowrie/dl
@@ -178,6 +180,25 @@ services:
      - /data/cowrie/log:/home/cowrie/cowrie/log
      - /data/cowrie/log/tty:/home/cowrie/cowrie/log/tty
 
+# Ddospot service
+  ddospot:
+    container_name: ddospot
+    restart: always
+    networks:
+     - ddospot_local
+    ports:
+     - "19:19/udp"
+     - "53:53/udp"
+     - "123:123/udp"
+#     - "161:161/udp"
+     - "1900:1900/udp"
+    image: "dtagdevsec/ddospot:2204"
+    read_only: true
+    volumes:
+     - /data/ddospot/log:/opt/ddospot/ddospot/logs
+     - /data/ddospot/bl:/opt/ddospot/ddospot/bl
+     - /data/ddospot/db:/opt/ddospot/ddospot/db
+
 # Dicompot service
 # Get the Horos Client for testing: https://horosproject.org/
 # Get Dicom images (CC BY 3.0): https://www.cancerimagingarchive.net/collections/
@@ -189,7 +210,7 @@ services:
      - dicompot_local
     ports:
      - "11112:11112"
-    image: "dtagdevsec/dicompot:2006"
+    image: "dtagdevsec/dicompot:2204"
     read_only: true
     volumes:
      - /data/dicompot/log:/var/log/dicompot
@@ -216,11 +237,11 @@ services:
      - "1723:1723"
      - "1883:1883"
      - "3306:3306"
-     - "5060:5060"
-     - "5060:5060/udp"
-     - "5061:5061"
+     # - "5060:5060"
+     # - "5060:5060/udp"
+     # - "5061:5061"
      - "27017:27017"
-    image: "dtagdevsec/dionaea:2006"
+    image: "dtagdevsec/dionaea:2204"
     read_only: true
     volumes:
      - /data/dionaea/roots/ftp:/opt/dionaea/var/dionaea/roots/ftp
@@ -240,7 +261,7 @@ services:
      - elasticpot_local
     ports:
      - "9200:9200"
-    image: "dtagdevsec/elasticpot:2006"
+    image: "dtagdevsec/elasticpot:2204"
     read_only: true
     volumes:
      - /data/elasticpot/log:/opt/elasticpot/log
@@ -270,42 +291,11 @@ services:
      - "1080:1080"
      - "5432:5432"
      - "5900:5900"
-    image: "dtagdevsec/heralding:2006"
+    image: "dtagdevsec/heralding:2204"
     read_only: true
     volumes:
      - /data/heralding/log:/var/log/heralding
 
-# HoneyPy service
-  honeypy:
-    container_name: honeypy
-    restart: always
-    networks:
-     - honeypy_local
-    ports:
-     - "7:7"
-     - "8:8"
-     - "2048:2048"
-     - "2323:2323"
-     - "2324:2324"
-     - "4096:4096"
-    # - "9200:9200"
-    image: "dtagdevsec/honeypy:2006"
-    read_only: true
-    volumes:
-     - /data/honeypy/log:/opt/honeypy/log
-
-# HoneySAP service
-  honeysap:
-    container_name: honeysap
-    restart: always
-    networks:
-     - honeysap_local
-    ports:
-     - "3299:3299"
-    image: "dtagdevsec/honeysap:2006"
-    volumes:
-     - /data/honeysap/log:/opt/honeysap/log
-
 # Honeytrap service
   honeytrap:
     container_name: honeytrap
@@ -315,13 +305,26 @@ services:
     network_mode: "host"
     cap_add:
      - NET_ADMIN
-    image: "dtagdevsec/honeytrap:2006"
+    image: "dtagdevsec/honeytrap:2204"
     read_only: true
     volumes:
      - /data/honeytrap/attacks:/opt/honeytrap/var/attacks
      - /data/honeytrap/downloads:/opt/honeytrap/var/downloads
      - /data/honeytrap/log:/opt/honeytrap/var/log
 
+# Ipphoney service
+  ipphoney:
+    container_name: ipphoney
+    restart: always
+    networks:
+     - ipphoney_local
+    ports:
+     - "631:631"
+    image: "dtagdevsec/ipphoney:2204"
+    read_only: true
+    volumes:
+     - /data/ipphoney/log:/opt/ipphoney/log
+
 # Mailoney service
   mailoney:
     container_name: mailoney
@@ -336,7 +339,7 @@ services:
      - mailoney_local
     ports:
      - "25:25"
-    image: "dtagdevsec/mailoney:2006"
+    image: "dtagdevsec/mailoney:2204"
     read_only: true
     volumes:
      - /data/mailoney/log:/opt/mailoney/logs
@@ -349,31 +352,36 @@ services:
      - medpot_local
     ports:
      - "2575:2575"
-    image: "dtagdevsec/medpot:2006"
+    image: "dtagdevsec/medpot:2204"
     read_only: true
     volumes:
      - /data/medpot/log/:/var/log/medpot
 
-# Rdpy service
-  rdpy:
-    container_name: rdpy
-    extra_hosts:
-     - hpfeeds.example.com:127.0.0.1
+# Redishoneypot service
+  redishoneypot:
+    container_name: redishoneypot
     restart: always
-    environment:
-     - HPFEEDS_SERVER=hpfeeds.example.com
-     - HPFEEDS_IDENT=user
-     - HPFEEDS_SECRET=pass
-     - HPFEEDS_PORT=65000
-     - SERVERID=id
     networks:
-     - rdpy_local
+     - redishoneypot_local
     ports:
-     - "3389:3389"
-    image: "dtagdevsec/rdpy:2006"
+     - "6379:6379"
+    image: "dtagdevsec/redishoneypot:2204"
     read_only: true
     volumes:
-     - /data/rdpy/log:/var/log/rdpy
+     - /data/redishoneypot/log:/var/log/redishoneypot
+
+# SentryPeer service
+  sentrypeer:
+    container_name: sentrypeer
+    restart: always
+    networks:
+     - sentrypeer_local
+    ports:
+     - "5060:5060/udp"
+    image: "dtagdevsec/sentrypeer:2204"
+    read_only: true
+    volumes:
+     - /data/sentrypeer/log:/var/log/sentrypeer
 
 #### Snare / Tanner
 ## Tanner Redis Service
@@ -383,7 +391,7 @@ services:
     tty: true
     networks:
      - tanner_local
-    image: "dtagdevsec/redis:2006"
+    image: "dtagdevsec/redis:2204"
     read_only: true
 
 ## PHP Sandbox service
@@ -393,7 +401,7 @@ services:
     tty: true
     networks:
      - tanner_local
-    image: "dtagdevsec/phpox:2006"
+    image: "dtagdevsec/phpox:2204"
     read_only: true
 
 ## Tanner API Service
@@ -405,7 +413,7 @@ services:
     tty: true
     networks:
      - tanner_local
-    image: "dtagdevsec/tanner:2006"
+    image: "dtagdevsec/tanner:2204"
     read_only: true
     volumes:
      - /data/tanner/log:/var/log/tanner
@@ -413,23 +421,6 @@ services:
     depends_on:
      - tanner_redis
 
-## Tanner WEB Service
-#  tanner_web:
-#    container_name: tanner_web
-#    restart: always
-#    tmpfs:
-#     - /tmp/tanner:uid=2000,gid=2000
-#    tty: true
-#    networks:
-#     - tanner_local
-#    image: "dtagdevsec/tanner:2006"
-#    command: tannerweb
-#    read_only: true
-#    volumes:
-#     - /data/tanner/log:/var/log/tanner
-#    depends_on:
-#     - tanner_redis
-
 ## Tanner Service
   tanner:
     container_name: tanner
@@ -439,7 +430,7 @@ services:
     tty: true
     networks:
      - tanner_local
-    image: "dtagdevsec/tanner:2006"
+    image: "dtagdevsec/tanner:2204"
     command: tanner
     read_only: true
     volumes:
@@ -459,7 +450,7 @@ services:
      - tanner_local
     ports:
      - "80:80"
-    image: "dtagdevsec/snare:2006"
+    image: "dtagdevsec/snare:2204"
     depends_on:
      - tanner
 
@@ -477,7 +468,7 @@ services:
      - NET_ADMIN
      - SYS_NICE
      - NET_RAW
-    image: "dtagdevsec/fatt:2006"
+    image: "dtagdevsec/fatt:2204"
     volumes:
      - /data/fatt/log:/opt/fatt/log
 
@@ -486,7 +477,7 @@ services:
     container_name: p0f
     restart: always
     network_mode: "host"
-    image: "dtagdevsec/p0f:2006"
+    image: "dtagdevsec/p0f:2204"
     read_only: true
     volumes:
      - /data/p0f/log:/var/log/p0f
@@ -505,7 +496,7 @@ services:
      - NET_ADMIN
      - SYS_NICE
      - NET_RAW
-    image: "dtagdevsec/suricata:2006"
+    image: "dtagdevsec/suricata:2204"
     volumes:
      - /data/suricata/log:/var/log/suricata
 
@@ -531,7 +522,7 @@ services:
      - EWS_HPFEEDS_FORMAT=json
     env_file:
      - /opt/tpot/etc/compose/elk_environment
-    image: "dtagdevsec/ewsposter:2006"
+    image: "dtagdevsec/ewsposter:2204"
     volumes:
      - /data:/data
      - /data/ews/conf/ews.ip:/opt/ewsposter/ews.ip
diff --git a/etc/compose/standard.yml b/etc/compose/standard.yml
index 38297ed0..e1825080 100644
--- a/etc/compose/standard.yml
+++ b/etc/compose/standard.yml
@@ -4,23 +4,25 @@ version: '2.3'
 
 networks:
   adbhoney_local:
+  ciscoasa_local:
   citrixhoneypot_local:
   conpot_local_IEC104:
   conpot_local_guardian_ast:
   conpot_local_ipmi:
   conpot_local_kamstrup_382:
   cowrie_local:
-  cyberchef_local:
+  ddospot_local:
   dicompot_local:
   dionaea_local:
   elasticpot_local:
   heralding_local:
-  honeysap_local:
+  ipphoney_local:
   mailoney_local:
   medpot_local:
-  rdpy_local:
+  redishoneypot_local:
   tanner_local:
   ewsposter_local:
+  sentrypeer_local:
   spiderfoot_local:
 
 services:
@@ -37,7 +39,7 @@ services:
      - adbhoney_local
     ports:
      - "5555:5555"
-    image: "dtagdevsec/adbhoney:2006"
+    image: "dtagdevsec/adbhoney:2204"
     read_only: true
     volumes:
      - /data/adbhoney/log:/opt/adbhoney/log
@@ -49,11 +51,12 @@ services:
     restart: always
     tmpfs:
      - /tmp/ciscoasa:uid=2000,gid=2000
-    network_mode: "host"
+    networks:
+     - ciscoasa_local
     ports:
      - "5000:5000/udp"
      - "8443:8443"
-    image: "dtagdevsec/ciscoasa:2006"
+    image: "dtagdevsec/ciscoasa:2204"
     read_only: true
     volumes:
      - /data/ciscoasa/log:/var/log/ciscoasa
@@ -66,7 +69,7 @@ services:
      - citrixhoneypot_local
     ports:
      - "443:443"
-    image: "dtagdevsec/citrixhoneypot:2006"
+    image: "dtagdevsec/citrixhoneypot:2204"
     read_only: true
     volumes:
      - /data/citrixhoneypot/logs:/opt/citrixhoneypot/logs
@@ -88,7 +91,7 @@ services:
     ports:
      - "161:161/udp"
      - "2404:2404"
-    image: "dtagdevsec/conpot:2006"
+    image: "dtagdevsec/conpot:2204"
     read_only: true
     volumes:
      - /data/conpot/log:/var/log/conpot
@@ -109,7 +112,7 @@ services:
      - conpot_local_guardian_ast
     ports:
      - "10001:10001"
-    image: "dtagdevsec/conpot:2006"
+    image: "dtagdevsec/conpot:2204"
     read_only: true
     volumes:
      - /data/conpot/log:/var/log/conpot
@@ -130,7 +133,7 @@ services:
      - conpot_local_ipmi
     ports:
      - "623:623/udp"
-    image: "dtagdevsec/conpot:2006"
+    image: "dtagdevsec/conpot:2204"
     read_only: true
     volumes:
      - /data/conpot/log:/var/log/conpot
@@ -152,7 +155,7 @@ services:
     ports:
      - "1025:1025"
      - "50100:50100"
-    image: "dtagdevsec/conpot:2006"
+    image: "dtagdevsec/conpot:2204"
     read_only: true
     volumes:
      - /data/conpot/log:/var/log/conpot
@@ -169,7 +172,7 @@ services:
     ports:
      - "22:22"
      - "23:23"
-    image: "dtagdevsec/cowrie:2006"
+    image: "dtagdevsec/cowrie:2204"
     read_only: true
     volumes:
      - /data/cowrie/downloads:/home/cowrie/cowrie/dl
@@ -177,6 +180,25 @@ services:
      - /data/cowrie/log:/home/cowrie/cowrie/log
      - /data/cowrie/log/tty:/home/cowrie/cowrie/log/tty
 
+# Ddospot service
+  ddospot:
+    container_name: ddospot
+    restart: always
+    networks:
+     - ddospot_local
+    ports:
+     - "19:19/udp"
+     - "53:53/udp"
+     - "123:123/udp"
+#     - "161:161/udp"
+     - "1900:1900/udp"
+    image: "dtagdevsec/ddospot:2204"
+    read_only: true
+    volumes:
+     - /data/ddospot/log:/opt/ddospot/ddospot/logs
+     - /data/ddospot/bl:/opt/ddospot/ddospot/bl
+     - /data/ddospot/db:/opt/ddospot/ddospot/db
+
 # Dicompot service
 # Get the Horos Client for testing: https://horosproject.org/
 # Get Dicom images (CC BY 3.0): https://www.cancerimagingarchive.net/collections/
@@ -188,7 +210,7 @@ services:
      - dicompot_local
     ports:
      - "11112:11112"
-    image: "dtagdevsec/dicompot:2006"
+    image: "dtagdevsec/dicompot:2204"
     read_only: true
     volumes:
      - /data/dicompot/log:/var/log/dicompot
@@ -215,11 +237,11 @@ services:
      - "1723:1723"
      - "1883:1883"
      - "3306:3306"
-     - "5060:5060"
-     - "5060:5060/udp"
-     - "5061:5061"
+     # - "5060:5060"
+     # - "5060:5060/udp"
+     # - "5061:5061"
      - "27017:27017"
-    image: "dtagdevsec/dionaea:2006"
+    image: "dtagdevsec/dionaea:2204"
     read_only: true
     volumes:
      - /data/dionaea/roots/ftp:/opt/dionaea/var/dionaea/roots/ftp
@@ -239,7 +261,7 @@ services:
      - elasticpot_local
     ports:
      - "9200:9200"
-    image: "dtagdevsec/elasticpot:2006"
+    image: "dtagdevsec/elasticpot:2204"
     read_only: true
     volumes:
      - /data/elasticpot/log:/opt/elasticpot/log
@@ -269,23 +291,11 @@ services:
      - "1080:1080"
      - "5432:5432"
      - "5900:5900"
-    image: "dtagdevsec/heralding:2006"
+    image: "dtagdevsec/heralding:2204"
     read_only: true
     volumes:
      - /data/heralding/log:/var/log/heralding
 
-# HoneySAP service
-  honeysap:
-    container_name: honeysap
-    restart: always
-    networks:
-     - honeysap_local
-    ports:
-     - "3299:3299"
-    image: "dtagdevsec/honeysap:2006"
-    volumes:
-     - /data/honeysap/log:/opt/honeysap/log
-
 # Honeytrap service
   honeytrap:
     container_name: honeytrap
@@ -295,13 +305,26 @@ services:
     network_mode: "host"
     cap_add:
      - NET_ADMIN
-    image: "dtagdevsec/honeytrap:2006"
+    image: "dtagdevsec/honeytrap:2204"
     read_only: true
     volumes:
      - /data/honeytrap/attacks:/opt/honeytrap/var/attacks
      - /data/honeytrap/downloads:/opt/honeytrap/var/downloads
      - /data/honeytrap/log:/opt/honeytrap/var/log
 
+# Ipphoney service
+  ipphoney:
+    container_name: ipphoney
+    restart: always
+    networks:
+     - ipphoney_local
+    ports:
+     - "631:631"
+    image: "dtagdevsec/ipphoney:2204"
+    read_only: true
+    volumes:
+     - /data/ipphoney/log:/opt/ipphoney/log
+
 # Mailoney service
   mailoney:
     container_name: mailoney
@@ -316,7 +339,7 @@ services:
      - mailoney_local
     ports:
      - "25:25"
-    image: "dtagdevsec/mailoney:2006"
+    image: "dtagdevsec/mailoney:2204"
     read_only: true
     volumes:
      - /data/mailoney/log:/opt/mailoney/logs
@@ -329,31 +352,36 @@ services:
      - medpot_local
     ports:
      - "2575:2575"
-    image: "dtagdevsec/medpot:2006"
+    image: "dtagdevsec/medpot:2204"
     read_only: true
     volumes:
      - /data/medpot/log/:/var/log/medpot
 
-# Rdpy service
-  rdpy:
-    container_name: rdpy
-    extra_hosts:
-     - hpfeeds.example.com:127.0.0.1
+# Redishoneypot service
+  redishoneypot:
+    container_name: redishoneypot
     restart: always
-    environment:
-     - HPFEEDS_SERVER=hpfeeds.example.com
-     - HPFEEDS_IDENT=user
-     - HPFEEDS_SECRET=pass
-     - HPFEEDS_PORT=65000
-     - SERVERID=id
     networks:
-     - rdpy_local
+     - redishoneypot_local
     ports:
-     - "3389:3389"
-    image: "dtagdevsec/rdpy:2006"
+     - "6379:6379"
+    image: "dtagdevsec/redishoneypot:2204"
     read_only: true
     volumes:
-     - /data/rdpy/log:/var/log/rdpy
+     - /data/redishoneypot/log:/var/log/redishoneypot
+
+# SentryPeer service
+  sentrypeer:
+    container_name: sentrypeer
+    restart: always
+    networks:
+     - sentrypeer_local
+    ports:
+     - "5060:5060/udp"
+    image: "dtagdevsec/sentrypeer:2204"
+    read_only: true
+    volumes:
+     - /data/sentrypeer/log:/var/log/sentrypeer
 
 #### Snare / Tanner
 ## Tanner Redis Service
@@ -363,7 +391,7 @@ services:
     tty: true
     networks:
      - tanner_local
-    image: "dtagdevsec/redis:2006"
+    image: "dtagdevsec/redis:2204"
     read_only: true
 
 ## PHP Sandbox service
@@ -373,7 +401,7 @@ services:
     tty: true
     networks:
      - tanner_local
-    image: "dtagdevsec/phpox:2006"
+    image: "dtagdevsec/phpox:2204"
     read_only: true
 
 ## Tanner API Service
@@ -385,7 +413,7 @@ services:
     tty: true
     networks:
      - tanner_local
-    image: "dtagdevsec/tanner:2006"
+    image: "dtagdevsec/tanner:2204"
     read_only: true
     volumes:
      - /data/tanner/log:/var/log/tanner
@@ -393,23 +421,6 @@ services:
     depends_on:
      - tanner_redis
 
-## Tanner WEB Service
-#  tanner_web:
-#    container_name: tanner_web
-#    restart: always
-#    tmpfs:
-#     - /tmp/tanner:uid=2000,gid=2000
-#    tty: true
-#    networks:
-#     - tanner_local
-#    image: "dtagdevsec/tanner:2006"
-#    command: tannerweb
-#    read_only: true
-#    volumes:
-#     - /data/tanner/log:/var/log/tanner
-#    depends_on:
-#     - tanner_redis
-
 ## Tanner Service
   tanner:
     container_name: tanner
@@ -419,7 +430,7 @@ services:
     tty: true
     networks:
      - tanner_local
-    image: "dtagdevsec/tanner:2006"
+    image: "dtagdevsec/tanner:2204"
     command: tanner
     read_only: true
     volumes:
@@ -439,7 +450,7 @@ services:
      - tanner_local
     ports:
      - "80:80"
-    image: "dtagdevsec/snare:2006"
+    image: "dtagdevsec/snare:2204"
     depends_on:
      - tanner
 
@@ -457,7 +468,7 @@ services:
      - NET_ADMIN
      - SYS_NICE
      - NET_RAW
-    image: "dtagdevsec/fatt:2006"
+    image: "dtagdevsec/fatt:2204"
     volumes:
      - /data/fatt/log:/opt/fatt/log
 
@@ -466,7 +477,7 @@ services:
     container_name: p0f
     restart: always
     network_mode: "host"
-    image: "dtagdevsec/p0f:2006"
+    image: "dtagdevsec/p0f:2204"
     read_only: true
     volumes:
      - /data/p0f/log:/var/log/p0f
@@ -478,12 +489,14 @@ services:
     environment:
     # For ET Pro ruleset replace "OPEN" with your OINKCODE
      - OINKCODE=OPEN
+    # Loading external rules from URL
+    # - FROMURL="https://username:password@yoururl.com|https://username:password@otherurl.com"
     network_mode: "host"
     cap_add:
      - NET_ADMIN
      - SYS_NICE
      - NET_RAW
-    image: "dtagdevsec/suricata:2006"
+    image: "dtagdevsec/suricata:2204"
     volumes:
      - /data/suricata/log:/var/log/suricata
 
@@ -492,17 +505,6 @@ services:
 #### Tools
 ##################
 
-# Cyberchef service
-  cyberchef:
-    container_name: cyberchef
-    restart: always
-    networks:
-     - cyberchef_local
-    ports:
-     - "127.0.0.1:64299:8000"
-    image: "dtagdevsec/cyberchef:2006"
-    read_only: true
-
 #### ELK
 ## Elasticsearch service
   elasticsearch:
@@ -510,7 +512,7 @@ services:
     restart: always
     environment:
      - bootstrap.memory_lock=true
-#     - ES_JAVA_OPTS=-Xms2048m -Xmx2048m
+     - ES_JAVA_OPTS=-Xms2048m -Xmx2048m
      - ES_TMPDIR=/tmp
     cap_add:
      - IPC_LOCK
@@ -521,10 +523,10 @@ services:
       nofile:
         soft: 65536
         hard: 65536
-#    mem_limit: 4g
+    mem_limit: 4g
     ports:
      - "127.0.0.1:64298:9200"
-    image: "dtagdevsec/elasticsearch:2006"
+    image: "dtagdevsec/elasticsearch:2204"
     volumes:
      - /data:/data
 
@@ -535,36 +537,65 @@ services:
     depends_on:
       elasticsearch:
         condition: service_healthy
+    mem_limit: 1g
     ports:
      - "127.0.0.1:64296:5601"
-    image: "dtagdevsec/kibana:2006"
+    image: "dtagdevsec/kibana:2204"
 
 ## Logstash service
   logstash:
     container_name: logstash
     restart: always
-#    environment:
-#     - LS_JAVA_OPTS=-Xms2048m -Xmx2048m
+    environment:
+     - LS_JAVA_OPTS=-Xms1024m -Xmx1024m
     depends_on:
       elasticsearch:
         condition: service_healthy
     env_file:
      - /opt/tpot/etc/compose/elk_environment
-    image: "dtagdevsec/logstash:2006"
+    mem_limit: 2g
+    image: "dtagdevsec/logstash:2204"
     volumes:
      - /data:/data
 
-## Elasticsearch-head service
-  head:
-    container_name: head
+## Map Redis Service
+  map_redis:
+    container_name: map_redis
+    restart: always
+    stop_signal: SIGKILL
+    tty: true
+    image: "dtagdevsec/redis:2204"
+    read_only: true
+
+## Map Web Service
+  map_web:
+    container_name: map_web
+    restart: always
+    environment:
+     - MAP_COMMAND=AttackMapServer.py
+    env_file:
+     - /opt/tpot/etc/compose/elk_environment
+    stop_signal: SIGKILL
+    tty: true
+    ports:
+     - "127.0.0.1:64299:64299"
+    image: "dtagdevsec/map:2204"
+
+## Map Data Service
+  map_data:
+    container_name: map_data
     restart: always
     depends_on:
       elasticsearch:
         condition: service_healthy
-    ports:
-     - "127.0.0.1:64302:9100"
-    image: "dtagdevsec/head:2006"
-    read_only: true
+    environment:
+     - MAP_COMMAND=DataServer_v2.py
+    env_file:
+     - /opt/tpot/etc/compose/elk_environment
+    stop_signal: SIGKILL
+    tty: true
+    image: "dtagdevsec/map:2204"
+#### /ELK
 
 # Ewsposter service
   ewsposter:
@@ -583,7 +614,7 @@ services:
      - EWS_HPFEEDS_FORMAT=json
     env_file:
      - /opt/tpot/etc/compose/elk_environment
-    image: "dtagdevsec/ewsposter:2006"
+    image: "dtagdevsec/ewsposter:2204"
     volumes:
      - /data:/data
      - /data/ews/conf/ews.ip:/opt/ewsposter/ews.ip
@@ -592,34 +623,24 @@ services:
   nginx:
     container_name: nginx
     restart: always
-    environment:
-    ### If set to YES all changes within Heimdall will remain for the next start
-    ### Make sure to uncomment the corresponding volume statements below, or the setting will prevent a successful start of T-Pot.
-     - HEIMDALL_PERSIST=NO
     tmpfs:
      - /var/tmp/nginx/client_body
      - /var/tmp/nginx/proxy
      - /var/tmp/nginx/fastcgi
      - /var/tmp/nginx/uwsgi
-     - /var/tmp/nginx/scgi 
+     - /var/tmp/nginx/scgi
      - /run
-     - /var/log/php7/
-     - /var/lib/nginx/tmp:uid=100,gid=82 
-     - /var/lib/nginx/html/storage/logs:uid=100,gid=82
-     - /var/lib/nginx/html/storage/framework/views:uid=100,gid=82
+     - /var/lib/nginx/tmp:uid=100,gid=82
     network_mode: "host"
     ports:
      - "64297:64297"
      - "127.0.0.1:64304:64304"
-    image: "dtagdevsec/nginx:2006"
+    image: "dtagdevsec/nginx:2204"
     read_only: true
     volumes:
      - /data/nginx/cert/:/etc/nginx/cert/:ro
      - /data/nginx/conf/nginxpasswd:/etc/nginx/nginxpasswd:ro
      - /data/nginx/log/:/var/log/nginx/
-    ### Enable the following volumes if you set HEIMDALL_PERSIST=YES
-    # - /data/nginx/heimdall/database:/var/lib/nginx/html/database
-    # - /data/nginx/heimdall/storage:/var/lib/nginx/html/storage
 
 # Spiderfoot service
   spiderfoot:
@@ -629,6 +650,6 @@ services:
      - spiderfoot_local
     ports:
      - "127.0.0.1:64303:8080"
-    image: "dtagdevsec/spiderfoot:2006"
+    image: "dtagdevsec/spiderfoot:2204"
     volumes:
-     - /data/spiderfoot/spiderfoot.db:/home/spiderfoot/spiderfoot.db
+     - /data/spiderfoot:/home/spiderfoot/.spiderfoot
diff --git a/etc/compose/tarpit.yml b/etc/compose/tarpit.yml
new file mode 100644
index 00000000..3ca278b8
--- /dev/null
+++ b/etc/compose/tarpit.yml
@@ -0,0 +1,287 @@
+# T-Pot (Tarpit)
+# Do not erase the ports sections; they are used by /opt/tpot/bin/rules.sh to set up iptables ACCEPT rules for NFQ (honeytrap / glutton)
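+# An illustrative sketch (assumption, not taken from rules.sh itself): for a
+# mapping like "22:2222" below, rules.sh would add an ACCEPT rule roughly like
+#   iptables -A INPUT -p tcp --dport 22 -j ACCEPT
+# so the tarpit ports stay reachable while NFQ hands the rest to honeytrap / glutton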
+version: '2.3'
+
+networks:
+  endlessh_local:
+  hellpot_local:
+  heralding_local:
+  ewsposter_local:
+  spiderfoot_local:
+
+services:
+
+##################
+#### Honeypots
+##################
+
+# Endlessh service
+  endlessh:
+    container_name: endlessh
+    restart: always
+    networks:
+     - endlessh_local
+    ports:
+     - "22:2222"
+    image: "dtagdevsec/endlessh:2204"
+    read_only: true
+    volumes:
+     - /data/endlessh/log:/var/log/endlessh
+
+# Heralding service
+  heralding:
+    container_name: heralding
+    restart: always
+    tmpfs:
+     - /tmp/heralding:uid=2000,gid=2000
+    networks:
+     - heralding_local
+    ports:
+    # - "21:21"
+    # - "22:22"
+    # - "23:23"
+    # - "25:25"
+    # - "80:80"
+     - "110:110"
+     - "143:143"
+    # - "443:443"
+     - "465:465"
+     - "993:993"
+     - "995:995"
+    # - "3306:3306"
+    # - "3389:3389"
+     - "1080:1080"
+     - "5432:5432"
+     - "5900:5900"
+    image: "dtagdevsec/heralding:2204"
+    read_only: true
+    volumes:
+     - /data/heralding/log:/var/log/heralding
+
+# Honeytrap service
+  honeytrap:
+    container_name: honeytrap
+    restart: always
+    tmpfs:
+     - /tmp/honeytrap:uid=2000,gid=2000
+    network_mode: "host"
+    cap_add:
+     - NET_ADMIN
+    image: "dtagdevsec/honeytrap:2204"
+    read_only: true
+    volumes:
+     - /data/honeytrap/attacks:/opt/honeytrap/var/attacks
+     - /data/honeytrap/downloads:/opt/honeytrap/var/downloads
+     - /data/honeytrap/log:/opt/honeytrap/var/log
+
+# Hellpot service
+  hellpot:
+    container_name: hellpot
+    restart: always
+    networks:
+     - hellpot_local
+    ports:
+     - "80:8080"
+    image: "dtagdevsec/hellpot:2204"
+    read_only: true
+    volumes:
+     - /data/hellpot/log:/var/log/hellpot
+
+##################
+#### NSM
+##################
+
+# Fatt service
+  fatt:
+    container_name: fatt
+    restart: always
+    network_mode: "host"
+    cap_add:
+     - NET_ADMIN
+     - SYS_NICE
+     - NET_RAW
+    image: "dtagdevsec/fatt:2204"
+    volumes:
+     - /data/fatt/log:/opt/fatt/log
+
+# P0f service
+  p0f:
+    container_name: p0f
+    restart: always
+    network_mode: "host"
+    image: "dtagdevsec/p0f:2204"
+    read_only: true
+    volumes:
+     - /data/p0f/log:/var/log/p0f
+
+# Suricata service
+  suricata:
+    container_name: suricata
+    restart: always
+    environment:
+    # For ET Pro ruleset replace "OPEN" with your OINKCODE
+     - OINKCODE=OPEN
+    # Load external rules from URL
+    # - FROMURL="https://username:password@yoururl.com|https://username:password@otherurl.com"
+    network_mode: "host"
+    cap_add:
+     - NET_ADMIN
+     - SYS_NICE
+     - NET_RAW
+    image: "dtagdevsec/suricata:2204"
+    volumes:
+     - /data/suricata/log:/var/log/suricata
+
+
+##################
+#### Tools
+##################
+
+#### ELK
+## Elasticsearch service
+  elasticsearch:
+    container_name: elasticsearch
+    restart: always
+    environment:
+     - bootstrap.memory_lock=true
+     - ES_JAVA_OPTS=-Xms2048m -Xmx2048m
+     - ES_TMPDIR=/tmp
+    cap_add:
+     - IPC_LOCK
+    ulimits:
+      memlock:
+        soft: -1
+        hard: -1
+      nofile:
+        soft: 65536
+        hard: 65536
+    mem_limit: 4g
+    ports:
+     - "127.0.0.1:64298:9200"
+    image: "dtagdevsec/elasticsearch:2204"
+    volumes:
+     - /data:/data
+
+## Kibana service
+  kibana:
+    container_name: kibana
+    restart: always
+    depends_on:
+      elasticsearch:
+        condition: service_healthy
+    mem_limit: 1g
+    ports:
+     - "127.0.0.1:64296:5601"
+    image: "dtagdevsec/kibana:2204"
+
+## Logstash service
+  logstash:
+    container_name: logstash
+    restart: always
+    environment:
+     - LS_JAVA_OPTS=-Xms1024m -Xmx1024m
+    depends_on:
+      elasticsearch:
+        condition: service_healthy
+    env_file:
+     - /opt/tpot/etc/compose/elk_environment
+    mem_limit: 2g
+    image: "dtagdevsec/logstash:2204"
+    volumes:
+     - /data:/data
+
+## Map Redis Service
+  map_redis:
+    container_name: map_redis
+    restart: always
+    stop_signal: SIGKILL
+    tty: true
+    image: "dtagdevsec/redis:2204"
+    read_only: true
+
+## Map Web Service
+  map_web:
+    container_name: map_web
+    restart: always
+    environment:
+     - MAP_COMMAND=AttackMapServer.py
+    env_file:
+     - /opt/tpot/etc/compose/elk_environment
+    stop_signal: SIGKILL
+    tty: true
+    ports:
+     - "127.0.0.1:64299:64299"
+    image: "dtagdevsec/map:2204"
+
+## Map Data Service
+  map_data:
+    container_name: map_data
+    restart: always
+    depends_on:
+      elasticsearch:
+        condition: service_healthy
+    environment:
+     - MAP_COMMAND=DataServer_v2.py
+    env_file:
+     - /opt/tpot/etc/compose/elk_environment
+    stop_signal: SIGKILL
+    tty: true
+    image: "dtagdevsec/map:2204"
+#### /ELK
+
+# Ewsposter service
+  ewsposter:
+    container_name: ewsposter
+    restart: always
+    networks:
+     - ewsposter_local
+    environment:
+     - EWS_HPFEEDS_ENABLE=false
+     - EWS_HPFEEDS_HOST=host
+     - EWS_HPFEEDS_PORT=port
+     - EWS_HPFEEDS_CHANNELS=channels
+     - EWS_HPFEEDS_IDENT=user
+     - EWS_HPFEEDS_SECRET=secret
+     - EWS_HPFEEDS_TLSCERT=false
+     - EWS_HPFEEDS_FORMAT=json
+    env_file:
+     - /opt/tpot/etc/compose/elk_environment
+    image: "dtagdevsec/ewsposter:2204"
+    volumes:
+     - /data:/data
+     - /data/ews/conf/ews.ip:/opt/ewsposter/ews.ip
+
+# Nginx service
+  nginx:
+    container_name: nginx
+    restart: always
+    tmpfs:
+     - /var/tmp/nginx/client_body
+     - /var/tmp/nginx/proxy
+     - /var/tmp/nginx/fastcgi
+     - /var/tmp/nginx/uwsgi
+     - /var/tmp/nginx/scgi
+     - /run
+     - /var/lib/nginx/tmp:uid=100,gid=82
+    network_mode: "host"
+    ports:
+     - "64297:64297"
+     - "127.0.0.1:64304:64304"
+    image: "dtagdevsec/nginx:2204"
+    read_only: true
+    volumes:
+     - /data/nginx/cert/:/etc/nginx/cert/:ro
+     - /data/nginx/conf/nginxpasswd:/etc/nginx/nginxpasswd:ro
+     - /data/nginx/log/:/var/log/nginx/
+
+# Spiderfoot service
+  spiderfoot:
+    container_name: spiderfoot
+    restart: always
+    networks:
+     - spiderfoot_local
+    ports:
+     - "127.0.0.1:64303:8080"
+    image: "dtagdevsec/spiderfoot:2204"
+    volumes:
+     - /data/spiderfoot:/home/spiderfoot/.spiderfoot
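
The tarpit flavor above exposes Endlessh on host port 22 and HellPot on host port 80. A quick way to confirm both actually tarpit is sketched below; `TPOT_IP` is a placeholder for your sensor's address, and `/wp-login.php` assumes HellPot's default trap paths.

```bash
# Smoke test for the tarpit services (sketch; TPOT_IP is hypothetical).
TPOT_IP=192.168.1.100

# Endlessh (host 22 -> container 2222) drips an endless banner,
# so cap the read instead of waiting for a normal SSH handshake.
timeout 10 nc "$TPOT_IP" 22 | head -c 200; echo

# HellPot (host 80 -> container 8080) streams an endless body on its
# trap paths; --max-time keeps curl from hanging forever.
curl -s --max-time 10 "http://$TPOT_IP/wp-login.php" | head -c 200; echo
```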
diff --git a/etc/curator/actions.yml b/etc/curator/actions.yml
deleted file mode 100644
index 5b7645fd..00000000
--- a/etc/curator/actions.yml
+++ /dev/null
@@ -1,26 +0,0 @@
-# Remember, leave a key empty if there is no value.  None will be a string,
-# not a Python "NoneType"
-#
-# Also remember that all examples have 'disable_action' set to True.  If you
-# want to use this action as a template, be sure to set this to False after
-# copying it.
-actions:
-  1:
-    action: delete_indices
-    description: >-
-      Delete indices older than 90 days (based on index name), for logstash-
-      prefixed indices. Ignore the error if the filter does not result in an
-      actionable list of indices (ignore_empty_list) and exit cleanly.
-    options:
-      ignore_empty_list: True
-      disable_action: False
-    filters:
-    - filtertype: pattern
-      kind: timestring
-      value: '%Y.%m.%d'
-    - filtertype: age
-      source: name
-      direction: older
-      timestring: '%Y.%m.%d'
-      unit: days
-      unit_count: 90
diff --git a/etc/curator/curator.yml b/etc/curator/curator.yml
deleted file mode 100644
index 715bcd06..00000000
--- a/etc/curator/curator.yml
+++ /dev/null
@@ -1,21 +0,0 @@
-# Remember, leave a key empty if there is no value.  None will be a string,
-# not a Python "NoneType"
-client:
-  hosts:
-    - 127.0.0.1
-  port: 64298
-  url_prefix:
-  use_ssl: False
-  certificate:
-  client_cert:
-  client_key:
-  ssl_no_validate: False
-  http_auth:
-  timeout: 30
-  master_only: False
-
-logging:
-  loglevel: INFO
-  logfile: /var/log/curator.log
-  logformat: default
-  blacklist: ['elasticsearch', 'urllib3']
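
With both Curator files deleted, index retention moves into Elasticsearch itself via Index Lifecycle Management. As a rough functional equivalent of the removed actions.yml, a policy like the following could be created against the Elasticsearch port published on localhost; this is a sketch only, assuming the API answers unauthenticated on 127.0.0.1:64298, and the policy name `tpot_90d_delete` is illustrative, not the one T-Pot ships.

```bash
# Sketch: an ILM policy mirroring the deleted Curator job
# (delete indices 90 days after creation). The policy name is
# illustrative only, not T-Pot's shipped policy.
curl -s -XPUT "http://127.0.0.1:64298/_ilm/policy/tpot_90d_delete" \
  -H 'Content-Type: application/json' -d '
{
  "policy": {
    "phases": {
      "delete": {
        "min_age": "90d",
        "actions": { "delete": {} }
      }
    }
  }
}'
```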
diff --git a/etc/logrotate/logrotate.conf b/etc/logrotate/logrotate.conf
index 52631483..07223601 100644
--- a/etc/logrotate/logrotate.conf
+++ b/etc/logrotate/logrotate.conf
@@ -24,7 +24,6 @@
 /data/heralding/log/*.csv
 /data/heralding/log/*.json
 /data/honeypots/log/*.log
-/data/honeypy/log/*.log
 /data/honeysap/log/*.log
 /data/honeytrap/log/*.log
 /data/honeytrap/log/*.json
@@ -36,6 +35,7 @@
 /data/p0f/log/p0f.json
 /data/rdpy/log/rdpy.log
 /data/redishoneypot/log/*.log
+/data/sentrypeer/log/*.json
 /data/suricata/log/*.log
 /data/suricata/log/*.json
 /data/tanner/log/*.json
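
Pattern changes like the sentrypeer addition are cheap to verify with a dry run; `-d` prints what logrotate would do without rotating anything (a sketch, assuming the file's installed location under /opt/tpot).

```bash
# Sketch: dry-run the adjusted rotation config; -d only prints actions.
logrotate -d /opt/tpot/etc/logrotate/logrotate.conf
```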
diff --git a/etc/objects/elkbase.tgz b/etc/objects/elkbase.tgz
index 65ba153b..b85479ab 100644
Binary files a/etc/objects/elkbase.tgz and b/etc/objects/elkbase.tgz differ
diff --git a/etc/objects/kibana_export.ndjson.zip b/etc/objects/kibana_export.ndjson.zip
index 24ab95c0..55c0f6b9 100644
Binary files a/etc/objects/kibana_export.ndjson.zip and b/etc/objects/kibana_export.ndjson.zip differ
diff --git a/host/etc/rc.local b/host/etc/rc.local
index 06bd9865..68f6775a 100755
--- a/host/etc/rc.local
+++ b/host/etc/rc.local
@@ -1,2 +1,3 @@
 #!/bin/bash
+/opt/tpot/bin/updateip.sh
 exit 0
diff --git a/host/etc/systemd/tpot.service b/host/etc/systemd/tpot.service
index a0c8350b..96241fa2 100644
--- a/host/etc/systemd/tpot.service
+++ b/host/etc/systemd/tpot.service
@@ -15,12 +15,7 @@ ExecStartPre=-/opt/tpot/bin/updateip.sh
 ExecStartPre=-/bin/bash -c '/opt/tpot/bin/clean.sh on'
 
 # Remove old containers, images and volumes
-ExecStartPre=-/usr/bin/docker-compose -f /opt/tpot/etc/tpot.yml down -v
-ExecStartPre=-/usr/bin/docker-compose -f /opt/tpot/etc/tpot.yml rm -v
-ExecStartPre=-/bin/bash -c 'docker network rm $(docker network ls -q)'
-ExecStartPre=-/bin/bash -c 'docker volume rm $(docker volume ls -q)'
-ExecStartPre=-/bin/bash -c 'docker rm -v $(docker ps -aq)'
-ExecStartPre=-/bin/bash -c 'docker rmi $(docker images | grep "<none>" | awk \'{print $3}\')'
+ExecStartPre=/opt/tpot/bin/tpdclean.sh -y
 
 # Get IF, disable offloading, enable promiscious mode for p0f and suricata
 ExecStartPre=-/bin/bash -c '/sbin/ethtool --offload $(/sbin/ip address | grep "^2: " | awk \'{ print $2 }\' | tr -d [:punct:]) rx off tx off'
@@ -34,6 +29,9 @@ ExecStartPre=/opt/tpot/bin/rules.sh /opt/tpot/etc/tpot.yml set
 # Compose T-Pot up
 ExecStart=/usr/bin/docker-compose -f /opt/tpot/etc/tpot.yml up --no-color
 
+# We want to see the true source of UDP packets inside the containers (https://github.com/moby/libnetwork/issues/1994)
+ExecStartPost=/bin/bash -c '/usr/bin/sleep 30 && /usr/sbin/conntrack -D -p udp'
+
 # Compose T-Pot down, remove containers and volumes
 ExecStop=/usr/bin/docker-compose -f /opt/tpot/etc/tpot.yml down -v
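
The new ExecStartPost line works around moby/libnetwork#1994: stale UDP conntrack entries would otherwise masquerade the true source address of UDP attackers inside the containers. A sketch for inspecting and reproducing what the unit flushes (run as root):

```bash
# Sketch: list the UDP conntrack entries the unit flushes on start.
conntrack -L -p udp | head

# Manual equivalent of the new ExecStartPost line.
sleep 30 && conntrack -D -p udp
```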
 
diff --git a/iso/installer/install.sh b/iso/installer/install.sh
index 70f60cdc..8d7ae865 100755
--- a/iso/installer/install.sh
+++ b/iso/installer/install.sh
@@ -18,11 +18,16 @@ myCONF_FILE="/root/installer/iso.conf"
 myPROGRESSBOXCONF=" --backtitle "$myBACKTITLE" --progressbox 24 80"
 mySITES="https://ghcr.io https://github.com https://pypi.python.org https://debian.org"
 myTPOTCOMPOSE="/opt/tpot/etc/tpot.yml"
-myLSB_STABLE_SUPPORTED="stretch buster"
+myLSB_STABLE_SUPPORTED="bullseye"
 myLSB_TESTING_SUPPORTED="stable"
 myREMOTESITES="https://hub.docker.com https://github.com https://pypi.python.org https://debian.org https://listbot.sicherheitstacho.eu"
 myPREINSTALLPACKAGES="aria2 apache2-utils cracklib-runtime curl dialog figlet fuse grc libcrack2 libpq-dev lsb-release net-tools software-properties-common toilet"
-myINSTALLPACKAGES="aria2 apache2-utils apparmor apt-transport-https aufs-tools bash-completion build-essential ca-certificates cgroupfs-mount cockpit cockpit-docker console-setup console-setup-linux cracklib-runtime curl debconf-utils dialog dnsutils docker.io docker-compose ethtool fail2ban figlet genisoimage git glances grc haveged html2text htop iptables iw jq kbd libcrack2 libltdl7 libpam-google-authenticator man mosh multitail net-tools npm ntp openssh-server openssl pass pigz prips software-properties-common sshpass syslinux psmisc pv python3-pip toilet unattended-upgrades unzip vim wget wireless-tools wpasupplicant"
+if [ -f "../../packages.txt" ];
+  then myINSTALLPACKAGESFILE="../../packages.txt"
+elif [ -f "/opt/tpot/packages.txt" ];
+  then myINSTALLPACKAGESFILE="/opt/tpot/packages.txt"
+fi
+myINSTALLPACKAGES=$(cat $myINSTALLPACKAGESFILE)
 myINFO="\
 ###########################################
 ### T-Pot Installer for Debian (Stable) ###
@@ -122,9 +127,6 @@ mySYSCTLCONF="
 kernel.panic = 1
 kernel.panic_on_oops = 1
 vm.max_map_count = 262144
-net.ipv6.conf.all.disable_ipv6 = 1
-net.ipv6.conf.default.disable_ipv6 = 1
-net.ipv6.conf.lo.disable_ipv6 = 1
 "
 myFAIL2BANCONF="[DEFAULT]
 ignore-ip = 127.0.0.1/8
@@ -172,9 +174,6 @@ myCRONJOBS="
 # Check if updated images are available and download them
 $myRANDOM_MINUTE $myPULL_HOUR * * *      root    docker-compose -f /opt/tpot/etc/tpot.yml pull
 
-# Delete elasticsearch logstash indices older than 90 days
-$myRANDOM_MINUTE $myDEL_HOUR * * *      root    curator --config /opt/tpot/etc/curator/curator.yml /opt/tpot/etc/curator/actions.yml
-
 # Uploaded binaries are not supposed to be downloaded
 */1 * * * *     root    mv --backup=numbered /data/dionaea/roots/ftp/* /data/dionaea/binaries/
 
@@ -312,7 +311,7 @@ function fuGET_DEPS {
   echo "### Removing and holding back problematic packages ..."
   apt-fast -y purge exim4-base mailutils pcp cockpit-pcp elasticsearch-curator
   apt-fast -y autoremove
-  apt-mark hold exim4-base mailutils pcp cockpit-pcp elasticsearch-curator
+  apt-mark hold exim4-base mailutils pcp cockpit-pcp
 }
 
 # Check for other services
@@ -439,6 +438,16 @@ if [ -s "$myTPOT_CONF_FILE" ] && [ "$myTPOT_CONF_FILE" != "" ];
 fi
 
 # Prepare running the installer
+myUSERCHECK=$(grep "tpot" /etc/passwd | wc -l)
+if [ "$myUSERCHECK" -gt "0" ];
+  then
+    echo "### The user name \"tpot\" already exists. The tpot username and group may not previously exist or T-Pot will not work."
+    echo "### We recommend a fresh install according to the T-Pot Readme Post-Install method."
+    echo
+    echo "Aborting."
+    echo
+    exit 0
+fi
 echo "$myINFO" | head -n 3
 fuCHECK_PORTS
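
The guard above counts substring matches in /etc/passwd; an exact-match variant with getent would avoid tripping over unrelated accounts that merely contain the string "tpot" (a sketch, not the shipped code):

```bash
# Sketch: exact-match variant of the user guard (not the shipped code).
if getent passwd tpot > /dev/null; then
  echo "### The user \"tpot\" already exists. Aborting."
  exit 0
fi
```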
 
@@ -454,7 +463,7 @@ export TERM=linux
 if [ "$myTPOT_DEPLOYMENT_TYPE" == "iso" ];
   then
     sleep 5
-    dialog --keep-window --no-ok --no-cancel --backtitle "$myBACKTITLE" --title "[ Wait to avoid interference with service messages ]" --pause "" 6 80 7
+    dialog --keep-window --no-ok --no-cancel --backtitle "$myBACKTITLE" --title "[ Wait to avoid interference with service messages ]" --pause "" 7 80 7
 fi
 
 # Check if remote sites are available
@@ -515,14 +524,15 @@ fi
 if [ "$myTPOT_DEPLOYMENT_TYPE" == "iso" ] || [ "$myTPOT_DEPLOYMENT_TYPE" == "user" ];
   then
     myCONF_TPOT_FLAVOR=$(dialog --keep-window --no-cancel --backtitle "$myBACKTITLE" --title "[ Choose Your T-Pot Edition ]" --menu \
-    "\nRequired: 8GB RAM, 128GB SSD\nRecommended: 8GB RAM, 256GB SSD" 15 70 7 \
-    "STANDARD" "Honeypots, ELK, NSM & Tools" \
+    "\nRequired: 8-16GB RAM, 128GB SSD\nRecommended: 16GB RAM, 256GB SSD" 17 70 1 \
+    "STANDARD" "T-Pot Standalone with everything you need" \
+    "HIVE" "T-Pot Hive: ELK & Tools" \
+    "HIVE_SENSOR" "T-Pot Hive Sensor: Honeypots & NSM" \
+    "INDUSTRIAL" "Same as Standard with focus on Conpot" \
     "LOG4J" "Log4Pot, ELK, NSM & Tools" \
-    "SENSOR" "Just Honeypots, EWS Poster & NSM" \
-    "INDUSTRIAL" "Conpot, RDPY, Vnclowpot, ELK, NSM & Tools" \
-    "COLLECTOR" "Heralding, ELK, NSM & Tools" \
-    "NEXTGEN" "NextGen (Glutton, HoneyPy)" \
-    "MEDICAL" "Dicompot, Medpot, ELK, NSM & Tools" 3>&1 1>&2 2>&3 3>&-)
+    "MEDICAL" "Dicompot, Medpot, ELK, NSM & Tools" \
+    "MINI" "Same as Standard with focus on qHoneypots" \
+    "SENSOR" "Just Honeypots & NSM" 3>&1 1>&2 2>&3 3>&-)
 fi
 
 # Let's ask for a secure tsec password if installation type is iso
@@ -662,7 +672,7 @@ fi
 if [ "$myCONF_NTP_USE" == "0" ];
   then
     fuBANNER "Setup NTP"
-    cp $myCONF_NTP_CONF_FILE /etc/ntp.conf
+    cp $myCONF_NTP_CONF_FILE /etc/systemd/timesyncd.conf
 fi
 
 # Let's setup 802.1x networking
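
Time synchronization moves from ntpd to systemd-timesyncd here, which also removes the need for the old ntp-listener workaround further below. A sketch for confirming the configured server took effect after installation:

```bash
# Sketch: confirm systemd-timesyncd is running and using the server
# written to /etc/systemd/timesyncd.conf during install.
systemctl status systemd-timesyncd --no-pager
timedatectl timesync-status
```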
@@ -683,16 +693,17 @@ echo "$myNETWORK_WLANEXAMPLE" | tee -a /etc/network/interfaces
 fuBANNER "SSH roaming off"
 echo "UseRoaming no" | tee -a /etc/ssh/ssh_config
 
-# Installing elasticdump, elasticsearch-curator, yq
+# Installing elasticdump, yq
 fuBANNER "Installing pkgs"
 npm install elasticdump -g
-pip3 install elasticsearch-curator yq
+pip3 install glances yq
 hash -r
 
 # Cloning T-Pot from GitHub
 if ! [ "$myTPOT_DEPLOYMENT_TYPE" == "iso" ];
   then
     fuBANNER "Cloning T-Pot"
+    ### DEV
     git clone https://github.com/telekom-security/tpotce /opt/tpot
 fi
 
@@ -732,30 +743,34 @@ case $myCONF_TPOT_FLAVOR in
     fuBANNER "STANDARD"
     ln -s /opt/tpot/etc/compose/standard.yml $myTPOTCOMPOSE
   ;;
-  LOG4J)
-    fuBANNER "LOG4J"
-    ln -s /opt/tpot/etc/compose/log4j.yml $myTPOTCOMPOSE
+  HIVE)
+    fuBANNER "HIVE"
+    ln -s /opt/tpot/etc/compose/hive.yml $myTPOTCOMPOSE
   ;;
-  SENSOR)
-    fuBANNER "SENSOR"
-    ln -s /opt/tpot/etc/compose/sensor.yml $myTPOTCOMPOSE
+  HIVE_SENSOR)
+    fuBANNER "HIVE_SENSOR"
+    ln -s /opt/tpot/etc/compose/hive_sensor.yml $myTPOTCOMPOSE
   ;;
   INDUSTRIAL)
     fuBANNER "INDUSTRIAL"
     ln -s /opt/tpot/etc/compose/industrial.yml $myTPOTCOMPOSE
   ;;
-  COLLECTOR)
-    fuBANNER "COLLECTOR"
-    ln -s /opt/tpot/etc/compose/collector.yml $myTPOTCOMPOSE
-  ;;
-  NEXTGEN)
-    fuBANNER "NEXTGEN"
-    ln -s /opt/tpot/etc/compose/nextgen.yml $myTPOTCOMPOSE
+  LOG4J)
+    fuBANNER "LOG4J"
+    ln -s /opt/tpot/etc/compose/log4j.yml $myTPOTCOMPOSE
   ;;
   MEDICAL)
     fuBANNER "MEDICAL"
     ln -s /opt/tpot/etc/compose/medical.yml $myTPOTCOMPOSE
   ;;
+  MINI)
+    fuBANNER "MINI"
+    ln -s /opt/tpot/etc/compose/mini.yml $myTPOTCOMPOSE
+  ;;
+  SENSOR)
+    fuBANNER "SENSOR"
+    ln -s /opt/tpot/etc/compose/sensor.yml $myTPOTCOMPOSE
+  ;;
 esac
 
 # Let's load docker images
@@ -788,59 +803,39 @@ echo "$mySYSTEMDFIX" | tee /etc/systemd/network/99-default.link
 fuBANNER "Add cronjobs"
 echo "$myCRONJOBS" | tee -a /etc/crontab
 
-### For some honeypots to work we need to ensure ntp.service is not listening
-echo "### Ensure ntp.service is not listening to avoid potential port conflict with ddospot."
-myNTP_IF_DISABLE="interface ignore wildcard
-interface ignore 127.0.0.1
-interface ignore ::1"
-
-if [ "$(cat /etc/ntp.conf | grep "interface ignore wildcard" | wc -l)" != "1" ];
-  then
-    echo "### Found active ntp listeners and updating config."
-    echo "$myNTP_IF_DISABLE" | tee -a /etc/ntp.conf
-    echo "### Restarting ntp.service for changes to take effect."
-    systemctl stop ntp.service
-    systemctl start ntp.service
-  else
-    echo "### Found no active ntp listeners."
-fi
-
 # Let's create some files and folders
 fuBANNER "Files & folders"
 mkdir -vp /data/adbhoney/{downloads,log} \
-         /data/ciscoasa/log \
-         /data/conpot/log \
-         /data/citrixhoneypot/logs \
-         /data/cowrie/{downloads,keys,misc,log,log/tty} \
-         /data/ddospot/{bl,db,log} \
-         /data/dicompot/{images,log} \
-         /data/dionaea/{log,bistreams,binaries,rtp,roots,roots/ftp,roots/tftp,roots/www,roots/upnp} \
-         /data/elasticpot/log \
-         /data/elk/{data,log} \
-         /data/endlessh/log \
-         /data/fatt/log \
-         /data/honeytrap/{log,attacks,downloads} \
-         /data/glutton/log \
-         /data/hellpot/log \
-         /data/heralding/log \
-         /data/honeypots/log \
-         /data/honeypy/log \
-         /data/honeysap/log \
-         /data/ipphoney/log \
-         /data/log4pot/{log,payloads} \
-         /data/mailoney/log \
-         /data/medpot/log \
-         /data/nginx/{log,heimdall} \
-         /data/emobility/log \
-         /data/ews/conf \
-         /data/rdpy/log \
-         /data/redishoneypot/log \
-         /data/spiderfoot \
-         /data/suricata/log \
-         /data/tanner/{log,files} \
-         /data/p0f/log \
-         /home/tsec/.ssh/
-touch /data/spiderfoot/spiderfoot.db
+          /data/ciscoasa/log \
+          /data/conpot/log \
+          /data/citrixhoneypot/logs \
+          /data/cowrie/{downloads,keys,misc,log,log/tty} \
+          /data/ddospot/{bl,db,log} \
+          /data/dicompot/{images,log} \
+          /data/dionaea/{log,bistreams,binaries,rtp,roots,roots/ftp,roots/tftp,roots/www,roots/upnp} \
+          /data/elasticpot/log \
+          /data/elk/{data,log} \
+          /data/endlessh/log \
+          /data/ews/conf \
+          /data/fatt/log \
+          /data/glutton/log \
+          /data/hellpot/log \
+          /data/heralding/log \
+          /data/honeypots/log \
+          /data/honeysap/log \
+          /data/honeytrap/{log,attacks,downloads} \
+          /data/ipphoney/log \
+          /data/log4pot/{log,payloads} \
+          /data/mailoney/log \
+          /data/medpot/log \
+          /data/nginx/{log,heimdall} \
+          /data/p0f/log \
+          /data/redishoneypot/log \
+          /data/sentrypeer/log \
+          /data/spiderfoot \
+          /data/suricata/log \
+          /data/tanner/{log,files} \
+          /home/tsec/.ssh/
 touch /data/nginx/log/error.log
 
 # Let's copy some files
@@ -883,14 +878,14 @@ tee -a /root/.bashrc <<EOF
 $mySHELLCHECK
 $myROOTPROMPT
 $myROOTCOLORS
-PATH="$PATH:/opt/tpot/bin"
+PATH="\$PATH:/opt/tpot/bin"
 EOF
 for i in $(ls -d /home/*/)
   do
 tee -a $i.bashrc <<EOF
 $mySHELLCHECK
 $myUSERPROMPT
-PATH="$PATH:/opt/tpot/bin"
+PATH="\$PATH:/opt/tpot/bin"
 EOF
 done
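
The added backslashes are the actual fix here: an unquoted heredoc expands variables immediately, so the previous `$PATH` froze the installer's PATH into every .bashrc instead of extending the user's PATH at login. A minimal demonstration:

```bash
# Minimal demo of the heredoc escaping fix (sketch, writes to /tmp only).
tee /tmp/demo_bashrc <<EOF
without_escape="$PATH"
with_escape="\$PATH"
EOF
# The first line contains the already-expanded PATH; the second keeps a
# literal $PATH that expands freshly at every login.
cat /tmp/demo_bashrc
```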
 
diff --git a/iso/installer/rc.local.install b/iso/installer/rc.local.install
index 5f0962f1..a7a62b50 100755
--- a/iso/installer/rc.local.install
+++ b/iso/installer/rc.local.install
@@ -1,3 +1,4 @@
 #!/bin/bash
-#plymouth --quit
+# Ensure the client will receive a DHCP lease
+dhclient
 openvt -f -w -s /root/installer/wrapper.sh
diff --git a/iso/installer/tpot.conf.dist b/iso/installer/tpot.conf.dist
index 2ca11736..ef4a304f 100644
--- a/iso/installer/tpot.conf.dist
+++ b/iso/installer/tpot.conf.dist
@@ -1,5 +1,5 @@
 # tpot configuration file
-# myCONF_TPOT_FLAVOR=[STANDARD, SENSOR, INDUSTRIAL, COLLECTOR, NEXTGEN, MEDICAL]
+# myCONF_TPOT_FLAVOR=[STANDARD, HIVE, HIVE_SENSOR, INDUSTRIAL, LOG4J, MEDICAL, MINI, SENSOR]
 myCONF_TPOT_FLAVOR='STANDARD'
 myCONF_WEB_USER='webuser'
 myCONF_WEB_PW='w3b$ecret'
diff --git a/iso/isolinux/txt.cfg b/iso/isolinux/txt.cfg
index 5ece0abb..f1e5853d 100755
--- a/iso/isolinux/txt.cfg
+++ b/iso/isolinux/txt.cfg
@@ -1,6 +1,6 @@
 default install
 label install
-  menu label ^T-Pot 20.06.2 (based on Debian Stable)
+  menu label ^T-Pot 22.04.0 (AMD64)
   menu default
   kernel linux
   append vga=788 initrd=initrd.gz console-setup/ask_detect=true --
diff --git a/iso/preseed/tpot.seed b/iso/preseed/tpot_amd64.seed
similarity index 99%
rename from iso/preseed/tpot.seed
rename to iso/preseed/tpot_amd64.seed
index 6b88721d..c4437e77 100755
--- a/iso/preseed/tpot.seed
+++ b/iso/preseed/tpot_amd64.seed
@@ -131,6 +131,8 @@ in-target apt-get -y install grub-pc; \
 in-target grub-install --force $(debconf-get partman-auto/disk); \
 update-dev; \
 in-target update-grub; \
+cp /opt/installer -R /target/root; \
+### DEV
 in-target git clone --depth=1 https://github.com/telekom-security/tpotce /opt/tpot; \
 in-target sed -i 's/allow-hotplug/auto/g' /etc/network/interfaces; \
 #in-target apt-get -y remove exim4-base; \
diff --git a/iso/preseed/tpot_arm64.seed b/iso/preseed/tpot_arm64.seed
new file mode 100755
index 00000000..e286f978
--- /dev/null
+++ b/iso/preseed/tpot_arm64.seed
@@ -0,0 +1,107 @@
+##############################################
+### T-Pot Preseed Configuration File by mo ###
+##############################################
+
+####################
+### Locale Selection
+####################
+#d-i debian-installer/country string DE
+d-i debian-installer/language string en
+d-i debian-installer/locale string en_US.UTF-8
+d-i localechooser/preferred-locale string en_US.UTF-8
+
+######################
+### Keyboard Selection
+######################
+d-i console-setup/ask_detect boolean true
+#d-i keyboard-configuration/layoutcode string de
+d-i console-setup/detected note
+
+#############################
+### Unmount Active Partitions
+#############################
+#d-i preseed/early_command string umount /media || :
+
+#########################
+### Network Configuration
+#########################
+d-i netcfg/choose_interface select auto
+d-i netcfg/dhcp_timeout string 60
+d-i netcfg/get_hostname string t-pot
+d-i netcfg/get_domain string
+
+######################
+### User Configuration
+######################
+d-i passwd/root-login boolean false
+d-i passwd/make-user boolean true
+d-i passwd/user-fullname string tsec
+d-i passwd/username string tsec
+d-i passwd/user-password-crypted password $1$jAw1TW8v$a2WFamxQJfpPYZmn4qJT71
+d-i user-setup/encrypt-home boolean false
+
+########################################
+### Country Mirror & Proxy Configuration
+########################################
+#d-i mirror/country string manual
+#d-i mirror/http/hostname string deb.debian.org
+#d-i mirror/http/directory string /debian
+#d-i mirror/http/proxy string
+
+###################
+# Suite to install
+###################
+#d-i mirror/suite string unstable
+#d-i mirror/suite string testing
+#d-i mirror/udeb/suite string testing
+
+######################
+### Time Configuration
+######################
+#d-i time/zone string Europe/Berlin
+d-i clock-setup/utc boolean true
+d-i time/zone string UTC
+d-i clock-setup/ntp boolean true
+d-i clock-setup/ntp-server string debian.pool.ntp.org
+
+##################
+### Package Groups
+##################
+tasksel tasksel/first multiselect ssh-server
+
+########################
+### Package Installation
+########################
+d-i pkgsel/include string apache2-utils cracklib-runtime curl dialog figlet git grc libcrack2 libpq-dev lsb-release net-tools software-properties-common toilet
+popularity-contest popularity-contest/participate boolean false
+
+#################
+### Update Policy
+#################
+d-i pkgsel/update-policy select unattended-upgrades
+
+###############
+### Boot Splash
+###############
+d-i debian-installer/quiet boolean false
+d-i debian-installer/splash boolean false
+
+#########################################
+### Post install (Grub & T-Pot Installer)
+#########################################
+d-i preseed/late_command string \
+cp /opt/installer -R /target/root; \
+### DEV
+in-target git clone --depth=1 https://github.com/telekom-security/tpotce /opt/tpot; \
+in-target sed -i 's/allow-hotplug/auto/g' /etc/network/interfaces; \
+#in-target apt-get -y remove exim4-base; \
+#in-target apt-get -y autoremove; \
+cp /target/opt/tpot/iso/installer/rc.local.install /target/etc/rc.local; \
+cp /target/opt/tpot/iso/installer -R /target/root/;
+
+##########
+### Reboot
+##########
+d-i nobootloader/confirmation_common note
+d-i finish-install/reboot_in_progress note
+d-i cdrom-detect/eject boolean true
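
Preseed mistakes only surface deep into an unattended install, so it can help to lint the new ARM64 seed before building an image; `debconf-set-selections -c` validates without changing anything (a sketch, run from the repository root):

```bash
# Sketch: syntax-check the new preseed before an ARM64 ISO build.
# -c / --checkonly validates without touching the debconf database.
debconf-set-selections -c iso/preseed/tpot_arm64.seed && echo "preseed OK"
```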
diff --git a/makeiso.sh b/makeiso.sh
index cdbf5e1d..28db4396 100755
--- a/makeiso.sh
+++ b/makeiso.sh
@@ -5,18 +5,13 @@ export TERM=linux
 
 # Let's define some global vars
 myBACKTITLE="T-Pot - ISO Creator"
-#myMINIISOLINK="http://ftp.debian.org/debian/dists/testing/main/installer-amd64/current/images/netboot/mini.iso"
-#myMINIISOLINK="https://d-i.debian.org/daily-images/amd64/daily/netboot/mini.iso"
-# For stability reasons Debian Sid installation is built on a stable installer
-myMINIISOLINK="http://ftp.debian.org/debian/dists/buster/main/installer-amd64/current/images/netboot/mini.iso"
-myMINIISO="mini.iso"
-myTPOTISO="tpot.iso"
+### DEV
 myTPOTDIR="tpotiso"
 myTPOTSEED="iso/preseed/tpot.seed"
-myPACKAGES="dialog genisoimage syslinux syslinux-utils pv rsync udisks2 xorriso"
+myPACKAGES="dialog genisoimage pv rsync syslinux syslinux-utils udisks2 wget xorriso"
 myPFXFILE="iso/installer/keys/8021x.pfx"
 myINSTALLERPATH="iso/installer/install.sh"
-myNTPCONFFILE="iso/installer/ntp.conf"
+myNTPCONFFILE="iso/installer/timesyncd.conf"
 myTMP="tmp"
 myCONF_FILE="iso/installer/iso.conf"
 myCONF_DEFAULT_FILE="iso/installer/iso.conf.dist"
@@ -77,13 +72,15 @@ function valid_ip()
     return $stat
 }
 
-# Let's ask if the user wants to run the script ...
-dialog --backtitle "$myBACKTITLE" --title "[ Continue? ]" --yesno "\nDownload latest supported Debian Mini ISO and build the T-Pot Install Image." 8 50
-mySTART=$?
-if [ "$mySTART" = "1" ];
+# Let's ask for the architecture and set VARs accordingly...
+myARCH=$(dialog --backtitle "$myBACKTITLE" --title "[ Architecture ]" --menu "Please choose." 9 60 2 "amd64" "For x64 AMD / Intel CPUs" "arm64" "For Apple Silicon, 64 Bit ARM based CPUs" 3>&1 1>&2 2>&3 3>&-)
+if [ "$myARCH" == "" ];
   then
     exit
 fi
+myMINIISOLINK="http://ftp.debian.org/debian/dists/bullseye/main/installer-$myARCH/current/images/netboot/mini.iso"
+myMINIISO="mini_$myARCH.iso"
+myTPOTISO="tpot_$myARCH.iso"
 
 # Let's load the default config file
 if [ -f $myCONF_DEFAULT_FILE ];
@@ -165,19 +162,25 @@ do
           if valid_ip $myCONF_NTP_IP; then myIPRESULT="true"; fi
       done
 tee $myNTPCONFFILE <<EOF
-driftfile /var/lib/ntp/ntp.drift
+#  This file is part of systemd.
+#
+#  systemd is free software; you can redistribute it and/or modify it
+#  under the terms of the GNU Lesser General Public License as published by
+#  the Free Software Foundation; either version 2.1 of the License, or
+#  (at your option) any later version.
+#
+# Entries in this file show the compile time defaults.
+# You can change settings by editing this file.
+# Defaults can be restored by simply deleting this file.
+#
+# See timesyncd.conf(5) for details.
 
-statistics loopstats peerstats clockstats
-filegen loopstats file loopstats type day enable
-filegen peerstats file peerstats type day enable
-filegen clockstats file clockstats type day enable
-
-server $myCONF_NTP_IP
-
-restrict -4 default kod notrap nomodify nopeer noquery
-restrict -6 default kod notrap nomodify nopeer noquery
-restrict 127.0.0.1
-restrict ::1
+[Time]
+NTP=$myCONF_NTP_IP
+#FallbackNTP=0.debian.pool.ntp.org 1.debian.pool.ntp.org 2.debian.pool.ntp.org 3.debian.pool.ntp.org
+#RootDistanceMaxSec=5
+#PollIntervalMinSec=32
+#PollIntervalMaxSec=2048
 EOF
 
       break
@@ -201,25 +204,24 @@ if [ "$myCONF_PROXY_USE" == "0" ] || [ "$myCONF_PFX_USE" == "0" ] || [ "$myCONF_
     echo "myCONF_PFX_HOST_ID=\"$myCONF_PFX_HOST_ID\"" >> $myCONF_FILE
     echo "myCONF_NTP_USE=\"$myCONF_NTP_USE\"" >> $myCONF_FILE
     echo "myCONF_NTP_IP=\"$myCONF_NTP_IP\"" >> $myCONF_FILE
-    echo "myCONF_NTP_CONF_FILE=\"/root/installer/ntp.conf\"" >> $myCONF_FILE
+    echo "myCONF_NTP_CONF_FILE=\"/root/installer/timesyncd.conf\"" >> $myCONF_FILE
 fi
 
 # Let's download Debian Minimal ISO
 if [ ! -f $myMINIISO ]
   then
-    wget $myMINIISOLINK --progress=dot 2>&1 | awk '{print $7+0} fflush()' | dialog --backtitle "$myBACKTITLE" --title "[ Downloading Debian ... ]" --gauge "" 5 70;
-    echo 100 | dialog --backtitle "$myBACKTITLE" --title "[ Downloading Debian ... Done! ]" --gauge "" 5 70;
+    wget $myMINIISOLINK --progress=dot 2>&1 | awk '{print $7+0} fflush()' | dialog --backtitle "$myBACKTITLE" --title "[ Downloading Debian for $myARCH ]" --gauge "" 5 70;
+    echo 100 | dialog --backtitle "$myBACKTITLE" --title "[ Downloading Debian for $myARCH ... Done! ]" --gauge "" 5 70;
+    # Need to rename after download or the progress bar does not work.
+    mv mini.iso $myMINIISO
   else
     dialog --infobox "Using previously downloaded .iso ..." 3 50;
 fi
 
-# Let's loop mount it and copy all contents
-mkdir -p $myTMP $myTPOTDIR
-mount -o loop $myMINIISO $myTMP
-rsync -a $myTMP/ $myTPOTDIR
-umount $myTMP
+# Let's extract ISO contents (using / to extract all from ISO root)
+xorriso -osirrox on -indev $myMINIISO -extract / $myTPOTDIR
 
-# Let's modify initrd
+# Let's modify initrd and create a tmp for the initrd filesystem we need to modify
 gunzip $myTPOTDIR/initrd.gz
 mkdir $myTPOTDIR/tmp
 cd $myTPOTDIR/tmp
@@ -231,8 +233,15 @@ cd ..
 # Let's add the files for the automated install
 mkdir -p $myTPOTDIR/tmp/opt/
 cp iso/installer -R $myTPOTDIR/tmp/opt/
-cp iso/isolinux/* $myTPOTDIR/
-cp iso/preseed/tpot.seed $myTPOTDIR/tmp/preseed.cfg
+# Isolinux is only necessary for AMD64
+if [ "$myARCH" = "amd64" ];
+  then
+    cp iso/isolinux/* $myTPOTDIR/
+  else
+    sed -i "s#menuentry 'Install'#menuentry 'Install T-Pot 22.04.0 (ARM64)'#g" $myTPOTDIR/boot/grub/grub.cfg
+fi
+# For now we need architecture-based preseeds
+cp iso/preseed/tpot_$myARCH.seed $myTPOTDIR/tmp/preseed.cfg
 
 # Let's create the new initrd
 cd $myTPOTDIR/tmp
@@ -242,13 +251,33 @@ gzip initrd
 rm -rf tmp
 cd ..
 
-# Let's create the new .iso
+# Since ARM64 needs EFI we need different methods to build the ISO
 cd $myTPOTDIR
-xorrisofs -gui -D -r -V "T-Pot" -cache-inodes -J -l -b isolinux.bin -c boot.cat -no-emul-boot -boot-load-size 4 -boot-info-table -o ../$myTPOTISO ../$myTPOTDIR 2>&1 | awk '{print $1+0} fflush()' | cut -f1 -d"." | dialog --backtitle "$myBACKTITLE" --title "[ Building T-Pot .iso ... ]" --gauge "" 5 70 0
-echo 100 | dialog --backtitle "$myBACKTITLE" --title "[ Building T-Pot .iso ... Done! ]" --gauge "" 5 70
-cd ..
-isohybrid $myTPOTISO
-sha256sum $myTPOTISO > tpot.sha256
+if [ "$myARCH" == "amd64" ];
+  then
+    # Create AMD64 .iso
+    xorrisofs -gui -D -r -V "T-Pot $myARCH" \
+      -cache-inodes -J -l -b isolinux.bin \
+      -c boot.cat -no-emul-boot -boot-load-size 4 \
+      -boot-info-table \
+      -o ../"$myTPOTISO" ../"$myTPOTDIR" 2>&1 | awk '{print $1+0} fflush()' | cut -f1 -d"." | dialog --backtitle "$myBACKTITLE" --title "[ Building T-Pot $myARCH .iso ... ]" --gauge "" 5 70 0
+    echo 100 | dialog --backtitle "$myBACKTITLE" --title "[ Building T-Pot $myARCH .iso ... Done! ]" --gauge "" 5 70
+    cd ..
+    isohybrid $myTPOTISO
+  else
+    # Create ARM64 .iso
+    xorriso -as mkisofs -r -V "T-Pot $myARCH" \
+      -J -joliet-long -cache-inodes \
+      -e boot/grub/efi.img \
+      -no-emul-boot \
+      -append_partition 2 0xef boot/grub/efi.img \
+      -partition_cyl_align all \
+      -o ../"$myTPOTISO" \
+      ../"$myTPOTDIR"
+    echo 100 | dialog --backtitle "$myBACKTITLE" --title "[ Building T-Pot $myARCH .iso ... Done! ]" --gauge "" 5 70
+    cd ..
+fi
+sha256sum $myTPOTISO > "tpot_$myARCH.sha256"
 
 # Let's write the image
 while true;
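
Once a build finishes, the script drops a per-architecture checksum next to the image. Two quick sanity checks (a sketch; file names follow the tpot_$myARCH scheme above):

```bash
# Sketch: verify the finished image and peek inside it.
sha256sum -c tpot_amd64.sha256

# List the ISO root to confirm the repacked initrd.gz is in place.
xorriso -indev tpot_amd64.iso -ls /
```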
diff --git a/packages.txt b/packages.txt
new file mode 100644
index 00000000..0eed131f
--- /dev/null
+++ b/packages.txt
@@ -0,0 +1,62 @@
+aria2
+apache2-utils
+apparmor
+apt-transport-https
+bash-completion
+bat
+build-essential
+ca-certificates
+cgroupfs-mount
+cockpit
+conntrack
+console-setup
+console-setup-linux
+cracklib-runtime
+curl
+debconf-utils
+dialog
+dnsutils
+docker.io
+docker-compose
+ethtool
+fail2ban
+figlet
+fuse
+genisoimage
+git
+grc
+haveged
+html2text
+htop
+iptables
+iw
+jq
+kbd
+libcrack2
+libltdl7
+libpam-google-authenticator
+libpq-dev
+lsb-release
+man
+mosh
+multitail
+net-tools
+neovim
+npm
+openssh-server
+openssl
+pass
+pigz
+prips
+software-properties-common
+sshpass
+psmisc
+pv
+python3-pip
+systemd-timesyncd
+toilet
+unattended-upgrades
+unzip
+wget
+wireless-tools
+wpasupplicant
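
The point of this new file is a single package list shared by install and update; both consume it the same way (a sketch mirroring the calls in install.sh and update.sh):

```bash
# Sketch of how packages.txt is consumed by the scripts:
myINSTALLPACKAGES=$(cat /opt/tpot/packages.txt)
apt-fast -y install $myINSTALLPACKAGES
```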
diff --git a/update.sh b/update.sh
index 04d0885c..d14be593 100755
--- a/update.sh
+++ b/update.sh
@@ -3,6 +3,7 @@
 # Some global vars
 myCONFIGFILE="/opt/tpot/etc/tpot.yml"
 myCOMPOSEPATH="/opt/tpot/etc/compose"
+myLSB_RELEASE="bullseye"
 myRED=""
 myGREEN=""
 myWHITE=""
@@ -10,6 +11,7 @@ myBLUE=""
 
 # Check for existing tpot.yml
 function fuCONFIGCHECK () {
+  echo
   echo "### Checking for T-Pot configuration file ..."
   if ! [ -L $myCONFIGFILE ];
     then
@@ -34,6 +36,7 @@ echo
 # Let's test the internet connection
 function fuCHECKINET () {
 mySITES=$1
+  echo
   echo "### Now checking availability of ..."
   for i in $mySITES;
     do
@@ -55,6 +58,7 @@ echo
 
 # Update
 function fuSELFUPDATE () {
+  echo
   echo "### Now checking for newer files in repository ..."
   git fetch --all
   myREMOTESTAT=$(git status | grep -c "up-to-date")
@@ -63,6 +67,7 @@ function fuSELFUPDATE () {
       echo "###### $myBLUE""No updates found in repository.""$myWHITE"
       return
   fi
+  ### DEV
   myRESULT=$(git diff --name-only origin/master | grep update.sh)
   if [ "$myRESULT" == "update.sh" ];
     then
@@ -81,16 +86,58 @@ echo
 
 # Let's check for version
 function fuCHECK_VERSION () {
-local myMINVERSION="19.03.0"
-local myMASTERVERSION="20.06.2"
+local myMINVERSION="20.06.0"
+local myMASTERVERSION="22.04.0"
 echo
 echo "### Checking for Release ID"
-myRELEASE=$(lsb_release -i | grep Debian -c)
-if [ "$myRELEASE" == "0" ] 
+myRELEASE=$(lsb_release -c | awk '{ print $2 }')
+if [ "$myRELEASE" != "$myLSB_RELEASE" ] 
   then
-    echo "###### This version of T-Pot cannot be upgraded automatically. Please run a fresh install.$myWHITE"" [ $myRED""NOT OK""$myWHITE ]"
+    echo "###### Need to upgrade to Debian 11 (Bullseye) first:$myWHITE"" [ $myRED""NOT OK""$myWHITE ]"
+    echo "###### Upgrade may result in complete data loss and should not be run via SSH."
+    echo "###### If you installed T-Pot using the post-install method instead of the ISO it is recommended you upgrade manually to Debian 11 (Bullseye) and then re-run update.sh."
+    echo "###### Do you want to upgrade to Debian 11 (Bullseye) now?"
+    while [ "$myQST" != "y" ] && [ "$myQST" != "n" ];
+      do
+        read -p "Upgrade? (y/n) " myQST
+      done
+    if [ "$myQST" = "n" ];
+      then
+	echo
+        echo $myGREEN"Aborting!"$myWHITE
+	echo
+        exit
+      else
+	echo "###### Stopping and disabling T-Pot services ... "
+	echo
+	systemctl stop tpot
+	systemctl disable tpot
+	systemctl stop docker
+	systemctl start docker
+        docker stop $(docker ps -aq)
+        docker rm -v $(docker ps -aq)
+	echo "###### Switching /etc/apt/sources.list from buster to bullseye ... "
+	echo
+	sed -i 's/buster/bullseye/g' /etc/apt/sources.list
+	echo "###### Updating repositories ... "
+	echo
+	apt-fast update
+        export DEBIAN_FRONTEND=noninteractive
+	echo "###### Running full upgrade ... "
+	echo
+        echo "docker.io docker.io/restart       boolean true" | debconf-set-selections -v
+        echo "ssh ssh/restart		       	boolean true" | debconf-set-selections -v
+        echo "cron cron/restart			boolean true" | debconf-set-selections -v
+        echo "debconf debconf/frontend select noninteractive" | debconf-set-selections -v
+	apt-fast full-upgrade -y -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" --force-yes
+        dpkg --configure -a
+        echo "###### $myBLUE""Finished with upgrading. Now restarting update.sh and to continue with T-Pot related updates.""$myWHITE"
+	exec ./update.sh -y
+	exit 1
+    fi
     exit
 fi
+echo
 echo "### Checking for version tag ..."
 if [ -f "version" ];
   then
@@ -112,6 +159,7 @@ echo
 
 # Stop T-Pot to avoid race conditions with running containers with regard to the current T-Pot config
 function fuSTOP_TPOT () {
+echo
 echo "### Need to stop T-Pot ..."
 echo -n "###### $myBLUE Now stopping T-Pot.$myWHITE "
 systemctl stop tpot
@@ -124,6 +172,8 @@ if [ $? -ne 0 ];
     exit 1
   else
     echo "[ $myGREEN"OK"$myWHITE ]"
+    echo "###### $myBLUE Now disabling T-Pot service.$myWHITE "
+    systemctl disable tpot
     echo "###### $myBLUE Now cleaning up containers.$myWHITE "
     if [ "$(docker ps -aq)" != "" ];
       then
@@ -138,6 +188,7 @@ echo
 function fuBACKUP () {
 local myARCHIVE="/root/$(date +%Y%m%d%H%M)_tpot_backup.tgz"
 local myPATH=$PWD
+echo
 echo "### Create a backup, just in case ... "
 echo -n "###### $myBLUE Building archive in $myARCHIVE $myWHITE"
 cd /opt/tpot
@@ -163,6 +214,7 @@ local myOLDTAG=$1
 local myOLDIMAGES=$(docker images | grep -c "$myOLDTAG")
 if [ "$myOLDIMAGES" -gt "0" ];
   then
+    echo
     echo "### Removing old docker images."
     docker rmi $(docker images | grep "$myOLDTAG" | awk '{print $3}')
 fi
@@ -181,13 +233,16 @@ echo
 
 function fuUPDATER () {
 export DEBIAN_FRONTEND=noninteractive
+echo
 echo "### Installing apt-fast"
 /bin/bash -c "$(curl -sL https://raw.githubusercontent.com/ilikenwf/apt-fast/master/quick-install.sh)"
-local myPACKAGES="aria2 apache2-utils apparmor apt-transport-https aufs-tools bash-completion build-essential ca-certificates cgroupfs-mount cockpit cockpit-docker console-setup console-setup-linux cracklib-runtime curl debconf-utils dialog dnsutils docker.io docker-compose ethtool fail2ban figlet genisoimage git glances grc haveged html2text htop iptables iw jq kbd libcrack2 libltdl7 libpam-google-authenticator man mosh multitail net-tools npm ntp openssh-server openssl pass pigz prips software-properties-common sshpass syslinux psmisc pv python3-elasticsearch-curator python3-pip toilet unattended-upgrades unzip vim wget wireless-tools wpasupplicant"
-# Remove purge in the future
-echo "### Removing repository based install of elasticsearch-curator"
-apt-get purge elasticsearch-curator -y
+local myPACKAGES=$(cat /opt/tpot/packages.txt)
+echo
+echo "### Removing and holding back problematic packages ..."
+apt-fast -y --allow-change-held-packages purge cockpit-pcp elasticsearch-curator exim4-base mailutils ntp pcp
+apt-mark hold exim4-base mailutils ntp pcp cockpit-pcp
 hash -r
+echo
 echo "### Now upgrading packages ..."
 dpkg --configure -a
 apt-fast -y autoclean
@@ -202,21 +257,15 @@ apt-fast -y dist-upgrade -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::
 dpkg --configure -a
 npm cache clean --force
 npm install elasticdump -g
-pip3 install --upgrade yq
-# Remove --force switch in the future ...
-pip3 install elasticsearch-curator --force
+pip3 install --upgrade glances yq
 hash -r
-echo "### Removing and holding back problematic packages ..."
-apt-fast -y purge exim4-base mailutils pcp cockpit-pcp elasticsearch-curator
-apt-mark hold exim4-base mailutils pcp cockpit-pcp elasticsearch-curator
 echo
-
 echo "### Now replacing T-Pot related config files on host"
 cp host/etc/systemd/* /etc/systemd/system/
 systemctl daemon-reload
-echo
 
 # Ensure some defaults
+echo
 echo "### Ensure some T-Pot defaults with regard to some folders, permissions and configs."
 sed -i '/^port/I,$d' /etc/ssh/sshd_config
 tee -a /etc/ssh/sshd_config << EOF
@@ -225,61 +274,39 @@ Match Group tpotlogs
         PermitOpen 127.0.0.1:64305
         ForceCommand /usr/bin/false
 EOF
-echo
 
 ### Ensure creation of T-Pot related folders, just in case
 mkdir -vp /data/adbhoney/{downloads,log} \
-         /data/ciscoasa/log \
-         /data/conpot/log \
-         /data/citrixhoneypot/logs \
-         /data/cowrie/{downloads,keys,misc,log,log/tty} \
-         /data/ddospot/{bl,db,log} \
-         /data/dicompot/{images,log} \
-         /data/dionaea/{log,bistreams,binaries,rtp,roots,roots/ftp,roots/tftp,roots/www,roots/upnp} \
-         /data/elasticpot/log \
-         /data/elk/{data,log} \
-         /data/endlessh/log \
-         /data/fatt/log \
-         /data/honeytrap/{log,attacks,downloads} \
-         /data/glutton/log \
-         /data/hellpot/log \
-         /data/heralding/log \
-         /data/honeypots/log \
-         /data/honeypy/log \
-         /data/honeysap/log \
-         /data/ipphoney/log \
-         /data/log4pot/{log,payloads} \
-         /data/log4pot/log \
-         /data/mailoney/log \
-         /data/medpot/log \
-         /data/nginx/{log,heimdall} \
-         /data/emobility/log \
-         /data/ews/conf \
-         /data/rdpy/log \
-         /data/redishoneypot/log \
-         /data/spiderfoot \
-         /data/suricata/log \
-         /data/tanner/{log,files} \
-         /data/p0f/log \
-         /home/tsec/.ssh/
-
-### For some honeypots to work we need to ensure ntp.service is not listening
-echo "### Ensure ntp.service is not listening to avoid potential port conflict with ddospot."
-myNTP_IF_DISABLE="interface ignore wildcard
-interface ignore 127.0.0.1
-interface ignore ::1"
-
-if [ "$(cat /etc/ntp.conf | grep "interface ignore wildcard" | wc -l)" != "1" ];
-  then
-    echo "### Found active ntp listeners and updating config."
-    echo "$myNTP_IF_DISABLE" | tee -a /etc/ntp.conf
-    echo "### Restarting ntp.service for changes to take effect."
-    systemctl stop ntp.service
-    systemctl start ntp.service
-  else
-    echo "### Found no active ntp listeners."
-fi
-
+          /data/ciscoasa/log \
+          /data/conpot/log \
+          /data/citrixhoneypot/logs \
+          /data/cowrie/{downloads,keys,misc,log,log/tty} \
+          /data/ddospot/{bl,db,log} \
+          /data/dicompot/{images,log} \
+          /data/dionaea/{log,bistreams,binaries,rtp,roots,roots/ftp,roots/tftp,roots/www,roots/upnp} \
+          /data/elasticpot/log \
+          /data/elk/{data,log} \
+          /data/endlessh/log \
+          /data/ews/conf \
+          /data/fatt/log \
+          /data/glutton/log \
+          /data/hellpot/log \
+          /data/heralding/log \
+          /data/honeypots/log \
+          /data/honeysap/log \
+          /data/honeytrap/{log,attacks,downloads} \
+          /data/ipphoney/log \
+          /data/log4pot/{log,payloads} \
+          /data/mailoney/log \
+          /data/medpot/log \
+          /data/nginx/{log,heimdall} \
+          /data/p0f/log \
+          /data/redishoneypot/log \
+          /data/sentrypeer/log \
+          /data/spiderfoot \
+          /data/suricata/log \
+          /data/tanner/{log,files} \
+          /home/tsec/.ssh/
 
 ### Let's take care of some files and permissions
 chmod 770 -R /data
@@ -287,11 +314,19 @@ chown tpot:tpot -R /data
 chmod 644 -R /data/nginx/conf
 chmod 644 -R /data/nginx/cert
 
-echo "### Now pulling latest docker images"
+echo
+echo "### Now pulling latest docker images ..."
 echo "######$myBLUE This might take a while, please be patient!$myWHITE"
 fuPULLIMAGES 2>&1>/dev/null
 
-#fuREMOVEOLDIMAGES "1903"
+fuREMOVEOLDIMAGES "2006"
+
+echo
+echo "### Copying T-Pot service to systemd."
+cp /opt/tpot/host/etc/systemd/tpot.service /etc/systemd/system/
+systemctl enable tpot
+
+echo
 echo "### If you made changes to tpot.yml please ensure to add them again."
 echo "### We stored the previous version as backup in /root/."
 echo "### Some updates may need an import of the latest Kibana objects as well."
@@ -299,9 +334,7 @@ echo "### Download the latest objects here if they recently changed:"
 echo "### https://raw.githubusercontent.com/telekom-security/tpotce/master/etc/objects/kibana_export.ndjson.zip"
 echo "### Export and import the objects easily through the Kibana WebUI:"
 echo "### Go to Kibana > Management > Saved Objects > Export / Import"
-echo "### Or use the command:"
-echo "### import_kibana-objects.sh /opt/tpot/etc/objects/kibana-objects.tgz"
-echo "### All objects will be overwritten upon import, make sure to run an export first if you made changes."
+echo
 }
 
 function fuRESTORE_EWSCFG () {
@@ -329,12 +362,15 @@ fi
 myWHOAMI=$(whoami)
 if [ "$myWHOAMI" != "root" ]
   then
+    echo
     echo "Need to run as root ..."
+    echo
     exit
 fi
 
 # Only run with command switch
 if [ "$1" != "-y" ]; then
+  echo
   echo "This script will update / upgrade all T-Pot related scripts, tools and packages to the latest versions."
   echo "A backup of /opt/tpot will be written to /root. If you are unsure, you should save your work."
   echo "This is a beta feature and only recommended for experienced users."
@@ -354,5 +390,5 @@ fuRESTORE_EWSCFG
 fuRESTORE_HPFEEDS
 
 echo
-echo "### Done."
+echo "### Done. Please reboot."
 echo
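
After the reboot the script now asks for, both release markers should agree (a sketch):

```bash
# Sketch: confirm the update landed; the version file and the image
# tags should both report the 22.04 release (:2204 tags).
cat /opt/tpot/version
docker images --format '{{.Repository}}:{{.Tag}}' 'dtagdevsec/*' | sort -u
```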
diff --git a/version b/version
index 7f89e01a..c207f759 100644
--- a/version
+++ b/version
@@ -1 +1 @@
-20.06.2
+22.04.0