continue with documentation

fix tpotinit entrypoint.sh to resolve a conflict with sensor deployment where the data folder is not yet owned by the tpot user
Marco Ochse 2024-03-22 20:47:39 +01:00
parent 4585d750e1
commit cf5df3b60b
4 changed files with 70 additions and 43 deletions


@@ -10,7 +10,7 @@ T-Pot is the all in one, optionally distributed, multiarch (amd64, arm64) honeyp
 2. [Download](#choose-your-distro) or use a running, supported distribution.
 3. Install the ISO with as few packages / services as possible (`ssh` required)
 4. Install `curl`: `$ sudo [apt, dnf, zypper] install curl` if not installed already
-5. Run installer as non-root:
+5. Run the installer as non-root from `$HOME`:
 ```
 env bash -c "$(curl -sL https://github.com/telekom-security/tpotce/raw/alpha/install.sh)"
 ```
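For context, steps 4 and 5 combined into a single session (a sketch assuming a Debian-based host; substitute `dnf` or `zypper` as needed):
```
sudo apt install curl      # step 4: install curl if missing
cd "$HOME"                 # step 5: the installer now expects to run from $HOME
env bash -c "$(curl -sL https://github.com/telekom-security/tpotce/raw/alpha/install.sh)"
```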
@@ -38,12 +38,12 @@ env bash -c "$(curl -sL https://github.com/telekom-security/tpotce/raw/alpha/ins
 * [Get and install T-Pot](#get-and-install-t-pot)
 * [macOS & Windows](#macos--windows)
 * [Installation Types](#installation-types)
-* [**HIVE**](#hive)
+* [Standard / HIVE](#standard--hive)
 * [**Distributed**](#distributed)
 * [Uninstall T-Pot (Linux only!) (to do)](#uninstall-t-pot-linux-only-to-do)
 * [First Start](#first-start)
 * [Standalone First Start](#standalone-first-start)
-* [Distributed Deployment (to do)](#distributed-deployment-to-do)
+* [Distributed Deployment](#distributed-deployment)
 * [Community Data Submission](#community-data-submission)
 * [Opt-In HPFEEDS Data Submission](#opt-in-hpfeeds-data-submission)
 * [Remote Access and Tools](#remote-access-and-tools)
@@ -57,7 +57,6 @@ env bash -c "$(curl -sL https://github.com/telekom-security/tpotce/raw/alpha/ins
 * [Configuration](#configuration)
 * [T-Pot Config File](#t-pot-config-file)
 * [Customize T-Pot Honeypots and Services](#customize-t-pot-honeypots-and-services)
-* [Redeploy Hive Sensor (to do)](#redeploy-hive-sensor-to-do)
 * [Maintenance](#maintenance)
 * [General Updates](#general-updates)
 * [Update Script](#update-script)
@@ -343,8 +342,8 @@ To get things up and running just follow these steps:
 ## Installation Types
-### **HIVE**
+### Standard / HIVE
-With T-Pot HIVE all services, tools, honeypots, etc. will be installed on to a single host which also serves as a HIVE endpoint. Make sure to meet the [system requirements](#system-requirements). You can adjust `~/tpotce/docker-compose.yml` to your personal use-case or create your very own configuration using `~/tpotce/compose/customizer.py` for a tailored T-Pot experience to your needs.
+With T-Pot Standard / HIVE all services, tools, honeypots, etc. will be installed on a single host which also serves as a HIVE endpoint. Make sure to meet the [system requirements](#system-requirements). You can adjust `~/tpotce/docker-compose.yml` to your personal use case or create your very own configuration using `~/tpotce/compose/customizer.py` for a T-Pot experience tailored to your needs.
 Once the installation is finished you can proceed to [First Start](#first-start).
 <br><br>
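A sketch of the customization flow mentioned above; the backup step and the `python3` invocation are assumptions, only the paths come from the docs:
```
cp ~/tpotce/docker-compose.yml ~/tpotce/docker-compose.yml.bak   # keep the stock file around
cd ~/tpotce/compose
python3 customizer.py                                            # generate a tailored docker-compose.yml
```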
@@ -352,8 +351,7 @@ Once the installation is finished you can proceed to [First Start](#first-start)
 The distributed version of T-Pot requires at least two hosts
 - the T-Pot **HIVE**, the standard installation of T-Pot (install this first!),
 - and a T-Pot **SENSOR**, which will host only the honeypots and some tools, and transmit log data to the **HIVE**.
+- The **SENSOR** will not start before finalizing the **SENSOR** installation as described in [Distributed Deployment](#distributed-deployment).
-To finalize the **SENSOR** installation continue to [Distributed Deployment](#distributed-deployment).
 <br><br>
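In practice this means a freshly installed **SENSOR** runs no honeypot containers yet. A quick check, assuming Docker is already up on the SENSOR:
```
docker ps   # expected to show no T-Pot containers until deploy.sh has been run on the HIVE
```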
 ## Uninstall T-Pot (Linux only!) (to do)
@@ -381,16 +379,37 @@ You can also login from your browser and access the T-Pot WebUI and tools: `http
 There is not much to do except login and check via `dps.sh` that all services and honeypots are starting up correctly, then login to Kibana and / or the Geoip Attack Map to monitor the attacks.
 <br><br>
-## Distributed Deployment (to do)
+## Distributed Deployment
-With the distributed deployment firstly login to **HIVE** and the **SENSOR** and check via `dps` if all services and honeypots are starting up correctly. Once you have confirmed everything is working fine you need to deploy the **SENSOR** to the **HIVE** in order to transmit honeypot logs to the Elastic Stack.
+To continue with the distributed deployment, login to the **HIVE** and change into the `~/tpotce` folder.
 <br><br>
-For **deployment** simply keep the **HIVE** login data ready and follow these steps while the `deploy.sh` script will setup the **HIVE** and **SENSOR** for securely shipping and receiving logs:
+If you have not done so already, generate an SSH key to securely login to the **SENSOR** and to allow `Ansible` to run a playbook on the sensor:
+1. Run `ssh-keygen`, follow the instructions and leave the passphrase empty:
 ```
-deploy.sh
+Generating public/private rsa key pair.
+Enter file in which to save the key (/home/<your_user>/.ssh/id_rsa):
+Enter passphrase (empty for no passphrase):
+Enter same passphrase again:
+Your identification has been saved in /home/<your_user>/.ssh/id_rsa
+Your public key has been saved in /home/<your_user>/.ssh/id_rsa.pub
 ```
+2. Deploy the key to the SENSOR by running `ssh-copy-id -p 64295 <SENSOR_SSH_USER>@<SENSOR_IP>`:
+```
+/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/<your_user>/.ssh/id_rsa.pub"
+The authenticity of host '[<SENSOR_IP>]:64295 ([<SENSOR_IP>]:64295)' can't be established.
+ED25519 key fingerprint is SHA256:naIDxFiw/skPJadTcgmWZQtgt+CdfRbUCoZn5RmkOnQ.
+This key is not known by any other names.
+Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
+/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
+/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
+<your_user>@<SENSOR_IP>'s password:
-The script will ask for the **HIVE** login data, the **HIVE** IP address, will create SSH keys accordingly and deploy them securely over a SSH connection to the **HIVE**. On the **HIVE** machine a user with the **SENSOR** hostname is created, belonging to a user group `tpotlogs` which may only open a SSH tunnel via port `64295` and transmit Logstash logs to port `127.0.0.1:64305`, with no permission to login on a shell. You may review the config in `/etc/ssh/sshd_config` and the corresponding `autossh` settings in `docker/elk/logstash/dist/entrypoint.sh`. Settings and keys are stored in `/data/elk/logstash` and loaded as part of `/opt/tpot/etc/tpot.yml`.
+Number of key(s) added: 1
+Now try logging into the machine, with: "ssh -p '64295' '<your_user>@<SENSOR_IP>'"
+and check to make sure that only the key(s) you wanted were added.
+```
+3. As suggested, test the connection with `ssh -p '64295' '<your_user>@<SENSOR_IP>'`.
+4. Once the key is successfully deployed, run `./deploy.sh` and follow the instructions (see the recap sketch below).
 <br><br>
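Condensed, the four steps above boil down to the following sketch (placeholders as in the docs):
```
ssh-keygen                                           # 1: leave the passphrase empty
ssh-copy-id -p 64295 <SENSOR_SSH_USER>@<SENSOR_IP>   # 2: deploy the public key to the SENSOR
ssh -p 64295 <your_user>@<SENSOR_IP>                 # 3: test the key-based login
cd ~/tpotce && ./deploy.sh                           # 4: finalize the deployment from the HIVE
```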
 ## Community Data Submission
@@ -524,18 +543,6 @@ To create your customized docker compose file:
 8. Start T-Pot with `systemctl start tpot`.
 <br><br>
-## Redeploy Hive Sensor (to do)
-In case you need to re-deploy your Hive Sensor, i.e. the IP of your Hive has changed or you want to move the Hive Sensor to a new Hive, you simply follow these commands:
-```
-sudo su -
-systemctl stop tpot
-rm /data/elk/logstash/*
-deploy.sh
-reboot
-```
-<br><br>
 # Maintenance
 T-Pot is designed to be low maintenance. Since almost everything is provided through docker images there is basically nothing you have to do but let it run. We will upgrade the docker images regularly to reduce the risks of compromise; however, you should read this section closely.<br><br>
 Should an update fail, opening an issue or a discussion will help to improve things in the future, but the offered solution will ***always*** be to perform a ***fresh install*** as we simply ***cannot*** provide any support for lost data!


@@ -108,6 +108,10 @@ echo "# New SENSOR password: ${myLS_WEB_PW}"
 echo "# New htpasswd encoded credentials: ${myLS_WEB_USER_ENC}"
 echo "# New htpasswd credentials base64 encoded: ${myLS_WEB_USER_ENC_B64}"
 echo "# New SENSOR credentials base64 encoded: ${myTPOT_HIVE_USER}"
+echo
+echo "# When asked for a 'BECOME password', enter the password for your user on the SENSOR machine."
+echo "# The password will allow Ansible to run a reboot via sudo on the SENSOR."
+echo
 
 # Read LS_WEB_USER from file
 myENV_LS_WEB_USER=$(grep "^LS_WEB_USER=" "${myENV_FILE}" | sed 's/^LS_WEB_USER=//g' | tr -d "\"'")
@@ -124,7 +128,7 @@ fi
 export myTPOT_HIVE_USER
 export myTPOT_HIVE_IP
-ANSIBLE_LOG_PATH=${HOME}/tpotce/data/deploy_sensor.log ansible-playbook ${myANSIBLE_TPOT_PLAYBOOK} -i ${mySENSOR_IP}, -c ssh -u ${mySSHUSER} -e "ansible_port=${myANSIBLE_PORT}"
+ANSIBLE_LOG_PATH=${HOME}/tpotce/data/deploy_sensor.log ansible-playbook ${myANSIBLE_TPOT_PLAYBOOK} -i ${mySENSOR_IP}, -c ssh -u ${mySSHUSER} --ask-become-pass -e "ansible_port=${myANSIBLE_PORT}"
 if [ "$?" == 0 ];
   then
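For illustration, a sketch reusing the script's own variables. `--ask-become-pass` makes `ansible-playbook` prompt once for the SENSOR user's sudo password; Ansible then uses it for every task that sets `become: yes`, such as the sensor reboot added in this commit:
```
# Prompts "BECOME password:" before the play starts; without it the
# privileged reboot task would fail on sensors requiring a sudo password.
ansible-playbook "${myANSIBLE_TPOT_PLAYBOOK}" \
  -i "${mySENSOR_IP}," -c ssh -u "${mySSHUSER}" \
  --ask-become-pass \
  -e "ansible_port=${myANSIBLE_PORT}"
```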


@@ -138,6 +138,19 @@ create_web_users() {
 done
 }
 
+update_permissions() {
+  echo
+  echo "# Updating permissions ..."
+  echo
+  chown -R tpot:tpot /data
+  chmod -R 770 /data
+  chmod 774 -R /data/nginx/conf
+  chmod 774 -R /data/nginx/cert
+}
+
+# Update permissions
+update_permissions
+
 # Check for compatible OSType
 echo
 echo "# Checking if OSType is compatible."
@@ -274,13 +287,7 @@
 /opt/tpot/bin/updateip.sh
 
 # Update permissions
-echo
-echo "# Updating permissions ..."
-echo
-chown -R tpot:tpot /data
-chmod -R 770 /data
-chmod 774 -R /data/nginx/conf
-chmod 774 -R /data/nginx/cert
+update_permissions
 
 # Update interface settings (p0f and Suricata) and setup iptables to support NFQ based honeypots (glutton, honeytrap)
 ### This is currently not supported on Docker for Desktop, only on Docker Engine for Linux


@@ -55,3 +55,12 @@
     regexp: '^LS_WEB_USER='
     line: 'LS_WEB_USER='
     create: yes
+
+- name: Reboot the sensor
+  become: yes
+  ansible.builtin.reboot:
+    reboot_timeout: 600
+    pre_reboot_delay: 0
+    post_reboot_delay: 0
+    msg: "Reboot initiated by Ansible for T-Pot sensor deployment."
+    test_command: "uptime"
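To verify the reboot handling without a full redeploy, the task can be approximated ad hoc from the HIVE. A sketch, assuming the SSH key from the deployment steps above is in place:
```
# Ad-hoc equivalent of the new task: reboot the SENSOR via become, wait up to
# 600 s for it to return, then confirm with the task's test_command (uptime).
ansible all -i <SENSOR_IP>, -c ssh -u <SENSOR_SSH_USER> \
  -e "ansible_port=64295" --become --ask-become-pass \
  -m ansible.builtin.reboot -a "reboot_timeout=600 test_command=uptime"
```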