prepare for T-Pot 16.03

marco 2015-12-08 15:47:39 +01:00
parent 0701b5f2f4
commit f06935fe63
72 changed files with 29029 additions and 459 deletions


@@ -1,35 +1,32 @@
-# T-Pot Community Edition Image Creator
+# T-Pot Image Creator
 This repository contains the necessary files to create the **[T-Pot community honeypot](http://dtag-dev-sec.github.io/)** ISO image.
 The image can then be used to install T-Pot on a physical or virtual machine.
 ### Image Creation
 **Requirements to create the ISO image:**
-- Ubuntu 14.04.2 or 14.10 as host system (others *may* work, but remain untested)
+- Ubuntu 14.04.3 or newer as host system (others *may* work, but remain untested)
-- 2GB of free memory
+- 4GB of free memory
-- 4GB of free storage
+- 32GB of free storage
 - A working internet connection
 **How to create the ISO image:**
 1. Clone the repository and enter it.
-   git clone https://github.com/dtag-dev-sec/tpotce.git
+   git clone git@github.com:dtag-dev-sec/tpotce.git
    cd tpotce
 2. Invoke the script that builds the ISO image.
 The script will download and install dependencies necessary to build the image on the invoking machine. It will further download the ubuntu base image (~600MB) which T-Pot is based on.
    sudo ./makeiso.sh
-After a successful build, you will find the ISO image `tpotce.iso` in your directory.
+After a successful build, you will find the ISO image `tpot.iso` in your directory.
-###Prebuilt ISO Image
-If you don't want to build the image yourself, you can download the prebuilt [tpotce.iso](http://community-honeypot.de/tpotce.iso) ISO image from the project's web page.
 ###Installation
 When installing the T-Pot ISO image, make sure the target system (physical/virtual) meets the following minimum requirements:
-- 2 GB RAM (4 GB recommended)
+- 4 GB RAM
 - 40 GB disk (64 GB SSD recommended)
 - Network via DHCP
 - A working internet connection
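The README change above also switches the clone command from HTTPS to SSH transport. As an aside (not part of the commit), both URL forms address the same repository path; SSH requires a registered GitHub key, while HTTPS clones work anonymously:

```shell
# Aside (not part of the commit): old HTTPS vs. new SSH clone URL.
https_url="https://github.com/dtag-dev-sec/tpotce.git"
ssh_url="git@github.com:dtag-dev-sec/tpotce.git"
# Strip each transport prefix to show the shared repository path:
echo "${https_url#https://github.com/}"   # dtag-dev-sec/tpotce.git
echo "${ssh_url#git@github.com:}"         # dtag-dev-sec/tpotce.git
```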

getimages.sh Executable file (44 lines)

@@ -0,0 +1,44 @@
#!/bin/bash
########################################################
# T-Pot                                                #
# Export docker images maker                           #
#                                                      #
# v0.01 by mo, DTAG, 2015-08-11                        #
########################################################
# This feature is experimental and requires at least docker 1.7!
# Using any docker version < 1.7 may result in an unusable installation.
# This script will download the docker images and export them to the folder "images".
# When building the .iso image the preloaded docker images will be exported to the .iso, which
# may be useful if you need to install more than one machine.

# Got root?
myWHOAMI=$(whoami)
if [ "$myWHOAMI" != "root" ]
  then
    echo "Please run as root ..."
    exit 1
fi

# Require an explicit argument so the download is never triggered by accident.
if [ -z "$1" ]
  then
    echo "Please view the script for more details!"
    exit 1
fi

if [ "$1" == "now" ]
  then
    for name in $(cat installer/data/full_images.conf)
    do
      docker pull dtagdevsec/$name:latest1603
    done
    mkdir -p images
    chmod 777 images
    for name in $(cat installer/data/full_images.conf)
    do
      echo "Now exporting dtagdevsec/$name:latest1603"
      docker save -o "images/$name:latest1603.img" dtagdevsec/$name:latest1603
    done
    chmod 777 images/*.img
fi
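The exported `.img` files are intended for reuse on further machines. A minimal companion sketch of the restore side (hypothetical, not part of the commit): `docker load` is the inverse of `docker save`. It is written as a dry run with made-up file names so it needs no docker daemon; drop the `echo` to import for real:

```shell
# Hypothetical restore-side sketch: print the docker load command for each
# exported image. Sample file names are made up for the demo.
demo=$(mktemp -d)
mkdir -p "$demo/images"
touch "$demo/images/suricata:latest1603.img" "$demo/images/glastopf:latest1603.img"
cd "$demo"
for img in images/*.img
do
  echo "docker load -i $img"   # remove the echo to actually import
done
```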


@@ -1,10 +1,10 @@
 #!/bin/bash
 ########################################################
-# T-Pot Community Edition                              #
+# T-Pot                                                #
 # Check container and services script                  #
 #                                                      #
-# v0.14 by mo, DTAG, 2015-08-07                        #
+# v0.02 by mo, DTAG, 2015-08-08                        #
 ########################################################
 if [ -a /var/run/check.lock ];
   then exit


@@ -1,15 +1,33 @@
 #!/bin/bash
 ########################################################
-# T-Pot Community Edition                              #
+# T-Pot                                                #
 # Container and services restart script                #
 #                                                      #
-# v0.14 by mo, DTAG, 2015-08-07                        #
+# v0.03 by mo, DTAG, 2015-11-02                        #
 ########################################################
-if [ -a /var/run/check.lock ];
-  then exit
-fi
+myCOUNT=1
+while true
+do
+  if ! [ -a /var/run/check.lock ];
+    then break
+  fi
+  sleep 0.1
+  if [ "$myCOUNT" = "1" ];
+    then
+      echo -n "Waiting for services "
+    else echo -n .
+  fi
+  if [ "$myCOUNT" = "6000" ];
+    then
+      echo
+      echo "Overriding check.lock"
+      rm /var/run/check.lock
+      break
+  fi
+  myCOUNT=$[$myCOUNT +1]
+done
 myIMAGES=$(cat /data/images.conf)
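The new wait loop polls for `check.lock` every 0.1 s and gives up after 6000 attempts, so the effective timeout is about ten minutes (counting sleep time only, ignoring loop overhead). A quick sanity check of that figure:

```shell
# 6000 polls x 100 ms sleep per poll = 600 s = 10 min (loop overhead ignored)
iterations=6000
interval_ms=100
total_s=$(( iterations * interval_ms / 1000 ))
echo "$total_s seconds (~$(( total_s / 60 )) minutes)"   # 600 seconds (~10 minutes)
```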


@@ -1,10 +1,10 @@
 #!/bin/bash
 ########################################################
-# T-Pot Community Edition                              #
+# T-Pot                                                #
 # Container and services status script                 #
 #                                                      #
-# v0.11 by mo, DTAG, 2015-06-12                        #
+# v0.04 by mo, DTAG, 2015-08-20                        #
 ########################################################
 myCOUNT=1
 myIMAGES=$(cat /data/images.conf)
@@ -29,12 +29,13 @@ do
 done
 echo
 echo
-echo "****************** $(date) ******************"
+echo "======| System |======"
+echo Date:" "$(date)
+echo Uptime:" "$(uptime)
+echo CPU temp: $(sensors | grep "Physical" | awk '{ print $4 }')
 echo
 for i in $myIMAGES
 do
+echo
 echo "======| Container:" $i "|======"
 docker exec $i supervisorctl status | GREP_COLORS='mt=01;32' egrep --color=always "(RUNNING)|$" | GREP_COLORS='mt=01;31' egrep --color=always "(STOPPED|FATAL)|$"
 echo


@@ -0,0 +1,311 @@
[2015-11-16 09:29:38,922][INFO ][node ] [Veil] initialized
[2015-11-16 09:29:38,923][INFO ][node ] [Veil] starting ...
[2015-11-16 09:29:39,081][INFO ][transport ] [Veil] publish_address {127.0.0.1:9300}, bound_addresses {127.0.0.1:9300}, {[::1]:9300}
[2015-11-16 09:29:39,096][INFO ][discovery ] [Veil] elasticsearch/uYwNByX2TxSVe55Pzdbb0g
[2015-11-16 09:29:42,201][INFO ][cluster.service ] [Veil] new_master {Veil}{uYwNByX2TxSVe55Pzdbb0g}{127.0.0.1}{127.0.0.1:9300}, reason: zen-disco-join(elected_as_master, [0] joins received)
[2015-11-16 09:29:42,294][INFO ][gateway ] [Veil] recovered [2] indices into cluster_state
[2015-11-16 09:29:42,311][INFO ][http ] [Veil] publish_address {127.0.0.1:9200}, bound_addresses {127.0.0.1:9200}, {[::1]:9200}
[2015-11-16 09:29:42,311][INFO ][node ] [Veil] started
[2015-11-16 09:30:24,102][INFO ][cluster.metadata ] [Veil] [logstash-2015.11.16] creating index, cause [auto(bulk api)], templates [logstash], shards [5]/[1], mappings [SuricataIDPS-logs, _default_]
[2015-11-16 09:30:24,229][INFO ][cluster.metadata ] [Veil] [logstash-2015.11.16] update_mapping [SuricataIDPS-logs]
[2015-11-16 09:30:24,813][INFO ][cluster.metadata ] [Veil] [logstash-2015.11.16] update_mapping [SuricataIDPS-logs]
[2015-11-16 09:30:31,124][INFO ][cluster.metadata ] [Veil] [logstash-2015.11.16] update_mapping [SuricataIDPS-logs]
[2015-11-16 09:53:30,514][INFO ][cluster.metadata ] [Veil] [logstash-2015.11.16] update_mapping [SuricataIDPS-logs]
[2015-11-16 10:03:55,575][INFO ][cluster.metadata ] [Veil] [logstash-2015.11.16] update_mapping [SuricataIDPS-logs]
[2015-11-16 10:03:59,745][INFO ][cluster.metadata ] [Veil] [logstash-2015.11.16] update_mapping [SuricataIDPS-logs]
[2015-11-16 10:03:59,762][INFO ][cluster.metadata ] [Veil] [logstash-2015.11.16] update_mapping [SuricataIDPS-logs]
[2015-11-16 10:04:03,891][INFO ][cluster.metadata ] [Veil] [logstash-2015.11.16] update_mapping [SuricataIDPS-logs]
[2015-11-16 10:10:48,444][INFO ][cluster.metadata ] [Veil] [logstash-2015.11.16] update_mapping [SuricataIDPS-logs]
[2015-11-16 10:29:23,286][INFO ][cluster.metadata ] [Veil] [logstash-2015.11.16] update_mapping [SuricataIDPS-logs]
[2015-11-16 10:29:23,307][INFO ][cluster.metadata ] [Veil] [logstash-2015.11.16] update_mapping [SuricataIDPS-logs]
[2015-11-16 11:21:34,996][INFO ][rest.suppressed ] /.kibana/visualization/Destination-Ports Params: {id=Destination-Ports, index=.kibana, op_type=create, type=visualization}
[.kibana][[.kibana][0]] DocumentAlreadyExistsException[[visualization][Destination-Ports]: document already exists]
at org.elasticsearch.index.engine.InternalEngine.innerCreateNoLock(InternalEngine.java:411)
at org.elasticsearch.index.engine.InternalEngine.innerCreate(InternalEngine.java:369)
at org.elasticsearch.index.engine.InternalEngine.create(InternalEngine.java:341)
at org.elasticsearch.index.shard.IndexShard.create(IndexShard.java:517)
at org.elasticsearch.index.engine.Engine$Create.execute(Engine.java:789)
at org.elasticsearch.action.support.replication.TransportReplicationAction.executeIndexRequestOnPrimary(TransportReplicationAction.java:1073)
at org.elasticsearch.action.index.TransportIndexAction.shardOperationOnPrimary(TransportIndexAction.java:170)
at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryPhase.performOnPrimary(TransportReplicationAction.java:579)
at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryPhase$1.doRun(TransportReplicationAction.java:452)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
[2015-11-16 11:22:20,042][INFO ][rest.suppressed ] /.kibana/visualization/Destination-Ports Params: {id=Destination-Ports, index=.kibana, op_type=create, type=visualization}
[.kibana][[.kibana][0]] DocumentAlreadyExistsException[[visualization][Destination-Ports]: document already exists]
at org.elasticsearch.index.engine.InternalEngine.innerCreateNoLock(InternalEngine.java:411)
at org.elasticsearch.index.engine.InternalEngine.innerCreate(InternalEngine.java:369)
at org.elasticsearch.index.engine.InternalEngine.create(InternalEngine.java:341)
at org.elasticsearch.index.shard.IndexShard.create(IndexShard.java:517)
at org.elasticsearch.index.engine.Engine$Create.execute(Engine.java:789)
at org.elasticsearch.action.support.replication.TransportReplicationAction.executeIndexRequestOnPrimary(TransportReplicationAction.java:1073)
at org.elasticsearch.action.index.TransportIndexAction.shardOperationOnPrimary(TransportIndexAction.java:170)
at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryPhase.performOnPrimary(TransportReplicationAction.java:579)
at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryPhase$1.doRun(TransportReplicationAction.java:452)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
[2015-11-16 11:26:11,386][INFO ][cluster.metadata ] [Veil] [logstash-2015.11.16] create_mapping [ews-logs]
[2015-11-16 11:30:22,723][INFO ][rest.suppressed ] /.kibana/index-pattern/[logstash-]YYYY.MM.DD Params: {id=[logstash-]YYYY.MM.DD, index=.kibana, op_type=create, type=index-pattern}
[.kibana][[.kibana][0]] DocumentAlreadyExistsException[[index-pattern][[logstash-]YYYY.MM.DD]: document already exists]
at org.elasticsearch.index.engine.InternalEngine.innerCreateNoLock(InternalEngine.java:411)
at org.elasticsearch.index.engine.InternalEngine.innerCreate(InternalEngine.java:369)
at org.elasticsearch.index.engine.InternalEngine.create(InternalEngine.java:341)
at org.elasticsearch.index.shard.IndexShard.create(IndexShard.java:517)
at org.elasticsearch.index.engine.Engine$Create.execute(Engine.java:789)
at org.elasticsearch.action.support.replication.TransportReplicationAction.executeIndexRequestOnPrimary(TransportReplicationAction.java:1073)
at org.elasticsearch.action.index.TransportIndexAction.shardOperationOnPrimary(TransportIndexAction.java:170)
at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryPhase.performOnPrimary(TransportReplicationAction.java:579)
at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryPhase$1.doRun(TransportReplicationAction.java:452)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
[2015-11-16 12:07:18,928][INFO ][rest.suppressed ] /.kibana/visualization/Destination-Ports Params: {id=Destination-Ports, index=.kibana, op_type=create, type=visualization}
[.kibana][[.kibana][0]] DocumentAlreadyExistsException[[visualization][Destination-Ports]: document already exists]
at org.elasticsearch.index.engine.InternalEngine.innerCreateNoLock(InternalEngine.java:411)
at org.elasticsearch.index.engine.InternalEngine.innerCreate(InternalEngine.java:369)
at org.elasticsearch.index.engine.InternalEngine.create(InternalEngine.java:341)
at org.elasticsearch.index.shard.IndexShard.create(IndexShard.java:517)
at org.elasticsearch.index.engine.Engine$Create.execute(Engine.java:789)
at org.elasticsearch.action.support.replication.TransportReplicationAction.executeIndexRequestOnPrimary(TransportReplicationAction.java:1073)
at org.elasticsearch.action.index.TransportIndexAction.shardOperationOnPrimary(TransportIndexAction.java:170)
at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryPhase.performOnPrimary(TransportReplicationAction.java:579)
at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryPhase$1.doRun(TransportReplicationAction.java:452)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
[2015-11-16 12:18:27,537][INFO ][rest.suppressed ] /.kibana/visualization/SSH-Software-Version Params: {id=SSH-Software-Version, index=.kibana, op_type=create, type=visualization}
[.kibana][[.kibana][0]] DocumentAlreadyExistsException[[visualization][SSH-Software-Version]: document already exists]
at org.elasticsearch.index.engine.InternalEngine.innerCreateNoLock(InternalEngine.java:411)
at org.elasticsearch.index.engine.InternalEngine.innerCreate(InternalEngine.java:369)
at org.elasticsearch.index.engine.InternalEngine.create(InternalEngine.java:341)
at org.elasticsearch.index.shard.IndexShard.create(IndexShard.java:517)
at org.elasticsearch.index.engine.Engine$Create.execute(Engine.java:789)
at org.elasticsearch.action.support.replication.TransportReplicationAction.executeIndexRequestOnPrimary(TransportReplicationAction.java:1073)
at org.elasticsearch.action.index.TransportIndexAction.shardOperationOnPrimary(TransportIndexAction.java:170)
at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryPhase.performOnPrimary(TransportReplicationAction.java:579)
at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryPhase$1.doRun(TransportReplicationAction.java:452)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
[2015-11-16 12:22:37,298][INFO ][rest.suppressed ] /.kibana/visualization/SSH-Software-Version Params: {id=SSH-Software-Version, index=.kibana, op_type=create, type=visualization}
[.kibana][[.kibana][0]] DocumentAlreadyExistsException[[visualization][SSH-Software-Version]: document already exists]
at org.elasticsearch.index.engine.InternalEngine.innerCreateNoLock(InternalEngine.java:411)
at org.elasticsearch.index.engine.InternalEngine.innerCreate(InternalEngine.java:369)
at org.elasticsearch.index.engine.InternalEngine.create(InternalEngine.java:341)
at org.elasticsearch.index.shard.IndexShard.create(IndexShard.java:517)
at org.elasticsearch.index.engine.Engine$Create.execute(Engine.java:789)
at org.elasticsearch.action.support.replication.TransportReplicationAction.executeIndexRequestOnPrimary(TransportReplicationAction.java:1073)
at org.elasticsearch.action.index.TransportIndexAction.shardOperationOnPrimary(TransportIndexAction.java:170)
at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryPhase.performOnPrimary(TransportReplicationAction.java:579)
at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryPhase$1.doRun(TransportReplicationAction.java:452)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
[2015-11-16 12:43:41,414][INFO ][rest.suppressed ] /.kibana/dashboard/Default Params: {id=Default, index=.kibana, op_type=create, type=dashboard}
[.kibana][[.kibana][0]] DocumentAlreadyExistsException[[dashboard][Default]: document already exists]
at org.elasticsearch.index.engine.InternalEngine.innerCreateNoLock(InternalEngine.java:411)
at org.elasticsearch.index.engine.InternalEngine.innerCreate(InternalEngine.java:369)
at org.elasticsearch.index.engine.InternalEngine.create(InternalEngine.java:341)
at org.elasticsearch.index.shard.IndexShard.create(IndexShard.java:517)
at org.elasticsearch.index.engine.Engine$Create.execute(Engine.java:789)
at org.elasticsearch.action.support.replication.TransportReplicationAction.executeIndexRequestOnPrimary(TransportReplicationAction.java:1073)
at org.elasticsearch.action.index.TransportIndexAction.shardOperationOnPrimary(TransportIndexAction.java:170)
at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryPhase.performOnPrimary(TransportReplicationAction.java:579)
at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryPhase$1.doRun(TransportReplicationAction.java:452)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
[2015-11-16 14:33:42,067][INFO ][rest.suppressed ] /.kibana/dashboard/Default Params: {id=Default, index=.kibana, op_type=create, type=dashboard}
[.kibana][[.kibana][0]] DocumentAlreadyExistsException[[dashboard][Default]: document already exists]
at org.elasticsearch.index.engine.InternalEngine.innerCreateNoLock(InternalEngine.java:411)
at org.elasticsearch.index.engine.InternalEngine.innerCreate(InternalEngine.java:369)
at org.elasticsearch.index.engine.InternalEngine.create(InternalEngine.java:341)
at org.elasticsearch.index.shard.IndexShard.create(IndexShard.java:517)
at org.elasticsearch.index.engine.Engine$Create.execute(Engine.java:789)
at org.elasticsearch.action.support.replication.TransportReplicationAction.executeIndexRequestOnPrimary(TransportReplicationAction.java:1073)
at org.elasticsearch.action.index.TransportIndexAction.shardOperationOnPrimary(TransportIndexAction.java:170)
at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryPhase.performOnPrimary(TransportReplicationAction.java:579)
at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryPhase$1.doRun(TransportReplicationAction.java:452)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
[2015-11-16 14:48:17,447][INFO ][rest.suppressed ] /.kibana/dashboard/Default Params: {id=Default, index=.kibana, op_type=create, type=dashboard}
[.kibana][[.kibana][0]] DocumentAlreadyExistsException[[dashboard][Default]: document already exists]
at org.elasticsearch.index.engine.InternalEngine.innerCreateNoLock(InternalEngine.java:411)
at org.elasticsearch.index.engine.InternalEngine.innerCreate(InternalEngine.java:369)
at org.elasticsearch.index.engine.InternalEngine.create(InternalEngine.java:341)
at org.elasticsearch.index.shard.IndexShard.create(IndexShard.java:517)
at org.elasticsearch.index.engine.Engine$Create.execute(Engine.java:789)
at org.elasticsearch.action.support.replication.TransportReplicationAction.executeIndexRequestOnPrimary(TransportReplicationAction.java:1073)
at org.elasticsearch.action.index.TransportIndexAction.shardOperationOnPrimary(TransportIndexAction.java:170)
at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryPhase.performOnPrimary(TransportReplicationAction.java:579)
at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryPhase$1.doRun(TransportReplicationAction.java:452)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
[2015-11-16 14:55:11,489][INFO ][rest.suppressed ] /.kibana/dashboard/Default Params: {id=Default, index=.kibana, op_type=create, type=dashboard}
[.kibana][[.kibana][0]] DocumentAlreadyExistsException[[dashboard][Default]: document already exists]
at org.elasticsearch.index.engine.InternalEngine.innerCreateNoLock(InternalEngine.java:411)
at org.elasticsearch.index.engine.InternalEngine.innerCreate(InternalEngine.java:369)
at org.elasticsearch.index.engine.InternalEngine.create(InternalEngine.java:341)
at org.elasticsearch.index.shard.IndexShard.create(IndexShard.java:517)
at org.elasticsearch.index.engine.Engine$Create.execute(Engine.java:789)
at org.elasticsearch.action.support.replication.TransportReplicationAction.executeIndexRequestOnPrimary(TransportReplicationAction.java:1073)
at org.elasticsearch.action.index.TransportIndexAction.shardOperationOnPrimary(TransportIndexAction.java:170)
at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryPhase.performOnPrimary(TransportReplicationAction.java:579)
at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryPhase$1.doRun(TransportReplicationAction.java:452)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
[2015-11-16 14:58:51,689][INFO ][rest.suppressed ] /.kibana/dashboard/Default Params: {id=Default, index=.kibana, op_type=create, type=dashboard}
[.kibana][[.kibana][0]] DocumentAlreadyExistsException[[dashboard][Default]: document already exists]
at org.elasticsearch.index.engine.InternalEngine.innerCreateNoLock(InternalEngine.java:411)
at org.elasticsearch.index.engine.InternalEngine.innerCreate(InternalEngine.java:369)
at org.elasticsearch.index.engine.InternalEngine.create(InternalEngine.java:341)
at org.elasticsearch.index.shard.IndexShard.create(IndexShard.java:517)
at org.elasticsearch.index.engine.Engine$Create.execute(Engine.java:789)
at org.elasticsearch.action.support.replication.TransportReplicationAction.executeIndexRequestOnPrimary(TransportReplicationAction.java:1073)
at org.elasticsearch.action.index.TransportIndexAction.shardOperationOnPrimary(TransportIndexAction.java:170)
at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryPhase.performOnPrimary(TransportReplicationAction.java:579)
at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryPhase$1.doRun(TransportReplicationAction.java:452)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
[2015-11-16 15:01:17,546][INFO ][rest.suppressed ] /.kibana/dashboard/Default Params: {id=Default, index=.kibana, op_type=create, type=dashboard}
[.kibana][[.kibana][0]] DocumentAlreadyExistsException[[dashboard][Default]: document already exists]
at org.elasticsearch.index.engine.InternalEngine.innerCreateNoLock(InternalEngine.java:411)
at org.elasticsearch.index.engine.InternalEngine.innerCreate(InternalEngine.java:369)
at org.elasticsearch.index.engine.InternalEngine.create(InternalEngine.java:341)
at org.elasticsearch.index.shard.IndexShard.create(IndexShard.java:517)
at org.elasticsearch.index.engine.Engine$Create.execute(Engine.java:789)
at org.elasticsearch.action.support.replication.TransportReplicationAction.executeIndexRequestOnPrimary(TransportReplicationAction.java:1073)
at org.elasticsearch.action.index.TransportIndexAction.shardOperationOnPrimary(TransportIndexAction.java:170)
at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryPhase.performOnPrimary(TransportReplicationAction.java:579)
at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryPhase$1.doRun(TransportReplicationAction.java:452)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
[2015-11-16 15:13:10,208][INFO ][rest.suppressed ] /.kibana/dashboard/Default Params: {id=Default, index=.kibana, op_type=create, type=dashboard}
[.kibana][[.kibana][0]] DocumentAlreadyExistsException[[dashboard][Default]: document already exists]
at org.elasticsearch.index.engine.InternalEngine.innerCreateNoLock(InternalEngine.java:411)
at org.elasticsearch.index.engine.InternalEngine.innerCreate(InternalEngine.java:369)
at org.elasticsearch.index.engine.InternalEngine.create(InternalEngine.java:341)
at org.elasticsearch.index.shard.IndexShard.create(IndexShard.java:517)
at org.elasticsearch.index.engine.Engine$Create.execute(Engine.java:789)
at org.elasticsearch.action.support.replication.TransportReplicationAction.executeIndexRequestOnPrimary(TransportReplicationAction.java:1073)
at org.elasticsearch.action.index.TransportIndexAction.shardOperationOnPrimary(TransportIndexAction.java:170)
at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryPhase.performOnPrimary(TransportReplicationAction.java:579)
at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryPhase$1.doRun(TransportReplicationAction.java:452)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
[2015-11-16 15:21:57,533][INFO ][rest.suppressed ] /.kibana/visualization/Fileinfo-Magic Params: {id=Fileinfo-Magic, index=.kibana, op_type=create, type=visualization}
[.kibana][[.kibana][0]] DocumentAlreadyExistsException[[visualization][Fileinfo-Magic]: document already exists]
at org.elasticsearch.index.engine.InternalEngine.innerCreateNoLock(InternalEngine.java:411)
at org.elasticsearch.index.engine.InternalEngine.innerCreate(InternalEngine.java:369)
at org.elasticsearch.index.engine.InternalEngine.create(InternalEngine.java:341)
at org.elasticsearch.index.shard.IndexShard.create(IndexShard.java:517)
at org.elasticsearch.index.engine.Engine$Create.execute(Engine.java:789)
at org.elasticsearch.action.support.replication.TransportReplicationAction.executeIndexRequestOnPrimary(TransportReplicationAction.java:1073)
at org.elasticsearch.action.index.TransportIndexAction.shardOperationOnPrimary(TransportIndexAction.java:170)
at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryPhase.performOnPrimary(TransportReplicationAction.java:579)
at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryPhase$1.doRun(TransportReplicationAction.java:452)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
[2015-11-16 15:23:22,710][INFO ][rest.suppressed ] /.kibana/dashboard/Default Params: {id=Default, index=.kibana, op_type=create, type=dashboard}
[.kibana][[.kibana][0]] DocumentAlreadyExistsException[[dashboard][Default]: document already exists]
at org.elasticsearch.index.engine.InternalEngine.innerCreateNoLock(InternalEngine.java:411)
at org.elasticsearch.index.engine.InternalEngine.innerCreate(InternalEngine.java:369)
at org.elasticsearch.index.engine.InternalEngine.create(InternalEngine.java:341)
at org.elasticsearch.index.shard.IndexShard.create(IndexShard.java:517)
at org.elasticsearch.index.engine.Engine$Create.execute(Engine.java:789)
at org.elasticsearch.action.support.replication.TransportReplicationAction.executeIndexRequestOnPrimary(TransportReplicationAction.java:1073)
at org.elasticsearch.action.index.TransportIndexAction.shardOperationOnPrimary(TransportIndexAction.java:170)
at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryPhase.performOnPrimary(TransportReplicationAction.java:579)
at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryPhase$1.doRun(TransportReplicationAction.java:452)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
[2015-11-16 16:10:54,364][INFO ][rest.suppressed ] /.kibana/dashboard/Default Params: {id=Default, index=.kibana, op_type=create, type=dashboard}
[.kibana][[.kibana][0]] DocumentAlreadyExistsException[[dashboard][Default]: document already exists]
at org.elasticsearch.index.engine.InternalEngine.innerCreateNoLock(InternalEngine.java:411)
at org.elasticsearch.index.engine.InternalEngine.innerCreate(InternalEngine.java:369)
at org.elasticsearch.index.engine.InternalEngine.create(InternalEngine.java:341)
at org.elasticsearch.index.shard.IndexShard.create(IndexShard.java:517)
at org.elasticsearch.index.engine.Engine$Create.execute(Engine.java:789)
at org.elasticsearch.action.support.replication.TransportReplicationAction.executeIndexRequestOnPrimary(TransportReplicationAction.java:1073)
at org.elasticsearch.action.index.TransportIndexAction.shardOperationOnPrimary(TransportIndexAction.java:170)
at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryPhase.performOnPrimary(TransportReplicationAction.java:579)
at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryPhase$1.doRun(TransportReplicationAction.java:452)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
[2015-11-16 16:14:13,496][INFO ][rest.suppressed ] /.kibana/dashboard/Default Params: {id=Default, index=.kibana, op_type=create, type=dashboard}
[.kibana][[.kibana][0]] DocumentAlreadyExistsException[[dashboard][Default]: document already exists]
at org.elasticsearch.index.engine.InternalEngine.innerCreateNoLock(InternalEngine.java:411)
at org.elasticsearch.index.engine.InternalEngine.innerCreate(InternalEngine.java:369)
at org.elasticsearch.index.engine.InternalEngine.create(InternalEngine.java:341)
at org.elasticsearch.index.shard.IndexShard.create(IndexShard.java:517)
at org.elasticsearch.index.engine.Engine$Create.execute(Engine.java:789)
at org.elasticsearch.action.support.replication.TransportReplicationAction.executeIndexRequestOnPrimary(TransportReplicationAction.java:1073)
at org.elasticsearch.action.index.TransportIndexAction.shardOperationOnPrimary(TransportIndexAction.java:170)
at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryPhase.performOnPrimary(TransportReplicationAction.java:579)
at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryPhase$1.doRun(TransportReplicationAction.java:452)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
[2015-11-16 16:32:20,483][INFO ][rest.suppressed ] /.kibana/dashboard/Default Params: {id=Default, index=.kibana, op_type=create, type=dashboard}
[.kibana][[.kibana][0]] DocumentAlreadyExistsException[[dashboard][Default]: document already exists]
at org.elasticsearch.index.engine.InternalEngine.innerCreateNoLock(InternalEngine.java:411)
at org.elasticsearch.index.engine.InternalEngine.innerCreate(InternalEngine.java:369)
at org.elasticsearch.index.engine.InternalEngine.create(InternalEngine.java:341)
at org.elasticsearch.index.shard.IndexShard.create(IndexShard.java:517)
at org.elasticsearch.index.engine.Engine$Create.execute(Engine.java:789)
at org.elasticsearch.action.support.replication.TransportReplicationAction.executeIndexRequestOnPrimary(TransportReplicationAction.java:1073)
at org.elasticsearch.action.index.TransportIndexAction.shardOperationOnPrimary(TransportIndexAction.java:170)
at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryPhase.performOnPrimary(TransportReplicationAction.java:579)
at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryPhase$1.doRun(TransportReplicationAction.java:452)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
[2015-11-16 16:34:42,196][INFO ][node ] [Veil] stopping ...
[2015-11-16 16:34:42,288][INFO ][node ] [Veil] stopped
[2015-11-16 16:34:42,289][INFO ][node ] [Veil] closing ...
[2015-11-16 16:34:42,297][INFO ][node ] [Veil] closed
[2015-11-16 16:35:06,696][INFO ][node ] [Famine] version[2.0.0], pid[8], build[de54438/2015-10-22T08:09:48Z]
[2015-11-16 16:35:06,697][INFO ][node ] [Famine] initializing ...
[2015-11-16 16:35:06,798][INFO ][plugins ] [Famine] loaded [], sites []
[2015-11-16 16:35:06,915][INFO ][env ] [Famine] using [1] data paths, mounts [[/data/elk (/dev/sda5)]], net usable_space [6.9gb], net total_space [7.3gb], spins? [possibly], types [ext4]
[2015-11-16 16:35:08,561][INFO ][node ] [Famine] initialized
[2015-11-16 16:35:08,561][INFO ][node ] [Famine] starting ...
[2015-11-16 16:35:08,752][INFO ][transport ] [Famine] publish_address {127.0.0.1:9300}, bound_addresses {127.0.0.1:9300}, {[::1]:9300}
[2015-11-16 16:35:08,758][INFO ][discovery ] [Famine] elasticsearch/viSYKHsKRYar5tp5Av8fLQ
[2015-11-16 16:35:11,809][INFO ][cluster.service ] [Famine] new_master {Famine}{viSYKHsKRYar5tp5Av8fLQ}{127.0.0.1}{127.0.0.1:9300}, reason: zen-disco-join(elected_as_master, [0] joins received)
[2015-11-16 16:35:11,897][INFO ][gateway ] [Famine] recovered [3] indices into cluster_state
[2015-11-16 16:35:11,945][INFO ][http ] [Famine] publish_address {127.0.0.1:9200}, bound_addresses {127.0.0.1:9200}, {[::1]:9200}
[2015-11-16 16:35:11,945][INFO ][node ] [Famine] started
[2015-11-16 16:39:06,106][INFO ][node ] [Famine] stopping ...
[2015-11-16 16:39:06,223][INFO ][node ] [Famine] stopped
[2015-11-16 16:39:06,223][INFO ][node ] [Famine] closing ...
[2015-11-16 16:39:06,239][INFO ][node ] [Famine] closed

File diff suppressed because it is too large


@@ -0,0 +1,4 @@
[2015-11-14 03:43:15,837][INFO ][node ] [Veil] version[2.0.0], pid[8], build[de54438/2015-10-22T08:09:48Z]
[2015-11-14 03:43:15,838][INFO ][node ] [Veil] initializing ...
[2015-11-14 03:43:15,973][INFO ][plugins ] [Veil] loaded [], sites []
[2015-11-14 03:43:16,175][INFO ][env ] [Veil] using [1] data paths, mounts [[/data/elk (/dev/sda5)]], net usable_space [6.9gb], net total_space [7.3gb], spins? [possibly], types [ext4]


@@ -1,7 +1,6 @@
 dionaea
 glastopf
 honeytrap
-kippo
+cowrie
 suricata
-ews
 elk


@@ -0,0 +1,4 @@
dionaea
glastopf
honeytrap
cowrie


@@ -1,4 +1,4 @@
-T-Pot Community Edition (Beta)
+T-Pot 16.03 (Alpha)
 Hostname: \n
 IP:
@@ -13,4 +13,3 @@ ___________ _____________________________
 CTRL+ALT+F2 - Display current container status
 CTRL+ALT+F1 - Return to this screen


@@ -1,6 +1,6 @@
-#!/bin/sh -e
-# Let's add the first local ip to the /etc/issue and ews.ip file
-# export http_proxy=http://your.proxy.server:port/
+#!/bin/bash
+# Let's add the first local ip to the /etc/issue and external ip to ews.ip file
+source /etc/environment
 myLOCALIP=$(hostname -I | awk '{ print $1 }')
 myEXTIP=$(curl myexternalip.com/raw)
 sed -i "s#IP:.*#IP: $myLOCALIP, $myEXTIP#" /etc/issue
@@ -12,5 +12,3 @@ chown tpot:tpot /data/ews/conf/ews.ip
 if [ -f /var/run/check.lock ];
 then rm /var/run/check.lock
 fi
-setupcon
-exit 0


@@ -1,10 +1,10 @@
 #!/bin/bash
 ########################################################
-# T-Pot Community Edition #
+# T-Pot #
 # Two-Factor authentication enable script #
 # #
-# v0.20 by mo, DTAG, 2015-01-27 #
+# v0.01 by mo, DTAG, 2015-06-15 #
 ########################################################
 echo "### This script will enable Two-Factor-Authentication based on Google Authenticator for SSH."


@@ -1,10 +1,10 @@
 #!/bin/bash
 ########################################################
-# T-Pot Community Edition #
+# T-Pot #
 # SSH enable script #
 # #
-# v0.21 by mo, DTAG, 2015-01-27 #
+# v0.01 by mo, DTAG, 2015-06-15 #
 ########################################################
 if ! [ -f /etc/init/ssh.override ];

installer/install.sh Executable file

@@ -0,0 +1,312 @@
#!/bin/bash
########################################################
# T-Pot post install script #
# Ubuntu server 14.04.3, x64 #
# #
# v0.10 by mo, DTAG, 2015-10-06 #
########################################################
# Type of install, SENSOR or FULL?
myFLAVOR="FULL"
# Some global vars
myPROXYFILEPATH="/root/tpot/etc/proxy"
myNTPCONFPATH="/root/tpot/etc/ntp"
myPFXPATH="/root/tpot/keys/8021x.pfx"
myPFXPWPATH="/root/tpot/keys/8021x.pw"
myPFXHOSTIDPATH="/root/tpot/keys/8021x.id"
# Let's create a function for colorful output
fuECHO () {
local myRED=1
local myWHT=7
tput setaf $myRED
echo $1 "$2"
tput setaf $myWHT
}
# Let's make sure there is a warning if running for a second time
if [ -f install.log ];
then fuECHO "### Running more than once may complicate things. Erase install.log if you are really sure."
exit 1;
fi
# Let's log for the beauty of it
set -e
exec 2> >(tee "install.err")
exec > >(tee "install.log")
# Let's setup the proxy for env
if [ -f $myPROXYFILEPATH ];
then fuECHO "### Setting up the proxy."
myPROXY=$(cat $myPROXYFILEPATH)
tee -a /etc/environment <<EOF
export http_proxy=$myPROXY
export https_proxy=$myPROXY
export HTTP_PROXY=$myPROXY
export HTTPS_PROXY=$myPROXY
export no_proxy=localhost,127.0.0.1,.sock
EOF
source /etc/environment
# Let's setup the proxy for apt
tee /etc/apt/apt.conf <<EOF
Acquire::http::Proxy "$myPROXY";
Acquire::https::Proxy "$myPROXY";
EOF
fi
# Let's setup the ntp server
if [ -f $myNTPCONFPATH ];
then
fuECHO "### Setting up the ntp server."
cp $myNTPCONFPATH /etc/ntp.conf
fi
# Let's setup 802.1x networking
if [ -f $myPFXPATH ];
then
fuECHO "### Setting up 802.1x networking."
cp $myPFXPATH /etc/wpa_supplicant/
if [ -f $myPFXPWPATH ];
then
fuECHO "### Setting up 802.1x password."
myPFXPW=$(cat $myPFXPWPATH)
fi
myPFXHOSTID=$(cat $myPFXHOSTIDPATH)
tee -a /etc/network/interfaces <<EOF
wpa-driver wired
wpa-conf /etc/wpa_supplicant/wired8021x.conf
### Example wireless config for 802.1x
### This configuration was tested with the IntelNUC series
### If problems occur you can try and change wpa-driver to "iwlwifi"
### Do not forget to enter a ssid in /etc/wpa_supplicant/wireless8021x.conf
#
#auto wlan0
#iface wlan0 inet dhcp
# wpa-driver wext
# wpa-conf /etc/wpa_supplicant/wireless8021x.conf
EOF
tee /etc/wpa_supplicant/wired8021x.conf <<EOF
ctrl_interface=/var/run/wpa_supplicant
ctrl_interface_group=root
eapol_version=1
ap_scan=1
network={
key_mgmt=IEEE8021X
eap=TLS
identity="host/$myPFXHOSTID"
private_key="/etc/wpa_supplicant/8021x.pfx"
private_key_passwd="$myPFXPW"
}
EOF
tee /etc/wpa_supplicant/wireless8021x.conf <<EOF
ctrl_interface=/var/run/wpa_supplicant
ctrl_interface_group=root
eapol_version=1
ap_scan=1
network={
ssid="<your_ssid_here_without_brackets>"
key_mgmt=WPA-EAP
pairwise=CCMP
group=CCMP
eap=TLS
identity="host/$myPFXHOSTID"
private_key="/etc/wpa_supplicant/8021x.pfx"
private_key_passwd="$myPFXPW"
}
EOF
fi
# Let's provide a wireless example config ...
fuECHO "### Providing a wireless example config."
tee -a /etc/network/interfaces <<EOF
### Example wireless config without 802.1x
### This configuration was tested with the IntelNUC series
### If problems occur you can try and change wpa-driver to "iwlwifi"
#
#auto wlan0
#iface wlan0 inet dhcp
# wpa-driver wext
# wpa-ssid <your_ssid_here_without_brackets>
# wpa-ap-scan 1
# wpa-proto RSN
# wpa-pairwise CCMP
# wpa-group CCMP
# wpa-key-mgmt WPA-PSK
# wpa-psk "<your_password_here_without_brackets>"
EOF
# Let's modify the sources list
sed -i '/cdrom/d' /etc/apt/sources.list
# Let's pull some updates
fuECHO "### Pulling Updates."
apt-get update -y
fuECHO "### Installing Upgrades."
apt-get dist-upgrade -y
# Let's install docker
fuECHO "### Installing docker."
wget -qO- https://get.docker.com/gpg | apt-key add -
wget -qO- https://get.docker.com/ | sh
# Let's add proxy settings to docker defaults
if [ -f $myPROXYFILEPATH ];
then fuECHO "### Setting up the proxy for docker."
myPROXY=$(cat $myPROXYFILEPATH)
tee -a /etc/default/docker <<EOF
export http_proxy=$myPROXY
export https_proxy=$myPROXY
export HTTP_PROXY=$myPROXY
export HTTPS_PROXY=$myPROXY
export no_proxy=localhost,127.0.0.1,.sock
EOF
fi
# Let's add a new user
fuECHO "### Adding new user."
addgroup --gid 2000 tpot
adduser --system --no-create-home --uid 2000 --disabled-password --disabled-login --gid 2000 tpot
# Let's set the hostname
fuECHO "### Setting a new hostname."
myHOST=ce$(date +%s)$RANDOM
hostnamectl set-hostname $myHOST
sed -i 's#127.0.1.1.*#127.0.1.1\t'"$myHOST"'#g' /etc/hosts
# Let's patch sshd_config
fuECHO "### Patching sshd_config to listen on port 64295 and deny password authentication."
sed -i 's#Port 22#Port 64295#' /etc/ssh/sshd_config
sed -i 's#\#PasswordAuthentication yes#PasswordAuthentication no#' /etc/ssh/sshd_config
# Let's disable ssh service
echo "manual" >> /etc/init/ssh.override
# Let's patch docker defaults, so we can run images as service
fuECHO "### Patching docker defaults."
tee -a /etc/default/docker <<EOF
DOCKER_OPTS="-r=false"
EOF
# Let's make sure only myFLAVOR images will be downloaded and started
if [ "$myFLAVOR" = "SENSOR" ]
then
cp /root/tpot/data/sensor_images.conf /root/tpot/data/images.conf
echo "manual" >> /etc/init/suricata.override
echo "manual" >> /etc/init/elk.override
else
cp /root/tpot/data/full_images.conf /root/tpot/data/images.conf
fi
# Let's load docker images
fuECHO "### Loading docker images. Please be patient, this may take a while."
if [ -d /root/tpot/images ];
then
fuECHO "### Found cached images and will load from local."
for name in $(cat /root/tpot/data/images.conf)
do
fuECHO "### Now loading dtagdevsec/$name:latest1603"
docker load -i /root/tpot/images/$name:latest1603.img
done
else
for name in $(cat /root/tpot/data/images.conf)
do
docker pull dtagdevsec/$name:latest1603
done
fi
# Let's add the daily update check with a weekly clean interval
fuECHO "### Modifying update checks."
tee /etc/apt/apt.conf.d/10periodic <<EOF
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Download-Upgradeable-Packages "0";
APT::Periodic::AutocleanInterval "7";
EOF
# Let's make sure to reboot the system after a kernel panic
fuECHO "### Reboot after kernel panic."
tee -a /etc/sysctl.conf <<EOF
# Reboot after kernel panic, check via /proc/sys/kernel/panic[_on_oops]
kernel.panic = 1
kernel.panic_on_oops = 1
EOF
# Let's add some cronjobs
fuECHO "### Adding cronjobs."
tee -a /etc/crontab <<EOF
# Show running containers every 60s via /dev/tty2
*/2 * * * * root status.sh > /dev/tty2
# Check if containers and services are up
*/5 * * * * root check.sh
# Check if updated images are available and download them
27 1 * * * root for i in \$(cat /data/images.conf); do docker pull dtagdevsec/\$i:latest1603; done
# Restart docker service and containers
27 3 * * * root dcres.sh
# Delete elastic indices older than 30 days
27 4 * * * root docker exec elk bash -c '/usr/local/bin/curator --host 127.0.0.1 delete --older-than 30'
# Update IP and erase check.lock if it exists
27 15 * * * root /etc/rc.local
# Check for updated packages every sunday, upgrade and reboot
27 16 * * 0 root sleep \$((RANDOM %600)); apt-get autoclean -y; apt-get autoremove -y; apt-get update -y; apt-get upgrade -y; apt-get upgrade docker-engine -y; sleep 5; reboot
EOF
# Let's take care of some files and permissions before copying
chmod 500 /root/tpot/bin/*
chmod 600 /root/tpot/data/*
chmod 644 /root/tpot/etc/issue
chmod 755 /root/tpot/etc/rc.local
chmod 700 /root/tpot/home/*
chown tsec:tsec /root/tpot/home/*
chmod 644 /root/tpot/upstart/*
# Let's create some files and folders
fuECHO "### Creating some files and folders."
mkdir -p /data/ews/log /data/ews/conf /data/elk/data /data/elk/log /home/tsec/.ssh/
# Let's copy some files
cp -R /root/tpot/bin/* /usr/bin/
cp -R /root/tpot/data/* /data/
cp -R /root/tpot/etc/issue /etc/
cp -R /root/tpot/home/* /home/tsec/
cp -R /root/tpot/upstart/* /etc/init/
cp /root/tpot/keys/authorized_keys /home/tsec/.ssh/authorized_keys
# Let's take care of some files and permissions
chmod 760 -R /data
chown tpot:tpot -R /data
chmod 600 /home/tsec/.ssh/authorized_keys
chown tsec:tsec /home/tsec/*.sh /home/tsec/.ssh /home/tsec/.ssh/authorized_keys
# Let's clean up apt
apt-get autoclean -y
apt-get autoremove -y
# Let's replace "quiet splash" options, set a console font for more screen canvas and update grub
sed -i 's#GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"#GRUB_CMDLINE_LINUX_DEFAULT="consoleblank=0"#' /etc/default/grub
sed -i 's#\#GRUB_GFXMODE=640x480#GRUB_GFXMODE=800x600x32#' /etc/default/grub
tee -a /etc/default/grub <<EOF
GRUB_GFXPAYLOAD=800x600x32
GRUB_GFXPAYLOAD_LINUX=800x600x32
EOF
update-grub
cp /usr/share/consolefonts/Uni2-Terminus12x6.psf.gz /etc/console-setup/
gunzip /etc/console-setup/Uni2-Terminus12x6.psf.gz
sed -i 's#FONTFACE=".*#FONTFACE="Terminus"#' /etc/default/console-setup
sed -i 's#FONTSIZE=".*#FONTSIZE="12x6"#' /etc/default/console-setup
update-initramfs -u
# Final steps
fuECHO "### Thanks for your patience. Now rebooting."
mv /root/tpot/etc/rc.local /etc/rc.local && rm -rf /root/tpot/ && chage -d 0 tsec && sleep 2 && reboot
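The flavor handling near the top of install.sh decides which image list ends up in /data/images.conf and therefore which containers are pulled. A minimal sketch of that selection logic, with the image names taken from the two images.conf variants in this commit; the docker pull commands are only echoed here so the sketch runs without docker installed, and it is an illustration, not part of the installer:

```shell
#!/bin/bash
# Sketch of install.sh's flavor-based image selection.
# SENSOR installs only the honeypot images; FULL adds suricata and elk.
# Image names mirror sensor_images.conf / full_images.conf from this commit.
myFLAVOR="SENSOR"
if [ "$myFLAVOR" = "SENSOR" ]; then
  myIMAGES="dionaea glastopf honeytrap cowrie"
else
  myIMAGES="dionaea glastopf honeytrap cowrie suricata elk"
fi
for name in $myIMAGES; do
  # install.sh would run: docker pull dtagdevsec/$name:latest1603
  echo "dtagdevsec/$name:latest1603"
done
```

With myFLAVOR set to "FULL" the same loop would also print the suricata and elk image names, matching the extra upstart overrides the script writes for the SENSOR flavor.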


@ -1,18 +0,0 @@
#!/bin/bash
#############################################################
# T-Pot Community Edition - disable splash boot #
# and consoleblank permanently #
# Ubuntu server 14.04.1, x64 #
# #
# v0.12 by mo, DTAG, 2015-02-15 #
#############################################################
# Let's replace "quiet splash" options and update grub
sed -i 's#GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"#GRUB_CMDLINE_LINUX_DEFAULT="consoleblank=0"#' /etc/default/grub
sed -i 's#\#GRUB_GFXMODE=640x480#GRUB_GFXMODE=800x600#' /etc/default/grub
update-grub
sed -i 's#FONTFACE=".*#FONTFACE="Terminus"#' /etc/default/console-setup
sed -i 's#FONTSIZE=".*#FONTSIZE="12x6"#' /etc/default/console-setup
# Let's move the install script to rc.local and reboot
mv /root/tpotce/install2.sh /etc/rc.local && sleep 2 && reboot


@ -1,152 +0,0 @@
#!/bin/bash
########################################################
# T-Pot Community Edition post install script #
# Ubuntu server 14.04, x64 #
# #
# v0.49 by mo, DTAG, 2015-08-14 #
########################################################
# Let's make sure there is a warning if running for a second time
if [ -f install.log ];
then fuECHO "### Running more than once may complicate things. Erase install.log if you are really sure."
exit 1;
fi
# Let's log for the beauty of it
set -e
exec 2> >(tee "install.err")
exec > >(tee "install.log")
# Let's create a function for colorful output
fuECHO () {
local myRED=1
local myWHT=7
tput setaf $myRED
echo $1 "$2"
tput setaf $myWHT
}
# Let's modify the sources list
sed -i '/cdrom/d' /etc/apt/sources.list
# Let's pull some updates
fuECHO "### Pulling Updates."
apt-get update -y
fuECHO "### Installing Upgrades."
apt-get dist-upgrade -y
# Let's install docker
fuECHO "### Installing docker."
wget -qO- https://get.docker.com/gpg | apt-key add -
wget -qO- https://get.docker.com/ | sh
# Let's install all the packages we need
fuECHO "### Installing packages."
apt-get install curl ethtool git ntp libpam-google-authenticator vim -y
# Let's add a new user
fuECHO "### Adding new user."
addgroup --gid 2000 tpot
adduser --system --no-create-home --uid 2000 --disabled-password --disabled-login --gid 2000 tpot
# Let's set the hostname
fuECHO "### Setting a new hostname."
myHOST=ce$(date +%s)$RANDOM
hostnamectl set-hostname $myHOST
sed -i 's#127.0.1.1.*#127.0.1.1\t'"$myHOST"'#g' /etc/hosts
# Let's patch sshd_config
fuECHO "### Patching sshd_config to listen on port 64295 and deny password authentication."
sed -i 's#Port 22#Port 64295#' /etc/ssh/sshd_config
sed -i 's#\#PasswordAuthentication yes#PasswordAuthentication no#' /etc/ssh/sshd_config
# Let's disable ssh service
echo "manual" >> /etc/init/ssh.override
# Let's patch docker defaults, so we can run images as service
fuECHO "### Patching docker defaults."
tee -a /etc/default/docker <<EOF
DOCKER_OPTS="-r=false"
EOF
# Let's load docker images from remote
fuECHO "### Downloading docker images from DockerHub. Please be patient, this may take a while."
for name in $(cat /root/tpotce/data/images.conf)
do
docker pull dtagdevsec/$name
done
# Let's add the daily update check with a weekly clean interval
fuECHO "### Modifying update checks."
tee /etc/apt/apt.conf.d/10periodic <<EOF
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Download-Upgradeable-Packages "0";
APT::Periodic::AutocleanInterval "7";
EOF
# Let's wait no longer for network than 60 seconds
fuECHO "### Wait no longer for network than 60 seconds."
sed -i.bak 's#sleep 60#sleep 30#' /etc/init/failsafe.conf
# Let's make sure to reboot the system after a kernel panic
fuECHO "### Reboot after kernel panic."
tee -a /etc/sysctl.conf <<EOF
# Reboot after kernel panic, check via /proc/sys/kernel/panic[_on_oops]
kernel.panic = 1
kernel.panic_on_oops = 1
EOF
# Let's add some cronjobs
fuECHO "### Adding cronjobs."
tee -a /etc/crontab <<EOF
# Show running containers every 60s via /dev/tty2
*/2 * * * * root /usr/bin/status.sh > /dev/tty2
# Check if containers and services are up
*/5 * * * * root /usr/bin/check.sh
# Check if updated images are available and download them
27 1 * * * root for i in \$(cat /data/images.conf); do /usr/bin/docker pull dtagdevsec/\$i:latest; done
# Restart docker service and containers
27 3 * * * root /usr/bin/dcres.sh
# Delete elastic indices older than 30 days
27 4 * * * root /usr/bin/docker exec elk bash -c '/usr/local/bin/curator --host 127.0.0.1 delete --older-than 30'
# Update IP and erase check.lock if it exists
27 15 * * * root /etc/rc.local
# Check for updated packages every sunday, upgrade and reboot
27 16 * * 0 root sleep \$((RANDOM %600)); apt-get autoclean -y; apt-get autoremove -y; apt-get update -y; apt-get upgrade -y; apt-get upgrade docker-engine -y; sleep 5; reboot
EOF
# Let's take care of some files and permissions
chmod 500 /root/tpotce/bin/*
chmod 600 /root/tpotce/data/*
chmod 644 /root/tpotce/etc/issue
chmod 755 /root/tpotce/etc/rc.local
chmod 700 /root/tpotce/home/*
chown tsec:tsec /root/tpotce/home/*
chmod 644 /root/tpotce/upstart/*
# Let's create some files and folders
fuECHO "### Creating some files and folders."
mkdir -p /data/ews/log /data/ews/conf /data/elk/data /data/elk/log
# Let's move some files
cp -R /root/tpotce/bin/* /usr/bin/
cp -R /root/tpotce/data/* /data/
cp -R /root/tpotce/etc/issue /etc/
cp -R /root/tpotce/home/* /home/tsec/
cp -R /root/tpotce/upstart/* /etc/init/
# Let's take care of some files and permissions
chmod 660 -R /data
chown tpot:tpot -R /data
chown tsec:tsec /home/tsec/*.sh
# Final steps
fuECHO "### Thanks for your patience. Now rebooting."
mv /root/tpotce/etc/rc.local /etc/rc.local && rm -rf /root/tpotce/ && chage -d 0 tsec && sleep 2 && reboot


@@ -0,0 +1 @@


@@ -0,0 +1,4 @@
#!/bin/bash
# Stop plymouth to allow for terminal interaction
plymouth quit
openvt -w -s /root/tpot/install.sh


@@ -0,0 +1,24 @@
########################################################
# T-Pot #
# Cowrie upstart script #
# #
# v0.04 by av, DTAG, 2015-10-07 #
########################################################
description "cowrie"
author "av"
start on started docker and filesystem
stop on runlevel [!2345]
respawn
pre-start script
# Remove any existing cowrie containers
myCID=$(docker ps -a | grep cowrie | awk '{ print $1 }')
if [ "$myCID" != "" ];
then docker rm $myCID;
fi
end script
script
# Delayed start to avoid rapid respawning
sleep $(((RANDOM % 5)+5))
/usr/bin/docker run --name cowrie --rm=true -p 22:2222 -v /data:/data dtagdevsec/cowrie:latest1603
end script
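All upstart scripts in this commit use the same delayed-start idiom before invoking docker run. A minimal sketch of the arithmetic (bash's $RANDOM is assumed; the sleep itself is left out so the sketch returns immediately):

```shell
#!/bin/bash
# The upstart scripts run: sleep $(((RANDOM % 5)+5))
# i.e. a random delay of 5..9 seconds, so respawned containers
# do not all hit the docker daemon at the same moment.
myDELAY=$(((RANDOM % 5)+5))
echo "$myDELAY"
```

Because RANDOM % 5 yields 0..4, the printed value always falls between 5 and 9 inclusive.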


@@ -1,13 +1,13 @@
 ########################################################
-# T-Pot Community Edition #
+# T-Pot #
 # Dionaea upstart script #
 # #
-# v0.53 by mo, DTAG, 2015-11-02 #
+# v0.04 by mo, DTAG, 2015-12-08 #
 ########################################################
 description "Dionaea"
 author "mo"
-start on (started docker and filesystem)
+start on started docker and filesystem
 stop on runlevel [!2345]
 respawn
 pre-start script
@@ -20,7 +20,7 @@ end script
 script
 # Delayed start to avoid rapid respawning
 sleep $(((RANDOM % 5)+5))
-/usr/bin/docker run --name dionaea --cap-add=NET_BIND_SERVICE --rm=true -p 21:21 -p 42:42 -p 8080:80 -p 135:135 -p 443:443 -p 445:445 -p 1433:1433 -p 3306:3306 -p 5061:5061 -p 5060:5060 -p 69:69/udp -p 5060:5060/udp -v /data/dionaea dtagdevsec/dionaea
+/usr/bin/docker run --name dionaea --cap-add=NET_BIND_SERVICE --rm=true -p 21:21 -p 42:42 -p 8080:80 -p 135:135 -p 443:443 -p 445:445 -p 1433:1433 -p 3306:3306 -p 5061:5061 -p 5060:5060 -p 69:69/udp -p 5060:5060/udp -v /data:/data dtagdevsec/dionaea:latest1603
 end script
 post-start script
 sleep $(((RANDOM % 5)+5))


@@ -1,13 +1,13 @@
 ########################################################
-# T-Pot Community Edition #
+# T-Pot #
 # ELK upstart script #
 # #
-# v0.53 by mo, DTAG, 2015-11-02 #
+# v0.04 by mo, DTAG, 2015-12-08 #
 ########################################################
 description "ELK"
 author "mo"
-start on (started docker and filesystem and started ews and started dionaea and started glastopf and started honeytrap and started kippo and started suricata)
+start on started docker and filesystem
 stop on runlevel [!2345]
 respawn
 pre-start script
@@ -20,7 +20,7 @@ end script
 script
 # Delayed start to avoid rapid respawning
 sleep $(((RANDOM % 5)+5))
-/usr/bin/docker run --name=elk --volumes-from ews --volumes-from suricata -v /data/elk/:/data/elk/ -p 127.0.0.1:64296:8080 --rm=true dtagdevsec/elk
+/usr/bin/docker run --name=elk -v /data:/data -p 127.0.0.1:64296:8080 --rm=true dtagdevsec/elk:latest1603
 end script
 post-start script
 sleep $(((RANDOM % 5)+5))


@ -1,27 +0,0 @@
########################################################
# T-Pot Community Edition #
# EWS upstart script #
# #
# v0.53 by mo, DTAG, 2015-11-02 #
########################################################
description "EWS"
author "mo"
start on (started docker and filesystem and started dionaea and started glastopf and started honeytrap and started kippo)
stop on runlevel [!2345]
respawn
pre-start script
# Remove any existing ews containers
myCID=$(docker ps -a | grep ews | awk '{ print $1 }')
if [ "$myCID" != "" ];
then docker rm -v $myCID;
fi
end script
script
# Delayed start to avoid rapid respawning
sleep $(((RANDOM % 5)+5))
/usr/bin/docker run --name ews --volumes-from dionaea --volumes-from glastopf --volumes-from honeytrap --volumes-from kippo --rm=true -v /data/ews/conf/:/data/ews/conf/ -v /data/ews/ --link kippo:kippo dtagdevsec/ews
end script
post-start script
sleep $(((RANDOM % 5)+5))
end script


@@ -1,13 +1,13 @@
 ########################################################
-# T-Pot Community Edition #
+# T-Pot #
 # Glastopf upstart script #
 # #
-# v0.53 by mo, DTAG, 2015-11-02 #
+# v0.04 by mo, DTAG, 2015-12-08 #
 ########################################################
 description "Glastopf"
 author "mo"
-start on (started docker and filesystem)
+start on started docker and filesystem
 stop on runlevel [!2345]
 respawn
 pre-start script
@@ -20,7 +20,7 @@ end script
 script
 # Delayed start to avoid rapid respawning
 sleep $(((RANDOM % 5)+5))
-/usr/bin/docker run --name glastopf --rm=true -p 80:80 -v /data/glastopf dtagdevsec/glastopf
+/usr/bin/docker run --name glastopf --rm=true -v /data:/data -p 80:80 dtagdevsec/glastopf:latest1603
 end script
 post-start script
 sleep $(((RANDOM % 5)+5))


@@ -1,8 +1,8 @@
 ########################################################
-# T-Pot Community Edition #
+# T-Pot #
 # Honeytrap upstart script #
 # #
-# v0.53 by mo, DTAG, 2015-11-02 #
+# v0.04 by mo, DTAG, 2015-12-08 #
 ########################################################
 description "Honeytrap"
@@ -21,7 +21,7 @@ end script
 script
 # Delayed start to avoid rapid respawning
 sleep $(((RANDOM % 5)+5))
-/usr/bin/docker run --name honeytrap --cap-add=NET_ADMIN --net=host --rm=true -v /data/honeytrap dtagdevsec/honeytrap
+/usr/bin/docker run --name honeytrap --cap-add=NET_ADMIN --net=host --rm=true -v /data:/data dtagdevsec/honeytrap:latest1603
 end script
 post-start script
 sleep $(((RANDOM % 5)+5))


@ -1,27 +0,0 @@
########################################################
# T-Pot Community Edition #
# Kippo upstart script #
# #
# v0.53 by mo, DTAG, 2015-11-02 #
########################################################
description "Kippo"
author "mo"
start on (started docker and filesystem)
stop on runlevel [!2345]
respawn
pre-start script
# Remove any existing kippo containers
myCID=$(docker ps -a | grep kippo | awk '{ print $1 }')
if [ "$myCID" != "" ];
then docker rm -v $myCID;
fi
end script
script
# Delayed start to avoid rapid respawning
sleep $(((RANDOM % 5)+5))
/usr/bin/docker run --name kippo --rm=true -p 22:2222 -v /data/kippo dtagdevsec/kippo
end script
post-start script
sleep $(((RANDOM % 5)+5))
end script


@@ -1,8 +1,8 @@
 ########################################################
-# T-Pot Community Edition #
+# T-Pot #
 # Suricata upstart script #
 # #
-# v0.53 by mo, DTAG, 2015-11-02 #
+# v0.04 by mo, DTAG, 2015-12-08 #
 ########################################################
 description "Suricata"
@@ -24,7 +24,7 @@ end script
 script
 # Delayed start to avoid rapid respawning
 sleep $(((RANDOM % 5)+5))
-/usr/bin/docker run --name suricata --cap-add=NET_ADMIN --net=host --rm=true -v /data/suricata/ dtagdevsec/suricata
+/usr/bin/docker run --name suricata --cap-add=NET_ADMIN --net=host --rm=true -v /data:/data dtagdevsec/suricata:latest1603
 end script
 post-start script
 sleep $(((RANDOM % 5)+5))


@@ -1,10 +1,5 @@
-#default install
-#label install
-# menu label ^Install T-Pot Community Edition
-# kernel /install/vmlinuz
-# append file=/cdrom/tpotce/tpotce.seed initrd=/install/initrd.gz ks=cdrom:/tpotce/ks.cfg debian-installer/locale=en_US console-setup/ask_detect=false keyboard-configuration/layoutcode=de --
 default install
 label install
-menu label ^Install T-Pot Community Edition
+menu label ^Install T-Pot 16.03
 kernel /install/vmlinuz
-append file=/cdrom/tpotce/tpotce.seed initrd=/install/initrd.gz ks=cdrom:/tpotce/ks.cfg console-setup/ask_detect=true --
+append file=/cdrom/tpot/tpot.seed initrd=/install/initrd.gz ks=cdrom:/tpot/ks.cfg console-setup/ask_detect=true --

@@ -1,83 +1,266 @@
 #!/bin/bash
 ########################################################
-# T-Pot Community Edition #
+# T-Pot #
 # .ISO maker #
 # #
-# v0.14 by mo, DTAG, 2015-08-11 #
+# v0.07 by mo, DTAG, 2015-08-12 #
 ########################################################
 # Let's define some global vars
+myBACKTITLE="T-Pot - ISO Maker"
 myUBUNTULINK="http://releases.ubuntu.com/14.04.3/ubuntu-14.04.3-server-amd64.iso"
 myUBUNTUISO="ubuntu-14.04.3-server-amd64.iso"
-myTPOTCEISO="tpotce.iso"
+myTPOTISO="tpot.iso"
-myTPOTCEDIR="tpotceiso"
+myTPOTDIR="tpotiso"
+myTPOTSEED="preseed/tpot.seed"
+myPACKAGES="dialog genisoimage syslinux syslinux-utils pv"
+myAUTHKEYSPATH="installer/keys/authorized_keys"
+myPFXPATH="installer/keys/8021x.pfx"
+myPFXPWPATH="installer/keys/8021x.pw"
+myPFXHOSTIDPATH="installer/keys/8021x.id"
+myINSTALLER2PATH="installer/install2.sh"
+myPROXYCONFIG="installer/etc/proxy"
+myNTPCONFPATH="installer/etc/ntp"
 myTMP="tmp"
-myDEV=$1
-# Let's create a function for colorful output
+# Got root?
-fuECHO () {
+myWHOAMI=$(whoami)
-local myRED=1
+if [ "$myWHOAMI" != "root" ]
-local myWHT=7
+then
-tput setaf $myRED
+echo "Please run as root ..."
-echo $1 "$2"
+exit
-tput setaf $myWHT
+fi
+# Let's clean up at the end or if something goes wrong ...
+function fuCLEANUP {
+rm -rf $myTMP $myTPOTDIR $myPROXYCONFIG $myPFXPATH $myPFXPWPATH $myPFXHOSTIDPATH $myNTPCONFPATH
+echo > $myAUTHKEYSPATH
+if [ -f $myTPOTSEED.bak ];
+then
+mv $myTPOTSEED.bak $myTPOTSEED
+fi
+}
+trap fuCLEANUP EXIT
+# Let's create a function for validating an IPv4 address
+function valid_ip()
+{
+local ip=$1
+local stat=1
+if [[ $ip =~ ^[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}$ ]]; then
+OIFS=$IFS
+IFS='.'
+ip=($ip)
+IFS=$OIFS
+[[ ${ip[0]} -le 255 && ${ip[1]} -le 255 \
+&& ${ip[2]} -le 255 && ${ip[3]} -le 255 ]]
+stat=$?
+fi
+return $stat
 }
-# Let's install all the packages we need
+# Let's check if all dependencies are met
-fuECHO "### Installing packages."
+myINST=""
+for myDEPS in $myPACKAGES;
+do
+myOK=$(dpkg -s $myDEPS | grep ok | awk '{ print $3 }');
+if [ "$myOK" != "ok" ]
+then
+myINST=$(echo $myINST $myDEPS)
+fi
+done
+if [ "$myINST" != "" ]
+then
 apt-get update -y
-apt-get install genisoimage syslinux syslinux-utils -y
+apt-get install $myINST -y
+fi
+# Let's ask if the user wants to run the script ...
+dialog --backtitle "$myBACKTITLE" --title "[ Continue? ]" --yesno "\nThis script will download the latest supported Ubuntu Server and build the T-Pot .iso" 8 50
+mySTART=$?
+if [ "$mySTART" = "1" ];
+then
+exit
+fi
+# Let's ask for the type of installation FULL or SENSOR?
+myFLAVOR=$(dialog --no-cancel --backtitle "$myBACKTITLE" --title "[ Installation type ... ]" --radiolist "" 8 50 2 "FULL" "Install Everything" on "SENSOR" "Install Honeypots & EWS Poster" off 3>&1 1>&2 2>&3 3>&-)
+sed -i 's#^myFLAVOR=.*#myFLAVOR="'$myFLAVOR'"#' $myINSTALLER2PATH
+# Let's ask the user for a proxy ...
+while true;
+do
+dialog --backtitle "$myBACKTITLE" --title "[ Proxy Settings ]" --yesno "\nDo you want to configure a proxy?" 7 50
+myADDPROXY=$?
+if [ "$myADDPROXY" = "0" ]
+then
+myIPRESULT="false"
+while [ "$myIPRESULT" = "false" ];
+do
+myPROXYIP=$(dialog --backtitle "$myBACKTITLE" --no-cancel --title "Proxy IP?" --inputbox "" 7 50 "1.2.3.4" 3>&1 1>&2 2>&3 3>&-)
+if valid_ip $myPROXYIP; then myIPRESULT="true"; fi
+done
+myPORTRESULT="false"
+while [ "$myPORTRESULT" = "false" ];
+do
+myPROXYPORT=$(dialog --backtitle "$myBACKTITLE" --no-cancel --title "Proxy Port (i.e. 3128)?" --inputbox "" 7 50 "3128" 3>&1 1>&2 2>&3 3>&-)
+if [[ $myPROXYPORT =~ ^-?[0-9]+$ ]] && [ $myPROXYPORT -gt 0 ] && [ $myPROXYPORT -lt 65536 ]; then myPORTRESULT="true"; fi
+done
+echo http://$myPROXYIP:$myPROXYPORT > $myPROXYCONFIG
+sed -i.bak 's#d-i mirror/http/proxy.*#d-i mirror/http/proxy string http://'$myPROXYIP':'$myPROXYPORT'/#' $myTPOTSEED
+break
+else
+break
+fi
+done
+# Let's ask the user for ssh keys ...
+while true;
+do
+dialog --backtitle "$myBACKTITLE" --title "[ Add ssh keys? ]" --yesno "\nDo you want to add public key(s) to authorized_keys file?" 8 50
+myADDKEYS=$?
+if [ "$myADDKEYS" = "0" ]
+then
+myKEYS=$(dialog --backtitle "$myBACKTITLE" --fselect "/" 15 50 3>&1 1>&2 2>&3 3>&-)
+if [ -f "$myKEYS" ]
+then
+cat $myKEYS > $myAUTHKEYSPATH
+break
+else
+dialog --backtitle "$myBACKTITLE" --title "[ Try again! ]" --msgbox "\nThis is no regular file." 7 50;
+fi
+else
+echo > $myAUTHKEYSPATH
+break
+fi
+done
+# Let's ask the user for 802.1x data ...
+while true;
+do
+dialog --backtitle "$myBACKTITLE" --title "[ Need 802.1x auth? ]" --yesno "\nDo you want to add a 802.1x host certificate?" 7 50
+myADDPFX=$?
+if [ "$myADDPFX" = "0" ]
+then
+myPFX=$(dialog --backtitle "$myBACKTITLE" --fselect "/" 15 50 3>&1 1>&2 2>&3 3>&-)
+if [ -f "$myPFX" ]
+then
+cp $myPFX $myPFXPATH
+dialog --backtitle "$myBACKTITLE" --title "[ Password protected? ]" --yesno "\nDoes the certificate need your password?" 7 50
+myADDPFXPW=$?
+if [ "$myADDPFXPW" = "0" ]
+then
+myPFXPW=$(dialog --backtitle "$myBACKTITLE" --no-cancel --inputbox "Password?" 7 50 3>&1 1>&2 2>&3 3>&-)
+echo $myPFXPW > $myPFXPWPATH
+fi
+myPFXHOSTID=$(dialog --backtitle "$myBACKTITLE" --no-cancel --inputbox "Host ID?" 7 50 "<HOSTNAME>.<DOMAIN>" 3>&1 1>&2 2>&3 3>&-)
+echo $myPFXHOSTID > $myPFXHOSTIDPATH
+break
+else
+dialog --backtitle "$myBACKTITLE" --title "[ Try again! ]" --msgbox "\nThis is no regular file." 7 50;
+fi
+else
+break
+fi
+done
+# Let's ask the user for a ntp server ...
+while true;
+do
+dialog --backtitle "$myBACKTITLE" --title "[ NTP server? ]" --yesno "\nDo you want to configure a ntp server?" 7 50
+myADDNTP=$?
+if [ "$myADDNTP" = "0" ]
+then
+myIPRESULT="false"
+while [ "$myIPRESULT" = "false" ];
+do
+myNTPIP=$(dialog --backtitle "$myBACKTITLE" --no-cancel --title "NTP IP?" --inputbox "" 7 50 "1.2.3.4" 3>&1 1>&2 2>&3 3>&-)
+if valid_ip $myNTPIP; then myIPRESULT="true"; fi
+done
+tee $myNTPCONFPATH <<EOF
+driftfile /var/lib/ntp/ntp.drift
+statistics loopstats peerstats clockstats
+filegen loopstats file loopstats type day enable
+filegen peerstats file peerstats type day enable
+filegen clockstats file clockstats type day enable
+server $myNTPIP
+restrict -4 default kod notrap nomodify nopeer noquery
+restrict -6 default kod notrap nomodify nopeer noquery
+restrict 127.0.0.1
+restrict ::1
+EOF
+break
+else
+break
+fi
+done
 # Let's get Ubuntu 14.04.2 as .iso
-fuECHO "### Downloading Ubuntu 14.04.3."
 if [ ! -f $myUBUNTUISO ]
-then wget $myUBUNTULINK;
+then
-else fuECHO "### Found it locally.";
+wget $myUBUNTULINK --progress=dot 2>&1 | awk '{print $7+0} fflush()' | dialog --backtitle "$myBACKTITLE" --title "[ Downloading Ubuntu ... ]" --gauge "" 6 70;
+echo 100 | dialog --backtitle "$myBACKTITLE" --title "[ Downloading Ubuntu ... Done! ]" --gauge "" 6 70;
+else
+dialog --infobox "Using previously downloaded .iso ..." 3 50;
 fi
 # Let's loop mount it and copy all contents
-fuECHO "### Mounting .iso and copying all contents."
+mkdir -p $myTMP $myTPOTDIR
-mkdir -p $myTMP $myTPOTCEDIR
 losetup /dev/loop0 $myUBUNTUISO
 mount /dev/loop0 $myTMP
-cp -rT $myTMP $myTPOTCEDIR
+cp -rT $myTMP $myTPOTDIR
-chmod 777 -R $myTPOTCEDIR
+chmod 777 -R $myTPOTDIR
 umount $myTMP
 losetup -d /dev/loop0
 # Let's add the files for the automated install
-fuECHO "### Adding the automated install files."
+mkdir -p $myTPOTDIR/tpot
-mkdir -p $myTPOTCEDIR/tpotce
+cp installer/* -R $myTPOTDIR/tpot/
-cp installer/* -R $myTPOTCEDIR/tpotce/
+cp isolinux/* $myTPOTDIR/isolinux/
-cp isolinux/* $myTPOTCEDIR/isolinux/
+cp kickstart/* $myTPOTDIR/tpot/
-cp kickstart/* $myTPOTCEDIR/tpotce/
+cp preseed/* $myTPOTDIR/tpot/
-cp preseed/* $myTPOTCEDIR/tpotce/
+if [ -d images ];
-chmod 777 -R $myTPOTCEDIR
+then
+cp -R images $myTPOTDIR/tpot/images/
+fi
+chmod 777 -R $myTPOTDIR
 # Let's create the new .iso
-fuECHO "### Now creating the .iso."
+cd $myTPOTDIR
-cd $myTPOTCEDIR
+mkisofs -gui -D -r -V "T-Pot" -cache-inodes -J -l -b isolinux/isolinux.bin -c isolinux/boot.cat -no-emul-boot -boot-load-size 4 -boot-info-table -o ../$myTPOTISO ../$myTPOTDIR 2>&1 | awk '{print $1+0} fflush()' | dialog --backtitle "$myBACKTITLE" --title "[ Building T-Pot .iso ... ]" --gauge "" 5 70 0
-mkisofs -D -r -V "T-Pot CE" -cache-inodes -J -l -b isolinux/isolinux.bin -c isolinux/boot.cat -no-emul-boot -boot-load-size 4 -boot-info-table -o ../$myTPOTCEISO ../$myTPOTCEDIR
+echo 100 | dialog --backtitle "$myBACKTITLE" --title "[ Building T-Pot .iso ... Done! ]" --gauge "" 5 70
 cd ..
-isohybrid $myTPOTCEISO
+isohybrid $myTPOTISO
-# Let's clean up
+# Let's write the image
-fuECHO "### Cleaning up."
+while true;
-rm -rf $myTMP $myTPOTCEDIR
+do
+dialog --backtitle "$myBACKTITLE" --yesno "\nWrite .iso to USB drive?" 7 50
-# Let's write the image to $myDEV or show instructions
+myUSBCOPY=$?
-if [ -b $myDEV ] && [ ! -z $1 ]
+if [ "$myUSBCOPY" = "0" ]
 then
-fuECHO "### Found a block device on $myDEV"
+myTARGET=$(dialog --backtitle "$myBACKTITLE" --title "[ Select target device ... ]" --menu "" 16 40 10 $(lsblk -io NAME,SIZE -dnp) 3>&1 1>&2 2>&3 3>&-)
-fuECHO "### Writing image to device. Please wait..."
+if [ "$myTARGET" != "" ]
-dd bs=1M if="$myTPOTCEISO" of="$myDEV"
+then
-else
+dialog --backtitle "$myBACKTITLE" --yesno "\nWrite .iso to "$myTARGET"?" 7 50
-fuECHO "### Install to usb stick"
+myWRITE=$?
-fuECHO "###### Show devices: df or fdisk -l"
+if [ "$myWRITE" = "0" ]
-fuECHO "###### Write to device: dd bs=1M if="$myTPOTCEISO" of=<path to device>"
+then
+umount $myTARGET 2>&1 || true
+(dd if="$myTPOTISO" | pv -n -s $(ls --block-size=1M -vs "$myTPOTISO" | awk '{print $1}')m | dd bs=1M of="$myTARGET") 2>&1 | dialog --backtitle "$myBACKTITLE" --title "[ Writing .iso to target ... ]" --gauge "" 5 70 0
+echo 100 | dialog --backtitle "$myBACKTITLE" --title "[ Writing .iso to target ... Done! ]" --gauge "" 5 70
+break;
 fi
+fi
+else
+break;
+fi
+done
-# Done
-fuECHO "### Done."
 exit 0
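The new makeiso.sh validates user-supplied proxy and NTP addresses with the `valid_ip` helper: a regex pass for the dotted-quad shape, then a per-octet `-le 255` range check. The function lifts out essentially verbatim and can be exercised standalone (bash is required for `[[ =~ ]]` and arrays):

```shell
#!/bin/bash
# IPv4 validator as used by makeiso.sh: shape check via regex, then range
# check on each octet after splitting on '.'
valid_ip() {
  local ip=$1
  local stat=1
  if [[ $ip =~ ^[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}$ ]]; then
    local OIFS=$IFS
    IFS='.'
    ip=($ip)            # reuse $ip as an array of the four octets
    IFS=$OIFS
    [[ ${ip[0]} -le 255 && ${ip[1]} -le 255 && ${ip[2]} -le 255 && ${ip[3]} -le 255 ]]
    stat=$?             # 0 only if every octet is in range
  fi
  return $stat
}

valid_ip "192.168.1.1" && echo "192.168.1.1 is valid"
valid_ip "256.1.1.1" || echo "256.1.1.1 is rejected"
```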

@@ -31,15 +31,15 @@ d-i clock-setup/ntp boolean true
 tasksel tasksel/first multiselect ubuntu-server
 # Packages
-d-i pkgsel/include string openssh-server
+d-i pkgsel/include string curl dialog dstat ethtool genisoimage git htop iw libpam-google-authenticator lm-sensors ntp openssh-server syslinux pv vim wireless-tools wpasupplicant
 # Update Policy
 d-i pkgsel/update-policy select unattended-upgrades
 # Post install
 d-i preseed/late_command string \
-cp /cdrom/tpotce/install1.sh /target/etc/rc.local; \
+cp /cdrom/tpot/rc.local.install /target/etc/rc.local; \
-cp -r /cdrom/tpotce/ /target/root/
+cp -r /cdrom/tpot/ /target/root/
 # Reboot
 d-i finish-install/reboot_in_progress note

@@ -1,60 +0,0 @@
-#!/bin/bash
-########################################################
-# T-Pot Community Edition #
-# Volume bug fix script #
-# #
-# v0.02 by mo, DTAG, 2015-08-14 #
-########################################################
-myFIXPATH="/tpot-volume-fix"
-myLOCK="/var/run/check.lock"
-myIMAGECONFPATH="/data/images.conf"
-# Let's set check.lock to prevent the check scripts from execution
-touch $myLOCK
-# Since there are different versions out there let's update to the latest version first
-apt-get update -y
-apt-get upgrade -y
-apt-get install lxc-docker -y
-# Let's stop all docker and t-pot related services
-for i in $(cat $myIMAGECONFPATH); do service $i stop; done
-service docker stop
-# Let's create a tmp and move some configs to prevent unwanted intervention
-mkdir $myFIXPATH
-for i in $(cat $myIMAGECONFPATH); do mv /etc/init/$i.conf $myFIXPATH; done
-mv /etc/crontab $myFIXPATH
-# Let's remove docker and all associated files
-apt-get purge lxc-docker -y
-apt-get autoremove -y
-rm -rf /var/lib/docker/
-rm -rf /var/run/docker/
-# Let's reinstall docker using the new docker repo (old one is deprecated)
-wget -qO- https://get.docker.com/gpg | apt-key add -
-wget -qO- https://get.docker.com/ | sh
-# Let's pull the images
-for i in $(cat $myIMAGECONFPATH); do /usr/bin/docker pull dtagdevsec/$i:latest; done
-# Let's clone the tpotce repo and replace the buggy configs
-git clone https://github.com/dtag-dev-sec/tpotce.git $myFIXPATH/tpotce/
-cp $myFIXPATH/tpotce/installer/bin/check.sh /usr/bin/
-cp $myFIXPATH/tpotce/installer/bin/dcres.sh /usr/bin/
-for i in $(cat $myIMAGECONFPATH); do cp $myFIXPATH/tpotce/installer/upstart/$i.conf /etc/init/; done
-cp $myFIXPATH/crontab /etc/
-tee -a /etc/crontab <<EOF
-# Check for updated packages every sunday, upgrade and reboot
-27 16 * * 0 root sleep \$((RANDOM %600)); apt-get autoclean -y; apt-get autoremove -y; apt-get update -y; apt-get upgrade -y; apt-get upgrade docker-engine -y; sleep 5; reboot
-EOF
-# Let's remove the check.lock and allow scripts to execute again
-rm $myLOCK
-# Let's start the services again
-for i in $(cat $myIMAGECONFPATH); do service $i start && sleep 2; done
-sleep 10
-status.sh

@@ -1,41 +0,0 @@
-#!/bin/bash
-myLOCK="/var/run/check.lock"
-myIMAGECONFPATH="/data/images.conf"
-# Let's set check.lock to prevent the check scripts from execution
-touch $myLOCK
-# Let's stop all docker and t-pot related services
-for i in $(cat $myIMAGECONFPATH); do service $i stop; done
-service docker stop
-# Since there are different versions out there let's update to the latest version first
-apt-get update -y
-apt-get upgrade -y
-apt-get install lxc-docker -y
-# Let's remove deprecated lxc-docker
-apt-get purge lxc-docker -y
-apt-get autoremove -y
-rm /etc/apt/sources.list.d/docker.list
-# Let's install docker
-echo "### Installing docker."
-wget -qO- https://get.docker.com/gpg | apt-key add -
-wget -qO- https://get.docker.com/ | sh
-tee -a /etc/crontab <<EOF
-# Check for updated packages every sunday, upgrade and reboot
-27 16 * * 0 root sleep \$((RANDOM %600)); apt-get autoclean -y; apt-get autoremove -y; apt-get update -y; apt-get upgrade -y; apt-get upgrade docker-engine -y; sleep 5; reboot
-EOF
-# Let's remove the check.lock and allow scripts to execute again
-rm $myLOCK
-# Let's restart the containers
-/usr/bin/dcres.sh
-# Let's reboot if so desired
-echo "Done. Will reboot in 60 seconds, press CTRL+C now to abort."
-sleep 60
-reboot
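Both removed fix scripts append the same crontab entry, whose `sleep \$((RANDOM %600))` jitters the Sunday upgrade-and-reboot by up to ten minutes so a fleet of sensors does not reboot in lockstep. The jitter expression in isolation (bash, since `RANDOM` is a bash-ism):

```shell
#!/bin/bash
# Randomized delay from the weekly-upgrade crontab entry: 0-599 seconds,
# spreading upgrades and reboots over a 10-minute window
myJITTER=$((RANDOM % 600))
echo "would sleep $myJITTER seconds before upgrading"
```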