Merge pull request #61 from dtag-dev-sec/16.10

Update for T-Pot 16.10
Marco Ochse 2016-10-27 18:48:42 +02:00 committed by GitHub
commit 693a38bb69
93 changed files with 23105 additions and 2530 deletions


@@ -17,7 +17,7 @@ Thank you :smiley:
### FAQ
##### Where can I find the honeypot logs?
###### The honeypot logs are located in `/data/`. You have to login via ssh and run `sudo cd /data/`. Do not change any permissions here or T-Pot will fail to work.
###### The honeypot logs are located in `/data/`. You have to login via ssh and run `sudo su -` and then `cd /data/`. Do not change any permissions here or T-Pot will fail to work.
-

184
README.md

@@ -1,15 +1,16 @@
# T-Pot 16.03 Image Creator
[![Gitter](https://img.shields.io/gitter/room/nwjs/nw.js.svg?maxAge=2592000)](https://gitter.im/dtag-dev-sec/tpotce)
# T-Pot 16.10 Image Creator
This repository contains the necessary files to create the **[T-Pot community honeypot](http://dtag-dev-sec.github.io/)** ISO image.
The image can then be used to install T-Pot on a physical or virtual machine.
Last year we released
[T-Pot 15.03](http://dtag-dev-sec.github.io/mediator/feature/2015/03/17/concept.html)
as open source and we received lots of positive feedback and naturally feature requests which encouraged us to continue development and share our work as open source and are proud to present to you ...
In March 2016 we released
[T-Pot 16.03](http://dtag-dev-sec.github.io/mediator/feature/2016/03/11/t-pot-16.03.html)
# T-Pot 16.03
# T-Pot 16.10
T-Pot 16.03 is based on
T-Pot 16.10 now uses Ubuntu Server 16.04 LTS and is based on
[docker](https://www.docker.com/)
@@ -25,8 +26,13 @@ and includes dockerized versions of the following honeypots
Furthermore we use the following tools
* [suricata](http://suricata-ids.org/) a Network Security Monitoring engine and the
* [ELK stack](https://www.elastic.co/videos) to beautifully visualize all the events captured by T-Pot.
* [Elasticsearch Head](https://mobz.github.io/elasticsearch-head/) a web front end for browsing and interacting with an Elastic Search cluster.
* [Netdata](http://my-netdata.io/) for real-time performance monitoring.
* [Portainer](http://portainer.io/) a web based UI for docker.
* [Suricata](http://suricata-ids.org/) a Network Security Monitoring engine.
* [Wetty](https://github.com/krishnasrinivas/wetty) a web based SSH client.
# TL;DR
@@ -56,7 +62,7 @@ In case you already have an Ubuntu 14.04.x running in your datacenter and are un
- [First Run](#firstrun)
- [System Placement](#placement)
- [Options](#options)
- [Enabling SSH](#ssh)
- [SSH and web access](#ssh)
- [Kibana Dashboard](#kibana)
- [Maintenance](#maintenance)
- [Community Data Submission](#submission)
@@ -71,39 +77,58 @@ In case you already have an Ubuntu 14.04.x running in your datacenter and are un
<a name="background"></a>
# Changelog
- **Docker** was updated to the latest **1.10.x** release
- **ELK** was updated to the latest **Kibana 4.4.x**, **Elasticsearch 2.2.x** and **Logstash 2.2.x** releases.
- More than **100 Visualizations** compiled to 12 individual **Dashboards** for every honeypot now allow you to monitor the *honeypot events* captured on your T-Pot installation; a huge improvement over T-Pot 15.03 which was only capable of showing Suricata NSM events.
- Thanks to Kibana 4.x SSH port forwarding can now utilize any user defined local port
ssh -p 64295 -l tsec -N -L4711:127.0.0.1:64296 <yourHoneypotIPaddress>
- **IP to AS Lookups** are now provided within Kibana dashboard, as well as some smart links to research IP reputation, Suricata Rules or AS information when in Discover mode.
- **ElasticSearch** indexes will now be kept for <=90 days, the time period may be adjusted in `/etc/crontab`.
- **Suricata** was updated to the latest **3.0** version including the latest **Emerging Threats** community ruleset.
- **P0f** is now part of the Suricata container, passively fingerprinting and guessing the involving OS.
- **Conpot**, **ElasticPot** and **eMobility** are being introduced as new honeypots in T-Pot.
- **Cowrie** replaces **Kippo** as SSH honeypot since it offers huge improvements over Kippo such as *(SFTP-support, exec-support, SSH-tunneling, advanced logging, JSON logging, etc.)*.
- With **Conpot** and **eMobility** we are now offering an experimental **Industrial Installation Option**.
- **T-Pot Image Creator** was completely rewritten to offer a more convenient experience for creating your personal T-Pot image (*802.1x authentication, proxy support, public key for SSH and pre defined NTP server*). Docker images can be preloaded using the experimental **`getimages.sh`** script and will be exported to the installation image.
- T-Pot itself and all of its containers are now based on **Ubuntu Server 14.04.4 LTS** and thus automatically benefit from the latest features introduced by Cannonical for Ubuntu Server.
- **Docker** containers are now storing important log data outside the container in `/data/<container-name>` allowing easy access from the host and improving container startup and restart speed.
- The **upstart** scripts have been rewritten to support storing data on the host either volatile (*default*) or persistent (`/data/persistence.on`).
- Depending on the honeypot **EWS-Poster** now supports extracting some logging information as JSON.
- The **`/usr/bin/backup_elk.sh`** allows you to backup all ElasticSearch indexes including `.kibana` and `logstash` which contain all information to restore your data on a freshly installed machine simply by entering `tar xvfz <backup-name>.tgz -C /`.
- The **`enable_ssh.sh`** script has been removed and is now part of a more convenient **`2fa_enable.sh`** script.
- Size limits for the `/data` have been lifted and swap space is now 8 GB.
- The number of **installation reboots** has been reduced to **2**. The first to finish the initial Ubuntu Server installation and the second after setting up T-Pot and its dependencies.
- Some packages are now be installed directly from the installation image instead of downloading them.
- **[Update 20160313]** - T-Pot host `/var/log/syslog` and `/var/log/auth.log` will now be forwarded to the ELK-stack.
- **Ubuntu 16.04** is now being used as T-Pot's OS base
- **Size does matter** 😅
- `tpot.iso` is now based on **Ubuntu's** network installer, reducing the image download size from 650MB to only **50MB**
- All docker images have been rebuilt to reduce the image size by at least 50MB, in some cases even by 400-600MB
- An "Everything" installation takes roughly 2GB less download (counting from the initial image download)
- **Introducing** new tools that make things a lot easier for new users
- [Elasticsearch Head](https://mobz.github.io/elasticsearch-head/) a web front end for browsing and interacting with an Elastic Search cluster.
- [Netdata](http://my-netdata.io/) for real-time performance monitoring.
- [Portainer](http://portainer.io/) a web based UI for docker.
- [Wetty](https://github.com/krishnasrinivas/wetty) a web based SSH client.
- **NGINX** implemented as HTTPS reverse proxy
- Access Kibana, ES Head plugin, UI-for-Docker, WebSSH and Netdata via browser!
- A two-factor authenticated SSH tunnel is no longer needed!
- **Installation** procedure improved
- Set your own password for the *tsec* user
- Choose your installation type without the need of building your own image
- Set up a remote user / password for secure web access, including a self-signed certificate
- Easy to remember hostnames
- **First login** easy and secure
- Access from console, ssh or web
- No two-factor authentication needed for ssh when logging in from RFC1918 networks
- Public-key authentication is enforced for ssh connections from outside RFC1918 networks
- **Systemd** now supersedes *upstart* as init system. All upstart scripts were ported to systemd along with the following improvements:
- Improved start / stop handling of containers
- Set persistence individually per container startup scripts (`/etc/systemd/system`)
- Set persistence globally (`/usr/bin/clean.sh`)
- **Honeypot updates and improvements**
- **Conpot** now supports **JSON logging**; many thanks for making this feature possible go to:
- [Andrea Pasquale](https://github.com/adepasquale),
- [Danilo Massa](https://github.com/danilo-massa) &
- [Johnny Vestergaard](https://github.com/johnnykv)
- **Cowrie** now supports **telnet**, which is highly appreciated; thank you,
- [Michel Oosterhof](https://github.com/micheloosterhof)
- **Dionaea** now supports **JSON logging**; many thanks for making this feature possible go to:
- [PhiBo](https://github.com/phibos)
- **Elasticpot** now supports **logging all queries and requests**; many thanks for making this feature possible go to:
- [Markus Schmall](https://github.com/schmalle)
- **Honeytrap** now supports **JSON logging**; many thanks for making this feature possible go to:
- [Andrea Pasquale](https://github.com/adepasquale)
- **Updates**
- **Docker** was updated to the latest **1.12.2** release
- **ELK** was updated to the latest **Kibana 4.6.2**, **Elasticsearch 2.4.1** and **Logstash 2.4.0** releases.
- **Suricata** was updated to the latest **3.1.2** version including the latest **Emerging Threats** community ruleset.
- We now have **150 Visualizations** pre-configured and compiled to 14 individual **Kibana Dashboards** for every honeypot. Monitor all *honeypot events* locally on your T-Pot installation. Aside from *honeypot events* you can also view *Suricata NSM, Syslog and NGINX* events for a quick overview of local host events.
- More **Smart links** are now included.
<a name="concept"></a>
# Technical Concept
T-Pot is based on Ubuntu Server 14.04.4 LTS.
T-Pot is based on the network installer of Ubuntu Server 16.04.1 LTS.
The honeypot daemons as well as other support components being used have been paravirtualized using [docker](http://docker.io).
This allowed us to run multiple honeypot daemons on the same network interface without problems make the entire system very low maintenance. <br>The encapsulation of the honeypot daemons in docker provides a good isolation of the runtime environments and easy update mechanisms.
This allows us to run multiple honeypot daemons on the same network interface without problems, thus making the entire system very low maintenance. <br>The encapsulation of the honeypot daemons in docker provides good isolation of the runtime environments and easy update mechanisms.
In T-Pot we combine the dockerized honeypots
[conpot](http://conpot.org/),
@@ -119,12 +144,12 @@ In T-Pot we combine the dockerized honeypots
![Architecture](https://raw.githubusercontent.com/dtag-dev-sec/tpotce/master/doc/architecture.png)
All data in docker is volatile. Once a docker container crashes, all data produced within its environment is gone and a fresh instance is restarted. Hence, for some data that needs to be persistent, i.e. config files, we have a persistent storage **`/data/`** on the host in order to make it available and persistent across container or system restarts.<br>
Important log data is now also stored outside the container in `/data/<container-name>` allowing easy access to logs from within the host and. The **upstart** scripts have been adjusted to support storing data on the host either volatile (*default*) or persistent (`/data/persistence.on`).
Important log data is now also stored outside the container in `/data/<container-name>`, allowing easy access to logs from within the host. The **systemd** scripts have been adjusted to support storing data on the host either volatile (*default*) or persistent (adjust individual systemd scripts in `/etc/systemd/system` or use the global setting in `/usr/bin/clean.sh`).
Basically, what happens when the system is booted up is the following:
- start host system
- start all the necessary services (i.e. docker-engine)
- start all the necessary services (i.e. docker-engine, reverse proxy, etc.)
- start all docker containers (honeypots, nms, elk)
Within the T-Pot project, we provide all the tools and documentation necessary to build your own honeypot system and contribute to our [community data view](http://sicherheitstacho.eu/?peers=communityPeers), a separate channel on our [Sicherheitstacho](http://sicherheitstacho.eu) that is powered by T-Pot community data.
@@ -141,13 +166,15 @@ The individual docker configurations etc. we used can be found here:
- [emobility](https://github.com/dtag-dev-sec/emobility)
- [glastopf](https://github.com/dtag-dev-sec/glastopf)
- [honeytrap](https://github.com/dtag-dev-sec/honeytrap)
- [netdata](https://github.com/dtag-dev-sec/netdata)
- [portainer](https://github.com/dtag-dev-sec/ui-for-docker)
- [suricata](https://github.com/dtag-dev-sec/suricata)
<a name="requirements"></a>
# System Requirements
Depending on your installation type, whether you install on [real hardware](#hardware) or in a [virtual machine](#vm), make sure your designated T-Pot system meets the following requirements:
##### T-Pot Installation (Cowrie, Dionaea, ElasticPot, Glastopf, Honeytrap, ELK, Suricata+P0f)
##### T-Pot Installation (Cowrie, Dionaea, ElasticPot, Glastopf, Honeytrap, ELK, Suricata+P0f & Tools)
When installing the T-Pot ISO image, make sure the target system (physical/virtual) meets the following minimum requirements:
- 4 GB RAM (6-8 GB recommended)
@@ -156,7 +183,6 @@ When installing the T-Pot ISO image, make sure the target system (physical/virtu
- A working internet connection
##### Sensor Installation (Cowrie, Dionaea, ElasticPot, Glastopf, Honeytrap)
This installation type is currently only available via [ISO Creator](https://github.com/dtag-dev-sec).
When installing the T-Pot ISO image, make sure the target system (physical/virtual) meets the following minimum requirements:
- 3 GB RAM (4-6 GB recommended)
@@ -164,8 +190,7 @@ When installing the T-Pot ISO image, make sure the target system (physical/virtu
- Network via DHCP
- A working internet connection
##### Industrial Installation (ConPot, eMobility, ELK, Suricata+P0f)
This installation type is currently only available via [ISO Creator](https://github.com/dtag-dev-sec) and remains experimental.
##### Industrial Installation (ConPot, eMobility, ELK, Suricata+P0f & Tools)
When installing the T-Pot ISO image, make sure the target system (physical/virtual) meets the following minimum requirements:
- 4 GB RAM (8 GB recommended)
@@ -173,8 +198,7 @@ When installing the T-Pot ISO image, make sure the target system (physical/virtu
- Network via DHCP
- A working internet connection
##### Everything Installation (Everything)
This installation type is currently only available via [ISO Creator](https://github.com/dtag-dev-sec).
##### Everything Installation (Everything, all of the above)
When installing the T-Pot ISO image, make sure the target system (physical/virtual) meets the following minimum requirements:
- 8 GB RAM
@@ -192,7 +216,7 @@ Secondly, decide where you want to let the system run: [real hardware](#hardware
<a name="prebuilt"></a>
## Prebuilt ISO Image
We provide an installation ISO image for download (~600MB), which is created using the same [tool](https://github.com/dtag-dev-sec/tpotce) you can use yourself in order to create your own image. It will basically just save you some time downloading components and creating the ISO image.
We provide an installation ISO image for download (~50MB), which is created using the same [tool](https://github.com/dtag-dev-sec/tpotce) you can use yourself in order to create your own image. It will basically just save you some time downloading components and creating the ISO image.
You can download the prebuilt installation image [here](http://community-honeypot.de/tpot.iso) and jump to the [installation](#vm) section. The ISO image is hosted by our friends from [Strato](http://www.strato.de) / [Cronon](http://www.cronon.de).
shasum tpot.iso
@@ -203,7 +227,7 @@ You can download the prebuilt installation image [here](http://community-honeypo
For transparency reasons and to give you the ability to customize your install, we provide you the [ISO Creator](https://github.com/dtag-dev-sec/tpotce) that enables you to create your own ISO installation image.
**Requirements to create the ISO image:**
- Ubuntu 14.04.4 or newer as host system (others *may* work, but remain untested)
- Ubuntu 16.04.x or newer as host system (others *may* work, but remain untested)
- 4GB of free memory
- 32GB of free storage
- A working internet connection
@@ -216,11 +240,11 @@ For transparency reasons and to give you the ability to customize your install,
cd tpotce
2. Invoke the script that builds the ISO image.
The script will download and install dependencies necessary to build the image on the invoking machine. It will further download the ubuntu base image (~600MB) which T-Pot is based on.
The script will download and install the dependencies necessary to build the image on the invoking machine. It will further download the Ubuntu network installer image (~50MB) on which T-Pot is based.
sudo ./makeiso.sh
After a successful build, you will find the ISO image `tpot.iso` in your directory.
After a successful build, you will find the ISO image `tpot.iso` along with a SHA256 checksum `tpot.sha256` in your directory.
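If you want to verify the image against that checksum, a minimal check could look like this (assuming `tpot.sha256` uses the standard `sha256sum` format and sits next to `tpot.iso`):

    sha256sum -c tpot.sha256

A result of `tpot.iso: OK` indicates the image was built or downloaded without corruption.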
<a name="vm"></a>
## Running in VM
@@ -250,16 +274,20 @@ Whereas most CD burning tools allow you to burn from ISO images, the procedure t
<a name="firstrun"></a>
## First Run
The installation requires very little interaction, only some locales and keyboard settings have to be answered. Everything else will be configured automatically. The system will reboot two times. Make sure it can access the internet as it needs to download the updates and the dockerized honeypot components. Depending on your network connection and the chosen installation type, the installation may take some time. During our tests (50Mbit down, 10Mbit up), the installation was usually finished within <=30 minutes.
The installation requires very little interaction, only some locales and keyboard settings have to be answered. Everything else will be configured automatically. The system will reboot two times. Make sure it can access the internet as it needs to download the updates and the dockerized honeypot components. Depending on your network connection and the chosen installation type, the installation may take some time. During our tests (50Mbit down, 10Mbit up), the installation is usually finished within <=30 minutes.
Once the installation is finished, the system will automatically reboot and you will be presented with the T-Pot login screen. The user credentials for the first login are:
- user: *tsec*
- pass: *tsec*
- user: **tsec**
- pass: **password you chose during the installation**
You will need to set a new password after first login.
All honeypot services are preconfigured and are starting automatically.
You can also log in from your browser: ``https://<your.ip>:64297``
- user: **user you chose during the installation**
- pass: **password you chose during the installation**
All honeypot services are started automatically.
<a name="placement"></a>
# System Placement
@@ -269,10 +297,9 @@ If you are behind a NAT gateway (e.g. home router), here is a list of ports that
| Honeypot|Transport|Forwarded ports|
|---|---|---|
| conpot | TCP | 81, 102, 502 |
| conpot | UDP | 161 |
| cowrie | TCP | 22 |
| dionaea | TCP | 21, 42, 135, 443, 445, 1433, 3306, 5060, 5061, 8081 |
| conpot | TCP | 1025, 50100 |
| cowrie | TCP | 22, 23 |
| dionaea | TCP | 21, 42, 135, 443, 445, 1433, 1723, 1883, 1900, 3306, 5060, 5061, 8081, 11211 |
| dionaea | UDP | 69, 5060 |
| elasticpot | TCP | 9200 |
| emobility | TCP | 8080 |
@@ -284,6 +311,7 @@ If you are behind a NAT gateway (e.g. home router), here is a list of ports that
Basically, you can forward as many TCP ports as you want, as honeytrap dynamically binds any TCP port that is not covered by the other honeypot daemons.
In case you need external SSH access, forward TCP port 64295 to T-Pot, see below.
In case you need external web access, forward TCP port 64297 to T-Pot, see below.
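If your NAT gateway happens to be a Linux box, the forwarding could look roughly like the following sketch (the interface `eth0` and the internal T-Pot address `192.168.1.10` are placeholders for your own setup; consumer routers expose the same mappings through their port forwarding dialog):

    # forward external SSH and web access to the T-Pot host (example addresses)
    iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 64295 -j DNAT --to-destination 192.168.1.10:64295
    iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 64297 -j DNAT --to-destination 192.168.1.10:64297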
T-Pot requires outgoing http and https connections for updates (ubuntu, docker) and attack submission (ewsposter, hpfeeds).
@@ -294,23 +322,25 @@ The system is designed to run without any interaction or maintenance and automat
We know, for some this may not be enough. So here come some ways to further inspect the system and change configuration parameters.
<a name="ssh"></a>
## Enabling 2FA & SSH
By default, the SSH daemon is disabled. However, if you want to be able to login remotely via SSH and / or enable two-factor authentication (2fa) by using an authenticator app i.e. [Google Authenticator](https://support.google.com/accounts/answer/1066447?hl=en) just run the following script as the user *tsec*. ***Do not run it as root or via sudo***. Otherwise the setup of the two factor authentication will be bound to the user root who is not permitted to login remotely.
## SSH and web access
By default, the SSH daemon only allows access on **tcp/64295** with a user / password combination from RFC1918 networks. However, if you want to be able to log in remotely via SSH from other networks, you need to put your SSH keys on the host as described below.<br>
The daemon is configured to prevent password login from official (non-RFC1918) IP addresses, so pubkey authentication must be used instead. Copy your SSH key file to `/home/tsec/.ssh/authorized_keys` and set the appropriate permissions (`chmod 600 authorized_keys`) as well as the correct ownership (`chown tsec:tsec authorized_keys`).
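On the T-Pot host this boils down to something like the following (the key file name `id_rsa.pub` is only an example for your own public key):

    # as root, e.g. via the console or WebSSH
    mkdir -p /home/tsec/.ssh
    cat id_rsa.pub >> /home/tsec/.ssh/authorized_keys
    chmod 600 /home/tsec/.ssh/authorized_keys
    chown tsec:tsec /home/tsec/.ssh/authorized_keys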
~/2fa_enable.sh
If you do not have an SSH client at hand and still want to access the machine via SSH, you can do so by directing your browser to `https://<your.ip>:64297`, enter
Afterwards you can login via SSH using the password you set for the user *tsec* and use the authenticator token as the second authentication factor.
The script will also enable the SSH daemon on **tcp/64295**. It is configured to prevent password login and use pubkey-authentication or challenge-response instead. We recommend using pubkey-authentication; just copy your SSH keyfile to `/home/tsec/.ssh/authorized_keys` and set the appropriate permissions (`chmod 600 authorized_keys`) as well as the correct ownership (`chown tsec:tsec authorized_keys`).
- user: **user you chose during the installation**
- pass: **password you chose during the installation**
and choose **WebSSH** from the navigation bar. You will be prompted to allow access for this connection and enter the password for the user **tsec**.
<a name="kibana"></a>
## Kibana Dashboard
To access the kibana dashboard, ensure you have [enabled SSH](#ssh) on T-Pot. If you have you can use [SSH port forwarding](http://explainshell.com/explain?cmd=ssh+-p+64295+-l+tsec+-N+-L8080%3A127.0.0.1%3A64296+yourHoneypotIPaddress) to access the kibana dashboard (make sure you leave the terminal open).
Just open a web browser, connect to `https://<your.ip>:64297`, enter
ssh -p 64295 -l tsec -N -L8080:127.0.0.1:64296 <yourHoneypotIPaddress>
- user: **user you chose during the installation**
- pass: **password you chose during the installation**
Finally, open a web browser and access [http://127.0.0.1:8080](http://127.0.0.1:8080). The kibana dashboard can be customized to fit your needs. By default, we haven't added any filtering, because the filters depend on your setup. E.g. you might want to filter out your incoming administrative ssh connections and connections to update servers.
and the **Kibana dashboard** will automagically load. The Kibana dashboard can be customized to fit your needs. By default, we haven't added any filtering, because the filters depend on your setup. E.g. you might want to filter out your incoming administrative ssh connections and connections to update servers.
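As a starting point, a query such as `NOT src_ip: "203.0.113.10"` would hide events coming from a single administrative host; the field name `src_ip` and the address are purely illustrative and depend on your setup and index mapping.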
![Dashboard](https://raw.githubusercontent.com/dtag-dev-sec/tpotce/master/doc/dashboard.png)
@@ -340,14 +370,9 @@ Please do not change anything other than those settings and only if you absolute
# Roadmap
As with every development there is always room for improvements ...
- Move to Ubuntu Server 16.04 LTS
- Further improve on JSON logging
- Move from upstart to systemd (only if necessary)
- Bump ELK-stack to 5.0
- Move from Glastopf to SNARE
- Work on an upgrade strategy
- Improve backup script, include restore script
- Tweaking 😎
- Documentation 😎
Some features may be provided with updated docker images, others may require some hands on from your side.
@@ -380,11 +405,12 @@ For general feedback you can write to cert @ telekom.de.
# Licenses
The software that T-Pot is built on, uses the following licenses.
<br>GPLv2: [conpot (by Lukas Rist)](https://github.com/mushorg/conpot/blob/master/LICENSE.txt), [dionaea](https://github.com/DinoTools/dionaea/blob/master/LICENSE), [honeytrap (by Tillmann Werner)](https://github.com/armedpot/honeytrap/blob/master/LICENSE), [suricata](http://suricata-ids.org/about/open-source/)
<br>GPLv3: [elasticpot (by Markus Schmall)](https://github.com/schmalle/ElasticPot), [emobility (by Mohamad Sbeiti)](https://github.com/dtag-dev-sec/emobility/blob/master/LICENSE), [ewsposter (by Markus Schroer)](https://github.com/dtag-dev-sec/ews/), [glastopf (by Lukas Rist)](https://github.com/glastopf/glastopf/blob/master/GPL)
<br>Apache 2 License: [elasticsearch](https://github.com/elasticsearch/elasticsearch/blob/master/LICENSE.txt), [logstash](https://github.com/elasticsearch/logstash/blob/master/LICENSE), [kibana](https://github.com/elasticsearch/kibana/blob/master/LICENSE.md), [docker] (https://github.com/docker/docker/blob/master/LICENSE)
<br>MIT License: [tagcloud (by Shelby Sturgis)](https://github.com/stormpython/tagcloud/blob/master/LICENSE), [heatmap (by Shelby Sturgis)](https://github.com/stormpython/heatmap/blob/master/LICENSE)
<br>GPLv3: [elasticpot (by Markus Schmall)](https://github.com/schmalle/ElasticPot), [emobility (by Mohamad Sbeiti)](https://github.com/dtag-dev-sec/emobility/blob/master/LICENSE), [ewsposter (by Markus Schroer)](https://github.com/dtag-dev-sec/ews/), [glastopf (by Lukas Rist)](https://github.com/glastopf/glastopf/blob/master/GPL), [netdata](https://github.com/firehol/netdata/blob/master/LICENSE.md)
<br>Apache 2 License: [elasticsearch](https://github.com/elasticsearch/elasticsearch/blob/master/LICENSE.txt), [logstash](https://github.com/elasticsearch/logstash/blob/master/LICENSE), [kibana](https://github.com/elasticsearch/kibana/blob/master/LICENSE.md), [docker](https://github.com/docker/docker/blob/master/LICENSE), [elasticsearch-head](https://github.com/mobz/elasticsearch-head/blob/master/LICENCE)
<br>MIT License: [tagcloud (by Shelby Sturgis)](https://github.com/stormpython/tagcloud/blob/master/LICENSE), [heatmap (by Shelby Sturgis)](https://github.com/stormpython/heatmap/blob/master/LICENSE), [wetty](https://github.com/krishnasrinivas/wetty/blob/master/LICENSE)
<br>[cowrie (copyright disclaimer by Upi Tamminen)](https://github.com/micheloosterhof/cowrie/blob/master/doc/COPYRIGHT)
<br>[Ubuntu licensing](http://www.ubuntu.com/about/about-ubuntu/licensing)
<br>[Portainer](https://github.com/portainer/portainer/blob/develop/LICENSE)
<a name="credits"></a>
# Credits
@@ -398,6 +424,7 @@ Without open source and the fruitful development community we are proud to be a
* [docker](https://github.com/docker/docker/graphs/contributors)
* [elasticpot](https://github.com/schmalle/ElasticPot/graphs/contributors)
* [elasticsearch](https://github.com/elastic/elasticsearch/graphs/contributors)
* [elasticsearch-head](https://github.com/mobz/elasticsearch-head/graphs/contributors)
* [emobility](https://github.com/dtag-dev-sec/emobility/graphs/contributors)
* [ewsposter](https://github.com/armedpot/ewsposter/graphs/contributors)
* [glastopf](https://github.com/mushorg/glastopf/graphs/contributors)
@@ -405,17 +432,20 @@ Without open source and the fruitful development community we are proud to be a
* [honeytrap](https://github.com/armedpot/honeytrap/graphs/contributors)
* [kibana](https://github.com/elastic/kibana/graphs/contributors)
* [logstash](https://github.com/elastic/logstash/graphs/contributors)
* [netdata](https://github.com/firehol/netdata/graphs/contributors)
* [p0f](http://lcamtuf.coredump.cx/p0f3/)
* [portainer](https://github.com/portainer/portainer/graphs/contributors)
* [suricata](https://github.com/inliniac/suricata/graphs/contributors)
* [tagcloud](https://github.com/stormpython/tagcloud/graphs/contributors)
* [ubuntu](http://www.ubuntu.com/)
* [wetty](https://github.com/krishnasrinivas/wetty/graphs/contributors)
### The following companies and organizations
* [canonical](http://www.canonical.com/)
* [docker](https://www.docker.com/)
* [elastic.io](https://www.elastic.co/)
* [honeynet project](https://www.honeynet.org/)
* [intel](http://www.intel.de/content/www/de/de/homepage.html)
* [intel](http://www.intel.com)
### ... and of course ***you*** for joining the community!
@@ -427,4 +457,4 @@ We will be releasing a new version of T-Pot about every 6 months.
<a name="funfact"></a>
# Fun Fact
Coffee just does not cut it anymore which is why we needed a different caffeine source and consumed *203* bottles of [Club Mate](https://de.wikipedia.org/wiki/Club-Mate) during the development of T-Pot 16.03 😇
Coffee just does not cut it anymore, which is why we needed a different caffeine source and consumed *107* bottles of [Club Mate](https://de.wikipedia.org/wiki/Club-Mate) during the development of T-Pot 16.10 😇

216
doc/Makefile Normal file

@@ -0,0 +1,216 @@
# Makefile for Sphinx documentation
#
# You can set these variables from the command line.
SPHINXOPTS =
SPHINXBUILD = sphinx-build
PAPER =
BUILDDIR = build
# User-friendly check for sphinx-build
ifeq ($(shell which $(SPHINXBUILD) >/dev/null 2>&1; echo $$?), 1)
$(error The '$(SPHINXBUILD)' command was not found. Make sure you have Sphinx installed, then set the SPHINXBUILD environment variable to point to the full path of the '$(SPHINXBUILD)' executable. Alternatively you can add the directory with the executable to your PATH. If you don't have Sphinx installed, grab it from http://sphinx-doc.org/)
endif
# Internal variables.
PAPEROPT_a4 = -D latex_paper_size=a4
PAPEROPT_letter = -D latex_paper_size=letter
ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) source
# the i18n builder cannot share the environment and doctrees with the others
I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) source
.PHONY: help
help:
@echo "Please use \`make <target>' where <target> is one of"
@echo " html to make standalone HTML files"
@echo " dirhtml to make HTML files named index.html in directories"
@echo " singlehtml to make a single large HTML file"
@echo " pickle to make pickle files"
@echo " json to make JSON files"
@echo " htmlhelp to make HTML files and a HTML help project"
@echo " qthelp to make HTML files and a qthelp project"
@echo " applehelp to make an Apple Help Book"
@echo " devhelp to make HTML files and a Devhelp project"
@echo " epub to make an epub"
@echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
@echo " latexpdf to make LaTeX files and run them through pdflatex"
@echo " latexpdfja to make LaTeX files and run them through platex/dvipdfmx"
@echo " text to make text files"
@echo " man to make manual pages"
@echo " texinfo to make Texinfo files"
@echo " info to make Texinfo files and run them through makeinfo"
@echo " gettext to make PO message catalogs"
@echo " changes to make an overview of all changed/added/deprecated items"
@echo " xml to make Docutils-native XML files"
@echo " pseudoxml to make pseudoxml-XML files for display purposes"
@echo " linkcheck to check all external links for integrity"
@echo " doctest to run all doctests embedded in the documentation (if enabled)"
@echo " coverage to run coverage check of the documentation (if enabled)"
.PHONY: clean
clean:
rm -rf $(BUILDDIR)/*
.PHONY: html
html:
$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
@echo
@echo "Build finished. The HTML pages are in $(BUILDDIR)/html."
.PHONY: dirhtml
dirhtml:
$(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml
@echo
@echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml."
.PHONY: singlehtml
singlehtml:
$(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml
@echo
@echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml."
.PHONY: pickle
pickle:
$(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle
@echo
@echo "Build finished; now you can process the pickle files."
.PHONY: json
json:
$(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json
@echo
@echo "Build finished; now you can process the JSON files."
.PHONY: htmlhelp
htmlhelp:
$(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp
@echo
@echo "Build finished; now you can run HTML Help Workshop with the" \
".hhp project file in $(BUILDDIR)/htmlhelp."
.PHONY: qthelp
qthelp:
$(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp
@echo
@echo "Build finished; now you can run "qcollectiongenerator" with the" \
".qhcp project file in $(BUILDDIR)/qthelp, like this:"
@echo "# qcollectiongenerator $(BUILDDIR)/qthelp/T-Pot.qhcp"
@echo "To view the help file:"
@echo "# assistant -collectionFile $(BUILDDIR)/qthelp/T-Pot.qhc"
.PHONY: applehelp
applehelp:
$(SPHINXBUILD) -b applehelp $(ALLSPHINXOPTS) $(BUILDDIR)/applehelp
@echo
@echo "Build finished. The help book is in $(BUILDDIR)/applehelp."
@echo "N.B. You won't be able to view it unless you put it in" \
"~/Library/Documentation/Help or install it in your application" \
"bundle."
.PHONY: devhelp
devhelp:
$(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp
@echo
@echo "Build finished."
@echo "To view the help file:"
@echo "# mkdir -p $$HOME/.local/share/devhelp/T-Pot"
@echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/T-Pot"
@echo "# devhelp"
.PHONY: epub
epub:
$(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub
@echo
@echo "Build finished. The epub file is in $(BUILDDIR)/epub."
.PHONY: latex
latex:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo
@echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex."
@echo "Run \`make' in that directory to run these through (pdf)latex" \
"(use \`make latexpdf' here to do that automatically)."
.PHONY: latexpdf
latexpdf:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo "Running LaTeX files through pdflatex..."
$(MAKE) -C $(BUILDDIR)/latex all-pdf
@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."
.PHONY: latexpdfja
latexpdfja:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo "Running LaTeX files through platex and dvipdfmx..."
$(MAKE) -C $(BUILDDIR)/latex all-pdf-ja
@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."
.PHONY: text
text:
$(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text
@echo
@echo "Build finished. The text files are in $(BUILDDIR)/text."
.PHONY: man
man:
$(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man
@echo
@echo "Build finished. The manual pages are in $(BUILDDIR)/man."
.PHONY: texinfo
texinfo:
$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
@echo
@echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo."
@echo "Run \`make' in that directory to run these through makeinfo" \
"(use \`make info' here to do that automatically)."
.PHONY: info
info:
$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
@echo "Running Texinfo files through makeinfo..."
make -C $(BUILDDIR)/texinfo info
@echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo."
.PHONY: gettext
gettext:
$(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale
@echo
@echo "Build finished. The message catalogs are in $(BUILDDIR)/locale."
.PHONY: changes
changes:
$(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes
@echo
@echo "The overview file is in $(BUILDDIR)/changes."
.PHONY: linkcheck
linkcheck:
$(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck
@echo
@echo "Link check complete; look for any errors in the above output " \
"or in $(BUILDDIR)/linkcheck/output.txt."
.PHONY: doctest
doctest:
$(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest
@echo "Testing of doctests in the sources finished, look at the " \
"results in $(BUILDDIR)/doctest/output.txt."
.PHONY: coverage
coverage:
$(SPHINXBUILD) -b coverage $(ALLSPHINXOPTS) $(BUILDDIR)/coverage
@echo "Testing of coverage in the sources finished, look at the " \
"results in $(BUILDDIR)/coverage/python.txt."
.PHONY: xml
xml:
$(SPHINXBUILD) -b xml $(ALLSPHINXOPTS) $(BUILDDIR)/xml
@echo
@echo "Build finished. The XML files are in $(BUILDDIR)/xml."
.PHONY: pseudoxml
pseudoxml:
$(SPHINXBUILD) -b pseudoxml $(ALLSPHINXOPTS) $(BUILDDIR)/pseudoxml
@echo
@echo "Build finished. The pseudo-XML files are in $(BUILDDIR)/pseudoxml."


BIN
doc/build/doctrees/environment.pickle vendored Normal file

Binary file not shown.

BIN
doc/build/doctrees/index.doctree vendored Normal file

Binary file not shown.

4
doc/build/html/.buildinfo vendored Normal file

@@ -0,0 +1,4 @@
# Sphinx build info version 1
# This file hashes the configuration used when building these files. When it is not found, a full rebuild will be done.
config: fae7c9d3df0173e81358661e32fdb8fe
tags: 645f666f9bcd5a90fca523b33c5a78b7

22
doc/build/html/_sources/index.txt vendored Normal file

@@ -0,0 +1,22 @@
.. T-Pot documentation master file, created by
sphinx-quickstart on Mon Aug 8 13:24:39 2016.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
Welcome to T-Pot's documentation!
=================================
Contents:
.. toctree::
:maxdepth: 2
Indices and tables
==================
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`

BIN
doc/build/html/_static/ajax-loader.gif vendored Normal file

Binary file not shown.


599
doc/build/html/_static/basic.css vendored Normal file

@@ -0,0 +1,599 @@
/*
* basic.css
* ~~~~~~~~~
*
* Sphinx stylesheet -- basic theme.
*
* :copyright: Copyright 2007-2016 by the Sphinx team, see AUTHORS.
* :license: BSD, see LICENSE for details.
*
*/
/* -- main layout ----------------------------------------------------------- */
div.clearer {
clear: both;
}
/* -- relbar ---------------------------------------------------------------- */
div.related {
width: 100%;
font-size: 90%;
}
div.related h3 {
display: none;
}
div.related ul {
margin: 0;
padding: 0 0 0 10px;
list-style: none;
}
div.related li {
display: inline;
}
div.related li.right {
float: right;
margin-right: 5px;
}
/* -- sidebar --------------------------------------------------------------- */
div.sphinxsidebarwrapper {
padding: 10px 5px 0 10px;
}
div.sphinxsidebar {
float: left;
width: 230px;
margin-left: -100%;
font-size: 90%;
}
div.sphinxsidebar ul {
list-style: none;
}
div.sphinxsidebar ul ul,
div.sphinxsidebar ul.want-points {
margin-left: 20px;
list-style: square;
}
div.sphinxsidebar ul ul {
margin-top: 0;
margin-bottom: 0;
}
div.sphinxsidebar form {
margin-top: 10px;
}
div.sphinxsidebar input {
border: 1px solid #98dbcc;
font-family: sans-serif;
font-size: 1em;
}
div.sphinxsidebar #searchbox input[type="text"] {
width: 170px;
}
div.sphinxsidebar #searchbox input[type="submit"] {
width: 30px;
}
img {
border: 0;
max-width: 100%;
}
/* -- search page ----------------------------------------------------------- */
ul.search {
margin: 10px 0 0 20px;
padding: 0;
}
ul.search li {
padding: 5px 0 5px 20px;
background-image: url(file.png);
background-repeat: no-repeat;
background-position: 0 7px;
}
ul.search li a {
font-weight: bold;
}
ul.search li div.context {
color: #888;
margin: 2px 0 0 30px;
text-align: left;
}
ul.keywordmatches li.goodmatch a {
font-weight: bold;
}
/* -- index page ------------------------------------------------------------ */
table.contentstable {
width: 90%;
}
table.contentstable p.biglink {
line-height: 150%;
}
a.biglink {
font-size: 1.3em;
}
span.linkdescr {
font-style: italic;
padding-top: 5px;
font-size: 90%;
}
/* -- general index --------------------------------------------------------- */
table.indextable {
width: 100%;
}
table.indextable td {
text-align: left;
vertical-align: top;
}
table.indextable dl, table.indextable dd {
margin-top: 0;
margin-bottom: 0;
}
table.indextable tr.pcap {
height: 10px;
}
table.indextable tr.cap {
margin-top: 10px;
background-color: #f2f2f2;
}
img.toggler {
margin-right: 3px;
margin-top: 3px;
cursor: pointer;
}
div.modindex-jumpbox {
border-top: 1px solid #ddd;
border-bottom: 1px solid #ddd;
margin: 1em 0 1em 0;
padding: 0.4em;
}
div.genindex-jumpbox {
border-top: 1px solid #ddd;
border-bottom: 1px solid #ddd;
margin: 1em 0 1em 0;
padding: 0.4em;
}
/* -- general body styles --------------------------------------------------- */
a.headerlink {
visibility: hidden;
}
h1:hover > a.headerlink,
h2:hover > a.headerlink,
h3:hover > a.headerlink,
h4:hover > a.headerlink,
h5:hover > a.headerlink,
h6:hover > a.headerlink,
dt:hover > a.headerlink,
caption:hover > a.headerlink,
p.caption:hover > a.headerlink,
div.code-block-caption:hover > a.headerlink {
visibility: visible;
}
div.body p.caption {
text-align: inherit;
}
div.body td {
text-align: left;
}
.field-list ul {
padding-left: 1em;
}
.first {
margin-top: 0 !important;
}
p.rubric {
margin-top: 30px;
font-weight: bold;
}
img.align-left, .figure.align-left, object.align-left {
clear: left;
float: left;
margin-right: 1em;
}
img.align-right, .figure.align-right, object.align-right {
clear: right;
float: right;
margin-left: 1em;
}
img.align-center, .figure.align-center, object.align-center {
display: block;
margin-left: auto;
margin-right: auto;
}
.align-left {
text-align: left;
}
.align-center {
text-align: center;
}
.align-right {
text-align: right;
}
/* -- sidebars -------------------------------------------------------------- */
div.sidebar {
margin: 0 0 0.5em 1em;
border: 1px solid #ddb;
padding: 7px 7px 0 7px;
background-color: #ffe;
width: 40%;
float: right;
}
p.sidebar-title {
font-weight: bold;
}
/* -- topics ---------------------------------------------------------------- */
div.topic {
border: 1px solid #ccc;
padding: 7px 7px 0 7px;
margin: 10px 0 10px 0;
}
p.topic-title {
font-size: 1.1em;
font-weight: bold;
margin-top: 10px;
}
/* -- admonitions ----------------------------------------------------------- */
div.admonition {
margin-top: 10px;
margin-bottom: 10px;
padding: 7px;
}
div.admonition dt {
font-weight: bold;
}
div.admonition dl {
margin-bottom: 0;
}
p.admonition-title {
margin: 0px 10px 5px 0px;
font-weight: bold;
}
div.body p.centered {
text-align: center;
margin-top: 25px;
}
/* -- tables ---------------------------------------------------------------- */
table.docutils {
border: 0;
border-collapse: collapse;
}
table caption span.caption-number {
font-style: italic;
}
table caption span.caption-text {
}
table.docutils td, table.docutils th {
padding: 1px 8px 1px 5px;
border-top: 0;
border-left: 0;
border-right: 0;
border-bottom: 1px solid #aaa;
}
table.field-list td, table.field-list th {
border: 0 !important;
}
table.footnote td, table.footnote th {
border: 0 !important;
}
th {
text-align: left;
padding-right: 5px;
}
table.citation {
border-left: solid 1px gray;
margin-left: 1px;
}
table.citation td {
border-bottom: none;
}
/* -- figures --------------------------------------------------------------- */
div.figure {
margin: 0.5em;
padding: 0.5em;
}
div.figure p.caption {
padding: 0.3em;
}
div.figure p.caption span.caption-number {
font-style: italic;
}
div.figure p.caption span.caption-text {
}
/* -- other body styles ----------------------------------------------------- */
ol.arabic {
list-style: decimal;
}
ol.loweralpha {
list-style: lower-alpha;
}
ol.upperalpha {
list-style: upper-alpha;
}
ol.lowerroman {
list-style: lower-roman;
}
ol.upperroman {
list-style: upper-roman;
}
dl {
margin-bottom: 15px;
}
dd p {
margin-top: 0px;
}
dd ul, dd table {
margin-bottom: 10px;
}
dd {
margin-top: 3px;
margin-bottom: 10px;
margin-left: 30px;
}
dt:target, .highlighted {
background-color: #fbe54e;
}
dl.glossary dt {
font-weight: bold;
font-size: 1.1em;
}
.field-list ul {
margin: 0;
padding-left: 1em;
}
.field-list p {
margin: 0;
}
.optional {
font-size: 1.3em;
}
.sig-paren {
font-size: larger;
}
.versionmodified {
font-style: italic;
}
.system-message {
background-color: #fda;
padding: 5px;
border: 3px solid red;
}
.footnote:target {
background-color: #ffa;
}
.line-block {
display: block;
margin-top: 1em;
margin-bottom: 1em;
}
.line-block .line-block {
margin-top: 0;
margin-bottom: 0;
margin-left: 1.5em;
}
.guilabel, .menuselection {
font-family: sans-serif;
}
.accelerator {
text-decoration: underline;
}
.classifier {
font-style: oblique;
}
abbr, acronym {
border-bottom: dotted 1px;
cursor: help;
}
/* -- code displays --------------------------------------------------------- */
pre {
overflow: auto;
overflow-y: hidden; /* fixes display issues on Chrome browsers */
}
td.linenos pre {
padding: 5px 0px;
border: 0;
background-color: transparent;
color: #aaa;
}
table.highlighttable {
margin-left: 0.5em;
}
table.highlighttable td {
padding: 0 0.5em 0 0.5em;
}
div.code-block-caption {
padding: 2px 5px;
font-size: small;
}
div.code-block-caption code {
background-color: transparent;
}
div.code-block-caption + div > div.highlight > pre {
margin-top: 0;
}
div.code-block-caption span.caption-number {
padding: 0.1em 0.3em;
font-style: italic;
}
div.code-block-caption span.caption-text {
}
div.literal-block-wrapper {
padding: 1em 1em 0;
}
div.literal-block-wrapper div.highlight {
margin: 0;
}
code.descname {
background-color: transparent;
font-weight: bold;
font-size: 1.2em;
}
code.descclassname {
background-color: transparent;
}
code.xref, a code {
background-color: transparent;
font-weight: bold;
}
h1 code, h2 code, h3 code, h4 code, h5 code, h6 code {
background-color: transparent;
}
.viewcode-link {
float: right;
}
.viewcode-back {
float: right;
font-family: sans-serif;
}
div.viewcode-block:target {
margin: -1px -10px;
padding: 0 10px;
}
/* -- math display ---------------------------------------------------------- */
img.math {
vertical-align: middle;
}
div.body div.math p {
text-align: center;
}
span.eqno {
float: right;
}
/* -- printout stylesheet --------------------------------------------------- */
@media print {
div.document,
div.documentwrapper,
div.bodywrapper {
margin: 0 !important;
width: 100%;
}
div.sphinxsidebar,
div.related,
div.footer,
#top-link {
display: none;
}
}

261
doc/build/html/_static/classic.css vendored Normal file

@@ -0,0 +1,261 @@
/*
* default.css_t
* ~~~~~~~~~~~~~
*
* Sphinx stylesheet -- default theme.
*
* :copyright: Copyright 2007-2016 by the Sphinx team, see AUTHORS.
* :license: BSD, see LICENSE for details.
*
*/
@import url("basic.css");
/* -- page layout ----------------------------------------------------------- */
body {
font-family: sans-serif;
font-size: 100%;
background-color: #11303d;
color: #000;
margin: 0;
padding: 0;
}
div.document {
background-color: #1c4e63;
}
div.documentwrapper {
float: left;
width: 100%;
}
div.bodywrapper {
margin: 0 0 0 230px;
}
div.body {
background-color: #ffffff;
color: #000000;
padding: 0 20px 30px 20px;
}
div.footer {
color: #ffffff;
width: 100%;
padding: 9px 0 9px 0;
text-align: center;
font-size: 75%;
}
div.footer a {
color: #ffffff;
text-decoration: underline;
}
div.related {
background-color: #133f52;
line-height: 30px;
color: #ffffff;
}
div.related a {
color: #ffffff;
}
div.sphinxsidebar {
}
div.sphinxsidebar h3 {
font-family: 'Trebuchet MS', sans-serif;
color: #ffffff;
font-size: 1.4em;
font-weight: normal;
margin: 0;
padding: 0;
}
div.sphinxsidebar h3 a {
color: #ffffff;
}
div.sphinxsidebar h4 {
font-family: 'Trebuchet MS', sans-serif;
color: #ffffff;
font-size: 1.3em;
font-weight: normal;
margin: 5px 0 0 0;
padding: 0;
}
div.sphinxsidebar p {
color: #ffffff;
}
div.sphinxsidebar p.topless {
margin: 5px 10px 10px 10px;
}
div.sphinxsidebar ul {
margin: 10px;
padding: 0;
color: #ffffff;
}
div.sphinxsidebar a {
color: #98dbcc;
}
div.sphinxsidebar input {
border: 1px solid #98dbcc;
font-family: sans-serif;
font-size: 1em;
}
/* -- hyperlink styles ------------------------------------------------------ */
a {
color: #355f7c;
text-decoration: none;
}
a:visited {
color: #355f7c;
text-decoration: none;
}
a:hover {
text-decoration: underline;
}
/* -- body styles ----------------------------------------------------------- */
div.body h1,
div.body h2,
div.body h3,
div.body h4,
div.body h5,
div.body h6 {
font-family: 'Trebuchet MS', sans-serif;
background-color: #f2f2f2;
font-weight: normal;
color: #20435c;
border-bottom: 1px solid #ccc;
margin: 20px -20px 10px -20px;
padding: 3px 0 3px 10px;
}
div.body h1 { margin-top: 0; font-size: 200%; }
div.body h2 { font-size: 160%; }
div.body h3 { font-size: 140%; }
div.body h4 { font-size: 120%; }
div.body h5 { font-size: 110%; }
div.body h6 { font-size: 100%; }
a.headerlink {
color: #c60f0f;
font-size: 0.8em;
padding: 0 4px 0 4px;
text-decoration: none;
}
a.headerlink:hover {
background-color: #c60f0f;
color: white;
}
div.body p, div.body dd, div.body li, div.body blockquote {
text-align: justify;
line-height: 130%;
}
div.admonition p.admonition-title + p {
display: inline;
}
div.admonition p {
margin-bottom: 5px;
}
div.admonition pre {
margin-bottom: 5px;
}
div.admonition ul, div.admonition ol {
margin-bottom: 5px;
}
div.note {
background-color: #eee;
border: 1px solid #ccc;
}
div.seealso {
background-color: #ffc;
border: 1px solid #ff6;
}
div.topic {
background-color: #eee;
}
div.warning {
background-color: #ffe4e4;
border: 1px solid #f66;
}
p.admonition-title {
display: inline;
}
p.admonition-title:after {
content: ":";
}
pre {
padding: 5px;
background-color: #eeffcc;
color: #333333;
line-height: 120%;
border: 1px solid #ac9;
border-left: none;
border-right: none;
}
code {
background-color: #ecf0f3;
padding: 0 1px 0 1px;
font-size: 0.95em;
}
th {
background-color: #ede;
}
.warning code {
background: #efc2c2;
}
.note code {
background: #d6d6d6;
}
.viewcode-back {
font-family: sans-serif;
}
div.viewcode-block:target {
background-color: #f4debf;
border-top: 1px solid #ac9;
border-bottom: 1px solid #ac9;
}
div.code-block-caption {
color: #efefef;
background-color: #1c4e63;
}


BIN
doc/build/html/_static/comment-close.png vendored Normal file

Binary file not shown.


BIN
doc/build/html/_static/comment.png vendored Normal file

Binary file not shown.


1
doc/build/html/_static/default.css vendored Normal file

@@ -0,0 +1 @@
@import url("classic.css");

263
doc/build/html/_static/doctools.js vendored Normal file

@@ -0,0 +1,263 @@
/*
* doctools.js
* ~~~~~~~~~~~
*
* Sphinx JavaScript utilities for all documentation.
*
* :copyright: Copyright 2007-2016 by the Sphinx team, see AUTHORS.
* :license: BSD, see LICENSE for details.
*
*/
/**
* select a different prefix for underscore
*/
$u = _.noConflict();
/**
* make the code below compatible with browsers without
* an installed firebug like debugger
if (!window.console || !console.firebug) {
var names = ["log", "debug", "info", "warn", "error", "assert", "dir",
"dirxml", "group", "groupEnd", "time", "timeEnd", "count", "trace",
"profile", "profileEnd"];
window.console = {};
for (var i = 0; i < names.length; ++i)
window.console[names[i]] = function() {};
}
*/
/**
* small helper function to urldecode strings
*/
jQuery.urldecode = function(x) {
return decodeURIComponent(x).replace(/\+/g, ' ');
};
/**
* small helper function to urlencode strings
*/
jQuery.urlencode = encodeURIComponent;
/**
* This function returns the parsed url parameters of the
* current request. Multiple values per key are supported,
* it will always return arrays of strings for the value parts.
*/
jQuery.getQueryParameters = function(s) {
if (typeof s == 'undefined')
s = document.location.search;
var parts = s.substr(s.indexOf('?') + 1).split('&');
var result = {};
for (var i = 0; i < parts.length; i++) {
var tmp = parts[i].split('=', 2);
var key = jQuery.urldecode(tmp[0]);
var value = jQuery.urldecode(tmp[1]);
if (key in result)
result[key].push(value);
else
result[key] = [value];
}
return result;
};
/**
* highlight a given string on a jquery object by wrapping it in
* span elements with the given class name.
*/
jQuery.fn.highlightText = function(text, className) {
function highlight(node) {
if (node.nodeType == 3) {
var val = node.nodeValue;
var pos = val.toLowerCase().indexOf(text);
if (pos >= 0 && !jQuery(node.parentNode).hasClass(className)) {
var span = document.createElement("span");
span.className = className;
span.appendChild(document.createTextNode(val.substr(pos, text.length)));
node.parentNode.insertBefore(span, node.parentNode.insertBefore(
document.createTextNode(val.substr(pos + text.length)),
node.nextSibling));
node.nodeValue = val.substr(0, pos);
}
}
else if (!jQuery(node).is("button, select, textarea")) {
jQuery.each(node.childNodes, function() {
highlight(this);
});
}
}
return this.each(function() {
highlight(this);
});
};
/*
* backward compatibility for jQuery.browser
* This will be supported until firefox bug is fixed.
*/
if (!jQuery.browser) {
jQuery.uaMatch = function(ua) {
ua = ua.toLowerCase();
var match = /(chrome)[ \/]([\w.]+)/.exec(ua) ||
/(webkit)[ \/]([\w.]+)/.exec(ua) ||
/(opera)(?:.*version|)[ \/]([\w.]+)/.exec(ua) ||
/(msie) ([\w.]+)/.exec(ua) ||
ua.indexOf("compatible") < 0 && /(mozilla)(?:.*? rv:([\w.]+)|)/.exec(ua) ||
[];
return {
browser: match[ 1 ] || "",
version: match[ 2 ] || "0"
};
};
jQuery.browser = {};
jQuery.browser[jQuery.uaMatch(navigator.userAgent).browser] = true;
}
/**
* Small JavaScript module for the documentation.
*/
var Documentation = {
init : function() {
this.fixFirefoxAnchorBug();
this.highlightSearchWords();
this.initIndexTable();
},
/**
* i18n support
*/
TRANSLATIONS : {},
PLURAL_EXPR : function(n) { return n == 1 ? 0 : 1; },
LOCALE : 'unknown',
// gettext and ngettext don't access this so that the functions
// can safely bound to a different name (_ = Documentation.gettext)
gettext : function(string) {
var translated = Documentation.TRANSLATIONS[string];
if (typeof translated == 'undefined')
return string;
return (typeof translated == 'string') ? translated : translated[0];
},
ngettext : function(singular, plural, n) {
var translated = Documentation.TRANSLATIONS[singular];
if (typeof translated == 'undefined')
return (n == 1) ? singular : plural;
return translated[Documentation.PLURALEXPR(n)];
},
addTranslations : function(catalog) {
for (var key in catalog.messages)
this.TRANSLATIONS[key] = catalog.messages[key];
this.PLURAL_EXPR = new Function('n', 'return +(' + catalog.plural_expr + ')');
this.LOCALE = catalog.locale;
},
/**
* add context elements like header anchor links
*/
addContextElements : function() {
$('div[id] > :header:first').each(function() {
$('<a class="headerlink">\u00B6</a>').
attr('href', '#' + this.id).
attr('title', _('Permalink to this headline')).
appendTo(this);
});
$('dt[id]').each(function() {
$('<a class="headerlink">\u00B6</a>').
attr('href', '#' + this.id).
attr('title', _('Permalink to this definition')).
appendTo(this);
});
},
/**
* workaround a firefox stupidity
* see: https://bugzilla.mozilla.org/show_bug.cgi?id=645075
*/
fixFirefoxAnchorBug : function() {
if (document.location.hash)
window.setTimeout(function() {
document.location.href += '';
}, 10);
},
/**
* highlight the search words provided in the url in the text
*/
highlightSearchWords : function() {
var params = $.getQueryParameters();
var terms = (params.highlight) ? params.highlight[0].split(/\s+/) : [];
if (terms.length) {
var body = $('div.body');
if (!body.length) {
body = $('body');
}
window.setTimeout(function() {
$.each(terms, function() {
body.highlightText(this.toLowerCase(), 'highlighted');
});
}, 10);
$('<p class="highlight-link"><a href="javascript:Documentation.' +
'hideSearchWords()">' + _('Hide Search Matches') + '</a></p>')
.appendTo($('#searchbox'));
}
},
/**
* init the domain index toggle buttons
*/
initIndexTable : function() {
var togglers = $('img.toggler').click(function() {
var src = $(this).attr('src');
var idnum = $(this).attr('id').substr(7);
$('tr.cg-' + idnum).toggle();
if (src.substr(-9) == 'minus.png')
$(this).attr('src', src.substr(0, src.length-9) + 'plus.png');
else
$(this).attr('src', src.substr(0, src.length-8) + 'minus.png');
}).css('display', '');
if (DOCUMENTATION_OPTIONS.COLLAPSE_INDEX) {
togglers.click();
}
},
/**
* helper function to hide the search marks again
*/
hideSearchWords : function() {
$('#searchbox .highlight-link').fadeOut(300);
$('span.highlighted').removeClass('highlighted');
},
/**
* make the url absolute
*/
makeURL : function(relativeURL) {
return DOCUMENTATION_OPTIONS.URL_ROOT + '/' + relativeURL;
},
/**
* get the current relative url
*/
getCurrentURL : function() {
var path = document.location.pathname;
var parts = path.split(/\//);
$.each(DOCUMENTATION_OPTIONS.URL_ROOT.split(/\//), function() {
if (this == '..')
parts.pop();
});
var url = parts.join('/');
return path.substring(url.lastIndexOf('/') + 1, path.length - 1);
}
};
// quick alias for translations
_ = Documentation.gettext;
$(document).ready(function() {
Documentation.init();
});

BIN
doc/build/html/_static/down-pressed.png vendored Normal file (binary added, 347 B)

BIN
doc/build/html/_static/down.png vendored Normal file (binary added, 347 B)

BIN
doc/build/html/_static/file.png vendored Normal file (binary added, 358 B)

10351
doc/build/html/_static/jquery.js vendored Normal file (diff suppressed because it is too large)

BIN
doc/build/html/_static/minus.png vendored Normal file (binary added, 173 B)

BIN
doc/build/html/_static/plus.png vendored Normal file (binary added, 173 B)

65
doc/build/html/_static/pygments.css vendored Normal file
View file

@ -0,0 +1,65 @@
.highlight .hll { background-color: #ffffcc }
.highlight { background: #eeffcc; }
.highlight .c { color: #408090; font-style: italic } /* Comment */
.highlight .err { border: 1px solid #FF0000 } /* Error */
.highlight .k { color: #007020; font-weight: bold } /* Keyword */
.highlight .o { color: #666666 } /* Operator */
.highlight .ch { color: #408090; font-style: italic } /* Comment.Hashbang */
.highlight .cm { color: #408090; font-style: italic } /* Comment.Multiline */
.highlight .cp { color: #007020 } /* Comment.Preproc */
.highlight .cpf { color: #408090; font-style: italic } /* Comment.PreprocFile */
.highlight .c1 { color: #408090; font-style: italic } /* Comment.Single */
.highlight .cs { color: #408090; background-color: #fff0f0 } /* Comment.Special */
.highlight .gd { color: #A00000 } /* Generic.Deleted */
.highlight .ge { font-style: italic } /* Generic.Emph */
.highlight .gr { color: #FF0000 } /* Generic.Error */
.highlight .gh { color: #000080; font-weight: bold } /* Generic.Heading */
.highlight .gi { color: #00A000 } /* Generic.Inserted */
.highlight .go { color: #333333 } /* Generic.Output */
.highlight .gp { color: #c65d09; font-weight: bold } /* Generic.Prompt */
.highlight .gs { font-weight: bold } /* Generic.Strong */
.highlight .gu { color: #800080; font-weight: bold } /* Generic.Subheading */
.highlight .gt { color: #0044DD } /* Generic.Traceback */
.highlight .kc { color: #007020; font-weight: bold } /* Keyword.Constant */
.highlight .kd { color: #007020; font-weight: bold } /* Keyword.Declaration */
.highlight .kn { color: #007020; font-weight: bold } /* Keyword.Namespace */
.highlight .kp { color: #007020 } /* Keyword.Pseudo */
.highlight .kr { color: #007020; font-weight: bold } /* Keyword.Reserved */
.highlight .kt { color: #902000 } /* Keyword.Type */
.highlight .m { color: #208050 } /* Literal.Number */
.highlight .s { color: #4070a0 } /* Literal.String */
.highlight .na { color: #4070a0 } /* Name.Attribute */
.highlight .nb { color: #007020 } /* Name.Builtin */
.highlight .nc { color: #0e84b5; font-weight: bold } /* Name.Class */
.highlight .no { color: #60add5 } /* Name.Constant */
.highlight .nd { color: #555555; font-weight: bold } /* Name.Decorator */
.highlight .ni { color: #d55537; font-weight: bold } /* Name.Entity */
.highlight .ne { color: #007020 } /* Name.Exception */
.highlight .nf { color: #06287e } /* Name.Function */
.highlight .nl { color: #002070; font-weight: bold } /* Name.Label */
.highlight .nn { color: #0e84b5; font-weight: bold } /* Name.Namespace */
.highlight .nt { color: #062873; font-weight: bold } /* Name.Tag */
.highlight .nv { color: #bb60d5 } /* Name.Variable */
.highlight .ow { color: #007020; font-weight: bold } /* Operator.Word */
.highlight .w { color: #bbbbbb } /* Text.Whitespace */
.highlight .mb { color: #208050 } /* Literal.Number.Bin */
.highlight .mf { color: #208050 } /* Literal.Number.Float */
.highlight .mh { color: #208050 } /* Literal.Number.Hex */
.highlight .mi { color: #208050 } /* Literal.Number.Integer */
.highlight .mo { color: #208050 } /* Literal.Number.Oct */
.highlight .sb { color: #4070a0 } /* Literal.String.Backtick */
.highlight .sc { color: #4070a0 } /* Literal.String.Char */
.highlight .sd { color: #4070a0; font-style: italic } /* Literal.String.Doc */
.highlight .s2 { color: #4070a0 } /* Literal.String.Double */
.highlight .se { color: #4070a0; font-weight: bold } /* Literal.String.Escape */
.highlight .sh { color: #4070a0 } /* Literal.String.Heredoc */
.highlight .si { color: #70a0d0; font-style: italic } /* Literal.String.Interpol */
.highlight .sx { color: #c65d09 } /* Literal.String.Other */
.highlight .sr { color: #235388 } /* Literal.String.Regex */
.highlight .s1 { color: #4070a0 } /* Literal.String.Single */
.highlight .ss { color: #517918 } /* Literal.String.Symbol */
.highlight .bp { color: #007020 } /* Name.Builtin.Pseudo */
.highlight .vc { color: #bb60d5 } /* Name.Variable.Class */
.highlight .vg { color: #bb60d5 } /* Name.Variable.Global */
.highlight .vi { color: #bb60d5 } /* Name.Variable.Instance */
.highlight .il { color: #208050 } /* Literal.Number.Integer.Long */

651
doc/build/html/_static/searchtools.js vendored Normal file
View file

@ -0,0 +1,651 @@
/*
* searchtools.js_t
* ~~~~~~~~~~~~~~~~
*
 * Sphinx JavaScript utilities for the full-text search.
*
* :copyright: Copyright 2007-2016 by the Sphinx team, see AUTHORS.
* :license: BSD, see LICENSE for details.
*
*/
/* The non-minified version of this code is in _stemmer.js, if that file is provided */
/**
* Porter Stemmer
*/
var Stemmer = function() {
var step2list = {
ational: 'ate',
tional: 'tion',
enci: 'ence',
anci: 'ance',
izer: 'ize',
bli: 'ble',
alli: 'al',
entli: 'ent',
eli: 'e',
ousli: 'ous',
ization: 'ize',
ation: 'ate',
ator: 'ate',
alism: 'al',
iveness: 'ive',
fulness: 'ful',
ousness: 'ous',
aliti: 'al',
iviti: 'ive',
biliti: 'ble',
logi: 'log'
};
var step3list = {
icate: 'ic',
ative: '',
alize: 'al',
iciti: 'ic',
ical: 'ic',
ful: '',
ness: ''
};
var c = "[^aeiou]"; // consonant
var v = "[aeiouy]"; // vowel
var C = c + "[^aeiouy]*"; // consonant sequence
var V = v + "[aeiou]*"; // vowel sequence
var mgr0 = "^(" + C + ")?" + V + C; // [C]VC... is m>0
var meq1 = "^(" + C + ")?" + V + C + "(" + V + ")?$"; // [C]VC[V] is m=1
var mgr1 = "^(" + C + ")?" + V + C + V + C; // [C]VCVC... is m>1
var s_v = "^(" + C + ")?" + v; // vowel in stem
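// Illustrative example: new Stemmer().stemWord('searching') returns 'search'
// (step 1b strips the 'ing' suffix); words shorter than 3 letters are returned unchanged.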
this.stemWord = function (w) {
var stem;
var suffix;
var firstch;
var origword = w;
if (w.length < 3)
return w;
var re;
var re2;
var re3;
var re4;
firstch = w.substr(0,1);
if (firstch == "y")
w = firstch.toUpperCase() + w.substr(1);
// Step 1a
re = /^(.+?)(ss|i)es$/;
re2 = /^(.+?)([^s])s$/;
if (re.test(w))
w = w.replace(re,"$1$2");
else if (re2.test(w))
w = w.replace(re2,"$1$2");
// Step 1b
re = /^(.+?)eed$/;
re2 = /^(.+?)(ed|ing)$/;
if (re.test(w)) {
var fp = re.exec(w);
re = new RegExp(mgr0);
if (re.test(fp[1])) {
re = /.$/;
w = w.replace(re,"");
}
}
else if (re2.test(w)) {
var fp = re2.exec(w);
stem = fp[1];
re2 = new RegExp(s_v);
if (re2.test(stem)) {
w = stem;
re2 = /(at|bl|iz)$/;
re3 = new RegExp("([^aeiouylsz])\\1$");
re4 = new RegExp("^" + C + v + "[^aeiouwxy]$");
if (re2.test(w))
w = w + "e";
else if (re3.test(w)) {
re = /.$/;
w = w.replace(re,"");
}
else if (re4.test(w))
w = w + "e";
}
}
// Step 1c
re = /^(.+?)y$/;
if (re.test(w)) {
var fp = re.exec(w);
stem = fp[1];
re = new RegExp(s_v);
if (re.test(stem))
w = stem + "i";
}
// Step 2
re = /^(.+?)(ational|tional|enci|anci|izer|bli|alli|entli|eli|ousli|ization|ation|ator|alism|iveness|fulness|ousness|aliti|iviti|biliti|logi)$/;
if (re.test(w)) {
var fp = re.exec(w);
stem = fp[1];
suffix = fp[2];
re = new RegExp(mgr0);
if (re.test(stem))
w = stem + step2list[suffix];
}
// Step 3
re = /^(.+?)(icate|ative|alize|iciti|ical|ful|ness)$/;
if (re.test(w)) {
var fp = re.exec(w);
stem = fp[1];
suffix = fp[2];
re = new RegExp(mgr0);
if (re.test(stem))
w = stem + step3list[suffix];
}
// Step 4
re = /^(.+?)(al|ance|ence|er|ic|able|ible|ant|ement|ment|ent|ou|ism|ate|iti|ous|ive|ize)$/;
re2 = /^(.+?)(s|t)(ion)$/;
if (re.test(w)) {
var fp = re.exec(w);
stem = fp[1];
re = new RegExp(mgr1);
if (re.test(stem))
w = stem;
}
else if (re2.test(w)) {
var fp = re2.exec(w);
stem = fp[1] + fp[2];
re2 = new RegExp(mgr1);
if (re2.test(stem))
w = stem;
}
// Step 5
re = /^(.+?)e$/;
if (re.test(w)) {
var fp = re.exec(w);
stem = fp[1];
re = new RegExp(mgr1);
re2 = new RegExp(meq1);
re3 = new RegExp("^" + C + v + "[^aeiouwxy]$");
if (re.test(stem) || (re2.test(stem) && !(re3.test(stem))))
w = stem;
}
re = /ll$/;
re2 = new RegExp(mgr1);
if (re.test(w) && re2.test(w)) {
re = /.$/;
w = w.replace(re,"");
}
// and turn initial Y back to y
if (firstch == "y")
w = firstch.toLowerCase() + w.substr(1);
return w;
}
}
/**
* Simple result scoring code.
*/
var Scorer = {
// Implement the following function to further tweak the score for each result
// The function takes a result array [filename, title, anchor, descr, score]
// and returns the new score.
/*
score: function(result) {
return result[4];
},
*/
// query matches the full name of an object
objNameMatch: 11,
// or matches in the last dotted part of the object name
objPartialMatch: 6,
// Additive scores depending on the priority of the object
objPrio: {0: 15, // used to be importantResults
1: 5, // used to be objectResults
2: -5}, // used to be unimportantResults
// Used when the priority is not in the mapping.
objPrioDefault: 0,
// query found in title
title: 15,
// query found in terms
term: 5
};
/**
* Search Module
*/
var Search = {
_index : null,
_queued_query : null,
_pulse_status : -1,
init : function() {
var params = $.getQueryParameters();
if (params.q) {
var query = params.q[0];
$('input[name="q"]')[0].value = query;
this.performSearch(query);
}
},
loadIndex : function(url) {
$.ajax({type: "GET", url: url, data: null,
dataType: "script", cache: true,
complete: function(jqxhr, textstatus) {
if (textstatus != "success") {
document.getElementById("searchindexloader").src = url;
}
}});
},
setIndex : function(index) {
var q;
this._index = index;
if ((q = this._queued_query) !== null) {
this._queued_query = null;
Search.query(q);
}
},
hasIndex : function() {
return this._index !== null;
},
deferQuery : function(query) {
this._queued_query = query;
},
stopPulse : function() {
this._pulse_status = 0;
},
startPulse : function() {
if (this._pulse_status >= 0)
return;
function pulse() {
var i;
Search._pulse_status = (Search._pulse_status + 1) % 4;
var dotString = '';
for (i = 0; i < Search._pulse_status; i++)
dotString += '.';
Search.dots.text(dotString);
if (Search._pulse_status > -1)
window.setTimeout(pulse, 500);
}
pulse();
},
/**
* perform a search for something (or wait until index is loaded)
*/
performSearch : function(query) {
// create the required interface elements
this.out = $('#search-results');
this.title = $('<h2>' + _('Searching') + '</h2>').appendTo(this.out);
this.dots = $('<span></span>').appendTo(this.title);
this.status = $('<p style="display: none"></p>').appendTo(this.out);
this.output = $('<ul class="search"/>').appendTo(this.out);
$('#search-progress').text(_('Preparing search...'));
this.startPulse();
// index already loaded, the browser was quick!
if (this.hasIndex())
this.query(query);
else
this.deferQuery(query);
},
/**
* execute search (requires search index to be loaded)
*/
query : function(query) {
var i;
var stopwords = ["a","and","are","as","at","be","but","by","for","if","in","into","is","it","near","no","not","of","on","or","such","that","the","their","then","there","these","they","this","to","was","will","with"];
// stem the searchterms and add them to the correct list
var stemmer = new Stemmer();
var searchterms = [];
var excluded = [];
var hlterms = [];
var tmp = query.split(/\s+/);
var objectterms = [];
for (i = 0; i < tmp.length; i++) {
if (tmp[i] !== "") {
objectterms.push(tmp[i].toLowerCase());
}
if ($u.indexOf(stopwords, tmp[i].toLowerCase()) != -1 || tmp[i].match(/^\d+$/) ||
tmp[i] === "") {
// skip this "word"
continue;
}
// stem the word
var word = stemmer.stemWord(tmp[i].toLowerCase());
var toAppend;
// select the correct list
if (word[0] == '-') {
toAppend = excluded;
word = word.substr(1);
}
else {
toAppend = searchterms;
hlterms.push(tmp[i].toLowerCase());
}
// only add if not already in the list
if (!$u.contains(toAppend, word))
toAppend.push(word);
}
var highlightstring = '?highlight=' + $.urlencode(hlterms.join(" "));
// console.debug('SEARCH: searching for:');
// console.info('required: ', searchterms);
// console.info('excluded: ', excluded);
// prepare search
var terms = this._index.terms;
var titleterms = this._index.titleterms;
// array of [filename, title, anchor, descr, score]
var results = [];
$('#search-progress').empty();
// lookup as object
for (i = 0; i < objectterms.length; i++) {
var others = [].concat(objectterms.slice(0, i),
objectterms.slice(i+1, objectterms.length));
results = results.concat(this.performObjectSearch(objectterms[i], others));
}
// lookup as search terms in fulltext
results = results.concat(this.performTermsSearch(searchterms, excluded, terms, titleterms));
// let the scorer override scores with a custom scoring function
if (Scorer.score) {
for (i = 0; i < results.length; i++)
results[i][4] = Scorer.score(results[i]);
}
// now sort the results by score (in opposite order of appearance, since the
// display function below uses pop() to retrieve items) and then
// alphabetically
results.sort(function(a, b) {
var left = a[4];
var right = b[4];
if (left > right) {
return 1;
} else if (left < right) {
return -1;
} else {
// same score: sort alphabetically
left = a[1].toLowerCase();
right = b[1].toLowerCase();
return (left > right) ? -1 : ((left < right) ? 1 : 0);
}
});
// for debugging
//Search.lastresults = results.slice(); // a copy
//console.info('search results:', Search.lastresults);
// print the results
var resultCount = results.length;
function displayNextItem() {
// results left, load the summary and display it
if (results.length) {
var item = results.pop();
var listItem = $('<li style="display:none"></li>');
if (DOCUMENTATION_OPTIONS.FILE_SUFFIX === '') {
// dirhtml builder
var dirname = item[0] + '/';
if (dirname.match(/\/index\/$/)) {
dirname = dirname.substring(0, dirname.length-6);
} else if (dirname == 'index/') {
dirname = '';
}
listItem.append($('<a/>').attr('href',
DOCUMENTATION_OPTIONS.URL_ROOT + dirname +
highlightstring + item[2]).html(item[1]));
} else {
// normal html builders
listItem.append($('<a/>').attr('href',
item[0] + DOCUMENTATION_OPTIONS.FILE_SUFFIX +
highlightstring + item[2]).html(item[1]));
}
if (item[3]) {
listItem.append($('<span> (' + item[3] + ')</span>'));
Search.output.append(listItem);
listItem.slideDown(5, function() {
displayNextItem();
});
} else if (DOCUMENTATION_OPTIONS.HAS_SOURCE) {
$.ajax({url: DOCUMENTATION_OPTIONS.URL_ROOT + '_sources/' + item[0] + '.txt',
dataType: "text",
complete: function(jqxhr, textstatus) {
var data = jqxhr.responseText;
if (data !== '' && data !== undefined) {
listItem.append(Search.makeSearchSummary(data, searchterms, hlterms));
}
Search.output.append(listItem);
listItem.slideDown(5, function() {
displayNextItem();
});
}});
} else {
// no source available, just display title
Search.output.append(listItem);
listItem.slideDown(5, function() {
displayNextItem();
});
}
}
// search finished, update title and status message
else {
Search.stopPulse();
Search.title.text(_('Search Results'));
if (!resultCount)
Search.status.text(_('Your search did not match any documents. Please make sure that all words are spelled correctly and that you\'ve selected enough categories.'));
else
Search.status.text(_('Search finished, found %s page(s) matching the search query.').replace('%s', resultCount));
Search.status.fadeIn(500);
}
}
displayNextItem();
},
/**
* search for object names
*/
performObjectSearch : function(object, otherterms) {
var filenames = this._index.filenames;
var objects = this._index.objects;
var objnames = this._index.objnames;
var titles = this._index.titles;
var i;
var results = [];
for (var prefix in objects) {
for (var name in objects[prefix]) {
var fullname = (prefix ? prefix + '.' : '') + name;
if (fullname.toLowerCase().indexOf(object) > -1) {
var score = 0;
var parts = fullname.split('.');
// check for different match types: exact matches of full name or
// "last name" (i.e. last dotted part)
if (fullname == object || parts[parts.length - 1] == object) {
score += Scorer.objNameMatch;
// matches in last name
} else if (parts[parts.length - 1].indexOf(object) > -1) {
score += Scorer.objPartialMatch;
}
var match = objects[prefix][name];
var objname = objnames[match[1]][2];
var title = titles[match[0]];
// If more than one term searched for, we require other words to be
// found in the name/title/description
if (otherterms.length > 0) {
var haystack = (prefix + ' ' + name + ' ' +
objname + ' ' + title).toLowerCase();
var allfound = true;
for (i = 0; i < otherterms.length; i++) {
if (haystack.indexOf(otherterms[i]) == -1) {
allfound = false;
break;
}
}
if (!allfound) {
continue;
}
}
var descr = objname + _(', in ') + title;
var anchor = match[3];
if (anchor === '')
anchor = fullname;
else if (anchor == '-')
anchor = objnames[match[1]][1] + '-' + fullname;
// add custom score for some objects according to scorer
if (Scorer.objPrio.hasOwnProperty(match[2])) {
score += Scorer.objPrio[match[2]];
} else {
score += Scorer.objPrioDefault;
}
results.push([filenames[match[0]], fullname, '#'+anchor, descr, score]);
}
}
}
return results;
},
/**
* search for full-text terms in the index
*/
performTermsSearch : function(searchterms, excluded, terms, titleterms) {
var filenames = this._index.filenames;
var titles = this._index.titles;
var i, j, file;
var fileMap = {};
var scoreMap = {};
var results = [];
// perform the search on the required terms
for (i = 0; i < searchterms.length; i++) {
var word = searchterms[i];
var files = [];
var _o = [
{files: terms[word], score: Scorer.term},
{files: titleterms[word], score: Scorer.title}
];
// no match but word was a required one
if ($u.every(_o, function(o){return o.files === undefined;})) {
break;
}
// found search word in contents
$u.each(_o, function(o) {
var _files = o.files;
if (_files === undefined)
return
if (_files.length === undefined)
_files = [_files];
files = files.concat(_files);
// set score for the word in each file to Scorer.term
for (j = 0; j < _files.length; j++) {
file = _files[j];
if (!(file in scoreMap))
scoreMap[file] = {}
scoreMap[file][word] = o.score;
}
});
// create the mapping
for (j = 0; j < files.length; j++) {
file = files[j];
if (file in fileMap)
fileMap[file].push(word);
else
fileMap[file] = [word];
}
}
// now check if the files don't contain excluded terms
for (file in fileMap) {
var valid = true;
// check if all requirements are matched
if (fileMap[file].length != searchterms.length)
continue;
// ensure that none of the excluded terms is in the search result
for (i = 0; i < excluded.length; i++) {
if (terms[excluded[i]] == file ||
titleterms[excluded[i]] == file ||
$u.contains(terms[excluded[i]] || [], file) ||
$u.contains(titleterms[excluded[i]] || [], file)) {
valid = false;
break;
}
}
// if we have still a valid result we can add it to the result list
if (valid) {
// select one (max) score for the file.
// for better ranking, we should calculate ranking by using words statistics like basic tf-idf...
var score = $u.max($u.map(fileMap[file], function(w){return scoreMap[file][w]}));
results.push([filenames[file], titles[file], '', null, score]);
}
}
return results;
},
/**
* helper function to return a node containing the
* search summary for a given text. keywords is a list
* of stemmed words, hlwords is the list of normal, unstemmed
 * words. The former is used to find the occurrence, the
 * latter for highlighting it.
*/
makeSearchSummary : function(text, keywords, hlwords) {
var textLower = text.toLowerCase();
var start = 0;
$.each(keywords, function() {
var i = textLower.indexOf(this.toLowerCase());
if (i > -1)
start = i;
});
start = Math.max(start - 120, 0);
var excerpt = ((start > 0) ? '...' : '') +
$.trim(text.substr(start, 240)) +
((start + 240 < text.length) ? '...' : '');
var rv = $('<div class="context"></div>').text(excerpt);
$.each(hlwords, function() {
rv = rv.highlightText(this, 'highlighted');
});
return rv;
}
};
$(document).ready(function() {
Search.init();
});

159
doc/build/html/_static/sidebar.js vendored Normal file
View file

@ -0,0 +1,159 @@
/*
* sidebar.js
* ~~~~~~~~~~
*
* This script makes the Sphinx sidebar collapsible.
*
* .sphinxsidebar contains .sphinxsidebarwrapper. This script adds
 * in .sphinxsidebar, after .sphinxsidebarwrapper, the #sidebarbutton
* used to collapse and expand the sidebar.
*
* When the sidebar is collapsed the .sphinxsidebarwrapper is hidden
* and the width of the sidebar and the margin-left of the document
* are decreased. When the sidebar is expanded the opposite happens.
* This script saves a per-browser/per-session cookie used to
* remember the position of the sidebar among the pages.
* Once the browser is closed the cookie is deleted and the position
* reset to the default (expanded).
*
* :copyright: Copyright 2007-2016 by the Sphinx team, see AUTHORS.
* :license: BSD, see LICENSE for details.
*
*/
$(function() {
// global elements used by the functions.
// the 'sidebarbutton' element is defined as global after its
// creation, in the add_sidebar_button function
var bodywrapper = $('.bodywrapper');
var sidebar = $('.sphinxsidebar');
var sidebarwrapper = $('.sphinxsidebarwrapper');
// for some reason, the document has no sidebar; do not run into errors
if (!sidebar.length) return;
// original margin-left of the bodywrapper and width of the sidebar
// with the sidebar expanded
var bw_margin_expanded = bodywrapper.css('margin-left');
var ssb_width_expanded = sidebar.width();
// margin-left of the bodywrapper and width of the sidebar
// with the sidebar collapsed
var bw_margin_collapsed = '.8em';
var ssb_width_collapsed = '.8em';
// colors used by the current theme
var dark_color = $('.related').css('background-color');
var light_color = $('.document').css('background-color');
function sidebar_is_collapsed() {
return sidebarwrapper.is(':not(:visible)');
}
function toggle_sidebar() {
if (sidebar_is_collapsed())
expand_sidebar();
else
collapse_sidebar();
}
function collapse_sidebar() {
sidebarwrapper.hide();
sidebar.css('width', ssb_width_collapsed);
bodywrapper.css('margin-left', bw_margin_collapsed);
sidebarbutton.css({
'margin-left': '0',
'height': bodywrapper.height()
});
sidebarbutton.find('span').text('»');
sidebarbutton.attr('title', _('Expand sidebar'));
document.cookie = 'sidebar=collapsed';
}
function expand_sidebar() {
bodywrapper.css('margin-left', bw_margin_expanded);
sidebar.css('width', ssb_width_expanded);
sidebarwrapper.show();
sidebarbutton.css({
'margin-left': ssb_width_expanded-12,
'height': bodywrapper.height()
});
sidebarbutton.find('span').text('«');
sidebarbutton.attr('title', _('Collapse sidebar'));
document.cookie = 'sidebar=expanded';
}
function add_sidebar_button() {
sidebarwrapper.css({
'float': 'left',
'margin-right': '0',
'width': ssb_width_expanded - 28
});
// create the button
sidebar.append(
'<div id="sidebarbutton"><span>&laquo;</span></div>'
);
var sidebarbutton = $('#sidebarbutton');
light_color = sidebarbutton.css('background-color');
// find the height of the viewport to center the '<<' in the page
var viewport_height;
if (window.innerHeight)
viewport_height = window.innerHeight;
else
viewport_height = $(window).height();
sidebarbutton.find('span').css({
'display': 'block',
'margin-top': (viewport_height - sidebar.position().top - 20) / 2
});
sidebarbutton.click(toggle_sidebar);
sidebarbutton.attr('title', _('Collapse sidebar'));
sidebarbutton.css({
'color': '#FFFFFF',
'border-left': '1px solid ' + dark_color,
'font-size': '1.2em',
'cursor': 'pointer',
'height': bodywrapper.height(),
'padding-top': '1px',
'margin-left': ssb_width_expanded - 12
});
sidebarbutton.hover(
function () {
$(this).css('background-color', dark_color);
},
function () {
$(this).css('background-color', light_color);
}
);
}
function set_position_from_cookie() {
if (!document.cookie)
return;
var items = document.cookie.split(';');
for(var k=0; k<items.length; k++) {
var key_val = items[k].split('=');
var key = key_val[0].replace(/ /, ""); // strip leading spaces
if (key == 'sidebar') {
var value = key_val[1];
if ((value == 'collapsed') && (!sidebar_is_collapsed()))
collapse_sidebar();
else if ((value == 'expanded') && (sidebar_is_collapsed()))
expand_sidebar();
}
}
}
add_sidebar_button();
var sidebarbutton = $('#sidebarbutton');
set_position_from_cookie();
});

1415
doc/build/html/_static/underscore.js vendored Normal file (diff suppressed because it is too large)

BIN
doc/build/html/_static/up-pressed.png vendored Normal file (binary added, 345 B)

BIN
doc/build/html/_static/up.png vendored Normal file (binary added, 345 B)

808
doc/build/html/_static/websupport.js vendored Normal file
View file

@ -0,0 +1,808 @@
/*
* websupport.js
* ~~~~~~~~~~~~~
*
 * sphinx.websupport utilities for all documentation.
*
* :copyright: Copyright 2007-2016 by the Sphinx team, see AUTHORS.
* :license: BSD, see LICENSE for details.
*
*/
(function($) {
$.fn.autogrow = function() {
return this.each(function() {
var textarea = this;
$.fn.autogrow.resize(textarea);
$(textarea)
.focus(function() {
textarea.interval = setInterval(function() {
$.fn.autogrow.resize(textarea);
}, 500);
})
.blur(function() {
clearInterval(textarea.interval);
});
});
};
$.fn.autogrow.resize = function(textarea) {
var lineHeight = parseInt($(textarea).css('line-height'), 10);
var lines = textarea.value.split('\n');
var columns = textarea.cols;
var lineCount = 0;
$.each(lines, function() {
lineCount += Math.ceil(this.length / columns) || 1;
});
var height = lineHeight * (lineCount + 1);
$(textarea).css('height', height);
};
})(jQuery);
(function($) {
var comp, by;
function init() {
initEvents();
initComparator();
}
function initEvents() {
$(document).on("click", 'a.comment-close', function(event) {
event.preventDefault();
hide($(this).attr('id').substring(2));
});
$(document).on("click", 'a.vote', function(event) {
event.preventDefault();
handleVote($(this));
});
$(document).on("click", 'a.reply', function(event) {
event.preventDefault();
openReply($(this).attr('id').substring(2));
});
$(document).on("click", 'a.close-reply', function(event) {
event.preventDefault();
closeReply($(this).attr('id').substring(2));
});
$(document).on("click", 'a.sort-option', function(event) {
event.preventDefault();
handleReSort($(this));
});
$(document).on("click", 'a.show-proposal', function(event) {
event.preventDefault();
showProposal($(this).attr('id').substring(2));
});
$(document).on("click", 'a.hide-proposal', function(event) {
event.preventDefault();
hideProposal($(this).attr('id').substring(2));
});
$(document).on("click", 'a.show-propose-change', function(event) {
event.preventDefault();
showProposeChange($(this).attr('id').substring(2));
});
$(document).on("click", 'a.hide-propose-change', function(event) {
event.preventDefault();
hideProposeChange($(this).attr('id').substring(2));
});
$(document).on("click", 'a.accept-comment', function(event) {
event.preventDefault();
acceptComment($(this).attr('id').substring(2));
});
$(document).on("click", 'a.delete-comment', function(event) {
event.preventDefault();
deleteComment($(this).attr('id').substring(2));
});
$(document).on("click", 'a.comment-markup', function(event) {
event.preventDefault();
toggleCommentMarkupBox($(this).attr('id').substring(2));
});
}
/**
* Set comp, which is a comparator function used for sorting and
* inserting comments into the list.
*/
function setComparator() {
// If the first three letters are "asc", sort in ascending order
// and remove the prefix.
if (by.substring(0,3) == 'asc') {
var i = by.substring(3);
comp = function(a, b) { return a[i] - b[i]; };
} else {
// Otherwise sort in descending order.
comp = function(a, b) { return b[by] - a[by]; };
}
// Reset link styles and format the selected sort option.
$('a.sel').attr('href', '#').removeClass('sel');
$('a.by' + by).removeAttr('href').addClass('sel');
}
/**
* Create a comp function. If the user has preferences stored in
* the sortBy cookie, use those, otherwise use the default.
*/
function initComparator() {
by = 'rating'; // Default to sort by rating.
// If the sortBy cookie is set, use that instead.
if (document.cookie.length > 0) {
var start = document.cookie.indexOf('sortBy=');
if (start != -1) {
start = start + 7;
var end = document.cookie.indexOf(";", start);
if (end == -1) {
end = document.cookie.length;
by = unescape(document.cookie.substring(start, end));
}
}
}
setComparator();
}
/**
* Show a comment div.
*/
function show(id) {
$('#ao' + id).hide();
$('#ah' + id).show();
var context = $.extend({id: id}, opts);
var popup = $(renderTemplate(popupTemplate, context)).hide();
popup.find('textarea[name="proposal"]').hide();
popup.find('a.by' + by).addClass('sel');
var form = popup.find('#cf' + id);
form.submit(function(event) {
event.preventDefault();
addComment(form);
});
$('#s' + id).after(popup);
popup.slideDown('fast', function() {
getComments(id);
});
}
/**
* Hide a comment div.
*/
function hide(id) {
$('#ah' + id).hide();
$('#ao' + id).show();
var div = $('#sc' + id);
div.slideUp('fast', function() {
div.remove();
});
}
/**
* Perform an ajax request to get comments for a node
* and insert the comments into the comments tree.
*/
function getComments(id) {
$.ajax({
type: 'GET',
url: opts.getCommentsURL,
data: {node: id},
success: function(data, textStatus, request) {
var ul = $('#cl' + id);
var speed = 100;
$('#cf' + id)
.find('textarea[name="proposal"]')
.data('source', data.source);
if (data.comments.length === 0) {
ul.html('<li>No comments yet.</li>');
ul.data('empty', true);
} else {
// If there are comments, sort them and put them in the list.
var comments = sortComments(data.comments);
speed = data.comments.length * 100;
appendComments(comments, ul);
ul.data('empty', false);
}
$('#cn' + id).slideUp(speed + 200);
ul.slideDown(speed);
},
error: function(request, textStatus, error) {
showError('Oops, there was a problem retrieving the comments.');
},
dataType: 'json'
});
}
/**
* Add a comment via ajax and insert the comment into the comment tree.
*/
function addComment(form) {
var node_id = form.find('input[name="node"]').val();
var parent_id = form.find('input[name="parent"]').val();
var text = form.find('textarea[name="comment"]').val();
var proposal = form.find('textarea[name="proposal"]').val();
if (text == '') {
showError('Please enter a comment.');
return;
}
// Disable the form that is being submitted.
form.find('textarea,input').attr('disabled', 'disabled');
// Send the comment to the server.
$.ajax({
type: "POST",
url: opts.addCommentURL,
dataType: 'json',
data: {
node: node_id,
parent: parent_id,
text: text,
proposal: proposal
},
success: function(data, textStatus, error) {
// Reset the form.
if (node_id) {
hideProposeChange(node_id);
}
form.find('textarea')
.val('')
.add(form.find('input'))
.removeAttr('disabled');
var ul = $('#cl' + (node_id || parent_id));
if (ul.data('empty')) {
$(ul).empty();
ul.data('empty', false);
}
insertComment(data.comment);
var ao = $('#ao' + node_id);
ao.find('img').attr({'src': opts.commentBrightImage});
if (node_id) {
// if this was a "root" comment, remove the commenting box
// (the user can get it back by reopening the comment popup)
$('#ca' + node_id).slideUp();
}
},
error: function(request, textStatus, error) {
form.find('textarea,input').removeAttr('disabled');
showError('Oops, there was a problem adding the comment.');
}
});
}
/**
* Recursively append comments to the main comment list and children
* lists, creating the comment tree.
*/
function appendComments(comments, ul) {
$.each(comments, function() {
var div = createCommentDiv(this);
ul.append($(document.createElement('li')).html(div));
appendComments(this.children, div.find('ul.comment-children'));
// To avoid stale data, don't store the comment's children in data.
this.children = null;
div.data('comment', this);
});
}
/**
* After adding a new comment, it must be inserted in the correct
* location in the comment tree.
*/
function insertComment(comment) {
var div = createCommentDiv(comment);
// To avoid stale data, don't store the comment's children in data.
comment.children = null;
div.data('comment', comment);
var ul = $('#cl' + (comment.node || comment.parent));
var siblings = getChildren(ul);
var li = $(document.createElement('li'));
li.hide();
// Determine where in the parents children list to insert this comment.
for(i=0; i < siblings.length; i++) {
if (comp(comment, siblings[i]) <= 0) {
$('#cd' + siblings[i].id)
.parent()
.before(li.html(div));
li.slideDown('fast');
return;
}
}
// If we get here, this comment rates lower than all the others,
// or it is the only comment in the list.
ul.append(li.html(div));
li.slideDown('fast');
}
function acceptComment(id) {
$.ajax({
type: 'POST',
url: opts.acceptCommentURL,
data: {id: id},
success: function(data, textStatus, request) {
$('#cm' + id).fadeOut('fast');
$('#cd' + id).removeClass('moderate');
},
error: function(request, textStatus, error) {
showError('Oops, there was a problem accepting the comment.');
}
});
}
function deleteComment(id) {
$.ajax({
type: 'POST',
url: opts.deleteCommentURL,
data: {id: id},
success: function(data, textStatus, request) {
var div = $('#cd' + id);
if (data == 'delete') {
// Moderator mode: remove the comment and all children immediately
div.slideUp('fast', function() {
div.remove();
});
return;
}
// User mode: only mark the comment as deleted
div
.find('span.user-id:first')
.text('[deleted]').end()
.find('div.comment-text:first')
.text('[deleted]').end()
.find('#cm' + id + ', #dc' + id + ', #ac' + id + ', #rc' + id +
', #sp' + id + ', #hp' + id + ', #cr' + id + ', #rl' + id)
.remove();
var comment = div.data('comment');
comment.username = '[deleted]';
comment.text = '[deleted]';
div.data('comment', comment);
},
error: function(request, textStatus, error) {
showError('Oops, there was a problem deleting the comment.');
}
});
}
function showProposal(id) {
$('#sp' + id).hide();
$('#hp' + id).show();
$('#pr' + id).slideDown('fast');
}
function hideProposal(id) {
$('#hp' + id).hide();
$('#sp' + id).show();
$('#pr' + id).slideUp('fast');
}
function showProposeChange(id) {
$('#pc' + id).hide();
$('#hc' + id).show();
var textarea = $('#pt' + id);
textarea.val(textarea.data('source'));
$.fn.autogrow.resize(textarea[0]);
textarea.slideDown('fast');
}
function hideProposeChange(id) {
$('#hc' + id).hide();
$('#pc' + id).show();
var textarea = $('#pt' + id);
textarea.val('').removeAttr('disabled');
textarea.slideUp('fast');
}
function toggleCommentMarkupBox(id) {
$('#mb' + id).toggle();
}
/** Handle when the user clicks on a sort by link. */
function handleReSort(link) {
var classes = link.attr('class').split(/\s+/);
for (var i=0; i<classes.length; i++) {
if (classes[i] != 'sort-option') {
by = classes[i].substring(2);
}
}
setComparator();
// Save/update the sortBy cookie.
var expiration = new Date();
expiration.setDate(expiration.getDate() + 365);
document.cookie= 'sortBy=' + escape(by) +
';expires=' + expiration.toUTCString();
$('ul.comment-ul').each(function(index, ul) {
var comments = getChildren($(ul), true);
comments = sortComments(comments);
appendComments(comments, $(ul).empty());
});
}
/**
* Function to process a vote when a user clicks an arrow.
*/
function handleVote(link) {
if (!opts.voting) {
showError("You'll need to login to vote.");
return;
}
var id = link.attr('id');
if (!id) {
// Didn't click on one of the voting arrows.
return;
}
// If it is an unvote, the new vote value is 0,
// Otherwise it's 1 for an upvote, or -1 for a downvote.
var value = 0;
if (id.charAt(1) != 'u') {
value = id.charAt(0) == 'u' ? 1 : -1;
}
// The data to be sent to the server.
var d = {
comment_id: id.substring(2),
value: value
};
// Swap the vote and unvote links.
link.hide();
$('#' + id.charAt(0) + (id.charAt(1) == 'u' ? 'v' : 'u') + d.comment_id)
.show();
// The div the comment is displayed in.
var div = $('div#cd' + d.comment_id);
var data = div.data('comment');
// If this is not an unvote, and the other vote arrow has
// already been pressed, unpress it.
if ((d.value !== 0) && (data.vote === d.value * -1)) {
$('#' + (d.value == 1 ? 'd' : 'u') + 'u' + d.comment_id).hide();
$('#' + (d.value == 1 ? 'd' : 'u') + 'v' + d.comment_id).show();
}
// Update the comments rating in the local data.
data.rating += (data.vote === 0) ? d.value : (d.value - data.vote);
data.vote = d.value;
div.data('comment', data);
// Change the rating text.
div.find('.rating:first')
.text(data.rating + ' point' + (data.rating == 1 ? '' : 's'));
// Send the vote information to the server.
$.ajax({
type: "POST",
url: opts.processVoteURL,
data: d,
error: function(request, textStatus, error) {
showError('Oops, there was a problem casting that vote.');
}
});
}
/**
* Open a reply form used to reply to an existing comment.
*/
function openReply(id) {
// Swap out the reply link for the hide link
$('#rl' + id).hide();
$('#cr' + id).show();
// Add the reply li to the children ul.
var div = $(renderTemplate(replyTemplate, {id: id})).hide();
$('#cl' + id)
.prepend(div)
// Setup the submit handler for the reply form.
.find('#rf' + id)
.submit(function(event) {
event.preventDefault();
addComment($('#rf' + id));
closeReply(id);
})
.find('input[type=button]')
.click(function() {
closeReply(id);
});
div.slideDown('fast', function() {
$('#rf' + id).find('textarea').focus();
});
}
/**
* Close the reply form opened with openReply.
*/
function closeReply(id) {
// Remove the reply div from the DOM.
$('#rd' + id).slideUp('fast', function() {
$(this).remove();
});
// Swap out the hide link for the reply link
$('#cr' + id).hide();
$('#rl' + id).show();
}
/**
* Recursively sort a tree of comments using the comp comparator.
*/
function sortComments(comments) {
comments.sort(comp);
$.each(comments, function() {
this.children = sortComments(this.children);
});
return comments;
}
/**
* Get the children comments from a ul. If recursive is true,
* recursively include childrens' children.
*/
function getChildren(ul, recursive) {
var children = [];
ul.children().children("[id^='cd']")
.each(function() {
var comment = $(this).data('comment');
if (recursive)
comment.children = getChildren($(this).find('#cl' + comment.id), true);
children.push(comment);
});
return children;
}
/** Create a div to display a comment in. */
function createCommentDiv(comment) {
if (!comment.displayed && !opts.moderator) {
return $('<div class="moderate">Thank you! Your comment will show up '
+ 'once it has been approved by a moderator.</div>');
}
// Prettify the comment rating.
comment.pretty_rating = comment.rating + ' point' +
(comment.rating == 1 ? '' : 's');
// Make a class (for displaying not yet moderated comments differently)
comment.css_class = comment.displayed ? '' : ' moderate';
// Create a div for this comment.
var context = $.extend({}, opts, comment);
var div = $(renderTemplate(commentTemplate, context));
// If the user has voted on this comment, highlight the correct arrow.
if (comment.vote) {
var direction = (comment.vote == 1) ? 'u' : 'd';
div.find('#' + direction + 'v' + comment.id).hide();
div.find('#' + direction + 'u' + comment.id).show();
}
if (opts.moderator || comment.text != '[deleted]') {
div.find('a.reply').show();
if (comment.proposal_diff)
div.find('#sp' + comment.id).show();
if (opts.moderator && !comment.displayed)
div.find('#cm' + comment.id).show();
if (opts.moderator || (opts.username == comment.username))
div.find('#dc' + comment.id).show();
}
return div;
}
/**
* A simple template renderer. Placeholders such as <%id%> are replaced
* by context['id'] with items being escaped. Placeholders such as <#id#>
* are not escaped.
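 * Illustrative example: renderTemplate('<p><%text%></p>', {text: '<b>x</b>'})
 * yields '<p>&lt;b&gt;x&lt;/b&gt;</p>', whereas '<p><#text#></p>' would insert
 * the markup verbatim.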
*/
function renderTemplate(template, context) {
var esc = $(document.createElement('div'));
function handle(ph, escape) {
var cur = context;
$.each(ph.split('.'), function() {
cur = cur[this];
});
return escape ? esc.text(cur || "").html() : cur;
}
return template.replace(/<([%#])([\w\.]*)\1>/g, function() {
return handle(arguments[2], arguments[1] == '%' ? true : false);
});
}
/** Flash an error message briefly. */
function showError(message) {
$(document.createElement('div')).attr({'class': 'popup-error'})
.append($(document.createElement('div'))
.attr({'class': 'error-message'}).text(message))
.appendTo('body')
.fadeIn("slow")
.delay(2000)
.fadeOut("slow");
}
/** Add a link the user uses to open the comments popup. */
$.fn.comment = function() {
return this.each(function() {
var id = $(this).attr('id').substring(1);
var count = COMMENT_METADATA[id];
var title = count + ' comment' + (count == 1 ? '' : 's');
var image = count > 0 ? opts.commentBrightImage : opts.commentImage;
var addcls = count == 0 ? ' nocomment' : '';
$(this)
.append(
$(document.createElement('a')).attr({
href: '#',
'class': 'sphinx-comment-open' + addcls,
id: 'ao' + id
})
.append($(document.createElement('img')).attr({
src: image,
alt: 'comment',
title: title
}))
.click(function(event) {
event.preventDefault();
show($(this).attr('id').substring(2));
})
)
.append(
$(document.createElement('a')).attr({
href: '#',
'class': 'sphinx-comment-close hidden',
id: 'ah' + id
})
.append($(document.createElement('img')).attr({
src: opts.closeCommentImage,
alt: 'close',
title: 'close'
}))
.click(function(event) {
event.preventDefault();
hide($(this).attr('id').substring(2));
})
);
});
};
var opts = {
processVoteURL: '/_process_vote',
addCommentURL: '/_add_comment',
getCommentsURL: '/_get_comments',
acceptCommentURL: '/_accept_comment',
deleteCommentURL: '/_delete_comment',
commentImage: '/static/_static/comment.png',
closeCommentImage: '/static/_static/comment-close.png',
loadingImage: '/static/_static/ajax-loader.gif',
commentBrightImage: '/static/_static/comment-bright.png',
upArrow: '/static/_static/up.png',
downArrow: '/static/_static/down.png',
upArrowPressed: '/static/_static/up-pressed.png',
downArrowPressed: '/static/_static/down-pressed.png',
voting: false,
moderator: false
};
if (typeof COMMENT_OPTIONS != "undefined") {
opts = jQuery.extend(opts, COMMENT_OPTIONS);
}
var popupTemplate = '\
<div class="sphinx-comments" id="sc<%id%>">\
<p class="sort-options">\
Sort by:\
<a href="#" class="sort-option byrating">best rated</a>\
<a href="#" class="sort-option byascage">newest</a>\
<a href="#" class="sort-option byage">oldest</a>\
</p>\
<div class="comment-header">Comments</div>\
<div class="comment-loading" id="cn<%id%>">\
loading comments... <img src="<%loadingImage%>" alt="" /></div>\
<ul id="cl<%id%>" class="comment-ul"></ul>\
<div id="ca<%id%>">\
<p class="add-a-comment">Add a comment\
(<a href="#" class="comment-markup" id="ab<%id%>">markup</a>):</p>\
<div class="comment-markup-box" id="mb<%id%>">\
reStructuredText markup: <i>*emph*</i>, <b>**strong**</b>, \
<code>``code``</code>, \
code blocks: <code>::</code> and an indented block after blank line</div>\
<form method="post" id="cf<%id%>" class="comment-form" action="">\
<textarea name="comment" cols="80"></textarea>\
<p class="propose-button">\
<a href="#" id="pc<%id%>" class="show-propose-change">\
Propose a change &#9657;\
</a>\
<a href="#" id="hc<%id%>" class="hide-propose-change">\
Propose a change &#9663;\
</a>\
</p>\
<textarea name="proposal" id="pt<%id%>" cols="80"\
spellcheck="false"></textarea>\
<input type="submit" value="Add comment" />\
<input type="hidden" name="node" value="<%id%>" />\
<input type="hidden" name="parent" value="" />\
</form>\
</div>\
</div>';
var commentTemplate = '\
<div id="cd<%id%>" class="sphinx-comment<%css_class%>">\
<div class="vote">\
<div class="arrow">\
<a href="#" id="uv<%id%>" class="vote" title="vote up">\
<img src="<%upArrow%>" />\
</a>\
<a href="#" id="uu<%id%>" class="un vote" title="vote up">\
<img src="<%upArrowPressed%>" />\
</a>\
</div>\
<div class="arrow">\
<a href="#" id="dv<%id%>" class="vote" title="vote down">\
<img src="<%downArrow%>" id="da<%id%>" />\
</a>\
<a href="#" id="du<%id%>" class="un vote" title="vote down">\
<img src="<%downArrowPressed%>" />\
</a>\
</div>\
</div>\
<div class="comment-content">\
<p class="tagline comment">\
<span class="user-id"><%username%></span>\
<span class="rating"><%pretty_rating%></span>\
<span class="delta"><%time.delta%></span>\
</p>\
<div class="comment-text comment"><#text#></div>\
<p class="comment-opts comment">\
<a href="#" class="reply hidden" id="rl<%id%>">reply &#9657;</a>\
<a href="#" class="close-reply" id="cr<%id%>">reply &#9663;</a>\
<a href="#" id="sp<%id%>" class="show-proposal">proposal &#9657;</a>\
<a href="#" id="hp<%id%>" class="hide-proposal">proposal &#9663;</a>\
<a href="#" id="dc<%id%>" class="delete-comment hidden">delete</a>\
<span id="cm<%id%>" class="moderation hidden">\
<a href="#" id="ac<%id%>" class="accept-comment">accept</a>\
</span>\
</p>\
<pre class="proposal" id="pr<%id%>">\
<#proposal_diff#>\
</pre>\
<ul class="comment-children" id="cl<%id%>"></ul>\
</div>\
<div class="clearleft"></div>\
</div>\
</div>';
var replyTemplate = '\
<li>\
<div class="reply-div" id="rd<%id%>">\
<form id="rf<%id%>">\
<textarea name="comment" cols="80"></textarea>\
<input type="submit" value="Add reply" />\
<input type="button" value="Cancel" />\
<input type="hidden" name="parent" value="<%id%>" />\
<input type="hidden" name="node" value="" />\
</form>\
</div>\
</li>';
$(document).ready(function() {
init();
});
})(jQuery);
$(document).ready(function() {
// add comment anchors for all paragraphs that are commentable
$('.sphinx-has-comment').comment();
// highlight search words in search results
$("div.context").each(function() {
var params = $.getQueryParameters();
var terms = (params.q) ? params.q[0].split(/\s+/) : [];
var result = $(this);
$.each(terms, function() {
result.highlightText(this.toLowerCase(), 'highlighted');
});
});
// directly open comment window if requested
var anchor = document.location.hash;
if (anchor.substring(0, 9) == '#comment-') {
$('#ao' + anchor.substring(9)).click();
document.location.hash = '#s' + anchor.substring(9);
}
});

92
doc/build/html/genindex.html vendored Normal file
View file

@ -0,0 +1,92 @@
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<title>Index &mdash; T-Pot 16.10 documentation</title>
<link rel="stylesheet" href="_static/classic.css" type="text/css" />
<link rel="stylesheet" href="_static/pygments.css" type="text/css" />
<script type="text/javascript">
var DOCUMENTATION_OPTIONS = {
URL_ROOT: './',
VERSION: '16.10',
COLLAPSE_INDEX: false,
FILE_SUFFIX: '.html',
HAS_SOURCE: true
};
</script>
<script type="text/javascript" src="_static/jquery.js"></script>
<script type="text/javascript" src="_static/underscore.js"></script>
<script type="text/javascript" src="_static/doctools.js"></script>
<link rel="top" title="T-Pot 16.10 documentation" href="index.html" />
</head>
<body role="document">
<div class="related" role="navigation" aria-label="related navigation">
<h3>Navigation</h3>
<ul>
<li class="right" style="margin-right: 10px">
<a href="#" title="General Index"
accesskey="I">index</a></li>
<li class="nav-item nav-item-0"><a href="index.html">T-Pot 16.10 documentation</a> &raquo;</li>
</ul>
</div>
<div class="document">
<div class="documentwrapper">
<div class="bodywrapper">
<div class="body" role="main">
<h1 id="index">Index</h1>
<div class="genindex-jumpbox">
</div>
</div>
</div>
</div>
<div class="sphinxsidebar" role="navigation" aria-label="main navigation">
<div class="sphinxsidebarwrapper">
<div id="searchbox" style="display: none" role="search">
<h3>Quick search</h3>
<form class="search" action="search.html" method="get">
<input type="text" name="q" />
<input type="submit" value="Go" />
<input type="hidden" name="check_keywords" value="yes" />
<input type="hidden" name="area" value="default" />
</form>
<p class="searchtip" style="font-size: 90%">
Enter search terms or a module, class or function name.
</p>
</div>
<script type="text/javascript">$('#searchbox').show(0);</script>
</div>
</div>
<div class="clearer"></div>
</div>
<div class="related" role="navigation" aria-label="related navigation">
<h3>Navigation</h3>
<ul>
<li class="right" style="margin-right: 10px">
<a href="#" title="General Index"
>index</a></li>
<li class="nav-item nav-item-0"><a href="index.html">T-Pot 16.10 documentation</a> &raquo;</li>
</ul>
</div>
<div class="footer" role="contentinfo">
&copy; Copyright 2016, t3chn0m4g3.
Created using <a href="http://sphinx-doc.org/">Sphinx</a> 1.3.6.
</div>
</body>
</html>

111
doc/build/html/index.html vendored Normal file
View file

@ -0,0 +1,111 @@
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<title>Welcome to T-Pot&#8217;s documentation! &mdash; T-Pot 16.10 documentation</title>
<link rel="stylesheet" href="_static/classic.css" type="text/css" />
<link rel="stylesheet" href="_static/pygments.css" type="text/css" />
<script type="text/javascript">
var DOCUMENTATION_OPTIONS = {
URL_ROOT: './',
VERSION: '16.10',
COLLAPSE_INDEX: false,
FILE_SUFFIX: '.html',
HAS_SOURCE: true
};
</script>
<script type="text/javascript" src="_static/jquery.js"></script>
<script type="text/javascript" src="_static/underscore.js"></script>
<script type="text/javascript" src="_static/doctools.js"></script>
<link rel="top" title="T-Pot 16.10 documentation" href="#" />
</head>
<body role="document">
<div class="related" role="navigation" aria-label="related navigation">
<h3>Navigation</h3>
<ul>
<li class="right" style="margin-right: 10px">
<a href="genindex.html" title="General Index"
accesskey="I">index</a></li>
<li class="nav-item nav-item-0"><a href="#">T-Pot 16.10 documentation</a> &raquo;</li>
</ul>
</div>
<div class="document">
<div class="documentwrapper">
<div class="bodywrapper">
<div class="body" role="main">
<div class="section" id="welcome-to-t-pot-s-documentation">
<h1>Welcome to T-Pot&#8217;s documentation!<a class="headerlink" href="#welcome-to-t-pot-s-documentation" title="Permalink to this headline">¶</a></h1>
<p>Contents:</p>
<div class="toctree-wrapper compound">
<ul class="simple">
</ul>
</div>
</div>
<div class="section" id="indices-and-tables">
<h1>Indices and tables<a class="headerlink" href="#indices-and-tables" title="Permalink to this headline">¶</a></h1>
<ul class="simple">
<li><a class="reference internal" href="genindex.html"><span>Index</span></a></li>
<li><a class="reference internal" href="py-modindex.html"><span>Module Index</span></a></li>
<li><a class="reference internal" href="search.html"><span>Search Page</span></a></li>
</ul>
</div>
</div>
</div>
</div>
<div class="sphinxsidebar" role="navigation" aria-label="main navigation">
<div class="sphinxsidebarwrapper">
<h3><a href="#">Table Of Contents</a></h3>
<ul>
<li><a class="reference internal" href="#">Welcome to T-Pot&#8217;s documentation!</a></li>
<li><a class="reference internal" href="#indices-and-tables">Indices and tables</a></li>
</ul>
<div role="note" aria-label="source link">
<h3>This Page</h3>
<ul class="this-page-menu">
<li><a href="_sources/index.txt"
rel="nofollow">Show Source</a></li>
</ul>
</div>
<div id="searchbox" style="display: none" role="search">
<h3>Quick search</h3>
<form class="search" action="search.html" method="get">
<input type="text" name="q" />
<input type="submit" value="Go" />
<input type="hidden" name="check_keywords" value="yes" />
<input type="hidden" name="area" value="default" />
</form>
<p class="searchtip" style="font-size: 90%">
Enter search terms or a module, class or function name.
</p>
</div>
<script type="text/javascript">$('#searchbox').show(0);</script>
</div>
</div>
<div class="clearer"></div>
</div>
<div class="related" role="navigation" aria-label="related navigation">
<h3>Navigation</h3>
<ul>
<li class="right" style="margin-right: 10px">
<a href="genindex.html" title="General Index"
>index</a></li>
<li class="nav-item nav-item-0"><a href="#">T-Pot 16.10 documentation</a> &raquo;</li>
</ul>
</div>
<div class="footer" role="contentinfo">
&copy; Copyright 2016, t3chn0m4g3.
Created using <a href="http://sphinx-doc.org/">Sphinx</a> 1.3.6.
</div>
</body>
</html>

BIN
doc/build/html/objects.inv vendored Normal file (binary added)

99
doc/build/html/search.html vendored Normal file
View file

@ -0,0 +1,99 @@
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<title>Search &mdash; T-Pot 16.10 documentation</title>
<link rel="stylesheet" href="_static/classic.css" type="text/css" />
<link rel="stylesheet" href="_static/pygments.css" type="text/css" />
<script type="text/javascript">
var DOCUMENTATION_OPTIONS = {
URL_ROOT: './',
VERSION: '16.10',
COLLAPSE_INDEX: false,
FILE_SUFFIX: '.html',
HAS_SOURCE: true
};
</script>
<script type="text/javascript" src="_static/jquery.js"></script>
<script type="text/javascript" src="_static/underscore.js"></script>
<script type="text/javascript" src="_static/doctools.js"></script>
<script type="text/javascript" src="_static/searchtools.js"></script>
<link rel="top" title="T-Pot 16.10 documentation" href="index.html" />
<script type="text/javascript">
jQuery(function() { Search.loadIndex("searchindex.js"); });
</script>
<script type="text/javascript" id="searchindexloader"></script>
</head>
<body role="document">
<div class="related" role="navigation" aria-label="related navigation">
<h3>Navigation</h3>
<ul>
<li class="right" style="margin-right: 10px">
<a href="genindex.html" title="General Index"
accesskey="I">index</a></li>
<li class="nav-item nav-item-0"><a href="index.html">T-Pot 16.10 documentation</a> &raquo;</li>
</ul>
</div>
<div class="document">
<div class="documentwrapper">
<div class="bodywrapper">
<div class="body" role="main">
<h1 id="search-documentation">Search</h1>
<div id="fallback" class="admonition warning">
<script type="text/javascript">$('#fallback').hide();</script>
<p>
Please activate JavaScript to enable the search
functionality.
</p>
</div>
<p>
From here you can search these documents. Enter your search
words into the box below and click "search". Note that the search
function will automatically search for all of the words. Pages
containing fewer words won't appear in the result list.
</p>
<form action="" method="get">
<input type="text" name="q" value="" />
<input type="submit" value="search" />
<span id="search-progress" style="padding-left: 10px"></span>
</form>
<div id="search-results">
</div>
</div>
</div>
</div>
<div class="sphinxsidebar" role="navigation" aria-label="main navigation">
<div class="sphinxsidebarwrapper">
</div>
</div>
<div class="clearer"></div>
</div>
<div class="related" role="navigation" aria-label="related navigation">
<h3>Navigation</h3>
<ul>
<li class="right" style="margin-right: 10px">
<a href="genindex.html" title="General Index"
>index</a></li>
<li class="nav-item nav-item-0"><a href="index.html">T-Pot 16.10 documentation</a> &raquo;</li>
</ul>
</div>
<div class="footer" role="contentinfo">
&copy; Copyright 2016, t3chn0m4g3.
Created using <a href="http://sphinx-doc.org/">Sphinx</a> 1.3.6.
</div>
</body>
</html>

1
doc/build/html/searchindex.js vendored Normal file
View file

@ -0,0 +1 @@
Search.setIndex({envversion:46,filenames:["index"],objects:{},objnames:{},objtypes:{},terms:{content:0,index:0,modul:0,page:0,search:0},titles:["Welcome to T-Pot&#8217;s documentation!"],titleterms:{document:0,indic:0,pot:0,tabl:0,welcom:0}})

File diff suppressed because it is too large.

Binary file not shown (modified; 2.9 MiB before, 319 KiB after).

263
doc/make.bat Normal file
View file

@ -0,0 +1,263 @@
@ECHO OFF
REM Command file for Sphinx documentation
if "%SPHINXBUILD%" == "" (
set SPHINXBUILD=sphinx-build
)
set BUILDDIR=build
set ALLSPHINXOPTS=-d %BUILDDIR%/doctrees %SPHINXOPTS% source
set I18NSPHINXOPTS=%SPHINXOPTS% source
if NOT "%PAPER%" == "" (
set ALLSPHINXOPTS=-D latex_paper_size=%PAPER% %ALLSPHINXOPTS%
set I18NSPHINXOPTS=-D latex_paper_size=%PAPER% %I18NSPHINXOPTS%
)
if "%1" == "" goto help
if "%1" == "help" (
:help
echo.Please use `make ^<target^>` where ^<target^> is one of
echo. html to make standalone HTML files
echo. dirhtml to make HTML files named index.html in directories
echo. singlehtml to make a single large HTML file
echo. pickle to make pickle files
echo. json to make JSON files
echo. htmlhelp to make HTML files and a HTML help project
echo. qthelp to make HTML files and a qthelp project
echo. devhelp to make HTML files and a Devhelp project
echo. epub to make an epub
echo. latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter
echo. text to make text files
echo. man to make manual pages
echo. texinfo to make Texinfo files
echo. gettext to make PO message catalogs
echo. changes to make an overview over all changed/added/deprecated items
echo. xml to make Docutils-native XML files
echo. pseudoxml to make pseudoxml-XML files for display purposes
echo. linkcheck to check all external links for integrity
echo. doctest to run all doctests embedded in the documentation if enabled
echo. coverage to run coverage check of the documentation if enabled
goto end
)
if "%1" == "clean" (
for /d %%i in (%BUILDDIR%\*) do rmdir /q /s %%i
del /q /s %BUILDDIR%\*
goto end
)
REM Check if sphinx-build is available and fallback to Python version if any
%SPHINXBUILD% 1>NUL 2>NUL
if errorlevel 9009 goto sphinx_python
goto sphinx_ok
:sphinx_python
set SPHINXBUILD=python -m sphinx.__init__
%SPHINXBUILD% 2> nul
if errorlevel 9009 (
echo.
echo.The 'sphinx-build' command was not found. Make sure you have Sphinx
echo.installed, then set the SPHINXBUILD environment variable to point
echo.to the full path of the 'sphinx-build' executable. Alternatively you
echo.may add the Sphinx directory to PATH.
echo.
echo.If you don't have Sphinx installed, grab it from
echo.http://sphinx-doc.org/
exit /b 1
)
:sphinx_ok
if "%1" == "html" (
%SPHINXBUILD% -b html %ALLSPHINXOPTS% %BUILDDIR%/html
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The HTML pages are in %BUILDDIR%/html.
goto end
)
if "%1" == "dirhtml" (
%SPHINXBUILD% -b dirhtml %ALLSPHINXOPTS% %BUILDDIR%/dirhtml
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The HTML pages are in %BUILDDIR%/dirhtml.
goto end
)
if "%1" == "singlehtml" (
%SPHINXBUILD% -b singlehtml %ALLSPHINXOPTS% %BUILDDIR%/singlehtml
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The HTML pages are in %BUILDDIR%/singlehtml.
goto end
)
if "%1" == "pickle" (
%SPHINXBUILD% -b pickle %ALLSPHINXOPTS% %BUILDDIR%/pickle
if errorlevel 1 exit /b 1
echo.
echo.Build finished; now you can process the pickle files.
goto end
)
if "%1" == "json" (
%SPHINXBUILD% -b json %ALLSPHINXOPTS% %BUILDDIR%/json
if errorlevel 1 exit /b 1
echo.
echo.Build finished; now you can process the JSON files.
goto end
)
if "%1" == "htmlhelp" (
%SPHINXBUILD% -b htmlhelp %ALLSPHINXOPTS% %BUILDDIR%/htmlhelp
if errorlevel 1 exit /b 1
echo.
echo.Build finished; now you can run HTML Help Workshop with the ^
.hhp project file in %BUILDDIR%/htmlhelp.
goto end
)
if "%1" == "qthelp" (
%SPHINXBUILD% -b qthelp %ALLSPHINXOPTS% %BUILDDIR%/qthelp
if errorlevel 1 exit /b 1
echo.
echo.Build finished; now you can run "qcollectiongenerator" with the ^
.qhcp project file in %BUILDDIR%/qthelp, like this:
echo.^> qcollectiongenerator %BUILDDIR%\qthelp\T-Pot.qhcp
echo.To view the help file:
echo.^> assistant -collectionFile %BUILDDIR%\qthelp\T-Pot.ghc
goto end
)
if "%1" == "devhelp" (
%SPHINXBUILD% -b devhelp %ALLSPHINXOPTS% %BUILDDIR%/devhelp
if errorlevel 1 exit /b 1
echo.
echo.Build finished.
goto end
)
if "%1" == "epub" (
%SPHINXBUILD% -b epub %ALLSPHINXOPTS% %BUILDDIR%/epub
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The epub file is in %BUILDDIR%/epub.
goto end
)
if "%1" == "latex" (
%SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex
if errorlevel 1 exit /b 1
echo.
echo.Build finished; the LaTeX files are in %BUILDDIR%/latex.
goto end
)
if "%1" == "latexpdf" (
%SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex
cd %BUILDDIR%/latex
make all-pdf
cd %~dp0
echo.
echo.Build finished; the PDF files are in %BUILDDIR%/latex.
goto end
)
if "%1" == "latexpdfja" (
%SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex
cd %BUILDDIR%/latex
make all-pdf-ja
cd %~dp0
echo.
echo.Build finished; the PDF files are in %BUILDDIR%/latex.
goto end
)
if "%1" == "text" (
%SPHINXBUILD% -b text %ALLSPHINXOPTS% %BUILDDIR%/text
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The text files are in %BUILDDIR%/text.
goto end
)
if "%1" == "man" (
%SPHINXBUILD% -b man %ALLSPHINXOPTS% %BUILDDIR%/man
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The manual pages are in %BUILDDIR%/man.
goto end
)
if "%1" == "texinfo" (
%SPHINXBUILD% -b texinfo %ALLSPHINXOPTS% %BUILDDIR%/texinfo
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The Texinfo files are in %BUILDDIR%/texinfo.
goto end
)
if "%1" == "gettext" (
%SPHINXBUILD% -b gettext %I18NSPHINXOPTS% %BUILDDIR%/locale
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The message catalogs are in %BUILDDIR%/locale.
goto end
)
if "%1" == "changes" (
%SPHINXBUILD% -b changes %ALLSPHINXOPTS% %BUILDDIR%/changes
if errorlevel 1 exit /b 1
echo.
echo.The overview file is in %BUILDDIR%/changes.
goto end
)
if "%1" == "linkcheck" (
%SPHINXBUILD% -b linkcheck %ALLSPHINXOPTS% %BUILDDIR%/linkcheck
if errorlevel 1 exit /b 1
echo.
echo.Link check complete; look for any errors in the above output ^
or in %BUILDDIR%/linkcheck/output.txt.
goto end
)
if "%1" == "doctest" (
%SPHINXBUILD% -b doctest %ALLSPHINXOPTS% %BUILDDIR%/doctest
if errorlevel 1 exit /b 1
echo.
echo.Testing of doctests in the sources finished, look at the ^
results in %BUILDDIR%/doctest/output.txt.
goto end
)
if "%1" == "coverage" (
%SPHINXBUILD% -b coverage %ALLSPHINXOPTS% %BUILDDIR%/coverage
if errorlevel 1 exit /b 1
echo.
echo.Testing of coverage in the sources finished, look at the ^
results in %BUILDDIR%/coverage/python.txt.
goto end
)
if "%1" == "xml" (
%SPHINXBUILD% -b xml %ALLSPHINXOPTS% %BUILDDIR%/xml
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The XML files are in %BUILDDIR%/xml.
goto end
)
if "%1" == "pseudoxml" (
%SPHINXBUILD% -b pseudoxml %ALLSPHINXOPTS% %BUILDDIR%/pseudoxml
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The pseudo-XML files are in %BUILDDIR%/pseudoxml.
goto end
)
:end
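
The new Sphinx scaffolding (make.bat above, plus conf.py and index.rst below) can be built locally. A minimal sketch, assuming Sphinx is installed and the commands are run from the doc/ directory:

    # Build the HTML docs into doc/build/html (Linux/macOS)
    cd doc
    sphinx-build -b html source build/html
    # On Windows the batch file wraps the same call:
    #   make.bat html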

285
doc/source/conf.py Normal file
View file

@ -0,0 +1,285 @@
# -*- coding: utf-8 -*-
#
# T-Pot documentation build configuration file, created by
# sphinx-quickstart on Mon Aug 8 13:24:39 2016.
#
# This file is execfile()d with the current directory set to its
# containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
import sys
import os
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#sys.path.insert(0, os.path.abspath('.'))
# -- General configuration ------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
#needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
'sphinx.ext.autodoc',
]
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix(es) of source filenames.
# You can specify multiple suffix as a list of string:
# source_suffix = ['.rst', '.md']
source_suffix = '.rst'
# The encoding of source files.
#source_encoding = 'utf-8-sig'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'T-Pot'
copyright = u'2016, t3chn0m4g3'
author = u't3chn0m4g3'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
version = u'0.0.1'
# The full version, including alpha/beta/rc tags.
release = u'16.10'
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#
# This is also used if you do content translation via gettext catalogs.
# Usually you set "language" from the command line for these cases.
language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
#today = ''
# Else, today_fmt is used as the format for a strftime call.
#today_fmt = '%B %d, %Y'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = []
# The reST default role (used for this markup: `text`) to use for all
# documents.
#default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
#add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
#add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
#show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# A list of ignored prefixes for module index sorting.
#modindex_common_prefix = []
# If true, keep warnings as "system message" paragraphs in the built documents.
#keep_warnings = False
# If true, `todo` and `todoList` produce output, else they produce nothing.
todo_include_todos = False
# -- Options for HTML output ----------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'default'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
#html_theme_options = {}
# Add any paths that contain custom themes here, relative to this directory.
#html_theme_path = []
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
#html_title = None
# A shorter title for the navigation bar. Default is the same as html_title.
#html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
#html_logo = None
# The name of an image file (relative to this directory) to use as a favicon of
# the docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
#html_favicon = None
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
# Add any extra paths that contain custom files (such as robots.txt or
# .htaccess) here, relative to this directory. These files are copied
# directly to the root of the documentation.
#html_extra_path = []
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
#html_last_updated_fmt = '%b %d, %Y'
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
#html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
#html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
#html_additional_pages = {}
# If false, no module index is generated.
#html_domain_indices = True
# If false, no index is generated.
#html_use_index = True
# If true, the index is split into individual pages for each letter.
#html_split_index = False
# If true, links to the reST sources are added to the pages.
#html_show_sourcelink = True
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
#html_show_sphinx = True
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
#html_show_copyright = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
#html_use_opensearch = ''
# This is the file name suffix for HTML files (e.g. ".xhtml").
#html_file_suffix = None
# Language to be used for generating the HTML full-text search index.
# Sphinx supports the following languages:
# 'da', 'de', 'en', 'es', 'fi', 'fr', 'hu', 'it', 'ja'
# 'nl', 'no', 'pt', 'ro', 'ru', 'sv', 'tr'
#html_search_language = 'en'
# A dictionary with options for the search language support, empty by default.
# Now only 'ja' uses this config value
#html_search_options = {'type': 'default'}
# The name of a javascript file (relative to the configuration directory) that
# implements a search results scorer. If empty, the default will be used.
#html_search_scorer = 'scorer.js'
# Output file base name for HTML help builder.
htmlhelp_basename = 'T-Potdoc'
# -- Options for LaTeX output ---------------------------------------------
latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
#'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
#'pointsize': '10pt',
# Additional stuff for the LaTeX preamble.
#'preamble': '',
# Latex figure (float) alignment
#'figure_align': 'htbp',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
(master_doc, 'T-Pot.tex', u'T-Pot Documentation',
u't3chn0m4g3', 'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
#latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
#latex_use_parts = False
# If true, show page references after internal links.
#latex_show_pagerefs = False
# If true, show URL addresses after external links.
#latex_show_urls = False
# Documents to append as an appendix to all manuals.
#latex_appendices = []
# If false, no module index is generated.
#latex_domain_indices = True
# -- Options for manual page output ---------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
(master_doc, 't-pot', u'T-Pot Documentation',
[author], 1)
]
# If true, show URL addresses after external links.
#man_show_urls = False
# -- Options for Texinfo output -------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
(master_doc, 'T-Pot', u'T-Pot Documentation',
author, 'T-Pot', 'One line description of project.',
'Miscellaneous'),
]
# Documents to append as an appendix to all manuals.
#texinfo_appendices = []
# If false, no module index is generated.
#texinfo_domain_indices = True
# How to display URL addresses: 'footnote', 'no', or 'inline'.
#texinfo_show_urls = 'footnote'
# If true, do not generate a @detailmenu in the "Top" node's menu.
#texinfo_no_detailmenu = False

22
doc/source/index.rst Normal file
View file

@ -0,0 +1,22 @@
.. T-Pot documentation master file, created by
sphinx-quickstart on Mon Aug 8 13:24:39 2016.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
Welcome to T-Pot's documentation!
=================================
Contents:
.. toctree::
:maxdepth: 2
Indices and tables
==================
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`

View file

@ -31,14 +31,14 @@ if [ $1 == "now" ]
then
for name in $(cat installer/data/imgcfg/all_images.conf)
do
docker pull dtagdevsec/$name:latest1603
docker pull dtagdevsec/$name:latest1610
done
mkdir images
chmod 777 images
for name in $(cat installer/data/full_images.conf)
do
echo "Now exporting dtagdevsec/$name:latest1603"
docker save -o images/$name:latest1603.img dtagdevsec/$name:latest1603
docker save -o images/$name:latest1610.img dtagdevsec/$name:latest1610
done
chmod 777 images/*.img
fi

View file

@ -4,7 +4,7 @@
# T-Pot #
# ELK DB backup script #
# #
# v16.03.1 by mo, DTAG, 2016-03-09 #
# v16.10.0 by mo, DTAG, 2016-05-12 #
########################################################
myCOUNT=1
myDATE=$(date +%Y%m%d%H%M)
@ -38,14 +38,14 @@ touch /var/run/check.lock
# Stop ELK to lift db lock
echo "Now stopping ELK ..."
service elk stop
systemctl stop elk
sleep 10
# Backup DB in 2 flavors
echo "Now backing up Elasticsearch data ..."
tar cvfz $myBACKUPPATH"$myDATE"_elkall.tgz $myELKPATH
rm -rf "$myELKPATH"log/*
rm -rf "$myELKPATH"data/elasticsearch/nodes/0/indices/logstash*
rm -rf "$myELKPATH"data/tpotcluster/nodes/0/indices/logstash*
tar cvfz $myBACKUPPATH"$myDATE"_elkbase.tgz $myELKPATH
rm -rf $myELKPATH
tar xvfz $myBACKUPPATH"$myDATE"_elkall.tgz -C /
@ -53,7 +53,7 @@ chmod 760 -R $myELKPATH
chown tpot:tpot -R $myELKPATH
# Start ELK
service elk start
systemctl start elk
echo "Now starting up ELK ..."
# Allow checks to resume

View file

@ -4,7 +4,7 @@
# T-Pot #
# Check container and services script #
# #
# v16.03.1 by mo, DTAG, 2016-03-09 #
# v16.10.0 by mo, DTAG, 2016-05-12 #
########################################################
if [ -a /var/run/check.lock ];
then
@ -19,6 +19,8 @@ touch /var/run/check.lock
myUPTIME=$(awk '{print int($1/60)}' /proc/uptime)
for i in $myIMAGES
do
if [ "$i" != "ui-for-docker" ] && [ "$i" != "netdata" ];
then
myCIDSTATUS=$(docker exec $i supervisorctl status)
if [ $? -ne 0 ];
then
@ -29,9 +31,10 @@ for i in $myIMAGES
if [ $myUPTIME -gt 4 ] && [ $myCIDSTATUS -gt 0 ];
then
echo "Restarting "$i"."
service $i stop
systemctl stop $i
sleep 5
service $i start
systemctl start $i
fi
fi
done
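
The container check now skips ui-for-docker and netdata, since those images do not run supervisord. The same per-container probe the script performs can be run by hand, e.g. (the container name is just an example from images.conf):

    # Query supervisord inside a single honeypot container
    docker exec cowrie supervisorctl status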

124
installer/bin/clean.sh Executable file
View file

@ -0,0 +1,124 @@
#!/bin/bash
########################################################
# T-Pot #
# Container Data Cleaner #
# #
# v16.10.0 by mo, DTAG, 2016-05-28 #
########################################################
# Set persistence
myPERSISTENCE=$2
# Check persistence
if [ "$myPERSISTENCE" = "on" ];
then
echo "### Persistence enabled, nothing to do."
exit
fi
# Let's create a function to clean up and prepare conpot data
fuCONPOT () {
rm -rf /data/conpot/*
mkdir -p /data/conpot/log
chmod 760 /data/conpot -R
chown tpot:tpot /data/conpot -R
}
# Let's create a function to clean up and prepare cowrie data
fuCOWRIE () {
rm -rf /data/cowrie/*
mkdir -p /data/cowrie/log/tty/ /data/cowrie/downloads/ /data/cowrie/keys/ /data/cowrie/misc/
chmod 760 /data/cowrie -R
chown tpot:tpot /data/cowrie -R
}
# Let's create a function to clean up and prepare dionaea data
fuDIONAEA () {
rm -rf /data/dionaea/*
rm /data/ews/dionaea/ews.json
mkdir -p /data/dionaea/log /data/dionaea/bistreams /data/dionaea/binaries /data/dionaea/rtp /data/dionaea/roots/ftp /data/dionaea/roots/tftp /data/dionaea/roots/www /data/dionaea/roots/upnp
chmod 760 /data/dionaea -R
chown tpot:tpot /data/dionaea -R
}
# Let's create a function to clean up and prepare elasticpot data
fuELASTICPOT () {
rm -rf /data/elasticpot/*
mkdir -p /data/elasticpot/log
chmod 760 /data/elasticpot -R
chown tpot:tpot /data/elasticpot -R
}
# Let's create a function to clean up and prepare elk data
fuELK () {
# ELK data will be kept for <= 90 days, check /etc/crontab for curator modification
# ELK daemon log files will be removed
rm -rf /data/elk/log/*
mkdir -p /data/elk/logstash/conf
chmod 760 /data/elk -R
chown tpot:tpot /data/elk -R
}
# Let's create a function to clean up and prepare emobility data
fuEMOBILITY () {
rm -rf /data/emobility/*
rm /data/ews/emobility/ews.json
mkdir -p /data/emobility/log /data/ews/emobility
chmod 760 /data/emobility -R
chown tpot:tpot /data/emobility -R
}
# Let's create a function to clean up and prepare glastopf data
fuGLASTOPF () {
rm -rf /data/glastopf/*
mkdir -p /data/glastopf
chmod 760 /data/glastopf -R
chown tpot:tpot /data/glastopf -R
}
# Let's create a function to clean up and prepare honeytrap data
fuHONEYTRAP () {
rm -rf /data/honeytrap/*
mkdir -p /data/honeytrap/log/ /data/honeytrap/attacks/ /data/honeytrap/downloads/
chmod 760 /data/honeytrap/ -R
chown tpot:tpot /data/honeytrap/ -R
}
# Let's create a function to clean up and prepare suricata data
fuSURICATA () {
rm -rf /data/suricata/*
mkdir -p /data/suricata/log
chmod 760 -R /data/suricata
chown tpot:tpot -R /data/suricata
}
case $1 in
conpot)
fuCONPOT $1
;;
cowrie)
fuCOWRIE $1
;;
dionaea)
fuDIONAEA $1
;;
elasticpot)
fuELASTICPOT $1
;;
elk)
fuELK $1
;;
emobility)
fuEMOBILITY $1
;;
glastopf)
fuGLASTOPF $1
;;
honeytrap)
fuHONEYTRAP $1
;;
suricata)
fuSURICATA $1
;;
esac
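
clean.sh expects the container name as the first argument and the persistence flag as the second; the systemd units below invoke it with "off" before each container start. A manual run (as root, since /data is owned by tpot) looks like:

    # Wipe and re-create the cowrie data directories
    /usr/bin/clean.sh cowrie off
    # With persistence enabled nothing is removed
    /usr/bin/clean.sh cowrie on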

View file

@ -4,7 +4,7 @@
# T-Pot #
# Container and services restart script #
# #
# v16.03.1 by mo, DTAG, 2016-03-09 #
# v16.10.0 by mo, DTAG, 2016-05-12 #
########################################################
myCOUNT=1
@ -38,12 +38,12 @@ if [ $myUPTIME -gt 4 ];
then
for i in $myIMAGES
do
service $i stop
systemctl stop $i
done
echo "### Waiting 10 seconds before restarting docker ..."
sleep 10
iptables -w -F
service docker restart
systemctl restart docker
while true
do
docker info > /dev/null
@ -64,7 +64,7 @@ if [ $myUPTIME -gt 4 ];
echo "### Starting T-Pot services ..."
for i in $myIMAGES
do
service $i start
systemctl start $i
done
sleep 5
else

32
installer/bin/dps.sh Executable file
View file

@ -0,0 +1,32 @@
#!/bin/bash
stty -echo -icanon time 0 min 0
myIMAGES=$(cat /data/images.conf)
while true
do
clear
echo "======| System |======"
echo Date:" "$(date)
echo Uptime:" "$(uptime)
echo CPU temp: $(sensors | grep "Physical" | awk '{ print $4 }')
echo
echo "NAME CREATED PORTS"
for i in $myIMAGES; do
/usr/bin/docker ps -f name=$i --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}" -f status=running -f status=exited | GREP_COLORS='mt=01;35' /bin/egrep --color=always "(^[_a-z-]+ |$)|$" | GREP_COLORS='mt=01;32' /bin/egrep --color=always "(Up[ 0-9a-Z ]+ |$)|$" | GREP_COLORS='mt=01;31' /bin/egrep --color=always "(Exited[ \(0-9\) ]+ [0-9a-Z ]+ ago|$)|$" | tail -n 1
if [ "$1" = "vv" ];
then
/usr/bin/docker exec -t $i /bin/ps -awfuwfxwf | egrep -v -E "awfuwfxwf|/bin/ps"
fi
done
if [[ $1 =~ ^([1-9]|[1-9][0-9]|[1-9][0-9][0-9])$ ]];
then
sleep $1
else
break
fi
read myKEY
if [ "$myKEY" == "q" ];
then
break;
fi
done
stty sane
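
dps.sh either runs once or keeps refreshing: a numeric argument (1-999) is used as the refresh interval in seconds and "q" leaves the loop, while "vv" additionally lists the processes inside each container. Typical calls:

    # Refresh the container overview every 5 seconds (press q to quit)
    dps.sh 5
    # One-shot verbose listing including in-container processes
    dps.sh vv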

View file

@ -4,7 +4,7 @@
# T-Pot #
# Container and services status script #
# #
# v16.03.1 by mo, DTAG, 2016-03-09 #
# v16.10.0 by mo, DTAG, 2016-05-12 #
########################################################
myCOUNT=1
@ -42,7 +42,10 @@ echo CPU temp: $(sensors | grep "Physical" | awk '{ print $4 }')
echo
for i in $myIMAGES
do
if [ "$i" != "ui-for-docker" ] && [ "$i" != "netdata" ];
then
echo "======| Container:" $i "|======"
docker exec $i supervisorctl status | GREP_COLORS='mt=01;32' egrep --color=always "(RUNNING)|$" | GREP_COLORS='mt=01;31' egrep --color=always "(STOPPED|FATAL)|$"
echo
fi
done

View file

@ -4,7 +4,7 @@
# T-Pot #
# Only start the containers found in /etc/init/ #
# #
# v16.03.2 by mo, DTAG, 2016-04-20 #
# v16.10.0 by mo, DTAG, 2016-05-12 #
########################################################
# Make sure not to interrupt a check
@ -32,27 +32,27 @@ done
# We do not want to get interrupted by a check
touch /var/run/check.lock
# Stop T-Pot services and delete all T-Pot upstart scripts
# Stop T-Pot services and disable all T-Pot services
echo "### Stopping T-Pot services and cleaning up."
for i in $(cat /data/imgcfg/all_images.conf);
do
service $i stop
systemctl stop $i
sleep 2
rm -rf /etc/init/$i.conf || true;
systemctl disable $i;
done
# Restarting docker services
echo "### Restarting docker services ..."
service docker stop
systemctl stop docker
sleep 2
service docker start
systemctl start docker
sleep 2
# Setup only T-Pot upstart scripts from images.conf and pull the images
# Enable only T-Pot upstart scripts from images.conf and pull the images
for i in $(cat /data/images.conf);
do
docker pull dtagdevsec/$i:latest1603;
cp /data/upstart/"$i".conf /etc/init/;
docker pull dtagdevsec/$i:latest1610;
systemctl enable $i;
done
# Announce reboot
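
With the move from upstart to systemd the updater now pulls the 16.10 tags and enables one unit per image from images.conf. The equivalent manual steps for a single image (name chosen as an example) would be:

    # Pull the 16.10 image and enable/start its systemd unit
    docker pull dtagdevsec/cowrie:latest1610
    systemctl enable cowrie
    systemctl start cowrie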

Binary file not shown.

View file

@ -4,7 +4,7 @@ spooldir = /opt/ewsposter/spool/
logdir = /opt/ewsposter/log/
del_malware_after_send = false
send_malware = true
sendlimit = 10
sendlimit = 400
contact = your_email_address
proxy =
ip =
@ -56,7 +56,7 @@ malwaredir = /data/cowrie/downloads/
dionaea = true
nodeid = dionaea-community-01
malwaredir = /data/dionaea/binaries/
sqlitedb = /data/dionaea/logsql.sqlite
sqlitedb = /data/dionaea/log/dionaea.sqlite
[HONEYTRAP]
honeytrap = true

View file

@ -7,3 +7,5 @@ emobility
glastopf
honeytrap
suricata
netdata
ui-for-docker

View file

@ -2,3 +2,5 @@ conpot
elk
emobility
suricata
netdata
ui-for-docker

View file

@ -5,3 +5,5 @@ elk
glastopf
honeytrap
suricata
netdata
ui-for-docker

View file

@ -0,0 +1,15 @@
[Unit]
Description=conpot
Requires=docker.service
After=docker.service
[Service]
Restart=always
ExecStartPre=-/usr/bin/docker stop conpot
ExecStartPre=-/usr/bin/docker rm -v conpot
ExecStartPre=/bin/bash -c '/usr/bin/clean.sh conpot off'
ExecStart=/usr/bin/docker run --name conpot --rm=true -v /data/conpot:/data/conpot -v /data/ews:/data/ews -p 1025:1025 -p 50100:50100 dtagdevsec/conpot:latest1610
ExecStop=/usr/bin/docker stop conpot
[Install]
WantedBy=multi-user.target
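
Each dockerized tool now ships as a systemd unit of this shape: stop and remove any stale container, clean the data directory, then run the container with --rm. Day-to-day handling therefore goes through systemctl, for example:

    # Inspect and restart a single honeypot service
    systemctl status conpot        # unit and container state
    journalctl -u conpot -f        # follow the docker run output
    systemctl restart conpot       # recreate the container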

View file

@ -0,0 +1,15 @@
[Unit]
Description=cowrie
Requires=docker.service
After=docker.service
[Service]
Restart=always
ExecStartPre=-/usr/bin/docker stop cowrie
ExecStartPre=-/usr/bin/docker rm -v cowrie
ExecStartPre=/bin/bash -c '/usr/bin/clean.sh cowrie off'
ExecStart=/usr/bin/docker run --name cowrie --rm=true -p 22:2222 -p 23:2223 -v /data/cowrie:/data/cowrie -v /data/ews:/data/ews dtagdevsec/cowrie:latest1610
ExecStop=/usr/bin/docker stop cowrie
[Install]
WantedBy=multi-user.target

View file

@ -0,0 +1,15 @@
[Unit]
Description=dionaea
Requires=docker.service
After=docker.service
[Service]
Restart=always
ExecStartPre=-/usr/bin/docker stop dionaea
ExecStartPre=-/usr/bin/docker rm -v dionaea
ExecStartPre=/bin/bash -c '/usr/bin/clean.sh dionaea off'
ExecStart=/usr/bin/docker run --name dionaea --cap-add=NET_BIND_SERVICE --rm=true -p 21:21 -p 42:42 -p 69:69/udp -p 8081:80 -p 135:135 -p 443:443 -p 445:445 -p 1433:1433 -p 1723:1723 -p 1883:1883 -p 1900:1900 -p 3306:3306 -p 5060:5060 -p 5061:5061 -p 5060:5060/udp -p 11211:11211 -v /data/dionaea:/data/dionaea -v /data/ews:/data/ews dtagdevsec/dionaea:latest1610
ExecStop=/usr/bin/docker stop dionaea
[Install]
WantedBy=multi-user.target

View file

@ -0,0 +1,15 @@
[Unit]
Description=elasticpot
Requires=docker.service
After=docker.service
[Service]
Restart=always
ExecStartPre=-/usr/bin/docker stop elasticpot
ExecStartPre=-/usr/bin/docker rm -v elasticpot
ExecStartPre=/bin/bash -c '/usr/bin/clean.sh elasticpot off'
ExecStart=/usr/bin/docker run --name elasticpot --rm=true -v /data/elasticpot:/data/elasticpot -v /data/ews:/data/ews -p 9200:9200 dtagdevsec/elasticpot:latest1610
ExecStop=/usr/bin/docker stop elasticpot
[Install]
WantedBy=multi-user.target

View file

@ -0,0 +1,15 @@
[Unit]
Description=elk
Requires=docker.service
After=docker.service
[Service]
Restart=always
ExecStartPre=-/usr/bin/docker stop elk
ExecStartPre=-/usr/bin/docker rm -v elk
ExecStartPre=/bin/bash -c '/usr/bin/clean.sh elk'
ExecStart=/usr/bin/docker run --name=elk -v /data:/data -v /var/log:/data/host/log -p 127.0.0.1:64296:5601 -p 127.0.0.1:64298:9200 --rm=true dtagdevsec/elk:latest1610
ExecStop=/usr/bin/docker stop elk
[Install]
WantedBy=multi-user.target

View file

@ -0,0 +1,15 @@
[Unit]
Description=emobility
Requires=docker.service
After=docker.service
[Service]
Restart=always
ExecStartPre=-/usr/bin/docker stop emobility
ExecStartPre=-/usr/bin/docker rm -v emobility
ExecStartPre=/bin/bash -c '/usr/bin/clean.sh emobility off'
ExecStart=/usr/bin/docker run --name emobility --cap-add=NET_ADMIN -p 8080:8080 -v /data/emobility:/data/eMobility -v /data/ews:/data/ews --rm=true dtagdevsec/emobility:latest1610
ExecStop=/usr/bin/docker stop emobility
[Install]
WantedBy=multi-user.target

View file

@ -0,0 +1,15 @@
[Unit]
Description=glastopf
Requires=docker.service
After=docker.service
[Service]
Restart=always
ExecStartPre=-/usr/bin/docker stop glastopf
ExecStartPre=-/usr/bin/docker rm -v glastopf
ExecStartPre=/bin/bash -c '/usr/bin/clean.sh glastopf off'
ExecStart=/usr/bin/docker run --name glastopf --rm=true -v /data/glastopf:/data/glastopf -v /data/ews:/data/ews -p 80:80 dtagdevsec/glastopf:latest1610
ExecStop=/usr/bin/docker stop glastopf
[Install]
WantedBy=multi-user.target

View file

@ -0,0 +1,23 @@
[Unit]
Description=honeytrap
Requires=docker.service
After=docker.service
[Service]
Restart=always
ExecStartPre=-/usr/bin/docker stop honeytrap
ExecStartPre=-/usr/bin/docker rm -v honeytrap
ExecStartPre=/bin/bash -c '/usr/bin/clean.sh honeytrap off'
ExecStartPre=/sbin/iptables -w -A INPUT -p tcp --syn -m state --state NEW -m multiport ! --dports 21,22,23,42,69,80,135,443,445,1433,1723,1883,1900 -j NFQUEUE
ExecStartPre=/sbin/iptables -w -A INPUT -p tcp --syn -m state --state NEW -m multiport ! --dports 3306,5060,5061,5601,11211 -j NFQUEUE
ExecStartPre=/sbin/iptables -w -A INPUT -p tcp --syn -m state --state NEW -m multiport ! --dports 64295,64296,64297,64298,64299,64300,64301 -j NFQUEUE
ExecStartPre=/sbin/iptables -w -A INPUT -p tcp --syn -m state --state NEW -m multiport ! --dports 1025,50100,8080,8081,9200 -j NFQUEUE
ExecStart=/usr/bin/docker run --name honeytrap --cap-add=NET_ADMIN --net=host --rm=true -v /data/honeytrap:/data/honeytrap -v /data/ews:/data/ews dtagdevsec/honeytrap:latest1610
ExecStop=/usr/bin/docker stop honeytrap
ExecStopPost=/sbin/iptables -w -D INPUT -p tcp --syn -m state --state NEW -m multiport ! --dports 1025,50100,8080,8081,9200 -j NFQUEUE
ExecStopPost=/sbin/iptables -w -D INPUT -p tcp --syn -m state --state NEW -m multiport ! --dports 64295,64296,64297,64298,64299,64300,64301 -j NFQUEUE
ExecStopPost=/sbin/iptables -w -D INPUT -p tcp --syn -m state --state NEW -m multiport ! --dports 3306,5060,5061,5601,11211 -j NFQUEUE
ExecStopPost=/sbin/iptables -w -D INPUT -p tcp --syn -m state --state NEW -m multiport ! --dports 21,22,23,42,69,80,135,443,445,1433,1723,1883,1900 -j NFQUEUE
[Install]
WantedBy=multi-user.target
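
honeytrap grabs every incoming TCP SYN that is not excluded by the four NFQUEUE rules above (the honeypot, management and tool ports). A quick way to verify the exclusions are active while the unit runs:

    # List the NFQUEUE rules added by honeytrap.service
    iptables -w -L INPUT -n --line-numbers | grep NFQUEUE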

View file

@ -0,0 +1,14 @@
[Unit]
Description=netdata
Requires=docker.service
After=docker.service
[Service]
Restart=always
ExecStartPre=-/usr/bin/docker stop netdata
ExecStartPre=-/usr/bin/docker rm -v netdata
ExecStart=/usr/bin/docker run --name netdata --net=host --cap-add=SYS_PTRACE --rm=true -v /proc:/host/proc:ro -v /sys:/host/sys:ro -v /var/run/docker.sock:/var/run/docker.sock dtagdevsec/netdata:latest1610
ExecStop=/usr/bin/docker stop netdata
[Install]
WantedBy=multi-user.target

View file

@ -0,0 +1,19 @@
[Unit]
Description=suricata
Requires=docker.service
After=docker.service
[Service]
Restart=always
ExecStartPre=-/usr/bin/docker stop suricata
ExecStartPre=-/usr/bin/docker rm -v suricata
# Get IF, disable offloading, enable promiscuous mode
ExecStartPre=/bin/bash -c '/sbin/ethtool --offload $(/sbin/ip route | /bin/grep $(/bin/hostname -I | /usr/bin/awk \'{print $1 }\') | /usr/bin/awk \'{print $3 }\') rx off tx off'
ExecStartPre=/bin/bash -c '/sbin/ethtool -K $(/sbin/ip route | /bin/grep $(/bin/hostname -I | /usr/bin/awk \'{print $1 }\') | /usr/bin/awk \'{print $3 }\') gso off gro off'
ExecStartPre=/bin/bash -c '/sbin/ip link set $(/sbin/ip route | /bin/grep $(/bin/hostname -I | /usr/bin/awk \'{print $1 }\') | /usr/bin/awk \'{print $3 }\') promisc on'
ExecStartPre=/bin/bash -c '/usr/bin/clean.sh suricata off'
ExecStart=/usr/bin/docker run --name suricata --cap-add=NET_ADMIN --net=host --rm=true -v /data/suricata:/data/suricata dtagdevsec/suricata:latest1610
ExecStop=/usr/bin/docker stop suricata
[Install]
WantedBy=multi-user.target
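
The ExecStartPre lines pack the capture-interface lookup into a single pipeline. Unrolled for readability, the same logic reads roughly like this (a sketch, not part of the unit):

    # Resolve the capture interface from the primary IP, then tune it
    myIP=$(hostname -I | awk '{ print $1 }')
    myIF=$(ip route | grep "$myIP" | awk '{ print $3 }')
    ethtool --offload "$myIF" rx off tx off
    ethtool -K "$myIF" gso off gro off
    ip link set "$myIF" promisc on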

View file

@ -0,0 +1,14 @@
[Unit]
Description=ui-for-docker
Requires=docker.service
After=docker.service
[Service]
Restart=always
ExecStartPre=-/usr/bin/docker stop ui-for-docker
ExecStartPre=-/usr/bin/docker rm -v ui-for-docker
ExecStart=/usr/bin/docker run --name ui-for-docker --rm=true -v /var/run/docker.sock:/var/run/docker.sock -p 127.0.0.1:64299:9000 dtagdevsec/ui-for-docker:latest1610
ExecStop=/usr/bin/docker stop ui-for-docker
[Install]
WantedBy=multi-user.target

View file

@ -0,0 +1,13 @@
[Unit]
Description=wetty
Requires=sshd.service
After=sshd.service
[Service]
Restart=always
User=tsec
Group=tsec
ExecStart=/usr/bin/node /usr/local/lib/node_modules/wetty/app.js -p 64300 --host 127.0.0.1 --sshhost 127.0.0.1 --sshport 64295
[Install]
WantedBy=multi-user.target

View file

@ -1,34 +0,0 @@
########################################################
# T-Pot #
# ConPot upstart script #
# #
# v16.03.2 by mo, DTAG, 2016-03-02 #
########################################################
description "ConPot"
author "mo"
start on started docker and filesystem
stop on runlevel [!2345]
respawn
pre-start script
# Remove any existing conpot containers
myCID=$(docker ps -a | grep conpot | awk '{ print $1 }')
if [ "$myCID" != "" ];
then docker rm -v $myCID;
fi
# Remove any data from previous container if persistence is not enabled
if ! [ -f /data/persistence.on ];
then
rm -rf /data/conpot/* || true
mkdir -p /data/conpot/log
chmod 760 /data/conpot -R
chown tpot:tpot /data/conpot -R
fi
end script
script
/usr/bin/docker run --name conpot --rm=true -v /data/conpot:/data/conpot -v /data/ews:/data/ews -p 81:80 -p 102:102 -p 161:161/udp -p 502:502 dtagdevsec/conpot:latest1603
end script
post-start script
# Delay next start to avoid rapid respawning
sleep 2
end script

View file

@ -1,34 +0,0 @@
########################################################
# T-Pot #
# Cowrie upstart script #
# #
# v16.03.4 by av / mo, DTAG, 2016-03-03 #
########################################################
description "Cowrie"
author "av"
start on started docker and filesystem
stop on runlevel [!2345]
respawn
pre-start script
# Remove any existing cowrie containers
myCID=$(docker ps -a | grep cowrie | awk '{ print $1 }')
if [ "$myCID" != "" ];
then docker rm -v $myCID;
fi
# Remove any data from previous container if persistence is not enabled
if ! [ -f /data/persistence.on ];
then
rm -rf /data/cowrie/* || true
mkdir -p /data/cowrie/log/tty/ /data/cowrie/downloads/ /data/cowrie/keys/ /data/cowrie/misc/
chmod 760 /data/cowrie -R
chown tpot:tpot /data/cowrie -R
fi
end script
script
/usr/bin/docker run --name cowrie --rm=true -p 22:2222 -v /data/cowrie:/data/cowrie -v /data/ews:/data/ews dtagdevsec/cowrie:latest1603
end script
post-start script
# Delay next start to avoid rapid respawning
sleep 2
end script

View file

@ -1,35 +0,0 @@
########################################################
# T-Pot #
# Dionaea upstart script #
# #
# v16.03.6 by mo, DTAG, 2016-03-03 #
########################################################
description "Dionaea"
author "mo"
start on started docker and filesystem
stop on runlevel [!2345]
respawn
pre-start script
# Remove any existing dionaea containers
myCID=$(docker ps -a | grep dionaea | awk '{ print $1 }')
if [ "$myCID" != "" ];
then docker rm -v $myCID;
fi
# Remove any data from previous container if persistence is not enabled
if ! [ -f /data/persistence.on ];
then
rm -rf /data/dionaea/* || true
rm /data/ews/dionaea/ews.json || true
mkdir -p /data/dionaea/log /data/dionaea/bistreams /data/dionaea/binaries /data/dionaea/rtp /data/dionaea/wwwroot
chmod 760 /data/dionaea -R
chown tpot:tpot /data/dionaea -R
fi
end script
script
/usr/bin/docker run --name dionaea --cap-add=NET_BIND_SERVICE --rm=true -p 21:21 -p 42:42 -p 8081:80 -p 135:135 -p 443:443 -p 445:445 -p 1433:1433 -p 3306:3306 -p 5060:5060 -p 5061:5061 -p 69:69/udp -p 5060:5060/udp -v /data/dionaea:/data/dionaea -v /data/ews:/data/ews dtagdevsec/dionaea:latest1603
end script
post-start script
# Delay next start to avoid rapid respawning
sleep 2
end script

View file

@ -1,34 +0,0 @@
########################################################
# T-Pot #
# Elasticpot upstart script #
# #
# v16.03.5 by ms/mo, DTAG, 2016-03-03 #
########################################################
description "ElasticPot"
author "ms"
start on started docker and filesystem
stop on runlevel [!2345]
respawn
pre-start script
# Remove any existing elasticpot containers
myCID=$(docker ps -a | grep elasticpot | awk '{ print $1 }')
if [ "$myCID" != "" ];
then docker rm -v $myCID;
fi
# Remove any data from previous container if persistence is not enabled
if ! [ -f /data/persistence.on ];
then
rm -rf /data/elasticpot/* || true
mkdir -p /data/elasticpot/log
chmod 760 /data/elasticpot -R
chown tpot:tpot /data/elasticpot -R
fi
end script
script
/usr/bin/docker run --name elasticpot --rm=true -v /data/elasticpot:/data/elasticpot -v /data/ews:/data/ews -p 9200:9200 dtagdevsec/elasticpot:latest1603
end script
post-start script
# Delay next start to avoid rapid respawning
sleep 2
end script

View file

@ -1,29 +0,0 @@
########################################################
# T-Pot #
# ELK upstart script #
# #
# v16.03.7 by mo, DTAG, 2016-03-12 #
########################################################
description "ELK"
author "mo"
start on started docker and filesystem
stop on runlevel [!2345]
respawn
pre-start script
# Remove any existing elk containers
myCID=$(docker ps -a | grep elk | awk '{ print $1 }')
if [ "$myCID" != "" ];
then docker rm -v $myCID;
fi
# ELK data will be kept for <= 90 days, check /etc/crontab for curator modification
# ELK daemon log files will be removed
rm -rf /data/elk/log/* || true
end script
script
/usr/bin/docker run --name=elk -v /data:/data -v /var/log:/data/host/log -p 127.0.0.1:64296:8080 --rm=true dtagdevsec/elk:latest1603
end script
post-start script
# Delay next start to avoid rapid respawning
sleep 2
end script

View file

@ -1,33 +0,0 @@
########################################################
# T-Pot #
# eMobility upstart script #
# #
# v16.03.1 by ms / mo, DTAG, 2016-03-03 #
########################################################
description "emobility"
author "ms"
start on started docker and filesystem
stop on runlevel [!2345]
respawn
pre-start script
# Remove any existing emobility containers
myCID=$(docker ps -a | grep emobility | awk '{ print $1 }')
if [ "$myCID" != "" ];
then docker rm $myCID;
fi
# Remove any data from previous container if persistence is not enabled
if ! [ -f /data/persistence.on ];
then
rm -rf /data/emobility/* || true
rm /data/ews/emobility/ews.json || true
mkdir -p /data/emobility/log /data/ews/emobility
chmod 760 /data/emobility -R
chown tpot:tpot /data/emobility -R
fi
end script
script
# Delayed start to avoid rapid respawning
sleep 2
/usr/bin/docker run --name emobility --cap-add=NET_ADMIN -p 8080:8080 -v /data/emobility:/data/eMobility -v /data/ews:/data/ews --rm=true dtagdevsec/emobility:latest1603
end script

View file

@ -1,34 +0,0 @@
########################################################
# T-Pot #
# Glastopf upstart script #
# #
# v16.03.4 by mo, DTAG, 2016-03-04 #
########################################################
description "Glastopf"
author "mo"
start on started docker and filesystem
stop on runlevel [!2345]
respawn
pre-start script
# Remove any existing glastopf containers
myCID=$(docker ps -a | grep glastopf | awk '{ print $1 }')
if [ "$myCID" != "" ];
then docker rm -v $myCID;
fi
# Remove any data from previous container if persistence is not enabled
if ! [ -f /data/persistence.on ];
then
rm -rf /data/glastopf/* || true
mkdir -p /data/glastopf
chmod 760 /data/glastopf -R
chown tpot:tpot /data/glastopf -R
fi
end script
script
/usr/bin/docker run --name glastopf --rm=true -v /data/glastopf:/data/glastopf -v /data/ews:/data/ews -p 80:80 dtagdevsec/glastopf:latest1603
end script
post-start script
# Delay next start to avoid rapid respawning
sleep 2
end script

View file

@ -1,40 +0,0 @@
########################################################
# T-Pot #
# Honeytrap upstart script #
# #
# v16.03.8 by mo, DTAG, 2016-03-04 #
########################################################
description "Honeytrap"
author "mo"
start on started docker and filesystem
stop on runlevel [!2345]
respawn
pre-start script
# Remove any existing honeytrap containers
myCID=$(docker ps -a | grep honeytrap | awk '{ print $1 }')
if [ "$myCID" != "" ];
then docker rm -v $myCID;
fi
# Remove any data from previous container if persistence is not enabled
if ! [ -f /data/persistence.on ];
then
rm -rf /data/honeytrap/* || true
mkdir -p /data/honeytrap/log/ /data/honeytrap/attacks/ /data/honeytrap/downloads/
chmod 760 /data/honeytrap/ -R
chown tpot:tpot /data/honeytrap/ -R
fi
# Enable NFQ chain
/sbin/iptables -w -A INPUT -p tcp --syn -m state --state NEW -j NFQUEUE
end script
script
/usr/bin/docker run --name honeytrap --cap-add=NET_ADMIN --net=host --rm=true -v /data/honeytrap:/data/honeytrap -v /data/ews:/data/ews dtagdevsec/honeytrap:latest1603
end script
post-start script
# Delay next start to avoid rapid respawning
sleep 2
end script
post-stop script
# Drop NFQ chain
/sbin/iptables -w -D INPUT -p tcp --syn -m state --state NEW -j NFQUEUE
end script

View file

@ -1,39 +0,0 @@
########################################################
# T-Pot #
# Suricata upstart script #
# #
# v16.03.3 by mo, DTAG, 2016-03-04 #
########################################################
description "Suricata"
author "mo"
start on started docker and filesystem
stop on runlevel [!2345]
respawn
pre-start script
# Remove any existing suricata containers
myCID=$(docker ps -a | grep suricata | awk '{ print $1 }')
if [ "$myCID" != "" ];
then docker rm -v $myCID;
fi
# Remove any data from previous container if persistence is not enabled
if ! [ -f /data/persistence.on ];
then
rm -rf /data/suricata/* || true
mkdir -p /data/suricata/log
chmod 760 -R /data/suricata
chown tpot:tpot -R /data/suricata
fi
# Get IF, disable offloading, enable promiscious mode
myIF=$(route | grep default | awk '{ print $8 }')
/sbin/ethtool --offload $myIF rx off tx off
/sbin/ethtool -K $myIF gso off gro off
/sbin/ip link set $myIF promisc on
end script
script
/usr/bin/docker run --name suricata --cap-add=NET_ADMIN --net=host --rm=true -v /data/suricata:/data/suricata dtagdevsec/suricata:latest1603
end script
post-start script
# Delay next start to avoid rapid respawning
sleep 2
end script

View file

@ -1,7 +1,5 @@
T-Pot 16.03
T-Pot 16.10
Hostname: \n
IP:
___________ _____________________________
\\__ ___/ \\______ \\_____ \\__ ___/
@ -10,6 +8,9 @@ ___________ _____________________________
|____| |____| \\_______ /____|
\\/
IP:
SSH:
WEB:
CTRL+ALT+F2 - Display current container status
CTRL+ALT+F1 - Return to this screen

View file

@ -0,0 +1,96 @@
user www-data;
worker_processes auto;
pid /run/nginx.pid;
events {
worker_connections 768;
# multi_accept on;
}
http {
##
# Basic Settings
##
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
# server_tokens off;
# server_names_hash_bucket_size 64;
# server_name_in_redirect off;
include /etc/nginx/mime.types;
default_type application/octet-stream;
##
# SSL Settings
##
ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
ssl_prefer_server_ciphers on;
##
# Logging Settings
##
log_format le_json '{ "timestamp": "$time_iso8601", '
'"src_ip": "$remote_addr", '
'"remote_user": "$remote_user", '
'"body_bytes_sent": "$body_bytes_sent", '
'"request_time": "$request_time", '
'"status": "$status", '
'"request": "$request", '
'"request_method": "$request_method", '
'"http_referrer": "$http_referer", '
'"http_user_agent": "$http_user_agent" }';
access_log /var/log/nginx/access.log le_json;
error_log /var/log/nginx/error.log;
##
# Gzip Settings
##
gzip on;
gzip_disable "msie6";
# gzip_vary on;
# gzip_proxied any;
# gzip_comp_level 6;
# gzip_buffers 16 8k;
# gzip_http_version 1.1;
# gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
##
# Virtual Host Configs
##
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
#mail {
# # See sample authentication script at:
# # http://wiki.nginx.org/ImapAuthenticateWithApachePhpScript
#
# # auth_http localhost/auth.php;
# # pop3_capabilities "TOP" "USER";
# # imap_capabilities "IMAP4rev1" "UIDPLUS";
#
# server {
# listen localhost:110;
# protocol pop3;
# proxy on;
# }
#
# server {
# listen localhost:143;
# protocol imap;
# proxy on;
# }
#}
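
Access logging switches to the le_json format so the entries can be shipped as structured JSON. A quick sanity check that the log really parses (assuming jq is available; python -m json.tool works as well):

    # Verify the NGINX access log emits valid JSON
    tail -n 1 /var/log/nginx/access.log | jq .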

View file

@ -0,0 +1,13 @@
-----BEGIN DH PARAMETERS-----
MIICCAKCAgEAiHmfakVLOStSULBdaTbZY/zeFyEeQ19GY9Z5CJg06dIIgIzhxk9L
4xsQdQk8giKOjP6SfX0ZgF5CYaurQ3ljYlP0UlAQQo9+fEErbqj3hCzAxtIpd6Yj
SV6zFdnSjwxWuKAPPywiQNljnHH+Y1KBdbl5VQ9gC3ehtaLo1A4y8q96f6fC5rGU
nfgw4lTxLvPD7NwaOdFTCyK8tTxvUGNJIvf7805IxZ0BvAiBuVaXStaMcqf5BHLP
fYpvIiVaCrtto4elu18nL0tf2CN5n9ai4hlr0nPmNrE/Zrrur78Re5F4Ien9kr4d
xabXvVJJQa9j2NdQO7vk7Cz/dAIiqt/1XKFhll4TTYBqrFVXIwF+FNx636zyOjcO
nlZk/V+IL/UTPnZOv2PGt5+WetvJJubi6B9XgOgVLduI07woAp5qnRJJt6fJW1aA
M86By6WLy5P31Py6eFj8nYgj1V703XgQ5lESKYpeVgqA0bh7daNzOCoGQvvUKlTP
RTu6fs7clw5ta4yYUyvuIKTngH5yGBNdTuP0GWo6Y+Dy1BctVwl2xSw+FhYeuIf/
EB2A3129H59HhbWyNH337+1dfntHfQRXBsT0YSyDxPurI5/FNGcmw+GZEYk4BB8j
g7TwH3GBjbKnjnr7SnhanqmWgybgQw6oR9gDC399eR4LiOk9sbxpX1MCAQI=
-----END DH PARAMETERS-----

View file

@ -0,0 +1,12 @@
#!/bin/bash
# Got root?
myWHOAMI=$(whoami)
if [ "$myWHOAMI" != "root" ]
then
echo "Need to run as root ..."
exit
fi
openssl req -nodes -x509 -sha512 -newkey rsa:8192 -keyout "nginx.key" -out "nginx.crt" -days 3650

View file

@ -0,0 +1,16 @@
#!/bin/bash
# Got root?
myWHOAMI=$(whoami)
if [ "$myWHOAMI" != "root" ]
then
echo "Need to run as root ..."
exit
fi
if [ "$1" = "2048" ] || [ "$1" = "4096" ] || [ "$1" = "8192" ]
then
openssl dhparam -outform PEM -out dhparam$1.pem $1
else
echo "Usage: ./gen-dhparam [2048, 4096, 8192]..."
fi
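
These two helpers regenerate the TLS material referenced by the T-Pot NGINX site config below (nginx.key/nginx.crt and a dhparam PEM). A typical run as root, following the usage string above:

    # Re-create a 4096 bit DH parameter file (this can take a while)
    ./gen-dhparam 4096
    # The certificate script above re-creates nginx.key / nginx.crt via
    # openssl req -nodes -x509 -sha512 -newkey rsa:8192 ... -days 3650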

View file

@ -0,0 +1,161 @@
############################################
### NGINX T-Pot configuration file by mo ###
############################################
###################################
### Allow for 60 reloads per minute
###################################
limit_req_zone $binary_remote_addr zone=base:1m rate=1r/s;
server {
#########################
### Basic server settings
#########################
listen 64297 ssl http2;
index tpotweb.html;
ssl_protocols TLSv1.2;
server_name example.com;
error_page 300 301 302 400 401 402 403 404 500 501 502 503 504 /error.html;
##############################################
### Remove version number add different header
##############################################
server_tokens off;
more_set_headers 'Server: apache';
##############################################
### SSL settings and Cipher Suites
##############################################
ssl_certificate /etc/nginx/ssl/nginx.crt;
ssl_certificate_key /etc/nginx/ssl/nginx.key;
ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH:!DHE:!SHA:!SHA256';
ssl_ecdh_curve secp384r1;
ssl_dhparam /etc/nginx/ssl/dhparam4096.pem;
ssl_prefer_server_ciphers on;
ssl_session_cache shared:SSL:10m;
####################################
### OWASP recommendations / settings
####################################
### Size Limits & Buffer Overflows
### the size may be configured based on the needs.
client_body_buffer_size 100K;
client_header_buffer_size 1k;
client_max_body_size 100k;
large_client_header_buffers 2 1k;
### Mitigate Slow HTTP DoS Attack
### Timeouts definition ##
client_body_timeout 10;
client_header_timeout 10;
keepalive_timeout 5 5;
send_timeout 10;
### X-Frame-Options is to prevent from clickJacking attack
add_header X-Frame-Options SAMEORIGIN;
### disable content-type sniffing on some browsers.
add_header X-Content-Type-Options nosniff;
### This header enables the Cross-site scripting (XSS) filter
add_header X-XSS-Protection "1; mode=block";
### This will enforce HTTP browsing into HTTPS and avoid ssl stripping attack
add_header Strict-Transport-Security "max-age=31536000; includeSubdomains;";
##################################
### Restrict access and basic auth
##################################
# satisfy all;
satisfy any;
# allow 10.0.0.0/8;
# allow 172.16.0.0/12;
# allow 192.168.0.0/16;
allow 127.0.0.1;
allow ::1;
deny all;
auth_basic "closed site";
auth_basic_user_file /etc/nginx/nginxpasswd;
##############################
### Limit brute-force attempts
##############################
location = / {
limit_req zone=base burst=1 nodelay;
}
#################
### Proxied sites
#################
### Kibana
location /kibana/ {
proxy_pass http://localhost:64296;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
rewrite /kibana/(.*)$ /$1 break;
}
### Head plugin
location /myhead/ {
proxy_pass http://localhost:64298/;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
rewrite /myhead/(.*)$ /$1 break;
}
### ui-for-docker
location /ui {
proxy_pass http://127.0.0.1:64299;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $http_connection;
proxy_set_header Host $host;
proxy_redirect off;
rewrite /ui/(.*)$ /$1 break;
}
### web tty
location /wetty {
proxy_pass http://127.0.0.1:64300/wetty;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_read_timeout 43200000;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_set_header X-NginX-Proxy true;
}
### netdata
location /netdata/ {
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Server $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_pass http://localhost:64301;
proxy_http_version 1.1;
proxy_pass_request_headers on;
proxy_set_header Connection "keep-alive";
proxy_store off;
rewrite /netdata/(.*)$ /$1 break;
}
}
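
With this site config all web tools sit behind basic auth and the self-signed certificate on tcp/64297. A simple smoke test from the host itself (replace user/pass with the web credentials chosen during install; -k accepts the self-signed certificate):

    # Reach Kibana and Netdata through the NGINX reverse proxy
    curl -k -u "user:pass" "https://127.0.0.1:64297/kibana/"
    curl -k -u "user:pass" "https://127.0.0.1:64297/netdata/"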

View file

@ -2,12 +2,15 @@
# Let's add the first local ip to the /etc/issue and external ip to ews.ip file
source /etc/environment
myLOCALIP=$(hostname -I | awk '{ print $1 }')
myEXTIP=$(curl myexternalip.com/raw)
sed -i "s#IP:.*#IP: $myLOCALIP, $myEXTIP#" /etc/issue
myEXTIP=$(curl -s myexternalip.com/raw)
sed -i "s#IP:.*#IP: $myLOCALIP ($myEXTIP)#" /etc/issue
sed -i "s#SSH:.*#SSH: ssh -l tsec -p 64295 $myLOCALIP#" /etc/issue
sed -i "s#WEB:.*#WEB: https://$myLOCALIP:64297#" /etc/issue
tee /data/ews/conf/ews.ip << EOF
[MAIN]
ip = $myEXTIP
EOF
echo $myLOCALIP > /data/elk/logstash/mylocal.ip
chown tpot:tpot /data/ews/conf/ews.ip
if [ -f /var/run/check.lock ];
then rm /var/run/check.lock

View file

@ -1,44 +0,0 @@
#!/bin/bash
########################################################
# T-Pot #
# Two-Factor-Authentication and SSH enable script #
# #
# v16.03.2 by mo, DTAG, 2016-03-09 #
########################################################
myBACKTITLE="T-Pot - Two-Factor-Authentication and SSH enable script"
# Let's ask if the user wants to enable two-factor ...
dialog --backtitle "$myBACKTITLE" --title "[ Enable 2FA? ]" --yesno "\nDo you want to enable Two-Factor-Authentication based on Google Authenticator for SSH?" 8 70
my2FA=$?
# Let's ask if the user wants to enable ssh ...
dialog --backtitle "$myBACKTITLE" --title "[ Enable SSH? ]" --yesno "\nDo you want to enable the SSH service?" 8 70
mySSH=$?
# Enable 2FA
if [ "$my2FA" = "0" ] && ! [ -f /etc/pam.d/sshd.bak ];
then
clear
sudo sed -i.bak '\# PAM#aauth required pam_google_authenticator.so' /etc/pam.d/sshd
sudo sed -i.bak 's#ChallengeResponseAuthentication no#ChallengeResponseAuthentication yes#' /etc/ssh/sshd_config
google-authenticator -t -d -f -r 3 -R 30 -w 21
echo "2FA enabled. Please press return to continue ..."
read
elif [ -f /etc/pam.d/sshd.bak ]
then
dialog --backtitle "$myBACKTITLE" --title "[ Already enabled ]" --msgbox "\nIt seems that Two-Factor-Authentication has already been enabled. Please run 'google-authenticator -t -d -f -r 3 -R 30 -w 21' if you want to rewrite your token." 8 70
fi
# Enable SSH
if [ "$mySSH" = "0" ] && [ -f /etc/init/ssh.override ];
then
clear
sudo rm /etc/init/ssh.override
sudo service ssh start
dialog --backtitle "$myBACKTITLE" --title "[ SSH enabled ]" --msgbox "\nThe SSH service has been enabled and is now reachable via port tcp/64295. Password authentication is disabled by default." 8 70
elif ! [ -f /etc/init/ssh.override ]
then
dialog --backtitle "$myBACKTITLE" --title "[ Already enabled ]" --msgbox "\nIt seems that SSH has already been enabled." 8 70
fi

243
installer/install.sh Executable file → Normal file
View file

@ -1,14 +1,11 @@
#!/bin/bash
########################################################
# T-Pot post install script #
# Ubuntu server 14.04.4, x64 #
# Ubuntu server 16.04.0, x64 #
# #
# v16.03.14 by mo, DTAG, 2016-03-08 #
# v16.10.0 by mo, DTAG, 2016-10-27 #
########################################################
# Type of install, SENSOR, INDUSTRIAL or FULL?
myFLAVOR="TPOT"
# Some global vars
myPROXYFILEPATH="/root/tpot/etc/proxy"
myNTPCONFPATH="/root/tpot/etc/ntp"
@ -20,9 +17,17 @@ myPFXHOSTIDPATH="/root/tpot/keys/8021x.id"
fuECHO () {
local myRED=1
local myWHT=7
tput setaf $myRED
echo $1 "$2"
tput setaf $myWHT
tput setaf $myRED -T xterm
echo "$1" "$2"
tput setaf $myWHT -T xterm
}
fuRANDOMWORD () {
local myWORDFILE=/usr/share/dict/names
local myLINES=$(cat $myWORDFILE | wc -l)
local myRANDOM=$((RANDOM % $myLINES))
local myNUM=$((myRANDOM * myRANDOM % $myLINES + 1))
echo -n $(sed -n "$myNUM p" $myWORDFILE | tr -d \' | tr A-Z a-z)
}
# Let's make sure there is a warning if running for a second time
@ -36,6 +41,108 @@ set -e
exec 2> >(tee "install.err")
exec > >(tee "install.log")
# Let's remove NGINX default website
fuECHO "### Removing NGINX default website."
rm /etc/nginx/sites-enabled/default
rm /etc/nginx/sites-available/default
rm /usr/share/nginx/html/index.html
# Let's wait a few seconds to avoid interference with service messages
fuECHO "### Waiting a few seconds to avoid interference with service messages."
sleep 5
# Let's ask user for install type
# Install types are TPOT, HP, INDUSTRIAL, ALL
while [ 1 != 2 ]
do
fuECHO "### Please choose your install type and notice HW recommendation."
fuECHO
fuECHO " [T] - T-Pot Standard Installation"
fuECHO " - Cowrie, Dionaea, Elasticpot, Glastopf, Honeytrap, Suricata & ELK"
fuECHO " - 4 GB RAM (6-8 GB recommended)"
fuECHO " - 64GB disk (128 GB SSD recommended)"
fuECHO
fuECHO " [H] - Honeypots Only Installation"
fuECHO " - Cowrie, Dionaea, ElasticPot, Glastopf & Honeytrap"
fuECHO " - 3 GB RAM (4-6 GB recommended)"
fuECHO " - 64 GB disk (64 GB SSD recommended)"
fuECHO
fuECHO " [I] - Industrial"
fuECHO " - ConPot, eMobility, ELK & Suricata"
fuECHO " - 4 GB RAM (8 GB recommended)"
fuECHO " - 64 GB disk (128 GB SSD recommended)"
fuECHO
fuECHO " [E] - Everything"
fuECHO " - All of the above"
fuECHO " - 8 GB RAM"
fuECHO " - 128 GB disk or larger (128 GB SSD or larger recommended)"
fuECHO
read -p "Install Type: " myTYPE
case "$myTYPE" in
[t,T])
myFLAVOR="TPOT"
break
;;
[h,H])
myFLAVOR="HP"
break
;;
[i,I])
myFLAVOR="INDUSTRIAL"
break
;;
[e,E])
myFLAVOR="ALL"
break
;;
esac
done
fuECHO "### You chose: "$myFLAVOR
fuECHO
# Let's ask user for a web user and password
myOK="n"
myUSER="tsec"
while [ 1 != 2 ]
do
fuECHO "### Please enter a web user name and password."
read -p "Username (tsec not allowed): " myUSER
echo "Your username is: "$myUSER
fuECHO
read -p "OK (y/n)? " myOK
fuECHO
if [ "$myOK" = "y" ] && [ "$myUSER" != "tsec" ] && [ "$myUSER" != "" ];
then
break
fi
done
myPASS1="pass1"
myPASS2="pass2"
while [ "$myPASS1" != "$myPASS2" ]
do
while [ "$myPASS1" == "pass1" ] || [ "$myPASS1" == "" ]
do
read -s -p "Password: " myPASS1
fuECHO
done
read -s -p "Repeat password: " myPASS2
fuECHO
if [ "$myPASS1" != "$myPASS2" ];
then
fuECHO "### Passwords do not match."
myPASS1="pass1"
myPASS2="pass2"
fi
done
htpasswd -b -c /etc/nginx/nginxpasswd $myUSER $myPASS1
fuECHO
# Let's generate a SSL certificate
fuECHO "### Generating a self-signed-certificate for NGINX."
fuECHO "### If you are unsure you can use the default values."
mkdir -p /etc/nginx/ssl
openssl req -nodes -x509 -sha512 -newkey rsa:8192 -keyout "/etc/nginx/ssl/nginx.key" -out "/etc/nginx/ssl/nginx.crt" -days 3650
# Let's setup the proxy for env
if [ -f $myPROXYFILEPATH ];
then fuECHO "### Setting up the proxy."
@ -150,22 +257,39 @@ tee -a /etc/ssh/ssh_config <<EOF
UseRoaming no
EOF
# Let's pull some updates
fuECHO "### Pulling Updates."
apt-get update -y
apt-get upgrade -y
# Let's clean up apt
apt-get autoclean -y
apt-get autoremove -y
# Installing alerta-cli, wetty
fuECHO "### Installing alerta-cli."
pip install --upgrade pip
pip install alerta
fuECHO "### Installing wetty."
ln -s /usr/bin/nodejs /usr/bin/node
npm install git://github.com/t3chn0m4g3/wetty -g
# Let's add the docker repository
fuECHO "### Adding the docker repository."
apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
tee /etc/apt/sources.list.d/docker.list <<EOF
deb https://apt.dockerproject.org/repo ubuntu-trusty main
deb https://apt.dockerproject.org/repo ubuntu-xenial main
EOF
# Let's pull some updates
fuECHO "### Pulling Updates."
apt-get update -y
fuECHO "### Installing Upgrades."
apt-get upgrade -y
# Let's install docker
fuECHO "### Installing docker-engine."
apt-get install docker-engine=1.10.2-0~trusty -y
fuECHO "### You can safely ignore the [FAILED] message,"
fuECHO "### which is caused by a bug in the docker installer."
apt-get install docker-engine=1.12.2-0~xenial -y || true && sleep 5
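# The "|| true" above keeps the script (running under "set -e") from aborting
# if the docker unit reports a failure while the package is being installed.
# Optional sanity check afterwards (assumption, not in the original script):
#   docker version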
# Let's add proxy settings to docker defaults
if [ -f $myPROXYFILEPATH ];
@ -187,7 +311,11 @@ adduser --system --no-create-home --uid 2000 --disabled-password --disabled-logi
# Let's set the hostname
fuECHO "### Setting a new hostname."
myHOST=ce$(date +%s)$RANDOM
myHOST=$(curl -s www.nsanamegenerator.com | html2text | tr A-Z a-z | awk '{print $1}')
if [ "$myHOST" = "" ]; then
fuECHO "### Failed to fetch name from remote, using local cache."
myHOST=$(fuRANDOMWORD)
fi
hostnamectl set-hostname $myHOST
sed -i 's#127.0.1.1.*#127.0.1.1\t'"$myHOST"'#g' /etc/hosts
@ -196,8 +324,12 @@ fuECHO "### Patching sshd_config to listen on port 64295 and deny password authe
sed -i 's#Port 22#Port 64295#' /etc/ssh/sshd_config
sed -i 's#\#PasswordAuthentication yes#PasswordAuthentication no#' /etc/ssh/sshd_config
# Let's disable ssh service
echo "manual" >> /etc/init/ssh.override
# Let's allow ssh password authentication from RFC1918 networks
fuECHO "### Allow SSH password authentication from RFC1918 networks"
tee -a /etc/ssh/sshd_config <<EOF
Match address 127.0.0.1,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16
PasswordAuthentication yes
EOF
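# Optional sanity check (not in the original script): validate the modified
# sshd_config before the next sshd (re)start, e.g.
#   /usr/sbin/sshd -t && echo "sshd_config OK"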
# Let's patch docker defaults, so we can run images as service
fuECHO "### Patching docker defaults."
@ -205,6 +337,10 @@ tee -a /etc/default/docker <<EOF
DOCKER_OPTS="-r=false"
EOF
# Let's restart docker for proxy changes to take effect
systemctl restart docker
sleep 5
# Let's make sure only myFLAVOR images will be downloaded and started
case $myFLAVOR in
HP)
@ -227,20 +363,11 @@ esac
# Let's load docker images
fuECHO "### Loading docker images. Please be patient, this may take a while."
if [ -d /root/tpot/images ];
then
fuECHO "### Found cached images and will load from local."
for name in $(cat /root/tpot/data/images.conf)
do
fuECHO "### Now loading dtagdevsec/$name:latest1603"
docker load -i /root/tpot/images/$name:latest1603.img
docker pull dtagdevsec/$name:latest1610
done
else
for name in $(cat /root/tpot/data/images.conf)
do
docker pull dtagdevsec/$name:latest1603
done
fi
#fi
# Let's add the daily update check with a weekly clean interval
fuECHO "### Modifying update checks."
@ -264,13 +391,16 @@ fuECHO "### Adding cronjobs."
tee -a /etc/crontab <<EOF
# Show running containers every 60s via /dev/tty2
*/2 * * * * root status.sh > /dev/tty2
#*/2 * * * * root status.sh > /dev/tty2
# Check if containers and services are up
*/5 * * * * root check.sh
# Example for alerta-cli IP update
#*/5 * * * * root alerta --endpoint-url http://<ip>:<port>/api delete --filters resource=<host> && alerta --endpoint-url http://<ip>:<port>/api send -e IP -r <host> -E Production -s ok -S T-Pot -t \$(cat /data/elk/logstash/mylocal.ip) --status open
# Check if updated images are available and download them
27 1 * * * root for i in \$(cat /data/images.conf); do docker pull dtagdevsec/\$i:latest1603; done
27 1 * * * root for i in \$(cat /data/images.conf); do docker pull dtagdevsec/\$i:latest1610; done
# Restart docker service and containers
27 3 * * * root dcres.sh
@ -281,17 +411,21 @@ tee -a /etc/crontab <<EOF
# Update IP and erase check.lock if it exists
27 15 * * * root /etc/rc.local
# Daily reboot
27 23 * * * root reboot
# Check for updated packages every sunday, upgrade and reboot
27 16 * * 0 root apt-get autoclean -y; apt-get autoremove -y; apt-get update -y; apt-get upgrade -y; sleep 5; reboot
27 16 * * 0 root apt-get autoclean -y && apt-get autoremove -y && apt-get update -y && apt-get upgrade -y && sleep 10 && reboot
EOF
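# For reference (comment only): the entries above use the /etc/crontab format
# "minute hour day-of-month month day-of-week user command", e.g.
# "27 1 * * * root ..." pulls updated images daily at 01:27 and
# "27 16 * * 0 root ..." runs the weekly upgrade on Sundays at 16:27.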
# Let's create some files and folders
fuECHO "### Creating some files and folders."
mkdir -p /data/conpot/log \
/data/cowrie/log/tty/ /data/cowrie/downloads/ /data/cowrie/keys/ /data/cowrie/misc/ \
/data/dionaea/log /data/dionaea/bistreams /data/dionaea/binaries /data/dionaea/rtp /data/dionaea/wwwroot \
/data/dionaea/log /data/dionaea/bistreams /data/dionaea/binaries /data/dionaea/rtp /data/dionaea/roots/ftp /data/dionaea/roots/tftp /data/dionaea/roots/www /data/dionaea/roots/upnp \
/data/elasticpot/log \
/data/elk/data /data/elk/log /data/glastopf /data/honeytrap/log/ /data/honeytrap/attacks/ /data/honeytrap/downloads/ \
/data/elk/data /data/elk/log /data/elk/logstash/conf \
/data/glastopf /data/honeytrap/log/ /data/honeytrap/attacks/ /data/honeytrap/downloads/ \
/data/emobility/log \
/data/ews/log /data/ews/conf /data/ews/dionaea /data/ews/emobility \
/data/suricata/log /home/tsec/.ssh/
@ -301,38 +435,39 @@ chmod 500 /root/tpot/bin/*
chmod 600 /root/tpot/data/*
chmod 644 /root/tpot/etc/issue
chmod 755 /root/tpot/etc/rc.local
chmod 700 /root/tpot/home/*
chown tsec:tsec /root/tpot/home/*
chmod 644 /root/tpot/data/upstart/*
chmod 644 /root/tpot/data/systemd/*
# Let's copy some files
tar xvfz /root/tpot/data/elkbase.tgz -C /
cp /root/tpot/data/elkbase.tgz /data/
cp -R /root/tpot/bin/* /usr/bin/
cp -R /root/tpot/data/* /data/
cp -R /root/tpot/etc/issue /etc/
cp -R /root/tpot/home/* /home/tsec/
cp /root/tpot/data/systemd/* /etc/systemd/system/
cp /root/tpot/etc/issue /etc/
cp -R /root/tpot/etc/nginx/ssl /etc/nginx/
cp /root/tpot/etc/nginx/tpotweb.conf /etc/nginx/sites-available/
cp /root/tpot/etc/nginx/nginx.conf /etc/nginx/nginx.conf
cp /root/tpot/keys/authorized_keys /home/tsec/.ssh/authorized_keys
cp /root/tpot/usr/share/nginx/html/* /usr/share/nginx/html/
for i in $(cat /data/images.conf);
do
cp /data/upstart/$i.conf /etc/init/;
systemctl enable $i;
done
systemctl enable wetty
# Let's turn persistence off by default
touch /data/persistence.off
# Let's enable T-Pot website
fuECHO "### Enabling T-Pot website."
ln -s /etc/nginx/sites-available/tpotweb.conf /etc/nginx/sites-enabled/tpotweb.conf
# Let's take care of some files and permissions
chmod 760 -R /data
chown tpot:tpot -R /data
chmod 600 /home/tsec/.ssh/authorized_keys
chown tsec:tsec /home/tsec/*.sh /home/tsec/.ssh /home/tsec/.ssh/authorized_keys
# Let's clean up apt
apt-get autoclean -y
apt-get autoremove -y
chown tsec:tsec /home/tsec/.ssh /home/tsec/.ssh/authorized_keys
# Let's replace "quiet splash" options, set a console font for more screen canvas and update grub
sed -i 's#GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"#GRUB_CMDLINE_LINUX_DEFAULT="consoleblank=0"#' /etc/default/grub
sed -i 's#GRUB_CMDLINE_LINUX=""#GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1"#' /etc/default/grub
sed -i 's#\#GRUB_GFXMODE=640x480#GRUB_GFXMODE=800x600x32#' /etc/default/grub
tee -a /etc/default/grub <<EOF
GRUB_GFXPAYLOAD=800x600x32
@ -346,19 +481,29 @@ sed -i 's#FONTSIZE=".*#FONTSIZE="12x6"#' /etc/default/console-setup
update-initramfs -u
# Let's enable a color prompt
sed -i 's#\#force_color_prompt=yes#force_color_prompt=yes#' /home/tsec/.bashrc
sed -i 's#\#force_color_prompt=yes#force_color_prompt=yes#' /root/.bashrc
myROOTPROMPT='PS1="\[\033[38;5;8m\][\[$(tput sgr0)\]\[\033[38;5;1m\]\u\[$(tput sgr0)\]\[\033[38;5;6m\]@\[$(tput sgr0)\]\[\033[38;5;4m\]\h\[$(tput sgr0)\]\[\033[38;5;6m\]:\[$(tput sgr0)\]\[\033[38;5;5m\]\w\[$(tput sgr0)\]\[\033[38;5;8m\]]\[$(tput sgr0)\]\[\033[38;5;1m\]\\$\[$(tput sgr0)\]\[\033[38;5;15m\] \[$(tput sgr0)\]"'
myUSERPROMPT='PS1="\[\033[38;5;8m\][\[$(tput sgr0)\]\[\033[38;5;2m\]\u\[$(tput sgr0)\]\[\033[38;5;6m\]@\[$(tput sgr0)\]\[\033[38;5;4m\]\h\[$(tput sgr0)\]\[\033[38;5;6m\]:\[$(tput sgr0)\]\[\033[38;5;5m\]\w\[$(tput sgr0)\]\[\033[38;5;8m\]]\[$(tput sgr0)\]\[\033[38;5;2m\]\\$\[$(tput sgr0)\]\[\033[38;5;15m\] \[$(tput sgr0)\]"'
tee -a /root/.bashrc << EOF
$myROOTPROMPT
EOF
tee -a /home/tsec/.bashrc << EOF
$myUSERPROMPT
EOF
# Let's create ews.ip before reboot to prevent a race condition on first start
source /etc/environment
myLOCALIP=$(hostname -I | awk '{ print $1 }')
myEXTIP=$(curl myexternalip.com/raw)
sed -i "s#IP:.*#IP: $myLOCALIP, $myEXTIP#" /etc/issue
myEXTIP=$(curl -s myexternalip.com/raw)
sed -i "s#IP:.*#IP: $myLOCALIP ($myEXTIP)#" /etc/issue
sed -i "s#SSH:.*#SSH: ssh -l tsec -p 64295 $myLOCALIP#" /etc/issue
sed -i "s#WEB:.*#WEB: https://$myLOCALIP:64297#" /etc/issue
tee /data/ews/conf/ews.ip << EOF
[MAIN]
ip = $myEXTIP
EOF
echo $myLOCALIP > /data/elk/logstash/mylocal.ip
chown tpot:tpot /data/ews/conf/ews.ip
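# Hardening sketch (assumption, not in the original script): myexternalip.com
# may be unreachable during install, leaving $myEXTIP empty; a fallback such as
#   [ -z "$myEXTIP" ] && myEXTIP=$myLOCALIP
# placed right after the curl call would keep /etc/issue and ews.ip populated.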
# Final steps
fuECHO "### Thanks for your patience. Now rebooting."
mv /root/tpot/etc/rc.local /etc/rc.local && rm -rf /root/tpot/ && chage -d 0 tsec && sleep 2 && reboot
mv /root/tpot/etc/rc.local /etc/rc.local && rm -rf /root/tpot/ && sleep 2 && reboot

View file

@ -1,4 +1,2 @@
#!/bin/bash
# Stop plymouth to allow for terminal interaction
plymouth quit
openvt -w -s /root/tpot/install.sh

File diff suppressed because it is too large

Binary file not shown.

After

Width:  |  Height:  |  Size: 805 B

View file

@ -0,0 +1,21 @@
<!DOCTYPE html>
<html lang="en_US">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>T-Pot</title>
<link href="style.css" rel="stylesheet" type="text/css"/>
</head>
<body bgcolor="#E20074">
<center>
<a href="/tpotweb.html" target="_top" class="btn">Home</a>
<a href="/kibana/" target="main" class="btn">Kibana</a>
<a href="/myhead/_plugin/head/" target="main" class="btn">ES Head Plugin</a>
<a href="/ui/" target="main" class="btn">UI-For-Docker</a>
<a href="/wetty/ssh/tsec" target="main" class="btn">WebSSH</a>
<a href="/netdata/" target="_blank" class="btn">Netdata</a>
</center>
</body>
</html>

View file

@ -0,0 +1,17 @@
.btn {
-webkit-border-radius: 0;
-moz-border-radius: 0;
border-radius: 0px;
font-family: Arial;
color: #ffffff;
font-size: 12px;
background: #E20074;
padding: 2px 30px 2px 30px;
text-decoration: none;
}
.btn:hover {
background: #c2c2c2;
text-decoration: none;
}

View file

@ -0,0 +1,15 @@
<!DOCTYPE html>
<html lang="en_US">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>T-Pot</title>
</head>
<frameset rows='20,*' border='0' frameborder='0' framespacing='0'>
<frame src='navbar.html' name='navbar' marginwidth='0' marginheight='0' scrolling='no' noresize>
<frame src='/kibana/' name='main' marginwidth='0' marginheight='0' scrolling='auto' noresize>
<noframes>
</noframes>
</frameset>
</html>

View file

@ -1,5 +1,7 @@
default install
label install
menu label ^Install T-Pot 16.03
kernel /install/vmlinuz
append file=/cdrom/tpot/tpot.seed initrd=/install/initrd.gz ks=cdrom:/tpot/ks.cfg console-setup/ask_detect=true --
menu label ^T-Pot 16.10
menu default
kernel linux
append vga=788 initrd=initrd.gz console-setup/ask_detect=true --
#append vga=788 initrd=initrd.gz console-setup/ask_detect=true DEBCONF_DEBUG=developer

View file

@ -1,41 +0,0 @@
#Generated by Kickstart Configurator
#platform=AMD64 or Intel EM64T
#System language
lang en
#Language modules to install
#langsupport en_US
#System keyboard
#keyboard de
#System mouse
mouse
#System timezone
#timezone Europe/Berlin
#Root password
rootpw --disabled
#Initial user
user tsec --fullname "tsec" --iscrypted --password $1$jAw1TW8v$a2WFamxQJfpPYZmn4qJT71
#Reboot after installation
reboot
#Use text mode install
text
#Install OS instead of upgrade
install
#Use CDROM installation media
cdrom
#System bootloader configuration
bootloader --location=mbr
#Clear the Master Boot Record
zerombr yes
#Partition clearing information
clearpart --all --initlabel
#Disk partitioning information
part swap --size=8192
#part /data --fstype ext4 --size 8192
part / --fstype ext4 --size 1 --grow
#System authorization information
auth --useshadow --enablemd5
#Firewall configuration
firewall --disabled
#Do not configure the X Window System
skipx

View file

@ -2,15 +2,15 @@
########################################################
# T-Pot #
# .ISO maker #
# .ISO creator #
# #
# v16.03.4 by mo, DTAG, 2016-03-08 #
# v16.10.0 by mo, DTAG, 2016-07-04 #
########################################################
# Let's define some global vars
myBACKTITLE="T-Pot - ISO Maker"
myUBUNTULINK="http://releases.ubuntu.com/14.04.4/ubuntu-14.04.4-server-amd64.iso"
myUBUNTUISO="ubuntu-14.04.4-server-amd64.iso"
myBACKTITLE="T-Pot - ISO Creator"
myUBUNTULINK="http://archive.ubuntu.com/ubuntu/dists/xenial-updates/main/installer-amd64/current/images/netboot/mini.iso"
myUBUNTUISO="mini.iso"
myTPOTISO="tpot.iso"
myTPOTDIR="tpotiso"
myTPOTSEED="preseed/tpot.seed"
@ -28,7 +28,8 @@ myTMP="tmp"
myWHOAMI=$(whoami)
if [ "$myWHOAMI" != "root" ]
then
echo "Please run as root ..."
echo "Need to run as root ..."
sudo ./$0
exit
fi
@ -81,17 +82,13 @@ if [ "$myINST" != "" ]
fi
# Let's ask if the user wants to run the script ...
dialog --backtitle "$myBACKTITLE" --title "[ Continue? ]" --yesno "\nThis script will download the latest supported Ubuntu Server and build the T-Pot .iso" 8 50
dialog --backtitle "$myBACKTITLE" --title "[ Continue? ]" --yesno "\nDownload latest supported Ubuntu Mini ISO and build the T-Pot Install Image." 8 50
mySTART=$?
if [ "$mySTART" = "1" ];
then
exit
fi
# Let's ask for the type of installation SENSOR, INDUSTRIAL or FULL?
myFLAVOR=$(dialog --no-cancel --backtitle "$myBACKTITLE" --title "[ Installation type ... ]" --radiolist "" 11 76 4 "TPOT" "Standard (w/o INDUSTRIAL)" on "HP" "Honeypots only (w/o INDUSTRIAL)" off "INDUSTRIAL" "ConPot, eMobility, ELK, Suricata (8GB RAM recommended)" off "ALL" "Everything (8GB RAM required)" off 3>&1 1>&2 2>&3 3>&-)
sed -i 's#^myFLAVOR=.*#myFLAVOR="'$myFLAVOR'"#' $myINSTALLERPATH
# Let's ask the user for a proxy ...
while true;
do
@ -204,7 +201,7 @@ EOF
fi
done
# Let's get Ubuntu 14.04.4 as .iso
# Let's download Ubuntu Minimal ISO
if [ ! -f $myUBUNTUISO ]
then
wget $myUBUNTULINK --progress=dot 2>&1 | awk '{print $7+0} fflush()' | dialog --backtitle "$myBACKTITLE" --title "[ Downloading Ubuntu ... ]" --gauge "" 5 70;
@ -215,31 +212,40 @@ fi
# Let's loop mount it and copy all contents
mkdir -p $myTMP $myTPOTDIR
losetup /dev/loop0 $myUBUNTUISO
mount /dev/loop0 $myTMP
cp -rT $myTMP $myTPOTDIR
chmod 777 -R $myTPOTDIR
mount -o loop $myUBUNTUISO $myTMP
rsync -a $myTMP/ $myTPOTDIR
umount $myTMP
losetup -d /dev/loop0
# Let's modify initrd
gunzip $myTPOTDIR/initrd.gz
mkdir $myTPOTDIR/tmp
cd $myTPOTDIR/tmp
cpio --extract --make-directories --no-absolute-filenames < ../initrd
cd ..
rm initrd
cd ..
# Let's add the files for the automated install
mkdir -p $myTPOTDIR/tpot
cp installer/* -R $myTPOTDIR/tpot/
cp isolinux/* $myTPOTDIR/isolinux/
cp kickstart/* $myTPOTDIR/tpot/
cp preseed/* $myTPOTDIR/tpot/
if [ -d images ];
then
cp -R images $myTPOTDIR/tpot/images/
fi
chmod 777 -R $myTPOTDIR
mkdir -p $myTPOTDIR/tmp/opt/tpot
cp installer/* -R $myTPOTDIR/tmp/opt/tpot/
cp isolinux/* $myTPOTDIR/
cp preseed/tpot.seed $myTPOTDIR/tmp/preseed.cfg
# Let's create the new initrd
cd $myTPOTDIR/tmp
find . | cpio -H newc --create > ../initrd
cd ..
gzip initrd
rm -rf tmp
cd ..
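# For reference (comment only): the extracted initrd tree (now containing the
# copied preseed.cfg and /opt/tpot files) is repacked above with
# "find . | cpio -H newc --create", the archive format the kernel expects for
# an initramfs, and then recompressed with gzip.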
# Let's create the new .iso
cd $myTPOTDIR
mkisofs -gui -D -r -V "T-Pot" -cache-inodes -J -l -b isolinux/isolinux.bin -c isolinux/boot.cat -no-emul-boot -boot-load-size 4 -boot-info-table -o ../$myTPOTISO ../$myTPOTDIR 2>&1 | awk '{print $1+0} fflush()' | cut -f1 -d"." | dialog --backtitle "$myBACKTITLE" --title "[ Building T-Pot .iso ... ]" --gauge "" 5 70 0
mkisofs -gui -D -r -V "T-Pot" -cache-inodes -J -l -b isolinux.bin -c boot.cat -no-emul-boot -boot-load-size 4 -boot-info-table -o ../$myTPOTISO ../$myTPOTDIR 2>&1 | awk '{print $1+0} fflush()' | cut -f1 -d"." | dialog --backtitle "$myBACKTITLE" --title "[ Building T-Pot .iso ... ]" --gauge "" 5 70 0
echo 100 | dialog --backtitle "$myBACKTITLE" --title "[ Building T-Pot .iso ... Done! ]" --gauge "" 5 70
cd ..
isohybrid $myTPOTISO
sha256sum $myTPOTISO > tpot.sha256
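# Verification sketch (optional, not part of the original script): the checksum
# written above can later be checked from the same directory with
#   sha256sum -c tpot.sha256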
# Let's write the image
while true;

View file

@ -1,46 +1,126 @@
# T-Pot preseed file by mo
# Setting locale
#d-i debian-installer/language string en
##############################################
### T-Pot Preseed Configuration File by mo ###
##############################################
####################
### Locale Selection
####################
#d-i debian-installer/country string DE
#d-i debian-installer/locale string en_US.UTF-8
d-i debian-installer/language string en
d-i debian-installer/locale string en_US.UTF-8
d-i localechooser/preferred-locale string en_US.UTF-8
# Keyboard selection
#d-i console-setup/ask_detect boolean false
######################
### Keyboard Selection
######################
#d-i console-setup/ask_detect boolean true
#d-i keyboard-configuration/layoutcode string de
d-i console-setup/detected note
#Unmount active partitions
d-i preseed/early_command string umount /media || :
#############################
### Unmount Active Partitions
#############################
#d-i preseed/early_command string umount /media || :
# Network Configuration
#########################
### Network Configuration
#########################
#d-i netcfg/choose_interface select auto
#d-i netcfg/dhcp_timeout string 60
d-i netcfg/get_hostname string t-pot
# Source & Proxy
###############
### Disk Layout
###############
d-i partman/early_command string \
debconf-set partman-auto/disk $(parted_devices | sort -k2nr | head -1 | cut -f1)
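# For reference (comment only): parted_devices lists the detected disks with
# their sizes; "sort -k2nr | head -1 | cut -f1" therefore feeds the largest
# disk into partman-auto/disk so the automated layout below targets it.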
d-i partman-auto/method string regular
d-i partman-lvm/device_remove_lvm boolean true
d-i partman-md/device_remove_md boolean true
d-i partman-auto/choose_recipe select atomic
d-i partman-auto/expert_recipe string \
root :: \
8192 8888 8192 linux-swap \
$primary{ } \
method{ swap } format{ } \
. \
40960 44444 -1 ext4 \
$primary{ } $bootable{ } \
method{ format } format{ } \
use_filesystem{ } filesystem{ ext4 } \
mountpoint{ / } \
.
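# For reference (comment only): each recipe line reads roughly
# "<min size MB> <priority> <max size MB> <filesystem>", i.e. a fixed 8 GB swap
# partition plus a bootable ext4 root of at least 40 GB that grows to fill the
# remaining disk (max size -1).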
d-i partman-partitioning/confirm_write_new_label boolean true
d-i partman/choose_partition select finish
d-i partman/confirm boolean true
d-i partman/confirm_nooverwrite boolean true
######################
### User Configuration
######################
d-i passwd/root-login boolean false
d-i passwd/make-user boolean true
d-i passwd/user-fullname string tsec
d-i passwd/username string tsec
#d-i passwd/user-password-crypted password $1$jAw1TW8v$a2WFamxQJfpPYZmn4qJT71
d-i user-setup/encrypt-home boolean false
########################################
### Country Mirror & Proxy Configuration
########################################
d-i mirror/country string manual
d-i mirror/http/hostname string archive.ubuntu.com
d-i mirror/http/directory string /ubuntu
d-i mirror/http/proxy string
# Time
#d-i clock-setup/utc boolean true
###########################
### Skip Grub Configuration
###########################
#d-i grub-installer/confirm boolean true
#d-i grub-installer/only_debian boolean true
#d-i grub-installer/with_other_os boolean true
d-i grub-installer/skip boolean true
d-i lilo-installer/skip boolean true
######################
### Time Configuration
######################
#d-i time/zone string Europe/Berlin
d-i clock-setup/utc boolean true
d-i time/zone string UTC
d-i clock-setup/ntp boolean true
d-i clock-setup/ntp-server string ntp.ubuntu.com
# Package Groups
##################
### Package Groups
##################
tasksel tasksel/first multiselect ubuntu-server
# Packages
d-i pkgsel/include string apt-transport-https ca-certificates curl dialog dstat ethtool genisoimage git htop iw libpam-google-authenticator lm-sensors ntp openssh-server syslinux pv vim wireless-tools wpasupplicant
########################
### Package Installation
########################
d-i pkgsel/include string apache2-utils apparmor apt-transport-https aufs-tools bash-completion build-essential ca-certificates cgroupfs-mount curl dialog dstat ethtool genisoimage git glances html2text htop iptables iw libltdl7 lm-sensors man nginx-extras nodejs npm ntp openssh-server openssl syslinux psmisc pv python-pip vim wireless-tools wpasupplicant
# Update Policy
#################
### Update Policy
#################
d-i pkgsel/update-policy select unattended-upgrades
# Post install
#########################################
### Post install (Grub & T-Pot Installer)
#########################################
d-i preseed/late_command string \
cp /cdrom/tpot/rc.local.install /target/etc/rc.local; \
cp -r /cdrom/tpot/ /target/root/
in-target apt-get -y install grub-pc; \
in-target grub-install --force $(debconf-get partman-auto/disk); \
in-target update-grub; \
cp /opt/tpot/rc.local.install /target/etc/rc.local; \
cp -r /opt/tpot/ /target/root/; \
cp /opt/tpot/usr/share/dict/names /target/usr/share/dict/names
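# For reference (comment only): late_command runs in the installer environment
# once the base system is in place; commands wrapped with "in-target" execute
# chrooted inside the freshly installed system (/target), which is why grub is
# installed that way while the plain cp lines copy files from the installer
# environment into /target.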
# Reboot
##########
### Reboot
##########
d-i nobootloader/confirmation_common note
d-i finish-install/reboot_in_progress note
d-i cdrom-detect/eject boolean true