CheatSheet – Bash

A random collection of commands for the Linux shell Bash (and other Linux commands that don’t yet have their own cheatsheet).


Backups with dd

mount -o remount,ro /dev/whatever /
dd if=/dev/whatever bs=1M iflag=direct | dd of=/media/exthdd/backup/$date_backup.dd bs=1M
mount -o remount,rw /dev/whatever /


rsync -aAXv /* /path/to/backup/folder --exclude={/dev/*,/proc/*,/sys/*,/tmp/*,/run/*,/mnt/*,/media/*,/lost+found}

# Backup with rsync and keep folder structure (the /./ is important!)
rsync -avR /source/path/./folder-to-backup user@server:/target/folder/


# Create root filesystem snapshots with LVM

Find out where GRUB is installed

Nothing more annoying than getting asked during system upgrades where GRUB should be installed… how ’bout where it was before!? Wait, where was that again?

Just try the disk (e.g. /dev/sda), and if it’s not on there, try its partitions (/dev/sda1).

root@server:~ # dd bs=512 count=1 if=/dev/sda 2>/dev/null |strings
GRUB <--- there it is!
Hard Disk
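You can dry-run this trick without touching a real disk by scanning a fake 512-byte boot sector. A minimal sketch, using a scratch file instead of /dev/sda:

```shell
# Simulate the MBR check against a scratch 512-byte image
img=$(mktemp)
printf 'GRUB' > "$img"     # plant the string a real GRUB MBR would contain
truncate -s 512 "$img"     # pad to one sector
dd bs=512 count=1 if="$img" 2>/dev/null | strings | grep GRUB
```

The same pipeline against /dev/sda then tells you whether GRUB lives in that device's first sector.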

Run command in screen as one-liner

screen -dmS name command
screen -dmS screen01 rdesktop -k us -g 1920x1180    


Sometimes you need to search for something in a document and replace whatever comes after it with the string you found. For example, search for xbob23f, ybob543 and zbob123 and replace them with xbobnew, ybobnew and zbobnew respectively. To do that, you specify a search term in brackets, like (.bob) (the . matching any single char), a regular expression for what you want to change, like ... (three dots for three arbitrary chars following the search term), and a replacement string. The replacement contains whatever the search term (aka .bob) matched, plus whatever you might want to add in place of the ....

First, the simple structure of sed

sed -options 's/searchterm/replace/g'	#s = substitute, g = global (replace all matches)

Example 1: Replace .bob with itself (e.g. xbob, ybob, zbob)

sed -re 's/(.bob)/\1/g'	#(searchterm) is represented by \1 in replace

Example 2: Replace .bob and the three following chars with the searchresults

sed -re 's/(.bob).../\1/g'	#You can add further regex after the (searchterm)

Example 3: Same as above and append _new to every found string.

sed -re 's/(.bob).../\1_new/g'
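To see the whole thing end to end, here is the substitution run against the sample strings from the intro (the replacement `\1new` yields the xbobnew/ybobnew/zbobnew goal described above):

```shell
# Replace .bob plus the three following chars with the match plus "new"
printf '%s\n' "xbob23f ybob543 zbob123" | sed -re 's/(.bob).../\1new/g'
# -> xbobnew ybobnew zbobnew
```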

Replace x number of random characters

# As always, . stands for any character, but instead of typing five dots, we specify the amount of chars with `{5}`
sed -r 's/^.{5}//'
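For example, stripping the first five characters of a sample line:

```shell
# Remove the first five characters of each input line
printf 'abcdefgh\n' | sed -r 's/^.{5}//'
# -> fgh
```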

Replace a line in a file

sed -i '/TEXT_TO_BE_REPLACED/c\This line is removed by the admin.' /tmp/foo
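A quick self-test of the `c\` (change line) command against a scratch file instead of /tmp/foo:

```shell
# Replace the matching line in a temp file, then show the result
tmpf=$(mktemp)
printf 'keep me\nTEXT_TO_BE_REPLACED and more\nkeep me too\n' > "$tmpf"
sed -i '/TEXT_TO_BE_REPLACED/c\This line is removed by the admin.' "$tmpf"
cat "$tmpf"
```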

Print a file up until a certain keyword

sed '/Keyword/q' file
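Note that `q` quits after printing the matching line, so the keyword line itself is still included:

```shell
# Print everything up to and including the line matching "Keyword"
printf 'line1\nKeyword here\nline3\n' | sed '/Keyword/q'
```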

IRC – encrypt IRSSI traffic – don’t know if that’s all of it…

openssl req -x509 -nodes -days 365 -newkey rsa:4096 -keyout ~/.irssi/mynick.key -out ~/.irssi/mynick.pem

Connect to defcon IRC

/connect EFNet
/join #dc-forums


Here are some notes on how to use git – as I always seem to screw things up…

# the basics
git status
git add <file1 file2 | folder | *>
git commit -m "comment"
git push

# delete file
git rm <file>

# delete file from git only (not locally)
git rm --cached <file>

# delete all files from cache that are marked as deleted
sudo git rm --cached $(sudo git ls-files --deleted)

# get a file back that has been deleted locally but not committed yet
git checkout HEAD <file>

# get a file back that has been deleted and committed
git checkout HEAD^ <file>
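The two recovery commands can be tried safely in a throwaway repo (all names, paths and contents here are made up):

```shell
# Sketch: recover a file whose deletion was already committed
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
git config "demo@example.com"
git config "demo"
echo "hello" > file.txt
git add file.txt
git commit -qm "add file"
git rm -q file.txt
git commit -qm "delete file"
# the file is now gone from both the working tree and HEAD;
# restore it from the commit before the deletion
git checkout HEAD^ -- file.txt
cat file.txt
```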

# temporarily move all changes to the "stash" to work on something else (e.g. a patch)
git stash

# after the patch (or whatever) is done, get back to the previous work
git stash apply

If you have committed something (not pushed) that you want to revert, use

# undo the last commit, keeping the changes in your working tree
git reset HEAD~1

# undo the last two commits
git reset HEAD~2

or, if you just want to undo the commit but keep the changes staged for later,
git reset --soft HEAD~1

If git annoys you with multiple files that are being tracked and have changed, but you don’t really care about them (in fact they’re just taking up space in git status…)

# Ignore tracked files
git update-index --assume-unchanged <file>

# If you wanna start tracking changes again run the following command:
git update-index --no-assume-unchanged <file>

# If you want to find all files that have been added to this list, use the following:
git ls-files -v|grep '^h'

General Bash Stuff
Set the setuid/setgid bit to keep the user or group throughout a directory and its subdirectories when editing, moving or creating files as a different user (strictly speaking these are the setuid/setgid bits – the actual sticky bit is chmod +t)

# Example folder structure
mkdir -p myfolder/subfolder/lastfolder

# Set folder ownership the way you want it
chown -R myuser:www-data myfolder
# Perm: myuser:r/w/x, www-data (group):r/x, everyone: nothing
chmod -R 750 myfolder

# To keep the ownership of myuser, set the setuid bit for the user
chmod u+s myfolder
# Or with -R for recursive if you want to keep it throughout all subfolders
# (note: Linux ignores the setuid bit on directories, so don't expect much here)
chmod -R u+s myfolder

# To keep the group ownership www-data, set the setgid bit for the group (-R optional)
chmod -R g+s myfolder
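A quick, self-contained way to see the setgid bit in action, using a scratch directory (the leading 2 in the octal mode is the setgid bit, equivalent to g+s):

```shell
# Demonstrate the setgid bit on a scratch directory
d=$(mktemp -d)
chmod 2750 "$d"            # 2 = setgid bit, same effect as chmod g+s
touch "$d/newfile"
stat -c '%a' "$d"          # -> 2750
stat -c '%G' "$d/newfile"  # group is inherited from the directory
```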

# Get permissions of file/s in octal form
# stat -c = format of stat output 
# "%a %n" = print "octal-permissions filename"
stat -c "%a %n"  /etc/sudoers.d/*
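For example, against a scratch file with known permissions:

```shell
# Create a scratch file, set known permissions, read them back in octal
tmpf=$(mktemp)
chmod 640 "$tmpf"
stat -c '%a %n' "$tmpf"
```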


Install Security Updates (-s is a dry run!)

grep security /etc/apt/sources.list > /etc/apt/security.sources.list
apt-get upgrade -o Dir::Etc::SourceList=/etc/apt/security.sources.list -s

Put packages on hold

dpkg --get-selections |grep hold
echo -e "packagename\thold" |sudo dpkg --set-selections

# search for a package with apt or dpkg
apt-cache search packagename
dpkg --get-selections |grep packagename

# show package details
apt-cache showpkg packagename

# set hold
echo "packagename hold" |dpkg --set-selections

# set install
echo "packagename install" |dpkg --set-selections

MySQL Package Troubleshooting (5.5 vs. 5.6)

# Check which versions are installed
sudo apt-cache policy mysql-server-5.[5,6]

# Check which version is running
sudo mysql -V
sudo mysqld -V

# Uninstall MySQL 5.5
sudo apt-get remove mysql-server-5.5 mysql-server-core-5.5 mysql-client-5.5

# Afterwards, make sure to start MySQL 5.6 again, as it is stopped when 5.5 is uninstalled.
sudo /etc/init.d/mysql start

# Check which version is running
sudo mysql -V
sudo mysqld -V

List files

# list one file per line (1), don't go into subdirs and print full path (d)
ls -1d /etc/*

# show newest log at the bottom, oldest at the top.
# list all (a) in long-format (l), human readable (h), sorted by time (t) reverse (r)
ls -alhtr /var/log/


Print everything except the first column

awk -F "delimiter" '{$1=""; print $0}'
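With whitespace-separated input (awk's default field separator), this looks like the following; the sub() call just trims the leading separator that blanking $1 leaves behind:

```shell
# Print everything except the first column of a sample line
printf '%s\n' "alpha beta gamma" | awk '{$1=""; sub(/^ +/, ""); print}'
# -> beta gamma
```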


Grepplings that I need but never want to figure out on my own…

# grep for a string that is exactly 22 chars long (any chars)
grep '^.\{22\}$' 

# grep for a string that is exactly 22 chars long (charset a-z)
grep '^[a-z]\{22\}$' 
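Sanity check with sample input – only the line that is exactly 22 characters long makes it through:

```shell
# Only lines of exactly 22 characters match
printf 'short\nabcdefghijklmnopqrstuv\n' | grep '^.\{22\}$'
# -> abcdefghijklmnopqrstuv
```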

Locales on Ubuntu 14.04 – Fresh LXC install

fmohr@ubuntu-1404:~$ locale
locale: Cannot set LC_CTYPE to default locale: No such file or directory
locale: Cannot set LC_MESSAGES to default locale: No such file or directory
locale: Cannot set LC_ALL to default locale: No such file or directory

# To fix it, just run locale-gen en_US.UTF-8
fmohr@ubuntu-1404:~$ sudo locale-gen en_US.UTF-8
/bin/bash: warning: setlocale: LC_ALL: cannot change locale (en_US.UTF8)
Generating locales...
  en_US.UTF-8... done
Generation complete.

fmohr@ubuntu-1404:~$ locale

User Permissions and Groups

If you can’t remember your root password, or ran usermod -G group user without the -a option and now find yourself without sudo rights, here is how you reset your root password or group settings.

# Reboot your system
# Keep hitting SHIFT to get into GRUB selection
# Select Recovery, or Advanced options -> Recovery
# Once the blue/red/greyish menu pops up, select root or netroot shell
# Remount / with
mount -o remount,rw /
# Change root password
passwd root
# Or reset your group settings (first example vbox host, second vbox guest)
usermod -a -G  hashtagsecurity,adm,cdrom,sudo,dip,plugdev,lpadmin,sambashare,vboxusers hashtagsecurity
usermod -a -G  hashtagsecurity,adm,cdrom,sudo,dip,plugdev,lpadmin,sambashare,vboxsf hashtagsecurity
# You might not need all of the groups - these are just an example

SSL Cert Voodoo

This is a good blogpost when it comes to getting info from ssl certificates!

# Use openssl to get the valid-dates of a SSL cert directly from a website
# If you want a complete ssl scan, use sslscan instead!
echo | openssl s_client -connect server:443 2>/dev/null | openssl x509 -noout -dates
notBefore=Jun 19 12:44:04 2013 GMT
notAfter=Oct 31 23:59:59 2013 GMT

# Not SSL, but handy if you are looking for hosts to check...
nmap -PN -p 443 -iL ./all_my_hosts.txt -oN nmap_results.txt

# Now check all open ports for ssl certs with this small bash script.

for i in `grep -B 4 open nmap_results.txt |grep "Nmap scan report" |awk {'print $5'}`; do
  j=`curl -Ik -m 5 -s https://$i |head -n 1`
  k=`echo $j|awk {'print $2'}`
  echo "Host: $i, Status: $j"
  if [[ "$k" != "401" && "$k" != "" ]]; then
    echo -n "$i;" && echo | openssl s_client -connect $i:443 2>/dev/null | openssl x509 -noout -subject -dates |sed 's/subject=.*CN/CN/g' |sed 's/$/;/g' |tr -d "\n" |sed 's/$/\n/g'
  fi
done

Check a server for supported SSL protocol versions – it should look like this: SSLv3 (or -ssl2) not supported, which is good!

$> openssl s_client -connect server:443 -ssl3 
140131777316512:error:14094410:SSL routines:SSL3_READ_BYTES:sslv3 alert handshake failure:s3_pkt.c:1260:SSL alert number 40
140131777316512:error:1409E0E5:SSL routines:SSL3_WRITE_BYTES:ssl handshake failure:s3_pkt.c:596:
no peer certificate available
No client certificate CA names sent
SSL handshake has read 7 bytes and written 0 bytes
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
    Protocol  : SSLv3
    Cipher    : 0000
    Key-Arg   : None
    PSK identity: None
    PSK identity hint: None
    SRP username: None
    Start Time: 1413807987
    Timeout   : 7200 (sec)
    Verify return code: 0 (ok)

Bad example – this is a successful connect, which in the case of SSLv2 and SSLv3 is something you don’t want!

$> openssl s_client -connect server:443 -ssl3
depth=1 C = US, O = DigiCert Inc, OU =, CN = DigiCert SHA2 High Assurance Server CA
verify error:num=20:unable to get local issuer certificate
verify return:0
Certificate chain
Server certificate
No client certificate CA names sent
SSL handshake has read 3079 bytes and written 288 bytes
New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES256-SHA
Server public key is 2048 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
    Protocol  : SSLv3
    Cipher    : ECDHE-RSA-AES256-SHA

Even more random Bash stuff
Run a minimal http server on linux using netcat (nc)

while true;do nc -l -p 8080 -q 1 <<<"Hello World";done
while true;do nc -l -p 8080 -q 1 < index.html ;done

#with interpreted html (note, internal links will not work!)
while true; do { echo -e 'HTTP/1.1 200 OK\r\n'; cat index.html; } | nc -l -p 8080 -q 1; done

VirtualBox shared folder troubleshooting

I know it’s just a link, but a good one!

Also, so you can access your files without sudo…

sudo usermod -a -G vboxsf username

Troubleshooting with BURP

Recently I installed owncloud on one of my servers. The setup went fine and all seemed good, until I noticed that the redirection after the login page was behaving somewhat strangely. But no worries – BURP to the rescue!

Before we delve into this whole thing, let me just say that, while I really like BURP, I don’t want to sell it (or anything, really). So far I have always used the free version and never experienced any problems due to feature locks, but if you’re an open source enthusiast you might wanna try the OWASP Zed Attack Proxy (ZAP) as an alternative to BURP.

Back to the story. In order to understand my problem, you need some insight into the setup I’m using.

  User ---HTTPS---> Public [APACHE PROXY] ---HTTP---> Internal backend

All requests to HTTP:80 are redirected by [APACHE PROXY] to HTTPS
All requests to HTTPS:443 are handled by [APACHE PROXY] vHosts (SNI)

Now, whenever I tried to login at the owncloud. subdomain, I got redirected to www. and had to manually change the subdomain back to owncloud in order to be logged in.

So the first step for me was to find out where the redirect to www. came from.

Enter BURP, the intercepting proxy that I came to know as a pentesting and troubleshooting gem.

BURP is an intercepting proxy first and foremost, which gives you the ability to do a local man-in-the-middle between your browser and webz and examine, drop, forward, alter, forge, etc. HTTP(S) requests and responses.

In this case, I used it to find out why I was redirected to www. instead of going to owncloud.

After starting BURP and adjusting my Firefox proxy settings to use localhost:8080, I went to my owncloud login page and started the BURP proxy in intercepting mode.

Here is where I hit the first bump. Apparently BURP doesn’t forward TLS client certificates so I had to import mine first.

After that, I changed the proxy settings to also intercept responses. This enabled me to look closer at whatever the server sent me in response to my requests.

The first intercepted request was the login, in which you can see my login credentials being sent to the server. In the first response we can see exactly where the problem lies. The URL in the Location header is correct, but it is set to http:// which results in a redirect by apache to

Obviously, this can easily be solved by changing the apache vhost config from Redirect to Rewrite.

# Remove Redirect
RedirectPermanent /

# And add Rewrite
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}

While this did solve my problem, it wasn’t a satisfying solution. I’m still being redirected which is totally unnecessary. In order to change that, I needed to enable Enforce HTTPS in owncloud.

The last hurdle was that owncloud told me I needed to visit the page via HTTPS in order to enable Enforce HTTPS. That was a bit confusing, until I realized that, from the perspective of the owncloud server, I was browsing via HTTP all the time. In order to get this fixed, I just needed to enable TLS on the backend owncloud apache and set the proxy to use HTTPS connections in the internal network as well.

Unicode display problems in bash

Ok, I know this has nothing to do with security and I’m just writing about typical, everyday sysadmin stuff right now. But this problem has cost me way too much time to not be sharing the solution with the world.

The problem

I was trying to build the owncloud sync client mirall when I noticed that certain characters weren’t displayed correctly. This looked like a typical locales misconfiguration, so I started to mess with that a bit.

I switched to de_DE.UTF-8, which unfortunately didn’t fix the problem. What did, though, was switching to de_DE.ISO-8859-1@euro. Well, so far so good, but now my whole system was throwing German messages at me. And I like my system to be in English (ever tried to google for a German error message?)

I didn’t quite understand the problem at the time, because UTF-8 is unicode and should be able to handle special characters like ä, ö and ü.

So here is a picture of a file called äöü.txt

Here is the same file shown in dolphin:

Another thing I noticed was the command-not-found crash, whenever I typed öö instead of ll, which is an alias for ls -lah on my system.

The solution

To make a long story short, I tried setting locales for quite a while until I finally checked my terminal’s shell profile. Under advanced, I found that the default was always set to ISO-8859-1 despite my locale settings.

After changing that, I went from this

to this

while still having my system running with en_US as locale.

HA-Proxy for the win

I finally found time to take a closer look at HA-Proxy. It is a high-availability load balancer and (reverse) proxy server, and fully open source.

Attention: This is me testing stuff – I have not taken care of settings like no-sslv3, etc. So if you use this in production, make sure to read up on this first! Also, since I’m new to HA-Proxy, I might have misconfigured or missed a few options so don’t blame me if things aren’t perfect regarding either security or performance!


My goals utilizing HA-Proxy included

  • Testing TLS offloading and passthrough capabilities
  • Moving client authentication from backend servers to HA-Proxy
  • Replacing Apache as reverse proxy
  • Increasing availability by adding loadbalancing to servers
  • Increasing availability through HA-Proxy failover setup
  • Learn lots of new stuff and share it! 🙂

First off, I installed the current stable version 1.5.6 from the haproxy repositories, since Ubuntu 14.04 server still ships the old stable 1.4, which is missing a lot of features – such as native SSL support.

sudo apt-get install software-properties-common
sudo add-apt-repository ppa:vbernat/haproxy-1.5
sudo apt-get update
sudo apt-get install haproxy

If you want to know more about the parameters used, check out the documentation here: (from now on referenced to as $dokulink)

Edit the conf /etc/haproxy/haproxy.cfg to look like this.

global
    log local0 notice                           # log to the local rsyslog daemon
    maxconn 2000                                # number of concurrent connections allowed
    user haproxy
    group haproxy
    tune.ssl.default-dh-param 2048              # DHE max size of parameters for key exchange - $dokulink#tune.ssl.default-dh-param

defaults
    log     global
    mode    http
    #option  httplog                            # this option messed with my SSL passthrough, so I disabled it
    option  dontlognull
    retries 3
    option redispatch                           # redistribute sessions if a node goes down (no session stickiness)
    timeout connect  5000                       # max time to wait for a backend connection to succeed
    timeout client  10000                       # max inactivity time on the client side
    timeout server  10000                       # max inactivity time on the server side

To add a farm, you should first know a bit about the config structure.
There are three other config types besides global and defaults, named frontend, backend and listen.

The first two are used to configure the interface that will be addressed by visitors (frontend) and the farm and loadbalancing settings (backend). The third one (listen) is simply the combination of the first two, which takes fewer lines for the same config but has a negative impact on readability. Since I’m fairly new to HA-Proxy, I will use frontend and backend, but it’s really up to you which path you choose.
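For illustration, a sketch of what a combined listen section could look like – server name and address are made up, collapsing a tcp-mode passthrough frontend/backend pair into a single block:

```
# Hypothetical combined version: one listen section instead of frontend + backend
listen https_passthrough
  bind *:443
  mode tcp
  option tcplog
  option ssl-hello-chk
  server apache01 apache01.lan:443 check
```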

SSL Passthrough

Here haproxy doesn’t terminate the SSL connection but passes it right through to the internal server. This also means that you can’t mess with the traffic, add header options and so on. But it’s an easy way to loadbalance servers that already have SSL enabled without much effort.

frontend https_passthrough
  bind *:443
  option tcplog
  mode tcp
  default_backend apache01

backend apache01
  mode tcp
  option ssl-hello-chk
  # balance roundrobin		# Since I only have one server atm, I don't need a balance option
  server apache01.lan check

SSL Offloading / Termination

The nice thing here is that you can have either HTTP or HTTPS traffic internally, as the connection gets terminated by HA-Proxy and then sent out to the user over the secured connection which has been established between HA-Proxy and the user.

One possible reason to do this is to use TLS certificates signed by a private CA in the internal network and only deploy the official “trusted” certificate to HA-Proxy. This makes it easier to switch certificates internally, as you have full control over the CA and can create as many certificates as you want for any internal domain. If you want to renew your website’s official certificate, you just have to deploy it onto HA-Proxy and restart the service. Or you can have HTTP traffic internally, in case one of your applications isn’t capable of TLS, and only encrypt the traffic between the user and your loadbalancer.

frontend https_termination
  bind *:443 ssl crt /etc/ssl/private/officialcert.pem
  mode http
  option httpclose
  option forwardfor
  reqadd X-Forwarded-Proto:\ https
  default_backend ghostblog

backend ghostblog
  mode http
  server ghostblog.lan check

Note that the HA-Proxy TLS certificate format is actually a combined file of the .crt and the .key file. To create the file, just run

cat sitecert.crt sitecert.key > sitecert_haproxy.pem
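The order matters – certificate first, then key, as in the command above. A quick dry run with dummy files (contents obviously fake):

```shell
# Demonstrate the concatenation order with dummy files
tmpd=$(mktemp -d)
printf 'FAKE CERTIFICATE\n' > "$tmpd/sitecert.crt"
printf 'FAKE PRIVATE KEY\n'  > "$tmpd/sitecert.key"
cat "$tmpd/sitecert.crt" "$tmpd/sitecert.key" > "$tmpd/sitecert_haproxy.pem"
cat "$tmpd/sitecert_haproxy.pem"
```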

So much for testing TLS passthrough and termination. Let’s move on to client certificate authentication.

Client Certificate Authentication

In the TLS passthrough section, client certificate auth will still work if it was enabled on the internal apache server, as everything just gets passed through, including the request for the user to authenticate.

But I’d rather have one, highly available place to do all the config and not care about deploying authentication onto every internal webserver.

Enabling it is pretty straightforward: just append the ca-file and verify parameters to the bind option in your TLS termination section. (Note: $certs/ == /etc/ssl/private/)

bind *:443 ssl crt $certs/officialcert.pem ca-file $certs/private_ca.crt verify required

Now, users are required to show a certificate that has been signed by the private CA (private_ca.crt) in order to fully establish the TLS connection. However, right now no one without a valid cert can visit my blog.

frontend https_termination
  bind *:8080  ssl crt $certs/officialcert.pem ca-file $certs/private_ca.pem verify optional
  mode http
  #Update - 17.11.2014
  #redirect location / if { path_beg /ghost/ } ! { ssl_fc_has_crt }
  redirect location / if { path_beg /ghost/ } ! { ssl_c_used }
  default_backend ghost-htsec

backend ghost-htsec
  mode http
  server ghost-htsec01 check

Setting verify optional basically means that we don’t care whether a visitor provides a certificate or not. Adding the redirect line adds the additional security for the subfolder we want to protect. Now visitors can browse my blog, but only those with a valid cert can go to /ghost/login/ or /ghost/signup/. The best part is that those who try to login without a certificate don’t get an error message but are instead redirected to the root homepage /.

This adds another tiny bit of security by obscuring the login and register pages. Security by obscurity is nothing bad as long as you don’t solely rely on it for protection!

Since this is already quite a bit of haproxyness to take in, I’m going to stop here and publish what I have learned so far. Stay tuned for another post on HA-Proxy, where I will try to tackle my remaining goals.

UPDATE: 17.11.2014

I experienced huge problems with ssl_fc_has_crt in the past couple of days. The first connection to my host with a valid certificate would always be handled correctly, but a reload of the same page resulted in the redirect that should only be applied to users without a certificate. After going through the HA-Proxy documentation, I found that ssl_c_used is the better choice. Quote from the docs:

ssl_fc_has_crt : boolean
  Returns true if a client certificate is present in an incoming connection over
  SSL/TLS transport layer. Useful if 'verify' statement is set to 'optional'.
  Note: on SSL session resumption with Session ID or TLS ticket, client
  certificate is not present in the current connection but may be retrieved
  from the cache or the ticket. So prefer "ssl_c_used" if you want to check if
  current SSL session uses a client certificate.

ssl_c_used : boolean
  Returns true if current SSL session uses a client certificate even if current
  connection uses SSL session resumption. See also "ssl_fc_has_crt".

Update: 21.11.2014
I noticed that a brand new Firefox profile, as well as Firefox mobile on my Android, were greeting me with this message instead of my website:

uses an invalid security certificate.
The certificate is not trusted because no issuer chain was provided.
(Error code: sec_error_unknown_issuer)

I found that a little bit strange, since I bought a valid certificate from a Comodo reseller. This is due to Firefox being a bit more strict than other browsers when it comes to TLS implementation.

The fix is to add the ca-bundle certificates to your webserver config, which contains the TLS-Chain certificates.

# In Apache, add this line
SSLCertificateChainFile /etc/ssl/private/
# In Lighttpd, add this line
ssl.ca-file = "/etc/ssl/private/"

In NGINX and HAPROXY, you don’t change the config file. Just append the content of the ca-bundle.crt to your original certificate.

cd /wherever/your/certs/are/
cat ca-bundle.crt >> servername.crt

For HAPROXY, your certificate file should typically contain the server certificate first, then the chain certificates from the ca-bundle, and finally the private key – all in one .pem file.
After restarting HAPROXY, the error message went away and Firefox displayed the website just like any other browser.

Here are a few links that helped me come this far – they have a lot of stuff, but I’m not always sure if it’s still accurate!

Install MySQL 5.5 on Debian Wheezy 7

This is just a quick note, as I struggled with installing MySQL 5.5 (rather than 5.6) on Debian Wheezy.

First, I already had MySQL 5.6 installed, but no data was stored there, so a backup wasn’t necessary.

$ sudo apt-get purge mysql*
$ sudo apt-get autoremove
$ sudo rm -r  /var/lib/mysql/ /var/log/mysq* /tmp/mysql* 

Normally, this should do the trick

sudo apt-get install mysql-server-5.5 mysql-client-5.5 mytop

But in my case, mysql-server-5.5 selected mysql-common (5.6) as a dependency, so I had to do this in order to get the right versions

$ sudo apt-get install mysql-server-5.5 mysql-client-5.5 mysql-common=5.5.40-0+wheezy1 libdbd-mysql-perl libmysqlclient18=5.5.40-0+wheezy1 mytop
$ sudo mysql_upgrade

The packages mysql-common and libmysqlclient18 have to be pinned to a version older than 5.6. If you don’t know the version, you can check for available versions with ‘apt-cache policy packagename’. Just pin the packages to the required version until everything installs correctly. You can test your settings by running apt-get in dry-run mode (append -s).

Hope this helps!

Mitro Login Manager On-Premise

On 31 Jul 2014, the cloud based login manager “Mitro” was published under the GPL on GitHub. In this blogpost, I’ll go through the steps of setting up the server and browser extensions.

The login manager Mitro has been developed by a small team based in NYC, which was recently acquired by Twitter. Part of the deal, as I recall, was that the Mitro project had to be published under an open source license, which is great, since I had contacted the devs a few months ago to ask whether there would be an on-premise version of it. Well, now there is, but it takes a bit of work to get everything up and running.

At the time of writing, I do not see Mitro On-Premise as production ready software as there are a few issues that have to be sorted out. But I’ll do my best to help get it there and I’m happy to see that many others are actively working towards the same goal.


If you just want to install Mitro without understanding what you’re doing – you can use my script, which should do all of this automatically. Make sure to check the README for instructions first and run tail -f mitro-debian-setup.log in a second shell to catch problems like this:

You can avoid this completely by running apt-get update && apt-get upgrade prior to the script. Also, just to be on the same page – this is what my Debian image looks like:

A few notes before we get started – for those of you actually reading this…

I will generally not work under the root user but under admin with sudo. I will not print whole files but mark the changes like this:

user@host: $ vi /etc/example.conf
 + added line
 - removed line

Or with numbered lines if the file is too large or similar lines exist.

user@host: $ vi /etc/example.conf
98: + added line
98: - removed line

While I always try to make things understandable even for inexperienced users, keep in mind that this is a very critical application and you should know what you’re doing before you attempt to host it for yourself or even others. Therefore, I will not go into any detail on basic commands such as how to exit vim or login via ssh. If you can’t do that, you lack serious basics for this post!

For now, I got the server up and running under Debian 7, and the extensions working with my server with chromium-browser in Kubuntu 14.04. Keep in mind that right now, everything is in a state of constant change, so I’ll either update this blogpost or look for a better way to keep the info up to date.

The whole setup consists of three main parts

  • The mitro server daemon
  • The mitro mailing daemon
  • The mitro browser extensions

I’ll try to get any changes I make in the code or scripts into the official Mitro repository, but I will note all important changes in this blogpost. If you’re missing anything, you can always checkout my fork on github for commits that haven’t been merged yet


Some general info about the environment I used for this setup.

Server OS: Debian 7 Non-GUI (Connect via SSH)
Server VM: Virtualbox -
Client OS: Kubuntu 14.04 64bit
Client VM: Virtualbox - 
Git Mitro:
Git Fork:

Before you do anything, make sure that your server has installed all package updates.

root@debian:~ # apt-get update && apt-get upgrade

Since this is a critical application, we’ll try to keep security in mind from the beginning.

Set up an admin user with sudo rights and one called mitro to run the services under. You should of course choose different, strong passwords for both users, as they build the foundation of your new login manager!

root@debian:~ # echo mitro-server > /etc/hostname
root@debian:~ # vi /etc/hosts
 -       localhost
 +       localhost mitro-server

root@debian:~ # hostname -F /etc/hostname
root@mitro-server:~ # adduser admin
root@mitro-server:~ # adduser mitro

And in case you haven’t done so during the installation of your os, set a strong password for root as well.

root@mitro-server:~ # passwd root

Next, grant sudo rights to admin, logout as root and login as admin.

root@mitro-server:~ # apt-get install sudo
root@mitro-server:~ # visudo -f /etc/sudoers.d/mitro
 + # admin user to manage mitro server
 + admin    ALL=(ALL:ALL) ALL
root@mitro-server:~ # exit

Test if sudo works properly and move on to the next part if it does.

admin@mitro-server:~ $ sudo whoami
[sudo] password for admin:

Mitro Server Setup

Server prerequisites

Before we begin with the actual server, there are a few things we have to take care of such as setting up the directory and installing the base packages.

admin@mitro-server:~$ sudo mkdir /srv/mitro/
admin@mitro-server:~$ cd /srv/mitro/
admin@mitro-server:~$ sudo chown -R mitro:mitro /srv/mitro
admin@mitro-server:~$ sudo chmod g+s /srv/mitro
admin@mitro-server:~$ sudo chmod u+s /srv/mitro
admin@mitro-server:~$ sudo chmod 755 /srv/mitro

This is how the folder permissions should look. The setuid/setgid bits will make sure that the user and group ownership stay mitro throughout the setup.

admin@mitro-server:~$ ls -lh /srv/
drwsr-sr-x  2 mitro mitro 4.0K Sep 19 19:17 mitro

admin@mitro-server:~$ cd /srv/mitro

Install all Debian packages necessary to build and run the server, and create the necessary binary links, since a few binaries are expected under different names. — We should probably fix the build scripts…

admin@mitro-server:~$ sudo apt-get install git screen postgresql postgresql-contrib postgresql-pltcl-9.1 ant make g++ curl unzip
admin@mitro-server:/srv/mitro$ sudo curl -sL | sudo bash -
admin@mitro-server:/srv/mitro$ sudo apt-get install nodejs

Check if initdb is present. If not, you need to create a link to the executable.

admin@mitro-server:/srv/mitro$ which initdb
admin@mitro-server:/srv/mitro$ sudo ln -s /usr/lib/postgresql/9.1/bin/initdb  /usr/bin/initdb
admin@mitro-server:/srv/mitro$ which initdb

According to the Mitro README, the server should be run with the official Oracle Java JDK 7. However, I haven’t had any problems with the openjdk version and since it’s easier to maintain updates through apt-get, I am going to stick with it.

admin@mitro-server:/srv/mitro$ sudo apt-get install openjdk-7-jdk libpostgresql-jdbc-java

Just to be sure, trace the current java executable back to its target and check that it really is JDK 7. In my case JDK 6 was still being used, so I had to change it.

admin@mitro-server:/srv/mitro$ which java
admin@mitro-server:/srv/mitro$ file /usr/bin/java
/usr/bin/java: symbolic link to `/etc/alternatives/java'
admin@mitro-server:/srv/mitro$ file /etc/alternatives/java
/etc/alternatives/java: symbolic link to `/usr/lib/jvm/java-6-openjdk-amd64/jre/bin/java'
admin@mitro-server:/srv/mitro$ sudo rm /etc/alternatives/java
admin@mitro-server:/srv/mitro$ sudo ln -s /usr/lib/jvm/java-7-openjdk-amd64/jre/bin/java /etc/alternatives/java
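The two file calls above can be collapsed into one step, since readlink -f follows a whole symlink chain to its final target. A quick sketch with throwaway links (the /tmp names are invented for the demo):

```shell
# build a two-step chain like java -> /etc/alternatives/java -> JVM binary
ln -sfn /bin/sh /tmp/demo-alternatives-java
ln -sfn /tmp/demo-alternatives-java /tmp/demo-java
readlink -f /tmp/demo-java   # resolves through both links in one call
```

On Debian, update-alternatives --config java would also be a cleaner way to repoint the /etc/alternatives link than removing and recreating it by hand.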

The following code block shows some optional settings which I haven’t tried myself, but I’ll include them in case anyone wants to try them.

# The sysctl.conf part is optional - depending on whether or not you need/want multiple postgresql instances running.
# Edit /etc/sysctl.conf to add the following lines:
# run multiple instances of postgres

# Then run the following for each line above
sudo sysctl -w <line>	
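Before writing values with sysctl -w, you can read the current ones back; sysctl -n and the files under /proc/sys are two views of the same thing. A generic read-only example (the actual keys for multiple postgres instances are not shown here):

```shell
# every sysctl key maps to a file under /proc/sys (dots become slashes)
cat /proc/sys/kernel/ostype                   # → Linux
sysctl -n kernel.ostype 2>/dev/null || true   # same value via sysctl, if installed
```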

Preparing the postgresql database

According to the docs, the postgresql database service must run as the same user as the mitro service – which in this case means the unix user “mitro”.
To get this done, follow these steps:

# Stop Database
admin@mitro-server:/srv/mitro$ sudo /etc/init.d/postgresql stop
admin@mitro-server:/srv/mitro$ sudo netstat -antp 

# if postgres is still running for whatever reason
admin@mitro-server:/srv/mitro$ sudo killall -9 postgres

# prepare to start postgres as "mitro"
admin@mitro-server:/srv/mitro$ sudo chown -R mitro:mitro /var/run/postgresql/

Mitro uses pg_ctl, but Debian has switched to pg_ctlcluster so the build process fails! To fix that, you need to create a link for pg_ctl.

admin@mitro-server:/srv/mitro$ sudo ln -s /usr/lib/postgresql/9.1/bin/pg_ctl /usr/bin/pg_ctl

Before we can continue, we need to check out the repository. I advise you to take the official repo and not my fork, as the latter might not work at times, since I tend to break things and push them – you have been warned!

admin@mitro-server:/srv$ sudo git clone
  Cloning into 'mitro'...
  remote: Counting objects: 5516, done.
  remote: Total 5516 (delta 0), reused 0 (delta 0)
  Receiving objects: 100% (5516/5516), 65.85 MiB | 640 KiB/s, done.
  Resolving deltas: 100% (1049/1049), done.

Prepare postgresql database

mitro@mitro-server:/srv/mitro/mitro-core$ mkdir -p /srv/mitro/mitro-core/build/postgres
mitro@mitro-server:/srv/mitro/mitro-core$ initdb --pgdata=/srv/mitro/mitro-core/build/postgres -E 'UTF-8' --lc-collate='en_US.UTF-8' --lc-ctype='en_US.UTF-8'

Start postgresql daemon and create the database

mitro@mitro-server:/srv/mitro/mitro-core$ pg_ctl -D /srv/mitro/mitro-core/build/postgres -l logfile start
mitro@mitro-server:/srv/mitro/mitro-core$ createdb -O mitro mitro

Check if postgresql is running properly

netstat -antp |grep "postgres"
tcp        0      0*               LISTEN      7061/postgres
tcp6       0      0 ::1:5432                :::*                    LISTEN      7061/postgres

Installing and configuring the server

We only need the server files, which reside in mitro-core. Make sure that the right permissions are set and switch back to mitro before creating the database and running ant test.

#To avoid this extra step, adjust the paths at the beginning of the blogpost...
admin@mitro-server:/srv/mitro$ sudo mv ./mitro/mitro-core .
admin@mitro-server:/srv/mitro$ cd ../
admin@mitro-server:/srv$ sudo chown -R mitro:mitro mitro/
admin@mitro-server:/srv$ sudo su - mitro

mitro@mitro-server:~$ cd /srv/mitro/mitro-core/
mitro@mitro-server:/srv/mitro/mitro-core$ ant test
  Total time: 50 seconds

If you see the following messages during the test run, don’t worry. We’ll take care of this later.

[junit] WARN  [2014-09-20 07:50:22,425Z] co.mitro.core.server.Manager: Not creating constraint groups_name_scope: DB is not Postgres
[junit] WARN  [2014-09-20 07:50:22,425Z] co.mitro.core.server.Manager: Not creating constraint group_secret_svs_group: DB is not Postgres
[junit] ERROR [2014-09-20 07:50:22,463Z] co.mitro.access.servlets.ManageAccessEmailer: missing or invalid email: null

I’m actually not quite sure if the build script is really neede after going through all these steps manually.

# you don't need it - the script is trash...
mitro@mitro-server:/srv/mitro/mitro-core$ ./

To run the server, just type the following:

mitro@mitro-server:/srv/mitro/mitro-core$ ant server

I suggest you only run the server this way for testing purposes. If you want the server to run in the background, login as admin and type

admin@mitro-server:/srv/mitro/mitro-core$ screen -S mitro-server
admin@mitro-server:/srv/mitro/mitro-core$ sudo su - mitro
mitro@mitro-server:~$ cd /srv/mitro/mitro-core/
mitro@mitro-server:/srv/mitro/mitro-core$ ant server > server_run.log 2>&1
[CTRL]+[A] [D]
admin@mitro-server:/srv/mitro/mitro-core$ tail -f server_run.log

If you’ve made it this far, then mitro server is running. You can verify this by visiting this url: https://yourmitroserver:8443/mitro-core/api/BuildMetadata


I have added everything I found in the blogpost above, so you shouldn’t run into these problems. However, if you do, this might help you! Unfortunately I forgot to document most of the error messages, so I’ll add them here in case I run into them again.

Error message:

mitro@mitro-server:/srv/mitro/mitro-core$ ant server
Buildfile: /srv/mitro/mitro-core/build.xml
     [exec] Result: 128
[propertyfile] Updating property file: /srv/mitro/mitro-core/build/java/src/
     [exec] shutil.rmtree(tempdir)
     [exec] Traceback (most recent call last):
     [exec]   File "tools/", line 109, in <module>
     [exec]     main()
     [exec]   File "tools/", line 92, in main
     [exec]     unpack_jar(path, tempdir)
     [exec]   File "tools/", line 29, in unpack_jar
     [exec]     process = subprocess.Popen(args)
     [exec]   File "/usr/lib/python2.7/", line 679, in __init__
     [exec]     errread, errwrite)
     [exec]   File "/usr/lib/python2.7/", line 1259, in _execute_child
     [exec]     raise child_exception
     [exec] OSError: [Errno 2] No such file or directory
     [exec] Result: 1
     [echo] Built build/mitrocore.jar


Solution:

sudo apt-get install unzip

Error message:

mitro@debian:~$ pg_ctl -D /srv/mitro/mitro-core/build/postgres start
pg_ctl: another server might be running; trying to start server anyway
server starting
mitro@debian:~$ FATAL:  lock file "" already exists
HINT:  Is another postmaster (PID 3468) running in data directory "/srv/mitro/mitro-core/build/postgres"?


[*] If an old postgres process is still running, kill it first, then remove the stale lock file
 [-] root: killall -9 postgres
 [-] root: rm /srv/mitro/mitro-core/build/postgres/

Book Review: CTF Blueprints

This is a short review of the book Kali Linux CTF Blueprints by Cameron Buchanan, published by Packt Publishing in July 2014.

The book’s goal is to provide blueprints for building a CTF environment. In my opinion, this is not quite accurate, as the blueprints are mere pointers in the right direction. While this might sound misleading, it’s actually a good thing, as real blueprints would result in a series of similar or even identical CTFs. Before you buy the book, please note that the author expects a certain level of pentesting knowledge and skill. Basic concepts such as XSS, privilege escalation or information gathering are not explained.

The target group is definitely the experienced pentester who wants to set up a challenge or training ground for colleagues, not the inexperienced one who wants to test their own skills. That being said, someone new to the game could still learn a few things by reading this, although there are books out there better suited for that task.

In the first and second chapters, the author explains how to set up a vulnerability on Windows and Linux hosts respectively. The chapters are split into three main parts: securing the base OS that is used to run the challenge, installing the vulnerable software, and finally explaining how the vulnerability could be exploited.

The first part is important, as a challenge should not have any security holes except the intended ones, thereby forcing the challengers to solve the right problems instead of finding a way around them. The second part is pretty obvious, as this book is all about creating vulnerable machines, and the third part gives some insight into how the challenge could be solved.

There are also some tips on how to plant flags without making them too easy or too hard to find. For example, a C:\flag.txt file would be too easy to find and would also make any post-exploitation unnecessary. Hiding the flag inside a config file in a temporary, randomly generated subfolder of the Firefox addons folder would be too hard, unless the objective is clearly stated and includes some hints that point a player to this location.

I can’t say much about the other four chapters “Wireless and Mobile”, “Social Engeneering”, “Cryptography” and “Red Teaming”as I didn’t find the time to read them. However, they do look interesting and I’ll post an update as soon as I get to them.

Overall, this is the first PacktPub book that wasn’t completely disappointing. The one thing I missed was some suggestions for vulnerabilities that make good CTF targets, preferably in the form of CVEs or a short list of sites where such things can be found. Someone who is new to creating CTFs might have a hard time finding vulnerabilities that are suitable for challenges.

I’m not quite sure if it is worth the price of €16.32 ($22.44) for the ebook edition as it’s only a quick intro to the topics and lacks in depth but at least you get something in return for your money, unlike many of the other books I reviewed.

However, if I had looked for a short CTF guide that points me in the right direction on how to make my own challenges, I wouldn’t have regretted the purchase.

If you have any specific questions about the book, drop me a line and I’ll see if I can answer them for you.

#Security has come a long way since it started over two years ago.

In the depths of my hard drive I found screenshots of older versions, so I decided to share them with you!

Back when I started, it was a blog about pretty much everything that crossed my mind. Having its roots in my first, now buried blog, which I started during my apprenticeship, it wasn’t solely focused on security. However, after I got my hands on the domain, I decided to leave the administration area behind and get more into infosec.

The first “logo” I created was done during a train ride on my tablet. I was playing with different brush settings in a new painting app and created a “#” that I thought would make a great logo for my new blog.

Original logo on my old wordpress blog:

I’m no designer, that’s for sure – but still, I should have known better. After some time, even I noticed that the resolution was crappy, so I imported the jpg file in inkscape, created a svg off it and imported that into blender. Yes, I created my v.1.1 logo in a 3D moddeling application. That should give you a pretty good picture of how 1337 my design skills are.

Over time I rendered different logos, but the last one was this:

The logo was of course not the only thing that changed. In the beginning I often found my blog showing really old posts. It’s the typical beginners blogging problem.

  1. Post – “There is not much here yet, but I’m gonna start writing soon”
  2. Post – “Sorry for not writing in such a long time, but it’s gonna change real soon, promise”

Even though I didn’t actually write it that way, the publishing dates speak for themselves.

I still have to force myself from time to time to publish posts, but all in all I got kind of a thing going now.

Like every beginning blogger, I wondered if anyone was reading my blog at all. And I actually still do. That’s why I added Google Analytics at one point. That was pretty much at the beginning, back when the blog was still running on WordPress. After Ghost came out, I switched to the new nodejs-powered blogging platform and had so many new problems that tracking and analytics weren’t really on my mind anymore.

I tried a few themes but ended up writing my first crappy theme in HTML, CSS, JS and Ghost’s Handlebars templates.

This was version 1.0, and it even came with a “mobile version”, which only worked correctly in a few browsers, as you can see here.

For a first try I thought it was acceptable, when in fact it sucked quite a lot. So I rewrote the whole thing twice, and what you see today is actually v3.1.

Version 2.0 was my first complete rewrite, and though I stuck to the overall design, there was quite a huge change in the codebase.

The big rewrite not only improved my theme, but also my rudimentary CSS, HTML and JS skills, allowing me to finally solve many problems with my mobile theme.

![](/content/images/2014/Aug/v2-0_mobileA.jpg) ![](/content/images/2014/Aug/v2-0_mobileB.jpg) I thought about releasing the code, but I don’t want anyone having to deal with it – seriously! If you really want the code, just write me and I’ll gladly hand it out. But you have been warned!

Between writing versions 2.0 and 3.0, I again found myself wondering if anyone actually read my blog, and I started to think about my objective. Why was I writing this stuff? And why was I publishing it for everyone to see? The short answer is: because I like to help people. I love finding the solution to a problem in minutes on somebody’s blog, and I always wanted to contribute to the open knowledge and open source community. Realizing that, I noticed that it doesn’t matter how many people read my blog. If it helped even just one person, the post was worth writing.

I installed Kibana at one point to get a rough overview of visitors from my webserver’s access logs, but that’s about it. I have no need for cookies, tracking or advertisement. The cookies that are created come from the twitter panel on the right, and I’m not even sure if I’m going to keep that.

I enjoy a clean blog, and that means no ads, no tracking and no click marathons to get the information you want.

After all that, it seems I found time to look at my logo again. And it’s fair to say that I did not like it anymore. So I set out to design a new one. As I’ve mentioned before, I’m no designer, so it took me longer than I want to admit. Here are two of the ideas I had, scribbled on a pink post-it.

Not much? That’s because no matter at how many designs of hashtags I looked, I couldn’t come up with something I liked.

Finally I fired up blender again and due to a lighting accident I came up with this.

This was actually the first logo I kind of liked. I showed it to a few of my colleagues and they said “Show it to the guys from the graphics department, they can surely give you some good feedback”. Oh boy!

I showed it to Nick. He just shook his head and sent me some examples a few minutes later. I mashed them all together to get a better overview, but they were all in high resolution.

After a few mails back and forth, he came up with these two for twitter and one with “Security” instead of just “SEC” for the blog header.

I really liked the red version as well, but since the blog had a blue theme going, I stuck with blue in the end. It was a tough decision though!

The new logo inspired me to write version 3.1 and change a few things in the theme. This is how the blog looked a few days ago, still with the old structure and logo. It’s only a minor version update, as it brings just minor feature and design changes. The change from 2.0 to 3.0 was much bigger, with “mobile first” and a complete rewrite of the CSS stylesheet.

For comparison, this is what the previous theme, so to speak version 3.0, looked like.

It might not look that different from 2.0, but again there is quite a lot of code I ripped out and rewrote from scratch.

Of course it’s not done yet and I’m always going to be changing it. But I hope at least now it’s representable.
If you want to give me some feedback I would appreaciate it. Just put it in the comment section or tweet me @HashtagSecurity.


Defcon is over and the dust has settled – or at least I have rested. Since this was my first Defcon, here is a short write up of my experience.

First off, this post is about DC22 and that alone. If you want to read about my trip to BlackHat, go read my BlackHat review. But honestly, why would you? This is defcon we’re talking about!

Since I attended BlackHat, I had only three out of four days of defcon. I arrived at the Rio Hotel around 9:00 AM with my badge in hand, trying to figure out where to go. Ah, just follow the stream of people through the casino. Since I had already picked up my badge at BH the day before, I skipped standing in line for three-plus hours and wandered around the conference site. It was surprisingly quiet, and only small groups of people were walking around, which surprised me to say the least. Where were the masses of hackers, geeks and strange people I had mentally prepared myself for? There wasn’t that much to see, so I headed for Penn & Teller at track 5, where I wanted to attend my first talk. A direct but friendly, loud-voiced goon (defcon volunteer personnel) was shouting at the attendees, including me, as we slowly made our way up the escalators to the upper seating.

“If you just take a few steps and take a seat, this would go much quicker. It’s not that hard, you’ve done it before. You’ve already taken a few steps to get up here, now take a few more steps and sit down. Move it guys, you’re too slow, the escalator is faster than you! THE ESCALATOR IS FASTER THAN YOU!”

He then explained how we should go about choosing our seat. “Move it, go to the end of the row and sit down. There is a second row, start filling it – now! If the seats next to you are empty, you’ve done it wrong!” He put his hands on the shoulders of one guy, who was sitting all by himself in one of the top rows. “IF THE SEATS NEXT TO YOU ARE EMPTY – YOU’VE – DONE IT – WRONG!”.

The talk was a mix of how the badges were made and a general introduction to the mess that is DEFCON. I won’t go into detail, but if you haven’t seen a DEFCON badge before, this is what the DC22 badge and the rest of the attendee kit looked like.

I’m not a hardware guy so I don’t know all the details, but essentially the badge itself is part of the “be active” mentality of defcon that invites you to do something while you’re there. It included a challenge which took the winning team of about 8 people, something around 39 hours to solve. Parts of the challenge were hidden on the badge, others on vendor, goon or speaker badges and the rest placed all over defcon. If you want to know more about the badge challenge, I suggest you read the write up of the winning team [spoiler alert].

After the talk everyone left the room at the same time, once again overloading the escalators and a few minutes later I found myself in the main hall surrounded by tons and tons of people. This is what I had expected when I arrived in the morning!

From there on out it’s all a bit blurry. Not because I didn’t do anything, but because I had no feeling for time for the rest of defcon. The three days felt like only half as much, and the days were over so fast that it’s hard to recall what exactly happened. But what I do remember is that I’ve never talked to so many people at any other conference before. The chillout lounge was a perfect spot to just relax, sit down with some people and chat. And chat we did, about pretty much everything that crossed our minds. The best part, however, is how total strangers can sit down at a table, and only five minutes later they are joking (at each other’s expense, of course), drinking, laughing, having a good time and solving problems. And that’s what impressed me the most. On Saturday morning I sat down with a small group of people, and they were still hung over from the night before. That, however, did not stop them one bit from discussing all sorts of matters. If I had to describe defcon in one sentence, it would be “thousands of people partying and solving the world’s problems at the same time”.

Defcon is of course much more than chatting with people and listening to talks. There are so many challenges to solve, villages to participate in and learn new things, vendors to buy gadgets, books, tools, swag, etc., and every time I went through the different areas I found something I had overlooked the first couple of times. And that’s exactly why I’m going to stop here. I can’t possibly describe how amazing every single part of defcon was, and I would surely miss some awesome things. Just go to DC23 and see for yourself. I’m definitely going to be there, and I’m looking forward to spending even more time there and finding even more stuff to explore. If you don’t have anyone to go with, don’t worry – you’ll make new friends in the first five minutes, or maybe even before that if you spend some time at the forums or the IRC channel #dc-forums.

For those who want to see more, I posted all the pictures I took on my twitter account – @HashtagSecurity. Hopefully I will find the time to write about all the challenges, villages and other fun stuff that happens at defcon next year.

Thanks to everyone who attended defcon this year, I had a great time and met loads of great people. Special thanks of course to all goons and organizers and everyone who helped make defcon what it is.

See you next year!

Python Cheat Sheet

I like to solve my problems in python, so here is a small cheat sheet on python tricks that make my life easier.

There’s not much yet, but more to come!

End for loop on return-key hit

If you need a certain task done over and over again, you can use watch -n [seconds] 'task', but I sometimes need all the information at once, without the screen being cleared after every execution. That’s where Python comes in handy.

import sys, select

print "Hit <Return> to exit"
while True:
    print "I'm doing stuff :)"
    # poll stdin with a zero timeout so the loop never blocks
    if sys.stdin in[sys.stdin], [], [], 0)[0]:
        print "Exiting..."
        break

This little snippet will do what you want until the end of (up)time – or until you hit the return key.
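For comparison, the same polling loop can be sketched in bash using read with a timeout (a rough equivalent, not a drop-in replacement):

```shell
#!/bin/bash
# print a message once per second until Return is hit (or stdin hits EOF)
while true; do
    echo "I'm doing stuff :)"
    read -r -t 1 _
    rc=$?
    # rc 0 = got a line, 1 = EOF; >128 means the timeout expired, so keep looping
    if [ "$rc" -le 128 ]; then
        echo "Exiting..."
        break
    fi
done
```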

HTTP requests made easy

Normally Python uses urllib to make HTTP requests, but that’s kind of a PITA and not very pythonic. “requests” is a library that aims to make it easier – and it does!

Everything you need to know to get started can be found here

It’s easy to install

git clone git://
cd folder
sudo python install

and easy to use

>>> r = requests.get('', auth=('user', 'pass'))
>>> r.status_code
>>> r.headers['content-type']
'application/json; charset=utf8'
>>> r.encoding
>>> r.text
>>> r.json()
{u'private_gists': 419, u'total_private_repos': 77, ...}

Manipulating strings

There is one thing that baffled me when it comes to strings in python.
Namely, I wanted to print the two middle characters of a string. This was my first attempt:

    # My string (I want to print "CD")
    >>> string = "ABCDEF"
    # get half the length of the string (len == 6, half == 3)
    >>> pos = len(string)/2
    # print the middle chars - only prints "D"!
    >>> print string[pos:pos+1]

Note: pos == 3. Python slice indices don’t name characters, they point between them: string[a:b] returns the characters from index a up to, but not including, index b, and indexing starts at 0. So string[3:4] is just the single character at index 3, which is "D". The middle characters "C" and "D" sit at indices 2 and 3, so the slice has to start one position earlier and end one past the last wanted index.

The correct syntax to print CD would be

>>> print string[pos-1:pos+1]
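For comparison, bash can do the same with its built-in substring expansion ${var:offset:length}, where offsets are 0-based as well (a small sketch):

```shell
# extract the middle two characters of "ABCDEF" in bash
string="ABCDEF"
pos=$(( ${#string} / 2 ))
echo "${string:$((pos-1)):2}"   # → CD
```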

For more string manipulation, look here.