Linux ACLs and sticky Users

Sticky, setuid and setgid bits are quite helpful tools when it comes to keeping the correct permissions throughout a set of folders and files. But what if you want to do more than just a fixed group or an execute-as-user option?

Problem: make /var/www/ editable by a bunch of users (group: editors) without messing up the following permission scheme for www-data:editors:others – r-x:rwx:---.

folder: /var/www/ 
owner: www-data
group: editors 

New files will be created with permissions set to owner: creator, group: editors, when they should be:

r-x for www-data (owner)
rws for editors (group + setgid)
--- for others

However, since new files belong to the creating user (e.g. creator), www-data is out of the picture, which means that Apache can’t access the file unless others have r-x permissions.

Linux Permissions

Here we have our test folder; it belongs to www-data:editors and has the setgid bit (s) and the sticky bit (T) set.

drwxrws--T  3 www-data editors   32 Nov 19 16:06 .
drwxr-xr-x 20 fmohr    fmohr   4,0K Nov 19 15:56 ..
-rw-r--r--  1 fmohr    editors    0 Nov 19 16:06 test01
drwxr-sr-x  2 www-data editors    6 Nov 19 16:05 test02

This, however, only lets us achieve fixed group ownership. As you can see with test01, the file was created by and belongs to fmohr, not www-data.

ACL

Enter ACLs, the Linux integrated Access Control List permission system.
At first sight, ACL might look a little bit frustrating, but it’s actually quite easy to use.

First we create a new folder, in this case one belonging to root:root

$ ll
total 0
drwxr-xr-x 3 root root 17 Nov 19 16:26 .
drwxr-xr-x 3 root root 16 Nov 19 16:26 ..
drwxr-xr-x 2 root root  6 Nov 19 16:26 test

Of course we can’t do anything in this folder without using sudo, so let’s change that.

$ touch test/myfile
touch: cannot touch `test/myfile': Permission denied

First, install the acl package (or check whether it’s already installed) with sudo apt-get install acl

You also need to check your filesystem and mounts for acl support. EXT3+4 and XFS should have it enabled by default, others might have to be remounted. For more info on how to enable ACL, check out this blog: http://www.projectenvision.com/blog/4/Enable-Support-for-ACL-in-Debian-Ubuntu
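
If your filesystem needs it, adding the acl mount option and remounting is usually all it takes. A minimal sketch, assuming an ext4 root filesystem mounted at / (adjust device, filesystem and mount point to your setup):

# add the acl option to the relevant entry in /etc/fstab, e.g.
# /dev/sda1   /   ext4   defaults,acl   0   1
# then remount without a reboot
sudo mount -o remount,acl /
# verify that acl now shows up in the mount options
mount | grep ' / '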

To see if your kernel supports ACL, run the following command and check for your current filesystem.

$ grep _ACL /boot/config-*
CONFIG_EXT2_FS_POSIX_ACL=y
CONFIG_EXT3_FS_POSIX_ACL=y
CONFIG_EXT4_FS_POSIX_ACL=y
CONFIG_REISERFS_FS_POSIX_ACL=y
CONFIG_JFS_POSIX_ACL=y
CONFIG_XFS_POSIX_ACL=y
[...]

Now set the ACL entries for the user that should, from now on, have access to the folder and to everything created inside it.

$ sudo setfacl -m d:u:www-data:r-x test/		 # default ACL, inherited by new files/folders inside
$ sudo setfacl -m u:www-data:r-x test			# access ACL on the folder itself
$ ls -ld test/
drwxr-xr-x+ 2 root root 6 Nov 19 16:26 test/

As you might have noticed, the owner and group haven’t changed, but if you look a bit closer you can see the + sign at the end of the permission string. This indicates that ACL rules are in place and working.

We can’t create files yet since we are neither www-data nor root. Even if we were www-data, we would only have read and execute permissions.

To fix that, we need to set the group permissions.

$ sudo setfacl -m d:g:editors:rwx test/
$ sudo setfacl -m g:editors:rwx test/
$ ll
total 4,0K
drwxr-xr-x  3 root root 17 Nov 19 16:26 .
drwxr-xr-x  3 root root 16 Nov 19 16:26 ..
drwxrwxr-x+ 2 root root 17 Nov 19 17:03 test
$ touch test/filetest
$ ll test/filetest
-rw-rw-r--+ 1 fmohr CM_T_OPS_Team 0 Nov 19 17:07 test/filetest

The folder still displays root:root, but we can see the current ACL permissions by running getfacl.

$ ll test/
drwxrwxr-x+ 2 root  root          21 Nov 19 17:10 .

$ getfacl test/
# file: test/
# owner: root
# group: root
user::rwx
user:www-data:r-x
user:fmohr:rwx
group::r-x
group:editors:rwx
mask::rwx
other::r-x
default:user::rwx
default:user:www-data:r-x
default:user:fmohr:rwx
default:group::r-x
default:group:editors:rwx
default:mask::rwx
default:other::r-x

The new file is owned by its creator and that user’s primary group, which is not what we want. The ACL rules, however, tell a different story.

$ ll test/
-rw-rw-r--+ 1 fmohr fmohr  0 Nov 19 17:07 filetest

$ getfacl test/filetest
# file: test/filetest
# owner: fmohr
# group: fmohr
user::rw-
user:www-data:r-x               #effective:r--
user:fmohr:rwx                  #effective:rw-
group::r-x                      #effective:r--
group:editors:rwx               #effective:rw-
mask::rw-
other::r--

They show us which owner and group are set by the regular Linux permission system, but also which users and groups are granted access by the ACL rules.

To expand on that a little bit:

$ getfacl test/filetest
# file: test/filetest
# owner: fmohr
# group: fmohr
user::rw-			# owner: fmohr 	(from default linux permissions)
user:www-data:r-x	# www-data 		(from ACL rule)
user:fmohr:rwx   	# fmohr			(from ACL rule)
group::r-x       	# group: fmohr	(from default linux permissions)
group:editors:rwx	# editors		(from ACL rule)
mask::rw-			# maximum permissions for any users (automatic ACL)
other::r--			# other			(from default linux permissions)

To demonstrate this, I will change the owner of filetest to root:root and set its permissions to 000, which would normally mean no one can access the file.

$ sudo chown root:root filetest
$ sudo chmod 000 filetest
$ ll
----------+ 1 root root  0 Nov 19 17:07 filetest
$ cat filetest
cat: filetest: Permission denied

As you can see, as user fmohr I can’t access the file. This is because the chmod command changed the ACL mask as well.

$ getfacl filetest
# file: filetest
# owner: root
# group: root
user::---
user:www-data:r-x               #effective:---
user:fmohr:rwx                  #effective:---
group::r-x                      #effective:---
group:application_treetool:rwx  #effective:---
mask::---
other::---

If we set the mask back to the maximum permissions ACL is allowed to grant, our ACL rules grant us access to the file again.

$ sudo setfacl -m mask:rwx filetest
$ echo "test" > filetest
$ cat filetest
test

As a final note, default ACLs are inherited by new files but won’t be applied to existing files automatically. This means you have to apply them to your existing files and folders manually, as shown below. After that, you’re set up with a detailed permission system featuring permissions for multiple groups and users, regardless of the original owner of the files.
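
For example, to retrofit the rules used above onto everything that already exists under test/, a recursive setfacl run along these lines should do (a sketch based on the entries from earlier; adjust users, groups and paths to your setup):

# access ACLs for everything that already exists
sudo setfacl -R -m u:www-data:r-x,g:editors:rwx test/
# default ACLs only apply to directories, so limit those to dirs
sudo find test/ -type d -exec setfacl -m d:u:www-data:r-x,d:g:editors:rwx {} +
# review the result
getfacl -R test/ | less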

Thanks to Daniel Lawson over at serverfault for a great answer!

CheatSheet – Ansible


A random collection of commands and playbook features for Ansible.

Setting SSH options

In /etc/ansible/ansible.cfg, SSH settings can be defined.

# uncomment this to disable SSH key host checking
host_key_checking = False
[...]
private_key_file = /etc/ansible/ansible.ppk
[...]

[ssh_connection]

# ssh arguments to use
ssh_args = -o BatchMode=yes -o ForwardAgent=yes

Problems with -o BatchMode

Ansible gives you the option to pass SSH options such as “BatchMode” on to your Ansible runs. However, I ran into a problem regarding BatchMode and the Ansible --ask-pass (-k) option.

I used the following command to check if ldap login worked on all hosts.

ansible 'all' -a hostname -u username -k

With ssh_args = -o BatchMode=yes enabled in /etc/ansible/ansible.cfg the command failed. After I removed BatchMode=yes, everything worked.

Fetch configs and store them on Ansible

To back up single files with Ansible, use the following fetch-configs.yml playbook:

---
# Fetch Configs before rollout
- hosts: all:!fail
  remote_user: root
  gather_facts: true
  tasks:
  - name: Fetch config /etc/example.conf
    fetch: src=/etc/example.conf dest=/srv/ansible/archive/fetched/
  - name: Fetch config /etc/another.conf
    fetch: src=/etc/another.conf dest=/srv/ansible/archive/fetched/
    
# Optional: Push to git repository (/srv/ansible/ must be git repo!)
# For more info, read below about automatically pushing configs to git
- include: autocommit.yml

To back up multiple files, use the synchronize module in pull mode.

#/srv/ansible/multifetch.yml
---
- hosts: all:!fail
  gather_facts: true
  remote_user: root
  tasks:
    - name: Fetch all configs in /root/.ssh/ with ansible sync-module
      synchronize: mode=pull src=/./root/.ssh/ dest=/srv/ansible/archive/fetched/{{ inventory_hostname }}/ rsync_opts=-avR perms=yes

The last part, rsync_opts=-avR perms=yes, can probably still be optimised. I think permission preservation and the -a flags are on by default in Ansible’s synchronize module, as sketched below.
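
If that turns out to be true, the task could probably be trimmed down to something like this (an untested sketch that keeps only the -R rsync option, which the /./ source path relies on, and leaves the rest to the module defaults):

    - name: Fetch all configs in /root/.ssh/ with ansible sync-module
      synchronize: mode=pull src=/./root/.ssh/ dest=/srv/ansible/archive/fetched/{{ inventory_hostname }}/ rsync_opts=-R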

FYI, hosts: all:!fail selects all hosts except the ones I have added to the group [fail]. These are hosts that are known to fail during playbook runs but haven’t been fixed yet.
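
For reference, a minimal sketch of what that looks like in the inventory, plus a quick way to check which hosts a pattern actually matches (the hostnames are made up):

# /etc/ansible/hosts (excerpt)
# [webservers]
# web01.example.com
# web02.example.com
#
# [fail]
# web02.example.com    # known to break playbook runs, excluded for now

# check which hosts the pattern expands to without running anything
ansible 'all:!fail' --list-hosts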

Automatically commit fetched configs to git
Assuming that you store all your fetched configs in one place on your Ansible server, e.g. /srv/ansible/archive/{{ansible_hostname}}/etc/example.conf, you can use the following autocommit.yml playbook to automatically push changes to your repository.

---
- hosts: localhost
  remote_user: root
  tasks:

  # Check if commit is necessary
  - name: check if git commit is necessary
    command: git --git-dir=/srv/ansible/.git/ --work-tree=/srv/ansible/ status
    register: git_check

  # Commit Changes in Ansible Directory
  - name: Committing changes on Ansible server
    local_action: shell cd /srv/ansible/ && git add * && git commit -m "Ansible Automated Commit" && git push
    when: "'nothing to commit' not in git_check.stdout"

To execute the playbook, just append it to your other playbooks. This is useful if, for example, you have a webserver.yml playbook which fetches all configs before deploying new changes.

---
# Webserver playbook
[...] # <- whatever you do in your playbook

# Update & Push Ansible Local Repository
- include: autocommit.yml

Deploy time settings with Ansible CLI

sudo ansible 'mygroup' -m shell -a 'echo "Europe/Berlin" |sudo tee /etc/timezone' -u user -K
sudo ansible 'mygroup' -m shell -a 'sudo cp -f /usr/share/zoneinfo/Europe/Berlin /etc/localtime' -u user -K
sudo ansible 'mygroup' -m shell -a 'sudo ntpdate-debian' -u user -K

CheatSheet – Bash

A random collection of commands for the Linux shell Bash (and other Linux commands that don’t yet have their own cheatsheet).


Backups with dd

mount -o remount,ro /dev/whatever /
dd if=/dev/whatever bs=1M iflag=direct | dd of=/media/exthdd/backup/$date_backup.dd bs=1M
mount -o remount,rw /dev/whatever /

RSYNC

# https://wiki.archlinux.org/index.php/Full_system_backup_with_rsync
rsync -aAXv /* /path/to/backup/folder --exclude={/dev/*,/proc/*,/sys/*,/tmp/*,/run/*,/mnt/*,/media/*,/lost+found}

# Backup with rsync and keep folder structure (the /./ is important!)
rsync -avR /source/path/./folder-to-backup user@server:/target/folder/

LVM

# Create root filesystem snapshots with LVM
# https://wiki.archlinux.org/index.php/Create_root_filesystem_snapshots_with_LVM
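
In the spirit of the linked article, a minimal snapshot-and-backup sketch (volume group and LV names like vg0/root are placeholders; the snapshot size must be big enough to hold all changes made while it exists):

# create a 5G snapshot of the root LV
sudo lvcreate -L 5G -s -n root-snap /dev/vg0/root
# mount it read-only and back it up
sudo mkdir -p /mnt/root-snap
sudo mount -o ro /dev/vg0/root-snap /mnt/root-snap
sudo rsync -aAXv /mnt/root-snap/ /path/to/backup/folder/
# clean up
sudo umount /mnt/root-snap
sudo lvremove /dev/vg0/root-snap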

Find out where GRUB is installed

Nothing is more annoying than getting asked during system upgrades where GRUB should be installed… how ’bout where it was before!? Wait, where was that again?

Just try the disk (e.g. /dev/sda), and if it’s not on there, try its partitions (e.g. /dev/sda1).

root@server:~ # dd bs=512 count=1 if=/dev/sda 2>/dev/null |strings
ZRr=
`|f
\|f1
GRUB <--- there it is!
Geom
Hard Disk
Read

Run command in screen as one-liner

screen -dmS name command
screen -dmS screen01 rdesktop -k us -g 1920x1180 1.1.1.1    

Sed

Sometimes you need to search for something in a document and replace whatever comes after it while keeping the string you found. For example, search for xbob23f, ybob543 and zbob123 and replace them with xbobnew, ybobnew and zbobnew respectively. To do that, you specify a search term in brackets, like (.bob) (the . being any single character), a regular expression for the part you want to change, like ... (three dots for three arbitrary characters following the search term), and a replacement string. The replacement contains whatever the search term (aka .bob) matched, plus whatever you want to put in place of the ... part.

First, the simple structure of sed

sed -options 's/searchterm/replacement/g'	# s = substitute, g = global (replace all occurrences)

Example 1: Replace .bob with itself (e.g. xbob, ybob, zbob)

sed -re 's/(.bob)/\1/g'	#(searchterm) is represented by \1 in replace

Example 2: Replace .bob and the three following chars with the search result

sed -re 's/(.bob).../\1/g'	#You can add further regex after the (searchterm)

Example 3: Same as above and append _new to every found string.

sed -re 's/(.bob).../\1_new/g'

Replace x number of random characters

# As always, . stands for any character, but instead of typing five dots, we specify the amount of chars with `{5}`
sed -r 's/^.{5}//'

Replace a line in a file

sed -i '/TEXT_TO_BE_REPLACED/c\This line is removed by the admin.' /tmp/foo

Print a file up until a certain keyword

sed '/Keyword/q' file

IRC: IRSSI encrypt traffic (don’t know if that’s all of it…)

openssl req -x509 -nodes -days 365 -newkey rsa:4096 -keyout ~/.irssi/mynick.key -out ~/.irssi/mynick.pem

Connect to defcon IRC

irssi
/connect EFNet
/join #dc-forums

GIT

Here are some notes on how to use git – as I always seem to screw things up…
http://gitref.org/basic/#stash

# the basics
git status
git add <file1 file2 | folder | *>
git commit -m "comment"
git push

# delete file
git rm <file>

# delete file from git only (not locally)
git rm --cached <file>

# delete all files from cache that are marked as deleted
sudo git rm --cached $(sudo git ls-files --deleted)

# get a file back that has been deleted locally but not commited yet
git checkout HEAD <file>

# get a file back that has been deleted and commited
git checkout HEAD^ <file>

# temporary move all changes to "stash", to work on something else (e.g. patch)
git stash

# after patch (or whatever) is done, get back to previous work
git stash apply

If you have committed something (not pushed yet) that you want to undo, use

# undo the last commit (the changes stay in your working tree)
git reset HEAD~1

# undo the last two commits
git reset HEAD~2

or, if you just want to undo the commit but keep the changes staged so you can get them back later,
git reset --soft HEAD~1

If git annoys you with files that are being tracked and have changed, but you don’t really care about them (in fact they’re just taking up space in git status…):

# Ignore tracked files
git update-index --assume-unchanged <file>

# If you wanna start tracking changes again run the following command:
git update-index --no-assume-unchanged <file>

# If you want to find all files that have been added to this list, use the following:
git ls-files -v|grep '^h'

General Bash Stuff
Set the setuid/setgid bit to keep the user or group throughout a directory and its subdirectories when editing, moving or creating files under a different user

# Example folder structure
mkdir -p myfolder/subfolder/lastfolder

# Set folder ownership the way you want it
chown -R myuser:www-data myfolder
# Perm: myuser:r/w/x, www-data (group):r/x, everyone: nothing
chmod -R 750 myfolder

# To keep the ownership of myuser, set the setuid bit for the user
# (note: on Linux the setuid bit on directories is ignored for new files, only the setgid bit below really sticks)
chmod u+s myfolder
# Or with -R for recursive if you want to keep it throughout all subfolders
chmod -R u+s myfolder

# To keep the group ownership www-data, set the setgid bit for the group (-R optional)
chmod -R g+s myfolder

# Get permissions of file/s in octal form
# stat -c = format of stat output 
# "%a %n" = print "octal-permissions filename"
stat -c "%a %n"  /etc/sudoers.d/*

Apt-Get

Install Security Updates (-s is dryrun!)

grep security /etc/apt/sources.list > /etc/apt/security.sources.list
apt-get upgrade -o Dir::Etc::SourceList=/etc/apt/security.sources.list -s

Put packages on hold

dpkg --get-selections |grep hold
echo -e "packagename\thold" |sudo dpkg --set-selections

# search package with apt or dpkg
apt-cache search packagename
dpkg --get-selections |grep packagename

# show package details
apt-cache showpkg packagename

# set hold
echo "packagename hold" |dpkg --set-selections

# set install
echo "packagename install" |dpkg --set-selections

MySQL Package Troubleshooting (5.5 vs. 5.6)

# Check which versions are installed
sudo apt-cache policy mysql-server-5.[5,6]

# Check which version is running
sudo mysql -V
sudo mysqld -V

# Uninstall MySQL 5.5
sudo apt-get remove mysql-server-5.5 mysql-server-core-5.5 mysql-client-5.5

# Afterwards, make sure to start MySQL 5.6 again, as it gets stopped when 5.5 is removed.
sudo /etc/init.d/mysql start

# Check which version is running
sudo mysql -V
sudo mysqld -V

List files

# list one file per line (1), don't go into subdirs and print full path (d)
ls -1d /etc/*

# show newest log at the bottom, oldest at the top.
# list all (a) in long-format (l), human readable (h), sorted by time (t) reverse (r)
ls -alhtr /var/log/

AWK

Print everything except the first column

awk -F "delimiter" {'$1=""; print $0'}
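
For example, applied to /etc/passwd with : as the delimiter, this drops the username column (note that clearing $1 still leaves a leading output separator behind):

awk -F ":" '{$1=""; print $0}' /etc/passwd | head -3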

GREP

Grepplings that I need but never want to figure out on my own…

# grep for a string that is exactly 22 chars long (any chars)
grep '^.\{22\}$' 

# grep for a string that is exactly 22 chars long (charset a-z)
grep '^[a-z]\{22\}$' 

Locales on Ubuntu 14.04 – Fresh LXC install

fmohr@ubuntu-1404:~$ locale
locale: Cannot set LC_CTYPE to default locale: No such file or directory
locale: Cannot set LC_MESSAGES to default locale: No such file or directory
locale: Cannot set LC_ALL to default locale: No such file or directory
LANG=en_US.UTF-8
LANGUAGE=
[...]

# To fix it, just run locale-gen en_US.UTF-8
fmohr@ubuntu-1404:~$ sudo locale-gen en_US.UTF-8
/bin/bash: warning: setlocale: LC_ALL: cannot change locale (en_US.UTF8)
Generating locales...
  en_US.UTF-8... done
Generation complete.

fmohr@ubuntu-1404:~$ locale
LANG=en_US.UTF-8
LANGUAGE=
[...]

User Permissions and Groups

If you can’t remember your root password, or run usermod -G group user without the -a option and now find yourself without sudo rights, here is how you reset your root password or group settings.
askubuntu.com/questions/24006/how-do-i-reset-a-lost-administrative-password

# Reboot your system
# Keep hitting SHIFT to get into GRUB selection
# Select Recovery, or Advanced options -> Recovery
# Once the blue/red/greyish menu pops up, select root or netroot shell
# Remount / with
mount -o remount,rw / 
# Change root password
passwd root
# Or reset your group settings (first example vbox host, second vbox guest)
usermod -a -G  hashtagsecurity,adm,cdrom,sudo,dip,plugdev,lpadmin,sambashare,vboxusers hashtagsecurity
usermod -a -G  hashtagsecurity,adm,cdrom,sudo,dip,plugdev,lpadmin,sambashare,vboxsf hashtagsecurity
# You might not need all of the groups - these are just an example

SSL Cert Voodoo

This is a good blogpost when it comes to getting info from ssl certificates!

# Use openssl to get the valid-dates of a SSL cert directly from a website
# If you want a complete ssl scan, use sslscan instead!
echo | openssl s_client -connect google.com:443 2>/dev/null | openssl x509 -noout -dates
notBefore=Jun 19 12:44:04 2013 GMT
notAfter=Oct 31 23:59:59 2013 GMT

# Not SSL, but handy if you are looking for hosts to check...
nmap -PN -p 443 -iL ./all_my_hosts.txt -oN nmap_results.txt

# Now check all open ports for ssl certs with this small bash script.
#!/bin/bash

# loop over the IPs of all hosts that showed an open port in the nmap output
for i in `grep -B 4 open nmap_results.txt |grep "Nmap scan report" |awk {'print $5'}`
do 
  # grab the first line of the HTTPS response header (5 second timeout)
  j=`curl -Ik -m 5 -s https://$i |head -n 1`
  # extract the HTTP status code from that line
  k=`echo $j|awk {'print $2'}`
  echo "Host: $i, Status: $j"
  # skip hosts that demand auth (401) or didn't answer at all
  if [[ "$k" != "401" && "$k" != "" ]]
  then
    # print host, certificate CN and validity dates on a single line
    echo -n "$i;" && echo | openssl s_client -connect $i:443 2>/dev/null | openssl x509 -noout -subject -dates |sed 's/subject=.*CN/CN/g' |sed 's/$/;/g' |tr -d "\n" |sed 's/$/\n/g'
  fi
done

Check a server for supported SSL protocol versions. It should look like this: SSLv3 (or -ssl2) not supported, which is good!

$> openssl s_client -connect server:443 -ssl3 
CONNECTED(00000003)
140131777316512:error:14094410:SSL routines:SSL3_READ_BYTES:sslv3 alert handshake failure:s3_pkt.c:1260:SSL alert number 40
140131777316512:error:1409E0E5:SSL routines:SSL3_WRITE_BYTES:ssl handshake failure:s3_pkt.c:596:
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 7 bytes and written 0 bytes
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
SSL-Session:
    Protocol  : SSLv3
    Cipher    : 0000
    Session-ID: 
    Session-ID-ctx: 
    Master-Key: 
    Key-Arg   : None
    PSK identity: None
    PSK identity hint: None
    SRP username: None
    Start Time: 1413807987
    Timeout   : 7200 (sec)
    Verify return code: 0 (ok)
---

Bad example: this is a successful connect, which in the case of SSLv2 and SSLv3 is something you don’t want!

$> openssl s_client -connect server:443 -ssl3
CONNECTED(00000003)
depth=1 C = US, O = DigiCert Inc, OU = www.digicert.com, CN = DigiCert SHA2 High Assurance Server CA
verify error:num=20:unable to get local issuer certificate
verify return:0
---
Certificate chain
[...]
---
Server certificate
-----BEGIN CERTIFICATE-----
[...]
---
No client certificate CA names sent
---
SSL handshake has read 3079 bytes and written 288 bytes
---
New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES256-SHA
Server public key is 2048 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
SSL-Session:
    Protocol  : SSLv3
    Cipher    : ECDHE-RSA-AES256-SHA
[...]

Even more random Bash stuff
Run a minimal HTTP server on Linux using netcat (nc)

while true;do nc -l -p 8080 -q 1 <<<"Hello World";done
while true;do nc -l -p 8080 -q 1 < index.html ;done

#with interpreted html (note, internal links will not work!)
while true; do { echo -e 'HTTP/1.1 200 OK\r\n'; cat index.html; } | nc -l -p 8080 -q 1; done
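
A quick way to check that the mini server actually answers; run this in a second shell while one of the loops above is running:

curl -v http://localhost:8080/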

VirtualBox shared folder troubleshooting

I know it’s just a link, but a good one! https://forums.virtualbox.org/viewtopic.php?t=15868

Also, so you can access your files without sudo…

sudo usermod -a -G vboxsf username

Troubleshooting with BURP

Recently I installed owncloud on one of my servers. The setup went fine and all seemed good, until I noticed that the redirection after the login page was behaving somewhat strangely. But no worries – BURP to the rescue!

Before we delve into this whole thing let me just say that, while I really like BURP, I don’t want to sell it (or anything really). So far I have always used the free version and never experienced any problems due to feature locks, but if you’re an open source enthusiast you might wanna try the OWASP Zed Attack Proxy (ZAP) as an alternative to BURP.

Back to the story. In order to understand my problem, you need some insight into the setup I’m using.

  User     HTTPS     Public       HTTP      Internal
[BROWSER] -(443)> [APACHE PROXY] -(80)> [APACHE OWNCLOUD]

All Requests to HTTP:80 are redirected by [APACHE PROXY] to https://www.hashtagsecurity.com
All Requests to HTTPS:443 are handled by [APACHE PROXY] vHosts (SNI)

Now, whenever I tried to login at https://owncloud.hashtagsecurity.com/owncloud/index.php, I got redirected to https://www.hashtagsecurity.com/owncloud/index.php and had to manually change the subdomain back to owncloud in order to be logged in.

So the first step for me was to find out where the redirect to www.hashtagsecurity.com came from.

Enter BURP, the intercepting proxy tool that I came to know as a pentesting and troubleshooting gem.

BURP is an intercepting proxy first and foremost, which gives you the ability to do a local man-in-the-middle between your browser and webz and examine, drop, forward, alter, forge, etc. HTTP(S) requests and responses.

In this case, I used it to find out why I was redirected to www. instead of going to owncloud..

After starting BURP and adjusting my Firefox proxy settings to use localhost:8080, I went to my owncloud login page and started the BURP proxy in intercepting mode.

Here is where I hit the first bump. Apparently BURP doesn’t forward TLS client certificates so I had to import mine first.

After that, I changed the proxy settings to also intercept responses. This enabled me to look closer at whatever the server sent me in response to my requests.

The first intercepted request was the login, in which you can see my login credentials being sent to the server. In the first response we can see exactly where the problem lies. The URL in the Location header is correct, but it is set to http:// which results in a redirect by apache to https://www.hashtagsecurity.com/.

Obviously, this can easily be solved by changing the apache vhost config from Redirect to Rewrite.

# Remove Redirect
RedirectPermanent / https://www.hashtagsecurity.com/

# And add Rewrite
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}

While this did solve my problem, it wasn’t a satisfying solution. I’m still being redirected which is totally unnecessary. In order to change that, I needed to enable Enforce HTTPS in owncloud.

The last hurdle was that owncloud told me I need to visit the page via HTTPS in order to enable Enforce HTTPS. That was a bit confusing, until I realized that from the perspective of the owncloud server I was browsing via HTTP all the time. To fix this, I just needed to enable TLS on the backend owncloud Apache and set the proxy to use HTTPS connections on the internal network as well.
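
For completeness, this is roughly what the relevant bits of the proxy vhost look like once the backend speaks TLS. It is only a sketch with placeholder hostnames; SSLProxyEngine is what allows mod_proxy to open HTTPS connections to the backend.

<VirtualHost *:443>
    ServerName owncloud.hashtagsecurity.com
    SSLEngine On
    # ... certificate directives ...

    # talk HTTPS to the internal owncloud server as well
    SSLProxyEngine On
    ProxyPass        / https://owncloud.internal.lan/
    ProxyPassReverse / https://owncloud.internal.lan/
</VirtualHost>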

Unicode display problems in bash

Ok, I know this has nothing to do with security and I’m just writing about typical, everyday sysadmin stuff right now. But this problem has cost me way too much time to not be sharing the solution with the world.

The problem

I was trying to build the owncloud sync client mirall when I noticed that certain characters weren’t displayed correctly. This looked like a typical locales misconfiguration, so I started to mess with that a bit.

I switched to de_DE.UTF-8, which unfortunately didn’t fix the problem. What did, though, was switching to de_DE.ISO-8859-1@euro. Well, so far so good, but now I had my whole system throwing German messages at me. And I like my system to be in English (ever tried to google a German error message?).

I didn’t quite understand the problem at the time, because UTF-8 is unicode and should be able to handle special characters like ä, ö and ü.

So here is a picture of a file called äöü.txt

Here is the same file shown in dolphin:

Another thing I noticed was the command-not-found crash whenever I typed öö instead of ll, which is an alias for ls -lah on my system.

The solution

To make a long story short, I tried setting locales for quite a while until I finally checked my terminal’s shell profile. Under Advanced, I found that the default was always set to ISO-8859-1, despite my locale settings.

After changing that, I went from this

to this

while still having my system running with en_US as locale.

HA-Proxy for the win

I finally found time to take a closer look at HA-Proxy. It is a high-availability load balancer and (reverse-) proxy server and fully open source.

Attention: This is me testing stuff – I have not taken care of settings like no-sslv3, etc. So if you use this in production, make sure to read up on this first! Also, since I’m new to HA-Proxy, I might have misconfigured or missed a few options so don’t blame me if things aren’t perfect regarding either security or performance!

Goals:

My goals utilizing HA-Proxy included

  • Testing TLS offloading and passthrough capabilities
  • Moving client authentication from backend servers to HA-Proxy
  • Replacing Apache as reverse proxy
  • Increasing availability by adding loadbalancing to servers
  • Increasing availability through HA-Proxy failover setup
  • Learn lots of new stuff and share it! 🙂

First off, I installed the current stable version 1.5.6 from the haproxy PPA, since Ubuntu 14.04 server still ships the old stable 1.4, which is missing a lot of features – such as native SSL support.

sudo apt-get install software-properties-common
sudo add-apt-repository ppa:vbernat/haproxy-1.5
sudo apt-get update
sudo apt-get install haproxy

If you want to know more about the parameters used, check out the documentation here: http://cbonte.github.io/haproxy-dconv/configuration-1.5.html (from now on referred to as $dokulink)

Edit the conf /etc/haproxy/haproxy.cfg to look like this.

global
    log 127.0.0.1 local0 notice                 # log to local rsyslog daemon
    maxconn 2000                                # Number of concurrent connections allowed
    user haproxy
    group haproxy
    tune.ssl.default-dh-param 2048				# DHE max size of parameters for key exchange - $dokulink#tune.ssl.default-dh-param

defaults
    log     global
    mode    http
    #option  httplog							# this option messed with my SSL passthrough, so I disabled it
    option  dontlognull
    retries 3
    option redispatch                           # redistribute sessions if a node goes down (no session stickiness)
    timeout connect  5000                       # minimum time to wait until timeout
    timeout client  10000                       # timeout received from client
    timeout server  10000                       # timeout received from server

To add a farm, you should first know a bit about the config structure.
There are three other config types besides global and defaults, named frontend, backend and listen.

The first two are used to configure the interface that will be addressed by visitors (frontend) and the farm and loadbalancing settings (backend). The third one (listen) is simply a combination of the first two, which takes fewer lines for the same config but, on the downside, hurts readability. Since I’m fairly new to HA-Proxy, I will use frontend and backend, but it’s really up to you which path you choose.
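
Just to illustrate the difference, the passthrough setup from the next section could also be written as a single listen block, roughly like this (same placeholder backend as below):

listen https_passthrough
  bind *:443
  mode tcp
  option tcplog
  option ssl-hello-chk
  server apache01.lan 10.0.3.4:443 check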

SSL Passthrough

Here haproxy doesn’t terminate the SSL connection but passes it right through to the internal server. This also means that you can’t mess with the traffic, add header options and so on. But it’s an easy way to loadbalance servers that already have SSL enabled without much effort.

frontend https_passthrough
  bind *:443
  option tcplog
  mode tcp
  default_backend apache01

backend apache01
  mode tcp
  option ssl-hello-chk
  # balance roundrobin		# Since I only have one server atm, I don't need a balance option
  server apache01.lan 10.0.3.4:443 check

SSL Offloading / Termination

The nice thing here is that you can use either HTTP or HTTPS internally, since the TLS connection is terminated by HA-Proxy and the traffic is then sent to the user over the secure connection established between HA-Proxy and the user.

One possible reason to do this is to use TLS certificates signed by a private CA on the internal network and only deploy the official “trusted” certificate to HA-Proxy. This makes it easier to switch certificates internally, as you have full control over the CA and can create as many certificates as you want for any internal domain. If you want to renew your website’s official certificate, you just have to deploy it onto HA-Proxy and restart the service. Or you can use plain HTTP internally, in case one of your applications isn’t capable of TLS, and still encrypt the traffic between the user and your loadbalancer.

frontend https_termination
  bind *:443 ssl crt /etc/ssl/private/officialcert.pem
  mode http
  option httpclose
  option forwardfor
  reqadd X-Forwarded-Proto:\ https
  default_backend ghostblog

backend ghostblog
  mode http
  server ghostblog.lan 10.0.3.4:2368 check

Note that the HA-Proxy TLS certificate format is actually a combined file of the .crt and the .key file. To create the file, just run

cat sitecert.crt sitecert.key > sitecert_haproxy.pem

So much for testing TLS passthrough and termination. Let’s move on to client certificate authentication.

Client Certificate Authentication

In the TLS passthrough section, client certificate auth will still work if it was enabled on the internal apache server, as everything just gets passed through, including the request for the user to authenticate.

But I’d rather have one, highly available place to do all the config and not care about deploying authentication onto every internal webserver.

Enabling it is pretty straightforward: just append the ca-file and verify parameters to the bind option in your TLS termination section. (Note: $certs/ == /etc/ssl/private/)

bind *:443 ssl crt $certs/officialcert.pem ca-file $certs/private_ca.crt verify required

Now, users are required to present a certificate that has been signed by private_ca.crt in order to fully establish the TLS connection. However, right now no one without a valid cert can visit my blog at all.

frontend https_termination
  bind *:8080  ssl crt $certs/officialcert.pem ca-file $certs/private_ca.pem verify optional
  mode http
  
  #Update - 17.11.2014
  #redirect location / if { path_beg /ghost/ } ! { ssl_fc_has_crt }
  redirect location / if { path_beg /ghost/ } ! { ssl_c_used }
  default_backend ghost-htsec

backend ghost-htsec
  mode http
  server ghost-htsec01 10.0.3.57:2368 check

Setting verify optional basically means that we don’t care if a visitor provides a certificate or not. Adding the redirect line adds the additional security for the subfolder we want to protect. Now visitors can browse my blog, but only those with a valid cert can go to /ghost/login/ or /ghost/signup/. The best part is that those who try to log in without a certificate don’t get an error message but instead are redirected to the root homepage /.

This adds another tiny bit of security by obscuring the login and register pages. Security by obscurity is not a bad thing, as long as you don’t rely on it alone for protection!

Since this is already quite a bit of haproxyness to take in, I’m going to stop here and publish what I have learned so far. Stay tuned for another post on HA-Proxy, where I will try to tackle my remaining goals.

UPDATE: 17.11.2014

I experienced huge problems with ssl_fc_has_crt in the past couple of days. The first connection to my host with a valid certificate would always be handled correctly, but a reload of the same page resulted in the redirect that should only be applied to users without a certificate. After going through the HA-Proxy documentation, I found that ssl_c_used is the better choice. Quote from the docs (http://www.haproxy.org/download/1.5/doc/configuration.txt):

ssl_fc_has_crt : boolean
  Returns true if a client certificate is present in an incoming connection over
  SSL/TLS transport layer. Useful if 'verify' statement is set to 'optional'.
  Note: on SSL session resumption with Session ID or TLS ticket, client
  certificate is not present in the current connection but may be retrieved
  from the cache or the ticket. So prefer "ssl_c_used" if you want to check if
  current SSL session uses a client certificate.

ssl_c_used : boolean
  Returns true if current SSL session uses a client certificate even if current
  connection uses SSL session resumption. See also "ssl_fc_has_crt".

Update: 21.11.2014
I noticed that a brand new Firefox profile, as well as Firefox mobile on my Android, were greeting me with this message instead of my website.

www.hashtagsecurity.com uses an invalid security certificate.
The certificate is not trusted because no issuer chain was provided.
(Error code: sec_error_unknown_issuer)

I found that a little bit strange, since I bought a valid certificate at a Comodo reseller. This is due to Firefox being a bit more strict than other browsers when it comes to the TLS implementation.

The fix is to add the ca-bundle file, which contains the TLS chain certificates, to your webserver config.

# In Apache, add this line
SSLCertificateChainFile /etc/ssl/private/servername.ca-bundle
# In Lighttpd, add this line
ssl.ca-file = "/etc/ssl/private/servername.ca-bundle"

In NGINX and HAPROXY, you don’t change the config file. Just add the content of the ca-bundle.crt to your original certificate.

cd /wherever/your/certs/are/
cat servername.ca-bundle >> servername.crt

For HAPROXY, your certificate should look like this:

-----BEGIN CERTIFICATE-----
long-server-cert-string
-----END CERTIFICATE-----
-----BEGIN PRIVATE KEY-----
long-server-key-string
-----END PRIVATE KEY-----
-----BEGIN CERTIFICATE-----
long-ca-cert-string-01    
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
long-ca-cert-string-02
-----END CERTIFICATE-----

After restarting HAPROXY, the error message went away and Firefox displayed the website just like any other browser.

Links:
Here are a few links that helped me come this far.

haproxy.com – they have a lot of stuff but I’m not always sure if it’s still accurate!

Install MySQL 5.5 on Debian Wheezy 7

This is just a quick note, as I struggled with installing MySQL 5.5 (rather than 5.6) on Debian Wheezy.

First, I already had MySQL 5.6 installed but no data was stored there, so backup wasn’t necessary

$ sudo apt-get purge mysql*
$ sudo apt-get autoremove
$ sudo rm -r  /var/lib/mysql/ /var/log/mysq* /tmp/mysql* 

Normally, this should do the trick

sudo apt-get install mysql-server-5.5 mysql-client-5.5 mytop

But in my case, mysql-server-5.5 pulled in mysql-common (5.6) as a dependency, so I had to do this in order to get the right versions:

$ sudo apt-get install mysql-server-5.5 mysql-client-5.5 mysql-common=5.5.40-0+wheezy1 libdbd-mysql-perl libmysqlclient18=5.5.40-0+wheezy1 mytop
$ sudo mysql_upgrade

The packages mysql-common and libmysqlclient18 have to be pinned to a version older than 5.6. If you don’t know the version, you can check for available versions with ‘apt-cache policy packagename’. Just pin the packages to the required version until everything installs correctly. You can test your settings by running apt-get in dry-run mode (append -s), as shown below.
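
A minimal sketch of that check-and-dry-run workflow (the version strings are the ones from my install above, yours will differ):

# list the versions apt knows about
apt-cache policy mysql-common libmysqlclient18
# dry run the pinned install first (-s), then repeat without -s if it looks right
sudo apt-get install -s mysql-server-5.5 mysql-client-5.5 mysql-common=5.5.40-0+wheezy1 libmysqlclient18=5.5.40-0+wheezy1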

Hope this helps!

Mitro Login Manager On-Premise

On 31 Jul 2014, the cloud based login manager “Mitro” was published under the GPL on github. In this blogpost, I’ll go through the steps of setting up the server and browser extensions.

The login manager Mitro has been developed by a small team based in NYC, which was recently acquired by Twitter. Part of the deal, as I recall, was that the Mitro project had to be published under an open source license, which is great, since I had contacted the devs a few months ago asking whether there would ever be an on-premise version of it. Well, now there is, but it takes a bit of work to get everything up and running.

At the time of writing, I do not see Mitro On-Premise as production-ready software, as there are a few issues that have to be sorted out. But I’ll do my best to help get it there and I’m happy to see that many others are actively working towards the same goal.

TL;DR

If you just want to install Mitro without understanding what you’re doing – you can use my mitro-debian-setup.sh script, which should do all of this automatically. Make sure to check the README.md for instructions first and run tail -f mitro-debian-setup.log in a second shell to catch problems like this:

You can avoid this completely by running apt-get update && apt-get upgrade prior to the script. Also, just to be on the same page – this is what my Debian image looks like:

A few notes before we get started – for those of you actually reading this…

I will generally not work under the root user but under admin with sudo. I will not print whole files but mark the changes like this:

user@host: $ vi /etc/example.conf
 + added line
 - removed line

Or with line numbers if the file is too big or similar lines exist.

user@host: $ vi /etc/example.conf
98: + added line
98: - removed line

While I always try to make things understandable even for inexperienced users, keep in mind that this is a very critical application and you should know what you’re doing before you attempt to host it for yourself or even others. Therefore, I will not go into any detail on basic commands such as how to exit vim or log in via ssh. If you can’t do that, you’re missing some serious basics for this post!

For now, I got the server up and running under Debian 7, and the extensions working with my server with chromium-browser in Kubuntu 14.04. Keep in mind that right now, everything is in a state of constant change, so I’ll either update this blogpost or look for a better way to keep the info up to date.

The whole setup consists of three main parts

  • The mitro server daemon
  • The mitro mailing daemon
  • The mitro browser extensions

I’ll try to get any changes I make in the code or scripts into the official Mitro repository, but I will note all important changes in this blogpost. If you’re missing anything, you can always check out my fork on github for commits that haven’t been merged yet.

Preparations

Some general info about the environment I used for this setup.

Server OS: Debian 7 Non-GUI (Connect via SSH)
Server VM: Virtualbox - 192.168.0.110
Client OS: Kubuntu 14.04 64bit
Client VM: Virtualbox - 
Git Mitro: https://github.com/mitro-co/mitro
Git Fork: https://github.com/fredericmohr/mitro

Before you do anything, make sure your server has all package updates installed.

root@debian:~ # apt-get update && apt-get upgrade

Since this is a critical application, we’ll try to keep security in mind from the beginning.

Set up an admin user with sudo rights and one called mitro to run the services under. You should of course choose different, strong passwords for both users, as they form the foundation of your new login manager!

root@debian:~ # echo mitro-server > /etc/hostname
root@debian:~ # vi /etc/hosts
 - 127.0.0.1       localhost
 + 127.0.0.1       localhost mitro-server

root@debian:~ # hostname -F /etc/hostname
root@mitro-server:~ # adduser admin
root@mitro-server:~ # adduser mitro

And in case you haven’t done so during the installation of your os, set a strong password for root as well.

root@mitro-server:~ # passwd root

Next, grant sudo rights to admin, logout as root and login as admin.

root@mitro-server:~ # apt-get install sudo
root@mitro-server:~ # visudo -f /etc/sudoers.d/mitro
 + # admin user to manage mitro server
 + admin    ALL=(ALL:ALL) ALL
 
root@mitro-server:~ # exit

Test if sudo works properly and move on to the next part if it does.

admin@mitro-server:~ $ sudo whoami
[sudo] password for admin:
root

Mitro Server Setup

Server prerequisites

Before we begin with the actual server, there are a few things we have to take care of such as setting up the directory and installing the base packages.

admin@mitro-server:~$ sudo mkdir /srv/mitro/
admin@mitro-server:~$ cd /srv/mitro/
admin@mitro-server:~$ sudo chown -R mitro:mitro /srv/mitro
admin@mitro-server:~$ sudo chmod g+s /srv/mitro
admin@mitro-server:~$ sudo chmod u+s /srv/mitro
admin@mitro-server:~$ sudo chmod 755 /srv/mitro

This is how the folder permissions should look. The setuid/setgid bits should make sure that the user and group ownership stays mitro throughout the setup.

admin@mitro-server:~$ ls -lh /srv/
drwsr-sr-x  2 mitro mitro 4.0K Sep 19 19:17 mitro

admin@mitro-server:~$ cd /srv/mitro

Install all Debian packages necessary to build and run the server, and create the necessary binary links, since a few binaries are called by the wrong name. — We should probably fix the build scripts…

admin@mitro-server:~$ sudo apt-get install git screen postgresql postgresql-contrib postgresql-pltcl-9.1 ant make g++ curl unzip
admin@mitro-server:/srv/mitro$ sudo curl -sL https://deb.nodesource.com/setup | sudo bash -
admin@mitro-server:/srv/mitro$ sudo apt-get install nodejs

Check if initdb is present. If not, you need to create a link to the executable.

admin@mitro-server:/srv/mitro$ which initdb
admin@mitro-server:/srv/mitro$ sudo ln -s /usr/lib/postgresql/9.1/bin/initdb  /usr/bin/initdb
admin@mitro-server:/srv/mitro$ which initdb
/usr/bin/initdb

According to the Mitro README, the server should be run with the official Oracle Java JDK 7. However, I haven’t had any problems with the openjdk version and since it’s easier to maintain updates through apt-get, I am going to stick with it.

admin@mitro-server:/srv/mitro$ sudo apt-get install openjdk-7-jdk libpostgresql-jdbc-java

Just to be sure, trace back the current java executable and check if it really is jdk-7. In my case jdk-6 was still being used so I had to change it.

admin@mitro-server:/srv/mitro$ which java
/usr/bin/java
admin@mitro-server:/srv/mitro$ file /usr/bin/java
/usr/bin/java: symbolic link to `/etc/alternatives/java'
admin@mitro-server:/srv/mitro$ file /etc/alternatives/java
/etc/alternatives/java: symbolic link to `/usr/lib/jvm/java-6-openjdk-amd64/jre/bin/java'
admin@mitro-server:/srv/mitro$ sudo rm /etc/alternatives/java
admin@mitro-server:/srv/mitro$ sudo ln -s /usr/lib/jvm/java-7-openjdk-amd64/jre/bin/java /etc/alternatives/java

The following code block shows some optional settings which I haven’t tried myself. But I’ll put them here in case anyone wants to try them.

# The sysctl.conf part is optional - depending on whether or not you need/want multiple postgresql instances running.
# Edit /etc/sysctl.conf to add the following lines:
# run multiple instances of postgres
kern.sysv.shmmax=1610612736
kern.sysv.shmall=393216
kern.sysv.shmmin=1
kern.sysv.shmmni=32
kern.sysv.shmseg=8

# Then run the following for each line above
sudo sysctl -w <line>	

Preparing the postgresql database

According to the docs, the postgresql database service must run as the same user as the mitro service – which in this case means the unix user “mitro”.
To get this done, you need to follow these steps:

# Stop Database
admin@mitro-server:/srv/mitro$ sudo /etc/init.d/postgresql stop
admin@mitro-server:/srv/mitro$ sudo netstat -antp 

# if postgres is still running for whatever reason
admin@mitro-server:/srv/mitro$ sudo killall -9 postgres

# prepare to start postgres as "mitro"
admin@mitro-server:/srv/mitro$ sudo chown -R mitro:mitro /var/run/postgresql/

Mitro uses pg_ctl, but Debian has switched to pg_ctlcluster so the build process fails! To fix that, you need to create a link for pg_ctl.

admin@mitro-server:/srv/mitro$ sudo ln -s /usr/lib/postgresql/9.1/bin/pg_ctl /usr/bin/pg_ctl

Before we can continue, we need to check out the repository. I advise you to take the official repo and not my fork, as the latter might not be working at times since I tend to break things and push them – you have been warned!

admin@mitro-server:/srv$ sudo git clone https://github.com/mitro-co/mitro.git
  Cloning into 'mitro'...
  remote: Counting objects: 5516, done.
  remote: Total 5516 (delta 0), reused 0 (delta 0)
  Receiving objects: 100% (5516/5516), 65.85 MiB | 640 KiB/s, done.
  Resolving deltas: 100% (1049/1049), done.

Prepare postgresql database

mitro@mitro-server:/srv/mitro/mitro-core$ mkdir -p /srv/mitro/mitro-core/build/postgres
mitro@mitro-server:/srv/mitro/mitro-core$ initdb --pgdata=/srv/mitro/mitro-core/build/postgres -E 'UTF-8' --lc-collate='en_US.UTF-8' --lc-ctype='en_US.UTF-8'

Start postgresql daemon and create the database

mitro@mitro-server:/srv/mitro/mitro-core$ pg_ctl -D /srv/mitro/mitro-core/build/postgres -l logfile start
mitro@mitro-server:/srv/mitro/mitro-core$ createdb -O mitro mitro

Check if postgresql is running properly

netstat -antp |grep "postgres"
tcp        0      0 127.0.0.1:5432          0.0.0.0:*               LISTEN      7061/postgres
tcp6       0      0 ::1:5432                :::*                    LISTEN      7061/postgres

Installing and configuring the server

We only need the server files, which reside in mitro-core. Make sure that the right permissions are set and switch back to mitro before creating the database and running ant test.

#To avoid this, change at beginning of blogpost...
admin@mitro-server:/srv/mitro$ sudo mv ./mitro/mitro-core .
admin@mitro-server:/srv/mitro$ cd ../
admin@mitro-server:/srv$ sudo chown -R mitro:mitro mitro/
admin@mitro-server:/srv$ sudo su - mitro


mitro@mitro-server:~$ cd /srv/mitro/mitro-core/
mitro@mitro-server:/srv/mitro/mitro-core$ ant test
  [...]
  BUILD SUCCESSFUL
  Total time: 50 seconds

If you find this during the test run, don’t worry. We’ll take care of this later.

[junit] WARN  [2014-09-20 07:50:22,425Z] co.mitro.core.server.Manager: Not creating constraint groups_name_scope: DB is not Postgres
[junit] WARN  [2014-09-20 07:50:22,425Z] co.mitro.core.server.Manager: Not creating constraint group_secret_svs_group: DB is not Postgres
[junit] ERROR [2014-09-20 07:50:22,463Z] co.mitro.access.servlets.ManageAccessEmailer: missing or invalid email: null

I’m actually not quite sure if the build script is really needed after going through all these steps manually.

# you don't need it - the script is trash...
mitro@mitro-server:/srv/mitro/mitro-core$ ./build.sh

To run the server, just type the following:

mitro@mitro-server:/srv/mitro/mitro-core$ ant server

I suggest you only run the server this way for testing purposes. If you want the server to run in the background, log in as admin and type:

admin@mitro-server:/srv/mitro/mitro-core$ screen -S mitro-server
admin@mitro-server:/srv/mitro/mitro-core$ su - mitro
mitro@mitro-server:~$ cd /srv/mitro/mitro-core/
mitro@mitro-server:/srv/mitro/mitro-core$ ant server > server_run.log 2>&1
[CTRL]+[A] [D]
admin@mitro-server:/srv/mitro/mitro-core$ tail -f server_run.log
[...]
[CTRL]+[C]

If you’ve made it this far, then the Mitro server is running. You can verify this by visiting this URL: https://yourmitroserver:8443/mitro-core/api/BuildMetadata
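
If you’re on a headless machine, a quick curl from the server itself works just as well (-k because I’m assuming the default setup uses a self-signed certificate):

curl -k https://localhost:8443/mitro-core/api/BuildMetadata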

Debugging:

I have incorporated everything I found into the blogpost above, so you shouldn’t run into these problems; however, if you do, this might help you! Unfortunately I forgot to document most of the error messages, so I’ll add them here as I run into them again.

Error message:

mitro@mitro-server:/srv/mitro/mitro-core$ ant server
Buildfile: /srv/mitro/mitro-core/build.xml
compile:
jar:
     [exec] Result: 128
[propertyfile] Updating property file: /srv/mitro/mitro-core/build/java/src/build.properties
     [exec] shutil.rmtree(tempdir)
     [exec] Traceback (most recent call last):
     [exec]   File "tools/jarpackager.py", line 109, in <module>
     [exec]     main()
     [exec]   File "tools/jarpackager.py", line 92, in main
     [exec]     unpack_jar(path, tempdir)
     [exec]   File "tools/jarpackager.py", line 29, in unpack_jar
     [exec]     process = subprocess.Popen(args)
     [exec]   File "/usr/lib/python2.7/subprocess.py", line 679, in __init__
     [exec]     errread, errwrite)
     [exec]   File "/usr/lib/python2.7/subprocess.py", line 1259, in _execute_child
     [exec]     raise child_exception
     [exec] OSError: [Errno 2] No such file or directory
     [exec] Result: 1
     [echo] Built build/mitrocore.jar

Solution:

sudo apt-get install unzip

Error message:

mitro@debian:~$ pg_ctl -D /srv/mitro/mitro-core/build/postgres start
pg_ctl: another server might be running; trying to start server anyway
server starting
mitro@debian:~$ FATAL:  lock file "postmaster.pid" already exists
HINT:  Is another postmaster (PID 3468) running in data directory "/srv/mitro/mitro-core/build/postgres"?

Solution:

[*] If you can't see postgres running, you need to kill the old process first
 [-] root: killall -9 postgres
 [-] root: rm /srv/mitro/mitro-core/build/postgres/postmaster.pid

Book Review: CTF Blueprints

This is a short review of the book Kali Linux CTF Blueprints by Cameron Buchanan which was published under Packt Publishing in July 2014.

The book’s goal is to provide blueprints for building a CTF environment. In my opinion, this is not quite accurate, as the blueprints are mere pointers in the right direction. While this might be misleading, it’s actually a good thing, as real blueprints would result in a series of similar or even identical CTFs. Before you buy the book, please note that the author expects a certain level of pentesting knowledge and skill. Basic concepts such as XSS, privilege escalation or information gathering are not explained.

The target group is definitely the experienced pentester who wants to set up a challenge or training ground for colleagues, not the inexperienced one who wants to test their own skills. That being said, someone new to the game could still learn a few things by reading this, although there are books out there that are better suited for that task.

In the first and second chapters, the author explains how to set up a vulnerability on Windows and Linux hosts respectively. The chapters are split into three main parts, namely securing the base OS that is used to run the challenge, installing the vulnerable software and finally explaining how the vulnerability could be exploited.

The first part is important, as a challenge should not have any other security holes except the intended ones, therefore forcing the challengers into solving the right problems instead of finding a way around them. The second part is pretty obvious, as this book is all about creating vulnerable machines and the third part is just to give some insight on how the challenge could be solved.

There are also some tips on how to plant flags without them being too easy or too hard to find. For example, a C:\flag.txt file would be too easy to find and would also make any post-exploitation unnecessary. Hiding the flag inside a config file in a temporary, randomly generated subfolder of the Firefox addons folder would be too hard, unless the objective is clearly stated and includes some hints that point a player to this location.

I can’t say much about the other four chapters “Wireless and Mobile”, “Social Engineering”, “Cryptography” and “Red Teaming”, as I didn’t find the time to read them. However, they do look interesting and I’ll post an update as soon as I get to them.

Overall, this is the first PacktPub book that wasn’t completely disappointing. The one thing I missed was some suggestions for vulnerabilities that make good targets in a CTF, preferably in the form of CVEs or a short list of sites where to find such things. Someone who is new to creating CTFs might have a hard time finding vulnerabilities that are suitable for challenges.

I’m not quite sure if it is worth the price of €16.32 ($22.44) for the ebook edition, as it’s only a quick intro to the topics and lacks depth, but at least you get something in return for your money, unlike many of the other books I reviewed.

However, if I had looked for a short CTF guide that points me in the right direction on how to make my own challenges, I wouldn’t have regretted the purchase.

If you have any specific questions about the book, drop me a line and I’ll see if I can answer them for you.

#Security in time

HashtagSecurity.com has come a long way since it started over two years ago.

In the depths of my hard drive I found screenshots of older hashtagsecurity.com versions, so I decided to share them with you!

Back when I started, it was a blog about pretty much everything that crossed my mind. Having its roots in my first, now buried blog mohrphium.com, which I started during my apprenticeship, it wasn’t solely focused on security. However, after I got my hands on the domain hashtagsecurity.com, I decided to leave the administration area and get more into infosec.

The first “logo” I created was done during a train ride on my tablet. I was playing with different brush settings in a new painting app and created a “#” that I thought would make a great logo for my new blog.

Original logo on my old wordpress blog:

I’m no designer, that’s for sure – but still, I should have known better. After some time, even I noticed that the resolution was crappy, so I imported the jpg file into inkscape, created an svg from it and imported that into blender. Yes, I created my v1.1 logo in a 3D modeling application. That should give you a pretty good picture of how 1337 my design skills are.

Over time I rendered different logos, but the last one was this:

The logo was of course not the only thing that changed. In the beginning I often found my blog showing really old posts. It’s the typical beginners blogging problem.

  1. Post – “There is not much here yet, but I’m gonna start writing soon”
  2. Post – “Sorry for not writing in such a long time, but it’s gonna change real soon, promise”

Even though I didn’t actually write it that way, the publishing dates speak for themselves.

I still have to force myself from time to time to publish posts, but all in all I got kind of a thing going now.

Like every beginning blogger, I wondered if anyone was reading my blog at all. And I still do, actually. That’s why I added Google Analytics at one point. That was pretty much at the beginning, back when the blog was still running on WordPress. After Ghost came out, I switched to the new nodejs-powered blogging platform and had so many new problems that tracking and analytics weren’t really on my mind anymore.

I tried a few themes but ended up writing my first crappy theme in HTML, CSS, JS and Ghost Handlebars templates.


This was version 1.0, and it even came with a “mobile version”, which only worked correctly in a few browsers, as you can see here.

For a first try I thought it was acceptable, when in fact it sucked quite a lot. So I rewrote the whole thing twice, and what you see today is actually v3.1.

Version 2.0 was my first complete rewrite, and though I stuck to the overall design, there was quite a huge change in the codebase.

The big rewrite not only improved my theme, but also my rudimentary CSS, HTML and JS skills, allowing me to finally solve many problems with my mobile theme.

![](/content/images/2014/Aug/v2-0_mobileA.jpg) ![](/content/images/2014/Aug/v2-0_mobileB.jpg)

I thought about releasing the code, but I don’t want anyone having to deal with it – seriously! If you really want the code, just write me and I’ll gladly hand it out. But you have been warned!

Between writing versions 2.0 and 3.0 I again found myself wondering if anyone actually read my blog and started to think about my objective. Why was I writing this stuff? And why was I publishing it for everyone to see? The short answer is: because I like to help people. I love finding the solution to a problem in minutes on somebody’s blog, and I always wanted to contribute to the open knowledge and open source community. Realizing that, I noticed that it doesn’t matter how many people read my blog. If it helped even just one person, the post was worth writing.

I installed Kibana at one point to get a rough overview of visitors from my webserver’s access logs, but that’s about it. I have no need for cookies, tracking or advertisement. The cookies that are created are because of the twitter panel on the right, and I’m not even sure if I’m going to keep that.

I enjoy a clean blog, and that means no ads, no tracking and no click marathons to get the information you want.

After all that, I finally found time to look at my logo again. And it’s fair to say that I did not like it anymore. So I set out on the task of designing a new one. As I’ve mentioned before, I’m no designer, so it took me longer than I want to admit. Here are two of the ideas I had, scribbled on a pink post-it.

Not much? That’s because no matter how many hashtag designs I looked at, I couldn’t come up with something I liked.

Finally I fired up blender again and due to a lighting accident I came up with this.

This was actually the first logo I kind of liked. I showed it to a few of my colleagues and they said “Show it to the guys from the graphics department, they can surely give you some good feedback”. Oh boy!

I showed it to Nick. He just shook his head and sent me some examples a few minutes later. I mashed them all together to get a better overview, but they were all in high resolution.

After a few mails back and forth, he came up with these two for twitter and one with “Security” instead of just “SEC” for the blog header.

I really liked the red version as well, but since the blog had a blue theme going, I stuck with blue in the end. It was a tough decision, though!

The new logo inspired me to write version 3.1 and change a few things in the theme. This is how the blog looked a few days ago, still with the old structure and logo. It’s only a minor version update, as it only brings minor feature and overall design changes. The change from 2.0 to 3.0 was much bigger, with things like “mobile first” and a complete rewrite of the CSS stylesheet.

For comparison, this is what the previous theme, version 3.0 so to speak, looked like.

It might not look that different from 2.0, but again there is quite a lot of code that I ripped out and completely rewrote.

Of course it’s not done yet and I’m always going to be changing it. But I hope at least now it’s presentable.
If you want to give me some feedback, I would appreciate it. Just put it in the comment section or tweet me @HashtagSecurity.