Resurrecting my old ghost blog

Ghost (CMS) is nice and I’ve used it for a couple of years, but with the start of my company my blog got less attention, broke, went offline and was never fixed. In short: running Ghost yourself is work. Not necessarily a ton of work, but work nonetheless.

So I decided to resurrect my old blog and move it to a fully hosted platform. My choice fell on Blogger, as it is completely free, plays nice with Google Analytics and SEO (being a Google product) and has seen a couple of notable changes since its early days – most importantly, better themes and the ability to apply custom styles to them.

My first issue was that my old Ghost-powered blog wasn’t online anymore, so I had to get it up and running again in order to copy the old content over to the new platform. Luckily, I still have the raw setup files from /var/www/ and the most recent database backup.

Since the original server was in an unusable state (and running Ubuntu 14.04!), I decided to run the resurrection ritual on a local virtual machine.

First thing to note here: I’m currently running Windows 10 instead of my usual Kubuntu setup, so my first thought was to use PuTTY to SSH into my VM. I then decided to give the built-in SSH client in Windows a try and, lo and behold, it is a beaut.


Thanks to this nifty little feature, I can use both scp and ssh directly from a PowerShell window.

I copied the files over to the VM, which sits behind a NAT interface with port forwarding, hence port 2222 on localhost. Once the copy job was done, I set out to install the most important packages to get started.

fmohr@vm$ sudo apt install unzip nodejs npm mysql-server

Now with the bare necessities in place, it was time to restore the database to its former glory.

fmohr@host:~/$ scp -P 2222 .\ghostdb.sql fmohr@localhost:~/
fmohr@vm:~/$ sudo mysql -u root < ghostdb.sql

Success! I can now try to adjust the Ghost settings file and start the Node.js server.

fmohr@vm:~/$ vi config.js

Of course I had to change the database connection settings, namely host, user and password, from the live environment values to those of my local test machine.

In order to even try and run the server, the necessary dependency packages need to be installed. It became pretty clear early on that this Ghost installation hadn’t been updated in a while.

fmohr@vm:~/ghost$ npm install

Unfortunately, all this brought me was a bunch of error messages.


Trying to run the server anyway gave me the next error.

After a bit of Google-fu I half found out, half remembered that the NODE_ENV variable had to be set in order to use the production settings from the config.js file.
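So I set it and tried again, roughly like this (quoting from memory, so take the exact invocation as a sketch):

fmohr@vm:~/ghost$ NODE_ENV=production node index.js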


This resulted in a different error. Two steps forward, one backwards.


Stackoverflow to the rescue! The solution was to downgrade the mysql authentication method.

fmohr@vm:~/ghost$ mysql -u root -p
mysql> ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY 'HelloKitty123';
mysql> FLUSH PRIVILEGES;

Well, this certainly got rid of the MySQL authentication error but spawned a new one. Luckily there is a solution for everything, and once again its name is downgrade. This is best done using NVM, the Node Version Manager.

fmohr@vm:~/ghost$ curl -o install.sh https://raw.githubusercontent.com/creationix/nvm/v0.33.8/install.sh
fmohr@vm:~/ghost$ less install.sh
fmohr@vm:~/ghost$ sudo bash install.sh

The originally suggested command for installing NVM was the following:

curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.33.8/install.sh | bash 

You may have noticed that I changed the procedure. Since this is a security-focused blog, let me quickly explain why.

Spoiler: Never pipe unknown shell scripts to bash. NEVER! Ever! EVAAA!
The following commands install the most recent node version as well as the desired one.

fmohr@vm:~/ghost$ nvm install node
fmohr@vm:~/ghost$ nvm install 0.10.45 

This allows me to try running Ghost again, this time with the (presumably) correct version of node.

NODE_ENV=production ~/.nvm/v0.10.45/bin/node index.js

It works! The startup message is incorrect though, as the server can’t possibly be reachable on the public domain name. Where it actually listens is easily found with netstat.
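Something along these lines does the trick (2368 being Ghost’s default port, that’s the number to look for):

fmohr@vm:~/ghost$ sudo netstat -tlnp | grep node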


And with another port forwarding rule for my VM, this should be accessible as well.
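In case your VM runs in VirtualBox (any hypervisor with NAT port forwarding will do, this is just a sketch), such a rule is a one-liner. The VM name is a placeholder, and if Ghost only binds to 127.0.0.1 inside the guest, the host entry in config.js needs to be widened to 0.0.0.0 first:

VBoxManage controlvm "ghost-vm" natpf1 "ghost,tcp,127.0.0.1,2368,,2368"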


And there it is, the old blog, up and running again. It runs so smoothly that I almost want to keep using Ghost. But that would mean installing updates and maintaining my own server, and I’ve grown lazy, apparently.


That’s it for today; how I will move this content to Blogger is a story for another time. 🙂

WBP#1 – All New Weekly Bucket Post

Since I can’t bring myself to write full-blown blog posts on a regular basis, let’s try something else. I will attempt to publish a short blog post every Friday about all the small things I encountered during the week.

PFX Certificates
After finally receiving confirmation that our code signing certificate has been validated, I got a link to “download” it. And by download they actually mean: import it into your browser’s certificate store. From there you can export it and set a password to encrypt the file if you want to (you want to!).

So far so good, but our devs said they need a .pfx file, not the .p12 I had given them.
Thanks to AkbarAhmed.com this proved not to be a problem at all.

1.) A .p12 and .pfx are the exact same binary format, although the extension differs.
2.) Based on #1, all you have to do is change the file extension.
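In practice that means a copy or rename is all it takes, and openssl can confirm that the container is intact (file names are just examples):

cp codesign.p12 codesign.pfx
# prompts for the import password and verifies the PKCS#12 structure
openssl pkcs12 -in codesign.pfx -noout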

DNS updates
Not really something new I learned, but rather something I already knew and had forgotten in the meantime: how to request a new DHCP lease on an Ubuntu server.

$ sudo dhclient -v eth0 -r
# this kills your connection... do not attempt over SSH ;)
$ sudo dhclient -v eth0
$ ip a

Since we’re at Linux 101 already, let’s quickly do the other one as well.

Change hostname – Sure, this one is easy. Nevertheless, I got stuck for longer than I would like to admit. After editing the files /etc/hosts and /etc/hostname, I wanted to apply the settings without rebooting the server. However, that’s where I ran into this little problem.

$ sudo /etc/init.d/hostname restart
[sudo] password for user: 
sudo: /etc/init.d/hostname: command not found

I could’ve sworn that this was the correct way to do this. Turns out, it’s this now.

$ sudo hostname -F /etc/hostname
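On newer, systemd-based Ubuntu releases, hostnamectl is probably the cleaner way to do the same thing (a side note rather than what I used here, and the name is just an example):

sudo hostnamectl set-hostname webserver01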

One more thing I noticed while setting the new hostname was this little gem, which returns the IP registered to the hostname.

$ hostname -I
10.0.3.162

If I think back how many times I could’ve used that one in scripts in the past…

LXC Autostart – LXC or LinuX Containers are a great alternative to full virtualization, especially when you’re running things on a vserver like I do. However, when you have to reboot your vserver due to kernel updates, your containers either don’t start at all or all start simultaneously, which is a problem if you have services like DNS/DHCP running in one of them.

The solution is a little tool called lxc-autostart, which lets you decide in which order to start your LXCs. Here are a few config lines, which you can add to your container’s config file in /var/lib/lxc/containername/config.

lxc.start.auto = 1		# 0=no, 1=yes
lxc.start.delay = 0		# seconds to wait before starting (group specific)
lxc.start.order = 500	# order, higher is first
lxc.group = dns			# group,groups

After you’ve configured every host you want to autostart, you can start, stop or reboot them like this:

sudo lxc-autostart [-r/-s] -g "dns,web,db"
# -r=reboot, -s=shutdown, without is boot

Pay attention to the groups: all containers in group dns are started first. Within that group, you can define the order (e.g. 500, 450, 400). If you set a start.delay value for one host, all groups and hosts that follow will also wait for that amount of time before starting.

#host 1
lxc.start.delay = 0		# start immediately
lxc.start.order = 500	# first host
lxc.group = dns			# first group
#host 2
lxc.start.delay = 20	# on turn, wait 20 seconds, then start
lxc.start.order = 450	# second server
lxc.group = dns			# first group
#host 3
lxc.start.delay = 0		# start after host 2 (after 20 seconds + boot)
lxc.start.order = 400	# third server
lxc.group = dns			# first group
#host 4
lxc.start.delay = 0		# start immediately after group dns is done
lxc.start.order = 500	# first in group web
lxc.group = web			# second group

CSP – I have spent quite a bit of my time with the Content-Security-Policy header. So far I have always found a way to manage without “unsafe-inline” (allowing inline CSS or JS – BAD!), but when I tried to get Disqus working with my CSP I ran into an interesting problem. First off, I only used the unsafe-inline option for testing purposes. This will never get onto my production systems, NEVER!

Here is what I did, in case you want to try it yourself.

index.html

<html>
  <head>
      <!-- CSP Without "style-src unsafe-inline" reports CSP violations (of course!)
      <meta http-equiv="Content-Security-Policy" content="default-src 'self'; script-src 'self' a.disquscdn.com/embed.js hashtagsecurity.disqus.com; img-src 'self' referrer.disqus.com/juggler/stat.gif a.disquscdn.com/next/assets/img/; frame-src 'self' disqus.com/embed/comments/;" />-->
      <meta http-equiv="Content-Security-Policy" content="default-src 'self'; script-src 'self' a.disquscdn.com/embed.js hashtagsecurity.disqus.com; img-src 'self' referrer.disqus.com/juggler/stat.gif a.disquscdn.com/next/assets/img/; frame-src 'self' disqus.com/embed/comments/; style-src 'self' unsafe-inline;" />
  </head>
  <body>
      <h1>CSP with DISQUS</h1>
      <div id="disqus_thread"></div>
      <script type="text/javascript" src="disqus.js"></script>
      <noscript>Please enable JavaScript to view the <a href="http://disqus.com/?ref_noscript">comments powered by Disqus.</a></noscript>
      <a href="http://disqus.com" class="dsq-brlink">blog comments powered by <span class="logo-disqus">Disqus</span></a>
  </body>
</html>

disqus.js

/* * * CONFIGURATION VARIABLES: EDIT BEFORE PASTING INTO YOUR WEBPAGE * * */
var disqus_shortname = 'hashtagsecurity'; // required: replace example with your forum shortname

/* * * DON'T EDIT BELOW THIS LINE * * */
(function() {
    var dsq = document.createElement('script'); dsq.type = 'text/javascript'; dsq.async = true;
    dsq.src = '//' + disqus_shortname + '.disqus.com/embed.js';
    (document.getElementsByTagName('head')[0] || document.getElementsByTagName('body')[0]).appendChild(dsq);
})();

And finally, running a webserver in the directory that holds both those files

python -m SimpleHTTPServer 8080

What I found was that it doesn’t really matter whether I enable “style-src unsafe-inline” or not, since the inline style is in the embed.js file which is pulled from disquscdn.com. It appears that in that case, CSP still sees it as a violation.

If anyone can explain to me exactly why this is, I’d be very happy to hear about it.

Speaking of CSP, I dug out the PHP-based CSP violation logger I used for debugging in the past. Just make sure to move the reports file to a private directory. You don’t want everyone reading your CSP reports.

Local Web Servers – Since I was playing around with CSP rules and violations, I found myself in need of a web server. Installing Apache2 or NGINX just to deliver a handful of pages to myself seemed a bit of an overkill, so I looked towards minimal web servers.

My favourite, since it’s installed by default on most Linux systems, is definitely Python’s built-in simple server:

python -m SimpleHTTPServer 8080
Serving HTTP on 0.0.0.0 port 8080 ...

I used a PHP-based CSP reporting tool to collect more info about found violations; however, the integrated HTTP server in Python doesn’t support PHP. Luckily, PHP does 🙂

sudo apt-get install php5-cli
php5 -S 127.0.0.1:8000 -t /path/to/docs/

Keepass Password Generation Profiles – Password rules for password generators can be very helpful. KeePass offers an option to customize the way passwords are generated, which is great, as the default policies are really bad!

Luckily, dF. over at stackoverflow.com has a nice list of chars for that.

For a 12-char password policy in KeePass, this would look like this:

Whitelist chars (not all of them, just an example!)
[\!\#\%\+\2\3\4\5\6\7\8\9\:\=\?\@\A\B\C\D\E\F\G\H\J\K\L\M\N\P\R\S]{12}

If you prefer a “blacklist chars” approach, you can do it like this:

Bad Chars: il10o8B3Evu![]{}
PW Policy: [dAs^\i^\l^\1^\0^\o^\8^\B^\3^\E^\v^\u^!^\[^\]^\{^\}]{12}
Char Rule: == [d]igits, mixed [A]lpha, [s]pecial, [^] except, [\] escape char

More info on Keepass generation rules can be found here.

Keepass and KeeFox – Not much to say here except that it has gotten really easy lately to integrate KeePass into Firefox. Just follow the steps on stackoverflow.

Web Password Manager – I’m always on the lookout for web-based password managers that can be hosted on-site, especially if they’re open source or, better yet, free (as in speech).

I haven’t had the time yet to take a closer look, but at first glance RatticDB seems promising, although it’s at an early stage. I will write more about it once I have spent more time with it – assuming it proves useful.

Flask Blueprint Templates – suck! I’m sorry but I can’t say it any other way.
I like the idea of how blueprints (sort of plugins in Python’s web framework Flask) access templates. If I have an app with its index.html lying in app/templates/index.html, and a sub app or plugin within said app that has its own templates folder, like app/subapp/templates/, everything is great as long as I don’t have any name conflicts in templates.

Accessing /app/templates/index.html within the subapp
render_template("index.html")

Accessing /app/subapp/templates/subapp.html within the subapp
render_template("subapp.html")

Accessing /app/subapp/templates/index.html within the subapp
- not possible - 

To be able to access the subapp’s index.html file, you would have to either rename it, or build a structure like this:

/app/subapp/templates/subapp/index.html
Blueprint("subapp", __name__, template_folder="templates")
render_template("subapp/index.html")

It works, but it’s annoying as hell. I can understand the use case if you want to frequently access the main app’s templates in subapps, but I think there should be an option to limit the subapp to its own template folder.

HashtagSecurity will be back…

To those of you who actually follow my blog and have noticed that it’s become rather quiet recently – I’m sorry. The reason for this is that I’ve put all my efforts into the new company blog, which is exactly where all my new posts went.

I’ve started hashtagsecurity.com to write about infosec topics and to openly document some of the things I fought with over long nights, be it because they just wouldn’t work or because I just couldn’t see it – sleep deprived and all. It is a project that’s very dear to me and I really want to keep it going. But I started something new with LastBreach and, like all newborns, it requires a lot more attention than its older brother.

So even though I won’t be posting anything here for a while, that doesn’t mean that this place is dead – It’s just sleeping…

The more LastBreach grows, the more free time I will (hopefully) get back, which I can then, once again, dedicate to this blog. For now, lastbreach.com is where you will find my newest posts, and for those of you who already follow me on Twitter, please continue to do so as I’m still active there.

But before I lock this place up – I made a promise in one of the posts a few months (again, sorry) back regarding upcoming posts on Lynis and its use for pentesters. I’m happy to say that I will be able to continue this series. So head over to lastbreach.com and stay tuned for more Lynis goodness, amongst other things, and thanks for reading my blog(s).

See you!
Frederic

Server Patching with unattended-upgrades

I can’t believe I haven’t written about this yet. Unattended upgrades are a great way to keep your servers up to date, but there are a few things that didn’t work out of the box, so here is a summary of how my patch process is set up.

Why unattended-upgrades?

To be honest, running upgrades unattended can cause bad feelings with your colleagues if stuff breaks because of it. And it most likely will if you’re not doing it right. Unattended-upgrades is a feature available in Ubuntu, Debian and most likely other Linux derivatives, which allows you to control which updates should be installed and when you want to get notified about it.

From a security standpoint, unattended-upgrades is a no-brainer: you want to have the latest patches installed, but you don’t want to, and can’t, do it manually unless you have nearly unlimited manpower or really nothing better to do, which is pretty much never the case.

From the classic admin “keep things running” approach, doing any changes whatsoever is not really something you’d want, much less so doing them unattended, meaning “let the system do as it wants”.

This often leads to a discussion between security folks and administrators about whether or not this can actually work. The problem, as so often in security, is that change is needed, but for change not to end in disaster you need to know what you’re doing and, more importantly, you need to know how your network and your servers behave.

If you were to enable automatic updates on a Linux system without any knowledge of what that system is doing, it’s probably not going to end well.

So what then?

In order to get unattended-upgrades running without messing up your stuff, you need to

  • understand what the host is doing
  • implement the least needed upgrade process (usually security patches only)
  • make test runs before you let it loose
  • have backups ready, something you should always have anyway!

There are two main problems that I have experienced so far with unattended-upgrades and that you should be aware of.

3rd party applications need specific package versions

The first one is easy to fix but a PITA to find out about. Usually you find out by crashing the service, trying to figure out what caused the problem, and discovering that package x has to be version y but the most recent version in your distro’s repository is newer than that.

Since you most likely can’t fix the application’s dependencies, there is only one way – set the package on hold (see the example right after this list). Distros using the apt package manager allow you to set the hold flag for packages, which means that they won’t be upgraded along with the others. Be aware that this should only be done if no other way is possible, as it can have side effects such as

  • other applications can’t be updated because they require x to be updated with them
  • which sometimes leads to apt dependency f-ups
  • It’s possible that security patches won’t be installed either
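Putting a package on hold, and releasing it again later, is quick (substitute the package name, obviously):

# keep this package at its current version, upgrades will skip it
sudo apt-mark hold packagename
# list current holds and undo one
apt-mark showhold
sudo apt-mark unhold packagename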

Service restarts mess things up

This was really my own fault, though it took some time to fix. Some services, such as MySQL, can be tweaked at runtime. If an upgrade process restarts the service after updating the binaries, these runtime tweaks are lost. This can be a problem if

  • you don’t know about the restart
  • you didn’t document your tweaks properly.

Another thing that can happen – it always can, no matter what you do – is that a service doesn’t restart properly. One reason, although it hasn’t happened to me yet, is that a service won’t restart because the config file still uses a deprecated feature that was finally kicked out for good with this upgrade. If that happens, you simply were too lazy to switch to the new one during the transition period when both the new and the old feature were still supported. One good way of avoiding this is by actually using your log files.

But monitoring, analyzing and getting the goods out of logs is a post for another day.

Needless to say, backups, documentation and the ability to restart your services without messing things up are things you should have already; they are not requirements that only come with unattended-upgrades.

Installing unattended-upgrades

Installing the packages is as easy as it gets. Just run the following commands and you’re good to go.

sudo apt-get install update-notifier-common unattended-upgrades

The update-notifier-common package is an optional addition, that will create the file /var/run/reboot-required, which tells you that the system requires a reboot to apply the updated or newly installed packages, and the file /var/run/reboot-required.pkgs which tells you which packages require the reboot.

You can even add checks to your monitoring system or your message of the day (motd) to get notified about uninstalled packages or required reboots.

69 packages can be updated.
0 updates are security updates.

You can get these with the following update-motd config files

$ cat /etc/update-motd.d/90-updates-available 
#!/bin/sh
if [ -x /usr/lib/update-notifier/update-motd-updates-available ]; then
    exec /usr/lib/update-notifier/update-motd-updates-available
fi

$ cat /etc/update-motd.d/91-release-upgrade 
#!/bin/sh
# if the current release is under development there won't be a new one
if [ "$(lsb_release -sd | cut -d' ' -f4)" = "(development" ]; then
    exit 0
fi
if [ -x /usr/lib/ubuntu-release-upgrader/release-upgrade-motd ]; then
    exec /usr/lib/ubuntu-release-upgrader/release-upgrade-motd
fi

$ cat /etc/update-motd.d/98-reboot-required 
#!/bin/sh
if [ -x /usr/lib/update-notifier/update-motd-reboot-required ]; then
    exec /usr/lib/update-notifier/update-motd-reboot-required
fi

They should be included in Ubuntu by default and are updated on login by the pam_motd module. If you change or add config files, you can either log out and back in for the changes to take effect, or install the update-motd package and run it.
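As for the monitoring side mentioned above, a minimal check based on the same files could look something like this sketch; adjust the output to whatever format your monitoring system expects:

#!/bin/sh
# warn if the host is waiting for a reboot after package upgrades
if [ -f /var/run/reboot-required ]; then
    echo "WARNING: reboot required ($(cat /var/run/reboot-required.pkgs 2>/dev/null | tr '\n' ' '))"
    exit 1
fi
echo "OK: no reboot required"
exit 0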

Configuring unattended-upgrades

To give you a quick overview, this is what we want the system to do to stay updated.

  • Fetch the newest package information (apt-get update)
  • Install security updates only
  • Notify us via email, at first always, later only on error
  • Do not reboot automatically
  • Do not overwrite config files

Let’s take a look at how to set these things up. Unattended-upgrades has three config files that are of interest to us.

/etc/apt/apt.conf.d/20auto-upgrades

This config file is pretty simple and straight forward.

APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";

With this we enable point one and, partly, two of our bullet list. But we haven’t configured point two yet. We enabled the installation of upgrades but haven’t defined which upgrades we would like to have.

/etc/apt/apt.conf.d/50unattended-upgrades

Here is where all the magic happens. The following shows only the options that I am using; the default config offers more than that though, so you might want to take a look at it.

// Automatically upgrade security packages from these (origin:archive) pairs
// Additional options are "-updates", "-proposed" and "-backports"
Unattended-Upgrade::Allowed-Origins {
    "${distro_id}:${distro_codename}-security";
};

Unattended-Upgrade::MinimalSteps "true";

// Send report email to this address, 'mailx' must be installed.
Unattended-Upgrade::Mail "spam@hashtagsecurity.com";

// Set this value to "true" to get emails only on errors.
Unattended-Upgrade::MailOnlyOnError "true";

// Do automatic removal of new unused dependencies after the upgrade (equivalent to apt-get autoremove)
Unattended-Upgrade::Remove-Unused-Dependencies "true";

// Automatically reboot *WITHOUT CONFIRMATION* if the file /var/run/reboot-required is found after the upgrade 
// Unattended-Upgrade::Automatic-Reboot "true";

If you are really brave, you can enable the last option as well. It worked quite well for me on some servers, but you have to make sure that all services are started properly after a reboot. I’ve read somewhere that unattended-upgrades can be timed to avoid redundant servers rebooting at the same time, but I haven’t looked into it so far.

For more information on the configuration, check out the official Ubuntu documentation here and here.

CRON

If you take a look at the file /etc/cron.daily/apt, you will see that unattended-upgrades is already configured to run regularly.

#!/bin/sh
#set -e
#
# This file understands the following apt configuration variables:
# Values here are the default.
# Create /etc/apt/apt.conf.d/02periodic file to set your preference.

We created or modified the files in /etc/apt/apt.conf.d/ and thus configured the /etc/cron.daily/apt process to suit our needs, so there is no need to add a new cron job for it.

According to the Debian documentation, the 02periodic file is an alternative config file for the 20auto-upgrades, so we don’t need it.

Timing

The only problem I see with this is that redundant servers might run updates, or possibly even reboot themselves, at the same time. The way it’s set up now, the apt cron job is executed once daily.

Cron daily runs all scripts in /etc/cron.daily/ once a day, the start time for this is defined in /etc/crontab.

# cat /etc/crontab
SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin

# m h dom mon dow user  command
17 *    * * *   root    cd / && run-parts --report /etc/cron.hourly
25 6    * * *   root    test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
47 6    * * 7   root    test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly )
52 6    1 * *   root    test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly )

As we can see, cron daily starts at 06:25 AM every day. In my case, apt is the first command to be executed as can be seen by the alphabetical order of scripts in /etc/cron.daily/.

# ls -1 /etc/cron.daily/
apt
bsdmainutils
creds
dpkg
logrotate
man-db
quota.dpkg-dist
sysklogd

This might be different on your system, depending on the cron jobs you have installed but it should be among the first to run. If not and you want to be sure, just rename it to 01_apt.

The reason why I care about the order of execution is that commands running before apt could delay its execution. We can easily change the start time of cron.daily for redundant systems, but if the first system had a huge delay, they might still end up running at the same time. The chance is slim, but why take chances if you can be sure?

Here is an example for two redundant web servers.

web01: # grep daily /etc/crontab
25 6    * * *   root    test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )

web02: # grep daily /etc/crontab
25 8    * * *   root    test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )

The time difference of two hours should be more than enough; keep in mind that too much of a time difference might result in other problems, such as two servers being out of sync because dependencies changed on one system but haven’t been updated on the other.

Logs

If you want to check on your recently applied updates, take a look at the following log files.

  • /var/log/unattended-upgrades/unattended-upgrades.log
  • /var/log/unattended-upgrades/*
  • /var/log/dpkg.log

Config files change!

Here is one more add-on that I stumbled across recently. If you have done a few upgrades with apt-get, you should have seen this prompt at least once.


While this is a great thing if you’re doing the upgrade manually, it’s kind of a problem when you install them automatically.

The image above is actually a screenshot from a mail my server sent me, after the upgrade process stopped because of this dialog. If this happens, the updates are not installed completely, and you will receive this mail daily until you fix it!

So let’s fix it. Open the config file /etc/apt/apt.conf.d/local and add the following lines.

# keep old configs on upgrade, move new versions to <file>.dpkg-dist
# e.g. /etc/vim/vimrc and /etc/vim/vimrc.dpkg-dist
Dpkg::Options {
   "--force-confdef";
   "--force-confold";
}

This will tell unattended-upgrades to keep the original config files and move the new versions to .dpkg-dist, so you can inspect them at a later point.
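Reviewing such a held-back change later is then just a diff away; vimrc here is simply the example from the comment above:

# compare the config you kept with the maintainer's new version
diff -u /etc/vim/vimrc /etc/vim/vimrc.dpkg-dist
# once reviewed and merged, get rid of the leftover
sudo rm /etc/vim/vimrc.dpkg-dist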

What I was missing, though, is a notification by mail when new .dpkg-dist files have been created. To get that information, I whipped up this small script. If you know a better way to solve this, please let me know. In the meantime, this will get the job done.

#!/bin/bash
#
# This script will search for .dpkg-dist files and notify you if any are found
#
REPORT_MAIL="upgrades@yourmail.com"


find / -name "*.dpkg-dist" 2>/dev/null > /var/log/unattended-upgrades/unattended-upgrades-config-diff.log
confcount=$(wc -l /var/log/unattended-upgrades/unattended-upgrades-config-diff.log | awk '{print $1}')

if [ "$confcount" -ne "0" ];
then
	echo -e "Subject: New Held-Back Config File Changes\nFor the following configs, changes have been held back during unattended-upgrade. \nPlease review them manually and delete the dpkg-dist file after you're done.\n\n $(cat /var/log/unattended-upgrades/unattended-upgrades-config-diff.log)\n\nRegards, \n$(hostname -f)" | sendmail $REPORT_MAIL 
fi

Just save this script somewhere on your server, make it executable and add it to your daily cron jobs. Make sure to change the email address!
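For example, assuming you saved it as check-dpkg-dist.sh (note that run-parts skips file names containing dots, so the installed copy loses the extension):

sudo install -m 0755 check-dpkg-dist.sh /etc/cron.daily/check-dpkg-dist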

Also, make sure the script can write to the /var/log/unattended-upgrades/ folder, otherwise it will fail.

Summary

Unattended upgrades are not something you “just enable”. They have to be introduced into your environment carefully but it’s time well spent as they can be quite helpful later on.

Not only is security increased, but you save a lot of time when you finally have no other choice than to move on to the next distro release. Believe me, few things are more painful than having to perform full dist upgrades (e.g. 12.04 => 14.04) on way outdated production servers.

Being more secure not only means that the risk of losing money is smaller; it also means for admins that the risk of running around in a panic, trying to figure out how it happened and what can be done to stop it, is smaller. That’s something you should keep in mind if you’re an admin, or keep handy as an argument if you’re working with admins.

Lynis Enterprise – The 2nd Encounter

This time we will dive into compliance scans and take a look at how multiple hosts are displayed. I also want to find out why I am at risk of data loss – that’s right, I still don’t know!

This round, I’ll take a look at the documentation, which can be found here.

RTFM if you can!

In order to get different results, I wanted to add another host. But because around nine days had gone by since I last touched Lynis, I couldn’t remember how exactly the Enterprise key was added to a new host.

After searching and searching in the documentation, I found nothing but the information that the key is in the configuration panel. Great, but how do I add it? What’s the secret option I need? I did check the config file, but there was no license key option to be found either.

Then I remembered, that I ran Lynis as root from the /root/ folder, so I checked the config file there. And to no surprise, there it was.

I should’ve checked this site though, since it’s all there…

Anyway, putting this information in the documentation and a link to it in the configuration page would help a ton. Speaking of documentation, there are a few other things that I came across, such as the icon description being a little off

and topics that are linked in the documentation or application but lead to nowhere.

Parts of the documentation just don’t seem to exist at all, while others at least have the decency to tell you so!

Adding other hosts

But enough about the docs. I wanted to add three more hosts so I have two 14.04 and two 12.04 machines, which I could then scan with either the --check-all or the --pentest switch to get an idea of how they impact the scan results. Also, this should give us a bit more than just “21 Suggestions” and might be more representative of what version is actually being used out there, with companies not always running the latest shit and all.

I want to see criticals and a red cross at compliance!

To get an overview, here are the hosts with their OS versions.

test-ossec-lynis01		Ubuntu 14.04	audit system --check-all
test-ossec-lynis02		Ubuntu 14.04	audit system --pentest
test-ossec-lynis03		Ubuntu 12.04	audit system --check-all
test-ossec-lynis04		Ubuntu 12.04	audit system --pentest

After copying the Lynis folder with the enterprise key to the other hosts, I ran the commands to add the hosts to the enterprise web UI.

One thing I noticed is that the Ubuntu 12.04 hosts didn’t show up in the UI after the scan completed. The culprit was that the curl package wasn’t installed on these hosts.

After running apt-get install curl -y and running the scans again, they were listed with the other hosts.

Two things stick out here. One is that the Ubuntu 12.04 version string is empty. The other is the bandage sign in the Lynis version column. Hovering over it says “This version is outdated and needs an update”. This being in the Lynis version column, I assume it’s referring to the Lynis binary needing an update, which is strange since I rsynced the folder from the test-ossec-lynis01 host, so it should be the same.

test-ossec-lynis03.prod.lan:~/lynis $ sudo ./lynis --check-update
 == Lynis ==

  Version         : 2.1.0
  Status          : Up-to-date
  Release date    : 16 April 2015
  Update location : https://cisofy.com


Copyright 2007-2015 - CISOfy, https://cisofy.com

Yup, it’s the current version alright. Ideas? Ignore, for now at least.
Let’s check out the findings instead.

According to the dashboard, we now are at risk of system intrusion, which is never a good thing!

Since I took a quick look after the host test-ossec-lynis03 was added, I know that that’s where the problem was first found.

So what do we have here? Great, “one or more vulnerable packages”, I wonder which packages are vulnerable.

Wait, what? That is one way to make a great design around no additional information whatsoever. I understand that I should update my system, but as a technical person I would love to be able to understand what exactly the threat is and where it has its source. Maybe this host is running under certain circumstances that make upgrades hard. In this case, I might want to check if an upgrade is really necessary.

Note: I’m not saying that I’m a fan of the above scenario, but it does happen sometimes – unfortunately!

Since I can’t do much more than upgrade my server, let’s just continue with the other hosts.

The older hosts have the highest risk rating, no surprise there. But they didn’t introduce new risks, which is interesting. I know that 12.04 still receives security updates, but I’m pretty sure that they’re not running the latest versions.

Fun fact, some problems solve themselves, such as the “Old Lynis version” one.

Let’s check out one of the 12.04 hosts and see what they have to offer. Apparently there isn’t much difference between the --check-all and --pentest checks, since the results are the same, at least when it comes to the number of findings.

I won’t show the rest of the results as it’s pretty much the same as the first scan results. Obviously, we have the “vulnerable packages” finding and again I would love to know which packages are vulnerable, but it just shows the same page as before.

Just out of curiosity, let’s check real quick which packages are listed for security updates.

$ grep security /etc/apt/sources.list |sudo tee /tmp/security-updates-only.list
$ sudo apt-get dist-upgrade -o Dir::Etc::SourceList=/tmp/security-updates-only.list

As you can see, there is definitely a difference between the two hosts, even if it’s not as substantial as I would have thought. Still though, in a production environment, hosts are not all the same, and having information on what exactly causes a problem goes a long way towards improving things.

They should all be updated, since these are all security updates, but I would still love to know which of them is responsible for the “system intrusion” risk.

Compliance

I’m still marked compliant while running no compliance checks at all, so let’s fix that next.

It seems that policies are changed in the web interface, not via CLI switches. This raises the question of how the check is initiated, but let’s try to run a scan with compliance first.

I think “High Secure” should be enough, but ultimately I want to check all of the predefined policies.

To check for compliance, I need to run the rule checker.

So my system is not compliant after all. Since it was that easy, I ran a quick check over all rule sets.

Turns out it doesn’t make that big a difference. Let’s examine the findings one by one.

Firewall is pretty obvious: it’s installed and running. Since this host is actually just a Linux container (LXC), the detected firewall is the host’s iptables rule set.

“Time synchronization is configured” is more of a piece of general information than anything else. Is the configuration compliant with the defined rule sets? Should it not be configured? Why isn’t it marked with a cross or checkmark?

Clicking on the link just shows the rule definition but no further information on the findings.

For the record, neither NTPd nor timed are running on the host, so the checks have probably failed. If it’s not a compliance issue, why is it listed at all?

Malware includes a check for whether an anti-malware tool (clamd) is installed and, apparently, whether its configuration is protected.

This is probably a variable that Lynis sets after a certain check. What I’m wondering is how this could have gone off, since clamd isn’t even installed.

Limited access to compiler seems to check if a compiler is installed!?

Compliant: check! I’m still compliant, at least according to the dashboard and the hosts overview. Even after running Lynis again – no change!

So what exactly do compliance checks look for?

$ sudo apt-get install clamav
$ sudo /etc/init.d/clamav-freshclam start
$ ps aux |grep clam
clamav    1915  5.0  0.0  52364  2908 ?        Ss   15:47   0:06 /usr/bin/freshclam -d --quiet
$ sudo ./lynis audit system -c --quick --upload

Seems like it worked!

The compliance check shows a red cross; let’s see what the UI has to say about the updated status.

What? OK, this doesn’t make sense.

There is no change in the result, whatsoever. And just for the record, here are the config permissions.

test-ossec-lynis03.prod.lan:~/lynis $ ls -lh /etc/clamav/
-rw-r--r--  1 root   root 2.0K clamd.conf
-r--r--r--  1 clamav adm   717 freshclam.conf
drwxr-xr-x  2 root   root 4.0K onerrorexecute.d
drwxr-xr-x  2 root   root 4.0K onupdateexecute.d

Only root is able to write and the config belongs to clamav. Seems reasonable to me.

Taking a closer look at the policies, I found that some of them are empty and don’t have any rules set. That would explain why so few results showed up in the overview page.

While this makes sense, since it’s hard to check for, let’s say, every possible backup service that could be in place, it’s also kind of misleading since the policies are named after “HIPAA”, “ISO27x” and “PCI-DSS”. Anyone getting a checkmark on these should check the rule tables before the auditor shows up!

I’m not an expert in compliance, but I’m pretty sure that each of them has more than 8 things that should be done properly before you can pat yourself on the back!

Summary

So what have we learned?

  • The documentation is lacking in some parts
  • Results could be more detailed, especially regarding the source of the problem
  • I still don’t know what puts me at risk of data loss
  • Now I also don’t know which package puts me at risk of system intrusion
  • I also don’t know how likely the exploitation of said risks is (link to CVSS?)
  • I haven’t seen a “mark as false positive” or “ignore because it can’t be fixed” option.

All in all, there is a lot of time that has to go into writing rules before Lynis Enterprise can really be used for compliance checks. On one hand I love it, as it forces people to create checks that fit their environment; on the other hand it would be great to have an ISO27001 rule set premade for some distros – say Ubuntu – to run a quick check and see how the host is holding up.

What really stopped me from “digging deeper” is that I wasn’t able to figure out how the checks actually worked. I get that “malware running” is marked as compliant if the “process running” variable contains “clamd”, but what is Lynis, the CLI tool, actually checking for and, more importantly, how can I check the content of “process running” myself? I know it’s an OSS tool and I could look at the source code, but that’s not how I want to use my time – and your boss wouldn’t want you to use yours that way either. Especially after he paid for the license.

What I’m getting at is that, as a newcomer to Lynis, it’s sometimes hard to understand what exactly is being done in the background and how some results came to be.

I’m looking forward to my next encounter, as I still have to

  • write my own compliance rules
  • check out historical data, after I fixed and unfixed stuff
  • and most of all, find out if LE can actually be of use to the average pentester.

Also, what’s up with that?

I did everything imaginable to get this to show a red cross – without any luck!

Lynis Enterprise – The 1st Encounter

I recently got my hands on a trial of Lynis Enterprise, the commercial SaaS version of the open source Linux system auditing software Lynis. In exchange I promised to write about my experience here and share some feedback with the developers.

I could have spent some time with the tool and written about it afterwards, but instead I decided to write down my thoughts as I stumble across things. That means that some questions might get answered later down the road, or that I write stuff that seems stupid, but in return this post will be closer to how I really experienced my first encounter with Lynis. My thoughts? Unfiltered? On (digital) paper? This will get weird – you have been warned!

I did run the open source version once on one of my servers and scrolled through the results, so to keep this fair, this is what I knew before I started.

  • Lynis is in apt-get, but (of course) not in its most current version
  • Out of the box, it doesn’t run with normal user privileges (that can be fixed though)
  • Lynis can simply be downloaded from cisofy.com and installed
  • Lynis ./include/const folder must belong to root if it is run with root privileges
  • Lynis results look like this, but in color.

Now that we’re on equal footing, let’s get started.

Setup

The first thing I noticed after I logged into the web console was this.

Which is not really a problem, but even though I don’t have anything in production at that point, I still immediately asked myself “When exactly will it expire?”, followed by “And what type of license do I have?”. The latter is a result of me not paying for the trial; otherwise I would have probably known about the type of license I ordered. Or maybe I wouldn’t have. Who knows how many licenses I have to manage.

I almost instantly dropped the questions, as all I wanted to do was set up my first server to send data to LE. Trying to do so, I took a closer look at the overview page, which looks like this.

“OK, so that’s my username, I have a trial account, my email, no messages, no subscriptions,… hm, but how do I get started. What’s that down there at the bottom, Control Panel, System, Compliance… No wait, that’s just the change log. Maybe in the navigation panel? Ah, there it is, in the box on the right. First time user.”

Don’t ask me why it took me so long to find this, but somehow I kept missing it. It could just be me, but I feel that if you log in for the first time, that box should present itself a little bit more.

Now then, let’s go to the systems page to add the first host…

Thoughts?

Hm, adjust config and run with --upload switch.
Looks easy enough. But what's that -k option? 
For self signed certificates? 

Maybe that's for the on-premise version.

Yeah probably, but for a moment I felt unsure about this whole thing... 
Sending my data to the cloud, to an unsigned cert?

Nah, it'll be fine!

...editing config
...copy paste command
sudo lynis audit system --quick --upload

Damn, I ran the outdated apt-get version...
sudo ./lynis audit system --quick --upload

Dashboard

The Dashboard showed the one host in a “Technical” overview, which I assume includes every option the Dashboard has to offer, while the other two “Business” and “Operational” only showed data regarding those areas.
What caught my eye though was that “Data Loss” was listed under “Technical Risk”, while at the same time, right next to it a nice, all green circle stated “All systems are compliant”. That strikes me as odd, although I never really thought that being compliant means being safe. Still, it seems strange seeing it on a dashboard for some reason.

But what else is there? One system, zero outdated. Zero systems or Lynis clients? What’s that Average rating. Average compared to? And based on? No events.
But I thought I had a risk of data loss. By the way, where can I see what that’s all about exactly? Probably by clicking on the host link. But first I’ll check out the tags.

Hm, white font color on white/gray background. For the record, the tags are “firewall”, “ipv6”, “iptables” and “unknown”. The last one fits my question perfectly. “What are these tags for and why is data loss not one of them?”

I know what tags are for – in general, but why these, and why not data loss. That seems to be the main risk that was identified on this host. Speaking of data loss, let’s look for more detail on that.
Clicking on the host link brought me to this page.

What have we got here: OS information, Network information and Other. IPv6. Does this mean IPv6 is enabled, or that it has been deemed securely configured?
There we have the compliance check again, and there is a bit more information on the average rating. So it’s average risk rating and it’s a comparison to the same OS and over all scanned systems.

I assume at this point, that this only includes my systems, since there is only one. The tags are readable this time. Scrolling down…

OK, so file integrity was gray because no checks have been performed. Is there a plugin for that or what do I have to do to get them. Maybe just another switch?
I think at some point I need to dig through the man page, but for now let’s just keep wandering around.

Wait, I am compliant because I have no policies assigned? That’s an easy way out… and a bit confusing to be honest. Why wasn’t this gray like the file integrity checks?

Networking doesn’t say anything about being secure, so I guess the green checkmark is about IPv6 being enabled. The same goes for the firewall audit. What else is there?
No expired certificates, one warning about a misconfiguration in /etc/resolv.conf and a bunch of suggestions to harden the host. These look eerily similar to the compliance checks from Nessus, although they are far fewer in number.

The only thing that really goes against my personal recommendation is enabling password aging limits. I simply don’t believe that regularly changing passwords increases security, but that’s a discussion for another time and place.

Last but not least, there are a few manual tasks and an empty scan history.

System Overview

Move along, nothing to see here!
Well, that’s not entirely true. While there is only one host listed here, it’s easy to see why this page might become useful later on.

21 Suggestions, 0 Warnings, the host version and name, last updated, Lynis version and of course compliance trickery. With multiple hosts, this will definitely come in handy, although I’m missing a “sort by x” feature.
Show hidden controls? Of course!

Bummer.

Compliance

Right, we haven’t done this one yet. We got the checkmark though, that’s fine, right?

Hm, but High Secure does have a nice ring to it and I wanna try the others as well. Maybe I’ll even create a custom one – for Cyber. But before that, let’s finish the round through the navigation panel.

File Integrity

This page doesn’t give much more information than the gray area in the dashboard did. I still wonder why Lynis didn’t run file integrity checks, or what I have to do in order to get them running. This would be a good place for a quick howto. hint

Reports

Ignoring the Improvement Plan page, which is more of a documentation page than a feature, brought me to reports.

Systems and applicable controls is basically just a list of hosts with their associated suggestions. It’s nice to have it all in one place, but not worth yet another giant screenshot.
My systems overview and Systems without compliance policy are fairly obvious. It’s the same thing as the Systems Overview page, with or without compliance policies. There is but one difference.

Needless to say, I instantly copied the report data into LibreOffice Calc. The result looks… well, bad.

I know, it’s too small to read, but that’s the overall structure of the report. Something tells me that won’t be used much by anyone – unless I just did it wrong, or LibreOffice Calc is nobody’s favorite spreadsheet software. Anyway, an export-to-spreadsheet/CSV/PDF function would be swell.

Configuration

The final page in the main navigation is configuration, which didn’t bring much enlightenment to tell the truth.

So both Lynis and the Lynis Control Panel are up to date. I guess the latter is only interesting to people running the on-premise version. No Lynis plugins found. OK, so how do I get some?
This would be the right place for a link to, say, a plugin repository or at least the part of the documentation explaining how to get and use plugins.

But let’s continue with the Modules section. Most of these were either links to pages we already saw, or simply not clickable. The others were rather short in content.

Ok, so no events. That might change once a few more hosts are scanned.

Nope. Let’s skip that.

Ah, there is something new. Security Controls. What’s that for? Maybe the link will tell us more?

So it’s advice on how to correct the specified flaws. Neat! It even has Ansible, CfEngine, Chef and Puppet snippets.

Or it doesn’t. What’s the checkmark for then I wonder.

Summary

What you’ve just read is literally a brain dump. I wrote down everything while looking at Lynis Enterprise for the first time. I don’t really have an opinion on it yet, other than that I like it (no reason) and that I think it has the potential to help Linux admins keep an eye on their hosts.

I will take a closer look at file integrity and compliance checks and write about that in a more traditional manner. I will also try to figure out how Lynis can benefit penetration testers in their work. It is clearly thought of as a program that should be used for continuous auditing, so I’m curious how much help it will be in one-time assignments. Especially in regards to the difference between the enterprise and open source versions.

After that, I probably have to take a closer look at the Lynis config file and documentation to answer questions like “Can I tell Lynis that password aging is stupid” or “How do I add/enable/run plugins?”.

PS: @Michael, if you’re reading this. I like the Business dashboard for management, but I still don’t know why I’m at risk of data loss. That’s probably the first question any auditor will have to answer. Maybe I just missed it, or maybe a link to the cause in the dashboard isn’t such a bad idea.

Change OpenVAS Session Time

Here is a small piece of knowledge that prevented me from going nuts. Set your OpenVAS session expiry time before it drives you crazy!

OpenVAS is a great vulnerability scanner, but the default session expiry time is set to 15 minutes, which is just plain annoying when you’re running a scan and want to check in on it every now and then.

Set the session expiry time in Greenbone Security Assistant (GSA) to 60 minutes by adjusting the init script. Depending on your installation and Linux distro, this file might be named differently.

sudo vi /etc/init.d/openvas-gsa

#Look for the Daemon startup parameters and add 
--timeout 60
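Depending on the package, the resulting line ends up looking roughly like this; the variable name and the other options are placeholders, so treat it as a sketch rather than something to paste verbatim:

# somewhere in /etc/init.d/openvas-gsa (or /etc/default/greenbone-security-assistant)
DAEMON_ARGS="--listen=127.0.0.1 --port=9392 --timeout=60"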

Restart the service and log in to the web interface again. Check the cookie for its expiration time.
You can check your cookies in Firefox from the Privacy tab in the Settings by clicking on the “remove individual cookies” link and searching for GSA.

Check the expiry date, it should have a difference of 60 minutes from your login time.

Disqus to fully support CSP

I already blogged about my problems with Disqus and the Content Security Policy header twice, but recent changes in Disqus made me revisit the whole topic.

Burak Yiğit Kaya, a developer at Disqus, made a few changes we discussed about a month ago that improve the coexistence of Disqus and CSP on a website. While both could be run together before, these improvements transform the pair from a dirtily hacked state into a real dream team.

If you have ever implemented CSP on a site that includes third-party components, which today nearly every site does in the form of some social integration plugin, you know how many different hosts, URLs and objects have to be whitelisted. And even if you whitelist everything, some cloud services depend on things like inline JavaScript, CSS or even the dreaded eval function (shudder).

Back when Burak first contacted me, we came up with a few ideas on how to improve Disqus to work better with CSP, such as

  • remove all inline CSS
  • remove all inline Images (data:base64=…)
  • Unify all resources under two distinct domains,
  • a.disquscdn.com for static content (cookieless domain)
  • disqus.com for dynamic content

I’m happy to report that all of the above improvements have now been implemented, which I think is awesome news. There is one more thing on the to-do list, which Burak said he has yet to solve.

  • move the beacon pixel at referrer.disqus.com to the other domain.

The last one isn’t really that important, but it removes one domain from the policy, which is always a good thing as it keeps the ruleset shorter and thus easier to maintain. But why is it so important to have a good integration with CSP? If the hack worked, why should Disqus care about a proper fix and more importantly – why should they spend resources on this?

For one, supporting security features like CSP, and actually working together with people who have questions, concerns or ideas for improvement on a product’s security, shows that a company actually cares. Of course, that’s what every company always claims – especially in the light of any recent security fails – but here we have actual proof.

There is more to it than that, though. Up until now, getting CSP and Disqus on the same page required either blocking certain requests or allowing them via unsafe CSP options. I’m talking about things like inline CSS, images embedded as base64-encoded strings and alternate domains that serve nice-to-have content such as icons. Of course, diminishing the security of our CSP is not really an option, but blocking sources in favor of keeping unsafe-* options disabled is also a bad choice, as it results in your CSP logs getting spammed with violations. You do log your violations, don’t you?

The CSP logs are a great way of receiving notifications as soon as someone stumbles across a potential XSS vulnerability and starts tinkering with it. If code is injected successfully, the CSP will block it and create a new entry in the log files. All you need to do is setup CSP and make sure that normal browsing of your site doesn’t create any violations. Once your site is clean, setup some form of notification for anything that hits the logs.
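A dead-simple way to do that, assuming a working local mail setup and with the log path being whatever your report-uri endpoint writes to, is a small cron job along these lines:

#!/bin/sh
# hypothetical path, point it at your CSP report log
LOG=/var/log/csp/violations.log
if [ -s "$LOG" ]; then
    mail -s "CSP violations on $(hostname -f)" you@example.com < "$LOG"
    : > "$LOG"   # truncate so the next mail only contains new hits
fi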

Of course that’s only an option when your site is actually clean and not constantly throwing violation errors in your face.

Finally, Burak told me that he wanted to write a howto on using Disqus with CSP, which is great.

If you’re curious as how my CSP looks, just take a look at my HTTP headers with

curl -I https://www.hashtagsecurity.com

This should give you the full ruleset, among other headers I’ve set.
If you have questions about CSP, use the comments, contact me on Twitter, or check out my CSP talk.

Protect Your Data

Are cloud services safe to use, or are you better off building your own data castle? Let’s take a look at the difference between cloud services and self-hosted solutions, and why trust is a key part of security.

Cloud services have become widely used over the past years, and it looks like they’ll be around for a while. But there are many concerns about users’ privacy and the security of the stored data, both from professionals and from the users of these services.

With government surveillance, the Snowden leaks and general cloud security fails, like the Apple iCloud incident, many decided to take back control over their data and store it somewhere “safe”. But is hosting the data yourself really safer than the majority of cloud services?

Cloud Security

It’s hard to tell whether a cloud service is safe to use or not. In all but a very few cases you get little to no insight into how data is secured, how the overall company security is set up and how your data is insured. That’s right, insurance also plays a part. What if your sensitive data is leaked by a pissed-off employee? As a private user, such an incident might sting, but for a company this can be a real threat to its existence, if the leaked data contains business-critical information.

Data safety might also be an issue. This new product might be all the rage at the moment, but is the small startup behind it able to afford backups, or is a RAID 3 all that’s protecting your data from being lost for good?

In the end, all you can do is research and ask. Try to find out as much as possible about the service and the company you want to entrust with your data, and don’t be afraid to ask them about their security. The first response is usually “We take security very seriously”, but if you persistently ask specific questions, you might just get a real answer.
Important things to keep in mind are:

  • Data Backups – If possible in a second datacenter or availability zone.
  • 2-Factor Authentication – A password alone might not be enough to secure your login.
  • Reputation – Are there any known security issues from the past? Is the company known at all?
  • Data Control – Can you delete data for good? Or is it stored in the cloud forever?

[Just a thought] – A security-related questionnaire for cloud service providers, and a public index of companies that have already provided answers to it, would be a swell idea. Let me know if you’re building, or know of, such a service.

Of course you could just decide to only trust yourself and do your own thing, and that’s exactly the reason for this post. Over the past two years, I have met lots of people who decided to go their own way, despite having next to no knowledge of how these things work.

Can you do better?

The big question is whether you can do it better. Since trust in cloud services has taken a huge hit, self-hosted applications have become a popular alternative. But there are many things to consider if you want to roll your own “cloud”.

  • Do you know how to secure your server and the application that is running on it?
  • Do you have the time to continuously apply patches to both the system and the application?
  • Do you have the time to regularly check for misconfigurations and security holes?
  • Do you have enough space to make backups (not on your server!)?

Or in short

  • Do you have ALL the required resources to do this?

If your answer is yes, then you should ask yourself one more question: is it worth it? A lot of money, time and nerves goes into hosting your own cloud applications in a secure manner, and since you started all of this because you want to protect your data, doing it in an insecure way would just be lying to yourself about the security and safety of that data.

There are of course ways to minimize the risk, and the required level of trust, when using cloud services, such as encrypting everything before uploading it – just in case you feel a bit lost right now.
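If you’re unsure where to start, GPG in symmetric mode is a simple way to encrypt an archive before handing it to any cloud storage (the file names here are just placeholders):

$ gpg --symmetric --cipher-algo AES256 backup.tar.gz
$ gpg --output backup.tar.gz --decrypt backup.tar.gz.gpg

The first command asks for a passphrase and writes backup.tar.gz.gpg, which is the file you upload; the second one restores the original.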

Let me get one thing straight: I’m not trying to discourage anyone from running their own server. In fact, I would love to encourage anyone who wants to take back control over their data. I have been running my own server(s) for a couple of years now and I’m pretty happy with it. But I also want people to actually increase their security.

Feeling safe != being safe

The thing is, no matter what you do, when it comes to security there will always be some level of trust involved. The more you reduce the required amount of trust in the abilities and intentions of others, the more the amount of resources required from you will increase.

For example, you could host your own Owncloud instance on a rented vserver. Now you don’t have to hand over your data to Dropbox, Google or services like theirs. But now you have to put your trust in others.

You trust the Owncloud developers and the company behind them to do a good job at writing secure code and not to harbor ill intentions towards their users (or give in to any government enforcement against them). You also probably trust the community behind the project to keep an eye out for any bugs, vulnerabilities or suspicious occurrences regarding the project. Next, you trust the hoster that provides the vserver you rented to be honest enough not to copy the data you store on your server somewhere else, where it would be outside of your control.

Of course you could move everything to a local NAS running inside your home network, removing the issue of trusting a cheap hosting company, but probably suffering much slower connection speeds if you need your cloud to be available wherever you go.

Raise the bar, keep the balance

Security is all about raising the bar, but you still have to keep the balance between higher security and the resources required to achieve it. There is no absolute solution and everyone has to decide what’s the best choice for themselves. So be sure to ask yourself these questions:

  • Am I really improving on what I already have?
  • Do I have the required resources to do so?
  • Is it worth the extra effort and do I want to spend my spare time on this?
  • Is there no cheaper way (time, effort, money) to increase security?

Especially the last point is often interesting. A compromise, such as cloud services combined with local encryption, might help a lot of people get over the trust issue without falling into a pit of increased work, lost time and, most likely, spent money.

Summary

These are just a few thoughts that have been rumbling around inside my head after I talked to a few people about home cloud setups. Most of these people have little to no knowledge about service administration or security, which is why I was a bit torn between recommending for and against it.

Please share your thoughts on this with me, if you have any, either via Twitter @HashtagSecurity or in the comment section below.

(W)BP#3 – HAProxy SNI, IPython, PostgreSQL and VIM

A new bucket post – I will change them from weekly to “whenever I feel like it”. Mainly because I can’t find the time to write actual posts between the bucket posts and I don’t want this blog to consist solely of bucket posts.

SSL Client Certificate Support for Owncloud – Meanwhile on the interwebs, the support for client certificate authentication in Owncloud’s desktop client “Mirall” is progressing. So I didn’t do anything and I didn’t learn anything… why is this even here?

Because I’m really looking forward to it! In fact, I’m planning on writing a blog post about the lack of support for additional authentication layers in desktop applications next week!

Also, I’m curious who will claim the bounty! I assume @qknight.

Windows NTP Problems Round 2 – Apparently my “fix” from last week’s post didn’t really fix my time issue with Windows 8. After a reboot, the clock automatically ends up being off by one hour. Fortunately, a friend of mine read the post and sent me this link.

Dual Boot: Fix Time Differences Between Ubuntu And Windows

The problem lies in my dual boot setup of Kubuntu 14.04 and Windows 8.1. For me, the solution was this command:

sudo sed -i 's/UTC=yes/UTC=no/' /etc/default/rcS

If you want to fix the problem from the Windows side, check out the link above. There is more than one way to do this.
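One Windows-side approach is to tell Windows to treat the hardware clock as UTC via the RealTimeIsUniversal registry value, roughly like this (check the linked article for details and caveats for your Windows version before touching the registry):

reg add "HKLM\SYSTEM\CurrentControlSet\Control\TimeZoneInformation" /v RealTimeIsUniversal /t REG_DWORD /d 1 /f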

SNI with HAProxy – Last week I encountered a few problems with HAProxy and Server Name Indication, or SNI.

SNI is used by webservers to distinguish between multiple SSL/TLS vhosts. In a plain HTTP setup, webservers can easily tell which site is requested. When TLS is in place, this becomes impossible without decrypting the traffic first. In order to host multiple websites on the same IP and port (443), the client is required to send the hostname before the transport encryption is established. That’s exactly what SNI does.
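You can watch SNI happen with openssl; the -servername option is what puts the hostname into the TLS handshake (the hostname here is just an example):

$ openssl s_client -connect www.example.com:443 -servername www.example.com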

Usually SNI allows you to create different vhosts like this (pseudo code)

www.example.com:443
  www.example.com settings
private.example.com:443
  private.example.com settings

In HAProxy however, it looks more like this (pseudo code)

*:443
  use_backend www if sni is www.example.com
  use_backend private if sni is private.example.com

The problem here is that a lot of settings are done in the frontend, not the backend, and therefore some settings cannot be set per vhost. I found a solution to this problem, which I documented on serverfault.com. If I find the time, I’ll write a blog post that explains everything in more detail.
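For reference, with TLS termination in HAProxy the routing above might look roughly like this in real syntax (a sketch with placeholder backends and certificate path, not necessarily the exact setup from my serverfault answer):

frontend https-in
    bind *:443 ssl crt /etc/haproxy/certs/
    mode http
    use_backend www     if { ssl_fc_sni -i www.example.com }
    use_backend private if { ssl_fc_sni -i private.example.com }

backend www
    mode http
    server web1 127.0.0.1:8080

backend private
    mode http
    server web2 127.0.0.1:8081

If you don’t want HAProxy to terminate TLS at all, you can instead inspect the SNI field in tcp mode with req_ssl_sni and pass the connection through untouched.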

Seriously, why is this never documented??? – Following a howto about something that includes PostgreSQL on Ubuntu 14.04 is always a pain, mainly because these two lines seem to be missing every single time!

$ sudo useradd -U -s /bin/bash postgres
$ sudo pg_createcluster 9.3 main --start

source: askubuntu.com
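To check whether the cluster actually came up, pg_lsclusters (also part of postgresql-common) should list it as online:

$ pg_lsclusters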

IPython Notebook – Looking for a new web-based notebook? I was! And I found IPython Notebook which is, to keep it short, awesome.
To showcase a few of the many features I like…

  • Run Python code
  • Use Markdown
  • Preview
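If you want to give it a spin, the install can be as simple as something like this, assuming you already have Python and pip in place (your mileage may vary with dependencies):

$ pip install "ipython[notebook]"
$ ipython notebook

The second command starts a local webserver and opens the notebook interface in your browser.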

VIM modelines – Modelines look something like this:
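# vim: set ts=4 sw=4 expandtab :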

and can be used to set VIM options for specific files. By appending such a line to a file, VIM will adjust its settings for that file accordingly, unless the modeline option is disabled.

Modelines can be temporarily enabled by running :set modeline or permanently by adding set modeline to your ~/.vimrc.

Note that modeline is off by default when editing as root.

VIM jar – VIM never ceases to amaze me, and the limit to things one can learn about it seems to be non-existent.

I was looking for a tool to explore the contents of a jar file. As it turns out, it’s just a zipped archive, so unpacking it would do the trick – however, you can also just open it with VIM and have a look around without extracting the files first.
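For example (the file name is made up), this is all it takes to browse the archive:

$ vim some-library.jar

VIM’s bundled zip plugin shows the file listing and lets you open individual entries right from there.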

If you have unzip installed that is.

I usually use tar, so unzip is something I don’t have installed by default, but now I might just have enough reason to install it as well.

Kubuntu on L420 – Just a quick addition, I recently bought a Thinkpad L420 for 220€ on ebay. Unfortunately Kubuntu only booted with the acpi=off and nolapic flags. After a BIOS upgrade with this boot CD everything worked fine. Just in case anyone faces this issue as well.

Links – Interesting things I found on the webs