Security vs. Compatibility – Fight!

How do you secure a web page for private use? Easy, if you ask me: use client certificates and a secure connection over TLS, preferably with a signed server certificate.

So far so great, the website is pretty secure now, as only someone with the right certificate can visit it. But what about compatibility with client programs? Well – fuck!

Turns out, most endpoint clients don’t really support additional authentication mechanisms. In the best case, a single login with a strong password should be sufficient to secure something. However, I like to add additional layers of security to prevent possible flaws in the web application from ruining my day.

Here are some additional layers we could use to increase security.

  • HTTP Proxy – Add additional authentication by running the webapp behind a web proxy
  • Client Certificates – The secure connection to the server requires the client to provide a valid certificate before browsing the site
  • Basic Auth – Webservers often offer a basic authentication mechanism, requiring a valid login to connect to the requested site
  • VPN – Running the web application inside a private network, forcing users to be connected to the network either physically or virtually.

The first three share the same problem: not all client software supports these authentication mechanisms. If any of them is supported at all, it’s usually the proxy, but even that’s not available everywhere. Plus, you’d need to set a rule in your browser to only use the proxy for that single domain, otherwise you’re browsing everywhere via proxy, which can have a pretty hefty impact on your browsing performance.

Client certificates and basic auth are easy to set up and also pretty secure, provided that the underlying connection is not flawed. However, they enjoy even less support in client software outside of common browsers, which can put you in the situation of having to choose between the client and the security layer.
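To show how little work the client-certificate route really is, here is a minimal sketch using openssl. All file names, subjects and validity periods are placeholder assumptions; a real deployment would want a proper CA setup.

```shell
# Sketch: create a private CA and issue one client certificate.
# Names and validity are placeholders – adjust to your setup.

# 1) Private CA – the web server will trust certificates signed by it
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -subj "/CN=my-private-ca" -keyout ca.key -out ca.crt

# 2) Client key plus certificate signing request
openssl req -newkey rsa:2048 -nodes \
    -subj "/CN=my-client" -keyout client.key -out client.csr

# 3) Sign the client CSR with the CA
openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key \
    -CAcreateserial -days 365 -out client.crt

# 4) Bundle key and certificate for import into a browser
#    (interactive – asks for an export password)
# openssl pkcs12 -export -inkey client.key -in client.crt -out client.p12
```

On the server side, nginx for example takes the CA certificate via `ssl_client_certificate` together with `ssl_verify_client on;`.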

“You could always use SSH and open a tunnel between the web application and the client!” – that’s a dirty hack! We’re not going to talk about that! It works, yes. But I don’t see it as a good, permanent solution.

It seems to me that VPN is the clear winner here. It’s independent of client software and can be run in split-tunnel mode. This is pretty much all theory though, spun together in that head of mine. I would like to know if any of my potential readers have thoughts (or dare I even say opinions) on this, especially on the use of VPN as an alternative to the other solutions.
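Since I mentioned split-tunnel mode: with WireGuard, for example, that is a one-line decision on the client. The config below is only an illustrative sketch – keys, endpoint and the subnet are made up.

```ini
# Hypothetical WireGuard client config for split-tunnel use
[Interface]
PrivateKey = <client-private-key>
Address =

[Peer]
PublicKey = <server-public-key>
Endpoint = vpn.example.com:51820
# Split tunnel: only the private subnet is routed through the VPN;
# would route *all* traffic instead
AllowedIPs =
```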

Tweet to @HashtagsSecurity or use the comment system below.

Testing for GHOST Vulnerability with Ansible

Yesterday, reports of a critical vulnerability in the GNU C Library (glibc) hit the news. If you have more than just a handful of servers to check, this Ansible playbook might be helpful.

You can read all about the vulnerability here.

Usually when I have to check multiple servers, I use Ansible like this.

ansible 'servergroup' -m shell -a 'command to execute'

In the case of GHOST, this wasn’t working for me as the test command contained both ' and " chars. Escaping them didn’t seem to work either, so I wrote a small playbook to take care of it.

- hosts: hosts,or,groups,comma,separated
  remote_user: sshuser
  tasks:
    - name: Check if host is vulnerable
      shell: php -r '$e="0";for($i=0;$i<2500;$i++){$e="0$e";} gethostbyname($e);'
      register: ghostvuln
    - debug: var=ghostvuln.stdout_lines

That’s it. If you’ve never used Ansible before, just follow these steps.

  • install Ansible from your OS repository
  • add hosts to /etc/ansible/hosts [groupname] host1 host2
  • Make sure your SSH key is loaded with ssh-add -L
  • Test if Ansible reaches every host

    ansible 'group,or,hosts' -m shell -a 'hostname -f'
  • Execute Ansible playbook

    ansible-playbook /path/to/playbook
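For step two, the hosts file is plain INI-style. A hypothetical example – group and host names are made up:

```ini
# /etc/ansible/hosts – group and host names are placeholders
[webservers]
web1.example.com
web2.example.com

[dbservers]
db1.example.com
```

The group name in brackets is what you pass to ansible as 'servergroup'.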

If you get either one of these results, everything is fine.

changed just means that the command could be executed; check further down for the result. As you can see in the second image, the command did not return Segmentation fault.
failed means that the command could not be executed, in this case because PHP isn’t installed.

This is an example of a vulnerable host returning a segfault message.

Note: The check command above uses PHP, which I don’t have installed on all my servers. Since this is a glibc vulnerability, I’m pretty sure that hosts can be vulnerable even if PHP is not installed. I will update this post if I find a way to check servers without PHP. Until then, install php5-cli if you don’t have php on the system.

Update 1: More info on this bug can be found here, including how to get a list of services that use glibc (Debian/Ubuntu)

sudo lsof | grep libc | awk '{print $1}' | sort | uniq

Also, make sure to reboot the servers after you’ve installed the patches or the server will remain vulnerable!
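If you want to find the affected processes instead of rebooting blindly, one approach (a sketch, assuming a Linux host with /proc available) is to look for processes that still map the old, now deleted libc after the upgrade:

```shell
# List processes still mapping a deleted libc after the upgrade –
# these need a restart (or just reboot the box to be safe).
# Run as root to see all processes, not just your own.
grep -l 'libc.*(deleted)' /proc/[0-9]*/maps 2>/dev/null \
    | cut -d/ -f3 \
    | while read -r pid; do
        # print PID and process name of each candidate
        printf '%s\t%s\n' "$pid" "$(cat "/proc/$pid/comm" 2>/dev/null)"
      done
```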

Update 2: An easier way to check if your server is vulnerable is to check the glibc version by running the following command.

$ ldd --version
ldd (Ubuntu EGLIBC 2.19-0ubuntu6.4) 2.19
Copyright (C) 2014 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
Written by Roland McGrath and Ulrich Drepper.

$ ansible 'servergroup' -m shell -a 'ldd --version |grep "^ldd"'

According to Tomas Hoger, the issue was fixed in glibc 2.18.
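Based on that, the version check can be scripted. A sketch – note that distributions backport security fixes to older version numbers, so a raw version comparison is only a first pass, not a verdict:

```shell
# Sketch: compare the installed glibc version against 2.18, the first
# upstream release with the fix. Distros backport patches, so a version
# below 2.18 is only *potentially* vulnerable.
fixed="2.18"
current=$(ldd --version | awk 'NR==1 {print $NF}')

# sort -V orders version strings numerically; if $fixed sorts first
# (or equal), the installed version is >= 2.18
if [ "$(printf '%s\n%s\n' "$fixed" "$current" | sort -V | head -n1)" = "$fixed" ]; then
    echo "glibc $current: >= $fixed, not affected"
else
    echo "glibc $current: < $fixed, check your distro's advisory"
fi
```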


We got hacked! Now what?

Almost a year ago, I experienced my first real security incident. The company’s bulletin board was compromised and it was my job to oversee and coordinate the incident response. The teams and I were pretty much thrown in at the deep end, as we’d never experienced an incident of that size before.

Right after the incident I wrote the following blog post, which I’m now able to publish. Please note that I deliberately didn’t change anything, as I wrote it back when my memories of everything were still fresh in all detail.

A note up front!

Please note that this is a private blog and although I was an employee of CHIP Digital GmbH, all opinions depicted here are solely my own!

This is a write-up of last week’s events from my perspective and how I experienced it.

If you’re not from Germany, you might have missed the news that the bulletin board at was hacked. CHIP is a technical magazine targeted at end users, and the board has around 2.4 million registered users. Not all of them are active, some have never even activated their account, but it’s still a decent number of users. Unfortunately, this only makes it so much worse if there is a breach.

So what happened?

Well, if you speak German you can read the official statement here or try your luck with the Google translator.

In summary, on Monday, 24.03.2014, someone gained unauthorized access to our bulletin board. As of right now, we still don’t know how they got access, but the compromised account in question had at least some higher permissions, allowing the attacker to compromise further employee accounts. As soon as we noticed that there was something fishy going on, we took the system offline and notified users that the board was under maintenance. We hired external forensic experts to secure any evidence of the breach and analyze the system, so we could figure out what happened and how it happened. Once we were told that there was a chance that user data had been accessed, we notified all email addresses in the database (2.4M).

At the time of this writing, we’re still unsure if user data has been stolen.

But there is more!

All of the above seems quite simple. We got hacked, we hired forensic experts, we notified our users.

First of all, there are some points I want to make clear:

  1. First contact: I’ve never worked with forensic analysts before
  2. As you might have guessed, this is one of those situations that are hard to prepare for
  3. Yes, we made mistakes! (We’re not perfect! Again – my opinion!)
  4. Time ran really fast this week…

That being said, let’s take a closer look at my last week.


I have to admit my first mistake right here – I didn’t check my mail. I went home after work and didn’t check my phone. That’s why I missed out on a lot of information. If I had checked my mail, I might have been able to make a decision much earlier. This wouldn’t have prevented the incident, but we might have been able to get forensics in on Tuesday morning.


I checked my mails during breakfast and that’s when all the information hit me in one big wave. From that point on, I didn’t have a real break for a good amount of time (and I wasn’t the only one!). So I rushed to work and tried to figure out what exactly had happened, who got what information and if anyone had come up with some sort of plan.

Before I continue, I have to thank all my co-workers for doing a really great job! (Don’t even think about me doing all the work – I was quite busy, but there were a lot of people who gave everything!)

Just before the first meeting was called, I managed to contact a forensics company and asked if they had someone who could come in ASAP. They had to check – which meant I had to wait. It didn’t matter, because I had just enough time to rush to the meeting which was about to begin. Why even bother with meetings in a situation like this? Well, as it turns out the meetings were (although exhausting) really important, as they kept us focused on what’s important and in sync information-wise. The first one was a bit chaotic, since almost all colleagues from the technical department were attending and we had to find out what we knew and how we would go about fixing it. But it was a good way to figure out who would join which team, and who had the necessary know-how to help which party.

Overall there were four main teams. (I’m deeply sorry if I missed anyone. These are not all of the people that helped, just a rough overview of the main groups)

The forensics team
Charged with the analysis of the systems, this “team” consisted of one of my co-workers, me and of course the forensic experts. We had one analyst with us on-site (from here on called Mr. M.), and it was our job to support him by handing him logfiles and database dumps, giving him physical access to the servers and answering all of his questions.

The “Revive” team

I call it that, since “Revive” is the word from their whiteboard that got stuck in my mind. These guys were busy the whole time, getting the board back online in a secure way so that users could log in and interact again. Their first goal on Tuesday was to get the board back online in read-only mode. The system was set to read-only so that the content was available again (on new servers, of course!) but couldn’t be tampered with in case the attacker returned. The second goal was to set up the whole board with tightened security on new hardware, since the old one couldn’t be used. You might think that setting up a bulletin board is an easy task, but this system is very complex and they had to jump through quite a few hoops in order to get there. This was by far more than just restoring a backup. We didn’t even know if the backups could be trusted! – This is all you will be hearing from this team, since I wasn’t part of it. But for me it was an amazing example of their skills put to the test and I’m glad to be working with such great people!

The communication team

The communication team was a bit vaguely defined since a lot of people were part of it at some point, but the core of it included of course our public relations manager, top-level management, part of the community team and myself (mostly for technical questions). This team was formed after the meeting and put in charge of informing our users and handling communication throughout the company and with our data privacy officer, lawyers and such. They had to gather and distribute all information, sort it and make decisions based on it – and they made the right ones in my opinion!

The community team

This is the only real team. The community team normally moderates the bulletin board, as well as social media like Twitter, YouTube and the like. Aside from their role in the communication team, they had to answer all questions that came in, some of which had to be checked with forensics in order to avoid spreading rumors or making statements that were not true (or not yet declared as facts by Mr. M).

As you can see, a lot of people had to make sure everyone was up to date on the information, which was a lot of work that had to be done on the side. After the meeting, I called back our forensics contact and he told me that one of his analysts (Mr. M.) could be on-site in about two hours. Once Mr. M. arrived, we had to brief him on the incident and he quickly explained the next steps. First of all we had to take a complete dump of the compromised server’s hard disk, since that takes a long time to finish. In case you wonder – no, you can’t use your standard backup software. He didn’t need a backup of the files on the disk, but a complete image of the disk. We handed Mr. M. our webserver log files and dumped the complete database so he could analyze it. We spent the rest of the day going through log entries line by line, trying to figure out which IPs belonged to the attacker and which actions were taken. Mr. M. took the files back to his lab, where he continued to work till late at night.


Wednesday morning, we drove back to the data center to get the disk image. Unfortunately I had made a mistake when calculating the approximate time the dump process would take, so the image wasn’t done yet. First of all, I had used the size of the data stored on the disks and compared it to the data transfer speed, which was wrong because the size of a complete image is obviously not the size of the data stored, but of the complete disk array. The second mistake I made was that I trusted my own calculation. I could have checked from my workstation whether the copy job was finished, and we lost valuable time because of that.

Since we couldn’t analyze the disk image, we continued to analyze logfiles and the database dumps. It was helpful that we had the (tamper-proof) webserver logs from Akamai to cross-reference whether any of our logfiles had been tampered with. Later that day we found the first signs of a possible access to the database. At this point it was still just guesswork, but we decided that we needed to go public if there was a possibility that user data had been accessed. That was also the point where I started to jump between the forensics and the communications team. I became in charge of making sure that any publicized information was correct (from a technical or forensic point of view). The thing we wanted to avoid the most was rumors or even wrong information getting out.

Much later, we went back to the data center to get the disk images, which Mr. M. took back to his lab in order to analyze them properly. I did get to go home, but I spent the rest of my evening documenting everything I knew so we were all on the same page.


On Wednesday evening we had made the decision to go public and that the message should go out to our users the next day at 15:00. It bugged me that we would wait so long until we sent the message, but as it turned out we needed the time, and I’m glad that our management knew better than I did. Preparing a message in both German and English was one thing, because we had to discuss the phrasing and what we could write (again, we didn’t want to spread rumors, but tell people what we knew so far). The other thing was to prepare a short FAQ on what people should and could do to be safe in the meantime. The biggest problem however was handling the amount of outgoing mail and the expected responses. We decided to go with our newsletter service, to which we imported all of the email addresses. But we couldn’t deliver all mails at once, so we had to send them in batches. The whole process took longer than I liked but we couldn’t change it. Meanwhile the FAQ was published. Unfortunately, because we were all in a hurry, someone set the FAQ’s publishing timestamp to 1972.

That was the end of the day and most of what I remember of it. It doesn’t sound like a whole day of work, but there were so many ends to tie together and so many decisions to be made on the spot that I was totally worn out when I got home. The rest of the day, we tried to monitor the web for any reaction to our outgoing mail. It was much quieter than I expected.

Here are some examples of the problems we had to deal with that day. It’s good that everyone has their own opinion, because it can help find the best solution, but time was short and we had to make sure that we used it as efficiently as we could.

Our passwords are hashed, do we write encrypted?

This is a problem in the German language. Most (not technically inclined) people say “verschlüsselt” (en: “encrypted”) for both hashing and encryption. The problem is that hashing doesn’t have a German translation that is as widely used as the counterpart for encryption. So by writing “verschlüsselt” we would make sure that most users understood what we were saying, but at the same time risked that users who knew the difference might think that we couldn’t tell hashing from encryption. As it turned out, that’s exactly what some of the users were thinking (and posting). Oh well.

Do we write forensics or just experts?

For me it was pretty clear that we would tell people that we hired external forensic experts, not just external experts. Why? Because we have experts for various fields, but forensics just isn’t one of them. Forensics isn’t part of our daily work; in fact this was the first forensic job we had since I started working at CHIP, so it makes sense that we don’t have a full-time forensic analyst.

Where do we put the FAQ?

This is a tough nut to crack for every company that finds itself in this kind of situation. You obviously don’t want to spread bad news, but you want to make sure that all users are being notified. So we decided not to post it on the front page of our main website, but on the front of our bulletin board. Since the board was still read-only, and our “Revive” team was still tinkering with it, we had problems putting a message online. So we went with the fastest solution available and replaced the top ad with a custom banner. We created a banner with a message to all our users and delivered it via our ad service.
There was just one little problem we didn’t think of. (Again, lots of stress and time pressure – you might overlook something)


Everyone with an active adblocker didn’t see our banner and therefore just saw a clean, read-only board without the message. The users still got the link to our FAQ hosted on our domain, but since the FAQ was published with a timestamp of 1972, some of our users thought this might all just be fake and that maybe someone was trying a phishing attack. – Not how we thought it would go!


Also, since we sent our message via our newsletter service, some of our users filtered it or received it as spam. We even got some comments from users saying they deleted it without reading it because they thought it was just another newsletter. Damn!


This was the day! The community team was prepared to answer the responses from 2.4 million outgoing mails while the rest of us tried to keep an eye on what was going on in the web.

Which sites had written or blogged about us yet, what questions hadn’t we answered so far, and the biggest question – should we publish an article with further information? It’s a tightrope walk between flooding people with unnecessary information and handing over only the important facts without hiding something. We were prepared to answer all the questions that could possibly come in, but in order to avoid wrong statements we agreed that answers to technical questions had to be checked with me before sending them out. It worked quite well, mostly because the number of incoming questions and responses was lower than expected. Also, most of them were from users who had already forgotten they still had an account with our site and simply asked us to delete it.

Still, it was a tough ride that was getting on our nerves, because we didn’t know when the big flame-wave was going to hit us.


As of right now, forensic analysis is still going on so we don’t have all of the information yet. I think we did quite well considering the situation. But there is always something to learn from your mistakes and that’s what I’m trying to do.
Below is a list of things I came up with that can help anybody who wants to prepare for a situation like this.

  1. It can happen to anyone – Yes we are a computer magazine and many people think that we “should have known better”. But the fact is that there is no such thing as being secure, only best-effort. And you want to make sure that your best-effort is something you can present without feeling the need to hide something!
  2. Always check how your data is secured and document it. You don’t want to be in the position where you first have to check when someone asks you that question
  3. Create a workflow to check regularly if the way you are storing your data is still state-of-the-art or if you have to improve on it.
  4. Prepare for emergencies – This is really hard, because how do you prepare for something you don’t know yet? Define a group of people with the skills required to
    • check your systems – The technical goto person who can answer all technical questions or at least find out the answer. This should also be the person to speak with forensics.
    • handle communication with your data privacy officer, law enforcement, your lawyers, management, etc.
    • make decisions – you need someone who can make the required decisions, and make them fast. If you have multiple managers, let them decide who gets to make the call. The more people involved in decisions, the longer it’s going to take!
    • backup if some of the people above are not available.
  5. Time is of the essence – create a detailed workflow on how to communicate and make sure everyone knows and uses it. If you need to collaborate on documents or statements, make sure you all use the same software.
  6. Create an emergency response team, a group of people who know how to handle a system that has been compromised. They don’t need to be forensic experts, but they should know what to do in order to prepare the scene for the analysts.
  7. Take breaks – force yourself to take a break every once in a while. Situations like these are stressful and at some point you will make a mistake if you don’t rest. Lock your workstation and go for a 10 minute walk if it’s nice outside. Otherwise, get a coffee and don’t drink it at your desk! (or at a meeting!)
  8. Talk to your CEO and PR about disclosure and what their official statement is. When the situation comes, they might want to reconsider, so write down what the decision was and exactly why they made it. This can save time, and that’s all that counts!
  9. Find a forensics company if you don’t have your own analyst. If something like this goes down, you don’t want to spend time on searching for a suitable company. Keep the phone number in your drawer!
  10. Get your employees on board
    • tell them what happened and that they are not allowed to communicate anything on their own!
    • choose a dedicated person that your employees can contact for questions, or to whom they can forward questions they received (in case something has been leaked already)
    • don’t hide information from them – if it’s a fact or even a strong possibility, you should tell them!

I’m glad that we made the right decisions, even if we didn’t think of everything. And as amazing as it was to see all those people giving all they got to resolve this problem, I still hope that we don’t have to deal with this kind of situation again.

Creative Commons License

We got hacked! Now what? by HashtagSecurity is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
Based on a work at https:/

WBP#2 – SHA256, Flask, Safari, and more…

The Weekly Bucket Post goes into the second round. This week we have wrong SHA256 hashes, problems with Safari and, amongst other things, an invitation to!

sha256sum creates “wrong” hash – Recently I was wondering, why a SHA256 hash of the string password was listed in neither the Hash Toolkit nor the LeakDB databases. Surely someone must have used password as password somewhere!?

Turns out, password is of course in both those databases; what isn’t listed, though, is password\n. The reason I fell into this trap was that I forgot that the bash command echo always appends a newline unless you call it with the -n switch.

# Same strings
echo -ne "string\n" | sha256sum
echo "string" |sha256sum

# what you really want...
echo -n "string" | sha256sum
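To see the trap in action with the actual string from above (the first digest is the well-known SHA256 of the string password):

```shell
# Without -n, echo appends \n and the digest changes completely
printf '%s' password | sha256sum | awk '{print $1}'
# → 5e884898da28047151d0e56f8dc6292773603d0d6aabbdd62a11ef721d1542d8

echo password | sha256sum | awk '{print $1}'
# → a completely different hash: this is sha256("password\n")
```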

Browser Testing – Oh Safari, now I know why I don’t use you… because I can’t!

Last week, @MikeyJck was kind enough to let me know that Apple’s Safari browser was asking for a client certificate when browsing on my site. While this is actually something I use on a few pages or subdomains, it should certainly not appear on the frontpage of this blog.

Since I don’t own a Mac, I went looking for ways to (efficiently) test websites with Safari – so far without great results!

Safari was available for Windows once, but after version 5.1.7 Apple killed it, and now it’s only available for systems running OS X.

The alternatives I found are not really alternatives in my opinion, as they either build on said Windows version or on making screenshots of your page which doesn’t help much either when you’re trying to debug strange behaviour.

Running late on Windows 8 – For some reason my Windows 8.1 desktop was running an hour late. No matter what I did, it kept changing back to UTC, when it should be on UTC+1.

This seems to be a very well-known problem in Windows, the reason for which is the default NTP servers Microsoft has set for their operating system.

By setting the NTP servers to, the clock jumped by one hour and displayed the correct time. What’s interesting is that public posts about this problem were talking about a time difference of a few minutes. For me it was exactly one hour.

Windows 8 time settings (in German – had no choice!)

YouTube, MP3 and distribution of malicious code – Last week I found myself needing to use one of those “YouTube to MP3 downloaders”, which I’m always kinda sceptical about. Not only is the sound quality crap, but obviously you have no control over what you’re really downloading – and in most cases you don’t really have a trustworthy brand behind it either.

Said situation spawned some food for thought. I know that malicious code can be stored in MP3 files, which really isn’t anything new. But I can’t help but wonder if adding malicious sound tracks to YouTube videos could make for a “good” distribution mechanism, especially if you’re trying to target unsuspecting smartphone kiddies who’re downloading their music from YT. – There isn’t much to say about it that isn’t on the site itself already. It’s pretty self-explanatory, and for everything else there is the FAQ. So I’m just gonna say this

  • It’s the new shit
  • It’s a great idea
  • And it was about time somebody did it!

Also, I’m on there! #TrackMeINeedSnapshots!
If you need an invitation, drop me a line on Twitter!

Local IP and DNS don’t match – If you ever have connection problems to a host but can still reach it via local console, give this a try.

Normally I would run ifconfig and nslookup to compare the IP with the DNS entry. The hostname command has a nice feature to return all assigned IPs, so you can see if they differ.

$ ifconfig |grep "inet addr"
eth0      Link encap:Ethernet  HWaddr 00:16:3e:47:65:1d  
          inet addr:  Bcast:  Mask:
          inet6 addr: fe80::216:3eff:fe47:651d/64 Scope:Link
          RX packets:124 errors:0 dropped:0 overruns:0 frame:0
          TX packets:87 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:14563 (14.5 KB)  TX bytes:13151 (13.1 KB)

lo        Link encap:Local Loopback  
          inet addr:  Mask:
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

$ nslookup myhost.lan

Name:   myhost.lan

$ hostname -I 
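The comparison can be scripted too. Here is a sketch – myhost.lan is a placeholder name, and it uses getent instead of nslookup so it also honors /etc/hosts:

```shell
# Compare the host's configured IPs against what name resolution returns.
name="myhost.lan"   # placeholder – your host's DNS name
dns_ip=$(getent hosts "$name" | awk '{print $1; exit}')
local_ips=$(hostname -I 2>/dev/null)

echo "DNS says:  ${dns_ip:-<no record>}"
echo "host says: ${local_ips:-<unknown>}"

# -F: match the address as a fixed string, not a regex
if [ -n "$dns_ip" ] && printf '%s' " $local_ips " | grep -qF " $dns_ip "; then
    echo "OK: DNS entry matches a local address"
else
    echo "MISMATCH (or no record) – check DNS and interface config"
fi
```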

Interesting things I found on the interwebs

CSP and Disqus are buddies!

After all that trouble I had with Disqus and my Content-Security-Policy, I finally got it working. Not only that, but I got some help from a Disqus JS dev!

First of all, I want to apologize for a few things in my last post. I blamed Disqus for using eval, when it was really me who, unknowingly, invoked it by using the jQuery load() function.

I also said that Disqus is not compatible with CSP at all – which is not really true.
Thanks to Burak Yiğit Kaya, a JavaScript developer at Disqus who reached out to me via Twitter, I now know more about how Disqus and CSP work together.

And I want to thank Burak and Disqus for their reaction to my post.
Burak contacted me, not to tell me that I was wrong, but to try and understand what the problem was in order to fix it.

We take security quite seriously and also respect users treating it highly so I'll do my best to make this easier for you.

And he did! Which is awesome, because I have heard this one so many times already, and it’s very rare that there’s more behind it than just PR.

But back to the technical stuff. After some back and forth with Burak, I finally got a working CSP which looks like this.

Content-Security-Policy: default-src 'self'; script-src 'self'; img-src 'self'; frame-src 'self'; style-src 'self' 'unsafe-inline';
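For reference, this is roughly how such a header would be set in nginx (an assumption on my part – pick the equivalent directive for whatever web server you run):

```nginx
# Hypothetical nginx server block setting the CSP header from above
server {
    listen 443 ssl;
    server_name example.com;  # placeholder

    add_header Content-Security-Policy "default-src 'self'; script-src 'self'; img-src 'self'; frame-src 'self'; style-src 'self' 'unsafe-inline';";
}
```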


This CSP still contains the unsafe-inline option for style sources, which is not really a good thing. Burak told me that I can ignore it, as all it’s doing is making the loading logo spin.

While I can absolutely go without the spinning logo, there is another problem here. Ignoring CSP violations, even if it’s just a spinning icon, will at some point have a negative impact on your CSP logs if you have them enabled.

So it works, but it’s still not perfect.

Too many sources

The other thing that would need improvement is the number of different sources. Burak explained that they use as a cookieless domain, and as a stat beacon to check if the load was successful. As I said to him before, this is just a nice-to-have. It would improve the maintainability of CSPs, but it’s not absolutely necessary.

Burak came up with the idea of unifying the sources under two domains. and

Disqus CSP

Something I didn’t know before is that Disqus ships with its own Content-Security-Policy. As Burak told me, if you load Disqus and take a look at the response headers of the request, you can see a custom CSP is being set.

content-security-policy:script-src https://** * * https://** http://** * * 'unsafe-eval' **

For me this is just show even more that Disqus actually cares about security. Otherwise, they wouldn’t have bothered to limit the sources in the first place.


So, Disqus is back on And now it’s not just a convenient comment system anymore – I actually have an opinion about it now.

The conversation between Burak and me brought a few things to light that could be improved, and Burak said he will look into it. Although he couldn’t make any promises, I’m looking forward to seeing these improvements go live in the future.

  • Move inline CSS into an external file to remove unsafe-inline style source
  • Unify sources under and

And something we didn’t talk about, which I noticed later: the Disqus homepage only shows how to implement Disqus inline. Another option showing how to do it in a CSP-compatible way would be a great addition, especially for people just getting started with this kind of thing.

Wait, so what about the eval problem?

Oh yeah, I almost forgot to mention that. The eval problem only occurred because I don’t want Disqus to be loaded automatically. Instead, I want visitors to click the big blue bar below the article whenever they want to leave a comment.

The way I did this was by loading Disqus via jQuery’s load() function, which seems to use eval() internally.

After playing with both jQuery and plain old JS for a bit, I finally found this nifty little helper, which works like a charm. So kudos to @nternetinspired for solving this problem way before me!
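For reference, the core of the eval-free approach is simply to inject the embed script as an external file instead of pulling it in through load(). Here is a minimal sketch; the function names are my own, and the `<shortname>.disqus.com/embed.js` URL pattern is an assumption, not taken from the helper:

```javascript
// Build the Disqus embed URL for a given forum shortname.
// Assumption: the usual '<shortname>.disqus.com/embed.js' pattern.
function disqusScriptUrl(shortname) {
  return 'https://' + shortname + '.disqus.com/embed.js';
}

// Inject the embed script as an external <script> element. Because the
// code arrives as a file from an allowed source, no 'unsafe-eval' is
// needed, unlike jQuery's load(), which eval()s fetched script content.
function loadDisqus(doc, shortname) {
  var s = doc.createElement('script');
  s.type = 'text/javascript';
  s.async = true;
  s.src = disqusScriptUrl(shortname);
  (doc.head || doc.body).appendChild(s);
  return s;
}

// In the page, wired to the click handler of the blue bar:
//   loadDisqus(document, 'hashtagsecurity');
```

The important design point is that the script element’s src is fetched and executed by the browser itself, so CSP only has to allow the source domain.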

CSP: “Disqus gotta go!”

Recently I noticed that Disqus wasn’t loading anymore. It was easy to figure out that CSP was the reason. In the end, I was left with nothing more than the choice of which one had to go.

Update: Burak Yiğit Kaya, a JavaScript developer at Disqus, reached out to me to address the problems in this post. I will write another post about the results shortly!

Obviously my choice was to keep CSP – this is a security blog after all. But let’s take a look at what brought me to the point of giving up.

At first I tried to fix the problem the old-fashioned way: every time Disqus required a rule change, I made it. I thought this might even work, up until my CSP looked something like this (split across lines for better readability):

default-src	'self'; 
script-src  'self'; 
img-src 	'self' 
frame-src   'self' 
style-src 	'self' 

So far, so semi-good. You might have noticed that I restricted the allowed sources to exactly the necessary minimum to decrease the risk of XSS. However, all of that quickly lost its weight when Disqus finally requested two more changes.

default-src	'self'; 
script-src  'self' 
img-src 	'self' 
frame-src   'self' 
style-src 	'self' 

Update: unsafe-eval was actually my fault. Apparently the jQuery function .load() uses eval internally.

That’s right: to work properly, Disqus needs the unsafe-eval script source and the unsafe-inline style source. For those of you not really familiar with CSP, let me explain the problem real quick.

CSP, or Content-Security-Policy, is meant to prevent XSS by restricting the sources of JavaScript, CSS stylesheets and other things such as images, frames, etc. To be able to do this, two important things have to be disallowed.

  1. Inline JS or CSS code embedded directly in HTML files, such as these two examples: <script>alert("inline javascript")</script> and style="height:100%; width:100%"
  2. Use of the JavaScript eval() function (as it’s deemed highly insecure!)

Instead, JS and CSS should only be included as files via the src attribute. The allowed sources can then be specified in the CSP rule. The problem with unsafe-inline and unsafe-eval is that they re-allow inline CSS or JS code and the use of the eval() function, respectively.
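To make the eval point concrete: the most common legitimate reason for eval() in the wild is parsing JSON strings, and that use case has a safe, CSP-friendly built-in replacement. The payload below is just an illustrative example:

```javascript
// A JSON payload as it might arrive in an XHR response.
var payload = '{"comments": 3, "thread": "csp-post"}';

// eval('(' + payload + ')') would parse this too, but it executes
// arbitrary code and is blocked unless 'unsafe-eval' is allowed.

// JSON.parse only parses data and works under a strict CSP.
var data = JSON.parse(payload);
console.log(data.comments); // 3
```

So for most code, dropping eval() costs nothing; it only hurts when a library relies on it internally, as in the jQuery case above.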

As a sort of last resort, I tried to create a single page with an alternate, hard-coded CSP rule, just to see if this would work.

disqus.html – containing the bad CSP and Disqus loader

<!-- META Header containing the full "Disqus compatible" CSP rule-->
<meta http-equiv="Content-Security-Policy" content="default-src 'self'; script-src 'unsafe-eval' 'self'; img-src 'self'; frame-src 'self'; style-src 'self' 'unsafe-inline';" />

<!-- DIV to load Disqus -->
<div id="disqus_thread"></div>
<script type="text/javascript" src="disqus.js"></script>
<noscript>Please enable JavaScript to view the <a href="">comments powered by Disqus.</a></noscript>
<a href="" class="dsq-brlink">blog comments powered by <span class="logo-disqus">Disqus</span></a>

disqus.js – containing the typical Disqus code

var disqus_shortname = 'hashtagsecurity'; // required: replace example with your forum shortname

(function() {
    var dsq = document.createElement('script'); dsq.type = 'text/javascript'; dsq.async = true;
    dsq.src = '//' + disqus_shortname + '';
    (document.getElementsByTagName('head')[0] || document.getElementsByTagName('body')[0]).appendChild(dsq);
})();

The problem here is that loading disqus.html into my blog via an iframe resulted in something like this.

At first glance, I thought that did the trick. But after checking the disqus.html file directly, I saw that it should really look like this.

What happened is that the iframe holding disqus.html didn’t resize properly in height. After looking into dynamically resizing iframes to fit their content, I realized that this wouldn’t be possible without adding more JavaScript, which in turn would have meant further adjustments to my original CSP rule.
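For completeness, the missing piece would have looked roughly like this: the framed page reports its height to the parent via postMessage, and the parent resizes the iframe accordingly. All names here are my own sketch, and both halves would have to live in external .js files to stay CSP-compatible, which is exactly the extra work I wanted to avoid:

```javascript
// --- child (inside disqus.html): report the document height upwards ---
function reportHeight(win, height) {
  // In production, the '*' target origin should be the parent's origin.
  win.parent.postMessage({ disqusHeight: height }, '*');
}

// --- parent (blog page): resize the iframe when a height report arrives ---
function resizeOnMessage(frame, data) {
  if (data && typeof data.disqusHeight === 'number') {
    frame.style.height = data.disqusHeight + 'px';
    return true;
  }
  return false;
}

// Wiring on the parent would be something like:
//   window.addEventListener('message', function (e) {
//     resizeOnMessage(document.getElementById('disqus-frame'), e.data);
//   });
```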

At that point I decided that Disqus just wasn’t worth the pain. So until I find a better comment system that integrates easily alongside CSP, you can simply reach out to me via Twitter. – @HashtagSecurity

Check open ports with Python

Every once in a while I run into problems testing new setups, just to find out that a certain port isn’t reachable from within our company network. This simple tool should make it easier to find open ports for quick testing.

Note: it’s not the fastest tool for huge numbers of ports (e.g. 1-65535), but it’s capable of rather quickly showing open ports in ranges like 1-1024.

Without further ado, here it is.

from multiprocessing import Pool
import sys, socket

if len(sys.argv) < 2 or sys.argv[1] == "help" or sys.argv[1] == "-h":
  print '''Usage: file <port> [port]
  <port>: specify the port to check
  <port>: if a second port is given, the script will check the port-port range
  help|-h: print this help text'''
  sys.exit(0)

def checkopenport(port):
  # Check host for open port
  try:
    #print "DEBUG: Testing port %i" % (port)
    p = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    returncode = p.connect_ex(("", port))  # "" = local machine; set your target host here
    if returncode == 0:
      print "Open: %d" % (port)
    else:
      print "Closed: %d" % (port)
    p.close()
  except Exception, e:
    print "Something went bad... Exception: %d %s" % (port, e)

firstport = int(sys.argv[1])
ports = [firstport]

if len(sys.argv) > 2:
  lastport = int(sys.argv[2])
  portrange = (lastport - firstport) + 1
  if portrange < 1:
    print "error, no ports found"
    sys.exit(1)
  print "portrange is %i" % (portrange)
  pool = Pool(processes=portrange)
  ports = [port for port in xrange(firstport, lastport + 1)]
else:
  # Only check the first port if no second port is given
  pool = Pool(processes=1)

# Run the workers
results = pool.map(checkopenport, ports)

Linux ACLs and sticky Users

Sticky and suid bits are quite helpful tools when it comes to keeping the correct permissions throughout a set of folders and files. But what if you want to do more than just a fixed group or an execute-as-user option?

Problem: get /var/www/ to be editable by a group of users (group: editors) without messing up the following permission schema: www-data:editors:others r-x:rwx:---.

folder: /var/www/ 
owner: www-data
group: editors 

New files will be created with permissions set to owner: creator, group: editors, when they should be:

r-x for www-data (owner)
rws for editors (group+sticky)
--- (none) for others

However, since new files belong to the creating user (e.g. creator), www-data is out of the picture, which means that Apache can’t access the file unless others have r-x permissions.

Linux Permissions

Here we have our test folder. It belongs to www-data:editors and has the setgid bit (s) and the sticky bit (T) set.

drwxrws--T  3 www-data editors   32 Nov 19 16:06 .
drwxr-xr-x 20 fmohr    fmohr   4,0K Nov 19 15:56 ..
-rw-r--r--  1 fmohr    editors    0 Nov 19 16:06 test01
drwxr-sr-x  2 www-data editors    6 Nov 19 16:05 test02

This however only lets us achieve fixed group ownership. As you can see with test01, the file was created by fmohr, not www-data.


Enter ACLs, the access control list permission system integrated into Linux.
At first sight, ACLs might look a little frustrating, but they’re actually quite easy to use.

First we create a new folder, in this case one belonging to root:root

$ ll
total 0
drwxr-xr-x 3 root root 17 Nov 19 16:26 .
drwxr-xr-x 3 root root 16 Nov 19 16:26 ..
drwxr-xr-x 2 root root  6 Nov 19 16:26 test

Of course we can’t do anything in this folder without using sudo, so let’s change that.

$ touch test/myfile
touch: cannot touch `test/myfile': Permission denied

First, install acl (or check if it’s already installed) with sudo apt-get install acl

You also need to check your filesystem and mounts for ACL support. EXT3/4 and XFS should have it enabled by default; others might have to be remounted. For more info on how to enable ACLs, check out this blog:

To see if your kernel supports ACLs, run the following command and check for your current filesystem.

$ grep _ACL /boot/config-*

Now set the user that should, from now on, have access to the folder and all files and folders in it.

$ sudo setfacl -m d:u:www-data:r-x test/	# default rule, inherited by new content
$ sudo setfacl -m u:www-data:r-x test/		# rule for the directory itself
$ ls -ld test/
drwxr-xr-x+ 2 root root 6 Nov 19 16:26 test/

As you might have noticed, the owner and group haven’t changed, but if you look a bit closer you can see the + sign after the last execution bit x. This indicates that ACL rules are in place and working.

We can’t create files yet since we are neither www-data nor root. Even if we were www-data, we would only have read and execute permissions.

To fix that, we need to set the group permissions.

$ sudo setfacl -m d:g:editors:rwx test/
$ sudo setfacl -m g:editors:rwx test/
$ ll
total 4,0K
drwxr-xr-x  3 root root 17 Nov 19 16:26 .
drwxr-xr-x  3 root root 16 Nov 19 16:26 ..
drwxrwxr-x+ 2 root root 17 Nov 19 17:03 test
$ touch test/filetest
$ ll test/filetest
-rw-rw-r--+ 1 fmohr CM_T_OPS_Team 0 Nov 19 17:07 test/filetest

The folder still displays root:root, but we can see the current ACL permissions by running getfacl.

$ ll test/
drwxrwxr-x+ 2 root  root          21 Nov 19 17:10 .

$ getfacl test/
# file: test/
# owner: root
# group: root

The new file automatically inherited the primary group and user of its creator, which is not what we want. The ACL rules, however, tell a different story.

$ ll test/
-rw-rw-r--+ 1 fmohr fmohr  0 Nov 19 17:07 filetest

$ getfacl test/filetest
# file: test/filetest
# owner: fmohr
# group: fmohr
user:www-data:r-x               #effective:r--
user:fmohr:rwx                  #effective:rw-
group::r-x                      #effective:r--
group:editors:rwx               #effective:rw-

They show us which owner and group are set by the Linux permission system, but also which users and groups are granted access by the ACL rules.

To expand on that a little bit:

$ getfacl test/filetest
# file: test/filetest
# owner: fmohr
# group: fmohr
user::rw-			# owner: fmohr 	(from default linux permissions)
user:www-data:r-x	# www-data 		(from ACL rule)
user:fmohr:rwx   	# fmohr			(from ACL rule)
group::r-x       	# group: fmohr	(from default linux permissions)
group:editors:rwx	# editors		(from ACL rule)
mask::rw-			# maximum effective permissions for named users and groups (automatic ACL)
other::r--			# other			(from default linux permissions)

As a good example, I will change the owner of filetest to root:root and set its permissions to 000, which should mean that no one can access the file.

$ sudo chown root:root filetest
$ sudo chmod 000 filetest
$ ll
----------+ 1 root root  0 Nov 19 17:07 filetest
$ cat filetest
cat: filetest: Permission denied

As you can see, even as user fmohr I can’t access the file. This is because the chmod command changed the ACL mask as well.

$ getfacl filetest
# file: filetest
# owner: root
# group: root
user:www-data:r-x               #effective:---
user:fmohr:rwx                  #effective:---
group::r-x                      #effective:---
group:application_treetool:rwx  #effective:---

If we set the mask back to the maximum permissions ACL is allowed to grant, our ACL rules give us access to the file again.

$ sudo setfacl -m mask:rwx filetest
$ echo "test" > filetest
$ cat filetest
test

As a final note: ACL default rules are inherited by new files, but they won’t be applied to existing files automatically. This means you have to apply them to your existing files and folders yourself (setfacl’s -R flag applies a rule recursively to an existing tree). After that, you’re set to work with a detailed permission system, featuring permissions for multiple groups and users regardless of the original owner of the files.

Thanks to Daniel Lawson over at Server Fault for a great answer!

CheatSheet – Ansible


A random collection of commands and playbook features for Ansible.

Setting SSH options

In /etc/ansible/ansible.cfg, SSH settings can be defined.

# uncomment this to disable SSH key host checking
host_key_checking = False
private_key_file = /etc/ansible/ansible.ppk

# ssh arguments to use
ssh_args = -o BatchMode=yes -o ForwardAgent=yes

Problems with -o BatchMode

Ansible gives you the option to pass SSH options such as BatchMode on to your Ansible runs. However, I ran into a problem with BatchMode and Ansible’s --ask-pass (-k) option.

I used the following command to check whether LDAP login worked on all hosts.

ansible 'all' -a hostname -u username -k

With ssh_args = -o BatchMode=yes enabled in /etc/ansible/ansible.cfg, the command failed. After I removed BatchMode=yes, everything worked. In hindsight this makes sense: BatchMode disables SSH’s interactive password prompts, which is exactly what --ask-pass relies on.

Fetch configs and store them on Ansible

To backup single files with Ansible, use the following fetch-configs.yml playbook:

# Fetch configs before rollout
- hosts: all:!fail
  remote_user: root
  gather_facts: true
  tasks:
    - name: Fetch config /etc/example.conf
      fetch: src=/etc/example.conf dest=/srv/ansible/archive/fetched/
    - name: Fetch config /etc/another.conf
      fetch: src=/etc/another.conf dest=/srv/ansible/archive/fetched/

# Optional: push to a git repository (/srv/ansible/ must be a git repo!)
# For more info, read below about automatically pushing configs to git
- include: ansible_commit.yml

To backup multiple files, use the synchronize module in pull mode.

- hosts: all:!fail
  gather_facts: true
  remote_user: root
  tasks:
    - name: Fetch all configs in /root/.ssh/ with the ansible sync module
      synchronize: mode=pull src=/./root/.ssh/ dest=/srv/ansible/archive/fetched/{{ inventory_hostname }}/ rsync_opts=-avR perms=yes

The last part, rsync_opts=-avR perms=yes, could still be optimised. I think perms and -a are on by default in Ansible’s synchronize module.

FYI, hosts: all:!fail selects all hosts except the ones I have added to the group [fail]. These are hosts that are known to fail during playbook runs but haven’t been fixed yet.

Automatically commit fetched configs to git
Assuming that you store all your fetched configs in one place on your Ansible server, e.g. /srv/ansible/archive/{{ansible_hostname}}/etc/example.conf, you can use the following autocommit.yml playbook to automatically push changes to your repository.

- hosts: localhost
  remote_user: root
  tasks:
    # Check if a commit is necessary
    - name: check if git commit is necessary
      command: git --git-dir=/srv/ansible/.git/ --work-tree=/srv/ansible/ status
      register: git_check

    # Commit changes in the Ansible directory
    - name: Committing changes on Ansible server
      local_action: shell cd /srv/ansible/ && git add * && git commit -m "Ansible Automated Commit" && git push
      when: "'nothing to commit' not in git_check.stdout"

In order to execute the playbook, just append it to your other playbooks. This is useful if, for example, you have a webserver.yml playbook that fetches all configs before deploying new changes.

# Webserver playbook
[...] # <- whatever you do in your playbook

# Update & Push Ansible Local Repository
- include: autocommit.yml

Deploy timezone settings with the Ansible CLI

sudo ansible 'mygroup' -m shell -a 'echo "Europe/Berlin" |sudo tee /etc/timezone' -u user -K
sudo ansible 'mygroup' -m shell -a 'sudo cp -f /usr/share/zoneinfo/Europe/Berlin /etc/localtime' -u user -K
sudo ansible 'mygroup' -m shell -a 'sudo ntpdate-debian' -u user -K