Change OpenVAS Session Time

Here is a small piece of knowledge that prevented me from going nuts. Set your OpenVAS session expiry time before it drives you crazy!

OpenVAS is a great vulnerability scanner, but the default session expiry time is set to 15 minutes, which is just plain annoying when you’re running a scan and want to check in on it every now and then.

Set the session expiry time in the Greenbone Security Assistant (GSA) to 60 minutes by adjusting the init script. Depending on your installation and Linux distro, this file might be named differently.

sudo vi /etc/init.d/openvas-gsa

# Look for the daemon startup parameters and add
--timeout 60
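
After the edit, the daemon startup line might look roughly like this – note that the variable name and the other flags are just assumptions here and differ between versions and distros:

DAEMON_ARGS="--listen=127.0.0.1 --port=9392 --timeout 60"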

Restart the service and log in again in the web interface. Check the cookie for its expiration time.
You can check your cookies in Firefox from the Privacy tab in the Settings by clicking on the “remove individual cookies” link and searching for GSA.

Check the expiry date; it should be 60 minutes after your login time.

Disqus to fully support CSP

I already blogged about my problems with Disqus and the Content Security Policy header twice, but recent changes in Disqus made me revisit the whole topic.

Burak Yiğit Kaya, developer at Disqus, made a few changes we had discussed about a month ago that improve the coexistence of Disqus and CSP on a website. While both could be run together before, these improvements transform the combination from a dirtily hacked state into a real dream team.

If you have ever implemented CSP on a site that includes third-party components – which today nearly every site does, in the form of some social integration plugin – you know how many different hosts, URLs and objects have to be whitelisted. And even if you whitelist everything, some cloud services depend on things like inline JavaScript, CSS or even the dreaded eval function (shudder).

Back when Burak first contacted me, we came up with a few ideas on how to improve Disqus to work better with CSP, such as:

  • remove all inline CSS
  • remove all inline images (data:base64=…)
  • unify all resources under two distinct domains:
    • a.disquscdn.com for static content (cookieless domain)
    • disqus.com for dynamic content

I’m happy to report that all of the above improvements have now been implemented, which I think is awesome news. There is one more thing on the to-do list, which Burak said he has yet to solve:

  • move the beacon pixel at referrer.disqus.com to the other domain.

The last one isn’t really that important, but it removes one domain from the policy, which is always a good thing as it keeps the ruleset shorter and thus easier to maintain. But why is it so important to have good integration with CSP? If the hack worked, why should Disqus care about a proper fix and, more importantly, why should they spend resources on this?

For one, supporting security features like CSP, and actually working together with people who have questions, concerns or ideas for improvement regarding a product’s security, shows that a company actually cares. Of course, that’s what every company always claims – especially in the light of any recent security fails – but here we have actual proof.

There is more to it than that, though. Up until now, getting CSP and Disqus on the same page required you to either block certain requests or allow them via unsafe CSP options. I’m talking about things like inline CSS, images embedded as base64-encoded strings and alternate domains that serve nice-to-have content such as icons. Of course, diminishing the security of our CSP is not really an option, but blocking sources in favor of keeping the unsafe-* options disabled is also a bad choice, as it results in your CSP logs getting spammed with violations. You do log your violations, don’t you?

The CSP logs are a great way of receiving notifications as soon as someone stumbles across a potential XSS vulnerability and starts tinkering with it. If code is injected successfully, the CSP will block it and create a new entry in the log files. All you need to do is set up CSP and make sure that normal browsing of your site doesn’t create any violations. Once your site is clean, set up some form of notification for anything that hits the logs.
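
Getting those logs is a matter of adding the report-uri directive to the policy; a minimal sketch, assuming a collector endpoint at /csp-report on your own host:

Content-Security-Policy: default-src 'self'; report-uri /csp-report

Browsers then POST a small JSON document to that endpoint for every violation, including fields like violated-directive and blocked-uri – plenty to build a notification on.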

Of course that’s only an option when your site is actually clean and not constantly throwing violation errors in your face.

Finally, Burak told me that he wanted to write a howto on using Disqus with CSP, which is great.

If you’re curious as to how my CSP looks, just take a look at my HTTP headers with

curl -I https://www.hashtagsecurity.com

This should give you the full ruleset, among other headers I’ve set.
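
If you only want the CSP itself, filtering the output works too:

curl -sI https://www.hashtagsecurity.com | grep -i content-security-policy
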
If you have questions about CSP, use the comments, contact me on Twitter, or check out my CSP talk.

Protect Your Data

Are cloud services safe to use, or are you better off creating your own data castle? Let’s take a look at the differences between cloud services and self-hosted solutions, and why trust is a key part of security.

Cloud services have become widely used over the past years, and it looks like they’ll be around for a while. But there are many concerns about users’ privacy and the security of the stored data, voiced by professionals and by the users of these services alike.

With government surveillance, the Snowden leaks and general cloud security fails like the Apple iCloud incident, many decided to take back control over their data and store it somewhere “safe”. But is hosting the data yourself really safer than the majority of cloud services?

Cloud Security

It’s hard to tell whether a cloud service is safe to use or not. In all but a very few cases you get little to no insight into how data is secured, how the overall company security is set up and how your data is insured. That’s right, insurance also plays a part. What if your sensitive data is leaked by a pissed-off employee? As a private user, such an incident might sting, but for a company this can be a real threat to its existence if the leaked data contains business-critical information.

Data safety might also be an issue. This new product might be all the rage at the moment, but is the small startup company behind it able to afford backups, or is a RAID3 all that’s protecting your data from being lost for good?

In the end, all you can do is research and ask. Try to find out as much as possible about the service and company you want to entrust with your data, and don’t be afraid to ask them about their security. The first response you get is usually “We take security very seriously”, but if you persistently ask specific questions, you might just get a real answer.
Important things to keep in mind are:

  • Data Backups – If possible in a second datacenter or availability zone.
  • 2-Factor Authentication – User logins alone might not be enough to secure your account.
  • Reputation – Were there any known security issues in the past? Is the company known at all?
  • Data Control – Can you delete data for good? Or is it stored in the cloud forever?

[Just a thought] – A security related questionnaire for cloud service providers, and a public index of companies that already provided answers to these would be a swell idea. Let me know if you’re building, or know of, such a service.

Of course you could just decide to only trust yourself and do your own thing, and that’s exactly the reason for this post. Over the past two years, I have met lots of people who decided to go their own way, despite having next to no knowledge of how these things work.

Can you do better?

The big question is whether you can do it better. Since trust in cloud services has taken a huge hit, self-hosted applications have become a popular alternative. But there are many things to consider if you want to roll your own “cloud”.

  • Do you know how to secure your server and the application that is running on it?
  • Do you have the time to continuously apply patches to both system and application?
  • Do you have the time to regularly check for misconfigurations and security holes?
  • Do you have enough space to make backups (not on your server!)?

Or in short

  • Do you have ALL the required resources to do this?

If your answer is yes, then you should ask yourself one more question: is it worth it? A lot of money, time and nerves are spent on hosting your own cloud applications in a secure manner, and since you started all of this because you want to protect your data, doing it in an insecure way would just be you lying to yourself about the security and safety of your data.

There are of course ways to minimize the risk and the level of trust required to use cloud services, such as encrypting everything before uploading it – just in case you feel a bit lost right now.
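
A quick sketch of that approach with GnuPG, which most distros ship out of the box (the file name is just a placeholder):

# encrypt locally, upload only the resulting .gpg file
gpg --symmetric --cipher-algo AES256 backup.tar

# after downloading it again
gpg --decrypt backup.tar.gpg > backup.tar

That way the provider only ever sees ciphertext, and the trust you have to place in them shrinks considerably.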

Let me get one thing straight: I’m not trying to discourage anyone from running their own server. In fact, I would love to encourage anyone who wants to take back control over their data. I’ve been running my own server(s) for a couple of years now and I’m pretty happy with it. But I also want people to actually increase their security.

Feeling safe != being safe

The thing is, no matter what you do, when it comes to security, there will always be some level of trust involved. The further you minimize the required amount of trust in the ability and intentions of others, the more the required amount of resources will increase.

For example, you could host your own Owncloud instance on a rented vserver. Now you don’t have to hand over your data to Dropbox, Google or similar services. But now you have to put your trust in others.

You trust the Owncloud developers and the company behind them to do a good job at writing secure code and not to harbor ill intentions towards their users (or be subject to any government enforcement against them). You probably also trust the community behind the project to keep an eye out for any bugs, vulnerabilities or suspicious occurrences regarding the project. Next, you trust the hoster that provides the vserver you rented to be honest enough not to copy all the data you store on your server somewhere else, where it would be outside of your control.

Of course you could move everything to a local NAS running inside your home network, removing the issue of trusting a cheap hosting company, but probably suffering much slower connection speeds if you need your cloud to be available wherever you go.

Raise the bar, keep the balance

Security is all about raising the bar, but you still have to keep the balance between higher security and the resources required to achieve it. There is no absolute solution, and everyone has to decide what’s the best choice for themselves. So be sure to ask yourself these questions:

  • Am I really improving on what I already have?
  • Do I have the required resources to do so?
  • Is it worth the extra effort and do I want to spend my spare time on this?
  • Is there no cheaper way (time, effort, money) to increase security?

Especially the last question is often interesting. A compromise – cloud services combined with local encryption – might help a lot of people get over the trust issue without falling into a pit of increased work, lost time and most likely spent money.
Summary

These are just a few thoughts that have been rumbling around inside my head after I talked to a few people about home cloud setups. Most of these people have little to no knowledge about service administration or security, which is why I was a bit torn between recommending for or against it.

Please share your thoughts on this with me, if you have any, either via Twitter @HashtagSecurity or in the comment section below.

(W)BP#3 – HAProxy SNI, IPython, PostgreSQL and VIM

A new bucket post – I will change them from weekly to “whenever I feel like it”. Mainly because I can’t find the time to write actual posts between the bucket posts and I don’t want this blog to consist solely of bucket posts.

SSL Client Certificate Support for Owncloud – Meanwhile on the interwebs, the support for client certificate authentication in Owncloud’s desktop client “Mirall” is progressing. So I didn’t do anything and I didn’t learn anything… why is this even here?

Because I’m really looking forward to it! In fact, I’m planning on writing a blog post about the lack of support for additional authentication layers in desktop applications next week!

Also, I’m curious who will claim my bounty! I assume @qknight.

Windows NTP Problems Round 2 – Apparently my “fix” from last week’s post didn’t really fix my time issue with Windows 8. After a reboot, the clock is automatically set to be off by one hour. Fortunately a friend of mine read the post and sent me this link.

Dual Boot: Fix Time Differences Between Ubuntu And Windows

The problem lies in my dual-boot setup of Kubuntu 14.04 and Windows 8.1. For me, the solution was this command:

sudo sed -i 's/UTC=yes/UTC=no/' /etc/default/rcS

If you want to fix the problem using Windows, check out the link above. There is more than one way to do this.
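
From what I’ve read, the Windows-side fix boils down to telling Windows to treat the hardware clock as UTC via the registry. I haven’t tested this myself, so treat it as a sketch:

reg add "HKLM\SYSTEM\CurrentControlSet\Control\TimeZoneInformation" /v RealTimeIsUniversal /t REG_DWORD /d 1 /f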

SNI with HAProxy – Last week I encountered a few problems with HAProxy and Server Name Indication, or SNI.

SNI is used by webservers to distinguish between multiple SSL/TLS vhosts. In a normal HTTP setup, webservers can easily tell which site is being requested. With TLS in place, this becomes impossible without decrypting the traffic. In order to host multiple websites on the same IP and port (443), the client is required to send the hostname before transport encryption is established. That’s exactly what SNI does.

Usually SNI allows you to create different vhosts like this (pseudo code)

www.example.com:443
  www.example.com settings
private.example.com:443
  private.example.com settings

In HAProxy however, it looks more like this (pseudo code)

*:443
  use_backend www if sni is www.example.com
  use_backend private if sni is private.example.com

The problem here is that a lot of settings are done in the frontend, not the backend, and therefore some settings cannot be set per vhost. I found a solution to this problem, which I documented on serverfault.com. If I find the time, I’ll write a blog post that explains everything in more detail.
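
For reference, the core of such a setup looks roughly like this in TCP mode – backend names and addresses are placeholders, so treat it as a sketch rather than a drop-in config:

frontend https-in
    bind *:443
    mode tcp
    # wait for the TLS ClientHello so the SNI value is available
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }
    use_backend www if { req_ssl_sni -i www.example.com }
    use_backend private if { req_ssl_sni -i private.example.com }

backend www
    mode tcp
    server www1 127.0.0.1:8441

backend private
    mode tcp
    server private1 127.0.0.1:8442

Since HAProxy never decrypts anything in this mode, TLS settings have to live on the servers behind it – which is exactly the frontend/backend limitation described above.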

Seriously, why is this never documented??? – Following a howto about something that includes PostgreSQL on Ubuntu 14.04 is always a pain, mainly because these two lines seem to be missing every single time!

$ sudo useradd -U -s /bin/bash postgres
$ sudo pg_createcluster 9.3 main --start

source: askubuntu.com
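
To verify that the cluster actually came up, pg_lsclusters (shipped with the same postgresql-common package) should now list main as online:

$ pg_lsclusters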

IPython Notebook – Looking for a new web-based notebook? I was! And I found “IPython Notebook” which is, to keep it short, awesome.
To showcase a few of the many features I like…

  • Run Python code
  • Use Markdown
  • Preview

VIM modelines
VIM modelines look something like this:
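
# vim: set ts=4 sw=4 expandtab :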

and can be used to set VIM settings for specific files. By appending the modeline, VIM will adjust its settings for that file accordingly, unless modelines are disabled.

Modelines can be temporarily enabled by running :set modeline or permanently by adding set modeline to your ~/.vimrc.

Note that modeline is off by default when editing as root.

VIM jar – VIM never ceases to amaze me, and the limit to things one can learn about it seems to be non-existent.

I looked for a tool to explore the contents of a jar file. As it turns out, it’s just a zipped archive, so unpacking it would do the trick – however, you can just open it with vim and have a look around without extracting the files first.

If you have unzip installed that is.
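
So the whole “tool” boils down to this (the file name is a placeholder):

vim some-library.jar    # vim's zip plugin shows a browsable list of the archive contents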

I usually use tar, so unzip is something I don’t have installed by default, but now I might just have enough reason to install it as well.

Kubuntu on L420 – Just a quick addition: I recently bought a Thinkpad L420 for 220€ on ebay. Unfortunately, Kubuntu only booted with the acpi=off and nolapic flags. After a BIOS upgrade with this boot CD, everything worked fine. Just in case anyone faces this issue as well.

Links – Interesting things I found on the webs

Security vs. Compatibility – Fight!

How do you secure a web page for private use? Easy, if you ask me: use client certificates and a secure connection over TLS, preferably with a signed certificate.

So far so great – the website is pretty secure now, as only someone with the right certificate can visit it. But what about compatibility with client programs? Well – fuck!

Turns out, most endpoint clients don’t really support additional authentication mechanisms. In a best case scenario, one login with a strong password should be sufficient to secure something. However, I like to add additional layers of security to prevent possible flaws in the web application from ruining my day.

Here are some additional layers we could use to increase security.

  • HTTP Proxy – Add additional authentication by running the webapp behind a web proxy
  • Client Certificates – The secure connection to the server requires the client to provide a valid certificate before browsing the site
  • Basic Auth – Webservers often offer a basic authentication mechanism, requiring a valid login to connect to the requested site
  • VPN – Running the web application inside a private network, forcing users to be connected to the network either physically or virtually.

The first three share the same problem: not all client software supports these authentication mechanisms. If any is supported at all, it would be the proxy, but even that’s not available everywhere. Plus, you’d need to set a rule in your browser to only use the proxy for that single domain, otherwise you’re browsing everywhere via proxy, which can have a pretty hefty impact on your browsing performance.
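
That per-domain rule can at least be automated with a small proxy auto-config (PAC) file; a sketch with placeholder hostnames:

function FindProxyForURL(url, host) {
    // only the protected vhost goes through the authenticating proxy
    if (host == "private.example.com")
        return "PROXY proxy.example.com:3128";
    return "DIRECT";
}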

Client certificates and basic auth are easy to set up and also pretty secure, provided that the underlying connection is not flawed. However, they enjoy even less support in client software outside of common browsers, which can put you in the situation of having to choose between the client and the security layer.
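
The setup part really is trivial; basic auth in nginx, for example, is just a few lines (paths and user name are placeholders):

# create the password file (htpasswd is part of apache2-utils)
sudo htpasswd -c /etc/nginx/.htpasswd alice

# inside the server block
location / {
    auth_basic "Restricted";
    auth_basic_user_file /etc/nginx/.htpasswd;
}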

“You could always use SSH and open a tunnel between the web application and the client!” – that’s a dirty hack! We’re not going to talk about that! It works, yes. But I don’t see it as a good, permanent solution.

It seems to me that VPN is the clear winner here. It’s independent of client software and can be run in split-tunnel mode. This is pretty much all theory though, spun together in that head of mine. I would like to know if any of my potential readers have thoughts (or dare I even say opinions) on this, especially on the use of VPN as an alternative to the other solutions.

Tweet to @HashtagSecurity or use the comment system below.

Testing for GHOST Vulnerability with Ansible

Yesterday, reports of a critical vulnerability in the GNU C Library (glibc) hit the news. If you have more than just a handful of servers to check, this Ansible playbook might be helpful.

You can read all about the vulnerability here.

Usually, when I have to check multiple servers, I use Ansible like this:

ansible 'servergroup' -m shell -a 'command to execute'

In the case of GHOST, this wasn’t working for me, as the test command contains both ' and " characters. Escaping them didn’t seem to work either, so I wrote a small playbook to take care of it.

- hosts: hosts,or,groups,comma,separated
  remote_user: sshuser
  tasks:
    - name: Check if host is vulnerable
      shell: php -r '$e="0";for($i=0;$i<2500;$i++){$e="0$e";} gethostbyname($e);'
      register: ghostvuln
    - debug: var=ghostvuln.stdout_lines

That’s it. If you’ve never used Ansible before, just follow these steps.

  • install Ansible from your OS repository
  • add hosts to /etc/ansible/hosts:

    [groupname]
    host1
    host2
  • make sure your SSH key is loaded with ssh-add -L
  • test if Ansible reaches every host

    ansible 'group,or,hosts' -m shell -a 'hostname -f'
  • execute the Ansible playbook

    ansible-playbook /path/to/playbook

If you get either one of these results, everything is fine.

changed just means that the command could be executed – check further down for the actual result. As you can see in the second image, the command did not return a segmentation fault.
failed means that the command could not be executed, in this case because PHP isn’t installed.

This is an example of a vulnerable host returning a segfault message.

Note: The check command above uses PHP, which I don’t have installed on all my servers. Since this is a glibc vulnerability, I’m pretty sure that hosts can be vulnerable even if PHP is not installed. I will update this post if I find a way to check servers without PHP. Until then, install php5-cli if you don’t have PHP on the system.

Update 1: More info on this bug can be found here, including how to get a list of services that use glibc (Debian/Ubuntu)

sudo lsof | grep libc | awk '{print $1}' | sort | uniq

Also, make sure to reboot the servers after you’ve installed the patches or the server will remain vulnerable!

Update 2: An easier way to check if your server is vulnerable is to check the glibc version by running the following command.

$ ldd --version
ldd (Ubuntu EGLIBC 2.19-0ubuntu6.4) 2.19
Copyright (C) 2014 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
Written by Roland McGrath and Ulrich Drepper.

$ ansible 'servergroup' -m shell -a 'ldd --version |grep "^ldd"'

According to Tomas Hoger, the issue was fixed in glibc 2.18.


source: bugzilla.redhat.com

We got hacked! Now what?

Almost a year ago, I experienced my first real security incident. The company’s bulletin board was compromised and it was my job to oversee and coordinate the incident response. The teams and I were pretty much thrown in at the deep end, as we had never experienced an incident of that size before.

Right after the incident I wrote the following blog post, which I’m now able to publish. Please note that I deliberately didn’t change anything, as I wrote it back when my memories of everything were still fresh in all detail.

A note up front!

Please note that this is a private blog and although I was an employee of CHIP Digital GmbH, all opinions depicted here are solely my own!

This is a write-up of last week’s events from my perspective and how I experienced it.

If you’re not from Germany, you might have missed the news that the bulletin board at forum.chip.de was hacked. CHIP is a technical magazine targeted at end users, and the board has around 2.4 million registered users. Not all of them are active, some have never even activated their account, but it’s still a decent amount of users. Unfortunately, this only makes it so much worse if there is a breach.

So what happened?

Well, if you speak German you can read the official statement here or try your luck with the Google translator.

In summary, on Monday, 24.03.2014, someone gained unauthorized access to our bulletin board. As of right now, we still don’t know how they got access, but the compromised account in question had at least some higher permissions, allowing the attacker to compromise further employee accounts. As soon as we noticed that there was something fishy going on, we took the system offline and notified users that the board was under maintenance. We hired external forensic experts to secure any evidence of the breach and analyze the system, so we could figure out what happened and how it happened. Once we were told that there was a chance that user data had been accessed, we notified all email addresses in the database (2.4M).

At the time of this writing, we’re still unsure whether user data has been stolen.

But there is more!

All of the above seems quite simple. We got hacked, we hired forensic experts, we notified our users.

First of all, there are some points I want to make clear:

  1. First contact: I’ve never worked with forensic analysts before
  2. As you might have guessed, this is one of those situations that are hard to prepare for
  3. Yes, we made mistakes! (We’re not perfect! Again – my opinion!)
  4. Time ran really fast this week…

That being said, let’s take a closer look at my last week.

Monday

I have to admit my first mistake right here – I didn’t check my mail. I went home after work and didn’t check my phone. That’s why I missed out on a lot of information. If I had checked my mail, I might have been able to make a decision much earlier. This wouldn’t have prevented the incident, but we might have been able to get forensics in on Tuesday morning.

Tuesday

I checked my mails during breakfast and that’s when all the information hit me in one big wave. From that point on, I didn’t have a real break for a good amount of time (and I wasn’t the only one!). So I rushed to work and tried to figure out what exactly had happened, who had what information and if anyone had come up with some sort of plan.

Before I continue, I have to thank all my co-workers for doing a really great job! (Don’t even think about me doing all the work – I was quite busy, but there were a lot of people who gave everything!)

Just before the first meeting was called, I managed to contact a forensics company and asked if they had someone who could come in ASAP. They had to check – which meant I had to wait. It didn’t matter, because I had just enough time to rush to the meeting, which was about to begin. Why even bother with meetings in a situation like this? Well, as it turns out, the meetings were (although exhausting) really important, as they kept us focused on what’s important and in sync information-wise. The first one was a bit chaotic, since almost all colleagues from the technical department were attending and we had to find out what we knew and how we would go about fixing it. But it was a good way to figure out who would join which team, and who had the necessary know-how to help which party.

Overall there were four main teams. (I’m deeply sorry if I missed anyone. These are not all of the people that helped, just a rough overview of the main groups.)

The forensics team
Charged with the analysis of the systems, this “team” consisted of one of my co-workers, me and of course the forensic experts. We had one analyst with us on-site (from here on called Mr. M.), and it was our job to support him by handing him logfiles and database dumps, giving him physical access to the servers and answering all of his questions.

The “Revive” team

I call it that, since “Revive” is the word from their whiteboard that got stuck in my mind. These guys were busy the whole time getting the board back online in a secure way, so that users could log in and interact again. Their first goal on Tuesday was to get the board back online in read-only mode. The system was set to read-only so that the content was available again (on new servers, of course!) but couldn’t be tampered with in case the attacker returned. The second goal was to set up the whole board with tightened security on new hardware, since the old one couldn’t be used. You might think that setting up a bulletin board is an easy task, but this system is very complex and they had to jump through quite a few hoops in order to get there. This was by far more than just recovering a backup. We didn’t even know if the backups could be trusted! – This is all you will be hearing from this team, since I wasn’t part of it. But for me it was an amazing example of their skills put to the test and I’m glad to be working with such great people!

The communication team

The communication team was a bit vaguely defined, since a lot of people were part of it at some point, but its core included of course our public relations manager, top-level management, part of the community team and myself (mostly for technical questions). This team was formed after the meeting and put in charge of informing our users and handling communication throughout the company and with our data privacy officer, lawyers and such. They had to gather and distribute all information, sort it and make decisions based on it – and they made the right ones, in my opinion!

The community team

This is the only real team. The community team normally moderates the bulletin board, as well as social media like Twitter, Youtube and the like. Aside from their role in the communication team, they had to answer all questions that came in, some of which had to be checked with forensics in order to avoid spreading rumors or making statements that were not true (or not yet declared as facts by Mr. M.).

As you can see, a lot of people had to make sure everyone was up to date on the information, which was a lot of work that had to be done on the side. After the meeting, I called back our forensics contact and he told me that one of his analysts (Mr. M.) could be on-site in about two hours. Once Mr. M. arrived, we briefed him on the incident and he quickly explained the next steps. First of all, we had to take a complete dump of the compromised server’s hard disk, since that takes a long time to finish. In case you wonder – no, you can’t use your standard backup software. He didn’t need a backup of the files on the disk, but a complete image of the disk. We handed Mr. M. our webserver log files and dumped the complete database so he could analyze it. We spent the rest of the day going through log entries line by line, trying to figure out which IPs belonged to the attacker and which actions were taken. Mr. M. took the files back to his lab, where he continued working until late at night.

Wednesday

Wednesday morning, we drove back to the data center to get the disk image. Unfortunately, I had made a mistake when calculating the approximate time the dump process would take, so the image wasn’t done yet. First of all, I used the size of the data stored on the disks and compared it to the data transfer speed, which was wrong because the size of a complete image is obviously not the size of the data stored, but of the complete disk array. The second mistake I made was that I trusted my own calculation. I could have checked from my workstation whether the copy job was finished, and we lost valuable time because of that.

Since we couldn’t analyze the disk image, we continued to analyze logfiles and the database dumps. It was helpful that we had the (tamper-proof) webserver logs from Akamai to cross-reference whether any of our logfiles had been tampered with. Later that day we found the first signs of a possible access to the database. At this point it was still just guessing, but we decided that we needed to go public if there was a possibility that user data had been accessed. That was also the point where I started to jump between the forensics and the communications team. I became in charge of making sure that any publicized information was correct (from a technical or forensic point of view). The thing we wanted to avoid the most was that rumors or even wrong information got out.

Much later, we went back to the data center to get the disk images, which Mr. M. took back to his lab in order to analyze them properly. I did get to go home, but I spent the rest of my evening documenting everything I knew, so we were all on the same page.

Thursday

On Wednesday evening we had made the decision to go public and that the message should go out to our users the next day at 15:00. It bugged me that we would wait so long to send the message, but as it turned out we needed the time, and I’m glad that our management knew better than I did. Preparing a message in both German and English was one thing, because we had to discuss the phrasing and what we could write (again, we didn’t want to spread rumors, but tell people what we knew so far). The other thing was to prepare a short FAQ on what people should and could do to be safe in the meantime. The biggest problem, however, was handling the amount of outgoing mails and the expected responses. We decided to go with our newsletter service, to which we imported all of the email addresses. But we couldn’t deliver all mails at once, so we had to send them in batches. The whole process took longer than I liked, but we couldn’t change it. Meanwhile, the FAQ was published. Unfortunately, because we were all in a hurry, someone set the FAQ’s publishing timestamp to 1972.

That was the end of the day and most of what I remember of it. It doesn’t sound like a whole day of work, but there were so many ends to tie together and so many decisions to be made on the spot that I was totally worn out when I got home. The rest of the day, we tried to monitor the web for any reaction to our outgoing mail. It was much quieter than I expected.

Here are some examples of the problems we had to deal with that day. It’s good that everyone has their own opinion, because it can help find the best solution, but time was short and we had to make sure that we used it as efficiently as we could.

Our passwords are hashed – do we write “encrypted”?

This is a problem in the German language. Most (not technically inclined) people say “verschlüsselt” (en: “encrypted”) for both hashing and encryption. The problem is that hashing doesn’t have a German translation that is as widely used as the counterpart for encryption. So by writing “verschlüsselt” we would make sure that most users understood what we were saying, but at the same time risked that users who knew the difference might think that we couldn’t tell hashing from encryption. As it turned out, that’s exactly what some of the users were thinking (and posting). Oh well.

Do we write forensics or just experts?

For me it was pretty clear that we would tell people that we hired external forensic experts, not just external experts. Why? Because we have experts for various fields, but forensics just isn’t one of them. Forensics isn’t part of our daily work; in fact, this was the first forensic job we had since I started working at CHIP, so it makes sense that we don’t have a full-time forensic analyst.

Where do we put the FAQ?

This is a tough nut to crack for every company that finds itself in this kind of situation. You obviously don’t want to spread bad news, but you want to make sure that all users are notified. So we decided not to post it on the front page of our main website, but on the front page of our bulletin board. Since the board was still read-only, and our “Revive” team was still tinkering with it, we had problems putting a message online. So we went with the fastest solution we had: replacing the top ad with a custom banner. We created a banner with a message to all our users and delivered it via our ad service.
There was just one little problem we didn’t think of. (Again, a lot of stress and time pressure – you might overlook something.)

Adblockers!

Everyone with an active adblocker didn’t see our banner and therefore just saw a clean, read-only board without the message. The users still got the link to our FAQ hosted on our domain, but since the FAQ was published with a timestamp of 1972, some of our users thought this might all just be fake and that maybe someone was trying a phishing attack. – Not how we thought it would go!

Newsletters

Also, since we sent our message via our newsletter service, some of our users filtered it or received it as spam. We even got some comments from users saying they deleted it without reading it because they thought it was just another newsletter. Damn!

Friday

This was the day! The community team was prepared to answer the responses from 2.4 million outgoing mails while the rest of us tried to keep an eye on what was going on in the web.

Which sites had written or blogged about us so far, what questions hadn’t we answered yet and the biggest question – should we publish an article with further information? It’s a tightrope walk between flooding people with unnecessary information and handing over only the important facts without hiding something. We were prepared to answer all the questions that could possibly come in, but in order to avoid wrong statements we agreed that answers to technical questions had to be checked with me before being sent out. It worked quite well, mostly because the amount of incoming questions and responses was lower than expected. Also, most of them were from users who had already forgotten they still had an account with our site and simply asked us to delete it.

Still, it was a tough ride that wore on our nerves, because we didn’t know when the big flame-wave was going to hit us.

Summary

As of right now, forensic analysis is still going on so we don’t have all of the information yet. I think we did quite well considering the situation. But there is always something to learn from your mistakes and that’s what I’m trying to do.
Below is a list of things I came up with that can help anybody who wants to prepare for a situation like this.

  1. It can happen to anyone – Yes, we are a computer magazine and many people think that we “should have known better”. But the fact is that there is no such thing as being secure, only best effort. And you want to make sure that your best effort is something you can present without feeling the need to hide anything!
  2. Always check how your data is secured and document it. You don’t want to be in the position where you have to check first when someone asks you that question.
  3. Create a workflow to check regularly if the way you are storing your data is still state-of-the-art or if you have to improve on it.
  4. Prepare for emergencies – This is really hard, because how do you prepare for something you don’t know yet? Define a group of people with the skills required to
    • check your systems – The technical go-to person who can answer all technical questions or at least find out the answer. This should also be the person to speak with forensics.
    • handle communication with your data privacy officer, law enforcement, your lawyers, management, etc.
    • make decisions – you need someone who can make the required decisions, and make them fast. If you have multiple managers, let them decide who gets to make the call. The more people involved in decisions, the longer it’s going to take!
    • step in as backup if some of the people above are not available.
  5. Time is of the essence – create a detailed workflow on how to communicate and make sure everyone knows and uses it. If you need to collaborate on documents or statements, make sure you all use the same software.
  6. Create an emergency response team, a group of people who know how to handle a system that has been compromised. They don’t need to be forensic experts, but they should know what to do in order to prepare the scene for the analysts.
  7. Take breaks – force yourself to take a break every once in a while. Situations like these are stressful and at some point you will make a mistake if you don’t rest. Lock your workstation and go for a 10-minute walk if it’s nice outside. Otherwise, get a coffee and don’t drink it at your desk! (or at a meeting!)
  8. Talk to your CEO and PR about disclosure and what their official statement is. When the situation comes, they might want to reconsider, so write down what the decision was and exactly why they made it. This can save time, and that’s all that counts!
  9. Find a forensics company if you don’t have your own analyst. If something like this goes down, you don’t want to spend time on searching for a suitable company. Keep the phone number in your drawer!
  10. Get your employees on board
    • tell them what happened and that they are not allowed to communicate anything on their own!
    • choose a dedicated person that your employees can contact for questions, or to whom they can forward questions they received (in case something has been leaked already)
    • don’t hide information from them – if it’s a fact or even a strong possibility, you should tell them!

I’m glad that we made the right decisions, even if we didn’t think of everything. And as amazing as it was to see all those people giving everything they had to resolve this problem, I still hope that we don’t have to deal with this kind of situation again.

Creative Commons License

We got hacked! now what? by HashtagSecurity is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
Based on a work at https://www.hashtagsecurity.com.

WBP#2 – SHA256, Flask, Safari, Keybase.io and more…

The Weekly Bucket Post goes into the second round. This week we have wrong SHA256 hashes, problems with Safari and, amongst other things, an invitation to keybase.io!

sha256sum creates “wrong” hash – Recently I was wondering why a SHA256 hash of the string password was listed in neither the Hash Toolkit nor the LeakDB databases. Surely someone must have used password as a password somewhere!?

Turns out, password is of course in both those databases; what isn’t listed, though, is password\n. The reason I fell into this trap is that I forgot that the bash command echo always appends a newline unless you call it with the -n switch.

# Same strings
echo -ne "string\n" | sha256sum
echo "string" | sha256sum

# what you really want...
echo -n "string" | sha256sum

Browser Testing – Oh Safari, now I know why I don’t use you… because I can’t!

Last week, @MikeyJck was kind enough to let me know that Apple’s Safari browser was asking for a client certificate when browsing my site. While this is actually something I use on a few pages and subdomains, it should certainly not appear on the front page of this blog.

Since I don’t own a Mac, I went looking for ways to (efficiently) test websites with Safari – so far without great results!

Safari was available for Windows once, but after version 5.1.7 Apple killed it and now it’s only available for systems running OSX.

The alternatives I found are not really alternatives in my opinion, as they either build on said Windows version or on making screenshots of your page, which doesn’t help much when you’re trying to debug strange behaviour.

Running late on Windows 8 – For some reason my Windows 8.1 desktop was running an hour late. No matter what I did, it kept changing back to UTC, when it should be on UTC+1.

This seems to be a very well known problem in Windows, the reason for which is the default NTP servers Microsoft has set for their operating system.

By setting the NTP servers to pool.ntp.org, the clock jumped by one hour and displayed the correct time. What’s interesting is that public posts about this problem were talking about a time difference of a few minutes. For me it was exactly one hour.

Windows 8 time settings (in German – had no choice!)

Youtube, MP3 and distribution of malicious code – Last week I found myself needing to use one of those “Youtube to MP3 downloaders”, which I’m always kinda sceptical about. Not only is the sound quality crap, but obviously you have no control over what you’re really downloading – and in most cases you don’t really have a trustworthy brand behind it either.

Said situation spawned some food for thought. I know that malicious code can be stored in MP3s, which really isn’t anything new. But I can’t help but wonder if adding malicious sound tracks to youtube videos could make for a “good” distribution mechanism. Especially if you’re trying to target unsuspecting smartphone kiddies who’re downloading their music from YT.

Keybase.io – There isn’t much to say about keybase.io that isn’t on the site itself already. It’s pretty self-explanatory, and for everything else there is the FAQ. So I’m just gonna say this:

  • It’s the new shit
  • It’s a great idea
  • And it was about time somebody did it!

Also, I’m on there! #TrackMeINeedSnapshots!
If you need an invitation, drop me a line on Twitter!

Local IP and DNS don’t match – If you ever have connection problems to a host, but can still reach it via local console, give this a try.

Normally I would run ifconfig and nslookup to compare the IP with the DNS entry. The hostname command has a nice feature that returns both IPs if they’re different.

$ ifconfig |grep "inet addr"
eth0      Link encap:Ethernet  HWaddr 00:16:3e:47:65:1d  
          inet addr:10.0.3.159  Bcast:10.0.3.255  Mask:255.255.255.0
          inet6 addr: fe80::216:3eff:fe47:651d/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:124 errors:0 dropped:0 overruns:0 frame:0
          TX packets:87 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:14563 (14.5 KB)  TX bytes:13151 (13.1 KB)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

$ nslookup myhost.lan
Server:         10.0.3.2
Address:        10.0.3.2#53

Name:   myhost.lan
Address: 10.0.3.162

$ hostname -I
10.0.3.159 10.0.3.162 

Interesting things I found on the interwebs

CSP and Disqus are buddies!

After all that trouble I had with Disqus and my Content-Security-Policy, I finally got it working. Not only that, but I got some help from a Disqus JS dev!

First of all, I want to apologize for a few things in my last post. I blamed Disqus for using eval, when it was really me who, unknowingly, invoked it by using the jQuery load() function.

I also said that Disqus is not compatible with CSP at all – which is not really true.
Thanks to Burak Yiğit Kaya, a JavaScript developer at Disqus who reached out to me via Twitter, I now know more about how Disqus and CSP work together.

And I want to thank Burak and Disqus for their reaction to my post.
Burak contacted me, not to tell me that I was wrong, but to try and understand what the problem was in order to fix it.

We take security quite seriously and also respect users treating it highly so I'll do my best to make this easier for you.

And he did! Which is awesome, because I have heard this line so many times already, and it’s very rare that there’s more behind it than just PR.

But back to the technical stuff. After some back and forth with Burak, I finally got a working CSP which looks like this.

Content-Security-Policy: default-src 'self'; script-src 'self' a.disquscdn.com/embed.js hashtagsecurity.disqus.com code.jquery.com; img-src 'self' referrer.disqus.com/juggler/stat.gif a.disquscdn.com/next/assets/img/; frame-src 'self' disqus.com/embed/comments/; style-src 'self' 'unsafe-inline' a.disquscdn.com;

unsafe-inline

This CSP still contains the unsafe-inline option for style sources, which is not really a good thing. Burak told me that I can ignore it, as all it’s doing is making the loading logo spin.

While I can absolutely go without the spinning logo, there is another problem here. Ignoring CSP violations, even if they’re just caused by a spinning icon, will at some point have a negative impact on your CSP logs, if you have them enabled.

So it works, but it’s still not perfect.

Too many sources

The other thing that would need improvement is the amount of different sources. Burak explained that they use a.disquscdn.com as a cookieless domain, and referrer.disqus.com as a stat beacon to check if the load was successful. As I said to him before, this is just a nice-to-have. It would increase the maintainability of CSPs, but it’s not absolutely necessary.

Burak came up with the idea to unify the sources under two domains:
a.disquscdn.com and some-subdomain.disqus.com

Disqus CSP

Something I didn’t know before is that Disqus ships with its own Content-Security-Policy. As Burak told me, if you load Disqus and take a look at the response headers of the disqus.com/embed/comments request, you can see that a custom CSP is being set.

content-security-policy:script-src https://*.twitter.com:* https://api.adsnative.com/v1/ad.json *.adsafeprotected.com *.google-analytics.com https://glitter-services.disqus.com https://*.services.disqus.com:* disqus.com http://*.twitter.com:* a.disquscdn.com api.taboola.com referrer.disqus.com *.scorecardresearch.com *.moatads.com https://admin.appnext.com/offerWallApi.aspx 'unsafe-eval' https://mobile.adnxs.com/mob *.services.disqus.com:*

For me this just shows even more that Disqus actually cares about security. Otherwise, they wouldn’t have bothered to limit the sources in the first place.

Summary

So, Disqus is back on hashtagsecurity.com. And now it’s not just a convenient comment system anymore. I actually have an opinion about it now.

The conversation between Burak and me brought a few things to light that could be improved, and Burak said he will look into them. Although he couldn’t make any promises, I’m looking forward to seeing these improvements go live in the future.

  • Move inline CSS into an external file to remove unsafe-inline style source
  • Unify sources under a.disquscdn.com and some-subdomain.disqus.com

And here is something we didn’t talk about, which I noticed later: the Disqus homepage only shows how to implement Disqus inline. Another option showing how to do it in a CSP-compatible way would be a great addition, especially for people just getting started with this kind of thing.

Wait, so what about the eval problem?

Oh yeah, I almost forgot to mention that. The eval problem only occurred because I don’t want Disqus to be loaded automatically. Instead, I want visitors to click on the big blue bar below the article whenever they want to leave a comment.

The way I did this was by loading Disqus via jQuery’s load() function, which seems to use eval() internally.

After playing with both jQuery and plain old JS for a bit, I finally found this [nifty little helper](http://internet-inspired.com/wrote/load-disqus-on-demand/), which works like a charm. So kudos to @nternetinspired for solving this problem way before me!
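
The gist of the approach, sketched in plain JS – the element ID is from my setup and this is a simplified illustration, not the helper’s exact code:

// load Disqus only once the visitor asks for the comments
document.getElementById('show-comments').addEventListener('click', function () {
    var dsq = document.createElement('script');
    dsq.src = '//hashtagsecurity.disqus.com/embed.js';
    dsq.async = true;
    (document.head || document.body).appendChild(dsq);
    this.style.display = 'none';
});

Because the script is pulled in as an external file instead of being passed through load(), no eval is involved and the script-src whitelist covers it.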

CSP: “Disqus gotta go!”

Recently I noticed that Disqus wasn’t loading anymore. It was easy to figure out that CSP was the reason why. In the end I was left with nothing more than the choice of which one had to go.

Update: Burak Yiğit Kaya, a JavaScript developer at Disqus, reached out to me to address the problems in this post. I will write another post about the results shortly!

Obviously my choice was to keep CSP – this is a security blog after all. But let’s take a look at what brought me to the point of giving up.

At first I tried to fix the problem the old-fashioned way: every time Disqus required a rule change, I made it. For a while I thought this might even work, up until my CSP looked something like this (split for better readability):

default-src	'self'; 
script-src  'self' 
			a.disquscdn.com/embed.js 
			hashtagsecurity.disqus.com 
			code.jquery.com; 
img-src 	'self' 
			referrer.disqus.com/juggler/stat.gif 
            a.disquscdn.com/next/assets/img/;
frame-src   'self' 
			disqus.com/embed/comments/ 
            disqus.com/home/forums/hashtagsecurity; 
style-src 	'self' 
            a.disquscdn.com;" 

So far so semi-good. You might have noticed that I restricted the allowed sources to exactly the least necessary scope, to decrease the risk of XSS. However, all of that quickly lost its weight when Disqus finally requested two more changes.

default-src	'self'; 
script-src  'self' 
      		'unsafe-eval' 
			a.disquscdn.com/embed.js 
			hashtagsecurity.disqus.com 
			code.jquery.com; 
img-src 	'self' 
			referrer.disqus.com/juggler/stat.gif 
            a.disquscdn.com/next/assets/img/;
frame-src   'self' 
			disqus.com/embed/comments/ 
            disqus.com/home/forums/hashtagsecurity; 
style-src 	'self' 
			'unsafe-inline' 
            a.disquscdn.com;" 

Update: unsafe-eval was actually my fault. Apparently the jQuery function .load() uses eval internally.

That’s right: to work properly, Disqus needs unsafe-eval for scripts and unsafe-inline for styles. For those of you not really familiar with CSP, let me explain the problem real quick.

CSP, or Content-Security-Policy, is meant to prevent XSS by restricting the sources of JavaScript, CSS stylesheets and other things such as images, frames, etc. To be able to do this, two important things have to be disallowed:

  1. Inline JS or CSS code embedded directly in HTML files, such as these two examples: <script>alert("inline javascript")</script> and style="height:100% width:100%"
  2. Use of the JavaScript eval function (as it’s deemed highly insecure!)

Instead, JS and CSS should only be included as files via the src= attribute. The allowed sources can then be specified in the CSP rule. The problem with unsafe-inline and unsafe-eval is that they re-allow the use of the eval function and of inline CSS or JS code.

As sort of a last resort, I tried to create a single page with an alternate hard coded CSP rule, just to see if this would work.

disqus.html – containing the bad CSP and Disqus loader

<!-- META Header containing the full "Disqus compatible" CSP rule-->
<meta http-equiv="Content-Security-Policy" content="default-src 'self'; script-src 'unsafe-eval' 'self' a.disquscdn.com/embed.js hashtagsecurity.disqus.com code.jquery.com; img-src 'self' referrer.disqus.com/juggler/stat.gif a.disquscdn.com/next/assets/img/; frame-src 'self' disqus.com/embed/comments/ disqus.com/home/forums/hashtagsecurity; style-src 'self' 'unsafe-inline' a.disquscdn.com;" />

<!-- DIV to load Disqus -->
<div id="disqus_thread"></div>
<script type="text/javascript" src="disqus.js"></script>
<noscript>Please enable JavaScript to view the <a href="http://disqus.com/?ref_noscript">comments powered by Disqus.</a></noscript>
<a href="http://disqus.com" class="dsq-brlink">blog comments powered by <span class="logo-disqus">Disqus</span></a>

disqus.js – containing the typical Disqus code

/* * * CONFIGURATION VARIABLES: EDIT BEFORE PASTING INTO YOUR WEBPAGE * * */
var disqus_shortname = 'hashtagsecurity'; // required: replace example with your forum shortname

/* * * DON'T EDIT BELOW THIS LINE * * */
(function() {
    var dsq = document.createElement('script'); dsq.type = 'text/javascript'; dsq.async = true;
    dsq.src = '//' + disqus_shortname + '.disqus.com/embed.js';
    (document.getElementsByTagName('head')[0] || document.getElementsByTagName('body')[0]).appendChild(dsq);
})();

The problem here is that loading disqus.html into my blog using an iframe resulted in something like this.

At first glance, I thought that did the trick. After checking against the disqus.html file directly, I saw that it should really look like this.

What happened is that the iframe which held disqus.html didn’t resize properly in height. After looking a bit into dynamically resizing iframes to fit their content, I realized that this wouldn’t be possible without adding more JavaScript, which in turn would have resulted in further adjustments to my original CSP rule.

At that point I decided that Disqus just isn’t worth the pain. So until I find a better comment system that can easily be integrated alongside CSP, you can simply reach out to me via Twitter. – @HashtagSecurity