Last Week in Infosec

July 8 – July 15, 2022

New security patches from no less than five major vendors, actively exploited vulnerabilities, ransomware, leaks, espionage and the Disneyland social media debacle – here’s your weekly infosec news summary.


Vulnerabilities and Exploits

5 major manufacturers release security updates

Last week, security patches were released for various products from Citrix, Microsoft, SAP, Adobe and Juniper. Microsoft alone closed 84 vulnerabilities, one of which is already being actively exploited. [1]

Microsoft vulnerability CVE-2022-22047 is actively exploited

The U.S. agency CISA maintains a catalog of vulnerabilities [2] that are known to be publicly exploited. Last week, vulnerability CVE-2022-22047 was added to this catalog. This is a privilege escalation vulnerability with a CVSS rating of 7.8 (High), but details have not yet been published. A patch has already been released.

What’s the patch status in your company right now? If you don't know the answer, it's very likely that your company is at risk. We get it, security is a complex topic that can easily make your head spin. 😡

Our friends over at LastBreach are happy to answer any questions you may have and make sure that you don't have to worry about security.

Threat Actors & Campaigns

140,000 customer records exposed in Aon hack

Aon first reported the incident to the Securities and Exchange Commission in February; three months later, in May, additional information was provided. [3]

According to a May 27 letter (💾 PDF) from Aon informing those affected, personal data disclosed included driver’s license numbers, Social Security numbers, and, in rare cases, details about insurance policy purchases. The company has taken steps to ensure that the unauthorized third party can no longer access the data. Aon has no reason to believe that the third party copied, stored or transferred any other data.

Telecoms hit by ransomware attack

In our post last week, we wrote about LockBit. Shortly after, we read that the administrative and management capacities of the French mobile virtual network operator (MVNO) La Poste Mobile were significantly limited after a ransomware attack. The attack started on July 4 and is believed to have been carried out by the LockBit group. A week later, the company’s website is still down and has been replaced with a description of the cyberattack. [4]

Security experts have seen a sharp increase in LockBit activity in recent months, with some speculating that the group was more active in the first quarter of 2022 than Conti, which had the unenviable honor of being the most active ransomware gang in 2021. Given their vast consumer data stores and near-essential services, telecom companies are obviously a desirable target for cybercriminals.

‘BlackCat’ ransomware raises ransom demands to $2.5 million

Cybersecurity researchers claim to have noticed a significant increase in ransomware demands from the BlackCat ransomware. [5] A statement from the organization said, “Such practices have a significant impact on the underground ransomware ecosystem, harming companies of various sizes around the world.”

To push the victim to end the situation as soon as possible, the BlackCat operators have reportedly started setting ransom demands at $2.5 million, with a potential discount of almost half. To give the victim enough time to buy Bitcoin or Monero (XMR), the usual payment deadline is between 5 and 7 days. If there are problems, the victim can hire an “intermediary” to assist with the payment process.

Ransomware attacks breach 1.9 million health records

In February, 1.9 million patient records from 657 healthcare providers were accessed through a “sophisticated” ransomware attack on the Professional Finance Company collection agency. [6]

Although this had a significant impact, it was only the third largest healthcare data breach reported in 2022. With more than 2.8 million patients affected across fewer than 45 providers, the Eye Care Leaders incident remains the largest healthcare incident to date. With 2 million patients affected, the Shields Health Care Group hack is the second largest.

PFC asserted (💾 PDF) that it had strengthened its network security after the incident by wiping and rebuilding affected systems. Those efforts may have come a bit too late, however, as the agency would not say whether the stolen data was encrypted.


Business, Politics and Culture

Disneyland social media hacked with racist posts

Early Thursday morning, a hacker posted a series of offensive, racist, homophobic, and insensitive messages on the Disneyland Resort’s official Instagram account. [7]

The posts were published before 5 a.m. and quickly removed, but not before many people read or screenshotted them. At just before 4:30 a.m. (PST) on Thursday, four posts surfaced on Disneyland’s Instagram account. One of the captions said a self-proclaimed “superhacker” was seeking revenge against the theme park, according to the Los Angeles Times. The posts included homophobic obscenities and the N-word was used frequently.

Increasing cyber espionage against Russia

An investigation published Thursday shows that hackers with ties to the Chinese government are increasingly targeting Russian entities, with the ongoing operation appearing to be primarily linked to espionage. [8] The latest analysis also traces previous attacks by Chinese APT organizations that targeted Russia. These include the Space Pirates, Mustang Panda, and Scarab campaigns identified by SentinelLab. Google’s Threat Analysis Group (TAG) also drew attention in May to the increasing targeting of Russia by Chinese threat actors.

Cyberattacks in Ukraine surged in Q2

In the second quarter of the year, Ukraine reported an increase in cyberattacks directed against its networks. [9]

A report on the rise in cyberattacks was released by Ukraine’s State Service for Special Communications and Information Protection. Although there have been more attacks since Russia’s invasion, the increase was not expected in the second quarter of 2022. Ukraine’s National Vulnerability and Cyber Incident Detection System processed 19 billion events during the period. The number of registered and processed security incidents increased from 40 to 64, despite the measures introduced against Russian hacking groups. In addition, a sharp increase in activities of malicious hacker groups was recorded in the distribution of malware. The malicious code category, for example, grew by 38% in the second quarter compared to the first quarter of the year.


That’s it for today. We’ll be back next week with our infosec news summary. 🕵️

Last Week in Infosec

July 1 – July 8, 2022

Whether you’re a CISO, an ethical hacker or part of a security team – staying on top of the latest news is likely part of your responsibilities. To make it easier and less time-consuming, we went ahead and compiled the most interesting events of last week for you.


News on Vulnerabilities & Attacks

Phishing scams impersonating the UAE Ministry of Human Resources target the Middle East

Researchers from CloudSEK have discovered a widespread phishing campaign in which threat actors pretended to be the UAE Ministry of Human Resources. [1]

The campaign targets numerous government and corporate entities in the banking, travel, healthcare, legal, oil and gas, and consulting industries. According to the security experts’ assessment, this is a significant phishing effort that primarily targets businesses and job seekers by faking sites that belong to the Ministry of Human Resources.

These phishing schemes could also be used as templates by other threat actors to target individuals and steal their passwords, documents, cryptocurrency wallets, and other sensitive data.


Phishing is still one of the most reliable ways for hackers to gain unauthorized access to companies, and while attacks against high-value organizations become more targeted, smaller companies still need to ensure they don't get caught in big phishing nets. 🎣

Contact LastBreach for security training and simulated phishing attacks to make sure you're on the safe side. 🛡️

News on Threat Actors & Campaigns

Cyber Criminals Claim to have Stolen Data of a Billion Chinese Residents

A new data leak was announced last week in an online cybercrime forum and it’s a big one. According to an anonymous source, the person or group claiming responsibility for the attack has offered to sell more than 23 terabytes of stolen data. [2]

The database includes names, addresses, birthplaces, national IDs, phone numbers, and information about criminal cases and was apparently stolen from the Shanghai National Police. According to the post, the threat actor demanded ten bitcoins, or about $200,000, which is a surprisingly small amount for this much data.

The Shanghai government has not publicly reacted to the purported cyberattack.

LockBit 3.0 Ransomware targets Organizations worldwide

The LockBit organization is back and has released LockBit 3.0, a new variant of their ransomware. The group named their most recent product LockBit Black, improving it with new extortion strategies and adding a Zcash payment option to the already-existing Bitcoin and Monero payment options. [3]

LockBit was the most active RaaS (Ransomware-as-a-Service) operation in June, continuing on its way to the top. Unlike the ransomware group Conti, which split up into smaller groups after it gained too much spotlight and garnered the attention of law enforcement, LockBit seems intent on becoming a household name in the ransomware sector.

This time, LockBit hackers are in the news for launching the first-ever bug bounty program run by a criminal organization. In their pitch to hackers of all stripes, the adversaries offer financial rewards ranging from $1,000 to $1,000,000 for submitting a bug or an enhancement idea. Additionally, the most significant reward is reserved for whoever first correctly identifies the affiliate manager, known as LockBitSupp. This is believed to be mainly a PR stunt to boost the group's name recognition.

Marriott’s Been Hacked – Again!

A report from DataBreaches claims that hackers could access almost 20GB of private data, including reservation and credit card information, from a hotel server at the BWI Airport Marriott in Maryland. [4]

According to Engadget, Marriott claims that most of the information compromised was “non-sensitive internal business files”. The hackers gained access to a Marriott employee’s computer to retrieve the information and collected the files from a shared file server.

“There is no evidence that the threat actor had access beyond the files that were accessible to this one associate”, Marriott continued. Nevertheless, the hotel operator told Engadget that the 300 to 400 people whose personal information was compromised in the incident, the majority of them former workers, will be notified.


Other News on Business, Politics, and Culture

Increasing Cyber Espionage Efforts by China Against Russia

According to investigations by security companies and Ukraine’s Computer Emergency Response Team (CERT), a campaign linked to China began targeting businesses connected to Russia in June, using malware to gather information on government activities. [5]

Since the start of the conflict in Ukraine, the group known as Mustang Panda has targeted Russian organizations, while a new cyber gang known as “Space Pirates” has infiltrated Russia’s space technology sector.

According to a recent investigation, infected Microsoft Office documents were used to distribute Remote Access Trojans (RATs). The most recent operations have used two malware sets connected to Chinese advanced persistent threat (APT) groups: the Royal Road Toolkit for creating malicious documents and the Bisonal Remote Access Trojan (RAT) created by Chinese operators.

The Tonto Team, also known as Karma Panda and Bronze Huntley, has typically concentrated on countries in other parts of Asia, including South Korea, Japan, the US, and Taiwan. The organization has recently expanded its operations to include Pakistan, Russia, and other countries. 

Attack on QNAP Devices by Raspberry Robin Worm

Microsoft reports that hundreds of businesses from various industry sectors have lately discovered this Windows worm on their networks. [6]

The Raspberry Robin campaign, also known as the “LNK Worm,” is the subject of an investigation by the Cybereason team. The Raspberry Robin worm uses infected QNAP (network-attached storage, or NAS) devices as staging points and spreads via USB devices or shared folders. It lures victims with the help of “LNK” shortcut files, an old-fashioned but still powerful technique.

Security experts who discovered Raspberry Robin in the wild have not yet attributed the worm to a specific threat group and are still attempting to determine its operators’ ultimate objective.

Microsoft has nevertheless labeled this campaign as high-risk, because the attackers could download and execute more malware inside the victims’ networks and escalate their privileges at any time.


That’s all for today – leave a comment if you have any feedback or think we missed something. See you next week!

Last Week in Infosec

June 24 – July 1, 2022

No week goes by without something new to report on in the infosec world. Let’s take a look at some of the more noteworthy infosec events that happened this week.

News on Vulnerabilities & Attacks

Users Push for Updates after Splunk Patches Critical Flaws

Splunk, a company that provides data monitoring and search services, has fixed a code execution vulnerability in its Splunk Enterprise deployment server and has reportedly promised to back-port the fix to prior versions after users pushed for updates to legacy releases. [1]

Due to a critical vulnerability, CVE-2022-32158, versions prior to 9.0 allow clients to use the server to deploy forwarder bundles to other clients.

An attacker who gained access to or compromised a single universal forwarder in the environment could thereby control all the other Universal Forwarder (UF) endpoints in the company.

The Daily Swig was told by Nick Heudecker, senior director of market strategy and competitive intelligence at Cribl, that this vulnerability is significant because Splunk users frequently deploy thousands or tens of thousands of UFs throughout their infrastructure.

UnRAR Vulnerability used in Zimbra Hacks

Thanks to a path traversal vulnerability in UnRAR, unauthenticated attackers can escalate privileges and execute arbitrary commands as a Zimbra user. [2]

The path traversal vulnerability discovered in the Unix versions of UnRAR has been assigned the identifier CVE-2022-30333 and a Common Vulnerability Scoring System (CVSS) base score of 7.5.

To put things in perspective, over 200,000 enterprises, government agencies, and financial institutions use Zimbra as their email solution. The fact that emails were stolen from individual user accounts using a 0-day vulnerability demonstrates the value of a hacked email account to an attacker and the catastrophic effects that such vulnerabilities can have on an organization: passwords might be changed, classified documents could be taken, and attackers could impersonate organization members to compromise more accounts.

This flaw follows a typical pattern in which user input is modified after it has been validated, resulting in a bypass of security checks.

Potential Brocade Vulnerabilities in multiple Storage Solutions

Brocade, a networking solutions company, has announced that they have discovered 9 vulnerabilities in their SANnav management application. [3]

Six of the vulnerabilities affect third party tools such as Oracle Java, OpenSSL or NGINX and can allow attackers to manipulate data, decrypt data, or cause a denial of service condition.

The remaining security flaws (CVE-2022-28167, CVE-2022-28168, and CVE-2022-28166) were found internally, and there is no evidence that they have been exploited in the wild. However, these flaws may impact the storage solutions of several businesses that work with Brocade, such as HPE, NetApp, Dell, Fujitsu, Huawei, IBM, and Lenovo.

As a result, an attacker may gain access to sensitive data or even the device itself. Brocade has released patches for the vulnerabilities, but it is unclear how many devices are affected.


News Related to Threat Actors & Campaigns

Nine accused members of phishing gang arrested

Nine accused members of a successful phishing gang, who allegedly gained 100 million hryvnias ($3.4 million) by luring locals with the promise of financial support from the EU, were recently detained by Ukrainian “cyber-police.” [4] To solve the case, digital professionals collaborated with agents from the Pechersk Police Department and experts from the National Bank of Ukraine (NBU). The nine people detained are suspected of creating and running over 400 phishing websites that asked users to enter their bank account and credit card information to apply for EU social welfare payments.

Once they had the information, the group would use it to take over users’ accounts and move their money. The NBU claims that over 5,000 victims were duped in this manner, netting the scammers millions of dollars. During the arrests, the police also seized illegally earned money, bank cards, mobile phones, and computer equipment.

πŸ‘©β€πŸŽ“ A good security awareness training program will teach your employees how to spot phishing emails and websites and how to respond if they still fall victim to an attack. It's essential to keep your employees up-to-date on the latest phishing threats and regularly test their knowledge with quizzes and simulations. LastBreach can train your employees and verify their readiness. 

Other News Related to Business, Politics, and Culture

OpenSea email addresses leaked to third-party vendor

The world’s largest non-fungible token (NFT) marketplace, OpenSea, has discovered that a disgruntled employee at a third-party vendor shared the email addresses of its users with an unauthorized outside party.[5]

OpenSea’s head of security, Cory Hardman, cautioned users on June 29th: “If you have shared your email with OpenSea in the past, you should presume you were impacted”.

According to OpenSea, the employee worked at Customer.io, an automated messaging platform used by marketers to compose and deliver emails, push alerts, and SMS messages.

A Major Hack has Exposed $100 Million in Crypto

The latest significant theft in the decentralized finance industry saw hackers steal $100 million in cryptocurrencies from Horizon, a blockchain bridge operated by Harmony. [6]

Users can use Horizon to transfer tokens between the Ethereum network and Binance Smart Chain. According to Harmony, a separate bridge for bitcoin was unaffected by the hack.

The theft adds to a recent flood of negative headlines about cryptocurrency. After experiencing a severe liquidity shortage due to a significant decline in the value of their assets, cryptocurrency lenders Celsius and Babel Finance froze withdrawals. Three Arrows Capital, a troubled cryptocurrency hedge fund, may be on the verge of defaulting on a $660 million loan from brokerage house Voyager Digital.


That’s it for our summary on infosec stories that made headlines this week. Be sure to stay up to date on the latest news and continue to take precautions to protect your data.

Did we miss any important news last week? Write us on Twitter @HashtagSecurity 🙋‍♂️

Weekly Infosec News Summary

Let’s examine the most noteworthy cyber events that have occurred recently in different parts of the world.

Vulnerabilities & Attacks News

Critical PHP vulnerability opens QNAP NAS devices to remote attacks

This week in infosec news, a critical PHP vulnerability was discovered that exposes QNAP NAS devices to remote attacks.[1] A vulnerability in the web server component of a device’s firmware could allow an attacker to gain control of the device and access sensitive data. 

Customers are advised to update their QTS or QuTS hero operating systems, and not to expose the devices directly to the internet.

Additionally, QNAP has recommended that customers contact QNAP Support for help if they cannot identify the ransom letter after updating the firmware and entering the obtained DeadBolt decryption key.

This shows once again why it is important to regularly test your infrastructure for security issues. Luckily, it just so happens that you're reading a blog post by the people who can help you. Check out our website for more info on our penetration tests and vulnerability assessments. 😉

Russia exploits Microsoft Follina vulnerability against Ukraine

This week’s story comes from Ukraine, where Russian hackers have used the Follina flaw to access sensitive information.[2]

The Follina vulnerability, disclosed a few months back, affects all versions of Microsoft Windows and can be exploited remotely.[3] This means that hackers can gain access to a system without prior knowledge of, or access to, the target network. In the case of the Ukrainian attacks, the hackers used the Follina flaw to deploy a backdoor called Poison Ivy, which allowed them to access sensitive data.

These attacks underscore the need for organizations to patch their systems as soon as possible and implement robust security measures.

Vulnerability in Citrix ADM allows admin passwords to be reset

Recently, a critical Citrix ADM vulnerability was discovered that creates a means to reset admin passwords. 

Due to an improper access control vulnerability (CVE-2022-27511), a remote, unauthenticated attacker could crash a system via a denial-of-service (DoS) exploit and thereby trigger a reset of the admin credentials on the next reboot.

The vulnerability could be exploited to force the “reset of the administrator password at the next device reboot, allowing an attacker with SSH [Secure Shell] access to connect with the default administrator credentials after the device has rebooted,” according to a Citrix advisory published last week. [4]

The vulnerability, which is present in the Citrix Application Delivery Controller and Gateway appliance, can be exploited by an unauthenticated attacker to gain administrative access to the appliance. While the patch for this vulnerability has been available for several weeks, it is unclear how many appliances are still vulnerable.

News Related to Threat Actors & Campaigns

Chinese hackers are distributing SMS bomber tools with malware inside

Chinese hackers are distributing an SMS bomber tool with malware hidden inside. The tool, designed to send many text messages to a specific phone number, includes a backdoor that allows the attacker to execute commands remotely on the victim’s device. The malware has been spreading via a phishing campaign that targets Android users in China. Once installed, the malware collects information about the victim’s device and sends it to a remote server. [5]

Insights into Magecart’s infrastructure reveal that the campaign is vast

Infosec researchers have discovered a new Magecart infrastructure that reveals the scale of ongoing Magecart campaigns.[6] This infrastructure consists of several servers that store stolen credit card data. The servers are located in different parts of the world, suggesting that the people behind Magecart use a distributed network to avoid detection. It proves how important it is for companies to be vigilant against Magecart attacks. It also shows how cybercriminals use increasingly sophisticated methods to steal sensitive information.

Other News Related to Business, Politics, and Culture

Cyberattacks on Ukraine highlighted by Microsoft

On Wednesday, Microsoft announced it had uncovered new evidence of Russian state-sponsored attacks on Ukrainian entities. Meanwhile, tensions between Ukraine and Russia continue to rise as the war that followed Russia’s invasion grinds on. In a blog post, Microsoft’s Threat Intelligence Center said it had discovered a group of hackers known as Strontium targeting government agencies, political parties, and media organizations in Ukraine. The group has also been linked to previous attacks on the Ukrainian power grid and the NotPetya malware outbreak. [8] Microsoft did not name any specific targets of the latest attacks but said they were “consistent with the group’s longstanding interest in Ukrainian affairs.”

Earlier this week, Ukrainian security officials said they had uncovered a plot to disrupt the country’s financial system and critical infrastructure. 

Scammer steals Microsoft credentials with voicemail scam

Recent scams target Microsoft users to steal their credentials.[9] The scam starts with a voicemail, supposedly from Microsoft, stating that there has been a problem with the recipient’s account. The message then instructs the user to call a phone number to resolve the issue. However, the phone number belongs to a scammer who tries to trick the user into revealing their login information. There have been many such scams targeting Microsoft users lately, so it’s essential to be vigilant when dealing with unsolicited calls or messages. If you suspect that you may have been a victim of this scam, be sure to change your password and enable two-factor authentication on your account as soon as possible.

Five European countries use Pegasus spyware, NSO confirms

The NSO Group has long been known for selling its spyware to government organizations worldwide. Now, new information has surfaced that suggests at least five European countries have used Pegasus spyware to target opponents and journalists. The software, designed to infect phones and collect data, was first used by the UAE to target human rights activists. It was then acquired by Mexico, where it was used to target journalists investigating corruption. [10]

This new information confirms what many infosec experts have long suspected: that governments worldwide are using NSO Group’s spyware to violate human rights.


We are witnessing an increase in cybercrime as we move further into the digital age. Every week there seems to be a new story of a major data breach or cyberattack making headlines, and this week was no different – these were just a few of the bigger stories. Next week, we’ll be back with another news summary, so be sure to add this blog to your watch list.

Resurrecting my old ghost blog

Ghost (CMS) is nice and I’ve used it for a couple of years, but with the start of my company my blog got less attention, broke and went offline, never to be fixed. In summary, running Ghost yourself is work. Not necessarily a ton of work, but work nonetheless.

So I decided to resurrect my old blog and move it to a fully hosted platform. My decision fell on Blogger, as it is completely free, plays nice with Google Analytics and SEO (being a Google product) and has seen a couple of notable changes since its early days – most importantly, better themes and the ability to custom-style them.

My first issue was that my old Ghost-powered blog wasn’t online anymore, so I had to get it up and running again in order to copy the old content to the new platform. Luckily, I still had the raw setup files from /var/www/ and the most recent database backup.

Since the original server was in an unusable state (and running Ubuntu 14.04!), I decided to run the resurrection ritual on a local virtual machine.

First thing to note here: I’m currently running Windows 10 instead of my usual Kubuntu setup, so I was initially considering using PuTTY to SSH into my VM. I then decided to give the built-in SSH feature in Windows a try and, lo and behold, it is a beaut.


Thanks to this nifty little feature, I can use both scp and ssh directly from a Powershell window.

I copied the files over to my machine, which is running on a NAT interface with port-forwarding, hence the port 2222 on localhost. Once the copy job was done, I set out to install the most important packages to get started.

fmohr@vm$ sudo apt install unzip nodejs npm mysql-server

Now with the bare necessities in place, it was time to restore the database to its former glory.

fmohr@host:~/$ scp -P 2222 .\ghostdb.sql fmohr@localhost:~/
fmohr@vm:~/$ sudo mysql -u root < ghostdb.sql

Success! I can now try to adjust the ghost settings file and start the nodejs server.

fmohr@vm:~/$ vi config.js

Of course I had to change the database connection settings, namely host, user and password, from the live environment to my local test machine.

In order to even try and run the server, the necessary dependency packages need to be installed. It became pretty clear early on that this Ghost installation hadn’t been updated in a while.

fmohr@vm:~/ghost$ npm install

Unfortunately, all this brought me was a bunch of error messages.


Trying to run the server produced the first of several errors.

After a bit of Google-fu I half found out, half remembered that the NODE_ENV variable had to be set in order to use the production settings from the config.js file.


This resulted in a different error. Two steps forward, one step back.

Stackoverflow to the rescue! The solution was to downgrade the mysql authentication method.

fmohr@vm:~/ghost$ mysql -u root -p
mysql> ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY 'HelloKitty123';
mysql> FLUSH PRIVILEGES;

Well, this certainly got rid of the MySQL authentication error but spawned a new one. Luckily, there is a solution for everything, and once again its name is downgrade. This is best done using NVM, the Node version manager.

fmohr@vm:~/ghost$ curl -o install.sh https://raw.githubusercontent.com/creationix/nvm/v0.33.8/install.sh
fmohr@vm:~/ghost$ less install.sh
fmohr@vm:~/ghost$ bash install.sh

The originally suggested command for installing NVM was the following:

curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.33.8/install.sh | bash 

You may have noticed that I changed the procedure. Since this is a security focused blog, let me quickly explain why.

Spoiler: Never pipe unknown shell scripts to bash. NEVER! Ever! EVAAA!
The following commands install the most recent node version as well as the desired one.

fmohr@vm:~/ghost$ nvm install node
fmohr@vm:~/ghost$ nvm install 0.10.45 

This allows me to try and run ghost again. This time with the (presumably) correct version of node.

NODE_ENV=production ~/.nvm/v0.10.45/bin/node index.js

It works, even if the startup message is misleading – the server obviously isn’t reachable at the public domain name from my local VM. Finding the port it actually listens on is easily solved with netstat.

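
The check itself is a one-liner; the exact flags below are my reconstruction rather than a copy from the original screenshot:

```shell
# t = TCP, l = listening sockets only, n = numeric ports, p = owning process.
# Ghost 0.x binds to 127.0.0.1:2368 by default, so node should show up there.
netstat -tlnp 2>/dev/null | grep node || echo "no node process is listening here"
```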

And with another port forwarding rule for my VM, this should be accessible as well.
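
For completeness, here is what such a rule can look like on the command line, assuming VirtualBox and a VM named "ghost-vm" (both are assumptions on my part):

```shell
# Hypothetical VirtualBox NAT port forward: host 127.0.0.1:2368 -> guest :2368,
# so the blog in the VM is reachable from a browser on the host.
if command -v VBoxManage >/dev/null 2>&1; then
  VBoxManage modifyvm "ghost-vm" --natpf1 "ghost,tcp,127.0.0.1,2368,,2368" || true
else
  echo "VBoxManage not found; the same rule can be added in the VirtualBox network settings GUI"
fi
```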


Look at it. It runs so smoothly, I almost want to keep using Ghost. But that would mean installing updates and maintaining my own server, and I’ve grown lazy, apparently.


This is it for today – how I will move this content to Blogger is a story for another time. 🙂

WBP#1 – All New Weekly Bucket Post

Since I can’t bring myself to write full-blown blog posts on a regular basis, let’s try something else. I will attempt to publish a short blog post every Friday about all the small things I encountered during the last week.

PFX Certificates
After finally receiving confirmation that our code signing certificate had been validated, I got a link to “download” it. And by download they actually mean: import it into your browser’s certificate store. From there you can export it and set a password to encrypt the file if you want (you want!).

So far so good, but our devs said they need a .pfx file, not the .p12 I had given them.
Thanks to AkbarAhmed.com this proved not to be a problem at all.

1.) A .p12 and .pfx are the exact same binary format, although the extension differs.
2.) Based on #1, all you have to do is change the file extension.
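In shell terms that's all there is to it (a sketch; the filenames are hypothetical):

```shell
# stand-in for the certificate exported from the browser
touch codesign.p12
# .pfx and .p12 are the same PKCS#12 binary format, so a copy/rename is enough
cp codesign.p12 codesign.pfx
ls -l codesign.pfx
```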

DNS updates
Not really something new I learned, but rather something I already knew and forgot in the meantime: how to request a new DHCP lease on an Ubuntu server.

$ sudo dhclient -v eth0 -r
this kills your connection... do not attempt over SSH ;)
$ sudo dhclient -v eth0
$ ip a

Since we’re at Linux 101 already, let’s quickly do the other one as well.

Change hostname – Sure, this one is easy. Nevertheless, I got stuck for longer than I would like to admit. After editing the files /etc/hosts and /etc/hostname, I wanted to apply the settings without rebooting the server. However, that’s where I encountered this little problem.

$ sudo /etc/init.d/hostname restart
[sudo] password for user: 
sudo: /etc/init.d/hostname: command not found

I could’ve sworn that this was the correct way to do this. Turns out, it’s this now.

$ sudo hostname -F /etc/hostname

One more thing I noticed while setting the new hostname was this little gem, which returns the IP address(es) assigned to the host.

$ hostname -I
10.0.3.162

If I think back how many times I could’ve used that one in scripts in the past…
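For example, a script could grab the primary address like this (a sketch; `hostname -I` may list several addresses, of which the first is usually the primary one):

```shell
# take the first address reported by hostname -I for use in a script
MYIP=$(hostname -I | awk '{print $1}')
echo "primary address: ${MYIP}"
```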

LXC Autostart – LXC or LinuX Containers are a great alternative to full virtualization, especially when you’re running things on a vserver like I do. However, when you have to reboot your vserver due to kernel updates, your containers either don’t start at all or all start simultaneously, which makes it hard if you have services like dns/dhcp running in one of them.

The solution is a little tool called lxc-autostart, which lets you decide in which order to start your LXCs. Here are a few config lines, which you can add to your container’s config file in /var/lib/lxc/containername/config.

lxc.start.auto = 1		# 0=no, 1=yes
lxc.start.delay = 0		# seconds to wait before starting (group specific)
lxc.start.order = 500	# order, higher is first
lxc.group = dns			# group,groups

After you’ve configured every host you want to autostart, you can start, stop or reboot them like this:

sudo lxc-autostart [-r/-s] -g "dns,web,db"
# -r=reboot, -s=shutdown, without is boot

Pay attention to the groups: all containers in group dns are started first. Within that group, you can define the order (e.g. 500, 450, 400). If you set a start.delay value for one host, all groups and hosts that follow will also wait for that amount of time before starting.

#host 1
lxc.start.delay = 0		# start immediately
lxc.start.order = 500	# first host
lxc.group = dns			# first group
#host 2
lxc.start.delay = 20	# on turn, wait 20 seconds, then start
lxc.start.order = 450	# second server
lxc.group = dns			# first group
#host 3
lxc.start.delay = 0		# start after host 2 (after 20 seconds + boot)
lxc.start.order = 400	# third server
lxc.group = dns			# first group
#host 4
lxc.start.delay = 0		# start immediately after group dns is done
lxc.start.order = 500	# first in group web
lxc.group = web			# second group

CSP – I have spent quite a bit of my time with the Content-Security-Policy header. So far I have always found a way to manage without “unsafe-inline” (allowing inline CSS or JS – BAD!), but when I tried to get Disqus working with my CSP I ran into an interesting problem. First off, I only used the unsafe-inline option for testing purposes. This will never get onto my production systems, NEVER!

Here is what I did, in case you want to try it yourself.

index.html

<html>
  <head>
      <!-- CSP Without "style-src unsafe-inline" reports CSP violations (of course!)
      <meta http-equiv="Content-Security-Policy" content="default-src 'self'; script-src 'self' a.disquscdn.com/embed.js hashtagsecurity.disqus.com; img-src 'self' referrer.disqus.com/juggler/stat.gif a.disquscdn.com/next/assets/img/; frame-src 'self' disqus.com/embed/comments/;" />-->
      <meta http-equiv="Content-Security-Policy" content="default-src 'self'; script-src 'self' a.disquscdn.com/embed.js hashtagsecurity.disqus.com; img-src 'self' referrer.disqus.com/juggler/stat.gif a.disquscdn.com/next/assets/img/; frame-src 'self' disqus.com/embed/comments/; style-src 'self' 'unsafe-inline';" />
  </head>
  <body>
      <h1>CSP with DISQUS</h1>
      <div id="disqus_thread"></div>
      <script type="text/javascript" src="disqus.js"></script>
      <noscript>Please enable JavaScript to view the <a href="http://disqus.com/?ref_noscript">comments powered by Disqus.</a></noscript>
      <a href="http://disqus.com" class="dsq-brlink">blog comments powered by <span class="logo-disqus">Disqus</span></a>
  </body>
</html>

disqus.js

/* * * CONFIGURATION VARIABLES: EDIT BEFORE PASTING INTO YOUR WEBPAGE * * */
var disqus_shortname = 'hashtagsecurity'; // required: replace example with your forum shortname

/* * * DON'T EDIT BELOW THIS LINE * * */
(function() {
    var dsq = document.createElement('script'); dsq.type = 'text/javascript'; dsq.async = true;
    dsq.src = '//' + disqus_shortname + '.disqus.com/embed.js';
    (document.getElementsByTagName('head')[0] || document.getElementsByTagName('body')[0]).appendChild(dsq);
})();

And finally, running a webserver in the directory that holds both those files

python -m SimpleHTTPServer 8080

What I found is that it doesn’t really matter whether I enable “style-src 'unsafe-inline'” or not, since the inline style is injected by the embed.js file, which is pulled from disquscdn.com. It appears that in that case, CSP still sees it as a violation.

If anyone can explain to me exactly why this is, I’d be very happy to hear about it.

Speaking of CSP, I dug out the PHP based CSP violation logger I used for debugging in the past. Just make sure to move the reports file to a private directory. You don’t want everyone reading your CSP reports.

Local Web Servers – Since I was playing around with CSP rules and violations, I found myself in need of a webserver. Installing apache2 or NGINX just to deliver a handful of pages to myself seemed a bit overkill, so I looked towards minimal webservers.

My favourite, being installed by default on most Linux systems, is definitely Python’s SimpleHTTPServer.

python -m SimpleHTTPServer 8080
Serving HTTP on 0.0.0.0 port 8080 ...

I used a PHP based CSP reporting tool to collect more info about found violations; however, the integrated HTTP server in Python doesn’t support PHP. Luckily, PHP does 🙂

sudo apt-get install php5-cli
php5 -S 127.0.0.1:8000 -t /path/to/docs/

Keepass Password Generation Profiles – Password rules for password generators can be very helpful. Keepass offers an option to customize the way passwords are generated, which is great as the default policies are really bad!

Luckily, dF. over at stackoverflow.com has a nice list of chars for that.

For a 12-char password policy in Keepass, this would look like this.

Whitelist chars (not all of them, just an example!)
[\!\#\%\+\2\3\4\5\6\7\8\9\:\=\?\@\A\B\C\D\E\F\G\H\J\K\L\M\N\P\R\S]{12}

If you prefer a “blacklist chars” approach, you can do it like this:

Bad Chars: il10o8B3Evu![]{}
PW Policy: [dAs^\i^\l^\1^\0^\o^\8^\B^\3^\E^\v^\u^!^\[^\]^\{^\}]{12}
Char Rule: == [d]igits, mixed [A]lpha, [s]pecial, [^] except, [\] escape char

More info on Keepass generation rules can be found here.

Keepass and KeeFox – Not much to say here except that it has gotten real easy lately, to integrate keepass into Firefox. Just follow the steps on stackoverflow.

Web Password Manager – I’m always on the lookout for web based password managers that can be hosted on-site. Especially if they’re open source, or better yet free (as in speech).

I haven’t had the time yet to take a closer look, but at first glance RatticDB seems promising, although early stage. I will write more about it once I have spent more time with it – assuming it proves useful.

Flask Blueprint Templates – suck! I’m sorry but I can’t say it any other way.
I like the idea of how blueprints (sort of plugins in Python’s web framework Flask) access templates. If I have an app with its index.html lying in app/templates/index.html, and a sub app or plugin within said app that has its own templates folder, like app/subapp/templates/, everything is great as long as I don’t have any name conflicts in templates.

Accessing /app/templates/index.html within the subapp
render_template("index.html")

Accessing /app/subapp/templates/subapp.html within the subapp
render_template("subapp.html")

Accessing /app/subapp/templates/index.html within the subapp
- not possible - 

To be able to access the subapp index.html file, you would have to either rename it, or build a structure like this:

/app/subapp/templates/subapp/index
Blueprint("subapp", __name__, template_folder="templates")
render_template("subapp/index.html")

It works, but it’s annoying as hell. I can understand the use case if you want to frequently access the main app’s templates in subapps, but I think there should be an option to limit the subapp to its own template folder.

HashtagSecurity will be back…

To those of you who actually follow my blog and have noticed that it’s become rather quiet recently – I’m sorry. The reason for this is that I’ve put all my efforts into the new company blog, which is exactly where all my new posts went.

I’ve started hashtagsecurity.com to write about infosec topics and to openly document some of the things I fought with over long nights, be it because they just wouldn’t want to work or because I just couldn’t see it – sleep deprived and all. It is a project that’s very dear to me and I really want to keep it going. But I started something new with LastBreach and, like all newborns, it requires a lot more attention than its older brother.

So even though I won’t be posting anything here for a while, that doesn’t mean that this place is dead – It’s just sleeping…

The more LastBreach grows, the more free time I will (hopefully) get back, which I can then, once again, dedicate to this blog. For now, lastbreach.com is where you will find my newest posts, and for those of you who already follow me on twitter, please continue to do so as I’m still active there.

But before I lock this place up – I made a promise in one of the posts a few months back (again, sorry) regarding upcoming posts on Lynis and its use for pentesters. I’m happy to say that I will be able to continue this series. So head over to lastbreach.com and stay tuned for more Lynis goodness, amongst other things, and thanks for reading my blog(s).

See you!
Frederic

Server Patching with unattended-upgrades

I can’t believe I haven’t written about this yet. Unattended upgrades are a great way to keep your servers up to date, but there are a few things that didn’t work out of the box, so here is a summary of how my patch process is set up.

Why unattended-upgrades?

To be honest, running upgrades unattended can cause bad feelings with your colleagues if stuff breaks because of it. And it most likely will if you’re not doing it right. Unattended-upgrades is a feature available in Ubuntu, Debian and most likely other Linux derivatives, which allows you to control which updates should be installed and when you want to get notified about it.

From a security standpoint, unattended-upgrades is a no-brainer: you want to have the latest patches installed, but you don’t want to and can’t do it manually, unless you have near unlimited manpower or really nothing better to do, which is pretty much never the case.

From the classic admin “keep things running” approach, making any changes whatsoever is not really something you’d want, much less doing them unattended, meaning “let the system do as it wants”.

This often leads to a discussion between security folks and administrators about whether or not this can actually work. The problem, as so often in security, is that change is needed; but for change not to end in disaster, you need to know what you’re doing and, more importantly, you need to know how your network and your servers behave.

If you were to enable automatic updates on a Linux system without any knowledge of what that system is doing, it’s probably not going to end well.

So what then?

In order to get unattended-upgrades running without messing up your stuff, you need to

  • understand what the host is doing
  • implement the least needed upgrade process (usually security patches only)
  • make test runs before you let it loose
  • have backups ready – something you should always have anyway!

There are two main problems that I have experienced so far with unattended-upgrades and that you should be aware of.

3rd party applications need specific package versions

The first one is easy to fix but a PITA to find out. Usually you find out by crashing the service, trying to figure out what caused the problem, and discovering that package x has to be version y but the most recent version in your distro’s repository is newer than that.

Since you most likely can’t fix the application’s dependencies, there is only one way – set the package on hold. Distros using the apt package manager allow you to set the hold flag for packages, which means that they won’t be upgraded along with the others. Be aware that this should only be done if no other way is possible, as it can have side effects such as

  • other applications can’t be updated because they require x to be updated with them
  • which sometimes leads to apt dependency f-ups
  • It’s possible that security patches won’t be installed either
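Putting a package on hold with apt looks like this (a sketch; mysql-server is only an example package name, and the commands need root):

```shell
apt-mark hold mysql-server      # freeze the package at its current version
apt-mark showhold               # list everything currently on hold
apt-mark unhold mysql-server    # release it once the dependency problem is gone
```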

Service restarts mess things up

This was really my own fault, though it took some time to fix. Some services, such as MySQL for instance, can be tweaked during runtime. If an upgrade process restarts the service after updating the binaries, these runtime tweaks are lost. This can be a problem if

  • you don’t know about the restart
  • you didn’t document your tweaks properly.

Another thing that can happen – it always can, no matter what you do – is that a service doesn’t restart properly. One reason, although that hasn’t happened to me yet, is that a service won’t restart because the config file still uses a deprecated option that was finally kicked out for good with this upgrade. If that happens, you were just too lazy to switch to the new one during the transition time when both the new and the old feature were still supported. One good way of avoiding this is by actually using your log files.

But monitoring, analyzing and getting the goods out of logs is a post for another day.

Needless to say, backups, documentation and the ability to restart your services without messing things up are things you should have already anyway; they are not requirements that only come with unattended-upgrades.

Installing unattended-upgrades

Installing the packages is as easy as it gets. Just run the following commands and you’re good to go.

sudo apt-get install update-notifier-common unattended-upgrades

The update-notifier-common package is an optional addition that creates the file /var/run/reboot-required, which tells you that the system requires a reboot to apply updated or newly installed packages, and the file /var/run/reboot-required.pkgs, which tells you which packages require the reboot.

You can even add checks to your monitoring system or your message of the day (motd) to get notified about uninstalled packages or required reboots.

69 packages can be updated.
0 updates are security updates.

You can get these with the following update-motd config files

$ cat /etc/update-motd.d/90-updates-available 
#!/bin/sh
if [ -x /usr/lib/update-notifier/update-motd-updates-available ]; then
    exec /usr/lib/update-notifier/update-motd-updates-available
fi

$ cat /etc/update-motd.d/91-release-upgrade 
#!/bin/sh
# if the current release is under development there won't be a new one
if [ "$(lsb_release -sd | cut -d' ' -f4)" = "(development" ]; then
    exit 0
fi
if [ -x /usr/lib/ubuntu-release-upgrader/release-upgrade-motd ]; then
    exec /usr/lib/ubuntu-release-upgrader/release-upgrade-motd
fi

$ cat /etc/update-motd.d/98-reboot-required 
#!/bin/sh
if [ -x /usr/lib/update-notifier/update-motd-reboot-required ]; then
    exec /usr/lib/update-notifier/update-motd-reboot-required
fi
They should be included in Ubuntu by default and updated on login by the pam_motd module. If you change or add config files, you can either log out and log in again for the changes to take effect, or install the update-motd package and run it.
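A minimal version of such a check, which you could wire into motd or a monitoring agent, might look like this (a sketch):

```shell
# print a status line depending on whether the reboot flag file exists
if [ -f /var/run/reboot-required ]; then
    echo "reboot required: $(cat /var/run/reboot-required.pkgs 2>/dev/null | tr '\n' ' ')"
else
    echo "no reboot required"
fi
```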

Configuring unattended-upgrades

To give you a quick overview, this is what we want the system to do to stay updated.

  • Fetch the newest package information (apt-get update)
  • Install security updates only
  • Notify us via email, at first always, later only on error
  • Do not reboot automatically
  • Do not overwrite config files

Let’s take a look at how to set these things up. Unattended-upgrades has three config files that are of interest to us.

/etc/apt/apt.conf.d/20auto-upgrades

This config file is pretty simple and straightforward.

APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";

With this we enable point one and, partly, two of our bullet list. But we haven’t configured point two yet. We enabled the installation of upgrades but haven’t defined which upgrades we would like to have.

/etc/apt/apt.conf.d/50unattended-upgrades

Here is where all the magic happens. The following shows only the options that I am using; the default config offers more than that, though, so you might want to take a look at it.

// Automatically upgrade security packages from these (origin:archive) pairs
// Additional options are "-updates", "-proposed" and "-backports"
Unattended-Upgrade::Allowed-Origins {
    "${distro_id}:${distro_codename}-security";
};

Unattended-Upgrade::MinimalSteps "true";

// Send report email to this address, 'mailx' must be installed.
Unattended-Upgrade::Mail "spam@hashtagsecurity.com";

// Set this value to "true" to get emails only on errors.
Unattended-Upgrade::MailOnlyOnError "true";

// Do automatic removal of new unused dependencies after the upgrade (equivalent to apt-get autoremove)
Unattended-Upgrade::Remove-Unused-Dependencies "true";

// Automatically reboot *WITHOUT CONFIRMATION* if the file /var/run/reboot-required is found after the upgrade 
// Unattended-Upgrade::Automatic-Reboot "true";

If you are really brave, you can enable the last option as well. It worked quite well for me on some servers, but you have to make sure that all services are started properly after a reboot. I’ve read somewhere that unattended-upgrades can be timed to avoid redundant servers rebooting at the same time, but I haven’t looked into it so far.

For more information on the configuration, check out the official Ubuntu documentation here and here.

CRON

If you take a look at the file /etc/cron.daily/apt, you will see that unattended-upgrades is already configured to run regularly.

#!/bin/sh
#set -e
#
# This file understands the following apt configuration variables:
# Values here are the default.
# Create /etc/apt/apt.conf.d/02periodic file to set your preference.

We created or modified the files in /etc/apt/apt.conf.d/ and thus configured the /etc/cron.daily/apt process to suit our needs, so there is no need to add a new cron job for it.

According to the Debian documentation, the 02periodic file is an alternative config file for the 20auto-upgrades, so we don’t need it.

Timing

The only problem I see with this is that redundant servers might run updates, or possibly even reboot themselves, at the same time. The way it’s set up now, the apt cron job is executed once daily.

Cron daily runs all scripts in /etc/cron.daily/ once a day, the start time for this is defined in /etc/crontab.

# cat /etc/crontab
SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin

# m h dom mon dow user  command
17 *    * * *   root    cd / && run-parts --report /etc/cron.hourly
25 6    * * *   root    test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
47 6    * * 7   root    test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly )
52 6    1 * *   root    test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly )

As we can see, cron daily starts at 06:25 AM every day. In my case, apt is the first command to be executed as can be seen by the alphabetical order of scripts in /etc/cron.daily/.

# ls -1 /etc/cron.daily/
apt
bsdmainutils
creds
dpkg
logrotate
man-db
quota.dpkg-dist
sysklogd

This might be different on your system, depending on the cron jobs you have installed but it should be among the first to run. If not and you want to be sure, just rename it to 01_apt.

The reason why I care about the order of execution is that commands running before apt could delay its execution. We can easily change the start of cron daily for redundant systems, but if the first system had a huge delay, they might still end up running at the same time. The chance is slim, but why take chances if you can be sure.
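You can see run-parts' lexical ordering for yourself in a scratch directory (a sketch; requires run-parts, which ships with Debian/Ubuntu):

```shell
# run-parts executes scripts in sorted order, so 01_apt runs before bsdmainutils
d=$(mktemp -d)
printf '#!/bin/sh\necho 01_apt ran\n'       > "$d/01_apt"
printf '#!/bin/sh\necho bsdmainutils ran\n' > "$d/bsdmainutils"
chmod +x "$d"/*
run-parts "$d"
rm -r "$d"
```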

Here is an example for two redundant web servers.

web01: # grep daily /etc/crontab
25 6    * * *   root    test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )

web02: # grep daily /etc/crontab
25 8    * * *   root    test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )

The time difference of two hours should be more than enough. Keep in mind, though, that too much time difference might result in other problems, such as two servers being out of sync because the dependencies changed on one system but haven’t been updated on the other.

Logs

If you want to check on your recently applied updates, take a look at the following log files.

  • /var/log/unattended-upgrades/unattended-upgrades.log
  • /var/log/unattended-upgrades/*
  • /var/log/dpkg.log

Config files change!

Here is one more add-on that I stumbled across recently. If you have done a few upgrades with apt-get, you should have seen this prompt at least once.


While this is a great thing if you’re doing the upgrade manually, it’s kind of a problem when you install them automatically.

The image above is actually a screenshot from a mail my server sent me, after the upgrade process stopped because of this dialog. If this happens, the updates are not installed completely, and you will receive this mail daily until you fix it!

So let’s fix it. Create or open the config file /etc/apt/apt.conf.d/local and add the following lines.

# keep old configs on upgrade, move new versions to <file>.dpkg-dist
# e.g. /etc/vim/vimrc and /etc/vim/vimrc.dpkg-dist
Dpkg::Options {
   "--force-confdef";
   "--force-confold";
}

This tells dpkg to keep the original config files and move the new versions to <file>.dpkg-dist, so you can inspect them at a later point. What I was missing, though, is a notification by mail that new .dpkg-dist files have been created.

To get this information, I whipped up this small script. If you know a better way to solve this, please let me know. In the meantime, this will get the job done.

#!/bin/bash
#
# This script will search for .dpkg-dist files and notify you if any are found
#
REPORT_MAIL="upgrades@yourmail.com"


# quote the pattern so the shell doesn't expand it before find sees it
find / -name "*.dpkg-dist" 2>/dev/null > /var/log/unattended-upgrades/unattended-upgrades-config-diff.log
confcount=$(wc -l /var/log/unattended-upgrades/unattended-upgrades-config-diff.log | awk '{print $1}')

if [ "$confcount" -ne "0" ];
then
	echo -e "Subject: New Held-Back Config File Changes\nFor the following configs, changes have been held back during unattended-upgrade. \nPlease review them manually and delete the dpkg-dist file after you're done.\n\n $(cat /var/log/unattended-upgrades/unattended-upgrades-config-diff.log)\n\nRegards, \n$(hostname -f)" | sendmail $REPORT_MAIL 
fi

Just save this script somewhere on your server, make it executable and add it to your daily cron jobs. Make sure to change the email address!

Also, make sure the script has write permissions to the /var/log/unattended-upgrades/ folder, otherwise it will fail.

Summary

Unattended upgrades are not something you “just enable”. They have to be introduced into your environment carefully but it’s time well spent as they can be quite helpful later on.

Not only is security increased, but you save a lot of time when you finally have no other choice than moving on to the next distro release. Believe me, few things are more painful than having to perform full dist upgrades (e.g. 12.04 => 14.04) on way outdated production servers.

Being more secure not only means that the risk of losing money is smaller; for admins it also means a smaller risk of running around in panic, trying to figure out how it happened and what can be done to stop it. That’s something you should keep in mind if you’re an admin, or keep handy as an argument if you’re working with admins.

Lynis Enterprise – The 2nd Encounter

This time we will dive into compliance scans and take a look at how multiple hosts are displayed. I also want to find out why I am at risk of data loss – that’s right, I still don’t know!

This round, I’ll take a look at the documentation, which can be found here.

RTFM if you can!

In order to get different results, I wanted to add another host. But, because around 9 days went by since I last touched Lynis, I couldn’t remember how exactly the Enterprise key was added to a new host.

After searching and searching in the documentation, I found nothing but the information that the key is in the configuration panel. Great, but how do I add it? What’s the secret option I need? I did check the config file, but there was no license key option to be found either.

Then I remembered, that I ran Lynis as root from the /root/ folder, so I checked the config file there. And to no surprise, there it was.

I should’ve checked this site though, since it’s all there…

Anyway, putting this information in the documentation and a link to it in the configuration page would help a ton. Speaking of documentation, there are a few other things that I came across, such as the icon description being a little off

and topics that are linked in the documentation or application but lead to nowhere.

Parts of the documentation just don’t seem to exist at all, while others at least have the decency to tell you so!

Adding other hosts

But enough about the docs. I wanted to add three more hosts so I have two 14.04 and two 12.04 machines, which I could then scan with either the --check-all or the --pentest switch, to get an idea of how they impact the scan results. Also, this should give us a bit more than just “21 Suggestions” and might be more representative of what versions are actually being used out there, with companies not always running the latest shit and all.

I want to see criticals and a red cross at compliance!

To get an overview, here are the hosts with their OS versions.

test-ossec-lynis01		Ubuntu 14.04	audit system --check-all
test-ossec-lynis02		Ubuntu 14.04	audit system --pentest
test-ossec-lynis03		Ubuntu 12.04	audit system --check-all
test-ossec-lynis04		Ubuntu 12.04	audit system --pentest

After copying the lynis folder with the enterprise key to the other hosts, I ran the commands to add the hosts to the enterprise web UI.

One thing I noticed, is that the Ubuntu 12.04 hosts didn’t show up in the UI after the scan completed. The culprit here was that the curl package wasn’t installed on these hosts.

After running apt-get install curl -y and running the scans again, they were listed with the other hosts.

Two things stick out here, one is that the Ubuntu 12.04 version string is empty. The other is the bandage sign in the Lynis version column. Hovering over it says “This version is outdated and needs an update”. This being in the Lynis version column, I assume it’s referring to the Lynis binary needing an update, which is strange since I rsynced the folder from the test-ossec-lynis01 host and should therefore be the same.

test-ossec-lynis03.prod.lan:~/lynis $ sudo ./lynis --check-update
 == Lynis ==

  Version         : 2.1.0
  Status          : Up-to-date
  Release date    : 16 April 2015
  Update location : https://cisofy.com


Copyright 2007-2015 - CISOfy, https://cisofy.com

Yup, it’s the current version alright. Ideas? Ignore, for now at least.
Let’s check out the findings instead.

According to the dashboard, we now are at risk of system intrusion, which is never a good thing!

Since I took a quick look after the host test-ossec-lynis03 was added, I know that that’s where the problem was first found.

So what do we have here? Great, “one or more vulnerable packages”, I wonder which packages are vulnerable.

Wait, what? That is one way to make a great design around no additional information whatsoever. I understand that I should update my system, but as a technical person I would love to be able to understand what exactly the threat is and where it has its source. Maybe this host is running under certain circumstances that make upgrades hard. In this case, I might want to check if an upgrade is really necessary.

Note: I’m not saying that I’m a fan of the above scenario, but it does happen sometimes – unfortunately!

Since I can’t do much more than upgrading my server, let’s just continue with the other hosts.

The older hosts have the highest risk rating, no surprise there. But they didn’t introduce new risks, which is interesting. I know that 12.04 still receives security updates, but I’m pretty sure that they’re not running the latest versions.

Fun fact, some problems solve themselves, such as the “Old Lynis version” one.

Let’s check out one of the 12.04 hosts and see what they have to offer. Apparently there isn’t much difference between the --check-all and --pentest checks, since the results are the same, at least when it comes to the number of findings.

I won’t show the rest of the results as it’s pretty much the same as the first scan results. Obviously, we have the “vulnerable packages” finding and again I would love to know which packages are vulnerable, but it just shows the same page as before.

Just out of curiosity, let’s check real quick which packages are listed for security updates.

$ grep security /etc/apt/sources.list |sudo tee /tmp/security-updates-only.list
$ sudo apt-get dist-upgrade -o Dir::Etc::SourceList=/tmp/security-updates-only.list

As you can see, there is definitely a difference between the two hosts, even if it’s not as substantial as I would have thought. Still though, in a production environment, hosts are not all the same, and having information on what exactly causes a problem goes a long way towards improving things.

They should all be updated, since these are all security updates, but I would still love to know which of them is responsible for the “system intrusion” risk.

Compliance

I’m still marked compliant while running no compliance checks at all, so let’s fix that next.

It seems that policies are changed in the web interface, not via CLI switches. This raises the question of how the scan is initiated, but let’s try to run a scan with compliance first.

I think “High Secure” should be enough, but ultimately I want to check all of the predefined policies.

To check for compliance, I need to run the rule checker.

So my system is not compliant after all. Since it was that easy, I ran a quick check over all rule sets.

Turns out it doesn’t make that big a difference. Let’s examine the findings one by one.

Firewall is pretty obvious, it’s installed and running. Since this host is actually just a Linux container (LXC), the detected firewall is the host IPTables rule set.

“Time synchronization is configured” is more a piece of general information than anything else. Is the configuration compliant with the defined rule sets? Should it not be configured? Why isn’t it marked with a cross or a checkmark?

Clicking on the link just shows the rule definition but no further information on the findings.

For the record, neither NTPd nor timed are running on the host, so the checks have probably failed. If it’s not a compliance issue, why is it listed at all?

Malware includes a check whether an anti-malware tool (clamd) is installed and, apparently, whether its configuration is protected.

This is probably a variable that Lynis sets after a certain check. What I’m wondering is how this check could have fired, since clamd isn’t even installed.

Limited access to compiler seems to check if a compiler is installed!?

Compliant: check! I’m still compliant, at least according to the dashboard and the hosts overview. Even after running Lynis again – no change!

So what exactly do compliance checks look for?

$ sudo apt-get install clamav
$ sudo /etc/init.d/clamav-freshclam start
$ ps aux |grep clam
clamav    1915  5.0  0.0  52364  2908 ?        Ss   15:47   0:06 /usr/bin/freshclam -d --quiet
$ sudo ./lynis audit system -c --quick --upload

Seems like it worked!

The compliance check shows a red cross; let’s see what the UI has to say about the updated status.

What? OK, this doesn’t make sense.

There is no change in the result whatsoever. And just for the record, here are the config permissions.

test-ossec-lynis03.prod.lan:~/lynis $ ls -lh /etc/clamav/
-rw-r--r--  1 root   root 2.0K clamd.conf
-r--r--r--  1 clamav adm   717 freshclam.conf
drwxr-xr-x  2 root   root 4.0K onerrorexecute.d
drwxr-xr-x  2 root   root 4.0K onupdateexecute.d

Only root is able to write, and the freshclam config belongs to clamav. Seems reasonable to me.
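If the “configuration protected” rule is about write access, a helper like this would surface anything group- or world-writable under a config directory (my own sketch, not something Lynis ships; `-perm /022` is GNU find syntax):

```shell
# loose_perms: list regular files under a directory that are writable by
# group or others ("/022" matches if any of those write bits is set).
loose_perms() { find "$1" -type f -perm /022; }

# usage: loose_perms /etc/clamav   (no output means nothing is loose)
```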

Taking a closer look at the policies, I found that some of them are empty and don’t have any rules set. That would explain why so few results showed up in the overview page.

While this makes sense, since it’s hard to check for, let’s say, every possible backup service that could be in place, it’s also kind of misleading since the policies are named after “HIPAA”, “ISO27x” and “PCI-DSS”. Anyone getting a checkmark on these should check the rule tables before the auditor shows up!

I’m not an expert in compliance, but I’m pretty sure that each of them has more than 8 things that should be done properly before you can pat yourself on the back!

Summary

So what have we learned?

  • The documentation is lacking in some parts
  • Results could be more detailed, especially regarding the source of the problem
  • I still don’t know what puts me at risk of data loss
  • Now I also don’t know which package puts me at risk of system intrusion
  • I also don’t know how likely the exploitation of said risks is (link to CVSS?)
  • I haven’t seen a “mark as false positive” or “ignore because it can’t be fixed” option.

All in all, a lot of time has to go into writing rules before Lynis Enterprise can really be used for compliance checks. On one hand I love it, as it forces people to create checks that fit their environment; on the other hand it would be great to have an ISO27001 rule set premade for some distros – say Ubuntu – to run a quick check and see how the host is holding up.

What really stopped me from “digging deeper” is that I wasn’t able to figure out how the checks actually worked. I get that “malware running” is marked as compliant if the “process running” variable contains “clamd”, but what is Lynis, the CLI tool, actually checking for, and more importantly, how can I check the content of “process running” myself? I know it’s an OSS tool and I could look at the source code, but that’s not how I want to use my time – and your boss wouldn’t want you to use yours that way either. Especially after he paid for the license.
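One place worth grepping: the CLI writes its findings as key=value pairs into a report data file. A small helper sketch, assuming the default report path (`/var/log/lynis-report.dat` on my install; yours may differ):

```shell
# report_vars: print every line in the Lynis report data that matches a
# pattern. The default path is an assumption; override via second argument.
report_vars() { grep -i -- "$1" "${2:-/var/log/lynis-report.dat}"; }

# usage: report_vars clam
```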

What I’m getting at is that, as a newcomer to Lynis, it’s sometimes hard to understand what exactly is being done in the background and how some results come to be.

I’m looking forward to my next encounter, as I still have to

  • write my own compliance rules
  • check out historical data, after I fixed and unfixed stuff
  • and most of all, find out if LE can actually be of use to the average pentester.

Also, what’s up with that?

I did everything imaginable to get this to show a red cross – without any luck!

Lynis Enterprise – The 1st Encounter

I recently got my hands on a trial of Lynis Enterprise, the commercial SaaS version of the open source Linux system auditing software Lynis. In exchange I promised to write about my experience here and share some feedback with the developers.

I could spend some time with the tool and write about it afterwards, but instead I decided to write down my thoughts as I stumble across things. That means that some questions might get answered later down the road, or that I write stuff that seems stupid, but in return this post will be much closer to how I really experienced my first encounter with Lynis. My thoughts? Unfiltered? On (digital) paper? This will get weird – you have been warned!

I did run the open source version once on one of my servers and scrolled through the results, so to keep this fair, this is what I knew before I started.

  • Lynis is in apt-get, but (of course) not in its most current version
  • Out of the box, it doesn’t run with normal user privileges (that can be fixed though)
  • Lynis can simply be downloaded from cisofy.com and installed
  • Lynis ./include/const folder must belong to root if it is run with root privileges
  • Lynis results look like this, but in color.

Now that we’re on equal footing, let’s get started.

Setup

The first thing I noticed after I logged into the web console was this.

Which is not really a problem, but even though I don’t have anything in production at this point, I still immediately asked myself “When exactly will it expire?”, followed by “and what type of license do I have?”. The latter is a result of me not paying for the trial; otherwise I would have probably known about the type of license I ordered. Or maybe I wouldn’t have. Who knows how many licenses I have to manage.

I almost instantly dropped the questions, as all I wanted to do was set up my first server to send data to LE. Trying to do so, I took a closer look at the overview page, which looks like this.

“OK, so that’s my username, I have a trial account, my email, no messages, no subscriptions,… hm, but how do I get started. What’s that down there at the bottom, Control Panel, System, Compliance… No wait, that’s just the change log. Maybe in the navigation panel? Ah, there it is, in the box on the right. First time user.”

Don’t ask me why it took me so long to find this, but somehow I kept missing it. It could just be me, but I feel that if you log in for the first time, that box should present itself a little bit more.

Now then, let’s go to the systems page to add the first host…

Thoughts?

Hm, adjust config and run with --upload switch.
Looks easy enough. But what's that -k option? 
For self signed certificates? 

Maybe that's for the on-premise version.

Yeah probably, but for a moment I felt unsure about this whole thing... 
Sending my data to the cloud, to an unsigned cert?

Nah, it'll be fine!

...editing config
...copy paste command
sudo lynis audit system --quick --upload

Damn, I ran the outdated apt-get version...
sudo ./lynis audit system --quick --upload

Dashboard

The Dashboard showed the one host in a “Technical” overview, which I assume includes every option the Dashboard has to offer, while the other two, “Business” and “Operational”, only showed data regarding those areas.
What caught my eye though was that “Data Loss” was listed under “Technical Risk”, while at the same time, right next to it a nice, all green circle stated “All systems are compliant”. That strikes me as odd, although I never really thought that being compliant means being safe. Still, it seems strange seeing it on a dashboard for some reason.

But what else is there? One system, zero outdated. Zero systems or Lynis clients? What’s that average rating? Average compared to what? And based on what? No events.
But I thought I had a risk of data loss. By the way, where can I see what that’s all about exactly? Probably by clicking on the host link. But first I’ll check out the tags.

Hm, white font color on white/gray background. For the record, the tags are “firewall”, “ipv6”, “iptables” and “unknown”. The last one fits my question perfectly. “What are these tags for and why is data loss not one of them?”

I know what tags are for – in general – but why these, and why not data loss? That seems to be the main risk that was identified on this host. Speaking of data loss, let’s look for more detail on that.
Clicking on the host link brought me to this page.

What have we got here: OS information, Network information and Other. IPv6. Does this mean IPv6 is enabled, or that it has been deemed securely configured?
There we have the compliance check again, and there is a bit more information on the average rating. So it’s the average risk rating, and it’s a comparison against systems with the same OS and against all scanned systems.

I assume at this point, that this only includes my systems, since there is only one. The tags are readable this time. Scrolling down…

OK, so file integrity was gray because no checks have been performed. Is there a plugin for that, or what do I have to do to get them? Maybe just another switch?
I think at some point I need to dig through the man page, but for now let’s just keep wandering around.

Wait, I am compliant because I have no policies assigned? That’s an easy way out… and a bit confusing to be honest. Why wasn’t this gray like the file integrity checks?

Networking doesn’t say anything about being secure, so I guess the green checkmark is about IPv6 being enabled. The same goes for the firewall audit. What else is there?
No expired certificates, one warning about a misconfiguration in /etc/resolv.conf and a bunch of suggestions to harden the host. These look eerily similar to the compliance checks from Nessus, although they are far fewer in number.

The only thing that really goes against my personal recommendation is enabling password aging limits. I simply don’t believe that regularly changing passwords increases security, but that’s a discussion for another time and place.

Last but not least, there are a few manual tasks and an empty scan history.

System Overview

Move along, nothing to see here!
Well, that’s not entirely true. While there is only one host listed here, it’s easy to see why this page might become useful later on.

21 Suggestions, 0 Warnings, the host version and name, last updated, Lynis version and of course compliance trickery. With multiple hosts, this will definitely come in handy, although I’m missing a “sort by x” feature.
Show hidden controls? Of course!

Bummer.

Compliance

Right, we haven’t done this one yet. We got the checkmark though, that’s fine, right?

Hm, but High Secure does have a nice ring to it and I wanna try the others as well. Maybe I’ll even create a custom one – for Cyber. But before that, let’s finish the round through the navigation panel.

File Integrity

This page doesn’t give much more information than the gray area in the dashboard did. I still wonder why Lynis didn’t run file integrity checks, or what I have to do in order to get them running. This would be a good place for a quick howto. (hint, hint)

Reports

Ignoring the Improvement Plan page, which is more of a documentation page than a feature, brought me to reports.

Systems and applicable controls is basically just a list of hosts with their associated suggestions. It’s nice to have it all in one place, but not worth yet another giant screenshot.
My systems overview and Systems without compliance policy are fairly obvious. They are the same thing as the Systems Overview page, with or without compliance policies. There is but one difference.

Needless to say, I instantly copied the report data into LibreOffice Calc. The result looks… well, bad.

I know, it’s too small to read, but that’s the overall structure of the report. Something tells me that won’t be used much by anyone – unless I just did it wrong, or LibreOffice Calc is nobody’s favorite spreadsheet software. Anyway, an export-to-spreadsheet/CSV/PDF function would be swell.

Configuration

The final page in the main navigation is configuration, which didn’t bring much enlightenment to tell the truth.

So both Lynis and the Lynis Control Panel are up to date. I guess the latter is mainly interesting to people running the on-premise version. No Lynis plugins found. OK, so how do I get some?
This would be the right place for a link to, say, a plugin repository or at least the part of the documentation explaining how to get and use plugins.

But let’s continue with the Modules section. Most of these were either links to pages we already saw, or simply not clickable. The others were rather short in content.

Ok, so no events. That might change once a few more hosts are scanned.

Nope. Let’s skip that.

Ah, there is something new. Security Controls. What’s that for? Maybe the link will tell us more?

So it’s advice on how to correct the specified flaws. Neat! It even has Ansible, CfEngine, Chef and Puppet snippets.

Or it doesn’t. What’s the checkmark for then I wonder.

Summary

What you’ve just read is literally a brain dump. I wrote down everything while looking at Lynis Enterprise for the first time. I don’t really have an opinion on it yet, other than that I like it (no reason) and that I think it has potential to help Linux admins keep an eye on their hosts.

I will take a closer look at file integrity and compliance checks and write about that in a more traditional manner. I will also try to figure out how Lynis can benefit penetration testers in their work. It is clearly designed as a program for continuous auditing, so I’m curious how much help it will be in one-time assignments – especially in regards to the differences between the enterprise and open source versions.

After that, I probably have to take a closer look at the Lynis config file and documentation to answer questions like “Can I tell Lynis that password aging is stupid” or “How do I add/enable/run plugins?”.

PS: @Michael, if you’re reading this. I like the Business dashboard for management, but I still don’t know why I’m at risk of data loss. That’s probably the first question any auditor will have to answer. Maybe I just missed it, or maybe a link to the cause in the dashboard isn’t such a bad idea.