Linux Malware by the Numbers

Alon Nafta | 22 Jul 2014 | https://privatecore.com/blogs/linux-malware-numbers/

Concrete security incident data is typically scarce for any operating system, but finding useful data is even harder for Linux environments. Some folks might even believe that there is no such thing as Linux malware, or that Linux is inherently secure, too heterogeneous, or too rare a target compared to Windows systems. Instead of going into a "why" discussion, I'd like to look at reports of actual incidents, describe those threats, and use the Windows malware experience to infer "what's next" for Linux.

A key point to consider when looking at Linux malware is that it mostly targets servers. Server threats differ from client-side threats in their common exploitation vectors, and server security relies heavily on system administrators' skill and meticulousness.

What were the major Linux malware incidents in recent years?

Here's the data I collected for roughly the last three years:

  • 2011: kernel.org was hacked; the malware used was a variant of Phalanx, one of the better-known Linux rootkits.

  • 2012: A Linux kernel rootkit, nicknamed Snakso, was caught in the wild (here's a blog post describing it).

  • 2012: An iframe-injection module was caught in the wild, nicknamed Chapro by Symantec and later linked to the previously known Darkleech.

  • 2012: The Volatility team released a nice analysis of a recent Phalanx variant (dubbed Phalanx 2) caught that year.

  • 2014: Linux backdoors were found in the wild in large numbers; because of the high volume, this was named Operation Windigo.

  • 2014: Darkleech is still seen in the wild in newer variants.

Phalanx and Snakso are both kernel rootkits that use loadable kernel modules to execute kernel code and various hooks to hide processes, files, and network connections. All the other malware used "userland" modules that patch existing binaries on the system and employ various techniques to evade system administrators and external system audits.
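
As a defender-side illustration of what "hiding" means here: if a rootkit filters a PID out of /proc directory listings but still answers direct lookups of /proc/<pid>, the discrepancy is detectable. Below is a minimal sketch of that cross-check (my own illustration, similar in spirit to tools like unhide; it is not code from any of the malware above, and a rootkit that also hooks direct lookups would evade it):

    #!/usr/bin/env python3
    """Cross-check /proc enumeration against direct PID probing."""
    import os

    def listed_pids():
        # PIDs visible when enumerating /proc (what ps and ls see)
        return {int(d) for d in os.listdir("/proc") if d.isdigit()}

    def probed_pids(max_pid=65536):
        # PIDs found by directly probing /proc/<pid>, bypassing readdir
        return {pid for pid in range(1, max_pid + 1)
                if os.path.exists(f"/proc/{pid}/status")}

    if __name__ == "__main__":
        candidates = probed_pids() - listed_pids()
        # Weed out races: a PID that exited between scans is gone now,
        # and one that started late shows up in a fresh listing.
        fresh = listed_pids()
        suspects = {pid for pid in candidates
                    if pid not in fresh and os.path.exists(f"/proc/{pid}/status")}
        for pid in sorted(suspects):
            print(f"possible hidden process: pid {pid}")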

How did malware get in?

Linux-running systems are primarily servers, meaning that common exploits targeting browsers (via drive-by attacks) or email clients and file readers (via spear-phishing emails) are practically irrelevant. Yet all of the malware above was presumably installed with root privileges, so how did it get in?

We can reasonably speculate that many servers suffer from bad configuration, exposed interfaces, and shared SSH keys and credentials. Unpatched servers are further exposed to publicly disclosed privilege-escalation vulnerabilities, and obviously everyone is exposed to zero-days. Since numerous servers are often managed by the same IT administrators, sysadmins are obvious targets for attackers (pdf): compromising one administrator yields access to many servers at once.

These are small numbers compared to Windows incidents, so why should this change?

Granted, the scale and complexity of these incidents are significantly lower than what happened in the parallel world of Windows systems. However, we should take the following observations into account:

  • The technology barrier isn't as high as previously thought, as demonstrated by last year's disclosure of the (five-year-old) NSA Tailored Access Operations (TAO) catalog. Moreover, many Windows malware capabilities can be ported to Linux-running systems, for example using GRUB to retain persistence (see the integrity-check sketch after this list).

  • As data moves from endpoints to outsourced servers and centralized server farms, we can expect (or fear) that the relative value of exploiting servers will rapidly increase.

  • Linux server security relies almost exclusively on security-aware administration and capable administrators, since deployed third-party security products are sparse. This leaves incredible room for human error and for security gaps due to lack of technical aptitude or awareness.
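
On the GRUB point above: boot-chain tampering also suggests a simple, if partial, countermeasure, namely comparing hashes of boot files against an off-host baseline. A minimal sketch follows; the paths and the baseline.json name are illustrative assumptions, and a kernel-level rootkit can of course lie to a check running on the compromised host, so this only raises the bar:

    #!/usr/bin/env python3
    """Compare SHA-256 digests of boot-chain files to a baseline."""
    import hashlib
    import json
    import sys

    # Paths vary by distribution and firmware type; illustrative only.
    BOOT_FILES = ["/boot/grub/grub.cfg", "/boot/grub/i386-pc/core.img"]

    def sha256(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    def main():
        # baseline.json maps path -> digest; generate it on a known-clean
        # system and ideally store and verify it from outside the host.
        with open("baseline.json") as f:
            baseline = json.load(f)
        tampered = [p for p in BOOT_FILES if baseline.get(p) != sha256(p)]
        for path in tampered:
            print(f"hash mismatch: {path}")
        sys.exit(1 if tampered else 0)

    if __name__ == "__main__":
        main()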

Looking at the Crystal Ball: Malware and Linux

Linux systems and the Linux security landscape have experienced far less malware activity than their Windows counterparts, yet this very same reduced friction has had a major impact, resulting in a relatively immature security ecosystem. The Linux server environment lacks easily deployed security solutions and third-party security products (e.g., anti-malware), and it demands strong administration skills. These factors make it a relatively fertile field for malware.

As data and computation move from endpoints to servers, the number of Linux servers holding sensitive data is on the rise, providing an attractive opportunity for malicious adversaries. The technology and tools used by nation-state actors will eventually make their way to cybercrime organizations, expanding their efforts and capabilities to target Linux systems.

Preventing the Next Target* Breach with Trusted Computing

Alon Nafta | 15 Jan 2014 | https://privatecore.com/blogs/preventing-target-breach-trusted-computing/

* Replace Target with your favorite retail chain.

The recent news that Target, Neiman Marcus, and perhaps three other retailers suffered breaches involving large volumes of pilfered data is raising concerns among retail security professionals. While details are sketchy and there are plenty of unknowns, it appears that "memory scraping" (also called "RAM scraping") malware might have played a part in the compromise. There is plenty of published research and alerting around memory-scraping malware, and this sort of malware has been around a while; check out Dark Reading's coverage from 2009 and the 2009 Verizon Data Breach Investigations report.

What is memory-scraping malware? What we have seen to date has affected retail point-of-sale (POS) systems and potentially backend systems that process various types of payment cards (credit, debit, prepaid, etc.). While standards like the Payment Card Industry Data Security Standard (PCI DSS) call for encrypting cardholder information at rest (storage) and in transit (in motion on the network), cardholder information is typically unencrypted while in use (in memory). If you can access the POS system or server memory, you can extract its contents, including the cardholder information.

The data format of such information is clearly defined (see ISO/IEC 7813 and 7816), so attackers can simply implement suitable algorithms in malware, install it on POS machines, and harvest cardholder information from memory with those formats in mind.
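
To make that concrete, here is a minimal sketch of the pattern-matching core of such a scraper, pointed defensively at a saved memory dump rather than live process memory (the dump file name is an illustrative assumption). Per ISO/IEC 7813, Track 2 data is a 13-19 digit primary account number, an '=' separator, a YYMM expiration date, and a service code, which makes a regular expression plus a Luhn checksum filter sufficient:

    #!/usr/bin/env python3
    """Scan a raw memory dump for ISO/IEC 7813 Track 2-shaped data."""
    import re
    import sys

    # PAN (13-19 digits), '=', YYMM expiration, 3-digit service code
    TRACK2 = re.compile(rb"(\d{13,19})=(\d{4})(\d{3})\d*")

    def luhn_ok(pan):
        # Luhn checksum filters out random digit runs that are not PANs
        digits = [int(c) for c in pan.decode()][::-1]
        total = sum(d if i % 2 == 0 else (d * 2 - 9 if d * 2 > 9 else d * 2)
                    for i, d in enumerate(digits))
        return total % 10 == 0

    def scan(path):
        with open(path, "rb") as f:
            data = f.read()  # fine for a sketch; stream chunks for big dumps
        for m in TRACK2.finditer(data):
            if luhn_ok(m.group(1)):
                # report offsets and only the PAN's last four digits
                print(f"offset {m.start():#x}: PAN ending {m.group(1)[-4:].decode()}")

    if __name__ == "__main__":
        scan(sys.argv[1] if len(sys.argv) > 1 else "memdump.bin")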

How can you protect against this sort of malware? Antivirus is certainly a necessary component, required by PCI DSS for systems handling cardholder information, but AV has been demonstrated to be less than effective in stopping sophisticated threats, and updating AV on isolated networks is cumbersome.

One promising countermeasure is attestation. Attestation protects against persistent malware on immutable, "gold" base software images, ensuring, through cryptographic principles and components, that both hardware and software are unchanged. Attesting to the integrity of server and POS systems would validate that the machine (hardware and software) is clean of malware. If a machine were infected, it would fail attestation and could be examined and remediated. Proper attestation supported by strong cryptography would eliminate the chance of otherwise undetected malware persisting.
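
In heavily simplified terms, the verifier's job reduces to checking a fresh, authenticated measurement against the known-good values of the gold image. The toy sketch below models this flow with an HMAC standing in for the TPM's asymmetrically signed quote; every name and digest here is invented for illustration, not any vendor's actual protocol:

    import hashlib
    import hmac

    # Measurements of the "gold" image, recorded at provisioning time
    # (digests truncated/invented for illustration).
    KNOWN_GOOD = {"bios": "9f2c...", "kernel": "4b1e..."}

    def verify_quote(quote, nonce, key):
        # Freshness + authenticity: the quote must cover the verifier's
        # nonce. A real TPM quote is signed with an attestation key;
        # an HMAC stands in for that signature here.
        msg = nonce + repr(sorted(quote["measurements"].items())).encode()
        expected = hmac.new(key, msg, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, quote["mac"]):
            return False  # forged or replayed quote
        # Integrity: measurements must match the gold image exactly.
        return quote["measurements"] == KNOWN_GOOD

A machine failing this check would be pulled aside for examination and remediation, exactly as described above.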

Naturally, an infection could still occur after attestation by exploiting vulnerabilities, but periodically re-attesting systems (which typically requires a reboot) minimizes this window of vulnerability (or opportunity, depending on your perspective). In this situation, malware could infect a machine after it was attested in a known, good state, but that malware would be wiped away the moment the system reboots, and that would be validated when the system re-attests.

A normal, stateful machine suffers from malware that can use its hard drive, or other components, to persist. A stateless machine that relies on a locked-down base software image and is periodically attested avoids malware that might try to burrow its way into a stateful component. POS systems, as well as transaction-processing backend systems, are not intended to run arbitrary code. Validating (attesting) such systems against a known, good software image would dramatically reduce the window of opportunity for attackers.

Security measures typically require some change in technology and processes. One consequence of periodically attesting systems is the downtime required as systems reboot and applications restart. The impact could be minimized by rebooting POS machines during off hours, and for mission-critical servers it could be done in round-robin fashion across a high-availability (HA) cluster. POS systems are natural candidates for being stateless, as the data they handle is transient.

No security countermeasure is going to stop all attacks all the time – technology is extremely complex and attackers are very clever.  While details of the exact circumstances around the breaches at Target, Neiman Marcus, and other retailers are still unknown, my speculation is that attesting systems would have reduced the chance of a successful attack and minimized the damage of any successful attack by reducing the attack duration.

IPMI and New Challenges in Cloud Server Security

Alon Nafta | 17 Dec 2013 | https://privatecore.com/blogs/challenges-cloud-server-security/

Intelligent Platform Management Interface (IPMI) controllers ship on practically every x86 server, and any large IT monoculture provides an attractive target for bad guys. While offering increased manageability for cloud servers, the abundance of IPMI controllers in cloud environments poses new threats for cloud users, spanning from remote, over-the-web exploitation to local network attacks from cohabiting cloud tenants.

Enabled and network-connected by default on many systems, IPMI controllers expose unaware users to threats orthogonal to the ones they currently protect against, which mostly relate to malware and web-based attacks.

Ensuring proper IPMI tenant isolation is a key step in protecting against IPMI-based attacks, yet it is often overlooked by administrators. We were able to demonstrate the use of a low-footprint memory-scraping tool to collect root passwords, keys, and other valuable data from the memory of remote servers, using an easy-to-accomplish attack sequence.

Background

IPMI recently made news headlines following two notable and impressive pieces of security research: the first by Dan Farmer in January, followed by HD Moore in July. A month ago, Rapid7 disclosed software vulnerabilities in Supermicro server firmware. Together, these results make the case for practical remote server exploitation, found to affect approximately 35,000 servers and potentially many more.

IPMI 101

IPMI provides on-board hardware and software allowing remote command-and-control communications with servers. IPMI is implemented in most x86 servers and apparently enabled by default in many of them. Technically, it is handled by an on-board Baseboard Management Controller (BMC). IPMI provides functionality that would otherwise require physical presence: display, keyboard and mouse, virtual media, and power management, even when the machine is shut down. To grasp what can be done via IPMI, simply imagine that the attacker is standing next to your machine with fingers on the console keyboard.

Several unique features of IPMI are noteworthy in the context of threat assessment:

  • On many systems, IPMI communication is possible through a single Ethernet port. That means that if IPMI is enabled (through the BIOS), it is exposed to the same network the server is functionally using. To be fair, the IPMI BMC would have to be assigned an IP address, but that is usually taken care of automatically by DHCP, which is present in most networks. HD Moore and Rapid7 were able to discover 35,000 exposed IPMI interfaces on Supermicro servers alone. Clearly, the potential for widespread damage is huge.
  • There is no open-source implementation of an IPMI BMC controller; every vendor makes its own closed-source implementation. That leaves room for many potential bugs, and very few (responsible) eyes searching for them.
  • Similarly to BIOS updates, IPMI firmware updates are hard to manage, and they are only in the early stages of being recognized as a viable threat by the compliance and pentesting communities. While BIOS and network-equipment firmware updates are covered by compliance standards (e.g., PCI DSS) and pen-testing routines, BMC firmware updates are commonly overlooked. Interestingly enough, most of these firmware implementations also originate from Chinese vendors.

Other bad security practices are worrisome as well: unchanged default passwords, the need for Java to communicate with IPMI controllers, and weak encryption schemes. PrivateCore identified many vendors using TLS 1.0 with 128-bit RC4-based encryption, widely considered broken, for connections to the IPMI HTTPS server. The HTTP server also uses untrusted certificates, opening the door to man-in-the-middle (MITM) attacks that exploit unwary administrators running unpatched Java clients.
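
One quick way to check your own BMCs for this: attempt a handshake capped at TLS 1.0 and see whether it completes. A rough sketch (the host address is a placeholder; on recent OpenSSL builds you may need the security-level workaround shown, which is OpenSSL-specific):

    import socket
    import ssl

    def accepts_tls10(host, port=443):
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
        ctx.minimum_version = ssl.TLSVersion.TLSv1
        ctx.maximum_version = ssl.TLSVersion.TLSv1  # cap handshake at TLS 1.0
        ctx.check_hostname = False  # BMC certificates are typically self-signed
        ctx.verify_mode = ssl.CERT_NONE
        try:
            # Recent OpenSSL refuses TLS 1.0 at its default security level
            ctx.set_ciphers("DEFAULT@SECLEVEL=0")
        except ssl.SSLError:
            pass
        try:
            with socket.create_connection((host, port), timeout=5) as sock:
                with ctx.wrap_socket(sock, server_hostname=host):
                    return True  # handshake completed at TLS 1.0
        except (ssl.SSLError, OSError):
            return False

    if __name__ == "__main__":
        host = "192.0.2.10"  # placeholder BMC address
        print(f"{host} accepts TLS 1.0: {accepts_tls10(host)}")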

IPMI Exploitation DIY

To demonstrate the relative ease in exploiting a vulnerable server via IPMI, we conducted the following attack exercise in our own network, simulating a typical cloud environment:

  1. Scan for IPMI interfaces: Using nmap, we scanned our network for IPMI controllers, which are easily discoverable by their unique use of port 623 (see the discovery sketch after this list).

  2. Vendor identification: Several servers were found. Identifying the vendor is easy: simply browse to the IP address and look at the displayed server brand logo. Programmatically, that can be implemented by parsing HTTP packets from the IPMI controller.

  3. Connecting to servers: This is the trickier part. Not all servers block repeated login attempts with wrong credentials, leaving room for brute-force and dictionary attacks. Many servers retain their default passwords, or use a single password for all servers, mostly due to administrator ignorance or laziness. And then there are vulnerabilities, which PrivateCore (and others) consider to be common.

  4. Attaching virtual media: Attaching virtual media is supported by every IPMI controller, as it is one of its key features. An attacker would need to convert their malicious payload into a bootable CD ISO image, which can then be deployed to the remote server via the IPMI user interface.

  5. Reset: We reset the server using the IPMI power management controls.

  6. Collecting memory: We used a small-footprint memory-"scraping" tool to remotely collect memory. These two tools (tool_1, tool_2) will do the trick.

  7. Collecting rewards: There is really no need to argue why a full memory dump is not something you want your attackers to have, but one that can be taken remotely is a whole new game. Hidden gems in memory, such as root passwords and cryptographic keys, are valuable treasure, allowing easy control without having to restart the server again (a defender-side sketch of this step follows the caveats below).

    [Screenshot: the string "password4" recovered from the memory dump]
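
For step 1, nmap is the easy route; the sketch below shows what the probe boils down to, an RMCP ASF "presence ping" datagram to UDP port 623. The packet layout follows the RMCP/ASF specification as I read it, and the address range is a placeholder, so treat this as illustrative rather than a reference implementation:

    import ipaddress
    import socket

    # RMCP ASF presence ping: RMCP header (version 6, reserved, sequence
    # 0xFF = no ACK, class ASF) + ASF message (IANA 4542, type 0x80 ping,
    # tag, reserved, zero data length).
    PING = bytes([0x06, 0x00, 0xFF, 0x06,
                  0x00, 0x00, 0x11, 0xBE,
                  0x80, 0x00, 0x00, 0x00])

    def sweep(cidr, timeout=1.0):
        # Serial send-and-wait, slow by design; a real scanner would
        # send asynchronously and collect replies in one pass.
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(timeout)
        for host in ipaddress.ip_network(cidr).hosts():
            sock.sendto(PING, (str(host), 623))
            try:
                data, (addr, _port) = sock.recvfrom(512)
                if data:  # a presence pong means a BMC answered
                    print(f"IPMI interface found: {addr}")
            except socket.timeout:
                pass

    if __name__ == "__main__":
        sweep("192.0.2.0/28")  # placeholder range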

Two caveats:

  1. Error-Correcting Code (ECC) memory: Many modern servers use ECC memory, which is zeroed out by the BIOS during boot. Conveniently, some BIOSes allow disabling that function (one out of three vendors in our mini-experiment, all very popular).

  2. BIOS patching: BIOS updates are also generally possible in case the BIOS doesn't support disabling the ECC scrub (although this obviously increases the complexity of the attack), or if the BIOS is password-locked.
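
As for what step 7's "rewards" look like in practice: once an attacker holds a raw memory image, extracting secrets is mostly pattern matching. Here is a defender-side sketch for checking what a dump of your own server would leak; the dump file name is an illustrative assumption, and the patterns are deliberately crude:

    import re
    import sys

    PATTERNS = {
        "PEM private key": re.compile(rb"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
        # crude match for shadow-style root entries cached in memory
        "root password hash": re.compile(rb"root:\$(?:1|5|6)\$[./0-9A-Za-z$]{8,}"),
    }

    def scan(path):
        with open(path, "rb") as f:
            data = f.read()  # stream in chunks for multi-gigabyte dumps
        for label, pattern in PATTERNS.items():
            for m in pattern.finditer(data):
                print(f"{label} at offset {m.start():#x}")

    if __name__ == "__main__":
        scan(sys.argv[1] if len(sys.argv) > 1 else "memdump.bin")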

Conclusion

When running in the cloud, one must trust the cloud provider to maintain good security practices, ensure strong isolation between tenants, and continuously update its software and security mechanisms. As cloud infrastructure scales, the potential attack surface grows, the risk of compromise increases, and control measures are not keeping pace. Large-scale architectures and complicated IT setups are a hidden treasure for attackers, who can easily find their way in, commonly through user-made configuration mistakes, reinforced by exploiting server vulnerabilities.

One of the better ways to avoid these issues is to change the game for attackers. Rather than building more walls (or, to use a more appropriate metaphor, continually fixing holes in the existing walls), apply a solution that inherently protects the data by attesting the environment and keeping all data (memory, storage, network traffic) encrypted at all times.

 
