Cloud Security

Gartner Research on IaaS Encryption: Protect your Keys

Todd Thiemann

Gartner analyst Joerg Fritsch published a new report last week titled “Enabling High-Risk Services in the Public Cloud With IaaS Encryption”. It offers juicy insights into the ins and outs of Infrastructure-as-a-Service (IaaS) encryption, examines the trade-offs between data confidentiality and reliability, and includes a nice comparison table of vendor options.  And I am delighted that the research includes a PrivateCore vCage mention!  PrivateCore is the only significant new defensive technology mentioned alongside traditional technologies from legacy vendors.

A point that Joerg highlights in a blog post announcing the report is, “Parts of the confidential data must always be in cleartext in RAM – even the necessary encryption keys!”  Even if an enterprise uses encryption in the IaaS cloud and controls the keys itself, at the end of the day those keys must be in cleartext in memory for processing.  A bad guy (outside hacker, malicious insider, etc.) can grab the memory and parse its contents to recover encryption keys and decrypt the data. Likewise, your favorite government agency (FBI, etc.) can present a national security letter requesting the encrypted data and a memory snapshot, parse the memory to extract the encryption keys, and decrypt the encrypted data-at-rest.  This is where PrivateCore can help by encrypting memory.
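To make the risk concrete, here is a minimal, hypothetical sketch of how an attacker who has captured a memory image might hunt for key material in it. The dump file name and the simple PEM-marker search are assumptions for illustration only; real memory-scraping tools reconstruct key structures far more aggressively.

```python
# Hypothetical sketch: search a captured memory image for PEM-encoded private keys.
MARKER_BEGIN = b"-----BEGIN RSA PRIVATE KEY-----"
MARKER_END = b"-----END RSA PRIVATE KEY-----"

def find_pem_keys(dump_path):
    """Yield (offset, blob) pairs that look like PEM private keys in the memory image."""
    with open(dump_path, "rb") as f:
        data = f.read()
    start = 0
    while True:
        begin = data.find(MARKER_BEGIN, start)
        if begin == -1:
            break
        end = data.find(MARKER_END, begin)
        if end == -1:
            break
        end += len(MARKER_END)
        yield begin, data[begin:end]
        start = end

if __name__ == "__main__":
    # "server-memory.dump" is a placeholder path for a captured memory image.
    for offset, blob in find_pem_keys("server-memory.dump"):
        print(f"possible private key at offset {offset:#x} ({len(blob)} bytes)")
```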

The public cloud has some compelling advantages in speed and deployment, but enterprises need to grapple with the resulting data security issues explained in the Gartner research.  If you want to use the cloud with some comfort that CSP insiders, hackers, or lawful outsiders cannot grab your memory to view cleartext, it is time for you to consider vCage Host.

OpenStack Summit May 2014 – Security Insights

Oded Horovitz

What an exciting event! This was my first time participating in the OpenStack Summit series, and the May 2014 summit was held in hot and rainy Atlanta, GA.

The event left me with a sense of being part of something big, and a strong desire to participate in the upcoming event (and not just because of the Paris location). As you entered the event, you could see the sponsor wall proudly presenting PrivateCore among many great OpenStack companies.

The show floor was very busy, and the casual dress code suggested this was going to be a fun event, where I would get my fair share of geeking-out time. As you can read below, I wasn’t disappointed.

OpenStack is a growing force, as indicated by the biannual user survey, which tracks Dev/QA, PoC, and production deployment stages independently. Thank you, OpenStack community, for some great information!

Let’s Talk Security

Being a founder of a security company, I have a slight security bias, and the first two days offered a wealth of security-related talks. Below are some notes that I thought might be interesting to PrivateCore blog readers.

Russell Haering’s talk on Multi-Tenant Bare Metal Provisioning with Ironic triggered a set of questions around firmware security. The problem presented by several attendees was the following: how can one detect or prevent a bare-metal tenant’s attempt to reflash the BIOS firmware or any other IO-device firmware?  My best recommendation for detecting firmware updates that will run on the main CPU is to take advantage of the Trusted Platform Module (TPM) chip on your servers to validate the firmware before any sensitive data touches the server; our vCage Manager can be of help here. As for IO-device firmware, unfortunately, the answer is not as simple; my design assumption is to treat these IO devices as malicious and build your stack to defend against them.
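As a rough illustration of what that validation can look like, here is a minimal sketch that reads the TPM’s Platform Configuration Registers (PCRs) and compares them against a known-good whitelist. It assumes the open-source tpm2-tools utilities are installed and that you have already recorded golden values from a trusted firmware build; it is only a sketch of the general approach, not how vCage Manager itself works.

```python
import subprocess

# Placeholder "golden" SHA-256 PCR values recorded from a trusted firmware build.
GOLDEN_PCRS = {0: "a3f1...", 1: "9c42...", 2: "77be..."}  # not real digests

def read_pcrs():
    """Read the SHA-256 PCR bank using tpm2-tools (requires tpm2_pcrread on the host)."""
    out = subprocess.run(["tpm2_pcrread", "sha256"],
                         capture_output=True, text=True, check=True).stdout
    pcrs = {}
    for line in out.splitlines():
        line = line.strip()
        if ":" in line and line.split(":", 1)[0].strip().isdigit():
            index, value = line.split(":", 1)
            pcrs[int(index)] = value.strip().lower().removeprefix("0x")
    return pcrs

def firmware_looks_trusted():
    """True only if every whitelisted PCR matches its recorded golden value."""
    measured = read_pcrs()
    return all(measured.get(i) == v for i, v in GOLDEN_PCRS.items())

if __name__ == "__main__":
    print("firmware trusted:", firmware_looks_trusted())
```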

Next was Bryan D. Payne’s talk on security for private OpenStack clouds. The talk was more of an open discussion with OpenStack operators than a presentation, providing the opportunity to hear from the community about their best practices. What caught my attention was a comment from one of the security operators at Yahoo. His claim (if I understood correctly) was that they assume every guest VM will be compromised. So far, no big news. Then he added that they assume compromised guest VMs will successfully escape to the hypervisor. Now that is a bold statement. Later he explained to me that, through Nova message signing, even compromised hypervisors do not have much of a say over the control plane. Unfortunately, our conversation was interrupted and I was left without understanding the full architecture; I hope to catch up with him back in the Bay Area.
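For readers unfamiliar with the idea, the sketch below shows message signing in its most generic form: the control plane accepts only messages carrying a valid HMAC over a key the hypervisor never holds, so even a fully compromised host cannot forge control-plane commands. This is a conceptual illustration only, not Nova’s actual signing implementation.

```python
import hashlib
import hmac

# Example shared secret held by the control plane and the trusted message producer,
# but never by the (potentially compromised) hypervisor.
CONTROL_PLANE_KEY = b"example-secret-key"

def sign_message(payload: bytes) -> bytes:
    """Produce an HMAC-SHA256 tag over a control-plane message."""
    return hmac.new(CONTROL_PLANE_KEY, payload, hashlib.sha256).digest()

def verify_message(payload: bytes, tag: bytes) -> bool:
    """Accept a message only if its tag matches (constant-time comparison)."""
    return hmac.compare_digest(sign_message(payload), tag)

if __name__ == "__main__":
    msg = b'{"action": "migrate", "instance": "vm-42"}'
    tag = sign_message(msg)
    print(verify_message(msg, tag))                      # True: legitimate message
    print(verify_message(b'{"action": "delete"}', tag))  # False: forged message rejected
```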

While walking the expo floor I had a chance encounter at the demo theater with an interesting technology from HGST. HGST is working on an open architecture that turns a hard disk into a Linux server: the disk has a dedicated CPU, memory and Ethernet port, runs Linux, and allows applications such as a distributed file system to run directly on the disk, saving CPU cycles and all the related trips across the server bus. My interest in this advancement relates to the possibility of turning it into a “hardware implant for script kiddies”. In my blog earlier this year, I touched on a leaked NSA software implant called IRATEMONK – a firmware implant affecting hard-disk controllers from many vendors and allowing stealthy MBR code injection. With the new work from HGST, anyone capable of writing a Linux application will likely be able to do the same. Technology innovation frequently happens without considering the security implications.

The Marketplace

As sponsors of the event we had a space to present our warez, and we had many lively discussions with the summit crowd. To my pleasant surprise, most attendees we spoke with understood TPMs, Intel Trusted Execution Technology (Intel TXT) and general Trusted Computing concepts. This led to lots of deep discussions about implementing the technology in their environments – the OpenStack crowd understood the value of the system integrity controls that PrivateCore brings to OpenStack.

Peek into PrivateCore roadmap

If you had a chance to join Keith Basil’s TripleO talk, you may have noticed the slide showcasing PrivateCore’s technology integration into OpenStack on OpenStack (TripleO). We have not publicly shared details of the integration, but if you are interested in learning how trusted computing plays directly into cloud deployment and management, please get in contact with us for a preview.

See you all at November’s OpenStack Summit in Paris!

Gartner Report Illuminates Server Security

Gartner analysts Joerg Fritsch and Mario de Boer published a comprehensive report covering server security on 31 March 2014 titled “The Feasibility of Host-Based Controls and the Evolution of Server Security”.  This report (G00260437) is a tour de force on all aspects of physical and virtual server security – if you are in the business of securing enterprise server infrastructure, you should get ahold of it and spend some quality time digesting it.  This report is a great example of the value of a Gartner IT Pro service subscription.

Todd Thiemann

The report is holistic and touches on all aspects of server security, including anti-malware (AV), host-based intrusion prevention (IDS/IPS), application whitelisting, file integrity monitoring (FIM), privileged account monitoring and server integrity.

Something that we are proud of is the recognition given to PrivateCore vCage Manager as a leading solution for bootstrapping trust in private and public clouds.  As Gartner states in the report, “…bootstrapped trust comes in with a very moderate price tag, or it could even be a feature of products that are already deployed in the local data center, such as the HyTrust appliance, PrivateCore vCage Manager or OpenStack.”

Reading between the lines, I suspect the recent news regarding NSA’s Tailored Access Operations (TAO) unit is motivating more focus on system integrity.  As Oded pointed out in his January blog post, bad guys will eventually learn from the NSA TAO techniques for illicit gain.  The Gartner Server Security report lays out best practices in securing such systems.  As you look to implement such best practices described by Gartner, have a chat with us about maintaining Linux/OpenStack system integrity with PrivateCore vCage.

2014 Prediction: Smart Cyber Criminals Learn From NSA “Software Implants”

Happy New Year and welcome to 2014!  We are off to a rip-roaring start with news of the NSA’s exploit techniques. Following on Der Spiegel’s revelations about the US National Security Agency (NSA) Tailored Access Operations (TAO) group, the new year brought with it news of specific tools used by the NSA Advanced Network Technology (ANT) division detailed in the catalog of exploits described by Der Spiegel and Wired.

Oded Horovitz

While there is not much enterprises can do to counter the NSA going after a specific target (if they want your sensitive data, they will find a way to get it), the more worrisome issue is the criminal community digesting the news and learning from the masters of system penetration.  You can expect that techniques described in the NSA ANT catalog will soon be used by the hacker community to create similar exploits.   

As mentioned in Todd’s earlier blog post, the NSA technologists have designed their exploits for persistence and use the system BIOS as a launching pad.  These bootkits (referred to as “software implants” in the NSA catalog) are the first thing to load when a system starts and can lock themselves into a privileged background process called “System Management Mode” (SMM) from which they can passively inspect data, or actively inject payloads into the running operating system or hypervisor. Some examples of the NSA persistent software implant approach include:

DEITYBOUNCE (highlighted in Bruce Schneier’s blog) and IRONCHEF (also highlighted in Bruce Schneier’s blog) exploit the x86 server BIOS and utilize SMM to drop their payloads.

IRATEMONK infects the firmware on a common HDD controller, and performs a Man-in-the-Middle (MITM) attack to inject code into the Master-Boot-Record (MBR) of the system on the fly at boot time.

I founded PrivateCore knowing that these sorts of weaknesses existed in today’s computing infrastructure, and anticipating that hackers would take advantage of these weaknesses to gain data access and system control. Now that the NSA catalog is out in the open, we have evidence that these weaknesses are indeed being exploited in the wild.

PrivateCore vCage counters all of the BIOS threats to servers described in the NSA catalog.  Why can I make such a broad claim?  We protect servers with foundational technology: validating the integrity of x86 servers with remote attestation to counter BIOS infections trying to fly under the radar. We follow the motto of “verify, then trust” when it comes to server integrity. Infected BIOS? Infected MBR? We’ve got our eyes on you! This video describes how PrivateCore vCage does this in an OpenStack environment.

The NSA ANT catalog is dated 2008, so why have we never heard about a breach using these exploits? If I had to guess, the NSA has been very diligent in using these tools in a pinpoint fashion to go after specific targets. Criminals, on the other hand, will not be as discriminating or precise, and you should expect more widespread use of these techniques.

While techniques described in the NSA ANT catalog were previously in the realm of well-funded state actors, you can expect them to come to a server near you as they become commonplace tools of criminal actors. Verifying (rather than taking for granted) the integrity of your compute infrastructure and having measures in place to counter these sorts of persistent threats will enable you to have a better night’s sleep in 2014.   

The Tao of NSA, Persistent Threats and 2014

As 2013 comes to a close, news from Germany’s Spiegel Online that the NSA Tailored Access Operations (TAO) unit created a toolbox of exploits to compromise systems caught my attention.  Todd’s prediction: this news is a harbinger of infosecurity risks making headlines in 2014 as bad guys learn from the extremely talented NSA.  

Todd Thiemann

The news generated by Mr. Snowden’s disclosures has put data privacy in the headlines.  What was different about the Der Spiegel article highlighting the TAO was not only the breadth of the exploits, but also their depth and sophistication.

The sophisticated exploits highlighted in the Spiegel piece were designed for persistence.  These are advanced persistent threats (APTs) – once you are in, you stay in.  As the article highlights, “the [NSA] ANT developers have a clear preference for planting their malicious code in so-called BIOS, software located on a computer’s motherboard that is the first thing to load when a computer is turned on.”

Modifying the BIOS bypasses traditional security layers such as antivirus software. Mitigating threats using such attack vectors requires an additional layer of security to attest the validity of the host system, harden systems against compromise, and secure the underlying data-in-use (as well as data-at-rest and data-in-transit).  This is bad news for enterprises and service providers who need to protect their server infrastructure, but the good news is that there are solutions to shut down this attack vector, notably PrivateCore vCage (my shameless product plug for this post).

The Spiegel news dovetails with a 2014 cybersecurity prognostication from IT risk and governance auditor Coalfire: “There will be a significant security breach at a cloud service provider that causes a major outage.”  Reading the Spiegel Online article, the “security breach” part might have already happened. Buckle your seatbelts and enjoy 2014.

IPMI and New Challenges in Cloud Server Security

Intelligent Platform Management Interface (IPMI) controllers ship on practically every x86 server, and any large IT monoculture provides an attractive target for bad guys.  While offering increased manageability for cloud servers, the abundance of IPMI controllers in cloud environments poses new threats for cloud users, spanning from remote, over-the-web exploitation to local network attacks from cohabiting cloud tenants.

Alon Nafta

Enabled and connected by default on many systems, IPMI controllers expose unaware users to threats orthogonal to the ones they currently protect against, which mostly relate to malware and web-based attacks.

Often overlooked by administrators, proper IPMI tenant isolation is a key step in protecting against IPMI-based attacks. We were able to demonstrate the use of a low-footprint memory-scraping tool to collect root passwords, keys and other valuable data from the memory of remote servers, using an easy-to-accomplish attack sequence.

Background

IPMI recently made news headlines following two notable and impressive pieces of security research: the first by Dan Farmer in January, followed by HD Moore in July. A month ago, Rapid7 disclosed software vulnerabilities in Supermicro server firmware. The combination of these results makes the case for practical remote server exploitation, found to affect approximately 35,000 servers and potentially many more.

IPMI 101

IPMI provides on-board hardware and software that allow remote command-and-control communication with servers. IPMI is implemented in most x86 servers and apparently enabled by default in many of them. Technically, it is handled by an on-board Baseboard Management Controller (BMC). IPMI provides functionality that would otherwise require physical presence – display, keyboard and mouse, virtual media, and power management – even when the machine is shut down. To grasp what can be done via IPMI, simply imagine that the attacker is standing next to your machine with fingers on the console keyboard.
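To get a feel for that level of control, the sketch below drives a BMC over the network with the standard ipmitool client, querying (and optionally cycling) server power remotely. The BMC address and credentials are placeholders; point this only at hardware you own.

```python
import subprocess

# Placeholder BMC address and credentials -- illustration only.
BMC_HOST = "192.0.2.10"
BMC_USER = "admin"
BMC_PASS = "changeme"

def ipmi(*args):
    """Run an ipmitool command against the BMC over the lanplus (RMCP+) interface."""
    cmd = ["ipmitool", "-I", "lanplus", "-H", BMC_HOST,
           "-U", BMC_USER, "-P", BMC_PASS, *args]
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout.strip()

if __name__ == "__main__":
    print(ipmi("chassis", "power", "status"))   # e.g. "Chassis Power is on"
    # print(ipmi("chassis", "power", "cycle"))  # power-cycle the server with no OS involvement
```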

Several unique features of IPMI are noteworthy in the context of threat assessment:

  • On many systems, IPMI communication shares a single Ethernet port with the host. That means that if IPMI is enabled (through the BIOS), it is exposed to the same network the server is functionally using. To be fair, the IPMI BMC would have to be assigned an IP address, but that is usually taken care of automatically by DHCP, which is present in most networks. HD Moore and Rapid7 were able to discover 35,000 exposed IPMI interfaces from Supermicro servers alone. Clearly, the potential for widespread damage is huge.
  • There is no open-source implementation of IPMI BMC firmware – every vendor makes its own, closed-source implementation. That leaves room for many potential bugs, and very few (responsible) eyes searching for them.
  • Similarly to BIOS updates, IPMI firmware updates are hard to manage and are only in the early stages of being recognized as a viable threat by the compliance and pen-testing communities. While BIOS and network-equipment firmware updates are covered by compliance standards (e.g., PCI DSS) and pen-testing routines, BMC firmware updates are commonly overlooked. Interestingly enough, most of this BMC firmware also comes from Chinese vendors.

Other bad security practices – unchanged default passwords, the need for Java to communicate with IPMI controllers, and weak encryption schemes – are equally worrisome. PrivateCore identified many vendors using TLS 1.0 with 128-bit RC4-based encryption, widely considered broken, for connections to the IPMI HTTPS server. The HTTP server also uses untrusted certificates, making the case for easy Man-in-the-Middle (MITM) attacks that exploit administrators running unpatched Java clients.
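A quick way to audit your own BMCs is to open a TLS connection to the management web interface and report the negotiated protocol and cipher, as in the hedged sketch below. The host is a placeholder, certificate verification is disabled deliberately because these interfaces typically present self-signed certificates, and whether a legacy protocol can even be negotiated depends on your local OpenSSL build.

```python
import socket
import ssl

# Placeholder BMC web interface -- illustration only.
BMC_HOST = "192.0.2.10"
BMC_PORT = 443

def report_tls(host, port):
    """Connect to the BMC's HTTPS interface and report the negotiated protocol and cipher."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE             # BMCs commonly ship self-signed certificates
    ctx.minimum_version = ssl.TLSVersion.TLSv1  # allow old protocols so we can observe them
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            name, version, bits = tls.cipher()
            print(f"{host}: protocol={tls.version()} cipher={name} ({bits}-bit)")

if __name__ == "__main__":
    report_tls(BMC_HOST, BMC_PORT)
```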

IPMI Exploitation DIY

To demonstrate the relative ease of exploiting a vulnerable server via IPMI, we conducted the following attack exercise in our own network, simulating a typical cloud environment:

  1. Scan for IPMI interfaces: Using nmap, we scanned our network for IPMI controllers, which are easily discoverable by their unique use of port 623 (a minimal discovery sketch appears after this list).

  2. Vendor identification: Several servers were found. Identifying the vendor is easy: simply browse to the controller’s IP address and look at the displayed server brand logo. Programmatically, that can be implemented by parsing HTTP responses from the IPMI controller.

  3. Connecting to servers: This is the trickier part. Not all servers block login attempts after wrong credentials, leaving room for brute-force and dictionary attacks. Many servers retain their default passwords, or use a single password for all servers, mostly due to administrator ignorance or laziness. And then there are vulnerabilities, which PrivateCore (and others) consider to be common.

  4. Attaching virtual media: Attaching virtual media is supported by every IPMI controller, as it is one of its key features. An attacker would need to convert their malicious payload into a bootable CD ISO image, which can then be deployed to the remote server via the IPMI user interface.

  5. Reset: We reset the server using the IPMI power management controls.

  6. Collecting memory: We used a small-footprint memory “scraping” tool to remotely collect memory. These two tools – tool_1, tool_2 – will do the trick.

  7. Collecting rewards: There is really no need to argue why a full memory dump is not something you want your attackers to have, but one that can be taken remotely is a whole new game. Hidden gems in memory, such as root passwords and cryptographic keys, are valuable treasure, allowing easy control without having to restart the server again.

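For step 1, the hedged sketch below shows one way to probe for IPMI controllers without nmap: it sends the standard RMCP/ASF presence ping to UDP port 623 and reports hosts that answer. The address range is a placeholder; scan only networks you are authorized to test.

```python
import ipaddress
import socket

# RMCP/ASF "presence ping" datagram: RMCP header (version 0x06, reserved, sequence 0xff,
# class 0x06 = ASF) followed by an ASF message with the ASF IANA number 4542 (0x000011be)
# and message type 0x80 (presence ping).
RMCP_PING = bytes.fromhex("0600ff06000011be80000000")
IPMI_PORT = 623

def find_ipmi_hosts(cidr, timeout=1.0):
    """Yield addresses in the CIDR block whose BMCs answer the RMCP presence ping."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        for host in ipaddress.ip_network(cidr).hosts():
            try:
                sock.sendto(RMCP_PING, (str(host), IPMI_PORT))
                data, _ = sock.recvfrom(512)
                if data:
                    yield str(host)
            except socket.timeout:
                continue

if __name__ == "__main__":
    # Placeholder range -- only scan networks you are authorized to test.
    for bmc in find_ipmi_hosts("192.0.2.0/28"):
        print("possible IPMI interface:", bmc)
```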

Two caveats:

  1. Error-Correcting Code (ECC) memory: Many modern servers use ECC memory, which is zeroed out by the BIOS during boot. Conveniently, some BIOSes allow that function to be disabled (one out of three vendors in our mini-experiment, all VERY popular).

  2. BIOS patching: BIOS updates are also generally possible in case the BIOS doesn’t support disabling ECC scrubbing (although this obviously increases the complexity of the attack), or if the BIOS is password-locked.

Conclusion

When running in the cloud, one must trust the cloud provider to maintain good security practices, ensure strong isolation between tenants, and continuously update their software and security mechanisms. As cloud infrastructure scales, the potential attack surface grows, the risk of compromise increases, and control measures are not keeping pace. Large-scale architectures and complicated IT environments are a hidden treasure for attackers, who can easily find their way in, commonly through user-made configuration mistakes reinforced by exploitable server vulnerabilities.

One of the better ways to avoid these issues is to change the game for attackers. Rather than building more walls (or, to use a more appropriate metaphor, continually fixing the holes in the existing walls), apply a solution that inherently protects the data by attesting the environment and keeping all data (memory, storage, network traffic) encrypted. At all times.

The Fallacy of Shared Responsibility in the Cloud

Sharing is usually considered to be a positive attribute – parents teach children to share, and we are moving into a “sharing economy” with services like Zipcar and Airbnb. For most businesses and the security of their sensitive data, however, sharing is a threat. In fact, numerous laws have been created to curb or manage sharing, including copyright provisions designed to protect music, books, software and more. Cloud security is no exception. For businesses, sharing responsibility for the security of their data with a cloud service provider can lead to unpleasant consequences and finger-pointing. For years, standards bodies like the PCI Council and leading cloud providers like Amazon Web Services and Microsoft Azure have fostered the perception that sharing responsibility for security in the cloud with infrastructure-as-a-service (IaaS) providers is the best approach. Times have changed; this is no longer the case.

Todd Thiemann

What is the downside of shared responsibility in the cloud?  The enterprise has ultimate accountability for the security of its data, yet must share the responsibility for data security with the Cloud Service Provider (CSP).  Put another way, shared responsibility means shared access to your sensitive data.  You share responsibility for the security of the overall environment, but implicit in that relationship is that your CSP can access your data.  You might not like it, but the shared responsibility model forces you to trust the CSP and face the consequences when the CSP falls short.  Amplifying these consequences for the enterprise are CSP terms of service that are typically one-sided and hand the aftermath of breached data to the enterprise customer.  Consequences can include fines, reputational damage, and lost competitive advantage – items that would not be covered by a CSP refunding your payment.  The shared responsibility model also requires elaborate and time-consuming legal contracts so the obligations of the CSP and the enterprise are understood.  While shared responsibility can be mitigated in a Software-as-a-Service (SaaS) model where the SaaS vendor is fully accountable for data loss, it does not make sense in the Infrastructure-as-a-Service (IaaS) world where IaaS vendors (Amazon EC2, etc.) significantly limit their responsibility for security.

While the CSP needs to provide their service with sufficient security to satisfy customers, the CSP is usually not the one holding the bag when something goes wrong.  Interest in cloud encryption has grown as enterprises wrestle with securing their data at the CSP.  Enterprises understand the need to secure their data while at rest and while in transit by holding the encryption keys themselves. However, the shared responsibility model circumvents at-rest and in-transit encryption; the cloud service provider can access enterprise data-in-use while the cloud server runs in the CSP datacenter.  Data-in-use, or memory, contains secrets including encryption keys, digital certificates, and sensitive information such as intellectual property.  Accessing data-in-use leaves the door open to lawful or unlawful interception of any data on the server.  Sensitive data can be encrypted at rest or in motion, but it is “in the clear” and available to the CSP while in use.

What if a new technology allowed you to have control and visibility into the security of cloud servers without ever having to set foot in a cloud data center?  PrivateCore does just that, allowing the enterprise to take complete ownership of data security rather than relying on the CSP.  This approach also permits the CSP to focus on their core competencies and reduce liabilities.  PrivateCore vCage provides a secure foundation, ensuring that nobody at the CSP can access or manipulate your data without your consent.  Deploying vCage as a foundation of trust for your IaaS security enables you to avoid lengthy security negotiations because you control the security of your server and its data.

PrivateCore vCage secures server data-in-use with full memory encryption.  Data-in-use can contain valuable information such as encryption keys for data-at-rest, certificates, intellectual property, and personally identifiable information.  Accessing data-in-use provides a pathway to decrypt data-at-rest and data-in-motion.  Compromising data-in-use, be it through a malicious insider or lawful request, leaves a system open and available.

While security measures such as data-at-rest and data-in-motion encryption are necessary, they are insufficient if the foundation has a crack that allows information to be siphoned off.  PrivateCore vCage changes the game, obviating the need for “shared responsibility” by providing a foundation of trust in the cloud so you can take control of the security of your data in the cloud.