The Dangers of IPMI

IPMI has become a fundamental component of data center automation. It’s how we tell boxes to boot, reboot, or boot from PXE. It’s how we avoid having to send people into the forbidding depths of our data centers. If you’ve ever automated the physical provisioning of a server, you’ve dealt with IPMI or something very much like it.

The fundamental problem with IPMI is that it’s pretty heavily integrated with the hardware and operating systems it interacts with.

The further problem is that IPMI is about as hardened as a basket of kittens.

The net result is that IPMI tends to leak sensitive data like a sieve, and tends to have the security posture of a door left open with a neon sign outside saying ‘come on in and take anything you think you may enjoy.’

In short, IPMI is basically a root shell connected to your network, and it must be treated as such. If anyone has access to an IPMI interface, they have root on that box. That’s how it must be viewed from architectural and policy viewpoints, because that’s how it actually is.
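To make the exposure concrete: BMCs announce themselves to anyone who asks. Here is a minimal sketch of the RMCP/ASF “Presence Ping” (the same packet tools like ipmiping send) that lets anyone on the network enumerate IPMI endpoints, no credentials required. The packet layout follows the ASF 2.0 RMCP specification as I understand it; treat this as an illustration, not a scanner.

```python
import socket
import struct

# RMCP "Presence Ping" (ASF class). A BMC listening on UDP 623 that
# answers this is announcing itself to the whole network -- before any
# authentication has happened.
def build_presence_ping(tag: int = 0x00) -> bytes:
    # RMCP header: version 0x06, reserved, sequence 0xFF (no ack), class 0x06 (ASF)
    rmcp_header = struct.pack("BBBB", 0x06, 0x00, 0xFF, 0x06)
    # ASF body: IANA enterprise number 4542, message type 0x80 (Presence Ping),
    # message tag, reserved, data length 0
    asf_body = struct.pack(">IBBBB", 0x000011BE, 0x80, tag, 0x00, 0x00)
    return rmcp_header + asf_body

def probe_bmc(host: str, timeout: float = 1.0) -> bool:
    """Return True if host answers an ASF Presence Ping on UDP 623."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    try:
        sock.sendto(build_presence_ping(), (host, 623))
        sock.recvfrom(1024)
        return True
    except socket.timeout:
        return False
    finally:
        sock.close()
```

If that twelve-byte packet gets an answer from a host, you’ve found a management controller, and probably a path to that root shell.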

And here’s someone else laying it out in dazzling technicolor.

Why IPMI is a road to hell.

Now, I’m not saying you shouldn’t use IPMI. What I am saying is you need to understand that handling IPMI is kind of like handling dynamite. If you don’t store it properly and handle it cautiously and with forethought, it may very well blow up in your face.


Def Con wrap up!

I don’t know who else from the OpenStack community was in attendance at the recent Black Hat / Def Con festivities in Las Vegas. I certainly didn’t run into very many folks I know from that community. I did run into folks I know from the Amazon community, as well as other cloud arenas.

The feeling I got from my discussions with deployers, and with infosec professionals who are increasingly operating in automation environments or environments at scale, is that the primary goal right now in the industry is getting a handle on integrity.

Deployers, and their security teams, want to know if their resources have been compromised. The great risk of putting things into an automation framework at scale is that it becomes impossible for human minds to track the finite state of your resources. Now we enter a new age, one in which adaptive analysis and heuristic analysis will be the norm rather than the cutting edge. In this arena OpenStack is currently behind the curve. On the plus side, so are all of its competitors.

There are many layers of OpenStack, and of the virt stacks of the world generally, at which we can target integrity verification. And I had more than a few discussions about novel vectors from which to draw integrity values while at Def Con. I hope to be able to write more about that soon, though I will have to check with folks to make sure I am not stepping on toes before I do.

Also read this wonderful literature and be better people: International Journal of PoC 0x00


JoCCASA: Trust mechanisms for cloud computing

I occasionally read the Journal of Cloud Computing from SpringerOpen, today I caught up on this work:

Trust mechanisms for cloud computing by Jingwei Huang and David M. Nicol

Basically what’s being introduced here is a discussion of various trust mechanisms, or perhaps trust vectors, as the authors look toward integrating many mechanisms to build a trust profile for cloud resources.

An interesting application of past ideas to a new technological landscape. And probably worth playing with at some point in the future. Trying to build trust inference algorithms for Ceilometer could allow OpenStack implementers to identify instances for greater scrutiny and possibly identify compromised or malfunctioning components of their infrastructure earlier.
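To sketch what that might look like in practice: fold multiple trust vectors into a single weighted score per instance, and flag anything below a threshold for scrutiny. The vector names, weights, and threshold below are entirely my own invention for illustration, not anything from the paper or from Ceilometer.

```python
# Toy trust-profile sketch: combine several trust vectors (each scored
# 0.0-1.0) into one weighted trust score. Vector names and weights are
# invented for illustration.
def trust_score(vectors: dict, weights: dict) -> float:
    """Weighted mean of trust vectors, each expected to be in [0.0, 1.0]."""
    total = sum(weights.values())
    return sum(vectors[name] * w for name, w in weights.items()) / total

instance = {"attestation": 0.9, "reputation": 0.7, "sla_history": 1.0}
weights = {"attestation": 0.5, "reputation": 0.2, "sla_history": 0.3}

score = trust_score(instance, weights)
# Instances scoring below some operator-chosen threshold get flagged
# for greater scrutiny.
flagged = score < 0.8
```

The interesting research problem is of course where those vector values come from, which is exactly what the paper surveys.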


What’s the story on the Vulnerability API?

I’ll have more on my ongoing vulnerability API work very shortly.

I have test code starting to show up here: GitHub

What I am envisioning is something akin to an opt-in emergency broadcast system for the OpenStack community. In addition, by standardizing emergency alerting APIs, we can standardize any private use of a similar workflow.
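As a rough illustration of what a broadcast message might carry, here is a hypothetical alert payload. Every field name here is a guess of mine at what such a message could look like, not a spec, and the project name “Chips” is just the working title mentioned below.

```python
import json

# Hypothetical emergency-broadcast alert payload. Field names are
# illustrative guesses, not a defined schema.
def make_alert(alert_id, severity, summary, affected, references):
    allowed = {"low", "medium", "high", "critical"}
    if severity not in allowed:
        raise ValueError("severity must be one of %s" % sorted(allowed))
    return {
        "alert_id": alert_id,
        "severity": severity,
        "summary": summary,
        "affected_projects": affected,
        "references": references,
    }

alert = make_alert(
    "CHIPS-2013-0001", "critical",
    "VNC proxy can connect to the wrong VM",
    ["nova"], ["CVE-2013-0335", "OSSA 2013-006"],
)
wire = json.dumps(alert)  # what would actually go out over the channel
```

The value of standardizing something this small is that private deployments could reuse the same message shape for their own internal alerting.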

I am calling the project Chips for now. I will propose the full work and what I see as the problems and solutions addressing them as an Infrastructure Blueprint for the Icehouse summit.

More to come later with design docs, explanations and better testing code.

Just wanted to let you know that I am still alive and there’s a lot more on the horizon.


OpenStack Security Guide (OFFICIAL) Released

This is a little behind the curve, but several OSSG members and developers met up in Atlanta, Georgia to do a book sprint on a security hardening guide for OpenStack. I’ve read through it, and it’s a pretty solid read from an operations and implementation perspective, with plenty of cookbook-like examples. Definitely read through it.


OpenStack Common Vulnerability Database

As per my last post, I am starting to work on building an OpenStack common vulnerability database. As for the justifications, read here.

This post will discuss some of my proposed architecture.

So this is what I envision the final process workflow will probably resemble:

We can make use of Oslo components in several areas of the architecture, and we can additionally make use of the new requirements project; that would certainly be the goal.

The next post I make will contain a rough draft of a proposed schema design for the database.



SecStack in Havana


I am just back from Summit, where I ran into a bunch of wonderful folks both new and old. OpenStack is getting awesome, and the security front is about to get REALLY interesting. HP, Nebula, and a few others are showing serious interest in building out continuous integration methodology to help protect the OpenStack community. I’ll of course be doing my part.

And that’s where the flowchart comes in.

The image above is a variant of a traditional approach to a security audit of an application, basically modified for use in the OpenStack world. I’ll probably do some more work on this modeling as the year progresses and I get input from the OSSG and VMT.

Anyway, you can see the need for compiling an understanding of currently known vulnerabilities and risks. This site kind of fills that gap, but to be blunt, it’s not good at it. When I was doing some data analysis for the summit, I realized that there was no single authoritative source for OpenStack threat resources. The OSSAs are only released via e-mail, the e-mails themselves are not in a parseable format, and their format has changed over time. In addition, CVEs don’t always have corresponding OSSAs, and vice versa. There is currently very little in the way of decent attribute data for vulnerabilities that stem from secondary dependencies, and entire dependencies go untracked as far as the OpenStack community is concerned.
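The OSSA/CVE cross-referencing gap described above is easy to state in code: given whatever OSSA-to-CVE links we can recover from the advisories, find identifiers with no counterpart on the other side. The sample data below is illustrative, not an audit of the real advisory set.

```python
# Sketch of the cross-referencing problem: find OSSAs with no CVE
# filed, and known CVEs never linked to an OSSA. Sample data is
# illustrative only.
def find_orphans(ossa_to_cve: dict, known_cves: set):
    linked_cves = {cve for cves in ossa_to_cve.values() for cve in cves}
    ossas_without_cve = sorted(o for o, cves in ossa_to_cve.items() if not cves)
    cves_without_ossa = sorted(known_cves - linked_cves)
    return ossas_without_cve, cves_without_ossa

ossa_to_cve = {
    "OSSA 2013-006": ["CVE-2013-0335"],
    "OSSA 2013-010": [],                  # hypothetical advisory with no CVE
}
known_cves = {"CVE-2013-0335", "CVE-2013-2096"}  # hypothetical CVE with no OSSA

no_cve, no_ossa = find_orphans(ossa_to_cve, known_cves)
```

A database that makes this query trivial, across releases and dependencies, is the whole point of the project.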

During the day job I have been helping with building out our OpenStack packaging toolchain. One of the things I need to do during a release cycle is verify we have a fully patched package set. In addition, we need to respond to OSSAs by patching many supported release package sets. As such, I have begun to build such a database. As the project progresses I will share the code and the data, and I’ll stand up an experimental API. Hopefully by the end of the Havana release cycle I’ll have something that we can integrate into the OpenStack development cycle.

So I will probably stop posting OSSA vulnerabilities here from now on. However, I will begin releasing a comprehensive JSON set as I get further into the investigation and cataloging of datasets. As part of that effort I will also be building tools to help me harvest that data.

Read on to the next section for further insight into this work effort.

As far as this site is concerned, I’ll continue to highlight security concerns in the OpenStack community, but I will do less copy-paste posting of OSSAs and CVEs. In fact, I have a couple of pending posts I need to finish writing for you guys.

In terms of security in OpenStack, Havana stands to be a real breakout release. I’m giddy with anticipation.

The OpenStack Vulnerability Database


A sample OpenStack CORE vulnerability data structure: vulndb.json
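The real vulndb.json is linked above; to make the discussion concrete, here is a hypothetical sketch of what a single entry might look like. The field names are my guesses, not the actual schema, though the values come from the OSSA 2013-006 advisory quoted elsewhere on this site.

```python
# Hypothetical vulndb entry -- field names are illustrative guesses at
# a schema; the values are taken from the public OSSA 2013-006 advisory.
entry = {
    "id": "OSSA 2013-006",
    "cves": ["CVE-2013-0335"],
    "title": "VNC proxy can connect to the wrong VM",
    "project": "nova",
    "affects": "all versions",
    "reported_by": [
        {"name": "Loganathan Parthipan", "affiliation": "HP"},
        {"name": "Rohit Karajgi", "affiliation": "NTT Data"},
    ],
    "date": "2013-02-26",
}
```

Structured entries like this are what make the CSV exports and renders below possible in the first place.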

I have exported a couple of CSVs from that: here

I have a sample d3.js render of one of the CSV files here: here (showing vulnerability reports by company)

This should whet your appetite for what I have planned.

The next steps are diagrammed here:

Basically, as you can see in the JSON I have already begun investigation and cataloging of datasets.

I am not happy with the schema I am currently working with. I’ll follow up this post with a schema breakout post as I get further along and solicit input from the community at large, AKA you and the OSSG.




The VMT is the Vulnerability Management Team.

This is a VERY small team of trusted contributors who receive new vulnerability reports. Currently there are three members of this team. New reports start out as embargoed. That is to say, if there isn’t a previously existing Launchpad bug that is already public, a new security-tagged bug will be created. Visibility of that bug will only be opened up to those folks who are brought in to help patch and fix the bug.

During the embargo period, some vendors’ trusted contacts are informed of the vulnerability and provided access to patch sets that address the issue; these are merged later, at the time of public disclosure.

Eventually the bug is disclosed publicly in two formats: OpenStack releases an authoritative OSSA and will file corresponding CVEs if they do not already exist.


The OSSG is the OpenStack Security Group.

While the VMT is a purely reactive security measure, the OSSG is a pro-active security measure.

The OSSG is a restricted team, but unlike the VMT it is not limited to a very small number of individuals; currently there are 42 members. However, the team is gated for relevancy.

The OSSG performs many functions. They will assist the VMT at their pleasure as well as assist in the issuance of OSNs.


An OSSA is an OpenStack Security Advisory. OSSAs are issued by the VMT and are intended to advise the OpenStack community of vulnerabilities as they occur and are addressed.


An OSN or OSSN is an OpenStack Security Note. Security Notes address security concerns that do not justify an OSSA, or extend an OSSA with further information to help deployers and developers avoid the potential security risks addressed therein.


CVE-2013-0335 : VNC proxy can connect to the wrong VM

OpenStack Security Advisory: 2013-006

CVE: CVE-2013-0335
Date: February 26, 2013
Title: VNC proxy can connect to the wrong VM
Reporter: Loganathan Parthipan (HP), Rohit Karajgi (NTT Data)
Products: Nova
Affects: All versions


Loganathan Parthipan (HP) and Rohit Karajgi (NTT Data) independently
reported a vulnerability in Nova. If a user requests a console and
then deletes the VM, it is possible that the console token could allow
connectivity to a different VM before the console token expires if the
VNC port gets reused in that time period. This issue can be worked
around by disabling VNC support.
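The advisory’s workaround of disabling VNC support amounts to a one-line change in nova’s configuration. As a sketch, on a Grizzly-era nova this would look something like the fragment below; verify the option name against the configuration reference for your actual release before relying on it.

```ini
# nova.conf -- disable VNC console support entirely, per the advisory's
# suggested workaround (option name from Grizzly-era nova; check your
# release's configuration reference)
[DEFAULT]
vnc_enabled = False
```

This obviously trades away console access for every tenant, so it’s a stopgap until the patched packages land.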




epydoc output on openstack

Started doing some analysis of OpenStack code sets. As part of that, I’ve done a couple of things thus far.
First, I have a gnuplot PPA with the latest CVS build packaged for Ubuntu Precise (12.04).

Also, and probably more helpfully, I hopped into an existing devstack VM I had and pulled some epydoc output for some of the chief components of OpenStack.

Check them out here: