OpenStack Security A blog created by members of the OpenStack Security Project to update readers on project progress, security issues, advisories and general security curiosities. https://openstack-security.github.io/ Fri, 08 Sep 2017 06:37:17 +0000 Fri, 08 Sep 2017 06:37:17 +0000 Jekyll v3.5.2 OpenStack Security Notes, and how they help you the Operator <p>In this post I will explain what OpenStack Security Notes are and how they benefit operators in securing an OpenStack cloud.</p> <p>OpenStack Security Notes (OSSNs) exist solely to notify operators of discovered risks, which are often not directly addressed by a code patch.</p> <p>OSSNs can take the form of a deployment architecture recommendation, a configuration value or a file permission.</p> <p>Consider the meme ‘If you do this, you’re going to have a bad time’ to get an idea of what OSSNs are about.</p> <p>Some examples of recent OSSNs are:</p> <ul> <li> <p><a href="https://wiki.openstack.org/wiki/OSSN/OSSN-0079">Ceph credentials included in logs using older versions of libvirt/qemu</a></p> </li> <li> <p><a href="https://wiki.openstack.org/wiki/OSSN/OSSN-0078">copy_from in Image Service API v1 allows network port scan</a></p> </li> <li> <p><a href="https://wiki.openstack.org/wiki/OSSN/OSSN-0076">Glance Image service v1 and v2 api image-create vulnerability</a></p> </li> </ul> <p>The end-to-end process of an OSSN starts when a member of the security project, a project core, or a VMT member marks a Launchpad bug by adding the ‘OpenStack Security Note’ group. An author will then assign themselves to the bug and commit to authoring the OSSN. 
Public notes may be worked on by anyone, whereas embargoed notes are handled only by the security project core members.</p> <p>Once the author has a draft in place, they will submit a patch to the <a href="https://review.openstack.org/#/admin/projects/openstack/security-doc">security-doc repo</a>, where other members of the security project and cores from the project concerned in the original Launchpad bug can review the note content.</p> <p>After the patch has received two +2 reviews from security project core members and a +1 from a core within the concerned project, the OSSN is merged into the security-doc repository.</p> <p>Once merged, the reviewed text will be posted to the <a href="https://wiki.openstack.org/wiki/OSSN">OpenStack Wiki</a>, and a GPG-signed email will be sent to the <a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack">openstack</a> &amp; <a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev">openstack-dev</a> mailing lists.</p> <p>The OpenStack Security Project welcomes anyone who wants to help author or review OSSNs. Authoring Security Notes is often a path to becoming a core member of the OpenStack Security Project; OSSN authorship was how I personally found myself elected almost two years ago.</p> <p>Anyone new to the security project who offers to help author a Security Note will be given lots of support from other Security Project members in creating their first OSSN.</p> <p>We also welcome feedback from operators on how valuable you find OSSNs, and on ways you feel the process could be improved. 
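</p>

<p>As a toy illustration (not actual Gerrit logic; the function and group names here are invented for the example), the review gate described above can be written down as a small predicate:</p>

```python
# Toy model of the OSSN review gate: an OSSN merges once it has two +2
# reviews from security project cores and a +1 from a core on the
# project the note concerns.
def ossn_ready_to_merge(reviews):
    """reviews: list of (group, score) tuples,
    e.g. ("security-core", 2) or ("project-core", 1)."""
    security_plus2 = sum(1 for group, score in reviews
                         if group == "security-core" and score == 2)
    project_plus1 = any(score >= 1 for group, score in reviews
                        if group == "project-core")
    return security_plus2 >= 2 and project_plus1
```

<p>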
After all, the process is there to benefit you the operator.</p> <p>For anyone with an interest in OpenStack Security, the OpenStack Security Project can be found on the IRC channel #openstack-security, and we meet weekly on #openstack-meeting-alt every Thursday at 17:00 UTC.</p> <p>You can also email the security project on the OpenStack developer mailing list by using a [security] tag in the subject line.</p> <p>Luke Hinds (Security Project PTL)</p> Fri, 08 Sep 2017 00:00:00 +0000 https://openstack-security.github.io/security-notes/2017/09/08/openstack-security-notes.html https://openstack-security.github.io/security-notes/2017/09/08/openstack-security-notes.html security notes vulnerabilities security-notes Syntribos team recap for Newton <p>Our team set out to accomplish several tasks during the Newton cycle:</p> <ul> <li>Improve <a href="https://github.com/openstack/syntribos">Syntribos</a> to the point that it was reliable and useful for testing the 6 projects <a href="https://osic.org/">OSIC</a> has chosen to focus on for Newton (<a href="https://github.com/openstack/keystone">keystone</a>, <a href="https://github.com/openstack/neutron">neutron</a>, <a href="https://github.com/openstack/glance">glance</a>, <a href="https://github.com/openstack/nova">nova</a>, <a href="https://github.com/openstack/cinder">cinder</a>, and <a href="https://github.com/openstack/swift">swift</a>)</li> <li>Test those 6 key projects and report our results to upstream developers</li> <li>Based on our results, determine future action items to further improve Syntribos</li> </ul> <p>We succeeded in making Syntribos more configurable, easier to use, and less prone to false positives. We released several new features, removed cruft from the codebase, added function/class comments, and wrote unit tests, all with the goal of making Syntribos a more effective tool for testers and easier to contribute to as a developer. 
We were able to test all 6 key projects and reported several bugs in their Launchpad trackers.</p> <p>We started off the cycle by making improvements aimed at easing further development. This included cleaning up the codebase, creating documentation with Sphinx, fixing bugs, writing unit tests, and adding docstrings, among other changes. We also removed our dependency on OpenCAFE at the request of some in the community, which took several weeks. This leaves us with a pretty small dependency base, which should make future maintenance / modification more manageable. Once we were more confident in the core codebase, we started focusing on how to improve the accuracy and depth of tests conducted by Syntribos. We used a special vulnerable API created by Matt Valdes from Rackspace to validate our improvements and ensure that our tests were detecting the issues we introduced into the API.</p> <p>We spent a significant amount of time creating templates and extensions for each project. This typically took at least 1 or 2 days per project, significantly reducing the amount of time we had left for testing. However, much of the legwork for testing these projects is now out of the way, and future testing should be significantly easier. 
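</p>

<p>To make the idea of request templates concrete, here is a toy sketch of the substitution step a fuzzing run performs. The dictionary-based template syntax is invented for illustration and is <em>not</em> Syntribos’s actual template format:</p>

```python
# Substitute known attack strings into a marked position of a request
# template, yielding one concrete request per string. "..%c0%af" is one
# of the strings that surfaced real 500 errors during this testing cycle.
TEMPLATE = {"method": "GET", "path": "/v2/images/{fuzz}"}
FUZZ_STRINGS = ["..%c0%af", "<script>alert(1)</script>", "A" * 1024]

def generate_requests(template, fuzz_strings):
    """Yield one concrete request dict per fuzz string."""
    for attack in fuzz_strings:
        yield {"method": template["method"],
               "path": template["path"].format(fuzz=attack)}

requests_to_send = list(generate_requests(TEMPLATE, FUZZ_STRINGS))
```

<p>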
Our team also came away more experienced with the OpenStack projects we tested and, in some cases, with security testing in general.</p> <p>Overall, we believe we were successful in meeting our goals for this cycle, though more work is required to get Syntribos ready for production use by others in the community.</p> <h2 id="our-key-accomplishments">Our Key Accomplishments</h2> <ul> <li>Worked to improve the Syntribos tool from April 5th through September 30th</li> <li>Participated in the OpenStack Security Project’s midcycle, where we wrote several OSSNs, contributed to the barbican threat analysis process, and discussed Syntribos with others in the community</li> <li>Tested 6 OSIC key projects from August 29th through September 30th</li> <li>Found 4 defects during our testing and filed them in the projects’ Launchpads</li> </ul> <h2 id="metrics">Metrics</h2> <h3 id="syntribos">Syntribos</h3> <ul> <li>Bugs reported in Launchpad: <strong>17</strong> [<a href="http://stackalytics.com/?release=newton&amp;project_type=openstack&amp;module=syntribos">3</a>]</li> <li>Bugs resolved: <strong>15</strong> [<a href="http://stackalytics.com/?release=newton&amp;project_type=openstack&amp;module=syntribos">3</a>]</li> <li>Unit test coverage at start: <strong>9%</strong></li> <li>Unit test coverage at end: <strong>63%</strong></li> </ul> <h3 id="osic-key-projects">OSIC Key Projects</h3> <ul> <li>Request templates created: <strong>611</strong> [<a href="https://github.com/openstack/syntribos/tree/master/examples/templates">4</a>]</li> <li>Bugs reported in Launchpad: <strong>4</strong> (see <a href="#reported-bugs">Reported Bugs</a> below)</li> </ul> <h3 id="reported-bugs">Reported Bugs</h3> <ul> <li><strong>String “..%c0%af” causes 500 errors in multiple locations</strong> <ul> <li>Affects: keystone, cinder, neutron, glance</li> <li>Launchpad: https://bugs.launchpad.net/keystone/+bug/1613901</li> </ul> </li> <li><strong>[Duplicate] Stored XSS in glance image names</strong> 
<ul> <li>Affects: horizon</li> <li>Launchpad: https://bugs.launchpad.net/horizon/+bug/1623735</li> </ul> </li> <li><strong>Authenticated “billion laughs” memory exhaustion / DoS in ovf_process.py</strong> <ul> <li>Affects: glance</li> <li>Launchpad: https://bugs.launchpad.net/glance/+bug/1625402</li> </ul> </li> <li>One embargoed issue that is still being triaged</li> </ul> <h2 id="challenges">Challenges</h2> <ul> <li>Removing OpenCAFE took several weeks, and while it removed a large dependency, it cut down on time for other improvements.</li> <li>Our short testing schedule (1 month of testing for 6 projects) didn’t give us much time to learn the intricacies of each project, and test them at a deeper, domain-specific level. Some projects offered significantly more endpoints than others, and in some cases we had to move on before fully evaluating every component. However, we were able to at least do some basic testing on every offered endpoint for every tested service. <ul> <li>The significant time investment required to create templates for each project further limited the time we had to test these projects.</li> </ul> </li> <li>Lack of unit tests meant that many changes introduced bugs/crashes into master and required fix-ups. 
This happened less often as our coverage improved.</li> <li>Documentation was lacking or inaccurate at the outset, and required significant effort to improve.</li> <li>Our team’s relative inexperience with the OpenStack projects under test, and security testing in general in some cases, made testing more challenging.</li> </ul> <h2 id="syntribos-improvements--future-plans">Syntribos Improvements / Future Plans</h2> <p>As we performed our one-month engagement testing various OpenStack services, several members of the team <a href="https://etherpad.openstack.org/p/syntribos-future">took notes</a> about features, bugs, and general improvements to be made in Syntribos.</p> <h3 id="planned-changes-for-ocata">Planned Changes for Ocata</h3> <ul> <li>Cutting a stable Syntribos release on PyPI to reflect the many updates since it was last released</li> <li>Exploring multithreading for performance/time improvement in test runs, to make them more viable for gate jobs or similar</li> <li>Enabling Syntribos to understand context beyond a single request (i.e. enable tests to create, then read, modify, and delete a resource)</li> <li>Rethinking request templates by using a less cluttered/repetitive format, and giving more information to Syntribos for improved testing accuracy</li> <li>Stretch goal: Adding more formatters for results output (e.g. 
HTML, human-readable text)</li> <li>Further improving test reliability &amp; confidence, and reducing false positives</li> </ul> <h2 id="members-of-syntribos-team">Members of Syntribos Team</h2> <ul> <li>Aastha Dixit (<a href="https://github.com/aasthadixit">Github</a>) - Intel</li> <li>Charles Neill (<a href="https://github.com/cneill">Github</a>) - Rackspace</li> <li>Khanak Nangia (<a href="https://github.com/knangia">Github</a>) - Intel</li> <li>Matt Valdes (<a href="https://github.com/mattvaldes">Github</a>) - Rackspace</li> <li>Michael Dong (<a href="https://github.com/MCDong">Github</a>) - Rackspace</li> <li>Michael Xin (<a href="https://github.com/jqxin2006">Github</a>) - Rackspace</li> <li>Rahul Nair (<a href="https://github.com/rahulunair/">Github</a>) - Intel</li> <li>Vinay Potluri (<a href="https://github.com/vinaypotluri">Github</a>) - Intel</li> </ul> Wed, 26 Oct 2016 00:00:00 +0000 https://openstack-security.github.io/syntribos/2016/10/26/syntribos-team-recap-for-newton.html https://openstack-security.github.io/syntribos/2016/10/26/syntribos-team-recap-for-newton.html OSSP Python Security Syntribos syntribos Secure Development in Python <p>OpenStack is one of the largest Python projects, in both code size and number of contributors. Like any development language, Python has a set of best (and worst) security practices that developers should be aware of to avoid common security pitfalls. 
One mission of the OpenStack Security Project is to help developers write Python code as securely and easily as possible, so we created two resources to help.</p> <h2 id="secure-development-guidelines">Secure Development Guidelines</h2> <p><img src="https://openstack-security.github.io/assets/make_it_easy.jpg" alt="Easy" /></p> <p>The <a href="https://security.openstack.org/#secure-development-guidelines">Secure Development Guidelines</a> were created to make it quick and easy for a developer to learn:</p> <ul> <li>What the best practice is</li> <li>An example of the incorrect (insecure!) way of accomplishing a task</li> <li>An example of the correct way of accomplishing a task</li> <li>Consequences of not following best practices</li> <li>Links for further reference</li> </ul> <p>As developers ourselves, we’re guilty of more than the occasional copy-paste. The <code class="highlighter-rouge">Correct</code> section of the <code class="highlighter-rouge">Secure Development Guidelines</code> is a perfect place to jump in and get the best-practice code snippet you need.</p> <h2 id="bandit">Bandit</h2> <p><a href="https://wiki.openstack.org/wiki/Security/Projects/Bandit">Bandit</a> was built to find common insecure coding practices in Python code. Developed for the OpenStack community by the OSSP, it is the best Python static analysis tool available (in our biased opinion). 
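</p>

<p>As a concrete example of the “incorrect versus correct” style the guidelines use (the helper functions below are invented for illustration), here is the shell-injection pattern that static analysers such as Bandit are designed to flag, next to the argument-vector form that avoids it:</p>

```python
import subprocess
import sys

def echo_insecure(user_value):
    # Incorrect: concatenating user input into a shell command lets a
    # value like "x; rm -rf /" inject extra commands. Static analysis
    # flags the shell=True usage here.
    return subprocess.check_output("echo " + user_value, shell=True)

def echo_secure(user_value):
    # Correct: pass an argument vector so no shell parses the input;
    # a hostile string is treated as plain data, not as commands.
    return subprocess.check_output(
        [sys.executable, "-c", "import sys; print(sys.argv[1])", user_value])
```

<p>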
Like all OSSP resources and tools, Bandit is open source, and we encourage people to use it, extend it, and provide feedback.</p> <p>If you’re new to Bandit, a good way to get started is by watching this: <a href="https://www.youtube.com/watch?v=hxbbpdUdU_k" title="Securing the OpenStack code base with Bandit"><img src="https://img.youtube.com/vi/hxbbpdUdU_k/0.jpg" alt="presentation" /></a></p> <p>Also check out our <a href="https://wiki.openstack.org/wiki/Security/Projects/Bandit">wiki</a>.</p> <p>If you have any questions please contact us on the OpenStack Developer Mailing list (using the [Security] tag), or visit us on IRC in <code class="highlighter-rouge">#openstack-security</code> on Freenode.</p> Mon, 26 Sep 2016 00:00:00 +0000 https://openstack-security.github.io/organization/2016/09/26/python-secure-development.html https://openstack-security.github.io/organization/2016/09/26/python-secure-development.html OSSP Python Security Organization Maturing the Security Project <p>This blog article is intended to address the recent discussions on the openstack-dev mailing list, following the suggestion by Thierry, on behalf of the TC, that the OpenStack Security Project “should be removed from the Big Tent” because the security team failed to nominate and elect a project team lead (PTL) for the next release cycle. Nominating a PTL is required for all active project teams, and the TC sees the OSSP missing this deadline, again, as a failure in community engagement.</p> <p>Back in the early days of the Security Project being in the big tent I missed the election deadline for my nomination. Pure oversight on my part: I was new to the role of PTL, having been grandfathered in from the working group, and I simply didn’t realise what was required for elections. 
Missing a nomination once is bad, so missing the most recent nomination window as well is obviously very bad and raises questions over the level of engagement we have with the community, particularly as everyone in the OSSP also missed the email sent to highlight the closing nomination window (it’s the one on the 19th)…</p> <p><img src="https://openstack-security.github.io/assets/SecurityMail.png" alt="PTL election reminder" /></p> <p>Unfortunately, during the nomination window I was temporarily distracted dealing with some local issues. I’ve discussed these with a member of the TC, who recognises that it was a temporary thing that’s unlikely to happen in the future; however, the bell has been rung and we must decide how to proceed.</p> <h2 id="maturing-the-security-project">Maturing the Security Project</h2> <p>Missing two nominations reflects badly on a project team and leads to several <a href="http://lists.openstack.org/pipermail/openstack-dev/2016-September/104170.html">questions</a> being asked: <em>Who are these people?</em> <em>Are they an active team?</em> <em>Should they be moved outside of the big tent?</em></p> <p>These are understandable questions, and I feel that my <a href="http://lists.openstack.org/pipermail/openstack-dev/2016-September/104176.html">on-thread response</a> addressed them for the most part. What I want to focus on here are the things we need to do to be a better part of the community and to ensure that project teams and the TC are both aware of what we do and how we help improve security in OpenStack.</p> <p>We know from the feedback we’ve had from downstream OpenStack consumers that our work is valued; we need to better demonstrate that value within the OpenStack community. I think a good place to start is to look at the <a href="http://docs.openstack.org/project-team-guide">Project Team Guide</a>, examine what we are already doing and see where we fall short. 
Of course this doesn’t cover the good things we do, like providing CI tooling for security, threat analysis, etc., but it is the minimum set of boxes that we should be ticking off as a project team and that I should be driving as PTL.</p> <p>I want to be clear: I think that the Security Project is doing great things to enhance security in OpenStack. We need to become a better community player, though, and through doing so I expect new opportunities to innovate on security and to create new ways to make OpenStack more secure.</p> <h2 id="score-card">Score Card</h2> <p>I’m proposing a score card for the security project, to ensure we’re doing all that we should be doing and to identify those areas where we need to improve. I’ve based this on the <a href="http://docs.openstack.org/project-team-guide">Project Team Guide</a>.</p> <table> <thead> <tr> <th>Requirement</th> <th>Status</th> <th>Notes</th> </tr> </thead> <tbody> <tr> <td>Open Code</td> <td>Achieved</td> <td>All code in git and licensed appropriately</td> </tr> <tr> <td>Open Design</td> <td>Achieved</td> <td>All design is open to the public, conducted at summits etc</td> </tr> <tr> <td>Open Development</td> <td>Achieved</td> <td>We follow standard OpenStack best practice</td> </tr> <tr> <td>Open Community</td> <td>Needs Improvement</td> <td>We have a gap around the mailing lists that we need to address</td> </tr> <tr> <td>Public Meetings on IRC</td> <td>Achieved</td> <td>1700 UTC Thursdays #openstack-meeting-alt</td> </tr> <tr> <td>Project IRC channel</td> <td>Achieved</td> <td>#openstack-security</td> </tr> <tr> <td>Community Support Channels</td> <td>Mostly Achieved</td> <td>We are strong on Launchpad and IRC, which is where 90% of our workload comes from; however, we need to pay more attention to the ML and ask.openstack.org</td> </tr> <tr> <td>Planet OpenStack</td> <td>Achieved</td> <td>This security blog posts to Planet OpenStack</td> </tr> <tr> <td>Participate in Design Summits</td> <td>Achieved</td> <td>Regular, very well 
attended sessions</td> </tr> <tr> <td>Release Management</td> <td>Achieved</td> <td>We have a number of software projects that we created to support or enhance security in OpenStack. As they’re not directly consumed by OpenStack operators, they’ve not been part of the normal release cycle. Instead we follow the Independent release model.</td> </tr> <tr> <td>Support Phases</td> <td>Needs Improvement</td> <td>Traditionally we have not followed the normal support phases for our projects because they have not been directly consumed by downstream OpenStack users. However, there’s a clear opportunity to get more in line with the rest of the OpenStack community here. This should make things like rolling Bandit changes out through CI easier.</td> </tr> <tr> <td>Testing</td> <td>Achieved++</td> <td>All of our software and documentation efforts have appropriate gate tests in place. Functional and unit tests are in place where appropriate. We’ve also built tooling that other teams are using in their projects for security gate tests. 
We’re not just testing; we’re also testing our integration with the projects that have adopted us.</td> </tr> <tr> <td>Vulnerability Management</td> <td>Achieved</td> <td>Our software projects don’t have the vulnerability:managed tag; however, as the OSSP we do triage any security issues in our own software following standard processes. This is best demonstrated by the recent <a href="https://bugs.launchpad.net/bandit/+bug/1612988">XSS issue in Bandit</a></td> </tr> <tr> <td>Documentation</td> <td>Achieved</td> <td>We have a lot of documentation out there for customers and consumers of OpenStack: <a href="https://wiki.openstack.org/wiki/Security_Notes">OSSNs</a>, <a href="https://security.openstack.org">security.openstack.org</a> and the <a href="http://docs.openstack.org/sec/">security guide</a>, as well as developer documentation such as that for <a href="http://docs.openstack.org/developer/anchor/">Anchor</a> and <a href="http://docs.openstack.org/developer/bandit/">Bandit</a></td> </tr> </tbody> </table> <h2 id="the-four-opens">The Four Opens</h2> <p>To paraphrase from the OpenStack <a href="http://governance.openstack.org/reference/opens.html">documentation</a>, it’s important that any project participating in the big tent adopt and practice the “four opens”: Open Source, Open Design, Open Development and Open Community.</p> <p>For the most part we have done a good job of following these: all of our code is developed under the appropriate Apache licences, and our documentation efforts, like the security guide, security notes and threat analysis, are conducted openly and use the same peer review tools as our code projects. We develop new ideas in the open, attend design summits and encourage new contributions.</p> <p>Where we have not done such a good job is with the Open Community goal. Of course our team is open to new ideas and new contributions, but we have not been as big a participant in the larger community as we could have been. 
Our work with the VMT typically means that teams are driven <em>toward us</em> when they require our assistance.</p> <p>I’d like to expand a little bit more on what Open Community means and where we can improve. OpenStack has some very good <a href="http://docs.openstack.org/project-team-guide/open-community.html">documentation</a> on this topic, but again I’ll paraphrase here.</p> <p><strong>Public Meetings on IRC:</strong> This is something that the security project has always done. We can be found on #openstack-meeting-alt at 1700 UTC every Thursday. Our meetings are public and <a href="http://eavesdrop.openstack.org/meetings/security/2016/">logged</a>; we have a standing public <a href="https://etherpad.openstack.org/p/security-agenda">agenda</a> that any developer is welcome to contribute to if they want to participate in the meeting, and we also welcome people dropping by with questions, comments etc.</p> <p><strong>Mailing Lists:</strong> When the Security Project first formed we were a working group. We had a separate mailing list that didn’t get used for much and, for legacy reasons that I can’t remember (we’ve been doing security for OpenStack since Essex), it was a private list. As I said, it didn’t get used much in our day-to-day work, and I think that’s a bad practice that we carried across to our big tent operations.</p> <p>Largely I think this disconnect from the mailing list has arisen because it was not our experience that we needed to use it. Most of our work has always come from teams reaching out directly to us, typically via IRC. I think it will always be the case that teams will be more active on one communication medium than another, but I fully accept that to meet our obligations under the four opens we must find a way to work more effectively on the mailing lists.</p> <p><strong>Community Support Channels:</strong> We manage all of our bugs on Launchpad, which is the primary way we interact with the VMT. 
Our IRC channel is reasonably active, but as we’ve described above, we certainly need to do better on the mailing lists.</p> <h2 id="impact-of-removing-security-from-the-big-tent">Impact of removing Security from the big-tent</h2> <p>Although I think it’s been addressed a number of times on the mailing-list <a href="http://lists.openstack.org/pipermail/openstack-dev/2016-September/104176.html">thread</a>, I’d like to reiterate two themes from the responses regarding concerns about removing Security from the OpenStack big-tent.</p> <p><strong>Legitimacy:</strong> As can be gleaned from this blog, we haven’t done the best job of making the wider OpenStack community aware of what it is that we do; probably even some teams who are running Bandit in their gate might not realise that it’s a tool that we created to make OpenStack more secure. However, even with teams that haven’t heard of us, we are able to quickly gain traction when they see that we are a ‘proper’ OpenStack project. The truth of the matter is that, in the way most people see OpenStack, you’re either in the tent or you’re largely an irrelevance. We know this because we started outside of the tent and found it much harder to engage with teams where we could see there were obvious security issues. Being outside of the big-tent will make it very difficult for us to act as an authority for signing off that a project has taken reasonable security steps before applying for a vulnerability:managed tag, a relatively recent <a href="https://review.openstack.org/#/c/294212/">change</a>.</p> <p><strong>Investment:</strong> Running any OpenStack project requires investment; very few projects succeed based only on people working on them in their spare time. For the most part, investment here means giving people time to contribute to Security as part of their working week, providing funding for spaces for meetings and mid-cycles, and covering the time and expenses of contributors travelling to design summits etc. 
It’s no secret from looking around OpenStack that some historically big contributors have been scaling back the number of people they send to summits, the number of active contributors they maintain, etc. Having been in the position of lobbying various corporations for support in these areas, I cannot imagine a scenario where we could leave the big tent and continue to dedicate time to the efforts we have in place.</p> <p>Without the legitimacy we have from being part of the big-tent, we will not get the investment required to deliver and enhance security within OpenStack.</p> <h2 id="moving-forward">Moving Forward</h2> <p>I think it’s clear by now that <strong>I want the Security Project to have the opportunity to stay within the big-tent</strong>. I’d like to <strong>continue on as PTL</strong> at least through a period of maturing the Security Project, to ensure that our baseline operations are aligned with what the wider community expects of any big-tent project.</p> <p>I want the opportunity to improve the score card above and have us achieving everything on that list. I see no reason why we can’t begin acting on these things now, and our status can easily be judged on this basis during the next election cycle.</p> Thu, 22 Sep 2016 00:00:00 +0000 https://openstack-security.github.io/organization/2016/09/22/maturing-the-security-project.html https://openstack-security.github.io/organization/2016/09/22/maturing-the-security-project.html maturity PTL OSSP Organization Clearing the Air about Vulnerabilities in OpenStack <p>Recently, there have been a few talks about “vulnerabilities” within the OpenStack project that have introduced some undue concern.</p> <p>Some of them call out important concepts such as <a href="https://en.wikipedia.org/wiki/Information_security#Key_concepts">CIA</a> and the <a href="https://cve.mitre.org/">CVE database</a>. 
Unfortunately, they all attempt to highlight vulnerabilities or attack vectors within OpenStack that were either addressed years ago or cannot be addressed by the upstream community, being instead the responsibility of the group deploying and maintaining the cloud.</p> <p>To understand how OpenStack handles vulnerabilities securely, it is important to briefly introduce the OpenStack vulnerability management process.</p> <h2 id="vulnerability-management-in-openstack">Vulnerability Management in OpenStack</h2> <p>A vulnerability in OpenStack usually begins life as a bug filed against a project with the “security” tag. These bugs are marked private and sent directly to the project’s security team and the Vulnerability Management Team (VMT). An initial triage is performed to understand whether the bug represents a legitimate security issue and, if so, what the impact is. If the issue is confirmed, an advisory and patch are prepared and validated privately. Once the advisory and fix are available, OpenStack stakeholders are given two weeks’ early notice to patch their systems before public disclosure. The two-week notice is important because <strong>it is expected that a responsible OpenStack provider will respond to security advisories in a timely manner.</strong></p> <p><img src="https://security.openstack.org/_images/vmt-process.png" alt="VMT Process" /></p> <h2 id="vulnerability-management-outside-of-openstack">Vulnerability Management Outside of OpenStack</h2> <p>The Vulnerability Management Team (VMT) only manages issues for OpenStack components (with the “vulnerability:managed” tag) inside the OpenStack ecosystem. A portion of the recently discussed “vulnerabilities” have to do with third-party applications deployed on an OpenStack cloud. 
The responsibility for securing third-party applications is shared among third-party developers, who must produce timely patches for their products; distributions, which must bake sane defaults into the product; deployers, who must tune their configurations and patching methods to their environment and applications; and the OpenStack community. To that end, the OpenStack Security Project maintains the <a href="http://docs.openstack.org/security-guide/">OpenStack Security Guide</a> to help with architecting secure environments, and <a href="https://wiki.openstack.org/wiki/Security_Notes">Security Notes</a> to address common deployment-specific issues that have been found.</p> <h2 id="conclusion---were-here-to-help">Conclusion - We’re Here To Help</h2> <p>The OpenStack community is very concerned about security, and actively engages the upstream community, deployers, and operators to help increase the overall security posture of every OpenStack deployment. The Security Project has released many tools and references to assist everyone with the knowledge to securely configure and maintain their OpenStack cloud.</p> <p>Finally, if you have found a security issue in OpenStack, please disclose it responsibly by marking the Launchpad bug report with the “security” tag, or by contacting the involved project team or VMT team members directly. 
The Security Project welcomes everyone wishing to discuss, learn, or contribute to the security of the OpenStack project, and can be found on Freenode in the #openstack-security channel, or on the OpenStack developers mailing list with the [security] tag.</p> Thu, 05 May 2016 00:00:00 +0000 https://openstack-security.github.io/vulnerabilities/2016/05/05/clearing-the-air.html https://openstack-security.github.io/vulnerabilities/2016/05/05/clearing-the-air.html security vmt vulnerabilities summit vulnerabilities Applying threat analysis to Anchor <p>As a follow-up to my previous post on <a href="/collaboration/2016/01/16/threat-analysis.html">Threat Analysis</a>, I started working through a simple TA process for <a href="/tooling/2016/01/20/ephemeral-pki.html">Anchor</a> with a view to seeing how long the process takes, as well as trying to understand how we should document the required steps. I think we need to end up with a point-by-point guide to TA: a simple process that is repeatable and somewhat deterministic.</p> <p>A good measure of the quality of this documentation would be to have two groups of developers from the same project attempt to perform TA in parallel and compare the results.</p> <p>We are still working on the process, as can be seen from this review.</p> <h2 id="reference-architectures">Reference architectures</h2> <p>One of the problems when trying to work out how to create threat analysis documentation for OpenStack services is that they can be configured in so many different ways. Anchor is probably one of the least complicated services in the ecosystem, capable of being deployed in a completely standalone, single-service, single-host configuration. However, this is not how it’s intended to be used ‘in production’. The expectation is that developers use their best judgement on what the best-practice architecture should look like, what components should be present, and what should be optional or recommended. 
You can see this in the architecture diagram for Anchor below. I decided to represent the system in an HA configuration but with LDAP and the audit queue as optional components. The uses of, threats to, and protections for the optional components will be included in the TA, but the notation will highlight that Anchor does not explicitly require these services to run.</p> <h2 id="anchor-components">Anchor components</h2> <p>In this diagram you can see that a typical Anchor deployment consists of just a load balancer, a couple of Anchor instances and the Anchor configuration file, which is stored on disk. <img src="https://drive.google.com/uc?export=download&amp;id=0B0osRPn3qBq5YWEyWGNZemVGMzQ" alt="Anchor Component Diagram" /></p> <p>The interface list in this diagram doesn’t contain a lot of information but it’s really just there for a quick reference.</p> <h2 id="security-requirements">Security Requirements</h2> <p>For each component we run through a list of basic security considerations; at a minimum these are <strong>Confidentiality</strong>, <strong>Integrity</strong> and <strong>Availability</strong> (C.I.A.) - bonus points are awarded for including <strong>Authorization</strong>, <strong>Authentication</strong> and <strong>Auditability</strong>.</p> <p><strong>Artefact</strong> - Generic term for a given component, interface or asset.</p> <ul> <li><strong>Confidentiality:</strong> Does this artefact include or access information that should be kept secret - could this information be used to compromise the security or integrity of the system? In the context of the Anchor project, the on-disk config, validators and AuthN information stored on disk (c4) and read through interface (5) contain confidential information.</li> <li><strong>Integrity:</strong> Does this artefact include or access information that must remain correct? 
Components with strong confidentiality requirements often have strong integrity requirements too, but strong integrity requirements can exist without confidentiality. Consider a time signal that a service relies on for synchronisation - the time of day is in no way confidential or sensitive, but it is critical to the process that the integrity of the time signal is preserved.</li> <li><strong>Availability:</strong> Will the system overall fail if this component, interface or asset is momentarily unavailable - what is the impact of a potential failure?</li> <li><strong>Authentication:</strong> Does the artefact involve or require authentication for some or all operations - what is the impact of the authentication system failing?</li> <li><strong>Authorization:</strong> Once an entity has authenticated (the system knows the identity of the user), is it authorized to perform an action? What happens if an authorization failure allows all users to perform actions?</li> <li><strong>Auditability:</strong> Sometimes also considered as <strong>non-repudiation</strong> (I won’t go into the difference here), this is the ability of the system to maintain an immutable log of operations and events that occurred, with a level of granularity that allows an investigator to reconstruct any given series of events.</li> </ul> <h2 id="discussion--dissection">Discussion / Dissection</h2> <p>At this point in the review it’s time to talk about the block diagram: what data travels between components and what happens to that data within each component. 
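</p> <p>As an illustration only (this structure is hypothetical and not part of the Anchor review itself), the per-artefact checklist above can be captured as a small data structure so that no security property is accidentally skipped:</p>

```python
# Hypothetical sketch: record a C.I.A. (+ AAA) assessment per artefact so a
# review can flag any property that was never considered.
PROPERTIES = ("confidentiality", "integrity", "availability",
              "authentication", "authorization", "auditability")

def unassessed(artefact):
    """Return the security properties not yet assessed for an artefact."""
    return [p for p in PROPERTIES if p not in artefact["assessment"]]

config_file = {  # hypothetical record for component c4
    "id": "c4",
    "name": "Configuration File",
    "assessment": {
        "confidentiality": "High - holds credentials and validation rules",
        "integrity": "High - tampered rules could allow rogue certificates",
        "availability": "Low/Med - only read when the service starts",
    },
}

print(unassessed(config_file))
# → ['authentication', 'authorization', 'auditability']
```

<p>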
Reviews should consider and capture:</p> <ul> <li>What a reference architecture looks like and what parts are optional</li> <li>The interfaces between components and what data travels over those interfaces</li> <li>The security requirements for each interface</li> <li>The protocols used</li> </ul> <p>The content of this discussion should be used to inform the component and interface lists shown below.</p> <h3 id="reference-architecture-validation">Reference Architecture Validation</h3> <p><strong>Does the presented architecture make sense as a reference for future deployers?</strong> Yes - the block diagram uses dotted-line objects to denote optional components. These components should be present in a strong and robust deployment of Anchor but are not required for a basic / test deployment.</p> <h2 id="component-list">Component list</h2> <p>A component list is particularly useful for large projects as it helps keep track of all the different parts of a system under review. The component list describes the entities in the system and how they might process and persist data.</p> <table> <thead> <tr> <th>ID</th> <th>Name</th> <th>Purpose</th> <th>Persists Sensitive Data</th> <th>Exposed Protocols</th> </tr> </thead> <tbody> <tr> <td>c1</td> <td>Client System</td> <td>Any server or service that requires a certificate for operations.</td> <td>Yes - stores certificates from Anchor as well as its own private keys.</td> <td>TLS / 443</td> </tr> <tr> <td>c2</td> <td>Load Balancer</td> <td>Not strictly required for Anchor but strongly advisable. 
This component balances traffic between two or more instances of Anchor.</td> <td>No - Passes data between Anchor instances and clients.</td> <td>TLS / 443</td> </tr> <tr> <td>c3</td> <td>Anchor Instance</td> <td>To validate certificate requests and generate certificates based on request data.</td> <td>Yes - Anchor reads configuration data from disk but does not store anything locally other than log caches when the audit stream isn’t available.</td> <td>TLS / 443</td> </tr> <tr> <td>c4</td> <td>Configuration File</td> <td>To store configuration information.</td> <td>Anchor never writes to this file. It reads a good deal of sensitive data from the file, including validation rules and credentials.</td> <td>Filesystem DAC</td> </tr> <tr> <td>c5</td> <td>Audit Queue</td> <td>To receive, process and forward log data from Anchor instances.</td> <td>Anchor emits no sensitive log data. Audit data contains: which host was issued a certificate, the type of authentication used and the validation rules that were met. Depending on configuration, the audit stream data may be logged in its pre-processed form. Alternatively, the representation within a target log-management or SIEM application may be persisted.</td> <td>Unsure</td> </tr> <tr> <td>c6</td> <td>LDAP Server</td> <td>Anchor can use LDAP to simply authenticate a request, or can use group membership as part of validation rules. 
E.g. only a user in the “Nova Engineering” group is allowed to generate certificates matching the “*.compute.cloud” schema.</td> <td>LDAP undoubtedly stores sensitive data; however, no sensitive data is persisted in LDAP by Anchor or by any side effect of Anchor running.</td> <td>Unsure</td> </tr> </tbody> </table> <h2 id="interface-list">Interface List</h2> <p>The interface list describes how these components communicate with each other.</p> <table> <thead> <tr> <th>ID</th> <th>Name</th> <th>Purpose</th> <th>Protocol(s)</th> <th>Confidentiality</th> <th>Integrity</th> <th>Availability</th> <th>Boundaries</th> </tr> </thead> <tbody> <tr> <td>1-2</td> <td>Client to Load Balancer</td> <td>The client connection to Anchor. Although this goes to a load balancer first, the client perception is that they are connecting directly with an Anchor instance.</td> <td>REST or CMC - both over TLS</td> <td>Access credentials are passed over this connection. The credentials are not encrypted but the connection is.</td> <td>Integrity of requests is important and protected by TLS.</td> <td>Availability of 2 is important - this is why an LB is used in front of Anchor. Potentially two LBs could be deployed in a DNS round-robin configuration.</td> <td>Public / Edge Network -&gt; Control Plane</td> </tr> <tr> <td>3-4</td> <td>Load Balancer to Anchor Instance</td> <td>This is the main communication channel for Anchor operations. The data from the client is passed to Anchor over TLS from the load balancer.</td> <td>REST or CMC - both over TLS (whatever was provided over 1-2)</td> <td>See 1-2</td> <td>See 1-2</td> <td>See 1-2</td> <td>Public Facing / Edge Network -&gt; Internal Network / Control Plane</td> </tr> <tr> <td>5-6</td> <td>Disk Read</td> <td>Anchor reads configuration data from disk. This data can contain AD credentials, validation rules etc. 
Appropriate DAC and MAC should be set.</td> <td>FS Reads</td> <td>High</td> <td>High</td> <td>Low/Med - Anchor requires access to this file only when the service starts</td> <td>Entirely within control plane</td> </tr> <tr> <td>5-7</td> <td>Audit Stream</td> <td>To log events from Anchor</td> <td>CADF</td> <td>Low - no sensitive data in logs, although in aggregate logs could provide an attacker with an understanding of the layout of the infrastructure</td> <td>High</td> <td>Medium</td> <td>Within control plane</td> </tr> <tr> <td>5-8</td> <td>LDAP connection</td> <td>To authenticate users and verify that the group they reside in matches what’s required in validation rules</td> <td>LDAP over TLS</td> <td>High</td> <td>High</td> <td>High</td> <td>Internal Network / Control Plane -&gt; External corporate network</td> </tr> </tbody> </table> <p>The next step in the process is to generate sequence diagrams for a selection of common or important operations within the system under review. This is again where the process relies on the best judgement of the reviewers - to generate enough sequence diagrams to map out major functionality or to ensure that the various interfaces in the reference architecture are explored.</p> <h2 id="sequence-diagrams">Sequence Diagrams</h2> <p>As a brief reminder to the reviewer, sometimes it’s useful to include a simplified diagram that explains the general principles of a system, such as the one below.</p> <p><img src="https://drive.google.com/uc?export=download&amp;id=0B0osRPn3qBq5ZFBqQ1BMeU80cVU" alt="Simplified Anchor Diagram" /></p> <p>Obviously, not much progress can be made with this diagram alone, but for more complex systems it can sometimes be useful. 
This diagram might be a simple one that shows the different parts of a system that handle various API calls, for example.</p> <p>The tool that I chose to draw the sequence diagrams is available online at https://www.websequencediagrams.com; it allows you to describe a sequence using plain text that is then parsed and turned into a diagram. The syntax is pretty trivial and comes from the js-sequence-diagrams project: https://bramp.github.io/js-sequence-diagrams/ . Below is the text that was used to generate the simple diagram above:</p> <div class="highlighter-rouge"><pre class="highlight"><code>title Simplified Certificate Request Flow Client-&gt;Anchor: HTTPS [ Certificate Signing Request &amp; Credentials ] Anchor--&gt;Anchor: Validate Credentials Anchor--&gt;Anchor: Validate Request Anchor-&gt;Client: [ Certificate | Error ] </code></pre> </div> <p>The threat analysis project does not place any requirements on what tools should be used to generate diagrams, but in our experience the js-sequence-diagrams syntax is the easiest to use. The diagram below runs through the process for Anchor to sign (or refuse to sign) a certificate signing request.</p> <p><img src="https://drive.google.com/uc?export=download&amp;id=0B0osRPn3qBq5b3labDRocTNsQlk" alt="Detailed Anchor Sequence Diagram" /></p> <p>In this diagram we’ve used a few annotations to help the reader more easily understand the communication taking place. First let’s take a look at the complete source code for this diagram:</p> <div class="highlighter-rouge"><pre class="highlight"><code>title Detailed Certificate Request Flow Client System-&gt;Load Balancer: [1-2] HTTPS [GET /sign? &amp; Certificate Signing Request &amp; Credentials ] Load Balancer-&gt;Anchor Instance: [3-4] HTTP [GET /sign? 
&amp; Certificate Signing Request &amp; Credentials ] Anchor Instance-&gt;LDAP: [5-8] Get group for user credentials LDAP--&gt;LDAP: Lookup user LDAP-&gt;Anchor Instance: [5-8] Group membership info Anchor Instance--&gt;Anchor Instance: Validate group matches requested certificate Anchor Instance--&gt;Anchor Instance: Apply all validation rules from on disk config Anchor Instance-&gt;Audit Queue: [5-7] CADF [ Certificate Signing Request &amp; Decision &amp; User ] Anchor Instance-&gt;Load Balancer: [3-4] HTTP [ Certificate | Error] Load Balancer-&gt;Client System: [1-2] HTTPS [ Certificate | Error ] </code></pre> </div> <p>The communication between the client system and the load balancer takes place over interface [1-2] (see the interface table above to check what that interface is and the security requirements around it); the protocol is HTTPS, the HTTP verb is ‘GET’ and the resource is ‘sign’. The first parameter passed is the ‘Certificate Signing Request’; parameters are separated from one another by the ‘&amp;’ character.</p> <div class="highlighter-rouge"><pre class="highlight"><code>Client System-&gt;Load Balancer: [1-2] HTTPS [GET /sign? &amp; Certificate Signing Request &amp; Credentials ] </code></pre> </div> <p>Sometimes we want notation that describes an either/or relationship rather than an inclusive list; in this case we use the ‘|’ character:</p> <div class="highlighter-rouge"><pre class="highlight"><code>Anchor Instance-&gt;Load Balancer: [3-4] HTTP [ Certificate | Error ] </code></pre> </div> <p>These diagrams help reviewers to understand how components communicate and how data flows through a system. 
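</p> <p>The notation is regular enough to parse mechanically. As a toy illustration only (this parser is not part of the TA process), the sketch below splits a message line into its interface ID, protocol and payload parameters, treating ‘|’ groups as either/or alternatives:</p>

```python
import re

def parse_message(line):
    """Parse a toy '[id] PROTO [ a & b | c ]' sequence-diagram message."""
    m = re.match(r"\[(?P<iface>[\d-]+)\]\s+(?P<proto>\S+)\s+\[(?P<body>.*)\]", line)
    # '&' separates an inclusive list of parameters
    params = [p.strip() for p in m.group("body").split("&")]
    # '|' marks an either/or alternative rather than an inclusive list
    return m.group("iface"), m.group("proto"), [
        tuple(s.strip() for s in p.split("|")) if "|" in p else p for p in params
    ]

print(parse_message("[1-2] HTTPS [GET /sign? & Certificate Signing Request & Credentials ]"))
# → ('1-2', 'HTTPS', ['GET /sign?', 'Certificate Signing Request', 'Credentials'])
print(parse_message("[3-4] HTTP [ Certificate | Error ]"))
# → ('3-4', 'HTTP', [('Certificate', 'Error')])
```

<p>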
For each operation, reviewers can use the interface ID to check what security considerations have been given to that interface and whether they map well to the operations described in the sequence diagram.</p> Thu, 28 Apr 2016 00:00:00 +0000 https://openstack-security.github.io/threatanalysis/2016/04/28/anchorTA.html https://openstack-security.github.io/threatanalysis/2016/04/28/anchorTA.html anchor ephemeral pki certificates threatanalysis Lightweight Threat Analysis Process for OpenStack <h1 id="lightweight-threat-analysis-process-for-openstack">Lightweight Threat Analysis Process for OpenStack</h1> <p>Following on from our previous posts on <a href="/collaboration/2016/01/16/threat-analysis.html">Threat analysis</a> and <a href="/threatanalysis/2016/04/28/anchorTA.html">Applying threat analysis to Anchor</a>, we have been working on defining a lightweight process for threat analysis which can be applied to OpenStack projects. This blog post gives a first look at the draft process; its final location is still to be decided, but it is likely to be a wiki page, or possibly part of the security docs project.</p> <p>The materials are currently formatted in RST due to their location in the security-docs project; they can be cloned with:</p> <div class="highlighter-rouge"><pre class="highlight"><code>git clone git://git.openstack.org/openstack/security-doc git review -d 220712 </code></pre> </div> <p>We focus on four stages of the threat analysis process:</p> <ul> <li>Preparing artifacts for review</li> <li>Verifying readiness for a threat analysis review</li> <li>Running the threat analysis review</li> <li>Follow-up from the threat analysis review</li> </ul> <h2 id="preparing-artifacts-for-review">Preparing artifacts for review</h2> <ul> <li>Complete the architecture page. The architecture page describes the purpose of the service and captures the information that is required for an effective threat analysis review. 
A template for the architecture page is provided <a href="https://review.openstack.org/#/c/220712/6/security-threat-analysis/templates/architecture-page.rst">here</a> and there is guidance on diagramming <a href="https://review.openstack.org/#/c/220712/6/security-threat-analysis/source/architecture-diagram-guidance.rst">here</a>. If further help or advice is needed, please reach out to the Security Project via the [email protected] mailing list, tagging your email [security].</li> <li>The architecture page should describe a best practice deployment. If a reference architecture is available this may be a good example to use; otherwise the page should describe a best practice deployment rather than the simplest possible deployment. Where reference architectures do not exist, it is possible that the architecture drawn for the threat analysis process can be used as a reference architecture.</li> <li> <p>The following information is required in the architecture page for review:</p> <ol> <li>A brief description of the service, its purpose and intended usage.</li> <li>A list of components in the architecture, their purpose, any sensitive data they persist and protocols they expose.</li> <li>External dependencies and security assumptions made about them.</li> <li>An architecture block diagram.</li> <li>Either a sequence diagram or a data flow diagram, describing common operations of the service.</li> </ol> </li> </ul> <h2 id="before-the-review">Before the review</h2> <ul> <li>Verify that the service’s architecture page contains all the sections listed in the <a href="https://review.openstack.org/#/c/220712/6/security-threat-analysis/templates/architecture-page.rst">Architecture Page Template</a>.</li> <li>The architecture page should include diagrams as specified in the <a href="https://review.openstack.org/#/c/220712/6/security-threat-analysis/source/architecture-diagram-guidance.rst">Architecture diagram guidance</a>.</li> <li>Send an email to the [email protected] 
mailing list with a [security] tag to announce the upcoming threat analysis review.</li> <li>Prepare a threat analysis review etherpad, using this template (TBD).</li> <li>Print the architecture page as a PDF, to be checked in along with the review notes, as evidence of what was reviewed.</li> </ul> <h2 id="running-the-threat-analysis-review">Running the threat analysis review</h2> <ul> <li>Identify the “scribe” role, who will record the discussion and any findings in the etherpad.</li> <li>Ask the project architect to briefly describe the purpose of the service, typical use cases, who will use it and how it will be deployed. Identify the data assets that might be at risk, e.g. people’s photos, cat videos, databases - assets both in flight and at rest.</li> <li>Briefly consider potential abuse cases: what might an attacker want to use this service for? Could an attacker use this service as a stepping stone to attack other services? Do not spend too long on this section, as abuse cases will come up as the architecture is discussed.</li> <li>Ask the project architect to summarize the architecture by stepping through the architecture block diagram.</li> </ul> <p><img src="http://i.imgur.com/7e1Fuz6.png" alt="Threat Analysis: Example Architecture Diagram" /></p> <p>While reviewing the architecture, perform the following steps:</p> <ol> <li>For each interface between components, consider the confidentiality, integrity and availability requirements for that interface. Is sensitive data protected effectively to prevent information disclosure (loss of confidentiality) or tampering (loss of integrity)? Is there a requirement for availability which should be documented and added to reference deployments? In addition to considering the authenticity of the data in transit, consider how the authenticity of the sending and receiving nodes is assured.</li> <li>Consider the protocols used to pass data between interfaces. 
Is this an appropriate protocol, is it a current protocol, does it have documented vulnerabilities, is the implementation in use maintained? Is this protocol used as a security control to provide confidentiality, integrity or availability?</li> <li>Can this interface be used as an entry point to the system - can an attacker use it to attack a potentially vulnerable service? If so, consider what additional controls should be applied to limit the exposure.</li> <li>If an attacker were able to compromise a given component, what would that enable them to do? Could they use it as a stepping stone through the OpenStack cloud?</li> <li>How is the service administered? Is this a secure path, with appropriate authentication and authorization controls?</li> </ol> <ul> <li>Once the reviewers are familiar with the service, re-consider abuse cases: are there any other cases which should be considered and mitigated?</li> <li>Step through sequence or dataflow diagrams for typical use-cases. Again consider if sensitive data is appropriately protected. Where an entry point is identified, consider how risks of malicious input data can be mitigated.</li> <li>If any potential vulnerabilities are identified, they should be discussed with the project team; if they agree that it is an issue then a note should be made in the findings section of the etherpad, with a short title and summary of the issue, including a note of who found it. 
If the project team disagrees, then the note should be made under the further investigation section.</li> </ul> <h2 id="follow-up-from-the-threat-analysis-review">Follow-up from the threat analysis review</h2> <ul> <li>Create a separate bug for each of the security findings listed in the TA Review notes.</li> <li>Update the Threat Analysis Review Etherpad with each of the new Launchpad bug numbers.</li> <li>Paste the contents of the Threat Analysis Review Etherpad into a text file in security-docs/security-ta/notes and push it to the security-review repo using Gerrit.</li> <li>Distribute the Threat Analysis Review Notes via email to all who were present at the threat analysis. If anyone discovers errors or omissions in the notes, then make corrections.</li> <li>On the threat analysis reviews wiki page, create a new row in the reviews table; include a link to the master bug, the date of the review, the PTL and reviewers.</li> </ul> Tue, 26 Apr 2016 00:00:00 +0000 https://openstack-security.github.io/collaboration/2016/04/26/threat-analysis-process.html https://openstack-security.github.io/collaboration/2016/04/26/threat-analysis-process.html mid-cycle collaboration mitaka threat analysis collaboration Glance Signed Image Validation <h2 id="glance-signed-image-validation">Glance Signed Image Validation</h2> <p>A new addition to the OpenStack Security Guide is <a href="http://docs.openstack.org/security-guide/instance-management/security-services-for-instances.html">Signed Image Validation</a> in the Glance service. This allows boot-time assurance that an image has not been tampered with before it is booted. 
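</p> <p>Conceptually, the check is a cryptographic verification over the image contents: any modification to the image bytes invalidates it. The sketch below illustrates only the tamper-detection principle using a plain SHA-256 digest; Glance’s real mechanism verifies a public-key signature over the image, not a bare hash:</p>

```python
import hashlib

def image_digest(image_bytes):
    # Simplified stand-in for the hash computed over a downloaded image.
    return hashlib.sha256(image_bytes).hexdigest()

image = b"...disk image contents..."
recorded = image_digest(image)  # stored alongside the image metadata

# An unmodified copy verifies; a single changed byte does not.
assert image_digest(image) == recorded
assert image_digest(b"X" + image[1:]) != recorded
print("tampering detected")
```

<p>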
The steps for doing this are:</p> <ol> <li>A signature of the image is created</li> <li>A Keystone service context is created</li> <li>The image signature is encoded and uploaded to Castellan</li> <li>The image is uploaded to the Glance service</li> <li>The <code class="highlighter-rouge">verify_glance_signatures</code> option is set to <code class="highlighter-rouge">True</code> in the <code class="highlighter-rouge">/etc/nova/nova.conf</code> file</li> </ol> <p>A detailed list of the specific actions for each step is located at <a href="http://docs.openstack.org/openstack-ops/content/user_facing_images.html#add_signed_images">Adding Signed Images to Glance</a>.</p> <p>Once the configuration steps above have been completed, when an image with a signature hash in its metadata is referenced as the boot image, the Nova service will securely copy the image from Glance and compare a hash of the copied image against the signature from the metadata. If the hash matches, the image will boot, giving the user assurance that it has not been tampered with.</p> Sat, 26 Mar 2016 00:00:00 +0000 https://openstack-security.github.io/security/2016/03/26/signed-image-validation-glance.html https://openstack-security.github.io/security/2016/03/26/signed-image-validation-glance.html glance signed-image on-boot openstack security Pragmatic Security: When Your PoC Goes into Production <p><img src="http://s22.postimg.org/mruu1gim9/cloud_computing_button_blue.jpg" alt="Pragmatic Security: When your PoC Goes into Production" /></p> <h2 id="creating-a-cloud-security-program">Creating a Cloud Security Program</h2> <p>A common anecdote from OpenStack cloud deployments is that when an issue arises - such as with timetables or migrating an application to a cloud-native architecture - security is one of the first items to slip from the schedule. 
So when your Proof-of-Concept cloud is ready to be deployed in production, there are occasional gaps in the overall security and assurance of the environment.</p> <p>This will not be a one-size-fits-all post, as each environment varies in its mandatory security requirements. It will be up to the reader to adapt the recommended Security Software Lifecycle Management program to their environment.</p> <h2 id="teams">Teams</h2> <p>Teams will be referenced in these posts with as much clarity as possible. Our assumption is that they are roughly divided into service teams - which take care of an individual OpenStack service such as Nova - and development teams - which are in charge of the applications that run on the deployment. In your situation, this may map directly to individual teams, a single team, individuals responsible for a given service, or even a single individual. It will be up to you to determine what team is best to engage for a given issue.</p> <p>Each team should be responsible to a master architect - a single technical resource tasked with overseeing the integration details for each service and application, to ensure they can all integrate and evolve together.</p> <p>Additionally, there should be a dedicated security architect who will work with the service teams, development teams, and master architect to ensure the security of the OpenStack deployment. The security architect should not be responsible to the master architect, but rather be a peer able to influence priorities and build features.</p> <h2 id="the-gate">The Gate</h2> <p>It is also assumed that you are utilizing some type of version-control software such as Git or SVN. 
Your service, configuration, orchestration, and application teams all use these version-control tools for code check-in, which in turn triggers gate and build jobs.</p> <p>This article will assume that Git is used for versioning, and Jenkins for check-in jobs like pep8.</p> <p>The repositories that approved check-ins are merged into can be considered the ‘source of truth’ for code and configuration information in a running environment.</p> <h2 id="security-activities">Security Activities</h2> <p>The security team will work to build security and assurance through each step of the software lifecycle. Traditionally, this is encompassed by host hardening, code review, and firewall/network ACLs. In a cloud-native environment, there are a few additional recommended steps. Additionally, the security team will interpret, maintain, and enforce the Security Policy.</p> <p><img src="http://s27.postimg.org/3xtqknmdv/th_1.jpg" alt="Secure Architecture" /></p> <h2 id="secure-architecture">Secure Architecture</h2> <p>The security team works with each service team to identify the best practice approach for a given service, the threat landscape, and secure mitigations for any identified threat vectors. 
These are captured in network diagrams, wiki pages, and configuration management databases for reference by the teams required to implement them.</p> <h2 id="threat-analysis">Threat Analysis</h2> <p>Once a secure architecture has been documented, the security team and major stakeholders are invited to a meeting that critically analyzes each service in context. The security architect who influenced the architecture introduces it to the rest of the security team and stakeholders, in order to gather additional points of view on exposure and possible exploit vectors.</p> <h2 id="static-analysis">Static Analysis</h2> <p>On a per-application basis, the appropriate static code analysis tool can be selected and included in the gate so that “low hanging fruit” can be identified and fixed by a developer on check-in, without depending on other developers or an official code review by an external party.</p> <h2 id="continuous-assurance">Continuous Assurance</h2> <p>The virtual machines used to host the applications will also need to be hardened, and this can be done through an orchestration tool such as Chef or Ansible. Once a secure baseline has been determined, the orchestration tools can be used to deploy the proper versions of packages and utilities and bring each host up into a known hardened state.</p> <p><img src="http://s12.postimg.org/t6vf6afft/image.jpg" alt="Regular Review" /></p> <h2 id="regular-review">Regular Review</h2> <p>Additionally, the policies, configurations, and results of the above should be validated on a regular cadence. 
Scheduling regular reviews for each - for example, starting with an annual review and increasing the frequency as needed - will ensure the controls remain accurately applied to the current environment.</p> Fri, 18 Mar 2016 00:00:00 +0000 https://openstack-security.github.io/security/2016/03/18/pragmatic-security.html https://openstack-security.github.io/security/2016/03/18/pragmatic-security.html security openstack security Security Coverage: Adding the Vulnerability:Managed Tag <p><img src="http://s29.postimg.org/bzfucj247/shield.png" alt="Security Coverage: Adding the Vulnerability:Managed Tag" /></p> <h1 id="the-vulnerability-management-team">The Vulnerability Management Team</h1> <p>The OpenStack Vulnerability Management Team (VMT) provides a point of contact for individuals or groups wanting to report a security issue in OpenStack. They do the initial triage and response to any reported vulnerabilities for the projects that have the <code class="highlighter-rouge">vulnerability:managed</code> tag. This tag provides a clear indication that a project has coverage from the VMT, and allows OpenStack to have a reasonable security baseline - an assurance passed on to implementors and operators of OpenStack clouds.</p> <h2 id="requirements">Requirements</h2> <p>To create this assurance for a given service, the VMT has a list of requirements to be fulfilled before requesting the vulnerability:managed tag. 
There are six requirements the VMT looks at when a service requests inclusion.</p> <ol> <li>The <code class="highlighter-rouge">vulnerability:managed</code> tag applies to all repos within a given deliverable.</li> <li>The deliverable must have a dedicated point of contact for security issues.</li> <li>The PTL for the deliverable is also a point of contact, or delegates one.</li> <li>A defect/bug tracker for the deliverable is configured to initially only allow access to the VMT, which will then bring in deliverable liaisons as needed.</li> <li>The deliverable repository is audited for security by a third party.</li> <li>Automated tests are in place covering the main features of the deliverable; they are lightweight enough to run locally, but also run in the OpenStack CI infrastructure.</li> </ol> <p>The complete description of each of the requirements is on the <a href="http://governance.openstack.org/reference/tags/vulnerability_managed.html#requirements">Security.OpenStack.org VMT ‘Vulnerability Managed’ page</a>.</p> <p>Using the OpenStack tracker and CI infrastructure satisfies requirement four - secure defect/bug tracking - and allows easy extension for requirement six - automated testing.</p> <h2 id="requesting-the-tag">Requesting The Tag</h2> <p>Once the above requirements are met, a thread describing the request should be created on the openstack-dev mailing list. Once the VMT has responded to the request, the tag can be requested through a change to the <a href="http://git.openstack.org/cgit/openstack/governance/tree/reference/incubation-integration-requirements">OpenStack Governance</a> repository. 
An example request can be seen for <a href="https://review.openstack.org/#/c/247528/">the Ironic project</a>.</p> <h2 id="vulnerability-managed">Vulnerability, Managed</h2> <p>Once the above change is merged, the VMT will be able to receive secure defect reports, analyze them to determine whether they are legitimate, develop a patch to remediate the issue, have the deliverable’s point of contact review the patch for impact on the service, and responsibly disclose the defect, impact, and patch to downstream stakeholders.</p> Fri, 18 Mar 2016 00:00:00 +0000 https://openstack-security.github.io/vmt/2016/03/18/vulnerability-managed-tag.html https://openstack-security.github.io/vmt/2016/03/18/vulnerability-managed-tag.html vulnerability-management-team vmt VMT