“From an open source perspective we really built this massive death star, and now we need to go in and close it out.” Tracy Ragan
Last week the Open Source Security Foundation (OpenSSF) met to kick off the Linux Foundation’s North American Summit. OpenSSF was launched in August 2020 to combine the efforts of the Linux Foundation’s Core Infrastructure Initiative and the Open Source Security Coalition against threats that increasingly stem from the open source supply chain. With some of the top minds in open source security in the room, there was a lot to cover, so I’ve given you some highlights at the top and the details of each topic below.
Before we get started, I’d like to give every reader a quick call to action: understand that OpenSSF is a governing board that oversees a budget, and it’s through that oversight that you have influence over, and a voice in, the direction of the community, both as a developer and as an organisation. Open source security affects everyone, and everyone should have a voice at the table. Here are some ways you can get involved today. As Brian Behlendorf, the general manager of OpenSSF, explains, “In the sense of cybersecurity, it’s no longer enough to buy tools off the shelf. It’s now more so about working together to solve this interdependence”. Be a part of helping us make open source secure so we can keep building things that change the world for the better.
Nithya A. Ruff, Chair, Linux Foundation Board of Directors
We started the day with an overview of how we got to the current state of open source security:
In the 1980s and 1990s, open source was more of an ideological stance: the desire to let individuals use software for any purpose they wanted, and to be able to inspect what they were using (think Free Software Foundation). Then in the 90s Linux, the OSI, and the OSDL started to provide a way for company-level engagement. The Open Source Development Labs (OSDL) was put together by a consortium of companies to create a foundation-level way to engage.
Then came foundations and big-company support: the Apache Software Foundation and the Linux Foundation. Webscale companies built on open source (think Google, Facebook/Meta, Amazon, Netflix), and in the 2000s enterprises really started to adopt OSS for digital transformation. Today, we face an open source security challenge: a diverse set of projects in many languages, a focus on problem solving over bug fixing, and many open source contributors and maintainers who lack security training or focus. With this combination in tow, maintainer burnout is becoming a real problem. Security threats take up more time for dedicated maintainers, and projects with less active or dormant maintenance are becoming easy targets for cybersecurity attacks within the open source supply chain.
Open source is now used for mission-critical infrastructure, yet developers often take OSS for granted. There is a need for better dependency management tools and for investment in upstream policy: automating security is the best way to help maintainers focus on innovation over patches. There is also a corporate culture problem: contributing to open source as a company is often an indirect benefit, so it has traditionally been hard to communicate the value of giving back to and maintaining a developer community. Now we are at a point where industry, government and foundations really need to come together to make sure that open source can stay secure.
Mobilising the Open Source Industry in the Fight for Better Security (By Default)
The belief that popularity is a rough proxy for security is a reasonable assumption, but it isn’t always true. “The second step towards building trust was showing that it didn’t depend upon heroics or personality - Apache, Linux, Perl all developed contribution and project acceptance processes that favoured community over code”, says Nithya.
This was a way of building in resiliency. “But somehow, we got into the swap-meet model of sharing code…and I’d like to see us return to the idea of open source communities and open source projects as a little bit more like barn-raising”.
“The Open Source software community was one of the first places on the Internet that demonstrated that, with the right processes, you could handle contributions from anonymous and pseudonymous sources…now we are one of the last such places.”
All processes and projects fall down if you have too few developers per line of code.
Now RSA is offering ways to verify national identity or IP address as a basis for trusting contributions. This may not be the right approach - contributors come from everywhere on earth. We need better tools to measure the trustworthiness of code based on objective measures, and processes and tools that encourage teamwork and shared responsibility for security - by default.
Dunbar’s number - the number of social relationships that any one person can maintain in their mind at one time - is about 150. Open source contribution is far past that, and this brings unique challenges.
In December 2021, Log4Shell “broke the internet” as a vulnerability with a surface area almost the size of the internet. In response, the White House convened a discussion in January 2022 with 10 large open source and cloud infrastructure companies. From that meeting, the OpenSSF community developed a plan for coordinated action to fight this new wave of open source supply chain attacks. This was unique: open source development is typically more organic, but in this case a more top-down, governed approach was needed to build on existing efforts and set pragmatic but ambitious targets for solving these global issues within two years. The result was The Open Source Software Security Mobilisation Plan.
Michael Scovetta, Microsoft, and Michael Winser, Google
Alpha-Omega breaks pretty cleanly into two joint initiatives, as the name might suggest. “Alpha” works with maintainers of critical open source projects to identify and fix vulnerabilities, while “Omega” aims to identify the most widely deployed open source projects and help with automated security, scoring and remediation guidance for their contributing communities.
There are many ways to get involved, but if you have a particular focus please consider joining one of the relevant OpenSSF working groups:
Other Ways to Get Involved:
Anne Bertucio, Google
When someone finds a vulnerability, there are plenty of things they can do with it:
If an ethical hacker decides to disclose the vulnerability, they then undergo a process of verifying, documenting and communicating vulnerabilities. These processes come in three different types:
Full Disclosure: information is made available as quickly as possible with as much reach as possible. The idea is that the sooner users have the info, the sooner they can act. This is commonly referred to as the Twitter Model.
Private Disclosure: info should only go to those who need to know. Patching issues without public disclosure keeps the issue, and the class of exploit, covert.
Coordinated Disclosure: a model that acknowledges both the project maintainers’ wants and the reporter’s wants, and works with them to ensure that a patch can land in a timely manner before further exposure of the vulnerability.
“The time to get comfortable with disclosure is not the time that you have a serious vulnerability on your hands, it’s well before that” Anne Bertucio
Coordinated Disclosure is the recommended course of action, because risk to users can be reduced if projects have the opportunity to patch and cut a mitigated release before disclosure. Disclosure is fundamental in open source: we don’t know who the users are, so we ultimately have to make this information public. For more on how to do this well, see the Guide to Coordinated Vulnerability Disclosure for Open Source Software Projects.
This sounds simple, so why do we need a whole working group around this? And what if it’s a really bad vulnerability? What if we can’t figure out how to patch it? Importantly, there are still vulnerabilities where coordinated disclosure may not apply, e.g. when malware is actively exploiting people or disclosure is in the wider public interest: a great example is when there is a need to notify everyone of an effective typosquatting attack. If you are curious about the active conversations around this topic, consider joining the Vulnerability Disclosure Working Group.
Running a bug bounty is premature if you don’t have the processes in place; it brings a spotlight of attention to vulnerabilities that you may not be ready to address, making your organisation more susceptible to attacks. As Katie Moussouris put it, “when will orgs learn you don’t open a restaurant when you haven’t hired chefs, line cooks, waiters, bus people”. If your organisation isn’t ready to fix the vulnerabilities that are reported, then your security model is not mature enough for a bug bounty to make sense.
Someone finds a bug: there should be an obvious and easy way to contact the Vulnerability Management Team (VMT). These don’t have to be security engineers; they are maintainers who are there for triage and support. This doesn’t have to be anything complex - an email address (and a backup email) is enough. Sending a report by email might feel risky, but it’s less risky than not sending the report at all. Use PGP encryption when sending email, if the project offers a key.
SECURITY.md: this should explain how to report a vulnerability and the cadence of communication that the reporter can expect (a minimal sketch follows this list).
Assessment: is this really a security issue, or a regular bug? To be clear, “design that isn’t great for security isn’t a vulnerability”. But if it is confirmed to be a vulnerability, then we move to patching.
Patching: thank them for the report, then ask if they would like to be involved in patching the vulnerability. Also state that you will request a CVE, and ask if they wish to be publicly credited and in what way.
90-day disclosure timeline: 90 days from when the first report is sent to the VMT to when it is publicly disclosed is pretty standard. If not disclosed by the VMT within 90 days, the reporter can disclose publicly. These are all rules of thumb; a reporter can also state that they want a patch or disclosure more quickly.
Disclosure: Project team and reporter publicly disclose the vulnerability
SAY THANK YOU to the individual(s) who reported the vulnerability, and let them know that their efforts to keep your code safe are appreciated
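To make the SECURITY.md step above concrete, here is a minimal sketch of what such a policy file might contain, written as a small Python script that generates it. The addresses, timelines and wording are placeholders of my own, not an official OpenSSF template.

```python
# Minimal sketch: write a starter SECURITY.md for a project.
# The addresses, timelines and wording below are illustrative placeholders,
# not an official OpenSSF template.
from pathlib import Path

SECURITY_MD = """\
# Security Policy

## Reporting a Vulnerability

Please email security@example.org (backup: maintainers@example.org).
If we publish a PGP key, use it to encrypt your report.

## What to Expect

* We will acknowledge your report within 3 business days.
* We aim to assess, patch and coordinate public disclosure within 90 days.
* Tell us if you would like to help with the fix or be publicly credited.
"""

Path("SECURITY.md").write_text(SECURITY_MD)
print("Wrote SECURITY.md")
```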
Priya Wadhwa, Chainguard, Inc.
Signatures guarantee that software hasn’t been changed since it was signed and that it came from a specific producer, and they can be used as validation in a larger pipeline. Keeping long-lived private keys safe is challenging, and distributing public keys to end users isn’t easy. Paid certificates are an option, but not for everyone. Sigstore is a free and open source way to sign container images, blobs, attestations and SBOMs with cosign.
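As a rough illustration of what cosign signing looks like in practice, here is a minimal sketch that shells out to the cosign CLI; the image name is a placeholder, and the COSIGN_EXPERIMENTAL flag reflects keyless signing being pre-GA at the time of writing (see the GA note below).

```python
# Minimal sketch: keyless-sign and verify a container image with the cosign CLI.
# Assumes cosign is installed and you can complete the browser-based OIDC login;
# "ghcr.io/example/app:1.0.0" is a placeholder image reference.
import os
import subprocess

env = dict(os.environ, COSIGN_EXPERIMENTAL="1")  # keyless mode while pre-GA

# Sign the image; the signature is stored alongside it in the registry.
subprocess.run(["cosign", "sign", "ghcr.io/example/app:1.0.0"], env=env, check=True)

# Anyone can then verify the signature against the public transparency log.
subprocess.run(["cosign", "verify", "ghcr.io/example/app:1.0.0"], env=env, check=True)
```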
Sigstore Policy Controller: a Kubernetes admission webhook that supports CUE/Rego policies and can be installed via Helm
Sigstore Gitsign: keyless git commit signing with Sigstore. Once configured, you can sign with git commit -S or sign every commit by default (a configuration sketch follows this list)
Sigstore GA (coming soon): this will bring increased reliability and defined SLAs for core Sigstore services - keyless signing will no longer be experimental (see sigstore.dev)
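To make the Gitsign item above concrete, here is a minimal configuration sketch based on my reading of the Gitsign README; treat the exact config keys as assumptions and check the project documentation for the authoritative setup.

```python
# Minimal sketch: configure the current repository to sign every commit with Gitsign.
# Assumes gitsign is installed and on PATH; the config keys follow the Gitsign
# README at the time of writing and may change.
import subprocess

settings = [
    ("commit.gpgsign", "true"),       # sign all commits by default
    ("tag.gpgsign", "true"),          # sign all tags by default
    ("gpg.x509.program", "gitsign"),  # use gitsign instead of gpg
    ("gpg.format", "x509"),           # gitsign expects x509 arguments
]

for key, value in settings:
    subprocess.run(["git", "config", "--local", key, value], check=True)

print("Gitsign enabled; the next git commit will prompt for an OIDC login.")
```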
Jeff Mendoza, Google, and Stephen Augustus, Cisco
Scorecard - you can use this for your own projects, but also for the ones that you take as dependencies. Scorecard is integrated into the GitHub code scanning dashboard, and it can run either as a cron job or on each push to the repository.
Allstar - helps you achieve and maintain adherence to security best practices across many GitHub repositories. It continuously checks, raises alerts as GitHub issues, and lets you decide which checks or policies to enable. It’s configured through YAML files, and a great example of its best usage is branch protection, enforcing issue or fix protocols. Check out Google Container Tools for a great technical deep dive on this.
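For an ad-hoc check outside of GitHub’s dashboard, here is a minimal sketch that runs the Scorecard CLI against one dependency and prints the results; it assumes the scorecard binary is installed, a GitHub token is exported as the docs describe, and the JSON field names shown are my reading of the output format rather than a guaranteed schema.

```python
# Minimal sketch: run an ad-hoc Scorecard check against a single repository.
# Assumes the scorecard CLI is installed and a GitHub token is exported
# (e.g. GITHUB_AUTH_TOKEN, per the Scorecard docs); the repo is just an example,
# and the JSON keys below are assumptions about the output format.
import json
import subprocess

result = subprocess.run(
    ["scorecard", "--repo=github.com/ossf/scorecard", "--format=json"],
    capture_output=True,
    text=True,
    check=True,
)

report = json.loads(result.stdout)
print("Aggregate score:", report.get("score"))
for check in report.get("checks", []):
    print(f"  {check.get('name')}: {check.get('score')}")
```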
David A. Wheeler, The Linux Foundation
“Sadly, many software developers have received inadequate education and training on how to develop and distribute secure software.” David Wheeler
Dataconomy 2021 states there are an estimated 4 million unfilled cybersecurity positions.
Ponemon 2020 says 53% of developers work in orgs that don’t ensure training on secure coding, and this need has yet to be solved by higher education. In fact, UC San Diego is the only one of the top 24 CS institutions that requires security. The Secure Code Warrior report found that 89% of developers reported they’ve received sufficient training in secure coding skills, yet more than half of the respondents are not familiar with common software vulnerabilities, how to avoid them, or how they can be exploited.
Education is key, even beyond better automation: intelligent adversaries are actively looking for exploits, so we need active, intelligent defence. Telling a story about how static analysis tools ‘don’t work’ because developers comment out the buggy lines, David homed in on the core problem: “it’s not that the tool doesn’t work, it’s that the tool on the other side of the eyeballs wasn’t ready”. We have intelligent adversaries; we need intelligent defenders to keep the cutting edge of open source truly secure.
Secure Software Development Fundamentals is available from OpenSSF and takes 14-18 hours. The course is free with no time limit for completion, and you get a certification that is valid for two years when you pass the final exam. It’s also available on edX, but there it has time limits and a paywall to certification.
SCORM Connect (Option for Educational Organisations)
The Secure Software Development Fundamentals course is available, as of June 2022, to most educational institutions for use in their own Learning Management System for training and education. It is free for OpenSSF Premier members and accredited educational institutions, and other organisations can incorporate it for an annual fee that covers LF costs. This just enables more people to learn; the course will still be available to individuals via LF Training & Certification and edX. This might be a good stopgap to bring cybersecurity training into the current cohort of higher education.
Joel Orlina, Sonatype
Maven was originally developed for Java, but is now relevant for all languages that run on the Java Virtual Machine (Scala and Clojure, for example). Maven Central is relied upon by millions of developers as the central repository for open source Java packages. Search for yourself at Maven Central, where you’ll find added info about the packages you bring in.
Maven Central ensures a certain level of quality before publishing an open source package, making it more secure for everyone. Think of it as an immune system around the Java ecosystem. As Joel explains, “We raise a non-trivial bar for publishers to vault over. This is an extra level of effort, but it’s worth it for the payoff”. Packages then get published to the repository, where it’s easy for end users to find them.
To publish to Maven Central, publishers must have a recognised namespace; this is important to verify that they own the domain they claim to cover. When the Bintray migration occurred, it was possible that someone could purchase a namespace that had been prominent there, so Maven Central stepped outside its automated system to keep the Java ecosystem safe: they would check whether a namespace already existed on Maven Central and whether it existed on Bintray. Over three months they handled about 700 projects that needed to be migrated with intention, controlling the scope of the potential security problem.
The Log4j Debacle
The OSS Index is a free catalogue of open source components and scanning tools to help identify vulnerabilities and risk. It’s maintained by a vigilant team, but when Log4j hit, they “needed to get volunteers from other organisations to help us manage the Jira...thankfully we’d learned the lessons on how to quickly scale”, Joel remarked, but it’s a great reminder of how rapidly open source security is having to evolve to match massive modern supply chain attacks.
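To get a feel for how tooling consumes OSS Index, here is a minimal sketch that asks its public REST API about a single Maven coordinate; the endpoint and response fields reflect my reading of the OSS Index v3 API documentation, so treat them as assumptions and check the official docs before relying on them.

```python
# Minimal sketch: look up known vulnerabilities for one Maven component via the
# public OSS Index REST API (v3). The endpoint and field names are assumptions
# based on the OSS Index documentation at the time of writing.
import requests

COORDINATE = "pkg:maven/org.apache.logging.log4j/log4j-core@2.14.1"

resp = requests.post(
    "https://ossindex.sonatype.org/api/v3/component-report",
    json={"coordinates": [COORDINATE]},
    timeout=30,
)
resp.raise_for_status()

for component in resp.json():
    vulns = component.get("vulnerabilities", [])
    print(f"{component.get('coordinates')}: {len(vulns)} known vulnerabilities")
    for vuln in vulns:
        print(f"  - {vuln.get('title')} (CVSS {vuln.get('cvssScore')})")
```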
Where Does this All Lead?
The Maven Central Portal is for both package consumers and publishers, so they are redesigning it to better support both. There are still vulnerable versions of Log4j being downloaded, through no fault of the maintainers. The new Central Portal will bring better signup and identity management, and will also include component popularity and categorisation - major improvements to this invaluable developer resource.
If you care about repository security, consider joining the OpenSSF Securing Software Repositories Working Group. Maven Central is also developing a new portal to improve the experience of publishing packages; if you’d like to give your Java namespace a voice, fill out this form, and if you’d like to be a beta tester for the new portal, you can sign up here.
Brian Behlendorf, OpenSSF, and Jamie Thomas, IBM
“We as an industry need to collaborate more fully in order to conquer this challenge. This includes cooperating with the government as well.” Jamie Thomas
Brian Behlendorf reflected on the growing awareness of cybersecurity outside of traditional tech, as it has begun severely impacting critical infrastructure that runs on open source. He was pleasantly surprised “to find that there were congress people who themselves were programmers and were pretty impressive…and you could tell because of the quality of their follow-up responses”. In a dialogue with Congressional representatives, Brian explained the paradoxical philosophy of open source security: “I’d rather use open source that had bugs in it - bugs that had been found and fixed, that means that there’s scrutiny on that code”.
As cybersecurity threats expand and our traditional educational pathways struggle to adjust, the need for more diverse pathways into this accelerating career is apparent. “Every day I get a multi-page cybersecurity report…750,000 cybersecurity jobs are open right now. How are we going to fill that if we don’t open the aperture of education? This includes partnering with HBCUs”, explains Jamie Thomas. It’s also important to expand the reach to veterans and neurodiverse populations.
“The only good thing that came out of Log4j is an enormous awareness of how important cybersecurity is for everybody…I think it’s been a catalyst (perhaps an unfortunate catalyst) but we can move forward from there” Jamie Thomas
Tracy Ragan, DeployHub; Rao Lakkakula, JP Morgan Chase; Bob Callaway, Google
It’s our obligation to take on the burden of security and make sure that software is designed with security in mind. Among end-user industries, financial services are next in line for security focus, and healthcare organisations are the most affected by ransomware. After those come organisations running embedded systems, whether in manufacturing or in critical infrastructure like our grids.
On the Future of Security: “Even if we got a magic wand and cured all of the issues going on in the world today, there would still be new projects starting today that will develop new problems” Bob Callaway
On Governmental Oversight: “Thanks to regulators, security has become very important…but overall the trend has been changing. Log4Shell is a great example, actually now they are asking the right questions. How do we help the open source community to do better? The trend is changing to transparency.” Rao Lakkakula
On Leadership in Security: “If we’re not doing the right thing we can’t expect our customers to do the right thing. In our world it’s about leading by example: scanning our own code and eating our own dog-food” Tracy Ragan
On Challenges to Come: “Bringing in new collar workers, they are going to be using these foundation models to create code. Accessibility of the information becomes problematic. For example, you generated an SBOM, where is it, what’s in it…developers have enough on their plate”, Tracy remarked, pointing to the true complexity that we need to sort through in the next year.
Open source software is developed by a global community of people who love to build things and solve problems, but it isn’t necessarily developed with security in mind. Traditionally that hasn’t been seen as a problem; many people still think of cybersecurity as a strictly enterprise issue, but that’s no longer the case. Many black hat hackers have turned their keyboards to the open source supply chain in a new wave of cybersecurity attacks, with attacks on the open source supply chain increasing by over 650% in the last year alone. Hackers can now hit critical enterprise infrastructure by attacking a single open source project that many organisations pull from, and this is a global issue that we urgently need to address. Ultimately, it’s the great work being done by projects like Alpha-Omega, Google’s SLSA, and Linux Foundation Education that will close the gaps and keep open source secure.