Thinking Strategically About Software Bills of Materials (SBOMs)

March 17, 2023

Where did SBOMs spring from? As someone who (let’s say) has been around the block a few times, I’ve often felt confronted by something ‘new’ which looks awfully like something I’ve seen before. As a direct answer to the question, I believe it was US.gov wot dunnit, when in 2021 the White House released an executive order on improving cybersecurity. To wit, Section 4(e)(vii): “Such guidance shall include standards, procedures, or criteria regarding… providing a purchaser a Software Bill of Materials (SBOM) for each product directly or by publishing it on a public website.”

The driver for this particular edict, cybersecurity, is clear enough, in that it can be very difficult to say exactly what’s in a software package these days, what with open-source components, publicly available libraries, website scripting packages and so on. If you can’t say what’s within, you can’t say for sure that it is secure; and if it turns out it isn’t, you won’t be aware of either the vulnerability or the fix.
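To make the idea concrete, here is a minimal sketch of what an SBOM records, using the CycloneDX JSON layout (one widely used format, alongside SPDX). The component entries are invented for illustration; note the second one, log4j-core 2.14.1, a version that turned out to carry the Log4Shell vulnerability, which is exactly the sort of thing an SBOM lets you discover quickly:

```python
# A minimal, illustrative SBOM in the CycloneDX JSON layout.
# The components are invented examples, not a real inventory.
import json

sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.4",
    "version": 1,
    "components": [
        {
            "type": "library",
            "name": "openssl",
            "version": "1.1.1k",
            "purl": "pkg:generic/openssl@1.1.1k",
        },
        {
            # A famously vulnerable version (Log4Shell): an SBOM makes
            # its presence in a product a one-line search.
            "type": "library",
            "name": "log4j-core",
            "version": "2.14.1",
            "purl": "pkg:maven/org.apache.logging.log4j/log4j-core@2.14.1",
        },
    ],
}

print(json.dumps(sbom, indent=2))
```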

But more than this. Understanding what’s in your app may turn out to be like descending into the mines of Moria: level upon level of tunnels and interconnections, rat runs of data, chasms descending into the void, sinkholes sending plumes of digital steam into the air. If you want to understand the meaning behind the term “attack surface”, you only have to recall the scene in Peter Jackson’s movie in which untold horrors emerge from long-forgotten crevices… yes, it’s highly likely you are running software based on a similar, once-glorious but now forsaken architecture.

It is perfectly fair that the US Government saw fit to mandate such an index as the SBOM. Indeed, it could legitimately be asked, what took them so long; or indeed, why weren’t other organizations putting such a requirement in their requests for proposals? Note that I’m far from cynical about such a need, even if I remain healthily skeptical about the emergence of such a thing into day-to-day parlance, as though it had always been there.

Let’s go back a few steps. I can remember working with software delivery and library management back in the 1980s. We had some advantages over today: all the software, everything above the operating system at least, was hand-crafted, written in Pascal, C and C++, compiled, built and delivered as a single unit. Oh, those halcyon days! Even a few years later, when I was taking software packages from a development centre in Berlin, the list of what was being delivered was a core element of the delivery.

What changed is simple – the (equally hand-crafted) processes we had were too slow to keep up with the rate of innovation. By the late 1990s, when e-commerce started to take off, best practice was left behind: no prizes existed for doing it right in an age of breaking things and GSD (getting stuff done). That’s not a criticism, by the way: it’s all very well working by the book, but not if the bookstore is being closed around you because it is failing to innovate at the same pace as the innovators.

Disrupt or be disrupted, indeed, but the consequences of operating fast and loose are laid out before us today. As an aside, I’m reminded of buying my first ukulele from Forsyths, a 150-year-old music shop in Manchester. “It’s not that cheaper is necessarily worse,” said the chap helping me choose. “It’s more that the quality assurance is less good, so there’s no guarantee that what you buy will be well built.” In this situation, the QA was pushed to the endpoints, that is, the shop assistant and myself, who had to work through several instruments before finding a mid-range one with reasonable build and tone.

Just as ukuleles, so used cars, and indeed, software. The need for quality management is not an absolute, in that things won’t necessarily go wrong if it is not in place. However, its absence increases risk levels, across software delivery, operations, and indeed, security management. Cybersecurity is all about risk, and attempting to secure an application without an SBOM creates a risk in itself – it’s like theft-proofing a building without having a set of architecture plans. 

But as we can see, the need for better oversight of software delivery (oversight which would provide the SBOM out of the box) goes beyond cybersecurity. Not that long ago, I was talking to Tracy Ragan at DeployHub about service catalogs, i.e. directories of application elements and where each is used. The conversation pretty much aligned with what I’m writing here, that is: as long as software has been modular, the need has existed to list out said modules, and manage that list in some way. This “parts list” notion probably dates back to the Romans, if not before.
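By way of illustration, here is a toy version of such a parts list, inverted into a “where-used” index; the application and component names are made up:

```python
# A toy "parts list" with a where-used index, in the spirit of the
# service catalogs described above. All names are illustrative.
from collections import defaultdict

# application -> the components it is built from
apps = {
    "storefront": ["auth-lib", "payment-sdk", "openssl"],
    "warehouse": ["auth-lib", "openssl"],
}

# invert it: component -> the applications that use it
where_used = defaultdict(set)
for app, components in apps.items():
    for component in components:
        where_used[component].add(app)

# when a component turns out to be vulnerable, the blast radius is one lookup
print(sorted(where_used["openssl"]))  # ['storefront', 'warehouse']
```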

The capability (to know an application’s composition) has a variety of uses. For example, so that an application could, if necessary, be reconstituted from scratch. In software configuration management best practice, you should be able to say, “Let’s spin up the version of the application we were running last August.” In these software-defined times, you can also document the (virtualised) hardware as code, and (to bring in GitOps) compare what is currently running with what you think is running, in case of unmanaged configuration tweaks.
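As a sketch of that comparison (assuming, hypothetically, that each release’s component manifest is a JSON file kept under version control, and that the running environment can export a snapshot in the same shape), drift detection reduces to a dictionary diff:

```python
# A minimal GitOps-style drift check: compare the manifest kept in
# version control (what we think is running) with a snapshot exported
# from the environment (what is actually running). The file names are
# hypothetical placeholders.
import json

def load_manifest(path):
    """Read a component -> version mapping from a JSON file."""
    with open(path) as f:
        return json.load(f)

def drift(desired, running):
    """Return components whose running version differs from the manifest."""
    return {
        name: (version, running.get(name))
        for name, version in desired.items()
        if running.get(name) != version
    }

if __name__ == "__main__":
    desired = load_manifest("release-2022-08.json")   # the version tagged last August
    running = load_manifest("running-snapshot.json")  # exported from the environment
    for name, (want, have) in drift(desired, running).items():
        print(f"{name}: manifest says {want}, environment says {have}")
```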

This clearly isn’t some bureaucratic need to log everything in a ledger. Rather, and not dissimilar to the theories behind (ledger-based) Blockchain, having everything logged enables you to assure provenance and accountability, diagnose problems better, keep on top of changes and, above all, create a level of protection against complexity-based risk. So much of the current technology discussion is about acting on visibility: in operations circles for example, we talk about observability and AIOps; in customer-facing situations, it’s all to do with creating a coherent view. 

If it was ever thus, that we needed to keep tabs on what we deliver, the emphasis has shifted from a need for speed (which set the agenda over the last couple of decades) to the challenge of dealing with the consequences of doing things fast. Whilst complexity existed back in the early days of software delivery (Yourdon and Constantine’s 1975 book on Structured Design existed to address it), today’s complexity is different, requiring a different kind of response.

Back in the day, it was about understanding and delivering on business needs. Understanding requirements was a challenge in itself, with the inevitable cries of scope creep as organisations tried to build every possible feature into their proprietary systems. The debate was around how to deliver more – in general users didn’t trust software teams to build what was needed, and everything ran slower than hoped. Projects were completist, built to last and as solid as a plum pudding.

Today, it’s more about operations, management and, indeed, security. The need for SBOMs was always there: the need to know what has been delivered, and to roll it back if it is wrong, remains the same. But the problems caused by not knowing are an order of magnitude greater (or more). This is what organisations are discovering as they free themselves from legacy approaches and head into the cloud-native unknown.

So many of today’s conversations are about addressing the problems we have caused. We can talk about shift-left testing, or security by design, each of which is about gaining a better understanding earlier in the process, looking before we leap. We’ve moved from scope creep to delivery sprawl, as everything is delivered whether it is wanted or not. The funnel has flipped around, or rather, it has become a fire hose.

Rather than requiring ourselves to lock down the needs, we now need to lock down the outputs. Which is why SBOMs are so important: not because everybody likes a list, but because our ability to create an SBOM efficiently is as good a litmus test as any for the state of our software delivery practices, and the consequent levels of risk.

So, let’s create SBOMs. In doing so, let’s also understand just how deep the rabbit hole goes in terms of our software stack and the vulnerabilities that lie within, and let’s use that understanding as a lever to convince senior decision makers that the status quo needs to change. Let’s assess our software architectures, and open our eyes to how we are using external libraries, open-source modules and scripting languages. Let’s not see anything as bad, other than our inability to know what we have, and what we are building it upon.
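Knowing what we have can start small. As a modest first pass, Python’s standard library can enumerate every package installed in the current environment; a real SBOM tool goes much further (transitive dependencies, file hashes, licences), but even this answers the basic question for one runtime:

```python
# A first-pass inventory of one corner of the stack: every Python
# distribution installed in the current environment, via the standard
# library's importlib.metadata (Python 3.8+).
from importlib.metadata import distributions

for dist in sorted(distributions(), key=lambda d: d.metadata["Name"].lower()):
    print(f"{dist.metadata['Name']}=={dist.version}")
```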

Any organization requested to provide an SBOM could see it as a dull distraction from getting things done, or treat it as a purely tactical response. But taking this attitude creates a missed opportunity alongside the risk: I can’t offer concrete numbers, but chances are the effort required to create an SBOM as a one-off won’t be much different from instigating processes that enable it to be created repeatably, with all the ancillary benefits that brings.

This isn’t a “let’s go back to how things used to be” plea, but a simple observation. Software quality processes exist to increase efficiency and reduce risk, both of which have costs attached. Get the processes right, and the SBOM becomes a spin-off benefit. Get them wrong, and the business as a whole will face the consequences.
