In Software Development, Address Complexity and the Rest Will Follow

Where is DevOps going? Is it ‘dead’ as some are suggesting, to be replaced by other disciplines such as platform engineering? I would proffer that while it was never as simple as that, now is as good a moment as any to reflect on approaches such as those discussed in DevOps circles. So, let’s consider what is at their heart, and see how they can be applied to delivering software-based innovation at scale. 

A bit of background. My first job, three decades ago, was as a programmer; I later ran software tools and infrastructure for application development groups; I went on to advise some pretty big organizations on how to develop software, and how to manage data centers, servers, storage, networking, security and all that. Over that time I’ve seen a lot of software delivered successfully, and a not-insignificant amount go off the rails, be superseded, or fail to fit the bill.

Interestingly, even though I have seen much aspiration in terms of better ways of doing things, I can’t help feeling we are still working out some of the basics. DevOps itself came into existence in the late Noughties, as a way of breaking out of older, slower models. More than a decade before that, however, I was already working at the forefront of ‘the agile boom’, as a Dynamic Systems Development Method (DSDM) consultant.

In the mid-Nineties, older, ponderous approaches to software production, with two-year lead times and no guarantees of success, were being reconsidered in the light of the rapidly growing Internet. And before that, Barry Boehm’s Spiral model, Rapid Application Development and the like offered alternatives to Waterfall methodologies, in which delivery would be bogged down in over-specified requirements (so-called analysis paralysis) and exhausting test regimes.

No wonder software development gurus such as Barry Boehm, Kent Beck and Martin Fowler looked to return to the source (sic) and adopt the JFDI approach that continues today. The idea was, and remains, simple: take too long to deliver something, and the world will have moved on. This is as aspirationally true as ever — the goal was, is, and always will be to create software faster, with all the benefits of improved feedback, more immediate value and so on.

We certainly see examples of success, so why do these feel more akin to delivering a hit record or killer novel than to business as usual? Organizations across the board look hopefully towards two-pizza teams, SAFe principles and DORA metrics, but still struggle to make agile approaches scale across their teams and businesses. Tools should be able to help, but (as I discuss here) can equally become part of the problem rather than the solution.

So, what’s the answer? In my time as a DSDM consultant, my job was to help the cool kids do things fast, but do things right. Over time I learned that one factor stood out above all others in its power to make or break an agile development practice: complexity. The ultimate truth with software is that it is infinitely malleable. Within the bounds of what software can enable, you really can write anything you want, potentially really quickly.

We can thank Alan Turing for recognising this as he devised his eponymous, paper-tape-based machine, upon which he based his theory of computation. Put simply, a Turing machine can (in principle) run any program that is mathematically possible; not only that, this includes programs that represent how any other type of computer works.

So you could write a program representing a Cray supercomputer, say, spin that up on an Apple Mac, and run another on it that emulates an IBM mainframe. Why you’d want to is unclear, but for a fun example, you can go down a rabbit hole finding out the different platforms the first-person shooter game Doom has been ported to, including itself.
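
To make that universality concrete, here is a minimal sketch of a Turing machine simulator, written in Python purely for illustration; the transition-table format, the state names and the bit-inverting example program are my own invention, not Turing’s notation.

```python
# A minimal Turing machine simulator: a transition table drives a read/write
# head over an unbounded tape. Illustrative sketch only; the example program
# inverts a binary string, but the same loop can run any table you define.

def run_turing_machine(program, tape, state="start", blank="_"):
    """Run `program` until it reaches the 'halt' state; return the tape."""
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    while state != "halt":
        symbol = cells.get(head, blank)
        # Each rule maps (state, symbol) -> (symbol to write, move, next state)
        write, move, state = program[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Example program: walk right, flipping 0s and 1s; halt at the first blank.
invert_bits = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_turing_machine(invert_bits, "10110"))  # prints 01001
```

The point is not the example program, but that nothing in the loop cares what the table describes: feed it a large enough table and it will happily pretend to be a Cray, a Mac or a mainframe.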

Good times. But the immediacy of infinite possibility needs to be handled with care. In my DSDM days I learned the power of the Pareto principle, or in layperson’s terms: “let’s separate out the things we absolutely need from the nice-to-haves; they can come later.” This eighty-twenty principle is as true and necessary as ever, as the first danger of being able to do everything now is to try to do it all, all at once.
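
DSDM formalised this instinct as MoSCoW prioritisation (Must have, Should have, Could have, Won’t have this time). As a toy sketch, with a backlog invented for illustration, the triage it implies looks something like this:

```python
# Toy sketch of MoSCoW-style triage, the prioritisation scheme popularised
# by DSDM. Backlog items and priorities are invented for illustration;
# only the must-haves make the first cut.
backlog = [
    ("user login", "must"),
    ("password reset", "must"),
    ("export to PDF", "should"),
    ("dark mode", "could"),
    ("animated mascot", "wont"),
]

first_release = [item for item, priority in backlog if priority == "must"]
nice_to_haves = [item for item, priority in backlog
                 if priority in ("should", "could")]

print(first_release)  # ['user login', 'password reset']
print(nice_to_haves)  # ['export to PDF', 'dark mode']
```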

The second danger is not logging things as we go. Imagine you are Theseus, descending to find the Minotaur in the maze of caverns beneath. Without pausing for breath, you travel down many passageways before realizing they all look similar, and you no longer know which ones to prioritize for the next build of your cloud-native mapping application.

Okay, I’m stretching the analogy, but you get the point. In a recent online panel I likened developers to the Sorcerer’s Apprentice — it’s one thing to be able to conjure a broom at will, but how are you going to manage them all? It’s as good an analogy as any to reflect how simple it is to create a software-based artifact, and to illustrate the issues created if each is not at least given a label.
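
As a purely hypothetical sketch (every name below is invented for this post), giving each broom a label need mean no more than recording an owner, a purpose and a timestamp at the moment of creation:

```python
# Hypothetical sketch of minimal artifact labelling: every "broom" gets an
# owner, a purpose and a creation time the moment it is conjured, so it can
# be found, questioned and retired later. Not a real API.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Artifact:
    name: str
    owner: str      # who to ask when it misbehaves
    purpose: str    # why it exists at all
    created: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

registry: list[Artifact] = []

def conjure(name: str, owner: str, purpose: str) -> Artifact:
    """Create an artifact and label it before anyone forgets why it exists."""
    artifact = Artifact(name, owner, purpose)
    registry.append(artifact)
    return artifact

conjure("billing-service", "team-payments", "invoice generation")
conjure("broom-42", "apprentice", "unclear")  # at least it is on record

# Later, the audit question becomes answerable rather than archaeological:
orphans = [a for a in registry if a.purpose == "unclear"]
```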

But here’s the irony: the complexity resulting from doing things fast without controls slows things down to the extent that it kills the very innovation it was aiming to create. In private discussion, I’ve learned that even the poster children of cloud-native mega-businesses now struggle with the complexity of what they have created — good for them for ignoring it while they established their brands, but you can only put good old-fashioned configuration management off for so long.

I’ve started writing about the ‘governance gap’ between the get-things-done world and the rest. This cuts two ways: first, things no longer get done; and second, even when they do, they don’t necessarily align with what the business, or its customers, actually needs — call this the third danger of doing things in a rush.

When the term Value Stream Management first started to come into vogue three years ago, I didn’t adopt it because I wanted to jump on yet another bandwagon. Rather, I had been struggling with how to explain the need to address this governance gap, at least in part (DevSecOps and the shift-left movement are also on the guest list at this party). VSM came at the right time, not just for me but for organizations that had already realised they couldn’t scale their software efforts.

VSM didn’t come into existence on a whim. It emerged from the DevOps community itself, in response to the challenges caused by its absence. This is really interesting, and offers a hook to any senior decision maker feeling out of their depth when it comes to addressing the lack of productivity from their more leading-edge software teams.

Step aside, enterprise imposter syndrome: it’s time to bring some of those older wisdoms, such as configuration management, requirements management and risk management, to bear. It’s not that agile approaches were wrong, but they do need such enterprise-y practices from the outset, or any benefits will quickly unravel. While enterprises can’t suddenly become carefree startups, they can weave traditional governance into newer ways of delivering software.

This won’t be easy, but it is necessary, and it will be supported by tools vendors as they, too, mature. We’ve seen VSM go from one of several three-letter acronyms addressing management visibility into the development pipeline, to becoming the one the industry is rallying around. Even as a debate develops over its relationship with Project Portfolio Management (PPM) from the top down (as illustrated by Planview’s acquisition of Tasktop), we are seeing increased interest in software development analytics tools coming from the bottom up.

Over the coming year, I expect to see further simplification and consolidation across the tools and platform space, enabling more policy-driven approaches, better guardrails and improved automation. The goal is that developers can get on and do the thing with minimal encumbrance, even as managers, and the business as a whole, feel the coordination benefit.

But this will also require enterprise organizations—or more specifically, their development groups—to accept that there is no such thing as a free lunch, not when it comes to software anyway. Any approach to software development (agile or otherwise) requires developers and their management to keep tight hold of the reins on the living entities they are creating, corralling them to deliver value. 

Do I think that software should be delivered more slowly, or do I favor a return to old-fashioned methodologies? Absolutely not. But some of the principles they espouse were there for a reason. Of all the truths in software, recognise this one: complexity will always exist, and it needs to be managed. Ignore this at your peril; you’re not being a stuffy old bore by putting software delivery governance back on the table.
