As I mentioned recently, I’ve been wanting to talk about Agile software development methodologies and how they relate to permaculture – Agile permaculture for short – for years and years and years, and it finally seems like time to do so.
Over on Making Permaculture Stronger, Dan is making an inquiry into permaculture design processes, and how much design is actually done up front vs emerging as you go. Turns out, while the books and classes tend to say “design up front” as the official process, in reality people tend to start implementing before the design is finalised, and allow the final stages to emerge after the first steps are already in place.
The reasons for this are obvious when you think about it: a permaculture property (be it a farm or a backyard or a community garden) is a complex system with many interacting parts, not least of which are the humans that use it. Over time, different issues may arise: a particularly dry summer, a change in the price or availability of materials, a new member of the household, an injury that stops you climbing ladders… and so, your plans that you made up at the start may need to change.
On top of that, there’s the fact that we can sometimes be paralysed by choice and find it hard to really make decisions about what we want. It’s natural to say, “Let’s rough in the outlines and see how it looks, then decide on the detail later.” Or, “Let’s start with the veggie beds outside the back door, and think about the back paddock in a year or two.”
Several of David Holmgren’s Permaculture Principles implicitly recognise the continuous nature of creating a permaculture system:
Observe and interact
Apply self-regulation and accept feedback
Creatively use and respond to change
Yet there’s tension here too. I’ve heard people say that it’s good to observe a landscape for at least a year, to see it in every season, before breaking ground. In a climate like mine in southern Australia, you could even argue for extending that through at least one strong El Niño cycle – perhaps five years – to understand the effect of droughts and flooding rains on the land.
But at the same time, we want to obtain a yield. We have to eat, and it seems silly to hold off on planting a few veggies until we have a complete understanding of our local ecosystem under every environmental condition, as well as a perfect master plan for our lives in which nothing will ever change.
My background is in software development, which is another field where people try to develop and maintain complex systems over many years, all while dealing with shifting goals and changing contexts. The accelerating pace of technological change, especially since the Internet became big, has made the software community think hard about how best to design and implement systems that can deal with all this.
Merry Christmas, here’s a pile of floppy disks
Reader, if you are my age or older, you probably remember buying software from a physical shop. It came in a shrinkwrapped cardboard box, and when you opened the box, there would be a manual printed on paper, and a stack of floppy disks. Windows 95, for instance, came on thirteen 3.5″ floppy disks, and was released in August 1995.
The major version of Windows before that was Windows 3.1, released in April 1992 – three years and four months earlier. This was a typical release cycle for shrinkwrap software: 3 years, give or take a bit, between major updates, with small bugfix releases in between.
Shrinkwrap software was primarily sold for the PC market from the 1980s to the early 2000s. The other two major types of software in the pre-Internet age were custom applications designed for a single customer (for instance, a rocket control system built for NASA) or turnkey commercial software sold to big businesses, such as payroll or inventory management software, which usually came with expensive consulting and support contracts to integrate and customise it for each enterprise’s needs.
Welcome to the waterfall
In each of these types of software, the lead time from concept to delivery was usually on the order of years, and it was developed by sizeable teams made up of programmers, designers, project managers, business analysts, testers, documentation writers, and many others. Software represented a major investment, so you’d want to get it right. In aid of this, companies developed various processes to make sure that the software projects ran smoothly.
The most popular of these was the Waterfall Model, introduced to the software development world around 1970. It consists of steps which flow, one into the next, like a series of rapids down a river. At each step a deliverable is produced and signed off on, and then you’d shoot over the rapids into the next phase, with no return possible.
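The one-way flow can be sketched as a toy program. This is just an illustration, not any real methodology tooling: the phase names follow the classic waterfall sequence, and `do_phase` is a stand-in for months of work by a whole team.

```python
# Toy sketch of the Waterfall model: each phase runs to completion,
# produces a deliverable that is signed off, and hands it to the next
# phase. There is no loop, so there is no way back upstream.

WATERFALL_PHASES = [
    "requirements",
    "design",
    "implementation",
    "verification",
    "maintenance",
]

def run_waterfall(do_phase):
    """Run every phase exactly once, in order.

    `do_phase` takes a phase name and the previous phase's deliverable,
    and returns this phase's deliverable (the "sign-off").
    """
    deliverable = None
    for phase in WATERFALL_PHASES:
        deliverable = do_phase(phase, deliverable)  # signed off: no return
    return deliverable

# Example: each phase simply stamps its name onto the deliverable.
result = run_waterfall(lambda phase, prev: (prev or []) + [phase])
print(result)
```

The telling detail is the plain `for` loop: once a phase’s deliverable is handed on, nothing in the structure lets you revisit it.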
There were plenty of cracks showing in this system when I was at university in the early 1990s. A number of variations were suggested. The first was that there should be an opportunity to loop back to an earlier step of the waterfall if things aren’t working out.
“Rapid prototyping” was another popular idea, especially suited to designing software with graphical user interfaces. The software developers would build rough models of the software’s interface to get the customer’s feedback, then throw them away before starting on the real thing. (Of course, the temptation was to keep working on the prototype rather than throw it away, which often led to software with poor foundations.)
My software engineering lecturer seemed pretty taken with something called the Spiral model, in which the project loops around and around through a series of increasingly mature prototypes, building up a library of supporting documentation as you go, until the very end, where you go through something that looks quite like the waterfall steps to bring the project to final delivery.
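The spiral’s loop structure can be sketched the same way. Again this is purely illustrative: the quadrant names loosely follow Boehm’s description of the Spiral model, and everything else is a placeholder.

```python
# Toy sketch of the Spiral model: the same four quadrants repeat,
# each full loop yielding a more mature prototype, until a final
# pass delivers the product.

SPIRAL_QUADRANTS = [
    "determine objectives",
    "identify and resolve risks",
    "develop and test a prototype",
    "plan the next iteration",
]

def run_spiral(iterations):
    """Return a log of (loop number, activity) across `iterations` loops."""
    log = []
    for i in range(1, iterations + 1):
        for quadrant in SPIRAL_QUADRANTS:
            log.append((i, quadrant))
    return log

log = run_spiral(3)
print(len(log))  # 4 quadrants x 3 loops = 12 activities
```

Unlike the waterfall sketch, the outer loop means every concern is revisited on every pass – the key structural difference between the two models.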
Governments and large organisations tended to use even more complex Waterfall-based methodologies, sending staff on training courses to get certified in their use. I remember seeing posters with complicated flow diagrams on project managers’ cubicle walls, showing the process their organisation favoured, along with Gantt charts to show when all the stages would occur.
Here’s one I found online, quite typical of its genre:
Despite this diagram having at least an order of magnitude more boxes than the first Waterfall one I showed you, it’s following basically the same steps.
Waterfall in Permaculture
As it happens, the Waterfall steps are very similar to those typically taught in Permaculture Design Courses (PDCs) and in many permaculture books.
One of the most popular permaculture methodologies is BREDIM: Boundaries, Resources, Evaluation, Design, Implementation, Maintenance. This is sometimes expanded to OBREDIM or OBREDIMET, adding “Observation” to the start and “Evaluation” and “Tweaking” to the end. I’ve seen SADIMET, too.
Dave Jacke, in Edible Forest Gardens, lays out an essentially similar waterfall-esque model, which was taught in the PDC I recently attended:
- Goals articulation
- Site analysis and assessment
- Design, which is broken down into:
  - Conceptual design
  - Schematic design
  - Detailed design
  - Patch design
(Dan has also gathered some more waterfall-esque processes used by other permaculture authors as part of his Making Permaculture Stronger inquiry.)
What’s wrong with waterfall?
The most fundamental feature of the waterfall model is that you finish each step before moving on to the next. Most importantly, the requirements (step 1 of the waterfall) and the design (step 2) need to be finalised and signed off before implementation can begin.
If you’re working to a finite timescale, you’ll soon find that the more time you spend on the early stages, the less time you’ll have for implementation and testing. There’s always a trade-off: skimp on the early stages and reduce quality through poor design, or spend so long on them that you have no time left to build a solid product.
To think of this in permaculture terms, imagine the following scenario:
You’re a professional permaculture designer. You’re called to see a client, who says they want their newly-purchased acreage to be producing 80% of their nutritional needs by August 2020, three years from now. Most importantly, you won’t get paid unless it succeeds.
You’ll interview the client and write up their needs, along with a site analysis, which they’ll sign off on. Based on this – and without any further questions or clarifications – you need to produce a design. The client will sign off on that in turn, then they’ll hand over to a team of WWOOFers to build it. You don’t get to see the site again, or have any communication with the workers, until the three years are up.
How do you make sure you’ll get paid when August 2020 rolls around?
The longer you spend observing, analysing, and writing the most intricately detailed design documents, the less time the implementers actually have to build soil, plant trees, or integrate animal systems and see them start to produce a yield before your time is up.
On top of that, the strict hand-off between the designer and the implementers means there’s a natural antagonism, in which each party wants to spend as much time as possible on their stage of the work, and will tend to blame the other if things go wrong.
If you can foresee these tensions, you’ll realise there’s a strong risk that the project will fail. To protect yourself, first of all you’ll make sure that the plan is so simple and straightforward that the implementers can’t muck it up. Don’t put in anything weird or new – just stick with what works.
Next, you’ll want to set up a contract with lots of arse-covering clauses saying that if implementation doesn’t perfectly match what you specified, or there’s some unforeseen circumstance, it’s not your fault. The client, of course, has exactly the opposite views. Hope you’ve got a good contract lawyer!
I don’t know of any permaculture projects that are quite this dysfunctional, in this particular way (though if you have any stories, leave a comment!). Nevertheless, it’s clear that if you followed the strict linear process laid out in many permaculture books and courses, this is what you’d end up with.
In summary, the problems of a strict waterfall methodology are:
- Fundamental uncertainty: we live in a changing world and we have imperfect knowledge.
- Limited time for analysis and design: we need to finish designing quickly so we can start implementing.
- Risk of errors: we have to call the plan “finished” at some point, but it might still be wrong.
- Schedule overruns: the more time we spend designing, the more we push back the start of implementation. The less time we spend designing, the more time we spend dealing with problems. Either way we delay our yield.
- The blame game: “Your design is wrong!” “No, your implementation is wrong!” (This can happen even inside one person’s head.)
- Conservatism: to avoid errors, schedule overruns and the blame game, we stick to tried-and-true solutions rather than innovative ones.
- No way back: if the plan is wrong, and the implementation fails, we can only throw it away and start over again.
And, at the heart of it all:
- Cognitive dissonance: we know there’s something wrong with the process, but we try and fool ourselves it’ll work anyway.
Is there a better way?
I’ve probably talked for long enough now, so I’m going to leave you with a teaser for the next post.
In 2001, a group of software developers who were pushing back against the Waterfall model got together and produced a manifesto for a new way of developing software:
We are uncovering better ways of developing software by doing it and helping others do it. Through this work we have come to value:
Individuals and interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan
That is, while there is value in the items on the right, we value the items on the left more.
This was the Agile Manifesto, and it changed the way software was developed. More in the next episode!