I don't know...
When I start something new it's often spontaneous. I have this idea I'd like to try and then I start hacking.
And while I hack at the keyboard and lay the groundwork, I think about the long term.
Having to do double work, writing a document first and then repeating the same thing in code, is extremely counterproductive. Motivation plays a big role. Repetition is not fun. Most projects never get finished. There are always obstacles, and some are really hard. But you won't discover them from writing documentation alone.
And if you write documentation ahead of time, it's very likely that documentation won't fit the final product by the time of the first release.
Having a general outline with key features is ok. But fully-fledged documentation is over the top.
That's my educated opinion, at least. Motivation is very important for me, especially as a single developer who may not know for years whether the thing I'm building will be used by anyone at all.
If there were dedicated people for tasks, that'd be different, probably.
Also, you can't point at established businesses and say they all had a memo or document and that's the reason they succeeded. That's a really long shot, IMHO.
I've found that going straight into hacking on something can end up biting me in the ass for big projects. I've also worked where we would do everything up front in a design document all the way down to functions, what args they take, and pseudocode for everything. The code would then take a day or two with nothing to redo.
A sufficiently detailed design document is essentially indistinguishable from code. How do you avoid the problems you encounter when writing code while writing the design document?
In my experience it is not one or the other. You can experiment first and then document, or vice versa. The point of documentation is "writing is thinking". The act of writing forces you to think through your ideas in ways that other modes of thinking do not. This is especially important when you are working on a team and need to share your ideas.
I agree that sufficiently detailed design documents are essentially indistinguishable from code and should be avoided. However, clearly documenting the why, what and how of your work helps align everyone. It also helps an individual clarify their ideas to themselves. It is hard to write clearly & succinctly. It takes time & practice, but I find it essential.
Know your programming language very, very well, AND design for failure. What I do normally is first write the API of the library as I'd want to use it. Then I might start detailing things, which becomes some early documentation/API. Finally I try to implement it in code, and if I find a truly blocking bit of code I restructure my assumptions about what is possible with my API and go back to the API/docs.
For example, at some point I knew I wanted to set localStorage and cookies values just like this:
cookie.token = '1234';
local.token = '1234';
I know enough JavaScript, and the language is flexible enough, to be confident I could achieve that, either with getters/setters or with the more flexible Proxy(). So I continued writing the other methods first; how would it look ideally to "read" a value? To "delete" a value? Etc:
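For illustration only, here is a rough sketch of how a Proxy could back that kind of read/write/delete API over localStorage. This is my own guess at the shape of it, not the library's actual code:

  // Sketch: a plain object whose property access is trapped by a Proxy
  // and forwarded to window.localStorage.
  const local = new Proxy({}, {
    get(target, key) {
      return window.localStorage.getItem(key);   // "read" a value
    },
    set(target, key, value) {
      window.localStorage.setItem(key, value);   // "write" a value
      return true;                               // set traps must report success
    },
    deleteProperty(target, key) {
      window.localStorage.removeItem(key);       // "delete" a value
      return true;
    }
  });

  // Usage matches the API sketched above:
  local.token = '1234';
  console.log(local.token);  // '1234'
  delete local.token;

A cookie-backed version would look the same from the outside, with only the trap bodies swapped out.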
And finally wrote the code for it. Later I wanted to add IndexedDB, BUT! That is async! My abstraction was broken, or was it? I could just modify a single method and everything else would work as expected:
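As a guess at what that single-method change could look like: if the underlying read becomes asynchronous (as with IndexedDB), the get trap can return a Promise and callers switch to await, while the rest of the surface keeps its shape. The idbGet/idbSet helpers below are hypothetical stand-ins, not real API names:

  // Only the read path changes shape; writes keep the same outward API.
  const db = new Proxy({}, {
    get(target, key) {
      return idbGet(key);      // hypothetical async IndexedDB lookup, returns a Promise
    },
    set(target, key, value) {
      idbSet(key, value);      // hypothetical async write, fire-and-forget here
      return true;             // set traps must report success
    }
  });

  // Reads are now awaited; everything else reads the same as before:
  async function example() {
    db.token = '1234';
    const token = await db.token;
  }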
I have the same reaction to this as I would to "a sufficiently detailed blueprint is essentially indistinguishable from a building". I'd be very interested to see a software design document so opinionated and prescriptive about implementation that it leaves no room for a software engineer to add value.
I think it was an extreme example, but we didn't. The design phase is exactly when you should be formulating ideas and throwing away things that don't work. The theory is that this is easier to do before code is laid down. They were way too granular, though, so it was super painful even in the design phase to get these design docs together.
I think the design phase in this instance required far too much detail. Some middle ground would probably be best imo.
You can’t necessarily, unless you write it at a higher level of abstraction* and trust the engineers are correct when they say it’s feasible to implement in a reasonable amount of time. Now your team is aligned and coordinated, even if the team consists of one person.
I don't think it's about experience. It's just different approaches geared to the individual. I really dislike it when people assume that anything other than their way of working is down to experience.
I've been doing this for 10 years and I've tried the design doc first route and it's arduous for me. I have coworkers who absolutely love that route.
I generally just do it in parallel. Coding to me is like sketching. So I get down my ideas in code, and I document the design as I go. I switch between the two as each one informs the other.
Not the guy you asked, but personally I've been one long enough to know that iterative design/implementation often produces a better design than waterfall. In waterfall, you have to think of everything at the beginning, and you aren't smart enough. Nobody is.
In iterative, sure, you have a design (often informal and sketchy), and you try implementing it, and as you do you find out why certain aspects were a bad idea, and you change the design to fix those, and you find more things that were bad ideas, and you fix those, and so on. By the time you're done, you have either a nice clean design or a total pile of garbage, depending on whether you did real redesigns or just kludged some bandaids on.
Writing the documentation up front doesn't prevent iteration. But it means that you're going to have to re-write it every time you change the design. And it means that you're going to have to flesh out the initial design enough to write the documentation for it. This is somewhat wasteful.
I've had this bite me in the ass on many projects too, but I still agree with the parent. When I get an idea I get a rush of motivation to implement it. If, after that rush is over, all I have is a document, the motivation isn't coming back and the project isn't getting finished. Starting with a mostly-functional MVP means I can show it to people or use it myself, which gives me the extra motivation needed to make it less hacky/more maintainable/perform better/all the other things that I could've thought about in the planning phase.
(I just went through this cycle in the last month or so actually - I got pissed off at the state of public transport apps after I missed a train, so I decided I should make my own. I made a buggy hacked-together web app, started using it daily and showing it to people. Every time I used it, it made me want to work on it more, so I did - rewrote the backend to cache and segment data properly, made the map run at a non-slideshow framerate, made the icons consistent, started building a native version... It's actually getting close to something I'd call "good" now. Knowing myself, this would never have happened had I started off with a specification or even a mockup)
What kind of systems do you develop and what stack do you use?
Software developers tend to impose their habits on everyone thinking the circumstances do not matter. But well, they do. A lot.
The article, you, and the GP are all doing this. I've just read the first few comments, but I expect to find a lot more of it there.
Anyway, if the strictness isn't imposed by the problems you are solving, you may gain from pushing it into your tools and getting some experience with less strict procedures too. Just like the GP may gain from trying some more strict procedures for a while.
What exactly am I doing? I'm saying that I've experienced both extremes but I am not endorsing either of them. What works best obviously depends on circumstances.
As I wrote in a previous comment for a different post: I'm currently developing an "experimental" piece of software, i.e. I am not even sure some features I want to add are possible, meaning I've had to rewrite it a couple of times (and it won't be the last time). Having strict versioning, pinning, and documentation for everything before it reaches a semi-stable state would discourage me from developing it altogether.
Sometimes, writing up some code as a proof of concept is the best way to _inform_ the documentation, a way to gain clarity and understanding of the problem.
In other words, writing software doesn't have to be a linear process (rarely is). Sometimes you start with documentation if the problem is well understood. Sometimes you start with code to derisk the project. Most of the time, you bounce back and forth between the two.
if you can't explain what your code does, you don't know what your code does.
if you can say you know the level of detail you need to write for all the code you're ever going to produce, you're probably not writing very complex code.
It's interesting that you can replace "documentation" with "tests" in your argument and it's still an argument people make. It's double work you have to write tests and code! Yup, it is, kind of.
I'm not necessarily assuming which side of the argument is right -- for docs or tests.
Certainly "starting something new spontaneously just by starting to hack" can be useful and fun. And may not involve tests or docs. That's sometimes called a "spike" (although that's not exactly what the term "spike" was meant for, I think).
I find that, generally, after an initial period of spontaneous experimentation (getting a sense of what might work or not and what direction I might want to go), it's best to move to a less spontaneous development process. That will likely involve test coverage (whether strictly TDD or not). I think that's probably also the point to consider "test-first development".
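As a toy illustration of what "test-first" means in practice (the slugify function and its behaviour here are entirely made up for the example), the assertions are written before the implementation exists and initially fail:

  // Step 1: write the "spec" as assertions first (they fail until the function exists).
  // Step 2: fill in the minimal implementation that makes them pass.
  const assert = require('node:assert');

  function slugify(s) {
    return s.trim().toLowerCase()
      .replace(/[^a-z0-9]+/g, '-')
      .replace(/^-+|-+$/g, '');
  }

  assert.strictEqual(slugify('Hello World'), 'hello-world');
  assert.strictEqual(slugify('  Docs First?  '), 'docs-first');
  console.log('ok');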
Lots of programs fail to gain adoption because the documentation just isn't there, or is of poor quality; adoption can be seriously hampered by it.
It all depends on the style of project. If it's a side project where you're trying out a lot of things, writing docs makes less sense than it does for, say, a new startup's core app or framework.
You're talking about something very different -- basically, building something for fun. The objective is to spend time building something, and it doesn't really matter what the thing being built is. This aligns with the intrinsic motivation very well.
The other scenario is that you're building something for others to use -- you have to understand their problem, design the solution so that it would make their lives easier, etc. With this, it makes sense to start at the soft end of things. The implementation, the actual building -- that's just drudgery at this point, something someone will do if they're paid enough.
So true. In engineering school we had a subject called project management, where we were taught that you have to write the documentation and everything first, and it should be done in such detail (including variable names and such) that you can give the project documentation to three different programmers and receive the exact same product from all three.
I always hated this approach because of the reasons you stated.
I love the quote from that German hacker Fefe: "Coding is an explorative process"
> write the documentation and everything first, and it should be done in such detail (including variable names and such) that you can give the project documentation to three different programmers and receive the exact same product from all three.
The only documentation that specific would be … the code!
Programmers seem to think that code is somehow its own documentation. It is not, no matter how readable it is.
Documentation is there to provide structure to the reader (no, the class hierarchy in your code and your build files do not do that) and to eschew the unimportant details. Code has a lot of unimportant details that provide cognitive load when reading it, but do not give insight. Documentation is about transferring insight.
That's not what documentation is for. That's not even really achievable by sharing code unless you are sharing a lot of metadata too (exact build setups, compute environments, etc.) or the code is very simple.
Documentation is there to _abstract away_ all of the specification that you don't need. That is very valuable if you are ever going to work with other people - it reduces their cognitive load - and it is valuable if you are working on something complex since it also reduces your cognitive load.
I'm totally with you there. I can't get down with TDD because the purpose and structure of the project come to me when I'm in the zone coding. If I worked like the author said I'd have a big document and then immediately encounter some glaring problem with it at coding time and have to go back to square one.
I think that other people's brains work differently, and more power to them, but we don't all work this way.
Even I think this is kind of dumb. You should probably write a high-level outline first, just to get your thoughts together. It could be written on a napkin for all I care, or exist only in your mind palace. The important part is that you have a concrete vision for what you're trying to do. It helps to get it down physically, because then you can't silently change it over time.
But real docs require someone to actually use the product, find exception/edge cases and actual behavior, and document those behaviors. An outline is a great starting point, but if you just try to write all of your docs beforehand, you'll frustrate the hell out of all of your users because your initial vision for a product never accurately describes the end result.
That appears to be false. This [1] tells how if they couldn't explain the feature on the reference card, they'd change the code. So the code came first.
> The Nest thermostat started as a press release.
From what I can tell, the product was announced in a Verge article[2] where there is a demo-able product. And the secret to their success, according to the article, was that it was designed by people who'd worked on phones.
From what I can tell, actual product documentation came last in both cases, exactly as we'd expect.
So the article starts with a double-barrel blast of bullshit. In the article's favor, they did write the bullshit down, though.
Author here. The quote about VisiCalc starting as a reference card came from VisiCalc co-founder Dan Bricklin's TED Talk, where he mentioned writing the reference card first. And in the article you linked (which I'd also quoted from), Bob Frankston mentions the documentation card as part of prototyping, where it seems the card came first, and said "If we couldn't figure out how to explain a feature on the reference card we would change the program," which seemed to me another point towards documentation-driven development.
And on the Nest thermostat, that story came from founder Tony Fadell's book, Build, where he says he wrote a press release before they started working on the Nest thermostat. That would have been prior to The Verge's article, based on his story.
This is confusing to me. Did VisiCalc start as a reference card or not? Because the article doesn't say this and neither does the transcript of the TED talk.
>In addition to prototyping, Dan put together a reference card for users. If we couldn't figure out how to explain a feature on the reference card we would change the program. The original method for copying formulas was too complicated so we just changed the design rather than try to explain it.
Sounds like the reference card came after prototyping and was used as feedback in an iterative prototyping process. Pretty standard. This isn't "write before you code."
It’s definitely possible I got the order wrong. I still feel reasonably confident that documentation, and the general ideas sketched out on the VisiCalc reference card, were a core part of the start of the project, before the code was anywhere near finalized. I'll try to dig it up again and post an update if I can, but in my research I saw a copy of the VisiCalc reference card made before the program was finished, with a sketch instead of a screenshot, along with a comparison of what changed and what didn't between that and the release. And Frankston's comment that if the code couldn't match the card, they'd change the code, suggested to me that at least the docs set the tone for the code, and not the other way around.
Fadell writing the press release before building the Nest, and Raskin writing The Book of Macintosh to kick off the original Mac, though, are both directly documented, so we can be certain there.
At any rate, appreciate the push back—definitely want to have the story straight! Will dig a bit more on VisiCalc and see if I can find more confirmation.
So we have come full circle. That's the way we were building software when I was starting out in professional software development (around 2005). But the idea of doing a thorough analysis (business, technical) is a little bit older. The only people to whom this might come as a surprise are the ones who've drunk too much of the "agile Kool-Aid". I'm not here to bash agile and start this flame war all over again (agile has its merits ...), but somehow thinking before doing got a bad rap recently (for reasons beyond me).
It's like software development got trapped inside King Julien's (from the Madagascar movies/series) brain, with his modus operandi: "let's start doing this before we figure out it does not make any sense".
The phrase I hang on to is something thrown out by someone I used to work with - "Agile isn't an excuse not to do things". Too many organisations use "we're doing Agile" as an excuse not to do any design thinking and instead "just code".
What Agile actually is (in the most general terms) is a different order of doing things that increases the amount of knowledge you have when you do it, and reduces the chances of wasting your work. In my example, you still do the design work, but rather than doing it for the whole system at once you do it for the piece you're about to do, and then implement that piece[1].
But yes, it's nice that the internet has discovered Waterfall!
[1] Although many systems, even in an agile world, do require a degree of up front "whole system" architecture and/or design thinking.
Because thinking clearly is the hard part. I use Github copilot every day for work (and it generates decent code completion for me) and the only thing I've realised is that the major work in programming is to think clearly, and converting thoughts to syntax is not really that hard. Copilot takes off that easier cognitive workload and helps me focus and clear my thoughts. Also, does anyone know how to think clearly or does that just come with practice and experience?
Thinking, especially abstract thinking, is crucial to software development, as often (not always) creating software means abstracting real life into algorithms and data structures. Writing text in a word processor, notepad, etc. is, in my opinion, the first step towards validating the idea, towards having at least a faint idea of the complexity of the solution we would like to build.
On the question of "how to think clearly" - I'd say it's pretty individual - some people start "from the bottom", some "from the top" and others "in the middle". Experience is to know which is which and which approach suits you best.
I think you can sum it up this way: be planning-heavy where it makes sense, and agile where it makes sense. There is no one true way. I'd not want to use agile to develop the formula language in a spreadsheet, and I'd not want to waterfall a marketing website for a speculative product. Common sense applies.
Agile and Documentation-Driven-Development can not only easily go hand in hand, I think agile is perfectly suited to work like this.
We figure out what the software is supposed to do -> We write documentation describing that -> We build the implementation -> New Requirements -> Figure out how to incorporate them into the plan -> Update the documentation -> Build the changes.
It's an iterative process, perfectly suited to agile development.
I would assume that an agile process will rely on Markdown, but then at some point does it get (irreversibly) committed to a more conventional document format?
I assume (please correct me if I'm wrong) that by "conventional document format" you mean ones that do not play well with version control software like git (e.g. because they may be/contain huge binary blobs), yes?
If so, at least in the projects I am involved in, all documentation that is checked into the repos remains in markdown (asciidoc is also used a lot), and is only converted to non-plaintext formats for the purposes of release.
This is similar to how we build executables from our source code, but don't check in the resulting binary files into the repo.
Yes, by "conventional document format" I meant something in the Microsoft crapiverse.
Keeping it in markdown/asciidoc sounds great if you can make it work. Is your work in technical environments, where people have no expectation of using MS Word? And can the users who need fancy features like footnotes learn how to use them, retain that knowledge, and use those features without inadvertently damaging the document?
I ask because I worked as a TC in a software firm targeting Microsoft platforms, so they relied on Word (and SharePoint), and were resistant to ideas of XML and structured documentation, and Markdown never even raised its head.
>somehow thinking before doing got a bad rap recently (for reasons beyond me).
I've seen plenty of software devs engage in detailed forward planning for a future that would be utterly different from what they expected.
Best case, this rendered all that forward planning moot and a waste of time. Medium case, they did a lot of pointless work. Worst case, they technologically straitjacketed themselves and dug multiple holes they couldn't extricate themselves from easily.
Then they'd pick themselves up and do it all over again, thinking that if they (or more usually somebody else) just managed to predict the future better then this kind of shit wouldn't happen. E.g. those idiot PMs just need to provide better requirements.
It's a hard rut to get out of because in so many other spheres of life forward upfront planning is critical and the future IS predictable. Software intuitively feels like it should be too. But it's the exact fucking opposite of that.
This is, at least, why certain kinds of thinking before doing got a bad rap with me.
Sounds like a problem with the execution instead of the technique.
The alternative to thinking before doing is wandering aimlessly, which really isn't likely to steer you where you want to be.
We basically developed whole-ass treatises (and some unhealthy cargo cults) around people saying that the observe -> think -> do -> restart loop should be small and leave space for adjustments between each step.
> The alternative to thinking before doing is wandering aimlessly, which really isn't likely to steer you where you want to be.
When I was a kid, we topped over this hill on the interstate, and way down in the distance there was an overpass across the road, with straight road from here to there. And I wondered how my dad could aim the car so well that it would go under that overpass way off in the distance.
Of course, now I know that he didn't do that. He didn't even try to do that. Instead, he steered the car.
The alternative to thinking before doing is not wandering aimlessly. It's steering. It's knowing where you're trying to go, even if you don't exactly know how to get there, and having an initial idea of how to get there, and then starting to go there, and adjusting as you find obstacles that you didn't know existed, and as you find that your aim was off.
Docs should start out as a rough scaffolding that is easy to change, then filled in as the product anneals. The reason is that systems evolve as you build them, and constantly updating the docs to be consistent as the evolution occurs is an error prone chore that not everyone has the patience or diligence for.
Assuming that your systems evolve as you create them, if you "complete" your docs too soon, you end up with less accurate docs than if you complete them later. Out of date docs are often harmful and demoralizing because they break trust.
Tony Fadell mentioned something similar in his book Build, about writing a press release before building your product. He suggests pulling the press release back out a few months into the project and using it as a way to see what needs to be cut and what is critical to add. Or, if everything's changed too much, rewrite the press release.
Scaffolding's a great example; it needs to morph as the project scope changes, but can also help set the scope (or at least direct it) at the start. At the very least, it gives you something to say ok, we're good to stop here, everything else can wait for v2.
100% this. Outdated documentation is worse than no documentation. „Tell me you don’t care about your product without telling me you don’t care about it“.
What I like to do is write a rough, high-level description of some core concepts that I don't expect to change (they do, eventually). Just to get the ball rolling, and as a form of validation: if you can't explain how something works, you either don't understand it well enough or it's too complicated/convoluted.
I've found "prototype fast, then document extensively as you re-write" to be a great approach. Because you've fleshed out the ideas, found out what works and what doesn't, and now you can capture all that in docs.
However, as always YMMV and you might not get the luxury of re-writing :)
> Out of date docs are often harmful and demoralizing because they break trust.
I agree, however the flipside failure mode is the initial rough design checks the "design doc" box and is never updated, so it remains vague and/or inaccurate.
Any process ultimately is only as strong as the people following it.
So, in other words: write a specification document and then build.
Isn't this what many reasonably run companies do anyway? There have been very few instances in my coding career where I've been asked to build something that didn't have a spec doc, design, or some form of legwork done prior to implementation. A design, I would argue, is a form of documentation: visual documentation. The handful of situations where I was asked to build something without any of that were the result of one boss in particular, an ideas guy who would always come to me and ask me to build demos based on a few minutes of conversation.
Software Engineering is still in the stage of early industrialism - when James Watt built his steam engine, or Carl Benz built his car, they did not have specification documents. They tinkered and made it work, moving around assemblies until everything fit, then improved on the design. Essentially, they were doing some form of 'agile': small teams (to the point of only one), no specifications, high turnover. Today, this approach is unthinkable in mechanical engineering, because bad designs are lethal.
The software engineering environment is special because when it started out, everyone thought of it as just another branch of engineering, so they applied standard industrial processes onto it - and that did not work well, what worked for building hydraulic presses did not translate well to writing software. SE needed to "learn" how to tinker and experiment on mid-sized projects first, and now they are in an early-industrial-age phase (and call it 'agile').
Eventually, SE will return to something more organized, something more reliable - after all, we are learning that bad design choices can be lethal. We are seeing the first steps today, trying to capture agility with frameworks, with dedicated test methodologies, ... Naturally, this will cause frustration with the tinkerers, and it will take time. And while some aspects will resemble mechanical engineering processes, some things will be completely new and untranslatable to other engineering disciplines.
We've been waiting for years for software to become something more organized and reliable. It will take a rethinking of the fundamentals to get there IMO, along the lines of Brad Cox's thinking in [0]. Object-oriented programming was a revelation for a period, but overhype and overselling have made it non grata to a lot of developers. Now we wonder what the next iteration of progress will be.
Is this even true? Applying engineering project management and methodology to software was the default approach historically. This is essentially the waterfall/BDUF that "agile" was reacting to.
The OP is just content marketing for some proofreading software. No one actually thinks this stuff is an epiphany unless they are completely unfamiliar with the history of the field.
With hardware (be it a bridge, a car, a pair of jeans), customers know and accept that no modification is possible once the product is accepted. With software, customers have grown accustomed to the reality that software will evolve to fit their changing needs. Later, this became hoping for software that can anticipate their needs. Finally, it perversely morphed into expecting software that precedes their own awareness of their needs.
In a team/large company context, yes - a spec gets made, then someone (usually someone else) gets to write the code.
In a smaller context one, or two, people may be responsible for design and code. Assuming they have a good understanding of the problem space they may start with some practical things (like database design) then program, then document (if you're lucky)
If you work on client projects in a small team, then a functional spec can help you keep everything conceptual in one place and have a basis for discussion. It doesn't have to be the most elaborate thing. A smaller project gets a few pages with some pictures; a more involved one gets more explanation and database diagrams, a few stories and such.
You do this work anyway, so why not write it down in a structured way? Pragmatic and simple but clear and neat.
You’ll find that people do not tend to read every detail, but you can refer to the details when needed, which is very useful.
> So, in other words: write a specification document and then build.
I don't believe this is what the article is saying at all. I interpreted the headline that way, but the article seems to be saying to have a goal upfront (written) and write docs along the way, also using the process of writing text as a way to clarify thinking.
Documentation (and press releases/FAQs) are usually for users. Specifications are for developers. The former can precede and help with forming the latter.
> So when Bezos—in a possibly apocryphal, 128-word memo—wrote that “All teams will henceforth expose their data and functionality through service interfaces,” it was a focused idea that led to Amazon building web services for which it’d be its own best customer, and turned AWS into the cloud computing juggernaut it is today.
Last I checked, almost every AWS api has a V2. And wait, I just checked, AWS CDK is on V2.28.0. I wonder why the last 28 versions of V2 didn't work if they wrote the documentation first?
Last I checked, Mac OS is on like version 11 or something? I'm not a Mac user so I don't follow this stuff.
Clearly, documentation is not a silver bullet. If it was, we wouldn't have to continue to rewrite and bump the versions of all of our tools. When I worked at Amazon, writing that 6 page document was like agonizing torture. They would ask me what the minimal latency of the API I was creating would be, and how am I supposed to know that?? I don't know what other APIs I may or may not have to call to actually get this to work. Implementation details matter a lot.
I don't remember who said this, but there's a saying that's something like ideas are worth nothing. Execution is worth everything. And the same thing goes for documentation, what good is documentation without a final product? It's useless.
A lot of the problems that exist in software, a lot of the problems that one encounters in writing... in building a system, in writing a program: those are problems that are caused by not thinking about what you are doing before starting to code. - Leslie Lamport
An idiot with a plan can beat a genius without a plan. - Warren Buffett
No battle was ever won according to plan, but no battle was ever won without one. - Dwight D. Eisenhower
To think, you have to write. If you're thinking without writing, you only think you're thinking. - Leslie Lamport
Anything you do is an iterative process. You start out by thinking about something, and then you start writing it, and in the course of writing it, you rewrite and you rewrite and you rewrite. - Leslie Lamport
PS. Full disclosure: I spent today rewriting something I coded yesterday... but generated a lot of written pages before starting both days! (It's a nontrivial problem. I do planning on paper because it's faster.) Now it's mostly there and well documented.
So are artists all idiots? Storyboards usually don't start with a full script. They start with sketches. Architectural designs start with sketches. Mathematical theorems start with tinkering with equations and notes. Are these people all idiots?
Different professions have different methods of "sketching" out ideas. Just because writing is the de facto method of sketching out an idea in the business world, doesn't mean it's the right fit for every profession ever. Most programmers I know feel like writing exploratory code is a form of sketching an idea. I have nothing against writing out your thought process, but it's not desirable or useful in every scenario for writing a program.
I was working on some low level multiplayer code about 6 months ago. I spent around 2 and a half weeks researching other methods. Then I spent another week drawing out designs. I was paralyzed because I didn't know what I didn't know. I finally got around to writing the stupidest code, and in 3 days I had an MVP. Yes, this is anecdotal. But I've been paralyzed by design too many times in the past. I would much rather write a rough draft of my code that works, learn a bunch in the process. Then edit it and refine it.
Why does writing English have to be the only way to explore a problem? Why is it so difficult to imagine that writing exploratory code is similar to writing a rough draft of a paper?
Finally, do research papers get written before they go through the process? No. They do the research, they experiment, then they document the results. I guess these darn researchers are all idiots that didn't realize writing an essay will tell them way more about their research than actual experimentation.
No, and GP didn't remotely imply anything like that. In fact, your examples are exactly backwards. Let's take the quote you're likely referring to.
> An idiot with a plan can beat a genius without a plan. - Warren Buffett
When an artist sketches, they're creating a rough specification for what they're ultimately going to do. That's the point of a sketch. It might not be _exactly_ what gets made in the end, but it should be largely indicative of the end product.
The sketch is your initial specification. Filling in details in the sketch is your MVP. The artwork is the finished product.
People create sketches because they realise they're not geniuses who can just slap paint on a canvas and expect it to work.
> Why does writing English have to be the only way to explore a problem? Why is it so difficult to imagine that writing exploratory code is similar to writing a rough draft of a paper?
No, it doesn't, but exploratory code works as a rough draft only if _you_ are the only audience for it. If others, especially others outside of your team, are the audience, then you need documentation.
> But I've been paralyzed by design too many times in the past. I would much rather write a rough draft of my code that works, learn a bunch in the process. Then edit it and refine it.
You should do both. Write some docs, hit a brick wall, write an MVP to test things out and clarify your thinking, then go back and update the docs with your findings. You know, _iterate_. The problem isn't documentation, but that you fell into the trap of Big Design Up-Front.
> Finally, do research papers get written before they go through the process? No. They do the research, they experiment, then they document the results. I guess these darn researchers are all idiots that didn't realize writing an essay will tell them way more about their research than actual experimentation.
Well, no, that's not how it happens. Researchers don't just write everything up at the end after they've done everything: they need to actually _plan_ the kinds of experiments they do, _plan_ their methodology, and _document_ all that. Does this all end up in the paper at the end? No, but it does end up in their lab books, which they're required to keep. Material from those lab books ends up, in edited form, in the final paper.
The mistake you're falling into is that you're assuming that this means that _everything_ has to be defined up front. But nobody's actually saying that. Sure, you might have a situation like the one you mentioned previously where somebody will ask you what the minimal latency for X will be, and it's OK to say "I don't know, I'll need a spike to write some kind of MVP to discover that" in response.
That doesn't mean that documenting your intents isn't a good thing.
I agree with what you're saying here. Except what you're saying is the antithesis of what the article is saying.
> The mistake you're falling into is that you're assuming that this means that _everything_ has to be defined up front. But nobody's actually saying that.
This article is entirely focused on the idea that sketching and wireframes are bad ideas. I fundamentally disagree with that. From the article:
> The temptation is to jump straight into building mode, to write code or build wireframes or sketch drafts of what you want to create. The better idea is to write your idea down as a way to think it through and improve upon it. Then, when you start building, you’ll know exactly what to build, and why, and you’ll have pre-written documentation at launch.
> what good is documentation without a final product?
Reference material for the features and techniques the product had/was going to have. I've earned years of my salary at a time by referencing documentation for vaporware -- things that never quite made it but that had worked out a ton of problems and documented them along the way.
> They would ask me what the minimal latency of the API I was creating would be, and how am I supposed to know that?
Isn’t that part of the job? When writing a technical document, you need to research things like dependencies, data storage, caching and estimate expected latency.
The only way to get an accurate estimate of expected latency is to measure it in all sorts of different scenarios. You can't measure something that doesn't exist. And relying on other peoples documentation for accurate latency estimates is asking for trouble. The one thing I've learned through profiling a lot of my code is that my intuitions about how long something will take are always wrong.
Part of your job is understanding what is reasonable to expect from your code and understanding how to estimate things. An estimate that is wrong by a factor of 10 can still be very useful.
When you get into the weeds on performance, a lot of things get very unintuitive, but from a high level it is often not hard to estimate median performance figures for a system within a factor of 10.
As an experienced engineer, you should know that you can never give a good estimate. Very few things are as you would expect them to be.
I looked for an article on how to estimate latency. When I Google "how to estimate the latency of my program" the top results are:
"Network Latency Guide: How to Check, Test, & Reduce"
"How to Measure Latency - Obkio"
"How to measure network latency: the 5 best tools - Kadiska"
They're all about measuring latency. Honestly I can't find any blog or guide that talks about how to give good estimates. Every time I Google a different variation of this question, it only gives me results that talk about how to measure latency. If you can point me to a good source that talks about how you can provide a good estimate of latency, I would be very surprised ;)
Use a table of common latencies, add up what's going to happen on your critical path, and multiply by 2. That will be close to your median latency.
Edit - when people ask for estimates, they usually aren't asking if it's 1 millisecond or 2 milliseconds. They're asking if it's closer to 1 millisecond or 100 milliseconds.
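A minimal back-of-the-envelope sketch of that approach; the latency figures are rough, commonly cited orders of magnitude rather than measurements, and the request steps are invented for the example:

  // Rough order-of-magnitude latencies in microseconds, not measurements.
  const approxUs = {
    memoryRead: 0.1,          // main-memory reference
    sameDcRoundTrip: 500,     // round trip within one datacenter
    ssdDbQuery: 1000,         // simple indexed query hitting SSD
    crossRegionCall: 150000,  // round trip between distant regions
  };

  // Add up the critical path of a hypothetical request, then multiply by 2.
  const criticalPath = ['sameDcRoundTrip', 'ssdDbQuery'];
  const totalUs = criticalPath.reduce((sum, step) => sum + approxUs[step], 0);
  console.log(`~${(2 * totalUs) / 1000} ms`);  // ~3 ms: closer to 1 ms than to 100 ms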
There are a few good reasons why almost every AWS API is on V2.
Firstly, the earlier ones were the Wild West and took differing approaches to things. A good example of how things were is SimpleDB, one of their first public APIs, which uses SOAP. Amazon later changed how they did all this, standardising parts of their request and response format and switching to JSON for serialisation, and many services went to V2 just because of that. And as somebody who's ex-Amazon, you should know that better than me, who's just looking in from the outside.
Heck, it could be that somebody pushed them to release something before the details were nailed down. I've had projects like that, where we were forced to release an API for general use internally before we were happy with it, which led to us having to do a V2 that should've been our V1. This happens too.
Also, the fact that a service is on V2 doesn't bother me that much. That means that they went through one set of breaking changes after their first public release. If that major version number were higher, I think you might have a point. If they've stuck to that and have just been bumping the minor revision, that means they've just been adding features.
> Clearly, documentation is not a silver bullet.
It's not meant to be. Nothing is, and that's not the point being made. That doesn't mean it doesn't help an awful lot, even if it is agonising torture. If you can't write down what you're doing, you probably don't know or understand what you're doing. Narrow your scope to what you do understand, and specify and implement that. Scope expansion is what minor version bumps are for.
You'll find hundreds or thousands of articles espousing the opposite: that you can't know what you want until you try building it. You can't see all of the issues and all of the problems, and how your design really doesn't work, until you actually try to build it.
Imagine a chef trying to come up with a new dish. I suspect they don't write it down first. They mix things in and then see what to add to make it better. Sometimes they fail, but I don't think they'd get there by making cards. Cards are not the flavor and docs are not the actual product.
Maybe it fits certain products more than others or maybe like everything it just depends, some projects succeed using the waterfall style, others need to improvise.
Generally I think I prefer the iterate and edit style to the plan everything before you start style. But I also don't like wasting time so I'd prefer to spend as little time as possible going down paths that will be discarded. But often I think you can't know it's the wrong path until you get there.
"You can't see all of the issues and all of the problems and how your design really doesn't work until you actually try to build it." - assuming you build something, nobody has tried to build before.
In the 95% of other cases, use a library solutions, read about solutions that exist, try to learn from similar solutions. Don't reinvent the wheel.
It's not wrong to do experiments, learn from that and apply the know-how to production code.
> Imagine a chef trying to come up with a new dish. I suspect they don't write it down first.
Many chefs do. Especially when it's about the presentation on the plate, many chefs make a sketch first before they prepare the new dish for the first time.
Exactly! In my experience, a chef will do research (they have a ton of notes/books/recipes), then combine some stuff and try things, but then will 100% write things down.
I think that's the key point being lost in any of these over-generalizations of "software" separated from actual industries and customer types.
Building a back-end system for a corporate client? Yeah, probably write a spec first and get them to sign off.
Starting an API-driven business, selling metered REST API resources? Definitely a strong case for writing the docs first, which are key to adoption and usage.
Building a new social, dating, or media network? Seems unlikely the docs are going to be a deciding factor in those projects' success.
My opinion is this post romanticizes the idea of planning before building anything a little too much. As with anything, the devil is in the details.
I agree that some type of high-level planning is needed before you do significant coding or design. However, just how much really depends on what you're building.
I don't really buy into the notion that writing a lot first will get to the heart of what you want to build. There are far too many unknowns up front. You'll have to start prototyping and experimenting to really know if what you've planned has any merit in terms of a real product. What you've written will give you a map of where you're trying to go, but you'll need to be flexible enough to change some of those initial plans if reality doesn't match your expectations.
When you can’t write a functional spec of something, then exploration/prototyping is in order. But that doesn’t mean you shouldn’t write one when you can. It’s very useful to get people on literally the same page.
This overlooks that doing things correctly tends to not be very fun.
If you're working on a side project, I'd argue the best way to completely drain your motivation is to try and write a whole lot of documentation first.
Author here. Agreed that doing things correctly tends to not be very fun. I would add that, for most people (writers included), writing documentation isn't very fun, is often put off until the last minute, and is easy to neglect precisely because it's not fun. That's why I found it interesting to see how many examples there are of people writing documentation much earlier in the process, as it's an incentive to prioritize one of those not-fun tasks.
With most creative endeavours about 20% is creative, the rest is work.
This is why most side projects (what the rest of the world calls hobbies) never turn into anything commercial - because making it commercial means doing the work part.
For example, writing a book. The writing part is easy, and fun. Then the work starts. First, second, third edits (or rewrites, depending on your definition of edit). Then finding (and convincing) an agent. Then ditto a publisher. (Then more edits.)
Then endless book promotions in small book shops for 10 people at a time. If you're famous then tours on TV talk shows. Lots of boring travel, boring towns and boring work.
If you want to write a book, write a book. If you want to be a writer expect to spend only a fraction of your time writing.
If you're working on a side project, then do the fun part. Save the work for your employer. If you decide to turn your side project into your income, then you still get to do the fun part, but remember that success will require lots of work as well (like docs, examples, training and so on.)
To be fair, it's more fun doing work for the benefit of your own stuff, but work will be required.
I guess that makes sense when you start a product specifically made to be sold. It’s good to have the end product as well defined as possible so you know how far ahead the end result is.
But when “scratching your own itch”, docs would take away all of the ambition you started with because you might realize the scope of the project is bigger than you’d like.
When I got my first monitor in 2017, I couldn’t believe that the only way to change its brightness throughout the day was to use a clunky joystick to go through the monitor menus and find the brightness setting, and press the joystick a million times to lower the brightness until it’s comfortable again.
So I started to create Lunar (https://lunar.fyi) with that in mind, an app that can automatically change the brightness of my monitor throughout the day.
It was my first time doing a Mac app, my first time reading about DDC and my first time creating a desktop UI. If I had started with docs, that app would have never existed, because I would have realized just how much I didn’t know.
Simply building it, bit by bit, led me to a good enough end result, that I could use myself and share with others, and no documentation was needed when the app did just one thing.
I understand algorithms more after I write down what they actually mean or do and why. The implementation is secondary.
I would rather read what an algorithm does than read your implementation. Even though the implementation is more useful from an execution perspective. From the description I can write my own. It's easier for me to understand a plain description of an algorithm or technology than the code that implements it. Reading code is an investment.
Reading code is harder than reading English describing the same thing. I understand that English is imprecise, but from a detailed description of, say, the Chrome layout engine I could probably learn how to write a layout algorithm. That documentation doesn't exist in a digestible form, though, so you are forced to read the Chromium source code.
The ORC whitepaper uses a branch and bound optimisation technique paired with a constraint satisfaction optimiser. I suspect I could use OR-Tools.
How many alternative rendering engines are there really? IMGUI, video game layout engines and video game engines, Gecko, Blink, WebKit and JavaFX and Swing and Qt QML
My first job out of college was working for someone that wanted to implement whole systems in XML documents and use XSLT transforms to turn them into working code and documentation at the same time.
That approach didn't last very long. But hey at least I learned the Muenchian Method.
I just happened to be reading up on Readme Driven Development[0] last night and the HN thread[1] had a few mentions of Donald Knuth's Literate Programming[2]. Fascinating read.
I don’t know. I feel like there’s something to be said about building something on a limb because you’re curious and then seeing where it takes you. I feel like that approach, albeit less organized, can also lead to a great product because you end up naturally spending more and more of your time on a curiosity.
As one co-commenter wrote: it depends strongly on the context. From approx. 5 people up (to hundreds), starting with a whiteboard session and constantly keeping up a joint documentation can help very much to split work and not step on each other's toes... A good design document can be very short and much easier to understand than code, e.g. when you explain memory layout and do data-oriented design.
So one camp says "just jump in and code", the other one advises to go full waterfall and write books of documentation. Yet again, the new generation is making the same mistakes us fossils learned from years ago.
You will be shocked to find out that the truth is in the middle.
"Jumping in an hacking" is a recipe for a waste of life (but seems fine when you are young and the time seems eternal). Yes, you will eventually get it done, but after going down rabbit holes and "just getting it to work", you will end up with code that, let's just say, down the road you will not be proud of.
Trying to think everything through in advance is a fool's errand. Your "documentation" will be useless after you write the first few lines of code, unless you are clairvoyant and can predict every little problem.
Think before you write, that's all. Think more, write less. I spend more time thinking about what I am about to write, but at some point I have to start coding, before I enter the analysis paralysis stage.
A company I was at followed this approach. But of course putting it in writing then meant it had to be shared and approved or "reviewed"(reviewers of course thought they were approvers which was never the case). We called them TDDs (Technical Design Documents) and they were a nightmare.
The docs were often out of date, as the actual engineering would deviate strongly over time (and short periods of time too, like days).
They were incomplete as we wouldn't quite know what was involved until we actually began figuring it out by writing code and building POCs. We also wouldn't know how far we'd want to take some decisions until we had code to then figure out questions about share-ability, extensibility, ownership, etc.
The project would take significantly longer as everyone would first work on the TDD before ever working with code. Then you'd have to corral approvals which involved comments, questions, and a lot of back and forth. This would take days and could stretch out into weeks.
All in all it became a PITA and I wouldn't care to ever continue this practice.
I agree writing helps clarify intent and understanding; for software, that writing comes through writing code. Phrases like "measure twice and cut once" don't really make sense for software, where getting new wood to cut has an effective marginal cost of zero.
Software is completely different from any physical good so these analogies just don't make sense. It's not like a building where once it's built it's impossible or difficult to change. We can forever update, improve, repurpose, and clean up any and all aspects.
Write a README or a TDD once the project is done to capture how it works, the problem being solved, or any gotchas, limitations, or ideas for future development. But this should go at the end. Lean in to software's strength of exceptional flexibility and adaptability and stop treating it like a physical thing.
I think the process highly depends on what you know about the requirements and what is unknown. More established and standardized fields like mechanical engineering already know a lot of things upfront and can specify/document what the project is supposed to implement without having to tinker and experiment first.
That may also be true for some types of products in SE that have been built hundreds of times before or some types of problems that we already know how to solve. But with many other projects, a lot of uncertainty comes along, where we don’t even know what we don’t know yet.
In these cases it can be a great waste of time and energy to try and fix design decisions upfront, based on a theory of how things supposedly will work. In my experience, a lot of important insights can only be generated while you operate with the matter at hand (or at least imagine doing so, but this is very hard and inaccurate in more complex cases).
Therefore, I believe that doing small, isolated experiments/prototypes before writing any documentation (and certainly before writing production-level code) is a crucial first step that should not be avoided unless you have a very solid understanding of what you’re about to build.
I agree, however, that we shouldn’t start coding for production before doing any sort of research and design step first (although I don’t think that it is a linear process – we might have to cycle through these stages repeatedly).
I have worked in graphic design for many years before doing SE and my more challenging projects always needed multiple practical experiments – planning everything in my head upfront never got me anywhere. Of course, GD and SE cannot be compared 1:1, but both involve research and design.
Of course, planning experiments can still be fruitful and it certainly helps to formulate scope and intent beforehand so you know what problems you are trying to solve (this is how I would see the role of the press release mentioned in the article).
For more complex, long-term projects, I like to keep a dev journal where I can document my research and approaches while working on experiments and prototypes. This helps me a lot later on to re-trace my thinking process, and the act of writing certainly helps to organize and focus my thoughts. I don't think of this as a technical documentation/specification, but maybe it is one of the things the author of this article had in mind.
Like some other commenters, I disagreed with the headline. But the article says something more subtle than 'write documentation first'. Its points are:
* Write documentation alongside, ie as you go
* Use writing documentation in English (or your language) as a way to clarify your thoughts: the process of writing is a tool to ensure something is well thought through
I disagreed with the headline because I interpreted it as writing end-user documentation, effectively a very detailed specification document, up-front and then writing code to match. In reality, coding is a process of discovery, and you need the flexibility to allow the implementation to change. Your goal might be known, but the path to that goal is flexible.
The article pitches writing that goal upfront. Visicalc had a few sample formulae. Nest had a press release written. But for both, and it's explicitly called out for Visicalc, the actual user-facing documentation was written alongside the code as the product developed. This solves the problem of not having documentation ready when the product is ready, too.
In short: misleading headline, insightful article. I'd like to take its suggestion of writing docs alongside implementation as implementation develops on board myself.
I think the word documentation is not used correctly in the article. The article seems to suggest small snippets of text, ranging from one short paragraph to a few pages.
It also goes on to describe the documentation as a memo / press release / reference card. But none of these are really documentation.
It sounds like the PRFAQ method, which is already in use by many teams.
It's good, especially because it aligns people across all verticals of a company.
A lot of good ideas come through trial and error. After a certain while of using a product a specific way whole new pathways can open up. It's really difficult to know what your end product is supposed to look like, if you're working on something novel.
If you're building something that already exists, sure, then you can structure and optimize it during the design phase (and should).
Hmmm, I think it depends on how much documentation. Design docs are important to clarify and communicate the intent. But typically (at least in my personal experience), designs evolve during the implementation as better understanding and consideration of edge cases come to light. I now personally favour lightweight PoC projects for any non-trivial development project.
Tangential to the article but I’m curious to see what the app looks like once it releases. I’ve been thinking a lot recently about how software for editing text (not authoring) is woefully unexplored.
When it comes to other mediums like video, software has maintained the film/edit division—one captures video with a camera then uses dedicated software to edit it. It remains a two-step process and the division between steps is reinforced by the design of supporting software.
With writing, the opposite has happened. Software helped collapse the two-step divide between writing and editing. Text editors serve a dual purpose in that one can both produce the initial content and edit it thereafter all at once and seamlessly, in one tool, in one session. However, I remain convinced this is a bad way to do it and the two-step approach is the correct one. It’d be interesting to see what a tool dedicated only to the editing phase could look like (so, you can alter existing text but cannot use it to produce novel material).
> It’d be interesting to see what a tool dedicated only to the editing phase could look like (so, you can alter existing text but cannot use it to produce novel material).
Scribus [https://www.scribus.net/] specifically states that it’s meant for editing already-written text; I’d imagine that other publishing software (e.g. InDesign) works fairly similarly.
Always upvote articles that imply Agile can be lacking. Really, meta-processes are the way to go, and they can be defined by checklists. E.g. for this effort, will documentation be helpful? Check yes/no. Then you aren't blindly just doing stuff because it is or is not part of the process. The process itself is configurable.
I like the documentation first approach, but I understand that it does not work for everyone.
Designing a program (on paper or the screen) before you start coding is still the recommended textbook approach from university courses and books.
I think of documentation in the broadest form. For example, flow diagrams, or rough wireframes of screens in your app. Figma (and other tools) is good for creating rough app prototypes with short annotations describing features. You get the benefit of fleshing out flows and screens or pages (before coding) but without heavy documentation.
A popular UX technique you can try by yourself (or with colleagues) called 'Design The Box' can also be used before you start coding. It's an exercise to articulate the key features and benefits of your app: https://gamestorming.com/design-the-box/
It's not about using the pen and paper before the build. It's about building prototypes before the actual product. There's nothing wrong with prototyping the software using code and building it directly. The problem is when the prototype is automatically elevated to product rank once it starts working. In that case, everyone treats the prototype as if it were a product, but in reality it's still a prototype.
So instead of using pen & paper, just use the code to prototype things, because it's 2022 and it's possible to do so. Just make sure to properly transform the prototype into a full-fledged product when the time comes, possibly by rewriting the implementation, maybe even changing the implementation language (e.g. Python for prototyping, C++ for implementation).
If I'm writing some small tool, I often start with fictional workflow examples (something like the sketch below). These get adjusted as I discover ways that the implementation can be made simpler or easier without any significant changes to the workflow.
Other times I find that the workflow is annoying and that a larger complexity in the implementation is well worth it for a nicer workflow.
What I'm trying to convey is that it's often better to BOTH have a loose idea of how it should be used AND how to implement it, but be ready and able to change both implementation and workflow/documentation as new discoveries are made.
In the end, if you know the implementation AND the workflow EXACTLY, it's because the program already exists and you should just use that instead.
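To make that concrete, here is a minimal sketch (in Python, with an invented little log-filtering tool; the name, flags, and behavior are all hypothetical) of a fictional workflow written down first, followed by a first implementation that tries to honor it:

    # Imagined workflow, written before any implementation exists:
    #
    #   $ logpick errors app.log                 # show only ERROR lines
    #   $ logpick errors --since 12:00 app.log   # only ERROR lines after noon
    #
    # If the implementation later makes "--since" awkward, the fictional
    # workflow gets adjusted instead of being forced through.

    import sys

    def pick_errors(path, since=None):
        """Yield ERROR lines from a log file, optionally only those after HH:MM."""
        with open(path) as f:
            for line in f:
                if "ERROR" not in line:
                    continue
                if since and line[:5] < since:  # assumes lines start with HH:MM
                    continue
                yield line.rstrip()

    if __name__ == "__main__":
        for line in pick_errors(sys.argv[1]):
            print(line)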
I tend to do the exact opposite. Only when the program is fully done (if ever) do I document it (beyond peppered lines of documentation that explain something that is not obvious from the code). There is a good reason for this: premature abstraction. As others have pointed out, when I start to work on an idea, I’m not sure about everything. Working on it will reveal mistaken assumptions and blind spots in my current thinking. I have also learned to abstract behavior out into functions lazily. Functions defined around an immaturely developed idea are often a hindrance to flexible experimentation.
I think the term 'documentation' (in a dev context anyway) is a daunting one. Documentation is hard. Time is poor. And things always change in build, often big things. Knowing the process, why would you presume the outcome?
That said, even on personal projects I do like to write a readme.md (a scope of work "lite" - audience me) outlining the intention and key features of whatever it is I am planning to create. This isn't a massive commitment, neither detailed nor prescriptive, but it does provide a reasonable amount of guidance and milestones. But it's not documentation.
I find that the quality of my output tends to increase if I'm Literate Programming. And in that regard, documentation is like testing: whether you do it first or second doesn't matter as long as you do it and integrate it into your process.
I tend to find that I'm chasing an idea and hacking code together without updating the docs. Then I hit a complexity wall and go back and look at the big ball of mud I made, and break it into pieces, updating the documentation for each piece, and then I get a refactor idea that helps me simplify the code.
This is worse than TDD "write the test first" extremism.
The # of projects that have all the requirements perfectly laid out to enable this is... effectively zero.
I can see something where the requirements and iteration/convergence artifacts lead to some structured documentation ... maybe, that's an interesting idea.
The reason documentation comes last is that you don't know what comes out the end of the sewer pipe after the vagaries of "Faster, Better, Cheaper: pick two"... well, actually it's "Faster, Better, Cheaper, Documented, Process Followed: pick 2.5".
As someone who just got promoted into engineering management from development, the number one cause of frustration, delays, and confusion was always a lack of documentation, either in the requirements or in how the system works in general.
It’s now my number one priority to ensure that we have a proper framework for documenting requirements (i.e. no more two-line “change this functionality because reasons” tickets) and a centralized wiki that documents all features and how they work, along with any quirks that exist.
I can't remember the last time I worked somewhere where nobody wrote down anything. Sure, sometimes not enough and sometimes too much. But in general people write down specs in stories, tasks, pitches, presentations, wireframes, slick UIs, etc. Even for my own play projects I generally start out with an outline of some sort.
But a total spec upfront brings back bad memories from the waterfall world. I have also had discussions with folks who thought that agile means no specs/documentation at all.
For work projects, I often write a short README file first, and share it with my team as a sanity check before writing a single line of code.
It's just a rough draft which says "here is the problem I think we are solving, and why/how we're solving it". If my team has feedback, I update the README. This is easier to keep track of than Slack conversations.
If necessary, this document can be polished later, when other people need to use the thing we built. But it's nice to have a starting point.
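As a purely hypothetical illustration, such a rough-draft README can be just a handful of lines (the project and every detail here are invented):

    Problem: rotated logs fill the app servers' disks and nobody notices
    until alerts fire.

    Proposal: a small cron job that compresses logs older than 7 days,
    ships them to object storage, and deletes local copies after 30 days.

    Out of scope (for now): search, dashboards, per-team retention policies.

    Open questions: which bucket do we use? Who owns the cron host?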
I feel like a more appropriate name would be “write first, then build”. “Documentation” reminds the product managers and engineers of corporate-y documents like release notes. The “documentation” described is more of a fluid collection of ideas than what we would consider “documentation”. I have found this to be true - “Writing is a way of finding out”.
This advice works only if 5 prerequisites are met:
- The coder is fluent in programming. Beginners will not know how to architect things and will need to fiddle way more. It's impossible for them to create a doc that makes sense off the top of their head.
- The coder understands the problem well, and it has a known solution. Many problems are better understood while trying to solve them, by exploring their space while programming. It's impossible to start documenting something you don't know very well. That's especially true if you have to come up with a novel solution.
- The problem is well defined. Unlikely. Clients don't define their problem well, and it's half our job to extract the information from their brains. While it is sometimes possible to get that on paper and formalize it, most of the time programming an incorrect PoC and iterating on it is a better interface for communicating with the client: they have a visual result to discuss. Writing a doc that you trash every day would be a waste.
- The coder understands the field well. I'm currently working with a bank, and I don't know the field. Many algorithms I must implement live only in the heads of my clients, and the quickest road to understanding them is to pair-program them with my clients. They attempted to write a formal doc first, of course. After 6 months, they still don't agree on how to do some things. One week of coding with me forced them to confront reality and take concrete decisions.
- The coder has the big picture. Starting with a doc assumes you know what your API should look like. I would recommend that you do know; if you don't, you are probably in trouble. But it's a fact of life that sometimes you can't figure it out, and starting from something incomplete and coding it will make the big picture clearer.
That's a lot of pre-conditions.
My experience is that most teams starting with the doc (or the tests) don't ship, or end up shipping only once they start fiddling. Those who do are excellent teams, the best ones, and I love working with them. But they are a rare breed. We are usually an average team, not ticking all the pre-conditions, and not composed only of very skilled or experienced professionals. We fiddle a lot.
And unless you have a great team leader, taking such an ordinary team and making it write the doc first will result in a terrible doc that doesn't match the solution to the problem, one that you will rework again and again while nothing gets into the hands of the client for concrete feedback.
I favor a middle ground: start with paper, but don't pretend you are going to have a full spec.
IMHO, the process of software project development is iterative and non-linear. Writing some docs followed by some code, then writing/editing more docs, then more code, tests, experiments ... rinse and repeat. I found that any rigid structure like docs-before-code or code-before-docs to be counterproductive.
Yes but we all work differently. I love to come up with ideas as I build. It doesn't scale well with a team, but can allow you to be quite creative to start building something without everything planned. Maybe I am alone on this one.
Aren't we supposed to write unit tests before we build, too? The benefit of that is that if you have a CI system the tests have to stay current with the code, whereas documentation quickly gets out of sync with the actual code.
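One middle ground worth noting: examples embedded in docstrings are documentation, but a CI job can execute them, so they can't silently drift from the code the way a separate document can. A minimal sketch using Python's standard doctest module (the function itself is made up for the example):

    import doctest

    def slugify(title):
        """Turn a title into a URL slug.

        >>> slugify("Write the Docs First?")
        'write-the-docs-first'
        """
        # Drop anything that isn't a letter, digit, or space, then hyphenate.
        cleaned = "".join(c if c.isalnum() or c == " " else "" for c in title)
        return "-".join(cleaned.lower().split())

    if __name__ == "__main__":
        doctest.testmod(verbose=True)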
Cobble together stackoverflow code, don’t write docs, instead hire entry level devs to fix the cryptic levels of technical debt in your code, reward them with seniority for how long they endured.
My grandpa worked like this on a naval propulsion factory :)
They had quite a long development lifecycle - decades for every new engine from inception to series production.
One problem for me is that I lose motivation after writing the documentation, thinking that burdensome implementation of the details should be left as an exercise to others.
That’s a dangerous notion, because you might be overestimating your ability to think things through in detail. It’s typical to revisit a spec and iterate on it. Also, a program literally is a specific spec for a given computing environment, and there are often many interesting things to figure out.
I've had a great deal of experience with the "Measure Twice, Cut Once" approach. Most of my career was at hardware companies, and hardware development tends to be about as Waterfall as you can get. It was incredibly frustrating, because they insisted that software be developed the same way as hardware. Efforts to tell them that software is different were met with sneers, and accusations of being "lazy," or "sloppy."
The OP was done by a professional writer, so it's not surprising they have a "documentation-first" approach. I tend to use an approach I call "Forensic Design Documentation"[0], and "Evolutionary Design"[1].
Knowing what you want is not easy. That's the single biggest problem with a "requirements-first" approach. I've learned the value of rapid prototyping, and getting involvement from non-tech stakeholders, as early as possible, so the project design is iterative, and hands-on.
But there was this chap named Damocles, and his sword is often brought up, in situations like this...
"Rapid prototyping" is often taken as license to do sloppy work. The single biggest issue with rapid prototyping, is that management always assumes that the project is "done," and cut the schedule and budget, forcing you to use the junk prototype as the ship product. After a couple of these, I learned to make my prototypes top-quality, if functionally incomplete. That way, by the time management says "f**k it, let's ship!", the project is actually shipshape.
And I am careful about documentation. I tend to use a lot of embedded documentation (headerdoc-style)[2]. That's because it "ages" well, and tends to be a lot less of a "concrete galosh."[3]
I have to be willing to throw away a lot of code, and use implementation techniques that afford flexibility.
I've been working on a fairly large-scale iOS app, for the last couple of years. It's about 40 screens (Most of my apps are 2-5 screens). It also has a fairly substantial backend (that I also wrote).
One of the reasons that it has taken that long, is because I have thrown away -literally- months' worth of work. I spend a lot of time on each screen, leaving placeholders, or disabled selectors, for unimplemented ones. Each screen is done as "ship" Quality. People aren't testing a prototype; they are testing the actual ship code.
I call this "constant beta." The app is always at ship Quality. Localization, accessibility, and "white glove" testing are being done, right from the start.
This means that integration testing (the most important type of testing, in my opinion) starts almost immediately. Since this is iOS, I can use Apple's TestFlight beta-testing system, to distribute the builds to the non-technical stakeholders. I generally do a couple of releases a day. I have done over 600 releases, so far.
Bugs get found quickly. It's important to find bugs as close to the point they were created, as possible. Since this is integration testing, whole subsystems get tested; not just individual modules. Also, the testers are much closer to the actual target audience, than you would get with professional QA teams. I tend to prefer the use of test harnesses, as opposed to unit tests, for this reason[0].
A lot of times (it has happened numerous times, in this project) we "get what we want," and realize that it isn't actually what we want, or it's unusable/impossible. A lot of times, you can't get to that point, unless you have built the app pretty completely.
The absolute worst thing that can happen, is that you get exactly what you asked for. That's one of the problems with using outsourced shops and loosely-coupled distributed development. These techniques require heavy-duty specification, and those specifications often end up as rock-hard cement. No one wants to change them; even when it becomes clear that the result will be less-than-satisfactory. I've been through that, a number of times.
But the crap-quality MVP isn't (in my opinion) an acceptable solution, either. Every now and then, one gets a home run, but, more frequently, we end up with foul balls. The brand gets damaged, the engineers quit in disgust, the money gets used up, and it becomes difficult to pivot.
One thing that I often do, is spin off code that I'm not gonna use, into subprojects, for reuse in other projects. Here are a couple of examples:
I'm not using these in the project (they were written for the project, but we decided they weren't appropriate), but they are available for other projects.
But how can writing and meetings align code that doesn't yet exist? Are you a believer that, while writing documentation, you can anticipate each code challenge and solve it ahead of time?
Just like "two programmers can always achieve in two months what takes an entire month for a single programmer", the one we imported from sciences "a short week programming can save you many hours of thinking", and other similar famous phrases, this is true when it is; when it isn't, it's false.