A Philosophy of Software Design (stanford.edu)
190 points by belter on Oct 25, 2023 | hide | past | favorite | 110 comments


Related. Others?

A Philosophy of Software Design – Book Summary and Notes - https://news.ycombinator.com/item?id=31248641 - May 2022 (34 comments)

A Philosophy of Software Design, 2nd Edition - https://news.ycombinator.com/item?id=28975872 - Oct 2021 (1 comment)

Book Review: A Philosophy of Software Design (2020) - https://news.ycombinator.com/item?id=27686818 - June 2021 (61 comments)

Book Review: A Philosophy of Software Design - https://news.ycombinator.com/item?id=18331219 - Oct 2018 (51 comments)

Notes on “A Philosophy of Software Design” - https://news.ycombinator.com/item?id=17906662 - Sept 2018 (32 comments)


[redacted] are my favorite people on complexity. [redacted] was notably my professor for operating systems @ [redacted]. He taught not just ways to design software, but also ways to live.

[redacted]’s work is also directly influenced by [redacted], as referenced at the end of his article.

For [redacted], I suggest looking at the class website: [redacted]

For [redacted], I suggest his book, [redacted]


I'm a fan of Ousterhout's writing; "A philosophy of software design" clarified a lot of my thinking around complexity.

I find the "Grug Brain" stuff pretentious and dishonest. "Me not smart. Me like simple things. Me not believe in hype. Hence why me invent complex new frontend framework, and then me hype it up beyond reason."

(To be clear what I'm saying, my point is not that HTMX is overhyped -- it might be, but then so is everything. It's specifically the hypocrisy of the "we're against hypes" hype that makes me cringe.)


I feel the exact same about grug. I don't think people actually agree on what's simple, so it's pretentious to pretend your "simple" is the obvious one that a caveman would agree with.


Grug has some good points about complexity, but proceeds to take it all the way to full-blown, self-defeating anti-intellectual and anti-professional attitude. It makes sense for imaginary cavemen, but not for practitioners of a profession - people both capable and expected to learn the tools of their craft and continue learning past their first job interview.


ironically, i teach at a university

agree that anti-intellectualism is a danger if you take grugbrainism too far


Do you think this would be beneficial for a somewhat perplexed CTO of a newly seed-funded startup? Or would it be wiser for me to concentrate on my current responsibilities and return to this book when my mind is clearer? I'm feeling a bit overwhelmed and under pressure, and I'm concerned that reading this book might add to the chaos. I would appreciate your input.


APoSD is a relatively quick high-signal read, and not really a source of additional chaos. You can get through it in a weekend or less.

It mostly gives you vocabulary and labels and explanations for things that you may already intuitively understand, and teaches you to notice small things that matter. It will probably make it easier for you to discuss and dissect some of the chaos you're already dealing with.


Systems software design (SSD) is too different from application design to automatically carry conclusions from one to the other. Things like OSes and database engines are designed by mostly logical engineers, for logical engineers.

Business and administration apps, on the other hand, reflect screwy, random-seeming legislation and management whims, which often change in unexpected ways. Management doesn't care much that their screwy rules and processes complicate automation. (Or they don't comprehend the impact.)

I noticed this in debates where SSD experts showed code patterns that assumed too much uniformity between variations of concepts (sub-types, etc.). They just wouldn't fly in biz apps.

I lean toward using flags/tags to manage variations on themes instead of sub-typing, composition, or dependency inversion. Variation granularity has to be small in these domains.
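A minimal sketch of what that flag/tag style might look like, versus a subclass per variation (every rule name and number below is invented for illustration):

```python
# Hypothetical sketch: fine-grained business-rule variations driven by
# tags on the record, instead of one subclass per customer type.
# All rule names and numbers are invented for illustration.

def invoice_total(amount, tags):
    """Apply whichever rule variations this record is tagged with."""
    total = amount
    if "late_fee" in tags:
        total += 25.0            # flat late fee
    if "loyalty_discount" in tags:
        total *= 0.95            # 5% off for loyalty members
    if "tax_exempt" not in tags:
        total *= 1.08            # default sales tax
    return round(total, 2)
```

When the rules change (as they constantly do), adding or retiring a tag touches one branch, whereas a subtype hierarchy would need restructuring for each new combination.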


> He taught not just ways to design software, but also ways to live.

Mind sharing his insights on ways to live?


Not parent commenter, but I recall seeing this talk of his posted online before: https://gist.github.com/gtallen1187/27a585fcf36d6e657db2


Rich Hickey is the odd one out. What do you see in someone who shuns static typing? Static typing is the clearest way to reduce complexity.


static typing is a way to manage complexity, Rich Hickey is about avoiding complexity


But there's no way to completely avoid complexity.

So tools to manage complexity are welcome in my toolbox.


Tools have their defined use cases. If you use the same tool in all cases, you’re probably misusing it.


Probably the best book I’ve read on this subject. There’s a lecture by the author[1] as a quick overview of what the book is about.

[1]: https://youtu.be/bmSAYlu0NcY


One thing I liked about the book was that it emphasised the conceptual difference between 'interface' and 'implementation'. "Interface" is "what you need to know to use the module".

Some module is "complex" if you need to know more than the interface suggests in order to use that module (e.g. you have to know implementation details); or if the interface requires irrelevant details.
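As a rough illustration of that definition (a hypothetical example, not from the book): the first module below forces callers to know an implementation detail -- results are cached and `refresh()` must be called first -- while the second keeps the same knowledge behind its interface.

```python
# Leaky: callers must know the implementation caches results and must
# remember to call refresh() first -- knowledge beyond the interface.
class LeakyConfig:
    def __init__(self):
        self._cache = None

    def refresh(self):
        self._cache = {"retries": 3}

    def get(self, key):
        return self._cache[key]   # crashes unless refresh() was called

# Deep: the interface is just get(); the caching is invisible to callers.
class DeepConfig:
    def __init__(self):
        self._cache = None

    def get(self, key):
        if self._cache is None:
            self._cache = {"retries": 3}
        return self._cache[key]
```

In Ousterhout's terms the second is "deeper": a smaller interface hiding more of the implementation.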


His book is awesome! One takeaway I had: code reviews matter. If your code is undergoing review and a reviewer tells you that something is not obvious, don’t argue with them; if a reader thinks it’s not obvious, then it’s not obvious.


I think the issue is _when_ you make the review. Psychologically, suggesting a breaking change after someone has invested in a bunch of tests and verification will raise their defenses.

I think there is value in doing a review (especially with junior team members) on their 'design intent' early on, as soon as a prototype/skeleton is up. Proposing a change then is met with a lot less friction.


Many years ago, I was on a working group looking at different ways to improve quality within a fairly large software development company. One of the most useful ideas the consultants brought to the table was that we should have “technical reviews” at key stages in the development process. The arguments back then were much the same as they are today: identify potential problems the original developer missed, share interesting ideas in time to consider and use them, making changes earlier is usually easier and cheaper, etc.

Today code reviews have become common practice, which is definitely a change for the better, but I rarely see assets like specs or design work reviewed with the same consistency and attention to detail. If anything, the trend has been to minimise formal requirements capture and to reduce or eliminate any kind of up-front design work, as if there is some kind of bizarre dichotomy where the only amounts of time you can invest in these activities before starting to write code are measured in either months or minutes. I believe this is a mistake that often leads to avoidable wasted work and tech debt.


But don't forget that there are two reasons why something will not be obvious: It may not be obvious because the meaning is obfuscated, or it may not be obvious because it relies on the reader understanding certain ideas, concepts or technologies.


This. I once had a code review with a new hire who couldn't understand a for loop in C#. Something almost as simple as the snippet below. Their resume showed a B.S. in CompSci from a CalState school. But they professed "I never understood loops".

  for(int i = 0; i < 100; i++)
  {
    newList[i] = sourceArray[i];
  }
I've also been in working environments where management has insisted that the code reviews be moderated by someone who was a mechanical engineer with no code/software training, background, or experience. I didn't particularly enjoy those....


>One takeaway I had: code reviews matter. If your code is undergoing review and a reviewer tells you that something is not obvious, don’t argue with them; if a reader thinks it’s not obvious, then it’s not obvious.

ok, but what about when you have code that you think is not obvious because of an edge case in browser X version Y, so you leave a long comment specifying why it is the way it is and under what future conditions it should be removed - but the reviewer thinks it is obvious and asks you to please remove the comment?

As a general rule reviewers' concerns should be addressed, but I have had some experiences in which what the reviewer wanted made the code worse, or even would have introduced hard-to-find bugs.


If I ask my kids or my girlfriend to review my code, nothing will be obvious to them. Doesn't mean that my code is the problem. The idea that the reviewer is always right makes no sense.


Your family is not the intended audience for the code. If the reviewer isn’t either, then they have no business reviewing your code.


I can agree with that. But the intended audience for your code isn't as rigid and formalized a thing as review processes make the reviewers' identities.

That actually means that review processes are usually wrong. But then, people's experiences are of the wrong process, and that's what they react to.


good point but it's culture dependent

i had people telling me

    d = {i: str(i) for i in range(10)}
was too hard to read, preferring

    d = {}
    for i in range(10):
        d[i] = str(i)


I also strongly prefer the dictionary comprehension and find it much more readable - for whatever my 2c is worth.


to me it's even beyond readability: it's a closed one-liner scope, so there's less opportunity to insert some weird statement / bug into it

but anyway, to some senior engineer, the dict comprehension is a chore


I think Ousterhout has an interesting approach to software complexity and its causes. In a nutshell, he suggests it's composed of (cognitive) dependencies between the software components, and obscurity. If anyone has more reading suggestions focused on software complexity, I'd like to hear about them.


Thinking Forth.


Perhaps ponder…

“If any part of a system depends on the internals of another part, then complexity increases as the square of the size of the system” — Dan Ingalls


Is there a DRM-free ebook to buy somewhere? Someone mentioned in a recent HN thread that the Calibre plugin to read Kindle books no longer works, so I'm not sure I want to buy the Kindle version right now.


No, but you can find the first edition online, and the new chapters of the second edition are available for free via TFA.

There’s also a DRM-free German translation of the second edition from O'Reilly.


I can't at all agree with the added differences of opinion with Uncle Bob's Clean Code. The author argues that reducing the size of a function of "a few dozen lines" likely won't improve the readability of the code. That's not even defensible! I would fail any (professional) code review for having that many lines of code in a function. It's a complete failure of abstraction. He later goes on to say "more functions means more interfaces to document and learn." This is also smelly. Clean code is self-documenting. It should be simple to read because it should read like prose. If levels of abstraction are kept to one per function, it's easy to understand what is happening. One would only drill down into functions if it were necessary to grok the implementation details of an abstract function.

The comment on comments I also disagree with, although that's more contentious. In general I agree with Uncle Bob that comments are usually apologies. Most of the time the code should be refactored to not need comments. Ousterhout does bring up the point that names can sometimes become verbose as a result, but I can't be convinced that's a universal evil, in the same way I can't be convinced that comments are a universal evil.

>And, with this approach, developers end up effectively retyping the documentation for a method every time they invoke it!

In what way is that a negative? This is a feature of self documenting code! This is what makes good code so great! This seems like some degree of misunderstanding of Clean Code.

I'm compelled to pick up this book, however.


> The author argues that reducing the size of a function of "a few dozen lines" likely won't improve the readability of the code. That's not even defensible! I would fail any (professional) code review for having that many lines of code in a function. It's a complete failure of abstraction.

Can you elaborate on how this is a failure of abstraction? Or maybe what abstraction means to you?

To me, abstraction is about simplifying a problem space by making assumptions about the use case.

A hard disk is a bunch of spinning platters that can store 0s and 1s. The drive controller abstracts that by presenting it as if it’s just one continuous stream of 1s and 0s. The operating system abstracts that and presents it as a file system so you can say “shove this data into this name” then later “give me the data in this name”.

The drive controller simplifies the interface by assuming that “where this goes on the platters” doesn’t matter. The operating system simplifies the interface by making many assumptions about where and how we want to organize this data within that stream. This greatly simplifies the use case of… well, basically everything. Except for the cases it doesn’t cover, where the abstraction simply no longer works.

All I’ve ever seen “Clean Coder” style short methods do, as far as abstraction goes, is reduce it. Instead of a call tree 12 layers deep that finally abstracts a problem away as a single operation, people give up halfway through and leave 6 methods that need to be called to do one conceptual thing, meaning the caller needs to understand the underlying implementation/concerns and there’s no longer _any_ abstraction. This isn’t inherent to that approach, but definitely seems encouraged by it.

Shorter methods may (I’d disagree, but understand) help reusability. But whether your `download(string url, string path)` method is a single hundred line method or is composed from parseUrl, resolveDomain, openTcpConnection, sendData, buildHttpRequest, receiveData, receiveHeader, parseHttpResponse, openFile, writeFileData, closeFile, closeTcpConnection… the abstraction is the same if the interface is the same: retrieve a URL over HTTP and write the body to a file. You don’t care about HTTP, sockets, DNS, URL formatting, or anything else.
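That argument can be sketched concretely. Reframing the example as a self-contained file copy (so it runs without a network; all helper names are invented), both versions below present an identical interface, so callers see the same abstraction either way:

```python
# Same interface, different internal decomposition: callers can't tell.

def copy_monolithic(src, dst):
    """One function: read src, write its bytes to dst."""
    with open(src, "rb") as f:
        data = f.read()
    with open(dst, "wb") as f:
        f.write(data)

def _read_bytes(src):
    with open(src, "rb") as f:
        return f.read()

def _write_bytes(dst, data):
    with open(dst, "wb") as f:
        f.write(data)

def copy_decomposed(src, dst):
    """The same contract as above, split across helpers."""
    _write_bytes(dst, _read_bytes(src))
```

The abstraction lives in the `(src, dst)` contract, not in the function count behind it.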


> The author argues that reducing the size of a function of "a few dozen lines" likely won't improve the readability of the code. That's not even defensible! I would fail any (professional) code review for having that many lines of code in a function.

I would argue that such absolutist rules are harmful. Yeah, there certainly are times when smaller functions are better. But there are times where separating a function into smaller ones does indeed make it 'harder' for me to read, since with each 'abstraction' you're losing context/details that might be very relevant in the code to follow. I would rather have a 40 line function with the dirty low-level details rather than the same split over 50 lines of different functions that I need to go into and figure out the details.

And then again, who is to decide what is 'one abstraction'? Abstractions can be at different levels, and abstracting different things. There really isn't an objective way to do it. I would argue that this is analogous to writing -- there are all kinds of books/stories/poems, short and long etc. and one isn't necessarily better than another.


> I would argue that such absolutist rules are harmful.

Indeed. There are millions of us out there developing software for numerous different applications with numerous different trade-offs. “Never say never” is probably good advice here.

I once had a discussion with a prominent member of the ISO C++ standards committee about the idea of labelled break and continue statements of the kind found in various other languages, which let you affect control in an outer loop from within a nested inner one something like this:

    outer_label:
    for (int i = 0; i < 10; ++i) {
        for (int j = 0; j < 10; ++j) {
            if (something_interesting_happened) {
                respond_to_interesting_thing();
                break outer_label;
            }
        }
    }
They were essentially arguing that such a language feature should not be necessary because you should never need deep nesting of loops in a program with good coding style anyway.

At that time, I was working on code where a recurring need was to match sometimes quite intricate subgraphs within a large graph structure. This is known as the subgraph isomorphism problem¹ and it’s NP-Complete in the general case, so in practice you rely on heuristics to try to do it as quickly as you can.

That’s a fancy way of saying you write lots of deeply nested loops with lots of guard conditions to exit a particular iteration as quickly as possible if it can’t possibly find the pattern you’re looking for. 5–10 levels of indentation in a function to find matches of a particular subgraph were not unusual. Functions at least 50–100 lines long were common and longer was not rare. It probably broke every rule of thumb the advocates of short functions and shallow nesting have ever written.

To this day, I believe it was probably also the most clear, efficient and maintainable way to write those algorithms in C++ at that time. But it would have been clearer with labelled breaks.

¹ https://en.wikipedia.org/wiki/Subgraph_isomorphism_problem
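For what it's worth, the standard workaround in languages without labelled break is to extract the nested loops into their own function and use an early return; a minimal sketch (in Python, with a hypothetical matcher):

```python
def find_cell(grid, target):
    """Return the first (row, col) whose value equals target, else None."""
    for i, row in enumerate(grid):
        for j, value in enumerate(row):
            if value == target:
                # A return exits every loop level at once,
                # playing the role of `break outer_label`.
                return (i, j)
    return None
```

This keeps the guard-and-bail structure without a flag variable, at the cost of a function boundary that a labelled break wouldn't need.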


Good code to me usually comes down to things like state management, code organization, ... Having code that reads like prose isn't a high priority to me, but I'm familiar with that style and I can see why people like it. I just wish they'd realize they're expressing an opinion on style instead of a fact.


Haven't read the book yet, but it seems to be about design principles, abstraction, divide and conquer, single responsibility and the like. Man-made objects tend to be single-purposed and to interact with few other components. Over the years I have come to appreciate nature more and more. In nature, things are always multi-purposed and exist in a web of relationships. A bat is a pollinator, pest control, fertilizer and food source all at the same time. But if we are to replicate the role of the bat, we need seeds, pesticide and artificial fertilizer. And if we want to grow only one kind of crop, we will need even more control. See monoculture [1]

In software, every named domain can and will grow into its own forest. There was once only nginx (or apache) serving HTML; there wasn't "frontend" and "backend". Nginx also served the analytics. Now each of those means its own thing. There is and always will be manual interaction with the software: we first interact manually, then write an automated script, then interact manually to make a demo video, then to make a tutorial, then to illustrate the user journey, etc. Telemetry is "engineering", analytics is "product". The division goes on forever. Man-made concepts grow into isolated forests, and the more division, hence specialisation, we have, the more gluing we need. People seem to talk about how infinite scaling is a fool's errand. I say so is specialising without seeing the forest first.

I find occasionally zoom into other direction immensely helpful and refreshing.

[1] https://en.wikipedia.org/wiki/Monoculture


That’s because your example, a bat, is a high level of abstraction. If you dig into the cellular level things start becoming very single-purpose, even more so down to the atomic level, and so on. The architecture of recursive composition usually done in building large software was found in nature first.


On the contrary, once you start digging down into details, you'll see living things made of multi-purpose components with fuzzy boundaries, going very much against "single responsibility principle" and the like. That's because evolution isn't like human engineers, who need to walk up and down the abstraction ladder so things they care about fit in their head. Evolution is brute-forcing the problem space, using a greedy optimization algorithm. It doesn't need to remember anything.


Bats are single celled?


> People seem to talk about how infinite scaling is a fool's errand. I say so is specialising without seeing the forest first.

I started out disagreeing with your point, but I think this last sentence captures what stuck with me. It's similar to the rule-of-three practice (three strikes and you refactor).

I find it's always hard to keep the whole team aware of the forest; maybe that's why I've seen so much premature specialization.

How do we keep a team aware of the forest?


People seem to have an intuitive understanding of what matters most to a ramen shop: have ramen, get it eaten, happily, in that order. And then we can talk about decor, hygiene, side dishes, etc. It is so muddy when it comes to software. People spend endless energy debating the lightbulb of a ramen shop. "What is your ramen? What is OUR ramen?" is something I try to bring up to the team, with varying success. Sometimes it is more effective just to tell people what to do. If we see people as code, then some of them have a single responsibility with a simple interface. It really depends on what kind of a team we want.


WRT the comparison to a ramen shop, it would seem that the "intuitive understanding of what matters most" is all about the product, and the value and experience it delivers. In software, almost all of the debates are about the process. Code style, culture, CI/CD, code reviews, scrums and scrams... this is all process. The product? I suppose it's kind of a given. Or perhaps it's just not in the programmers' power to influence enough. IDK. There's some depth to be extracted from your analogy.


Communication, which is something some people/teams/organizations are better at than others.

Also, some people are better system thinkers than others, usually more experienced. You need someone to understand everything at a deep enough level to be able to teach it on every team.


Communication, with clarity and repetition.


In addition to 'A philosophy of Software Design', I would also recommend the less well known 'Object-Oriented Design Heuristics'.

It came out at the same time as the GoF book, but I think it is still relevant today and similar in spirit to Ousterhout's book.


i read this book. no, i studied it, but found it less about the philosophy of software design and more about how to deal with the current state of affairs. it does a good job there, in my opinion. but as a work of philosophy it undersells. one criterion: can a non-programmer study it and have any idea what software design should be about? in my experience, the answer is no. a philosophy of software design shouldn't necessarily target software designers/engineers. amazing book, wrong title.


An honest book would explain the trade-offs instead of say "design X is always better than Y". Context and domain matter. One-size-fits-all is wrong.


Have you read the book? I felt it did that quite well.

The overall pattern really seemed to be “I think we should do X because Y. This falls apart in the face of Z, so don’t do it there.”

That said, it’s been a couple years since I read it so I may be misremembering.


When I think of the philosophy of software I think of the operational semantics of programming languages, model semantics, Quine, Church, Barwise & Perry, et al.


Not directly related, but there's a recent three-hour interview of Dave Cutler (main architect of OpenVMS and Windows NT) where he mentions giving money for a university computer lab in his name, and requesting that on one of the walls be painted in large letters "If you don't put it in, you won't have to take it out."


"There is a new chapter 'Decide What Matters' that talks about how good software design is about separating what's important from what's not important and focusing on what's important."

I can almost hear the noise of team members arguing about what's important and what's not.

This book has been on my to-read list for some time.


Does the second edition fix the misconception of TDD? https://twitter.com/benjiweber/status/1037772606261350401


Is this book going to be readable on Kindle? It might be just me, but some software engineering books are hard to read on Kindle because code sections can end up with strange formatting.


No problem on Kindle, the book is mostly text and has only a handful of short code snippets.


Thank you


I read it on kindle just fine


Thank you


I personally loved the book.

I don't think it gets enough credit for keeping its message concise and approachable.

Plenty of nuggets of wisdom in there to glean from.


An alternative viewpoint on software:

https://www4.di.uminho.pt/~jno/ps/pdbc.pdf

Rather than using blurry, fuzzy concepts about software, this book is called "Program Design by Calculation".

It views software through the theoretical lens of math, science and engineering rather than "philosophy".

Should software design be interpreted using the blurry and hand-wavy concepts of philosophy and literature? Or should it be theoretically laid out completely, with all primitives formally specified like Newton's laws of motion? Can we model software in a very formal way and come to make EXACT statements and conclusions about program design rather than a bunch of opinionated takes?

Unfortunately, like all hard sciences, PDBC is much harder to understand than a "philosophy", so most people end up switching majors to philosophy.

Or perhaps it's not about the challenge... you just prefer the philosophical approach over the theoretical one. Your preference is very valid.

But to you I would ask: can you build a bridge, a car, or an airliner with philosophy? Or do you need hard formal theory and science? If we don't have a hard, formal theory of software design, are we limited in what we can build?


Do the business requirements (or, more generally, the fundamental assumptions around the design) of a bridge, car or airliner change dramatically over time?

I'd argue not.

Software design, in my opinion, is both a science and an art so my stance is that we need both the formal theory and science as well as the philosophy and that they shouldn't be viewed as mutually exclusive.


>Software design, in my opinion, is both a science and an art so my stance is that we need both the formal theory and science as well as the philosophy and that they shouldn't be viewed as mutually exclusive.

What formal theory have you ever used for designing your software? I would argue you've never used any. Every abstraction you've ever made was likely a gut feeling, an instinct, or followed some vague hand-wavy rule of thumb.

At best we use type theory for type correctness and complexity theory to calculate efficiency. That's basically as far as it goes with "theory" and these two things aren't even about "software design".

Software design in practice, as most engineers use it today, is, practically speaking (key phrase), 100% art. Sometimes people come up with big fancy words like "dependency injection" or stupid acronyms like SOLID to create the illusion of formal theory, but these things are nothing of the sort. They're just tips and tricks.

The plane, the car, the bridge? Those things use both design and formal theory... software design as most engineers use it, again, does not use ANY formal theory, it's almost just purely design all the way down.


Aren't you being overly optimistic about the engineering disciplines' ability to compute things? How does engineering work: do you sit down with a spec (say, "create a wing that can lift a passenger plane"), and then run some formulas, and end up with a wing? Or do you propose a wing, based on past experience (rough shape and size), and _then_ bring in computation to make sure it provides the correct amount of lift, iterating through designs as you go?

Because I think it's really the last one. There is no formula to design a bridge: you design the bridge, and then use formulas and lookup tables to validate it is strong enough. Same for pretty much anything else produced by any engineering discipline.

So in that sense, engineering and programming aren't all that far apart: both start from past experience, and use various validation methods after a design has been proposed, iterating through designs to reach an optimal state. Software engineering, being a younger discipline, is still working on validation methods, and many practitioners find they simply have no budget to apply them (nor any life-or-death constraints that force them to). That's ok though. If you design a new can opener you aren't going through all the processes that apply to an airliner either.


No. theory is very much part of the creation of the bridge. It is not just for verification.

If you read more carefully: what I am saying is that in software engineering, there is no theory. It's all design. All made up. There's also no verification of the design itself.


> There's no verification of the design itself.

I think this is mostly a philosophical statement, but it certainly has truth to it.

Code reviews / rework tackle mostly the code, but not the fundamental design. However the execution of the code tackles the design as well:

- If the design is wrong, the result is bad, that is detected at the execution phase.

- If the design is suboptimal (hard to maintain or extend, bad cpu load, hard to reuse...), well that is usually not solved. IMO that matches your point.


>I think this is mostly a philosophical statement, but it certainly has truth to it.

It's a fact. What theory is there to prove that a given design is the best possible design? What does "best" even mean? We can't formally verify in any way how good a design is overall.

We can verify efficiency, we can verify speed, and we can verify correctness. But design? We can't verify that.


How do you prove that a bridge is the 'best' design? What does 'best' mean, for a bridge?


That's what a full formal theory encompasses. We need to define the term formally. It can be done for software organization.

We've defined it for algorithmic complexity. It turns out to be two metrics: speed and memory. Best means the lowest N, with the primitives being algorithmic loops.

What are the primitive modules used in program organization? Given the 3 smallest possible primitive modules we can define in computing, and all the possible ways to compose those three modules, what composition would be best? What is the metric that best fits our notion of "best"? There may be several metrics here. These are the questions that are asked when formalizing a theory derived from intuition.

For the bridge it's likely balancing several metrics we already know. Safety, cost, length, etc. Once those metrics are quantified a theory exists to find the best. It's called optimization theory.


Those formulas are not reversible. You can compute load bearing capacity of a design, and you can compute cost of a design, but you cannot start with load bearing capacity and cost, run those through a formula, and get a bridge. Or even a sketch of a bridge.

Design comes first, always, whether it is for real-world items or for software. The design is then validated using formal methods, and iterated until it meets the required goal.

If you still want to maintain your position that you can compute a bridge, feel free to point out a source that describes the formula for bridges. Or any object that takes significant engineering, really.


>Those formulas are not reversible. You can compute load bearing capacity of a design, and you can compute cost of a design, but you cannot start with load bearing capacity and cost, run those through a formula, and get a bridge. Or even a sketch of a bridge.

What goes on today is largely just a bunch of templates. See the requirements, pick a template, and modify it accordingly. That template can definitely be plugged into an optimization equation.

ML models can be used to find the best templates if you want. It's a way to use compute to "design" things, estimating the optimal design given a set of candidate templates. These things can be modelled, but finding the true optimum is sometimes not computationally tractable.

>Design comes first, always, whether it is for real-world items or for software. The design is then validated using formal methods, and iterated until it meets the required goal.

The difference between software and real-world engineering is that in software the primitives are well defined and few. You aren't dealing with thousands of variables, so you don't need something like optimization theory or ML to estimate the optimum.

There can very much be a theory for software that is practical because software is basically trying to simulate mathematics. It's not so complicated that it can't be done.

>If you still want to maintain your position that you can compute a bridge, feel free to point out a source that describes the formula for bridges. Or any object that takes significant engineering, really.

I maintain that one exists. We just don't know it and may never know it because the theory would involve billions of primitives. I maintain that one exists for software design and while we don't know it right now I think it's possible to find one.

What I don't like about this question is the subtle sass of "maintaining your position to compute a bridge". It's a bit rude. Feel free to not be rude. Thank you.


Just because it doesn't use formal theory now --- does not mean it/we shouldn't use it. I'm talking about needs, not haves.


Well, in all other engineering fields they are used in conjunction, but the two concepts are practically more or less mutually exclusive. There's no room for "design" when finding the shortest distance between two points, because the shortest distance between two points is found by calculation using formal theory.

Things like user-friendliness can be "philosophical". You can separately and selectively apply theory and design where applicable, but they cannot actually be unified in some unholy grafting.

I would say, for the topic at hand of how to organize and abstract your code: if we ever find a complete formal theory for that, much of design will go out the window and what's left is mostly calculation.


I see what you are driving at and I think we are looking at the problem from slightly different lenses yet still drawing much the same conclusions.

There's no room for design in finding the shortest distance between two points, but there is room for design in picking which points you wish to find the distance on -- I think perhaps for meaningful discussion on this specific topic my definition of "Software Design" is too broad-scope ;)


Sort of, but I think you're limiting the scope of what theory can do too much. Usually in engineering you can't just design anything in your imagination. You have a limited set: a domain. To continue the theme of the example, I have a set of points. I rarely have all the points on the face of the earth.

How do I pick the point such that it minimizes a specification constraint? Let's say that constraint is the shortest distance. This is design by calculation.

Then design is out of the equation. But if the specification constraint is "the two points most pleasing to the user", then you can "design" those points.

If I had access to every neuron and had an exact, fully realized mathematical model of the general human brain, even this constraint of "pleasing" and "usability" could, in theory, be met with a calculation.

Largely anything that leans more towards "design" means we don't know shit about what's going on so we hire philosophers and artists to make wild guesses. That is essentially what design is: a wild guess.

The more of software we can model in a theory, the less wild guesses we need.


Mathematics and science can both be viewed as very fleshed out and practical branches of philosophy, so they certainly aren't mutually exclusive.


Everything on the face of the earth falls under philosophy. It's the biggest non-category ever.

I'm basically talking about "philosophy" the way the book uses the word. Which in short is just the author's opinionated take on software masquerading as something a bit more "official" than just an opinion. I mean, would you call your own opinions on software design a "philosophy"?


Yes probably. The only real difference between an opinion and a philosophy are the number of people that believe it.


You can only have this take if you know absolutely nothing about philosophy.


Ad hominem and an argument from authority in one breath. Or to summarize: Irrelevant.


Right, what about "formal theory"? Let's say number theory? That's a concept independent of belief. And that's the difference I'm getting at here.


Formal theories are not independent of belief at all! They are based upon axioms, which are the beliefs that must be true in order for the formal theory to be useful.


I agree. But then what is the difference between religion and an axiom? Both are beliefs. Are you saying there is no difference? Maybe technically there is no difference. But clearly from the practical perspective there's a huge mutually exclusive difference.

We should not merge GOF design patterns and SOLID or any other made up "software religion" with a formal theory.

That's what I mean by "belief"


I think I do understand your argument: In your view a formal theory has a level of rigour and evidence that exceeds that of a "best practice" such as SOLID. And I agree with that, but the point is that writing code based on formal theories is still just another software development philosophy with its own set of fuzzy trade offs. For example:

Given ten different calculi with completely different syntaxes and modes of expression (for example lambda calculus, combinatory calculus, a Turing machine, a type inference calculus, Euclidean geometry, ultrafinitism), each and every one of them ought to be able to express an equivalent formal theory. When you are writing software based upon some theory, which calculus will you use to express it? A giant tree of lambdas? A state machine? Pure functions? Mathematically they are all proven to be equivalent, so you must make a practical decision based on which one you believe is best for the job. How will you make that decision? Most likely you will follow some set of personal heuristics that in your experience have shown one form of expression to be easier for you than others. And thus you are adopting some philosophy of software even if you try to tell yourself that you aren't.


Sure in that sense, you need to choose the theory. And that part can be "design." You can have a meta theory of theories where you can calculate the best theory but that's too impractical. Just choose a theory via "design."

As of right now we don't have a theory that encompasses the best way to abstract and organize primitives.

I'm saying at least use A theory. Any theory. The book pdbc contains a theory and one that from what I've seen comes closest to program design via calculation.

Basically that book is, in short, about functional programming, or the lambda calculus way to do things... this is better because it allows you to better exploit the power of math and algebra, which are basically the basis from which we developed all of our other formal theoretical concepts.


> And that part can be "design."

It's those design choices that are subject to all of the dilemmas that necessitate schools of thought such as those in "A Philosophy of Software Design". The mathematics hasn't been invented yet to determine the most efficient way to lay out complex software. And it won't be until long after our jobs are all replaced by fuzzy AI. Until then, software development is a performance art subject to many different schools of thought or philosophies of design, and you're fooling yourself if you think that you can escape from them.

At the end of the day, there is no mathematical formula to choose which color the border of a selected text-input should be: Only conventions, design philosophies, tastes and opinions.

The book you've linked (although I haven't read it all yet), provides a design philosophy for approaching a subset of the problems that computer programmers face. But: It doesn't have any writing on the hardest problems that programmers face in regards to IO and managing persistent state in a concurrent-access environment.


>It's those design choices that are subject to all of the dilemmas that necessitate schools of thought such as those in "A Philosophy of Software Design".

The philosophy-of-design book doesn't talk about theory at all. It is an opinion on the most efficient way to do things, not a proof.

If there was a theory, the answer on the "best" way would be a proof. The answer is indisputable given the axioms.

>At the end of the day, there is no mathematical formula to choose which color the border of a selected text-input should be: Only conventions, design philosophies, tastes and opinions.

And that's why the color of the border is not technically part of software engineering. They call that department the art department or the graphic design department, and the people who work on this stuff are called "designers".

>The book you've linked (although I haven't read it all yet), provides a design philosophy for approaching a subset of the problems that computer programmers face. But: It doesn't have any writing on the hardest problems that programmers face in regards to IO and managing persistent state in a concurrent-access environment.

No it doesn't provide a philosophy. It provides a theory based on axioms. The choice of the axioms and the resulting theory may be done using "philosophy" but the outcome of that choice is a formal theory.

>The book you've linked (although I haven't read it all yet), provides a design philosophy for approaching a subset of the problems that computer programmers face. But: It doesn't have any writing on the hardest problems that programmers face in regards to IO and managing persistent state in a concurrent-access environment.

There's a short chapter on monads, but you're right. I never said there's a complete theory, and I hinted and mentioned several times in this overall thread that there is no complete theory.

Even the book is incomplete. It's a draft.


You seem to think that this is some sort of battle between the wordcels and the shape rotators or whatever, it's not. Extremely smart people have been in these fields for thousands of years and they didn't think that math and philosophy are opposites or in some tension. Even a cursory familiarity with the history of math and philo will reveal this.


>Extremely smart people have been in these fields for thousands of years and they didn't think that math and philosophy are opposites or in some tension. Even a cursory familiarity with the history of math and philo will reveal this.

Yeah it was actually progress when the field largely separated the philosophical mumbo jumbo away from the pure math. In the past math text books were littered with this mumbo jumbo because people couldn't separate the philosophy away from the axiomatic logic. Textbooks were just a mess. Nowadays there's a clear delineation. I believe it was Newton who started this separation trend with his laws of motion.

Mathematics is an entirely separate department that is NOT under philosophy in most schools because of this.


I studied philosophy at a department with an analytic slant. So essentially with philosophers who have a hardon for math and science. Still, even there people were not talking nonsense like this.

Pure math is really nice, but there are so many aspects of human activity where this math is just totally useless. This is where we have to use the fuzzy words and stuff, and where the rigorous interpretation you get trained in within philosophy and other wordcel departments comes in useful.

I really encourage you to engage with some history of philosophy. You'll probably enjoy it and maybe change some of your views. Maybe try some history of science?


I'm well versed in the "philosophy" of science.

Personally though because philosophy encompasses stuff like animism I don't give the entire field much weight. Aspects of it are interesting but the entire field as a whole is a category error that encompasses everything on the face of the earth. What is not philosophy? I mean it sounds like philosophy is the study of anything and everything.

Anyway to your point physics supposedly is a mathematical model that can model everything down to the tiniest atom. There are holes where not everything can be calculated in a closed form equation and there are holes where the fundamental primitive is true randomness which is something hard to replicate or define.

In this sense physics can model almost (key adjective) everything in the universe, including most of all human activity. Because in the end that's what's going on: atoms.


Calculate ease-of-use of a GUI. Yes, you can, to some degree. You also can't, to some other degrees.

Calculate correctness of some business logic, when part of "correctness" is correspondence to some badly-written procedures, and another part is correspondence to some regulations that are spread across ten thousand pages.

Calculate correctness of an OS scheduling algorithm that has to work against a (not precisely known) variety of task mixes.

And so on. There are parts of the requirements that are blurry and hand-wavy. That makes the "calculation" approach hard. At least, you have to translate the hand-wavy stuff into precise things that you can calculate. And you can't calculate that translation process, because the inputs are hand-wavy.


Given a formal specification the idea is that a theory should be in place to calculate the design. We don't fully have this yet.

Given a hand wavy blurry specification, well... of course the implementation will be blurry and hand wavy as well.


Well... given a hand wavy blurry specification, you can create a formal specification - just not by formal means.


Changing the specification is fine.


That book has its very own philosophy, in particular that "pointfree" (pointless?) programming is a good thing. No, it isn't.


You think it's not good because that's just your particular philosophy.

Pointfree programming allows for theory. It allows for algebraic composition of functions which in turn allows application of algebraic theory.

I mean in the end what is a computer program? A set of functions. Wouldn't you build a program by composing functions together to form bigger functions? It makes sense for this to be the fundamental theory of program organization.

Of course IO and mutation aren't initially included in this theory but that's a different aspect of the theory once you get more advanced.

So basically you're just saying something along the lines that in your opinion you don't like the theory of algebra or geometry or some such. It's not invalid, but like the philosophy book, just another opinionated take.
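
The claim a few comments up — that pointfree style lets you treat programs as an algebra of functions — can be sketched in a few lines. A minimal Python sketch (all names illustrative; Haskell makes this far more ergonomic):

```python
# Compose functions without naming their arguments (pointfree), then check
# an algebraic law of composition: associativity, (f . g) . h == f . (g . h).

def compose(f, g):
    """Return the function f . g, i.e. x -> f(g(x))."""
    return lambda x: f(g(x))

inc = lambda n: n + 1
double = lambda n: n * 2
square = lambda n: n * n

left = compose(compose(square, double), inc)
right = compose(square, compose(double, inc))

# The law holds pointwise for any input we care to try.
assert all(left(n) == right(n) for n in range(100))
print(left(3))  # square(double(inc(3))) = (2*4)**2, prints: 64
```

Because composition is associative and has an identity (`lambda x: x`), functions under composition form a monoid — exactly the kind of structure that lets algebraic reasoning apply.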


There is nothing wrong with functions. But forcing yourself to think exclusively in functions doesn't make much sense, and that is what pointfree is all about. It's basically a fetish. After all, functions have a domain, and the domain consists of objects that are often not functions.


Well once you introduce points into the theoretical world of functions, composition no longer fully works. It doesn't make sense.

While a program in practice can have "points" an algebra of functions should be a theory like number theory. Number theory deals with numbers only, function theory deals with functions only.

I'm not so strict on this in practice, the function must eventually be called and in the end the compositions converge into a point. But if you want to apply algebraic theory to the functions, they need to be point free.


You can apply algebra to your functions, while also acknowledging that there are other interesting objects than just functions. It's called logic, and in my opinion, logic is really just algebra.

Number theory is not just numbers, by the way. It has plenty of functions as well, for example the Riemann zeta function.


>You can apply algebra to your functions, while also acknowledging that there are other interesting objects than just functions. It's called logic, and in my opinion, logic is really just algebra.

The basis of all formal theories includes logic. I haven't heard of any illogical formal theories. Logic isn't just algebra it's everything.

I acknowledge there are other interesting primitives besides functions. But does it make sense to formulate a theory with linked lists and dependency injection as its core primitives, then build everything else in terms of those two concepts? Seems arbitrary. More than likely there are more fundamental building blocks we can use to develop a theory.

If we want to make a theory of computation what is the core primitive that computes? A function. So it makes sense to use this as the core primitive of a theory.

>Number theory is not just numbers, by the way. It has plenty of functions as well, for example the Riemann zeta function.

How should I put this. In number theory you have numbers as an instantiated primitive. Then you have rules on how to compose those numbers to form other numbers. Those rules are called "functions".

If we have a "function theory" on computation. Then the instantiated primitive is a function. Then we need rules on how to compose those functions to form other functions. Those rules are called "functions".

Note the repetition of the two paragraphs above. It might clarify to you what I'm talking about. So in a sense the word "function" in the second paragraph is more Meta... a function of functions.

Anyway the point here is that we want to formulate a theory with the lowest amount of primitives and axiomatic concepts.


> Anyway the point here is that we want to formulate a theory with the lowest amount of primitives and axiomatic concepts.

A theory makes only sense with respect to some logic. So first, you need to define what logic you are using. This logic is your theory with the lowest amount of primitives and axiomatic concepts.

Now, once you have that, you can turn your attention to other things on top of that. If you are interested in functions only, sure. You can for example study the lambda calculus.

But don't pretend that this is so that you can reason about computer programs better. Sure, learning more about functions is useful, but numbers are also very important for computer programs. So are trees. And graphs. Expressing all of these just in terms of functions can be a fun exercise (pun intended, Church numerals anyone?). But instead of trying to express the important concepts of your program as functions, and then studying them, maybe your time is better spent studying the concepts themselves directly.
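
The Church numerals aside can be made concrete. Here is a hedged Python sketch of numbers encoded purely as functions — "apply f n times" — with addition built from application (illustrative only; this is the standard encoding, not anything from the book under discussion):

```python
# Church numerals: a number n is the function that applies f to x, n times.

zero = lambda f: lambda x: x          # apply f zero times

def succ(n):
    """Successor: apply f one more time than n does."""
    return lambda f: lambda x: f(n(f)(x))

def add(m, n):
    """Addition: apply f n times, then m more times."""
    return lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    """Decode a Church numeral by applying (+1) to 0."""
    return n(lambda k: k + 1)(0)

one = succ(zero)
two = succ(one)
print(to_int(add(two, two)))  # prints: 4
```

Which rather illustrates the parent's point: it is a fun exercise, but nobody would reach for this encoding when `int` already models the concept directly.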


>A theory makes only sense with respect to some logic. So first, you need to define what logic you are using. This logic is your theory with the lowest amount of primitives and axiomatic concepts.

Logic as I know it can't be made up. It's already well defined and the basis for all other formal theories.

The lowest amount of primitives and axiomatic concepts is largely a design choice for developing a theory. Makes it easier to deal with the theory rather than a theory that starts out with extremely complicated axioms.

>But don't pretend that this is so that you can reason about computer programs better. Sure, learning more about functions is useful, but numbers are also very important for computer programs. So are trees. And graphs. Expressing all of these just in terms of functions can be a fun exercise (pun intended, Church numerals anyone?). But instead of trying to express the important concepts of your program as functions, and then studying them, maybe your time is better spent studying the concepts themselves directly.

Yeah, but I would want to prune all that other stuff away and focus on a theory of the best way to organize and abstract programs. The primitive most important for that is the function, because at its core that's basically all a computer does: just calculate stuff.


> Logic as I know it can't be made up. It's already well defined and the basis for all other formal theories.

Of course logic can be made up, and in fact, all logics are. First-order logic, simply-typed higher order logic, dependent type theory, all made up.

My favourite made up logic is Abstraction Logic (http://abstractionlogic.com).

But I guess what you mean is that you are a mathematical realist. That's good! I am too.

> Yeah but I would want to prune all that other stuff away and focus on a theory on the best way to organize and abstract programs. The primitive most important for that is the function because that's basically at it's core all a computer does, just calculate stuff.

If that's what you want to do, then that's what you want to do!


>Of course logic can be made up, and in fact, all logics are. First-order logic, simply-typed higher order logic, dependent type theory, all made up.

That's like saying math is made up. Is it made up or is it discovered?

You're just being pedantic.

>If that's what you want to do, then that's what you want to do!

Yeah and if you want some theory of program design that encompasses every damn concept under the sun as an axiom go for it as well! Genius!


> That's like saying math is made up. Is it made up or is it discovered?

Personally, I think it is discovered, although the concrete incarnations are made up, of course. That's pretty much what it means to be a mathematical realist.

> You're just being pedantic.

I just know more about these things than you seem to do. People disagree on what the "right" logic is (personally, I think the right logic is Abstraction Logic), and the choice of your logic influences your axioms. First-order logic for example has no built-in functions as mathematical objects, so you need to come up with axioms for functions. Simply-typed higher-order logic, on the other hand, comes fully equipped with functions, so no need for additional function axioms. And also no need for additional axioms for other things, you can create numbers, trees, graphs etc. without introducing new axioms (except an axiom for an infinite type of individuals) for these things.

When I look at the book you cited, it's just full of really simple stuff made really complicated. Pointless, indeed. Personally, I am more interested in the opposite direction. But that's just my personal opinion.


>I just know more about these things than you seem to do.

No you're being pedantic. I don't think you know more.

Think about it. We both agree math is discovered. So why the hell are we diving into this tangent? It's because you decided to say that logic is made up and then subsequently agreed it's not. The nuance of whether and what part of logic is made up is pedantic.

>When I look at the book you cited, it's just full of really simple stuff made really complicated. Pointless, indeed. Personally, I am more interested in the opposite direction. But that's just my personal opinion.

Sure tbh I never fully read the book. Skimmed. Seems to be a good resource for beginners. But I also don't think you're an expert.

You're interested in the opposite? You mean the philosophy of programming? Well, good on you. You go study that then. Again, I can't understand your genius.

