
Aren't you doing the same thing?

No, I read the comment in full, analyzed its reasoning quality, elaborated on the self-undermining epistemological implications of its content, and then related that to the epistemic and discourse norms we aspire to here. My dismissal of it is anything but shallow, though I am of course open to hearing counterarguments, which you have fallen short of offering.

> But if you want _any_ kind of ci/cd you need flux, any kind of config management you need helm.

Absurdly wrong on both counts.
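To make it concrete: any CI runner plus plain kubectl already gives you CD, and Kustomize overlays (built into kubectl) cover per-environment config. A minimal, hypothetical sketch - the paths and environment names are made up:

    import subprocess

    def deploy(environment: str) -> None:
        # "kubectl apply -k" builds the Kustomize overlay for the target
        # environment and applies it in one step - no Flux controller,
        # no Helm chart.
        subprocess.run(
            ["kubectl", "apply", "-k", f"deploy/overlays/{environment}"],
            check=True,
        )

    if __name__ == "__main__":
        deploy("staging")  # e.g. invoked by whatever CI job runs on merge

Argo CD, plain manifests applied from a CI job, Jsonnet, Kustomize - there are plenty of ways to do either without those two tools specifically.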


Had a pretty neglectful childhood. In early adolescence, to pass the time locked in my room, I was gifted from a dead grandma's estate an ancient 80's IBM PC that had some network capability and could telnet; no one else could figure out how to use it, so it was put in my room. I somehow figured out, via stuff I'd read in the school library and a web search on the school computer, that I could telnet to chat rooms. I don't remember much of it, and a lot of it looking back was probably fairly creepy/inappropriate for a 12-14 year old, but I think just being able to log on to this device when I was forcefully isolated and talk to complete strangers to pass the time really helped me in a way that diving into books I'd read a thousand times couldn't.

I wish I'd done something better with that time other than just chatrooms but c'est la vie.


all of this just reads like the supposed UML zeitgeist that was supposed to transform java and eliminate development 20 years ago

if this is all ultimately Java but with even more steps, it's a sign I'm definitely getting old. It's just the same pattern of non-technical people deceiving themselves into believing they don't need to be technical to build tech, which ultimately results, again, in 10-20 years of re-learning the painful lessons of that.

let me off this train too, I'm tired already


> all of this just reads like the supposed UML zeitgeist that was supposed to transform java and eliminate development 20 years ago

See also 'no-code', 4GLs, 5GLs, etc etc etc. Every decade or so, the marketers find a new thing that will destroy programming forever.


20 years before UML/Java it was "4th Generation Languages" that were going to bring "Application Development Without Programmers" to businesses.

https://en.wikipedia.org/wiki/Fourth-generation_programming_...


And before that it was high-level programming languages, or as we call them today, programming languages.

The 4GLs were mostly reporting languages, as I remember. Useful ones, too. I still feel we haven't come even close to fully utilizing specialized programming languages and toolkits.

Put another way, I am certain that Unity has done more to get non-programmers to develop software than ChatGPT ever will.


I'd argue first prize for that goes to Excel (for a sufficiently broad definition of "develop software").

The mistake was going after programmers, instead of going after programming languages, where the actual problem is.

UML may be ugly and in need of streamlining, but the idea of building software by creating and manipulating artifacts at the same conceptual level we're thinking at in any given moment is sound. Alas, we long ago hit a wall in how much cross-cutting complexity we can stuff into the same piece of plaintext code, and we've been painfully scraping along the Pareto frontier ever since, vacillating between large and small functions and wasting time debating the merits of sum types in lieu of exception handling, hoping that if we throw more CS PhDs into the category theory blender, they'll eventually come up with some heavy-duty super-mapping super monad that'll save us all.

(I've written a lot about this here in the past; cf. "pareto frontier" and "plaintext single source of truth codebase".)

Unfortunately, it may be too late to fix it properly. Yes, LLMs are getting good enough to just translate between different perspectives/concerns on the fly, and doing the dirty work on the raw codebase for us. But they're also getting good enough that managers and non-technical people may finally get what they always wanted: building tech without being technical. For the first time ever, that goal is absolutely becoming realistic, and already possible in the small - that's what the whole "vibe coding" thing heralds.


I've heard this many times before, but I've never heard an argument that rebuts the plain fact that text is extremely expressive, and that basically anything else we try to replace it with is less so. And it happens that making a von Neumann machine do precisely what you want requires a high level of precision. Happy to understand otherwise!

The text alone isn't the problem. It's the sum of:

1) Plaintext representation, that is

2) a single source of truth,

3) which we always work on directly.

We're hitting hard against limits of 1), but that's because we insist on 2) and 3).

Limits of plaintext stop being a problem if we relax either 2) or 3). We need to be able to operate on the same underlying code ("single source of truth") indirectly, through task-specific views that hide the irrelevant and emphasize what's important for the task at hand - something that typically changes multiple times a day, sometimes multiple times an hour, for each programmer. The views/perspectives themselves can be plaintext or not, depending on what makes most sense; the underlying "single source of truth" doesn't have to be, because you're not supposed to be looking at it in the first place (beyond exceptional situations, similar to when you'd be looking at the object code produced by the compiler).

Expressiveness is a feature, but the more you try to express in fixed space, the harder it becomes to comprehend it. The solution is to stop trying to express everything all at once!

N.b. this makes me think of a recent exchange I had on HN; people point out that code is like a blueprint in civil engineering/construction - but then, in those fields there is never a single common blueprint being worked on. You have different documents for the overall structure, others for material composition, hydrological studies, load analysis, plumbing, HVAC, electrical routing, etc., etc. Multiple perspectives on the same artifacts. You don't see them merged into a single "uber blueprint", which would be the equivalent of how software engineers work with code.
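To make the "views" idea a bit more concrete, here's a rough sketch (purely illustrative - the names and the choice of Python's ast module are mine) of one such perspective, "this code, but with error handling elided", derived on demand from the single source of truth instead of maintained by hand:

    import ast

    # The underlying single source of truth - normally a real file on disk.
    SOURCE = '''
    def fetch(url):
        try:
            return download(url)
        except TimeoutError:
            log.warning("timed out: %s", url)
            return None
    '''

    class ElideErrorHandling(ast.NodeTransformer):
        # One "view": the same code with the try/except scaffolding stripped.
        def visit_Try(self, node):
            # Keep only the happy-path body; handlers/finally are noise here.
            return node.body

    tree = ast.fix_missing_locations(ElideErrorHandling().visit(ast.parse(SOURCE)))
    print(ast.unparse(tree))  # prints fetch() with no error handling in sight

The real thing would also need to map edits made in the view back onto the underlying code, which is the hard part - but even the read-only half shows what "hide the irrelevant" could mean in practice.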


How so? Even just hypertext is more expressive than plain text. So is JSON, or any other data format or programming language which has a string type for that matter.

Those are all still text.

Yes, structured text is a subset of text. That doesn't negate the point made.

Of all the things I read at uni UML is the thing I've felt the least use for - even when designing new systems. I've had more use for things I never thought I'd need like Rayleigh scattering and processor design.

I think most software engineers need to draw a class diagram from time to time. Maybe there are a lot of unnecessary details to the UML spec, but it certainly doesn't hurt to agree that a hollow triangle for the arrow head means parent/child while a normal arrow head means composition, with a diamond at the root for ownership.

As the sibling comment says, sequence diagrams are often useful too. I've used them a few times for illustrating messages between threads, and for showing the relationship between async tasks in structured concurrency. Again, maybe there are murky corners to UML sequence diagrams that are rarely needed, but the broad idea is very helpful.
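For what it's worth, a tiny hypothetical sketch (class names made up) of the two relationships that notation describes - the hollow-triangle arrow would run from Dog up to Animal, and the diamond-rooted arrow from Car to Engine:

    class Animal: ...       # hollow triangle: Dog is-a Animal (parent/child)
    class Dog(Animal): ...

    class Engine: ...

    class Car:              # diamond at Car: Car owns its Engine (composition)
        def __init__(self) -> None:
            # The Engine is created by, and lives and dies with, its Car.
            self.engine = Engine()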


True, but I don't bother with a unified system, just a mermaid diagram. I work in web though, so perhaps it would matter more if I went back to embedded (which I did for only a short while) or something else where a project is planned in its entirety rather than growing organically/reacting to customers' needs/trends/the whims of management.

I just looked at Mermaid and it seems to be about as close to UML as I meant by my previous comment. Just look at this class diagram [1]: triangle-ended arrows for parent/child, the classic UML class box of name/attributes/methods, stereotypes in <<double angle brackets>>, etc. The text even mentions UML. I'm not a JS dev so I tend to use PlantUML instead - which is also UML-based, as the name implies.

I'm not sure what you mean by "unified system". If you mean some sort of giant data store of design/architecture where different diagrams are linked to each other, then I'm certainly NOT advocating that. "Archimate experience" is basically a red flag against both a person and the organisation they work for IMO.

(I once briefly contracted for a large company and bumped into a "software architect" in a kitchenette one day. What's your software development background, I asked him. He said: oh no, I can't code. D-: He spent all day fussing with diagrams that surely would be ignored by anyone doing the actual work.)

[1] https://mermaid.js.org/syntax/classDiagram.html


The "unified" UML system is referring to things like Rose (also mentioned indirectly several more comments up) where they'd reflect into code and auto-build diagrams and also auto-build/auto-update code from diagrams.

I've been at this 16 years. I've seen one planned project in that 16 years that stuck anywhere near the initial plan. They always grow with the whims of someone.

> I think most software engineers need to draw a class diagram from time to time.

Sounds a lot like RegEx to me: if you use something often then obviously learn it but if you need it maybe a dozen or two dozen times per year, then perhaps there’s less need to do a deep dive outside of personal interest.


UML was a buzzword, but a sequence diagram can sometimes replace a few hundred words of dry text. People think best in 2d.

Sure, but you're talking "mildly useful", rather than "replaced programmers 30 years ago, programmers don't exist anymore".

(Also, I'm _fairly_ sure that sequence diagrams didn't originate with UML; it just adopted them.)


>People think best in 2d.

no they don't. some people do. Some people think best in sentences, paragraphs, and sections of structured text. Diagrams mean next to nothing to me.

Some graphs, as in representations of actual mathematical graphs, do have meaning though. If a graph is really the best data structure to describe a particular problem space.

on edit: added in "representations of" as I worried people might misunderstand.


FWIW, you're likely right here; not everyone is a visual thinker.

Still, what both you and GP should be able to agree on, is that code - not pseudocode, simplified code, draft code, but actual code of a program - is one of the worst possible representations to be thinking and working in.

It's dumb that we're still stuck with this paradigm; it's a great lead anchor chained to our ankles, preventing us from being able to handle complexity better.


> code - not pseudocode, simplified code, draft code, but actual code of a program - is one of the worst possible representations to be thinking and working in.

It depends on the language. In my experience, well-written Lisp with judicious macros can come close to fitting the way I think of a problem. But some language with tons of boilerplate? No, not at all.


As a die-hard Lisper, I still disagree. Yes, Lisp can go further than anything else to eliminate boilerplate, but you're still locked into a single representation. The moment you switch to a different task - especially one that actually cares about the boilerplate you've hidden rather than the logic you've exposed - you're fighting an even harder battle.

That's what I mean by the Pareto frontier: the choices made by the various current-generation languages and coding methodologies (including the choices you make as a macro author) all promote readability for some tasks at the expense of readability for others. We're just shifting the difficulty to a different time of day, not actually eliminating it.

To break through that and actually make progress, we need to embrace working in different, problem-specific views, instead of on the underlying shared single-source-of-truth plaintext code directly.


IMHO there's usually a lot of necessary complexity that is irrelevant to the actual problem: logging, observability, error handling, authn/authz, secret management, adapting data to interfaces for passing to other services, etc.

Diagrams and pseudocode allow us to push those inconveniences into the background and focus on the flows that matter.


Precisely that. As you say, this complexity is both necessary and irrelevant to the actual problem.

Now, I claim that the main thing that's stopping advancement in our field is that we're making a choice up front on what is relevant and what's not.

The "actual problem" changes from programmer to programmer, and from hour to the next. In the morning, I might be tweaking the business logic; at noon, I might be debugging some bug across the abstraction layers; in the afternoon, I might be reworking the error handling across the module, and just as I leave for the day, I might need to spend 30 minutes discussing architecture issue with the team. All those things demand completely different perspectives; for each, different things are relevant and different are just noise. But right now, we're stuck looking at the same artifact (the plaintext code base), and trying to make every possible thing readable simultaneously to at least some degree.

I claim this is a wrong approach that's been keeping us stuck for too long now.


I'd love this to be possible. We're analyzing projections from the solution space to the understandability plane when discussing systems - but going the other way, from all existing projections to the solution space, is what we do when we actually build software. If you're saying you want to synthesize systems from projections, LLMs are the closest thing we've got and... it maybe sometimes works.

Yeah, LLMs seem like they'll allow us to side-step the difficult parts by synthesizing projections instead of maintaining them. I.e. instead of having a well-defined way to go back and forth between a specific view and underlying code (e.g. "all the methods in all the classes in this module, as a database", or "this code, but with error handling elided", or "this code, but only with types and error handling", or "how components link together, as a graph", etc.), we can just tell LLMs to synthesize the views, and apply changes we make in them to the underlying code, and expect that to mostly work - even today.

It's just a hell of an expensive way to get around doing it. But then maybe at least a real demonstration will convince people of the utility and need of doing it properly.

But then, by that time, LLMs will take over all software development anyway, making this topic moot.


ok, but my reference to sentences, paragraphs and sections would not indicate code but rather documentation.

oops, evidently I got downvoted because I don't think best in 2d and that is bad. Classy as always, HN.

Lmao I remember uni teaching me UML. Right before I dropped out after a year because fuck all of that. It's a shame because some of the final year content I probably would've liked.

But I just couldn't handle it when I got into like COMP102 and in the first lecture, the lecturer is all "has anybody not used the internet before?"

I spent my childhood doing the stuff so I just had to bail. I'm sure others would find it rewarding (particularly those that were in my classes because 'a computer job is a good job for money').


"The cloud" isn't the problem nor is migration to it. The problem is single points of failure in a vast, super-connected, global network.

To demonstrate my point, say someone like Cloudflare opted to get "off cloud" and run their own datacenters. Half the web would still go down if they had some issue in those datacenters. If anything, the economy of scale and resiliency of a huge cloud network is far beyond what any single operator can ensure for their own service. If this weren't the case, cloud services couldn't be as profitable as they are.

It isn't the panacea people seem to think it is. One critical service going down, regardless of whether it's cloud hosted or not, has ripple effects in the broader network.


A belief that's kind of settled in after a few years of observation: I absolutely believe the "hype" claim that AI is a force multiplier. However, lots of things out there are terrible and shouldn't be force-multiplied (spam, phishing, scams, etc.) - or, say, people who are very bad at their jobs. If the output of people like this is multiplied, it clearly can and will be very bad. I have seen this play out at a small scale already on some teams I've worked with.

For the maybe ~1-5% of people out there that have something valuable to contribute (that's my number, and I fully believe it), I think it can be good - but those types also seem to be the most wary of it.


> My CoreWeave analysis may seem silly to some because its value has quadrupled — and that’s why I didn’t write that I believed the stock would crater, or really anything about the stock.

I think the underlying belief that causes people to see things like this as "silly", or AI criticism as overstated, is that the market does not really make mistakes, at least not in the aggregate. So, if XYZ company's CEO says "Our product is doing ABC 300000% better and will take over the world!" and its value/revenue is also going up at the same time, that is seen as a sign that the market has validated this view, and it is infallible (to a point). Of course, this ignores that the market has historically and often been completely wrong, and that this type of reasoning is entirely circular - pay no attention to the man (marketing team) behind the curtain, or think about it too hard.


> market has validated this view, and it is infallible (to a point)

Irrational Exuberance. Speculative bubbles are scarily common.


It's crazy: on the web, when you point out that Google or Google products used to be much better in the past, someone will come out of nowhere to tell you it's always been that way.

what is this instinct? anyone that’s over the age of 25 would know


> what is this instinct?

"The rules were you guys weren't going to fact check."

The instinct is about pointing out factual inaccuracies. What they wrote is either correct, or not. If it is not, and someone knows better they can and should point that out.

If you, or some other commenter, have a fuzzy feeling that Google is worse than it used to be, you are free to write that. You are perfectly entitled to that opinion. But you can't just make up false statements and expect to be unchallenged and unchallengeable on it.


Except that jkaptur is the one making up false statements, and then providing "citations" that contradict him. I don't think an instinct to point out inaccuracies can explain that. There would have to be inaccuracies to point out first.

If you believe stuff like this isn't actual astroturfing, you must face the fact that, from somewhere, there exists a deeply ingrained belief among a subset of extremely vocal and argumentative people that Google is amazing; and if it isn't, well, that's just how the web is now (ignore the Google man behind the curtain that created the modern web in the first place); and if it's not that, well, it's always been this way (even if it hasn't).

There is a very strong stance on this site against talking about astroturfing, and I understand it. But for the life of me, I cannot figure out where this general type of sentiment originates. I don't know any Google enthusiasts and am not sure I've ever met one. It's a fairly uncontroversial take on this website and in the tech world that Google search has worsened (the degree of which is debatable). Coming out and boldly saying "no it isn't, you're lying" is just crazy weird to me, and again I'm very curious where that sentiment comes from.

see some of the sibling and aunt/uncle comments in this thread to get at a little of what I’m talking about.


I was a Google fan back when they first started and were just a search engine. Search engines like Yahoo and Excite had become massively bloated and ad-filled while Google was clean and fast.

I wasn't a fan for very long. Google got creepy fast, and at this point their search is becoming useless, but for a short time I really thought that Google is amazing and I was an enthusiast.


All I see here is someone making a claim and someone else making a different claim. They may have erroneously intended the claim in opposition, either missing or interpreting differently the 'interspersed' qualifier. Or, alternatively, they may believe when any ads appeared is more meaningful in the context of this discussion.

I think Google search has gone downhill tremendously, to the point of near uselessness, and I have been a Kagi subscriber for a while, but I don't see astroturf in this instance. Do you have other examples?


There was a pretty insane comment in this genre a month ago: https://news.ycombinator.com/item?id=43951164

> If Google [had been] broken up 20 years ago [...] [e]veryone would still be paying for email.

Some people don't have the foggiest idea what they're talking about. But I don't really see that as suggesting they're part of an organized campaign.


> Except that jkaptur is the one making up false statements, and then providing "citations" that contradict him.

I believe I have covered that case in my comment. Let me quote the relevant part here for you: “What they wrote is either correct, or not. If it is not, and someone knows better they can and should point that out.”

That being said, could you help me by pointing out the inaccuracy in jkaptur's comment? It seems fairly simple and, as far as I can see, well supported by the source.


Other than the fact the parent comment to this subthread is posting a literal factual inaccuracy regarding the history of ads on Google - it's not just one guy's "fuzzy feeling." It's been written about in so many thousands of words over the last two years and is the general sentiment across the tech space. It's sort of the major reason big companies like ChatGPT, and smaller ones like Kagi, are trying to swoop in and fill this void. It's fairly obvious to anyone paying attention.

You can sealion with posts like this all you want, but every time someone counters a post like this with ample evidence, it gets group-downvoted or ignored. You are also making an assertion that you're free to back with evidence, that google and google products are not noticeably worse than 10 years ago.

Here's one study that says yes, it is bad:

https://downloads.webis.de/publications/papers/bevendorff_20...

Since we don't have a time machine and can't study the Google of 2015, we have to rely on collective memory, don't we? You proclaiming "it's always been this way" and saying any assertion otherwise is false is an absolutely unfalsifiable statement. As I said, anyone over 25 knows.

Besides perusing the wealth of writing about this over the last two years or so, in which the tech world at large has lamented how bad search specifically has gotten, we also see market trends where people are increasingly seeking tools like ChatGPT and LLMs as a search replacement. Surely you, a thinking individual, could come to some pretty obvious conclusions as to why that might be - namely, that Google search has gotten a lot worse. The language models are well known to make stuff up, and people still prefer them, because search is somehow even less reliable and definitely more exhausting - and it was not always this way. If it was always this way, why are so many people turning to other tools?


> Other than the fact the parent comment to this subthread is posting a literal factual inaccuracy regarding the history of ads on Google

Sounds like it should be very easy to counter their argument then.

For my education could you tell me which part of their message is inaccurate? The “Google was founded in 1998” or the “and you could buy ads on the search results page in 2000.” part?

> You are also making an assertion that you’re free to back with evidence, that google and google products are not noticeably worse than 10 years ago.

I did not make such an assertion. Where in my comment do you think I'm making that assertion?

> You proclaiming “it’s always been this way”

I'm sorry, but who are you quoting? Did you perhaps misclick which comment you wanted to respond to?


Many people who post here are, were, or would like to be Googlers. Maybe not so much astroturfing as much as a kind of corporate hasbara (though maybe both).

> Maybe not so much astroturfing as much as a kind of corporate hasbara

What's the difference? In astroturfing, someone pays people to form an organization, claim to have no external support, and do some kind of activism.

In hasbara, the government of Israel pays people to not form an organization, claim to have no external support, and do various kinds of pro-Israel and pro-Jew activism. This looks like astroturfing with the major vulnerability of the no-external-support claim shored up.


Fair. The main difference is that people here don't like it when you call it astroturfing.

This isn't a generic data privacy counter-measure or concern. This is specifically targeted against stalking, which is pretty much one of only a few cases where this kind of thing would be used against you. Specifically the case where the perpetrator will place a device in or on the victim's car.

Knowing where you are is useful.

Knowing where you _aren't_ is equally useful.

I can imagine half a dozen ways to use this data against you in all kinds of settings. Sales, divorce, employment, espionage against your employer, burglary, and basic blackmail.


It doesn't necessarily say where you aren't. What if you get in somebody else's car? (Not uncommon for me as we typically carpool to trailheads.)

Sure, but if your car is presently driving to the supermarket, it’s a pretty safe bet that you are probably not at your house.

Sure, but the stalking issue is a subset of the generic data privacy issue - or do you believe you can hide from a stalker if everyone else under the sun knows your location? It might be too difficult to use location data brokers for stalking[1], but the whole economy around them makes the app ecosystem weak on location privacy and makes it easy to use a manipulated app for stalking. No special devices needed, and certainly no cellular devices needed.

https://xkcd.com/538/

[1] Even though data brokers have been used to find out the medications of a German MP, for example. https://www.techradar.com/news/even-your-deleted-secret-web-...


I’m not sure what point you’re really trying to make here. This is a thread about detection methods of an extremely invasive (and rare) method of stalking, which yes is a subset of a data privacy issue. The fact that data brokers can get a lot of location and other data about you is irrelevant to the discussion.

> or do you believe you can hide from a stalker if everyone else under the sun knows your location?

I’m not sure anyone is claiming that the detection methods described in this study are going to make you completely undetectable to any party at all times. Again, not sure what point you’re trying to make here and it feels irrelevant to the larger thread. The original comment seemed to indicate that the article hadn’t been read at all.


My point is that what they are doing is interesting and commendable, but if they want to effectively help stalking victims they are barking up the wrong tree, and there are much better ways to spend time and energy on the issue at hand.

What? Sorry, but this is pure nonsense. Better ways like what? This is a study. Did you read it at all? Again, it's not claiming to be a cure-all solution. It's studying how to detect low-powered LTE devices in a vehicle. Did you read it?

Except Musk and the chadsphere he surrounds himself with spend an inordinate amount of time promoting him as some kind of techno-genius. A first-year CS student couldn't confuse those terms, and he makes embarrassing gaffes like that quite often. They go ignored because people make unlimited excuses for him for some reason, rather than drawing the very obvious conclusion: that he doesn't know wtf he is talking about.

Even your corrected, generous version is wildly inaccurate.


There hasn't ever been a single time in your entire life where you were thinking of one thing, but the words coming out of your mouth communicated something different, by mistake, even though you genuinely did understand the difference?

How many times do you make that excuse of "he just flubbed a word" before thinking maybe he doesn't really know what he's talking about? Once? Twice? A dozen times?

> where you were thinking of one thing, but the words coming out of your mouth communicated something different

He was promoting a new feature with fanfare, in writing; it wasn't just a casual utterance. Besides, his words sound wrong even if the intended message was "Bitcoin-style cryptography" - it's still a preposterous non-description, because Bitcoin isn't, and has never been, a measure of cryptographic strength. The formal validity of that statement doesn't make it any less uninformed.


If you don't want extra skepticism, don't be the richest person on earth, don't insert yourself into government, don't insist you are uber-intelligent, don't be a notable person, don't be an asshole in public, etc.

It literally doesn't matter whether it's a mistake; he does this too often to give him the benefit of the doubt anymore. Elon Musk reliably claims to be an expert in everything ever, despite all available evidence to the contrary. Elon has never demonstrated technical competence in anything.


Do you understand the post you are replying to? If you do, what does this question have to do with it?

I do, but that post is arguing a point (Elon Musk doesn't know the difference between encryption and cryptography) that's unsubstantiated, while a plausible alternative explanation (he does know the difference, and misspoke, because he, like all other human beings, sometimes makes errors in translating thoughts into words) was proposed in my parent post.

Your post completely sailed right past that alternative plausible explanation, and immediately went back to asserting the unsubstantiated claim without addressing the alternative hypothesis, in what appears to be a bout of motivated reasoning against a figure that is politically disliked.

You don't get to completely ignore the point I'm raising, assert your own, and then play the "why aren't you staying on topic" card when your post was the one that brought up an unsubstantiated and unrelated response to the initial claim - that's hypocritical at best, if not outright trolling.


the point is less about the fallibility of human cognition and more about Spider-Man's Law (with great power comes great responsibility).

if you're one of the most powerful people on the planet and you make public statements and decisions that will impact many people, you should be held to a higher standard for what you put out.


> that's unsubstantiated,

Other than the fact that I didn't make this claim at all - of course it's substantiated, given he literally mixed up terms that people purporting that kind of expertise typically don't. That's literal substantiation. But whatever, I wasn't even making that claim.

If it seems like I’m skipping past your point it’s because you’re not really making one, or at least not the clever one you seem to think you are.

To answer your q in good faith - yes, I have mixed up words, even in professional settings. I will then typically issue a correction, because a mixup like that can cast a shadow on my credibility and can damage my career, and thus my earning potential. You seem to be taking the position that Elon's credibility cannot be questioned, at least on the topic of technical expertise. I find that a little bit (actually a lot) silly and an infantile way of looking at this.

Likewise, if I were routinely claiming to be this, like, super technical genius founder engineer elite space dude who could never admit fault and was an expert on basically all topics, I would expect to be placed under the same skeptical lens I face in scenarios like this in my day-to-day work (if not a much harsher one, given I'm just a low-level grunt).

Does this explanation help?

