Hacker News | missinglugnut's comments

If we're gonna be pedantic about fallacies: you're using argument by analogy, and it's not in any way comparable to the claims GP made about OpenAI.


It's not an argument by analogy. It's a reductio ad absurdum of the generalization that reality always lies somewhere in the middle, though not always at the exact middle.


The author wants to find content when he is looking for something specific. He does not want his attention grabbed by something he wasn't looking for, no matter how educational it may be.

Multiple people have clearly explained this to you in several comment threads and you're still insisting it makes no sense. At this point the only question is why you don't want to understand.


Well, what is good enough to grab one's attention? If not YouTube, something or somebody else has to provide this function for the person. The impact YouTube has on me is like having fucking Aristotle as a teacher. Tell me, please, what is better.


Voting on every site is an emotional response, and bad news + convincing arguments against currently held beliefs produces a strong negative one.

I appreciate that you gave more insight into electricity markets today.


It baffles me that they dug this hole in the first place. I have feelings on the zero-indexing vs one-indexing debate, but at the end of the day you can write correct code in either, as long as you know which one you're using.

But Julia fucked it up to the point where it's not clear which one you're using, and library writers don't know which one has been passed to them! It's insane. They chose style over consistency and correctness, and it's caused years of suffering.


Technically you don't need to know which indexing an array uses if you iterate over firstindex(arr):lastindex(arr) (or eachindex(arr)). AFAIK the issue was that this wasn't done consistently across the Julia ecosystem, including parts of the standard library at the time. No clue whether this still holds true, but I don't worry about it because I don't use OffsetArrays.


My dad got bitten by a tick and came down with a high fever, but he tested negative for Lyme, so the doctor wouldn't prescribe antibiotics, even after two appointments with worsening symptoms.

He was hospitalized when he was too sick to walk and then an infectious disease specialist put him on antibiotics, and he got better in a few days, minus some permanent nerve damage in his face.

It's amazing how confident some doctors can be when they haven't got a fucking clue. The more I read about high false negative rates and non-Lyme tick-borne bacteria, the madder I get about what happened.


Yeah, a family member and I had to basically throw studies at a doctor to get him to agree to prescribe a medicine he insisted "doesn't work" (even despite studies clearly showing it does, like indisputably). Even after that he still said something like "sure, whatever, if you want to try it you can", all dismissively as if we're stupid and wasting our time. Oh, and then he prescribed an amount that would never work. We still wonder if he sabotaged it on purpose. Had to go back and get it re-prescribed at the actually-correct amount. The medication worked, and we avoided a completely unnecessary surgery. I have so, so many stories like this.


That’s an awful thing to have gone through, but doctors are sometimes in a lose-lose-lose situation between insurance, best practices, and community concerns.

Maybe the patient’s insurance requires certain conditions to be met. Depending on the drug even expressing you’d be ok paying out of pocket can be dicey.

Maybe their malpractice insurance has conditions based on the actions of this doctor, or not even this doctor but their insurance pool.

Maybe the hospital, state, school they are at or went to has procedures that just weren’t met for whatever reason. If you are dead set on getting or trying a particular treatment I have found it useful to know what these are. This can backfire spectacularly though if they suspect they’re being played. (Which is an additional related meta game).

And then there are societal/community issues. We aren't in a time when antibiotics can just be used whenever something looks suspect; we are running out of effective antibiotics for some strains. Having had a resistant bacterial infection myself, I wish people had shown more restraint.

Learning to play the medical game or even realizing there is one is extremely upsetting. Doubly so when dealing with sudden life altering conditions. I got mad at it too. But that also didn’t help me, until I realized it’s just a big system like any other.


I'll add that there are some feedback loops making it worse. When these organizations aren't available kids are more dependent on their parents for something to do, which makes the already strained parents even less likely to take on volunteer work.

And then kids who grew up without mentors are less likely to try to be that for someone else.

Basically the orgs don't have enough volunteers to do important things, and the people don't volunteer because the org isn't important to them.


Yes, the network effects and cumulative impact are profound.

If I were to make a lightly educated guess: those who were teens in the 40s and 50s saw the world of their parents and their sacrifices, along with the totalitarianism of the USSR and Nazi Germany, and decided to pursue individualism over community. So as they reached the age to participate they opted out, further increasing social individualism overall. And here we are.

I don't know exactly what the way out here looks like, but I believe it absolutely means involvement with local organizations: Kiwanis, Elks, Rotary, religious groups, etc.


The last flight I was on was American Airlines. We waited in the plane while they tried to figure out how to start it, because the auxiliary power unit was out and the generator American uses to start planes with no APU was also broken, so they had to borrow one from another airline. And no APU also meant no air conditioning until the plane was started.

It was only a 30 minute delay but the heat made it miserable.

I paid for a name-brand airline, paid to choose a decent seat, and could have paid for more upgrades, but no amount of money could have spared me from waiting out a delay in a hot cabin because the airline failed to maintain its equipment. The folks in first class faced the same miserable heat.

It's a market for lemons. Paying more doesn't assure quality, it just means you spent more money to get screwed. So people aren't willing to pay.


Was it an afternoon/evening flight, on a Thursday, Friday or Saturday? Was it in Orlando, Miami or Newark in the summer?

flyontime.app helps you with this (I know, massive plug, but hugely relevant to this discussion and 100% free with no strings attached).


Please create your own ShowHN thread rather than spamming this one. While you're at it be sure to explain how your Chrome extension will fix the problem of a major carrier not properly maintaining its equipment.


It refused and I followed up with "why not?" and I passed.

Until then, the LLM was infuriating. It kept misunderstanding what I was saying and then calling me a bot.


It's one sensor in both cases, and in the latter case you can do so much more: change the thresholds in a software update, detect when the lid is in the process of closing, and apply hysteresis. On a simple switch there's an angle where vibration could cause it to bounce between reading open and closed, but with an angle sensor you can use different thresholds for the open-to-closed and closed-to-open transitions.

But most of all...you don't have to commit to a behavior early in the design process by molding the switch in exactly the right spot. If the threshold you initially pick isn't perfect, it's much easier to change a line of code than the tooling at the manufacturing plant.
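The hysteresis idea takes only a few lines to sketch. The thresholds and function below are illustrative, not from any real firmware:

```python
# Hypothetical sketch of hysteresis on a lid-angle sensor. The angles
# and threshold values are made up for illustration.

CLOSE_BELOW_DEG = 10.0   # report "closed" only once the angle drops below this
OPEN_ABOVE_DEG = 20.0    # report "open" only once the angle rises above this

def update_lid_state(angle_deg, currently_open):
    """Return the new open/closed state, with hysteresis.

    A single switch flips at one angle, so vibration near that angle makes
    it chatter. With an angle sensor we use a different threshold for each
    direction, so small oscillations can't toggle the state.
    """
    if currently_open and angle_deg < CLOSE_BELOW_DEG:
        return False  # lid has clearly closed
    if not currently_open and angle_deg > OPEN_ABOVE_DEG:
        return True   # lid has clearly opened
    return currently_open  # inside the dead band: keep the previous state

# A noisy reading oscillating around 15 degrees never changes the state:
state = True
for angle in [16, 14, 16, 14, 15]:
    state = update_lid_state(angle, state)
```

And since the thresholds are just constants in code, moving them later is a one-line change rather than new tooling at the plant.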


>Most of the complexity of serialization comes from implementation compatibility between different timepoints.

The author talks about compatibility a fair bit, specifically the importance of distinguishing a field that wasn't set from one that was intentionally set to its default value, and how protobufs punted on this.

What do you think they don't understand?
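For anyone who hasn't hit this, the unset-versus-default distinction is easy to show in plain Python; the class and field names here are made up, not protobuf API:

```python
# Sentinel distinct from every legitimate value, so "never set" and
# "explicitly set to the default" remain distinguishable.
_UNSET = object()

class Settings:
    def __init__(self, retries=_UNSET):
        self._retries = retries

    @property
    def retries(self):
        # Fall back to a default of 3 when the field was never set...
        return 3 if self._retries is _UNSET else self._retries

    def has_retries(self):
        # ...but keep the ability to ask whether it was set at all.
        return self._retries is not _UNSET

a = Settings()            # field absent
b = Settings(retries=3)   # field explicitly set to the default value
assert a.retries == b.retries == 3
assert not a.has_retries() and b.has_retries()
```

With proto3's original scalar fields, `a` and `b` serialize identically, which is exactly the punt the author complains about.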


If you see some statements like below on the serialization topic:

> Make all fields in a message required. This makes messages product types.

> One possible argument here is that protobuffers will hold onto any information present in a message that they don't understand. In principle this means that it's nondestructive to route a message through an intermediary that doesn't understand this version of its schema. Surely that's a win, isn't it?

> Granted, on paper it's a cool feature. But I've never once seen an application that will actually preserve that property.

Then it is fair to raise eyebrows on the author's expertise. And please don't ask if I'm attached to protobuf; I could roast protocol buffers for their design mistakes for hours. It's just that the author makes a series of wrong claims, presumably due to their bias toward principled type systems and their inexperience working on large-scale systems.


> If you see some statements like below on the serialization topic:

> Make all fields in a message required. This makes messages product types.

> Then it is fair to raise eyebrows on the author's expertise.

It's fair to raise eyebrows on your expertise, since required fields don't contribute to b/w incompatibility at all, as every real-world protocol has a mandatory required version number that's tied to a direct parsing strategy with strictly defined algebra, both for shrinking (removing data fragments) and growing (introducing data fragments) payloads. Zero values and optionality in protobuf are one version of that algebra; it's the most inferior one, subject to lossy protocol upgrades, and the easiest for amateurs to design. The next level is when the protocol upgrade is defined in terms of bijective functions and other elements of symmetric groups, which can tell you whether a newly announced data change can be carried forward (a new required field) or dropped (a removed field), as long as both the sending and receiving ends can derive the new compound structures from previously defined pervasive types (the things protobuf calls oneofs and messages, for example).


What you describe, using many completely unnecessary mathematical terms, is not only not found in “every real-world protocol” but is in fact virtually absent from the overwhelming majority of protocols in actual use, with the notable exception of the kind of protocol that gets a four-digit-numbered RFC describing it. Believe it or not, in the software industry nobody defines a new “version number” with a “strictly defined algebra” when they want to add a new field to a communication protocol between two internal backend services.


> What you describe using many completely unnecessary mathematical terms

Unnecessary for you, surely.

> Believe it or not, in the software industry nobody defines a new “version number” with a “strictly defined algebra” when they want to add a new field to a communication protocol between two internal backend services.

Name a protocol that doesn't have a version number, or one without a defined algebra in the form of the spec clarifications that accompany each new version. The word "strictly" in "strictly defined algebra" refers to the fact that you cannot evolve a protocol without publishing the changed spec; you're strictly obliged to publish a spec, even a loosely defined one with lots of omissions and zero values. That's the inferior algebra protobuf has, but you're free to think it's unnecessary and doesn't exist.


Instead of just handwaving about whether it's necessary or not, why not point to any protocol that relies on that attribute, and we can then evaluate how important that protocol is?


Yeah. And for anyone curious about the actual content hidden under the jargon-kludge-FP-nerd parent comment, here's my attempt at deciphering it.

They seem to be saying that whenever you make a schema B, you have to publish code that can convert a value from schema A to schema B... and back. This is the "algebra". The "and back" part makes it bijective. You do this at the level of your core primitive types so that it's reused everywhere, which is what they meant by "pervasive", and it ties into the whole symmetric-groups thing.

Finally, it seems like when you're making a lossy change, where a bijection isn't possible, they want you to make it incompatible. i.e, if you replaced address with city, then you cannot decode the message in code that expects address.
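If I've deciphered it right, a minimal sketch of the idea might look like this (all names and fields are hypothetical):

```python
# Each schema bump ships an upgrade function and, when the change is
# lossless, a matching downgrade, so an intermediary that only knows
# the other version can still round-trip messages.

def upgrade_v1_to_v2(msg):
    # v2 adds a required "country" field; supply a defined default.
    return {**msg, "version": 2, "country": "unknown"}

def downgrade_v2_to_v1(msg):
    # Inverse of the upgrade on v2 messages produced from v1: drop the
    # added field. Together the pair forms a bijection on that subset.
    out = {k: v for k, v in msg.items() if k != "country"}
    out["version"] = 1
    return out

v1 = {"version": 1, "city": "Oslo"}
assert downgrade_v2_to_v1(upgrade_v1_to_v2(v1)) == v1
```

A lossy change (say, dropping "city" entirely) has no inverse, so under this reading it must be published as an incompatible new version rather than silently tolerated.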


> since required fields don't contribute to b/w incompatibility at all, as every real-world protocol has a mandatory required version number that's tied to a direct parsing strategy with strictly defined algebra

I know at least ten tech companies with billions of dollars in revenue that do not fit your description. This comment makes me wonder if you have any experience working on real-world distributed systems. Oh, and I'm pretty sure you did not read Kenton's comment; he already addressed your point precisely:

> This is especially true when it comes to protocols, because in a distributed system, you cannot update both sides of a protocol simultaneously. I have found that type theorists tend to promote "version negotiation" schemes where the two sides agree on one rigid protocol to follow, but this is extremely painful in practice: you end up needing to maintain parallel code paths, leading to ugly and hard-to-test code. Inevitably, developers are pushed towards hacks in order to avoid protocol changes, which makes things worse.

I recommend doing your homework before making such a strong argument. Reading a five-minute comment is not that hard, and you can save yourself a lot of embarrassment by doing so.


Is this satire?

