For lying, just as with coding, you need to develop a ruleset: a framework you can apply consistently depending on the person's role in your life or career. You need to divide your lies into a limited number of alternative realities that are easy to keep track of. Then, depending on the role of the person you're communicating with, you adhere to one reality or another. This ensures that everyone in the same group operates within the same reality... So when people who exist in the same reality talk to each other, they will find you logically consistent. The challenge is mostly ensuring that people from different groups/realities you've crafted never meet or talk to each other. With today's social media filter bubbles, this barely requires any effort; if the big guys manage it with billions of people, surely you can do it with just your own network.
You could maximize your gains by having a custom-crafted reality tailored for each individual, but the risks go up by an order of magnitude, since you have to put up and maintain more information barriers between the participants. Also, they will never trust you fully, because full trust requires confirmation from a third party... And you can never create that social confirmation if every person who knows you holds a different reality about you in their mind. It's better to forego some of the bigger potential gains of one-person-one-reality and instead take a lower-risk approach: fewer 'shared realities' with significant overlap between them, where the discrepancies are just enough to yield some benefit for you, while the overlap provides the social support needed to generate a certain level of authenticity if doubts arise around your more creative money-maker narratives.
The way to lie properly is to fully believe the lie. If I choose to lie, I immediately accept that the lie is now truth and I forget what the prior truth was.
But how do you arbitrage situations where the yields are concentrated in belief discrepancies between two different groups? You just pick the group with the highest yield potential and give up on the other one? Risky. What if there is an overarching narrative change?
This is especially problematic when the group which provides the highest yield potential is further from ground truth because ground truth exerts a kind of gravity on all narratives.
The scapegoating threat is too high for my taste. I'm not well versed in scapegoat selection. Being too strongly tied to a narrative puts you in the crosshairs of the big guys if things don't pan out.
The problem with full narrative buy-in is that it's impossible to spin out of it later because you didn't plan ahead. Some positions are easy to spin out of while others are essentially impossible. Full narrative buy-in doesn't play well with position selection IMO.
It was a joke about the feature of the same name in the context of regular expressions.
They'll ask a series of incriminating questions, resulting in negative answers from you that can later be shown to be false by inference. They may get you to assert that you have never been to a certain location or never taken a certain action, accept that assertion, and then get you to add details to it. Later they'll show that one of your assertions was false, and therefore that all of the other dependent ones must be as well.
Watching the White House Press Secretary deal with reporters is good insight into how deceit (and everyone knows it) is done in broad daylight at the highest levels.
One of the many funny quirks of Western "civilization" that we don't talk about in any really serious way. Humans are so weird.
I'm no database expert, but from what I can tell: this is a Spanner-like (clock-synchronized) distributed synchronization mechanism for live, high-volume event streams?
It's different from Spanner. Materialize uses "virtual" (Lamport-style) clocks to version data; they're more like transaction ids than physical time. The versions of different data are compared in such a way that outputs are always internally consistent with one another (though delivered eventually).
This is more complex than it sounds when you include disparate input sources, calculations that may be recursive (and distributed), etc.
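A toy sketch of the idea (not Materialize's actual API, just an illustration of logical-clock versioning): updates from independent sources are tagged with a logical timestamp, and an output is only computed up to the "frontier" that every input has completed, so the result never mixes finished and in-flight versions.

```python
# Illustrative sketch of logical-clock versioning across sources.
# All names here (VersionedStream, consistent_join) are made up for
# the example; they are not part of any real system's API.

class VersionedStream:
    def __init__(self):
        self.updates = []   # list of (logical_time, value)
        self.frontier = 0   # all logical times < frontier are complete

    def push(self, time, value):
        self.updates.append((time, value))

    def advance(self, time):
        self.frontier = time

def consistent_join(a, b):
    """Sum values from both streams, but only up to the minimum
    frontier, so the output reflects one coherent logical time."""
    t = min(a.frontier, b.frontier)
    total = sum(v for (ts, v) in a.updates if ts < t)
    total += sum(v for (ts, v) in b.updates if ts < t)
    return t, total

a, b = VersionedStream(), VersionedStream()
a.push(1, 10); a.advance(3)
b.push(2, 5);  b.advance(2)   # b has only completed times < 2
print(consistent_join(a, b))  # joins at t=2: only a's t=1 update counts
```

The output is (2, 10): b's update at time 2 is excluded because b hasn't confirmed that time 2 is complete yet, which is exactly the "strongly consistent, though eventual" behavior described above.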
>The Problem with Lying is Keeping Track of All the Lies
Lie consistently. Lie in the same way. Keep using the same lies. Make an alternative story in your mind and connect all lies together so they are consistent and make sense. And practice a lot. Hard work beats talent.
Does not work. You can make this work with an alternative identity and heavy investment, like an undercover cop, a psychopath, or someone starting over in life. In all other cases there are always tells, as obvious as a bad poker player's, that make it not worth the effort.
This article is in the context of stream processing, which is a bit of a special case, but:
Whenever somebody is making a lot of hot air about consistency in their chosen SQL database, they tend to conveniently sweep under the rug how common it is to use read-only replicas that are fed by asynchronous replication and caches that are updated when the programmer remembers to / that use consistency-ignoring TTLs.
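The TTL-cache failure mode above is easy to demonstrate. This is a minimal sketch (hypothetical names, nothing database-specific): a cache entry with a fixed TTL keeps serving a stale value after the primary has changed, regardless of any consistency guarantee the database itself offers.

```python
# Sketch of a consistency-ignoring TTL cache in front of a "primary".
import time

class TTLCache:
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.store = {}          # key -> (value, cached_at)

    def get(self, key, load):
        hit = self.store.get(key)
        now = time.monotonic()
        if hit and now - hit[1] < self.ttl:
            return hit[0]        # may be stale: the TTL ignores writes
        value = load(key)
        self.store[key] = (value, now)
        return value

primary = {"balance": 100}
cache = TTLCache(ttl_seconds=60)

print(cache.get("balance", primary.get))  # 100, loaded from primary
primary["balance"] = 42                   # write goes to the primary...
print(cache.get("balance", primary.get))  # ...but the cache still says 100
```

Until the TTL expires, readers see 100 while the primary says 42; no amount of serializability in the database fixes that.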
In the best possible scenario, a distributed DB can be as truthful as a single DB.
In the best possible scenario, a DB with snapshot isolation can be as truthful as a strictly serialisable DB.
In both those cases, your truth is at the whim of a programmer committing buggy code.
When I want truth, I write append-only events and fold my view over them. TFA tried to argue a point about eventual-consistency never settling, but that's how the world works. What's the final balance of your bank account?
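A minimal version of "write append-only events and fold my view over them" (names are illustrative): the balance is never stored, only derived from the log, so any "current" value is explicitly the fold over the events seen so far, never a final answer.

```python
# Event-sourcing sketch: append-only log, view derived by folding.
from functools import reduce

events = []                      # append-only log of (kind, amount)

def append(kind, amount):
    events.append((kind, amount))

def balance(log):
    """Fold the view over the log; there is no 'final' balance,
    only the balance as of the events seen so far."""
    return reduce(
        lambda acc, e: acc + e[1] if e[0] == "deposit" else acc - e[1],
        log,
        0,
    )

append("deposit", 100)
append("withdraw", 30)
print(balance(events))           # 70, as of these two events
append("deposit", 5)
print(balance(events))           # 75: the view only settles per-fold
```

Which is the point about the bank account: the "balance" is always a fold up to some moment, and a new event can always arrive.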
Note to 6 out of the 7 commenters so far: this article is about distributed systems, not lying per se. I'm hoping this is not a bellwether of how many HN readers actually read the article.
Most of the earliest comments seem to be like this; they just aren't usually as obviously off-topic as they are here (because the title usually matches the topic better). Someone who reads the article and thinks about it, assembling information from their own past as well as quotes and sources, simply can't write a response as quickly as someone who throws out a one-liner based only on the title.
It does happen more on certain types of articles (those that "everyone" feels they have a valid opinion on) but I have noticed this on pretty much all articles. Sometimes the commenters don't even read other comments that warn people what is not relevant to TFA.
I was thinking that. It would be an interesting idea for a ranking penalty: underweight early comments to compensate for the momentum they build just from being early.
Nothing wrong with that. But the (toplevel) comments should still reflect the article. Otherwise it can be quite annoying for people who do read the article.
The trick to this is to downvote or flag the offtopic comments and if you want to comment, write an on-topic comment. Writing meta about other comments just adds to the offtopic commentary and ends up working against the thing you're trying to improve.
... so, like a significant number of articles on HN where all the comments only discuss the title or something tangentially related to it instead of the actual article? Not sure why it's any different here than on other posts.
I disagree. It's just the nature of how conversation works and it gives us interesting discussions and insights. I think it's more annoying that you want to try to control the direction discussions can take.
I haven't read most of your comment, but you're entirely and embarrassingly wrong at every conceivable level. The guidelines don't say anything about complaining, only that you shouldn't comment on whether or not someone read the article.
This one is just a magazine profile of a musician. It was titled "First recording of tinnitus raises new questions", but instead of sharing the recording, it was an interview with a musician that shared her album pieces. I see that it was recently renamed to "An Interview with Lola De La Mata about tinnitus", which is more accurate; I would not have opened it under that title.
https://news.ycombinator.com/item?id=40600748
Here's another one titled "Pleasure or Pain? He Maps the Neural Circuits That Decide." It's also a profile piece, and half the article is full page pictures of the scientist and his life story. It's just a magazine article on the man, and not really going into his work. It actually has "magazine" in the name so I should have expected that.
https://news.ycombinator.com/item?id=40238788
And then of course there's the current article, which has a misleading headline and is actually an ad for their services. Hey, pretty cheap, but not what I'm looking for here.
The promise of lies is that they allow the impossible: mutually inconsistent ideas coexisting. You don't have to track anything. You just lie again and again. That's the beauty of it. If you start tracking everything, you may as well tell the truth and use your brain.