> While the law bans setting higher prices through surveillance pricing, it doesn’t address reducing prices. If a company raises its prices for everyone, and then offers individualized discounts, “suddenly you’ve arrived at the same outcome,” McBrien says.
While I agree with the intent of this law, I don't think it will be effective. If you have a system capable of jacking prices up, you can just multiply the calculated delta by -1 and turn it into a discount, as the sketch below illustrates.
To effectively prevent this practice you would probably need to ban any kind of personalized discount. I don't think we will ever see such a law, nor do I think it would be a good idea.
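A minimal sketch of that workaround, with hypothetical prices and function names: whether the system adds a personalized surcharge to a base price or subtracts a personalized "discount" from an inflated list price, the customer ends up paying the same amount.

    # Hypothetical illustration: the surcharge model and the "discount" model
    # produce the same individualized price.

    BASE_PRICE = 100.00

    def surveillance_surcharge(customer_profile: dict) -> float:
        """Stand-in for whatever model estimates willingness to pay."""
        return 20.0 if customer_profile.get("high_spender") else 0.0

    def price_with_surcharge(profile: dict) -> float:
        # Banned: explicitly raising the price for targeted customers.
        return BASE_PRICE + surveillance_surcharge(profile)

    def price_with_discount(profile: dict) -> float:
        # Workaround: raise the list price for everyone, then hand back
        # the delta (multiplied by -1) as a personalized "discount".
        max_surcharge = 20.0
        list_price = BASE_PRICE + max_surcharge
        discount = max_surcharge - surveillance_surcharge(profile)
        return list_price - discount

    for profile in ({"high_spender": True}, {"high_spender": False}):
        assert price_with_surcharge(profile) == price_with_discount(profile)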
Yeah, sounds like a law that's passed because it sounds/polls good (ie. "affordability"), even though it's addressing a non-existent problem and is trivial to work around.
Uber pays drivers differential rates depending on how desperate they believe the driver to be. I can believe that UberEats demands a higher premium depending on the item and what they infer about you.
Most markets have also had a wide variety of regulations. It seems perfectly reasonable to me that large retail operations would be prohibited from attempting a predatory scheme based on individualized pricing. There's a tangible difference between one-off purchase contracts and selling into the consumer market at large.
Sure, haggling was historically the standard but that just isn't the way these modern operations work. If an outdated practice gets caught in the crossfire when protecting consumers from imminent harm I'm okay with that.
Most pricing laws are built on the idea that this isn't OK. For example, I can't negotiate pricing directly with an automobile manufacturer. I have to go through a dealer so I am "protected".
If you dig around in your hotel room the next time you're there, you'll likely find a statutory "list of prices" - often showing $1,000 or more per night for a room you paid $150 for.
> enough samples that you can apply statistics to find precise locations, in many cases you can de-anonymize the IDs
I think a lot of people don't realize the power of a big enough sample size. With enough samples, even something as innocent-looking as your daily step counter could make you identifiable.
As far as I know we don't have large enough databases to make this happen in practice, but I don't think this is impossible in the future.
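A toy sketch of that idea, with entirely made-up data: treat a person's sequence of daily step counts as a fingerprint and match an "anonymous" trace against a labeled database.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical labeled database: 1,000 users, 90 days of step counts each.
    users = {f"user_{i}": rng.normal(8000, 2500, size=90).round() for i in range(1000)}

    # An "anonymous" trace leaked from some other service: the same person's
    # steps with a bit of measurement noise.
    true_id = "user_42"
    anonymous_trace = users[true_id] + rng.normal(0, 200, size=90)

    # With enough days, the nearest trace by mean squared difference is almost
    # certainly the same person, even though step counts look innocuous.
    def reidentify(trace, database):
        return min(database, key=lambda uid: np.mean((database[uid] - trace) ** 2))

    print(reidentify(anonymous_trace, users))  # -> "user_42"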
I have nothing to back that up, but I wouldn't be surprised if this is a feature.
If these luxury items are being used by society (or at least in some circles) as a proxy for 'success' (i.e. having enough disposable money), it would probably be better if they were also quite fragile. This way you could distinguish between someone who received an expensive gift and someone who has the money to keep buying new items.
I'm not sure how real this is, but I've read somewhere that part of the appeal of expensive glassware was the fact that it was pretty fragile. Serving someone at your house with expensive glassware was a way of saying 'look how much money I've got'.
Just to be clear, I don't think we should get impressed/try to impress people by how much money someone has. But that is a practice as old as time, and it doesn't seem to be going away any time soon.
> It implies life was seeded on earth and not generated via abiogenesis.
I don't think this conclusion is correct. The abiogenesis/panspermia debate is about where life formed. This article only says "we found all the DNA/RNA bases in an asteroid," but there is a HUGE gap between DNA bases and life (i.e. self-replicating organisms).
To make a crude analogy, you could say they found Lego pieces in the asteroid, but that doesn't imply that the first 'Lego kits' on Earth came pre-assembled. They might have, or might not. We don't really have enough information to draw a definitive conclusion. What we know is that we can't rule out the panspermia idea yet.
Let me put it another way: imagine we find clay in an asteroid. Does that alone imply the existence of ceramics in other parts of the universe?
We need these molecules to build a DNA strand, but their existence doesn't imply the existence of other life forms. Maybe there is a process that produces these molecules naturally that we just don't know about yet.
And remember that life (self-replicating organisms) is way more complex than just DNA/RNA. In another crude analogy, you could say that DNA is just the source code; to have life you still need all the hardware to run that code on. (Fun fact: that is the reason people argue about whether a virus is alive or not. Generally a virus has only the RNA necessary for replication, which is why it can only reproduce by taking over another cell. In this analogy it has the source code but not the hardware, so how do we classify it?)
I've been thinking about critical thought in our society from another angle. In my opinion, if you assume that every person employs their critical thinking abilities to reason about the world, you would expect to see a lot of different opinions about the world.
But with each passing day we see the opposite: more and more people are converging on one of a few opinions about each topic. This is great if you want to move the world in a specific direction, but I think it demonstrates that people are exercising their critical thinking abilities less.
AI definitely made this worse, but I think it started long before that.
Another factor that I think contributes negatively to this effect is that our society doesn't really like it when someone is wrong or changes their mind. If we want to encourage people to use their critical thinking skills, we also need to tell them that arriving at bad conclusions is OK; the important thing is to always keep improving.
Why would you expect more critical thought to lead to a wider variety of opinions? It would be like expecting everyone to take a different route out of their neighborhood. To a large extent there's nothing wrong with someone wanting to try a different way, but often nothing is gained from it, either.
The counter hope, of course, is that more critical thought will result in more people discovering some abstract truth out there. I don't think that is realistic, either.
The mundane landing spot, I think, is the likely one. For most things, critical thought is just not much of a benefit; knowledge and understanding are far more beneficial. That is why we don't constantly reinvent how to drive a car. We have largely agreed on mechanisms that work, and it is better to educate folks on how those work than to get people to think critically about the controls.
Going further in that regard, understanding is far more immediately useful than critical deconstruction. Learning about affordances and how they guide you toward what you want to do is far more useful in someone's daily life.
Which is not to say that critical thought in designing said affordances is not good. Just, for most of us, we are not in a position to really impact any of that.
Democracy requires allies, so the overall position will tend to settle into two camps.
I'm not sure how well that reflects people's actual opinions. In many cases I think people don't care much about most topics. They simply accept the position of their allies. Occasionally they even find it abhorrent but necessary.
I think that mass communication has exacerbated that for decades, and AI at most optimizes it a bit further.
I don't really expect fine critical thinking. Most people aren't experts at most things.
But I am a bit surprised at the degree to which people have twisted themselves in knots to justify positions that do not withstand even the slightest scrutiny.
> you would expect to see a lot of different opinions about the world.
It is the age-old debate between know-that and know-how. Understanding the world around us is the point of education, and this means ways of looking at it, insights or theories, and how those insights and theories come about, which is the critical thinking process. I would rather call it thinking from first assumptions, since 'critical thinking' as a term is overused, and I would argue that AI is great at critical thinking in the shallow sense of the term.
If this seems interesting to you, remember that if you are putting $100 on a 99-to-1 bet, you need to win 100 times to make $100 but only need to lose once to lose $100.
And the chance of losing at least once on a 99%-sure bet over 100 rounds is around 60%. Even if you reduce it to 30 rounds, it is still around 30%.
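A quick check of those numbers, assuming each bet wins independently with probability 0.99:

    # Probability of losing at least one bet out of n, if each bet
    # independently wins with probability 0.99.
    for n in (30, 100):
        p_at_least_one_loss = 1 - 0.99 ** n
        print(n, round(p_at_least_one_loss, 3))
    # 30  -> 0.26   (the "around 30%" above)
    # 100 -> 0.634  (the "around 60%" above)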
This may seem smart at first glance, but the math doesn't really check out.
In your scenario you're assuming the dice rolls are all independent. If Polymarket bets were all pure dice rolls, the 60% figure you quoted would hold.
But they aren't independent; there are a lot of correlations, global geopolitics for example.
The way the math works out, 73% of markets resolve to No. If you buy No at $0.73 each time, you would break even.
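A minimal sanity check of the break-even claim, plugging in that 73% figure:

    # If 73% of markets resolve to No and a No share costs $0.73,
    # the expected payout per share exactly equals its price: break-even.
    p_no = 0.73
    price = 0.73
    expected_payout = p_no * 1.00  # share pays $1 if the market resolves No
    print(expected_payout - price)  # 0.0 -> no edge before fees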
> It seems to me that existing good practices continue to work well. I haven't seen any radically new approaches to software design and development that only work with LLMs and wouldn't work without them.
I've been thinking about this lately and I think you are right. LLMs haven't changed what 'good software' is, but they have changed some of the proxies I used to rely on for identifying 'good software'.
In the past I've always loved projects that had good documentation, and I've often used this metric to choose between projects/libraries. But LLMs have turned something that was (IMHO) a good indicator of "care"/"software quality" into something that is becoming irrelevant (see Goodhart's law).
I'm not sure LLMs produce good documentation. I'm open to hearing more opinions on this, but my feeling is that the documentation of LLM-heavy projects is a bit too verbose, a bit off-target, sometimes completely irrelevant, and very repetitive.
Not terrible, but I'll just point my own LLM at it instead of reading it myself like I would with genuinely great documentation.
If you are willing to point your LLM at the docs instead of actually reading them, why not skip the docs and point your LLM directly at the source code? That is what I've been doing recently, and it is why good documentation has become less important to me.
What I do is use 'C-z' and 'fg' to suspend and resume my editor when I need to.
Pressing C-z in neovim drops me back into the terminal so I can do whatever I need to do, and when that is done I just type 'fg' and neovim comes back exactly as it was.
I've been using a POC-driven workflow for my agentic coding.
What I do is use the LLM to ask a lot of questions that help me better understand the problem. Once I have a good understanding, I jump into the code and hand-write the core of the solution. With this core work finished (keep in mind that at this point the code doesn't even need to compile), I fire up my LLM and say something like "I need to do X. Uncommitted in this repo we have a POC for how we want to do it. Create and implement a plan for what we need to do to finish this feature."
I think this is a good model because I'm using the LLM for the things it is good at: "reading through code and explaining what it does" and "doing the grunt work", while I do the hard part of actually choosing the right way to solve the problem.
If you have a large PR, a good summary of "what" changed can help you do a better review.
But I agree with you: when reading PR descriptions and code comments I want a "why", not a "what". And that is why I think most LLM-generated documentation is bad.