Yes, that is correct. However, I think embedding bubblewrap in the binary is a risky design for the end user.
They are giving users a convenience function for restricting the Claude instance’s access rights from within a session.
That's helpful if you trust the client, but what if there is a bug in how the client invokes the bubblewrap container? You wouldn't have this risk if they instead directed you to invoke Claude with bubblewrap yourself.
Additionally, the pattern of putting bubblewrap in front of Claude can be duplicated exactly and applied to other coding agents, so you get consistent access controls for all agents.
I hope the desirability of having consistent access controls across all agents is shared by others. You don't get that property if you use Claude's embedded control. There will always be an asterisk over whether your view and theirs line up with respect to how the controls are implemented.
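For concreteness, here is a minimal sketch of that external pattern: a small wrapper (Python here) that launches an agent under bubblewrap, so the sandbox sits outside the agent entirely. The agent binary name ("claude"), the mount list, and the specific bwrap flags are assumptions you would tune for your own setup, not how any particular tool actually does it:

    #!/usr/bin/env python3
    """Launch a coding agent inside a bubblewrap sandbox, so the access
    controls live outside the agent. Sketch only: binary name, mounts,
    and flags below are assumptions to adapt locally."""
    import os
    import subprocess
    import sys

    # The only writable path exposed to the agent: the project directory.
    project = os.path.abspath(sys.argv[1] if len(sys.argv) > 1 else ".")

    cmd = [
        "bwrap",
        # Read-only system mounts so the agent can execute but not modify the host.
        "--ro-bind", "/usr", "/usr",
        "--ro-bind-try", "/lib", "/lib",
        "--ro-bind-try", "/lib64", "/lib64",
        "--ro-bind-try", "/etc/resolv.conf", "/etc/resolv.conf",
        "--bind", project, project,
        "--dev", "/dev",
        "--proc", "/proc",
        "--tmpfs", "/tmp",
        # Drop all namespaces, then re-enable networking for API calls.
        "--unshare-all",
        "--share-net",
        "--die-with-parent",
        "--chdir", project,
        "--",
        "claude",            # hypothetical agent CLI; any other agent slots in here
    ] + sys.argv[2:]          # forward remaining args to the agent unchanged

    sys.exit(subprocess.run(cmd).returncode)

In practice the agent would also need its config and credential directories bound in (and probably a fuller view of /etc), but the point stands: the same wrapper works for any agent, and the access controls don't depend on how any one agent's embedded sandbox is implemented.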
Do you though? I guess it depends on how you define family. There's family that you rarely see and call family because of the social (even if weak) ties. And then there's family you grew up knowing. The impact of family early in life never goes away; your family early in life shapes you in ways you probably can't comprehend. Reading Scott's work was a family ritual at the breakfast table. I'm sure his work had some part in shaping me in a way that I can't delete.
At my age, he was about as close to family as you can get without being physically there. I grew up reading his comics in our newspaper while eating family breakfast. His work was a part of our family morning ritual. His work was part of pre-internet America when our channels were limited. Our thought and worldview were to some degree shaped by these limited channels.
Right, that seemed like a minor issue. There was also the minor issue of the increase in AI code PRs. Seems like the greater issue was a perception of deterioration of the platform (in their sights for years) and a reasonable path towards migration to another platform.
> Why would you be unwilling to merge AI code at all?
Because structurally it's a flag for being highly likely to waste extremely scarce time. It's sort of like avoiding bad neighborhoods, not because everyone there is bad, but because there is enough bad there that it's not worth bothering with.
What stands out for me in these cases is that the AI sticks out like a sore thumb. Go ahead and use AI, but it's as if its low-effort nature sets users on a course of low effort throughout the whole cycle of whatever they are trying to accomplish.
The AI shouldn't look like AI. The proposed contributions shouldn't stand out from the norm. This includes the entire process, not just the provided code. It's just a bad aesthetic, and for most people it screams "low effort."
Their issue seemed to be the process. They're set up for a certain flow, and jamming that flow breaks it. It wouldn't matter whether it were AI or a sudden surge of interested developers. So it's not a question of accepting or not accepting AI-generated code, but rather of changing the process. That in itself is time-consuming and carries potential risk.
Definitely, with the primary issue in this case being that the PRer didn't discuss with the maintainers before going to work. Things could've gone very differently if that discussion had happened, especially with the intent to use generated code disclosed up front. Though of course there's the risk that disclosure could've led to a preemptive shutdown of the discussion, as there are those who simply don't want to consider it at all.
Right, it's a problem that requires intensive care to address mental health issues. The human resources required for this are always going to be a bottleneck, much more so than housing shortages or funding for programs that are largely self-service (if you can navigate the system, you may not be homeless for long). Building, staffing, and funding such an institution seems like it would be extremely difficult.
SF is currently spending $100K per homeless person. I agree that it will be extremely difficult and that the human resources may be a bottleneck. But that's enough for an average person to live in SF, go out sometimes, and pay for therapy. There must be a way to deploy those funds effectively.
$100k/year, and yeah, the public orgs and non-profits that get the money aren't very efficient. What's worse is that much of that money goes to chronic cases with drug addiction and mental illness, while the people who are just struggling to pay rent (much cheaper than $100k/year) and wind up living in their car often get ignored until they become chronic cases that are no longer easy to help.
I'm sure it is a conflation, but it is directionally correct. We are burning money and making zero progress.
The ethical dilemma is deep. Is forcing someone into an asylum—where they can be sheltered, monitored, and treated—more ethical than giving someone the self-determination to self-destruct on the street?
I don't have the answer, I'm not Kant. But it's a question we have been unwilling to face because it is deeply unsettling. It goes against our liberal instinct.
I think the only way is to give up a bit of freedom, both for the person's best interest AND society's.
This never works though because once you decide to do this, it is abused. So to prevent abuse, you use law enforcement. What I mean is that we decided freedom is more important than forcing treatment. And since there are no other levers, law enforcement is left to deal with the problem.
The question is what you do in the face of abuse of the system. Do we shut everything down, walk away, pretend it didn't happen, and just not talk about it, like an emotionally immature six-year-old who pooped in the living room? Because that doesn't make the problem go away, but that's what we did.
You set up a system to help people that could be abused, and then you set up oversight committees, external auditors, regulators, boards of trustees, an ombudsman, an inspector general. A giant pile of bureaucrats and bureaucracy. And yes, abuses will still happen. You get it all on camera, in writing, you find the abuses of the system and you close those down. New abuses happen, you find those loopholes and close them down. The problem is there's no will to do that. The system's broken, so we just threw it out, and the people it was helping got fucked. Instead of, hey, the system's broken, let's fix it.
So instead we got people living in tents with no running water, no sewage, no electricity.
The human resources required to make it work are a bottleneck, but even if we had the resources we need to build a humane modern asylum system, there'd be a whole slew of civil liberties issues.
Spitballing here. I always understood stuff like this as "the system doesn't care about you, it cares about the masses." If the results overwhelmingly look no better than a placebo, then the small number of people it actually helps is sort of irrelevant. The exception might be cases where people are willing to drop a bomb of cash on lifesaving drugs for rare diseases (Pharma Bro got a lot of flak for massively jacking up the price of one of these drugs). I don't know what implications such a study may have in a complex space. I imagine the drug will still be available for those who want to try it, but far less prescribed as a sort of safe default. I doubt drug companies will care much about this, since the patent has long expired.
Looks like G5 is the highest level and the scale system is used by NOAA.