
I got curious and validated your source [1], to pull the exact quote:

"The proportion of Connecticut gambling revenue from the 1.8% of people with gambling problems ranges from 12.4% for lottery products to 51.0% for sports betting, and is 21.5% for all legalized gambling."

Without going into details, I do have some ability to check whether these numbers actually "make sense" against real operator data. I'll try to sense-check whether the data I have access to roughly aligns with this or not.

- the "1.8% of people" being problem gamblers does seem roughly correct, per my own experience

- but those same 1.8% being responsible for 51% of sportsbook revenue does not align with my intuition (which could be wrong! hence why I want to check further...)

- it is absolutely true that sportsbooks have whales/VIPs/whatever-you-call-them, and the general business model is indeed one of those shapes where <10% of the customers account for >50% of the revenue (using very round imprecise numbers), but I still don't think you can attribute 51% to purely the "problem gamblers" (unless you're using a non-standard definition of problem-gambler maybe?)


I'm sure nobody cares, but the data I can check shows a couple of interesting observations (I won't call them conclusions; that's too strong):

- Yes, you can find certain slices of 1.8% of customers, that would represent 50%+ of revenue... But this is usually pretty close to simply listing out the top 1.8% of all accounts by spend

- Therefore, to support the original claim, one would essentially have to accept, almost by definition, that nearly all of the top revenue accounts are "problem gamblers" and almost no one else is... But this doesn't pass a basic smell test: population-wise there are more "poor" problem gamblers than "rich" ones (because there are a lot more poor people in general than rich ones), so it's very unlikely that nearly all of the 1.8%-of-the-population problem gamblers also happen to overlap so heavily with the top 1.8% of customer accounts by revenue. (A rough sketch of that overlap check is below.)
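
For anyone who wants to replicate that kind of sense check, here is a minimal sketch of what I mean, assuming a hypothetical per-account dataframe with a net-revenue column and a problem-gambler screening flag (the column names, the flag, and the data are my own illustrative assumptions, not anything from the cited study or from real operator data):

    # Hypothetical sanity check: compare the revenue share of the top 1.8% of
    # accounts by spend with the revenue share of accounts flagged as problem
    # gamblers, and see how much the two slices actually overlap.
    import pandas as pd

    def revenue_share_check(df: pd.DataFrame, top_pct: float = 0.018) -> dict:
        total = df["net_revenue"].sum()
        n_top = max(1, int(len(df) * top_pct))
        top_accounts = df.nlargest(n_top, "net_revenue")
        flagged = df[df["problem_flag"]]
        return {
            "top_slice_revenue_share": top_accounts["net_revenue"].sum() / total,
            "flagged_revenue_share": flagged["net_revenue"].sum() / total,
            # The 51% claim implicitly requires this overlap to be near-total.
            "flagged_accounts_in_top_slice": flagged.index.isin(top_accounts.index).mean(),
        }

The third number is exactly the smell test above: if the flagged accounts turn out to be essentially the same set as the top-spend slice, you are close to just relabeling whales as problem gamblers, which is the definitional concern raised above.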


In such scenarios (data engineering / DS / analytics is my personal background), I have learned not to underestimate the value of explicitly declaring, within Team X, that person X1 is dedicated to line L1, person X2 is dedicated to line L2, etc. (similar to your last line about embedding a person with that line of business).

In theory, it doesn't actually "change" anything, because Team X is still stuck supporting exactly the same number of dependencies + the same volume and types of requests.

But the benefit of explicit >>> implicit, the clarity/certainty of knowing who-to-go-to-for-what, the avoidance of context switching + the ability to develop expertise/comfort in a particular domain (as opposed to the team trying to uphold a fantasy of fungibility or that anyone can take up any piece of work at any time...), and also the specificity by which you can eventually say, "hey I need to hire more people on Team X, because you need my team for 4 projects but I only have 3 people..." -- all of that has turned out to be surprisingly valuable.

Another way to say it is -- for Team X to be stretched like that initial state is probably dysfunctional, and terminally so, but it's a slow kind of decay/death. Rather than pretending it can work, pretending you can virtualize the work across people (as if people were hyper-threads in a CPU core, effortlessly switching tasks)... making it discrete/concrete/explicit, nominating who-is-going-to-work-on-what-for-whom, is actually a form of escalation: it forces the dysfunction to the surface, and forces the organization to confront a sink-or-swim moment sooner than it otherwise would have (vs. just limping on, pretending you can stay on top of the muddled mess of requests that keep coming in, while you're stuck treading water and drowning slowly).

---

Of course, taking an accelerationist stance is itself risky, and those risks need to be managed. But for example, if the reaction to such a plan is something like, "okay, you've created clarity, but what happens if person X1 goes on vacation/gets-hit-by-bus, then L1 will get no support, right?"... That is the entire purpose/benefit of escalating/accelerating!

In other words, Team X always had problems, but they were hidden beneath a layer of obfuscation due to the way work was being spread around implicitly... it's actually a huge improvement if you've transformed a murky/unnameable problem into something as crisp and quantifiable as a bus-factor=1 problem (which almost everyone understands more easily/intuitively).

---

Maybe someday Team X could turn itself into a self-service platform, or an "X-as-a-service" offering, where the dependent teams do not need to have you work with or for them, but rather just consume your outputs, your service(s)/product(s), etc. at arm's length. So you probably don't want to stay in this embedded or explicit "allocation" model forever.


The most apt framing I've read for reasoning about AI is to treat it like an extremely foreign, totally alien form of intelligence. Not necessarily that the models of today behave like this, but we're talking about the future, aren't we?

Just framing your question against a backdrop of "human benevolence", and implying this is a single dimension (just a scalar value that could be higher or lower), is already too biased. You assume that logic which applies to humans can be extrapolated to AI. There is not much basis for this assumption, in much the same way that there is not much basis to assume an alien sentient gas cloud from Andromeda would operate on the same morals or concept of benevolence as us.


A purely technology-minded compromise on this question (i.e., how to support both the "good" and "bad" kinds of recording) is probably something along the lines of expiry, enforcing a lack of permanence as the default (kind of like a digital-age, recording-centric version of "innocent until proven guilty", which honestly is one of the greatest inventions in the history of human legal systems). Of course, one should never make societal decisions purely from a technological practicality standpoint.

Since you can't be sure what is "bad"/illegal, and people will just record many things anyways without thinking too much about it --> then the default should be auto-expiring/auto-deletion after X hours/days, unless some reason or some confirmation is provided to justify its persistence.

For example, imagine we lived in a near-future where AI assistants were commonplace. Imagine that recording was ubiquitous but legally mandated to default to "disappearing videos" like Snapchat, across all the major platforms (YouTube, TikTok, X, Twitch, Kick, etc.). Imagine that every day, you as a regular person doing regular things get maybe 10000 notifications of "you have been recorded in video X on platform Y, do you consent for this to be persisted?", and that law enforcement has to go through a judge (kind of like a search warrant) to file things like "persistence warrants", and maybe there is another channel for concerned citizens who want to persist video of a "bad guy" doing "bad things", where they can request persistence (maybe it's like an injunction against auto-deletion until a review body can look at the request)...

Obviously this would be a ton of administrative overhead, a ton of micro-decisions to be made -- which is why I mentioned the AI-assistant angle, because then I can tell my personal AI helper, "here are my preferences, here is when I consent to recording and here is when I don't... knowing my personal rules, please go and deal with the 10000 notifications I get every day, thanks".

Of course, if there's disagreement or lack of consensus, some rules have to be developed about how to combine different parties' wishes (e.g. take a recording of a child's soccer game, where maybe 8 parents consent to persistence and 3 don't... perhaps it's majority rule so the persistence side wins, but then the majority has to pay the cost of API tokens for a blurring/anonymization service that protects the 3 who didn't want to be persisted -- that could be a framework for handling disputed outcomes?)
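
To make that last dispute-resolution rule concrete, here is a toy sketch of the "majority decides, majority pays for anonymization" idea (everything in it -- the function, the cost model, the names -- is invented for illustration; this is not a real platform API or a policy proposal):

    # Toy sketch: majority rule decides whether a recording persists past the
    # default expiry; if it does, the consenting majority splits the cost of
    # anonymizing the dissenters. All names and numbers are illustrative.
    from dataclasses import dataclass

    @dataclass
    class PersistenceDecision:
        persist: bool
        blur_subjects: list       # people to anonymize if the video persists
        cost_per_consenter: float

    def resolve_dispute(consenting, dissenting, blur_cost_per_subject=1.0):
        if len(consenting) <= len(dissenting):
            # No majority for persistence: the auto-deletion default wins.
            return PersistenceDecision(False, [], 0.0)
        total_cost = blur_cost_per_subject * len(dissenting)
        return PersistenceDecision(True, list(dissenting), total_cost / len(consenting))

    # The soccer-game example from above: 8 parents consent, 3 do not.
    print(resolve_dispute([f"parent{i}" for i in range(8)],
                          [f"parent{i}" for i in range(8, 11)]))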

I'm also purposefully ignoring the edge-case problem of what happens if a bad actor wants to persist the videos anyways; in short, I think the best we can do is impose some civil legal penalties if an unwilling participant later finds out you kept their videos without permission.

Anyways, I know that's all super fanciful and unrealistic in many ways, but I think that's a compromise sort of world-building I can imagine, that retains some familiar elements of how people think about consent and legal processes, while acknowledging the reality that recording is ubiquitous and that we need sane defaults + follow-up processes to review or adjudicate disputes later (and disputes might arise for trivial things, or serious criminal matters -- a criminal won't consent to their recording being persisted, but then society needs a sane way to override that, which is what judges and warrants are meant to do in protecting rights by requiring a bar of justification to be cleared).


True of course that dollars is the end goal, but frankly it'd be better if they just took the dollars out of my pocket directly, instead of poisoning my brain first so that they can trick me into giving some dollars...

Obviously I'm being hyperbolic, but I think that if society survives past this phase, our descendants will eventually look back and judge us for letting psychological manipulation count as a valid economic process for generating dollars, in much the same way we might judge our ancestors for building up a whole industry to hunt whales for fuel oil (meaning, we might acknowledge that fuel was important and necessary to power an industrializing society, but we'd still mock how long it took to figure out refining petroleum, and how silly it is to go through the tech tree of fucking whale hunting just to get some fuel).

It is fucking silly/absurd/dangerous, that we go through the tech tree branch of psychological manipulation, just to be able to sell some ads or whatever.


I think you're veering too far into politics on what was originally not a very political OP/thread, but I'll indulge you a tiny bit and also try to bring the thread back to the original theme.

You said a lot of words that I basically boil down to this thesis: the value of "truth" is being diluted in real time across our society (with flood-the-zone kinds of strategies), and there are powerful vested interests who benefit from such a dilution. When I say powerful interests, I don't mean to imply Illuminati and Freemasons and massive conspiracies -- Trump is just some angry senile fool with a nuclear football, who as you said has learned to reflexively use "AI" as the new "fake news" retort to information he doesn't like / wishes weren't true. But corporations also benefit.

Google benefited tremendously from inserting itself into everyone's search habits, and squeezed some (a lot of) ad money out of being your gatekeeper to information. The new crop of AI companies (and Google and Meta and the old generation too) want to do the same thing again, but this time there's a twist -- before, the search+ads business could spam you with low-quality results (in proto-form, starting with the popup ads of yesteryear), but it didn't necessarily try to directly attack your view of "truth". In the future, you may search for a product you want to buy and, instead of being served ads related to that product, be served disinformation to sway your view of what is "true".

And sure, negative advertising has always existed (one company bad-mouthing a competitor's products), but those things took time and effort/resources, and once upon a time we had such things as truth-in-advertising laws and libel laws -- but those concepts seem quaint and unlikely to be enforced/supported by this administration in the US. What AI enables is "zero marginal cost" scaling of disinformation and reality distortion. In a world where "truth" erodes, instead of there being a market incentive for someone to profit off of being more truth-y than other market participants, I would expect the oligopolistic world we live in to conclude that devaluing truth is more profitable for all parties (a sort of implicit collusion or cartel-like effect, with companies controlling the flow of truth the way OPEC controls its flow of oil).


Why would you think it matters what you think? Keep your pretentious, supremacist narcissism to yourself and tell those you abuse what to do, because that is not going to matter here.


This is a really strange reply.


I think they just read my first sentence and decided to take offense immediately. Shrug.

All I meant was, I didn't want to go down a path of talking about Trump... that's a very very dead horse to beat. I thought there were interesting elements to this person's ideas that were worth further discussion, that could be divorced/split-off from the Trump lightning rod, so I tried to do that. I generally thought I agreed with their original ideas, and wanted to build on them or respond to them, without getting sucked into wasting breath on Trump (nobody benefits, regardless if you have left or right leaning views).

I'm sure I could fix some gaps in the way I explained myself, but oh well, just another day on the internet.


As a manager, I am considering enforcing a rule on my team that no README in any repo should ever go stale again --> it should be near-trivial for every dev to ask Claude Code to read the existing README, read/interpret the code as it currently stands, read what's changed in the PR, and then update the README as necessary. This does not mean Claude will be perfect, or that engineers don't need to check that its summaries make sense (they do, and the human is always accountable for the changes at the end of the day); but it does mean that the ordinary laziness we are all guilty of should no longer be an acceptable reason for READMEs going stale. A rough sketch of what such a step might look like is below.
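
For concreteness, here is a minimal sketch of a pre-PR step along those lines. It assumes the Claude Code CLI is installed and supports a non-interactive -p/--print mode (the script, the prompt wording, and the exact invocation are my own assumptions, not an official workflow; the dev still reviews the resulting README diff like any other change):

    # Sketch of a pre-PR helper: feed Claude Code the branch diff and ask it to
    # reconcile README.md with the code as it now stands. Assumes the `claude`
    # CLI is on PATH and supports a non-interactive `-p` (print) mode.
    import subprocess

    PROMPT = (
        "Read README.md and the current code, then read the diff below. "
        "Update README.md so it accurately describes current behavior. "
        "Do not invent features that are not in the code.\n\n"
    )

    def refresh_readme(base_branch: str = "main") -> None:
        diff = subprocess.run(
            ["git", "diff", base_branch, "--", "."],
            capture_output=True, text=True, check=True,
        ).stdout
        # Large diffs may need to be passed via stdin or a file instead.
        subprocess.run(["claude", "-p", PROMPT + diff], check=True)

    if __name__ == "__main__":
        refresh_readme()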


Why have such a rule if the LLM could update the readme ad hoc at any moment? Btw, your ingested readmes will affect your LLM's code generation, and I have observed that more often than not it is better to exclude the readmes from the context window.


No LLM will by default touch a README.md

They will when you run /init, but after that they won't look at it unless directed to do so.


Bold statement


As a thought-exercise -- assume models continue to improve, whereas "using claude-code daily" is something you choose to do because it's useful, but is not yet at the level of "absolute necessity, can't imagine work without it". What if it does become, that level of absolute necessity?

- Is your demand inelastic at that point, if having claude-code becomes effectively required to sustain your livelihood? Does pricing continue to increase until it's 1%/5%/20%/50% of your salary (because hey, what's the alternative? if you don't pay, then you won't keep up with other engineers and will just lose your job completely)?

- But if tools like claude-code become such a necessity, wouldn't enterprises be the ones paying? Maybe, but maybe like health-insurance in America (a uniquely dystopian thing), your employer may pay some portion of the premiums, but they'll also pass some costs to you as the employee... Tech salaries have been cushy for a while now, but we might be entering a "K-shaped" inflection point --> if you are an OpenAI elite researcher, then you might get a $100M+ offer from Meta; but if you are an average dev doing average enterprise CRUD, maybe your wages will be suppressed because the small cabal of LLM providers can raise prices and your company HAS to pay, which means you HAVE to bear the cost (or else what? you can quit and look for another job, but who's hiring?)

This is a pessimistic take of course (and vastly oversimplified / too cynical). A more positive outcome might be that the increasing quality of AI/LLM options leads to a democratization of talent, or a blossoming of "solo unicorns"... personally I have toyed with calling this something like a "techno-Amish utopia", in the sense that Amish people believe in self-sufficiency and are not wholly resistant to technology (it's actually quite clever, what sorts of technology they allow themselves or not), so what if we could take that further?

If there were a version of that Amish mentality of loosely-federated self-sufficient communities (they have newsletters! they travel to each other! but they largely feed themselves, build their own tools, fix their own fences, etc.!), where engineers + their chosen LLM partner could launch companies from home, manage their home automation / security tech, run a high-tech small farm, live off-grid on cheap solar, use excess electricity to mine Bitcoin if they choose to, etc.... maybe there is actually a libertarian world that can arise, where we are no longer as dependent on large institutions to marshal resources, deploy capital, scale production, etc., if some of those things are more in reach for regular people in smaller communities, assisted by AI. This of course assumes that the cabal of LLM model creators can be broken, and that you don't need to pay for Claude if the cheaper open-source-ish Llama-like alternative is good enough.


Well my business doesn't rely on AI as a competitive advantage, at least not yet anyways. So as it stands, if claude got 100x as effective, but cost 100x more, I'm not sure I could justify the cost because my market might just not be large enough. Which means I can either ditch it (for an alternative if one exists) or expand into other markets... which is appealing but a huge change from what I'm currently doing.

As usual, the answer is "it depends". I guarantee though that I'll at least start looking at alternatives when there's a huge price hike.

Also I suspect that a 100x improvement (if even possible) wouldn't just cost 100 times as much, but probably 100,000+ times as much. I also suspect that an improvement of 100x will be hyped as an improvement of 1,000x at least :)

Regardless, AI is really looking like a commodity to me. While I'm thankful for all the investment that got us here, I doubt anyone investing this late in the game at these inflated numbers is going to see a long-term return (other than Ponzi selling).


I do agree that "unlimited interns who don't improve much" is less practically useful than it might seem at first, but OTOH "never improve much" seems unrealistic, given the insane progress of the field in the last 3ish years (or think back 5 years and tell me who was realistically predicting tools like Claude Code to even exist by 2025).

Also, there's a decently large subset of small startups where there's 1 technical founder and a team of contract labor, trying to build that first MVP or cranking out early features in a huge rush to stay alive, where yeah, cheap unlimited interns might actually be meaningfully useful or economically more attractive than whatever they're doing now. Founders kind of have a perverse incentive here: a CTO doesn't need to solo-code the first MVP, and also doesn't need to hand out equity or make early hires quite as early, if unlimited interns can scale that CTO's solo productivity for a bit longer than in the before-times.


> but OTOH "never improve much" seems unrealistic, given the insane progress of the field in the last 3ish years

The point is that no one should hire an intern or a junior because they think it will improve their team's productivity. You hire interns and juniors because there's a causal link between "I hired an intern and spent money training them" and "they joined my company full time and a year later are now productive, contributing members of the team". It's an investment in the future, not a productivity boost today.

There is no causal link between "I aggressively adopted Claude Code in 2025" and "Claude Code in 2026 functions as a full software engineer without babysitting". If I sit around and wait a year without adopting Claude Code that will have no measurable impact on Claude Code's 2026 performance, so why would I adopt it now if it's still at intern- or junior-level skill?

If we accept that Claude is a junior-level contribution then the rational move is to wait and watch for now and only adopt it in earnest if and when it uplevels.


Precisely - AI getting better or not has nothing to do with my burning cycles using it. My juniors do improve based on my effort. I can free ride on AI getting good enough later (wait) whereas I cannot with my own team of juniors.

> 1 technical founder and a team of contract labor, trying to build that first MVP or cranking out early features in a huge rush

Having worked in environments with a large number of junior contractors... this is generally a recipe for a lot of effort with resulting output that neither works technically nor actually delivers features.


To your last point -- I didn't say large number of junior contractors would write good code or whatever. The change that is happening in the startup scene now, as compared to say 10 years ago, is more about lowering the barrier to MVP and making it easier/cheaper for startups to experiment with finding product market fit, than anything to do with "productivity" or code quality or whatever.

We're probably just talking past each other, because the thing you care about is not the thing I care about. I am saying that it used to cost some reference benchmark of $X/idea to iterate as a startup and experiment with ideas, then it became 0.5X because gig workers and overseas contractors became more accessible and easier to work with, and now it's becoming 0.1X because of LLMs and coding agents. I am not making any argument about quality being better/good/equal, nor am I offering any conversion chart where 10 interns or 100 LLM agents equal 1 senior engineer or something... Quality is rarely (never?) the deciding factor when it comes to early pre-seed iteration, as a startup tries to gasp and claw for something resembling traction. The cost to iterate, and the benefit of having more iterations, can both be improving even if each iteration's quality is declining.

I'm simply saying that if I were a founder with $10k to spend on testing new ideas, I could test a helluva lot more ideas today (leveraging AI) vs. what I could have done 5 years ago (using contractors) vs. what I could have done 10-20 years ago (hiring FTEs just to test out ideas is frankly kind of absurd when you think about how expensive that is). I am not saying that $10k worth of Claude Code is going to buy me a production-grade, super fantastic, amazing, robust, scalable, elegant architecture or whatever, but it sure as heck can buy me a good-enough working prototype and help me secure a seed round. Reducing that cost of experimentation is the real revolution (and whether interns can learn or will pay off over time is a wholly orthogonal topic that has no bearing on this cost-of-experimentation revolution).


Yeah in this context I get what you are talking about. I got through your first paragraph and thought of the startup founders using overseas / gig workers a decade ago to test ideas.. which is exactly where you went!


"hired an intern and spent money training them" and "they joined my company full time and a year later are now productive"

Why would I do that if I can have somebody else pay for the training and then poach them when they are ready?


What are you doing to actually keep them at your company? I left a company after they invested a lot in training me. They gave me very little raises and already paid poorly, no guaranteed bonus, bad vacation hours, and no opportunities for promotion. They were shocked when I left, even though I had asked for very modest raises and was way more productive than the "seniors" at the company.

Most companies outside of FAANGs treat their talented juniors like crap, so of course they'll leave.


Which is exactly why no one's hiring juniors anymore. It made sense back when the market for hiring engineers was super competitive and it was easier to gamble on being able to keep a junior than it was to try to snag a senior. But now that there are seniors galore on the market who would bother with a junior?


> Also, there's a decently large subset of small startups where there's 1 technical founder and a team of contract labor, trying to build that first MVP or cranking out early features in a huge rush to stay alive, where yeah, cheap unlimited interns might actually be meaningfully useful or economically more attractive than whatever they're doing now

That's when experienced developers are a huge plus. They know how to cut corners in a way that will not hurt that much in the long term. It's more often the intern-level folks who propose stuff like next.js, kubernetes, cloud-native, ... that will grind you to a halt once the first bugs appear.

A very small team of good engineers will get you much further than any army of intern level coders.


Yeah "actually good engineers" are like a 10:1 ratio with intern/new college hire/junior consultant level.

Not to generalize too much, but if you are contracting out to some agency for junior levels, you are generally paying a markup on coders who couldn't find better direct-hire jobs to start with. At least at the mid/senior level you can get more of a hired-gun deal with someone who is between gigs or working part time, or buy a share of the time of someone you couldn't afford full-time.

In fact, with most junior consultants you are basically paying for the privilege of training other people's employees, who will then be billed back to you at a higher rate when they improve... if they don't move on first.


Your comment somehow reminded me of this quote: "In a society that profits from your self doubt, liking yourself is a rebellious act." (https://www.goodreads.com/quotes/9371890-in-a-society-that-p...)

That's not a direct response to your concern, but I think this quote applies in a parallel manner -- I've seen this quote applied as a statement about what it means to be "punk", and how simply being content with yourself (meaning you don't fall victim to all the ways society attacks/preys on insecurities or tries to sell you drugs or makeup or clothes or surgery or whatever to change yourself), is actually incredibly "punk". You don't have to dress up weird, or go out and do graffiti, or get into fights... just being content with yourself is "punk", within a capitalist/post-capitalist world.

So, in a similar vein, I think this author is saying that, "caring" is also a form of being "punk", in a world where seemingly not-caring is mainstream now. The thing is, being "punk" doesn't need an external "why" reason to justify it... the whole point of "punk" culture is about authenticity, that just being yourself is what's important, that you don't need a special reason to reject capitalist consumerism or mainstream opiate-of-the-masses media or to dress how you feel instead of how society thinks you should look. In that way, being "punk" is quite Buddhism-aligned actually, to center on existence and enlightenment through self-realization, instead of pursuit of external "why" reasons for doing X or Y.

Caring is the punk thing to do, because it is who you actually are. You don't need a special reason to care, if you subscribe to any kind of "punk" mindset/philosophy about life. Don't care because it will yield better material rewards, get you laid, or whatever. Care, just because.

At least, that's the argument... up to you if you buy it or not.

