I'm continually shocked at how well companies can continue on despite serious internal problems. I do think it usually catches up with them though, and there are a lot of "terminally ill" products out there that are going to die eventually. I've seen those projects first hand, and I've seen the slow decay of the product that happens when a company loses control of its tech stack. It can take years.

I think that cuts both ways though, because I think even the business people would be shocked at how well things can seem to work despite serious dysfunction. If an entire team dies off but 3 months later things seem "fine," the business types will just assume everything is as good as it ever was. I know it's popular these days to say that good tech and good business are not always the same thing, but I do think that many of those same businesses would be a little worried if you actually convinced them that their tech was coming off the rails and was only going to get worse. Some businesses just want to be acquired and don't care what happens after that, but not every business thinks they're on a clock.

If you're big enough, like say you're a trillion dollar megacorp, then important teams folding up and being reborn is just part of your ecosystem. I've seen big tech power through churn for years with nothing but human wave tactics until the business climate changed enough that the team in question became less critical, and that's when they let it die.


There is no "the consumer."


Well, someone is preordering all these games that are almost guaranteed to be unfinished, at best. What should we call them?


suckers?


I agree with your point, but I think the honest answer to your question is that people view creative jobs as aspirational whereas the other "rote" jobs that were being automated away were ones that people would have preferred to avoid anyway.

When we're getting rid of assembly line jobs, or checkout counter staff, or data entry clerks, or any other job that we know is demeaning and boring, we can convince ourselves that the thousands/millions put out of work are an unfortunate side effect of progress. Oh sure, the loss of jobs sucks, but no one should have to do those jobs anyway, right? The next generation will be better off, surely. We're just closing the door behind us. And maybe if our economic system didn't suck so much, we would take care of these people.

But when creatives start getting replaced, well, that's different. Many people dream of moving into the creative industry, not out of it. Now it feels like the door is closing ahead of us, not behind us.


Agree. Amusingly, the authors found evidence that the drugs work: students spent more time focusing on even the easy version of the task. The impulse to "be done" with something and stop focusing on it is one of the things stimulants counteract.

I'm also not a big fan of emphasizing the "cognitively healthy" part of the equation. My understanding is that stimulants do exactly the same thing in a person whether they're "cognitively healthy" or not; they're not the sort of drugs that target a deficiency or clear up some specific problem. The only difference is that some people have more of a need in this area than others.

This reminds me of an old article I read about how psychedelics don't actually "increase connectivity in the brain" like users thought, as though that had anything to do with why people use psychedelics.


>This reminds me of an old article I read about how psychedelics don't actually "increase connectivity in the brain" like users thought, as though that had anything to do with why people use psychedelics.

I don't recall seeing anyone make that argument, but I do tend to avoid woo and pop sci. What is reasonably clear is that psychedelics increase neuroplasticity even in vitro, which is hypothesised as being one of the plausible mechanisms of action for psychedelics as a treatment for mental disorder - they potentially create a window of opportunity for habitual patterns of maladaptive cognition and behaviour to be unlearned.

Some people are very attached to the idea that the qualitative experience of the "trip" is integral to the therapeutic effects of psychedelics, but that is by no means a universal belief; many groups are working on non-psychedelic drugs that exploit this mechanism.

I think it's entirely reasonable to be wary of people justifying their recreational drug use with outsized claims of therapeutic benefits, but in the case of psychedelics there is definitely something of clinical interest happening. I'm quite circumspect about the clinical use of psychedelics, but I think it's highly likely that we are going to see a generation of novel and useful psychiatric drugs emerge based on what we have learned from psychedelics research.

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6082376/


> This reminds me of an old article I read about how psychedelics don't actually "increase connectivity in the brain" like users thought, as though that had anything to do with why people use psychedelics.

That's why some folks say they take psychedelics; they want to justify their drug use for 'fun' as something more beneficial. I wish people were more honest with themselves and others.


Some people take psychedelics for the feeling of connectivity, and there is some talk out there about psychedelics being healthy for the brain and "helping the brain form new connections" (all of that is based on very limited evidence and a lot of woo), but even that isn't what the article was about.

The study behind the article just scanned people's brains while they were on drugs, saw that the electrical activity was more erratic and scattered than normal rather than more active and connected, and concluded that people were using psychedelics based on a myth.


"The drug increases connectivity in the brain" is a completely orthogonal claim to "the drug does something beneficial for me."

In fact a lot of people who have taken (and really cherish) psychedelics would not describe them as fun, and sometimes they're very actively un-fun/terrifying/unenjoyable, and yet can still be beneficial.

What exactly is going on at the level of "brain connectivity" is pretty much unrelated to all of that^


I've had fun with psychedelics, and I also honestly believe they have improved my life.

Yes the trip can be lots of fun, but really, "fun" is widely available once you cross the line and take drugs. MDMA is fun. Amphetamines can be fun. There is no scarcity of "fun" here and I have no issue admitting I've taken those to the possible detriment of my health and for fun.

The difference is that if I had to skip the "trip" part of psychedelics and retain the rest, I'd still do it.


There are also people who do it because they find it interesting, or spiritual, or healing in some way. Those experiences aren't really 'fun', and psychedelics are generally not addictive, though ones with dopaminergic activity, like LSD or, to a larger extent, MDMA, do occasionally cause addictive tendencies.


Let’s say for the sake of argument (because it’s not true) that the only possible benefit of psychedelics is “fun”. Assuming sensible doses do not cause harm, what is the problem with it only being fun?


Or hell, even if it's not "fun" (since what is fun is ultimately subjective as well), as long as it is not harming anyone else, what is really the problem?

I did a lot of psychedelics, I don't know if fun was ever the word I'd use to describe most of it, but I don't regret any of it. It opened up a lot of the back of my brain in a way I never could conceive and really allowed me to think and analyze things in ways that I wouldn't have access to without. That in itself is well worth the experience, in my subjective reckoning. When I no longer felt that psychedelics were able to provide me with what it once did, I stopped. That was that. That surely is enough, right?


Legality in the US depends on suffering being alleviated. Honesty won’t get you legality. I’m pro legalization so I’ll lie about the benefits if necessary. You’ll then have to pay the costs necessary to detect lies.


Wait, what? US as in the United States? That's simply not true. Set aside that "legality in the US" is at best a moving target, since federalism creates jurisdictional differences as a feature and few things are entirely outlawed across the board everywhere. Consider instead that the most efficacious painkillers are also the most controlled, while deliriants that, prior to processing, extraction of active ingredients, and pairing with other psychoactive substances at minute doses, have little health benefit but great potential to create harm (belladonna and datura, for a start) are legal to grow and aren't even subject to scheduling. Criminalization's link to health is effectively a retcon, and a poor retcon at that. What turned laudanum from something acceptable to give to babies into something of a moral hazard far worse than a health hazard was the same rationale that turned a nation with no provision for immigration controls for the first hundred years of its existence into one where entire ethnicities, nationalities, and places of origin served as the basis for exclusion, and continues to do so under the thinnest of guises: moral panics based in racism and perpetuated by those empowered and enriched via the propagation of such policies.

Lying about efficacy will do nothing but bolster what was never a legitimate rationale to begin with. This is a country that failed to learn the lessons of Prohibition, that managed to keep the Mann Act on the books even though the legislative intent - the fear of 'miscegenation' - was hardly a secret, a country where Congress never bothered to remove the chapter of the US Code that contained the Chinese Exclusion Act, leaving that ugly legacy plain for all to see. Why lie about your cause when the truth that undergirds the reality is far uglier and far more disconcerting to begin with? For once, Oscar Wilde was wrong: the truth may not be pure, but it is certainly simple. Prohibitionist policies began in racism, are sustained by moral panics, and are protected by the state agencies that now benefit financially from their existence. All of this has a paper trail, both online and in print: in the Federal Register, in the US Code, in congressional records and archives. If that's not enough to justify resistance, your real reasoning will still be more persuasive than something made up to counter a line of thinking that has had goalposts on wheels from the start.


Dude, there is no way I'm going to get shrooms legalized by saying I want it for fun. It's too hard to get support for. It's much easier to place this fig leaf in front: look at these suffering people with mental illness; here's some evidence that it heals them; look at this grandma who has less pain; look at this grandpa who doesn't have seizures.

Marijuana legalization used the same strategy: medical first, then complete. The technique works. The other technique of "Oh yeah, I want it for fun so you should let me do it" has been tried for eons and got nowhere. You can come up with all the theory, but one technique worked and the other didn't, so I'm not interested in the theory.

Fake science it's going to be. The people who hate it don't have the ability to tell that it's fake.


No, some of us know it's a completely fake argument, know that people just want it for fun, even support legalization, and still hate the liars.

The amount of half-braindead pot smokers earnestly talking about their medical need to get stoned is so irritating. Legalization happened despite them, not because of them.


Both you and I know that your support or hate are limited to comments on the Internet. No one really worries about that moving the needle. When the thing is done, you'll fall into line with the new norms. I doubt anyone is that concerned one way or another.


I don't agree with them, but I don't agree with you either. Spreading fake science and medicine is fundamentally harmful to society, and doing it just because you want to trip legally is not a good enough excuse. This is like a textbook example of the ends not justifying the means, and it's from the same playbook that big pharma uses.


Well, everyone hates Big Pharma and gives them what they want, and everyone agrees with honest legalization efforts and doesn't give them anything. You're welcome to aim to be loved. I aim to get what I want.

And come on, healthcare requires you to lie. The head of the NIAID famously lied about mask efficacy in order to get what he wanted and people backed him for it. I don't see myself as any different.


I don't agree with the parent that it's okay to lie about the health effects of a drug just because you want it legalized, but I feel like you missed the forest for the trees here.

When a drug is stigmatized in the United States, it is essentially impossible to get it legalized without showing there is a medical use for it. That's just how it is. If your goal is to get a drug legalized, it does not matter whether that's how it should be, or whether this is all rooted in systemic discrimination, or whether pharmaceutical companies push stuff that's just as bad under their veil of legitimacy. Citizens who are anti-drug are under the impression right now that ~100% of illicit drugs are extremely dangerous and a major threat to their community, and they are not going to sit around and listen to a history lesson philosophizing about how all their fears are a propagandized illusion.

Even if you got them to listen, they would just say "Why risk it?" The only way to start changing minds is to point out that there are upsides to drug legalization beyond just "freedom is good." That's why all of the arguments for legalization focus on health/therapeutic benefits, either directly from the drugs themselves or in the ability to treat those who need help with addiction.


>My understanding is that stimulants do exactly the same thing in a person whether they're "cognitively healthy"

I don't know a lot about stimulants in general, but I know for caffeine in particular this is not true. As an anecdote, I have a friend whom caffeine puts to sleep; she just can't take it. I've come to find out (partly from knowing her) that part of why the FDA doesn't regulate caffeine is that it has a very wide range of varying effects on different people.


Well, everyone is different biologically and every drug affects everyone differently; I didn't mean that there's no biodiversity between people. What I meant was that these stimulants don't function differently based on whether you're "cognitively healthy." Methylphenidate doesn't do something different in a person with ADHD versus one without, because it doesn't interact with any mechanism specific to the illness; whether or not you have ADHD, you experience similar effects on the drug.

Contrast with, say, SSRIs, which might have some effect in a healthy person but you're looking at a different range of effects compared to someone taking it for depression/anxiety/OCD/whatever.


Yeah, regardless of hallucinations and repeating the same mistake even after you tell it to fix it, iterating with ChatGPT is so much less stressful than iterating with another engineer.

I almost ruined my relationship with a coworker because they submitted some code that took a dependency on something it shouldn't have, and I told them to remove the dependency. What I meant was "do the same thing you did, but instead of using this method, just do what this method does inside your own code." But they misinterpreted it to mean "Don't do what this method does, build a completely different solution." Repeated attempts to clarify what I meant only dug the hole deeper because to them I was just complaining that their solution was different from how I would've done it.

Eventually I just showed them in code what I was asking for (it was a very small change!) and they got mad at me for making such a big deal over 3 lines of code. Of course the whole point was that it was a small change that would avoid a big problem down the road...
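
To illustrate the kind of change I mean, here's a hypothetical sketch (the names and the "slugtools" dependency are made up for illustration, not our actual code): instead of importing a whole package for one small method, you inline the few lines that method actually does.

    # Before (hypothetical third-party dependency, for illustration only):
    #
    #   from slugtools import slugify
    #
    #   def make_post_url(title):
    #       return "/posts/" + slugify(title)

    # After: do what that method does inline, and drop the dependency.
    import re

    def make_post_url(title):
        # Lowercase, replace runs of non-alphanumerics with "-", trim edges.
        slug = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
        return "/posts/" + slug

    print(make_post_url("Hello, World!"))  # /posts/hello-world

Three lines of actual logic, but it keeps the whole package out of the dependency graph.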

So I'll take ChatGPT using library methods that don't exist, no matter how many times you tell it to fix them, over that kind of stress any day.


I had a friend jokingly poke fun at me for the way I was writing ChatGPT prompts. It seemed, to him, like I was going out of my way to be nice and helpful to an AI. It was a bit of an aha moment for him when I told him that helping the AI along gave much more useful answers, and he saw I was right.


Interesting. I also work in game development, and I tend to work on project-specific optimization problems, and I've had the opposite experience.

If I have to solve a hairy problem specific to our game's architecture, obviously I'm not going to ask ChatGPT to solve that for me. It's everything else that it works so well for. The stuff that I could do, but it's not really worth my time to actually do it when I can be focusing on the hard stuff.

One example: there was a custom protocol our game servers used to communicate with some other service. For reasons, we relied on an open-source tool to handle communication over this protocol, but then we decided we wanted to switch to an in-code solution. Rather than study the open source tool's code, rewrite it in the language we used, write tests for it, generate some test data... I just gave ChatGPT the original source and the protocol spec and spent 10 minutes walking it through the problem. I had a solution (with tests) in under half an hour when doing it all myself would've taken the afternoon. Then I went back to working on the actual hard stuff that my human brain was needed to solve.

I can't imagine being so specialized that I only ever work on difficult problems within my niche and nothing else. There's always some extra query to write, some API to interface with, some tests to write... it's not a matter of being able to do it myself, it's a matter of being able to focus primarily on the stuff I need to do myself.

Being able to offload the menial work to an AI also just changes the sorts of stuff I'm willing to do with my time. As a standalone software engineer, I will often choose not to write some simple-ish tool or script that might be useful because it might not be worth my time to write it, especially factoring in the cost of context switching. Nothing groundbreaking, just something that might not be worth half an hour of my time. But I can just tell AI to write the script for me and I get it in a couple minutes. So instead of doing all my work without access to some convenient small custom tools, now I can do my work with them, with very little change to my workflow.


>I can't imagine being so specialized that I only ever work on difficult problems within my niche and nothing else. There's always some extra query to write, some API to interface with, some tests to write... it's not a matter of being able to do it myself, it's a matter of being able to focus primarily on the stuff I need to do myself.

There might simply not be enough literature for LLMs to properly write this stuff in certain domains. I'm sure a graphics programmer would consider a lot of shader and DirectX API calls to be busywork, but I'm not sure GPT can get more than a basic tutorial renderer working, simply because there isn't much public literature to begin with, especially for DX12 and Vulkan. That part of games has tons of tribal knowledge kept in-house at large studios and at Nvidia/Intel/AMD, so there's not much to go on.

But I can see it replacing various kinds of tools programming or even UI work soon, if not right now. It sounds like GPT works best for scripting tasks and there's tons of web literature to go off of (and many programmers hate UI work to begin with).


Well, I think most software engineers in games don't work all that much with scripts or database queries, nor write that many tests for systems at the scale GPT could produce. You might be in devops, tools, or something similar if you deal with a lot of that in game dev.

GPT code in a lot of critical-path systems wouldn't pass code review, nor would it likely integrate well enough into any bespoke realtime system. It seems more useful to me for providing second opinions on high-level decisions, but still not useful enough to actually use.

Maybe it could help with some light Lua or C# gameplay scripting, although I think Copilot works much better for that. But none of that matters, because due to licensing, the AAA industry still generally can't use any of these generative AIs for code. Owning and being able to copyright all code and assets in a game is normally a requirement set by large publishers.

To conclude, my experience is indeed very different from yours.


I think the difference in our perspectives is the type of studios we work for. In a AAA studio what you're saying makes perfect sense. But I've worked entirely for small- and mid-size studios where developers are often asked to help out with things outside their specialization. In my world, even having a specialization probably means you're more experienced and thus you're involved in a variety of projects.

Whether that's "most" software engineers in games or not I can't say. AAA studios employ way more engineers per project but there are comparatively way more small- and mid-sized developers out there. It's interesting how strong the bubbles are, even within a niche industry like games.


I think I agree with basically your whole comment but I'm wondering if you could explain what you mean by "software-only AGI". Obviously all software runs on hardware, and creating specialized hardware to run certain types of software is something the computing industry is already very familiar with.

In the far far future, if we did crack AGI, it's not impossible to believe that specialized hardware modules would be built to enable AGI to interface with a "normal" home computer, much like we already add modules to our computers for specialized applications. Would this still count as software-only AI to you?

I've held for a long time that sensory input and real-world agency might be necessary to grow intelligence, so maybe you mean something like that, but even then that's something not incredibly outside the realm of what regular computers could do with some expansion.


There's some discussion of embodiment as an important factor in intelligence, such that it would defy pure software implementation. I'm personally of the opinion that even to the extent this is true, it probably just means the compute capacity required is higher than we might otherwise think, in order to simulate the embodied parts; alternatively, with the right interfaces and hardware, we wouldn't need that cheat. But "everything involved can be simulated in software at the required level," while I believe it, is somewhat speculative.


https://en.m.wikipedia.org/wiki/Portia_(spider)

This spider could be evidence of "software-based intelligence" in biological brains: it exhibits much more complex behaviors than other animals its size, more comparable to cats and dogs.

What I mean is that some believe its brain is "emulating" the parts of a larger "brain," but one at a time, passing the "data" that comes out of one part into the next.

Just a cool thing.


I like this comment because I think it highlights the exact difference between AI optimists and AI cynics.

I think you'll find that AGI cynics do not agree at all that "engineering a 10x/100x version" of what we have and making it attempt "AGI algorithms 24/7 in an evolutionary setting" is a "safe ticket" to AGI.


I wouldn’t say I’m a cynic, I’d just say how can one possibly know what a safe ticket is in this space? The logic you described is basically simple extrapolation, like in the xkcd wedding dress comic. There’s no guarantee that will get you anywhere in finite time.


Yes, 1/3 and 1/6 are quite important. Do enough making and you will run into those fractions plenty. It just turns out that in practice, as you say, easy unit conversions outweigh the benefit of clean division. Metric's important benefit is that it's a consistent base, not that it's specifically base-10.

And now that we use computers for a lot of measurement, those infinite decimals aren't as big of a deal. (Well, most of the time, anyway. Representation of infinite fractions famously creates some programming problems. But I mean for the user making the measurement.)
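
As a minimal illustration of that representation problem (Python here, though any language with binary floating point behaves the same way):

    # 1/3 has no finite representation in base 10 or base 2,
    # and 1/10 has none in base 2, so float math can surprise you.
    print(0.1 + 0.2 == 0.3)   # False
    print(0.1 + 0.2)          # 0.30000000000000004

    # Exact rational arithmetic sidesteps the issue entirely.
    from fractions import Fraction
    print(Fraction(1, 3) + Fraction(1, 6))   # 1/2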

Base-10 is easy to learn since it's the same base we use for counting, but I think there's an interesting argument to be made that, purely from a science/engineering/making perspective, it would be better to use a consistent base-12 measurement system.

