Pedro_Ribeiro's comments | Hacker News

HL Alyx ruined most VR games for me because they never lived up to Half-Life, and no one but Valve could make such a high-profile game. VR is a genre that benefits A LOT from high budgets.

I enjoyed games like No Man's Sky in VR, but they just don't hit the same high notes.


What are you on about? Have you seen the same comment sections I have? This place is clearly majority Democrats.


This barely qualifies as a critique; it's a hit piece with no constructive criticism and some anti-capitalist rhetoric thrown in.

> The National Design Studio is a clear case of nepotism, and they're greatly overpromising. Dismantling 18F was a massive mistake.

There, I just said the same thing but cut out the personal attacks and the ridiculous take that Airbnb was somehow built by thieves (does anyone actually believe this?).

I'm surprised this article has stayed up this long. Have the mods read it? It has a low signal-to-noise ratio.


> Dismantling 18F was a massive mistake.

Mistake to me implies unintended consequences that differ from the original goal. I've never heard anyone say "Opening Auschwitz was a massive mistake", as its consequences were exactly as intended. In the same way, dismantling 18F is not a mistake.


The figures they've been floating for the ideal cost per launch of Starship are even more insane. I'm sure some of it is hype farming on Twitter, but if they get the cost to less than $1,000/kg it would be incredible.
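
For a rough sense of what that target implies, here's a back-of-envelope sketch; the payload and price figures are my own illustrative assumptions, not official numbers:

    # Back-of-envelope cost-per-kg comparison (all figures are rough assumptions).
    def cost_per_kg(launch_price_usd: float, payload_to_leo_kg: float) -> float:
        return launch_price_usd / payload_to_leo_kg

    falcon_9 = cost_per_kg(67_000_000, 17_500)    # ~list price / reusable LEO payload -> ~$3,800/kg
    starship = cost_per_kg(100_000_000, 100_000)  # assumed $100M launch with ~100 t to LEO -> $1,000/kg
    print(f"Falcon 9 ~${falcon_9:,.0f}/kg vs. Starship target ~${starship:,.0f}/kg")

In other words, hitting $1,000/kg roughly means lifting ~100 t for not much more than a single Falcon 9 launch costs today.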


Curious what you would think if this kid had downloaded an open source model and talked to it privately.

Would his blood be on the hands of the researchers who trained that model?


Then it's like cigarettes or firearms. As a distributor you're responsible for making clear the limitations, safety issues, etc, but assuming you're doing the distribution in a way that isn't overly negligent then the user becomes responsible.

If we were facing a reality in which these chat bots were being sold for $10 in the App Store, then running on end-user devices and no longer under the control of the distributors, but we still had an issue with loads of them prompting users into suicide, violence, or misleading them into preparing noxious mixtures of cleaning supplies, then we could have a discussion about exactly what extreme packaging requirements ought to be in place for distribution to be considered responsible. As is, distributed on-device models are the purview of researchers and hobbyists and don't seem to be doing any harm at all.


Mhm, but I don't believe inherently violent and dangerous things like guns and cigarettes are comparable to simple technology.

Should the creators of Tornado Cash be in prison for what they have enabled? You can jail them but the world can't go back, just like it can't go back when a new OSS model is released.

It is also much easier to crack down on illegal gun distribution than to figure out who uploaded the new model torrent or who deployed the latest zk innovation on Ethereum.

I don't think your hypothetical law will have the effects you think it will.

---

I also referenced this in another reply, but I believe the government controlling what can go into a publicly distributed AI model is a dangerous path and probably unconstitutional.


> but we still had an issue with loads of them prompting users into suicide, violence, or misleading them into preparing noxious mixtures of cleaning supplies, then we could have a discussion about exactly what extreme packaging requirements ought to be in place for distribution to be considered responsible.

Or, I mean, just banning sale on the basis that they're unsafe devices and unfit for purpose. Like, you can't sell, say, a gas boiler that is known to, due to a design flaw, leak CO into the room; sticking a "this will probably kill you" warning on it is not going to be sufficient.


In that extreme case the "packaging requirements" would be labeling the thing not as a boiler, but as dangerous explosive scrap.


I suspect in many places you couldn’t sell what would essentially be a device for poisoning people to consumers _at all_.


I would say no. Someone with the knowledge and motivation to do those things is far less likely to be overly influenced by the output, and if they were, they would be much more aware of exactly what they are doing with regard to using the model.


So if a hypothetical open source enthusiast who fell in love with GPT-OSS killed his real wife because the AI told him to, only he should be held accountable, whereas if it were GPT-5 commanding him to commit the same crime, the responsibility would extend to OpenAI?

Your logic sounds reasonable in theory, but in practice it's a slippery slope and hard to define objectively.

On a broader note I believe governments regulating what goes in an AI model is a path to hell paved with good intentions.

I suspect your suggestion will be how it ends up in Europe and gets rejected in the US.


After a certain point, people are responsible for what they do when they read certain words, especially words they know to be potentially inaccurate or fictional and have had plenty of time to weigh against reality. A book is not responsible for people doing bad things; they themselves are.

AI models are similar IMO, and unlike fiction books they are often clearly labeled as such, repeatedly. At this point if you don't know if an AI model is inaccurate and do something seriously bad, you should probably be a ward of the state.


> At this point if you don't know if an AI model is inaccurate and do something seriously bad, you should probably be a ward of the state.

You either think too highly of people, or too lowly of them. In any case, you're advocating for interning about 100 million individuals.


It was a joke


> On a broader note I believe governments regulating what goes in an AI model is a path to hell paved with good intentions.

That's not an obvious conclusion. One could make the same argument with physical weapons. "Regulating weapons is a path to hell paved with good intentions. Yesterday it was assault rifles, today it's handguns and tomorrow it's your kitchen knife they are coming for." Europe has strict laws on guns, but everybody has a kitchen knife and lots of people there don't feel they live in hell. The U.S. made a different choice, and I'm not arguing that it's worse there (though many do, Europeans and even Americans), but it's certainly not preventing a supposed hell that would have broken out had guns in private hands been banned.


That's why you have this:

    This program is distributed in the hope that it will be useful,
    but WITHOUT ANY WARRANTY; without even the implied warranty of
    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
If it is supposed to be fit for a purpose, then it's on the producer to ensure it actually is. We have laws to prevent sellers from simply declaring that their goods aren't fit for a particular purpose.


I'm not sure, but there is a difference: the researchers don't have much incentive to get everyone to use their model. As such, they're not really the ones hyping up AI as the future while ignoring shortcomings.


I specifically blame Sam Altman because of the allegations in the complaint that he ordered safety checks to be skipped in order to rush this model to market, specific safety checks that were later demonstrated to identify and prevent precisely this behavior.


You build the tool, you're ultimately culpable. I've made it a rule in my life to conduct myself as if I will be held to account for everything I build, and its externalities. Helps keep my nose cleaner. Still managed to work on some things that keep me up at night, though.


That's absolutely not how it works. Every license has a clause explicitly saying that the user is responsible for what they do with the tool. That's just common sense. If it were the way you suggest, no one would create tools for others anymore. If you buy the screwdriver I sold and kill someone with it, my conscience is sure as hell clean. In the ChatGPT case it's different because the "tool" has the capacity to interact with and potentially manipulate people psychologically, which is the only reason it's not a clear-cut case.


That's a slippery slope! By that logic, you could argue that the creators of Tor, torrenting, Ethereum, and Tornado Cash should be held accountable for the countless vile crimes committed using their technology.


Legally, I think not being responsible is the right decision. Morally, I would hope everyone considers whether they themselves are even partially responsible. As I look around at young people today, at the tablet-holding, media-consuming youth programmers have created in order to get rich via advertising, I wish morals were considered more often.


This, right here, is why I take the stance I do. Too many ethical blank checks get written ultimately if you don't keep the moral stain in place. If you make a surveillance tool, release it to a world that didn't have that capacity, and a dictator picks it up and rolls with it, that license of yours may absolve you in a court of law, but in the eyes of Root, you birthed it. You made the problem tractable. Not all problems were meant to be so. I used to not care about it as much. The last decade, though, has sharply changed my views. It may very well be a lesson only learned with sufficient time and experience. I made my choice though. There are things I will not make, or make easier. I will not be complicit in knowingly forging the bindings of the future. Maybe if we mature as a society someday, but that day is not today.


So if you build a chair and then someone uses it to murder someone, are you responsible for the murder?


I am not willing to grant a software/AI dev the same level of exculpation on the basis of reasonable foreseeability that I grant a carpenter making a chair. The history of our trade is full of great joy found in using things in ways they were never intended, which, much more than for a carpenter, puts a burden on us to take the propensity and possibility of abuse into account amongst ourselves.

So no. Mr. Altman, and the people who made this chair, are in part responsible. You aren't a carpenter. You had a responsibility to the public to constrain this thing and to stay as far ahead of it as humanly possible. The AI-as-therapist startups I've seen in the past couple of years, even just as passion projects from juniors I've trained, have all been met with the same guiding wisdom: go no further. You are critically out of your depth, and you are creating a clear and evident danger to the public that you are not yet equipped to mitigate.

If I can get there it's pretty damn obvious.


You're not even responsible if you build an AR-15, complete with a bunch of free advertisements from the US army using the select-fire variant to slaughter innocent Iraqis, and it's used to murder someone.

The government will require them to add age controls and that will be that.


What is sad about this?


Clearly by installing a wrongthink iron dome around the world!


Is it just me, or has there been a wave of delusional people on Hacker News completely dismissing new advancements in technology? The two technologies where I most often see this type of discourse are AI coding and containers.

Either everyone here is a low level quantum database 5D graphics pipeline developer with a language from the future that AI hasn't yet learned, or some people are in denial.


I'm primarily an embedded firmware developer. Gas/electric power products. Ada codebase, so it's off the beaten path but nothing academic by any stretch of the imagination. I have a comprehensive reference manual that describes exactly how the language should be working, and don't need an LLM to regurgitate it to me. I have comprehensive hardware and programming manuals for the MCUs I program that describe exactly how the hardware should be working, and don't need an LLM to regurgitate it to me. I actually really specifically don't want the information transformed; it is engineered to be the way it is, and changing its presentation strips it of a lot of its power.

I deal with way too much torque and way too much electrical energy to trust an LLM. Saving a few minutes here and there isn't worth blowing up expensive prototypes or getting hurt over.


Software development is a spectrum, and you're basically at the polar opposite end from the one AI is being used for: sloppy web dev.


I would be willing to live and let live for the sake of being practical, if the tolerance for (and even active drive towards) low quality slop didn't keep pushing further and further into places it shouldn't. People that accept it in sloppy web dev will accept it in fairly important line of business software. People that accept it in fairly important line of business software will accept it in IT infrastructure. People that accept it in IT infrastructure will accept it in non-trivial security software. People that accept it in non-trivial security software will accept it in what should be a high-integrity system, at which point real engineers or regulatory bodies hopefully step in to stop the bullshit. When asked, everybody will say they draw the line at security, but the drive towards Worse knows no bounds. It's why we see constant rookie mistakes in every IoT device imaginable.

My actual idealistic position, discounting the practicality, is that it shouldn't be tolerated anywhere. We should be trying to minimize the amount of cheap, born-to-die, plasticky shit in society, not maximize it. Most people going on about "muh feature velocity" are reinventing software that has existed for decades. The next shitty UI refresh for Android or Windows, or bad firmware update for whatever device is being screwed up for me, will leave me just as unhappy as the last. The sprint was indeed completed on time, but the product still sucks.

A guided missile should obviously not miss its target. An airliner should obviously never crash. An ERP system should obviously never screw up accounting, inventory, etc, although many people will tolerate that to an unreasonable degree. But my contention is that a phone or desktop's UI should never fail to function as described. A "smart" speaker should never fail to turn on or be controlled. A child's toy should never fail to work in the circumstances they would play with it.

If it's going to constantly fuck up and leave me unhappy and frustrated, why was it made? Why did I buy it? AI could have brought it to market faster, but for what? Once I noticed this, I did just quit buying/dealing with this junk. I'm an ideologue and maybe even a luddite, but I just don't need that bad juju on my soul. I use and write software that's worth caring about.


The consequences of incorrect code can be severe outside of front-end web development. For front-end web development, if the code is wrong, you see from your browser that your local web app is broken and try to fix it, or ship it anyway if it's a minor UI bug. For critical backend systems, subtle bugs are often discovered in downstream systems by other teams, and can result in financial loss, legal risk, reputational damage, or even loss of life.


It's totally valid to see a new piece of tech, try it, say it's not for you, and move on. With LLMs it feels force-fed, and simply saying "eh, I'm good, no thanks" isn't enough: lots of hype and headlines about how it's going to take our jobs and replace us, and pressure from management to adopt it.

Some new trends make perfect sense to me and I’ll adopt them. I’ve let some pass me by and rarely regretted it. That doesn’t make me a luddite.


I think it's just backlash against all the AI hype. I get it, I'm tired of hearing about it too, but it's already here to stay; it's been that way for years now. It's a normal part of development for most people, the same as any new tool that becomes the industry darling. Learn to like it, or at least learn it, but the reality is here whether you like it or not.


The gatekeepers are witnessing the gate opening up more and letting more people in and they don't like that at all.


This is awesome. Does anyone know if it'll be possible to integrate it with tools like SQLX (Rust's pseudo-ORM) to get type hinting when writing inline SQL within Rust, or just have it work when I'm writing SQL within a simple Python script? That would truly be next-gen.


Python 3.14 will likely add PEP 750 t-strings, which will make LSP integrations like this much more tangible.

https://discuss.python.org/t/pep750-template-strings-new-upd...
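
To make that concrete, here's a minimal sketch of why t-strings help, assuming the API described in the accepted PEP (string.templatelib's Template and Interpolation; Python 3.14+, so details may still shift): the static SQL text and the interpolated values stay separate, so an LSP can recognize and check the SQL while a driver binds the values as parameters instead of splicing them into the string.

    # Sketch only: names follow PEP 750 as accepted; requires a Python with t-string support.
    from string.templatelib import Template, Interpolation

    def render_sql(query: Template) -> tuple[str, list]:
        """Turn a t-string into driver-style SQL plus a parameter list."""
        sql_parts, params = [], []
        for part in query:                    # iteration yields str and Interpolation parts
            if isinstance(part, Interpolation):
                sql_parts.append("%s")        # placeholder for the database driver
                params.append(part.value)
            else:
                sql_parts.append(part)
        return "".join(sql_parts), params

    user_id = 42
    sql, params = render_sql(t"SELECT name FROM users WHERE id = {user_id}")
    # sql    == "SELECT name FROM users WHERE id = %s"
    # params == [42]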


That's something we are currently looking into for TypeScript. At first, I thought a tsserver plugin would do, but a bit of research suggested that such a plugin cannot call other language servers; this must be solved on the editor side instead. In VS Code, it should be possible via request forwarding [0]. For Neovim, there are plugins like otter.nvim [1].

And at least for JS, we are planning to add direct support for it in our workspace API so that e.g. `postgrestools check file.ts` will emit diagnostics for embedded SQL.

[0] https://code.visualstudio.com/api/language-extensions/embedd... [1] https://github.com/jmbuhr/otter.nvim


I think you're just describing BeReal, which went viral but kind of died out.


At least in my clique in Finland, BeReal is alive and kicking. Definitely nothing like Instagram in terms of popularity, but quite active.


My German circle of friends is also still very active on BeReal.


I think BeReal was too restrictive. Considering all the types of content on social media, very little of it is selfies. That platform aims at authenticity in a very narrow sense, which is a noble goal.


But it being too restrictive was its unique selling point, like how Twitter's limited post length was.


The "problem" with most social media is the same with F2P games: they require whales to keep them relevant


The critical mass problem is only insurmountable for social media that seeks to connect people globally. Locally, you never notice this problem.

For example, in your typical gaming clan Discord server, a tenement building's WhatsApp/email group, or a small town's quarterly town hall meeting. It only takes a handful of people in such social groups for the group to serve its purpose.

Indeed, TheFacebook easily reached its critical mass when it was limited to Harvard College. Hacker News would work just as well with 500 monthly readers as it does with 5,000,000+ currently.

Whales, celebrity influencers (whether from out-of-network or homegrown celebrities), and other such things are only needed to compete with the social media giants today. But if you don't wish to compete and want to serve a small community, then this is not a problem.

