neocritter's comments | Hacker News

>> "At the very least, this research doesn’t appear to be a reliable indicator of people’s willingness to pay for human-created art."

It's no secret that what does well commercially and what represents good craft don't always have much connection. I'm pro-GenAI and still prefer art made by a person. Art made with the aid of AI is fine, but that connection through the choices a person makes in creating a thing matters to me.

Looking at the actual paper, the story itself might have biased the results. Would the results be the same if the story hadn't been about a story written with the help of AI?

It should be noted that the actual study's focus is different from how the article frames it: "AI Bias for Creative Writing: Subjective Assessment Versus Willingness to Pay"

It seems like the focus was on commercial writing, not writing overall.


I got the occasional A/B test with a new image generator while playing with Dall-E during a one month test of Plus. It was always clear which one was the new model because every aspect was so much better. I assume that model and the model they announced are the same.


How much of this is the bicycle, and how much of this is that it enables less conscientious people to go more miles at otherwise normal cycling speeds? The e- part is limited to speeds most people can reach on a normal bicycle unless they modify it.

/r/IdiotsInCars is a monument to how ineffective permitting is for this purpose. The people rear-ending pedestrians on sidewalks at 20MPH might be the same people who never miss their off ramp.


> The e- part is limited to speeds that most people can reach on a normal bicycle.

Maybe.

There are regulations, but the wording is generally such that the bike cannot be sold with a speed capability greater than X. So the bikes ship from the factory with a firmware setting capping the max speed at X. But as soon as people buy the bike, they go into the bike settings (!) and change it to whatever higher speed they want.

Whenever crashes happen, these settings need to be examined and factored into the liability determination.


Also, how much of this riding in pedestrian areas is because all the resources and space have been assigned to cars? Instead of being angry at cyclists, I often wish more people saw that it shouldn't be a fight between "soft road users", but a fight against cars limiting where people can move themselves.


If someone runs over a pedestrian and puts them in the hospital because they rode on a pedestrian path ... that's on them, not on "it's me vs cars".


Problems you see in videos on the internet prove, at most, that problems will still occur with permitting.

They don't show whether those problems would happen any less often than without permitting.

Not that I care either way about permitting, but people being stupid is common, and we still do things to try to limit it.


That was back when everyone was trying to make a "portal" to compete with AOL. It seems like browsers are headed down the same path now, replacing the simple search-box new-tab page with the same stuff the portals had.


Most people will never personally encounter more than one example of any rare or uncommon way of thinking or living. The loudest people are the most self-sure and radical, which makes them the most likely candidates for those rare encounters. They come to represent the whole for most people who aren't practiced at seeing past that sort of thing.

You don't notice the vegetarians around you who keep to themselves and go about their way of life without trying to push it on others; they're probably quietly embarrassed by those noisy few. At least that's my guess. I don't know that I know any vegetarians.


Probably in the /r/battlestations/ sense.

https://www.reddit.com/r/battlestations/


A classic: https://www.joelonsoftware.com/2000/04/06/things-you-should-...

>> "The idea that new code is better than old is patently absurd. Old code has been used. It has been tested. Lots of bugs have been found, and they’ve been fixed. There’s nothing wrong with it. It doesn’t acquire bugs just by sitting around on your hard drive."

>> "Each of these bugs took weeks of real-world usage before they were found. The programmer might have spent a couple of days reproducing the bug in the lab and fixing it. If it’s like a lot of bugs, the fix might be one line of code, or it might even be a couple of characters, but a lot of work and time went into those two characters."

>> "When you throw away code and start from scratch, you are throwing away all that knowledge. All those collected bug fixes. Years of programming work."

It's an older piece, but like good old code, it still holds up. Newer tools and technology have made creating new code easier, but they've also made improving old code easier in equal measure.


It's a good point in general, but in this case it's not clear if the cost of re-writing the existing codebase is less than the cost of staying with a memory-unsafe language.

We know from past experience that it takes an extreme amount of time and effort to harden a browser written in C++ against malicious web content. The Ladybird codebase is not particularly "old" in any sense of the word. Judging by GitHub's stats, most of the code is less than four years old, and it's still a long way from being ready for general use. I think it's safe to say Ladybird still has a vast amount of work to be done fixing vulnerabilities that arise from lack of memory safety.

I find it quite plausible that the cost of re-writing the existing code in Rust is less than the cost of fixing all of the current and future bugs in the C++ codebase that Rust would catch at compile time.
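
To make "bugs that Rust would catch at compile time" concrete, here's a minimal sketch (invented code, nothing from Ladybird itself): a dangling reference of the kind a C++ compiler accepts without complaint but rustc rejects before the program ever runs.

    // Minimal sketch of a lifetime bug. The equivalent C++ (keeping a
    // pointer/reference to a destroyed local) compiles silently and
    // becomes a use-after-free at runtime.

    fn first_word(s: &str) -> &str {
        s.split_whitespace().next().unwrap_or("")
    }

    fn main() {
        // This pattern does NOT compile in Rust:
        //
        //     let word;
        //     {
        //         let title = String::from("Ladybird");
        //         word = first_word(&title);
        //     } // `title` is freed here
        //     println!("{word}");
        //
        // rustc reports error[E0597]: `title` does not live long enough.
        //
        // The version below, where the owner outlives the borrow, is fine:
        let title = String::from("Ladybird");
        let word = first_word(&title);
        println!("{word}");
    }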


That is the sneaky thing about rewrites. The "Ship of Theseus" rewrite is reasonably safe based on the article and what I could find of people sharing their experiences with rewrites. Fix what needs fixing, but swap in the newer better language/framework/whatever a piece at a time. It works!

People get in trouble when they decide to rewrite the whole thing. You might be right in this case, but I'm sure every person who began a doomed rewrite project felt the benefits outweighed the risks.

Viewed in the rear view mirror of history, the Netscape rewrite was a good thing in a technical sense. As far as I understand, it gave us the foundation for Firefox and the Gecko engine. It was just bad business in context, because it let other browsers run laps around Netscape while the rewrite proceeded. It let IE get a foothold that wasn't shaken for many years, until Netscape became Firefox.

Rewriting the new browser in Rust would probably be similar from a technical POV. But from a business standpoint, we seem to be at an inflection point where a new browser might be able to slip in through the cracks of discontent over sketchy AI features in Edge and the slow-boiling attempts to break ad blocking in Chrome. If they divert resources now to a rewrite, they could miss this opportunity to do to Chrome what Firefox did to IE.

It sounds like the plan is a Ship of Theseus rewrite anyway, so they'll get there in time without the risk of distraction.


The only exception is if you have 500k LOC in a language whose runtime is going to be deprecated on all platforms overnight.

I'm referring to the uh, retrospectively unfortunate decision I made in 2007 to start building large scale business app frontends in AS3.

I guess I should be thankful for the work, having to rewrite everything in TS from scratch a decade later. (At least the backends didn't have to be torn down).


There's a parallel universe where someone convinced you to rewrite it in something else from the start and you spent years on the rewrite instead and it never went anywhere. Could you have done that emergency rewrite without 10 years of becoming an expert in the problem you were solving? The alternative universe has you spending time becoming an expert in a new language instead and maybe not getting anywhere with the rewrite.


Totally true. Spending years fine-tuning the business logic and UIs made the eventual rewrites a lot cleaner and faster, having already iterated many times over the years and discovered what worked and what didn't. And learning TS after AS3 was easy enough. The real pain point was switching from a paradigm in which I owned the screen graph down to the pixel-level placement of each component, to trying to wrangle similar behavior from a mix of DOM elements, relative/absolute positioning and arbitrary stuff drawn into canvases. Particularly for things like interactive Gantt charts and some of the really complicated visualization components that had been a relative pleasure to design and code in Flash. But yeah, it was much easier to learn a new language paradigm knowing exactly what I needed to implement, rather than having to devise the logic at the same time.


I wonder how many businesses suffered the same?

I remember Flash as a complete, straight-to-business platform that allowed me to just focus on getting stuff done.

It was a sound decision back then.


I think it was a very sound decision back in 2007 if you wanted to write once and deploy everywhere. In browser, and on the desktop for Windows and Mac. JS wasn't up to the task of complex SPAs or graphic visualizations yet (<canvas> didn't even exist), and the alternative would have been Java apps which relied on whatever runtime the user had installed. The fact that Flash/AIR could deploy with its own runtime or a browser plugin was huge. It allowed an independent coder like me to maintain multiple large pieces of software across multiple platforms at a time when it was almost unheard of to do that without a team.


My current employer, similarly, invested a significant amount of resources into Silverlight. Luckily only one component of the application had been switched to Silverlight, but a significant amount of code was written to be the core of that effort and future components before browsers/MS killed it overnight.


Old code does acquire new bugs by sitting on your hard drive, since it interfaces with dozens of libraries and APIs that don't care how well tested the code is: every code path depends on multiple components playing well together and following standards/APIs/formats that the old code has no knowledge of. Also, the mountain of patch fixes and "workarounds" eventually forces programmers into a corner, where development is hobbled by the constraints and quirks of "battle-tested" code that will be thrown away as soon as it can't support fancy new feature X or can't use a fancy new library API without extra layers of indirection.
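
A hypothetical sketch of that kind of rot (invented names, not any real library): a small, thoroughly tested parser written against an upstream tool's old output format, which breaks the day the format changes even though not a single line of the "battle-tested" code was touched.

    // Hypothetical: a years-old helper, written and tested against an
    // upstream tool that printed its settings as "key=value" lines.
    fn parse_setting(line: &str) -> Option<(&str, &str)> {
        let (key, value) = line.split_once('=')?; // assumption baked in long ago
        Some((key.trim(), value.trim()))
    }

    fn main() {
        // The format the code was written (and battle-tested) against:
        assert_eq!(parse_setting("timeout=30"), Some(("timeout", "30")));

        // A later upstream release switches to "key: value". The old code is
        // byte-for-byte identical, but it now fails on real-world input:
        assert_eq!(parse_setting("timeout: 30"), None);

        println!("the old parser didn't change; the world around it did");
    }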


What's the % look like for people who use password managers? There's probably a reason they all support Linux.


The existence of a port does not guarantee future support of that port. Safari used to run on Windows. They're also somewhat notorious for trash-quality Windows ports.


Can confirm. I bought Freespace 2 in 2009 and it's still there in my account with no DRM.


Back then, PSN was free. Bought a PS3 over an XB360 because I didn't want to pay a subscription for the occasional multiplayer game.


PSN is still free, but it only gives you access to the store and an account to keep user data. PS Plus is a paid add-on to it and is required for multiplayer.

EDIT: I feel like I should point out that PC players don't need PS Plus to play multiplayer; I think literally every copy would be refunded if they tried to pull that one.

