Hacker News | devbent's comments

I'd kill for a way to apply CRT filters to my VS Code editor...


IMHO the problem is that the majority of new multifamily construction is rental-only. Rentals run by international firms suck the life out of communities, as people cannot be part of a local community when rent hikes can force them to move every couple of years.

We need large scale multifamily housing for sale.


> Rentals run by international firms suck the life out of communities, as people cannot be part of a local community when rent hikes can force them to move every couple of years.

You can't believe that only international firms... raise rent? That's patently ridiculous.


How bout some rent controls (max annual increase)… works (for the most part) in Montreal. Also Germany has limits.


Generally speaking, it doesn't work.


You mean Americans can’t make it work?

Cuz it clearly works elsewhere to limit rent increases, so that renters don’t get pushed out of their residences.


No, thanks.


> I don’t get how one imagines being an animal and… it just exists as an automaton? And for some reason exactly humans are all conscious/intelligent,

So far the evidence points to the simpler explanation: all living creatures are automatons, and there is no mechanism by which self-determinism can exist.

Wide acceptance of such a belief would pretty much ruin society so it is best if we all just go on pretending we are masters of our own destiny.


On one hand, eating another intelligent being seems to be an obvious moral wrong.

On the other hand, many of the creatures we eat will happily eat us if given a chance.


Last time this topic came up on HN, an engineer whose job it is to do these calculations and then re-engineer products to not last as long popped into the thread.


I'm working on something similar, but instead of basing my efforts on the generative agents paper I'm basing it on a technique I call narrative generation. It requires far less context and fewer prompts, and focuses on generating the narrative aspects while letting the traditional game engine simulate the rest of the world.

As an example, with the system I am building you only need to input the action the player has taken, and all the NPCs' actions and dialogue will be generated.
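A minimal sketch of what such a loop could look like. Everything here is invented for illustration (the interfaces, the line format the model is asked to emit, the parsing rules) and is not the commenter's actual system: the point is just that the prompt carries only a compact narrative state plus the player's action, and the reply is parsed into per-NPC beats.

```typescript
// Hypothetical narrative-generation loop: small prompt in, NPC beats out.

interface NarrativeState {
  scene: string;          // one-line scene summary
  npcs: string[];         // NPCs currently present
  recentEvents: string[]; // short rolling log, not a full transcript
}

interface NpcBeat {
  npc: string;
  action: string;
  dialogue: string | null;
}

// Build a compact prompt: far less context than replaying full history.
function buildNarrativePrompt(state: NarrativeState, playerAction: string): string {
  return [
    `Scene: ${state.scene}`,
    `Present: ${state.npcs.join(", ")}`,
    `Recently: ${state.recentEvents.slice(-3).join("; ")}`,
    `Player: ${playerAction}`,
    `Reply with one line per NPC as: Name: action | "dialogue"`,
  ].join("\n");
}

// Parse a reply line like: Guard: blocks the door | "Halt!"
function parseNpcBeats(reply: string): NpcBeat[] {
  const beats: NpcBeat[] = [];
  for (const line of reply.split("\n")) {
    const m = line.match(/^(\w[\w ]*):\s*([^|]+?)\s*(?:\|\s*"(.*)")?$/);
    if (m) beats.push({ npc: m[1], action: m[2], dialogue: m[3] ?? null });
  }
  return beats;
}
```

The traditional engine then applies each beat (pathfinding, collision, animation), so the model never has to simulate world mechanics.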


I had an XML file format from one app that I needed converted to a JSON file format for another app.

I threw both schemas at Claude and asked it to write converter code.
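The kind of code this produces might look like the toy below. Both "schemas" here are invented, and real converter code would use a proper XML parser (e.g. a library like fast-xml-parser) rather than a regex, which only handles simple flat `<tag>text</tag>` elements:

```typescript
// Toy XML-record-to-JSON converter. Illustrative only: handles flat
// <tag>text</tag> elements, no attributes, no nesting.
function xmlRecordToJson(xml: string): Record<string, string> {
  const out: Record<string, string> = {};
  const re = /<(\w+)>([^<]*)<\/\1>/g; // \1 matches the opening tag name
  let m: RegExpExecArray | null;
  while ((m = re.exec(xml)) !== null) {
    out[m[1]] = m[2];
  }
  return out;
}
```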

For writing mocks, Claude saves an hour or more when mocking out complex classes.

I'd never written graphics code before; I had a PNG animation film strip, and Claude wrote the code to load, parse, and animate it.


> Genuinely curious if people think this is a product, solving a real problem?

For people with memory issues, products like this can be life changing.

On a second note, imagine a world where everyone has perfect memory recall. The concept of "recording" someone is useless, as everyone around a conversation (e.g. sitting at the next table at a cafe) would be a 100% reliable witness to the conversation.

Obviously we would not fault someone for taking steps to train their memory to be better, so why fault them for using electronics to push past the limits of biology?


> imagine a world where everyone has perfect memory recall

This describes the plot of the Black Mirror episode "The Entire History of You": https://www.imdb.com/title/tt2089050 (S1 E3)


That plot goes quite a bit further.


One problem that I run into with LLM code generation on large projects is that at some point the LLM hits a problem it just cannot fix no matter how it is prompted. This manifests in a number of ways: sometimes it bounces back and forth between two invalid solutions, while other times it fixes one issue while breaking something else in another part of the code.

Another issue with complex projects is that LLMs will not tell you what you don't know. They will happily go about designing crappy code if you ask them for a crappy solution, and they don't have the ability to recommend a better path forward unless explicitly prompted.

That said, I had Claude generate most of a tile-based 2D pixel art rendering engine[1] for me, but again, once things got complicated I had to start hand-fixing the code because Claude was no longer able to make improvements.

I've seen these failure modes across multiple problem domains, from CSS (alternating between two broken styles, neither of which came close to fixing the issue) to backend code to rendering code (trying to get character sprites placed correctly on the tiles).

[1] https://www.generativestorytelling.ai/town/index.html (notice the many rendering artifacts). I've realized I'm going to need to rewrite a lot of how rendering happens to resolve them. Claude wrote 80% of the original code, but by the time I'm done fixing everything maybe only 30% or so of Claude's code will remain.


Same. I was writing my own language compiler with MLIR/C++, and GPT was OK-ish for diving into the space initially but ran out of steam pretty quickly. The recommendations were so off at one point (invented MLIR features, invented libraries, incorrect understanding of the framework, etc.) that I had to go back to the drawing board, RTFM, and basically do everything I would have done without GPT to begin with. I've seen similar issues in other problem domains as well, just like you. It doesn't surprise me, though.


I’ve observed this too. I’m sceptical of the all-in-one builders; I think the most likely route to get there is for LLMs to eat the smaller tasks as part of a developer workflow, with humans wiring them together, and then to expand with specialised agents to move up the stack.

For instance, instead of a web designer AI, start with an agent to generate tests for a human building a web component. Then add an agent to generate components for a human building a design system. Then add an agent to generate a design system using those agents for a human building a web page. Then add an agent to build entire page layouts using a design system for a human building a website.

Even if there’s a 20% failure rate that needs human intervention, that’s still a 5x gain in developer productivity. When the failure rate gets low enough, move up the stack.
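The 5x figure follows from a toy model in which an agent's successes cost the human nothing and its failures cost full human effort. A sketch of that arithmetic (the model and its numbers are illustrative, not from the comment):

```typescript
// Toy productivity model: a human task costs 1 unit; an agent failure
// still costs 1 (human redoes it), an agent success costs `reviewCost`.
// With failureRate = 0.2 and free review, speedup = 1 / 0.2 = 5x.
function speedup(failureRate: number, reviewCost = 0): number {
  const avgHumanTimePerTask = failureRate * 1 + (1 - failureRate) * reviewCost;
  return 1 / avgHumanTimePerTask;
}
```

Note the hidden assumption: reviewing an agent's successful output is free. Any nonzero review cost pulls the multiplier down, e.g. `speedup(0.2, 0.25)` drops to 2.5x.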


I’ve found that getting the AI to write unit tests is almost more useless than getting it to write the code. If I’m writing a test suite, the code is non-trivial, and the edge cases are something I need to think about deeply to really make sure I’ve covered them, which is absolutely not something an LLM will do. And most of the time it’s only by actually writing the tests that I figure out all of the possible edge cases; if I just handed the job off to an LLM, I’m very confident that my defect rate would balloon significantly.


I've found that LLMs are only as good as the code they are trained on.

So for basic CRUD style web apps it is great. But then again so is a template.

But the minute you are dealing with newer libraries or less popular languages, e.g. Rust or Scala, it just falls apart; for me it constantly hallucinates methods, imports, etc.


I spent weeks trying to get GPT-4 to get me through some gnarly Shapeless (Scala) issues. After that failure, I realized how real the limitations are. These models really cannot produce original work, and with niche languages they hallucinate all the time, to the point of being completely unusable.


Hallucinated code from a code-generation AI is worse than code written by a beginning programmer, because at least the beginner can analyse their mistakes and determine what is actually true.


They can do surprising things if prompted correctly.

But their ability to do complex logic falls apart, and their limits are pretty much a hard wall when reached.


And so another level of abstraction in software development is created, but this time with an unknown level of accuracy. Call me old-school, but I like a debuggable, explainable and essentially provably reliable result. When a good developer can code while keeping the whole problem accurately in their heads, the code is worth its wait (deliberate sic, thank you) in gold.


Which is to say, the job of programming is safe, for now.


TS rocks so hard for doing pure functional stuff, and its support for type algebra can be an absolute blast.

I try to avoid OO nowadays, and TS is my go-to language for modeling problems in the functional domain!
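One flavor of the "type algebra" being praised is modeling a domain as a sum type (discriminated union) and letting the compiler enforce exhaustive handling. A stock example (not from the thread):

```typescript
// Sum type: a Shape is exactly one of these variants.
type Shape =
  | { kind: "circle"; radius: number }
  | { kind: "rect"; width: number; height: number };

function area(s: Shape): number {
  switch (s.kind) {
    case "circle":
      return Math.PI * s.radius ** 2; // s narrowed to the circle variant
    case "rect":
      return s.width * s.height; // s narrowed to the rect variant
    default: {
      // Exhaustiveness check: adding a new Shape variant makes this
      // assignment a compile error, because s no longer narrows to never.
      const exhaustive: never = s;
      return exhaustive;
    }
  }
}
```

No classes, no inheritance: new behaviors are plain functions over the union, and the `never` trick keeps them honest as the domain grows.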

