sho_hn's comments | Hacker News

How are you approaching a Wayland port?

We keep meaning to provide a page equivalent to this one. Our position is more or less identical:

https://www.kicad.org/blog/2025/06/KiCad-and-Wayland-Support...

We have user reports that XWayland causes errors when running many plugins (primarily those written using JUCE); we also have reports that running under Xnest tends to be more successful.
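In case it helps anyone hitting this, here's a rough sketch of the two workarounds users have reported success with; the binary name below is a placeholder, and the environment variables are just the standard GTK/Qt backend switches, so adapt to your setup:

    # Rough sketch only (not official tooling); "my_plugin_host" is a
    # placeholder for whatever binary you actually run.
    import os
    import subprocess
    import time

    # Option 1: force the toolkits' X11 backends (served by XWayland on Wayland).
    env = os.environ.copy()
    env["GDK_BACKEND"] = "x11"       # GTK: use the X11 backend
    env["QT_QPA_PLATFORM"] = "xcb"   # Qt: use the xcb (X11) platform plugin
    subprocess.run(["my_plugin_host"], env=env)

    # Option 2 (reported to be more reliable here): run inside a nested X server.
    xnest = subprocess.Popen(["Xnest", ":2", "-geometry", "1280x800"])
    time.sleep(1)                    # give the nested server a moment to start
    env["DISPLAY"] = ":2"
    subprocess.run(["my_plugin_host"], env=env)
    xnest.terminate()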


It would have to be done someday. We are slowly running out of desktop environments that haven't abandoned X11.

Strangely, for me, downward is more restful than inward. Must be the decades of keyboard use ...

This is actually an aspect of using AI tools I really enjoy: Forming an educated intuition about what the tool is good at, and tastefully framing and scoping the tasks I give it to get better results.

It cognitively feels very similar to other classic programming activities, like modularization at any level from architecture to code units/functions, thoughtfully choosing how to lay out and chunk things. It's always been one of the things that make programming pleasurable for me, and some of that feeling returns when slicing up tasks for agents.


"Become better at intuiting the behavior of this non-deterministic black box oracle maintained by a third party" just isn't a strong professional development sell for me, personally. If the future of writing software is chasing what a model trainer has done with no ability to actually change that myself I don't think that's going to be interesting to nearly as many people.

It sounds like you're talking more about "vibe coding" i.e. just using LLMs without inspecting the output. That's neither what the article nor the people to whom you're replying are saying. You can (and should) heavily review and edit LLM generated code. You have the full ability to change it yourself, because the code is just there and can be edited!

And yet the comments are chock full of cargo-culting about different moods of the oracle and ways to get better output.

I think this is underrating the role of intuition in working effectively with deterministic but very complex software systems like operating systems and compilers. Determinism is a red herring.

Whether it's interesting or not is irrelevant to whether it produces usable output that could be economically valuable.

Yeah, still waiting for something to ship before I form a judgement on that

Claude Code is made with Anthropic's models and is very commercially successful.

Something besides AI tooling. This isn't Amway.

Since they started doing that it's gained a lot of bugs.

Should have used codex. (jk ofc)

I agree that framing and scoping tasks is becoming a real joy. The great thing about this strategy is there's a point at which you can scope something small enough that it's hard for the AI to get it wrong and it's easy enough for you as a human to comprehend what it's done and verify that it's correct.

I'm starting to think of projects now as a tree structure where the overall architecture of the system is the main trunk, from there you have the sub-modules, and eventually you get to implementations of functions and classes. The goal of the human working with the coding agent is to keep full editorial control of the main trunk and the main sub-modules and delegate as many of the smaller branches as possible.

Sometimes you're still working out the higher-level architecture, too, and you can use the agent to prototype the smaller bits and pieces which will inform the decisions you make about how the higher-level stuff should operate.


[Edit: I may have been replying to another comment in my head as now I re-read it and I'm not sure I've said the same thing as you have. Oh well.]

I agree. This is how I see it too. It's more like a shortcut to an end result that's very similar (or much better) than I would've reached through typing it myself.

The other day I realised that I'm using my experience to steer it away from bad decisions a lot more than I'd noticed. It feels like it does all the real work, but I have to remember that my/our (decades of) experience writing code is playing a part too.

I'm genuinely confused when people come in at this point and say that it's impossible to do this and produce good output and end results.


I feel the same, but also, within like three years this might look very different. Maybe you'll give the full end-to-end goal upfront and it just polls you when it needs clarification or wants to suggest alternatives, and otherwise self-manages, cleanly delegating subtasks to itself.

Or maybe something quite different but where these early era agentic tooling strategies still become either unneeded or even actively detrimental.
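Purely as a toy illustration of the shape I mean, nothing like a real agent API (every name here is made up):

    # Toy sketch of "give the goal upfront, get polled only on ambiguity".
    # Step, do_step and ask_user are all hypothetical stand-ins.
    from dataclasses import dataclass

    @dataclass
    class Step:
        description: str
        ambiguous: bool = False

    def run_goal(steps, do_step, ask_user):
        results = []
        for step in steps:
            if step.ambiguous:
                # Instead of guessing, the agent surfaces a question and folds
                # the answer back into the step before executing it.
                step.description += " [" + ask_user(step.description) + "]"
            results.append(do_step(step))
        return results

    steps = [Step("add a config parser"),
             Step("pick a storage format", ambiguous=True)]
    print(run_goal(steps, lambda s: "done: " + s.description, input))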


> it just polls you when it needs clarification

I think anyone who has worked on a serious software project would say, this means it would be polling you constantly.

Even if we posit that an LLM is equivalent to a human, humans constantly clarify requirements/architecture. IMO on both of those fronts the correct path often reveals itself over time, rather than being knowable from the start.

So in this scenario it seems like you'd be dealing with constant pings and need to really make sure your understanding of the project is growing along with the LLM's development efforts.

To me this seems like the best case for the current technology: the models have been getting better and better at doing what you tell them in small chunks, but you still need to be deciding what they should be doing. Those chunks don't feel as though they're getting bigger unless you're willing to accept slop.


Much more pragmatic and less performative than other posts hitting the front page. Good article.

Finally, a step-by-step guide even the skeptics can try, to see what spot the LLM tools have in their workflows, without hype or magic like "I vibe-coded an entire OS, and you can too!".

Nothing in the post about whether the compiled kernel boots.

The video does show it booting.

This blog post is written by a product manager, not a programmer. Their CV speaks to an Economics background, a stint in market research, writing small scripting-type programs ("Cron+MySQL data warehouse") and then off to the product management races.

What it's trying to express is that the (T)PM job should still be safe, because they can just team-lead a dozen agents instead of software developers.

Take it with a grain of salt when it comes to relevance for "coding", or to the future role breakdown in tech organizations.


That's me! I'm pretty open about that.

I'm not trying to express that my particular flavor of career is safe. I think the ability to produce software is becoming much less about the ability to hand-write code, that this will continue as the models and ecosystem improve, and I'm fascinated by where that goes.


At least going by their own CV, they've mostly written what sounds like small scripting-type programs described in grandiose terms like "data warehouse".

This blog post is influencer content.


Pretty unpopular influencer if that were the case

> Never thought this would be something people actually take seriously.

You have to remember that the number of software developers saw a massive swell in the last 20 years, and many of these folks are bootcamp-educated web/app dev types, not John Carmack. Statistically, and under pre-AI circumstances, they typically started too late and for the wrong reasons to become very skilled in the craft by middle age (of course there are many wonderful exceptions; one of my best developers is someone who worked in a retail store for 15 years before pivoting).

AI tools are now available to everyone, not just the developers who were already proficient at writing code. When you take in the excitement, you always have to consider what it does for the average developer and also those below average: a chance to redefine yourself, be among the first doing a new thing, skip over many years of skill-building and, as many of them would put it, focus on results.

It's totally obvious why many leap at this, and it's probably even what they should do, individually. But it's a selfish concern, not a care for the practice as-is. It also results in a lot of performative blog posting. But if it were you, you might well do the same to get ahead in life. There are only so many opportunities to get in on something on the ground floor.

I feel a lot of senior developers don't take the demographics of our community of practice into account when they try to understand the reception of AI tools.


This is gold.

I have rarely had the words taken out of my mouth like this.

The percentage of devs in my career who come from the same academic background, show similar interests, and approach the field in the same way is probably less than 10%, sadly.


Well, there are programmers like Karpathy in his original coinage of vibe coding:

> There's a new kind of coding I call "vibe coding", where you fully give in to the vibes, embrace exponentials, and forget that the code even exists. It's possible because the LLMs (e.g. Cursor Composer w Sonnet) are getting too good. Also I just talk to Composer with SuperWhisper so I barely even touch the keyboard. I ask for the dumbest things like "decrease the padding on the sidebar by half" because I'm too lazy to find it. I "Accept All" always, I don't read the diffs anymore. When I get error messages I just copy paste them in with no comment, usually that fixes it. The code grows beyond my usual comprehension, I'd have to really read through it for a while. Sometimes the LLMs can't fix a bug so I just work around it or ask for random changes until it goes away. It's not too bad for throwaway weekend projects, but still quite amusing. I'm building a project or webapp, but it's not really coding - I just see stuff, say stuff, run stuff, and copy paste stuff, and it mostly works.

Notice "don't read the diffs anymore".

In fact, this is practically the anniversary of that tweet: https://x.com/karpathy/status/2019137879310836075?s=20


Ahh, Bulverism, with a hint of ad hominem and a dash of No True Scotsman. I think the most damning indictment here is the seeming inability to make actual arguments rather than just take cheap shots at people you've never even met.

Please tell me: were people excited about high-level languages just programmers who 'couldn't hack it' with assembly? Maybe you are one of those? Were GUI advocates just people who couldn't master the command line?


Thanks for teaching me about Bulverism, I hadn't heard of that fallacy before. I can see how my comment displays those characteristics and will probably try to avoid that pattern more in the future.

Honestly, I still think there's truth to what I wrote, and I don't think your counter-examples prove it wrong per se. The prompt I responded to ("why are people taking this seriously") also led fairly naturally down the road of examining the reasons. That was of course my choice, but it's also just what interested me in the moment.


I think he's a cook, watching people putting frozen "meals" in the microwave and telling himself: "hey! That's not cooking!".

And I totally agree with him. Throwing some kind of fallacy in the air for show doesn't make your argument, or lack thereof, more convincing.


>I think he's a cook, watching people putting frozen "meals" in the microwave and telling himself: "hey! That's not cooking!".

It's the equivalent of saying anyone excited about being able to microwave frozen meals is a hack who couldn't make it in the kitchen. I'm sorry, but if you don't see how ridiculous that assertion is, then I don't know what to tell you.

>And I totally agree with him. Throwing some kind of fallacy in the air for show doesn't make your argument, or lack thereof, more convincing.

A series of condescending statements meant to demean, with no objective backing whatsoever, is not an argument. What do you want me to say? There's nothing worth addressing, other than pointing out how empty it is.

You think there aren't big shots, more accomplished than anyone in this conversation, who are similarly enthusiastic?

You and OP have zero actual clue. With any advancement, regardless of how big or consequential, there are always people like that. It's very nice to feel smart and superior and degrade others, but people ought to be better than that.

So I'm sorry but I don't really care how superior a cook you think you are.


> You think there aren't big shots, more accomplished than anyone in this conversation, who are similarly enthusiastic?

I think both things can be true simultaneously.

You're arguing against a straw man.


Pointing out that your argument relies on an unverifiable (and easily countered) generalization isn't a straw man.

I think it's the performative aspects that are grating, though. You're right that even many systems programmers only look at the generated assembly occasionally, but at least most of them have the good sense to respect the deeper knowledge of mechanism that is to be found there, and many strive to know more eventually. Totally orthogonal to whether writing assembly at scale is sensible practice or not.

But with the AI tools we're not yet at the wave of "sometimes it's good to read the code" virtue-signaling blog posts that will make the front page in a year or so; we're still at the "I'm the new hot shit because I don't read code" moment, which is all a bit hard to take.


I still think this is mostly people who never could hack it at coding taking to the new opportunities these tools afford them without having to seriously invest in the skill, and basking in touting their skill-lessness now that it's accepted as the new, temporary cool.

Which is perhaps what they should do, of course. Any transition is a chance to get ahead and redefine yourself.


Just FYI, this is the attitude that causes pro-AI people to start shit-talking anti-AI folks as Luddites who need to learn to use the tools.

Agents are a quality/velocity tradeoff (which is often good). If you can't debug stuff without them, that's a problem, as you'll get into holes, but that doesn't mean you have to write the code by hand.


I enjoy new technology in general, so I very much keep up with the tools and also like using them for the things they do well at any given moment. I'm not among the Luddites, FWIW. I think there's a lot of legitimately great building going on right now.

Note, though, that we're talking about "not reading code" in context, not the writing of it.


The author is a former data analytics product manager (already a bit of a tea-leaf-reading domain) who says he never reads code and is now marketing himself as a new class of developer.

Parent post sounds like a very accurate description.


I completely agree in a sense - the cost of producing software is plummeting, and it's leading to me being able to develop things that I would never have invested months in before.
