Hacker News | Legend2440's comments

> Artificial intelligence will displace so many jobs that it will eliminate the need for mass immigration, according to Palantir Technologies Inc. Chief Executive Officer Alex Karp.

>“There will be more than enough jobs for the citizens of your nation, especially those with vocational training,” said Karp, speaking at a World Economic Forum panel in Davos, Switzerland on Tuesday. “I do think these trends really do make it hard to imagine why we should have large-scale immigration unless you have a very specialized skill.”

Idk man, that sounds like a crock of nonsense. Previous waves of automation sure didn’t stop the demand for immigrant labor - if anything, it’s only increased.


>> “There will be more than enough jobs for the citizens of your nation, especially those with vocational training,”

I was with him until "vocational training"; he's obviously implying that the bots will be the bosses and the humans will fill the role immigrant labor does today. That message is being thrown around way too often to rule out a planned development.

> Previous waves of automation sure didn’t stop the demand for immigrant labor - if anything, it’s only increased.

Past performance does not guarantee future results, I'd much rather analyze a situation for what it is than rely on rough and unproven analogies.

I'm not sure what "waves of automation" you have in mind but for example, automation in the auto industry did lead to a drastic reduction of workforce per car produced, just look at the former auto factories in Detroit. So the Palantir guy is kind of right here.

The real issue is who will control the AI, because in addition to being a workforce reducer it can serve as a rather petty oppressor. The reduction of immigration is the least of your worries here.


Really curious how it's going to kill the need for immigrants in construction, agriculture, food service, hospitality, lawn care, cleaning, building maintenance, etc.

Maybe the idea is all us displaced software folks end up in the fields picking fruit?


It’s also interesting because the whole SV model assumes demand which goes way down if a genuine AI enters the market and starts cutting into middle class jobs. There aren’t many ads targeting low-wage workers, not nearly enough to support the tech sector.

> Maybe the idea is all us displaced software folks end up in the fields picking fruit?

Apparently, that's exactly the main idea here. Them and other expendable office workers.


There is a reason why you are seeing a ton of humanoid robots in development. They just don't have the software to make them viable yet.

Yeah this is a massive oversimplification from someone who appears to have a big ego

The real issue here is that conventional software fundamentally lacks the expressiveness to process the kind of data that LLMs can.

That’s why you’re using an LLM in the first place.


How does this relate to the need for deterministic and consistent output?

Idk man, all AI discussion feels like a waste of effort.

“yes it will”, “no it won’t” - nobody really knows, it's just a bunch of extremely opinionated people rehashing the same tired arguments across 800 comments per thread.

There’s no point in talking about it anymore, just wait to see how it all turns out.


The cool thing is you can just try it. The barrier to entry is incredibly low right now. If it works for you, great. If it doesn't work for you, great. At least you know.

That's the real reason the conversation seems pointless. Every thread is full of comments from one group saying how useful AI is, and from another group saying how useless it is. The first group is convinced the second group just hasn't figured out how to use it right, and the second group is convinced the first group is deluded or outright lying.

Yes, I'm in the second group and I have that conviction about the first group based on personal experience with LLMs.

But most hype is not delusion. It's people trying to present themselves as "AI" experts in order to land those well paid "AI" positions. I don't think they even believe what they're saying.


Talking about AI instead of just leveraging it (or not) is the new “talking about note-taking software” instead of just writing.

Organizing notes is always the next problem to solve after solving todo lists.

It's not "yes it is" vs "no it won't" though. The discussion is "Yes it does" vs "no it doesn't" (present tense.) There's nothing wrong with guessing about the future, but lying about a present that is testable and unwillingness to submit to the testing is wrong.

Even then nothing is learned. Every HN thread there is on AI coding: "I am using $model for writing software and its great." "I am using $model for writing software and it sucks and will never do it." 800 comments of that tit for tat in present tense. Still nothing learned.

Doesn't help that no one talks about exactly what they are doing and exactly how they are doing it, because capitalism wins out over the kind of open technology discussion meant to uplift the species.


> Doesn't help that no one talks about exactly what they are doing and exactly how they are doing it

Let me try a translation:

> I am using $model for writing software and its great.

I have generated an extremely simple javascript application that could have been mostly done by copy/paste from StackOverflow or even Geeks4Geeks and it runs.

This is true. I have a PWA that I generated with an LLM on my phone right now. It works. Pretty sure even w3schools would be ashamed to post that code.

> I am using $model for writing software and it sucks and will never do it.

This is also true. At work I have a 15-year-old codebase where everything is custom.

You can't get an LLM to use it all as context because you simply don't have enough RAM for it, so you can't even test the quality of advice given on it.

You can't train an LLM on it because you don't have the budget for it.

You maybe could get an LLM to generate code by prompting it "this is the custom object allocation function, these are the basic GUI classes, now generate me the boilerplate for this new dialog I'm supposed to do". Unfortunately it takes as long or longer than doing it yourself, and you can't trust the output to boot.
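The RAM/context point is easy to sanity-check with arithmetic. A rough back-of-the-envelope sketch (the ~4 characters/token heuristic and the 200k-token window are assumptions, not any particular model's numbers):

```python
import os

# Rough heuristic: ~4 characters per token for typical source code.
CHARS_PER_TOKEN = 4
CONTEXT_WINDOW = 200_000  # tokens; assumed, varies by model

def estimate_tokens(root: str, exts=(".c", ".cpp", ".h", ".py")) -> int:
    """Walk a source tree and estimate its total token count from file sizes."""
    total_chars = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.endswith(exts):
                try:
                    total_chars += os.path.getsize(os.path.join(dirpath, name))
                except OSError:
                    pass  # skip unreadable files
    return total_chars // CHARS_PER_TOKEN

def fits_in_context(root: str) -> bool:
    return estimate_tokens(root) <= CONTEXT_WINDOW
```

Even at that generous ratio, a codebase with a decade-plus of accumulated custom code is typically orders of magnitude past any current context window.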


That’s not what the article says at all, but go on with your doomer takes.

Instead of counting bears in the forest by hand, you set up a hundred trail cameras and then use computers to count bears 24/7 across an entire area. This is field research, on a scale that was previously impossible.
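The trail-camera setup reduces to a tally loop around whatever detector you have. A minimal sketch, with the detector left as a stand-in since a real pipeline would call whatever wildlife classifier the researchers actually use:

```python
from collections import Counter
from typing import Callable, Iterable

def count_detections(
    frames: Iterable[tuple],          # (camera_id, frame) pairs
    detect: Callable[[object], int],  # stand-in for a real classifier
) -> Counter:
    """Tally animal detections per camera across a stream of frames.

    `detect` returns the number of animals found in one frame; here it is
    a hypothetical placeholder, not any particular model's API.
    """
    totals: Counter = Counter()
    for camera_id, frame in frames:
        totals[camera_id] += detect(frame)
    return totals
```

The interesting part is entirely in `detect`; the surrounding bookkeeping is what lets a hundred cameras run 24/7 where a human could watch one site at a time.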


I think there's a lot of potential for AI in 3D modeling. But I'm not convinced text is the best user interface for it, and current LLMs seem to have a poor understanding of 3D space.

Text being a challenge is a symptom of the bigger problem: most people have a hard time thinking spatially, and so struggle to communicate their ideas (and that's before you add on modeling vocabulary like "extrude", "chamfer", etc)

LLMs struggle because, I think, there's a lot of work to be done translating colloquial speech. For example, someone might describe creating a tube in fairly ambiguous language, even though they can see it in their head: "Draw a circle and go up 100mm, 5mm thick" as opposed to "Place a circle on the XY plane, offset the circle by 5mm, and extrude 100mm in the z direction."
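Either phrasing ultimately has to resolve to the same unambiguous geometry. A sketch of what that translation target might look like, emitting OpenSCAD from explicit parameters (the function and parameter names are made up for illustration):

```python
def tube_scad(outer_d: float, wall: float, height: float) -> str:
    """Emit OpenSCAD for a tube: an outer cylinder minus a concentric bore.

    "Draw a circle and go up 100mm, 5mm thick" resolves to something
    like tube_scad(outer_d=30, wall=5, height=100).
    """
    inner_d = outer_d - 2 * wall
    if inner_d <= 0:
        raise ValueError("wall too thick for outer diameter")
    return (
        "difference() {\n"
        f"    cylinder(h={height}, d={outer_d});\n"
        # Extend the bore past both faces so the subtraction is clean.
        f"    translate([0, 0, -1]) cylinder(h={height + 2}, d={inner_d});\n"
        "}\n"
    )
```

Note the colloquial version never even says what the diameter is; the translator has to notice that and ask, which is exactly the hard part.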


I don't get the text obsession, beyond LLMs being so immensely useful that you might as well use one for <insert tasks here>. I believe that some things live in text, some in variable-size n-dimensional arrays, some in a fixed set of parameters, and so on. I mean, our brains don't run on text alone.

But our brains do map high-dimensionality input to dimensions low enough to be describable with text.

You can represent a dog as a specific multi-dimensional array (raster image), but the word dog represents many kinds of images.


Yeah, so, that's a lossy/ambiguous process. That represent_in_text(raster_image) -> "dog" doesn't contain a meaningful amount of the original data. The idea of LLM-aided CAD sounds to me like claiming a sufficiently long hash should contain the data it represents. That doesn't make a lot of sense to me.

But, you need the ambiguity, or the AI isn't really a help. If you know the exact coordinates and dimensions of everything, you've already got an answer.

Not necessarily. Sometimes, the desired final shape is clear, but the path there isn't when using typical parametric modeling steps with the desire to get a clean geometry.

I guess it just comes with experience.

When I use Claude to model I actually just speak to it in common English and it translates the concepts. For example, I might say something like this:

    I'm building a mount for our baby monitor that I can attach to the side of the changing table. The pins are x mm in diameter and are y mm apart. [Image #1] of the mounting pins. So what needs to happen is that the pin head has to be large, and the body of the pin needs to be narrow. Also, add a little bit of a flare to the bottom and top so they don't just get knocked off the rest of the mount.
And then I'll iterate.

    We need a bit of slop in the measurements there because it's too tight.
And so on. I'll do little bits that I want and see if they look right before asking the LLM to union them to the main structure. It knows how to use OpenSCAD to generate preview PNGs and inspect them.

Amusingly, I did this just a couple of weeks ago and that's how I learned what a chamfer is: a flat angled transition. The adjustment I needed to make to my pins where they are flared (but at a constant angle) is a chamfer. Claude told me this as it edited the OpenSCAD file. And I can just ask it in-line for advice and so on.


I think a good UI would be to prompt it with something like "how far is that hole from the edge?" and it would measure it for you, and then "give me a slider to adjust it," and it gives you a slider that moves it in the appropriate direction. If there were already a dimension for that, it wouldn't help much, but sometimes the distance is derived.

I'd love to have that kind of UI for adjusting dimensions in regular (non-CAD) images. Or maybe adjusting the CSS on web pages?
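The "slider on a derived dimension" idea amounts to inverting the derivation. A toy parametric model (all names hypothetical) where adjusting the derived edge distance updates the parameter that actually exists in the model:

```python
from dataclasses import dataclass

@dataclass
class Plate:
    """Toy parametric part: a plate with one hole near the edge."""
    width: float = 100.0
    margin: float = 10.0   # the parameter a slider would actually drive
    hole_d: float = 6.0

    @property
    def hole_edge_distance(self) -> float:
        # Derived: stored nowhere, computed from the real parameters.
        return self.margin - self.hole_d / 2

    def set_hole_edge_distance(self, target: float) -> None:
        # Invert the derivation, so a slider on the *derived* value
        # writes back to the underlying parameter.
        self.margin = target + self.hole_d / 2
```

In a real CAD kernel the derivation can involve many constraints at once, which is where having the AI figure out what to invert would earn its keep.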


I think that would make a lot of sense for non-CAD images, but the particular task you described there is do-able in just a few clicks in most CAD systems already. I think the AI would almost always take a longer time to do those kinds of actions than if you did it yourself.

For experts maybe, but beginners would probably find asking questions about how to do things useful.

It's the Star Trek future way of interfacing with things. I don't know SOLIDWORKS at all. I'm a total noob at Fusion 360, but I've made a couple of things with it. Sketch and extrude. But what I can do is English. Using a combination of Claude, OpenSCAD, and my knowledge of programming, I was able to make something I could 3D print without having to learn SOLIDWORKS. Same with Ableton for music. It's frustrating when Claude does the wrong thing, but where it shines is when you give it the skill to render the object to a PNG and look at the scad it's generating, so it can iterate until it actually makes what you're looking for. Taking the human out of the loop is where the leaps are being made.

> LLMs seem to have a poor understanding of 3D space.

This is definitely my experience as well. However, in this situation it seems we are mostly working in "local" space, not "world" space wherein there are a lot of objects transformed relative to one another. There is also the massive benefit of having a fundamentally parametric representation of geometry.

I've been developing something similar around Unity, but I am not making competence in spatial domains a mandatory element. I am more interested in the LLM's ability to query scene objects, manage components, and fully own the scripting concerns behind everything.


Opus 4.5 seems to be a step above every other model in terms of creating SVGs. Before, most models couldn't make something that looked half decent.

But I think this shows that these models can improve drastically on specific domains.

I think if there were some good datasets/mappings for spatial relations and CAD files -> text, then a fine-tune/model with this in its training data could improve the output a lot.

I assume this project is using a general LLM with a unique system prompt/context/MCP for this.


Curious what you think is the best interface for it? We thought about this ourselves and talked to some folks but it didn't seem there was a clear alternative to chat.

Solidworks's current controls are the best interface for it. "Draw a picture" is something you're going to find really difficult to beat. Most of the other parametric CADs work the same way, and Solidworks is widely known as one of the best interfaces on top of that. They've spent decades building one that is both unambiguous and fast.

Maybe it's just the engineers I've worked with, but I've never heard anyone describe Solidworks as "fast."

(Of course, you and I know it is, it's just that you're asking it to do a lot)


Haha yes, I've never heard any engineer describe any CAD package as anything other than slow and full of bugs. But of the alternatives, I think most would still pick Solidworks.

I wonder how many of these bugs are actually situations where the underlying algorithms are simply confronted with situations outside their valid input domains. This can happen easily with 3d surface representations of geometries.

All of our production engineers that use CATIA think SolidWorks is fast...

I guess it's all in the perspective


Modelling isn't the slow part. If one is copying a drawing and has exact dimensions, it's pretty straightforward in most software, even if the software is bloated.

The clear alternative is VR. You put on hand trackers and then physically describe the part to the machine: "it should be about this wide" (gestures, and moves hands that wide).

https://shapelabvr.com/


Maybe some combination of visual representation with text. For example, it's not easy to come up with intuitive names for operations that could be applied to some surface. But if you could say something like 'select the top side of the cylinder' and it showed you some possible operations (with illustrations/animations) that could be applied to it, then it's easy to say back what you need it to do without actually knowing what's possible. The result might be a much quicker way to interact with CAD than what we're using currently.
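That interaction could bottom out in something as simple as a lookup from selected-feature type to applicable operations, each with a plain-English gloss. A toy sketch (the feature names and operation lists are invented for illustration):

```python
# Hypothetical mapping from a selected feature type to the operations
# an assistant could offer, each paired with a plain-English gloss.
OPERATIONS = {
    "planar_face": [
        ("extrude", "pull the face out to add material"),
        ("pocket", "push the face in to remove material"),
        ("draft", "tilt the face by an angle"),
    ],
    "edge": [
        ("fillet", "round the edge with a radius"),
        ("chamfer", "replace the edge with a flat angled transition"),
    ],
    "cylindrical_face": [
        ("shell", "hollow out the body, keeping a wall thickness"),
        ("thread", "cut a screw thread onto the surface"),
    ],
}

def suggest(selection: str) -> list:
    """Return (name, description) pairs for the selected feature type."""
    return OPERATIONS.get(selection, [])
```

The point is the user never has to know the word "chamfer" up front; selecting the edge surfaces the vocabulary along with pictures of what each operation does.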

So there's OpenSCAD, which is basically programming the geometry parametrically. But... I'd liken it to generating an SVG of a pelican on a bicycle at the current levels of LLMs.

I designed some parts for an enclosure over the weekend using claude opus to generate an OpenSCAD file, which I then exported to STL and sent to a 3D printer. I was able to get a visually-correct-enough STL off to the 3D printer in about 5 minutes.

I then waited about an hour for the print to finish, only to discover I wanted to make some adjustments. While I was able to iterate a couple times, I quickly realized that there were certain structures that were difficult to describe precisely enough without serious time spent on wording and deciding what to specify. It got harder as the complexity of the object grew, since one constraint affects another.

In the end, I switched to FreeCAD and did it by hand.


Will echo sibling. I have tried using Claude Sonnet for OpenSCAD to design a simple soap mold and it failed terribly in getting the rounded shape I wanted. (1) It's really difficult to explain 3d figures in text, and I doubt there is a lot of training material out there. (2) OpenSCAD is limited in what it can do. So the combination is pretty bad.

I needed some gears generated recently, and figured I could just get it done with Claude or Chatgpt in OpenSCAD in a few minutes... but oh man was I wrong. I was so wrong.

Wasted half an hour generating absolute nonsense if it even compiled and ended up going with one of those svg gear generators instead lmao.


You'd probably have been better off giving it a basic summary of OpenSCAD grammar and asking for a C or Python program to emit the code.
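For gears specifically, the emit-a-program route might look like this: a Python sketch that writes out OpenSCAD for a crude trapezoidal-tooth spur gear (not a proper involute profile; all parameter names are illustrative):

```python
import math

def gear_scad(teeth: int = 12, pitch_r: float = 20.0,
              tooth_h: float = 4.0, thickness: float = 5.0) -> str:
    """Emit OpenSCAD for a crude spur gear: a 2D polygon of trapezoidal
    teeth, extruded. The point is that the generating program is short
    enough for a human (or an LLM) to verify by reading it."""
    pts = []
    step = 2 * math.pi / teeth
    for i in range(teeth):
        a = i * step
        # Four corners per tooth: root, tip, tip, root.
        for frac, r in ((0.0, pitch_r), (0.15, pitch_r + tooth_h),
                        (0.45, pitch_r + tooth_h), (0.6, pitch_r)):
            ang = a + frac * step
            pts.append((round(math.cos(ang) * r, 3),
                        round(math.sin(ang) * r, 3)))
    body = ", ".join(f"[{x}, {y}]" for x, y in pts)
    return f"linear_extrude({thickness}) polygon([{body}]);\n"
```

Asking the model for this function instead of raw OpenSCAD moves the hard part (trigonometry, repetition) into code that either compiles or doesn't, rather than into coordinates it has to hallucinate one by one.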

That worked! I didn't have to give it the grammar, though. https://github.com/knicholes/llm-gears

Exactly. Maybe focus on translating ideas to CAD. Idea generation and concepts are done with drawings. Translate those drawings to CAD and you have improved time to market.

Why do you think that? I can't really think of a use case where AI would be much help to me in the CAD context.

> The timing is unfortunate. Tired of content vanishing from streaming services or disappearing into algorithmic feeds, consumers are returning to physical media like CDs.

Vinyl is having a resurgence. CDs are not.

Vinyl sale revenue is now 4.5x higher than CD sales, and CD sales are still dropping: https://www.riaa.com/reports/2025-mid-year-music-industry-re...


Interestingly enough I just read that something like 1/3 of vinyl buyers don't have a record player.

EDIT: Apparently it’s even 50% (https://consequence.net/2023/04/half-vinyl-buyers-record-pla...)


The issue with traditional logic solvers ('good old-fashioned AI') is that the search space is extremely large, or even infinite.

Logic solvers are useful, but not tractable as a general way to approach mathematics.
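The blow-up is easy to see concretely: naive exhaustive search over truth assignments doubles with every added variable. A toy brute-force SAT sketch:

```python
from itertools import product

def brute_force_sat(clauses, n_vars):
    """Naive SAT by exhaustive search over all 2**n_vars assignments.

    A clause is a list of (var_index, polarity) literals; the formula is
    satisfied when every clause has at least one literal matching the
    assignment. Returns (assignment, assignments_tried) or (None, tried).
    """
    tried = 0
    for assignment in product([False, True], repeat=n_vars):
        tried += 1
        if all(any(assignment[v] == pol for v, pol in clause)
               for clause in clauses):
            return assignment, tried
    return None, tried

# (x0 or not x1) and (x1 or x2)
clauses = [[(0, True), (1, False)], [(1, True), (2, True)]]
```

Three variables means 8 candidates; three hundred means more candidates than atoms in the universe, which is why real solvers lean on pruning and heuristics rather than enumeration, and why even they hit walls on general mathematics.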


> Logic solvers are useful, but not tractable as a general way to approach mathematics.

To be clear, there are explicitly computationally tractable fragments of existing logics, but they're more-or-less uninteresting by definition: they often look like very simple taxonomies (i.e. purely implicational) or like a variety of "modal" and/or "multi-modal" constructions over simpler logics.

Of course it would be nice to explicitly tease out and write down the "computationally tractable" general logical reasoning that some existing style of proof is implicitly relying on (AIUI this kind of inquiry would generally fall under "synthetic mathematics": trying to find simple axiom- and rule-of-inference-style treatments for existing complex theories), but that's also difficult.


Threats or “I will tip $100” don’t really work better than regular instructions. It’s just a rumor left over from the early days when nobody knew how to write good prompts.


That's an inherently subjective topic though. You could make a plausible argument either way, as each side may be similar to different elements of 19th century Victorianism.

If you ask it something more objective, especially about code, it's more likely to disagree with you:

>Test this hypothesis: it is good practice to use six * in a pointer declaration

>Using six levels of pointer indirection is not good practice. It is a strong indicator of poor abstraction or overcomplicated design and should prompt refactoring unless there is an extremely narrow, well-documented, low-level requirement—which is rare.


The founders made the right move selling it when they did. No way that site is worth $1.8 billion now.

