
For those who prefer reading, this is more or less an audio version of his essay from years ago: https://www.kalzumeus.com/2012/01/23/salary-negotiation/

Tangential: What's with the disclaimer about reading on an "iDevice"? Do they struggle with lots of words on a single page?

It's from 2012, when mobile device screens were way smaller.

In 2013 I was buying books on Kindle and reading them on my 4S just fine.

This was written over 10 years ago, when such devices had much lower resolution.

I actually get a lot of value out of the repairability. It lets me buy a cheaper computer upfront without having to worry about whether I can upgrade later on.

How many times have I thought, maybe I should get 2TB just in case, and then ended up using 500GB? With Framework, I'll buy the 1TB and the cost to upgrade is very low if I ever need to.

Same thing with memory. Maybe I need 16, maybe 32, maybe 64. I tend to buy more than I need out of fear. I just don't have that fear with Framework.

Oh, and don't even get me started with repairs. If my screen breaks, I know the time to fix is however long their shipping lead time is, since the repair itself will take me 15 minutes.

In general, I think that value depends on how you see a computer. $1000-2000 is a lot to spend on something you use for fun. It's really not much to spend on something you use every day for work. And it's even less if your company is paying.


The repairability has been a huge thing for me as a father of young kids. I've only had to do it once, when a toddler jumped on the laptop screen, but it ended up being a fairly cheap repair instead of what would previously have meant a full laptop replacement.

I want a fully clear case, but apparently that's too brittle and isn't possible? This is what the Framework people say. They have keyboards like this, but won't make a full shell in this style.

I'd kill for a fully transparent phone or laptop shell.

I'd pay $1000 more for this aesthetic. Double if it's in the fluorescent neon colors of '90s/'00s Nintendo/Apple designs.

This: https://imgur.com/a/DedpbHQ


Looks like you might've found that magical thing which is product-market fit. I'm rooting for you.

$200 is a really nice entry price point. If I'm being honest, I'm marginally interested in this, but doubt that I'd actually use it too much. But the price point is justifiable for someone who is just interested in it from a learning point of view (if I bought this, used it a few times, and learned a bit more about AI as it relates to robotics, it'd have paid for itself easily).

Rooting for you to succeed.


Thank you for your support! I’m actively working on the design and building the supply chain. In the next 3 to 6 months, I should be able to:

1. Keep the price similar while adding new features, such as a torque-controlled gripper.

2. Re-examine design decisions to see if we can offer a similar product at an even better price.


1000000%

And you've got an excellent gradient to keep growing!

All the best!


For what it's worth, I would have purchased if this were available in physical form (I'm weighing purchasing anyway, but my personal hunch is I'd pay $20-30 for a physical copy and probably $10-20 for a PDF).

Amazon makes it quite easy to publish books that get printed on demand, and they accept PDF files as input.


Thanks for the input. I have made several updates to the book in the past, and those would be lost in print, but as the book matures and I incorporate feedback, it could be interesting to try offering a print version. Perhaps a print-on-demand type of thing.


Amazon does print on demand. I have a book published there that I sell at cost and it’s about $8 shipped. Print quality is pretty good.

If you make changes, you just upload a new template and you’re all set.


Thanks for the tip. I will explore it.


I wrote a CLI tool (GitHub.com/dtnewman/zev) and including a GIF video at the top of the README made a big difference.


> the fact that their empires are now all they'll ever be

Eh, Mark Zuckerberg is 40. Facebook is planting seeds in some pretty ambitious places (VR/AR + AI). To put that into perspective, Elon Musk is 53 now, but he was ~40 when the Falcon 9 first launched for SpaceX and the Model S was released at Tesla. In June 2012, when the S was released, Tesla was worth about 0.7% of what it is now. Elon Musk was certainly rich, but nowhere close to the wealthiest folks at the time. Similarly, at 40 years old (21 years ago), Jeff Bezos was worth about 1/100th of what he is now. Rich, but it wasn't clear that Amazon would ever come close to, say, Walmart, in terms of market cap.

Mark Zuckerberg's empire still has plenty of time to grow.


I put Facebook in the same category as Google. They have all of these flashy projects, but at the end of the day they never get beyond serving advertisements. It's their core competency and always will be.


- Amazon made AWS

- Microsoft went from Operating Systems to a cloud company

- Even Google has Waymo, which might develop into a large company by itself

- Apple was founded in 1976. They really weren't the powerhouse that they are until the iPhone came out.

- Netflix went from mailing DVDs to streaming

Of course most pivots don't work and there's selection bias here. BUT, it's not unheard of for business models to change. And it certainly helps that Zuck is the original founder and has controlling shares.


Most companies are like that. Look at their core competency and how much room that market has to grow. It's definitely the exception for a company to go into a completely different vertical and excel there.


The article is talking about new grads generally, but I think there's an issue with AI that isn't talked about enough. It's not that it's taking away jobs [1]; it's that it's taking away skills.

Even if you are the biggest critic of AI, it's hard to deny that the frontier models are quite good at the sort of stuff that you learn in school. Write a binary tree in C? Check. Implement radix sort in Python? check. An A* implementation? check.

Once upon a time, I had to struggle through these. My code wouldn't run properly because I forgot to release a variable from memory or I was off-by-one on a recursive algorithm. But the struggling is what ultimately helped me actually learn the material [2]. If I could just type out "build a hash table in C" and then shuffle a few things around to make it look like my own, I'd have never really understood the underlying work.
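To make that concrete, here is roughly the kind of exercise in question -- a minimal LSD radix sort in Python (an illustrative sketch only, not anything from actual coursework):

    # Minimal LSD radix sort for non-negative integers, base 10 (illustrative sketch).
    def radix_sort(nums):
        if not nums:
            return nums
        place = 1
        while max(nums) // place > 0:
            # Distribute numbers into buckets by the current digit, then re-collect.
            buckets = [[] for _ in range(10)]
            for n in nums:
                buckets[(n // place) % 10].append(n)
            nums = [n for bucket in buckets for n in bucket]
            place *= 10
        return nums

    print(radix_sort([170, 45, 75, 90, 802, 24, 2, 66]))  # [2, 24, 45, 66, 75, 90, 170, 802]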

At the same time, LLMs are often useful but still fail quite frequently in real-world work. I'm not trusting Cursor to do a database migration in production unless I myself understand and check each line of code that it writes.

Now, as a hiring manager, what am I supposed to do with new grads?

[1] which I think it might be to some extent in some companies, by making existing engineers more productive, but that's a different point

[2] to the inevitable responses that say "well I actually learn things better now because the LLM explains it to me", that's great, but what's relevant here is that a large chunk of people learn by struggling


> Even if you are the biggest critic of AI, it's hard to deny that the frontier models are quite good at the sort of stuff that you learn in school. Write a binary tree in C? Check. Implement radix sort in Python? check. An A* implementation? check.

I don't feel this is a strong argument, since these are the sort of things that one could easily look up on Stack Overflow, GitHub, and so on for a while now. What "AI" added was a more convenient code search tool plus text manipulation abilities.

But you still need to know the fundamentals, otherwise you won't even know what to ask. I recently used GPT to get a quick code sample for a linear programming solution, and it saved me time looking up the API for SciPy... but I knew what to ask for in the first place. I doubt GPT would suggest that as a solution if I had described the problem at too high a level.
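For anyone curious, the kind of call I mean is scipy.optimize.linprog; here's a minimal sketch with made-up problem data (not the actual problem I was working on):

    # Illustrative SciPy linear programming sketch; the problem data is made up.
    # linprog minimizes c @ x subject to A_ub @ x <= b_ub and the variable bounds.
    from scipy.optimize import linprog

    c = [-1, -2]               # maximize x + 2y by minimizing its negation
    A_ub = [[1, 1], [1, 0]]    # constraints: x + y <= 4 and x <= 3
    b_ub = [4, 3]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)], method="highs")
    print(res.x, -res.fun)     # optimal point and the maximized objective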


Don't forget there are lots of things in standard libraries now that didn't exist back when I was coding in C in the 1990s. Nobody really writes their own sorting algorithm anymore -- and nobody should write their own A* algorithm either.

Honestly though, I recently asked Claude 3.7 Sonnet to write a Python script to upload SSH keys to a MikroTik router, to prompt for the username and password -- etc. And it did it. I wouldn't say I loved the code -- but it worked. The code was written in more of a Go style, but okay. It's fine and readable enough. Hiring a contractor from our usual sources would have taken a week at least, probably, by the time you add up the back and forth with code reviews and bugs.
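To give a flavor of the task, a rough sketch of that kind of script -- not the actual generated code; it assumes paramiko and RouterOS's "/user ssh-keys import" command:

    # Rough sketch: upload an SSH public key to a MikroTik (RouterOS) device.
    # Assumes the paramiko library and RouterOS's "/user ssh-keys import" command.
    import getpass
    import paramiko

    host = input("Router address: ")
    username = input("Username: ")
    password = getpass.getpass("Password: ")

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, username=username, password=password)

    # Copy the public key over, then tell RouterOS to import it for this user.
    sftp = client.open_sftp()
    sftp.put("id_ed25519.pub", "id_ed25519.pub")
    sftp.close()
    client.exec_command(
        f"/user ssh-keys import public-key-file=id_ed25519.pub user={username}"
    )
    client.close()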

I think for a lot of entry level positions (particularly in devops automation say), AI can effectively replace them. You'd still have someone supervise their code and do code reviews, but now the human just supervises an AI. And that human + AI combo replaces 3 other humans.


Those are the sorts of things you're supposed to struggle with in school though.

If students are using AI now, that is indeed the same thing as looking up solutions on Stack Overflow or elsewhere. It's cheating for a grade at the expense of never developing your skills.


Stack Overflow could help answer specifically targeted questions about how things worked, or suggest areas of debugging. It couldn't/wouldn't take a direct question from an assignment and provide a fully working answer to it.


You still have to understand what's happening and why, I think.

I remember going to a cloud meetup in the early days of AWS. Somebody said "you won't need DBAs because the database is hosted in the cloud." Well, no. You need somebody with a thorough understanding of SQL in general and your specific database stack to successfully scale. They might not have the title "DBA," but you need that knowledge and experience to do things like design a schema, write performant queries, and review a query plan to figure out why something is slow.

I'm starting to understand that you can use an LLM both to do things and to teach you. I say that as somebody who definitely has learned by struggling, but who realizes that struggling is not the most efficient way to learn.

If I want to keep up, I have to adapt, not just by learning how to use tools that are powered by LLMs, but by changing how I learn, how I work, and how I view my role.


I'm seeing something similar. LLMs have helped me tremendously, especially in tasks like translating from one language to another.

But I've also witnessed interns using them as a crutch. They can churn out code faster than I did at an equivalent stage in my career, but they really struggle with debugging. Often, it seems like they just throw up their hands and pivot to something else (language, task, model) instead of troubleshooting. It almost seems like they are being conditioned to think implementation should always be easy. I often wonder if this is just an "old curmudgeon" attitude or if it belies something more systemic about the craft.


Calculators have been available forever, but have not eliminated math education. Even algebra systems that can solve equations and do integrals and derivatives have been available forever, but people understand that if they don't learn how it actually works, they are robbing themselves. By the same token, if you need to do this stuff professionally, you are relying on computers to do it for you.

> Write a binary tree in C? Check. Implement radix sort in Python? check. An A* implementation? check.

You can look up any of these and find dozens of implementations to crib from if that's what you want.

Computers can now do more, but I'm not (yet) sure it's all that different.


I agree with you, but just to steelman the other side: how do you know when you are robbing yourself and when you are just being pragmatic?

When I change the spark plugs in my car, am I robbing myself if I'm not learning the intricacies of electrode design, materials science, combustion efficiency, etc.? Or am I just trying to be practical enough to get a needed job done?

To the OP's point, I think you are robbing yourself if the "black box" approach doesn't allow you to get the job done. In other words, in the edge cases alluded to, you may need to understand what's going on under the hood to implement it appropriately.


> how do you know when you are robbing yourself and when you are just being pragmatic?

I don't know why we're pretending that individuals have suddenly lost all agency and self-perception? It's pretty clear when you understand something or don't, and it's always been a choice of whether you dive deeper or copy some existing thing that you don't understand.

We know that if we cheat on our math homework, or copy from our friend, or copy something online, that's going to bite us. LLMs make getting an answer easier, but we've always had this option.


I don’t know why you are ignoring the concept of opportunity cost to make an argument.

Did you drive to work today? Did you learn everything about the tensile strength of nylon seatbelts before you buckled up? How about tarmacadam design before you exited your driveway? Or PID control theory before you turned on the HVAC?

The point I'm making is that some people disagree about how much you need to know. Some people are OK with knowing just enough to get a job done because it makes them more productive overall. So the question still stands: how do you gauge when learning is enough? To my point above, I think it comes down to whether you can get the job done. Learning beyond that may be admirable in your mind, but not exactly what you're being paid for, and I think some experts would consider it a poor use of time/resources.

Do you care if your subordinate wrote a good report using a dictionary to “cheat” instead of memorizing a good vocabulary? Or that they referenced an industry standard for an equation instead of knowing it by heart? I know I don’t.


I said it’s up to the individual to make their own choices. I don’t know who or what you’re arguing against, but I don’t think I have much to do with it. Peace


I was simply asking you to put a finer framework on how individuals should decide. Like I said multiple times, IMO it comes down to what is needed to get the job done, but I’m open to other thoughts. Saying it’s up to the individual isn’t really saying much, other than a veiled version of “I don’t know but I feel the compulsion to comment anyway.”


> I was simply asking you to put a finer framework on how individuals should decide

I'm not in the business of prescribing philosophies on how others should live their lives?


But surely that doesn’t preclude you from describing how you live your own?


I think you have a good point, but I think the paradigm shift here is that people are chasing careers and money using LLM tools in a way that wasn't possible with calculators, and enforced differently within well-paying, white collar engineering professions.

For example, there's actual liability (legal and financial) involved in building a bridge that subsequently falls apart - not so with small bits of code. Similarly, there's a level of academic rigor involved in the certification process for structural/mechanical/etc. engineers that doesn't (and maybe can't?) exist within software engineering.


>certification process for structural/mechanical/etc. engineers that doesn't (and maybe can't?) exist within software engineering

NCEES has a PE license related to controls software. The difficulty is most engineering work falls under an industrial exemption. It seems like the way to enforce that type of liability would be to remove the industrial exemption.


I'm not really sure what problem you're trying to point out here. There are legal standards and liability for engineering, and if someone violates them using an LLM they are held just as liable as they would be had they done the work themselves.

But the same is true for code? You are held to the same standards as if you had written it yourself, whatever that may be. Frequently that is nothing.

What change do you want to see here?


>What change do you want to see here?

Licensing, liability, and enforceable standards for safety critical code that interfaces with the public would be a good start.


I think all of those are great, but don't think that has much to do with AI tbh. How you get to the outcome, and the standards that it meets, should be all that matters


You don't think the black box nature of much of AI has any bearing on its impact in safety-critical applications?

Just look at the diverse and haphazard way AI has been used in autonomous driving. I would argue it's a misapplication of "move fast and break things" (in some cases at least) that has no place in public-facing, safety-critical applications.

It brings up some difficult questions regarding adequacy of testing at the very least when the underpinnings are not very interpretable.


Ah, my understanding was that this discussion is about AI as a dev-time tool, where the output is code, which gets reviewed, merged, and deployed like any other bit of code, whether written internally or installed via a library.

Using LLMs or other ML as components in systems themselves is a whole other thing, and I agree with you wholeheartedly.


That's a real distinction, but auto-generated code has special complications in safety-critical code review as well. It's also not limited to ML/AI; it's just that ML/AI adds yet another complication to good verification and validation.


There are still schools where you can learn to shoe a horse...


>> [2] to the inevitable responses that say "well I actually learn things better now because the LLM explains it to me", that's great, but what's relevant here is that a large chunk of people learn by struggling <<

I'm using AI to explain things to me.

And I'm still struggling, I'm just struggling less.

That's progress I think.


AWS Lambda + API Gateway. I use Serverless to deploy to these.

If traffic is high, this could get pricey. For example, I have a site up getting about 20,000k visitors a day, hitting it many times each, and it's probably costing me $400/month, whereas a $50 server would probably do the trick (it's a temporary thing that I'm migrating away from soon). But I have a bunch of smaller projects with a few visitors a day and the cost rounds to $0.

The nice thing is that it'll basically scale infinitely when you need it to.
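For context, the functions themselves are tiny. A minimal sketch of a Python Lambda handler behind an API Gateway proxy integration (names and payload here are illustrative, not from any of my projects):

    # Minimal AWS Lambda handler for an API Gateway (proxy integration) route.
    # Deployed via the Serverless Framework or similar; names are illustrative.
    import json

    def handler(event, context):
        name = (event.get("queryStringParameters") or {}).get("name", "world")
        return {
            "statusCode": 200,
            "headers": {"Content-Type": "application/json"},
            "body": json.dumps({"message": f"hello {name}"}),
        }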


1) When you are selecting a command, you get a little description at the bottom telling you what it does.

2) This doesn't run anything. It goes to your clipboard and you have to run it yourself.

3) This is a good callout… what do you think? I'm thinking maybe ask the models to return a Boolean is_dangerous plus a small explanation, and then I can display dangerous commands in red and show the warning when you select one (rough sketch of that shape below).
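A hypothetical sketch of that response shape, assuming Pydantic for validation -- the field names match the idea above; everything else is illustrative and not what's actually in zev:

    # Hypothetical response shape for flagging dangerous commands (illustrative).
    from pydantic import BaseModel

    class CommandSuggestion(BaseModel):
        command: str
        is_dangerous: bool
        explanation: str  # short note shown when a dangerous command is selected

    raw = '{"command": "rm -rf build", "is_dangerous": true, "explanation": "Deletes the build directory."}'
    suggestion = CommandSuggestion.model_validate_json(raw)
    if suggestion.is_dangerous:
        print(f"WARNING: {suggestion.explanation}")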


sounds like a solid plan


Just fyi, this is now implemented


Ha, no, just a coincidence. Named after someone I know named Zev. But I chose it because it's short and not taken on PyPI.

