Hacker News | soiler's comments

Reasonable, but not necessarily true.

1. We don't understand what the motivations of our own AI are, let alone "typical" alien AI

2. Expanding AI might be better at and/or more invested in hiding itself. It probably has no need for wasteful communications, for example.


How the hell did this person start their SWE career at the same time as me and come to feel they have an authoritative view on the state of the industry? Maybe it's because they're a recent grad and I'm an older career changer, but I still feel quite new to the industry.


You mean: How the hell does a 25-year-old who spent most of their life so far in school being taught to write affected essays on a tight deadline for points not have an excess of perspective, wisdom, and humility?


A part time job washing floors? Any hobby that puts them in contact with the not-highschool reality? Spend a few days in the mountains? Meet someone who got into a great school despite not getting an A+++ in highschool lit?


I've been programming for 26 years, employed in the software industry for 20 years, been a SWE at multiple FANGs as well as a few smaller tech businesses, and I feel probably more similar to you than I do to the author regarding our views on the state of the industry.


The other day I asked it to explain how to use a certain Vue feature that wasn't working as I'd hoped. It explained incorrectly, and when I drilled down, it started using React syntax disguised with Vue keywords. I definitely could have tried harder to get it to figure out what was going on, but it kept repeating its mistakes even when I pointed them out explicitly.


Part of why I don't use ChatGPT very much for work is that I don't want to feed significant amounts of proprietary code into it. It could be the one thing that actually gets me in trouble at work, and it seems risky regardless. How is it you're comfortable with doing so? (Not asking in a judgmental way, just curious. I would like to have an LLM assistant that understood my whole codebase, because I'm stumped on a bug today.)


I'm not doing it right now; I'm more imagining a near-term product designed for this (maybe even with the option to self-host). Current LLMs probably couldn't hold enough context to analyze a whole codebase anyway, just one file at a time (which could still be useful, but only up to a point).


That's even worse. They might not have the knowledge to realize the regex an AI gives them is bunk, or to debug it when it fails.

I'd like to see some numbers on a tool like this. If a huge majority of people are seeing genuine improvements in their workflow with it, I won't be a luddite yelling at them. Rare, low-severity failures shouldn't hold us back.

But the potential cost of failure with (any) regex is very high, so I personally wouldn't want to trust any remotely mission-critical regex to a person who doesn't understand regex well enough to write it themselves. And if they can write it on their own, that's often faster than debugging AI-generated regex.
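To illustrate the kind of silent failure I mean (a hypothetical example, not one from a real tool): an AI-suggested pattern can look plausible while quietly matching too much, like an unescaped dot in a "version number" check.

```javascript
// Hypothetical illustration: an unescaped '.' matches ANY character,
// so this "version number" pattern silently accepts garbage.
const sloppy = /^\d+.\d+$/;   // '.' here means "any character"
const strict = /^\d+\.\d+$/;  // escaped '\.' matches only a literal dot

console.log(sloppy.test("1.2")); // true
console.log(sloppy.test("1x2")); // true  -- accepted, nothing fails loudly
console.log(strict.test("1x2")); // false
```

Someone who can't write the pattern themselves has no way to spot that the two regexes behave differently, which is the debugging problem in a nutshell.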


I read a few weeks ago here on HN about one large SaaS grinding to a halt because of a greedy selector in one line of regex. I'm not sure how people find old stories; it's lost to me now. But it was an excellent example of why regex is dangerous and requires a lot of care to write. I wouldn't trust an AI to write my regex unless I saw that people were finding it to be consistently better than they are at writing what they need.


I'm (genuinely) curious what kind of code you write. I haven't tried Copilot and I haven't used ChatGPT very much, but I feel I would be pretty surprised if either of them made significant improvements to my workflow.

Copilot I could see, since I already use Intellisense, autocomplete, and snippets to great effect. I'd be annoyed if I had to work without them. But in general, knowing what I want the code to do is >90% of the work of writing new code.

I feel there are a few possibilities for why I'm confused:

1. I'm not a very good software engineer, at least in certain respects. Maybe I should have a better understanding of architecture patterns or something I might have learned in a CS degree. Maybe I am hacking everything together and maybe I am already a slow coder.

2. I'm not being creative enough as a prompt engineer. I typically can't think of any way that ChatGPT could help me without ingesting my entire repo and figuring out the correct patterns. It could be, however, that there are ways to get the answers I need with better questions.

3. We do completely different kinds of work, and some kinds of coding are better suited for AI assistance than others.


The opposite of 1 is also possible. You're a really good programmer who knows the material better, and you just don't need to ask the kinds of questions that other people are asking ChatGPT (or Stack Overflow, or man pages), and/or you're happy with your current reference materials.


> I think tech companies are (very) net positive for society

There are certainly huge positives, but do you really feel something like Facebook is a net positive? Facebook, which intentionally stoke(d/s) genocide? Genocides have existed before Facebook, yes, but so did communication and racist relatives.


Do you have a source on them intentionally stoking genocide? I'm no fan of Facebook, but if there's reliable evidence of that, I'd expect summons to The Hague in short order, which I've yet to see.


Some people certainly believe so[1], and there are plenty of other links if you search for 'facebook myanmar genocide' (though I would assume they share a few common sources). But intentions are, of course, hard to prove.

[1] https://www.amnesty.org/en/latest/news/2022/09/myanmar-faceb...


> Personally, I'd say that at least Google's Android department is currently headless and has no idea what the users want.

The clock change, while minor, really put the nail in the coffin for me. I have very little optimism for Android. Luckily, it still allows me to use an app to revert to a readable clock display. I don't particularly want to switch to iOS, and I am happy about GrapheneOS, but it's still going to suffer from bad decisions coming from Android.


Which clock change are you referring to? I don't think I've ever heard anything about this.


Android changed the lock screen clock from HH:MM to

HH

MM

It's a little stupid to be angry about, but it's also pretty stupid to do in the first place.


I would not consider a scheduled physical/social/emotional activity to be a distraction. Is eating lunch a distraction? Is sleeping a distraction? If anything, I think Avicenna supports the value of reducing distraction.

