
It's neat!

Suggestions:

A version where you actually need to find sets -- "have two, find third" is way too easy.

A monospaced font -- right now it's too hard to visually see differences in word length w/o counting letters.


I read about 4 paragraphs of the blog post, and it does not at all read like it was written by ChatGPT!

Some people are perhaps overly focused on superficial things like em-dashes. The real tells for ChatGPT writing are more subtle -- a tendency toward hyperbole (it's not A, it's B!, where B is a florid restatement of essentially A), a certain kind of rhythm, and frequently a kind of hard-to-describe "emptiness" of claims.

(LLMs can write in many styles, but this is the sort of "kid filling out the essay word count" style you get in ChatGPT etc. by default.)


It does not, but to many, many people who cannot tell the difference, it does -- simply because it's well-written, somewhat-formal-register English and not "internet speech" or a similar casual register. As you probably know, there are many these days who take the mere use of em or en dashes as a reliable sign of LLM writing.


Hey bro! This is the real English bro! No way we can write like that bro! What? - and ;? The words like "furthermore" or "moreover"? All my homies never use the words like that bro! Look at you. You're using newline! You're using ChatGPT, right bro?


Given the eloquently natural words in this post, I conclude you must be this thread's prompt engineer! Well done, my fellow Netizen. Reading your words was like smelling a rosebud in spring, just after the heavy snow fell.

Now, please, divulge your secret--your verbal nectar, if you wish--so that I too can flower in your tongue!


It's basically a "when to rip the band-aid off" type of situation.

Briefly poked around w/ Linux again for the first time in years (Omarchy, DHH's tuned take on Arch + Hyprland), and hoo boy, it's come a long way! Nothing like the KDE/Gnome+X jankery of the olden times. Very polished, very slick, very nice.


I did try Omarchy on an old laptop, and it was fairly painless to get started. Unfortunately, I developed an unease the more I read about DHH, and decided to bail.

If anything though, Omarchy shows it's not impossible to get a nice working environment on Linux.


Saying "Foo.app is damaged" is lying to the user though, which is not nice, and not a good sign, in general, for the health of a company / its culture.


Saying it's damaged is by design; Apple wants to scare you away. I agree it feels bad from one POV -- that was my initial reaction. I also agree, though, that steering grandma away from evil apps is good too.


Part of the reason computer users like your grandma are so helpless is that OSes have devolved to be completely untrustworthy. Everything lies, and error messages now look like "oopsy windows made a fucky! >_<"

It's no wonder granny has zero confidence in the computer and is always behind.


Yeah, by design, of course, but I still think it's bad (& there are plenty of ways to scare grandma without lying to her, if you really need to do that).

In general I'd contend that the mindset which leads you to believe "we need to lie to our users because they are dumb" isn't conducive to making good software.


Lying to people is usually bad because they will stop trusting your warnings.


I read the first few paragraphs. Very much reads like LLM slop to me...

E.g., "Zig takes a different path. It reveals complexity—and then gives you the tools to master it."

If we had a reliable oracle, I would happily bet a $K on significant LLM authorship.


Yeah, and then why would they explicitly deny it? Maybe the AI was instructed not to reveal its origin. It's hard to enjoy this book knowing it was likely made by an LLM.


If you find it useful, no harm in enjoying it! The main problem with AI content is it's just not good enough...yet. It'll get there. The LLMs just need more real-world feedback incorporated, rather than being the ultimate has-read-everything-but-actually-knows-nothing dweeb (a lot of humans are like this too). (You can see the first signs of overcoming this w/ the latest models' coding skills, which are stronger via RL, I believe.) (Not first-hand knowledge tho -- pot-kettle-black situation there.)


I'm a huge K2 fan; it has a personality that feels very distinct from other models (not sycophantic at all), and it's quite smart. Also pretty good at creative writing (tho not 100% slop-free).

K2 hosted on Groq is pretty crazy for intelligence/second. (Low rate limits still, tho.)


They crossed it definitively -- and, to me, still unbelievably -- when they started showing ads as the first result in App Store search. For a long time, searching "ChatGPT" in the App Store would surface a rip-off clone w/ a lookalike icon as the first result. How many thousands of users inadvertently downloaded the clone, paid for it, and were, basically, victims of a scam facilitated by Apple? (Now the first result for ChatGPT, Claude, or Grok is at least the correct first-party ad, though this almost seems like extortion on the part of Apple.)

(Software quality has also fallen off a cliff, though that's more a loss of institutional competence, I think, than active anti-user behavior motivated by avarice.)


My mother fell for exactly this. Downloaded a ChatGPT clone and paid for it. She was quite upset with herself when I had to tell her.

Until now I blamed Google, but now it seems much more likely that it was Apple’s fault.


Huge fan of X, but it's pissing in the face of your fans to tell such obvious lies.


> Huge fan of X

Why? It's a cesspool of hate. Even if you try to avoid the political nonsense, Elon forces himself and his cronies into your recommendations.


X has everything, and you can pick what you follow (there's a "For You" tab, but also a strictly chronological Following tab). I like it for the variety of political views (e.g. super-lefty @caitoz, super-righty @L0m3z), following interesting LLM stuff (@elder_plinius is a great follow), lots of devs (e.g. Carmack...), art accounts (@yumenohajime, @neurocolor), nutrition/health stuff -- so much good stuff!

(The FYP, alas, sucks, and has since forever...)


But Elon Musk is a Nazi who goes around doing Hitler salutes. By using X you are implicitly endorsing and supporting this.


Swift is an early example of Apple losing its way. Such a stark contrast to Objective-C -- which was a simple, fast-compiling language that punched way above its weight for expressivity and runtime speed. A great language for its day. Swift is "a C++ hacker's first attempt at language design".


I would be fine with an Objective-C 3.0, but the big question would be how to fix the underlying C flaws, which was one of Swift's original goals.

I do agree that the language design has gone overboard in the last couple of years, especially in the various approaches to parallelism.

However, they are not alone; just look at any programming language sponsored by companies: you need features to justify team sizes, and naturally old features don't go away.


Taste & trade-offs aside, you've gotta make it compile reasonably fast! I do get that Objective-C is not the pinacle of language development, but you shouldn't give your main language the rough edges of a research project.

(And while the past shouldn't necessarily be a shackle on the future, it is striking that such a radically different set of trade-offs was picked for Swift vs Obj-C.)

I think both Go and C# are pretty nice languages, to give you an idea of where I'm coming from. And Rust is very interesting -- as a user, you see software written in it exceed the previous state of the art (e.g., ripgrep).

I don't see that w/ Swift. It seems like the opposite. E.g., the terrible Settings rewrite that rolled out a couple releases ago...

Confession, though: while I did a lot of objc back in the day, I've never done more than kick the tires on Swift, so I'm not critiquing from a position of deep knowledge -- more like talking shit from the sidelines. But I do think I'm right. ;-)


C# is starting to get a C++-like feeling; I can no longer keep track of all the features that get added every year, especially when I'm not able to work in vLatest in consulting gigs.

Just compare C# 14 with C# 1: a laundry list of features and BCL changes.

Go has plenty of warts caused by ignoring the history of programming languages.

Rust's async/await story isn't that great, as it is kind of half-done.

We could also add others to the list: each year they get a few more constructs, runtime changes, standard library changes, whatever the package manager of the year is, and so on.

All have issues; then again, we can go back to the famous Bjarne Stroustrup quote:

"There are only two kinds of languages: the ones people complain about and the ones nobody uses".


I wasn’t aware objc was considered “fast”.


A lightweight wrapper over C, so as fast as C when you want it to be. Message passing isn't as fast as, say, vtables, but it's still quite snappy, and flexible for loosely binding objects together. Having no generics avoids code bloat.

In practice, obj-c apps were snappy, e.g., good perf on the extremely limited hardware of the original iPhone. The (I assume) SwiftUI macOS Settings app is much slower than the old version it replaced -- too much heavy framework magic resulting in slower final code? That's my diagnosis/guess from afar (I might be wrong ofc); it's a pitfall that objc did not tend to lead developers into.
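To make the dispatch-cost point concrete, here's a rough sketch in Python (hypothetical names, not real Objective-C runtime code): vtable-style dispatch is a fixed index into a function table, while message-passing-style dispatch looks up the implementation by selector name at call time, which costs more but lets any object that has a matching entry respond:

    # Hypothetical sketch contrasting the two dispatch styles.

    class Rect:
        def draw_impl(self):
            return "drew rect"
        def resize_impl(self):
            return "resized rect"

    r = Rect()

    # vtable-style: each method has a fixed slot, resolved at compile time.
    DRAW_SLOT, RESIZE_SLOT = 0, 1
    vtable = [Rect.draw_impl, Rect.resize_impl]
    print(vtable[DRAW_SLOT](r))        # one indexed load + indirect call

    # message-passing-style: resolve by selector name at call time
    # (roughly what objc_msgSend does, with a method cache in front).
    selectors = {"draw": Rect.draw_impl, "resize": Rect.resize_impl}
    print(selectors["draw"](r))        # hash lookup + call: slower, but any
                                       # object whose table has "draw" works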


There is a website forvo.com which has a bunch of community-generated pronunciations of words in a ton of languages. I used to use it a lot when I was playing around w/ learning languages.

There's also a paid API. I made a very basic command-line client which might still work: https://github.com/erinok/forvosay
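For anyone curious, the API is a simple HTTP GET. A minimal sketch in Python -- the URL shape follows Forvo's documented word-pronunciations action, but treat the exact path and response fields as assumptions and check the current docs; the key is a placeholder:

    import json
    import urllib.request

    API_KEY = "YOUR_FORVO_KEY"  # placeholder; get a real key from Forvo
    word, lang = "hello", "en"

    # Assumed URL shape per Forvo's word-pronunciations action;
    # verify against the current API docs before relying on it.
    url = (f"https://apifree.forvo.com/key/{API_KEY}/format/json"
           f"/action/word-pronunciations/word/{word}/language/{lang}")

    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)

    for item in data.get("items", []):
        print(item.get("pathmp3"))  # direct mp3 URL per pronunciation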

