Worth noting that Kp, which many talk about in discussions online, is more or less useless for anyone in Australia or the southern hemisphere. Lots of beginner aurora chasers here get tripped up by that.
What is useful is KAus and the G index; KAus is shown on this page, so that's what I'll be tracking.
Keep in mind that anyone posting on a forum (just like this one), or even more so anyone blogging about something, is already a huge selection bias toward people who believe their opinion needs to be shared.
You don't hear from all the people who don't feel that others must know their opinion.
Lurkers always outweigh posters.
Don't ever make the mistake of believing that a sample of posts is a sample of people.
Humans are tribal, which has both benefits and costs.
In technology, the historical benefit of evangelizing your favorite technology might just be that it becomes more popular and better supported.
LLMs may or may not follow the same path, but if you can get your fellow man on board, you'll have a shared frame of reference and someone to talk through different usage scenarios with.
You need to be fairly smart to be in tech. People who grew up smart and were told so tend to tie it to their self-worth. If someone disagrees with them later on, their self-worth has been attacked, so of course they're going to lash out.
The worst thing you can say to a dev is that they're wrong. Most will do everything in their power to prove otherwise, even on the dumbest of topics.
He’s the top comment on every AI thread because he is a high-profile developer (co-created Django) and now runs arguably the most information-rich blog that exists on the topic of LLMs.
That’s not really reasonable to assume at all. Five minutes of research would give you a pretty strong indication of his character. The dude does not need to self-aggrandize; his reputation precedes him.
Perhaps. But perhaps this era of AI slop leaves a foul taste in many people’s mouths. I don’t know his reputation; all I see is somebody who felt the need to AI-generate a picture and post it on HN. This is slop, and I personally get bad vibes from people who post AI-generated slop, which leaves me with all sorts of assumptions about their character.
To clarify: they are here to have fun, they liked the joke about cow-ork (which I did too, it was a good joke), and they had an idea for how to build on that joke. But instead of putting in a minor effort (like 5 min in Inkscape), they wrote a one-sentence prompt to nano-banana and thought everybody would love it. Personally I don’t.
If you can draw a cow and an ork on top of an Anthropic logo with five minutes in Inkscape in a way that clearly captures this particular joke then my hat is off to you.
I'm all in on LLMs for code and data extraction.
I never use them to write text for my own comments on forums or social media or my various personal blogs; those represent my own opinions and need to be in my own words.
I've recently started using them for some pieces of code documentation where there is little value to having a perspective or point of view.
My use of image generation models is exclusively for jokes, and this was a really good joke.
This really is unnecessarily harsh. As someone who's been reading Simon's blog for years and getting a lot of value from his insights and open source work, I'm sad to see such a snap, dismissive judgement.
"all sorts of assumptions about [someone's] character" based on one post might not be a smart strategy in life.
I'd say it is necessarily harsh. It's not as if Simon's opinions on AI are really better than those of others here who are as technical as he is.
He is prolific, and being at the top of every HN thread is what makes him look like a reference, but there are 50+ other people saying interesting things about AI who aren't getting the attention they deserve, because in every top AI thread we're discussing a pelican riding a bike.
He very obviously disclosed that he had nano banana generate the logo. Using AI to boost himself is a different animal altogether. (The difference is lying)
This is the Internet. Everyone here is an AI running in a simulator, like the Matrix. How do I know you're not an AI? How do you know I'm not? I could be! Please, just use an em-dash when responding to this comment to let me know you're an AI.
He's talking about a completely different type of risk and regulation. It's about job displacement risks, security and misuse concerns, and ethical and societal impact.
Anthropic are already running much of their workload on Amazon Inferentia, so the nvidia tax was already somewhat circumvented.
AIUI everything relies on TSMC (Amazon and Google custom hardware included), so they're still having to pay to get a spot in the queue ahead of/close behind nvidia for manufacturing.
This is like seeing a food poisoning outbreak at a fast food restaurant and concluding that it must be CIA/FSB/Mossad bogeymen trying a bioweapon. These breaches are things like not validating authentication tokens (at all, not just correctly) and that would be a big drop in professionalism from what we’ve seen from nation-state level attacks:
Hanlon's razor, paradoxically, is the perfect cover for surreptitious malice. We've already got a perfectly reasonable razor telling people not to assume malice, after all.
And to be clear, let's not forget that the US government did intentionally and secretly conduct surreptitious biological warfare tests against entire US cities that deliberately inflicted disease upon and killed American citizens. There was an entire formal program that spanned decades - https://en.wikipedia.org/wiki/United_States_biological_weapo...
Of course, the US government doesn't have any secret programs anymore and never lies to us, so everyone can rest easy knowing nothing like this could ever happen again.
It looks like AI slop to me.
"Profiles in Firefox aren’t just a way to clean up your tabs. They’re a way to set boundaries, protect your information and make the internet a little calmer." - classic meaningless comparison.
The service is still in preview, so AWS are explicitly telling people not to put it into production.
From my non-production experiments with it, the main limitation is that you can only retrieve up to 30 top_k results, which means you can't use it with a re-ranker, or at least not as effectively. For many production use cases that will be a deal breaker.
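To illustrate why the cap matters, here's a minimal sketch of the retrieve-then-re-rank pattern (in Python). The vector_search() function is a hypothetical stand-in for the preview API, not its real client call, and the cross-encoder model is just one example from sentence-transformers:

    # Minimal sketch of retrieve-then-re-rank. vector_search() is a
    # hypothetical stand-in for the preview API, not its real client call.
    from sentence_transformers import CrossEncoder

    def vector_search(query_embedding, top_k):
        """Hypothetical vector-store query; returns [(doc_id, text, ann_score), ...]."""
        raise NotImplementedError

    def retrieve_and_rerank(query, query_embedding, final_k=10):
        # A re-ranker usually wants a wide candidate pool (say 100-200 hits)
        # so it can promote documents the ANN search under-scored. With the
        # preview's limit, the pool can never exceed 30 candidates.
        candidates = vector_search(query_embedding, top_k=30)

        reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
        scores = reranker.predict([(query, text) for _, text, _ in candidates])

        ranked = sorted(zip(candidates, scores), key=lambda p: p[1], reverse=True)
        return [doc for doc, _ in ranked[:final_k]]

With only 30 candidates to re-score, the re-ranker has very little headroom to recover documents the first-stage search missed.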
My issue with it is that it requires a lot of duplication between it and a traditional RDBMS; you can't use it alone because it doesn't offer filtering without a search vector (i.e. what some vendors call a scroll function).
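As a rough sketch of that duplication (illustrative names only, assuming SQLite as the relational side and a hypothetical vector-store client): every document gets written twice, and any filtered listing or scroll has to come from the relational table because the vector index can't answer it without a query vector.

    # Illustrative only: the same rows live in an RDBMS (here SQLite) for
    # plain filtered listing, and in the vector index for similarity search.
    import sqlite3

    conn = sqlite3.connect("docs.db")
    conn.execute("""CREATE TABLE IF NOT EXISTS docs (
        id TEXT PRIMARY KEY, tenant TEXT, created_at TEXT, body TEXT)""")

    def vector_index_put(doc_id, embedding, metadata):
        """Hypothetical vector-store write; stands in for the real client."""
        raise NotImplementedError

    def upsert(doc_id, tenant, created_at, body, embedding):
        # Write once to the relational table...
        conn.execute("INSERT OR REPLACE INTO docs VALUES (?, ?, ?, ?)",
                     (doc_id, tenant, created_at, body))
        conn.commit()
        # ...and again to the vector store.
        vector_index_put(doc_id, embedding, metadata={"tenant": tenant})

    def list_recent(tenant, limit=50):
        # A "scroll"-style filtered listing needs no query vector, so it has
        # to go through the RDBMS; the vector store alone can't serve it.
        return conn.execute(
            "SELECT id, body FROM docs WHERE tenant = ? ORDER BY created_at DESC LIMIT ?",
            (tenant, limit)).fetchall()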
For those unaware, there are a ton of AI-generated videos across YouTube, TikTok, Instagram, and Facebook of physicist Brian Cox saying this is an alien spacecraft: