Hacker News | bashtoni's comments

Australian Bureau of Meteorology advisory for visible aurora: https://www.sws.bom.gov.au/Aurora

Worth noting that Kp, which many people cite in online discussions, is more or less useless for anyone in Australia or the southern hemisphere. Lots of beginner aurora chasers here get tripped up by that.

What is useful is KAus and the G index. KAus is shown on this page, so that's what I'll be tracking.


Are there any resources to track Aurora sightings or predicted sightings?

https://aurora-alerts.uk/ Ignore the UK TLD; this tracks global sightings.

Ahh thank you! I saw a few photos from last night's Aurora sightings down here in Vic, Australia. Maybe next time! :)

At the bottom right of that page is a subscribe link, with a number of different alerts and lists to subscribe to.

Thank you!

Is that tonight or last night?

It was only issued this morning Australian time, so I presume it's for tonight.

The tech industry seems to attract people who feel personally attacked when someone else makes different choices than they do.

"Why are you using Go? Rust is best! You should be using that!" "Don't use AWS CDK, use Terraform! Don't you know anything?"


It's not just the tech industry, this is a fundamental feature of humans as social animals.

https://knowyourmeme.com/videos/433740-just-coffee-black

> want to feel normal, to walk around and see that most other people made the same choice they made


Keep in mind that anyone posting on a forum (just like this one), and even more so anyone blogging about something, already represents a huge selection bias toward people who believe their opinion needs to be shared.

You don't hear from all the people who don't feel that others must know their opinion.

Lurkers always outweigh posters.

Don't ever make the mistake of believing that a sample of posts is a sample of people.


Humans are tribal, which has both benefits and costs.

In technology, the historical benefit of evangelizing your favorite technology might simply be that it becomes more popular and better supported.

Whether or not LLMs follow the same path, if you can get your fellow man on board, then you'll have a shared frame of reference and someone to talk through different usage scenarios with.


You need to be fairly smart to be in tech. People who grew up smart and were told so tend to view it as part of their self-worth. If someone disagrees with them later on, their self-worth has been attacked, so of course they are going to lash out.

The worst thing you can say to a dev is that they are wrong. Most will do everything in their power to prove otherwise, even on the dumbest of topics.


Hi Felix!

Simple suggestion: the logo should be a cow and an orc to match how I originally read the product name.



Sorry not related - your blog is awesome. Cool to see you here on HN!


I'm starting to suspect some of these comments might be AI generated and it's all an experiment. The guy is the top comment in every other HN thread.


He's the top comment on every AI thread because he is a high-profile developer (co-created Django) and now runs arguably the most information-rich blog that exists on the topic of LLMs.


The logo is AI generated... I think it is reasonable to assume that so are many of the other things this account does.


That's not really reasonable to assume at all. Five minutes of research would give you a pretty strong indication of his character. The dude does not need to self-aggrandize; his reputation precedes him.


Yeah, I was joking; I don't think it is AI, but I'm starting to get a bit tired of seeing his posts at the top of every AI thread.

Diversity of opinions is good; someone monopolizing the #1 comment of every AI thread is not healthy for the community.


Perhaps. But perhaps this era of AI slop leaves a foul taste in many people's mouths. I don't know his reputation; all I see is somebody who felt the need to AI-generate a picture and post it on HN. This is slop, and I personally get bad vibes from people who post AI-generated slop, which leaves me with all sorts of assumptions about their character.

To clarify, they are here to have fun, they liked the joke about cow-ork (which I did too, it was a good joke), and they had an idea for how to build on that joke. But instead of putting in a minor effort (like 5 min in Inkscape), they wrote a one-sentence prompt to nano-banana and assumed everybody would love it. Personally, I don't.


If you can draw a cow and an orc on top of an Anthropic logo with five minutes in Inkscape in a way that clearly captures this particular joke, then my hat is off to you.

I'm all in on LLMs for code and data extraction.

I never use them to write text for my own comments on forums or social media, or for my various personal blogs; those represent my own opinions and need to be in my own words.

I've recently started using them for some pieces of code documentation where there is little value to having a perspective or point of view.

My use of image generation models is exclusively for jokes, and this was a really good joke.


This really is unnecessarily harsh. As someone who's been reading Simon's blog for years and getting a lot of value from his insights and open source work, I'm sad to see such a snap, dismissive judgement.

"all sorts of assumptions about [someone's] character" based on one post might not be a smart strategy in life.


I'd say it is necessarily harsh. It is not as if Simon's opinions on AI were really better than those of others here who are just as technical as he is.

He is prolific, and being at the top of every HN thread is what makes him look like a reference, but there are 50+ other people saying interesting things about AI who aren't getting the attention they deserve, because at the top of every AI thread we are discussing a pelican riding a bike.


He very obviously disclosed that he had nano banana generate the logo. Using AI to boost himself is a different animal altogether. (The difference is lying)


This is the Internet. Everyone here is an AI running in a simulator like the Matrix. How do I know you're not an AI? How do you know I'm not? I could be! Please, just use an em—dash when responding to this comment to let me know you're an AI.


That is an unreasonably good interpretation.


ENOPELICANS


Specifically, an orc riding a cow into battle with a pose similar to the viking(?) on the cover of Clojure for the Brave and True[0]!

[0]: https://www.braveclojure.com/assets/images/home/png-book-cov...



This is exactly why the CEO of Anthropic has been talking up "risks" from AI models and asking for legislation to regulate the industry.


He's talking about a completely different type of risk and regulation: job displacement risks, security and misuse concerns, and ethical and societal impact.

https://www.youtube.com/watch?v=aAPpQC-3EyE

https://www.youtube.com/watch?v=RhOB3g0yZ5k


Anthropic are already running much of their workloads on Amazon Inferentia, so the nvidia tax was already somewhat circumvented.

AIUI everything relies on TSMC (Amazon and Google custom hardware included), so they're still having to pay to get a spot in the queue ahead of/close behind nvidia for manufacturing.


If this were actually the lesson, they'd be banning Fortinet, but it seems these concerns about security don't apply to US-listed companies.


Bold of you to assume those Fortinet vulns aren't just exposed government backdoors.


This is like seeing a food poisoning outbreak at a fast-food restaurant and concluding that it must be CIA/FSB/Mossad bogeymen testing a bioweapon. These breaches are things like not validating authentication tokens at all (not just validating them incorrectly), and that would be a big drop in professionalism from what we've seen in nation-state-level attacks:

https://labs.watchtowr.com/get-fortirekt-i-am-the-super_admi...


Hanlon's razor, paradoxically, is the perfect cover for surreptitious malice. We've already got a perfectly reasonable razor telling people not to assume malice, after all.

And to be clear, let's not forget that the US government did intentionally and secretly conduct surreptitious biological warfare tests against entire US cities that deliberately inflicted disease upon and killed American citizens. There was an entire formal program that spanned decades - https://en.wikipedia.org/wiki/United_States_biological_weapo...

Of course, the US government doesn't have any secret programs anymore and never lies to us, so everyone can rest easy knowing nothing like this could ever happen again.


It looks like AI slop to me. "Profiles in Firefox aren’t just a way to clean up your tabs. They’re a way to set boundaries, protect your information and make the internet a little calmer." - classic meaningless comparison.


The service is still in preview, so AWS are explicitly telling people not to put it into production.

From my non-production experiments with it, the main limitation is that top_k is capped at 30 results, which means you can't use it with a re-ranker, or at least not as effectively. For many production use cases that will be a deal-breaker.
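
To illustrate why the cap matters, here is a minimal sketch of the retrieve-then-re-rank pattern it constrains. The query_vector_index function and index wiring are hypothetical placeholders (the preview service and its client aren't named above), and the cross-encoder model is just one example of a re-ranker:

    # Sketch of retrieve-then-re-rank with the 30-result cap.
    # query_vector_index is a hypothetical stand-in for the preview
    # service's query call; only the overall pattern is the point.
    from sentence_transformers import CrossEncoder  # example re-ranker

    TOP_K_CAP = 30  # the hard limit described above

    def query_vector_index(query_embedding, top_k):
        """Hypothetical vector-store query; returns [(doc_id, text), ...]."""
        raise NotImplementedError("swap in the real service client here")

    def retrieve_and_rerank(query_text, query_embedding, final_n=10):
        # Normally you'd over-fetch 100-200 candidates so the re-ranker
        # has a wide pool to reorder; the cap shrinks that pool to 30.
        candidates = query_vector_index(query_embedding, top_k=TOP_K_CAP)

        reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
        scores = reranker.predict([(query_text, text) for _, text in candidates])

        ranked = sorted(zip(candidates, scores), key=lambda p: p[1], reverse=True)
        return [doc for doc, _ in ranked[:final_n]]

With only 30 candidates, the re-ranker can reorder what the first stage returned but can't surface anything it missed, which is the loss of effectiveness described above.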


My issue with it is that it requires a lot of duplication between it and a traditional RDBMS; you can't use it alone because it doesn't offer filtering without a search vector (i.e., what some vendors call a scroll function).


I hope these hypothetical banks will also be giving these theoretical indefinite loans interest-free.


For those unaware, there are a ton of AI-generated videos across YouTube, TikTok, Instagram, and Facebook of physicist Brian Cox saying this is an alien spacecraft:

https://www.ign.com/articles/physicist-brian-cox-thanks-yout...


Damn interstellar tourists, just coming here, taking in the sights and leaving. The least they could do is spend some money in the local economy, smh.

