The best way to discover music nowadays is RateYourMusic. I go to an album I like, read a couple reviews to find like-minded people and check out their profiles. They often have lists with their favorite albums.
The album chart queries are also incredible. The site has a very detailed system of genres and descriptors so you can find exactly what you want.
Audiotree has turned me on to several of my favorite bands as of late. The hit rate is low (I probably care for 5% of the music they feature, at most), but those few bands have been worth sifting through the rest for.
> The site has a very detailed system of genres and descriptors
My problem with this is that it makes certain assumptions about the consistency with which genres are applied, and about the very concept of genre, which (imo) is more of a social construct than an empirical category. It falls in the same bucket as the religion/sect and language/dialect distinctions.
If you use Spotify, you can download your full listening history here: https://www.spotify.com/us/account/privacy/. You get it in a pretty convenient JSON format and with a little bit of code it's pretty easy to create some visualizations.
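For example, here's a minimal sketch that totals your listening hours per artist (assuming the basic export's StreamingHistory*.json files, whose records have "artistName" and "msPlayed" fields; the extended history export uses different field names, so adjust accordingly):

    import json
    from collections import Counter

    # Assumes the basic account-data export format; the extended
    # streaming history export names its fields differently.
    with open("StreamingHistory0.json", encoding="utf-8") as f:
        plays = json.load(f)

    # Sum milliseconds played per artist, converted to hours.
    hours = Counter()
    for play in plays:
        hours[play["artistName"]] += play["msPlayed"] / 3_600_000

    for artist, total in hours.most_common(10):
        print(f"{total:6.1f} h  {artist}")

From there it's a short step to feeding the aggregates into matplotlib or whatever plotting tool you prefer.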
There are also websites for visualizing this data. I'm quite fond of this one: https://explorify.link/. It allows you to do some custom queries.
I built a web app years ago with the Spotify SDK to display your top artists, songs, and recents, along with a Discovery section that generates new music based on your history. You can create playlists from all sections. Free @ https://echoesapp.io
Note that apps built on the SDK don't have access to the full history, only up to some cutoff. I tried a couple over the years and wrongly concluded that Spotify deleted your history after some time.
The data download does contain everything, which was a very pleasant surprise. I didn't think I'd ever see the data from the couple-of-years gap in my last.fm history.
> They have noted "Preparation time 30 days" :/ What takes so long?
There's probably one person nursing some horrific bogslop software that frequently breaks but absolutely cannot be rewritten or changed (because it was someone's pet project), and that has to be manually twizzled to get things out of what is probably a hostile data retrieval environment. They're just TIRED, and that's why there's a 30-day leeway: otherwise the Data Retrieval Goblin would be way over the line of overwhelmed rather than just under it all the time.
Probably.
(I realise I've likely described a significant percentage of companies there.)
There are a couple parts at the start and the end where a lady points her phone camera at stuff and asks an AI about what it sees. Must have been mind-blowing stuff when this section was recorded (2023), but now it's just the bare minimum people expect of their phones.
I was ok with that as "fledgling AI" at the start of the movie/documentary, but thought that going back to it and having the chatbot suggest a chess book opening to Hassabis at the end was cheesy and misleading.
They should have ended the movie on the success of AlphaFold.
In the IEEEXTREME university programming competition there are ~10k participating teams.
Our university has quite a strong competitive programming program, and our best teams usually rank in the top 100. Last year a team ranked 30th, and it wasn't even our strongest team (which didn't participate).
This year none of our teams was able to get into the top 1000. I would estimate that close to 99% of the teams in the top 1000 were using LLMs.
Last year they didn't seem to help much, but this year they rendered the competition pointless.
I've read blogs and seen videos by people who got onto the AOC global leaderboard last year without using LLMs, but I think this year it wouldn't be possible at all.
Man, those people using LLMs in competitive programming... where's the fun in that? I don't get people for whom it's just about winning. I wish everyone would just have some basic form of dignity and respect.
I’m a very casual gamer but even I run into obvious cheaters in any popular online game all the time.
Cheating is rampant anywhere there’s an online competition. The cheaters don’t care about respecting others, they get a thrill out of getting a lot of points against other people who are trying to compete.
Even in the real world, my runner friends always have stories about people getting caught cutting trails and all of the lengths their running organizations have to go through now to catch cheaters because it’s so common.
The thing about cheaters in a large competition is that it doesn’t take many to crowd out the leaderboard, because the top of the leaderboard is exactly where they end up. If there are 1000 teams competing and only 1% cheat, that 1% is 10 teams, enough to fill the entire top 10.
Yeah. I was happy to see this called out in their /about:
> Should I use AI to solve Advent of Code puzzles? No. If you send a friend to the gym on your behalf, would you expect to get stronger? Advent of Code puzzles are designed to be interesting for humans to solve - no consideration is made for whether AI can or cannot solve a puzzle. If you want practice prompting an AI, there are almost certainly better exercises elsewhere designed with that in mind.
> I don't get people for whom it's just about winning, I wish everyone would just have some basic form of dignity and respect.
reminds me of something I read in "I’m a high schooler. AI is demolishing my education." [0,1] emphasis added:
> During my sophomore year, I participated in my school’s debate team. I was excited to have a space outside the classroom where creativity, critical thinking, and intellectual rigor were valued and sharpened. I love the rush of building arguments from scratch. ChatGPT was released back in 2022, when I was a freshman, but the debate team weathered that first year without being overly influenced by the technology—at least as far as I could tell. But soon, AI took hold there as well. Many students avoided the technology and still stand against it, but it was impossible to ignore what we saw at competitions: chatbots being used for research and to construct arguments between rounds.
high school debate used to be an extracurricular thing students could do for fun. now they're using chatbots in order to generate arguments that the students can just regurgitate.
the end state of this seems like a variation on Dead Internet Theory - Team A is arguing the "pro" side of some issue, Team B is arguing the "con" side, but it's just an LLM generating talking points for both sides and the humans acting as mouthpieces. it still looks like a "debate" to an outside observer, but all the critical thinking has been stripped away.
> high school debate used to be an extracurricular thing students could do for fun.
High school debate has been ruthless for a long time, even before AI. For several years there has been a rise in techniques designed to abuse the rules and derail arguments. In some regions, debates have become more about teams leveraging the rules and technicalities against their opponents than about organically debating a subject.
It's a shame that the fun is being sucked out of debate, but I guess a silver lining is that the abuse of these tactics helps everyone understand that winning debates isn't about being correct, it's about being a good debater. A similar principle applies to law and public policy as well.
Why is that strange? Competitive programming, as the name suggests, is about competing. If the rules allow it, not using an LLM is actually more like running the Tour de France on foot.
If the rules don't allow it and people use LLMs anyway, then you need online qualifiers followed by onsite finals to pick the real winners. That was already necessary, because there are many other ways to cheat (like having more people on the team than allowed).
I'm a bit surprised you can honestly believe that a competition between humans isn't somehow different when they're allowed to use solution generators. Like using a calculator in an arithmetic competition. Really?
It's not much different than outlawing performance enhancing drugs. Or aimbots in competitive gaming. The point is to see what the limits of human performance are.
If an alien race came along and said "you will all die unless you beat us in the IEEE programming competition", I would be all for LLM use. If they challenged us to Go, we'd almost certainly use AI. Or chess: yeah, we'd be dumb not to use game solvers for that.
But that's not in the spirit of the competition if it's University of Michigan's use of Claude vs MIT's use of Claude vs ....
Imagine if the word "competition" meant "anything goes" automatically.
It's a different kind of fun. Just like doing math problems on paper can be fun, or writing code to do the math can be fun, or getting AI to write the code to do the math can be fun.
They're just different types of fun. The problem is if one type of fun is ruined by another.
It can be a matter of values from your upbringing or immediate environment. There are plenty of places where they value the results, not the journey, and they think that people who avoid cheating are chumps. Think about that: you are in a situation where you just want to do things for fun but everyone around you will disrespect you for not taking the easy way out.
Weirdly, I feel a lot more accepting of LLMs in this kind of environment than in making actual products. The point is doing things fast and correctly enough, so in some ways the LLM is just one more tool.
With products I want actual correctness, and not something that gets thrown away.
We’re starting to get to a point where the AI can generate better code than your average developer, though. Maybe not better than a great developer yet, but a lot of products are written by average developers.
Given what I understand about the nature of competitive programming competitions, using an LLM seems kind of like using a calculator in an arithmetic competition (if such a thing existed) or a dictionary in a spelling bee.
I feel like it’s more like using an electronic dictionary in a spelling bee that already allowed you to use a paper dictionary. All it really does is demonstrate that the format isn’t suited to be a competition in the first place.
Which is why I think it’s great they dropped the competitive part and have just made it an advent calendar. Much better that way.
These contests are about memorizing common patterns and banging out code quickly. Outsourcing that to an LLM defeats the point. You can say it's a stupid contest format, and that's fine.
(I did a couple of these in college, though we didn't practice outside of competition so we weren't especially good at it.)
When I did competitions like these at uni (~10-15 years ago), we all used thin clients in the computer lab, where the only webpages one could access were those allowed by the competition (mainly the submission portal). Some admins/organizers would feed us and make sure people didn't cheat. Maybe we need to get back to that setup, heh.
Serious in-person competitions like the ICPC are still effective against cheating. The first phase happens in a limited number of venues and the computers run a custom OS without internet access. There are many people watching, so competitors don't use their phones, etc.
The Regional Finals and World Finals are in a single venue with a very controlled environment. Just like the IOI and other major competitions.
National High School Olympiads have been dealing with bigger issues because there are too many participants in the first few phases, and usually the schools themselves host the exams. There has been rampant cheating. In my country I believe the organization has resorted to manually reviewing all submissions, but I can only see this getting increasingly less effective.
This year the Canadian Computing Competition didn't officially release the final results, which for me is the best solution:
> Normally, official results from the CCC would be released shortly after the contest. For this year’s contest, however, we will not be releasing official results. The reason for this is the significant number of students who violated the CCC Rules. In particular, it is clear that many students submitted code that they did not write themselves, relying instead on forbidden external help. As such, the reliability of “ranking” students would neither be equitable, fair, or accurate.
Online competitions are just hopeless. AtCoder and Codeforces have rules against AI but no way to enforce them. A minimally competent cheater is impossible to detect. Meta Hacker Cup has a long history and is backed by a large company, but had its leaderboard crowded by cheaters this year.
In 1997, Deep Blue beat Garry Kasparov, the world chess champion. Today, chess grandmasters stand no chance against Stockfish, a chess engine that can run on a cheap phone. Yet chess remains super popular and competitive today, and while there are occasional scandals, cheating seems to be mostly prevented.
I don’t see why competitive debate or programming would be different. (But I understand why a fair global leaderboard for AOC is no longer feasible).
Oof. I had a great time cracking the top 100 of Advent of Code back in 2020. Bittersweet to know that I got in while it was still a fun challenge for humans.
> For coding, you can use AI to write your code. For software engineering, you can't.
You can 100% use AI for software engineering. Just not by itself; currently you need to be quite engaged in the process to check it and redirect it.
But AI lowers the barrier to writing code, and thus it brings people with less rigour into the field, and they can do a lot of damage. But it isn't significantly different from how programming languages made coding more accessible than assembly, and I am sure that also allowed more people to cause damage.
You can use any tools you want, but you have to be rigorous about it no matter the tool.
> For coding, you can use AI to write your code. For software engineering, you can't.
This is a pretty common sentiment.
I think it equates using AI with vibe-coding, having AI write code without human review.
I'd suggest amending your rule to this:
> For coding, you can use AI. For software engineering, you can't.
You can use AI in a process compatible with software engineering: prompt it carefully to generate a draft, then have a human review and rework it as needed before committing. If the AI-written code is poorly architected or redundant, the human can use the same AI to refactor and shape it.
Now, you can say this negates the productivity gains. It will necessarily negate some. My point is that the result is comparable to human-written software (such as it is).
I absolutely don't care about how people generate code, but they are responsible for every single line they push for review or merge.
That's my policy at each of my clients and it works fine: if AI makes something simpler or faster, good for the author, but there's zero, none, no excuse for pushing slop or code you haven't reviewed and tested yourself thoroughly.
If somebody thinks they can offload not just authoring or editing code, but also taking responsibility for it and for its impact on the whole codebase and the underlying business problem, they should be jobless ASAP. They are de facto delegating the entirety of their job to a machine; they are not only providing zero value, but in fact negative value.
Totally agree. For me, the hard part has been figuring out the distinction with junior engineers... Is this solution poorly thought out, inefficient, and 3x as long as necessary due to AI, or due to inexperience?
Not defending him, but we were already doing this with electron apps, frameworks, libraries, and scripting languages. The only meaningful cost in most software development is labor and that’s what makes sense to optimize. I’d rather have good software, but I’ll take badly made software for free over great software that costs more than the value of the problem solved.
These discussions are always about tactics and never operations.
Code is liability. LLM written PRs often bring net negative value: they make the whole system larger, more brittle, and less integrated. They come at the cost of end user quality and maintainer velocity.
Is that not also true of human written software that costs more per hour than the monthly cost of a coding agent? Developers are expected to ship enterprise software with defects that would land you in court if you made equivalent mistakes designing a water treatment plant or bridge.
I get the “AI sucks” argument from a programmer's point of view. It’s weird-looking and doesn’t care about “code smells” or about rearranging the code base’s deck chairs just the way you like.
From an owner’s or client’s perspective, human programmers suck. You want a bog-standard CRUD app? Like a baby’s first Django app? That’s going to take at least 6 months for some reason. They don’t understand your problem domain and don’t care enough to learn it. They work 15 minutes on the hour, spend 45 on social media or games, and bill you $200/hr. They “pair program” for “quality” to double their billed rate for the same product. They bill you for interns learning how to do their job on your dime. On top of that there is still a very good chance the whole project will just be a failure. Or I can pay Anthropic $20/month and text an AI requirements on my phone when I’ve got 5 minutes of down time. If it doesn’t work, I just make a new one and try again.
Even if progress on AI stopped today, the world is now so much better for consumers of programs. Maybe not for developers, unless you’re writing the AI and getting paid in the millions. Good for them. I’m glad to see the $200/hr Stack Overflow copy-and-pasters go do something else.
> Is that not also true of human written software that costs more per hour than the monthly cost of a coding agent?
The difference is that a human can learn and grow.
From your examples, it sounds like we're talking about completely different applications of code. I'm a software engineer who is responding to the original topic of reviewing PRs full of LLM slop. It sounds like you are a hobbyist who uses LLMs to vibe code personal apps. Your use case is, frankly, exactly what LLMs should be used for. It's analogous to how using a consumer grade 3d printer to make toys for your kids is fine, but nobody would want to be on the hook for maintaining full scale structural systems that were printed the same way.
In this analogy, though, someone else designed a device or several devices, is printing them on a 3D printer and selling them online, and is making an alright living through that.
I get it, but I think there’s something deeply anti human about being ok with this (not just in software). It’s similar in sentiment to how you behave when nobody is looking - a culture and society is so much better off if people take pride in their work.
Obviously there’s nuance (I’ll take slop food for starving people over a healthy meal for a limited few if we’re forced to choose), but the perverse incentives in society start to take over if we allow ourselves to be ok with slop. Continuously chasing the bottom of the barrel makes it impossible for high quality to exist for anyone except the rich.
Put another way: if we as a society said “it is illegal to make slop food”, both the poor and the rich would have easy access to healthy food. The cost here would be borne by the rich, as they profit off food production and thus would profit less to keep quality high.
I’m pretty sure the USSR, Cuba, and the like never succeeded doing this sort of thing, but maybe if we hit ourselves in the head with the same hammer (and sickle) just one more time it will work?
Absolutely. In areas where there are known quality options, people are clearly willing to pay more. Toyota for instance is a solid example of this.
Automobiles are large, expensive purchases with a relatively small set of options though... For most purchases, it's impossible to determine quality ahead of time.
It's not easy to be a junior, and we might be speaking with survivor bias, but most juniors don't end up on solid engineering teams. They are merely developers who are much cheaper and from whom you expect much less, but more often than not they are left learning and figuring things out on their own. They need to luck into some senior member who will nurture them and not just give them low-quality work (which I admit I have done too, when I was under a lot of pressure to deliver my own stuff).
Even in less desperate teams, as productivity grows with AI (mine does; even if I don't author code with it, it's a tremendous help in just navigating repos and connecting the dots, and it saves me so much time...), the reviewing pressure increases too, and with it the fatigue.
It does matter, because it's a worthwhile investment of my time to deeply review, understand, and provide feedback for the work of a junior engineer on my team. That human being can learn and grow.
It is not a worthwhile use of my time to similarly "coach" LLM slop.
The classic challenge with junior engineers is that helping them ship something is often more work than just doing it yourself. I'm willing to do that extra work for a human.
I disagree with the new rule. The old one is fine and applies to LLMs.
Vibing and good enough is a terrible combination, as unknown elements of the system grow at a faster rate than ever.
Using LLMs while understanding every change and retaining a mental model of the system is fine.
Granted, I see vibe+ignorance way too often as it is the short-term path of least resistance in the current climate of RTO and bums in seats and grind and ever more features.
Humans can and do make mistakes all the time. LLMs can automate most of the boring stuff, including unit tests with 100% coverage. They can cover edge cases you ask them to and they can even come up with edge cases you may not have thought about. This leaves you to do the review.
I think the underlying problem people have is that they don't trust themselves to review code written by others as much as they trust themselves to implement the code from scratch. Realistically, a very small subset of developers do actual "engineering" to the level of NASA / aerospace. Most of us just have inflated egos.
I see no problem modelling the problem, defining the components, interfaces, APIs, data structures, algorithms and letting the LLM fill in the implementation and the testing. Well designed interfaces are easy to test anyway and you can tell at a glance if it covered the important cases. It can make mistakes, but so would I. I may overlook something when reviewing, but the same thing often happens when people work together. Personally I'd rather do architecture and review at a significantly improved speed than gloat I handcrafted each loop and branch as if that somehow makes the result safer or faster (exceptions apply, ymmv).
No, that's not it. The difference between humans and AI is that AI suffers no embarrassment or shame when it makes mistakes, and the humans enthusiastically using AI don't seem to either. Most humans experience a quick and visceral deterrent when they publish sloppy code and mistakes are discovered. AI, not at all. It does not immediately learn from its mistakes like most humans do.
In the rare case where there is a human who is consistently, persistently, confidently wrong like AI, a project can identify that person and easily stop wasting time working with them. With masses of people being told by the vocal AI shills how amazing AI is, projects can easily be flooded with confidently wrong AI-generated PRs.
> LLMs can automate most of the boring stuff, including unit tests with 100% coverage. They can cover edge cases you ask them to and they can even come up with edge cases you may not have thought about. This leaves you to do the review.
in my experience these tests don't test anything useful
you may have 100% test coverage, but it's almost entirely useless, not testing the actual desired behaviour of the system
If unit tests are boring chores for you, or 100% coverage is somehow a goal in itself, then your understanding of quality software development is quite lacking overall. Tests are specifications: they define behavior, set boundaries, and keep the inevitable growth of complexity under control. Good tests are what keep a competent developer sane. You cannot build quality software without starting from tests. So if tests bore you, the problem is your approach to engineering. Mature developers don't get bored chasing 100% coverage; they focus on meaningful tests that actually describe how the program is supposed to work.
> Tests are specifications: they define behavior, set boundaries, and keep the inevitable growth of complexity under control.
I set boundaries during design where I choose responsibilities, interfaces and names. Red Green Refactor is very useful for beginners who would otherwise define boundaries that are difficult to test and maintain.
I design components that are small and focused so their APIs are simple and unit tests are incredibly easy to define and implement, usually parametrized. Unit tests don't keep me "sane", they keep me sleeping well at night because designing doesn't drive me mad. They don't define how the "program" is supposed to work, they define how the unit is supposed to work. The smaller the unit the simpler the test. I hope you agree: simple is better than complex. And no, I don't subscribe to "you only need integration tests".
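To illustrate, here's a minimal sketch of the kind of parametrized unit test I mean (a hypothetical clamp unit, tested with pytest; not code from any project discussed here):

    import pytest

    def clamp(value: float, low: float, high: float) -> float:
        """Constrain value to the inclusive range [low, high]."""
        return max(low, min(value, high))

    @pytest.mark.parametrize("value, expected", [
        (-1.0, 0.0),  # below the range
        (0.5, 0.5),   # inside the range
        (2.0, 1.0),   # above the range
        (0.0, 0.0),   # lower boundary
        (1.0, 1.0),   # upper boundary
    ])
    def test_clamp(value, expected):
        assert clamp(value, 0.0, 1.0) == expected

The smaller and more focused the unit, the more its test reads like a table of its specification.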
Otherwise, nice battery of ad hominems you managed to slip in: my understanding of quality software is lacking, my problem is my approach to engineering and I'm an immature developer. All that from "LLMs can automate most of the boring stuff, including unit tests with 100% coverage." because you can't fathom how someone can design quality software without TDD, and you can't steelman my argument (even though it's recommended in the guidelines [1]). I do review and correct the LLM output. I almost always ask it for specific test cases to be implemented. I also enjoy seeing most basic test cases and most edge cases covered. And no, I don't particularly enjoy writing factories, setups, and asserts. I'm pretty happy to review them.
[1] https://news.ycombinator.com/newsguidelines.html: "Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith."
I don’t think this is uncommon. At one point Lemmy was a project with thousands of stars and literally no working code until finally someone other than the owner adopted it and merged in a usable product.
Wow, and if you go to the website listed in their profile, not only do almost none of the links work, the one that did just linked out to the generic template it was copied straight from. Wow.
Yeah, either this guy's totally insane, or it could even be an AI skeptic who's flooding projects with really dumb PRs just to show the risks and make people skeptical about the use of AI in open source. (Puts on my tinfoil hat.)
that's a grifter doing grifting. there was a thread on /g/ about this guy the other day; anons dug up much of his past as a failure/grifter in many areas, running away with the money at the first problem
Looking at his history here on HN, he started out in the poker world. I'm not sure if he played, but he wrote a poker engine or something. In my experience, the Venn diagram of professional poker players, crypto enthusiasts, and grifters has a lot of overlap.
But for this guy specifically there's practically complete radio silence during the crypto era. It's only recently with all the AI noise that he's become active here on HN again.
function estimate_method_targets(func_name::Symbol, types::Tuple)
# Conservative estimate
# In a real implementation, we'd query the method table
return 2 # Assume multiple possibilities
end
Hilarious. Was this model trained on XKCD [0] by any chance?
Among all the other problems with this... They describe [1] their contributions as "steering the AI" and "keeping it honest", which evidently they did not do.
As an aside, he originally titled the thread "A complete guide to building static binaries with Julia (updated for 1.12)", with no mention of AI. That got me excited every time I opened the Discourse, until I remembered it was this slop. :/
A lot of people are criticising this guy, but we all benefit from having an example to show people: this, please don't do what this guy is doing. Please read the generated code, understand it, edit it, and then submit it.
If anyone’s answer to “why does your PR do this” is “I don’t know, the AI did it and I didn’t question it” then they need a time out.
The biggest mistake, AI or not, is dropping a 10K+ line PR. 300~500 LOC is as far as one should go, unless they're doing some automated refactoring, e.g. formatting the entire StaticCompiler.jl source. That should've been a distinct PR, preferably by a maintainer.
It could first judge whether the PR is frivolous, then try to review it, then flag a human if necessary.
The problem is that GitHub, or whatever system hosts the process, should actively prevent projects from being DDoS-ed with PR reviews, since using AI costs real money.
It's been stated like a consultant giving architectural advice. The problem is that it is socially acceptable to use LLMs for absolutely anything, and in bulk. Before, you strove to live up to your own standards and people valued authenticity. Now it seems like we are all striving for the holy grail of conventional software engineering: The Average.
It is absolutely not socially acceptable, and it's getting tiring to see people like yourself blithely declare that it is. Maybe it's socially acceptable in your particular circles to not give a single shit, take no pride in the slop you throw at people, and expect them to wade through it no questions asked? But not for the rest of us.
Maybe I didn't state my point clearly. That was a comment about an experience I had earlier here on HN: someone was asked whether or not they'd used AI to write something, and their response was "why not use it if it's better than my own?" If that is the reasoning people give, and they are not self-aware enough to be embarrassed about it, I think it must mean that there are a lot of people who think like that.
This isn't just "making mistakes." It's so profoundly obnoxious that I can't imagine what you've actually been doing during your apparently 30 years of experience as a software developer, such that you somehow didn't understand, or don't, why submitting these PRs is completely unacceptable.
The breezy "challenge me on this" and "it's just a proof of concept" remarks are infuriating. Pull requests are not conversation starters. They aren't for promoting something you think people should think about. The self-absorption and self-indulgence beggar belief.
Your homepage repeatedly says you're open to work and want someone to hire you. I can't imagine anybody looking at those PRs or your behavior in the discussions and concluding that you'd be a good addition to a team.
The cluelessness is mind-boggling.
It's so bad that I'm inclined to wonder whether you really are human -- or whether you're someone's stealthy, dishonest LLM experiment.
I'll one-up you: at this point I'm becoming pretty sure that this is a person who actually hates LLMs and is trying to poison the well by giving other people reasons to hate LLMs too.
Ah, I remember that guy. Joel. He sold his poker server and bragged about it around HN a long time ago. He's been too much of a PR-stunt guy recently. Unfortunately, AI does not lead to people being nice in the end. The way people abuse other people using AI is crazy. Kudos to the OCaml maintainers for giving him a proper but polite f-off response.
I agree that's a funny coincidence. But, what about the change it wanted to commit? It is at least slightly interesting. It is doubly interesting that changing line 638 neither breaks nor fixes any test.
Even after the public call-outs you keep dropping blatant ads for your blog and AI in general in your PRs; there's no other word for them than ads. This is why I blocked you on the OCaml forum already.
When I was a kid, every year I'd get so obsessed about Christmas toys that the hype would fill my thoughts to the point I'd feel dizzy and throw up. I genuinely think you're going through the adult version of that: your guts might be ok but your mind is so filled with hype that you're losing self-awareness.
I assume he's well-meaning because I'd been seeing his posts in the OCaml forums for a few months before this pattern of his started, but he's suddenly acting irrationally and doubling down. There's no way I can put that nicely, plus he's since deleted his strongest reactions, so you can't fully grasp it either.
This is not my first interaction with him outside HN; I already talked to him privately when this was starting to unfold in the OCaml community. Then I gave up, flagged his ads for the mods, and blocked him for a few months, but I've kept encountering his drama on GitHub and HN.
I've been following as well.
As much as I disagree with it, I appreciate that he has the audacity to propose this "all in" attitude.
I'm a bit worried by the amount of anger, which at times borders on doxing. I'd like this community to behave better.
I don't think doxing means what you think it means but whatever. I also expected more of him than to spam ads, waste the time of maintainers and mods on multiple communities, and reply passive-aggressively to anyone asking him to tone it down.
Good faith includes the ability to respect and listen to the other person (or in this case, multiple other people who have been working on these projects for years). The attitude he demonstrates has been and remains (this is a quote from him elsewhere on this thread):
> existing open source projects are not ready for this and likely won't ever be.
i.e. he is enlightened and these projects are just not seeing The Way™.
Good faith doesn't mean you can waste people's time. It's the reason why patents for perpetual motion machines are banned in most countries. The inventors often genuinely believe they've done it and want to help people, and yet it just wastes people's time having to review them.
In either case, I'd argue it is no longer good faith if, when asked to stop, you continue and do not learn from your peers.
Consider that they're trained to respond to the prompt you entered. If you enter "I'm in Czechia. Please tell me a local LLM model.", of course you expect it to say something about Czechia, since you queried it that way. Now imagine the first sentence is in the system prompt instead: same thing.
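A minimal sketch of that equivalence (using the OpenAI-style chat API; the model name is illustrative, and any chat-style API behaves the same way):

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # Location stated in the user prompt...
    resp_user = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user",
                   "content": "I'm in Czechia. Please tell me a local LLM model."}],
    )

    # ...versus location injected via the system prompt. Either way,
    # "Czechia" is part of the input the model conditions its answer on.
    resp_system = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "system", "content": "The user is located in Czechia."},
                  {"role": "user", "content": "Please tell me a local LLM model."}],
    )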