
I was wondering why uBlock is not enough, since you can block JavaScript globally and re-enable it per site. AI's answer:

Only things uBlock doesn’t replicate:

NoScript’s anti-XSS and anti-clickjacking heuristics (uBlock just blocks the sources; it doesn't sanitize payloads).

NoScript’s control over other active content types (e.g., WebGL, media codecs, etc).
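For reference, the block-globally / re-enable-per-site setup looks like this in uBlock's "My rules" pane ( example.com is a placeholder ):

    no-scripting: * true
    no-scripting: example.com false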


NoScript also has a much better UI for this -- with uBlock it doesn't look nearly as easy to pull up a drop-down list of scripts and selectively enable them the way NS allows.


The complex objects are conceptually similar to prime numbers.


cool. There is also about:performance with CPU & memory usage for each tab / addon. Hmm, it looks like all extensions are combined into a single entry. So, maybe not per addon.


naaa, you just run "unfuck it" in a loop..


Try asking ChatGPT to solve a captcha for you ( character recognition in a foreign language ). AI Studio doesn't refuse.


SponsorBlock needs an AI version. Currently it relies on users.


>I know multiple agencies who cut staff by 80-90% without dropping revenue.

I'm finding it hard to reconcile this with my own experiences. My whole team ( 5 people ) left last year ( for better pay, I guess ) and the marketing agency in Germany I'm working for had to replace them with freelancers. To offset the cost they fired the one guy who was hired to push the whole LLM/AI topic. We managed to fill one junior position by offering 10k+ more than their last job paid. The firm would love to hire people to replace the freelancers. We did have to cut some things lately, but mostly they closed the kitchen, which wasn't used due to the work-from-home policy. I definitely don't see any staff reduction due to automation / LLM use. They still pay (external) people 60€ per written text/article, because clients don't like LLM-written stuff.


Take the computer which beats Magnus and restrain it so it never makes the best move in a position. Expand this to the N best moves as needed to reach a 1300 rating.
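A minimal sketch of that idea with python-chess and Stockfish ( N is just a knob to tune, not a known mapping to a 1300 rating; assumes a stockfish binary on PATH ):

    # Pick a move at random from the engine's candidates, excluding the top N.
    import random
    import chess
    import chess.engine

    N = 3  # how many of the engine's best moves to forbid (tune to taste)

    def weakened_move(board, engine):
        # Ask for more principal variations than we plan to exclude.
        infos = engine.analyse(board, chess.engine.Limit(depth=12), multipv=N + 5)
        moves = [info["pv"][0] for info in infos]
        candidates = moves[N:] or moves[-1:]  # fallback when few legal moves exist
        return random.choice(candidates)

    engine = chess.engine.SimpleEngine.popen_uci("stockfish")
    board = chess.Board()
    print(weakened_move(board, engine))
    engine.quit()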


Even 1300s sometimes make the best move. Sometimes the best move is really easy to see or even mandatory, like if you are in check and MUST take that checking piece. Sometimes the best move is only obvious if you can look 20 moves ahead. Sometimes the best move is only obvious if you can look 5 moves ahead, but the line is so forcing that even 1300s can look that far ahead.

Despite decades of research, nobody has found a good way to make computers play like humans.


Then I can't refrain from asking: what's the style of LLMs? For example ChatGPT, which is apparently rated around 1800? That should be completely different from that of a classic chess engine.


LLMs can be trained on chess games, but the tree of possible board states branches so fast that for any given position there is simply very little training data available. Even the billions of games played on chess.com and lichess are only a drop in the bucket compared to how many possible board states there are. This would have to be split further by rating range, so the number of games for any given rating range would be even lower.

This means that the LLM does not actually have a lot of training data available to learn how a 1300 would play, and subsequently does a poor job of imitating it. There are a bunch of papers available online if you want more info.
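To put rough numbers on that ( all figures assumed, order-of-magnitude only ):

    # Back-of-envelope comparison, with assumed figures:
    games = 5e9           # "billions of games" on lichess + chess.com
    plies_per_game = 80   # a typical game is ~40 moves, i.e. ~80 positions
    seen = games * plies_per_game   # ~4e11 positions with any training signal
    legal = 4.8e44                  # Tromp's estimate of legal chess positions
    print(f"{seen / legal:.0e}")    # ~8e-34 -- a vanishing fraction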


LLMs already do play at Elo ~1400-1800. The question was what their style feels like to someone who can appreciate the difference between a human player and a chess engine (and the different styles of different human players).


I can’t speak for ChatGPT, but your intuition is correct that LLMs tend to play more like “humans” than Stockfish or other semi-brute force approaches.


ChatGPT will hallucinate and make impossible/invalid moves frequently, so I don't see how it could have a chess rating.


That's not the case. Depending on the version, (Chat)GPT seems to be able to play between ~1400 and ~1800 Elo, very rarely making invalid moves.


You've identified a potential strategy by which a computer can play like a 1300-rated player, but not one where it will "play like a 1300-rated human". Patzers can still find and play moves from your excluded set of N (if only by blind chance).


Yeah, you would have to weight the moves based on how "obvious" each one is, such as how active the piece has been, how many turns until it leads to winning material, or other such 'bad habits' humans fall for.
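A crude sketch of that weighting ( this only models the eval gap between moves; piece-activity and other human-bias features would need extra terms, and the temperature value is made up ):

    # Sample a move with probability proportional to exp(score / T):
    # high T flattens the distribution toward random 1300-ish noise.
    import math, random

    def sample_move(candidates, temperature=150.0):
        # candidates: list of (move, centipawn_score) pairs
        best = max(score for _, score in candidates)
        weights = [math.exp((score - best) / temperature) for _, score in candidates]
        return random.choices([move for move, _ in candidates], weights=weights)[0]

    print(sample_move([("Nf3", 30), ("e4", 25), ("h4", -80)]))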


This won't work. With that strategy, you can make a computer play like a 1300-rated player, but not like a 1300-rated human.


That's kind of what they do for "training" bots and it produces something which plays NOTHING like a 1300-rated human.


I assume you could just give the computer a large set of 1300-rated games and train it to predict moves from that set :)


I think there's a real difference between "a computer" -- in this context meaning an algorithm written by a human, possibly calibrated with a small number of parameters but not trained in any meaningful sense -- and a "chess model" which works as you describe.

I think the chess model would be successful at producing the desired outcome but it's not as interesting. There's something to be said for being able to write down in precise terms how to play imperfectly in a manner that feels like a single cohesive intelligence strategizing against you.


I managed to delete all records in a table a week ago ( I blame Copilot ). Used time travel ( not quite inter-dimensional travel ) in BigQuery to restore: INSERT INTO ... SELECT * FROM ... FOR SYSTEM_TIME AS OF TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 HOUR)
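Spelled out ( mydataset.mytable is a placeholder name ):

    -- Re-insert the rows as they existed one hour ago (BigQuery time travel).
    INSERT INTO mydataset.mytable
    SELECT *
    FROM mydataset.mytable
      FOR SYSTEM_TIME AS OF TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 HOUR);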


Would the snapshot contain the same info ( beyond any doubt ) that an actual user would see if they opened LinkedIn/Facebook/whatever service from Canada on an iPhone on a Saturday morning (for example)? If not, the snapshot is useless for some use cases and we are back to scraping.

