
FWIW Mojeek (an organic search engine in the classic sense) can do this with the before: operator.

https://www.mojeek.com/search?q=britney+spears+before%3A2010...


I deleted my Facebook account a couple of years ago and my Twitter one yesterday.

It's not just LLMs, it's how the algorithms promote engagement, e.g. rage bait, videos with obvious inaccuracies, etc. Who gets rewarded? The content creators and the platform. Engaging with it just seems to accentuate the problem.

There need to be algorithms that promote cohort and individual preferences.

Just because I said to someone 'Brexit was dumb', I don't expect to get fed 1000 accounts talking about it 24/7. It's tedious and unproductive.


> It's not just LLMs, it's how the algorithms promote engagement, e.g. rage bait, videos with obvious inaccuracies, etc.

I guess, but I'm on quite a few "algorithm-free" forums where the same thing happens. I think it's just human nature. The reason it's under control on HN is rigorous moderation; when the moderators are asleep, you often see dubious political stuff bubble up. And in the comments, there's often a fair amount of patently incorrect takes and vitriol.


On HN everybody sees the same ordering. Therefore you get to read opinions that are not specifically selected to make you feel just the perfect amount of outrage/self-righteousness.

Some of that you may experience as 'dubious political stuff' and 'patently incorrect takes'.

Edit, just to be clear: I'm not saying HN should be unmoderated.


Yeah, this is a critical difference: most of the issues are sidestepped because everyone knows nobody can force a custom frontpage tailored to a specific reader.

So there’s no reason to try a lot of the tricks and schemes that scoundrels might have elsewhere, even if those same scoundrels also have HN accounts.


The front page is managed extensively on HN, so is this an argument for stronger moderation?


I think there's an important distinction between strict moderation and curation, but in general yes I'd agree.


Only when certain people don't decide to band together and hide posts from everyone's feed by abusing the "flag" function. Coincidentally, those posts often fit neatly into the categories you outlined.


Abuse of the flagging system is probably one of the worst problems currently facing HN. It looks like mods might be trying to do something about it, as I've occasionally seen improperly-flagged posts get resuscitated, but it appears to take manual action by moderators, and by the time they get to it, the damage is done: The article was censored off the front page.


Even with the addition of tomhow, they are clearly stretched too thin to make any meaningful impact. Their official answer to this issue, by the way, is to point out that you can email them to elicit this manual action, which if you ask me is a fucking joke. It clearly shows that the mammoth-aged stack this site is written in, and the lack of resources allocated to its support, are having a massive impact on their ability to keep up with massive traffic. But then again, this site only exists to funnel attention to YC's startups, and that is something you need to keep in mind when trying to answer any questions about its current state.


It seems really small compared to reddit.

I don't think I've ever downvoted anyone on Hacker News - it just does not seem important.

On reddit, on the other hand, I just had to downvote wrong opinions. This works to some extent, until moderators interfere and ban you. That part actually made me stop using reddit, in particular after someone made a complaint and I got banned for a few days. I objected and the moderators of course did not respond. I cannot accept random moderators just chiming in arbitrarily and flagging "this comment you made is a threat" when it clearly was not. But you cannot really argue with reddit moderators.


You can’t get banned just for downvoting. Nobody can see someone else’s voting history. You buried the lede: you were banned for your comments, not for your voting activity.


I don’t know why this is being downvoted, I’ve witnessed it many times myself.

It’s true that HN has a good level of discussion but one of the methods used to get that is to remove conversation on controversial topics. So I’m skeptical this is a model that could fit all of society’s needs, to say the least.


The comment consists of criticism on flagging behavior. Though it might have a point, it seems only vaguely related to its parent comment about non-personalized ordering.


In downvoting it, they are proving me right. For posterity, there is a mastodon account [0] collecting flagged posts in an easily digestible form, it really does paint a certain picture if you ask me.

[0] https://mastodon.social/@hn_flagged


The DOGE topics are a perfect example. HN users are uniquely placed to provide useful perspectives on DOGE but it gets flagged very regularly.


I want to agree with this. Maybe OP is young or didn't frequent other communities before "social networks", but on IRC, even on Usenet you'd see these behaviors eventually.

Since they are relatively open, at some point someone comes in who doesn't care about anything or is extremely vocal about something and... there goes the nice forum.


MySpace was quite literally my space. You could basically make a custom website with a framework that included socialisation. But mostly it was just GeoCities for those who might only want to learn HTML. So it was a creative canvas with a palette.


>Maybe OP is young or didn't frequent other communities before "social networks", but on IRC, even on Usenet you'd see these behaviors eventually.

I was too young for IRC/Usenet and started using the net/web in the late 90s, frequenting some forums. Agreed that anyone can come in and upset the balance.

I'd say the difference is that on the open web, you're free to discover and participate in those social settings for the most part. With everything being so centralised and behind an algorithm the things you're presented are more 'push' than 'pull'.


Right, but that’s slightly different.

I think the nuance here is that with algorithmic based outrage, the outrage is often very narrow and targeted to play on your individual belief system. It will seek out your fringe beliefs and use that against you in the name of engagement.

Compare that to a typical flame war on HN (before the mods step in) or IRC.

On HN/IRC it’s pretty easy to identify when there are people riling up the crowd. And they aren’t doing it to seek out your engagement.

On Facebook, etc, they give you the impression that the individuals riling up the crowd are actually the majority of people, rather than a loud minority.

There's a big difference between consuming controversial content from people you believe are a loud minority vs. controversial content from what you believe is a majority of people.


Or if the moderation was good someone would go “nope, take that bullshit elsewhere” and kick them out, followed by everyone getting on with their lives. It wasn’t obligatory for communities to be cesspits.


> Maybe OP is young or didn't frequent other communities before "social networks", but on IRC, even on Usenet you'd see these behaviors eventually

I’m not exactly old yet, but I agree. I don’t know how so many people became convinced that online interactions were pleasant and free of ragebait and propaganda prior to Facebook.

A lot of the old internet spaces were toxic cesspools. Most of my favorite forums eventually succumbed to ragebait and low effort content.


>pleasant and free of ragebait and propaganda

Most people are putting forth an argument of pervasiveness and scale, not existence.


I remember a thread a while ago where someone was claiming that Hacker News comments were much more civilized and on topic in the early days.

So someone pulled up Wayback Machine archives of random dates for HN pages. The comments were full of garbage, flame wars, confidently incorrect statements, off topic rants, and all the other things that people complain about today.

It was the same thing, maybe even slightly worse, just in a different era.

I think the people who imagine that social media is worse today either didn’t participate in much online socialization years ago or have blocked out the bad parts from their memory.


I suppose more than a few of us olds remember Serdar Argic's attempts to redefine the Armenian genocide on Usenet.

https://en.wikipedia.org/wiki/Serdar_Argic

But Serdar was relatively easy to ignore, because it was just one account, and it wasn't pushed on everyone via an algorithm designed to leverage outrage to make more money for one of the world's billionaires. You're right: pervasiveness and scale make a significant difference.


When video games first started taking advantage of behavioral reward schedules (e.g. Skinner box stuff such as loot crates & random drops), I noticed it and would discuss it among friends. We had a colloquial name for the joke: we called them "crack points" (i.e., like the drug). For instance, the random drops that happen in a game like Diablo 2 are rewarding in very much the same way that a slot machine is rewarding. There's a variable ratio of reward, and the bit that's addicting is that you don't know when the next "hit" will be, so you just keep pulling the lever (in the case of a slot machine) or doing boss runs (in the case of Diablo 2).

We were three friends: a psychology major, a recovering addict, and then a third friend with no background in how these sorts of behavioral addictions might work. Our third friend really didn't "get it" on a fundamental level. If any game had anything like a scoreboard, or a reward for input, he'd say "it's crack points!" We'd roll our eyes a bit, but it was clear that he didn't understand that certain reward schedules had a very large effect on behavior, and that not everything with some sort of identifiable reward was actually capable of producing behavioral addiction.

I think of this a lot on HN. People on HN will identify some surface similarity, and then blithely comment "see, this is nothing new, you're either misguided or engaged in some moral panic." I'm not sure what the answer is, but if you cannot see how an algorithmic, permanently-scrolling feed differs from people being rude in the old forums, then I'm not sure what would paint the picture for you. They're very different, and just because they might share some core similarity does not actually mean they operate the same way or have the same effects.


Thanks for this. I didn't realize until you said it why this issue might not be observable to a certain group of people. I think this is a cognitive awareness issue. You can't really see it until you have an awareness of it through experience. I came from a drug abuse background, and my wife, who was never involved in the level of addiction I was, has a hard time seeing how algorithms like this affect behavior.


I think you touch on the crux of the issue here: that education is one of the most potent defenses against this kind of psychological manipulation.

But not just any education. The humanities side of things, which is focused on the foundations of thought, morality, and human psychology.

These things are sadly lacking in technical degrees and it shows.

It's also IMO why we see the destruction of our education systems as a whole as an element of control over society.


>If any game had anything like a scoreboard, or a reward for input, he'd say "it's crack points!"

I don't think it's exactly wrong; you just have to look at it on a spectrum from minimal addictiveness to meth-level addiction. For example, in quarter-fed arcade games, getting a high score displayed to others was quite the addictive behavior.


I think if you look at it this way then "addiction" is just the far end of spectrum that includes any repeated behaviors whatsoever.


Why not? The term spectrum can go from 'not harmful at all' to 'kills you really damned fast'. Life isn't black and white, and there are very wide ranges across people.


I would be intrigued by using an LLM to detect content like this and hold it for moderation. The elevator pitch would be training an LLM to be the moderator because that's what people want to hear, but it's most likely going to end up a moderator's assistant.
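A minimal sketch of the moderator's-assistant version, using the OpenAI Python client purely as a placeholder (any model provider would do); the prompt, model name, and threshold are made up, and a real deployment would need to validate the model's output:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    PROMPT = (
        "Rate the following forum comment from 0 to 10 for how likely it is "
        "to be rage bait, a personal attack, or confidently false. "
        "Reply with the number only.\n\nComment:\n{comment}"
    )

    def review_score(comment: str) -> float:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model choice
            messages=[{"role": "user",
                       "content": PROMPT.format(comment=comment)}],
        )
        # Models don't always return a clean number; production code
        # would validate this instead of trusting float().
        return float(resp.choices[0].message.content.strip())

    def hold_for_review(comments: list[str], threshold: float = 7.0) -> list[str]:
        # The assistant's whole job: queue the worst candidates for a human,
        # rather than acting as the moderator itself.
        return [c for c in comments if review_score(c) >= threshold]

The human moderator stays in the loop; the model only triages.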


I think the curation of all media content using your own LLM that has been tuned using your own custom criteria _must_ become the future of media.

We've long done this personally at the level of a TV news network, magazine, newspaper, or website -- choosing info sources that were curated and shaped by gatekeeper editors. But with the demise of curated news, it's becoming necessary for each of us to somehow filter the myriad individual info sources ourselves. Ideally this will be done using a method smart enough to take our instructions and route only approved content to us, while explaining what was approved/denied and being capable of being corrected and updated. Ergo, the LLM-based custom configured personal news gateway is born.

Of course the criteria driving your 'smart' info filter could be much more clever than allowing all content from specific writers. It could review each piece for myriad strengths/weaknesses (originality, creativity, novel info, surprise factor, counter intuitiveness, trustworthiness, how well referenced, etc) so that this LLM News Curator could reliably deliver a mix of INTERESTING content rather than the repetitively predictable pablum that editor-curated media prefers to serve up.
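A minimal sketch of what that user-owned curator's config could look like, with entirely made-up criteria and weights; the point is that the filter belongs to the reader, is inspectable, and returns the "why" alongside each approve/deny decision:

    # Hypothetical per-user curation config; an LLM (or anything else)
    # supplies score_fn(article, criterion) -> 0..10.
    CRITERIA = {
        "originality": 2.0,
        "novel_info": 1.5,
        "trustworthiness": 3.0,
        "surprise_factor": 1.0,
    }
    THRESHOLD = 6.0

    def curate(article: str, score_fn) -> tuple[bool, dict]:
        scores = {c: score_fn(article, c) for c in CRITERIA}
        weighted = sum(scores[c] * w for c, w in CRITERIA.items())
        total = weighted / sum(CRITERIA.values())
        # Approved/denied plus per-criterion scores, so the user can see
        # what was filtered and correct the config when it gets it wrong.
        return total >= THRESHOLD, scores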


That's the government regulation I want, but it's probably not the government regulation we will get, because both major constituencies have a vested interest in forcing their viewpoints on people. Then there's the endless pablum hitting both sides, giving us important vital cutting-edge updates about influencers and reality TV stars whether we want to hear about them or not...

We say we want to win the AI arms race with China, but instead of educating our people about the pros and cons of AI as well as STEM, we know more than we want to know about Kim Kardashian's law degree misadventures and her belief that we faked the moon landing.


It would just become part of the shitshow, cf. Grok.


Which is why you should cancel your Twitter account unless you're on the same page with the guy who owns it, but I digress.

If a site wants to cancel any ideology's viewpoint, that site is the one paying the bills and they should have the right to do it. You as a customer have a right to not use that site. The problem is that most of the business currently is a couple of social media sites, and the great Mastodon diaspora never really happened.

Edit: why do some people think it is their god-given right, to be enforced with government regulation, to push their viewpoints into my feed? If I want to hear what you guys have your knickers in a bunch about today, I will seek it out. This is the classic difference between push and pull, and push is rarely a good idea.

My social media feeds had been reduced to about 30% political crap, 20% things I wanted to hear about, and about 50% ads for something I had either bought in the deep dark past or had once Google searched, plus occasionally extremely messed-up Temu ads. That is why I left.


I suspect it got worse with the advent of algorithm-driven social networks. When rage-inducing content is prevalent, and when engaging with it is the norm, I don't see why this behaviour wouldn't eventually leak to algorithm-free platforms.


Algorithm driven social media is a kind of pollution. As the density of the pollution on those sites increases it spills out and causes the neighbors problems. Think of 4chan style raids. It wasn't enough for them to snipe each other on their site, so they spread the joy elsewhere.

And that's just one type of issue. You have numerous kinds of paid actors that want to sell something or cause trouble or just general propaganda.


It is of course human nature. The problem is what happens when algorithms can reinforce, exaggerate, and amplify the effects of this nature to promote engagement and ad clicks. It's a cancer that will at the very least erode the agency of the average individual and at worst create a hive mind that we have no control over. We are living in the preview of it all, I think.


The thing is, the people on those "algorithm-free" forums still get manipulated by the algorithm in the rest of their life. So it seeps into everything.


I know that some folks dislike it, but Bluesky and atproto in particular have provided the perfect tools to achieve this. There are some people, largely those who migrated from Twitter, who mostly treat Bluesky like an all-liberal version of Twitter, which results in a predictably toxic experience, like bizarro-world Twitter. But the future of a less toxic social media is in there, if we want it. I've created my own feeds that allow topics I'm interested in and blacklist those I'm not -- I'm in complete control. For what it's worth, I've also had similarly pleasant experiences using Mastodon, although I don't have the same tools that I do on Bluesky.


I personally don't feel like ultra-filtered social media which only shows me things I agree with is a good thing. Exposing yourself to things you don't agree with is what helps us all question our own beliefs and prejudices, and grow as people. To me, only seeing things you know you are already interested in is no better than another company curating it for me.


I think it's less about content topic and more about meta content topic. E.g. I don't want to remove pictures of broccoli because I don't like broccoli; I'm trying to remove pictures of food because it makes me eat more. Similarly, I don't want to remove Political Takes I Disagree With, I want to remove Political Takes Designed To Make Me Angry. The latter has a destructive viral effect whose antidote is inattention.

Echo chamber is a loaded term. Nobody is upset about the Not Murdering People Randomly echo chamber we've created for ourselves in civilised society, and with good reason. Many ideologies are internally stable and don't virally cause the breakdown of society. The concerning echo chambers are the ones that intensify and self-reinforce when left alone.


I've mentioned this a few times in the past, but I'm convinced that filters that exclude work much better than filters that include.

Instead of algorithms pushing us content it thinks we like (or what the advertisers are paying them to push on us), the relationship should be reversed and the algorithms should push us all content except the content we don't like.

Killfiles on Usenet newsreaders worked this way and they were amazing. I could filter out abusive trolls and topics I wasn't interested in, but I would otherwise get an unfiltered feed.
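For anyone who never used one, a killfile is almost trivially simple, which is rather the point. A minimal sketch (the Post shape and the block lists are made up for illustration):

    from dataclasses import dataclass

    @dataclass
    class Post:
        author: str
        topic: str
        text: str

    # The user's killfile: everything passes unless explicitly excluded.
    BLOCKED_AUTHORS = {"abusive_troll"}
    BLOCKED_TOPICS = {"brexit"}
    BLOCKED_PHRASES = {"you won't believe"}

    def keep(post: Post) -> bool:
        if post.author in BLOCKED_AUTHORS:
            return False
        if post.topic.lower() in BLOCKED_TOPICS:
            return False
        return not any(p in post.text.lower() for p in BLOCKED_PHRASES)

    def feed(posts: list[Post]) -> list[Post]:
        # Otherwise unfiltered and chronological: filter out, never rank in.
        return [p for p in posts if keep(p)]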


I’m at risk of sounding like an atproto shill at this point, but check out https://www.graze.social/.

I think every social media platform should allow something like this. You can make filters that work in either direction.


At least when you do this you are aware of it happening. Algorithmic feeds can shift biases without you even noticing.


> I personally don't feel like ultra-filtered social media which only shows me things I agree with is a good thing. Exposing yourself to things you don't agree with is what helps us all question our own beliefs and prejudices, and grow as people.

You are the one who gets to control what is filtered or not, so that's up to you. It's about choice. By the way, a social media experience which is not "ultra filtered" doesn't exist. Twitter is filtered heavily, with a bias towards extreme right-wing viewpoints, the ones its owner is in agreement with. And that sort of filtering disguised as lack of bias is a mind virus. For example, I deleted my account a month or so ago after discovering that the CEO of a popular cloud database company that I admired was following an account who posted almost exclusively things along the lines of "blacks are all subhuman and should be killed." How did a seemingly normal person fall into that? One "unfiltered" tweet at a time, I suppose.

> To me, only seeing things you know you are already interested in is no better than another company curating it for me.

I curate my own feeds. They don't have things I only agree with in them, they have topics I actually want to see in them. I don't want to see political ragebait, left or right flavoured. I don't want to see midwit discourse about vibecoding. I have that option on Bluesky, and that's the only platform aside from my RSS reader where I have that option.

Of course, you also have the option to stare endlessly at a raw feed containing everything. Hypothetically, you could exactly replicate a feed that aggregates the kind of RW viewpoints popular on Twitter and look at it 24/7. But that would be your choice.


> For example, I deleted my account a month or so ago after discovering that the CEO of a popular cloud database company that I admired was following an account who posted almost exclusively things along the lines of "blacks are all subhuman and should be killed."

It seems like you're better off knowing that. Without Twitter, you wouldn't, right?

A venue that allows people to tell you who they really are isn't an unalloyed Bad Thing.


That's a good way of thinking about it, thank you, legitimately.


> Exposing yourself to things you don't agree with is what helps us all question our own beliefs and prejudices, and grow as people.

I have another wise-sounding soundbite for you: "I disapprove of what you say, but I will defend to the death your right to say it." —Voltaire. All this sounds dandy and fine, until you actually try and examine the beliefs and prejudices at hand. It would seem that such examination is possible, and it is—in theory, whereas in practice, i.e. in application of language—"ideas" simply don't matter as much. Material circumstance, mindset, background, all these things that make us who we are, are largely immutable in our own frames of reference. You can get exposed to new words all the time, but if they come in language you don't understand, it's of no use. This is not a bug, but a feature, a learned mechanism that allows us to navigate massive search spaces without getting overwhelmed.


So far my experience is that unless you subscribe to the general narrative of the platform, the discovery algorithm punishes you by directing the mob your way.

I had two of my Bluesky posts on AI attacked by all kinds of random people, which in turn has also led to some of those folks sending me emails and dragging some of my lobste.rs and Hacker News comments into online discourse. A not particularly enjoyable experience.

I’m sure one can have that same experience elsewhere, but really it’s Bluesky where I experienced this on a new level personally.


I saw that, and I'm sorry it happened. I thought both the response to your original post and the resulting backlash to both you and everyone who engaged with you sincerely were absurd. I think that because of atproto you have the flexibility to create a social media experience where that sort of thing cannot happen, but I also understand why you in particular would be put off from the whole thing.


I don’t think this is a technical problem but a social problem. I think the audience defines itself by being the antithesis of Twitter instead of being a well-balanced one.

I was pretty optimistic in the beginning, but Bluesky doesn’t have organic growth, and those who hang out there are the core audience that wants to be there because of what the platform represents. But that also means rejection of a lot of things, such as AI.


In many ways I agree with you. In particular, the conglomeration of high percentages of atproto users onto Bluesky-owned and -moderated algorithms and feeds, the replication of Twitter-style dogpiling, and the relative lack of ideological diversity on Bluesky have created the perfect environment for toxicity, even if it doesn't reach the depths that Twitter does.

But conversely, that's the only place I disagree with you. Everything that is bad about Bluesky is much worse on Twitter. It's a -- larger -- red mob instead of a blue one (or vice versa I guess depending on how one assigns colors to political alignment), and some of the mob members are actually getting paid to throw bricks!


I tried Bluesky and wanted to like it. My account got flagged as spam; I still have no idea why. Ironically, it could be another way of losing one's voice to an LLM :)


> My account got flagged as spam, still no idea why.

This happened to me too, 3 weeks ago. The email said why I got flagged as spam, I replied to the email explaining I actually was a human, and after some minutes they unflagged my account. Did you not receive an email saying why?


If you follow certain people, various communities will, en masse, block you and report you automatically with software "block lists". This can lead to getting flagged as spam.


Well, that's the thing -- you might be flagged as spam in the Bluesky PDS, but there are other PDSes, with their own feeds and algorithms, and in fact you can make your own if you so choose. That's a lot of work, and Twitter is definitely easier, but atproto means that an LLM cannot steal your voice.


I enjoy Mastodon a lot. Ad-free, algo-free. I choose what goes in my feed, I get exposed to external viewpoints by people's boosts (aka re-tweets), and I follow hashtags (to get content from people I do not know). But it's extremely peaceful; spam and bots are rare and get flagged quickly. There's a good ecosystem of mobile apps. I can follow a few Bluesky people through a bridge between platforms and they can follow me too.

That's truly all I need.


Doesn’t Bluesky have a set of moderation rules that guarantee that it will turn into bizarro-world Twitter?


> it's how the algorithms promote engagement.

They are destroying our democratic societies and should be heavily regulated. The same will become true for AI.


> should be heavily regulated.

By who, exactly? It’s easy to call for regulation when you assume the regulator will conveniently share your worldview. Try the opposite: imagine the person in charge is someone whose opinions make your skin crawl. If you still think regulation beats the status quo, then the call for regulation is warranted, but be ready to face the consequences.

But if picturing that guy running the show feels like a disaster, then let’s be honest: the issue isn’t the absence of regulation, it’s the desire to force the world into your preferred shape. Calling it “regulation” is just a polite veneer over wanting control.


I’m surprised at how much regulation has become viewed as a silver bullet in HN comments.

Like you said, the implicit assumption in every call for regulation is that the regulation will hurt companies they dislike but leave the sites they enjoy untouched.

Whenever I ask what regulations would help, the only responses are extremes like “banning algorithms” or something. Most commenters haven’t stopped to realize that Hacker News is an algorithmic social media site (are we not here socializing, with the order of posts and comments determined by a black box algorithm?).


HN lets you choose the ordering (active, new, top [the actual algorithm]).

That's not true of Facebook: "new" does not show you posts in true order of recency.

Reddit still does, but it also injects ads that look like recent posts and actually aren't, which is misleading.
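The "top" ordering isn't even especially mysterious: the widely circulated approximation of HN's ranking, from the old published Arc source, is a simple gravity formula. Treat this as a sketch rather than the production code; the live site layers penalties and moderator tweaks on top:

    def hn_rank(points: int, age_hours: float, gravity: float = 1.8) -> float:
        # Widely cited approximation: votes decay against age.
        return (points - 1) / (age_hours + 2) ** gravity

    # Freshness beats raw points: a young story with 10 votes outranks
    # a day-old story with 100.
    print(hn_rank(10, 1.0))    # ~1.25
    print(hn_rank(100, 24.0))  # ~0.28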


And HN doesn't choose to show you targeted, inflammatory content based on your history. That's a huge difference from Facebook.


Most people on HN who advocate regulating social media don't only want to prevent those platforms from showing targeted inflammatory content, they want to make all algorithmic feeds other than strictly chronological illegal, as well as moderation of any legal content.

From that point of view, Hacker News is little different than Facebook. One could even argue that HN's karma system is a dark pattern designed to breed addiction and influence conversation in much the same way as other social media platforms, albeit not to the same degree.


At least HN karma is incremental and based on something approximating merit, as opposed to being a slot machine where you never know which comment will earn karma. More effort or rare insight generally yields more karma.


That hasn't been my experience. How much karma you get is heavily dependent on how many people see the comment. The most insightful, effort-filled comment at the bottom of a 4-day-old thread isn't going to get you nearly as much, if anything, compared to a joke with just the right amount of snark at the top of a post currently at the top of the front page.


That doesn't make it any less addictive or manipulative.


I would be astonished if a majority of people opposed to social media algorithms consider HN's approach to be sufficiently objectionable to be regulated or in any way similar to Facebook.


Hacker News doesn't use a strictly chronological feed. Hacker News manipulates the feed to promote certain items over others. Hacker News moderates legal content. Those are all features of social media algorithms that people are opposed to. It just isn't "objectionable" when HN does it.

And regulations of this kind always creep out of scope. We've seen it happen countless times. But people hate social media so much around here that they simply don't think it through, or else don't care.


You're moving the goalposts.

You said:

> Most people on HN who advocate regulating social media...want to make all algorithmic feeds other than strictly chronological illegal

I don't buy that, at all. I think they want a chronological feed to follow, and they want the end of targeted outrage machines that are poisoning civil discourse and breeding the type of destructive politics that has led our sitting U.S. president to call for critics to be hanged.

Comparing what Facebook has done to the U.S. with HN's algorithm is slippery slope fallacy to an extreme, and even if HN's front page algorithm against all odds was outlawed due to a political overreaction to the destruction Facebook has wrought, I'd call it a fair trade.


>Comparing what Facebook has done to the U.S. with HN's algorithm is slippery slope fallacy to an extreme, and even if HN's front page algorithm against all odds was outlawed due to a political overreaction to the destruction Facebook has wrought, I'd call it a fair trade.

You're trying to discredit my comment but it seems as if your anger just led you around to proving me right.


You're failing to differentiate between "want" and "willing to settle for if the slippery slope is much worse than I think is realistically possible".


> But if picturing that guy running the show feels like a disaster, then let’s be honest: the issue isn’t the absence of regulation, it’s the desire to force the world into your preferred shape.

For example, we can forbid corporations from using algorithms beyond sorting by date of the post. Regulation could forbid gathering data about users: no gender, no age, none of the rest.

> Calling it “regulation” is just a polite veneer over wanting control.

It is you that may have misinterpreted what regulations are.


> For example, we can forbid corporations from using algorithms beyond sorting by date of the post

Hacker News sorted by "new" is far less valuable to me than the default homepage which has a sorting algorithm that has a good balance between freshness and impact. Please don't break it.

> It is you that may have misinterpreted what regulations are.

The definition of regulation is literally: "a rule or directive made and maintained by an authority." I am just scared about who the authority is going to be.


Control is the whole point. One person being in charge, enacting their little whims, is what you get in an uncontrolled situation and what we have now. The assumption is that you live in a democratic society and "the regulator" is effectively the populace. (We have to keep believing democracy is possible or we're cooked.)


By a not-for-profit community organization that has zero connection to or interest in any for-profit enterprise, that represents the stable wellbeing of society, and that has a specific mandate to do so.

Just like the community organizations we had that watched over government agencies that we allowed to be destroyed because of profit. It's not rocket science.


> By a not-for-profit community organization that has zero connection to or interest in any for-profit enterprise, that represents the stable wellbeing of society, and that has a specific mandate to do so.

Then you get situations like the school board stacked with creationists who believe removing the science textbooks is important for the stable wellbeing of society.

Or organizations like MADD that are hell bent on stamping out alcohol one incremental step at a time because “stable wellbeing of society” is their mandate.

Or the conservative action groups in my area that protest everything they find indecent, including plays and movies, because they believe they’re pushing for the stable wellbeing of society.

There is no such thing as a neutral group pushing for a platonic ideal stable wellbeing of society. If you give a group of people power to control what others see, it will be immediately co-opted by special interests and politics.

Singling out non-profits as virtuous and good is a utopian fallacy. If you give any group power over what others are allowed to show, it will be extremely political and abused by every group with an agenda to push.


It's really not that complicated:

- Ban algorithmic optimization that feeds on and proliferates polarisation.

- To heal society: Implement discussion (commenting) features that allow (atomic) structured discussions to build bridges across cohorts and help find consensus (vs. 1000s of comments screaming the same nonsense).

- Force the SM Companies to make their analytics truly transparent and open to the public and researchers for verification.

All of this could be done tomorrow, no new tech required. But it would lose the SM platforms billions of dollars.

Why? Because billions of people posting emotionally and commenting with rage, yelling at each other, repeating the same superficial arguments/comments/content over and over without ever finding common ground, traps a multitude more users in the engagement loop of the SM companies than civilised discussion does, where people find common ground and move on from a topic.

One system of social media would unlock a great consensus-based society for the many; the other, endless dystopic screaming battles but riches for a few, while spiralling the world further into a global theatre of cultural and actual (civil) war, thanks to the Zuckerbergs & Thiels.


That only treats the symptoms, not the cause. The purpose of algorithmic optimization farming engagement is to increase ad impressions for money. It is advertising that has to be regulated in such a way that maximizing ad impressions is not profitable or you will find that social media companies will still have every incentive to find other ways to do it that will probably be just as harmful.


> it's really not that complicated...

Then you list at least four priorities, which would require one multi-page bill, or more than likely several bills, to make their way through the House, the Senate, and the President's desk while under fire from every lobbyist in Washington?


I believe the world may contain legal and regulatory authorities that are not part of the United States. Dozens of them, so I've heard.


It’s always a question of who decides. Apparently, it’s this guy.


Recasting regulation as a desire for control is too reductive. The other point of regulation is compromise. No compromise at all is just a wasted opportunity.


I’d favour regulation towards transparency if nothing else. Show what factors influence appearance in a feed.


My view is that they are just exposing issues with the people in said societies, and now it is harder to ignore them. Much of the hate and the fear and the envy that I see on social networks has other causes, but people are having difficulties addressing those.

With or without social networks this anger will go somewhere; I don't think regulation alone can fix that. Let's hope it will be something transformative, not in the world-ending direction but in the constructive one.


Are hate, fear, or envy by themselves wrong, or only wrong when misdirected?

What if social media and the internet at large are now exposing people to things which before had been kept hidden from them, or distorted? Are people wrong to feel hate?

I know the time before the internet, when a very select few decided what the public should know and not know, what they should feel, what they should do, and how they should behave. The internet is not the first mass communications medium; neither are social media or LLMs. The public has been manipulated and mind-primed by mass media for over a century now.

The largest bloodshed events, World War I and II, were orchestrated by lunatics screaming on the radio or behind a pulpit, with the public eagerly being herded by them to the bloodshed.

This comment isn't in opposition to yours, it's just riffing on what you said.


> Is hate, fear, or envy by themselves wrong, or only wrong when misdirected?

I think they are natural feelings that appear for various reasons. People have struggled for centuries to control their impulses, and this was used for millennia to the advantage of whoever could manipulate them.

The Second World War did not appear in a "happy world"; it might even have started due to the Great Depression. For other conflicts, similarly - I don't think the situation was great before them for most people.

I am afraid that social networks just better expose what happens in people's heads (which would be worrying, as it could predict larger-scale conflicts) rather than making normal people angry (which would be solved by just reducing social media). Things are never black and white, so it is probably something in between. Time will tell whether it is closer to the first or the second.


They seem to artificially create filter bubbles, echo chambers and rage. They do that just for the money. They divide societies.

For example:

(Trap of Social Media Algorithms: A Systematic Review of Research on Filter Bubbles, Echo Chambers, and Their Impact on Youth)

> First, there is a consistent observation across computational audits and simulation studies that platform curation systems amplify ideologically homogeneous content, reinforcing confirmation bias and limiting incidental exposure to diverse viewpoints [1,4,37]. These structural dynamics provide the “default” informational environment in which youth engagement unfolds. Simulation models highlight how small initial biases are magnified by recommender systems, producing polarization cascades at the network level [2,10,38]. Evidence from YouTube demonstrates how personalization drifts toward sensationalist and radical material [14,41,49]. Such findings underscore that algorithmic bias is not a marginal technical quirk but a structural driver shaping everyday media diets. For youth, this environment is especially influential: platforms such as TikTok, Instagram, and YouTube are central not only for entertainment but also for identity work and civic socialization [17]. The narrowing of exposure may thus have longer-term consequences for political learning and civic participation.

https://www.mdpi.com/2075-4698/15/11/301


> Much of the hate and the fear and the envy that I see on social networks have other reasons

Maybe so, but do you really think actively amplifying or even rewarding them has no effect on people whatsoever?


During history, people did lots of horrible things and/or felt miserable without social networks. Yes, amplifying or rewarding does not have a positive effect, but I would like to see further analysis of the magnitude.

Think of slavery, the burning of witches, or genocides - those were considered perfectly normal not that long ago (on a historical scale). I feel that focusing on social networks prevents some people from asking "is that the root cause?". I personally think there are other reasons for this generic "anger" that have a larger impact and that have different solutions than "less AI/less social networks", but that would be too off-topic.


I agree, but focusing on "the algorithm" makes it seem to the outsider like it must be a complicated thing. Really it just comes down to whether we tolerate platforms that let somebody pay to have a louder voice than anyone else (i.e. ad-supported ones). Without that, the incentive to abuse people's attention goes away.


We've seen what happens when we pretend the market will somehow regulate itself.


Just because the free market isn't producing results you like doesn't mean that more regulation would make it better.


Do LinkedIn as well. I got rid of it earlier this year. The "I am so humbled/blessed to be promoted/reassigned/fired.." posts reached a level of parody that I just couldn't stomach any longer. I felt more free immediately.

N.B. Still employed btw.


You can have a LinkedIn profile without reading the feed.

This is literally how most of the world uses LinkedIn

I never understand why people feel compelled to delete their entire account to avoid reading the feed. Why were you even visiting the site to see the feed if you didn’t want to see the feed?


Yeah, I just use LinkedIn as a public resume and a messaging system with recruiters. Though even that goes through my email.


LinkedIn bothers me the least, even though it definitely has some of the highest level of cringe content. It's still a good tool to interact with recruiters, look at companies and reach out to their employees. The trick is blocking the feed with a browser extension.


Sorting the feed by "recent" at least gives you a randomized assortment of self aggrandizement, instead of algorithmically enhanced ragebait


Better suggestion: Ignore the feed if you don’t like it.

Don’t visit the site unless you have a reason to, like searching for jobs, recruiting, or looking someone up.

I will never understand these posts that imply that you’re compelled to read the LinkedIn feed unless you delete your account. What’s compelling you people to visit the site and read the feed if you hate it so much? I don’t understand.


Did you just post basically the same reply to two comments 2 minutes apart? :)


This. Linkedin is garbage, yet I still use it because there are no competitors. This is what happens in a monoculture.


Do you really want a “competitor” to linkedin? Do you really want to have to make and manage accounts on multiple sites because you need a job and you don’t know which a company uses?

Isn’t it better to have a single place you check when you need a job because everyone else is also there?


Do you really want a “competitor” to Facebook? Do you really want to have to make and manage accounts on multiple sites because you want to chat with someone and you don’t know which social media that person uses? Isn’t it better to have a single place you check when you need to chat with someone because everyone else is also there?


Yes, it is better, which is why facebook is the only old school social network left. Do you really think we would be using myspace if it still existed?


As someone who doesn't, and never has, had a linkedin. What would a "competitor" look like? There's plenty of job boards. What are you using linkedin for?


I have a special, deep, loathing for linkedin. I honestly can't believe how horrible it is and I don't understand why people engage with it.


I don't understand how people can be so dismissive of LinkedIn purely for its resume function.

For essentially every "knowledge worker" profession with a halfway decent CV, a well kept LinkedIn resume can easily make a difference of $X0,000 in yearly salary, and the initial setup takes one to a few hours. It's one of the best ROI actions many could do for their careers.

Many engineers are dismissive of doing that, and the justifications for it are often full of privilege.


Sorry, gotta be Holden Caulfield on this one, though I take your point.


I think this statement is highly dependent on market and geography. I, for one, have mostly received scams. For the occasional real contact, we shifted away from LinkedIn as soon as we could after the basic hello.


You have a special loathing for a site where you can message professional contacts when you need to?

Nobody is forcing you to use the social networking features. Just use it as a way to keep in touch with coworkers.


> I deleted my Facebook account a couple of years ago and my Twitter one yesterday.

I never signed up for Facebook or Twitter. My joke is I am waiting until they become good. They are still shitty and toxic from what I can tell from the outside, so I'll wait a little longer ;-)


Your loss.

Twitter was an incredible place from 2010 to 2017. You could randomly message someone and they would more often than not respond. Eventually an opportunity would come and you’d meet in person. Or maybe you’d form an online community and work towards a common goal. Twitter was the best place on the internet during that time.

Facebook as well had a golden age. It was the place to organize events, parties, and meetups, before Instagram and DMs took over. Nothing beats seeing someone post an album from last night's party and messaging your friends asking them if they remember anything that happened.

I know being cynical is trendy, but you genuinely missed out. Social dynamics have changed. Social media will never be as positive on an individual level as it was back then.


Reddit may be next. The number of "promoted" items is increasing.


A social network can be great. Social media — usually not.

Something like Instagram where you have to meet with the other party in person to follow each other and a hard limit on the number of people you follow or follow you (say, 150 each) could be an interesting thing. It would be hard to monetize, but I could see it being a positive force.


I eliminated twitter when a certain rich guy took over.

Actually, I deleted my account there before that, as twitter sent me spam mail trying to lecture me about what I write. There was nothing wrong with what I wrote - twitter was wrong. I cannot accept AI-generated spam from twitter, so I went away. I don't really miss it either, but Elon really worsened the platform significantly with his antics.

> Just because I said to someone 'Brexit was dumb', I don't expect to get fed 1000 accounts talking about it 24/7. It's tedious and unproductive.

Yeah, I can relate to this, but what mostly annoyed me was that twitter interfered: "we got a complaint about you - they are right, you are a troublemaker". I don't understand why twitter wants to interfere in communication. Reddit is even worse, since moderators have such a wild range of what is "acceptable" and what is not. Double standards everywhere on reddit.


No, there needs to be control over the algorithms that get used. You ought to be able to tune them. There needs to be a Google-fu equivalent for social media. Or, instead of one platform, one algorithm, let users define the algorithm to a certain degree, using LLMs to help with that, and then allow others to access your algorithms too. Asking someone like Facebook to tweak the algorithm is not going to help, IMO.


IMO there should not be an algorithm. You should just get what you have subscribed to, with whatever filters you have defined. There are better and worse algorithms but I think the meat of the rot is the expectation of an algorithm determining 90% of what you see.


Dude that's an algorithm.


Not in the sense that it's commonly used in this context. It's not a recommendation algorithm pulling from the whole platform based on what you're doing; it's a far more controllable, deterministic process which only does what you explicitly request.


One could absolutely push algorithms that personalize towards what the user wants to see. I think LLMs could be amazing at this. But that's not the maximally profitable algorithm, so nobody does it.

As so many have said, enragement equals engagement equals profit.

All my social media accounts are gone as well. They did nothing for me and no longer serve any purpose.

TBF Bluesky does offer a chronological feed, but the well-intentioned blocklists just became the chief tool for the mean girls of the site.


Could someone use a third-party AI agent to re-curate their feeds? If it was running from the user's computer I think this would avoid any API legal issues, as otherwise ad and script blockers would have been declared illegal long ago.

> but the well-intentioned blocklists just became the chief tool for the mean girls of the site.

I've never used it, but yes this is what I expected. It would be better to have topical lists that users could manually choose to follow or block. This would avoid quite a bit of the "mean girl" selectivity. Though I suppose you'd get some weird search-engine-optimization like behavior from some of the list curators (even worse if anyone could add to the list).
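Back to the original question, a minimal sketch of the client-side re-curation idea, assuming you can already fetch your raw feed somehow (export, official API); the interest weights are hypothetical, and an LLM could replace the scoring function with something less literal-minded:

    # Runs on the user's machine: rank the fetched posts by the user's own
    # stated interests instead of the platform's engagement score.
    INTERESTS = {"open source": 3.0, "genomics": 2.0, "jazz": 1.0}

    def my_score(text: str) -> float:
        t = text.lower()
        return sum(w for kw, w in INTERESTS.items() if kw in t)

    def recurate(posts: list[str]) -> list[str]:
        # Highest personal relevance first; ties keep fetch (roughly
        # chronological) order because sorted() is stable.
        return sorted(posts, key=my_score, reverse=True)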


Yes, you absolutely can do this and back in the before times Facebook used to have an API that let you design your own interface to it.

But now I think that will be treated with as much derision by FAANG as ad blockers because you're preventing them from enraging you to keep you engaged and afraid. Why won't you think of the shareholder value (tm)?

But mandating API access would be fantastic government regulation going forward. Don't hold your breath.


> Just because I said to someone 'Brexit was dumb', I don't expect to get fed 1000 accounts talking about it 24/7. It's tedious and unproductive.

I’m not the biggest Twitter user but I didn’t find it that difficult to get what I wanted out of it.

You already discovered the secret: You get more of what you engage with. If you don’t want to hear a lot of Brexit talk, don’t engage with Brexit content. Unfollow people who are talking a lot about Brexit

If you want to see more of something, engage with it. Click like. Follow those people. Leave a friendly comment.

On the other hand, some people are better off deleting social media if they can’t control their impulses to engage with bait. If you find yourself getting angry at the Brexit content showing up and feeling compelled to add your two cents with a comment or like, then I suppose deleting your account is the only viable option.


I got out of Twitter for a few reasons; part of what made it unpleasant was that it didn't seem to be just what I did that adjusted my feed, but that it was also affected by what the other people I connected to did.


I use X. I have an enormous blocklist and I block keywords. I found that I can also block emoji. This keeps my feed focused on what I want to see (no politics; just technology, classical and jazz music, etc.)


> You get more of what you engage with. If you don’t want to hear a lot of Brexit talk, don’t engage with Brexit content.

The algorithm doesn’t show you “more of the things you engage with”, and acting like it does makes people think what they’re seeing is a reflection of who they are, which is incorrect.

The designers of these algorithms are trying to figure out which “mainstream category” you are. And if you aren’t in one, it’s harder to advertise to you, so they want to sand down your rough edges until you fit into one.

You can spend years posting prolificly about open source software, Blender and VFX on Instagram, and the algorithm will toss you a couple of things, but it won’t really know what to do with you (aside from maybe selling you some stock video packages).

But you make one three-word comment about Brexit and the algorithm goes “GOTCHA! YOU’RE ANTI-BREXIT! WE KNOW WHAT TO DO WITH THAT!” And now you’re opted into 3 big ad categories and getting force-fed ragebait to keep you engaged, since you’re clearly a huge political junkie. Now your feed is trash forever, unless you engage with content from another mainstream category (like Marvel movies or one of the recent TikTok memes).


> The algorithm doesn’t show you “more of the things you engage with”,

That’s literally what the complaint was that I was responding to.

You even immediately contradict yourself and agree that the algorithm shows you what you engage with

> But you make one three word comment about Brexit and the algorithm goes up

> Now your feed is trash forever, unless you engage with content from another mainstream category

This is exactly what I already said: If you want to see some content, engage with it. If you don’t want to see that content, don’t engage with it.

Personally, I regret engaging with this thread. Between the ALL CAPS YELLING and the self-contradictory posts this is exactly the kind of rage content and ragebait that I make a point to unfollow on social media platforms.


The issue is that it's not symmetric: the algorithm is biased towards rage-baity content, so it will use any tiny level of engagement with something related to that content to push it, but there's not really anything you can do to stop it, or to get it to push less rage-baity content. This is also really bad if you realise you have a problem with getting caught up in such content (for some it's borderline addictive): there are no tools for someone to say 'I realise I respond to every message I see on this topic, but really that's not good for me, please don't show me it in the first place'.


OK sure, if you want to be technically correct, “the algorithm shows you what you engage with” in some sense, but not any useful sense. There’s no proportionality.

As I said above, if you engage heavily with content you like that is outside of the mainstream categories the algorithm has been trained to prefer, it will not show you more of those things.

If you engage one single time, in even the slightest way, with one of those mainstream categories, you will be seeing nothing but that, nonstop, forever.

The “mainstream categories” are not publicly listed anywhere, so it’s not always easy to know that you’ve just stepped in one until it’s too late.

You can’t engage with things you like in proportion to how much you care about them. If something is in a mainstream category and you care about it only little bit, you have to abstain from interacting with it at all, ever, and don’t slip up. Having to maintain constant vigilance about this all the time sucks, that’s what pisses me off.


> If you don’t want to hear a lot of Brexit talk, don’t engage with Brexit content.

That is really limiting though. I do not want to see Brexit ragebait in my threads, but I am quite happy to engage in intelligent argument about it. The problem is that if, for example, a friend posts something about Brexit I want to comment on, my feed then fills with ragebait.

My solution is to bookmark the friends and groups pages, and the one group I admin and go straight to those. I have never used the app.


Your facebook feed is now at this URL: https://www.facebook.com/?filter=all&sk=h_chr


Just started using Minifeed (free account). I am still nostalgic about Google Reader.


>it’s not just X — it’s Y


On the opposite side, you know what they say: "there is no algorithm for truth".


That's the same algorithm YouTube has, and it's more blatant. Phone mics and your coworkers' proximity do a great job of picking up things you've said, even after disabling mic access plus airplane mode, just by process of elimination.

I'll only use an LLM for projects and building tools, like a junior dev in their 20s.


An interesting thing about Twitter, I find, is that plenty of rage bait and narcissism bait surfaces, but amid very highly technical information which is also published there and is extremely useful to me (immunology, genomics, and of course computational).

I've learned pretty well how to 'guide' the algorithm so the tech stuff that's super valuable (to me) does not vanish, but I still get nonsense bozo posts in the mix.


I call it AI slop and human slop.


It's a cool subject and article, covering things I only have a general understanding of (considering the place of posting).

What I'm sure about is that a programming unit more purposed to a task is more optimal than a general programming unit designed to accommodate all programming tasks.

More and more, the economics of programming boils down to energy usage and, invariably, to physical rules; a more efficient process has the benefit of consuming less energy.

As a layman, it makes general sense to me. Maybe a future where productivity is based more closely on energy efficiency rather than monetary gain pushes the economy in better directions.

Cryptocurrency and LLMs seem like they'll play out that story over the next 10 years.


Agree. Voyager is probably considered by many to be one of our greatest achievements.

It makes me wonder when we'll have anything set foot in another star system. I would guess realistically after 2100, but then we went from the Wright brothers to landing on the moon in under 70 years... so I may be proven wrong.


Space is so ridiculously big that I don't think it will ever happen.

Back of the envelope math: 4.2 light years to the nearest star that's not the Sun, and current vehicles travel at about 10x the speed of Voyager (e.g. 1 light day in 5 years). If something was launched today it would get to the nearest star system in about 7,660 years (assuming that star system also has a radius of 1 light day).

100x faster than that (~16,000 km/s, about 5% of light speed) would still take about 76 years.

Definitely not before 2100 and almost certainly so long after that we will seem like a primitive civilization compared to those that do it.
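
For anyone who wants to check the arithmetic, a rough sketch using the figures above:

    LY_IN_DAYS = 365.25
    dist_ld = 4.2 * LY_IN_DAYS         # ~1534 light days to the nearest star
    speed_ld_per_yr = 1 / 5            # "1 light day in 5 years", ~164 km/s

    years = (dist_ld - 1) / speed_ld_per_yr   # stop 1 light day out
    print(round(years))                # ~7665, i.e. the ~7,660 above

    # 100x faster (~16,000 km/s, about 5% of light speed):
    print(round(years / 100))          # ~77 years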


> current vehicles traveling about 10x the speed of voyager

As I understand it, not really. Parker Solar Probe is crazy fast, but only because of its trajectory; it can't just change course and keep that speed in other directions.

If you want to launch something for deep space, the Jupiter-Saturn slingshot is still the most powerful trajectory we know of.

Today's rocket engines would give the probe a higher initial speed, but the final velocity would not differ dramatically. A fair bit higher, but not orders of magnitude.


You can do a Sun-diving Oberth maneuver too. Project Lyra was a proposal for an `Oumuamua flyby that got over 50km/s: http://orbitsimulator.com/BA/lyra.gif
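
For the curious: the Oberth effect is why sun-diving pays off. A burn deep in a gravity well, where the craft is already moving fast, buys far more hyperbolic excess speed than the same burn in flat space. A rough sketch with assumed numbers (perihelion at ~10 solar radii, a 5 km/s burn):

    import math

    GM_SUN = 1.327e20                # m^3/s^2
    r_peri = 10 * 6.96e8             # periapsis at ~10 solar radii, in metres

    v_esc = math.sqrt(2 * GM_SUN / r_peri)   # ~195 km/s at that radius
    v_peri = v_esc                           # speed after falling in from far away
    dv = 5_000                               # 5 km/s burn at perihelion, in m/s

    # Hyperbolic excess speed from the energy balance:
    v_inf = math.sqrt((v_peri + dv) ** 2 - v_esc ** 2)
    print(f"{v_inf / 1000:.0f} km/s")        # ~44 km/s from a 5 km/s burn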


> Space is so ridiculously big that I don't think it will ever happen

You are underestimating acceleration. To travel 4.2 light years and come to a stop, a spaceship accelerating at 1g barely needs 3.5 years in relativistic ship time (~6 years Earth time).

The technology to sustain 1g acceleration through 3.5 years is a different story, but very much within our understanding of physics (and not warp drives, etc). 20-50 years of engineering can get us there.
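
Those numbers drop out of the standard constant-proper-acceleration formulas; a quick check, working in light years and years so c = 1:

    import math

    g = 1.03         # 1g expressed in ly/yr^2
    d = 4.2          # light years; accelerate halfway, then decelerate

    half = d / 2
    t_half = math.sqrt(half ** 2 + 2 * half / g)     # coordinate (Earth) time
    tau_half = (1 / g) * math.acosh(1 + g * half)    # proper (ship) time

    print(f"Earth time: {2 * t_half:.1f} yr")   # ~5.8 yr
    print(f"Ship time:  {2 * tau_half:.1f} yr") # ~3.5 yr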


> 20-50 years of engineering can get us there.

I want to believe, but I think it'll be a lot more than that. The rocket equation is a stone cold bitch in this case.

Sustaining the thrust that accelerates a probe at 1g is very different to sustaining the thrust to move the probe and all the fuel. And it's much worse if you want to stop and not just fly past into deep space.
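
To put numbers on it: Tsiolkovsky says delta-v = v_ex * ln(m0/m1), so the mass ratio is exponential in delta-v. Even ignoring deceleration and aiming for a lazy 5% of c with a good chemical engine (~4.5 km/s exhaust):

    import math

    c = 299_792.458          # km/s
    dv = 0.05 * c            # ~15,000 km/s: accelerate only, never stop
    v_ex = 4.5               # km/s, a good chemical engine

    # math.exp(dv / v_ex) overflows a float, so report the exponent instead:
    log10_ratio = (dv / v_ex) / math.log(10)
    print(f"mass ratio ~ 10^{log10_ratio:.0f}")   # ~10^1447: hopeless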


> The rocket equation is a stone cold bitch in this case.

It might not be. Plenty of hydrogen around everywhere. We just need tech to use it.


I think you are way too optimistic. Even with an antimatter drive and 100% conversion efficiency, such a rocket would have a fuel-to-payload ratio of >1000.
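
A quick check with the textbook relativistic-rocket result for a perfect photon drive (exhaust at c), where the mass ratio is e^(g*tau/c) over ship time tau; the exact figure depends heavily on assumptions, since a return leg or real-world conversion losses multiply the exponent up quickly:

    import math

    g, c, yr = 9.81, 2.998e8, 3.156e7    # SI units
    tau = 3.5 * yr                        # ship time for accel + decel

    one_way = math.exp(g * tau / c)       # perfect photon drive
    print(f"one way:    {one_way:.0f}")       # ~37
    print(f"round trip: {one_way ** 2:.0f}")  # ~1400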


That seems pretty promising actually.

Our moon landing missions had a similar ratio, so I assume we could do the engineering to make even a slightly worse ratio work for us a hundred years on.

In practice it would be better with slingshot maneuvers and picking up mass on the way.


Whatever speed advancements we make on Earth, they pale in comparison to slingshotting off a planet. To make an engine that can go significantly faster, we would need the energy of a planet.


or a star :)


> 20-50 years of engineering can get us there.

What energy source do you think is merely 20-50 years of engineering effort away from being able to power that kind of journey?


Some kind of nuclear reaction or matter-antimatter collision, I assume.


You've still got the shielding problem: how do you protect the ship from disintegrating when it hits the first pebble at massive speed?


Carry or produce antimatter.

Make it collide with stuff hitting the ship, and redirect its energy for propulsion.


Is there enough reaction mass on earth to construct a rocket capable of accelerating at 1g for 3.5 years?


No, it probably cannot be a chemical rocket. Nuclear, yes.

My point is that we are in the realm of just needing new engineering (how to make nuclear reactions, or even matter-antimatter collisions, work for this goal), not new science (warp drives, or something else we don't understand about space, gravity, or mass).


Humans might one day have settlements around the solar system and in free space (large stations, etc.), but I have doubts about whether we'll ever go to the stars.

For machine intelligence, though, it would be easy. Just switch yourself off for a few thousand years.

It's likely that our "children" will go to the stars, not us.


Unfortunately, on the current trajectory, it will be Grok that reaches the next star system first. Just imagine an interstellar, immortal AI sycophantic towards a very specific billionaire.


Getting the travel time down to 500 years would be a reasonable goal.

You'd ship embryos and caregiver robots, start breeding/raising people 30 years before you'd arrive.


Imagine being born and told your life has been determined by some other humans living comfortable lives with unlimited air and water.

You are told you are about to make the greatest achievement humankind has ever made, but all you want is a little more food and a shower.


Sometimes people are born for greatness.


Isn't that already all of us?


Wow, you are really enjoying life. Hope it gets better.


Just spitballing, but maybe it would be possible with relatively modest advances in ion thrusters, and one (admittedly less-than-modest) breakthrough with fusion.

It's maybe too speculative to even matter, but I don't think it's _crazy_ to imagine a handful of AI-fueled advances in materials discovery during the next decade or two. Possibly enough to unlock laser fusion, or something that could be crammed onto a spacecraft.


> AI-fueled advances

There is no amount of money in the world that would get me on the ChatGPT rocket


By "AI" they probably aren’t thinking about LLMs in this context.


Just a dumb joke


Those responsible for sacking the people who have just been sacked, have been sacked [0]

[0] https://youtu.be/79TVMn_d_Pk?t=117


Agreed. We interact with so many different types of software, and I presume that, like me, people assign a confidence score to how things will work out, because there are so many unknown quantities out there. Those little thoughts you have while an app/page is doing its thing, wondering whether it even works as it claims in the first place.

I place value on grammar, but appreciate that on the web today surely around half of the English in it is written by ESL speakers (ignoring AI). And that's fine, it's a human thing: not everyone was taught English or has known it a long time, some have dyslexia, etc.

I guess in the end: give end users full confidence in you in every way possible.


It's good that he has his own website! I can relate (for non-famous reasons) to the Facebook issues. I can't even sign up any more, using my real name anyway.

It can be a pain, as so many local organisations use Facebook as a free way to share information. Unfortunately, if you're not logged in, pages can be rate limited, you get spammed with modals to sign up, and you can't scroll very far into any feed; in his case it's probably also a nuisance as a platform for his business.


An IPv6 address would also do the job. Then this chap could have his website hosted on it.


That's what I read on the surface. Any useful links for the context?


The best I've read is "The Eighth Day of Creation" (which is an amazing book beyond the part that covers the elucidation of the structure of DNA). It references multiple internal data sources that establish the process by which Gosling's photo made it to Watson and Crick. Of all the accounts I've read, it seems to be the most factual. I think it's also worth reading Watson's account ("The Double Helix") and the book that originally brought the most attention to the treatment of Franklin ("Rosalind Franklin: The Dark Lady of DNA").

I believe this article has some updated results: https://www.nytimes.com/2023/04/25/science/rosalind-franklin... and it appears there was an earlier book before Dark Lady, referenced here: https://www.nytimes.com/1975/09/21/archives/rosalind-frankli...


A thing you'll have to watch for is these agents actually being a user's browser, with the browser provider just using it as a proxy.

Otherwise, there are residential IP proxy services that cost around $1/GB, which is cheap, but why pay when you can get the user to agree to be a proxy?

If the margin of error in detecting automated requests is small enough, you may as well serve up some crypto-mining code for the AI bots to work through, but again, it could easily be an (unsuspecting) user.

I haven't looked into it much; it'd be interesting to know whether some of the AI requests use mobile agents (and present genuine mobile fingerprints).
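
As a first pass you can at least catch the bots that declare themselves; the user-agent tokens below are publicly documented crawler names, everything else is just a sketch (headers are trivially spoofed, so real detection needs IP ranges, TLS fingerprints and behavioural signals too):

    # Catches only self-declared AI crawlers; agentic requests riding a real
    # user's browser will sail straight past a check like this.
    DECLARED_AI_BOTS = ("GPTBot", "ClaudeBot", "CCBot",
                        "PerplexityBot", "Bytespider")

    def looks_like_declared_ai_bot(user_agent: str) -> bool:
        return any(token in user_agent for token in DECLARED_AI_BOTS)

    # e.g. in a request handler:
    #   if looks_like_declared_ai_bot(request.headers.get("User-Agent", "")):
    #       ...rate-limit, block, or serve a cheap response...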

