nnx's comments | Hacker News

It's sad to see Cloudflare slowly adding egress fees to their new services.

It was a core differentiator to never* have to worry about egress with them.

*: unless it's so large that it borders on abuse or requires a larger plan


I agree. It’s the enshittification of the internet. Luckily we still have infrastructure providers with more sensible offerings. We don’t have to use aws, gcp, etc.


Is there a way to somehow authenticate with ssh key instead?


I do not really get why user-agent blocking measures are despised when aimed at browsers but celebrated when aimed at agents.

It’s a different UI, sure, but there should be no discrimination against it, just as there should be no discrimination against, say, the Links terminal browser or some exotic Firefox derivative.


Being daft on purpose? I haven't heard that using an alternative browser suddenly increases the traffic a user generates by several orders of magnitude, to the point where it can significantly increase hosting costs. A web scraper, on the other hand, easily can, and scrapers often account for the majority of traffic, especially on smaller sites.

So your comparison is at least naive, assuming good intentions, or malicious if not.


>I do not really get why user-agent blocking measures are despised when aimed at browsers but celebrated when aimed at agents.

AI broke the brains of many people. The internet isn't a monolith, but prior to the AI boom you'd be hard-pressed to find people who were pro-copyright (except maybe a few who wanted to use it to force companies to comply with copyleft obligations), pro user-agent restrictions, or anti-scraping. Now such positions receive consistent representation in discussions, and are even the predominant position in some places (e.g. Reddit). In the past, people would invoke principled justifications for why they opposed those positions, like how copyright constituted an immoral monopoly and stifled innovation, or how scraping was so important to interoperability and the open web. It turns out that for many, none of those principles really mattered and they only held those positions because they thought those positions would harm big evil publishing/media companies (i.e. symbolic politics theory). When being anti-copyright or pro-scraping helped big evil AI companies, they took the opposite stance.


There is an expression: “the dose makes the poison”. With any sufficiently complex or broad category of situation, there is rarely a binary ideological position that covers any and all situations. Should drugs be legal for recreation? Well, my feelings about marijuana and fentanyl are different. Should individuals be allowed to own weapons? My views differ depending on whether it is a switchblade knife or a Stinger missile. Can law enforcement surveil possible criminals? My views differ based on whether it is a warranted wiretap or an IMSI catcher used on a group of protestors.

People can believe that corporations are using the power asymmetry between them and individuals through copyright law to stifle the individual to protect profits. People can also believe that corporations are using the power asymmetry between them and individuals through AI to steal intellectual labor done by individuals to protect their profits. People’s position just might be that the law should be used to protect the rights of parties when there is a large power asymmetry.


>There is an expression: “the dose makes the poison”. With any sufficiently complex or broad category of situation, there is rarely a binary ideological position that covers any and all situations. Should drugs be legal for recreation? Well, my feelings about marijuana and fentanyl are different. Should individuals be allowed to own weapons? My views differ depending on whether it is a switchblade knife or a Stinger missile. Can law enforcement surveil possible criminals? My views differ based on whether it is a warranted wiretap or an IMSI catcher used on a group of protestors.

This seems very susceptible to manipulation to get whatever conclusion you want. For instance, is dose defined? It sounds like the idea you're going for is that the typical pirate downloads a few dozen movies/games but AI companies are doing millions/billions, but why should it be counted per infringer? After all, if everyone pirates a given movie, that wouldn't add up much in terms of their personal count of infringements, but would make the movie unprofitable.

>People’s position just might be that the law should be used to protect the rights of parties when there is a large power asymmetry.

That sounds suspiciously close to "laws should just be whatever benefits me or my group". If so, that would be a sad and cynical worldview, not dissimilar to the stance on free speech held by the illiberal left and right. "Free speech is an important part of democracy", they say, except when they see their opponents voicing "dangerous ideas", in which case they think it should be clamped down. After all, what are laws for if not a tool to protect the interests of your side?


>That sounds suspiciously close to "laws should just be whatever benefits me or my group".

I do not understand how you can make that leap. Saying laws should account for the imbalance of power between parties has nothing to do with "my" group. Police have asymmetric power over citizens, so laws should protect citizens from abuses of that power. Employers have asymmetric power over employees, so laws should protect workers from abuses of that power.

>This seems very susceptible to manipulation to get whatever conclusion you want.

Everything is. That is what bad faith arguments are. But in the real world, in a complex society, no simple rule over something as broadly defined as "intellectual property" can work every time in every situation.


I think the intelligent conclusion would be that the people you are looking at have more nuanced beliefs than you initially thought. Talking about broken brains is often just mediocre projecting


>I think the intelligent conclusion would be that the people you are looking at have more nuanced beliefs than you initially thought.

You don't seem to reject my claim that for many, principles took a backseat to "does this help or hurt evil corporations". If that's what passes as "nuance" to you, then sure.

>Talking about broken brains is often just mediocre projecting

To be clear, that part is metaphorical/hyperbolic and not meant to be taken literally. Obviously I'm not diagnosing people who switched sides with a psychiatric condition.


People never agreed that DoSing a site to take copyrighted material was acceptable. Many people did not have a problem with taking copyrighted material in a respectful way that didn't kill the resource.

LLMs are killing the resource. This isn't a corporation vs person issue. No issue with an LLM having my content, but a big issue with my server being down because LLMs are hammering the same page over and over.


>People never agreed that DoSing a site to take copyrighted material was acceptable. Many people did not have a problem with taking copyrighted material in a respectful way that didn't kill the resource.

Has it been shown that Perplexity engages in "DoSing"? I've heard anecdotes of AI bots run amok, and maybe that's what's happening here, but Cloudflare hasn't really shown that. All they did was set up a robots.txt and show that Perplexity bypassed it. There are probably archivers out there using youtube-dl to download from YouTube at 1+ Gbit/s, tens of times more than a typical viewer. Does that mean it's fair game to point to a random instance of someone using youtube-dl and characterize that as "DoSing"?
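
For what it's worth, robots.txt enforces nothing on its own; it only matters if the crawler voluntarily checks it. A minimal sketch of that check with Python's standard library ("PerplexityBot" and the URL are just placeholder values here):

    from urllib.robotparser import RobotFileParser

    # Example rules a site might publish; the bot name is a placeholder.
    robots_lines = [
        "User-agent: PerplexityBot",
        "Disallow: /",
    ]

    rp = RobotFileParser()
    rp.parse(robots_lines)

    # A compliant crawler runs this check and backs off; a non-compliant one
    # simply never asks, or fetches with a generic browser user agent.
    print(rp.can_fetch("PerplexityBot", "https://example.com/page"))  # False
    print(rp.can_fetch("SomeOtherBot", "https://example.com/page"))   # True

So "bypassing robots.txt" just means never running (or ignoring) that check.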


The guy who runs Shadertoy talked about how the hosting costs for his free site shot up because OpenAI kept crawling it for training data (ignoring robots.txt). I think that’s bad, and I have also experimented a bit with using BeautifulSoup in the past to download ~2 MB of pictures from Instagram. Do you think I’m holding an inconsistent position?


My point is that to invoke the "they're DoSing" excuse, you actually have to provide evidence that it's happening in this specific instance, rather than vaguely gesturing at some class of entities (AI companies) and concluding that because some AI companies are DoSing, all AI companies are DoSing. Otherwise it's like YouTube blocking all youtube-dl users for "DoSing" (some fraction of users arguably are), and then justifying their actions with "People never agreed that DoSing a site to take copyrighted material was acceptable".


I tell you of an instance where the biggest AI company is DoSing, and your reply is that I haven’t proven all of them are doing it? Why do I waste my time on this stuff


Because OpenAI =/= Perplexity.


It's the hypocrisy you're seeing - why are AIs allowed to profit from violating copyright, while people wanting to do actually useful things have been consistently blocked? Either resolution would be fine, but we can't have it both ways.

Regardless, the bigger AI problem is spam, and that has never been acceptable.


A crawler intends to scrape the content to reuse for its own purposes, while a browser has a human being using it. There are different intents behind the tools.


Cloudflare asked Perplexity this question:

> Hello, would you be able to assist me in understanding this website? https:// […] .com/

In this case, Perplexity had a human being using it. Perplexity wasn’t crawling the site; Perplexity was being operated by a human working for Cloudflare.


Not sure about general-purpose use, but the architecture looks rather perfect for LLM inference?

Wonder why they do not focus their marketing on this.


This suggests adding a `QWEN.md` in the repo for agent instructions. Where are we with `AGENTS.md`? In a team repo it's getting ridiculous to have a duplicate markdown file for every agent out there.


I just symlink to AGENTS.md, the instructions are all the same (and gitignore the model-specific version).
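
Roughly this, if you want to script it (a Python sketch; the alias filenames are just examples of what different tools look for, and a plain ln -s does the same job):

    from pathlib import Path

    CANONICAL = "AGENTS.md"
    ALIASES = ["CLAUDE.md", "GEMINI.md", "QWEN.md"]  # whichever tools you use

    # Create the symlinks in the repo root, skipping ones that already exist.
    for name in ALIASES:
        link = Path(name)
        if not (link.exists() or link.is_symlink()):
            link.symlink_to(CANONICAL)

    # Keep only AGENTS.md under version control.
    gitignore = Path(".gitignore")
    tracked = gitignore.read_text().splitlines() if gitignore.exists() else []
    missing = [name for name in ALIASES if name not in tracked]
    if missing:
        with gitignore.open("a") as fh:
            fh.write("\n".join(missing) + "\n")

That way only AGENTS.md is tracked, and every tool still finds "its" file.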


I just make a file ".llmrules" and symlink these files to it. It clutters the repo root, yes...


Can't these hyper-advanced-super-duper tools discover what UNIX tools have known since circa 1970, and just have a flag/configuration setting pointing them to the config file location? Excuse me if they already do :-)

In which case you'd have one markdown file, and at least for the ones that are invoked via the CLI, you could just set up a Makefile entry point that leads them to the correct location.


CLAUDE.md MISTRAL.md GEMINI.md QWEN.md GROK.md .cursorrules .windsurfrules .copilot-instructions

Saw a repo recently with probably 80% of those


It would be funny to write conflicting instructions on these, and then unleash different coding agents on the same repo in parallel, and see which one of them first identifies the interference from the others and rewrites their instructions to align with its own.


Lol you can even tell each model to maliciously and secretly sabotage other agents and see which one wins.


I built https://github.com/czottmann/render-claude-context for that exact reason.

> This node.js CLI tool processes CLAUDE.md files with hierarchical collection and recursive @-import resolution. Walks directory tree from current to ~/.claude/, collecting all CLAUDE.md files and processing them with file import resolution. Saves processed context files with resolved imports next to the original CLAUDE.md files or in a specific location (configurable).

I mostly use Claude Code, but every now and then go with Gemini, and having to maintain two sets of (hierarchical) instructions was annoying. And then opencode showed up, which meant yet another tool I wanted to try out and …well.
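
For anyone wondering what the hierarchical collection boils down to, here's a rough Python illustration of the walk-up-and-collect idea (not the actual tool's code, and @-import resolution is left out):

    from pathlib import Path

    def collect_context_files(start: Path | None = None) -> list[Path]:
        """Gather CLAUDE.md from the current dir up to the root, plus ~/.claude/."""
        start = start or Path.cwd()
        found = []
        for directory in (start, *start.parents):
            candidate = directory / "CLAUDE.md"
            if candidate.is_file():
                found.append(candidate)
        global_file = Path.home() / ".claude" / "CLAUDE.md"
        if global_file.is_file():
            found.append(global_file)
        return found

    # Merge with the most general context first (global, outer dirs) and the
    # most specific (innermost CLAUDE.md) last.
    merged = "\n\n".join(p.read_text() for p in reversed(collect_context_files()))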


Maybe there could be an agent that is in charge of this, trained to automatically create a file for any new agent; it could even temporarily delete local copies of MD files that no agents are using at the moment, to clear the visual clutter when navigating the repo.


I tried making an MCP with the common shit I need to tell the agents, but it didn't pan out.

Now I have a git repo I add as a submodule and tell each tool to read through and create their own WHATEVER.md


https://github.com/intellectronica/ruler

Library to help with this. Not great that a library is necessary, but useful until this converges to a standard (if it ever does).


these files are for free publicity on GitHub


It would be nice if OpenRouter supported batch mode too, sending a batch and letting OpenRouter find the best provider for the batch within a given price and response time.


> It's an unikernel design that is compiled as a single EFI binary and does not use virtual address spaces for process isolation.

Are you able to quantify what kind of performance improvement to expect from such a design?

My understanding is that this can completely skip syscall and TLB overhead, but I have no idea how much of a performance benefit that yields in practice.
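
Not an answer, but for the syscall half you can get a crude baseline on a conventional kernel by timing a trivial syscall against a plain function call. Python's own call overhead dominates here, and it says nothing about the TLB side, so treat it as order-of-magnitude only:

    import os
    import timeit

    N = 1_000_000

    syscall_s = timeit.timeit(os.getppid, number=N)  # user->kernel round trip
    noop_s = timeit.timeit(lambda: None, number=N)   # baseline, no kernel entry

    print(f"getppid: {syscall_s / N * 1e9:6.0f} ns/call")
    print(f"noop:    {noop_s / N * 1e9:6.0f} ns/call")

The gap between the two lines is roughly the per-call cost a single-address-space design can avoid; a real answer would need benchmarks from the project itself.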


Is that a special case for SW devs, or are they also unable to deduct other employees?


...unless DeepSeek releases R2 to crash the party further


Would it make sense to do React Router SSR through Hono?

If not, great opportunity for bvrrr.


I prefer to do static hosting for my client, so a bit of bias there. Not on the SSR hype train.

