SquareWheel's comments

Contextual permissions are a big improvement over early and uncertain prompts. I will never agree to grant my permission when first loading a page; however, I may do so when intentionally activating a map widget. At least then I understand the context in which it's being asked, and can make a more informed decision.
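For instance, tying the geolocation prompt to the widget itself (mapButton and showMap are placeholder names here, just a sketch):

    // The permission prompt only appears after the user has clicked the
    // map widget, so the request has obvious context.
    mapButton.addEventListener('click', () => {
      navigator.geolocation.getCurrentPosition(
        pos => showMap(pos.coords.latitude, pos.coords.longitude),
        () => showMap()   // user declined: show a non-located map instead
      );
    });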

Well, because it means that other energy generation sources like oil, gas, and coal aren't being used there instead. Since they cause far, far more harm than nuclear waste does, it's a net win.


The same is true for solar and wind energy, but without the nuclear waste. Since they cause far less harm than nuclear waste, it's an even bigger win.

Our main problem isn't energy production, it's storage and quick reaction to consumption spikes. Nuclear energy doesn't help with that.


Note that a checkbox's indeterminate state can only be set via JavaScript, so that lessens the elegance of a CSS-based approach.
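For example (the selector is just illustrative):

    // 'indeterminate' is a DOM property only; there's no HTML attribute
    // or CSS-only way to put a checkbox into the mixed state.
    const box = document.querySelector('#theme-toggle');
    box.indeterminate = true;   // rendered as the "mixed" state
    box.checked = false;        // checked/unchecked is tracked separately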

I agree that using radios would be better. Or just prefers-color-scheme, which sidesteps the FOUT issue that often occurs when storing theme settings in localStorage.
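Something along these lines, for instance (the data-theme hook is just an example):

    // Read the OS preference at startup and react to changes; nothing is
    // persisted, so there's no flash of the wrong theme on load.
    const query = window.matchMedia('(prefers-color-scheme: dark)');
    const apply = dark => {
      document.documentElement.dataset.theme = dark ? 'dark' : 'light';
    };
    apply(query.matches);
    query.addEventListener('change', e => apply(e.matches));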


Not having looked at the extension, I would assume they use the chrome.scripting API in MV3.

https://developer.chrome.com/docs/extensions/reference/api/s...

https://developer.chrome.com/blog/crx-scripting-api


No, this can't be used for remote code. Only existing local code.


Thanks for clarifying. It looks like I needed to refresh my memory of the browser APIs.

Reading further, this API only accepts arbitrary (remote) code for CSS, via chrome.scripting.insertCSS. For JS, however, anything run through chrome.scripting.executeScript needs to be packaged locally with the extension, as you said.

It seems the advanced method is to use chrome.userScripts, which allows for arbitrary script injection, but requires the user be in Dev Mode and have an extra flag enabled for permission. This API enables extensions like TamperMonkey.
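Roughly, the difference looks like this (tabId and the match pattern are just placeholders):

    // executeScript only takes files shipped with the extension (or a func
    // defined in extension code), while userScripts.register accepts
    // arbitrary code strings - which is why it's gated behind Dev Mode.
    chrome.scripting.executeScript({
      target: { tabId },
      files: ['content.js'],   // must exist inside the extension package
    });

    chrome.userScripts.register([{
      id: 'example-userscript',                      // hypothetical id
      matches: ['https://example.com/*'],
      js: [{ code: 'console.log("injected");' }],    // arbitrary code allowed
    }]);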

Since the Claude extension doesn't seem to require this extra permission flag, I'm curious what method they're using in this case. Browser extensions are de facto visible-source, so it should be possible to figure out with a little review.


> "probably should’ve added a disclaimer"

It's a violation of the Amazon Associates program to not have one.


Also, it's illegal in the US and many other countries to not disclose that you're earning money from a recommendation.[0]

[0] https://www.ftc.gov/business-guidance/resources/ftcs-endorse...


Which other parties? Because Mozilla's stance on JPEG XL and XSLT are identical to Google's. They don't want to create a maintenance burden for features that offer little benefit over existing options.


Didn't Mozilla basically say they would support it if Google does? Mozilla doesn't have the resources to maintain a feature that no one can actually use; they're barely managing to keep up with the latest standards as it is.


Yeah, they need those resources to pay the CEO!


They have many millions to spend on engineers. They should do that.


Just come up with some way to make it a huge win for Pocket integration or the like.


> maintain a feature that no one can actually use;

If only there was a way to detect which features a browser supports. Something maybe in the html, the css, javascript or the user agent. If only there was a way to do that, we would not be stuck in a world pretending that everything runs on IE6. /s
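(The detection hooks really do exist, for what it's worth; a trivial sketch:)

    // Illustrative only:
    const hasXSLT = 'XSLTProcessor' in window;        // object detection in JS
    const hasGrid = CSS.supports('display', 'grid');  // CSS feature queries
    console.log({ hasXSLT, hasGrid });
    // And <picture><source type="image/jxl"> already handles image-format
    // fallback declaratively, with no detection code at all.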


>Because Mozilla's stance on JPEG XL and XSLT are identical to Google's.

Okay, and do they align on every other web standard too?


Usually it’s Mozilla not wanting to implement something Google wants to implement, not the other way around.


Indeed, you're making my point.

SquareWheel implied that Mozilla doesn't count as an "other party" because they are aligned with Google on this specific topic.

My comment was pointing out that just because they are aligned on this doesn't mean they are aligned on everything, so Mozilla is an "other party".

And, as you have reinforced, Google and Mozilla are not always in alignment.


I made no such implication. Mozilla is certainly an other party, and their positions on standards hold water. They successfully argued for Web Assembly over Native Client, and have blocked other proposals such as HTML Import in the Web Components API. They are still a key member of the WHATWG.

The fact that Mozilla aligns with Google on both of these deprecations suggests the reasons are valid.

I personally see no reason for XSLT today. Outside of the novelty of theming RSS feeds, it sees very little use. And JPEG XL carries a large security surface area which neither company was comfortable including in its current shape. That may change based on adoption and availability of memory-safe decoders.


>>"[...] support the web standards as determined by other parties."

>"Which other parties? Because Mozilla's stance on JPEG XL and XSLT are identical to Google's"

If this isn't an implication that Mozilla isn't an other party, then I'm not sure what you were trying to say with "Which other parties?".

Whatever you meant to say, it read as an implication that Mozilla just does what Google does so Mozilla isn't really an "other party".


It means exactly what it says: "What other parties do you mean?". Key players are already in lockstep on this decision, so insisting that Google must submit to the other WHATWG members doesn't make any sense in an argument for restoring XSLT or JPEG XL.

You seem to be reading subtext into a statement that was put plainly.


>Google must submit to the other WHATWG members doesn't make any sense in an argument for restoring XSLT or JPEG XL.

The comment you replied to was speaking generally, not specifically about XSLT or JPEG XL. They obviously didn't mean "Google should be barred from having standards positions" only in the context of XSLT/JPEG XL, as if they were totally cool with the Google monopoly on every other standard.

>You seem to be reading subtext into a statement that was put plainly.

Nah, I'm really not.

But I'm just farming downvotes, apparently, so nevermind. You win! yay

(It's fun that people are coming to a conversation over 24 hours old, however many levels deep, to downvote!)


Which is why Firefox is steadily losing market share.

If Mozilla wanted Firefox to succeed, they would stop playing "copy Chrome" and support all sorts of things that the community wants, like JpegXL, XSLT, RSS/Atom, Gemini (protocol, not AI), ActivityPub, etc.

Not to mention a built-in ad-blocker...


With all due respect, this is a completely HN-brained take.

No significant number of users chooses their browser based on support for image codecs. Especially not when no relevant website will ever use them until Safari and Chrome support them.

And websites which already do not bother supporting Firefox very much will bother even less if said browser by-default refuses to allow them to make revenue. They may in fact go even further and put more effort into trying to block said users unless they use a different browser.

Despite whatever HN thinks, Firefox lost marketshare on the basis of:

A) heavy marketing campaigns by Google, including backdoor auto-installations via crapware installers like free antivirus, Java and Adobe, and targeted popups on the largest websites on the planet (which are primarily Google properties). The Chrome marketing budget alone nearly surpasses Mozilla's entire budget, and that's not even accounting for the value of the aforementioned self-advertising.

B) being a slower, heavier browser at the time, largely because the extension model that HN loved so much and fought the removal of was an architectural anchor, and beyond that, XUL/XPCOM extensions were frequently the cause of the most egregious examples of bad performance, bloat and brokenness in the first place.

C) being "what their cellphone uses" and Google being otherwise synonymous with the internet, like IE was in the late 1990s and early 2000s. Their competitors (Apple, Microsoft, Google) all own their own OS platforms and can squeeze alternative browsers out by merely being good enough or integrated enough not to switch for the average person.


I don't disagree with you, but given (A) how will Firefox ever compete?

One possible way is doing things that Google and Chrome don't (can't).

Catering to niche audiences (and winning those niches) gives people a reason to use it. Maybe one of the niches takes off. Catering to advanced users is not necessarily a bad way to compete.

Being a feature-for-feature copy of Chrome is not a winning strategy (IMHO).


>Being a feature-for-feature copy of Chrome is not a winning strategy (IMHO).

Good thing they aren't? Firefox's detached video player feature is far superior to anything Chrome has that I'm aware of. Likewise for container tabs, Manifest V2 and anti-fingerprinting mode. And there are AI integrations that do make sense, like local-only AI translation & summaries, which could be a "niche feature" that people care about. But people complain about that stuff too.


And these aren't niche/advanced features? I'm using Firefox now, and did not know about them. If I'm using them, it is only accidentally or because they are the defaults.

But I'm agreeing with you! These features are important to you, an advanced user. The more advanced users for Firefox, the better.


> all sorts of things that the community wants, like JpegXL, XSLT, RSS/Atom, Gemini (protocol, not AI), ActivityPub, etc.

What “community” is this? The typical consumer has no idea what any of this is.


I agree with you. But a typical consumer will already be using Chrome, and has no reason to use Firefox.

If one of these advanced/niche technologies takes off, suddenly they will have a reason to use Firefox.


For Firefox to win back significant share, they need to do more than embrace fringe scenarios that normal people don’t care about. They need some compelling reason to switch.

IE lost the lead to Firefox when IE basically just stopped development and stagnated. Firefox lost to Chrome when Firefox became too bloated and slow. Firefox simply will not win back that market until either Chrome screws up majorly or Firefox delivers some significant value that Google cannot immediately copy.


It's one thing to judge somebody for supporting an unjust and illegal war. It's another thing entirely to judge them for where they were born. None of us chooses our nationality.


There was hardly any judgement, except 'unfortunately'.

Regardless, there are people who want to avoid distributions made by Russians. Are builds reproducible? Where do these people reside? Could be important.


> there are people who want to avoid distributions made by Russians

Well, racism / citizenship-based discrimination is a thing, yes.

> Are builds reproducible?

A valid question, but what does that have to do with the ethnic background or citizenship of the distro makers?

> Where do these people reside?

It seems their team is from all sorts of places, although it doesn't exactly say where in the world each of them is located:

https://cachyos.org/about/


Of course it is a thing when the country of their nationality is committing genocide and is run by an authoritarian government.

> A valid question, but what does that have to do with the ethnic background or citizenship of the distro makers?

Allows security researchers to verify the binaries and/or find intentional backdoors.

> It seems their team is from all sorts of places, although it doesn't exactly say where in the world each of them is located

Lots of words for 'I don't know'. Me neither, that is why I am asking.


> Of course it is a thing when the country of their nationality is committing genocide, and an authoritarian government.

Bit ironic to continue posting comments here, isn't it?

Nonetheless, I agree with your worry and caution based on where software is produced, but I act on that by checking my OS/software before installing/updating it, not by spreading FUD on internet forums.


This website is very liberal with regard to freedom of speech, and while hosted in the USA it isn't part of FAMAG, and it's non-partisan. While the USA is under attack from the radical right, it has been before (Dubya).

The thing with citizens of Russia and China who reside in their respective authoritarian country is they cannot be held legally accountable.


What’s your procedure for checking it? How would you discover if the FSB has forced them to put a timebomb in?


I was using Bazzite, but they started talking about potentially shutting it down due to a removal of 32-bit support. It seems a bit safer to choose one of the mainline Fedora spins. Maybe Kinoite or Silverblue if you're into atomicity, though there's still some rough edges to be aware of.


They were going to shut it down due to upstream Fedora considering ending 32-bit support. Sticking to upstream wouldn't have helped you avoid that issue.


Why do you say that? If they drop 32-bit support, maybe I won't be able to play games for a time - at least until somebody rigs up a fix - but at least my operating system will still be supported.

If Bazzite goes poof overnight, though, that's a major problem. At least Fedora's official spins will continue to receive necessary updates.


The Steam client is 32-bit, the majority of games on Steam are 32-bit, and very popular titles like Left 4 Dead 2 are 32-bit.

The last time a distro tried to do this Ubuntu caved and continued supporting it with an extra repo. Fedora has no chance of winning that argument.

The good news is that the incident you're talking about was a change proposal from a single person, and it was never even voted on. It did not survive the comment stage.


They're complementary. As a general (though not exclusive) rule, consider flex for one-dimensional layouts, and grids for two-dimensional layouts.


Yeah, to expand on that... Flex is, well, flexible, whereas Grid is more rigid, like a table. The rigidity of Grid lets you span rows and columns (2D), just like you can with table cells (colspan/rowspan). Grid is usually used at the macro level for its more deterministic layout (no unintuitive flex quirks), setting out the layout of the app and of container components (modals, cards, etc.). Flex is usually used at the component level, where you don't care that the next row of items isn't perfectly aligned with the ones above; you'll often see it holding some buttons or badges, or vertically aligning text to an icon.
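A rough illustration of that split, written as JS-applied inline styles just to keep it compact (normally this lives in a stylesheet, and the class names are made up):

    // Grid for the 2D page frame...
    const page = document.querySelector('.page');
    page.style.display = 'grid';
    page.style.gridTemplateColumns = '16rem 1fr';   // sidebar + content
    page.style.gridTemplateRows = 'auto 1fr auto';  // header, main, footer
    page.style.gap = '1rem';

    // ...flex for a 1D row of small items inside a component.
    const toolbar = document.querySelector('.toolbar');
    toolbar.style.display = 'flex';
    toolbar.style.flexWrap = 'wrap';
    toolbar.style.alignItems = 'center';
    toolbar.style.gap = '0.5rem';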


So is Grid supposed to be what we should use to replace the html <table> element? That I still use to this day for layouts because CSS still sucks to me?


Use <table> for tabular data, but for layout you should use grid. Grid doesn't have its own element like table does, so you have to use css to apply that display to a div.

CSS takes a bit of time to understand. Its cascading nature, and how certain properties behave differently based on the html structure or display type or direction, make it tricky. I don't blame you for sticking with tables for layouts for yourself - making layouts with floats was a pain. Bootstrap hid a lot of the layout pain. But today we have flex and grid to help us realize our layouts.


Back in CSS 2 there were display values for table, table-cell, table-row, etc., which meant you could make divs or other block elements lay out like tables did. Of course it wasn't supported in a certain browser with 90% market share.


> Grid doesn't have it's own element like table does, so you have to use css to apply that display to a div.

Well, OOTB, yeah. I personally like to make use of custom html elements a lot of the time for such things, such as <main-header> <main-footer> <main-content> <content-header> etc, and apply css styles to those rather than putting classes onto divs. Feels a lot more ergonomic to me. Also gives more meaningful markup in the html (and forces me to name the actual tags, so I use far fewer unnecessary ones).
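(Any tag name with a dash in it is a valid custom element even without registering anything; a quick sketch, with made-up tag names:)

    // Unregistered custom elements with a dash in the name are plain
    // HTMLElements, not HTMLUnknownElements, and style like any other tag.
    const style = document.createElement('style');
    style.textContent = 'main-header { display: block; position: sticky; top: 0; }';
    document.head.append(style);

    console.log(document.createElement('main-header') instanceof HTMLUnknownElement); // false
    console.log(document.createElement('madeuptag') instanceof HTMLUnknownElement);   // true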


One of the many things I hate about React: can't easily create custom elements that truly exist in the DOM so I can style them in CSS.


Recent versions of React round-trip custom elements much better. You just have to remember the standard's rule that all custom elements need to be named with a dash (-) inside them.
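e.g. something like this (fancy-card is a made-up tag, and this assumes a reasonably recent React setup):

    import { createElement } from 'react';

    function Card({ title, children }) {
      // the tag name needs a dash ('fancy-card') to count as a custom element
      return createElement('fancy-card', { 'data-title': title }, children);
    }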


It's more like a comic book: you define the layout and the elements slot into that. You can define how many rows and columns your comic has, and then you can make some panels fit exactly into one spot, or you can have panels that span more than one row or column. So it's more of a 2D design system.

https://l-wortley0811-dp.blogspot.com/2010/10/comic-layoutsj...


No. The table is meant to hold tabular data like a spreadsheet. It has special behavior for people who use tools like screen readers because they have vision impairment.

CSS grid is a powerful layout tool. If you think CSS sucks, I encourage you to brush up on the newer developments. Flexbox, grid, and many other newer tools solve a lot of the classic pain points with CSS and make it a pleasure to use if you invest the time to learn it.


That may work for blocking bad automated crawlers, but an agent acting on behalf of a user wouldn't follow robots.txt. They'd run the risk of hitting the bad URL when trying to understand the page.


That sounds like the desired outcome here. Your agent should respect robots.txt, OR it should be designed to not follow links.
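A minimal sketch of the first option, assuming a very naive robots.txt check (a real parser would also honour User-agent groups, Allow rules, wildcards, etc.):

    async function isAllowed(url) {
      const { origin, pathname } = new URL(url);
      const res = await fetch(origin + '/robots.txt');
      if (!res.ok) return true;                        // no robots.txt: go ahead
      const disallows = (await res.text())
        .split('\n')
        .filter(line => line.toLowerCase().startsWith('disallow:'))
        .map(line => line.slice('disallow:'.length).trim())
        .filter(Boolean);
      return !disallows.some(prefix => pathname.startsWith(prefix));
    }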


An agent acting on my behalf, following my specific and narrowly scoped instructions, should not obey robots.txt because it's not a robot/crawler. Just like how a single cURL request shouldn't follow robots.txt. (It also shouldn't generate any more traffic than a regular browser user)

Unfortunately "mass scraping the internet for training data" and an "LLM powered user agent" get lumped together too much as "AI Crawlers". The user agent shouldn't actually be crawling.


Confused as to what you're asking for here. You want a robot acting out of spec, to not be treated as a robot acting out of spec, because you told it to?

How does this make you any different than the bad faith LLM actors they are trying to block?


robots.txt is for automated, headless crawlers, NOT user-initiated actions. If a human directly triggers the action, then robots.txt should not be followed.


But what action are you triggering that automatically follows invisible links? Especially those not meant to be followed with text saying not to follow them.

This is not banning you for following <h1><a>Today's Weather</a></h1>

If you are a robot that's so poorly coded that it follows links it clearly shouldn't, links that are explicitly enumerated as not to be followed, that's a problem. From an operator's perspective, how is this different from the case you described?

If a googler kicked off the googlebot manually from a session every morning, should they not respect robots.txt either?


I was responding to someone earlier saying a user agent should respect robots.txt. An LLM powered user-agent wouldn't follow links, invisible or not, because it's not crawling.


It very feasibly could. If I made an LLM agent that clicks on a returned element, and that element was this trapdoored link, that would happen.


There's a fuzzy line between an agent analyzing the content of a single page I requested, and one making many page fetches on my behalf. I think it's fair to treat an agent that clicks an invisible link as a robot/crawler since that agent is causing more traffic than a regular user agent (browser).

Just trying to make the point that an LLM powered user agent fetching a single page at my request isn't a robot.


You're equating asking Siri to call your mom to using a robo-dialer machine.


If your specific and narrowly scoped instructions cause the agent, acting on your behalf, to click that link that clearly isn't going to help it--a link that is only being clicked by the scrapers because the scrapers are blindly downloading everything they can find without having any real goal--then, frankly, you might as well be blocked also, as your narrowly scoped instructions must literally have been something like "scrape this website without paying any attention to what you are doing", as an actual agent--just like an actual human--wouldn't find or click that link (and that this is true has nothing at all to do with robots.txt).


If it's a robot it should follow robots.txt. And if it's following invisible links it's clearly crawling.

Sure, a bad site could use this to screw with people, but bad sites have done that since forever in various ways. But if this technique helps against malicious crawlers, I think it's fair. The only downside I can see is that Google might mark you as a malware site. But again, they should be obeying robots.txt.


Your web browser is a robot, and always has been. Even using netcat to manually type your GET request is a robot in some sense, as you have a machine translating your ascii and moving it between computers.

The significant difference isn't in whether a robot is doing the actions for you or not, it's whether the robot is a user agent for a human or not.


should cURL follow robots.txt? What makes browser software not a robot? Should `curl <URL>` ignore robots.txt but `curl <URL> | llm` respect it?

The line gets blurrier with things like OAI's Atlas browser. It's just re-skinned Chromium that's a regular browser, but you can ask an LLM about the content of the page you just navigated to. The decision to use an LLM on that page is made after the page load. Doing the same thing but without rendering the page doesn't seem meaningfully different.

In general robots.txt is for headless automated crawlers fetching many pages, not software performing a specific request for a user. If there's 1:1 mapping between a user's request and a page load, then it's not a robot. An LLM powered user agent (browser) wouldn't follow invisible links, or any links, because it's not crawling.


How did you get the url for curl? Do you personally look for hidden links in pages to follow? This isn't an issue for people looking at the page, it's only a problem for systems that automatically follow all the links on a page.


Yea i think the context for my reply got lost. I was responding to someone saying that an LLM powered user-agent (browser) should respect robots.txt. And it wouldn't be clicking the hidden link because it's not crawling.


Maybe your agent is smart enough to determine that going against the wishes of the website owner can be detrimental to your relationship with that owner, and therefore to the likelihood of the website continuing to exist, so it is prioritizing your long-term interests over your short-term ones.


How does a server tell an agent acting on behalf of a real person from the unwashed masses of scrapers? Do agents send a special header or token that other scrapers can't easily copy?

They get lumped together because they're more or less indistinguishable and cause similar problems: server load spikes, increased bandwidth, increased AWS bill ... with no discernible benefit for the server operator such as increased user engagement or ad revenue.

Now all automated requests are considered guilty until proven innocent. If you want your agent to be allowed, it's on you to prove that you're different. Maybe start by slowing down your agent so that it doesn't make requests any faster than the average human visitor would.
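Even something this simple would go a long way (the numbers are arbitrary):

    const sleep = ms => new Promise(resolve => setTimeout(resolve, ms));

    async function politeFetch(urls) {
      const pages = [];
      for (const url of urls) {
        pages.push(await fetch(url).then(r => r.text()));
        await sleep(3000 + Math.random() * 2000);   // 3-5 s between requests
      }
      return pages;
    }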


Good?

