
I don't understand how side-loading would affect the information marketplaces are required to provide. If an app is side-loaded, that's no longer the marketplace's responsibility.


Good catch. Yes, side-loading directly from the developer’s website isn’t going to trigger marketplace obligations. Those obligations still exist, but they fall directly on the developer.

Under the CRA, smartphones are considered much more critical from a security standpoint and, by the end of 2027, will have to follow an enhanced set of “best practices” to enjoy a presumption of conformity. The best practices are due to be published by December of this year. I think Google already knows that developer attestations will be on that list and wants to appear proactive instead of reactive.

The point still stands - the DMA does not exist in a vacuum. Other EU laws affect how you interpret it, and you should assume that the EU will pass more laws in the future that also affect how you interpret it.


That comment was about overall slowness of the site, not a specific issue on a specific browser.

Available data confirms that SPAs tend to perform worse than classic SSR.


I'm pretty sure that if they rendered/updated the same insane number of nodes with some other technology, for example PJAX as they used to, performance would not be better.


Agreed, you can shoot yourself in the foot with pretty much any technology. But by design, it's much easier to be inefficient with SPA frameworks.
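
A toy React sketch of what I mean (component and prop names are made up): one piece of unrelated state high in the tree re-renders a huge list on every keystroke, unless you remember to opt into memoization.

    import { memo, useState } from 'react';

    // Without memo, every keystroke in the input re-renders all 10,000 rows,
    // even though their props never change. memo() lets React skip them.
    const Row = memo(function Row({ data }: { data: string }) {
      return <li>{data}</li>;
    });

    export function Page({ rows }: { rows: string[] }) {
      const [query, setQuery] = useState(''); // unrelated state, huge blast radius
      return (
        <>
          <input value={query} onChange={e => setQuery(e.target.value)} />
          <ul>{rows.map(r => <Row key={r} data={r} />)}</ul>
        </>
      );
    }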


Vibe-coding tests is nowhere near formal verification.


When typewriters spread in the late 19th century, clerks who used them were sometimes called “mechanical scribblers” or accused of doing “machine work” rather than proper clerical labor.

When adding machines and calculators appeared in offices, detractors claimed they would weaken the mind. In the mid-20th century, some educators derided calculator users as “button pushers” rather than “real mathematicians.”

In the 1980s, early adopters of personal computers and word processors were sometimes called “typists with toys.” Secretaries who mastered word processors were sometimes derided as “not real secretaries” because they lacked shorthand or dictation skills.

Architects and engineers who switched from drafting tables to CAD in the 1970s–80s faced accusations that CAD work was “cookie-cutter” and lacked craftsmanship. Traditional draftsmen argued that “real” design required hand drawing, while CAD users were seen as letting the machine think for them.

Across history, the insults usually follow the same structure:

- Suggesting the new tool makes the work too easy, therefore less valuable.

- Positioning users as “operators” rather than “thinkers.”

- Romanticizing the older skill as more “authentic” or “serious.”


A few generated unit tests don't replace formal verification. They're just not the same thing at all; it's not a manual-vs-automated-calculator question.


> A few generated unit tests don't replace formal verification.

That's a claim you're making for the first time, here. Not one I've made. Go ahead and re-read my comments to verify.


True, per-user doesn't scale.

Knowledge should be properly grouped, with access rights on databases, documents, and chatbots managed by group. For instance, a specific user can use the Engineering chatbot but not the Finance one. If you fail to define these groups, it feels like you don't have a solid strategy. In the end, if that's what they want, let them experience open knowledge.
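
A rough sketch of the shape I mean, in TypeScript (group and resource names are just examples):

    // Hypothetical group-based access check: every resource (database,
    // document set, chatbot) carries the set of groups allowed to use it.
    type Resource = { name: string; allowedGroups: Set<string> };

    const chatbots: Resource[] = [
      { name: 'engineering-bot', allowedGroups: new Set(['engineering']) },
      { name: 'finance-bot',     allowedGroups: new Set(['finance']) },
    ];

    function canUse(userGroups: string[], resource: Resource): boolean {
      return userGroups.some(g => resource.allowedGroups.has(g));
    }

    // An engineer can query the Engineering chatbot but not the Finance one.
    console.log(canUse(['engineering'], chatbots[0])); // true
    console.log(canUse(['engineering'], chatbots[1])); // false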


As if knowledge was ever that clear-cut. Sometimes you need a cross-department insight, some data points from finance may not be confidential, some engineering content may be relevant to sales support… there are endless reasons why neat little compartments like this don’t work in reality.


Yeah. If you have knowledge stored in a structured form like that, you don't need an AI...


If the organisation is so bad that finance docs are mixed with engineering docs, how do you even onboard people? Do you manually go through every single doc and decide whether the newcomer can access it?

You should see our Engineering knowledge base before saying an AI would be useless.


> It appears Cloudflare confused Perplexity with 3-6M daily requests of unrelated traffic from BrowserBase, a third-party cloud browser service that Perplexity only occasionally uses for highly specialized tasks (less than 45,000 daily requests). Because Cloudflare has conveniently obfuscated their methodology and declined to answer questions helping our teams understand

This doesn't look great, especially when you compare the attention generated by the original post with the attention this one got!


To me it doesn't look great that Perplexity uses BrowserBase at all. I asked BB's doc bot if you can customise the user agent; it says you can't, because it sets the user agent automatically _in order to bypass bot checks_.

This seems to be the only secret sauce they offer; other than that it's just a headless browser farm. So Perplexity saying "companies like Cloudflare mischaracterize user-driven AI assistants as malicious bots" is disingenuous at best; they chose to use a tool designed to mask their traffic and it blew up in their face?
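
For contrast, an ordinary headless browser lets you set an honest user agent yourself. A minimal Playwright sketch (the UA string and URL are illustrative):

    import { chromium } from 'playwright';

    // A plain headless browser lets you declare who you are up front,
    // so sites can identify and rate-limit you honestly.
    const browser = await chromium.launch();
    const context = await browser.newContext({
      userAgent: 'ExampleAssistant/1.0 (+https://example.com/bot)',
    });
    const page = await context.newPage();
    await page.goto('https://example.com');
    await browser.close();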


The very first thing they show on the website is a list of cloud providers.


I don’t think that’s a gotcha. Using a cloud provider in a way that keeps migration easy can be a valid point on the spectrum of self-hosting. The ones they list specialize in renting virts by the hour/day/month, not in lock-in services with no external equivalent.


> Sex segregation is the only way to ensure [...] women-only spaces

I know you can't be serious but still...


Having a fuzzy interpretation of configuration, and fuzzy input/output, on something designed for repeatable tasks? This doesn't sound like a great idea.

If you really want to LLM everything, I'd rather have a dedicated flag that corrects/explains the args while doing a dry-run, and another that analyzes error messages.
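
Something like this hypothetical wrapper, with the model call stubbed out (the flag names are made up):

    // Hypothetical wrapper: the tool stays deterministic, and the LLM is only
    // reachable via explicit opt-in flags. explainWithLlm is a stub standing
    // in for whatever model call you'd actually use.
    async function explainWithLlm(prompt: string): Promise<string> {
      return `LLM says: ${prompt}`; // stub
    }

    function runRealCommand(argv: string[]): void {
      // the exact, repeatable behaviour lives here
    }

    async function main(argv: string[]): Promise<void> {
      if (argv.includes('--explain')) {
        // Dry run: describe/correct the args, change nothing.
        console.log(await explainWithLlm(`Explain these args: ${argv.join(' ')}`));
        return;
      }
      try {
        runRealCommand(argv);
      } catch (err) {
        if (argv.includes('--analyze-error')) {
          console.log(await explainWithLlm(`Explain this error: ${String(err)}`));
        }
        process.exitCode = 1;
      }
    }

    main(process.argv.slice(2));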


I'm not saying whether it's a good idea or not; I'm saying it's my prediction of what will happen. My prediction is that in a few years nobody will use old-style terminals where you need to type commands exactly.


I have been online for 30 years and can't remember being affected by downtime from my ISP's DNS.

When a DNS resolver is down, it affects everything, so 100% uptime is a fair expectation, hence redundancy. It looks like both 1.0.0.1 and 1.1.1.1 were down for more than an hour, which is pretty bad TBH, especially when you advise global usage.
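
And redundancy should mean independent operators, not two addresses in front of the same infrastructure. A quick Node sketch (server choices are illustrative):

    import { Resolver } from 'node:dns/promises';

    // Fall back across *independent* operators: if the first server fails,
    // Node (c-ares) tries the next one on the list. When both 1.1.1.1 and
    // 1.0.0.1 belong to the same network, only a third party actually helps.
    const resolver = new Resolver();
    resolver.setServers(['1.1.1.1', '9.9.9.9', '8.8.8.8']);

    console.log(await resolver.resolve4('example.com'));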

The RCA is not detailed and feels like another of the marketing stunts we now get every other week.


I too have been online for 30 years, and I frequently had ISP-caused DNS issues even when I wasn't using their resolvers, because of the DNS interception fuckery they like to engage in. Before I started running my own resolver I saw downtime from my ISPs' DNS resolvers, across a few different ISPs in that time. Anecdata is great, isn't it?


I think we agree.

Your own "anecdata" shows how catastrophic DNS failures are and why it's justified to expect 100% uptime through proper redundancy. Too bad your providers didn't properly architect this key part of their infrastructure, to the point that you had to build your own solution.


Did you read the abstract? It says the exact opposite:

> this systematic review found little to no support for the hypothesis bicycle helmet use is associated with engaging in risky behaviour.


What! You’re lying!

