If Wayland had simply provided the API but not the underlying implementation (compositors could have handled the implementation themselves), it could have sidestepped this whole issue of fragmentation.
Accessibility is dead to me on Wayland... at least until I can emulate key and mouse presses in other windows, register global hotkeys without resorting to hacks, and enumerate window titles and executables.
Wayland fragmented development efforts on Linux. It's gotten a little better as some compositors consolidate, but it's still pretty awful to cobble together a stack that supports the features above.
For those in the US: in some states like Minnesota, if you have long-term care insurance, assets can be sheltered equal to the amount of the LTC policy. This allows you to qualify for Medicaid with significant assets beyond the house, vehicle, and other traditionally exempt assets. The sheltered assets can then be used for legacy or personal needs.
“This allows you to mold soft plastics (which you can buy from the company, also called Saltgator, or a hobby shop or fishing-lure supplier) and mold a piece up to 250mL (8.4 oz) in volume in about 15 minutes.”
Yes. While we’ll offer our own optimized SoftGel formula, SALTGATOR is not locked to proprietary materials. You can experiment with other low-temperature thermoplastics that match our safety and flow specs.
Purely in terms of BOM, it's cheaper on both the GPU and the packaging: less die space, less PCB, fewer lanes, less testing, etc. However, the cost of actually implementing PCIe 6 and 7 will also be higher, so the two will likely cancel out.
I think the N100 and N150 suffer the same weakness for this type of use case, in the context of SSD storage and 10 Gb networking. We need a next-generation chip that offers more PCIe lanes at roughly the same power efficiency.
I would deduct points for a power supply that is built in, non-modular, and non-standard. It's not fixable, and it's not comparable to Apple in quality.
Being able to do bi-directional editing is really interesting. It's desperately needed for accessibility software to get at the underlying text; I would use it for semantic editing of code by voice. Getting at GUI elements on top of that would be amazing.
The site owner can allow such crawlers. There is the issue of bad actors pretending to be these types of crawlers, but that could already happen to a site that wants to allow Google search crawlers but not Gemini training-data crawlers, for example, so there's already a strong incentive to solve that problem.
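For example, one way to tell an allowed crawler from an impostor today is the reverse-DNS check that Google documents for Googlebot: reverse-resolve the client IP, confirm the hostname belongs to googlebot.com or google.com, then forward-resolve it back to the same IP. A rough Node/TypeScript sketch (not Cloudflare's actual mechanism, and it ignores IPv6 and multiple A records):

    import { reverse, lookup } from "node:dns/promises";

    // Sketch only: verify a client claiming to be Googlebot via Google's
    // documented reverse-DNS / forward-DNS round trip.
    async function isVerifiedGooglebot(ip: string): Promise<boolean> {
      try {
        const hostnames = await reverse(ip);    // e.g. "crawl-66-249-66-1.googlebot.com"
        const host = hostnames.find(h =>
          h.endsWith(".googlebot.com") || h.endsWith(".google.com"));
        if (!host) return false;
        const { address } = await lookup(host); // forward lookup must return the same IP
        return address === ip;
      } catch {
        return false;                           // DNS failure: treat as unverified
      }
    }

On the robots.txt side, Google already exposes separate tokens for search crawling (Googlebot) and for AI-training opt-out (Google-Extended), so the policy split described above is expressible today.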
How would an individual user use a "crawler" to navigate the web exactly? A browser that uses AI is not automatically a "crawler"... a "crawler" is something that mass harvests entire web sites to store for later processing...
How can you tell the difference, in a way that can't be spoofed?
This is a genuine question, since I see you work at CF. I'm very curious what the distinction will be between a user and a crawler. Is trust involved, making spoofing a non-issue?
I don't personally work on bot detection, and I don't know exactly what techniques they use.
But if you think about it: crawlers are probably not hard to identify, as they systematically download your entire web site as well as every other site on the internet (a significant fraction of which is on Cloudflare). This traffic pattern is obviously going to look extremely different from a web browser operated by a human. Honestly, this is probably one of the easiest kinds of bots to detect.
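To make the traffic-pattern point concrete, here is a toy heuristic, not anything Cloudflare actually ships: a client that fetches hundreds of distinct URLs in a short window looks very different from a person reading a page or two. The window size and threshold below are arbitrary.

    // Toy sketch: flag clients that request an implausible number of
    // distinct paths within a sliding window.
    const WINDOW_MS = 10 * 60 * 1000;   // 10-minute window (arbitrary)
    const MAX_DISTINCT_PATHS = 300;     // arbitrary "no human reads this fast" threshold

    const seen = new Map<string, { since: number; paths: Set<string> }>();

    function looksLikeCrawler(clientIp: string, path: string, now = Date.now()): boolean {
      let entry = seen.get(clientIp);
      if (!entry || now - entry.since > WINDOW_MS) {
        entry = { since: now, paths: new Set<string>() };
        seen.set(clientIp, entry);
      }
      entry.paths.add(path);
      return entry.paths.size > MAX_DISTINCT_PATHS;
    }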
Not OP of course, but I think there's a clear way forward.
An LLM accessibility browser is a bot, so bot detection sounds like the wrong approach to me. What's more important than bot detection is "actual real user" detection, of which bot detection is only part.
If the control software runs on a user's local device, things like TPMs can offer a device-bound signature for remote attestation. Virtual TPMs don't have root certificates signed by TPM/CPU makers, so they're not useful for building trust: a CPU shared between hundreds of other VMs somewhere in a cloud will not provide unique TPM verification. AI scrapers would therefore have to switch to having botnets do the scraping rather than just using them as proxies, and even then they couldn't get away with hacked routers (which lack TPMs).
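Roughly, the verifier side of that attestation flow would look something like the sketch below. Real TPM attestation (TPM2 quotes, EK/AK certificates, PCRs) is considerably more involved, and checkChainToVendorRoot here is a hypothetical helper standing in for X.509 chain validation against the TPM vendors' published roots.

    import { createVerify, randomBytes, X509Certificate } from "node:crypto";

    interface AttestationResponse {
      akCertPem: string;   // attestation-key certificate presented by the device
      signature: Buffer;   // signature over the server's nonce, made with that key
    }

    // Fresh challenge per request so a captured response can't be replayed.
    const makeNonce = (): Buffer => randomBytes(32);

    function verifyAttestation(
      nonce: Buffer,
      resp: AttestationResponse,
      checkChainToVendorRoot: (cert: X509Certificate) => boolean, // hypothetical helper
    ): boolean {
      const akCert = new X509Certificate(resp.akCertPem);
      // 1) The certificate must chain to a genuine TPM vendor root,
      //    which a virtual TPM in a shared cloud VM cannot present.
      if (!checkChainToVendorRoot(akCert)) return false;
      // 2) The nonce must be signed by the key in that certificate,
      //    binding this request to that physical device.
      const v = createVerify("sha256");
      v.update(nonce);
      return v.verify(akCert.publicKey, resp.signature);
    }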
There's a huge downside to this, of course, and that's basically handing control over who gets to use the internet to a few TPM companies that can lock you out whenever they please. If there's any way to tie this remote attestation system to you as a person, this puts tremendous power in the hands of the US government (see what happened to the ICC judge investigating the genocide over in Gaza) as they can force American companies to banish you.
I don't think the internet should develop in this direction, but with CAPTCHA failing to block bots and with AI scrapers ruining the internet, I don't see things going any other way.
Now that Cloudflare is putting a monetary value on bypassing its blocks for shitty AI scrapers, you can bet there will be an industry of underpaid IT workers figuring out how to bypass CF's bot detection for a competitive market rate.
I don't agree with this. A browser, operated by a human user, is not a bot. Adding LLM-powered accessibility features to a browser does not make it a bot.
We already have ARIA, which is far more deterministic and should already be present on all major sites. AI should not be used, or necessary, as an accessibility tool.
If only site authors would actually use ARIA. Not everything is a div, and italic text is not for spawning emoji… the state of semantic content and ARIA is not good right now. AI shouldn't be necessary, but it is.
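To illustrate the "everything is a div" problem: a real <button> is focusable, exposes the right role, and activates on Enter/Space for free, while a clickable div has to bolt all of that on by hand. A quick TypeScript/DOM sketch of what the retrofit involves:

    // Everything a clickable <div> must add to behave like a native button
    // for screen readers and keyboard users.
    function makeDivActLikeButton(div: HTMLDivElement, onActivate: () => void): void {
      div.setAttribute("role", "button"); // expose the right role to assistive tech
      div.tabIndex = 0;                   // make it keyboard-focusable
      div.addEventListener("click", onActivate);
      div.addEventListener("keydown", (e) => {
        if (e.key === "Enter" || e.key === " ") onActivate(); // buttons activate on Enter/Space
      });
    }
    // Or just use document.createElement("button") and get all of this for free.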
There's plenty of people who don't bother with ARIA and likely never will, so it's good to have tools that can attempt to help the user understand what's on screen. Though the scraping restrictions wouldn't be a problem in this scenario because the user's browser can be the one to pull down the page and then provide it to the AI for analysis.