It's the other way around: headless browsers are often *easier* to detect, because detectors look for specific "fingerprints" and may even run JavaScript that only works when a real UI is present.
(Source: I did a ton of web scraping, ran into a few gnarly sites, and ended up having to write a P/Invoke-based UI Automation scraper for some properties.)
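To make the "fingerprints" point concrete, here is a hypothetical sketch of the kind of checks detection scripts run in page JavaScript. The property names (`navigator.webdriver`, `window.chrome`, `navigator.plugins`) are real browser APIs; the scoring logic and the `headlessSignals` helper are illustrative, not any specific vendor's detector.

```typescript
// Minimal shapes for the parts of navigator/window a detector inspects.
interface NavLike {
  webdriver?: boolean;
  plugins?: { length: number };
  languages?: string[];
}

interface WinLike {
  chrome?: unknown;
  outerWidth?: number;
  outerHeight?: number;
}

// Collect tell-tale signs that the page is running without a real UI.
function headlessSignals(nav: NavLike, win: WinLike): string[] {
  const signals: string[] = [];
  if (nav.webdriver) signals.push("navigator.webdriver is set"); // WebDriver spec flag, true under automation
  if (win.chrome === undefined) signals.push("window.chrome missing"); // absent in older headless Chrome
  if ((nav.plugins?.length ?? 0) === 0) signals.push("no plugins"); // headless sessions often report none
  if (!nav.languages || nav.languages.length === 0) signals.push("no languages");
  if ((win.outerWidth ?? 0) === 0 && (win.outerHeight ?? 0) === 0)
    signals.push("zero outer window size"); // no real window chrome around the viewport
  return signals;
}
```

A real browser with a UI typically yields an empty list, while a naive headless session trips several checks at once, which is exactly why "just go headless" makes scraping easier to spot rather than harder.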
> In just 11 months since the company arrived in Memphis, xAI has become one of Shelby County’s largest emitters of smog-producing nitrogen oxides, according to calculations by environmental groups whose data has been reviewed by POLITICO’s E&E News. The plant is in an area whose air is already considered unhealthy due to smog.
Had this project set a precedent of working with the community and _not_ breaking the law, I think we'd be in a better place all around.
Similarly, Amazon tried to take excess nuclear power without paying back into the electrical grid infrastructure, and was denied in 2024:
Yeah, that Politico article conveniently leaves out that the TVA - the local electricity provider - runs a methane-powered gas plant literally 200 meters down the road (which replaced a much dirtier coal-burning power station at the same location), but somehow couldn't be bothered to actually hook their neighbours up to the grid.
I presume they couldn't be bothered hooking their "neighbors" [0] up because the demand was too great, no...?
[0] "Neighbors" here means a datacenter primarily processing data for wealthy people outside of the community and their mega-companies, where the revenue from that processing primarily goes... also to wealthy people outside of the community and their mega-companies...
Datacenters are not things that just randomly appear. There are planning processes, and city stakeholders are involved - which would include the TVA. The fact that they built the thing is a good indication that the stakeholders agreed this project should go forward - and that would involve an agreement on power provisioning.
But what I strongly disagree with in the Politico article is that the datacenter is framed as a major polluter when the whole area is heavy industry, including a steelworks and a methane-burning power plant. Pinning the blame on the xAI site now smells a lot like an anti-Musk hit piece.
Doesn't mean I like the guy. I just like my journalism honest.
I don't really understand the point you're making.
It seems like you're suggesting that Politico didn't mention that the area already had pollution problems prior to xAI, but that's literally the very first sentence of the article:
> Elon Musk’s artificial intelligence company is belching smog-forming pollution into an area of South Memphis that already leads the state in emergency department visits for asthma.
It's restated in the 5th sentence:
> The plant is in an area whose air is already considered unhealthy due to smog.
The power plant down the street is mentioned in the 6th sentence:
> The turbines spew nitrogen oxides, also known as NOx, at an estimated rate of 1,200 to 2,000 tons a year — far more than the gas-fired power plant across the street or the oil refinery down the road.
So I have to deduce that your actual complaint is that the article didn't say something to the effect of, "xAI is adding a bunch of emissions, but don't worry because the people there were already well-abused by other nearby emission sources?"
> what I strongly disagree with in the politico article is that the datacenter is framed as a major polluter when the whole area is heavy industry,
The xAI data center is a major polluter. In fact it's a major polluter even in an area full of major polluters! It produces more NOx than the gigantic power plant that powers the region.
> In just 11 months since the company arrived in Memphis, xAI has become one of Shelby County's largest emitters of smog-producing nitrogen oxides, according to calculations by environmental groups whose data has been reviewed by POLITICO's E&E News. The plant is in an area whose air is already considered unhealthy due to smog.
> The turbines spew nitrogen oxides, also known as NOx, at an estimated rate of 1,200 to 2,000 tons a year — far more than the gas-fired power plant across the street or the oil refinery down the road.
The devil is in the details here. People are _already_ feeling the effects of the AI race; the consequences just aren't evenly distributed.
And if we look at the "clean" nuclear deals to power these data centers:
> The Talen agreement, however, would divert large amounts of power currently supplying the regional grid, which FERC said raised concerns about how that loss of supply would affect power bills and reliability. It was also unclear how transmission and distribution upgrades would be paid for.
The scale of environmental / social impacts comes down to how aggressive the AI race gets.
Tailscale is great. I think of it as a Swiss Army knife for easier routing and connectivity.
I use it in projects to stream internet / connectivity from my phone to the NVIDIA Jetson line, making my robotics projects easily accessible / debuggable:
That was our initial use case for Tailscale as well. May 2020 we started growing a team and needed a really smooth remote access solution for a bunch of Xaviers... and we weren't allowed to be in the same room together :)
Rerun co-founder here. Rerun doesn't have replay in the sense that you send messages in and can play back the same messages, in the same order, later. We have playback in the sense that you can play a recording back in the viewer. We also have APIs for reading data back, but they're more focused on dataframe use cases than on sending you back messages.
So I think there's an assumption you've made here, that the models are currently "60-80% as good as human programmers".
If you look at code generated by non-programmers (where you would expect to see these results!), you don't see output that is 60-80% as good as what domain experts (programmers) get when steering the models.
I think we're extremely imprecise when we communicate in natural language, and this is part of the discrepancy between belief systems.
Will an LLM model read a person's mind about what they want to build better than they can communicate?
That's already what recommender systems (like the TikTok algorithm) do.
But will LLMs be able to orchestrate and fill in the blanks of imprecision in our requests on their own, or will they need human steering?
I think that's where there's a gap in (basically) belief systems of the future.
If we truly get post human-level intelligence everywhere, there is no amount of "preparing" or "working with" the LLMs ahead of time that will save you from being rendered economically useless.
This is mostly a question about how long the moat of human judgement lasts. I think there's an opportunity to work together to make things better than before, using these LLMs as tools that work _with_ us.
When you work for most public corporations, you aren't allowed to bring personal devices linked to company servers to specific countries. You need to bring a burner device instead, because you are perceived as a target for corporate espionage.
This is like that, except the government and the type of people on the list are even better targets for their personal devices. The government has strict rules about secrecy and communication for military operations, and strong punishments for not following these protocols, because they can lead to a loss of life.
This is a different sort of "unsecure". The platform itself may be "secure", but the device isn't: it gets used in public, where someone could photograph military secrets off the screen, and so on.
It's called BYOD. Corporations have flirted with it for 10-15 years. Far too often the C-suite is allowed aristocracy-like privileges and exceptions that sacrifice and weaken the security of the organization.
Also, even for corporate-managed devices, as an example, Meta has specific requirements and procedures for taking devices to and returning them from contentious places like mainland China.
It'll let the AI platforms get around any other platform blocks by hijacking the consumer's browser.
And it makes total sense, but hopefully everyone else has done the game theory at least a step or two beyond that.