This assumes that ETs are deliberately transmitting high power signals towards us (or into space in general), although I'm not sure that is a reasonable assumption. I think it would generally be unwise to loudly announce a civilization's presence.
According to ChatGPT, our current Earth-based radio telescopes would only be able to detect signals equivalent to Earth's radio leakage out to a distance of about 1 light year.
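For intuition, the inverse-square law alone shows how quickly leakage fades; here's a rough sketch with assumed numbers (the ~1 MW isotropic transmitter is just a stand-in for a strong broadcast source, not a figure from anywhere):

```python
import math

LY_M = 9.461e15   # metres per light year
P_LEAK_W = 1e6    # assumed effective isotropic power of Earth's leakage

def flux_w_per_m2(power_w: float, distance_ly: float) -> float:
    """Flux density S = P / (4*pi*d^2) for an isotropic source."""
    d_m = distance_ly * LY_M
    return power_w / (4 * math.pi * d_m ** 2)

s_1ly = flux_w_per_m2(P_LEAK_W, 1.0)
s_acen = flux_w_per_m2(P_LEAK_W, 4.37)  # distance to Alpha Centauri in ly

# The same signal is 4.37^2 (about 19x) weaker at Alpha Centauri than at 1 ly.
print(s_1ly / s_acen)
```

So a receiver that tops out at 1 light year needs roughly 19x more sensitivity (or collecting area) just to reach the nearest star system.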
It's mostly not reasonable to try and ascribe human motivations to alien entities, particularly when we know some humans would definitely fire up the transmitter if they could.
The presence, or current lack, of alien signals at the very least bounds estimates of local population density and of the energy scale any neighbours are operating at. Currently there are no nearby Kardashev Type I civilizations.
Yes, there are arguments for and against sending out a strong deliberate "we're here" signal. But I guess you could also argue that the possible danger in announcing our presence is fairly well mitigated by the speed of light, as there are unlikely to be any other advanced civilizations within a few light years.
I heard this as well, from scientists at JAXA who gave a presentation on SETI.
When it came time for questions, I asked: if we were at Alpha Centauri, could we detect signals from Earth with the tech we currently have? They said "No". That was 2019. Maybe the tech is better today?
Could you imagine something as big as Arecibo was, or FAST is, floating in space? That'd be impressive. Would a constellation set up more like the VLA be possible? Keep increasing its size with Starlink-like launches?
There's a proposal for a large constellation of small, cheap-to-build radio sats. Heard about it on a Fraser Cain podcast. They plan to send them to one of the Lagrange points and spread them out in an x km cube pattern. They also want some of the sats to do RF processing on-site and beam just the "results" back.
What's the point? The frequencies being listened to on these radio telescopes aren't affected by the atmosphere. Arecibo in space doesn't get you anything that Arecibo on the ground didn't (except hurricane resistance, I guess).
One exception is the far side of the moon to get away from radio noise. But other than that, there's no reason to put a radio telescope in space.
Once that constellation is fully expanded as intended, the planned Chinese constellation joins it, and other nations (India? the EU?) pile on, things will get even noisier.
The far side of the moon offers hope, but it's still a lot of additional awkwardness and expense that could be avoided with better attention to "the commons".
I was always skeptical about these studies, as it wasn't clear if there were proper controls. There are many ways in which the results could be an artefact of the experiment, and the article details a few that I didn't even realise.
The problem is when the exercise is excessive and/or the depression is caused by stress/overload. In that case the exercise can lead to burnout/CFS/overtraining, resulting in worse depression as well as debilitating physical symptoms.
From what I can gather from the article, it does seem to be entirely legal. He never forcibly removes the squatters, just makes their life annoying and difficult, and they usually choose to leave. He also doesn't attack anyone, and only uses the weapons for self defence.
I think any property owner could do the same, but it's just a risk that they don't want to take. Who wants to get up close to a (potential) knife wielding meth addict?
Yeah. I think the additional trick is that squatters often have a fraudulent lease. That makes it owner vs. tenant, and the police have orders to err on the side of not facilitating an illegal eviction. The owner could attempt to owner-occupy the property, but there's no document for that and there is a lease. So when the police show up, the owner is very likely to be the one removed or arrested.
The sword guy makes it tenant vs. tenant, so neither party has that formal advantage. Of course the police know the game, but they're generally happy with the workaround.
Although I think the more interesting question is whether sunbed use increases or decreases overall mortality. The only study I can find is Lindqvist's:
Overall, sunbed use was associated with reduced all-cause mortality, with a ratio of 0.77 or 0.87 depending on the model used. It increased the risk of developing MM, and the risk of dying from MM, although all-cause mortality was not increased even in patients diagnosed with MM. (This seems to be because the overall risk of MM mortality is very low, and the overall health benefit of UV exposure appears to outweigh the small increase in MM risk.)
I've noticed that in recent months, even apart from these outages, Cloudflare has been contributing to a general degradation and shittification of the internet. I'm seeing a lot more "prove you're human" and "checking to make sure you're human" pages, and there is normally at the very least a delay of a few seconds before the site loads.
I don't think this is really helping the site owners. I suspect it's mainly about AI extortion:
You call it extortion of the AI companies, but isn’t stealing/crawling/hammering a site to scrape their content to resell just as nefarious? I would say Cloudflare is giving these site owners an option to protect their content and as a byproduct, reduce their own costs of subsidizing their thieves. They can choose to turn off the crawl protection. If they aren't, that tells you that they want it, doesn’t it?
If someone has a robots.txt and I want to request their page, but I want to do that in an automated way, should I open a browser to do it instead of issuing a curl request? What about if I'm going to ask Claude to fetch the page for me?
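For the scripted case, one polite convention is to consult robots.txt before fetching. A stdlib sketch (the rules and user-agent name here are made up; in practice you'd load the live file with `set_url()` + `read()`):

```python
from urllib import robotparser

rp = robotparser.RobotFileParser()
# Real usage: rp.set_url("https://example.com/robots.txt"); rp.read()
# Here we parse hypothetical rules directly, to keep the example offline:
rp.parse([
    "User-agent: *",
    "Disallow: /private/",
])

# Ask whether a given user agent may fetch a given path:
print(rp.can_fetch("my-crawler", "/public/page"))   # True
print(rp.can_fetch("my-crawler", "/private/page"))  # False
```

Whether a one-off curl on behalf of a human "counts" as a crawler is exactly the grey area being argued about; robots.txt only expresses the site's preference, it enforces nothing.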
Oh for sure. When he wrote of the AI companies that are "stealing/crawling/hammering", you thought he meant the legitimate ones that do honor robots.txt. That makes sense.
Actually, it looks like all the major ones do honour robots.txt, including Perplexity. They seemingly get around it using Google SERPs, so they're not actually crawling or hammering the site servers (or even Cloudflare).
I'm guessing you don't manage any production web servers?
robots.txt isn't even respected by all of the American companies. Chinese ones (which often also use what are essentially botnets in Latin American and the rest of the world to evade detection) certainly don't care about anything short of dropping their packets.
I have been managing production commercial web servers for 28 years.
Yes, there are various bots, and some of the large US companies such as Perplexity do indeed seem to be ignoring robots.txt.
Is that a problem? It's certainly not a problem with CPU or network bandwidth (the load is very minimal). Yes, it may be an issue if you are concerned about scraping (which I'm not).
Cloudflare's "solution" is a much bigger problem that affects me multiple times daily (as a user of sites that use it), and those sites don't seem to need protection against scraping.
It is rather disingenuous to backpedal from "you can easily block them" to "is that a problem? who even cares" when someone points out that you cannot in fact easily block them.
I was referring to legitimate ones, which you can easily block. Obviously there are scammy ones as well, and yes it is an issue, but for most sites I would say the cloudflare cure is worse than the problem it's trying to cure.
But is there any actual evidence that any major AI bots are bypassing robots.txt? It looked as if Perplexity was doing this, but after looking into it further it seems that likely isn't the case. Quite often people believe single source news stories without doing any due diligence or fact checking.
I haven't been in the weeds in a few months, but last time I was there we did have a lot of traffic from bots that didn't care about robots. Bytedance is one that comes to mind.
No you cannot! I blocked all of the user agents on a community wiki I run, and the traffic came back hours later masquerading as Firefox and Chrome. They just fucking lie to you and continue vacuuming your CPU.
There shouldn't be any noticeable CPU hit from bots on a site like that. Are you sure it's not a DDoS?
Obviously it depends on the bot, and you can't block the scammy ones. I was really just referring to the major legitimate companies (which might not include Perplexity).
I've been seeing more of those "prove you're human" pages as well, but I generally assume they are there to combat a DDoS or some other type of attack (or maybe AI bots). I remember how annoying it was combating DDoS attacks or hacked sites before Cloudflare existed. I also remember how annoying CAPTCHAs were, everywhere. Cloudflare is not perfect, but on net I think it's been a great improvement.
There are more and more sites I can't even visit because of this "prove you're human" check, since it's not compatible with older web browsers, even though the website it's blocking is.
The pay-per-crawl thing is about them thinking ahead to post-AI business/revenue models.
The way AI happened, it removed a big chunk of revenue from news companies, blogs, etc., because lots of people go to AI instead of visiting the actual third-party website.
AI companies currently get the content for free from those third-party websites, but they have revenue from their own users.
So Cloudflare is proposing that AI companies should be paying for their crawling. Cloudflare's solution would give the lost revenue back where it belongs, just through a different mechanism.
The ugly side of the story is that an open-source solution for this already existed: L402 (l402.org).
Cloudflare wants to be the first to take a piece of the pie, but instead of using the open-source version, they forked it internally and published it as their own Cloudflare-specific service.
To be completely fair, L402 requires you to solve the payment mechanism yourself, which for Cloudflare is easy because they already deal with payments.
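For anyone curious what that flow looks like on the wire, here's my minimal sketch of the L402 challenge/response shape, based on my reading of l402.org: the server answers unauthenticated requests with HTTP 402 plus a macaroon and an invoice, and the client retries with proof of payment. Macaroon verification and the actual payment are omitted entirely, and the placeholder values are made up:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class L402Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        auth = self.headers.get("Authorization", "")
        if auth.startswith("L402 "):
            # A real server would verify the macaroon and payment
            # preimage here before serving the content.
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"paid content\n")
        else:
            # Challenge: 402 Payment Required, with a macaroon to hold
            # and an invoice to pay (both placeholders here).
            self.send_response(402)
            self.send_header("WWW-Authenticate",
                             'L402 macaroon="<opaque>", invoice="<lnbc...>"')
            self.end_headers()
```

The point is that the HTTP side is the easy half; whoever terminates the request (Cloudflare, in their version) still has to settle the payment behind that invoice.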
> I've noticed that in recent months, even apart from these outages, cloudflare has been contributing to a general degradation and shittification of the internet. I'm seeing a lot more "prove you're human", "checking to make sure you're human", and there is normally at the very least a delay of a few seconds before the site loads.
Looking into this more, it does indeed seem to be a Cloudflare problem. It looks like Cloudflare made a significant error in their bot fingerprinting, and Perplexity wasn't actually bypassing robots.txt.
To be honest, I find Cloudflare a much more scammy company than Perplexity. I had a DDoS attack a few years ago which originated from their network, and they had zero interest in it.
That study only says that most Americans think they interact with AI at least a few times a week (it doesn't say how, or whether it's intentional). And it also says the vast majority feel they have little or no control over whether AI is used in their lives.
For example, someone getting a Google search result containing an AI response is technically interacting with AI, but not necessarily making use of its response or even wanting to see it in the first place. Or perhaps someone suspects their insurance premiums were decided by AI (whether that's true or not). Or customer service that requires you to go through a chatbot before you get real service.