Yes, turbine blades can introduce radar clutter and affect certain military systems, but this has been known since the 1990s and has been engineered around for decades.
China, the UK, Germany, and Denmark operate gigawatts of offshore wind in close proximity to military-grade and NATO air-defense radar without much issue...
There could be new developments in the problem, though. For example, small-scale drones using these areas as entry points. Not saying that's definitely it, but I think it's not impossible that something new is under consideration.
Under other circumstances I might be inclined to give the benefit of the doubt. But when's the last time we saw any action from this admin that wasn't clearly self-dealing or ideologically motivated instead of science-based?
He has said that wind turbines cause cancer. He also said they make noise that bothers people and drives whales insane. He has also said that they kill birds, as if their spinning draws birds into the blades. But most of all, he thinks they're ugly.
There's actually some truth to the bird thing. Some of the first wind turbines in the 1980s had very short blades, 5-10 feet, and would spin at ~50 rpm, sort of like a spinning baseball bat ready to strike birds out of the air. Combined with not being very high off the ground, maybe 40 feet, birds would take off from the ground directly into the fast-spinning blades. Modern wind turbines neither look nor act like those early turbines, but that's where the data comes from. Those fast-spinning, low-to-the-ground turbines were only retired around 2017. Something like 80-95% of all bird strikes came from ~35 essentially prototype wind turbines, and virtually none come from modern, huge, slow-spinning turbines.
If privacy is your top priority, then sure, spend a few grand on hardware and run everything locally.
Personally, I run a few local models (around 30B params is the ceiling on my hardware at 8k context), and I still keep a $200 ChatGPT subscription because I'm not spending $5-6k just to run models like K2 or GLM-4.6 (they're usable, but clearly behind OpenAI, Claude, or Gemini for my workflow).
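For anyone wondering why ~30B at 8k context tops out a 24 GiB card, here's a rough back-of-the-envelope sketch. All the numbers (bits per weight for a 4-bit-ish quant, layer count, KV head geometry, overhead) are my own assumptions, not from any specific tool:

```python
def vram_estimate_gib(params_b, bits_per_weight=4.5, ctx=8192,
                      n_layers=48, n_kv_heads=8, head_dim=128,
                      kv_bits=16, overhead_gib=1.5):
    """Very rough VRAM needed to serve a quantised model (assumption-heavy).

    params_b        -- parameter count in billions
    bits_per_weight -- effective bits/weight for a Q4_K-ish quant
    ctx             -- context length in tokens
    KV cache size   -- 2 (K and V) * layers * kv_heads * head_dim * bytes * ctx
    """
    weights_gib = params_b * 1e9 * bits_per_weight / 8 / 2**30
    kv_gib = 2 * n_layers * n_kv_heads * head_dim * (kv_bits / 8) * ctx / 2**30
    return weights_gib + kv_gib + overhead_gib

# A ~30B model, 4-bit-ish quant, 8k context:
print(round(vram_estimate_gib(30), 1))  # → 18.7
```

So a 30B quant plus KV cache and runtime overhead lands in the high teens of GiB, which leaves little headroom on a 24 GiB card; a 70B won't fit without offloading.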
I got excited about aescoder-4b (a model that specializes in web design only) after its DesignArena benchmarks, but it falls apart on large codebases and is mediocre at Tailwind.
That said, I think there's real potential in small, highly specialized models, like a 4B model trained only for FastAPI, Tailwind, or a single framework. Until that actually exists and works well, I'm sticking with remote services.
That's around 350,000 tokens a day. I don't track my Claude/Codex usage, but Kilocode with the free Grok model does, and I'm using between 3.3M and 50M tokens a day (plus additional usage in Claude + Codex + Mistral Vibe + Amp Coder).
I'm trying to imagine a use case where I'd want this. Maybe running some small coding task overnight? But it just doesn't seem very useful.
Essentially migrating codebases, implementing features, referencing existing code, and writing the tests and automation scripts needed to verify that the code changes are okay. Over 95% of those tokens are reads, since there's often a need for a lot of consistency and iteration.
It works pretty well if you’re not limited by a tight budget.
I only run small models (a 70B model on my hardware gets me around 10-20 tokens/s) for random things (personal-assistant kind of stuff), but not for coding tasks.
For coding-related tasks I consume 30-80M tokens per day, and I want something as fast as possible.
Hard disagree. The difference in performance is not something you'll notice if you actually use these cards. In AI benchmarks, the RTX 3090 beats the RTX 4080 SUPER despite the latter's native BF16 support; memory bandwidth plays a major role (936 GiB/s on the 3090 vs 736 GiB/s on the 4080). The 3090 is also the last NVIDIA consumer card to support SLI.
It's also unbeatable in price to performance: the next-best 24 GiB card is the 4090, which, even used, is almost triple the price these days while only offering about 25-30% more performance in real-world AI workloads.
You can basically get an SLI-linked dual 3090 setup for less money than a single used 4090 and get about the same or even more performance and double the available VRAM.
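The price-to-performance claim is easy to sanity-check. The street prices and scaling factor below are my own placeholder assumptions, not quotes:

```python
# Hypothetical used street prices (assumptions) and relative LLM throughput,
# with the 3090 as the 1.0 baseline and the 4090 at ~1.3x.
cards = {
    "RTX 3090 (24 GiB)": {"price_usd": 700, "rel_perf": 1.00},
    "RTX 4090 (24 GiB)": {"price_usd": 2000, "rel_perf": 1.30},
}

for name, c in cards.items():
    perf_per_kusd = c["rel_perf"] / c["price_usd"] * 1000
    print(f"{name}: {perf_per_kusd:.2f} relative perf per $1000")

# A dual-3090 box: less than 4090 money, double the VRAM, and comparable or
# better aggregate throughput (assuming imperfect ~1.8x multi-GPU scaling).
dual_price = 2 * cards["RTX 3090 (24 GiB)"]["price_usd"]  # $1400
dual_vram_gib = 48
dual_rel_perf = 1.8
```

Even if you haggle the 4090 price down, the dual-3090 setup keeps winning on perf-per-dollar and on VRAM, which is usually the binding constraint for local LLMs.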
If you run fp32, maybe, but no sane person does that. The tensor performance of the 3090 is also abysmal. If you run bf16 or fp8, stay away from obsolete cards. It's barely usable for LLMs and borderline garbage-tier for video and image gen.
> The tensor performance of the 3090 is also abysmal.
I for one compared my 50-series card's performance to my 3090 and didn't see "abysmal performance" on the older card at all. In fact, in actual real-world use (quantised models only, no one runs big fp32 models locally), the difference in performance isn't very noticeable at all. But I'm sure you'll be able to provide actual numbers (TTFT, TPS) to prove me wrong. I don't use diffusion models, so there might be a substantial difference there (I doubt it, though), but for LLMs I can tell you for a fact that you're just wrong.
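For what it's worth, TTFT and TPS are easy to measure yourself rather than argue about. A minimal sketch; the `fake_stream` generator is a hypothetical stand-in for whatever token stream your inference server returns:

```python
import time

def measure(stream):
    """Return (ttft_seconds, tokens_per_second) for an iterable of tokens."""
    start = time.perf_counter()
    ttft = None
    count = 0
    for _ in stream:
        now = time.perf_counter()
        if ttft is None:
            ttft = now - start  # time to first token
        count += 1
    total = time.perf_counter() - start
    tps = count / total if total > 0 else 0.0
    return ttft, tps

# Stand-in stream: 100 tokens arriving ~1 ms apart.
def fake_stream(n=100, delay=0.001):
    for i in range(n):
        time.sleep(delay)
        yield f"tok{i}"

ttft, tps = measure(fake_stream())
print(f"TTFT: {ttft*1000:.1f} ms, throughput: {tps:.0f} tok/s")
```

Point the loop at your actual streaming client instead of `fake_stream` and you get comparable numbers for any two cards.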
To be clear, we are not discussing small toy models, but to be fair I also don't use consumer cards. Benchmarks are out there (Phoronix, RunPod, Hugging Face, or Nvidia's own presentations), and they show at least 2x at high precision and nearly 4x at low precision, which is comparable to the uplift I see on my 6000 cards. If you don't see the performance uplift everyone else sees, there is something wrong with your setup and I don't have the time to debug it.
> To be clear, we are not discussing small toy models but to be fair I also don't use consumer cards.
> if you don't see the performance uplift everyone else sees there is something wrong with your setup and I don't have the time to debug it.
Read these two statements and think about what might be the issue. I only run what you call "toy models" (good enough for my purposes), so of course your experience is fundamentally different from mine. Spending 5 figures on hardware just to run models locally is usually a bad investment. Repurposing old hardware OTOH is just fine to play with local models and optimise them for specific applications and workflows.
Well, Congress renewed the Patriot act so I don't have a lot of faith. Personally I'm starting to think that all of Congress, including aides, should get cycled out all at once periodically so that their internal culture of hating the masses gets broken.
Fair point. The USA PATRIOT Act shouldn't have existed in the first place. But one of its most controversial parts (Section 215) did expire in 2020 (only barely, though).
But you’re right overall: most of the Act’s powers were repeatedly renewed or re-created under other laws.
Sunset clauses aren’t a silver bullet, but they do occasionally stop or slow things that would otherwise become permanent.
Angela Yu's course is good. I told my brother about it, and he had a hard time figuring some stuff out because some pieces were outdated (he has zero experience).
Honestly, I feel like it's easy to filter good courses from bad ones on Coursera by things like the university name (although I've got to admit the last time I did a course there was 3 years ago).
> If we went back to recording lectures by the world's best and putting it online for free with attached books and exercises, we could improve the world a lot.
MIT OpenCourseWare still uploads a lot of its lectures to YouTube for free (it's been doing so for decades), and I'm pretty sure some other universities do the same.
The main problem with online courses is the lack of "direction" and engagement (which neither Udemy nor Coursera solves).
Stanford's YouTube channel is also a goldmine. There are actually a lot of professors who put their lectures on YouTube; you just have to look (or watch enough that the algorithm finds them for you).
I find the Stanford channel a bit hard to navigate. MIT has a dedicated channel for its courses and another for everything else; I wish Stanford did the same.
The same applies to Harvard too