The current largest hydroelectric dam in the world is the Three Gorges Dam in China. It can generate 22.5 GW (40% more power than the dam in 2nd place, which is also Chinese).
Since Jan 2024, China has on average installed 23 GW of new solar capacity every month. So China has effectively been adding a "world's largest dam" worth of solar power, every single month, for the last 24 months.
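Back-of-the-envelope, using the numbers above: 23 GW/month × 24 months ≈ 552 GW of new solar, and 552 / 22.5 ≈ 24.5, i.e. roughly one Three Gorges of nameplate capacity per month.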
There are already headsets with decent text fidelity, but IMO the problem is now on the host side. I tried to get an XR desktop env running (Stardust https://stardustxr.org/) on Linux but ran into graphical issues. The Windows ecosystem is much better though.
> the android development kit really is very heavy. compared to `gcc -o main main.cpp && ./main`, it is several orders of magnitude away.
> the jetpack stuff and whatnot - the big android app shops probably do actually appreciate that stuff. but i wish the dev env 'scaled to zero' as they say, but in the sense of cognitive overload.
I tried to build a small binary that listens for events and launches/wakes an app to do some automation. But apparently there's no way to send Intents or Broadcasts from native code? So I need to boot a JVM in the binary if I want it to communicate with anything else on the system!
Of course, you can always communicate via stdio, but that's useless because everything in Android speaks Intents/Broadcasts. Native code can also do raw Binder calls, but nothing on the system speaks raw Binder.
>But apparently there's no way to send Intents or Broadcasts from native code? So I need to boot a JVM in the binary if I want it to communicate with anything else on the system!
There is "am" i think which can be invoked to do this.
However, the Termux API exists and is a nice package for calling other services. It has a scripts interface that calls the actual app over a socket. Kinda inefficient, but at least the work is done.
Yes, but the 'am' command is just a CLI Java program. At that point, it would be more efficient to just boot a JVM in the binary, to avoid paying the JVM startup cost every time an Intent/Broadcast needs to be sent.
I believe the Termux API relies on a Java/app process that runs in the background to do stuff in response to API calls. Though I guess you get it for free if you already have the API running for other reasons.
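For a one-off automation tool, shelling out to `am` from the native binary does work, JVM startup cost and all. A minimal sketch in C (the action string is a made-up example, and this assumes `am` is on the PATH, as it is in adb/Termux shells):

```c
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    /* "am" is itself a CLI Java program, so every call here pays the
     * VM startup cost discussed above. The action is hypothetical. */
    int rc = system("am broadcast -a com.example.MY_EVENT");
    if (rc != 0) {
        fprintf(stderr, "am broadcast failed (rc=%d)\n", rc);
        return 1;
    }
    return 0;
}
```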
I also wish open-source communities would move off of Discord for another reason: Users are limited to joining a maximum of 100 servers.
I've hit the cap and it's driving me crazy. It's really easy to hit it since each friend group, hobby group, gaming community, and open-source community often all have their own servers.
I can barely keep up with 6 semi-active Discord servers, each with tens of semi-active channels... much less think about doing it with hundreds. More power to you; you must have figured out a good notification scheme.
I don't really care about the notifications. I just want to read what's in the servers. Lots of communities post their announcements/links/resources in their Discord servers.
It is sometimes possible to view a Discord server without joining it, but it is painful compared to just joining the server.
I am super curious how other people use Discord. I'm like you: trying and basically failing to keep up with 6 servers. I just want to watch a power user at work, out of morbid curiosity. I suspect they are also browser-tab hoarders, which I'm also curious about.
> But you are aware that the Israeli side states that the Arabs who left Israel in 1948 did so at the behest of Arab politicians - and there is ample evidence of this. Yet, many didn't leave, and Israel became 20% Arab.
Bro really said: "the Palestinians did the nakba to themselves"...
Well, don't take my word for it. Maybe these are people that you trust more than me.
> "We brought disaster upon the refugees, by calling on them to leave their homes. We promised them that their expulsion would be temporary, and that they would return within a few days. We had to admit that we were wrong."
- Syrian Prime Minister Khalid al-Azm
> "Since 1948 we have been demanding the return of the refugees to their homes, while it is we who made them leave."
- Same guy, Syrian PM Khalid al-Azm
> "The Arab States encouraged the Palestine Arabs to leave their homes temporarily in order to be out of the way of the Arab invasion armies."
- Jordanian newspaper Falastin (Interesting fact: if I'm not mistaken, the name of this very newspaper was the first Arab use of the word Falastin, way back in 1911!)
> "The fact that there are these refugees is the direct consequence of the action of the Arab States in opposing partition and the Jewish state. The Arab States agreed upon this policy unanimously, and they must share in the solution of the problem."
Obviously you can find quotes to support such a position. Just like I can run around quoting Israeli PMs about how Palestinians are rats and how they must all be killed. You have to look at the whole of the evidence, not individual quotes.
You're correct, of course. Let's look at the Israeli declaration of independence:
> WE APPEAL - in the very midst of the onslaught launched against us now for months - to the Arab inhabitants of the State of Israel to preserve peace and participate in the upbuilding of the State on the basis of full and equal citizenship and due representation in all its provisional and permanent institutions.
> WE EXTEND our hand to all neighboring states and their peoples in an offer of peace and good neighborliness, and appeal to them to establish bonds of cooperation and mutual help with the sovereign Jewish people settled in its own land. The State of Israel is prepared to do its share in a common effort for the advancement of the entire Middle East.
> By disassembly of ptxas, it is indeed hard-coded that they have logic like: strstr(kernel_name, "cutlass").
> it is likely that, this is an unstable, experimental, aggressive optimization by NVIDIA, and blindly always enabling it may produce some elusive bugs.
Often not elusive bugs, but elusive performance. GPU compilers are hard: once you've done the basics, trying to do further transforms in a mature compiler will almost always produce mixed results. Some kernels will go faster, some will go slower, and you're hoping to move the balance without hitting any critical kernel too hard in your efforts to make another go faster.
An optimization with a universal >=0 speedup across your entire suite of tests is a really hard thing to come by. Something is always going to have a negative speedup.
My experience is with non-Nvidia GPU systems, but this feels like a familiar situation. They probably found something that has great outcomes for one set of kernels, terrible outcomes for another, and no known reliable heuristic or modeling they could use to automatically choose.
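That would also explain the kernel-name check: when there's no reliable model for who wins and who loses, a crude allowlist is sometimes the only heuristic left. A hypothetical sketch of what such a gate might look like (the function name is made up; the strstr call is the one reported in the disassembly quoted above):

```c
#include <stdbool.h>
#include <string.h>

/* Hypothetical name-based optimization gate. With no cost model to
 * predict winners and losers, fall back to an allowlist: only kernels
 * whose mangled name suggests they come from CUTLASS get the
 * aggressive pass. */
static bool enable_aggressive_pass(const char *kernel_name) {
    return strstr(kernel_name, "cutlass") != NULL;
}
```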
Speaking from a place of long-term frustration with Java, some compiler authors just absolutely hate exposing the ability to hint/force optimizations. Never mind that it might improve performance for N-5 and N+5 major releases, it might be meaningless or unhelpful or difficult to maintain in a release ten years from now, so it must not be exposed today.
I once exposed a "disableXYZOptimization" flag to customers so they could debug more easily, without stuff getting scrambled. I paid for that gesture for the next year: signing off on release updates, writing user-guide entries, bleh.
So it's better to hardcode your specific library name and deal with the same issue after people have reverse engineered it and started depending on it anyway?
The premise of removing the flag is that it's useless or a problem. If it's still causing a big speed boost somewhere then you need to figure something out, but the core scenario here is that it's obsolete.
> An optimization with a universal >=0 speedup across your entire suite of tests is a really hard thing to come by. Something is always going to have a negative speedup.
Maybe a common example of this is that people can write matrix-matrix multiplication kernels that outperform standard implementations (in BLAS on CPU, too). But that's not a General Matrix-Matrix multiply. Is the speedup still there for sparse matrices? Larger ones? Small ones? Ones that aren't powers of 2? Non-square ones? And so on. You can beat the official implementation in any one of these cases, but good luck doing it everywhere. In fact, you'd be expected to beat the official method, because you don't have the overhead of checking which optimization to use.
It's easy to oversimplify a problem and not even realize you've done so. There are always assumptions being made, and you shouldn't let them be invisible.
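A toy illustration of that overhead, nothing like real BLAS internals (the shapes, unroll factor, and function names are all invented): the specialized kernel bakes in its assumptions, while the general entry point has to spend time discovering which case it's in before it can run anything.

```c
#include <stddef.h>

/* Specialized kernel: assumes square, row-major matrices with n a
 * multiple of 4, so the inner loop can be unrolled with no edge
 * handling at all. */
static void matmul_tuned(const float *a, const float *b, float *c, size_t n) {
    for (size_t i = 0; i < n; i++)
        for (size_t j = 0; j < n; j++) {
            float s = 0.0f;
            for (size_t k = 0; k < n; k += 4) {  /* unrolled by 4 */
                s += a[i*n + k]     * b[k*n + j];
                s += a[i*n + k + 1] * b[(k + 1)*n + j];
                s += a[i*n + k + 2] * b[(k + 2)*n + j];
                s += a[i*n + k + 3] * b[(k + 3)*n + j];
            }
            c[i*n + j] = s;
        }
}

/* General entry point: multiplies an m-by-k matrix by a k-by-n matrix,
 * paying for shape checks the tuned kernel never needs. */
void matmul_general(const float *a, const float *b, float *c,
                    size_t m, size_t n, size_t k) {
    if (m == n && n == k && n % 4 == 0) {
        matmul_tuned(a, b, c, n);  /* fast path for the tuned case */
        return;
    }
    /* Fallback: plain triple loop for every other shape. */
    for (size_t i = 0; i < m; i++)
        for (size_t j = 0; j < n; j++) {
            float s = 0.0f;
            for (size_t p = 0; p < k; p++)
                s += a[i*k + p] * b[p*n + j];
            c[i*n + j] = s;
        }
}
```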
Thanks for the context; this is not my wheelhouse at all (I'd never even heard of this project), and I could not make heads or tails of the title or the linked PR.
Yeah, I too was surprised to find the dev experience very good: all JetBrains IDEs work well, Visual Studio appears to work fine, and most language toolchains seem well supported.
I suspect that's due to the GPU and not due to Prism, because they basically just took a mobile GPU and stuffed it into a laptop chip. Generally, performance seems to be on par with whatever a typical flagship Android device can do.
Desktop games that have mobile ports generally seem to run well, emulation is pretty solid too (e.g. Dolphin). Warcraft III runs OK-ish.
The GPUs don't go toe-to-toe with current-gen desktop GPUs, but they should be significantly better than the GTX 650, the mid-range desktop GPU from 2012 that the game (2019) lists as recommended. It does sound like something odder is going on than just a lack of hardware.
That something odd is called GPU drivers. Even Intel struggled to get games running on their iGPUs (they recently announced that they're dropping driver development for all GPUs older than Alchemist).