I like the thinking, and dependencies are a weight that makes at least personal projects harder to maintain long term, even if they help in the short term.
This usually doesn't fly with clients/customers, but what you control needs to be highly maintainable and simple, and whatever works for you to achieve that is good. For personal projects or internal products, a framework with massive dependencies just isn't easier to maintain long term than simple web standards and market-standard formats like HTML/CSS/JSON/Markdown/etc.
My only complaint is the lack of capitalization in the content; so many tech folks/devs do this, just don't. Even Sam Altman...
How dare you not capitalize in a capitalist system. /s
Eventually with quantum computing or other advancements, someone will break the encryption and potentially swipe the part of Satoshi's coin.
Bitcoin, and other crypto in general even more so given the higher concentration of early owners, will always be precarious because of this concentration. Whoever has control of the early issued coins holds leverage that is dangerous and has extortion properties.
Satoshi owns 5% of bitcoin. Other crypto coins are more concentrated and more problematic. This isn't like a large institutional investor in a public stock; this is a large percentage of an entire currency.
> Eventually with quantum computing or other advancements, someone will break the encryption and potentially swipe the part of Satoshi's coin.
Based on what? The search space is large enough that if you used the smallest amount of energy possible to check keys, all the energy in the known universe would still give you an astronomically small chance of success.
Breaking the private keys to the concentrated wallet(s) targeted, not the entire system. The easiest way is finding/stealing the keys, but eventually over time compute does break encryption and keys made with those algos are no longer "secure". There may even be ways to target earlier keys more easily than later ones. It could take decades but it will happen.
Are you suggesting as processing/compute increases, encryption doesn't get weaker from previous algorithms? 40-bit SSL certificates, Triple-DES encryption, and MD5 + SHA-1 hashing would like a word. AES-256 could outlast the universe, but that is based on our current knowledge, and sometimes encryption systems have doors, not only in the algorithm but in the tooling that does the encrypting... the creators of the bitcoin tools used for the keys may also be a weak link, or may even have put in doors themselves as a failsafe; humans tend to do that due to game theory.
Encryption is a balance of compute/processing for encryption and decryption, too intense and the system is computationally too heavy. So with that, over time all encryption will be able to be broken at some point, as history has shown so far.
Even if that holds, someone finding the keys or tracking them down might be faster, and most likely will happen as time goes on.
The point being mainly that too much concentration in any financial system is a time bomb.
> Breaking the private keys to the concentrated wallet(s) targeted, not the entire system
This doesn't make sense. I was talking about what it takes to brute force a single key.
> but eventually over time compute does break encryption and keys made with those algos are no longer "secure".
This is not true. You are misunderstanding the orders of magnitude differences in modern encryption from some weak schemes of the past.
> Are you suggesting as processing/compute increases, encryption doesn't get weaker from previous algorithms?
I don't think you understand what it means to need an entire universe of energy, using the smallest unit of energy for computation, and still have an astronomically small chance of brute forcing the keys.
The fact that some of the first algorithms used for unrelated purposes were weak has nothing to do with what you are claiming. Your logic is basically "some encryption from 40 years ago was weak, therefore all encryption is weak."
> Encryption is a balance of compute/processing for encryption and decryption, too intense and the system is computationally too heavy.
The encryption and decryption speed is not a factor here.
> So with that, over time all encryption will be able to be broken at some point, as history has shown so far.
This is completely wrong. You are extrapolating off of something that isn't a pattern in the first place. No one thought triple DES would last forever. This is like someone saying 'we moved on from 32-bit RAM addresses so we will eventually move off of 64-bit and 128-bit to 256 bits'. Orders of magnitude don't work that way. 32 bits gives you 4 gigabytes, 64 bits gives you 18 exabytes, and 128 bits is enough to give an address to every bit of data ever created.
Your comment seems more like someone reading headlines and news articles instead of actually understanding what they are claiming.
> I was talking about what it takes to brute force a single key.
If you have some insight into the tool that created the key you could; lots of systems have doors by design, typically from creators or regulation for export.
My main point though was that these keys will probably be found in the future. If they aren't broken, then they'll actually be found, and that much concentration is too much. It creates a rug pull for an entire currency ecosystem. Other crypto coins are even worse in this aspect.
> You are misunderstanding the orders of magnitude differences in modern encryption from some weak schemes of the past.
You are basing this on modern tech. Making the same mistakes of people of the past. Right now, I said AES-256 would take longer than the universe has existed; I get the orders of magnitude. I just think people base these ideas off of the present, not the future.
> "some encryption from 40 years ago was weak, therefore all encryption is weak."
Do you believe that in 40 years we won't have advancements that may make this statement look silly? Right now they are secure; we don't know what is to come.
That is beside the point though; the keys are dangerous, as they are a concentration of leverage/power over not just a stock, but a currency...
> You are extrapolating off of something that isn't a pattern in the first place. No one thought triple DES would last forever.
You are making the same mistake with time: you don't know what is to come, and the past has shown previous algorithms actually lasted LESS time than expected. It does play into it.
Let's simplify this because you are lost in the weeds and resorting to ad hominems.
Do you think it is a good idea that a currency has keys out there, that can be found either directly or with time, that have heavy concentration?
Is concentrated unknown wealth of a currency, the root of all financial systems and power, a good idea?
> My main point though was that these keys will probably be found in the future.
That's not at all what you said at first. You didn't say the keys would probably be found; you said that with quantum computing someone will break the encryption, which is based on nothing. Here it is verbatim:
> Eventually with quantum computing or other advancements, someone will break the encryption and potentially swipe the part of Satoshi's coin.
> You are basing this on modern tech. Making the same mistakes of people of the past.
You aren't getting this. This isn't a "what if computers are faster in the future" scenario. You aren't going to brute force a search space of this size with all the energy from all the stars in the universe.
> You are making the same mistake with time: you don't know what is to come, and the past has shown previous algorithms actually lasted LESS time than expected. It does play into it.
No, I actually understand the search space of large key lengths instead of hallucinating a fantasy future. Even when DES was created people debated it being too weak.
You can go back a generation and read articles about cars so big they have their own wood shop, future cities full of flying cars and robot servants. That stuff was all more practical than what you are talking about.
This would not be a conversation if you understood what you are saying.
> Let's simplify this because you are lost in the weeds and resorting to ad hominems.
Pointing out that you have huge misunderstandings is not 'ad hominem'.
> Do you think it is a good idea that a currency has keys out there, that can be found either directly or with time, that have heavy concentration?
> Is concentrated unknown wealth of a currency, the root of all financial systems and power, a good idea?
This has nothing to do with what I'm trying to tell you.
You originally said that "quantum computers will be able to break satoshi's keys" and I'm trying to explain to you why that is naive and uninformed.
If you assume powerful quantum computers then Bitcoin is dead, that is a straightforward result.
The digital signatures that prevent others from spending your bitcoins are based on elliptic curve cryptography (ECC). The security of elliptic curve cryptography is based on the hardness of the discrete logarithm problem (DLP). A sufficiently powerful quantum computer can use a variant of Shor’s algorithm to solve the DLP in runtime polynomial in the key size (my research indicates O(n^3) in key size more or less), giving you the private key behind a bitcoin wallet in a very tractable amount of time.
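To put that O(n^3) figure in perspective, here is a rough back-of-the-envelope comparison for a 256-bit curve like secp256k1, using the commonly cited Pollard's rho cost on the classical side. Treat both sides as order-of-magnitude sketches, not hardware estimates:

```latex
\begin{align*}
\text{Classical (Pollard's rho):} &\quad O\!\bigl(\sqrt{2^{256}}\bigr) = O\!\bigl(2^{128}\bigr) \ \text{group operations} \\
\text{Quantum (Shor variant):}    &\quad O\!\bigl(n^{3}\bigr) \approx 256^{3} \approx 1.7\times 10^{7} \ \text{quantum operations}
\end{align*}
```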
Though everything else they are saying about backdoors or design issues is wild speculation, a powerful quantum computer absolutely would allow you to spend anybody’s, including Satoshi Nakamoto’s, bitcoins.
Single-use P2PKH addresses are quantum safe, since the public key is not revealed publicly until spending, just its hash. QC breaks ECDSA but not SHA256.
Even those are at risk if the key can be cracked in a matter of minutes, since it takes 10 mins on average from publishing your spending transaction to it getting mined, and the attacker can doublespend it with a much larger fee.
This is true. Leaving coins at rest is safe, but moving them before the threat is understood might be risky. Widespread opt-in RBF enforcement could mitigate the risk to some degree, if miners cooperate and shun full RBF after a quantum attack. In the worst case, one might need to submit their "exit" transactions directly to a non-evil miner in order to avoid revealing the pubkey before confirmation. Ideally, this will all be figured out ahead of time, and most non-"lost" coins will be moved over to post-quantum UTXOs before the risk is serious.
Having just read up on it, sure. But that is a very restricted use case as you could only use your wallet for a single send transaction and that has already happened for the specific case of Satoshi's wallet.
I believe you could scaffold up a system even with a 1-send limit that transparently functions the same as what currently exists since you can issue transactions to multiple parties within a single send, but that largely kills Bitcoin as normally used. All but the most sophisticated users would be required to hand over control of their wallet to actually manage the massive proliferation of addresses needed to act as if you have more than a 1-send limit. But sure, you are technically correct that there exists a very narrow use case which you can probably hijack aggressively enough to salvage the system if you tried hard enough.
You're right that Satoshi's coins are at risk (but because they're using the older P2PK, not due to key reuse), and I agree that this would lead to some amount of chaos and transformative disruption.
> users would be required to hand over control of their wallet to actually manage the massive proliferation of addresses needed
Ah, I did not previously know that there were a plural number of Satoshi wallets. I previously read that Hal Finney was the first recipient of bitcoin, which came from Satoshi Nakamoto, and assumed that there was just a single Satoshi wallet, which would mean there is key reuse.
> You originally said that "quantum computers will be able to break satoshi's keys"
I said "Eventually with quantum computing or other advancements, someone will break the encryption and potentially swipe the part of Satoshi's coin."
As one part of my message. Now read the second, longer part.
Summary: "Whoever has control of the early issued coins, holds a leverage that is dangerous and has extortion properties." Not just for Bitcoin either.
What I was getting at was the concentration part, and because of the amount, the desire to find Satoshi's (and other early crypto) keys will be immense, whether that comes from technology or from physically locating them.
Those keys are locked in earlier encryption algorithms and will get easier to attack over time, maybe a long time, but still.
The longer the time, actually, the more concentration it may represent, depending on many factors, but still.
The concentration problems have also been seen in other areas like hosted wallets and shared mining sites/services. Scenarios for control of large amounts would be hosted wallet sites being compromised and collecting keys, or even exploits/holes being used without the keys and then a broad push issued across many accounts at once, or even slowly.
Concentration in wealth, currency, and banking is always a problem. In newer financial markets with less regulation there are always more gaps, across many facets from technology to processes and tools.
Most of the time when someone says 'we don't know' they really are talking about themselves.
People do know. There has been 100 years of cryptography and there are billions at stake. Hand waving and saying 'anything can happen in the future' with no plan, no details, no facts and no evidence is basically tech astrology.
Here's a challenge - find a cryptography expert that agrees with you.
Do you think they'd be biased to answer in a certain way?
Additionally every cryptography expert knows the system is only as good as the keys not being found, and that can come from other means not just breaking the algorithm or brute force... it can be how the key was created and what tool was used.
With time all encryption will be broken, we may be gone by then but maybe something comes along that changes the game. History is filled with leaps that were not expected. The early keys will get weaker and weaker over time, that is fact.
In any case, you are focusing on the wrong thing. I was talking about this because I'm concerned about the concentration in currency as the problem, not necessarily the encryption/keys.
> Do you think they'd be biased to answer in a certain way?
What are you even talking about? You are already accusing a theoretical cryptography expert of being "biased" against you? Do you think that might mean what you're saying isn't rooted in reality?
> Additionally every cryptography expert knows the system is only as good as the keys not being found,
That isn't what is being talked about here, isn't what I replied to and isn't what your claims were. Now you keep trying to shift the goal posts to something else instead of confronting that what you said before was absurd.
> With time all encryption will be broken,
Prove it. Actual experts do not say this. Why do you keep repeating this with zero evidence? Repeating your claims over and over doesn't make them any less ridiculous.
> In any case, you are focusing on the wrong thing
No, I'm responding to things you said and you keep trying to distract from them instead of admitting there is no evidence for what you said.
More than anything, I'm fascinated when someone makes an outrageous claim, someone gives them evidence that it is completely false, they give zero evidence that backs it up, yet they dig in, repeat their claim, distract from it and try everything to not just admit they don't actually know what they're saying.
This was my main point: "Bitcoin, and other crypto in general even more so given the higher concentration of early owners, will always be precarious because of this concentration. Whoever has control of the early issued coins holds leverage that is dangerous and has extortion properties."
> You are already accusing a theoretical cryptography expert of being "biased" against you?
What are you talking about? Cryptographers would be biased toward their field, like yourself, about their systems being incapable of being broken. It isn't just about breaking algorithms...
However, some are even saying we have to start worrying about advancements by 2030-2040:
"One of the most important quantum computing algorithms, known as Shor's algorithm, would allow a large-scale quantum computer to quickly break essentially all of the encryption systems that are currently used to secure internet traffic against interception"
"Large universal quantum computers could break several popular public-key cryptography (PKC) systems, such as RSA and Diffie-Hellman, but that will not end encryption and privacy as we know it."
"The most widely used PKC systems, including RSA, Diffie-Hellman, and ECDSA, rely on the intractability of integer factorization and discrete log problems. These problems are hard for classical computers to solve, but easy for quantum computers."
"This means that as soon as a large-scale universal quantum computer is built, you will not be able to rely on the security of any scheme based on these problems."
"To quantify the security of cryptosystems, "bits of security" are used. You can think of this as a function of the number of steps needed to crack a system by the most efficient attack. A system with 112 bits of security would take 2112 steps to crack, which would take the best computers available today billions of years. Algorithms approved by NIST provide at least 112 bits of security."
"AES-128 and RSA-2048 both provide adequate security against classical attacks, but not against quantum attacks. Doubling the AES key length to 256 results in an acceptable 128 bits of security, while increasing the RSA key by more than a factor of 7.5 has little effect against quantum attacks."
"When large-scale universal quantum computers are built, you will still be able to securely use symmetric encryption algorithms, but not the systems like RSA and Diffie-Hellman. These PKC systems are widely used today to create digital signatures or to securely transmit symmetric encryption keys."
"Fortunately, there are several families of quantum-resistant PKC systems: Lattice-based, code-based, hash-based, isogeny-based, and multivariate systems. NIST's Report on Post-Quantum Cryptography describes each of these families."
Encryption will still exist with more compute and new systems, but it will evolve. That doesn't mean keys from the past that aren't updated will hold up.
> Additionally every cryptography expert knows the system is only as good as the keys not being found,
I like how you cut out that sentence to disregard the context...
The rest is "and that can come from other means not just breaking the algorithm or brute force... it can be how the key was created and what tool was used."
> Actual experts do not say this. Why do you keep repeating this with zero evidence?
Again, let's get the full quote, not the biased selective clip you made for your context: "With time all encryption will be broken, we may be gone by then but maybe something comes along that changes the game. History is filled with leaps that were not expected. The early keys will get weaker and weaker over time, that is fact."
If you have a problem with that statement you have a problem.
I gave examples you brushed off. You can agree to disagree, but historically most crypto either is broken or even has trapdoors for export, so you don't need to break the algorithms; you might just need info on the tools. Try using any non-approved encryption algorithm for communicating with defense/military and you'll get a visit from the FBI.
> More than anything, I'm fascinated when someone makes an outrageous claim, someone gives them evidence that it is completely false, they give zero evidence that backs it up, yet they dig in, repeat their claim, distract from it and try everything to not just admit they don't actually know what they're saying.
I am fascinated as well when someone entirely disregards the point of the post and tries to tell others they know everything. I even said it might take longer than lifetimes, or even the lifetime of the universe, to break the algorithms, yet you still can't get past that point. Quite fascinating indeed.
> No, I'm responding to things you said and you keep trying to distract from them instead of admitting there is no evidence for what you said.
No, I already alluded to the time situation; it doesn't matter much to the main point of my comment.
The concentration of currency in digital currencies is a problem and makes the people that own it leverageable or too powerful.
The longer it takes to find/break the keys, the more the value will potentially be worth...
Yes, that is my entire point. You just laser-focused on cryptographic algorithms and not all the things around them. The first sentence of my first comment was a bit salacious, but a lead-in to the dangers of concentration in currency, and the power the early owners have, or that others want to take from them.
Yes, I do believe cryptographers know that not all tools and keys will stand the test of time, especially keys made in 2008... just as cyber security people know that even with the best security there are always dependency holes, social engineering, and tools that can be trojan horses.
The point of my comment, not shifting goalposts, was that concentration in digital currency is a problem, and an even bigger problem with large swaths of it in keys floating around out there, to be found physically or by other means.
You seem a bit combative, and you are starting in with the selective context clipping, so let's just agree to disagree on the rest. You have been successful in completely derailing the main point... if that was your goal, good job!
Think about what you're saying for a second. You made specific claims that I copied and pasted and keep repeating them with zero evidence. You have admitted and demonstrated you don't know anything about cryptography.
Instead of deferring to experts who spend huge amounts of time researching how to weaken cryptography, you claim they all must be biased and stick to your conclusion (based on nothing) that all cryptography will be broken in the future by computers that don't exist (which you also don't know anything about).
This is conspiracy level thinking.
Bitcoin's encryption is elliptic curve. It was chosen specifically because of all the stuff you copied and pasted. That has been known for multiple decades. Researchers have entire academic careers based around writing papers and going to conferences trying to find the smallest theoretical weaknesses in any algorithm out there.
Stop trying to deflect and let go of the conspiracy theories of trying to make your conclusion first and then hallucinate rationalizations.
Now you are into ad hominems. You are completely lost. You can't acknowledge the topic nor the point of concentration in currency, which was 80% of my entire point. You are shadowboxing and really have that strawman on the ropes.
Nice job distracting from the OP even about concentration and early owners of Bitcoin.
> Bitcoin's encryption is elliptic curve.
Did you just learn this? The point is that processing power at the quantum level already starts to threaten some of the encryption methods, and early keys are definitely at risk over time. Additionally, there is motive to find holes in early tools so that someone could unlock all that lost bitcoin... over time.
Did you ignore everything like this?
"AES-128 and RSA-2048 both provide adequate security against classical attacks, but not against quantum attacks. Doubling the AES key length to 256 results in an acceptable 128 bits of security, while increasing the RSA key by more than a factor of 7.5 has little effect against quantum attacks."
Since you are so singularly focused, combative, and black-and-white on this, and since you don't account for future probabilities over time and unknowns, you seem to fully think today's encryption will never be broken by advancements in decades or longer, which cryptographers fear could happen, as I just shared with you, including programs at NIST researching this.
Let's get you on record...
Do you think encryption methods today will hold up over time 100%?
Do you think early bitcoin keys from 2008 will never be broken (disregarding tools and being found which is more likely)?
See if you can contain yourself to what topic you wanted to talk about and double down on your take, answer the questions.
That wasn't even the point but let's get this for future generations to giggle at.
This is a classic playbook of people who keep claiming something with no evidence. They try to divert to something else and they try the "I don't like how you're saying it" move.
Pointing out that you have no idea what you're talking about is not ad hominem. Ad hominem would be something irrelevant to the topic like "you're fat so you don't know about cryptography".
> The point is that processing power at the quantum level already starts to threaten some of the encryption methods, and early keys are definitely at risk over time
You have grossly misunderstood (again). Quantum computers haven't threatened anything new.
AES was first proposed 26 years ago and has never been broken. Quantum computers only reduce the theoretical key lengths. This has been known for multiple decades and is why key lengths have been increased. Again, it has never been cracked; 256-bit keys are used just to cover a theoretical threat decades or centuries in the future, with no clear path to get there.
There is zero evidence to back up what you are saying. There are no cryptography experts that agree with what you're saying. It is just you making something up.
If you have any evidence at all, go ahead and link it.
I completely disagree with your limited-focus take on this, aside from the main point of the comment, and you still are not taking into account what others are saying, which I shared.
You are very focused on "winning" rather than the topic of concentration in currencies in the digital space, whether those keys are found, solved or some future system or hole is able to break them.
Good debate, but I feel you were debating and shadowboxing yourself mostly, on some side point that I guess you "won". I answered all your questions and provided sources to back them up. You still refuse to acknowledge that.
Can the keys be broken now? No. Will they? According to you... NEVER!
Since you still won't answer these questions for our future observers, I take it you think they will never be broken.
Let's get you on record...
Do you think encryption methods today will hold up over time 100%? According to you YES!
Do you think early bitcoin keys from 2008 will never be broken (disregarding tools and being found which is more likely)? According to you YES!
Ok, glad to get you on record. I work on probabilities and the fact that we don't know all the parts. Is there a probability that these keys will one day be broken? YES. A high probability, with lots of time? YES. Even higher if the values of these early coins/keys are multiples of what they are today? YES.
We can agree to disagree on this point without you going into ad hominems again on some side point. Where there is loot and prizes, some will be very motivated to find a way to get at those keys, either by finding them, finding holes in the tools used to make the keys, or, with lots of time, breaking the algorithms or brute forcing them.
I work in games and no matter how well you hide things, players will find the holes. It is actually quite amazing when you see it. Never underestimate the human with tools and intel/tracks. I am sure you will misinterpret this but it is true.
> Is there a probability that these keys will one day be broken? YES. A high probability, with lots of time? YES. Even higher if the values of these early coins/keys are multiples of what they are today? YES.
Again, this is you repeating your claim. Repeating your claim isn't evidence. You haven't given any numbers, explanations, information from expert cryptographers or any external links at all.
No evidence doesn't mean it didn't or won't happen. There is a very large canyon between something happening and evidence. There you have to go off of history, timeline, motive (large piles of money get things to happen) and more.
Glad you could go on record and show you are an absolutist not a probabilist. Even cryptography itself is probabilistic. There are no absolutes in time except change.
You also skipped these two questions:
- Do you understand what diversion from the point is?
- Do you think Satoshi is Nick Szabo?
We are so far deep in this distraction that we have run out of room to reply without it being a line of vertical text.
Let's agree to disagree. I'll let you have the last word on this diversion.
The identity probably isn't one person but there are people that stand out.
Nick Szabo has been mentioned as potentially being Satoshi for some time [1], due to his working on decentralized currencies since 1998 with Bit Gold [2].
Elon Musk mentioned Nick Szabo on Lex Fridman's podcast, as Szabo was also doing Bit Gold prior and is heavily into crypto and currencies. [3]
> "Obviously I don't know who created bitcoin ... it seems as though Nick Szabo is probably more than anyone else responsible for the evolution of those ideas," said Musk, adding, "he claims not to be Nakamoto ... but he seems to be the one more responsible for the ideas behind it than anyone else."
> Szabo is best known as the inventor of one of bitcoin’s predecessors, Bit Gold, and digital smart contracts—which eventually evolved to become a key part of the ethereum blockchain. Szabo has previously denied he's Satoshi Nakamoto, telling financial author Dominic Frisby in 2014, "I'm afraid you got it wrong doxing me as Satoshi, but I'm used to it."
Szabo's full name is Nicholas Szabo [4]. Just seems quite a bit like Satoshi Nakamoto. It feels like there is something there, potentially.
He uses his initials for his pseudonyms (N.S.):
Nakamoto Satoshi = Bitcoin
Nicholas Saberhagen = Monero
Bit Gold was maybe to test out interest, then when the real one was made it would be more anonymous. If you think about it, bitcoin being anonymous is a feature, as it makes it seem less centralized even though Satoshi owns a large chunk of it. Today, companies or individuals control a large chunk of other coins/platforms and it feels more centralized or even autocratic. At any time those big fish could wreak havoc. Satoshi seems more hands off.
Another potential reason is that the money it would generate and the recognition would attract too much attention. A successful decentralized currency like bitcoin might have been foreseen as a threat to its creator after it takes off and gains in value.
From the wiki on Nick Szabo, he is more 'reclusive' and does not want to be known. On his blog he mentioned his intent to create a live version of the currency, as Bit Gold was more of a prototype/demo and was never launched. Even the names Bit Gold and bitcoin are very similar, as are the names Nicholas Szabo (N.S.) and Satoshi Nakamoto (S.N.) in a few ways. Satoshi Nakamoto always seems like a purposeful shroud of a name; looking for someone by that name is probably not going to find them:
> Nathaniel Popper wrote in The New York Times that "the most convincing evidence pointed to a reclusive American man of Hungarian descent named Nick Szabo." In 2008, prior to the release of bitcoin, Szabo wrote a comment on his blog about the intent of creating a live version of his hypothetical currency. [1]
Hal Finney was the first to receive 10 bitcoins from Satoshi Nakamoto [5][6]. Hal Finney was the next employee after Phil Zimmermann at PGP, so he knew the potential for being pursued by governments over software creations. Hal, who unfortunately died in 2014, probably knew Satoshi and would have known he was staying shrouded/anonymous for good reasons, as seen in the PGP history just before that and around the same time Bit Gold was being created.
The most likely people to be Satoshi Nakamoto are Nick Szabo and Hal Finney, due to the early interactions and transactions, and potentially Dorian Satoshi Nakamoto, but it seems unlikely they would use their real name [7]. Maybe it was all three or someone else entirely; these guys were just around in the early days and for some of the first transactions. Either that, or someone or some group saw the need for decentralized currency from their efforts and then front-ran them and made it seem more like them to help shroud themselves.
It wasn't just 'digital currencies' Szabo pioneered either. He did the earliest work on smart contracts too. Smart contracts are multi-disciplinary spanning law, cryptography, economics, programming, accounting, and game theory. It's the kind of insight only a polymath would have. People who only know a little about Bitcoin think it's like a currency. But built inside it is a scripting system that lets you program the conditions for moving around coins. This allows for incredibly elaborate extensions to be made to the system without having to change its core rules.
Satoshi was also interested in smart contracts. In fact, there was early work in the Bitcoin code base where Satoshi had designed an entire online market system. I've never seen the code for this myself or what features it had. But to me this is so far ahead of everyone else, and solidifies so much of what would be built on top of blockchain systems, that it's scary. So Satoshi built the groundwork for practically the next 10 years' worth of complex, technical discussions. Satoshi even knew that time-locked transactions would be useful, so it was part of Bitcoin too.
> How can anything be sent with no connection? What is a connection?
Think of a wired network (TCP) vs radio/television broadcast (UDP). There really never was a "connection"; it is just a logical concept/abstraction that means an endpoint can be reached. When no data is sent/received, there is no connection.
Broadcast or UDP is a "connection" but more like a tuner. UDP datagrams are sent out, don't need to be ACK'd and may never be received; they just broadcast to where you tell them to go. They might even be received out of order, where you can discard previous ones if they aren't needed. Note: you can do reliable UDP and ACK any important messages with a UDP datagram back (a rough sketch of this idea is below). Most highly available real-time systems and games use UDP with some sprinkle of RUDP when needed. Example: player positions or actions across the level don't matter much; they can be received and rendered or not. Global state like the level starting or ending, you want to ACK back to unify the simulation on important states. Any critical message you mark for ACK, thus the "reliable" part; it also handles discarding out-of-order messages, which happens with UDP broadcast.
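Here's a minimal sketch of that reliable-UDP idea, assuming Node.js and its built-in dgram module. The peer is assumed to echo back `{ ack: seq }` for reliable packets; the field names, port, and timings are just illustrative:

```typescript
import dgram from "node:dgram";

type Packet = { seq: number; reliable: boolean; data: string };

const socket = dgram.createSocket("udp4");
const pending = new Map<number, NodeJS.Timeout>(); // reliable packets still waiting for an ACK
let nextSeq = 0;

function send(data: string, reliable: boolean, port: number, host: string) {
  const pkt: Packet = { seq: nextSeq++, reliable, data };
  const buf = Buffer.from(JSON.stringify(pkt));
  socket.send(buf, port, host); // fire-and-forget, no connection
  if (reliable) {
    // Keep resending every 200 ms until the peer ACKs this sequence number.
    pending.set(pkt.seq, setInterval(() => socket.send(buf, port, host), 200));
  }
}

socket.on("message", (msg) => {
  const reply = JSON.parse(msg.toString());
  if (reply.ack !== undefined && pending.has(reply.ack)) {
    clearInterval(pending.get(reply.ack)!); // confirmed, stop retransmitting
    pending.delete(reply.ack);
  }
});

// Player positions: unreliable, a lost or out-of-order packet doesn't matter.
send(JSON.stringify({ x: 10, y: 20 }), false, 9000, "127.0.0.1");
// Global state like "level ended": mark it reliable so it gets ACK'd.
send("level_end", true, 9000, "127.0.0.1");
```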
Wired or TCP is more like a stream: it has a "connection" and handles all the ordering, verification and ACKs back for you. This has lots of overhead and isn't great for gaming beyond turn-based or simple networking; it works great for sending a web page or file though, because all parts are necessary.
Streaming or SCTP-like protocols are really RUDP with more standards around them. It is a combination of TCP where needed and UDP, and can be direct or broadcast.
All types of "connections" are really virtual/logical connections not actual connections.
Gaffer on Games also has some great overviews of these topics and is a must-read, like Beej's.
Very cool! Slightly different though in that 3dsen seems to be adding depth to sprites and a camera view outside the player, while tom7's puts you in the game on the level either top down or side view, walking in it as the player.
I suppose it depends on where you place the camera, on the player like tom7 "WolfNEStein 3D" or from a viewpoint far enough back to view a portion of the level like 3dsen.
3dsen's approach is more playable and is really neat for depth in 3D and VR. Could be cool in AR as well.
> WebGPU or WebGL are also more straightforward and a good starting point.
WebGL and WebGPU also show some of the differences in how rendering libraries have evolved. They used to be all "stateful" global state, like OpenGL/OpenGL ES. WebGL followed that model as it was simple and needed for the time, but it could have bugs if the correct flags weren't set.
WebGPU and other newer rendering libraries (Vulkan, Metal, and Direct3D 12) are more "modern" in that they have almost no global state. They are also more raw and lower level and take a bit more to grok.
This is one of the best overviews of the differences between WebGL and WebGPU, and it also mirrors how things evolved from OpenGL to Vulkan, Metal, and Direct3D 12.
> The biggest difference is WebGL is a stateful API and WebGPU is not. By that I mean in WebGL there is a bunch of global state. Which textures are currently bound, which buffers are currently bound, what the current program is, what the blending, depth, and stencil settings are. You set those states by calling various API functions like `gl.bindBuffer`, `gl.enable`, `gl.blendFunc`, etc…, and they stay what you set them globally until you change them to something else.
> By contrast, In WebGPU there is almost no global state. Instead, there are the concepts of a pipeline or render pipeline and a render pass which together effectively contain most of the state that was global in WebGL. Which textures, which attributes, which buffers, and all the various other settings. Any settings you don’t set have default values. You can’t modify a pipeline. Instead, you create them and after that they are immutable. If you want different settings you need to create another pipeline. render passes do have some state, but that state is local to the render pass.
> The second-biggest difference is that WebGPU is lower level than WebGL. In WebGL many things connect by names. For example, you declare a uniform in GLSL and you look up its location
> Another example is varyings, in a vertex shader you use `varying vec2 v_texcoord` or `out vec2 v_texcoord` and in the fragment shader you declare the corresponding varying naming it `v_texcoord`. The good part of this is if you mistype the name you’ll get an error.
> WebGPU, on the other hand, everything is entirely connected by index or byte offset. You don’t create individual uniforms like WebGL, instead you declare uniform blocks (a structure that declares your uniforms). It’s then up to you to make sure you manually organize the data you pass to the shader to match that structure.
> Note: WebGL2 has the same concept, known as Uniform Blocks, but WebGL2 also had the concept of uniforms by name. And, even though individual fields in a WebGL2 Uniform Block needed to be set via byte offsets, (a) you could query WebGL2 for those offsets and (b) you could still look up the block locations themselves by name.
> In WebGPU on the other hand EVERYTHING is by byte offset or index (often called ‘location’) and there is no API to query them. That means it’s entirely up to you to keep those locations in sync and to manually compute byte offsets.
For a time, supporting four rendering engines did cause lots of work for game engines, much more integration and abstraction.
As OpenGL support fades, at least one will drop off. I will miss it, as I do still love OpenGL/WebGL. OpenGL and OpenGL ES / WebGL in particular opened up mobile/web gaming in ways never before possible. Prior to that you had Director (3D), Flash (Papervision/Away3D/etc), Silverlight and more recently `<canvas>`. Canvas is great for smaller games, but you need raw power for rendering 3D, and WebGL (almost a direct port of OpenGL ES) brought that; engines like three.js use it well. Mobile gaming became the biggest gaming market due to OpenGL ES, web games took a leap with WebGL, and apps, interactives and tools got faster rendering.
With GL, in many cases the global state is simpler, but to take advantage of GPUs and lower-level rendering, the innovations were needed. The move from naming to index/position-based binding, for instance, is lower level and can also end up causing bugs, just as the global state in GL could. The benefit is performance and cleaner global state.
It is probably a good idea to learn OpenGL/WebGL, as some of the concepts in WebGPU/newer engines will be clearer; much of it was simpler with naming.
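As a rough illustration of the stateful-vs-pipeline difference described in that overview (shader and buffer setup omitted; the entry point names and formats are placeholders, not taken from the article):

```typescript
// WebGL: global, mutable state -- settings stay bound/enabled until changed.
function webglStyle(gl: WebGLRenderingContext, buf: WebGLBuffer, prog: WebGLProgram) {
  gl.useProgram(prog);
  gl.bindBuffer(gl.ARRAY_BUFFER, buf);
  gl.enable(gl.DEPTH_TEST);
  gl.enable(gl.BLEND);
  gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA);
  // ...subsequent draw calls inherit whatever state is currently set.
}

// WebGPU: the same choices are baked into an immutable pipeline object;
// different blending or depth settings mean creating another pipeline.
function webgpuStyle(device: GPUDevice, shader: GPUShaderModule, format: GPUTextureFormat) {
  return device.createRenderPipeline({
    layout: "auto",
    vertex: { module: shader, entryPoint: "vs_main" },
    fragment: {
      module: shader,
      entryPoint: "fs_main",
      targets: [{
        format,
        blend: {
          color: { srcFactor: "src-alpha", dstFactor: "one-minus-src-alpha" },
          alpha: { srcFactor: "one", dstFactor: "one" },
        },
      }],
    },
    depthStencil: { format: "depth24plus", depthWriteEnabled: true, depthCompare: "less" },
    primitive: { topology: "triangle-list" },
  });
}
```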
The latest hints from Vulkan and D3D12 developments (VK_EXT_Shader_object[0], D3D12 Work Graphs[1]) suggest there might be an industry move away from pipelines and towards alternative solutions.
Good info, interesting. It does add a layer of complexity that takes a bit more to handle. The point of moving away from stateful/named binding was to prevent global state and bugs, but you can also run into those in other ways, as it is somewhat like going from keyed data to offset/position-based binary data. Rebuilding pipelines also seems heavy.
Maybe the solution is a lower-level API (the current one) and a higher-level API (somewhat stateful/named but not so global). That does add extra weight though.
D3D10/11 is the sweet spot IMHO. It splits render pipeline state into a small number of immutable state group objects instead of granular render state toggles (like OpenGL or that new Vulkan extension), or an all-in-one rigid pipeline state object (like WebGPU or Vulkan).
Those D3D11 state objects are roughly: rasterizer-state, blend-state, depth-stencil-state, (vertex-)input-layout, and one shader object per shader stage.
> somehow got Apple and Intel to buy a small piece of the IPO (700m combined).
Well Apple makes more sense considering M* chips are ARM based and they have signed until 2040, though that could change at some point.
Intel might be using this as more a way to get "intel" so to speak. Not a bad idea to have some investment leverage.
I could see ARM after going public increasing the cost of licensing and some of the larger players would be ok with that to prevent competitors. Seems like lots of leverage plays here.
WebAssembly and WASI, like WebRTC/WebGPU/WebXR/WebAudio, just make webdev, gamedev and native/networking very, very interesting in this phase of technology, where JS frameworks are culty, bloated/verbose and apps are the main thing for marketing/tools. Web apps and tools are opening up with wasm/wasi.
Runno (https://runno.dev/ + https://runno.dev/wasi) is a great idea and helps make interesting native stuff in a sandbox locally. Running all this directly without having to mash into assembly is a great idea. Awesome job on this!
> Runno helps you make runnable code examples that can be embedded in web pages. This is very handy for educational tools, it means:
> - You can make code examples that don't need users to install scary tools.
> - No need to run a server, it all runs client-side in the browser.
> - Your users can edit and re-run any code examples you make.
> - The examples are extremely customisable with HTML, CSS and JavaScript!
Yet I am still waiting to see a game with the same capabilities as Infinity Blade, which Apple used in 2011 as an example of a game showing off the newly acquired OpenGL ES 3.0 capabilities of iDevices.
Or the Unreal Engine 3.0 citadel demo, also in 2011, for Flash/CrossBridge.
It seems the only thing usable is running ShaderToy demos, and 3D views on ecommerce sites.
That expectation is off by nearly a decade. WASM games in terms of performance are closer to 2000 than 2010. Here's an open source indie game from 2003 that I ported to WASM [1]. It struggles to hit 60 FPS on devices that were released a good 15 years after the game itself. Browsers sometimes struggle to animate simple lines at 60 FPS still to this day, because they're just hugely massive platforms with thousands of moving parts with no room for big, complex apps on top of them.
Glancing at the project, one area where things could go terribly wrong is the translation layer from GL 1.x to WebGL. For instance 'tricks' that are common on native GL implementations like buffer orphaning are actually harmful on WebGL and may cause lock stalls which would cause exactly the issues you describe (e.g. struggling to hit a consistent frame rate despite extremely low CPU and GPU load).
Even though WebGL and WebGL2 are implementations of the GLES2 and GLES3 APIs, you usually can't just take a non-trivial GLES2/3 code base and expect it to work without hiccups on WebGL; what's going on under the hood is just too different.
A couple of years back I also wrote a pure WebGL renderer for Neverball levels [1]. No physics, no translation to WebGL, just the 3D renderer part in direct WebGL. It also has a surprisingly low performance ceiling. I'd say I managed even worse than what the gl4es translation layer does for Neverball. By far the biggest performance boosts were instanced arrays and vertex array objects - basically just reducing the number of calls into WebGL. Seems to me that WebGL has a lot of overhead with or without a translation layer.
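For anyone curious, the instanced-arrays trick mentioned above looks roughly like this in WebGL2. A sketch only: the attribute location and per-instance offset layout are illustrative assumptions, not taken from the Neverball renderer:

```typescript
// Draw many copies of the same mesh with ONE call instead of one call per copy.
function drawInstanced(
  gl: WebGL2RenderingContext,
  vao: WebGLVertexArrayObject,   // already holds the mesh's vertex attributes
  instanceBuf: WebGLBuffer,
  offsets: Float32Array,         // xyz offset per instance
  vertexCount: number
): void {
  gl.bindVertexArray(vao);       // one bind restores all attribute state
  gl.bindBuffer(gl.ARRAY_BUFFER, instanceBuf);
  gl.bufferData(gl.ARRAY_BUFFER, offsets, gl.DYNAMIC_DRAW);
  // Attribute 3 carries the per-instance offset; advance it once per instance.
  gl.enableVertexAttribArray(3);
  gl.vertexAttribPointer(3, 3, gl.FLOAT, false, 0, 0);
  gl.vertexAttribDivisor(3, 1);
  const instanceCount = offsets.length / 3;
  gl.drawArraysInstanced(gl.TRIANGLES, 0, vertexCount, instanceCount);
}
```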
I did some GL benchmarking for simple draw calls (on desktop) a couple of years ago, and while it's true that WebGL came out lowest, the differences between native drivers were much bigger. I basically checked how many trivial draw calls (16 bytes uniform update + draw call) were needed on Windows before the frame rate drops below 60 fps, and for WebGL in Chrome this was around 5k, Intel native was around 13k, and NVIDIA topped out at around 150k (not all that surprising, since NVIDIA has traditionally had the best GL drivers).
It is definitely true though that you need to employ old-school batching tricks to get any sort of performance out of WebGL, but that's also not surprising, because WebGL is only a GL by name; internally it works entirely differently (for instance on Windows, WebGL implementations are backed by D3D).
Let's be clear what Flash was though. It was a native binary that would paint to a region owned by the browser. So yes, it was performant because basically the only thing it was doing differently than any other native application was how it was hosted by the browser.
That's also why it was such a security nightmare. It was a complete escape from the browser sandbox.
It was a great development experience, destroyed with pitchforks and lanterns by the folks that, a decade later, have failed to deliver a sound alternative.
Hence why the gaming industry is focusing on streaming instead.
Flash was destroyed by Apple, which had no interest in improving the web either. The iOS platform turned out to be a pretty good replacement for Flash though.
Sure, but what's the point if it's not supported by Safari. Adobe Air was at best a porting aid to get your old Flash code wrapped in an iOS app, but the writing was on the wall and nobody in their right mind would start new Flash projects.
PNaCl was a joke though, it suffered from much longer client-side compilation times than both asm.js and WASM ever had, and performance was at most on par with asm.js.
(NaCl was better in those areas, but required stamping out one binary for each target CPU architecture.)
There's definitely something strange going on here. I'm getting about 3x as much CPU load on the WASM version as compared to native. Still low enough for a solid 144fps on my machine, but there shouldn't be this much overhead.
The number of calls needs to be lower for a WebGL application. You have to use as much instancing as possible. The security layer makes the calls the slowest part of the pipeline. That's why you see amazing things in shadertoy. When the whole shebang is in the shader it runs smooth.
Devtools or F9 while in game. Type a famous magic word in the title screen to access all levels. Not sure how to do this on a phone which is what I was referring to - but framedrops are pretty obvious.
I've done a quick test and it runs very smoothly on my original OnePlus One from 2014 (so 11 years after the game). This is on latest Chrome, Android 11 (LineageOS 18.1).
I still find it amazing that this game was never made with this in mind; the web tech at the time on the OnePlus One was nowhere near able to run this in a browser, and it works perfectly today!
To be fair my reference browser is Firefox, WebGL is a fair bit slower there.
What blows my mind with this technology is little things: porting the game to the browser gave me a half-working mobile port basically for free (had to implement touch input handling in Neverball). On top of that, thanks to SDL2 and the game controller web API, I can attach a game controller to my phone and play the game in the browser on my phone with a game controller. It just seems unreal that this combination of technology just works.
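For reference, the game controller web API mentioned here is the Gamepad API. A tiny polling sketch; button and axis indices differ per controller, so the mapping below is an assumed example:

```typescript
// Poll connected controllers each frame and read stick/button state.
function pollGamepads(): void {
  const pads = navigator.getGamepads(); // null entries for empty slots
  for (const pad of pads) {
    if (!pad) continue;
    const [leftX, leftY] = pad.axes;             // analog stick axes in [-1, 1]
    const jumpPressed = pad.buttons[0]?.pressed; // e.g. the "A" button on many pads
    console.log(pad.id, leftX, leftY, jumpPressed);
  }
  requestAnimationFrame(pollGamepads);
}

window.addEventListener("gamepadconnected", () => requestAnimationFrame(pollGamepads));
```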
My hypothesis is that monetization on the web is harder, so there are not as many resources spent on games in browsers. If anyone is willing to spend a lot of time on a game they are probably willing to install it, people aren't conditioned to pay up front for websites, and the UX around microtransactions isn't at all smooth.
These are mostly business issues. If the business problems would be solved, the games would be built, despite any technical limitations of the web platform (which are not much different than limitations on low-end Android mobile devices).
Nobody has figured out so far how to make money with high quality games on the "open web" to get the same return on investment as simple ad-driven 2D games running on closed platforms like Facebook, which can technically be cobbled together by 3 dudes in their mom's basement (exaggerating a bit here of course).
E.g. no matter how you approach it, the Citadel demo wouldn't have led to a web game that would make the same profit relative to the development cost as a simple ad-driven Tetris-style game.
No they aren't; at least low-end Android devices support GL ES 3.2 and have proper debugging tools.
After a decade, Web 3D still has spectorJS as the only alternative.
Naturally there are also some masochists that enjoy debugging the browser 3D stack on RenderDoc, while trying to figure out what belongs to their application.
A kludge to work around the lack of tooling for the past decade, and when doing that, there is no need for WASM anyway.
Even if we consider GL ES 3.0, native comes out winning, due to missing features on WebGL 2.0.
Not only does Android have the GPU debugger from Google, which beats anything available for the Web; each mobile GPU vendor also has their own set of graphical debuggers.