Neat quiz. Reminds me that taking the absolute value of INT_MIN in C (and many other languages) is undefined behavior, but in practice it usually wraps around and returns INT_MIN itself, which is still negative. This is a "gotcha" that a lot of people are unaware of.
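A minimal sketch of the gotcha, assuming a typical two's-complement platform where the wraparound happens silently:

```c
#include <limits.h>
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    /* Undefined behavior: |INT_MIN| is not representable as an int,
     * because two's complement has one more negative value than positive.
     * On typical platforms the negation wraps and INT_MIN comes back out. */
    printf("%d\n", abs(INT_MIN));  /* commonly prints -2147483648 */
    return 0;
}
```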
A consequence of most, if not all, CPUs today using two's complement integers.
I think one's complement is more sensible since it doesn't have this problem, but it loses out because it requires a more complex ISA and implementation.
It's annoying that negation, abs(), and division can overflow with two's complement. But the way I look at it: lots of operations can already overflow (that's just a fact of signed integers), and you need to guard against that overflow in portable code anyway. It doesn't seem fundamentally worse that these extra operations can overflow too.
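For what it's worth, a portable guard might look something like this sketch (the helper name checked_abs is just illustrative, not a standard function):

```c
#include <limits.h>
#include <stdbool.h>

/* Illustrative helper: compute |x| without invoking undefined behavior.
 * Returns false when x == INT_MIN, the one value whose absolute value
 * is not representable in an int (the same check guards INT_MIN / -1). */
static bool checked_abs(int x, int *out) {
    if (x == INT_MIN)
        return false;
    *out = (x < 0) ? -x : x;
    return true;
}
```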
> The name "ones' complement" (note this is possessive of the plural "ones", not of a singular "one") refers to the fact that such an inverted value, if added to the original, would always produce an 'all ones' number
The radix complement in base two. They are closely related but not identical: the two's complement representation of a negative number is its ones' complement representation + 1 (ignoring the final carry). In ones' complement there are two representations of 0 (all 0s or all 1s), but every number has both a positive and a negative representation. In two's complement there is a single 0, but there is also one negative number with no corresponding positive representation (INT_MIN).
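A quick sketch of that relationship (bitwise NOT gives the ones' complement bit pattern; adding 1 gives the two's complement negation), assuming a 32-bit int:

```c
#include <stdio.h>

int main(void) {
    unsigned int x = 3;
    unsigned int ones = ~x;        /* ones' complement of 3: 0xFFFFFFFC on a 32-bit int */
    unsigned int twos = ones + 1;  /* two's complement of 3: 0xFFFFFFFD, i.e. (unsigned)-3 */
    printf("ones' complement: 0x%08X\n", ones);
    printf("two's complement: 0x%08X\n", twos);
    printf("x + ~x = 0x%08X (all ones)\n", x + ones);
    return 0;
}
```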
In fact, for every numerical base there is an equivalent. For example, in base 3 you can represent a number in twos' complement notation (each trit X is replaced with 2-X, so -3 is represented as 212 on three trits, with +3 being 010), or in three's complement by adding 1 (212 + 1 = 220). For base 10, you can use nines' complement (-3 on 3 digits is 996) or ten's complement (996 + 1 = 997).
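A small sketch that computes both complements for an arbitrary base (the digit-array representation and function name are just for illustration):

```c
#include <stdio.h>

/* Diminished radix complement: replace each digit d with (base-1-d).
 * The radix complement is that result plus 1. digits[0] is the most
 * significant digit; n is assumed small, purely for illustration. */
static void complements(int base, const int digits[], int n) {
    int out[16];
    int carry = 1;
    printf("(base-1)'s complement: ");
    for (int i = 0; i < n; i++)
        printf("%d", base - 1 - digits[i]);
    /* add 1 starting from the least significant digit */
    for (int i = n - 1; i >= 0; i--) {
        int d = (base - 1 - digits[i]) + carry;
        out[i] = d % base;
        carry = d / base;
    }
    printf("\nbase's complement:     ");
    for (int i = 0; i < n; i++)
        printf("%d", out[i]);
    printf("\n");
}

int main(void) {
    const int three[] = {0, 1, 0}; /* +3 in base 3 on three trits */
    const int ten[]   = {0, 0, 3}; /* +3 in base 10 on three digits */
    complements(3, three, 3);      /* prints 212 and 220 */
    complements(10, ten, 3);       /* prints 996 and 997 */
    return 0;
}
```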
While not that big of a difference, the 14" is a little cheaper:
* The 14" MBP with M1 Max 10-core CPU, 24-core GPU, 32gb memory is $2,900
* The 14" MBP with M1 Max 10-core CPU, 32-core GPU, 32gb memory is $3,100.
The Mac mini with the Max variant chip will certainly be more than $1,000. But I expect it will be more reasonable than the MBPs, maybe $2,100 for the 32-core GPU version with 32GB of memory. That's how much the currently available Intel 6-core i7 Mac mini with 32GB of memory and 1TB of storage costs.
The point being that the cheapest 32-core GPU configuration is $3,100, and it's being compared against an RTX 3080 mobile that's constrained to about 105W (above which the 3080 starts to pull ahead of the 32-core GPU in the M1 Max).
Overall it's just a silly premise that a sub-$1,000 Mac mini "would end up being the best gaming PC you could buy." That comment speaks to either not knowing the pricing structure here or misunderstanding the performance comparisons.
A mid-to-low-end desktop GPU pulls closer to 150-200W, and is not part of the comparisons here. And as Apple increases the performance of their chips, they also increase the price. So unless they start offering three chips where even the cheapest one is less than $1,000, massively ahead of desktop GPUs, and pulling less than 50W, it's not going to happen. I don't see it happening in the next 5 years, and meanwhile Nvidia and AMD will continue their roadmap of releasing more powerful GPUs.
So, that's correct, but it just means everyone's correct. The original commenter said "like Tesla does", implying that a significant number of Teslas have this or that it's their standard, which isn't true. It also makes one think of a touchscreen control rather than a capacitive button on the steering wheel.
Don't get me wrong, it's a stupid design. I have a Model 3, and really wish I had a front instrument cluster, too. But the whole thread seems to be half-truths from various camps trying to mislead each other.
That $99 price is for the Sketch program. The creator (Alin) just wanted a subscription model similar to Sketch's. The pro version of Lunar is $23 for a year of updates and support.
So what type of exploit is this? Is this a problem with smart contracts? Could this be a problem for other cryptocurrencies with smart contracts, like Ethereum?
Using smart contracts, people are creating cryptocurrencies within an Ethereum network. It's sort of a blockchain-within-a-blockchain. Right now, Polygon is a popular Ethereum-compatible alternative network for doing this.
The problem, of course, is that once these contracts are published onto the network to create your new cryptocurrency token, it's very difficult (perhaps impossible?) to update them. And if someone finds a vulnerability in one of the contracts and exploits it, it's game over. There's no reversing the transfer of tokens.
So to give a more explicit answer to your question, it's not a problem with Ethereum, but a problem with the contracts people are writing on the Ethereum network.
This is most likely a problem with this specific smart contract. From what I understand, if the EVM on the "winning" miner were to produce a state transition that did not faithfully follow the smart contract, that block would be rejected.
Polygon is a little different due to its proof-of-stake system, where there is no "winner" and the validators (Heimdall instances) check the work of the actual block producers (Bor instances). It also has a slasher-like element where only one Heimdall instance needs to prove that a block was incorrectly executed for it to be rejected, so a malicious actor would need to compromise all active Heimdall instances to be able to lie.
It was an issue with a specific contract, not the contract VM or the chain itself. It's absolutely a problem for other poorly written contracts, but not a systemic problem for all smart contracts.
I’m in the same boat. I’ve been wanting a new (electric) car for a while, but it’s so hard to pull the trigger when my old Toyota still gets me from point A to B. Recently, I’ve been thinking all of the new safety features might make it worth getting a new car before my current one gives out.
> abs(-2147483648) = -2147483648