The first formulation of this type of procedural generation algorithm was Paul Merrell's Model Synthesis [1], which built upon texture synthesis. You can also read Merrell's later comparison of the two algorithms [2].
The civilian government agencies spent $248B on contract services in 2023 [1]. Not all of that was professional services, but I expect we will see that number increase as more services are contracted out and the direct government workforce shrinks; a government contractor can still work remotely.
The typical acquisition mindset is that anything not core to an agency's mission should be bought on the open market at the lowest price technically acceptable. This tends to select against small businesses that can provide stellar services but can't simply cut rates for extended delivery periods.
In effect, government contracting is a large jobs program.
Government often goes too far. The test for outsourcing shouldn't be whether something is core to your values, but whether you can trust someone else to do it. Maintenance often needs to be done in house because you cannot trust someone else to take care of it. That in-house person will of course outsource the labor (a toilet clogs once, the in-house person uses a plunger; if that toilet clogs often, they call a plumber to fix what is wrong), but you need someone in house to decide whether to hire the labor in the first place. Otherwise you end up paying a plumber to replace a toilet that works fine but had too much put into it one time.
I am not qualified to make any assumptions but I do wonder if a massive investment into computing infrastructure serves national security purposes beyond AI. Like building subway stations that also happen to serve as bomb shelters.
Are there computing and cryptography problems that the infrastructure could be (publicly or quietly) reallocated to address if the United States found itself in a conflict? Any cryptographers here have a thought on whether hundreds of thousands of GPUs turned on a single cryptographic key would yield any value?
I'm not a cryptographer, nor am I good with math (actually I suck badly; consider yourself warned...), but I am curious about how threatened password hashes should feel if the 'AI juggernauts' suddenly fancy themselves playing on the red team, so I quickly did some (likely poor) back-of-the-napkin calculations.
'Well known' password notwithstanding, let's use the following as a password:
correct-horse-battery-staple
This password is 28 characters long, and whilst it could be stronger with uppercase letters, numbers, and special characters, it still shirtfronts a respectable ~1,397,958,111 decillion (1.39 × 10^42) combinations for an unsuspecting AI-turned-hashcat cluster to crack. Let's say this password was protected by SHA2-256 (assuming no cryptographic weaknesses exist; I haven't checked, this is purely academic), and that at least 50% of hashes would need to be tested before 'success' flourishes (let's try to make things a bit exciting...).
I looked up a random benchmark for hashcat and found an average of 20 gigahashes per second (GH/s) for a single RTX 4090.
If we throw 100 RTX 4090s at this hashed password, assuming a uniform 20 GH/s (combined firepower of 2,000 GH/s) and absolutely perfect running conditions, it would take at least eleven nonillion fifty octillion (1.105 x 10^31) years to crack. Earth will be long gone by the time that rolls around.
Turning up the heat (perhaps literally) by throwing 1,000,000 RTX 4090s at this hashed password, assuming the same conditions, doesn't help much (in terms of Earth's lifespan): two octillion two hundred ten septillion (2.21 x 10^27) years.
Using some recommended password specifications from NIST - 15 characters composed of upper and lower-case letters, numbers, and special characters - let's try:
dXIl5p*Vn6Gt#BH
Despite the higher complexity, this password only just ekes out a paltry ~41 sextillion (4.11 × 10^22) possible combinations. Throwing 100 RTX 4090s at this password would, rather worryingly, only take around three hundred twenty-six billion seven hundred thirteen million two hundred seventeen thousand (326,713,217,000) years to have a 50% chance of success. My calculator didn't even turn my answer into scientific notation!
More alarming still is when 1,000,000 RTX 4090s get sicced on the shorter hashed password: around thirty-two million six hundred seventy-one thousand (32,671,000) years to reach the same 50% chance of success.
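For anyone who wants to redo the napkin math under their own assumptions, here is a minimal Python sketch. The charset sizes below are my own assumptions (27 symbols for lowercase-plus-hyphen, 95 for printable ASCII), not necessarily the ones used in the figures above, so the totals will differ:

```python
# Back-of-the-napkin crack-time estimate: keyspace = charset_size ** length;
# expected work for a 50% chance of success is half the keyspace.

SECONDS_PER_YEAR = 3600 * 24 * 365.25

def years_to_half_keyspace(charset_size: int, length: int,
                           gpus: int, hashes_per_gpu: float) -> float:
    """Years to try half of all candidate passwords at the given hash rate."""
    keyspace = charset_size ** length
    rate = gpus * hashes_per_gpu  # total hashes per second
    return (keyspace / 2) / rate / SECONDS_PER_YEAR

# Assumption: lowercase letters plus hyphen (27 symbols), 28 characters,
# as in 'correct-horse-battery-staple'; 20 GH/s per RTX 4090.
print(years_to_half_keyspace(27, 28, gpus=100, hashes_per_gpu=20e9))

# Assumption: 95 printable ASCII symbols, 15 characters, 1,000,000 GPUs.
print(years_to_half_keyspace(95, 15, gpus=1_000_000, hashes_per_gpu=20e9))
```

Swapping in your own charset size, password length, and GPU count makes it easy to see how the exponent in charset_size ** length dominates everything else.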
I read a report that suggested Microsoft aimed to have 1.8 million GPUs by the end of 2024. We'll probably be safe for at least the next six months or so. All bets are off after that.
All I dream about is the tidal wave of cheap high-performance GPUs flooding the market when the AI bubble bursts, so I can finally run Far Cry at 25 frames per second for less than a grand.
Agreed. This was raised within our corp the other week and we read through the privacy and security documentation as it relates to Connected Experiences.
Microsoft has outlined specifically what Connected Experiences covers.[1] [2]
You could argue that predictive text is a product of machine learning, but there is no clause allowing this data to be used to train any generalized large language models. The confusion may have arisen if they read an article about Copilot. If the user had a Microsoft 365 Copilot license, then the data would be used as grounding for their personal interactions with Copilot, but still not to train any foundational LLMs.
However, even this data is still managed in compliance with Microsoft's data security and privacy agreements.
The article covers this and I think the title is a bit too general. It is a byproduct of how CRISPR works as it targets a specific sequence. In this case the sequence is also present in areas that were non-targeted. Essentially, the sequence was not unique so the process impacted other areas in unintended ways.
> Here we evaluated diverse corrective CRISPR-based editing approaches that target the ΔGT mutation in NCF1, with the goal of evaluating post-editing genomic outcomes. These approaches include Cas9 ribonucleoprotein (RNP) delivery with single-stranded oligodeoxynucleotide (ssODN) repair templates [11,12], dead Cas9 (dCas9) shielding of NCF1 [13], multiplex single-strand nicks using Cas9 nickase (nCas9) [14,15], or staggered double-strand breaks (DSBs) using Cas12a [16].
It is a case of economic security. US Deputy National Security Advisor Daleep Singh spoke about this on the Odd Lots podcast a month ago. The goal is an integrated supply chain across all three countries, with an end result of marketing the icebreaker capacity to other allies.
Yes, he says 'What do they get. Well, in exchange, we agree to integrate our ice breaking supply chains so that they are interoperable at every stage of production', but that doesn't actually benefit Finland.
Furthermore, suppose it actually were something substantial, some kind of deal that NATO icebreakers are to be made by the US, Canada, or Finland; then you screw the Aker group in Norway, who also make icebreakers.
The way I see it, they expect that since the Canadians were able to nab the shipyard after the disorder caused by the sanctions, they can transfer all the knowledge from the Finns and make the icebreakers themselves, seizing an appealing high-tech shipbuilding niche from the Finns while offering nothing in return but bullshit.
An integrated supply chain, sure, maybe that can save money, but once you've transferred your knowledge you no longer have your niche.
I think this is very obvious in the talk because of the vagueness in what is offered to the Finns; my interpretation is that nothing meaningful is offered to them and the US is just expecting to seize this niche.
I don't think the 'there'll be enough for all of us' talk is plausible either. Sure, there might be expanded demand, but only the Baltic and the polar areas really matter. Maybe the US and Canada together do need 40 or so icebreakers to keep the Northwest Passage safe and open in case it becomes a major trade route, but those will last basically forever, and my understanding is that the US is talking about only nine or so.
"US: cost $800-$900 million per ship ($1.1-$1.3 billion in 2024 dollars)
FIN: Finnish shipyard can build a heavy icebreaker for just a few hundred million dollars"
Bill the North Americans $500-600 million per ship? They could offer some discount if a significant share of those projections indeed gets built.
ODD LOTS: "And our best estimate is that the global total global demand for ice breakers over the next decade from allies and partners is between seventy and ninety vessels."
It's like showing someone a color and asking how many letters it has. 4... 3?
blau, blue, azul, blu
The color holds the meaning and the words all map back.
In the model, the individual letters hold little meaning. Words are composed of letters simply because we need some sort of organized structure for communication that helps represent meaning and intent, just like our color blue/blau/azul/blu.
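A toy Python sketch of the point (the vocabulary and IDs here are made up for illustration; real tokenizers are subword-based, but the principle is the same: the model sees opaque integers, not letters):

```python
# Toy word-level "tokenizer": the model only ever sees the integer IDs,
# so the individual letters of 'blue' are invisible to it.
vocab = {"blue": 1042, "blau": 2731, "azul": 3590, "blu": 4115}  # made-up IDs

def encode(word: str) -> int:
    """Map a surface word to its opaque token ID."""
    return vocab[word]

# All four words for the same color become opaque integers:
ids = [encode(w) for w in ["blue", "blau", "azul", "blu"]]
print(ids)  # the letter counts (4, 4, 4, 3) appear nowhere in this representation

# To count letters, you must map the ID back to the surface string:
inverse = {v: k for k, v in vocab.items()}
print(len(inverse[1042]))  # 4
```

Asking the model "how many letters in blue?" is asking it about structure inside token 1042, which the representation simply does not carry.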
Not faulting them for asking the question, but I agree that the results do not undermine the capability of the technology. In fact, they just highlight the constraints and the need for education.
I agree and do not think any company would make that investment directly. Nvidia selling to Microsoft, Microsoft renting to OpenAI: I'm sure you could make that add up to $100B on paper. In the long run the economics are likely much more complicated and consist of "agreements worth $x".
[1] https://paulmerrell.org//thesis.pdf [2] https://paulmerrell.org/wp-content/uploads/2021/07/compariso...