They have not done that, though. With Proton, almost every Windows game on Steam "Just Works" (and many others work with a small amount of configuration).
As far as I can tell, there is no way for a player to use GPT to run games from their Steam library on Mac.
“The county has, for whatever reason, also refused to produce the network routers. We want the routers, Sonny. Wendy, we got to get those routers, please. The routers. Come on, Kelly, we can get those routers. Those routers. You know what? We’re so beyond the routers, there’s so many fraudulent votes without the routers. But if you got those routers, what that will show, and they don’t want to give up the routers. They don’t want to give them. They are fighting like hell. Why are these commissioners fighting not to give the routers?”
Look, having nuclear — my uncle was a great professor and scientist and engineer, Dr. John Trump at MIT; good genes, very good genes, OK, very smart, the Wharton School of Finance, very good, very smart — you know, if you're a conservative Republican, if I were a liberal, if, like, OK, if I ran as a liberal Democrat, they would say I'm one of the smartest people anywhere in the world — it's true! — but when you're a conservative Republican they try — oh, do they do a number — that's why I always start off: Went to Wharton, was a good student, went there, went there, did this, built a fortune — you know I have to give my like credentials all the time, because we're a little disadvantaged — but you look at the nuclear deal, the thing that really bothers me — it would have been so easy, and it's not as important as these lives are — nuclear is so powerful; my uncle explained that to me many, many years ago, the power and that was 35 years ago; he would explain the power of what's going to happen and he was right, who would have thought? — but when you look at what's going on with the four prisoners — now it used to be three, now it's four — but when it was three and even now, I would have said it's all in the messenger; fellas, and it is fellas because, you know, they don't, they haven't figured that the women are smarter right now than the men, so, you know, it's gonna take them about another 150 years — but the Persians are great negotiators, the Iranians are great negotiators, so, and they, they just killed, they just killed us, this is horrible.
Solid color would convey far less information, but it would still convey a minimum length of the secret text. If you can assume the font rendering parameters, this helps a ton.
As a simple scenario with monospace font rendering, say you know someone is censoring a Windows password that is (at most) 16 characters long. This significantly narrows the search space!
That sort of makes me wonder if the best form of censoring would be a solid black shape, THEN passing it through some diffusion image-generation step to infill the black square. It will be obvious that it's fake, but it'll make determining the "edges" of the censored area a lot harder. (Might also be a bit less distracting than a big black shape for your actual, non-adversarial viewers!)
I think the edges would still be evident, and this would just waste time and energy. I think a black square is just fine, so long as you can tolerate leaking some information about the length of the secret. I would make it larger than it needs to be.
> Years ago it would've required a supercomputer and a PhD to do this stuff
This isn't actually true. You could do this 20 years ago on a consumer laptop, and you don't need the information you get for free from text moving under a filter either.
What you need is the ability to reproduce the conditions the image was generated and pixelated/blurred under. If the pixel radius only encompasses, say, 4 characters, then you only need to search for those 4 characters first. And then you can proceed to the next few characters represented under the next pixelated block.
You can think of pixelation as a bad hash which is very easy to find a preimage for.
No motion necessary. No AI necessary. No machine learning necessary.
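A minimal sketch of that preimage search, using made-up "glyph bitmaps" in place of real font rendering (a real attack would rasterize candidates with the victim's exact font, size, and antialiasing; all names here are hypothetical):

```python
from itertools import product

# Hypothetical 1-D "glyph bitmaps": each character renders to a few pixel
# intensities. A real attack would rasterize with the victim's exact font stack.
GLYPHS = {
    "a": [0, 255, 255, 0],
    "b": [255, 0, 255, 255],
    "c": [0, 0, 255, 0],
}

def render(text):
    """Concatenate the per-character pixel runs."""
    return [p for ch in text for p in GLYPHS[ch]]

def pixelate(pixels, block=4):
    """Replace each block of pixels with its average: a lossy, non-invertible map."""
    return [sum(pixels[i:i + block]) // block for i in range(0, len(pixels), block)]

# The censored artifact the attacker observes: "cab", pixelated.
observed = pixelate(render("cab"))

# Preimage search: try every 3-character string, pixelate it the same way,
# and keep the ones that reproduce the observed blocks.
matches = ["".join(cand) for cand in product(GLYPHS, repeat=3)
           if pixelate(render("".join(cand))) == observed]
print(matches)  # -> ['cab']
```

Exactly as described above: a linear search over candidates, checking each against the "bad hash".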
The hard part is recreating the environment though, and AI just means you can skip having that effort and know-how.
In fact, there was a famous de-censoring case where the censoring was a simple "whirlpool" (swirl) algorithm that was very easy to unwind.
If media companies want to actually censor something, nothing does better than a simple black box.
You still need to be aware of the context that you're censoring in. Just adding black boxes over text in a PDF will hide the text on the screen, but might still allow the text to be extracted from the file.
Indeed. And famously, using black boxes as a background on individual words in a non-monospaced font is also susceptible to a dictionary attack on an image of the widths of the black boxes.
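A minimal sketch of that width-based dictionary attack, with made-up per-character advance widths (a real attack would measure the actual proportional font used in the document):

```python
# Hypothetical advance widths in pixels for a proportional font.
WIDTHS = {"a": 9, "e": 9, "i": 4, "l": 4, "m": 15, "n": 10,
          "o": 10, "p": 10, "r": 7, "s": 8, "t": 6}

def word_width(word):
    """Total rendered width of a word: the only thing the black box leaks."""
    return sum(WIDTHS[ch] for ch in word)

DICTIONARY = ["mast", "mist", "moist", "train", "trains", "plan"]

# The redaction bar measures 38px wide: which dictionary words fit exactly?
candidates = [w for w in DICTIONARY if word_width(w) == 38]
print(candidates)  # -> ['mast']
```

With a real dictionary many words share a width, but the measurement still collapses the search space dramatically, and widths of several redacted words in one sentence narrow it further.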
And even taking a sharpie and drawing a black box doesn't mean the words can't be seen at a certain angle, or recovered by removing the sharpie ink but not the printed ink.
Really, if you need to censor something, create a duplicate without the original content. Preferably with no trace of the original at all, since even the size of the black box is an information leak.
Curious: does anyone know if the specific censoring tool in the macOS viewer has this problem? I had assumed not, because they warn you when using the draw-shapes tool that text below it can be recovered later, and they don't warn you about that when using the censoring tool.
This was pretty different though. The decensoring algorithm I'm describing is just a linear search. But pixelation is not an invertible transformation.
Mr. Swirl Face just applied a swirl to his face, which is invertible (-ish, with some data lost), and could naively be reversed. (I am pretty sure someone on 4chan did it before the authorities did, but this might just be an Internet Legend).
A long while ago, taking an image (typically porn), scrambling a portion of it, and having others try to figure out how to undo the scrambling was a game played on various chans.
Yeah, but if you read about him, it serves as a rallying cry for right-wing types since he's an example of the Canadian legal system's extreme leniency. This guy should be in prison forever and he's been free since 2017. Look at his record of sentencing. I love being a bleeding-heart liberal/progressive and all, but this is too far.
Furthermore, don't look too hard at Israel and its policy of being very, very open to pedophiles and similar types.
A completely opaque shape or emoji does it. A simple black box overlay is not recommended unless that's the look you're going for. Also, very slightly transparent overlays come in all shapes and colors, and it's hard to tell by eye whether a black box or any other shape is fully opaque, so either way you need to be careful that it's 100% opaque.
Noob here, can you elaborate on this? If you take, for example, a square of 25px and change the value of each individual pixel to the average color of the group, most of the data is lost, no? If the group of pixels is big enough, can you still undo it?
Yeah most of the information is lost, but if you know the font used for the text (as is the case with a screencast of a macOS window), then you can try every possible combination of characters, render it, and then apply the averaging, and see which ones produce the same averaged color that you see. In addition, in practice not every set of characters is equally likely --- it's much more likely for a folder to be called "documents" than "MljDQRBO4Gg". So that further narrows down the amount of trying you have to do. You are right, of course, that the bigger the group of pixels, the harder it gets: exponentially so.
Depends how much data you have. If a 25x25 square is blurred to a single pixel, that will take more discrete information to de-blur than if it were a 2x2 square. So a longer video with more going on, but you can still get there.
As a toy example, if you have pixels a,b,c and you blur to (a+b)/2,(b+c)/2, the difference of the two blurred values gives you (c-a)/2. And if you have a good estimate of the boundary condition ("a"), e.g. using (a+b)/2 as an approximation for "a", you can unroll the averaging, and the recovered result might look fairly close.
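Running the toy example directly (here the boundary pixel a is assumed known; in practice you would estimate it, e.g. from the first blurred value):

```python
# Secret pixel values.
a, b, c = 10.0, 200.0, 50.0

# The "blur" the attacker observes: adjacent-pair averages.
blurred = [(a + b) / 2, (b + c) / 2]

# Unroll the averaging, given (an estimate of) the boundary pixel a.
b_rec = 2 * blurred[0] - a
c_rec = 2 * blurred[1] - b_rec
print(b_rec, c_rec)  # -> 200.0 50.0
```

With an estimated rather than exact a, the error propagates with alternating sign but does not grow, which is why the recovered signal can still look fairly close.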
You'll basically want to look up the area of deconvolution. You can interpret it in linear algebra terms as trying to invert an ill-conditioned matrix, or in signal processing terms as trying to multiply by the inverse of the PSF. In real-world cases the main challenge is doing so without blowing up any error that comes from quantization noise (or other types of noise).
How many pixels are in `I` vs. how many are in `W`? Different averages as a result. With subpixel rendering, kerning, etc., there are minute differences between the averages of `IW` and `WI` as well, so even ordering can be observed. It is almost completely based on having so much extra knowledge: the background of the text, the color of the text, the text renderer, the font, etc. There is a massive number of unknowns if it is a random picture, but all this extra knowledge massively cuts down on the number of things we need to try: if it's 4 characters, we can enumerate all possibilities, apply the same mosaic, and find the closest match in nearly no time.
It's not that you're utterly wrong; some transformations are irreversible, or close to. Multiplying each pixel's value by 0, assuming the result is exactly 0, is a particularly clear example.
But others are reversible because the information is not lost.
The details vary per transformation, and sometimes it depends on the transformation having been an imperfectly implemented one. Other times it's just that data is moved around and reduced by some reversible multiplicative factor. And so on.
TLDR: Most of the data is indeed "lost". If the group of pixels are big enough, this method alone becomes infeasible.
More details:
The larger the group of pixels, the more characters you'd have to guess, and so the longer this would take. Each character makes it combinatorially more difficult.
To make matters worse, by the pigeonhole principle you are guaranteed to have collisions (i.e., two different sets of characters which pixelate to the same value). E.g., for a space of just 6 characters, even limited to a-zA-Z0-9, that's 62^6 = 56,800,235,584 combinations, while you can expect at most 2048 color values for them to map to.
(Side note: That's 2048 colors, not 256, between #000000 and #FFFFFF. This is because your pixelation / mosaic algorithm can have eight steps inclusive between, say, #000000 and #010101. That's #000000, #000001, #000100, #010000, #010001, #010100, #000101, and #010101.
Realistically, in scenarios where you wouldn't have pixel-perfect reproduction, you'd need to generate all the combos and sort by closest to the target color, possibly also weighted by a prior on the content of the text. This is even worse, since you might have too many combinations to store.)
So, at 25 pixel blocks, encompassing many characters, you're going to have to get more creative with this. (Remember, just 6 alphanumeric characters = 56 billion combinations.)
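The combinatorics here are easy to sanity-check (the 2048-color figure is the estimate from the side note above):

```python
combos = 62 ** 6   # six characters, each drawn from a-zA-Z0-9
colors = 2048      # distinct averaged colors the block can map to (estimate)

print(combos)           # -> 56800235584
print(combos // colors) # roughly 27.7 million candidate strings per observed color
```

So even with pixel-perfect reproduction, a single averaged block over six characters leaves tens of millions of colliding candidates, which is why priors on the text matter so much.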
Thinking about this as "finding the preimage of a hash", you might take a page from the password cracking toolset and assume priors on the data. (I.e. Start with blocks of text that are more likely, rather than random strings or starting from 'aaaaaa' and counting up.)
I don't like Tesla, and the premature "FSD" announcement was a huge setback for AV research. An AV without lidar killing motorcyclists is not surprising, to say the least. And this is a damning report.
That said -- and I might have missed this if it was in the linked sources, I'm on mobile -- what is the breakdown of other (supposed) AVs adoption currently? What other types of crashes are there? Are these 5+ fatalities statistically significant?
As others have pointed out in this thread you need to correct for miles driven by Teslas in autopilot versus other vehicles driven in autopilot mode. Without this adjustment, the data are meaningless. And with counts of 5 versus 0 you are deep into Poisson noise level. So yes, I stand by “pathetic”.
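As a rough sketch of the "Poisson noise" point (the rate here is purely illustrative, not taken from any crash data):

```python
import math

def poisson_cdf(k, lam):
    """P(X <= k) for a Poisson(lam) random variable."""
    return sum(math.exp(-lam) * lam ** i / math.factorial(i) for i in range(k + 1))

# Hypothetical: suppose both fleets truly average 2 fatal crashes per
# comparable exposure window.
lam = 2.0
p_zero = math.exp(-lam)                # chance a fleet shows 0 events (~13.5%)
p_five_plus = 1 - poisson_cdf(4, lam)  # chance a fleet shows >= 5 events (~5.3%)
```

Both outcomes occur by chance several percent of the time under the same underlying rate, so a raw 5-versus-0 count, without adjusting for exposure, is weak evidence on its own.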
There's no such thing as "other vehicles driven on autopilot". There's cruise control, adaptive cruise control, and limited rollouts of level 3 autonomous driving locked to certain regions. You know why? Because everyone except Musk knows the tech is NOT ready. It doesn't matter how many miles were driven. What matters is that Teslas don't have radars and every other brand does.
This is not the case. Waymo alone has claimed 50 million rider-only miles as of December 2024. That would mean Waymo travelled at least 833,000 miles per hour on its driverless miles! (Unless you mean "minutes" in the literal sense, which can be any amount of time, and would apply to every vehicle.)
It's worth noting that Waymo's rider-only miles are a stronger claim than "FSD" miles. "Full Self-Driving" is Tesla branding (and very misleading, since it expects an attentive human behind the wheel, ready to take over in a split second).
I’m in the same camp. I think self driving shouldn’t be allowed as it currently stands. But, this is probably the XKCD heatmap phenomenon.
How many other self-driving vehicles are on the road vs. Tesla? What percentage of traffic consists of motorcycles in the places where those other brands have deployed vs. in Florida, etc.?
I would like to add context for others that this was explicitly laid out in the "Project 2025" agenda. Accuweather was cited as an example, and Accuweather responded saying they do not agree with the agenda: https://www.accuweather.com/en/press/accuweather-does-not-su...
The actions were bringing a ton of the people who worked on it into the campaign and the nascent administration before the election, all of which happened before Trump's lies that it wasn't the plan.
The same Accuweather whose previous CEO (and founder's brother) was Donald Trump's nominee to run NOAA in the last administration?
Color me shocked that, under the last administration and when it looked like the Trump campaign had a good chance of self-destructing, they wrote a milquetoast "we don't support this at all ;-)" press release about his agenda.
Yes but we've also had ten years of flooding the zone. I am inclined to believe this is the latter. He will get everything he needs from this term - primarily an escape from his Jan 6 and classified documents charges.
I'd like to offer a counter-narrative. It might just be luck, but across seven machines (excluding Pis, other SBCs, Macs, Androids), I've only ever had one WiFi issue and one sleep-mode issue with Linux. (Which easily clocks in at fewer problems than on Windows.)
By my measure, Linux has saved me thousands of dollars' worth of time in waiting for updates alone. (Not to mention the employment and community opportunities it made possible.)
I feel like if you want an easy, mostly tinker-free Linux experience, you don't use something like Arch. You just install Ubuntu or Debian -- after doing your research and buying hardware that is well-supported -- and don't mess with things.
I share the same experience, but tbh almost every laptop purchase of mine was prefaced by hours or even days of searching the internet for other people's experiences with Linux on that particular machine. Except the first one, and I got a machine with broken ethernet while having no wifi at home. Have fun copying .deb packages over a usb stick.