
Playing against an AI that's really dumb gets boring quickly. Playing against an AI that's way too good gets annoying quickly.

I want an AI that can play like a human at my level would, such that the game is competitive and fun.


You probably don’t though. It’s actually really unfun to lose 50% of matches against an AI, or worse, because it doesn’t get tired or tilted or distracted.

It’s much more fun to go against an AI that is dumber than you but generally more powerful.


Different kinds of AI are likely fun for different players. Games have difficulty levels partly because not everyone wants the same level of difficulty relative to their own skill level. Some may want something easily beatable for them, some may want something difficult for them to beat.


It's unfun if the AI feels like it's cheating.

In Counter-Strike the AI can be super dumb and slow until it progressively becomes a dumb aimbot. It doesn't become a better player with game sense and tactics, just raw aim (try Arms Race if you want to feel it yourself).

In F.E.A.R. the AI is pretty advanced in a more organic way. It coordinates several enemies to locate and engage you in a way that sometimes feels totally human. That feels great, and when you lose you try again thinking something like "I should have positioned myself better" instead of "I guess my aim is just not fast enough".

We just don't get enough good AIs to know how good they can feel.


Not only that, some of the problems with addiction were directly caused by the dosage guidelines for OxyContin. They really wanted it to be a 12-hour drug, but it really isn't; it wears off after about 8 hours. Rather than admitting this and giving a smaller dose more frequently, they doubled down by using a larger dose and trying to keep to the 12-hour schedule.

This combination of a larger dose followed by mild withdrawal results in a higher likelihood of becoming addicted to opioids. So not only did they market it heavily and get more people on opioids than necessary, they did it in a way that maximized the likelihood of addiction.

https://www.latimes.com/projects/oxycontin-part1/


> the Waymo ADS’s perception system assigned a low damage score to the object;

and Tesla would do better how in this case? It also routinely crashes into stationary objects, presumably because the system assumes it wouldn't cause damage.


> and Tesla would do better how in this case? It also routinely crashes into stationary objects, presumably because the system assumes it wouldn't cause damage.

Are the Teslas in the room with you right now?

Please point out in my comment where I mentioned Tesla. I can wait.


Completely agree. It's been 18 years since Nvidia released CUDA. AMD has had a long time to figure this out so I'm amazed at how they continue to fumble this.


10 years ago AMD was selling its own headquarters so that it could stave off bankruptcy for another few weeks (https://arstechnica.com/information-technology/2013/03/amd-s...).

AMD's software investments only began in earnest a few years ago, but AMD really has progressed more than pretty much everyone else aside from NVidia IMO.

AMD further made a few bad decisions where they "split the bet", relying upon Microsoft and others to push software forward. (I did like C++ AMP, for what it's worth.) The underpinnings of C++ AMP led to Boltzmann, which led to ROCm, which then needed to be ported away from C++ AMP and into the CUDA-like HIP.

So it's a bit of a misstep there for sure. But it's not like AMD has been dilly-dallying. And for what it's worth, I would have personally preferred C++ AMP (a C++11-standardized way to represent GPU functions as []-lambdas rather than CUDA-specific <<<extensions>>>). Obviously everyone else disagrees with me, but there's some elegance to parallel_for_each([](param1, param2){magically a GPU function executing in parallel}), where the compiler figures out the details of how to get param1 and param2 from CPU RAM onto the GPU (or you use GPU-specific allocators to create param1/param2 in GPU address space already to bypass the automagic).
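For anyone who never used it, here's a minimal C++ AMP sketch of that style, written from memory (it assumes the old Visual C++ <amp.h> headers; the exact details are illustrative rather than authoritative):

    // Minimal C++ AMP sketch: array_view wraps CPU memory and the runtime
    // moves it to the GPU as needed; no <<<kernel launch>>> syntax.
    #include <amp.h>
    #include <vector>
    #include <iostream>

    int main() {
        std::vector<float> data(1024, 1.0f);

        // Wrap the CPU buffer; copies to/from the accelerator happen implicitly.
        concurrency::array_view<float, 1> av(static_cast<int>(data.size()), data);

        // restrict(amp) marks the lambda body as GPU code.
        concurrency::parallel_for_each(av.extent,
            [=](concurrency::index<1> idx) restrict(amp) {
                av[idx] = av[idx] * 2.0f + 1.0f;
            });

        av.synchronize();               // bring results back to the CPU vector
        std::cout << data[0] << "\n";   // prints 3
        return 0;
    }

The appeal was that the accelerated part is just a lambda in ordinary C++ source, with the runtime handling the CPU-to-GPU copies, instead of a separate kernel launched with CUDA's <<<>>> syntax.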


Nowadays you can write regular C++ in CUDA if you so wish, and unlike AMD, NVidia employs several WG21 contributors.


CUDA of 18 years ago is very different to CUDA of today.

Back then AMD/ATI were actually at the forefront on the GPGPU side - things like the early Brook language and CTM led pretty quickly to things like OpenCL. Lots of work went on using the Xbox 360 GPU in real games for GPGPU tasks.

But CUDA steadily improved iteratively, and AMD kinda just... stopped developing their equivalents? Considering that for a good part of that time they were near bankruptcy, it might not have been surprising though.

But saying Nvidia solely kicked off everything with CUDA is rather ahistorical.


> AMD kinda just... stopped developing their equivalents?

It wasn't so much that they stopped developing; rather, they kept throwing everything out and coming out with new, non-backwards-compatible replacements. I knew people working in the GPU compute field back in those days who were trying to support both AMD/ATI and NVidia. While their CUDA code just worked from release to release, and every new release of CUDA just got better and better, AMD kept coming up with new breaking APIs and forcing rewrite after rewrite until they just gave up and dropped AMD.


> CUDA of 18 years ago is very different to CUDA of today.

I've been writing CUDA since 2008 and it doesn't seem that different to me. They even still use some of the same graphics in the user guide.


Yep! I used BrookGPU for my GPGPU master's thesis, before CUDA was a thing. AMD lacked follow-through on the software side as you said, but a big factor was also NV handing out GPUs to researchers.


10 years ago they were basically broke and bet the farm on Zen. That bet paid off. I doubt a bet on CUDA would have paid off in time to save the company. They definitely didn't have the resources to split that bet.


It's not like the specific push for AI on GPUs came out of nowhere either, Nvidia first shipped cuDNN in 2014.


None of the tech companies are selling your data to advertisers. They allow advertisers to target people based on the data, but the data itself is never sold. And it would be dumb to sell it because selling targeted ads is a lot more valuable than selling data.

Just about everyone other than the tech companies is actually selling your data to various brokers, from the DMV to the cellphone companies.


> None of the tech companies are selling your data to advertisers.

First-hand account from me that this is not factual at all.

I worked at a major media buying agency ("big 5") in advanced analytics; we were a team of 5-10 data scientists. We got a firehose, on behalf of our client, a major movie studio, of searches for their titles by zip code from "G".

On top of that, we had clean-roomed audience data from "F" of viewers of the ads/trailers who also viewed ads on their set-top boxes.

I can go on and on, and yeah, we didn't see "Joe Smith" level of granularity, it was at the zip code level, but to say FAANG doesn't sell user data is naive at best.


> we didn’t see “Joe Smith” level of granularity, it was at Zip code levels

So you got aggregated analytics instead of data about individual users.

Meanwhile other companies are selling your name, phone number, address history, people you are affiliated with, detailed location history, etc.

Which one would you say is "selling user data"?


The problem is you're limited to 24 GB of VRAM unless you pay through the nose for datacenter GPUs, whereas you can get an M-series chip with 128 GB or 192 GB of unified memory.


Surely! The point is that they're not million-times-faster magic chips that make NVIDIA bankrupt tomorrow. That's all. A laptop with up to 128GB of "VRAM" is a great option, absolutely no doubt about that.


They are powerful, and I agree with you: it's nice to be able to run Goliath locally, but it's a lot slower than my 4070.


Citation needed.


For one, https://github.com/cloudinary/ssimulacra2?tab=readme-ov-file... shows a higher correlation with human responses across 4 different datasets and correlation metrics.

Also see https://jon-cld.s3.amazonaws.com/test/ahall_of_fshame_SSIMUL... which is an A/B comparison of a lot of images where it gives 2 versions, one preferred by SSIMULACRA, the other preferred by VMAF.


The authors of the metric finding some cases where it works better is not the same thing as it being widely considered to be better. When it comes to typical video compression and scaling artifacts, VMAF does really well. To prove something is better than VMAF on video compression, it should be compared on datasets like MCL-V, BVI-HD, CC-HD, CC-HDDO, SHVC, IVP, VQEGHD3 and so on (and of course Netflix Public).

TID2013, for example, is an image dataset with many artifacts completely unrelated to compression and scaling:

- Additive Gaussian noise
- Additive noise in color components is more intensive than additive noise in the luminance component
- Spatially correlated noise
- Masked noise
- High frequency noise
- Impulse noise
- Quantization noise
- Gaussian blur
- Image denoising
- JPEG compression
- JPEG2000 compression
- JPEG transmission errors
- JPEG2000 transmission errors
- Non eccentricity pattern noise
- Local block-wise distortions of different intensity
- Mean shift (intensity shift)
- Contrast change
- Change of color saturation
- Multiplicative Gaussian noise
- Comfort noise
- Lossy compression of noisy images
- Image color quantization with dither
- Chromatic aberrations
- Sparse sampling and reconstruction

Doing better on TID2013 is not really an indication of doing better on a video compression and scaling dataset (or being more useful for making decisions for video compression and streaming).


That's slightly different though; it sounds like the fob went from being blocked by your body to no longer being blocked by your body.


I think that's largely because wages haven't really kept pace with inflation.


"Haven't really" = not at all. I have no idea where I'd be if I weren't working in the comparatively highly paid tech industry. I count myself as very lucky.


There's a bit of a gap there though between 1996 and Visual Basic classic being discontinued. VB.NET came out in 2002 but VB6 was supported until 2008.

VB5 in 1997 and VB6 in 1998 really closed the gap with Delphi from what I remember.


VB5/6 had native code compilers, so performance-wise the gap was reduced. But VB was still only object-based and not full OOP; the VCL was much better in all respects, and so were the GUI builders. The component ecosystem was much better too, despite Delphi having a much smaller user base. I prefer not to use Object Pascal today, but back then it was superior to using VC++ or VB.

