I have noticed the same thing. And the blogs are still there, I checked, and Marginalia returns them as top results when I search the relevant keywords. Google just really doesn't care.
Let's be honest, all the netflix plans will have ads just like they do now. They might not interrupt your show while you're in the middle of it, but you'll get ads no matter what. Ads as soon as the credits roll, a barrage of full screen ads if you pause a show for more than 10 seconds, full screen ads the moment you open the app, etc.
Netflix Plus (Netflix+), a side subscription on top of all of that which lets you sync different playback screens to one account, or some other esoteric value-add that muddies the waters
I don't find Netflix "live action" movies to be super violent, and there are a lot of non-violent shows. Its animations can be quite violent though (and those are good quality). From the little I know, it, like every other big platform, does shy away from sex. This has been a theme for decades: it's OK to be violent, but sex is a no-no.
HuFlixPrime was my portmanteau of choice in 2010-ish, but mainly because I felt the dawn of cable-company-style pricing encroaching: more and more folks adding multiple streaming services to get close to what cable packages could offer.
I still like the name.
Edit:
didn't Netflix have a feature called "Netflix Max" on the PS3 app? I remember really liking it for finding what to watch.
> We were cautious to only run after each model’s training cutoff dates for the LLM models. That way we could be sure models couldn’t have memorized market outcomes.
Even if it is after the cutoff date, wouldn't the models be able to query external sources to get data that could positively impact them? If the returns were smaller I could reasonably believe it, but beating the S&P 500 returns by 4x+ strains credulity.
We used the LLMs' APIs and provided custom tools, like a stock ticker tool that only gave stock price information up to the simulated date of the backtest. We did the same for news APIs, technical-indicator APIs, etc. It took quite a long time to make sure there wasn't any data leakage; the whole process took us about a month or two to build out.
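The core of this setup is a tool that clamps every data request to the simulated date. Here's a minimal sketch of what such a leakage-proof price tool might look like; the function names, prices, and in-memory price table are all invented for illustration, not the authors' actual implementation:

```python
from datetime import date

# Hypothetical pre-loaded price history keyed by (ticker, date).
# A real backtest would read this from a historical data store.
PRICE_HISTORY = {
    ("AAPL", date(2024, 1, 2)): 185.64,
    ("AAPL", date(2024, 1, 3)): 184.25,
    ("AAPL", date(2024, 6, 3)): 194.03,
}

def make_price_tool(simulated_date: date):
    """Build a tool closure that refuses to serve data past simulated_date."""
    def get_price(ticker: str, query_date: date) -> float:
        if query_date > simulated_date:
            # The model asked about the "future" relative to the
            # day being simulated -- no lookahead allowed.
            raise ValueError("query beyond simulated date")
        price = PRICE_HISTORY.get((ticker, query_date))
        if price is None:
            raise KeyError(f"no price for {ticker} on {query_date}")
        return price
    return get_price

# On the simulated day 2024-01-03, only past and same-day data is visible.
tool = make_price_tool(simulated_date=date(2024, 1, 3))
print(tool("AAPL", date(2024, 1, 2)))  # 185.64
```

The same wrapper pattern would apply to news and technical-indicator tools: the harness, not the model, decides the current date, and every tool filters its responses against it.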
I have a hunch the Grok model cutoff is not accurate and it somehow has updated weights: they still call it the same Grok model since the params and size are unchanged, but they are incrementally training it in the background. Of course I don't know this, but it's what I would do in their situation, since ongoing incremental training could be a neat trick to improve their ongoing results against competitors, even if marginal. I also wouldn't trust the models to honestly disclose their decision process either.
That said, this is a fascinating area of research, and I do think LLM-driven fundamental investing and trading has a future.
I know very little about what the environment where they run these models looks like, but surely they have access to different tools, like vector embeddings with more current data on various topics?
I would say that if these models independently could create such high returns all these companies would shut down the external access to the models and just have their own money making machine. :)
But isn't that more about which MCPs you can configure it to use? Do we have any idea what secret-sauce stuff they have? Surely it's not just a raw model that they are executing?
It's a very silly way of saying that the data the LLMs had access to was presented in chronological order, so that for instance, when they were trading on stocks at the start of the 8 month window, the LLMs could not just query their APIs to see the data from the end of the 8 month window.
Overall, it does sound weird. On the one hand, assuming I understand properly, what they are saying is that they removed the model's ability to cheat based on its specific training. And I do get that ablation is a thing, but that is not what they are discussing there. They are only removing one avenue for the model to 'cheat'. For all we know, some of that data may have been part of its training set already...