Hacker News | sethops1's comments

Or just 'mv ~ /dev/null'

Chugging away at building out https://tickerfeed.net

- Added blog post creation

- Improved moderation tools

- Rewrote an upstream client to move off deprecated API

- Lots of improvements around CSS/UI (many thanks to Gemini)

- Fixed lots of bugs


I have noticed the same thing. And the blogs are still there (I checked), and Marginalia returns them as top results when I search the relevant keywords. Google just really doesn't care.

Source: the guy the company was named after

Where did you read that?

If anything my experience has been the opposite of this. LLM detection is guesswork for an LLM.


Leaks were saying it's a Friend competitor https://friend.com

Just a chatbot in a box.


Surely the move now would be to rename the app to Netflix Max

To keep it more in line with other brands:

- Netflix Max: basic subscription with ads, no 4K

- Netflix Max Ultra: basic subscription with ads, but with 4K

- Netflix Pro Max: standard subscription without ads, no 4K

- Netflix Pro Max Ultra: standard subscription without ads, with 4K

You can add a Mobile VIP package for one extra viewer outside your house, but only for Pro plans.


Let's be honest, all the Netflix plans will have ads, just like they do now. They might not interrupt your show while you're in the middle of it, but you'll get ads no matter what: ads as soon as the credits roll, a barrage of full-screen ads if you pause a show for more than 10 seconds, full-screen ads the moment you open the app, etc.

And for shows they produce, product placements galore. Like when characters suddenly started saying "just bing it!" to each other.

There's still the one tier that comes with Dolby Atmos and access to the WB back catalogue.

Netflix Plus (Netflix+), a side subscription on top of all that which lets you sync different playback screens to one account, or some other esoteric value-add which muddies the waters.

If we're doing suggestions, I vote for "Maxflix"

"Maxflix" sounds like a name for a porn studio, but I guess it's better than the alternative of "NetB.O."

Very close to Netflix's core business: violence.

I don't find Netflix "live action" movies to be super violent, and there are a lot of non-violent shows. Its animations can be quite violent, though (and those are good quality). From the little I know, it, like every other big platform, shies away from sex. This has been a theme for decades: it's OK to be violent, but sex is a no-no.

MaxNet if you want to go final form Fortune 100

HuFlixPrime was my portmanteau of choice around 2010, mainly because I felt cable-company-style pricing encroaching: more and more folks adding multiple streaming services to get close to what cable packages could offer.

I still like the name.

Edit: didn't Netflix have a feature called "Netflix Max" on the PS3 app? I remember really liking it for finding what to watch.


And then to Max

And then to X

then to NetMax

Endgame: Netflix renames itself to HBO

Yeah this one seems like such low hanging fruit and would be a great convenience.

> Testing GPT-5, Claude, Gemini, Grok, and DeepSeek with $100K each over 8 months of backtested trading

So the results are meaningless - these LLMs have the advantage of foresight over historical data.


> We were cautious to only run after each model’s training cutoff dates for the LLM models. That way we could be sure models couldn’t have memorized market outcomes.

Even if it is after the cutoff date, wouldn't the models be able to query external sources to get data that could positively impact them? If the returns were smaller I could reasonably believe it, but beating the S&P 500's returns by 4x+ strains credulity.

We used the LLMs' APIs and provided custom tools, like a stock ticker tool that only gave the model stock price information up to the backtest date. We did this for news APIs, technical indicator APIs, etc. It took quite a long time to make sure there wasn't any data leakage. The whole process took us about a month or two to build out.
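The time-gated tool described above can be sketched roughly like this (the function names and price data here are hypothetical, not from the actual setup; the idea is just that each tool closes over the simulation's current date and refuses anything later):

```python
from datetime import date

# Hypothetical pre-fetched daily closes: {ticker: {date: price}}
PRICE_HISTORY = {
    "AAPL": {
        date(2024, 1, 2): 185.64,
        date(2024, 1, 3): 184.25,
        date(2024, 6, 3): 194.03,
    },
}

def make_price_tool(backtest_date: date):
    """Return a price-lookup tool that only sees data up to backtest_date."""
    def stock_price(ticker: str, on: date) -> float:
        if on > backtest_date:
            # The leakage guard: the model cannot see its own future.
            raise ValueError(f"{on} is after the simulated date {backtest_date}")
        return PRICE_HISTORY[ticker][on]
    return stock_price

tool = make_price_tool(date(2024, 1, 3))
print(tool("AAPL", date(2024, 1, 2)))  # allowed: in the simulated past
try:
    tool("AAPL", date(2024, 6, 3))     # blocked: future relative to the backtest clock
except ValueError as e:
    print("blocked:", e)
```

The same wrapper pattern would apply to the news and technical-indicator tools: every data source is filtered by the backtest clock before the model ever sees it.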

I have a hunch Grok's model cutoff is not accurate and it somehow has updated weights: they still call it the same Grok model, since the params and size are unchanged, but they are incrementally training it in the background. Of course I don't know this, but it's what I would do in their situation, since ongoing incremental training could be a neat trick to improve their results against competitors, even if marginal. I also wouldn't trust the models to honestly disclose their decision process.

That said. This is a fascinating area of research and I do think LLM driven fundamental investing and trading has a future.


I know very little about what the environment where they run these models looks like, but surely they have access to tools like vector embeddings with more current data on various topics?

If they could "see" the future and exploit that they'd probably have much higher returns.

I would say that if these models could independently create such high returns, all these companies would shut down external access to the models and just have their own money-making machine. :)

56% over 8 months with the constraints provided are pretty good results for Grok.

You can (via the API, or to a lesser degree through the settings in the web client) determine what tools, if any, a model can use.
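For example, in the OpenAI-style chat APIs the allowed tools are just a parameter of the request, so exposing only a restricted list (or none) limits what the model can call. The payload below is illustrative; field names follow the OpenAI chat-completions function-calling schema, and the `stock_price` tool is a made-up example:

```python
# Illustrative request payload in the OpenAI chat-completions style;
# exact schemas vary by provider.
request = {
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "What was AAPL's close yesterday?"}],
    # Only the tools listed here are callable; omit the key (or pass [])
    # to disallow tool use entirely.
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "stock_price",
                "description": "Daily close for a ticker, gated to the backtest date",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "ticker": {"type": "string"},
                        "date": {"type": "string", "format": "date"},
                    },
                    "required": ["ticker", "date"],
                },
            },
        }
    ],
}
print(len(request["tools"]))  # one tool exposed, nothing else callable
```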

But isn't that more about which MCPs you can configure it to use? Do we have any idea what secret-sauce stuff they have? Surely it's not just a raw model that they are executing?

With the exception that it doesn't seem possible to fully disable this for Grok 4.

which is curiously the best model …

> We time segmented the APIs to make sure that the simulation isn’t leaking the future into the model’s context.

I wish they could explain what this actually means.


It's a very silly way of saying that the data the LLMs had access to was presented in chronological order, so that, for instance, when they were trading stocks at the start of the 8-month window, the LLMs could not just query their APIs to see data from the end of the window.

Overall, it does sound weird. On the one hand, assuming I properly understand what they are saying, they removed the model's ability to cheat based on its specific training. And I do get that ablation is a thing, but this is not what they are discussing here. They are only removing one avenue for the model to 'cheat'. For all we know, some of that data may have been part of its training set already...

Not sure how sound the analysis is but they did apparently actually think of that.

That's only if they're trained on data more recent than 8 months ago
