Naturally, since people are buying things, they are technically consuming.
I mean that collecting a relatively small number of durable and visually pleasing objects isn't really the worst flavor of consumerism, even if it seems pointless to some people.
I agree we have a massive problem with over-consumption (most glaringly with things like fast fashion), but I'm not sure record collectors are a big problem.
The problem of track isolation is sometimes underconstrained, so any AI system that does this will probably invent "neat parts" for us to hear that weren't necessarily in the original recording. It feels like using a super-resolution model to "recover" details of your great-grandma's wedding dress from an old photo.
Because then you run into an issue when your 'n' changes. Plus, where do you increment it? That would require a single fault-tolerant ticker (some systems do that, btw).
Once you encode the shard number into the ID, you get:
- instantly* know which shard to query
- each shard has its own ticker
* programmatically, and maybe visually as well, depending on the implementation
I had IDs that encoded: entity type (IIRC 4 bits?), timestamp, shard, and sequence per shard. We even had an admin page where you could paste an ID and it would decode it.
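For illustration, here's a minimal sketch of that kind of layout. The 4-bit entity type is from the comment above; the other field widths (41-bit millisecond timestamp, 10-bit shard, 9-bit per-shard sequence) are assumptions, not the original scheme:

```python
# Sketch of a shard-encoded ID. Field widths other than the 4-bit
# entity type are assumed for illustration (64 bits total).
TYPE_BITS, TS_BITS, SHARD_BITS, SEQ_BITS = 4, 41, 10, 9

def encode_id(entity_type: int, ts_ms: int, shard: int, seq: int) -> int:
    assert entity_type < (1 << TYPE_BITS) and ts_ms < (1 << TS_BITS)
    assert shard < (1 << SHARD_BITS) and seq < (1 << SEQ_BITS)
    return ((entity_type << (TS_BITS + SHARD_BITS + SEQ_BITS))
            | (ts_ms << (SHARD_BITS + SEQ_BITS))
            | (shard << SEQ_BITS)
            | seq)

def decode_id(id_: int) -> dict:
    # What the "paste an ID into the admin page" decoder would do.
    return {
        "entity_type": id_ >> (TS_BITS + SHARD_BITS + SEQ_BITS),
        "ts_ms": (id_ >> (SHARD_BITS + SEQ_BITS)) & ((1 << TS_BITS) - 1),
        "shard": (id_ >> SEQ_BITS) & ((1 << SHARD_BITS) - 1),
        "seq": id_ & ((1 << SEQ_BITS) - 1),
    }

# e.g. entity type 3, some millisecond timestamp, shard 42, sequence 7
i = encode_id(3, 1_700_000_000_000, 42, 7)
assert decode_id(i)["shard"] == 42  # route the query with no lookup table
```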
id % n is fine for a cache, because you can just throw the whole thing away and repopulate it, or for cases where 'n' never changes, but it usually does.
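To make the "n changes" problem concrete, a tiny illustration (the 4-to-5 shard counts are arbitrary):

```python
# Going from n=4 to n=5 shards reassigns most IDs under id % n, so
# the data would have to be mass-migrated: fine for a cache you can
# repopulate, painful for a primary store.
moved = sum(1 for i in range(100) if i % 4 != i % 5)
print(f"{moved}/100 IDs land on a different shard when n goes 4 -> 5")  # 80/100
```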
Up until Van Buren v. United States in 2021, ToS violations were sometimes prosecuted as unauthorized access under the CFAA. I suspect there are other jurisdictions that still do the equivalent of that.
Yeah, the needle-in-a-haystack tests are so stupid. It seems clear with LLMs that performance degrades massively with context size, yet those tests claim the model performs perfectly.
As someone who regularly abuses Gemini with a 90%-full context: the model's performance does degrade for sure, but I wouldn't call it massive.
I can't show any evidence as I don't have such tests, but it's like coding normally vs coding after a beer or two.
For the massive effect, fill it to 95% and we're talking vodka shots. 99%? A zombie who can code. But perhaps that's not fair given a 1M-token context size.