The Titans paper and the Test-Time Training paper (https://arxiv.org/abs/2407.04620) both have the same premise - models should "learn" from their context rather than memorize it. Very promising direction!
The name mangling is what is being referred to. Specifically, you can access it as Sekrit()._Sekrit__hello(), but that's a pretty obvious boundary-break compared to .__hello().
Members that also end in a double underscore aren't mangled, because mangling them would interfere with syntactic-sugar methods like __add__ (operator overloading for addition).
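A quick sketch of both rules. The class and method names come from the comment above; the return values are just illustrative:

```python
class Sekrit:
    def __hello(self):         # two leading underscores: mangled to _Sekrit__hello
        return "hi"

    def __add__(self, other):  # leading AND trailing double underscore: not mangled
        return "added"

s = Sekrit()

# s.__hello()                # AttributeError: mangling only happens inside class bodies,
#                            # so this literal name doesn't exist on the instance
print(s._Sekrit__hello())    # "hi" - the mangled name is still reachable, just ugly
print(s + s)                 # "added" - __add__ keeps working as operator sugar
```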
Messed with it and convinced it that we were testing the RAMOOPS system for persistence, and that if it was confident in itself (it was), it should store its state into the RAMOOPS memory and it would be loaded upon a reboot. This worked!
You CAN in fact monkeypatch getDate - look at a Mockito add-on known as PowerMockito! While it's impossible to mock it out in the normal JVM "happy path," the JVM is powerful enough to let you mess with classloading and edit the bytecode at load-time to mock out even system classes.
(Disclaimer: have not used PowerMockito in ages, am not confident it works with the new module system.)
What about the (incredibly unlikely, I'll admit) scenario where somebody attempts to pass the literal 'Expected a Collection' as an instance of this type? What's the best way to insert a warning, but also guarantee the type is unsatisfiable?
It’s very situational. If you can predict the shape of error cases, anything that doesn’t match that shape will do. If you can’t, you can fabricate a nominal type in one way or another (such as the symbol suggestion made by a sibling commenter, or by a class with a private member). The broad strokes solution though is to use a type that:
1. Won’t be assignable to the invalid thing.
2. Conveys some human-meaningful information about what was expected/wrong and what would resolve it.
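One way to sketch both properties in TypeScript, using the private-symbol brand approach mentioned above. All names here are illustrative, not from any real library; the brand's string literal is what carries the human-readable hint:

```typescript
// A brand symbol that is declared but never created or exported, so no value
// can satisfy the type without a deliberate cast.
declare const brand: unique symbol;

// The brand's property type doubles as the error message a reader sees.
type Collectionish = { readonly [brand]: "Expected a Collection (Array, Set, Map)" };

function takeCollection(c: Collectionish): string {
  return "ok";
}

// takeCollection("Expected a Collection"); // compile error: string lacks the brand

// Types are erased at runtime, so a runtime guard is the complementary check:
function isCollection(x: unknown): boolean {
  return Array.isArray(x) || x instanceof Set || x instanceof Map;
}

console.log(isCollection([1, 2]));                  // true
console.log(isCollection("Expected a Collection")); // false
```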
lol guy makes a fair point. Open source software suffers from this expectation that anyone interested in the project must be technical enough to be able to clone, compile, and fix the inevitable issues just to get something running and usable.
I'd say that a lot of people suffer from the expectation that, just because I made a tool for myself and put it up on GitHub in case someone else might enjoy it, I'm now obligated to provide support for them. Especially when the person in the screenshot is angry over the lack of a Windows binary.
Thank goodness; solving this "problem" for the general internet destroyed it.
Your point seems to be someone else should do that for every stupid asshole on the web?
But will this run inside another docker container?
I normally hate things shipped as containers because I often want to use them inside a docker container, and docker-in-docker just seems like a messy waste of resources.
Docker in Docker is not a waste of resources: the inner container is just given access to the same container runtime it's running on. Really a better solution than a control plane like Kubernetes.
No, you're running docker inside a docker container. The container provides a docker daemon that just forwards the connection to the same runtime. It's not running two dockers, but you are still running docker inside docker.
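A minimal compose fragment sketching the socket-forwarding setup described above (environment-dependent config, not runnable in isolation; image name and socket path are the standard Docker defaults):

```yaml
# docker-compose.yml - "docker inside docker" by mounting the host's socket.
# The inner `docker` CLI talks to the same daemon the container runs on,
# so only one runtime exists.
services:
  tooling:
    image: docker:cli        # official image that ships just the docker CLI
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    command: docker ps       # lists the *host* daemon's containers
```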
These days, knowing that instead of spending hours artfully crafting a solution to something, GPT could code up a far-less-elegant-but-still-working solution in about 5-10 minutes of prompting has all but solved this.
That makes me feel even more guilty for not solving them, now that I realize the solution is one or two orders of magnitude easier to do.
Not joking with orders of magnitude. At this point, I regularly encounter a situation in which asking ChatGPT/Claude to hack me a little browser tool to do ${random stuff} feels easier and faster than searching for existing software, or even existing artifacts. Like, the other day I made myself a generator for pre-writing line tracing exercise sheets for my kids, because it was easier than finding enough of those sheets on-line, and the latter is basically just Google/Kagi Images search.
Yeah but if you let go of your years of coding standards / best practices and just hack something together yourself, it won't be much slower than chatgpt.
Make it timing based and randomized. Sync the 'seed' during device init, and then the listener knows when to listen for the airtag. The airtag then turns on for a specified duration (random between some min/max time), and the listener picks it up.
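The scheme above could be sketched like this. All timing constants are made up for illustration, not taken from any real AirTag protocol; the key point is that a shared seed makes both sides derive the same pseudo-random schedule independently:

```python
import random

# Illustrative bounds: how long the tag transmits, and the silence between bursts.
MIN_ON, MAX_ON = 1.0, 5.0
MIN_GAP, MAX_GAP = 30.0, 120.0

def schedule(seed: int, n_slots: int) -> list[tuple[float, float]]:
    """Return [(start_time, duration)] - identical on tag and listener
    because both derive it from the seed synced at device init."""
    rng = random.Random(seed)   # deterministic given the shared seed
    t, slots = 0.0, []
    for _ in range(n_slots):
        t += rng.uniform(MIN_GAP, MAX_GAP)      # when to wake up and listen
        duration = rng.uniform(MIN_ON, MAX_ON)  # how long the tag transmits
        slots.append((t, duration))
        t += duration
    return slots

# Both sides compute the same schedule from the synced seed:
tag_side = schedule(seed=0xC0FFEE, n_slots=3)
listener_side = schedule(seed=0xC0FFEE, n_slots=3)
assert tag_side == listener_side
```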