I was sort of surprised to see MCP become a buzzword because we've been building these kinds of systems with duct tape and chewing gum for ages. Standardization is nice, though. My advice: just ask your LLM nicely, and you should be safe :)
Yes, I think once you've got an LLM in the loop it's easy to be lazy and just use it to make every decision. But it's good to step back and ask whether there's a cheaper way; even some hardcoded logic can often do the job.
If you're talking about the spike in Q1 2020, there's nothing weird going on. That's from all the service workers getting laid off: they're typically lower paid, so once they're out of the "employed" pool they no longer drag the average down, which bumps it up.
>The usual weekly earnings data reflect only wage and salary earnings from work, not gross income from all sources. These data do not include the cash value of benefits such as employer-provided health insurance.
Pi appears whenever you're trying to mathematically describe an oscillating quantity, so it could be a circle or it could be a wave. I don't think it's the red flag you think it is.
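For example, take the textbook formula for a sinusoid (nothing specific to this discussion): x(t) = A·sin(2πft + φ). The 2π is just the angle of one full cycle, so π turns up any time a quantity repeats periodically, whether that's a point going around a circle or a wave.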
TBF, Typst internally also recompiles a number of times until a fixpoint is reached, but it is designed to limit which parts can depend on previous iterations and to reuse previous results for the parts that definitely didn't change.
That's the catch - you never repeat the same activity in exactly the same way.
A seemingly identical action, from the performer's point of view, is performed in a different environment each time it is repeated. Unless you are Laplace's demon, you can't say for sure that you're repeating the same action over and over, because the environment could change in the meantime in ways you can't imagine, and that could influence the outcome.
I just really hate that quote because it is detached from reality.
Isn't getting different results the norm in programming anyway? Developers usually don't make the effort to ensure idempotency or reproducible builds.
Normally, if you compile the same code twice on the same machine, you'll get the same result, even if it's not truly reproducible across machines or large gaps in time. And differences between machines or across time are usually small enough that they don't impact the observed behavior of the code, especially if you pin your dependencies.
However, with LaTeX, the output of the first run is often an input to the second run, so you get notably different results if you only compile it once vs. compiling twice. When I last wrote LaTeX about ten years ago, I usually encountered this with page numbers and tables of contents, since the page numbers couldn't be determined until the layout was complete. So the first pass would get the bulk of the layout and content in place, and then the second pass would do it all again, but this time with real page numbers. You would never expect to see something like this in a modern compiler, at least not in a way that's visible to the user.
(That said, it's been ten years, and I never compiled anything as long or complex as a PhD thesis, so I could be wrong about why you have to compile twice.)
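For a concrete illustration of the mechanism (a minimal sketch, not from anyone's actual thesis), cross-references and the table of contents are the classic case: \ref, \pageref, and \tableofcontents read numbers from the .aux/.toc files that the previous run wrote out.

    \documentclass{article}
    \begin{document}
    \tableofcontents  % filled from the .toc file written by the previous run
    \section{Results}\label{sec:results}
    Section~\ref{sec:results} starts on page~\pageref{sec:results}.
    % first run: prints "??" and an empty table of contents, writes .aux/.toc
    % second run: reads those files back and fills in the real numbers
    \end{document}

On the first run the references come out as "??"; the second run picks the numbers up from the auxiliary files, which is exactly the "output of the first run is an input to the second run" behavior described above.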
I wrote my PhD (physics) in LaTeX and I indeed needed to compile twice (at least) to have a correct DVI file.
That was 25 years ago, but apparently this part hasn't changed.
That said, I was at least sure that I would get an excellent result and not end up like my friend who used MS Word: one day his file was "locked", he could not add a single letter to it, and he had to retype everything.
Compared to that, my concern about where a figure would land in the final document was nothing.
Dunno - to me it feels like the LaTeX compiler should just rerun whatever it needs to, however many times it takes, until the output is done/usable, like basically every other compiler?
Imagine that your C or C++ compiler gave incorrect output until you had run it some number of times, and that the number of runs required wasn't obvious to the average user, so people just ran it again and again to be safe. It's absurd, yet we accept it for LaTeX.
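For what it's worth, that "run until stable" loop does exist, just as a wrapper rather than in the compiler itself: latexmk reruns the engine until the auxiliary files stop changing, so you don't have to guess the number of passes. A typical invocation (the file name here is just a placeholder):

    latexmk -pdf thesis.tex

It doesn't fix the underlying design, but it at least automates away the "run it again to be safe" ritual.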
People are just fearmongering when they suggest Iran would use them or give them to those who would. The real issue here is that once you have them, you basically entrench yourself as a regional power: if the regime started falling out of favor, all its neighbors would be obliged to come to its aid to protect the nukes. You would also be far more limited in how you fight your proxy wars. These are the things the involved parties are considering, not Armageddon fantasies.
I don’t understand the argument for being patient. If you think the new hire will lead to increased profits, there’s an opportunity cost every day you don’t have them on board. And sure, maybe you wait for the best person and they are more productive, but they might be out the door in a few years.
I have to say, including that detail was outstanding writing. It really upped the suspense and sense of horror even though you knew they’d make it out alive.
I agree with this. People use natural language for programs all the time: talking to a teammate while pair programming, clients describing features to developers in plain language. The details are there in the existing code and in the running conversation. This is why I much prefer coding with ChatGPT over a Copilot-type setup.