The author seems to have either misunderstood the API they're referencing or is assuming US-only addresses. The specific endpoint they're calling appears to be for US-specific ZIP codes.
The point stands that this doesn't "just work" for global services.
And TDD would say "start with a failing test; it is an executable specification for the code you will need to write to make it pass. Then make it pass, then refactor into nice code that still passes all tests: red, green, refactor, all under tests."
This is an interesting idea. I wonder if it's more that we have different "levels" of memory, though, rather than generally "poor" memory.
I'm reminded of an article on the front page recently about the use of bloom filters for search. Would something like a bloom filter per-topic make it easier to link seemingly unrelated ideas?
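To make the idea concrete, here's a toy sketch of one filter per topic, with a tiny hand-rolled Bloom filter (the topics and item names are illustrative; a real system would size m and k from expected cardinality and target false-positive rate):

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// Bloom is a tiny Bloom filter: k hash probes into an m-bit array.
// False positives are possible; false negatives are not.
type Bloom struct {
	bits []uint64
	m, k uint64
}

func NewBloom(m, k uint64) *Bloom {
	return &Bloom{bits: make([]uint64, (m+63)/64), m: m, k: k}
}

// probes derives k indices from one FNV hash (Kirsch-Mitzenmacher style).
func (b *Bloom) probes(s string) []uint64 {
	h := fnv.New64a()
	h.Write([]byte(s))
	h1 := h.Sum64()
	h2 := h1>>33 | h1<<31
	idx := make([]uint64, b.k)
	for i := uint64(0); i < b.k; i++ {
		idx[i] = (h1 + i*h2) % b.m
	}
	return idx
}

func (b *Bloom) Add(s string) {
	for _, i := range b.probes(s) {
		b.bits[i/64] |= 1 << (i % 64)
	}
}

func (b *Bloom) MightContain(s string) bool {
	for _, i := range b.probes(s) {
		if b.bits[i/64]&(1<<(i%64)) == 0 {
			return false
		}
	}
	return true
}

func main() {
	// One filter per topic; an idea that probably appears in several
	// filters hints at a cross-topic link worth surfacing.
	topics := map[string]*Bloom{
		"crdts":     NewBloom(1024, 4),
		"databases": NewBloom(1024, 4),
	}
	topics["crdts"].Add("vector clocks")
	topics["databases"].Add("vector clocks")
	topics["databases"].Add("b-trees")

	for name, f := range topics {
		fmt.Println(name, f.MightContain("vector clocks"))
	}
}
```

The attraction is that the per-topic filters are tiny and the cross-topic check is just k bit tests per filter, so scanning every topic for a candidate link stays cheap.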
The details of the search are just details. The way current "LLM Apps" like chatgpt work are just fine. They access material from various places, put it in the context, the LLM reacts.
A CRDT is any data structure that meets the definition (its merge is associative, commutative, idempotent, etc.)
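The simplest illustration is a grow-only set, where merge is just set union; sketched in Go (a minimal state-based example, not a production CRDT library):

```go
package main

import "fmt"

// GSet is a grow-only set, one of the simplest state-based CRDTs.
// Its merge (set union) is associative, commutative, and idempotent,
// so replicas converge no matter how merges are ordered or repeated.
type GSet map[string]struct{}

func (s GSet) Add(v string)           { s[v] = struct{}{} }
func (s GSet) Contains(v string) bool { _, ok := s[v]; return ok }

// Merge returns the union of two replicas' states.
func Merge(a, b GSet) GSet {
	out := GSet{}
	for v := range a {
		out[v] = struct{}{}
	}
	for v := range b {
		out[v] = struct{}{}
	}
	return out
}

func main() {
	r1, r2 := GSet{}, GSet{}
	r1.Add("a")
	r2.Add("b")
	// Merge(r1, r2) equals Merge(r2, r1), and merging r1 in again
	// changes nothing (idempotence).
	m := Merge(Merge(r1, r2), r1)
	fmt.Println(m.Contains("a"), m.Contains("b")) // true true
}
```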
Event Sourcing is not strictly designed to achieve eventual consistency in the face of concurrent writes, though. But that doesn't mean it can't be used that way!
I've also been considering an intent-based CRDT system for a while now (looking forward to checking out GP's link) and agree that it looks/sounds very much like Event Sourcing. It's worthwhile being clear on the definition of, and difference between, the two though!
The only point from the article I agree with strongly is putting the keyboard away for a bit, picking up a pencil and some paper, and trying out some rough sketches (though I think you can do this just as well at your regular desk).
I think the reasoning is that interfaces are implemented by a dynamic lookup. Part of Go's philosophy is that things that could be expensive (function calls) should be visually distinct from cheap things.
Struct field access is cheap, hopping through a dynamic dispatch table is less cheap.
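A small illustration of the distinction (note the compiler can sometimes devirtualize an interface call, so the cost isn't guaranteed; the type names here are made up):

```go
package main

import "fmt"

type Point struct{ X int }

type Xer interface{ GetX() int }

func (p Point) GetX() int { return p.X }

func main() {
	p := Point{X: 7}

	// Cheap: the compiler knows the struct layout, so this compiles
	// to a load at a fixed offset.
	direct := p.X

	// Less cheap: the call goes through the interface's method table
	// (a function-pointer lookup), which also usually defeats inlining.
	var i Xer = p
	dynamic := i.GetX()

	fmt.Println(direct, dynamic) // 7 7
}
```

Both expressions yield the same value; the visual difference (`p.X` vs `i.GetX()`) is exactly the cost signal the parent comment describes.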
I can't remember where I picked it up from, but nowadays I try to be mindful of when things are "accidentally" repeated and when they are "necessarily" repeated. Abstractions that encapsulate the latter tend to be a good idea regardless of how many times you've repeated a piece of code in practice.
Exactly, but distinguishing the two requires an excellent understanding of the problem space, and can’t at all be figured out in the solution space (i.e., by only looking at the code). But less experienced people only look at the code. In theory, a thousand repetitions would be fine if each one encodes an independent bit of information in the problem space.
This all strikes me as very similar to promise pipelining and distributed capabilities from an OCap system like E.
Any code sending an outbound request in reaction to a write is causally related and could be represented as a pipelined promise. The receiving system can then proceed in its work until it needs to "await" the incoming promise and can see whether it was broken due to a failure to persist some earlier write. This could also be handled at the network layer if the receiving system was external.
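A toy sketch of that await-and-see-if-broken shape, using a Go channel as a stand-in promise (all names are illustrative, and real promise pipelining as in E or Cap'n Proto also lets you forward calls on not-yet-resolved results, which this doesn't show):

```go
package main

import (
	"errors"
	"fmt"
)

// result stands in for a promise's resolution: either a value or
// the reason the promise was broken.
type result struct {
	val string
	err error
}

// persist kicks off an asynchronous write and immediately returns a
// promise for its outcome. fail=true models the earlier write failing.
func persist(record string, fail bool) <-chan result {
	ch := make(chan result, 1)
	go func() {
		if fail {
			ch <- result{err: errors.New("failed to persist " + record)}
			return
		}
		ch <- result{val: record + "@v1"}
	}()
	return ch
}

func main() {
	promise := persist("order-42", true)

	// The receiving system proceeds with unrelated work...
	fmt.Println("doing other work")

	// ...and only blocks when it finally needs the value; a broken
	// promise surfaces the earlier persistence failure right here.
	if r := <-promise; r.err != nil {
		fmt.Println("promise broken:", r.err)
	}
}
```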
I'm pretty sure I remember Kenton announcing that Cloudflare Workers now supports something like (or exactly) object capabilities and promise pipelining and his knowledge/interest in such systems is already reflected in Cap'n Proto RPC.
At present output gates operate on the scope of a single worker, blocking the output from being sent. It's easy to imagine, though, that we extend things so if you are sending a message to another worker (including a Durable Object), the message is sent immediately, but the destination worker becomes subject to the same output gate. Haven't done it yet but would definitely like to!