If we had super-smart AI with low enough latency and high enough speed, would the perceived need for (and usefulness of) running multiple agents evaporate? Sure, you might want to start working on the prompt or user story for something else while the agent works on the first thing, but in my thought experiment there wouldn't be a "while", because it'd already be done by the time your hand leaves the enter key.
If they are interacting with the world and with tools like web research, compiles, deploys, end-to-end test runs, etc., then no.
(Maybe you can argue that you could then do everything with an event-driven single agent - like async for LLMs - if you don't mind having a single, very ADHD context.)
Mindstorms and the original RCX were a huge part of my college memories; I had a professor who used them for one of our classes. I built a fairly large "dragon" robot where the head + neck were actually a robot arm that could grip things, and it moved on treads. It took three RCXs "networked" together to get enough inputs and outputs for all the motors and functions. I accomplished the 3-way communication by making a parabolic reflector for the IR out of white cardstock; the three RCXs sat side by side and all aimed at the same reflector. The one in the middle controlled the outer two; each outer one knew to reply only when the first part of the message addressed it, so there was no contention.
I left that model with the university, which kept it as a display; I felt a little guilty having used up most of three Mindstorms sets for my project. Later I bought my own Mindstorms set, and then one of my then-wife's coworkers - whom we didn't know, and I don't know how she knew of me - randomly gave her a second Mindstorms set to give to me. I keep meaning to re-create this dragon robot someday, leaner so it only takes two RCXs. I have a lighter design now for the gripper + wrist section which, like the rocket equation, would allow the whole rest to be smaller if the last stage is smaller.
I once attended a talk by someone who is or was big in the node.js world. He opened with the premise, "a static type check is just a stand-in for a unit test."
I wanted to throw a shoe at him. A static type check doesn't stand in for "a" unit test; static typing stands in for an unbounded number of unit tests.
Put another way, this common misconception among users of languages like JavaScript and Python - that unit testing is just as good as type checking (plus more flexible) - is a confusion between the "exists" and "for all" quantifiers.
Plus, it is simply more enjoyable to design the types in your program than to write unit tests. The fun factor comes from operating on a higher level of abstraction and engages more of your brain’s puzzle-solving mode than just writing unit tests. Making yourself think about “for all x” rather than a concrete x forces your brain to consider deeply the properties of x being used.
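A toy sketch of the distinction in Python (the function and test here are hypothetical; assume a checker like mypy): the unit test witnesses one concrete input, while the annotation is a universal claim over every call site.

```python
def normalize_email(raw: str) -> str:
    """Lowercase and strip an email address."""
    return raw.strip().lower()

# "Exists": this test witnesses a single concrete input that behaves.
def test_normalize_email() -> None:
    assert normalize_email("  Bob@Example.COM ") == "bob@example.com"

# "For all": the str annotation is a claim about every call site;
# a checker like mypy rejects, e.g., normalize_email(None) or
# normalize_email(42) without anyone writing a test per bad input.
```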
> it is simply more enjoyable to design the types in your program than to write unit tests.
I have tried both and I have no idea what you're talking about.
> Making yourself think about “for all x” rather than a concrete x forces your brain to consider deeply the properties of x being used.
The entire point of dynamic typing is that you can think about interfaces rather than concrete types, which entails deep consideration of the properties of the object (semantics of the provided interface).
That's not the entire point of dynamic typing, because all the interface stuff comes from statically typed languages. Some* dynamic languages borrowed it, but most use "implicit" interfaces - where the interface is whatever kind of works, I guess.
> because all the interface stuff comes from statically typed languages.
No, it doesn't. It comes from theory that came after the languages.
> Some* dynamic languages borrowed it, but most use "implicit" interfaces
An implicit interface is an interface, and is exactly the sort of thing I'm talking about in GP. The point is that you think about the object in terms of its capabilities, rather than some proven-up-front categorization that it fits into. What it does, not what it is.
> "a static type check is just a stand-in for a unit test."
This is not an original argument. Rich Hickey made a similar argument in his "Simple Made Easy" talk in 2011, though his focus was on the fact that every bug that exists in a software system has passed unnoticed through both a type checker and a test suite. And even before that, similar ideas about test suites being a suitable replacement for a type checker had percolated through the Python and Ruby communities, too.
I distinctly remember that the "tests make static type checks unnecessary" attitude was in fact so prevalent in the JavaScript community that TypeScript had a really hard time getting adoption in its first 3-4 years; only the introduction of VSCode in 2015 and the subsequent growth of its market share over Atom and Sublime Text got more people exposed to TypeScript and the benefits of a type checker. Overall it took almost 10 years for TypeScript to become the "default" language for web projects.
Besides, it's not like types don't matter in dynamically typed languages. The (competent) programmer still needs to keep types in their head while programming. "Can this function work with a float, or must I pass an int?" "This function expects an iterable, but what happens if I pass a string?" Etc.
I started my career with JavaScript and Python, but over the years I've come to the conclusion that a language that hides types from programmers and does implicit conversion magic in the background does not deliver a better DX. It might make the language more approachable initially, and the idea of faster prototyping might be appealing, but it very quickly leads to maintenance problems and bugs. Before type hinting tools for Python became popular, I worked on many projects where `TypeError` was the #1 exception in Sentry by a large margin.
Gradual and optional typing is better than nothing, but IME if the language doesn't require it, most programmers are lazy and will do the bare minimum rather than properly add type declarations. Especially with things like TypeScript, which makes many declarations difficult to read, write, and understand.
I think that type inference is a solid middle ground. Types are still statically declared, but the compiler is smart enough to not bother the developer when the type is obvious.
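A minimal Python/mypy sketch of that middle ground (the function is made up): only the boundary is annotated, and the checker infers the types of everything inside.

```python
def describe_user(name: str, age: int) -> str:
    # The checker infers the types of these locals from the expressions;
    # nothing inside the body needs an annotation.
    greeting = "Hello, " + name
    years_to_retirement = 65 - age
    return f"{greeting} ({years_to_retirement} years to retirement)"
```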
> Before type hinting tools for Python became popular, I worked on many projects where `TypeError` was the #1 exception in Sentry by a large margin.
My experience is radically different. `ValueError` is far more common in my un-annotated Python, and the most common cause of `TypeError` anyway is the wrong order or number of arguments after a refactoring.
Hmm, I could be misremembering whether it was `ValueError` or `TypeError`. This was a few years ago. I do know that typing issues were always the most frequent in any Python project I have worked on.
I’ve been doing Python and TypeScript professionally - Python for almost two decades, TypeScript for the last 5 years - and I can very confidently say that it doesn’t matter.
Besides, you seem to be confusing Python run-time with Python typecheck-time - a theoretically unfortunate, but again practically irrelevant, distinction. (Unfortunate since Python typecheck is basically a different language than Python execution; irrelevant, because the right subsets of both align well.)
The distinction you are trying to make is nonsensical in Python's object model. Types are inherently callable, and calling them constructs (i.e. instantiates) the type (normally; this can be overridden, by design). There is also no type->kind->category hierarchy; `type` itself is an object, which is its own type.
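A few REPL-sized lines illustrate the point (nothing exotic here, just stock CPython): calling a type constructs an instance, and `type` is an ordinary object that is its own type.

```python
# Calling a type constructs (instantiates) it.
n = int("42")                  # int is callable; this builds an int instance
assert n == 42

# Types are themselves ordinary objects; there is no separate kind/category layer.
assert isinstance(int, type)   # the type of int is type
assert type(type) is type      # ...and type is its own type
```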
When you're at a level of theory where terms like "type constructor" are natural, it's unreasonable to expect any of it to be applicable to Python. This is why the Haskell people speak of dynamically-typed languages in the Python mold as "untyped" regardless of their attitude towards implicit casts.
And I love it, and have been using it for decades, and write beautiful things where the annotations hardly ever seem worth the effort — perhaps for documentation, but not for a static checker. Then I look at other, newer Pythonistas trying to figure out how to write complex generic type expressions (and sacrificing backwards compatibility as they keep up with the churn of Python figuring out how to offer useful annotation syntax) and deal with covariance vs contravariance etc. and I just smile.
A unit test is a functional assertion. A type is a semantic construct that can provide that, but it provides a lot more.
As a trivial example, if I create a type alias from "string" to "foobarId," I can now (assuming a compliant language) prevent code that consumes foobarIds from accidentally consuming a plain string.
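In Python, for instance, the way to get that guarantee is `typing.NewType` rather than a bare alias (a plain alias is transparent to the checker); a sketch using the hypothetical `FoobarId`:

```python
from typing import NewType

FoobarId = NewType("FoobarId", str)   # distinct from plain str as far as the checker cares

def load_foobar(foobar_id: FoobarId) -> None:
    print("loading", foobar_id)

load_foobar(FoobarId("fb_123"))   # OK: explicitly wrapped
load_foobar("fb_123")             # rejected by mypy: "str" is not "FoobarId"
```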
You can run a third-party linter on those comments, but you have to hope that they're correct. There are usually some checks for that, but they're only reliable in trivial cases.
This is not static typing any more than "you can use emscripten to transpile JavaScript to C" means that JavaScript is a low level language with native assembly support. It's a huge step forward from "no system at all" and I'm thrilled it exists, but it's hardly the same thing.
It's actually remarkable how with the success of TypeScript so many other dynamic languages switched to gradual typing.
Erlang and Clojure were the early ones, TypeScript followed, and now Python, Ruby, and even Perl have ways to specify types and type check your programs.
He's probably conflating static and strong typing.
C is statically typed but weakly typed - you need to throw away types to do a bunch of run-of-the-mill things. Python is dynamically typed but strongly typed - it will just fail if types don't resolve.
C# and C++ are both statically typed and strongly typed, although C# more than C++ in practice.
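To make the Python half of that concrete (a couple of interpreter lines, nothing more): the types live on the values rather than the variables, and mismatches fail loudly instead of being silently coerced the way C would convert or reinterpret them.

```python
x = "1"          # dynamically typed: the type travels with the value, not the variable

try:
    x + 1        # strongly typed: no silent coercion between str and int
except TypeError as e:
    print(e)     # can only concatenate str (not "int") to str
```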
Tell me more please: how does one use types in Python? Unfortunately I write Python professionally these days (it is the language that has all the libraries) and hate it with a passion.
Good luck using static typing to model many real-world unit tests in the programming languages people use most. I'll start with an easy example: those records should be sorted by date of birth. We can move on to more complicated scenarios.
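For concreteness, here's that easy example as a plain unit test in Python (the `Record` shape and the `fetch_records` stand-in are hypothetical): "comes back sorted by date of birth" is a property of the values that mainstream type systems have no vocabulary for, but a test states it in a few lines.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Record:                         # hypothetical record shape
    name: str
    date_of_birth: date

def fetch_records() -> list[Record]:  # stand-in for the code under test
    return [Record("Ada", date(1815, 12, 10)),
            Record("Alan", date(1912, 6, 23))]

def test_records_sorted_by_date_of_birth() -> None:
    dobs = [r.date_of_birth for r in fetch_records()]
    assert dobs == sorted(dobs), "records must be sorted by date of birth"
```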
No. They refuted the claim that "a static type check is just a stand-in for a unit test". That is a claim that you can just remove your type checks and replace them with unit tests at no loss. The comment stated that removing a type check just so you can replace it with a unit test is inferior. The prior state was already presupposed to have a type check, or a type-checkable condition, that you could replace.
That is the literal converse of what the response attributed to the comment - namely, that all unit tests can be replaced with type checks. Those are not at all the same claim.
To make it even more clear, the comment said: I saw a talk that claimed Type Check -> Unit Test, and I said that is silly.
The response said: Unit Test -> Type Check is not reasonable, so clearly your claim that Type Check -> Unit Test is silly is wrong.
> A static type check doesn't stand in for "a" unit test; static typing stands in for an unbounded number of unit tests.
You have conflated "a static type check" with "static typing". Unit tests stand in, in the same way, for an unbounded number of states of real-world input. They're simply being subjected to a trial verification system rather than a proof system. It turns out that writing proofs is not very many people's idea of a good time, even in the programming world. And the concept of "type" that's normally grokked is anemic anyway.
> Put another way...
Rhetoric like this is unconvincing and frankly insulting. You pass off your taste and opinion as fact, while failing to understand opposing arguments.
When I was trying to improve compile times for my game engine, I ended up using compiled size as a proxy measure. Although the correlation is imperfect, the fact that compiled size is deterministic across build runs, and even across builds on different machines, makes it easier to work with than wall-clock time.
While you can make the compiler run longer to squeeze the binary size down, the compiler has a baseline set of passes that it runs over the IR of the program being compiled. These passes generally take time proportional to the length of the input IR, so a larger program takes longer to compile. Most passes aren't throwing away huge amounts of instructions (dead code elimination is a notable exception, but even there the analysis to figure out which pieces of dead code can be eliminated still operates on the input IR). So it's not a perfect proxy, but in general, if the output of your compiler is 2MB of code, it probably took longer to process the input and spit out that 2MB than if the output was 200KB.
Of course there are cases where a huge template structure with complex instantiation and constexpr code compiles down to a single constant, but for most parts of the code I would assume a rough proportionality between source size, compile time, and binary size.
This framework is a good example of something I call, "there's plenty of room in the middle." (In analogy to Feynman's "plenty of room at the bottom" about nanotechnology.)
Much like how cosmic inflation in the early universe left the imprint of the microwave background radiation written exponentially large across the sky, I believe the exponential expansion of computing during the era of Moore's law left whole sections of scale in software not fully explored.
Specifically, as "average app sizes" went from 1K to 10K to 100K to 1M to 10M to 100M+, you could imagine that the 1K-10K range got fully explored because the space wasn't that big to begin with, and that the 1M-10M range got, if not explored, at least inhabited. But it could be that the middle range - "neat applications that weigh in at 100K+ but less than 1M" - didn't get completely explored because we didn't spend enough time in that era of computing.
A similar thing may be playing out in AI scaling, where we went from tiny to medium to huge model sizes, but now some new participants are going back and doing neat things with medium sized models.
I want so much to be in the business of selling my own traditional downloadable software that I've thought about (in the absence of an idea) just putting together a do-nothing application with payment, installer, configuration dialogs, bug feedback - everything but a raison d'être.
The irony is in my day job I am developing a traditional downloadable Windows application which will come with an immediate user base. But although I have considerable discretion over the project, it isn't mine (in an intellectual property sense), and I'm not getting rich off it.
Four companies: the doctor's office, which may be controlled by profit-optimizing administration; the for-profit pharma corporation; the for-profit medical insurance company; and the company the insured works for, which picks the available insurance plan(s).
The doctor's office won't make money from prescriptions for a drug, and the pharmaceutical company will sell their drug to whoever will buy it. Both of those entities also exist in non-US medication transactions, so I'm not sure that's relevant. The point is that the insurer and the employer are two extra middlemen.
Not every case of reflux is caused by the same things. Some people's esophageal sphincter (mine, for instance) just... decides to let go at random times. For me, as far as I can tell, the frequency of that happening isn't really affected one way or the other by the pH of my stomach acid, but the pH sure has an effect on the consequences.
I am assuming the best thing I can do for it is lose weight, but that's easier said than done.
If you've tried the traditional diet and exercise and couldn't stick to it, don't let the stigma around GLP-1s keep you from talking to your doctor about them. They are a powerful tool. They really helped me.
50K context is an interesting number, because I think there's a lot to explore with software within an order of magnitude of that size. With apologies to Richard Feynman, I call it "there's plenty of room in the middle." My idea there is that the rapid expansion of computing power during the reign of Moore's law left the design space of "medium-sized" programs under-explored. These would be programs in the range of hundreds of kilobytes to low megabytes.