Algebraic effects seem very interesting. I have heard about this idea before, but assumed that it somehow belonged to the territory of static type systems. I am not a fan of static type systems, so I didn't look further into the idea.
But I found these two articles [1] about an earlier dynamic version of Eff (the new version is statically typed), which explain the idea nicely without introducing types or categories (well, they use "free algebra" and "unique homomorphism", just think "terms" and "evaluation" instead). I find it particularly intriguing that what Andrej Bauer describes there as "parameterised operation with generalised arity", I would just call an abstraction of shape [0, 1] (see [2]). So this might be helpful for using concepts from algebraic effects to turn abstraction algebra into a programming language.
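To sketch how I read that, in TypeScript terms (all names here are mine and purely illustrative, not Eff's or AL's): a term of the free algebra is either a plain value or an operation node, and the operation node's continuation plays the role of the generalised arity, i.e. the argument binding one variable in an abstraction of shape [0, 1].

    // A term over one effect signature: either a plain value, or an
    // operation applied to a parameter, together with a continuation
    // saying how the computation resumes once the operation yields a result.
    type Term<T> =
      | { tag: "return"; value: T }
      | { tag: "op"; name: string; param: unknown;
          cont: (result: unknown) => Term<T> };

    // "Evaluation" is the unique homomorphism out of the free algebra:
    // a handler assigns a meaning to each operation node.
    function evaluate<T, R>(
      t: Term<T>,
      onReturn: (v: T) => R,
      onOp: (name: string, param: unknown, k: (result: unknown) => R) => R
    ): R {
      return t.tag === "return"
        ? onReturn(t.value)
        : onOp(t.name, t.param, r => evaluate(t.cont(r), onReturn, onOp));
    }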
In normal programming languages, I see static type systems as a necessary evil: TypeScript is better than JavaScript, as long as you don't confuse types with terms...
But in a logic, types are superfluous: You already have a notion of truth, and types just overcomplicate things. That doesn't mean that you cannot have mathematical objects x and A → B, such that x ∈ A → B, of course. Here you can indeed use terms instead of types.
So in a logic, I think types represent a form of premature optimisation of the language that invariants are expressed in.
Your invocation of Strong AI (in the linked paper) seems like a restatement of the Sufficiently Smart Compiler [1] fallacy that’s been around forever in programming language debates. It’s hypothetical, not practical, so it doesn’t represent a solution to anything. Do you have any evidence to suggest that Strong AI is imminent?
I routinely describe code that I want in natural language, and it generates correct TypeScript code for me automatically. When it gets something wrong, I see that it is because of missing information, not because it is not smart enough. If I needed any more evidence for Strong AI since AlphaGo, that would be it.
I wouldn't call it vibe coding; I just have much more time to focus on the spec than on the code. I'd call that a sufficiently smart compiler.
This is a very reductive definition of types, if not a facile category error entirely (see: Curry–Howard), and what you call "premature optimization" -- if we're discussing type systems -- is really "a best effort at formalizations within which we can do useful work".
AL doesn’t make types obsolete -- it just relocates the same constraints into another formalism. You still have types, you’re just not calling them that.
I think I have a reference to Curry in my summary link. Anyways, Curry-Howard is a nice correspondence, about as important to AL as the correspondence between the groups (ℝ, 0, +) and (ℝ>0, 1, ·); by which I mean, not at all. But type people like bringing it up even when it is not relevant at all.
No, sorry, I really don't have types. Maybe trying to reduce all approaches to logic to Curry-Howard is the very reductive view here.
If your system encodes invariants, constrains terms, and supports abstraction via formal rules, then you’re doing the work of types whether you like the name or not.
Dismissing Curry–Howard without addressing its foundational and inextricable relevance to computation and logic isn’t a rebuttal.
Saying "Curry-Howard, Curry-Howard, Curry-Howard" isn't an argument, either.
I am not saying that types cannot do this work. I am saying that to do this work you don't need types, and AL is the proof for that. Well, first-order logic is already the proof for that, but it doesn't have general binders.
Now, you are saying, whenever this work is done, it is Curry-Howard, but that is just plain wrong. Curry-Howard has a specific meaning; maybe read up on it.
Curry–Howard applies when there’s a computational interpretation of proofs — like in AL, which encodes computation and abstraction in a logic.
You don’t get to do type-like work, then deny the structural analogy just because you renamed the machinery. It’s a type system built while insisting type systems are obsolete.
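For readers who haven't met the correspondence: it is concrete, not hand-waving. A minimal TypeScript illustration (my example, nothing from AL):

    // Under Curry–Howard, a type is a proposition and a program
    // inhabiting that type is a proof of it. This polymorphic identity
    // function proves the tautology A → A; the type checker checks the proof.
    function idProof<A>(x: A): A { return x; }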
Personally I would enjoy it if TLA+ had types, though, and TLA+ belongs to the land of logic, right? I do not know how it differs from the abstraction logic referred to in your writing and your other whitepapers.
What is commonly used is a TypeOK predicate that verifies that your variables have the expected type. This is fine, except your intermediate values can still take on unintended values, so you won't spot the mistake until you evaluate the TypeOK predicate, and not at all if the checker doesn't visit the right corners of the state space. At least TypeOK can be much more expressive than any type system.
There is a new project in the same domain called Quint; it has types.
Practically, in abstraction logic (AL) I would solve that (AL is not a practical thing yet, unlike TLA+; I still need libraries and tools for it) by having an error value ⊥, and making sure that abstractions return ⊥ whenever ⊥ is an argument of the abstraction, or when the return value is otherwise not well-defined [1]. For example,
div 7 0 = ⊥,
and
plus 3 ⊥ = ⊥,
so
plus 3 (div 7 0) = ⊥.
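A minimal executable sketch of that convention in TypeScript (Bot, Val and Defined are my ad-hoc names; div and plus are the abstractions from the example above):

    // ⊥ as a dedicated sentinel value; a Val is a number or ⊥.
    const Bot = Symbol("⊥");
    type Val = number | typeof Bot;

    // Abstractions return ⊥ whenever ⊥ is an argument, or when the
    // result is otherwise not well-defined (here: division by zero).
    const div = (a: Val, b: Val): Val =>
      a === Bot || b === Bot || b === 0 ? Bot : a / b;

    const plus = (a: Val, b: Val): Val =>
      a === Bot || b === Bot ? Bot : a + b;

    // Defined x just means x ≠ ⊥.
    const Defined = (x: Val): boolean => x !== Bot;

    console.log(plus(3, div(7, 0)) === Bot); // true: ⊥ propagates

Note that there is no second, static level here: ⊥ is just another value, and Defined is just another predicate.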
In principle, that could be done in TLA+ as well, I would guess. So you would have to prove a predicate Defined, where Defined x just means x ≠ ⊥.
[1] Actually, it is probably rather the other way around: make sure that if your expression e is well-defined under certain preconditions, you can then prove e ≠ ⊥ under the same preconditions.
This looks like you've defined the abstract lattice extension as a proper superset of the concrete semantics. That sort of analysis is entirely subsumed by Cousot & Cousot's work, right? Abstract Interpretation doesn't require static types; in fact, the imposition of a two-level interpretation is secondary. The constraints on the pre-level are so that the behavior of the program can be checked "about as quickly as parsing", while also giving strong correctness guarantees under the systems being checked.
Moving the whole thing to dynamic behavior doesn't tell us anything new, does it? Lisps have been tagged & checked for decades?
I have not defined any "abstract lattice extension" explicitly, which is nice; why would I need to know about lattices for something as simple as this? It is just a convention I suggest to get a useful Defined predicate, actually. Nobody can stop you from defining mul 0 ⊥ = 0, for example, and that might make sense sometimes.
I would suggest that abstraction logic compares to type theory as Lisp compares to Standard ML; but that is just an analogy and not precise.
My problem with your interaction on this forum, so far, is that it kind of ignores the practical (artisan) craft of programming. The use of static type systems by working engineers isn't math; it's more akin to "fast, formal checking". You can assert that you don't like types, or that they're useless, all day long; but every working engineer knows you're wrong: static type systems have solved a real problem. Your arguments need to be rooted in the basic experience of a working engineer. Single-level programming has its place, but runtime checking is too error-prone in reality to build large software. If you think this isn't true, then you need to show up to the table with large-scale untyped software. For instance: the FORTH community has real advantages they've demonstrated. Where's your OS? Your database? Your browser? If you had real advantages, you'd have these things in hand already.
Otherwise, you're just another academic blowhard using a thin veneer of formalism to justify your own motivated reasoning.
Strong opinion! Also, it seems you didn't read my interaction so far carefully enough. You also didn't read the summary I linked to, otherwise you would know that I fully approve of the current use of types in programming languages, as they provide lightweight formal methods (another name for what you call "fast, formal checking"). But I think we will be able to do much better: we will have fast, formal checking without static types. Not today, not tomorrow, but certainly not more than 10 years from now.
I am not sure if I misunderstand you. Types are for domain semantics, real-world semantics: they help to disambiguate human language, and they make explicit the context which humans just assume when they talk about their domain.
Logic is abstract. If you implied people should be able to express a type system in their host language, that would be interesting. I can see something like Prolog as type annotations, embedded in any programming language; it would give tons of flexibility, but then you shift quite some burden onto the programmer.
Types for real-world semantics are fine; they are pretty much like predicates if you understand them like that.
The idea of using predicates instead of types has been tried many times; the main problem (I think) is that you still need a nice way of binding variables, and types seem the only way to do so, so you will introduce types anyway, and what is the point then? The nice thing about AL is that you can have a general variable binding mechanism without having to introduce types.
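To make the binding issue concrete (this is standard first-order practice, nothing AL-specific): a typed binder like ∀ x : ℕ. P(x) can be simulated with a predicate as ∀ x. ℕ(x) → P(x), and ∃ x : ℕ. P(x) as ∃ x. ℕ(x) ∧ P(x). The trick works because ∀ and ∃ happen to pair up with → and ∧, but a general binder has no such canonical relativisation, which is where a typeless general binding mechanism earns its keep.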
AL as described sounds like it reinvents parts of the meta-theoretic infrastructure of Isabelle/HOL, but repackaged as a clean break from type theory instead of what it seems to be — a notational reshuffling of well-trod ideas in type theory.
Given that I have been an Isabelle user and/or developer since about 1996, similarities with Isabelle are certainly not accidental. I think Isabelle got it basically right: its only problem (in my opinion) is that it is based on intuitionistic type theory as a metalogic, and not on abstraction logic (nevertheless, most type theorists pretty much ignored Isabelle!). Abstraction logic has a simple semantics; ITT does not. My bet is that this conceptual simplicity is relevant in practice. We will see if that is actually the case or not. I've written a few words about that in the abstraction logic book on page 118, also available in the sample.
> — a notational reshuffling of well-trod ideas in type theory
Always fun to encounter disparaging comments (I see that you deleted the other one in the other thread), but I wrote this answer more for the other readers than for you.
I'm not a logician, but do you mean that predicates and their algebra are a more granular and universal way to describe what a type is... basically that names are a problem?
Yes and no. Yes, predicates are more flexible, because they can range over the entire mathematical universe, as they do for example in (one-sorted) first-order logic. No, names are not a problem, predicates can have names, too.
So if names are not an issue, is the problem with the usual static type systems that they lack a way to manipulate/recombine user-defined types, to avoid expressive dead ends?
The hyperreals don't fix the x/0 or 0/0 problems. Infinitesimals have a well-defined division and take the place of a lot of uses of 0 in the reals, but the hyperreals still have a 0, and the same problem posed in their comment exists when you consider division on the hyperreals as well.
I'm also curious what they intended, but there aren't many options:
1. The question is ill-posed. The input types are too broad.
2. You must extend ℝ with at least one additional point representing the result. Every choice is bad for a number of reasons (e.g., you no longer have multiplicative inverses and might not even have well-defined addition or subtraction on the resulting set; see the sketch after this list). Some are easier to work with than others. A dedicated `undefined` value is usually easy enough to work with, though sometimes a single "infinity" isn't terrible (if you consider negative infinities it gets more terrible).
3. You arbitrarily choose some real-valued result. Basically every theorem or application considering division now doesn't have to special-case zero (because x/0 is defined) but still has to special-case zero (because every choice is wrong for most use cases), leading to no real improvements.
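The sketch promised in (2), in TypeScript (Inf, padd and pdiv are made-up names; this models the single-infinity choice, i.e. the projectively extended reals): division becomes almost total, but addition stops being total.

    // One extra point: a single unsigned infinity.
    const Inf = Symbol("∞");
    type PReal = number | typeof Inf;

    // Addition is no longer well-defined everywhere.
    function padd(a: PReal, b: PReal): PReal | undefined {
      if (a === Inf && b === Inf) return undefined; // ∞ + ∞: no value
      if (a === Inf || b === Inf) return Inf;
      return a + b;
    }

    // Division is total except for 0/0 and ∞/∞.
    function pdiv(a: PReal, b: PReal): PReal | undefined {
      if (a === Inf) return b === Inf ? undefined : Inf; // ∞/∞: no value
      if (b === Inf) return 0;
      if (b === 0) return a === 0 ? undefined : Inf;     // 0/0: no value
      return a / b;
    }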
It's undefined because there isn't an obvious choice:
- At x := 0, every real is a reasonable limit for some path through the Euclidean plane to (0,0). Even ignoring the 0/0 problem, x/0 still has both positive and negative infinity as reasonable choices. Suppose you choose a different extension, like only having a single point at infinity; then you give up other properties, like being able to add and subtract all members of the set you're considering.
- However you define x/0, it still doesn't work as a multiplicative inverse for arbitrary x.
A good question to ask is why you want to compute x/0 in the first place. It is, e.g., sometimes true that doing so allows you to compute results arithmetically and ignore intermediate infinities. Doing so is usually dangerous though, since your intuition no longer lines up with the actual properties of the system, and most techniques you might want to apply are no longer valid. Certain definitions are more amenable to being situationally useful (like the one-point compactification of the reals), but without a goal in mind the definition you choose is unlikely to be any better than either treating division as a non-total function (not actually defined on all of ℝ), or, equivalently, considering its output to be ℝ extended with {undefined}.
Not that it directly applies to your question, but ℝ typically does not include infinity. ℝ̄ is one symbol sometimes used for representing the reals extended with two infinities, though notation varies both for that and for the one-point compactification.
There is nothing practically usable right now. I hope there will be before the end of the year. Algebraic effects seem an interesting feature to include from the start; they seem conceptually very close to abstraction algebra.
[1] https://math.andrej.com/2010/09/27/programming-with-effects-...
[2] http://abstractionlogic.com