This character was code 0x01 in the original IBM PC code page (https://en.wikipedia.org/wiki/Code_page_437), and hence in DOS. It was displayed single-width and monochrome just like any other 8-bit character, never causing any rendering issues, unlike emojis today. It was added to Unicode for round-trip compatibility with that code page.
I think type theory is exactly right for this! Being so similar to programming languages, it can piggyback on the huge amount of training the LLMs have on source code.
I am not sure Lean in particular is the right language; there might be challengers rising (or old incumbents like Agda or Rocq could find a boost). But type theory definitely has the most robust formal systems at the moment.
Why? Why would the language used to express proof of correctness have anything to do with English?
English was not developed to facilitate exact and formal reasoning. In natural language ambiguity is a feature; in formal languages it is unwanted. Just look at maths: the reason for all the symbols is not only brevity but also precision. (I don't think the symbolism of mathematics is something to strive for, though; we can use sensible names in our languages, but the structure will need to be formal and specialised to the domain.)
I think there could be meaningful work done to render the statements of the results automatically into (a restricted subset of) English for ease of human verification that the results proven are actually the results one wanted. I know there has been work in this direction. This might be viable. But I think the actual language of expressing results and proofs would have to be specialised for precision. And there I think type theory has the upper hand.
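For a feel of the kind of precision I mean, here is a toy Lean 4 sketch (the theorem is just an arbitrary example, not anything from the work mentioned above):

    -- The statement says exactly one thing: for every natural number n, n + 0 = n.
    -- There is nothing for a reader (or an LLM) to interpret loosely, and the
    -- checker either accepts the proof or rejects it.
    theorem add_zero_toy (n : Nat) : n + 0 = n := rfl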
My answer is already in my previous comment: if you have two formal languages to choose from, you want the one closer to natural language, because it will be easier to see if informal and formal statements match. Once you are in formal land, you can do transformations to other formal systems as you like, as these can be machine-verified. Does that make sense?
Not really. You want the one more aligned to the domain. Think music notation. Programming languages have evolved more to match abstractions that help with software engineering than to help with layman understanding. (Take SQL and the relational model: they have more to do with each other than SQL has with natural language.)
I love writing shaders and manually shovelling arrays into the graphics card as much as anyone, and I know first-hand how this gives a game very much your own style.
But is that the direction LLM coding goes? My experience is that LLMs produce code which is much more generic and boring than what skilled programmers make.
If the end result is, and you’re doing games, they might
Edit to add to my sibling comment:
> But some abstractions that make standard stuff easy make non-standard stuff impossible.
I swear that at some point I could tell a game was made in Unity based on the overall look of the screenshots. I didn't know it was the fault of the default shaders, but they all looked samey.
Since games are all about artistic expression and entertainment, I would say yes, it matters to the end users. The thing is, if you don't have your hands in the code details, you might not know the directions it can be taken in (and which it cannot). Seeing the possibilities in the code is one way to get the creative juices flowing for a game programmer. Just look at old games, which demonstrate extreme creativity even within the limits the old hardware put on the software. But being stuck in the code, seeing the technical possibilities, is what allowed this.
I think this is what the comment above was lamenting about abstractions. I am all for abstraction when it comes to being productive. And I think new abstractions open new possibilities sometimes! But some abstractions that make standard stuff easy make non-standard stuff impossible.
I love the reimagining of make in Plan 9: mk. In my experience mk is easier and slightly saner than make. On Unix-like systems you can get it from plan9port, along with other gems such as acme.
From what I see it is almost* strictly less useful than BSD make, which (IMO) is strictly less useful than Solaris make, all of which are strictly less useful than GNU make. It might only be more useful than SysV make.
* Regex support is unique, but you know what they say about solving problems with regexes, and you can usually do any useful thing with `subst` and `word`. The `Pcmp -s` attribute at first seems unique, but can easily be replaced by some stamp/dep-file logic, with much better performance.
Don't get confused by the fact that `automake` chooses to maintain compatibility with versions of `make` more than 4 decades old.
I haven't tried every version of make there is, but here are a few things about mk off the top of my head (with a tiny example mkfile after the list):
- Readability: $target, $prereq, $stem are much easier to read than make's $@, $<. Any whitespace can be used to indent.
- Flags after the target do useful things such as auto-deleting the target on error (no more confusion about partial files), controlling verbosity, and marking targets as virtual.
- Stricter when dealing with overlapping targets.
- Has a flag for printing why each target is being generated.
- Regex in targets is sometimes really useful! Taken together with the strictness towards overlapping targets this leads to less confusion overall.
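A tiny illustrative mkfile, assuming plan9port's mk (the file names and rules are made up for the example):

    CC=cc

    # $target and $prereq are ordinary shell variables set by mk for the recipe
    prog: main.o util.o
        $CC -o $target $prereq

    # $stem is the part matched by % in a metarule
    %.o: %.c
        $CC -c -o $target $stem.c

    # V marks the target as virtual (not a file)
    clean:V:
        rm -f prog *.o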
This was going through my head, too. Though I am somewhat doubtful whether it would be a win on Linux. Mk's ergonomics come from leveraging the cohesiveness of Plan 9 as a system, IMHO.
That said, the automatic variables ($target, $stem, etc.) are more descriptive than those of make ($@, $*, etc.), since the former are just normal shell variables passed in at rule runtime, I think.
I agree, but I also want to clarify that Cantor's argument was about subsets of the naturals (N), or more precisely functions from N to Bool (the decidable subsets). This is where the diagonal argument makes sense.
So to conclude that there are more reals than naturals, the classical mathematical argument is:
a) There are more functions N to Bool than naturals.
b) There are as many reals as functions from N to Bool.
Now, we of course agree the mistake is in b) not in a).
> So to conclude that there are more reals than naturals, the classical mathematical argument is:
> a) There are more functions N to Bool than naturals.
> b) There are as many reals as functions from N to Bool.
> Now, we of course agree the mistake is in b) not in a).
Given certain foundations, (a) is false. For example, in the Russian constructivist school (as in Andrey Markov Jr), functions only exist if they are computable, and there are only countably many computable functions from N to Bool. More generally, viewing functions as sets, if you sufficiently restrict the axiom schema of separation/specification, then only countably many sets encoding functions N-to-Bool exist, rendering (a) false.
Indeed, what you write is true from an external point of view; just note that within this flavor of constructive mathematics, the set of functions from N to Bool is uncountable again.
There is no paradox: Externally, there is an enumeration of all computable functions N -> Bool, but no such enumeration is computable.
Is it internally uncountable in the strong sense that the system can actually prove the theorem “this set is uncountable”, or only in the weaker sense that it can’t prove the theorem “this set is countable”, but can’t prove its negation either?
If the latter, what happens if you add to it the (admittedly non-constructive) axiom that the set in question is countable?
Suppose that you've written down a function enumerate that maps all natural numbers to functions that themselves map all natural numbers to booleans. We then can write down the following program.
(defn unenumerated [n] (not ((enumerate n) n)))
This function can be recognized as Cantor diagonalization, written out as a program.
If enumerate actually works as advertised, then it can't ever find unenumerated. Because if (enumerate N) is unenumerated, then ((enumerate N) N) will hit infinite recursion and therefore doesn't return a boolean.
This argument is, of course, just Cantor's diagonalization argument. From an enumeration, it produces something that can't be in that enumeration.
> If enumerate actually works as advertised, then it can't ever find unenumerated. Because if (enumerate N) is unenumerated, then ((enumerate N) N) will hit infinite recursion and therefore doesn't return a boolean.
Okay, but if we take the position that only programs which halt for all inputs represent functions over N, then if your program “unenumerated” fails to halt for some N, it doesn't represent a function, so your argument isn't a case of “using the enumeration of all functions to produce something which doesn't belong to the enumeration”.
Obviously an enumeration of all computable functions isn’t itself computable. But if we consider Axiom CompFunc “if a function over N is computable then it exists” (so this axiom guarantees the existence of all computable functions, but is silent on whether any non-computable functions exist) and then we consider the additional Axiom CountReals “there exists a function from N to all functions over N”, then are those two axioms mutually inconsistent? I don’t think your argument, as given, directly addresses this question
It's just a straight up liar's paradox. If enumerate is a function that works as advertised, then unenumerated is as well. If enumerate tries to list unenumerated, then enumerate can't work as advertised.
For the argument that I gave to work, you need what you might call Axiom ComposeFunc: you may always compose a new function by taking an existing function and doing something that is known to be valid with it. Obviously this axiom is true of the computable universe. But, more than that, it is also true of any generalization of "function" that behaves the way most would expect a function to behave.
Now it is true that your Axiom CompFunc and Axiom CountReals do not necessarily contradict each other. But CompFunc, ComposeFunc and CountReals do contradict each other, and the contradiction can be built by following Cantor's diagonalization argument.
If I understand what I just Googled correctly (definitely not guaranteed), the reason why Cantor's argument can fail in NFU is that NFU does not necessarily allow you to build a function that returns things in X, out of a function that returns functions returning things in X.
So it has non-computable functions, but also has a type system that tries to avoid allowing self-reference. And that type system gets in the way of Cantor's argument.
I clearly hadn't thought of this possibility. It is a constraint of a kind that doesn't show up in the computable universe.
And so, going back, if the Russian constructivist school does indeed center on computability, then my first answer to you was correct.
Right, I think we are in agreement: a pure Russian constructivist approach which only permits computable functions cannot prove the reals are countable. However, I am still sceptical that it can prove they are uncountable: if you limit yourself to computable constructions, you can't actually computably construct a Cantor diagonal, so his argument fails just like it does in NFU.
The (un)countability of the reals is known to be independent of NFU: it is consistent both with the reals being countable and with them being uncountable. There are two different axioms which it is standard to add to NFU to decide this: AxCount≤, which implies the reals are countable, and AxCount≥, which implies the reals are uncountable.
I guess I was suggesting that, in the same way, an additional axiom could be added to computable set theory which renders the reals countable or uncountable. If an additional axiom asserting the countability of the reals involves the existence of a function from the naturals to functions over the naturals, that would obviously be introducing an uncomputable function. But for that to produce an inconsistency, it would need to enable Cantor's argument, and, given that your "ComposeFunc" in the computable universe is already restricted to operating only over computable functions, it is reasonable to limit its application to computable functions in an extension, which would mean the addition of this uncomputable function would still not permit Cantor's argument.
There is nothing uncomputable about the Cantor diagonalisation. The Russians gladly accept it, I assure you. Here is a Haskell implementation:
diag :: (nat -> (nat -> Bool)) -> (nat -> Bool)
diag f n = not (f n n)
The following argument is also constructive. Just like classical mathematics, constructive mathematics proves the negation of A by assuming A and deriving a contradiction. (But you cannot prove A by assuming its negation and deriving a contradiction.)
Assume a surjection f :: nat -> (nat -> Bool). Then there is x such that f x = diag f. Since these two functions are equal, they take equal values when we evaluate them at any point. In particular f x x = diag f x, but since diag f x = not (f x x) by definition, we have that f x x = not (f x x). This is a contradiction, since not does not have fixed points.
(I made nat a type variable here since this works for any type/set.)
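To make the disagreement concrete, here is a small runnable check of diag against an arbitrary made-up enumeration (exampleEnum is just an illustration, not a claimed surjection):

    -- diag specialised to Integer; exampleEnum is an arbitrary computable
    -- family of functions, invented only for this demonstration.
    diag :: (Integer -> (Integer -> Bool)) -> (Integer -> Bool)
    diag f n = not (f n n)

    exampleEnum :: Integer -> (Integer -> Bool)
    exampleEnum i n = n `mod` (i + 1) == 0

    -- At every diagonal position n, diag exampleEnum differs from row n.
    main :: IO ()
    main = print [ diag exampleEnum n == exampleEnum n n | n <- [0..9] ]
    -- prints a list of ten Falses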
Category theory gives a structural framework for discussing these things. The various categories live side by side and can be related with functors. This allows a broader view and perhaps makes it easier to understand that there isn't a right answer to "what is true" about sets in the absolute.
For sure there would have to be a meta theory of some sort. But I think that many classical mathematicians would be happy for that meta theory to be weaker rather than stronger. I don't think there is a need for that theory to have AC. After all, thanks to its independence from the other axioms, AC does not add logical strength to a classical theory.
(For die-hard constructivists, such as myself, the story is of course different. But that story is for another time. I am presenting the classical view here.)
In Haskell, I see mtl and algebraic effects (say, freer-simple) as giving you the same kind of expressiveness. The difference to me is that for mtl you need to figure out and abstract a new type class for every kind of effect and then write n^2 instances, while the freer monad construction needs only a single data type (often a GADT) and some glue functions (calling send on the constructors of said data type), and you are off to the races.
The algebraic reason for this is that effects are combined with a sum, which is commutative up to isomorphism, while transformers do not commute naturally, so mtl must spell out all the commuting as instances.
This, along with the reinterpret functions, means that you can quickly spin up custom effects for your program which do exactly what you need to express your program logic. Then all the glue code to make your program interact with the real world becomes a series of handlers, usually refining in several steps until you reach IO.
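A minimal sketch of that workflow, assuming the freer-simple package (the Log effect, logMsg, and runLogIO are made-up names for the example):

    {-# LANGUAGE DataKinds, FlexibleContexts, GADTs, TypeOperators #-}
    import Control.Monad.Freer (Eff, Member, LastMember, send, sendM, interpret, runM)

    -- One GADT describes the effect's operations.
    data Log a where
      LogMsg :: String -> Log ()

    -- The only glue needed: send on the constructor.
    logMsg :: Member Log effs => String -> Eff effs ()
    logMsg = send . LogMsg

    -- Program logic speaks only in terms of the custom effect.
    program :: Member Log effs => Eff effs ()
    program = logMsg "hello from a custom effect"

    -- A handler refines the effect away, here straight into IO.
    runLogIO :: LastMember IO effs => Eff (Log ': effs) a -> Eff effs a
    runLogIO = interpret (\l -> case l of
      LogMsg s -> sendM (putStrLn s))

    main :: IO ()
    main = runM (runLogIO program)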
When I have used mtl, I have ended up only using the standard monad classes, and then I have to remember the semantics of each one in my domain.