
See, I don't get why people say that the world is somehow more complex than the world of mathematics. I think that is because people don't really understand what mathematics is. A computer game, for example, is pure mathematics, minus the players, but the players can also be modelled just by their observed digital inputs / outputs.

So the world of mathematics is really the only world model we need. If we can build a self-supervised entity for that world, we can also deal with the real world.

Now, you may have an argument by saying that the "real" world is simpler and more constrained than the mathematical world, and therefore if we focus on what we can do in the real world, we might make progress quicker. That argument I might buy.


> So the world of mathematics is really the only world model we need. If we can build a self-supervised entity for that world, we can also deal with the real world.

In theory I think you are kind of right, in that you can model a lot of real-world behaviour using maths, but it's an extremely inefficient lens to view much of the world through.

Consider something like playing catch on a windy day. If you wanted to model that mathematically, there is a lot going on: you've got the ball interacting with gravity, the fluid dynamics of the ball moving through the air, the changing wind conditions, etc. Yet this is a very basic task that many humans can do without really thinking about it.

Put more succinctly, there are many things we'd think of as very basic which need very complex maths to approach.


This view of simulation is just wrong and does not correspond at all to human perception.

Firstly, games aren't mathematics. They are low-quality models of physics. Mathematics cannot say what will happen in reality; it can only describe a model and say what happens in the model. Mathematics alone cannot say anything about the real world, so a world model just doing mathematics cannot say anything about the world either.

Secondly, and far worse for your premise, humans do not need these mathematical models. I do not need to understand the extremely complex mechanical problem of opening a door in order to open a door. A world model which tries to understand the world based on mathematics has to. This makes any world model based on mathematics strictly inferior and totally unsuited to the goals.


A computer game is also textures, audio, maybe 3D models and landscapes, music composition, and data manipulation functions (see Minecraft).

Sure, mathematics can be said to be at the core of most of that, but you're grossly oversimplifying.


John von Neumann would disagree with you:

"If people do not believe that mathematics is simple, it is only because they do not realize how complicated life is."


The world of mathematics is only a language. The (Platonic) concepts go from simple to very complex, but at the base stands a (dynamic and evolving) language.

The real world, however, is far more complex and perhaps rooted in a universal language, but one we don't know (yet) and ultimately try to describe and order through all scientific endeavors combined.

This philosophy is an attempt to point out that you can create worlds from mathematics, but we are far from describing or simulating ‘Our World’ (Platonic concept) in mathematics.


Representing concepts from the real world in terms of mathematics is exactly what an AI model is internally.


The basic idea is: you run a program F on some input x, obtain r, and then you have some sort of justification why F x = r is a theorem. There are various levels at which you can do this; one, for example, is that "running the program" consists of applying a curated set of (proven) rewriting rules that take you from the theorem F x = F x to the theorem F x = r by applying them only to the right-hand side. That is basically the same as "Proof by reflection" as used by Gonthier, where the Coq kernel acts as the (unverified) rewriting engine.

So this is not a matter of dependent or static typing or not; the idea is simple and the same (e.g., I used it for my PhD thesis in Isabelle, which is from 2008), it is just a matter of how practical it is to use in your theorem prover of choice.
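
To make this concrete, here is a tiny illustration of the "run the program, then certify the result" idea, written in Lean 4 (a hypothetical toy example of mine, not the Isabelle setup from my thesis; in Lean the kernel itself performs the reduction):

    -- Hypothetical toy program F.
    def F (x : Nat) : Nat := x * x + 1
    -- The kernel evaluates F 12 while checking the proof, so reduction alone
    -- certifies the theorem F 12 = 145.
    example : F 12 = 145 := rfl
    -- Proof by reflection for a decidable claim: `decide` runs a decision
    -- procedure and the kernel then checks that computation.
    example : ∀ n < 10, F n % 2 = if n % 2 = 0 then 1 else 0 := by decide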


> That is basically the same as "Proof by reflection" as used by Gonthier, where the Coq kernel acts as the (unverified) rewriting engine.

I don't think it's "basically the same", because this application of the rewrite rules in an LCF-like system is explicit (i.e. the proof checking work grows with the size of the problem), while in proof by reflection in a type theory it happens implicitly, because the "rewriting" happens as part of reduction and makes use of the definitional equality of the system?

For small and medium examples this probably doesn't matter, but I would think that for something like the four colour theorem it would.


As I said, it depends on how you practically implement it.

I've used it for proving linear inequalities as part of the Flyspeck project (the formal proof of the Kepler conjecture), and there I implemented my own rewrite engine that takes a set of rewrite rules and does the computation outside of the LCF kernel, for example by compiling the rules to Standard ML. You can view that engine as an extension of the LCF kernel, just one more rule for how to get theorems. In that instance, it is exactly the same.


How does the proof of correctness work in this case? Is the rewriting ultimately resorting to proof steps within the kernel like an LCF tactic would, or is there simply an informal argument that whatever you're doing in the rewriter is correct, on par with the proof kernel itself?


There is a proof as part of my thesis that the engine is correct, but it is not formal in the sense of machine-checked.

Note that the final result of the Flyspeck project does not depend on that proof, as the linear inequalities part was later redone and extended in HOL-Light by Alexey Solovyev, using just the LCF kernel of HOL-Light. This shows that a simple LCF kernel can definitely be fast enough for such computations, even at that scale!


No, that misunderstands what a proof is. It is very easy to write a SPEC that does not specify anything useful. A proof does exactly what it is supposed to do.


No, a proof proves what it proves. It does not prove what the designer of the proof intended it to prove unless the intention and the proof align. Proving that is outside of the realm of software.


Yes, indeed, a proof proves what it proves.

You confuse spec and proof.


In principle, LLMs can do this already. If you ask Claude to express this in simple words, you will get this translation of the theorem:

    "If applying f to things makes them red whenever they're not already red, then there must exist something that is red AND stays red after applying f twice to it."
Now the proof is easy to see, because it is the first thing you would try, and it works: If you have a red thing x, then either x and f(f(x)) are both red, or f(x) and f(f(f(x))) are both red. If x is not red, then f(x) is red. Qed.

You will be able to interact like this, instead of using tactics.


For anyone else for whom the justification for “either x and f(f(x)) are both red, or f(x) and f(f(f(x))) are both red” was not immediately obvious:

H: ∀x.(r(x)→⊥)→r(f(x))

goal: ∃x.r(x)∧r(f(f(x)))

If f(f(x)) is red:

    x is a solution (QED).
Otherwise:

    f(f(x)) not being red means f(x) must have been (by contraposition of H) and that f(f(f(x))) will be (by H); therefore, f(x) is a solution.
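
For what it's worth, this case analysis goes through almost verbatim in Lean 4. A sketch (assuming Mathlib's tactics, and taking `default` of an inhabited type as the arbitrary starting element; the names α, f, r just mirror the statement above):

    import Mathlib.Tactic
    example {α : Type} [Inhabited α] (f : α → α) (r : α → Prop)
        (H : ∀ x, ¬ r x → r (f x)) : ∃ x, r x ∧ r (f (f x)) := by
      -- First find some red element: `default` if it is red, otherwise `f default` is (by H).
      obtain ⟨x, hx⟩ : ∃ x, r x := by
        by_cases h : r default
        · exact ⟨default, h⟩
        · exact ⟨f default, H default h⟩
      -- Now the case split above: either x works, or f x does.
      by_cases h2 : r (f (f x))
      · exact ⟨x, hx, h2⟩          -- x and f (f x) are both red
      · refine ⟨f x, ?_, H _ h2⟩   -- f x and f (f (f x)) are both red
        by_contra h1               -- if f x were not red, H would make f (f x) red,
        exact h2 (H _ h1)          -- contradicting the case assumption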


Indeed, and e:t in type theory is quite a strong ontological commitment: it implies that the mathematical universe is necessarily subdivided into static types. My abstraction logic [1] has no such commitments; it doesn't even presuppose any abstractions. Pretty much the only requirement is that there are at least two distinct mathematical objects.

[1] http://abstractionlogic.com


Oh, actually that is the syntax I will use for writing abstractions:

    my-abstr x y z foo: "bar" baz: "bak" "quak" quux: [a, b, c, d] lol: 9.7E+42
I don't think

    my-abstr x y z, foo: "bar", baz: "bak" "quak", quux: [a, b, c, d], lol: 9.7E+42
would be better. Indentation and/or coloring my-abstr and the labels (foo:, baz:, quux:, lol:) are the right measures here.


While I have no problems with indentation-based syntax, it's not very conducive to minification, so it's a no-go for JSON's case.

Coloring things is a luxury, and from my understanding not many people appreciate that fact. When you work in the trenches you see a lot of files in a small viewport, and without coloring most of the time. Being able to parse a squiggly file on a monochrome screen just by reading it is a big plus for the file format in question.

As technology progresses we tend to forget where we came from and what the fallbacks are, but we shouldn't. Not everyone is seeing the same file on a 30" 4K/8K screen with a wide color gamut and HDR. Sometimes an 80x24 terminal over some management port is all we have.


Sure, color and indentation are optional, but even without those, I don't see that a comma in the above syntax helps much, even on an 80x24 monochrome display. If you want separators, note that the labels which end with a colon are just that, and just as clear as commas. There is a reason why you tried to obfuscate that in your example by placing the colons not with the labels, but with the arguments.


You can also remove the commas in the arrays.


You could; there is no fixed syntax for arrays in Practal, so it would depend on which custom syntax becomes popular. But for expressions that are bracketed anyway, it makes sense to have commas (or other separators), because otherwise you write something like

    [(a b) (x y)]
instead of

    [a b, x y]
Personally, I like the second option better.


Wondering if someone on HN came across a solution to this before?


Good one! (Q)Basic was my first language. Well, my second really, after .bat files.


In Terence Tao's book "Compactness and Contradiction", at the very start on page 1, he introduces material implication "If A Then B" (or "A implies B") as "B is at least as true as A", and then lists the various easy conclusions one can draw from this.

Very interesting, I think, that "A implies B" being the same as "A ≼ B" is apparently mathematical mainstream, and not just popular in formal logic.

If you continue along these lines, you not only need to ask what implication is, but also what A and B are. Well, they are things you can compare for their truth content, so let's call them truth values. Surely, "≼" should form a partial order, and if you want arbitrary conjunction and disjunction to exist, truth values with "≼" should form a complete lattice T. This means that "∧" and "∨" are now operations T×T → T. If you want implication ALSO to be such an operation "⇒", instead of just the comparison relation "≼", you can use the following condition (somebody already mentioned it in another comment here, via Galois connections), which just means that "A and B imply C" is the same as "A implies that (B implies C)", interpreting implication simultaneously as "⇒" and "≼":

    A ∧ B ≼ C iff A ≼ B ⇒ C   
That allows you to define B ⇒ C as the supremum of all A such that A ∧ B ≼ C, in every complete lattice. If the join-infinite distributive law [1] holds, the above condition will hold with this definition, and you get a complete Heyting algebra this way.
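
If you want to see this spelled out, here is a small sketch in Lean with Mathlib (the name himp' is made up; note that only one direction of the adjunction holds in a bare complete lattice, the converse needs the join-infinite distributive law, i.e. a frame / complete Heyting algebra):

    import Mathlib
    variable {T : Type*} [CompleteLattice T]
    -- B ⇒ C defined as the supremum of all A with A ⊓ B ≤ C.
    def himp' (b c : T) : T := sSup {a | a ⊓ b ≤ c}
    -- This direction of "A ∧ B ≼ C iff A ≼ B ⇒ C" holds in any complete lattice:
    example (a b c : T) (h : a ⊓ b ≤ c) : a ≤ himp' b c := le_sSup h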

This is exactly how I turn abstraction algebra into abstraction logic [2].

[1] https://proofwiki.org/wiki/Axiom:Infinite_Join_Distributive_...

[2] http://abstractionlogic.com


On your website, you seem to claim in bold letters that you've talked to Graham Priest about your work:

> A Conversation with Graham Priest About Abstraction Logic

but admit afterward that you talked to Claude prompted to sound like Graham Priest:

> A conversation about abstraction logic with Claude representing Graham Priest.

You also wrote an update stating:

> Update: The real Graham Priest says that it doesn't really sound like his voice. So enjoy with caution .

Don't you find it unethical to claim that you had "a conversation with Graham Priest about Abstraction Logic"? You didn't have a conversation with Priest. You had an interaction with Claude in Priest clothing. It doesn't even sound like Priest agrees with what you prompted Claude to say. Do you think it's permissible to let LLMs speak on behalf of people without their consent? Do you think that what an LLM says when prompted to speak as though it were some person should be accepted as what the person would actually say and believe?

Why should we find it interesting what any LLM has to say about your work, regardless of whose voice you dress it up as?


All the information you need to know about that article is right at the top of it [1]. I am very clear that this is a conversation with a Claude impersonation of Graham Priest, not Graham Priest himself.

I don't see what is unethical about that.

> Why should we find it interesting what any LLM has to say about your work, regardless of whose voice you dress it up as?

Who is "we"? I don't even know who you are, AbstractPlay. The article exists because I personally find it interesting, and I actually learnt something through it. If somebody else finds it interesting, great. If you don't, too bad. Thanks for letting me know either way.

[1] https://practal.com/press/cwgpaal/1


> I am very clear that this is a conversation with a Claude impersonation of Graham Priest, not Graham Priest himself.

Only after you make the upfront claim, in bold letters, in words you chose: "A Conversation with Graham Priest About Abstraction Logic". You did not choose to title your blog post "A Conversation About Abstraction Logic With Claude Representing Graham Priest" which is the more honest title for your blog post, except that it's clear that Claude was not capable of representing Priest since "the real Graham Priest says that it doesn't really sound like his voice." You chose to title your blog post "A Conversation with Graham Priest About Abstraction Logic". Obviously this line gives the impression that you had an actual conversation with the actual Graham Priest. You must have recognized that the wording you chose is false and deceptive. Are you hoping that attaching Priest's name gives your work more gravitas or encourages more sales of your book?

> I don't see what is unethical about that.

You see nothing unethical about prompting an LLM to take on someone's persona and then presenting the resulting conversation in a blog post with a title which gives the initial impression that you had an actual conversation with the actual person?

Be clear about where you stand on this, at least, so that any university or other institution that might have an interest in offering you a position to continue your work in abstraction logic knows where you stand on misrepresenting professional academics.

> Who is "we"?

The general public to which you are presenting your work and advertising your book.

> I don't even know who you are, AbstractPlay.

I'm a member of the general public to which you are presenting your work and advertising your book.

> The article exists because I personally find it interesting, and I actually learnt something through it. If somebody else finds it interesting, great. If you don't, too bad.

Okay, but you're presenting this pseudo-conversation on the website through which you are presenting your work and advertising your book to the general public. Presenting it there gives the impression that this pseudo-conversation is meant to support your work, not that it's some tangential, self-satisfying curiosity appropriate for a personal blog.

It would be far more interesting if you posted actual conversations you actually have with actual academics actually commenting on your work instead of this fantasy world of LLM regurgitation that you expect us to believe is ultimately intended to only be interesting and enlightening to you.


Wow. I am glad your problems are not my problems.


Does any popular MCP host support sampling by now?


Yes. VSCode 1.101.0 does, as well as fast-agent.

Earlier I posted about mcp-webcam (you can find it) which gives you a no-install way to try out Sampling if you like.

