Hacker News | rck's comments

Not a lawyer, but the NSF clause covering clawbacks is pretty specific:

> NSF reserves the right to terminate financial assistance awards and recover all funds if recipients, during the term of this award, operate any program in violation of Federal antidiscriminatory laws or engage in a prohibited boycott.

A "prohibited boycott" is apparently a legal term aimed specifically at boycotting Israel/Israeli companies, so unless PSF intended to violate federal law or do an Israel boycott, they probably weren't at risk. They mention they talked to other nonprofits, but don't mention talking to their lawyers. I would hope they did consult counsel, because it would be a shame to turn down that much money solely on the basis of word of mouth from non-attorneys.


I don't think you are misunderstanding the surface requirements, but I think you are confusing "would eventually, with unlimited resources for litigation, prevail in litigation over NSF cancelling funds, assuming that the US justice system always eventually produces a correct result" with "not at risk".


I can imagine that a very risk-averse lawyer would have pointed out the costs and uncertainties of litigation in cases like this. But if I were in their shoes and I really cared about the money, I would have pressed that lawyer to show examples where the clawback clause had been invoked since Jan 20. I'm not sure it's happened, which seems relevant to estimating the actual risk.

Interestingly, they may get more in donations than they would have from this grant, so maybe that needs to be included in the risk estimate as well...


> But if I were in their shoes and I really cared about the money, I would have pressed that lawyer to show examples where the clawback clause had been invoked since Jan 20.

And the lawyer would be able to present hundreds of cases, covering billions of dollars of federal grants cancelled since Trump issued EO 14151, which set out in black and white the Administration's broad crusade against funding anything touching DEI and declared the DEI prohibition a policy for all federal grants and contracts. The cancellations span different grant programs, many of which were originally awarded before Trump came back to office and would not have had DEI terms in the original grant language. They'd also be able to point out that some of the cancellations had been litigated to the Supreme Court and allowed, while other clawbacks had been struck down by lower courts and were still in appeals.


Yeah it looks like about 1500 grants:

https://www.urban.org/urban-wire/nsf-has-canceled-more-1500-...

But if the concern is about the provision allowing NSF to claw back funds that have already been spent by the organization, then the question remains: has that happened? Right now, if you search for terms related to NSF clawbacks, most of the top results refer to the PSF's statement or forum discussions about it (like this one). I can't find any instances of a federal clawback related to DEI. If that had happened, I would expect the response from the awardee to have been noisy.


This is fun. But the bit at the beginning about philosophy is not correct. Parmenides did not believe in what we would call essences, but really did believe that nothing ever changes (along with his fellow Eleatic philosopher Zeno, of paradox fame). The idea that change is an illusion is pretty silly, and so Plato and especially Aristotle worked out what's wrong with it and proposed the idea of _forms_ in part to account for the nature of change. Aristotle extended Plato's idea and grounded it in the material reality we observe via the senses, and that's where the concept of essence really comes from - "essence" comes from the Latin "essentia", which was coined to deal with the tricky Greek οὐσία (ousia - "being") that Aristotle uses in his discussions of change.


One way I've seen it presented is that the early Greek philosophers were grappling with how to reconcile two basic facts: some things stay the same (constancy or regularity), and some things change.

Heraclitus came before Parmenides and said that everything changes. Parmenides said that nothing changes, and then the atomists, most prominently Democritus, synthesised these two points of view by saying that there are atoms which don't change, but all apparent change is explained by the relative motions of the different basic atoms. Plato was influenced by all of these. But I would say the theory of forms accounts more for constancy or regularity than for change, no?

Btw, the central concept of Parmenides' philosophy is always translated as "Being", but I couldn't find the original Greek word. Isn't it "ousia"?


I'm not sure what motivated Parmenides because he was more of a poet than anything - it just happened that his poetry was what we would now recognize as incredibly philosophical. He didn't really argue; he just wrote down what the "goddess" told him. But I think the basic problem is that everyone back then agreed that you can't get "something from nothing," and it sure seems like change requires being to come from non-being. The statue is there now, but before it was cast there wasn't a statue, just a chunk of bronze. If being can't come from non-being, how do you account for the "coming-to-be" of the statue? The Eleatic position as I understand it is that the change is just an illusion. Plato and Aristotle both react against this position and argue that it's silly (I'm very inclined to agree). They then give alternative accounts of what change really is.

I'm not sure about Plato, but the Aristotelian analysis is something like this: every thing that exists has the potential to exist in certain ways and not others, and it's said that the thing is "in potency" to exist in those potential ways. When something could exist in a certain way but right now doesn't, that's called a "privation." And the ways that the thing currently does exist are the "form" of the thing. So a substance changes when it goes from being in potency to being actual, and it does that by losing a privation. Aquinas follows Aristotle in giving the example: "For example, when a statue is made from bronze, the bronze which is in potency to the form of the statue is the matter; the shapeless or undisposed something is the privation; and the shape because of which it is called a statue is the form." Incidentally, Aquinas's short On the Principles of Nature (https://aquinas.cc/la/en/~DePrinNat) is a good overview of this theory, which is spread all over Aristotle (in the Categories, the Physics, and the Metaphysics).

As far as οὐσία is concerned, I think this is the complete Greek for Parmenides's poem: http://philoctetes.free.fr/parmenidesunicode.htm. In the places where that translation uses "being" you get slightly different words like γενέσθαι (to come into a new state of being) or εἶναι (just the infinitive "to be"). And looking at the definition of οὐσία (https://lsj.gr/wiki/%CE%BF%E1%BD%90%CF%83%CE%AF%CE%B1) it looks like most of the uses of that term specifically come well after Parmenides.


Ah, I was only thinking of Plato's theory of forms, not Aristotle's, but it makes sense it would be more about something taking on a form in Aristotle, since he was much more concerned with biology than Plato, whose forms are more timeless.

Thanks for the Parmenides poem. It seems much more straightforward than the various commentaries and analyses I've seen written about it.

VIII.16: ἔστιν ἢ οὐκ ἔστιν· :: It is or it is not

Very nearly "to be or not to be"...


The Nevada Museum of Art had an exhibit last year about Picasso's ceramics, and I was amazed at how ... meh it all was. Apparently the market agrees with me, because you (yes you!) can buy a Picasso plate for just a few thousand dollars.


Picasso prints are relatively common, and not terribly expensive.


I especially took time out to see his work up close when it was exhibited in Delhi, thinking it would make more sense in real life than from books and photographs.

One canvas was literally a stick figure with very thin paint allowed to run down the “work.”

“Meh” would have been high praise.


Pope Pius XI wrote about _subsidiarity_ as a guiding social principle:

"Just as it is gravely wrong to take from individuals what they can accomplish by their own initiative and industry and give it to the community, so also it is an injustice and at the same time a grave evil and disturbance of right order to assign to a greater and higher association what lesser and subordinate organizations can do. For every social activity ought of its very nature to furnish help to the members of the body social, and never destroy and absorb them."

Tao is observing the consequences of a society that increasingly has abandoned subsidiarity as an operating principle. (I had hoped that crypto might be able to bring subsidiarity back, but so far the opposite has happened in practice.)


This feels like the kind of popsci that's written for people who already agree with the author - there's nothing resembling an argument, or even a definition of "computation." There are nods to Church-Turing, but the leap from "every effectively calculable function is computable" to "life is a computation" is larger than anything you could fit in a book.


Reminds me of Wolfram's "Principle of Computational Equivalence"[1].

1. Things in nature have a maximum complexity which is like computation

2. Most things get this complicated

3. Therefore most things are "computationally equivalent"

4. "For example, the workings of the human brain or the evolution of weather systems can, in principle, compute the same things as a computer."

The leap between things being in an equivalence class according to some relation and being "in principle, the same" might present difficulty if you've done any basic set theory, but that's just because you lack vision.

[1] https://mathworld.wolfram.com/PrincipleofComputationalEquiva...


This principle is just applying Turing equivalence to the hypothesis that there is nothing in nature that is effectively computable but exceeds the Turing computable (which would be the "maximal level of computational power").

Given we have no evidence of the existence of anything effectively computable that is not Turing computable, it's a reasonable hypothesis, with no evidence pointing towards falsifying it, nor any viable theories for what a "level of computational power" that exceeds this hypothetical maximum would look like.

And, yes, if that hypothesis holds, then life is equivalent to computation, at least to the point of being indistinguishable from it when observed from the outside.

A lot of people get upset at this, because they want life to be special, and especially human thought. If they want to disprove this, a single example of humans computing a function that is outside the Turing computable would be a very significant blow to this hypothesis and to the notion of life as a computation. (It wouldn't conclusively falsify it - to do that you'd also need to disprove that there might be ways to extend computers to compute the set of newly discovered functions that can't be computed by a Turing machine - but it would be a very significant blow.)
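
For concreteness, the standard example of a function beyond the Turing computable is the halting function, and the textbook diagonalization can be sketched in a few lines of Python (halts() here is a hypothetical oracle, not any real API):

    # Sketch of the classic diagonalization: suppose a hypothetical total
    # predicate halts(prog, arg) always answers correctly.
    def paradox(prog):
        if halts(prog, prog):   # hypothetical oracle, deliberately undefined
            while True:         # oracle says "halts" -> loop forever
                pass
        else:
            return              # oracle says "loops" -> halt immediately

    # Running paradox on its own source contradicts whatever halts() answers,
    # so no program computes the halting function.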


This is a poor argument, because the universe is uncomputable. We have models that apply on short time scales, but it's fundamentally not computable either in practice or in principle.

On long enough scales - and they're not that long when you're talking about billions of years - we don't even know if the solar system is stable.

Bio-computability has the same issue at smaller scales. There are islands of conceptual stability in a sea of noise, but good luck to you if you think you can compute this sequence of comments on Hacker News given the position of every atom in the original primordial soup.

The universe is not clockwork. The concept of computability is essentially mechanical, and it's essentially limited - not just by conceptual incompleteness theorems, but by the fact that any physical system of computation has physical limits which place hard bounds on precision and persistence.


> because the universe is uncomputable

We have no evidence to suggest that is true. If no individual process in the universe exceeds the Turing computable - and we have no evidence it does, or that anything exceeding the Turing computable can even exist - then the universe itself would be an existence proof that it is computable. Now, we can't be 100% sure, because we'd have to demonstrate that every physical interaction everywhere is individually Turing computable. But we also have nothing that even hints at evidence to the contrary.

Note that it is possible the universe is not computable from within with full precision due to e.g. lack of compressibility.

> On long enough scales - and they're not that long when you're talking about billions of years - we don't even know if the solar system is stable.

That has zero relevance to whether or not it is computable. If it is computable, then any such instability is simply an effect of a computation.

In other words you're committing the logical fallacy of begging the question - your conclusion rests on your premise, as you're trying to argue that the universe is uncomputable by using processes as evidence that can only be uncomputable if the universe as a whole is uncomputable.

> The universe is not clockwork.

That is irrelevant to whether or not it is computable.

> but by the fact that any physical system of computation has physical limits which place hard bounds on precision and persistence.

This is also in general irrelevant to whether or not a system is computable. We can operate symbolically on entities that require arbitrary (including infinite) precision and persistence within various constraints. E.g. we can do math with 1/3 to infinite precision for a whole lot of calculations.
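
To make the 1/3 example concrete, exact rational arithmetic is a few lines in ordinary software; a minimal Python sketch:

    # Exact symbolic arithmetic: Fraction keeps 1/3 at full precision no
    # matter how many operations are applied.
    from fractions import Fraction

    x = Fraction(1, 3)
    print(x + x + x)   # prints 1, exactly - not 0.9999...
    print(x ** 100)    # still an exact rational, just with bigger terms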

Unless you can show specific processes that demonstrably happen with a precision that is impossible to simulate without the computation becoming infinite, this argument doesn't get you anywhere. Note that it would be insufficient to show a process that appears to have infinite precision and would take infinite time to calculate: unless there is demonstrably no way to lazily calculate it, in finite time, to whatever precision you actually try to observe, such a system can still be simulated.

Length of time would also not be a problem unless you can show why such a simulation needs to run at full speed to work, rather than impose a subjective time on the inside of the simulation that can vary with computational complexity.

Space complexity is also irrelevant unless you can show limits on the theoretical maximum capacity of an outside simulator.

As for the question of whether life is computable: if the universe is computable, then life is too, but even if the universe is not, life might still be, so this is largely a digression from the original point I made.


Firstly, the crux of that hypothesis seems completely undecidable. Secondly, it seems to me that applying something like Turing equivalence to things which are not computer programs is a category error which leads to him talking very obvious total nonsense.

1. Complexity != computation. How does a weather system compute anything at all, for example? By any standard definition of these words it doesn't. Since Wolfram never defines his terms rigorously, this statement is prima facie meaningless.

2. Computational complexity != equivalence. He's talked about implementing the universe in 4 lines of Mathematica code when clearly Mathematica itself is in the universe and takes more than 4 lines of code to implement. What he (actually his staff) has implemented in 4 lines is a cellular automaton that is Turing equivalent (see the sketch after this list). That's cool but it's not the universe. If you're not drinking the Kool-Aid it's just nonsense.

3. How does any of that make life indistinguishable from computation? All life that I’ve observed seems to be very easily distinguishable from computation, and I would suggest that anyone who finds this confusing should probably get out more.
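
Regarding the automaton in point 2: here is a minimal sketch of such a Turing-equivalent cellular automaton, written in plain Python with Rule 110 (whose universality Cook proved) as a stand-in, not Wolfram's actual Mathematica code:

    # One step of an elementary cellular automaton on a circular tape.
    # Rule 110 is known to be Turing complete (Cook, 2004), which is all
    # that "Turing equivalent" buys you here.
    def step(cells, rule=110):
        n = len(cells)
        return [
            (rule >> (cells[(i - 1) % n] * 4
                      + cells[i] * 2
                      + cells[(i + 1) % n])) & 1
            for i in range(n)
        ]

    # A few generations from a single live cell.
    row = [0] * 31 + [1] + [0] * 31
    for _ in range(10):
        print("".join(".#"[c] for c in row))
        row = step(row)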


> Secondly it seems to me that applying something like Turing equivalence to things which are not computer programs is a category error which leads to him talking very obvious total nonsense.

Turing equivalence applies to all computation. "Computer programs" has nothing to do with it.

> How does a weather system compute anything at all for example? By any standard definition of these words it doesn’t

By every normal definition of these words it does. Any computation with a digital computer is us applying an interpretation to a physical process: basic physical interactions carry out operations that we read in terms of logic.

And we have computing devices that make this link more explicit, such as e.g. the Soviet "water integrator". Using physical interactions to compute is easy, ranging from the trivial (two pools of water merging is the computational equivalent of addition) to the slightly less trivial (the classic demonstration of Pythagoras' theorem with three interconnected triangles filled with fluid).

Every physical system carries out computations with every interaction, but most of them are useless to us. But every digital computer can carry out computations that are useless to us too, if we let them run chaotic programs on chaotic data.

> That’s cool but it’s not the universe.

It's not the universe, but that is irrelevant unless you can either disprove Turing equivalence or prove that the universe contains computation that exceeds the Turing computable. If you could, there'd likely be a Nobel prize with your name on it.

> 3. How does any of that make life indistinguishable from computation? All life that I’ve observed seems to be very easily distinguishable from computation, and I would suggest that anyone who finds this confusing should probably get out more.

If life does not exceed the Turing computable, then it can be fully simulated, to the point of giving identical responses to identical stimuli when starting from the same state. At that point, if there is any distinction at all, drawing it would require observing the internal processes of the entities involved.

Put another way: If life does not exceed the Turing computable, then you don't know whether or not you are simply a simulation, nor do you know whether or not the universe itself is.


Yes, the article appears to be a short excerpt from a book and probably loses a lot of context because of that. I am interested in the questions raised by the author but will wait for the book to come out. The good news is that it appears the book will be open access - MIT Press seems to be encouraging this lately (at least by allowing this as an option for authors).


You can find the full book here: https://whatisintelligence.antikythera.org/


Oh great flag that it’s open access. Will give this a read.


> there's nothing resembling an argument, or even a definition of "computation."

"It's not even wrong" - Pauli


Is the author advancing a new argument? Has anyone read the book? A quick review suggests that the author posits that symbiogenesis is central to evolution and to artificial intelligence. This is interesting because I recall no mention of this mechanism in the current AI literature. The promise of a symbiotic relationship with artificial life sounds like a balm to people anxious about the future. It is a possibility, not a certainty. https://en.wikipedia.org/wiki/Symbiogenesis

https://publicservicesalliance.org/2025/05/24/what-is-intell...


I felt reminded of Hofstadter's Goedel/Escher/Bach mysticism that somehow everything is recursion.


I might not be a strange loop but I am indeed strange.


or the idea that the universe is a computer

https://en.wikipedia.org/wiki/Edward_Fredkin


In any case, he did fit that into a book! If only barely.

Edit: On further reflection, I suppose he didn't, if we consider the effort to span Gödel Escher Bach and I Am a Strange Loop.


Including free will


Self-simulation.


In 2023, shrinkage at Costco was less than 0.2%, vs a US national average of 1.44%.

https://finance.yahoo.com/news/costco-winning-war-against-re...


This is the Lean blueprint for the project, which is a human-readable "plan" more or less. The actual Lean proof is ongoing, and will probably take a few more years. Still cool though.


For the sake of comparison, you can train a 124M model on a 3090 (see nanoGPT). In that case, each batch ends up having about 500,000 tokens and takes maybe around 10ish seconds to run forward and backward. Then the 6 trillion tokens that this model was trained on would take roughly 4 years. Or just "too long", for a shorter answer.
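
The back-of-the-envelope arithmetic, for anyone who wants to swap in their own numbers (these are the rough assumptions above, not measurements):

    # Rough training-time estimate, using the assumed numbers above:
    # ~500k tokens per batch, ~10 s per forward+backward step on one 3090.
    tokens_total = 6e12        # 6T training tokens
    tokens_per_batch = 5e5
    seconds_per_batch = 10

    seconds = tokens_total / tokens_per_batch * seconds_per_batch
    print(seconds / (3600 * 24 * 365))   # ~3.8 years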


You are underestimating the hype around self-driving. A quick search gives this from 2018:

https://stanfordmag.org/contents/in-two-years-there-could-be...

The opening (about the bet) is actually pretty reasonable, but the predictions listed include things like: passenger vehicles on American roads will drop from 247 million in 2020 to 44 million in 2030. People really did believe that self-driving was "basically solved" and "about to be ubiquitous." The predictions were specific, falsifiable, and in retrospect absurd.


I meant serious predictions. A surprisingly large percentage of people claim the Earth is flat; of course there are going to be baseless claims that the very nature of transportation is about to completely change overnight. But the people actually familiar with the subject were making dramatically more conservative, and I would say reasonable, predictions.


Do you know of any short examples of this? Yesterday I was trying to prove some "easy" theorems that involved machine number representations, and I couldn't find anything in Lean.
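
To give a flavor of what I was looking for, here is the sort of "easy" fact I mean, written as a guess at how it might look in Lean 4 (an illustrative example, not something from a library):

    -- Fixed-width machine arithmetic wraps around; a closed statement
    -- like this should be dischargeable by computation:
    example : (255 : UInt8) + 1 = 0 := by decide

    -- What I couldn't find were library lemmas relating machine-number
    -- operations back to their mathematical counterparts.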

