Hacker News | drostie's comments

I mean the biggest thing was Kuhn coming along and saying "experimentally, it doesn't work that way."

What I mean is, people don't find an experiment about nanostructures that doesn't work and start going "hey I think I have disproved quantum mechanics!". Even when the OPERA faster-than-light neutrino debacle was going on, physicists were largely saying "We are pretty sure that there is some mistake in either the model or the experiment such that these neutrinos are not moving faster than the speed of light." In fact the objection goes a little further than that: according to Newtonian mechanics, geocentrism is perfectly admissible. There is absolutely nothing wrong with constructing a geocentric frame of reference and doing Fourier expansions of the motions of planetary bodies. No experiment has disproven it because it's just a mathematical choice of accelerated reference frame to analyze the motion of the planets. According to unmodified Popper, each of the first two should have led to a rejection of the scientific principles which led to them, while the latter would mean that geocentrism and heliocentrism are pseudoscience and there was never any scientific switch between them -- all of that sounds wrong.

So Kuhn introduced the idea that there is a separation of science into two parts, "theory" and "model". A scientific theory like quantum mechanics or heliocentrism is a platform for building models and deciding what questions are worth asking and how one goes about asking them. They are a "platform for computation" in a sense, and most of them are "Turing complete": there is nothing that they can't model somehow. So classical mechanics turns out to be able to mimic quantum mechanics if we use something like Bohm's pilot wave theory. And Couder and Fort's droplets on a vibrating oil bath show an experimental realization of particles which nevertheless diffract in a classically explicable way, underscoring this point. Kuhn said that theories need to be abandoned during some sort of "scientific revolution" but was very hazy on how exactly that happened. But he was a huge fan of Popper and wanted to say that Popper was fundamentally right about the way that we model systems, discarding models immediately when they do not fit experiment and coming up with better models.

Kuhn picked up a lot of flak because one of the things Popper's works were trying to do was to discredit things like psychoanalysis and astrology as being "pseudoscience" rather than real science, because they could explain everything and thus never stuck their neck out -- thus Kuhn's work seemed to need some extra structure about the manner in which we actually conduct such a revolution; otherwise astrology might not be a pseudoscience but an "eventual science" or so: if theories are just some sort of aesthetic agreement among the existing scientists, then what stops us all from deciding that we rather like reading our weekly horoscopes?

This challenge was to my mind best resolved by the "research programmes" idea of Imre Lakatos. He philosophizes that theory choice -- fundamental progress in science -- is best seen as motivated by lazy grad students. Like, laziness is a virtue on this account: grad students have to make a contribution to the published literature that excites their peers and makes a name for themselves, and they do not have much time to do it.

So, why do people use the Copenhagen interpretation for everything if very few people philosophically accept its ontology? Because it is mathematically equivalent to all of the other interpretations but is astonishingly easy to use, just "yeah the wavefunction collapsed so now this is reality, I don't strictly have to care about that collapse happening across spacetime instantaneously because that's not observable anyways, so here are my experimental results." Lazy grad students will choose that ten times out of ten over coming up with the correct pilot-wave mechanics and simulating it. Why did heliocentrism win if Newtonian mechanics says that geocentrism is 100% experimentally valid? Because the heliocentric models are easier to build and reason about with straight mechanics, and lazy grad students will take Newton's law of gravity any day over those epicycles.

You can in some respects view this as Occam's razor but Occam's razor is painfully ill-specified. A better view of it is that it's a survival-of-the-fittest, a theory of scientific evolution. So, theories are "genes" which make it easier or harder to publish interesting discoveries that are modeled with those theories in scientific journals. Based on others reading those papers and extending those results in various ways, theories "reproduce" and the ones that reproduce most effectively are the ones that best adapt to their (ever-changing) environment.


This is way too good of a comment to be buried this deep... Thank you for the write-up!


Thank you too for the detailed and thoughtful reply!


No, when they talk about watts per kilogram I am pretty sure that they mean these were absorbed. Like they literally built 21 big microwaves, 7 for mice and 14 for rats, and then turned them on: it's no different than your microwave at home, the waves bounce around the walls until they find some water to call home. They're presumably not concerned with any microwaves that escaped the resonator.


Hey Koen, congrats on you and Corno making front-page on HN!

I can definitely confirm for others reading this that it indeed has been something like 10 years in the making; I was working with a prototype of it 8-9 years ago and found those data models so nice that I actually reimplemented the core idea in a repository on GitHub, though it didn't really go anywhere except for my own web site. I have also been able to reimplement it in TypeScript more recently, so that there is a non-Turing-complete subset of algebraic data types (though maybe I'll be able to add a fixpoint operator, who knows) as runtime objects with highly specific TypeScript types that are inferred from the functions you use to construct them. So then a parametric TypeScript construct,

    ValueOfType<typeof mySchema> 
embraces values that match the schema that you just specified. You can use this trick to write functions like

    myHTTPRouter.get('/objects/by-id/:objectguid', {
      params: {
        objectguid: {
          type: 'guid'
        }
      },
      async handler(params) {
        // inside of here, params has type {objectguid: string},
        // and VSCode knows this, because params is ValueOfType<schema> where
        // the schema is specified in the `params` key above.
        return response.json({success: true})
      }
    })
It's a really fun perspective on programming to have these schemas available at both runtime and compile-time, very DRY.
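
The names here are hypothetical rather than the actual library, but the core trick can be sketched in a few lines of TypeScript: a schema descriptor is an ordinary runtime object, and a conditional type maps it back to the value type it describes.

    // Hypothetical sketch of the idea; not the actual implementation.
    // A conditional type recovers the compile-time value type from a descriptor.
    type ValueOfType<S> =
      S extends { type: 'guid' } ? string :
      S extends { type: 'number' } ? number :
      S extends { type: 'object'; fields: infer F }
        ? { [K in keyof F]: ValueOfType<F[K]> }
        : never;

    // `as const` keeps the literal types so inference stays precise.
    const mySchema = {
      type: 'object',
      fields: { objectguid: { type: 'guid' }, count: { type: 'number' } },
    } as const;

    // Roughly { objectguid: string; count: number }.
    type MyValue = ValueOfType<typeof mySchema>;
So the same object that the type checker reads at compile time is still sitting there at runtime, available for validation.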


      // inside of here, params has type {objectguid: string},
Should that be {objectguid: guid}? If not, where did the string come from there?


I don't think TypeScript has a native GUID type, but if I am wrong about that please tell me as it will make my code more type-safe.

The `string` type here comes from a mapping that the router is using. That is, the router ultimately type-evaluates a `ValueOfType<{type: 'guid'}>` to `string`. But because it's a runtime object, the router can also, at runtime, validate that URL param, "did they actually give me a UUID?" -- and sanitize it, e.g. "convert all UUIDs to lowercase."
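
For concreteness, the runtime half of that mapping might look something like this (a rough sketch with made-up names, not the actual router code):

    // The same descriptor object the type system reads also drives a runtime check.
    const GUID_RE =
      /^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i;

    function coerceParam(descriptor: { type: 'guid' | 'string' }, raw: string): string {
      if (descriptor.type === 'guid') {
        if (!GUID_RE.test(raw)) {
          throw new Error(`expected a UUID, got ${JSON.stringify(raw)}`);
        }
        return raw.toLowerCase(); // sanitize: normalize every UUID to lowercase
      }
      return raw;
    }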

(In fact the benefit of having this TypeScript type at runtime is even bigger than that. With Express.js, the router can rewrite the route param so that the route doesn't even match if you don't provide a UUID, which matters because there is often a lot of accidental ambiguity in HTTP APIs -- but here you can embed the UUID regex into Express paths. The router can then also do some other trickery like confirm at initial load time that all params in URLs match params in this `params` dict, and it can convert all of its routes to OpenAPI/Swagger docs so that you can define another route which just gives you your OpenAPI JSON. Literally in what I have written the above would be a type error because the `Router` class would complain that `params` has the wrong type because the `objectguid` descriptor needs a key called `doc` which is a string for parameter documentation for OpenAPI.)
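
To illustrate the Express part (this is plain Express 4 path syntax as a sketch, not the Router class itself): a regex attached to the named param means the route simply does not match unless the param looks like a UUID.

    import express from 'express';

    const app = express();

    // Loose illustrative pattern; the real check can be the full UUID regex.
    app.get('/objects/by-id/:objectguid([0-9a-fA-F-]{36})', (req, res) => {
      res.json({ success: true, id: req.params.objectguid.toLowerCase() });
    });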


Hi Chris, how are you!? It's been a while man. We've definitely dug into it a lot deeper than last time you worked on it. As you say, once you get how the models work it sticks with you :)


I've been well. Landed a software job with a company called IntegriShield doing a sort of internet rent-a-cop work, now am writing apps for a mechanical contractor called US Engineering -- turns out the construction business always runs on razor-thin margins which is kinda nice because, like, reducing cost by 1% when construction margins are only ~5% causes a 20% improvement in net profit.

This is great work, and I think you're burying one lede: it looks like you've embedded a declarative permissions model in this thing. I built one of those once, and the time saved can be huge when authorization is handled at the model level rather than scattered through the business logic.


Sounds like a good business to be in!

The permission model actually only landed a few weeks ago, so we haven't been able to fully appreciate what we did ourselves. You're right though, it's probably a pretty huge deal :)


I am not sure if I am in the 1% or if you are wrong in a foundational sense of “you have the latitude if you'll use it.”

Like, in the past week I had to add a feature to a front end that is heavily based on jQuery and its ecosystem, so lots of variables that are module-local but otherwise global, and every modification to that globalish state needs to update all of it consistently. I introduced maybe 80 lines of code and comments to define a Model as an immutable value with a list of subscribers to notify when that value changes, a Set method to change the value and update the subscribers, methods to subscribe and unsubscribe easily, and another function which multiplexes a bunch of models into one model of the tuples of values.
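
A minimal sketch of that Model, trimmed way down from the real ~80 lines (TypeScript-flavored; the names are mine, not anything standard):

    type Subscriber<T> = (value: T) => void;

    class Model<T> {
      private subscribers = new Set<Subscriber<T>>();
      constructor(private value: T) {}

      get(): T { return this.value; }

      set(next: T): void {
        this.value = next;
        for (const fn of this.subscribers) fn(next);
      }

      subscribe(fn: Subscriber<T>): () => void {
        this.subscribers.add(fn);
        fn(this.value);                                  // push current state immediately
        return () => { this.subscribers.delete(fn); };   // unsubscribe handle
      }
    }

    // Multiplex two models into one model of the tuple of their values.
    function combine<A, B>(a: Model<A>, b: Model<B>): Model<[A, B]> {
      const combined = new Model<[A, B]>([a.get(), b.get()]);
      a.subscribe(v => combined.set([v, b.get()]));
      b.subscribe(v => combined.set([a.get(), v]));
      return combined;
    }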

The result plays nice with that jQuery globalish code but it's terser and more organized, "define the state, define how updates must covary in one place." But I can also see that it is not quite structured enough: it lacks functional dependencies which would structure the state more, "you select a ClientCompany in this drop-down and that wants to update the ProductList because each ClientCompany owns its own ProductList," not because there happens to be a subscriber which has that responsibility. That also means that there is a sort of eventual consistency in the UI which was always there, but now I may have an approach to remove it.

So I think that I have a good deal of latitude to try new high-level structures for my code, but it's possible that I just happen to be in a lucky place where I have that freedom.


I mean, that sounds very much like react + redux, which is where I’d recommend you start if you were building the thing you just described from scratch.


Right, I wouldn't dispute that. If I wanted to rewrite the 15k lines of code in this application (which is what, 500 pages printed? two books?) I would probably use react+redux and could maybe even eliminate half of the code when I was rewriting it.

The problem is that that still comes out to ~250 printed pages, so one book, so that's an investment of 2 months to create no obvious business value, and I think if I could take that time I would actually be part of that 1%. But the point of my post was just to give an example of "we can make smaller architectural decisions all the time to clean out crap and make our lives easier," and nobody is going to look at the ~2 hours you spend cleaning as wasted time, since it causes them to get a more-correct product sooner.

Another example: I remember at IntegriShield we had an API written in PHP, and one of my favorite little things I had written was a data model. ORMs are not hard to find in PHP but because the data model we were using was JSON we could express inside of that data model a declarative security model for the data and it would get written into the SQL queries: you say "Give me all of the groups!" and it rewrites that to, "I will give you all of the groups that you can see." The logic for the group-editor does not need to explicitly handle the checks for "can this person really edit that group?" because the data model will check it for them, "UPDATE groups SET values WHERE id = (the group you are editing) AND (user can edit the group)."
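
The shape of the idea, sketched here in TypeScript rather than the original PHP (all names made up): the model carries a declarative "who can edit this" rule, and the query builder ANDs it into every UPDATE, so an update on a row you may not edit simply matches zero rows.

    interface ModelSecurity {
      table: string;
      // SQL fragment describing edit rights, e.g.
      // "owner_id = :userId OR :userId IN (SELECT user_id FROM group_admins WHERE group_id = id)"
      canEditWhere: string;
    }

    function buildUpdate(model: ModelSecurity, assignments: string): string {
      // assignments is something like "name = :name, description = :description"
      return `UPDATE ${model.table} SET ${assignments} ` +
             `WHERE id = :id AND (${model.canEditWhere})`;
    }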

Adding the first security type was maybe half a day's work threading stuff through the SQL generator? Adding subsequent new checks took more time but was incremental so each of them might have delayed their projects 1-2 hours. But the net result must have saved a tremendous amount of programming. I have always had that latitude to create structure, if I want it.

That said, I have been pretty lucky with the places I've been privileged to work, so maybe I'm already part of the 1% and this is not representative.


Yes, and 'reactive programming' is the exact sort of architectural choice that is good to capture with a name and a clear context and motivation. The system works, folks.


It's only a little bit off. To get the true hydrodynamic analog to a capacitor, connect the bars of two big pistons together so that volume accumulated in one comes at the expense of the other, then load the bar with a spring so that its motion comes with some energy cost that can oppose a constant pressure.
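
To make that quantitative (a back-of-the-envelope sketch): if each piston has area A and the spring has stiffness k, then storing a volume Q of water displaces the bar by Q/A and the spring pushes back with force kQ/A, so the pressure difference across the device is

    ΔP = (k/A²) · Q
which is exactly the capacitor law (voltage) = (charge)/C, with a hydraulic capacitance C = A²/k.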

The point is that this component alone ties flow to accumulation, whereas generally your other components (resistors=thin pipes, wires=thick pipes, batteries=Archimedes screws, inductors=turbines connected to flywheels) do not accumulate volumes of water inside of them. Flow needs to make sense even without accumulation due to flow-balance, just like force needs to make sense even in situations where velocity stays constant due to force-balance.


It's ok; if I didn't have a Master's in the field I would probably have downvoted you similarly.

The problem is mainly that the criticism you are making is not great for pedagogy. What is being called “charge” is probably something like “disposition to accumulate charge” or so, in the same way that force is not actually mass times acceleration, but it's mass times a disposition to accelerate, so that you can do things like measure my weight-force even though I’m not falling through the floor.

The dispositional truth of the matter is fundamentally more cognitively complex to teach than the simple rule that you get when you say that everything does what it's disposed to do, and so everybody has memorized the version of the definitions that has no dispositions, and gets very confused when you point out that aspect of those definitions.


I suppose I also didn't provide an awful lot of explanation to my point, but I figured I could just explain when asked. I didn't expect the difference between current and a change in charge to be this controversial.


Except there has been no controversy at all. Everybody understands that current (i.e. flow of electric charge, water, etc.) may have nothing whatsoever to do with "change in charge" (or in the mass of water) contained in a volume of space through which charge or water flows.


If what's being called "charge" here is charge distribution between one side of the element and the other side within the circuit (basically, electrons to the left minus electrons to the right, divided by two lest we count a single moving electron twice), and what's being called "current" here is simply current across the element, then in this case, i = dq/dt.

That's a good mathematical model of the behavior of the elements from a pedagogical perspective, though confusing for more advanced readers.


They are not wrong. In the Maxwell equations both come in as fundamentally different terms.

The claim is that current density J is different from the time rate of change of charge density ∂ρ/∂t.

That is not to say they are unrelated; they are related by the continuity equation,

    ∂ρ/∂t = -∇·J.
The distinction is real, because what you are calling current in the one case is actually a spatial derivative of current, as indicated by the ∇.

I would actually go a step further than this and say that current is actually properly defined as the source of magnetic field. On the conventional definition of current, it is physically impossible for current to flow through a capacitor, but we speak of that all the time. So the True Current Density is just

    J + ε ∂E/∂t
in SI units. Actually taking that seriously, however, does require committing to language which sometimes seems a little awkward, like saying electromagnetic radiation involves an AC current oscillation that propagates through empty space transverse to its oscillation.


> I would actually go a step further than this and say that current is actually properly defined as the source of magnetic field.

That's actually (still, and somewhat) how the Ampere is defined. There are ongoing efforts to change this though.

I... prefer to avoid discussions like this one, but I thought you might appreciate this part :-).


It's technically not 100% defined. See my comment below for more details about degrees of freedom and energy.

Suppose you have a system with 100 degrees of freedom and 2 units of energy, spread out as (0.01, 0.01, ..., 0.01, 1.01). A bunch of its energy is in one of those hundred degrees of freedom. You can assign it two different temperatures: the temperature 0.01, which describes how energy will flow into the system right now if you connect it to another system with a bunch of degrees of freedom with their own thermal energy (assuming that the 1.01 degree of freedom is "internal" and doesn't interact directly with the outside world), and the temperature 0.02, which describes how energy will eventually be spread out and hence how the system would eventually share energy with the outside world.

Temperature is ultimately defined in terms of how our uncertainty about the microscopic state a system is in changes as we add energy to that system. The higher this rate of change of uncertainty, the lower the temperature is -- this is why when you connect two systems of different temperatures, in the process of us becoming more uncertain about the fundamental state of the world, energy "spontaneously" flows from the higher temperature to the lower temperature: the certainty gained from stealing energy from the higher-T one is more than compensated by uncertainty created from pouring that same energy into the lower-T one. (In fact there is a family of systems of "negative temperature" which become less uncertain as you add more energy to them: they are "hotter than the hottest possible temperature" because they will gladly give their energy to any "normal" system in the process of us becoming more uncertain about the world.)
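
In symbols, with S the entropy measuring that uncertainty and E the energy (everything else held fixed), this is just the standard statistical definition

    1/T = ∂S/∂E
so a large entropy gain per unit of added energy means a low temperature, and a negative slope means a negative temperature.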

The problem is that if we're certain that some degree of freedom has a given amount of energy that's "special", we have a bunch of different definitions of "temperature" depending on how "adding energy to the system" distributes between the "special" degree of freedom and the "thermal" degrees of freedom.

So the usual process is to just totally separate those degrees of freedom as separate systems: the "thermal" ones have a temperature, the "special" ones do not.


> there is a family of systems of "negative temperature" which become less uncertain as you add more energy to them . . .

I'm no physicist, just a chemist. What are they?


I mean it's not just one system, but the idea is what I just said.

The classical example is if you have a bunch of magnetic moments in a magnetic field and they do not interact with each other: then stuffing energy into the system requires aligning them against the magnetic field, and this makes the state more ordered. The problem is that these moments are generally in thermal contact with some apparatus that keeps them in place or vibrational degrees of freedom of their centers of mass or so. But you can get this thing to happen in magnetic resonance setups.

Negative temperature states pop up in a lot of strange places. The two that I know more closely are that lasing has this property of "as I dump more energy into the system I get more bosons in the lasing state," and Onsager in 1949 published a little article called “Statistical Hydrodynamics” which sort of went viral for the time; it points out that there is a way to view the instability of turbulent systems as due to negative temperature regimes of the vortices in those systems.


It's not just an analogy, and it contains the essence of the story, but it's also not the whole story.

In physics we talk about the "degrees of freedom" of a system -- this is just the count of all of the independent ways that it can move. For each degree of freedom of a system you can calculate the average energy in that degree of freedom. By the equipartition theorem, at thermal equilibrium, all the degrees of freedom will end up with the same average energy, on the order of T -- exactly T/2 for each quadratic degree of freedom -- if you measure temperature in units of energy.

So if you think about dropping a bouncy ball in a tube and it bounces until it slowly comes to rest, it has these degrees of freedom -- the internal degrees of freedom of the atoms of the ball, the internal degrees of freedom of the atoms of the floor/tube -- and then two really obvious degrees of freedom, the center-of-mass position of the ball, which gains an energy scale due to the gravitational force, and the center-of-mass momentum of the ball, which trades energy with this position degree-of-freedom.

Statistical mechanics says that as this system progresses, the location of the energy will slowly become more uncertain until it is on-average-evenly distributed across all of the degrees of freedom. That's why it bounces lower and lower: there is so much energy in the two "main" degrees of freedom -- maybe half a joule? -- whereas in the vibrations there is something closer to 10^-21 J of energy at room temperature.

But the flip side of dissipation is always fluctuation -- this is in fact the subject of a major theorem, the fluctuation-dissipation theorem! So the fact that this can randomly lose energy to these other degrees of freedom means that those degrees of freedom are also randomly kicking the ball. As you can imagine with ~20 orders of magnitude difference between the two, they don't kick this ball by all that much. But you have a lot of experience with a lot more tiny balls that are bouncing off the ground all the time. Take a deep breath. There they are.

If everything were to come to its minimum energy configuration, why are these air molecules so stubbornly not falling to the floor? Well, they are trying to! But they are so light that they are being kicked back upwards by these random thermal kicks, so high that they can in principle go the many kilometers to the uppermost atmosphere.

(Of course if they could go all that way in a single kick then air would have to be so non-interactive that we could not use it to talk to each other... the mean free path in air is actually about 68 nm, so in practice every air atom is getting its random thermal kicks from other nearby air atoms. But the ultimate origin of these random thermal kicks is the random kicks of the floor on the few hundred nanometers of air sitting above it, and that energy comes from the Sun and is mostly conserved as these atoms collide with each other -- but a tiny bit is often converted to little photons of infrared light that sometimes escape the atmosphere.)

With that said, as others have noticed, the free-particle energy relation in special relativity is E = γ m c². Famously, at rest, this factor γ = 1/√(1 − (v/c)²) is 1 and the energy of a particle at rest is E = m c². But as v gets closer and closer to c, v → c, this energy grows without bound, E → ∞. So there is no finite temperature at which a kinetic degree of freedom would exceed the speed of light. Indeed you can solve for v, since 1/γ² = 1 − (v/c)². So the velocity corresponding to any given total energy is v = c √(1 − (mc²/E)²). For a particle at rest with E = mc² this is v = 0, as you would expect; or when the kinetic energy first reaches mc² we would have E = 2mc² and thus v = c √(3/4) = 0.866 c.


One faces a similar problem when thinking about how much time is saved in covering a certain distance: velocity is measured in meters per second, not "slowness" in seconds per meter.

Thus speeding by 10 miles per hour makes much more difference in your time if it happens at 20mph (3 minutes per mile -> 2 minutes per mile) than if it happens at 60mph (1 minute per mile -> 0.86 minutes per mile).

This can sometimes be erased because one typically (at least in the US) spends more time at the highway speed. But suppose you need to spend 10 minutes getting on the highway at 20mph city speeds, 30 minutes driving on it at 60mph, and 10 minutes getting off it, and allow for 5 minutes of the city driving to be non-speedable (it's spent stuck at three traffic lights, say). Then you can save 5 minutes of time by speeding 10mph on the only 5 miles of city streets, but only about 4 minutes of time by speeding 10mph on the 30 miles of highway. You're speeding the same amount for 6 times the distance and something over twice the time, but because your average speed was higher it simply doesn't buy you as much.
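
If you want to play with the numbers, the arithmetic is only a few lines (the 20mph city and 60mph highway speeds are the assumptions used above):

    // Minutes saved over `miles` by driving `over` mph above a `base` speed.
    function minutesSaved(miles: number, base: number, over: number): number {
      const paceBefore = 60 / base;           // minutes per mile
      const paceAfter = 60 / (base + over);
      return miles * (paceBefore - paceAfter);
    }

    console.log(minutesSaved(5, 20, 10));   // ~5.0 min on 5 miles of 20mph streets
    console.log(minutesSaved(30, 60, 10));  // ~4.3 min on 30 miles of 60mph highway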

Working out the numbers also helps you realize that speeding on the highway to save a substantial amount of time is, while not pointless, more unsafe than you think. Even under great circumstances, like if you are facing 40 minutes of driving -- if we're talking about a 70mph highway and you are 15 minutes late while you think 5 minutes late is still socially acceptable, you need to cut 10 minutes and thus average 4/3 * 70mph = 93.3 mph to make that happen. That means that to handle the moments where you are stuck behind two cars both going 10 over the limit at 80mph, you will at times need to be going 30 over the limit. And that's with a relatively long commute! If it's a 20 minute commute you have to drive this recklessly just to shave 5 minutes.

