slowking2's comments

I knew some who were bad at math. Asian immigrant test scores on math are ~1/2-1 standard deviation higher than white Americans. That’s noticeable comparing groups of people but still leaves a lot of Asian immigrants who are not good at math.

There is no royal road. If all your kids are biologically yours, you and all your family are good at math, and you marry someone from a similar family, you can stack the deck maybe 95/5 in favor of your kid being good at math? But that option is already off the table if you lack that talent. And there are other things you should probably prioritize first!


Also, being far enough from Europe that a huge amount of talent decided the U.S. was a better bet for getting away from the Nazis. And then taking in a large number of former Nazi scientists post-war as well.

The article mentions but underrates the fact that post-war the British shot themselves in the foot economically.

As far as I'm aware, the article is kind of wrong that there wasn't a successful British computing industry post-war, or at least it's not obvious that its eventual failure has much to do with differences in basic research structure. There was a successful British computing industry at first, and it failed a few decades later.


And yet here we are with Arm cores everywhere you look! :-D


Fair point! That's a great technical success; I didn't realize Arm was British.

If the main failure of British companies is that they don't have U.S. company market caps, it seems more off base to blame this on government science funding policy instead of something else. In almost every part of the economy, U.S. companies are going to be larger.


My understanding is that the British "Arm" is just a patent holder now. I don't think they actually make anything. Companies outside the UK are the ones that actually make the chips licensed from the Arm designs.


At work, I find type hints useful as basically enforced documentation and as a weak sort of test, but few type systems offer decent basic support for the sort of things you would need to do type driven programming in scientific/numerical work. Things like making sure matrices have compatible dimensions, handling units, and constraining the range of a numerical variable would be a solid minimum.

I've read that F# has units, Ada and Pascal have ranges as types (my understanding is these are runtime enforced mostly), Rust will land const generics that might be useful for matrix type stuff some time soon. Does any language support all 3 of these things well together? Do you basically need fully dependent types for this?

Obviously, with discipline you can work to enforce all these things at runtime, but I'd like it if there was a language that made all 3 of these things straightforward.


I suspect C++ still comes the closest to what you’re asking for today, at least among mainstream programming languages.

Matrix dimensions are certainly doable, for example, because templates representing mathematical types like matrices and vectors can be parametrised by integers defining their dimension(s) as well as the type of an individual element.
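For example, a toy version of that idea (my own sketch, not any real library's API) might look like this: the dimensions are template parameters, so multiplying matrices with incompatible inner dimensions simply fails to compile.

```cpp
#include <array>
#include <cassert>
#include <cstddef>

// Minimal sketch: a matrix whose dimensions live in the type.
// Mismatched multiplications are rejected at compile time.
template <std::size_t R, std::size_t C, typename T = double>
struct Matrix {
    std::array<T, R * C> data{};  // value-initialized to zero
    T& at(std::size_t r, std::size_t c) { return data[r * C + c]; }
    const T& at(std::size_t r, std::size_t c) const { return data[r * C + c]; }
};

// The inner dimension K must match by construction: an (R x K) times
// a (K x C) yields an (R x C). Passing incompatible shapes is a type
// error, not a runtime crash.
template <std::size_t R, std::size_t K, std::size_t C, typename T>
Matrix<R, C, T> operator*(const Matrix<R, K, T>& a, const Matrix<K, C, T>& b) {
    Matrix<R, C, T> out;
    for (std::size_t i = 0; i < R; ++i)
        for (std::size_t j = 0; j < C; ++j)
            for (std::size_t k = 0; k < K; ++k)
                out.at(i, j) += a.at(i, k) * b.at(k, j);
    return out;
}
```

A `Matrix<2, 3> * Matrix<2, 3>` just won't compile, which is exactly the shape-checking the parent is asking for, moved to build time.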

You can also use template wizardry to write libraries like mp-units¹ or units² that provide explicit representations for numerical values with units. You can even get fancy with user-defined literals so you can write things like 0.5_m and have a suitably-typed value created (though that particular trick does get less useful once you need arbitrary compound units like kg·m·s⁻²).
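To make the idea concrete, here is a hand-rolled illustration of the mechanism those libraries build on (this is NOT mp-units' actual API, just the underlying trick): unit exponents are carried in the type, so adding metres to seconds is a compile error, while multiplication derives a new compound unit by adding exponents.

```cpp
#include <cassert>

// Sketch only: track exponents of (metre, second, kilogram) in the type.
template <int M, int S, int KG>
struct Quantity {
    double value;
};

// Addition is only defined for identical units.
template <int M, int S, int KG>
Quantity<M, S, KG> operator+(Quantity<M, S, KG> a, Quantity<M, S, KG> b) {
    return {a.value + b.value};
}

// Multiplication adds the unit exponents, deriving compound units.
template <int M1, int S1, int KG1, int M2, int S2, int KG2>
Quantity<M1 + M2, S1 + S2, KG1 + KG2>
operator*(Quantity<M1, S1, KG1> a, Quantity<M2, S2, KG2> b) {
    return {a.value * b.value};
}

using Metres  = Quantity<1, 0, 0>;
using Seconds = Quantity<0, 1, 0>;

// User-defined literals so 0.5_m reads naturally.
Metres  operator""_m(long double v) { return {static_cast<double>(v)}; }
Seconds operator""_s(long double v) { return {static_cast<double>(v)}; }
```

With this, `0.5_m + 1.0_s` fails to compile, and `2.0_m * 3.0_m` has the type `Quantity<2, 0, 0>` (an area) automatically.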

Both of those are fairly well-defined problems, and the available solutions do provide a good degree of static checking at compile time.

IMHO, the range question is the trickiest one of your three examples, because in real mathematical code there are so many different things you might want to constrain. You could define a parametrised type representing open or closed ranges of integers between X and Y easily enough, but how far down the rabbit hole do you go? Fractional values with attached precision/error metadata? The 572 specific varieties of matrix that get defined in a linear algebra textbook, and which variety you get back when you compute a product of any two of them?

¹ https://mpusz.github.io/mp-units/

² http://nholthaus.github.io/units/


I'd be happy for just ranges on floats being quick and easy to specify even if the checking is at runtime (which it seems like it almost will have to be). I can imagine how to attach precision error/metadata when I need it with custom types as long as operator overloading is supported. I think similarly for specialized matrices, normal user defined types and operator overloading gets tolerably far. Although I can understand how different languages may be better or worse at it. Multiple dispatch might be more convenient than single dispatch, operator overloading is way more convenient than not having operator overloading, etc.

A lot of my frustration is that the ergonomics of these things tend to be not great even when they are available. Or the different pieces (units, shape checking, ranges) don't necessarily compose together easily because they end up as 3 separate libraries or something.


Crystal certainly supports that kind of typing, and being able to restrict bounds based on dynamic elements recently landed in GCC making it simple in plain C as well.


If x is of type T, what type do you want (x - x) to be?


That's a hard one because it depends on what sort of details you let into types and maybe even on the specific type T. Not saying what I'm asking for is easy! Units and shape would be preserved in all cases I can think of. But with subranges, (x - x) may have a super-type of x... or if the type system is very clever, the type of (x - x) may be narrowed to a single value :p

And then there's a subtlety where units might be preserved, but x may be "absolute" whereas (x - x) is relative, and you can do operations with relative units you can't with absolute units and vice versa. Like the difference between x being a position on a map and delta_x being movement from a position. You can subtract two positions on a map in a standard mathematical sense but not add them.
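This position/displacement distinction is easy to sketch with two ordinary types (hypothetical names, C++ for concreteness): subtraction of positions yields a displacement, positions can't be added at all, and notably (x - x) then has a *different* type from x.

```cpp
#include <cassert>

// Both carry the same physical unit, but only some operations make sense.
struct Displacement { double metres; };  // "relative": movement
struct Position     { double metres; };  // "absolute": a place on the map

// position - position = displacement (well-defined)
Displacement operator-(Position a, Position b) { return {a.metres - b.metres}; }

// position + displacement = position (well-defined)
Position operator+(Position p, Displacement d) { return {p.metres + d.metres}; }

// displacement + displacement = displacement
Displacement operator+(Displacement a, Displacement b) { return {a.metres + b.metres}; }

// Deliberately NO operator+(Position, Position): adding two map
// positions is meaningless, so it simply doesn't compile.
```

Mathematically this is the affine-space vs. vector-space distinction, and it answers the (x - x) question for this case: x is a Position, but (x - x) is a Displacement.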


I use both DuckDB and SQLite at work. SQLite is better when the database gets lots of little writes, when the database is going to be stored long term as the primary source of truth, when you don't need to do lots of complicated analytical queries etc. Very useful for storing both real and simulated data long term.

DuckDB is much nicer in terms of types, built-in functions, and syntax extensions for analytical queries, and it also happens to be faster for big analytical queries, although most of our data is small enough that it doesn't make a big difference over SQLite (it's still a big improvement over using Pandas even for small data).

DuckDB only just released versions with backwards-compatible data storage, though, so we don't yet use it as a source of truth. DuckDB has really improved in general over the last couple of years and finally hit 1.0 three months ago, so depending on what version you tried on your tiny data, it may be better now. It's also possible to use DuckDB to read and write SQLite files if you're concerned about interop and long-term storage, although I haven't done that myself, so I don't know what the rough edges are.


Agreed. I work in medical device engineering, and >50% of our time relates to simulation in some way. A big part of our responsibilities is designing, implementing, or reimplementing models of various subsystems relevant to our devices so we can do preliminary estimates of safety or efficacy. Analyzing real world outcomes is also important, although I haven't been at the company long enough yet for that to catch up with simulation in terms of how much time we spend on it.

I'd say the closest description to us in the article is the practical research team. We have fairly clear business goals we are fulfilling with our work.


Jupyter notebooks can be executed roughly like scripts by papermill. You can also save a .py version of the notebook without outputs using jupytext. We use these packages together where I work to basically auto-generate the start of notebooks for exploratory work after certain batch jobs. For dashboards only used by a small number of users, we’ve found voila occasionally useful. Voila turns notebooks into dashboards or web apps basically.

You generally shouldn’t put code you want to reuse in notebooks, but we haven’t found this to be much of a problem. And >3/4 of the people who’ve worked on our team don’t have software engineering backgrounds. If you set clear expectations, you can avoid the really bad tar pits even if most of your co-workers are (non-software) engineers or scientists 0-3 years out of college.


Barring it being a joke, the first question is unhelpful and likely a jerk move. Everyone makes mistakes sometimes.

The second question seems like the type of feedback that would usually be fine. People's skills and knowledge don't always overlap. What is crazy complex for A may not be for B, and what is crazy complex for B may not be for A! And that doesn't have to have anything to do with A or B being smarter. A might not know SQL and B might not know pandas. But sometimes it really does make sense to move some code from SQL to pandas or vice versa (assume for the moment that both SQL and pandas are already in the tech stack). Some people find it simpler to write in an object-oriented style and others in a functional style. What makes more sense to do is not always obvious. So the question could be a good one. If the suggestion is bad, explain why it's bad. If the suggestion is good, maybe consider whether it's worth doing at the current point. If it's somewhere in the middle or there's no time, acknowledge and move on.


Same here, interacting with people makes my performance on all sorts of mental tasks drop much faster. 5-6 hours of meetings, and worst case I don't want to do anything else, best case I still need to go for a walk and another 30 minutes on top of that before I'm back to average coding skill.

On the flip side, even for coding I don't love, I can usually grind out 9-10 hours at fairly high productivity; I usually won't because I have other priorities to balance but I can. If I find what I'm working on really interesting, I can do 12 hours at high productivity.

For work, I try to have 2 days a week where I put most of my meetings so that hopefully at least 1 or 2 days of the rest are high productivity.


Had the same thought. The naming synergy is perfect.


The link you posted says that the original rat park researcher's own graduate student was unable to replicate the original experiment when he tried reducing the confounds. That seems like the most favorable possible situation for a replication to me. It also says that one confound was that morphine consumption wasn't measured in the same way between conditions in the original experiment. That seems pretty bad to me. Linked article also mentions that other researchers have had trouble replicating the results fully.

Rat park doesn't consistently apply to rats, so why should its results be considered particularly informative about humans?

Has someone done a much better study on rats since then? Significantly larger N, always taking measurements the same way across conditions, genetics carefully controlled or at least measured? I'm not an animal behavior researcher, so I'm sure there could be other things important to handle.

Rats are certainly more similar to humans than flies are, but that doesn't mean any particular study is informative about humans.


Just saying, since neither humans nor rodents typically live in cages, maybe rodent experiments would be more predictive of human behavior if they weren't forced into living in small cages.


looks around his cubicle

I must be in the control group. ;)

