This was inspired by an HN comment that linked to "you suck at excel" [0].
First, I realized how much I suck at Excel despite being in love with it. Excel has so many features, and the basics are good enough to solve most problems. This means I never search for a better method and only discover features through word of mouth.
Then I realized many of the suggestions could be applied automatically (named cells, standardized color formatting, defined tables). Others could simply be recommended ("you seem to be using formulas to build a pivot table; click here to learn about pivot tables").
This seems too obvious, so it probably already exists. But all I found was Microsoft's excellent Excel static-analysis add-in, ExcelLint.
There used to be some humor about Excel being limited to cells because [1] HyperCard was copyrighted. I might be confusing that with [2] HyperNews / [3] ViperCard, though.
Complexity/cost notwithstanding, visually merging snapshots of program(s) as done in [4], and visually "linting" in relevant documentation/comments as a real-time AR overlay, might be more interesting.
Yeah, these are the performance-issue solutions. Based on the way open-source databases work (clustered indexes), move your indexes to a table whose sole purpose is indexing, and index everything you sort by: {search constants, then sorting columns, then id}. You can test with your workload, but I found that full covering indexes sat in RAM better (so I included the id even though it wasn't needed).
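A minimal sketch of that index ordering, using SQLite in Python as a stand-in for whatever engine you run (the table and column names are invented): the index puts the search constant first, the sort column second, and the id last, so the query below can be answered from the index alone.

```python
import sqlite3

# In-memory SQLite database as a stand-in for any SQL engine.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE posts (
        id INTEGER PRIMARY KEY,
        category TEXT,       -- the "search constant" in the WHERE clause
        created_at INTEGER,  -- the column we sort by
        body TEXT
    )
""")

# Covering index ordered as: search constants, then sorting columns, then id.
# Including id lets the engine answer the query without touching the table.
conn.execute("CREATE INDEX idx_posts_cover ON posts (category, created_at, id)")

plan = conn.execute("""
    EXPLAIN QUERY PLAN
    SELECT id FROM posts
    WHERE category = ?
    ORDER BY created_at DESC
    LIMIT 20
""", ("db",)).fetchall()

for row in plan:
    print(row)  # SQLite reports a COVERING INDEX scan for this query
```

The same column ordering applies to most B-tree engines, though the syntax and the plan output differ.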
20 years ago MySQL didn't have reverse (descending) index columns, so we had to make (1 - timestamp) columns and index those! We also used int for time, as it was easier to index back then and you don't need sub-second precision or anything. Smaller indexes are better.
I really can't answer more precisely than that without knowing what you consider to be a "good" model, and what you are interested in modeling. Perhaps a marginally more useful answer is that they can be _excellent_ metabolic and cellular models. It's a case of "if you know what you're looking for, you can pick animal models that effectively have exactly what you're looking for". The "what", here, would be a metabolic pathway, for example.
Ah. That makes sense. It's a tool that tests certain things.
My question was what percentage of results in mice generalize to humans?
To confirm I'm interpreting your answer correctly: it depends on how the scientists use the tool. It is good at verifying certain aspects but not others.
>My question was what percentage of results in mice generalize to humans?
Accuracy is very high when you are studying a metabolic pathway that is (near-)identical in humans and in mice.
>To confirm I'm interpreting your answer correctly: it depends on how the scientists use the tool. It is good at verifying certain aspects but not others.
Yes. More precisely: animal models are accurate when the metabolic machinery in humans is also found in the animal model.
I don't know whether these issues have since been fixed, but here's a study that surveyed all types of animal studies for cancer, suggested fixes, and listed notable failures. One example: a cancer treatment that looked promising in mice caused systemic organ failure in humans even when used at 500x dilution.
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3902221/
I'm trying to solve this by crowdsourcing curation.
Most votes != weekly-newsletter-worthy
I separated them by adding a "deeply important" flag. That flag gets set by efficiently estimating a referendum (elected mods, statistical sampling, referendum).
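A rough sketch of the statistical-sampling leg of that estimate in Python. The function name, sample size, and simulated electorate are all invented; it is just the standard sample-proportion estimate with a normal-approximation margin of error, so you can decide the flag from a poll of a few hundred users instead of a full referendum.

```python
import random
import math

def estimate_referendum(voters, sample_size=400, z=1.96, seed=0):
    """Estimate support for the 'deeply important' flag by polling a
    random sample of users instead of holding a full referendum."""
    rng = random.Random(seed)
    sample = rng.sample(voters, min(sample_size, len(voters)))
    p = sum(sample) / len(sample)                    # sampled support
    moe = z * math.sqrt(p * (1 - p) / len(sample))   # ~95% margin of error
    return p, moe

# Simulated electorate: 60% would vote to set the flag.
voters = [1] * 6000 + [0] * 4000
p, moe = estimate_referendum(voters)
flag = (p - moe) > 0.5  # set the flag only when a majority is clear
print(round(p, 3), round(moe, 3), flag)
```

Polling 400 of 10,000 users gives a margin of error under five points here, which is usually tight enough to separate "deeply important" from "merely popular".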
I commend the research. I've also thought about what/how to increase signal/noise, and sort of came to the realization that it's not well defined or solvable for a broad range of interests like HN's.
You perhaps have an idea of what "deeply important" might look like. I have a vague idea of what "technically interesting" looks like, and wondered how that could be curated. Perhaps the fastest way to bootstrap a particular sensibility is for one person to manually curate, allowing anyone with similar interests to consume. Eventually some of the top aligned commenters could also curate.
The premise of the approach is the observation that as an audience grows, so does the diversity of topics, and unfortunately quality across the board decreases because signal for one topic is noise for another.
This idea of estimating a referendum (elected mods, statistical sampling, referendum) looks like an approach that could keep working as numbers grow and keep things on topic. That depends on the mods having more of a say, since an actual or accurately estimated referendum will dilute/diversify with growth.
My thought on curating for different interests was to imagine 'tags' that could be crowdsourced, with each reader choosing a percentage or max number of posts per tag. The first 'dumb thing that could work' that I applied was regexes that split posts into sections, so all the 'Covid-19' news, etc. could be kept separate from niche stories.
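A minimal sketch of that regex-into-sections idea in Python (the tags and patterns are invented for illustration): each section is a regex over the title, and a post lands in the first section that matches, or in a catch-all.

```python
import re

# Hypothetical section patterns; each tag maps to a regex over the title.
SECTIONS = {
    "covid": re.compile(r"covid[- ]?19|coronavirus|pandemic", re.I),
    "databases": re.compile(r"\b(sql|postgres|mysql|index(es)?)\b", re.I),
}

def tag_post(title):
    """Return the first matching section tag, or 'general'."""
    for tag, pattern in SECTIONS.items():
        if pattern.search(title):
            return tag
    return "general"

print(tag_post("Covid-19 vaccine rollout update"))  # covid
print(tag_post("Why MySQL indexes love covering"))  # databases
print(tag_post("Show HN: my weekend project"))      # general
```

Readers' per-tag percentages or caps would then just be a filter over the tagged stream.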
[0] https://www.youtube.com/watch?v=0nbkaYsR94c