
This was inspired by an HN comment that linked to "You Suck at Excel" [0].

First, I realized how much I suck at Excel despite being in love with it. Excel just has so many features, and the basics are good enough to solve most Excel problems. This means I never search for a better method and only discover features through word of mouth.

Then, I realized many of the suggestions could be automatically applied (named cells, standardized color formatting, defined tables). Other suggestions could just be recommended (you seem to be using formulas to build a pivot table, click here to learn about pivot tables).

This seems too obvious, so it probably already exists. But the only thing I found was Microsoft's excellent Excel static-analysis add-in, ExceLint.

[0] https://www.youtube.com/watch?v=0nbkaYsR94c


Wow! No, probably not. I don't scan macros or other plugins.

I only have one recommendation right now, and it's super simple: look at formula references to find interesting cells, then check for a label to the left.

It's all heuristics and easily broken. It doesn't need to be perfect to help though.
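For the curious, here's a rough Python sketch of that heuristic using openpyxl (the file name is made up, and the reference regex is naive - it will misfire on things like LOG10):

  # Rough sketch: find cells referenced by formulas, then look left for a label.
  import re
  import openpyxl

  REF = re.compile(r"\$?([A-Z]{1,3})\$?([0-9]+)")  # naive A1-style reference matcher

  wb = openpyxl.load_workbook("book.xlsx")  # illustrative file name
  ws = wb.active
  for row in ws.iter_rows():
      for cell in row:
          if not isinstance(cell.value, str) or not cell.value.startswith("="):
              continue  # only look at formulas
          for col, r in REF.findall(cell.value):
              target = ws[f"{col}{r}"]  # an "interesting" cell
              if target.column > 1:
                  left = ws.cell(row=target.row, column=target.column - 1)
                  if isinstance(left.value, str):
                      print(f"{col}{r} candidate name: {left.value}")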

Side note: I can't believe Excel formulas are Turing-complete now! Excel is truly a beast!


There used to be some humor about Excel being limited to cells because [1] HyperCard was copyrighted. I might be confusing that with [2] HyperLook / [3] ViperCard, though.

Complexity/cost notwithstanding, visually merging snapshots of programs as done in [4], and visually 'linting' in relevant documentation/comments as a real-time AR overlay, might be more interesting.


[1] https://hypercard.org/

[2] http://www.art.net/~hopkins/Don/hyperlook/index.html

[3] https://www.vipercard.net/

[4] https://dynamicland.org/


Why not an index on "created" to speed up the "new" sort?

What indexes should you create? One for each sort you have?

(I'm just playing around cloning Reddit and running into all sorts of performance problems, mostly due to Celery taking all my memory.)


Yeah, these are the solutions to those performance issues. Given the way open-source databases work (clustered indexes), move your indexes to a table whose sole purpose is indexing, and for everything you sort by, build an index shaped like {search constants, then sorting columns, then id}. You can test with your workload, but I found that fully covering indexes sat in RAM better (so I included the id even though it wasn't strictly needed).

20 years ago MySQL didn't have descending index columns, so we had to make (1 - timestamp) columns and index those! We also used an int for time, as it was easier to index back then and you don't need sub-second precision or anything. Smaller indexes are better.
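To make that concrete, here's a hedged sketch of the {search constant, sort column, id} pattern in Python/sqlite3 (table and column names are invented; the separate index-only table trick is engine-specific and not shown):

  # Composite index: {search constant, sort column, id} covers the "new" sort.
  import sqlite3

  db = sqlite3.connect(":memory:")
  db.executescript("""
      CREATE TABLE posts (
          id INTEGER PRIMARY KEY,
          subreddit TEXT,    -- the "search constant"
          created INTEGER,   -- int timestamp: small index, no sub-second precision
          title TEXT
      );
      CREATE INDEX idx_posts_new ON posts (subreddit, created DESC, id);
  """)

  # The WHERE, ORDER BY, and SELECTed columns are all in the index,
  # so the query never has to touch the base table.
  plan = db.execute(
      "EXPLAIN QUERY PLAN "
      "SELECT id FROM posts WHERE subreddit = ? ORDER BY created DESC LIMIT 25",
      ("python",),
  ).fetchall()
  print(plan)  # expect: SEARCH posts USING COVERING INDEX idx_posts_new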


For those who are wondering how this equation is true:

https://en.wikipedia.org/wiki/Law_of_total_variance
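In symbols (conditioning Y on X):

  \operatorname{Var}(Y) = \operatorname{E}[\operatorname{Var}(Y \mid X)] + \operatorname{Var}(\operatorname{E}[Y \mid X])

i.e., total variance = the mean of the within-group variances plus the variance of the group means.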


2015. This is not a new take on the current state.


It was a scam then, and it's still a scam now.


Unlikely to ever find a GP - Victoria, BC


How good are mouse models? My uneducated thought is that they're unlikely to generalize to humans - like testing carcinogens in petri dishes.


"It depends."

I really can't answer more precisely than that without knowing what you consider to be a "good" model, and what you are interested in modeling. Perhaps a marginally more useful answer is that they can be _excellent_ metabolic and cellular models. It's a case of "if you know what you're looking for, you can pick animal models that effectively have exactly what you're looking for". The "what", here, would be a metabolic pathway, for example.


Ah. That makes sense. It's a tool that tests certain things.

My question was what percentage of results in mice generalize to humans?

To confirm I'm interpreting your answer correctly: it depends on how the scientists use the tool. It's good at verifying certain aspects but not others.


>My question was what percentage of results in mice generalize to humans?

Accuracy is very high when you are studying a metabolic pathway that is (near-)identical in humans and in mice.

>To confirm I'm interpreting your answer correctly: it depends on how the scientists use the tool. It's good at verifying certain aspects but not others.

Yes. More precisely: animal models are accurate when the metabolic machinery in humans is also found in the animal model.


I don't know if these issues have been fixed, but here's a study that looked at all types of animal studies for cancer, gave an overview, suggested fixes, and listed notable failures. One study it covers showed a promising cancer treatment in mice that, even used at 500x dilution in humans, caused systemic organ failure. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3902221/


I'm trying to solve this by crowd sourcing curation.

Most votes != weekly newsletter worthy

I separated them by making a "deeply important" flag. That flag gets set by efficiently estimating a referendum (elected mods, statistical sampling, referendum).
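Roughly, the sampling part could look like this Python sketch (all names and numbers are illustrative, not how the site actually works):

  # Estimate a referendum by polling a random sample instead of everyone.
  import math
  import random

  def estimate_referendum(voters, ask, sample_size=200, z=1.96):
      # Return a ~95% confidence interval for the "yes" proportion.
      sample = random.sample(voters, min(sample_size, len(voters)))
      p = sum(1 for v in sample if ask(v)) / len(sample)
      margin = z * math.sqrt(p * (1 - p) / len(sample))
      return p - margin, p + margin

  voters = list(range(10_000))
  ask = lambda v: random.random() < 0.6     # simulate ~60% support
  low, high = estimate_referendum(voters, ask)
  deeply_important = low > 0.5              # flag only if the low end clears a majority
  print(f"support in [{low:.2f}, {high:.2f}] -> flag={deeply_important}")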

https://efficientdemocracy.com

I need feedback.


I commend the research. I've also thought about what/how to increase signal/noise and sort of came to the realization that it's not well defined or solvable for a broad range of interests like HN's.

You perhaps have an idea of what "deeply important" might look like. I have a vague idea of what "technically interesting" looks like, and wondered how that could be curated. Perhaps the fastest way to bootstrap a particular sensibility is for one person to manually curate, allowing anyone with similar interests to consume. Eventually some of the top aligned commenters could also curate.

The basic premise of the approach is the observation that as an audience grows large, so does the diversity of topics, and unfortunately quality across the board decreases, since signal for one topic is noise for another.

This idea of estimating a referendum (elected mods, statistical sampling, referendum) looks like an approach that could possibly keep on working as numbers grow and keep things 'on topic'. That depends on the mods having more of a say since an actual or accurately estimated referendum will dilute/diversify with growth.

My thought on curating for different interests was to imagine 'tags' that could be crowdsourced, with each reader choosing a percentage or max number of posts per tag. The first 'dumb thing that could work' that I applied was regexes into sections, so all the 'Covid-19' news, etc., could be separate from niche stories.
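As a sketch, the regex-into-sections idea can be as small as this (patterns are illustrative):

  # Route stories into sections by title regex; unmatched stories stay "general".
  import re

  SECTIONS = {
      "covid-19": re.compile(r"covid|coronavirus|vaccine", re.I),
      "crypto": re.compile(r"bitcoin|ethereum|blockchain", re.I),
  }

  def section_for(title):
      for name, pattern in SECTIONS.items():
          if pattern.search(title):
              return name
      return "general"

  print(section_for("New Covid-19 variant detected"))  # covid-19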


I agree that as a community diversifies beyond a single group, a referendum will give a more generic answer.

It works well so long as you are like the group. I think subreddits are a good way of focusing on a topic.


How hard is it to actually get mail delivered?

I have the dumb idea of trying to make SMTP as cheap as HTTP: make spam expensive using proof of stake.

I find it frustrating that I have to pay Amazon to send text for me. I was going to set up my own SMTP server, but it seemed like too much work.


Check out the following pieces of software — it’s never been easier!

Maddy (https://maddy.email/)

Postal (https://docs.postalserver.io/)

Chasquid (https://blitiri.com.ar/p/chasquid/)


> Make spam expensive using proof of stake

That's already a thing. Hashcash [1], the PoW algorithm underpinning Bitcoin, was originally conceived as a method to prevent email spam.

[1] https://en.wikipedia.org/wiki/Hashcash
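For flavor, here's a toy hashcash-style proof of work in Python (parameters illustrative; real Hashcash stamps use a structured header format):

  # Sender burns CPU to find a counter whose hash has `bits` leading zero bits;
  # receiver verifies with a single cheap hash.
  import hashlib
  from itertools import count

  def mint(resource, bits=16):
      for counter in count():
          digest = hashlib.sha1(f"{resource}:{counter}".encode()).digest()
          if int.from_bytes(digest, "big") >> (160 - bits) == 0:
              return counter

  def verify(resource, counter, bits=16):
      digest = hashlib.sha1(f"{resource}:{counter}".encode()).digest()
      return int.from_bytes(digest, "big") >> (160 - bits) == 0

  c = mint("bob@example.com")          # expensive for the sender
  print(verify("bob@example.com", c))  # cheap for the receiver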


2015

