I would wager that the vast majority of analyses / code, in most fields of research, can run on a basic laptop. If I had to put a number on it, I'd say >99.999% of papers.
They might run in a matter of days, weeks, or months, but people often want the results faster than that, especially if a second round of analyses is planned or a conference deadline is coming up.
There’s no way it’s that high. A remarkably large share of work across the sciences is “computational” these days, and that can involve very intensive simulation.
Lol, there was one time I churned out a series of magic numbers by repeatedly running several pieces of code in an IPython notebook, measuring the results (with a significant contribution from eyeballing a graph), and tuning the input parameters by hand each time. The magic numbers worked and were put into production at LHC (yes, the Large Hadron Collider), but good luck turning that process into reproducible code without writing a lot of new annealing code (see the sketch below), which would at least double the effort.
And I was the most organized programmer among my peers by a long shot. Judging from the messiness of shared code, I can’t imagine how bad other people’s unshared work is.
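For what it's worth, here's a minimal sketch of what that automated replacement could look like, using scipy's dual_annealing; the analysis function, parameter names, and bounds are all made-up stand-ins for the hand-tuned procedure:

    # Hypothetical sketch: replacing hand-tuning with simulated annealing.
    # run_analysis() stands in for the notebook cells that were re-run by
    # hand; the parameter names, bounds, and toy optimum are invented.
    from scipy.optimize import dual_annealing

    def run_analysis(gain, threshold):
        # Toy figure of merit with a known optimum at (2.0, 0.3);
        # in reality this would be the expensive notebook code.
        return -((gain - 2.0) ** 2 + (threshold - 0.3) ** 2)

    def objective(params):
        gain, threshold = params
        # dual_annealing minimizes, so negate the figure of merit
        return -run_analysis(gain, threshold)

    bounds = [(0.1, 10.0),   # gain
              (0.0, 1.0)]    # threshold

    result = dual_annealing(objective, bounds, seed=42, maxiter=200)
    print("magic numbers:", result.x)

In practice the objective would have to wrap the real analysis and the real (previously eyeballed) figure of merit, which is exactly the "lot of new annealing code" part.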
Some of the HPC code I've used is definitely "good luck, and godspeed..." territory if I handed you the code.
Some of it is just really clean Python code that generates a bunch of simulated data to train machine learning models on, driven by human-readable configuration files (roughly the pattern sketched below).
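A minimal sketch of that pattern, assuming a YAML config; the field names, the linear toy model, and the output file are invented for illustration:

    # Hypothetical sketch of config-driven simulated-data generation;
    # the config fields and output file name are made up. In practice
    # the config would live in its own file next to the code.
    import numpy as np
    import yaml

    CONFIG = """
    n_samples: 10000
    n_features: 4
    noise_sigma: 0.05
    seed: 7
    """

    cfg = yaml.safe_load(CONFIG)
    rng = np.random.default_rng(cfg["seed"])

    X = rng.normal(size=(cfg["n_samples"], cfg["n_features"]))
    w = rng.normal(size=cfg["n_features"])  # hidden "true" weights
    y = X @ w + rng.normal(scale=cfg["noise_sigma"], size=cfg["n_samples"])

    np.savez("simulated_training_data.npz", X=X, y=y)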
But neither of them involves the words "$COUNTRY Ministry of Health approval...", which kicks things to a whole new level.