
My understanding is that one of the major critiques of statistics, especially as it is used in psychology, has been its reliance on models derived from the mean.

There are inherent flaws and assumptions in this approach, which Peter Molenaar has done extensive work to critique (see Todd Rose's book on the subject). For anyone who understands the technique presented in this paper: does it also depend on the mean as a model, the way calculating Pearson's r does?
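
For reference, Pearson's r is built directly on the sample means, since it correlates deviations from x-bar and y-bar:

    r = \frac{\sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y})}
             {\sqrt{\sum_{i=1}^{n} (x_i - \bar{x})^2 \,\sum_{i=1}^{n} (y_i - \bar{y})^2}}

so the classical coefficient is anchored to the means.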



Isn't Molenaar looking at networks of symptoms over time? In any multivariate time series where you are searching for relationships between the variables (and allowing that you may have multiple sets of time series with some kind of grouping, e.g. observations from a set of people with one diagnosis vs. observations from a set of people with a contrary diagnosis or with none), any attempt to find correlations in the multivariate signal needs to account for the underlying statistical distribution of the signal components. The normal distribution isn't a bad first a priori approximation, but you really need to check (a quick check is sketched below).

Side note: it also isn't clear that you can group by diagnosis; see, for example, https://pubmed.ncbi.nlm.nih.gov/29154565/, which shows substantial individual variation even within diagnostic groups.
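
A minimal sketch of that kind of distributional check, assuming Python with NumPy/SciPy and a made-up three-channel series; scipy.stats.normaltest is just one of several tests you could reach for:

    # Per-channel distribution check before trusting mean-based correlation
    # estimates on a multivariate time series. The channels are synthetic,
    # purely for illustration.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    n_obs = 1000
    series = np.column_stack([
        rng.normal(size=n_obs),            # roughly Gaussian channel
        rng.exponential(size=n_obs),       # strongly skewed channel
        rng.standard_t(df=2, size=n_obs),  # heavy-tailed channel
    ])

    for ch in range(series.shape[1]):
        stat, p = stats.normaltest(series[:, ch])  # D'Agostino-Pearson test
        flag = "  <- normality is doubtful" if p < 0.05 else ""
        print(f"channel {ch}: normaltest p = {p:.3g}{flag}")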


This is an order-based algorithm, so it is more closely related to the median than to the mean.

Another very useful consequence of being order-based is that this new coefficient is much more robust to noise and outliers than the classical (Pearson) correlation coefficient.
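
A minimal sketch of that contrast, assuming the order-based coefficient under discussion is Chatterjee's rank-based xi (2020); the pearson_r helper and the toy data are only for illustration:

    # Compare a mean-based and an order-based dependence measure on data
    # with and without a single gross outlier. Assumes NumPy only.
    import numpy as np

    def pearson_r(x, y):
        # Mean-based: deviations from the sample means enter directly,
        # so one extreme point can dominate both sums.
        xd, yd = x - x.mean(), y - y.mean()
        return (xd * yd).sum() / np.sqrt((xd**2).sum() * (yd**2).sum())

    def chatterjee_xi(x, y):
        # Order-based (no-ties version): sort by x, rank the y values,
        # and look only at how neighbouring ranks jump around.
        order = np.argsort(x)
        r = np.argsort(np.argsort(y[order])) + 1   # ranks of y in x-order
        n = len(x)
        return 1.0 - 3.0 * np.abs(np.diff(r)).sum() / (n**2 - 1)

    rng = np.random.default_rng(0)
    x = rng.normal(size=500)
    y = x + 0.1 * rng.normal(size=500)   # strong monotone relationship
    y_out = y.copy()
    y_out[0] = 1e6                       # one gross outlier

    print(pearson_r(x, y), pearson_r(x, y_out))         # Pearson is swamped by the outlier
    print(chatterjee_xi(x, y), chatterjee_xi(x, y_out)) # xi barely changes

The Pearson estimate collapses because the outlier dominates both the covariance and the variance terms, while the rank-based statistic only sees the outlier as one displaced rank.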


I think there are far bigger problems with the lack of theoretical foundations and the abuse of p-values than with ergodicity, or whatever Peter Molenaar's pet peeve is.



