No. If he continued to use Cramer's rule, he would definitely lose to LAPACK on big problems, purely because of using Cramer's rule. But even if he switched to Gaussian elimination, he might still lose to LAPACK, because LAPACK is very well optimized.
> he would definitely lose to LAPACK on big problems, purely because of using Cramer's rule.
More importantly, though, the code would probably take ages (as in, "age of the universe" ages) to compile, since a naive implementation of Cramer's rule is O(n!). One could get down to O(n^4) by computing the determinants with LU factorization, but at that point, what's the point of using Cramer's rule at all...
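To make the O(n!) claim concrete, here's a minimal sketch (my own illustration, not code from the thread) of Cramer's rule with the determinant computed by naive cofactor expansion. The expansion makes n recursive calls on (n-1)x(n-1) minors, so the determinant alone is O(n!), and solving an n x n system needs n+1 such determinants:

```python
def det(m):
    # 1x1 base case
    if len(m) == 1:
        return m[0][0]
    # Laplace (cofactor) expansion along the first row:
    # n recursive calls on (n-1)x(n-1) minors => O(n!) total work.
    total = 0
    for j in range(len(m)):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(minor)
    return total

def cramer_solve(a, b):
    # Cramer's rule: x_i = det(A with column i replaced by b) / det(A)
    d = det(a)
    n = len(a)
    x = []
    for i in range(n):
        a_i = [row[:i] + [b[k]] + row[i + 1:] for k, row in enumerate(a)]
        x.append(det(a_i) / d)
    return x

# 2x + y = 5, x + 3y = 10  =>  x = 1, y = 3
print(cramer_solve([[2, 1], [1, 3]], [5, 10]))  # [1.0, 3.0]
```

Replacing `det` with an LU-based determinant (O(n^3)) would bring the whole solve down to the O(n^4) mentioned above, but then you already have the LU factors and could just do forward/back substitution directly.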
At least, that's how I interpreted the comment...