That makes sense. But what I need to know is the thought process that goes into determining the hyper-parameters. Most of the time my process amounts to slightly-educated guess and check. Are there smarter ways to go about that? What tools / analysis should I be using to see the effects of my tweaks?
Thought process #1 is to test, measure and document everything. Change one thing at a time; if you have two modifications you want to try and the resources to test them, it's best to run (and measure) A, B and A+B separately rather than only both at the same time. If adding A was mildly beneficial early in your experiments, it may no longer be after extensive other modifications, but you can check that. This obviously means you need a simple, mostly automated way to run repeatable experiments and document their results.
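A minimal sketch of such an automated ablation harness, assuming your training loop can be wrapped in a single function (`train_and_evaluate` here is a hypothetical stand-in for your own code):

```python
# Sketch of an automated, repeatable ablation runner: runs the baseline,
# each modification alone, and all combinations, logging every run.
import itertools
import json
import time

def train_and_evaluate(config):
    # Hypothetical placeholder: replace with your real training/eval loop.
    # The fake score just makes the harness runnable as-is.
    score = 0.80 + 0.02 * config["use_A"] + 0.01 * config["use_B"]
    return {"accuracy": score}

def run_ablation(base_config, mods):
    results = {}
    # Try every on/off combination of the listed modifications.
    for flags in itertools.product([False, True], repeat=len(mods)):
        config = dict(base_config)
        for mod, on in zip(mods, flags):
            config[mod] = on
        name = "+".join(m for m, on in zip(mods, flags) if on) or "baseline"
        metrics = train_and_evaluate(config)
        results[name] = metrics
        # Document everything: run name, full config, metrics, timestamp.
        print(json.dumps({"run": name, "config": config,
                          "metrics": metrics, "time": time.time()}))
    return results

results = run_ablation({"lr": 1e-3, "use_A": False, "use_B": False},
                       ["use_A", "use_B"])
```

The point is not the specific logging format but that every run is reproducible from its recorded config, so you can compare A, B and A+B honestly later.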
Thought process #2 is to read lots of papers that go into detail on solutions for similar problems, see what works and doesn't work for them, and try to understand whether the factors that make something useful apply to you as well (e.g. a different size or type of data may mean your experience is likely to be the opposite) - and, of course, try it and evaluate.
Thought process #3 is to do error analysis, possibly with tooling to show the relations (e.g. for image analysis tasks). You definitely want to know what kinds of mispredictions you are getting and in what amounts, and that may help you (though not always) to understand why a particular type of misclassification occurs.
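A minimal sketch of the tallying step, with made-up labels for illustration: count which (true, predicted) pairs dominate the errors, which is the first thing you want out of error analysis.

```python
# Tally mispredictions by (true label, predicted label) to see which
# error types dominate. Labels and predictions here are illustrative.
from collections import Counter

def misprediction_counts(y_true, y_pred):
    # Count only the errors, keyed by (true label, predicted label).
    errors = Counter((t, p) for t, p in zip(y_true, y_pred) if t != p)
    return errors.most_common()

y_true = ["cat", "cat", "dog", "dog", "dog", "bird"]
y_pred = ["cat", "dog", "dog", "cat", "cat", "bird"]

for (true, pred), n in misprediction_counts(y_true, y_pred):
    print(f"true={true} predicted={pred}: {n}")
```

On this toy data the dominant error is dog predicted as cat (2 cases), which tells you which examples to inspect first. For real multi-class tasks a full confusion matrix (e.g. scikit-learn's `confusion_matrix`) gives the same information at a glance.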
Technical analysis may also come into play, but IMHO it is more useful for debugging why something fails totally than for getting better accuracy out of something that already works reasonably well (it's more useful for getting faster convergence to the same result). There are all kinds of metrics you can measure on your network, e.g. dead neurons for the ReLU family, whether your early layers are stabilizing, etc. But again, failing to converge at all (or converging slowly) and converging to a not-good-enough optimum are quite different problems.
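One such metric sketched out, assuming you can capture a layer's post-ReLU activations for a batch: a "dead" unit is one that outputs zero for every example.

```python
# Sketch: fraction of dead ReLU units in a layer, i.e. units whose
# post-ReLU activation is zero across an entire batch.
import numpy as np

def dead_relu_fraction(activations, eps=0.0):
    # activations: array of shape (batch, units), post-ReLU outputs.
    dead = np.all(activations <= eps, axis=0)  # unit never fires on batch
    return dead.mean()

# Toy batch of 3 examples over 3 units; unit 0 never fires.
acts = np.array([[0.0, 1.2, 0.0],
                 [0.0, 0.0, 0.3],
                 [0.0, 0.5, 0.0]])
print(dead_relu_fraction(acts))  # 1 of 3 units is dead
```

In practice you'd compute this over several batches (a unit silent on one batch may fire on another), and a persistently high fraction suggests problems like a too-large learning rate or bad initialization rather than a tuning issue.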