> What I cannot do is to assess if the code is supposed to do its job
You can, to a degree. If you can get the system under test, you can make an assertion about how you think it works and see if it holds. If you know what the "law" is, you can test whether the system calculates it according to that specification. You will learn something from making those kinds of assertions.
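For example, a characterization test pins down what the code currently does, right or wrong. A minimal sketch in Python (the `legacy_billing` module, `calculate_fee`, and the expected values are all made-up stand-ins for whatever your system actually exposes):

    import unittest

    from legacy_billing import calculate_fee  # hypothetical legacy module

    class CharacterizationTest(unittest.TestCase):
        """Assert what the code *does*, not what it should do."""

        def test_standard_fee(self):
            # Expected value taken from observing the running system,
            # not from a spec. If this fails, we learned something.
            self.assertEqual(calculate_fee(amount=100, tier="standard"), 2.50)

        def test_negative_amount(self):
            # Possibly a bug, but it is today's production behaviour,
            # so we record it before changing anything.
            self.assertEqual(calculate_fee(amount=-10, tier="standard"), 0)

    if __name__ == "__main__":
        unittest.main()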
Working Effectively with Legacy Code by Michael Feathers goes into detail about exactly this process: getting untested, undocumented code people rely on into a state that it can be reliably and safely maintained and extended.
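One of the book's core moves is finding a "seam": a point where you can substitute behaviour without editing the logic you want to test. A sketch of the "extract and override" variant (all names invented):

    from dataclasses import dataclass

    @dataclass
    class Invoice:
        region: str
        total: float

    class InvoiceProcessor:
        def process(self, invoice: Invoice) -> float:
            # The rate lookup is the seam.
            return invoice.total * (1 + self._tax_rate(invoice.region))

        def _tax_rate(self, region: str) -> float:
            # In the real class this would be a hard-wired database or
            # network call, which is what makes it untestable as-is.
            raise NotImplementedError("talks to the database in production")

    # In the test, subclass and pin the dependency:
    class TestableInvoiceProcessor(InvoiceProcessor):
        def _tax_rate(self, region: str) -> float:
            return 0.10  # fixed rate, no database needed

    assert abs(TestableInvoiceProcessor().process(Invoice("EU", 100.0)) - 110.0) < 1e-9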
Depending on the situation I often recommend going further and using model checking. A language and toolbox like TLA+ or Alloy is really useful for getting from a high-level "what should the system do?" specification down to "what does the system actually do?" The results are sometimes surprising.
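TLA+ and Alloy do this properly, with real specification languages and model checkers. Purely to show the shape of the idea, here's the brute-force equivalent in Python: exhaustively walk every reachable state of a toy lock protocol (entirely invented) and assert an invariant in each one:

    # Toy model: two processes, each "idle", "waiting", or "critical",
    # plus a shared lock holder. Explore every reachable state and
    # check the invariant "at most one process is ever critical".

    def replace(t, i, v):
        return tuple(v if j == i else x for j, x in enumerate(t))

    def successors(state):
        procs, holder = state
        for i in (0, 1):
            if procs[i] == "idle":
                yield (replace(procs, i, "waiting"), holder)
            elif procs[i] == "waiting" and holder is None:
                yield (replace(procs, i, "critical"), i)   # acquire lock
            elif procs[i] == "critical":
                yield (replace(procs, i, "idle"), None)    # release lock

    init = (("idle", "idle"), None)
    seen, frontier = {init}, [init]
    while frontier:
        state = frontier.pop()
        procs, _ = state
        assert procs.count("critical") <= 1, f"mutual exclusion violated in {state}"
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)

    print(f"invariant holds across all {len(seen)} reachable states")

Drop the `holder is None` guard and the assertion fires, which is exactly the kind of surprise a model checker surfaces about a real system.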
You're right that at some level you do need to work with someone who does understand what the system should do.
But you can figure out what the system actually does. And that is de facto what the business actually does... as opposed to what they think it does, logic errors and all.
A good senior programmer, in my opinion, thinks above the code like this.
Update: interviewing for this kind of skill is hard. Take-home problems can be useful for it in the right context. What would you do differently?
There's also a good chance your business doesn't have solid documentation on how features should work, there are no tests, and the thing you're testing is so custom-tailored to your business that you need someone with knowledge of the product and its history to tell you, "oh yes, that's supposed to happen that way."
Sure, if there's a law written down or past regulations your business follows, that's easy: you've got docs.