In the latter example, the question is really one of how tightly you wish to couple the application layer to the infrastructure layer (the controller). Should the application logic be coupled to an HTTP REST API (and thus map application errors to status codes, etc.), or does that mapping belong in the controller?
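To make the question concrete, here's a minimal Go sketch of the second option, where the use case stays HTTP-free and the controller owns the mapping. All the names (`WidgetService`, `GetWidget`, `ErrNotFound`) are hypothetical, just for illustration:

```go
package main

import (
	"encoding/json"
	"errors"
	"net/http"
)

// Hypothetical application-layer pieces; names are illustrative only.
var ErrNotFound = errors.New("widget not found")

type WidgetService struct{}

// GetWidget knows nothing about HTTP: native inputs, native outputs, domain errors.
func (s *WidgetService) GetWidget(id string) (map[string]string, error) {
	if id == "" {
		return nil, ErrNotFound
	}
	return map[string]string{"id": id}, nil
}

// The controller (handler) owns the HTTP concerns: pulling inputs out of the
// request and translating domain errors into status codes.
func widgetHandler(svc *WidgetService) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		widget, err := svc.GetWidget(r.URL.Query().Get("id"))
		switch {
		case errors.Is(err, ErrNotFound):
			http.Error(w, "not found", http.StatusNotFound)
		case err != nil:
			http.Error(w, "internal error", http.StatusInternalServerError)
		default:
			w.Header().Set("Content-Type", "application/json")
			json.NewEncoder(w).Encode(widget)
		}
	}
}

func main() {
	http.Handle("/widget", widgetHandler(&WidgetService{}))
	http.ListenAndServe(":8080", nil)
}
```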
I don't disagree that it's more practical, initially, as you've described it. However, I think it's important to point out the tradeoff rather than presenting it as purely more efficient. I've seen this approach result in poor separation of concerns and bloated use cases (`DoTheActualThing`) which become tedious to refactor, albeit in other languages.
One predictable side effect of the above, if you're working with junior engineers, is that they are likely to write tests for the application logic that take the request/response as inputs and assert on status codes, etc. I shudder to think how many lines of code I've read dedicated to mocking req/res that were never needed in the first place.
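For contrast, a rough sketch of what testing that application logic looks like when it isn't coupled to the transport, reusing the hypothetical `GetWidget` from the sketch above: no `http.Request`, no response recorder, nothing to mock.

```go
package main

import (
	"errors"
	"testing"
)

// Testing the use case with its native data types: no http.Request,
// no response recorder, no status-code assertions.
func TestGetWidget(t *testing.T) {
	svc := &WidgetService{}

	if _, err := svc.GetWidget(""); !errors.Is(err, ErrNotFound) {
		t.Fatalf("expected ErrNotFound, got %v", err)
	}

	widget, err := svc.GetWidget("42")
	if err != nil {
		t.Fatalf("unexpected error: %v", err)
	}
	if widget["id"] != "42" {
		t.Fatalf("expected id 42, got %q", widget["id"])
	}
}
```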
It leaves very little to the imagination as to whether or not ServeHTTP works, which is nice.
Complexity comes from generating requests and parsing the responses, and that is what leads to the desire to factor things out -- test the functions with their native data types instead of http.Request and http.Response. I think most people choose to factor things out to make that possible, but in the simplest of simple cases, many people just use httptest. It gets the job done.
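In the simplest of simple cases, that looks roughly like this (again reusing the hypothetical `widgetHandler` from the earlier sketch):

```go
package main

import (
	"net/http"
	"net/http/httptest"
	"testing"
)

// Drive the handler with httptest and assert on the response, accepting
// the cost of building requests and parsing responses by hand.
func TestWidgetHandler(t *testing.T) {
	req := httptest.NewRequest(http.MethodGet, "/widget?id=42", nil)
	rec := httptest.NewRecorder()

	widgetHandler(&WidgetService{}).ServeHTTP(rec, req)

	if rec.Code != http.StatusOK {
		t.Fatalf("expected 200, got %d", rec.Code)
	}
}
```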
I don't think it's poor to test HTTP handling either, as a coarse-grained integration test.
The problem I've seen is over-dependence on writing unit tests with mocks instead of biting the bullet and properly testing all the boundaries. I've seen folks end up with 1000+ tests, most of which are useless because the mocks bake in far too many assumptions, yet are necessary because of the layer coupling.
This was mostly in Node though, where mocking the request/response gets done inconsistently, per framework. Go might have better tooling in that regard, and maybe that sways the equation a bit. IMO there's still merit to decoupling if there's any feasibility of e.g. migrating to GraphQL or another protocol without having to undergo an entire re-write.
> I don't think it's poor to test HTTP handling either, as a coarse-grained integration test.
Sorry to spring a mostly-unrelated question on you about this, but why do you call this an integration test? I recently interviewed three candidates in a row that described their tests in this way, and I thought it was odd, and now I see many people in this thread doing it also.
I would call this a functional or behavioral test. For me a key aspect of an integration test is that there's something "real" on at least two "sides" - otherwise what is it testing integration with? Is this some side-effect of a generation growing up with Spring's integration testing framework being used for all black-box testing?
(I will not comment about how often I see people referring to all test doubles as "mocks", as I have largely given up trying to bring clarity here...)
The reality is that I've heard unit, integration, and e2e used almost entirely interchangeably, except perhaps unit and e2e with each other. I don't think trying to nail the terms down to something concrete is necessarily a useful exercise. Attempts to do so, IMO, only make subjective sense in the context of the individual's stack/deployment scenario.
To me, it's a contextual term much like 'single responsibility'. In this case, the two "sides" of an integration test are present. A consumer issues a request and a provider responds accordingly. The tests would ascertain that with variations to the client request, the provider behaves in the expected manner.
At which point you might point out that this sounds like an e2e test. But driving the actual client web app, for example, might involve far more than a simple HTTP client/library, whereas the provider can easily run a simple consumer in memory and avoid the network entirely. E2e tests tend to be far more fragile, so from the perspective of achieving practical continuous deployment, it's a useful distinction.
Integration test, in this instance: varying HTTP requests (infrastructure layer) provoke correct behaviour in the application layer.
E2e: the intended client issues HTTP requests under the correct conditions, which provokes certain provider behaviour, which the client then actually utilises correctly.
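A rough Go sketch of the former, building on the hypothetical `widgetHandler` above and assuming a `newRouter` helper that wires real routes to the real application layer: vary the HTTP request, assert on the provider's behaviour, all in process.

```go
package main

import (
	"net/http"
	"net/http/httptest"
	"testing"
)

// Hypothetical router constructor wiring real routes to the real
// application layer; only external dependencies (DB, etc.) would be faked.
func newRouter() http.Handler {
	mux := http.NewServeMux()
	mux.Handle("/widget", widgetHandler(&WidgetService{}))
	return mux
}

// Integration in the sense above: real HTTP requests (infrastructure layer)
// exercising the real application layer, run entirely in process --
// no browser, no deployed environment, no network to flake on.
func TestWidgetRoutes(t *testing.T) {
	router := newRouter()

	cases := []struct {
		path string
		want int
	}{
		{"/widget?id=42", http.StatusOK},
		{"/widget", http.StatusNotFound},
	}

	for _, tc := range cases {
		req := httptest.NewRequest(http.MethodGet, tc.path, nil)
		rec := httptest.NewRecorder()
		router.ServeHTTP(rec, req)
		if rec.Code != tc.want {
			t.Fatalf("%s: expected %d, got %d", tc.path, tc.want, rec.Code)
		}
	}
}
```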
This, to me, is why the most important part of testing is understanding the boundaries of the tests. Not worrying about their names.