You can have more than one main function; you just pick which one to use when you compile. In our case it's PHP, so the API entry point uses a different index than jobs, for example. They can be deployed differently and scaled differently, but operationally it's just deploying the same code/configuration everywhere, and the only difference is routing. When you are writing code, you very, very rarely have to worry about which context you are writing for.
But there's nothing in that about using different entrypoints or routing.
FWIW, variations on the theme:
1. A single binary with a single entrypoint, which can play multiple roles simultaneously (UI, API, scheduled jobs), but where different kinds of request are routed to different pools of instances
2. A single binary with multiple embedded configurations, selecting via command line arguments etc, each for a single different role (UI, admin console, data ingestion)
3. A single (Java) binary with multiple entrypoints (main methods), each playing a different role (live calculations, batch calculations, data recording)
4. A single (C++) codebase building multiple binaries (via CMake add_executable), each playing a different role (calculating prices for potatoes, calculating prices for oranges)
5. A single repository with multiple completely separate applications, with some shared submodules, each built separately, playing a different role (receiving transactions, validating transactions, reporting transactions)
That last one is probably not an example of what you are talking about, but it's closer to the one before than to the first in the list. There is a sort of "ring species" [1] shape to this variation.
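Variation 2 can be sketched in Go: one binary, one entrypoint, with a command-line flag selecting the role each deployed instance plays. The role names and flag here are invented for illustration, not taken from any of the setups described above.

```go
package main

import (
	"flag"
	"fmt"
	"os"
)

// dispatch maps a role name to the work that role performs.
// In a real service each branch would start an HTTP server,
// a job loop, etc.; here each just reports what it would do.
func dispatch(role string) (string, error) {
	switch role {
	case "api":
		return "serving HTTP requests", nil
	case "worker":
		return "processing background jobs", nil
	default:
		return "", fmt.Errorf("unknown role %q", role)
	}
}

func main() {
	// The same binary is built once; deployment config decides the role.
	role := flag.String("role", "api", "which role this instance plays")
	flag.Parse()

	msg, err := dispatch(*role)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(2)
	}
	fmt.Println(msg)
}
```

Variation 3 is the same idea with the selection moved to build/launch time (multiple main methods), and variation 4 moves it to the build system (multiple binaries from one codebase).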
We've been doing this too for some "logical" services. For example, we might have a service which has a REST API but also needs to do long-running processing in response to requests via said REST API. The code for both lives in one repo and can share code, data structure definitions, databases, etc. One container is built, but we deploy it twice with different args: one is set to run the REST API, and the other runs the processing. Both are closely related, but in the cloud we can scale and monitor them separately. It gives a lot of the benefits of "standard" microservices with much less of the code- and repo-level screwing around. It relaxes the general microservices assumption that one service == one git repo == one database == one container == one deployment in GCP/etc.
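That deploy-twice pattern can be sketched as a compose-style config, a minimal sketch where the image name, flag, and port are invented for illustration: one image, two services, the only difference being the role argument and how each is scaled.

```yaml
# Hypothetical compose file: the same image deployed twice with different args.
services:
  api:
    image: myservice:latest      # one container image for both roles
    command: ["--role=api"]
    ports:
      - "8080:8080"
  worker:
    image: myservice:latest      # identical image, different role
    command: ["--role=worker"]
    deploy:
      replicas: 3                # scale the processing independently of the API
```

Monitoring works the same way: each service shows up separately, even though there is only one codebase and one build.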