Hugo's front matter lets you define a publication date for content; you then need to run the Hugo build periodically so that content actually goes live once its date passes.
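A minimal sketch of how that looks, assuming JSON front matter (mentioned further down in this thread) and an hourly cron rebuild; the `/srv/site` path is hypothetical:

```json
{
  "title": "Scheduled post",
  "date": "2030-01-01T09:00:00Z"
}
```

```sh
# Hypothetical crontab entry: rebuild hourly so posts whose date
# has passed since the last run get published.
0 * * * * cd /srv/site && hugo --quiet
```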
The Hugo CLI has options to include drafts and future-dated content in a build so that you can preview them.
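For reference, those are the `--buildDrafts` and `--buildFuture` flags:

```sh
# Preview drafts and future-dated content locally
hugo server --buildDrafts --buildFuture   # short form: -D -F

# Or include them in a one-off build
hugo -D -F
```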
I have used Hugo in the past to generate websites from external content sources: Trello, GitLab, GitHub, RSS feeds, Dropbox, headless CMSes, etc. Some of them provided webhooks to trigger the build, others required polling and periodic rebuilds, and yet others could commit to a git repo and trigger a CI pipeline. Most of the time, Hugo was a composable piece of a given flow rather than a constraint I had to build around, thanks to its ability to consume "data content" and not only Markdown files. Using JSON as the front matter for documents often proved very handy with API sources.
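A rough sketch of that pattern, with a made-up API endpoint and field names, using jq to emit the JSON front matter:

```sh
# Hypothetical sketch: turn one API item into a Hugo content file.
# The endpoint and field names are invented for illustration.
item=$(curl -s https://api.example.com/items/42)

# A JSON object at the top of the file is the front matter;
# a blank line, then the body.
jq '{title: .name, date: .created_at, externalId: .id}' <<<"$item" \
  > content/posts/item-42.md
echo >> content/posts/item-42.md
jq -r '.body' <<<"$item" >> content/posts/item-42.md
```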
Hugo is very fast, so build time wasn't a big concern. There was no incremental build, however, so for a very large corpus that could be an issue. The most problematic issue is that Hugo never deletes anything from a previous build, only overwrites on top of it, which hurts if you involuntarily publish something and then want it gone. It's the same kind of problem you run into with caching or object storage. My "solution" was the classic parallel-builds, symlink, and cleanup dance.
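Roughly this, as a sketch; the directory layout and retention count are assumptions:

```sh
# Build into a fresh timestamped directory, point the live symlink
# at it, and prune old builds. /srv/site is a hypothetical layout.
new="/srv/site/builds/$(date +%s)"
hugo --destination "$new"

# Swap the symlink the web server serves from (unlink+relink is
# near-atomic; use mv -T onto a temp symlink if you need stricter).
ln -sfn "$new" /srv/site/current

# Keep only the five most recent builds.
ls -dt /srv/site/builds/* | tail -n +6 | xargs -r rm -rf
```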
Just wanted to add that I use Hugo in combination with a super simple CI/CD pipeline: every time the pipeline runs, I rsync the public/ directory with the delete option enabled. That makes it relatively trivial to delete content after the fact, because I can just remove it from the repository and have the pipeline remove it from the server.
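The deploy step amounts to something like this (host and path are placeholders):

```sh
# Build, then mirror public/ to the web root; --delete removes
# anything on the server that is no longer in the build output.
hugo --minify
rsync -av --delete public/ deploy@example.com:/var/www/site/
```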
Would recommend.