zikohh's comments | Hacker News

Why not Perses?


@ingve I'm interested in your plan for your new TRVs. Why would you use the temperature reading from the Nest hub if the TRV is in a different room? Or do you also have "Nest sensors" that report to the Nest hub?

Would love to see a blog post about HA and TRVs


Yeah, I learnt this the hard way.


Dent looks really cool, thanks for sharing!


Two years later, I've experienced the same issue as someone else: https://community.o2.co.uk/t5/Apple/Urgent-Help-Needed-iPhon...


Another term for this is delivering vertical slices. I like the way this post explains it.


That's like most of Grafana's documentation


How is this different to jc? https://github.com/kellyjonbrazil/jc


jc has many parsers for the specific output formats of various programs, so it can automatically create a JSON object structure using its knowledge of each format.
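For example, a typical jc invocation (--dig is one of its bundled parsers):

  # jc knows dig's output format and emits structured JSON:
  dig example.com | jc --dig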

jb doesn't have high-level knowledge of other programs' formats; instead, it reads from common shell data sources, like command-line arguments, environment variables and files, and gives you ways to pull several of these sources into a single JSON object.

jb understands some simple/general formats commonly used in shell environments:

- key=value pairs (e.g. environment variable declarations, like the `env` program prints)

- delimited lists, like a,b,c,d (any character can be the delimiter, including the null byte commonly used to separate lists of file paths)

- JSON itself — jb can validate and merge together arrays and objects

You can use these simple sources to build up a more complex structure, e.g. using a pipeline of other command-line tools to generate null-delimited data or env var declarations, then consuming that output via process substitution <(...). (See the section in the README that explains process substitution if you're not familiar with it; it's really powerful.)
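To illustrate just the plumbing (this is standard bash process substitution with generic consumers; jb's actual argument syntax is in its README and isn't reproduced here):

  # env prints key=value declarations; <(env) exposes that output as a
  # readable file path that a consuming program such as jb can open:
  grep '^HOME=' <(env)

  # the same trick feeds null-delimited data from a pipeline:
  xargs -0 -n1 echo < <(find . -name '*.log' -print0)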

So jb is more suited to creating ad-hoc JSON for specific tasks. If you had both jc and jb available and jc could read the source you need, you'd prefer jc.


Have you tried the rendered manifests pattern? https://akuity.io/blog/the-rendered-manifests-pattern/


I read this article a while ago and it seems like the most sane way of dealing with this. Which tool you use to render the manifests doesn't even matter anymore.


While I generally agree with the pattern (dynamically generating manifests, and using pipelines to coordinate change), I could never quite figure out the value of using Branches instead of Folders (with CODEOWNER restrictions) or repositories (to enforce other types of rules if needed).

I can't quite put my finger on it, but having multiple, orphaned commit histories inside a single repository sounds off, even if technically feasible.


I believe the idea is that it makes the provenance of code between environments very explicit, eg promoting staging->master is a branch merge operation. And all the changes are explicitly tracked in CI as a diff.

With directories you need to resort to diffing to spot any changes between the files in each folder.
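For example (branch and folder names here are assumptions for illustration):

  # branch-per-environment: promotion state is a native git query
  git log --oneline prod..staging        # commits in staging not yet in prod

  # folder-per-environment: you compare directory trees instead
  git diff --no-index envs/prod envs/staging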

That said, there are some merge conflict scenarios that make it a little annoying to do in practice. The author doesn't seem to mention this one, but if you have a workflow where hotfixes can get promoted from older versions (eg prod runs 1.0.0, staging is running 1.1.0, and you need to cut 1.0.1), then you can hit merge conflicts and the dream of a simple "click to release" workflow evaporates.


> I believe the idea is that it makes the provenance of code between environments very explicit, eg promoting staging->master is a branch merge operation.

That isn't quite my understanding - but I am happy to be corrected.

There wouldn't be a staging->main flow. Rather, CI would be pushing main->dev|staging|prod, as disconnected branches.

My understanding of the problem being solved is how to see what is actually changing when moving between module versions, by explicitly outputting the dynamic manifest results. I.e. instead of the commit diff showing 4.3 -> 5.0, it shows the actual Ingress / Service / etc being updated.

> With directories you need to resort to diffing to spot any changes between the files in each folder.

Couldn't you just review the commit that instigated that change to that file? If the CI is authoring the change, the commit would still be atomic and contain all the other changes.

> but if you have a workflow where hotfixes can get promoted from older versions

Yeah 100%.

In either case, I'm not saying it's wrong by any stretch.

It just feels 'weird' to use branches to represent codebases which will never interact or be merged into each other.


Glad I am not the only one feeling "weird" about the separate branches thing :D

Probably just a matter of taste, but I think having the files for different environments "side by side" makes it actually easier to compare them if needed, and you still have the full commit history for tracking changes to each environment.


Sorry, typo, you're quite right: I meant to say staging->prod is a merge. So your promotion history (including, theoretically, which staging releases don't get promoted) can be observed from the 'git log'. (I don't think you want to push main->prod directly, as then your workflow doesn't guarantee that you ran staging tests.)

When I played with this we had auto-push to dev, then click-button to merge to staging, then trigger some soak tests and optionally promote to prod if it looks good. The dream is you can just click CI actions to promote (asserting tests passed).

> Couldn't you just review the commit that instigated that change to that file?

In general, though, a release will have tens or hundreds of commits; you also want a way to say "show me all the commits included in this release" and "show me the full diff of all commits in this release for a given file (or set of files)".
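Both of those are one-liners when environments are branches (branch and path names assumed):

  git log --oneline prod..staging               # all commits in this release
  git diff prod..staging -- manifests/myapp/    # release diff for specific files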

> In either case, I'm not saying it's wrong by any stretch.

Yeah, I like some conceptual aspects of this but ultimately couldn’t get the tooling and workflow to fit together when I last tried this (probably 5 years ago at this point to be fair).


> staging->prod is a merge

I might be misunderstanding what you mean by staging in this case. If so, my bad!

I don't think staging ever actually gets merged into prod via git history; rather, each environment is maintained as a separate commit tree.

The way that I visualised the steps in this flow was something like:

  - Developer commits code to a feature branch
  - Developer opens a PR from the feature branch to main: ephemeral tests, linting, validation etc occur
  - Developer merges the PR
  - CI checks out main, finds the helm charts that have changed, runs the equivalent of `helm template mychart`, and caches the results
  - CI then checks out staging (which is an entirely different HEAD and structure), finds the relevant folder where that chart will sit, wipes the contents, and checks in the new chart contents
  - Argo watches the branch and applies changes as they appear
  - CI waits for the validation test process to occur
  - CI then checks out prod and carries out the same process (i.e. no merge step from staging to production)

In that model, a merge conflict can never occur between staging and prod, because you're not dealing with merging at all.
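A minimal sketch of that render-and-push step (chart name, paths and branch layout are illustrative, not from the article):

  # render the chart to plain manifests from main...
  git checkout main
  helm template myapp ./charts/myapp -f envs/staging/values.yaml > /tmp/rendered.yaml

  # ...then replace the rendered copy on the disconnected staging branch
  git checkout staging
  rm -rf rendered/myapp && mkdir -p rendered/myapp
  mv /tmp/rendered.yaml rendered/myapp/manifests.yaml
  git add rendered/myapp
  git commit -m "render: myapp from main@$(git rev-parse --short main)"
  git push origin staging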

The way you then deal with a delta (like ver 1.0.1 in your earlier example) is to create a PR directly against the Prod branch, and then next time you do a full release, it just carries out the usual process, 'ignoring' what was there previously.

It basically re-invents the Terraform delta flow, but instead of the changes being shown via Terraform comparing state and template, you're comparing template to template in git.

> ultimately couldn’t get the tooling and workflow to fit together when I last tried this

I genuinely feel like this is the bane of most tooling in this space. Getting stuff from 'I can run this job on my desktop' to 'this process can scale across multiple teams, integrated across many toolchains and deployment environments, with sane defaults' still feels like a mess today.

edit: HN Formatting


Interesting. We have a system (different context, though it does use yaml) that allows nested configurations, and we arrived at a similar solution, where nested configs (the implicit/human interface) are compiled to fully qualified specifications (the explicit/machine interface). It works quite well for managing e.g. batch configurations with plenty of customization.
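A toy version of that compile step, assuming mikefarah's yq (v4) and hypothetical file names:

  # deep-merge a base config with a batch-specific override into a
  # fully qualified spec (later files win on conflicting keys):
  yq eval-all '. as $item ireduce ({}; . * $item)' base.yaml batch.yaml > compiled.yaml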

I was unaware there was a name for this pattern, thank you.


This pattern is powerful since the rendered output is plain text: you can pick arbitrary tooling and easily make your own modifications, for instance substituting variables/placeholders or applying static analysis.
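For instance (the tools named here are my examples, not the parent's):

  # substitute ${VAR} placeholders in the rendered manifests...
  IMAGE_TAG=v1.2.3 envsubst < rendered.yaml > final.yaml

  # ...then statically validate the result against the Kubernetes schemas
  kubeconform -summary final.yaml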


I agree 100%. I also like using FF-only merges. Reviewing commits is part of the code review process: their titles, descriptions and code. If the commits are not atomic and clear, I don't approve the merge request (MR); I believe the MR is a full snapshot of what will go to main.

It's so frustrating to see 7 commits on one MR when it's a 10-line change.
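FF-only is easy to enforce, and interactive rebase is the usual way to squash a noisy branch into atomic commits before review:

  # refuse any merge that isn't a fast-forward:
  git config merge.ff only

  # rewrite the feature branch's commits into clean, atomic ones:
  git rebase -i main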

