Take a look at Nomad before jumping on Kubernetes (atodorov.me)
183 points by sofixa on Feb 28, 2021 | 91 comments


We're all-in on kubernetes and CRI where I work, but Nomad is cool, and if we were starting out now we'd certainly take a close look at it. We use other Hashicorp tools, terraform for example, so we're already familiar with HCL (which I don't necessarily prefer over yaml or json for declarative tasks, but that's another debate). Most of the simplicity argument comes from the single server/worker binary and ease of deployment on the ops side, which might not mean all that much if you use managed k8s like GKE. If you're going to implement and run your own clusters, that's a bigger deal.

Looking at it from the consumer side and comparing like to like, I don't feel it's a _lot_ simpler. Task groups and pods, tasks and containers, config, networking, storage: it is all different on nomad but not, imo, tremendously simpler. That makes sense given the nature of the problem the systems are addressing. Over the last five years kubernetes has grown horizontally a great deal to encompass more and more enterprise features and use cases, and I feel in general this is a repeating pattern: a thing is created, it works well and becomes popular, more people use it and join the community bringing their own use cases, which drives the addition of features to the thing, which eventually gets complicated enough that people are attracted to a new thing that does largely the same stuff but hasn't had a lot of the new features added yet. Rinse and repeat.

On a somewhat unrelated note: I'm not very attracted to server-side features for tracking and manipulating versions of deployments and their history. I'm a fan of GitOps, and what I want out of the server-side runtime is reliable, fast reconciliation of the infrastructure state with the release branch in my repo. If I want to roll back I would much rather revert in git and redeploy. It seems troublesome and error-prone to rely on a cli command to force the server-side state to a previous iteration that is not in sync with what master in the repo says should be the canonical release version. Interested in others' thoughts, but maybe it's a separate topic.


Having a rollback feature at the orchestrator level is useful when you want the rollback to be fast. Sometimes that's critical - when you deploy something buggy in production.

One reason I'd prefer to use Nomad over K8s is scalability. K8s becomes slow and occasionally acts weird with just a few thousand pods. (Not that it is particularly fast with small numbers of pods.) Nomad is known to run reliably in production with many thousands of mixed workloads, not just containers.

Another reason is its specialization. I'd rather deal with a handful of independent, well documented components (consul, vault, nomad, basically) than with one thing that does everything, that I occasionally have to fight, and which, by necessity, given its breadth, is poorly documented. We do use k8s but still run vault for secrets - k8s secrets are a joke. We don't use consul because our discovery needs are simple, but the service abstraction in k8s is weak at best.

While I'm not a big fan of HCL either, in any version, I do think that being able to manage everything, from cluster configuration to application deployment, via the same tool (terraform), in git, is more convenient than having to use terraform for infrastructure but being forced to use helm on top of it. You might be able to maintain applications as terraform HCL scripts, but it would be unreasonable, given that for many of the applications you'll want to deploy there are ready-made charts available.
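To make the terraform angle concrete, here's a minimal sketch of a job managed through the official "nomad" Terraform provider. The cluster address and the whoami.nomad job file are made up for illustration; the point is that the application deployment sits in the same repo and plan as the rest of the infrastructure:

    # Sketch only - the provider address and the whoami.nomad job file are made up.
    terraform {
      required_providers {
        nomad = {
          source = "hashicorp/nomad"
        }
      }
    }

    provider "nomad" {
      address = "http://nomad.example.internal:4646" # assumed cluster API address
    }

    # The job spec is just another HCL file checked into the same repo.
    resource "nomad_job" "whoami" {
      jobspec = file("${path.module}/whoami.nomad")
    }

A plain `terraform apply` then registers or updates the job the same way it would create any other piece of infrastructure.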

From a strictly theoretical point of view, designing any kind of software the way kubernetes is designed is bad practice. It consists of very few components, each of which does many different things. Some people work hard to keep the system usable, but IMO the bad structure shows in how users need to interact with the system.

Which is why, if the choice were mine, and not hype-driven, I'd take managed nomad with consul, vault, terraform and some git repo (my choice: gitlab, because it comes with a nice integrated CI) already integrated, any time, over managed kubernetes. But that's just me.


The main advantage of Kubernetes isn't really Kubernetes anymore — it's the ecosystem of stuff around it and the innumerable vendors you can pay to make your problems someone else's problems.


The main advantage of Nomad is Nomad. It strikes the perfect balance between simple and capable, enough so that I don't have to pay someone else. You can read the bulk of the docs in a day or two and deploy it that afternoon.
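To give a feel for it, a complete service job file can be about this small (a sketch; the names, image and sizing are made up):

    # Hypothetical minimal service job - names, image and sizing are made up.
    job "web" {
      datacenters = ["dc1"]
      type        = "service"

      group "web" {
        count = 2

        network {
          port "http" {
            to = 80
          }
        }

        task "nginx" {
          driver = "docker"

          config {
            image = "nginx:1.19-alpine"
            ports = ["http"]
          }

          resources {
            cpu    = 100 # MHz
            memory = 128 # MB
          }
        }
      }
    }

`nomad job run web.nomad` is pretty much the whole deployment step.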

Consul/Nomad deployment is easy and operation is a breeze. As the sole ops person at my company, Nomad has made it possible for me to scale our infrastructure without an entire team dedicated to it.

I'm not saying that it's better than K8S, but for a small team that wants to focus on building things, it's an amazing time saver. Obviously, read the docs and make your own decisions, but there is definitely an infrastructure void that Nomad has filled perfectly.


As a counterpoint, everything you just said about Consul/Nomad applies to using hosted K8s like GKE. I moved my company's infra to it last year and now we barely think about it. We're a small team, I'm the only person doing devops, and there are very, very few problems requiring my attention.

I'm skeptical that anything allowing flexibility in this space can ever be that simple because the domain is actually pretty complicated. Networking, secrets, scaling, healthchecking, deployment strategies, containerization -- you need to know something about all of these before you can use Nomad or Kubernetes.


K8s is complicated because it does many things, not because each of the things it does, individually, is complicated. If your orchestrator only runs stuff, if your service discovery is independent of the orchestrator, and just does the wiring and routing, if your secrets management is simply a thing that provides secrets to authorized clients, a lot of the complexity associated with kubernetes goes away. The whole is always more than the sum of its parts, and in the case of kubernetes this adds up in an unfavorable way, complexity-wise.

IME, a lot of complexity, in kubernetes land, also comes from kubernetes being highly opinionated. Whenever your needs don't match kubernetes' opinions, you have to fight it. To give you an example, service versioning is not something kubernetes knows about. If you need it, you need to add it manually. You can easily automate this with consul, and make applications unaware of service versions. Reciprocal trust verification between ... things that communicate is another. Kubernetes provides nothing. If someone manages to inject a pod into your namespace, and randomly starts replaying requests, there's no automatic way to verify the caller's identity - you have to do this manually, at the application level, using a custom sidecar (if you're lucky, your service mesh has something to help you). If you use consul with nomad, you get these things out of the box.
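For context, the "out of the box" part looks roughly like this on the Nomad side once Consul Connect is enabled (a sketch; service names, image and ports are made up). Traffic goes through mutually authenticated sidecar proxies, and Consul intentions then decide which service identities are allowed to call which:

    # Sketch of service identity via Consul Connect in a Nomad job
    # (service names, image and ports are made up).
    job "api" {
      datacenters = ["dc1"]

      group "api" {
        network {
          mode = "bridge"
        }

        service {
          name = "api"
          port = "9090"

          connect {
            # Nomad asks Consul to inject an Envoy sidecar; traffic to and from
            # "api" is mutually authenticated with Consul-issued certificates.
            sidecar_service {
              proxy {
                upstreams {
                  destination_name = "billing"
                  local_bind_port  = 8081
                }
              }
            }
          }
        }

        task "api" {
          driver = "docker"
          config {
            image = "example/api:1.0" # made-up image
          }
        }
      }
    }

A rogue workload without the right Consul identity gets rejected by the proxies and intentions, rather than by code you had to write into the application.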

I believe your life is simple now because what you manage isn't overly complicated. Once you have thousands of distinct deployments, versioned, across several clusters, and thousands or even tens of thousands of pods running in tens of data centers, life with kubernetes becomes a lot more difficult. Of course, you can still manage, by automating everything, but automation, at that level, with kubernetes, is not an easy thing. Or at least that's my experience.


I think that is the point, though. If you're small and you want to focus on building things, it doesn't matter very much what it's running on; it's almost always going to be a better use of your time to make it someone else's problem.

If you're doing your own bespoke ops or you need to own more of the stack, then Nomad is a powerful choice.


> The main advantage of Nomad is Nomad.

Is it, really? It feels like that is its main disadvantage, in the sense that Kubernetes is a first-class citizen offered by practically all major cloud providers, while Nomad... Is anyone actually offering Nomad at all?


Managed Kubernetes exists because Kubernetes is so mind-bendingly complex.

Nomad, on the other hand, is pretty simple and easy to run, and a single binary. There's probably no benefit to having a managed offering.

That said, I wouldn't be surprised if HashiCorp add Nomad to HashiCorp Cloud Platform, which currently lets you deploy Consul and Vault to cloud providers via their system.


Managed Kubernetes exists for the same reason managed PostgreSQL, MySQL, Redis, Elasticsearch and so on and so forth exist.

Which is to say it's useful enough to a large number of people to make it viable as a product offering. Nomad would fail as a managed offering here, like Deis, Flynn, Convox and probably 100 other container management platforms that came before.

As a niche tool to manage sub-1000 boxen it's probably ok. But k8s has won the greater war.

Disclaimer: Worked on Flynn, still run it for my own personal stuff, but $DAY_JOB is all k8s.


You got that sub-1000 boxen wrong. Nomad is usually employed where the infrastructure is huge. IMO, K8s is driven by hype and marketing, not by technical merit.

Your enumeration of the managed things is very particular, in that it includes only stores. There's a reason the things whose purpose is keeping persistent state are the ones offered as managed services: reliably maintaining persistent state in the cloud is a lot more difficult than reliably orchestrating things that just have to run.


If public cloud providers offered Nomad, it would only be a question of time until managed Nomad was a thing.

Managed _something_ does not mean it's "mind-bendingly complex", but rather that people don't want to take care of it and would rather focus on their own stuff, like building applications.



Upvote, because that really is the main thing to consider when evaluating the actual needs of a project.

My actual need is that I deploy my application and I don't care where or how it is running. Most people should not even think about running k8s on their own. That should be the job of a service provider, and there should be a service provider for running Nomad.


Or, just hear me out: don't use k8s and avoid 90% of the problems in the first place.

There are some specific places where k8s can shine, but easy scaling (1000+ nodes) and low management overhead aren't among them.


In reality, this is always an illusion, IME. You always end up having to deal with the platform yourself. You can hire a team to do that dealing for you, but that's not cost effective.


I think nomad really needs a k3s equivalent.

The equivalent of k8s is not nomad, it's nomad+consul+vault

If you read through something like kubernetes-the-hard-way most of the really tricky parts are setting up the PKI infrastructure and bootstrapping etcd.

https://learn.hashicorp.com/tutorials/nomad/security-enable-... starts to look a lot like https://github.com/kelseyhightower/kubernetes-the-hard-way/b...

Vault is better than k8s secrets, but getting nomad working doesn't magically give you a working vault cluster.
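(For what it's worth, once Vault is wired into Nomad, consuming a secret from a job is fairly direct. A sketch of the task-level stanzas, with the image, policy name and secret path made up:)

    # Sketch: Nomad fetches a Vault token for the task (scoped to the listed
    # policy) and the template stanza renders the secret into an env file
    # before the task starts. Image, policy name and secret path are made up.
    task "app" {
      driver = "docker"

      config {
        image = "example/app:1.0" # made-up image
      }

      vault {
        policies = ["app-read"] # made-up Vault policy
      }

      template {
        destination = "secrets/app.env"
        env         = true
        data        = <<EOT
    {{ with secret "secret/data/app" }}
    DB_PASSWORD={{ .Data.data.password }}
    {{ end }}
    EOT
      }
    }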


> The equivalent of k8s is not nomad, it's nomad+consul+vault

This was my stumbling block.

Nomad alone is clearly easier than k8s, and if it alone will suit your needs then you are in luck, but I didn't find installing and configuring 3 separate services any easier than k8s alone.


We had to add vault to our setup even if we use kubernetes - kubernetes secrets are a joke. We don't use consul, but managing istio, or any other service mesh, on top of kubernetes, isn't any simpler than running your own consul. (In fact, I have found istio to be a lot more difficult to get to do what you need than consul's service mesh, and with less functionality.)


I personally feel HCL (or any specific thing) is a large disadvantage for any orchestration platform. Kubernetes is an API for infrastructure. This is why it uses json/yaml. It doesn't mean your configuration should use json/yaml natively.

You can easily write tools that generate json/yaml for you based on your needs. For most applications you're deploying you'll need:

1. A Deployment

2. A Service that points to that Deployment

3. An Ingress or similar

In some cases you'll want to manage persistent state on disk and you'll need a StatefulSet.

A good example of this can be seen at https://youtu.be/keT8ixRS6Fk?t=1317 and https://youtu.be/muvU1DYrY0w?t=581


> You can easily write tools that generate json/yaml for you based on your needs. For most applications you're deploying you'll need:

Can you with YAML? Helm tries, but isn't that great; Jinja (depending on how it's done: Jinja in YAML as in Ansible, or Jinja to YAML as in SaltStack) does an OK job, but fundamentally, managing a language that uses whitespace for structure with code is hard, and YAML certainly wasn't created to be managed that way.

JSON, I agree, you can easily convert your language of choice's data structures to it. But do you have to? It adds another layer of complexity, and you can have unexpected bugs (since you have a logic layer that creates the JSON, which could be wrong due to a logical error).

HCL is a DSL that needs learning and that few use outside of Hashicorp products, but IMHO it's not that hard (docs are great, tutorials are plentiful) and it's worthwhile to have data+logic in the same human- and machine-readable layer.

In any case, Nomad accepts JSON for everything (mostly for when non-humans interact with it).


Helm, in my personal opinion, is one of the biggest mistakes of the kubernetes ecosystem. Not embracing Jsonnet, Cue, Dhall, or even an in-language DSL like Pulumi has set the usability of kube back quite far.


Helm isn't that connected to the broader Kubernetes system. It was popular in the early days when Kubernetes knowledge was sparse and things were still being figured out. None of the official Kubernetes docs really mention or describe using helm, and I see more getting-started tutorials starting to avoid it too. The official kubectl tool supports a -k mode which applies Kustomize logic, and I'd argue Kustomize is probably the Kubernetes project's official way forward for app deployments.


Yet, the embedded kustomize version is years behind standalone.

https://github.com/kubernetes-sigs/kustomize/issues/1500


$DAY_JOB replaced helm with a custom tool that does the equivalent of kubectl apply but uses Jsonnet as the templating engine.

I intend to write an OSS version of it because Jsonnet really is probably the best fit here (though Cue/Dhall are interesting, I think Jsonnet is the easiest to get others to adopt).


I also did something like this a few years ago (with Go templating instead of Jsonnet), but we recently replaced it with Kustomize (https://kustomize.io/) as it worked quite well for the basic use case of getting shared Kubernetes configs to deploy code to different environments at different times.


The fact that trivial things like ingresses don’t have transformers is a PITA.


I've done the same in the past. The important thing is making it so it's easy to build up abstractions and to work in a sensible deployment flow.

kubectl is close but doesn't understand secrets/variables, the concept of clusters, or changing image tags programmatically. It's difficult to get right.


> kubectl is close but doesn't understand secrets/variables, the concept of clusters, or changing image tags programmatically. It's difficult to get right.

Isn't that the problem kustomize (which kubectl now has native support for applying) is trying to solve? I ask that honestly, because I haven't used kustomize in anger in order to compare its workflow to that of helm


I did the full gamut of helm 1/2 tiller/tiller-less and kustomize. Personally, kustomize is only marginally better than helm.

I need to check out kubecfg, I haven't used it yet.


My mistake. I meant kubecfg which is built off of jsonnet.


Yeah, this is all stuff this tool understood: multiple clusters, managing kube contexts, external secret storage or k8s secrets, native git integration, etc.

Definitely agree that it's difficult to get right but the authors of the tool definitely struck a really good balance.

Hopefully if I implement it myself I can do their original design and implementation justice.


No need to build it, there's already kubecfg: https://github.com/bitnami/kubecfg


We use this heavily at InfluxData to manage multiple clusters in multiple regions of multiple clouds. One of the maintainers of kubecfg recently joined InfluxData as well.


A team at my workplace uses Jsonnet for this problem and it seems to work very well. It's a nicely human-readable DSL for generating JSON in a way that respects JSON structures - you can construct lists of objects (deployments, services, etc.) by looping over templates and composing modifications.

That seems a lot better than either SaltStack's use of Jinja (using text-based templating on a structured format) or Ansible's (using YAML as syntax for an imperative programming language that has Jinja-evaluation functionality).


HCL is JSON compatible, and Nomad can load JSON natively.


I assumed that was a possibility but that still means "HCL" isn't a benefit since kubectl can load json from the CLI and from apiserver.


It can only load JSON via the API; the nomad CLI can't launch JSON jobs.


Yes it can.

"For machine-friendliness, Nomad can also read JSON-equivalent configurations. In general, we recommend using the HCL syntax."

https://www.nomadproject.io/docs/job-specification#job-speci...


But who would use the Nomad CLI for serious work? I use the Terraform provider for Nomad to launch all my jobs.


You may not use it for automation. But you usually do much exploratory work before doing the automation. That's where the various CLIs for all kinds of things come in really handy.


Me? It's handy to be able to dump -> modify -> load jobs when I'm poking at things. No one said anything about serious work, though, they said "load json natively".


A thousand times this.

With Kubernetes, you can even use one of the many native clients (and/or protobuf) which makes converting from something else a breeze, avoiding the JSON/YAML entirely. You get static typing, you can write tests, etc.


Well, it's the same thing with Nomad - there are native SDKs, protobuf and a classical JSON REST API for everything.


Unless you start messing with Operators (which are hot right now) for desired state configuration within your cluster, which can, theoretically, be written in anything, but realistically tend to be written in Golang.


I think operators are great and true to the k8s spirit. You describe your desired state (e.g. a postgres database) according to a CRD. And the operator will make your yaml definition become reality. I think the declarative approach is a big improvement over the imperative step-by-step setup. I rather like to describe how I would like things to be, instead of having to create the desired state myself.


Maybe. But operators are a relative pain to develop. I prefer the HCL approach. You can have modules instead of operators, and while HCL overall might be painful, it's less painful, IMO, than constantly changing operators. My team spends a significant amount of time chasing deprecations of operators and associated helm charts.


To be clear: I like operators! I was just responding to the parent post; JSON/YAML aren’t the only ways to express things in the Kubeverse.


Yep, exactly. KubeDB is a great example of this.


HCL alone is a good reason to prefer Nomad over any YAML mess that exists.


We use (and abuse) Nomad for Fly.io workloads. My favorite thing about it is that it's simple to understand. You can read the whole project and pretty much figure out what's going on.

We've written:

* A custom task driver to launch and manage Firecracker VMs

* A device plugin for managing persistent disks

The second is interesting – we looked hard at CSI, but the complexity cost of CSI is very high. The benefit is pluggable storage providers. We don't need pluggable storage providers, though.

Likewise, we don't need pluggable networking.

Device plugins are a really nice way to extend Nomad.


Cool. Related question - how has been your experience working with firecracker VMs, and what led you to that decision? (I’ve been eyeing them for a while to get faster startup times for autoscaling).


I looked into Nomad and my problem is having to run a minimum of six machines in a cluster for a few microservices. The argument could be made that we wouldn't need container orchestration for a few, but if we plan on scaling I don't want to have to deal with a fleet of servers. With ECS or EKS, I would only have to worry about the worker nodes, so it's less to manage.


You can totally do it on three machines, each running triple duty as consul, nomad, and worker machines.


Or, if reduced redundancy is accepted, on one or two (either one machine with everything running as both server and client, or two, with one running the Nomad, Vault and Consul servers and the other the clients).
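A sketch of the single-host variant (datacenter name and paths are arbitrary): one agent config lets the same binary act as both server and client.

    # agent.hcl - one node acting as both server and client (non-redundant,
    # obviously; names and paths are arbitrary).
    datacenter = "dc1"
    data_dir   = "/opt/nomad/data"

    server {
      enabled          = true
      bootstrap_expect = 1
    }

    client {
      enabled = true
    }

Started with `nomad agent -config=agent.hcl`, that's a working, if fragile, one-node cluster.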


Right, and it's not like there's actually a hard requirement for perfect uptime in a lot of settings.


Why do you have a microservices environment for “just a few”?


We'll have more but I can't justify running six from the get go.


From my experience nomad and other hashicorp tools required more learning how to use all the pieces together / figuring out edge cases / debugging.

K8s was definitely easier to setup.

I feel like nomad is appealing to people who don't really need orchestration but feel like they do.

It's perfectly fine to run apps with just docker. I have small projects on the side which are just a couple of servers with docker: one nginx container pointing at a few docker containers containing some services.

Once my needs grow, I'll just roll them over to k8s.


> From my experience nomad and other hashicorp tools required more learning how to use all the pieces together / figuring out edge cases / debugging.

> K8s was definitely easier to setup

How so? I can't imagine any production setup of Kubernetes from scratch (not managed by a cloud provider) being easier, initially or on day 2, than Nomad+Consul+Vault, but maybe I'm missing something.


Simpler options for k8s are available:

- Charmed Kubernetes (Juju)

https://jaas.ai/canonical-kubernetes

- MicroK8s

https://microk8s.io/


I wouldn't consider any of these for production.


As a cloud engineer in the UK, it's almost required these days to know K8s in one form or another. I wish it wasn't the case, but I think the battle is over.


Once you get to a place where you need to run really-really huge workloads, like thousands or tens of thousands of machines, kubernetes, hosted or not, will take the will to live away from you. It's not about the learning curve, or the convenience, it's that kubernetes gets slow to catch up on large workloads that need to change often. That kind of setup is where nomad + consul + vault + terraform shine.


Learning k8s is not a problem, but if you are faced with the decision to implement either k8s or Nomad at a company, it's worth evaluating both and what fits better for your team.

Recruitment should not be an issue since anyone who understands k8s should be able to grasp Nomad, and vice versa (although in the latter case, with much more effort).


I love nomad so much. It never really gets a fair shake IMO. The world needs both k8s and nomad and I hope it continues to grow and see more adoption.

And many years ago we used it as a feature in a product written in Go. We could just tap into the APIs directly. It is amazing for batch jobs.
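A sketch of what that side looks like (the schedule and command are made up, and it uses the exec driver, so there's no container involved at all):

    # Made-up periodic batch job using the exec driver (no container image).
    job "nightly-report" {
      datacenters = ["dc1"]
      type        = "batch"

      periodic {
        cron             = "0 3 * * *"
        prohibit_overlap = true
      }

      group "report" {
        task "generate" {
          driver = "exec"

          config {
            command = "/usr/local/bin/generate-report" # made-up binary
          }
        }
      }
    }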


I'm surprised that Nomad is gaining traction _now_, five years later. Nomad has always been "container orchestration, the easy HashiCorp way," but finding case studies from others using it has been challenging


I don't get why people are so afraid of kubernetes, the thing is pretty much hands off.


Honestly, my turn-off was seeing Google Cloud and later DigitalOcean providing managed versions of it. If they see value in doing that, I assume it's gonna be a pain in the arse to solo sysadmin it.

I later realised my blog and little side projects do not need a cluster, but that's another story. I was mainly looking at it to learn rather than to properly utilize it.

Completely off topic, sorry. Whatever happened to Flynn.io? That was my favorite of the clustery things but it just sort of fell off the internet

e: oh. Answered by visiting the site just now. "Flynn is no longer being developed." that is a shame.


I just started using CapRover (https://caprover.com/) for hosting some side projects, it was extremely easy to set up and appears to work similar to how Flynn.io used to.


It's really only in the last couple years that kubernetes has started to get tamed and be more approachable. For the longest time the only way to get a simple dev cluster up on your laptop was to make a VM with like 8-16GB of RAM and four cores dedicated to it--it was pretty outrageous. Nowadays kind, k3s, etc. make it a simple one line install and sip resources. But I agree, there is definitely a misperception that k8s is too complex.


I wouldn't call it a misperception. It does orchestration, service discovery, resilience, secrets management (which is a joke, but still) and whatnot. That's definitely a lot of complexity. Mixing it all together doesn't help with making it easier to manage, on the contrary.


Is it? If GCP are launching an even more managed version of their managed Kubernetes because the current one is too complex for some, I'd say it isn't that hands off, even with a managed service.

It's gotten drastically better, but updates are still serious work (case in point: all cloud providers bar Scaleway are usually multiple versions behind).


It's a thing of personal preference and requirements. I can easily see that there are requirements where Nomad is a better fit. However, I also have to say that Kubernetes has clearly won the battle and will be the safe choice for most companies for the years to come. It will be much easier to find knowledgeable DevOps and SRE workforce for k8s than for any other orchestrator alternative.


Any engineer worth his salt will be able to hit the ground running on both systems pretty quickly. The concepts are pretty similar.


> Any engineer worth his salt

... will build deep understanding and accumulate knowledge over the years. I won't deny that you could learn Nomad fast, but becoming an expert in some domain really takes some time.


It's the old Douglas Adams joke. The difference between a thing that can break and a thing that cannot break is that when a thing that cannot break breaks, there's no way to fix it.

I don't entirely know if this joke matches the reality of Nomad vs K8S, but that's probably the general sentiment. Bigger usually implies harder to fix.


> I don't get why people are so afraid of kubernetes, the thing is pretty much hands off.

To be fair, this time around the ones afraid of Kubernetes are the companies trying to sell a Kubernetes competitor.


To be clear, I (the author of the article) am not affiliated with Hashicorp in any way besides using some of their products (the free, open source versions).

I just used Kubernetes at work, inherited a poorly maintained cluster, and when the time came to change, due to very limited time available, we opted for Nomad. A few years in, I'm telling what I've learned (while still running a k8s cluster for home use, so I'm not entirely disconnected from the k8s ecosystem).


I see, I apologize for the unintended misrepresentation.

However, one of the things that made me jump to the assumption that the blog post was stealth advertising for Nomad was a few exaggerated claims that feel like quite a stretch in order to come up with anything in favour of Nomad over Kubernetes.

One of them, for example, is that

Nomad is supposed to be "significantly easier than microk8s or minikube or kind" to run locally as a dev environment. Well, anyone familiar with minikube is well aware that to get it up and running you only need to install it and kubectl, and quite literally just run 'minikube start'. Nomad agent is not only far more convoluted to get up and running according to Nomad's official documentation, but it is also explicitly stated that its use is highly discouraged. Don't you feel that you're misrepresenting Nomad's selling points?


I see.

> Nomad is supposed to be "significantly easier than microk8s or minikube or kind" to run locally as a dev environment. Well, anyone familiar with minikube is well aware that to get it up and running you only need to install it and kubectl, and quite literally just run 'minikube start'

It would appear that I wasn't up to date on minikube - last time I installed it (it was a few years back, and since then I've switched to microk8s for local stuff), it required a VM intermediary and driver, but that's no longer the case, direct Docker being supported. That's drastically easier than it was before, but there's still a slight advantage to Nomad (see below).

> Nomad agent is not only far more convoluted to get up and running according to Nomad's official documentation, but it is also explicitly stated that it's use is highly discouraged

How so? Download a binary, run `nomad agent -dev` and that's it, there's nothing convoluted. It's highly discouraged the same way minikube is - for production use. With `nomad agent -dev` you get a local ephemeral Nomad instance you can do whatever you want with, but should never use in production.

The small advantage I talked about is that minikube is a separate thing you have to download (and there's minikube vs microk8s vs kind); with nomad, it's the same binary as the CLI you use to interact with the cluster. So it would be like having `kubectl minikube start`, and that's it - you can go from working with a cluster to having a local one for testing in a single command. Very slight advantage, indeed, but still one IMHO.


I think you have to ask yourself - why are you running Nomad locally as a dev environment? It's indeed very useful to test running jobs on Nomad just to learn it, but to develop using Nomad would be defeating the purpose. Just develop your own code as usual, e.g. with a Python stack you end up with a docker container and on Java you end up with a JAR file you can run from Nomad.
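(For the JAR case, the task ends up looking something like this with the java driver - the artifact URL and JVM options are made up:)

    # Sketch: run a plain JAR with the java task driver (artifact URL and
    # options are made up); no container image to build.
    task "app" {
      driver = "java"

      artifact {
        source = "https://artifacts.example.com/app-1.0.jar"
      }

      config {
        jar_path    = "local/app-1.0.jar"
        jvm_options = ["-Xmx256m"]
      }
    }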


I'd consider Nomad if I'd need to manage my own orchestration infrastructure somehow.

My first option would be to hire a managed Kubernetes service. If that proves to be unreliable or not cost-effective in the long run, I'd then consider making this in-house and use Nomad instead.


If multi-region deployments weren’t an enterprise only feature, Nomad would be a lot more appealing for me since Kubefed is still nowhere near ready.


The absence of CRDs is a significant gap. They enable abstraction, composability and reusability. Even if Kubernetes may be more complex, deploying complex jobs on Nomad may become (much) more complex than the CRD-powered equivalent on Kubernetes.

I'm also not sure if Nomad has an equivalent of the listeners that you have on Kubernetes to implement reconciliation loops, i.e. the controller part of operators.


One of the reasons we chose nomad was because it had decent ipv6 support way before kubernetes (and I'm not sure if kubernetes' support is decent even nowadays). See my blog for more info: https://blog.42.be/2020/11/using-nomad-to-deploymanage-conta...


Or ECS for that matter.


If you're on AWS and don't intend to move, yep, it's a decent option.


Neither have ever really made me happy as a systems designer. They both have some massive drawbacks.


Could you list their main drawbacks as you see them?


Much better and easier to run/maintain than Kubernetes. I say this after using it for 5 years.



