
There are a lot of developments around using Kubernetes as an IaC platform for the reasons in your comment. The combination of a standard API model in CRDs + the controller model maps nicely to managing infrastructure and exposing resources to developers.

<https://crossplane.io> just graduated to CNCF Incubation, and each of the cloud providers is working on K8s controllers and code generators (like Amazon Controllers for Kubernetes, Google Config Connector, and the Azure Service Operator).
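
For a flavor of what the CRD-plus-controller model looks like in code, here's a minimal sketch using controller-runtime; the Bucket resource and BucketReconciler are hypothetical names, not taken from any of the projects above:

    package main

    import (
        "context"

        ctrl "sigs.k8s.io/controller-runtime"
        "sigs.k8s.io/controller-runtime/pkg/client"
    )

    // BucketReconciler drives some hypothetical cloud resource toward the
    // state declared in a Bucket custom resource.
    type BucketReconciler struct {
        client.Client // used to read/write the custom resource
    }

    func (r *BucketReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
        // 1. Read the desired state from the custom resource named in req.
        // 2. Compare it to the actual cloud resource (via the provider's SDK).
        // 3. Create/update/delete as needed so reality converges on the spec.
        return ctrl.Result{}, nil
    }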


Before you leave, I'd love to talk to you. I'm in STL and looking for engineers to work in a professional, supportive, and inclusive environment. steve@aster.is


We've been working on a directed graph execution engine called Converge https://github.com/asteris-llc/converge.

In this case the task resource http://converge.aster.is/0.5.0/resources/task/ might help, as it lets you build the directed graph out of scripts run by any interpreter (for example, Python or Ruby) instead of having to use the DSL.
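
To make the idea concrete, here's a toy sketch (in Go, not Converge's actual implementation) of what a directed-graph task executor boils down to: each task runs only after the tasks it depends on.

    package main

    import (
        "fmt"
        "log"
    )

    type Task struct {
        Deps []string     // names of tasks that must run first
        Run  func() error // the work; a task resource would shell out here
    }

    func Execute(tasks map[string]Task) error {
        done := map[string]bool{}
        var visit func(name string) error
        visit = func(name string) error {
            if done[name] {
                return nil
            }
            t, ok := tasks[name]
            if !ok {
                return fmt.Errorf("unknown task %q", name)
            }
            done[name] = true // a real engine would also detect cycles
            for _, dep := range t.Deps {
                if err := visit(dep); err != nil {
                    return err
                }
            }
            return t.Run()
        }
        for name := range tasks {
            if err := visit(name); err != nil {
                return err
            }
        }
        return nil
    }

    func main() {
        tasks := map[string]Task{
            "write-config": {Run: func() error { fmt.Println("writing config"); return nil }},
            "restart": {Deps: []string{"write-config"}, Run: func() error { fmt.Println("restarting"); return nil }},
        }
        if err := Execute(tasks); err != nil {
            log.Fatal(err)
        }
    }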


Nice, thanks for the pointer. Good to see templated shell calls, as these can be a powerful bridge between orchestration and execution.


You call this a configuration management tool on GitHub. Does that make it a competitor to Ansible, etc. as well?


Yes, we have designed it to deploy things like Kubernetes and Mesos clusters via integrations with Terraform and Packer.


We have started work on exposing HashiCorp Vault secrets via FUSE and Docker volumes. The expectation is that your containers will simply mount secrets at a path like /secret.

The project is brand new and we'd love to hear your feedback: https://github.com/asteris-llc/vaultfs
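
For a sense of what a read under /secret maps to behind the FUSE layer, here's a rough sketch using the real github.com/hashicorp/vault/api client; the path secret/myapp is illustrative, not vaultfs's actual internals:

    package main

    import (
        "fmt"
        "log"

        vault "github.com/hashicorp/vault/api"
    )

    func main() {
        // The client picks up VAULT_ADDR and VAULT_TOKEN from the environment.
        client, err := vault.NewClient(vault.DefaultConfig())
        if err != nil {
            log.Fatal(err)
        }
        secret, err := client.Logical().Read("secret/myapp")
        if err != nil {
            log.Fatal(err)
        }
        if secret == nil {
            log.Fatal("no secret at secret/myapp")
        }
        for k, v := range secret.Data {
            fmt.Printf("%s=%v\n", k, v) // a FUSE layer would expose these as files
        }
    }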


Nice! The FUSE mounting method for obtaining secrets is similar to how Keywhiz does this. Very cool and novel solution. Though, if you are unfortunate enough to still have Windows servers in your architecture, I think you're out of luck.


Thanks for sharing this!

With the mantl project https://github.com/CiscoCloud/microservices-infrastructure we feed Mesos task information into Consul.

We've looked a lot at load balancing and feel that rewriting HAProxy config files dynamically can lead to brittle behavior.

Our current setup is to use Traefik https://github.com/emilevauge/traefik to proxy Marathon tasks.

HAProxy 1.6 includes dynamic DNS lookups: http://blog.haproxy.com/2015/10/14/whats-new-in-haproxy-1-6/
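
For context, here's roughly what registering a task as a Consul service looks like with the real github.com/hashicorp/consul/api client; the service name, port, and health endpoint are made up:

    package main

    import (
        "log"

        consul "github.com/hashicorp/consul/api"
    )

    func main() {
        // DefaultConfig talks to the local agent on 127.0.0.1:8500.
        client, err := consul.NewClient(consul.DefaultConfig())
        if err != nil {
            log.Fatal(err)
        }
        // Register a (made-up) Marathon task as a service with an HTTP health
        // check; it then resolves as my-app.service.consul.
        reg := &consul.AgentServiceRegistration{
            Name: "my-app",
            Port: 31001,
            Check: &consul.AgentServiceCheck{
                HTTP:     "http://localhost:31001/health",
                Interval: "10s",
            },
        }
        if err := client.Agent().ServiceRegister(reg); err != nil {
            log.Fatal(err)
        }
    }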


That's a nice feature and we would need to look into it; however, DNS A or AAAA records do not contain port information, limiting us to the 'default' ports.


Thanks for bringing this up. We opened an issue to investigate: https://github.com/CiscoCloud/microservices-infrastructure/i...


Why you should care about this project: You wouldn't spend months building your own Linux distribution, so why spend time reinventing the wheel with Chef/Puppet/Ansible?

We're not building a PaaS. Like a Linux distribution, we're integrating open source building blocks (logging, service discovery, scheduling) so that you can focus on app development and analytics instead of sysadmin tasks.

I'll be giving a talk to the NYC Mesos user group this Wednesday June 17th: http://www.meetup.com/Apache-Mesos-NYC-Meetup/events/2229328...


I would really love to know more, as I'm investing quite a bit of my time in this stuff. I won't be in NYC (unfortunately); could you please share your slides afterward? Or maybe we could have a chat?


"When Mesos Met Consul" is a great topic for a talk, because I don't understand why you'd use both. Doesn't Mesos/Marathon remember what jobs it is running and where?


tl;dr: Mesos/Marathon run the tasks; Consul exposes the tasks in DNS.

Mesos and Marathon each store a different view of task state on the cluster.

How do we use this data to make it easy for jobs to find one another? Can we use this information to automatically configure things like load balancers?

This is where tools like consul and mesos-dns come in. They populate a DNS store, so that task-name becomes something like task-name.example.com. If you are running 10 copies of a service across different hosts, DNS will have 10 entries.

If a container moves to another system, DNS is updated on the fly.

We can embed health checks with consul, so that if a service is unhealthy, it gets pulled out of DNS.

Consul is also nice because the edges perform the checks (instead of a central server), so the load is distributed.
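
On the consuming side, a client can discover healthy instances through Consul's DNS interface with nothing but Go's standard library; SRV records (unlike A/AAAA records) also carry the port. my-app.service.consul is a placeholder name:

    package main

    import (
        "fmt"
        "log"
        "net"
    )

    func main() {
        // With empty service and proto, LookupSRV queries the name as-is.
        _, addrs, err := net.LookupSRV("", "", "my-app.service.consul")
        if err != nil {
            log.Fatal(err)
        }
        for _, srv := range addrs {
            fmt.Printf("%s:%d\n", srv.Target, srv.Port) // one entry per healthy instance
        }
    }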


KONG looks amazing; I hadn't heard of it before.

Here's how you would deploy it in microservices-infrastructure:

1. You'd deploy the Cassandra Mesos framework, giving you HA Cassandra. Instead of setting IPs, you'd connect to it as cassandra.service.consul.

2. You'd launch a Kong container in Marathon. It would show up in DNS as kong.service.consul for other apps to find, so you don't have to hard-code IPs in your config.
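
As a hedged sketch of step 2, submitting an app definition to Marathon's /v2/apps endpoint could look like this; the Marathon address and image tag are placeholders, and a real Kong app would also need its Cassandra connection settings:

    package main

    import (
        "bytes"
        "log"
        "net/http"
    )

    func main() {
        // A minimal (made-up) Marathon app definition for a Kong container.
        app := []byte(`{
            "id": "/kong",
            "instances": 1,
            "cpus": 1,
            "mem": 512,
            "container": {
                "type": "DOCKER",
                "docker": {"image": "mashape/kong", "network": "BRIDGE"}
            }
        }`)
        resp, err := http.Post("http://marathon.service.consul:8080/v2/apps",
            "application/json", bytes.NewReader(app))
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()
        log.Println("marathon responded:", resp.Status)
    }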

Edit: We've opened an issue to make this an example app https://github.com/CiscoCloud/microservices-infrastructure/i...


Yes, they open-sourced it about a month ago, I think. You made a lovely repo, and great idea for the PR!


The real problem is going from tutorial to something you would use in production. Throw in logging, security and service discovery and you can have a few engineers hacking away for months.

So I want to plug a project I've been contributing to: https://github.com/CiscoCloud/microservices-infrastructure

We're trying to make it super easy to deploy these tools. For example, every time you launch a Docker container, it will register with Consul and be added to HAProxy. The nice thing about using Mesos is that we can support data workloads like Cassandra, HDFS, and Kafka on the same cluster you run Docker images on.

We use Terraform to deploy to multiple clouds so you don't get locked into something like CloudFormation.


This is basically why Kubernetes exists: for all the plumbing, discovery, etc required on top of bare containers.

It still requires work to go from zero to production-quality stack, of course.


We like Kubernetes (and are looking to add it to our project), but our goal is to integrate building blocks that allow us to run many different types of workloads. Think of our project more like an Ubuntu for distributed systems.

Kubernetes may eventually spread out beyond Docker, but for today we need to support things like Kafka and Spark.

As others have noted, we've had things like CloudFoundry, OpenShift and Heroku, and these all-in-one frameworks tend not to extend outside their original domain.


You should look at Cloud Foundry again, particularly with the introduction of Lattice. It used to be tied to apps; now it basically thinks about tasks and processes in a completely generic way.


Our goal for this tool is to make it dead simple to deploy and run. The binary and a JSON config file are all you need to check a host. When you combine it with Consul, we can integrate service discovery with host monitoring and push checks out to the edge nodes instead of relying on something like Nagios.

We're using this tool to monitor cluster configurations and test deployments of https://github.com/CiscoCloud/microservices-infrastructure.
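
As a toy illustration of the binary-plus-JSON-config shape (not this tool's actual schema), a minimal host checker could look like:

    package main

    import (
        "encoding/json"
        "fmt"
        "net"
        "os"
        "time"
    )

    // A made-up config shape: TCP addresses that must be accepting connections.
    type Config struct {
        TCPChecks []string `json:"tcp_checks"` // e.g. "localhost:8500"
    }

    func main() {
        f, err := os.Open("checks.json")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        defer f.Close()

        var cfg Config
        if err := json.NewDecoder(f).Decode(&cfg); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }

        failed := 0
        for _, addr := range cfg.TCPChecks {
            conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
            if err != nil {
                fmt.Printf("FAIL %s: %v\n", addr, err)
                failed++
                continue
            }
            conn.Close()
            fmt.Printf("OK   %s\n", addr)
        }
        os.Exit(failed) // nonzero exit when any check fails, for easy scripting
    }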

