> The problem with infrastructure-as-code today (Terraform, CloudFormation, CDK, Pulumi, etc) is that it is not reusable. That is because implementation, configuration and interface are mixed up together.
I find this statement from the documentation[0] unfair, given that the "target" concept this introduces seems to be mainly based on Terraform modules to _reuse code and expose an interface_. Terraform has its problems, but this doesn't seem to be right.
At best, this seems to be a curated set of Terraform modules plus a managed CD pipeline execution SaaS. I get that it is supposed to simplify things, but it lacks documentation of what it will actually do to an AWS account (you'll still pay for it, after all), and it even provides documentation on how to drop "raw" Terraform into it. Why not go with Terraform directly then, instead of sending your AWS credentials to a SaaS?
Thank you for these points! I respectfully disagree :)
A raw Terraform module is quite hard to reuse out of context for someone who isn't familiar with devops / sysadmin concepts. What's a VPC? A security group? An ACL? Each service exposes a bunch of config options that won't make sense to people facing it for the first time. TF mimics the AWS interface, and it's more like a pilot's cockpit than a car interior: every tool imaginable is there, but you have to know what you're doing to use any of it.
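For a sense of what that cockpit looks like, here is a minimal sketch (hypothetical names, not a complete setup) of the raw resources you'd touch before a single container even runs:

```hcl
# Just getting a network to put anything in already means
# knowing what a VPC, a subnet, and a security group are.
resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_subnet" "public" {
  vpc_id     = aws_vpc.main.id
  cidr_block = "10.0.1.0/24"
}

resource "aws_security_group" "web" {
  vpc_id = aws_vpc.main.id

  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```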
Targets, on the other hand, expose only high-level concepts. How many services? Is it a container or a function? Database enabled or disabled? Got it, starting the build. More like a car interior or a phone UI, which you can figure out by doing.
The current implementation of Targets is very simplistic. It does the job, but not much more. In Targets v2 we are planning to introduce proper dynamic generation with a "stack state" API that would allow creating truly encapsulated, smart components that adapt to any number of environments.
I'm not sure I get your point - Terraform modules[0] are a generic way to encapsulate a set of Terraform resources. Which variables you expose to the outside is up to the module developer - you can expose high-level variables like "service type" and "add a database" in your module as well, with no need to understand VPCs, security groups, or ACLs either. Whether there are high-quality Terraform modules that do that is a separate question; that's why I think your service might still be of value _if those modules you maintain are of high quality and do reasonable things, which I haven't verified_.
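As a minimal sketch (the variable names are hypothetical, not from any published module), such a module's interface could be nothing more than:

```hcl
# variables.tf of a hypothetical "service" module:
# callers see only these high-level knobs.
variable "service_type" {
  description = "How to run the workload: \"container\" or \"function\""
  type        = string
  default     = "container"
}

variable "enable_database" {
  description = "Whether to provision a managed database for this service"
  type        = bool
  default     = false
}
```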
Maybe you have great ideas for this target concept, but the claims in your documentation[1] that this is new, and the implication that Terraform isn't capable of it, don't hold up:
> it describes a particular architecture that can produce many possible variations for a wide variety of stacks, depending on configuration.
You can do exactly that with Terraform modules, today, no Digger needed.
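Consuming such a module is equally high-level (again a sketch; the module path and variables are hypothetical):

```hcl
# Caller's stack: none of the VPC / security group / ACL
# details leak across the module boundary.
module "api" {
  source          = "./modules/service" # hypothetical local module
  service_type    = "container"
  enable_database = true
}
```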
Thanks again! This viewpoint has a lot of merit for sure.
Please let me defend my claim about Terraform's capabilities though.
The real question isn't whether doing X is possible with TF; it's whether it's likely to be done in practice.
I am speaking from my own experience as a former front-end dev, and making a bold assumption that there are many others like me. Whenever I'm using Terraform, even ready-made modules, I find myself thinking of things that I neither want nor need to be thinking about. Most of my brainspace is occupied by frontend intricacies; however, I still want control of the entire stack. The further some tool is from my primary competence, the less capacity I have for details about it. I want my webapps and containers to work somewhere, that's all. But when I'm facing a problem - a specific problem like autoscaling or load balancing - I also want it to be solvable, and solvable in a way that doesn't go against industry best practices. Because today I may have a team of 3, but in a couple of years that may be a team of 300. I don't want to have to rebuild from scratch halfway through. But I also don't want to waste time building something future-proof on day 1.
I get what you're saying and I think it's a valuable discussion to have (I remain skeptical about whether handing off your infrastructure design is a good idea, though as someone working in the space I might be biased), but that's not really my point, to be honest.
I think the documentation makes several technical claims (in the quotes I've provided) that are factually false. You're agreeing that it CAN be done with Terraform. Best practice isn't what the documentation discusses; it claims that reuse isn't possible at all.
Granted, I'm not your target audience, but I would recommend a) rephrasing those claims so they're closer to the truth, and b) starting to document the architecture of your targets and the quality of your Terraform code (does it pass tfsec checks, for example?).
If someone asked me to review this product for their startup, I would primarily see Terraform modules with unknown quality or architecture.
It's also 100% untrue of Pulumi, which, by virtue of using general-purpose programming languages, allows interfaces to be defined in a fashion completely decoupled from their implementation.
Thank you!! Perhaps what we mean by "implementation" is different here. We should probably make it more explicit in the docs.
What I mean by "interface" is "My stack needs infrastructure for 3 containers and 2 webapps and container A needs a Postgres DB and container B needs a queue"
In today's IaC, including Pulumi, you actually need to specify _which particular_ way of running containers to use, with all the configuration details. Same for the database. That's implementation. Switching languages doesn't make it any simpler.
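To make that concrete, here is a sketch of the kind of implementation detail this forces (a fragment, not a complete configuration; the names are made up):

```hcl
# "Run a container" in raw IaC already means choosing an
# orchestrator, a launch type, networking, and so on.
resource "aws_ecs_service" "container_a" {
  name            = "container-a"
  cluster         = aws_ecs_cluster.main.id         # cluster defined elsewhere
  task_definition = aws_ecs_task_definition.app.arn # task def, IAM, and
  desired_count   = 2                               # networking omitted here
  launch_type     = "FARGATE"                       # one of several ways to run it
}
```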
Practical example:
The exact same stack can be run on one EC2 box via docker-compose, or on a Kubernetes cluster with managed databases. Same interface, different implementations. What Digger accomplishes is letting you swap implementations at any time, as long as the interface stays the same.
Switching languages does not make this simpler. Switching the _implementation_ of an interface does. For example, I could implement a "queue" interface three times - once for Confluent Cloud's Kafka, once for Kinesis and once for EC2 instances that run OSS Kafka. The interface remains stable, the implementation changes. This can also be done across clouds.
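In the Terraform terms used earlier in this thread, the same idea can be sketched like this (module paths are hypothetical; in Pulumi it would be a language-level interface with three implementing classes):

```hcl
# The "queue" interface stays stable: callers pass the same
# variables no matter which implementation backs it.
module "orders_queue" {
  # Swap the source to change implementations without touching callers:
  # ./modules/queue-confluent-kafka, ./modules/queue-kinesis,
  # ./modules/queue-ec2-kafka (all hypothetical)
  source = "./modules/queue-kinesis"
  name   = "orders"
}
```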
I think it's worth you doing some more research into what Pulumi opens up before using it as an example like this in marketing material.
[0] https://learn.digger.dev/overview/how-it-works.html#technica...