I would think that by comparison to image models synthetic data would be relatively easy to generate for audio model training. I’m curious then why it continues to be so difficult to build a nearly flawless audio separation model. Is synthetic data being widely used? Is it just too hard of a problem to train even with this data? I don’t have a good sense of what the most challenging aspects are of audio models.
Unlike images, audio signals are time-dependent and have complex temporal dynamics, making it more challenging to generate realistic synthetic data that captures the nuances of real-world audio. Beyond that, the complex nature of audio signals, the scarcity of high-quality training data, and the subjective evaluation of audio quality collectively contribute to the ongoing challenges in building near-flawless audio separation models.
It seems to me (at the moment) that the 'engineering' of "prompt engineering" happens at the level of taking a word prediction model AI and turning it into a Q&A AI, and the art of decomposing requests into state and behavior that can produce an A from a Q. Also the chaining of prompts, self prompting, etc.
It's functional composition that's interesting. A system of prompts. Not the phrasing of a single question, no matter how clever.
Agreed, there’s a lot more design space to explore with prompt chains/flows. But a single prompt requires design too, especially if it’s dynamically generated. You have to choose what information to provide, frame the question or task in a way that elicits good answers that you can also extract from the completion, and stay within the token limits (32k tokens is a lot, but expensive and still small for some tasks).
This article only covers the simplest type of prompting you might do.
In Go that quote is literally true. If you do nothing on a new computer but download Go you can immediately build a statically linked application that can fetch from an https endpoint. There are no implied additional steps.
I don't believe this can be said of the other languages. I know that C, C++ and Rust do not include https in their standard libraries, so while they can be made to compile statically and use a library that provides https functionality, those are additional steps that must be taken by the developer. It is the responsibility of the developer to choose the correct source and version of the https library to use. This will also include understanding and setting any additional compiler flags that the library may require, setting any optional defines or other library configuration settings, and making the appropriate changes for every platform they wish to build for.
I really hated this feature at first, but it's one of my absolute favorite things about Go now. In general Go communicates more information than other programming languages without nearly as much visual clutter. When reading code one rarely needs to check elsewhere to fully understand what is intended.
Then why not assume it is the static linking that is trivial? In isolation each item is trivial in some context; it's the exclusive set that is non-trivial, or at least uncommon.
The fact https is included in the standard library means that you can give a new user a hello world tutorial that includes producing a web server. It's a huge boon to productivity in a programming language to have a default path for such libraries.
I also work in C++, and it is infuriating the amount of time that must be spent sorting out the correct libraries for all the various aspects of an application one is not inclined to write themselves. People who don't fully grok the Go ecosystem overlook this cost when they claim that you can do the same thing in some other language. What they are missing in the subtext is the fundamental quality of life improvement.
> In isolation each item is trivial in some context; it's the exclusive set that is non-trivial, or at least uncommon.
I'm not just taking it in isolation for no reason. If you have static linking, you basically have HTTPS by consequence.
> The fact https is included in the standard library means that you can give a new user a hello world tutorial that includes producing a web server. It's a huge boon to productivity in a programming language to have a default path for such libraries.
I think you're taking our discussion as if it were about if the language is cool or not. I never argued against that.
> I'm not just taking it in isolation for no reason. If you have static linking, you basically have HTTPS by consequence.
Can you clarify what you mean by this because to me there is literally nothing about static linking that implies https as a consequence. The point about Go is centrally not that it uniquely has access to an https library.
The point is that it is included. This may not at first appear all that noteworthy, but this is a substantive quality of life improvement. The standard library not only provides a very large set of common functionality; it is also packaged with the distribution, works on all the platforms supported by Go without user intervention, and is bound in lock step to the release version of the compiler.
The convenience of building network clients in Go is heavily implied. Strictly you can say that assembly code can make standalone binary HTTPS endpoints, but that would be a bit silly.
To be sure you might write such a program in assembly, but you’d be doing it for fun or aesthetics. No engineering manager would commission such a thing.
Though of course this is inviting the trading maniacs to tell stories of hyper optimized clients…
HTTP doesn’t need to be in stdlib. Java and Python each had one but everyone long since switched to third-party reimplementations with better APIs. And HTTP will probably be replaced over the next twenty years, like FTP and Sun RPC and CORBA before it.
Ok, but it's not either Go or assembly. There's other languages. I listed 6 in a close-by comment. I wonder if Go's much more convenient at network clients than, say, Haskell or Common Lisp.
Well that's not on him man. You are the one calling him out for being unaware of static linking, when you don't even seem to understand the full set of pros he listed in his first sentence.
You're not replying to the same person. In a separate comment, I asked for a clarification on that specific part of their comment. The other pros, they're not part of what I asked about.
In the comment of yours I replied to, I'm just pointing out that you're bringing up something that's not under discussion.
Sorry for the misattribution, my mistake. Nevertheless, the fact remains that https is a part of the standard library and one of the elements that the op finds unique about Go. It is unambiguously part of the conversation, and the ostensibly negative comment that focuses on static linking is missing the point.
But I didn't even make the assumption that I was right on that understanding. I simply asked for more detail. Which was so that either I would learn something new, or the parent would.
> https is a part of the standard library and one of the elements that the op finds unique about Go.
Where do they say that it being part of the standard library is unique? I don't even see the words "standard library" mentioned.
EDIT: Look, this back and forth seems kind of pointless. This started with a statement about Go being unique for being able to be statically linked and talk HTTPS. If a language can be linked to compiled libraries, static or not, it's surely able to at least use libcurl, so I and the other commenter you replied to took it as Go is unique for being able to link statically. I assumed they meant, ignoring C/C++, static linking is rare, which I found interesting. I found other languages and asked for clarification. That's all.
I just addressed this question more directly on your other comment.
I do not agree that it is incumbent upon a speaker to anticipate the listener's knowledge, so I do not think it is a reasonable expectation that every qualifier be included. It's simply not practical. But I do think this is an interesting conversation you bring up.
Why would you need a Makefile? You have to run helm to apply helm charts, so how is `kubectl apply -f .` any more complicated than that?
The entire existence of helm is superfluous. The features it provides are already part of Kubernetes. It was created for people who understand the package manager metaphor but don't understand how Kubernetes fundamentally works.
The metaphor is wrong! You are not installing applications on some new kind of OS. Using helm is like injecting untracked application code into a running application.
At best helm is just adding unnecessary complexity by re-framing existing features as features that helm adds.
In reality helm's obfuscation leads to an impenetrable mess of black boxes that explodes the cost and complexity of managing a k8s cluster.
First off if you are packaging and/or publishing apps using helm charts, stop it!
There is purpose to the standardization of the Kubernetes configuration language. Just publish the damn configuration with a bit of documentation... you know, just like every other open source library! You're building the configuration for the helm chart anyway, so just publish that. It's a lot less work than creating stupid helm charts that serve no purpose but to obfuscate.
Here are your new no-helm instructions:
We've stopped using helm to deploy our app. To use our recommended deployment, clone this repo of yaml configs. Copy these files into your kubernetes config repo, change any options you want (see inline comments). Apply with `kubectl apply -f .`, or let your continuous deployment system deploy it on commit.
A problem, however, is that subscriptions are desirable revenue streams regardless of ongoing operating costs. A product I buy is a largely concrete exchange of value, whereas a subscription will (at best) pay for future value I may not have chosen to buy.
I notice that you feel bad for the devs of the podcast app, but not so bad as to give them the word of mouth marketing they no doubt hoped would be the fair trade value of early lifetime memberships.
The problem is that few people seem to understand the infrastructure as code concept, and essentially break the core k8s declarative architecture with imperative workflows that look just like the bash script install insanity we left behind. Workflows that are encouraged by tools like Helm, and by examples that create k8s objects on the fly without even creating, much less retaining, the "code" part of IaC.
It turns a tool that escaped the tyranny of endlessly mutating blessed servers with immutable, contained services and unified declarative life cycles, back into an imperative mess of magical incantations that must be spoken in just the right way.
Kubernetes is simple when used as designed, but staggeringly complicated when forced into the mutable imperative workflows it was expressly designed to prevent.
Mind expanding a little on your complaints about Helm? I’ve only used Helm as a templating solution (and even then only to differentiate between local, staging and production), so I’m curious what problems I have to guard against.
Think of Kubernetes like a single application. The config files are the source for that application; the running cluster is the compiled application running on the user's computer. By default Helm injects more "compiled" code, unrelated to your application's source, into the running application. Allowing any tool to alter active cluster state diffuses your single source of truth, your source code, into multiple sources of truth which will not remain in sync with your source unless great care is taken. Moving in sync matters, because that is how you roll back to a known good state when things break.
If you are using Helm to generate source code for your application, you still have the added complexity of an additional build step, but at least you can choose to add the generated code to your app in a way that tracks with the rest of your code.
Also, Helm chart authors are of varying skill levels, and even skilled ones necessarily make incorrect assumptions about your deployment environment. It takes a lot of additional code in a Helm chart to support more flexibility, so it often gets ignored, and you are left with a black box that doesn't quite do what you'd want it to do.
Adding black boxes on top of black boxes is not a good way to abstract complexity. Helm does nothing more than any template engine does, yet requires me to trust not only the competency of some random chart author but also that they will correctly account for how my k8s environment is configured.
When I inevitably have to debug some deployment, now I'm digging through not only the raw k8s config, but also whatever complexity Helm has added on to obfuscate that k8s config complexity.
Helm is an illusion. All it does is hide important details from you.