I'm talking about the way I'm expected to provide metrics for my apps. Rather than exporting free-form JSON and then scripting Prometheus to understand it, I'm expected to use a custom client library to export the metrics. As for Kubernetes, you can only really use it with Prometheus, because of a not-insignificant amount of integration work on both sides. Basically, Prometheus is designed for vendor lock-in.
Prometheus scrapes the same text format as OpenMetrics 1.0, over 700 public exporters use this format, and there's a TON of other non-Prometheus software that consumes the exact same text format. Prometheus's biggest competitor, Datadog (which is not open source, mind you), consumes it too. I think even Grafana consumes it directly. It's becoming an IETF standard[0].
Would I have preferred JSON over a custom text format like this? Yeah. But to claim that an open source project like Prometheus, with effectively no business behind it at all, is using a text format like this to achieve vendor lock-in? That's quite a stretch.
> Prometheus scrapes the same text format as OpenMetrics 1.0
I find the GP's claims weird - I've written a relative ton of collectors, exporters, and translators and the format is pretty OK, not worse than most that came before it and better than lots - but I think this relationship is backwards. Prometheus "scrapes OpenMetrics" because OpenMetrics was formal documentation of what Prometheus was already doing for years.
I would not have preferred JSON. That an exposed metric is also a query, and is also pretty close to a schema definition, is nice: the exposed line http_requests_total{code="500"} 1027 is a hair away from the selector http_requests_total{code="500"} you'd type into the query box.
I apologize for my mistake, then. My understanding was based solely on reading the Prometheus docs on writing exporters - something I needed urgently for a job.
Include the client library if you want, but the wire format is ridiculously simple. I'll implement it from memory in an HN comment.
http.HandleFunc("/metrics", func(w http.ResponseWriter, req *http.Request) {
	// Set headers before writing the body; Write sends an implicit 200 OK.
	w.Header().Set("Content-Type", "text/plain")
	w.Write([]byte("# HELP foo_bar The numbers of foos barred.\n# TYPE foo_bar counter\nfoo_bar 42\n"))
})
The client library is largely to keep track of running counters (and gauges, histograms, etc.), with a small amount of code to actually report those metrics when scraped. It's a very simple format.
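For comparison, here's roughly what that looks like with the official Go client library (a minimal sketch; the metric name and port are made up):

package main

import (
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// The library keeps the running value; your code just increments it.
var foosBarred = promauto.NewCounter(prometheus.CounterOpts{
	Name: "foo_bar",
	Help: "The numbers of foos barred.",
})

func main() {
	foosBarred.Add(42)
	// promhttp renders every registered metric in the text format on scrape.
	http.Handle("/metrics", promhttp.Handler())
	http.ListenAndServe(":2112", nil)
}

Scraping /metrics then returns the same HELP/TYPE/value lines as the hand-rolled handler above.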
IIRC (it's been almost a decade since I used varz), having multiple label values would be a map of maps in varz. It got quite ugly if you wanted to have a number of dimensions.
The other commenters have pointed out that it _is_ based on another open standard, but admittedly one less common than, say, JSON. So you'll generally have to implement your own metrics producer or use a client library; that's true.
However, it's also a dead simple format, and you can probably implement it with a for-loop or a shell script (see the sketch below).
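For example, something like this (hypothetical metric and label names) is already a valid exposition endpoint:

package main

import (
	"fmt"
	"net/http"
)

// Hypothetical in-memory values, keyed by a label value.
var queueDepth = map[string]float64{"emails": 17, "webhooks": 3}

func main() {
	http.HandleFunc("/metrics", func(w http.ResponseWriter, req *http.Request) {
		w.Header().Set("Content-Type", "text/plain")
		// One name{labels} value line per time series; %q happens to match
		// the required quoting for simple label values.
		for queue, depth := range queueDepth {
			fmt.Fprintf(w, "queue_depth{queue=%q} %g\n", queue, depth)
		}
	})
	http.ListenAndServe(":8080", nil)
}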
Prometheus supported a JSON representation in the beginning. It was deprecated and removed before 1.0. The current exposition format was created because it cut CPU and memory for scraping metrics in half.
JSON, especially free-form JSON, is not a good format for efficient metrics monitoring.
The design consideration was not that it had to be simple to implement. It was that it had to be easy for a human to parse during an outage, when nothing else works.
There are a frustrating number of fundamental corner cases due to variance in floating-point text formats, and slightly more in the descriptor if you also need that. It's simple to implement an expositor for a limited set of cases. As usual, it's much more difficult to parse what you actually find out in the world.
Yeah, there are still some corner cases and implementation bugs out there. We spent months deliberating how to deal with some of these, because the base libraries in some languages just don't produce string output from IEEE 754 floats the same way.
IIRC, Java is different from Python, which is different from Go. So, really, this is a standardization-across-languages problem. We tried to work around these as best we could in the OM format.
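To make that concrete, a tiny Go illustration (sample values made up) of the default float-to-string behaviour the other languages don't share:

package main

import (
	"fmt"
	"math"
	"strconv"
)

func main() {
	// Values that commonly expose formatting differences between languages.
	values := []float64{42, 0.1, 1e21, math.Inf(1), math.Inf(-1), math.NaN()}
	for _, v := range values {
		// Go's shortest round-trip form prints 1e+21, +Inf, -Inf, NaN here;
		// Python's repr() gives 1e+21, inf, -inf, nan; Java's Double.toString
		// gives 1.0E21, Infinity, -Infinity, NaN.
		fmt.Println(strconv.FormatFloat(v, 'g', -1, 64))
	}
}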