Hi everyone, I'm a co-founder / CTO of LogDNA. We were in Y Combinator's W15 batch and launched our cloud logging platform last year (https://news.ycombinator.com/item?id=11074537)
From the beginning, we’ve wanted to make it possible to embed logs (a lightweight log viewer with live tail) on any web page, and we’ve just released this! Let us know if you have any feedback or cool use cases.
So far we’ve seen these pretty awesome uses from early testers:
1) custom internal dashboards with 1 embedded log viewer tab per internal app
Yes, that's true for many things in Kubernetes, but it's less common in the Kubernetes logging space. Many setups still rely on Fluentd as a parsing filter first to extract metadata like pod name or container name, and most logging integrations require you to edit your yaml in some way before they work. Ours is almost copy-pasteable, provided you have a LogDNA API key. :)
Based on user feedback, we're happy to announce our super easy Kubernetes integration. No more wrestling with Fluentd configs, fiddling with Elasticsearch knobs, or following 30-step guides that have you cut and paste other people's configs.
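For context, a node-level logging agent on Kubernetes is typically shipped as a DaemonSet that mounts the node's log directory. This is only a hypothetical sketch of that pattern — the resource names, image, env var, and secret name below are placeholders, not LogDNA's actual manifest:

```yaml
# Hypothetical logging-agent DaemonSet (placeholder names/image, not LogDNA's real yaml)
apiVersion: extensions/v1beta1   # DaemonSet API group in Kubernetes ~1.4
kind: DaemonSet
metadata:
  name: logging-agent
spec:
  template:
    metadata:
      labels:
        app: logging-agent
    spec:
      containers:
      - name: logging-agent
        image: example/logging-agent:latest   # placeholder image
        env:
        - name: AGENT_API_KEY                 # hypothetical env var for the API key
          valueFrom:
            secretKeyRef:
              name: agent-key                 # e.g. created via `kubectl create secret`
              key: agent-key
        volumeMounts:
        - name: varlog
          mountPath: /var/log
      volumes:
      - name: varlog
        hostPath:
          path: /var/log                      # the node's container log symlinks live here
```

Because a DaemonSet schedules exactly one pod per node, one agent per node is enough to pick up every container's logs on that node, regardless of namespace.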
We're looking for feedback on how we can improve this integration. We currently extract Kubernetes metadata: pod name, container name, container ID, and namespace.
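For anyone curious where that metadata comes from: the kubelet symlinks each container's log under `/var/log/containers/` with a filename that encodes it, roughly `<pod>_<namespace>_<container>-<container-id>.log`. Here's a small sketch (not LogDNA's actual code) of parsing it:

```python
import re

# The kubelet names container log symlinks like:
#   <pod-name>_<namespace>_<container-name>-<64-hex-container-id>.log
# Pod and container names can contain dashes but not underscores,
# so underscores are safe field separators.
LOG_NAME = re.compile(
    r"^(?P<pod>[^_]+)_(?P<namespace>[^_]+)_"
    r"(?P<container>.+)-(?P<container_id>[0-9a-f]{64})\.log$"
)

def parse_k8s_log_filename(filename):
    """Extract pod, namespace, container name, and container ID, or None."""
    m = LOG_NAME.match(filename)
    return m.groupdict() if m else None
```

So a file like `web-0_default_nginx-<64 hex chars>.log` yields pod `web-0`, namespace `default`, container `nginx` — no Fluentd filter chain needed just to recover the metadata.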
Feel free to try it out. Happy to answer any questions!
I used the Docker integration of LogDNA before moving to Kubernetes.
The integration was set up with Docker Compose, one agent per Compose environment.
Logs containing `err` were marked red as errors in LogDNA, and I could trigger an alarm on them.
The same containers in Kubernetes, with the same logs, seem to be marked as `info` now.
I am not sure why this is, or how I can get the same behavior as before.
Is there a way to tell Kubernetes about stderr/stdout? Or how would I get LogDNA to treat a log line as an error instead of info?
It would be nice to improve the documentation a bit and mention that only one agent is needed per node, and that this agent collects logs across all namespaces.
As I understand it, you do not set any namespace for the secret/daemonset, which is fine with me.
But this way, I could end up installing multiple agents (one per namespace).
So better documentation on the best-practice installation for multiple namespaces would be nice.
Ahh yes, thanks for catching that. I just made a note that we collect from all namespaces currently.
Yeah, we now set our agent up without a namespace. During our beta, we originally had it set up inside `kube-system`, but 2 of our testers mentioned that the pod wouldn't install unless it was in `default`. So we moved it out of `kube-system`, though we still aren't sure what caused the issue, since it worked fine on our cluster. We were using Kubernetes v1.4, so it could've been an older-version issue.
Yeah, one of those mysteries of life. :/ One day we'll probably go back and try this on a few older versions to see if we can repro what happened. But yeah, right off the bat when we started our beta, 2 different people said they had to modify the yaml in order to get it to work (something we wanted to avoid altogether).
Awesome! We're pretty responsive on Intercom as well if you run into issues post install. But definitely share your feedback if you notice areas that can be improved.
Thanks for putting this out there Lee! This is huge!
How do you guys handle multi-line log entries? This is the hardest part with existing setups, and would really help us trace exceptions as they occur in real-time.
We currently treat each line separately and we can maintain line ordering okay.
We have thought about doing something like this before: if a line starts with a tab or a fixed number of spaces, a few times in a row, treat the run as one entry with \n's and store it as such. It would help with alerting and filtering alerts. We're just not 100% sure whether it would break anything.
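The heuristic described above can be sketched in a few lines — this is just an illustration of the idea, not LogDNA's implementation:

```python
def group_multiline(lines, indents=("\t", "    ")):
    """Fold indentation-continued lines (e.g. stack trace frames) into the
    preceding entry, so a multi-line exception becomes one log entry with
    embedded newlines."""
    entries = []
    for line in lines:
        if entries and line.startswith(indents):
            # Continuation line: append to the previous entry with a newline.
            entries[-1] += "\n" + line
        else:
            # A non-indented line starts a new entry.
            entries.append(line)
    return entries
```

With input like `["ERROR boom", "\tat foo()", "\tat bar()", "INFO ok"]` this yields two entries instead of four, which is what makes alerting on whole exceptions (rather than individual frames) possible. The risk mentioned above is real, though: logs that legitimately begin with whitespace (e.g. indented YAML or pretty-printed JSON) would get folded too.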
Hey! Lee here (CTO / co-founder). I wanted to say thanks for all the feedback when we launched in February. We're adding new features weekly, and if there's anything we can help with, I'm happy to answer any questions!
The GIF was from our docs page, which AFAIK doesn't allow JS.