#observability
2020-07-17
plexus08:07:27

Anyone know of a good alternative to Librato? I used to use their free plan off and on whenever I wanted to collect some ad-hoc metrics, as it was just so easy: create an API token and start sending in measurements with curl. But it seems they've gotten rid of their free plan.

plexus08:07:33

self-hosted open source stuff is also welcome if it's trivial to set up, or there's a hosted version that doesn't break the bank and is easy to cancel again

plexus08:07:28

I guess I should look into Graphite or Prometheus but I'm afraid it's going to turn into a rabbit hole

Jorin09:07:26

This question can be a rabbit hole in itself I guess 😅 If you are looking for a hosted solution with a free plan, I would definitely have a look at https://www.honeycomb.io/ It's simple to get started and it's more than just metrics. Metrics as bare numbers I find limiting pretty quickly - especially at small scale, where you can afford to keep more data around. But if you really want only metrics: we have quite a few Prometheus instances running and it's basically zero maintenance to run Prometheus plus Grafana. When going down the Prometheus route, you just need to be sure upfront that a pull-based solution is actually what you want.
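The push/pull distinction, sketched with made-up metric names and hosts: in a push model the application sends each sample to the server, while in a pull model the application only exposes its current values and the server fetches them on its own schedule (network commands commented out here since they need a running server):

```shell
# Push model (Graphite's plaintext protocol): the sender connects and
# writes "<metric> <value> <unix-timestamp>" lines to the server.
sample="app.requests.count 1 $(date +%s)"
# echo "$sample" | nc -q 0 graphite-host 2003

# Pull model (Prometheus): the app exposes an HTTP /metrics endpoint and
# the Prometheus server scrapes it on its own schedule, e.g.:
# curl -s http://localhost:9100/metrics
```

The practical consequence: with pull, Prometheus has to be able to reach every target over HTTP, which is what you need to be sure fits your setup.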

plexus09:07:34

I spun up a local Graphite through docker. The UI is very ugly but apart from that it seems to be exactly what I wanted, just feed in numbers and put them on a graph

plexus09:07:43

context is that I need to collect cpu and memory usage from firefox to help track down a performance issue, so it's really just a local ad-hoc short term thing where I need a little bit of visibility

Jorin09:07:15

For machine metrics I only have experience with the Prometheus node_exporter, which does a good job at this. It's straightforward to set up. If you really just want to query your data and not build dashboards or alerts, then you also don't need Grafana - the Prometheus built-in web interface is good enough for that. But I am sure there are other solutions that are potentially even easier to set up than node_exporter+prometheus. I am happy to learn more 🙂

Jorin09:07:54

Just to clarify: setting up node_exporter+prometheus means running two binaries and you are done. But there might be other tools that make this even simpler 🙂
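A sketch of that two-binary setup (the binary paths are placeholders for wherever you unpacked the releases; ports 9100 and 9090 are the defaults, and the binaries themselves are commented out here):

```shell
# 1. node_exporter serves machine metrics on http://localhost:9100/metrics
# ./node_exporter &

# 2. A minimal prometheus.yml telling Prometheus which target to pull from:
cat > prometheus.yml <<'EOF'
scrape_configs:
  - job_name: node
    static_configs:
      - targets: ['localhost:9100']
EOF

# 3. Run Prometheus; its built-in web UI is then on http://localhost:9090
# ./prometheus --config.file=prometheus.yml &
```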

plexus09:07:48

it's all trivial once you know how to do it 🙂 sometimes the steps are easy but finding the relevant bits in the docs is not. In this case I needed two snippets.

docker run \
 --name graphite \
 --restart=always \
 -p 1080:80 \
 -p 2003-2004:2003-2004 \
 -p 2023-2024:2023-2024 \
 -p 8125:8125/udp \
 -p 8126:8126 \
 graphiteapp/graphite-statsd
echo $metric_name $metric_value $(date +%s) | nc -q 0 localhost 2003
Then I went to the web interface and could see my data.

Jorin09:07:57

Nice! So Graphite has some machine metrics already built-in?

plexus09:07:41

you just feed it numbers; I hacked up some shell functions to gather memory/cpu
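I don't know what those functions looked like, but a minimal sketch in the same spirit (function names are made up; `ps -C` and the `rss`/`%cpu` columns assume Linux procps) might be:

```shell
# One line of Graphite's plaintext protocol: "<name> <value> <timestamp>"
graphite_line() {
  echo "$1 $2 $(date +%s)"
}

# Total resident memory (KiB) of all processes with the given name
mem_kb() {
  ps -C "$1" -o rss= | awk '{s += $1} END {print s + 0}'
}

# Total %CPU of all processes with the given name
cpu_pct() {
  ps -C "$1" -o %cpu= | awk '{s += $1} END {print s + 0}'
}

# Ship one sample to the Graphite container from the snippet above
send_metric() {
  graphite_line "$1" "$2" | nc -q 0 localhost 2003
}

# e.g. sample firefox every 5 seconds:
# while true; do
#   send_metric firefox.mem_kb "$(mem_kb firefox)"
#   send_metric firefox.cpu_pct "$(cpu_pct firefox)"
#   sleep 5
# done
```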

plexus09:07:20

I'm sure there are ready made scripts/daemons/agents that collect specific things, but that's already more than I needed

Jorin09:07:32

Ah I see 🙂 That's why I mentioned node_exporter as the thing to run in addition to Prometheus. But if you are looking for very specific things, you might be quicker doing your own thing than understanding the tons of metrics node_exporter gathers for you 😅

lukasz14:07:46

@plexus me and my team have been really happy with Grafana Cloud - very reasonable pricing for both logs and metrics, you can use either Graphite or Prometheus as the metrics backend and Loki is pretty decent for log aggregation

lukasz14:07:40

and for running locally I always use this image: graphiteapp/graphite-statsd