
Hi everyone! I’m excited to announce general availability of Amperity’s observability library `ken` and its companion, `ken-honeycomb`. We’ve been using this code internally for a few months now and it has dramatically improved our insight into how our systems operate. I’ve recently open-sourced the code and it’s ready for use!


This looked very cool at the recent online meetup where it was presented!


was that meetup recorded?


I think it was. I looked on Amperity’s YT channel and they used to post Seajure recordings so maybe it’ll turn up there? @U8XJ15DTK


It was recorded, I’ll ping about getting it uploaded


ken looks awesome! I really like the API and the docs. Thanks for sharing 🙂 I was looking at a couple of other options, but having a generic solution with ken is much nicer! Building an integration with OpenTelemetry wouldn’t be too much effort either if someone would like to use that as a backend. Looking forward to giving ken a try for some project soon!


Awesome - yeah, we wanted something decoupled from specific sinks, because we also pipe ken events to other places, for example our structured JSON logs. Plus, when doing local development it’s super useful to subscribe a pretty-printer and get a dump into your console, or a simple local file appender for later investigation.
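A minimal sketch of the local-development setup described above. The `ken.tap/subscribe!` function name and its argument shape are assumptions drawn from this description, not a verified API, so check the ken README before use:

```clojure
;; Sketch only: subscribing local sinks to ken's event stream.
;; `ken.tap/subscribe!` is an assumed name; verify against the real library.
(require '[clojure.pprint :as pprint]
         '[ken.tap :as tap])

;; Pretty-print every event to the console during development:
(tap/subscribe! ::console-printer pprint/pprint)

;; ...or append events to a local file for later investigation:
(tap/subscribe! ::file-appender
  (fn [event]
    (spit "ken-events.log" (prn-str event) :append true)))
```

The same subscription mechanism is what lets ken stay decoupled from any one backend: the Honeycomb sender, a JSON logger, or a console printer are all just subscribers.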


have you seen mulog? looks similar to ken at a glance


Yeah, it was announced a bit after we’d built and started using ken internally - definitely a lot of similar ideas

James Carr 20:06:26

Really cool. Like @U8ZN5EHGU mentioned, otel integration would be 👑


@U8XJ15DTK: I’ve spent some time on and off in the last few months looking at mulog by @U0LCHMJTA. Starting July, I’m going to spend focused time on instrumenting our code to emit events and moving away from file-based logging. Based on my reading so far, my plan is to test with OpenTelemetry-compliant instrumentation libraries and use an OpenTelemetry collector as a side-car. Some questions:
1. Is this how you are using ken internally as well? (seeing as Honeycomb is an OpenTelemetry-compliant sink)
2. Have you done a comparison of ken and mulog? (Sorry for the lazy question, but it would be so convenient 😛 to get the comparison from the library authors themselves 🙂 rather than do the work.) On a cursory glance, the two projects look similar. What should be the criteria for choosing between them, in your opinion?


1. `ken-honeycomb` currently uses libhoney-java (the Honeycomb SDK) to send events. I don’t think it’s a generic OTel option, unfortunately. I took a quick look at io.opentelemetry/opentelemetry-api and it’s not immediately obvious how to send spans directly, but maybe there’s a lower-level interface available.


I haven’t used mulog directly, but I took a quick look through the code.
• It seems to have a good number of integrations for various destinations already, so if you’re looking for something that integrates with your system out-of-the-box, that might be a better bet for you.
• The high-level APIs are pretty similar - u/log is like ken/observe, u/trace is similar to ken/watch. I don’t see direct equivalents for the annotations in ken, so if you want to enrich events during processing, ken might be more suited.
• It looks like the context collectors in ken are a bit more flexible - mulog has a global context and a local context, whereas you can use arbitrary functions in ken.
• A lot of our code at Amperity is written with async constructs (using Manifold) so it was important that the observability library be natively compatible with that. It doesn’t look like mulog has async support, or if it does I haven’t found it.
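A rough side-by-side of the analogous calls mentioned above. The mulog forms follow its documented API; the exact ken argument shapes here are assumptions based on this thread, so check the ken README before relying on them:

```clojure
;; mulog vs ken, roughly analogous calls.
;; mulog forms match its documented API; ken shapes are assumed.
(require '[com.brunobonacci.mulog :as u]
         '[ken.core :as ken])

;; Point-in-time event:
(u/log ::order-placed :order-id 123 :total 49.95)  ; mulog
(ken/observe ::order-placed {:order-id 123})       ; ken (assumed shape)

;; Traced span wrapping a body of work:
(u/trace ::process-order
  [:order-id 123]
  (process-order 123))                             ; mulog
(ken/watch ::process-order
  (process-order 123))                             ; ken (assumed shape)
```

The structural similarity is why the two look alike at a glance; the differences called out above (annotations, context collectors, async support) only show up once you go beyond these entry points.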


Thank you, I’ll look into how ken works with async code (as well as mulog). I haven’t experimented with instrumenting async code yet, but we have some code paths which are async and we may have more in the future.


Hi, μ/log’s objective is to capture tracking information from applications and safely dispatch the data to third-party systems. It can be connected to any backend (Elasticsearch, Kafka, Prometheus, Zipkin, Jaeger, CloudWatch, and more). It has only one dependency (ring-buffer) and it is extremely fast (~300 nanoseconds).

It doesn’t prescribe a format or a specific tool/backend, but is a general-purpose library to capture and dispatch quantitative/qualitative measures to other tools. If what you’re looking for is the data for further processing/search, then μ/log is the tool for you. I’ve a ton of tooling built on top of μ/log (some of which I might open-source at some point).

The point is that with closed-system libraries you get the pretty graph, and that’s all. With μ/log you get raw data which you can use to get the pretty graphs, but also to create custom-built alerting, automation/auto-scaling based on specific conditions; you can use the info for qualitative measures, and store/keep the data in deep storage like S3 (very cheap) and query it with Athena. SaaS tools become very expensive when the amount of data increases. With μ/log you can customise your sampling policy to tailor your needs and keep using the datastore that you already use and know to query the data.
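For the curious, getting started with μ/log looks roughly like this, following its documented quick-start (the `:console` publisher prints events locally; other `:type` values dispatch to the backends listed above):

```clojure
;; Minimal mulog setup, per its quick-start documentation.
(require '[com.brunobonacci.mulog :as u])

;; Start a publisher; :console prints events to stdout.
;; Swap :type for a backend publisher (e.g. Elasticsearch, Kafka)
;; to dispatch the same events elsewhere without changing call sites.
(u/start-publisher! {:type :console})

;; Emit an event; extra key/value pairs become event attributes.
(u/log ::system-started :version "0.1.0")
```

Because publishers are configured separately from the `u/log` call sites, the raw-data-first workflow described above (S3 storage, custom alerting, sampling) is a matter of choosing or writing a publisher.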