
@mario.cordova.862 You’re looking for subqueries


Since you’ve got UUIDs, skip the where-clause version and just put the subquery at the top.


Craft the query that returns the regex matches you want, but don’t try to put them in a list. The end of the query is something like |compose regex_result keywords

Mario C. 15:09:39

@futuro compose regex_result keywords is exactly what I was looking for, thank you!! 🙌

😄 4

And it will return each of the UUIDs OR’d together


Then you put that as the subquery for your overarching query.


trying to get docker-compose up with clojure and running into an issue of "could not find or load main class clojure.main"

  image: clojure:tools-deps
  volumes:
    - ~/.m2:/home/root/.m2:z
    - .:/code:z
  ports:
    - "45678:45678"
  working_dir: /code
  command: "clj -A:cider"


anyone have experience with that perhaps? It randomly worked a few times and now consistently reports not finding clojure.main


I am not overly familiar with docker compose, but my guess is that somehow the caches generated in the working directory are not matching up. You could check that by deleting the hidden .cpcache directory in /code, where the cached classpaths end up


In "The Language of the System", Rich Hickey praised the simplifying nature of queues and, in that context, said something like this:

It's super important, and I think one of the challenges for this approach is, invariably, people would like their service to do some more.  And making it do a little more also breaks the simple part.  So for instance, queues usually have very, very icky durability things, like, once they start to get into that space.  And all of a sudden, wow, this is not simple anymore.
Does that describe stuff like Kafka and RabbitMQ? In what way is that not simple? Any pointers to research this further would be highly appreciated.


well, Kafka is not simple at all

🙃 4

Kafka is one such thing where I'm not sure whether to call it simple or not.


On the one hand, it's just a distributed immutable partitioned replicated log.


On the other hand, it has a very elaborate and sophisticated implementation, and a bunch of stuff on top like KStreams and KTables, which admittedly are just libraries that no one forces anyone to use.


I'm unable to point out the things that Kafka complects, though.


IMHO, if one sticks to the majority use-case (produce a message onto a topic, consume from that topic), Kafka is really simple (from a programmer's POV). A lot of our technology is built on Kafka.


I like it quite a bit 🙂


sure producing and consuming may be easy, but the implementation is pretty complex and there are a million ways in which things may break


Implementation has less to do with simple


Simple is an interface thing


The simplest queue, I suppose, is one that lives in the same process as the consumer/producer. Make a list: a producer pushes things onto one end, consumers take them off the other. That’s pretty simple.
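That in-process version really is just a few lines. A minimal sketch in Java (class and message names are illustrative), using java.util.concurrent's ArrayBlockingQueue as the list:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class InProcessQueue {
    public static void main(String[] args) throws InterruptedException {
        // A bounded in-memory FIFO queue shared by producer and consumer.
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(16);

        // Producer: pushes things onto one end.
        Thread producer = new Thread(() -> {
            for (int i = 1; i <= 3; i++) {
                try {
                    queue.put("msg-" + i); // blocks if the queue is full
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });
        producer.start();

        // Consumer: takes things off the other end, in FIFO order.
        List<String> received = new ArrayList<>();
        for (int i = 0; i < 3; i++) {
            received.add(queue.take()); // blocks until an element arrives
        }
        producer.join();
        System.out.println(received); // prints [msg-1, msg-2, msg-3]
    }
}
```

Everything hard about real queues (durability, networking, replication) is absent here, which is exactly why it stays simple.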


indeed, but no durability at all

💯 4

Now let’s say you want it to be durable. Is it a DB? Is it just a plain file? What do you do if you can’t write things out? Suddenly you have lots of questions, and it’s not simple anymore.
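A toy illustration of where durability starts to bite: back the queue with an append-only file, and the questions show up immediately in the comments. This is a hypothetical sketch, not how any real broker does it:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.List;

public class FileBackedQueue {
    private final Path log;

    public FileBackedQueue(Path log) { this.log = log; }

    // "Produce": append one message per line to the log file.
    public void put(String msg) throws IOException {
        Files.write(log, (msg + "\n").getBytes(),
                StandardOpenOption.CREATE, StandardOpenOption.APPEND);
        // Real systems must decide: fsync after every write, or batch?
        // What do we tell the producer if this throws halfway through?
    }

    // "Replay": after a crash/restart, every message is still there.
    public List<String> replay() throws IOException {
        return Files.readAllLines(log);
    }

    public static void main(String[] args) throws IOException {
        Path log = Files.createTempFile("queue", ".log");
        FileBackedQueue q = new FileBackedQueue(log);
        q.put("a");
        q.put("b");
        // A "new" process opening the same file still sees both messages.
        System.out.println(new FileBackedQueue(log).replay()); // prints [a, b]
    }
}
```

Even this toy leaves the hard parts unanswered: partial writes, consumer offsets (nothing here marks messages as consumed), and concurrent access. That's the "very, very icky durability things" from the quote.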


yep, add in stuff like replication and leader election and it's pretty complex


sure, but if it’s a black box, then that’s not your problem


Or let’s say you don’t care about durability, but you want it to be networked. What’s the API to access it? What happens when a consumer/producer can’t connect to it? Etc


of course things have more essential complexity as you increase the scope of what you want from them


agreed, but speaking solely with my programmer's hat on, it's great. The setup/administration/configuration I trust others to know how to do 🙂


So @jaihindhreddy, to your original question, his quote makes more sense when you start from the simplest form of queue and then compare against things like Kafka et al.


Regardless of how we might feel about their ease of use, they are certainly not as simple as an in-process, non-durable queue.


I recently started using AWS SQS for the first time. It's durable in the sense that it keeps messages around for up to two weeks, but the fact that I don't have to think how it works is very very nice.


@futuro Agreed. But almost all of those questions seem to be arising from two fundamental requirements, long-term memory and distribution. If one is approaching a scale where a single machine is not enough, then one has to answer all those questions anyways. It then boils down to which set of answers (technology) is appropriate, for the use case at hand, or in general.


as a side note to this, I just finished reading Designing Data-Intensive Applications by Martin Kleppmann, and it is really, really good. It really helped me form better mental models around data stores and the guarantees they do or do not provide, and the implications for distributed computing, stream processing, batch processing, etc.


I also do think that ZooKeeper and Kafka are "simple" in the Hickey sense in that they are very generalized tools that solve a narrow task, without complecting it by pulling in concerns that are only tangentially related. The implementation is not trivial because the guarantees it provides are not trivial to implement, but at the end of the day it mostly does one thing, supplying a durable, distributed, strictly ordered queue.

👍 4

You've articulated my foggy thoughts exactly.


They are working towards getting rid of ZooKeeper, along with some other optimizations to make it more cloud-native. We also use Kafka a lot. Combined with Spring Boot, it makes some things a lot simpler. You basically get scaling for free, while it's still easy to test/run locally.

Ahmed Hassan 14:09:08

Scaling for free means?


@U26FJ5FDM you're using Java, right, and not Clojure + Spring Boot?


No, Java indeed. By "for free" I mean you don't need to code against a framework like RxJava/Vert.x. You still need to spin up multiple instances using the same group id.


Have you measured the price of the hardware needed for a highly available Kafka cluster (3+ nodes, for hardware-failure resilience)? E.g. per 1000 messages per second routed, 1-2 kB each?


You could use Confluent Cloud if you have low volumes. There is no free lunch. I've also been on a project that made a mess largely due to its use of RxJava with Vert.x, which meant we could only do about 5% of what we could have done without that mess.


Apart from Java being Java, why did you end up with a mess using RxJava and Vert.x? @U26FJ5FDM


I think it was originally set up properly, but the combination of tight deadlines and some developers with little/no knowledge of RxJava especially turned it into something horrendous, with observables all over the place and 100-line methods. I also used RxJava and Vert.x for Advent of Code, trying to write as much as possible as pure functions, and then it worked great. Especially because all the problems are essentially the same, you only need a bit of plumbing, and every solution can be a pure function. I think with some discipline the same kind of thing can be done in a real project. And I even think I would find it more enjoyable and more testable than a Spring Boot application, but I'll probably never know.


I see, thank you for the insight!


The reason why I am asking is that we chose RabbitMQ over Kafka 4 years ago, since Kafka looked hardware-hungry


And 1000 rps constant does not look low volume to me :-)


We moved away from RabbitMQ because we sometimes missed messages. 1000 rps is little in Kafka terms. With the right config it's only 10 payloads a second to Kafka.


Interesting, we haven't missed messages yet.


Did you use manual acks to acknowledge them?


Good riddance, Zookeeper was the source of most of my operational problems with Kafka


Total Noob question but does anyone run their own mail server via clojure?


I do not know. Has anyone written a mail server in Clojure? Mail servers have been around so long, developed in other languages, that I'm not sure anyone would have had that particular itch to scratch by writing another one.


I used to work at a place that used subethasmtp + Clojure to receive mail. Subethasmtp is an SMTP server library written in Java, and you give it basically a callback to handle the incoming mail

👁️ 4
metal 4

The business was archiving and indexing email, and one way customers could get email to us was via SMTP. They could set an option on their Outlook servers to forward journaled mail to us, so that code handled a lot of email


JavaMail was a Sun-maintained Java package for reading MIME messages and speaking SMTP (more as a client, if I recall). I think it may have also had some simple IMAP and POP3 clients.