
Does anyone know if clojure-lsp can easily be used with a REPL-based approach in emacs? Meaning, editing the code and evaluating it in place?


@U061KMSM7 I'm not sure what you're asking here: editing code and evaluating it in place is pretty much the definition of a REPL-driven workflow, so I don't know what clojure-lsp does or doesn't do to change that?


Clojure lsp is static analysis and exposes no repl at all


Yeah, that's what I was wondering -- not sure how LSP comes into the picture at all with Emacs and RDD (or even any editor and an RDD workflow)...


So after using CIDER for a long time, I’m taking clojure-lsp for a test drive. I’m curious what else I would need to evaluate code in the editor, since clojure-lsp does static analysis only.


I use cider and clojure-lsp in tandem at work. Not sure what shortcomings you’re seeing but emacs doesn’t have too many other options. If you want to get rid of nrepl you could try chlorine but not sure there’s any editor support for lsp

Ashwin Bhaskar 03:02:43

Hi All, I just shifted from using clj-http to clj-http-lite. The code used for uploading a file with multipart that worked with clj-http does not seem to work with clj-http-lite. With the latter I get an error:

POST requests require a Content-length header. That's all we know.
I tried adding the Content-Length header explicitly but still get the same error (HTTP status code 411). Any idea what is happening here?


is there a better way to access a nested key? example:

(defn handler [{:keys [services]}]...


I need to access the database key in the services map


do I need to nest the :keys destructuring?


I don't need the services keys at all


you can go as deep as you want 🙂 [{{{:keys [baz]} :bar} :foo} my-map]
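Applied to the handler above, that nesting looks like this (a sketch; the `:database` key and the handler body are assumptions, not from the original thread):

```clojure
;; destructure :database directly out of the nested :services map;
;; the body just returns it so the binding is visible
(defn handler [{{:keys [database]} :services}]
  database)

(handler {:services {:database "db-conn"}})
;; => "db-conn"
```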

Ankon Chakraborty 08:02:12

You can use the get-in function instead of this approach

(def my-map {:foo {:bar {:baz 1}}})
(get-in my-map [:foo :bar :baz])


Or even (-> my-map :foo :bar :baz).


Is there any ready-made fn / solution I can use to make this comparison true? So :status is equal, :body is equal, and in :headers there is a map where Cache-Control is equal - but :headers has more items, which I don’t care about. So a map compare which cares only about the data which is expected. A library with helpers for tests, or something like that?

{:response {:status 200
            :headers {"Cache-Control" "no-cache"}
            :body "ok"}}
{:response {:status 200,
            :headers {"Cache-Control" "no-cache",
                      "Access-Control-Allow-Origin" "",
                      "Access-Control-Allow-Headers" "Content-Type, User-Agent"},
            :body "ok"},
 :request {:protocol "HTTP/1.1",
           :remote-addr "localhost",
           :headers {"host" "localhost"},
           :server-port 80,
           :uri "/shops/labels/uuid/1/orders/2",
           :server-name "localhost",
           :body nil,
           :scheme :http,
           :request-method :get},
 :headers nil,
 :app #object[api.core$add_headers$fn__24549 0x4f3a36b9 "api.core$add_headers$fn__24549@4f3a36b9"],
 :content-type nil,
 :cookie-jar nil}


PS I know I can always write this myself, but I would prefer to use something ready-made


so it is a little like “does a map contain a map?”


You're looking for a submap predicate. It's not in core, but Alex Miller wrote a good one for spec tests

👍 4

thanks, the point is to know the right name 🙂 submap predicate sounds good 🙂


Not sure, but maybe this could be used. I still need to test it myself

👍 4

Do you know where I can find Alex solution?


I didn’t find anything better than Alex’s solution

(defn submap?
  "Is m1 a subset of m2?"
  [m1 m2]
  (if (and (map? m1) (map? m2))
    (every? (fn [[k v]] (and (contains? m2 k)
                             (submap? v (get m2 k))))
            m1)
    (= m1 m2)))


I would like to see this in test helpers in Clojure or in some additional library


I'm also using this approach in clj-kondo tests


I've extended it to regexes 😉
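A sketch of what such a regex extension might look like (an assumption about the shape, not the actual clj-kondo helper): when the expected value is a pattern, match it against the actual value as a string.

```clojure
(defn submap?*
  "Like submap?, but an expected regex value matches the actual value as a string."
  [m1 m2]
  (cond
    (and (map? m1) (map? m2))
    (every? (fn [[k v]] (and (contains? m2 k)
                             (submap?* v (get m2 k))))
            m1)

    (instance? java.util.regex.Pattern m1)
    (boolean (re-find m1 (str m2)))

    :else (= m1 m2)))

(submap?* {:status 200 :headers {"Cache-Control" #"no-"}}
          {:status 200 :headers {"Cache-Control" "no-cache"
                                 "Access-Control-Allow-Origin" ""}})
;; => true
```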


In advance, I apologize for the length of this post. I have a question about style. I've almost committed to giving a talk about Clojure, and why it's a good choice for certain problems, and why it's a good choice for other problems too. Consequently, I'd like to introduce some "functional" style to the audience. I think recursion, sequences and laziness might be good topics. More than that might be too much for the 45 minutes. Anyway, the "test" problem is "gathering" entries in a collection according to a function that compares (in some user-defined way) a value in the collection with the next value in the collection, gathering like ones together. If I was gathering consecutive equal elements on [1 2 2 3 6 6 6 8], I'd get [[1] [2 2] [3] [6 6 6] [8]], and if I was gathering odds/evens, I'd get [[1] [2 2] [3] [6 6 6 8]]. The functions I've come up with are

;; given a collection c, and a gathering function f, return two vectors
;; where the first is the first run of gathered elements and the second is
;; the rest of the collection less the gathered elements.
(defn col-split
  [f c]
  (reduce
   (fn [accum v]
     (if (or (empty? (first accum))
             (f (last (first accum)) v))
       (-> accum
           (update 0 conj v)
           (update 1 rest))
       (reduced accum)))
   [[] c]
   c))

;; given a collection c, and a gathering function f, return a lazy
;; sequence of vectors of gathered elements.
(defn lz-split
  [f c]
  (lazy-seq
   (when (seq c)
     (let [[g rst] (col-split f c)]
       (cons g (lz-split f rst))))))
col-split returns the first qualifying elements, and then everything else. I use reduced to short-circuit the evaluation as soon as I've assembled the first group. So, I guess it's as lazy an eager function as possible. lz-split uses col-split to return a lazy sequence of the gathered elements. So, gathering equals would be (lz-split = [1 2 2 3 6 6 6 8]) and gathering odds/evens would be (lz-split #(= (mod %1 2) (mod %2 2)) [1 2 2 3 6 6 6 8]) Now, my question is: is this example Clojure'y? I'm less interested in whether it's the best way (there's probably 5 decent ways of doing it), or whether it's the fastest way, than I am in its clojure-y-ish-ness. Any deductions for style? I'm not in love with the `let` in lz-split, it just seems inelegant - but not offensive. Thoughts?


lz-split sounds suspiciously similar to clojure.core/partition-by. If so, the most Clojure'y way would be to use partition-by. :)


Thanks. I looked at that, but doesn't partition-by only consider the current element? I didn't see an obvious way for it to look back/ahead. Did I miss something obvious?


user=> (partition-by odd? [1 2 2 3 6 6 6 8])
((1) (2 2) (3) (6 6 6 8))


Yes. But (partition-by = [1 2 2 3 6 6 6 8]) returns ((1 2 2 3 6 6 6 8))


reduce maybe then?


> But (partition-by = [1 2 2 3 6 6 6 8]) returns ((1 2 2 3 6 6 6 8))
> [...] a function that compares (in some user-defined way) a value in the collection with the next value in the collection and gathering like-ones together

You don't need =, you need to group by some function's value. Since it's user-defined, you can just use a different function:

user=> (partition-by identity [1 2 2 3 6 6 6 8])
((1) (2 2) (3) (6 6 6) (8))


Yes, for reduce. I did want to illustrate that, so I stuck it in the col-split function. My thinking was to then illustrate the lazy part in lz-split, which is entirely redundant if you want the whole gathered sequence back. It only makes sense if you `take` from it


@U2FRKM4TW I agree. But, that might be a degenerate case. Let's say I wanted to gather elements into sequences where the previous number is less than or equal, using [1 2 2 1 3 6 6 6 8] as the input. (lz-split <= [1 2 2 1 3 6 6 6 8]) returns ([1 2 2] [1 3 6 6 6 8]), which is what I'd want. How might partition-by deal with this?


Ah, I see, right. No easy way to deal with <= in partition-by.
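For a pairwise predicate like <=, a hand-rolled lazy loop can do the gathering in one function (a sketch; `gather-by` is a made-up name, not a core fn):

```clojure
;; lazily gather runs where (pred prev next) holds between neighbours
(defn gather-by [pred coll]
  (lazy-seq
   (when-let [s (seq coll)]
     (let [[run more]
           ;; grow the current run while the predicate links neighbours
           (loop [run [(first s)] r (rest s)]
             (if (and (seq r) (pred (peek run) (first r)))
               (recur (conj run (first r)) (rest r))
               [run r]))]
       (cons run (gather-by pred more))))))

(gather-by <= [1 2 2 1 3 6 6 6 8])
;; => ([1 2 2] [1 3 6 6 6 8])
```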


Thanks. I'd hate to give a talk and have someone point out that the whole lot could be reduced to one line. Very egg-on-face. 😕


What solutions do you use for database-per-tenant in SaaS? So each organization has a separate DB. I was looking for a solution about 1 year ago and my conclusion was: write my own based on, for example, Flyway, because there is nothing on the market to maintain DBs and migrations for many tenants. Has anything changed maybe? Did I miss something? How do you solve this? Or do you use 1 DB for all tenants?


I'm currently using 1db for all tenants, but I've done db-per-tenant in the past...


I've home-rolled solutions by hacking existing migrators, active-record in a previous life and drift more recently


it was pretty straightforward to hack existing migrators to my purpose... but sadly i don't have code to share, it was a closed-source app


be aware that you may start to run in to difficulties with db-per-tenant as your number of tenants increases - either with number of tables in a db (if you are using a single db instance), or connection pool management (if you are using multiple db instances)


in general how would you describe the complexity of both? I feel I should choose db-per-tenant, but I am a little afraid of the complexity


unfortunately I don’t have practical experience with SaaS DB maintenance


for a smallish number of tenants it was fine - we used postgresql namespaces so that the tenants saw nothing different from a single-tenant db


hmm I thought having all tenants in 1 DB gives more issues with scale than DB-per-tenant


because DB start to be huge


well, maybe - that depends on your db :) ... we're currently using cassandra which makes sharding easy


but it's certainly very different to postgresql, and not suitable for all use-cases


iirc large numbers of tables can be problematic in many databases - e.g.


I plan to use Google Cloud with their SQL DB


never used it, so don't know


I'd query how it behaves with large numbers of tables and how connections are managed


by large nr of tables you mean the sum of tables across all DBs?


well you've got roughly two ways of doing db-per-tenant... separate tables/namespaces in a single db instance, or completely separate db instances (or some blend of the two)


I was thinking about separate DB


the first gives you large numbers of tables in a db instance, the second gives you lots of active connections to lots of db instances


so the solution for the latter is to add more machines with “mirrors” of the DB?


does it really matter vs keeping all tenants in 1 DB with extra column to identify user?


the number of connections


how big an issue is it?


depends... memory usage in the end - if you have 10k dbs with 10 connections each in each vm and each connection takes a few KB then that gets to be quite a lot of RAM


in my previous apps the number of tenants was 10s, so it was never an issue


so the same number of connections to 1 DB is less memory than the same number of connections to many DBs?


is 10s a shortcut for tenants?


tens of tenants, i.e. not so many


generally there is some state associated with each connection - socket, buffer, prepared-statements, other state - exactly what depends on the db

Ahmed Hassan 16:02:04

What is the trade-off of using table rows to store tenant data? And of using tables this way to store other domain-related data?


hmm I think after all I changed my mind to use 1 DB for all tenants


I don’t feel comfortable without tenant isolation, but it looks like the right way to do this


Another suggestion is to use pgsql row level security to handle tenancy.


hmm will it work with dozens / thousands of policies? Doing this for all tables sounds complex. But the idea is interesting.


if only this could be more general for tables - 1 role per table would be enough


any hints on how you deal with export / import / backup for a tenant in a multi-tenancy single database?


lots of ways @U0WL6FA77 , depending on need - in different circumstances and with different databases i've used all of: materialized-views, select-to-csv, streaming-table-scan-to-csv, spark

👍 4

What do you use for migrations?


we're currently using juxt/joplin (which is based on weavejester/ragtime) for our cassandra migrations... it lets you detect conflicts caused by merging (i.e. where a new migration which is older than the latest currently applied migration appears because of a git merge) which turns out to be a must-have


I'm processing some data using transducers and I was curious to know if 'unrolling' a particular function would be faster, but in fact it was slower and I don't understand why


the unrolled function does the same thing as the transducer-based function, but it's slower


I expected that not using partition-all and map (transducer forms) etc would make it run faster
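The gist being discussed isn't reproduced in this thread; a generic illustration of the two styles being compared might look like this (pairwise sums are an assumed stand-in for the actual workload):

```clojure
;; transducer form: partition into pairs, sum each pair lazily
(defn pairs-xf [coll]
  (sequence (comp (partition-all 2)
                  (map #(apply + %)))
            coll))

;; "unrolled" form: an explicit eager loop doing the same work
(defn pairs-loop [coll]
  (loop [c (seq coll) out []]
    (if c
      (let [a (first c) r (next c)]
        (if r
          (recur (next r) (conj out (+ a (first r))))
          (conj out a)))
      out)))

(pairs-xf [1 2 3 4 5])   ;; => (3 7 5)
(pairs-loop [1 2 3 4 5]) ;; => [3 7 5]
```

The intuition in the thread is that the hand-written loop should win, but the transducer pipeline avoids intermediate collections too, so the difference is best settled by benchmarking (as done below with Criterium).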


I am a little confused. It seems to me that you are using transducers in both approaches, since you are using sequence.


no - see the macroexpansion of make-test2 which is the unrolled code


the function test2 which that macro creates is the alternative to transducer code I'm referring to


that macro uses sequence to create the unrolled code


but it's the unrolled code I'm comparing with, and that doesn't use sequence or transducers


Okay, now I see it. That's a cool trick! To your original question, maybe try clj-java-decompiler to inspect the generated byte code? It can be really helpful in understanding what's going on.


did you run the code in a repl or in a standalone program?


if you actually plan to optimize for non-repl production, make sure you take timings outside of the repl. I have seen quite a lot of variation introduced just by the repl itself (something with the optimizations appears to behave differently)


Criterium - as you can tell from the gist


should i open a jira ticket or ask on for a ticket for 1.11 to have a **core-java-api** version specific to Java 13 rather than falling back to the Java 8 version?

Alex Miller (Clojure team) 16:02:00

yes, jira if you have access, ask otherwise

👍 8

anyone have a simple way to get min/max dates from a list of date inst values?



(apply min-key inst-ms ...) would that work?


ok .. now that was a good one .. very slick


thanks mate 🙂


For min, max etc. I did a fair bit of benchmarking and it is often best to use the reduce form over apply. These two are interchangeable: (apply min [3 5 7 1 2]) => 1 and (reduce min [3 5 7 1 2]) => 1. While apply does a lot of gymnastics making the vector seem like args, reduce maintains the minimum so far and reduces through the collection.


The reduce version of the above will be,

(reduce (partial min-key inst-ms) ...)


fascinating actually .. thanks for sharing :beach_with_umbrella:


If the collection might be empty you'll need an initial value (or a guard) when using reduce.

(reduce (partial min-key inst-ms) [])
Execution error (ArityException) at eval8568 (main.clj:32).
Wrong number of args (1) passed to: clojure.core/min-key


It is also a problem when using

(apply min-key inst-ms [])
Execution error (ArityException) at eval8598 (main.clj:32).
Wrong number of args (1) passed to: clojure.core/min-key


good points, i will fix my function to cover that


i can think of a way to do it, but it's not very fun
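One simple way to cover the empty case is to guard with seq before reducing (a sketch; `min-date`/`max-date` are made-up names):

```clojure
;; nil for an empty collection, otherwise the earliest/latest inst
(defn min-date [dates]
  (when (seq dates)
    (reduce (partial min-key inst-ms) dates)))

(defn max-date [dates]
  (when (seq dates)
    (reduce (partial max-key inst-ms) dates)))

(min-date [#inst "2020-02-01" #inst "2020-01-01"])
;; => the 2020-01-01 inst
(min-date [])
;; => nil
```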


How can I "merge" two sequences, like overwriting one with the other? [1 2 3 4 5] + [:a :b :c] => [:a :b :c 4 5]


If the smaller sequence has m elements and the longer sequence has n elements, how about dropping m elements from the larger sequence and placing the result after the smaller sequence?


Assuming replacements will come in the smaller sequence.


If the bigger sequence is the replacements, just return it.


(defn prefer-seq [s1 s2]
  (when-let [[x] (or (seq s1) (seq s2))]
    (cons x (lazy-seq (prefer-seq (rest s1) (rest s2))))))


I tried this one liner,

(map #(or %1 %2)
     (concat [:a :b :c] (repeat nil)) ;; prevent map from terminating early
     [1 2 3 4 5])


Also lazy.


i suppose that neither of our versions allows for nils in the collections


I assumed replacements will not be nil. But that's a good point to note.


This seems to work for all cases,

(defn merge-seqs
  [xs ys]
  (if (and (seq xs)
           (seq ys))
    (cons (first ys)
          (merge-seqs (rest xs) (rest ys)))
    (or (seq ys) (seq xs))))

(merge-seqs [1 2 3 4 nil] [:a nil :c])
;; => (:a nil :c 4 nil)
Note: I kept the replacements as the second sequence as that is the convention with merge as well.


And, modified it to work for any number of sequences,

(defn merge-seqs
  [& ss]
  (letfn [(merge-two
            [xs ys]
            (if (and (seq xs)
                     (seq ys))
              (cons (first ys)
                    (merge-two (rest xs) (rest ys)))
              (or (seq ys) (seq xs))))]
    (reduce merge-two ss)))

(merge-seqs [1 2 3 4 nil] [:a nil :c nil :e] [nil nil 3.0])
;; => (nil nil 3.0 nil :e)


Sorry, I left for a second. Thanks for all the replies, I wanted to make sure that there is nothing in core doing this in one statement 🙂


Sorry, got carried away a little, 😛. I enjoy these puzzles a lot.


@UJRDALZA5, why wouldn't a simple concat work, like you originally suggested? I assume "merge" means that the 2nd vector's values replace the first, that's how merge works for maps anyway. It works no matter which vector is longer.

(let [a [1 2 3 4 5]
      b [:a :b :c]]
  (concat b (drop (count b) a)))
=> (:a :b :c 4 5)

👍 12

For two or more seqs, following @UBRMX7MT7's approach: (puzzles are nice)

(defn merge-seqs [a b & args]
  (let [s (concat b (drop (count b) a))]
    (if (empty? args)
      s
      (recur s (first args) (rest args)))))

👍 4

@UBRMX7MT7 Yeah, that works too. I assumed dropping more than the count is a problem, but apparently it's not.


@USJQPSBD3 Note that unlike all other solutions, which are lazy, yours is eager because into is eager. For vectors or already realized sequences, this is perfectly fine, but when dealing with lazy sequences sometimes you will want to postpone the evaluation.

Corneliu Hoffman 11:02:31

(defn aa [a b] (concat b (keep-indexed #(when (<= (count b) %1) %2) a)))

Ben Sless 12:02:08

how about this?

(defn map-longest
  [f c1 c2]
  (let [s1 (seq c1) s2 (seq c2)]
    (when (or s1 s2)
      (cons (f (first s1) (first s2))
            (map-longest f (rest s1) (rest s2))))))

(map-longest (fn [a b] (or a b)) [1 2 3] '[a b c d])


@UJRDALZA5 yep. For lazy ones the concat thing makes more sense but then the output collection might not be the original type but a list.


That's correct, but I think Clojure cares less about the original type and more about laziness. Most collection functions in clojure.core implicitly convert their input to a sequence. It's a design philosophy of Clojure.

👍 8