#clojure
2020-02-23
jmckitrick02:02:02

Does anyone know if clojure-lsp can easily be used with a REPL-based approach in emacs? Meaning, editing the code and evaluating it in place?

seancorfield03:02:11

@U061KMSM7 I'm not sure what you're asking here: editing code and evaluating it in place is pretty much the definition of a REPL-driven workflow, so I don't know what clojure-lsp does or doesn't do to change that?

dpsutton03:02:31

clojure-lsp is static analysis and exposes no REPL at all

seancorfield03:02:42

Yeah, that's what I was wondering -- not sure how LSP comes into the picture at all with Emacs and RDD (or even any editor and an RDD workflow)...

jmckitrick03:02:04

So after using CIDER for a long time, I’m taking clojure-lsp for a test drive. I’m curious what else I would need to evaluate code in the editor, since clojure-lsp does static analysis only.

dpsutton03:02:05

I use CIDER and clojure-lsp in tandem at work. Not sure what shortcomings you're seeing, but Emacs doesn't have too many other options. If you want to get rid of nREPL you could try Chlorine, but I'm not sure there's any LSP support in that editor

Ashwin Bhaskar03:02:43

Hi All, I just shifted from using clj-http to clj-http-lite. The code used for uploading a file with multipart that worked with clj-http does not seem to work with clj-http-lite. With the latter I get an error:

POST requests require a Content-length header. That's all we know.

I tried adding the Content-Length header explicitly but still get the same error (HTTP status code 411). Any idea what is happening here?

vinnyataide07:02:24

is there a better way to access a nested key? example:

(defn handler [{:keys [services]}]...

vinnyataide07:02:36

I need to access the database key in the services map

vinnyataide07:02:52

do I need to nest a :keys destructuring?

vinnyataide07:02:20

I don't need the services keys at all

dharrigan07:02:26

you can go as deep as you want 🙂 (let [{{{:keys [baz]} :bar} :foo} my-map] baz)

Ankon Chakraborty08:02:12

You can use the get-in function instead of this approach:

(def my-map {:foo {:bar {:baz 1}}})
(get-in my-map [:foo :bar :baz])

p-himik12:02:29

Or even (-> my-map :foo :bar :baz).
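Applied to the original question, nested destructuring can pull the key straight out in the parameter vector (a minimal sketch, assuming the services map holds the connection under a :database key):

(defn handler [{{:keys [database]} :services}]
  ;; only :database is bound; the rest of :services is ignored
  database)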

kwladyka12:02:47

Is there a ready-made fn / solution I can use to make this true? So :status is equal, :body is equal, and in :headers the Cache-Control entry is equal, but :headers has more items which I don't care about. In other words, a map comparison that only checks the data I expect. A library with test helpers, or something like that?

{:response {:status 200
            :headers {"Cache-Control" "no-cache"}
            :body "ok"}}
in
{:response {:status 200,
            :headers {"Cache-Control" "no-cache",
                      "Access-Control-Allow-Origin" "",
                      "Access-Control-Allow-Headers" "Content-Type, User-Agent"},
            :body "ok"},
 :request {:protocol "HTTP/1.1",
           :remote-addr "localhost",
           :headers {"host" "localhost"},
           :server-port 80,
           :uri "/shops/labels/uuid/1/orders/2",
           :server-name "localhost",
           :body nil,
           :scheme :http,
           :request-method :get},
 :headers nil,
 :app #object[api.core$add_headers$fn__24549 0x4f3a36b9 "api.core$add_headers$fn__24549@4f3a36b9"],
 :content-type nil,
 :cookie-jar nil}

kwladyka12:02:33

PS I know I can always write this myself, but I would prefer to use something ready-made

kwladyka12:02:44

so it is a little like asking "does this map contain that map?"

dominicm12:02:13

You're looking for a submap predicate. It's not in core, but Alex Miller wrote a good one for spec tests

👍 4
kwladyka12:02:51

thanks, the point is to know the right name 🙂 submap predicate sounds good 🙂

jeroenvandijk12:02:26

Not sure, but maybe https://github.com/noprompt/meander could be used. I still need to test it myself

👍 4
kwladyka12:02:32

Do you know where I can find Alex's solution?

kwladyka13:02:33

I didn't find anything better than Alex's solution:

(defn submap?
  "Is m1 a subset of m2?"
  [m1 m2]
  (if (and (map? m1) (map? m2))
    (every? (fn [[k v]]
              (and (contains? m2 k)
                   (submap? v (get m2 k))))
            m1)
    (= m1 m2)))
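For example, checked against the response-comparison case above (a quick sketch of expected results):

(submap? {:status 200 :headers {"Cache-Control" "no-cache"} :body "ok"}
         {:status 200
          :headers {"Cache-Control" "no-cache"
                    "Access-Control-Allow-Origin" ""}
          :body "ok"})
;; => true, extra :headers entries are ignored

(submap? {:headers {"Cache-Control" "no-cache"}}
         {:headers {"Cache-Control" "max-age=0"}})
;; => false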

kwladyka13:02:59

I would like to see this among the test helpers in Clojure, or in some additional library

borkdude14:02:10

I'm also using this approach in clj-kondo tests

borkdude14:02:37

I've extended it to regexes 😉

KJO14:02:45

In advance, I apologize for the length of this post. I have a question about style. I've almost committed to giving a talk about Clojure, why it's a good choice for certain problems, and why it's a good choice for other problems too. Consequently, I'd like to introduce some "functional" style to the audience. I think recursion, sequences and laziness might be good topics. More than that might be too much for the 45 minutes. Anyway, the "test" problem is "gathering" entries in a collection according to a function that compares (in some user-defined way) a value in the collection with the next value, gathering like ones together. If I were gathering consecutive equal elements of [1 2 2 3 6 6 6 8], I'd get [[1] [2 2] [3] [6 6 6] [8]], and if I were gathering odds/evens, I'd get [[1] [2 2] [3] [6 6 6 8]]. The functions I've come up with are:

;; given a collection c, and a gathering function f, return two vectors:
;; the first is the first run of gathered elements and the second is
;; the rest of the collection less the gathered elements.
(defn col-split
  [f c]
  (reduce
   (fn [accum v]
     (if (or (empty? (first accum))
             (f (last (first accum)) v))
       (-> accum
           (update 0 conj v)
           (update 1 rest))
       (reduced accum)))
   [[] c]
   c))
;; given a collection c, and a gathering function f, return a lazy
;; sequence of vectors of gathered elements.
(defn lz-split 
  [f c]
  (if (seq c)
    (lazy-seq
     (let [[g rst] (col-split f c)]
       (cons
        g
        (lz-split f rst))))))
col-split returns the first qualifying elements, and then everything else. I use reduced to short-circuit the evaluation as soon as I've assembled the first group. So, I guess it's as lazy an eager function as possible. lz-split uses col-split to return a lazy sequence of the gathered elements. So, gathering equals would be

(lz-split = [1 2 2 3 6 6 6 8])

and gathering odds/evens would be

(lz-split #(= (mod %1 2) (mod %2 2)) [1 2 2 3 6 6 6 8])

Now, my question is: is this example Clojure'y? I'm less interested in whether it's the best way (there are probably 5 decent ways of doing it), or whether it's the fastest way, than I am in its clojure-y-ish-ness. Any deductions for style? I'm not in love with the let in lz-split, it just seems inelegant - but not offensive. Thoughts?

p-himik14:02:45

lz-split sounds suspiciously similar to clojure.core/partition-by. If so, the most Clojure'y way would be to use partition-by. :)

KJO14:02:34

Thanks. I looked at that, but doesn't partition-by only consider the current element? I didn't see an obvious way for it to look back/ahead. Did I miss something obvious?

p-himik14:02:17

user=> (partition-by odd? [1 2 2 3 6 6 6 8])
((1) (2 2) (3) (6 6 6 8))

KJO15:02:26

Yes. But (partition-by = [1 2 2 3 6 6 6 8]) returns ((1 2 2 3 6 6 6 8))

kwladyka15:02:17

reduce maybe then?

p-himik15:02:50

> But (partition-by = [1 2 2 3 6 6 6 8]) returns ((1 2 2 3 6 6 6 8))
> [...] a function that compares (in some user-defined way) a value in the collection with the next value in the collection and gathering like-ones together

You don't need =, you need to group by some function's value. Since it's user-defined, you can just use a different function:

user=> (partition-by identity [1 2 2 3 6 6 6 8])
((1) (2 2) (3) (6 6 6) (8))

KJO15:02:40

Yes, for reduce. I did want to illustrate that, so I stuck it in the col-split function. My thinking was to then illustrate the lazy part in lz-split, which is entirely redundant if you want the whole gathered sequence back. It only makes sense if you call take on it

KJO15:02:40

@U2FRKM4TW I agree. But, that might be a degenerate case. Let's say I wanted to gather elements into sequences where the previous number is less than or equal, using [1 2 2 1 3 6 6 6 8] as the input. (lz-split <= [1 2 2 1 3 6 6 6 8]) returns ([1 2 2] [1 3 6 6 6 8]), which is what I'd want. How might partition-by deal with this?

p-himik15:02:55

Ah, I see, right. No easy way to deal with <= in partition-by.
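A compact eager alternative for pairwise predicates is plain reduce (a sketch, not from the thread; gather is a hypothetical name):

(defn gather
  "Group consecutive elements while (f prev cur) holds."
  [f coll]
  (if (empty? coll)
    []
    (reduce (fn [acc v]
              (if (f (peek (peek acc)) v)
                (conj (pop acc) (conj (peek acc) v)) ;; extend current group
                (conj acc [v])))                     ;; start a new group
            [[(first coll)]]
            (rest coll))))

(gather <= [1 2 2 1 3 6 6 6 8])
;; => [[1 2 2] [1 3 6 6 6 8]]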

KJO15:02:42

Thanks. I'd hate to give a talk and have someone point out that the whole lot could be reduced to one line. Very egg-on-face. 😕

kwladyka15:02:17

What solutions do you use for a database per tenant in SaaS, i.e. each organization has a separate DB? I was looking for a solution about a year ago and my conclusion was: write my own, based on e.g. Flyway, because there is nothing on the market to maintain DBs and migrations for many tenants. Has anything changed? Did I miss something? How do you solve this? Or do you use 1 DB for all tenants?

mccraigmccraig15:02:34

I'm currently using 1db for all tenants, but I've done db-per-tenant in the past...

mccraigmccraig15:02:41

I've home-rolled solutions by hacking existing migrators - ActiveRecord in a previous life, and Drift more recently

mccraigmccraig15:02:15

it was pretty straightforward to hack existing migrators to my purpose... but sadly i don't have code to share, it was a closed-source app

mccraigmccraig15:02:16

be aware that you may start to run into difficulties with db-per-tenant as your number of tenants increases - either with the number of tables in a db (if you are using a single db instance), or connection-pool management (if you are using multiple db instances)

kwladyka15:02:21

in general, how would you describe the complexity of each? I feel I should choose db-per-tenant, but I am a little afraid of the complexity

kwladyka15:02:22

unfortunately I don’t have practical experience with SaaS DB maintenance

mccraigmccraig15:02:30

for a smallish number of tenants it was fine - we used postgresql namespaces so that the tenants saw nothing different from a single-tenant db

kwladyka15:02:17

hmm, I thought having all tenants in 1 DB gives more issues with scale than DB-per-tenant

kwladyka15:02:24

because the DB starts to get huge

mccraigmccraig15:02:52

well, maybe - that depends on your db :) ... we're currently using cassandra, which makes sharding easy

mccraigmccraig15:02:35

but it's certainly very different to postgresql, and not suitable for all use-cases

mccraigmccraig15:02:18

iirc large numbers of tables can be problematic in many databases - e.g. https://www.postgresql.org/message-id/18728.1027611113%40sss.pgh.pa.us

kwladyka15:02:05

I plan to use Google Cloud with their SQL DB

mccraigmccraig15:02:42

never used it, so don't know

mccraigmccraig15:02:13

I'd query how it behaves with large numbers of tables and how connections are managed

kwladyka15:02:20

by a large number of tables, do you mean the sum of tables across all DBs?

mccraigmccraig15:02:45

well you've got roughly two ways of doing db-per-tenant... separate tables/namespaces in a single db instance, or completely separate db instances (or some blend of the two)

kwladyka15:02:22

I was thinking about separate DB

mccraigmccraig15:02:30

the first gives you large numbers of tables in a db instance, the second gives you lots of active connections to lots of db instances

kwladyka15:02:27

so the solution later is to add more machines with a "mirror" of the DB?

kwladyka15:02:39

does it really matter vs keeping all tenants in 1 DB with an extra column to identify the user?

kwladyka15:02:47

the number of connections

kwladyka15:02:51

how big an issue is it?

mccraigmccraig15:02:01

depends... memory usage in the end - if you have 10k dbs with 10 connections each in each vm and each connection takes a few KB then that gets to be quite a lot of RAM
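(Back-of-envelope, assuming ~4 KB per client-side connection: 10,000 DBs × 10 connections × 4 KB ≈ 400 MB of RAM per VM - before any server-side per-connection state, which can be much larger.)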

mccraigmccraig15:02:53

in my previous apps the number of tenants was 10s, so it was never an issue

kwladyka15:02:30

so the same number of connections to 1 DB is less memory than the same number of connections to many DBs

kwladyka16:02:40

is "s" short for tenants?

mccraigmccraig16:02:38

tens of tenants, i.e. not so many

mccraigmccraig16:02:59

generally there is some state associated with each connection - socket, buffer, prepared-statements, other state - exactly what depends on the db

Ahmed Hassan16:02:04

What is the trade-off with using table rows to store tenant data? And using tables this way to store other domain-related data?

kwladyka16:02:45

hmm, I think after all I've changed my mind - I'll use 1 DB for all tenants

kwladyka16:02:08

I don't feel comfortable without tenant isolation, but it looks like the right way to do this

dominicm18:02:59

Another suggestion is to use pgsql row level security to handle tenancy.

kwladyka07:02:16

hmm, will it work with dozens / thousands of policies? Doing this for all tables sounds complex. But the idea is interesting.

kwladyka07:02:15

if only this could be more general across tables, and 1 role per table would be enough

kwladyka09:02:36

any hints on how you deal with export / import / backup per tenant in a multi-tenant single database?

mccraigmccraig09:02:10

lots of ways @U0WL6FA77 , depending on need - in different circumstances and with different databases i've used all of: materialized-views, select-to-csv, streaming-table-scan-to-csv, spark

👍 4
kwladyka16:02:32

What do you use for migrations?

mccraigmccraig16:02:36

we're currently using juxt/joplin (which is based on weavejester/ragtime) for our cassandra migrations... it lets you detect conflicts caused by merging (i.e. where a new migration which is older than the latest currently applied migration appears because of a git merge) which turns out to be a must-have
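joplin builds on ragtime, so for flavor, a minimal ragtime setup looks roughly like this (a sketch from memory - the connection URI and migrations path are placeholders; check the ragtime README for the current API):

(require '[ragtime.jdbc :as jdbc]
         '[ragtime.repl :as repl])

(def config
  {:datastore  (jdbc/sql-database {:connection-uri "jdbc:postgresql://localhost/app"})
   :migrations (jdbc/load-resources "migrations")})

(repl/migrate config)   ;; apply pending migrations
;; (repl/rollback config) to undo the most recent one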

octahedrion15:02:26

I'm processing some data using transducers and I was curious to know if 'unrolling' a particular function would be faster, but in fact it was slower and I don't understand why...here's the code https://gist.github.com/Hendekagon/b2f5639d6d56127dbe6b1ec83930e323

octahedrion15:02:43

the unrolled function does the same thing as the transducer-based function, but it's slower

octahedrion15:02:04

I expected that not using partition-all and map (transducer forms) etc would make it run faster
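(For readers without the gist open, the transducer shape being discussed is roughly of this form - a generic sketch, not the gist's actual code:)

(def xf (comp (partition-all 2) (map #(reduce + %))))

(sequence xf (range 10))
;; => (1 5 9 13 17)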

hindol16:02:18

I am a little confused. It seems to me that you are using transducers in both approaches, since you are using sequence.

octahedrion16:02:47

no - see the macroexpansion of make-test2 which is the unrolled code

octahedrion16:02:50

the function test2 which that macro creates is the alternative to transducer code I'm referring to

octahedrion16:02:13

that macro uses sequence to create the unrolled code

octahedrion16:02:53

but it's the unrolled code I'm comparing with, and that doesn't use sequence or transducers

hindol16:02:07

Okay, now I see it. That's a cool trick! To your original question, maybe try clj-java-decompiler to inspect the generated byte code? It can be really helpful in understanding what's going on.

kulminaator07:02:57

did you run the code in a repl or in a standalone program?

kulminaator07:02:51

if you actually plan to optimize for non-repl production, make sure you take timings outside of the repl. I have seen quite a lot of variation introduced just by the repl itself (something with the optimizations appears to behave differently)

octahedrion09:02:47

Criterium - which you can tell from the gist

dpsutton16:02:38

should i open a jira ticket or ask on http://ask.clojure.org for a ticket for 1.11 to have a clojure.java.javadoc/*core-java-api* version specific to java 13 rather than falling back to the java 8 version?

Alex Miller (Clojure team)16:02:00

yes, jira if you have access, ask otherwise

👍 8
daniel.spaniel19:02:49

anyone have a simple way to get min/max dates from a list of date inst values?

[#inst"2020-02-01T05:00"
 #inst"2020-03-01T05:00"
 #inst"2020-04-01T05:00"]

jumpnbrownweasel19:02:07

(apply min-key inst-ms ...) would that work?
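Spelled out against the example data (min-key/max-key pick by the numeric timestamp that inst-ms returns):

(def dates [#inst "2020-02-01T05:00"
            #inst "2020-03-01T05:00"
            #inst "2020-04-01T05:00"])

(apply min-key inst-ms dates) ;; => #inst "2020-02-01T05:00:00.000-00:00"
(apply max-key inst-ms dates) ;; => #inst "2020-04-01T05:00:00.000-00:00"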

daniel.spaniel19:02:58

ok .. now that was a good one .. very slick

daniel.spaniel19:02:04

thanks mate 🙂

hindol19:02:55

For min, max etc. I did a fair bit of benchmarking, and it is often best to use the reduce form over apply. These two are interchangeable:

(apply min [3 5 7 1 2])  ;; => 1
(reduce min [3 5 7 1 2]) ;; => 1

While apply does a lot of gymnastics making the vector seem like args, reduce maintains the minimum so far and reduces through the collection.

hindol19:02:45

The reduce version of the above will be,

(reduce (partial min-key inst-ms)
        [#inst"2020-02-01T05:00"
         #inst"2020-03-01T05:00"
         #inst"2020-04-01T05:00"])

daniel.spaniel19:02:59

fascinating actually .. thanks for sharing :beach_with_umbrella:

jumpnbrownweasel22:02:33

If the array might be empty you'll need an initial value when using reduce.

(reduce (partial min-key inst-ms) [])
Execution error (ArityException) at eval8568 (main.clj:32).
Wrong number of args (1) passed to: clojure.core/min-key

jumpnbrownweasel22:02:05

It is also a problem when using

(apply min-key inst-ms [])
Execution error (ArityException) at eval8598 (main.clj:32).
Wrong number of args (1) passed to: clojure.core/min-key
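One way to guard the empty case (a sketch; min-date is a hypothetical helper, and returning nil for empty input is an assumption):

(defn min-date
  "Earliest inst in insts, or nil when insts is empty."
  [insts]
  (when (seq insts)
    (reduce (partial min-key inst-ms) insts)))

(min-date []) ;; => nil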

daniel.spaniel00:02:12

good points, i will fix my function to cover that

daniel.spaniel19:02:19

i can think of a way to do it, but it's not very fun

otwieracz20:02:49

How can I "merge" two sequences, overwriting one with the other? [1 2 3 4 5] + [:a :b :c] => [:a :b :c 4 5]

hindol20:02:29

If the smaller sequence has m elements and the longer sequence has n elements, how about dropping m elements from the larger sequence and placing the rest after the smaller sequence?

hindol20:02:50

Assuming replacements will come in the smaller sequence.

hindol20:02:18

If the bigger sequence is the replacements, just return it.

dpsutton20:02:57

(defn prefer-seq [s1 s2]
  (when-let [[x] (or (seq s1) (seq s2))]
    (cons x (lazy-seq (prefer-seq (rest s1) (rest s2))))))
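For the original example, with the preferred seq passed first:

(prefer-seq [:a :b :c] [1 2 3 4 5])
;; => (:a :b :c 4 5)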

hindol20:02:24

I tried this one liner,

(map #(or %1 %2)
     (concat [:a :b :c] (repeat nil)) ;; prevent map from terminating early
     [1 2 3 4 5])

hindol20:02:36

Also lazy.

dpsutton20:02:21

i suppose that neither of our versions allows for nils in the collections

hindol20:02:01

I assumed replacements will not be nil. But that's a good point to note.

hindol20:02:03

This seems to work for all cases,

(defn merge-seqs
  [xs ys]
  (if (and (seq xs)
           (seq ys))
    (lazy-seq
     (cons (first ys)
           (merge-seqs (rest xs) (rest ys))))
    (or (seq ys) (seq xs))))

(merge-seqs [1 2 3 4 nil] [:a nil :c])
;; => (:a nil :c 4 nil)
Note: I kept the replacements as the second sequence as that is the convention with merge as well.

hindol20:02:07

And, modified it to work for any number of sequences,

(defn merge-seqs
  [& ss]
  (letfn [(merge-two
            [xs ys]
            (if (and (seq xs)
                     (seq ys))
              (lazy-seq
               (cons (first ys)
                     (merge-two (rest xs) (rest ys))))
              (or (seq ys) (seq xs))))]
    (reduce merge-two ss)))

(merge-seqs [1 2 3 4 nil] [:a nil :c nil :e] [nil nil 3.0])
;; => (nil nil 3.0 nil :e)

otwieracz20:02:53

Sorry, I left for a second. Thanks for all the replies, I wanted to make sure that there is nothing in core doing this in one statement 🙂

hindol20:02:50

Sorry, got carried away a little, 😛. I enjoy these puzzles a lot.

jumpnbrownweasel22:02:49

@UJRDALZA5, why wouldn't a simple concat work, like you originally suggested? I assume "merge" means that the 2nd vector's values replace the first, that's how merge works for maps anyway. It works no matter which vector is longer.

(let [a [1 2 3 4 5]
      b [:a :b :c]]
  (concat b (drop (count b) a)))
=> (:a :b :c 4 5)

👍 12
bartuka22:02:36

For two or more seqs, following @UBRMX7MT7's approach (puzzles are nice):

(defn merge-seqs [a b & args]
  (let [s (concat b (drop (count b) a))]
    (if (empty? args)
      s
      (recur s (first args) (rest args)))))

👍 4
hindol02:02:08

@UBRMX7MT7 Yeah, that works too. I assumed dropping more than the count is a problem, but apparently it's not.

hindol10:02:46

@USJQPSBD3 Note that unlike all other solutions, which are lazy, yours is eager because into is eager. For vectors or already realized sequences, this is perfectly fine, but when dealing with lazy sequences sometimes you will want to postpone the evaluation.

Corneliu Hoffman11:02:31

(defn aa [a b] (concat b (keep-indexed #(when (<= (count b) %1) %2) a)))

Ben Sless12:02:08

how about this?

(defn map-longest
  ([f c1 c2]
   (lazy-seq
    (let [s1 (seq c1) s2 (seq c2)]
      (when (or s1 s2)
        (cons (f (first s1) (first s2))
              (map-longest f (rest s1) (rest s2))))))))

(map-longest (fn [a b] (or a b)) [1 2 3] '[a b c d])

Tarun12:02:53

@UJRDALZA5 yep. For lazy ones the concat approach makes more sense, but then the output collection might not be the original type - you get a seq back.

hindol12:02:59

That's correct, but Clojure seems to care less about the original type and more about laziness. Most collection functions in clojure.core implicitly convert to a sequence. It's a design philosophy of Clojure.

👍 8