This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2020-02-23
Channels
- # announcements (18)
- # babashka (65)
- # bangalore-clj (14)
- # beginners (80)
- # bristol-clojurians (1)
- # calva (4)
- # chlorine-clover (3)
- # cider (39)
- # clj-kondo (5)
- # clojars (1)
- # clojure (134)
- # clojure-france (1)
- # clojure-losangeles (3)
- # clojure-nl (1)
- # clojure-uk (7)
- # clojurescript (17)
- # core-typed (22)
- # data-science (1)
- # datomic (6)
- # duct (9)
- # emacs (48)
- # fulcro (58)
- # graalvm (37)
- # kaocha (2)
- # nrepl (1)
- # reagent (8)
- # shadow-cljs (20)
- # specter (1)
- # tree-sitter (5)
- # xtdb (3)
Does anyone know if clojure-lsp
can easily be used with a REPL-based approach in emacs? Meaning, editing the code and evaluating it in place?
@U061KMSM7 I'm not sure what you're asking here: editing code and evaluating it in place is pretty much the definition of a REPL-driven workflow, so I don't know what clojure-lsp
does or doesn't do to change that?
Yeah, that's what I was wondering -- not sure how LSP comes into the picture at all with Emacs and RDD (or even any editor and an RDD workflow)...
So after using CIDER for a long time, I’m taking clojure-lsp
for a test drive. I’m curious what else I would need to evaluate code in the editor, since clojure-lsp
does static analysis only.
I use cider and clojure-lsp in tandem at work. Not sure what shortcomings you’re seeing but emacs doesn’t have too many other options. If you want to get rid of nrepl you could try chlorine but not sure there’s any editor support for lsp
Clj-kondo runs with flycheck or LSP https://github.com/borkdude/clj-kondo/blob/master/doc/editor-integration.md#emacs
Hi All, I just shifted from using clj-http
to clj-http.lite
. The code used for uploading a file with multipart
that worked with clj-http
does not seem to work with clj-http-lite
. With the latter I get an error
POST requests require a Content-length header. That's all we know.
I tried adding the content length header explicitly but still get the same error (http status code 411)
Any idea what is happening here?is there a better way to access a nested key? example:
(defn handler [{:keys [services]}]...
I need to access the database key in the services map
do I need to nest a :keys function
I don't need the services keys at all
nice idea!
You can use the get-in function instead of this approach
(def my-map {:foo {:bar {:baz 1}}})
(get-in my-map [:foo :bar :baz])
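Nested associative destructuring is another option when only the inner key is needed; a small sketch, assuming handler and :database names as in the question:

```clojure
;; Pull :database straight out of the nested :services map
;; without reaching for get-in.
(defn handler [{{:keys [database]} :services}]
  database)

(handler {:services {:database :my-db}})
;; => :my-db
```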
Any ready fn / solution which I can use to make this true
? So :status
is equal, :body
is equal and in :headers
there is a map and Cache-Control
is equal. But :headers
has more items, which I don’t care about. So a map compare which cares only about the data that is expected.
Library with helpers for tests or something like that?
{:response {:status 200
:headers {"Cache-Control" "no-cache"}
:body "ok"}}
in
{:response {:status 200,
:headers {"Cache-Control" "no-cache",
"Access-Control-Allow-Origin" "",
"Access-Control-Allow-Headers" "Content-Type, User-Agent"},
:body "ok"},
:request {:protocol "HTTP/1.1",
:remote-addr "localhost",
:headers {"host" "localhost"},
:server-port 80,
:uri "/shops/labels/uuid/1/orders/2",
:server-name "localhost",
:body nil,
:scheme :http,
:request-method :get},
:headers nil,
:app #object[api.core$add_headers$fn__24549 0x4f3a36b9 "api.core$add_headers$fn__24549@4f3a36b9"],
:content-type nil,
:cookie-jar nil}
You're looking for a submap predicate. It's not in core, but Alex Miller wrote a good one for spec tests
Not in core ... yet
Not sure, but maybe this could be used https://github.com/noprompt/meander I still need to test it myself
I didn’t find anything better, than Alex solution
(defn submap?
"Is m1 a subset of m2?"
[m1 m2]
(if (and (map? m1) (map? m2))
(every? (fn [[k v]] (and (contains? m2 k)
(submap? v (get m2 k))))
m1)
(= m1 m2)))
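Applied to the expected/actual responses from earlier in the thread, the expected map checks out against the richer actual one; a sketch (the submap? definition is repeated so the snippet stands alone):

```clojure
(defn submap?
  "Is m1 a subset of m2?"
  [m1 m2]
  (if (and (map? m1) (map? m2))
    (every? (fn [[k v]] (and (contains? m2 k)
                             (submap? v (get m2 k))))
            m1)
    (= m1 m2)))

;; expected keys/values only; extra headers in the actual map are ignored
(submap? {:status 200
          :headers {"Cache-Control" "no-cache"}
          :body "ok"}
         {:status 200
          :headers {"Cache-Control" "no-cache"
                    "Access-Control-Allow-Origin" ""}
          :body "ok"})
;; => true
```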
https://github.com/borkdude/clj-kondo/blob/1953b9323377a785e75292b478ec48a0082c309e/test/clj_kondo/test_utils.clj#L15 inspired by Alex
In advance, I apologize for the length of this post.
I have a question about style.
I've almost committed to giving a talk about Clojure, and why it's a good choice for certain problems, and why it's a good choice for other problems too.
Consequently, I'd like to introduce some "functional" style to the audience.
I think recursion, sequences and laziness might be good topics. More than that might be too much for the 45 minutes.
Anyway, the "test" problem is "gathering" entries in a collection according to a function that compares (in some user-defined way) a value in the collection with the next value in the collection, gathering like ones together.
If I was gathering consecutive equal elements on [1 2 2 3 6 6 6 8]
, I'd get [[1] [2 2] [3] [6 6 6] [8]]
, and if I was gathering odds/evens, I'd get [[1] [2 2] [3] [6 6 6 8]]
.
The functions I've come up with are
;; given a collection c, and a gathering function f, return two vectors
;; where the first is the first set of gather elements and the second is
;; the rest of the collection less the gathered elements.
(defn col-split
[f c]
(reduce
(fn [accum v]
(if (or (empty? (first accum))
(f (last (first accum)) v))
(-> accum
(update 0 conj v)
(update 1 rest))
(reduced accum)))
[[] c]
c))
;; given a collection c, and a gathering function f, return a lazy
;; sequence of vectors which are vectors of gathered elements.
(defn lz-split
[f c]
(if (seq c)
(lazy-seq
(let [[g rst] (col-split f c)]
(cons
g
(lz-split f rst))))))
col-split
returns the first qualifying elements, and then
everything else. I use reduced
to short-circuit the evaluation as
soon as I've assembled the first group. So, I guess it's as lazy an
eager function as possible.
lz-split
uses col-split
to return a lazy sequence of the gathered
elements.
So, gathering equals would be
(lz-split = [1 2 2 3 6 6 6 8])
and gathering odds/evens would be
(lz-split #(= (mod %1 2) (mod %2 2)) [1 2 2 3 6 6 6 8])
Now, my question is: is this example Clojure'y?
I'm less interested in whether it's the best way (there's probably 5 decent ways of doing it), or whether it's the fastest way than I am in its clojure-y-ish-ness.
Any deductions for style? I'm not in love with the '`let`' in lz-split
, it just seems inelegant - but not offensive.
Thoughts?
lz-split
sounds suspiciously similar to clojure.core/partition-by
. If so, the most Clojure'y way would be to use partition-by
. :)
Thanks. I looked at that, but doesn't partition-by
only consider the current element? I didn't see an obvious way for it to look back/ahead. Did I miss something obvious?
> But (partition-by = [1 2 2 3 6 6 6 8]) returns ((1 2 2 3 6 6 6 8))
> [...] a function that compares (in some user-defined way) a value in the collection with the next value in the collection and gathering like-ones together
You don't need =
, you need to group by some function's value. Since it's user-defined, you can just use a different function:
user=> (partition-by identity [1 2 2 3 6 6 6 8])
((1) (2 2) (3) (6 6 6) (8))
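For the odds/evens case from earlier, the same idea works by choosing a key function that captures parity, e.g.:

```clojure
;; partition-by splits whenever the key function's value changes,
;; so odd? groups consecutive odds and consecutive evens together.
(partition-by odd? [1 2 2 3 6 6 6 8])
;; => ((1) (2 2) (3) (6 6 6 8))
```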
Yes, for reduce. I did want to illustrate that, so I stuck it in the col-split
function. My thinking was to then illustrate the lazy part in lz-split
, which is entirely redundant if you want the whole gathered sequence back. It only makes sense if you take
on it
@U2FRKM4TW I agree. But, that might be a degenerate case. Let's say I wanted to gather elements into sequences where the previous number is less than or equal, using [1 2 2 1 3 6 6 6 8]
as the input.
(lz-split <= [1 2 2 1 3 6 6 6 8])
returns ([1 2 2] [1 3 6 6 6 8])
, which is what I'd want. How might partition-by
deal with this?
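partition-by only sees one element at a time, so a pairwise predicate like <= does need something else. A minimal eager sketch with reduce (gather-by is my name, not a core fn or the original lz-split):

```clojure
;; Group consecutive elements while (f previous current) stays truthy.
(defn gather-by
  [f coll]
  (if (empty? coll)
    []
    (reduce (fn [groups x]
              (if (f (peek (peek groups)) x)
                ;; predicate holds: extend the last group
                (conj (pop groups) (conj (peek groups) x))
                ;; predicate fails: start a new group
                (conj groups [x])))
            [[(first coll)]]
            (rest coll))))

(gather-by <= [1 2 2 1 3 6 6 6 8])
;; => [[1 2 2] [1 3 6 6 6 8]]
```

Unlike lz-split it is fully eager, which is the trade-off for its brevity.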
Thanks. I'd hate to give a talk and have someone point out that the whole lot could be reduced to one line. Very egg-on-face. 😕
What solutions do you use for database per tenant
in SaaS
? So each organization has a separate DB. I was looking for any solution about 1 year ago and my conclusion was: write my own solution based on, for example, flyway db
, because there is nothing on the market to maintain DBs and migrations for many tenants. Something changed maybe? Did I miss something? How do you solve this? Or do you use 1 DB for all tenants?
I'm currently using 1db for all tenants, but I've done db-per-tenant in the past...
I've home-rolled solutions by hacking existing migrators, active-record in a previous life and drift more recently
it was pretty straightforward to hack existing migrators to my purpose... but sadly i don't have code to share, it was a closed-source app
be aware that you may start to run in to difficulties with db-per-tenant as your number of tenants increases - either with number of tables in a db (if you are using a single db instance), or connection pool management (if you are using multiple db instances)
in general how would you describe complexity of both? I feel I should choose db per tenant, but I am a little afraid about complexity
for a smallish number of tenants it was fine - we used postgresql namespaces so that the tenants saw nothing different from a single-tenant db
hmm I thought having all tenants in 1 DB gives more issues with scale, than DB per tenant
well, maybe, that depends on your db :) ... we're currently using cassandra which makes sharding easy
but it's certainly very different to postgresql, and not suitable for all use-cases
iirc large numbers of tables can be problematic in many databases - e.g. https://www.postgresql.org/message-id/18728.1027611113%40sss.pgh.pa.us
never used it, so don't know
I'd query how it behaves with large numbers of tables and how connections are managed
well you've got roughly two ways of doing db-per-tenant... separate tables/namespaces in a single db instance, or completely separate db instances (or some blend of the two)
the first gives you large numbers of tables in a db instance, the second gives you lots of active connection to lots of db instances
does it really matter vs keeping all tenants in 1 DB with extra column to identify user?
depends... memory usage in the end - if you have 10k dbs with 10 connections each in each vm and each connection takes a few KB then that gets to be quite a lot of RAM
in my previous apps the number of tenants was 10s, so it was never an issue
so the same number of connections to 1 DB uses less memory than the same number of connections spread across many DBs
tens of tenants, i.e. not so many
generally there is some state associated with each connection - socket, buffer, prepared-statements, other state - exactly what depends on the db
What is the trade-off with using table rows to store tenant data? And using tables this way to store other domain related data?
I don’t feel comfortable without tenants isolations, but it looks like the right way to do this
hmm will it work with dozens / thousands of policies? doing this for all tables sounds complex. But idea is interesting.
any hints how do you deal with export / import / backup for tenant in multi-tenancy single database?
lots of ways @U0WL6FA77 , depending on need - in different circumstances and with different databases i've used all of: materialized-views, select-to-csv, streaming-table-scan-to-csv, spark
we're currently using juxt/joplin
(which is based on weavejester/ragtime
) for our cassandra migrations... it lets you detect conflicts caused by merging (i.e. where a new migration which is older than the latest currently applied migration appears because of a git merge) which turns out to be a must-have
I'm processing some data using transducers and I was curious to know if 'unrolling' a particular function would be faster, but in fact it was slower and I don't understand why... here's the code https://gist.github.com/Hendekagon/b2f5639d6d56127dbe6b1ec83930e323
the unrolled function does the same thing as the transducer-based function, but it's slower
I expected that not using partition-all
and map
(transducer forms) etc would make it run faster
I am a little confused. It seems to me that you are using transducers in both approaches, since you are using sequence
.
no - see the macroexpansion of make-test2
which is the unrolled code
the function test2
which that macro creates is the alternative to transducer code I'm referring to
that macro uses sequence
to create the unrolled code
but it's the unrolled code I'm comparing with, and that doesn't use sequence
or transducers
Okay, now I see it. That's a cool trick! To your original question, maybe try clj-java-decompiler
to inspect the generated byte code? It can be really helpful in understanding what's going on.
did you run the code in repl or in a standalone program ?
if you actually plan to optimize for non-repl production make sure you take timings outside of repl. i have seen quite a lot of variations introduced just by repl itself (something with the optimizations appears to behave differently)
Criterium - which you can tell from the gist
should i open a jira ticket or ask on http://ask.clojure.org for a ticket for 1.11 to have a clojure.java.javadoc/*core-java-api*
version specific to java 13 rather than falling back to the java 8 version?
anyone have simple way to get min max dates from list of date inst values
[#inst"2020-02-01T05:00"
#inst"2020-03-01T05:00"
#inst"2020-04-01T05:00"]
(apply min-key inst-ms ...)
would that work?
trying
ok .. now that was a good one .. very slick
thanks mate 🙂
no problem :-)
For min
, max
etc. I did a fair bit of benchmark and it is often best to use the reduce
form over apply
.
These two are interchangeable,
(apply min [3 5 7 1 2]) => 1
(reduce min [3 5 7 1 2]) => 1
While apply
does a lot of gymnastics making the vector seem like args, reduce
maintains the minimum till now, and reduces through the collection.
The reduce
version of the above will be,
(reduce (partial min-key inst-ms)
[#inst"2020-02-01T05:00"
#inst"2020-03-01T05:00"
#inst"2020-04-01T05:00"])
fascinating actually .. thanks for sharing :beach_with_umbrella:
If the array might be empty you'll need an initial value when using reduce.
(reduce (partial min-key inst-ms) [])
Execution error (ArityException) at eval8568 (main.clj:32).
Wrong number of args (1) passed to: clojure.core/min-key
It is also a problem when using
(apply min-key inst-ms [])
Execution error (ArityException) at eval8598 (main.clj:32).
Wrong number of args (1) passed to: clojure.core/min-key
good points, i will fix my function to cover that
i can think of a way to do it, but it's not very fun
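One way to guard the empty-collection case is a seq check before reducing; a small sketch (min-date is my name, not a core fn):

```clojure
;; Earliest inst in coll, or nil for an empty coll.
;; inst-ms turns an inst into epoch milliseconds, so min-key
;; picks the chronologically smallest value.
(defn min-date
  [coll]
  (when (seq coll)
    (reduce (partial min-key inst-ms) coll)))

(min-date [#inst"2020-03-01T05:00"
           #inst"2020-02-01T05:00"
           #inst"2020-04-01T05:00"])
;; => the 2020-02-01 inst

(min-date [])
;; => nil
```

Swapping min-key for max-key gives the latest date instead.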
How can I "merge" two sequences, like overwriting one with each other? [1,2,3,4,5] + [:a :b :c] => [:a :b :c 4 5]
If the smaller sequence has m elements and longer sequence has n elements, how about dropping m elements from the larger sequence and placing it after the smaller sequence?
(defn prefer-seq [s1 s2]
(when-let [[x] (or (seq s1) (seq s2))]
(cons x (lazy-seq (prefer-seq (rest s1) (rest s2))))))
I tried this one liner,
(map #(or %1 %2)
(concat [:a :b :c] (repeat nil)) ;; prevent map from terminating early
[1 2 3 4 5])
This seems to work for all cases,
(defn merge-seqs
[xs ys]
(if (and (seq xs)
(seq ys))
(lazy-seq
(cons (first ys)
(merge-seqs (rest xs) (rest ys))))
(or (seq ys) (seq xs))))
(merge-seqs [1 2 3 4 nil] [:a nil :c])
;; => (:a nil :c 4 nil)
Note: I kept the replacements as the second sequence as that is the convention with merge
as well. And, modified it to work for any number of sequences,
(defn merge-seqs
[& ss]
(letfn [(merge-two
[xs ys]
(if (and (seq xs)
(seq ys))
(lazy-seq
(cons (first ys)
(merge-two (rest xs) (rest ys))))
(or (seq ys) (seq xs))))]
(reduce merge-two ss)))
(merge-seqs [1 2 3 4 nil] [:a nil :c nil :e] [nil nil 3.0])
;; => (nil nil 3.0 nil :e)
Sorry, I left for a second. Thanks for all the replies, I wanted to make sure that there is nothing in core
doing this in one statement 🙂
@UJRDALZA5, why wouldn't a simple concat work, like you originally suggested? I assume "merge" means that the 2nd vector's values replace the first, that's how merge works for maps anyway. It works no matter which vector is longer.
(let [a [1 2 3 4 5]
b [:a :b :c]]
(concat b (drop (count b) a)))
=> (:a :b :c 4 5)
For two or more seqs, following @UBRMX7MT7's approach (puzzles are nice):
(defn merge-seqs [a b & args]
(let [s (concat b (drop (count b) a))]
(if (empty? args)
s
(recur s (first args) (rest args)))))
@UBRMX7MT7 Yeah, that works too. I assumed dropping more than the count is a problem, but apparently it's not.
@USJQPSBD3 Note that unlike all other solutions, which are lazy, yours is eager because into
is eager. For vectors or already realized sequences, this is perfectly fine, but when dealing with lazy sequences sometimes you will want to postpone the evaluation.
(defn aa [a b] (concat b (keep-indexed #(when (<= (count b) %1) %2) a)))
how about this?
(defn map-longest
([f c1 c2]
(lazy-seq
(let [s1 (seq c1) s2 (seq c2)]
(when (or s1 s2)
(cons (f (first s1) (first s2))
(map-longest f (rest s1) (rest s2))))))))
(map-longest (fn [a b] (or a b)) [1 2 3] '[a b c d])
@UJRDALZA5 yep. For lazy ones the concat thing makes more sense but then the output collection might not be the original type but a list.
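When the original vector type matters, the lazy concat result can be poured back with into; a sketch using the values from the question:

```clojure
;; concat returns a lazy seq; into [] realizes it back into a vector.
(let [a [1 2 3 4 5]
      b [:a :b :c]]
  (into [] (concat b (drop (count b) a))))
;; => [:a :b :c 4 5]
```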