This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2017-09-09
Channels
- # beginners (376)
- # cider (7)
- # cljs-dev (4)
- # clojure (96)
- # clojure-dev (7)
- # clojure-finland (2)
- # clojure-spec (1)
- # clojure-uk (15)
- # clojurescript (54)
- # cryogen (1)
- # defnpodcast (2)
- # docs (4)
- # emacs (1)
- # fulcro (2)
- # hoplon (15)
- # lumo (19)
- # off-topic (28)
- # om (3)
- # pedestal (2)
- # portkey (6)
- # proton (2)
- # re-frame (34)
- # reagent (4)
- # ring (3)
- # spacemacs (5)
- # unrepl (3)
my solution:
(defn jsonobject->edn [jsobj]
  (cheshire.core/parse-string (.toString jsobj) false))
That's not really converting to EDN (which is a text format) -- it's converting to "Clojure" data structures, so maybe json-object->clj
?
Or perhaps read-json-object
?
(which is more in keeping with clojure.edn/read-string
for example)
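(A quick illustration of the naming point above, using only the standard library: EDN is a *text* format, so reading it yields Clojure data and printing Clojure data yields EDN text.)

```clojure
(require '[clojure.edn :as edn])

;; reading EDN text gives Clojure data structures:
(edn/read-string "{:a 1}")  ;=> {:a 1}

;; going the other way, pr-str produces EDN text from data:
(pr-str {:a 1})             ;=> "{:a 1}"
```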
And naming is hard… I tend to spend more time figuring out a name for a function than coding the function itself 😄
Names matter, but you'll always get them wrong. So it's good to practice reading things as if names are just random unique ids, instead of documentation.
is it ok/idiomatic to use :keys
destructuring in protocol method definitions as in:
(defprotocol IMyP
(method [this & {:keys [foo bar] :or {foo false bar true}}]))
Have you tried that?
@mbjarland I don't think that code does what you think.
@U07TDTQNL Tried it yes, but as you say, it does not seem to do what I would expect. So assuming I want this kind of optional parameter, am I forced to just add a ton of arities to the protocol method declarations?
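(Protocol method signatures don't support varargs or destructuring, so the `& {:keys [...]}` form above silently doesn't do what it looks like. One common workaround, sketched here with hypothetical names, is to declare an explicit options-map arity and destructure inside the implementation:)

```clojure
(defprotocol IMyP
  ;; two fixed arities instead of varargs
  (method [this] [this opts]))

(defrecord MyImpl []
  IMyP
  ;; no-opts arity delegates to the options-map arity
  (method [this] (method this {}))
  (method [this opts]
    ;; destructure the plain map here, inside the implementation
    (let [{:keys [foo bar] :or {foo false bar true}} opts]
      [foo bar])))

;; (method (->MyImpl))          ;=> [false true]
;; (method (->MyImpl) {:foo 1}) ;=> [1 true]
```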
Is the result of an eduction
different from calling seq
on that eduction other than the fact that the applications will be performed every time reduce/iterator is called? I'm trying to figure out if it's safe to treat an eduction as a sequence or if it will bite me later somehow.
And what is the point of returning an eduction instead of a lazy seq? I don't really get it. 😛
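(A sketch of the difference being asked about: an eduction re-runs its transformations on every reduction/iteration, whereas a realized sequence caches its elements. The `calls` atom here just counts how many times the mapped function runs.)

```clojure
(def calls (atom 0))

(def ed (eduction (map (fn [x] (swap! calls inc) (inc x)))
                  (range 3)))

(into [] ed)  ;=> [1 2 3]
(into [] ed)  ;=> [1 2 3] again, but @calls is now 6: the work ran twice

;; by contrast, a realized seq does the work once and caches it:
(def s (doall (map (fn [x] (swap! calls inc) (inc x)) (range 3))))
(reduce conj [] s)
(reduce conj [] s)  ;; @calls grew by only 3 for both reduces combined
```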
Should I be concerned when composing a large number of functions (potentially hundreds)? In a very much non-performance-critical setting
@hmaurer If it isn't performance critical, why would you be concerned? And even if it was, try it and see whether it's fast enough.
What's the problem domain for which the solution seems to be massive composition?
@seancorfield call stack depth, etc? I am not familiar with those things on the JVM
it's configurable, but you can compose functions without increasing the call stack
Basically I am generating a schema and found it nice to express as a composition of functions each doing a little bit of work
(eg. comp
or trampoline
)
that is yet another way to compose functions without making the call stack deeper, yeah
good to know, thanks @noisesmith !
@hmaurer If you comp
thousands of functions, you will eventually reach the maximum allowed depth for a Java call stack.
You could, however, write an "interpreter" that processes a list of functions without increasing the call stack.
oh, right, comp does actually make the stack go deeper, right
@stuartsierra thanks! however I am not quite sure I understand why comp (given that code) makes the call stack deeper; would you mind explaining?
each call is wrapped in the ones before it in the arg list
It kind of has to, to do what it does (I mean, you could theoretically do that sort of thing on the heap, but it's hard to see why you'd need to).
oooh, because of reduce? So it does not make it deeper for the <= 2 args case, but does make it deeper for > 2?
but you could definitely write a "trampolining comp" version that didn't do that (trading worse performance for not getting stack overflows for huge arg lists)
@hmaurer even with two args, that's two stack frames
@noisesmith ah right
if there was an arity-3 case, like this:
([f g h] (fn [x] (f (g (h x)))))
then it would also only generate two stack frames, right @noisesmith ?
that would be 3 - one for f, g, h each
But (h x) will be called and will return, then g will be called and return, then f will be called and return, right? that’s only one level at each point
when h is called, that's inside a call to g, which is inside a call to f
@noisesmith how is it inside the call to g? the result of h is passed as an argument
@noisesmith is argument evaluation lazy in clojure?
you can see by throwing an exception, one moment, generating an example in my repl
boot.user=> (defn foo [x] (println "-> foo"))
#'boot.user/foo
boot.user=> (foo (println "abc"))
abc
-> foo
nil
you are right about (f (g (h x))) not needing to go three deep, my bad
but comp still makes things go deeper because of the way it handles varargs with self calls that generate new functions (I think? or was stuart wrong?)
@noisesmith 🎉 what code did you run to check that?
+user=> (let [f inc g inc h inc i inc j inc k inc l inc m /] (f (g (h (i (j (k (l (m 1 0)))))))))
ArithmeticException Divide by zero clojure.lang.Numbers.divide (Numbers.java:158)
+user=> (pst)
ArithmeticException Divide by zero
clojure.lang.Numbers.divide (Numbers.java:158)
clojure.core// (core.clj:1019)
clojure.core// (core.clj:1012)
user/eval21 (NO_SOURCE_FILE:9)
user/eval21 (NO_SOURCE_FILE:9)
clojure.lang.Compiler.eval (Compiler.java:6978)
clojure.lang.Compiler.eval (Compiler.java:6941)
clojure.core/eval (core.clj:3187)
clojure.core/eval (core.clj:3183)
clojure.main/repl/read-eval-print--9945/fn--9948 (main.clj:242)
clojure.main/repl/read-eval-print--9945 (main.clj:242)
clojure.main/repl/fn--9954 (main.clj:260)
nil
@noisesmith yes I think that’s right; comp
uses reduce
after 2 args so that will generate extra stack frames, right?
I think so - checking now
+user=> (let [f inc g inc h inc i inc j inc k inc l inc m /] ((comp f g h i j k l m) 1 0))
ArithmeticException Divide by zero clojure.lang.Numbers.divide (Numbers.java:158)
+user=> (pst)
ArithmeticException Divide by zero
clojure.lang.Numbers.divide (Numbers.java:158)
clojure.core// (core.clj:1019)
clojure.core// (core.clj:1012)
clojure.core/comp/fn--6808 (core.clj:2543)
user/eval30 (NO_SOURCE_FILE:13)
user/eval30 (NO_SOURCE_FILE:13)
clojure.lang.Compiler.eval (Compiler.java:6978)
clojure.lang.Compiler.eval (Compiler.java:6941)
clojure.core/eval (core.clj:3187)
clojure.core/eval (core.clj:3183)
clojure.main/repl/read-eval-print--9945/fn--9948 (main.clj:242)
clojure.main/repl/read-eval-print--9945 (main.clj:242)
nil
now I'm back to my original thought that comp
avoids creating more stack frames, but I would be surprised if @stuartsierra was wrong about this
boot.user=> ((apply comp (map (fn [_] inc) (range 10000))) 1)
10001
boot.user=> ((apply comp (map (fn [_] inc) (range 100000))) 1)
boot.user=> java.lang.StackOverflowError:
btw you can just use (repeat 100000 inc)
for that, but yeah, that definitely does stack overflow
@tagore yeah I am sure it’s not desirable; I am new to clojure and had to write a sequence of transformations on some map. I started using comp and kept going with it
TL;DR; I am aware comp
probably isn’t the wisest approach, but by using it for this purpose it made me wonder how far I could go with it before blowing up the JVM 😛
@stuartsierra’s mention of writing an interpreter for a list of functions seems good (which, unless I am mistaken, is what Pedestal is doing with interceptors)
@hmaurer as hinted in one of the responses earlier, you could create a list/vector of the functions you want to compose and then reduce
over that, calling each one on the accumulating value...
@seancorfield I've read what he wrote, but I'm still not clear about what he wants. I can be a bit slow, I'm afraid.
(defn trampocomp [& fns]
  (fn [x]
    (reduce #(%2 %)
            x
            fns)))
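(For completeness, a self-contained sketch of that reduce-based composition with a couple of usage calls. Note two things: unlike `comp` it applies the functions left-to-right, and because `reduce` is iterative it composes huge function lists without deepening the call stack.)

```clojure
(defn trampocomp [& fns]
  (fn [x]
    ;; iteratively thread x through each fn; no nested calls, so
    ;; the stack stays flat no matter how many fns there are
    (reduce #(%2 %1) x fns)))

((trampocomp inc #(* 2 %)) 5)              ;=> 12 (inc first, then doubling)
((apply trampocomp (repeat 100000 inc)) 1) ;=> 100001, no StackOverflowError
```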
As usual @noisesmith is a faster typist than I am 🙂
@seancorfield I learned to type by being a MUD addict
My MUD days are so far behind me now that I've become a slower typist. That's my story, and I'm sticking to it! 🙂
@hmaurer Not at all.. as I said, I can be a bit slow to understand what people want as an outcome
I think I got the answer(s) I was looking for, but I’ll add a bit of context here in case you have further advice: I am generating a GraphQL schema from an application schema (database schema + some extra info). I am doing this as a sequence of transformations on a “schema” map. So I start with an empty map, then generate some object types, then generate some queries, etc
I wrote each transformation as a function that takes two config maps and extra options, and return a 1-arity function which takes a schema and returns a schema
The reason I was expecting to reach a high number of compositions (in the low hundreds) was because some of my transformation steps should be executed for each entity type
So I have code like this
(apply comp
       (map #(gen-for-node hodl-config config %)
            (keys (:nodes hodl-config))))
(gen-for-node returns a (fn [schema] ... transformed-schema))
the fact that it’s a GraphQL schema is irrelevant tbh, I just mentioned it to give context
the essence is, I think, building up a map as a sequence of (parameterized) transformations
I've never felt the need to compose functions that deeply, nor have I seen anyone else need to.