This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2016-05-05
Channels
- # admin-announcements (4)
- # beginners (47)
- # boot (69)
- # cider (11)
- # cljsjs (1)
- # cljsrn (5)
- # clojure (163)
- # clojure-austin (17)
- # clojure-russia (27)
- # clojure-uk (46)
- # clojurescript (109)
- # core-async (28)
- # cursive (2)
- # data-science (1)
- # datavis (1)
- # datomic (9)
- # dirac (33)
- # funcool (8)
- # hoplon (1)
- # lein-figwheel (1)
- # leiningen (1)
- # luminus (23)
- # mount (3)
- # nyc (2)
- # off-topic (25)
- # om (3)
- # onyx (4)
- # perun (7)
- # re-frame (10)
- # reagent (2)
- # ring-swagger (4)
- # spacemacs (4)
- # uncomplicate (1)
- # untangled (21)
- # vim (2)
- # yada (2)
if you aot compile, the compiler saves off the bytecode, and then just runs from the generated bytecode, without invoking the compiler again, so no macro expansion
if you run from .clj files, then every time the file is loaded the compiler is invoked and macros are expanded
none of that has anything to do with whether you are loading bytecode or clj files from a jar or not
a jar is just a zip file, so creating a jar is just shoveling some bits into a zip file; other processes (like aot compiling) write bits to a filesystem somewhere, which may or may not be shoveled into a jar
so, depending on how you are building things, and what your setup is, you can have macros expanded any number of times in any number of places
I see. I'm asking because I was talking with someone who suggested you can slurp a file inside a macro, and that this makes it the same as if you had hard-coded the content of the file in a string. I questioned that, as my understanding is that unless you AOT compile the code that uses the macro, it will actually load the file at run time, but only once, making it the same as simply memoizing the slurp.
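For concreteness, here is a minimal sketch of the kind of macro being discussed (the file path and names are hypothetical). Because the macro body runs at macro-expansion time, the file's contents end up spliced into the expansion as a string literal:

```clojure
;; Hypothetical macro: reads a file at macro-expansion time.
;; The returned string is spliced into the expansion as a literal,
;; so the file is read when the form is *compiled*, not when it runs.
(defmacro embed-file [path]
  (slurp path))

;; (def contents (embed-file "data/notes.txt"))
;; expands, at compile time, to something like:
;; (def contents "...file contents as of compile time...")
```

Whether "compile time" happens once (AOT) or on every namespace load (running from .clj files) is exactly the distinction described above.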
anything emitted by the macro has to be embedded in the bytecode the compiler generates which has size limits
in general, you should never actually do anything in a macro; a macro is for form rewriting
there is sort of a lifecycle of code that goes something like read -> compile -> run
Are those size limits present in a Java class? Or is this something special about the bytecode? Say I have a huge file that I slurp in a macro: could it be that the bytecode size limit is breached, where hard-coding the content into a var would not have breached the same limit? (I know this is bad, just trying to understand.)
the first thing compilation does is analysis, and the first thing analysis does is macroexpansion
not sure what you mean? clojure generates jvm bytecode, not java (the language) code
jvm bytecodes have to exist in a class to be run, there is a classfile format that is the format for classes on disk or in a jar
I actually did something similar in Clojurescript. I had a macro defined to slurp local SVG files and turn them into clojure-friendly notation.
Right, I get the phases, I'm unsure when they happen though. I thought they happened when you load a namespace, which would be at runtime, unless it's AOT compiled, which would be at compile time.
because clojurescript is split, the compilation happens in clojure, the runtime happens on a javascript runtime, the boundaries are more distinct
whenever you load clojure code you start at the first phase and go to the last, the difference between aot compilation and not, is aot compilation saves the bytecode, so the next time you load it, instead of loading clojure code, you load bytecode directly
calls to require, go through some stuff, and then bottom out in calls to clojure.lang.RT/load which has some logic to determine if it should load existing bytecode, or load clojurecode
So, ignoring the bytecode size limitations, slurping a file in a macro would only be identical to hard-coding the content in a string if the code calling the slurping macro was AOT compiled.
As for the size limitations, it sounds like those are not a consequence of the macro expansion, but simply of the bytecode a class is allowed to contain. So if you hard-coded the content, you would also hit that limit, correct?
In ClojureScript, everything is AOT converted to JavaScript, so I guess all macros expand at that phase?
Can someone walk me through the thought process when seeing this? (Overall I'm trying to think about recursion in a more efficient way.)
(defn permutations [s]
  (lazy-seq
   (if (seq (rest s))
     (apply concat (for [x s]
                     (map #(cons x %) (permutations (remove #{x} s)))))
     [s])))
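A quick REPL check of the definition (repeated here so the snippet is self-contained); the ordering of results follows the order of elements in s:

```clojure
(defn permutations [s]
  (lazy-seq
   (if (seq (rest s))
     (apply concat (for [x s]
                     (map #(cons x %) (permutations (remove #{x} s)))))
     [s])))

(permutations [1 2 3])
;; => ((1 2 3) (1 3 2) (2 1 3) (2 3 1) (3 1 2) (3 2 1))
```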
The permutations of the set {a, b, c} denoted P(a,b,c) is a hooked to all of the permutations of {b,c}, b hooked to all of the permutations of {a,c}, and c hooked to all of the permutations of {a,b}
here, a hooked to the permutations of {b,c} means a + [(b,c), (c,b)] => a + (b,c) & a + (c,b) => (a,b,c), (a,c,b)
in english, the set of permutations is each element hooked to each permutation of the set with that element removed
we loop over each element of s (the set), and get all of the permutations of the smaller subset s - {x}, and then tack x back onto each of those things
the "hook x back up to all the permutations of s - {x}" part is the (map #(cons x %) (permutations (remove #{x} s))) part
for each x, remove x from s, compute the permutations, and cons x back onto the front of the list
so we can't just concat them, we need to apply concat to them, so concat looks at the list of arguments, which are just the permutations
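To see why apply concat is needed, here is a small sketch with hypothetical data shaped like what the for produces (a seq of seqs of permutations, one inner seq per element of s):

```clojure
(def per-element '(((:a :b) (:b :a))   ; permutations contributed by one element
                   ((:c :d) (:d :c)))) ; ...and by another

;; concat with one argument just returns that seq; the nesting survives:
(concat per-element)
;; => (((:a :b) (:b :a)) ((:c :d) (:d :c)))

;; apply unrolls the outer seq into separate arguments to concat,
;; flattening exactly one level:
(apply concat per-element)
;; => ((:a :b) (:b :a) (:c :d) (:d :c))
```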
@dpsutton: explaining it in mathematical terms like that is actually pretty good, and makes it easier to follow. The toughest bit is the recursion ("the big list of lists"); it takes a while to picture it in one's head, and you need some trial and error to get it.
i've seen the definition before, and this is a known definition for getting permutations, so i wasn't looking at it without knowing what it was trying to do
also, there is a definition (mathematically) of permutations based on this so this largely mirrors one of these definitions
which makes it nice in the sense that you just define what it is and poof, you've got the code
but also, inside of the for loop, you have a map (not a hashmap-style map) which returns a list. since you are for-ing, you are making a list of these
but that's exactly what apply is for, when the arguments to a function are passed as a list and you just want to apply a function to the elements of that list
it's kinda cool that it's this style. it's inefficient for a stack machine, but once you start thinking like this, if efficiency isn't an optimization parameter at the moment, it can be nice
@danielcompton: thanks, looks like exactly what I wanted
map-indexed exposes the index to the API caller, so the user doesn't need to write stateful code to maintain the current data index. Why did the designers provide reduce-kv instead of a reduce-indexed?
When Rich introduced transducers in Clojure, the concept was based on the assumption that map can be implemented via reduce. But how can we implement (map + [1 2] [1 2]) via reduce if reduce doesn't accept multiple collections? Would the current reduce API need to be enhanced to achieve this?
Also, (map + [1 2] [1 2]): is this common? Do we have this in Lisp? I've also never had to do something like this.
I think map is meant to be used on a single collection basis. Do you have a concrete example of how this could be useful?
(defn map2 [f & colls]
  (reduce #(conj %1 (apply f %2))
          []
          (partition (count colls) (apply interleave colls))))
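A quick check of map2 at the REPL (note that it assumes the collections are the same length, since interleave stops at the shortest, and it returns a vector where map would return a seq):

```clojure
(defn map2 [f & colls]
  (reduce #(conj %1 (apply f %2))
          []
          (partition (count colls) (apply interleave colls))))

(map2 + [1 2] [1 2])        ;; => [2 4]
(map2 * [1 2 3] [10 20 30]) ;; => [10 40 90]
```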
re: reduce-indexed, I answered a question the other day on reddit (https://www.reddit.com/r/Clojure/comments/4fhb3x/new_clojurians_ask_anything/d28y0mb)
where it would have been useful; in fact my first approach was to write my own reduce-indexed, but I ended up using loop/recur instead as I thought it was more elegant.
reduced — Usage: (reduced x). Wraps x in a way such that a reduce will terminate with the value x. Added in Clojure version 1.5.
the main thing with map2 (above) is to transpose your matrix of inputs so that you've got the correct parameters.
reduced just informs the reduce-fn that your computation has terminated and it can shortcut-exit with that value.
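A small sketch of that short-circuit behavior: stop summing an (infinite) sequence once the running total passes a threshold.

```clojure
(reduce (fn [acc x]
          (let [total (+ acc x)]
            (if (> total 10)
              (reduced total) ; terminate reduce, returning total
              total)))
        0
        (range)) ; infinite, but reduced stops the traversal
;; => 15
```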
the original questioner wanted to know how to implement map (including support for multiple colls) using reduce.
you know what, internally interleave uses map, which I was trying to avoid when transposing the collections
Hey... speaking of matrices and other slightly complex data structures, can I ask a silly question? I'm thinking of using memoize from the core library on a function where the cost comes not from complex calculations but from the need to traverse a potentially big nested vector of vectors. Is the underlying equality-testing machinery smart enough to leverage immutability to actually make this work (i.e. to just compare some kind of pointers under the hood), or will it have to traverse the whole data structure again in order to test that the argument is the same on the second through nth call (in which case I suppose I should swallow the mutability and store some state in an atom or something)?
@thiagofm: (map + [1 2] [1 2]) is used extensively in my code: summing 2D/3D/ND mathematical vectors.
(defn v+ [& args] (apply (partial mapv +) args))
where v+ is read "vector add". The trickiness with the args, apply, and partial is so v+ works like +, and (v+ [1 2] [10 12] [100 100]) works as expected
@gowder https://github.com/clojure/clojure/blob/master/src/jvm/clojure/lang/Util.java#L25
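The linked equiv implementation compares references first (k1 == k2) before falling back to structural equality, so comparing a value against itself is cheap. A small sketch (identical? is Clojure's reference comparison):

```clojure
(def big (vec (repeat 100000 (vec (range 100)))))

(identical? big big) ;; => true, a constant-time pointer comparison
(= big big)          ;; => true, short-circuits on the identity check

;; A structurally equal but distinct value forces a full traversal:
(= big (vec (repeat 100000 (vec (range 100))))) ;; => true, but O(n)
```

So memoize works cheaply when you pass the same immutable value (the same reference) each time; two separately built but equal structures still cost a traversal to compare.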
the one you don't implement will throw an AbstractMethodError if you try to invoke it
I have a Java interop question. When passing an argument into a Java method (.withInput x) where x is of type InputStream[], how does one create the Java array of ^InputStream objects?
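One way (a sketch: into-array takes the component class and a sequence; the .withInput target object is from the question above and not shown here):

```clojure
(import '(java.io InputStream ByteArrayInputStream))

(def streams
  (into-array InputStream
              [(ByteArrayInputStream. (.getBytes "first"))
               (ByteArrayInputStream. (.getBytes "second"))]))

;; (.withInput some-builder streams) ; some-builder is hypothetical
(class streams) ;; => the array class [Ljava.io.InputStream;
```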
herrwolfe: I find that most of the time it's faster to just try it in a repl rather than asking, plus you get to see the exact behaviour
@piotrek: sweet, thanks, that gives me enough confidence to make it worth running some tests...
I'm reading some information from resources/ during dev, and now when running everything as a jar I can't access it using the regular file-system methods anymore. What are your go-to solutions for this kind of thing?
@martinklepsch: java.lang.ClassLoader/getSystemResourceAsStream
@curtis.summers: does that let me do something akin to file-seq? i.e. find all resources under a certain path? it only works for specific paths, doesn't it?
@martinklepsch: Ah, no, that wouldn't work for that. Can you def the list of resources at compile time instead? Then use getSystemResourceAsStream at runtime.
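A sketch of the runtime half (the resource name config.edn is hypothetical; it must be on the classpath, e.g. under resources/):

```clojure
;; getSystemResourceAsStream returns an InputStream for a classpath
;; resource, or nil if the resource is absent.
(when-let [in (java.lang.ClassLoader/getSystemResourceAsStream "config.edn")]
  (with-open [in in]
    (slurp in)))
```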
@martinklepsch: This answer looks helpful: http://stackoverflow.com/questions/22363010/get-list-of-embedded-resources-in-uberjar
@curtis.summers: yeah, moving things to compile time seems like the easiest way for now. FWIW, instead of java.lang.ClassLoader/getSystemResourceAsStream, plain also worked fine in my case.
I'd like to have an if inside a vector literal: [1 2 (if (> x 2) x)], but omit the nil that results if the if fails, so I get either [1 2 x] or [1 2] (instead of [1 2 nil] like it does there). Any way to do this, other than moving the if outside the vector literal?
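One pattern that avoids the nil entirely (a sketch; the function name maybe-third is made up): cond-> threads the vector through conj only when the test passes.

```clojure
(defn maybe-third [x]
  (cond-> [1 2]
    (> x 2) (conj x)))

(maybe-third 3) ;; => [1 2 3]
(maybe-third 1) ;; => [1 2]

;; Or keep the literal and filter the nil out afterwards:
(let [x 1]
  (filterv some? [1 2 (if (> x 2) x)])) ;; => [1 2]
```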
@bronsa, doh! But it turns out I needn't have worried: my use case was hiccup HTML templating, and it gracefully handles nils in the middle of its vectors. Hooray, defensive libs!
@stand: shouldn’t be, if you’re running/compiling the clojure code itself on Java 8.
Has anyone run into NoSuchMethodError, NoSuchFieldError, or VerifyError during Java interop within dependent libraries?
any "breaking" API changes in your dependent library may manifest at runtime as NoSuchMethodError
What I am seeing is not broken APIs; it's more like the JVM being unable to find a parent class constructor, for example, but only when invoked from Clojure.
java.lang.NoSuchMethodError: org.apache.lucene.analysis.Analyzer.<init>(Lorg/apache/lucene/analysis/Analyzer$ReuseStrategy;)V
at org.apache.lucene.analysis.AnalyzerWrapper.<init>(AnalyzerWrapper.java:40)
at org.apache.lucene.analysis.miscellaneous.PerFieldAnalyzerWrapper.<init>(PerFieldAnalyzerWrapper.java:75)
jr, when I said uberjar I meant hand assembling multiple dependent jars. I haven't relied on lein facilities for this.
What is the preferred way to set up data for each clojure.test test? Right now I'm just surrounding each test with a let block. Is this the normal way to do it?
Why not a let inside the deftest? If you're sharing setup across more than one test, use fixtures.
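A minimal sketch of the fixtures approach (the names *db* and with-db are hypothetical): use-fixtures :each runs the wrapping function around every test in the namespace.

```clojure
(ns example-test
  (:require [clojure.test :refer [deftest is use-fixtures]]))

(def ^:dynamic *db* nil) ; hypothetical shared test data

(defn with-db [f]
  (binding [*db* {:users [{:id 1 :name "ada"}]}]
    (f))) ; any teardown would go after (f), e.g. in a finally

(use-fixtures :each with-db)

(deftest finds-user
  (is (= "ada" (-> *db* :users first :name))))
```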