#clojure
2016-05-05
hiredman00:05:22

macros are expanded before the compiler generates jvm bytecode

dimiter00:05:28

Should have googled 🙂 bean does what I need.

hiredman00:05:37

and when that happens depends

hiredman00:05:59

anytime you load clojure code, it gets compiled to jvm bytecode

hiredman00:05:49

if you aot compile, the compiler saves off the bytecode, and then just runs from the generated bytecode, without invoking the compiler again, so no macro expansion

hiredman00:05:57

(but don't aot compile)

hiredman00:05:31

if you run from .clj files, then every time the file is loaded the compiler is invoked and macros are expanded

hiredman00:05:58

none of that has anything to do with whether you are loading bytecode or clj files from a jar or not

hiredman00:05:29

a jar is just a zip file, so creating a jar is just shoveling some bits into a zip file; other processes (like aot compiling) write bits to a filesystem somewhere, which may or may not be shoveled into a jar

hiredman00:05:04

so, depending on how you are building things, and what your setup is, you can have macros expanded any number of times in any number of places

didibus00:05:02

I see. I'm asking this because I was talking with someone who suggested you can slurp a file inside a macro, and this will make it the same as if you had put the content of the file in a string, hard coded. I questioned that, as my understanding is that unless you AOT compile the code that uses the macro, it actually will load the file at run-time, but only once, making it the same as simply memoizing the slurp.

didibus00:05:46

Does my understanding make sense?

hiredman00:05:12

it depends, but that is generally a bad idea

hiredman00:05:17

(the slurp thing)

hiredman00:05:47

anything emitted by the macro has to be embedded in the bytecode the compiler generates which has size limits

hiredman00:05:13

in general, you should never actually do anything in a macro; a macro is for form rewriting
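A minimal sketch of why the slurp-in-a-macro idea behaves the way it does (the file path and contents here are made up for the demo):

```clojure
;; demo file, created just for this sketch
(spit "/tmp/demo-greeting.txt" "hello")

(defmacro embed-file [path]
  ;; slurp runs here, at macro-expansion time; only the resulting
  ;; string literal is emitted into the compiled code
  (slurp path))

(defmacro slurp-later [path]
  ;; here slurp is part of the emitted form, so it runs each time
  ;; the expansion is evaluated, i.e. at run time
  `(slurp ~path))

(embed-file "/tmp/demo-greeting.txt")   ;=> "hello" (read at expansion time)
(slurp-later "/tmp/demo-greeting.txt")  ;=> "hello" (read at run time)
```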

hiredman00:05:18

there is sort of a lifecycle of code that goes something like read -> compile -> run

hiredman00:05:31

those boxes are sort of fractal and can be decomposed

didibus00:05:31

Are those size limits present in a java class? Or is this something special about the bytecode? Like, say I have a huge file that I slurp in a macro: could it be that the bytecode size limit is breached, but hard-coding the content into a var would not have caused the same limit to be breached? (I know this is bad, just trying to understand)

hiredman00:05:06

read -> [analyze -> emit ] -> run

hiredman00:05:32

read -> [[macroexpand -> ...] -> emit] -> run

hiredman00:05:05

the first thing compilation does is analysis, and the first thing analysis does is macroexpansion
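You can watch that first step in isolation at the REPL with macroexpand-1:

```clojure
;; when is a macro; macroexpand-1 performs exactly one expansion step,
;; showing the form the analyzer will actually see
(macroexpand-1 '(when true :ok))
;=> (if true (do :ok))
```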

hiredman00:05:58

not sure what you mean? clojure generates jvm bytecode, not java (the language) code

hiredman00:05:56

jvm bytecodes have to exist in a class to be run, there is a classfile format that is the format for classes on disk or in a jar

hiredman00:05:08

that format has limits on the sizes of certain parts of it

dimiter00:05:12

I actually did something similar in Clojurescript. I had a macro defined to slurp local SVG files and turn them into clojure-friendly notation.

hiredman00:05:20

(method bodies, number of fields)

hiredman00:05:32

clojurescript is a whole other animal

didibus00:05:40

Right, I get the phases, I'm unsure when they happen though. I thought they happened when you load a namespace, which would be at runtime, unless it's AOT compiled, which would be at compile time.

hiredman00:05:11

because clojurescript is split, the compilation happens in clojure, the runtime happens on a javascript runtime, the boundaries are more distinct

hiredman00:05:15

whenever you load clojure code you start at the first phase and go to the last, the difference between aot compilation and not, is aot compilation saves the bytecode, so the next time you load it, instead of loading clojure code, you load bytecode directly

hiredman00:05:05

calls to require go through some stuff, and then bottom out in calls to clojure.lang.RT/load, which has some logic to determine if it should load existing bytecode, or load clojure code

didibus00:05:17

So ignoring the bytecode size limitations, slurping a file in a macro would only be identical to hardcoding the content in a string if the code calling the slurping macro was AOT compiled.

didibus00:05:32

As for the size limitations, it sounds like those are not a consequence of the macro-expansion, but simply of the bytecode a class is allowed to generate. So if you hardcoded the content, you should also hit that limit, correct?

didibus00:05:02

In ClojureScript, everything is AOT converted to JavaScript, so I guess all macros expand at that phase?

lewix02:05:16

Can someone walk me through the thought process when seeing this? (overall I'm trying to think about recursion in a more efficient way)

(defn permutations [s]
  (lazy-seq
   (if (seq (rest s))
     (apply concat (for [x s]
                     (map #(cons x %) (permutations (remove #{x} s)))))
     [s])))

dpsutton02:05:47

The permutations of the set {a, b, c} denoted P(a,b,c) is a hooked to all of the permutations of {b,c}, b hooked to all of the permutations of {a,c}, and c hooked to all of the permutations of {a,b}

dpsutton02:05:49

here, a hooked to the permutations of {b,c} means a + [(b,c), (c,b)] => a + (b,c) & a + (c,b) => (a,b,c), (a,c,b)

dpsutton02:05:07

in english, the set of permutations is each element hooked to each permutation of the set with that element removed

dpsutton02:05:17

that's the for [x s] part

dpsutton02:05:50

we loop over each element of s (the set), and get all of the permutations of the smaller subset s - {x}, and then tack x back onto each of those things

dpsutton02:05:58

and finally, what's the permutation of the singleton set?

dpsutton02:05:50

the "hook x back up to all the permutations of s - {x}" part is (map #(cons x %) (permutations (remove #{x} s)) part

dpsutton02:05:15

for each x, remove x from s, compute the permutations, and cons x back onto the front of the list

dpsutton02:05:26

so we are gonna get a big list of lists

dpsutton02:05:46

so we can't just concat them, we need to apply concat to them, so concat looks at the list of arguments, which are just the permutations
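A quick trace of the definition (repeated here so the example stands alone) makes the "big list of lists" and the apply concat step concrete:

```clojure
;; the permutations definition from above, repeated so this runs standalone
(defn permutations [s]
  (lazy-seq
   (if (seq (rest s))
     (apply concat (for [x s]
                     (map #(cons x %) (permutations (remove #{x} s)))))
     [s])))

;; the base case "hooked" onto each element, then flattened one level
(permutations [:a :b])
;=> ((:a :b) (:b :a))

(permutations [1 2 3])
;=> ((1 2 3) (1 3 2) (2 1 3) (2 3 1) (3 1 2) (3 2 1))
```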

lewix02:05:08

@dpsutton: that's actually pretty good how you explained it in mathematical terms, which makes it easier to follow. The toughest bit is the recursion ("the big list of lists"); it takes a while to picture it in one's head; you need to run trial and error to get it.

lewix02:05:27

it's so inefficient 😛

dpsutton02:05:36

unnesting the lists can be super confusing

lewix02:05:18

dpsutton: did you unnest it in your head or you just took a leap of faith

dpsutton02:05:00

i've seen the definition before, and this is a known definition for getting permutations, so i wasn't looking at it without knowing what it was trying to do

dpsutton02:05:24

also, there is a definition (mathematically) of permutations based on this so this largely mirrors one of these definitions

dpsutton02:05:39

which makes it nice in the sense that you just define what it is and poof, you've got the code

dpsutton02:05:29

but also, inside of the for loop, you have a map (not a hashmap style map) which returns a list. since you are for-ing, you are making a list of these

dpsutton02:05:41

so you can't concat it as the argument to concat is a single list

dpsutton02:05:03

but that's exactly what apply is for, when the arguments to a function are passed as a list and you just want to apply a function to the elements of that list

dpsutton02:05:02

it's kinda cool that it's this style. it's inefficient for a stack machine, but once you start thinking like this, if efficiency isn't an optimization parameter at the moment, it can be nice

lewix02:05:10

yea apply concat is the easiest part

lewix02:05:56

picturing the unnested list without running samples is a pain in the butt

dpsutton02:05:58

no problem at all

korny06:05:24

@danielcompton: thanks, looks like exactly what I wanted

xfcjscn09:05:42

Hi, I have a question about API design; need help here.

xfcjscn09:05:46

Why is there no API reduce-indexed, like map-indexed, in clojure core?

xfcjscn09:05:02

map-indexed exposes the index to the caller, so the user doesn't need to write stateful code to maintain the current index. Why didn't the designers want a reduce-indexed instead of just reduce-kv?

xfcjscn09:05:49

Another Question:

xfcjscn09:05:51

Why doesn't the reduce function accept multiple collections like map does?

xfcjscn09:05:04

When Rich introduced transducers in clojure, the concept was based on the assumption that map can be implemented via reduce. But how can we implement (map + [1 2] [1 2]) via reduce if reduce doesn't accept multiple collections? To achieve this, does the current reduce api need to be enhanced?

xfcjscn09:05:51

Thanks in advance 🙂

thiagofm10:05:37

I've never ever had to use a "reduce-indexed". Maybe it's just my experience

thiagofm10:05:03

Also (map + [1 2] [1 2]), is this common? Do we have this in lisp? I also never had to do something like this.

thiagofm10:05:37

What would you want to achieve with (map + [1 2] [1 2])?

turbopape10:05:31

that sums corresponding elements of the colls

thiagofm10:05:34

I think map is meant to be used on a single collection basis. Do you have a concrete example of how this could be useful?

turbopape10:05:51

no, it takes any number of collections, as long as it matches the arity of the fn

d-t-w10:05:04

it's valid to map over n collections

d-t-w10:05:15

map re-written with reduce and accepting multiple input collections might look like

d-t-w10:05:17

(defn map2 [f & colls]
  (reduce #(conj %1 (apply f %2))
          []
          (partition (count colls) (apply interleave colls))))

d-t-w10:05:01

re: reduce-indexed, I answered a question the other day on reddit (https://www.reddit.com/r/Clojure/comments/4fhb3x/new_clojurians_ask_anything/d28y0mb)

d-t-w10:05:33

where it would have been useful; in fact my first approach was to write my own reduce-indexed, but I ended up using loop/recur instead as I thought it was more elegant.
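A sketch of what such a reduce-indexed could look like, built from map-indexed (an illustrative sketch, not a core function):

```clojure
;; a hypothetical reduce-indexed: the reducing function also
;; receives the element's index
(defn reduce-indexed
  "Like reduce, but f receives [acc idx item]."
  [f init coll]
  (reduce (fn [acc [i x]] (f acc i x))
          init
          (map-indexed vector coll)))

(reduce-indexed (fn [acc i x] (conj acc [i x])) [] [:a :b :c])
;=> [[0 :a] [1 :b] [2 :c]]
```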

turbopape10:05:41

You should maybe write a transducer

turbopape10:05:49

and use reduced when the state is as you wish

turbopape10:05:07

reduced: Usage: (reduced x). Wraps x in a way such that a reduce will terminate with the value x. (Added in Clojure 1.5.)

d-t-w10:05:44

the main thing with map2 (above) is to transpose your matrix of inputs such that you've got the correct parameters.

turbopape10:05:00

Oh you actually can use reduced without transducers

d-t-w10:05:03

for each reduction

d-t-w10:05:54

reduced just informs the reduce-fn that your computation has terminated and it can shortcut-exit with that value.
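For example, reduced even lets a reduce over an infinite sequence terminate:

```clojure
;; reduced short-circuits: stop summing 0 + 1 + 2 + ... as soon as
;; the accumulator exceeds 100 -- this terminates even over (range),
;; an infinite sequence
(reduce (fn [acc x]
          (if (> acc 100)
            (reduced acc)
            (+ acc x)))
        0
        (range))
;=> 105
```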

turbopape10:05:09

Isn’t that what you want ?

turbopape10:05:56

your problem is the sum of different elements from different colls ?

d-t-w10:05:08

the original questioner wanted to know how to implement map (including support for multiple colls) using reduce.

d-t-w10:05:08

the trick is, normally to transpose a matrix you would use map.

d-t-w10:05:03

(apply mapv vector [[1 2] [3 4] [5 6]])
=> [[1 3 5] [2 4 6]]

d-t-w10:05:36

partition and interleave is the other way.

d-t-w10:05:00

to the other original question of why no reduce-indexed? not sure but mikera wrote one

d-t-w10:05:32

ended up looking over it when answering the reddit q.

d-t-w10:05:48

you know what, internally interleave uses map, which I was trying to avoid when transposing the collections

d-t-w10:05:59

so, no idea!

gowder10:05:28

Hey... Speaking of matrices and other slightly complex data structures... can I ask a silly question? I'm thinking of using memoize in the core library on a function where the cost comes not from complex calculations but from the need to traverse a potentially big nested vector of vectors. Is the underlying machinery of equality testing smart enough to leverage immutability to actually make this work (i.e. to just compare some kinds of pointers under the hood), or will it have to traverse the whole data structure again in order to test that the argument is the same on the second...nth call (in which case I suppose I should choke the mutability down and store some state in an atom or something)?

fasiha13:05:17

@thiagofm: (map + [1 2] [1 2]) is used extensively in my code: summing 2D/3D/ND mathematical vectors.

fasiha13:05:24

(defn v+ [& args] (apply (partial mapv +) args)) where v+ is read "vector add". The trickiness with the args and apply and partials is so v+ works like +, and (v+ [1 2] [10 12] [100 100]) works as expected

piotrek13:05:26

(= a b) uses that method so it is checking for reference equality first
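A small illustration of that fast path (clojure.lang.Util/equiv begins with a Java == check, so comparing a value to itself never traverses the structure):

```clojure
;; a large immutable vector
(def big (vec (range 1000000)))

(identical? big big)  ;=> true  -- pure pointer comparison
(= big big)           ;=> true  -- short-circuits on the identity
                      ;   check before ever walking the vector
```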

derwolfe13:05:04

When using reify must one implement all methods of an interface?

bronsa13:05:02

the ones you don't implement will throw an AbstractMethodError if you try to invoke them
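For example, a partial reify of java.util.Iterator:

```clojure
;; reify lets you implement only the methods you need; invoking an
;; unimplemented interface method throws AbstractMethodError
(def ^java.util.Iterator it
  (reify java.util.Iterator
    (hasNext [_] false)))   ; next is deliberately left out

(.hasNext it)  ;=> false
(try (.next it)
     (catch AbstractMethodError _ :not-implemented))
;=> :not-implemented
```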

derwolfe13:05:49

thanks bronsa 🙂

firstclassfunc13:05:07

I have a java interop question. When passing an argument into a Java method (.withInput x) where x is of type InputStream[]. How does one create the Java Array of object ^InputStream?
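One common way is into-array; the .withInput call below is just the asker's hypothetical method, shown for context:

```clojure
(import '(java.io InputStream ByteArrayInputStream))

;; into-array builds a typed Java array, here an InputStream[]
(def streams
  (into-array InputStream
              [(ByteArrayInputStream. (.getBytes "a"))
               (ByteArrayInputStream. (.getBytes "b"))]))

(.getName (class streams))  ;=> "[Ljava.io.InputStream;"
(alength streams)           ;=> 2

;; then something like: (.withInput some-obj streams)
```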

bronsa13:05:56

herrwolfe: I find that most of the time it's faster to just try it in a repl rather than asking, plus you get to see the exact behaviour 🙂

derwolfe13:05:39

good point 🙂

gowder15:05:20

@piotrek: sweet, thanks, that gives me enough confidence to make it worth running some tests...

martinklepsch15:05:19

I'm reading some information from resources/ during dev and now when running everything as a jar I can't access it using the regular file-system methods anymore. What are your go to solutions for this kind of thing?

curtis.summers15:05:58

@martinklepsch: java.lang.ClassLoader/getSystemResourceAsStream

martinklepsch15:05:40

@curtis.summers: does that let me do something akin to file-seq? i.e. find all resources under a certain path?

martinklepsch15:05:17

it only works for specific paths, doesn't it?

curtis.summers15:05:09

@martinklepsch: Ah, no that wouldn't work for that. Can you def the list of resources at compile time instead? Then use getSystemResourceAsStream at runtime.

martinklepsch15:05:52

@curtis.summers: yeah, moving things to compile time seems like the easiest way for now. FWIW instead of java.lang.ClassLoader/getSystemResourceAsStream plain also worked fine in my case.
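For reference, clojure.java.io/resource is the usual go-to for this; it resolves a classpath path whether it lives in resources/ on disk or inside a jar, returning a URL (or nil):

```clojure
(require '[clojure.java.io :as io])

;; "clojure/core.clj" ships inside the Clojure jar itself, so this
;; resolves even when no resources/ directory exists on disk
(some? (io/resource "clojure/core.clj"))  ;=> true

;; a missing path just returns nil rather than throwing
(io/resource "no/such/file.edn")          ;=> nil
```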

fasiha18:05:33

I'd like to have an if inside a vector literal: [1 2 (if (> x 2) x)] but omit the nil that results if the if fails, so I get either [1 2 x] or [1 2] (instead of [1 2 nil] like it does there). Any way to do this, other than moving the if outside the vector literal?

ghadi18:05:43

nope, all expressions have a result

ghadi18:05:03

(cond-> [1 2] (> x 2) (conj x))

ghadi18:05:59

Another alternative ^^ , but not as readable/clear as:

(if (> x 2)
  [1 2]
  [1 2 x])

ghadi18:05:30

whoops, missed the arrow in cond->

fasiha19:05:32

@ghadi: very cool, I figured it was a language feature, that makes more sense

jr19:05:15

you can unquote-splice the result in a syntax quoted vector but it is kinda ugly

jr19:05:39

`[1 2 ~@(if pred [x]) 4]

jr19:05:34

I prefer use of into

(into [1 2]
  (if pred [x]))

jr19:05:59

but that means x needs to be at the tail

fasiha19:05:12

@ghadi @jr when we get user-defined reader macros 😈 maybe

ghadi19:05:22

never going to happen

jr19:05:44

clojure.edn supports user-defined reader macros

ghadi19:05:00

(that's not the same thing)

jr19:05:12

right but it can provide a vector when read 😉

ghadi19:05:28

@jr: can't omit a value entirely

jr19:05:36

ah I see

bronsa19:05:19

reader macros wouldn't help anyway

bronsa19:05:42

you can't omit at read time something that depends on runtime state

fasiha19:05:14

@bronsa, doh! But, it turns out I needn't have worried: my use case was hiccup HTML templating, and it gracefully handles nils in the middle of its vectors. Hooray defensive libs!

stand20:05:09

Is it possible to use a Java 8 based jar in a clojure project? Any issues?

gnejs20:05:52

@stand: shouldn’t be, if you’re running/compiling the clojure code itself on Java 8.

vipaca22:05:35

Has anyone run into NoSuchMethodError, NoSuchFieldError, or VerifyError during java interops within dependent libraries?

warn4n23:05:31

yup, like when you upgrade a library that you depend on

warn4n23:05:57

that experience has definitely nudged me toward the compile-time types camp

warn4n23:05:39

any "breaking" api changes in your dependent library may manifest at runtime as NoSuchMethodError

vipaca23:05:55

What I am seeing is not broken APIs; it's more like the JVM being unable to find a parent class constructor, for example, but only when invoked from clojure.

vipaca23:05:11

java.lang.NoSuchMethodError: org.apache.lucene.analysis.Analyzer.<init>(Lorg/apache/lucene/analysis/Analyzer$ReuseStrategy;)V
	at org.apache.lucene.analysis.AnalyzerWrapper.<init>(AnalyzerWrapper.java:40)
	at org.apache.lucene.analysis.miscellaneous.PerFieldAnalyzerWrapper.<init>(PerFieldAnalyzerWrapper.java:75)

jr23:05:14

uberjar?

vipaca23:05:22

tried it, no joy.

jr23:05:55

oh I mean that uberjar (aot) is usually a source of such errors 😛

jr23:05:35

sometimes an improper type hint can cause that as well

vipaca23:05:10

jr, when I said uberjar I meant hand assembling multiple dependent jars. I haven't relied on lein facilities for this.

kenny23:05:40

What is the preferred way to setup data for each clojure.test test? Right now I am just surrounding each test with a let block. Is this the normal way to do it?

Alex Miller (Clojure team)23:05:02

Why not let in the deftest? If you're using it for more than one test, then fixtures.