#clojure
2018-02-12
mfikes01:02:50

@qqq Right. Interestingly, in ClojureScript swap! and reset! also work on objects satisfying ISwap and IReset (of which an atom satisfies neither).
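
A minimal ClojureScript sketch (illustrative only; Box is a hypothetical type, not from the chat) of a value that works with swap! and reset! by satisfying ISwap and IReset directly:

(deftype Box [^:mutable value]   ; hypothetical type for illustration
  IDeref
  (-deref [_] value)
  IReset
  (-reset! [_ new-value] (set! value new-value) new-value)
  ISwap
  (-swap! [o f] (-reset! o (f value)))
  (-swap! [o f a] (-reset! o (f value a)))
  (-swap! [o f a b] (-reset! o (f value a b)))
  (-swap! [o f a b xs] (-reset! o (apply f value a b xs))))

(swap! (Box. 0) inc)   ;; => 1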

bronsa01:02:21

clojure has IAtom

crankyadmin12:02:39

Hi,

ingest-service.core> (streaming/duration ^long (.longValue 60000))
#object[org.apache.spark.streaming.Duration 0x87482 "60000 ms"]
ingest-service.core> 
Great... but when evaluated in Emacs I get org.apache.spark.streaming.Duration cannot be cast to java.lang.Number. Am I missing something?

noisesmith16:02:53

as an aside: in Clojure, 60000 is always a long; you actually have to jump through hoops to get an int
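
A quick REPL illustration (not from the chat):

user=> (class 60000)
java.lang.Long
user=> (class (int 60000))   ; explicit coercion, boxes back to an Integer
java.lang.Integer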

noisesmith16:02:55

surely something else is trying to consume the value streaming/duration returned in order to get an error like that. That call creates a Duration, it doesn't consume one, so it couldn't produce that error itself

roklenarcic16:02:31

Hm, why are the generators created by spec so conservative? If I specify something as string?, the generator basically never generates very long strings or strings with high Unicode code points.

roklenarcic16:02:08

In the case of generative testing, it seems that it doesn't really exercise any edge cases except the empty string.

arrdem16:02:56

@roklenarcic that's a good question for #clojure-spec and I suspect the answer is that the default generators are conservative because there are so many possible interpretations of a string? spec. Custom generators do buy you a lot.

dominicm18:02:22

I thought the test.check generators did generate weird strings?

hiredman18:02:59

the generator spec uses for clojure.core/string? is test.check's string-alphanumeric

roklenarcic18:02:10

@arrdem But if you have a specific interpretation for the string? spec, it's still much easier to constrain the spec than to add generators that generate weird strings everywhere.
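
For reference, a sketch of the usual workaround (::name is a hypothetical spec name): attach a wider test.check generator to the spec with s/with-gen.

(require '[clojure.spec.alpha :as s]
         '[clojure.spec.gen.alpha :as gen])

;; gen/string draws from a much wider character range than the
;; string-alphanumeric generator spec uses for string? by default
(s/def ::name (s/with-gen string? #(gen/string)))

(gen/sample (s/gen ::name))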

sggdfgf20:02:34

Why are keywords more popular than symbols in Clojure, unlike in Scheme?

qqq20:02:14

I had the same question; I spent a few days programming using symbols instead of keywords for data, and found I really liked keywords afterwards.

sggdfgf20:02:10

And what drew you back?

manutter5120:02:25

Symbols mean different things in different contexts, but a keyword is just a keyword — a thing that evaluates to itself. That seems more efficient to me, but to be perfectly honest, the real reason I use keywords is because that’s what I learned and what everyone else uses, mostly.
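
An illustrative REPL transcript of that difference (not from the chat):

user=> :foo        ; a keyword evaluates to itself
:foo
user=> 'foo        ; a quoted symbol is just data
foo
user=> foo         ; an unquoted symbol must resolve to something
;; error: Unable to resolve symbol: foo in this context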

Ryan Radomski20:02:02

It should be noted that just as (), [], {}, etc. prevent some semantic overloading of parens, keywords prevent a degree of semantic overloading with symbolic data

jasonjckn20:02:09

Feel free to point me to documentation. Question: when doing Clojure data transformation, is there any rule of thumb for choosing reducers vs transducers?

jasonjckn20:02:20

e.g. (sequence (map inc) coll) vs (r/foldcat (r/map inc coll))

jasonjckn20:02:56

I understand some of the tradeoffs: the latter is potentially parallel, the former produces one lazy seq. Is there a canonical choice?

reborg20:02:36

Not really @jasonjckn, it depends on the problem. I'd say introduce the parallelism concern later on, and model the processing via transducers.

jasonjckn20:02:46

sounds good to me, thanks

jasonjckn20:02:56

cool profile pic

reborg20:02:49

transducers are also more flexible/powerful for modelling the processing pipeline. They can be made parallel later (with some tradeoffs)
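
A sketch of that separation of concerns (illustrative only): define the pipeline once as a transducer, then pick the execution strategy separately.

(require '[clojure.core.reducers :as r])

(def xf (comp (map inc) (filter even?)))

(sequence xf (range 10))   ;; lazy, sequential
(into [] xf (range 10))    ;; eager, sequential

;; the reducers spelling of the same pipeline; fold parallelizes
;; via fork/join once the foldable collection is large enough
(r/foldcat (r/filter even? (r/map inc (vec (range 1000)))))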

laujensen21:02:51

I'm rebooting ClojureQL and inviting anyone interested to join the team. It's good, but there's still some fun stuff to be done, for example a compiler rewrite. The first lines are 8 years old, so some dusting off can't be ruled out. https://github.com/LauJensen/clojureql / http://clojureql.sabrecms.com/en/welcome - Ping me if you're interested in lending a hand

seancorfield21:02:04

Wow, clojure.java.jdbc 0.2.3... I'll be interested to see what it all looks like running on 0.7.5!

laujensen21:02:35

One thing at a time. Just got it up from Clojure 1.1 to 1.9 😄

seancorfield21:02:20

@laujensen If any questions come up about clojure.java.jdbc, feel free to chase me down in #sql for the answers!

laujensen21:02:33

Thanks sean, I'll be sure to do that

aengelberg21:02:26

Is there any built-in Clojure function or utility that can treat a java.util.Iterator as an IReduceInit?

bronsa22:02:18

Not in Clojure, but it's trivial to roll your own.

bronsa22:02:13

@aengelberg do you care about treating it exactly as an IReduceInit or only care about reducing it?

bronsa22:02:24

because if the latter, you can use iterator-seq

bronsa22:02:57

which returns a chunked seq, so reduction should be quite fast anyway (altho you get caching and extra allocations)
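
For instance (a sketch, not from the chat):

(reduce + 0 (iterator-seq (.iterator [1 2 3])))
;; => 6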

aengelberg22:02:23

I just want to eagerly consume all values from an iterator without the cost of creating an extra seq.

bronsa22:02:42

then you'll have to reify IReduceInit yourself

bronsa22:02:37

;; note: the first argument to a reify method is the reified object
;; itself ("this"), so the iterator has to be closed over from outside
(defn iterator-reducible [^java.util.Iterator iter]
  (reify clojure.lang.IReduceInit
    (reduce [_ f val]
      (loop [ret val]
        (if (.hasNext iter)
          (let [ret (f ret (.next iter))]
            (if (reduced? ret)
              @ret
              (recur ret)))
          ret)))))
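
Usage sketch (iterator-reducible is the hypothetical wrapper name used above):

(reduce + 0 (iterator-reducible (.iterator [10 20 30])))
;; => 60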

mikerod22:02:46

I found some interesting results in terms of compilation time with the clj compiler. I'm looking at a case where a bunch of individual eval calls on single forms got really slow (this is related to some Clara rules stuff). The compiler seemed slower than "normal" for that amount of code; it can compile a lot of source code much faster than what we were seeing from the individual eval calls made by this DSL layer. So I experimented with some dummy examples, compiling one form at a time vs. batches of forms that arrive at equivalent output. The compiler is quite a bit faster on batches than on individual calls to eval, and it isn't immediately obvious to me why. Profiling shows hot spots in the compiler on the individual eval calls that don't show up in the batch cases, but I haven't dug into them enough yet to really understand why.

mikerod22:02:05

I was thinking perhaps the compiler has some sort of caching going on during a single eval pass

mikerod22:02:21

and you don’t get to take advantage of that if you do a bunch of individual calls to eval vs larger batches
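
A rough way to observe this at the REPL (a sketch; the assumption is that simple fn forms stand in for the real DSL forms):

(let [forms (repeatedly 1000 (fn [] '(fn [x] (+ x 1))))]
  ;; one compiler invocation per form
  (time (run! eval forms))
  ;; one compilation of a single batched expression
  ;; (a vector, since a top-level do is evaluated form by form)
  (time (eval (vec forms))))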