

@qqq Right. Interestingly, in ClojureScript swap! and reset! also work on objects satisfying ISwap and IReset (of which an atom satisfies neither).


clojure has IAtom



ingest-service.core> (streaming/duration ^long (.longValue 60000))
#object[org.apache.spark.streaming.Duration 0x87482 "60000 ms"]
Great.... But when evaluated in Emacs I get org.apache.spark.streaming.Duration cannot be cast to java.lang.Number, am I missing something?


Hm, why are generators created by spec so conservative? If I specify something as string?, the generator basically never generates very long strings or strings with high Unicode characters.


In case of generative testing, it seems that it doesn't really exercise any edge cases except empty string.


@roklenarcic that's a good question for #clojure-spec and I suspect the answer is that the default generators are conservative because there are so many possible interpretations of a string? spec. Custom generators do buy you a lot.


as an aside - in clojure 60000 is always a long - you have to jump through hoops to get an int actually
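A quick REPL check illustrates the point about literals:

```clojure
;; Integer literals read as longs; an int takes an explicit coercion.
(class 60000)        ;; => java.lang.Long
(class (int 60000))  ;; => java.lang.Integer (boxed on the way out)
(class 60000.0)      ;; => java.lang.Double, for comparison
```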


surely something else is trying to consume the value streaming/duration returned in order to get an error like that - that call creates a Duration, it doesn’t consume one, so it couldn’t produce that error


I thought the test check generators did generate weird strings?


the generator spec uses for clojure.core/string? is test.check's string-alphanumeric


@arrdem But if you have a specific interpretation for string? spec, it's still much easier to constrain the spec, than to add generators that generate weird strings everywhere.
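A sketch of constraining the generator rather than the spec (assumes clojure.spec.alpha and test.check are on the classpath; the :demo/wild-string name is made up for the example):

```clojure
(require '[clojure.spec.alpha :as s]
         '[clojure.spec.gen.alpha :as gen])

;; The default generator for string? stays short and alphanumeric:
(gen/sample (s/gen string?) 5)

;; Swapping in test.check's gen/string exercises arbitrary characters
;; while the spec itself stays plain string?:
(s/def :demo/wild-string
  (s/with-gen string? #(gen/string)))

(gen/sample (s/gen :demo/wild-string) 5)
```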


why keyword is more popular than symbol in Clojure? unlike Scheme...


I had the same question; spent a few days programming using symbols instead of keywords for data, and found I really liked keywords afterwards.


and what drew you back?


Symbols mean different things in different contexts, but a keyword is just a keyword — a thing that evaluates to itself. That seems more efficient to me, but to be perfectly honest, the real reason I use keywords is because that’s what I learned and what everyone else uses, mostly.


It should be noted that just as (), [], {}, etc prevent some semantic overloading of parens, keywords prevent a degree of semantic overloading with symbolic data
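A small REPL sketch of the evaluation difference being described:

```clojure
:foo          ;; a keyword evaluates to itself
'foo          ;; a symbol must be quoted, or the reader tries to resolve it
(:a {:a 1})   ;; keywords act as lookup functions => 1
('a {'a 1})   ;; symbols do too, but they also name vars, locals, classes...
```

Because a keyword never means anything but itself, there is no quoting discipline to keep track of when using it as data.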


feel free to point me to documentation: Question: when doing clojure data transformation, any rule of thumb for using reducers vs transducers?


e.g. (sequence (map inc) coll) vs (r/foldcat (r/map inc coll))


i understand some of the tradeoffs, the latter is possibly parallel, the former has one lazy seq, is there a canonical choice


not really @jasonjckn, it depends on the problem. I'd say introduce the parallelism concern later on, and model processing via transducers.


sounds good to me, thanks


cool profile pic


transducers are also more flexible/powerful for modelling the processing pipeline. They can be made parallel later (with some tradeoffs)
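A sketch of that flexibility: one transducer reused across eager, lazy, and reducing contexts, next to the reducers version (which can go parallel via fork/join for vectors over the partition size):

```clojure
(require '[clojure.core.reducers :as r])

;; One processing pipeline, defined once:
(def xf (comp (map inc) (filter even?)))

(into [] xf (range 10))        ;; eager => [2 4 6 8 10]
(sequence xf (range 10))       ;; lazy seq of the same values
(transduce xf + 0 (range 10))  ;; direct reduction => 30

;; Reducers equivalent of mapping inc, folded into a collection:
(r/foldcat (r/map inc (vec (range 10))))
```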


I'm rebooting ClojureQL and inviting anyone interested to join the team. It's good, but there's still some fun stuff to be done, for example a compiler rewrite. The first lines are 8 years old, so some dusting off can't be ruled out. Ping me if you're interested in lending a hand


Wow, 0.2.3... I'll be interested to see what it all looks like running on 0.7.5!


One thing at a time. Just got it up from Clojure 1.1 to 1.9 :smile:


@laujensen If any questions come up, feel free to chase me down in #sql for the answers!


Thanks sean, I'll be sure to do that


Is there any built-in Clojure function or utility that can treat a java.util.Iterator as an IReduceInit?


not in clojure but it's trivial to roll your own..


@aengelberg do you care about treating it exactly as an IReduceInit or only care about reducing it?


because if the latter, you can use iterator-seq


which returns a chunked seq, so reduction should be quite fast anyway (altho you get caching and extra allocations)
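For example (any Iterable works as the source; a vector is used here just for illustration):

```clojure
;; iterator-seq wraps a java.util.Iterator in a (chunked, caching) seq:
(let [it (.iterator ^java.lang.Iterable [1 2 3 4])]
  (reduce + (iterator-seq it)))  ;; => 10
```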


I just want to eagerly consume all values from an iterator without the cost of creating an extra seq.


then you'll have to reify IReduceInit yourself


(defn iter-reduce [^java.util.Iterator iter]
  (reify clojure.lang.IReduceInit
    (reduce [_ f init]
      (loop [ret init]
        (if (.hasNext iter)
          (let [ret (f ret (.next iter))]
            (if (reduced? ret)
              @ret
              (recur ret)))
          ret)))))
;; e.g. (reduce + 0 (iter-reduce (.iterator [1 2 3]))) => 6


I found some interesting results in terms of compilation time with the clj compiler. I’m looking at a case where it got really slow to do a bunch of individual eval calls on single forms (this is related to some Clara rules stuff). The compiler seemed slower than “normal” for this amount of code: it can compile a lot of source much faster than what we were seeing with a bunch of individual eval calls for this DSL layer. So I experimented with some dummy examples of compiling 1 form at a time vs compiling batches of forms to arrive at equivalent output. It looks like the compiler is quite a bit faster on batches vs individual calls to eval, and it isn’t immediately obvious to me why. I have tried profiling and I can see hot spots showing up in the compiler on the individual eval calls that don’t show up in the batch cases, but I haven’t dug into them enough yet to really understand why.


I was thinking perhaps the compiler has some sort of caching going on during a single eval pass


and you don’t get to take advantage of that if you do a bunch of individual calls to eval vs larger batches
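A minimal version of that experiment can be sketched like this (the forms and counts are made up; real timings will vary with the DSL being compiled):

```clojure
;; 200 trivial forms standing in for the generated DSL code:
(def forms (mapv (fn [n] `(fn [] ~n)) (range 200)))

;; one top-level eval per form:
(time (doseq [f forms] (eval f)))

;; one eval over a single batched do:
(time (eval `(do ~@forms)))
```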