2018-06-08
is there an identity for transducers? something suitable for this situation: (sequence (or (make-xform) identity) coll)
yeah, that’s what i did. was wondering if there was a blessed way
Needed this a couple of times myself, so here's an additional option:
(def xf-identity
  (fn [rf]
    (fn
      ([] (rf))
      ([res] (rf res))
      ([res in] (rf res in))
      ([res in & ins] (rf res (list* in ins))))))
user=> (sequence (comp (map inc) xf-identity) (range 10))
(1 2 3 4 5 6 7 8 9 10)
user=> (sequence (comp (map #(apply + %&)) xf-identity)
                 (range 10) (range 10))
(0 2 4 6 8 10 12 14 16 18)
Vaguely faster as expected:
(let [items (range 10000)]
  (quick-bench
   (dorun
    (sequence (map identity) items))))
;; 914.020710 µs

(let [items (range 10000)]
  (quick-bench
   (dorun
    (sequence xf-identity items))))
;; 892.697959 µs
there's no obvious identity semantic for multiple seqs, I just gave one option above, but it's one convention that can be adopted
identity as a transducer is equivalent to
(defn xf-identity [rf]
  (fn
    ([] (rf))
    ([res] (rf res))
    ([res in] (rf res in))
    ([res in & ins] (apply rf res in ins))))
which is trivial to prove, as that inner fn is equivalent to rf, collapsing the whole thing to (defn xf-identity [rf] rf)
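A quick REPL illustration of that equivalence, for the single-collection case:
user=> (sequence identity (range 5))
(0 1 2 3 4)
user=> (into [] (comp (filter odd?) identity) (range 10))
[1 3 5 7 9]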
what your xf-identity is doing is just packing multiple arguments into a list, which I think is a bad idea
your next transducer in the pipeline will take either an a or a list of as through the same arity, depending on the arity invoked by xf-identity
@bronsa what do you expect from (sequence xf-identity (range 10) (range 10)), that you should not use it?
that xf-identity is equivalent to map identity for the 1-input case and to map list for the n-input case
I don’t think xf-identity as you wrote it is a good transducer, so it being “good” or “bad” for a particular use case doesn’t make much sense to me
and I definitely wouldn’t flip between 2 different functions depending on the number of input arguments
user=> (sequence (comp (map list) identity) (range 10) (range 10))
((0 0) (1 1) (2 2) (3 3) (4 4) (5 5) (6 6) (7 7) (8 8) (9 9))
btw, I don’t see why it would be desirable to complect identity and (map list) in a single transducer
I don't have a specific use case in mind, and given the power of identity I would agree with you and not suggest using a custom xf-identity in the general case. So @robert-stuttaford should consider (or (make-xform) identity) as the first choice. Thanks @bronsa for the always useful insights.
ArityException Wrong number of args (3) passed to: clojure.lang.AFn.throwArity (AFn.java:430)
https://github.com/clojure/clojure/blob/master/src/jvm/clojure/lang/TransformerIterator.java#L38-L51 this one
thanks @bronsa!
I’m using clojure.walk to transform a tree of maps. is there a way to tell walk to ignore the value at a key?
if you’re using clojure.walk/walk you can control the recursion and you can do whatever you want by writing the right inner or outer function; if you’re using prewalk or postwalk then no
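For example, here is a minimal sketch of that approach (postwalk-skipping and :skip are made-up names): a postwalk-style traversal built on clojure.walk/walk that leaves the value at one key untouched:
(require '[clojure.walk :as walk])

(defn postwalk-skipping
  "Like postwalk with f, but does not descend into the value at skip-key."
  [f skip-key form]
  (walk/walk
   (fn [x]
     (if (and (map-entry? x) (= skip-key (key x)))
       x                                    ; inner: leave this entry alone
       (postwalk-skipping f skip-key x)))   ; otherwise recurse, like postwalk does
   f                                        ; outer: apply f to the rebuilt form
   form))

user=> (postwalk-skipping #(if (number? %) (inc %) %) :skip
                          {:a 1 :skip {:b 2} :c {:d 3}})
{:a 2, :skip {:b 2}, :c {:d 4}}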
@lilactown Do you mean that for a certain value, you want to keep the old value of the key?
@lilactown would something like this work?
(def m {:a 1 :b ""})

user=> (walk/prewalk (fn [x]
                       (if (map-entry? x)
                         (let [[k v] x]
                           (if (= :b k) x [k (inc v)]))
                         x))
                     m)
{:a 2, :b ""}
I figured out that what I actually wanted was for it to ONLY traverse down into a single key
I think I figured it out
(declare analyze) ; forward declaration, since analyze-tree calls analyze

(defn- analyze-tree [tree]
  (println (first tree))
  (if (= (first tree) :children)
    (walk/walk
     analyze
     identity
     (make-node tree))
    (make-node tree)))

(defn analyze [tree]
  (analyze-tree (make-node tree)))
Has anyone ever attempted to pack an uberjar with javapackager? I'm trying it right now, but all I ever get is "Exception: java.lang.Exception: Error: Modules are not allowed in srcfiles: [target/videocapture.jar]."
Getting some unexpected behavior when trying to capture output from clojure.tools.logging:
(deftest log-test
  (is (re-find #"foo" (with-out-str
                        (clojure.tools.logging/info "foo")))))
gives output:
11:25:04.447 [main] INFO <ns> - foo
expected: (re-find #"foo" (with-out-str (clojure.tools.logging/info "foo")))
actual: (not (re-find #"foo" ""))
It seems that the output is being printed to *out* despite the use of with-out-str.
it is probably ignoring *out*, which is what with-out-str redefines, and using System/out directly
@xiongtx the more I consider it, tools.logging is wrapping java loggers which wouldn't know about or use *out* at all
You're right. I used the following instead of with-out-str and it worked:
;; assumes java.io.ByteArrayOutputStream and java.io.PrintStream are imported
(defmacro with-system-out-str
  [& body]
  `(let [out# System/out
         baos# (ByteArrayOutputStream.)]
     (System/setOut (PrintStream. baos#))
     ~@body
     (.flush System/out)
     (System/setOut out#)
     (str baos#)))
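With that macro, the original test looks like this (assuming the logging backend actually writes to standard out):
(deftest log-test
  (is (re-find #"foo" (with-system-out-str
                        (clojure.tools.logging/info "foo")))))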
:thumbsup:
glad I could help
you could also do this with logging config, though that seems fine enough
*out* itself wraps System/out, which a logger would use directly unless you configured it to do something else
Anyone familiar with the gotchas of AOT compilation of clojure code? I'm experiencing an error something akin to java.lang.ClassCastException: Foo cannot be cast to IFoo, where IFoo is a protocol that Foo implements. Anyone know why this would cause an error in AOT compilation, but not in regular operation?
this can happen when you reload the file defining IFoo but not the file defining Foo
protocols / records are a bit fragile for iterative repl based stuff
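A minimal REPL sketch of that failure mode, using hypothetical IFoo/Foo names that mirror the ones above:
(defprotocol IFoo (bar [this]))
(defrecord Foo [] IFoo (bar [_] :ok))
(def f (->Foo))
(bar f)   ;=> :ok
;; re-evaluating the protocol (e.g. by reloading its namespace) creates a new
;; IFoo interface class; the existing Foo instance still implements the old one
(defprotocol IFoo (bar [this]))
(bar f)   ;=> "No implementation of method: :bar ..." (or a ClassCastException
          ;   if code was compiled against the old interface class)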
Hmmm... since we're on the topic of logging. Given a Storm topology that's using Log4J (default) through tools.logging with no additional properties set... I assume it's writing to the file system. Seems like a crazy question, but is there a chance it's not async IO?
I'd like to, but i'm trying to see if I can get my project to native compile using graalvm, first step is to get it AoT 🙂
I'm wondering if it has anything to do with the Foo record obtaining more than its defined structure of values
ie.
(defrecord Foo [x y])
(def x (->Foo 1 2))
(class x)              ;; Foo
(class (assoc x :z 3)) ;; Foo
This appears to be Foo still, but doesn't the inclusion of the additional key-value cause it to convert to a hashmap?
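For what it's worth, a quick REPL check of that: assoc'ing an extra key keeps the record type; only dissoc'ing a declared field degrades it to a plain map:
user=> (defrecord Foo [x y])
user.Foo
user=> (record? (assoc (->Foo 1 2) :z 3))
true
user=> (record? (dissoc (->Foo 1 2) :x))
false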
@benzap re: aot that kind of issue is very common if you have incorrect namespace dependencies
e.g. you import a type created using defrecord, but don't require the clojure namespace where it is defined, or something similar
(aot compilation is the worst, I doubt the native code from graalvm will offset that)
no, I have them separated out, with the record and extension in foo.impl.bar, and the protocol in foo.bar
if not, how are you importing/requiring the protocol in the namespace where the defrecord is defined?
so the protocol is defined in fif.stack-machine, and the implementation is fif.impl.stack-machine
you may have some weirdness there, the place you would need to delete them from will depend on whatever tooling you are using
i'm using lein uberjar, i'm trying to follow along to this: https://github.com/borkdude/cljtree-graalvm
if I recall that just tells lein to call clojure.core/compile on every namespace it finds in source
I bet you have a file somewhere in src that tries to run your code, and :all is loading that
oh of course, right, and that isn't a def, so you are calling the protocol method while building
That is correct, the fif.core compilation step fails on the class exception between the record Stackmachine and the protocol IStackmachine
so what is happening is the file that defines the protocol is getting reloaded after the record defining file is loaded
adding the :gen-class stuff may fix it, because you will get stable classfiles written to disk
https://github.com/benzap/fif/blob/master/src/fif/core.cljc#L18-L26 <-- isn't this def going to cause all that code to run at load time?
you will eventually run in to a weird crazy error that will suck the life out of you
it may also fix it if you don't :aot :all and just pass in your main namespace there
@seancorfield the default-stack is a stack-machine instantiated at compile time
So "yes" is the answer to my question 🙂 ... which seems ... suspect.
I feel like if I can't get :aot :all to work, it might not be feasible to generate a native-image. Still not clear on whether what I've written can even be native 🙂
aot compilation is transitive, so if you load all your namespaces from your main namespace, or any namespace loaded by your main namespace, and so on, then they will all be aot compiled
:aot :all is just a lein shortcut, which doesn't know anything about namespace dependencies in your project, so it may compile them in the wrong order, and recompile them after they have already been compiled
So would I be able to get something to the similar effect if I use :gen-class everywhere, and do an :aot [fif.core] in the leiningen profile?
Also, given that it's a leiningen shortcut, is there a better tool for doing this sort of AOT compilation, and is it entirely necessary?
I dunno if it is necessary for whatever you are doing with nativeimage, it usually is not necessary for people not doing that
there potentially could be a better tool, if lein used tools.namespace to get an idea of the namespace dependencies and compiled them in order
Alright, tried it with just fif-core, and included a :gen-class and additionally a :main to fif.core. Seemed to work
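For reference, a sketch of what that configuration might look like in project.clj (project name and version are made up):
(defproject fif "0.1.0"
  :main fif.core                        ; fif.core has (:gen-class) in its ns form
  :profiles {:uberjar {:aot [fif.core]  ; compile only the entry namespace;
                                        ; everything it requires is compiled transitively
                       :jvm-opts ["-Dclojure.compiler.direct-linking=true"]}})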
I forgot to mention that I had also included direct-linking, but I would assume this would cause runtime issues?
It's fine if you're not rebinding things dynamically at runtime.
I'm pretty sure we have direct linking enabled in production...
Yup,
base_jvm_opts="-XX:-OmitStackTraceInFastThrow -Dclojure.compiler.direct-linking=true"
for all JARs we run in production (we do not AOT compile tho')

(let [namespaces
      (clojure.tools.namespace.dependency/topo-sort
       (:clojure.tools.namespace.track/deps
        (clojure.tools.namespace.file/add-files
         {}
         (clojure.tools.namespace.find/find-sources-in-dir
          (clojure.java.io/file "src/")))))]  ; pass a java.io.File, not a bare string
  ;; note: the *compile-path* directory (default "classes") must exist on the classpath
  (doseq [namespace namespaces]
    (compile namespace)))
might be the more correct version of lein's :aot :all
Could it be some kind of recursion, maybe triggered by a macro? Just a wild idea, I don't have any experience with native image.
nativeimage is running on the bytecode, by which time all the macros have been expanded away
i read an issue a couple of weeks ago where someone got a native image to compile, but only after several hours and 20gb+ of RAM!
Haha, I guess I lucked out then. I think the highest it went for me was around ~5gb. My VM was only set at 4gb, so I had to bump it up a bit
It's actually kind of eerie how easy it was to build a native-image. It's working all kinds of magic in there...
I suspect you are effectively compiling your project + the jdk, which means you are compiling a huge project
> there is a reason I always tell people not to aot
I think there are times when it is somewhat necessary to AOT. it is best if you don’t have to do it, but if you have a compelling reason, you have to fight through it.
(1) faster startup time may matter (2) don’t have to ship the raw source code clj files in final artifact (3) I think you have to for Android or something? (4) exposing statically loaded classes for Java (or similar) consumers (I guess (4) could be done by writing a Java class that you compile that JITs the rest of the clj underlying)
similar to the native-image thing, android isn't a jvm, but has a tool that operates on jvm bytecode to generate bytecode for android (or native code for art or whatever)
the better solution of course would be a compiler that targets the android bytecode/runtime du jour
@mikerod it is a spectrum, and the further that slider gets from no aot to aot the more pain there is, with diminishing increments of functionality
You're absolutely right, it would get compiled into a separate bytecode developed by google. I believe art is the successor to the previous version, which performed a lot more AoT
I think the struggle of clj aot is less of a struggle than building cljs with :advanced optimizations though
so if you can get through cljs advanced opts, tackling clj aot problems should be a breeze 😛
what they have in common is that if you have them turned on during dev it's a series of small things that are fixable, but if you turn them on in a mature project you get a storm of failures
but aot is not strictly needed like advanced compilation - there's always a way around it
> but aot is not strictly needed like advanced compilation - there's always a way around it
I think it may be “strictly” needed if you need to squeeze every ounce of startup time perf you can out of a large app?
the time savings to effort is pretty small though
but sure
Alternatively, if you are forbidden to give consumers a jar full of your clj source files (perhaps rarer?, but legal-stuff at a company perhaps)
just saying, if anyone has a jar with some aot compiled clojure in it and wants to take a peek
I don’t know the legal-side discussions of this, so won’t pretend to. I believe in intellij and/or eclipse it has presented me with warnings about ensuring that it is “legal” to decompile the bytecode before it performs the action though
so I was thinking, perhaps decompiling things is a different category than giving people jars with sources in them directly
anyways, doesn’t really matter. I’m just contemplating and thinking of reasons why AOT can still be a necessary feature at times
Hello, all! I’m a little baffled by a problem I’m facing using Clojure future — I’m using it to fire off a TCP message to HostedGraphite (it’s a hosted Grafana service). (PS: I love the service, although it can get spendy w/lots of metrics… http://hostedgraphite.com/)
I’m sending a message to HostedGraphite to graph / collect when certain Ring endpoints are hit, using a Future so that the main thread doesn’t have to wait for the TCP message send to complete. I’m posting the little function I wrote below.
It works great… messages are sent and received successfully about 40 times… then it looks like the futures stop firing, because println messages stop, TCP messages stop being sent/received.
Am I using futures incorrectly? I never dereference the future return value, but I didn’t think that mattered? (i.e., I suspect I’m exhausting some resource, like threads or something associated with futures??)
Many thanks in advance for any pointers, or feedback of any kind!
PS: I can’t believe how much fun I’m having programming again, because of Clojure. Thanks so much to this entire community! 🙂
(ns trello-workflow.hosted-graphite
  (:require [clj-tcp.client :as tcp]
            [clojure.spec.alpha :as s]
            [clojure.core.async :as async]))

(def hg-key "xxx")

(defn- gen-key [s]
  (let [full-metric (format "%s.%s" "trello-workflow" s)]
    full-metric))

(defn send-metric
  "input: metric-name val"
  [metric-name n]
  {:pre [(s/valid? string? metric-name)
         (s/valid? int? n)]}
  (let [hg-payload (format "%s.%s %d\n"
                           hg-key
                           (gen-key metric-name)
                           n)]
    (future
      (let [h (tcp/client "" 2003 {})]
        ;(println hg-payload)
        (println (format "send-metric: %s %d" metric-name n))
        (do
          (tcp/write! h (.getBytes hg-payload))
          (tcp/close-all h))))))
….just trying to think this through.. I’m using clj-tcp.client, which is asynchronous… Inside the future, I create the clj-tcp.client, send the message, and close the socket, which if I understand correctly, kills the thread…
And since no one derefs the future value, everything with the future should be gone, eligible for GC, etc…
Is my understanding correct? Thx!!!
closing a socket doesn't kill a thread, and futures use a threadpool, so once the future is finished running, the thread will be returned to the pool, not killed
it doesn't matter if you deref the future or not, it matters if you keep a reference to the future
but, garbage generated in the future will be eligible for garbage collection when there are no more references to it, regardless of if the future is still running or not
the one gotcha about not derefing a future is that futures hold on to exceptions until you access them, so not derefing a future can mean an exception is never seen
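For example, the exception only surfaces once the future is deref'd:
user=> (def f (future (/ 1 0)))
#'user/f
user=> @f   ; deref re-throws: ArithmeticException (Divide by zero), wrapped in an ExecutionException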
Thx so much @hiredman @noisesmith ... sorry for mangling terminology. I’m baffled by why the code block inside the future stops running after, say, 30 times... Based on what you said, threads should complete and go back to thread pool, there are no derefs... So, what could be going wrong? Thx!!
I want it to run every time a Ring handler is run. I put it inside future so that these calls are run in parallel, to let rest of Ring handler code run. I’d expect it to be able to run to completion every time, not stop after 30 times. :)
you wouldn't know since the future is never dereffed, and there's no try/catch
justin.smith@C02RW05WFVH6: ~$ clj
Clojure 1.9.0
(ins)user=> (def f (future (/ 1 0)))
#'user/f
(ins)user=> (System/exit 0)
justin.smith@C02RW05WFVH6: ~$
we all know that blew up, but there's no evidence
Thx @noisesmith !!
@noisesmith Sorry. I had to jump in car to pick up my kids. That was meant to be pronounced, “Ooooooh....” Uncaught exception!!! That could explain it! Thx for the suggestion. I can’t wait to try it out when I get home in a couple of hours. Want to take a guess at what exception is being thrown? :). I’m drawing a blank! :) I’ll let you know!
@noisesmith if threads have uncaught exceptions, that would explain behavior I’m seeing, right? Eventually, no threads left, defer’ed code never run, yes?
it won't use up threads, it just means you never see the error messages
Hmm... drats... well, more illumination will come when I deref those futures! :) Oddly excited to see what i find! :)
another option is (future (try .... (catch Exception e (println e))))
then you don't need to worry about the deref
If you feel like using Timbre for logging, it has a logged-future macro that behaves just like future except it also logs any exceptions (using color-coded stacktraces! 🙂 )
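A sketch of that, assuming Timbre (com.taoensso/timbre) is on the classpath:
(require '[taoensso.timbre :as timbre])

(timbre/logged-future
  (/ 1 0))   ; the ArithmeticException gets logged instead of sitting unseen in the future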
@cfeckardt In general, you get the most benefit from spec'ing functions that are at the boundaries of your systems -- so private functions wouldn't fall under that.
That said, if you have a private function that has some critical behavior you want to verify with generative testing then spec might be worthwhile.
There's also a school of thought that thinks private functions are a waste of time and using namespaces to organize what is your "API" and what is your implementation is a better way to go.
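As a concrete (made-up) example of spec'ing a boundary function and checking it generatively:
(require '[clojure.spec.alpha :as s]
         '[clojure.spec.test.alpha :as stest])

(s/def ::total number?)

;; hypothetical boundary function
(defn order-total [orders]
  (reduce + (map :total orders)))

(s/fdef order-total
  :args (s/cat :orders (s/coll-of (s/keys :req-un [::total])))
  :ret number?)

(stest/check `order-total)   ; generative testing against the spec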