2015-07-20
Someday we might have more options for these, for instance passing our own threadpools to both go/thread (and possibly chan).
Just discovered that our app's uberjar contains compilers for Scala, ClojureScript and JavaScript (Closure and Rhino), as well as no less than five JSON libraries. To be clear: this is a Clojure-only application. This is getting out of hand. I expect it to become self-aware any moment now.
jrychter: per Greenspun's tenth rule it should also contain an ad-hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp
@dottedmag: I don't think Greenspun's tenth applies here, as all of the above libs surprised me. Greenspunning is when you intentionally write that code yourself.
I think the conclusion is that I need to be much more careful with library dependencies and use :exclusions all over the place.
But people who write libraries should also pay more attention to deps being pulled in. Sometimes they are only needed in development.
@cfleming: clj-kafka pulled in org.apache.kafka/kafka, which pulled in org.scala-lang/scala-compiler.
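For reference, a minimal sketch of what such an exclusion looks like in a Leiningen project.clj (the version strings are placeholders, not from the discussion):

;; hypothetical project.clj fragment: keep clj-kafka but drop the Scala
;; compiler it pulls in transitively via org.apache.kafka/kafka
:dependencies [[org.clojure/clojure "1.7.0"]
               [clj-kafka "0.3.0" :exclusions [org.scala-lang/scala-compiler]]]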
But there is a more amusing case: someone added testing code to one of our internal libraries, and added a dependency on camel-snake-kebab. Which pulled in com.keminglabs/cljx. Which pulled in ClojureScript, the Closure compiler, the Closure library, and Rhino.
@dottedmag: True. At 105MB ours is a bit on the heavy side right now.
lein collisions is also a good idea to run on your project
i’m listening to datomic’s tx-report-queue in a thread and writing the tx results to an async chan. this all works. this happens inside the ‘start’ lifecycle phase of a Component. i now need to dispose of that thread in the ‘stop’ lifecycle phase. what’s the simplest/most idiomatic way to stop that thread?
can i create an atom outside the thread and deref the atom inside the while loop in the thread, and then reset! the atom to signal the thread?
I’d use a control channel
In the thread’s loop check if there’s a value on the channel.
wouldn’t reading from the channel block, though?
@ordnungswidrig: incidentally, lein-collisions pulls in its own interesting set of dependencies and lein tree exclusion suggestions (and yes, I do use it sometimes)
Aah, thanks for the hint.
hold on, can’t i just use a go block instead of thread?
you could, but thread is recommended if the body is io-bound
gah. this is not fitting into my brain right now. would you mind sharing a code snippet, ordnungswidrig?
specifically, i’m not sure how i’d combine the control-chan read with the while true
loop i already have
at least, not without blocking the while loop due to blocking on trying to read from the control chan
@robert-stuttaford: untested, but the key is to use alt!! with a default
aha, the trick is the default value for alts!
thank you, ordnungswidrig – i’ll give that a go. presumably anything other than ::continue appearing on the channel will cause it to exit
yes, that’s the idea. While an atom would do, too I find this approach more useful.
yes, reading the docs suggests it should be (while (= ::continue (alts! [control-ch] :default ::continue)) …)
thank you, much appreciated!
@robert-stuttaford: oh, you’re right.
@ordnungswidrig: fyi, the return value in that case is [::continue :default]
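Putting this subthread together, a rough sketch of the stoppable consumer loop, using alts!! (the blocking variant, since this runs in a thread rather than a go block); the function and channel names are made up for illustration:

(require '[clojure.core.async :refer [chan thread alts!! close!]])

;; With :default, alts!! returns [::continue :default] when the control
;; channel is empty, so the loop keeps going until something arrives on
;; (or closes) the control channel.
(defn start-consumer [do-work!]            ;; do-work! stands in for the tx-report handling
  (let [control-ch (chan)]
    (thread
      (while (= ::continue
                (first (alts!! [control-ch] :default ::continue)))
        (do-work!)))
    control-ch))

(defn stop-consumer [control-ch]
  ;; a closed channel yields [nil control-ch] from alts!!, ending the loop
  (close! control-ch))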
@robert-stuttaford there's also poll!, if you're on the latest version of core.async.
poll! in its current implementation doesn't work correctly. No one has figured out how to fix it yet.
@robert-stuttaford: Instead of a fully-blocking .take, use .take with a timeout. Loop until you get a "close" signal from somewhere else.
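On java.util.concurrent.BlockingQueue, which Datomic's tx-report-queue returns, the timed form of take is poll with a timeout; a rough sketch of the loop being described, with made-up names (running? is an atom flipped by the Component's stop phase):

(require '[clojure.core.async :as async])
(import '(java.util.concurrent BlockingQueue TimeUnit))

(defn consume-tx-reports [^BlockingQueue queue out-ch running?]
  (future
    (while @running?
      ;; timed take: returns nil if nothing arrived within the timeout,
      ;; which gives us a chance to re-check the running? flag
      (when-let [tx-report (.poll queue 1 TimeUnit/SECONDS)]
        (async/>!! out-ch tx-report)))))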
I don't recall exactly what was wrong with poll!, something to do with how consumers get notified when it completes.
wasn't this an issue with promise-chan? (maybe poll! as well, but it's the first time I've heard about it)
Oh, maybe I'm getting them mixed up. @mpenet
Yeah, I think I am.
Ignore me
But I'm not sure if the work on poll! was completed either.
btw there's a patch for the promise-chan issue I think: http://dev.clojure.org/jira/browse/ASYNC-124 at least it's related
thanks stuartsierra
Is there any clojure/clojurescript open source project that's welcoming to newbie contribution? I have used clojure for my personal project, but nothing at work yet.
I would also like to contribute to some projects! I've got the basics down (I think :)) and would like to push further.
hmm, spotted an oddity I think: the case default expression is always evaluated even if it doesn't match
I’m trying to implement a version of hash-map that turns the key (which is always a string) into a keyword. But I need it to still be fast because of where it will be used. After benchmarking lots of “clever” approaches, I found that the following straightforward code is also the fastest:
(declare map-even)  ;; needed: map-odd refers to map-even before it is defined

(defn map-odd
  [f coll]
  (when-let [s (seq coll)]
    (cons (first s)
          (map-even f (rest s)))))

(defn map-even
  [f coll]
  (when-let [s (seq coll)]
    (cons (f (first s))
          (map-odd f (rest s)))))

(defn keywordize-hash-map
  [& keyvals]
  (apply hash-map (map-even keyword keyvals)))
On my laptop and my data (building hash-maps with about 15 keys), this takes 50% longer than hash-map. Only 10% of the extra time is spent in keyword. Any ideas for making map-even and map-odd (or the entire keywordize-hash-map) faster?
@ljosa, have you tried warn-on-reflection?
I didn’t know about it, but I set it to true now, recompiled the functions, and ran them. Nothing was printed in the cider-repl buffer.
@ljosa: maybe try it from a bare lein repl; console output doesn't always end up where it should
ok @ljosa, then I'm at the end of my wits
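For reference, the bare-REPL check being suggested looks roughly like this (the namespace name is a placeholder):

;; in a plain `lein repl`, before reloading the code under test
(set! *warn-on-reflection* true)
(require 'myapp.keywordize :reload)  ;; hypothetical namespace; reflection warnings print here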
@ljosa: I take it stuff like this is some of the ‘clever’ stuff you’ve already tried? (apply merge (map #(hash-map (keyword (first %1)) (second %1)) (partition 2 keyvals)))
@akiva: I tried some variants with (into {} … (partition 2 …)). In my hands, your variant is about five times as slow as my code above.
3x slower according to criterium, but this was my attempt:
(defn keywordize [coll]
  (let [keys   (take-nth 2 coll)
        values (take-nth 2 (rest coll))]
    (interleave (map keyword keys) values)))
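Another direction not tried in the thread: skip the intermediate seq entirely and build the map with a transient (a hypothetical sketch, not benchmarked here):

;; walk the key/value pairs once, keywordizing keys and assoc!-ing
;; straight into a transient map
(defn keywordize-hash-map-2
  [& keyvals]
  (loop [m (transient {}) kvs (seq keyvals)]
    (if kvs
      (recur (assoc! m (keyword (first kvs)) (second kvs)) (nnext kvs))
      (persistent! m))))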
the fastest way is not to do it at all: e.g. have a special map type where you can look things up with a keyword but it looks up the string key underneath
It doesn’t have to, but it makes much of the rest of the code clearer and more maintainable.
A wrapper would get you past the bottleneck, but then you’d have to use a special keyword->string fn just to talk to this one data structure.
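For what it's worth, a minimal sketch of the "don't convert at all" idea; the type name is made up and only lookup is implemented (a real version would also need the other map interfaces):

;; keyword lookups are translated to string lookups on the wrapped map,
;; so no keys are ever converted up front
(deftype KeywordView [m]
  clojure.lang.ILookup
  (valAt [_ k]
    (get m (if (keyword? k) (name k) k)))
  (valAt [_ k not-found]
    (get m (if (keyword? k) (name k) k) not-found)))

(def v (KeywordView. {"user" "ljosa" "count" 15}))
(:user v)   ;; => "ljosa"
(:count v)  ;; => 15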
Hey everyone, finally got around to joining! Awesome to see so many people here.
Does anyone know which Clojure libraries for accessing redis are fast? I’ve been using Carmine, but it turns out to be twice as slow as the python redis library for my use case.
@tbaldridge: how different is the CSP course on safari from the CSP course on your personal video series?
I think it does nippy serialization out of the box, you might have to turn this off to do correct speed tests
@mpenet: oh, it’s possible to turn it off? that should help. I don’t need any of the fancy stuff.
@roberto: it's more professionally produced and has an actual lesson plan. My personal series is much more "hey, listen to me ramble about programming for awhile".
@tbaldridge: hey! welcome!
ljosa: it applies magic here and there when it makes sense. but you can turn it off and do your own thing yes
@mpenet: I did read it, but I don’t see how to turn it off for reads. (I only do reads in the part of the code that I care the most about.)
tbaldridge: I wanted to ping you about the core.async stuff eventually, get some more feedback especially about the "chan" executors and the way to handle these (at "chan" level or put!/take!, if that even makes sense)
I am super busy at work and I kind of lost touch with all these things since I sent that patch
Hey @tbaldridge, nice to see you on here.
I'm running my server from the repl, using the reloaded workflow. What's the best way to get println (or some log info) to display there?
@magnars: I recommend using a logging framework and watching the log file.
There are just too many things potentially trying to capture and reroute standard output to make it a reliable log.
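One common shape for that, assuming clojure.tools.logging with whatever logback/SLF4J backend is already on the classpath (an assumption, since any logging framework would do):

(require '[clojure.tools.logging :as log])

;; goes to the configured log file/appender instead of whichever
;; REPL happens to own *out* at the moment
(log/info "server started on port" 3000)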
anyone deploying pre-built uberjars to heroku for production apps? If so: how are you going about it?
@voxdolo: We’re building in the buildpack phase, then running the JAR at runtime
@voxdolo: it sounds like you’re building the uberjar at runtime?
Take a look at Preboot; it may be able to help somewhat: https://devcenter.heroku.com/articles/preboot
@danielcompton: nope, we're building at buildpack phase… https://github.com/Suiteness/heroku-buildpack-clojure/blob/master/bin/compile#L47
So are you getting build timeouts or start timeouts?
@voxdolo: first thing is to check why it's taking so long to bootstrap. Are all of your dependencies necessary in prod?
You could look at AOTing it at build time
that would cut down on bootstrap time but potentially raise the number of AOT bugs you have to deal with
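A sketch of the usual Leiningen arrangement for that, so AOT only happens when the uberjar is built and the dev REPL stays un-AOTed (main namespace name is hypothetical):

;; project.clj fragment
:main ^:skip-aot myapp.core
:profiles {:uberjar {:aot :all}}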
the dev/test dependencies are explicitly marked as such, so I think we're good from the deps POV
we removed our internal Java library for the project and the AOT went with it… though we were seeing these boot timeouts then too.
60 seconds is quite a long time; are there things whose loading can be delayed?
certainly… I'd like to measure before I cut though. Any suggestions on how to profile startup time?
I’m not sure what the best way to profile startup time is; I know @tcrayford added a patch to Clojure to report timing for ns loading
You can just wrap (time …) around things to get timings
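For example, wrapping the big chunks of boot (namespace and function names are placeholders):

(time (require 'myapp.main))        ;; how long does loading the code take?
(time (myapp.main/start-system!))   ;; hypothetical start fn: how long does starting take?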
@danielcompton: are you using a java agent? specifically the NewRelic Java Agent? Locally at least, that and the bootstrap seem to be the lion's share of the boot time… the actual app initializes in just under a second.
@voxdolo: oh yeah I gave up trying to use that
It totally killed my startup time
F-- would not use again
hah okay, that is definitely contributing… but again, it was already timing out before we started using newrelic.
And after it's helped me figure out some tricky runtime performance issues, I'm loathe to give it up
If your friends are dragging you down, you’ve got to let them go