#clojure
2015-07-20
mpenet07:07:16

Someday we might have the possibility to have more options for these, for instance passing our own threadpools to both go/thread (and possibly chan).

jrychter08:07:45

Just discovered that our app's uberjar contains compilers for Scala, ClojureScript and JavaScript (Closure and Rhino), as well as no less than five JSON libraries. To be clear: this is a Clojure-only application. This is getting out of hand. I expect it to become self-aware any moment now.

dottedmag08:07:56

jrychter: per Greenspun's tenth rule it should also contain an ad-hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp

dottedmag08:07:11

Clojure does not qualify :)

jrychter08:07:40

@dottedmag: I don't think Greenspun's tenth applies here, as all of the above libs surprised me. Greenspunning is when you intentionally write that code yourself.

jrychter08:07:13

I think the conclusion is I need to be much more careful with library dependencies, use :exclusions all over the place.
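
For reference, a sketch of what that looks like in a project.clj (the versions here are illustrative, not real coordinates):

```clojure
;; project.clj sketch: trimming transitive deps with :exclusions.
(defproject my-app "0.1.0-SNAPSHOT"
  :dependencies [;; clj-kafka transitively pulls in org.apache.kafka/kafka,
                 ;; which pulls in the Scala compiler; cut it off here.
                 [clj-kafka "0.3.2"
                  :exclusions [org.scala-lang/scala-compiler]]
                 ;; camel-snake-kebab pulled in com.keminglabs/cljx (and with
                 ;; it ClojureScript, Closure and Rhino); exclude it too.
                 [camel-snake-kebab "0.3.2"
                  :exclusions [com.keminglabs/cljx]]])
```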

jrychter08:07:49

But people who write libraries should also pay more attention to deps being pulled in. Sometimes they are only needed in development.

cfleming08:07:55

@jrychter: How did the Scala compiler get pulled in?

jrychter09:07:01

@cfleming: clj-kafka pulled in org.apache.kafka/kafka, which pulled in org.scala-lang/scala-compiler.

jrychter09:07:28

But there is a more amusing case: someone added testing code to one of our internal libraries, and added a dependency on camel-snake-kebab. Which pulled in com.keminglabs/cljx. Which pulled in ClojureScript, the Closure compiler, the Closure library and Rhino.

jrychter09:07:37

All for a single function, called once 🙂

cfleming09:07:02

Yeah, that sounds like a dep you could do without 🙂

jrychter09:07:11

Everyone should run lein deps :tree every once in a while.

dottedmag09:07:52

Also an acceptance test for uberjar size wouldn't hurt.

jrychter09:07:54

@dottedmag: True. At 105MB ours is a bit on the heavy side right now.

ordnungswidrig09:07:01

lein collisions is also a good idea to run on your project

robert-stuttaford09:07:05

i’m listening to datomic’s tx-report-queue in a thread and writing the tx results to an async chan. this all works. this happens inside the ‘start’ lifecycle phase of a Component. i now need to dispose of that thread in the ‘stop’ lifecycle phase. what’s the simplest/most idiomatic way to stop that thread?

robert-stuttaford09:07:12

can i create an atom outside the thread and deref the atom inside the while loop in the thread, and then reset! the atom to signal the thread?

ordnungswidrig09:07:31

I’d use a control channel

ordnungswidrig09:07:32

In the thread’s loop check if there’s a value on the channel.

robert-stuttaford09:07:27

wouldn’t reading from the channel block, though?

jrychter09:07:20

@ordnungswidrig: incidentally, lein-collisions pulls in its own interesting set of dependencies and lein deps :tree exclusion suggestions 🙂 (and yes, I do use it sometimes)

ordnungswidrig09:07:04

Aah, thanks for the hint.

robert-stuttaford09:07:04

hold on, can’t i just use a go block instead of thread?

ordnungswidrig09:07:06

you could, but thread is recommended if the body is io-bound

robert-stuttaford09:07:21

gah. this is not fitting into my brain right now. would you mind sharing a code snippet, ordnungswidrig?

robert-stuttaford09:07:08

specifically, i’m not sure how i’d combine the control-chan read with the while true loop i already have

robert-stuttaford09:07:30

at least, not without blocking the while loop due to blocking on trying to read from the control chan

ordnungswidrig10:07:42

@robert-stuttaford: untested, but the key is to use alts!! with a default

mpenet10:07:47

alts! with a single channel doesn't make any sense, no?

mpenet10:07:58

just use <!

mpenet10:07:14

ah nevermind, read half of it

robert-stuttaford10:07:52

aha, the trick is the default value for alts!

mpenet10:07:15

should be :default :continue tho no?

mpenet10:07:24

well with ::

robert-stuttaford10:07:27

thank you, ordnungswidrig – i’ll give that a go. presumably anything other than ::continue appearing on the channel will cause it to exit

ordnungswidrig10:07:05

yes, that’s the idea. While an atom would do, too I find this approach more useful.

robert-stuttaford10:07:24

yes, reading the docs suggests it should be (while (= ::continue (alts! [control-ch] :default ::continue)) …)

robert-stuttaford10:07:55

thank you, much appreciated!

robert-stuttaford12:07:10

@ordnungswidrig: fyi, the return value in that case is [::continue :default]
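
Assembled into one place, a sketch of the pattern under discussion, assuming core.async is on the classpath; `start-tx-consumer` and `process-next-tx` are hypothetical names, with `process-next-tx` standing in for the blocking read from tx-report-queue:

```clojure
(require '[clojure.core.async :refer [chan thread alts!! close!]])

(defn start-tx-consumer
  "Runs process-next-tx in a loop on a core.async thread until the
  returned stop fn is called. alts!! with :default returns
  [::continue :default] while the control channel is empty; closing
  the channel makes the take win with nil, so the loop exits."
  [process-next-tx]
  (let [control-ch (chan)]
    (thread
      (loop []
        (let [[v _port] (alts!! [control-ch] :default ::continue)]
          (when (= ::continue v)
            (process-next-tx)
            (recur)))))
    ;; return a stop fn for the Component's stop lifecycle phase
    #(close! control-ch)))
```

In a Component, `start` would stash the returned stop fn and `stop` would call it.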

potetm12:07:32

@robert-stuttaford: there's also poll! if you're on the latest version of core.async.

mpenet13:07:14

that would be on master, poll! is not part of the last released version I believe

potetm13:07:16

Ah, that is correct.

potetm13:07:36

I feel like that’s been sitting on master for a long time.

Lambda/Sierra13:07:05

poll! in its current implementation doesn't work correctly.

Lambda/Sierra13:07:40

No one has figured out how to fix it yet 🙂

Lambda/Sierra13:07:02

@robert-stuttaford: Instead of fully-blocking .take, use .take with a timeout. Loop until you get a "close" signal from somewhere else.
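
A sketch of that shape, using `.poll` with a timeout (the timed variant of take on java.util.concurrent.BlockingQueue). `consume-until-stopped`, `running?` and `handle-tx` are made-up names, and a plain LinkedBlockingQueue stands in for Datomic's tx-report-queue:

```clojure
(import '[java.util.concurrent BlockingQueue TimeUnit])

(defn consume-until-stopped
  "Reads items from queue, re-checking the running? atom every 100ms
  instead of blocking forever in .take, so a Component's stop phase
  can end the loop with (reset! running? false)."
  [^BlockingQueue queue running? handle-tx]
  (while @running?
    (when-let [item (.poll queue 100 TimeUnit/MILLISECONDS)]
      (handle-tx item))))
```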

Lambda/Sierra13:07:50

I don't recall exactly what was wrong with poll!, something to do with how consumers get notified when it completes.

mpenet13:07:42

wasn't this an issue with promise-chan? (maybe poll! as well, but it's the first time I hear about it)

Lambda/Sierra13:07:09

Oh, maybe I'm getting them mixed up. @mpenet

Lambda/Sierra13:07:16

Yeah, I think I am.

Lambda/Sierra13:07:53

But I'm not sure if the work on poll! was completed either.

mpenet13:07:37

btw there's a patch for the promise-chan issue I think: http://dev.clojure.org/jira/browse/ASYNC-124 at least it's related

mpenet13:07:20

no clue either, I didn't look into this stuff in a while

robert-stuttaford14:07:26

thanks stuartsierra

goodwind8914:07:35

Is there any clojure/clojurescript open source project that's welcoming to newbie contribution? I have used clojure for my personal project, but nothing at work yet.

marcofiset14:07:17

I would also like to contribute to some projects! I've got the basics down (I think :)) and would like to push further.

mpenet15:07:45

hmm spotted an oddity I think: case default expression is always evaluated even if doesn't match

mpenet15:07:28

nevermind, bad test

ljosa16:07:28

I’m trying to implement a version of hash-map that turns the key (which is always a string) into a keyword. But I need it to still be fast because of where it will be used. After benchmarking lots of “clever” approaches, I found that the following straightforward code is also the fastest:

(declare map-even)  ; map-odd and map-even are mutually recursive

(defn map-odd
  [f coll]
  (when-let [s (seq coll)]
    (cons (first s)
          (map-even f (rest s)))))

(defn map-even
  [f coll]
  (when-let [s (seq coll)]
    (cons (f (first s))
          (map-odd f (rest s)))))

(defn keywordize-hash-map
  [& keyvals]
  (apply hash-map (map-even keyword keyvals)))
On my laptop and my data (building hash-maps with about 15 keys), this takes 50% longer than hash-map. Only 10% of the extra time is spent in keyword. Any ideas for making map-even and map-odd (or the entire keywordize-hash-map) faster?
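
One more approach worth benchmarking (a sketch only, not measured against the data above): build the map in a single pass with loop/recur and a transient, skipping both the intermediate lazy seq and the apply. `keywordize-hash-map*` is a made-up name:

```clojure
(defn keywordize-hash-map*
  "Like keywordize-hash-map, but assoc!s each pair straight into a
  transient map instead of rebuilding the keyval seq and calling
  hash-map on it."
  [keyvals]
  (loop [m (transient {}) kvs (seq keyvals)]
    (if kvs
      (recur (assoc! m (keyword (first kvs)) (second kvs))
             (nnext kvs))
      (persistent! m))))
```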

pesterhazy16:07:38

@ljosa, have you tried warn-on-reflection ?

ljosa16:07:20

I didn’t know about it, but I set it to true now, recompiled the functions and ran them. Nothing was printed in the cider-repl buffer.

pesterhazy16:07:13

@ljosa: maybe try it from a bare lein repl; console output doesn't always end up where it should

ljosa16:07:46

I see a reflection warning when I evaluate (.toString 23) , so I think it’s working.
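
For reference, a minimal sketch of the flag in action; `shout` and `shout-hinted` are made-up example fns:

```clojure
;; Reflection warnings print to *err* at compile time whenever a Java
;; interop call can't be resolved to a direct method invocation.
(set! *warn-on-reflection* true)

(defn shout [s]
  (.toUpperCase s))        ; warns: type of s is unknown, uses reflection

(defn shout-hinted [^String s]
  (.toUpperCase s))        ; no warning: the hint makes it a direct call
```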

pesterhazy16:07:27

ok @ljosa then I'm at the end of my wits

ljosa16:07:35

thanks for playing 🙂

akiva16:07:14

@ljosa: I take it stuff like this is some of the ‘clever’ stuff you’ve already tried? (apply merge (map #(hash-map (keyword (first %1)) (second %1)) (partition 2 keyvals)))

mpenet16:07:28

ljosa: did you try clojure.walk/keywordize-keys ?

mpenet16:07:41

probably slower, but prolly worth checking

ljosa16:07:37

@akiva: I tried some variants with (into {} … (partition 2 …)). In my hands, your variant is about five times as slow as my code above.

akiva16:07:48

Interesting!

surreal.analysis16:07:46

3x slower according to criterium, but this was my attempt:

(defn keywordize [coll] 
  (let [keys (take-nth 2 coll) 
        values (take-nth 2 (rest coll))] 
    (interleave (map keyword keys) values)))

ljosa16:07:46

@mpenet: yes, I tried it, but it was much slower (because it does it recursively).

profil16:07:13

@ljosa: how are you timing the execution?

ljosa16:07:57

@profil: criterium.core/bench. and I have -server in :jvm-opts.

mpenet16:07:37

fastest way is not to do it: e.g. having a special map type where you can get using a kw but it looks up the string key instead

mpenet16:07:51

def-map-type in potemkin could do that I think

ljosa16:07:59

@mpenet: interesting!

mpenet16:07:15

or a simple wrapper fn...

akiva16:07:24

That was going to be my next question: why did the map key have to be a keyword?

ljosa16:07:07

It doesn’t have to, but it makes much of the rest of the code clearer and more maintainable.

akiva16:07:09

Understood; I prefer keywords as well.

akiva16:07:01

A wrapper would get you past the bottleneck but then you’d have to use a special keyword->string fn just to talk to this one data structure.

tbaldridge17:07:02

Hey everyone, finally got around to joining! Awesome to see so many people here.

ljosa17:07:00

Does anyone know which Clojure libraries for accessing redis are fast? I’ve been using Carmine, but it turns out to be twice as slow as the python redis library for my use case.

roberto17:07:05

@tbaldridge: how different is the CSP course on safari from the CSP course on your personal video series?

mpenet17:07:55

ljosa: that's surprising, carmine used to be very fast

mpenet17:07:13

I think it does nippy serialization out of the box, you might have to turn this off to do correct speed tests

ljosa17:07:13

@mpenet: oh, it’s possible to turn it off? that should help. I don’t need any of the fancy stuff.

tbaldridge17:07:41

@roberto: it's more professionally produced and has an actual lesson plan. My personal series is much more "hey, listen to me ramble about programming for awhile".

roberto17:07:44

ok, thank you. That was very helpful. I love your series btw.

mpenet17:07:12

ljosa: read the part about serialisation in the readme

mpenet17:07:41

ljosa: it applies magic here and there when it makes sense. but you can turn it off and do your own thing yes

ljosa17:07:34

@mpenet: I did read it, but I don’t see how to turn it off for reads. (I only do reads in the part of the code that I care the most about.)

mpenet18:07:22

Hello tbaldridge, welcome

mpenet18:07:12

tbaldridge: I wanted to ping you about the core.async stuff eventually, get some more feedback especially about the "chan" executors and the way to handle these (at "chan" level or put!/take!, if that even makes sense)

mpenet18:07:40

I am super busy at work and I kind of lost touch with all these things since I sent that patch

mpenet18:07:58

(ASYNC-94)

cfleming20:07:32

Hey @tbaldridge, nice to see you on here.

magnars20:07:50

I'm running my server from the repl, using reloaded workflow - best way to get println (or some log info) to display there?

Lambda/Sierra20:07:58

@magnars: I recommend using a logging framework and watching the log file.

magnars20:07:25

thanks, that's been my workaround until now. I'll keep on keeping on then. 👍

Lambda/Sierra20:07:47

There are just too many things potentially trying to capture and reroute standard output to make it a reliable log.

magnars20:07:04

makes sense!

voxdolo22:07:19

anyone deploying pre-built uberjars to heroku for production apps? If so: how are you going about it?

voxdolo22:07:40

we're getting bit by slow bootstrapping and deploy timeouts 😕

voxdolo22:07:59

generally means at least 2 minutes of downtime per deploy.

voxdolo22:07:55

all's hunky-dory afterward, but build timeouts aren't good.

danielcompton22:07:23

@voxdolo: We’re building in the buildpack phase, then running the JAR at runtime

danielcompton22:07:42

@voxdolo: it sounds like you’re building the uberjar at runtime?

danielcompton22:07:10

Take a look at Preboot, this may be able to help somewhat https://devcenter.heroku.com/articles/preboot

danielcompton22:07:26

So are you getting build timeouts or start timeouts?

voxdolo22:07:36

start timeouts… sorry 😕

danielcompton22:07:13

@voxdolo: first thing is to check why its taking so long to bootstrap. Are all of your dependencies necessary in prod?

danielcompton22:07:30

You could look at AOTing it at build time

danielcompton22:07:00

that would cut down on bootstrap time but potentially raise the number of AOT bugs you have to deal with
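
A sketch of what that looks like in a project.clj (profile layout illustrative): keeping :aot inside the :uberjar profile means only the deployed artifact is AOT-compiled, so the dev/REPL workflow stays unaffected.

```clojure
(defproject my-app "0.1.0-SNAPSHOT"
  :main my-app.core
  ;; AOT-compile only when building the uberjar, not in dev
  :profiles {:uberjar {:aot :all}})
```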

voxdolo22:07:02

the dev/test dependencies are explicitly marked as such, so I think we're good from the deps POV

voxdolo22:07:33

haven't looked seriously at AOTing

voxdolo22:07:58

we removed our internal Java library for the project and the AOT went with it… though we were seeing these boot timeouts then too.

voxdolo22:07:07

it's worth investigating again though

danielcompton22:07:35

60 seconds is quite a long time, are there things that can be delayed loading?

voxdolo22:07:10

certainly… I'd like to measure before I cut though. Any suggestions on how to profile startup time?

voxdolo22:07:28

I have about a 20-25 second boot time on my local machine

voxdolo22:07:27

the instance is a standard 2x, at 1024MB… so that's clearly going to be slower.

danielcompton22:07:33

I’m not sure what’s the best way to profile startup time, I know @tcrayford added a patch to Clojure to report timing for ns loading

voxdolo22:07:53

okay, I'll dig around and see if I can find it

danielcompton22:07:03

You can just wrap (time …) around things to get timings

voxdolo22:07:39

ah, true 🙂 needs more println debugging 😄

voxdolo22:07:53

s/more/moar/

voxdolo23:07:31

@danielcompton: are you using a java agent? specifically the NewRelic Java Agent? Locally at least, that and the bootstrap seem to be the lion's share of the boot time… the actual app initializes in just under a second.

danielcompton23:07:44

@voxdolo: oh yeah I gave up trying to use that

danielcompton23:07:52

It totally killed my startup time

danielcompton23:07:06

F-- would not use again

voxdolo23:07:20

hah 🙂 okay, that is definitely contributing… but again, it was already timing out before we started using newrelic.

voxdolo23:07:41

And after it's helped me figure out some tricky runtime performance issues, I'm loathe to give it up

danielcompton23:07:10

If your friends are dragging you down, you’ve got to let them go 🙂

voxdolo23:07:37

is it fair to say that making the boot time faster locally should have a positive effect on deploy time also?

voxdolo23:07:59

if so, I can try to go down the AOT path and see how far that gets me