2019-02-14
Hi, can anyone offer any thoughts on the following problem? I want to execute an arbitrary function with a timeout: I want the result only if it completes before the timeout; after the timeout I want nil.
You could use future and deref with a timeout; e.g.
(deref (future (Thread/sleep 10) 999) 9 nil)
;; => nil
(deref (future (Thread/sleep 10) 999) 11 nil)
;; => 999
You can also use future-cancel to try to cancel the future (might not be possible depending on what it's doing)
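Putting those pieces together, a rough sketch of a helper (call-with-timeout is just a made-up name for illustration):
;; Run f on another thread; return its result if it finishes within
;; timeout-ms, otherwise cancel the future and return nil.
(defn call-with-timeout [timeout-ms f]
  (let [fut    (future (f))
        result (deref fut timeout-ms ::timed-out)]
    (if (= result ::timed-out)
      (do (future-cancel fut) nil)
      result)))

(call-with-timeout 50 #(do (Thread/sleep 10) :done))  ;; => :done
(call-with-timeout 5  #(do (Thread/sleep 100) :done)) ;; => nil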
@U06BE1L6T Thanks, will give it a try
I've never needed caching in a Clojure program before, so I need some guidance.
I have a collection of entities (maps) that are calculated via some expensive functions that fetch data from the database. Basically, I want to keep it in a cache as a big map with entity-id as keys and the corresponding hash-maps as values. I looked up idiomatic ways to handle caching in Clojure; the suggestion is to use clojure.core.cache and atoms. It seems to me that none of the eviction policies are suitable for my case, so I resorted to using BasicCache. In addition to my "main" cache, implemented as an atom which contains the plain entries, I want to have a nested lookup map with the same entities grouped by key for fast access. I want to store this derived lookup map in another atom for future use, rather than calculating it every time (it's over a million values).
I have two questions:
1) Should I use clojure.core.cache's BasicCache? It seems to me that it's just a wrapper over a regular map; what benefit do I get from using it?
2) Is it OK or idiomatic to use add-watch as a way to keep two atoms in sync, so that the derived lookup map in the second atom is always relevant and in sync whenever the plain data in the first atom changes?
Do I have to set the multipart params key to true in the site-defaults map in Compojure in order to upload a file? Edit: it seems the answer is no.
Keeping two atoms in sync (sort of depending on what you mean by "in sync") is bad; just use one atom
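For what it's worth, a rough sketch of the one-atom approach (the entity shape and the :group key here are made up; adapt to your data): keep the plain entries and the derived lookup in the same map, so a single swap! updates both consistently.
;; one atom holding both the base entries and the derived lookup
(defonce cache (atom {:entities {} :by-group {}}))

;; new-entities is a map of entity-id -> entity map (hypothetical shape)
(defn add-entities [state new-entities]
  (let [entities (merge (:entities state) new-entities)]
    {:entities entities
     :by-group (group-by :group (vals entities))}))

(swap! cache add-entities {1 {:id 1 :group :a}
                           2 {:id 2 :group :b}})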
Hi 👋
I have just started learning Clojure and have a couple of beginner questions. Say I have a vector [1 2 3 4 5] and I want to create another vector by combining successive pairs of the previous one:
[[1 2] [2 3] [3 4] [4 5]]
1. Does Clojure have or commonly use tuples?
2. I used partition and it returns a lazy sequence:
(partition 2 1 [1 2 3 4 5])
;; => ((1 2) (2 3) (3 4) (4 5))
Is there a way to get a vector from it?
3. Since I don't know how to write idiomatic Clojure code yet, my code has a lot of functions that take a vector, do some computation, and return a list, and then some that take a list, do something, and return a vector, going back and forth, which feels awkward (and surely can't be good for performance). When should I use a list vs a vector?
Is it fine to use mapv a lot?
1) Re tuples: data structures are immutable, so vectors are essentially tuples.
2) You can easily achieve this like so: (map vector coll (rest coll))
A lot of Clojure functions return sequences. As long as you don't care about it being a vector, list, etc., you should just work with that. When you do need a vector, you can use into, vec or vector to coerce the sequence.
An alternative to the sequence abstraction is to use transducers which will not create intermediate sequences.
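For the partition example above, a couple of ways to end up with vectors (just a quick sketch):
(mapv vec (partition 2 1 [1 2 3 4 5]))          ;; => [[1 2] [2 3] [3 4] [4 5]]
(into [] (map vec) (partition 2 1 [1 2 3 4 5])) ;; same result, via a transducer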
Okay, if I understand correctly I can use sequences for all computation until I need something specific from a vector, like random access; only then do I convert to a vector.
> when to use list vs vector
It depends on what you need from your data structure. If you just need sequential data, then a list is fine. If you need random access by index, obviously a vector is the way to go. Also, think about when these constraints are true. Perhaps you only need access by index after a bunch of processing; then the last step would be to make vectors appropriately, rather than maintaining the vector type while you add, splice, filter, etc. your data.
E.g. (filterv odd? (map inc [1 2 3]))
or (into [] (comp (map inc) (filter odd?)) [1 2 3])
both return the same thing
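(Both evaluate to [3] for that input, for what it's worth.)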
Hey all. I've run into a bit of an interesting problem. Some time within the last day or two of dev my repl has become extremely slow to start, i.e. on the order of minutes. I've tried rolling back commits / removing libraries, but so far I can't seem to narrow it down. I do see that one of my cores is pegged at 100%. In the interim I do get some logging from SLF4J, but nothing meaningful. Does anyone have ideas about how I would go about debugging this?
Actually, just saw the thread in the tools-deps channel
Looks promising 🙂
I'm using openjdk 11.0.2 though, so perhaps not?
what kind of repl is it? do some ctrl-\ to see if the stack traces show you anything interesting
Just a plain ol' clojure repl? Booting up again to see if ctrl-\ yields anything. I'm using the java-time library mentioned in #tools-deps too
@alexmiller i've seen you mention ctrl-\ before. what is this? some kind of interrupt? who "watches" for it? clojure or jvm?
or you can use jstack
or ctrl-break on windows
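(For reference, assuming the JDK tools are on your PATH: jps lists the running JVM pids, and jstack <pid> prints the same sort of thread dump.)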
I presume this looks standard for the main thread? It's surprising how long it has been running for.
"main" #1 prio=5 os_prio=0 cpu=106657.69ms elapsed=217.69s tid=0x00007efd6c017000 nid=0x2d50 runnable [0x00007efd73323000]
java.lang.Thread.State: RUNNABLE
at java.io.FileInputStream.readBytes(java.base@11.0.2/Native Method)
at java.io.FileInputStream.read(java.base@11.0.2/FileInputStream.java:279)
at java.io.BufferedInputStream.read1(java.base@11.0.2/BufferedInputStream.java:290)
at java.io.BufferedInputStream.read(java.base@11.0.2/BufferedInputStream.java:351)
- locked <0x000000060aa47128> (a java.io.BufferedInputStream)
at sun.nio.cs.StreamDecoder.readBytes(java.base@11.0.2/StreamDecoder.java:284)
at sun.nio.cs.StreamDecoder.implRead(java.base@11.0.2/StreamDecoder.java:326)
at sun.nio.cs.StreamDecoder.read(java.base@11.0.2/StreamDecoder.java:178)
- locked <0x000000060acf15d0> (a java.io.InputStreamReader)
at java.io.InputStreamReader.read(java.base@11.0.2/InputStreamReader.java:185)
at java.io.BufferedReader.fill(java.base@11.0.2/BufferedReader.java:161)
at java.io.BufferedReader.read(java.base@11.0.2/BufferedReader.java:182)
- locked <0x000000060acf15d0> (a java.io.InputStreamReader)
at java.io.LineNumberReader.read(java.base@11.0.2/LineNumberReader.java:126)
- locked <0x000000060acf15d0> (a java.io.InputStreamReader)
at java.io.FilterReader.read(java.base@11.0.2/FilterReader.java:65)
at java.io.PushbackReader.read(java.base@11.0.2/PushbackReader.java:90)
- locked <0x000000060acf1590> (a java.io.LineNumberReader)
at clojure.lang.LineNumberingPushbackReader.read(LineNumberingPushbackReader.java:66)
at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(java.base@11.0.2/Native Method)
at jdk.internal.reflect.NativeMethodAccessorImpl.invoke(java.base@11.0.2/NativeMethodAccessorImpl.java:62)
at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(java.base@11.0.2/DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(java.base@11.0.2/Method.java:566)
at clojure.lang.Reflector.invokeMatchingMethod(Reflector.java:167)
at clojure.lang.Reflector.invokeNoArgInstanceMember(Reflector.java:438)
at clojure.main$skip_whitespace.invokeStatic(main.clj:130)
at clojure.main$repl_read.invokeStatic(main.clj:164)
at clojure.main$repl_read.invoke(main.clj:152)
at clojure.main$repl$read_eval_print__9068$fn__9069.invoke(main.clj:410)
at clojure.main$repl$read_eval_print__9068.invoke(main.clj:409)
at clojure.main$repl$fn__9077.invoke(main.clj:435)
at clojure.main$repl.invokeStatic(main.clj:435)
at clojure.main$repl_opt.invokeStatic(main.clj:499)
at clojure.main$main.invokeStatic(main.clj:598)
at clojure.main$main.doInvoke(main.clj:561)
at clojure.lang.RestFn.invoke(RestFn.java:397)
at clojure.lang.AFn.applyToHelper(AFn.java:152)
at clojure.lang.RestFn.applyTo(RestFn.java:132)
at clojure.lang.Var.applyTo(Var.java:705)
at clojure.main.main(main.java:37)
Beyond that, @alexmiller if it matters, I'm using a pinned branch of tools.deps (w/ add-lib) - I don't know much about how java / clojure builds work yet (I've managed to avoid the JVM somehow during my career)
by "plain ol' clojure repl" are you using clj, or invoking clojure.main some other way?
clojure from the command line (`cider-jack-in` is equally slow)
here you're actually in the repl, reading stdin, so this is not startup time
Yeah, it's really all before that (ie. the 217s elapsed) it seems
you need to do the stack traces earlier I think
I didn't realize I could do it before the repl had loaded
will re-run
It looks like it's just actually compiling code?
looks like core.async go blocks
well the main thing likely to have changed recently is your code :)
Indeed
This project is ~800 lines, so this is a bit surprising... I'll try and revert further back and see if I'm doing something silly.
yeah, that seems weird
I will note that the reason you see this on startup is because of something in your user.clj
if you don't have stuff in user.clj, it shouldn't have to compile anything to get to a repl
Well, I'm requiring the project's main for a "reloaded" workflow (via integrant)
but it's still surprising as this hasn't been an issue earlier in the project's life-cycle
well, might want to think about what you have changed that affects what's loaded on startup
Yeah, I'm going to revert to a week ago
if you have some big chunks of code in a core.async go macro, that's a giant macro (that is basically a mini analyzer)
It's a 1 liner that calls a function, I believe
well, that doesn't seem bad
It looks like it's pathom
but it's surprising to me, because that wasn't an issue earlier - and libraries get cached don't they?
they do, but this is compiling your code, and that's not cached
(although we have some science experiments around this in the lab :)
(ns cryptonic.graph
"Pathom based query graph for Cryptonic"
(:require [com.wsscode.pathom.core :as p]
[com.wsscode.pathom.connect :as pc]
[duct.logger :as logger]
[integrant.core :as ig]
[utopia.integrant.db :as db]
[cryptonic.store.candle :as store.candle]
[clojure.spec.alpha :as s]
[clojure.core.async :refer [<!!]]))
(pc/defresolver hello-world [env input]
{::pc/input #{}
::pc/output [:hello-world]}
(let [user-name (-> env :ast :params :user-name)]
{:hello-world (str "Hello " user-name)}))
(def ^:private registry [hello-world])
;; -- Component Boilerplate
(s/def ::graph (s/keys :req-un [::logger/log
::store.candle/candle-store
::db/db]))
(defmethod ig/init-key :cryptonic/graph [_ {:keys [log]}]
(logger/log log :info ::starting-component)
(p/parallel-parser
{::p/env {::p/reader [p/map-reader
pc/parallel-reader
pc/open-ident-reader
p/env-placeholder-reader]
::p/placeholder-prefixes #{">"}
:log log}
::p/mutate pc/mutate-async
::p/plugins [(pc/connect-plugin {::pc/register registry})
p/error-handler-plugin
p/request-cache-plugin
p/trace-plugin]}))
(defmethod ig/pre-init-spec :cryptonic/graph [_]
::graph)
(defn query
[this ctx query]
(<!! (this ctx query)))
(comment
(query (user/find-running :cryptonic/graph)
{}
'[(:hello-world {:user-name "Ryan"})]))
That's the code that causes it to go into minutes of compiling
But it definitely didn't used to
Confirmed that commenting out the pathom related bits speeds it back up.
I'm going to see if requiring pathom takes a long time
No, weeks ago
and it worked fine, weeks ago (same version too)
Requiring it in repl takes about 7s
which isn't fast, but it's not minutes either
So, quite interesting....
The deps chain is as follows: user.clj -> cryptonic.main, which has an ig/load-namespaces that attempts to load namespaces based off of keywords in a config.edn, and ultimately resolves to cryptonic.graph, which, if it just requires pathom, takes minutes to compile (even if no functions are used from pathom at all).
I'm wondering if ig/load-namespaces is somehow causing pathom to be thought of as "my code" instead of library code. That'd also explain why we found the compiler toiling around in async/go macros (despite the fact that I only have one, and it's quite simple).
None of this explains why I didn't see it earlier, but at least it's something
I'm grasping at straws, undoubtedly
It isn't grasping at straws; it's making up a feature the compiler doesn't have in order to explain why what you're seeing doesn't match your mental model of what you should be seeing.
Do you have ideas that fit your mental model given what I'm seeing?
but when you load the clojure code it all passes through the compiler, without distinction
lot of spec stuff in there too
so there is nothing weird about seeing the compiler spending time compiling and macro expanding library code
in pathom that is, which seems to be pretty macro heavy
but if I require pathom via a loaded repl, it takes ~7s
Not multiple minutes
I suspect it's not pathom itself, it's pathom applied to your code
If I comment out all pathom calls, and just require the namespaces, it still takes minutes to launch a repl - I'm not sure how it's being "applied to my code" in that case?
(Not trying to be disagreeable or anything here, I just don't understand why requiring the library would cause it - definitely a shortcoming in my mental model)
well, then I guess not. I don't have all your context.
I don't know anything about cryptonic, but you're declaring defmethods, which change the state of the runtime by installing new polymorphic behaviors
so the question is, why does loading a pathom namespace take X time in the repl and Y time when requiring it from a source file
sorry, not cryptonic, I guess that's integrant
how are you launching the repl when requiring pathom when it takes a short amount of time to load vs. a long amount of time?
In both cases, I am using clojure; in one case I have removed pathom from the requires, and then call (require)
as a general idea I would either try reducing the delta between the cases that are "fast" and "slow", or if you have commits, bisect to narrow down a specific change where the perf changed
I've got it down such that removing those two lines (the require of the pathom libs) will change it from seconds to minutes
I'm not familiar with what a verbose require is
(googling)
(also, make sure you are starting clojure from the command line, no CIDER in the mix)
or [com.wsscode.pathom.core :as p :verbose true] will work I think
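For what it's worth, the flag form of require also prints each namespace as it loads (a quick sketch; this is the form I'm sure about):
(require '[com.wsscode.pathom.connect] :verbose)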
@hiredman is the idea to diff the output of the two?
A ML powered crypto bot I'm working on
I assume it's self aware
integrant does - which attempts to load namespaces via keys in a map
(but, to make sure it wasn't that, I got rid of that call and required the module directly)
you may also want to launch a slow and a fast repl and check to see if anything in (System/getProperties) varies between them
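e.g. something like this in each repl, then diff the two outputs (a minimal sketch):
;; dump the system properties in a stable order so two runs are easy to diff
(doseq [[k v] (into (sorted-map) (System/getProperties))]
  (println k "=" v))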
So this is promising
364 lines of requires when loaded via the file
103 lines of requires when loaded via the repl
Looks like clojure.tools.analyzer is the majority of it
(Which isn't loaded via the repl)
Just without pathom, ie. via user.clj?
how do I "require" pathom from outside the file then?
okay, confirming that that's the experiment
Okay, running now
it looks like the main difference in the verbose loading is that one is loading core.async (which loads tools.analyzer) and the other is not, but my guess would be the other is not loading core.async because it was already loaded when loading your code
@hiredman right you are, no user.clj is 364 lines, just the same - but fast (ie. not noticeably slower to a human, ~5-7s)
so in a fresh repl every time, load a namespace from your project(do them all), then load pathom and see how long it takes to load pathom after you've loaded each of your namespaces
so bring it back, and go backwards through it, commenting out lines until it goes from slow to fast
@hiredman haven't I achieved that by seeing the project load w/o pathom, (ie. the leaf of the dep tree), and then requiring pathom at the end?
what I was wondering is, is there some minimal subtree that when loaded loading pathom goes from fast to slow
What if I tried just requiring pathom from my user.clj?
the point is we want to pull things apart so we can methodically check them in isolation
(ns user
(:require [com.wsscode.pathom.core :as p]
[com.wsscode.pathom.connect :as pc]))
Would reproduce it though. How do we get more minimal than that?
(That does, indeed take minutes to compile / launch a repl on my machine)
That's correct
ie. if I have no user.clj and then instead execute the requires, it is ~5 seconds
Good question
when you say loading pathom is fast, what are you actually running in the repl? requiring those two namespaces?
Yes, requiring the two namespaces
(or evaluating the ns form)
do you have another file named user.clj anywhere on the classpath? (in the jars of any of your deps, etc)
Is there an easy way to check that?
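One quick way I know of (a sketch, run from a repl) is to ask the classloader for every user.clj it can see:
;; returns a seq of URLs, one per user.clj on the classpath (including inside jars)
(enumeration-seq
  (.getResources (.getContextClassLoader (Thread/currentThread)) "user.clj"))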
It's slow
(ns user)
(require '[com.wsscode.pathom.core])
(require '[com.wsscode.pathom.connect])
connect depends on core, trying core
if connect depends on core, you shouldn't need to load it, loading connect will load core
(and still running, point is, not 8s)
Yep, requiring connect (I was using functions from core in my actual program) takes 1m+
Slow, actually
actually wait
my user.clj still exists
removing
yes, fast
Other fun facts to this mystery: my openjdk version hasn't changed, and I haven't upgraded pathom. It was working quickly a week ago. I've tried nuking my .cpcache and even my ~/.m2, neither of which did anything
put a (time ...) around the require in your user.clj and then a (time ...) around it at the repl
it's running
if you want to see if it's just my machine, I just pushed
w/user.clj:
λ clojure
"Elapsed time: 93907.317901 msecs"
Clojure 1.10.0
w/ eval:
λ clojure -e "(time (require '[com.wsscode.pathom.connect]))"
"Elapsed time: 8657.945781 msecs"
that is something, I can't think of anything that would cause loading code from user.clj and from the repl like that to differ
Quite a doozy huh?
Does this seem like a bug to you then?
in theory there could be a macro somewhere that says (when (= *file* "user.clj") (Thread/sleep ...))
Well, I saw it in other files too at least - unless a macro can see the originating file at the top of a require chain
Thank you all for the help and taking the time to investigate this with me. I was able to implement a work-around for now, even though it feels like a bit of a hack.
I cloned your repo on a Mac running OSX 10.13.6 JVM 1.8.0_192 and Clojure 1.9.0. It took about 14 seconds both ways.
I've found it to be slow with zulu 11 jdk and fast with zulu 8 and openjdk-1.8.0.192.b12
when it is slow it doesn't look like loading one particular file is slow; rather there is a sort of uniform slowdown loading all the files (when initiated from user.clj)
On the same physical Mac I mentioned above, in an Ubuntu 16.04 desktop Linux VM, using JVM 1.8.0_191 and Clojure 1.9.0, it took about 2.5 minutes both ways. The java process had, I think, the default heap size of 512 MB and fairly quickly got up to that amount of memory and stayed there. Maybe a lot of GC going on?