This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2017-10-10
Channels
- # aleph (4)
- # beginners (32)
- # cider (12)
- # cljs-dev (56)
- # cljsrn (7)
- # clojars (3)
- # clojure (165)
- # clojure-dev (33)
- # clojure-germany (1)
- # clojure-italy (27)
- # clojure-russia (7)
- # clojure-spec (24)
- # clojure-uk (62)
- # clojurescript (37)
- # core-async (7)
- # core-matrix (1)
- # cursive (9)
- # data-science (8)
- # datomic (8)
- # duct (4)
- # events (1)
- # figwheel (7)
- # flambo (3)
- # fulcro (43)
- # hoplon (25)
- # jobs-discuss (8)
- # lein-figwheel (4)
- # luminus (2)
- # off-topic (35)
- # om (8)
- # om-next (3)
- # onyx (30)
- # pedestal (62)
- # portkey (2)
- # protorepl (2)
- # re-frame (40)
- # reagent (9)
- # shadow-cljs (123)
- # specter (30)
- # sql (22)
- # testing (1)
- # uncomplicate (40)
- # unrepl (3)
- # vim (13)
- # yada (5)
anyone know a good way to get a file system based memoize instead of using memory?
@thedavidmeister mm it will eventually load the data in memory when you have to use it anyway right?
what's the use case?
@andrea.crotti i have 100's of network calls that take 20-60s each to complete and they are always the same (historical event data)
well you can use any DB for something like that
Redis even handles the TTL for you
yes, i can write a caching layer
i was hoping to simply memoize the fns
i don't want to add a dependency on a specific tech like a db or redis
well you can just do it yourself as well
write to some EDN files
and read from them
current plan is to copy and paste the clojure core memoize function and swap out the lookup for a file system based logic
it just seemed like something that might already exist
it won't save you any memory though, it will only save the computation time, unless you have a different file per input
yes, different file per fn/args combination
some answers there
but depending on what you're doing even sqlite could be better than doing it yourself
and that doesn't require any other server running
sqlite seems like a lot of work
the event data i'm getting has no schema
so i either have to hammer a schema into place to represent the specific data coming back from the query
or put k/v into the db, which doesn't seem better/easier than just dumping to files
this stack overflow answer is exactly what i was thinking
so the answer is "there's no existing libraries, so hand roll something simple"
@thedavidmeister I wrote that answer 🙂. You could also use: https://github.com/Factual/clj-leveldb if you don't want to deal with multiple files. I have no experience with leveldb though.
mmmk, i'll start with the simple approach and move to a db if i find a reason to
@thedavidmeister I think you probably want to use a cache instead of memoising
a cache gives you more control over where it spills out to
https://github.com/raphw/guava-cache-overflow-extension, https://stackoverflow.com/questions/26884521/use-guava-cache-to-persist-data-to-a-hard-disk
but a cache also relies on me inserting cache logic
which will end up being the same as memoize behaviour
sure, they're basically the same thing here?
you can write a PluggableMemoization
if you prefer, see https://github.com/clojure/core.memoize
it all just seems like overkill... what's the downside of tweaking memoize to use a folder of files instead of an atom?
By tweaking you mean rewriting it I guess?
It just means you are reinventing the wheel imho
what wheel?
Rather than changing memoize, you could have the function write to the fs and return a path to it
writing a custom plugin for a library?
those are spokes, not a wheel 🙂
@thedavidmeister There is nothing wrong with just using files, but YOU will have to do the coordination if you write/read from multiple threads. Or you'll get garbage. LevelDB library would allow you to just get/write as you like. It's thread safe (as long as you don't use Iterator).
i don't think i'll run into that issue with what i'm doing
but that's a fair point in general
i'm strictly serially hitting api endpoints, and each time i get about 3 million rows back
and just doing this in a repl
i don't think there's any threading in there, but i could be wrong
Well I hope you're right, b/c initially you said it'd take 20-60sec for you to answer the request.
yeah it does
i can wait that long once or twice
just not over and over
@thedavidmeister My point is: What if a second request comes in, while you generated your cache from the first request...
why is that a problem?
each request has different args and so a different file on disk
@thedavidmeister So why cache it then? If it's never going to be accessed again since they're all different?
it will though
the next time i run the function to run a report
it's not always different
it's just not the same for a single set of calls
the only way to have the same args at the same time is for me to run two repls side by side
but the same args will definitely be called sequentially, when i call the fn a second or third time in the repl
anywho, i've already rewritten the memoize to do what i want for now
so i'm getting some dinner, thanks for all the suggestions everyone 🙂
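For reference, a file-backed memoize along the lines discussed above could look roughly like this. This is a minimal sketch, not the poster's actual code: `cache-dir` is a hypothetical location, there is no locking (single-threaded REPL use only, as discussed), and the return value must round-trip through EDN.

```clojure
(require '[clojure.edn :as edn]
         '[clojure.java.io :as io])

;; One EDN file per args vector, keyed by hashing the printed args into
;; a filename. Cached results survive JVM restarts, unlike clojure.core/memoize.
(defn fs-memoize [f cache-dir]
  (.mkdirs (io/file cache-dir))
  (fn [& args]
    (let [file (io/file cache-dir (str (hash (pr-str args)) ".edn"))]
      (if (.exists file)
        (edn/read-string (slurp file))
        (let [ret (apply f args)]
          (spit file (pr-str ret))
          ret)))))

;; hypothetical usage with a stand-in for the slow network call
(def fetch-events* (fs-memoize (fn [x] (* x x)) "/tmp/event-cache"))
(fetch-events* 3) ;; computed once, then served from /tmp/event-cache
```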
@rauh: the phrasing of "Symbols begin with a non-numeric character and can contain alphanumeric characters and *, +, !, -, _, ', and ? (other characters may be allowed eventually)." threw me off
by special casing the "_", one could interpret "non-numeric" to mean "non-numeric alpha-numeric" char (or at least that was my first interpretation)
No, you'll have to roll your own. For example (defn flip [f] (fn [& args] (apply f (reverse args))))
and then ((flip reduce) (range 10) +)
@bravilogy
basically I have this function
(defn game-progress [score]
  (cond
    (zero? (mod score 4)) add-more-colors
    (zero? (mod score 5)) decrease-time-left
    :else identity))
and thought I would refactor that repetitive bit somehow and perhaps use condp instead
I vaguely remember a library that can read EDN data and write it out again while preserving all comments and formatting. Can someone remember the name?
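For what it's worth, a condp-based version of the game-progress snippet above might look like this. It's a sketch: `add-more-colors` and `decrease-time-left` are hypothetical stubs standing in for the fns in the original.

```clojure
;; stand-ins for the fns from the original snippet (hypothetical stubs)
(def add-more-colors (fn [state] state))
(def decrease-time-left (fn [state] state))

;; condp calls (pred clause-val score), so the anonymous fn flips the args
;; into (zero? (mod score clause-val)); the trailing `identity` is the
;; default clause, matching the :else branch of the cond version.
(defn game-progress [score]
  (condp #(zero? (mod %2 %1)) score
    4 add-more-colors
    5 decrease-time-left
    identity))
```

Like the cond original, a score divisible by both 4 and 5 takes the first matching clause.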
@danielcompton no please :-)
@cgrand Do you mean http://i0.kym-cdn.com/photos/images/newsfeed/000/865/302/3d5.gif
I didn't think words would express a strong enough no in this case, I just pictured you screaming it. 🙂
Hey guys, new Clojurecademy course uploaded: Code Katas
- https://clojurecademy.com/courses/17592186068882/learn/overview
I can't figure out how to implement a custom pretty print method without mutating the default behavior. Is there an idiomatic way to use a custom pretty print method for one call to pprint?
(def my-atom (atom nil))
(defn pprint-dispatch []
  (do
    (defmethod clojure.pprint/simple-dispatch clojure.lang.IDeref [o]
      (print o))
    clojure.pprint/simple-dispatch))
(clojure.pprint/write my-atom :dispatch (pprint-dispatch))
(clojure.pprint/pprint my-atom)
These examples might help https://github.com/clojure/clojure/blob/0a6810ab3484b5be0afe4f505cd724eb5c974a09/src/clj/clojure/pprint/dispatch.clj#L471
(clojure.pprint/pprint my-atom)
#<Atom@58a7831e: nil> ;; this is the default I'm trying to overwrite
=> nil
(clojure.pprint/write my-atom :dispatch (pprint-dispatch))
#object[clojure.lang.Atom 0x58a7831e {:status :ready, :val nil}]=> nil ;; great, this is the way I want IDerefs to print
(clojure.pprint/pprint my-atom)
#object[clojure.lang.Atom 0x58a7831e {:status :ready, :val nil}] ;; oops -- all subsequent calls to pprint will look this way?
=> nil
@U3DAE8HMG Thanks. I see these use code-dispatch instead of simple-dispatch, but I think I face the same issue. I want to alter the way IDeref is pretty printed and leave everything else the same 🙂
^ might also be a question about extending multimethods since that's how clojure.pprint/simple-dispatch is implemented
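For a single call, `clojure.pprint/with-pprint-dispatch` may be what you're after: it binds the dispatch function only for its dynamic scope, so the global defmethods stay untouched. A minimal sketch:

```clojure
(require '[clojure.pprint :as pp])

;; A dispatch fn that special-cases IDeref and falls back to the normal
;; simple-dispatch multimethod for everything else.
(defn ideref-dispatch [o]
  (if (instance? clojure.lang.IDeref o)
    (print o)
    (pp/simple-dispatch o)))

;; Dispatch is rebound only inside the body; afterwards pprint reverts
;; to the default behavior.
(pp/with-pprint-dispatch ideref-dispatch
  (pp/pprint (atom nil)))  ;; prints the #object[clojure.lang.Atom ...] form
```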
I would like to have separate names for test and production uberjars compiled from the same project.clj file. Is that possible? Can you put an uberjar-name key is a profile and make uberjars with 2 different profiles?
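One possible approach (a sketch, assuming Leiningen's usual profile merging; all names here are hypothetical): put a different `:uberjar-name` in each profile and pick the profile at build time.

```clojure
;; project.clj sketch -- each profile overrides :uberjar-name, and
;; `lein with-profile` selects which one is merged in.
(defproject myapp "0.1.0-SNAPSHOT"
  :profiles {:prod       {:uberjar-name "myapp-prod-standalone.jar"}
             :test-build {:uberjar-name "myapp-test-standalone.jar"}})

;; then build each flavor with:
;;   lein with-profile +prod uberjar
;;   lein with-profile +test-build uberjar
```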
in clj-time.format
there's a function called show-formatters
that will show you all of the built in parse strings and what they can do
@tomaas you're going to want to use a custom formatter e.g. (f/formatter "MMM dd, YYYY")
< might work
I have data from a lisp (common lisp maybe) and I would like to read it in clojure. Any advice? I found this... https://github.com/clojure/clojure/blob/master/src/jvm/clojure/lang/LispReader.java with examples of its use at the end of the file.
LispReader is a clojure reader, similar to tools.reader https://github.com/clojure/tools.reader#differences-from-lispreaderjava
Thus, "MM" might output "03" whereas "MMM" might output "Mar" (the short form of March) ....
I often find myself looking for "something like pipeline-async, but that doesn't care about ordering, but does still deal with signaling 'all of x are finished'". am I silly? Does that already exist?
like, I'm writing this entirely too much
(async/pipeline-async
500 ;; some number bigger than our number of tasks
tasks-finished-for-account
from the readme:
(require '[com.climate.claypoole :as cp])
;; We'll use the with-shutdown! form to guarantee that pools are cleaned up.
(cp/with-shutdown! [net-pool (cp/threadpool 100)
                    cpu-pool (cp/threadpool (cp/ncpus))]
  ;; Unordered pmap doesn't return output in the same order as the input(!),
  ;; but that means we can start using service2 as soon as possible.
  (def service1-resps (cp/upmap net-pool service1-request myinputs))
  (def service2-resps (cp/upmap net-pool service2-request service1-resps))
  (def results (cp/upmap cpu-pool handle-response service2-resps))
  ;; ...eventually...
  ;; Make sure the computation is complete before we shutdown the pools.
  (doall results))
@bfabry nah, it doesn't exist, mostly because it's fairly simple to write.
@bfabry basic gist https://gist.github.com/halgari/a874b5e8e281c3e4afe51c0249b31811
I think you miss that you need to create a channel for each thread though. otherwise you may close! out-c before one of the go-loops has finished its work
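A minimal sketch of the idea (not the gist's exact code): `n` workers pull from the input channel, results go out in completion order rather than input order, and the output channel closes only once every worker has drained. `af` follows pipeline-async's convention of putting to and then closing a per-task result channel, which also addresses the one-channel-per-task point above.

```clojure
(require '[clojure.core.async :as async])

(defn unordered-pipeline-async [n to af from]
  (let [workers (vec (for [_ (range n)]
                       (async/go-loop []
                         (when-some [v (async/<! from)]
                           ;; fresh result channel per task, as with pipeline-async
                           (let [res-c (async/chan 1)]
                             (af v res-c)
                             (when-some [res (async/<! res-c)]
                               (async/>! to res)))
                           (recur)))))]
    ;; close `to` only after every worker's go-loop has finished
    (async/go
      (doseq [w workers] (async/<! w))
      (async/close! to))))
```

Unlike pipeline-async, nothing here preserves ordering, which is the point.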
what did you mean by "extend" though btw? because you could for instance extend them with a protocol with very little code
I have a protocol and I want to extends them with that protocol. but without repeating my self
don't extend PersistentHashMap and PersistentArrayMap - those are implementation details, extend java.util.Map or clojure.lang.IPersistentMap, which cover both hash-map and array-map
@noisesmith good point. thanks
Hi, I have a quick collections question. I'm doing some stuff with datomic and of course you're kind of on the hook for your own paging. I'm trying to implement cursor-based paging such that I can give a key value then return that item, plus N items "in front" or "behind" of it. So for the following
({:id "A" :val ..} {:id "B" :val ..} {:id "C" :val ..} {:id "D" :val ..} {:id "E" :val ..})
If I have say "C" and "2 behind", I get A, B, C. Or "C" and "one after" would give me C and D. And of course, it'd be nice if it was reasonably performant
@noisesmith hang on . java.util.Map or clojure.lang.IPersistentMap ? not both ?
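One sketch for the cursor-paging question above: index the (already sorted) seq as a vector, find the cursor by a hypothetical `:id` key, and subvec around it. Fine for modest result sets; not tuned for millions of rows.

```clojure
(defn page-around
  "Return the item whose :id equals `id` plus up to `n-before` items
  before it and `n-after` items after it, or nil if `id` isn't found."
  [coll id n-before n-after]
  (let [v (vec coll)
        i (first (keep-indexed (fn [idx m] (when (= id (:id m)) idx)) v))]
    (when i
      (subvec v
              (max 0 (- i n-before))
              (min (count v) (+ i n-after 1))))))

(def xs [{:id "A"} {:id "B"} {:id "C"} {:id "D"} {:id "E"}])
(page-around xs "C" 2 0) ;; => [{:id "A"} {:id "B"} {:id "C"}]
(page-around xs "C" 0 1) ;; => [{:id "C"} {:id "D"}]
```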
there's no need to extend both - it depends on whether you also want to cover vanilla java hashmaps at the same time
then IPersistentMap suffices
it is for making a new type that acts like a hash-map
if that's what you are trying to do, then I totally misunderstood you
also, unless you want to change the hash-map behaviors, you can just use defrecord instead of potemkin
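Per the advice above, a small sketch of extending one protocol to the shared interface so both map implementations are covered without repetition (`Describable` is a hypothetical protocol for illustration):

```clojure
(defprotocol Describable
  (describe [this]))

;; Extend the interface, not the concrete classes: IPersistentMap covers
;; both PersistentArrayMap and PersistentHashMap at once.
(extend-protocol Describable
  clojure.lang.IPersistentMap
  (describe [m] (str "a map with " (count m) " entries")))

(describe {:a 1})                         ;; small literal -> array-map
(describe (zipmap (range 20) (range 20))) ;; large map -> hash-map
```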
@lewix check out #remote-jobs
1. I have a function that is much easier to write recursively than as an iteration / loop. (It has a tree like branching structure.) 2. Realistically, how many stack frames does the jvm/clojure guarantee me?
@qqq it's a config on java startup - you should be able to find the default pretty quickly
https://stackoverflow.com/questions/4734108/what-is-the-maximum-depth-of-the-java-call-stack <-- hmm, default is 7k-8k calls ?
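If the tree can get deep, one option is to trade the call stack for an explicit stack, so the JVM limit (tunable with -Xss, e.g. `java -Xss8m ...`) stops mattering. A toy sketch that counts leaves in a nested-vector tree:

```clojure
;; Same tree-shaped work as the recursive version, but recursion depth is
;; bounded by heap, not by the JVM call stack: pending subtrees live in a
;; plain vector used as a work stack.
(defn count-leaves [tree]
  (loop [stack [tree], n 0]
    (if (empty? stack)
      n
      (let [x     (peek stack)
            stack (pop stack)]
        (if (vector? x)
          (recur (into stack x) n)   ;; push children instead of recursing
          (recur stack (inc n)))))))

(count-leaves [1 [2 [3 4]] 5]) ;; => 5
```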