This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2020-07-20
Channels
- # announcements (7)
- # babashka (16)
- # beginners (58)
- # boot (12)
- # calva (3)
- # cider (11)
- # clj-kondo (9)
- # cljs-dev (8)
- # clojure (82)
- # clojure-europe (9)
- # clojure-italy (11)
- # clojure-losangeles (1)
- # clojure-nl (8)
- # clojure-uk (8)
- # clojurescript (5)
- # css (2)
- # cursive (5)
- # datomic (20)
- # docker (2)
- # emacs (4)
- # figwheel-main (16)
- # fulcro (53)
- # graalvm (17)
- # jackdaw (2)
- # jobs (4)
- # kaocha (6)
- # lambdaisland (2)
- # luminus (2)
- # meander (1)
- # off-topic (146)
- # re-frame (4)
- # releases (1)
- # rum (12)
- # sci (71)
- # shadow-cljs (26)
- # test-check (22)
- # vim (1)
- # xtdb (9)
With core.memoize, would it be possible to combine cache strategies? I'm especially thinking about TTL and LRU. My problem is that some results are valid for (say) one hour, but I would still like to control memory usage with a maximum number of elements. I guess I could just compose them and it should work, but I wonder whether that would be recommended.
I think something like this works
(def my-func (memoize/lru my-func* (cache/ttl-cache-factory {} :ttl 60000) :lru/threshold 200))
@U11BV7MTK recently dug into the core.cache library and its uses, and wrote this article about it: https://dev.to/dpsutton/exploring-the-core-cache-api-57al. Hopefully he does not mind too much being at-ed here for his attention, in case his fresh-in-memory investigation of core.cache means he has an answer to your question.
not at all. I did not investigate how the caches compose as the suggestion here. I would be nervous but I would check it.
and thinking on it, the caches do not use the cache protocol on the underlying cache object that holds the information in the deftypes. so i don't believe there's any way that the caches will compose like this. so the ttl cache will be unbounded except by the ttl expiration. That sounds like a good structure though, so a package that adds that would probably be welcomed by the community at large
and adapting the lu-cache but using the ttl as the usage count for eviction might be a strategy to write one that combines the two
🤯 even after reading your post, it is still too complicated for me haha.
the gist is setting up a cache which imitates the usage found in projects on github and then hammering it with 20 threads making 20000 accesses and seeing what breaks.
looking for race conditions when checking that a cache has? a value and then subsequently checking the cache for the value and not finding it
Caches should compose. I think there are even examples of that in the docs. I remember a bug fix going in that only surfaced when you composed caches.
(but composing caches as part of memoization is non-trivial to get correct, I suspect)
i don't think that's possible. when you hit a cache entry in a ttl it does not propagate that hit into an underlying lu or lru cache as far as i can tell
same for all of them. i don't see any checks to see if the cache object is itself a CacheProtocol. They are just associative objects as far as i can tell
The associative operations are implemented in terms of the cache.
hit is a no-op in most cache implementations. has? checks TTL expiry (since we're talking about the TTL cache). It's the caller that is supposed to use the has? / hit / miss strategy. If you use that on all caches, it behaves as expected (and you can compose caches).
i guess that means some can compose but not all. stencil in the article just used get and assoc, and the lu and lru caches predictably didn't work
Stencil uses core.cache incorrectly.
That was a surprise to me, when I read your article -- that misuse is widespread 😞
yeah. but the way stencil uses caches is the same way a TTL cache uses an underlying cache. just assoc and dissoc and get
so a TTL cache can never wrap an LU or LRU cache because it won't propagate the hit into that underlying cache
Yeah, the hit method is very problematic. And different caches also have different behavior on lookup: a TTL cache can succeed on has? and then fail on get, for example. I'm fairly certain that almost no one uses the more esoteric caching strategies (and almost no one at all actually tries to compose them)...
hey, is there any clojure channel here for community open-source projects, new projects, etc.? if not: any web site/forum that has that?
strictly speaking core.async creates N threads; go just lines up to use them for a slice at a time
core.async/thread creates threads (in a pool for reuse)
IOW all the threads are created when you load core.async, whether you use go or not
the thread inspection tools will show you the core.async threads, and their stack traces, but won't tell you directly which go block they are running
it's rather opaque once it's running in the VM
ctrl-\ (*nix) or ctrl-break (win) will dump the stacks of all threads in your current JVM process
jstack is a jvm tool that can be used externally (or kill -3 the pid)
profiler type tools let you connect with a ui and see this kind of stuff - jconsole (comes w/the jvm), jprofiler, yourkit
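The same information is available programmatically from a REPL; a small sketch using the JVM's own thread API (dump-threads is an illustrative name, not a library function):

```clojure
;; enumerate every live JVM thread and its stack frames, roughly what
;; ctrl-\ / jstack print; core.async dispatch-pool threads typically
;; show up named "async-dispatch-N"
(defn dump-threads []
  (doseq [[^Thread t frames] (Thread/getAllStackTraces)]
    (println (.getName t) (.getState t))
    (doseq [f frames]
      (println "   at" (str f)))))
```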
hello everyone! is there a way to do something like this:
(loop [x '(1 2 3)]
  (for [y x]
    (recur (next x))))
basically i want to use recur inside a for loop
loop and for don't really work together like this. why not just:
(let [x '(1 2 3)]
  (for [y x]
    y))
yeah that was just a dummy example.
My actual problem is to translate this function
(defn all-sentences [chain current-key sentence]
(let [words (get chain current-key)]
(if (not= #{nil} words)
(concat sentence
(reduce (fn [result w]
(all-sentences chain w (concat result current-key)))
'()
words))
(concat sentence current-key (list "END$")))))
into another function that uses tail recursion
chain is a huge Markov chain map
and with this implementation i hit a StackOverflowError unfortunately
https://clojuredocs.org/clojure.core/loop has some really good examples
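For reference, explicit tail recursion with loop/recur replaces the for comprehension entirely and carries an accumulator through each step:

```clojure
;; walk a seq with explicit tail recursion instead of a for comprehension
(loop [xs '(1 2 3)
       acc []]
  (if (seq xs)
    (recur (next xs) (conj acc (first xs)))
    acc))
;; => [1 2 3]
```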
related to your stackoverflow, someone recently shared this article that seems relevant, https://stuartsierra.com/2015/04/26/clojure-donts-concat
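The article's point, sketched: reducing with concat stacks unrealized lazy seqs, and realizing a deeply nested result can blow the stack; an eager accumulator avoids the nesting. (deep-concat / deep-into are illustrative names, not from the code above.)

```clojure
;; each concat wraps the previous lazy seq; nothing is realized until use,
;; so realizing (first (deep-concat 100000)) can throw StackOverflowError
(defn deep-concat [n]
  (reduce concat '() (repeat n [1])))

;; reduce with into realizes eagerly at every step, so no nesting builds up
(defn deep-into [n]
  (reduce into [] (repeat n [1])))
```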
also on the topic of core.async, what’s the best way to mock out a function that will be called in a go block?
with-redefs doesn’t seem to work, even though docs say it should be visible across all threads
the problem with with-redefs is that the original def is re-established as soon as the with-redefs block exits
to use with-redefs with a go loop, you need to block exit of with-redefs until after the loop exits
also, with-redefs is racy, if you use it on the same var from two threads, you can lose your initial definition
it's not safe for real code (and also blocks you from being able to safely run tests in parallel)
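A sketch of that "block exit of with-redefs" pattern, assuming the standard core.async API (fetch! and run-mocked are made-up stand-ins for the function being mocked and the test harness):

```clojure
(require '[clojure.core.async :as a])

(defn fetch! [] :real)                    ; the fn the go block will call

(defn run-mocked []
  (let [done (a/chan)]
    (with-redefs [fetch! (fn [] :mocked)]
      (a/go (a/>! done (fetch!)))
      ;; block the calling thread inside with-redefs so the root binding
      ;; is still swapped when the go block actually runs
      (a/<!! done))))
```

This still has the parallel-test caveat from above: with-redefs swaps the var's root binding globally, so two tests doing this at once can clobber each other.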
Running into some unexpected behavior with spec generators that I don't understand.
(s/def ::user
(s/keys :req-un [::name ::email ::password ::followed-states]))
(gen/sample (s/gen ::user))
;; => ({:name "x",
;; :email "",
;; ...
(s/def :crux.db/id
(s/with-gen uuid?
#(s/gen (into #{} (take 10 (repeatedly
(fn [] (java.util.UUID/randomUUID))))))))
(gen/sample (s/gen :crux.db/id))
;; => (#uuid "c99f13e9-7e54-44e1-9a4f-9466ce2afc89"
;; ...
(s/def ::user+
(s/and ::user (s/and (s/keys :req [:crux.db/id]))))
(gen/sample (s/gen ::user+))
;;1. Caused by clojure.lang.ExceptionInfo
;; Couldn't satisfy such-that predicate after 100 tries.
;; {}
Everything works fine until that last spec ::user+. I'm able to generate every sub-spec that ::user+ relies on. But when I try to generate samples of ::user+, it can't satisfy a predicate.
I'm going to search to see if there's a way to get more information about specifically which predicate it failed to satisfy, and I'm also going to try (s/def ::user+ (s/with-gen (s/and ::user ,,,) #(merge (gen/sample (s/gen ::user)) {:crux.db/id ,,,})))
(defining ::user+ with a with-gen, and creating a generator that calls gen/sample on ::user and then merges it with the new key; on the assumption that the problem lies somewhere in there, but I don't actually see why/where).
Aside from what I'm going to try, does anyone know why that last gen/sample isn't working?
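One likely explanation, sketched below: s/and's generator draws samples from its first spec only (::user here) and filters them against the remaining predicates with such-that. No ::user sample ever contains :crux.db/id, so all 100 tries fail. s/merge instead generates each keys spec and merges the resulting maps. (::name here is a minimal stand-in for the specs above.)

```clojure
(require '[clojure.spec.alpha :as s]
         '[clojure.spec.gen.alpha :as gen])

;; minimal stand-ins for the original specs
(s/def ::name string?)
(s/def ::user (s/keys :req-un [::name]))
(s/def :crux.db/id uuid?)

;; s/and generates from ::user alone and such-that-filters the rest,
;; so :crux.db/id never appears; s/merge generates both keys specs
;; and merges the maps, so sampling succeeds
(s/def ::user+ (s/merge ::user (s/keys :req [:crux.db/id])))

;; (gen/sample (s/gen ::user+)) now yields maps containing both
;; :name and :crux.db/id
```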