This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2018-09-22
Channels
- # 100-days-of-code (1)
- # beginners (51)
- # carry (1)
- # cider (10)
- # clojure (71)
- # clojure-conj (4)
- # clojure-dev (9)
- # clojure-italy (3)
- # clojure-nl (2)
- # clojure-russia (8)
- # clojure-uk (16)
- # clojurescript (42)
- # cursive (4)
- # datomic (2)
- # emacs (8)
- # figwheel-main (7)
- # fulcro (20)
- # hyperfiddle (5)
- # jobs (2)
- # off-topic (16)
- # om-next (4)
- # onyx (9)
- # powderkeg (1)
- # re-frame (8)
- # reagent (17)
- # reitit (41)
- # robots (6)
- # rum (1)
- # shadow-cljs (54)
- # testing (3)
- # tools-deps (19)
Morning all, I have a requirement where I need to alternate between items in a list:
(def my-list [1 2 3 4 5])
My requirement is that I need to pull 1
then the next time pull 2
and so forth, but once I've pulled 5
I need to go back to 1
and loop through it again.
I can't wrap my head round how to do this...
I suspect I need to use cycle
in some form.
That's it basically...
It's the dynamic building of the function I'm having difficulty with.
I fetch a list of things and need to loop over them, but I don't know what's going to be in the list till I fetch it (from Elasticsearch)
Something like this, but without knowing what I passed in last time.
What's happening is I have what we call a tag,
that then links to a list like ["11235" "456345" "234234"]
(they marry up to images, but that's not important for this)
each time I see the tag
I need to drop in one of the above numbers.
The tag could appear hundreds of times, but I only have three items in the list above, so I need to loop over it, alternating which item I use.
As an example:
1st time I see the tag
i need "11235"
2nd time I see the tag
i need "456345"
but on the fourth time I see it, I need "11235"
and so forth.
Nope, it's just one tag.
{:tag-name "foo"
:list-of-creatives ["11235" "456345" "234234"]}
The thing I'm getting at is that if you have two lists with items, then cycle
is your buddy
yeah... I have another file that contains foo and I need to replace foo each time I see it with the numbers.
Yeah... this is exactly the bit I'm stuck on 😞
I have potentially hundreds of tags to deal with...
For this PoC yes.
(I think)
ok, then something like this could work:
• load up all tagged items in a list
• group-by :tag
to get a map of tag -> items
• for each tag/items pair in the map, loop over it with something like (map mapping-fn items (cycle list-of-creatives-for-item))
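A minimal sketch of those steps (the item shapes, the `creatives` map, and `assign-creatives` are made-up names for illustration):

```clojure
;; hypothetical tagged items fetched from somewhere like Elasticsearch
(def items [{:tag "foo" :text "a"} {:tag "foo" :text "b"}
            {:tag "foo" :text "c"} {:tag "foo" :text "d"}])

;; tag -> list of creatives, as in the example above
(def creatives {"foo" ["11235" "456345" "234234"]})

(defn assign-creatives [items creatives]
  ;; group items by tag, then pair each occurrence with the next
  ;; creative; cycle makes the creative list wrap around
  (into {}
        (for [[tag tagged] (group-by :tag items)]
          [tag (map (fn [item creative] (assoc item :creative creative))
                    tagged
                    (cycle (get creatives tag)))])))

(map :creative (get (assign-creatives items creatives) "foo"))
;; => ("11235" "456345" "234234" "11235")
```

Because `map` stops at the shorter collection, the infinite `cycle` is consumed only as far as there are items.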
Yeah I think so!! Let me see what I can hack up myself following your logic and see what happens! No doubt I'll be back for more advice 😂 Its a great learning experience!
Thank you!
if that doesn’t pan out, a stateful option is to have an atom
with a map from tag -> array index, and update the index for each tag every time you hit it
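A sketch of that stateful variant (the names are hypothetical):

```clojure
;; atom holding a map of tag -> index of the last creative used
(def tag-indices (atom {}))

(defn next-creative! [tag creatives]
  ;; atomically bump this tag's index (starting from 0 via fnil),
  ;; then wrap into the creative list with mod
  (let [idx (-> (swap! tag-indices update tag (fnil inc -1))
                (get tag))]
    (nth creatives (mod idx (count creatives)))))

(next-creative! "foo" ["11235" "456345" "234234"]) ;; => "11235"
(next-creative! "foo" ["11235" "456345" "234234"]) ;; => "456345"
```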
yeah that was my initial thought, it just didn't "feel" right 😐
I feel ya 😄 in my experience it is often beneficial to find a stateless solution first and only go for a stateful solution if the stateless one ends up being too complicated and/or slow
That's exactly it, I'm trying to remain as stateless as possible...
I'm writing a macro and I'm using a function that the user passes into the macro in the macro itself (i.e. to generate things). Of course I only get an unevaluated symbol. What's the best way to get the fn itself?
so far I've got:
(defn- load-from-symbol [sym]
  (let [sym (if (namespace sym) sym (symbol (str *ns*) (name sym)))]
    (var-get (find-var sym))))
although this leaves holes for imports
hm...
I'm an idiot, it's just (var-get (resolve sym))
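A toy sketch of that pattern (the macro and helper names here are invented for illustration): `resolve` turns the symbol into a var at macro-expansion time, and `var-get` gives the function itself, which can then be called while generating code.

```clojure
(defn square [x] (* x x))

;; a toy macro that calls the user-supplied function at expansion
;; time to generate a literal vector of results
(defmacro times-table [f n]
  (let [func (var-get (resolve f))]   ; symbol -> var -> function
    `(vector ~@(map func (range n)))))

(times-table square 4)
;; => [0 1 4 9]
```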
So I’m trying to write some guide on using core.cache
, and I’ve realized I don’t fully get the semantics of atoms and race conditions:
(defn get-key [cache-atom k]
  (let [value (cache/lookup @cache-atom k ::not-found)]
    (when-not (= value ::not-found)
      (swap! cache-atom cache/hit k)
      value)))
Isn’t there a race-condition between the dereferencing of the cache-atom
and the swap!
call there?
I mean, there’s not a Thread-based race condition in that something will be corrupted etc. But the value
I’m getting might be stale — there’s no transaction ensuring read consistency etc. swap!
will retry until it can atomically update the cache, but dereferencing is uncoordinated, right?
As a cache, I think the idea is that multiple cache hits on the same key should be idempotent
(I realize that for a caching point of view, there shouldn’t be any assumptions of true transactions, I just realize how I’m missing something in my understanding of atoms)
Oh, also cache/lookup
is where the value is fetched, including if it was missing in the cache. the swap!
call doesn't use the value, it's just updating the hit information on the key
Cache lookup doesn’t try to fetch anything, cache/miss
will put a value in the cache.
I guess a concrete scenario would be:
a) (cache/lookup :a)
succeeds, giving me a value.
b) Meanwhile, the cache atom is updated by someone else in such a way that :a
is evicted.
c) I call (cache/hit :a)
, but :a
is no longer in the cache.
Anyway, regardless I suppose in the case you posted above, cache/hit
will need to account for the possibility that the key is no longer in the cache, since it could have been evicted between the deref and the swap!
Note that the documentation isn’t actually suggesting the pattern I posted above — and instead does:
(defn get-data [key]
  (cache/lookup (swap! cache-store
                       #(if (cache/has? % key)
                          (cache/hit % key)
                          (cache/miss % key (retrieve-data key))))
                key))
This overcomes the atom “race condition” — but then retrieve-data
might be called multiple times if the atom’s CAS operation has to retry.
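One common way to avoid calling the retrieval function more than once under CAS retries is to cache a `delay` instead of the raw value: the `swap!` only installs a cheap delay, and the expensive fetch runs at most once, when the winning delay is forced. A sketch independent of core.cache, with hypothetical names:

```clojure
;; cache of key -> delay; swap! may retry and build several delays,
;; but only the one actually installed ever gets forced
(def cache (atom {}))

(def fetch-count (atom 0))

(defn retrieve-data [k]
  (swap! fetch-count inc)          ; count the expensive fetches
  (str "value-for-" (name k)))

(defn get-data [k]
  @(-> (swap! cache update k #(or % (delay (retrieve-data k))))
       (get k)))

(get-data :a) ;; => "value-for-a"
(get-data :a) ;; => "value-for-a", retrieve-data ran only once
```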
The LRU cache does handle hit
being called when the key is not present in the cache: https://github.com/clojure/core.cache/blob/master/src/main/clojure/clojure/core/cache.clj#L225
Here’s the term that describes what I’m talking about: https://en.wikipedia.org/wiki/Cache_stampede
In my own designs I always lean towards the chance that caches will fail me, that fetching entries from the backend can fail (network timeout etc.), and that I'll probably come around fetching stuff multiple times. I never run single nodes of anything anyway, so the backends just have to be ready for it (and that's usually how my design ends up; the slow part is either S3 or a database). On nodes that don't survive a cold start too well I do a warmup before I add them to the load balancers... but all this is heavily use-case based 🙂
Occasionally it's also nice if your hard-hammered machine always serves entries from cache, even if they are slightly stale, and orders fresh data to be fetched asynchronously in the background.
It has been a while since I read the core.cache code at all, but I remember the few times I tried it always confused me, I think partly because it was intended that you could "layer" any combination of LRU, etc. behaviors on top of each other (or at least more than one such combination).
Easiest thing is to add a Varnish in front of the service; it will hold all HTTP connections and make only a single request upstream
if the request is cacheable
even a single second of max-age will work wonders in that regard