2023-09-08
Using accumulators, can I use (take … to limit the number of matches? I'm wanting to process a given number of matches, multithread over those, and then not have to process the same matches later. So say there are 15 matches: I want to (take 10) but only get the remaining 5 in another firing of the rule. If I need to let the rules quiesce and do this "outside of the session", that could work as well.
The take x accumulator is quite doable with Clara today, looking something like:
;; assuming (require '[clara.rules.accumulators :as acc])
(defn take-accum
  [x]
  (acc/accum
   {:initial-value nil
    :reduce-fn (fn [items value]
                 (if (= (count items) x)
                   items                 ; stop once x items have been taken
                   (conj items value)))}))
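For context, a hypothetical rule using it might look like the following sketch (the rule name, fact type, and processing function are illustrative, not from the thread):

;; assuming (require '[clara.rules :as r])
(r/defrule take-ten-rule
  ;; ?batch binds at most 10 of the matched Long facts
  [?batch <- (take-accum 10) :from [Long]]
  =>
  (process-batch! ?batch))   ; hypothetical side-effecting fn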
However, this doesn't address:
> I want to process 10 at a time, I want to (take 10) but only get the remaining 5 in another firing of the rule.
The accumulator above would be done after accumulating the prerequisite number of facts; additional fire-rules calls wouldn't propagate new facts after that number was reached.
Off the top of my head I don't know that the latter ability would be possible within the session itself. It feels like the remaining 5 would be "unfinished" work for the session, in the sense that the fire-rules contract is that after the call the session should have reached a "steady state"; remaining work would imply a non-stable state.
That's what I suspected without trying it out. I suppose I could do this outside of the session: fetch all the matches, and then retract the ones I decide to process.
Maybe another option is to pull all of them, process 10, retract 15, then push the 5 back?
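A minimal sketch of that outside-the-session flow, assuming a query named all-longs over the matched facts and a hypothetical process-batch! function (none of these names are from the thread):

;; assuming (require '[clara.rules :as r])
;; and a query like (r/defquery all-longs [] [?v <- Long])
(loop [session (r/fire-rules session)]
  (let [batch (->> (r/query session all-longs)
                   (map :?v)
                   (take 10))]
    (when (seq batch)
      (process-batch! batch)                      ; hypothetical processing step
      (recur (-> (apply r/retract session batch)  ; drop the processed facts
                 (r/fire-rules))))))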
I'm always wary of using retract, as it feels like re-writing history; probably just a me thing though.
As a possible alternative, a partitioning accum:
(defn partitioning-accum
  [x]
  (acc/accum
   {:initial-value nil
    :reduce-fn conj
    ;; split the accumulated facts into pages of x
    :convert-return-fn #(partition-all x %)}))
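For a feel of the resulting shape, partition-all over five accumulated facts with x = 3 yields two pages (plain clojure.core behavior):

(partition-all 3 [1 2 3 4 5])
;; => ((1 2 3) (4 5))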
With perhaps a paging-like system:
(r/defrule paging-rule
  [?pages <- (partitioning-accum 3) :from [Long]]
  =>
  (r/insert-all! (map-indexed #(with-meta {:values %2 :page %1} {:type :paged-long})
                              ?pages)))
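The ^{:type :paged-long} metadata works because Clara's default fact-type function is clojure.core/type, which returns :type metadata when present:

(type (with-meta {:values [0 1 2] :page 0} {:type :paged-long}))
;; => :paged-long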
Where the pages could be directly queried:
(r/defquery paged-response
  [:?page]
  [:paged-long [{:keys [page values]}] (= page ?page) (= values ?results)])
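An end-to-end run might look like this sketch, assuming the rule and query above live in a namespace my.ns (the namespace and exact ordering of results are assumptions):

(-> (r/mk-session 'my.ns)   ; namespace containing paging-rule and paged-response
    (r/insert 0 1 2 3 4)
    (r/fire-rules)
    (r/query paged-response :?page 0))
;; => a seq like [{:?page 0, :?results (...)}] with one page of up to 3 facts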
or gated by processing-requests:
(r/defrule page-processing-rule
  [:process-request [{:keys [page]}] (= ?page page)]
  [:paged-long [{:keys [page values]}] (= page ?page) (= values ?results)]
  =>
  (r/insert! ^{:type :more-processing} {:res ?results}))
That's probably how I'd look at it.
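To drive that gate, a caller would insert a request fact for the page it wants processed, again just as a sketch:

;; hypothetical: ask for page 1 to be processed
(-> session
    (r/insert ^{:type :process-request} {:page 1})
    (r/fire-rules))
;; page-processing-rule then inserts a :more-processing fact for page 1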