2024-01-03
# clara
I want an accumulation, but without receiving duplicates. How can that be achieved?
;; assumes (:require [clara.rules :as r]
;;                   [clara.rules.accumulators :as acc])
(r/defrule batch
  [?items <- (acc/all) :from [Item]]
  =>
  (prn "# of items" (count ?items)))
If I have items a & b the first time this fires, and later I add c & d, then on the second firing it will give me a, b, c, d, whereas I only want c & d the second time, or some way to ascertain the “new items”. Looking at https://www.metasimple.org/2017/12/23/clara-updating-facts.html, I’m thinking the first approach outlined cannot work, and I’m forced to use the second approach, if that even works. If I have an “UpdatingItem” that holds a reference to the Item, it will have the same problem when attempting to capture its accumulation. If I hold some list of the “seen items”, then modifying that value will also retrigger the accumulation rule, I believe in a way that will make it pointless. It seems I may be forced to use insert-unconditional! in some manner to solve this use case?
@UH13Y2FSA what do you mean that c & d “come later”? I think that is important to get a handle on.
What makes the timing of a & b differ from c & d? How are they batched and inserted in this way?
If you have different sources of facts, where a & b are just from a different “source” than c & d, then you can reify some information onto those facts about their “source”. Then you can do e.g.
(acc/all) :from [Item (= ?source-id source-id)]
which will give you accumulated “partitions” in this type of structure.
If this is instead insertions outside of the fire-rules loop, you could have those outside insertions carry some incrementing “session ID” or something that models “different points in insertion time”.
I think you still do intermediate facts and use a rule to explicitly match the “latest” facts, either via their “source” props or their “session ID” type of props.
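A minimal sketch of that per-source partitioning, assuming a hypothetical Item record that carries a source-id field (names here are illustrative):

(ns example.partitioned
  (:require [clara.rules :as r]
            [clara.rules.accumulators :as acc]))

;; Hypothetical fact type carrying its "source".
(defrecord Item [source-id value])

(r/defrule batch-per-source
  ;; Binding ?source-id partitions the accumulation: one activation per
  ;; distinct source-id, each seeing only that source's items.
  [?items <- (acc/all) :from [Item (= ?source-id source-id)]]
  =>
  (prn "source" ?source-id "# of items" (count ?items)))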
What’s happening is that I’m batching a fetch of the facts for performance reasons, so that I can fetch the data multi-threaded. Further rules can fire that trigger another batch, but I don’t want to refetch what has already been fetched, so I effectively need to skip them.
I did work out a mechanism sort of like what you had proposed with paging, but holding the current source-id in a fact that is incremented (via an atom) after the current items have been processed. Subsequent Item creations use the new number.
I’d be a bit worried about side-effects like atom updates within a fire-rules loop.
The reason is the truth maintenance system (TMS): it may arbitrarily fire the RHS of rules and then retract them later when the LHS is no longer satisfied.
If you perform side-effects that are not visible to the TMS, those cannot be undone. There’s a chance that what you have is still reliable. However, it is something to be leery of.
I’d prefer any sort of incremented mutable like this be done between successive fire-rules calls, so the TMS can operate in a “pure” env.
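A rough sketch of keeping that mutation between fire-rules calls rather than inside a rule’s RHS, with hypothetical SessionId and Item records:

(ns example.batched
  (:require [clara.rules :as r]))

;; Hypothetical fact types; the session id models "points in insertion time".
(defrecord SessionId [n])
(defrecord Item [session-id value])

(defn insert-batches
  "Insert each batch tagged with an incrementing session id, firing rules
  between batches so no mutation happens inside rule right-hand sides."
  [session batches]
  (reduce (fn [s [n batch]]
            (-> s
                (r/insert (->SessionId n))
                (r/insert-all (map #(->Item n %) batch))
                (r/fire-rules)))
          session
          (map-indexed vector batches)))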
I see. The problem I was running up against is that (acc/all) doesn’t get invoked when more matching facts come in. Which feels like a bug to me.
If that had worked, then I wouldn’t have resorted to the mutation, but I’ll consider a fire-rules loop.
It does via TMS. However, it could be something to do with your side-effects as I mentioned.
I’d tend to think of rules as a good way to model a tricky flow of logic. However, I’d delegate side-effects to the external caller. They can call fire-rules multiple times as some external “state” has changed.
So I may have a fact that represents “already fetched” facts, and use that to exclude those from any new fetch. Each fire-rules loop, I’d query out what to fetch, fetch them, and add them to an “already fetched” fact to apply to the next insert/fire-rules loop.
An example of the outline of this idea is here https://gist.github.com/mrrodriguez/baaa6bf73e8a3b412970c648e208c293
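A rough outline of that fetch loop (hypothetical record and query names, not necessarily what the gist does):

(ns example.fetch-loop
  (:require [clara.rules :as r]))

;; Hypothetical fact type: a rule inserts NeedsFetch when it wants more data.
(defrecord NeedsFetch [id])

(r/defquery needed
  "Every id some rule has asked to fetch."
  []
  [NeedsFetch (= ?id id)])

(defn fetch-pass
  "Fire rules, fetch whatever is requested but not yet in `fetched`, insert
  the results, and repeat until nothing new is requested. fetch-items is a
  caller-supplied (possibly multi-threaded) function returning Clara facts."
  [session fetched fetch-items]
  (let [fired (r/fire-rules session)
        ids   (->> (r/query fired needed) (map :?id) (remove fetched) set)]
    (if (empty? ids)
      fired
      (recur (r/insert-all fired (fetch-items ids))
             (into fetched ids)
             fetch-items))))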
Is using a fire-rules loop effectively any different than using insert-unconditional!? It seems like when you insert them before the fire-rules, that amounts to the same thing, as there is no truth maintenance involved in inserting those facts.
It can be similar, but you would rely on rule evaluation order with insert-unconditional!.
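A small sketch of that difference, with hypothetical Trigger and Seen records: insert-unconditional! bypasses truth maintenance but only happens when the rule’s RHS runs, whereas a plain insert before fire-rules is present for the whole pass.

(ns example.unconditional
  (:require [clara.rules :as r]))

;; Hypothetical fact types.
(defrecord Trigger [])
(defrecord Seen [tag])

(r/defrule record-seen
  ;; insert-unconditional! escapes truth maintenance, but the fact only exists
  ;; once this RHS has run, so later rules depend on evaluation order.
  [Trigger]
  =>
  (r/insert-unconditional! (->Seen :from-rule)))

(comment
  ;; Inserting before fire-rules also bypasses truth maintenance, but the fact
  ;; is visible from the start of the pass rather than appearing mid-evaluation.
  (-> (r/mk-session 'example.unconditional)
      (r/insert (->Seen :from-caller) (->Trigger))
      (r/fire-rules)))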