#clara
2018-01-19
mikerod15:01:07

@dave.dixon I would generally never recommend patterns that make pervasive use of insert-unconditional!
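(For context, here is a minimal sketch of the distinction being discussed. The fact names `Temperature` and `Alert` are invented for illustration; only `insert!` and `insert-unconditional!` come from the Clara API.)

```clojure
(ns example.tms
  (:require [clara.rules :refer [defrule insert! insert-unconditional!]]))

(defrecord Temperature [value])
(defrecord Alert [kind])

;; insert! is logical (truth-maintained): if the supporting Temperature
;; is later retracted, this Alert is automatically retracted too.
(defrule logical-alert
  [Temperature (> value 100)]
  =>
  (insert! (->Alert :logical)))

;; insert-unconditional! escapes truth maintenance: this Alert survives
;; even after the Temperature that triggered it is retracted, which is
;; why pervasive use of it is risky.
(defrule unconditional-alert
  [Temperature (> value 100)]
  =>
  (insert-unconditional! (->Alert :unconditional)))
```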

mikerod15:01:12

The one case I keep seeing that is difficult to work around is needing to retract internally for something like “old events”. Other than that, I think it’s best avoided. Even then, there are several outstanding RHS-retract-related issues in the GitHub issue tracker.

mikerod15:01:28

I’m looking at your example to see if I see anything you could do

mikerod15:01:20

Whoops, missing a part in that, will edit

dave.dixon15:01:07

@mikerod Interesting. Let me ponder that.

mikerod15:01:28

I am missing a part still. Trying to get that done. The concept I think can work though in terms of marking a timestamp at each stage. However, the above fails because it is letting TMS remove the intermediary collections from previous runs

mikerod16:01:14

I think you have to signal each “stage” as an explicit fact, though. Without something like that, the network never knows what each conceptual step is to you. It doesn’t really distinguish one fire-rules call from another.

mikerod16:01:34

So usually when that sort of info is missing, you end up adding it as a fact to represent the idea
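(A hypothetical sketch of that idea: stamp each fire-rules call with an explicit fact marking the stage. `FireEvent`, `Num`, and `run-stage` are invented names; the later messages refer to this kind of external timestamp stamping.)

```clojure
(ns example.stages
  (:require [clara.rules :as r]))

(defrecord FireEvent [ts])
(defrecord Num [ts value])

(defn run-stage
  "Insert a FireEvent marking this stage, stamp each incoming value with
  the same ts, then fire the rules. The ts makes the conceptual step
  visible to the network as a fact."
  [session ts new-values]
  (-> session
      (r/insert (->FireEvent ts))
      (as-> s (apply r/insert s (map #(->Num ts %) new-values)))
      r/fire-rules))
```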

mikerod16:01:34

@dave.dixon I updated it now

mikerod16:01:09

It is hard to say that it exactly captures what you want. If you look at :before-retract though, you get the multiple accumulated groups idea that you were wanting

mikerod16:01:23

Just requires each fact to be externally stamped with the same “timestamp” as the “fire event” that they were inserted with

mikerod16:01:34

I couldn’t work out a way for rules to express that stamping due to logical TMS.

mikerod16:01:55

Retracting is slightly harder, since you need to retract all matching facts across all timestamps (at least from your sort of example)

mikerod16:01:21

This retraction results in all prior groups being updated as well to remove the fact that is gone. That isn’t exactly what you wanted, I don’t think, but it conveys somewhat similar information.

mikerod16:01:32

The accumulation for Nums could tag it with the FireEvent ts it went with. Then that could be used to do a duplicate removal and/or “newest” result lookup on the :before-retract vs :after-retract
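(A sketch of that tagging, assuming the hypothetical `FireEvent`/`Num` records above; `acc/all` is the real Clara accumulator, the rest is invented.)

```clojure
(ns example.groups
  (:require [clara.rules :refer [defrule insert!]]
            [clara.rules.accumulators :as acc]))

(defrecord FireEvent [ts])
(defrecord Num [ts value])
(defrecord NumsGroup [ts nums])

;; For each FireEvent, accumulate only the Nums stamped with that same ts,
;; so each stage produces its own group and earlier groups are preserved.
(defrule nums-group-per-fire-event
  [FireEvent (= ?ts ts)]
  [?nums <- (acc/all) :from [Num (= ?ts ts)]]
  =>
  (insert! (->NumsGroup ?ts ?nums)))
```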

wparker18:01:55

@dave.dixon Perhaps a model like the following might work if you’re OK with accumulating some “garbage” over time.

- Logically insert a "Request" fact.
- Have a query like [Request (= ?id id)] [:not [RequestResponse (= ?id id)]].
- The client would externally insert a RequestResponse fact when it wanted to "close" a request.
- A todo could be done like [Request (= ?id id)] [:not [RequestResponse (= ?id id)]] => (insert! (->ToDo ?id)).
Then further logic downstream could dispatch on the ToDos as required, say if too many were outstanding.

If you need to clean up garbage, it almost sounds like you’re asking about retracting a fact without removing downstream insertions due to it… is that on target at all?
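(The bullets above can be sketched as Clara rules roughly like this; record names follow the discussion, and the session wiring is assumed.)

```clojure
(ns example.requests
  (:require [clara.rules :refer [defrule defquery insert!]]))

(defrecord Request [id])
(defrecord RequestResponse [id])
(defrecord ToDo [id])

;; A Request with no matching RequestResponse is still "open".
(defquery open-requests []
  [Request (= ?id id)]
  [:not [RequestResponse (= ?id id)]])

;; The same condition drives a ToDo. Because insert! is logical, the
;; client "closing" a request by externally inserting a RequestResponse
;; causes truth maintenance to retract the corresponding ToDo.
(defrule request->todo
  [Request (= ?id id)]
  [:not [RequestResponse (= ?id id)]]
  =>
  (insert! (->ToDo ?id)))
```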

dave.dixon19:01:19

@wparker I think in your 4th bullet you would remove the :not. But otherwise, that's basically it. The "garbage" is okay; I'm experimenting with rulesets that enforce logical consistency over time, e.g. of some process of interacting with the user and an external server. So the "garbage" is really the history. What I'm finding is that you generally have some "anchor" facts from which the rest of the process logic flows, so when you change the anchor the "garbage" gets collected. For example, in the conduit example, an anchor fact is the filter which retrieves a set of articles from the server, so articles wind up inserted with conditional dependence on the filter fact. Then the user can do a number of things, like change the favorite status of an article, edit it if they are the author, etc. So all of that history gets accumulated, but when you change the filter it's all retracted.
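(A hypothetical sketch of that anchor-fact pattern; all names are invented, and `fetch-articles` is a stand-in for the real server call — a production ruleset would normally keep I/O out of a rule's right-hand side.)

```clojure
(ns example.anchors
  (:require [clara.rules :refer [defrule insert!]]))

(defrecord Filter [spec])
(defrecord Article [id data])

;; Stand-in for the real server call; returns a seq of article maps.
(defn fetch-articles [spec]
  [])

;; Articles are logically inserted under the Filter "anchor": replacing
;; the Filter retracts the Articles, and with them everything derived
;; downstream, so the accumulated history is collected automatically.
(defrule articles-for-filter
  [Filter (= ?spec spec)]
  =>
  (doseq [a (fetch-articles ?spec)]
    (insert! (->Article (:id a) a))))
```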

dave.dixon19:01:19

@wparker It wasn't specifically that I was looking to retract a fact without removing downstream insertions; rather, to modify the set of facts that went into an accumulator without retracting downstream insertions created by previous activations of the accumulator rule. It makes sense from a purely logical standpoint (I think), but the semantics of accumulators in the rule network are slightly different: not wrong, just making a different logical assertion, and one that is feasible within the rule-network architecture.