
@mike1452 @apbleonard If you use manual retractions in the RHS you become responsible for ensuring that the rule execution order works out so that you get your desired results. It is basically the same idea as with logical vs unconditional insertions discussed earlier. This can be done with salience and/or strategic structuring of your rules, but in my experience it adds significant complication and room for error.
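To make the logical-vs-unconditional distinction concrete, here's a minimal sketch using Clara's RHS API. The fact types (`Temperature`, `Alert`) and thresholds are hypothetical, invented for illustration:

```clojure
;; Sketch of logical vs unconditional insertion in Clara.
;; Fact types here are hypothetical examples, not from this thread.
(ns example.truth
  (:require [clara.rules :refer [defrule insert! insert-unconditional!]]))

(defrecord Temperature [value])
(defrecord Alert [kind])

(defrule logical-alert
  ;; insert! is truth-maintained: if the supporting Temperature fact
  ;; is later retracted, this Alert is retracted automatically.
  [Temperature (> value 100)]
  =>
  (insert! (->Alert :overheat)))

(defrule unconditional-alert
  ;; insert-unconditional! persists even after its support is
  ;; retracted; if you then clean it up with a manual retract!,
  ;; you own the rule-ordering problem described above.
  [Temperature (> value 100)]
  =>
  (insert-unconditional! (->Alert :logged)))
```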


My suggestion would be to use accumulators or negation conditions as Mike alluded to above. If you have a priority order of different logical states it wouldn’t be too hard to get the state with the highest priority in an accumulator and use that state in a rule RHS. Alternatively, you can write rules like
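A rough sketch of the accumulator approach — the fact types and the numeric `:priority` field are assumptions for illustration, not anything from this conversation:

```clojure
;; Pick the candidate state with the highest priority using an
;; accumulator; StateCandidate/ChosenState are hypothetical fact types.
(ns example.rules
  (:require [clara.rules :refer [defrule insert!]]
            [clara.rules.accumulators :as acc]))

(defrecord StateCandidate [state priority])
(defrecord ChosenState [state])

(defrule choose-highest-priority-state
  ;; :returns-fact true binds the whole winning fact rather than
  ;; just the maximum priority value.
  [?winner <- (acc/max :priority :returns-fact true) :from [StateCandidate]]
  =>
  (insert! (->ChosenState (:state ?winner))))
```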


(defrule negation-rule-1
  [?a <- A (some-predicate-here)]
  [:not [B (other-predicate-here)]]
  =>
  (insert! something))


Manual RHS retractions can be useful for getting things out of memory or for performance, but in general they wouldn't be my first resort.


@wparker thanks for the suggestions! btw, what is the biggest rule base you've ever seen Clara work with in production? 10, 100, 500 rules?


do you mean the number of rules?


I think we have some rulesets with at least 10K rules now, can’t say exactly off the top of my head since we autogenerate a lot of rules with some internal framework code


there’s a reason we’ve spent a lot of effort optimizing performance 😛


@mike1452 FWIW I have exactly the same challenges. We handle complex government policies introduced atop older policies, and half the battle is understanding the pathways through the logic, the total space of inputs, and the variety of resulting outputs. I'd love to implement the rules trivially enough to give fast feedback to policy makers on what their new policy just did to the complexity of the problem. I feel like clojure's core.logic should be able to help with this...? If that makes sense, does anyone know of people using core.logic with Clara?


@apbleonard I'm a fan of core.logic, but am unaware of anyone using it tightly with Clara. I think core.logic works great for problems that can be described as a set of constraints, and core.logic will search that constraint space for a solution. Clara is more centered around encoding business rules or domain expertise. It's not always obvious which tool is best for a given job, so I always think it's a good idea to experiment with both in your problem space before choosing a direction.
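To illustrate the constraint-search style (not an integration with Clara — just a standalone sketch of what core.logic does), here's a tiny finite-domain example; the variables and domain are made up:

```clojure
;; core.logic searches a declared constraint space for solutions,
;; rather than firing rules over working memory.
(ns example.logic
  (:require [clojure.core.logic :as l]
            [clojure.core.logic.fd :as fd]))

;; All pairs [x y] drawn from 1..3 where x < y.
(l/run* [x y]
  (fd/in x y (fd/interval 1 3))
  (fd/< x y))
```

Contrast with Clara: here you state the constraints and the engine enumerates every satisfying assignment, which is closer to exploring "the total space of inputs" than encoding if-then domain expertise.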


Clara-rules 0.13.0 is released.
- The replacement of the previous durability API with a more robust and performant one. This is discussed at issue 198. Note that this is experimental and is subject to further change; that said, we're successfully using it in production at Cerner.
- Improvements in error reporting in rule and query conditions, discussed at issue 255. Thanks to Carlos Phillips for this one.
- Various performance improvements and bug fixes discussed on the changelog.


I meant to say “highlights include” before my bullets.. I fail at Slack formatting 😛