
@ryanbrush: are there any guidelines/limits for the number of facts?


@ragge Not really, besides the limits of the JVM. We frequently have 100,000+ facts in our sessions, and I've tested into the millions (although for a limited scenario). Most of the work is done in Clojure group-by and reducer functions, which scale well.


@ragge The only thing to keep in mind is to avoid non-equality (Cartesian) joins over lots of facts. Rule conditions that join on equality use hash-based joins, which scale well; non-equality joins may have to compare every candidate pair of facts. This limitation is common to rule engines in general.
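To illustrate the difference, here is a sketch of the two join styles using Clara's `defrule` syntax. The record names, fields, and thresholds are illustrative, not from the original discussion; it assumes `clara-rules` is on the classpath.

```clojure
(ns example.rules
  (:require [clara.rules :refer [defrule]]))

(defrecord Temperature [location value])
(defrecord WindSpeed [location value])

;; Equality join: ?loc is bound to `location` in both conditions,
;; so Clara can match facts with a hash-based join that scales well.
(defrule cold-and-windy
  [Temperature (= ?loc location) (< value 0)]
  [WindSpeed (= ?loc location) (> value 30)]
  =>
  (println "Cold and windy at" ?loc))

;; Non-equality join: the (< ?t value) test cannot be hashed, so Clara
;; must evaluate it per candidate pair, approaching Cartesian cost
;; as the number of facts grows.
(defrule temp-below-wind
  [Temperature (= ?t value)]
  [WindSpeed (< ?t value)]
  =>
  (println "Wind speed exceeds temperature" ?t))
```

Where possible, restructuring a condition so the join falls on an equality test keeps the engine on the fast path.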


@ryanbrush: thanks for your reply, will definitely give it a go for our use case (which is currently around 50k facts)


@ragge Sounds good. Clara's performance should at least be competitive with other JVM-based rule engines and is working well with a large number of facts for us. If you do run into a workload that isn't doing well, please let me know.


is there a performance measurement of Clara?


or guidelines, like how much memory you should budget for number of facts/rules?


@albaker I use Criterium against a function that loads and exercises our Clara rulesets to measure performance. I did post the program I wrote some time ago to benchmark Clara itself, but generally it's easier just to write your own test function and use Criterium to measure it.
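A minimal sketch of that approach, assuming `clara-rules` and `criterium` are on the classpath; `my.rules` and `sample-facts` are placeholder names for your own rule namespace and test data:

```clojure
(ns example.bench
  (:require [clara.rules :refer [mk-session insert-all fire-rules]]
            [criterium.core :refer [quick-bench]]))

;; Hypothetical helper: build a session from your rule namespace,
;; insert a batch of facts, and fire the rules once.
(defn run-rules [facts]
  (-> (mk-session 'my.rules)
      (insert-all facts)
      (fire-rules)))

;; Criterium runs the function repeatedly, accounts for JIT warmup,
;; and reports mean execution time with variance.
(quick-bench (run-rules sample-facts))
```

`quick-bench` gives a fast estimate; use `criterium.core/bench` for a longer, more rigorous run.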


@albaker Some considerations for a memory budget: each condition on a rule compiles into its own function, and identical conditions are shared between rules, reducing the overall size. The facts are mostly just stored in Clojure sequences, or in maps grouped by fact type. In general, the memory footprint will be dominated by the size of the facts themselves.


I think the best way to think of it is that you're just writing Clojure functions and creating Clojure data structures. Clara wires things together, but the footprint is generally a function of the size of your data.


Cool, thanks!