
Oh, I missed this. \o/


On another note, we've been troubleshooting some serious performance issues loading our database, and I've just figured out that it's memory. If I call fire-rules with the worst kind of facts (the ones that chain the most) in batches of 5, it completes in about 25 minutes; otherwise it takes about 4h10m.


I mean, that's what tells me it's memory. Tell me if I'm wrong.


Do you have profiles of the heap when this occurs?


No, but I'll happily make some.


@eraserhd both heap profile and CPU sampling snapshot info would be good - not sure what you use. visualvm is good

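(For reference, both kinds of snapshot can also be captured from the command line with the JDK's own tools; the pid `12345` below is a placeholder, and the resulting `.hprof` file opens directly in VisualVM:)

```shell
# List running JVMs to find the pid (jps ships with the JDK):
jps -l

# Class histogram: per-class instance counts and shallow sizes (12345 is a placeholder pid):
jmap -histo:live 12345 | head -n 25

# Full heap dump, loadable in VisualVM or Eclipse MAT:
jmap -dump:live,format=b,file=clara-heap.hprof 12345
```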

Another shot-in-the-dark suggestion - if you are making rules with a long LHS sequence of conditions, perhaps you’d be better off breaking them into multiple rules that aren’t as “deep”


using “intermediate facts” to represent the parts


this can often give better perf characteristics and I generally think it’s better modularization anyways. This may not be relevant to you at all though. I just noticed the “chain” part and wasn’t sure what it referred to.
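(A minimal sketch of the "intermediate facts" suggestion above - the fact types and thresholds here are hypothetical, not from this thread:)

```clojure
(ns rules-sketch
  (:require [clara.rules :refer [defrule insert!]]))

;; Hypothetical fact types, for illustration only.
(defrecord Order [id customer total])
(defrecord Shipment [order-id status])
(defrecord LargeOrder [id customer])      ; the "intermediate fact"
(defrecord NotifyCustomer [customer])

;; Instead of one deep rule whose LHS joins Order and Shipment and
;; tests the total all at once, split the logic at a natural seam:

(defrule detect-large-order
  [Order (= ?id id) (= ?customer customer) (> total 1000)]
  =>
  (insert! (->LargeOrder ?id ?customer)))

;; The second rule matches on the intermediate fact, keeping each LHS shallow.
(defrule notify-on-delayed-large-order
  [LargeOrder (= ?id id) (= ?customer customer)]
  [Shipment (= ?id order-id) (= :delayed status)]
  =>
  (insert! (->NotifyCustomer ?customer)))
```

Because `insert!` is truth-maintained, the intermediate `LargeOrder` facts are retracted automatically if their supporting `Order` goes away.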


The Clara inspection/tracing tools recently had an update (by Will, I believe) where they can try to report “counts” of operations happening at points in the network. If your chaining is resulting in a large “cartesian product” of facts joining together, these helpers may be useful for diagnosing that too


which gives you a fn


by chain, I mean deep - rule A inserts a fact, then rule B fires and inserts a fact, etc...

👍 4

That counts thing seems really relevant. I'll try it.


oh wow. clojure.lang.Atom memory usage is climbing by 20M per second.


Are you making a lot of atoms? Clara shouldn’t be hah


@eraserhd Clara uses atoms in engine.cljc in some places to store pending operations; conceivably you could get there from data being put in those atoms, though it would be fairly hard. But even the performant version of your rule sessions taking 25 minutes is striking. 😱 Obviously it could be unavoidable if you’re just processing massive amounts of data, but I’d be curious to see any kind of reproducing example of inordinate resource use - smaller examples obviously being easier to diagnose. I suspect you’ve uncovered some kind of bad pattern in the way the rule network fires rather than this (poor) level of performance being inevitable. Most of Clara’s optimization to date has been done against Cerner’s use cases; having a broader sample of benchmarks would be useful.


AlphaNodes are also generated with an atom for the bindings that the node will propagate in the event that it’s satisfied; however, they would be scoped to the function that determines the satisfaction of the node. The atom is probably unnecessary and could probably be replaced with shadowing within the function… I doubt that this is the growth that you are seeing, though. There might be a performance gain from not having to swap! that atom, though….
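(A standalone sketch of the general pattern being described - this is not Clara's actual engine code, just the same accumulation written with an atom versus threaded through pure calls, which is the "shadowing" idea:)

```clojure
;; Atom-based: allocates an Atom and pays for a CAS on every swap!.
(defn collect-bindings-atom [pairs]
  (let [acc (atom {})]
    (doseq [[k v] pairs]
      (swap! acc assoc k v))
    @acc))

;; Pure version: no mutable cell needed, the value is threaded through.
(defn collect-bindings-pure [pairs]
  (reduce (fn [acc [k v]] (assoc acc k v)) {} pairs))

;; (collect-bindings-atom [[:a 1] [:b 2]]) ;=> {:a 1, :b 2}
;; (collect-bindings-pure [[:a 1] [:b 2]]) ;=> {:a 1, :b 2}
```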


@U3KC48GHW interesting on the use of an atom there. I’ll have to check that out again. However, overall, I wouldn’t expect to see an “Atom” itself taking up memory. It’s a pointer to something that may be large, but it itself shouldn’t show up as taking that memory (that’s not what I expect in heap dumps, at least I think?)


So a lot of memory attributed to clojure.lang.Atom itself makes me think there’d be a lot of instances of the Atom class.