
the insert of 1 million is realistic; the update of 1 million slightly less so


thanks for all the help so far!


@wparker @mikerod Thanks for the clarification re: breaking changes. Precept recently experienced some with 0.17.0, and I realized I should probably speak up about anything that might affect the implementation since I’m the only one who would really know 🙂 Sorry if I’m being overly concerned. It’s hard for me to investigate the effects of changes in detail myself due to time constraints lately, and hard in general since, to me, there’s a fair amount of complexity at work in Clara 🙂


@alex-dixon what sorts of breaking changes? Have you tracked down the cause(s)? I totally understand not wanting to be blindsided by breaking changes and am glad you’re looking over the issue list. 🙂 If we created a regression of some kind or need to more clearly communicate public vs private boundaries of some API(s) in Clara that’d be good info to have.


Meh. It’s my responsibility. You guys do a great job. It’s been inspiring to me to see how much effort is put into backward compatibility. I’ve narrowed it down to a couple of commits but since it’s not something that’s fun to solve I haven’t looked very far 😄


Basically I had some rules that I baked into every Precept session and in 0.17 they’re not making it in. Relevant spots in Precept code (which echoes Clara’s implementation considerably) may include:


Pretty confident I can figure it out… it seems related to the namespace resolution changes that fix rules not being able to see vars defined in their namespace in CLJS. That’s definitely a fix I want, so I just need to take some time to make this work on my end


Yeah, just giving it a quick glance it seems like you’re doing custom stuff with loading rules and such, and the changes in 359 could have impacted that somehow (again, without looking at it in any detail). 359 was an obvious bug that needed to be fixed, but if we broke something else that ought to work, let us know


@wdullaer for a workaround in your use case, would it make sense to first filter the consents? Your example shows 1/10th as many persons as events. Shrinking the search space by explicitly filtering the duplicates may speed things up.


So adding an additional defrecord DistinctConsent and a new rule to map from Consent to DistinctConsent. The rest of the rules would then use DistinctConsent going forward
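A minimal sketch of that mapping rule might look like the following. The field names (personId, processorId, purpose, attribute) are guesses based on this conversation, not Precept’s or @wdullaer’s actual schema. Binding every field inside the accumulator’s `:from` condition makes Clara group by those bindings, so the rule fires once per unique field combination no matter how many duplicate Consent facts are in the session:

```clojure
(ns example.consents
  (:require [clara.rules :refer [defrule insert!]]
            [clara.rules.accumulators :as acc]))

;; Hypothetical fact shapes inferred from the discussion above.
(defrecord Consent [personId processorId purpose attribute])
(defrecord DistinctConsent [personId processorId purpose attribute])

(defrule consent->distinct-consent
  "Collapse duplicate Consent facts into one DistinctConsent per
   unique combination of bound fields."
  [?n <- (acc/count) :from [Consent (= ?person personId)
                            (= ?proc processorId)
                            (= ?purpose purpose)
                            (= ?attr attribute)]]
  =>
  (insert! (->DistinctConsent ?person ?proc ?purpose ?attr)))
```

Note that a plain per-fact rule (without the accumulator) would fire once per duplicate and insert duplicate DistinctConsent facts, so the grouping accumulator is doing the actual dedup work here.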


yes, I’ll probably try to prevent duplicate consent from being created


that’s also a cool way to handle it actually, I thought about querying before inserting


acc/distinct will still be managing a large object for retractions


but it will be smaller than the cross product of Consents and Processors


I don’t fully control how many consent items we’ll have for a given user


it depends on the granularity at which this will be recorded and that is not fully within my control


But you expect duplicate consents?


most certainly


another thought (possibly in addition to the previous one) was to explode the Processor facts into something like DesiredConsent [processorId purpose attribute], use a rule to match each one to a Consent, and dump the result into another intermediate fact, ApprovedDesiredConsent [processorId personId purpose attribute]. Then accumulate and match your ApprovedDesiredConsent to the Processor. I haven’t tested it, so I don’t know if it’s more performant, but the idea/hope is that you’d accumulate less when creating the AllowedProcessor
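The first two steps of that pipeline could be sketched roughly as below. Everything here is hypothetical: the Processor shape (with `purposes` and `attributes` collections), the Consent fields, and the intermediate record names are all assumptions from the message above, not tested code:

```clojure
(ns example.desired
  (:require [clara.rules :refer [defrule insert!]]))

;; Hypothetical fact shapes.
(defrecord Processor [processorId purposes attributes])
(defrecord Consent [personId processorId purpose attribute])
(defrecord DesiredConsent [processorId purpose attribute])
(defrecord ApprovedDesiredConsent [processorId personId purpose attribute])

;; Step 1: explode each Processor into the individual consents it needs.
(defrule explode-processor
  [Processor (= ?proc processorId) (= ?purposes purposes) (= ?attrs attributes)]
  =>
  (doseq [purpose ?purposes
          attr    ?attrs]
    (insert! (->DesiredConsent ?proc purpose attr))))

;; Step 2: approve a desired consent when a matching Consent exists.
(defrule approve-desired-consent
  [DesiredConsent (= ?proc processorId) (= ?purpose purpose) (= ?attr attribute)]
  [Consent (= ?person personId) (= ?proc processorId)
           (= ?purpose purpose) (= ?attr attribute)]
  =>
  (insert! (->ApprovedDesiredConsent ?proc ?person ?purpose ?attr)))

;; Step 3 (not shown): accumulate ApprovedDesiredConsent per
;; processor/person pair and compare against the processor's desired
;; set to decide whether to emit an AllowedProcessor.
```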


I’ll experiment with some of these on Monday


the easiest, I think, is to just ensure that no duplicate consents are asserted, which I think I can do out of band (the consent will initially be put on a log)
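That out-of-band dedup could be as simple as the following sketch, where `consent-log` stands for a hypothetical seq of Consent records read off the log before they reach the session:

```clojure
(ns example.ingest
  (:require [clara.rules :refer [insert-all fire-rules]]))

(defn load-consents
  "Dedupe the consent log before asserting it into the session,
   so no downstream rule ever sees a duplicate Consent."
  [session consent-log]
  (-> session
      (insert-all (distinct consent-log))
      fire-rules))
```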