#clara
2016-12-06
devn02:12:51

@mikerod william byrd is a friend of mine

devn02:12:17

what if Barliman (his code synthesis project) could synthesize rules

devn02:12:30

i’ve found there’s a sort of typical “well i know i need to join X and Y on field :q”

devn02:12:56

“and i expect a Z to be inserted”

devn02:12:39

this is a kind of partial description of both arity and return value

devn02:12:54

you may have a function cool? in the same namespace

devn02:12:49

he showed an example of: provided a definition of reduce-right, you can define concat

devn02:12:56

imagine a smallish rulebase in a namespace under test

devn02:12:01

i’m talking a bit of gibberish i suppose, but will was talking about synthesizing the “last 10%” of your program

devn02:12:29

i wonder if generating the “last rule” in a smallish rulebase would be an interesting place to try that out

jonsmock02:12:52

For me, writing the rules is the "easy part", but knowing if my rules cover what I think they do is the hard part

jonsmock02:12:21

(I'm saying this as someone who is completely entranced by Barliman. I love that stuff, and I cloned the repo for the flight home)

jonsmock02:12:24

But at least at work (we use clara-rules), I want tooling around the rules I have

jonsmock02:12:36

Could just be me, not trying to dismiss what you're saying

jonsmock02:12:15

i.e. here's a ns of rules, did I cover all cases

jonsmock02:12:43

Or our actual situation: here's a ns of rules, is it possible more than one can fire?

jonsmock02:12:16

Again might be unique to the way I use clara-rules, but I have a hierarchy that looks like:

jonsmock02:12:35

basic fact rules -> intermediate level deduction rules -> final action rules

jonsmock02:12:25

And I want just one final action rule to fire (this doesn't actually do a side effect. Something else uses the output of the rule engine to take an action)
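A minimal sketch of that three-tier shape, using the public clara.rules API (the fact types and rule names here are hypothetical, invented for illustration):

```clojure
(ns example.rules
  (:require [clara.rules :refer [defrule insert insert! fire-rules mk-session]]))

;; Hypothetical fact types for illustration.
(defrecord Claim [id amount])   ; raw input from another system
(defrecord LargeClaim [id])     ; intermediate deduction
(defrecord ReviewAction [id])   ; final action fact, consumed elsewhere

;; Tier 1: basic fact rule -- dig a simple fact out of the raw input.
(defrule large-claim
  [Claim (> amount 10000) (= ?id id)]
  =>
  (insert! (->LargeClaim ?id)))

;; Tier 3: final action rule -- no side effects here; something
;; downstream reads the ReviewAction facts and takes the action.
(defrule review-large-claim
  [LargeClaim (= ?id id)]
  =>
  (insert! (->ReviewAction ?id)))

(comment
  (-> (mk-session 'example.rules)
      (insert (->Claim "c1" 20000))
      (fire-rules)))
```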

jonsmock02:12:07

But yeah, Barliman looks sick. I seriously eat that stuff up

svenhuster07:12:25

@devn @mikerod thanks a lot. I shall have a go.

mikerod15:12:02

devn if nothing else, I think it’d be an interesting experiment

mikerod15:12:17

That was an interesting talk.

mikerod15:12:11

jonsmock I agree that more tooling would be a big win and is one of the harder parts.

mikerod15:12:22

I’ll repeat since I missed the tags:

jonsmock15:12:32

Yeah, sorry, I was thinking more about devn's thing

mikerod15:12:36

@devn if nothing else, I think it’d be an interesting experiment

jonsmock15:12:39

Sounds cool, and I shouldn't pooh-pooh it

jonsmock15:12:49

(Not that I was trying to, but I think I sounded negative)

jonsmock15:12:16

btw @mikerod I'm about half-way through your talk now - nice work!

mikerod15:12:37

thanks. it’s a bit basic, I’m sure, given how much you’re already entrenched in the details of working with rules.

mikerod15:12:14

@jonsmock Maybe a SAT solver can be used in some sort of tooling around rule analysis 🙂

devn15:12:11

@jonsmock: agreed on tooling, and I didn't read it as negative

devn15:12:19

Some of the tooling stuff you describe is I think pretty straightforward

devn15:12:51

I would really like to know coverage of my rules under test

devn15:12:35

The inspect session output I believe can answer that, but it would require carrying around some state between tests
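One possible starting point for that, sketched against clara.tools.inspect (the shape of the inspect map, e.g. the `:rule-matches` key, may differ between clara versions, so treat this as an assumption to check against your version):

```clojure
(ns example.coverage
  (:require [clara.rules :refer [insert fire-rules]]
            [clara.tools.inspect :as inspect]))

;; Fire a session, then pull out the names of rules that actually
;; matched. Accumulating these sets across a test run would give a
;; crude "rule coverage" report -- the cross-test state devn mentions.
(defn fired-rule-names
  [session facts]
  (let [fired   (fire-rules (apply insert session facts))
        matches (:rule-matches (inspect/inspect fired))]
    (->> matches
         (keep (fn [[rule rule-matches]]
                 (when (seq rule-matches)
                   (:name rule))))
         set)))
```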

jonsmock15:12:09

I think I've played around with property tests and rules, but I don't know if we have any at the moment

jonsmock15:12:51

With a large rule system, it seems harder to synthesize the right data

jonsmock15:12:27

I should think about it more though hmm

devn15:12:40

Our tests are easy to write in the small

devn15:12:06

But terribly difficult to do integration tests

devn15:12:13

Because there is a truckload of data

devn15:12:22

Not Clara's fault

devn15:12:32

Just a function of the domain

jonsmock15:12:50

Yeah that's basically what we have at the moment

jonsmock15:12:01

A handful of integration tests and some unit tests on specific namespaces

devn15:12:19

We've talked about doing test generators that compose across a couple/few namespaces

devn15:12:42

So we can improve coverage without doing massive integration tests

jonsmock15:12:14

I'm curious if you have the same organization to your rules. We basically have 3 tiers ... oh I guess I said this above

jonsmock15:12:17

basic fact rules -> intermediate level deduction rules -> final action rules

jonsmock15:12:35

the basic fact rules are basically digging out important stuff from giant maps from other systems

jonsmock15:12:50

So I probably could write generators after that level

jonsmock15:12:59

And property test the intermediate and action rules together
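A sketch of that idea with clojure.test.check: generate intermediate-tier facts directly, run them through the session, and check an invariant on the action facts that come out. The fact types, rule, and invariant below are hypothetical stand-ins:

```clojure
(ns example.rule-props
  (:require [clara.rules :refer [defrule defquery insert insert!
                                 fire-rules mk-session query]]
            [clojure.test.check :as tc]
            [clojure.test.check.generators :as gen]
            [clojure.test.check.properties :as prop]))

(defrecord LargeClaim [id])     ; intermediate fact, generated directly
(defrecord ReviewAction [id])   ; final action fact

(defrule review-large-claim
  [LargeClaim (= ?id id)]
  =>
  (insert! (->ReviewAction ?id)))

(defquery actions []
  [?a <- ReviewAction])

(def gen-large-claim
  (gen/fmap ->LargeClaim gen/string-alphanumeric))

;; Invariant: every action we emit traces back to some generated claim.
(def actions-only-for-claims
  (prop/for-all [claims (gen/vector gen-large-claim)]
    (let [fired      (fire-rules
                      (apply insert (mk-session 'example.rule-props) claims))
          action-ids (set (map (comp :id :?a) (query fired actions)))]
      (every? (set (map :id claims)) action-ids))))

(comment
  (tc/quick-check 50 actions-only-for-claims))
```

Generating at the intermediate tier, as jonsmock suggests, sidesteps having to synthesize the giant upstream maps that the tier-1 rules consume.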

devn15:12:09

we built a library that builds facts, and their variants (intermediate facts) using a bit of macro sugar

devn15:12:15

so that’s your step 1

devn15:12:47

(deffact FooFact
  "Blah blah"
  (alias foo-fact)

  (field some_id :spec string? :doc "blah blah")

  (variant InterestingFooFact
    (field additional_field :spec number? :doc "…")))

devn15:12:05

err sorry, i guess that’s not your step 1, but it’s kind of interesting

jonsmock15:12:11

Yeah that's really neat

jonsmock15:12:30

It would be cool to put more relational info into that

devn15:12:31

then we can use it to validate our facts conform

devn15:12:34

yeah absolutely

jonsmock15:12:04

For us, it's all about what the insurance has paid, etc

devn15:12:08

we also introduced a trace_id on all fact types, which is carried around, so you can track the lineage of a fact a bit easier

jonsmock15:12:57

So I'm thinking like (deffact TotalCharges ...) and having other facts declaratively say that they are less than TotalCharges

devn15:12:01

and, we have a :contributing_factors field on many (not all) facts, which captures interesting “why” information

devn15:12:19

but some assembly required

devn15:12:43

@jonsmock yeah, this fact library was a first pass

devn15:12:56

there’s more that could be done there for sure

devn15:12:33

@jonsmock we also have (optional-field …), and a way of specifying an explanation for each fact, so if you call (explain the-fact), it’ll give you the view you’re looking for

devn15:12:26

to date we haven’t actually used explain, which is just a function of the history of this project

devn15:12:33

we should have done it earlier, but there were bigger fish to fry

devn15:12:42

now we’re overdue for adding better explanations

devn15:12:48

“why did we make an InterestingRecommendation?” => “Patient is over 50 and has a recent encounter (< 45 days) where X happened…”

devn15:12:28

Speaking more on the topic of tooling

devn15:12:11

We built a graph you can traverse by clicking from nodes to rules and such, but textual explanation seems to be more fundamentally valuable

devn15:12:44

Ryan’s clara.tools project is on the right-ish track, but the sheer volume of information for a graph of any decent size can quickly turn into wading in a swamp of facts which aren’t all that interesting

devn16:12:06

it’s not an easy problem. views need to be tailored to your domain, and to your audience

devn16:12:19

but perhaps there’s some meta way of specifying views that could be shared

jonsmock16:12:26

@mikerod That was a solid talk