That thread got a little long, so I’ll start a new one… In general, I have not ever needed to describe provenance or temporal validity on individual triples. Instead, I’ve needed this sort of feature for groups of triples, and for me a group of triples implies a graph.

💯 2
👍 1

This makes querying a little easier than reification, though it’s somewhat similar to SPARQL-*.



With named graphs:

SELECT ?v ?date
{ graph ?g { :importantNode :value ?v } .
  ?g :validFrom ?date .
  FILTER (?date >= "2032-01-10T00:00:00.000-05:00"^^xsd:dateTime) }

With SPARQL-*:

SELECT ?v ?date
{ << :importantNode :value ?v >> :validFrom ?date .
  FILTER (?date >= "2032-01-10T00:00:00.000-05:00"^^xsd:dateTime) }
There’s the indirection when using a graph but it’s not much


And a LOT less than standard reification
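For comparison, the same lookup over standard RDF reification would look something like this sketch (assuming each statement has been reified with the usual rdf:subject / rdf:predicate / rdf:object triples, and that :validFrom hangs off the statement resource):

```sparql
SELECT ?v ?date
{ ?st rdf:subject   :importantNode ;
      rdf:predicate :value ;
      rdf:object    ?v ;
      :validFrom    ?date .
  FILTER (?date >= "2032-01-10T00:00:00.000-05:00"^^xsd:dateTime) }
```

Three extra triples per statement just to be able to point at it, versus one quad component with named graphs.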


I like that quite a bit. In my primary application triples are normally added in batches that can readily come with e.g. validity metadata, after all. Where my head starts to hurt is how to specify that the graph to use is the merge of all the graphs within the specified validity window. (I probably need more caffeine; that’s not unusual.)


Why not a condition like:

{ graph ?g { :importantNode :value ?v } .
  ?g :validFrom ?start .
  OPTIONAL { ?g :invalidFrom ?end }
  FILTER ((?start < "2032-01-10T00:00:00.000-05:00"^^xsd:dateTime) &&
          (!bound(?end) || (?end > "2032-01-10T00:00:00.000-05:00"^^xsd:dateTime))) }
i.e.
• With a variable graph, you pull out the appropriate temporal properties.
• Using filters, you can determine which temporal properties are within range (here I just said that it was valid before the date, and if it became invalid, that happened after the required date).
• Filters should not be expensive, unless you are generating a significant number of graphs.
• The data that you’re looking for and projecting is what’s found inside the braces for that graph. In the example, that’s { :importantNode :value ?v }.


right, that makes sense for the single value case. I think I’m thinking of something like where the values I’m interested in are linked, e.g.:

SELECT ?id ?owner ?maintainer
{ graph ?g { :importantNode :id ?id ;
                            :ownedBy ?o ;
                            :maintainedBy ?m .
             ?o :name ?owner .
             ?m :name ?maintainer . } }
But, say, the owner triple(s) and maintainer triple(s) come from different graphs with different validity metadata, though both are valid at the specified time. I think conceptually what I want is to find the union of all the valid graphs, but AIUI the semantics of this query are such that while there could be multiple graphs bound to ?g, all the triples in the { } graph pattern have to be found in a single graph.


(recognizing I may need to go back and refresh my spec memory)


In that case, each constraint would have to go into its own graph variable, which would have to go through the temporal test. Yes, that sounds cumbersome


Though, generally “Basic Graph Patterns” (BGPs) tend to be grouped together for the objects they describe. So it may not need an entire graph variable for each BGP, but rather for each group of BGPs (and a variable for each connector between groups)

👍 2
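To sketch what that grouping might look like for the example above — splitting the pattern so the ownership and maintenance triples can each come from their own independently-validated graph (the ?g1/?g2 split and the cutoff date are illustrative, and this assumes each entity’s :name lives in the same graph as its link):

```sparql
SELECT ?id ?owner ?maintainer
{ graph ?g1 { :importantNode :id ?id ;
                             :ownedBy ?o .
              ?o :name ?owner . }
  graph ?g2 { :importantNode :maintainedBy ?m .
              ?m :name ?maintainer . }
  ?g1 :validFrom ?d1 .
  ?g2 :validFrom ?d2 .
  FILTER (?d1 <= "2032-01-10T00:00:00.000-05:00"^^xsd:dateTime &&
          ?d2 <= "2032-01-10T00:00:00.000-05:00"^^xsd:dateTime) }
```

One graph variable per group of BGPs rather than per triple, as suggested, so the cost grows with the number of perspectives, not the number of statements.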

That might be workable… especially since we tend to do updates in batches from particular perspectives; i.e., processes tend to use the same predicates/BGPs, without much overlap — though I’d want to quantify that. Thanks! Very helpful noodling through this.

Rowland Watkins 03:06:05

I did something a little bit similar, but used the Jena reasoner for more complex comparison work - the output of reasoning against a collection of (signature) graphs would then go into a new graph, which was then queried

Rowland Watkins 03:06:15

Can’t say it was pretty, but best I had to work with - SPARQL was rather primitive back then

👍 2