#rdf
2022-12-05
Bart Kleijngeld20:12:00

In our company we have chosen an RDF stack for our data modeling, specifically RDFS/OWL for conceptual models and SHACL for logical models. Many of our data architects have absolutely no familiarity with it and now need to learn it. It turns out it's hard to get them excited, however: "What's wrong with UML?", "Why not simply go for a centralized approach and store everything in a relational DB?" (we are moving away from that, favoring a data mesh approach). I have plenty of reasons why I think a graph-based choice like RDF is preferable to UML/relational for this purpose (I briefly shared my view here a few days ago), but, frankly, I'm not very experienced in this field myself, so I was hoping to learn from people here 🙂. I will use your input, among other sources, for a presentation I'm preparing on this. Why is RDF so well suited to data modeling (or isn't it?)? What are problems with UML/relational that RDF does not suffer from? What are caveats of RDF to take into consideration? Etc. Thanks!

quoll20:12:45

UML and relational are orthogonal, IMO. Data that are regular and well defined are appropriate for relational storage. This is especially the case when the most common access mode refers to entire records at once. Data that evolve in structure, are less record-oriented, and carry a lot of information through linkages (e.g. tree structures) are much more appropriate for graph storage.

quoll20:12:32

UML is a modeling language with a long history of modeling software. I don’t know if it prefers an OWA (Open World Assumption) or CWA (Closed World Assumption), though I think it is agnostic to that (because there are systems that convert between UML and OWL). It is typically used for documentation purposes, though there have been some attempts at automating processes with it, hence the development of OCL (Object Constraint Language). OWL is specifically designed for the OWA, which makes it inappropriate for software development, but well suited for data. This is particularly true for data on the web, where not all current data may be accessible at any time, and where data continue to grow. (side note: I hate that “data” is a plural word). It has significantly more descriptive capability around relationships than UML has, and consequently allows for better modeling. However, this is a double-edged sword: relatively few people have exposure to OWL, and most are unaware of these capabilities, meaning that they are not used as often as they could be. Importantly, OWL was designed from the outset to be reasoned over, and there are many automated systems for doing exactly this. This, combined with the greater expressivity of OWL, is what has allowed automated reasoning in the medical (SNOMED), pharmaceutical, and financial domains, among others. There are many organizations who rely on these reasoning systems.
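A minimal Turtle sketch of the kind of relationship characteristics OWL lets you state declaratively and hand to a reasoner (the ex: names are purely illustrative):

@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix ex:   <http://example.org/> .

# partOf is declared transitive, so part-of chains compose automatically.
ex:partOf a owl:ObjectProperty , owl:TransitiveProperty ;
    rdfs:domain ex:Component ;
    rdfs:range  ex:Assembly .

# The inverse relationship is stated once and inferred everywhere it applies.
ex:hasPart owl:inverseOf ex:partOf .

# Given  ex:bolt ex:partOf ex:wing  and  ex:wing ex:partOf ex:aircraft ,
# a reasoner infers  ex:bolt ex:partOf ex:aircraft  and  ex:wing ex:hasPart ex:bolt .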

quoll20:12:12

As some examples: NASA uses RDF/OWL for inventory systems in building spacecraft, SNOMED uses it to automate relationships between medical concepts, Deutsche Bank uses it to automate the detection of money laundering and fraud, and every major pharmaceutical company uses it to identify candidates for drug trials.

quoll20:12:02

I provided examples to demonstrate that it’s not all hype. It has significant utility.

Bart Kleijngeld21:12:09

Examples help me out here, thanks! And yes, it does feel awkward that data is plural 😆. Sadly, I think the reasoning capabilities aren't something we look to utilize any time soon. We are a large company and wish to model our (business) language, and all the data flowing through, being produced and stored in our systems. Note: even the data itself won't be in RDF, only the data models that need to be conformed to. For now, at least. We like to take a decentralized approach here, a bit like the AAA slogan: anyone can say anything about anything. Embracing the OWA, this web-like approach really sounds like a match with RDFS/OWL to me. It's just that I don't know well enough where UML is more limited in ways that matter to us. For instance: I don't know if one can express "subPropertyOf" in UML reasonably, let alone properties as first-class citizens to begin with. That's homework for me I guess, although be my guest if you have anything to say about that too. Thanks as always!

quoll21:12:30

I didn’t think there was, but I looked it up and… it exists, but it’s ugly

quoll21:12:56

I don’t know that it’s part of the UML spec though

quoll21:12:21

Oh, no, apparently it’s legal. You just don’t see it used much

quoll21:12:30

“Generalization between associations”

👀 1
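For reference, a minimal sketch of what this looks like on the RDF side, where a property is itself a resource that can be described and specialized (the ex: names are illustrative):

@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix ex:   <http://example.org/> .

# hasBirthMother specializes hasParent; every hasBirthMother statement
# also implies a hasParent statement.
ex:hasBirthMother rdfs:subPropertyOf ex:hasParent ;
    rdfs:comment "A more specific form of ex:hasParent." .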
jumar07:12:14

I’m a complete noob, just watching the discussions here. After reading this https://clojurians.slack.com/archives/C09GHBXRC/p1670273592043609?thread_ts=1670270820.424829&channel=C09GHBXRC&message_ts=1670273592.043609 I’m wondering what are some more boring examples of its utility apart from NASA, fraud detection, and pharmaceutical companies. All of that sounds a bit special…

rickmoynihan13:12:18

@U06BE1L6T RDF’s utility or OWL’s?

jumar15:12:59

I guess RDF but perhaps both?

quoll16:12:21

I’m working with medical data right now, using public files, which are mostly in tabular format. There are lots of ID codes that refer to foreign datasets. For instance, vaccine manufacturers are provided in files from the CDC (Centers for Disease Control). They also reference CVX codes, which are the codes for vaccines. There are also (separate) files that link CVX codes to National Drug Codes (NDC). There are also other systems that connect these codes into SNOMED (the Systematized Nomenclature of Medicine). I could put all the data into tables, and create foreign keys between them. In fact, my company has done this in the past. The ELT process is difficult, because these systems tend to model their data differently, and will sometimes take another system’s code and append extra letters to it. You often find yourself having to link many tables, and it can be very difficult to learn the schema to traverse from one part of the dataset to another. It’s a painful mess, but it works.

quoll16:12:20

Putting the whole thing into RDF simplifies the process significantly. ELT still has to happen, but it’s simplified. If something has different codes in different systems, I can keep both, and just link them. I don’t need to figure out foreign keys. I can traverse across the graph quickly and easily. And change management has become a thing of the past.

👍 3
quoll16:12:03

There’s nothing particularly complex about this. We’re just gaining utility by using a graph shape for the data instead of tabular
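A rough Turtle sketch of the kind of linking described above; the ex: vocabulary and the specific codes are made up for illustration, not taken from the actual dataset:

@prefix skos: <http://www.w3.org/2004/02/skos/core#> .
@prefix ex:   <http://example.org/meddata/> .

# One vaccine concept carries its CVX code, its manufacturer (MVX), and a link
# to the corresponding NDC code, with no foreign-key plumbing in between.
ex:cvx-208 a ex:Vaccine ;
    ex:cvxCode "208" ;
    ex:manufacturer ex:mvx-PFR ;
    skos:exactMatch ex:ndc-59267-1000-1 .

ex:ndc-59267-1000-1 ex:ndcCode "59267-1000-1" .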

Bart Kleijngeld16:12:37

That's a nice example that might appeal to my colleagues.

curtosis23:12:58

When you say you’re using SHACL for logical models, can you give a (suitably anonymized) example?

Bart Kleijngeld06:12:09

I'm reading your question in two ways, so I'll just answer it in both 🙂. If you're looking for an example of how we use SHACL to obtain a logical model, it would look something like this:

:CarShape a sh:NodeShape ;
    sh:targetClass vehicle:Car ;
    sh:property :idShape ;
    sh:property [
        sh:path vehicle:tireCount ;
        sh:datatype xsd:int ;
        sh:minCount 1 ;
    ] .

:idShape a sh:PropertyShape ;
    # ...

Targeting the Car class from the vocabulary (conceptual model), you provide logical constraints this way, forming a logical model. If, on the other hand, you're looking for what we use such logical models for, let me try to answer that as well. The idea is to describe our data formally in a (large) conceptual model done in RDFS/OWL, so you can focus just on meaning and relationships under the Open World Assumption. This is great for modeling. From there, use cases in IT arise. Information is selected from the conceptual model, and logical constraints (like above) are added using SHACL. The resulting logical model can then be used to generate all sorts of target schemas (this is basically the project I work on), e.g. JSON Schema, OpenAPI specs, Pydantic models, SQL DDL, you name it (work in progress!). I hope that clarifies it for you.
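For context, a minimal sketch of what the conceptual-model side being targeted could look like (the vehicle: terms are placeholders matching the shape above, not the actual model):

@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix xsd:  <http://www.w3.org/2001/XMLSchema#> .
@prefix vehicle: <http://example.org/vehicle#> .

# Conceptual model: meaning and relationships only; cardinalities and other
# constraints live in the SHACL logical model above.
vehicle:Vehicle a owl:Class .

vehicle:Car a owl:Class ;
    rdfs:subClassOf vehicle:Vehicle .

vehicle:tireCount a owl:DatatypeProperty ;
    rdfs:domain vehicle:Vehicle ;
    rdfs:range  xsd:int .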

curtosis14:12:50

I was thinking mostly the first, but the second was also super helpful. Thanks!!

🙂 1
curtosis14:12:53

That’s actually quite relevant to the work that I’m doing, though in some cases I could see it making sense to “generate” (modulo a lot of human knowledge) in the other direction: given a bunch of possibly-overlapping (primarily-)SQL logical models, generate SHACL to describe their relationship to a conceptual model/ontology, possibly expanding/refining the conceptual model as needed.

Bart Kleijngeld14:12:53

Never considered it that way around, interesting. Could you elaborate on your use case/work/project perhaps? Some context might make me appreciate what you're doing more.

curtosis14:12:23

Without getting too specific 😉 sure… A common problem in a lot of large government agencies, especially the “boring” ones, is that they cover several major programs that are kind of related, but have some significant differences in approaches to data, only partially due to simple organizational boundaries. As a hypothetical example, there may be several programs that provide certain benefits or support to households, but because of the way the programs are designed (from a policy perspective) they define “household” quite differently. So there’s a nontrivial challenge in being able to identify which elements of those models are equivalent (and thus commensurable) and which are not. You want to enable users (primarily but not exclusively analysts) to find the right data and use it correctly, but you can’t realistically do much from the top down to standardize things. We’ve had some success building conceptual models in RDF/OWL, but the connection back to the logical models has always been fairly gauzy.

Bart Kleijngeld14:12:30

Haha, good to be careful. Interesting. There definitely seems to be some overlap in our use cases. Do I understand correctly that you ultimately wish to have all the data represented in RDF? So that data integration and federated querying (is that what you call it? Still learning) can be done?

curtosis15:12:51

I don’t think there’s any appetite to put all the data in RDF — for starters a lot of it really is transactional (and there is a LOT of it*) and it’s not clear** how much of it would benefit from a graph perspective — but having the metadata all integrated in one catalog would be extremely valuable. That said, there are subdomains where the relationship graph would actually be super useful.
* ~1Bn complex actions (dozens of txs per action) per year
** At least to the business-value folks. Demonstrating it at scale is part of the challenge.

curtosis15:12:46

We did a demonstration several years back on one of the naturally-graphy domains (~6M primary subjects) and that graph alone was somewhere around 1.5Bn triples.

quoll15:12:39

Yes… triples grow quickly 🙂

curtosis15:12:55

we had a Cray graph analytics machine at the time 🙂

quoll16:12:24

Well, 1.5B triples should fit in main memory 🙂

quoll16:12:59

A Cray graph machine should barely notice 🙂

curtosis17:12:26

welllll… IIRC it was complicated 🙂. Also their triple/SPARQL implementation was distinctly weird, for performance reasons. (Basically everything was materialized as shared-memory pointers, so it was super fast once you loaded. Also very odd processors — slow clock, but rotated through 128 thread slots with zero context-switch overhead. https://en.wikipedia.org/wiki/Cray_XMT#Threadstorm4) Intriguing for low-level implementors, interesting performance properties for users.

curtosis17:12:59

We also had a team that was using it in non-RDF mode for some genomics work. It was fun to have access to.

quoll17:12:13

I interviewed with them about working with one of these back in 2010, but opted for another opportunity instead. I was definitely curious about it

curtosis17:12:26

I wish you had, their early RDF implementation was terribad. 😄

curtosis17:12:33

but I grew up with/on AllegroGraph, so that’s where I am most comfortable.

💖 1
quoll17:12:06

Well that makes sense, since it’s all in CL!