This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2020-04-29
Channels
- # announcements (35)
- # aws (40)
- # babashka (10)
- # beginners (119)
- # calva (25)
- # cider (13)
- # clj-kondo (15)
- # cljsrn (23)
- # clojure (205)
- # clojure-dev (3)
- # clojure-europe (15)
- # clojure-germany (3)
- # clojure-italy (3)
- # clojure-nl (2)
- # clojure-uk (58)
- # clojurescript (193)
- # community-development (2)
- # conjure (147)
- # core-async (49)
- # cursive (47)
- # datomic (27)
- # duct (1)
- # fulcro (19)
- # graalvm (3)
- # graphql (1)
- # helix (3)
- # hoplon (11)
- # jackdaw (1)
- # joker (1)
- # juxt (5)
- # kaocha (1)
- # keechma (3)
- # lambdaisland (6)
- # local-first-clojure (27)
- # malli (5)
- # off-topic (41)
- # rdf (27)
- # re-frame (7)
- # reagent (15)
- # reitit (5)
- # rum (11)
- # shadow-cljs (157)
- # spacemacs (18)
- # sql (4)
- # xtdb (8)
@cjsauer it depends on what you’re interested in applying. That book is very much about reasoning with OWL and various subsets of it, e.g. RDFS+ in which case Jena is probably your best starting point: https://jena.apache.org/documentation/ontology/ RDF4j doesn’t have such good reasoning support out of the box, though it is in my opinion superior to Jena in the cleanliness of its API etc… Jena and the open source triplestores do have some limitations; mainly in performance and size of database - so if you need to go bigger/faster you’ll want a commercial triplestore. We use Stardog, which is in our experience the best, though we’ve not re-assessed the market for maybe 5 years; we compared bigdata/blazegraph with Ontotext’s GraphDB (formerly OWLIM — which should also have decent OWL support). Stardog has decent reasoning support in the database via SPARQL too if you need it.
@cjsauer It’s also worth saying there are broadly speaking two distinct communities in the RDF world, that overlap somewhat in the middle. These are essentially the Linked Data community and the Semantic Web community. The former are less interested in formal open world reasoning, and are more interested in shipping data on the web that people can work with. OWL in linked data is just another vocabulary, and the modelling is usually restricted to a subset; which may or may not be consistent. The latter care more about logical consistency, ontological modelling in OWL etc. So the answer also partly depends on what camp you’re in. e.g. for applying that book you might want to look at Protege.
Also the above is of course an over-generalisation; but broadly speaking it’s true… and it’s even a division that goes back to the roots of good old-fashioned AI… e.g. neats and scruffies etc…
@rickmoynihan is too modest to mention this, but he and his outfit have authored Grafter (https://github.com/Swirrl/grafter), if you're looking to work in Clojure.
haha thanks… I did consider mentioning it, but figured the question was more about reasoning/ontologies which isn’t RDF4j/grafter’s strength. At least not without adding the Jena backend I’ve been planning to add forever.
This week I plan to write an IGraph wrapper around grafter. This'll be my first time dealing with rdf4j. Looking forward to it.
I'm very interested in learning how to use Grafter and Stardog. Any pointers to getting started with Grafter 2?
I really need to provide clearer docs around usage etc… It has been on the list of jobs for a long time… What is it that you want to do? Grafter is mainly focussed on RDF I/O; essentially treating immutable triples as lazy-sequences — with some light tooling around SPARQL queries etc. There’s also https://github.com/Swirrl/matcha for querying RDF graphs in memory (which is roughly equivalent to datascript but for RDF). Other tools include: https://github.com/Swirrl/csv2rdf which is more or less a complete implementation of the W3C standards for CSVW (CSV on the web) — which can be used for transforming CSV files into RDF, via a jsonld metadata file.
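To make the “immutable triples as lazy sequences” idea concrete, here is a rough sketch of what using Grafter 2 with Matcha looks like. The namespace and function names below are from memory and may not match the current APIs exactly; treat this as an illustration and check the project READMEs before relying on it.

```clojure
;; Sketch only — names assumed from memory, verify against the
;; grafter and matcha READMEs.
(require '[grafter-2.rdf4j.io :as rio]
         '[grafter.matcha.alpha :as m])

;; Grafter treats RDF I/O as a lazy sequence of immutable statements:
(def triples (rio/statements "data.ttl"))

;; Matcha then queries that in-memory graph with a datalog-ish
;; syntax — roughly "datascript, but for RDF":
(m/select [?name]
          [[?person :foaf/name ?name]]
          triples)
```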
This is very helpful, @rickmoynihan; thanks! I have a couple of projects that involve various sorts of prosopographical and bibliographic knowledge, and I've been using CIDOC-CRM & PRESSoo to capture it; now I want to start building actual knowledge-based systems. I'm also beginning to investigate ways to implement Web Annotations with IIIF. I find myself falling down a tooling rabbit-hole every time I look at this! I'd be happy to help out with documentation, if you're looking for volunteers...
One factor to consider if you're using a reasoner (esp. an OWL reasoner) is that your data has to be absolutely pristine, and in quantities low enough that they match the resources you're ready to allocate. I've heard stories of people adding a single axiom to an OWL KR, and waiting days for the system to work out all of the implications. Probably an exaggeration, but it does shine a light on a real issue.
Reasoners are eager; a lazy alternative to inferencing is to use property paths.
Yeah you can definitely use some SPARQL features like property paths, VALUES clauses etc to simulate or work around not having reasoning available… What do you mean by “reasoners are eager”?
Well it's been a while since I've actually used a reasoner in anger, and maybe you can set me straight on this, but when I've used a graph configured with a reasoner, adding a new assertion would automatically result in a materialization of everything the reasoner can infer about that assertion.
Am I incorrect in this impression?
I think that depends on the implementation
You can do it on update; but you can also do it on query… In which case you only need to compute the closure over what you’re querying
Ah! Good to know. Thx.
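As a minimal illustration of the “lazy” alternative discussed above: instead of having a reasoner materialise the RDFS subclass closure up front, a SPARQL 1.1 property path can walk the hierarchy at query time. The `:Animal` class and `ex` prefix here are made up for the example.

```sparql
# Hypothetical data: find all instances of ex:Animal without a reasoner.
# rdfs:subClassOf* follows zero or more subclass links at query time,
# rather than relying on pre-materialised rdf:type entailments.
PREFIX rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX ex:   <http://example.org/>

SELECT ?animal WHERE {
  ?animal rdf:type/rdfs:subClassOf* ex:Animal .
}
```

This only simulates a narrow slice of RDFS entailment (the subclass hierarchy), but it computes exactly what the query touches and nothing more.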
I abandoned the use of reasoners early on, working with large amounts of LD that was full of inconsistencies. Fun puzzles with long chains of inference leading to contradictions.
Yeah, this is the problem with the broad semantic web vision. It’s hard enough to create consistent highly curated knowledge bases anyway; let alone when disparate groups of disconnected people try to do so. This was one of the things we were eventually hoping to make a meaningful (academic) contribution towards solving in my early days developing multi-agent systems with defeasible logic. Essentially we had an agent communication language, based around two primary primitives: inform and request. The semantics of inform were then based around a concept of mutual belief… so if you informed me of something (assuming a level of trust) I would then believe that you believed it, and may then choose to believe it myself via other defeasible rules. In terms of security you could also then introduce notions of trust, credulity, lying etc; and have further defeasible rules to reason about such things… Those rules may include not believing you if you said inconsistent things, or if others said you’d said things that you then contradicted to me etc etc… It was an interesting way to think about protocol design etc.
Interesting! I gather it didn't pay the bills?
The joys of overly academic startups 😁
I feel your pain.
My first interesting job was doing cognitive modeling in Common Lisp. US gov't research grant. That was a great few years, then it was all venture-capital-driven from there, and things got significantly less interesting.
FYI for anyone interested in using OWL: https://github.com/phillord/tawny-owl Phil Lord has several associated example projects.... And this: https://github.com/phillord/protege-nrepl