This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2021-05-04
Channels
- # announcements (1)
- # asami (61)
- # babashka (71)
- # beginners (170)
- # biff (1)
- # calva (14)
- # clj-kondo (23)
- # cljsrn (28)
- # clojars (1)
- # clojure (152)
- # clojure-australia (2)
- # clojure-europe (65)
- # clojure-nl (2)
- # clojure-spec (8)
- # clojure-sweden (3)
- # clojure-uk (45)
- # clojurescript (1)
- # css (12)
- # cursive (16)
- # datomic (9)
- # devcards (2)
- # emacs (1)
- # events (1)
- # graalvm (31)
- # honeysql (10)
- # jackdaw (2)
- # jobs (5)
- # lambdaisland (9)
- # lsp (4)
- # malli (11)
- # meander (43)
- # off-topic (6)
- # pathom (7)
- # polylith (1)
- # portal (14)
- # re-frame (7)
- # releases (1)
- # remote-jobs (1)
- # rewrite-clj (6)
- # shadow-cljs (101)
- # specter (1)
- # tools-deps (26)
- # vim (9)
- # xtdb (2)
Question for people who use Naga… Do you only use Naga with Asami? Is there any desire to use it with any other systems?
Naga was originally built to be database agnostic. The first iteration ran on Datomic, though I planned to expand it to SPARQL, OrientDB, and more
But then I was asked to make my own open source DB to use, which is how Asami came to be.
I’ve reached a point where I could do more with rules if I know that I can use internal features of Asami. It’s also annoying having to keep Naga in sync with Asami (an Asami release means a trivial new Naga release with the new dependency)
I'm on the fence. I have considered trying to rig up naga and crux, but I'm not actually doing it. I wanted to react to incoming data, and insert additional derived data as a result of that. I'm not even sure if it's really possible.
Yes, Naga is about inferring from existing data. What you’re talking about is very possible, but it needs a more integrated approach.
oh, I meant the whole pipeline with Crux. Crux is a special-case because I can do it transactionally as data comes in. With datomic, etc. it was less appealing because the information would be sometimes out of date. But that's just the kind of application I build.
Doing a RETE engine over Datomic would be painful, because you need to be in the transaction pipeline, but there is no hook for that.
There seems to be a desire to do rule processing on the input stream for Asami, so I may need to integrate there too
👋 Not actually doing this yet; but I was considering running naga over some RDF stores, e.g. rdf4j’s native store, and possibly arbitrary sparql repositories
That’s an adapter that I always intended to write. It’s where all of this started, after all
yeah, it looked like it would be relatively easy to add to what is there already. I think the bulk of it would be coercing the naga keywords into URIs via a registry of prefixes
If using the rule language, then yes. But if using the API, then it’s not needed, since URIs can be provided directly
yeah, agreed, I was meaning via pabu
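The prefix registry idea mentioned above could be sketched as a simple keyword-to-IRI expansion in Clojure. All names here are hypothetical and illustrative only, not part of Naga's or Asami's actual API:

```clojure
;; Hypothetical sketch: a registry mapping prefix strings to base IRIs,
;; used to coerce namespaced keywords like :rdf/type into full IRIs.
(def prefixes
  {"rdf" "http://www.w3.org/1999/02/22-rdf-syntax-ns#"
   "dc"  "http://purl.org/dc/elements/1.1/"})

(defn keyword->iri
  "Expand a namespaced keyword (e.g. :rdf/type) into a full IRI string,
  using the prefix registry. Throws if the prefix is unregistered."
  [kw]
  (if-let [base (get prefixes (namespace kw))]
    (str base (name kw))
    (throw (ex-info "Unknown prefix" {:keyword kw}))))

;; (keyword->iri :rdf/type)
;; => "http://www.w3.org/1999/02/22-rdf-syntax-ns#type"
```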
The rule language (Pabu) was literally a 30 minute hack just so I had a quick way to write and modify rules :rolling_on_the_floor_laughing:
I guess, it’s almost good enough 😆
Pabu was also a good opportunity to learn about parser combinators! (I used to work with Nate Young who wrote the Parsatron)
I just like that it’s so similar to prolog
though it probably does cause me more grief than using the API
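For readers who haven't seen it, Pabu rules read very much like Prolog Horn clauses. A hedged sketch (the predicate names are invented for illustration, not taken from any real ruleset):

```prolog
% Illustrative only: Pabu rules look like Prolog clauses.
% Derive an uncle relationship from parent and sibling facts:
uncle(X, U) :- parent(X, P), sibling(P, U).

% Facts use the same predicate syntax:
parent(fred, george).
sibling(george, barney).
```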
yeah, which again is nice for me because I’m using RDF 🙂
though if I were to actually use any of the rules I’ve written I’ll need to coerce them into URIs on whatever backend I use (Jena/RDF4j)
yeah I was going to mention that; though you’ll need a way to register / hook in URI constructors
though that sort of thing might be better bolted on rather than built in
That would work, though it would break it being a subset of prolog.
You could also represent it as a special predicate:
pabu:prefix(qb,"
Or I guess put the prefix blocks in another file.
I thought I already broke the subset of Prolog part by accepting QNames? Or does Prolog already allow `:` characters in its atoms?
Yeah, you can have `:` in prolog atoms and predicates too. At least it’s supported in SWI prolog, I suspect most other prologs too.
I tried finding a canonical reference for you, but most descriptions of the syntax are informally specified. Unfortunately the ISO standard is paywalled.
ISO prolog was, I believe, largely derived from Edinburgh prolog… but I can’t find any good references there either. I think most implementers don’t care much for ISO prolog tbh
Do you mean literally that? Or putting it in a pabu comment? e.g.
%@prefix rdf: <> .
It would just be a new tag https://www.swi-prolog.org/pldoc/man?section=tags
Incidentally, the standards body added SPARQL-like PREFIX support to turtle 1.1, in addition to @prefix
So arguably, because of SPARQL, `PREFIX rdf: <` is more well known.
Obviously that won’t align with a swipl comment tag
Incidentally I think those tagged comments might only be expected inside /** comment blocks */
honestly, I always forget which syntax does which. I copy/pasted and focused either on the data or on the queries.
Yeah me too… it’s just frustrating when you copy/paste a turtle `@`-style one into a SPARQL query 😩 or you skip the `@` but leave the `.`
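For anyone following along, the two prefix syntaxes being mixed up here differ only in the leading `@`, the lowercase/uppercase keyword, and the terminating `.`:

```turtle
# Turtle @prefix style: lowercase keyword, leading @, trailing dot
@prefix dc: <http://purl.org/dc/elements/1.1/> .

# SPARQL / Turtle 1.1 PREFIX style: no @, no trailing dot
PREFIX dc: <http://purl.org/dc/elements/1.1/>
```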
Go to the SPARQL query doc, and the first appearance of a prefix syntax is in section https://www.w3.org/TR/sparql11-query/#docDataDesc:
> This document uses the Turtle (http://www.w3.org/TR/turtle/) data format to show each triple explicitly. Turtle allows IRIs to be abbreviated with prefixes:
@prefix dc: < > .
@prefix : < > .
:book1 dc:title "SPARQL Tutorial" .
IKR 😩
TBH I might mention that on the SPARQL 1.2 issues… they should really axe that `@prefix dc: ....` stuff from the document and use the turtle 1.1 style to help prevent confusion.
If you recall the talk https://youtu.be/oyLBGkS5ICk?list=PLZdCLR02grLofiMKo0bCeLHZC0_2rpqsz from the 2016 Conj, he said that you shouldn’t axe anything 🙂
Yeah I’m definitely not suggesting axing support for it all, it should remain standardised in turtle.
I’m just suggesting that the SPARQL 1.2 sample data should be updated to use the turtle 1.1 feature (of SPARQL-style `PREFIX` blocks); i.e. a small step toward preventing the confusion.
Granted it might further add to that confusion; but it will at least mean people copy/pasting example data in that document into sparql etc won’t trip up.
Anyway this is an irrelevance 🙂