2020-10-13
@steven427 Ok, I asked some friends and colleagues about some of the bigger projects in the heritage/museums/library/arts/humanities space. Here are some more: The British Museum's catalog (this is the one I remembered but couldn't quite find): https://www.britishmuseum.org/collection/object/W_1892-0516-351-a (it looks like they've hidden or removed their public SPARQL endpoint, but the structure of the collections is clearly SKOS; I have it on good authority they're also using CIDOC, which I mentioned in the thread). Another big one is the Library of Congress: https://id.loc.gov/ Europeana, a huge cross-Europe collaboration to connect the European heritage sector through various projects based around linked data etc.: https://pro.europeana.eu/ (e.g. see one of their many big projects at https://www.europeana.eu/en and try searching for e.g. "van gogh", or Historiana here: https://historiana.eu/), plus many others… Also the British Library: https://bnb.data.bl.uk/ And all UK legislation is represented/managed as a repository of linked data, giving URI identifiers for everything, on the official site here: https://www.legislation.gov.uk/developer/uris
Ok, I asked another friend, who works for a client of ours and who used to work at the BBC on their linked data platforms more than 4 years ago… This is what he said about heritage orgs that he knows of who ha(d|ve) linked data projects, in the UK at least:
> BBC themselves, The National Archives, British Museum, National Library Wales, National Library Scotland, Rijksmuseum, Getty (thesauri for artists, geography and others), Wellcome, Archaeology Data Service, People's Collection Wales, Science Museum, University of Manchester Image Collection, Tate Gallery, BFI Archive Collections, Nature…
The BBC was a big one, obviously… their news publishing and editorial processes use linked data so journalists can cross-reference topics/articles when writing them, and also, IIRC, the Olympics and (I think) football coverage was/is done on linked data… though I could be wrong about the footy.
@rickmoynihan This is a great list! Thanks so much for taking the time to compile this. Before it's lost to the sands of Slack-is-the-worst-service-possible-for-something-like-Clojurians (:face_with_rolling_eyes:) do you know if this channel is logged somewhere?
@simongray Sorry I was not online yesterday. I’ve only just seen your comments now.
In general, I like @samir’s responses.
"foo"
and "foo"@en are different literals. In fact, for RDF 1.0, there were 3 distinct types of string:
• "foo" was a Simple Literal
• "foo"^^xsd:string was a Typed Literal
• "foo"@en was a Simple Literal with a Language Tag
All 3 were distinct, and I can't tell you the grief this caused. It was a relief when RDF 1.1 was introduced and gave all simple literals (that didn't have a language tag) a datatype of xsd:string. Those with a tag are now rdf:langString.
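(A small Clojure sketch to make this concrete; the map representation of a literal is invented for the example and is not any library's actual term type:)
```clojure
;; Hypothetical value-level view of RDF 1.1 literals, only to show that
;; a language tag makes an otherwise identical string a different term.
(def plain  {:value "foo" :datatype "xsd:string"})
(def typed  {:value "foo" :datatype "xsd:string"})                 ; RDF 1.1 folds these together
(def tagged {:value "foo" :datatype "rdf:langString" :lang "en"})

(= plain typed)  ;; => true
(= plain tagged) ;; => false
```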
In terms of SPARQL stores, there are requirements on correctness, but performance may be terrible. In general, Jena worked hard for correctness, but typically did so with naïve code. Over time, this was reimplemented for better performance.
Generally, most stores get indexed around triples, and not strings. The store I worked on (Originally called Tucana, then Kowari, and finally Mulgara) had the option to use a Lucene index on strings, and extended the query language to allow for Lucene lookups. But SPARQL was intentionally technology agnostic, so how you might implement string indexing is not considered.
For instance, a Patricia index may be used for all strings, and then any query that includes a regex could convert that operation into an index lookup. However, I'm not aware of anyone who did that (we started on this in Tucana, but lost funding). Consequently, I think that most regex queries are managed exclusively as filters… and that will never scale.
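(As a sketch of what such a rewrite could look like, in Clojure and assuming nothing more than a sorted in-memory set of strings: a regex anchored to a prefix can be answered with a range scan instead of filtering every literal. Illustrative only, not taken from any real store.)
```clojure
;; Toy sorted "string index".
(def string-index
  (into (sorted-set) ["cat" "chalk" "charm" "chat" "dog"]))

(defn prefix-scan
  "All strings starting with prefix, via a range scan rather than a full filter."
  [index ^String prefix]
  (take-while #(.startsWith ^String % prefix)
              (subseq index >= prefix)))

(prefix-scan string-index "cha")
;; => ("chalk" "charm" "chat")   ; what a regex like ^cha.* could be rewritten to
```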
As for languages… the idea of tagging is to provide semantics for a group of letters. The simple literal "chat" is just a sequence of 4 Unicode characters. However, "chat"@en has a semantic that means a conversation, and "chat"@fr has a semantic that means a male cat. These semantics were considered important to capture.
this is a great example
I still find it weird and not very ergonomic that in a system where knowledge is otherwise defined using named relations, for some reason this particular information has to be hardcoded into strings 😛 but thank you for the in-depth history lesson.
@U051N6TTC btw Paula, if I may ask, what is the end goal of Asami? The readme says it is inspired by RDF, but it doesn't really mention RDF otherwise. If I wanted to use it as a triplestore for an existing dataset, I guess I would have to develop code for importing RDF files and other necessary functionality?
That’s right, you would. Though I have an old project that would get you some of the way there
Ummm… the end goal. I only have vague notions right now. I can tell you why I started and where it’s going 🙂
It was written for Naga. Naga was designed to be an agnostic rule engine for graph databases. Implement a protocol for a graph database, and Naga could execute rules for it
I thought I would start with Datomic, then implement something for SPARQL, OrientDB… etc
But I made the mistake of showing my manager, and he got excited, and asked me to develop it for work instead of evenings and weekends. I agreed, so long as it stayed open source, which he was good with
But then he said that he wanted it to all be open source, and he wasn’t keen on Datomic for that reason. So could I write a simple database to handle it? Sure. I had only stopped working on Mulgara because I don’t like Java, so restarting with Clojure sounded like a good idea (second systems effect be damned!) 🙂
hah, ok, so it’s mainly because your manager dislikes closed source software? That is a fantastic 1st world problem to have.
you could argue that it wasn’t needed (Datomic doesn’t have one), but: a) I’d done it before b) rules could potentially create queries that were in suboptimal form. I’ve been bitten by this in the past
Some time later, he called me and asked me to port it to ClojureScript. So it moved into the browser
Since then, I’ve been getting more requests for more features. Right now it handles a LOT
It seems like a lot of work is happening in this space at the moment with Asami, Datalevin, Datahike, Datascript. Kind of exciting.
This is for backend storage. It is loosely based on Mulgara, but with a lot of innovations, and new emphasis
Honestly, if I’d known about Datascript (which had started), then I would have just used that
Anyway… I mentioned the backend storage, and several managers all got excited about it. So THAT is now my job
I’m doing the same thing on memory-mapped files. But it’s behind a set of protocols which makes it all look the same to the index code
I also hope to include other options, like S3 buckets. These will work, because everything is immutable (durable, persistent, full history, etc)
Do you see a future where a common protocol like Ring can be developed for all of these Datomic-like databases? So much work is happening in parallel.
The protocol that Naga asks Databases to implement is oriented specifically to Naga’s needs, but it works pretty well
Well, the way I’ve done it in Naga has been as a set of package directories which implement the protocol for each database. Unfortunately, I’ve been busy, so I only have directories for Asami and Datomic
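(For illustration, a minimal Clojure protocol of the general shape such an abstraction can take; the names and signatures here are invented and do not match Naga's real storage protocol:)
```clojure
;; Hypothetical graph-store protocol, plus a toy in-memory implementation.
(defprotocol GraphStore
  (resolve-pattern [store pattern]
    "Return all [e a v] triples matching pattern, where elements may be
     constants or logic variables (represented here as symbols).")
  (assert-triples [store triples]
    "Add triples, returning the updated store."))

(defrecord MemoryStore [triples]
  GraphStore
  (resolve-pattern [_ [e a v]]
    (filter (fn [[e' a' v']]
              (and (or (symbol? e) (= e e'))
                   (or (symbol? a) (= a a'))
                   (or (symbol? v) (= v v'))))
            triples))
  (assert-triples [_ new-triples]
    (->MemoryStore (into triples new-triples))))
```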
The main thing that Datascript/Datomic miss is a query API that allows you to do an INSERT/SELECT (which SPARQL has)
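(SPARQL's INSERT { … } WHERE { … } writes data derived from a query in one operation. With Datomic's API you would typically do the equivalent in two steps, query then transact, roughly like the sketch below; the :person/* attributes are hypothetical:)
```clojure
(require '[datomic.api :as d])

;; Rough Datomic equivalent of a SPARQL INSERT/SELECT:
;; query the current db, derive new assertions, transact them.
(defn insert-greetings! [conn]
  (let [db      (d/db conn)
        names   (d/q '[:find ?e ?name
                       :where [?e :person/name ?name]]
                     db)
        tx-data (for [[e n] names]
                  [:db/add e :person/greeting (str "Hello, " n "!")])]
    @(d/transact conn tx-data)))
```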
I need to get some real work done before heading “home” for today, i.e. moving from the desk to the sofa. Thanks for an interesting conversation. I’m keeping an eye on Asami (and now naga). Really interesting projects.
@U051N6TTC: Sounds like you’ve both had a very interesting career, and currently have a dream job. Most managers would never entertain the need to implement a new database; though it sounds like you’ve done it many times. :thumbsup: @UB3R8UYA1 spoke here a while back about doing something that sounded similar; providing some common abstraction across RDF and other graph stores / libraries. I definitely see the appeal; but I don’t really understand the real world use case. Why is it necessary for your business? Swapping out an RDF database for a different RDF one can be enough work as it is (due to radically different performance profiles), let alone moving across ecosystems. Or am I misunderstanding the purpose of the abstraction; is it to make more backends look like graphs? Which is a use case I totally get 👌. Regardless I’d love to hear more about your work
particularly if the library is supposed to have broader appeal than for just the team developing it
For instance… there is no need for Asami to have a SPARQL front end, but it’s a ticket, because I’d like to make it more accessible to people
yeah ok that’s fair
I don’t know how you could live with yourself… 😆
ahh well in that case… I don’t know how you could live with yourself 😁
If you don’t mind me asking, if you could re-live being on that committee, knowing what you do now, what would you do differently?
Well, it was a learning experience for me. A number of interests were on the committee to push the standard in a direction that most suited their existing systems. So rather than introducing technical changes, or working against specific things, I would have focused more on communication with each member of the committee. Not that I think I did a terrible job, but I could have done better
From a technical perspective, I would have liked to see a tighter definition around aggregates, with algorithmic description.
But that’s just because I find a bit of flexibility in some of the edge cases there. Also, having a default way to handle things, even if they’re not the ideal optimized approach, would have been nice to have
That said, that’s essentially what Jena sets out to do. They try to be the reference implementation, and they most certainly don’t take the optimized approach
The early versions of Jena saved triples as a flat list, and resolved patterns as filters against them 😖
Andy had some long conversations with me about Mulgara’s storage while he was planning out Fuseki
Also @rickmoynihan: > Sounds like you’ve both had a very interesting career, and currently have a dream job Yes! I have certainly been spoiled! I honestly don’t know how I have managed to keep coming back to these things, but I’m happy that I have. Of course, I’ve done other things in the between, but even those can be informative (for instance, I’ve had opportunities to work with both Datomic and OrientDB)
Oh! I just thought of something I could have mentioned in the SPARQL committee that continues to frustrate me… transactions!
It’s possible to send several operations through at once. e.g. An insert; an insert/select; a delete. But there are limits on what you can manage there. There are occasions where transactions are important.
Datomic is frustrating that way too, because Naga needs it. (I manage it by using a with database, and once I'm done, I replay the accumulated transactions with transact.)
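(A sketch of that workaround against the Datomic API: apply each batch speculatively with d/with, accumulate the tx-data, and only commit it all at the end with d/transact. Attribute names in the commented example are hypothetical.)
```clojure
(require '[datomic.api :as d])

(defn speculate-then-commit!
  "Run tx-batches against successive speculative dbs from d/with,
   then replay everything in a single real transaction."
  [conn tx-batches]
  (let [[_db accumulated]
        (reduce (fn [[db acc] tx-data]
                  (let [{:keys [db-after]} (d/with db tx-data)]
                    [db-after (into acc tx-data)]))
                [(d/db conn) []]
                tx-batches)]
    @(d/transact conn accumulated)))

;; e.g. (speculate-then-commit! conn [[{:person/name "Simon"}]
;;                                    [{:person/name "Paula"}]])
```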
@U051N6TTC: fascinating, I agree it would have been nice to have a standard for transactions.
Especially when the original intent of RDF was to provide semantic linkages (hence the name, “Semantic Web”)
Also, on some specific questions:
> implemented languages as equality-distorting aspects of string literals
Languages change the value. You can consider that as “equality-distorting”, but it can be avoided. For instance…
> If I am to query for my own name in an RDF resource how should I refer to it? “Simon”@en, “Simon”@da, and 6000 other entries?
Your query could include:
WHERE { ?me foaf:name ?name . FILTER(str(?name) = "Simon") }
A good implementation (and I'm not saying that your SPARQL store will be) could turn that FILTER operation into an index lookup.
Jena never used to do that, but they may have updated lately. This might be an excuse for me to check in and see how Andy is doing 🙂
To push the argument further, the concept of equality is quite complex, as you can see in https://clojure.org/guides/equality . RDF gives no special treatment to equality: AFAIK two terms are equal when they have identical long notation. SPARQL, being a query language, makes some decisions regarding equality in some functions. To me it feels like a good compromise, as the goal of the semantic web is to enable the articulation of arbitrary knowledge and data domains.
Yeah, it's the lexical form that, strictly speaking, should be used, in combination with the datatype URI, language tag, etc.
Though some stores will do some implicit coercions; e.g. Stardog will by default canonicalise various numeric types (e.g. xsd:byte into xsd:integer) unless you switch that off.
https://www.w3.org/TR/rdf-concepts/#section-Literal-Equality