This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
:thumbsup: @eric.d.scott I’m curious what your plans are for the above and the other ont-app projects. It seems like you’re taking similar approaches, or at least trying to solve some of the same problems as I am.
Well, the motivation for the project is that I think there could be a really good fit between Clojure's functional programming paradigm and graph-based representations that use an ontology-driven approach.
I spent several years working in Python with semantic technologies, and when I came back to Clojure, I had this feeling that S-P-O-type graphs could make a really expressive sort of 'super-map'.
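To make the 'super-map' intuition concrete, here is a minimal sketch (the shape and names are mine, not from any of the libraries discussed): a subject-predicate-object graph represented as a nested Clojure map of subject → predicate → set of objects, so ordinary map operations double as graph operations.

```clojure
;; An S-P-O graph as a nested map: subject -> predicate -> #{objects}.
(def g
  {:john {:isa   #{:person}
          :likes #{:beer :cheese}}
   :mary {:isa   #{:person}
          :likes #{:tea}}})

;; Plain map access doubles as graph traversal:
(get-in g [:john :likes])
;; => #{:beer :cheese}

;; Asserting a triple is just an update into the nesting:
(defn add-triple [g s p o]
  (update-in g [s p] (fnil conj #{}) o))

(add-triple g :mary :likes :coffee)
```

The appeal is that the whole graph stays an immutable value, so all the usual persistent-map machinery (`assoc`, `merge`, structural sharing) applies directly.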
> S-P-O-type graphs could make a really expressive sort of ‘super-map’

This is definitely my experience too. It’s why I wrote Matcha.
Also, I think other people have had similar ideas; cgrand spoke about this at ClojureX last year — IIRC he called it “map fatigue”, and was suggesting Datalog approaches. Obviously not a new idea, e.g. Datomic/DataScript, particularly in the browser, etc. Datalog is almost SPARQL though, right?
Yes I remember seeing Matcha and making a mental note to revisit it after I finished my thought with igraph.
The main use case Matcha was made for is providing BGP (basic graph pattern) queries over in-memory RDF graphs.
Typically we use it after CONSTRUCTing a subset of graph data from the triplestore with SPARQL; then we throw it into a Matcha graph and query that model a bunch of times to build out a UI/tree.
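For readers unfamiliar with BGP queries, here is a minimal sketch of basic graph pattern matching over an in-memory triple set — illustrative only, and deliberately not Matcha's actual API: patterns are `[s p o]` vectors where `?`-prefixed symbols are variables, and matching a pattern list is a left-to-right join over bindings.

```clojure
;; Triples as a set of [subject predicate object] vectors.
(def triples
  #{[:s1 :rdfs/label "Widget"]
    [:s1 :ex/price   10]
    [:s2 :rdfs/label "Gadget"]})

(defn lvar? [x]
  (and (symbol? x) (.startsWith (name x) "?")))

(defn match-pattern [bindings pattern triple]
  ;; Try to extend `bindings` so `pattern` matches `triple`; nil on failure.
  (reduce (fn [b [p t]]
            (cond
              (lvar? p) (if-some [v (b p)]
                          (if (= v t) b (reduced nil))
                          (assoc b p t))
              (= p t)   b
              :else     (reduced nil)))
          bindings
          (map vector pattern triple)))

(defn match-bgp [triples patterns]
  ;; Join the patterns left to right, like a basic graph pattern.
  (reduce (fn [solutions pattern]
            (for [b solutions
                  t triples
                  :let [b' (match-pattern b pattern t)]
                  :when b']
              b'))
          [{}]
          patterns))

(match-bgp triples '[[?s :rdfs/label ?label]
                     [?s :ex/price  ?price]])
;; one solution, binding ?s to :s1, ?label to "Widget", ?price to 10
```

The CONSTRUCT-then-query pattern described above amounts to running this kind of matcher many times over a small, already-fetched graph instead of going back to the triplestore.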
I'm kind of looking for a common abstraction over a whole range of graph-based representations. Right now I'm in the early stages of sussing out an approach to neo4j.
At some point, I'm hoping that some common approach can be developed for bringing all these disparate query languages under the same tent.
> At some point, I’m hoping that some common approach can be developed for bringing all these disparate query languages under the same tent.

This is what I don’t really understand; for what purpose?
The idea of this protocol is to serve as an abstraction over a variety of graph implementations.
Naga was written specifically to be agnostic over databases like that. (The original goal was to switch between Datomic and SPARQL, and expand from there)
The various IFn forms allow some degree of generalization, but if you want to take a common view of Datomic-based and, say, Wikidata-based content, you have to write Datalog for one and SPARQL for the other.
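A sketch of the kind of protocol-based abstraction being described might look like this — the names here are hypothetical for illustration, not igraph's or Naga's actual API: a single `match` operation that each backend implements in its own query language.

```clojure
;; A hypothetical graph protocol; nil in a pattern position is a wildcard.
(defprotocol IGraphLike
  (all-triples [g] "All [s p o] triples in the graph.")
  (match [g s p o] "Triples matching the pattern."))

;; One trivial implementation over an in-memory set of triples.
(defrecord SetGraph [ts]
  IGraphLike
  (all-triples [_] ts)
  (match [_ s p o]
    (filter (fn [[ts' tp to]]
              (and (or (nil? s) (= s ts'))
                   (or (nil? p) (= p tp))
                   (or (nil? o) (= o to))))
            ts)))

(match (->SetGraph #{[:a :likes :b] [:a :isa :person]})
       :a :likes nil)
;; => ([:a :likes :b])
```

The point of the abstraction is that another record could satisfy the same protocol by translating `match` into Datalog against Datomic or SPARQL against Wikidata, and calling code wouldn't change.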
So the idea is that you sit down with your domain expert and systematically name the pertinent set of collections and relationships, but instead of parlaying that into a set of Java classes, you use this vocabulary to describe your application state as a graph or perhaps a set of graphs.
RDF graphs make sense for many cases, but say, Datomic and Neo4j make sense for others.
There may even be advantages to viewing table-based data or web APIs using the same basic abstraction.
The way I view things, if you have such an encoding in RDF, you can then model graph relations, or other types of relations, on top of it. How e-a-v is ordered and indexed makes this possible.
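To illustrate why e-a-v ordering and indexing matters (a hedged sketch, not any particular store's implementation): keeping the same facts indexed by entity, by attribute, and by value turns each access pattern into a direct lookup.

```clojure
;; The same facts, as [entity attribute value] tuples.
(def facts [[:e1 :name "Ada"]
            [:e1 :knows :e2]
            [:e2 :name "Bob"]])

;; Three indexes over identical data (Datomic-style EAVT/AEVT/VAET in spirit):
(def eav (group-by first       facts))  ; entity    -> its facts
(def aev (group-by second      facts))  ; attribute -> its facts
(def vae (group-by #(nth % 2)  facts))  ; value     -> its facts

(get eav :e1)    ;; everything about :e1
(get aev :name)  ;; every :name assertion
(get vae :e2)    ;; everything pointing at :e2 (reverse references)
```

The reverse (value-first) index is what makes "who points at this entity?" cheap, which is exactly what modeling graph relations on top of the encoding relies on.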
What I understand you're talking about is a generic interface that gives you the ability to plug into different systems to manipulate the data. Which, in other words, means that the data are "viewed" in different formats by the (db) systems, but can be manipulated in other formats too, because this generic interface allows for it. Which in turn reminds me of the "Polyglot Data" talk by Greg Young — i.e., the database system is a projection of the data; data can have multiple projections but only a single source.
It is mixed with the whole event-sourcing paradigm (although there is good reason for that: reducing complexity through immutability), but the concept, I think, is similar. From what I understand, you have very similar goals.
- Greg says that there is a single source of data, and adapters expose the data in different formats through the different DBMS systems.
- You say that there is a common interface; an API that allows you to talk to the different systems as if the data below them had the same format.

He centralizes on the data (data-first); you centralize on the interface (function-first). There are pros and cons to those choices that could make for a nice discussion: who is lifting the weight in each case, which way provides more flexibility in case something is found to be "incompatible", etc.