@quoll: FYI I think there’s a small bug in the naga README. The example code will raise an `Unknown storage configuration` error. I managed to fix it locally by adding the line:
`(naga.store-registry/register-storage! :asami naga.storage.asami.core/create-store)`
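Putting that fix together, the top of the README example would look something like this (a sketch only; the namespace names come from the fix above, and the exact require forms depend on your Naga version):

```clojure
;; Namespaces taken from the fix above. Without the registration call,
;; creating an :asami store raises "Unknown storage configuration".
(require '[naga.store-registry :as store-registry]
         '[naga.storage.asami.core :as asami-storage])

;; Register the Asami storage factory under the :asami key.
(store-registry/register-storage! :asami asami-storage/create-store)
```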
No; when I ran the example code I’d already required a different namespace, and I forgot to include it in the example.
Incidentally, is it possible to do essentially what is in the README, but without using the connection management and mutable database stuff? i.e. to manage Asami and Naga as pure values myself, or at least put them in atoms I control?
I just added a comment too. That extra line loads the Asami connector:
- It registers the factory function
- It extends Asami connections to the ConnectionStore protocol
- It implements the Naga Storage protocol
I’ve kinda pushed the value management into the Connection. The Connection actually refers to all the old values of the database, as well as the latest.
So if you called `(asami/db connection)` before running Naga on it, then you’ll get the latest value of the database. Afterward, if you use `asami/as-of` you can still get that same value. It’s actually transparent inside the Connection object: there’s a vector of every database value.
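For concreteness, here's a sketch of that flow using Asami's `asami.core` API (the `as-of` argument shown is an assumption; it accepts a transaction number or timestamp, so check the form against your Asami version):

```clojure
(require '[asami.core :as asami])

;; An in-memory connection; it holds the vector of database values.
(def conn (asami/connect "asami:mem://naga-example"))

;; Capture the database value *before* transacting anything new.
(def db-before (asami/db conn))

;; A transaction (such as one Naga performs when running rules)
;; conj-es a new database value onto the connection's vector.
@(asami/transact conn {:tx-data [{:name "A" :follows "B"}]})

;; The connection now hands back the newest value...
(def db-after (asami/db conn))

;; ...but the old value is still reachable, e.g. via as-of
;; (here by transaction number).
(def db-orig (asami/as-of db-after 0))
```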
This seemed to be the most sensible way to manipulate the database. After all, Datomic follows the same paradigm: transactions are executed against a connection, and the new database values they create can be retrieved from that connection.
I was thinking more of transient use cases, where you just want to compute a value: e.g. load some triples into Asami, expand the graph with Naga rules, and then spit the data or a query result out, without having to engage in resource management etc.
e.g. possibly also in the context of an HTTP request, i.e. querying data out of a SPARQL triple store with CONSTRUCTs, but using Asami (perhaps with Naga) in place of a Jena/RDF4J model to build a response.
In that case, I would just use a graph URI with `asami:mem://` for the scheme. It’s basically doing exactly what you just said. (The `asami:local://` scheme is still a work in progress, so you HAVE to use `asami:mem://` for now anyway.)
There’s no “resource management”, except that the connection holds an atom for the vector of DBs. When you do a transaction like Naga does, it just calls `update` on the maps that make up the latest DB, and does a `conj` to the vector in the connection’s atom.
My colleagues are doing this all the time. Create a memory graph, throw data into it, and use queries to pull out exactly what they want. Then they throw it all away. 😱
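That whole throwaway workflow might look like this (a sketch; the entity attributes and graph name are made up for illustration, and `delete-database` is optional for `asami:mem://` graphs since they live only in memory):

```clojure
(require '[asami.core :as asami])

;; Create a throwaway in-memory graph.
(def conn (asami/connect "asami:mem://scratch"))

;; Throw some data into it (entity-map transaction form).
@(asami/transact conn {:tx-data [{:db/ident "cat"
                                  :skos/prefLabel "Cat"}]})

;; Pull out exactly what we want with a query.
(def labels
  (asami/q '[:find ?label
             :where [?e :skos/prefLabel ?label]]
           (asami/db conn)))

;; Then throw it all away.
(asami/delete-database "asami:mem://scratch")
```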
@quoll: One other thing: it looks like the Pabu parser silently fails on the `--` comments in the SKOS datalog example you pasted me. Swapping them out for the C-style ones seems to at least convert the program string into data (I haven’t got to trying to run it yet), but should I expect it to work in Naga?
You should, but I haven’t done much with Pabu for a long time. I thought I handled those comments; sorry!
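For reference, the workaround described above looks something like this (assumptions: `naga.lang.pabu/read-str` is the string-parsing entry point, and the SKOS rule shown is an invented stand-in for the pasted example; check both against your Naga version):

```clojure
(require '[naga.lang.pabu :as pabu])

;; A Pabu program using a C-style comment, which the parser accepts.
;; The same program with a `-- ...` comment currently fails silently.
(def program
  "/* broader is transitive */
   broader(A, C) :- broader(A, B), broader(B, C).")

;; Parse the program string into rule/axiom data.
(def parsed (pabu/read-str program))
```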