This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2022-01-25
Channels
- # announcements (3)
- # asami (63)
- # babashka (5)
- # babashka-sci-dev (32)
- # beginners (56)
- # calva (2)
- # cider (28)
- # clj-commons (9)
- # clj-kondo (16)
- # cljdoc (41)
- # cljs-dev (19)
- # clojure (67)
- # clojure-europe (15)
- # clojure-nl (1)
- # clojure-poland (1)
- # clojure-uk (2)
- # clojurescript (27)
- # community-development (10)
- # data-science (2)
- # datascript (8)
- # datomic (21)
- # events (3)
- # fulcro (54)
- # graalvm (18)
- # introduce-yourself (2)
- # juxt (3)
- # lsp (6)
- # music (1)
- # nextjournal (8)
- # off-topic (44)
- # omni-trace (1)
- # reitit (13)
- # releases (3)
- # rewrite-clj (4)
- # shadow-cljs (10)
- # spacemacs (6)
- # sql (12)
- # tools-build (17)
- # tools-deps (3)
- # web-security (1)
I maintain a library called pyramid which provides the ability to do Datomic pull-ish queries on Clojure maps, with some extra add-ons. I recently added the ability to extend the pull query engine via protocols to other types. Here's an example of extending it to DataScript db values: https://github.com/lilactown/pyramid/blob/main/scratch.clj#L107-L120
the query syntax is a little different than datomic pull. it's based on https://github.com/edn-query-language/eql which is a standard used by several other libraries, e.g. pathom
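for anyone following along, a minimal sketch of what that looks like, assuming the pyramid.core namespace with db and pull as in the project README (treat the exact names as my assumption, not gospel):

```clojure
(require '[pyramid.core :as p])

;; index a tree of maps into a normalized db, keyed by identity attributes
(def db
  (p/db [{:person/id   0
          :person/name "Rachel"
          :friend/list [{:person/id 1 :person/name "Marco"}]}]))

;; EQL query: join on the lookup ref, select attributes, walk the nested ref
(p/pull db [{[:person/id 0] [:person/name {:friend/list [:person/name]}]}])
;; returns a map of the selected attributes, with the nested ref resolved
```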
I know that asami doesn't have a pull API yet. I was thinking of working on a separate package to extend pyramid to asami, wdyt?
I’ve been taking a break from Asami since late last year (just… tired, so I’ve been doing fun things for a little while), but I need to get back into it myself
This is why https://github.com/quoll/remorse exists.
I’m actually working on extending cljs-math to implement BigInteger (as a stepping stone to implementing BigDecimal)
y'know, some people play golf. some people read the IEEE floating point spec and implement a cross-platform math library
the one thing pyramid needs is the ability to take a lookup ref, like [:person/id 1], and turn that into a map representing that entity. it doesn't need to be deep, as long as it has refs contained in the data. does asami have an API for doing that?
asami.graph/resolve-triple (part of the Graph protocol) is the best way of doing this. For in-memory graphs, it turns into a map lookup. For local storage, it does the appropriate lookups in the maps and converts the data into what’s appropriate.
The Graph protocol is a reasonably direct mechanism for accessing storage. “reasonably” is the caveat here because it does run a function across the args to map it into a pattern for looking up the correct function
Thinking about it… it may be possible to have the multimethod dispatch to named functions, so that use cases (like this) that know exactly which function they want can skip the dispatch step
ok, there's no rush on this at all 🙂 company-wide meeting this morning so I'm multitasking 😂
asami.index/empty-graph
It’s an object. Just assert statements into it and you have your graph.
Databases and Connections are wrappers around this. If you want a Database object, then it comes with a Connection (sorry). You can do the wrapping with: asami.core/as-connection
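a sketch of that wrapping (the exact arity of as-connection is my assumption; check the asami docs):

```clojure
(require '[asami.core :as a]
         '[asami.index :as i])

;; start from a bare in-memory graph and wrap it in a Connection
(def conn (a/as-connection i/empty-graph "asami:mem://scratch"))

;; a Database object then comes via the connection
(def db (a/db conn))
```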
There’s a diagram that explains what Databases and Connections are doing to wrap graph objects: https://github.com/threatgrid/asami/wiki/Dev:-1.-Code-Layout#asamimemory
You’ll see that Connection and Database are actually really light. The history vector looks like it’s large, but it’s actually just keeping pointers to each Database, which are updates to previous Databases, which in turn point to Graph instances that are updates to previous Graph versions. These are immutable objects with structural sharing, so it’s not expensive to keep that history
hmm asami's information model is very different than datascript & pyramid. have to think a bit how I want to bridge the two
I worked very hard to keep to the sort of semantics that we usually see from immutable data structures. The same happens with durable storage
One unexpected side-effect is that historical graphs can be added to. This is explicitly prevented in durable storage, but there is no reason it can’t happen.
I didn’t think anyone would care about it, but people have expressed interest in treating it like git. i.e. multiple branches
Do Datascript or Pyramid allow for something like that @U4YGF4NGM?
I'm not sure. I don't think that datascript has anything to reason about automatically reconciling changes, other than transacting new data.
pyramid is all about indexing data into and selecting data out of maps. it's not really meant to completely replace something like asami
you'd need some strategy for reconciling conflicts over time, a la CRDTs, in either case. def outside the purview of datascript and pyramid
This is exactly why it’s interesting. Having a different model allows for doing different things 🙂
in datascript and pyramid, there's a schema which allows someone to assert that [:person/id 0] is a reference to some entity. usually it also has an index to look it up quickly.
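e.g. in DataScript that looks like this (standard DataScript API):

```clojure
(require '[datascript.core :as d])

;; :person/id is declared unique so it can back lookup refs,
;; and :person/friend is declared as a reference attribute
(def schema
  {:person/id     {:db/unique :db.unique/identity}
   :person/friend {:db/valueType :db.type/ref}})

(def db
  (d/db-with (d/empty-db schema)
             [{:person/id 0 :person/name "Rachel"}]))

;; the lookup ref [:person/id 0] resolves through the unique index
(:person/name (d/entity db [:person/id 0]))
;; => "Rachel"
```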
asami being schemaless obviously doesn't have such a thing. makes it trickier to load and query the same data that I would in pyramid and ds
When I return to it, I have a project that I’m working on that I want to finish, but then the next thing I thought I should pick up was working on schemas
a pull query needs a place to "start from." I guess you could do something like
[{[:db/ident :tg/node-26575] [:person/id :person/name]}]
For now, I’ve been planning on temporary schemas (they apply during transactions). But I can store them too, then load and enforce them if they’re present
Well, I’d just go with the presumption that your identifying property is unique. The pull operation then retrieves it, and then just works with the first one returned. If you broke your implicit schema and added more than one thing, then you’ll be getting a random object, but if you stick to your own rules, then it’ll work without incident
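so on the asami side, resolving a lookup ref could be as simple as this sketch (ref->node is a hypothetical helper, built on asami.graph/resolve-triple):

```clojure
(require '[asami.graph :as ag])

(defn ref->node
  "Resolve a [attr value] lookup ref to a node id.
  Assumes attr is unique per the implicit schema, so we
  just take the first binding resolve-triple returns."
  [graph [attr value]]
  (ffirst (ag/resolve-triple graph '?e attr value)))

;; e.g. (ref->node (a/graph (a/db conn)) [:person/id 0])
```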
asami's schemaless-ness is interesting and makes it stand out from other dbs like datomic, datascript and its derivatives
Schemas don’t matter too much, except if you’re putting in data that is unique for an object. In that case, you need to see if the property already exists, and if so, issue a delete/insert to replace it. Right now, that’s controlled by an annotation on the attribute, but if the schema is described in a structure, then just look inside the structure instead of looking for the annotation.
I was planning on providing a schema as an attribute on the tx-data map (datomic documents that it’s a map, but the only field they support is tx-data. Why not allow for more? 🙂 )
Regardless, I plan to: a) keep schemas optional b) default everything to multi-arity c) default to untyped attributes
so what I have so far then is (scratch code):
(ag/resolve-triple (a/graph (a/db conn)) '?e :person/id 0)
;; => ([:tg/node-26575])
(ag/resolve-triple (a/graph (a/db conn)) :tg/node-26575 '?a '?v)
;; => ([:person/id 0] [:person/name "Rachel"] [:tg/owns :tg/node-26581] [:tg/owns :tg/node-26583] [:tg/owns :tg/node-26585] [:tg/owns :tg/node-26577] [:tg/owns :tg/node-26576] [:tg/owns :tg/node-26579] [:friend/list :tg/node-26576] [:db/ident :tg/node-26575] [:tg/entity true])
I think I'll elide the :tg/owns attributes. I then need to look through each value and discern if it's a reference to another node or not, and if so resolve that
I’d filter out the :tg/owns properties. They’re internal book-keeping to connect to nested objects
in this case, :friend/list is a collection, not an entity. so I'd need to look up the value of :tg/node-26576 and determine that it's a collection, and then resolve the collection
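roughly what I have in mind (the filter set and the node? heuristic are my assumptions, not asami API; collection handling is still TBD):

```clojure
(require '[asami.graph :as ag])

(defn node?
  "Heuristic: asami allocates internal nodes in the :tg namespace,
  e.g. :tg/node-26575."
  [v]
  (and (keyword? v) (= "tg" (namespace v))))

(defn entity-map
  "Shallow entity map for a node, dropping asami's book-keeping
  attributes. Values satisfying node? are refs (or collections)
  that a pull engine would resolve in a later pass."
  [graph node]
  (into {}
        (remove (fn [[a _]] (#{:tg/owns :db/ident :tg/entity} a)))
        (ag/resolve-triple graph node '?a '?v)))
```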