#datomic
2018-11-29
souenzzo00:11:52

Which Java version is used in Ions? Should we care about that? How are updates handled? Will the new licensing model from Oracle affect Ions somehow?

lwhorton00:11:09

how close can one get to datomic using triggers into audit tables in a sql relational db? 💥 i’m trying to survey the landscape to see what giant gaping holes i’m going to encounter by not using datomic to manage time

lwhorton00:11:16

some immediate issues i can see:
1. queries into the audit tables are just plain going to stink
2. a proliferation of CREATE OR REPLACE FUNCTION {trigger_fn} for each table; those trigger functions are ad-hoc and will change each time the schema evolves
3. trigger functions themselves don’t keep around a history, so how does one query older versions of a schema? you probably have to keep versions of these functions around (somewhere, maybe in git?)
4. at what point does performance become an issue, double-dipping on every transaction? and how large can an audit table grow before we’re in trouble?
5. a lot of ad-hoc decisions to make: which extra fields to track in an audit table, tracking “who updated” certain rows (can you even track who updated at the column level?). easy to get wrong, very punitive if you get it wrong

val_waeselynck09:11:33

@U0W0JDY4C This is a bit similar to 1., but just because you have a sequence of changes doesn't mean you have much leverage over it. Datomic gives you an indexed, relational, persistent data structure - you get consistent, queryable snapshots of the db at every point in time. This offers much more leverage than just "keeping all past versions of each record".
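A minimal sketch of what that leverage looks like with the peer API (the db URI and the `:user/email` attribute are hypothetical placeholders):

```clojure
(require '[datomic.api :as d])

(def conn (d/connect "datomic:mem://example")) ; hypothetical URI

;; Capture a basis point, then get a consistent snapshot of the
;; whole db as of that point in time:
(def t (d/basis-t (d/db conn)))
(def db-then (d/as-of (d/db conn) t))

;; Query the full history: every assertion and retraction of
;; :user/email, along with the transaction it happened in.
(d/q '[:find ?e ?v ?tx ?added
       :where [?e :user/email ?v ?tx ?added]]
     (d/history (d/db conn)))
```

With audit tables you would have to hand-roll both the "as of" view and the history query per table; here they fall out of the data model.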

val_waeselynck09:11:26

This article I wrote recently may help with this line of thinking; although it's not about audit triggers, I believe similar issues arise: https://vvvvalvalval.github.io/posts/2018-11-12-datomic-event-sourcing-without-the-hassle.html

lwhorton16:11:15

:thumbsup: will take a look

lwhorton00:11:21

6. past a certain volume of data is it possible to copy the data and its history accurately into other sources (e.g. for analysis queries)?

lwhorton00:11:48

7. (more generally) i’m going to miss the flatness of a universal schema
8. similar to 5 and 7, there are a whole lot of up-front decisions around time modeling
9. i foresee problems with pagination as it relates to consistency

hueyp01:11:05

is there a way to wrap a datomic database? I implemented datomic.Database and proxied to the underlying datomic.db.Db, but datomic.api/q is not happy with it 🙂

kenny03:11:49

It sounds just like a lib I wrote to use the Datomic peer mem db with the Client API: https://github.com/ComputeSoftware/datomic-client-memdb

hueyp02:11:21

that looks promising 🙂

hueyp02:11:10

hm, I’m using the datomic.api vs datomic.client.api but still gonna check it out .. thanks!

steveb8n02:11:24

it has worked great for me. I added the Pedestal interceptor lib to it and now have cross-cutting / middleware in all my db api calls

Dustin Getz03:11:07

wait what did you do exactly?

steveb8n04:11:24

you can use pedestal interceptors as a standalone middleware lib i.e. add middleware to anything

steveb8n04:11:36

so I used it to add middleware to datomic api calls

steveb8n04:11:53

imagine api call logging/ops

steveb8n04:11:09

transparent query transformations

steveb8n04:11:14

result filtering etc
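A rough sketch of the idea: Pedestal's interceptor chain can be run standalone, outside any HTTP context, so cross-cutting concerns wrap plain function calls. The interceptor names and the logging behavior below are illustrative, not from steveb8n's actual code:

```clojure
(require '[datomic.api :as d]
         '[io.pedestal.interceptor :as i]
         '[io.pedestal.interceptor.chain :as chain])

;; Hypothetical cross-cutting interceptor: log every query and result size.
(def logging
  (i/interceptor
   {:name ::logging
    :enter (fn [ctx] (println "query:" (:query ctx)) ctx)
    :leave (fn [ctx] (println "result count:" (count (:result ctx))) ctx)}))

;; Terminal interceptor that actually runs the query.
(def run-q
  (i/interceptor
   {:name ::run-q
    :enter (fn [ctx] (assoc ctx :result (d/q (:query ctx) (:db ctx))))}))

;; A d/q wrapper with middleware baked in.
(defn wrapped-q [query db]
  (:result (chain/execute {:query query :db db} [logging run-q])))
```

Query transformation or result filtering would be more interceptors in the same vector, touching `:query` on `:enter` or `:result` on `:leave`.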

hueyp02:11:26

I’m still using datomic.api/q not the client stuff … everything works great in my wrapping database except for d/q which just proceeds to find nothing ;/

hueyp02:11:19

I had to implement Iterable / ISeq to get d/q to not throw, so now it doesn’t throw, but it just doesn’t find a dang thing 😜

hueyp02:11:36

I don’t know if there is some other marker interface I need to tell it to treat it as a db

steveb8n04:11:08

that’s the limit of my ability to help. I didn’t try this with the peer api because it doesn’t have a protocol in front of it, like the client api does

hueyp04:11:53

yah — there is the datomic.Database interface, but it doesn’t seem to cover d/q 😜

hueyp04:11:58

thanks again tho!

steveb8n05:11:48

@hueyp you might find an idea or two in here https://github.com/stevebuik/ns-clone which does work with the peer api. it doesn’t proxy but delegates, so it's pretty similar

johnj17:11:51

That's great for on-prem users that are on aws

jeroenvandijk17:11:31

Just on-prem, not also datomic cloud?

johnj17:11:27

I don't think there will be any difference for users, since this is already handled for them. Maybe good news for the implementors of Datomic Cloud.

johnj17:11:52

Cloud doesn't use dynamo in the same way as on-prem

jeroenvandijk09:11:07

Ok, I can't say for sure, but my guess would have been that it will also have an effect on potential write speed (transactions) in Cloud

jeroenvandijk09:11:18

We're using on-prem so I'm not able to check

abdullahibra17:11:27

there is something that confuses me: how much does it cost to use Datomic for a starter project?

abdullahibra17:11:48

another question: how is Datomic different from RDF triple stores?

ro602:12:36

Any triplestore in particular? I think the primary difference in the data model is that Datomic stores all historical states of the database, so you can query a snapshot of the database as of any transaction, or even run queries that aggregate over time. I don't have experience with RDF beyond researching it, but Datomic has several other unique capabilities as well.

eoliphant19:12:13

aside from not ‘speaking’ the RDF spec out of the box and, as @U8LN9KT2N mentioned, the native concept of time, the basic information model is more or less identical. You could fairly easily build an RDF server as a (probably small) set of functions on top of datomic
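To make the "more or less identical" concrete: an RDF triple is subject/predicate/object, while a Datomic datom is the same shape plus the transaction and an added? flag. The entity ids and `:person/name` attribute below are illustrative:

```clojure
;; RDF triple:
;;   <http://example.org/alice>  <http://example.org/name>  "Alice"
;;
;; Datomic datom - same core shape, plus tx and added?:
;;   [entity-id       attribute     value    tx              added?]
;;   [17592186045418  :person/name  "Alice"  13194139534312  true]

(require '[datomic.api :as d])

;; Pattern-matching over datoms looks a lot like a SPARQL basic
;; graph pattern; db is assumed bound to a database value.
(d/q '[:find ?e ?name
       :where [?e :person/name ?name]]
     db)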

lilactown17:11:24

Datomic cloud or the self-deployed version?

lilactown17:11:10

the self-deployed version (if i recall correctly) has a free offering with fewer features and no support. great for a starter project.

lilactown17:11:59

for Datomic Cloud, I'm using the solo topology and it's ~$25/mo at the moment. I haven't been using it super actively, but I don't imagine it would scale up much more even if I was

hueyp19:11:53

how often does the datomic forum get looked at? 😜

lwhorton22:11:04

is there a document somewhere that explains how to connect your non-ion based app to a datomic cloud instance? presumably the app code has to execute in an EC2 somewhere inside the datomic VPC, but that’s about all i know 😞 . (do i setup my own EC2, do i use beanstalk, what about autoscaling, etc. etc.)
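For the client side at least, the Datomic Cloud client config looks roughly like this (system name, region, and endpoint are placeholders); the process running it still needs network access into the Datomic VPC, e.g. via the bastion's SOCKS proxy or by running on an EC2 instance inside/peered with that VPC:

```clojure
(require '[datomic.client.api :as d])

;; Placeholder system/region/endpoint values - substitute your own.
(def client
  (d/client {:server-type :cloud
             :region      "us-east-1"
             :system      "my-system"
             :endpoint    "http://entry.my-system.us-east-1.datomic.net:8182/"
             :proxy-port  8182})) ; only when tunneling through the bastion

(def conn (d/connect client {:db-name "my-db"}))
```

That covers connectivity; the autoscaling/Beanstalk-vs-raw-EC2 question is a separate deployment decision the docs don't prescribe.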

kenny23:11:11

Getting this exception when deploying my Ion:

java.lang.NoClassDefFoundError: org/slf4j/event/LoggingEvent
Datomic appears to be using an old version of slf4j (1.7.14). The latest is 1.7.25. Could this dependency be bumped?
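As a possible stopgap until the dependency is bumped: with tools.deps, a top-level dependency wins over a transitive one, so pinning slf4j-api explicitly in deps.edn may resolve the missing class (the coordinates below are illustrative, not a confirmed fix):

```clojure
;; deps.edn (fragment) - a top-level dep overrides the transitive 1.7.14
{:deps {org.slf4j/slf4j-api {:mvn/version "1.7.25"}}}
```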
