#datomic
2017-02-14
devth03:02:36

anyone generating squuids in clojurescript?

devth03:02:18

i see https://github.com/lbradstreet/cljs-uuid-utils – is this compatible with datomic.api/squuid?

pesterhazy07:02:27

There is an implementation in datascript
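For reference, a minimal sketch assuming DataScript is on the classpath: `datascript.core/squuid` generates a time-prefixed UUID in the same spirit as `datomic.api/squuid` (verify the time-prefix semantics are close enough for your use).

```clojure
;; Works in both Clojure and ClojureScript — DataScript ships its own
;; squuid implementation.
(require '[datascript.core :as ds])

;; Like datomic.api/squuid, the high bits encode the current time,
;; so squuids sort roughly by creation order.
(ds/squuid)
```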

chrisblom10:02:23

is there a way to get the t value of the transaction within a transaction?

val_waeselynck10:02:09

@chrisblom (datomic.api/next-t db)

chrisblom10:02:46

cool thanks, i'll need to create a tx function for this right?

val_waeselynck10:02:47

Well I assume that's what it does.

val_waeselynck10:02:08

why do you need this though?

chrisblom10:02:32

for syncing with datascript, i want to store the t of latest transaction so i can use it to generate diffs

chrisblom10:02:24

i could use the transaction id, but the t value is more readable

val_waeselynck10:02:58

but is that not what d/basis-t gives you?

chrisblom10:02:28

yes, normally it would, but only some of my transactions are relevant for the diffing

chrisblom10:02:08

i have transactions that update some data model, and transactions that update metrics

chrisblom10:02:20

and only the data model needs to be synced with datascript

chrisblom10:02:18

i was tagging each transaction with a type

chrisblom10:02:01

and using a query to find the latest datamodel transaction

chrisblom10:02:43

ok thanks, i got it working with the transaction function
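For reference, such a transaction function might look roughly like this (a sketch, not chrisblom's actual code; the `:sync/t` attribute and the function name are hypothetical):

```clojure
;; A transaction function that stamps the transaction entity with the
;; t value the transaction is about to receive, via d/next-t.
;; Assumes :sync/t is a long-valued attribute installed separately.
(require '[datomic.api :as d])

(def stamp-t-fn
  {:db/ident :sync/stamp-t
   :db/fn (d/function
           '{:lang :clojure
             :params [db]
             :code [[:db/add (datomic.api/tempid :db.part/tx)
                     :sync/t (datomic.api/next-t db)]]})})

;; Usage: include the fn invocation alongside the data-model tx data,
;; e.g. (d/transact conn (cons [:sync/stamp-t] tx-data))
```

Only the transactions that invoke `:sync/stamp-t` get tagged, so a simple index scan on `:sync/t` finds the latest data-model transaction.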

chrisblom10:02:05

maybe i should use separate db's instead

val_waeselynck10:02:08

yeah storing derived data is currently a bit hacky in Datomic IMHO

chrisblom10:02:46

is querying across 2 databases easy? is there a performance penalty?

val_waeselynck10:02:32

I don't think there's a performance penalty, but there may be an expressivity penalty, especially when using Datalog rules, and not / not-join / or / or-join clauses
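For context, a cross-database query just takes both databases as inputs, each bound to its own `$`-prefixed source symbol; the attribute names below are made up for illustration:

```clojure
(require '[datomic.api :as d])

;; Datalog joins freely across inputs. Each :where clause names the
;; source it runs against; ?name is the join variable here.
;; :entity/name, :metric/name and :metric/value are hypothetical.
(d/q '[:find ?name ?value
       :in $model $metrics
       :where
       [$model   ?e :entity/name  ?name]
       [$metrics ?m :metric/name  ?name]
       [$metrics ?m :metric/value ?value]]
     model-db metrics-db)
```

The expressivity penalty val mentions shows up with rules and `not`/`or` clauses, which are tied to a single data source at a time.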

val_waeselynck10:02:26

Having said that, you may be able to circumvent those limitations using db functions in query

chrisblom10:02:14

yeah, sounds good, i will try that, i don't need complicated queries for my use case

val_waeselynck10:02:26

Well there may be a performance penalty too: you may need to go through 2 scalar indexes instead of 1 ref index at some point.

val_waeselynck10:02:38

If you run into the expressivity penalties, don't forget that you can use the secondary db as a dumb key-value store using db functions (a dumb key-value store with awesome local caching and time-travel features 😉 )
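A sketch of that "dumb key-value store" idea: instead of joining across sources in Datalog, call a function on the secondary db from inside a query clause. The attributes here are hypothetical, and `:kv/key` is assumed to be a `:db.unique/identity` attribute so it can be used in a lookup ref:

```clojure
(require '[datomic.api :as d])

;; d/entity can be called as a plain function inside a :where clause,
;; turning the secondary db ($kv) into a key-value lookup.
(d/q '[:find ?k ?v
       :in $ $kv
       :where
       [$ ?e :model/key ?k]
       [(datomic.api/entity $kv [:kv/key ?k]) ?ent]
       [(:kv/val ?ent) ?v]]
     model-db kv-db)
```

This sidesteps the rule/`not`/`or` limitations because the second db never participates in the Datalog join itself.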

timgilbert16:02:37

@curtosis: we use conformity with mount, but we just load all norms when we initialize the datomic connection

timgilbert16:02:25

I think your idea is mildly crazy, but if you want to do it, the result of running conformity does give you a list of the norms that were transacted, and conceivably you could stash those somewhere and hook them into the component lifecycle

timgilbert16:02:05

For my use case it's much easier just to load everything all at once so as to ensure the latest code runs against the latest schema. I could see needing to mess around with it in a more finicky manner if you have more complex deployments though
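For reference, loading all norms at init time looks roughly like this (resource name hypothetical; conformity's `ensure-conforms` is idempotent, so running it on every startup is safe):

```clojure
(require '[datomic.api :as d]
         '[io.rkn.conformity :as c])

;; Run every norm against the connection at initialization, so the
;; latest code always runs against the latest schema.
(defn init-conn [uri]
  (d/create-database uri)            ; no-op if it already exists
  (let [conn  (d/connect uri)
        norms (c/read-resource "schema/norms.edn")]
    (c/ensure-conforms conn norms)
    conn))
```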

curtosis16:02:38

@timgilbert thanks, that’s helpful. Realistically, the codebase isn’t likely to be that disjoint from the schema. The main use case I’m working toward is “build this version using these 4 of 7 available schema chunks”, but I should probably resist the temptation to overabstractify it.

b2berry19:02:31

Anyone know of a lib for visualizing a datomic schema? Something reminiscent of a generated ERD of sorts, even if the relationships themselves were missing? ED instead of ERD, heh.

wei20:02:56

anyone have a solution for being able to directly upsert entitymaps? would be nice to do something like (d/transact conn [(s/assert ::some-entity (assoc some-entity :prop :val))]) for example

b2berry20:02:50

Thanks a lot @wei

wei20:02:30

reason for doing this over just using :db/add is to validate the entity with spec before putting it in the db
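A self-contained sketch of that pattern with a hypothetical spec: `s/assert` returns its argument unchanged when the value conforms (and assertions are enabled), so the asserted map can be passed straight into the transaction data; an invalid map throws before it ever reaches `d/transact`.

```clojure
(require '[clojure.spec.alpha :as s])

(s/check-asserts true) ;; enable s/assert at runtime

;; hypothetical entity spec
(s/def :user/email string?)
(s/def ::some-entity (s/keys :req [:user/email]))

;; valid map flows through unchanged…
(s/assert ::some-entity {:user/email "a@example.com"})

;; …an invalid one throws ExceptionInfo, so the bad entity never
;; reaches the db:
;; (s/assert ::some-entity {})  ; throws

;; usage, as in wei's message:
;; (d/transact conn [(s/assert ::some-entity
;;                             (assoc some-entity :prop :val))])
```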