#datomic
2018-02-12
Desmond05:02:11

I thought I could add to the ident like a normal transaction:

(d/transact connection [{:db/id [:db/ident :question/source-identifier]
                         :db/unique :db.unique/identity}])

Desmond05:02:33

That didn't throw an error but it also doesn't seem to have worked

jaret05:02:23

@captaingrover for your schema question I recommend reading through Stu’s blog on Schema growth. http://blog.datomic.com/2017/01/the-ten-rules-of-schema-growth.html

jaret05:02:04

Specifically though, if you want to change an attribute to unique I recommend:

jaret05:02:09

1. Rename the attribute (e.g. :user/name-deprecated)

jaret05:02:00

2. Make a new attribute with :db/unique set (e.g. :user/username)

jaret05:02:27

3. Migrate the old values from the old attribute to the new attribute.

jaret05:02:03

Note that if you have a lot of data to migrate, you will want to batch the migration transactions appropriately.
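
A rough sketch of steps 1-3 with the peer API, using hypothetical attribute names (:user/name as the old attribute, :user/username as the new one), a conn connection, and a batch size of 1000; it assumes the old values are already de-duplicated and omits error handling:

(require '[datomic.api :as d])

;; 1. Rename the old attribute by asserting a new :db/ident on it.
@(d/transact conn [{:db/id :user/name :db/ident :user/name-deprecated}])

;; 2. Install the new attribute with the unique constraint.
@(d/transact conn [{:db/ident       :user/username
                    :db/valueType   :db.valueType/string
                    :db/cardinality :db.cardinality/one
                    :db/unique      :db.unique/identity}])

;; 3. Migrate the old values to the new attribute, in batches.
(let [pairs (d/q '[:find ?e ?v
                   :where [?e :user/name-deprecated ?v]]
                 (d/db conn))]
  (doseq [batch (partition-all 1000 pairs)]
    @(d/transact conn (for [[e v] batch] [:db/add e :user/username v]))))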

jaret05:02:30

This approach is not a solution for bad schema design, though, and it should not be relied upon to correct what are really schema design problems.

jaret05:02:39

Also, d/history will still show the previous entries, and :db/ident is not t-aware, so the rename applies across all of history.
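
For instance, after the rename sketched above, a history query still returns the old datoms, reported under the renamed ident (a minimal sketch):

;; All assertions/retractions ever made for the (renamed) attribute.
(d/q '[:find ?e ?v ?tx ?op
       :where [?e :user/name-deprecated ?v ?tx ?op]]
     (d/history (d/db conn)))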

jaret05:02:23

And when you transact over you’ll need to de-dupe

Desmond05:02:26

@jaret thanks for the help! I was trying to follow the growth not breakage rules but learning datomic at the same time makes it harder. I was hoping to get off easy this time and not need a data migration. It sounds like that's not the case.

Desmond05:02:47

The truth is I only wanted to make this ident unique for the convenience of being able to do a ref lookup. From what you're saying it sounds like I might be better off just living with the extra query.

jaret05:02:42

@Desmond if all current values of the attribute are unique, then you might be able to get away with just altering the schema; see https://docs.datomic.com/cloud/schema/schema-change.html#sec-5

jaret05:02:02

Make sure you back up before altering the schema, but altering :db/unique is supported.

jaret05:02:10

If you already sent the schema alteration you posted before, you should call sync-schema.

jaret05:02:43

In order to add :db/unique, you must first have an AVET index including that attribute.

jaret05:02:50

All alterations happen synchronously, except for adding an AVET index.

jaret05:02:11

If you want to know when the AVET index is available, call sync-schema.
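
A minimal sketch of that call with the peer API, using the current basis-t of the connection's db (conn is assumed to be the connection from earlier):

;; Returns a future; deref blocks until schema changes through t,
;; including AVET indexing, are visible to this peer.
@(d/sync-schema conn (d/basis-t (d/db conn)))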

Desmond05:02:59

Yeah, I'm all backed up on S3 and running my experiments against a restored copy in a staging environment before running them against prod. The transaction in the docs ran, and all the values should be unique since they are UUIDs, but I'm still seeing the non-unique error when I try to do a ref lookup. I ran that a while ago though, so I imagine it would be done. For sync-schema, what should t be? I haven't worked with the time-travel features at all yet.

jaret06:02:00

Ah… you'll want to make sure your attribute has :db/index set to true, then call sync-schema on the current t.

jaret06:02:11

>In order to add a unique constraint to an attribute, Datomic must already be maintaining an AVET index on the attribute, or the attribute must have never had any values asserted. Furthermore, if there are values present for that attribute, they must be unique in the set of current assertions. If either of these constraints is not met, the alteration will not be accepted and the transaction will fail.
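
Putting that together for the attribute from earlier in the thread (a sketch, assuming its current values are in fact unique): add the AVET index first, wait for it with sync-schema as above, then add the unique constraint.

;; 1. Ask Datomic to maintain an AVET index for the attribute.
@(d/transact conn [{:db/id    :question/source-identifier
                    :db/index true}])

;; 2. Wait until the AVET index is available on this peer.
@(d/sync-schema conn (d/basis-t (d/db conn)))

;; 3. Now the unique constraint can be added.
@(d/transact conn [{:db/id     :question/source-identifier
                    :db/unique :db.unique/identity}])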

jaret06:02:41

Just realized some of the API links are broken in that doc page

jaret06:02:20

I’ll have to fix the links tomorrow (later today :))

Desmond06:02:25

@jaret yes! that worked!

Desmond06:02:30

thank you!

alexk16:02:49

To atomically increment the value in a datom, must I implement a custom add/inc function and include it in the schema?

matthavener16:02:53

@alex438 you can do a :db/cas

matthavener16:02:36

[:db/cas 123456 :some/attr old-value (inc old-value)]

matthavener16:02:02

a custom transaction function is arguably better, to avoid retrying after collisions

alexk16:02:03

how would you address the race condition where multiple writers are calling that at the same time?

alexk16:02:31

I understand - it’s not a perfect solution but it’s a way to guarantee data isn’t overwritten

matthavener16:02:41

yeah just depends on your requirements
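
A sketch of the trade-off discussed above, using the peer API and a hypothetical increment! helper: :db/cas keeps each write atomic, but the caller has to re-read and retry when another writer wins, whereas a custom transaction function would do the read-and-increment inside the transactor and avoid the client-side retry.

(require '[datomic.api :as d])

;; Optimistic increment via :db/cas, retried on conflict (no retry cap, for brevity).
(defn increment! [conn eid attr]
  (let [db  (d/db conn)
        old (get (d/pull db [attr] eid) attr)]   ; nil if no value yet
    (try
      @(d/transact conn [[:db/cas eid attr old (inc (or old 0))]])
      (catch Exception _
        ;; Another writer changed the value first; re-read and try again.
        (increment! conn eid attr)))))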

juliobarros17:02:47

What’s the best practice for unit testing with Datomic cloud? Is it possible to create an in memory db? I’d like to be able to delete a db on demand to reset it but that may not be optimal/feasible with an on disk db. I didn’t find anything in the docs.

gerstree20:02:48

Did anyone manage to run transactors / peers on ecs FARGATE?

gerstree20:02:54

We have been running a transactor on ecs backed by an ec2 autoscaling cluster for over a year, but on FARGATE no luck

Chris Bidler22:02:36

I’m successfully running a Vase API service on FARGATE, and that uses the peer library

Chris Bidler22:02:59

no intel on transactors, though - we still run those on r4.large instances in an autoscaling group