This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2018-02-12
Channels
- # aws (3)
- # beginners (28)
- # boot (3)
- # cider (28)
- # clara (5)
- # cljs-dev (107)
- # cljsrn (1)
- # clojure (40)
- # clojure-austin (2)
- # clojure-brasil (5)
- # clojure-canada (1)
- # clojure-italy (1)
- # clojure-spec (39)
- # clojure-uk (38)
- # clojurescript (33)
- # community-development (11)
- # cursive (11)
- # datomic (43)
- # duct (6)
- # emacs (7)
- # flambo (1)
- # fulcro (68)
- # graphql (11)
- # jobs (1)
- # jobs-discuss (8)
- # leiningen (16)
- # luminus (2)
- # lumo (1)
- # off-topic (38)
- # om (2)
- # onyx (15)
- # parinfer (32)
- # portkey (5)
- # re-frame (50)
- # reagent (50)
- # reitit (1)
- # shadow-cljs (63)
- # spacemacs (10)
- # sql (27)
- # unrepl (6)
- # yada (2)
I thought I could add to the ident like a normal transaction:
(d/transact connection [{:db/id [:db/ident :question/source-identifier]
                         :db/unique :db.unique/identity}])
@captaingrover for your schema question I recommend reading through Stu’s blog on Schema growth. http://blog.datomic.com/2017/01/the-ten-rules-of-schema-growth.html
It should be noted that if you have a lot of data to merge you will want to appropriately batch the merge transactions.
This approach is also not a solution for bad schema design, and it should not be relied upon to correct what are in reality schema design problems.
@jaret thanks for the help! I was trying to follow the growth not breakage rules but learning datomic at the same time makes it harder. I was hoping to get off easy this time and not need a data migration. It sounds like that's not the case.
The truth is I only wanted to make this ident unique for the convenience of being able to do a ref lookup. From what you're saying it sounds like I might be better off just living with the extra query.
@Desmond if all current values of the attribute are unique then you might be able to get away with altering the schema see https://docs.datomic.com/cloud/schema/schema-change.html#sec-5
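That alteration is just a transaction against the attribute entity. A minimal sketch, assuming an on-prem peer connection `conn` (the name is illustrative):

```clojure
(require '[datomic.api :as d])

;; Alter an existing attribute to be unique by identity.
;; This only succeeds if the attribute already has an AVET index
;; (or has never had any values asserted) and all current values
;; are unique.
@(d/transact conn
   [{:db/id     :question/source-identifier
     :db/unique :db.unique/identity}])
```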
If you already sent the schema alteration you posted before, you should call sync-schema
in order to add :db/unique, you must first have an AVET index including that attribute.
Yeah I'm all backed up on S3 and running my experiments against a restored copy in a staging environment before running them against prod. The transaction in the docs ran, and all the values should be unique since they are UUIDs, but I'm still seeing the non-unique error when I try a ref lookup. I ran that a while ago, though, so I imagine it would be done. For sync-schema, what should t be? I haven't worked with the time-travel features at all yet.
Ah, you’ll want to make sure your attribute has :db/index set to true, then call sync-schema on the current T.
>In order to add a unique constraint to an attribute, Datomic must already be maintaining an AVET index on the attribute, or the attribute must have never had any values asserted. Furthermore, if there are values present for that attribute, they must be unique in the set of current assertions. If either of these constraints are not met, the alteration will not be accepted and the transaction will fail.
But it’s linking off to https://docs.datomic.com/on-prem/clojure/index.html#datomic.api/sync-schema
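Putting those steps together, a sketch (again assuming an on-prem peer connection `conn`):

```clojure
;; 1. Ensure Datomic maintains an AVET index for the attribute.
@(d/transact conn
   [{:db/id    :question/source-identifier
     :db/index true}])

;; 2. Block until the schema change (and its indexing) is complete,
;;    using the basis T of the latest db value as "the current T".
@(d/sync-schema conn (d/basis-t (d/db conn)))

;; 3. Now the uniqueness constraint can be added.
@(d/transact conn
   [{:db/id     :question/source-identifier
     :db/unique :db.unique/identity}])
```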
To atomically increment the value in a datom, must I implement a custom add/inc function and include it in the schema?
@alex438 you can do a :db/cas
[:db/cas 123456 :some/attr old-value (inc old-value)]
custom function is arguably better to avoid retrying after collisions
how would you address the race condition where multiple writers are calling that at the same time?
I understand; it’s not a perfect solution, but it’s a way to guarantee data isn’t overwritten
yeah just depends on your requirements
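A custom transaction function along those lines might look like this sketch (on-prem classic; the function name :my/inc and the entity/attribute in the usage line are illustrative). Because it runs inside the transactor, the read-increment-write is atomic and callers don't need a CAS retry loop:

```clojure
;; Install the function as a database function.
@(d/transact conn
   [{:db/ident :my/inc
     :db/fn    (d/function
                 '{:lang   :clojure
                   :params [db e attr]
                   ;; read the current value inside the transactor,
                   ;; defaulting to 0 if the attribute is absent
                   :code   (let [v (or (get (datomic.api/entity db e) attr) 0)]
                             [[:db/add e attr (inc v)]])})}])

;; Usage: atomically increment :some/attr on entity 123456.
@(d/transact conn [[:my/inc 123456 :some/attr]])
```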
What’s the best practice for unit testing with Datomic cloud? Is it possible to create an in memory db? I’d like to be able to delete a db on demand to reset it but that may not be optimal/feasible with an on disk db. I didn’t find anything in the docs.
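For what it's worth, the on-prem peer library does support a mem storage protocol that works well for throwaway test databases (Datomic Cloud itself has no in-memory mode). A sketch, with an illustrative helper name:

```clojure
(require '[datomic.api :as d])

;; Each test gets a fresh, isolated in-memory database.
(defn fresh-conn []
  (let [uri (str "datomic:mem://test-" (java.util.UUID/randomUUID))]
    (d/create-database uri)
    (d/connect uri)))

;; When a test is done, the database can be dropped:
;; (d/delete-database uri)
```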
We have been running a transactor on ECS backed by an EC2 autoscaling cluster for over a year, but on FARGATE, no luck
I’m successfully running a Vase API service on FARGATE, and that uses the peer library
no intel on transactors, though; we still run those on r4.large instances in an autoscaling group