#datomic
2016-11-16
georgek 04:11:05

I’m not sure if this is the right place for this question, but I’m trying to have an AWS Lambda make a Datomic transaction against a transactor backed by DynamoDB. This uses the clojure/aws-java-sdk libraries, which work fine from EC2, etc. I’ve got the Lambda wired properly and I’ve followed the various docs on giving the right permissions to the associated role. Here are mine: AWSLambdaDynamoDBExecutionRole, AWSLambdaVPCAccessExecutionRole, AWSLambdaExecute, AmazonVPCFullAccess. I’ve configured the Lambda to be part of the default security group and set it to run in the VPC. The problem is that I get an exception-less timeout when trying to connect to the store using the URI. Thoughts?
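(Editor's note: for reference, a peer connects to a DynamoDB-backed system with a URI of roughly this shape; the region, table, and database name below are placeholders, not the ones from this conversation.)

(require '[datomic.api :as d])

;; general form: datomic:ddb://<aws-region>/<dynamodb-table>/<db-name>
(def uri "datomic:ddb://us-east-1/my-datomic-table/my-db")

;; d/connect blocks while the peer catches up with the log;
;; inside a Lambda this cold-start cost is paid on every fresh container.
(def conn (d/connect uri))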

georgek 04:11:51

On a related note, I see that most of the libraries and discussion are around using cljs on Node. The one cljs Datomic library requires a running peer that provides the REST API access it needs. There isn’t a lot of documentation on how to ensure that your Lambda creates a peer upon invocation. Am I missing something? I deployed via CloudFormation following the basic install approach, but the Datomic docs on the REST API show a manual start of an API-ready peer. Is there some way to configure this on the transactor instance? Thanks!
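(Editor's note: the REST peer mentioned here is started with the bin/rest script that ships with Datomic; it is a separate long-running process, not something the transactor launches for you. A minimal invocation looks roughly like this, with a placeholder port, alias, and storage URI.)

# serve the REST API on port 8001 under the alias "mydb",
# backed by the given storage (placeholder URI)
bin/rest -p 8001 mydb datomic:ddb://us-east-1/my-datomic-table/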

kenny 05:11:28

@georgek Datomic + Lambda will not work well, because you would need to launch a peer for each request (see https://groups.google.com/forum/#!topic/datomic/OYJ4ghelmF0). As suggested in the Google Group post, you could set up the Datomic REST API and transact from the Lambda function by sending a request to the REST API. To prevent outside access to the REST API, you would probably need to run the REST server inside an isolated VPC and only give your Lambda function access to that VPC.
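(Editor's note: a sketch of what the Lambda-side call could look like, using clj-http; the host, alias, and db name are made up, and the exact endpoint path and tx-data parameter should be checked against the Datomic REST docs.)

(require '[clj-http.client :as http])

;; POST transaction data (as an edn string) to the REST peer inside the VPC
(defn transact-via-rest [tx-data-edn]
  (http/post "http://datomic-rest.internal:8001/data/mydb/my-db/"
             {:headers     {"Accept" "application/edn"}
              :form-params {:tx-data tx-data-edn}}))

;; usage
(transact-via-rest "[{:db/id #db/id[:db.part/user] :country/name \"Germany\"}]")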

pesterhazy 10:11:08

yeah, in my experience connecting a new peer can take up to 90 seconds, kind of defeating the purpose of lambda

val_waeselynck 10:11:39

Just checking, is it safe to migrate to AWS’s longer resource IDs (https://aws.amazon.com/fr/blogs/aws/theyre-here-longer-ec2-resource-ids-now-available/) when using the Datomic CloudFormation stack?

georgek 15:11:22

@kenny @pesterhazy That makes so much sense I’m a bit floored. Thanks for the reality check and pointer to how to approach this!

conan 15:11:54

I'm trying to do a restore-db. Whatever I try, I get this message:

clojure.lang.ExceptionInfo: :restore/roots-missing No database root for next-t 2568953 {:db/error :restore/roots-missing}
or this message:
clojure.lang.ExceptionInfo: :restore/no-roots No restore points available at file:/cn-catalog/db-backup/datomic/beta/ {:uri "file:/cn-catalog/db-backup/datomic/beta/", :db/error :restore/no-roots}
Here are some examples of the commands I'm running:
datomic/bin/datomic restore-db file:/cn-catalog/db-backup/datomic/beta datomic: `ls -1 ~/dev/cn-catalog/db-backup/datomic/beta/roots | sort -n | tail -n 1`

datomic/bin/datomic restore-db file:/cn-catalog/db-backup/datomic/beta datomic:

datomic/bin/datomic restore-db file:/cn-catalog/db-backup/datomic/beta datomic: 6591640

datomic/bin/datomic restore-db file:/cn-catalog/db-backup/datomic/beta datomic: -t 6591640
Any ideas? I'm on Windows.
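(Editor's note: restore-db takes the backup URI as the source and a full database URI as the destination, so the general shape of the command is the following; the destination URI is a placeholder, not conan's actual transactor.)

# restore the file backup into a database named "beta" on a local dev transactor
datomic/bin/datomic restore-db file:/cn-catalog/db-backup/datomic/beta datomic:dev://localhost:4334/beta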

tengstrand 15:11:38

How do I retract a value from an attribute with cardinality many and type ref? For example, :user/role-id has references to the role entity. If :user/role-id contains [1 2 3] and I want to retract 2 so that the result is [1 3], how do I do that?

wotbrew 15:11:41

@teng [:db/retract eid :user/role-id 2]

wotbrew 15:11:10

same as a cardinality one retraction

tengstrand 15:11:58

@danstone So maybe the problem was that we tried to pass a vector of values.

wotbrew 15:11:34

yeah, if you want to retract multiple values you have to submit individual retractions for each value
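(Editor's note: for example, with an illustrative conn and entity id, retracting two of the values in a single transaction looks like this.)

;; each value of a cardinality-many attribute gets its own retraction,
;; but they can all go in one transaction
@(d/transact conn [[:db/retract user-eid :user/role-id 1]
                   [:db/retract user-eid :user/role-id 2]])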

tengstrand 15:11:52

Now it works!

pesterhazy 16:11:22

when using the map form of transactions, it looks like using "reverse notation" is not supported: {:db/id .... :country/name "Germany" :city/_country #{[:city/name "Frankfurt"] [:city/name "Stuttgart"]}}

pesterhazy 16:11:52

but I don't see why this wouldn't work in principle; it looks like a useful enhancement

pesterhazy 16:11:20

or am I missing something?

Matt Butler 17:11:51

Just to confirm, does setting an attribute to :db/unique :db.unique/value implicitly index the value, as implied by:

To maintain AVET for an attribute, specify :db/index true (or some value for :db/unique) when installing or altering the attribute

Lambda/Sierra 19:11:36

@pesterhazy Yes, that's a known limitation. Reversed attributes aren't supported in transactions. I don't know if/when it will be.
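(Editor's note: a common workaround, with illustrative tempids and the attribute names borrowed from the example above, is to assert the refs in the forward direction from the city entities instead of using :city/_country in the country map.)

[{:db/id #db/id[:db.part/user -1]
  :country/name "Germany"}
 {:db/id #db/id[:db.part/user -2]
  :city/name "Frankfurt"
  :city/country #db/id[:db.part/user -1]}   ; forward ref to the same tempid
 {:db/id #db/id[:db.part/user -3]
  :city/name "Stuttgart"
  :city/country #db/id[:db.part/user -1]}]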

Lambda/Sierra 19:11:37

@mbutler Yes, any kind of uniqueness implies indexing values.
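(Editor's note: in schema terms, with an illustrative attribute name, a unique attribute is maintained in AVET without an explicit :db/index true.)

{:db/id                 #db/id[:db.part/db]
 :db/ident              :account/email
 :db/valueType          :db.type/string
 :db/cardinality        :db.cardinality/one
 :db/unique             :db.unique/value      ; implies AVET indexing
 :db.install/_attribute :db.part/db}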

Matt Butler 19:11:56

Awesome thanks 🙂

jonpither 19:11:06

Hi - do you have any resources on understanding Datomic’s relationship with DynamoDB write capacity?

jonpither 19:11:35

currently getting com.amazonaws.services.dynamodbv2.model.ProvisionedThroughputExceededException in the transactor - would want to avoid this

jonpither 19:11:12

I can up the provisioned write capacity - but I would like to understand how Datomic does its writes, i.e. presumably it’s a transaction per unit?

jonpither 19:11:53

or one "write capacity unit" per datom - think it could be this?

marshall 21:11:09

@jonpither Datomic’s use of DDB writes doesn’t correlate exactly to transactions or datoms. For every transaction, Datomic will write durably to the transaction log in DDB, but the transactor also writes a heartbeat to storage and, most importantly, will write large amounts of data during indexing jobs.

marshall 21:11:54

Because of this, you need to provision DDB throughput based on the need during indexing jobs, not the ongoing transactional load.
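(Editor's note: if you need headroom while investigating, the table's provisioned throughput can be raised in place; the table name and numbers below are placeholders.)

# raise provisioned throughput on the Datomic storage table (placeholder values)
aws dynamodb update-table \
    --table-name my-datomic-table \
    --provisioned-throughput ReadCapacityUnits=400,WriteCapacityUnits=200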

jonpither 21:11:06

Great, my next Q about capturing the throttles is answered there.