#datomic
2018-01-21
donmullen01:01:08

Back to working on import to Datomic Cloud — getting the following:

{:error #error {
 :cause "No implementation of method: :value-size of protocol: #'datomic.cloud.tx-limits/ValueSize found for class: java.lang.Float"
 :data {:datomic.client-spi/context-id "34d66806-8e6f-4c5e-a9dd-205ae330a9c7",
        :cognitect.anomalies/category :cognitect.anomalies/incorrect,
        :cognitect.anomalies/message "No implementation of method: :value-size of protocol: #'datomic.cloud.tx-limits/ValueSize found for class: java.lang.Float",
        :dbs [{:database-id "a601a3f8-0af7-4c89-a082-37108f5d0b65", :t 14, :next-t 15, :history false}]}}}

marshall01:01:22

@donmullen can you share what you were transacting when you got that error?

donmullen02:01:38

@marshall - sample trx data

[#:master{:filed-date #inst "2015-08-04T04:00:00.000-00:00"
            :doc-amount 3300000.0
            :doc-type "AGMT"
            :borough "1"
            :good-through-date #inst "2015-08-31T04:00:00.000-00:00"
            :doc-id "2015072200337005"
            :modified-date #inst "2015-08-04T04:00:00.000-00:00"
            :crfn "2015000266648"
            :doc-date #inst "2015-07-16T04:00:00.000-00:00"}]

donmullen02:01:12

where :master/doc-amount is :db.type/float in the schema
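
A minimal repro sketch of the failing shape, for reference. The connection config and db name here are made up; the schema literal mirrors :master/doc-amount above.

(require '[datomic.client.api :as d])

;; hypothetical Cloud connection
(def client (d/client {:server-type :cloud
                       :region      "us-east-1"   ; assumption
                       :system      "my-system"   ; assumption
                       :endpoint    "http://entry.my-system.us-east-1.datomic.net:8182/"
                       :proxy-port  8182}))
(def conn (d/connect client {:db-name "my-db"}))

;; a :db.type/float attribute, as in the schema above
(d/transact conn {:tx-data [{:db/ident       :master/doc-amount
                             :db/valueType   :db.type/float
                             :db/cardinality :db.cardinality/one}]})

;; transacting a value for it trips the ValueSize error quoted above
(d/transact conn {:tx-data [{:master/doc-amount 3300000.0}]})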

marshall02:01:33

I'll look into it. May be Monday before I can get with Stu to discuss

donmullen02:01:11

NP - thanks. FYI: the same schema and data transact fine into Datomic via the peer and Clojure client APIs.

Desmond02:01:34

So I'm trying to understand all the AWS resources that I got by creating the CloudFormation stack as described in https://docs.datomic.com/on-prem/aws.html.

Desmond02:01:13

My first question is: why do I see EC2 instances that run for a little while and then shut down? Is that the transactor?

marshall02:01:52

@captaingrover they shouldn't shut down immediately. That suggests a config issue

marshall02:01:37

The On-prem CFT only creates Transactor instances

marshall02:01:17

The ensure-transactor script creates a DynamoDB table and some IAM roles
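
For reference, the ensure/template steps from that walkthrough look roughly like this (file and stack names made up; check the doc page for exact arguments):

bin/datomic ensure-transactor my-transactor.properties my-transactor.properties
bin/datomic create-cf-template my-transactor.properties my-cf.properties cf.json
bin/datomic create-cf-stack us-east-1 MyTransactorStack cf.json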

Desmond02:01:41

I created this stack about 6 hours ago and I see about 20 instances in a terminated state

Desmond02:01:04

I guess I ran the scripts a few times before I was satisfied

marshall02:01:20

Are you ever getting instances that stay running?

marshall02:01:43

You should kill the stack

Desmond02:01:03

I also inactivated the keys I used with the ensure-transactor scripts because they had admin privileges

marshall02:01:30

And regenerate the CFT with the scripts. The most common issue is a typo or paste issue with the license key

Desmond02:01:44

It looked like the stack made its own roles, so I thought I wouldn't need those keys anymore

Desmond02:01:23

...or completely skipping the step about the license key

Desmond02:01:16

thanks for helping me realize that

marshall02:01:53

That would do it

Desmond02:01:41

Ok, reran the scripts and it looks promising.

Desmond02:01:41

My next question is: if I have a user for my peer (which is not on AWS), can I delete the trust relationship for EC2 from the datomic-aws-peer role?

Desmond03:01:15

I am including my peer user in the datomic-aws-peer role's trust relationships

marshall03:01:11

I think so, but I'll have to double check

Desmond04:01:49

Well, I removed the relationship and everything still works fine.

Desmond04:01:45

I'm also curious how other people have set up dev, staging, and prod databases.

Desmond04:01:30

The app I'm using Datomic for hasn't launched yet and I'm not expecting a high load at first, so to conserve costs I was planning to just create three databases with the same stack and the same DynamoDB table.

Desmond04:01:40

The only reasons I can think of not to do that would be security and the amount of traffic going through the transactor.

Desmond04:01:55

Are there other reasons to duplicate components or create whole separate stacks for each environment?

Desmond04:01:40

I just realized that backup and restore is on a per-table basis, meaning that if I want to back up prod and restore it to staging I need two separate DynamoDB tables

Desmond04:01:36

It looks like the EC2 instances incur the majority of the cost (at least without much data in DynamoDB). Is there any reason I couldn't create multiple tables with only one transactor?

Desmond05:01:22

I'm not so familiar with CloudFormation. Which bits would I need to create just another table?

Desmond05:01:51

On a related note, are there any plans to port these CloudFormation templates over to Terraform?

marshall16:01:48

Are you tied to Datomic On-Prem? Datomic Cloud may be a better fit for this approach

marshall16:01:18

Also, you should use Datomic backup and restore, not DynamoDB backups

marshall16:01:07

I wouldn't recommend running staging and dev on the same transactor as your prod db, for a couple of reasons. If you want to test something like a large import or a config change, you have no separate infrastructure to test the change; every tweak to staging will also affect prod

marshall16:01:42

You could certainly run staging without HA (ASG size of 1) to save on cost

marshall16:01:17

If you really want to cut ec2 cost you could even turn off dev and staging when you're not testing / using them

Desmond17:01:41

Datomic Cloud looks very appealing. Part of my reason for using On-Prem was to learn a bit more about the pieces in play. That said our use case looks like a perfect fit for Cloud so I will certainly investigate further.

Desmond17:01:15

In either case I would want to back up prod and restore it to staging with a cron job. When I mentioned backup and restore before, I did actually mean the Datomic backup and restore rather than the DynamoDB backups. Isn't Datomic's backup and restore on a per-table basis?

Desmond17:01:26

Actually, I don't know what I was reading before, because the backup and restore doc clearly says "Backup URIs are per database"

Desmond17:01:17

So at the very least I could run staging and dev together and still back up prod to staging

Desmond06:01:39

@marshall So I tried out the backup and restore within a single table just to get started and it seems that this is not allowed: :restore/collision The database already exists under the name 'production'

Desmond06:01:31

Is there a way around this? I would like to avoid beefing up my deployment for a little while

marshall14:01:52

you can’t restore the same database to the same storage with a different name
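
For reference, the shape of the commands involved (URIs made up). Restoring into a different storage, e.g. the staging stack's table, avoids the :restore/collision:

bin/datomic backup-db datomic:ddb://us-east-1/prod-table/production s3://my-backups/production
bin/datomic restore-db s3://my-backups/production datomic:ddb://us-east-1/staging-table/production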

Desmond01:01:32

OK, cool. I split the prod infrastructure out. Would have needed to do it eventually anyway.

Desmond01:01:43

thanks for helping me!

marshall01:01:07

:thumbsup:

steveb8n04:01:34

If I want to target Cloud but want to develop locally, i.e. offline, can I use the client lib with a local Datomic instance? If so, what would the connection string look like? Caveat: I haven't tried this yet, so feel free to respond with RT(F)M. I'm just curious since the Cloud docs seem to assume dev always uses Cloud

Hendrik Poernama11:01:43

With Datomic Cloud, how does one atomically update a value based on another value? For example: [:db/add e a1 v] based on [e a2 v], which may change between creation of tx-data and transactor acknowledgement. I used to do this with a database function. I'm thinking now I will have to use CAS and retry? Not sure if there is a better way.
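
One plausible pattern, sketched with made-up attribute names: guard the value you read with the built-in :db/cas and retry on conflict. This assumes :my/a2 already has a value; a production version would inspect the anomaly rather than catch every exception.

(require '[datomic.client.api :as d])

(defn update-a1-from-a2!
  "Set :my/a1 on entity e as a function of the current :my/a2,
  retrying if :my/a2 changes between read and transact."
  [conn e f]
  (loop [tries 3]
    (let [db  (d/db conn)
          a2  (:my/a2 (d/pull db [:my/a2] e))
          tx  [[:db/cas e :my/a2 a2 a2]   ; no-op guard: fails if :my/a2 moved
               [:db/add e :my/a1 (f a2)]]
          res (try (d/transact conn {:tx-data tx})
                   (catch Exception ex    ; ideally: check it's a cas conflict
                     (when (zero? tries) (throw ex))
                     ::conflict))]
      (if (= ::conflict res)
        (recur (dec tries))
        res))))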

donmullen12:01:47

@marshall ^^ simplified float-issue case with the movies example.

stuarthalloway13:01:59

@donmullen for now can you use doubles instead of floats?

stuarthalloway13:01:08

@donmullen actually, hold off on that; I bet you would hit the same issue

stuarthalloway13:01:34

@donmullen confirmed I can repro. Please use BigDecimal until we can push a fix

stuarthalloway14:01:57

@donmullen do you need floating point semantics, or could you stick with BigDecimal?

donmullen14:01:45

@stuarthalloway Likely BigDecimal is better for currency - correct? I then have attributes that represent area ratios and some representing measurements in feet and square feet.

stuarthalloway14:01:32

BigDecimal for currency for sure

donmullen14:01:56

@stuarthalloway The Cloud client API requires Clojure 1.9 currently, correct? Is a ClojureScript client library to be released at some point?

donaldball15:01:04

A minor note of caution for bigdecs: be sure to set the scale to a consistent value (e.g. 2). Java and Clojure have slightly different opinions about equality for bigdecs with the same amount but different scales.
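
Concretely, at the REPL:

(.equals 1.0M 1.00M)            ;=> false (same amount, scale 1 vs scale 2)
(zero? (.compareTo 1.0M 1.00M)) ;=> true  (numerically equal)
(.setScale 3300000M 2)          ;=> 3300000.00M, normalized before storing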

stuarthalloway15:01:52

@donmullen Cloud API should work with 1.8

stuarthalloway15:01:33

@donmullen how would you use a ClojureScript library? from the browser or node or ?

donmullen15:01:22

@stuarthalloway Was thinking from the browser; going to put together a simple web portal that returns various filters/queries of the data. I need a backend anyway to update data and do various analytics, but that could be a microservice that only runs periodically. For now I will have a full backend to handle sending results to the web portal.

stuarthalloway15:01:40

@donmullen so how would you secure that?

cch116:01:47

Something like AWS Cognito might fit the bill.

donmullen15:01:49

Good point - hadn’t thought through security.

stuarthalloway15:01:36

A read-only db is straightforward, but nothing finer-grained yet

donmullen15:01:13

Read-only from the browser is likely all I'll need in the near term.

marshall16:01:06

@steveb8n you can do that in theory with peer server locally. However, there are some differences between cloud and on-prem you should be aware of: https://docs.datomic.com/on-prem/moving-to-cloud.html
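
For reference, once a local peer server is running against dev or mem storage, the client config looks roughly like this. The access key, secret, and db name are whatever you started the peer server with; exact map keys vary by client library version:

(require '[datomic.client.api :as d])

(def client
  (d/client {:server-type        :peer-server
             :access-key         "myaccesskey"
             :secret             "mysecret"
             :endpoint           "localhost:8998"
             :validate-hostnames false}))

(def conn (d/connect client {:db-name "hello"}))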

steveb8n22:01:09

@marshall good to know, thanks.

donmullen22:01:04

@jaret I did see the :forbidden issue again this afternoon. Restarted the proxy and REPL and it went away. Will try to narrow down some way to reproduce if I can.

bbloom23:01:59

In fact, all the links in that article are broken

jaret23:01:11

@bbloom Thanks for reporting that. I'll take a look. EDIT: I've fixed the links and I'll audit the rest of our API links.

jaret23:01:47

@donmullen if you get it again, can you restart the REPL, test, and then restart the proxy? I'd like to isolate which step resolved the issue, or whether it requires both.
