2018-01-21
Channels
- # beginners (73)
- # capetown (1)
- # cider (13)
- # cljsrn (4)
- # clojure (56)
- # clojure-russia (2)
- # clojure-uk (1)
- # clojurescript (50)
- # community-development (3)
- # cursive (1)
- # datomic (80)
- # defnpodcast (2)
- # emacs (2)
- # fulcro (16)
- # graphql (8)
- # hoplon (206)
- # immutant (43)
- # keechma (4)
- # lumo (4)
- # off-topic (26)
- # perun (2)
- # re-frame (2)
- # reagent (4)
- # remote-jobs (2)
- # rum (4)
- # shadow-cljs (82)
- # spacemacs (5)
- # vim (6)
Back to working on the import to Datomic Cloud; getting the following:
{:error #error {
:cause "No implementation of method: :value-size of protocol: #'datomic.cloud.tx-limits/ValueSize found for class: java.lang.Float"
:data {:datomic.client-spi/context-id "34d66806-8e6f-4c5e-a9dd-205ae330a9c7", :cognitect.anomalies/category :cognitect.anomalies/incorrect, :cognitect.anomalies/message "No implementation of method: :value-size of protocol: #'datomic.cloud.tx-limits/ValueSize found for class: java.lang.Float", :dbs [{:database-id "a601a3f8-0af7-4c89-a082-37108f5d0b65", :t 14, :next-t 15, :history false}]}
@donmullen can you share what you were transacting when you got that error?
@marshall - sample tx data
[#:master{:filed-date #inst "2015-08-04T04:00:00.000-00:00"
:doc-amount 3300000.0
:doc-type "AGMT"
:borough "1"
:good-through-date #inst "2015-08-31T04:00:00.000-00:00"
:doc-id "2015072200337005"
:modified-date #inst "2015-08-04T04:00:00.000-00:00"
:crfn "2015000266648"
:doc-date #inst "2015-07-16T04:00:00.000-00:00"}]
NP - thanks. FYI: the same schema and data transact fine into Datomic via the peer and Clojure client APIs.
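(For reference, a minimal sketch of pushing the sample above through the client API; the connection conn and the :master/* schema are assumed:)
(require '[datomic.client.api :as d])
(d/transact conn
  {:tx-data [#:master{:filed-date #inst "2015-08-04T04:00:00.000-00:00"
                      :doc-amount 3300000.0
                      :doc-type "AGMT"
                      :borough "1"
                      :good-through-date #inst "2015-08-31T04:00:00.000-00:00"
                      :doc-id "2015072200337005"
                      :modified-date #inst "2015-08-04T04:00:00.000-00:00"
                      :crfn "2015000266648"
                      :doc-date #inst "2015-07-16T04:00:00.000-00:00"}]})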
So I'm trying to understand all the AWS resources that I got by creating the CloudFormation stack as described in https://docs.datomic.com/on-prem/aws.html.
My first question: why do I see EC2 instances that run for a little while and then shut down? Is that the transactor?
@captaingrover they shouldn't shut down immediately. That suggests a config issue
I created this stack about 6 hours ago and I see about 20 instances in a terminated state
I also deactivated the keys I used with the ensure-transactor scripts because they had admin privileges
And regenerate the CloudFormation template with the scripts. The most common issue is a typo or paste issue with the license key
It looked like the stack made its own roles, so I thought I wouldn't need those keys anymore
My next question: if I have a user for my peer (which is not on AWS), can I delete the trust relationship for EC2 from the datomic-aws-peer role?
The app I'm using Datomic for hasn't launched yet and I'm not expecting a high load at first, so to conserve costs I was planning to just create three databases with the same stack and the same DynamoDB table.
The only reasons I can think of not to do that would be security and the amount of traffic going through the transactor
Are there other reasons to duplicate components or create whole separate stacks for each environment?
I just realized that backup and restore is on a per-table basis, meaning that if I want to back up prod and restore it to staging I need two separate DynamoDB tables
It looks like the EC2 instances incur the majority of the cost (at least without much data in DynamoDB). Is there any reason I couldn't create multiple tables with only one transactor?
I'm not so familiar with CloudFormation. Which bits would I need to create just another table?
On a related note, are there any plans to port these CloudFormation templates over to Terraform?
Are you tied to Datomic On-Prem? Datomic Cloud may be a better fit for this approach
I wouldn't recommend running staging and dev on the same transactor as your prod db, for a couple of reasons. If you want to test something like a large import or a config change, you have no separate infrastructure to test the change, and every tweak to staging will also affect prod
If you really want to cut EC2 cost you could even turn off dev and staging when you're not testing / using them
Datomic Cloud looks very appealing. Part of my reason for using On-Prem was to learn a bit more about the pieces in play. That said, our use case looks like a perfect fit for Cloud, so I will certainly investigate further.
In either case I would want to back up prod and restore it to staging with a cron job. When I mentioned backup and restore before, I did actually mean Datomic's backup and restore rather than the DynamoDB backups. Isn't Datomic's backup and restore on a per-table basis?
Actually, I don't know what I was reading before, because the backup and restore doc clearly says "Backup URIs are per database"
So at the very least I could run staging and dev together and still back up prod to staging
@marshall So I tried out the backup and restore within a single table, just to get started, and it seems that this is not allowed: :restore/collision The database already exists under the name 'production'
Is there a way around this? I would like to avoid beefing up my deployment for a little while
OK, cool. I split the prod infrastructure out. Would have needed to do it eventually anyway.
If I want to target Cloud but want to develop locally, i.e. offline, can I use the client lib with a local Datomic instance? If so, what would the connection string look like? Caveat: I haven't tried this yet, so feel free to respond with RT(F)M. I'm just curious, since the Cloud docs seem to assume dev always uses Cloud
With Datomic Cloud, how does one atomically update a value based on another value? For example, transacting [:db/add e a1 v]
based on [e a2 v],
which may change between creation of the tx-data and transactor acknowledgement. I used to do this with a database function. I'm thinking now I will have to use CAS and retry? Not sure if there is a better way.
@donmullen for now can you use doubles instead of floats?
@donmullen actually, hold off on that; I bet you would hit the same issue
@donmullen confirmed I can repro. Please use BigDecimal until we can push a fix
@donmullen do you need floating point semantics, or could you stick with BigDecimal?
@stuarthalloway Likely BigDecimal is better for currency - correct? I then have attributes that represent area ratios and some representing measurements in feet and square feet.
BigDecimal for currency for sure
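(A minimal sketch of the BigDecimal workaround, assuming records shaped like the sample above and a :master/doc-amount attribute migrated to :db.type/bigdec:)
;; Coerce the floating-point amount to BigDecimal before transacting;
;; clojure.core/bigdec converts a double via BigDecimal/valueOf.
(defn with-bigdec-amount [record]
  (update record :master/doc-amount bigdec))
(with-bigdec-amount #:master{:doc-amount 3300000.0})
;;=> #:master{:doc-amount 3300000.0M}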
@stuarthalloway The Cloud client API requires Clojure 1.9 currently, correct? ClojureScript client library to be released at some point?
A minor note of caution for bigdecs: be sure to set the scale to a consistent value (e.g. 2). Java and Clojure have slightly different opinions about equality for bigdecs with the same amount but different scales.
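(To illustrate the gotcha, a quick REPL sketch; worth verifying on your own Clojure version:)
;; Java's BigDecimal.equals is scale-sensitive; compareTo is not.
(.equals 1.0M 1.00M)            ;;=> false (same amount, different scale)
(zero? (.compareTo 1.0M 1.00M)) ;;=> true  (numerically equal)
;; Clojure's = on bigdecs is likewise scale-sensitive:
(= 1.0M 1.00M)                  ;;=> false
;; Normalizing the scale on write (e.g. 2 for currency) sidesteps this:
(.setScale 3300000.0M 2)        ;;=> 3300000.00M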
ok - thanks @donaldball
@donmullen Cloud API should work with 1.8
@donmullen how would you use a ClojureScript library? from the browser or node or ?
@stuarthalloway Was thinking from the browser - going to put together a simple web portal that returns various filters/queries of the data. I need a backend anyway to update data and do various analytics - but that could be a microservice that only runs periodically. For now I will have a full backend to handle sending results to the web portal.
@donmullen so how would you secure that?
A read-only db is straightforward, but nothing finer-grained yet
@donmullen thanks, hammocking
@poernahi Cloud includes cas as a built in txn function https://docs.datomic.com/cloud/transactions/transaction-functions.html#sec-2
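(A sketch of the cas-and-retry pattern for the earlier question about updating a1 based on a2, using the client API; conn, eid, and the attribute names are hypothetical:)
(require '[datomic.client.api :as d])
;; Derive a new value for a1 from the current value of a2, guarding with
;; :db/cas so the tx aborts if a2 changed since we read it; retry on failure.
(defn update-a1-from-a2 [conn eid a1 a2 f]
  (loop [attempt 1]
    (let [db (d/db conn)
          v2 (get (d/pull db [a2] eid) a2)
          tx [[:db/cas eid a2 v2 v2]   ; no-op cas: assert a2 is unchanged
              [:db/add eid a1 (f v2)]]]
      (or (try
            (d/transact conn {:tx-data tx})
            (catch Exception e
              (when (>= attempt 5) (throw e))
              nil))
          (recur (inc attempt))))))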
@steveb8n you can do that in theory with peer server locally. However, there are some differences between cloud and on-prem you should be aware of: https://docs.datomic.com/on-prem/moving-to-cloud.html
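(For the local-dev question above, a sketch of pointing the client at a locally running peer server; the endpoint, access key, secret, and db name are placeholders:)
(require '[datomic.client.api :as d])
;; Assumes a peer server has been started against a local dev or in-memory
;; database with a matching access-key/secret pair.
(def client
  (d/client {:server-type :peer-server
             :endpoint "localhost:8998"
             :access-key "myaccesskey"
             :secret "mysecret"
             :validate-hostnames false}))
(def conn (d/connect client {:db-name "hello"}))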
@jaret I did see the :forbidden issue again this afternoon. Restarted the proxy and REPL and it went away. Will try to narrow down some way to reproduce if I can.
Bad link in the docs: https://docs.datomic.com/javadoc/datomic/Entity.html is broken in https://docs.datomic.com/on-prem/entities.html
@bbloom Thanks for reporting that. I’ll take a look. EDIT I’ve fixed the links and I’ll audit the rest of our api links.
@donmullen if you get it again, can you restart the REPL, test, and then restart the proxy? I'd like to isolate which step resolves the issue, or whether it requires both.
Something like AWS Cognito might fit the bill.