In CFT 441 you switched to YAML; now in 441-8505 you are using JSON again. It's not a big deal this time, but in the storage stack we have a couple of modifications that work around an issue with running in an AWS account that has EC2-Classic support. So we have to cherry-pick those changes by hand into any storage CF template upgrade. I did this recently going from 297 to 409 and it wasn't too bad, but version 441 was harder due to the change from JSON to YAML. Again, it's a non-issue for 441-8505 since it's only a compute stack change, but going forward could you please distribute a single format, or both YAML and JSON?
the YAML change was a marketplace artifact; we did not choose it, and we intend to use JSON
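For anyone who has to cherry-pick hand-maintained patches across a JSON-to-YAML template change, AWS's cfn-flip tool (from the aws-cfn-template-flip project) can convert a template either direction so you can diff everything in one format. A sketch; the file names here are hypothetical:

```shell
# cfn-flip auto-detects the input format and converts to the other.
pip install cfn-flip
cfn-flip storage-template-441.json storage-template-441.yaml

# Diff against the locally patched copy before cherry-picking changes.
diff storage-template-441.yaml storage-template-patched.yaml
```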
I posted this in the main thread as well. I'm applying this to a solo setup. The compute upgrade worked fine, but when I try to apply the update for storage, CloudFormation returns: “Error creating change set: The submitted information didn't contain changes. Submit different information to create a change set.”
If you are running in the recommended two stack shape, then no: https://docs.datomic.com/cloud/operation/upgrading.html#compute-only-upgrade
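In other words, a compute-only upgrade only touches the compute stack, so asking CloudFormation for a storage change set against an unchanged template is expected to report "no changes". A hedged sketch with the AWS CLI; the stack name and template URL below are hypothetical placeholders, not the real release artifacts:

```shell
# Update only the compute stack; the storage stack is left untouched,
# which is why a change set for it reports "didn't contain changes".
aws cloudformation update-stack \
  --stack-name my-datomic-compute \
  --capabilities CAPABILITY_NAMED_IAM \
  --template-url https://s3.amazonaws.com/my-bucket/compute-441-8505.json
```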
I launched the CF template and got three CF stacks in total — so update the stack named “compute”, yes?
Unfortunately, no. AWS's marketplace rules are in direct conflict with AWS's CloudFormation best practice guidelines. The "deleting" path takes you from Marketplace-land to CF-best-practice-land.
After you do this once, you will be in CF-best-practice-land and never have to do it again.
Just a quick note on the update: I had to re-adjust the capacity settings of the compute stack's autoscaling group. The update seems to reset these to “desired 2, min 2, and max 3”.
I have two use cases for changing the autoscaling of the primary compute group: first, to save money while experimenting with the prod topology (I set the values to 0 during times I am not working on it), and second, to try to increase transaction performance for large imports (which might be a brute-force approach; see also https://forum.datomic.com/t/tuning-transactor-memory-in-datomic-cloud/643).
The documentation is a bit confusing. The two sentences “If you are writing to a large number of different databases, you can increase the size of the primary compute group by explicitly expanding its Auto Scaling Group.” and “You should not enable AWS Auto Scaling on the primary compute group.” seem to contradict each other. Am I missing something?
Those should not be autoscaling events; scaling the group will not affect throughput for a single DB
i.e. autoscaling events == things that AWS does for you, triggered by some metric/event
Changing the “min”, “max”, and “desired” explicitly is OK, but it should be a fairly infrequent, human-initiated action
You are right that it does not make sense to change the “desired” setting of the autoscaling group to adapt to spikes (that’s what the “auto” in autoscaling is for). But increasing “max” and “desired” seems to be the best option currently for increasing transaction (and indexing) performance ahead of a known, large import.
if the import runs against a single database, changing the size of the compute group will not affect throughput
our use case is to have a prod setup for our staging environment, but without HA, so we set the three values (min, max, desired) to 1.
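Setting the three values explicitly can be done from the console or the CLI. A hedged sketch; the group name below is hypothetical (find the real one with describe-auto-scaling-groups):

```shell
# Turn a staging compute group into a single-node setup (no HA):
# min, max, and desired all set to 1.
aws autoscaling update-auto-scaling-group \
  --auto-scaling-group-name my-datomic-compute-asg \
  --min-size 1 --max-size 1 --desired-capacity 1
```

Note that a stack update can reset these values, as mentioned above, so they may need to be re-applied after an upgrade.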
A deployment to the compute group seems to be performed sequentially (see the attached graph, which shows the incoming network traffic to 5 nodes; mostly the JAR files, I assume). Can this be done in parallel to speed up the deployment?
@jocrau No, that is specifically the way that rolling deployments work to maintain uptime and enable rollback
Does anyone happen to know where I could find some good marketing materials on the Datomic value prop for non-technical executives?
@goomba I tried to answer that very question here: https://email@example.com/what-datomic-brings-to-businesses-e2238a568e1c
that's fine, due to the nature of the data/work it would have to be self-hosted anyway.
I think I have an answer to that: I can’t delete the Datomic Cloud root stack since the VPC endpoint stack depends on its resources
the “compute” stack just failed to delete for me, with reason:
The following resource(s) failed to delete: [HostedZone].
I'm setting up a new query group. The stack created without error, but the first deploy to the query group is failing with ScriptFailed. The event details show [stdout]Received 503 a number of times, and then finally [stdout]WARN: validation did not succeed after two minutes. Ideas on where to start debugging this?
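One place to start, assuming the ScriptFailed status is surfaced by CodeDeploy (which runs the deploy's lifecycle scripts): pull the failed deployment's details for the per-event diagnostics. A hedged sketch; the application name and deployment ID below are placeholders:

```shell
# List recent deployments for the query group's CodeDeploy application,
# then fetch the failed one's error information.
aws deploy list-deployments --application-name my-query-group-app
aws deploy get-deployment --deployment-id d-XXXXXXXXX
```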