
to be clear, there's no update of the storage stack required, right?


In CFT 441 you switched to YAML; now in 441-8505 you are using JSON again. It's not a big deal this time, but in the storage stack we have a couple of modifications that get around the issue with running in an AWS account that has EC2-Classic support, so we have to cherry-pick those changes by hand into any storage CF template upgrades. I did this just recently from 297 to 409 and it wasn't too bad, but version 441 was a bit harder due to the change from JSON to YAML. Again, it's a non-issue for 441-8505 since it's only a compute stack change, but going forward can you please distribute one format, or both YAML and JSON?


the YAML change was a marketplace artifact; we did not choose that and we intend to use json


I posted this in the main thread as well. I'm applying this to a Solo setup. The compute upgrade worked fine, but when I try to apply the update for storage, CF gives me back: "Error creating change set: The submitted information didn't contain changes. Submit different information to create a change set."


does upgrading really require deleting the stack and creating it again?


Hi @U1WMPA45U, it depends on what you mean by "the stack".


I launched the CF template and got three CF stacks in total — so update the stack named “compute”, yes?


Unfortunately, no. AWS's marketplace rules are in direct conflict with AWS's CloudFormation best practice guidelines. The "deleting" path takes you from Marketplace-land to CF-best-practice-land.


After you do this once, you will be in CF-best-practice-land and never have to do it again.


got it, thanks!


Just a quick note on the update: I had to re-adjust the capacity settings of the compute stack autoscaling group. The update seems to reset this to “desired 2, min 2, and max 3”.


same issue here


those are the default values; had you changed them to something else?


you should not be using autoscaling on the primary compute group


I have two use cases for changing autoscaling of the primary compute group: first, to save money while experimenting with the prod topology (I set them to 0 during times I am not working on it), and second, to try to increase transaction performance for large imports (that might be a brute-force approach, see also


The documentation is a bit confusing. The two sentences “If you are writing to a large number of different databases, you can increase the size of the primary compute group by explicitly expanding its Auto Scaling Group.” and “You should not enable AWS Auto Scaling on the primary compute group.” seem to contradict each other. Am I missing something?


Those should not be autoscaling events; scaling the group will not affect throughput for a single DB


you can adjust the size of the group explicitly


you shouldn’t use AutoScaling


i.e. Autoscaling events == things that AWS does for you triggered based on some metric/event


Changing the “min” “max” and “desired” explicitly is OK, but should be a fairly infrequent human-required action
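As a concrete sketch of that kind of one-off, explicit resize: the group name below is a placeholder (look up the actual Auto Scaling Group created by your compute stack in the EC2 console), and this assumes a configured AWS CLI.

```shell
# Hypothetical Auto Scaling Group name -- substitute the one your compute stack created.
ASG_NAME="datomic-mysystem-compute"

# One-off, human-initiated resize: set min/max/desired explicitly.
# This is NOT AutoScaling -- no metric-driven scaling policies are attached.
aws autoscaling update-auto-scaling-group \
  --auto-scaling-group-name "$ASG_NAME" \
  --min-size 2 \
  --max-size 3 \
  --desired-capacity 2
```

The same call with all three values set to 1 gives a single-node, no-HA setup, and setting them back restores the defaults after an upgrade resets them.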


You are right that it does not make sense to change the “desired” setting of the autoscaling group to adapt to spikes (that’s what the “auto” in autoscaling is for). But increasing the “max” and “desired” seems to be the best option currently for improving transaction (and indexing) performance in the case of a known, large import.


if the import runs against a single database, changing the size of the compute group will not affect throughput


you can change to using an i3.xlarge (instead of the default i3.large)


in your compute group


that will improve import perf


Ok. I will give that a try.


Does the number of nodes influence indexing performance (on a single database)?


@jocrau @marshall as long as you have at least two, no


our use case is to have a prod setup for our staging environment, but without HA, so we set the three values (min, max, desired) to 1.


A deployment to the compute group seems to be performed sequentially (see attached graph which shows the incoming network traffic to 5 nodes; mostly the JAR files I assume). Can this be done in parallel to speed up the deployment?


@jocrau No, that is specifically the way that rolling deployments work to maintain uptime and enable rollback


@marshall Makes sense. Thanks.


Does anyone happen to know where I could find some good marketing materials on the Datomic value prop for non-technical executives?


Note that the value prop of Datomic Cloud is a bit different


Ha! How serendipitous!


that's fine, due to the nature of the data/work it would have to be self-hosted anyway.


I don’t need to recreate my VPC endpoint again when I perform my first upgrade, do I?


I think I have an answer to that: I can’t delete the datomic cloud root stack since the VPC endpoint stack depends on its resources


the “compute” stack just failed to delete for me, with reason The following resource(s) failed to delete: [HostedZone].


…which was because I had a record set for my VPC endpoint…


I'm setting up a new query group. The stack created without error, but the first deploy to the query group is failing ValidateService with ScriptFailed. The event details show [stdout]Received 503 a number of times, and then finally [stdout]WARN: validation did not succeed after two minutes. Ideas on where to start debugging this?


@grzm Check the CloudWatch logs - there was probably an exception.
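One way to do that from the command line (a sketch: the log group name is a placeholder — find the actual one in the CloudWatch console; `aws logs tail` requires AWS CLI v2):

```shell
# Hypothetical log group name -- substitute the one for your system.
LOG_GROUP="datomic-mysystem"

# Pull the last 30 minutes of events and look for stack traces
# around the failed deploy.
aws logs tail "$LOG_GROUP" --since 30m --format short | grep -i -A 5 "exception"
```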


@kenny cheers. thanks for the kick in the right direction.