
is it possible to 'change' the name of a datomic cloud system? i.e. create a new stack, but use the storage of another one that has a different name? (we made a mistake on naming conventions)


Hello all. I am beginning the arduous process of attempting to do a POC for datomic cloud at my very large company. Our cloud security team is very concerned with potential problems and I'm hoping you guys can help answer their questions.


1) How do they handle security/patch updates to the servers they allocate? We’ve seen AWS Marketplace offerings ignore critical patches on EC2 instances, for example.
2) We do not wish for Datomic to create its own VPC. Could they provide a CloudFormation template without a VPC, or give us the template so we can try to modify it ourselves?


I've seen in my own POC work that the answer to 2) is yes, we're free to modify the templates and they're available on the datomic cloud releases page. Are there any concerns with us modifying the template to run in an existing VPC?


@notanon You are free to modify the CFT as necessary. Be aware that future updates may not work seamlessly if you’ve modified the CFT to use your own VPC, however, and that is technically an ‘unsupported’ deployment. As far as updates go - we intend to keep our AMIs as up to date as possible.
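If you do go down the modify-the-CFT road, a small script can make the edit repeatable across template updates. This is only a sketch of the general idea: the resource and parameter names below are illustrative, not the actual ones Datomic's template uses.

```python
import json

def use_existing_vpc(template: dict, vpc_param: str = "ExistingVpcId") -> dict:
    """Rewrite a CloudFormation template (parsed into a dict) so it
    references an existing VPC via a parameter instead of creating one.
    Names here are illustrative, not Datomic's actual resource names."""
    out = json.loads(json.dumps(template))  # deep copy; leave input intact
    # Find and drop any VPC resources the template would otherwise create.
    vpc_ids = [name for name, res in out.get("Resources", {}).items()
               if res.get("Type") == "AWS::EC2::VPC"]
    for name in vpc_ids:
        del out["Resources"][name]
    # Add a parameter for the pre-existing VPC.
    out.setdefault("Parameters", {})[vpc_param] = {
        "Type": "AWS::EC2::VPC::Id",
        "Description": "ID of the existing VPC to deploy into",
    }
    # Re-point every {"Ref": <removed vpc>} at the new parameter.
    def repoint(node):
        if isinstance(node, dict):
            if node.get("Ref") in vpc_ids:
                node["Ref"] = vpc_param
            for v in node.values():
                repoint(v)
        elif isinstance(node, list):
            for v in node:
                repoint(v)
    repoint(out["Resources"])
    return out
```

Running this over each new template release (then diffing against the previous output) at least makes the 'unsupported' divergence visible and reviewable.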


Of course, you always have the ability to SSH to your system instances if necessary


thanks. Is there anything specifically datomic cloud is doing that requires its own VPC? Assumptions it's making, perhaps?


Is there any guarantee/SLA around keeping the AMIs fully patched?


I'm not very familiar with AWS. By "the ability to SSH into the instances", are you implying we could patch them ourselves if we needed to?


probably not a great option


since they should be considered to be ephemeral


i.e. if one goes down, it’s going to restart from the base AMI


There are a number of reasons/assumptions baked into the create-a-VPC approach


in particular, network configuration, LBs, etc


So the response to our cloud security team for 1) would be: the Datomic team plans to keep the AMIs as up to date as possible, but there's no guarantee/contract ensuring this, and it's impractical to attempt to patch them ourselves. And for 2): we are free to edit the templates as much as we want, but doing so would make our install unsupported (I don't know if enterprise support is included with Datomic Cloud; does this mean we'd be unable to have it in any case?). We would likely need to handle all of the network configuration ourselves (our team expected this when we discussed it).


I'm not sure I understand the comment about load balancers, how does the VPC assumption play into that?


@notanon we initially wanted to do the same thing, and had even started playing around with the CF scripts, etc. Though as @marshall points out, you then run into potential support issues. We have a pretty sophisticated setup with accounts/VPCs for each lifecycle stage, as well as dedicated ones for in/outbound network traffic, a vault VPC for logs, etc. While the VPC as the 'unit of deployment' can be understandably off-putting and/or counterintuitive at first, a Datomic Cloud 'system' comprises a lot of stuff, to the point that you probably don't want it in 'your' VPC. We've basically just set up a mirror Datomic system/VPC for each post-dev lifecycle stage, and N per-developer ones that are peered into our lab/dev VPC


Interesting. I'll definitely pass this along. Though I doubt they'll be very receptive. We have 1000+ servers, hundreds of databases, machine learning stuff, basically the entire AWS offering lol all in one VPC. I doubt they're going to see datomic spinning up 10s of aws offerings as a drop in the VPC bucket.


Is there any kind of documentation out there on best practices for VPCs on AWS? Something to at least support the idea of isolating things like you've outlined and datomic cloud can't seem to live without?


Not that I know of, and yeah, a more 'normal' database - 1-N servers, an ASG and an LB, etc. - or even say Datomic On-Prem, is more easily (normally) plopped into an existing VPC setup. So yeah, doing it this way is certainly atypical, but after mucking around with it a bit, I think it certainly makes more sense, based again on the complexity of the various bits. In practice, it's not really been an issue. We use Terraform for our infrastructure, so we've just wrapped the Datomic CloudFormation stuff into those scripts. So we just apply a 'build a "test2"' env and it spins up the AWS account, the 'main' and the Datomic VPC, sets up the peering, etc.
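The "build a 'test2' env" flow above could be sketched roughly like this - deriving the per-environment stack name and parameters from an environment label, then handing them to CloudFormation (shown here with boto3-shaped kwargs rather than Terraform). All parameter names below are assumptions for illustration, not Datomic's actual template parameters.

```python
def datomic_stack_inputs(env: str, peer_vpc_id: str) -> dict:
    """Build create-stack kwargs for a per-environment Datomic system.
    Parameter keys (SystemName, PeerVpcId) are hypothetical examples."""
    stack_name = f"datomic-{env}"
    return {
        "StackName": stack_name,
        "Parameters": [
            {"ParameterKey": "SystemName", "ParameterValue": stack_name},
            {"ParameterKey": "PeerVpcId", "ParameterValue": peer_vpc_id},
        ],
        "Capabilities": ["CAPABILITY_NAMED_IAM"],
    }

if __name__ == "__main__":
    # With AWS credentials configured, these kwargs could be passed (plus a
    # TemplateURL) to boto3.client("cloudformation").create_stack(...).
    inputs = datomic_stack_inputs("test2", "vpc-0123456789abcdef0")
    print(inputs["StackName"])
```

The point is just that once the stack inputs are a pure function of the environment name, spinning up or tearing down a whole per-developer system becomes a one-liner in whatever orchestration tool you already use.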


hey we've loaded a good chunk of data into a datomic solo system, and since then, we're having issues even connecting to it, we've tried restarting the instance, etc to no avail, and there's nothing useful in the logs. Any ideas?


@U380J7PAQ how much is a good chunk?


Sorry about that 🙂 Got this second hand from one of my devs. Ok, just checked his dashboard looks like around 2M datoms


and there's no data (except CPU) from the last couple days, at all. He did the load, and ran into the issue on friday


Also, that stack is a little older, v407


Hey @U380J7PAQ could we move this conversation to a ticket? I’d also like to see if we could get read-only access to your Cloudwatch logs to look closer at your inability to connect. That’s better done over a ticket.


If you can give me a good e-mail address, I can start a case and copy this conversation into a ticket for us.


The stack is being upgraded to the latest, so


we'll try again after that


Great! I created a case and sent you an e-mail with instructions for a ReadOnly Cloudwatch account in the event you’re able to provide us access and you still can’t connect on the latest stack.


can you define “good chunk”


and what does your dashboard show?