#datomic
2021-07-13
kenny18:07:45

Awesome!! Still reading through everything. I have one suggestion on https://docs.datomic.com/cloud/whatis/configurations-and-pricing.html: it uses reserved instance pricing, but AWS has largely deprecated reserved instances in favor of savings plans. I'd suggest quoting savings plan rates instead.

👍 2
danieroux18:07:41

> API Gateway automation for ions and clients
Yes! I'll be deleting hand-rolled Terraform config with glee soon.
> Run and scale analytics anywhere
Yes please!

😁 6
kenny20:07:00

Was m5.large removed from the instance types?

Joe Lane20:07:01

@U083D6HK9 Yes. Yes it was 🙂. You should look at the t3.xlarge or t3.2xlarge instances. Note that the t3.2xlarge instances have more vCPUs than i3.xlarge.

kenny20:07:46

How come? The m5 family has very different characteristics from the t3 family.

em21:07:30

Absolutely fantastic upgrades; it makes Datomic 10x easier to recommend, especially for people whose load falls into the previously large gap between Solo and Production.

🤘 2
kenny23:07:37

I'm really liking how the "solo" topology constraints do not exist anymore! It makes testing production like things so much easier.

Drew Verlee00:07:52

@U083D6HK9 I'll read the post.

Drew Verlee00:07:50

> Datomic no longer has a Solo compute stack. If you were using Solo you can upgrade to Production at no additional cost by performing the following steps:

em07:07:09

@U0DJ4T5U1 Mostly that previously the minimum production cost ran around $400 a month, and that the solo topology didn't have a load balancer or nice things like HTTP Direct that came with it. Now we have the best of both worlds: HA setups etc. at much more reasonable prices, in the $50-$100 a month range, for smaller businesses that didn't need 2 i3.large instances.

Drew Verlee13:07:09

Ah ok. Hopefully a bit less than $50 if it's the same price as Solo, which is around $35.

zalky18:07:20

Hey all, running into an issue where two datoms whose components (e a v t op) are all the same, and should be redundant, are said to be in conflict. The issue seems to be that the value (v) is a serialized byte array. I'm using https://github.com/ptaoussanis/nippy to serialize to a byte array. Are my expectations off that Datomic can tell that two serialized values are the same, or is there some other underlying issue here?

zalky18:07:42

I think I found a relevant section of the Datomic docs: https://docs.datomic.com/on-prem/schema/schema.html#bytes-limitations It says that attribute values of type bytes do not have value semantics, an implication of which is that you also can't tell two equivalent datoms apart.
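For anyone hitting the same thing, here's a minimal sketch of the underlying behaviour (assuming taoensso.nippy is on the classpath): Java byte arrays only have identity equality, so two freezes of the same value are not `=` even though their contents match.
```clojure
;; Byte arrays compare by identity, not by content, which is why Datomic
;; can't treat two separately-serialized-but-identical values as redundant.
(require '[taoensso.nippy :as nippy])

(def v1 (nippy/freeze {:a 1}))
(def v2 (nippy/freeze {:a 1}))

(= v1 v2)                        ;=> false (different array objects)
(java.util.Arrays/equals v1 v2)  ;=> true  (same contents)
```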

ghadi20:07:45

if you're using bytes to store large blobs, this is generally a bad idea

ghadi20:07:02

it all depends on what your app-level semantics are

zalky03:07:43

Thanks @U050ECB92 for the response. The use case is for small blobs, in a very limited context. Normally we wouldn't require value semantics, but we ran into this edge case with redundant data. We were able to work around the problem in the application layer with some additional constraints.
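For anyone curious, one way to express that kind of constraint (a sketch only; the attribute and helper names are made up): compare the incoming bytes against what's already stored and skip the redundant assertion.
```clojure
(require '[datomic.client.api :as d])

;; Only transact the blob if an equal byte payload isn't already asserted
;; for the entity. :my/blob and the fn name are illustrative.
(defn transact-blob-if-new [conn entity-id payload-bytes]
  (let [db       (d/db conn)
        existing (ffirst (d/q {:query '[:find ?v
                                        :in $ ?e
                                        :where [?e :my/blob ?v]]
                               :args  [db entity-id]}))]
    (if (and existing (java.util.Arrays/equals ^bytes existing ^bytes payload-bytes))
      :already-present
      (d/transact conn {:tx-data [{:db/id entity-id :my/blob payload-bytes}]}))))
```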

kenny20:07:00

In 884-9095 the socks proxy was replaced with API Gateway:
> Replaced: The socks proxy is no longer available; clients can connect directly to the client API Gateway.
It's probably worth noting in the release notes that if you run queries that take longer than 30s, these will now time out. API Gateway has a 30-second maximum integration timeout (https://docs.aws.amazon.com/apigateway/latest/developerguide/limits.html#http-api-quotas), which cannot be increased.
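One partial mitigation, at least for failing fast on the client side (a sketch; conn and the query are illustrative): the client API's map form of q accepts a :timeout in milliseconds, so a query can be cut off before it hits the gateway's 30s ceiling.
```clojure
(require '[datomic.client.api :as d])

;; Assumes conn is an existing client connection.
(def db (d/db conn))

;; Abort the query at 25s rather than letting the gateway kill the
;; connection at 30s. This doesn't make the query faster, of course.
(d/q {:query   '[:find (count ?e)
                 :where [?e :inv/sku]]
      :args    [db]
      :timeout 25000})
```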

stuarthalloway13:07:42

Out of curiosity, do you regularly run such long queries in development?

kenny15:07:48

We use dev-local in development, so slightly different situation. We do have 2 large queries that can take 30s+ under a high load situation.
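(For context, the dev-local setup is roughly this; system and db names are placeholders. There's no API Gateway in that path, so the 30s ceiling only bites against the real Cloud system.)
```clojure
(require '[datomic.client.api :as d])

;; dev-local client: runs in-process, no network hop, no gateway timeout.
(def client (d/client {:server-type :dev-local
                       :system      "dev"}))
(def conn   (d/connect client {:db-name "my-db"}))
```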

kenny20:07:57

From the https://blog.datomic.com/2021/07/Datomic-Cloud-884-9095-New-tiers-and-internet-access.html#_lower_pricing_at_all_scales, there is this sentence: > All instances sizes now cost less to run... If you are running production, your cost http://docs.datomic.com/cloud/changes.html#884-prod... I followed that link and it just takes me to "884-9095 for Production Users". It's not clear how all instance sizes cost less now. How did the cost decrease for an instance type that was previously available (e.g., t3.large for query group)? I have seen https://docs.datomic.com/cloud/operation/growing-your-system.html#hourly-price, but that just seems to list the regular On-Demand cost for the given instance types, which will not have changed from the last release.

kenny20:07:10

Perhaps the Datomic license cost decreased? If so, where can I find a table for that? IIRC, previously it was the same price per hour as the instance type you chose.

kenny20:07:51

From the https://docs.datomic.com/cloud/changes.html#884-9095, what exactly causes the "up to one minute of downtime"? > This upgrade will cause up to one minute of downtime for each compute group. Make sure to perform this upgrade at a time that minimizes the impact of these disruptions.

stuarthalloway13:07:10

Switching from an NLB to an ALB.

kenny15:07:53

Curious why it wouldn't be a no-downtime switchover?

kenny20:07:55

> If you manually created an API Gateway for your ion application on a previous release of Datomic, that gateway will no longer work. We are definitely in this scenario. It will no longer work because the update requires a new LB?

kenny21:07:54

> Enhancement: The storage template now sets DDB provisioning to fit within the AWS free tier if your usage is low enough. > https://docs.datomic.com/cloud/changes.html#884-storage We have manually modified our DDB capacity mode to On-demand since it is a much better fit for our workloads. Will this storage update impact that setting? Will we need to go back and manually set it again?
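For reference, this is how I'd check the billing mode before and after (a sketch using Cognitect's aws-api; the datomic-<system> table name is an assumption, substitute your actual storage table):
```clojure
(require '[cognitect.aws.client.api :as aws])

(def ddb (aws/client {:api :dynamodb}))

;; Returns "PAY_PER_REQUEST" while the table is in on-demand mode.
(-> (aws/invoke ddb {:op      :DescribeTable
                     :request {:TableName "datomic-my-system"}})
    (get-in [:Table :BillingModeSummary :BillingMode]))
```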

stuarthalloway13:07:38

If you are running the production template, you do not need to do a storage upgrade at all.

kenny15:07:42

Ok. If I did do the storage update, would it impact that setting?

kenny19:07:06

I ask because at some point in the future, we’ll need to do a storage update. At that point, it’s unlikely we’ll remember that that update could impact the capacity mode.

kenny22:07:22

Since the client API Gateway is exposed to the internet, how does access control work for it?

kenny15:07:58

Ah, it's using Datomic's auth mechanism. I do wonder if this opens the door to DoS attacks. If someone had direct access to your client API endpoint, could they bring down your system by rapidly sending requests?
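For reference, my (possibly wrong) understanding of the auth side, as a sketch with placeholder names: the client signs each request with the caller's IAM credentials, so an unauthenticated caller is rejected; whether a flood of rejected (or validly signed) requests could still degrade the system is the part I'm unsure about.
```clojure
(require '[datomic.client.api :as d])

;; Requests to the exposed endpoint are signed with IAM credentials
;; (here via a named profile; omit :creds-profile to use the default
;; AWS credentials chain). Region, system, and endpoint are placeholders.
(def client
  (d/client {:server-type   :cloud
             :region        "us-east-1"
             :system        "my-system"
             :creds-profile "my-profile"
             :endpoint      "https://<api-gateway-id>.execute-api.us-east-1.amazonaws.com"}))
```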

kenny22:07:51

I see the EndpointAddress format has changed in a backwards-incompatible way (.<compute group>.<region>. to .<compute group>.<region>.). This will require all client applications to also update their endpoints. I don't see this noted in the changelog, but it seems like a critical piece to know. EDIT: Actually, this doesn't seem entirely true. I see a Route 53 entry for an upgraded "Solo" system where the old entry. record points directly to the IP address of the single node in the system. Not sure what that means for production topologies.

kenny15:07:29

fyi, in a prod situation, it seems like it manually added N IP addresses to the entry. record. Unclear if those entries are continuously updated.

kenny22:07:30

Is the recommended :endpoint for a client application inside the Datomic VPC the value for the EndpointAddress CF stack output?
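In case it helps to be concrete, here's roughly what I mean (a sketch; stack and system names are placeholders, and whether this is the recommended value in-VPC is exactly the question): read the EndpointAddress output off the compute group's CloudFormation stack and use it as the client :endpoint.
```clojure
(require '[cognitect.aws.client.api :as aws]
         '[datomic.client.api :as d])

(def cfn (aws/client {:api :cloudformation}))

;; Pull the EndpointAddress output from the compute group stack.
(def endpoint
  (->> (aws/invoke cfn {:op      :DescribeStacks
                        :request {:StackName "my-compute-group"}})
       :Stacks first :Outputs
       (some #(when (= "EndpointAddress" (:OutputKey %)) (:OutputValue %)))))

(def client
  (d/client {:server-type :cloud
             :region      "us-east-1"
             :system      "my-system"
             :endpoint    endpoint}))
```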