Datomic Cloud 884-9095 now available! https://blog.datomic.com/2021/07/Datomic-Cloud-884-9095-New-tiers-and-internet-access.html
Awesome!! Still reading through everything. I have one suggestion on the pricing page (https://docs.datomic.com/cloud/whatis/configurations-and-pricing.html): it uses reserved instance rates. AWS has pretty much deprecated reserved instances in favor of savings plans, so I'd suggest using savings plan rates instead.
> API Gateway automation for ions and clients
Yes! I'll be deleting hand-rolled Terraform config with glee soon.
> Run and scale analytics anywhere
Yes please!
@U083D6HK9 Yes. Yes it was 🙂. You should look to t3.xlarge or t3.2xlarge instances. Note that t3.2xlarge instances have more vCPUs than i3.xlarge.
Absolutely fantastic upgrades; this makes it 10x easier to recommend Datomic, especially for workloads that fell into the previously large gap between Solo and Production.
I'm really liking how the "solo" topology constraints do not exist anymore! It makes testing production like things so much easier.
> Datomic no longer has a Solo compute stack. If you were using Solo you can upgrade to Production at no additional cost by performing the following steps:
@U0DJ4T5U1 Mostly that previously the minimum Production cost ran around $400 a month, and that the Solo topology didn't have a load balancer or nice things like HTTP Direct that came with it. Now we have the best of both worlds: HA setups etc. at much more reasonable prices, in the $50-$100 a month range for smaller businesses that didn't need 2 i3.large instances.
ah ok. Hopefully a bit less than $50 if it's the same price as Solo, which is around $35.
Hey all, running into an issue where two datoms whose components (e a v t op) are all the same, and should be redundant, are said to be in conflict. The issue seems to be that the value (v) is a serialized byte array. I'm using https://github.com/ptaoussanis/nippy to serialize to a byte array. Are my expectations off that Datomic can tell that two serialized values are the same, or is there some other underlying issue here?
I think I found a relevant section of the Datomic docs: https://docs.datomic.com/on-prem/schema/schema.html#bytes-limitations It says that attribute values of type bytes do not have value semantics, an implication of which is that Datomic cannot tell two otherwise-equivalent byte-array datoms apart.
Thanks @U050ECB92 for the response. The use case is for small blobs, in a very limited context. Normally we would not require value semantics, but we ran into this edge case with redundant data. We were able to work around the problem in the application layer with some additional constraints.
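The equality gap described above can be reproduced outside Datomic. A minimal sketch, assuming nippy is on the classpath (the frozen map here is just an illustrative value):

```clojure
(require '[taoensso.nippy :as nippy])

;; Freeze the same value twice; nippy returns a Java byte array each time.
(def a (nippy/freeze {:x 1}))
(def b (nippy/freeze {:x 1}))

;; Clojure's `=` compares Java arrays by identity, not contents:
(= a b)                       ;; => false
;; The contents, however, are typically identical:
(java.util.Arrays/equals a b) ;; => true
```

This mirrors the docs' point: byte values lack value semantics, so content-level comparison (e.g. `Arrays/equals`, or hashing the bytes and storing the hash as a separate attribute) has to happen in the application layer.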
In 884-9095 the socks proxy was replaced with API Gateway: > Replaced: The socks proxy is no longer available; clients can connect directly to the client API Gateway. It's probably worth noting in the release notes that queries which take longer than 30s will now time out. API Gateway has a hard 30-second timeout (https://docs.aws.amazon.com/apigateway/latest/developerguide/limits.html#http-api-quotas), which cannot be increased.
We use dev-local in development, so slightly different situation. We do have 2 large queries that can take 30s+ under a high load situation.
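One way to keep queries from hitting the gateway's 30 s limit is the client API's `:timeout` option (milliseconds) on the query arg-map. A sketch; `conn` is a hypothetical existing connection and the query is illustrative:

```clojure
(require '[datomic.client.api :as d])

;; Cap query execution below API Gateway's 30-second hard limit.
;; `conn` is assumed to be an existing Datomic client connection.
(d/q {:query   '[:find ?e :where [?e :db/doc "hello"]]
      :args    [(d/db conn)]
      :timeout 25000}) ;; milliseconds
```

Failing fast client-side at least makes the timeout explicit, though the two 30s+ queries mentioned above would still need to be split or moved off the gateway path.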
From the https://blog.datomic.com/2021/07/Datomic-Cloud-884-9095-New-tiers-and-internet-access.html#_lower_pricing_at_all_scales, there is this sentence: > All instance sizes now cost less to run... If you are running production, your cost http://docs.datomic.com/cloud/changes.html#884-prod... I followed that link and it just takes me to "884-9095 for Production Users". It's not clear how all instance sizes cost less now. How did the cost decrease for an instance type that was previously available (e.g., t3.large for a query group)? I have seen https://docs.datomic.com/cloud/operation/growing-your-system.html#hourly-price, but that just seems to list the regular On-Demand cost for the given instance types, which will not have changed from the last release.
Perhaps the Datomic license cost decreased? If so, where can I find a table for that? IIRC, previously it was the same price per hour as the instance type you chose.
From the https://docs.datomic.com/cloud/changes.html#884-9095, what exactly causes the "up to one minute of downtime"? > This upgrade will cause up to one minute of downtime for each compute group. Make sure to perform this upgrade at a time that minimizes the impact of these disruptions.
> If you manually created an API Gateway for your ion application on a previous release of Datomic, that gateway will no longer work. We are definitely in this scenario. It will no longer work because the update requires a new LB?
> Enhancement: The storage template now sets DDB provisioning to fit within the AWS free tier if your usage is low enough. > https://docs.datomic.com/cloud/changes.html#884-storage We have manually modified our DDB capacity mode to On-demand since it is a much better fit for our workloads. Will this storage update impact that setting? Will we need to go back and manually set it again?
If you are running the production template, you do not need to do a storage upgrade at all.
I ask because at some point in the future, we’ll need to do a storage update. At that point, it’s unlikely we’ll remember that that update could impact the capacity mode.
Since the client API Gateway is exposed to the internet, how does access control work for it?
Ah, it's using Datomic's auth mechanism. I do wonder if this opens the door to DOS attacks. If someone had direct access to your client API endpoint, could they bring down your system by rapidly sending requests?
I see the EndpointAddress format has changed in a backwards-incompatible way. This will require all client applications to also update their endpoints. I don't see this noted in the changelog, but it seems like a critical piece to know.
Actually, this doesn't seem entirely true. I see a Route 53 entry for an upgraded "Solo" system where the old entry. record points directly to the IP address of the single node in the system. Not sure what that means for production topologies.
fyi, in a prod situation, it seems like it manually added N IP addresses to the entry. record. Unclear if those entries are continuously updated.
Is the recommended :endpoint for a client application inside the Datomic VPC the value for the EndpointAddress CF stack output?
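For reference, a client config sketch that plugs an EndpointAddress-style value into `:endpoint`; the region, system name, and endpoint URL below are all placeholders, not values from this thread:

```clojure
(require '[datomic.client.api :as d])

;; All values below are placeholders — substitute your own region,
;; system name, and the EndpointAddress output of your CF stack.
(def client
  (d/client {:server-type :cloud
             :region      "us-east-1"
             :system      "my-system"
             :endpoint    "https://abc123.execute-api.us-east-1.amazonaws.com"}))
```

Whether the raw CF output is the recommended `:endpoint` inside the VPC is exactly the open question here; the sketch only shows where such a value would go.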