This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2023-07-04
Channels
- # announcements (6)
- # babashka (5)
- # beginners (57)
- # biff (6)
- # business (32)
- # clj-together (1)
- # clojars (10)
- # clojure (56)
- # clojure-europe (76)
- # clojure-nl (4)
- # clojure-norway (40)
- # clojure-serbia (1)
- # clojure-spec (5)
- # clojure-uk (10)
- # clojurescript (3)
- # cursive (12)
- # data-science (1)
- # datascript (4)
- # datomic (35)
- # docs (4)
- # emacs (28)
- # events (5)
- # hyperfiddle (9)
- # matrix (1)
- # off-topic (28)
- # practicalli (4)
- # re-frame (14)
- # shadow-cljs (2)
- # testing (5)
is anyone running Datomic On-Prem in Azure?
Since 2020.
(We run it as an Azure Container Instance)
@UQGBDTAR4 @UGJE0MM0W thank you! what storage are you using?
"PostgreSQL single server" atm. Moving to "flexible server" soon. Still PostgreSQL.
apologies, i don't know Azure at all - is that a managed service, like AWS Aurora?
It is managed, yes.
(I don't know AWS :-))
haha thank you
TL;DR most things have gone well. We were bitten by high network volatility compared to our old on-prem situation, and @UGJE0MM0W had to tune socket timeouts etc. to keep things sane. Ivar can probably share more info later if needed.
(Network volatility is probably pretty cloud-vendor-agnostic.)
thank you gents
how long have you been running? how big is your database in storage?
Since 2020.
Pretty big. Datomic backup ca. 7 GB (the PostgreSQL backup is much larger)
Number of datoms: not sure, need Ivar to give estimate.
sweet, thank you, i really appreciate the feedback, it's super helpful!
our db is close to 100gb lol, since 2013
Wow :-)
Should be fine regardless.
Note that Azure Container Instances have pretty modest limits on cpu/ram (4vcpu/8GiB)
So depending on use-case you might want to look for bigger runtime situations for your apps
hmmm we're gonna need bigger instances than that 😅
I think container instances (ACI) are max 16 GiB.
Azure container apps (ACA) are max 8 GiB.
We also needed to "up" the amount of space available for the PostgreSQL instance.
That in turn triggered a bigger IOPS limit for the instance.
With the previous and lower limit I believe there were problems with apps not being allowed to read at a sufficient rate/speed.
It took some time to figure this out.
All our containers in Azure have &socketTimeout=30
set in the connection string; otherwise you risk waiting for all eternity on some queries,
or ~16 minutes if you are lucky.
For Datomic I'm using &socketTimeout=3
(three seconds).
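For anyone wondering where that parameter goes: a sketch of a SQL-backed transactor properties file, with made-up host/database names — the point is only the `socketTimeout` query parameter appended to the JDBC URL. (The peer connection URI embeds the same JDBC URL, so the parameter can be set there as well.)

```properties
# Hypothetical SQL transactor config; host, db, and credentials are illustrative.
protocol=sql
sql-url=jdbc:postgresql://example.postgres.database.azure.com:5432/datomic?sslmode=require&socketTimeout=30
sql-user=datomic
sql-password=...
sql-driver-class=org.postgresql.Driver
```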
There might be some idle-connection time-limit property that would work better,
but we've also seen dropped connections just after application startup, so even then
it's still a problem.
More reading:
https://github.com/brettwooldridge/HikariCP/wiki/Rapid-Recovery
(I know Datomic uses the Apache Tomcat connection pool; the HikariCP wiki just describes the issue well.)
https://blog.cloudflare.com/when-tcp-sockets-refuse-to-die/
I've been writing an article about this stuff for Datomic in particular forever (it's still not finished)
Oh and BTW our biggest Datomic database is 250M datoms. Peers have 8 GiB memory for this database and the cache hit rate is near perfect (IIRC).
one more thing... We've had a lot of DNS issues with ACI, so I don't think we can recommend that part of Azure. ACA is better.
When I attempt to provision a DynamoDB table for Datomic, I get the following key-related exception despite having valid AWS keys set (verified via AWS cli) and exported:
bin/datomic ensure-transactor config/ddb-transactor.properties config/ddb-transactor.properties
com.amazonaws.services.identitymanagement.model.AmazonIdentityManagementException: The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details. (Service: AmazonIdentityManagement; Status Code: 403; Error Code: SignatureDoesNotMatch; Request ID: 0e693d6f-7ea0-4c46-9fa3-7d7f18acec32; Proxy: null)
Recreated AWS root keys and ran this, but still happens:
export AWS_ACCESS_KEY_ID="AKIA..."
export AWS_SECRET_ACCESS_KEY="..."
ensure-transactor worked for ddb-local, but failing for actual Dynamo due to permissions.
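One machine-specific thing worth ruling out when exported keys mysteriously fail: the SDK's default credential chain also consults `~/.aws/credentials`, so a stale profile entry can muddy the picture. A small stdlib sketch (names and the masking scheme are my own) that lists every place keys could be coming from, without printing the secrets:

```python
import configparser
import os
from pathlib import Path

def credential_sources():
    """List every place AWS credentials might be picked up from,
    showing only the first four characters of each access key id."""
    sources = {}
    env_key = os.environ.get("AWS_ACCESS_KEY_ID")
    if env_key:
        # Environment variables normally take precedence in the default chain.
        sources["env"] = env_key[:4] + "..."
    creds_file = Path.home() / ".aws" / "credentials"
    if creds_file.exists():
        cp = configparser.ConfigParser()
        cp.read(creds_file)
        for profile in cp.sections():
            key = cp.get(profile, "aws_access_key_id", fallback=None)
            if key:
                sources[f"profile:{profile}"] = key[:4] + "..."
    return sources

if __name__ == "__main__":
    for source, key in credential_sources().items():
        print(f"{source}: {key}")
```

If the env entry and a profile entry show different key prefixes, the process may not be using the keys you think it is.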
Anyone encountered this? Apparently I have to run ensure-transactor with an IAM role? https://clojurians.slack.com/archives/C03RZMDSH/p1633518558245400?thread_ts=1633453233.241800&cid=C03RZMDSH
I thought the whole point of ensure-transactor was that it would create the necessary AWS resources for me?
I created an IAM user and gave it full permissions, created new access keys, but am still getting the same SignatureDoesNotMatch
error :face_holding_back_tears:.
Some reports online say this can be caused by special characters or double slashes in the secret access key, but this error also occurs for a plain-looking secret.
Could be that this happens when your machine's clock is too far off from AWS's clocks. Not sure how the error looks in that case.
I ended up reinstalling Datomic on a fresh machine, set the same AWS keys and it worked 🤷 I still don't know why it's not working on my primary machine.
wish the error messages were better