This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2020-02-25
my Ions deployments to a specific query group started failing today due to an error reported by the BeforeInstall script: "There is insufficient memory for the Java Runtime Environment to continue". Has anyone else experienced this?
the project deployed to the query group has no problem running queries. it's the deployment itself that fails.
https://docs.datomic.com/cloud/ions/ions-reference.html#jvm-settings the t medium instance only has a 2582m heap
i did start playing with Datomic Analytics yesterday, although i'm using my main compute group for that. still, could that affect an unrelated query group?
the analytics server itself runs on the gateway instance and sends queries to whatever group you've configured (or defaults to the primary compute group)
right, that's what i thought. okay, we can look into increasing our instance size. thanks @marshall.
just curious though - wouldn't the heap have more of an effect on a running project? this happens when i initiate a deployment, which fails almost immediately.
LifecycleEvent - BeforeInstall
Script - scripts/install-clojure
[stdout]Clojure 1.10.0.414 already installed
Script - sync-libs
[stderr]OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x00000000ee000000, 32505856, 0) failed; error='Cannot allocate memory' (errno=12)
[stdout]#
[stdout]# There is insufficient memory for the Java Runtime Environment to continue.
[stdout]# Native memory allocation (mmap) failed to map 32505856 bytes for committing reserved memory.
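When this error appears, it can help to confirm the instance really is out of memory before resizing. A minimal sketch (assumes SSH access to the instance and a standard Linux AMI; `free` and `ps` are stock tools, nothing Datomic-specific):

```shell
# Total/used/free memory in MiB. A failed ~32 MB mmap from the JVM
# suggests the box is close to exhausted.
free -m

# Top memory consumers; on a t-class instance the Datomic JVM
# usually dominates.
ps aux --sort=-%mem | head -n 5
```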
we tried autoscaling a second instance, which came up just fine. then we tried to redeploy our code to fix the wedged instance, but the deployment failed due to a 120s sync-libs error
you should update your ion-dev version https://docs.datomic.com/cloud/releases.html#ion-dev-251
I am getting this exception ~1/day. Any idea why this would occur?
clojure.lang.ExceptionInfo: Datomic Client Exception
{:cognitect.anomalies/category :cognitect.anomalies/fault, :http-result {:status 500, :headers {"content-length" "32", "server" "Jetty(9.4.24.v20191120)", "date" "Sun, 23 Feb 2020 17:08:37 GMT", "content-type" "application/edn"}, :body nil}}
at datomic.client.api.async$ares.invokeStatic (async.clj:58)
datomic.client.api.async$ares.invoke (async.clj:54)
datomic.client.api.sync.Client.list_databases (sync.clj:71)
datomic.client.api$list_databases.invokeStatic (api.clj:112)
datomic.client.api$list_databases.invoke (api.clj:106)
compute.db.core.DatomicClient.list_databases (core.cljc:71)
datomic.client.api$list_databases.invokeStatic (api.clj:112)
datomic.client.api$list_databases.invoke (api.clj:106)
Don't use CW logs often. It felt like a battle to get to the logs I wanted 😵 Should I upload them here? There are 2 relevant lines.
thanks -- that is probably useful to @marshall. Need your datomic compute stack version # too
Kenny, I am logging a case to capture this so we can look at it. I think we have everything we need, but wanted to let you know in case you see an e-mail come your way from support.
I'm setting Xmx and Xms when running the (on-prem) peer-server. I just noticed that it appears to start with its own settings for those flags.
CGroup: /system.slice/datomic.service
├─28220 /bin/bash /var/lib/datomic/runtime/bin/run -Xmx4g -Xms4g ...
└─28235 java -server -Xmx1g -Xms1g -Xmx4g -Xms4g -cp ...
Should I be setting those values through configuration elsewhere?
Ah, I see where they're hard-coded into the bin/run script. Perhaps I don't need to treat the peer server like other app peers?
Repeated flag precedence may differ across JVM versions/distros.
I am developing an application that requires storage of millions of transactions and I am concerned about the limitations of Datomic Cloud. Should I use a separate store (e.g. DynamoDB) for the transactions, or are there ways to scale Datomic Cloud? I welcome any feedback anyone might have...
@hadilsabbagh18 Which limitations are you concerned about? What do you mean by "ways to scale Datomic Cloud"?
@hadilsabbagh18 millions of transactions is fine with Datomic Cloud. (Disclaimer: I work for Cognitect, but not on Datomic) If you really want to do it right, you'll want to estimate the cardinality of your entities, relationships, and estimate the frequency of change... all in all you need to provide more specifics
With the disclaimer that there are no application specifics here: DynamoDB has very rudimentary query power
Q: has anyone used aws x-ray inside Ions? I want to do this (i.e. add sub-segments) but I’m wary of memory leaks etc when using the aws client in Ions. Any war stories or success stories?
Is it possible to pass a temp id to a tx-fn and resolve the referenced entity in the tx-fn? Example:
(defn my-tx-inc [db ent attr]
  (let [value (d/pull db {:eid ent :selector [:db/id attr]})]
    [[:db/add (:db/id value) attr (inc (attr value))]]))
{:tx-data [{:db/id "tempid" :product/sku 1} '(my-ns/my-tx-inc "tempid" :product/likes)]}
No. You must treat tempids as opaque. What they resolve to is unknowable until all datoms are expanded
Got it. Thank you.
For example, some other tx-fn may return something that asserts the tempid has a particular upserting attribute. That changes how it would resolve.
As is, this doesn't seem to work
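Since tempids are opaque inside tx-fns, one common pattern (a sketch, not from this thread; `conn` and the attribute names are assumptions, and it needs a live Datomic connection to run) is to split the work into two transactions: create the entity first, read the resolved id from the `:tempids` map in the transaction result, then invoke the tx-fn with a real entity id:

```clojure
(require '[datomic.client.api :as d])

;; First transaction: create the entity; Datomic resolves "tempid".
(def result
  (d/transact conn {:tx-data [{:db/id "tempid" :product/sku 1}]}))

;; :tempids maps each tempid string to the entity id it resolved to.
(def eid (get-in result [:tempids "tempid"]))

;; Second transaction: the tx-fn now receives a concrete entity id.
(d/transact conn {:tx-data [(list 'my-ns/my-tx-inc eid :product/likes)]})
```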
Keep in mind the Amazon X-Ray SDK for Java is a completely separate SDK from the AWS Java SDK (not a subset)