#datomic
2020-02-25
joshkh14:02:07

my Ions deployments to a specific query group started failing today due to an error reported by the BeforeInstall script: "There is insufficient memory for the Java Runtime Environment to continue." has anyone else experienced this?

joshkh14:02:43

the project deployed to the query group has no problem running queries. it's the deployment itself that fails.

marshall15:02:20

@joshkh what size instance is the query group?

joshkh15:02:27

i did start playing with Datomic Analytics yesterday, although i'm using my main compute group for that. still, could that affect an unrelated query group?

marshall15:02:15

it shouldn't. the analytics server runs on the gateway instance and sends queries to whatever group you've configured (or defaults to the primary compute)

joshkh15:02:05

right, that's what i thought. okay, we can look into increasing our instance size. thanks @marshall.

joshkh15:02:08

just curious though - wouldn't the heap have more of an effect on a running project? this happens when i initiate a deployment, which fails almost immediately.

LifecycleEvent - BeforeInstall
Script - scripts/install-clojure
[stdout]Clojure 1.10.0.414 already installed
Script - sync-libs
[stderr]OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x00000000ee000000, 32505856, 0) failed; error='Cannot allocate memory' (errno=12)
[stdout]#
[stdout]# There is insufficient memory for the Java Runtime Environment to continue.
[stdout]# Native memory allocation (mmap) failed to map 32505856 bytes for committing reserved memory.

marshall15:02:47

have you tried cycling the instance?

marshall15:02:26

that looks like a wedged instance

marshall15:02:35

if it can’t allocate 32M

joshkh15:02:19

we tried autoscaling a second instance, which came up just fine. then we tried to redeploy our code to fix the wedged instance, but the deployment failed due to a 120s sync-libs error

marshall15:02:19

what version are you running?

joshkh15:02:23

oops, incoming edit 😉 above

joshkh15:02:44

yes, i was very excited to see that!

marshall15:02:47

then cycle your instance(s)

joshkh15:02:09

will do, thanks Marshall

kenny16:02:37

I am getting this exception ~1/day. Any idea why this would occur?

clojure.lang.ExceptionInfo: Datomic Client Exception
{:cognitect.anomalies/category :cognitect.anomalies/fault, :http-result {:status 500, :headers {"content-length" "32", "server" "Jetty(9.4.24.v20191120)", "date" "Sun, 23 Feb 2020 17:08:37 GMT", "content-type" "application/edn"}, :body nil}}
 at datomic.client.api.async$ares.invokeStatic (async.clj:58)
    datomic.client.api.async$ares.invoke (async.clj:54)
    datomic.client.api.sync.Client.list_databases (sync.clj:71)
    datomic.client.api$list_databases.invokeStatic (api.clj:112)
    datomic.client.api$list_databases.invoke (api.clj:106)
    compute.db.core.DatomicClient.list_databases (core.cljc:71)
    datomic.client.api$list_databases.invokeStatic (api.clj:112)
    datomic.client.api$list_databases.invoke (api.clj:106)

ghadi16:02:48

is that the full stacktrace? what was the user code that caused it?

kenny16:02:19

No, but it's the only relevant part. It's caused by datomic.client.api$list_databases

kenny16:02:28

This is line 71 in compute.db.core:

(let [dbs (d/list-databases client arg-map)]
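
An aside on the anomaly above: :cognitect.anomalies/fault normally signals a server-side bug rather than a retryable condition (the client already retries :busy and :unavailable itself), but since the failure here is rare and transient, a thin retry wrapper can paper over it while the root cause is investigated. A minimal sketch, assuming d aliases datomic.client.api; all names are illustrative:

(defn list-databases-with-retry
  "Sketch only: retries d/list-databases a few times on any client
   anomaly, with linear backoff. Not how the client itself classifies
   retryable errors."
  [client arg-map retries]
  (loop [attempt 1]
    (let [result (try
                   (d/list-databases client arg-map)
                   (catch clojure.lang.ExceptionInfo e
                     (if (and (< attempt retries)
                              (:cognitect.anomalies/category (ex-data e)))
                       ::retry
                       (throw e))))]
      (if (= ::retry result)
        (do (Thread/sleep (* attempt 1000))  ;; back off a bit longer each try
            (recur (inc attempt)))
        result))))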

ghadi16:02:22

not sure, but you should try to correlate it with logs in cloudwatch

ghadi16:02:08

BTW ^ new Datomic CLI tools

kenny16:02:16

It looks nice but will require a couple things to happen before we can update.

ghadi16:02:49

you appear to be on the latest 616 compute

ghadi16:02:01

you can use the datomic cli tools fine with that

kenny16:02:09

I don't use CW logs often. It felt like a battle to get to the logs I wanted 😵 Should I upload them here? There are 2 relevant lines.
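
Filtering the log group from a REPL can be less painful than the console. A hypothetical sketch using Cognitect's aws-api; the log group name, filter pattern, and timestamp are assumptions, substitute your compute stack's:

(require '[cognitect.aws.client.api :as aws])

(def logs (aws/client {:api :logs}))

(aws/invoke logs
  {:op :FilterLogEvents
   :request {:logGroupName  "datomic-<system>"   ;; assumption: your system's log group
             :filterPattern "Alert"
             :startTime     1582477717000}})     ;; epoch millis around the failure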

kenny16:02:58

Didn't look like any sensitive info so added them to the thread there ^

ghadi16:02:23

thanks -- that is probably useful to @marshall. Need your datomic compute stack version # too

kenny16:02:23

DatomicCFTVersion: 616 DatomicCloudVersion: 8879

ghadi16:02:49

seems like a server side bug from the stacktrace

kenny16:02:21

It's weird how it happens so infrequently.

jaret18:02:14

Kenny, I am logging a case to capture this so we can look at it. I think we have everything we need, but wanted to let you know in case you see an e-mail come your way from support.

kenny18:02:08

Great, thanks.

uwo16:02:59

I'm setting Xmx and Xms when running the (on-prem) peer-server. I just noticed that it appears to start with its own settings for those flags.

CGroup: /system.slice/datomic.service
           ├─28220 /bin/bash /var/lib/datomic/runtime/bin/run -Xmx4g -Xms4g ...
           └─28235 java -server -Xmx1g -Xms1g -Xmx4g -Xms4g -cp ...
Should I be setting those values through configuration elsewhere?

uwo17:02:11

Ah, I see where they're hard-coded into the bin/run script. Perhaps I don't need to treat the peer server like other app peers? (Repeated flag precedence may differ across JVM versions/distros.)
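
For duplicate -Xmx/-Xms flags, HotSpot generally lets the last occurrence win, so the 4g values should take effect here, but it's cheap to verify rather than assume. A quick check from a REPL attached to the peer-server JVM:

;; Reports the heap ceiling the JVM actually settled on. ~4 GiB means
;; the later -Xmx4g won; ~1 GiB means bin/run's hard-coded -Xmx1g did.
(format "max heap: %.2f GiB"
        (/ (.maxMemory (Runtime/getRuntime)) 1024.0 1024.0 1024.0))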

hadils21:02:04

I am developing an application that requires storage of millions of transactions, and I am concerned about the limitations of Datomic Cloud. Should I use a separate store (e.g. DynamoDB) for the transactions, or are there ways to scale Datomic Cloud? I welcome any feedback anyone might have.

Joe Lane22:02:31

@hadilsabbagh18 Which limitations are you concerned about? And what do you mean by "ways to scale Datomic Cloud"?

hadils22:02:51

I am concerned about storage for now.

ghadi22:02:48

@hadilsabbagh18 millions of transactions is fine with Datomic Cloud. (Disclaimer: I work for Cognitect, but not on Datomic.) If you really want to do it right, you'll want to estimate the cardinality of your entities and relationships, and the frequency of change. All in all, you need to provide more specifics.
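
As a back-of-envelope illustration (every number below is an assumption, not from this thread): a transaction that touches a handful of attributes produces a handful of datoms, so tens of millions of transactions land well within what a single Datomic system holds.

;; hypothetical sizing estimate; all inputs are assumptions
(let [transactions  1e7   ;; 10M transactions over the app's lifetime
      datoms-per-tx 8]    ;; average datoms asserted per transaction
  (* transactions datoms-per-tx))
;; => 8.0E7, i.e. ~80M datoms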

ghadi22:02:53

With the disclaimer that I have no application specifics: DynamoDB has very, very rudimentary query power.

hadils22:02:23

@ghadi -- I will make an estimate. Who should I be talking to about this?

ghadi22:02:12

@marshall is a good person to talk to

hadils22:02:42

@ghadi Thanks a lot. I will be more specific when talking to @marshall.

ghadi22:02:55

no problem

steveb8n23:02:22

Q: has anyone used aws x-ray inside Ions? I want to do this (i.e. add sub-segments) but I’m wary of memory leaks etc when using the aws client in Ions. Any war stories or success stories?

Sam DeSota23:02:57

Is it possible to pass a temp id to a tx-fn and resolve the referenced entity in the tx-fn? Example:

(defn my-tx-inc [db ent attr]
  (let [value (d/pull db {:eid ent :selector [:db/id attr]})]
    [[:db/add (:db/id value) attr (inc (attr value))]]))

{:tx-data [{:db/id "tempid" :product/sku 1} '(my-ns/my-tx-inc "tempid" :product/likes)]}

favila23:02:52

No. You must treat tempids as opaque. What they resolve to is unknowable until all datoms are expanded

Sam DeSota23:02:35

Got it. Thank you.

favila23:02:54

For example some other tx fn may return something that asserts the tempid has a particular upserting attribute. That changes how it would resolve

Sam DeSota23:02:05

As is, this doesn't seem to work
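
Since tempids are opaque inside a tx fn, the usual workaround is to address the entity by a stable identity instead, e.g. a lookup ref on a unique attribute. A minimal sketch, assuming :product/sku is :db.unique/identity and d aliases datomic.client.api:

(defn my-tx-inc
  "Sketch only: increments attr on an existing entity identified by
   an eid or lookup ref, defaulting the current value to 0."
  [db ent attr]
  (let [value (d/pull db {:eid ent :selector [:db/id attr]})]
    [[:db/add (:db/id value) attr (inc (get value attr 0))]]))

;; create first, then increment by identity rather than by tempid:
;; tx 1: {:tx-data [{:product/sku 1 :product/likes 0}]}
;; tx 2: {:tx-data ['(my-ns/my-tx-inc [:product/sku 1] :product/likes)]}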

ghadi23:02:28

@steveb8n I’ve done them

ghadi23:02:25

You need the xray sdk for Java but not the aws-sdk for java auto instrumenter

steveb8n23:02:34

great. not the Cognitect aws client? Just aws java interop?

ghadi23:02:45

Keep in mind the amazon xray sdk for java is a completely separate sdk than the aws java sdk (not a subset)

steveb8n23:02:46

ok. I’ll give that a try. is there a sample snippet out there somewhere? I don’t need one but it seems like a good thing for docs

ghadi23:02:09

No, sorry, but the aws docs were accurate

ghadi23:02:17

And helpful
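
The interop is small. A hedged sketch against the X-Ray SDK for Java (assumes com.amazonaws/aws-xray-recorder-sdk-core is on the classpath and an enclosing segment is already open, as in a traced invocation):

(import '(com.amazonaws.xray AWSXRay))

(defn with-subsegment
  "Sketch only: runs f inside an X-Ray subsegment, recording any
   thrown exception on it before closing."
  [name f]
  (let [sub (AWSXRay/beginSubsegment name)]
    (try
      (f)
      (catch Throwable t
        (.addException sub t)   ;; attach the error to the subsegment
        (throw t))
      (finally
        (AWSXRay/endSubsegment)))))

;; usage: (with-subsegment "query-products" #(d/q query db))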

steveb8n23:02:21

ok, good info. thanks