This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2017-03-10
Channels
- # aleph (1)
- # aws-lambda (1)
- # beginners (80)
- # boot (20)
- # cider (75)
- # cljs-dev (45)
- # cljsjs (1)
- # cljsrn (11)
- # clojure (428)
- # clojure-dusseldorf (13)
- # clojure-italy (4)
- # clojure-russia (153)
- # clojure-spec (47)
- # clojure-taiwan (1)
- # clojure-uk (62)
- # clojurescript (84)
- # cursive (19)
- # datascript (96)
- # datomic (75)
- # dirac (9)
- # docs (3)
- # emacs (19)
- # jobs (5)
- # jobs-discuss (20)
- # jobs-rus (17)
- # lein-figwheel (5)
- # leiningen (1)
- # liberator (4)
- # luminus (12)
- # off-topic (4)
- # om (31)
- # onyx (102)
- # pamela (1)
- # parinfer (3)
- # pedestal (3)
- # proton (1)
- # protorepl (14)
- # re-frame (54)
- # reagent (22)
- # rum (40)
- # spacemacs (2)
- # specter (8)
- # test-check (5)
- # unrepl (110)
- # untangled (80)
- # vim (3)
- # yada (46)
Storage & indexing costs.
Generally we recommend starting with db/index=true, since it's easier to remove an index than add one later.
In a project I'm currently involved in, I'm using datalog to pull a subset of data from a Datomic database into an edn file. An issue arises when pulling anything typed bigint: it seems to render as type long in the edn file (no trailing N). The project errs when attempting to load the edn file as it expects a bigint but is getting a long. Is there anything in datalog that would allow me to force the bigint typing on export to the edn file?
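One way to approach this, sketched below under assumptions (the :account/balance key and the shape of the pulled map are hypothetical, not from the conversation): coerce integer values back to clojure.lang.BigInt before printing, since pr-str emits the trailing N for BigInt and the edn reader restores it on load.

```clojure
(require '[clojure.edn :as edn])

(defn ->bigint
  "Coerce any integer value to a BigInt so it prints with a trailing N.
  Leaves non-integer values untouched."
  [x]
  (if (integer? x) (bigint x) x))

;; Hypothetical pulled entity: the query returned the value as a long.
(def pulled {:account/balance 42})

;; Coerce before writing out; pr-str now emits 42N.
(pr-str (update pulled :account/balance ->bigint))
;; => "{:account/balance 42N}"

;; Reading the edn back yields a BigInt again.
(edn/read-string "{:account/balance 42N}")
```

Alternatively, since Datalog queries can call ordinary functions in a clause, something like [(bigint ?v) ?b] inside the :where and binding ?b in :find may let you do the coercion at query time rather than post-processing.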
Hey! So I just got this fun error: ("heartbeat failed")
2017-03-10 18:27:30.006 INFO default datomic.lifecycle - {:event :transactor/heartbeat-failed, :cause :timeout, :pid 7943, :tid 15}
2017-03-10 18:27:30.009 ERROR default datomic.process - {:message "Critical failure, cannot continue: Heartbeat failed", :pid 7943, :tid 1722}
2017-03-10 18:27:30.010 INFO default datomic.process-monitor - {:tid 1722, :AlarmHeartbeatFailed {:lo 1, :hi 1, :sum 1, :count 1}, :MemoryIndexMB {:lo 1, :hi 1, :sum 1, :count 1}, :AvailableMB 428.0, :RemotePeers {:lo 1, :hi 1, :sum 1, :count 1}, :HeartbeatMsec {:lo 5000, :hi 9692, :sum 44694, :count 8}, :Alarm {:lo 1, :hi 1, :sum 1, :count 1}, :pid 7943, :event :metrics, :SelfDestruct {:lo 1, :hi 1, :sum 1, :count 1}, :MetricsReport {:lo 1, :hi 1, :sum 1, :count 1}}
It’s not immediately apparent to me what caused the heartbeat failure. How should I read the log?
Thanks for responding so quickly @marshall! I was thinking it might be a memory insufficiency, because the transactor is simply transacting to DynamoDB
But I suppose DynamoDB might be read/write throttling, or maybe some latency was introduced
So I’m following bad practice and co-locating the transactor with the server JVM on the same AWS (EC2) instance.
Memory creeps down steadily but not steeply
What should I look for if it’s throttling? It’s just a bunch of heartbeats
Ah okay
Yeah, not seeing any notifications of that kind
No, for cost reasons
This is a small project
But times like this make me wish we had HA...
but you really only get the advantage if you’re running the pieces on separate instances
Right — which is more expensive 😕 Though I suppose not by much
It’s just myself and another person on the team and we have no revenue yet haha so we’re trying to minimize costs
Otherwise I’d say, pay the extra few dollars a month and save ourselves the headache because developer time is much more expensive
I’ll look around for some other feasible AWS options
Might have to buckle down and get two super-small AWS instances large enough for the two transactors (one active, one failover/backup) as well as the server instance
Let me check — it’s the 4GB memory with 2 cores
A t2.medium
We don’t have autoscaling or replication either, so if that server goes down, I’m on duty 😅 Oh, to have money, so permanently crossed fingers wouldn’t be necessary
yeah, you can get away with a smaller instance for the transactor, but you have to be careful about load
well, worst case on a shoestring you could put the transactor up in a single-instance ASG
Here’s my transactor memory settings — pretty minimal
memory-index-threshold=32m
memory-index-max=128m
object-cache-max=64m
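For context, those three settings live in the transactor properties file. A minimal DynamoDB-backed sketch along these lines might look like the following (the table name, region, and license line are placeholders, not details from this conversation):

```
protocol=ddb
aws-dynamodb-table=my-datomic-table
aws-dynamodb-region=us-east-1
license-key=...

memory-index-threshold=32m
memory-index-max=128m
object-cache-max=64m
```

Roughly speaking, the transactor heap has to accommodate memory-index-max plus object-cache-max plus JVM working overhead, so with these values the ~192m of caches alone make a 384MB max heap very tight, which is consistent with the crash described below.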
ASG? Auto-scaling … G?
Ah right
Yeah that’s true
Not HA, but a smart idea
just make sure if you’re not using our provided CF/launch scripts that you terminate the instance when the transactor goes down
I used the CF/launch scripts at a previous startup when we had HA, which was nice, but we don’t have that luxury (really, necessity) now sadly
Thanks for the tip about termination!
yep. and yeah, the CF/AMI we have will work that way too - just set the ASG size to 1
So I can use the CF/launch scripts to create an ASG for the transactor without having to use HA?
Oh I get what you’re saying now — you just answered that
Sounds good — thanks so much!
Do you think a t2.micro instance is big enough? We might swing HA if we had two t2.nanos...
With the t2.micro it’s hard too?
I guess the bandwidth is really small
That’s true, especially since that’s including the OS
Yeah so t2.nano is infeasible I suppose
I remember trying to get a transactor running while setting the max heap size to something like 384MB and it crashed right away
Hmm, well this gives me something to think about — thanks!
I was never able to get a transactor up with less than 4GB (t2.medium), personally. It’s still relatively cheap though, at $34/mo according to http://www.ec2instances.info/