2016-08-30
out of the blue i'm unable to connect to datomic. getting: ConnectException Connection refused java.net.PlainSocketImpl.socketConnect (PlainSocketImpl.java:-2)
That was a great interview you did on the defn podcast btw, @robert-stuttaford 🙂
why thank you -bends a knee, tips hat-
transactor autoscaling is terminating my instance and then starting another one, and then stopping it again, over and over forever. any ideas what could be going wrong and how to get more data?
after further digging here is a log from the instance that stopped: 'user-data: inflating: datomic-pro-0.9.5390/README-CONSOLE.md user-data: pid is 1879 user-data: ./startup.sh: line 26: kill: (1879) - No such process /dev/fd/11: line 1: /sbin/plymouthd: No such file or directory initctl: Event failed'
seems like an issue with startup.sh, but unable to see what's inside there atm.
@jaret hi 🙂 are you around, and able to help with a transactor downtime analysis?
@robert-stuttaford absolutely. What's up? or should I say down?
-grin- it's up again, but i'd really like to get a LOT better at analysing root cause
PMing you
when i get the message "Critical failure, cannot continue: Heartbeat failed" how can i find out what the failure was? it's happening every time i try to restart my transactor
@jdkealy Heartbeat failed indicates that the transactor can't write to storage. What storage are you running on?
during transactor startup, yeah. Are you using our provided cloudformation scripts/etc?
i.e. did you follow the process here http://docs.datomic.com/storage.html#provisioning-dynamo and here http://docs.datomic.com/aws.html
is the transactor starting for some amount of time before you get the heartbeat failure?
if you look in cloudformation, do you see active heartbeats for any time or is it immediate failure?
@karol.adamiec Similar to my question to @jdkealy, did you follow this: http://docs.datomic.com/storage.html#provisioning-dynamo and this http://docs.datomic.com/aws.html
guys, i've just had dynamo db issues as well
in production
with a system that has been up for months
bet you DDB is having a tantrum
@marshall i used terraform module from https://github.com/mrmcc3/tf_aws_datomic
nothing on aws status
as of about 15 min ago there are reports of a DDB outage. possibly more out in us-east1
expletives and swearwords
I should have checked in earlier. We were on the line with the AWS guys half an hour ago. They said "we're updating the status page soon."
was that because you had downtime @potetm ?
so do i
how did you know there's an outage, @potetm ?
by calling AWS?
nothing like finding out the best source for outage news is twitter vs. official status pages.
ass. that's not scalable at all
-follows you- think you could keep doing that? -grin-
@robert-stuttaford you mean you can't keep an open chat with AWS 24/7 for status updates?
so what do we do now? game of chess? 🙂
looks like things are stable again
oh, wait, no. my EC2 console was stale
last new transactor was 5 minutes ago
not seeing any dynamo issues (eu-west-1). fingers crossed!
Amazon DynamoDB (N. Virginia): Increased latencies. 6:47 AM PDT: We are currently investigating increased API latencies in the US-EAST-1 Region.
we're also seeing transactor restarts (and flapping between our two transactors, as one tries to take over when the other kills itself). Is it expected behavior for the transactor java process to kill itself and restart when a heartbeat fails? Selected log lines:
2016-08-30 14:01:27.031 INFO default datomic.lifecycle - {:tid 18, :pid 7028, :event :transactor/heartbeat-failed, :cause :timeout}
2016-08-30 14:01:27.033 ERROR default datomic.process - {:tid 120, :pid 7028, :message "Critical failure, cannot continue: Heartbeat failed"}
2016-08-30 14:01:27.057 WARN default org.hornetq.core.server - HQ222113: On ManagementService stop, there are 2 unexpected registered MBeans: [core.acceptor.7b92fd66-6eb7-11e6-a9c9-eb6e98878cd4, core.acceptor.7b932477-6eb7-11e6-a9c9-eb6e98878cd4]
2016-08-30 14:01:27.076 INFO default org.hornetq.core.server - HQ221002: HornetQ Server version 2.3.17.Final (2.3.17, 123) [5d3fd9ae-ed45-11e5-a317-db72314f6b95] stopped
2016-08-30 14:02:03.345 WARN default datomic.slf4j - {:tid 10, :pid 12511, :message "Starting datomic: ..."}
are you using your own instance configuration, rather than the AMI provided by Cognitect, @ljosa?
yes. it kills itself to allow auto-scaling to notice that it's dead and replace the instance entirely
We are also down 😞
we've been stable for 30 mins now
We were stable for 5 minutes and then it went dark again
Data reads keep working for us too
but that could be cached
almost certainly cached
we don't see any SystemErrors in CloudWatch, and the SuccessfulRequestLatencies are normal. In between the failed heartbeats and transactor restarts, things work normally.
And ours is back
@jdkealy There was also a dynamodb issue a while ago, but it does not happen often
i'm back up too... is there any way to protect against this? should i be thinking of different backends other than dynamo?
we had to switch from couchbase to dynamodb, and ddb has been great so far (about 6 months).
Realistically, the kind of downtime DDB has is still an order of magnitude (or more) better than pretty much any option you could run on your own behalf
the last time DDB went down was 1 week after a MAJOR launch at Cognician. that was so much fun. September last year
yup Marshall totally
no issues for 45 mins now
@jdkealy There was an outage similar to this last year. https://aws.amazon.com/message/5467D2/
Some guy must be watching #dynamodb on twitter. The second I said something about an outage, he likes it and tweets this: https://twitter.com/shirleman/status/770614099726114816
Cassandra is not a great option for Datomic if you're worried about downtime because Datomic cannot work across Cassandra data centers, and a Cassandra data center must be in a single AWS availability zone.
The fallacy there is the assumption that there is zero cost to managing your own machines vs using a hosted service, even assuming the claims are accurate.
Cassandra is a fine option, but I'd be shocked if you could maintain a Cassandra ring with the same uptime and perf as DDB for anywhere near the cost
DDB has been great for us cost wise. Memcached is very effective at reducing the number of DDB requests.
has anyone used DDB streams to set up real-time multi-region replication?
i wonder how quickly one can shift regions with Datomic and DDB
there's no guarantee that the replica will be consistent, so you're just praying that Datomic will be okay after being started in the other region, right?
well, that's why i'm asking, i guess -- is it even a valid strategy
so far we've done the old backup-datomic, restore-datomic thing to switch transactor+storage, just once, back when we moved off of our snowflake transactor+postgres server
we're doing hourly datomic backups to S3. we populate our dev environment with those. I guess we could in theory restore them to a disaster recovery environment in another region. but realistically, if the entire us-east-1 goes down, we're out of business until it comes back.
we're also doing the backups that way
although i'm planning to switch from hourly to continuously
is anything keeping any of you in US-EAST-1 in particular?
-ing ditto
We are creating an Atlassian Connect addon and their servers are also in US-EAST-1. That is the only reason
we could operate in other regions for maybe an hour or two before we'd have to shut down because we rely on processing in us-east-1 to turn off ad campaigns when they exceed their daily budgets, etc.
i'm about 90% of the way to having a fresh env set up in oregon - new AMIs and Ubuntu LTS and whatnot
switching from upstart to systemd has been fun
I've read a bit about laziness in datomic and i wanted to ask a quick Q about my use case... i have accounts, accounts have collections, collections have photos, photos have tags. Tags are often removed / edited and i'd like to mark them as active / inactive. The criteria for being active/inactive is having just ONE photo that is not hidden. Is there any way I can have datomic fetch that criteria without scanning every photo in every collection? i.e. is it possible to write a query that returns true / false and will stop scanning after it hits a truthy value ?
@jdkealy Depending on your schema, you might be able to use get-some: http://docs.datomic.com/query.html#get-some
and/or get-else: http://docs.datomic.com/query.html#get-else
this may also be useful: http://docs.datomic.com/query.html#missing
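For instance, a sketch of the "any non-hidden photo?" check using missing? — attribute names like :photo/tags and :photo/hidden are assumptions taken from the question above, not a confirmed schema:

;; a sketch, assuming (require '[datomic.api :as d]), a :photo/tags ref from
;; photo -> tag, and a :photo/hidden attribute that is simply absent on
;; visible photos. the scalar find spec (?tag .) returns the tag's entity id
;; if at least one of its photos lacks :photo/hidden, else nil.
(d/q '[:find ?tag .
       :in $ ?tag
       :where
       [?photo :photo/tags ?tag]
       [(missing? $ ?photo :photo/hidden)]]
     db tag-eid)

Datalog won't necessarily stop at the first match the way a hand-rolled scan could, but this at least gives the nil-or-truthy shape the question asks about.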
sure. And I actually am not sure which would be faster. I would need to do some testing and thinking 🙂
hey guys I've this query
(defn mult-lookup-user [db phones]
(let [result (d/q '[:find ?e ?phone
:in $ [?phone ...]
:where [?e :user/phone ?phone]] db phones)]
(map second result)))
(mult-lookup-user (d/db connect) ["0862561423" "0877654321"])
where it will return only existing phones, and running it standalone works perfectly, but my issue is using it with a yada resource; the important section is below
{:post {:parameters {:form {:users [String]}
:body [String]}
:consumes #{"application/json" "application/x-www-form-urlencoded;q=0.9"}
:produces #{"application/json" "application/edn"}
:response (fn [ctx]
(let [users (or (get-in ctx [:parameters :body])
(get-in ctx [:parameters :form :users]))]
(when-let [valid-users (mult-lookup-user (d/db connect) users)]
(println "valid" valid-users)
(if (seq? valid-users)
(json/generate-string valid-users)))))}}
using the same input as when called standalone returns an empty list, but if I include just one value (valid of course) it returns that singular result. Can anyone help explain this issue?
@severed-infinity i would trace the inputs going into mult-lookup-user
in both cases and compare that to what happens when you call it directly
those are south african numbers, right? 🙂
@robert-stuttaford I've removed the println calls for clarity; the input shows the list of numbers coming in, and the results are as follows before and after
["0862561423","0877654321"]
valid ()
they are Irish mobile phone numbers
against the same database value?
are you printing the result coming from datalog directly?
ie, put (prn :in phones db :out result)
before (map second result)
I assume you mean like so
(defn mult-lookup-user [db phones]
(let [result (d/q '[:find ?e ?phone
:in $ [?phone ...]
:where [?e :user/phone ?phone]] db phones)]
(println results)
(map second result)))
well, result
not results
🙂
and print the inputs
:in ["0862561423" "0877654321"] :out #{[17592186045419 "0862561423"] [17592186045453 "0877654321"]}
looking good so far
but when called from the resource modal
:in ["\"0862561423\",\"0877654321\""] :out #{}
model*
these are the two I am testing currently, as you can see with more than one I get an empty set but with one value I get the results
[0862561423,0877654321]
:in ["0862561423,0877654321"] :out #{}
valid ()
[0862561423]
:in ["0862561423"] :out #{[17592186045419 "0862561423"]}
valid (0862561423)
is there an api to insert into datomic...letting datomic set the :db/id? Seems odd to force the user to create a temp id for all the inserts...
looks like you're passing in a string
in your yada impl, before you pass the numbers to your query fn, first (clojure.edn/read-string) it
oh so it does appear to be
@fenton, no. you have to make tempids every time
which, imho, is a far better tradeoff than some hidden magic you can't control 🙂
@robert-stuttaford I'd have preferred it to create one automatically if not specified. 🙂
if you absolutely must have it, write a function that does it for you. speaking as someone who's been there, and learned the hard way, you really just want to get used to providing them
@robert-stuttaford ok...it's a minor inconvenience only...and can be abstracted like u suggest. thx! 🙂
the danger with the abstraction is it makes it harder for you to use them in more complex ways later on when you realise the full power of the design
you end up either ditching the abstraction part of the time, or making more convoluted abstractions. either way, you lose either consistency or simplicity
i know. i've got several tens of thousands of lines of code written over several years by many people which bears the evidence of this
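For reference, the helper discussed above can stay very small; a sketch (the name ensure-tempid is made up here, and the usual (require '[datomic.api :as d]) alias from this log is assumed):

;; a sketch: give any entity map without an explicit :db/id a fresh tempid,
;; and leave maps that already carry one (like order-id below) untouched.
(defn ensure-tempid [entity-map]
  (cond-> entity-map
    (nil? (:db/id entity-map)) (assoc :db/id (d/tempid :db.part/user))))

Mapping that over tx-data keeps explicit, shared tempids available for the cross-referencing shown later in this log.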
@robert-stuttaford thank you for that; the solution: parsed-users (str/split (first users) #",")
though I do not know why an array of strings was turned into an array with a single joined string value
you can express complex relationships in a single transaction
@severed-infinity likely something yada or a middleware is doing
Yea I tried to ask in the yada chat before I got to the datomic stuff but got no response, and continued on with what seemed like a working solution
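A slightly more defensive version of that fix might look like the sketch below; it assumes, per the traces above, that the form value can arrive either as a proper vector of strings or as one comma-joined string (parse-users is a made-up name):

(require '[clojure.string :as str])

;; a sketch: split a single comma-joined string back into phone numbers,
;; but pass an already-correct vector of strings through untouched.
(defn parse-users [users]
  (if (and (= 1 (count users))
           (str/includes? (first users) ","))
    (str/split (first users) #",")
    (vec users)))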
@fenton e.g. transact an order and all of the individual order items together
with all the relationships expressed in the same transaction
@fenton a contrived example
(let [order-id (d/tempid :db.part/user)]
[{:db/id order-id
:order/uuid (d/squuid)
:order/user [:user/email ""]}
{:db/id (d/tempid :db.part/user)
:order.item/order order-id
:order.item/product [:product/slug "tesla-model-s"]
:order.item/unit-price 100000
:order.item/unit-currency :usd}
{:db/id (d/tempid :db.part/user)
:order.item/order order-id
:order.item/product [:product/slug "starbucks-venti"]
:order.item/unit-price 20
:order.item/unit-currency :usd}])
note order-id
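Submitting it is then one atomic call; a sketch, assuming the vector above is produced by a hypothetical function order-tx and conn is an open connection:

;; a sketch: the whole order + line items land in a single transaction,
;; so the cross-references via order-id are created atomically.
@(d/transact conn (order-tx))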
@robert-stuttaford ok...yes that makes good sense for sure.
this makes mocking fake databases with d/with
to test functions at the repl an absolute pleasure
highly recommend a scan-read of http://docs.datomic.com/clojure/#datomic.api/
just trying to understand the d/with
part a bit better...how do u use that in the repl for testing?
(def mock-db (->> some-made-up-tx-that-uses-real-data-and-adds-some-mock-data,-like-above (d/with some-actual-storage-backed-db-value) :db-after))
mock-db
is a db you can pass into any api fn that takes a db (including d/with
!) that you can query against as normal. you'll find all the stuff in storage, and all the stuff in your mock transaction, all together as though it was really transacted
you may have heard of time-travel databases, or speculative databases. this is that.
it's all just in local memory
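Concretely, a sketch of the round trip, reusing the order attributes from the example above (conn is assumed to be an open connection with that schema):

;; a sketch: speculatively apply a tx with d/with and query the result.
;; nothing is written to storage; (d/db conn) is unaffected.
(let [db      (d/db conn)
      mock-db (:db-after
               (d/with db [{:db/id (d/tempid :db.part/user)
                            :order/uuid (d/squuid)}]))]
  (d/q '[:find (count ?o) .
         :where [?o :order/uuid]]
       mock-db))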
ok, obviously this is something I'll need to know and, being slow, I'll take a bit of time to grok it...I'll share it with our local clojure meetup for discussion...
you'll get it sooner than you think, once you poke at it for a bit
if you've not found it already; https://github.com/clojure-cookbook/clojure-cookbook/tree/master/06_databases
kk @robert-stuttaford thanks for taking the time to hand hold...really appreciate! 🙂
10-15 are about Datomic
some good (peer-reviewed and nicely edited) explanation in there!
@robert-stuttaford I get it now. Pretty straight forward actually. Just to re-iterate. d/with
allows you to run transactions with a 'seeming' copy of the database. Then you can inspect the results to see that they are what you want them to be. Thereby allowing you to test new DB functions on a live database without mucking up the live database.
If our current Datomic Pro license supports 5 processes, and we're currently running 2 transactors and 3 peers (different environments), what happens when a 4th peer attempts to connect?
don't think it'll ever bump others off
@pesterhazy that hasn't been my experience. When you cross the limit, the existing peers don't get to keep their connections.
@pheuter my experience has been that it's peer count, txors don't go against the total
yeah in my experience the transactors don't count
but we have some distance to the limit of 5, so ymmv
what I've seen is that you can't connect if the limit is reached
It does suggest that, but if you turn on CW logging, there's a specific peer count metric. And we ran into problems when that metric was over the max.
I don't believe I've seen the "existing peers don't keep connections" documented anywhere, but that's what appeared to happen to me last week. So, def wanna confirm that with @marshall or @jaret
we haven't hit the limit of 22, but before we bought the licenses we kept hitting the limit of 2.
The limit is transactor + peers (i.e. a 5 process license would be 1 txor, 4 peers). HA Transactors don't count. Each license is contractually limited to a single production system, so if you have a 5 process license, you should be running no more than 1 transactor and 4 peers concurrently in production
or can we run a sql-backed transactor on stage as well without incurring license costs?
production in this case is defined as your production application that faces users/runs the business, etc
https://clojurians.slack.com/archives/datomic/p1472588454000286 ^ by this I meant that the technically enforced limit is a little more permissive than the agreement, so you'll still have to count manually to stay honest. the tech just prevents massive overruns from happening when you forget.
Say, what's the quickest / simplest way to check whether an entity with a given value exists? I'm trying to come up with a system where some things use a String slug as their ID, like {:company/name "Boris LLC"}
winds up as {:company/name "Boris LLC" :company/slug "boris-llc"}
...
So I'm looking at writing a loop where if there's already a [:company/slug "boris-llc"]
I generate "boris-llc-1"
, "boris-llc-2"
etc
Right now I'm planning on (d/entity db [:company/slug "boris-llc"])
but I thought I'd check to see if anyone has some advice on it first
@timgilbert I think that would be fine. I do something like this with query, but my situation is complicated by my slugs not being globally unique. I wish I could do it with entity.
Cool, thanks
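The per-candidate check described above could be sketched like this, assuming :company/slug is declared unique (e.g. :db.unique/identity) so it works as a lookup ref, with unique-slug as a made-up name:

;; a sketch: try "boris-llc", then "boris-llc-1", "boris-llc-2", ... and keep
;; the first slug that resolves to no entity (d/entity returns nil for an
;; unresolvable lookup ref). the candidate seq is lazy, so it stops early.
(defn unique-slug [db base]
  (->> (cons base (map #(str base "-" %) (rest (range))))
       (remove #(d/entity db [:company/slug %]))
       first))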
@timgilbert believe you could do something like this to avoid a loop and get them all at once:
(d/q '[:find ?e
       :in $ ?slug-partial
       :where
       [?e :company/slug ?slug]
       [(.startsWith ?slug ?slug-partial)]]
     db slug-partial)
that way you could just pass "boris-llc" and get everything that begins with it in one call.
granted that may not be the most efficient
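Completed into a function, that one-call variant might look like this sketch (next-slug is a made-up name, same assumed schema as above):

;; a sketch: fetch every slug with the given prefix in one query (the
;; collection find spec [?slug ...] yields a vector), then pick the first
;; candidate that isn't taken.
(defn next-slug [db base]
  (let [taken (set (d/q '[:find [?slug ...]
                          :in $ ?prefix
                          :where
                          [?e :company/slug ?slug]
                          [(.startsWith ?slug ?prefix)]]
                        db base))]
    (->> (cons base (map #(str base "-" %) (rest (range))))
         (remove taken)
         first)))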