This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2016-11-23
@zane, if you want the latest transaction for an entity, you’d need to query all its current attributes
[:find (max ?tx) :in $ ?e :where [?e _ _ ?tx]]
is one approach
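A minimal sketch of that approach, assuming the Datomic peer library is on the classpath and `conn`/`db`/`e` are illustrative names (not from the conversation):

```clojure
(require '[datomic.api :as d])

(defn latest-tx
  "Return the most recent transaction entity id that touched entity `e`,
   by scanning every datom about `e` and taking the max tx."
  [db e]
  (ffirst
    (d/q '[:find (max ?tx)
           :in $ ?e
           :where [?e _ _ ?tx]]
         db e)))

;; The transaction entity carries :db/txInstant, so the wall-clock time
;; of the last change would be something like:
;; (:db/txInstant (d/entity db (latest-tx db some-entity-id)))
```

As noted above, this walks all of the entity's current datoms, which is why it may not be performant for hot paths.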
@marshall Ok, so regarding the "Critical failure, cannot continue: Heartbeat failed" error:
- OS: Mac OS X
- transactor command: ./bin/transactor -Xmx4g -Xms4g -Ddatomic.peerConnectionTTLMsec=20000 -Ddatomic.txTimeoutMsec=20000 config/samples/dev-transactor-template.properties
- restore command: ./bin/datomic -Xmx4g -Xms4g restore-db file:/Users/mgn/Downloads/import-2016-11-08 datomic:
- transactor log:
2016-11-21 10:51:24.134 INFO default datomic.kv-cluster - {:event :kv-cluster/create-val, :val-key "5821b80f-e2af-4e7d-a02e-8fb9838bfd56", :bufsize 15561, :phase :begin, :pid 36803, :tid 64}
2016-11-21 10:51:24.153 WARN default datomic.backup - {:message "error executing future", :pid 36803, :tid 10}
java.util.concurrent.ExecutionException: java.util.concurrent.ExecutionException: org.h2.jdbc.JdbcSQLException: Connection is broken: "java.net.ConnectException: Connection refused (Connection refused): localhost:4335" [90067-171]
	at java.util.concurrent.FutureTask.report(FutureTask.java:122) [na:1.8.0_111]
	at java.util.concurrent.FutureTask.get(FutureTask.java:192) [na:1.8.0_111]
	at datomic.common$pfuture$reify__319.deref(common.clj:587) ~[datomic-transactor-pro-0.9.5407.jar:na]
	at clojure.core$deref.invokeStatic(core.clj:2228) ~[clojure-1.8.0.jar:na]
	at clojure.core$deref.invoke(core.clj:2214) ~[clojure-1.8.0.jar:na]
	at datomic.backup.ValueRestore.restore_node(backup.clj:446) ~[datomic-transactor-pro-0.9.5407.jar:na]
	at datomic.backup.ValueRestore.restore_node(backup.clj:437) ~[datomic-transactor-pro-0.9.5407.jar:na]
	at datomic.backup$restore_db$fn__9032$fn__9035.invoke(backup.clj:660) ~[datomic-transactor-pro-0.9.5407.jar:na]
	at datomic.backup$restore_db$fn__9032.invoke(backup.clj:656) ~[datomic-transactor-pro-0.9.5407.jar:na]
	at clojure.core$binding_conveyor_fn$fn__4676.invoke(core.clj:1938) [clojure-1.8.0.jar:na]
	at clojure.lang.AFn.call(AFn.java:18) [clojure-1.8.0.jar:na]
	at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_111]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_111]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_111]
	at java.lang.Thread.run(Thread.java:745) [na:1.8.0_111]
Caused by: java.util.concurrent.ExecutionException: org.h2.jdbc.JdbcSQLException: Connection is broken: "java.net.ConnectException: Connection refused (Connection refused): localhost:4335" [90067-171]
	at java.util.concurrent.FutureTask.report(FutureTask.java:122) [na:1.8.0_111]
	at java.util.concurrent.FutureTask.get(FutureTask.java:192) [na:1.8.0_111]
	at datomic.common$pfuture$reify__319.deref(common.clj:587) ~[datomic-transactor-pro-0.9.5407.jar:na]
	at clojure.core$deref.invokeStatic(core.clj:2228) ~[clojure-1.8.0.jar:na]
	at clojure.core$deref.invoke(core.clj:2214) ~[clojure-1.8.0.jar:na]
	at datomic.backup.ValueRestore$fn__8956.invoke(backup.clj:422) ~[datomic-transactor-pro-0.9.5407.jar:na]
	at datomic.backup.ValueRestore.restore_val(backup.clj:419) ~[datomic-transactor-pro-0.9.5407.jar:na]
	at datomic.backup.ValueRestore$fn__8966$fn__8967.invoke(backup.clj:444) ~[datomic-transactor-pro-0.9.5407.jar:na]
	... 6 common frames omitted
Caused by: org.h2.jdbc.JdbcSQLException: Connection is broken: "java.net.ConnectException: Connection refused (Connection refused): localhost:4335" [90067-171]
	at org.h2.message.DbException.getJdbcSQLException(DbException.java:329) ~[h2-1.3.171.jar:1.3.171]
	at org.h2.message.DbException.get(DbException.java:158) ~[h2-1.3.171.jar:1.3.171]
	at org.h2.engine.SessionRemote.connectServer(SessionRemote.java:399) ~[h2-1.3.171.jar:1.3.171]
Oh, I get some exceptions before that, e.g.:
2016-11-23 09:07:43.231 INFO default datomic.kv-cluster - {:event :kv-cluster/retry, :StoragePutBackoffMsec 0, :attempts 0, :max-retries 9, :cause "org.h2.jdbc.JdbcSQLException", :pid 47395, :tid 64}
2016-11-23 09:07:43.231 INFO default datomic.kv-cluster - {:event :kv-cluster/retry, :StorageGetBackoffMsec 0, :attempts 0, :max-retries 9, :cause "org.h2.jdbc.JdbcSQLException", :pid 47395, :tid 87}
2016-11-23 09:07:43.283 INFO default datomic.kv-cluster - {:event :kv-cluster/retry, :StoragePutBackoffMsec 50.0, :attempts 1, :max-retries 9, :cause "org.h2.jdbc.JdbcSQLException", :pid 47395, :tid 64}
2016-11-23 09:07:43.306 INFO default datomic.kv-cluster - {:event :kv-cluster/retry, :StorageGetBackoffMsec 50.0, :attempts 1, :max-retries 9, :cause "org.h2.jdbc.JdbcSQLException", :pid 47395, :tid 87}
2016-11-23 09:07:43.385 INFO default datomic.kv-cluster - {:event :kv-cluster/retry, :StoragePutBackoffMsec 100.0, :attempts 2, :max-retries 9, :cause "org.h2.jdbc.JdbcSQLException", :pid 47395, :tid 64}
2016-11-23 09:07:43.408 INFO default datomic.kv-cluster - {:event :kv-cluster/retry, :StorageGetBackoffMsec 100.0, :attempts 2, :max-retries 9, :cause "org.h2.jdbc.JdbcSQLException", :pid 47395, :tid 87}
2016-11-23 09:07:43.586 INFO default datomic.kv-cluster - {:event :kv-cluster/retry, :StoragePutBackoffMsec 200.0, :attempts 3, :max-retries 9, :cause "org.h2.jdbc.JdbcSQLException", :pid 47395, :tid 64}
2016-11-23 09:07:43.613 INFO default datomic.kv-cluster - {:event :kv-cluster/retry, :StorageGetBackoffMsec 200.0, :attempts 3, :max-retries 9, :cause "org.h2.jdbc.JdbcSQLException", :pid 47395, :tid 87}
2016-11-23 09:07:43.987 INFO default datomic.kv-cluster - {:event :kv-cluster/retry, :StoragePutBackoffMsec 400.0, :attempts 4, :max-retries 9, :cause "org.h2.jdbc.JdbcSQLException", :pid 47395, :tid 64}
2016-11-23 09:07:44.018 INFO default datomic.kv-cluster - {:event :kv-cluster/retry, :StorageGetBackoffMsec 400.0, :attempts 4, :max-retries 9, :cause "org.h2.jdbc.JdbcSQLException", :pid 47395, :tid 87}
2016-11-23 09:07:44.310 INFO default datomic.process-monitor - {:tid 11, :StoragePutMsec {:lo 0, :hi 18500, :sum 134937, :count 1898}, :AvailableMB 3190.0, :StorageGetMsec {:lo 0, :hi 3370, :sum 22493, :count 1917}, :pid 47395, :event :metrics, :StoragePutBytes {:lo 5641, :hi 19880, :sum 29298291, :count 1903}, :MetricsReport {:lo 1, :hi 1, :sum 1, :count 1}, :StoragePutBackoffMsec {:lo 0, :hi 400, :sum 750, :count 5}, :StorageGetBackoffMsec {:lo 0, :hi 400, :sum 750, :co
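The `:kv-cluster/retry` lines above show a standard doubling backoff (0, 50, 100, 200, 400 msec across attempts 0-4, capped at `:max-retries 9`). As a rough sketch of that pattern (the function name is illustrative, not part of Datomic's API):

```clojure
(defn backoff-msec
  "Backoff delay for the nth retry attempt, matching the pattern in the
   log above: 0, 50, 100, 200, 400, ..."
  [attempt]
  (if (zero? attempt)
    0
    (* 50 (long (Math/pow 2 (dec attempt))))))
```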
Hi - has anyone set up Logstash with Datomic (I'm using the AMI at present)? Any tips appreciated!
Morning, I will shamelessly repeat my question in case anyone missed it 🙂.
[{:db/id #db/id[:db.part/user -1]
  ;; notice the lookup ref usage to obtain references. Lovely.
  :basket/owner [:user/email ""]}
 [:assertWithRetracts #db/id[:db.part/user -1] :basket/items []]]
Why is the above not working? Does #db/id[:db.part/user -1] resolve only inside a single 'expression' and not across the whole transaction? Is there any way to tie that together so I don't have to write logic in code? 🤓 BTW: what are the transactional semantics of transact? I assume all forms in the vector are part of one transaction?
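For what it's worth, a tempid written with an explicit negative number, such as `#db/id[:db.part/user -1]`, resolves to the same new entity everywhere it appears within one transaction, and a single `transact` call applies the entire vector atomically (all-or-nothing). A hedged sketch, assuming `conn` and the `:assertWithRetracts` transaction function already exist (they are from the question, not defined here):

```clojure
(require '[datomic.api :as d])

;; Both occurrences of #db/id[:db.part/user -1] name the same new entity,
;; because the partition and the explicit negative id match.
(def tx-data
  [{:db/id        #db/id[:db.part/user -1]
    :basket/owner [:user/email ""]}          ; lookup ref, as in the question
   [:assertWithRetracts #db/id[:db.part/user -1] :basket/items []]])

;; One call, one atomic transaction:
;; @(d/transact conn tx-data)
```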
Following the Datomic SQL script to create the Datomic DB, I get ERROR: permission denied for tablespace pg_default in RDS
@robert-stuttaford: Yeah, that's our current implementation. It's not particularly performant, so we were looking for something faster. One option is to have an explicit attribute for updatedAt, but we're trying to avoid that.
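For reference, the explicit-attribute alternative mentioned here could look like the following sketch, using the 2016-era (classic) Datomic schema format; the attribute name `:entity/updated-at` is hypothetical:

```clojure
;; Schema for a dedicated last-modified attribute, indexed for fast lookup.
(def updated-at-schema
  [{:db/id                 #db/id[:db.part/db]
    :db/ident              :entity/updated-at
    :db/valueType          :db.type/instant
    :db/cardinality        :db.cardinality/one
    :db/index              true
    :db.install/_attribute :db.part/db}])

;; Each write would then also assert the timestamp, e.g.:
;; [{:db/id eid, :some/attr v, :entity/updated-at (java.util.Date.)}]
```

The trade-off is exactly the one described above: reads become a cheap indexed lookup, at the cost of asserting an extra datom on every write.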
We’d like to upgrade the process count on our current Datomic Pro License, how can we do so? Doesn’t seem like there’s a way via http://my.datomic.com
contact [email protected] with any questions