This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2019-03-01
Channels
- # announcements (4)
- # aws (1)
- # beginners (172)
- # cider (16)
- # cljdoc (63)
- # cljsrn (7)
- # clojure (150)
- # clojure-dev (8)
- # clojure-europe (26)
- # clojure-gamedev (6)
- # clojure-greece (23)
- # clojure-nl (4)
- # clojure-spec (10)
- # clojure-uk (101)
- # clojurescript (40)
- # community-development (5)
- # cursive (19)
- # datomic (54)
- # emacs (39)
- # figwheel-main (5)
- # fulcro (4)
- # graphql (16)
- # immutant (5)
- # jobs (8)
- # jobs-rus (1)
- # leiningen (1)
- # off-topic (31)
- # planck (1)
- # re-frame (7)
- # reagent (8)
- # reitit (6)
- # remote-jobs (4)
- # shadow-cljs (11)
- # spacemacs (18)
- # specter (2)
- # sql (58)
- # vim (2)
- # yada (5)
(datomic.ion.dev/-main "{:op :push}")
{:command-failed "{:op :push}",
:causes ({:message "INSTANCE", :class NoSuchFieldError})}
Process finished with exit code 1
Why can't I push from the REPL?
@U2J4FRT2T This might help: https://gist.github.com/olivergeorge/f402c8dc8acd469717d5c6008b2c611b
Is there something that would cause an ion deployed to a query group to perpetually retrieve a cached db value?
I have a deployed ion that I am able to transact with. I am logging the time value of the db-after and any time I call (d/db (conn)). The db-after time is increasing (as expected) after every transaction, but I always see the same (d/db (conn)) :t value despite transactions occurring. The only thing that will bring the db up to date is a re-deploy.
So far as I can tell I'm not caching anything on my side (the client, connection and d/db are all called without memoization).
Another data point to add: when I try to query against the :db-after of a transaction I get exceptions like Database does not yet have t=257. Is this expected? From the docs on this page https://docs.datomic.com/cloud/whatis/client-synchronization.html it would seem like the latency should be less than a second before being able to query against the new value, but I'm seeing latencies on the order of minutes.
Hi, I wonder what it is that I am doing wrong:
(def limited
(let [sdb (db/db)
mess (d/q {:query '[:find ?mc
:where
[?e :message/content ?mc]]
:args [sdb]
:limit 2
:offset 0})]
mess))
(def non-limited
(let [sdb (db/db)
mess (d/q '[:find ?mc
:where
[?e :message/content ?mc]]
sdb)]
mess))
The limited one complains that the :find clause does not exist.
All I want to do is to read data in chunks.
1. Caused by java.lang.IllegalArgumentException
No :find clause specified
All the variables in the :where clauses that aren't part of the :find clause will be unified. Since you only have 1 :where clause, there is nothing for ?e to unify with. Replacing ?e with a _ (underscore) should fix it.
@U0JPBB10W did you mean this:
(def limited
(let [sdb (db/db)
mess (d/q {:query '[:find ?mc
:where
[_ :message/content ?mc]]
:args [sdb]
:limit 2
:offset 0})]
mess))
still no luck
@UBC1Y2E01 What API are you using? client (`datomic.client.api`) or peer (`datomic.api`)?
It looks like you might be passing the argument map of the client api to the peer api.
@U963A21SL I think I know what is going on. I use q, but it looks like I should be using query from datomic.api, as in here: https://docs.datomic.com/on-prem/clojure/index.html#datomic.api/q
Thanks for pointing me in the right direction!
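For reference, a sketch of the two peer-API call shapes (assuming a db value sdb is in scope). As far as I know, :offset and :limit are options of the client API's q argument map, not the peer API's, so with the peer API you would chunk the result yourself:

```clojure
(require '[datomic.api :as d])

;; Positional form: query first, then the inputs.
(d/q '[:find ?mc :where [_ :message/content ?mc]] sdb)

;; Map form via datomic.api/query: query and args in a map.
(d/query {:query '[:find ?mc :where [_ :message/content ?mc]]
          :args  [sdb]})

;; Reading in chunks with the peer API: page over the result set.
(->> (d/q '[:find ?mc :where [_ :message/content ?mc]] sdb)
     (drop 0)   ; offset
     (take 2))  ; limit
```

Note that drop/take paging here happens after the full result is realized, unlike the client API's server-side :offset/:limit.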
I found this channel: https://www.youtube.com/user/datomicvideo/
I think it is from Cognitect directly
It would be cool if it could host pointers to other Datomic videos and chats as well... maybe through playlists?
It looks really old though - 6 years ago
I'm not sure if that is official stuff or not. the most recent official series of datomic videos is at https://www.youtube.com/playlist?list=PLZdCLR02grLoMy4TXE4DZYIuxs3Q9uc4i
and clojuretv has many clojure conference videos about datomic https://www.youtube.com/user/ClojureTV/search?query=datomic
Is there an idiomatic way to "abandon" a transaction from within a transaction function? Say I am processing an external update feed, and in so doing, I want to go ahead and transact the message from a feed only if the message represents data that is more current than what's in the db. If I return an empty list, a transaction still happens, which is fine, but I'm not sure what the implications are of adding an empty transaction to the system.
I think @okocim question is due to a problem with the documentation. The on-prem docs mention throwing an exception from a transaction function to abort the transaction, but I found no mention of it in the Cloud docs. I had to ask on here a few weeks ago if it still works for Cloud. I think an update to the Cloud docs would be useful.
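A sketch of the throw-to-abort approach discussed above. The attribute names (:message/id, :message/version) are hypothetical, and this assumes an ion-style transaction function using the client API (on-prem you would use datomic.api inside the function instead):

```clojure
(require '[datomic.client.api :as d])

(defn transact-if-newer
  "Transaction function: returns tx-data asserting `msg` only when its
   :message/version is newer than what's already in the db; otherwise
   throws, which aborts the whole transaction (nothing is written)."
  [db {:message/keys [id version] :as msg}]
  (let [current (ffirst
                 (d/q '[:find ?v
                        :in $ ?id
                        :where
                        [?e :message/id ?id]
                        [?e :message/version ?v]]
                      db id))]
    (if (and current (>= current version))
      (throw (ex-info "Stale update; aborting transaction"
                      {:message/id id :current current :incoming version}))
      [msg])))
```

The caller catches the exception and skips the stale message; no empty transaction is recorded.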
Does Datomic support on-prem (with non-hosted storage) capability to evolve from a single node to a cluster as the need arises?
No. If I remember correctly the team said it's not OK to have the transactor and the storage on the same node... It's been a long time since I've looked at all this.
that's only (my guess) because you don't want both fighting for the same resources, but then I don't understand what you're asking: by node, you mean a transactor?
no, by node I mean server. I just remembered why I couldn't use Datomic... I would need to support customers who only want to run a single db server node, but may need to evolve. I was somehow hoping Datomic was capable of this, but I'm now remembering this was not a good fit.
I have many customers. I need a model where each can have their own DB and for that DB to be isolated to a single machine on my network. At the same time some customers may require their data to exist in house (their house), but are not going to be interested in managing a cluster (particularly at the start).
Essentially, Datomic is a good fit for my product as a SaaS, but when it comes to client installs, it's too much. And I would rather choose a DB that can support both models so that I am not doubling my efforts.
The name Dev alone doesn't give me a good feeling for a production install, which is why I asked if it was a supported model. btw, thnx for the responses.
I'll look into the free version. I remember it was once memory-only, but I think they changed that... thnx
Dev = the transactor itself serves an embedded H2 (SQL) db. This is the same as the "free" storage.
If you exceed what that can handle then you should have a separate storage anyway. Even a MySQL server can be used
I see, that really helps, so thnx. As for capacity, it'll depend on each client and their data, so that would need to be assessed at install time, but that would be the case with any db.
so if you want different Datomic dbs to be segregated at the storage level (e.g. different sql tables), you need at least one transactor per storage
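For reference, the storage choice shows up in the on-prem connection URI; roughly (db names, hosts, ports, and credentials below are placeholders):

```clojure
;; dev storage: the transactor serves an embedded H2 db
"datomic:dev://localhost:4334/my-db"

;; free storage: same embedded H2, used with the free transactor
"datomic:free://localhost:4334/my-db"

;; sql storage: a separate SQL server (e.g. MySQL) holds the data
"datomic:sql://my-db?jdbc:mysql://localhost:3306/datomic?user=datomic&password=datomic"
```

Moving from dev to sql storage is then largely a matter of provisioning the SQL storage, migrating the data, and pointing the transactor and peers at the new URI.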