
( "{:op :push}")

{:command-failed "{:op :push}",
 :causes ({:message "INSTANCE", :class NoSuchFieldError})}

Process finished with exit code 1
Why can't I push from the REPL?


Bump. Context: I'm writing a script that calls push and does some other stuff


Is there something that would cause an ion deployed to a query group to perpetually retrieve a cached db value? I have a deployed ion that I am able to transact with, and I am logging the :t value of the db-after and of (d/db (conn)). The db-after :t is increasing (as expected) after every transaction, but I always see the same (d/db (conn)) :t value despite transactions occurring. The only thing that brings the db up to date is a re-deploy. As far as I can tell I'm not caching anything on my side (the client, connection and d/db are all called without memoization).


This issue was fixed in the latest release


I would recommend updating to it and seeing if that helps


Problem solved, thanks marshall.


Eventually it seems to pick up a fresh :t but it takes ~5 mins to do so.


Another data point to add: when I try to query against the :db-after of a transaction I get exceptions like Database does not yet have t=257. Is this expected? From the docs on this page it would seem like the latency should be less than a second before being able to query against the new value, but I'm seeing latencies on the order of minutes.
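For reference, the documented way to avoid waiting on (d/db conn) at all is to query the basis returned by the transaction itself, or to sync to a known t. A minimal sketch with the client API (`conn` and the attribute name are assumptions for illustration):

```clojure
;; Sketch, client API (datomic.client.api). `conn` is an existing
;; connection; :message/content is an illustrative attribute.
(require '[datomic.client.api :as d])

(let [{:keys [db-after]} (d/transact conn {:tx-data [{:message/content "hi"}]})]
  ;; db-after is already at the new basis t, so this query cannot be stale
  (d/q '[:find ?mc :where [_ :message/content ?mc]] db-after))

;; From a process that only knows the basis t, sync returns a database
;; value whose t is at least that high:
(def caught-up-db (d/sync conn 257))
```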


Hi, I wonder what it is that I am doing wrong:

(def limited
  (let [sdb (db/db)
        mess (d/q {:query '[:find ?mc
                            :where [?e :message/content ?mc]]
                   :args [sdb]
                   :limit 2
                   :offset 0})]
    mess))

(def non-limited
  (let [sdb (db/db)
        mess (d/q '[:find ?mc
                    :where [?e :message/content ?mc]]
                  sdb)]
    mess))
The limited one will complain that the :find clause does not exist. All I want to do is read the data in chunks.
1. Caused by java.lang.IllegalArgumentException
   No :find clause specified


All the variables in the :where clauses that aren’t part of the :find clause will be unified. Since you only have 1 :where clause, there is nothing for ?e to unify with. Replacing ?e with a _ (underscore) should fix it.


@U0JPBB10W did you mean this:

(def limited
  (let [sdb (db/db)
        mess (d/q {:query '[:find ?mc
                            :where [_ :message/content ?mc]]
                   :args [sdb]
                   :limit 2
                   :offset 0})]
    mess))

still no luck


1. Caused by java.lang.IllegalArgumentException
   No :find clause specified


@UBC1Y2E01 What API are you using? client (`datomic.client.api`) or peer (`datomic.api`)?


It looks like you might be passing the argument map of the client api to the peer api.


I use datomic.api


@U963A21SL I think I know what is going on. I use q, but it looks like I should be using query from datomic.api as in here: Thanks for pointing me in the right direction!


Ok. datomic.api does not support :limit and :offset, according to the docs.


Yes, I will figure out another way 🙂 Thanks!
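One workaround sketch with the peer API, which has no :limit/:offset (names like sdb and :message/content follow the snippets above and are illustrative):

```clojure
;; Sketch: chunked reads with the peer API (datomic.api).
;; Note: d/q still realizes the full result set; drop/take only
;; trims it afterwards, so this is pagination of results, not of I/O.
(require '[datomic.api :as d])

(defn paged-messages
  [sdb offset limit]
  (->> (d/q '[:find ?mc
              :where [_ :message/content ?mc]]
            sdb)
       (drop offset)
       (take limit)))
```

For genuinely lazy chunking, walking the index with (d/datoms sdb :aevt :message/content) avoids realizing everything at once.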

Brian Abbott 15:03:30

I think it is from Cognitect directly

Brian Abbott 15:03:55

It would be cool if it could host pointers to other Datomic videos and chats as well... maybe through playlists?

Brian Abbott 15:03:12

It looks really old though - 6 years ago

Alex Miller (Clojure team) 15:03:43

I'm not sure if that is official stuff or not. The most recent official series of Datomic videos is at

Alex Miller (Clojure team) 16:03:22

and ClojureTV has many Clojure conference videos about Datomic


Is there an idiomatic way to “abandon” a transaction from within a transaction function? Say I am processing an external update feed, and in so doing, I want to go ahead and transact the message from a feed only if the message represents data that is more current than what’s in the db. If I return an empty list, a transaction still happens, which is fine, but I’m not sure what the implications are of adding an empty transaction to the system.


@okocim Throwing will cancel the transaction.


😅 thanks


I think @okocim's question is due to a problem with the documentation. The on-prem docs mention throwing an exception from a transaction function to abort the transaction, but I found no mention of it in the Cloud docs. I had to ask on here a few weeks ago whether it still works for Cloud. I think an update to the Cloud docs would be useful.
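For on-prem, a minimal sketch of the throw-to-abort pattern (the attribute names and the staleness rule here are illustrative, not from the thread):

```clojure
;; Sketch: a classic transaction function that aborts stale feed
;; updates. It runs inside the transactor; throwing cancels the
;; entire transaction. :feed/updated-at is an assumed attribute.
(require '[datomic.api :as d])

(def assert-newer
  {:db/ident :feed/assert-newer
   :db/fn (d/function
           '{:lang :clojure
             :params [db eid new-ts]
             :code (let [current (:feed/updated-at (datomic.api/entity db eid))]
                     (if (or (nil? current)
                             (.after ^java.util.Date new-ts current))
                       [[:db/add eid :feed/updated-at new-ts]]
                       ;; throwing here cancels the whole transaction
                       (throw (ex-info "stale update; cancelling transaction"
                                       {:eid eid :current current}))))})})
```

After transacting the function entity, it is invoked as transaction data, e.g. [[:feed/assert-newer feed-entity-id new-timestamp]].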


Does Datomic on-prem (with non-hosted storage) support evolving from a single node to a cluster as the need arises?


As I understand it, this was not supported, but I haven’t followed all the changes.


you can't do distributed writes if that is what you mean


No, if I remember correctly the team said it's not OK to have the transactor and the storage on the same node… It's been a long time since I've looked at all this.


that's only (my guess) because you don't want both fighting for the same resources, but then I don't understand what you're asking. By node, do you mean a transactor?


no, by node I mean server. I just remembered why I couldn't use Datomic… I would need to support customers who only want to run a single db server node, but may need to evolve. I was somehow hoping Datomic was capable of this, but I'm now remembering this was not a good fit.


what is a "single db server node"?


I have many customers. I need a model where each can have their own DB and for that DB to be isolated to a single machine on my network. At the same time some customers may require their data to exist in house (their house), but are not going to be interested in managing a cluster (particularly at the start).


Essentially, Datomic is a good fit for my product as a SaaS, but when it comes to client installs, it's too much. And I would rather choose a DB that can support both models so that I am not doubling my efforts.


Dev transactor isn’t good enough for in-house use?


You might even be able to use the free datomic (check the license)


The name Dev alone doesn’t give me a good feeling for a production install which is why I asked if it was a supported model. btw, thnx for the responses.


I’ll look into the free version, I remember it was once memory only, but I think they changed that… thnx


Dev = the transactor itself serves an embedded H2 (SQL) db. This is the same as the "free" storage
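For reference, a dev-mode transactor is configured with a properties file roughly along these lines (values mirror the sample template shipped with Datomic; the exact keys are an assumption, and a license key is still required):

```properties
# Sketch of a dev-mode transactor properties file (embedded H2 storage).
protocol=dev
host=localhost
port=4334
license-key=<your key>
# where the embedded storage lives
data-dir=data
# memory settings as in the sample template
memory-index-threshold=32m
memory-index-max=256m
object-cache-max=128m
```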


We use it internally for a shared staging server. It’s fine


If you exceed what that can handle then you should have a separate storage anyway. Even a MySQL server can be used


You need to get an estimate of your read and write capacity


I see, that really helps, so thnx. As for capacity it’ll depend on each client and their data so that would need to be assessed at install time, but that would be the case with any db.


you mean they want their data in a different storage?


the limitation with datomic is that a transactor cannot manage multiple storages


so if you want different datomic dbs to be segregated at the storage level (e.g. different sql tables), you need at least one transactor per storage


but one storage can have many datomic dbs in it, and can have multiple transactors connecting to it (one will be active and the rest will be hot spares), and can have as many peers as your storage can handle


does that clarify @tim792?