
hmm. i'm running into an issue using dev-local 0.9.235 when trying to import one of our cloud dbs.

  {:source ...,
   :dest   ...}
Importing...................Execution error (ExceptionInfo) at datomic.core.anomalies/throw-if-anom (anomalies.clj:94).
Item too large

java.util.concurrent.ExecutionException: clojure.lang.ExceptionInfo: Item too large {:cognitect.anomalies/category :cognitect.anomalies/incorrect, :cognitect.anomalies/message "Item too large", 24356559, 1000000, 23978149582305561}
23978149582305561 is an entity with an attribute that has a very large string value, and is unfortunately stuck in our history. is there a way around this?

Joe Lane 20:09:16

Hey @U0GC1C09L, can you import up to the bad transaction, transact all non-bad attrs of the bad transaction, then import again starting at t+1 of the bad transaction?
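Joe's approach could be sketched roughly like this. Everything here is hypothetical: datoms are modeled as plain [e a v] vectors, and the limit is taken from the 1000000 figure in the anomaly above.

```clojure
;; Hypothetical sketch of "transact all non-bad attrs of the bad
;; transaction": drop assertions whose string value exceeds the
;; dev-local item limit and keep the rest. Datoms are modeled as
;; plain [e a v] vectors; the limit comes from the anomaly message.
(def ^:const item-limit 1000000)

(defn too-large?
  "True when a datom carries a string value over the item limit."
  [[_e _a v]]
  (and (string? v) (> (count v) item-limit)))

(defn strip-oversized
  "Remove oversized datoms from one transaction's tx-data."
  [datoms]
  (vec (remove too-large? datoms)))
```

The surviving datoms would then be transacted into the dev-local db before resuming the import at t+1.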


hey Joe, thanks for the response. i suppose i could, but this is part of a larger workflow to backup and "restore" Cloud dbs. if possible i'd like to avoid coding in edge cases for specific databases, or catching exceptions and iterating the import process for "bad" datoms that clearly do exist in the history. i will if we need to, but this feels like an issue with dev-local and a clash between its constraints and the constraints of datomic cloud


i'm picturing some interesting cases for tracking all of the datoms in the skipped transaction, and then deciding how to handle future transactions against them as we replay the transaction log. for example: only replay retractions of skipped datoms when there has been an addition between a skipped transaction and the transaction being replayed. oof.


Hi all, the Datomic query docs say that you cannot use a database source within a rule. The implication of this is that you also cannot use built-in expressions like missing? in rules, is that correct?


Where do you see that? Rules are scoped to a single datasource and cannot realias, but you can invoke them with a different datasource ($ds rulename …) and inside the rule $ is available
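For illustration, a rule that matches on the implicit $, invoked once against the default db and once against another datasource (the attribute name is made up):

```clojure
;; Hypothetical example: inside the rule body, $ is whatever
;; datasource the caller prefixes the invocation with.
;; :user/status is a made-up attribute.
(def rules
  '[[(active? ?e)
     [?e :user/status :active]]])

(def query
  '[:find ?e
    :in $ $other %
    :where
    (active? ?e)           ; runs against $
    ($other active? ?e)])  ; same rule body, runs against $other
```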


Calling d/datoms returns an Iterable of datoms (Client API). For error handling, it points you to the namespace doc which states that all errors are reported via ex-info exceptions. My objective is to do a complete iteration through all datoms returned from d/datoms. My iteration (via reduce) made its way through a large number of datoms before throwing an exception while unchunking the d/datoms result (full stacktrace in thread). What is the recommended way to retry retryable anomalies thrown from a d/datoms chunk?
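No official recipe appears in-thread, but a generic wrapper that retries a thunk on retryable anomaly categories might look like this (the category set and backoff policy are assumptions; you may also need to re-open d/datoms from the last datom you saw, since the original iterable may not be resumable after a failure):

```clojure
;; Sketch of retrying a thunk that may throw an ex-info carrying a
;; retryable anomaly category. The category set and the linear
;; backoff are assumptions, not an official policy.
(def retryable?
  #{:cognitect.anomalies/busy
    :cognitect.anomalies/unavailable
    :cognitect.anomalies/interrupted})

(defn with-retries
  "Call f; on a retryable anomaly, back off and try again, up to n attempts."
  [n f]
  (loop [attempt 1]
    (let [result (try
                   {:ok (f)}
                   (catch clojure.lang.ExceptionInfo e
                     (if (and (< attempt n)
                              (retryable? (:cognitect.anomalies/category
                                           (ex-data e))))
                       {:retry e}
                       (throw e))))]
      (if (contains? result :ok)
        (:ok result)
        (do (Thread/sleep (* 100 attempt)) ; linear backoff
            (recur (inc attempt)))))))
```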


Hi. I finally deployed my first lambda ion from my Fulcro app. I have the latest Datomic Cloud set up. I am getting a connection refused when I try to invoke the Lambda function — it is trying to connect to a host in the VPC. How do I go about troubleshooting this? Is it an IAM problem or a problem with the VPC gateway?

Jakub Holý (HolyJak) 21:09:44

I suppose the target host's security group allows connections on any port from the VPC?


Can I assume that :db/txInstant is unique? I originally planned to save the t reference to an older db, but since I don't have a t->tx function anymore, I can't create it for older values


generally t and txes are interchangeable in any time-filtering functions


how do I point to an older point in time? should I use t or tx?!


You need a “real” tx if you want to look at the TX entity itself


but for things like as-of, tx-range, sync, etc, they accept T or TX


Is there a problem to have an entity pointing to a transaction?!


transactions are entities


why does the datomic client api not have t->tx and tx->t functions?


Can I use this?

(defn t->tx [t]
  (+ t 13194139533312))

(defn tx->t [tx]
  (- tx 13194139533312))

Alex Miller (Clojure team) 18:09:20

I'm no expert, but I don't think these things have that relationship in cloud, so no


I tried to do a :thing/as-t that points to a point in the past. But I can't use this because I need to create it for older entities and I don't have the t anymore. So I changed my approach: :thing/as-tx. Now it's easy to create thing entities for older entities in the DB, but it's hard for newer ones, since for newer ones I get the t from the db and I can't save the t value


I could create a :thing/as-of where sometimes it is a t and other times it is a tx. Is this a good idea?


Can we step back? what problem are you solving?


I need to create an entity that references another entity at an exact point in time.


Putting the modeling question aside, how do you decide on what moment in time?


something like: this report entity is generated from this entity at this db.


how do you arrive at “this db”?


At this moment, my code is:

(defn do-report
   [db id]
   .... {:tx-data [... {:report/db (:t db)}]})


(You never run that fn with a filtered (e.g. as-of) db?)


I agree not having t->tx is annoying, and I'm concerned by Alex's comment; it's a pretty fundamental relationship and difficult to imagine cloud being different


It's quite easy to write yourself (just some bit-masking), but Alex is giving me pause


however, you may be better off querying for a specific tx entity to use, then using that with an as-of; or you could use tx-range to find the transaction corresponding to the basis T and inspect its data for the :db/txInstant assertion


(def ^:const ^long MASK-42 (dec (bit-shift-left 1 42)))
(def ^:const ^long TX-PART-BITS (bit-shift-left 3 42))

(defn tx->t ^long [^long tx]
  (bit-and MASK-42 tx))

(defn t->tx ^long [^long t]
  (bit-or TX-PART-BITS (bit-and MASK-42 t)))


This definitely works for on-prem


The “tx-part-bits” is just the number 3 (= the entity-id of the “tx” partition) shifted over 42 bits
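A quick arithmetic check of that (on-prem layout, as noted above):

```clojure
;; The tx partition id (3) shifted over 42 bits yields the constant
;; used earlier in the thread; OR-ing in a t gives the tx id, and
;; masking the low 42 bits recovers t.
(bit-shift-left 3 42)                 ;=> 13194139533312
(bit-or (bit-shift-left 3 42) 1000)   ;=> 13194139534312
(bit-and (dec (bit-shift-left 1 42))
         13194139534312)              ;=> 1000
```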


d/entid-at on on-prem lets you compute entity-ids for arbitrary partitions