#datomic
2021-09-02
joshkh11:09:19

hmm. i'm running into an issue using dev-local 0.9.235 when trying to import one of our cloud dbs.

(dl/import-cloud
  {:source ...,
   :dest   ...})
Importing...................Execution error (ExceptionInfo) at datomic.core.anomalies/throw-if-anom (anomalies.clj:94).
Item too large

java.util.concurrent.ExecutionException: clojure.lang.ExceptionInfo: Item too large {:cognitect.anomalies/category :cognitect.anomalies/incorrect, :cognitect.anomalies/message "Item too large", :datomic.dev-local.log-representation/count 24356559, :datomic.dev-local.log-representation/limit 1000000, :datomic.dev-local.log-representation/datom-eid 23978149582305561}
23978149582305561 is an entity with an attribute that has a very large string value, and is unfortunately stuck in our history. is there a way around this?

Joe Lane20:09:16

Hey @U0GC1C09L, can you import up to the bad transaction, transact all non-bad attrs of the bad transaction, then import again starting at t+1 of the bad transaction?

joshkh10:09:46

hey Joe, thanks for the response. i suppose i could, but this is part of a larger workflow to backup and "restore" Cloud dbs. if possible i'd like to avoid coding in edge cases for specific databases, or catching exceptions and iterating the import process for "bad" datoms that clearly do exist in the history. i will if we need to, but this feels like an issue with dev-local and a clash between its constraints and the constraints of datomic cloud

joshkh10:09:20

i'm picturing some interesting cases for tracking all of the datoms in the skipped transaction, and then deciding how to handle future transactions against them as we replay the transaction log. for example: only replay retractions of skipped datoms when there has been an addition between a skipped transaction and the transaction being replayed. oof.

zalky17:09:47

Hi all, the Datomic query docs say that you cannot use a database source within a rule. The implication of this is that you also cannot use built-in expressions like missing? in rules, is that correct?

favila17:09:27

Where do you see that? Rules are scoped to a single datasource and cannot realias, but you can invoke them with a different datasource ($ds rulename …) and inside the rule $ is available
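For example (a minimal sketch, hypothetical attribute names): a rule can use the implicit $ datasource and built-ins like missing?, and the caller picks which datasource the rule runs against:

(def rules
  '[[(no-email? ?e)
     [?e :person/name]
     [(missing? $ ?e :person/email)]]])

;; run the rule against the default datasource
(d/q '[:find ?e
       :in $ %
       :where (no-email? ?e)]
     db rules)

;; run the same rule against a second datasource by prefixing the rule call
(d/q '[:find ?e
       :in $ $other %
       :where ($other no-email? ?e)]
     db other-db rules)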

kenny17:09:14

Calling d/datoms returns an Iterable of datoms (Client API). For error handling, it points you to the namespace doc which states that all errors are reported via ex-info exceptions. My objective is to do a complete iteration through all datoms returned from d/datoms. My iteration (via reduce) made its way through a large number of datoms before throwing an exception while unchunking the d/datoms result (full stacktrace in thread). What is the recommended way to retry retryable anomalies thrown from a d/datoms chunk?
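(For reference, a minimal sketch of the kind of full iteration described above, Client API, hypothetical index choice; a retryable anomaly surfaces as an ex-info thrown mid-reduce:)

;; count every datom returned by d/datoms
(reduce (fn [n _datom] (inc n))
        0
        (d/datoms db {:index :eavt}))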

hadils17:09:16

Hi. I finally deployed my first lambda ion from my Fulcro app. I have the latest Datomic Cloud set up. I am getting a connection refused when I try to invoke the Lambda function — it is trying to connect to a host in the VPC. How do I go about troubleshooting this? Is it an IAM problem or a problem with the VPC gateway?

Jakub Holý (HolyJak)21:09:44

I suppose the target host's security group allows connections on any port from the VPC?

souenzzo17:09:37

Can I assume that :db/txInstant is unique? I originally planned to save the t reference to an older db, but since I don't have a t->tx function anymore, I can't create it for older values

favila17:09:34

generally t and txes are interchangeable in any time-filtering functions

souenzzo17:09:34

how do I point to an older point in time? should I use t or tx?!

favila17:09:53

You need a “real” tx if you want to look at the TX entity itself

favila17:09:24

but for things like as-of, tx-range, sync, etc, they accept T or TX
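for example (hypothetical values), either form works with as-of:

(d/as-of db 1000)              ; a basis t
(d/as-of db 13194139534312)    ; the corresponding tx entity id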

souenzzo17:09:24

Is there a problem with having an entity point to a transaction?!

favila18:09:58

transactions are entities

souenzzo18:09:48

why does the datomic client api not have t->tx and tx->t functions?

souenzzo18:09:13

Can I use this?

(defn t->tx
  [t]
  (+ t 13194139533312))
(defn tx->t
  [tx]
  (- tx 13194139533312))

Alex Miller (Clojure team)18:09:20

I'm no expert, but I don't think these things have that relationship in cloud, so no

souenzzo18:09:34

I tried a :thing/as-t that points to a point in the past, but I can't use this because I need to create it for older entities and I don't have the t anymore. So I changed my approach to :thing/as-tx. Now it's easy to create thing entities for older entities in the DB, but it's hard to create them for newer ones, since for newer ones I get the t from the db and I can't save the t value

souenzzo18:09:13

I could create a :thing/as-of where sometimes it is a t and other times it is a tx. Is this a good idea?

favila18:09:53

Can we step back? what problem are you solving?

souenzzo18:09:49

I need to create an entity that references another entity at an exact point in time.

favila18:09:27

Putting the modeling question aside, how do you decide on what moment in time?

souenzzo18:09:26

something like: this report entity is generated from this entity at this db.

favila18:09:10

how do you arrive at “this db”?

souenzzo18:09:58

At this moment, my code is:

(defn do-report
   [db id]
   .... {:tx-data [... {:report/db (:t db)}]})

favila18:09:27

(You never run that fn with a filtered (e.g. as-of) db?)

favila18:09:18

I agree not having t->tx is annoying, and I’m concerned by alex’s comment, it’s a pretty fundamental relationship and difficult to imagine cloud being different

favila18:09:06

It’s quite easy to write yourself (just some bit-masking) but alex is giving me pause

favila18:09:56

however, you may be better off querying for a specific tx entity to use, then using that with an as-of; or you could use tx-range to find the transaction corresponding to the basis T and inspect its data for the :db/txInstant assertion
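a minimal sketch of the tx-range approach (Client API, assuming a connection conn):

(let [db      (d/db conn)
      basis-t (:t db)
      ;; the single transaction whose t equals the db's basis t
      tx      (first (d/tx-range conn {:start basis-t
                                       :end   (inc basis-t)}))
      ;; any datom in that transaction carries the tx entity id
      tx-eid  (:tx (first (:data tx)))]
  (d/pull db '[:db/id :db/txInstant] tx-eid))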

favila18:09:37

(def ^:const ^long MASK-42
  2r000000000000000000000111111111111111111111111111111111111111111)
(def ^:const ^long TX-PART-BITS
  2r000000000000000000011000000000000000000000000000000000000000000)

(defn tx->t ^long [^long t]
  (bit-and MASK-42 t))

(defn t->tx ^long [^long t]
  (bit-or TX-PART-BITS (bit-and MASK-42 t)))
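for example, given the on-prem entity-id layout these assume:

(t->tx 1000)            ;=> 13194139534312
(tx->t 13194139534312)  ;=> 1000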

favila18:09:51

This definitely works for on-prem

favila18:09:42

The “tx-part-bits” is just the number 3 (= the entity-id of the “tx” partition) shifted over 42 bits

favila18:09:22

d/entid-at on on-prem lets you compute entity-ids for arbitrary partitions
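for example (on-prem peer API, hypothetical t value):

(require '[datomic.api :as d])

(d/entid-at db :db.part/tx 1000) ; fabricated entity id in the tx partition for t 1000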