This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2021-09-02
Channels
- # aleph (25)
- # announcements (17)
- # aws (2)
- # babashka (72)
- # beginners (44)
- # calva (6)
- # cider (3)
- # clj-kondo (109)
- # cljfx (1)
- # cljsrn (31)
- # clojure (151)
- # clojure-austin (1)
- # clojure-europe (36)
- # clojure-nl (5)
- # clojure-norway (2)
- # clojure-spec (17)
- # clojure-uk (12)
- # clojurescript (74)
- # cursive (57)
- # data-science (1)
- # datascript (28)
- # datomic (40)
- # depstar (15)
- # gratitude (3)
- # helix (3)
- # introduce-yourself (1)
- # joker (1)
- # kaocha (2)
- # leiningen (2)
- # lsp (70)
- # lumo (2)
- # malli (2)
- # meander (4)
- # off-topic (10)
- # polylith (27)
- # quil (4)
- # re-frame (18)
- # reagent (24)
- # ring (4)
- # rum (1)
- # shadow-cljs (102)
- # sql (2)
- # tools-deps (48)
- # web-security (8)
- # xtdb (5)
hmm. i'm running into an issue using dev-local 0.9.235 when trying to import one of our cloud dbs.
(dl/import-cloud
{:source ...,
:dest ...})
Importing...................Execution error (ExceptionInfo) at datomic.core.anomalies/throw-if-anom (anomalies.clj:94).
Item too large
java.util.concurrent.ExecutionException: clojure.lang.ExceptionInfo: Item too large {:cognitect.anomalies/category :cognitect.anomalies/incorrect, :cognitect.anomalies/message "Item too large", :datomic.dev-local.log-representation/count 24356559, :datomic.dev-local.log-representation/limit 1000000, :datomic.dev-local.log-representation/datom-eid 23978149582305561}
23978149582305561 is an entity with an attribute that has a very large string value, and is unfortunately stuck in our history. is there a way around this?
Hey @U0GC1C09L, can you import up to the bad transaction, transact all non-bad attrs of the bad transaction, then import again starting at t+1 of the bad transaction?
hey Joe, thanks for the response. i suppose i could, but this is part of a larger workflow to backup and "restore" Cloud dbs. if possible i'd like to avoid coding in edge cases for specific databases, or catching exceptions and iterating the import process for "bad" datoms that clearly do exist in the history. i will if we need to, but this feels like an issue with dev-local and a clash between its constraints and the constraints of datomic cloud
i'm picturing some interesting cases for tracking all of the datoms in the skipped transaction, and then deciding how to handle future transactions against them as we replay the transaction log. for example: only replay retractions of skipped datoms when there has been an addition between a skipped transaction and the transaction being replayed. oof.
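A rough sketch of the middle step Joe suggests, using the Client API and placeholder names (source-conn, bad-t for the t of the offending transaction, bad-eid and bad-attr-id for the oversized entity and attribute); entity and attribute ids read from the source would still have to be translated into idents/tempids before they could be re-transacted into the dev-local dest:
(require '[datomic.client.api :as d])

(defn non-bad-datoms
  "Datoms of transaction `bad-t`, minus the oversized assertion."
  [source-conn bad-t bad-eid bad-attr-id]
  (let [{:keys [data]} (first (d/tx-range source-conn {:start bad-t :end (inc bad-t)}))]
    (remove #(and (= bad-eid (:e %)) (= bad-attr-id (:a %))) data)))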
Hi all, the Datomic query docs say that you cannot use a database source within a rule. The implication of this is that you also cannot use built-in expressions like missing? in rules, is that correct?
Where do you see that? Rules are scoped to a single datasource and cannot realias, but you can invoke them with a different datasource ($ds rulename …)
and inside the rule $ is available
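A small sketch of that point, with hypothetical :user/name and :user/email attributes and placeholder db / older-db values: inside the rule, $ (including in built-ins such as missing?) refers to whichever datasource the rule was invoked with.
(require '[datomic.client.api :as d])

(def rules
  '[[(no-email ?e ?name)
     [?e :user/name ?name]
     [(missing? $ ?e :user/email)]]]) ; `$` is in scope inside the rule

;; run the rule against the default datasource...
(d/q {:query '[:find ?name :in $ % :where (no-email ?e ?name)]
      :args [db rules]})

;; ...or against a second datasource by prefixing the rule invocation with it
(d/q {:query '[:find ?name :in $ $older % :where ($older no-email ?e ?name)]
      :args [db older-db rules]})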
Calling d/datoms returns an Iterable of datoms (Client API). For error handling, it points you to the namespace doc which states that all errors are reported via ex-info exceptions. My objective is to do a complete iteration through all datoms returned from d/datoms. My iteration (via reduce) made its way through a large number of datoms before throwing an exception while unchunking the d/datoms result (full stacktrace in thread). What is the recommended way to retry retryable anomalies thrown from a d/datoms chunk?
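Roughly the shape of that scan, as a sketch (Client API; db is a placeholder, and :limit -1 is assumed here to lift the default 1000-datom cap so the iteration covers everything):
(require '[datomic.client.api :as d])

(defn count-all-datoms
  "Reduce across every datom in the :eavt index."
  [db]
  (reduce (fn [n _datom] (inc n))
          0
          (d/datoms db {:index :eavt :limit -1})))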
Hi. I finally deployed my first lambda ion from my Fulcro app. I have the latest Datomic Cloud set up. I am getting a connection refused when I try to invoke the Lambda function; it is trying to connect to a host in the VPC. How do I go about troubleshooting this? Is it an IAM problem or a problem with the VPC gateway?
I suppose the target host's security group allows connections on any port from the VPC?
Can I assume that :db/txInstant is unique?
I planned originally to save the t reference to an older db, but since I don't have a t->tx function anymore, I can't create it for older values
That’s how this technique is possible: https://docs.datomic.com/on-prem/best-practices.html#add-facts-about-transaction-entity
Can I use this?
(defn t->tx
  [t]
  ;; 13194139533312 = 3 * 2^42, the on-prem transaction-partition offset
  (+ t 13194139533312))

(defn tx->t
  [tx]
  (- tx 13194139533312))
I'm no expert, but I don't think these things have that relationship in cloud, so no
I tried to do a :thing/as-t that points to a point in the past. But I can't use this because I need to create it for older entities and I don't have the t anymore
So I changed my approach: :thing/as-tx. Now it's easy to create thing entities for older entities in the DB, but it's hard to create them for newer ones, since for newer ones I get the t from the db and I can't save the t value
I can create a :thing/as-of where sometimes it is a t and other times it is a tx. Is this a good idea?
Are you accidentally falling into this trap? https://vvvvalvalval.github.io/posts/2017-07-08-Datomic-this-is-not-the-history-youre-looking-for.html
At this moment, my code is:
(defn do-report
  [db id]
  .... {:tx-data [... {:report/db (:t db)}]})
I agree not having t->tx is annoying, and I'm concerned by Alex's comment; it's a pretty fundamental relationship and difficult to imagine cloud being different
however, you may be better off querying for a specific tx entity to use, then using that with an as-of; or you could use tx-range to find the transaction corresponding to the basis T and inspect its data for the :db/txInstant assertion
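A hedged sketch of the tx-range option (Client API; conn and basis-t are placeholders, and it assumes the only instant-valued datom asserted on the transaction entity itself is its :db/txInstant):
(require '[datomic.client.api :as d])

(defn tx-instant-at
  "The :db/txInstant of the transaction whose t is basis-t, or nil."
  [conn basis-t]
  (when-let [{:keys [data]} (first (d/tx-range conn {:start basis-t :end (inc basis-t)}))]
    (some (fn [{:keys [e tx v]}]
            (when (and (= e tx) (inst? v)) v))
          data)))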
;; on-prem entity ids: the low 42 bits hold the t value
(def ^:const ^long MASK-42
2r000000000000000000000111111111111111111111111111111111111111111)
;; bits 42-43 set: 3 * 2^42 = 13194139533312, the on-prem transaction partition
(def ^:const ^long TX-PART-BITS
2r000000000000000000011000000000000000000000000000000000000000000)
;; mask off the partition bits to recover t from a tx entity id
(defn tx->t ^long [^long tx]
  (bit-and MASK-42 tx))
;; set the transaction-partition bits on a t to get the tx entity id
(defn t->tx ^long [^long t]
  (bit-or TX-PART-BITS (bit-and MASK-42 t)))
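As a quick sanity check, on-prem these should line up with datomic.api/t->tx and datomic.api/tx->t:
(t->tx 1000)            ;; => 13194139534312
(tx->t 13194139534312)  ;; => 1000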