This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2023-12-20
Channels
- # adventofcode (23)
- # announcements (4)
- # babashka (1)
- # beginners (37)
- # biff (2)
- # calva (1)
- # cider (19)
- # clj-kondo (11)
- # clojure (45)
- # clojure-bay-area (2)
- # clojure-europe (12)
- # clojure-nl (1)
- # clojure-norway (15)
- # clojure-uk (2)
- # clojurescript (8)
- # conjure (1)
- # cursive (17)
- # datomic (11)
- # garden (1)
- # graalvm (4)
- # hyperfiddle (21)
- # java (10)
- # jobs (3)
- # lsp (23)
- # off-topic (18)
- # polylith (2)
- # re-frame (4)
- # releases (1)
- # remote-jobs (3)
- # rewrite-clj (4)
- # squint (44)
- # uncomplicate (1)
- # xtdb (84)
In XT1, I'd be looking at transaction functions, in the cases where you really need to enforce it; in XT2, you also have assert-exists and assert-not-exists DML operations, which can check for the presence/absence of certain documents
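As a rough sketch of what that XT2 approach could look like, based purely on the description above (the exact op names and query shape in current v2 snapshots may well differ, and the tables/fields here are hypothetical):

```clojure
;; Before putting an order that references a user, assert that the user
;; document exists; the transaction aborts if the assertion fails.
;; (:users / :orders tables and field names are made up for illustration.)
(xt/submit-tx node
  [[:assert-exists '(from :users [{:xt/id "user-1"}])]
   [:put :orders {:xt/id "order-1", :order/user "user-1"}]])
```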
Hey @U0545PBND, could you please describe the scenarios you're most concerned about enforcing? Are you needing referential integrity across time?
No, I just need it at a point in time. What Datomic has would suffice. Basically I want the database to tell me when I have messed up on references (i.e., deleting something that is still referenced, or adding a reference that doesn’t exist at this point in time). I’m interested in replacing SQL with XTDB or Datomic for a new project I’m working on, and I really, really want to have referential integrity similar to foreign keys in SQL. Either enforced by the db itself, or via a really nice library that does it. I wrote a small something using XTDB v1 transactor functions that works (only got put at the moment). But it’s a bit clunky and I was wondering if someone knew of a good solution already
if you want to share your small transaction function example I'm sure others might appreciate seeing it and it might spur others to share also
The transactor function:
;; :with-schema/put
{:xt/id :with-schema/put
:xt/fn '(fn [node schema-name e timestamp]
(try
(let [db (xtdb.api/db node)
{malli-schema :malli/schema
db-schema :db/schema :as schema}
(xtdb.api/entity db schema-name)]
(when (nil? schema)
(swap! novi.xt/tx-log
update timestamp conj
{:db.error/type :missing-schema
:missing-schema schema-name
:message (format "Schema %s does not exist in the database" (str schema-name))}))
(when malli-schema
(when-not (malli.core/validate malli-schema e)
(swap! novi.xt/tx-log update timestamp conj
(assoc (malli.core/explain malli-schema e)
:db.error/type :malli))
(throw (ex-info "Invalid malli schema" {:schema/name schema-name
:entity e}))))
(when db-schema
(doseq [{:keys [:db/ident :db/unique :db.ref/path] :as current-schema} db-schema]
(when unique
(when-let [eid-uniques
(->> (xtdb.api/q db {:find ['?e]
:where [['?e ident (get e ident)]]})
(map first)
(into #{}))]
(when (and (> (count eid-uniques) 0)
(not (eid-uniques (:xt/id e))))
(swap! novi.xt/tx-log update-in
[timestamp :db/unique] conj
{:xt/id (:xt/id e)
:db.error/type :db/unique
:conflicting/xtids eid-uniques
ident (get e ident)
:schema current-schema})
(throw (ex-info "Invalid db schema" {})))))
(when path
(let [query '{:find [?target-id]
:in [[?target-id ...]]
:where [[?target-id :xt/id]]}
target-ids (reduce (fn [entity lens]
(let [v (get entity lens)]
(cond (sequential? entity)
(reduced (map lens entity))
(sequential? v)
v
(map? v)
v
:else
(reduced v))))
e path)]
(when (and (seq target-ids)
(empty? (xtdb.api/q db query target-ids)))
(swap! novi.xt/tx-log update timestamp
conj {:db.error/type :db.ref/path
:msg "Unable to find the reference(s)"
:reference-ids target-ids
:path path})))))))
(catch Exception e
(taoensso.timbre/error "Unable to run :with-schema/put" {:exception e})
(throw (ex-info "Unable to execute :with-schema/put" {}))))
[[:xtdb.api/put e]])}
The put function:
(defonce tx-log (atom {}))
(defn put [node schema-name e]
(let [timestamp (java.lang.System/nanoTime)
tx (xt/submit-tx node [[::xt/fn :with-schema/put schema-name e timestamp]])]
(xt/await-tx node tx)
(let [report (get @tx-log timestamp)]
(swap! tx-log dissoc timestamp)
(when report
(throw (ex-info "XT Transaction errors" report)))
tx)))
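For context, hypothetical usage of the wrapper looks something like this (assuming a started XT1 node with the :with-schema/put transaction function and the :novi/user schema document from the migration example below already transacted; the entity values are made up):

```clojure
;; Submit through the schema-checking wrapper rather than xt/submit-tx
;; directly. The wrapper correlates any validation report via the
;; nanoTime timestamp it passes into the transaction function.
(put node :novi/user
     {:xt/id 1
      :user/email "ada@example.com"})
;; On success, returns the tx map from submit-tx.
;; On a schema violation, the report collected under that timestamp in
;; the tx-log atom is thrown as (ex-info "XT Transaction errors" report).
```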
Example of a schema migration
;; :novi/user
{:xt/id :novi/user
:malli/schema novi.specs.user/User
:db/schema [{:db/ident :user/email
:db/unique true}
{:db/ident :team/members
:db.ref/path [:team/members :team.member/id]}]}
It’s set up this way since there was no way to get information out of a transactor function via throwing exceptions.
I also built in malli checks, which you could remove or choose to keep.
Haven’t really done any performance checks on how much slower it is, but the little I have tried with it suggests it’ll be more than fine for smaller applications.
Good morning! I have some more questions regarding XTQL, I hope you don’t mind…
First question: Is it possible to dig into maps with a query? E.g. like get-in.
Hey, there is currently nothing in the standard library to go down multiple levels, but that could be added down the road.
The following shows how to unnest something one level deep.
(xt/q node '(-> (rel [{:x {:y 1}}] [x])
(with {:z (. x y)})))
;; => [{:x {:y 1}, :z 1}]
Okay, it seems the recommended approach would be to keep documents rather flat or unravel them into multiple „tables“, right?
Yes, important information should sit top-level, because this is also what we currently have metadata about, and hence it is likely to produce more performant queries.
I'd make the decision about what to separate out based on update granularity, tbh - if you frequently find yourself wanting to update the nested structures, and it has a natural entity primary key, that's the point at which I'd be considering a separate table.
Second question: I was planning on querying all timestamps (valid-time) on which a certain field on a document has changed. How would such a query look like in XTQL?
best bet for this will be 'window functions' (which we don't have yet, but very much on the roadmap) - in SQL, you can achieve something like this with:
SELECT field_value AS old_value,
       LEAD(field_value) OVER (PARTITION BY xt$id ORDER BY xt$valid_from) AS new_value,
       xt$valid_from,
       xt$valid_to
FROM your_table
WHERE field_value <> LEAD(field_value) OVER (PARTITION BY xt$id ORDER BY xt$valid_from)
there're ways you can do this without window functions, using self joins, but I recall that always being relatively convoluted
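For illustration, a self-join version might look like the following (and indeed it is convoluted); it reuses the same hypothetical table/column names as the window-function example:

```sql
-- For each version of a row, find the immediately-following version by
-- self-joining on xt$id and taking the minimum later xt$valid_from,
-- then keep only the rows where the field actually changed.
SELECT prev.field_value AS old_value,
       next.field_value AS new_value,
       next.xt$valid_from
FROM your_table prev
JOIN your_table next
  ON next.xt$id = prev.xt$id
 AND next.xt$valid_from = (SELECT MIN(n2.xt$valid_from)
                           FROM your_table n2
                           WHERE n2.xt$id = prev.xt$id
                             AND n2.xt$valid_from > prev.xt$valid_from)
WHERE prev.field_value <> next.field_value
```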
Okay, thank you! If it was just about finding changes of the whole document, would it be enough to query xt$valid_from? This would assume that documents without changes are not stored / redundant entries are removed, which is probably not the case, right? Every xt/put stores the document, even if it did not change (ignoring internal optimization, which is transparent to the user).
> Every xt/put stores the document, even if it did not change
it does, yes - we store the fact that you've re-asserted it. this is partly because, with bitemporality, it's not always the case that you can coalesce together two updates in this way, particularly if you have back-/forward-in-time updates
This is kind of a generic/high-level question. I just learned that there has been some significant work on enabling bitemporality in PostgreSQL. Case in point is this project, among others: https://pgxn.org/dist/temporal_tables/ How does that compare against XTDB’s bitemporality? Of course, there’s the obvious fact that bitemporality in XTDB is native while that of PostgreSQL is kind of an afterthought. But what does that mean in practical terms? For example, what capabilities/advantages does a person using a temporal extension to Postgres ‘miss out’ on by not using XTDB instead?
Hey @U9FEN7GF6 the only Postgres support for bitemporal tables I've seen is with https://github.com/hettie-d/pg_bitemporal (the extension you linked is system-time only). The biggest complexities with those approaches are that the queries become very hard to reason about when you have joins across many bitemporal tables, as do the indexes. Schema migration is also very complex. XTDB aims to avoid/simplify all that with good defaults
I know the author of this blog post is working on a patch to bring valid-time support into Postgres https://illuminatedcomputing.com/posts/2019/08/sql2011-survey/ - but really I'm not sure how successfully Postgres can be retrofitted to make using this stuff either easy or desirable
Thanks for the presentation, gents! Had to drop off early, but had some questions: https://discuss.xtdb.com/t/ann-upcoming-a-first-look-at-xtdb-v2-live-session/310/4
cheers @U03CPPKDXBL - replied https://discuss.xtdb.com/t/ann-upcoming-a-first-look-at-xtdb-v2-live-session/310/6?u=jarohen 🙂
In XTDB v2, I’m having trouble getting a query plan.
In case it matters, I’m running with SNAPSHOT versions and the latest docker image
cool. I’ll switch that until the bug is fixed. Thanks!
I’m trying to simulate an event log in XTDB. Each document includes an entity id, a timestamp and an arbitrary set of key value pairs. At query time, I want to join the most recent value of some key value pairs on the eid.
I expect the result to be
[{:eid "abc", :first "mark", :city "Houston"} {:eid "xyz", :first "Andrea", :city "Houston"}]
But, instead, I get
[{:eid "abc", :first "mark", :city "Houston"} {:eid "xyz", :first nil, :city nil}]
I’m sure I’m doing something silly but I don’t see my mistake
@U2845S9KL if you run the inner queries of each of the left joins individually, do you get the results you'd expect?
I suspect you could probably make it work this way, but I also might be tempted to get the tables and bitemporality working in your favour. could you, for example, have a cities table, use the eid as its xt/id, use the event time as its valid from, and then it's 'just' a current time query on each table?
alternatively, keep it all in one table, perform updates on it for the changed fields, then it should also be a current time query? then query the table for all time if you need the events back?
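As a sketch of the 'cities table' idea (hypothetical table/column names; the XTQL from syntax follows the earlier unnest example and may have changed across v2 snapshots):

```clojure
;; If each city row uses the domain eid as :xt/id and the event time as
;; its valid-from, then "most recent value" is just a current-time query:
(xt/q node '(from :cities [xt/id city]))
;; and joining several such tables stays a plain current-time join
;; rather than a per-key latest-timestamp computation.
```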
I considered that but there’s a problem: These documents are coming from remote agents that have no knowledge of the data schema. The schema is arbitrary and imposed at query time. The xt/id
is simply a unique identifier of a statement of fact that the agent noticed and the eid
is coming from the domain and only known as a primary key at query time.
I’m overstating my requirements just a bit: the schema is known before query time but unknown at the time the data is ingested and, more importantly, the schema changes over time. One possibility is that the data is ingested as the agents report it and then rewritten to a new table when the schema is defined, but that approach seems pretty cumbersome.
so when you get the first update in, is this the agent saying "the doc keyed by abc now has first:mark"?
No, it does not. The agents are monitoring other applications with their own information schemas.
The only thing the agent knows is how to obtain key-value pairs from those applications and the time the data is collected
ah, ok, and so the only time you know how to link these statements together into a single real-world entity is at query time?
if so, what happens if the agent doesn't provide enough to get its fact linked in correctly? or am I missing something?
You are not missing anything. That situation is possible. In that case, we either treat the missing data as nil or we cannot link the data
Currently, I use traditional relational database for this work and my queries are filled with left joins for exactly that reason
Yes. I don't mind that but how to get the most recent value for a given key attribute and value attribute?
my first thought would be window functions (although we don't have them yet) - first_value(city) filter (where city is not null) over (partition by eid order by t desc)
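Spelled out over the event-log shape described earlier (table and column names are hypothetical, and this uses a ROW_NUMBER formulation rather than the FILTER clause, since FILTER isn't generally valid on pure window functions):

```sql
-- Latest non-null value of one attribute per entity: rank each
-- entity's non-null reports by timestamp descending, keep the newest.
SELECT eid, city
FROM (SELECT eid, city,
             ROW_NUMBER() OVER (PARTITION BY eid ORDER BY t DESC) AS rn
      FROM events
      WHERE city IS NOT NULL) latest
WHERE rn = 1
```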
Ha. Yes, in addition to left joins, my current queries are filled with last-value window functions
To make those efficient, I think a compound index is necessary: the partition key plus the timestamp.
Yeah, that was going to be my next question: how to emulate a compound index
I thought there was a way to do that in v1. Something like a vector of values but maybe I'm wrong about that
a map of values would get you close, but that's an identity index not a range index (as you'd want for your timestamp ordering)
we can't use the XT bitemp index (which is sorted by timestamp) because you don't have the PK
Yes, what I really want is an identity+range index, but a combined b-tree is probably plenty good enough
the bitemp index is the only sorted index atm - the content indices are all hash indices for the time being
My less-preferred approach would be to rewrite the agent docs in XTDB's preferred format, but that would effectively double the ingestion load. My agents send about 1000 docs per second.
I did not see your performance slide at yesterday's presentation (wink, wink) but I feel like I'd be pushing limits here
One necessary ingredient to make this approach work would be some kind of listener interface to know when to rewrite new docs.
Any plans for something like that?
Ok. All this is experimental right now for me. I'm unhappy with my current database and looking for options but I am in no rush to make a change
Thanks for all your input. I'd like xtdb to be a viable solution for my use case. I'll keep an eye on your progress 🙂
sounds good 👌 if you don't mind being on our early adopters list (cc @U899JBRPF) it'd be great to work this through with you as we go?