This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2017-07-05
Does Datomic help maintain any kind of schema or relationships among one's data, or does it simply store and retrieve keyed maps in the underlying storage, leaving any data connections to user code?
@matan have you had a chance to read http://docs.datomic.com/schema.html yet?
@val_waeselynck The point would be to make it seem like writes complete faster than they actually do. Like I said, it's more a thought exercise than anything else. Optimistic updates are sometimes used in UI design to make it seem as if things are faster than they are. You show the update as completed to the user, even as it is being sent to the backend.
Facebook appears to do something akin to this. Your post is more or less immediately visible to whatever users happen to be provisioned to the same server/cluster as you, while it can take some time before the post actually propagates to other servers/clusters.
@potetm So, let’s say that we’re running a forum where a user tries to create a new thread. Checks are made to make sure that the user is posting somewhere where they are allowed to post, and then the data is sent off to the transactor. What could make the transactor reject the data, and how would we normally deal with it?
> Does datomic help maintaining any kind of schema or relationships @matan yes, Datomic is very good at that. The schema is explicit, although more flexible than SQL databases. See http://docs.datomic.com/schema.html
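To illustrate (a hypothetical sketch, not from the thread itself): attributes are defined by transacting plain maps, and a `:db.type/ref` attribute is how a relationship between entities is expressed. The `:post/*` attribute names here are made up for the example.

```clojure
;; Hypothetical schema sketch. :post/title and :post/author are
;; illustrative attribute names; a :db.type/ref attribute models a
;; relationship to another entity.
(d/transact conn
  [{:db/ident       :post/title
    :db/valueType   :db.type/string
    :db/cardinality :db.cardinality/one}
   {:db/ident       :post/author
    :db/valueType   :db.type/ref
    :db/cardinality :db.cardinality/one}])
```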
@matan http://www.learndatalogtoday.org/ is good at showing how expressive Datomic querying is.
@henrik I see; well this approach definitely has a lot of challenges, both in managing the state location and in orchestrating subsequent writes
thanks @gws @val_waeselynck for pointing me in all the right directions!
okay, so with regard to schema, right, Datomic uses a concept it calls schema to describe the allowed attributes in the database (and this schema may leave dead data behind when altered). Hope I've followed the right lingo so far.
But what about relationships within the data? as I understand the unit of storage is a datom: > Each datom is an addition or retraction of a relation between an entity, an attribute, a value, and a transaction.
And as I understand, unlike e.g. a (legacy) RDBMS, Datomic incorporates no enforcement of constraints on relations between datoms; am I correct so far?
Hi, quick question. Can I get the :db/txInstant from (d/entity db 17592186046014), or is it only possible with datalog?
@aiser you can't reach the transaction directly through an entity. If you don't want to use a datalog query for some reason, you can use the datoms api to get the tx entity, then get the tx instant from that: (->> (d/datoms db :eavt 17592186046014) first :t ((partial d/entity db)) :db/txInstant)
Cheers - I have this error message IllegalArgumentException No matching clause: :t datomic.db.Datum (db.clj:326)
- I'll dig into it
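For what it's worth, the error above comes from keyword-accessing the datom with :t; datoms expose :e, :a, :v, :tx, and :added. A corrected version along these lines should work (a sketch, assuming the same db value and entity id as above):

```clojure
;; Datoms support keyword access via :e :a :v :tx :added, not :t.
(->> (d/datoms db :eavt 17592186046014)
     first
     :tx                ;; entity id of the transaction
     (d/entity db)
     :db/txInstant)
```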
Hello! Quick question: why is there a distinction between “t values” and transaction IDs in Datomic?
Also, is it possible to keep the transaction IDs and/or t values the same after a complete backup restore?
e.g. regarding my second question, I would like to know if I can safely store transaction IDs and/or t values outside the system to “point at particular points in time”, and whether those references will be broken in case of disaster recovery
@pesterhazy we managed to solve our datomic query in 1 go using :with
[:find (max ?revision)
:with ?id
:in $ ?id
:where [?m :appRegistry/id ?id]
[?m :appRegistry/revisionNumber ?revision]]
this returns what we wanted: the app with the max revision for a given id. Cool!
Haven't used with yet
@henrik What happens when you get a network blip between the peer and the txor, and your transaction never makes it to the txor?
@potetm That’s a good example! So I guess, silently retrying behind the scenes in that case?
I'm looking for an intuitive way to update something like a user record. The user can fill out 50 fields which are submitted. If something has changed, it flies straight through without issue. But if something has been emptied, I can't just submit a map with a nil value; I need to make a separate retract transaction. Is there some idiomatic tool for simplifying that workflow?
In both cases we can know that the process failed, but in one case we made it seem like everything was a-OK before we were entirely sure.
Right. You would have to establish an understanding that things "aren't quite done until I get the green check mark"
- Addendum: Something akin to nil making an automatic retraction
@potetm I think you’re right, and for things that are very essential, you would want to be pessimistic rather than optimistic. Optimistic writes would only make sense in the case where we know that we will be correct 99.99% of the time or better.
If the success rate is very, very high, we could assume that it approaches 100% and declare victory “prematurely”
I mean, I think you hit the nail on the head before. If the user has a good mental model for what's going on, you can do whatever you want.
But you don't get that for free by just asynchronously writing to the db. You have to build lots of things, most importantly you must build user understanding.
@henrik quick comment on the earlier discussion on optimistic writes: you mentioned that UI applications do it, but it seems like they have different concerns. A user-facing app has to be very responsive, and network delays can be pretty large. A backend application can usually afford slightly longer delays, but most importantly network delays are very low since your database will usually be running in the same local network
@laujensen this was asked earlier in this channel. Hang on, I’ll try to find the message
Roughly speaking I think the summary is that you would need to write your own transaction function which retracts the missing attributes.
@laujensen actually, couldn't you just write a helper function which, given a map of attributes (with potential nil values), generates the right set of assertions and retractions?
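A minimal sketch of such a helper (map->tx-data and the attribute names in the usage note are hypothetical; this assumes the classic peer API, where a retraction must name the current value):

```clojure
;; Hypothetical helper: turn a map with possible nil values into
;; tx-data, asserting non-nil values and retracting nil-ed attributes.
(defn map->tx-data
  [db eid m]
  (reduce-kv
    (fn [tx-data attr value]
      (if (some? value)
        (conj tx-data [:db/add eid attr value])
        ;; nil means "retract"; a retraction needs the current value
        (if-some [current (get (d/entity db eid) attr)]
          (conj tx-data [:db/retract eid attr current])
          tx-data)))
    []
    m))

;; Usage sketch:
;; (d/transact conn (map->tx-data (d/db conn) user-eid
;;                                {:user/name "Lau" :user/bio nil}))
```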
Thanks buddy! And yeah, I could, but it just feels like fixing Datomic instead of using it. But looks like it needs some fixing
What exactly would you like Datomic to do? If you would like to “replace” the entity (e.g. only keep the new attributes you are transacting, and retract all others) then you will need to write a transaction function from what I understand.
But if you know, at the point where you make your new assertions, exactly which attributes need to be retracted, then I don’t see an issue generating those retractions in your app and transacting the assertions and retractions all at once
In my mind Datomic should automatically retract anything that's assigned a nil value. That would simplify the interface
Oh, I see. Yeah, I am not sure what the implications of this would be but it could be neat. As I said you can implement that very easily though
The “map” syntax of transact is just a convenience for writing a vector of assertions
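For example (a sketch with hypothetical entity and attribute names), the map form expands into plain :db/add assertions:

```clojure
;; Map form...
{:db/id      user-eid
 :user/name  "Lau"
 :user/email "lau@example.com"}

;; ...is equivalent to the list form:
[[:db/add user-eid :user/name  "Lau"]
 [:db/add user-eid :user/email "lau@example.com"]]
```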
I've modelled a small wrapper after this principle of just joining retracted/edited fields in a generic sense.
@laujensen oh I hadn’t read this blog post. Yes, that’s pretty much what I was suggesting
@laujensen Happy to help. Good luck 🙂
Thanks for the answers yesterday. Last newb question, I guess: is it fair to say that any constraints between datoms are left to transactions to explicitly maintain, or is there any other mechanism which is more declarative? I am pretty sure it's the former, not the latter.
This makes me wonder however: is it possible to define a transaction function in Datomic that should be executed on every transaction? As a way to maintain an arbitrary invariant
@val_waeselynck maybe you would know? ^
@hmaurer what you can do (at some performance cost of course) is define a transaction function which would wrap a transaction request, db.with() it, check the invariant, then transact it or throw an error.
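A sketch of that idea (the :ensure-invariants name and the invariant-holds? predicate are hypothetical; transaction functions run on the transactor, so the code must declare its own requires):

```clojure
;; Hypothetical wrapping transaction function: speculatively apply the
;; wrapped tx-data with d/with, check an invariant on the resulting db,
;; and only let the transaction through if it holds.
(d/transact conn
  [{:db/ident :ensure-invariants
    :db/fn (d/function
             '{:lang     :clojure
               :requires [[datomic.api :as d]]
               :params   [db tx-data]
               :code     (let [{:keys [db-after]} (d/with db tx-data)]
                           (if (invariant-holds? db-after) ;; hypothetical
                             tx-data
                             (throw (ex-info "Invariant violated"
                                             {:tx-data tx-data}))))})}])

;; Usage sketch: wrap the real tx-data in a call to the function
;; (d/transact conn [[:ensure-invariants
;;                    [[:db/add user-eid :user/email "a@example.com"]]]])
```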
@val_waeselynck Oh I see, but what I meant is: is it possible to execute that function on every transaction, not upon request with the :db/fn attribute?
It was more out of curiosity; I’m not sure I would want to do it in a production system
@hmaurer no, there is no trigger-like mechanism in Datomic
@hmaurer @matan I'm curious if there's a particular feature of other database systems you're trying to find here?
@val_waeselynck no. I was just curious if you could forcibly maintain an invariant in this way
will the transactor process multiple transactions in parallel if they’re against different databases within the same storage?
@hmaurer well just for performance reasons you'd probably want to be explicit about where to look for invariant violations, so having to wrap the tx in a function call does not seem like an additional cost to me
@hmaurer you can use https://github.com/MichaelDrogalis/dire to wrap datomic.api/transact
@souenzzo not sure this solves the same problem; error handling is about dealing with bad stuff after it happens, maintaining invariants is about preventing bad stuff from happening :)
(Having said that, preconditions could do the trick)
Off the top of my head, I guess a “soft” option would be to hope your code doesn't mess up and respects the invariants (test it properly, etc.), but just in case have a service watch the transactions through the Log API and report any infraction
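A sketch of such a watchdog using the live tx-report queue (a close cousin of polling the Log API; invariant-holds? is again a hypothetical predicate). Note this detects violations after the fact rather than preventing them:

```clojure
;; Watchdog sketch: consume transaction reports as they happen and
;; flag any transaction whose resulting db violates the invariant.
(def tx-queue (d/tx-report-queue conn))

(future
  (loop []
    (let [{:keys [db-after tx-data]}
          (.take ^java.util.concurrent.BlockingQueue tx-queue)]
      (when-not (invariant-holds? db-after) ;; hypothetical predicate
        (println "Invariant violated by tx:" tx-data)))
    (recur)))
```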
The other day I thought of using dire to transform the tx-data of d/transact by adding my db/fn... but reviewing it now, I don't know if it is able to do this
I am not sure that would be a very judicious thing to do, it’s late, but I’ll throw it out here 😄
Or a batch job which inspects the whole db periodically
At best, dire can inspect all tx-data and log it if any transaction is missing your db/fn... 😕
@hmaurer Datomic has opened a world of new possibilities, we need all the crazy ideas we can get to explore it :)
@val_waeselynck it’s great. It has a lot of the benefits of event sourcing without the pain of implementing it yourself
I just started exploring it but I am going to have a lot of fun over the coming months 🙂
Ah by the way, I have another question which you might know about @val_waeselynck :
I understand there is no “order” clause with Datomic, but will the order of n-tuples returned by a Datalog query always be the same for a given db value?
e.g. can I rely on it to do cursor pagination based on index in the result array, etc
@hmaurer no, I don't believe so
I suspect that the “re-indexing” step ran periodically by the transactor might mess this up
You'll need to sort the whole result yourself, then truncate. But you can pull most of the data downstream of that
Also, do you know if I can rely on tx ids or “t values” to reference points in time in the database, and store those externally?
hmaurer: as you suspected, relying on Datomic eids remaining stable in the long term is generally discouraged, because that's not robust to log rewriting (having said that, I have yet to see a complete story about log rewriting with Datomic). Same goes for t values IMO. If :db/txInstant is not good enough for you, I suggest you annotate each transaction with a UUID-typed attribute using datomic.api/squuid.
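A sketch of that annotation (assuming the 2017-era peer API, where the transaction entity gets a tempid in the :db.part/tx partition; :tx/external-id is a hypothetical attribute of :db.type/uuid):

```clojure
;; Annotate the transaction entity itself with a stable external UUID,
;; alongside the ordinary assertions in the same transaction.
(d/transact conn
  [{:db/id          (d/tempid :db.part/tx)
    :tx/external-id (d/squuid)}
   [:db/add user-eid :user/name "Matan"]])
```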
e.g. if I ever need to restore the DB from a backup, will I be able to keep the same tx ids or “t values”?
These questions will have to wait until tomorrow - good night everyone, have fun!
i’m looking at introducing a new database for handling some high latency (700k+ average datoms) transactions versus adding them to my current db of low latency ones (~30 average datoms). if i just do d/create-database using the same DDB table, will the transactor process low latency transactions and high latency ones at the same time? (so the former aren’t held up by the latter?)