2016-11-30
Channels
- # aws (1)
- # bangalore-clj (2)
- # beginners (64)
- # boot (29)
- # cider (4)
- # clara (14)
- # cljsjs (22)
- # cljsrn (24)
- # clojure (248)
- # clojure-austin (5)
- # clojure-berlin (1)
- # clojure-china (5)
- # clojure-france (1)
- # clojure-greece (1)
- # clojure-italy (2)
- # clojure-korea (6)
- # clojure-russia (76)
- # clojure-spec (2)
- # clojure-uk (59)
- # clojurescript (67)
- # cursive (12)
- # datascript (6)
- # datomic (126)
- # defnpodcast (2)
- # devcards (1)
- # docker (1)
- # events (2)
- # hoplon (14)
- # leiningen (1)
- # luminus (2)
- # midje (2)
- # mount (1)
- # off-topic (4)
- # om (6)
- # onyx (8)
- # parinfer (2)
- # perun (6)
- # proton (5)
- # re-frame (41)
- # reagent (6)
- # ring-swagger (3)
- # rum (1)
- # spacemacs (10)
- # specter (12)
- # yada (25)
Does anyone have experience using Amazon Aurora as a storage backend with Datomic? They claim it's MySQL compatible, whatever that means, and from what I understand about how Datomic uses storage services, it doesn't need much (or anything) in terms of database-implementation-specific features.
out of pure interest, if on AWS then why not use DynamoDB? can you share some rationale? asking because in a similar position i have not even considered other storage, so i am curious about what i might have missed 🙂
DynamoDB is definitely the other option we're considering (actually we're not sure we're going to go forward with Datomic at all, but right now it feels promising). Taking Datomic into use means a lot of new stuff to learn from an operational perspective. We have a lot of experience running with RDS MySQL, but nobody in our team has tried DynamoDB yet. So basically, if Aurora would work nicely as a backend, that might be one less new thing to take on right now. I would like to understand the options in general and how the storage affects things so we can make at least somewhat informed decisions. I couldn't easily find much information about how to choose a storage backend for Datomic and what the tradeoffs are there.
Price is another consideration. My hunch is that Aurora might be a cheaper option to start with. That said, we haven't done any calculations yet so I might be totally off base here.
@ovan, aurora is a modified version of mysql, so in all probability it'll work just like mysql (plus datomic's sql needs are likely not to be very sophisticated as it uses it as a k/v store)
@pesterhazy, thanks. That matches with my understanding.
@ovan From a pricing perspective I can recommend dynamodb. Using it feels a bit like cheating because datomic caches almost everything
Anyone have any helper code for converting entity maps into a sequence of datoms..?
not hard to write 🙂
@jonpither Datascript does this as well in its implementation. You could also just do a with on some temp db. Though it needs a schema
https://clojurians-log.clojureverse.org/datomic/2016-06-01.html#inst-2016-06-01T15:30:22.001012Z
is map-form-tx->vec-form-txs a mechanical, pure fn, or does it require looking at the existing db?
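A minimal sketch of the speculative-with idea, assuming `conn` is a peer connection whose database already has the schema for the attributes in the map (so, per the question above, it is not a purely mechanical conversion); the attribute and value are placeholders:
(require '[datomic.api :as d])
;; speculatively apply the map-form entity to a db value and read the
;; expanded datoms back out of the transaction report
(let [db     (d/db conn)
      emap   {:db/id      (d/tempid :db.part/user)
              :user/email "[email protected]"}   ; placeholder attribute/value
      report (d/with db [emap])]
  (:tx-data report))   ;; => a seq of datoms [e a v tx added?]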
how can i replace a ref of cardinality one that is a composite?? 1) find the id 2) retract id (no loose datoms) 3) assert new entity. i reeeaaallyy want something simpler … any ideas?
i can easily update the main entity (i have unique on it). But then each update creates a new referenced entity, even though the thing is marked as a component… ;/
;User
{:db/id #db/id[:db.part/db]
:db/ident :user/email
:db/valueType :db.type/string
:db/unique :db.unique/value
:db/cardinality :db.cardinality/one
:db/doc "Email"
:db.install/_attribute :db.part/db}
{:db/id #db/id[:db.part/db]
:db/ident :user/shipping
:db/valueType :db.type/ref
:db/isComponent true
:db/cardinality :db.cardinality/one
:db/doc "Shipping address"
:db.install/_attribute :db.part/db}
{:db/id [:user/email ""]
:user/shipping {
:db/id #db/id[:db.part/user]
:address/line1 "66666one"}}
so on a schema like the above, the transaction is creating a NEW address entity each time 🙂
@karol.adamiec you need to either merge the existing component entity with the new attributes, or retract it and add a new entity. It can't be done with the map form of a transaction. You need :db/add and :db/retract.
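A rough sketch of the retract-and-re-add option, assuming the old shipping entity id (`shipping-eid` here) has already been looked up, and using placeholder email/address values:
;; one transaction: drop the old component entity, then assert the new one
[[:db.fn/retractEntity shipping-eid]          ; also retracts the :user/shipping ref pointing at it
 {:db/id         [:user/email "[email protected]"]
  :user/shipping {:address/line1 "1 New Street"}}]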
Has anyone here hit the 10 billion datom limit recently? Wondering if the Datomic team are testing with larger databases these days. I can partition data into separate databases, and maintain multiple connections, but I'd like to avoid that complexity for a while.
Relevant discussion from 2015: https://groups.google.com/forum/#!topic/datomic/iZHvQfamirI
Hey all. I'm trying to introduce some memoization to some functions I've written that take database values as arguments. Is there any way to uniquely identify the connection a given database value came from?
I could have them take extra arguments and cache based on those, but I'd rather not if I can avoid it.
@jcf Just yesterday @stuarthalloway mentioned 100B in the Datomic Workshop at Conj.
If you think you're building a system that will need 10-100B datoms, you should email me and we'll talk about the specific details/challenges with administering a database of that size
@jcf how can i get the id of :user/shipping entity to wrap it all up in one transaction?
@marshall my client has a paid support agreement in place. I can see if I can get them to add me to the ZenHub account (I'm assuming you guys are still using that?) and go through official channels if you want… we've already been sold - I just need to see if I can do this without Cassandra.
Sure - Just send an email to support @ cognitect and let us know what client you're working with
@karol.adamiec something like this:
(let [user-id (:db/id (d/entity (d/db conn) [:user/email ""]))]
  [[:db/retract user-id :user/shipping shipping-id-to-remove]
   {:user/email ""
    :user/shipping {:address/line1 "etc"}}])
You might want/need to diff the components, however. That's beyond what I can type up in Slack.
thanks
@jcf but the real issue for me is how do i get shipping-id-to-remove?
having only the email?
Load the user, and you'll get the shipping entity back from Datomic. (-> conn d/db (d/entity [:user/email "[email protected]"]) :user/shipping :db/id)
will give you the :db/id of the current shipping entity.
can i do that inside of a transaction?
i am fighting an uneven battle trying to use the REST API 🙂
Maybe the new client stuff will make your life easier. It was announced in the last couple of days.
oh yes. i am waiting. Ehh. Thanks. Will fire a couple of HTTPS requests at the db then. Tried to avoid that 🙂
on a related note, do lookup refs nest?
@karol.adamiec not sure I follow. A lookup ref is of the form [attribute value], and you can't do something like [attribute [attribute value]].
yeah, i tried and failed, but that is exactly what i would like to do 🙂
Also, it was mentioned in the blog post, but we now have a Feature Request & Feedback portal available - if you log into my.datomic there is a link to "Suggest Features" in the top nav; go there and vote for/suggest improvements and/or clients in your language of choice
@marshall is a nested lookup ref a technical possibility or am i deeply misunderstanding how datomic works?
@karol.adamiec you more than likely should be using a query.
yeah! but i need to transact! 🙂
that means query first, transact next
in clojure it is almost the same
over rest you feel the pain
You almost always want to offload work to your peers, and only transact simple additions and retractions.
yeah, i think i try to constantly abuse datomic
it is the rest trap. However i try to convince myself that firing off requests is fine… i always end up trying to minimize the amount of traffic, which is surely a datomic antipattern.
@karol.adamiec are you sending requests from the browser or some backend service?
backend
nodejs 🙂
If you keep connections alive, then it doesn't matter so much. It's the cost of establishing a connection I'd worry about.
i think i am fine anyway
it is a small ecommerce shop
if you need atomicity of the lookup and transact, you can either use a transaction function or use a more "optimistic" concurrency strategy and use cas
CAS works really nicely. Transaction functions are a last resort for me because they can end up being slow (at least when I've abused them in the past).
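A small sketch of the optimistic route using the built-in :db.fn/cas transaction function, keyed on a hypothetical :user/version attribute (not in the schema above); the transaction throws if the value changed between the read and the write:
(let [db   (d/db conn)
      user (d/entity db [:user/email "[email protected]"])   ; placeholder email
      v    (:user/version user)]
  @(d/transact conn
     [[:db.fn/cas (:db/id user) :user/version v (inc v)]
      ;; ...plus the shipping retraction/assertion datoms from earlier...
      ]))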
i think query is the right thing to do. get the id. if it exists, retract; if not, do nothing. then assert the full user entity again with the address.
but navigating lookup refs like pulls would be nice 🙂
i could abuse datomic longer 🙂
If you're using ClojureScript you can use clojure.set to work out what you need to retract etc. From JS I guess you have to write it all yourself. 🙂
yep es6
well anyway, the right thing to do is query->transact. I do not need CAS semantics per se. All i wanted to do is be lazy and fire off one, maybe a bit tricky, transaction and have it do everything for me 🙂
for the simple, "embedded" app case, the recommended best practice is still using the Peer library, correct? Rather than starting up a transactor AND a peer-server AND the app+client?
i wonder if you can have a process be its own peer-server
allowing you to code with client but only have one jvm run
is that a possibility @jaret ?
at least keeps the code portable
yes, you always need a transactor
for durable storage
means you can make the decision to move to a separate peer-server later on when needed
then use the peer! 🙂
it's verrrry early days yet. we'll figure it out 🙂
@robert-stuttaford you cannot have a process be its own peer-server.
I think from my quick read, for most of the "embedded" use cases I can think of, the peer library is a much better fit.
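For reference, a bare-bones peer-library setup for the "embedded" case, with a placeholder dev-storage URI and database name (the transactor is the only other process needed):
(require '[datomic.api :as d])
(def uri "datomic:dev://localhost:4334/my-app")   ; placeholder URI
(d/create-database uri)
(def conn (d/connect uri))
;; install a throwaway attribute, assert a value, and query it back, all in-process
@(d/transact conn [{:db/id                 (d/tempid :db.part/db)
                    :db/ident              :note/text
                    :db/valueType          :db.type/string
                    :db/cardinality        :db.cardinality/one
                    :db.install/_attribute :db.part/db}])
@(d/transact conn [{:db/id (d/tempid :db.part/user)
                    :note/text "hello"}])
(d/q '[:find ?text :where [_ :note/text ?text]] (d/db conn))
;; => #{["hello"]}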
@robert-stuttaford, just listened to a defn podcast where you talk about Datomic. In the light of recent changes it was fun to hear the part about the problems with the peer-based licensing model. 🙂 Anyway, thanks for doing the podcast, really helpful information for our team as we're considering Datomic for our next project.
ah - that's not in the summary table: http://docs.datomic.com/clients-and-peers.html
but it is in the text later: "Peers continue to support tempid structures, and in addition they also support the new client tempid capabilities."
@zane: better late than never, I hope, but database values have an :id attribute which generally points to the URL of the connection they came from
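A sketch of how that could answer the memoization question above: key the cache on the db's :id (assuming the lookup works as just described) plus its basis t, so equal points in time of the same connection hit the cache; the query and cache shape are illustrative only:
(defn db-cache-key [db]
  ;; :id identifies the connection (per the note above); basis-t pins the point in time
  [(:id db) (d/basis-t db)])
(def count-users
  (let [cache (atom {})]
    (fn [db]
      (let [k (db-cache-key db)]
        (if-let [hit (find @cache k)]
          (val hit)
          (let [result (d/q '[:find (count ?e) . :where [?e :user/email]] db)]
            (swap! cache assoc k result)
            result))))))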
thought so. so, if you want to use peer-server and a durable db in dev, you're starting 3 processes now
@ovan, yeah 🙂 how quickly our discussion became legacy! so totally happy about the changes this week. if you have any questions in aid of your decision, let's have em. i love learning about other contexts
@robert-stuttaford, Thanks. I do have a couple of questions if you don't mind. You mentioned in the podcast that you ran the first year or so with Postgres as a storage backend and only later moved to DynamoDB. Would you do the same again or just start directly with dynamo? I'm mainly concerned about operational aspects like tuning the capacity. Also, what's your experience with operating the transactors? Any surprises that were hard to debug or fix?