This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2017-05-24
Channels
- # beginners (12)
- # cider (3)
- # clara (3)
- # cljs-dev (3)
- # cljsrn (19)
- # clojure (83)
- # clojure-android (1)
- # clojure-dev (15)
- # clojure-dusseldorf (1)
- # clojure-greece (30)
- # clojure-italy (10)
- # clojure-madison (1)
- # clojure-nl (6)
- # clojure-russia (274)
- # clojure-spec (51)
- # clojure-uk (31)
- # clojurescript (38)
- # core-async (7)
- # cursive (11)
- # datascript (1)
- # datomic (63)
- # emacs (10)
- # figwheel (1)
- # hoplon (27)
- # jobs (11)
- # klipse (4)
- # lein-figwheel (1)
- # lumo (6)
- # nyc (1)
- # off-topic (278)
- # om (12)
- # pedestal (10)
- # protorepl (31)
- # re-frame (13)
- # reagent (23)
- # remote-jobs (1)
- # spacemacs (9)
- # untangled (24)
- # yada (54)
@erichmond - No, but I'm wondering if somebody accidentally put a literal <bucket> instead of the actual bucket name. Maybe search your code for the literal <bucket> string?
@jeff.terrell sorry, that was a real bucket name, I removed it tho, so you guys didn't see where our production backups are ;D
😆 d'oh! Makes sense…
forgive me for asking, I feel like the answer should be obvious to me: why is it important to ensure that schema changes are applied once and only once?
no, afaik you can retransact the schema on each connection without any harm
@uwo: most of the time (e.g when installing attributes that don't change and database functions), you don't need to ensure schema installation runs only once.
See https://github.com/vvvvalvalval/datofu#managing-data-schema-evolutions for a bit more details
@pesterhazy @val_waeselynck thanks! why is that the focus of conformity then?
for some migrations, like altering attributes, installing the schema only once may be necessary
Another common use case is creating a new attribute and populating it with default data for existing entities
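To illustrate the point above: plain attribute installation is idempotent, so a common pattern is to transact the schema on every application startup. A minimal sketch, assuming the Datomic peer library and an existing connection (`conn` and the `:user/email` attribute are illustrative, not from the conversation):

```clojure
;; Sketch: idempotent schema installation at startup.
;; Assumes the datomic.api peer library and a live connection `conn`.
(require '[datomic.api :as d])

(def schema
  [{:db/ident       :user/email          ; illustrative attribute
    :db/valueType   :db.type/string
    :db/cardinality :db.cardinality/one
    :db/unique      :db.unique/identity}])

(defn ensure-schema!
  "Safe to call on every startup: retransacting the same
  attribute definitions is harmless."
  [conn]
  @(d/transact conn schema))
```

Run-once guarantees (what conformity provides) only matter for the migration cases mentioned above, e.g. attribute alterations or one-time data backfills.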
Is there any reason I might be getting db.error/transactor-unavailable besides overwhelming the transactor?
I’m not in an import situation. And there should be very few writes going on (unless of course something in our system is going haywire)
side note: it'd be amazingly useful if there was a doc of all/most of Datomic error messages explaining what they actually mean
we handle db.error/transactor-unavailable during imports with exponential backoff to allow it to recover, but I wasn’t anticipating it during normal use
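The retry-with-backoff approach described above could be sketched roughly like this. This is a hypothetical helper, not part of Datomic; it assumes Datomic signals this condition via an exception whose `ex-data` carries a `:db/error` key (check the actual exception shape in your environment before relying on it):

```clojure
;; Sketch: retry a transaction with exponential backoff when the
;; transactor is unavailable. `conn` is an existing Datomic connection.
(require '[datomic.api :as d])

(defn transactor-unavailable? [e]
  ;; Assumption: the error keyword is available via ex-data on the
  ;; thrown exception (possibly the cause of an ExecutionException).
  (= :db.error/transactor-unavailable
     (:db/error (ex-data e))))

(defn transact-with-backoff
  [conn tx-data {:keys [max-retries base-ms]
                 :or   {max-retries 5 base-ms 100}}]
  (loop [attempt 0]
    (let [result (try
                   @(d/transact conn tx-data)
                   (catch Exception e
                     (if (and (transactor-unavailable? e)
                              (< attempt max-retries))
                       ::retry
                       (throw e))))]
      (if (= ::retry result)
        (do (Thread/sleep (* base-ms (long (Math/pow 2 attempt))))
            (recur (inc attempt)))
        result))))
```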
@uwo could definitely be due to dev storage. Dev is an integrated storage engine that shares resources with the transactor process. It’s intended strictly as a development convenience, for several reasons, including the fact that it doesn’t have its own independent resources to handle peers and the transactor concurrently connected and using read and write bandwidth.
thanks. that sounds about right. we were certainly not intending to use it for prod, just been using it in staging for a little bit because of timelines I guess
If I might add a question: we’re bringing our own storage (MSSQL). Silly question, I know, but should we put storage on a separate box from the transactor?
Is there a good reason why EIDs come back as #uuid from (d/q) but as java.lang.Long from (d/entity)?
@timgilbert Are you sure you are not confusing your domain-specific id (the uuid) with the internal entity id (the long)? What attribute are you reading in each case?
Ah, yes, you're right, my mistake
I was looking at an external domain UUID, as you suspected. Thanks!
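The distinction above can be sketched briefly. Assuming a hypothetical `:user/id` attribute of `:db.type/uuid` and an existing connection `conn` (both illustrative): the domain attribute comes back as a `java.util.UUID` (printed as `#uuid`), while the internal entity id `:db/id` is always a `java.lang.Long`.

```clojure
;; Sketch: domain uuid attribute vs internal entity id.
(require '[datomic.api :as d])

(let [db (d/db conn)]
  ;; Querying the domain attribute yields a UUID (#uuid "...")
  (d/q '[:find ?uuid .
         :where [_ :user/id ?uuid]]
       db)
  ;; The entity's internal id is a Long
  (:db/id (d/entity db [:user/id some-uuid])))
```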
anybody here using datomic in production?
I understand that it's designed so that transactor, peer server, storage, etc are usually on different machines (virtual or physical), curious what setups folks are using
@goomba Yes, definitely those should all be on different machines. Most production installations are set up this way.
are you guys doing physical servers or VMs? or microservices?
google cloud
or local hosting
well, ultimately both
depending on the project/company
so you’ll likely be limited to whatever infrastructure is available at the on-prem sites; in general that is something that Datomic supports fairly well
you’ll want independent instances, either real or virtual, for transactor and peers (or peer server)
e.g. you might use Oracle for one customer and Cassandra for another if that is what they already have available
okay, got all that. Is the primary way of communicating server information with jdbc strings etc? or is there a config file I should be looking at?
although I don't need it now, ultimately I'd like to be doing some sort of microservices thing where the whole shebang can be treated as a bunch of ephemeral instances, and I'd like to avoid hardcoding resource locations, or at least be able to do it programmatically
The call to d/connect requires the storage location. So in the case of JDBC, yeah, it requires a JDBC URL.
then the instance pulls metadata, spits it into app-specific config (e.g. an EDN or properties file, or a systemd environment file, whatever), and it's accessible to the app on startup
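Putting the two points above together, a connection sketch might look like the following. The config file name, host, database name, and credentials are all placeholders; the URI shape follows Datomic's `datomic:sql://<db-name>?<jdbc-url>` convention for SQL storage:

```clojure
;; Sketch: read the storage location from app-specific config
;; generated at instance startup, rather than hardcoding it.
(require '[datomic.api :as d]
         '[clojure.edn :as edn])

(def config
  (edn/read-string (slurp "config.edn")))   ; placeholder path

(def uri
  ;; e.g. "datomic:sql://my-db?jdbc:sqlserver://db-host:1433;databaseName=datomic"
  (:datomic-uri config))

(def conn (d/connect uri))
```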
okay, gotcha. So the storage at least needs to be fairly static
cool, thanks guys