This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2017-05-18
Channels
- # ai (1)
- # beginners (71)
- # boot (15)
- # cider (26)
- # clara (4)
- # cljs-dev (81)
- # cljsrn (26)
- # clojure (393)
- # clojure-berlin (2)
- # clojure-dev (5)
- # clojure-dusseldorf (1)
- # clojure-greece (5)
- # clojure-italy (6)
- # clojure-russia (97)
- # clojure-serbia (11)
- # clojure-sg (2)
- # clojure-spec (14)
- # clojure-uk (66)
- # clojurescript (58)
- # core-async (19)
- # cursive (18)
- # data-science (2)
- # datomic (75)
- # emacs (20)
- # events (5)
- # figwheel (1)
- # graphql (2)
- # hoplon (29)
- # jobs-discuss (3)
- # juxt (6)
- # lein-figwheel (1)
- # london-clojurians (2)
- # lumo (29)
- # mount (9)
- # off-topic (4)
- # om (16)
- # onyx (25)
- # other-languages (2)
- # pedestal (38)
- # protorepl (2)
- # re-frame (20)
- # reagent (9)
- # ring-swagger (6)
- # sql (10)
- # unrepl (3)
- # untangled (19)
- # utah-clojurians (1)
- # videos (2)
- # vim (20)
I'm about to start coding a library which reimplements the Entity and Pull APIs to support derived data (derived attributes & getters). Before I dive in, is anyone working on this already?
I'm running a Datomic/MySQL service which, in the course of 6 months, has produced an InnoDB file weighing in at 26gb. That seems excessive to me. Is MySQL a bad fit, or is this to be expected?
@laujensen do you gcStorage on a regular basis?
@val_waeselynck Never. I understand it to remove history beyond a certain point.
No, it doesn't delete data. It can only mess around with Peers which hold an old db value. But if you gcStorage from 1 week ago you should be safe (unless you have some process which holds on to some db value for longer than a week, which seems unlikely 🙂)
@val_waeselynck Aha... I'll give it a try. Sounds like something you would run weekly, then?
yes that seems like a sane default.
you should try that and see if your InnoDB file gets smaller. Having said that, one of the design choices of Datomic is to store a lot of things redundantly, the underlying assumption being that storage is cheap.
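For reference, gcStorage is exposed in the peer API as `d/gc-storage`, which takes a connection and an instant; garbage segments older than that instant become collectible. A minimal sketch of the weekly run discussed above (the connection URI is a made-up example — adjust to your storage):

```clojure
(require '[datomic.api :as d])

;; Hypothetical connection URI; replace with your own storage settings.
(def conn
  (d/connect "datomic:sql://mydb?jdbc:mysql://localhost:3306/datomic?user=datomic&password=datomic"))

;; Collect storage garbage older than one week, as suggested above.
(let [one-week-ago (java.util.Date.
                     (- (System/currentTimeMillis)
                        (* 7 24 60 60 1000)))]
  (d/gc-storage conn one-week-ago))
```

The one-week cutoff is only the default suggested in this thread; any process holding a db value older than the cutoff must be accounted for when choosing it.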
@val_waeselynck And that makes sense, but I still want to retain some control over storage consumption. Right now it looks like it's growing exponentially
maybe your business is too 🙂
I don't know that there are any knobs for that. Maybe you store more things than you intend, or have a lot of unneeded updates for the same data?
One thing you can do is avoid unnecessary indexes (see the :db/fulltext and :db/index options), but it's best to anticipate that ahead of time
@val_waeselynck Well, yeah, I guess the business is too. But dumping the entire DB without history is just 5% of the total data consumed now.
you can also set :db/noHistory on some high-churn attributes
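:db/noHistory is set per attribute in the schema, so Datomic stops retaining past values for just those attributes. A hedged sketch, using a hypothetical high-churn attribute:

```clojure
;; Schema sketch: a made-up attribute whose old values are frequently
;; overwritten and not worth keeping in history.
[{:db/ident       :sensor/last-reading
  :db/valueType   :db.type/double
  :db/cardinality :db.cardinality/one
  :db/noHistory   true}]
```

Note that :db/noHistory only affects what future index jobs retain; already-stored history still has to be garbage-collected as discussed above.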
not expert enough for that sorry
what I would do is look at the Transactor and Storage metrics, the activity of gcStorage may be visible there
It's only run for a couple of minutes, but it's already consumed 1gb of disk space 🙂
maybe you need to run additional MySQL-specific gc
sadly I'm really no help in that regard
@laujensen You can determine the "real" amount of space required for a Datomic DB by running a backup and calculating the size of the resulting backup dir on disk. The difference between that and used storage space will be made up of recoverable storage garbage, unrecoverable storage garbage, and storage-specific overhead
the storage-specific overhead has to be reclaimed via the storage with something like PostgreSQL VACUUM or MySQL OPTIMIZE TABLE
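For the MySQL case above, the storage-level reclaim happens inside MySQL itself. A sketch, assuming the default table name (`datomic_kvs`) from Datomic's bundled SQL setup scripts and the same credentials used elsewhere in this thread:

```shell
# Rebuild Datomic's key-value table so InnoDB returns freed pages.
# Table name and credentials are assumptions; check your own setup.
mysql -udatomic -pdatomic datomic -e "OPTIMIZE TABLE datomic_kvs;"
```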
alternatively, if you can tolerate the downtime, you can back up your DB, restore into a NEW backend storage instance, and switch over your system. This approach will remove all types of garbage
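The backup-and-restore path uses the `bin/datomic` CLI that ships with Datomic. A sketch with illustrative URIs (adjust database name, JDBC details, and backup location to your setup):

```shell
# Back up the current database to a local directory...
bin/datomic backup-db \
  "datomic:sql://mydb?jdbc:mysql://localhost:3306/datomic?user=datomic&password=datomic" \
  "file:/backups/mydb"

# ...then restore into a fresh storage instance and switch over.
bin/datomic restore-db \
  "file:/backups/mydb" \
  "datomic:sql://mydb?jdbc:mysql://localhost:3306/datomic_new?user=datomic&password=datomic"
```

Restoring into a brand-new storage is what sheds the unrecoverable garbage that gcStorage cannot reach.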
i turned my local from 30gb to 6gb by backing up, deleting, and restoring my local 🙂
that's about 8 months' accretion of garbage segments
@robert-stuttaford do you run gcStorage regularly?
not at all
… we really should
this is my local machine, with multiple successive restores. so all the previous restores' garbage is now unreachable by a gcStorage
but it'd be worth doing on our production storage for sure
Can :db.type/uuid somehow be used as a valid :db/id value, or do I need to create a new attribute?
@jfntn: new attribute, :db/ids are provided by the transactor... you can't choose them
but you can use a stringized uuid as a tempid if you just need to generate a unique :db/id for adding facts
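The string-tempid idea looks like the following sketch — the connection and the :person/* attributes are hypothetical, and `d/squuid` is just one convenient way to produce a unique string:

```clojure
(require '[datomic.api :as d])

;; A stringized squuid used as a tempid: within this one transaction the
;; string stands for the new entity; the transactor assigns the real :db/id.
(def temp-id (str (d/squuid)))

@(d/transact conn
   [{:db/id       temp-id
     :person/name "Ada"}
    ;; Reusing the same string attaches further facts to the same new entity.
    [:db/add temp-id :person/email "ada@example.com"]])
```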
@marshall thanks for weighing in. I'm on the trail now but need to migrate to another host before I can kick it off.
@matthavener makes sense, thanks
does this look right to anyone? trying to get a transactor/peer/etc running locally
datomic:sql://<DB-NAME>?jdbc:
I feel like it shouldn't say <DB-NAME> there 🙂
okay, so that looks normal for the transactor to print as the URI when you run it?
ok, phew
alright making some progress... so close... trying to run the peer and getting this
Access denied for user 'datomic'@'localhost'
running the following command
bin/run -m datomic.peer-server \
  -h localhost \
  -p 8998 \
  -a myaccesskey,mysecret \
  -d datomic,datomic:
I can connect just fine if I run mysql -udatomic -pdatomic datomic
wait, is <DB-NAME> the name of the mysql database, or should it be mysql?
I've been going through the Datomic docs looking for any specifics on requirements for running it on PostgreSQL. I haven't been able to find anything specific on requirements - am I safe to use Datomic on any reasonable installation of PostgreSQL or are there specific flags or settings I should be using that I missed in the documentation? This is going to be for experimentation to start but ideally will move into a real project soon.
oh snap, how do I find out what I named my datomic database?
At some point you called (d/create-database "datomic:
or restored from a backup with a similar looking uri
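The peer API can also list every Datomic database in a storage by putting `*` in the db-name position of the URI. A sketch (the JDBC portion is illustrative):

```clojure
(require '[datomic.api :as d])

;; Returns the names of all Datomic databases in this storage,
;; answering "what did I name my database?" directly.
(d/get-database-names
  "datomic:sql://*?jdbc:mysql://localhost:3306/datomic?user=datomic&password=datomic")
```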
fascinating
well... 🙂 seems like everything so far is correct then, must be some other error or SQL setting I'm missing
appreciate it 🙂 @favila