This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2016-12-19
another from that post:
> Keep metrics on your query times
> Datomic lacks query planning. Queries that look harmless can be real hogs. The solution is usually blindly swapping lines in your query until you get an order of magnitude speedup.
what does he mean by "query planning"? is this a feature of databases that datomic lacks, or is this a comment on how datomic is used?
I'd take a look at https://github.com/Datomic/day-of-datomic/blob/master/tutorial/decomposing_a_query.clj
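For context on the "swapping lines" advice: Datomic's Datalog engine evaluates `:where` clauses in the order they are written, so moving the most selective clause first can change performance dramatically. A minimal sketch, assuming a hypothetical schema with `:order/status` and `:order/customer` attributes:

```clojure
;; Hypothetical schema: :order/status (keyword), :order/customer (ref).

;; Slower shape: the first clause binds ?e to every order that has a
;; customer, producing a huge intermediate set before filtering.
(def slow-query
  '[:find ?e
    :where [?e :order/customer ?c]
           [?e :order/status :shipped]])

;; Usually faster: the more selective status clause runs first, so the
;; customer clause only joins against the already-narrowed ?e set.
(def fast-query
  '[:find ?e
    :where [?e :order/status :shipped]
           [?e :order/customer ?c]])
```

Both queries return the same results; only the intermediate set sizes differ, which is what the decomposing_a_query tutorial walks through.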
@xk05 may be able to save you some pain longer term http://github.com/robert-stuttaford/terraform-example
this has turn-key datomic transactor + apps + memcached + ddb
I just listened to the Cognitect podcast with Paula Gearon. Man, the things she was saying! Many of the very same things that have bugged me for years. What a joy to come back to this topic now, and an RDF layer in the works. I look forward to developing some ideas on this platform.
I made an attempt to write a functional triplestore with owl2 rules a few years back, after spending a winter studying the topic at Univ. of Washington. It was just a sort of moonlighting hobby thing to sketch out ideas of where I might want to go later. My implementation was in elisp with ttl flatfiles with allegrograph and gruff support. It's a boneyard of broken dreams. 😄 Well, really though, I look at it from a distance now, and it really does appear that this database and the support repositories growing up around it are a mature application of those very ideas. I guess this is a long-winded way of saying, "Wow! Somebody actually made this work!"
When my fire started to peter out I just sort of dumped it into a bucket here. I was sort of planning on organizing it, then I stumbled onto this. I may just convert the little bits I want to keep to clojure and work on this format instead.
i highly recommend spending some time with Datomic then, @xk05 - i think you’ll find the more you use it, the more you see the beauty of its overall design.
I just bumped to 1.9-alpha14 and datomic 0.9.5544 and I’m suddenly seeing different variations on the same exception (still debugging)
java.util.concurrent.ExecutionException: java.lang.IllegalArgumentException: :db.error/not-a-keyword Cannot interpret as a keyword: restricted, no leading :
went through the datomic google group and google itself, but this one seems ungoogleable 😞
has anyone else seen this happen yet?
@kennethkalmer: when?
found it, I’m passing a string value to a keyword attribute
guess in the past it would have just cast it
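For anyone hitting the same `:db.error/not-a-keyword`: with a `:db.type/keyword` attribute, the transacted value must be an actual Clojure keyword, not a string. A sketch using a hypothetical `:user/role` attribute:

```clojure
;; Hypothetical attribute :user/role declared with
;; {:db/valueType :db.type/keyword}.
;;
;; Fails ("Cannot interpret as a keyword: restricted, no leading :"):
;;   {:db/id user-id :user/role "restricted"}
;; Works (a real keyword):
;;   {:db/id user-id :user/role :restricted}

(defn ->role-kw
  "Coerce an incoming string such as \"restricted\" to the keyword
  :restricted before transacting. Hypothetical helper."
  [s]
  (keyword s))

;; (->role-kw "restricted") => :restricted
```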
@marshall did you, by chance, test out the runtime-varying AND query I was trying to build on Thursday? https://clojurians.slack.com/archives/datomic/p1481841384000992
I’ve found I can generate the :where clauses at runtime like this
{:find '[?e]
:where (for [aka akas]
['?e :aka aka])}
and that might be the idiomatic way, I just want to make sure I’m not missing anything
@adamfrey I didn’t get to it over the weekend, but if you have a good programmatically generated solution, I’d go with that.
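Fleshing the snippet above out into a complete sketch (assuming an `:aka` attribute holding alias strings):

```clojure
;; Build an AND query at runtime: each alias becomes its own :where
;; clause, so ?e must carry all of the given :aka values.
(defn akas-query
  [akas]
  {:find '[?e]
   :where (vec (for [aka akas]
                 ['?e :aka aka]))})

;; (akas-query ["Maks" "Stierlitz"])
;; => {:find [?e], :where [[?e :aka "Maks"] [?e :aka "Stierlitz"]]}
```

The resulting map can be passed straight to `d/query` / `d/q`, since Datomic queries are just data.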
alright lovely folks, maybe you guys can help me convince my company to adopt Datomic (finally) 😄
Have this SQL problem. T1 has about 2.5 million rows, about 20% are duplicates. T2 is a near clone of T1 with no duplicates. However, T2 has more recent entries than T1. I need to get all unique new entries from T1 -> T2. Seems to me like it should be a simple set operation, but in SQL it is enormously computationally expensive. Is this something on which Datomic can save me, or am I still screwed? 😛
I mean as the axis of time progresses I'm surely screwed in general but I'm hoping in this instance there may be some light 😂
well, from a scalable-architecture point of view, you could argue that you can run a computationally expensive read query like that on one peer without bringing down the entire system, because the other peers can keep making writes and running queries without being impacted
in a related question, does the horizontal read scaling of datomic have any advantages over, say, read slaves?
tjtolton: unlike read slaves, Datomic’s “slave lag” is reified, and you can determine which basis your current database value is at
with regular read slaves (at least in my limited experience), you cannot determine whether the database you’re reading from includes some arbitrary transaction
yeah, I saw a nasty bug once due to a race with read lag. it’s worse because you only see the read-lag bugs when load is high
(or during some network partition)
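The "reified slave lag" point maps onto the peer API's `basis-t` and `sync`. A sketch, assuming an already-established connection `conn` (a placeholder, not from the thread):

```clojure
(require '[datomic.api :as d])

;; Every immutable db value knows exactly which transaction it includes:
(def db (d/db conn))   ; conn: hypothetical established peer connection
(d/basis-t db)         ; => the basis t of this db value

;; A peer can also block until it has caught up to a known point in time:
;; (d/sync conn t) returns a future delivering a db whose basis-t >= t.
@(d/sync conn some-t)  ; some-t: a t obtained from an earlier transaction
```

This is what makes the lag observable: instead of hoping a replica has caught up, you ask the db value where it stands, or wait for a specific t.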
@gdeer81 you are correct, however, I need to run this update ~ every 5 minutes
would be very nice if I could get something going like T3 := set(T1) - set(T2)
(pseudo-code)
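That pseudo-code maps directly onto `clojure.set/difference`; a toy sketch with made-up stand-in rows:

```clojure
(require '[clojure.set :as set])

;; Toy stand-ins for rows of T1 and T2:
(def t1 #{{:id 1} {:id 2} {:id 3}})
(def t2 #{{:id 1} {:id 2}})

;; T3 := set(T1) - set(T2)
(set/difference t1 t2)
;; => #{{:id 3}}
```

For 2.5 million rows the real cost is holding both sets in memory at once, which is exactly the memory pressure mentioned later in the thread.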
@goomba you'd spin up a new app process to do it each time, I think is what he's saying.
@tjtolton correct, the only issue is that the above operation in SQL takes several hours to complete
and since I need to run it every few minutes...
there are, I'm sure, other ways to do it a la "well why don't you just..." ... but it would be REALLY nice if we could just leverage immutable data structures 😂 😂 😂
@goomba first you would need to convert your sql schema to datomic schema then migrate all your current data over, then rewrite your app to use datomic. doing all that might prevent the original problem you were having
@gdeer81 more than happy to do that, this is more of a feasibility question. And yes preventing the original problem would certainly count for me as solving it!
This whole situation came about because trying to prevent duplicate entries on insert means an insert speed of about 20-25 records per second while we generate about 30-35 records/second
so to prevent this we created a table where we don't check for duplicates, giving us an insert speed of 4k records/second, but now we have duplicates
I'm not sure what Datomic's maximum write throughput is, but assuming one record breaks out into at least 10 datoms, matching your 4k records/second means transacting about 40k datoms per second.
well. That would certainly work.
ohh I see
well the good news is we use MongoDB as a buffer to handle the inserts
and I can drip-feed the info from Mongo into Datomic with some buffering
and it doesn't have to be up-to-the-second accurate
it's our long term storage
but as you know not great for queries
I really hate to discourage anyone from trying Datomic but your original problem could probably be solved with Informatica or writing your own de-dup process in Clojure
Not discouraged, just looking for any excuse to push for Datomic right meow 😄
damn, I might literally have to do this in Clojure. It just takes up too much damn memory in Python
When trying to require the datomic clj-client I get this error:
java.lang.UnsupportedClassVersionError: org/eclipse/jetty/client/HttpClient : Unsupported major.minor version 52.0
clojure.lang.Compiler$CompilerException: java.lang.UnsupportedClassVersionError: org/eclipse/jetty/client/HttpClient : Unsupported major.minor version 52.0, compiling:(cognitect/http_client.clj:1:1)
I don't have any dependency conflicts with Jetty. How do I use the clj-client?
I was using Java 7 and I just switched to Java 8 and it's working. If it's not already documented, that should be added 🙂
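For reference, "Unsupported major.minor version 52.0" means a class file compiled for Java 8 (class-file version 52) was loaded by an older JVM, so the quick diagnosis is checking which JVM is on the PATH:

```shell
# Class-file version 52.0 = Java 8; 51.0 = Java 7. If this reports a
# 1.7.x runtime, the Jetty jar pulled in by the clj-client (built for
# Java 8) will fail to load with exactly this error.
java -version
```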