This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2017-05-04
Channels
- # bangalore-clj (3)
- # beginners (23)
- # boot (89)
- # cider (11)
- # cljs-dev (22)
- # cljsjs (5)
- # cljsrn (21)
- # clojure (141)
- # clojure-android (1)
- # clojure-berlin (1)
- # clojure-greece (1)
- # clojure-italy (13)
- # clojure-mke (2)
- # clojure-nl (8)
- # clojure-norway (5)
- # clojure-russia (22)
- # clojure-sg (4)
- # clojure-spec (38)
- # clojure-uk (109)
- # clojurescript (150)
- # consulting (4)
- # core-async (7)
- # cursive (13)
- # datascript (8)
- # datomic (72)
- # dirac (185)
- # emacs (5)
- # figwheel (2)
- # flambo (1)
- # hoplon (13)
- # immutant (6)
- # lambdaisland (7)
- # lumo (46)
- # off-topic (13)
- # om (4)
- # onyx (1)
- # pedestal (1)
- # re-frame (68)
- # reagent (15)
- # rum (16)
- # slack-help (4)
- # spacemacs (22)
- # specter (3)
- # vim (10)
- # yada (28)
Simplest example: :entity/state is a keyword.
[:find ?e :where [?e :entity/state :pending]]
works.
[:find ?e :where [?e :entity/state ":pending"]]
also works.
[:find ?e :where [?e :entity/state "pending"]]
used to work in the old version but no longer works in the new version.
I have a question. Say I want to express the equivalent of this (pseudo?) SQL query in Datomic/datalog:
SELECT author.name, count(*) AS num_books
FROM author
JOIN book
ON author.id = book.author_id
GROUP BY author.id
HAVING num_books >= 5
I think that in the peer library, you could just fetch all the author/book combinations then group (`GROUP BY` above) and filter (`HAVING` above) in good ol' Clojure without any waste of bandwidth or anything, since you essentially have all the data locally anyway. (If this is not correct, let me know!)
If instead you were using the client library, and you took the same approach of fetching all the author/book combinations, I'm guessing you'd transfer data (potentially a lot of data) that you didn't end up using. Is that right? How would you avoid that in the client library?
@slpssm the third one, if it ever worked, was a bug. The second one is, I think, a concession to Java users of the API.
[:find (pull ?aid [:author/name]) (count ?bid) :where [?aid :author/books ?bid]]
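That query covers the join and the count, but not the HAVING num_books >= 5 part. A minimal sketch of one way to finish it, assuming the :author/name / :author/books schema used above (the row data below is made up for illustration):

```clojure
;; Rows from the aggregate query above have the shape
;; [pulled-author num-books], e.g. from:
;;   (d/q '[:find (pull ?aid [:author/name]) (count ?bid)
;;          :where [?aid :author/books ?bid]]
;;        db)
;; The HAVING num_books >= 5 step is then a plain filter over those rows
;; (`having-at-least` is a hypothetical helper, not a Datomic API):
(defn having-at-least [n rows]
  (filter (fn [[_author num-books]] (>= num-books n)) rows))

(having-at-least 5 [[{:author/name "A"} 7]
                    [{:author/name "B"} 3]])
;; => ([{:author/name "A"} 7])
```

With the peer library this filter runs in your own process, which is the "group and filter in good ol' Clojure" approach discussed above.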
@jeff.terrell
Is there any way to specify :db/txInstant
while pipelining a data import? Datomic transactions
must ensure (:db/txInstant early-tx) <= (:db/txInstant later-tx);
however, pipelining can't ensure the ordering of imports.
@jeff.terrell you're correct about the peer library. All query engines need their working set in memory 🙂
@jeff.terrell The approach would be essentially the same with the client - the difference is that the work would happen on the peer server instead of in your process
Incidentally, since the client will return results in a channel, I’d do the filtering with a transducer
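Filtering a results channel with a transducer, as suggested above, can be sketched like this (a minimal example with made-up rows of the shape [pulled-author num-books]; this simulates the channel rather than calling the real Datomic client API):

```clojure
;; Sketch: apply the HAVING-style filter as a transducer on the
;; channel carrying query results (data here is hypothetical).
(require '[clojure.core.async :as a])

;; Channel whose transducer drops rows with fewer than 5 books:
(def out (a/chan 16 (filter (fn [[_author num-books]] (>= num-books 5)))))

;; Simulate the client delivering result rows, then closing the channel:
(a/thread
  (doseq [row [[{:author/name "A"} 7]
               [{:author/name "B"} 3]
               [{:author/name "C"} 12]]]
    (a/>!! out row))
  (a/close! out))

;; Collect what survives the transducer:
(def filtered (a/<!! (a/into [] out)))
;; => [[{:author/name "A"} 7] [{:author/name "C"} 12]]
```

Note the rows are still transferred over the wire before the transducer drops them, which is exactly the bandwidth concern raised next.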
@marshall - OK. I think that's about what I expected. Thanks for confirming. Now, if there was a long tail, with lots of results < 5, I'd be wasting a lot of bandwidth, right? Is there a simple way to avoid that? Maybe ship the filter up to the server side, somehow?
If this were SQL and I was trying to limit results somehow in a way that wasn't supported by SQL, I'd reach for a database function.
TBD 🙂 Sort in the peer server + limit/offset - which I believe is a request in our customer feedback portal
OK! :-)
@marshall - I think the feature I want to suggest is "custom server-side database functions to limit or transform results server-side when using the client library". Does that sound like a reasonable request to you, or would that be a bad idea for any reason that you can see?
I went ahead and added this.
When I am trying to suggest features - it stays on the account page and just adds "#"
http://api.eu-west-1.receptive.io/widget/ping Failed to load resource: the server responded with a status of 400 (Bad Request)
Also I have this error - is it connected?
It worked for me, @kirill.salykin.
but doesnt work for me
@kirill.salykin Are you running any ad blockers or ghostery or anything like that?
I disabled AdBlock
still same
chrome and safari
will try Firefox
possible…
receptive responds with {"message": "invalid user"}
heh. for those of you awaiting updates with bated breath about Kirill's plight in accessing the feature portal - software is hard, integrating multiple pieces of software is harder 🙂
how can I modify this query: '[:find ?name ?filter :where [?e :part/name ?name] [?e :part/filter ?filter]] so it returns [...] for ?filter? :part/filter is a cardinality-many keyword attribute, but the above query returns only the first keyword from the collection...
got it somehow working like this: (d/q '[:find ?name (distinct ?filter) :where [?e :part/name ?name] [?e :part/filter ?filter]] (d/db (d/connect database-uri)))
but I feel I am missing something... ;/
@isaac using the implementation of pipelining in the docs, yes. But within a process you are guaranteed that the order of d/transact calls is the order of execution.
So with an implementation of pipelining that calls d/transact in the correct order, you know they will run in the correct order.
I also wrote one (less elegant) that didn't use core.async: it used a reduction, kept in-flight transactions in an accumulating vector, and did "gc" of in-flight futures when the vector reached capacity.
You mean, does d/transact-async
keep the ordering of executions the same as the ordering of invocations?
the problem with the pipelining code in the docs is that core.async/pipeline does not preserve order of invocation, because it uses a threadpool
You are talking about this pipeline code? http://docs.datomic.com/best-practices.html#pipeline-transactions
core.async/pipeline-blocking doesn't run input "tasks" in-order because it runs them in parallel on a threadpool
but d/transact-async etc send transactions to the transactor in the order they are invoked
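The "reduction with an accumulating vector" approach described above can be sketched roughly like this. This is a hypothetical reconstruction, not the author's actual code: `transact!` stands in for something like #(d/transact-async conn %), abstracted out so the ordering logic is plain Clojure.

```clojure
;; Sketch of order-preserving pipelining: submit transactions in
;; invocation order on a single thread, keeping at most `window`
;; futures in flight. `transact!` is a stand-in for e.g.
;; #(d/transact-async conn %) (names here are hypothetical).
(defn pipeline-transact
  [transact! tx-seq window]
  (let [remaining
        (reduce (fn [in-flight tx-data]
                  (let [in-flight (conj in-flight (transact! tx-data))]
                    (if (< (count in-flight) window)
                      in-flight
                      ;; "gc": block on the oldest in-flight future
                      ;; once the vector reaches capacity
                      (do (deref (first in-flight))
                          (subvec in-flight 1)))))
                []
                tx-seq)]
    ;; drain the tail so every transaction has completed before returning
    (run! deref remaining)))
```

Because each `transact!` call happens in sequence on one thread, submission order matches invocation order, which is the guarantee d/transact-async needs, while the window still allows several transactions to be in flight at once.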