This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2016-04-21
Is there a way for the pull syntax to pull something up to the "top-level" of the returned entity? I have a cardinality-one reference, and I would like one of its keys to be included in the returned object. For now I have done an update/get combo, but I wonder if there was something in the pull syntax I hadn't understood, hiding this functionality.
Is there something like an `UPDATE … WHERE …` syntax for transactions? Currently we do a `find` and then generate the transactions on the client, but that takes a while.
@dominicm: can’t you just do `[:find (pull ?p [*]) (pull ?c [*]) :in $ :where [?p :rel ?c]]`, or am I misunderstanding something?
@grav: The relations would be like `{:ref {:ref/key :foobar}}`. I want to be able to just get `{:ref/key}`.
@grav, I think that's the normal way to do UPDATEs. How long does it take? Maybe there's a way to speed up the query.
@pesterhazy it’s not the query that takes time, it’s the transactions.
if so, doing it inside a transactor fn will probably not help, true?
@dominicm: `pull` will return data in a standard way for entities and nesting/refs, etc. - it’s just normal Clojure data. If you want it to be in a different shape, you’ll have to manipulate the returned data - there’s no part of a pull specification that will e.g. flatten multiple entities into a single map.
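Since the flattening has to happen outside the pull spec, a minimal post-processing sketch in plain Clojure (the attribute names here are hypothetical):

```clojure
;; Hypothetical pull result with a nested cardinality-one ref:
(def pulled
  {:order/id 1
   :order/customer {:customer/name "Ada"}})

;; Lift one key out of the nested ref to the top level, dropping the ref:
(defn lift-ref-key [m ref-attr nested-key]
  (-> m
      (assoc nested-key (get-in m [ref-attr nested-key]))
      (dissoc ref-attr)))

(lift-ref-key pulled :order/customer :customer/name)
;; => {:order/id 1, :customer/name "Ada"}
```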
@grav: There shouldn’t be anything intrinsic to transactions taking a while from the peer unless you’re e.g. calling `transact` instead of `transact-async` in parallel or looping transaction submission logic (not using async implies blocking on a round trip per submitted transaction).
You don’t want to push that logic to the transactor in e.g. a transaction function, as it will then run serially if the logic generating the transactions is the bottleneck. Of course, if you need to query the exact database state before the transaction goes in (i.e. ACID isolation), then inside a transaction function is the correct place for that logic.
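A sketch of the pipelined submission described here, assuming a connected peer `conn` and a seq of transaction-data vectors `tx-batches` (both hypothetical names):

```clojure
(require '[datomic.api :as d])

(defn submit-all!
  "Submit every batch without blocking on a round trip per transaction."
  [conn tx-batches]
  (let [futures (mapv #(d/transact-async conn %) tx-batches)]
    ;; deref at the end to wait for (and surface errors from) all of them
    (mapv deref futures)))
```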
@bkamphaus: I thought not, just wanted to check. I wasn't sure how comprehensive the pull api was aiming to be.
so I’ve used hashmaps as inputs where the logic variable was a key. For example, `[(get $1 ?k) ?v]`. But I’m not sure how - or if it’s even possible - to get a value into a logic variable: `[(get $1 :id) ?v]`. I understand why it doesn’t work but I can’t quite figure out if something else could work.
I sometimes feel like I’m playing Jeopardy…can you put that expression in the form of a relation, Alex?
I thought something like `[(= (get $1 :id) ?v)]` might work, but it can no longer resolve `$1`.
@bkamphaus: thanks, that cleared it up a bit. We are doing sync. transactions, but only for small amounts of data. This was for a migration, so it would make sense to do it async. Thanks!
@actsasgeek: I think I’m gonna need you to step back to describe the use case and maybe see a full query example.
Is there anything built into Datomic to allow case-insensitive substring matches on string attributes?
Like testing `"foo"` against a `:user/email` of `"…"` and `"[email protected]"`.
@sdegutis: you can use regex stuff in a query if that’s what you mean. Example from a different use case here: http://stackoverflow.com/questions/32164131/parameterized-and-case-insensitive-query-in-datalog-datomic?answertab=oldest#tab-top
@bkamphaus: Thanks. I think `fulltext` might actually be what I'm looking for; still figuring this out.
fulltext makes sense if what you’re actually trying to match/search is compatible with Lucene’s defaults. Also worth noting that it’s the only aspect of Datomic that’s essentially eventually consistent (the fulltext index updates in the background and is not always guaranteed to reflect the most recent transactions).
@bkamphaus: Oh, so `:db/fulltext` needs to be `true` on an attribute for it to even work?
Not trying to discourage use of fulltext, but I think it is worth commenting on. For limited text match/search use it’s fine. For anything less trivial, I’d suggest keeping text data you want to search that way outside of Datomic and pointing to it from there.
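For reference, `:db/fulltext` is part of the attribute’s installed schema, and the index is then queried with the built-in `fulltext` binding form. A sketch with a hypothetical attribute:

```clojure
;; Schema: :db/fulltext true must be set when the attribute is installed
{:db/ident       :post/body
 :db/valueType   :db.type/string
 :db/cardinality :db.cardinality/one
 :db/fulltext    true}

;; Query: fulltext returns [entity value tx score] tuples; bind what you need
'[:find ?e ?body
  :in $ ?search
  :where [(fulltext $ :post/body ?search) [[?e ?body]]]]
```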
@bkamphaus: From a high level, what technique would you recommend to search for users whose `:user/name` or `:user/email` or `(comp :account/name :user/account)` match a given substring case-insensitively?
@bkamphaus: I only provide those examples to demonstrate the scope of the kind of query I'm trying to make, i.e. that the thing I'm searching for (via case-insensitive substring) may not all be on the same attribute or even entity.
I have a solution in mind, but I'd like to know how you'd personally go about this, at a high-level.
I would probably start with something like `re-matches` in an `or` clause. I would reserve fulltext for cases where I had something like posts or tweets or short descriptions of something (i.e. a paragraph or a few sentences) I needed to search.
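A sketch of that starting point, passing a pre-compiled pattern in as an input (using the `(?i)` regex flag as one way, among others, to get case-insensitivity):

```clojure
(def user-match-q
  '[:find ?u
    :in $ ?pattern
    :where
    (or [?u :user/name ?s]
        [?u :user/email ?s])
    [(re-matches ?pattern ?s)]])

;; e.g. (d/q user-match-q db (re-pattern "(?i).*foo.*"))
```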
@bkamphaus: Ah. My solution was going to be three separate `d/q` queries, and then to just `distinct` the results together.
@bkamphaus: But I like your idea, it may be quicker to run and easier to write.
nothing wrong with composing the queries, and I’d prefer composing separate queries if there’s a use case for checking to see if only one match applies. But if you always want to collapse those, `or` or a rule makes sense.
I guess my only worry was that all the variables in the query unify, but in this case that's not a problem.
@bkamphaus: Ah, I remember now why my solution wouldn't work. It's because my `:user/account` (not the real attribute name) may be `nil`, and thus it would fail to match any `?user`s which didn't have one. On account of how a `:where` clause specifying an attribute inherently assumes that attribute exists.
The clauses establishing e.g. `[?u :user/account ?a]` and possibly `[?a :account/name ?name]` would each need to be a different path in a rule, or handled by an `and` inside an `or` clause, for that use case.
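One way that shape can look, with `and` branches inside an `or-join` so the optional `:user/account` path only has to bind in its own branch:

```clojure
'[:find ?u
  :in $ ?pattern
  :where
  (or-join [?u ?pattern]
    (and [?u :user/name ?s]
         [(re-matches ?pattern ?s)])
    (and [?u :user/email ?s]
         [(re-matches ?pattern ?s)])
    (and [?u :user/account ?a]
         [?a :account/name ?s]
         [(re-matches ?pattern ?s)]))]
```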
@bkamphaus: right, and then the `or` would need a join, and it gets real messy real quick.
I wonder if it would just be cleaner and still run reasonably fast to just do three queries.
this is exactly the use case where I jump from `or` to rules, personally - "three different ways by which a particular condition is met" - especially when the `or` clause can get messy.
ha, using this example:
[[(social-media ?c)
  [?c :community/type :community.type/twitter]]
 [(social-media ?c)
  [?c :community/type :community.type/facebook-page]]]
It says 'In this rule, the name is "twitter"', but the first line is actually `[(twitter? ?c)`
except you’ll have three `(user-search ?string)` rule heads, for user name, user email, and user/account -> account/name
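A sketch of those three rule heads (searching here by entity and a pre-built pattern rather than a raw string):

```clojure
(def user-search-rules
  '[[(user-search ?u ?pattern)
     [?u :user/name ?s]
     [(re-matches ?pattern ?s)]]
    [(user-search ?u ?pattern)
     [?u :user/email ?s]
     [(re-matches ?pattern ?s)]]
    [(user-search ?u ?pattern)
     [?u :user/account ?a]
     [?a :account/name ?s]
     [(re-matches ?pattern ?s)]]])

;; e.g. (d/q '[:find ?u :in $ % ?pattern :where (user-search ?u ?pattern)]
;;           db user-search-rules (re-pattern "(?i).*foo.*"))
```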
@bkamphaus: which I'll probably just make into a function and refer to it as a fully qualified Clojure function in the query
all done at typical Clojure/JVM speed in the Peer / your app. Nothing is really faster inside vs. outside the query (after a little bit of overhead for the query to be parsed).
That’s why taking a union of relations from multiple queries isn’t typically a big deal either, unless each part’s match benefits from the previous clauses restricting the number of datoms that are matched.
@bkamphaus: I was kind of figuring it would be quicker as one query because otherwise it has to enumerate all users thrice.
yep, if each query enumerates all users and the rule with three paths can operate on a one-time enumeration of those users, that’s a performance win; but semantics and composability can trump performance if it’s not a bottleneck or requirement for that case.
Sweet, this works amazingly. Thanks @bkamphaus.
@bkamphaus: the only thing that would make this sweeter is being able to bind a function to a local name, so I don't have to put a fully qualified function (including namespace path) into the query. So instead of `(myapp.a.b.c/matches? ?a ?b)` I could bind that to `matches?` in the rule definition or something and then just use that.
@bkamphaus: I’m not sure why the use case is mysterious. 😛 It’s easy to include a vector of tuples as an input into a query. Suppose I have a vector of tuples, `[[id email]]`; then I can do `[:find ?id ?name ?email :in $ [[?id ?email]] :where [?id :person/name ?name]]`. Now what if I had `{:id id :email email}` instead… is it possible to use it directly, or must the data be transformed to a relation? I could do `(map (juxt :id :email) input-data)`, I suppose, but I’d been able to use hashmaps directly in the past.
Possible to use? Yes. Simple to use? No. I think it’s probably simpler to transform to a collection of tuples/relations for binding inputs to a parameterized query and using those inputs in evident ways in clauses in the general case.
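A sketch of that map-to-tuples transformation, with made-up data:

```clojure
(def input-data
  [{:id 1 :email "a@example.com"}
   {:id 2 :email "b@example.com"}])

;; Transform maps to tuples, then bind the result as a relation input:
;; (d/q '[:find ?id ?name ?email
;;        :in $ [[?id ?email]]
;;        :where [?id :person/name ?name]]
;;      db
;;      (map (juxt :id :email) input-data))
(map (juxt :id :email) input-data)
;; => ([1 "a@example.com"] [2 "b@example.com"])
```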
ok, thanks!