This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2022-01-03
Channels
- # aleph (2)
- # announcements (13)
- # babashka (7)
- # beginners (36)
- # calva (26)
- # cider (11)
- # circleci (13)
- # clj-kondo (15)
- # clojure (105)
- # clojure-europe (79)
- # clojure-nl (3)
- # clojure-uk (6)
- # clojurescript (17)
- # conjure (4)
- # core-logic (2)
- # cursive (10)
- # data-science (5)
- # datalevin (11)
- # datalog (14)
- # eastwood (6)
- # emacs (2)
- # figwheel-main (1)
- # fulcro (34)
- # google-cloud (1)
- # graphql (3)
- # introduce-yourself (7)
- # jobs (1)
- # leiningen (17)
- # lsp (46)
- # malli (2)
- # minecraft (3)
- # missionary (19)
- # off-topic (31)
- # other-languages (49)
- # polylith (2)
- # portal (5)
- # practicalli (1)
- # quil (77)
- # releases (1)
- # remote-jobs (1)
@quoll It would be great to re-balance http://clojurelog.github.io if you have fixes we can make to the table. The rows are very flexible. 🙂 It's xtdb-heavy because it originally came out of a talk @taylor.jeremydavid gave, which in turn was in response to some deep confusion about what xt even is ("oh, that's the database you use when you have a temporal modelling problem, right?"). We created it because it didn't exist, but the table is only useful if it's representative. If it's not, we should fix that. It's meant to be a community resource, not marketing material. As before, quite happy to make you and @huahaiy (and @tonsky and the DataHike folks) contributors to that repo, but issues and PRs are great too, if that's easier. 🙏
I'm looking at the table for the first time in a few months and noticing more than ever that the granularity of some of the rows is really out of whack. It might make sense to cluster rows into topics (especially if we add some new rows)? Maybe: • transactions • queries • graph query • non-Datalog query • operations ...or something. 🙂 It's sort of disorienting to have "wildcard attributes" and "graal native image" in the same global table, for example.
@quoll I took a stab at organizing the rows in the table: https://clojurelog.github.io/ ... it seems pretty clear that "Advanced Graph Features" hand-waves over a lot of Asami's core feature set. I'm not sufficiently familiar with Asami to know which features you feel are an appropriate granularity to highlight there, or if it makes sense for Graph Query to be its own category. Alternatively, I could see "query planner", "lazy queries", and "SQL" going into Developer Experience instead. Happy to rename/expand/collapse as the owners of these projects see fit. 🙂
Like the partitioning of the table! But, I find things like runtimes go way beyond "developer experience", no? 🙂
Agreed 😄 I've opened an issue to re-organise things further if anyone wants to offer suggestions: https://github.com/clojurelog/clojurelog.github.io/issues/10
> But, I find things like runtimes go way beyond "developer experience", no? @U02E9K53C9L Feel free to comment on the issue. It's definitely a draft. 🙂 I didn't want to end up with 8 categories for the number of rows we have so far and lumping things under DX was a cheap way out. 😉
Hopefully, Paula will give you further (and better) recommendations about Asami, but some things that came to mind: • transaction functions https://github.com/threatgrid/asami/issues/224 • Add transact time support for schemas https://github.com/threatgrid/asami/issues/223 • The fact that Asami can https://github.com/threatgrid/asami/wiki/5.-Entity-Structure#arrays • "Asami allows any datatype to be used as attributes." • "Asami graphs are valid Loom graphs via https://github.com/threatgrid/asami-loom" • "Pluggable Storage: Like Datomic, storage in Asami can be implemented in multiple ways. There are currently 2 in-memory graph systems, and durable storage available on the JVM." Afaik, only memory mapped files are available currently, but the https://github.com/threatgrid/asami/wiki/Dev:-4.-Storage-Whitepaper#blocks (like Datomic). I think this distinguishes Asami from Datalevin and Datahike?
@U02E9K53C9L These are fantastic! Would you mind creating a new issue with this list? Paula could add her recommendations to that issue when she's back, maybe.
I may be a few days. I’m absolutely exhausted after the last couple of days. But we just got power back, meaning that we have water again (we need power for our well pump), I can cook a hot meal once more, and the house has heating! (The house is insulated, but -10℃/14℉ outside still brought the temperature inside down a long way). Still more cleanup after the storm, but I should be back online soon
@quoll Take care! Obviously random websites are a lot less important than bringing your house back to life. We had a massive water main break on Christmas day (and for a few days following that) but I can imagine no power and no water makes for an extremely trying end to the holidays. Good luck. ♥️
@U01AVNG2XNF we have water and power back, but roads are cut off or reduced to one lane everywhere here since the snow was so sticky and heavy that it caused trees to collapse everywhere. (I have a massive oak tree down in my front yard that fell from the neighbor’s. THAT will be fun and expensive to move). Many friends and neighbors are still without power and water, so I'm doing supply runs for people today. (Cooking soup for my son’s girlfriend's family as I type). Trying to get everything under control before we get more snow tomorrow night. Hopefully we keep the power on this time!
For an entity having two properties :thing/a and :thing/b, is it possible in one query to find the entity with the max value of :thing/a and the value for :thing/b?
Example: for a db holding three values:
{:thing/a 1 :thing/b "Rain"}
{:thing/a 2 :thing/b "Snow"}
{:thing/a 3 :thing/b "Sun"}
I would like to receive the pair [3 "Sun"]
You could certainly do it by using a (max ?thing-a) aggregate within a subquery, and then joining to find ?thing-b in the outer query. If this were for XT, you could also write a custom aggregate (in userspace) to do everything in a single pass, which may well be faster in some scenarios.
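A minimal sketch of that approach, assuming a Datomic-style `q` API (exact syntax varies slightly between XTDB, Datomic, Datalevin, etc.); here the "subquery" is simply run first from application code, and its result is joined in a second query:

```clojure
;; Step 1: aggregate to find the maximum :thing/a value.
;; Step 2: join on that value to pull the matching :thing/b.
;; `q` and `db` are assumed to be the query fn and database value
;; of whichever Datalog engine is in use.
(let [max-a (ffirst (q '[:find (max ?a)
                         :where [_ :thing/a ?a]]
                       db))]
  (first (q '[:find ?a ?b
              :in $ ?a
              :where [?e :thing/a ?a]
                     [?e :thing/b ?b]]
            db max-a)))
;; With the example data above this yields [3 "Sun"].
```

Engines that support inline subqueries (e.g. via rules or nested `q` calls) can express the same join in a single query form, which is what the single-pass custom-aggregate suggestion would improve on.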