This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2022-08-08
Channels
- # announcements (4)
- # aws (1)
- # babashka (4)
- # beginners (75)
- # biff (4)
- # calva (13)
- # clojure (76)
- # clojure-android (1)
- # clojure-austin (9)
- # clojure-europe (14)
- # clojure-mexico (3)
- # clojure-nl (2)
- # clojure-norway (11)
- # clojure-uk (14)
- # clojurescript (19)
- # conjure (14)
- # cursive (30)
- # datomic (13)
- # gratitude (6)
- # hyperfiddle (71)
- # introduce-yourself (2)
- # juxt (5)
- # malli (5)
- # nbb (5)
- # nrepl (10)
- # off-topic (32)
- # re-frame (3)
- # releases (1)
- # shadow-cljs (5)
- # sql (38)
- # tools-deps (24)
- # xtdb (19)
Clojure has clojure.set/rename-keys and clojure.core/update-keys -- what additional "helper" would you need/want?
Yes, those are what I'm using now.
Currently I form my queries with HoneySQL in one place, pass the result to execute! or one of the variants, and then transform the result using rename-keys or update-keys. I was just thinking that perhaps it's a common enough operation that some of the bits would be baked into next.jdbc somewhere for convenience. But I suppose it's also simple enough that it doesn't need to be part of the library.
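The post-query key transformation described above can be sketched like this (the row map is hypothetical; real rows would come back from next.jdbc with table-qualified snake_case keys):

```clojure
(require '[clojure.set :as set])

;; A hypothetical row, shaped like next.jdbc's default output:
(def row {:users/first_name "Rich" :users/created_at "2022-08-08"})

;; rename-keys maps specific keys to domain keys:
(set/rename-keys row {:users/first_name :user/name})
;; => {:user/name "Rich", :users/created_at "2022-08-08"}

;; update-keys (Clojure 1.11+) applies a function to every key,
;; e.g. re-namespacing everything under a hypothetical app namespace:
(update-keys row #(keyword "app.user" (name %)))
;; => {:app.user/first_name "Rich", :app.user/created_at "2022-08-08"}
```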
I generally recommend not transforming the result of next.jdbc except right at the edge if you need to pass that data to a third party (calling an external API or returning data to a client).
You can always use aliases in the queries to get the column name part closer to your needs.
I like to leave stuff in snake_case (which is what our DB is) so it's clearer in the code what is raw data and what is constructed domain data. But some folks like to use CSK (which next.jdbc supports directly if it is on the classpath) to auto-convert results to kebab-case.
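The snake_case-to-kebab-case conversion mentioned above can be sketched without any extra dependency (next.jdbc's built-in support goes through the camel-snake-kebab library via result set builders; the helper below is a hand-rolled stand-in purely for illustration):

```clojure
(require '[clojure.string :as str])

(defn snake->kebab
  "Hand-rolled stand-in for CSK's kebab-case conversion; illustration only."
  [k]
  (keyword (some-> (namespace k) (str/replace "_" "-"))
           (str/replace (name k) "_" "-")))

(update-keys {:users/first_name "Rich" :users/last_login "2022-08-08"}
             snake->kebab)
;; => {:users/first-name "Rich", :users/last-login "2022-08-08"}
```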
Hmm, that's kind of the opposite of the approach I was thinking. I was considering isolating the database layer to a small portion of the application and using fully qualified namespace keys from my application model everywhere else. I'm using this with Fulcro RAD and Pathom, where part of the value proposition is that you can think about application entities + their attributes and not need to consider the storage mechanism or underlying data model. Matching up my RAD app model keys with the storage keys seems a bit awkward. I tried using aliases, but it looks like you cannot alias to a fully-qualified key.
That's also a perfectly reasonable approach -- translating in both directions just around the DB layer -- it's a different set of tradeoffs. We do a lot of stuff that is purely server-side and often get data from one part of the DB, and store parts of it back into other places, so constantly converting back and forth between DB format and domain format would be a waste of time. If you're treating the DB almost entirely as a persistence mechanism and all of your data handling happens in memory or on the client then transforming both ways around the DB is reasonable.
Very well. Thanks very much for the thoughtful feedback, @U04V70XH6.
noob question, is working with sql strings for static queries that tedious as the internet makes it out to be?
It depends on your threshold for "tedious" I think...
At work, we do have a lot of SQL strings, just embedded in calls to next.jdbc functions, where they are static (or very simple conditionals). For more complex conditional SQL, we use HoneySQL.
You might find the https://github.com/krisajenkins/yesql#rationale section of the Yesql README interesting. (Yesql is deprecated, I believe, but HugSQL might be a good alternative.)
Thx. Been struggling with this for a week. To answer seancorfield, my threshold for tediousness is high if it keeps things simple, so I'm going with strings inside code files. I don't find syntax highlighting that important, and indentation is also not a big deal if you use an editor that is good at processing text like vim/emacs; you can also copy/paste from sql files (for large queries) -- again, vim/emacs make this a breeze. I agree with the rationale that SQL is already a mature and widely used DSL, though, and that yesql might be a better fit for teams. It's the sad state of affairs when dealing with databases, I guess (pick your level of pain :)
I think the most challenging part is going to be dynamic queries, haven't got to that part yet.
Strings in Clojure files seems like a totally reasonable way to get started, and the transition to something like HugSQL should be smooth if you ever decide you need to go down that path.
For dynamic queries (where the dynamism is more than just parameter substitution) people seem to love HoneySQL.
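A sketch of the dynamic-query approach: in HoneySQL the query is just a data structure you assemble conditionally, and HoneySQL's honey.sql/format then compiles the finished map into the ["SQL string" params...] vector next.jdbc expects. The table, columns, and filters below are hypothetical; only the pure data-assembly step is shown:

```clojure
(defn user-query
  "Hypothetical example: add WHERE clauses only for the filters supplied.
  The resulting map would be passed to honey.sql/format."
  [{:keys [min-age city]}]
  (cond-> {:select [:id :name] :from [:users]}
    ;; (fnil conj [:and]) starts the :where vector on first use
    min-age (update :where (fnil conj [:and]) [:>= :age min-age])
    city    (update :where (fnil conj [:and]) [:= :city city])))

(user-query {:min-age 21})
;; => {:select [:id :name], :from [:users], :where [:and [:>= :age 21]]}
```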
thx, and yeah, looks like the best method is just sucking it up and maybe using all three
One thing I tend to do for static SQL is this:
(jdbc/execute! ds [(str "SELECT foo, bar"
                        " FROM table"
                        " WHERE id = ?")
                   table-id])
That gives me readable, indented SQL and makes it easy to add in conditional strings. You have to remember leading spaces on everything after the SELECT, and for conditionals you can use (when condition " some string") because if the condition is falsey, then when produces nil and str turns that into an empty string.
For example: https://github.com/seancorfield/next-jdbc/blob/develop/test/next/jdbc/result_set_test.clj#L442-L453
(that first example doesn't need str but is another possibility for SQL layout with multi-line strings)
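The when/str trick described above, sketched end to end (the extra AND clause and its column are hypothetical):

```clojure
(defn user-sql
  "Hypothetical static query where one fragment is conditional."
  [include-city?]
  (str "SELECT foo, bar"
       " FROM table"
       " WHERE id = ?"
       ;; when returns nil for a falsey condition, and str renders nil as ""
       (when include-city? " AND city = ?")))

(user-sql false) ;; => "SELECT foo, bar FROM table WHERE id = ?"
(user-sql true)  ;; => "SELECT foo, bar FROM table WHERE id = ? AND city = ?"
```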
another option that solves this problem is datomic, but a closed source niche database is a hard sell
XTDB is similar and open source
My understanding is that they use different data models: document store (XTDB) vs attribute-oriented data modeling (Datomic)
Yeah, I prefer the document store approach of XTDB and the fact it's basically "schema-less".
Interesting, I saw a Hickey video where he is really against tables/documents. So mongodb would be an apt solution for you?
We use MySQL (Percona) very heavily at work. We also used MongoDB for several years but it's... problematic... in many ways at scale so we stopped using it. We looked at Datomic and discussed it with Cognitect but some of our use cases do not suit it. XTDB is a much better match for us but I don't know how it would work at scale for us. We have a lot of tables with over 100M rows
ah wow, that's a lot of data. I read somewhere Datomic has limits of 10 billion datoms, so definitely not a good fit. For most internal applications it might not be an issue. Is this a SaaS?
Online dating platform. Supports about forty dating sites. 140K lines of Clojure on the backend (and about the same lines for the React.js front end).
To support the flexibility we need for several things, we already essentially have EAV overlaid in a couple of tables as adjuncts to fixed tables -- all of that would be simpler to deal with, both from a query and an update p.o.v., if we were using a document store where you didn't need to predeclare fields.
Yeah, apparently, but if we were to move away from MySQL/Percona at this point, we'd probably go to XTDB. MongoDB doesn't bring enough benefits to make the change of DB worthwhile really.