2022-09-20
I'm using next.jdbc and was wondering what is the best place to plug in table name conversion. Assume I have my system configured as follows -- this is using a component-style system, think Stuart Sierra's Component or Integrant:
;; assumes (:require [next.jdbc :as jdbc]
;;                   [next.jdbc.connection :as connection])
;;         (:import (com.zaxxer.hikari HikariDataSource))
(defn start
  [config]
  {:config config
   :ds (jdbc/with-options
         (connection/->pool HikariDataSource config)
         jdbc/snake-kebab-opts)})
I can run the following query:
(jdbc/execute! ds
["SELECT * FROM account LIMIT 1"])
And get the following result:
[{:account/verified true,
  :account/full-name "First Second",
  :account/account-id #uuid "01835ba0-07c0-835d-bd8e-e476b1910459"}]
I would like to convert the namespace of those keywords to, let's say, user, to get the following result:
[{:user/verified true,
  :user/full-name "First Second",
  :user/account-id #uuid "01835ba0-07c0-835d-bd8e-e476b1910459"}]
Is there any recommendation where would be the best place to plug this in?

My first question would be "Why?" I try to encourage people to always use the default builder -- I only added the snake/kebab stuff because people complained so much about _ in their table and column names, but I do not consider it a good idea, and it doesn't apply automatically for plan operations (which I consider the better API to use, if possible).
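For what it's worth, the rewrite being asked about can also be done as a plain post-processing step over the result rows. A minimal sketch with a hypothetical helper name (not part of next.jdbc; next.jdbc does offer next.jdbc.result-set/as-modified-maps with :qualifier-fn / :label-fn options if you'd rather do it in the builder):

```clojure
;; Hypothetical helper: replace the namespace of every key in a result
;; row with a fixed qualifier. update-keys requires Clojure 1.11+.
(defn rename-ns
  [row new-ns]
  (update-keys row #(keyword new-ns (name %))))

(rename-ns {:account/verified true
            :account/full-name "First Second"}
           "user")
;; => {:user/verified true, :user/full-name "First Second"}
```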
@U8A5NMMGD I'm not certain if this will help, but I asked a similar question a while back https://clojurians.slack.com/archives/C1Q164V29/p1605609802358700 -- I am 100% happier NOT modifying the ns qualifiers that are returned.
What's the default for the builder? Does it return snake_case if table/column names are snake_case?
The default is to return :table/column as-is. No conversion = (slightly) better performance 🙂 But note that some DBs won't give you a table name at all (Oracle) and some don't by default (MS SQL Server -- you need to specify an additional connection option, as I recall). And if your SQL computes some value, that won't have a table name (in most cases) -- and if you don't provide an AS alias, you might not like the column name you get.
But basically the default is to "return what JDBC returns".
The reason I encourage that is to prevent your data model from bleeding into your domain model -- or at least to be aware of where the keys are coming from. You generally won't get kebab-case via JSON either, so having clear transformations to and from your data model has benefits.
(for weird historical reasons at work, we have a bunch of somewhat legacy database tables that have camelCase names and camelCase column names too -- so where we see :camelCase/someThing in the code, we can easily spot that it came from "outside")
But I'm confused: I get that you encourage "return what JDBC returns", but you also encourage transformations?
I was thinking of using something like table name user and table column user_name (prefixing the column name with the table); that way columns don't clash, it reduces AS alias use, and you always know which table a column/attribute comes from.
Having table_column as column names in table is an antipattern as far as I'm concerned. Having other_id in table as an FK to the other table (with a PK of id) is the only time prefixing a column name with a table name makes sense, IMO.
The "transformation" I encourage is at a specific layer boundary between your domain model and other models (such as a data model), on the assumption that the structure of your data model is often very different from that of your domain model and may well have completely different names and representations of things. If you don't need a domain model (that is, one different from the persistence model), then it's fine to just let the raw data model entities into your code (and possibly out to client code via APIs -- but, again, each boundary should be designed for its specific purpose, so don't just code an API in terms of your data model because it's "convenient" 🙂 )
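As a concrete illustration of such a boundary (the table and key names here are made up for the example, not taken from any real schema): a single function that maps a data-model row into a domain entity keeps the translation in one place.

```clojure
;; Hypothetical boundary function: data-model row -> domain entity.
;; The input keys mimic what next.jdbc would return for a billing table.
(defn row->billing-item
  [row]
  {:wsbilling/item-name  (:billing_item/item_name row)
   :wsbilling/item-price (:billing_item/unit_price row)})

(row->billing-item {:billing_item/item_name "Widget"
                    :billing_item/unit_price 9.99})
;; => {:wsbilling/item-name "Widget", :wsbilling/item-price 9.99}
```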
At work, we have an Admin app that is pretty much "just CRUD", so it just uses raw hash maps as returned from JDBC, and raw HTML forms that use those names, and there's pretty much no transformation going on. We also have a billing module that has a complex data model with a relatively simpler domain model, so we use :wsbilling/item-name in the domain model, but that maps down onto a dozen or so tables, many with _ in their names, as do many of the columns within them. So we have an explicit transformation from our domain model to our data model (and back). Hope that clarifies?
Yes, thank you. Good food for thought. Since I'm only writing simple applications right now and trying to keep things simple, I try to keep my data model (the database data model) as the primary data model for the whole app, and my app layer is mostly just a data transformation pipeline, as in your admin app example. But I can see that for complex domains with lots of business logic, maintaining another data model at the domain level is required.
Yeah, it all depends on what trade-offs you want to make (and whether a view will actually solve your problem 🙂 )
Looks like Datomic makes one think about this problem less, until you get to outputting JSON.
Just learning all this stuff, so one last doubt if you don't mind: does next.jdbc always return the table names as ns keys, even in queries with joins? (assuming the database drivers support it)
select b.*, q.* from bar b join quux q on b.id = q.id
=> {:b/id 123 :b/foo "Hello" :q/id 123 :q/wibble "World!"}
The table names (in each key) are specific to the column in the result -- per JDBC.
The thing is, I want to dump this into a graph and want to avoid name collisions. A better example of what I want would be:
[{:product/verified true,
  :product/full-name "First Second",
  :product/account-id #uuid "01835ba0-07c0-835d-bd8e-e476b1910459"}]
This might look strange, and it did for me at first -- the idea comes from Thomas (Heller) -- but the more I used it, the more it made sense and the easier it made programming with this model, including working with "flat" ns keys as opposed to entity ns keys.
In this model, verified or any other property can be used on an arbitrary entity, and entities are maps with a something-id key; one could add an entity-type if it gets too confusing, but I haven't ended up there yet.
So yes, I want to have product/<whatever>, and I want the ns to say which product it is before putting this into a graph. Again, this might look unconventional compared to what is currently accepted and might not fit many accepted patterns -- yet I found it really pleasant to work with.
Pretty similar to what @U6GNVEWQG mentioned. That's the main "why". I could get the data out and reduce or update it, but I would prefer to do this in one place instead of every time I put the result into a graph.
I did the same thing with Datomic -- in that case the schema was already constructed this way, so there was no transformation needed -- for one reason or another, I'm doing this with SQL. I hope this helps to clarify why.

@U8A5NMMGD I don't understand what you're asking for, then. You get qualified keywords by default from next.jdbc: :table/column. If you select * from product and you have columns verified, full_name, account_id, you will get exactly what you showed above (modulo _/- which I don't think is a big deal, but you have snake-kebab-opts anyway).
Failing to explain myself 😅 I want :product/column and not :table/column.
@U8A5NMMGD Let's say you have select u.*, a.* from user u join address a on u.id = a.user_id where ... and you get back rows like {:user/first_name "..", :user/id 123, :user/last_name "..", :address/id 456, :address/street "123 Main St", :address/user_id 123, :address/city "Somewhere", :address/country "US"} -- you can't replace all the <table> qualifiers with a fixed value, because :user/id and :address/id would conflict. Can you explain what you would transform that row into?
This is the part where I need to be careful with my naming -- I never have something/id; it's always something/account-id or something/address-id. I know all of this probably sounds strange, and it really did to me as well. I hope that after experimenting with this and living with it, I'll be able to share a few things about it. I took your advice to heart and went with dropping snake-kebab. In the meantime I also read the documentation a couple of times and found this snippet in Getting Started:
;; selects specific keys (as qualified keywords -- ignoring the table name):
user=> (into []
             (map #(select-keys % [:foo/id :bar/product
                                   :quux/unit_price :wibble/unit_count
                                   :blah/customer_id]))
             (jdbc/plan ds ["select * from invoice where customer_id = ?" 100]))
;; or:
user=> (plan/select! ds
                     [:foo/id :bar/product
                      :quux/unit_price :wibble/unit_count
                      :blah/customer_id]
                     ["select * from invoice where customer_id = ?" 100])
which takes me 98% of the way. Thanks a lot for all the points in this thread and discussion :hugging_face:

Ah, good. Yes, that lets you synthesize qualified names from unqualified column labels (which is what plan functions use). In addition, as of next.jdbc 1.3.828, you can specify :column-fn as an option to any plan function, and it will convert the specified key before using it as a column label -- so you could use ->snake_case from CSK, and then :wibble/unit-count would be used as the key in the hash map, but unit_count would be used as the label in the JDBC call... I think that gets you 100%?
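To make the described conversion concrete, here is roughly what a :column-fn like CSK's ->snake_case does to each key before it is used as a column label -- a pure stand-in sketch (the helper name is hypothetical) that doesn't require next.jdbc, CSK, or a database:

```clojure
(require '[clojure.string :as str])

;; Hypothetical stand-in for camel-snake-kebab's ->snake_case applied
;; to a qualified keyword: only the name part becomes the column label;
;; the qualifier is ignored for label purposes.
(defn kebab->snake-label
  [k]
  (str/replace (name k) "-" "_"))

(kebab->snake-label :wibble/unit-count)
;; => "unit_count"
```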
(and if you're using HoneySQL to generate your SQL, you get that for "free" since it automatically drops the qualifier and performs the (basic) kebab -> snake case conversion on table and column names)