2023-02-28
is building a jar the recommended way to "deploy" a clojure "application"? Or is running a -main function with clj the way to go? I am a little confused about the tools.build dependency; can a jar be built without such an external dependency?
You can deploy as source and run the code with clojure (no need to use clj unless you want the interactivity that rlwrap provides).
Much depends on your deployment target. It's often a lot easier to deploy just a single artifact and only require java on the target system -- but it is certainly possible to require/have the Clojure CLI installed on the target system if you want.
tools.build is the core team's library that you can use to build JAR files. The Clojure CLI itself has no ability to build a JAR file.
That understates things 🙂
The application JAR needs all the dependencies in it, as well as your own application code, and you may want things AOT compiled, and you may want a manifest generated so you can do java -jar the-app.jar to run it.
But, yes, there are a lot of options that get you various degrees of "there".
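To make the pieces above concrete, a minimal build.clj using tools.build might look like the sketch below. The paths, the main namespace (my.app.core), and the jar name are made-up placeholders, not from this conversation:

```clojure
;; build.clj -- sketch of an uberjar build with tools.build.
;; Paths, the main namespace, and the jar name are hypothetical.
(ns build
  (:require [clojure.tools.build.api :as b]))

(def class-dir "target/classes")
(def basis (b/create-basis {:project "deps.edn"}))
(def uber-file "target/the-app.jar")

(defn uber [_]
  (b/delete {:path "target"})
  ;; copy source and resources into the classes dir
  (b/copy-dir {:src-dirs ["src" "resources"] :target-dir class-dir})
  ;; AOT-compile so java -jar can find the main class
  (b/compile-clj {:basis basis :src-dirs ["src"] :class-dir class-dir})
  ;; bundle dependencies + code, and generate a manifest with Main-Class
  (b/uber {:class-dir class-dir
           :uber-file uber-file
           :basis basis
           :main 'my.app.core}))
```

With a matching :build alias in deps.edn this would be invoked as clojure -T:build uber, and the result run with java -jar target/the-app.jar.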
In general, the valuable thing you get with a build tool is dependency management. But tools.build doesn't need to do that because tools.deps handles it
I am not saying it covers all cases and that you won't quickly grow out of it, but you absolutely can just zip things up
You're assuming a beginner asking this question knows exactly what pieces need to be in such a JAR and also how to gather together all those pieces that need to be zipped up.
(if this question were in #C03S1KBA2 instead of #C053AK3F9 I'd just agree with you)
My first little clojure program was built with a Makefile that just zipped everything up (jarred everything up, I think I did actually use jar to do it)
Back in the day when you could directly download clojure.jar and run that to get a REPL, I'll bet? 🙂
I would say bundling as a jar is especially easier if you have a bunch of build steps that need to be done with the source before running it. An example would be compiling frontend stuff. The more of these things you need to do, the more dependencies you need to manage on a server. With running via clojure directly, you'd probably want a bare git repo set up as well? Or you go with a CI pipeline, using one of the providers like github/gitlab/circleci and so on. IMO with a jar built locally it's just less moving parts. Plus there are still plenty of things you can do to improve deployments regardless of the above, such as putting a reverse proxy in front (like nginx for example) that either load balances (for rolling deploys) or gracefully provides static or cached responses while you restart your application etc. This is all assuming that you're using some VM/server and not a more specialised cloud product.
I used jsonista to read JSON from MySQL before; after I changed the db from MySQL to PostgreSQL it shows this error:
No implementation of method: :-read-value of protocol: #'jsonista.core/ReadValue found for class: org.postgresql.util.PGobject
Should I add some deps?
What do you use to get data from a database?
Check this out - https://cljdoc.org/d/seancorfield/next.jdbc/1.2.659/doc/getting-started/tips-tricks#working-with-json-and-jsonb
next.jdbc
(jdbc/execute-one! con
  ["select json_col from t_test where user_id=?" user-id]
  {:builder-fn as-caml-maps})
the json_col datatype is json and I use the (json/read-value json_col) function to parse the JSON
then check the link from @U0505RKEL
Is there any other way where I wouldn't have to write a SettableParameter? Or does MySQL handle the JSON version of SettableParameter? Since my program already handles JSON strings with jsonista on its own, I don't want to change too much code
The problem is not related to the actual database. You use (json/read-value json_col) to get a clojure datastructure from it, but the json/read-value function doesn't know what to do with the PGobject that you are getting from the postgres query.
you could extract the value of the PGobject before passing it to json/read-value:
(json/read-value (.getValue json_col))
or you could delegate conversion from json string to clojure data to the next.jdbc library. From my point of view the latter is better because it will give you an abstraction layer that will work (possibly with minor adjustments) with any database supported by next.jdbc
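The delegation approach is sketched on the next.jdbc tips-and-tricks page linked earlier in the thread; roughly, it extends the ReadableColumn protocol for PGobject. The <-pgobject helper name here is illustrative:

```clojure
;; Sketch of delegating PGobject -> Clojure conversion to next.jdbc,
;; adapted from the next.jdbc tips-and-tricks page. <-pgobject is an
;; illustrative helper name, not from this conversation.
(require '[jsonista.core :as json]
         '[next.jdbc.result-set :as rs])
(import '(org.postgresql.util PGobject))

(defn <-pgobject
  "Parse a PGobject's json/jsonb payload into Clojure data; return
  other PG types as their string value."
  [^PGobject v]
  (let [value (.getValue v)]
    (if (#{"json" "jsonb"} (.getType v))
      (some-> value json/read-value)
      value)))

;; After this, json/jsonb columns in query results come back as
;; Clojure data automatically:
(extend-protocol rs/ReadableColumn
  PGobject
  (read-column-by-label [^PGobject v _] (<-pgobject v))
  (read-column-by-index [^PGobject v _2 _3] (<-pgobject v)))
```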
I don't know if mysql has any class that plays the same role as org.postgresql.util.PGobject
There's also this place where I can tell when it's a mysql database: (.setObject i (->mysqlobject m))
(extend-protocol prepare/SettableParameter
  clojure.lang.IPersistentMap
  (set-parameter [m ^PreparedStatement s i]
    (.setObject s i (->pgobject m))))
> I don't know if mysql has any class that plays the same role as org.postgresql.util.PGobject
no, mysql doesn't have json or jsonb data types wrapped like that. So this will affect only responses from postgres
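For completeness, the ->pgobject helper used in the SettableParameter snippet above is, per the next.jdbc tips-and-tricks page linked in the thread, roughly:

```clojure
;; Sketch of the ->pgobject helper referenced above, adapted from the
;; next.jdbc tips-and-tricks page linked earlier in the thread.
(require '[jsonista.core :as json])
(import '(org.postgresql.util PGobject))

(defn ->pgobject
  "Wrap Clojure data in a PGobject so the Postgres driver stores it
  as jsonb."
  [x]
  (doto (PGobject.)
    (.setType "jsonb")
    (.setValue (json/write-value-as-string x))))
```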
Hi!
I need to ensure that my JSON is represented one way so that I can hash the resulting JSON strings. Using cheshire with sorted maps seems to do the trick. But doing a Clojure walk for this seems like overkill. I also suspect that it will negatively impact performance; I'm working with large JSON payloads.
Is there built-in Cheshire functionality that can be used to ensure a canonical encoding of JSON strings? I tried looking for :sort-keys true or something similar.
(require '[cheshire.core :as json]
         '[clojure.walk])

(defn canonicalize-json-str [s]
  (let [parsed (json/parse-string s)
        sorted-maps (clojure.walk/postwalk
                     (fn [x] (if (map? x) (into (sorted-map) x) x))
                     parsed)]
    (json/generate-string sorted-maps)))

(canonicalize-json-str "{\"y\": 1, \"x\": 2}")
;; => "{\"x\":2,\"y\":1}"
https://github.com/DotFox/jsonista.jcs There is a package for jsonista that does that. Maybe you can adjust the code to work with cheshire as well.
Great -- thanks! I could consider using jsonista too -- I've been sticking to Cheshire because I've used it a few times, and it's included in Babashka.
If you've already created a Clojure value from the JSON string, can't you just hash the value itself and not the string? https://clojuredocs.org/clojure.core/hash
That could work.
So far, my plan has been to use sha1sum (or a hash method with less potential for collisions). I'm planning to store JSON files on disk -- so being able to use CLI tools for hashing would be a nice convenience.
myfolder/03cfd743661f07975fa2f1220c5194cbaff48451-input.json
myfolder/03cfd743661f07975fa2f1220c5194cbaff48451-computed-result.json
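The naming scheme above could be wired up with plain CLI tools roughly like this. The myfolder directory and the -input/-computed-result suffixes follow the example file names; everything else is an assumption, and note this hashes the raw bytes, so it only avoids recomputation if the JSON is first canonicalized:

```shell
# Sketch: content-addressed file naming with sha1sum, following the
# example file names above. Hashes raw bytes, so canonicalize first.
json='{"x":1, "y":2}'
h=$(printf '%s' "$json" | sha1sum | awk '{print $1}')
mkdir -p myfolder
printf '%s' "$json" > "myfolder/${h}-input.json"
if [ -f "myfolder/${h}-computed-result.json" ]; then
  cat "myfolder/${h}-computed-result.json"    # cache hit: reuse result
else
  echo "no cached result for ${h}"            # cache miss: run ./compute
fi
```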
who is generating the data for those files?
I'm running a binary that takes JSON on stdin and gives json on stdout. Simplified:
$ echo '{"x":1, "y":2}' | ./compute
{"sum": 3}
In reality, the JSON files can get big (50 MB), and the ./compute execution time could take minutes. So I want to keep track of which inputs have actually been executed.
got it. If you don't need anything else except your program to verify or recompute the hash, then using hash or anything else that can compute a hash from a clojure datastructure is a good idea, imho
canonicalisation usually means you have a third-party that can't use the same hashing function
My motivation for canonicalization is to avoid recomputation: so that if someone first evaluates {"x":1, "y":2}, then later evaluates {"x":1, "y":2}, the second analysis is instant.
Specifically, ./compute is a https://en.wikipedia.org/wiki/Finite_element_method solver, and it's very nice for the user of the system to get instant feedback when possible.
I'm assuming your computation is significantly more expensive than reading json->clojure. In that case, just before starting the computation you can read the json, compute a hash from the result, and decide whether it is necessary to start the computation or not
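The check described here could be sketched like this. The cache directory layout follows the example file names earlier in the thread; run-compute! and the function name are hypothetical:

```clojure
;; Sketch: hash parsed JSON before deciding to run the solver.
;; cache-dir layout and run-compute! are hypothetical placeholders.
(require '[cheshire.core :as json]
         '[clojure.java.io :as io])

(defn compute-if-needed! [cache-dir json-str run-compute!]
  (let [h (hash (json/parse-string json-str)) ; hash of Clojure data,
                                              ; so key order is irrelevant
        result-file (io/file cache-dir (str h "-computed-result.json"))]
    (if (.exists result-file)
      (slurp result-file)                     ; cache hit: instant
      (let [result (run-compute! json-str)]   ; cache miss: run solver
        (spit result-file result)
        result))))
```

Note the caveat raised just below: clojure.core/hash is only stable within a single Clojure version, so this cache shouldn't be expected to survive a Clojure upgrade.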
yup, that's the fast part. There's some O(n^2) / O(n^3) (where n is json size) / slower stuff going on in ./compute.
> using hash or anything else that can compute a hash from a clojure datastructure is a good idea
keep in mind that Clojure hash values are not guaranteed to be consistent across Clojure versions though
https://github.com/arachne-framework/valuehash here is a good library I have been using a lot in prod