#beginners
2023-01-11
Glenn Lewis00:01:11

I've got a JSON Schema file and would like to use https://github.com/metosin/malli#plantuml to create a PlantUML diagram of it, but know almost zero about Clojure. I believe I've installed Clojure properly but haven't figured out yet how to install Malli (and my require statement fails in the lein repl). Do I need to take some Clojure "Getting Started" tutorials before I'm able to run Malli, or is there a quicker way to run Malli? Oh, and I'm reading through the Manning "Data Oriented Programming" book which is how I found out about Malli. EDIT: Ah! I found clj -Minstall and will try to make some more progress. Unfortunately, that failed. šŸ˜ž

Glenn Lewis00:01:43

Here are the errors from clj -Minstall:

Installing metosin/malli-0.9.2 to your local `.m2`
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See  for further details.
Execution error (FileNotFoundException) at java.io.FileInputStream/open0 (FileInputStream.java:-2).
malli.jar (No such file or directory)

Full report at:
/tmp/clojure-12625880767134676117.edn

hiredman00:01:48

in general the workflow with most clojure tooling is project based, and your project will have a file (project.clj, deps.edn, depending on your tool) which declares what libraries you depend on (like malli)
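For reference, a minimal deps.edn for the clj workflow might look something like this (the malli version is just the one mentioned above, and the Clojure version is an assumption):

;; deps.edn at the project root
{:deps {org.clojure/clojure {:mvn/version "1.11.1"}
        metosin/malli       {:mvn/version "0.9.2"}}}

With that file in place, a REPL started with clj from the project directory can (require '[malli.core :as m]) directly.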

hiredman00:01:24

and when you run your tool of choice (lein and clj are different tools) it will make sure the libraries you want are present and available

Glenn Lewis00:01:35

Ah, I thought lein was a requirement for all Clojure. Do you recommend one tool over the other (for newbies)? Thank you, @U0NCTKEV8!

hiredman01:01:33

you may also want to double check your clj command, since clj -Minstall doesn't seem to be something that would do anything with the official clj command. you might have a different script also named clj, or might have an old version of the clj command (both of those have been shipped by certain package managers at one time or another)

Glenn Lewis01:01:23

$ which clj
/usr/local/bin/clj
$ clj --version
Clojure CLI version 1.11.1.1208

hiredman01:01:12

I typically don't use lein for my projects anymore, plenty still do though

šŸ‘ 2
hiredman01:01:47

https://clojure.org/guides/deps_and_cli is the getting started guide for the clj command, https://clojure.org/reference/deps_and_cli is the more in depth reference

hiredman01:01:40

in general when you add a library to deps.edn or project.clj you need to restart your clojure process to make the new library available.

hiredman01:01:27

both lein and clj can be made to allow for adding libraries without restarting, but it is a more complicated setup, and has various caveats

Glenn Lewis01:01:31

Awesome. Thank you, @U0NCTKEV8!

skylize01:01:14

For many years, if you wanted sane project management, you probably used Lein. Eventually the Core team invested time into tools.deps to handle dependency management free of concern for project build requirements. They followed that up with tools.build to provide basic unopinionated infrastructure for flexibly designing build pipelines. And both of those were added with the clojure and clj scripts as part of a typical Clojure installation. Since then the tide has moved heavily in the direction of choosing Clojure CLI Tools instead of Leiningen for new projects. But many people definitely still swear by Lein, and I doubt many people bother trying to migrate older projects. I compared them here: https://clojurians.slack.com/archives/C053AK3F9/p1668909801841079?thread_ts=1668859154.934619&cid=C053AK3F9

Glenn Lewis01:01:15

Cool.... thank you, @U90R0EPHA!

ā˜ŗļø 2
Glenn Lewis01:01:46

@U90R0EPHA - when you said "at least until you finish the book"... sorry, but I lost what book you were talking about. Which book? - oh, sorry, I get it... the book you are currently reading that talks about using "lein". Thanks.

Glenn Lewis01:01:13

Does anyone happen to know how to parse a JSON Schema file (in JSON format) into Clojure so that Malli can operate on it? I read through https://github.com/metosin/malli#json-schema but am a bit lost. Any tips would be greatly appreciated!
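(For the "get the JSON into Clojure" part, here is a minimal sketch using org.clojure/data.json. This only parses the file into Clojure data; it does not produce a Malli schema, and the file name is made up:)

;; assumes org.clojure/data.json is declared as a dependency
(require '[clojure.data.json :as json])

;; read the JSON Schema file into plain Clojure maps with keyword keys
(def schema-data
  (json/read-str (slurp "my-schema.json") :key-fn keyword))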

Glenn Lewis02:01:46

Ah, thank you, @U04HK53G7SQ - I'll try that out.

moe02:01:36

IIRC PlantUML doesn't support JSON Schema ā€” does Malli have functionality to convert a JSON schema into some other kind of schema that PlantUML can deal with?

Glenn Lewis02:01:24

Yes, that's my understanding. I found out about Malli from the "Data Oriented Programming" book by Manning.

Glenn Lewis02:01:35

But it doesn't explain how to use it. šŸ˜ž
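(For the Malli-schema → PlantUML direction, the README's PlantUML section shows roughly the following. Note that it starts from a Malli schema, not from JSON Schema, and the exact API should be checked against the malli version you install:)

(require '[malli.plantuml :as plantuml])

;; takes a Malli schema and returns PlantUML source as a string
(println
 (plantuml/transform
  [:map
   [:id :int]
   [:name :string]]))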

Ben Sless04:01:36

You should also come in to the #CLDK6MFMK channel. There isn't anything to read JSON Schema into malli yet, but I have a proof-of-concept implementation

Glenn Lewis04:01:15

Awesome... thank you, @UK0810AQ2!

Mor Gazith09:01:12

Hey community, exceptions thrown in my app are being shadowed by a different exception:

Attempting to call unbound fn: #'clojure.core.async.impl.ioc-macros/add-exception-frame
It seems that the attempt (by clojure.core.async.impl.ioc-macros/Try) to add an exception frame to the thrown exception itself throws due to an unbound function (not sure what that means), which doesn't let me see the exception that my app throws. Any idea what causes this and how it can be fixed?

Ferdinand Beyer10:01:18

It seems you are using core.asyncā€™s go macro in an unsupported way. Maybe a recur that jumps out of a try?

Mor Gazith11:01:29

it will return an exception on the go channel

kennytilton12:01:49

Looking at the full.async source I do not see it directly using clojure.core.async.impl.ioc-macros/add-exception-frame. I also note that that might well be a macro, given the path. Macros should be compiled away at run time, but that is my Common Lisp speaking. All this makes me wonder if there is a build issue, an issue with full.async, or egregious misuse thereof. Got some code we can see?

kennytilton13:01:58

I note also that full.async has not been touched in three years, and has a [org.clojure/core.async "0.7.559"] dependency. core.async is up to 1.6.673. Which version of core.async do you have as a dependency? A mismatch could be a problem. Mind you, I know nothing about full.async, just hauling in the usual suspects.

hiredman13:01:38

full.async is an aot compiled library, so it is trying to use the version of core.async it was aot compiled against regardless of which version of core.async you have in your project

hiredman13:01:25

(it is an old project that is not packaged well)

hiredman13:01:07

You can see the single issue open on the project https://github.com/fullcontact/full.async/issues/4

Mor Gazith13:01:23

hmmā€¦ ok, thanks for the tipsā€¦ Iā€™ll see if I can find a solution based on this direction

lread14:01:14

@U0NCTKEV8 or others, do you know of a good link that talks about the effects of aot compiling libs and that advises against doing so?

lread15:01:53

@U064X3EF3 do you think some guidance on (not) aot compiling for libraries would make sense on this page: https://clojure.org/reference/compilation? I can take a first crack at it, if you think it is a good idea.

Alex Miller (Clojure team)15:01:38

as a reference page, I don't think that's the place for that kind of guidance

Alex Miller (Clojure team)15:01:12

but I think having that guidance somewhere would be good, so would welcome a clojure-site issue

lread15:01:18

I see, maybe a new article under "guides". I'll start with an issue.

Alex Miller (Clojure team)15:01:43

some existing places where it might be useful to say things are https://clojure.org/guides/tools_build (which pointedly does not have a compiled lib example but doesn't really say why) and https://clojure.org/guides/dev_startup_time (which is about a different problem, but may be a useful place to say it)

Alex Miller (Clojure team)15:01:50

what doesn't exist at all on the clojure site is a page about deployment, which is kind of a hairy topic, but way underserved and would be a good place for this advice

lread16:01:38

Based on my current understanding, which is certainly incomplete!

respatialized15:01:13

I'm trying to make my -main function more REPL friendly for debugging purposes. I have some let-bound variables that take a long time to load, and I'm wondering about replacing them with def or defonce. Is it ā€œbad formā€ to just use def in the body of the function? Should I declare them at the top of the namespace and then def them in the body of -main?

Ben Sless19:01:23

In those situations I usually build up a context map instead of a let binding, similar to tools.build. Each step assocs its result onto the map; then you can debug it step by step without creating extra state.

(def step1 (f1 opts))
(def step2 (f2 step1))
,,,

4
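A tiny sketch of that context-map style (the step names and the deps.edn input file are just for illustration):

;; each step takes the context map and assocs its result onto it
(defn load-step  [ctx] (assoc ctx :raw (slurp (:input ctx))))
(defn parse-step [ctx] (assoc ctx :parsed (count (:raw ctx))))  ; stand-in for real parsing

;; at the REPL, capture each intermediate map with def and inspect it
(def ctx1 (load-step {:input "deps.edn"}))
(def ctx2 (parse-step ctx1))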
practicalli-johnny12:01:13

Please avoid using def inside a function; a def is for global scope, let is for local. Do you need to go through the -main function each time when interacting with the REPL? Sounds like there is an opportunity to break the main down, so individual parts can be more readily evaluated. I assume def is wanted because it will cache the value and avoid re-evaluation of a long-running process, so the suggestion by Ben should suffice. defonce should only be needed if you are regularly reloading the namespace and the def shouldn't be re-evaluated. If changes are being made to the code, try re-evaluating only those changes

Hendrik15:01:24

You could refactor the main function and put its core functionality in a new function. Similar to this:

(defn my-fn [a b] 42)

(defn -main [& args]
  (let [slow1 (get-slow1)
        slow2 (get-slow2)]
    (my-fn slow1 slow2)))
Then you can use and test the body of the main function like this:
(comment
  (def slow1 (get-slow1))
  (def slow2 (get-slow2))

  (my-fn slow1 slow2))

respatialized15:01:07

I could, but now I have to worry about what are effectively two copies of my -main function instead of just evaling parts of my -main form by form. Iā€™d like to avoid that.

Hendrik15:01:31

Why would you have two copies? Or what are your concerns?

skylize15:01:38

My experience is certainly lacking enough to mean I could be way off, but... I would think ideal repl-friendliness would have -main serving only as a super-thin CLI client speaking to an API. So -main would have only the code needed to map cli-args and env-vars into function calls: functions which you can just eval yourself when working in the repl. If following that model, I think you would expect to mostly think about state and/or slow-load components in regards to API usage; instead of focusing on the particular client (the -main function) that you are using to speak with that API.

respatialized16:01:50

My problem context is stages of a data processing pipeline, not components of an application. I want a UX that supports iterative development in a manner similar to building up an R or Python script. A comment block replicating the various functions I call doubles the chance of a mismatch between ā€œtestā€ and ā€œproductionā€ for every single form. Also, dvc, the tool I am using, detects that a pipeline stage should be re-run every time its source file changes, which means edits to a comment block will trigger potentially unnecessary re-runs of expensive computation.

teodorlu16:01:05

Potentially a good use case for #clerk - then you can def a long running computation, and clerk will cache it as long as you only edit "downstream" code. I'm not sure about how to do your cli, though. Perhaps just have a different clerk namespace with defs which use functions from somewhere else. Then call those functions from -main too. https://clerk.vision/
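A rough sketch of that idea (assumes an io.github.nextjournal/clerk dependency; the serve!/show! calls are from Clerk's docs, but verify them against the version you use):

(ns pipeline.notebook
  (:require [nextjournal.clerk :as clerk]))

(defn expensive-computation []
  (Thread/sleep 5000)   ; stand-in for the real long-running step
  :done)

;; Clerk caches top-level forms, so this slow def is not re-run
;; unless the form (or code it depends on) changes.
(def slow-result (expensive-computation))

(comment
  ;; start Clerk and render this namespace as a notebook
  (clerk/serve! {:browse? true})
  (clerk/show! "notebooks/pipeline/notebook.clj"))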

phill16:01:51

I sometimes define a -main (command line entrypoint), which calls main* (REPL entrypoint) and then calls System/exit. In turn, main* might call other almost-main functions, among which I have divided the program's steps to help with targeted REPL invocations.

phill16:01:15

If your "main-ish" functions work with (and contribute to) a global state, you can put that global state into an atom. -main creates the atom and passes it to main*. At the REPL, you would def the atom in a comment block and then hand it to main*.

respatialized17:01:37

I appreciate the suggestions, but the ideas offered here seem more complicated than just using declare and def. I think that I will continue to do that, even if it's not idiomatic in other contexts.

respatialized17:01:03

This namespace isn't intended to be required from any others; it requires data-processing functions from elsewhere and isn't part of any kind of public API; it's basic clojure -M -m batch-job style invocation. It's data in, data out. So a def inside a main block doesn't seem to have as many downsides as it would in other areas.

jcb21:01:17

I'm trying to use filterv with multiple filters on an array of maps (originally json). The function is (filterv #(string/includes? (string/lower-case (:title %)) (string/lower-case string)) data) - data is a single map from the array. This seems to work well enough. The string is derived from a text input and is searching the value of a single key. I would like to run this same query against another key in the same collection and add them to the resulting map if any of them are true, while hopefully maintaining the structure of the data (hence filterv rather than filter), but I canā€™t get my head around how to go about it? I can join the values of the two keys into a string within the includes?

(filterv #(string/includes? (str (string/lower-case (:title %)) (string/lower-case (:sub-title %))) (string/lower-case string))
but this feels a bit flimsy.

rolt22:01:39

if you write it this way you can search any number of fields

(filterv (fn [m] (some (fn [k] (str/includes? (str/lower-case (k m)) (str/lower-case string))) [:title :sub-title])) data)
(and use named functions instead of the anonymous ones to make the code clearer, like (defn search [s query] (str/includes? (str/lower-case s) (str/lower-case query))) for instance)

rolt22:01:13

if you only have two fields:

(filterv (fn [{:keys [title sub-title]}] (or (search title string) (search sub-title string))) data)
may be clearer

skylize00:01:26

> (hence filterv rather than filter), filterv is essentially short-hand for (into [] (filter pred my-collection)). Unless you specifically need to ensure you have a vector, or specifically need to ensure a lazy calculation is immediately realized, there is no reason to potentially add an extra pass over the sequence for transforming the result into a vector. --- As for your question: (Looks like I'm somewhat duplicating rolt's comments. I think that's good, though.) For starters, I would break that includes? into a named function, since it's slightly verbose already, and you intend to repeat yourself.

; idiomatic to name this `str`, which is conveniently shorter
(require '[clojure.string :as str])

(defn search-str [string query]
  (str/includes? (str/lower-case string) (str/lower-case query)))
Which makes your original test much easier to read and expand on
(def data [{:title "FOO" :subtitle "Bar"}
           {:title "foo" :subtitle "Car"}
           {:title "woo" :subtitle "Bar"}])

(let [title-query "foo"]
  (filter #(search-str (:title %) title-query) data))
; => 
({:title "FOO", :subtitle "Bar"}
 {:title "foo", :subtitle "Car"})
Then the default way to add another filter is to just send the result of one filter into the next...
(let [title-query "foo"
      subtitle-query "bar"]
  (filter #(search-str (:subtitle %) subtitle-query)
   (filter #(search-str (:title %) title-query) data)))
; =>
({:title "FOO", :subtitle "Bar"})
...except we probably want to use a threading macro to make that easier to read. (I added ,, to indicate to you where the macro inserts the sequence into the function calls.)
(let [title-query "foo"
      subtitle-query "bar"]
  (->> data
       (filter #(search-str (:subtitle %) subtitle-query) ,,)
       (filter #(search-str (:title %) title-query) ,,)))
Usually, that should be enough for the general case. But if you need to minimize iterations over the sequence for performance, you can either use transducers, or find a way to combine operations into a single function. In this case, Clojure actually includes a helper for combining predicates.
(let [title-query "foo"
      subtitle-query "bar"]
  (filter (every-pred
           #(search-str (:subtitle %) subtitle-query)
           #(search-str (:title %) title-query))
          data))

Ed12:01:45

another way to write that, and end up with a vector could be something like

(let [title-query    "foo"
      subtitle-query "bar"]
  (->> data
       (into [] (comp (filter #(search-str (:title %) title-query))
                      (filter #(search-str (:subtitle %) subtitle-query))))))
which uses comp to combine two transducers built using filter, and into to populate a vector, passing data through the transducer.