
There’s a function called something like cider-repl-previous-history or similar. If you try M-x, it should display functions matching what you type, with the keyboard shortcut next to each, I believe.


Thank you! Yep, for me it is M-p. Much obliged @dpsutton


Hello experts... I can't seem to figure out a multi-file project structure for ClojureScript. I'm using Leiningen + figwheel + reagent. If I put everything in src/project_ui/core.cljs, all works fine. What's the best way to create src/project_ui/svg-images.cljs and include it in core.cljs? Does project.clj need modification?


@taimoorbhatti no, if you don't change your source path.


you can require it in your core.cljs


@taimoorbhatti you may be running into the dash-vs-underscore convention used for file names in Clojure/CLJS. Because Clojure was originally written to run on the JVM, it respects Java rules for file and directory names, so any namespace segment that has a dash in it gets translated to a file/directory name with an underscore. CLJS follows the same rules to stay compatible with Clojure, so if you have a namespace called svg-images, the corresponding filename will be svg_images.cljs. You'll still use the dashed name in code, though: (ns project-ui.core (:require [project-ui.svg-images])).
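A quick way to see the translation (namespace names taken from the thread above): clojure.core/munge applies the same character mapping the compiler uses for file and class names.

```clojure
;; The namespace project-ui.svg-images must live in the file
;; src/project_ui/svg_images.cljs — dashes become underscores,
;; dots become directory separators. munge shows the translation:
(munge "project-ui.svg-images")
;; => "project_ui.svg_images"

;; And in core.cljs, note the dot between segments, not a slash:
;; (ns project-ui.core
;;   (:require [project-ui.svg-images :as svg]))
```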


hmmm... I'll check that!


This was it!! Noob mistake on my side :man-facepalming:


in fact, my single file had grown to an ugly size... Can you point me to what ClojureScript has for modules/components? (like Python modules, or Rust crates?)


I don't think there's an exact equivalent in Clojure/CLJS, due to its Java-land origins. We use namespaces to organize code, and then require namespaces as needed, sorta kinda like modules. When I was first learning Clojure, I was coming from an object-oriented background, and I thought to myself that Clojure namespaces were kind of like having OOP-style objects that only had class methods and no data. So when I required a namespace and called functions from the namespace, it was like importing a class and calling class methods on the class. I haven't thought about it in those terms for a very long time, mostly because I got used to Clojure's functional style and didn't need to any more.


Not sure if that really applies to your situation, but I thought I'd mention it in case it was helpful.


@taimoorbhatti ^^


hmm... this was helpful in understanding ... Thanks a bunch for the elaboration


I don’t want to use this command this way; I would just like to understand why it doesn't work: ls | clj -e "(println *command-line-args*)" * ... whereas if I drop it into a file like this:

#!/bin/sh
#_(
   #_DEPS is same format as deps.edn. Multiline is okay.
   DEPS='
   {:deps {clj-time {:mvn/version "0.14.2"}}}
   '
   #_You can put other options here
   OPTS='
   -J-Xms256m -J-Xmx256m -J-client
   '
   exec clojure $OPTS -Sdeps "$DEPS" "$0" "$@"
)
(println "Hello test!")
(prn *command-line-args*)
(via Eric Normand) ... then it works fine


{:clojure.main/message
 "Syntax error reading source at (\nNo reader function for tag Change\n",
 :clojure.main/triage
 {:clojure.error/phase :read-source,
  :clojure.error/line 2,
  :clojure.error/column 0,
  :clojure.error/source "",
  :clojure.error/path "",
  :clojure.error/cause "No reader function for tag Change"},
 :clojure.main/trace
 {:via
  [{:type clojure.lang.Compiler$CompilerException,
    :message "Syntax error reading source at (/Users/sb/Google Drive/know-how/codes/terminator/",
    :data {:clojure.error/phase :read-source,
           :clojure.error/line 2,
           :clojure.error/column 0,
           :clojure.error/source "/Users/sb/Google Drive/know-how/codes/terminator/"},
    :at [clojure.lang.Compiler load "" 7643]}
   {:type java.lang.RuntimeException,
    :message "No reader function for tag Change",
    :at [clojure.lang.LispReader$CtorReader invoke "" 1427]}],
  :trace
  [[clojure.lang.LispReader$CtorReader invoke "" 1427]
   [clojure.lang.LispReader read "" 285]
   [clojure.lang.LispReader read "" 216]
   [clojure.lang.Compiler load "" 7630]
   [clojure.lang.Compiler loadFile "" 7574]
   [clojure.main$load_script invokeStatic "main.clj" 475]
   [clojure.main$script_opt invokeStatic "main.clj" 535]
   [clojure.main$script_opt invoke "main.clj" 530]
   [clojure.main$main invokeStatic "main.clj" 664]
   [clojure.main$main doInvoke "main.clj" 616]
   [clojure.lang.RestFn applyTo "" 137]
   [clojure.lang.Var applyTo "" 705]
   [clojure.main main "" 40]],
  :cause "No reader function for tag Change",
  :phase :read-source}}



(LICENSE doc hello new.txt project.clj resources src target test test.txt)
Syntax error reading source at (
No reader function for tag Change

Full report at:


what makes you think those two are somehow the same?


Yes, I thought the two solutions were about the same. With the first version I got the right response plus an error; with the second version, just the response. I don't understand what the difference is.


pipe pipes to stdin, not args, and your shell will expand that "*" at the end to all the files in the current directory
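A minimal sketch of that distinction: piped data arrives on *in* (stdin), while only real command-line arguments show up in *command-line-args*. Here stdin is simulated with a StringReader so the snippet is self-contained.

```clojure
(defn read-stdin-lines
  "Read all lines from *in*, the way a script would consume piped input."
  []
  (doall (line-seq (java.io.BufferedReader. *in*))))

;; Simulate `ls | clj script.clj` by rebinding *in* to canned output:
(binding [*in* (java.io.StringReader. "LICENSE\nsrc\ntest\n")]
  (read-stdin-lines))
;; => ("LICENSE" "src" "test")
```

Running `ls | clj script.clj foo` would leave *command-line-args* as ("foo") — the ls output is only reachable via *in*.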


ahh thanks, maybe Im tired.. sorry.


Is there a way to compile namespaces? For example I need to define functions which are the result of some computation, how can I expose the namespace if I computed it ahead of time?


Clojure can be AOT-compiled (Ahead-Of-Time, instead of on-demand) but I'm not sure that's really what you're asking @neo2551?


Can you be a bit more specific about what you're trying to do?


Are you trying to avoid recomputing a complex/expensive expression? i.e., caching the computation?


Yep, I am trying to cache the computation (I am creating a wrapper around R functions, and I collect information about their signatures and documentation, and it takes time to gather that for all the libraries).


For example if a namespace has a function (def f (delay (fn [y] (+ y x)) 100))


So it is possible in Clojure to create functions in memory at run time, e.g. by calling eval on expressions like (defn foo [x y ] ...), without those function definitions ever needing to be written to a file. Do you want something that must write the function definitions to a file, then read them in from the files?


Or is something that never needs to touch the file system useful?


evaling (defn ...) expressions does compile those functions to JVM byte code in the Java process's memory. They are not interpreted.
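A tiny illustration of that point (the function name here is made up):

```clojure
;; eval of a defn form produces a compiled fn, not an interpreted one
(eval '(defn add2 [x] (+ x 2)))

(add2 40)
;; => 42

;; the resulting object is an instance of a compiled class
(class add2)
```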


And there are ways to create functions at run time that don't even use eval IIRC


I would be interested by this solution " Do you want something that must write the function definitions to a file, then read them in from the files?"


The workflow I imagine is I would create a library


and people would load the library


If you want to write the text of the code, with an ns declaration and a unique namespace name, out to the file system, then load that code, require can do that.


I would like to avoid making every user wait for the library to load


A run-time require and/or requiring-resolve can be used just before calling a function in the namespace.


They will still have to wait for the require when they first call such a function.


I think it would be similar to this problem: suppose you want to add docstrings to your functions, but you need to make an HTTP call for each docstring. Is there a way to fetch all the docstrings from the net first and use the cached version? Maybe I will try to read your solutions and see if that works.


If your library provides 20 different functions, and you want none of them to be loaded until they are first called, the most obvious, although perhaps 'clunky', way of doing it would be to create 20 'wrapper' functions in a namespace that is loaded, each of which simply does require if it hasn't already been done, then pass the call on to the real function definition.
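A sketch of such a wrapper using requiring-resolve, with clojure.set standing in for the expensive generated namespace:

```clojure
(defn lazy-union
  "Loads clojure.set on first call, then delegates to union.
   A real wrapper would point at the generated namespace instead."
  [& sets]
  (apply (requiring-resolve 'clojure.set/union) sets))

(lazy-union #{1 2} #{2 3})
;; => #{1 2 3}
```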


that approach isn't going to work for docstrings


you can delay loading the function until it is invoked, but docstrings are in the metadata and there isn't really a place to hook in to delay loading that


There isn't a way to make (doc fn-name) load it via HTTP, unless you monkey-patch the definition of doc, of course.


I am (seemingly in the minority about this) partial to code generation for that kind of thing


Let's say I attach the "docstring" to the metadata and create a function that will query that meta data


e.g. I would write some code to do all the expensive stuff and then have it generate clojure code to be used


and save that to a file


This is exactly what I want


require it when needed


so just do that


So, let's take this problem to a higher level


what if the definition of the function is actually loaded?


I think you are misunderstanding


Specifically for the purpose of doc strings, it is also possible in Clojure to alter a Var's doc string even after the function has been defined, in memory. Not sure if that is better or worse than other ideas, but certainly is possible.
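For example (function name and doc text are hypothetical):

```clojure
(defn greet "placeholder doc" [] "hi")

;; swap in a doc string obtained later, e.g. fetched over HTTP
(alter-meta! #'greet assoc :doc "doc fetched at run time")

(:doc (meta #'greet))
;; => "doc fetched at run time"
```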


Just as it is possible to alter the value of a Var whose current value is a function, to be the value of a different function.


you have some code, it isn't something you normally load; somewhere off to the side it looks like

(doseq [endpoint (list-end-points)] (prn `(defn ~(:name endpoint) [~@(:args endpoint)] ~whatever)))


you run that code and direct the output of that code to a file


then you forget that code exists


use the code in the file
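A self-contained sketch of that workflow, with hypothetical endpoint data and namespace name:

```clojure
(def endpoints
  "Stand-in for whatever expensive discovery produces the real list."
  [{:name 'get-repos    :args '[user]}
   {:name 'get-branches :args '[user repo]}])

(defn gen-source
  "Generate the text of a namespace defining one fn per endpoint."
  []
  (with-out-str
    (prn '(ns generated.api))
    (doseq [{:keys [name args]} endpoints]
      (prn `(defn ~name [~@args]
              ;; a real body would call the remote API here
              {:endpoint '~name :args ~args})))))

;; (spit "src/generated/api.clj" (gen-source))
;; ...then forget the generator exists and just require generated.api.
```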


Ok that might work.


What I have been doing is a macro that actually evaluates the function; I never thought of using syntax quote to print the function definition.


it isn't a macro, just using syntax quote


which you don't have to do, it is just nicer


It is not exactly what I was looking for, but I think this could solve my problem!


thanks 🙂


thanks I think it works 🙂


Is there a clojure core function that implements something like (->> coll (filter fn) (first)) or is it considered trivial enough to use that? In tools I'm used to, that's called find.


I am pretty sure it is not in Clojure core, although it is in multiple 'useful utilities' add-on libraries.


some is similar to that, but not exactly


user=> (some #{:foo} [:bar :foo :baz])
:foo
user=> (some #(= % :foo) [:bar :foo :baz])
true


Oooh that could work! Thanks.


medley is one of the smaller, more focused such add-on libraries, and calls it find-first:


We could combine this with where… Code would look like:

(find-first (where = :foo) coll)

or even

(find-first (where :id = "1") coll)

when referencing a map. This is easy to implement yourself in case you don’t want to bring in the whole lib.
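If you'd rather not pull in medley, a version is a one-liner (where is the speaker's hypothetical helper, so a plain predicate is used here):

```clojure
(defn find-first
  "First element of coll matching pred, or nil if none does."
  [pred coll]
  (first (filter pred coll)))

(find-first #(= (:id %) "1") [{:id "2"} {:id "1" :name "a"}])
;; => {:id "1", :name "a"}
```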


Thanks, I'll give that a shot.


Question about protocols/web app design. Let's say you have a 3rd-party REST API you are integrating with in your web app, think GitHub. Would you design a protocol with methods for each endpoint, or would you just use functions? e.g.

(defprotocol GithubClient
  (get-branches [this ...])
  (get-repos [this ...]))

versus plain functions:

(defn get-branches [] ...)

(defn get-repos [] ...)

The argument I hear for protocols is that they can be mocked, so testing becomes more "straightforward"


Well, you can mock a number of different ways so I wouldn't use a protocol unless you either a) know you will have multiple implementations in your application or b) have a need for polymorphism and have a performance constraint.


When I'm wrapping a 3rd party REST API I tend to write a single low-level function that can do the actual HTTP interaction with the service and then write the other functions in terms of that. Then you only need to mock one function to mock all operations -- via with-redefs for example.
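A sketch of that shape, with hypothetical names; only api-request needs redefining to mock everything built on it:

```clojure
(defn api-request
  "Single low-level entry point; the real version would do the HTTP call."
  [endpoint]
  (throw (ex-info "no network in this sketch" {:endpoint endpoint})))

(defn get-repos [] (api-request "/user/repos"))
(defn get-branches [repo] (api-request (str "/repos/" repo "/branches")))

;; In a test, mock the one low-level function:
(with-redefs [api-request (fn [endpoint] {:mock endpoint})]
  (get-repos))
;; => {:mock "/user/repos"}
```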


Although for some of the APIs we use, there's some sense of setup (& teardown) so having a Component (Stuart Sierra's library) for that low-level function can be a good way to go (and then passing it into the wrapper functions -- so it's easy to create a "test" version of the component and pass that in instead).


RE: using protocols
> if you know you will have multiple implementations
Do you mean the protocol could be generalized to also talk to Bitbucket or GitLab? (sorry for using a specific example here, it just helps me to use something tangible)
> have a need for polymorphism
How do you mean?
RE: low-level function for HTTP interaction
Sorry, what would be part of this? You mean something like

(defn github-api-request [endpoint]
  ;; auth info
  ;; base urls
  ;; other info needed
  ;; make request
  )

and then

(defn get-repos []
  (github-api-request endpoint))

(apologies for the terrible pseudocode)

RE: structuring code
In general, two things feel clearer to me now: first, structuring code well makes with-redefs a fine option; second, we don't need to have all endpoints specified in a protocol. Rather, we could make github-api-request the protocol/component and pass that into each respective wrapper fn. The benefit here is that the protocol's methods won't grow with the API itself. Would that be fair?


Hard to answer when you ask multiple questions in one big block of text.


Using GitHub as an API example means you're not likely to have multiple implementations.


Polymorphism. Again, relates to multiple implementations behind one interface/protocol.


Low-level function. I was speaking generally but, yes, having an "api request" function would work. For some of ours we have "api get" and "api post" just because that makes it easier to write the code -- and it would also allow you to mock just posts so you could use the real API in readonly mode if it made sense for your tests.


LMK if that does (or doesn't!) answer your questions 🙂


Sorry! I will break those up better in the future 🙂 Yes, I believe that answers my questions, but just so I understand the multiple implementations comment: What is a scenario you have seen where you have had an API with multiple implementations?


I haven't yet. Which is why I haven't needed protocols for this.


That's kind of my whole point.


If you aren't going to have multiple implementations, you almost certainly don't need protocols.


haha amazing. I was going crazy racking my brain for a reason why an API protocol could have multiple implementations.


Oh yes. A great read. My questions come from some projects I've worked on where everything became a protocol because it made testing more "straightforward", and that felt like a questionable use of protocols.


But, yeah, if GitHub and GitLab both implemented the "same" API at a suitable level of abstraction and you wanted to interact with both of them, a protocol might make sense. Now, in that case, you probably would have a higher-level protocol and more functions in it, and you would have to mock several functions to write tests for that which didn't actually touch the real APIs.


Aye, I think protocols just for production vs test implementation is probably a bad idea in general...


...there may be other Clojurians who disagree 🙂


Indeed. If you feel up to it, what is one of the reasons you feel it can be a questionable approach?


"it" = using a protocol for just one production implementation and potentially a testing implementation?


A protocol requires some object to dispatch off, so adding a protocol means that you are forcing the API to follow an argument pattern where some discriminating object is always passed as the first argument.


So you are then forced into creating an actual object to identify "production" (and another to identity "test").


If the (underlying) API doesn't naturally have some common object that needs to be passed as the first argument then you are making the (high-level, protocol) API follow an artificial pattern.


Often, it means introducing record types -- for no other reason than allowing the protocol to dispatch -- and then using those where a perfectly good hash map would have been sufficient (if it was even necessary to pass it).
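For concreteness, the pattern being criticized looks something like this (all names hypothetical): the records exist only to give the protocol something to dispatch on.

```clojure
(defprotocol ApiClient
  (fetch [client endpoint]))

;; record whose only job is to mark "production"
(defrecord ProdClient []
  ApiClient
  (fetch [_ endpoint] {:from :prod :endpoint endpoint}))

;; and another just to mark "test"
(defrecord TestClient []
  ApiClient
  (fetch [_ endpoint] {:from :test :endpoint endpoint}))

;; every call site must now thread a client object through:
(fetch (->TestClient) "/repos")
;; => {:from :test, :endpoint "/repos"}
```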


What would the downside of following this artificial pattern (creating the "fake" dispatch object) be? This is not to say I disagree with you, because I agree, but I have a hard time explaining why the above is a bad thing.


My usual response is that it changes the upstream design and starts moving toward OO patterns


OO vs FP is part of my dislike of that approach but it's really the artificiality of introducing a specific typed object and forcing all those functions to accept it as the first argument -- when that might not be the most natural and idiomatic structure for the API.


Okay, that makes sense and is a fair point. Thank you, Sean!