Michael Agres00:05:32

Just two general questions (I wasn't sure where to put this): 1. Besides web apps, what can someone build using Clojure? 2. And besides native mobile or wearable apps, when is Clojure unsuitable?


That should answer most of your questions in detail. If you want a TL;DR: anything that uses Java, like desktop apps and back-end servers and services; anything that uses JavaScript, like Node.js back-end servers, front-ends in the browser or on mobile, Electron-style desktop apps, and various extensions such as for VS Code; as well as some odd ones like scripting, AWS Lambda, and command-line applications. You can also use it for machine learning and data science, and anything that Python is good at, since you can use all Python libs with it as well.

🙌 1
j abns00:05:38

+ flutter apps using #clojuredart


Just added SQLite as a backend for duratom if anybody wants to take a look.

😮 2
Nom Nom Mousse06:05:11

What is the best way to create a namespace, evaluate a few forms, and then go back to the previous namespace? I see there was a now-deprecated macro called with-ns.


Do you mean something like this?

dev=> (let [x *ns*] (ns foo) (defn quux [x] (println (* x x) "in" *ns*)) (quux 42) (in-ns (symbol (str x))))
1764 in #object[clojure.lang.Namespace 0x3ff8a3ad "foo"]
#object[clojure.lang.Namespace 0x3c27f72 "dev"]

👍 1
Nom Nom Mousse06:05:44

Something like that, but I'd like to be able to return a value from the let, not use the last form to go back to the previous namespace.

Nom Nom Mousse06:05:46

I was hoping there was something like

(with [*ns* other-ns]
  (fn-in-other-ns 1))


The problem with using a single form like a macro to do this is that a single form is read all at once before any of it is executed, and the value of *ns* can affect how forms are read


So ::foo in your with-ns won't be qualified with the ns from with-ns, but with the original value of *ns*
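A minimal sketch of what the deprecated with-ns did, written as a function over quoted forms (eval-in-ns is a made-up name, not a library fn): because the target namespace must be in effect when each form is read and evaluated, the forms are passed quoted and evaluated one at a time under a rebound *ns*.

```clojure
(defn eval-in-ns
  "Evaluates each quoted form with *ns* bound to the named namespace."
  [ns-sym & forms]
  (binding [*ns* (create-ns ns-sym)]
    (clojure.core/refer-clojure)          ; make defn, *, etc. resolvable there
    (reduce (fn [_ form] (eval form)) nil forms)))

(eval-in-ns 'scratch
            '(defn square [x] (* x x))
            '(square 6))
;; => 36
```

When the binding block exits, *ns* is restored automatically, which avoids the in-ns bookkeeping from the earlier snippet. Note the read-time caveat still applies to ::foo keywords typed literally at the REPL; only the quoted forms handed to eval see the rebound *ns*.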


In Maybe Not, Rich talks about missing conditional optionality in spec. Has there been anything new on this issue of specifying when something is optional?

Alex Miller (Clojure team)12:05:11

Spec 2 (wip) has support for schema and select similar to described in the talk (as an alternative to s/keys)

Alex Miller (Clojure team)12:05:05

So the select is the “when” effectively

Nom Nom Mousse14:05:54

In the below code, why doesn't foo/quux exist after the function call? And why don't I have to prefix the last call to quux with foo? It is called from the user namespace

(defn c [n]
  (let [x *ns*]
    (println *ns*)

    (ns foo)
    (println *ns*)

    (defn quux [x] (* x x))
    (in-ns (symbol (str x)))
    (println *ns*)
    (quux n)))

(c 3)
=> 9
Syntax error compiling at (/private/var/folders/tz/9155ldys7xg36pvt1dxszwk80000gn/T/form-init1480107651682910345.clj:1:8065).
No such var: foo/quux

👀 1
Joshua Suskalo15:05:49

Seems like the quux function ended up being defined in the user namespace. Not entirely sure why that is, but I will note that this isn't exactly a surprise. def, defn, in-ns, and similar are not really designed to be used inline like this.


Another example of this came up recently. I think if you think of Clojure as interpreted, your expectations make sense. But if you remember that Clojure is compiled, the reality makes sense: think of what it would compile to, and then what happens when it runs


immediately after evaluating the definition of c and before calling c, check out what the definition of quux is:

Unbound: #'nocommit.qp/quux


That it defines an undefined var does make some sense to me. However, that it is defined in the user namespace doesn't. If I run a simple (do (ns bar) (def a 42) (in-ns 'user)), then it does exactly what you would expect.

Joshua Suskalo15:05:26

there's a special case in the compiler for top-level do forms

Joshua Suskalo15:05:30

this might be related


yeah. top-level do, I think, compiles the first form, runs it, and then continues

👍 1

Interesting. Anyhow, as you already said: def probably shouldn't be used inside a def. The only useful context I know of is during debugging.

Joshua Suskalo15:05:46

right, but in-ns and ns also fall into the category of "probably shouldn't be used inside a def"

👍 1
Joshua Suskalo15:05:24

in-ns is mostly a repl utility, and ns is designed for file-header namespace declarations, neither is particularly good to use inside a definition.

Nom Nom Mousse13:05:51

Thanks for the explanations 🙂

Nom Nom Mousse13:05:52

This works though:

(def ns-def
  '((ns imoen (:require [clojure.string :as str]))
    (defn j [coll] (str/join " " coll))))
(doseq [form ns-def] (eval form))
(in-ns 'user)
(imoen/j ["Hiya," "it's" "me"])


Yeah you've made it an interpreter with eval basically

Joshua Suskalo14:05:19

Which, if you haven't seen this before: using eval is almost always a bad idea. There are genuine use cases for it, but they are few and far between. You should consider alternative ways to do this. For example, using clojure.core/intern to create a var in another namespace and assign it a default value.

Joshua Suskalo14:05:01

Although I do question why you need to create a var in another ns in this code.
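A sketch of the intern approach mentioned above, reusing the names from the eval example in the thread: intern creates (or finds) the var in the target namespace and gives it a value, with no eval and no switching of *ns*.

```clojure
(require '[clojure.string :as str])

;; Create the var imoen/j directly; create-ns makes the namespace if
;; it doesn't exist yet. No reading or compiling in another ns needed.
(intern (create-ns 'imoen) 'j
        (fn [coll] (str/join " " coll)))

((resolve 'imoen/j) ["Hiya," "it's" "me"])
;; => "Hiya, it's me"
```

Since vars are invokable, the resolved var can be called directly as above.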


Is there a clever way to get a vector (or any collection type I guess) printing with commas separating the items? I'm thinking primarily about emitting code intended to paste into another language. My current idea is to actually json encode it which feels a little silly but should work. :)


yeah, let me give you a snippet


ah crud, i was wrong, it only does the map case


(pp/with-pprint-dispatch pp/code-dispatch (pp/pprint {:a 1 :b 2})) => {:a 1, :b 2}


i haven’t verified it’ll print with commas, but as i understand it, it gives a ton of control over printing, so i’m pretty confident it’ll do the trick


heck, there may be a way to do it with pprint as well, looking now


This 'works' as well. ¯\_(ツ)_/¯

(require '[clojure.pprint])
(require '[clojure.data.json :as json])

(binding [clojure.pprint/*print-right-margin* 10]
  (json/pprint ["" "" "" "" ""]
               :escape-slash false))
;; => nil
;; stdout:
;; ["",
;;  "",
;;  "",
;;  "",
;;  ""]
Camelot's a silly place. xD


ah yeah good call. i knew i’d seen these print with newlines on vectors


it looks like if you make whatever you’re printing a Java array, it’ll comma-fy it for you?


(pp/pprint (into-array [1 2 3 4]))
[1, 2, 3, 4]


Ah interesting.


Yeah that's better for this particular case. :D


Also looks like it works on hashmaps at least. :)


(with-out-str (pprint {"a" "b" "c" "d"}))
;; => "{\"a\" \"b\", \"c\" \"d\"}\n"


Neat! I like this better than encoding to cussing json. xD
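One more option that didn't come up in the thread, offered here as a sketch: clojure.pprint/cl-format can emit separators directly via its iteration directives, with no JSON encoding and no array conversion.

```clojure
(require '[clojure.pprint :refer [cl-format]])

;; ~{ ~} iterates over the collection, ~a prints each element,
;; and ~^ suppresses the separator after the last element.
(cl-format nil "[~{~a~^, ~}]" [1 2 3 4])
;; => "[1, 2, 3, 4]"
```

Passing nil as the first argument returns the string; passing true prints to *out* instead.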


Does anyone know any quick tips to optimize a smallish Clojure app running on Heroku? I’m already hitting R14 memory errors on the lowest paid tier, though they haven’t killed the process entirely yet.


Which JDK are you using?


Good question. Just the default. I didn’t know Heroku offered options.


> Heroku currently uses OpenJDK 8 to run your application by default
Try switching to 17 or even 18.


Oh wow… ok let me check that out.


or just limit the memory setting if you haven't done so yet


-Xmx1G command line option for 1 gig, or however much your instance is supposed to have


That's one of the things that'll happen (or is supposed to happen) when switching to a JDK > 15, IIRC, due to improved containerization support.


still useful to manually tweak


And Heroku by itself already sets that flag as well. But it doesn't always help. E.g. I currently have -Xmx600m. The app can still balloon to 1.3 GB from time to time. Not sure why, haven't really tried delving into it.


But I should add that you're right, setting it manually to a lower value could be useful. It's just that 1G specifically shouldn't make a difference at all.

Alex Miller (Clojure team)22:05:06

That setting is for Java heap. The JVM also has off heap memory for other things.

👍 1

add that file and it picks the desired jvm
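The attachment above was stripped from the log; with the standard Heroku Java buildpack, the file in question is presumably system.properties at the repo root, where a single key selects the JDK:

```properties
# system.properties (Heroku Java buildpack)
java.runtime.version=17
```

This is a config fragment, not something to run; check Heroku's Java support docs for the currently supported version strings.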


once upon a time there were a lot of settings for tuning performance for clojure on heroku. one note: if you're running with lein, make sure to trampoline, like lein trampoline run -m


Good reminder, I'll do that.


The pipeline solution mentioned in the earlier reply worked fine. However, in an attempt to see if I could find something faster, I'm using clojure.core.reducers as follows:

(defn all-checked-items-fold []
  (let [items (into [] (get-items))]
    (r/fold concat
            (r/filter identity
                      (r/map check-item items)))))
When this is used on a small number of items, everything goes swimmingly. However, when I get to a large number of items, this blows up with a stack overflow. According to research I've done on the net, it appears that it is the use of concat that is the issue causing the stack to overflow. However, I've been unable to find a replacement that doesn't also blow up. What should I be using here in place of concat to keep from overflowing the stack?


use r/mapcat and you don't need filter


but you would have nothing to fold then


maybe foldcat


Would I use r/foldcat in place of (r/fold concat (... ? I tried using (r/foldcat (... and it blew the stack, too.


yes, use foldcat and get rid of the fold concat and the filter identity


nil is the identity element for concat, so filter identity to remove nils is not useful


Thanks. I'll give it a try.
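A sketch of what the suggested foldcat version might look like, with a stand-in check-item so it runs on its own (the real one presumably hits the third-party server):

```clojure
(require '[clojure.core.reducers :as r])

;; Stand-in: returns nil for odd items, a result map otherwise.
(defn check-item [x]
  (when (even? x) {:item x}))

(defn all-checked-items [items]
  ;; r/foldcat replaces (r/fold concat ...) with a stack-safe parallel
  ;; fold + concatenation; r/remove drops the nils in the reducer pipeline.
  (into [] (r/foldcat (r/remove nil? (r/map check-item items)))))

(map :item (all-checked-items (vec (range 10))))
;; => (0 2 4 6 8)
```

Note the input must be a vector (or other foldable) for the fold to actually parallelize; a lazy seq degrades to a sequential reduce.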


I’d like to build a reactive system for a UI using core.async… something I’m missing is the idea of a “current value” for a channel. Essentially, I’d like to be able to deref a core.async channel to access the last value that came through it. I can think of how to define a lightweight type with an IDeref… but I thought I’d ask if anyone has a suggestion of a minimal library (for ClojureScript) that does something like this already .


I would say that both and have a reactive model similar to your description.


I think is also in the same reactive space, but it's much newer and I'm less familiar with it.


and too, not sure how to tie it in with a UI though
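For the "current value" idea specifically, a minimal sketch with no extra library: an atom already satisfies IDeref, so a go-loop that mirrors the channel into an atom is enough (latest-value is a made-up name, not an existing library fn; shown in Clojure, but the same shape works in ClojureScript with put! in place of >!!).

```clojure
(require '[clojure.core.async :as a])

(defn latest-value
  "Returns a deref-able holding the most recent value taken from ch."
  [ch init]
  (let [state (atom init)]
    (a/go-loop []
      (when-some [v (a/<! ch)]   ; exits when ch closes
        (reset! state v)
        (recur)))
    state))

(def ch (a/chan))
(def current (latest-value ch :none))
(a/>!! ch 42)   ; in ClojureScript, use a/put! instead of >!!
;; shortly afterwards, once the go-loop has run:
@current        ;; => 42
```

One caveat: the update is asynchronous, so a deref immediately after the put can still see the previous value. Note also that this consumes the channel, so you'd want a mult/tap if anything else reads from it.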


Foldcat died, too. I've currently got two working solutions - the pipeline one and one I did that splits the items into N chunks and spawns a future for each chunk; when the futures all finish, I combine their computational results together. Timing-wise, each of these finish within seconds of each other. I'm hoping that the reducers solution might finish a bit faster - right now the two working solutions are taking 11 or so minutes to process all of the items. However, the speed seems to be limited by a round-trip I'm making to a third party server that only serves 16-17 requests each second. Would you think that the reducer code would be any faster than the other solutions or am I ultimately rate-limited by the server response time? If so, I'm just going to drop the reducer solution for now.


I would be deeply suspicious if foldcat died with a stackoverflow, seems likely the stack overflow is not happening where you think it is, or maybe you are consuming the result as a seq instead of a reducible


I doubt the reducer code would be any faster


if your time is dominated by io, and your io is limited (n requests per second)


I'll look into that. I'd like to have it working, if for nothing else than my own education. But I think you're right about it not making much difference in overall time. I'm going to have to find a way to speed up the third-party server to do that. I could either put it on a beefier machine or get multiple instances of the JBoss server with a load balancer in front, talking to the same database.


that may or may not speed things up, depending on whether the bottleneck is the processing the server is doing or the database; if it's the database, you still have that bottleneck, and adding more servers isn't going to speed it up


Yeah. I'll need to look into what the bottleneck is in the server setup. It might well be in the database. But I could figure that out from the logs.


Of course, this all depends on whether or not an eleven-minute bring-up time for my microservice is a big deal or not in the bigger scheme of things (I, of course, think of anything over a few seconds to be appalling, but others might not see it that way). Once the data from the server is loaded in the microservice's local cache, requests to my service are handled in a few microseconds and are very simple to process, so I don't expect very much overload or frequent crashes.


🧵 Please use a thread if you're going to continue discussing/debugging this...


I'm done.