This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2022-11-03
Channels
- # announcements (17)
- # asami (17)
- # babashka (20)
- # beginners (110)
- # calva (1)
- # cherry (3)
- # cider (1)
- # clj-kondo (21)
- # clj-on-windows (1)
- # cljsrn (5)
- # clojure (142)
- # clojure-austin (1)
- # clojure-europe (72)
- # clojure-france (28)
- # clojure-hungary (2)
- # clojure-nl (2)
- # clojure-norway (38)
- # clojure-poland (2)
- # clojure-uk (3)
- # clojurescript (4)
- # cursive (33)
- # data-science (3)
- # datahike (5)
- # datomic (1)
- # emacs (27)
- # events (3)
- # fulcro (15)
- # graalvm (4)
- # gratitude (2)
- # honeysql (7)
- # humbleui (8)
- # introduce-yourself (11)
- # jobs-discuss (9)
- # lambdaisland (3)
- # lsp (18)
- # malli (62)
- # music (1)
- # nbb (3)
- # off-topic (10)
- # pathom (3)
- # pedestal (6)
- # polylith (5)
- # re-frame (7)
- # releases (2)
- # shadow-cljs (33)
- # sql (1)
- # test-check (23)
- # vim (20)
- # xtdb (9)
I'd like to render exceptions that occur in my web app as some nice looking html. Can anyone recommend a library for this? I'm thinking of a function that would accept an exception and return hiccup for rendering
Not exactly what you asked for, but maybe you’ll find this helpful: https://github.com/magnars/prone
hey, do you know any good article / youtube video explaining Java 19 virtual threads with examples and showing the differences from previous versions of Java? Does clojure.core or core.async already use them? Will they soon, for Java 19 and above? There is an explanation of the issue: > Even if we use various thread pools to maximize the cost-effectiveness of threads, threads often become the performance bottleneck of our application before CPU, network, or memory resources are exhausted, and cannot maximize the performance that the hardware should have. But I didn't find a clear and precise explanation of how virtual threads make this better. Can someone explain it in a few sentences?
hey @U7ERLH6JX I think you know a lot about it. Can you give some explanation?
Sure, • here's the intro by the creators: https://www.youtube.com/watch?v=YQ6EpIk7KgY • Project Loom in general: https://www.youtube.com/watch?v=EO9oMiL1fFo • how we could use them in Clojure: https://ales.rocks/notes-on-virtual-threads-and-clojure
Also, here's a really short demo of what's possible; we did this on #babashka: https://twitter.com/borkdude/status/1572222344684531717
This one is quite an involved read but very useful and detailed, nails the idea: https://www.infoq.com/articles/java-virtual-threads/
Do you know if / when Clojure plans to bring virtual threads to core.async and core code?
in short, virtual threads are golang/erlang-style concurrency on the JVM, where lighter virtual threads are multiplexed onto OS threads and "parked" automatically when not needed, e.g. when doing IO. So we keep on writing sequential code -- no callbacks, async/await etc.
not sure if they would be implemented in core.async in Clojure; that would mean raising the minimum level to JDK 19. It's very much usable without it. The idea is to not think async.
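As a minimal interop sketch of that (assuming JDK 21+, where Thread/ofVirtual is stable; on JDK 19 you'd need --enable-preview):

```clojure
;; Spawn a virtual thread directly via the JDK builder API.
(defn vthread [f]
  (-> (Thread/ofVirtual)
      (.name "demo-vthread")
      (.start f)))          ; returns the started Thread

;; Blocking calls inside it (sleep, socket reads, ...) park the
;; vthread instead of holding an OS thread hostage:
(let [t (vthread #(do (Thread/sleep 100)
                      (println "woke up after parking")))]
  (.join t))
```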
> "parked" automatically when not needed How exactly will Java know to "park" a virtual thread for an IO operation? Even an IO operation needs a little CPU, so how will it determine whether to park or not?
Alex said a short while back that Rich has been thinking about a core.async alternative built on JDK 19 vthreads but in general Clojure core is very conservative about using new features in the JVM and as can be seen here it still supports JDK 8: https://build.clojure.org/job/clojure-test-matrix/
As a fun experiment, I created a version of the go macro that uses virtual threads, and then you can use all the blocking operations inside it -- no need to just use the parking ops -- and no need for code analysis/rewriting.
> How exactly will Java know to "park" a virtual thread for an IO operation? all of the low-level blocking code and IO things are rewritten in JDK 19 to "let the JVM know when it's doing an IO thing", hence the vthread knows exactly when to park. Same as Go or Erlang.
like i said before, there is a massive push to simplify concurrent code from the JVM side. its an exciting future!
it is magical and quite fun to use
There have been several posts here -- in #C8NUSGWG6 and #C05423W6H (and a few other places) -- that show how to use structured scopes and virtual threads based on JDK 19 stuff. It really does simplify a lot of code and you get so much "for free".
and the farsightedness of the clojure devs, I think, to make it very easy to replace the executor machinery so that we can use it in (future ...) etc. calls!
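For what it's worth, a speculative sketch of that swap: clojure.core/future runs its tasks on the agent send-off executor, which clojure.core lets you replace (assumes JDK 21+):

```clojure
(import '[java.util.concurrent Executors])

;; future and send-off share the agent "solo" executor, so
;; replacing it makes every (future ...) body run on a fresh
;; virtual thread.
(set-agent-send-off-executor!
  (Executors/newVirtualThreadPerTaskExecutor))

;; check which kind of thread the future ran on:
(.isVirtual ^Thread @(future (Thread/currentThread)))
```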
I do want to note that you can monkeypatch core.async to use virtual threads without that much effort, and I have a functional minimal example online here: https://git.sr.ht/~srasu/spindle
Here's my go! macro equivalent from back in May this year: https://clojurians.slack.com/archives/C8NUSGWG6/p1652487349607519?thread_ts=1652433406.842079&cid=C8NUSGWG6 -- note that core.async currently uses full-blown threads for some stuff around channels, so swapping out its thread pool for a vthread pool doesn't stop it using full process threads.
This is me introducing vthreads in a fairly sizable and complex project: https://github.com/bob-cd/bob/commit/5c73bace80921df7a93c6fe3f7867ecaf016f403 its a breeze!
We'll probably update our production servers to JDK 19 after our next deployment (now that New Relic supports it) and then we'll start using vthreads in production code I expect.
I didn’t try Java 19 yet. Is it a good idea to use it in serious, critical production today? Is it very stable?
JDK 19 is the current stable release.
I too generally follow their stable releases every 6 months. For 19 I actually used a Release Candidate -- too excited!
But cautious companies will wait for JDK 21 I suspect since that's the next LTS version (8, 11, 17, 21).
vthreads are still a preview feature in JDK 19 so they are not enabled by default.
We were only using LTS versions in production for a while but with the CVE in JDK 17, we decided to move to JDK 18 a while back. So now we're not so concerned about moving to JDK 19 🙂
Apart from forgetting the --enable-preview flag in places, the latest and greatest JVMs have yet to let me down!
We have this https://github.com/babashka/babashka/tree/jdk19-loom on #CLX41ASCS if you wanna try it out there too! We build binaries on demand if needed
But is there a way for Clojure core libs to enable custom features based on the underlying JVM used? Like, is there a way to port async code to vthreads but enable it only when running on the right JVM, otherwise using the existing code for Java 8?
You could try something like: https://clojurians.slack.com/archives/C03S1KBA2/p1664289931495499 Bit complex way to load the class but should be portable. You can surround this with a jvm version check maybe
Involves reflective calls, so it has possible performance implications; not sure if that's a good idea for core libs
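A sketch of the version-gated idea (illustrative only; the vthread branch compiles via reflection and simply never runs on older JDKs):

```clojure
(import '[java.util.concurrent Executors ExecutorService])

;; Runtime/version needs JDK 10+; .feature is the major version.
(def jdk-major (.feature (Runtime/version)))

;; On old JDKs the vthread call below can't be resolved at compile
;; time, so Clojure falls back to a reflective call -- which is fine,
;; because that branch is never taken there.
(def ^ExecutorService executor
  (if (>= jdk-major 21)
    (Executors/newVirtualThreadPerTaskExecutor)
    (Executors/newCachedThreadPool)))
```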
I meant something more at the “compiler/interpreter” level, like running with some “flag” on. Sorry if it sounds unfamiliar, but I come from Scala bg, and there are tons of compiler flags to enable/disable custom features
Possibly. But since this can be addressed in userland code, it's probably unlikely to be part of the compiler, I think. Also, for Clojure there's not much difference between compile time and run time, and unlike Scala it can't spend a lot of time thinking -- it's quite simple by design so it can be there at all times. The Scala compiler does a lot of work upfront and then isn't there at all after that. The Clojure compiler, the way it is, enables things like the REPL and the whole dynamic nature we love 😄
> all of the low level blocking code and IO things are rewritten in jdk 19 to "let the jvm know when its doing an IO thing" hence the vthread can exactly know when to park. Is this explained somewhere? Like, how exactly does Java know it is IO, or what other factors make it park a thread? I mean, for example, if I use babashka/unzip or another custom function in some library, how will Java know it is IO? Because all these libraries in the end have to use native Java functions to read / write / stream? What else besides IO will be parked?
This is a decent read: https://stackoverflow.com/questions/70174468/project-loom-what-happens-when-virtual-thread-makes-a-blocking-system-call
in short, all libs and code decompose to a Java primitive call in the JVM runtime which makes the blocking OS syscall. Since we know what all the possible blocking Java primitives are, we can add a bit of logic to "let the runtime know to park now". You can see a simpler example in the Thread/sleep implementation in JDK 19+: https://github.com/openjdk/jdk/blob/master/src/java.base/share/classes/java/lang/Thread.java#L539 The other place this happens is when "waiting for things", like a read from a BlockingQueue when there's nothing to read -- essentially @(future ...) in clojure.
Essentially this is the solution to minimise resource consumption when waiting for things. So that we can wait for a LOT of things concurrently and that's what a lot of real world apps are.
Loom doesn't improve parallelism in anyway but is a MASSIVE improvement to concurrency. A thing that was sorely missed on the JVM, at least for me 😅
To get around this limitation on the JVM, we had things like async/await, reactive stuff etc. I and a lot of others found it hard to reason about, and Brian Goetz, one of the maintainers of the JDK, thinks Project Loom is going to kill it too: https://www.reddit.com/r/programming/comments/oxsnqg/brian_goetz_i_think_project_loom_is_going_to_kill/
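A tiny sketch of what that buys you (assuming JDK 21+): thousands of blocking waits running concurrently on a handful of OS threads, with no callbacks in sight.

```clojure
(import '[java.util.concurrent Executors TimeUnit])

;; 10,000 tasks that each block for 50ms: with one vthread per
;; task they all park together, so the whole batch finishes in
;; roughly 50ms of wall time, not 10,000 x 50ms.
(let [exec  (Executors/newVirtualThreadPerTaskExecutor)
      start (System/nanoTime)]
  (dotimes [_ 10000]
    (.submit exec ^Runnable #(Thread/sleep 50)))
  (.shutdown exec)
  (.awaitTermination exec 30 TimeUnit/SECONDS)
  (println "elapsed ms:" (quot (- (System/nanoTime) start) 1000000)))
```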
but this is only for IO / waiting things? I mean, it shouldn't be used to do IO -> process data in the same virtual thread, but more like do the IO and add to a queue to process the data.
*Assuming processing the data uses more resources.
Or will it magically manage IO -> process data correctly? So for example, there are 20 virtual threads. All are ready and have finished their IO and now want to process data, but only 4 at a time can process, while all can do IO. Does this part have to be coded manually, with the software developer putting 4 as a parameter somewhere?
So what you want to do is reduce pinning of the vthread to the OS thread: https://paluch.biz/blog/183-carrier-kernel-thread-pinning-of-virtual-threads-project-loom.html Pinning happens when the vthread is doing CPU-dependent things. If you're doing CPU-bound things on a vthread it's like any other thread, which defeats the purpose of it.
Also, with this approach we can mix and match well too, avoiding the function-coloring problem (https://journal.stuffwithstuff.com/2015/02/01/what-color-is-your-function/), and I use it with much ease: https://github.com/lispyclouds/contajners/blob/main/fetch_api/main.clj#L186 This is an example of the "do IO concurrently and process in parallel" use case you're describing.
But there are some fun consequences of this: we had thread pools before to limit resource consumption, and since those were also used to do concurrent IO, there was a bounded number of calls to a DB or a server etc. Now that pools are pretty much gone with the cheap vthreads, you're much more likely to crash the poor DB if you're not careful 😛 all in just a few MBs of RAM on the client side 😅
hold my beer while I build the next DDoS botnet in bb
Isn’t that why we have “connections-pool” too, in addition to performance by re-use?
yep, that should protect the DB too. My concern was more about opening up connections per request from a server, for example.
now servers like https://medium.com/helidon/helidon-n%C3%ADma-helidon-on-virtual-threads-130bb2ea2088 can handle each requests on a vthread instead of a normal thread pool
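One way to get the old pool's back-pressure without pooling the vthreads themselves is a plain Semaphore around the scarce resource (a sketch; the names are illustrative):

```clojure
(import '[java.util.concurrent Semaphore])

;; Cap in-flight DB calls at 10: any number of vthreads can wait,
;; and acquiring parks the waiting vthread cheaply until a permit
;; frees up.
(def db-permits (Semaphore. 10))

(defn with-db-permit [f]
  (.acquire db-permits)
  (try
    (f)
    (finally
      (.release db-permits))))

;; usage: (with-db-permit #(run-query! conn sql)) ; run-query! is hypothetical
```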
Re: core.async and vthreads -- given that vthreads shine for I/O but not for CPU-bound stuff per @U7ERLH6JX above, and the general advice is to not do I/O in core.async go blocks, there's a bit of a tension there around just switching core.async to use vthreads.
(I guess that's sort of a question, rather than an opinion/caution?)
I guess the requirement of JDK 19+ and the complexities of maintaining multi-version support could be a big reason?
I meant more the tension between your comment about vthreads vs threads -- favoring I/O etc. for the former but CPU-bound for the latter -- versus core.async, where it's recommended not to do I/O in go blocks -- so converting core.async to vthreads so go blocks use them seems counter-intuitive?
Ah I meant the same thing, doing cpu bound things in vthreads make it the same as normal threads. So avoid that on vthreads. So same effect as don’t do io in core async blocks.
Switching to vthreads for go blocks should be good, would simplify a lot of the machinery
I guess the way I wrote it was a bit weird, what I meant to say was if one does cpu stuff in vthreads it behaves the same way as a normal one, hence defeats the purpose of it. 😅
Right, so it would seem weird to switch core.async to vthreads since existing core.async code is going to be CPU-bound (maybe) rather than I/O-bound -- but a vthread-based core.async would then be good for doing I/O-bound stuff 🙂
now i finally get the point, sorry for being doubly confused by my own stuff 😅 yes to all you said
core.async go blocks as it currently is with the idiom of not doing IO in them would not benefit from vthreads
a future namespace maybe
We're looking at maybe creating a near drop-in core.async replacement (JVM-only) that would use vthreads for go blocks and then treat non-blocking and blocking ops as synonyms -- just as an experiment.
Would be great! If its done in some public repo, would love to see its progress and/or contrib too! 😄
I think all the puts and takes should just work™ due to the channels being concurrent-queue based?
My initial experiments have been promising. I don't know what our timeline is for actually making it usable 🙂
for me, just seeing the thing being built is usable enough!
Can Java 19 virtual threads control number of processing function at once? Let’s say I have 200-500 IO functions to run, but because of memory limitation and probably other conditions I want to run max N at once. Can I easy control it with virtual threads?
Yes, you need to use a fixed thread pool with a virtual thread factory for that:
(import '[java.util.concurrent Executors])

;; factory producing named virtual threads (JDK 19 preview / JDK 21+)
(let [vfactory (.. (Thread/ofVirtual)
                   (name "loom-vthread-" 0)
                   (factory))
      ;; at most 10 of these virtual threads run at a time
      executor (Executors/newFixedThreadPool 10 vfactory)]
  ;; do stuff with the executor, e.g. (.submit executor ^Runnable task),
  ;; then (.shutdown executor) when done
  )
@U7ERLH6JX I know this is an older thread, but asking based on your continued experience with them... As the virtual threads are tied to a CPU (and not swappable between them), for CPU work (vs IO work) based virtual threads, do you know if the scheduler does: • the Erlang thing and round robin work of which is the active virtual thread after so many operations to prevent CPU starving the other items, • more Node based scheduling, where it only works on one until done or hits IO, and then picks a new virtual thread to continue with, • something else?
Yeah, vthreads are "pinned" to a carrier thread, which is a real OS thread. The parking of a vthread happens only on an IO or suspendable event (Thread/sleep, read from an empty queue etc.) and isn't pre-empted like on the BEAM. Spin-wait loops will hog their carrier thread and one of the cores.
it's still technically cooperative concurrency, just that all the cooperation is done for us at the lowest levels, instead of us yielding the thread
I am assuming that is the current "state of the art"; it looks like the scheduler is pluggable... and could eventually support one that does the Erlang thing, if people with the right understanding decide to invest in a CPU-sharing scheduling model...
yeah, not sure what direction it will go, just that the idiom for now is: "don't put CPU-heavy things on a vthread." Doing preemptive concurrency would mean quite an overhaul of the JVM, I think. AFAIK the BEAM doesn't allow real thread access, only BEAM processes? The JVM is hoping you know what you're doing while staying backwards compatible, I suppose 😛
Hello everyone. I came across a problem using an assembled Uberjar for a web service that relies on both the Snowflake JDBC and Buddy dependencies, among many others. I'd like to know if some of you have encountered and managed to solve this problem, which in essence is a problem with the class resolution of a shared underlying dependency between both libraries. As a disclaimer, we were able to work with the web service in development mode (with nREPL, Integrant, etc.), loading authentication middlewares with Buddy and successfully connecting to a Snowflake DB through next-jdbc. In release mode, after assembling the Uberjar, when I execute it and it starts loading all the Integrant components, the app breaks with the following error:
Exception in thread "main" java.lang.NoClassDefFoundError: net/snowflake/client/jdbc/internal/org/bouncycastle/crypto/CipherParameters (wrong name: org/bouncycastle/crypto/CipherParameters)
at java.base/java.lang.ClassLoader.defineClass1(Native Method)
at java.base/java.lang.ClassLoader.defineClass(ClassLoader.java:1012)
at java.base/java.security.SecureClassLoader.defineClass(SecureClassLoader.java:150)
at java.base/jdk.internal.loader.BuiltinClassLoader.defineClass(BuiltinClassLoader.java:862)
at java.base/jdk.internal.loader.BuiltinClassLoader.findClassOnClassPathOrNull(BuiltinClassLoader.java:760)
at java.base/jdk.internal.loader.BuiltinClassLoader.loadClassOrNull(BuiltinClassLoader.java:681)
at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:639)
at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:188)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:520)
at java.base/java.lang.Class.forName0(Native Method)
at java.base/java.lang.Class.forName(Class.java:467)
at clojure.lang.RT.classForName(RT.java:2209)
at clojure.lang.RT.classForName(RT.java:2218)
at buddy.core.mac__init.__init0(Unknown Source)
at buddy.core.mac__init.<clinit>(Unknown Source)
...
After some research, I discovered that the Snowflake JDBC artifact publicly available in the Maven repository uses a plugin called maven-shade-plugin. This plugin relocates dependencies internally and rewrites imports (within the classes' bytecode itself!) to a different location. In the case of this library, all of its dependencies (which are many!) are relocated to net.snowflake.client.jdbc.internal (see https://github.com/snowflakedb/snowflake-jdbc/blob/v3.13.24/pom.xml#L768-L771). To my understanding, this plugin converts those dependencies from transitive to inlined, hence :exclusions in the deps.edn have no effect when trying to assemble the project's Uberjar.
Looking closer at the stack trace: when the Buddy libraries are loading and finally reach a Buddy namespace (`buddy.core.mac`) that directly imports classes from Bouncycastle, for some reason the Class.forName method invocation "finds" the Snowflake internal version of said class (`org.bouncycastle.crypto.params.KeyParameter`, which relies on the interface org.bouncycastle.crypto.CipherParameters) but with a different package declaration (`net/snowflake/client/jdbc/internal/org/bouncycastle/crypto/CipherParameters`), even though the same Bouncycastle classes are located both at root level and internally under net/snowflake/client/jdbc/internal/ within the Jar file structure.
Lastly, I prepared this repository (https://github.com/joshuamzm/snowflake-and-buddy) to easily isolate and reproduce this problem with only those two dependencies.
Thanks for taking the time to read this post; really looking forward to hearing back from the community.
Yeah, can't say so much about these dependencies in particular, but I've seen a similar thing before in other situations where package names get rewritten at compile time. Clojure isn't really designed to deal with packages that move around at build/runtime, so this might be something where the most effective way to resolve it would be to build wrapper Java classes that "insulate" the Clojure code from that nonsense. Is that feasible to do with your use case, or is that impossible because all the problems are located inside dependencies?
@U5NCUG8NR I really appreciate any suggestion to overcome this issue. I know vendors do these "clever" tricks for reasons valid to them. Unfortunately, those measures pose obstacles that we need to resolve down the road.
I don't fully grasp your recommendation to add Java wrappers to insulate this problem from the Clojure runtime. In my use case, I dynamically load the JDBC class with the :classname property of the next.jdbc/get-datasource function. For Buddy and everything else, I just use requires as usual.
I wonder how other Clojure projects use the Snowflake JDBC dependency. As far as I know, the Metabase project uses it.
Right, the reason I mention this is that using import in Clojure produces both a regular import like you would see in a compiled Java class, as well as a Class.forName call, which means that while one could be rewritten by the tool, the other cannot be, since the tool doesn't understand strings. So Clojure itself doesn't mesh well with these sorts of rewrite tools.
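You can see the string in the expansion itself: import bottoms out in import*, which takes the class name as a plain string that a bytecode rewriter never sees.

```clojure
;; The macroexpansion shows the string the rewriter can't touch:
(macroexpand '(import java.util.Date))
;; => (do (clojure.core/import* "java.util.Date"))
```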
If you have anything that imports from one of these tools, then it's often better to make a Java class that imports them and exposes the needed functionality, and then import that from clojure so that the rewrite can occur without issue.
If this doesn't match your needs though, I'm not sure exactly what could be done, probably at least partially because I don't quite understand exactly how the misalignment between classes is working out.
I see. I'll do research on your proposition and give it a try. However, just a clarifying note: no rewriting happens to any of our Clojure code or its underlying classes. The only dependency rewritten internally is the Snowflake JDBC, and it comes as-is from the Maven repository, so there's nothing we can do with it. We need to work around it...
ah, I see. So the issue then is that you are naming a class for your jdbc driver with a string that identifies it and that works fine in development, but when you uberjar it the dependency is moved?
As far as I understand, in dev mode classes are loaded in memory, so class resolution works fine. Once we lay out the Jar contents and start resolving classes from it, we encounter the exception.
Yes to your question, it works well in dev. In the uberjar, the dependency is copied over with all of its interned deps under net/snowflake/client/jdbc/internal/
Alright, so I wonder then if you couldn't make your compilation depend on like an environment variable which gets set during compilation to use a different classname string?
like a macro that chooses between two different values depending on an environment variable at macroexpansion time, and then you AOT the uberjar which will macroexpand it during compilation, allowing you to have different behavior based on if it's development or deployment
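A sketch of that idea (the BUILD_PROFILE env var and the classnames are made-up placeholders): because AOT compilation macroexpands everything, the env var is read at build time and the chosen string is baked into the compiled class.

```clojure
;; Reads BUILD_PROFILE when the macro expands (i.e. at AOT/compile
;; time), not when the program runs.
(defmacro classname-for-build []
  (if (= "release" (System/getenv "BUILD_PROFILE"))
    "com.example.ReleaseDriver"   ; hypothetical deployed classname
    "com.example.DevDriver"))     ; hypothetical dev classname

;; After expansion this is just a constant string:
(def driver-classname (classname-for-build))
```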
That might help, but how do I change the class resolution for Clojure dependencies? The problem comes from the Buddy dependency, importing classes from the Bouncycastle library, but accidentally finding the interned classes from Snowflake...
The classname for the JDBC works fine, and it doesn't move; it is always net.snowflake.client.jdbc.SnowflakeDriver, and ultimately I only use the class name without the qualified part.
aaah, I see. That's even more confusing
Unfortunately, it is. I've been diving into this problem for many hours. That's the reason I finally came here to the community and expose my case.
Maybe the class loader behavior changes if I remove the Snowflake dependency from the Uberjar, and then provide it separately by extending the classpath when executing the service. I'll do research on the default class loader and then try this approach.
In any case, @U5NCUG8NR thank you for your insights.
Yeah, sorry I couldn't be more help
Well, it turned out it worked: extending the classpath to include the Snowflake jar, separate from the Clojure project's Uberjar. However, I wasn't able to use the -jar flag for the Java command; I needed to run the project's main class manually as java --add-opens=java.base/java.nio=ALL-UNNAMED -cp .\target\* org.acme.main
In the README of the original repo (https://github.com/joshuamzm/snowflake-and-buddy/blob/offloading-snowflake-dep/README.md) I pasted the output of running the project. I provided a main namespace (https://github.com/joshuamzm/snowflake-and-buddy/blob/offloading-snowflake-dep/src/org/acme/main.clj) that connects to and retrieves data from Snowflake, uses Buddy digest functions to hash it, and finally outputs it to stdout.
Sometimes when an uberjar malfunctions, it's because the individual jars contained META-INF settings that the uberjar mingled or merged or trampled. Unzip the jars' META-INF and read through it, then do likewise with the uberjar. You may have some control of this phenomenon, depending on how you make your uberjar (the Maven Shade plugin's controls are deluxe).
You were right. There are conflicting Bouncycastle classes under META-INF/versions/9/. In a Clojure project without the Snowflake dependency, decompiling those classes shows they correctly declare their packages under org.bouncycastle.crypto, but including the Snowflake dep replaced those classes with versions declaring packages under the relocated net/snowflake/client/jdbc/internal/ prefix.
This paragraph is key to understand the class loader behavior:
If a multi-release JAR file is deployed on the class path or module path (as an automatic module or an explicit multi-release module) of major version N of a Java platform release runtime, then a class loader loading classes from that JAR file will first search for class files under the Nth versioned directory, then prior versioned directories in descending order (if present), down to a lower major version bound of 9, and finally under the top-level directory.
Reference: https://docs.oracle.com/en/java/javase/17/docs/specs/jar/jar.html#multi-release-jar-files
I finally migrated the poc project to tools.build along with Sean Corfield's build-clj library. I directly created the Uberjar with clojure -T:build org.corfield.build/uber :lib org.acme/main :main org.acme.main and the final artifact was able to run successfully!
Looking into the files in the Jar archive, I noticed the classes under META-INF/versions/9 came from the top-level deps, not from the ones interned by Snowflake, which made everything work nicely.
I'm left concerned that the order in which dependency Jars are expanded and copied into the target Uberjar really matters. For the moment, I'll be happy to migrate the main project to tools.build, but something inside me remains uneasy about this situation. I'll end up writing big warning notices in the files involved in creating the Uberjar.
Over and out.
Hi, does anyone know a way to auto-reload a browser tab when developing a server-side-rendered html/hiccup page?
One way from the babashka book: https://github.com/babashka/book/blob/master/script/watch.clj
@U4GEXTNGZ Another way is livejs: https://github.com/borkdude/quickblog/blob/76e2d597668121bf03fb0bd2ad3022f194ffd987/src/quickblog/api.clj#L563 If you start quickblog in watch mode, then it inserts livejs in the page, which polls for changes

Note that in watch.clj I use etaoin as a pod, but etaoin is now compatible with bb from source, so you can just use the regular library in bb
would you agree with the statement > with Clojure, I can write code with less coupling than I can in typed languages ?
I think if you wrote code in a typed language that leaned as heavily on abstractions like seq, set, etc vs concrete data types you’d still have low coupling
coupling is one of the goals of using types - changes in one place cause compilation issues in other places. The hard part in typed languages is to decouple code that shouldn't break
There are many kinds of coupling (https://en.wikipedia.org/wiki/Coupling_(computer_programming)) - the question is, does it give you less or more of the kind of coupling that you want. And with or without types, you would still have the same coupling, only explicit through a type or implicit if you go without. Which would you rather have? What are the trade-offs? Less typing means the compiler proves less to you, which means that you have more to verify in your tests, or more risk. Again, a trade-off that is very different for e.g. the intranet sign-up page for the Christmas party in a small company compared to code running critical medical devices.
Clojure promises open extension, which means that coupling and decoupling are not necessarily opposites. Dynamic typing contributes to open extension.
It is easier to write code with less coupling in Clojure than in many statically typed languages, but that doesn't mean you couldn't do it in those languages too. Heavy and wise use of generics and similar constructs, and writing everything as pure functions, does allow a similarly low level of coupling. It takes more work there, but it is achievable.
The problem there is that you have to think about that all the time. And in many cases people writing in those languages do not even understand the problem they are creating
If one has written good quality Clojure then that person most likely will also strive for equally loosely coupled code in typed languages like TypeScript or Elm
Good point. Reminds my of a lexilambda blog post: https://lexi-lambda.github.io/blog/2020/01/19/no-dynamic-type-systems-are-not-inherently-more-open/
I am trying to use r/foldcat to parallelize some computations. But my source collection is a finite lazy-seq. I see in the docs that lists are not foldable, but there is no mention of lazy-seq. Does it make sense to use r/foldcat with it?
r/fold uses the internal tree structure of clojure data structures to break them into parts and then reduce over the parts in parallel
I see, would it make sense in terms of performance to convert the lazy-seq into a vector?
it really depends; you might look upstream at however the lazy-seq is being produced, and try to change that to some kind of transducer-based thing (maybe an eduction, maybe just using transducers with into instead of making lazy seqs)
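Concretely (a small sketch): pouring the lazy seq into a vector first makes it foldable, whereas with a lazy seq r/fold silently degrades to a sequential reduce.

```clojure
(require '[clojure.core.reducers :as r])

(def lazy-nums (map inc (range 1000000)))  ; lazy seq: not foldable

;; Same answer either way, but only the vector's internal tree can
;; be split and folded in parallel:
(r/fold + (into [] lazy-nums))
;; (r/foldcat needs a foldable source in the same way)
```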
Thanks
Does case not work with primitives? Cannot for the life of me get it to match against an int constant.
I have of course read the docstring. I am guessing you're trying to tell me that resolving a reference to a Java field happens at evaluation time?
Got it, thank you.
user=> (type (read-string "java.sql.Connection/TRANSACTION_NONE"))
clojure.lang.Symbol
user=> (type (eval (read-string "java.sql.Connection/TRANSACTION_NONE")))
java.lang.Integer
user=>
Yeah, that makes sense. For some reason it sat in my brain as a distinct interop thing and not an ordinary symbol.
user=> (type (read-string "#=(eval java.sql.Connection/TRANSACTION_NONE)"))
java.lang.Integer
user=>
but if you do that they come for you with pitchforks
Yeahhh I'll just use condp
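For the record, a sketch of why condp works here where case doesn't: case takes its test constants literally at macroexpansion time, so a static-field symbol never matches the int it names, while condp = evaluates each candidate.

```clojure
;; java.sql.Connection/TRANSACTION_NONE is evaluated here (to 0),
;; unlike in case, where the symbol itself would be the "constant".
(defn isolation-name [level]
  (condp = level
    java.sql.Connection/TRANSACTION_NONE           "none"
    java.sql.Connection/TRANSACTION_READ_COMMITTED "read-committed"
    "other"))

(isolation-name java.sql.Connection/TRANSACTION_NONE) ;=> "none"
```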