This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
- # announcements (3)
- # aws (7)
- # babashka (108)
- # beginners (222)
- # bristol-clojurians (3)
- # calva (8)
- # chlorine-clover (1)
- # cider (14)
- # clj-kondo (4)
- # cljdoc (6)
- # cljs-dev (89)
- # cljsrn (13)
- # clojars (6)
- # clojure (89)
- # clojure-australia (1)
- # clojure-europe (11)
- # clojure-italy (9)
- # clojure-losangeles (11)
- # clojure-nl (6)
- # clojure-spec (2)
- # clojure-sweden (1)
- # clojure-uk (9)
- # clojurescript (47)
- # conjure (18)
- # datomic (7)
- # docker (1)
- # figwheel (43)
- # figwheel-main (2)
- # fulcro (31)
- # kaocha (3)
- # leiningen (7)
- # luminus (2)
- # nrepl (14)
- # off-topic (24)
- # pathom (5)
- # pedestal (5)
- # rdf (4)
- # re-frame (49)
- # reagent (12)
- # reitit (9)
- # rum (21)
- # shadow-cljs (109)
- # tools-deps (35)
- # vim (8)
- # wasm (1)
Hi, I am Ravi; I have been working with Clojure for the last 6 months. I have a project using Compojure, with toucan-db to connect to a pgsql database. Now I am trying to connect to a second pgsql server to store log data, but I am not able to find a way to connect to both databases from the same API function. How can I achieve this in Clojure using the db-connection function in Toucan? Any example would help.
I think the library is trying to be helpful by having a default value and allowing users to switch to a different db configuration when needed. Until the concept is clear, it is not very obvious. Coming from a C++ background, I am still getting accustomed to the Clojure way of doing things.
Dynamic variables are "convenient" but come with a lot of restrictions on how you can use code that depends on them. They are global mutable state, which is just a terrible idea. They were used a lot in the early days of Clojure a decade ago, but most libraries have moved away from them now (with one or two notable exceptions), and that sort of architecture is considered an anti-pattern by most Clojurians.
I wouldn't use any library that is based on that principle (and libraries that I have maintained have all moved away from it).
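To make the alternative concrete, here is a minimal sketch of the "pass the connection explicitly" style using `clojure.java.jdbc` (which toucan-db builds on). The db specs, table, and query are invented for illustration; the point is only that a function taking the spec as data can talk to as many databases as you like, with no dynamic default connection involved.

```clojure
(require '[clojure.java.jdbc :as jdbc])

;; Hypothetical connection specs -- hosts, names, and credentials are illustrative.
(def app-db {:dbtype "postgresql" :dbname "app_db" :host "db1.example.com"
             :user "app" :password "secret"})
(def log-db {:dbtype "postgresql" :dbname "log_db" :host "db2.example.com"
             :user "logger" :password "secret"})

;; Because the spec is an ordinary value passed as an argument,
;; one function can use both databases side by side.
(defn handle-request [payload]
  (jdbc/insert! log-db :request_log {:payload (pr-str payload)})
  (jdbc/query app-db ["select * from users where id = ?" (:user-id payload)]))
```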
My background also includes C++ -- eight years on X3J16 (and three years as secretary) as well as producing a compiler front end that was used for static source code analysis. Clojure will take a bit of getting used to after C++ 🙂
@UTLP63467 toucan has many design flaws, see their github issues for more. Have you looked at other "ORM" approaches like https://github.com/ReilySiegel/EQLizr https://github.com/exoscale/seql or https://walkable.gitlab.io
I don't know what your exact requirements are, but jackdaw is a great lib for working with Kafka, with testing support
What are some of the best ways to build an asset pipeline in Clojure web apps, e.g. building and serving CSS and images?
https://hub.docker.com/_/clojure on Docker Hub: these images are listed as "Docker Official Images". Are they official in any way?
AFAIR not really. The best example of that is Elasticsearch: the "official" Docker Hub image was not in fact maintained by Elastic (or wasn't; I don't know the current status), and Elastic's own images were provided from its own public registry. If you look at the Clojure image's readme, it links to https://github.com/Quantisan/docker-clojure and not, as I would expect, to http://github.com/clojure/docker (or something like that).
This is a common question around Docker Hub; e.g. the mongo Docker image isn't necessarily official from MongoDB Inc. either.
from https://docs.docker.com/docker-hub/official_images/
> Docker, Inc. sponsors a dedicated team that is responsible for reviewing and publishing all content in the Official Images. This team works in collaboration with upstream software maintainers, security experts, and the broader Docker community.
> While it is preferable to have upstream software authors maintaining their corresponding Official Images, this is not a strict requirement. Creating and maintaining images for Official Images is a public process. It takes place openly on GitHub where participation is encouraged. Anyone can provide feedback, contribute code, suggest process changes, or even propose a new Official Image.
I generally do my artifact builds outside of docker, then put the jar inside a normal JDK image
It'd be cool if there was some better Clojure merch out there, using assets like: https://github.com/tallesl/Rich-Hickey-fanclub/tree/master/cartoon but all official and above board
It would be nice to have the parentheses stickers on there (but they're not Clojure-specific).
Testing in my REPL, it seems to happen on put:
```clojure
(def chan (async/chan 1 (map (fn [v] (println "Effect") v))))
;; => #'dev/chan
(async/>!! chan 1)
;; Effect
;; => true
```
so if the buffer is full, the put will block until there’s space and then will apply the transducer before unblocking
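A small sketch of that claim (assuming core.async is on the classpath; the channel and values are illustrative): with a buffer of 1, a second put parks until a take frees space, and the transducer runs as part of completing the put.

```clojure
(require '[clojure.core.async :as async])

;; Buffer of 1, with a transducer that prints as a side effect.
(def ch (async/chan 1 (map (fn [v] (println "Effect" v) v))))

(async/>!! ch 1)                   ;; buffer has room: prints "Effect 1", returns true
(def p (future (async/>!! ch 2)))  ;; buffer full: this put parks, no "Effect 2" yet
(Thread/sleep 100)
(async/<!! ch)                     ;; => 1; frees space, so the pending put can complete
@p                                 ;; => true; "Effect 2" has now been printed
```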
If I create a new namespace and navigate to it, I appear to lose access to clojure.core. Is there a way I can enable use of clojure.core within the newly created namespace?
```clojure
(let [new-ns (create-ns (gensym "temp-"))]
  (in-ns (ns-name new-ns)))
(+ 1 1)
```
Syntax error compiling at (C:\Users\<my-username>\AppData\Local\Temp\form-init12166700272800114096.clj:3:1). Unable to resolve symbol: + in this context
```clojure
user=> (in-ns 'foo.bar)
#object[clojure.lang.Namespace 0x308a6984 "foo.bar"]
foo.bar=> (clojure.core/refer-clojure)
nil
foo.bar=> (+ 1 1)
2
foo.bar=>
```
It looks like it works if I intern `require` into the new namespace and then run `(require '[clojure.core :refer :all])`
Note that you need `clojure.core/refer-clojure` rather than just `refer-clojure`, because the latter won't exist as a symbol in the new ns.
Is there a reason why you're creating a new namespace with a random name? @jmromrell
Another possible option?
```clojure
user=> (eval (list 'ns (gensym "temp-")))
nil
temp-149=> (+ 1 1)
2
temp-149=>
```
I'm dynamically running some clj I read in from a database. I might be running more than one such clj file at a time. As such, I want to evaluate each in their own namespaces to avoid possible collisions.
I'm additionally hoping, though I am not sure if this is the case, that calling `remove-ns` on the namespace when I'm done will allow any state left behind by the evaluation to be garbage collected.
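A sketch of that approach (the function and names are illustrative, not from the original conversation): evaluate code from a string inside a throwaway namespace, then drop the namespace afterwards so any vars it defined become unreachable.

```clojure
(defn eval-in-temp-ns
  "Evaluate a string of Clojure code in a fresh, throwaway namespace."
  [code-str]
  (let [ns-sym (gensym "temp-")]
    (binding [*ns* (create-ns ns-sym)]
      (try
        (clojure.core/refer-clojure)   ;; fully qualified: the fresh ns has no refers yet
        (eval (read-string code-str))  ;; note: reads a single top-level form only
        (finally
          (remove-ns ns-sym))))))

(eval-in-temp-ns "(do (def x 21) (* x 2))")
;; => 42
```

Any `def` inside the string lands in the temporary namespace, so concurrent evaluations of different snippets can't collide on names.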
Do folks use reducers in a long running production service? The implicit parallelism seems like it’s asking for trouble.
There’s no implicit parallelism in reducers; it’s very explicit, as you only get it if you opt in to it by calling `r/fold`.
I meant implicit threadpool with no control over how various things can utilize that pool.
If you need control over the pool, then yeah, you can’t use them; but it depends what you’re doing. Yes, fold jobs submitted to that pool will contend with each other, but that’s the case anyway. The main difference is, I guess, that there’s no control over the fairness policy… so a large fold job will contend with and may block a smaller one, but throughput will be as high as possible, just not responsiveness. If you want some kind of backpressure, just put a channel or a java.util.concurrent queue in front of it.
Yes, my concern was with the "fairness". It seems very easy for a multi-purpose service to have certain jobs take over the pool. Not sure how those backpressure mechanisms would help with reducers & fairness.
Oh, I see. As with most things, this seems very use-case dependent 🙂 With that approach, you'd also need to know the size of the job upfront, which may not be realistic.
Parallelism always is; that’s why Clojure provides many ways to achieve it à la carte.
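For readers who haven't used reducers: a minimal sketch of the explicit opt-in being discussed. `r/fold` splits a vector into chunks (512 elements by default) and combines them on the shared ForkJoinPool; the data here is made up, and the sequential version gives the same answer.

```clojure
(require '[clojure.core.reducers :as r])

(def xs (vec (range 100000)))

;; Sequential: an ordinary reduce over a lazy seq.
(reduce + (map inc xs))     ;; => 5000050000

;; Parallel: same result, but chunks are reduced on the ForkJoinPool.
;; With one fn argument, + serves as both the reducing and combining fn.
(r/fold + (r/map inc xs))   ;; => 5000050000
```

Note that the parallel version only pays off on foldable collections (vectors and maps) and sufficiently large inputs; this is the point raised later in the thread about data needing to be "in the right form".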
Out of curiosity, are you using reducers + parallelism in a long running, production service?
No, not yet… though that’s mainly because few of the things I deal with are amenable to fold. Or if they are, I’ve been getting the perf I need out of a sequential approach, or have been achieving parallelism in a different way.
However there is an area of one of our systems where I might one day choose to use them in production…
Essentially we have a job scheduling process that builds large downloads. Those downloads are already async things; i.e. using core.async, we either serve users a cached download we’ve already built; or we generate them. If we need to generate it on demand, our users have to wait and fall into a holding process / wait screen etc for it to build. These jobs can sometimes take an hour or more to generate; and they can contend with each other…
Part of the processing to build those downloads might benefit from using `r/fold`; and if the jobs on the forkjoinpool contend with each other it won’t matter, as users are already waiting.
Does that make sense?
Yes! That sounds like the perfect application. All jobs equal and a service that does 1 thing.
well the service itself is actually monolithic; i.e. the download generation is done in the same process as many other aspects of the app. No need to scale out across machines/services when you don’t have to handle millions of requests - you can do a huge amount in one JVM with a lot less developer friction. Internally though the processes are essentially separate services provided to the app that communicate through various means e.g. core.async channels being the main one.
And you're not concerned about this download generation eating up all the resources the many other tasks your monolith performs?
Not all resources no, as the number of threads servicing those channels is configured.
If you mean am I concerned about `r/fold` potentially swamping the system: we’d need to investigate it, but I suspect it would be fine, because even though the fjpool is assigned to all cores, the O/S will still switch in other threads that need to do work. And the `r/fold` won’t be over the whole download process, just bits of it, which may even themselves effectively be chunked.
anyway it’s largely hypothetical at this point.
`r/fold` may not even be faster than a sequential reduce.
well it would if the data was in the right form; i.e. vectors or hashmaps… but converting into that form might itself blow the perf advantage.
Interesting. My concern was with r/fold swamping the system. I'd also be concerned about the potential increase of latency for other jobs.
Yes, latency for other requests is certainly a factor… but infrequent and relatively small spikes in latency can be tolerated. As I said earlier, if your service isn’t under high load or strict SLAs, then you can get away with monolith deployments. It’s a trade-off for sure… I should add that it would be pretty trivial to move this subsystem to another machine or cluster behind a load balancer if we needed to. Config is done via Integrant, so it would essentially just be a matter of starting a bunch of processes with a subset of keys/routes and putting a LB in front.
I think this is one unsung benefit of integrant actually; it makes it very easy to move from a monolith to micro services, or use a monolith in dev contexts, but deploy as micro services.
Actually, in dev and test envs we do start two services, which are normally independent processes, in the same process. In dev it is very convenient to have the same REPL into two or more separate services.
It's technically possible to use a data reader tag in the very same namespace in which the associated reader function was defined, right? Does this happen a lot, or is it reasonable to assume that you can require a namespace with reader functions and add it to `*data-readers*` yourself?
The context is this issue: https://github.com/borkdude/babashka/issues/419
I want to implement data_readers for babashka, but I don't want to incur any startup penalty by scanning the classpath if people aren't going to use any data readers.
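As context for the question, here is a minimal sketch of registering a tagged-literal reader at runtime by binding `*data-readers*`, rather than via a `data_readers.clj` scanned from the classpath. The `my/celsius` tag and its reader fn are invented for illustration.

```clojure
;; A reader function receives the form following the tag.
(defn read-celsius [x] {:unit :celsius :value x})

;; Bind *data-readers* so the reader sees the tag; tags must be namespaced.
(binding [*data-readers* (assoc *data-readers* 'my/celsius read-celsius)]
  (read-string "#my/celsius 21.5"))
;; => {:unit :celsius, :value 21.5}
```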
A colleague of mine (Andrew Mcveigh) wrote what I think is quite a beautiful example of a Clojure macro for generating SPARQL 1.1 property paths:
It showcases a number of quite advanced macro tricks, so I thought folks here might find it an instructive or interesting example of how far you can take macros if you choose to. Obviously the first rule of macro club still applies. However, in this case property paths are usually static things (so there's seldom a need to dynamically build them), and if a pure data syntax had been chosen it would require quoting and syntax-quote splicing etc., as URIs in this world are by convention bound to vars or symbols; e.g. `rdfs:label` is a var named that way by convention, to match RDF's CURIE syntax.
I should note also that if a data syntax were required, it would be pretty trivial to provide that in addition to this, by essentially lifting the spec parsers into shared functions, removing the `&env` processing from that side of the interface, and, I guess, finding a data syntax for the pre- and postfix unary operators.
Anyway things it shows are:
1. How you can use spec to parse macro syntax which provides an infix notation almost identical to actual property paths.
2. How macros can provide compile time checking of DSL syntax
3. How a macro can be just a thin layer over a function/prefix API
4. How a macro can implement special syntax rules; for example the symbol `foaf:name` is usually just a var binding a URI; but how within the scope of the macro you can use the symbols `foaf:friend+` to represent the path of one or more “friends of”; or `-foaf:friend` to represent the inverted path, or `!foaf:friend` to represent every predicate except `foaf:friend`.
5. Finally, how you can do some pretty neat compiler magic in macros. For example, regarding point 4 you might say: “but what if a symbol in my scope happened to be prefixed or suffixed with a `+`, `-` or `!`?” Well, the answer is that the macro checks the scope for bindings that might be ambiguous, and raises a compile-time error if they are (which lets you rename or rebind the symbol locally to a non-colliding name).
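A toy illustration of points 1 and 2, separate from the library being discussed (the `greet` macro, its spec, and its syntax are all invented): spec parses the macro's argument "syntax", and invalid usage fails when the macro expands, i.e. at compile time, rather than at runtime.

```clojure
(require '[clojure.spec.alpha :as s])

;; The macro's "grammar": a symbol followed by a positive count.
(s/def ::greet-args (s/cat :name symbol? :times pos-int?))

(defmacro greet [& args]
  (let [parsed (s/conform ::greet-args args)]
    (when (s/invalid? parsed)
      ;; Thrown during macroexpansion => a compile-time error for the caller.
      (throw (ex-info "Invalid greet syntax"
                      (s/explain-data ::greet-args args))))
    ;; The conformed parse tree drives code generation.
    `(repeat ~(:times parsed) (str "Hello, " ~(name (:name parsed))))))

(greet alice 2)
;; => ("Hello, alice" "Hello, alice")
```

`(greet "alice" 2)` would fail to compile with a spec explanation, which is the same mechanism that lets a DSL macro report syntax errors before any code runs.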
The tests port the examples from the W3C property path spec into the DSL format as an example:
The W3C spec is here: https://www.w3.org/TR/sparql11-query/#propertypaths