
(for [x ['a]
      y ['b 'c 'd]
      z ['a 'e]]
  [x y z])
=> ([a b a] [a b e] [a c a] [a c e] [a d a] [a d e])


might be something more clever but that gets the job done


yeah I just want a version of that which works for N of those sequences


best I could come up with was a recursive mess


oh gotcha


and it didn't even work


If anyone has a version that isn't a macro that expands into literal nested for loops


please share
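[editor's note: for reference, here's one macro-free N-ary sketch (not from the thread): fold each input sequence into the set of partial tuples built so far, starting from a single empty tuple.]

```clojure
;; Fold each input sequence into the partial tuples built so far;
;; start from one empty tuple.
(defn cartesian [& seqs]
  (reduce (fn [tuples s]
            (for [t tuples, x s]
              (conj t x)))
          [[]]
          seqs))

(apply cartesian '[[a] [b c d] [a e]])
;; => ([a b a] [a b e] [a c a] [a c e] [a d a] [a d e])
```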


if you still are open to messy recursion,

(defn- combo-internal
  [prefix matches combos]
  (let [[c & cs] combos]
    (if (seq c)
      (into matches
            (reduce (fn [acc x]
                      (into acc (combo-internal (conj prefix x) matches cs)))
                    []
                    c))
      (conj matches prefix))))
seems to maybe do it


(combo-internal [] [] '[[a] [b d]])
=> [[a b] [a d]]
(combo-internal [] [] '[[a]])
=> [[a]]
(combo-internal [] [] '[[a] [z]])
=> [[a z]]
(combo-internal [] [] '[[a] [z t ]])
=> [[a z] [a t]]
(combo-internal [] [] '[[a] [z t ] [m] [p q o]])
=> [[a z m p] [a z m q] [a z m o] [a t m p] [a t m q] [a t m o]]
(combo-internal [] [] '[[a] [b c d] [a e]])
=> [[a b a] [a b e] [a c a] [a c e] [a d a] [a d e]]


works perfectly, but I'm gonna stick with the combinatorics solution, mainly because that's just less code in the codebase, which is always a pro


oh i didnt see that


good luck with that then!


Don’t hate yourself.


; all the ways to take one item from each passed-in sequence
=> (combo/cartesian-product [1 2] [3 4])
((1 3) (1 4) (2 3) (2 4))


You’re good enough, you’re smart enough, and gosh darn it, people like you.


does anyone have some good resources for addressing huge files that don’t fit into memory? I’m getting caught up in the details of eager vs lazy, reducers realizing sequences, streaming output, core/reducers vs transducers, etc. and I feel like I need a higher-level explanation from some blog post.


-Xmx32G? Jk


Might be the time to google general java solutions


TBH when I hit that question I usually start to question whether I need to deal with the problem or rethink my mental model of the data involved


Sometimes it’s actually a requirement, but other times it is a question of what part of the data model is service-izable, what can be reliably cached, whether my program needs to calculate it all at once or if it could be broken into steps, etc


i’m starting to lean your way @devn with respect to reslicing the data somehow. if I can’t fit something on my 16gb workstation maybe it’s not worth the effort to find a streaming solution. but it does get me thinking about larger streaming solutions.


I suppose that’s why all these other data processing products / solutions exist… keep the complexity somewhere else and have user code consist of small-bite consumers / producers, etc. and not have to worry about it.


a good rule of thumb is not to abstract over side effecting things with lazy sequences


they are a great functional abstraction, but not a good abstraction for things that have state
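[editor's note: a minimal sketch of the eager alternative for large files — names are illustrative. The point is to keep the reader's lifetime explicit and consume it fully inside `with-open`, rather than returning a lazy seq that may outlive the handle.]

```clojure
(require '[clojure.java.io :as io]
         '[clojure.string :as str])

;; Eager, bounded-memory line processing: transduce consumes the
;; line-seq completely before with-open closes the reader.
(defn count-matching-lines [path needle]
  (with-open [rdr (io/reader path)]
    (transduce (filter #(str/includes? % needle))
               (completing (fn [n _] (inc n)))
               0
               (line-seq rdr))))
```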


Many of the products I see are to skip the step of analyzing your model.


That’s not meant to be finger pointy btw


But I think that a lot of products in the data space exist to enable patterns of avoidance rather than solve the real problem




when using multimethods do y'all keep one multimethod and all its implementations in one file, or do you keep all implementations of various multimethods on one kind of thing in its own file?


Ideally, multi-methods are an extension mechanism that other users extend in their own files.


For instance, adding a new data type for an SQL query system.


But I tend to use them for dispatch (say, web handlers, or event handlers in a web app).


So, I keep them all in one file.
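[editor's note: a tiny sketch of the extension pattern described above — `to-sql` and the dispatch values are made up, echoing the SQL example.]

```clojure
;; Dispatch defined once; each data type adds its own method,
;; possibly from another namespace.
(defmulti to-sql :type)

(defmethod to-sql :select [{:keys [table]}]
  (str "SELECT * FROM " (name table)))

;; a consumer extends the system without touching the original file:
(defmethod to-sql :delete [{:keys [table]}]
  (str "DELETE FROM " (name table)))

(to-sql {:type :select :table :users})
;; => "SELECT * FROM users"
```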


Does anyone know how to declare a static function on a proxy?


Is it possible? I thought they are like anonymous inner classes. They can't have static methods.


yeah "static" has some connotation in bytecode thats a bit hard to explain from top to bottom


basically the answer is no


they can not have static methods
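[editor's note: if a static method is truly required, `gen-class` (unlike `proxy`) can declare one via `:static` metadata on a `:methods` entry — a sketch with illustrative names; requires AOT compilation.]

```clojure
(ns example.statics
  (:gen-class
   :name example.Statics
   ;; ^{:static true} makes this a static method on the generated class
   :methods [^{:static true} [greet [String] String]]))

;; static methods map to plain prefixed fns, same as instance ones:
(defn -greet [s]
  (str "hello, " s))
```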


may I recommend O'Reilly's upcoming hit new book


"Java: DOOD"


"Depression Oriented Object Dungeon"


hmm… I’ve been doing the google-dance but can’t find it… isn’t there a Leiningen plugin for inspecting code, detecting common anti-patterns, and recommending best practices?


Anyone done much with XML namespaces in Clojure? We're using and have got some entities with xmlns:p="XXX" and others with xmlns="XXX". When we put them all into the same structure and output, we get the same namespace redefined over and over. We want to have all of them show with the namespace using prefix p


Ah… Kibit.


Wanted to share this gist that is a collection of thoughts to help one decide between separating a clojure server/api codebase from a clojurescript client/ui or keeping them both together.


Any thoughts?


@leontalbot we've had a nice time with a single repo with multiple modules (managed with lein-modules)


@mccraigmccraig Ok, cool. Why did you choose to do it that way?


it was the final leg of a journey... we started with a single repo, but wanted to enforce some module separation, so went straight to multiple repos, but even with lein-voom the overhead was painful so we pulled back to a single repo with multiple modules and have been happy ever since


Really nice story, thanks!


What was the pain about?


I'm considering writing a small library which would be a thin layer of clojure on top of an existing java library. Ideally I would like to be able to programmatically scan the java lib for classes of a specific type and create clojure functions for those, a 1-to-1 mapping. What would be the best way to go about this? Macros? build time generation of clojure code? Some kind of runtime reflection-fu combined with macros? Currently I'm leaning towards build time generation as that would give a user of the lib the best support as things like (doc api-fun) and ide support would work as expected, but my feel for the landscape is not solid enough to make an informed choice at this point.
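[editor's note: whichever generation strategy is chosen, `clojure.reflect` is the usual starting point for scanning a Java class at runtime — a sketch, using `java.lang.Math` as a stand-in for the real library.]

```clojure
(require '[clojure.reflect :as reflect])

;; Enumerate the public method names of a class; each member map also
;; carries :parameter-types and :return-type, enough to drive codegen.
(->> (reflect/reflect java.lang.Math)
     :members
     (filter :return-type)                 ; methods only (fields/ctors lack it)
     (filter #(contains? (:flags %) :public))
     (map :name)
     distinct
     sort)
```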


@leontalbot lots of tedious manual steps, leading to contemplating building higher-level build tools, which was only going to take time away from delivering value to clients


There are lots of projects that do this, unfortunately it's a black box if anything goes wrong. If those classes are really trivial, like perhaps boilerplated JSON builders, I would suggest just not using them.


@gtrak duly noted. I guess with generated code the situation is not quite as bad as a user of the lib could still look at the code and see what's going on. Assuming I would still like to pursue this, any pointers to some example project where this is done well?


I don't know any good ones off-hand, just remember having been frustrated by a few :-)


@leontalbot if you are wanting to try lein-modules, this snapshot has a patch which makes version management nicer - [lein-modules-bpk/lein-modules "0.3.13.bpk-20160816.002513-1"] - you can use "_" in your submodule project.clj versions (to inherit the version from the top-level project.clj)


@mccraigmccraig Thanks! Will take a look at it!


Updated the gist to add your nice story 🙂


I am trying to call some clojure code as a sbt task. My build.sbt looks like,

lazy val aTask = taskKey[Unit]("a task")

libraryDependencies ++= Seq(
  "org.clojure" % "clojure" % "1.9.0"
)

import clojure.java.api.Clojure
import clojure.lang.IFn

aTask := {
  val plus: IFn = Clojure.`var`("clojure.core", "+")
  println(plus.invoke(1, 4))
}
Also I have added clojure dep in project/build.sbt of my project. I am getting the following error
[error] java.lang.ExceptionInInitializerError
[error] at clojure.lang.Namespace.<init>(
[error] at clojure.lang.Namespace.findOrCreate(
[error] at clojure.lang.Var.intern(
[error] at
[error] at<clinit>(
[error] at $2d5a9b65ddee7e6a09cc$.$anonfun$$sbtdef$1(build.sbt:20)
[error] at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$
[error] at sbt.std.Transform$$anon$3.$anonfun$apply$2(System.scala:46)
[error] at sbt.std.Transform$$anon$
[error] at sbt.Execute.$anonfun$submit$2(Execute.scala:262)
[error] at sbt.internal.util.ErrorHandling$.wideConvert(ErrorHandling.scala:16)
[error] at
[error] at sbt.Execute.$anonfun$submit$1(Execute.scala:262)
[error] at sbt.ConcurrentRestrictions$$anon$4.$anonfun$submitValid$1(ConcurrentRestrictions.scala:174)
[error] at sbt.CompletionService$$anon$
[error] at
[error] at java.util.concurrent.Executors$
[error] at
[error] at java.util.concurrent.ThreadPoolExecutor.runWorker(
[error] at java.util.concurrent.ThreadPoolExecutor$
[error] at
[error] Caused by: Could not locate clojure/core__init.class or clojure/core.clj on classpath.
[error] at clojure.lang.RT.load(
[error] at clojure.lang.RT.load(
[error] at clojure.lang.RT.doInit(
[error] at clojure.lang.RT.<clinit>(
has anyone tried this? Any pointers on what I could try would be helpful.


hey everyone. I’m creating a little rock, paper, scissors app that is purely command - line based. until now, I’ve been creating web apps (clojure on the backend and clojurescript on frontend). And I’ve been using re-frame / reagent and all that beautiful stuff. so now that cljs and re-frame is out of frame, I have to deal with some sort of state - saving user’s score and computer’s score somewhere and figuring out who wins at the end of the game 😄 so I was thinking to create this local state using good old atom. and I was wondering if there are any other suggestions or better alternatives these days?


> Most important, you’ll learn how to model the changes that result from each move a player makes without having to mutate objects like you would in OOP.


@U6N4HSMFW thanks for the pointer!


or should I perhaps create channels and update the atom through there instead


Channels add another indirection. If it is a simple application I'd guess that an atom would be fine.
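[editor's note: a minimal sketch of the single-atom approach for the score-keeping case described above — names are made up.]

```clojure
;; One atom holding all game state, updated with pure functions via swap!.
(def game (atom {:player 0 :computer 0}))

(defn score! [winner]
  (swap! game update winner inc))

(score! :player)
@game
;; => {:player 1, :computer 0}
```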


@shakdwipeea you might want to clarify a bit what you're trying to do. I've used Scala, so I know what sbt is, but others may not. I have used a Clojure REPL in an Akka service, but that was awhile ago. I believe I followed @borkdude's gist


Ah! Yeah, I read your post then went from there. Very useful and fun. Solved a problem at the time 🙂


I have tried to solve it, but it’s too long ago that I used Scala


and SBT for that matter


oh, probably @shakdwipeea is the one asking that question 😄


yes, I have run out of things to try. looking for any inspiration 😅.


It looks like it’s something sbt specific. I tried adding clojure to the dependencies in ./project/build.sbt


but that didn’t do it either. I think they belong there anyway if you want to use them in a task


I tried that, with the same results. I had more doubts. If you view the exceptions: [error] Caused by: Could not locate clojure/core__init.class or clojure/core.clj on classpath. What seems puzzling to me is the line Clojure.`var`("clojure.core", "+"). It is not able to find clojure.core, but in the stacktrace I see multiple references to clojure.lang.* and*. How is it that it can't find the clojure.core classes but is able to refer to them in the stacktrace, when both are in the same lib?

clojure.lang.* is Java and clojure.core is a clojure file


Okay, in the source for core.clj I don't see the :gen-class attr. Is the loading of the core.clj namespace handled by the Java API?


Also from doc > Functions in clojure.core are automatically loaded. Other namespaces can be loaded via require:


Right. But it has trouble finding this file… don’t know why


I am able to run the same clojure code from normal scala build files and the sbt shell. The problem occurs when I try to use that in an sbt task. To clarify, sbt is a build tool which is generally used for scala projects. build.sbt is the build definition for the scala project. From the sbt documentation, the build.sbt file is itself built using the build config in the project directory. So I've added the clojure deps in both places. When I run the task, I get the above error. the task being this block of build.sbt (posted above)

aTask := {
  val plus: IFn = Clojure.`var`("clojure.core", "+")
  println(plus.invoke(1, 4))
}


is there a specific channel for asking questions or is here okay?


@leontalbot @mccraigmccraig you may be interested in looking at for monorepo builds


That is really interesting. So why this instead of say lein modules?


from a quick look lein-monolith seems pretty similar to lein-modules, but perhaps more maintained. that said, we've been using lein-modules for the last 18 months and haven't encountered any bugs


Ok, cool. Thanks. What do you think @greg316?


doc looks good @leontalbot - for your open questions i can tell you what we do


CI - we run a docker composition with our database (on codefresh) and run a bash script which calls lein modules test and some boot cljs-test


deployment - a couple of modules build uberjars and docker images which get deployed to dc/os


build-artefacts->source version - we tag docker images with the unirepo version


speed it up ? dunno about that one


@leontalbot happy to answer any specific questions you have about lein-monolith - we’ve been using it to power our builds and local development for the last year and a half, and it’s mostly stabilized. There’s some more areas which could be improved, but it serves our monorepo needs well.


@mccraigmccraig really nice, thanks for sharing!


@greg316 Thanks for answering. Was wondering if lein-monolith is a good choice for front/back code use-case. It seems helpful for managing dependency versions (when there is a couple of cross dependencies of our own projects) I hear it is easy to make a change to a lib then run the tests for the entire project and see if the lib changes broke something in a consuming service. But can a team leverage that workflow when we are talking about a frontend client that talks to a backend with http or websockets?


We build our UI as a clojurescript project inside the same monorepo as the API service and all the other backend services. Some of the libraries are cross-compiled .cljc files, which lets us share logic (and specs!) between the front and back ends. They talk to each other with transit+json, so it is also nice to have one ‘data-types’ library that specifies reader/writer extensions for both sides.


our project is pretty much like @greg316 ‘s @leontalbot , with shared .cljc schema, config system and async behaviours


There is some reference/guide about that "tap" thing?


"remove f from the tap set the tap set."


the tap set the tap set
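[editor's note: since the docstring quote above got mangled, a minimal sketch of the tap API: `tap>` broadcasts a value (asynchronously, on a dedicated thread) to every registered tap fn; `add-tap` registers one and `remove-tap` removes the same fn from the tap set.]

```clojure
(def seen (atom []))
(def my-tap #(swap! seen conj %))

(add-tap my-tap)
(tap> {:event :demo})   ; delivered on the tap thread, not immediately
(remove-tap my-tap)
```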


is there a particular channel here for java interop questions?


this one would be it, it's a core part of the language


This gives me "can't define method not in interfaces"


    (^void uncaughtException [^Thread t ^Throwable e]
      (println e)))


(same without type hints)


Am I missing something?


(like 10 minutes of WTF here)


now that you've found it, you don't need the hints


[oops, that’s what i get for reading code instead of documentation 😉 ]
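[editor's note: a working sketch of what the snippet above was presumably after — the key is that the interface has to appear in proxy's class vector; handler wiring here is illustrative.]

```clojure
(import 'java.lang.Thread$UncaughtExceptionHandler)

;; proxy can only implement methods declared on the listed classes/interfaces.
(Thread/setDefaultUncaughtExceptionHandler
 (proxy [Thread$UncaughtExceptionHandler] []
   (uncaughtException [t e]
     (println "uncaught exception on" (.getName ^Thread t) "-" e))))
```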


doesn’t anyone have a ‘monorepo’ for clojure? which build tool did you use?


our source code is growing and was considering this design pattern instead of lots of separate interdependent git repos + lein projects , although now with the new feature to have git hash dependencies, this solves some of the reasons for mono repo


@arrdem is looking at monorepos ^^^


/me appears in a puff of smoke


@jasonjckn so the four good stories I’m aware of around monorepos are 1) don’t, use tools.deps with git. I haven’t evaluated that yet. 2) a bunch of lein projects in a repo and checkouts which is really not that much better than the multirepo thing you’re currently describing. 3) Amperity has a tool lein-monolith linked in the scrollback which they use to try and share some configuration across projects in a single repo. 4) I’m developing somewhat actively and it’s what my team is using, but it doesn’t give you full Blaze/Bazel filesets yet.


(disclaimer that lein-modules is one of my many wildcat projects at work and is not an official product or project of Funding Circle)


There’s some other work around re trying to get Facebook’s Buck to run Clojure, I think it’s called Loony but it’s not super mature or shake & bake.


There isn’t a Bazel or Pants plugin which “just” adds Clojure support. It’s probably doable for Bazel, but I don’t know of one. It’s VERY difficult in pants because their JVM codegen system is designed around interfacing efficiently with the very complex Scala incremental compilation ecosystem. I spent a bunch of time looking at that while I was a Twitter employee.


I’ve wanted to do a blog post on all this for a bit.


If Amperity’s tool gives you fileset-like behavior for inter-submodule deps I’d STRONGLY encourage going with it.


@arrdem was also a twitter employee and yah I always wondered about how hard that would be 🙂


I haven’t had the time to audit it.


thanks a lot for the summary, I’m feeling tools.dep may be the way to go for us at least in the near term, will keep an eye on lein-modules, looks solid


It’s theoretically possible to write Kotlin and Clojure plug-ins for pants, and Pants/Blaze’s concept of tasks is more suitable eg for a repl target than it seems like the Buck/Loony tooling I’ve seen is. Trouble is you have to go dig through the practically undocumented JVM compile pipeline and you have to have strict phase separation between Clojure/Kotlin’s AOT dependencies and the Java/Scala co-compile behemoth.


Because really all you’re doing is producing a classpath…. but how to do that was never obvious after about a week of banging my head on the codebase.


yah, plus that’s just the MVP of a build tool for clojure, producing the classpath


@jasonjckn We have a monorepo at World Singles with about two dozen subprojects. We use Boot to manage it. Each subproject has a deps.edn file (pre-tools.deps) and we have a Boot task that uses tools.namespace to walk code to find the subproject dependencies dynamically so it automatically chains together the right dependencies and paths.


that’s very cool


Right. lein-modules has been OK for us so far, and @jcrossley3 has been really responsive to / receptive of new work but it’s still not quite the semantics I want.


Oh cool! That’s really close to what I want lein-modules to do @seancorfield


We're probably going to switch over to the new deps.edn file setup and streamline how we do subprojects at some vague future date.


If we identify the list of deps.edn files (rather than loading each one as we go), then we can switch to boot-tools-deps for the actual dependency loading stuff.


@seancorfield “Boot task that uses tools.namespace to walk code to find the subproject dependencies dynamically” is this code anywhere open source?


Unfortunately no. It's got a lot of business rules baked into it about what namespaces we allow to reference what projects (to enforce API boundaries in some places).


Longer term I suspect the boot approach fits better with a top-level monorepo than leiningen, but lein-monolith solves our (Amperity’s) problems well enough that we haven’t needed to switch - plus most of us know lein and not boot. YMMV


Disclaimer: I wrote lein-monolith so it gets my vote 😅


the thing with boot is it is so open ended it is hard to write something like that for it


the code we have at worldsingles depends on a lot of conventions


which isn't terrible, but it makes it hard to generalize


not sure exactly what you mean by filesets, but you can do lein-monolith with-all :upstream-of foo-service <task> and it’ll run with all of the sources, tests, resources, etc. from the dependency closure


@greg316 I’d love to sync with you on lein-monolith at some point


also your new serialization library looks awesome


this got me curious, can you link said library? 🙂


Now I too am curious which lib @arrdem meant - serialization could mean clj-multistream, clj-cbor, or maybe merkle-db since that’s the most recent thing I announced. :thinking_face:


I think it was clj-cbor that I saw


although merkle-db is relevant to some of my other interests….


Nice - just switched from storing some stuff as EDN to CBOR and it is pretty great. More compact and much faster to serialize.


yeah I’ve been building a datalog with serialization - It totally works and looking at efficient incremental deserialization is high on my list for it.


right now because it’s early days both my storage implementations just do a good ’ol (spit (pr-str ...)) and (edn/read (io/reader (io/file ...))) dance.


Hmm - I’ve been thinking of ways to build a tuple-store on top of merkle-db as a logical next step! Will have to read through shelving.


there are also lein tools that allow you to take what looks like a single lein code base (one src dir) and slice it up in to different artifacts


the last one I saw was someone's personal project though, so who knows


Wasn’t that technomancy’s slamhound?


slamhound would try to figure out what your ns form was for you


slamhound rewrites your namespaces to only use the necessary require/import statements by taking them out until nothing is left that doesn’t break the compile


yeah, looks like there’s a fair amount of overlap between lein-monolith and lein-modules - almost as if we felt the same underlying pain point 😁


yeah I’ve got some incremental build / test sauce I need to finish off because that’s a major pain point for us.


lein-monolith still uses lein’s core Maven dependency resolution for other submodules so it doesn’t give you fileset semantics - there are still artifacts in play and that’s been another pain point.


:thinking_face: that’s the main thing that monolith doesn’t do yet - try to detect when it actually needs to rebuild things. I’ve got an idea for using sentinels, just haven’t gotten around to implementing it yet.


I’ve got some code in a PR to the FC lein-modules that starts doing that by using to introspect the git state.


It’s just a little git diff --name-status wrapper and some other machinery I pulled out of lein-git-version


hmm, what about the case where I’m on commit 123abc and do some development, then switch to commit 456def (also clean git state) - I would expect the build tool to notice the change and rebuild stuff


our incremental pipeline only gets run in CI because people use repl / manual test runs locally.


a pretty common workflow we run into is just that:

$ git checkout my-feature-branch
$ cd service/foo/foo-service
$ lein monolith each :upstream :parallel 6 do clean, install


but yeah that’s not a case it handles well yet


Right I want to make that do clean, install step go away forever


we’ve had a bunch of dev time wasted by stale artifacts in ~/.m2


I have a very blurry dream involving mvxcvi/blocks and content-addressed artifact coordinates that will somehow solve all of this


but alas, not enough time


I know that feeling well 😛