Jorge Tovar00:04:00

Hello guys, I'm currently learning Clojure (it's amazing). I have two questions... 1) How do I handle exceptions and errors properly? 2) Do you have some resources about data-driven-systems best practices? I'm not too involved with this mindset


As far as I know there is no widely accepted correct way of dealing with exceptions. Most people just use plain try/catch with (ex-info "msg" {:data :data}) exceptions. There are a handful of libraries that provide monadic error handling: containers that indicate whether they hold a successful result or an error. It's also common for people to create macros that implement some generic error handling / logging according to their preferences in their application. Here are a few things that come to mind:
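
A minimal sketch of the plain try/catch + ex-info style mentioned above (the parse-age function and its data keys are made up for illustration):

```clojure
;; Throw ex-info with a message and a data map; callers catch
;; ExceptionInfo and inspect the data with ex-data.
(defn parse-age [s]
  (try
    (let [n (Long/parseLong s)]
      (when (neg? n)
        (throw (ex-info "age must be non-negative" {:input s :value n})))
      n)
    (catch NumberFormatException e
      ;; wrap the low-level exception, preserving it as the cause
      (throw (ex-info "not a number" {:input s} e)))))

(try
  (parse-age "abc")
  (catch clojure.lang.ExceptionInfo e
    [(ex-message e) (ex-data e)]))
;; => ["not a number" {:input "abc"}]
```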


What’s the best “real world” book for Clojure(script)?

eccentric J03:04:15

Is there a good way to get like an ordered set from a string? Attempting (set "ahilrty") results in #{\a \h \i \l \r \t \y}.


you could look at the code for distinct and make a version that gives you both an in-order vector, and a set for testing membership


it would mean lifting the result list / set into a tuple I think
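
A sketch of that idea: reduce into a tuple of [in-order vector, membership set] (the helper name ordered-distinct is made up):

```clojure
;; Returns [order seen]: a vector preserving first-occurrence order,
;; and a set for fast membership tests.
(defn ordered-distinct [coll]
  (reduce (fn [[order seen :as acc] x]
            (if (seen x)
              acc
              [(conj order x) (conj seen x)]))
          [[] #{}]
          coll))

(ordered-distinct "ahilrty")
;; => [[\a \h \i \l \r \t \y] #{\a \h \i \l \r \t \y}]
```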

eccentric J18:04:23

Interesting! I'll take a look. It turned out that clojure.string/includes? really spoke to the nature of the problem I'm solving + performed significantly faster than using some.


(apply sorted-set "ahilrty")


That's sorted, not ordered.


@jayzawrotny Do you really want a set? They inherently have no order. What about just (distinct "some string")?


That preserves order and just removes the duplicates.

eccentric J04:04:03

The greater context is a small project to filter a list of words by whether they're made up of a specific subset of characters. I chose a set because it would both eliminate dupes and can be used with contains?

eccentric J04:04:49

Though thinking about it, I can probably just use the original string and clojure.string/includes?


user=> (apply sorted-set "some string")
#{\space \e \g \i \m \n \o \r \s \t}
user=> (distinct "some string")
(\s \o \m \e \space \t \r \i \n \g)
user=> (set "some string")
#{\space \e \g \i \m \n \o \r \s \t}

eccentric J04:04:58

Not quite, order is important as I assume the first char is required whereas the following letters are allowed but not required.

eccentric J04:04:27

vec may retain the order, but not work for contains?. I think clojure.string/includes? is the right tool for the job


with vectors you can use something like some or many others but sure


You can use some with any sequence, not just vectors.
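
For instance, a set works as the predicate across strings and vectors alike:

```clojure
;; some walks any seqable and returns the first truthy result;
;; a set used as a function tests membership.
(some #{\a} "banana")    ; => \a
(some #{\z} "banana")    ; => nil
(some #{\a} [\x \a \y])  ; => \a
```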

eccentric J04:04:16

some works but seems to take about 1.5x the time

eccentric J04:04:13

For some reason I was expecting it to be faster


With Clojure CLI, is it possible to create aliases that include other aliases? Looks like no, but I might be missing something obvious
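
As far as I know, deps.edn aliases can't reference other aliases declaratively, but multiple aliases can be stacked in a single invocation, which often covers the composition use case (alias names below are made up):

```clojure
;; deps.edn
{:aliases
 {:dev  {:extra-paths ["dev"]}
  :test {:extra-paths ["test"]}}}
```

Running `clojure -A:dev:test ...` applies both aliases, merging their effects.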


could someone recommend me a (maintained) library / framework for async jobs? looking for something like ruby's sidekiq, backed by persistent storage


I've been slowly chipping away at building a library for this, backed by next.jdbc and select for update skip locked- not used in production but might be useful


I haven’t used it, but there appears to be a Faktory lib:


there's probably a java one for postgresql


might have better luck searching for that


@vale i did some light googling to see if i could find something


and this suggestion makes me chuckle


Have you considered using JRuby with Sidekiq?


because honestly, why not


you could probably do it


clojure -> java interop -> jruby runtime -> sidekiq ---| ^------------------------------------------------------------


looking in to how to make this work


shame both annotations exist and clojure interop there is a big shrug


would you mind elaborating on what you mean by this?


Well, Spring libraries are some of the more well-maintained and feature-ful Java libraries. The way they are documented for usage is through annotations on classes, so even if it's possible to wrangle classes the right way dynamically, it's not always clear how, or, I'm willing to bet, even possible


and clojure only supports annotations inside of gen-class


which is both non-interactive and obtuse


and requires special build time steps in order to use correctly


let's say I wanted to use this in clojure


whole java library, hooks me up to service discovery stuff


Maybe I'm wrong, but it definitely feels like clojure's java interop falls short


thanks for the detailed clarification!


the only example i had seen before was inside gen-class - didn't realize there was one more type of place it could be done


Excuse me for being lazy. Do reader tags attach to metadata forms or to the form that the metadata is attached to? I need it for structural editing behaviour in Calva. I don't know of a quick way to test it, and I have only a short slot of time for trying to fix it. So what does the reader do with something like #foo ^:bar :baz?


i just tested it, and iiuc, ^:bar is metadata for :baz -- though this wouldn't work because :baz is a keyword which cannot have metadata. but assuming you had chosen a vector instead of :baz, the vector would have the metadata attached. the function which would handle the foo tag (set via *data-readers* typically), would be given the vector with the metadata attached.


something like the following session is what i did:

user=> (defn my-tag-fn [x] (binding [*print-meta* true] (prn x) [1 2]))
user=> (set! *data-readers* {'my-tag my-tag-fn})
{my-tag #object[user$my_tag_fn 0x256f8274 "user$my_tag_fn@256f8274"]}
user=> #my-tag ^:a []
^{:a true} []
^{:a true} []
[1 2]


so you can see that the vector [] had the metadata attached to it and it was outputted by (prn x) within my-tag-fn


Thanks! And also thanks for sharing how you tested it. Keeping that!


np, discussing discard forms with you helped to motivate me to understand this stuff better, so ty for that 😉


@lockdown- @emccue thanks for the ideas!


I have opencv interop code in which I'm trying to remove noise from an image like so:

(defn ->vector [mat]
  (let [width (.width mat)
        height (.height mat)]
    (for [i (range width)]
      (for [j (range height)]
        (vec (.get mat i j))))))

(defn element-wise-and
  [a b]
  (if (and (number? a) (number? b))
    (double (* a b))
    (into [] (map element-wise-and a b))))

(defn zero-the-vector [a b]
  (if (and (number? a) (number? b))
    (if (zero? b) 0 a)
    (into [] (map zero-the-vector a b))))

(defn element-wise-inverse [a]
  (if (double? a)
    (if (zero? a) 1.0 0.0)
    (into [] (map element-wise-inverse a))))

(defn filter-salt-pepper-noise [edge-img- size]
  (let [last-median (->vector edge-img-)
        median (Mat. size 5)]
    (loop [count 0 edge-img (->vector edge-img-) last-median (->vector edge-img-)]
        (cv/median-blur edge-img- median 3)
        (if (> count 1)
           (inc count)

            (element-wise-and last-median edge-img)
            (zero-the-vector edge-img))

           (->vector median)
But when I do (filter-salt-pepper-noise edges size) where edges is a Mat and size is an opencv Size, the function is just stuck there, and doesn't seem to terminate. How can I check why this function seems to stall?


You might see more responses if you're able to reduce your example to something smaller. Inspiration:


I'm trying to create a new List<SomeObject>() in clojure, but don't know how to. When I do (List. SomeObject), I get No matching ctor found for interface java.util.List


List is a Java interface. So to create a list object, use (java.util.ArrayList.) or (java.util.LinkedList.). This will be a list of Object. You don't need to (and can't) specify the type of object.


List is an interface, so objects can't be created from it directly. Need to use an ArrayList or something?


java.util.List is an Interface, and can't be instantiated directly. You could use: (ArrayList.) or (ArrayList. some-vec)

👆 4

In addition to the above comments, I don't think in Java it is possible to pass a "type" to a List constructor like that.


Assuming SomeObject is a type.


pre-`var` java lets you do:

List<SomeObject> lst = new ArrayList<>();


which might cause this confusion


In JVM byte code, List<SomeObject> and List<Object> are identical, so Clojure does not bother providing a way to make you believe you are doing List<SomeObject>, since the produced byte code wouldn't be doing that anyway.

☝️ 4

Java collections are not typed, only the compiler does some checks for you.


And Java collections don't support primitives for the same reason. C++ collections are truly typed.


So, @U010Z4Y1J4Q, when writing any Java code which would accept this list (if that's what you're doing), your method sig might look something like processClojureList(List<Object> lst)


And inside the method, you'd cast it back to List<SomeObject>
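
Putting the thread together, a minimal interop sketch: since generics are erased, a plain ArrayList can be passed anywhere a List&lt;SomeObject&gt; is expected.

```clojure
(import 'java.util.ArrayList)

;; like new ArrayList<>() in Java
(def lst (ArrayList.))
(.add lst "a")
(.add lst "b")

;; the ArrayList(Collection) ctor accepts any Clojure collection
(def lst2 (ArrayList. [1 2 3]))

[(vec lst) (vec lst2)]
;; => [["a" "b"] [1 2 3]]
```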


When I do the following:

(defn find-significant-contours [edge-img size]
  (def contours '())
  (def hierarchy (Mat. size cv/CV_8UC1))
  (def edge-img-8u (Mat. size cv/CV_8UC1))
  (cv/find-contours edge-img-8u contours hierarchy cv/RETR_TREE cv/CHAIN_APPROX_SIMPLE))


I get the error (UnsupportedOperationException) at org.opencv.utils.Converters/Mat_to_vector_vector_Point (


find-contours looks like:

public static void findContours​(Mat image, java.util.List<MatOfPoint> contours, Mat hierarchy, int mode, int method)


can't I simply use '() for the java.util.List<MatOfPoint>?


Like instead of passing in List<MatOfPoint>, I think passing in '() is causing this error.


just use (java.util.ArrayList.)


Looks like '() does implement Java's List.


generics are a compile-time trick, in reality List and List<MatOfPoint> are the same after java compile is done with it


the reason why you’re getting an exception with '() is because the clojure list doesn’t allow mutation operations like .add()


so you need a mutable list like ArrayList

👍 4

Yeah, Clojure lists only implement the read-only parts.
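
A quick sketch of what that means in practice:

```clojure
;; '() really is a java.util.List...
(instance? java.util.List '())
;; => true

;; ...but the mutating half of the interface throws
(try
  (.add '() 1)
  (catch UnsupportedOperationException _ :unsupported))
;; => :unsupported

;; a mutable ArrayList accepts .add, so Java APIs can fill it in
(let [l (java.util.ArrayList.)]
  (.add l 1)
  (vec l))
;; => [1]
```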


Often when processing requests, there are some values that are in the context of the request, like the user making the request, or the billing account this falls under, and this can reach 6+ properties. Then when pushing that data down to lower layers of the app, these properties absolutely infest the parameter lists of all the functions, making them bigger. I am sorely tempted to create *request-context* and do binding, but at the same time that seems kinda wrong because the functions become dependent on things other than their parameters… really don’t know which way to decide


haven't found a good solution to that myself, at least in the context of HTTP requests


if there's a lot of parameters to send, I tend to just put them in a map and send them as one param, only destructuring when I need them, but there's still a function somewhere that goes {:keys [param1 param2 param3 param4 param5 ...]} often enough
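
A sketch of that single-context-map style (all names here are hypothetical):

```clojure
;; Each layer destructures only what it needs; extra keys such as
;; :request-id simply ride along in the map, untouched.
(defn charge! [{:keys [user billing-account amount]}]
  {:charged amount, :account billing-account, :by (:id user)})

(charge! {:user {:id 7}
          :billing-account "acct-1"
          :amount 100
          :request-id "r-42"})
;; => {:charged 100, :account "acct-1", :by 7}
```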


I posted an explanation of the main API and the design decisions in #lambdaisland, please do pitch in there


hm. what's a good way to check if a string starts with a number?


i suppose this is best done with Character.isDigit anyway, not really a Clojure question

Alex Miller (Clojure team)14:04:24

(Character/isDigit (first s))


yep. I came up with (Character/isDigit (.charAt number-str 0)) but yours is better 🙂
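
One caveat: both forms throw on an empty string (and nil). A slightly safer variant, as a sketch:

```clojure
;; some-> short-circuits on nil, so empty strings and nil
;; fall through to false instead of throwing.
(defn starts-with-digit? [s]
  (boolean (some-> s first (Character/isDigit))))

(map starts-with-digit? ["9lives" "abc" "" nil])
;; => (true false false false)
```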


Thoughts between Elastisch and Spandex? I’m new to Elasticsearch so have no context for evaluating.


If you’re already using Postgres, you may find this interesting: “ZomboDB brings powerful text-search and analytics features to Postgres by using Elasticsearch as an index type.”


We are migrating away from elastisch to spandex - mostly because a) the DSL masks a lot of the actual ES queries and index setup b) spandex is built on top of the official Java client. The only downside is that it relies on core.async in a few places


spandex but I am biased 🙂

😄 12

@U0JEFEZH6 what is the downside of the core.async?


We don't need it :-)


For the last 4 years of running Clojure on the backend I still haven't found use for it


Interesting. I came from the front end so was new to needing threads etc and used PurelyFunctional TV courses straight to core.async lol


I can see that - core.async definitely helps somewhat with the callback hell in cljs. I'm definitely biased: the application I'm working on is not deployed as a single unit, so there's a lot of communication between separate processes, rather than internal queueing

👍 4

7 years of clj + Elasticsearch experience here: For the love of all that is good and holy just use the REST API.


The official client is just that


It just adds pooling/discovery, the rest is essentially ring from the end users perspective


@U07S8JGF7 yep, that's where spandex shines, you have to use the REST stuff (also ~7 years of ES "experience")


Yeah, not enough bang for the buck IMO. clj-http is general-purpose and works fine.


caveat emptor


Always some use case where round-robin connections become important. But you can even do persistent connections pretty trivially in clj-http.


FWIW both clj-http and official ES rest client are built on top of the same apache httpcomponents libs


yep, so… why the middleman?


I take it back. There’s obv a value-add. Just minimal for the gain in my view.


As with all client wrappers - same reason why we abandoned all of them and just use clj-http for all 3rd party service interactions


Amazon being the only outlier because it's amazon


clj-http value-add is significant: general-purpose, data-oriented http access


ES client basically strips off “general purpose” and exchanges “round robin requests”


Amazon is complicated primarily by security, I think.


Arguably clj-http also has a lot of cruft you don't need to deal with when using ES :). The official client is configured to work well and be very performant out of the box, saves you some fiddling and writing all the cruft around clustering & co. But I get the argument for familiarity


Hi! What are good tools to figure out how much memory a jar needs to run? I have a very simple app and it eats more than 256mb at startup. I want to dive deeper and understand if this is something I can fix. Thanks!


@U4EFBUCUE VisualVM is a free alternative to YourKit. I also often find JDK tools to be really useful before/without (ever) reaching for a profiler. Especially jcmd. Possibly combined with Java Mission Control (JMC), which got a new stable version recently. That said, 256M isn't that much. It depends on what "a very simple app" means and how you configure java/the JVM in terms of -Xmx et al.


i usually use the yourkit profiler

👍 4

see where those allocations go


in the context of a macro, I'm struggling to understand the difference between '~' and '. They seem to result in the same form:

(defmacro m [] `(my-ns/my-function 'my-symbol.core))
(macroexpand '(m))
; => 
(my-ns/my-function (quote my-symbol.core))

(defmacro m2 [] `(my-ns/my-function '~'my-symbol.core))
(macroexpand '(m2))
; =>
(my-ns/my-function (quote my-symbol.core))
Why would you use one form over another?

Alex Miller (Clojure team)16:04:16

I don't think I've ever used '~'

Alex Miller (Clojure team)16:04:52

~' is common when want an unqualified symbol to be in the expanded form

Alex Miller (Clojure team)16:04:39

or '~ when evaluating something to a symbol


I've used '~' when I've needed a literal symbol in a macro expansion -- I don't want a gensym'd local and I don't want the normal namespace expansion of the symbol.


I see, that makes sense now. It seems like a symbol with a . in it does not get ns qualified when using ' which was throwing me off. The difference is apparent when using plain symbols though.

user=> (defmacro m [] `(my-ns/my-function 'user.core))
user=> (defmacro m2 [] `(my-ns/my-function '~'user.core))
user=> (= (macroexpand '(m2)) (macroexpand '(m)))

Alex Miller (Clojure team)17:04:50

symbols with dots are treated as class names in Clojure so may be a special case here


makes sense, thanks!


user=> (defmacro m [] `(my-ns/my-function 'core))
user=> (defmacro m2 [] `(my-ns/my-function '~'core))
user=> (= (macroexpand '(m2)) (macroexpand '(m)))

Ramon Rios16:04:35

(-> (Math/random) (* 100000000000) Math/floor)
Hello everyone. I'm currently trying to generate a random number of 9 digits. I use rand-int but I realized that it will not always give me 9-digit numbers 😛 . So I created this function here, but it's not working as I expected. Can someone give me some insight?


@ramonp.rios if I tack a call to long at the end there I get nine digit numbers as the result on most calls


but the only difference there is getting an integral type rather than scientific notation from a double


of course 1/10 of calls will give 8 digits or shorter


and 1/100 7 digits or shorter

Ramon Rios17:04:17

Ok, i got. I just needed to convert it to long then 😅


(repeatedly 9 #(rand-int 9))

Ben Grabow17:04:51

Careful of leading zeros!

Ben Grabow17:04:02

If you need to guarantee the number has no fewer than 9 digits: (+ 100000000 (-> (Math/random) (* 900000000) (Math/floor)))


oh i misread that as nine random numbers not a nine-digit random number. good catch

Ben Grabow17:04:17

I like the approach though. (Long/parseLong (apply str (list* (inc (rand-int 9)) (repeatedly 8 #(rand-int 10)))))

Ben Grabow17:04:28

(defn digits->int [xs] (reduce #(+ (* 10 %1) %2) 0 xs))
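
A compact alternative along the same lines, guaranteeing exactly nine digits (helper name is made up):

```clojure
;; rand-int 900000000 gives 0..899999999; adding 100000000
;; yields 100000000..999999999, always nine digits.
(defn rand-9-digit []
  (+ 100000000 (rand-int 900000000)))

(every? #(= 9 (count (str %))) (repeatedly 1000 rand-9-digit))
;; => true
```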


I'm curious what the community's thoughts are on monorepos! Advantages? Disadvantages? I get the sense that they aren't very widely used in the Clojure community, but I don't have any data to back that feeling up. (Please point me toward a more appropriate channel for this question if there is one!)


At World Singles Networks, we have a monorepo with about thirty subprojects. We use CLI/`deps.edn`. That's about 96,000 lines of Clojure (of which 21K is tests).


We build about a dozen separate uberjars from that for deployment to production.


Big advantage: easy to have "all" code on your classpath and loaded into your dev environment. Also: nice to have a single git SHA across all your "projects" from a release management/trackability p.o.v. We bake the SHA into every artifact so even if we build uberjars independently, we can still see exactly what point each one was built from.


Technically we have two monorepos: one for all the backend stuff (Clojure) and a separate one for all the frontend stuff (JS).

Dustin Getz18:04:51

At Google size if you are a SRE facing an outage in some random service, a monorepo lets you instantly see the source code for all of google without needing to sort through thousands of project orgs and understand the transitive dependencies


Yeah, it does seem to be something some of the really large companies favor -- but that comes with its own interesting challenges since they have so much code that it won't fit on a developer's machine all at once.

Dustin Getz18:04:33

Is that true that it doesn’t fit on a dev box? A TB of text is a lot of code


Google have talked about that -- they have custom file system abstractions and all sorts of custom search tooling.


I think it must be this thread


Who else has a monorepo apart Google of the big ones?


This is quite interesting about what Netflix is having to do in a distributed model to get the benefits of a monorepo


I'm seeing several articles stating that Facebook, Twitter, and Microsoft also work with monorepos.


Hum, didn't know that


So, something I don't understand when reading this article, which is bringing up good issues I think, but maybe it's that I don't really get what a monorepo entails. Like, on every code change, you would still recompile and re-run the tests of the entire company's code base?


Otherwise I don't understand why it would give you "Publisher Feedback"


Like, do you basically treat the code as one big code base? Like it's not just a mono-repo, but a monolithic app, except it has different "main" entry points?


So everything is always on the class-path, and there's just no dependencies, its all just in the source folder?


And running tests and compiling is thus always running the entire tests of everything together and compiling everything together?


No, of course not. That would be silly.


But I don't really have the patience to explain it. There's plenty of information out there about it.


Oh, I was actually thinking that could be nice 😅


Ya, no worries. I'll one day go and learn much more about it


Thanks for that link, it's too bad it didn't really talk about how they implemented those solutions though


I've heard of people using a monorepo and being quite happy with it. And there are also people not using them like me, and I'm also quite happy.


I prefer monorepos. when you have multiple repos, you often have implicit dependencies between the repos anyway, but you lose all the tools for managing which versions belong together

💯 4
✔️ 4

having said that, it usually requires some basic tooling beyond the stock version control systems to make it pleasant to work with if you end up with a large monorepo


i think the team at clubhouse has done some thinking in this space recently. stuart had a blog post about it a few months ago


I can't remember if monorepos were correlated with high performance in Accelerate or not


What is accelerate?


I know trunk based development was

Graham Seyffert18:04:11

I agree, I’ve come to enjoy monorepos. We do have custom IDE tooling in place though to improve the development experience so that, for instance, you only have the projects (and their dependents) that you work on imported

Graham Seyffert18:04:27

Which helps with project (re)build times in a big way


@gseyffert I'm curious as to what aspect of "build" times are made slower by a monorepo?


(we don't build uberjars locally, for the most part, so we just load code into the REPL as needed, and run subsets of the overall test suite for subprojects that we are working on)


I'm assuming each "project" has its own source directories, resource directories, build.boot, etc?


Oh, I'm seeing now your responses in the other thread.


We switched from Boot to CLI back in 2018.


Got it. So each subproject has its own deps.edn?


Or a top-level deps.edn with different aliases for each subproject?


We have clojure/versions/deps.edn which is our "master" deps file and we use CLJ_CONFIG=../versions to make that "replace" the user-level deps when we are in each subproject (which, yes, have their own deps.edn as well).


We have a :defaults alias in the master deps with :override-deps for all the libraries that we "pin" to specific versions across the whole (mono)repo. All tooling aliases are also in that master deps file.


So a per-subproject command is cd subproject; CLJ_CONFIG=../versions clojure -A:defaults ...


(and we have a short build shell script that wraps that so we can just say something like build test api and it adds :test to the above command)


How are dependencies between subprojects handled? Do you use some kind of internal artifact repository and declare dependencies in each deps.edn, or do you just assemble one giant classpath and let the :require s sort it out?


(I hope that question made sense.)


Mostly we use one giant classpath. Part of the benefit of a monorepo for us is not having to use artifacts for internal dependencies and therefore not having to manage that whole tangle of version updates etc.


Every now and then we spin off parts of the repo as public OSS projects, once they're stable enough and we feel they might benefit the community, but we also move those out of the monorepo and treat them entirely standalone at that point.


How do you handle situations where submodule A depends on submodule B and you have to make some breaking changes to submodule B on a short timeframe? It's not clear to me how to do this in a monorepo of the kind you're describing unless you have some kind of artifact repository.


I feel like I'm missing something. 🙂


Clojure's ethos is to not make breaking changes so there's that.


But if you absolutely have to make breaking changes to B you should also update A. Our position is that master should always be releasable.


On rare occasions we do branch builds of isolated services -- but all tests across the whole repo should still pass even on a branch, even if we're only releasing one "product".


In reality, it's just not a problem -- we just don't break APIs very often and we never find ourselves in a situation where we're rushing to release one subproject so badly that we would leave anything broken.


But, again, our basic position is: master is always releasable and we can go from commit/push to production in just a few minutes if we have to -- so we can release multiple times a day fairly easily.


That's really helpful, Sean. Thanks for explaining.

Graham Seyffert18:04:11

If I import multiple projects, instead of just the one I’m working on, and I do a clean build, I have to re-compile a lot more projects. This is primarily a Java repo with some projects in Clojure

Graham Seyffert18:04:00

We do often run apps locally and/or have dedicated build machines that people remote in to from their laptops

Graham Seyffert18:04:26

Building the whole project, if you had to do that, takes over an hour on build servers. Fun stuff

Graham Seyffert18:04:52

“dedicated build machines” --> a desktop in their office with a static IP


Ah yes, I can see that being an issue in Java or other "compiled upfront" languages.

Graham Seyffert18:04:06

Yeah, exactly. So we kind of have to operate in that, and have plenty of Java in our own project(s) because other teams aren’t set up with a Clojure dev environment, so we try to keep interactions with other projects in Java


I've been doing purely Clojure for so long I've pretty much forgotten how "bad" it can be in other languages 🙂

Graham Seyffert18:04:41

The dream 🙂 I will say, not being in a monorepo actually does make using Clojure easier in this regard


I do (vaguely) remember working in environments where builds used to take long enough to go have lunch instead of just a coffee...

Graham Seyffert18:04:14

Yeah, this custom tooling is the difference between those two cases for me

Graham Seyffert18:04:05

It lets me do a rebuild for our main project in < 5 minutes on my laptop if I really dial it in


I've often found myself pondering the virtues of monorepos recently. @seancorfield I notice you go into what you've found the pros of working with monorepos - did you find some cons? Or have you been working with them for so long that it's difficult to recall how things used to be?


We have a bunch of microservices and a few libraries shared between them (all Clojure) and managing the dependency graph between the microservices and the libs is a bit of a pain in the arse - hence the pondering of monorepos.


I think the main con is going to be set up. a lot of tools assume project repos so you'll have to figure out how to make them work the "non-standard" way


Most of the downsides of monorepos come at scale, when there's too much code to fit on a developer laptop, or it's too slow for your editor to search large parts of the codebase. Google and a few others talk about that -- and all the custom tooling they've built to address it.


for instance, turning on travis CI for a repo? easy peasy. turning it on for a folder/subproject? I'm sure you can, but I'd have to look it up


Yeah @U050MP39D I had wondered that myself. We're using Circle CI but it's the same sort of challenge. As you say though, there will be ways around it but one then finds oneself fighting the tooling somewhat. IDE-wise, I'm pretty confident that Cursive will deal with it fine but we have engineers using VSCode/Calva too - I'm not so sure how that will cope.


Thanks @seancorfield, yes the problems at scale certainly make sense. Not something that we're going to struggle with I think though thankfully.


Unless you're at scale, you probably don't care about running CI only for a single folder. Assuming your tests don't take a significant period of time to run, that is.


ehhhh. it's more about one project not blocking another


With failing tests?


We have a monorepo that is set up with CircleCI. We have 51 sub-projects within the repo. To speed up CI test runs, each sub-project runs its tests in a separate job. With CircleCI's new pricing, you can have an "unlimited" number of jobs running at once. This makes it so failing tests won't block, unless I'm misunderstanding @U050MP39D.


right. I know it's possible. I was just saying you'll have to do more when you're setting things up


Ah, true. We have to generate our CircleCI config programmatically at this point.


I know people have told me before, but I still don't see the problems with a multi-repo setups


I've had people say refactoring across projects, but in Clojure, grep is my refactor tool so 😛


And I also always point out that Google did not adopt a Monorepo by choice


They migrated from a Perforce monorepo to a Git monorepo, because it was the easier path to migration


And then had to build a bunch more tooling to address the challenges of a Git monorepo


But I've never tried a monorepo, so I'll reserve criticism to when I do


I guess I'm more interested in hearing about why people even get curious about monorepos. What current issues do you face with a multi-repo setup that make you look for an alternative?


You're on your phone, aren't you? 🙂


"What current issue you face with a multi-repo that make you look for an alternative?" -- pretty sure another thread here had a discussion of that.


I know I talked about some of the downsides of multiple repos when folks were discussing this yesterday...?


For me, the key downside that I'm finding to having multiple repos is that we have several services as projects all of which use the same set of internal shared libraries - when one of those libraries adopts changes that need to filter through to all services, there then follows a laborious job of updating, testing and deploying all of the services separately. Or, in cases where a service doesn't need to be updated immediately with the change to the library, there's no easy way of ensuring that the library hasn't introduced breaking changes to those services. I suppose that one could concoct a CI job to 1) detect the library change; 2) automatically pull down all service source code 3) bump the library version in each service temporarily and 4) run the automated tests. (And I do get the whole point about just not making breaking changes to libraries but in this case these are libraries internal to the (small) team and so the overhead in ensuring backwards compatibility is just too much to consider.)


I kind of think even shared internal libraries should be held to a high standard. The effort saved on "migrations" more than makes up for it, in my opinion.


But I also don't understand why changes to this library must filter through? Can't services just choose to upgrade whenever they care for the new functionality?


And what I don't yet understand is how a monorepo avoids this problem.


Also, that CI script you describe is what we have. In our CI pipeline for each service, the service can list out certain dependencies which they want to automatically upgrade and deploy on new minor version bumps. Major version bumps do not trigger this, since we use those to indicate breaking changes.


But for major versions, we'll get a warning that we are now depending on an old version


This was kind of all setup already for me, so I have no idea how difficult it is to create such CI infrastructure :rolling_on_the_floor_laughing:, so maybe I'm looking at it through rose colored glasses


in a monorepo you don't have versions of the shared library, it's part of the same repo. it also removes the friction of having to set up library publishing, an internal maven repo, etc


I don't get that? How do you bootstrap the classpath?


Do you just add the source folders of your dependencies to your sources?


Or use deps local repo support?


Was that + meaning a yes to that? If so, that's definitely interesting. I have to ponder on that.


I use the deps local repo stuff fwiw


Only on lein do I see the source paths thing, and it is painful with dependencies


Reading about it more and more, both of these are kind of interesting takes. So I guess that's the part I was missing. It seems directed-graph build tools work better for monorepos, like Google's Blaze and Facebook's Buck


I just wanted to say thanks to everyone above who got involved in this conversation - it's been enlightening!


Hi there! Just a question - I am composing queries for datomic programatically and I would like to unquote certain elements of the query


like `[?e ~attr "7"]


the problem is that this yields a namespaced ?e, whereas if I use ' instead of `, unquoting doesn't work. Has anyone run into this?


use ` and then inside do ~'?e


which means "unquote this thing which is a quoted ?e"
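
Concretely (attr here is a made-up keyword, just to show what ~' produces):

```clojure
(def attr :person/name)

;; ~'?e yields the bare symbol ?e, ~attr evaluates to the keyword
`[~'?e ~attr "7"]
;; => [?e :person/name "7"]
```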


cool, thanks!


As an alternative, you can also do: ['?e attr "7"].


lol, indeed


I’m thinking about experimenting with GraphQL in clojure and clojurescript. What libraries would you recommend to make that experience more pleasant?


What’s your interest in GraphQL? Simplifying getting data to/from single-page apps, GraphQL itself, or something else?


You might also consider looking into EQL, an alternative to GraphQL that uses EDN on the wire, but with very similar semantics. If you go that route, look into Pathom.

👍 4

@ctamayo I was looking into alumbra. How would they compare? And what seems to be the best client-side option (cljs)


Lacinia is in active use and development. Alumbra seems stale. GraphQL is still changing, so if you want to keep up, the ecosystem matters.


Also check #graphql


Not familiar with Alumbra, I was honestly just throwing out names I've heard. 😅 Most of my CLJS work is in re-frame these days so the next time I have to speak GraphQL I will probably go with re-graph which is specific to re-frame.


ah, ok. Lacinia and re-graph seems to have a connection as well. What would be the argument for using EQL instead of graphQL? @ctamayo


It's Clojure primitives at every layer of the stack

fulcro 4