This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2020-04-01
Channels
- # announcements (21)
- # architecture (6)
- # aws (18)
- # babashka (14)
- # beginners (231)
- # boot (1)
- # calva (2)
- # chlorine-clover (22)
- # cider (34)
- # clara (16)
- # clj-kondo (53)
- # cljdoc (5)
- # cljs-dev (22)
- # cljsrn (3)
- # clojure (283)
- # clojure-europe (24)
- # clojure-italy (9)
- # clojure-nl (5)
- # clojure-spec (5)
- # clojure-uk (57)
- # clojurescript (14)
- # core-typed (8)
- # cursive (4)
- # data-science (11)
- # datomic (41)
- # docker (24)
- # duct (2)
- # emacs (2)
- # exercism (29)
- # fulcro (96)
- # graalvm (4)
- # jobs-discuss (1)
- # kaocha (53)
- # lambdaisland (20)
- # malli (5)
- # nrepl (4)
- # observability (7)
- # off-topic (40)
- # pathom (44)
- # pedestal (8)
- # re-frame (19)
- # shadow-cljs (58)
- # spacemacs (2)
- # sql (9)
- # tools-deps (15)
- # vim (3)
- # yada (10)
Hello guys, I'm currently learning Clojure (it's amazing). I have two questions: 1) How do I handle exceptions and errors properly? 2) Do you have some resources about best practices for data-driven systems? I'm not too familiar with this mindset.
As far as I know there is no widely accepted correct way of dealing with exceptions. Most people just use plain try/catch with (ex-info "msg" {:data :data})
exceptions. There are a handful of libraries that provide monadic error handling / containers that indicate whether or not there's a successful result inside or an error. It's also common for people to create macros that implement some generic error handling / logging according to their preferences in their application. Here are a few things that come to mind: https://github.com/scgilardi/slingshot https://github.com/cognitect-labs/anomalies https://github.com/MichaelDrogalis/dire
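A minimal sketch of that plain try/catch + ex-info style (the function and keys here are hypothetical, not from the thread):
(defn charge-account!
  [account amount]
  (when (neg? amount)
    (throw (ex-info "Amount must be non-negative"
                    {:type ::invalid-amount :account account :amount amount})))
  {:charged amount})

(try
  (charge-account! "acct-1" -5)
  (catch clojure.lang.ExceptionInfo e
    (println (ex-message e) (ex-data e))))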
Regarding part 2, you might find this an interesting read: https://vvvvalvalval.github.io/posts/2018-07-23-datascript-as-a-lingua-franca-for-domain-modeling.html
thanks
Is there a good way to get like an ordered set from a string? Attempting (set "ahilrty")
results in #{\a \h \i \l \r \t \y}.
you could look at the code for distinct
and make a version that gives you both an in-order vector, and a set for testing membership
it would mean lifting the result list / set into a tuple I think
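A minimal sketch of that idea (not from the thread): one pass that builds both an in-order vector and a set for membership checks.
(defn ordered-distinct [coll]
  (reduce (fn [[order seen] x]
            (if (seen x)
              [order seen]
              [(conj order x) (conj seen x)]))
          [[] #{}]
          coll))
;; (ordered-distinct "ahilrty")
;; => [[\a \h \i \l \r \t \y] #{\a \h \i \l \r \t \y}]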
Interesting! I'll take a look. It turned out that clojure.string/includes? really spoke to the nature of the problem I'm solving + performed significantly faster than using some
.
That's sorted, not ordered.
@jayzawrotny Do you really want a set? They inherently have no order. What about just (distinct "some string")
?
That preserves order and just removes the duplicates.
The greater context is a small project to filter a list of words by being made out of a specific subset of characters. I chose set because it would both eliminate dupes and can be used with contains?
Though thinking about it, I can probably just use the original string and clojure.string/includes?
user=> (apply sorted-set "some string")
#{\space \e \g \i \m \n \o \r \s \t}
user=> (distinct "some string")
(\s \o \m \e \space \t \r \i \n \g)
user=> (set "some string")
#{\space \e \g \i \m \n \o \r \s \t}
user=>
Not quite, order is important as I assume the first char is required whereas the following letters are allowed but not required.
vec may retain the order, but not work for (contains?)
I think clojure.string/includes?
is the right tool for the job
That's fair
You can use some
with any sequence, not just vectors.
some works but seems to take about 1.5x the time
For some reason I was expecting it to be faster
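For context, a sketch of the character-subset filter being described (the word list here is made up):
(def allowed (set "ahilrty"))

(defn only-allowed-chars? [word]
  (every? allowed word))

(filter only-allowed-chars? ["hilarity" "trail" "zebra"])
;; => ("hilarity" "trail")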
With Clojure CLI, is it possible to create aliases that include other aliases? Looks like no, but I might be missing something obvious
could someone recommend me a (maintained) library / framework for async jobs? looking for something like ruby's sidekiq, backed by persistent storage
I've been slowly chipping away at building a library for this, backed by next.jdbc and select for update skip locked
- not used in production but might be useful https://github.com/lukaszkorecki/taskmaster
I haven’t used it, but there appears to be a Faktory lib: https://github.com/contribsys/faktory, https://github.com/apa512/clj-faktory
oh a clojure one: https://github.com/layerware/pgqueue
clojure -> java interop -> jruby runtime -> sidekiq ---|
^-------------------------------------------------------|
Well, Spring libraries are some of the more well-maintained and feature-ful Java libraries. The way they are documented for usage is through annotations on classes, so even if it's possible to wrangle classes the right way dynamically, it's not always clear how, or, I'm willing to bet, whether it's possible at all
i guess this doesn't quite help then? https://clojure.org/reference/datatypes#_java_annotation_support
the only example i had seen before was inside gen-class - didn't realize there was one more type of place it could be done
Excuse me for being lazy. Do reader tags attach to metadata forms or to the form that the metadata is attached to? I need it for structural editing behaviour in Calva. I don't know of a quick way to test it, and I have only a short slot of time for trying to fix it. So what does the reader do with something like #foo ^:bar :baz
?
i just tested it, and iiuc, ^:bar is metadata for :baz -- though this wouldn't work because :baz is a keyword which cannot have metadata.
but assuming you had chosen a vector instead of :baz, the vector would have the metadata attached.
the function which would handle the foo tag (set via *data-readers*
typically), would be given the vector with the metadata attached.
something like the following session is what i did:
user=> (defn my-tag-fn [x] (binding [*print-meta* true] (prn x) [1 2]))
#'user/my-tag-fn
user=> (set! *data-readers* {'my-tag my-tag-fn})
{my-tag #object[user$my_tag_fn 0x256f8274 "user$my_tag_fn@256f8274"]}
user=> #my-tag ^:a []
^{:a true} []
^{:a true} []
[1 2]
user=>
so you can see that the vector []
had the metadata attached to it and it was outputted by (prn x)
within my-tag-fn
np, discussing discard forms with you helped to motivate me to understand this stuff better, so ty for that 😉
@lockdown- @emccue thanks for the ideas!
I have opencv interop code in which I'm trying to remove noise from an image like so:
(defn ->vector [mat]
  (let [width  (.width mat)
        height (.height mat)]
    (vec
     (for [i (range width)]
       (vec
        (for [j (range height)]
          (vec (.get mat i j))))))))

(defn element-wise-and [a b]
  (if (and (number? a) (number? b))
    (double (* a b))
    (into [] (map element-wise-and a b))))

(defn zero-the-vector [a b]
  (if (and (number? a) (number? b))
    (if (zero? b) 0 a)
    (into [] (map zero-the-vector a b))))

(defn element-wise-inverse [a]
  (if (double? a)
    (if (zero? a) 1.0 0.0)
    (into [] (map element-wise-inverse a))))

(defn filter-salt-pepper-noise [edge-img- size]
  (let [last-median (->vector edge-img-)
        median      (Mat. size 5)]
    (loop [count 0
           edge-img (->vector edge-img-)
           last-median (->vector edge-img-)]
      (cv/median-blur edge-img- median 3)
      (if (> count 1)
        edge-img
        (recur (inc count)
               (->> (element-wise-and last-median edge-img)
                    (zero-the-vector edge-img))
               (->vector median))))))
But when I do (filter-salt-pepper-noise edges size) where edges is a Mat and size is an opencv Size, the function is just stuck there, and doesn't seem to terminate. How can I check why this function seems to stall?
You might see more responses if you're able to reduce your example to something smaller. Inspiration: https://www.youtube.com/watch?v=FihU5JxmnBg
I'm trying to create a new List<SomeObject>() in clojure, but don't know how to. When I do (List. SomeObject), I get No matching ctor found for interface java.util.List
List is a Java interface. So to create a list object, use (java.util.ArrayList.)
or (java.util.LinkedList.)
. This will be a list of Object
. You don't need to (and can't) specify the type of object.
List is an interface, so instances of it cannot be created directly. You'd need to use an ArrayList or something similar?
java.util.List is an Interface, and can't be instantiated directly. You could use: (ArrayList.)
or (ArrayList. some-vec)
In addition to the above comments, I don't think in Java it is possible to pass a "type" to a List constructor like that.
See here: https://stackoverflow.com/questions/3688730/how-to-pass-a-typed-collection-from-clojure-to-java
In JVM byte code, List<SomeObject> and List<Object> are identical, so Clojure does not bother providing a way to make you believe you are doing List<SomeObject>, since the produced byte code wouldn't be doing that anyway.
And Java collections don't support primitives for the same reason. C++ collections are truly typed.
So, @U010Z4Y1J4Q, when writing any Java code which would accept this list (if that's what you're doing), your method sig might look something like processClojureList(List<Object> lst)
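A small sketch of that from the Clojure side (element values are arbitrary; generics are erased at runtime, so the list is just a list of Objects):
(def xs (java.util.ArrayList.))          ; empty, mutable list
(.add xs "anything")                     ; elements are plain Objects at runtime
(def ys (java.util.ArrayList. [1 2 3]))  ; or seed one from a Clojure collection
;; either can be passed to a Java method declared as taking List<SomeObject>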
When I do the following:
(defn find-significant-contours [edge-img size]
  (def contours '())
  (def hierarchy (Mat. size cv/CV_8UC1))
  (def edge-img-8u (Mat. size cv/CV_8UC1))
  (cv/find-contours edge-img-8u contours hierarchy cv/RETR_TREE cv/CHAIN_APPROX_SIMPLE))
I get the error (UnsupportedOperationException) at org.opencv.utils.Converters/Mat_to_vector_vector_Point (Converters.java:542).
find-contours looks like:
public static void findContours(Mat image, java.util.List<MatOfPoint> contours, Mat hierarchy, int mode, int method)
Like instead of passing in List<MatOfPoint>, I think passing in '() is causing this error.
So, the implementation is here: https://github.com/clojure/clojure/blob/master/src/jvm/clojure/lang/PersistentList.java
just use (java.util.ArrayList.)
generics are a compile-time trick, in reality List
and List<MatOfPoint>
are the same after java compile is done with it
the reason why you’re getting an exception with '()
is because the Clojure list doesn't allow mutation operations like .add()
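For illustration, a sketch of that call using the plain OpenCV Java API and a mutable list (assumes the OpenCV Java bindings are on the classpath; the argument should be an 8-bit single-channel Mat like edge-img-8u above):
(import '[org.opencv.core Mat]
        '[org.opencv.imgproc Imgproc])

(defn find-contours-sketch [edge-img-8u]
  (let [contours  (java.util.ArrayList.)  ; gets filled with MatOfPoint instances
        hierarchy (Mat.)]
    (Imgproc/findContours edge-img-8u contours hierarchy
                          Imgproc/RETR_TREE Imgproc/CHAIN_APPROX_SIMPLE)
    (vec contours)))  ; convert back to an immutable Clojure vector if desired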
Often when processing requests, there are values that belong to the context of the request, like the user making the request, or the billing account this falls under, and this can reach 6+ properties; when pushing that data down to lower layers of the app, these properties absolutely infest the parameter lists of all the functions, making them bigger. I am sorely tempted to create *request-context*
and do binding, but at the same time that seems kinda wrong because the functions become dependent on things other than their parameters…. really don’t know which way to decide
haven't found a good solution to that myself, at least in the context of HTTP requests
if there's a lot of parameters to send, I tend to just put them in a map and send them as one param, only destructuring when I need them, but there's still a function somewhere that goes {:keys [param1 param2 param3 param4 param5 ...]}
often enough
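For reference, a minimal sketch of the dynamic-var approach being weighed here (all names are hypothetical):
(def ^:dynamic *request-context* nil)

(defn lower-layer-fn []
  ;; reads the context implicitly instead of threading 6+ parameters down
  (let [{:keys [user billing-account]} *request-context*]
    (str "processing for " user " on " billing-account)))

(defn handle-request [request]
  (binding [*request-context* (select-keys request [:user :billing-account])]
    (lower-layer-fn)))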
I posted an explanation of the main API and the design decisions in #lambdaisland, please do pitch in there
i suppose this is best done with Character.isDigit
anyway, not really a Clojure question
(Character/isDigit (first s))
Any thoughts on Elastisch (https://github.com/clojurewerkz/elastisch) vs Spandex (https://github.com/mpenet/spandex)? I'm new to Elasticsearch so have no context for evaluating.
If you’re already using Postgres, you may find this interesting: “ZomboDB brings powerful text-search and analytics features to Postgres by using Elasticsearch as an index type.” https://github.com/zombodb/zombodb
We are migrating away from elastisch to spandex - mostly because a) the DSL masks a lot of the actual ES queries and index setup, and b) spandex is built on top of the official Java client. The only downside is that it relies on core.async in a few places
@U0JEFEZH6 what is the downside of the core.async?
For the last 4 years of running Clojure on the backend I still haven't found a use for it
Interesting. I came from the front end so was new to needing threads etc and used PurelyFunctional TV courses straight to core.async lol
I can see that - core.async definitely helps somewhat with the callback hell in cljs. I'm definitely biased: the application I'm working on is not deployed as a single unit, so there's a lot of communication between separate processes, rather than internal queueing
7 years of clj + Elasticsearch experience here: For the love of all that is good and holy just use the REST API.
It just adds pooling/discovery; the rest is essentially ring from the end user's perspective
@U07S8JGF7 yep, that's where spandex shines, you have to use the REST stuff (also ~7 years of ES "experience")
Always some use case where round-robin connections become important. But you can even do persistent connections pretty trivially in clj-http.
FWIW both clj-http and official ES rest client are built on top of the same apache httpcomponents libs
As with all client wrappers - same reason why we abandoned all of them and just use clj-http for all 3rd party service interactions
Arguably clj-http also has a lot of cruft you don't need to deal with when using ES :). The official client is configured to work well and be very performant out of the box, saves you some fiddling and writing all the cruft around clustering & co. But I get the argument for familiarity
Hi! What are good tools to figure out how much memory a jar needs to run? I have a very simple app and it eats more than 256mb at startup. I want to dive deeper and understand if this is something I can fix. Thanks!
@U4EFBUCUE VisualVM is a free alternative to YourKit.
I also often find JDK tools to be really useful before/without (ever) reaching out for a profiler.
Especially jcmd
. Possibly combined with Java Mission Control (JMC) which got a new stable version recently.
That said, 256M isn't that much. It depends on what "a very simple app" means and how you configure java/jvm in terms of -Xmx
et al.
Thanks!
in the context of a macro, I'm struggling to understand the difference between '~'symbol.name
and 'symbol.name
They seem to result in the same form:
(defmacro m [] `(my-ns/my-function 'my-symbol.core))
(macroexpand '(m))
; =>
(my-ns/my-function (quote my-symbol.core))
(defmacro m2 [] `(my-ns/my-function '~'my-symbol.core))
(macroexpand '(m2))
; =>
(my-ns/my-function (quote my-symbol.core))
Why would you use one form over another?
I don't think I've ever used '~'
~' is common when you want an unqualified symbol to be in the expanded form
or '~ when evaluating something to a symbol
I only ask because I just came across it in refer-clojure https://github.com/clojure/clojure/blob/clojure-1.10.1/src/clj/clojure/core.clj#L5826
I've used '~'
when I've needed a literal symbol in a macro expansion -- I don't want a gensym'd local and I don't want the normal namespace expansion of the symbol.
I see, that makes sense now. It seems like a symbol with a .
in it does not get ns qualified when using '
which was throwing me off. The difference is apparent when using plain symbols though.
user=> (defmacro m [] `(my-ns/my-function 'user.core))
#'user/m
user=> (defmacro m2 [] `(my-ns/my-function '~'user.core))
#'user/m2
user=> (= (macroexpand '(m2)) (macroexpand '(m)))
true
symbols with dots are treated as class names in Clojure so may be a special case here
user=> (defmacro m [] `(my-ns/my-function 'core))
#'user/m
user=> (defmacro m2 [] `(my-ns/my-function '~'core))
#'user/m2
user=> (= (macroexpand '(m2)) (macroexpand '(m)))
false
(-> (Math/random) (* 100000000000) Math/floor)
Hello everyone.
I'm currently trying to generate a random number of 9 digits. I use random-int
but I realize that it will not always give me 9-digit numbers 😛 . So I created this function here but it's not working as I expected. Can someone give me some insight?
@ramonp.rios if I tack a call to long
at the end there I get nine digit numbers as the result on most calls
but the only difference there is getting an integral type rather than scientific notation from a double
Thank you
of course 1/10 of calls will give 8 digits or shorter
and 1/100 7 digits or shorter
Ok, I got it. I just needed to convert it to long
then 😅
Careful of leading zeros!
If you need to guarantee the number has no less than 9 digits (+ 100000000 (-> (Math/random) (* 900000000) (Math/floor)))
I like the approach though. (Long/parseLong (apply str (list* (inc (rand-int 9)) (repeatedly 8 #(rand-int 10)))))
(defn digits->int [xs] (reduce #(+ (* 10 %1) %2) 0 xs))
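Putting those ideas together, a sketch that avoids the string round-trip (first digit is 1-9 so there's no leading zero):
(defn random-9-digit []
  (digits->int (cons (inc (rand-int 9))
                     (repeatedly 8 #(rand-int 10)))))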
I'm curious what the community's thoughts are on monorepos! Advantages? Disadvantages? I get the sense that they aren't very widely used in the Clojure community, but I don't have any data to back that feeling up. (Please point me toward a more appropriate channel for this question if there is one!)
At World Singles Networks, we have a monorepo with about thirty subprojects. We use CLI/`deps.edn`. That's about 96,000 lines of Clojure (of which 21K is tests).
We build about a dozen separate uberjars from that for deployment to production.
Big advantage: easy to have "all" code on your classpath and loaded into your dev environment. Also: nice to have a single git SHA across all your "projects" from a release management/trackability p.o.v. We bake the SHA into every artifact so even if we build uberjars independently, we can still see exactly what point each one was built from.
Technically we have two monorepos: one for all the backend stuff (Clojure) and a separate one for all the frontend stuff (JS).
At Google's size, if you are an SRE facing an outage in some random service, a monorepo lets you instantly see the source code for all of Google without needing to sort through thousands of project orgs and understand the transitive dependencies
Yeah, it does seem to be something some of the really large companies favor -- but that comes with its own interesting challenges since they have so much code that it won't fit on a developer's machine all at once.
Is it true that it doesn't fit on a dev box? A TB of text is a lot of code
Google have talked about that -- they have custom file system abstractions and all sorts of custom search tooling.
This is quite interesting about what Netflix is having to do in a distributed model to get the benefits of a monorepo https://netflixtechblog.com/towards-true-continuous-integration-distributed-repositories-and-dependencies-2a2e3108c051
I'm seeing several articles stating that Facebook, Twitter, and Microsoft also work with monorepos.
So, something I don't understand when reading this article, which is bringing up good issues I think, but maybe its that I don't really get what a monorepo entails. Like, on every code change, you would still recompile and re-run the tests of the entire company's code base?
Like, do you basically treat the code as one big code base? Like it's not just a mono-repo, but a monolithic app, except it has different "main" entry points?
So everything is always on the class-path, and there are just no dependencies, it's all just in the source folder?
And running tests and compiling is thus always running the entire tests of everything together and compiling everything together?
No, of course not. That would be silly.
But I don't really have the patience to explain it. There's plenty of information out there about it.
Thanks for that link, it's too bad it didn't really talk about how they implemented those solutions though
I've heard of people using a monorepo and being quite happy with it. And there are also people not using them like me, and I'm also quite happy.
I prefer monorepos. When you have multiple repos, you often have implicit dependencies between the repos anyway, but you lose all the tools for managing which versions belong together
having said that, it usually requires some basic tooling beyond the stock version control systems to make it pleasant to work with if you end up with a large monorepo
i think the team at clubhouse has done some thinking in this space recently. stuart had a blog post about it a few months ago
This, probably! https://clubhouse.io/blog/monolith-meet-mono-repo/
I can't remember if monorepos were correlated with high performance in Accelerate or not
I agree, I've come to enjoy monorepos. We do have custom IDE tooling in place though to improve the development experience so that, for instance, you only have the projects (and their dependents) that you work on imported
Which helps with project (re)build times in a big way
@gseyffert I'm curious as to what aspect of "build" times is made slower by a monorepo?
(we don't build uberjars locally, for the most part, so we just load code into the REPL as needed, and run subsets of the overall test suite for subprojects that we are working on)
I'm assuming each "project" has its own source directories, resource directories, build.boot
, etc?
We switched from Boot to CLI back in 2018.
We have clojure/versions/deps.edn
which is our "master" deps file and we use CLJ_CONFIG=../versions
to make that "replace" the user-level deps when we are in each subproject (which, yes, have their own deps.edn
as well).
We have a :defaults
alias in the master deps with :override-deps
for all the libraries that we "pin" to specific versions across the whole (mono)repo. All tooling aliases are also in that master deps file.
So a per-subproject command is cd subproject; CLJ_CONFIG=../versions clojure -A:defaults ...
(and we have a short build
shell script that wraps that so we can just say something like build test api
and it adds :test
to the above command)
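For illustration, a hypothetical sketch of what such a "master" versions/deps.edn could look like (library names and versions invented):
{:aliases
 {:defaults {:override-deps {org.clojure/clojure {:mvn/version "1.10.1"}
                             cheshire/cheshire   {:mvn/version "5.10.0"}}}
  :test     {:extra-paths ["test"]}}}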
How are dependencies between subprojects handled? Do you use some kind of internal artifact repository and declare dependencies in each deps.edn
, or do you just assemble one giant classpath and let the :require
s sort it out?
Mostly we use one giant classpath. Part of the benefit of a monorepo for us is not having to use artifacts for internal dependencies and therefore not having to manage that whole tangle of version updates etc.
Every now and then we spin off parts of the repo as public OSS projects, once they're stable enough and we feel they might benefit the community, but we also move those out of the monorepo and treat them entirely standalone at that point.
How do you handle situations where submodule A
depends on submodule B
and you have to make some breaking changes to submodule B
on a short timeframe? It's not clear to me how to do this in a monorepo of the kind you're describing unless you have some kind of artifact repository.
Clojure's ethos is to not make breaking changes so there's that.
But if you absolutely have to make breaking changes to B
you should also update A
. Our position is that master should always be releasable.
On rare occasions we do branch builds of isolated services -- but all tests across the whole repo should still pass even on a branch, even if we're only releasing one "product".
In reality, it's just not a problem -- we just don't break APIs very often and we never find ourselves in a situation where we're rushing to release one subproject so badly that we would leave anything broken.
But, again, our basic position is: master is always releasable and we can go from commit/push to production in just a few minutes if we have to -- so we can release multiple times a day fairly easily.
If I import multiple projects, instead of just the one I’m working on, and I do a clean build, I have to re-compile a lot more projects. This is primarily a Java repo with some projects in Clojure
We do often run apps locally and/or have dedicated build machines that people remote in to from their laptops
Building the whole project, if you had to do that, takes over an hour on build servers. Fun stuff
“dedicated build machines” --> a desktop in their office with a static IP
Ah yes, I can see that being an issue in Java or other "compiled upfront" languages.
Yeah, exactly. So we kind of have to operate in that, and have plenty of Java in our own project(s) because other teams aren’t set up with a Clojure dev environment, so we try to keep interactions with other projects in Java
I've been doing purely Clojure for so long I've pretty much forgotten how "bad" it can be in other languages 🙂
The dream 🙂 I will say, not being in a monorepo actually does make using Clojure easier in this regard
I do (vaguely) remember working in environments where builds used to take long enough to go have lunch instead of just a coffee...
Yeah, this custom tooling is the difference between those two cases for me
It lets me do a rebuild for our main project in < 5 minutes on my laptop if I really dial it in
I've often found myself pondering the virtues of monorepos recently. @seancorfield I notice you go into what you've found to be the pros of working with monorepos - did you find any cons? Or have you been working with them for so long that it's difficult to recall how things used to be?
We have a bunch of microservices and a few libraries shared between them (all Clojure) and managing the dependency graph between the microservices and the libs is a bit of a pain in the arse - hence the pondering of monorepos.
I think the main con is going to be set up. a lot of tools assume project repos so you'll have to figure out how to make them work the "non-standard" way
Most of the downsides of monorepos come at scale, when there's too much code to fit on a developer laptop, or it's too slow for your editor to search large parts of the codebase. Google and a few others talk about that -- and all the custom tooling they've built to address it.
for instance, turning on travis CI for a repo? easy peasy. turning it on for a folder/subproject? I'm sure you can, but I'd have to look it up
Yeah @U050MP39D I had wondered that myself. We're using Circle CI but it's the same sort of challenge. As you say though, there will be ways around it but one then finds oneself fighting the tooling somewhat. IDE-wise, I'm pretty confident that Cursive will deal with it fine but we have engineers using VSCode/Calva too - I'm not so sure how that will cope.
Thanks @seancorfield, yes the problems at scale certainly make sense. Not something that we're going to struggle with I think though thankfully.
Unless you're at scale, you probably don't care about running CI only for a single folder. Assuming your tests don't take a significant period of time to run, that is.
We have a monorepo that is set up with CircleCI. We have 51 sub-projects within the repo. To speed up CI test runs, each sub-project runs its tests in a separate job. With CircleCI's new pricing, you can have an "unlimited" number of jobs running at once. This makes it so failing tests won't block, unless I'm misunderstanding @U050MP39D.
right. I know it's possible. I was just saying you'll have to do more when you're setting things up
I know people have told me before, but I still don't see the problems with a multi-repo setup
I've had people say refactoring across projects, but in Clojure, grep is my refactor tool so 😛
They migrated from a Perforce monorepo to a Git monorepo, because it was the easier path to migration
And then had to build a bunch more tooling to address the challenges of a Git monorepo
I guess I'm more interested in hearing about why people even get curious about monorepos. What current issues do you face with a multi-repo setup that make you look for an alternative?
You're on your phone, aren't you? 🙂
"What current issue you face with a multi-repo that make you look for an alternative?" -- pretty sure another thread here had a discussion of that.
I know I talked about some of the downsides of multiple repos when folks were discussing this yesterday...?
For me, the key downside that I'm finding to having multiple repos is that we have several services as projects all of which use the same set of internal shared libraries - when one of those libraries adopts changes that need to filter through to all services, there then follows a laborious job of updating, testing and deploying all of the services separately. Or, in cases where a service doesn't need to be updated immediately with the change to the library, there's no easy way of ensuring that the library hasn't introduced breaking changes to those services. I suppose that one could concoct a CI job to 1) detect the library change; 2) automatically pull down all service source code 3) bump the library version in each service temporarily and 4) run the automated tests. (And I do get the whole point about just not making breaking changes to libraries but in this case these are libraries internal to the (small) team and so the overhead in ensuring backwards compatibility is just too much to consider.)
I kind of think even shared internal libraries should be held to a high standard. The effort otherwise wasted on "migrations" more than makes up for it, in my opinion.
But I also don't understand why changes to this library must filter through? Can't services just choose to upgrade whenever they care for the new functionality?
Also, that CI script you describe is what we have. In our CI pipeline for each service, the service can list out certain dependencies it wants to automatically upgrade to and deploy on new minor version bumps. Major version bumps do not trigger this, since we use those to indicate breaking changes.
But for major versions, we'll get a warning that we are now depending on an old version
This was kind of all setup already for me, so I have no idea how difficult it is to create such CI infrastructure :rolling_on_the_floor_laughing:, so maybe I'm looking at it through rose colored glasses
In a monorepo you don't have versions of the shared library; it's part of the same repo. It also removes the friction of having to set up library publishing, an internal Maven repository, etc.
Was that + meaning a yes to that? If so, that's definitely interesting. I have to ponder on that.
Reading about it more and more, both of these are kind of interesting takes. So I guess that's the part I was missing. It seems directed-graph build tools like Google's Blaze and Facebook's Buck work better for monorepos
I just wanted to say thanks to everyone above who got involved in this conversation - it's been enlightening!
Hi there! Just a question - I am composing queries for Datomic programmatically and I would like to unquote certain elements of the query
the problem is that this yields the namespaced '?e', whereas if I use ' instead of ` unquoting doesn't work. Has anyone run into this?
As an alternative, you can also do: ['?e attr "7"]
.
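A small sketch of the difference (assumes the REPL is in the user namespace; the attribute is made up):
`[?e :user/name "7"]    ;=> [user/?e :user/name "7"]  (syntax-quote resolves the symbol)
`[~'?e :user/name "7"]  ;=> [?e :user/name "7"]       (~' escapes the resolution)
['?e :user/name "7"]    ;=> [?e :user/name "7"]       (plain data, no syntax-quote needed)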
I’m thinking about experimenting with GraphQL in clojure and clojurescript. What libraries would you recommend to make that experience more pleasant?
What’s your interest in GraphQL? Simplifying getting data to/from single-page apps, GraphQL itself, or something else?
I think Lacinia is the big one https://lacinia.readthedocs.io/en/latest/
You might also consider looking into EQL, an alternative to GraphQL that uses EDN on the wire, but with very similar semantics. If you go that route, look into Pathom. https://github.com/wilkerlucio/pathom
@ctamayo I was looking into alumbra. How would they compare? And what seems to be the best client-side option (cljs)
Lacinia is in active use and development. Alumbra seems stale. GraphQL is still changing, so if you want to keep up with the ecosystem, that matters.
nice thank you @U7PBP4UVA
Not familiar with Alumbra, I was honestly just throwing out names I've heard. 😅 Most of my CLJS work is in re-frame these days so the next time I have to speak GraphQL I will probably go with re-graph which is specific to re-frame.
ah, ok. Lacinia and re-graph seem to have a connection as well. What would be the argument for using EQL instead of GraphQL? @ctamayo