This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2020-04-13
Channels
- # announcements (3)
- # babashka (130)
- # beginners (73)
- # calva (22)
- # cider (46)
- # cljdoc (18)
- # cljs-dev (196)
- # cljsrn (18)
- # clojure (255)
- # clojure-europe (2)
- # clojure-finland (8)
- # clojure-gamedev (1)
- # clojure-germany (2)
- # clojure-losangeles (6)
- # clojure-nl (1)
- # clojure-spec (16)
- # clojure-uk (33)
- # clojurescript (32)
- # community-development (1)
- # conjure (40)
- # core-logic (11)
- # cursive (4)
- # datascript (8)
- # devcards (17)
- # emacs (21)
- # exercism (2)
- # fulcro (29)
- # funcool (15)
- # graalvm (18)
- # jobs (17)
- # jobs-rus (1)
- # lambdaisland (1)
- # lumo (1)
- # malli (19)
- # off-topic (15)
- # pathom (22)
- # quil (7)
- # re-frame (3)
- # reagent (3)
- # shadow-cljs (14)
- # spacemacs (41)
- # specter (2)
- # sql (5)
- # tree-sitter (1)
- # unrepl (16)
- # vscode (3)
- # xtdb (11)
- # yada (1)
If you are going to parse command line args is tools.cli
still the suggested way to go? https://github.com/clojure/tools.cli
@danielglauser I might be a little bit biased but, yes, I don't know of a better alternative 🙂
Bias is fine by me. Thanks Sean 🙂
A question about record use: I'm going through some katas (like Kruskal's and Prim's algorithms, etc.) where I solved the need to update and keep in sync multiple data structures with mutation (atoms), or by simply passing that problem off to the caller .. but in a few immutable versions online I see some people recursively generating a new record and passing the new structure to it when needing to build up a map or other data structure. Is this performant? It seems like it would create potentially thousands of objects.
Those are data structures, I'd be surprised to see creation of thousands of map instances as you conj onto one. Perhaps that's what happens, seems unlikely.
That's about how many instances, not whether the instance is a record or a map, etc.
Yes. So when someone is recursively creating Records inside the record to build up a map that feels non-performant. Is that a correct assumption?
..or at least overly memory intensive.
[updated so I don't have that discussion with a ton of people but not the point of the question]
Interesting. Would have thought the left would not create new overhead (instances) where the right clearly would.
I was under the impression that a persistent data structure (the bit partitioned hash tries) would simply join new roots onto the tree. Will find the assoc code and see what's going on.
Or join new structures into the tree. Not create new trees all the time.
Yep, it does do structural sharing, but the root is new, as is the path from the root to the changed value. https://upload.wikimedia.org/wikipedia/commons/thumb/5/56/Purely_functional_tree_after.svg/1920px-Purely_functional_tree_after.svg.png
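The structural sharing described here can be checked directly at the REPL with `identical?` (a minimal sketch):

```clojure
;; assoc copies only the path from the root to the changed slot;
;; untouched subtrees are shared between old and new values.
(def v1 [[1 2] [3 4]])
(def v2 (assoc v1 0 [9 9]))

(identical? (nth v1 1) (nth v2 1)) ;=> true, the [3 4] subtree is shared
(= v1 [[1 2] [3 4]])               ;=> true, the original is untouched
```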
Yes, had assumed that under the hood a single map instance would represent that structure as it changed. So the assertion is that both assoc, by extension reduce, et al. .. and a record creating new instances of itself (recursively) are equivalent in performance and memory footprints. If so then I'll add that as something that isn't odd.
Re: the jvm ... I've seen enough JVM OOM / memory pressure caused by similar recursive object creation (in web scale platforms) to be leery when I see it.
> Yes, had assumed that under the hood a single map instance would represent that structure as it changed.
That behaviour is available via transient
. See https://clojure.org/reference/transients
@US893PCLF The OOM happens when there is a memory leak, not in normal usage with this sort of data structure.
We run a dozen Clojure apps in production, many with just 1GB heaps and they run for weeks and weeks solidly, despite all this object creation.
That's good to know. The system I'm refactoring off of Java to Clojure can't run for a day without OOM (well, it can now of sorts) and we're running 16-20GB heaps (granted, it's mostly the data model). So OOM's are near to my heart lol. Thx @U5ZNLFCQ7 ... transients seem useful for some of the things we actually do (transform large maps).
Note that transients are for when a fn is making repeated modifications to the "same" data structure. They should not be passed around.
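As a sketch of that rule of thumb: the transient is created, mutated, and made persistent all inside one function, so it never escapes (the function name is made up for illustration):

```clojure
(defn zipmap-fast
  "Illustrative only: builds a map from keys and vals using a
  transient that never leaves this function."
  [ks vs]
  (persistent!
    (reduce (fn [m [k v]] (assoc! m k v))
            (transient {})
            (map vector ks vs))))

(zipmap-fast [:a :b] [1 2]) ;=> {:a 1, :b 2}
```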
Agree, if you don't measure you shouldn't be optimizing... not that you should fly blind anyway.
We've also moved from large legacy systems that required 10-15GB heaps to run for any length of time. We started using Clojure a decade ago and we've been very happy as Clojure has replaced each of our legacy systems.
Very cool. I am moving one of the largest media companies' content management and streaming systems to Clojure (and thus will bear the cost if it isn't successful). It's good to have that kind of anecdotal (from experience) information to point to.
The only area you should be wary of with clojure's data structures is laziness. If you have a tail-recursive process that builds up a data structure eagerly, generally maps, vectors, and sets perform pretty well
for all the reasons mentioned above (the JVM does pointer bumps for allocations most of the time, so new, small objects are cheap, so each operation is generally fast, and structural sharing means it's linear too)
but if you build up something in a lazy sequence and expanding that sequence would take up stack space proportional to the size of the data structure, then you can run into issues
Another point about performance, if it's really critical - avoid merge
on your tight loop
Both interesting observations. How would merge be worse than map or reduce? ... and on lazy sequences and concat, that is a pretty wild bug (read Stuart Sierra's comments on it) .. wonder why it was never patched (beyond this https://gist.github.com/jondistad/2a4971fe8948ca2f6ba0)
> how would merge be worse than map or reduce? That's odd to me too, given that merge is implemented via reduce.
lazy sequences and concat is usually dealt with via lazy-cat
. The more common problem I've seen is holding onto the head of a lazy seq. Often this isn't noticed in testing, but then production-sized data causes a realized seq that consumes a lot of memory. E.g.:
; bad
(let [xs (some-lazy-seq ...)
... ...]
(doseq [x xs]
...))
vs
; good
(let [... ...]
(doseq [x (some-lazy-seq ...)]
...))
Why is merge slow? • It always iterates over an unknown sequence of maps • it merges maps via conj • instead of discarding nil values it conjs an empty map • Just to ballpark how slow merge is, merging two maps with 10 members takes ~1.5us. That's fast in human time but in relation to other clojure functions it's super slow
For comparison, this is about 20% faster:
;;; Credit Metosin
;;;
(defn fast-assoc
"Like assoc but only takes one kv pair. Slightly faster."
{:inline
(fn [a k v]
`(.assoc ~(with-meta a {:tag 'clojure.lang.Associative}) ~k ~v))}
[^clojure.lang.Associative a k v]
(.assoc a k v))
;;; Credit Metosin
;;;
(defn fast-map-merge
"Returns a map that consists of the second of the maps assoc-ed onto
the first. If a key occurs in more than one map, the mapping from
the latter (left-to-right) will be the mapping in the result."
[x y]
(reduce-kv
(fn [m k v]
(fast-assoc m k v))
x
y))
@UK0810AQ2 That's premature optimization unless you've already measured/profiled and established merge
is a bottleneck (unlikely). For the vast majority of programs, that sort of optimization is pointless and moves you away from standard, readable, maintainable Clojure. The Metosin folks do a lot of crazy stuff to speed Clojure up -- they're kind of obsessed with performance-at-any-cost. Telling people to "avoid merge" is not helpful advice -- there are lots of core functions that can be optimized for specific cases but you wouldn't want Clojure programs to be made up of customized non-core functions everywhere 😞
@U04V70XH6 I agree, which is why I did not say to avoid merge at any cost, but: > avoid `merge` on your tight loop i.e. you already know that's where you're spending cycles. I guess my perspective is also colored by my day-to-day experience, where I develop in Clojure and performance of long-running processes is a concern > customized non-core functions everywhere That's sort of the capstone of clj-fast which I'm working towards - providing the most idiomatic extensions to core functions via inlining such that the API will stay exactly the same and the customized code will be generated by macros opportunistically. But I digress. I said in another thread that Clojure's core is good. I don't think I'd replace any part of it. I was more trying to shine a spotlight on areas of interest for people who might be concerned with performance, even at the cost of writing some custom code
Fair enough. Tommi Reiman provided a lot of awesome performance-related feedback on next.jdbc
while I was developing that and I've certainly got nothing against making stuff faster but the core team are very conservative about changes and always want to ensure that something that makes "use case A" faster doesn't also make "use case B" slower. In the end, with next.jdbc
, the "happy path" isn't quite the fastest possible but there are extension points that allow faster result set builders so Tommi's use case could be satisfied.
> Why is merge slow?
This would be more helpful as "Why is merge slower than X?" Prior to the example code above, I would not have thought "X" was "using a bespoke implementation of merge
/`assoc`".
I asked this yesterday but I think my comment got occluded by other discussions. Reposting to see if someone has any useful input :) A question about transients: I have a grid data structure represented as a vector of vectors, and I find myself updating it quite frequently. I would like to use transients to make the tight loops faster (and avoid unnecessary allocations!), but since it's a nested structure, I'm a bit confused. Should I recursively convert all the inner vectors into transients, then the outer vector, and then recursively convert everything back to persistent?
How bad is naive non-transient performance?
Just for a bit of context (even though I must admit it's a bit tiring having to justify myself each time I ask about optimization techniques...)
I'm working on a game with Arcadia (Clojure CLR on top of Unity). I have some code that runs on the update loop and is generating a ton of garbage (CLR's GC is worse than JVM's). In a frame where this particular function runs, there are GC pauses creating huge performance spikes, dropping from roughly 60fps (more than acceptable) to less than 30 (noticeably worse).
Since my function is a reduction that basically assoc-in
s to a vector "grid", in the shape I described, I was hoping transients would help me alleviate all the unnecessary intermediate allocations.
I'm still interested in keeping the full grid before and after the operation as an immutable object, so using arrays would be a huge downgrade in usability.
Don't feel bad, optimization has its place and Clojure is a hosted language and it would be a disservice to it not to reach down to the host platform when needed
second, even if using an array, you can populate a new array with the results instead of doing it in-place
That way you can still save all the grid states, because you always get a new grid, and you generate less intermediate garbage
I suspect that trying to use nested transients in Clojure might require a bit of care.
For example, if you have a vector of vectors, calling transient
on one of the child/inner vectors returns a new object, which should ideally be replaced in the parent/outer vector.
Every assoc!
call can perhaps return a different object than the one you passed to it, and so to be safest ought to cause a replacement in the parent/outer transient vector after every assoc!
call on a child/inner vector.
You could 'batch' many assoc!
calls to a single inner/child vector, and then do a single assoc!
call on the parent/outer vector to replace it, rather than doing assoc!
on the parent/outer vector each time.
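A rough sketch of that batching idea (the helper name and the shape of `writes` are made up for illustration):

```clojure
(defn update-row!
  "Hypothetical sketch: batch several writes into one transient row,
  then assoc! the finished row back into the transient grid once.
  writes is a seq of [col value] pairs."
  [tgrid row-idx writes]
  (let [row (transient (nth tgrid row-idx))
        row (reduce (fn [r [c v]] (assoc! r c v)) row writes)]
    (assoc! tgrid row-idx (persistent! row))))

(let [g (transient [[0 0] [0 0]])]
  (update-row! g 0 [[0 1] [1 2]])
  (persistent! g))
;=> [[1 2] [0 0]]
```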
@UK0810AQ2 that makes sense. Probably arrays are the way to go for those tight loops. I was asking more out of curiosity than anything else, since I would still prefer having a more functional-like but still performant solution.
@U0CMVHBL2 thanks! It indeed looks a bit messy having to do all this... It would be awesome having built-in support for something like this in the language though... Having "flattened" data structures is so uncommon for me in day-to-day clojure that I always give up every time I try to reach for transients.
If you know the dimensions of your 2d vector, e.g. every row is the same number of elements as each other, so m rows of n elements each, you could create a single Clojure vector where accessing element (x,y) means accessing element (x*n + y) of the 1d vector.
Then there would never be two 'levels' of Clojure vector to worry about transients for.
but may not be a good fit for your use case for other reasons.
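A sketch of that flattening, with accessor helpers hiding the index arithmetic (names are invented here):

```clojure
(defn grid-get
  "Reads element (x, y) of an m x n grid stored as one flat vector."
  [grid n x y]
  (nth grid (+ (* x n) y)))

(defn grid-set
  "Returns a new flat grid with (x, y) replaced; one level of
  structural sharing, no nested vectors to keep in sync."
  [grid n x y v]
  (assoc grid (+ (* x n) y) v))

(let [g [1 2 3
         4 5 6]]                          ; 2 rows x 3 cols, n = 3
  (grid-get (grid-set g 3 1 0 40) 3 1 0)) ;=> 40
```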
Indeed, that's what I meant by flattened structures. I could do this, but then I need to choose between a more cumbersome API, or encapsulate everything behind a set of accessor functions. And if I have to encapsulate things anyway, I can just go the full way and use arrays.
@UP0Q30S10 I don't know if there is a more functional-like solution with reasonable performance. To be functional you need immutable values. For immutability you need either copying or structural sharing, both of which are terrible for your use-case. I'd even suggest a further optimization, assuming you only need a before and after array: never copy the 2d array. Allocate two 2d arrays in advance; when you calculate the new array, just swap new to be the old, and do the manipulations on the old-now-new array. Zero garbage or new allocations
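That double-buffering scheme might be sketched like this on the JVM side (flat long arrays stand in for the grid; the function name is made up):

```clojure
;; Two pre-allocated buffers: compute the next state into `back`,
;; then swap the roles of front and back. Zero per-frame allocation.
(defn step!
  "Illustrative: writes (f front[i]) into back[i] and returns
  [back front], i.e. the buffers with roles swapped."
  [^longs front ^longs back f]
  (dotimes [i (alength front)]
    (aset back i (long (f (aget front i)))))
  [back front])

(let [a (long-array [1 2 3])
      b (long-array 3)
      [front _] (step! a b inc)]
  (vec front)) ;=> [2 3 4]
```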
I feel it goes against the philosophy of the language: We have lots of standard functions that operate seamlessly across different data structures. But as soon as you need performance, you have to abandon all that and resort to a very reduced set of low level functions
@UK0810AQ2 Indeed! In fact, that trick is used a lot in the excellent book "Deep Learning for Programmers (in Clojure)".
I wouldn't say it goes directly against the philosophy of the language because clojure is also very pragmatic
And that assumption holds reasonably well as long as you don't build up huge collections in your tight loop 🙂
I also experimented with it myself and was able to implement sort of "loop-unrolling" macros for core clojure functions for more performance
you can look https://github.com/bsless/clj-fast
Nice! I was aware of the clj-fast library but I didn't know I was speaking to the author 😅
"the author" 🙃 I really think anyone could have done it, nothing magical (besides the lenses, maybe)
I think even the interfaces are the same so the interop code should work just fine, but I never gave CLR clojure a deep look
There are some specific namespaces which rely on java collections, but the inline
namespace should work
Yes, it's more or less a 1:1 translation from Java, so all the interfaces should match. I see just a bit of JVM-specific stuff in concurrent_map.clj
and map.clj
it indeed looks portable... I will try to see if I can make it work with a few reader conditionals 😄
But back to your original issue, I don't know how it would look different, for example, in a fully-lazy language. Maybe a different way to think about it is of every index as a series developing in time, if it's independent of other indices, but if not you're getting into matrix operations and at that point you're doing place-oriented-programming anyway and there it's better to accept mutability as a fact of life
Yeah.. I don't think looking for a "fully functional" approach is required. IMO, as long as we can treat it as pure from outside I don't care what happens on the inside to make it run faster. In fact that's a common promise that people make when you are taught any functional language (e.g., Haskell): "Look at all the crazy optimizations we can do if we embrace immutability!" But then none of them are actually done in practice... That local mutability is the promise behind transients, but then I guess it falls a bit short on delivering it fully 😅 I know it's not easy either. I would just love if the compiler analyzed the code and if it knew I'm not going to reference back the old value after an "assoc" so it just mutated it behind the scenes.
The good and bad of the clojure compiler is that it's dead simple. It has no way of knowing that value isn't going to be referenced outside
I’d be interested to see how an implementation using specter would compare. https://github.com/redplanetlabs/specter
> In addition, Specter has performance rivaling hand-optimized code (see https://gist.github.com/nathanmarz/b7c612b417647db80b9eaab618ff8d83). Clojure’s only comparable built-in operations are get-in
and update-in
, and the Specter equivalents are 30% and 85% faster respectively (while being just as concise).
I think specter does a lot of this internal optimizations itself, like unrolling a (get-in m [k1 k2 k3])
into (-> m k1 k2 k3)
when things are known statically
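For keyword keys, the unrolled form is just nested lookups, which skips the per-step seq walking that `get-in` does at runtime:

```clojure
(def m {:a {:b {:c 42}}})

;; generic: walks the key sequence at runtime
(get-in m [:a :b :c]) ;=> 42

;; statically unrolled equivalent
(-> m :a :b :c)       ;=> 42
;; i.e. (:c (:b (:a m)))
```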
For reasons https://github.com/borkdude/babashka/issues/263#, I changed https://github.com/clj-commons/ordered/commit/9f7d36352827777d5f2316c133fe62ddb27a18d3. From the perspective of flatland/ordered, this seems totally fine. However, when using this in clj-yaml (https://github.com/clj-commons/clj-yaml/pull/10) The build breaks with
Reflection warning, flatland/ordered/map.clj:113:5 - reference to field size on clojure.lang.IPersistentMap can't be resolved.
Reflection warning, flatland/ordered/map.clj:117:5 - reference to field isEmpty on clojure.lang.IPersistentMap can't be resolved.
Reflection warning, flatland/ordered/map.clj:119:5 - reference to field keySet on clojure.lang.IPersistentMap can't be resolved.
This leads me to at least two questions:
1. What introduced these reflection warnings?
2. How do I call size
, isEmpty
, and keySet
on an ordered-map?
`(size (ordered-map))` tells me
flatland.ordered.map> (size (ordered-map))
CompilerException java.lang.RuntimeException: Unable to resolve symbol: size in this context, compiling:(*cider-repl )
flatland.ordered.map>
@slipset It seems the ordered library is missing reflection warnings. I put (set! *warn-on-reflection* true)
at the top and those warnings appear when I run the tests. This doesn't fix your problem, but it might be a good practice to include it.
@slipset Maybe just doing this fixes it:
user=> (def x {})
#'user/x
user=> (.size ^java.util.Map x)
0
user=> (.size x)
Reflection warning, NO_SOURCE_PATH:1:1 - reference to field size can't be resolved.
0
@slipset I see there are more such type hints, like here:
(if-let [^MapEntry e (.get ^Map backing-map k)]
(.val e)
not-found)
I do not know the answer yet. Have you tried looking at a macro-expanded version of the older definition of the ordered map data structure that used the delegating-deftype macro, to see how that differs with the new deftype?
I think I understand. this delegating-deftype
takes an options map:
{backing-map {IPersistentMap [(count [])]
Map [(size [])
(containsKey [k])
(isEmpty [])
(keySet [])]}}
which it probably uses to figure out which methods belong to which interfaces and uses that to "delegate". I am looking at the output of (pprint (macroexpand-1 (delegating-deftype ...)))
from the former version of the ordered library, and comparing it against the latest version of the ordered library. I am pretty sure that if there is any metadata in the macro expansion, pprint
will not show it, so I could be missing that in what I am looking at.
I do see one difference that I do not know whether it makes a difference yet: the delegating-deftype
version of the code produces this definition for the Map
interface's method size
: (size [_] (. backing-map (size)))
. The latest version of the ordered lib has this in its place: (size [this] (.size backing-map))
. I doubt the _
vs. this
makes a difference, but I wonder whether the other syntax differences might make a difference.
It isn't clear to me yet what size
method in what JVM class or interface was being resolved to in the former version of the ordered lib code there.
Adding a type hint ^Map
to the three occurrences of backing-map
in the three methods that give the reflection warning, eliminates the reflection warnings, and might be what the former version of the ordered library code was effectively doing?
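Concretely, the hint-based fix could look like this toy sketch (a made-up wrapper type, not the actual ordered-map code): hinting the `backing-map` field with `^Map` lets the compiler resolve `.size`/`.isEmpty`/`.keySet` without reflection.

```clojure
(import '(java.util Map))

;; Minimal sketch: a wrapper whose Map methods delegate to a
;; type-hinted backing map, so no reflective calls are emitted.
(deftype Wrapper [^Map backing-map]
  Map
  (size    [_] (.size backing-map))
  (isEmpty [_] (.isEmpty backing-map))
  (keySet  [_] (.keySet backing-map)))

(.size (Wrapper. (java.util.HashMap. {"a" 1}))) ;=> 1
```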
https://github.com/clj-commons/useful/blob/master/src/flatland/useful/experimental/delegate.clj#L56
Sorry, I'm slow. I see that the PR you mention above suggests this change. I do not see any problems with that change.
for example, the latest solutions on lispcast: https://gist.github.com/ericnormand/6e1a9d9135fc4f5eb7776066e4db9de7#file-eric-normand-clj
The solution iterates over all possible subsets and filters out those that are needed. This style is typical in Clojure, but it lacks branch cutting, hence redundant work is performed.
I'm about to do something highly forbidden and dark magic. I receive a bunch of JSON data and I want to generate functions in namespaces and write that to disk. The current solution I'm using is string templates and interpolation, then just spit
that to disk. Are there any cleaner solutions?
Previously I tried to get this to work differently by having lists and pprint
them to disk, but the files I'm writing need to have function calls that are not available in the context that reads/spits the files, so I ended up with the string interpolation solution. But there might be a better solution out there
Sounds fun. Templates are probably your safest bet, otherwise you’re getting into parsing your own language territory. If you want it to run faster and load the new code into memory you can look at this with the requisite caveat emptor warning: https://clojuredocs.org/clojure.core/read
Is there any option to write logs into a file using clojure.tools.logging
?
Yes. You select an implementation from here: https://github.com/clojure/tools.logging#selecting-a-logging-implementation Then configure that implementation to write to a file.
Thank you
While I’m at it. Is there a difference between (. backing-map (size))
and (.size backing-map))
apart from the syntax?
the latter is sugar for the former
the first one is the . special form, which supports many interop cases (instance fields, static fields, instance invocation, static invocation)
the other stuff is all sugar to make it nicer for specific cases
it's implemented in the compiler, not in a macro, if that's what you're asking
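The desugaring (done in the compiler) can still be observed via `macroexpand`:

```clojure
(macroexpand '(.size m))     ;=> (. m size)
(macroexpand '(Math/abs -1)) ;=> (. Math abs -1)
```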
What is the idiomatic way to maintain config files in a Clojure project? I’m currently thinking of having a `config.edn` file in the root of my lein project that I’ll parse with `(edn/read-string (slurp "config.edn"))` . Are there better ways?
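A minimal sketch of that approach; using clojure.edn rather than clojure.core/read-string means nothing in the config file can be evaluated as code (the namespace and file name here are just examples):

```clojure
(ns myapp.config
  (:require [clojure.edn :as edn]
            [clojure.java.io :as io]))

(defn load-config
  "Reads an EDN config file from the classpath, falling back to a
  plain file path. clojure.edn never evaluates code, unlike
  clojure.core/read-string."
  [name]
  (edn/read-string (slurp (or (io/resource name) name))))

;; (load-config "config.edn") ;=> e.g. {:port 8080}
```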
Trying to sort a map of maps based on the value of a key. Also, the value I want to sort by needs to be pulled from a string that is an alpha code followed by the integer. I have already gotten the integer out with a regex; just having a hard time with the sort
{:expand "example string",
:self "",
:key "DRO-61",
:fields {:description "example string",
:duedate nil,
:iconUrl "example.url",
:name "To Do",
:id 2,
:key "new",
:colorName "blue-gray",
:name "To Do"}}}}
You said "map of maps". All I see is a nested map that's hard to read due to formatting and that has unbalanced closing braces.
You either need sorted-map-by
with the right comparator or sort-by
with the right key function that extracts that number from :key
.
{{:a "ABC-03"
:b "string"
:c "string"}
{:a "ABC-01"
:b "string"
:c "string"}
{:a "ABC-02"
:b "string"
:c "string"}}
crap im sorry it is
[{:a "ABC-03"
:b "string"
:c "string"}
{:a "ABC-01"
:b "string"
:c "string"}
{:a "ABC-02"
:b "string"
:c "string"}]
Ah, now it's clear. You need sort-by
with a key function that just extracts that number. Do you know how to write that?
(re-find #"\d+$" :a)
is what I was trying to do but I feel like im close but no cigar
How do you want to compare those numbers? As regular numbers without leading zeros, or as octal (base-8) numbers (because of the leading zero), or as strings?
Try (sort-by #(->> % (re-find #"[1-9]\d*$") Integer/parseInt) your-collection-of-maps)
.
ClassCastException clojure.lang.PersistentArrayMap cannot be cast to java.lang.CharSequence clojure.core/re-matcher (core.clj:4789)
lol you know, in my head i was like “what kinda clojure magic is he doing where he doesn’t need the key?”
returns
({:a "ABC-01", :b "string", :c "string"}
{:a "ABC-02", :b "string", :c "string"}
{:a "ABC-03", :b "string", :c "string"})
I didn't have an answer to this, so I'm going to try again: In normal JVM Clojure when executing a script, is there a way of detecting whether you're being called with a main argument -m foo or just executing as a script file?
(defn -main [& args]
(println "args" args))
(when ... ;; we're running as a script
(apply -main *command-line-args*))
I'm considering setting a system property in babashka so you can do (when-not (System/getProperty "babashka.main") (apply -main *command-line-args*))
unless there are already patterns in Clojure to handle this.
In shell scripts you can take a look at $0 to see the name under which the script was invoked. Maybe you can do something similar?
This appears to work:
$ bb -e '(System/getenv "_")'
"/Users/borkdude/Dropbox/bin/bb"
But you only get the program name, not all the raw arguments.
Could you check the stack trace and see which entrypoint you came through?
@U04V15CAJ would the sun.java.command
work for you?
These two print its content:
$ clj hello.clj
...
clojure.main hello.clj
$ clj -m hello
...
clojure.main -m hello
Everything starting with sun
is considered private / implementation detail. Therefore it's not supported / present in GraalVM binaries.
I have the following build.boot file,
(set-env!
:resource-paths #{"src"}
:dependencies '[[me.raynes/conch "0.8.0"]
[boot.core :as boot]])
(task-options!
pom {:project 'myapp
:version "0.1.0"}
jar {:manifest {"Foo" "bar"}})
(boot/deftask cider "CIDER profile"
[]
(require 'boot.repl)
(swap! @(resolve 'boot.repl/default-dependencies)
concat '[[org.clojure/tools.nrepl "0.2.12"]
[cider/cider-nrepl "0.15.0"]
[refactor-nrepl "2.3.1"]])
(swap! @(resolve 'boot.repl/default-middleware)
concat '[cider.nrepl/cider-middleware
refactor-nrepl.middleware/wrap-refactor])
identity)
following this documentation: https://github.com/boot-clj/boot/wiki/Cider-REPL
However, upon doing cider-jack-in I get the error "refusing to run as root. set BOOT_AS_ROOT=yes to force.", and yet after doing export BOOT_AS_ROOT=yes, I get the same error. What's wrong?
Maybe you can insert (prn (System/getenv "BOOT_AS_ROOT"))
in your build.boot file just to make sure it's set correctly. maybe also try in #boot
And my user.clj has this nice (go) function (def go reloaded.repl/go) that reruns my server
However, when I change the namespace and run (go), I get: java.lang.RuntimeException: No such var: user/go
and my boot config is
(set-env!
:resource-paths #{"src" "dev"}
:source-paths #{"dev"}
:dependencies '[[me.raynes/conch "0.8.0"]
])
user.clj needs to be on the classpath when the Clojure runtime is initialized in order to be loaded
Some info here: https://clojureverse.org/t/how-are-user-clj-files-loaded/3842 EDIT: Sorry this does not address the issue specific to boot.
@hiredman also, this isn't the case when in a lein project, where
:source-paths ["dev"]
just works@hindol.adhya that looks too complicated for such a small issue
I've not used boot, but what I feel hiredman is saying is, you need to compile user.clj beforehand and then run the boot task.
I get: java.io.FileNotFoundException: Could not locate dev/user__init.class or dev/user.clj on classpath.
java.lang.ClassCastException: java.io.File cannot be cast to java.lang.String clojure.lang.Compiler$CompilerException: java.lang.ClassCastException: java.io.File cannot be cast to java.lang.String, compiling:(user.clj:1:1)
I think I get what he said. He said when Clojure runtime is initialized, dev
itself is not in your classpath.
What happens when you evaluate the whole user.clj file in the boot REPL, instead of just (go)?
And looking at the error you posted, seems there's some issue with your user.clj as well. Possible to post it somewhere?
and that myapp/application.clj is in the src directory, which I have in my resource-paths
Seems something wrong with the setup itself. Do you have a particular reason to use boot?
There's a #boot channel where folks are more likely to be able to help. Not many people use Boot these days.
I recognize this is probably just a question of taste, but I'm curious: Which of the following feels clearer? Option 1:
(if (test? x)
(change x arg)
x)
Option 2:
(cond-> x (test? x) (change arg))
(I wish there were a variant of cond->
that threaded through the predicate as well as the expression: (condp-> x test? change)
)
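That wished-for variant is easy to sketch as a macro (hypothetical helper, not a core function; it expands into the same shadowing-`let` shape that cond-> uses):

```clojure
(defmacro condp->
  "Hypothetical: like cond->, but each test is a predicate applied
  to the threaded value, e.g. (condp-> x test? change)."
  [expr & clauses]
  (let [g (gensym "x")]
    `(let [~g ~expr
           ~@(mapcat (fn [[pred step]]
                       [g `(if (~pred ~g) (-> ~g ~step) ~g)])
                     (partition 2 clauses))]
       ~g)))

(condp-> 5 odd? inc) ;=> 6
(condp-> 4 odd? inc) ;=> 4
```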
I did this the other day :grinning_face_with_one_large_and_one_small_eye:
(as-> x x
(cond-> x (or (tspec/year? x)
(tspec/year-month? x)) t/beginning)
(cond-> x (tspec/local-date? x) t/midnight)
(cond-> x (or (tspec/local-date-time? x)
(tspec/instant? x)
(inst? x))
(t/in (t/zone))))
note: the word "change" doesn't usually appear in Clojure source, because it implies mutate-in-place
While we're talking about cond->
, is folding updates that should always happen into cond->
forms good style or bad style?
(cond-> x
(test1? x) (sometimes-update1 args1)
(test2? x) (sometimes-update1 args2)
true (always-update))
I usually just do (cond-> (always-update x) ,,,)
or similar
Right on! I'm looking at how to handle situations where the order of updates matters, though.
I have sometimes used true (always-update)
inside a cond->
, so I wouldn’t say it’s terrible.
I tend to use :always
instead of true
but I think it's reasonable for occasional use. Anything more complex, I break down into multiple bindings or I write specific transformation functions that encapsulate the conditional logic.
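Since any truthy test fires the clause, `:always` reads as self-documenting (a small sketch with made-up functions):

```clojure
(defn normalize
  "Illustrative: flips negatives conditionally, then always incs."
  [x]
  (cond-> x
    (neg? x) (-)     ; sometimes
    :always  inc))   ; always

(normalize -3) ;=> 4
(normalize 3)  ;=> 4
```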
What are my options if I want to control certain aspects of a running program at runtime? For example, a "custom backoff multiplier" config value or something like that. JMX (writeable) operations are not currently supported in Clojure. Remote REPL to production to change some atom - this doesn't seem to work for clustered environments (or does it?). Yes, I can probably store this data in the database, but I was looking for something on a higher level (web server? JVM?) with faster access time.
especially in a cluster environment, how about writing the updates to something like a Kafka topic that all the instances subscribe to, each keeping a process that updates the configuration atom from there?
But I wonder if it's possible to implement a sort of multi-repl which connects to multiple machines in parallel :thinking_face:
there's also zookeeper, which kafka builds on top of, and is directly meant for synchronized shared changing state
but the kafka API is nicer to use (if more complexity ops wise...)
I'm also not convinced that one can't use JMX from Clojure, but I don't know JMX well enough to have a counterexample handy
could have an atom that has current config in it, and a background process that periodically swaps the latest value from $database
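That poll-into-an-atom pattern might be sketched like this (`fetch-config` is a placeholder for whatever reads the database or KV store):

```clojure
(defonce config (atom {:backoff-multiplier 2}))

(defn start-config-poller!
  "Illustrative: periodically replaces the config atom's value.
  fetch-config stands in for a DB/Consul/etc. read."
  [fetch-config interval-ms]
  (doto (Thread.
          (fn []
            (while true
              (try
                (reset! config (fetch-config))
                (catch Exception _))     ; keep last-known-good config
              (Thread/sleep interval-ms))))
    (.setDaemon true)
    (.start)))

;; Hot paths just deref: (:backoff-multiplier @config)
```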
Would you consider subscribing to a queue topic (kafka or some mq) as push or pull?
and all the individual proxies download their http route tables and config from the brains, periodically
eventual consistency for db transactions is a bad idea, but eventual consistency for cluster config is a great model, IMHO
@U0JEFEZH6 I guess you were referring to the "Consul KV" service, right? HTTP API interface must be pretty handy for the cases when it is not requested too frequently. Thanks for the idea, last time I used Consul they didn't have that product
@U0113AVHL2W yes, http://consul.io - if I understand requirements correctly, it fits the bill. It's really solid and easy to deploy, something I cannot say about Kafka (I wouldn't recommend it for this use case). If you want push/pull - Consul supports notifications/watches as well