2020-05-01
Channels
- # announcements (3)
- # babashka (17)
- # beginners (163)
- # bristol-clojurians (1)
- # calva (18)
- # chlorine-clover (17)
- # clj-kondo (13)
- # cljs-dev (50)
- # cljsjs (3)
- # cljsrn (13)
- # clojure (218)
- # clojure-dev (5)
- # clojure-europe (9)
- # clojure-italy (10)
- # clojure-nl (8)
- # clojure-uk (107)
- # clojurescript (25)
- # conjure (163)
- # cursive (63)
- # data-science (9)
- # datomic (38)
- # docker (1)
- # figwheel (34)
- # figwheel-main (3)
- # fulcro (15)
- # graalvm (1)
- # helix (12)
- # jobs (3)
- # juxt (5)
- # kaocha (3)
- # lein-figwheel (2)
- # leiningen (6)
- # luminus (2)
- # malli (1)
- # meander (12)
- # nrepl (4)
- # rdf (2)
- # re-frame (2)
- # reagent (7)
- # reitit (5)
- # remote-jobs (2)
- # rum (1)
- # shadow-cljs (65)
- # spacemacs (27)
- # tools-deps (18)
- # vim (19)
- # xtdb (2)
Are we talking Emacs? Otherwise I think you can use zprint, it's a total formatter, so it'll rewrite everything exactly as defined
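(For reference, a small sketch of that idea, assuming the zprint library is on the classpath; the :parse-string? option tells it the argument is source text rather than data:)
(require '[zprint.core :as zp])
;; reformat a string of source code according to zprint's rules
(zp/zprint-str "(defn foo[x](let[y 1](+ x y)))" {:parse-string? true})
;;=> something like "(defn foo [x] (let [y 1] (+ x y)))"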
Is there a particular reason CLJ does not have the #queue reader macro unlike CLJS?
Most likely that it is simply lower priority on the Clojure core team's list than other things.
You can define a reader tag of your own that achieves the desired effect.
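(A minimal sketch of that, using a hypothetical my/queue tag - the tag and namespace names are made up; data_readers.clj has to sit at the root of the classpath:)
;; data_readers.clj
{my/queue my.readers/read-queue}

;; src/my/readers.clj
(ns my.readers)

(defn read-queue
  "Reader function: builds a clojure.lang.PersistentQueue from a vector literal."
  [coll]
  (into clojure.lang.PersistentQueue/EMPTY coll))

;; usage: #my/queue [1 2 3] reads as a queue containing 1 2 3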
Note: My "most likely" above is better stated as "My best guess is". I am not a part of the Clojure core dev team, nor can I read their minds.
I guess so. The problem with defining your own reader is that things get messy when your code base gets mixed with another project.
Hi all, the output of a script I'm writing is a blob of json, but a blob that is meant for human editing/reading. I'd like to output it to the file pretty-printed. I'm using clojure/data.json, which has pprint, but that goes to *out*. What's the (I'm sure one-liner) idiomatic way of doing that? Right now I'm just doing (spit "filename.json" (json/write-str result)).
👍
if you are in clojure rather than cljs, you can use the second arg of pprint to supply a writer
but often with-out-str is easier code wise - just don't forget the writer option in other cases
Thanks. I didn't know about that second arg (or, rather, I saw it, but didn't know how to use it). I'll keep it in the toolbox, though. For this particular simple case with-out-str worked fine.
result is edn at that point.
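(For reference, a minimal sketch of the two approaches discussed above, assuming clojure.data.json is aliased as json and result is already computed:)
;; capture what json/pprint sends to *out*, then spit the string:
(spit "filename.json" (with-out-str (json/pprint result)))

;; or rebind *out* to a file writer so json/pprint streams straight to disk:
(require '[clojure.java.io :as io])
(with-open [w (io/writer "filename.json")]
  (binding [*out* w]
    (json/pprint result)))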
Hmmm. I’ve never done any “interactive” REPL utilities and I’ve got a question - how do you do (read-line) in a Cider REPL but with “local echo”, so I can see what I am typing?
that sounds like an emacs question, as (read-line) in a standard repl or client does echo as expected
the fix won't be in clojure, it will be in elisp
Hey all -- which talk was it where one of the Stus talked about how "repl-driven development" is for them never done with an actual REPL window?
I'm guessing it's Running with Scissors
@worlds-endless it was mentioned on the ClojureScript podcast I think, but I think https://www.youtube.com/watch?v=Qx0-pViyIDU is what you're after
Thanks!
probably a longshot but I'm trying to use discord.clj (https://github.com/gizmo385/discord.clj), and for some reason I can't get it to honor mentions, so when I (bot/say "@MyUsername")
it doesn't actually ping, just raw text. has anyone used the library that may be able to help me out?
is there a known bug when using a namespaced keyword in = in clojure.test?
I can reproduce with
(t/deftest test-a
(t/testing ""
(t/is (= ::s/invalid 1))))
s is an alias of clojure.spec.alpha.
Error is Call to clojure.core/let did not conform to spec.
@doglooksgood The problem is that particular keyword -- it will cause macro-expansion to fail.
(because Spec is checking the macro-expansion)
Nothing to do with clojure.test.
user=> (let [x ::s/invalid] x)
Syntax error macroexpanding clojure.core/let at (REPL:7:1).
:clojure.spec.alpha/invalid - failed: any? at: [:bindings :init-expr] spec: :clojure.core.specs.alpha/bindings
user=>
No. It's just Spec checking in macro definitions.
It can be very surprising when you first trip over it 😐
isn't there a function in clojure.spec that already does #(= ::s/invalid %)
It's a weird special case -- you can run into it sometimes when you try to write your own Spec predicates and conformers and you want to return ::s/invalid yourself.
I just run into it when I write a function that returns ::s/invalid, so I create a variable called invalid to hold this value.
So spec will validate the input passed to macros at macro-expansion time. But ::s/invalid is something spec looks for to know that a spec validation failed. So when validating the macro input if the input is that keyword, it'll think the input is invalid and thus fail the spec validation for the macro
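(A small sketch of that workaround - holding the sentinel in a var keeps the literal keyword out of the macro form; assumes clojure.spec.alpha is aliased as s:)
(def invalid ::s/invalid)   ;; def is not spec-checked, so this compiles fine

(let [x invalid] x)         ;; ok: let never sees the literal keyword
;; (let [x ::s/invalid] x)  ;; fails: Call to clojure.core/let did not conform to spec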
So that has to be a special compiler step? Like there's no way I can attach myself to the macro-expansion in a similar way?
https://github.com/clojure/clojure/blob/master/src/jvm/clojure/lang/Compiler.java#L6948-L6976
@didibus not sure what you mean by "attach myself to the macro-expansion"?
We have a very old system that has been running for years, thanks to how stable Clojure is. It has some hotfix patches applied through the REPL, but they were never committed to the repository. Is there a way to find out which functions were updated via the REPL?
the repl input history...
the source macro uses metadata plus files / jars
you could dump the class files created by the functions and attempt to reverse engineer using a decompiler :D
in all seriousness, there's no way to get the form back if your repl didn't store it in a readline history file or something
My first idea is that a function updated via the REPL may have incorrect meta compared to the original one. The line number could be wrong
when you use defn in the repl no useful file / line number is attached to the metadata
and there's no source of data that you can go look up (unless your repl stores it somewhere which is why I bring up the readline lib which creates logs on disk)
the class file exists (it "is the function" per se), but you'd need to dump that and you probably don't want to make the class file part of your repo - you want code you can debug / edit
a decompiler will make weird looking code, but at least it's code :D
oooh here's an idea
user=> (defn foo [x]
(println x))
#'user/foo
user=> (ns-interns *ns*)
{foo #'user/foo}
user=> (meta (get (ns-interns *ns*) 'foo))
{:arglists ([x]), :line 1, :column 1, :file "NO_SOURCE_PATH", :name foo, :ns #object[clojure.lang.Namespace 0x2440022a "user"]}
user=> (:file (meta (get (ns-interns *ns*) 'foo)))
"NO_SOURCE_PATH"
oh - right, you can do (-> (all-ns) (ns-publics) (mapcat vals) (map meta) ...)
something like that, to get all vars and then filter by "NO_SOURCE_PATH" to find all the redefs
scratch=> (->> (all-ns)
               (mapcat (comp vals ns-publics))
               (filter (comp #{"NO_SOURCE_PATH"} :file meta)))
(#'scratch/exercise-oea)
so 🚢 -it - the function that finds all repl redefs is done :D
getting bytes for what changed might be harder... a debugger might help
Does anyone have (useful :)) opinions on distributed transaction coordination in Clojure microservices? Moving storage work out of a Java monolith and now have some new business rules that are forcing me into distributed writes on multiple data sources. Found Immutant.XA but so far haven't found much else (or alternatives using a stream processor ... which I suppose I could use to orchestrate commits and compensations myself).
I would start by enumerating the specific operations in your domain model and what changes each entails
That is a good suggestion, and is what I'm doing to begin thinking about it. I'm not sure whether I agree with the statement that they are a myth (they are a thing) but it is difficult to get right (on your own) and most people use eventual consistency now. However the modeling exercise will take a few hours at most to do (today :)) and then I'm back to looking for information on patterns people are using for this problem (in addition to those I'm familiar with). So I'd still appreciate if people want to share their opinions.
you are presenting a false dichotomy between "distributed transactions" and eventual consistency
you might end up with information that half of your operations do not need database involvement at all, and instead belong on a queue
I tend to agree with that last statement (an opinion, but not easily provable) but the gist of this is feeling more like the definition of "not useful".
since you used to have a monolith, you could potentially create a single “transactor” whose only job is to group changes into meaningful transactions while still having multiple microservices that feed the transactor. trying to make something consistent out of multiple dbs is always tricky
I'll back @ghadi up here @activeghost: with better analysis, you may well find you don't actually need XA stuff at all and would be better off with a combination of regular TX, queues, self-correction, and eventual consistency. An apparent need for XA might also suggest an incorrect division across services (the rush to microservice architecture can often create more orchestration than is really needed because of artificial/arbitrary isolation of things).
with a single transactor, you at least have the opportunity to create something like transaction ids that can be used across the dbs. unfortunately, you’re basically building a database out of other databases at this point
I don't disagree with that @seancorfield, I don't really want to adopt the XA stuff and have read through enough of the research on distributed transactions in microservices to know it's likely a bad idea. I might be wrong however, so am asking (and others likely have a lot of experience to listen to). I just felt the projection and arguing over a projected world view of mine is not useful.
It wasn't the rest of the content. @smith.adriane yes, that feels like it gets into a lot of complexity.
Yeah, cross db transactions are doomed, but if you just have some hip microservice architecture all talking to the same database, a transactor-like microservice can work: you ask the service to create a tx, then pass the created tx around to different services to add their stuff, then you ask the transactor to commit, but things like reads in transactions get tricky
I wouldn’t say doomed, but it’s definitely not fun. unfortunately, it’s not always possible to migrate all your disparate dbs into a single unified db. it’s really common for companies to end up with multiple dbs that start out completely separate, but at some point need to be combined
Like, it is already the case I can't have a database transaction that encompasses calling the braintree api
So I use database implementation like techniques when using the braintree api (write ahead log and recovery)
Collecting ops, semantics, side effects (emails generated, notifs, external APIs) will pay repeated dividends in understanding a system. It's hard to make a generic answer here. I hope I didn't come across as antagonistic @activeghost
It was read a tad bit antagonistically but probably because I've had long ranging arguments on FB recently around coronavirus 🙂
I really enjoy this talk https://www.youtube.com/watch?v=_VftQXWDkfk
If you can rewrite updates to compare and set type operations you can decouple reading and writing which can help
in that talk, he goes through the implementation of a set of services that do an online chess system
Yes, I wasn't handed the ability to solve the storage medium problem (well, not exactly, but I need a very good reason to ditch the new contracts we've signed). We're migrating out of MySql to Redis/S3 (again, also not my choice) and will need to write to all three sources on a write, and two of them across a read (potentially). Of course the monolith is still doing the metadata orchestration purely in MySql (this is just JSON content, but the DB is "immutable" in that all writes are new versions and we never delete). Thx, will watch that.
the sections from 8'00" - 19'55" in that video are relevant to service-to-service coordination (or lack thereof)
No they don't, well Redis kind of sort of supports transactions for sets of REDIS commands but it's not the same.
I feel like you could build something, take a fairly traditional database architecture, but treat redis as memory and s3 as disk, but that is going to be a lot of work
Yes, with essentially replication over to mysql from S3. That's basically what the problem space currently looks like to me since the monolith can switch back to using SQL alone and expect the data to be consistent (which I'm not quite sure how to solve yet). Of course that could change once I have the modeling exercise done.
@activeghost However it turns out, I'd be very interested to hear what solutions you try and what you end up with. I think it's a very interesting (hard) problem, especially given the particular combination of storage systems you're working with.
Will do. It is hard, and a great opportunity to do something interesting.
Thank you! And "good luck" 🙂
I decided to reduce the problem for the micro-service down to a single storage medium and a cache (which, theoretically, I don't care whether it succeeds in filling) ... breaking it up so I can avoid the distributed transaction. The monolith that is using it will handle the write fallback to the mysql db .. and if the write through the Clojure storage proxy (to whatever backend is configured) fails, it can abort the transaction it is in. Between the two layers I can cover that without the complexity of attempting to merge that logic into a single service. S3 and Redis aren't transacted, so what was just written can change between write/read back, but since the state/metadata is still managed in the rdbms that will work. The read scenarios are still complex, but the write scenario is now simplified.
... so no exciting hard problem, but it gets to ship lol.
Sounds like a good, pragmatic choice. Thank you for coming back with that report!
I'm wondering if I'm missing something about stateful transducers. You can't just drop dedupe into comp, right?
dedupe itself creates a stateful transducer, you can take (dedupe) and throw it into comp just fine
it's just that you need a new instance of the thing dedupe returns per pipeline (unless you want dedupe across the pipelines but that might not work for concurrency reasons anyway)
That's what I'm getting at.... it seems in conflict with the docs
I wouldn't suggest reusing one (dedupe)
user=> (into [] (comp (map inc) (dedupe) (filter even?)) [1 1 1 2 2 2 3 3 3 4 5 6 7 7 8 9 9])
[2 4 6 8 10]
user=>
^ @johnnyhauser The prev value will not be initialized until the transducing process starts (in a call to transduce for example). The stateful interactions are therefore contained within the context of the transducible process.
That doesn't seem to be true ^
the prev value is initialized in the expression (dedupe), is it not?
It is not, dedupe returns a transducer. A transducer returns a reducing function; it is only at the time the transducer is called with a reducing function that it will initialize prev and return a reducing function of its own which closes over prev
(defn dedupe []
  (fn [xf]
    (let [prev (volatile! ::none)]
      (fn
        ([] (xf))
        ([result] (xf result))
        ([result input]
         (let [prior @prev]
           (vreset! prev input)
           (if (= prior input)
             result
             (xf result input))))))))
the "transducible process" referenced there is the one owned by dedupe
That's why in your later example, the state isn't maintained between the two calls to into. Dedupe returns a transducer, and it's into which will later call the transducer to get the reducing function out of it. It is that reducing function which closes over the state.
Any idea why the transducer approach is not one degree lazier so that this sort of thing isn't a problem?
scratch=> (let [d (dedupe)] [(into [] (comp (filter even?) d) (range 10)) (into [] d (range 10))])
[[0 2 4 6 8] [0 1 2 3 4 5 6 7 8 9]]
so the new usage of d seems to "reset" the state
suppose transducers required an extra invocation, in other words
I understand, but I'm pondering why it isn't
That's true, but making it not a transducer is not the only solution.
@hiredman but the state isn't initialized until the transducer uses it
my demo above shows that - the second usage of d didn't know the dupes of the first usage
hmm, there's definitely something I'm not grasping there.
so the implementation of dedupe changed to be safer, but instead of changing the api (and breaking people's code) you keep the indirection
Isn't it initialized as soon as you pass another transducer into it?
dedupe is that way because dedupe, the function on seqs (which is not a transducer), already exists
oh that makes more sense, thanks
so making it a transducer either requires a new name, or keeping it a function and giving it a new arity
looking at the definition, it looks like you invoke dedupe and get a function waiting on the next thing, and then when you pass the next thing, it initializes the state
@johnnyhauser (dedupe coll) exists already and isn't a transducer
so we need a new arity to ask for the thing that transduces
the definition is something like
(defn dedupe ([] (fn [rf] ...)) ([coll] ...))
which you could unwrap like
(defn dupe [rf] ...)
but you lose the last arity there which is not transducer related
(defn dedupe []
  (fn [xf]                          ;; takes the next one
    (let [prev (volatile! ::none)]  ;; initialized the state
      (fn                           ;; transducer
The [xf] is confusing here, and a mistake in my opinion. While it is called xf, the parameter being passed in is a reducing function rf, not a transducer.
(defn dedupe []
  ;; This is the transducer xf returned by dedupe
  (fn [rf]                          ;; takes the next reducing function, not the next transducer
    (let [prev (volatile! ::none)]  ;; initialized the state
      (fn                           ;; this is not a transducer, this is a reducing function rf
What am I not seeing?
you can't use that dedupe on a list
but I'm not interested in it
Just interested in transducers
but other people depend on it!
you don't need that implementation for your new transducer
clojure needed it
Well, can we just talk in general and not about dedupe?
I'm not sure you and hiredman were discussing the same thing. He is explaining why dedupe returns a transducer instead of just being the transducer itself. That's just because a transducer is a function of one argument. And dedupe already existed as a function of one argument which takes a coll. So you couldn't modify it to also be a function of one argument which takes a reducing function
I'm just curious about opinions on lazier transducers so that stateful transducers can compose and be reused.
And then you have transducible processes I think they are called, which basically make use of the transducer machinery to iterate through elements.
The transducible context will initiate the push down of elements through the transducers. So it is in charge of when things actually happen.
It helps to look at the fn signature of a transducer: `(fn [rf] (fn ([] ...) ([result] ...) ([result input] ...)))`
So the outer fn is the transducer, it takes an rf which is a reducing function, and returns an rf as well. The inner fn is a reducing function. A reducing function must have 3 arities: init, completion and step
In theory your entire transducing chain could be manually implemented as a single reducing function. But the idea is what if you have two reducing functions? Could you compose them together? Like chain them back to back. This is where the transducer comes in
The transducer will return a reducing function which will internally call the rf that was passed into the transducer
Now, because the transducer is like a constructor for a reducing function, if your reducing function needs state, you can create it when the transducer is called to construct the reducing function and have the rf close over it
When you comp transducers, no rf have been constructed yet. You just composed transducers together and the comp of them is still waiting for the rf to be passed in.
It's only when you call a transducible process like into, transduce, sequence, eduction, etc. That those will call the transducer to construct the reducing function and they will be the one to orchestrate the iteration and keep calling the same rf with the elements until they decide it is done.
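(A small sketch of that point - a comp of stateful transducers holds no state itself; each into call applies it to a reducing function and only then creates the volatiles:)
(def xf (comp (map inc) (dedupe)))

(into [] xf [1 1 2 2 3])  ;;=> [2 3 4]
(into [] xf [1 1 2 2 3])  ;;=> [2 3 4]  -- same result, fresh state on every run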
So if say you wanted what you're asking, you'd need your own transducible process, for example:
(defn conj-rf
  ([] [])
  ([acc] acc)
  ([acc e] (conj acc e)))

(def deduping-conj-rf ((dedupe) conj-rf))

(reduce deduping-conj-rf [] [1 2 2 3])
;;=> [1 2 3]
(reduce deduping-conj-rf [] [3 4 4 5])
;;=> [4 5]
I hope this makes it clear. You see here conj-rf is a reducing function which just conjes elements to a vector. Dedupe returns a transducer. When we call the transducer returned by dedupe with our conj-rf it returns a reducing function which will first dedupe elements and then conj them. This reducing function is stateful. You can then use it with reduce to reduce over things with it. If you re-use it over and over the state is maintained.
Now in theory, there are some rules a transducing context should follow, like encapsulating the reducing function so it can't be used across threads. That's just because, if you look at my example, deduping-conj-rf isn't thread safe. If you use it inside two reduce calls happening in different threads it's unclear what would happen.
And you're supposed to call the completion arity on the rf when you're done with the elements (if you are ever done). Because some transducers assume this will be the case and can leverage that as a final step before they return the result.
But if you're not writing a generic transducible process , you can bend these rules as you see fit
like, there is no strong motivation to make sweeping changes to support that, because the way things are now works well and people aren't constantly hitting that as a limitation
What would be the drawback?
It's the sense part I'm most interested in.
It makes sense to me. I wrote some stuff that works and I didn't run into any problems yet. But that doesn't mean it's right or good and such.
have an f that you use like (reduce f ...) in multiple different places in your code, is there some generic state that all those places should share?
if f was memoized, kind of
(where memoization is an innocuous kind of hidden state)
an example that I couldn't tolerate was, having created a scan transducer (that's like clojure's reductions?), if you wanted to scan add 0, and reuse that composition, it would not start from 0 on future invocations
scan would likely be what, if I recall, the docs call a reducible context, not a transducer
like transduce is not a transducer, it is a thing that knows how to apply a transducer to a reducing function then use that to run a reduce
and the shared state is set up and managed and exists inside a call to transduce, where it uses the reducing function that resulted from applying a transducer
you might want to checkout https://clojure.atlassian.net/browse/CLJ-1903
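(For illustration, a hypothetical sketch of the reuse problem described above - a scan-like transducer whose accumulator is created when scan is called, rather than when the transducer is applied to a reducing function:)
(defn scan [f init]
  (let [acc (volatile! init)]        ;; state created once, at (scan f init) time
    (fn [rf]
      (fn
        ([] (rf))
        ([result] (rf result))
        ([result input]
         (rf result (vswap! acc f input)))))))

(def running-sum (scan + 0))

(into [] running-sum [1 2 3])  ;;=> [1 3 6]
(into [] running-sum [1 2 3])  ;;=> [7 9 12]  -- the accumulator carried over

;; moving the (volatile! init) inside (fn [rf] ...) gives each run fresh state,
;; the same way clojure.core's dedupe does it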
when you run the clojure command line tool, if my deps.edn contains a :mvn/repos key it will look for repos there when trying to get deps. Additionally it will look in my ~/.m2/settings.xml file. Is there a reason it would look somewhere else other than $HOME/.m2/settings.xml? Can i get debug information on why it can't find an artifact?
the maven libraries don't, at least by default, look for an artifact in a particular repo; if they need to find an artifact they look through all the repos they know about for it. So really the only error for not finding an artifact is "couldn't find that artifact in any of the repos I know about"
so you can do something like: what artifact failed to fetch? is that the right artifact coords? what repos is it in? are any of those repos configured for me? is the repo correctly configured for me?
(the way you figure out what repo an artifact is in is by googling "maven $project-name")
how do i list the repos it knows about? I added
:mvn/repos {"" {:url ""}}
to my deps so i assume it knows about datomic pro (the dep it can't find). But the only output i get is that it can't find it.
> so you can do something like: what artifact failed to fetch? is that the right artifact coords? what repos is it in? are any of those repos configured for me? is the repo correctly configured for me?
yea. i'm guessing if it's anything it's the latter. Specifically the clj tool isn't finding the settings.xml file which i have mounted to $HOME/.m2/settings.xml in my container. But it would be nice to distinguish between: we found the repo and the dep, but you're not authenticated.
if you aren't authenticated the repo isn't going to let you see anything, so you don't know if they have the dep
fair enough.
if you are in a container I would start by verifying which user you are and where your home directory is
that's correct as far as i can tell. I echoed out $HOME and just set it to $HOME/.m2/settings.xml only without the var just to be clear.
the my.datomic site should give you the creds you need in ~/.m2/settings.xml to access
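(For reference, a sketch of the usual shape of that setup - the repo id, URL and version here are illustrative; the repo id has to match the <server> id entry that carries your credentials in ~/.m2/settings.xml:)
;; deps.edn
{:mvn/repos {"my.datomic.com" {:url "https://my.datomic.com/repo"}}
 :deps      {com.datomic/datomic-pro {:mvn/version "VERSION"}}}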