
Are we talking Emacs? Otherwise I think you can use zprint, it's a total formatter, so it'll rewrite everything exactly as defined

Adam Helins07:05:08

Is there a particular reason CLJ does not have the #queue reader macro unlike CLJS?


Most likely that it is simply lower priority on the Clojure core team's list than other things.


You can define a reader tag of your own that achieves the desired effect.
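As a sketch of that approach (the tag and namespace names here are made up, not an established convention): a `data_readers.clj` entry maps a tag to a reader function, and the function builds the queue.

```clojure
;; data_readers.clj on the classpath would contain (hypothetical names):
;;   {my/queue my.readers/read-queue}

(ns my.readers)

(defn read-queue
  "Reader function for a #my/queue literal: builds a PersistentQueue
  from the collection that follows the tag."
  [coll]
  (into clojure.lang.PersistentQueue/EMPTY coll))
```

With that in place, `#my/queue [1 2 3]` in source would read as a queue containing 1, 2, 3.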


Note: My "most likely" above is better stated as "My best guess is". I am not a part of the Clojure core dev team, nor can I read their minds.

Adam Helins11:05:17

I guess so. The problem with defining your own reader is that things get messy when your code base gets mixed with another project.

Eamonn Sullivan10:05:46

Hi all, the output of a script I'm writing is a blob of JSON, but a blob that is meant for human editing/reading. I'd like to output it to the file pretty-printed. I'm using clojure/data.json, which has pprint, but that goes to *out* . What's the (I'm sure one-liner) idiomatic way of doing that? Right now I'm just doing (spit "filename.json" (json/write-str result)) .


Wrap your printing function with (with-out-str)


if you are in clojure rather than cljs, you can use the second arg of pprint to supply a writer


but often with-out-str is easier code wise - just don't forget the writer option in other cases
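Both options, sketched with clojure.pprint for a dependency-free example (data.json's json/pprint also prints to *out*, so the with-out-str wrapping works the same way there):

```clojure
(require '[clojure.pprint :as pp]
         '[clojure.java.io :as io])

;; Option 1: capture what the printing fn sends to *out*, then spit it
(spit "out.edn" (with-out-str (pp/pprint {:a 1 :b [1 2 3]})))

;; Option 2: clojure.pprint/pprint accepts a writer as its second arg
(with-open [w (io/writer "out.edn")]
  (pp/pprint {:a 1 :b [1 2 3]} w))
```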

Eamonn Sullivan15:05:38

Thanks. I didn't know about that second arg (or, rather, I saw it, but didn't know how to use it). I'll keep it in the toolbox, though. For this particular simple case with-out-str worked fine.

Eamonn Sullivan10:05:28

result is edn at that point.


Hmmm. I’ve never done any “interactive” REPL utilities and I’ve got a question - how do I do (read-line) in the CIDER REPL but with “local echo”, so I can see what I am typing?


that sounds like an emacs question, as (read-line) in a standard repl or client does echo as expected


the fix won't be in clojure, it will be in elisp


Hey all -- which talk was it where one of the Stus talked about how "repl-driven development" is for them never done with an actual REPL window?


I'm guessing it's Running with Scissors


@worlds-endless it was mentioned on the ClojureScript podcast I think, but I think is what you're after

John Maruska16:05:02

probably a longshot but I'm trying to use discord.clj, and for some reason I can't get it to honor mentions, so when I (bot/say "@MyUsername") it doesn't actually ping, just raw text. Has anyone used the library that may be able to help me out?


is there a known bug when using a namespaced keyword in = in clojure.test? I can reproduce with

(t/deftest test-a
  (t/testing ""
    (t/is (= ::s/invalid 1))))
s is alias of clojure.spec.alpha. Error is Call to clojure.core/let did not conform to spec.


@doglooksgood The problem is that particular keyword -- it will cause macro-expansion to fail.


(because Spec is checking the macro-expansion)


How is that? Did clojure.test special case it?


Nothing to do with clojure.test.


user=> (let [x ::s/invalid] x)
Syntax error macroexpanding clojure.core/let at (REPL:7:1).
:clojure.spec.alpha/invalid - failed: any? at: [:bindings :init-expr] spec: :clojure.core.specs.alpha/bindings


So the compiler itself special cases it?


The spec validation of the let macro


No. It's just Spec checking in macro definitions.


It can be very surprising when you first trip over it 😐


Ya, it's a funny edge case


isn't there a function in clojure.spec that already does #(= ::s/invalid %)


You are better off using s/invalid? to test that

💯 8

It's a weird special case -- you can run into it sometimes when you try to write your own Spec predicates and conformers and you want to return ::s/invalid yourself.


Yes, I can use s/invalid? for this case. Thank you!


I just ran into it when I wrote a function that returns ::s/invalid, so I created a variable called invalid to hold this value.


So spec will validate the input passed to macros at macro-expansion time. But ::s/invalid is something spec looks for to know that a spec validation failed. So when validating the macro input if the input is that keyword, it'll think the input is invalid and thus fail the spec validation for the macro

❤️ 4

That's what's happening


makes sense, thanks!
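A minimal illustration of both sides of this: the ::s/invalid sentinel that conform returns on failure, and the s/invalid? predicate that avoids writing the literal keyword yourself.

```clojure
(require '[clojure.spec.alpha :as s])

;; s/conform returns ::s/invalid when the value doesn't match the spec
(s/conform int? "nope")               ;=> :clojure.spec.alpha/invalid

;; test with s/invalid? instead of (= ::s/invalid ...), which would
;; trip spec's own validation of let/fn macro forms
(s/invalid? (s/conform int? "nope"))  ;=> true
(s/invalid? (s/conform int? 42))      ;=> false
```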


How does spec validation for macros get bootstrapped?


it happens at macro expansion time, not macro creation time


So that has to be a special compiler step? Like there's no way I can attach myself to the macro-expansion in a similar way?


@didibus not sure what you mean by "attach myself to the macro-expansion"?


We have a very old system running for years, thanks to how stable Clojure is. And it has some hotfix patches applied through the REPL, but they are not committed to the repository. Is there a way to find out which functions were updated via the REPL?


the repl input history...


the source macro uses metadata plus files / jars
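To illustrate that mechanism: clojure.repl/source-fn resolves the var's :file and :line metadata and re-reads the text from the classpath, which is exactly why REPL-only definitions can't be recovered this way.

```clojure
(require '[clojure.repl :as repl])

;; works: clojure.core/some has :file metadata pointing at
;; clojure/core.clj inside the Clojure jar, so the text can be re-read
(repl/source-fn 'clojure.core/some)  ; returns the defn as a string

;; a var defined at a bare REPL gets :file "NO_SOURCE_PATH", so there
;; is no backing text to re-read and source-fn returns nil there
```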


you could dump the class files created by the functions and attempt to reverse engineer using a decompiler :D


in all seriousness, there's no way to get the form back if your repl didn't store it in a readline history file or something


My first idea is that a function updated via the REPL may have incorrect metadata compared to the original one. The line number could be wrong


when you use defn in the repl no useful file / line number is attached to the metadata


^ this is an... amazing problem


and there's no source of data that you can go look up (unless your repl stores it somewhere which is why I bring up the readline lib which creates logs on disk)


the class file exists (it "is the function" per se), but you'd need to dump that and you probably don't want to make the class file part of your repo - you want code you can debug / edit


a decompiler will make weird looking code, but at least it's code :D


oooh here's an idea

user=> (defn foo [x]
(println x))
user=> (ns-interns *ns*)
{foo #'user/foo}
user=> (meta (get (ns-interns *ns*) 'foo))
{:arglists ([x]), :line 1, :column 1, :file "NO_SOURCE_PATH", :name foo, :ns #object[clojure.lang.Namespace 0x2440022a "user"]}
user=> (:file (meta (get (ns-interns *ns*) 'foo)))


that at least lets you iterate the ns's and find out which things have been modified


oh - right, you can do (->> (all-ns) (mapcat (comp vals ns-publics)) (map meta) ...)


then yeah.. decompiler it is


something like that, to get all vars and then filter by "NO_SOURCE_PATH" to find all the redefs


scratch=> (->> (all-ns)
     (mapcat (comp vals ns-publics))
     (filter (comp #{"NO_SOURCE_PATH"} :file meta)))


❤️ 4
👍 4

so 🚢 -it - the function that finds all repl redefs is done :D


getting bytes for what changed might be harder... a debugger might help


A list of where has been changed is enough for my case.


Maybe we can find those patches in repl history.

Chris Lester18:05:38

Does anyone have (useful :)) opinions on distributed transaction coordination in Clojure microservices? Moving storage work out of a Java monolith and now have some new business rules that are forcing me into distributed writes on multiple data sources. Found Immutant.XA but so far haven't found much else (or alternatives using a stream processor ... which I suppose I could use to orchestrate commits and compensations myself).


IMHO distributed transactions are a myth


I would start by enumerating the specific operations in your domain model and what changes each entails


in a spreadsheet / table


deferring any implementation choices until you have a spreadsheet of ops

Chris Lester19:05:55

That is a good suggestion, and is what I'm doing to begin thinking about it. I'm not sure whether I agree with the statement that they are a myth (they are a thing) but it is difficult to get right (on your own) and most people use eventual consistency now. However the modeling exercise will take a few hours at most to do (today :)) and then I'm back to looking for information on patterns people are using for this problem (in addition to those I'm familiar with). So I'd still appreciate if people want to share their opinions.


you are presenting a false dichotomy between "distributed transactions" and eventual consistency


you might end up with information that half of your operations do not need database involvement at all, and instead belong on a queue


microservices + distributed transactions are a paradox

Chris Lester19:05:33

I tend to agree with the last statement (an opinion, but not easily provable) but the gist of this is feeling more like the definition of "not useful".


since you used to have a monolith, you could potentially create a single “transactor” whose only job is to group changes into meaningful transactions while still having multiple microservices that feed the transactor. trying to make something consistent out of multiple dbs is always tricky


I'll back @ghadi up here @activeghost: with better analysis, you may well find you don't actually need XA stuff at all and would be better off with a combination of regular TX, queues, self-correction, and eventual consistency. An apparent need for XA might also suggest an incorrect division across services (the rush to microservice architecture can often create more orchestration than is really needed because of artificial/arbitrary isolation of things).


with a single transactor, you at least have the opportunity to create something like transaction ids that can be used across the dbs. unfortunately, you’re basically building a database out of other databases at this point

Chris Lester19:05:27

I don't disagree with that @seancorfield, I don't really want to adopt the XA stuff and have read through enough of the research on distributed transactions in microservices to know it's likely a bad idea. I might be wrong however, so am asking (and others likely have a lot of experience to listen to). I just felt the projection and arguing over a projected world view of mine is not useful.

Chris Lester19:05:46

It wasn't the rest of the content. @smith.adriane yes, that feels like it gets into a lot of complexity.


Yeah, cross-db transactions are doomed, but if you just have some hip microservice architecture all talking to the same database, a transactor-like microservice can work: you allow the services to create a tx, then pass the created tx around to different services to add their stuff, then you ask the transactor to commit. But things like reads in transactions get tricky


I wouldn’t say doomed, but it’s definitely not fun. unfortunately, it’s not always possible to migrate all your disparate dbs into a single unified db. it’s really common for companies to end up with multiple dbs that start out completely separate, but at some point need to be combined


It is already happening across companies


Like, it is already the case I can't have a database transaction that encompasses calling the braintree api


So I use database implementation like techniques when using the braintree api (write ahead log and recovery)


Collecting ops, semantics, side effects (emails generated, notifs, external APIs) will pay repeated dividends in understanding a system. It's hard to make a generic answer here. I hope I didn't come across as antagonistic @activeghost

Chris Lester19:05:46

It was read a tad bit antagonistically but probably because I've had long ranging arguments on FB recently around coronavirus 🙂


icing on the internet cake


coordination of all sorts is really expensive, and best avoided


If you can rewrite updates to compare and set type operations you can decouple reading and writing which can help
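The compare-and-set idea, sketched with a Clojure atom (the names here are illustrative; the same read-compute-conditionally-write shape applies to e.g. a conditional put against an external store):

```clojure
;; read a value, compute the update, then write only if nothing
;; changed in between; on conflict, re-read and retry instead of
;; holding a lock across the read and the write
(def account (atom {:balance 100}))

(defn credit! [amount]
  (loop []
    (let [old  @account
          new' (update old :balance + amount)]
      (or (compare-and-set! account old new')
          (recur)))))

(credit! 25)
@account ;=> {:balance 125}
```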


in that talk, he goes through the implementation of a set of services that do an online chess system


the game history service, the pairing service that matches users wanting to play, etc.

Chris Lester19:05:54

Yes, I wasn't handed the ability to solve the storage medium problem (well, not exactly, but I need a very good reason to ditch the new contracts we've signed). We're migrating out of MySql to Redis/S3 (again, also not my choice) and will need to write to all three sources on a write, and two of them across a read (potentially). Of course the monolith is still doing the metadata orchestration purely in MySql (this is just JSON content, but the DB is "immutable" in that all writes are new versions and we never delete). Thx, will watch that.


the sections from 8'00" - 19'55" in that video are relevant to service-to-service coordination (or lack thereof)


also neither redis nor s3 actually has transactions

Chris Lester20:05:20

No they don't, well Redis kind of sort of supports transactions for sets of REDIS commands but it's not the same.


AFAICT they are proper transactions


I feel like you could build something, take a fairly traditional database architecture, but treat redis as memory and s3 as disk, but that is going to be a lot of work

Chris Lester20:05:13

Yes, with essentially replication over to mysql from S3. That's basically what the problem space currently looks like to me since the monolith can switch back to using SQL alone and expect the data to be consistent (which I'm not quite sure how to solve yet). Of course that could change once I have the modeling exercise done.


@activeghost However it turns out, I'd be very interested to hear what solutions you try and what you end up with. I think it's a very interesting (hard) problem, especially given the particular combination of storage systems you're working with.

Chris Lester21:05:40

Will do. It is hard, and a great opportunity to do something interesting.


Thank you! And "good luck" 🙂

Chris Lester08:05:47

I decided to reduce the problem for the micro-service down to a single storage medium and a cache (which I don't care if that succeeds in filling, theoretically) ...breaking it up so I can avoid the distributed transaction. The monolith that is using it will handle the write fallback to the mysql db .. and if the write through the Clojure storage proxy (to whatever backend is configured) fails it can abort the transaction it is in. Between the two layers I can cover that without the complexity of attempting to merge that logic into a single service. S3 and Redis aren't transacted, so what was just written can change between write/read back, but since the state/metadata is still managed in the rdbms that will work. The read scenarios are still complex, but the write scenario is now simplified.

Chris Lester08:05:24

... so no exciting hard problem, but it gets to ship lol.


Sounds like a good, pragmatic choice. Thank you for coming back with that report!

Johnny Hauser20:05:21

I'm wondering if I'm missing something about stateful transducers. You can't just drop dedupe into comp , right?


dedupe itself creates a stateful transducer, you can take (dedupe) and throw it into comp just fine


it's just that you need a new instance of the thing dedupe returns per pipeline (unless you want dedupe across the pipelines but that might not work for concurrency reasons anyway)

Johnny Hauser20:05:01

That's what I'm getting at.... it seems in conflict with the docs


I wouldn't suggest reusing one (dedupe)


user=> (into [] (comp (map inc) (dedupe) (filter even?)) [1 1 1 2 2 2 3 3 3 4 5 6 7 7 8 9 9])
[2 4 6 8 10]
^ @johnnyhauser

Johnny Hauser20:05:24

The prev value will not be initialized until the transducing process starts (in a call to transduce for example). The stateful interactions are therefore contained within the context of the transducible process.

Johnny Hauser20:05:30

That doesn't seem to be true ^

Johnny Hauser20:05:57

the prev value is initialized in the expression (dedupe) , is it not?


It is not; dedupe returns a transducer. A transducer takes a reducing function and returns a reducing function. It is only at the time the transducer is called with a reducing function that it will initialize prev and return a reducing function of its own which closes over prev


(defn dedupe []
  (fn [xf]
    (let [prev (volatile! ::none)]
      (fn
        ([] (xf))
        ([result] (xf result))
        ([result input]
         (let [prior @prev]
           (vreset! prev input)
           (if (= prior input)
             result
             (xf result input))))))))


the "transducible process" referenced there is the one owned by dedupe


I think the transducible process is the one owned by "into"


That's why in your later example, the state isn't maintained between the two calls to into. Dedupe returns a transducer, and it's into which will later call the transducer to get the reducing function out of it. It is that reducing function which closes over the state.

Johnny Hauser20:05:17

Any idea why the transducer approach is not one degree lazier so that this sort of thing isn't a problem?


scratch=> (let [d (dedupe)] [(into [] (comp (filter even?) d) (range 10)) (into [] d (range 10))])
[[0 2 4 6 8] [0 1 2 3 4 5 6 7 8 9]]
so the new usage of d seems to "reset" the state

Johnny Hauser20:05:27

suppose transducers required an extra invocation, in other words


dedupe is not a transducer


it returns a transducer


like filter is not a transducer, (filter even?) returns a transducer

Johnny Hauser20:05:16

I understand, but I'm pondering why it isn't


because it has state

Johnny Hauser20:05:41

That's true, but making it not a transducer is not the only solution.


so when you call (dedupe) you get fresh state


@hiredman but the state isn't initialized until the transducer uses it


my demo above shows that - the second usage of d didn't know the dupes of the first usage


it is for backwards compatibility

Johnny Hauser20:05:24

hmm, there's definitely something I'm not grasping there.


so the implementation of dedupe changed to be safer, but instead of changing the api (and breaking people's code) you keep the indirection

Johnny Hauser20:05:05

Isn't it initialized as soon as you pass another transducer into it?


backwards compat is not the right word


dedupe is that way because dedupe, the function on seqs (which is not a transducer), already exists


oh that makes more sense, thanks


so making it a transducer either requires a new name, or keeping it a function and giving it a new arity

Johnny Hauser20:05:16

looking at the definition, it looks like you invoke dedupe and get a function waiting on the next thing, and then when you pass the next thing, it initializes the state


a transducer doesn't have another transducer passed to it


@johnnyhauser (dedupe coll) exists already and isn't a transducer


a transducer is a function from reducing function to reducing function


so we need a new arity to ask for the thing that transduces


the definition is something like

(defn dedupe ([] (fn [rf] ...)) ([coll] ...))
which you could unwrap like
(defn dedupe [rf] ...)
but you lose the last arity there which is not transducer related

Johnny Hauser21:05:05

(defn dedupe []
  (fn [xf] ;; takes the next one
    (let [prev (volatile! ::none)] ;; initializes the state
      (fn ;; transducer


The [xf] is confusing here, and a mistake in my opinion. While it is called xf, the parameter being passed in is a reducing function rf, not a transducer.


(defn dedupe []
  ;; This is the transducer xf returned by dedupe
  (fn [rf] ;; takes the next reducing function, not the next transducer
    (let [prev (volatile! ::none)] ;; initializes the state
      (fn ;; this is not a transducer, this is a reducing function rf

Johnny Hauser21:05:12

What am I not seeing?


you can't use that dedupe on a list


you are not seeing the [coll] function arity which is not transducer related

Johnny Hauser21:05:49

but I'm not interested in it

Johnny Hauser21:05:55

Just interested in transducers


but other people depend on it!


you don't need that implementation for your new transducer


clojure needed it


that is the reason dedupe is written like it is

Johnny Hauser21:05:37

Well, can we just talk in general and not about dedupe?


I'm not sure you and hiredman were discussing the same thing. He is explaining why dedupe returns a transducer instead of just being the transducer itself. That's just because a transducer is a function of one argument. And dedupe already existed as a function of one argument which takes a coll. So you couldn't modify it to also be a function of one argument which takes a reducing function


But neither have anything to do with how state is handled


you might look at cat


which is in the style you are asking about

Johnny Hauser21:05:09

I'm just curious about opinions on lazier transducers so that stateful transducers can compose and be reused.


What do you mean by lazy?


Transducers just let you create nested chains of reducing functions


And then you have transducible processes I think they are called, which basically make use of the transducer machinery to iterate through elements.


The transducible context will initiate the push down of elements through the transducers. So it is in charge of when things actually happen.


It helps to look at the fn signature of a transducer: `(fn [rf] (fn ([] ...) ([result] ...) ([result input] ...)))`


So the outer fn is the transducer: it takes an rf, which is a reducing function, and returns an rf as well. The inner fn is a reducing function. A reducing function must have 3 arities: init, completion and step


In theory your entire transducing chain could be manually implemented as a single reducing function. But the idea is what if you have two reducing functions? Could you compose them together? Like chain them back to back. This is where the transducer comes in


The transducer will return a reducing function which will internally call the rf that was passed into the transducer


Now, because the transducer is like a constructor for the reducing function, if your reducing function needs state, you can create it when the transducer is called to construct the reducing function and have the rf close over it


When you comp transducers, no rf have been constructed yet. You just composed transducers together and the comp of them is still waiting for the rf to be passed in.


It's only when you call a transducible process like into, transduce, sequence, eduction, etc. That those will call the transducer to construct the reducing function and they will be the one to orchestrate the iteration and keep calling the same rf with the elements until they decide it is done.
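Concretely, the wiring that into/transduce perform can be done by hand: apply the (composed) transducer to a plain reducing function, then reduce with the result.

```clojure
;; a transducer chain is just a function over reducing functions
(def xform (comp (map inc) (filter even?)))

;; applying it to a reducing function yields the stacked rf
(def rf (xform conj))

(reduce rf [] [1 2 3 4 5]) ;=> [2 4 6]
```

Note this plain reduce skips the completion arity that transduce calls at the end; that's fine for stateless transducers like these.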


So if, say, you wanted what you're asking, you'd need your own transducible process, for example:

(defn conj-rf
  ([] [])
  ([acc] acc)
  ([acc e] (conj acc e)))

(def deduping-conj-rf ((dedupe) conj-rf))

(reduce deduping-conj-rf [] [1 2 2 3]) ;=> [1 2 3]
(reduce deduping-conj-rf [] [3 4 4 5]) ;=> [4 5]


I hope this makes it clear. You see here conj-rf is a reducing function which just conjs elements onto a vector. Dedupe returns a transducer. When we call the transducer returned by dedupe with our conj-rf it returns a reducing function which will first dedupe elements and then conj them. This reducing function is stateful. You can then use it with reduce to reduce over things with it. If you re-use it over and over the state is maintained.


Now in theory, there are some rules a transducing context should follow, like encapsulating the reducing function so it can't be used across threads. That's just because if you look at my example, deduping-conj-rf isn't thread safe. If you use it inside two reduce calls happening in different threads it's unclear what would happen.


And you're supposed to call the completion arity on the rf when you're done with the elements (if you are ever done). Because some transducers assume this will be the case and can leverage that as a final step before they return the result.


But if you're not writing a generic transducible process, you can bend these rules as you see fit


I don't think anyone really wants that


like, there is no strong motivation to make sweeping changes to support that, because the way there are now works well and people aren't constantly hitting that as a limitation

Johnny Hauser21:05:37

What would be the drawback?


I am not even sure it makes sense

Johnny Hauser21:05:50

It's the sense part I'm most interested in.

Johnny Hauser21:05:45

It makes sense to me. I wrote some stuff that works and I didn't run into any problems yet. But that doesn't mean it's right or good and such.


have an f that you use like (reduce f ...) in multiple different places in your code, is there some generic state that all those places should share?


if f was memoized, kind of


(where memoization is an innocuous kind of hidden state)


it is highly dependent on the nature of f


for a generic reusable thing like transducers, the answer is no

Johnny Hauser21:05:18

an example that I couldn't tolerate was: having created a scan transducer (that's like clojure's reductions?), if you wanted to (scan add 0) and reuse that composition, it would not start from 0 on future invocations
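For reference, a scan transducer in the standard stateful style (the name and implementation are assumed here, not part of clojure.core) puts the accumulator inside the returned reducing function, so each application to an rf starts fresh:

```clojure
;; a hypothetical `scan` transducer, like clojure.core/reductions:
;; emits the running accumulation for each input. The state is
;; created when the transducer is applied to rf, so each pipeline
;; gets its own accumulator.
(defn scan [f init]
  (fn [rf]
    (let [acc (volatile! init)]
      (fn
        ([] (rf))
        ([result] (rf result))
        ([result input]
         (rf result (vswap! acc f input)))))))

(into [] (scan + 0) [1 2 3 4]) ;=> [1 3 6 10]
(into [] (scan + 0) [1 2 3 4]) ;=> [1 3 6 10] again: fresh state per use
```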


for your own stuff, you can already write transducers that share state


scan would likely be what, if I recall, the docs call a reducible context, not a transducer


like transduce is not a transducer, it is a thing that knows how to apply a transducer to a reducing function then use that to run a reduce


and the shared state is set up and managed and exists inside a call to transduce, where it uses the reducing function that resulted from applying a transducer

Drew Verlee21:05:28

when you run the clojure command line tool, if my deps contains a :mvn/repos key it will look for repos there when trying to get deps. Additionally it will look in my ~/.m2/settings.xml file. Is there a reason it would look somewhere else other than $HOME/.m2/settings.xml? Can I get debug information on why it can't find an artifact?


Maybe if $M2_HOME was set it would respect that.


the maven libraries don't, at least by default, look for an artifact in a particular repo; if they need to find an artifact they look through all the repos they know about for it. So really the only error for not finding an artifact is "couldn't find that artifact in any of the repos I know about"


so you can do something like: what artifact failed to fetch? is that the right artifact coords? what repos is it in? are any of those repos configured for me? is the repo correctly configured for me?


(the way you figure out what repo an artifact is in is by googling "maven $project-name")
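A deps.edn sketch of the repo-configuration side (the repo name and URL below are placeholders, not real coordinates); :mvn/repos entries are merged with the built-in central and clojars repos:

```clojure
;; deps.edn -- names and URLs here are illustrative only
{:deps {com.example/some-lib {:mvn/version "1.0.0"}}
 :mvn/repos {"my-private-repo" {:url "https://repo.example.com/releases"}}}

;; credentials for an authenticated repo then go in ~/.m2/settings.xml,
;; in a <server> entry whose <id> matches the repo name here exactly
```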

Drew Verlee21:05:59

how do i list the repos it knows about? I added

:mvn/repos {""
              {:url ""}}
to my deps so I assume it knows about datomic pro (the dep it can't find). But the only output I get is that it can't find it. > so you can do something like: what artifact failed to fetch? is that the right artifact coords? what repos is it in? are any of those repos configured for me? is the repo correctly configured for me? yea. I'm guessing if it's anything it's the latter. Specifically the clj tool isn't finding the settings.xml file which I have mounted at $HOME/.m2/settings.xml in my container. But it would be nice to distinguish between "we found the repo and the dep, but you're not authenticated."


if you aren't authenticated the repo isn't going to let you see anything, so you don't know if they have the dep


if you are in a container I would start by verifying which user you are and where your home directory is

Drew Verlee21:05:31

that's correct as far as i can tell. I echoed out $HOME and just set it to $HOME/.m2/settings.xml only without the var just to be clear.

Alex Miller (Clojure team)21:05:07

the my.datomic site should give you the creds you need in ~/.m2/settings.xml to access