This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2021-08-13
can someone help me understand how this works
(defn seq1 [#^clojure.lang.ISeq s]
(reify clojure.lang.ISeq
(first [_] (.first s))
(more [_] (seq1 (.more s)))
(next [_] (let [sn (.next s)] (and sn (seq1 sn))))
(seq [_] (let [ss (.seq s)] (and ss (seq1 ss))))
(count [_] (.count s))
(cons [_ o] (.cons s o))
(empty [_] (.empty s))
(equiv [_ o] (.equiv s o))))
the code is taken from http://blog.fogus.me/2010/01/22/de-chunkifying-sequences-in-clojure/comment-page-1/?unapproved=1391028&moderation-hash=034bf1cd8870d627d36fd727b298df79#comment-1391028. Which functions can I put in this list? Can I put `map`, `filter`, and `mapcat`, and thus allow my object to work with `for` comprehensions?
look at this definition first - https://github.com/clojure/clojure/blob/master/src/jvm/clojure/lang/ISeq.java
this interface is extending https://github.com/clojure/clojure/blob/a29f9b911b569b0a4890f320ec8f946329bbe0fd/src/jvm/clojure/lang/IPersistentCollection.java#L14
which is also an extension of https://github.com/clojure/clojure/blob/a29f9b911b569b0a4890f320ec8f946329bbe0fd/src/jvm/clojure/lang/Seqable.java#L15
so to implement a custom ISeq you need to provide implementation of methods defined for those interfaces only.
`map`, `filter`, and `mapcat` are just functions
ahh, and `map`, `filter`, and `mapcat` are not methods on any of those interfaces, so I can't influence the `for` comprehension for my object?
influence how?
(defn seq1 [#^clojure.lang.ISeq s]
(reify clojure.lang.ISeq
(first [_] (.first s))
(more [_] (seq1 (.more s)))
; (next [_] (let [sn (.next s)] (and sn (seq1 sn))))
(seq [_] (let [ss (.seq s)] (and ss (seq1 ss))))
; (count [_] (.count s))
; (cons [_ o] (.cons s o))
; (empty [_] (.empty s))
; (equiv [_ o] (.equiv s o))
))
(for [x (seq1 '(1 2 3))]
x)
after some experiments - to influence `for` behavior you have to implement the `first`, `more`, and `seq` methods only
what I mean by influence is that I can write `my-map` which, when given a particular type of object, returns a particular type, in particular returns the same type. However, I can't force `clojure.core/map` to return this particular type. And I can't influence `for` to expand to a call to `my-map`.
where `first`, `more`, and `seq` are methods, not functions
As I understand it (correct me if I'm wrong), `map` always returns a chunked lazy sequence. if I give it an unchunked lazy sequence which I have created, it will still return a chunked lazy sequence. right?
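For reference, chunked realization is easy to observe with a side-effecting counter (a sketch; the `calls` atom is an illustrative name invented here, and `(range 100)` happens to be a chunked source):

```clojure
(def calls (atom 0))

;; ask for only the first element, but watch how many times the fn runs
(first (map (fn [x] (swap! calls inc) x) (range 100)))

@calls ;; typically 32: the whole first chunk is realized
```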
(defn re-chunk [n xs]
(lazy-seq
(when-let [s (seq (take n xs))]
(let [cb (chunk-buffer n)]
(doseq [x s] (chunk-append cb x))
(chunk-cons (chunk cb) (re-chunk n (drop n xs)))))))
(first (map #(doto % (prn "!"))
(map #(doto % (prn "@"))
(re-chunk 1 (range 100)))))
quick way to check
I really don't understand what is happening. I was trying to implement a 1-chunked lazy sequence using things I understand. Your example seems to work for map, but not for mapcat.
(first (mapcat #(doto [%] (prn "!"))
(mapcat #(doto [%] (prn "@"))
(re-chunk 1 (range 100)))))
returns 0 but prints
[0] "@"
[1] "@"
[2] "@"
[3] "@"
[0] "!"
[1] "!"
[2] "!"
[4] "@"
[3] "!"
So what I was doing, was to implement filter, map, mapcat, and concat in a simple enough way to enforce the type. https://gitlab.lrde.epita.fr/jnewton/clojure-rte/-/blob/6fc3b41204a109bcde0ead0e0a69551e1bc47faa/src/clojure_rte/lazy.clj
whereas my implementation of `mapcat` does not seem to have a hidden chunkiness
clojure-rte.rte-core> (first (lazy/mapcat #(doto [%] (prn "!"))
(lazy/mapcat #(doto [%] (prn "@"))
(range 100))))
[0] "@"
[0] "!"
0
clojure-rte.rte-core>
however mine fails sometimes. still debugging.
clojure-rte.rte-core> (first (lazy/map (fn [x] (prn [:outer x]) x)
(lazy/map (fn [x] (prn [:inner x]) x)
'(1 2 3))))
[:inner 1]
[:outer 1]
[:inner 2]
1
the inner function is called twice. Not yet sure why.
out of curiosity - why so complex?
(ns clojure-rte.lazy ...)
(alias 'lazy 'clojure-rte.lazy)
(def lazy/concat ...)
even if you have an alias defined you still can define all the functions using (defn concat …
and back to the topic -
the problem with mapcat is that it internally uses concat, which itself checks whether input sequences satisfy `chunked-seq?`,
and if a sequence is not chunked (effectively not implementing this interface - https://github.com/clojure/clojure/blob/b1b88dd25373a86e41310a525a21b497799dbbf2/src/jvm/clojure/lang/IChunkedSeq.java)
then it will be treated as a normal sequence.
that's just a convention I have started using for myself. whenever I define a function whose name is likely to conflict accidentally with another function, either in clojure.core or in another one of my own namespaces, I completely avoid using the unqualified name. the danger is that if I use something like grep, it won't be clear which function name appears in the output.
of course there are many advantages to `mapcat`, `concat`, `filter`, and `map` being normal functions. However, this is one disadvantage. If they were methods then when `mapcat` internally used `concat`, it would dispatch to the appropriate user-defined `concat`.
but `defn` requires the name to be a simple symbol, and I think that's why you have to have (def lazy/concat (fn ...
so other tools like clj-kondo might not identify such symbols as functions anymore.
BTW, I'm not even sure if my final experiment will work. I'm trying to see whether in my particular application 1-chunking is advantageous.
yes indeed. `defn` requires a simple symbol (I think that is a bug in the spec-based syntax check for `defn`), while `def` allows the prefix. it is annoying that the `defn` check is overzealous.
I would not be surprised if certain tools get confused. But clj-kondo (the one I'm using) is happy with it.
I think the main intention was to disallow defining a function for some other namespace
I remember Alex talking about this in some thread.
and given that, allowing qualified symbols for `def` looks like a bug to me
if that was the intent, then indeed the check is overzealous. it also disallows using the current namespace as prefix.
not sure if this is bad though
it seems strange to me (my own personal opinion, others may freely disagree) that I use a name like `gns/foo` everywhere in my application except in the file where it is defined, where I must use `foo`. it is a potential source of errors if I have multiple `foo`s defined and I copy code around during refactoring.
of course 99.99% of the time I don't have multiple identifiers of the same name. but it does happen from time to time.
true, but this depends on your personal choice of alias names
it is completely fine to have different aliases for the same ns in different other namespaces
I recently defined two slightly different extensions of Boolean algebra in an application. So I had 3 different `and`s: the one from clojure.core and then the two from my application. at the beginning I had a hard bug to find. I was afraid I had misused `and` somewhere. Turned out the bug was something else, but in the end I renamed all `and`s to include prefixes during my debugging.
two different aliases for the same namespace? I hadn't considered that. but yes you are right.
I've probably done that in some of my test cases come to think of it. Because I don't always force myself to be consistent in test cases files.
Because mapcat will use apply and apply always realizes the first 4 elements, nothing to do with chunking.
This is the other problem: not all sequence functions are implemented in a way that is maximally lazy, regardless of chunking.
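A small check makes the `apply` point concrete (a sketch; the `realized` counter is an illustrative name invented here). Even when you only ask `mapcat` for one output element, more than one input element gets realized:

```clojure
(def realized (atom 0))

;; ask for a single output element...
(first (mapcat (fn [x] (swap! realized inc) [x]) (range 100)))

;; ...but several inputs were forced up front by mapcat's internal apply
@realized ;; => more than 1
```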
which is why the "canonical" answer is always: if your code correctness / suitability is changed by chunking vs. not chunking, you shouldn't be using lazy seqs for that code
Well, you shouldn't be using any sequence function at all, and I think that includes apply or anything using apply since it'll call seq on the args. And honestly I think that's a wart of Clojure, like it's unsatisfactory to think you have all this lazy machinery, but can't actually use it to implement lazy behavior.
but that isn't a problem until you care about lazy realization, without laziness it's just putting four fully realized objects on the stack which is nearly free
the problem isn't apply / seq / etc., which cannot be avoided, it's the laziness
it's plenty lazy, it just isn't an execution timing control mechanism
you avoid it by not using lazy functions to create expensive calculations or side effecting operations
right, but there's a boundary point, where if the difference between realizing 32 items and 1 item breaks your program, using laziness is probably a waste of your time (no matter how many hacks might be able to make it work...)
It doesn't break his program, it just slows it down. I don't think anything would prevent Clojure from taking better care in making sure that chunk size can be controlled and that all sequence functions take great care in being maximally lazy. Beyond that, there's not been the time and effort by the core team to do so.
> it's unsatisfactory to think you have all this lazy machinery, but can't actually use it to implement lazy behavior
we have LazySeq (and the various sequential functions that generate it), `delay`, `promise`, and even `fn` - all of them are ways of deferring evaluation to a later time, none of them give you a lazy language a la haskell
and even haskell is full of gotchas around lazy behavior
It's already almost there; most sequence functions are implemented so that if not given a chunked seq they won't chunk their return either, and so that if given a chunked seq with chunks of size n, they'll return similarly sized chunks. It's just that maybe there are some places where it's not done, or not done reliably enough to be depended on. And then you have the issue of a few other functions that are just greedier than they could be.
I suspect that if the language maintainers made any promises about controlling laziness in a granular way they'd be painting themselves into an error prone and brittle corner
that implicit chunking is speeding up a lot of real world code currently
Well, that's possible. I'm not saying that guaranteeing correct control over the level of laziness wouldn't come with a maintenance burden. I'm sure there'd be some. They'd promise to fix every function that doesn't respect chunk size, and to re-implement everything that uses lazy seqs in a not-maximally-lazy way into a lazier variant. And every time they add something new, they'd need to take that into account as well. But if, say, they had the manpower and will to do it, it would be really nice as a user.
In the meantime, I think either using transducers if possible, or using delay with sequence functions is probably the simplest way for a user to have more control.
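As a sketch of the transducer route (the `calls` atom is an illustrative name invented here): `transduce` with a `(take 1)` step processes elements one at a time and stops via `reduced`, so the mapping fn runs exactly once:

```clojure
(def calls (atom 0))

(transduce (comp (map (fn [x] (swap! calls inc) x))
                 (take 1))
           conj [] (range 100))

@calls ;; => 1 - no chunk-sized read-ahead
```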
What's interesting though is Fogus's post hints that Rich might have wanted to do so:
> Bear in mind, that the code for seq1 is in no way official and should not be used for production code. Clojure will one day provide an official version of the function above, but for now I simply took a rough sketch posted by Rich Hickey and made it work with the "master" branch as an exercise and to hopefully gain more insight into the whole matter of chunkiness in general. Hopefully, it can serve the same purposes for you.
Maybe never got around to it.
I also think to fix apply, maybe you'd need to add a new function to sequence, something like peek, which can see if there's something without realizing it. Otherwise not sure how it could figure out the correct arity.
lazy-seq is literally chaining arbitrary method invocations under the hood, asking "is there a next item" is turing complete
the only reliable way to get an answer is to realize an item
It depends, I think you could provide a test for a given seq. Say I have a lazy-seqs getting data from a remote queue, you could provide a test function that checks with the remote system if the queue has info. Say you have a lazy-seqs but know the count of elements, you could provide a test that checks if we're at the end. Etc.
why not use a data type that's designed for answering these kinds of questions in the first place?
also, pretending the state of a remote machine is a piece of data is reinventing the worst parts of OO
like for example a queue (you can ask if there are items ready) a channel (you can ask if there's an item ready, and if the input is closed), a vector (it has a precise count known at creation time)
What I'm saying is a lot of lazy seqs could answer whether they have a next element, or even N next elements. But it would be specific to each one.
`(range 100)` knows exactly. `(filter even? seq)` may not.
code that uses things that might or might not provide relevant information gets complicated, I don't think this improves anything
isn't it simpler to just not use lazy code to wrap expensive or time sensitive operations?
Well, it depends what you mean by simpler. An algorithm that does lots of compute and benefit heavily from short-circuiting exactly when it's done and no more are often easier to implement in a lazy way. You pull and pull until you're done.
surely the version that uses a queue isn't that much more complex?
you have one ugly driver, and the function called on each item inside that driver
I'd argue the complexity introduced is intrinsic: the complexity inherent in explicitly controlling when things are evaluated and when you stop, and that's exactly what you want
Well, ya I guess. Say you have a prime number generator; I guess you can make it into a blocking process that pushes 1 at a time to a channel. But even then it's 1 more than needed; you'd need the consumer to signal saying "ok, I want one more" (and that's back to a pull model)
and it simplifies things like retries / restarts / cleanup which are all garbage when using laziness
I think you can use iterator maybe... I've heard their Clojure implementation short-circuits properly, but I'm not sure if that's true if used with apply as well
well you did say channel instead of queue, and being able to choose between push and pull is one of the advantages channels offer
if you don't buffer there's no reason for it to be ahead of the consumer
I said channel because you need a queue that has back-pressure. Like if the consumers have what they need, don't produce more things in the queue.
right, we are in agreement there
Hum, is that true? Will an unbuffered channel only produce something after the consumer takes from it?
(cmd)user=> (require '[clojure.core.async :as >])
nil
(cmd)user=> (def c (>/chan))
#'user/c
(cmd)user=> (>/go (>/>! c :a) (println "wrote"))
#object[clojure.core.async.impl.channels.ManyToManyChannel 0x7ef570be "clojure.core.async.impl.channels.ManyToManyChannel@7ef570be"]
(cmd)user=> (>/<!! c)
:a
user=> wrote
of course you need more machinery than that to control when ":a" is evaluated
but I think that shows clearly that the put does not complete until the value is read
well that blows up because println returns nil
Because arguably you'd have println replaced with (compute-next-prime curr-prime) or something.
but yeah, >! doesn't evaluate args lazily
that's what I meant by "needing more machinery" - I think there's a couple of ways to do it, but it can be done with idiomatic and clear core.async code
Only thing I can think of would become pull again. Something like the consumer will push a message saying :need-more which the producer will block on, and when it sees one it will produce another one and push it.
This is also an interesting read: https://clojure.org/reference/lazy
(cmd)user=> (load-file "/home/justin/clojure-experiments/on-demand-async.clj")
#'user/generate
(cmd)user=> (def c (>/chan))
#'user/c
(cmd)user=> (generate c)
#object[clojure.core.async.impl.channels.ManyToManyChannel 0x1a632663 "clojure.core.async.impl.channels.ManyToManyChannel@1a632663"]
(ins)user=> @(>/<!! c)
0
(cmd)user=> @(>/<!! c)
generated 2
2
(cmd)user=> @(>/<!! c)
generated 3
3
(cmd)user=> @(>/<!! c)
generated 5
5
(cmd)user=> @(>/<!! c)
generated 7
7
(cmd)user=> @(>/<!! c)
generated 11
11
(cmd)user=> @(>/<!! c)
generated 13
13
(cmd)user=> @(>/<!! c)
generated 17
17
(require '[clojure.core.async :as >])
(defn big-inc
[b]
(.add b (BigInteger. "1")))
(defn next-prime
[b]
(let [b' (big-inc b)]
(if (.isProbablePrime b' 1)
(do (println "generated" b')
b')
(recur b'))))
(defn generate
[c]
(>/go (loop [v (delay (BigInteger. "0"))]
(>/>! c v)
(recur (delay (next-prime (force v)))))))
there's probably a more elegant solution (in terms of a consumer needing to both consume a chan and deref...), but probably not much more concise(?)
also that prime generating code is crap but I figure it's a decent placeholder :D
it might be more correct to move the force call outside the delay to avoid a stack bomb if a consumer repeatedly reads without dereffing...
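One way to read that suggestion (a sketch, not the author's code - it drops the delay machinery entirely and computes each next value only after the previous put completes, so a consumer takes plain values with no deref):

```clojure
;; assumes next-prime and the >/ alias from the snippets above
(defn generate [c]
  (>/go (loop [v (BigInteger. "0")]
          (>/>! c v)                 ; park until a consumer takes v
          (recur (next-prime v))))) ; only then compute the next prime
```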
Hum, I don't know, not sure there's benefits to core.async if you add delay into it. Cause now you can also do a lazy-seq or an iterate that wraps in a delay as well.
but using a lazy-seq wouldn't have the fine grained control of execution that a go block / channel combo has
yes, a transducer that does a force or deref would work
You could even do something like put the current prime, transducer on channel takes the current prime and return the next one.
I think doing the actual work inside a transducer undoes some of what is being attempted here, where the semantics of execution are explicitly controlled
another option would be a "generator" macro that yields calculations as they are consumed from a channel
The idea is to have consumers be able to pull elements one at a time through a chain of transforms. I think the thread of execution doesn't really matter. Like even ideally it's all done from the thread of the consumer doing the pull.
I feel there's also probably a way to implement a lower-level `>!` that's lazy on its arguments.
But still, with core.async, you're also reinventing the wheel for map, partition, mapcat, filter, remove, etc.
what I meant by "semantics of execution are explicitly controlled" is that you'd see which thread did the calculation, or what constraints were around it, by seeing it explicitly in the code in question
you could compromise with a transducer, but I think that undermines the goal here
and even if you add a transducer to the channel, you introduce a non-negotiable 1 element write ahead buffer
For applying the transducer? I remember asking when the transducer ran and on what thread and being told it's undefined
you can't have a buffer of 0 on a channel with a transducer
it might be easier to experiment with these things in ztellman's manifold lib, IIRC one of the motivations was having easier and more explicit backpressure controls than core.async https://github.com/clj-commons/manifold
but I think this is the right direction for the speculation to be going here - by dropping the pretense that the events are data we get simpler, less confusing or error prone means of controlling execution
(to be clear, in most cases it's better to reify events as data, we are talking about the cases where this goes wrong as the initial premise of this particular thread of discussion)
and I'd argue there's always an edge case where data fails, so it's good to invest in having a least-bad failure state
(def c
  (chan
    (buffer 1)
    (map
      (let [p (volatile! (BigInteger. "0"))]
        (fn [_] (vswap! p next-prime))))))
(go-loop []
(>! c :next-prime)
(recur))
What about something like that? Not at a repl currently to try.
Ya, but that's where the Rich text I linked is interesting, cause he explored streams, but says:
> In working on streams a few things became evident:
> stream code is ugly and imperative
> even when made safe, still ugly, stateful
> streams support full laziness
> this ends up being extremely nice
> Integrating streams transparently (i.e. not having both map and map-stream) would require a change, to relax the contract of the core sequence functions (map, filter etc)
> If I am going to do that, could I achieve the same full laziness while keeping the beautiful recursive style of Clojure?
> while being substantially compatible with existing code
> Yes!
> but - ideally some names should change
But then... It's like they stopped short, like at some point they were like... ok, well, it's lazy enough. And it seems so close. Maybe that's what I mean by it being kind of unfortunate. It's not even chunkiness that's the issue. You can have chunks of 1, or unchunked lazy seqs that are fully lazy even on their first element, and I've not encountered anything that doesn't respect that yet. Now it's only a matter of a few functions like apply, and I think there might be a few more that are a little greedier than they could be.
> Integrating streams transparently (i.e. not having both map and map-stream)
to me that's the dividing line - the streams solution is for when data no longer works as your abstraction, so having "map-stream" or worse, having "map" work on streams, is counterproductive; it spreads the problem to the entire language instead of solving it
I'm arguing that in practice the cleanest thing is to have a line with process on the one side and data on the other, and treat the attempts to treat process as data as source of bugs, and attempts to treat data as process as a waste of programmer effort / time
and in clojure, lazy-seqs, as the thing we currently have, are on the data side
by data I mean a value you can retrieve, by process I mean a calculation you can make
we gain a lot of power by blurring that distinction, but there are places (however narrow / niche) where you gain a lot by preserving it and building up barriers around it
going back to our starting point again, someone wanted more control of lazy values because chunking effectively made their program incorrect. in my description that means they are operating in the area where the data / process distinction is needed and a problem was caused by blurring that distinction
one way to fix that is to have more complexity (or less utility) on our lazy types, but another one is to have idioms and conventions for things that need to distinguish process from data (in order to have more fine grained control of process)
Ya, but isn't that just a consequence of the abstraction not being good enough? Like if it was simple to control chunk size, and everything had a truly lazy mode, then we could answer: oh ya, just go (chunk-resize s 1) and you're done. Now you know that everything will be maximally lazy.
only if you do it in the right place and something else in between didn't do some other call increasing the size
Ya, but I think that'd be a given. Like sure, if I hand off my thing to some third-party lib. But if you can leverage Clojure core in a maximally lazy way, well, then that'd be great. It's pretty easy to just create a generating seq of something and then run a mapcat or a filter on it. It's a lot more annoying to generate a custom thing using some abstraction I need to invent myself using core.async or manifold, and then re-build my own filter, mapcat, and all that over it.
I think you're right in that, as it stands, it's a more reliable approach. It would just be cool if lazy seqs could be relied on for this as well.
Study sessions this weekend: dtype-next. https://clojureverse.org/t/scicloj-ml-study-16-high-performance-array-processing-with-dtype-next/8032?u=daslu
QUESTION: what precisely does `recur` do when not within a `loop`?
I have just discovered that `recur` can be used outside of loop. I suppose that in a function such as `clojure.core/some` the meaning is obvious, i.e., to call the obvious function.
(defn some
"Returns the first logical true value of (pred x) for any x in coll,
else nil. One common idiom is to use a set as pred, for example
this will return :fred if :fred is in the sequence, otherwise nil:
(some #{:fred} coll)"
{:added "1.0"
:static true}
[pred coll]
(when-let [s (seq coll)]
(or (pred (first s)) (recur pred (next s)))))
However, the docstring of `recur` does not seem to describe this behavior. There is also an example at https://clojuredocs.org/clojure.core/recur#example-55ff3cd4e4b08e404b6c1c7f that implies this behavior, again without explaining it precisely.
I know this was already resolved, but I'll add to it anyway…
I always think of a `loop` as being like a λ that gets called immediately. Sort of like:
(defmacro my-loop
[form & body]
(let [forms (partition 2 form)
args (map first forms)
vs (map second forms)]
`((fn [~@args] ~@body) ~@vs)))
That way you can think of all calls to `recur` as going to the nearest function entry point.
Loops really are a different kind of thing (`loop*` is a special form handled by the parser), but thinking of them as a consistent thing has helped me sometimes
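Expanding the `my-loop` sketch above makes the mental model concrete (output shown with namespace qualification elided for readability; the `(< x 5)` body is just an example):

```clojure
(macroexpand-1 '(my-loop [x 1] (if (< x 5) (recur (inc x)) x)))
;; => ((fn [x] (if (< x 5) (recur (inc x)) x)) 1)

;; recur targets the generated fn, so this behaves like a loop:
(my-loop [x 1] (if (< x 5) (recur (inc x)) x))
;; => 5
```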
And it's always the nearest loop/fn that it jumps to. That's similar to how `break`/`continue` work in Java, except that there's no label option to jump out to a higher level
@jimka.issy The `recur` can also be used to recursively call the function you are in
is "the function you are in" always an unambiguous description?
sounds vague to me.
yes obviously in that example. but if `recur` is found within an unnamed `(fn ...)`, does that count or not?
so if you look at the macroexpansion you will see that the recur applies to `fn`; `defn` is just syntactic sugar
if a macro introduces an `fn`, might that change the semantics of a captured `recur` without `loop`?
OK, where is this precisely defined? only in the source code?
The docstring of recur suggests visiting https://clojure.org/reference/special_forms, where it's explained
hmmm. this behavior is different than I imagined. I thought the following would recur to the loop.
(loop [x 1]
(letfn [(y [z] ... do-something (recur (inc z)))]
...))
but apparently `recur` recurs to `y`, not to `loop`.
This problem happened to me once in Scala, but I think it has never happened to me in clojure. In Scala, we have tail call optimization, but it is somewhat limited. A function can call itself in tail position, and a local function can call itself in tail position, but if a local function is called in tail position, and that function calls the parent function in tail position, the scala compiler does not consider this a tail call. Why is that important? because if you refactor code into a local function, it might cease being tail-call-optimizable, even though it is 100% equivalent.
Same for clojure, theoretically. If I have a piece of code calling `recur` in tail position, and I refactor that code into a local function, then the `recur` call changes semantics. If the local function happens to have the same arity as the parent function, it won't even be caught as a compiler error--it will just be a bug.
(loop [x 1]
(something)
(something-else)
(recur (inc x)))
ought to be (in my opinion) the same as the following in terms of referential transparency.
(loop [x 1]
(something)
(letfn [(f [x]
(something-else)
(recur (inc x)))]
(f x)))
but lo and behold it is not.
we have the same problem of course with `reduce` / `reduced`.
good to know 🙂
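For anyone following along, `reduced` short-circuits a `reduce` by returning a wrapper value that `reduce` checks for - a small sketch:

```clojure
;; stop accumulating as soon as an element exceeds 5
(reduce (fn [acc x]
          (if (> x 5) (reduced acc) (conj acc x)))
        [] (range 100))
;; => [0 1 2 3 4 5]
```

The parallel to `recur` is that the wrapper only takes effect if it is returned from the reducing fn itself, so a refactored helper must propagate it unchanged.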
Gaaaah! Does anyone know how to get debug information out of `clj` as to how it is trying to access a maven repo that's in an AWS S3 bucket? I've got a repo that I can `aws s3 [ls|cp]` on no problem, but `clj` is refusing to get deps from it
the reason for this is almost always creds related
the most common thing I run into is when the creds do not have grants for s3 GetObject etc operations (even though they have access to the bucket)
I pulled the stuff down using `aws s3 cp`, so that's fine. Worked it out in the end, but haven't worked out how to fix
The maven plugin for `deps.edn` tries to look up the location of the bucket using the S3 API
The S3 API `GetBucketLocation` command cannot, via AWS IAM, be granted to a user/role not in the same account as the bucket
> If the bucket is owned by a different account, the request will fail with an HTTP 403 (Access Denied) error.
So the `deps.edn` maven stuff has an option to specify the repo as {:url ", which works fine
Unfortunately we're also doing stuff with `lein`, and the same option doesn't seem to work there
that is a feature specific to the Clojure CLI implementation
From some more digging, it appears that when I use credentials that are directly for the account in question it works, but when I use credentials from another account, which still have access to the bucket, it fails
Another naming question:
I have a function that creates a map from `id` to `state`. So a good name for the function is perhaps `id->state`. However, that also seems like the best name for the map created by the function. Is this a problem?
(let [id->state (id->state args)] ...
Another alternative would be `->id->state` since it creates maps of `id` to `state`, but that might look weird and not be idiomatic.
it's not a problem syntactically - that local binding will shadow the var
Yes, so my question is kinda subjective and about semantics. If no-one here would object to the above let I guess I am good.
I think would probably rename the function to say what it does (key-by-id or index-state or something)
If you have watched Stu halloways debugging videos, you can see why shadowing names can be annoying. For instance, you can just def a value with the same name as the locals and evaluate expressions in your source. But if a local shadows a function you can inadvertently break things
I think `key` is a great and descriptive verb for this: `key-state-by-id`. I'm glad I asked 🙂
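A minimal sketch of what such a function might look like (the `:id` field and the sample data are invented here for illustration):

```clojure
(defn key-state-by-id
  "Index a collection of state maps by their :id."
  [states]
  (into {} (map (juxt :id identity)) states))

(key-state-by-id [{:id 1 :status :ok} {:id 2 :status :down}])
;; => {1 {:id 1, :status :ok}, 2 {:id 2, :status :down}}
```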
Hi all! Can I extend the clojure.core functions with my own functions, without needing to re-import them in every namespace? For example, utils like:
(defn concat-vec [& rest] (vec (apply concat rest)))
I tried to use
(in-ns 'clojure.core)
and define my functions, but how should I compile them after clojure.core?
you should not want to do that :) just make functions in a namespace and require that namespace when you want to use it.
If you wanted to avoid repeating yourself too much in your requires, I think this would be the way:
(ns foo.bar
(:require-macros [foo.baz :refer [macro-that-expands-to-require]]))
(macro-that-expands-to-require)
(From the May 24 ClojureScript release: https://clojurescript.org/news/2021-05-24-release)
nice, just what I needed, thank you!
That feature is going to be dropped again, there is no good way to make it work without breaking other things
@isak unfortunately that won't be supported anymore in the future, if I understood a conversation between @plexus and @dnolen in #cljs-dev correctly
That is correct, there's no good way to make it work without breaking existing functionality
trying to re-sync with Closure Compiler and Library - and there'll be another release and we'll mention it's been removed
short story is that ClojureScript is an AOT variant of Clojure - you must know the whole graph of deps
but if you put something behind a macro you cannot figure this out in any sensible way
there might be something that could work - but the code started getting ugly immediately once the bug reports flowed in
@paul931224 To add a little color to "you should not want to do that": many people find that this kind of shortcut is a savings only in the very short term. It is extremely valuable to have the source of each of your functions be obvious when inspecting the code. This is the same reason that explicit `:refer` is encouraged over `:use` in `ns` forms.
Look at a custom generator. In particular, the `double*` generator. See https://clojure.github.io/test.check/clojure.test.check.generators.html#var-double*
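For instance, a sketch of using `double*` to exclude NaN and infinities (option keys per the test.check docs linked above; the 0.0-1.0 range is just an example):

```clojure
(require '[clojure.test.check.generators :as gen])

;; generate doubles, excluding NaN and ±Infinity
(gen/sample
  (gen/double* {:NaN? false :infinite? false :min 0.0 :max 1.0}))
```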
the `double-in` generator does not generate them by default. what spec/generator are you using when you see them?
as a matter of fact, i was using https://github.com/stathissideris/spec-provider
shoot, I thought it was the opposite. we've talked about this in the past