
I'm trying to call this function via interop: ; CIDER sees it and autocompletes/explains the signature, but I get the following:


Have you checked to see if the arity and types of the args are correct?


As you can see from the GitHub line, it's public static ShiftReduceParser loadModel(String path, String... extraFlags) {. That should match my attempt to call it


In the java demo code, it's called with just one arg (a string). But it's not liking me


I wonder if it's the extra flags varargs complaining


I wondered that, too, but can't find any reference to that causing problems elsewhere


Java varargs become an array of items. You can't omit it.


The (Java) demo code omits it, with lines like ShiftReduceParser model = ShiftReduceParser.loadModel(modelPath);. But I also tried giving it ["somestring"] etc. without any change in the error I received


You need a second argument of (into-array String [])


Yes, Java can omit. Because it's a Java compiler construct.


But if you're calling from other languages, you must pass the array.


that makes me sad inside...


Under the hood, the signature is really public static ShiftReduceParser loadModel(String path, String[] extraFlags)
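The same rule shows up with any JDK varargs method, so it's easy to try at a REPL; for example, java.nio.file.Paths/get is declared (String first, String... more):

```clojure
;; From Clojure the varargs parameter must be an explicit array;
;; pass (into-array String []) when there are no extra args.
(str (java.nio.file.Paths/get "tmp" (into-array String ["a" "b"])))
;; => "tmp/a/b"
```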


Well, that's my TIL...


Thanks a ton, @seancorfield. That fixed the issue very quickly


It's always puzzling the first time or two you encounter it...


if there's some/file.clj and some/file.cljc does clojure look at both or does one get preference?


So based on `Consider a lib named by the symbol 'x.y.z; it has the root directory <classpath>/x/y/, and its root resource is <classpath>/x/y/z.clj, or <classpath>/x/y/z.cljc if <classpath>/x/y/z.clj does not exist.` It seems like .clj is prioritised.
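One way to check is to look the resources up directly; a small sketch (the lib path here is just an example):

```clojure
(require '[clojure.java.io :as io])

;; require tries the .clj root resource first and only falls back
;; to .cljc when the .clj is absent
(defn preferred-source [lib-path]
  (or (io/resource (str lib-path ".clj"))
      (io/resource (str lib-path ".cljc"))))

(preferred-source "clojure/core")  ; clojure.core ships as a .clj
```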


Thanks! Didn't even think this would be in the docstring haha 😄


Surprising how detailed it is on require^^


I want to load some external state once and cache it for all threads in my app (feels like a promise). However, multiple threads may be racing to initialize the value and I only want one to do the actual work (I will handle my own retries). Should I just wrap my work in a (locking my-promise ...) or is there a better way to do this?


@cpmcdaniel sounds like delay


not quite (oh, hey Max!)


because this thing starts out in a context map that is shared by all threads


and I can’t kick off the work right away


in this case, it’s because the external state may not be ready yet


also, the first thread (the one that “wins”) will parameterize the operation, and I don’t know up front which one it’s going to be - I think that’s primarily why delay would be awkward


so, my first thought was to use a ref and dosync, but that doesn’t guarantee that only one thread is executing the block


and so I think old-fashioned Java synchronized is the way to go
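A minimal sketch of that double-checked locking shape; load-state! here is a made-up stand-in for the real external work:

```clojure
;; hypothetical stand-in for the expensive external load
(def call-count (atom 0))
(defn load-state! []
  (swap! call-count inc)
  {:loaded true})

(def state (atom nil))

;; double-checked locking: only the first caller runs load-state!;
;; later callers return the cached value without taking the lock
(defn init! []
  (or @state
      (locking state
        (or @state
            (reset! state (load-state!))))))
```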


My code uses a 3rd-party Java library that generates lots of logs via SLF4J. I configured timbre to manage it, but I don't know how to disable logging when I run the tests (lein test).


maybe (promise-chan) from core.async might be useful


combined with a single-item channel - first thread to take a non-nil value from the single-item channel gets to do the work


it's quite similar to a promise yes. Basically a chan that will always deliver the same value.


but that just feels like an excuse to use core.async


feels like there could be an easier solution


yeah, that’s why I brought it here


what if each thread, when it wants this value, invokes a thunk (load-resource!) that keeps local state of whether it has already run? If it has, it's a no-op; only the first invocation does any work


so use a fn to wrap the state? When invoked the first time it does the work and returns the state, otherwise it just returns the state.


feels like a simple:


(fn [x y]
  (let [p (promise)]
    (do-something-later-with x y
                             #(deliver p %))
    @p))


yeah so all of your threads can blindly call the load function and the load function is aware of whether to do anthing


is there a brotli ring middleware?


(let [p (promise)]
  (defn get-stuff []
    (deliver p (get-that-stuff))))


looks like multiple calls to deliver are fine. so just make sure there's only one promise


but wouldn't get-that-stuff potentially be called multiple times with this?


yeah, but I don’t want other threads wasting effort


oooh, I have a bug


start-work needs to be reset to false so other threads don’t do stuff too


the promise itself has a countdown latch, (let [d (java.util.concurrent.CountDownLatch. 1)] ...), so it will only get invoked once


      [this x]
      (when (and (pos? (.getCount d))
                 (compare-and-set! v d x))
        (.countDown d)


yeah I was typing a solution involving compare and set + atom too 😛


is that intended behavior? Seems wasteful.


(let [a (atom ::empty)
      p (promise)]
  (defn f [x]
    (if (compare-and-set! a ::empty ::pending)
      (deliver p (do-stuff x))
      @p)))


that's strict evaluation. need a macro or a func, not a value


well you have to deref the ret of the fn, but that should do it


(let [a (atom ::empty)
      p (promise)]
  (defn f [x]
    (if (compare-and-set! a ::empty ::pending)
      @(deliver p (t x))
      @p)))


I’d like to turn that into a (deliver-once x expr)


if you look at the source of promise, and copy it as promise-thunk, replacing the x in invoke with f and the countdown's value with (f), you should get what you want
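Pulling the thread's pieces together without copying promise internals, a run-once wrapper can also be sketched with just an atom and a promise (all names here are invented):

```clojure
;; run-once loader: the first caller's thunk runs; everyone blocks
;; on the same promise and gets the winner's value
(defn once-fn []
  (let [flag (atom false)
        p    (promise)]
    (fn [thunk]
      (when (compare-and-set! flag false true)
        (deliver p (thunk)))
      @p)))

(def load-once (once-fn))

(load-once #(do :expensive-value))
;; => :expensive-value
(load-once #(do :ignored))   ; thunk is never invoked
;; => :expensive-value
```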


I have a sort of philosophical question on why a function is a particular way… I like using keep where applicable. It's effectively (remove nil? (map ...)) but a little more terse. The annoying thing is that it doesn't support multiple arities like map does, so I sometimes end up needing the remove/map idiom anyway. I'm wondering if there is a reason why it wasn't extended to support multiple seqs like map does, or if it's just a matter of lack of motivation to write it?

This made me also look at the source for map, by way of comparison to the source for keep. Like keep, there is an effort to make it as efficient as possible, using chunked seqs (makes sense, since maps are used a lot). However, the longer arities just use a lazy-seq with a simple cons. Was this just a matter of code-writing expediency, since those arities are used less frequently? Does the complexity of some other operation overtake the benefits of chunking? Or is there some other reason that chunking isn't used for them?


@quoll Not sure about most of your questions, but have you considered just using mapcat, which does support multiple collections?


I use mapcat where appropriate, but I don't see how it offers functionality in this context? My questions are:
- Is there a reason why keep does not have arities supporting multiple seqs? (If not, then I might try writing them)
- Is there a reason why map only does chunking when processing a single seq? (If not, then I might try updating the arities for multiple seqs to also support chunking)


mapcat can do everything filter, keep, map etc. do if you construct your return value properly


user=> (mapcat (fn [n] (when (even? n) [n])) (range 10))
(0 2 4 6 8)
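Extending that trick to the multi-collection case the original question asked about; a sketch of a keep over several seqs (keep-multi is not a core function, just an illustration):

```clojure
;; multi-collection keep via mapcat: wrap non-nil results in a
;; vector, contribute nothing otherwise
(defn keep-multi [f & colls]
  (apply mapcat (fn [& args]
                  (when-some [v (apply f args)]
                    [v]))
         colls))

(keep-multi (fn [a b] (when (even? (+ a b)) (+ a b)))
            [1 2 3 4]
            [1 2 3 4])
;; => (2 4 6 8)
```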


OK, that makes sense. But my original perspective here was to make code more terse (`keep` vs. remove/map) and optimized (chunked vs. unchunked). mapcat chunks its answers, so that handles part of it, but the seq wrapping of intermediate values before being returned lazily makes me think it would lose something that map offers?


I would strongly recommend ignoring chunking, and instead looking at transducers


I’ve used transducers in various circumstances. I’m particularly enamored of the case where individual elements in a stream result in 0..n output elements, and being able to terminate streams early. But I never really felt like I completely understood the benefits of transducers in general. I can, and have, used them just fine. But since things like an applied map or filter are lazy, I never felt like composing transducers for these would have significantly different overhead to just threading through multiple seq processing operations. @hiredman in an effort to drink more Kool-Aid, outside of the cases where transducers offer functionality that’s not easily supported with standard seq processing functions, do you have a link which describes the why of transducers? (as opposed to the numerous pages which show how)


I know you didn't ask me, but the main thing is that they move the "common ground" away from the seq abstraction to a more abstract / general space. Which means you can apply them to things that don't match the assumptions of lazy-seqs, and also use them seamlessly on lazy-seqs


I don't want to be forced to turn a channel, queue, or socket into a lazy data structure shaped interface just to apply useful functional transformations to it


Is it possible to find all namespaces in the classpath? That is, all the namespaces that can be loaded?


@quoll I assume you are interested in performance, which is why you care about chunking, transducers avoid the intermediate allocations of seqs, and will perform better
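For a feel of the difference, composed transducers run each element through the whole pipeline in a single pass; a minimal example:

```clojure
;; one pass, no intermediate lazy seqs between the steps
(def xf (comp (map inc) (filter even?)))

(into [] xf (range 10))
;; => [2 4 6 8 10]
```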


thanks I'll take a look


I am. But it’s more an intellectual curiosity. Looking at the map source, there is a lot of effort in chunking… for one case, and none of the others. Is it just historical for that case? Or did the other arities miss out because there were diminishing returns in putting the effort in?


multi-seq map takes a significant performance hit up front (checking the termination condition is more complex, and the mapped function has to be applied), so the idea of adding chunking to it, making it more complex just to claw back to maybe par, is kind of meh


also, chunking is gross (it is not an insignificant source of bugs introduced because people expect lazy seqs to be processed one at a time, and they were until chunking was added), so seeing it spread is kind of sad


The chunking 'boundaries' for different input seqs could be different, too.


the good thing about chunking is it forces people who were using lazy-seqs for side effects to find some other abstraction


sometimes that's the right replacement - it can vary