This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2020-03-31
Channels
- # announcements (4)
- # aws (1)
- # babashka (52)
- # beginners (178)
- # boot (4)
- # cider (2)
- # clj-kondo (10)
- # cljs-dev (39)
- # clojure (744)
- # clojure-europe (12)
- # clojure-germany (6)
- # clojure-india (56)
- # clojure-italy (5)
- # clojure-nl (60)
- # clojure-spec (9)
- # clojure-sweden (14)
- # clojure-uk (36)
- # clojuredesign-podcast (6)
- # clojurescript (11)
- # community-development (5)
- # core-async (4)
- # data-science (6)
- # datomic (6)
- # emacs (7)
- # events (4)
- # exercism (33)
- # fulcro (11)
- # funimage (2)
- # graalvm (29)
- # java (1)
- # joker (3)
- # lambdaisland (15)
- # malli (2)
- # meander (55)
- # mid-cities-meetup (1)
- # nrepl (8)
- # observability (4)
- # off-topic (2)
- # pathom (5)
- # re-frame (31)
- # shadow-cljs (73)
- # spacemacs (18)
- # sql (27)
- # test-check (14)
- # testing (1)
- # tools-deps (5)
- # xtdb (13)
Don't think I'm as enthusiastic about unfold. An iterate
which support for a side-effectinf f seems more generally useful. Its easy to combine with a take-while when you care.
I guess I'm just not seeing the point of the non-caching behavior with reduce. What scenario would that be needed for ?
(defn fetch
  "A stub fetch that pretends it's getting random data from a url, but instead returns a random char from the url."
  [url]
  (println "fetching " url)
  (rand-nth url))
(defn lazy-fetch-until-data-not-seen
  "Returns a lazy seq which, on each realization of the next element, fetches it by calling
   fetching-fn. Stops once fetching-fn returns an element that has already been fetched before."
  [fetching-fn]
  (->> (iterate
         (fn [[_ page-fetching-fn pages-fetched]]
           (let [page (page-fetching-fn)]
             (if (pages-fetched page)
               [:done nil nil]
               [page page-fetching-fn (conj pages-fetched page)])))
         [nil fetching-fn #{}])
       (lazy-seq)
       (map first)
       (drop 1)
       (take-while #(not= :done %))))
(def data (lazy-fetch-until-data-not-seen #(fetch "")))
(mapv upper-case data)
;> fetching
;> fetching
;> fetching
;> fetching
;> fetching
;> fetching
;> fetching
;;=> ["G" "W" "." "H" "O" "T"]
(mapv upper-case data)
;;=> ["G" "W" "." "H" "O" "T"]
(mapv upper-case data)
;;=> ["G" "W" "." "H" "O" "T"]
(mapv upper-case data)
;;=> ["G" "W" "." "H" "O" "T"]
(reduce (fn[acc e] (conj acc e)) [] data)
;;=> [\g \w \. \h \o \t]
(reduce (fn[acc e] (conj acc e)) [] data)
;;=> [\g \w \. \h \o \t]
(reduce (fn[acc e] (conj acc e)) [] data)
;;=> [\g \w \. \h \o \t]
(reduce (fn[acc e] (conj acc e)) [] data)
;;=> [\g \w \. \h \o \t]
But just to be sure, I add it there anyway, to remind myself that I need to make sure something handles it, or I expose the Iterate to the caller
The reason the fetches only happen once is because the lazy seqs you are building on top of iterate (with take, map, etc) are caching
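That caching is easy to see in isolation; a minimal sketch (my own, not from the thread):

```clojure
;; minimal demo of lazy-seq caching: map over iterate realizes each
;; element once; re-walking the same seq cells does not re-run f
(def calls (atom 0))
(def s (map (fn [x] (swap! calls inc) x) (iterate inc 0)))
(doall (take 3 s))
(doall (take 3 s))
@calls ;; still 3: the realized cells were cached
```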
You lose the performance benefit of iterate's reduce impl, but it is not what I'm looking for here
feel free to use the code above from CLJ-2555, it's meant for iterated consumption of APIs
the code above using iterate
is not clear, and @hiredman is right that the lazy-seq usage is wrong
And what's not clear about it? I'd use a similar pattern for generating fibs or anything else side effect free as well? Am I using it wrong?
Having now understood why the doc-string says so, it seems wrapping in lazy-seq solves the issues
I mean, isn’t this kind of ridiculous? It’s not fundamentally different than having a FSM with a terminal state:
{:state :done}
I’m not saying the underlying concept is not real. Positional semantics have been with us forever. I’m saying it’s a ridiculous way to frame the issue (and it’s not always true) 🙂 “Syntax is complecting,” is at least more accurate.
there's a whole history of linked stuff from CLJ-2555 and CLJ-1906 (the latter especially) that is worth reading
Right, I can see that. I guess I just mean, that's just how iterate
works, no? Like even for side effect free, you'll be left with code like I have, so it's nothing special, just maybe iterate produces ugly code
This is the most used example of iterate: (def fib (map first (iterate (fn [[a b]] [b (+' a b)]) [0 1])))
I think part of the problem here is a mismatch between lazy data oriented evaluation via laziness and procedural evaluation where side-effects matter - that's pretty fundamental
I mean you said:
> passing a named tuple of state is awkward and hard to understand
> signaling termination is awkward [:done nil nil]
the "no side effects" in the doc pre-dated the 1.7 impl
it has always said that
So I mean, the only thing different I do to handle side-effecting f
is to make sure I return a lazy-seq and not an instance of Iterate
I'm not saying it won't be great to have a different fn with maybe a more ergonomic interface, I'm still trying to wrap my head around the iteration
fn from CLJ-2555, so maybe once I get how to use it, I'll be fully sold. But I'm also not seeing the issue with what I'm doing with iterate right now, seems all concerns around f being side effect free are irrelevant if you make sure you don't leak the clojure.lang.Iterate
the iterate fn arg you have written is complected (scratch off the bingo card) - it's handling production (calling a function) & termination
> the only thing different I do to handle side-effecting f
is to make sure I return a lazy-seq and not an instance of Iterate
if you don't use threads, maybe
Hum, I guess that's a general question related to lazy-seqs and whether their caching is thread safe or not
it's specifically not safe to call iterate with side effects in multiple threads, an effect could happen twice
the fact that this was already a known limitation is why the implementation of reduce which ignores the iterate cache was implemented
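A small REPL sketch of that recompute behavior (my own construction, assuming Clojure 1.7+ where Iterate implements IReduceInit):

```clojure
;; reduce/transduce over an Iterate takes the non-caching reduce path,
;; so a side-effecting f runs again on every consumption
(def calls (atom 0))
(def xs (iterate (fn [x] (swap! calls inc) (inc x)) 0))
(transduce (take 3) conj [] xs) ;; => [0 1 2]
(let [n @calls]
  (transduce (take 3) conj [] xs)
  (assert (> @calls n)))        ;; f was called again the second time
```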
And I believe once wrapped, it is now safe, and thus it is fine to have f
be side effecting
that doesn't change the thread safety aspect, iterate will recalculate in a way that other lazy data sources do not
that belief is incorrect
Except it won't be able to, because lazy-seq is now in charge, and it will be caching
lazy-seq is out of the picture after one value is consumed
(ins)user=> (type (rest (lazy-seq (list 1 2 3))))
clojure.lang.PersistentList
(ins)user=> (type (rest (lazy-seq (iterate inc 0))))
clojure.lang.Iterate
lazy-seq only wraps the outer collection, it is not recursive
that suffices for a hack that tricks reduce, it doesn't help with thread safety
reduce only checks the type of the outer collection
in your use case maybe, but it hasn't been made thread safe
I think map might be continuously wrapping things in a lazy-seq, as it handles its own call to rest
Anyway, you had a convincing argument. So even if map does make it work, it still becomes more brittle, and less general. Like always remembering to make sure it is somehow properly wrapped, even on a call to rest, becomes tricky
How to use the thing from https://clojure.atlassian.net/browse/CLJ-2555 :
step!
gets called with an initial key. if you don't override it, the initial key is nil
HTTP Responses from these sorts of paginated APIs contain 2 important bits of information:
1) an argument that you can pass for the next call to step!
2) some data
(In the terminology of that function's docstring, the HTTP Response is called a "ret", the argument that you pass to the next call to step! is "k" and the data is called values)
Seems it's very much designed for typical pagination, can it be used more generically?
to get from the HTTP Response to the next k for step!, there is an argument :kf
, which is a function that gets called on the HTTP Response ("ret")
there's one more piece, :vf, but just to recap: step! is a function of "k" and returns a "ret"
:vf is a function of the "ret", and it tells the iteration process whether this "page" (HTTP Response) had anything in it
so the flow is:
1) get a page by calling step! on k
2) call some?, abort if falsey
3) call vf to get some values out
4) call kf to get the next k, abort if no k
5) loop to top
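That flow, written out as an eager loop (a sketch of the semantics only; the real iteration from CLJ-2555 is lazy/reducible, and the option names here are just the ones from its docstring):

```clojure
;; eager sketch of the iteration flow: step! (produce a ret),
;; somef (the some? check), vf (extract values), kf (extract next k).
;; Not the real implementation.
(defn iteration-sketch
  [step! & {:keys [somef vf kf initk]
            :or   {somef some?, vf identity, kf identity}}]
  (loop [k initk, acc []]
    (let [ret (step! k)]
      (if (somef ret)
        (let [acc' (conj acc (vf ret))
              k'   (kf ret)]
          (if (some? k') (recur k' acc') acc'))
        acc))))

;; e.g. paging through a map standing in for an API:
(iteration-sketch {0 {:data [:a :b] :next 1}
                   1 {:data [:c]}}
                  :initk 0 :vf :data :kf :next)
;; => [[:a :b] [:c]]
```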
So in my case, it seems I would still need to hack around it, and have ret be a tuple of state, and k won't be a key, but will carry over all previously returned pieces of data, no?
Since I'm doing, fetch data until you receive data which was previously returned by prior calls to fetch
@potetm I know, but I find lazy-seq to be even less ergonomic 😛, so I was trying to get iterate working, but then it can't handle side effects, and now there is iteration, which might seem like its not generic enough, so maybe I'll just go back to lazy-seq
(defn log-groups
"returns all log groups from CloudWatch"
[client]
(->> (iteration
(fn [token]
(aws/invoke client (cond-> {:op :DescribeLogGroups}
token (assoc :request {:nextToken token}))))
:kf :nextToken
:vf :logGroups)
(into [] cat)))
Like, you might as well loop/recur or atom bash in a doseq
if you gotta check the full result set each time.
my general rule of thumb with laziness is if you care how much of the lazy thing gets read when, you shouldn't use a lazy seq for it
it's kind of a special case
but you should always be prepared to read 32 items ahead or whatever
or be careful in how you consume it, which you probably are with something like this
Doesn't that still leave open a gap in the language, where there is nothing that can really replace a generator easily?
if you map or filter over it, you're chunking
map and filter produce chunked sequences
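This is easy to observe (my own check; the chunk size of 32 applies to vectors and ranges on current Clojure versions):

```clojure
;; mapping over a chunked source realizes a whole chunk (32 elements)
;; even though only the first element is asked for
(def realized (atom 0))
(first (map (fn [x] (swap! realized inc) x) (vec (range 100))))
@realized ;; => 32
```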
if it reads 32 elements ahead from the source, then aren't you chunking?
how else could it work?
oh ghadi, you're talking about the (if (chunked-seq? ...) ... in there?
I mean, it makes a lot of sense, but I thought map only did that if the coll was an instance of ChunkedLazySeq or whatever
so map over filter ?
I guess none of them would then be
I was kind of hoping everything would respect the difference between ChunkedSeq and LazySeq, and only chunk once a ChunkedSeq was present in the chain
but a greater point is that any seq impl could chunk and you don't have any control over that
Hum... I understand could, but should? Like wouldn't the semantics dictate it shouldn't
the semantics very much don't
Okay, so ya, just considering Clojure to have laziness is a mistake at this point I guess
well that's absolutely the wrong takeaway imo
Yeah, I think that’s actually right. You gave up control over the execution model when you decided to use map
.
without lazy seqs, you wouldn't have infinite seqs and partial reading, and many things we find great
I meant like, don't think you can leverage lazy-seq to build your own lazy one at a time workflow of especially something side-effectful
It really doesn't seem to be the intent of the whole lazy-seq machinery, so doing so is not advised
with lazy seqs, you do not have control of consumption
with loop/recur you do
the control is internal, not external
It does leave open the generator use case though it seems, for maybe a community provided library
people have written clojure generator libs
we have also thought about a concurrent flavor of iteration
, where the calls to step!
can run ahead by a configurable buffer
Ya, I just see more and more people looking for a "generator" like behavior in Clojure and asking how to do that. Since it's all the trend in other languages right now
just the first one I see on googling, I know I've seen others
on a really old version of core.async :)
seems like I just saw a much more extensive one recently
I honestly prefer that model to lazy seqs for side effecting stuff. You get fine-grained control over side-effects, and it’s separated from transformation code.
I think that's the one I was remembering
definitely is some overlap with how core.async is impl'ed
I've seen this one before: https://github.com/leonoel/cloroutine/blob/master/doc/01-generators.md
That erdos.yield one seems interesting as well, similarly re-implement its own machinery instead of using core.async
To be honest though, in my case, I'm not looking for that imperative style of yielding
Just need to create an iterator that can carry state over from prior iteration, and is truly lazy, one element at a time 😛
Cause I like that over a generator, and it's easy to just not retain the head if you don't care about prior "generated" elements
Maybe the user won't want to see all thousands, only first hundred, but those first 100 still need to be transformed somewhat before being displayed
@didibus using iteration
if you have seen something in the result set already, step!
can return a ret
that fails the some?
check
Have step! return a tuple, and kf can extract the state and k can just become state I carry over instead of a key from the response
(defn didibus-thing
  "fetch is a side-effecting 0 arg fn"
  [fetch!]
  (let [seen (atom #{})]
    (iteration (fn [_]
                 (let [obj (fetch!)
                       [old new] (swap-vals! seen conj obj)]
                   (when (not= old new)
                     obj))))))
notice step!
's "k" argument is ignored, and the value returned by each step is the same as the "ret"
the neat thing about this is that you get to choose whether you use it as a seq or a reducible
Ya, it's probably what I'll start using once 1.11 is out. I feel like, if it allowed carrying over something, you could avoid the closure over an atom or some other construct, but I can see the challenge in the API
I still conceptually prefer the iterate model, like in its beauty. The idea is just that I want a sequence of (f x) -> (f (f x)) -> (f (f (f x))) -> etc. I find this model pretty conceptually elegant, it's just that here f can't have side effects.
The next element is a result of the previous (and possibly all prior or anything else I choose to carry over)
No offense taken, these things are a bit personal opinion. The atom similarly annoys me 😛
(defn didibus-lazy-seq
"fetch is a side-effecting 0-arg fn"
[fetch!]
(let [step (fn step
[seen]
(lazy-seq
(let [obj (fetch!)]
(when (not= (conj seen obj) seen)
(cons obj (step (conj seen obj)))))))]
(step #{})))
Maybe in a language with support for multiple return it would be cleaner. But I got used to tuples in Clojure being an idiomatic way to do so
It also annoys me that the first result of iterate is x on its own, I wish it was (f x). That's why I have the whole drop 1 thingy
And I guess you could have a multi-arity iterate, so f
could be one or more arguments, again, to avoid the tuple as arg-vector trick
(defn fibonacci []
(->> (iterate (fn [[a b]] [b (+ a b)]) [0 1])
(map first)))
So in the above @ghadi would you say that one could also choose to use iteration
for it with an atom? Would there be any advantage? Or do you envision iteration
being used only for side effects?
@ghadi Here's my version using iteration:
(defn lazy-iteration-fetch [fetching-fn]
(->> (iteration
(fn [pages-fetched]
(let [page (fetching-fn)]
(if (pages-fetched page)
nil
[page (conj pages-fetched page)])))
:vf first
:kf peek
:initk #{})
lazy-seq))
What do you think?
I know you intended k to be more like the key needed to make the API request, but I guess in practice it can be whatever. So I can use a similar trick of making it the previous state. It still ends up being much more readable than with iterate, so I like it. Have to wrap in a lazy-seq as well, since I see that iteration is non-caching; it assumes the API will be idempotent on k, or that consumers won't consume more than once otherwise.
@didibus It only talks about being idempotent on initk
"it is presumed that step! with non-initk is unreproducible/non-idempotent"
"if step! with initk is unreproducible, it is on the consumer to not consume twice"
Ah, but your version is unreproducible on initk
as well, right? Since it calls (fetching-fn)
regardless of what is passed in.
So wrapping in lazy-seq seems to work to make it cached. Though I think at this point, I'm doing the same trick I did with making iterate safe for side effects
Which @noisesmith pointed out that it might not be super reliable
Because lazy-seq only wraps the outer sequence, once you call rest or next on it, you get back something else
The doc is unclear. I think it just means that without an initk the server has no way to be idempotent, since you provide it no key, so it is assumed not to be; in that case it is probably expected that the consumer won't consume twice
I thought the docstring makes it clear that without an initk it just calls the fn with nil the first iteration
Which I guess, well, I can see it maybe being a bit confusing, like does that mean it assumes that if you pass an initk it must be reproducible?
And if so why? Does it do something different to handle the non reproducible case of using default initk?
Basically at this point I have three implementations and they all seem to work just as well, one with iterate
which I map over, which seems to make it behave properly with side effect when you do that. One with iteration
which I wrap in a lazy-seq, and again it then seems to have the expected behavior, including (rest data) returning the cached thing and not calling fetch again. And one with lazy-seq
.
@didibus If the step is non-reproducible on initk, then you may get different answers if you consume it twice -- which is back to the same boat you were in with iterate
and trying to do both seq
and reduce
.
It doesn't need to do anything special for that case, it's just a consequence of the non-reproducibility of that call (since if it gets called twice on initk it may return different results).
Ya, I'm investigating. So for iterate
, wrapping in lazy-seq
fails: if you then reduce on the rest, you get back an instance of Iterate, and reduce will again re-consume the fetches.
In other words, it's saying that if calling step! with a known fixed value can return different values on repeated calls (such as a random number generator), then you can't guarantee that repeated consumption of the sequence of values will return anything predictable.
Right, and you're seeing that because you don't have a sane "seed" for your initial fetch call.
Ya, so like... it seems to suffer from the same "non side effecting" caveat as iterate, just I guess, it assumes the caller understands the risk
How do you establish the "first call" in your fetching sequence?
It seems to me that if there's no "seed" value, then you have a random sequence with no repeatability?
Only difference is, for iteration
, wrapping in lazy-seq seems to work fully, and calling rest on it returns a LazySeq
You're not answering the question.
You're focusing on a hack instead of the core problem.
(defn fetch
"A stub fetch that pretends like its getting random data from a url, but instead return a rand char from url."
[url]
(println "fetching " url)
(rand-nth url))
That has no "seed" so the initial call is completely unpredictable -- so it fails the guarantees of iteration
.
No, absolutely not.
That's not what it says.
Ok, but kind of. Let's take a step back. iterate
says it is not safe for side effects because if consumed twice it will recompute the side effect
iteration
says you can use it for side-effect, but if consumed twice it will recompute the side-effect
Only for initk
It says that if the step function, invoked on initk, is not reproducible, it's up to the consumer to not attempt to consume it twice.
It doesn't have any caveats about step called on non-initk values -- in fact it specifically says it is presumed to be unreproducible.
But that initk value -- the first call -- has to be reproducible if you want any guarantees about repeated consumption.
i.e., you can't have both.
If you don't care about the guarantee, then the reproducibility of step called on initk doesn't matter.
Like, what does that change? If my first initk is reproducible, but not my next k, how does it guarantee it?
You have the source code -- you can answer that question.
And that would explain why wrapping iteration
in a lazy-seq is enough to add the guarantee even with a non reproducible initk
But I also don't understand why people keep saying me wrapping things in lazy-seq is a hack?
Very specifically, calling seq
on the result of iteration
invokes (step! initk)
and calling reduce
on that same result invokes (step! initk)
again.
Yeah, I don't know why you don't understand, when everyone keeps telling you the same thing 🙂 I was hoping to explain it so you'd see it but I clearly haven't done any better job that either Ghadi or Alex at this point. Sorry, I tried 😞
I give up. Sorry, man, but it's late and I've tried.
(let [p (iteration ...)]
(seq p) ; calls (step! initk) to seed the computation
(reduce ... p)) ; calls (step! initk) AGAIN to seed the computation
That's the clearest I can make this.
Given an initial seed k, it assumes step! itself will return the same thing, and the same subsequent thing on all next ks it returns
i don't think it assumes that. i thought you quoted the docstring in which it tells the caller to beware
To me that sounds like step! must be idempotent, not just on the first call, but on the full series of call
> > if step! with initk is unreproducible, it is on the consumer to not consume twice" that's saying if your api calls aren't idempotent then only issue them once
And here's the proof:
user=> (def a (iteration foo :initk 0 :kf inc))
#'user/a
user=> (reduce (fn[acc e] (if (> acc 5) (reduced nil) (do (println e) (inc acc)))) 0 a)
0
0.77166746127496
0.07621011513209086
0.9841327041345114
0.821362141438259
0.21009693818295794
nil
user=> (reduce (fn[acc e] (if (> acc 5) (reduced nil) (do (println e) (inc acc)))) 0 a)
0
0.7984895604320498
0.5150573279942907
0.11688034595267827
0.9450752936010137
0.6539525668192183
nil
That's not "proof" of anything.
Well, here you have a step! which returns the same thing consistently for the same initk
I give up. I'm going to bed.
Thanks anyway Sean, I appreciate. I know I'm being difficult, but I'm really not seeing it
But it all started with me wondering what issues iterate
has when used with a side effecting f
, which its docstring warns against
Ghadi said iteration
is a new function, might release in 1.11, which is to handle that kind of scenario
But, its docstring says that it will just keep calling f over and over every time it is consumed
So in my deep dive, iterate reduce calls the function over and over. And iterate seq will return Iterate, and not lazy-seq. So I get why in Iterate's case, it always calls f over and over
But when looking at iteration
, it's only slightly better, in that reduce also just calls f over and over
To make iterate
safe with side effects, I found a call to map
works, because map will never get from the Iterate more than once, as map itself will cache its results.
And with iteration
, to make it that it consumes only once from f
as well, I wrap it in a lazy-seq, so that reduce is on the lazy-seq now, which will use the iteration
seq under the hood. And I'm told that is a hack as well
Finally, I conclude that iterate
and iteration
are both pretty much safe with side effects if you are okay with f possibly being called more than once, as you access the same elements over and over
Well, minus that iteration
is more safe I guess, due to the way it implements seq
, which means reducing on the rest of it still won't call f again
That said, I must be misunderstanding something, since very smart people told me I am wrong
I think for iterate
I am correct, just most people find wrapping it in a call to map
feels hacky, and even though right now we might not think of an issue, it's harder to reason about whether it works in all scenarios as well
I'm looking at the code for iteration
and trying it in the REPL, and I think I'm also correct. I'm not seeing what Sean is talking about. iteration
seems to only be caching initk
, not subsequent values or ks
Which is fine, iteration
basically leaves it up to the consumer to do whatever; if it wants to re-run the side effects multiple times, that's up to them, and the function has good support for using it with paging APIs with idempotency keys.
I don't know. I appreciate everyone's help and input. I did learn a lot. I think I understand things now, but I'll sleep on it all, since it seems some people say I still got a few things wrong, and maybe tomorrow it'll click for me
For anyone interested, I was researching this rabbit hole for my answer here: https://clojureverse.org/t/is-this-a-good-way-to-stream-data-to-caller-of-a-function/5689/3?u=didibus which I hope captures all the learnings from this chat session
hey, I have a clojure library that wraps around a java one. I am trying to disable logging during tests using timbre
(which I only require in the :dev
profile).
I tried calling (timbre/merge-config! {:appenders {:spit {:enabled? false}}})
but I can still see some WARNING logging by the java library.
The java code calls:
import java.util.logging.Level;
import java.util.logging.Logger;
...
logger.log(Level.WARNING, "blah", error);
what am I doing wrong?
Have you already added the shims listed here to route other java logging frameworks through timbre? https://github.com/fzakaria/slf4j-timbre#slf4j-timbre
still not working:
I have in lein deps :tree
:
• [com.fzakaria/slf4j-timbre "0.3.19" :scope "test"]
• [org.slf4j/jul-to-slf4j "1.7.30" :scope "test"]
• [org.slf4j/slf4j-api "1.7.30" :scope "test"]
the underlying library uses java.util.logging
but logs still appear on the screen.
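One possibility (an assumption on my part, not confirmed in the thread): jul-to-slf4j doesn't activate just by being on the classpath; java.util.logging needs its bridge handler installed programmatically before the Java library logs anything:

```clojure
;; install the jul-to-slf4j bridge so java.util.logging records are
;; routed to SLF4J (and from there to timbre via slf4j-timbre)
(import '[org.slf4j.bridge SLF4JBridgeHandler])
(SLF4JBridgeHandler/removeHandlersForRootLogger)
(SLF4JBridgeHandler/install)
```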
the spit appender is for logging to files, not logging to stdout (what's appearing on the screen probably). You might need to do something different to turn stdout off.
try (timbre/merge-config! {:appenders {:println {:enabled? false}}})
is there an easy way to convert nested hiccup-like structures like this [:key1 [:key2 "someval" [:key3 "otherval"]]]
over to a nested map?
i suppose not, the possible existence of values followed by new nested vectors is an issue
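For the simple case a sketch is doable if you accept always wrapping children in a vector (my own workaround for the ambiguity noted above, not a general solution):

```clojure
;; hiccup-ish [:key & children] -> {:key [child ...]}, recursing into
;; nested vectors; strings are kept as-is inside the child vector
(defn hiccup->map [[k & children]]
  {k (mapv #(if (vector? %) (hiccup->map %) %) children)})

(hiccup->map [:key1 [:key2 "someval" [:key3 "otherval"]]])
;; => {:key1 [{:key2 ["someval" {:key3 ["otherval"]}]}]}
```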
Hi guys, I'm using the origami library to do image processing in clojure, but I'm puzzled by how the interop works.
Here: https://www.codepasta.com/computer-vision/2019/04/26/background-segmentation-removal-with-opencv-take-2.html, there's a code segment
blurred_float = blurred.astype(np.float32) / 255.0
edgeDetector = cv2.ximgproc.createStructuredEdgeDetection("model.yml")
edges = edgeDetector.detectEdges(blurred_float) * 255.0
cv2.imwrite('edge-raw.jpg', edges)
so far I have a blurred image: (-> "resources/public/img.png" (imread) (gaussian-blur! (new-size 17 17) 9 9) (imwrite "resources/blurred.png"))
assuming you're passing or storing cv2 somewhere, not familiar with what that is, or if ximgproc
is a field or a nested class
I don't think cv2 is required. Even in what I have so far above, I use imwrite without cv2, and it works with origami.
But I have this now:
(let [blurred-float (-> "resources/public/Logo.png"
(imread)
(gaussian-blur! (new-size 17 17) 9 9)
)
edgeDetector (-> ximgproc (.createStructuredEdgeDetection "model.yml"))
edges (* 255 (.detectEdges edgeDetector blurred_float))]
(imwrite edges "edge-raw.jpg" ))
right, if ximgproc
is a class field of whatever cv2
is, it needs dot-dash notation as in my snippet with .-ximgproc
dealing with state/side-effect full java/JS interop like this in general assumes you're storing or passing whatever your stateful object is that you need to access fields & member functions on later
No, like in open cv one usually writes cv2.imwrite to write an image. Similarly, shouldn't it work with just ximgproc and not the cv2?
Is there a simple transducer I could use on a channel where it simply breaks apart a list and feeds the result to the consumer 1 by 1?
I think you could look into cat and mapcat, they might be able to do what you're looking for
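For reference, cat on its own does exactly the splitting described (a sketch; the channel usage assumes core.async is on the classpath):

```clojure
;; `cat` is a transducer that unrolls each input collection into
;; individual elements
(into [] cat [[1 2] [3] [4 5 6]])
;; => [1 2 3 4 5 6]

;; on a core.async channel it would look like (not run here):
;; (require '[clojure.core.async :as a])
;; (def ch (a/chan 10 cat)) ; colls put on ch arrive as single elements
```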
i mean. that Clojure snippet you posted has several errors in it. If you intend to access an instance member named ximgproc
, you must use either (.
or (.-
notation, the latter is preferred for fields
as I'm not familiar with this library I don't know what to expect from a call like (imread "logo.png")
for example, what does that return?
something like that should work if (imread "logo.png")
already returns the object you need for the rest of the chain
@pshar10 in a repl you can use javadoc
(from clojure.java.javadoc
ns, always present in the initially loaded repl ns) - when you pass it some object it will usually find the documentation for that object in your web browser
this can help with code that does so much interop
eg. (javadoc (gaussian-blur! (imread "logo.png")))
to figure out what precisely that object is
and what you can do with it
I found it to work with
(import '[org.opencv.ximgproc Ximgproc])
(let [blurred-float (-> "resources/public/Logo.png"
(imread)
(gaussian-blur! (new-size 17 17) 9 9)
)
edgeDetector (Ximgproc/createStructuredEdgeDetection "resources/model.yml")
edges (* 255 (.detectEdges edgeDetector blurred-float))]
(imwrite edges "edge-raw.jpg" ))
OK - that looks reasonable :D
But I get No matching method detectEdges found taking 1 args for class org.opencv.ximgproc.StructuredEdgeDetection
then (javadoc org.opencv.ximgproc.StructuredEdgeDetection)
and figure out what args that method takes
clojure doesn't try much "clever" stuff about guessing method argument types, you likely need a hint to make it pick the method you need
quick google claims that you need both an input array (what you have now) and an output array
(at least in the C++ version...)
yeah it's true in java too
javadoc should take you here: https://docs.opencv.org/master/javadoc/org/opencv/ximgproc/StructuredEdgeDetection.html#detectEdges(org.opencv.core.Mat,org.opencv.core.Mat)
you can click on the type in the doc
that should mention a constructor or a factory method
yeah, it has a zero arg constructor, that's probably enough(?)
edges (* 255 (.detectEdges edgeDetector blurred-float (. Mat))), gives: Malformed member expression, expecting (. target member ...)
the zero arg constructor for Mat would be (Mat.)
you might want to refresh yourself on the interop syntax doc - it's actually simpler and more consistent than java's own syntax :D https://clojure.org/reference/java_interop
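The main forms from that page, shown with JDK classes so they can be tried directly (my examples, not from the thread):

```clojure
;; constructor: new ArrayList()
(java.util.ArrayList.)
;; instance method: "abc".length()
(.length "abc")             ;; => 3
;; instance field access: p.x
(.-x (java.awt.Point. 1 2)) ;; => 1
;; static method: Math.abs(-3)
(Math/abs -3)               ;; => 3
```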
For some reason, multiplying by 255 doesn't work in clojure:
(let [blurred-float (-> "resources/public/Logo.png"
(imread)
(gaussian-blur! (new-size 17 17) 9 9)
(float)
)
edgeDetector (Ximgproc/createStructuredEdgeDetection "resources/model.yml")
mat (Mat.)
_ (.detectEdges edgeDetector blurred-float mat)
edges (* (float 255) mat)
]
(imwrite edges "edge-raw.jpg" ))
gives: Execution error (ClassCastException) at user/eval63001 (form-init8488930107953136676.clj:373). org.opencv.core.Mat cannot be cast to java.lang.Number
@pshar10 are you trying to do a matrix multiply? mat is a matrix
detectEdges returns nil
in clojure and afaik in java as well, *
doesn't handle matrix multiplies - you need a method for that
the original example he posted does this edges = edgeDetector.detectEdges(blurred_float) * 255.0
OK... that doesn't match the detectEdges method I see
it returns nil, and that would blow up
wait, it works without the cast? it shouldn't
I removed it but then:
(let [blurred-float (-> "resources/public/Logo.png"
(imread)
(gaussian-blur! (new-size 17 17) 9 9)
)
edgeDetector (Ximgproc/createStructuredEdgeDetection "resources/model.yml")
mat (Mat.)
_ (.detectEdges edgeDetector blurred-float mat)
edges (.mul mat 255)
]
(imwrite edges "edge-raw.jpg" )
)
OK, .mul looks right
you can google that type name to see what it wants - this API looks tedious
it's a different kind of matrix it wants, and it's complaining about the src
not the dest (unless I'm misreading that cryptic message)
also .mul
wants a double, so use 255.0
- I don't think clojure's clever enough to do that for you(?)
oh, and it wants a second matrix - it's not going to work with just a number
Still the same error after giving in a second matrix and 255.0
(let [blurred (-> "resources/public/Logo.png"
(imread)
(gaussian-blur! (new-size 17 17) 9 9))
blurred-float (Mat.)
_ (.convertTo blurred blurred-float 5) ;; five is just the f32 type id
edgeDetector (Ximgproc/createStructuredEdgeDetection "resources/model.yml")
mat (Mat.)
_ (.detectEdges edgeDetector blurred-float mat)
edges (Mat.)
_ (.mul mat 255.0 edges)
]
(imwrite edges "edge-raw.jpg" )
)
Execution error (ClassCastException) at user/eval65868 (form-init5691178651358499492.clj:101). java.lang.Long cannot be cast to org.opencv.core.Mat
look at the method signature: the second matrix comes before the double
crashing: yeah, c interop will do that
also, .mul
returns a new matrix, and multiplies two matrixes (and optionally a double as well) - you might want something different, like an imperative loop that scales every value in the array(?)
eg. a loop calling the .get
and .put
methods on each index as I don't see a simple scale operation in that API (I'm no expert on this specific API though, I just know how to read javadoc)
it operates on your matrix, plus another
My understanding is that https://docs.opencv.org/master/javadoc/org/opencv/core/Mat.html#mul(org.opencv.core.Mat,double) (Mat m, double scale) means that (.mul mat 255.0) should work
you are misreading the doc
mul is a method on a Mat object, so the one in the arg list is another Mat
you need two
mul takes the instance object (to the left of the method call in java, to the right of it in clojure), and then a second Mat object as an argument
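To make the receiver-first rule concrete, here's a tiny sketch using a JDK method - the same interop shape applies to Mat.mul or any other instance method:

```clojure
;; Java:    receiver.method(arg1, arg2)
;; Clojure: (.method receiver arg1 arg2)
;; e.g. with a JDK String method (the rule is the same for Mat.mul):
(.replace "abcabc" "a" "z") ; => "zbczbc"
```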
unless there is a version of mul without a Mat in the arglist
Oh I see, (.mul mat edges 255.0) is like mat.mul(edges, 255.0). But why doesn't it work?
are you sure it doesn't work? in your snippet above you had the arguments in a different order
well, I'd expect mul to do a matrix multiply, one of the two matrixes seems to be uninitialized - which gives you either zeroes or garbage
(let [blurred (-> "resources/public/Logo.png"
(imread)
(gaussian-blur! (new-size 17 17) 9 9))
blurred-float (Mat.)
_ (.convertTo blurred blurred-float 5) ;; five is just the f32 type id
edgeDetector (Ximgproc/createStructuredEdgeDetection "resources/model.yml")
mat (Mat.)
_ (.detectEdges edgeDetector blurred-float mat)
edges (Mat.)
_ (.mul mat edges 255.0)
]
(imwrite edges "edge-raw.jpg" )
)
cv::Exception: OpenCV(4.2.0) /Users/niko/origami-land/opencv-build/opencv/modules/core/src/arithm.cpp:669: error: (-209:Sizes of input arguments do not match) The operation is neither 'array op array' (where arrays have the same size and the same number of channels), nor 'array op scalar', nor 'scalar op array' in function 'arithm_op'
so you probably need to provide dimension args to the Mat
constructor to make it match your input
at least that's an intelligible error :D
Here you go: This gives an output for a test input .png image, which is completely black and doesn't have the edges. And the repl closes if I give it a jpg input. No errors. Just closes:
(let [blurred (-> "resources/public/test.jpg"
(imread)
(gaussian-blur! (new-size 17 17) 9 9))
blurred-size (.size blurred)
blurred-float (Mat. blurred-size 5)
_ (.convertTo blurred blurred-float 5) ;; five is just the f32 type id
edgeDetector (Ximgproc/createStructuredEdgeDetection "resources/model.yml")
mat (Mat. blurred-size 5) ;; five is just the f32 type id
_ (.detectEdges edgeDetector blurred-float mat)
edges (Mat. blurred-size 5) ;; five is just the f32 type id
_ (.mul mat edges 255.0)
]
(prn "blurred size" blurred-size)
(imwrite edges "edge-raw1.jpg" )
)
I would expect a multiply by an uninitialized matrix to give zeroes, and I'd expect zero to be black pixels
edges
probably needs a different content - and maybe there's some trick I couldn't find in the docs to just do a normal multiply
eg. edges could be the identity matrix, or the matrix that scales its input by 255
Okay, so edges (* mat 255.0) in let should work because * is overloaded in c++: https://stackoverflow.com/questions/17892840/opencv-multiply-scalar-and-matrix#17894118
so while the C++ implementation underneath that works, the java library wrapper you're using doesn't. hence the existence of mul
etc
there's a Mat#clone function and a Mat#setTo(Scalar) function that should let you create an identity matrix relatively easily
There's a static function Mat::eye for that, but how do I use it? For using Mat I was doing (import '[org.opencv.core Mat])
I guessed
(let [
img (-> "resources/public/Logo.png"
(imread))
size (.size img)
blurred (Mat. size 5)
_ (gaussian-blur img blurred (new-size 17 17) 9 9)
blurred-float (Mat. size 5)
_ (.convertTo blurred blurred-float 5) ;; five is just the f32 type id
edgeDetector (Ximgproc/createStructuredEdgeDetection "resources/model.yml")
mat (Mat. size 5)
_ (.detectEdges edgeDetector blurred-float mat)
identity- (Mat/eye size 5)
edges (.mul mat identity- 255.0)
]
(prn "blurred size" size)
;; (imwrite edges "edge-raw1.jpg" )
)
you're trying to do too much at once, execute each of those pieces of code one at a time and capture their output in a def
yeah - clojure rewards iterative incremental changes
keep your design in mind so you don't make a total mess, but do repeated minimal changes
and just remove everything after the error, add things back one by one and make the individual things work
The error is that eye is a static function in class Mat. I think my syntax for calling static functions is wrong
like printing the value just before it works. mat prints: #object[org.opencv.core.Mat 0x26c3d186 "Mat [ 1024*4096*CV_32FC1, isCont=true, isSubmat=false, nativeObj=0x7ff133f34e90, dataAddr=0x1370aa000 ]"]
ok (. Mat eye size 5) got rid of the error and gave me an output, but it's completely black again
pedantic point: it's a static method, functions are Objects with invoke and applyTo methods implementing Callable and Runnable
(Mat/eye size 5)
should work if it's a static method
(the difference between method and function is less important in languages where you only use one or the other, in Clojure we use both and the differences are important)
Well, anyway, even for smaller images when it isn't crashing, the result isn't completely black. The result, i.e., edges seems to be identity * 255, from doing (.mul mat identity- 255.0)
IMO even just a single reply should go into a thread. Not least because you can never guarantee that it won't start a long discussion.
anyway, to debug this, it's probably a good idea to convert the Mat object to a clojure vector
you can use one of the .get
methods to get an array out from the Mat
object, and the use into
to put an array into a clojure vector
you can also use clojure's get
on arrays directly, but vectors are definitely nicer - they print readably for example
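A quick pure-Clojure illustration of both points - the double-array here is just a stand-in for what Mat.get returns:

```clojure
;; get works positionally on Java arrays, and into pours an array
;; into a vector, which prints readably at the REPL
(def arr (double-array [0.1 0.2 0.3]))

(get arr 1)   ; => 0.2
(into [] arr) ; => [0.1 0.2 0.3]
```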
this returns a double array for one dimension https://docs.opencv.org/master/javadoc/org/opencv/core/Mat.html#get(int%5B%5D)
you could iterate it across the other dimension
hmm - I mean this one since it doesn't anchor nicely:
public double[] get(int[] idx)
actually maybe this one
public double[] get(int row, int col)
For
(let [img (-> "resources/public/test-small.jpg"
(imread))
size (.size img)
height (.height size)
width (.width size)
blurred (Mat. size 5)
_ (gaussian-blur img blurred (new-size 17 17) 9 9)
blurred-float (Mat. size 5)
_ (.convertTo blurred blurred-float 5) ;; five is just the f32 type id
edgeDetector (Ximgproc/createStructuredEdgeDetection "resources/model.yml")
mat (Mat. size 5)
arr (.get mat width height)
_ (.detectEdges edgeDetector blurred-float mat)
identity- (. Mat eye size 5)
edges (.mul mat identity- 255.0)
]
(prn "blurred size" size)
(prn "mat is " mat)
(prn "height is " height)
(prn "width is " width)
(imwrite blurred-float "blurred-float.jpg")
(imwrite mat "mat.jpg")
(imwrite identity- "identity.jpg")
(imwrite edges "edge-raw1.jpg" )
)
I get:
No matching method get found taking 2 args for class org.opencv.core.Mat
try (int width) and (int height) - maybe clojure isn't doing the right auto-coercion
also you might find it easier to create individual objects in the repl and experiment, instead of reloading the entire function
okay
(def img (-> "resources/public/test-small.jpg" (cv/imread)))
(def size (.size img))
(def height (.height size))
(def width (.width size))
(def blurred (Mat. size 5))
(cv/gaussian-blur img blurred (cv/new-size 17 17) 9 9)
(def blurred-float (Mat. size 5))
(.convertTo blurred blurred-float 5) ;; five is just the f32 type id
(def edge-detector (Ximgproc/createStructuredEdgeDetection "resources/model.yml"))
(def mat (Mat. size 5))
(.detectEdges edge-detector blurred-float mat)
(def identity- (. Mat eye size 5))
(def edges (.mul mat identity- 255.0))
(def arr (.get mat (int width) (int height))) ;;gives nil
where as mat gives [ 302*302*CV_32FC1, isCont=true, isSubmat=false, nativeObj=0x7fc9d9eba720, dataAddr=0x12ec1c000 ]
perhaps you want (dec width)
and (dec height)
because I think that it's 0 indexed and you are getting the array of rgb doubles at that index
right, you're back to the hinting (int (dec width)) etc.
I think
OK - you can do this in a loop to get all of these single item vectors of doubles, that would be one approach
or like this? https://stackoverflow.com/questions/26681713/convert-mat-to-array-vector-in-opencv#26685567
Doing (doseq [x (range width) y (range height)] (prn (.get mat (int x) (int y))) ) prints like a lot of these, my guess is 320x320 many: #object["[D" 0x2a0db4b8 "[D@2a0db4b8"]
so maybe something like:
(into []
(for [x (range width)]
(into []
(for [y (range height)]
(into []
(.get mat (int x) (int y)))))))
How to multiply each item with 255, which I ultimately want to do? I tried this:
(into []
(for [x (range width)]
(into []
(for [y (range height)]
(into []
(* 255.0 (get (.get mat (int x) (int y)) 0)))))))
but it says
Don't know how to create ISeq from: java.lang.Double
you might find dragan's clojure numerical algebra book interesting https://aiprobook.com/numerical-linear-algebra-for-programmers/
This works: (into [] (for [x (range width)] (into [] (for [y (range height)] [(* 255.0 (get (.get mat (int x) (int y)) 0))] ))))
I have the following code for it:
(into []
(for [x (range width)]
(into []
(for [y (range height)]
(do
(.put edges (int x) (int y) (* 255.0 (get (.get mat (int x) (int y)) 0)))
[(* 255.0 (get (.get mat (int x) (int y)) 0))]
)
))))
But I get the error:
No matching method put found taking 3 args for class org.opencv.core.Mat
But Mat.put documentation says that this method exists:
put(int row, int col, double... data)
Then what am I doing wrong?
so it seems metadata is properly handled when reading in EDN files:
Clojure 1.10.1
user=> (clojure.edn/read-string "^{:params {:a 1}} {:jack :jill}")
{:jack :jill}
user=> (meta *1)
{:params {:a 1}}
however, i cannot seem to find any reference to metadata in the edn spec (https://github.com/edn-format/edn). is the documentation lacking or is this just a happy little accident?
this response seems to indicate it's not an accident (at least in clojure): https://github.com/edn-format/edn/issues/52#issuecomment-24337627
Suppose I have a vector like so:
[
[[50][60][70][20][0]...]
[[90][56][67][98][78]...]
...]
Which represents a single channel 8-bit image. Is there a simple way to convert this into a jpg or a png?
java2d is in the jdk and has support for stuff like this, but it's quite a bit more complicated iirc
I'm sure simpler Java apis probably exist and that's where I'd look, rather than in Clojure
here's a really simple example I wrote in Java 13 years ago when I was doing a mandelbrot thing https://puredanger.github.io/tech.puredanger.com/2007/10/12/images-java2d/
You can also use JOGL and LWJGL for 3D, and I think they support 3D to 2D projection, which can be faster than Java2D due to leveraging the GPU
you've rapidly crossed my line of knowledge :)
depends on what order you want. Sorted by keys via a comparison function, or insertion order of keys?
Here's an example of sorting by key value https://github.com/dmillett/clash/blob/master/src/clash/tools.clj#L187
I'm trying to read a Mat in open cv, multiply each of the components with 255 and save it as a Mat. I have the following code for it:
(into []
(for [x (range width)]
(into []
(for [y (range height)]
(do
(.put edges (int x) (int y) (* 255.0 (get (.get mat (int x) (int y)) 0)))
[(* 255.0 (get (.get mat (int x) (int y)) 0))]
)
))))
But I get the error:
No matching method put found taking 3 args for class org.opencv.core.Mat
But Mat.put documentation says that this method exists:
put(int row, int col, double... data)
Then what am I doing wrong?
put is a method on a matrix. in java that means mat.put(row, col, double) and in clojure it means (.put mat row col double)
TIP: Don't use for to do imperative things. Prefer doseq or dotimes. Use a reduce if you need to return something
The problem probably is something with "type matching": once in clojure, everything ends up as Object, so Java can't tell whether it should use put(int, int, int) or put(float, float, float)
I want to set edges[x,y] to that long expression (* 255.0 (get (.get mat (int x) (int y)) 0))
https://docs.opencv.org/3.4/javadoc/org/opencv/core/Mat.html#put(int%5B%5D,byte%5B%5D)
Java varargs is really passing an array behind the scenes. When calling from Clojure, you’d have to use double-array
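To make the varargs point concrete, here's a sketch using a JDK varargs method; the Mat.put call would get its double-array built the same way:

```clojure
;; String.format(String, Object...) - the Object... is really Object[],
;; so from Clojure you have to build the array yourself:
(String/format "%s + %s" (into-array Object ["1" "2"])) ; => "1 + 2"

;; for a double... parameter you'd use double-array instead:
(double-array [255.0]) ; a primitive double[] holding one element
```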
@UPEKQ7589 can you please explain how that would work?
(doseq [x (range width)
y (range height)]
(.put edges
(int x) (int y)
(into-array Double [(* 255.0 (get (.get mat (int x) (int y)) 0))])))
If the method you wish to call looks like this in Java put(int row, int col, double... data)
it would look like this in Clojure (.put a-matrix row col (double-array … ))
(doseq [x (range width)
y (range height)
:let [data1 (* 255.0 (get (.get mat (int x) (int y)) 0))]]
(.put edges
(int x) (int y)
(double-array [data1])))
it may be overkill and slow for your use case, but if you’re just trying to get something working, you can use https://github.com/phronmophobic/membrane
(ns membrane.example.squares
(:require [membrane.ui :refer [filled-rectangle]
:as ui]
[membrane.skia :refer [draw-to-image!]
:as skia]))
(def nums (repeatedly 300
(fn []
(repeatedly 500 #(vector (rand-int 255))))))
(defn render-nums [nums]
(vec
(for [[j row] (map-indexed vector nums)
[i col] (map-indexed vector row)
:let [color (-> col
first
(/ 255.0))]]
(ui/translate i j
(filled-rectangle [color color color] 1 1))))
)
(draw-to-image! "vec-image.png" (render-nums nums))
@U2J4FRT2T’s code works, but for some reason it gives a nullpointerexception on bigger images and then crashes the repl, but works fine with smaller images. Why would that be?
@U7RJTCH6J Wow, membrane looks great. Are you just implementing your own layer above each OS native GUI toolkit in C ? Or re-using some existing C/C++ layer for that and just providing a Clojure API above it?
if it’s hard crashing the repl, then maybe it’s an out of memory issue?
maybe? do you have the stack trace?
that’s a segfault. that means the underlying c code is accessing invalid memory
why would the c code have trouble accessing memory for a larger file and not for a smaller one?
it looks like this is happening during edge detection
Actually, these images are about the same size the 320x320 one is 86KB and the larger one is 1024x768 and it's about 93KB
i’m not super familiar with opencv, looks like it’s crashing around https://github.com/opencv/opencv_contrib/blob/35972a1ec4703aac69daecef4c0705110666e5aa/modules/ximgproc/src/structured_edge_detection.cpp#L369
if the wrapper library isn’t validating arguments, then it’s totally possible that not passing the right args will crash the jvm
is there an “output” matrix you’re passing as an arg?
do you have the code that’s calling edge detection?
your output matrix might be the wrong size
Here's the whole code:
(do
(def img (-> "resources/public/test-img.jpg" (cv/imread)))
(def size (.size img))
(def height (.height size))
(def width (.width size))
(def blurred (Mat. size 5))
(cv/gaussian-blur img blurred (cv/new-size 5 5) 0)
(def blurred-float (Mat. size 5))
(.convertTo blurred blurred-float 5) ;; five is just the f32 type id
(def edge-detector (Ximgproc/createStructuredEdgeDetection "resources/model.yml"))
(def mat (Mat. size 5))
(def edges (Mat. size 5))
(.detectEdges edge-detector blurred-float mat)
)
(doseq [x (range width)
y (range height)
:let [data1 (* 255.0 (get (.get mat (int x) (int y)) 0))]]
(.put edges
(int x) (int y)
(double-array [data1]))) ;; crashes here for larger image
(cv/imwrite blurred "blurred.jpg")
(cv/imwrite edges "edge-raw1.jpg")
ah, how did you choose size 5?
I'm trying to replicate this in clojure: https://www.codepasta.com/computer-vision/2019/04/26/background-segmentation-removal-with-opencv-take-2.html
if you're hinting at the filter size going outside the image size then none of the dimensions of either of the images is divisible by 5
does it reliably crash on the same image, or just sometimes?
I was wrong before when I'd said that it only sometimes crashes. It has crashed every time ten times or so.
what’s the java library you’re using?
my guess is that the size is wrong, but i’m not familiar enough with opencv to know for sure
Here's how you can replicate the error: library is http://origamidocs.hellonico.info/#/ Then
(require '[opencv4.core :as cv])
(import
'[org.opencv.core Mat]
'[org.opencv.ximgproc Ximgproc]
)
(do
(def img (-> "resources/public/test-small.jpg" (cv/imread)))
(def size (.size img))
(def height (.height size))
(def width (.width size))
(def blurred (Mat. size 5)) ;; five is just the f32 type id
(cv/gaussian-blur img blurred (cv/new-size 5 5) 0)
(def blurred-float (Mat. size 5)) ;; five is just the f32 type id
(.convertTo blurred blurred-float 5) ;; five is just the f32 type id
(def edge-detector (Ximgproc/createStructuredEdgeDetection "resources/model.yml"))
(def mat (Mat. size 5)) ;; five is just the f32 type id
(def edges (Mat. size 5)) ;; five is just the f32 type id
(.detectEdges edge-detector blurred-float mat)
)
(doseq [x (range width)
y (range height)
:let [data1 (* 255.0 (get (.get mat (int x) (int y)) 0))]]
(.put edges
(int x) (int y)
(double-array [data1])))
(cv/imwrite blurred "blurred.jpg")
(cv/imwrite edges "edge-raw1.jpg")
here are some examples that might help: • https://github.com/haocse/documentscanner-ml/blob/7038c63f8c5427ea63c838454deca2ca006419ed/documentscanner/src/main/java/com/haotran/documentscanner/util/ScanUtils.java#L364 • https://github.com/ricktjwong/edge-detection/blob/ea22b48e73705da9dcd7ee848c8b7280db53ab7f/src/main/java/Main.java#L50 • https://github.com/JieZou1/PanelSeg/blob/b623e42a22b55ee37f1f97405c6f60d8693823eb/PanelSegJ/src/main/java/gov/nih/nlm/lhc/openi/panelseg/PanelSplitEdgeBox.java#L77
instead of 5, what happens when you do (.type img)
that’s what all the examples are doing
and try running the equivalent of img.convertTo(src, CvType.CV_32F, 1.0 / 255.0);
on your image
I think it’s crashing because you’re saying it’s type CV_32FC3 when it’s actually not
The crazy thing is that I got a nullpointerexception, but it worked this time with (/ 1 255). I got the right output
success?
Here's the trace:
Numbers.java: 3845 clojure.lang.Numbers/multiply
REPL: 623 vendo.core/eval67142
REPL: 621 vendo.core/eval67142
Compiler.java: 7177 clojure.lang.Compiler/eval
Compiler.java: 7132 clojure.lang.Compiler/eval
core.clj: 3214 clojure.core/eval
core.clj: 3210 clojure.core/eval
main.clj: 437 clojure.main/repl/read-eval-print/fn
main.clj: 437 clojure.main/repl/read-eval-print
main.clj: 458 clojure.main/repl/fn
main.clj: 458 clojure.main/repl
main.clj: 368 clojure.main/repl
RestFn.java: 137 clojure.lang.RestFn/applyTo
core.clj: 665 clojure.core/apply
core.clj: 660 clojure.core/apply
regrow.clj: 18 refactor-nrepl.ns.slam.hound.regrow/wrap-clojure-repl/fn
RestFn.java: 1523 clojure.lang.RestFn/invoke
interruptible_eval.clj: 79 nrepl.middleware.interruptible-eval/evaluate
interruptible_eval.clj: 55 nrepl.middleware.interruptible-eval/evaluate
interruptible_eval.clj: 142 nrepl.middleware.interruptible-eval/interruptible-eval/fn/fn
AFn.java: 22 clojure.lang.AFn/run
session.clj: 171 nrepl.middleware.session/session-exec/main-loop/fn
session.clj: 170 nrepl.middleware.session/session-exec/main-loop
AFn.java: 22 clojure.lang.AFn/run
Thread.java: 748 java.lang.Thread/run
What I mean is that I get the right output despite a nullpointerexception. I don't know how that's possible. Should the code just stop running on an exception?
usually. I guess it depends on how you’re running it
there’s only one place in your code with multiplication, so I would look there
Now there's this python code:
def filterOutSaltPepperNoise(edgeImg):
# Get rid of salt & pepper noise.
count = 0
lastMedian = edgeImg
median = cv2.medianBlur(edgeImg, 3)
while not np.array_equal(lastMedian, median):
# get those pixels that gets zeroed out
zeroed = np.invert(np.logical_and(median, edgeImg))
edgeImg[zeroed] = 0
count = count + 1
if count > 70:
break
lastMedian = median
median = cv2.medianBlur(edgeImg, 3)
with doseq I can iterate through the mat and do something side-effect related on its individual components, but how to return a clojure vector?
i would probably use loop
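The Python while-loop above is a fixed-point iteration, and that shape translates directly to loop/recur. This is only a sketch of the loop skeleton - step below is a toy stand-in for cv2.medianBlur, not the real blur:

```clojure
;; keep applying step until the value stops changing or we hit max-iters,
;; mirroring the `while not np.array_equal(lastMedian, median)` loop
(defn until-stable [step x max-iters]
  (loop [prev x
         cur  (step x)
         n    1]
    (if (or (= prev cur) (>= n max-iters))
      cur
      (recur cur (step cur) (inc n)))))

;; toy stand-in: repeated integer halving reaches its fixed point, 0
(until-stable #(quot % 2) 100 70) ; => 0
```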
I think your intuition of converting the matrix to a clojure vector of vectors is probably the easiest thing to get back into clojure land
to convert a matrix to a vector, I would probably do somthing like
(vec
(for [i (range width)]
(vec
(for [j (range height)]
(.get mat i j)))))
fixed*
or something like that. coding in slack without paredit is hard
(.get mat i j) itself returns a vector, because the image can have multiple color channels
good catch
you may want to wrap (.get mat i j)
in a vec
as well to make it a clojure vector
(defn ->vector [mat]
(vec
(for [i (range width)]
(vec
(for [j (range height)]
(vec
(let [p (.get mat i j)]
(for [c (range (count p))]
(nth p c)
)
)
)))))
)
I think is fine
would (vec (.get mat 0 0)) work?
you may not need a third level
and you’ll almost certainly want to get width
and height
from the passed in mat
I would also check the javadoc for whatever the type mat
is, there might already be a more helpful method for accomplishing this
I was hoping it implemented a java interface that made it seqable
instead of width and height, you’ll want to use (.cols mat)
and (.rows mat)
or (.width mat)
and (.height mat)
?
not sure if it’s the same or depends on if the matrix is irregular
Is there something that does logical and of two vectors of arbitrary dimensions in clojure?
i’m sure there is, but it’s fairly easy to write
(mapv (fn [channel1 channel2] (some-compare channel1 channel2)) median edge-img)
something like that
this will truncate to the size of the smaller vector
and some-compare
is the logical comparison function you would need to write
everything in numpy works on vectors
the default clojure operations don’t
there are libraries like that, http://incanter.org/
i haven’t used them in a while
which will let you write* code in that style
but otherwise, you have to do maps and reduces
i have to step out, but it seems like you’re at least through the toughest part! you may want to try some clojure exercises that will cover a lot of the tools you’ll need
I tried this, but it didn't work: (defn logical-and [mat1 mat2] (mapv (fn [channel1 channel2] (map (fn [mat1-c mat2-c] (* mat1-c mat2-c)) channel1 channel2)) mat1 mat2) )
well, how do I fix the logical-and?
(defn logical-and [mat1 mat2]
(mapv (fn [channel1 channel2] (map (fn [mat1-c mat2-c] (* mat1-c mat2-c)) channel1 channel2)) mat1 mat2)
)
@pshar10 not sure. But take a look at https://neanderthal.uncomplicate.org/ I think that neanderthal implements many of these operations
Hi, suppose I have an array list created like so (def some-array-list (ArrayList.)), and it itself contains array-lists. I want to sort this some-array-list based on the first elements of the constituent array lists. How can I do that?
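One way to do that, sketched with sort-by (this is my guess at what you're after - the names here are just illustrative):

```clojure
(import 'java.util.ArrayList)

;; an ArrayList of ArrayLists, sorted by each inner list's first element;
;; sort-by returns a seq, so wrap it back in an ArrayList if you need one
(def some-array-list
  (ArrayList. [(ArrayList. [3 :c]) (ArrayList. [1 :a]) (ArrayList. [2 :b])]))

(def sorted-list (ArrayList. (sort-by first some-array-list)))

(mapv first sorted-list) ; => [1 2 3]
```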
@didibus, I didn’t want to sideline spaceman’s thread so I’m replying this separate thread
there’s a platform agnostic that all the ui codes against
then there’s several implementations for the graphical primitives. the main ones right now are skia for desktop, https://skia.org/ and webgl for the web
being able to render the DOM is likely at some point. not sure how likely implementations for cocoa, X, or windows are
not sure if that answered your question
Hum... I guess kind of. So it does sound a bit like you're building your own UI toolkit, though the rendering for different OS seems like it can be handled by skia, which I hadn't heard off before
the main goal is having as much as possible in clojure itself
my background is in mobile games and web and for both of those, you end up styling your own buttons/textboxes/checkboxes/etc yourself
it’s been a long time since I’ve worked on something where someone cared about having “native” UI widgets
I think people care more about the look and feel matching with the rest of their desktop, and just overall looking good. People did not like old Java Swing in my opinion mostly because it was ugly and had weird UX by default compared to native ones
I think the web has shifted that view
Ya, I'd say it did, at least for looking similar to the rest of the desktop, but I think people still expect things to look good and behave intuitively, maybe even more so than before
And most programmers suck at visual design 😛, so they often expect their UI library to auto-style and make everything beautiful by default
indeed.
i’ve been spending most of my time trying to get the fundamentals right from a programming perspective. the plan for making it look good is a GUI builder targeting graphic designers*. I think it’s been a mistake to put programmers in charge of pixels
I’ve made a few prototypes for the GUI builder a la http://worrydream.com/DrawingDynamicVisualizationsTalkAddendum/ with some new ideas, but still haven’t had a chance to make it available yet
Are there any good libraries for refactoring keywords in a Clojure codebase? Seems like a non-trivial task just because of the different form they can take like {:keys […]}
this is easier with namespaced keywords, but unless your keyword is something like :f
or :test
a global search for the symbol of the keyword will catch literal uses and keys destructures (that works for f or test too, but you get too many false positives along with it...)
here's a regex search of all functions loaded via require
user=> (->> (all-ns) (mapcat (comp keys ns-publics)) (keep clojure.repl/source-fn) (filter #(re-find #"foo" %)) count)
6
it doesn't find repl definitions (a limitation of source-fn) but that shouldn't matter for you
if your app is already loaded, it probably runs faster than your editor would for the same search
@noisesmith thanks, in this case the keys aren’t namespaced but I can see how far I can get with find/replace
I was looking at https://github.com/xsc/rewrite-clj too
readable / usable version of the source regex search above:
(defn all-qualified-symbols
[ns]
(->> ns
(ns-publics)
(keys)
;; takes the symbol, and turns it into a namespaced symbol of ns
(map #(symbol (name (.name ns))
(name %)))))
(defn source-search
[re]
(->> (all-ns)
(mapcat all-qualified-symbols)
;; get a "header" and the source, if findable
(map (juxt #(str "\n****** " % " ******")
clojure.repl/source-fn))
(filter #(some->> %
peek
(re-find re)))
;; "flatten" by one level
(into [] cat)
(run! println)))
I want to do a logical and of two vectors: [[[1] [0]] [[1] [1]]] and [[[1] [0]] [[0] [0]]] to get [[[1][0]] [[0][0]]]
I use a piecewise function to help with this kind of thing.
(defn piecewise [f & colls]
(cond
(empty? colls)
()
(= 1 (bounded-count 2 colls))
(seq (first colls))
:else
(for [cells (apply map vector colls)]
(reduce f cells))))
(piecewise bit-and [1 0] [0 1])
@pshar10, maybe just write your own utility? It doesn't seem like it would be terrifically hard. This seems like an edge-case that probably isn't covered by the standard library.
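For the nested-vector case in the question, a minimal recursive sketch (it assumes both inputs have the same shape, and treats 0 as false):

```clojure
;; walk both structures together: recurse through vectors, AND at leaves
(defn logical-and [a b]
  (if (vector? a)
    (mapv logical-and a b)
    (if (or (zero? a) (zero? b)) 0 1)))

(logical-and [[[1] [0]] [[1] [1]]]
             [[[1] [0]] [[0] [0]]])
;; => [[[1] [0]] [[0] [0]]]
```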