
is there a way to increase a queue size above 1024? could I make it 16,384?


@wei you can pass a buffer object as an argument to a chan, I'd expect you could make your own buffer implementation, but I'd also check why the limit is 1024... see for example this discussion


@wei in fact, the patch submitted on that ticket might be a good starting place if you wanted to implement your own buffer


if you are hitting the 1024 limit you might want to re-examine how you are handling back pressure too


is there a good resource on back pressure?


let’s say I don’t want to drop anything


are you using put!?


basically, for back pressure to work, you can't do things as fire and forget


if you have a loop that spins off go blocks, the loop needs to wait for those go blocks to complete (that would be the simplest way; you can get feedback other ways too)
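A minimal sketch of that simplest approach (channel names and sizes are illustrative, not from the discussion): the producing loop waits on each go block's return channel before launching the next, so it can never outrun the consumers.

```clojure
(require '[clojure.core.async :as a])

(def work-chan (a/chan 10)) ; illustrative buffered channel

;; Instead of fire-and-forget -- (a/go (a/>! work-chan item)) in a bare
;; doseq -- wait for each go block to finish before continuing:
(defn producer [items]
  (doseq [item items]
    ;; a/go returns a channel that closes when its body completes;
    ;; a/<!! blocks this loop until the put has actually gone through.
    (a/<!! (a/go (a/>! work-chan item)))))
```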


@hiredman you actually can use the put! callback + on-caller? to signal backpressure upstream, as long as you're ok with having 1 message queued in the case of a block


not that it's the best thing to do tho, it's best to use >! / <!
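A sketch of the put! callback pattern mentioned above (the producer shape and names are illustrative): each put! schedules the next one from its completion callback, so at most one put is ever sitting on the channel's pending-put queue.

```clojure
(require '[clojure.core.async :as a])

(def c (a/chan 8)) ; illustrative buffered channel

;; Callback-driven producer: the next put! is only issued once the
;; previous one has been accepted, so the 1024 pending-put queue can
;; never grow past 1 for this producer.
(defn produce [items]
  (when-let [[x & more] (seq items)]
    (a/put! c x
            (fn [accepted?]
              ;; accepted? is false only if the channel was closed
              (when accepted?
                (produce more)))
            ;; on-caller? false: even an immediately-accepted put runs
            ;; the callback off the calling thread, avoiding unbounded
            ;; recursion depth when the channel has room.
            false)))

(produce (range 100))
```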


@wei if you're using >! (as you said), then the 1024 limit shouldn't really matter to you, as that applies to non-blocking puts made by put!. >! will park if the buffer on the channel (not the same as the puts queue) is full.


mpenet: I agree it is simpler to use `>!`, `<!`, `<!!`, and `>!!`; a beginner is much more likely to get a system that doesn't have backpressure problems that way. Some people and documentation steer people towards using put!, which I think is much more likely to cause a lack of back pressure, which is why I asked about it


yes, put! is usually a smell


@mpenet usually using put! is preferred to simply wrapping >! in a go just for the sake of parking, as the go block is not free. I've done what you said, namely, used the put! callback to ensure that the put queue does not grow.


also the put queue is per chan; using >! doesn't guarantee anything


```clojure
(def x (a/chan))
(dotimes [i 1025] (a/go (a/>! x i)))
```
would blow up


things are a little different on the clojurescript side of things, but on clojure I would use >!! (the real thread blocking one outside of a go block) over put!


in certain situations, perhaps -- but in an http async request handler which is going to put the result on a channel, >!! is (unnecessary?), as one example


thanks for the resources and tips. will need to review our design


in our particular case the problem ended up being a single user spamming the system. so rate-limiting that user solved the problem for now!


pesky users 😉


we were using >! and getting the 1024 error though


yeah, I guess it was a case similar to the example @mpenet posted above -- but curious, what is the size of the buffer on the channel, if it's buffered?


the size of the buffer is whatever you ask it to be. The pending put! queue is another thing, and it's always 1024
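A quick sketch of that distinction (sizes are arbitrary, chosen to match the 16,384 asked about earlier): the buffer holds as many values as you requested, while the pending-put queue is a separate, fixed 1024-slot structure per channel.

```clojure
(require '[clojure.core.async :as a])

;; The buffer is whatever you ask for:
(def big (a/chan 16384)) ; 16,384 buffered values, no problem

;; With no taker, puts first land in the buffer, then in the
;; pending-put queue, which is hardcoded at 1024 per channel.
(dotimes [i 16384] (a/put! big i)) ; fills the buffer
(dotimes [i 1024]  (a/put! big i)) ; fills the pending-put queue
;; One more put would throw:
;; (a/put! big :boom)
;; => AssertionError: No more than 1024 pending puts are allowed ...
```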


nothing prevents you from creating a chan with a giant backing buffer


yeah, i'm asking, what's the size of his buffer


(chan) is unbuffered


(chan n) is n sized


didn't I just say that? what does that have to do with the size of wei's channel's buffer?


not sure I follow, but the 1024 on the pending put! queue is hardcoded I think


To be clear, I'm not asking you anything about this -- I'm asking @wei, if his channel is buffered, and if so, what is the size of the buffer on his channel -- unless you looked at his code, and you know how large the buffer is, on his channel 😉


it’s unbuffered


oh, well didn't realize you were addressing him directly


@wei any reason you can't buffer it? -- it may not solve your problem in this case, but curious if it may give you some breathing room for cases like this


true, that might have reduced some stress. although in this case it was helpful to see the error so we were alerted to a problem


btw, (go (>! c val)) is also a code smell...that pattern is exactly the same as (put! c val) except slower


good to know, thanks Tim. I was wondering how those differed


they both attempt to use a channel without directly respecting backpressure. So if you're getting that "1024" error, it should be considered a bug, and a suggestion to refactor your code.


(I said "directly" since you can make put! respect backpressure, but it requires the programmer to worry about the mechanics of it)


I’m seeing an error where (<! c) is returning the same value over and over, seemingly without anything additional being enqueued. has anyone else seen this as a failure case? trying to get an isolated repro


other than nil of course, no


Could be a promise-chan, or a chan with a weird xform


My intuition is that after completing a test that includes a go block in CLJS, cljs.core.async.impl.dispatch/running? would be false and cljs.core.async.impl.dispatch/queued? would be false. But when I print out this information after running a test (using an :each fixture), I see the opposite (they are both true). From reading over the source, I find that surprising, since these lines show that as soon as we set running? to true, we set queued? to false


The test I am running is this test (copied over into my project for convenience)


Is this behavior expected?


The reason I ask is that I’m attempting to verify that running tests doesn’t leave unfinished core.async work that might be slowing down my tests


that's internal implementation detail that really shouldn't be messed with in user code


but the idea is this (since I wrote that code):


We used to simply call setTimeout whenever we needed to call a callback "as soon as possible", but this introduced some lag. So that code now works in two phases.


1) queued? says "we have executed setTimeout and it will run sometime in the future when the browser has some spare cycles"


2) running? says "we are currently running that stack of pending callbacks"


that makes sense. And you’re right, we should not depend on the internals. This is just some debugging I’m doing to try to understand our code and tests.


With your explanation above, it seems weird that after a test, both running? and queued? are true though, yes?


because it looks like as soon as we start running, we stop being queued


FWIW, I’m also inspecting the length of the tasks buffer. That may be more useful than looking at running? and queued?. For instance, if I write a simple test that just starts a go block but incorrectly does not use the async macro, I can see that there are tasks left in the buffer.


Nonetheless, I was surprised to see both running? and queued? set to true, but that’s likely because I’m misunderstanding something about the implementation


(my suspicion is that somewhere in our test suite, we do in fact have a test that is using core.async incorrectly, so I’m trying to look into the internals of core.async to determine if any test leaves things in a bad state)


well if the code you are checking the status in is in a go block, then you will always be running?


I believe this should be outside that block (it’s in a fixture). Let me post some code (do you prefer in slack or a gist?)


(thanks for the help, btw!!! much appreciated)


“outside the go block” I mean


gist is fine


it looks like you're calling (done) inside the go


if you did something like (js/setTimeout (fn [] (done)) 1) you probably see what you expect
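A sketch of that shape in a cljs.test async test (the test name, assertion, and `some-chan` are hypothetical stand-ins, not from the gist):

```clojure
(ns example.async-test
  (:require [cljs.test :refer-macros [deftest async is]]
            [cljs.core.async :as a :refer-macros [go]]))

(deftest take-then-finish
  (async done
    (go
      (is (= :expected (a/<! (some-chan))))
      ;; Calling (done) directly here would run the fixture while the
      ;; core.async dispatcher is still mid-run (running? true).
      ;; Deferring it with setTimeout lets the pending callback stack
      ;; drain before the test is signaled as finished.
      (js/setTimeout done 1))))
```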


I will try that. I took that test from (although I see that I left my unnecessary close! call in my example by mistake)


You’re right, that worked! Hm I wonder if I could also fix using take! on the go block, or if I’d run into the same problem


Nope, take! didn’t work, but your setTimeout idea did.


So, I’m guessing the issue is that we call done while we’re still running core.async code, but done signals the test can end, and therefore the fixture is called


the setTimeout works because it just waits for a bit before signaling the test to end, and at that point, core.async isn’t processing work anymore


Thanks for the help! We have another fixture in our tests and I wonder if this behavior isn’t causing problems there. I will check it out!