This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2016-09-28
Channels
- # arachne (2)
- # aws (5)
- # aws-lambda (5)
- # beginners (4)
- # boot (25)
- # cljs-dev (270)
- # cljsjs (1)
- # cljsrn (72)
- # clojars (5)
- # clojure (201)
- # clojure-belgium (5)
- # clojure-brasil (4)
- # clojure-italy (2)
- # clojure-korea (2)
- # clojure-russia (24)
- # clojure-spec (24)
- # clojure-uk (22)
- # clojurebridge (1)
- # clojurescript (125)
- # cloverage (3)
- # cursive (41)
- # datomic (37)
- # dirac (4)
- # emacs (2)
- # hoplon (421)
- # lein-figwheel (1)
- # leiningen (5)
- # luminus (2)
- # mount (1)
- # off-topic (18)
- # om (44)
- # om-next (4)
- # onyx (44)
- # pedestal (3)
- # proton (9)
- # re-frame (21)
- # reagent (21)
- # ring-swagger (12)
- # specter (9)
- # sql (2)
- # untangled (62)
- # vim (16)
why is nothing printed when I do this?:
(let [a (timeout 1000)
      b (chan)
      pipe (async/pipeline 1 a identity b)]
  (go-loop []
    (println (<! b))
    (recur))
  (go
    (>! a "hello")
    (>! a "world")
    (>! a "pipelines")))
@idiomancy you have a bug in your code
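One likely issue is that the channel arguments are reversed: core.async's pipeline signature is (pipeline n to xf from), so the snippet puts values on the `to` channel (which is also a timeout channel) and reads from the `from` channel. The xf argument must also be a transducer, not a plain function. A working sketch of the intent, with channel names of my own choosing:

```clojure
(require '[clojure.core.async :as async :refer [chan go go-loop <! >!]])

;; pipeline takes from `from` and puts onto `to`; (map identity) is a
;; transducer, unlike bare identity
(let [from (chan)
      to   (chan)]
  (async/pipeline 1 to (map identity) from)
  (go-loop []
    (when-some [v (<! to)]
      (println v)
      (recur)))
  (go
    (>! from "hello")
    (>! from "world")
    (>! from "pipelines")
    (async/close! from)))
```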
If so, it doesn't seem possible to mass update records using yesql (i.e. passing in a vector of records into a yesql function and having it execute the query on each item in the vector). Is there something I'm missing?
I'm seeing. I have a relatively large project and the conversion will take quite some time. It looks worth it though. Thanks for the suggestion. I'm going to evaluate it! 🙂
I just migrated a big project to it, was harmless with a few terminal commands and glue to sniff
Also, if you just need to insert multiple, you can just fall back to jdbc
which works well except for some possible type conversion issues.
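The jdbc fallback mentioned above can be sketched with clojure.java.jdbc's insert-multi! (the db-spec and the :users table here are placeholders for illustration):

```clojure
(require '[clojure.java.jdbc :as jdbc])

;; placeholder connection details
(def db-spec {:dbtype "postgresql" :dbname "mydb"})

;; when every map has the same keys, insert-multi! issues a single
;; batched INSERT rather than one statement per record
(jdbc/insert-multi! db-spec :users
                    [{:name "a" :email "a@example.com"}
                     {:name "b" :email "b@example.com"}])
```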
so, defprotocol doesn't really work well with optional arguments. Is using multimethods with class the best (clojure-y) solution to this?
defprotocol supports multi-arity methods:
(defprotocol Foo
  (method [this] [this x] [this x y]))
Yeah, I know about multi-arity, but the optional arguments can vary a lot, so I would need a lot of possible arities, certainly if I want to cover all possibilities. Unless, of course, I take only 2 arities and make the second argument a map that can contain all options.
yeah, but I guess you can't really do it with multimethods/functions either?
there's defnk (https://github.com/plumatic/plumbing#bring-on-defnk) but it still requires you to pass a map into a function
right! So, passing a map is the way to go, still using protocols. Thx.
yeah, I think it all boils down to the fact that clojure doesn't have a built-in way to pass named arguments
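The options-map approach agreed on above can be sketched like this (Greeter and the :greeting key are hypothetical names, not from the conversation):

```clojure
;; two arities: the one-argument form delegates to the options form
;; with an empty map, so callers can omit options entirely
(defprotocol Greeter
  (greet [this name] [this name opts]))

(defrecord PlainGreeter []
  Greeter
  (greet [this name] (greet this name {}))
  (greet [this name {:keys [greeting] :or {greeting "Hello"}}]
    (str greeting ", " name "!")))

(greet (->PlainGreeter) "world")                  ;=> "Hello, world!"
(greet (->PlainGreeter) "world" {:greeting "Hi"}) ;=> "Hi, world!"
```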
Can I get an OutputStream from a transit writer instance on the JVM? Is there an interface for that?
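On the transit question: transit-clj's writer is constructed over an OutputStream that the caller supplies, so the usual pattern is to keep your own reference to the stream rather than extracting it from the writer. A sketch:

```clojure
(require '[cognitect.transit :as transit])
(import '[java.io ByteArrayOutputStream])

;; keep your own handle on the stream; the writer doesn't expose it
(let [out (ByteArrayOutputStream.)
      w   (transit/writer out :json)]
  (transit/write w {:a 1})
  (.toString out))   ; the serialized transit JSON
```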
So i've just spent literally 2 days creating an algorithm. I relied on a global Clojure Atom as I want to use the data captured like a database. This is the main engine of the code
(defn all-combinations [state]
  (let [{:keys [outside-number inside-number]} @constants
        last-combo (vec (last state))
        index (count last-combo)
        last-num (last last-combo)]
    (if (< last-num (+ (- outside-number inside-number) index))
      (do
        (swap! global-atom conj
               (inc-at-index (dec index) last-combo))
        (println @global-atom)
        (all-combinations state))
      (do
        (println "Time to step back")
        (swap! global-atom conj (inc-reset last-combo))
        (println @global-atom)
        (all-combinations)))))
I wasn't quite sure how to turn this into a loop, so I just called all-combinations at the end of each branch of the if. I have the code successfully generating all the data I want in an atom, and then I hit a nil and the whole thing blows up. How do I safely exit and continue doing transformations on the data I have generated? Apologies for the messiness and the random printlns. I'm SOOO close, I just can't figure out how to safely exit the execution.
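One way to get the safe exit asked about is to restructure the recursion as loop/recur with an explicit base case, accumulating locally and returning the data instead of recursing forever over a global atom. A minimal sketch of that shape (collect-until is a made-up helper, not from the original code):

```clojure
;; carry the accumulated results as a loop binding and return them at
;; the base case, rather than blowing up when the input runs out
(defn collect-until [step done? init]
  (loop [x   init
         acc []]
    (if (done? x)
      acc                            ; safe exit: hand back the data
      (recur (step x) (conj acc x)))))

(collect-until inc #(> % 3) 0) ;=> [0 1 2 3]
```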
I don’t know what your editor is but to load core.async in the repl you need to load the namespace first
So nooby. I just fixed it all with a BOOL atom.
nvm it appears nightcode's instarepl doesn't work with third party libraries right now
@bcbradley organizer is nice! It’s called a “topological sort” if you want to look up further discussion in the literature
which may or may not be the better behavior, depending on how you look at it. But having it throw an exception except when the collection is nil seems inconsistent.
@hans, definitely an oddity
I would have expected a NullPointerException
It's only a bug if it doesn't conform to explicitly stated behavior, and the behavior for nil, which isn't a sequence, vector or string, is not defined.
(also it's probably a behavior that programs have come to rely on so hard to change at this point)
playing devil's advocate 🙂
if it cannot be fixed because "too much code depends on the behavior", then how will bugs ever be fixed?
s/bug/surprising behavior/
but that’s just it, in most languages with lots of users, old “bugs” or underdefined behavior don’t ever get fixed (not in a consistent way anyway) 🙂
my takeaway is to use get instead of nth, as I don't like handling out-of-bounds conditions as exceptions
Which is, well, yes, a reasonable choice. There could also be a second part to this explanation: ", for historic reasons".
@hans, I added a note here: http://clojuredocs.org/clojure.core/nth
@pesterhazy i like my bugs to be signalled as soon as possible. by using get, i'd just paper over the problem.
user=> (nth nil 0)
nil
user=> (get nil 0)
nil
user=> (count nil)
0
user=> (take 5 nil)
()
user=> (first nil)
nil
user=> (rest nil)
()
nil seems to be viewed as an empty collection when you use it with a function that takes an ISeq. Furthermore:
user=> (map #(identity %) nil)
()
If there is a need to enforce that your symbol is not nil, a check is required before use. I personally find it useful not to have an error thrown but nil returned if my supposed-to-be ISeq is actually nil.
I think amending the docstring would be a good change, @hans
@kokos the difference between nth and the other sequence functions is that nth has it in the contract that bounds checking is performed.
@kokos so it is my belief that making nil behave specially is at least surprising. (nth [] 0) does throw an exception, so why does (nth '() 0) not do it?
(nth '() 0) does throw
nil is an empty sequence. (seq []) returns nil
@stuartsierra that doesn't explain this behavior though, as (nth '() 0) doesn't match (nth nil)
well here is my organizer function (renamed to topological-sort now that I know what it is)
although they only differ at arity 2. If you pass the optional third argument, they work similarly (as I would expect anyway).
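The optional third (not-found) arities behave consistently across nil, lists, and vectors:

```clojure
;; with a not-found value, nth and get agree for every collection type
(nth nil 0 :missing)  ;=> :missing
(nth '() 0 :missing)  ;=> :missing
(nth [] 0 :missing)   ;=> :missing
(get nil 0 :missing)  ;=> :missing
```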
What's an elegant way of tapping a manifold stream, preferably keeping the original stream usage intact?
@yonatanel if you manifold.stream/connect stream A to stream B (A -> B), B will receive every message that appears in A. However, if nothing is consuming B and it's not buffered (or goes over buffer size), backpressure will propagate upstream to A. To avoid that you can specify a :timeout parameter to connect, which will sever the connection if a put! from A into B takes too long.
@dm3 connect will remove from the original stream (A), which is a problem for me. I just want to "tap" and print debug messages while the stream behaves normally. I can connect two new streams A->B and A->C, with B becoming the new source and C just for debugging, but for that I have to change the original usage of A, which now needs to handle a split source and sink.
@yonatanel hm, this would only be a problem if you tap A while it's already running but nothing else is connected, so B becomes the only consumer
@dm3 This is interesting. If I use connect everywhere and then use it also for tapping, the original code remains as is
yes, if B is not the only consumer of A, the message is propagated to every consumer and B is effectively "tapping"
so you need to ensure that you have your main stream topology constructed by the time you tap
It's a bit nuanced though. Sometimes I use consume, so I need to check that it works with that too. And if I wanted to take! I couldn't
the only thing that won't receive a copy is take! - it will compete with the connected consumers
if you want to take!, you could create an intermediate stream and connect it to A, then take! from the intermediate stream
i have a question about pipeline in clojure.core.async: if you make a pipeline out of an input channel and an output channel, does the pipeline survive as long as the input channel survives?
In other words, if the pipeline falls out of scope but was connected to a channel that is still in scope, will it be garbage collected?
@yonatanel you can make use of manifold.bus for things like this, but you'd need to incorporate it into the design explicitly
which is how tapping works in core.async - you first need to create a mult, a kind of a bus
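The core.async mult/tap pattern just mentioned looks roughly like this (channel names are mine):

```clojure
(require '[clojure.core.async :as async :refer [chan mult tap go-loop <!]])

(let [src  (chan)
      m    (mult src)   ; the "bus" over the source channel
      main (chan)
      dbg  (chan)]
  (tap m main)          ; primary consumer
  (tap m dbg)           ; debugging tap; both channels get every message
  (go-loop []
    (when-some [v (<! dbg)]
      (println "debug:" v)
      (recur))))
```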
(registry/add-module! :ping
  :regex #"..."
  :on-message (fn []))
what's the best way to format code like this?
Is anyone having trouble with Heroku today?
looks like it is running very slowly
@bcbradley I’m not clear on the internal details but I’ve used pipelines before and never retained the values returned by the pipeline* invocations, just the input chan
@josh_tackett deploys or your runtime?
runtime @codefinger
app is running fast locally, but slow on heroku
like 3 second response local
then 30 sec on heroku
and I am on fastest tier of heroku
@josh_tackett you should open a ticket at https://help.heroku.com
@codefinger I’ve never had a ticket work haha
there are no issues right now. but it certainly seems not right https://status.heroku.com
@jfntn so if a function takes an input channel as an argument, internally makes a bunch of other channels, and then hooks them all together in a chain with pipelines, but only returns the output channel at the end of the pipeline, the pipelines won't be kept alive as long as the output channel is?
@josh_tackett oh yea? i’m super sorry to hear that. I work at heroku and any of the advanced JVM/Clojure questions usually end up with me
let me know if there’s anything I can do to help
@codefinger Then you are just the person I need to talk to!
@bcbradley in my experience the pipeline will work as long as the input channel doesn’t close, not sure what happens if you lose ref of the pipeline’s input channel though
so what are some causes why it may run fast on local machine and slow on server?
moving to a back channel
is the scope "pipeline owns input and output channels" or is it "input owns pipeline owns output"
My guess is the former since it takes from input and puts on output, but you’d need to look at the source to make sure
that was the first thing i did, but the source is a little too involved for me to parse
@bcbradley not really, if you look at the private fn that powers all three pipeline variants, you’ll see that the last two go-loops respectively <! from and >! to, so I’m pretty sure the pipeline will keep going as long as from or to don’t close
what happens if you make a pipeline out of two chans and keep the chans alive but let the pipeline fall out of scope?
That’ll work, the go-loops in pipeline* are not going anywhere until they stop recur-ing, and that’d only happen if in or out closes
because the garbage collector won't instantly grab the pipeline when it falls out of scope
so basically, regardless of scope, pipelines are immortal as long as the chans you gave them are alive
@bcbradley would like someone else to confirm but I’d say yes
hello all, I'm trying to figure out how to get elements from the java Enumeration interface, which has only hasMoreElements and nextElement, into a clojure collection. Is that possible?
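clojure.core has enumeration-seq for exactly this:

```clojure
(import '[java.util Vector])

;; enumeration-seq wraps a java.util.Enumeration as a seq
(let [e (.elements (Vector. [1 2 3]))]
  (vec (enumeration-seq e)))
;;=> [1 2 3]
```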
@bcbradley you should think of a go block as spawning a new thread, in which case it makes sense that it doesn't get gc'd unless that thread completes
(doesn't actually spawn a new thread, puts that go blocks work into a queue for a thread pool but whatever)
i'd prefer if the documentation said something along the lines of "pipelines are immortal as long as both channels are alive"
I don't think pipelines are really the point here, go blocks are immortal until they return
joking aside here is what i mean: the "immortalness" of pipelines is not in the interface or documentation. Technically, it is an implementation detail. I don't think it should be, I think this is an important (and potentially confusing!) effect that needs to be documented
imagine someone picks up clojure core async and looks at a function like "pipeline" and assumes it behaves like any other function. Imagine this person makes a couple of new chans in the parameter list for pipeline, and assumes that the chans given to it will be garbage collected when the pipeline itself has no more references pointing to it-- in other words normal scope rules. And this person would witness the behavior expected. Now imagine instead that the chans were created somewhere else in the program, had plenty of references to them, and are in fact being referred to in the argument list for the pipeline. The person would expect that the chans wouldn't be going anywhere and that the transducer would exist only for as long as it is in scope. What actually happens is the transducer is totally immortal as long as the chans survive. That isn't normal scoping rules.
oh ok yeah you've got a point, pipeline has a similar caveat to go, it will continue until the source channel is exhausted. personally I think that's kind of obvious (it's spawning asynchronous workers) but I guess it could be in the docs
well, it's normal for any asynchronous programming surely? anything that spawns a new thread in any language causes a bunch of new independent references to the objects in that thread
i think what's going on here is the pipeline isn't really a function; it's more like a method that belongs to a thread pool, and the arguments you provide to this "method" are copied into the thread pool (through go blocks in the implementation), and that is what is producing this "odor"
I've occasionally seen people distinguish between pure functions and impure functions as functions and methods. if that's what you mean then yes pipeline (and most things in an async library) are methods, as in they're impure functions
i would prefer to instead have something like (pipeline core.async/pool ...), where pool is a hard reference to "the" pool
so basically if you want to "update" the pool you'd have to rebind it with another def
i'm sure you could probably make this cleaner than constantly using def over and over, and I'm sure you could avoid needless copies by using one of clojure's many persistent data structures to represent the pool
For anyone using the JDBC, consider this:
user> (import '[java.sql Timestamp])
java.sql.Timestamp
user> (def time-test (t/now))
#'next.channels/time-test
user> time-test
#object[org.joda.time.DateTime 0x273e8ee3 "2016-09-28T19:41:55.415Z"]
user> (Timestamp. (c/to-long time-test))
#inst "2016-09-28T19:41:55.415000000-00:00" ;; Same as with (c/to-sql-time)
When inserted into a postgresql database using jdbc/insert!, it seems the database records time with the local time offset for a TIMESTAMP WITHOUT TIME ZONE data type. Is there a jdbc setting that can easily be used, or does there need to be a JVM-level change to keep any mutation from occurring?
I disagree; as an asynchronous library, and as a function that makes clear that it executes work in the background, I think it's clear that normal scoping rules won't apply. that's just the nature
@tom When dealing with JDBC and timestamps, you need to ensure everything is in the same timezone — and the general recommendation is set your database, your server, and your application to all be in UTC.
This comes up repeatedly with JDBC with all databases and with all programming languages unfortunately.
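One common mitigation (an option, not the only fix) is to pin the JVM's default timezone to UTC before any JDBC connections are created:

```clojure
(import '[java.util TimeZone])

;; force the JVM default timezone to UTC; equivalently, start the
;; JVM with -Duser.timezone=UTC
(TimeZone/setDefault (TimeZone/getTimeZone "UTC"))
```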
@seancorfield Roger that
At World Singles, we have all our production servers set to run in UTC with NTP time sync. We set our (MySQL) databases to run in UTC (a separate setting from the server level!). And our applications run in UTC (because all our servers are set to UTC). It’s a giant pain in the ass. Especially since our data center is East Coast and every time we get a new server provisioned, the IT folks always deliver it set to Eastern time and we have to get it reset!
Then you have to figure out end user timezone and do translation on every input / output.
I haven't worked on a project this datetime-intensive before, one that relies on dates so pervasively; I just keep running into corner cases and odd behaviors. Setting it all to UTC is helping. Datetimes are the least fun thing I've worked on this year.
Short of redirecting stderr, probably not. I believe those come from low-level libraries that clj-http uses. We see them a lot too with certain webservices (and just ignore them).
@bcbradley only at the top-level (defmulti et al create top-level vars)
i have a build process on jenkins that runs my tests and generates a docker container with the uberjar for deployment..