This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2017-09-22
Channels
- # beginners (104)
- # bitcoin (1)
- # boot (5)
- # clara (3)
- # cljs-dev (14)
- # cljsjs (5)
- # cljsrn (1)
- # clojure (242)
- # clojure-italy (17)
- # clojure-news (13)
- # clojure-norway (3)
- # clojure-russia (101)
- # clojure-spec (41)
- # clojure-uk (87)
- # clojurescript (38)
- # core-async (38)
- # cursive (6)
- # datomic (11)
- # defnpodcast (3)
- # docs (14)
- # editors (8)
- # events (1)
- # fulcro (7)
- # hoplon (25)
- # leiningen (4)
- # luminus (7)
- # off-topic (25)
- # onyx (1)
- # portkey (14)
- # random (1)
- # re-frame (7)
- # reagent (4)
- # rum (4)
- # schema (8)
- # shadow-cljs (257)
- # spacemacs (10)
- # specter (4)
- # unrepl (3)
- # yada (1)
Is there sqlite middleware for ring?
@jrootham Not sure what a database middleware would even look like for Ring. What would it do, generically?
Wrap a handler such that the database is opened at the beginning and closed at the end.
On second thought, I am doing something weird. I have a pipeline of request -> request functions
At the end it turns into a response
I am trying to figure out how to get them all access to an opened sqlite database
(defn db-middleware
[handler]
(fn [req]
(with-db-connection [db db-spec]
(handler (assoc req :connection db)))))
something like that?
That would open a connection on each request, add :connection to the request map, call the handler, and then close the connection as the response is returned.
That is what I am after, I am trying to follow earlier advice and not reinvent wheels.
But I think you wrote my code for me, may I use that?
That's using clojure.java.jdbc/with-db-connection -- org.clojure/java.jdbc "0.7.1" -- and you'll need the dependency for SQLite as well. org.xerial/sqlite-jdbc "3.20.0" looks like the latest.
How much practice does it take to find things like that quickly?
The db-spec would be {:dbtype "sqlite" :dbname "whatever"}.
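Putting the pieces from this thread together, a minimal sketch (the handler and db file name are illustrative, and it assumes org.clojure/java.jdbc plus the org.xerial/sqlite-jdbc driver are on the classpath):

```clojure
(require '[clojure.java.jdbc :as jdbc])

(def db-spec {:dbtype "sqlite" :dbname "example.db"})

(defn db-middleware [handler]
  (fn [req]
    ;; open a connection per request, expose it on the request map,
    ;; and close it once the handler has produced a response
    (jdbc/with-db-connection [db db-spec]
      (handler (assoc req :connection db)))))

;; A handler can then use the open connection from the request map:
(defn my-handler [req]
  {:status 200
   :body   (jdbc/query (:connection req) ["select 1 as one"])})
```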
https://github.com/clojure/java.jdbc -- it's a Contrib library, and the README links to all of that.
(disclaimer: I maintain that library so I'm more familiar than most 🙂 )
But the readme links to the community docs (http://clojure-docs.org -- under Ecosystem & Tools) and to all the database drivers on Maven Central.
Out of curiosity, which Clojure books have you picked up so far?
Learning everything from online sources
Clojure Cookbook is a good way to jump start a lot of this.
Side note: fn is the Clojure equivalent of lambda?
All the code is available here https://github.com/clojure-cookbook/clojure-cookbook if you want to look at how a bunch of things are often done in Clojure
Yes, fn is for an anonymous function.
Thank you, you have been outstandingly helpful
I'm always happy to help folks be successful with Clojure!
I first picked it up in 2010 -- Amit Rathore (author of Clojure in Action) was offering a one day intro to Clojure workshop near me for $200 on a Saturday. Best $200 I ever spent! Introduced it to work some time later, and I've been doing Clojure in production since 2011. Happiest I've ever been as a developer.
(defn start-async-consumers
"Start num-consumers threads that will consume work
from the in-chan and put the results into the out-chan."
[num-consumers]
(dotimes [_ num-consumers]
(async/thread
(while true
(let [line (async/<!! in-chan)
data (process line)]
(async/>!! out-chan data))))))
Is there a better way to start N workers for the duration of the work? Something like pipeline-async?
not perfect, but simple and extendable
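For what it's worth, core.async's pipeline-blocking covers this worker-pool pattern; a minimal sketch (parallel-inc is a made-up example function, not from the thread above):

```clojure
(require '[clojure.core.async :as async
           :refer [chan to-chan <!! pipeline-blocking]])

;; Three worker threads apply inc to everything on the input channel;
;; pipeline-blocking closes out once the input channel is drained,
;; and preserves input order on the output.
(defn parallel-inc [xs]
  (let [in  (to-chan xs)
        out (chan (count xs))]
    (pipeline-blocking 3 out (map inc) in)
    (<!! (async/into [] out))))

;; (parallel-inc (range 5)) ;; => [1 2 3 4 5]
```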
Cursive has inspections for unused functions
I thought there was something for Cider, but I can’t find it now
hey guys, if I publish a library to Clojars with GPG signature can someone else (collaborator/second owner) also publish it?
@carocad if you are publishing to a group, then you can add other users to the group and then they will also be able to publish to the group
@carocad by group i mean you have an entity id like "foo-group/bar-library"
then you will be able to add users to foo-group
@mccraigmccraig oh that is great to hear !! yes we do. thanks a lot for your help
@kwladyka How about using futures or something like claypoole? I find that sometimes those will fit my needs better than core.async.
Hi there, can someone explain to me why this 👇 doesn't work?
#{{:id (UUID/randomUUID)}
{:id (UUID/randomUUID)}}
while this 👇 works just fine
(hash-set {:id (UUID/randomUUID)}
          {:id (UUID/randomUUID)})
This seems pretty strange to me 🙈
#{} enforces uniqueness at read time, before the UUIDs are generated, since it's a reader construct
@mdib what would (read-string "#{{:id (UUID/randomUUID)} {:id (UUID/randomUUID)}}") return?
Yeah, I didn't see it either 😂. Now it returns the same as the first experiment:
java.lang.IllegalArgumentException: Duplicate key: {:id (UUID/randomUUID)}
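To make the read-time vs run-time distinction concrete, a small sketch: the #{...} literal is built by the reader, which sees two identical unevaluated forms and fails with "Duplicate key" before any UUID is generated, while a runtime construction generates the UUIDs first.

```clojure
(import 'java.util.UUID)

;; Built at runtime: each (UUID/randomUUID) call yields a distinct
;; value, so both maps survive and the set has two elements.
(count (hash-set {:id (UUID/randomUUID)}
                 {:id (UUID/randomUUID)})) ;; => 2
```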
Hello! What would be the cleanest way to take every other item in a sequence? e.g. [1 2 3 4 5] -> [1 3 5]
I thought about zipping with a range a filtering on even (or odd), but I wonder if there is a shorter way
`(map first (partition-all 2 s))`
Thanks @bronsa @noisesmith!
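Side note: clojure.core also has take-nth, which answers this directly:

```clojure
;; take-nth returns every nth element, starting with the first
(take-nth 2 [1 2 3 4 5]) ;; => (1 3 5)
```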
I'm occasionally getting "No namespace: twou.centralpark.async". Cleaning and rebuilding the code fixes it for maybe a few iterations. clojure.tools.namespace.repl/refresh does not complain about any cycles in namespaces.
Both the calling namespace (according to the stack trace) and the twou.centralpark.async namespace require clojure.core.async :as async :refer [some-thing].
is there something similar to Promise.all in clojure ? https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise/all
@eraserhd I experienced that sometimes, too, when I do a lein uberjar and then start a repl. Somehow the class loader gets confused. Like you say, a clean and a restart of the repl helps.
@rnagpal You might be able to get similar behavior out of core.async merge:
(let [c1 (async/chan 2)
c2 (async/chan 2)
c3 (async/merge [c1 c2])]
(async/go
(loop []
(if-let [v (async/<! c3)]
(do (println v)
(recur))
(println "Channels closed"))))
(async/put! c1 1)
(async/put! c2 2)
(async/close! c1)
(async/close! c2))
anyone know why when starting my app with lein run I am getting dozens, maybe hundreds of java processes spinning up?
this seemed to have started ever since iTerm shut down when my app was still running
I’ve reinstalled leiningen and java
and still happens
and there are tons more of those same processes
@bostonaholic maybe I'm missing something, but why would you expect there would be dozens or hundreds of processes?
I wouldn’t expect there to be, but there are
I have no idea why they’re starting
ah, I see. typo
@vinai I wonder if there's a way to troubleshoot this, like looking in the target/ directory?
(I had a problem like this once that had to do with a case-insensitive filesystem and functions - turned into .class files - that had names which only differed by case. But that doesn't seem to be the problem here.)
It would be interesting to know what is going on there and take precautions to prevent it. But I'm too much of a newb and short on time to debug that unfortunately.
@bostonaholic are you using core.async? are you using an http server that uses a thread pool?
using immutant
sometimes a process manager gets confused and shows all your threads as if they were seperate processes
yes, I'd expect immutant to start a large number of threads
I’m looking to “unfurl” some datastructure going deeper into it while maintaining some context from the wrapping maps… anyone some clever ideas how to approach this?
;; Turn this
[{:id 1
:todos [{:id 2
:name "xx"
:people [{:id 4
:name "Martin"}
{:id 5
:name "Amy"}]}]}]
;; Into this
[{:project-id 1
:todo-id 2
:id 4
:name "Martin"}
{:project-id 1
:todo-id 2
:id 5
  :name "Amy"}]
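One way to sketch the "unfurling" with nested for, assuming the shape shown in the sample data (projects contain :todos, todos contain :people, and we flatten to one map per person):

```clojure
(defn unfurl [projects]
  (vec
   (for [{pid :id, todos :todos} projects     ; keep project context
         {tid :id, people :people} todos      ; keep todo context
         person people]                        ; one output map per person
     (assoc person :project-id pid :todo-id tid))))
```

Calling unfurl on the sample input above yields the flattened vector, each person carrying its :project-id and :todo-id context.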
nah, it’s bringing my computer to a halt
this is new, though
‘new’ as in, it just started happening
@bostonaholic when it's doing that, hit Control-\ to make it show all stack traces, then upload it so we can see it
or if you prefer, run jstack with the pid of the process (same output)
even running lein ancient starts up a ton of java processes
@bostonaholic you can execute the c-\ shortcut or jstack to see what it's doing, even when it's just lein running, the stack traces will show what code is actually doing it
let me check
sorry I’m slow. I have to reboot when these processes start up since it brings my computer to a halt. And I have to quickly uninstall leiningen and java before “too many open files in system”
(of course with that many threads it will be a lot of output)
@chris yup, just lein does the same thing
although, let me try outside of my project
it’s OK if I’m outside of my project
let me try from within a barebones project
OK in new project
this looks odd: 501 19553 402 0 10:37AM ?? 0:00.01 /bin/bash /usr/local/bin/lein run .git
maybe not
might have been line cutoff
@noisesmith @chris thanks for the help. a new git checkout seems to have fixed it ¯\(ツ)/¯
spoke too soon 😞
It’ll even happen if I’m not running the app. Everything is fine; ps -ef | grep lein returns no results (other than the grep). I will reinstall java with brew cask install java. After it’s done, still everything is OK. I can grep many times over minutes and it’s gtg. As soon as I cd into the project directory, 💥, those java processes start kicking off. I don’t even have to cd into it. If I have an iTerm tab open to that directory, and just hit ‘enter’, the processes start up.
that's super weird
I feel like I’m taking crazy pills
@kwladyka Look at my merge example above. Another good alternative is pipeline (blocking if you're going to do IO)
this is for very long I/O operations where input comes part by part and output is sent part by part. And the whole process runs over multiple sources.
but there is no transducer, because in each thread i run a function to read a source part by part and i can’t read the next part until i’ve read the current part
and the main problem is i have to wait until all threads are finished to run the next similar operation, because the next one depends on the previous
(let [out-chan (chan num-threads)]
(pipeline-blocking num-threads out-chan (comp (map long-running-fn) (filter (constantly false))) (to-chan (range num-threads)))
(<!! out-chan))
out-chan will not be closed until long-running-fn has run for each of the integers in (range num-threads) with parallelism num-threads, so <!! will block until that has finished
I think you can still use pipeline-blocking since your transducer can run all of those steps composed as a single fn. I would be curious about throughput though. Seems to me like it should be about the same.
but it is not like that, i don’t have long-running-fn. Ok i have to explain it more:
1) first issue is i have multiple sources of data and i have to read from each one part by part. That one is easy.
2) next issue is i have to use the output of 1) to write this data to a new place, part by part
3) when this process is finished i have to do 1) and 2) for another source of data. And that is the issue. Because even when the channels are closed, something can still send output.
(archai/fetch-epoch #(>!! in-elastic %))
and that one does >!! in-elastic on every incoming part of data
and that is fine, i run it in parallel and it works. But i need to know when it finishes and this is what i don’t know how to solve
the easiest way will be to know when all:
(thread
(try
(elastic/archai->push input)
(catch Throwable ex
(l/error ex))))
finish
each call to thread returns a channel, if you can collect them and wait on all of them
but if i will wait on them i will be blocked by limited numbers of go, so i don’t wait
@bfaber first solution based on that, but it doesn’t help. Still i can’t determine when output finished processing
@kwladyka why would you be blocked? waiting on a channel parks, you can wait on all the channels from the thread calls, when there are no more you know you are done
if needed, you can put the channel returned by the thread call onto another channel, to park in another context
@noisesmith i don’t see it, do you have some example?
if the threads have all returned, then you can read from their channels
if you use <! to read the result, this parks and doesn't block the go block
remember that thread returns a channel. You know that the thread is finished by waiting for the channel.
no way to start threads with some names and check if threads with this names exists?
no, but you can put all the channels returned by thread calls onto another channel, perhaps called "pending"
(go-loop []
(let [input (<! in-elastic)]
(when-not (nil? input)
(thread
(try
(elastic/archai->push input)
(catch Throwable ex
(l/error ex))))
(recur))))
so step by step, if i add <!! here with thread it will run only 1 at a time, and if i run multiple workers in a go block i will be blocked by the limited number of go blocks
when every channel you read off of pending has been read from, you know all the threads you started are done
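A minimal sketch of the pattern being described (run-all-and-wait is a made-up helper name): start every thread eagerly, keep the channels that thread returns, then drain them to know when all workers have finished.

```clojure
(require '[clojure.core.async :refer [thread <!!]])

(defn run-all-and-wait
  "Run f on each input in its own thread, then block until all finish.
  Returns the per-thread results in input order."
  [f inputs]
  (let [pending (mapv #(thread (f %)) inputs)] ; eagerly start every thread
    (mapv <!! pending)))                       ; wait on each result channel

;; (run-all-and-wait #(* % %) [1 2 3]) ;; => [1 4 9]
```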
@kwladyka here is what I would do
(defn archai->elastic-refresh []
(let [epochs (archai/generate-epochs epochs-from-now)
in-elastic (chan 100)
finished-chan (chan)]
(pipeline-async
10
in-elastic
(fn [epoch out-chan] (thread (archai/fetch-epoch #(>!! out-chan %)) (close! out-chan)))
(to-chan epochs))
(pipeline-blocking
10
finished-chan
(comp
(map
(fn [input]
(try
(elastic/archai->push input)
(catch Throwable ex
(l/error ex)))))
(filter (constantly false)))
in-elastic)
(<!! finished-chan))
it wouldn’t work in that way, because out-chan has room for only one >!!, but archai/fetch-epoch needs to >!! many times, part by part
the documentation for pipeline-async indicates af could put more than one result onto the channel it has passed in, but my reading of the code is that's not actually possible
@kwladyka in hindsight I think pipeline is a bad fit for the first set of producer threads, as you don't care about the order of the chunks but it guarantees them
I think for that first set of threads I'd manually create a thread for each producer and manually create a channel for each producer and then merge them and use that as the input for the pipeline-blocking call
I think merge covers that use case nicely, @noisesmith. You can merge all your chans into one uber-chan.
hmm how does it work with memory consumption? if for example one <!! takes 100 MB and it is returned by (map), will it be freed after being read from the channel or after the whole map ends? Another problem is i can’t run it that way, because i only know how to get the next part of data after getting the previous part…
I'm assuming you wanted to vary the amount of threads between the input and the output functions. if that's not necessary I'd just make it one big pipeline-blocking xf
@noisesmith is this what you’re asking for with Ctrl-\?
yes - that should have enough info in it to see where things are going wrong
(checking it out now)
the stack that looks suspicious to me there is environ
it's in the middle of trying to read things from your environment variables - if you dump the output again is it still doing that? what happens if you disable environ?
let me check
nope, still happening
Can someone explain any? to me? not-any?, which has been around since the beginning, takes a predicate and a collection and checks that no element in the collection satisfies the predicate... And then 1.9 adds any?, which takes a single argument and just returns true.
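A quick illustration of the two:

```clojure
;; not-any?: predicate + collection, true when no element satisfies it
(not-any? even? [1 3 5]) ;; => true
(not-any? even? [1 2 3]) ;; => false

;; any? (added in 1.9): one argument, always true -- it mainly exists
;; as a spec predicate that matches anything, including nil
(any? nil) ;; => true
(any? 42)  ;; => true
```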
@ghosss https://groups.google.com/forum/#!searchin/clojure/not-any%7Csort:relevance/clojure/f25y6N1OiIo/5Gq70PV1CAAJ
yeah it's not perfectly consistent with the naming of other core functions. lots of things aren't though
not-any? probably should've been called none to match some originally, but it's done now and personally I don't think it's that important
naming is hard. I bet we could eliminate 35% of beginner problems if for and map were renamed to lazy-for and lazy-map
somehow I really doubt that 😛
then people would say "why isn't filter lazy?"
People complain about laziness in Clojure being a stumbling block for new users, yet somehow these sort of things exist in every language. For example:
def create_multipliers():
return [lambda x : i * x for i in range(5)]
for multiplier in create_multipliers():
print multiplier(2)
Prints 8 five times, because the closures all capture the mutable i.
I agree with the general sentiment, but this specific example is because the closure isn't doing what the user might expect. The i is a reference, which is what is getting closed over, not the value of i. Because the list-comp is realized immediately, the reference is always to the last value (4). I don't think this is an issue of laziness as much as python's comprehensions not closing over what is expected (a value instead of a reference). I think an example with generators or generator expressions would be more apt, in part because I see python newbies baffled by those all the time.
I agree, it's not a laziness thing. My point was, every language has those parts that don't make sense unless a user reads the docs
what's the standard way to run a clojure product in production? is it to lein run or make a java jar and run it? Also, my main only calls an infinite go-loop. works fine when i run it from the repl, but lein run and running the uberjar with java -jar both seem to not keep an open process. Wondering if anyone can advise on this.
@matthewdaniel for keeping your main running, you can just call @(promise) at the bottom of your main
running lein in production is possible, but best avoided - it's a build tool
so i added @(promise) at the bottom of my main but it seems like lein uberjar never finishes now
you need to explicitly exit somehow
with core.async running, your system can't really guess your intention - you can bind the promise, then deliver to it when you want to exit
or you can use (System/exit 0)
or you can put a channel read like (<!! exit-chan) at the bottom of -main and then write to exit-chan later...
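A minimal sketch of the exit-channel idea (exit-chan, wait-for-shutdown, and the commented helpers are illustrative names, not from the thread):

```clojure
(require '[clojure.core.async :refer [chan <!! >!!]])

;; a channel the rest of the app can write to when it's time to stop
(def exit-chan (chan))

(defn wait-for-shutdown
  "Call at the bottom of -main: blocks the main thread (keeping the
  process alive) until something writes to exit-chan."
  []
  (<!! exit-chan))

;; In -main:
;;   (start-servers)        ; whatever the app does
;;   (wait-for-shutdown)
;;   (System/exit 0)
;; Elsewhere, to trigger a clean shutdown:
;;   (>!! exit-chan :done)
```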
oh, I meant the -main function, not at the top level of the ns
yeah, I should have been less ambiguous
I think that is working much better. I'm not a java guy, is supervisord usually used to run a long running java process?
that's one option, I've had good luck with jsvc and I made a small shim lib to simplify using it (it's still alpha quality though)
some people prefer using the native daemon running script for their os/distro and have that call java with the right args
this isn't super critical so i'll probably just stick with what i know. Thanks for the help 🙏
(doseq [epoch epochs]
  (<!! (thread (Thread/sleep 3000)
               (+ 1 1)))
  (println "bar" epoch))
How can i run it in parallel? doseq does this one by one. pmap does only a few at a time, but i want to run all of them at once and wait until they finish
@kwladyka create all the threads at once, and then call <!! inside your doseq
but be careful of laziness, so something like: (doseq [c (mapv create-thread epochs)] (<!! c))
mapv is eager so it'll create all the threads at once, then wait for the results one at a time
@kwladyka channels are only really meant for putting and taking, closed is a special case of taking. so trying to use the closed/empty states as a way to convey some extra out of band information to for instance another thread would be a bit of an antipattern
@bfabry @jstew @noisesmith @tbaldridge thx for help, i didn’t solve my main issue today, but i am much closer. But now is time to sleep 🙂