#clojure
2017-09-22
Jim Rootham01:09:54

Is there sqlite middleware for ring?

seancorfield02:09:45

@jrootham Not sure what a database middleware would even look like for Ring. What would it do, generically?

Jim Rootham02:09:24

Wrap a handler such that the database is opened at the beginning and closed at the end.

Jim Rootham02:09:25

On second thought, I am doing something weird. I have a pipeline of request -> request functions

Jim Rootham02:09:46

At the end it turns into a response

Jim Rootham02:09:38

I am trying to figure out how to get them all access to an opened sqlite database

seancorfield02:09:34

;; uses clojure.java.jdbc (see below): (require '[clojure.java.jdbc :refer [with-db-connection]])
(defn db-middleware
  [handler]
  (fn [req]
    (with-db-connection [db db-spec]
      (handler (assoc req :connection db)))))
something like that?

seancorfield03:09:34

That would open a connection on each request, add :connection to the request map, call the handler, and then close the connection as the response is returned.

Jim Rootham03:09:18

That is what I am after, I am trying to follow earlier advice and not reinvent wheels.

Jim Rootham03:09:46

But I think you wrote my code for me, may I use that?

seancorfield03:09:07

That's using clojure.java.jdbc/with-db-connection -- org.clojure/java.jdbc "0.7.1" and you'll need the dependency for SQLite as well.

seancorfield03:09:43

org.xerial/sqlite-jdbc "3.20.0" looks like the latest.
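
Putting those two coordinates together, the Leiningen dependency vector would look roughly like this (a sketch; versions as of this conversation):

:dependencies [[org.clojure/java.jdbc "0.7.1"]
               [org.xerial/sqlite-jdbc "3.20.0"]]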

Jim Rootham03:09:00

How much practice does it take to find things like that quickly?

seancorfield03:09:09

The db-spec would be {:dbtype "sqlite" :dbname "whatever"}.
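
A minimal end-to-end sketch combining the pieces above; the handler and the "example.db" name are illustrative, not from the thread:

(ns example.core
  (:require [clojure.java.jdbc :as jdbc]))

(def db-spec {:dbtype "sqlite" :dbname "example.db"})

(defn db-middleware
  [handler]
  (fn [req]
    (jdbc/with-db-connection [db db-spec]
      (handler (assoc req :connection db)))))

;; downstream handlers read the open connection off the request map:
(defn handler [req]
  {:status 200
   :headers {"Content-Type" "text/plain"}
   :body (pr-str (jdbc/query (:connection req) ["select 1"]))})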

seancorfield03:09:01

https://github.com/clojure/java.jdbc -- it's a Contrib library, and the README links to all of that.

seancorfield03:09:23

(disclaimer: I maintain that library so I'm more familiar than most 🙂 )

seancorfield03:09:04

But the README links to the community docs (http://clojure-doc.org -- under Ecosystem & Tools) and to all the database drivers on Maven Central.

seancorfield03:09:36

Out of curiosity, which Clojure books have you picked up so far?

Jim Rootham03:09:21

Learning everything from online sources

seancorfield03:09:23

Clojure Cookbook is a good way to jump start a lot of this.

Jim Rootham03:09:52

Side note: fn is the Clojure equivalent of lambda?

seancorfield03:09:34

All the code is available here https://github.com/clojure-cookbook/clojure-cookbook if you want to look at how a bunch of things are often done in Clojure

seancorfield03:09:46

Yes, fn is for an anonymous function.
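
For example:

((fn [x] (* x x)) 3) ;=> 9
(#(* % %) 3)         ;=> 9 -- #() is reader shorthand for the same thing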

Jim Rootham03:09:13

Thank you, you have been outstandingly helpful

seancorfield03:09:46

I'm always happy to help folks be successful with Clojure!

seancorfield03:09:50

I first picked it up in 2010 -- Amit Rathore (author of Clojure in Action) was offering a one day intro to Clojure workshop near me for $200 on a Saturday. Best $200 I ever spent! Introduced it to work some time later, and I've been doing Clojure in production since 2011. Happiest I've ever been as a developer.

binora07:09:07

what monitoring tool do you use to monitor kafka?

schmee07:09:39

We send Kafka's built-in metrics to Graphite and use Grafana for dashboards

kwladyka07:09:39

(defn start-async-consumers
  "Start num-consumers threads that will consume work
  from the in-chan and put the results into the out-chan."
  [num-consumers]
  (dotimes [_ num-consumers]
    (async/thread
      (while true
        (let [line (async/<!! in-chan)
              data (process line)]
          (async/>!! out-chan data))))))
Is there a better way to start N workers for the duration of the work? Something like pipeline-async?
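
One caveat with the snippet above: while true never exits, even after in-chan closes. A variant that shuts the workers down once the input channel is closed and drained, using the same assumed in-chan, out-chan, and process:

(require '[clojure.core.async :as async])

(defn start-async-consumers
  [num-consumers]
  (dotimes [_ num-consumers]
    (async/thread
      (loop []
        ;; <!! returns nil once in-chan is closed and drained
        (when-let [line (async/<!! in-chan)]
          (async/>!! out-chan (process line))
          (recur))))))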

kwladyka07:09:31

like i want to create N consumers that update the DB (no out channel; the output is the DB)

borkdude09:09:34

Is there a way to discover unused functions in a clojure codebase?

delaguardo09:09:46

not perfect, but simple and extendable

danielcompton09:09:47

Cursive has inspections for unused functions

danielcompton09:09:48

I thought there was something for Cider, but I can’t find it now

borkdude16:09:49

Thanks all.

carocad10:09:02

hey guys, if I publish a library to Clojars with GPG signature can someone else (collaborator/second owner) also publish it?

mccraigmccraig11:09:41

@carocad if you are publishing to a group, then you can add other users to the group and then they will also be able to publish to the group

mccraigmccraig11:09:41

@carocad by group i mean you have an entity id like "foo-group/bar-library"

mccraigmccraig11:09:04

then you will be able to add users to foo-group

carocad11:09:31

@mccraigmccraig oh that is great to hear!! yes we do. thanks a lot for your help

carocad11:09:55

I was already considering whether having a GPG key per group was a thing 😕

jstew12:09:31

@kwladyka How about using futures or something like claypoole? I find that sometimes those will fit my needs better than core.async.
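
A minimal claypoole sketch of that idea, assuming the com.climate/claypoole dependency; process and lines stand in for the work from the earlier snippet:

(require '[com.climate.claypoole :as cp])

;; cp/pmap is an eager, pooled pmap; with-shutdown! tears the pool down after
(cp/with-shutdown! [pool (cp/threadpool 4)]
  (doall (cp/pmap pool process lines)))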

kwladyka12:09:30

didn’t try claypoole, so i don’t know 🙂

kwladyka12:09:48

i moved discussion to #core-async so we will see

mdib12:09:34

Hi there, can someone explain to me why this 👇 doesn't work?

#{{:id (UUID/randomUUID)}
  {:id (UUID/randomUUID)}}
while this 👇 works just fine
(hash-set {:id (UUID/randomUUID)}
          {:id (UUID/randomUUID)})
This seems pretty strange to me 🙈

noisesmith12:09:27

#{} enforces uniqueness at read time, before the UUIDs are generated

noisesmith12:09:35

since it's a reader construct

bronsa12:09:15

@mdib what would (read-string "#{{:id (UUID/randomUUID)} {:id (UUID/randommUUID)}}") return?

mdib12:09:23

It returns me

#{{:id (UUID/randomUUID)} {:id (UUID/randommUUID)}}
🤔

bronsa12:09:52

yeah because i made a typo :)

mdib12:09:03

Yeah, I didn't see it either 😂. Now it returns the same as the first experiment:

java.lang.IllegalArgumentException: Duplicate key: {:id (UUID/randomUUID)}

bronsa12:09:51

and isn't that what you'd expect?

hmaurer13:09:16

Hello! What would be the cleanest way to take every other item in a sequence? e.g. [1 2 3 4 5] -> [1 3 5]

mdib13:09:02

`(filter odd? [1 2 3 4 5])`

hmaurer13:09:27

@mdib the sequence was just an example; I want it to work on any arbitrary sequence

hmaurer13:09:03

I thought about zipping with a range and filtering on even (or odd), but I wonder if there is a shorter way

noisesmith13:09:27

`(map first (partition-all 2 s))`

bronsa13:09:01

user=> (take-nth 2 [1 2 3 4 5])
(1 3 5)

eraserhd14:09:07

I'm occasionally getting "No namespace: twou.centralpark.async". Cleaning and rebuilding the code fixes it for maybe a few iterations. clojure.tools.namespace.repl/refresh does not complain about any cycles in namespaces.

eraserhd14:09:09

The source file's namespace declaration looks correct.

eraserhd14:09:26

I think this happened when upgrading to the last alpha of clojure.

eraserhd14:09:22

I'm out of leads, any ideas?

eraserhd14:09:36

Both the calling namespace (according to the stack trace) and the twou.centralpark.async namespace require clojure.core.async :as async :refer [some-thing].

vinai15:09:51

@eraserhd I experienced that sometimes, too, when I do a lein uberjar and then start a repl. Somehow the class loader gets confused. Like you say, a clean and a restart of the repl helps.

jstew15:09:17

@rnagpal You might be able to get similar behavior out of core.async merge:

;; assumes (require '[clojure.core.async :as async])
(let [c1 (async/chan 2)
      c2 (async/chan 2)
      c3 (async/merge [c1 c2])]

  (async/go
    (loop []
      (if-let [v (async/<! c3)]
        (do (println v)
            (recur))
        (println "Channels closed"))))

  (async/put! c1 1)
  (async/put! c2 2)

  (async/close! c1)
  (async/close! c2))

bostonaholic15:09:27

anyone know why when starting my app with lein run I am getting dozens, maybe hundreds of java processes spinning up?

bostonaholic15:09:46

this seems to have started ever since iTerm shut down while my app was still running

bostonaholic15:09:56

I’ve reinstalled leiningen and java

bostonaholic15:09:00

and still happens

bostonaholic15:09:58

and there are tons more of those same processes

rnagpal15:09:21

both snippets are useful

chris16:09:04

@bostonaholic maybe I'm missing something, but why would you expect there would be dozens or hundreds of processes?

bostonaholic16:09:10

I wouldn’t expect there to be, but there are

bostonaholic16:09:22

I have no idea why they’re starting

bostonaholic16:09:05

ah, I see. typo

eraserhd16:09:15

@vinai I wonder if there's a way to troubleshoot this, like looking in the target/ directory?

eraserhd16:09:27

Next time it happens, I'll see what I can figure out.

eraserhd16:09:02

(I had a problem like this once that had to do with a case-insensitive filesystem and functions - turned into .class files - that had names which only differed by case. But that doesn't seem to be the problem here.)

vinai16:09:51

It would be interesting to know what is going on there and take precautions to prevent it. But I'm too much of a newb and short on time to debug that unfortunately.

noisesmith16:09:32

@bostonaholic are you using core.async? are you using an http server that uses a thread pool?

bostonaholic16:09:44

using immutant

noisesmith16:09:46

sometimes a process manager gets confused and shows all your threads as if they were separate processes

noisesmith16:09:56

yes, I'd expect immutant to start a large number of threads

martinklepsch16:09:57

I’m looking to “unfurl” some datastructure, going deeper into it while maintaining some context from the wrapping maps… anyone have some clever ideas on how to approach this?

;; Turn this
  [{:id 1
    :todos [{:id 2
             :name "xx"
             :people [{:id 4
                       :name "Martin"}
                      {:id 5
                       :name "Amy"}]}]}]

  ;; Into this
  [{:project-id 1
    :todo-id 2
    :id 4
    :name "Martin"}
   {:project-id 1
    :todo-id 2
    :id 5
    :name "Martin"}]

bostonaholic16:09:02

nah, it’s bringing my computer to a halt

bostonaholic16:09:18

this is new, though

bostonaholic16:09:31

‘new’ as in, it just started happening

noisesmith16:09:42

@bostonaholic when it's doing that, hit Control-\ to make it show all stack traces, then upload it so we can see it

noisesmith16:09:21

or if you prefer, run jstack with the pid of the process (same output)

bostonaholic16:09:02

even running lein ancient starts up a ton of java processes

chris16:09:43

what if you just run lein

chris16:09:12

any new lein plugins or anything?

noisesmith16:09:15

@bostonaholic you can execute the c-\ shortcut or jstack to see what it's doing, even when it's just lein running, the stack traces will show what code is actually doing it

bostonaholic16:09:56

sorry I’m slow. I have to reboot when these processes start up since it brings my computer to a halt. And I have to quickly uninstall leiningen and java before “too many open files in system”

noisesmith16:09:58

(of course with that many threads it will be a lot of output)

bostonaholic16:09:26

@chris yup, just lein does the same thing

bostonaholic16:09:57

although, let me try outside of my project

bostonaholic16:09:56

it’s OK if I’m outside of my project

bostonaholic16:09:06

let me try from within a barebones project

bostonaholic16:09:08

OK in new project

bostonaholic16:09:16

this looks odd:

501 19553 402 0 10:37AM ?? 0:00.01 /bin/bash /usr/local/bin/lein run .git

bostonaholic16:09:44

might have been line cutoff

bostonaholic17:09:40

@noisesmith @chris thanks for the help. a new git checkout seems to have fixed it ¯\_(ツ)_/¯

chris17:09:42

lmao. computers are very good

bostonaholic17:09:17

spoke too soon 😞

bostonaholic19:09:42

It’ll even happen if I’m not running the app. Everything is fine: ps -ef | grep lein returns no results (other than the grep). I will reinstall java with brew cask install java. After it’s done, still everything is OK. I can grep many times over minutes and it’s gtg. As soon as I cd into the project directory, 💥, those java processes start kicking off. I don’t even have to cd into it. If I have an iTerm tab open to that directory and just hit ‘enter’, the processes start up.

noisesmith19:09:57

that's super weird

bostonaholic19:09:00

I feel like I’m taking crazy pills

kwladyka17:09:36

i want to run N threads and in the main thread i want to wait until they're done, how?

bfabry17:09:09

@kwladyka there are hundreds of ways to do that

jstew17:09:32

@kwladyka Look at my merge example above. Another good alternative is pipeline (blocking if you're going to do IO)

kwladyka17:09:12

this is for very long I/O operations where the input comes part by part and the output is sent part by part. And the whole process runs over multiple sources.

bfabry17:09:40

ya, like jstew I'm a fan of pipeline-blocking for parallelising io

kwladyka17:09:50

but there is no transducer, because in each thread i run a function that reads the source part by part, and i can't read the next part until i've read the current part

kwladyka17:09:01

so i think i can’t use pipeline-blocking here

kwladyka17:09:32

and the main problem is i have to wait until all threads are finished to run the next similar operation, because the next one depends on the previous one

kwladyka17:09:41

so i have to know when all threads finish

kwladyka17:09:53

that is the issue which i don’t know how to solve in this context

bfabry17:09:14

;; assumes (require '[clojure.core.async :refer [chan to-chan pipeline-blocking <!!]])
(let [out-chan (chan num-threads)]
  (pipeline-blocking num-threads
                     out-chan
                     (comp (map long-running-fn) (filter (constantly false)))
                     (to-chan (range num-threads)))
  (<!! out-chan))

bfabry17:09:25

out-chan will not be closed until a long-running-fn has run for each of the integers in (range num-threads) with parallelism num-threads, so <!! will block until that has finished

jstew17:09:33

I think you can still use pipeline-blocking since your transducer can run all of those steps composed as a single fn. I would be curious about throughput though. Seems to me like it should be about the same.

kwladyka17:09:44

but it is not like that, i don't have a long-running-fn. Ok, i have to explain it more: 1) the first issue is i have multiple sources of data and i have to read from each one part by part. That one is easy. 2) the next issue is i have to use the output of 1) to write this data to a new place, part by part. 3) when this process finishes i have to do 1) and 2) again, but for another source of data. And that is the issue. Because even when the channels are closed, something can still be sending output.

kwladyka17:09:39

PS the source of data is very big, so i also have to put it on the channel part by part

kwladyka17:09:10

ech, i will paste my code here to make it easier to understand

kwladyka17:09:11

but maybe i am blind and there's a different way to solve it

kwladyka17:09:51

(archai/fetch-epoch #(>!! in-elastic %)) and that one does >!! to in-elastic on every incoming part of data

bfabry17:09:24

which are the long running functions here?

bfabry17:09:50

archai->push and fetch-epoch?

kwladyka17:09:07

but what's important

kwladyka17:09:24

(archai/fetch-epoch #(>!! in-elastic %)) does `>!!` multiple times

kwladyka17:09:12

just inside that function i download the source part by part (no other way)

kwladyka17:09:18

and that is fine, i run it in parallel and it works. But i need to know when it finishes and that is what i don't know how to solve

kwladyka17:09:35

i can do (atom) and inc/dec… but it looks so bad

kwladyka17:09:13

the easiest way would be to know when all of these:

(thread
  (try
    (elastic/archai->push input)
    (catch Throwable ex
      (l/error ex))))
have finished

noisesmith17:09:41

each call to thread returns a channel; you can collect them and wait on all of them

bfabry17:09:15

sounds like you want pipeline-async

kwladyka17:09:18

but if i wait on them i will be blocked by the limited number of go threads, so i don't wait

kwladyka17:09:57

@bfabry my first solution was based on that, but it doesn't help. Still i can't determine when the output has finished processing

kwladyka17:09:21

pipeline-async stops blocking as soon as the input channel is closed

kwladyka17:09:29

the output channel can still be processing

bfabry17:09:56

yeah you need to have a finish channel for the thing consuming the output of that

noisesmith17:09:57

@kwladyka why would you be blocked? waiting on a channel parks, you can wait on all the channels from the thread calls, when there are no more you know you are done

noisesmith17:09:37

if needed, you can put the channel returned by the thread call onto another channel, to park in another context

kwladyka17:09:40

@noisesmith i don’t see it, do you have some example?

kwladyka17:09:08

i mean i don’t see how it helps

kwladyka17:09:23

and how to avoid being blocked by the limit on go threads

noisesmith17:09:29

if the threads have all returned, then you can read from their channels

noisesmith17:09:53

if you use <! to read the result, this parks and doesn't block the go block

jstew17:09:59

remember that thread returns a channel. You know that the thread is finished by waiting for the channel.

kwladyka17:09:20

i know, but still i don’t see how it helps

kwladyka17:09:34

in that context

kwladyka17:09:13

is there no way to start threads with names and then check whether threads with those names exist?

noisesmith17:09:35

no, but you can put all the channels returned by thread calls onto another channel, perhaps called "pending"

kwladyka17:09:42

(go-loop []
  (let [input (<! in-elastic)]
    (when-not (nil? input)
      (thread
        (try
          (elastic/archai->push input)
          (catch Throwable ex
            (l/error ex))))
      (recur))))
so step by step: if i add <!! around the thread here, it will run only 1 at a time; and if i run multiple workers in go blocks, i will be blocked by the limited number of go threads

noisesmith17:09:54

when every channel you read off of pending has been read from, you know all the threads you started are done
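
A minimal sketch of that "pending" pattern; work! is a hypothetical stand-in for the real per-chunk function, and in-elastic is the input channel from above:

(require '[clojure.core.async :refer [chan thread go-loop close! <! >! <!!]])

(let [pending (chan 100)] ; buffer sized generously for the sketch
  ;; producer: start a thread per input, park each result channel on pending
  (go-loop []
    (if-let [input (<! in-elastic)]
      (do (>! pending (thread (work! input)))
          (recur))
      (close! pending)))
  ;; waiter: drain pending; once it is exhausted, every thread has finished
  (<!! (go-loop []
         (when-let [t (<! pending)]
           (<! t)
           (recur)))))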

kwladyka17:09:18

ok let me think about that once again

bfabry17:09:30

@kwladyka here is what I would do

;; assumes (require '[clojure.core.async :refer [chan thread to-chan close! >!! <!! pipeline-async pipeline-blocking]])
(defn archai->elastic-refresh []
  (let [epochs (archai/generate-epochs epochs-from-now)
        in-elastic (chan 100)
        finished-chan (chan)]

    (pipeline-async
      10
      in-elastic
      (fn [epoch out-chan] (thread (archai/fetch-epoch #(>!! out-chan %)) (close! out-chan)))
      (to-chan epochs))

    (pipeline-blocking
      10
      finished-chan
      (comp
        (map
          (fn [input]
            (try
              (elastic/archai->push input)
              (catch Throwable ex
                (l/error ex)))))
        (filter (constantly false)))
      in-elastic)

    (<!! finished-chan)))

kwladyka17:09:43

it wouldn’t work that way, because out-chan has room for only one >!!, but (archai/fetch-epoch …) has to >!! many times, part by part

kwladyka17:09:08

but… i know how to solve that, so now i'm thinking about how to combine it with your example

jstew17:09:25

Good deal, glad I could help a little.

bfabry18:09:25

out-chan does not only have a place for 1

bfabry18:09:02

oh wait it does

bfabry18:09:04

that's interesting

bfabry18:09:44

the documentation for pipeline-async indicates af could put more than one result onto the channel it's passed, but my reading of the code is that's not actually possible

bfabry18:09:30

@kwladyka so long as you close! the channel as the last operation in af it works

kwladyka18:09:16

yeah, still trying to figure out how to write the whole thing

bfabry19:09:43

@kwladyka in hindsight I think pipeline is a bad fit for the first set of producer threads, as you don't care about the order of the chunks but it guarantees ordering

bfabry19:09:47

I think for that first set of threads I'd manually create a thread for each producer and manually create a channel for each producer, then merge them and use that as the input for the pipeline-blocking call
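
That could look roughly like this; sources, fetch-source, and push! are hypothetical stand-ins for the real producers and consumer:

(require '[clojure.core.async :as async
           :refer [chan thread close! >!! <!! pipeline-blocking]])

(let [producer-chans (doall
                       (for [source sources]
                         (let [c (chan 100)]
                           (thread
                             (fetch-source source #(>!! c %)) ; put each chunk
                             (close! c))
                           c)))
      in-elastic (async/merge producer-chans)
      finished (chan)]
  (pipeline-blocking 10 finished
                     (comp (map push!) (filter (constantly false)))
                     in-elastic)
  ;; closes (returning nil) once every chunk from every source is pushed
  (<!! finished))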

jstew17:09:50

I think merge covers that use case nicely, @noisesmith. You can merge all your chans into one uber-chan.

kwladyka17:09:18

how would it help here?

jstew17:09:30

If you're starting n workers: (merge (map #(worker-fn..) (range n)))

jstew17:09:06

then later on <!! from the chan that merge returns.
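
Spelled out; worker-fn is hypothetical, and each thread call returns a channel that closes when its work is done:

(let [done (async/merge (mapv (fn [_] (async/thread (worker-fn))) (range n)))]
  ;; the merged channel closes only after every worker's channel has closed
  (loop []
    (when-some [_ (async/<!! done)]
      (recur))))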

kwladyka17:09:39

hmm, how does it work regarding memory consumption? if for example one <!! takes 100 MB and it is returned by (map), will it be freed after the read from the channel or after the whole map ends? Another problem is i can't run it that way, because i only know how to get the next part of the data after getting the previous part…

bfabry17:09:52

10 being the number of threads you want

bfabry17:09:22

I'm assuming you wanted to vary the number of threads between the input and the output functions. if that's not necessary I'd just make it one big pipeline-blocking xf

bostonaholic18:09:20

@noisesmith is this what you’re asking for with Ctrl-\?

noisesmith18:09:30

yes - that should have enough info in it to see where things are going wrong

noisesmith18:09:44

(checking it out now)

noisesmith18:09:06

the stack that looks suspicious to me there is environ

noisesmith18:09:40

it's in the middle of trying to read things from your environment variables - if you dump the output again is it still doing that? what happens if you disable environ?

bostonaholic18:09:10

nope, still happening

ghosss18:09:26

Can someone explain any? to me? not-any?, which has been around since the beginning, takes a predicate and a collection and checks that no element in the collection satisfies the predicate... And then 1.9 adds any? which takes a single argument and just returns true.

bfabry18:09:20

@ghosss it was added to make defining specs that accept any value easy

ghosss18:09:34

this feels PHPesque in how not cohesive the naming is

schmee18:09:40

grab a bowl of popcorn and have at it

bfabry18:09:04

yeah it's not perfectly consistent with the naming of other core functions. lots of things aren't though

bfabry19:09:20

not-any? probably should've been called none to match some originally but it's done now and personally I don't think it's that important

ghosss19:09:54

I guess... thanks

noisesmith19:09:08

naming is hard. I bet we could eliminate 35% of beginner problems if for and map were renamed to lazy-for and lazy-map

tbaldridge19:09:27

somehow I really doubt that 😛

tbaldridge19:09:38

then people would say "why isn't filter lazy?"

bfabry19:09:44

I would assume lazy-map creates a map where each value is a thunk 😛

bfabry19:09:56

just to spite you

tbaldridge19:09:47

People complain about laziness in Clojure being a stumbling block for new users, yet somehow these sorts of things exist in every language. For example:

def create_multipliers():
    return [lambda x : i * x for i in range(5)]

for multiplier in create_multipliers():
    print multiplier(2)
Prints 8 five times, because closures are mutable.

bja19:09:00

I agree with the general sentiment, but this specific example is because the closure isn't doing what the user might expect. The i is a reference which is what is getting closed over, not the value of i. Because the list-comp is realized immediately, the reference is always to the last value (4). I don't think this is an issue of laziness as much as python's comprehensions not closing over what is expected (a value instead of a reference). I think an example with generators or generator expressions would be more apt, in part because I see python newbies baffled by those all the time.

tbaldridge20:09:07

I agree, it's not a laziness thing. My point was, every language has those parts that don't make sense unless a user reads the docs
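
For contrast, the equivalent Clojure closes over a fresh binding of i on each iteration, so it behaves the way most users expect:

(defn create-multipliers []
  (for [i (range 5)]
    (fn [x] (* i x))))

(map #(% 2) (create-multipliers))
;; => (0 2 4 6 8)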

MegaMatt20:09:37

what's the standard way to run a clojure product in production? is it to lein run or make a java jar and run it? Also, my main only calls an infinite go-loop. works fine when i run it from the repl but lein run and java -jar on the uberjar both seem to not keep an open process. Wondering if anyone can advise on this.

noisesmith20:09:50

@matthewdaniel for keeping your main running, you can just call @(promise) at the bottom of your main

noisesmith20:09:26

running lein in production is possible, but best avoided - it's a build tool

MegaMatt20:09:36

so i added @(promise) at the bottom of my main but it seems like lein uberjar never finishes now

noisesmith20:09:43

you need to explicitly exit somehow

MegaMatt20:09:09

like ctrl+c or do you mean something else?

noisesmith20:09:21

with core.async running, your system can't really guess your intention - you can bind the promise, then deliver to it when you want to exit

noisesmith20:09:29

or you can use (System/exit 0)

ghadi20:09:51

Are you putting it in your main function or main ns?

noisesmith20:09:20

or you can put a channel read like (<!! exit-chan) at the bottom of -main and then write to exit-chan later...

noisesmith20:09:31

oh, I meant the -main function, not at the top level of the ns

ghadi20:09:36

You want them in your main function ^

noisesmith20:09:48

yeah, I should have been less ambiguous
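
Putting the exit-channel version together; a sketch, with a stand-in tick loop for the app's real work:

(ns example.main
  (:require [clojure.core.async :refer [chan go-loop timeout <! <!! >!!]]))

(def exit-chan (chan))

(defn -main [& args]
  (go-loop []
    (<! (timeout 1000))
    (println "tick")
    (recur))
  ;; parks the main thread, so lein run / java -jar stay up
  (<!! exit-chan)
  (System/exit 0))

;; elsewhere, to shut down cleanly: (>!! exit-chan :done)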

MegaMatt20:09:31

I think that is working much better. I'm not a java guy, is supervisord usually used to run a long running java process?

noisesmith20:09:14

that's one option, I've had good luck with jsvc and I made a small shim lib to simplify using it (it's still alpha quality though)

noisesmith20:09:55

some people prefer using the native daemon running script for their os/distro and have that call java with the right args

MegaMatt20:09:54

this isn't super critical so i'll probably just stick with what i know. Thanks for the help 🙏

kwladyka21:09:59

(doseq [epoch epochs]
  (<!! (thread (Thread/sleep 3000)
               (+ 1 1)))
  (println "bar" epoch))
How can i run this in parallel? doseq does it one by one. pmap does only a few at a time, but i want to run all of them at once and wait until they finish

tbaldridge21:09:11

@kwladyka create all the threads at once, and then call <!! inside your doseq

tbaldridge21:09:06

but be careful of laziness, so something like: (doseq [c (mapv create-thread epochs)] (<!! c))

tbaldridge21:09:50

mapv is eager so it'll create all the threads at once, then wait for the results one at a time
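
Applied to the snippet above (same thread and <!!; the work and its println now run inside each thread):

(doseq [c (mapv (fn [epoch]
                  (thread
                    (Thread/sleep 3000)
                    (println "bar" epoch)))
                epochs)]
  (<!! c))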

kwladyka21:09:24

thx, it inspired me to do something

kwladyka22:09:12

How to wait until channel (async) is closed and empty?

kwladyka22:09:25

hmm ok i think i have some idea, but not sure 🙂

bfabry22:09:47

@kwladyka channels are only really meant for putting and taking, closed is a special case of taking. so trying to use the closed/empty states as a way to convey some extra out of band information to for instance another thread would be a bit of an antipattern

kwladyka22:09:39

@bfabry @jstew @noisesmith @tbaldridge thx for the help, i didn't solve my main issue today, but i am much closer. But now it's time to sleep 🙂