#beginners
2019-05-12
tiennguyenkhac170201:05:01

Hi, I'm trying out Clojure and it's completely different from what I'm used to. Is this the correct way to write Clojure? What I'm trying to do is take the first 2 elements of the queue, combine them, and append the result to the end. The way I have to do a double pop, and a pop then peek, felt especially awkward. Usually I would just save the queue to a var and mutate it (but we can't mutate stuff in Clojure?). Thank you.

(defn tournamentTree [queue]
  (if (= (count queue) 1)
    queue
    (recur
     (conj
      (pop (pop queue))
      {:firstSlot (peek queue)
       :secondSlot (peek (pop queue))}))))

(defn -main
  []
  (println
   (seq
    (tournamentTree (conj clojure.lang.PersistentQueue/EMPTY 1 2 3 4)))))

seancorfield01:05:25

@tiennguyenkhac1702 Just to clarify, given [1 2 3 4], you're trying to get [3 4 1 2]?

tiennguyenkhac170201:05:04

({:firstSlot {:firstSlot 1, :secondSlot 2}, :secondSlot {:firstSlot 3, :secondSlot 4}})

tiennguyenkhac170201:05:09

this is my final output

tiennguyenkhac170201:05:43

Which is what I'm trying to get

tiennguyenkhac170201:05:54

Just wondering if I'm using clojure the right way :))

seancorfield01:05:24

Take a look at destructuring for how to break apart sequences. You probably don't really need a queue here.

seancorfield01:05:01

Although it feels like you just want to repeatedly partition into pairs, recursively until you have just one element left.

seancorfield01:05:20

I'd have to give this a bit of thought...

tiennguyenkhac170201:05:29

Yeah, what I'm trying to do is to build a tournament bracket

tiennguyenkhac170201:05:58

But all of these seem rather excessive or odd? (pop (pop queue)) (peek queue) (peek (pop queue))

seancorfield02:05:57

How about something like this

user=> (defn tree [l] (if (= 1 (count l)) l (recur (map #(zipmap [:first :second] %) (partition 2 l)))))
#'user/tree
user=> (tree [1 2 3 4 5 6 7 8])
({:first {:first {:first 1, :second 2}, :second {:first 3, :second 4}}, :second {:first {:first 5, :second 6}, :second {:first 7, :second 8}}})
user=>  

seancorfield02:05:58

Destructuring would help you (let [[a b & more] queue] ...) -- a is the first element, b is the second, and more is the rest of the queue
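As a sketch (not from the thread), the original tournamentTree rewritten with that destructuring, keeping the same append-to-the-end behavior, might look like:

```clojure
;; Hypothetical rewrite of tournamentTree using destructuring instead
;; of pop/peek; concat plays the role of conj-ing onto the queue's end.
(defn tournament-tree [queue]
  (if (= 1 (count queue))
    queue
    (let [[a b & more] queue]
      (recur (concat more [{:firstSlot a :secondSlot b}])))))
```

Called as `(tournament-tree [1 2 3 4])` this should produce the same nested map as the queue version, without needing a PersistentQueue at all.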

seancorfield02:05:45

Not sure what your code would do with, say, six elements. My suggestion ignores the last two elements, but you could use partition-all to solve that.
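For reference, the difference between the two with an odd-length input (a quick REPL check, not from the thread):

```clojure
;; partition drops a trailing incomplete group; partition-all keeps it.
(partition 2 [1 2 3 4 5])     ;; => ((1 2) (3 4))
(partition-all 2 [1 2 3 4 5]) ;; => ((1 2) (3 4) (5))
```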

seancorfield02:05:03

user=> (defn tree [l] (if (= 1 (count l)) l (recur (map #(zipmap [:first :second] %) (partition-all 2 l)))))
#'user/tree
user=> (tree [1 2 3 4 5 6])
({:first {:first {:first 1, :second 2}, :second {:first 3, :second 4}}, :second {:first {:first 5, :second 6}}})
user=>   

tiennguyenkhac170202:05:11

Seems a lot more sensible, I'll have a read into partition and zipmap. Thanks

tiennguyenkhac170202:05:01

And #(zipmap [:first :second] %) is some sort of shorthand for an anonymous function? I clearly have a lot to learn 🙂

tiennguyenkhac170202:05:28

Just understood your code and wow, that recursive zipmap + partition seems so expressive. I think I'm super hooked on Clojure now, thanks

k.i.o02:05:10

@tiennguyenkhac1702 correct

(macroexpand '#(zipmap [:first :second] %))
=> (fn* [x#] (zipmap [:first :second] x#))

dpsutton02:05:59

I don't think you need macroexpand there. If you quote the form you can see it as well: '#(inc %)
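For example (the parameter name is a reader-generated gensym, so the exact symbol varies between reads; the one shown is illustrative):

```clojure
;; Quoting a #(...) literal shows the fn* form the reader produced.
'#(inc %)
;; => (fn* [p1__1234#] (inc p1__1234#))  ; p1__1234# is a reader gensym
```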

k.i.o02:05:37

because it's a reader macro, right?

dpsutton02:05:52

Yes. It happens at read time I think

seancorfield03:05:48

@tiennguyenkhac1702 Clojure is extremely powerful so if you think your solution is complex or repetitive, there's probably some useful core functions that will make it simpler and cleaner -- but it can take a long time to learn enough of them that you internalize the "Clojure way".

seancorfield03:05:29

The main thing is to try to think "holistically" about collections, rather than individually about elements. And that just takes time and practice, practice, practice.

tiennguyenkhac170203:05:42

Thanks @seancorfield, will pretty much have to “un-program” how I used to think about programming now (top-down instead of bottom-up)

seancorfield03:05:24

Yeah, FP in general -- and Clojure in particular -- is a whole new way of thinking 🙂

k.i.o04:05:40

fp and lisp is really a mind-bending experience

tabidots04:05:22

I'm trying to learn some concurrency techniques by building an integer factorization function. I'm totally new to core.async and quite confused. Here is what I'm trying to do: my input is some number n, and I need a loop where, on every iteration, some prime factor pf of n is found, then n is divided by pf and the loop repeats until n is 1. This is easy when only one factoring algorithm is involved.

Now, I have 2 or 3 (depending on the size of n) competing algorithms that can try to find a factor of n. The thing is, they won't all finish at the same time, they might fail (and return nil), and if they do return a factor, that factor may not necessarily be prime. This is as far as I got. I'm sure it's not at all how you are supposed to write concurrent code. I haven't implemented the prime check (since sliding-buffers can't take xfs).

One other thing that I need to do, but don't know how, is to flush the channels on each loop: if one channel keeps a value taken from the n of a previous iteration, alts!! will take it, even though it's no longer useful. Can someone describe the basic outline of how to accomplish this?

seancorfield04:05:11

@tabidots You might need to ask in #core-async but if you think you need to "flush the channels" then I'd say your approach is fundamentally wrong. All values pushed to all channels should be consumed, in general.

tabidots04:05:15

Hmm. So there is no way to “race” a few functions on each iteration and take the earliest new result?

tabidots04:05:19

and discard the rest

tabidots04:05:14

Like a parallel or that short-circuits, I guess is another way to describe what I want

k.i.o04:05:40

you can with (async/take 1 (async/merge ...))
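A minimal sketch of that idea (race-first and the finder fns passed to it are hypothetical names; a finder returning nil simply closes its go channel, so the race yields the first non-nil result, or nil if every finder fails):

```clojure
(require '[clojure.core.async :as async :refer [go <!!]])

;; Run each finder in its own go block, merge the result channels,
;; and take whichever value lands first.
(defn race-first [finder-fns n]
  (let [chans  (mapv (fn [f] (go (f n))) finder-fns)
        merged (async/merge chans)]
    (<!! (async/take 1 merged))))
```

Something like (race-first [pollard-rho trial-division] n) would then return the first factor either (hypothetical) finder produces.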

tabidots04:05:04

Thanks, I’ll give that a try :+1::skin-tone-2:

seancorfield04:05:40

Yeah, but then all the source channels need to be created each time around the loop.

seancorfield04:05:55

And you're just pushing a single value onto each channel, and then taking (essentially) a random one of those three values, right? And throwing the other two away.

tabidots04:05:34

yes, random in the sense that I don’t know with certainty which one will finish first.

seancorfield04:05:59

If you move the let (that creates the channels) inside the loop, I think that will solve your original problem -- since it will create new channels for each iteration -- but it "looks" wrong to me, based on the async code I've seen, to have just a single set of channel operations each time (rather than longer-lived channels that a sequence of values flow through).
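A sketch of that shape (the finder fns are passed in as arguments, since the thread's actual algorithms aren't shown; there is no primality check, and a nil race result simply retries, so this is illustrative only):

```clojure
(require '[clojure.core.async :as async :refer [go]])

;; Fresh channels are created on every iteration, so a leftover value
;; from a previous n can never win the alts!! race.
(defn factorize [find-a find-b n]
  (loop [n n, factors []]
    (if (= 1 n)
      factors
      (let [ca    (go (find-a n))
            cb    (go (find-b n))
            [f _] (async/alts!! [ca cb])]
        (if f
          (recur (quot n f) (conj factors f))
          (recur n factors))))))
```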

tabidots04:05:53

I was thinking that (on both counts)… Is it wasteful to create new channels every iteration just to hold a single value, and then close them? (Or ignore them, I guess?)

seancorfield04:05:59

That's why it feels like a "code smell" to me...

tabidots04:05:18

Or an “approach smell” as you said 😆

tabidots04:05:57

I was trying to take the idea described here: https://pypi.org/project/primefac/

primefac uses by default five threads to take advantage of the multiple cores typically available on modern machines. Each of these threads uses a different algorithm to factor the number

tabidots04:05:50

Although I'm taking a look at the source now, as well as a similar Python script I culled from GitHub, and neither of them does the "parallel short-circuiting or" that I had imagined

dpsutton05:05:51

You want alts I think

dpsutton05:05:51

Each factorization can return onto a channel and you watch from your main loop for the first to return

tabidots05:05:50

I’ve currently got alts!! in there, and it works—but because the channels don’t flush their values on every iteration of the loop, sometimes alts!! takes a no-longer-useful value from a channel. (Because the first to return might be a leftover value from a previous iteration)

tabidots05:05:44

I see from the Python scripts I’m looking at that they hold off on the concurrency until it is really warranted, and then (do the equivalent of) a/merge-ing the values from 5 channels rather than racing the channels against each other. So I guess that’s the way to do it

sfyire13:05:58

Looks like there's discussion about getting clojurescript on code sandbox: https://github.com/codesandbox/codesandbox-client/issues/704

sfyire13:05:11

Someone wants to know about a "GitHub starter project for Reagent / ClojureScript". What's the best thing to recommend: figwheel, figwheel-main (I didn't know there was a new figwheel), or shadow-cljs?

orestis15:05:04

If they are coming from JS and would like to use things from npm, shadow-cljs is much more streamlined.

orestis15:05:18

Not sure about a starter project, there's bound to be one.

kari.marttila17:05:25

Damn, I love Clojure. On Friday I was struggling with how to compose all the different parts in my new Clojure learning project, and now on Sunday it seems that suddenly all the pieces are falling into the right places.

kari.marttila17:05:36

I learned to use a Clojure scratch file, and it is so much more efficient to write Clojure expressions in the scratch file and send them to the REPL with a simple hotkey instead of typing expressions in the REPL - I should have watched this great presentation earlier: https://vimeo.com/223309989 (see 35:00).

kari.marttila17:05:59

And using Mount I suddenly realized that I actually don't need to start my app with two different configurations in two different REPLs (single-node or AWS) - I can switch dynamically from one configuration to another in the same REPL.

kari.marttila17:05:43

And I love all my IntelliJ IDEA / Cursive hotkeys (e.g. delete everything from the cursor to the end of the S-expression, put the S-expression after the cursor onto the kill-ring...). Coding Clojure is so fast when you use good hotkeys and Paredit.

kari.marttila18:05:28

Studying and practice really make a difference. I'm re-implementing the same server I built in Clojure one year ago, and now many things are much clearer. I'm really excited about this new implementation and my new Clojure skills - I feel like a small boy who got a bunch of techno Legos for a Christmas present and is now figuring out what kind of wonderful things to build with them. 🙂 And the Clojure REPL is just such a wonderful development tool - like nothing I have ever seen with any other language I have used.

lennart.buit18:05:46

The wonders of programming ^^!

jarvinenemil18:05:25

I agree @kari.marttila, Clojure is really enjoyable 😄

kari.marttila19:05:20

Yep. And now that I know how to use the REPL more effectively, I feel like my Clojure development suddenly got boosted some 2x 🙂.

kari.marttila19:05:02

Clojure is such a beautiful and productive language. We really should do more to let every developer know and understand what programming can really be at its best. I'm pretty sure that 50% of programmers would turn to Clojure if they could only feel this productivity and what a joy it is to have a real REPL.

kari.marttila19:05:20

I gave a presentation based on my "Five Languages - Five Stories" blog article (https://medium.com/@kari.marttila/five-languages-five-stories-1afd7b0b583f) in my corporation. There were quite a lot of developers gathered. I tried to explain what it is to have a real REPL. One developer said: "I use Scala; there is also a REPL in Scala." I answered that I haven't used the Scala REPL, but if it is anything like the Java shell, you can't talk about the two REPLs in the same breath.

michael.gaare23:05:18

The Scala repl is more of a torture device than a useful part of the workflow, in my experience

seancorfield21:05:27

@kari.marttila Yeah, once you've set up and then internalized a really good REPL workflow, it's hard to imagine working any other way 🙂