#beginners
2019-12-27
ackerleytng00:12:41

Still working on AoC day 23. When my terminating condition is met, if I just return the value without closing the channels, it returns fine. If I try to close the channels with doseq, the function finishes executing (based on printlns) but waits for something and doesn't quit. Does anyone have any tips for debugging in core.async?

noisesmith00:12:25

if you use clojure's internal threadpools you need to run shutdown-agents to get timely shutdown at exit

noisesmith00:12:36

don't run that until you are ready to shut down the vm though

noisesmith00:12:03

otherwise the vm doesn't exit until the threadpools reclaim any cached threads
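A minimal sketch of what noisesmith describes (illustrative, not code from the thread): futures and agents run on Clojure's cached thread pools, which keep the JVM alive for up to a minute after `-main` returns unless `shutdown-agents` is called.

```clojure
;; Hypothetical -main showing the pattern: do the work, then release
;; Clojure's agent/future thread pools so the JVM exits promptly.
(defn -main [& args]
  (println @(future (+ 1 2)))  ; futures run on the agent send-off pool
  (shutdown-agents))           ; shut the pools down only when fully done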

ackerleytng00:12:56

Oops I meant quit as in function exiting. I'm running a let expression on the repl in emacs, that let expression doesn't finish running.

noisesmith00:12:36

then something happens after your last println and the function is still running

ackerleytng00:12:53

I added more printlns and it seems like all the channels closed, but the function is still running after the last println

noisesmith00:12:56

with jstack (comes with the jdk) you can get the stack traces of all running code in the vm

noisesmith00:12:09

outside emacs you can use Control-\ to get the same result

noisesmith00:12:16

I have no idea how to get that info via emacs

ackerleytng00:12:59

(let [from-router (vec (repeatedly 50 #(a/chan 10)))
        to-router (a/chan 100)]

    ;; Boot all computers
    (doseq [[addr c] (zipmap (range) from-router)]
      (intcode-computer addr c to-router))

    ;; Do routing
    (a/<!!
     (a/go-loop []
       (let [[addr x y] (a/<! to-router)]
         (cond
           (= addr 255)
           (do
             (println "received value at 255")
             (doseq [c from-router] (a/close! c))
             (println "closed from-router")
             (a/close! to-router)
             (println "closed to-router")
             y
             ;; function continues running after the last println
             )

           :else
           (do
             (println "routed" [x y] "to" addr)
             (let [c (nth from-router addr)]
               (a/>! c x)
               (a/>! c y))
             (recur)))))))

ackerleytng00:12:08

thanks, let me look up jstack

ackerleytng00:12:23

Control - \ in what context?

noisesmith00:12:41

also, adding doseq at the end of your function means it will return nil (not sure if that could confuse things for you)

noisesmith00:12:01

when running clojure in a terminal, Control-\ makes java dump all running stack traces

noisesmith00:12:08

so you know which code is running

noisesmith00:12:38

of course emacs isn't a terminal, and has its own way of interpreting Control-\, and won't pass that to the jvm

ackerleytng00:12:09

There's a tiny y at the end of the function!

noisesmith00:12:03

sure, that means it's returning y from the cond block

noisesmith00:12:32

y, without parens to invoke some action, can't make anything hang

noisesmith01:12:28

and this should make your let block return y

noisesmith01:12:52

err, make it return (<!! y)

noisesmith01:12:30

so that might be your issue - y not having any data ready

ackerleytng01:12:57

intcode-computer is actually a go-loop that reads from the channels in from-router if the state machine requires input and writes to to-router

ackerleytng01:12:23

In this case y is the third element of the vector, right? Does destructuring work after <!?

noisesmith01:12:09

yes, destructuring isn't affected by the action that fetches the data it uses, only the data fetched

noisesmith01:12:18

the problem here has to be that nothing wrote to y

noisesmith01:12:23

so <!! doesn't return

ackerleytng01:12:30

I have a <!! outside the go-loop, in this case

noisesmith01:12:37

right, that's what's blocking

noisesmith01:12:40

try taking it out

ackerleytng01:12:29

I'll try it in a bit

ackerleytng01:12:54

Removing the doseq lets the function exit nicely though

noisesmith01:12:25

perhaps somehow closing those channels prevents y from blocking providing a return value

noisesmith01:12:01

I'd need to read your code a lot more closely (and probably see all the async code) to really know the answer to that though

ackerleytng01:12:23

Is destructuring lazy? Maybe it tries to read only when y needs to be output?

noisesmith01:12:15

no, destructuring is just a macro that expands to a bunch of let bindings

ackerleytng01:12:36

Oh but reading from a closed channel doesn't block unless there's nothing in the channel

noisesmith01:12:56

reading from a closed channel doesn't block - you get buffered contents, or nil

noisesmith01:12:14

if y isn't closed, but you prevented it from receiving data, reading from it will block
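The closed-channel semantics noisesmith describes can be shown in a few lines (illustrative sketch, assuming core.async is on the classpath):

```clojure
(require '[clojure.core.async :as a])

;; close! stops new puts, but buffered values can still be taken;
;; once the buffer drains, takes return nil immediately, never blocking.
(def c (a/chan 2))
(a/>!! c :buffered)
(a/close! c)

(a/<!! c)  ;=> :buffered  (buffered content survives close!)
(a/<!! c)  ;=> nil        (closed and empty: nil, no blocking)
```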

ackerleytng01:12:06

oh yup thanks

ackerleytng01:12:51

without the <!! outside the go-loop, it appears to exit immediately but actually it's running in the background - it just returns a channel

ackerleytng01:12:14

which is the right behavior i guess
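That observed behavior in miniature (a hypothetical example, not the AoC code): a go-loop returns its channel immediately while the body runs in the background, and `<!!` blocks the calling thread until the loop's final value arrives on that channel.

```clojure
(require '[clojure.core.async :as a])

;; go-loop returns a channel right away; the loop runs in the background.
(def result-ch
  (a/go-loop [n 0]
    (if (< n 3)
      (recur (inc n))
      :done)))           ; the last value becomes the channel's single result

(a/<!! result-ch)        ;=> :done (blocks until the go-loop finishes)
```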

Alper Cugun09:12:09

In the loop docs there are a bunch of examples but is it documented anywhere what the binding syntax is? Like here you can use commas? (loop [n (bigint 5), accumulator 1]

Alper Cugun09:12:38

I’m especially interested in usage of :as there but can’t find anything about it.

Alper Cugun20:12:13

Cool. I’ve asked for those to be linked up in the relevant places. https://github.com/zk/clojuredocs/issues/207

Bobbi Towers09:12:49

commas are always whitespace

Bobbi Towers09:12:57

loop uses the same binding syntax as let

Alper Cugun09:12:59

Because I followed the trail to the binding doc but that seems to be something else again.

noisesmith19:12:07

it's literally the same syntax and rules as let, and mostly the same as function args - this is intentional
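A small example of that shared binding syntax, including the `:as` usage Alper asked about (my own illustration, not from the thread):

```clojure
;; let and loop use the same binding vector, including sequential
;; destructuring and :as to keep a name for the whole value.
(let [[x y :as pair] [1 2]]
  [x y pair])                      ;=> [1 2 [1 2]]

(loop [[head & tail :as remaining] [1 2 3]
       total 0]
  (if (seq remaining)
    (recur tail (+ total head))
    total))                        ;=> 6
```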

hindol09:12:54

Also, comma is less common. I guess with time you will also start seeing comma as unnecessary. I have nothing against commas, I just don't see it much in day to day code.

hindol11:12:49

Hi all, what are some good system/service design resources in Clojure?

hindol15:12:39

Thanks for the book suggestion. My office provides a subscription to O'Reilly. Will check there.

👍 4
seancorfield17:12:05

A caveat on Clojure Applied: Alex has said that if he was writing the book today, it would use records a lot less. So bear in mind when reading it that you should probably use plain old hash maps in many places where the book encourages records.

hindol18:12:23

Hi Sean, do you also know what changed between the time of writing the book and now?

seancorfield18:12:24

Best practices in Clojure evolve over time 🙂

seancorfield18:12:38

That book was written 4-5 years ago. Even with Clojure's stability as a language, that's still a long time in terms of best practices.

seancorfield18:12:35

I think Alex said one of the main differences in "best practices" is around Spec but I can't find his exact comment right now.

seancorfield18:12:55

April 25th, 2019, in #clojure-spec "Slack: alexmiller: fwiw, I would probably de-emphasize records in a 2nd ed of Clojure Applied" -- still looking for reasoning but that was in response to a comment about modeling the domain using records instead of maps.

hindol18:12:49

Thanks and if I may ask one more question. Is there anything like the AOSA book for Clojure? It need not be a book. Maybe articles, case studies. http://aosabook.org/en/index.html

seancorfield18:12:09

Since that's an architecture book, I'd expect quite a bit of it to apply to Clojure (although I suspect a heavy OOP slant in that book? I've never read it, nor even heard about it before now)

seancorfield18:12:22

Maybe search for "domain-driven design clojure" and see what turns up -- since Clojure is about focusing on the data in your domain.

👍 4
seancorfield18:12:41

(and you may ask as many questions as you want! 🙂 )

hindol19:12:28

I have found this book: https://leanpub.com/building-a-system-in-clojure. Alas, it's not free.

seancorfield19:12:59

It started out as a series of blog posts https://matthiasnehlsen.com/blog/2014/09/24/Building-Systems-in-Clojure-1/ and those are free. Also note that the book is only half complete and last updated three and a half years ago so...

seancorfield19:12:16

Given that it uses top-level defs with side-effecting code (reading a config file, building a Component system), I would be very cautious about treating it as any sort of "best practice"...

hindol20:12:26

Thanks for the input. Appreciate it.

Gulli15:12:34

Why is it faster accessing values in a record than a map?

hindol15:12:00

A map is a map. A record is closer to a Java class and the getters are actually offsets into the array-like data store of the class.

Gulli15:12:09

ahh ok, Thanks for the answer. I was trying to locate this in the source code

hindol15:12:32

Hey, but I am not a full 100% sure. Let me dig up more info.

hindol15:12:39

Okay, let me rephrase that. defrecord creates an actual Java class under the hood.
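A sketch of what "an actual Java class" means here (my own illustration with a hypothetical Account record): the declared keys become real fields on the generated class, so lookup of a declared key reads a field rather than hashing into a map structure.

```clojure
(defrecord Account [id balance])

(def r (->Account "a-1" 100))
(:id r)      ;=> "a-1" (declared keys resolve to field reads)
(.id r)      ; the same field, reachable via plain Java interop
(map? r)     ;=> true  (records still implement the map interface)
```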

seancorfield17:12:58

But still you want to stick with hash maps in nearly all cases. Records are great when you a) have a small fixed set of keys that you know will always be present and b) more importantly, you want fast dispatch by type, i.e., polymorphism. So don't think of records as "fast maps" -- there are a bunch of trade-offs around the choice.

marreman17:12:35

(map #(Integer/parseInt %) ["1" "2" "3"])
evaluates to (1 2 3) but
(-> ["1" "2" "3"] (map #(Integer/parseInt %)))
throws
Error printing return value (IllegalArgumentException) at clojure.lang.RT/seqFrom (RT.java:553).
Don't know how to create ISeq from: aoc.core$eval1810$fn__1811
Why is this? :thinking_face:

Chris O’Donnell17:12:01

In the second case your arguments are in the wrong order.

Chris O’Donnell17:12:35

You probably want to use ->> instead of ->

dpsutton17:12:17

to be explicit, with the thread first, you end up with (map coll function) rather than (map function coll). The error is saying it doesn't know how to create a seq from a function.

dpsutton17:12:14

since your function is where the collection normally goes, map is trying to get it into a seq so it can map over it and has no idea how to turn #(Integer/parseInt %) into a seq
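The difference in one place (my own summary of the point above): the two thread macros differ only in which argument slot they fill.

```clojure
;; (->  coll (map f)) expands to (map coll f) - wrong slot for map, throws
;; (->> coll (map f)) expands to (map f coll) - what sequence fns expect
(->> ["1" "2" "3"]
     (map #(Integer/parseInt %)))   ;=> (1 2 3)
```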

marreman18:12:06

Oh, I see! Thread-first vs Thread-last. Thank you!

tvalerio17:12:07

I have defined this vector:

(def accounts (ref
  [{:uuid_account "745286b0-24d3-4b17-ab24-d1265e9fb8d1" :identification_id "33333333333" :name "Account1" :transactions [{:amount 1000.00M :created_at "2019-12-27"}]}
   {:uuid_account "234de110-ec07-4568-884c-8aad330c24eb" :identification_id "44444444444" :name "Account2" :transactions [{:amount 1500.00M :created_at "2019-12-26"}]}
   {:uuid_account "e1255330-0f63-42cd-b7c8-acde1915f885" :identification_id "55555555555" :name "Account3" :transactions [{:amount 2000.00M :created_at "2019-12-25"}]}]))
I want to include a new transaction inside the third account. What would be the best way to do this? Using clojure.walk perhaps?

hiredman18:12:46

Don't use a vector, use a map

tvalerio19:12:33

sorry @U0NCTKEV8 but I didn't understand very well. My ref is constantly changing because users can include new accounts and transactions inside the accounts. Even so, could I use something like your example?

hiredman19:12:04

I forget the exact syntax because I never use refs but with a map instead of a vector for accounts you would do something like (dosync (alter accounts update-in [account-id :transactions] conj new-tx)) to add a new transaction

hiredman19:12:21

Also, I dunno what the rest of your code is doing, but if you only have one mutable reference you can just use an atom; refs are only useful when you need to coordinate changes to multiple mutable references at once, which most people don't end up doing
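The atom version hiredman describes might look like this, with hypothetical data keyed by account id (not the exact code from the thread):

```clojure
;; One mutable reference: an atom holding a map of account-id -> account.
(def accounts
  (atom {"acc-1" {:name "Account1" :transactions []}}))

(defn add-transaction! [account-id tx]
  ;; swap! replaces the dosync/alter ceremony needed for refs
  (swap! accounts update-in [account-id :transactions] conj tx))

(add-transaction! "acc-1" {:amount 1000.00M :created_at "2019-12-27"})
;; @accounts now holds the new transaction under "acc-1"
```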

hiredman19:12:03

And what I said about normalization: you may not think of it that way, but what you are creating is an in-memory database, so all the database techniques and ideas (indexing, normalization, etc.) apply

👍 4
hiredman19:12:03

In your case you have a vector, which is equivalent to an auto-incrementing key, and that is the only index

hiredman19:12:34

So you can only look things up quickly based on the position in the vector

hiredman19:12:22

But the shape of your data suggests you need to look things up by account id, which a map will let you do

hiredman19:12:49

Normalization in database terms is basically the process of pulling apart and flattening your data model until all of it can be easily indexed for efficient retrieval

hiredman19:12:03

Like, what if you just have a transaction id and want to find the account for it

hiredman19:12:46

With your model you need to scan all transactions in all accounts to find it (full table scan in DB land)

hiredman19:12:01

In a more normalized schema transactions wouldn't be part of accounts, but would be their own thing with their own index on their ids

hiredman19:12:01

Then to make looking up the transactions for an account fast, each tx is also indexed by the account it belongs to

tvalerio19:12:31

I see… so I could actually have something like this so it would be easier to look into the data I want?

(def accounts (ref {"745286b0-24d3-4b17-ab24-d1265e9fb8d1" {:identification_id "33333333333" :name "Account1" :transactions [{:amount 1000.00M :created_at "2019-12-27"}]}
                    "234de110-ec07-4568-884c-8aad330c24eb" {:identification_id "44444444444" :name "Account2" :transactions [{:amount 1500.00M :created_at "2019-12-26"}]}
                    "e1255330-0f63-42cd-b7c8-acde1915f885" {:identification_id "55555555555" :name "Account3" :transactions [{:amount 2000.00M :created_at "2019-12-25"}]}}))

didibus02:12:57

Yes, you can also duplicate the uuid if you want it returned on lookup:

(def accounts (ref {"745286b0-24d3-4b17-ab24-d1265e9fb8d1" {:uuid_account "745286b0-24d3-4b17-ab24-d1265e9fb8d1" :identification_id "33333333333" :name "Account1" :transactions [{:amount 1000.00M :created_at "2019-12-27"}]}}))

didibus02:12:25

Also, while you should most likely use a map, if you insisted on using a vector, you can also update it the same way:

(dosync (alter accounts update-in [2 :transactions] conj new-tx))
Where you just give update-in the index into the vector (zero based), so for the third element, it is index 2. This works because vectors are associative, and so you can use most map functions on them given indices as keys.

tvalerio15:12:12

thanks @U0K064KQV and @U0NCTKEV8 I changed to use maps and now it works perfectly :parrot:

hiredman18:12:12

{accountid accountinfo}

hiredman18:12:26

Then continue to normalize and build yourself a little in-memory database using clojure.set/index so you can look up by account id or transaction id, or anything else

Mario C.18:12:28

Is it possible to combine a map-filter process? As in filter for certain criteria and if it meets said criteria then transform the value into this. Instead of filtering and then mapping.

bfabry18:12:47

(into orig-coll (map fn (filter ffn orig-coll)))

bfabry18:12:27

I'm assuming you mean you want to keep the values that don't meet the filter criteria

bfabry18:12:49

if you specifically want to avoid the intermediate lazy sequences. then transducers can do that. but that's more an optimisation than anything different

Mario C.18:12:44

I always thought that the map and filter, how it is written in your example is considered two walks

seancorfield19:12:15

Technically, yes, it will create extra intermediate lazy sequences, but whether that matters in practice is a different question.

bfabry19:12:29

it's lazy. so it's sort of two walks. if you want to make it "definitely only a single walk" you can compose transducers like (into [] (comp (filter ffn) (map fn)) orig-coll)

seancorfield19:12:51

Is your code performance/memory sensitive? i.e., do you already know that map over filter over coll is too slow/uses too much memory?

Mario C.19:12:26

This part of the code needs to be as performant as I can get it to be

Mario C.19:12:50

its not dire

bfabry19:12:04

imo I would measure the difference with real data with a real use case. but transducers are in general going to be faster

didibus02:12:49

No, it is not two walks

didibus02:12:13

It gets a bit complicated, but the overhead isn't in having to do more looping, but in having to create intermediate objects.

didibus02:12:28

And that overhead is further reduced by the use of chunking

didibus02:12:47

So in practice, lazy-seqs will often be just as fast

didibus02:12:06

Which is why people say you should measure it, because sometimes the non lazy-seq version is actually slower

didibus02:12:27

Think of it as a pull model

didibus02:12:26

You want the first element that matches the criteria, and you want it transformed

didibus02:12:15

So it will start looping on the collection until it finds the first element that matches the filter predicate. When it finds it, it will stop looping, and return the found element and apply the map function to it. Done

didibus02:12:44

When you then ask for the second element, it will resume looping where it last left off

didibus02:12:28

But to do this "resuming", you need to keep track of additional data, that tracking is the overhead which makes lazy-seq potentially slower

didibus02:12:59

To reduce the amount of tracking required, Clojure pulls in chunks of 32 at a time. So it only needs to track what is left every 32 elements.

didibus02:12:01

So even when you ask for the first element only, it will actually pull in the first 32 elements: it loops over the first 32, stops there, and remembers that it must resume at element 33 next.

didibus02:12:03

Those first 32 elements will be cached. So if you ask for the second element, it is already available in the cache, and no more looping, filtering or mapping is needed.

didibus02:12:46

Only once you ask for an element greater than 32 will it resume the loop, and filter and map another chunk, etc.

didibus03:12:48

This snippet explains it. This creates an unchunked infinite sequence of 0 to infinity, so 0 -> 1 -> 2 -> 3 -> 4 ... The sequence will print the index it is currently iterating over as f: i Now I take the first, and you see it prints f: 0. If I take the second, it prints f: 2. And when I take the third f: 3, etc. See how it didn't have to loop over 1 and 2 again? It just continued from where we were last. Even though I am doing a filter followed by a map. The filter and the map happen one after the other per element. Each element is only visited once.
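The snippet itself wasn't preserved in this log; a reconstruction of the idea (my own code, under the same assumptions) builds an UNCHUNKED infinite sequence that prints "f: i" as each index is realized:

```clojure
;; A hand-rolled lazy-seq is unchunked, so elements are realized one at
;; a time and the println fires exactly once per realized index.
(defn ints-from [i]
  (lazy-seq
    (println "f:" i)
    (cons i (ints-from (inc i)))))

(def xs (->> (ints-from 0)
             (filter even?)
             (map inc)))

(first xs)   ; prints "f: 0", returns 1
(second xs)  ; prints "f: 1" then "f: 2" (resumes, never revisits), returns 3
```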

didibus03:12:31

Look at this snippet as well, it might help you understand. See how when using the eager filterv and mapv it first filters everything, which is one loop over 4 elements, and then it maps everything, which is another loop over the 4 elements returned by filterv. But when using the lazy filter and map, it grabs the first element, then filters and maps it, and then moves to the next element. Thus it is a single pass. With the transducer variant, it similarly did a single pass, and filters and maps element by element, but it all happened before the call to a because it is eager. The only difference with the lazy-seq is that the transducer never had to remember where we were and didn't have to create an intermediate checkpoint, if you will, after each iteration, where the lazy-seq did: it had to create a new lazy-seq of the remainder after each iteration.
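This second snippet also wasn't preserved; a reconstruction of the comparison (my own code), with printing to expose the order of operations:

```clojure
(defn keep? [x] (println "filter" x) (even? x))
(defn bump  [x] (println "map" x)    (inc x))

;; Eager: two separate passes - every "filter" line prints
;; before any "map" line does.
(mapv bump (filterv keep? [1 2 3 4]))                 ;=> [3 5]

;; Transducer: one pass, filter and map interleaved per element,
;; all finished eagerly before into returns.
(into [] (comp (filter keep?) (map bump)) [1 2 3 4])  ;=> [3 5]
```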

Mario C.17:12:08

@U0K064KQV Thank you for this! It actually completely changed my understanding (lack thereof) of lazy-seq's! Just copied and pasted this onto my bear notes 😄

didibus19:12:32

😄, glad I could help

hiredman18:12:29

Yes, you can use mapcat, but map and filter are lazy, so calling one after the other combines both operations into a single pass when you walk the result

Mario C.18:12:02

so when they are lazy they are combined into a single walk?

seancorfield19:12:12

If you don't care about laziness, using transducers will give you a single pass: (into [] (comp (filter ,,,) (map ,,,)) coll)

hiredman19:12:25

When you call first on the result of the map, it calls first on the result of the filter and filters it then maps it then returns it

hiredman19:12:14

assuming you aren't forcing intermediate results, which is kind of the point of lazy seqs

Mario C.19:12:26

Thanks guys, didn't know that

Santiago19:12:00

I'm having some trouble adding keys to this nested vector. I need to keep the outermost keys intact. zipmap works for (second (first foo)) but not for all the nested vectors in this map. I'll then filter the inner vectors and keep only those without "". Any ideas?

{:23529ff0 ["AD444D"
            40.6454
            -73.7719
            180
            0
            0
            "3545"
            "F-KJFK2"
            "A321"
            "N954JB"
            1577438229
            "JFK"
            ""
            ""
            1
            0
            ""
            0
            "JBU"],
...
}

seancorfield19:12:00

@slack.jcpsantiago Not sure what you mean by "adding keys to this nested vector"?

Santiago19:12:51

the vector doesn't have any keys it's just strings and numbers. I want to turn it into a map with keywords (not keys sorry)

Santiago19:12:29

so instead of ["AD444D"] I have {:id "AD444D"} to make it easier to select and filter

seancorfield19:12:06

What about the rest of the data in the vector? What should happen to that?

Santiago19:12:59

same thing, I have a list of keywords to add to each element

Santiago19:12:30

(zipmap [:id :lat :lon :track :altitude :speed :unknown1 :unknown2 :aircraft :unknown3 :unknown4 :start :finish :flight :onground :rateofclimb :unknown5 :unknown6 :unknown7] (second (first bar)))

Santiago19:12:47

☝️ works for one vector, but I lose the top-level keyword 😞

seancorfield19:12:53

OK... So reduce-kv is good for processing a map (eagerly producing a new map)

seancorfield19:12:18

(reduce-kv (fn [m k v] (assoc m k (zipmap the-keys v))) {} your-map)

8
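Sean's reduce-kv answer applied to a shortened, hypothetical version of the data (three keys and a made-up top-level keyword instead of the real nineteen-element vectors):

```clojure
(def the-keys [:id :lat :lon])
(def flights  {:flight-a ["AD444D" 40.6454 -73.7719]})

;; Rebuild the map, zipmapping each inner vector while keeping its key.
(reduce-kv (fn [m k v] (assoc m k (zipmap the-keys v))) {} flights)
;;=> {:flight-a {:id "AD444D", :lat 40.6454, :lon -73.7719}}
```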
noisesmith19:12:27

using first / second etc. to process a hash-map is pretty much never the right thing

Santiago19:12:44

that's what I thought 😓 but didn't know any better

noisesmith19:12:19

@seancorfield’s solution is perfect here, but in other cases you can use key and val to get the two parts of a map entry

noisesmith19:12:49

and you should usually either 1) know the key you want or 2) want to do the same thing to every key

Santiago19:12:28

In my case the top keywords are always different, but I want to apply the same thing to each one so it's ok

noisesmith19:12:39

in fact, in the rare case I want the first entry in a hash map I'd do something like (get m (first (keys m))) to hang a lampshade on the fact that I'm doing it on purpose

noisesmith19:12:23

right - in a map you can never know for sure what the top key will be, except in very rare circumstances (and even then it's easy to mess it up)

Santiago19:12:44

thanks for the insights @noisesmith and the solution @seancorfield

johnj21:12:08

I remember there being a clojurians log, does it still exist?

johnj21:12:53

google cache ftw 😉

noisesmith21:12:00

what I last heard is that the logs are out of date

👍 4
manutter5121:12:52

Even on Zulip?

noisesmith21:12:18

your info is likely more up to date than mine

seancorfield21:12:12

If a channel here has the @zulip-mirror-bot in it, all messages are mirrored in real-time to http://clojurians.zulipchat.com

seancorfield21:12:03

If a channel here has the @logbot in it, all messages are logged to a system that is behind the clojurians log on ClojureVerse -- however, the indexing/display engine on ClojureVerse was lagging behind, last I checked.