#beginners
2021-01-11
seancorfield00:01:44

Updated it. Hopefully it's clearer now @g3o next time you update from the usermanager repo or read the code?

yiorgos19:01:36

Sweet, thank you very much!

Michael Stokley00:01:59

i'd like to make sure that clojure code i evaluate locally is instrumented by default (ideally without modifying the source code i'm evaluating). putting

:instrument {:injections [(do (require '[clojure.spec.test.alpha :as stest]
                                       '[clojure.spec.alpha :as s])
                              (s/check-asserts true)
                              (stest/instrument))]}
in my .lein/profiles.clj seems to do the trick. is there a simpler way to accomplish this? i'm vaguely concerned that this adds a lot of overhead to on-the-fly form evaluation in emacs

Michael Stokley00:01:36

if i were smart about emacs i'd figure out how to run that as a part of a modified jack in command, perhaps

seancorfield00:01:36

@michael740 Be aware that (stest/instrument) will only instrument functions that are already defined and have a spec.

seancorfield00:01:34

It won't instrument any code that gets loaded afterward -- you'd need to call (stest/instrument) again after defining those new functions.

seancorfield00:01:31

user=> (require '[clojure.spec.test.alpha :as st])
nil
user=> (st/instrument)
[]
user=> (require '[clojure.spec.alpha :as s])
nil
user=> (defn foo [x] (* x x))
#'user/foo
user=> (s/fdef foo :args (s/cat :x int?))
user/foo
user=> (foo "a")
Execution error (ClassCastException) at user/foo (REPL:1).
class java.lang.String cannot be cast to class java.lang.Number (java.lang.String and java.lang.Number are in module java.base of loader 'bootstrap')
user=> (st/instrument)
[user/foo]
user=> (foo "a")
Execution error - invalid arguments to user/foo at (REPL:1).
"a" - failed: int? at: [:x]
user=>

Michael Stokley01:01:02

thanks @seancorfield - that's a great point, and something i'm wrestling with as we speak.

Michael Stokley01:01:38

it doesn't automagically keep my repl state in good condition as i work, but at least it loads up existing specs in the beginning. i've been working on code for the last few days that i had specced out in 2020 - i'd completely forgotten to instrument it during local development. facepalm

seancorfield01:01:21

What I tend to do is have my test namespace instrument my source namespace (specifically), so whenever I run tests -- which I do fairly often via a hot key while developing -- my source functions have instrumentation enabled.

seancorfield01:01:43

That way, as I add new functions and Specs, whenever I run tests, they get instrumented.
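What Sean describes can be sketched like this (the myapp names and core/square are hypothetical, not from the conversation): the test namespace instruments every specced function in the source namespace each time it loads, so new specs get picked up on the next test run.

```clojure
;; myapp.core-test -- namespace and function names are hypothetical
(ns myapp.core-test
  (:require [clojure.test :refer [deftest is]]
            [clojure.spec.test.alpha :as stest]
            [myapp.core :as core]))

;; enumerate-namespace lists every symbol in myapp.core; instrument runs
;; again on each (re)load of this file, picking up newly added specs
(stest/instrument (stest/enumerate-namespace 'myapp.core))

(deftest square-test
  (is (= 9 (core/square 3))))
```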

Michael Stokley01:01:26

come to think of it - i was originally concerned that using leiningen's :injections would be overkill since it prepends the injections to all forms. but perhaps not, given what we're saying about how a single call to stest/instrument can become stale

DG02:01:36

Hi, is there any clojure idiom to separate database code from general data/functions code? I'm an experienced software engineer with a background in Java/Go/Rust/Python and I'm having trouble "organizing" the persistence layer. I've looked at some established projects such as Metabase, and it seems the db code is scattered "all over the place". Is that considered a good practice? Thank you!

didibus03:01:20

Normally you wrap the DB calls in functions that only create, read, update, or delete from the DB, and take a db-client as an argument.

didibus03:01:57

Then you try and use those functions as much as you can exclusively from the top-level API functions.

didibus03:01:46

But in practice, you'll often break that rule, at least a bit, so you could end up using those functions directly from deeper in the code. The question then is where do you get the DB client instance from? The best practice for this would be to carry it down to the place that needs it, injecting it at every function call along the way. People often do that by passing down a context map of some sort, with the db-client as a key inside it.
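A minimal sketch of that context-map style (every name here is hypothetical):

```clojure
;; thin CRUD wrapper: takes the db-client explicitly
(defn insert-user! [db user]
  (save! db :users user)) ; save! stands in for your DB library's insert fn

;; top-level API fn: the db-client rides along inside the ctx map
(defn register-user [{:keys [db] :as ctx} user]
  (insert-user! db (assoc user :registered-at (System/currentTimeMillis))))
```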

didibus03:01:48

Since you still should try to keep it relatively flat, there shouldn't be that many places where you need to "pass it down" more than 2 levels, so it can be quite manageable.

didibus03:01:18

Another alternative is very similar to OOP langs. There, the functions that do CRUD directly access the db-client state. Normally that's managed with some "component-like" library. At its simplest, imagine a namespace with a (def db-client (DbClient.)) and the CRUD functions in the namespace just get the client from that directly.

didibus03:01:05

In practice, a component library is used, which will create a sort of "instance" for the namespace and inject the client into it for the functions to use.
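As a hedged sketch, using Stuart Sierra's Component library for the lifecycle protocol (connect!, disconnect!, and query are hypothetical stand-ins for your DB library's calls):

```clojure
(require '[com.stuartsierra.component :as component])

(defrecord Database [config client]
  component/Lifecycle
  (start [this] (assoc this :client (connect! config)))
  (stop [this] (disconnect! client) (assoc this :client nil)))

;; CRUD fns receive the started component and pull the client out of it
(defn find-user [database id]
  (query (:client database) ["select * from users where id = ?" id]))
```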

DG03:01:21

Interesting... I've seen those approaches before, wasn't really sure if they were good practices or not. Thank you!

didibus03:01:04

I looked a bit at metabase. It seems they do the latter. They use their own pseudo-ORM called toucan. When their app launches, they set the default database driver for toucan to use based on an environment configuration. So that's how you specify if you want to use RedShift, MySQL, Postgres, etc. From this point on, toucan can be used like a "singleton component". You pass it a query spec, which is not SQL, but some vector representation of an SQL query (they use honeysql for it). Toucan and/or HoneySQL (not sure which one does the hard work) will then generate the SQL for the default database that was configured when the app started.

didibus03:01:07

But Toucan adds some ORM features too. So you can tell it to insert into a Model and it'll generate the SQL and do the insert for you.

Michael Stokley03:01:25

thanks for asking this question, @U01JPGC1PQQ. i also come from java/python and what little i've seen in clojure crud apps also suggests that it's not idiomatic to attempt any kind of separation between persistence layer interfacing and business logic

Michael Stokley03:01:55

it was so striking that it really did make me curious whether there was some broad OO based rejection of sequestering the persistence layer in the clojure community

didibus03:01:45

You have to ask yourself, why were you creating a DAL (data access layer) in Java?

Michael Stokley03:01:19

ostensibly it was to hedge against future persistence-layer swap outs.

Michael Stokley03:01:51

and to separate concerns - business logic from business-agnostic third party interfacing

seancorfield03:01:11

You can swap DBs fairly easily if you're only using CRUD functions with next.jdbc (or any of the JDBC-based wrappers), but there definitely are differences between DBs that make anything more complex less portable.
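For illustration, a hedged sketch with next.jdbc's "friendly" SQL functions against an in-memory H2 database; if you stay at this CRUD level, switching DBs is often little more than a change of :dbtype:

```clojure
(require '[next.jdbc :as jdbc]
         '[next.jdbc.sql :as sql])

(def ds (jdbc/get-datasource {:dbtype "h2:mem" :dbname "example"}))

(jdbc/execute! ds ["create table users (id int auto_increment primary key, name varchar(32))"])
(sql/insert! ds :users {:name "foo"}) ; portable CRUD-level insert
(sql/get-by-id ds :users 1)           ; returns the row as a hash map
```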

didibus03:01:01

In that sense, I do think the same is done in Clojure. In that the CRUD operations are often isolated in their own functions. In the case of Metabase, they're using their own ORM. So when you do (db/insert! User :name foo) toucan will handle whichever concrete DB you set it up to use, so no changes to your code necessary

seancorfield03:01:43

In reality, projects almost never switch persistence layers so ORM and much of that sort of stuff is a giant waste.

👍 4
didibus03:01:47

So metabase can "look" like it is not isolating the CRUD operations in functions, but it is, inside "db/insert!" itself.

Michael Stokley03:01:52

@seancorfield - i've wondered that myself. i've also wondered - if you did switch persistence layers, wouldn't that be so traumatic as to require major surgery anyway? idk. i'm not experienced enough overall to have a sense of it.

didibus03:01:40

I think the other aspect for a DAL in Java, is that you want to normalize the data you get back and send to the DB, so that you get one of your model Classes, and send one of your model Classes. In Clojure, that's just not a problem, because you send data and get data back, and work with it directly. (well toucan actually tries to mimic a more Java OO style here, by defining models and abstracting the DB into it).

2
Michael Stokley04:01:55

that makes a lot of sense!

didibus04:01:09

Most likely you wouldn't just switch DB, you'd be doing a re-architecture of your data model and storage which maybe requires a different DB. Like going from relational to NoSql or something like that. And ya, that'd be traumatic anyway, and require a lot to change; a DAL would not protect you from any of that work.

didibus04:01:24

And if you change DB because, say, your contract expired (you had Oracle DB and want to go free with MariaDB), even without a DAL that's a much more minor break, since they both still use SQL, and they both use a JDBC driver to interact with them.

Michael Stokley04:01:10

at my last workplace, we specifically guarded against changing db types, too. we fully expected to switch from, e.g., relational to nosql. so we cut up all our database calls into separate network calls - no joins! we did all the joining in the application layer, in the business logic layer

seancorfield04:01:50

@michael740 There's a small subset of SQL that's completely portable and a larger subset that is mostly portable between the "big" DBs. Take a look at the tests in next.jdbc for plenty of examples of how different DBs can be 🙂

Michael Stokley04:01:51

it was a tremendous amount of boilerplate and network calls, i could hardly believe it. but it was taken quite seriously by the staff engineers.

didibus04:01:57

That sounds like a typical Java monstrosity, suffering from over-engineering and speculative generality 😛

didibus04:01:00

As an aside, I was able to inspect the Metabase codebase in 10 minutes, and get a pretty good grasp of how it works. Keep in mind I have no experience with Toucan or Metabase. And I've actually never done anything with Ring or Compojure (even though I know a little about them from reading up on them).

Michael Stokley04:01:04

@didibus - rejecting "speculative generality" - is that clojurian?

didibus04:01:52

Good question, I would say so, though I don't know if I've heard it referred to as speculative generality in Clojure-world that much. I think it's less about rejecting it, and more about encouraging its opposite

didibus04:01:03

Clojure tends to encourage small, simple, direct solutions to problems, but done in a way that they can accrete features over time without breaking a bunch of things in the process or making a mess of the code base.

seancorfield04:01:03

We did migrate a segment of our data from MySQL to MongoDB and back again. Even with the shift from RDBMS to document-based storage, we were able to keep a lot of the very basic CRUD functions the same (in fact, within our one API, we supported both DBs dynamically and used the table name/collection name in the call as the discriminator).

🙌 2
seancorfield04:01:08

Everything beyond that very basic CRUD stuff had to be completely rewritten each time. But the simplicity of Clojure code made it relatively tractable. It would have been a nightmare in Java...

didibus04:01:41

Just the fact that on average, more of your business logic will be pure, automatically means less to refactor.

Michael Stokley04:01:58

yeah... i think that makes a lot of sense. you don't need as much structure because everything is lighter-weight anyhow

Michael Stokley04:01:03

it's like going ultralight backpacking - you switch from heavy hiking boots with ankle support to trail runners because a) you're saving weight and b) you've already saved so much weight with the other gear that the support just doesn't mean as much at that point

seancorfield04:01:42

@U01JPGC1PQQ Going back to your original Q: the driving idea is to organize your code as much as possible where your business logic is as pure as can be, so you should strive for code that does all of the DB reading, then runs all the pure business logic, then performs all the DB updates.

2
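The read, then pure logic, then write shape Sean describes can be sketched as (every name here is hypothetical):

```clojure
(defn process-order! [db order-id]
  (let [order  (fetch-order db order-id)   ; 1. all DB reads up front
        priced (apply-discounts order)     ; 2. pure business logic --
        total  (compute-total priced)]     ;    trivially testable
    (save-order! db (assoc priced :total total)))) ; 3. all DB writes at the end
```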
seancorfield04:01:18

In practice, your code is likely to be less structured but try to aim for that. It makes for more testable code and more maintainable code.

didibus04:01:10

You can take a Clojure app pretty far with just:

(def db ...) ; some jdbc data source config

(defn some-api [...]
  (-> (get-the-thing db)
      (transform-it)
      (put-the-thing-back))) ; or return the transformed thing

seancorfield04:01:19

But remember that "it's just data" so what you store in the DB and what you read back from the DB is typically just hash maps (and sequences of them). That means that reading/writing is a near 1:1 mapping.

DG04:01:52

Sounds good... lots of good info here 🙂

DG04:01:32

Other than Metabase, any other prominent Clojure open-source project to learn from?

DG04:01:47

Any recommendations?

DG04:01:22

(I'm interested in apps - e.g. services, apis, etc, not libraries)

didibus04:01:07

I'd check out https://github.com/cljdoc/cljdoc as another good large example

seancorfield04:01:33

@U01JPGC1PQQ Overall there aren't many Clojure apps available as open source (unfortunately) so there's very little to learn from.

DG04:01:12

Thanks!

didibus05:01:40

This one looks interesting as well: https://github.com/chr15m/slingcode. Just found it, pretty cool

jacklombard03:01:49

Though this is a datomic question, I thought it belongs here as I think I'm missing something trivial

(comment
  (def db-uri "datomic:)
  (def db-uri "datomic:)

  (d/create-database db-uri)

  (def conn (d/connect db-uri))

  (def db (d/db conn))

  (def movie-schema [{:db/ident       :movie/title
                      :db/valueType   :db.type/string
                      :db/cardinality :db.cardinality/one
                      :db/doc         "The title of the movie"}

                     {:db/ident       :movie/genre
                      :db/valueType   :db.type/string
                      :db/cardinality :db.cardinality/one
                      :db/doc         "The genre of the movie"}

                     {:db/ident       :movie/release-year
                      :db/valueType   :db.type/long
                      :db/cardinality :db.cardinality/one
                      :db/doc         "The year the movie was released in theaters"}])

  (def first-movies [{:movie/title        "Explorers"
                      :movie/genre        "adventure/comedy/family"
                      :movie/release-year 1985}
                     {:movie/title        "Demolition Man"
                      :movie/genre        "action/sci-fi/thriller"
                      :movie/release-year 1993}
                     {:movie/title        "Johnny Mnemonic"
                      :movie/genre        "cyber-punk/action"
                      :movie/release-year 1995}
                     {:movie/title        "Toy Story"
                      :movie/genre        "animation/adventure"
                      :movie/release-year 1995}])

  (d/transact conn movie-schema)

  (d/transact conn first-movies)

  (def all-movies-q '[:find ?e
                      :where [?m :movie/title ?e]])

  (d/q all-movies-q db))
I get the following error when I run (d/q all-movies-q db)
Execution error (Exceptions$IllegalArgumentExceptionInfo) at datomic.error/arg (error.clj:79).
:db.error/not-an-entity Unable to resolve entity: :movie/title

jacklombard03:01:40

I've tried both the dev and mem protocols separately and both return the same error. I had a dev transactor running when I was using the dev protocol.

hiredman03:01:42

there is a #datomic channel you can try asking in, but my guess would be transact is executing asynchronously, so the execution of your first and second transactions are overlapping, but I don't really know, you would need to check the documentation to confirm that and see how you can wait on the result of a transaction

jacklombard03:01:05

I can deref the transaction. I'll try this and also ask in the datomic channel thanks!

jumar07:01:23

I'm trying to install Clojure on Windows (have been using Clojure on MacOS for years) following https://github.com/clojure/tools.deps.alpha/wiki/clj-on-Windows with Invoke-Expression (New-Object System.Net.WebClient).DownloadString(''). The installation finished OK, I restarted Powershell and tried clj:

clj : The 'clj' command was found in the module 'ClojureTools', but the module could not be loaded. For more
information, run 'Import-Module ClojureTools'.
At line:1 char:1
+ clj
+ ~~~
    + CategoryInfo          : ObjectNotFound: (clj:String) [], CommandNotFoundException
    + FullyQualifiedErrorId : CouldNotAutoloadMatchingModule
I'm using Parallels Desktop VM running on MacOS

jumar07:01:25

Ok, running the suggested command helped

Import-Module ClojureTools
=> 
Import-Module : File C:\Users\jumar\Documents\WindowsPowerShell\Modules\ClojureTools\ClojureTools.psm1 cannot be loaded
because running scripts is disabled on this system. For more information, see about_Execution_Policies at
https://go.microsoft.com/fwlink/?LinkID=135170.
At line:1 char:1
+ Import-Module ClojureTools
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : SecurityError: (:) [Import-Module], PSSecurityException
    + FullyQualifiedErrorId : UnauthorizedAccess,Microsoft.PowerShell.Commands.ImportModuleCommand
# run powershell as admin and then:
Set-ExecutionPolicy Unrestricted

Piotr Brzeziński08:01:57

Is it usual to get a bit overwhelmed by the peg game development while going through braveclojure? 🙂 I feel like it was a leap of faith compared to previous chapters.

2
Christian09:01:17

Are there other resources than "https://dragan.rocks/articles/19/Deep-Learning-in-Clojure-From-Scratch-to-GPU-0-Why-Bother" about ML with clojure? I feel like many projects are abandoned and this one is full of blind links. Should one just use the usual suspects like tensorflow and their ports?

simongray09:01:26

It’s true that there have been a few abandoned projects. ML and data science never got the momentum to take off in Clojure. Some people are hard at work trying to make it happen. Dragan is one of them. Currently, the direction people are moving in seems to be developing better interop with languages like Python, R, Julia and other languages typically used for data science.

simongray09:01:14

Carin Meier has some tutorials for Clojure ML that use interop.

Christian09:01:51

Hm, I was hoping my google-fu was weak and that this wasn't the answer

Christian09:01:28

I'll check out Carin Meier, that will get me a bit further I think. Thank you Simon

simongray09:01:48

NP. Go visit the #data-science channel too, especially the one on Zulip where the Clojure data science community is organised: https://clojurians.zulipchat.com/#narrow/stream/151924-data-science

Christian09:01:33

Never heard of Zulip...

simongray09:01:49

You have now 😉

Christian09:01:40

Looks like a slack-reskin 😄

Christian09:01:30

no, it's different. Yeah, another platform. Sometimes I wish for one central thing for everything.

Christian09:01:15

That's my currently open tab

👍 2
Ramon Rios15:01:27

Folks, I've been working on an exercise for a while and can't think of anything smart enough to solve the problem: I'm trying to reduce a data structure to show the count of sectors per prefix, for example:

{"1"  {"Technology" 2
       "Clothing"   1}
 "44" {"Banking" 1}}
I was able to reduce enough to have this:
[{:sector "Technology", :prefix "1"} {:sector "Clothing", :prefix "1"} {:sector "Technology", :prefix "1"} {:sector "Banking", :prefix "44"}]
And then my mind blew up 😞. How would you guys solve this?

amarnah15:01:10

You can apply group-by :prefix to what you produced already.

user=> (def h [{:sector "Technology", :prefix "1"} {:sector "Clothing", :prefix "1"} {:sector "Technology", :prefix "1"} {:sector "Banking", :prefix "44"}])
#'user/h
user=> (group-by :prefix h)
{"1" [{:sector "Technology", :prefix "1"} {:sector "Clothing", :prefix "1"} {:sector "Technology", :prefix "1"}], "44" [{:sector "Banking", :prefix "44"}]}

dpsutton15:01:22

you want to end up with a datastructure like {"1" 2 "44" 1}?

Ramon Rios15:01:53

like this

{"1"  {"Technology" 2
       "Clothing"   1}
 "44" {"Banking" 1}}

dpsutton15:01:17

oh that's desired, and the flat list is the starting point?

dpsutton15:01:05

(->> [{:sector "Technology", :prefix "1"} {:sector "Clothing", :prefix "1"} {:sector "Technology", :prefix "1"} {:sector "Banking", :prefix "44"}]
     (group-by :prefix)
     (map (fn [[k vs]] [k (reduce (fn [m x] (update m (:sector x) (fnil inc 0)))
                                  {} vs)]))
     (into {}))

Antonio Bibiano16:01:36

you can also try to use something like update-in

Antonio Bibiano16:01:09

(defn updater
  [result, m]
  (update-in result 
             [(:prefix m) (:sector m)] 
             #(if % (inc %) 1)))

(reduce updater {} my-input)
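Another option, combining the earlier group-by suggestion with frequencies (a sketch, with the input inlined from the example above):

```clojure
(->> [{:sector "Technology", :prefix "1"} {:sector "Clothing", :prefix "1"}
      {:sector "Technology", :prefix "1"} {:sector "Banking", :prefix "44"}]
     (group-by :prefix)
     (reduce-kv (fn [m prefix rows]
                  (assoc m prefix (frequencies (map :sector rows))))
                {}))
;; => {"1" {"Technology" 2, "Clothing" 1}, "44" {"Banking" 1}}
```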

roelof16:01:59

I have studied a chapter about concurrency.

roelof16:01:29

but I wonder if I want to use slurp do I then also need to use a future ?

dpsutton16:01:18

(slurp "project.clj") should work just fine. no need for future. you can of course use other threads and async mechanisms as you like but certainly no need to

roelof16:01:30

ok, weird then that promises and futures are explained but you don't need them for the challenges

dpsutton16:01:08

i don't know what exercises you're going through but i don't have high confidence in them from what i've seen

roelof16:01:54

i'm doing the exercises from the brave book @dpsutton

dpsutton16:01:16

oh. never mind then. i thought you were going through something else.

dpsutton16:01:22

i like that book

roelof16:01:44

i'm now at the first chapter about concurrency

roelof16:01:32

a few weeks ago I tried "clojure from the ground up" but that did not explain things well, and it had weird challenges in my opinion

NPException16:01:17

The usage of slurp in that chapter is for slurping from URLs. So as an exercise it's expected to be done in a future.

roelof16:01:39

ok, that's what I thought

NPException16:01:08

but other than that there's indeed no good reason to use a future in those exercises.

Antonio Bibiano16:01:43

about those I was a bit puzzled about getting the first result

Antonio Bibiano16:01:01

looks like it's pretty hard with the current way google responds

roelof16:01:13

Do not spoil it, I want to do it tomorrow

isaac omy16:01:07

hello from indonesia 😄

👋 1
didibus16:01:24

You put slurp in a future if you want to slurp things concurrently or do other things concurrently while you slurp. If you don't mind doing the slurp sequentially with other things you can use it directly without a future

1
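A minimal sketch of that, with placeholder URLs:

```clojure
;; the two downloads overlap; deref blocks until each one finishes
(let [a (future (slurp "https://example.com/a"))
      b (future (slurp "https://example.com/b"))]
  [@a @b])
```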
pez17:01:04

It’s been a long day and I can’t figure this out (trying to create Hugo frontmatter from a list of things):

(str "foo:\n"
     (->> [1 2 3]
          (map #(str "  - " %)))) ;=> "foo:\nclojure.lang.LazySeq@..."

noisesmith17:01:26

@pex the str for a lazy seq is basically broken

noisesmith17:01:32

you can use pr-str

pez17:01:14

I get the same result if I just replace for pr-str, though.

noisesmith17:01:10

user=> (pr-str "foo:\n" (->> [1 2 3] (map #(str " - " %))))
"\"foo:\\n\" (\" - 1\" \" - 2\" \" - 3\")"

noisesmith17:01:45

of course pr-str has the problem that it prints strings with " around them - you probably want to use format for more nuance

noisesmith17:01:59

or string/join

pez17:01:08

Ah, I replaced the wrong str 😃

didibus17:01:09

Just use mapv

noisesmith17:01:17

@pez this is probably more like what you want

user=> (str "foo:\n" (->> [1 2 3] (map #(str " - " %)) (apply str)))
"foo:\n - 1 - 2 - 3"

❤️ 1
pez17:01:48

Yes, that works great!

pez17:01:06

mapv gave me this:

"foo:[\"  - 1\" \"  - 2\" \"  - 3\"]"

didibus17:01:13

Ya, it depends what you want exactly.

didibus17:01:24

Also, I don't know if it's fair to say str on lazyseq is broken. I think it was intentional for it to not realize the elements

didibus17:01:18

A seq will also print its elements, it's only lazy-seq that doesn't. So I think it was on purpose.
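If I'm reading the JVM behavior right, the split being circled here is that ASeq overrides toString but LazySeq doesn't:

```clojure
(str (cons 1 [2 3]))  ;; => "(1 2 3)" -- Cons extends ASeq, which prints its elements
(str (map inc [1 2])) ;; "clojure.lang.LazySeq@..." -- Object-style toString, though
                      ;; computing the content hash still forces realization
```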

noisesmith17:01:48

@didibus that's false, it realizes all the elements in order to print the hash value it displays

noisesmith17:01:11

it can't calculate that hash if there are unrealized elements

didibus17:01:18

Hum, you're right, so it is broken, weird. I thought that was the memory address, not the hash lol

noisesmith17:01:25

at this point it might be necessary to keep that printing behavior for legacy reasons, but the behavior is just broken

alexmiller17:01:49

this has been repeatedly mentioned various places but I am not aware of a jira / ask clojure question for it

alexmiller17:01:57

is there one?

alexmiller17:01:41

if not, can we make one?

didibus18:01:51

I can make one

1
noisesmith18:01:12

wow - I had assumed this would be done to death by now, I guess everyone's just worked around it

didibus18:01:10

Seems like a different issue, but maybe it's the root cause of what's happening for str on lazy-seq?

didibus18:01:36

Hum, on second look, I don't think so. I'll create an ask entry for it

didibus18:01:31

Seems eduction suffers the same fate as lazy-seq

alexmiller18:01:14

that one may have been intentional, don't recall for sure

didibus18:01:00

At least eduction doesn't seem to execute the loop over the collection the way that str on lazy-seq does

alexmiller18:01:26

eduction is a pending computation - I don't think you want it to eagerly evaluate on toString

alexmiller18:01:45

similar to lazy seq in that regard

alexmiller18:01:15

but it seems like not printing the elements while still forcing them for the hash can't make sense for lazyseq

didibus18:01:24

Do you consider the "issue" to be that str on lazy-seq realizes the elements, or that it doesn't stringify like seq does?

alexmiller18:01:01

well I'm undecided w/o asking rich but it does not seem like the current behavior can make sense across both of those dimensions

didibus18:01:07

Ok, I'll try and word it with both context then.

didibus17:01:57

Even for legacy reasons, I can't really imagine how someone would rely on this behavior

didibus17:01:46

Also seems ClojureScript str doesn't do this, and returns a string of the elements between parens

mathpunk18:01:47

I'm making a bunch of requests to a web service, and I think that I'm dropping a bunch of data because I don't know how to write async programs and I bet the web service doesn't want me to send 100 requests all at once. Is it time to learn core.async? Is that overpowered and I should learn about queues? Am I barking up the wrong tree?

noisesmith18:01:56

clojure or cljs?

clyfe18:01:26

(buffer n) - Returns a fixed buffer of size n. When full, puts will block/park.

clyfe18:01:32

(chan 10) - Pass a number to create a channel with a fixed buffer

noisesmith18:01:54

on the jvm, I wouldn't use core.async for an io problem unless most of the work needed was complex coordination of results

noisesmith18:01:11

there's too many gotchas

clyfe18:01:44

on jvm, everything you put in the go block (http api calls, db driver calls, other io) must be via a non-blocking driver

clyfe18:01:03

in js that's by default

noisesmith18:01:37

sure, but on the jvm we also have http clients with built in thread pooling and throttling, we don't even have to open the core.async can of worms

👍 1
hiredman18:01:38

I just don't know that adding core.async to "I don't know how to write async programs" is going to fix anything

💯 1
😆 2
hiredman18:01:07

I might start with just introducing an Executor with a fixed size threadpool and run all your http requests on that
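A hedged sketch of that (fetch and requests are hypothetical, and the pool size of 4 is arbitrary):

```clojure
(import '[java.util.concurrent Callable Executors])

(let [pool    (Executors/newFixedThreadPool 4)  ; at most 4 requests in flight
      tasks   (mapv (fn [req] (.submit pool ^Callable (fn [] (fetch req))))
                    requests)
      results (mapv #(.get %) tasks)]           ; .get blocks until each task is done
  (.shutdown pool)
  results)
```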

mathpunk18:01:29

it's on the jvm

clyfe18:01:35

or, on jvm, wrap blocking calls in (a/thread ...)

hiredman18:01:46

an Executor in this case is basically a kind of work or job queue

mathpunk18:01:52

hmmm i don't know most of the words you're using

mathpunk18:01:55

an executor is a Java class?

didibus18:01:24

Why even go async?

hiredman18:01:14

maybe checkout the examples in the readme https://github.com/TheClimateCorporation/claypoole (I haven't used claypoole, but I gather it provides a convenience layer over using executors directly)

mathpunk18:01:46

for context, here's what I've been doing: forcing the promise to finish in a let statement, and then doing other stuff:

(defn- read
  [request]
  (let [response @(client/request request)]
    (-> response :body (json/parse-string true))))

mathpunk18:01:22

i just figure, that's gotta be wrong -- my reason for using the @ operator there is, gosh, otherwise it just says "promise" instead of being the data I want

didibus18:01:47

The client returns you a promise?

hiredman18:01:54

ah, you must be using the http client from http-kit?

mathpunk18:01:05

maybe "async" isn't even the word! i just figure i need to do something so that if the web service is mad at me, I can exponentially back off (or whatever)

hiredman18:01:37

http-kit is already going to be using some kind of queue internally for this stuff, so you may not need to care about it

mathpunk18:01:55

perhaps i should go ask in that channel..... i thought there was a chance this case was already handled

didibus18:01:13

Ya, I'm not sure how http-kit handles errors on the promise

mathpunk18:01:15

and that i'm just holding http-kit wrong

hiredman18:01:17

I mean, I don't use it, but internally http-kit has many queues and executors

clyfe18:01:44

I don't see where core.async is involved; do you launch multiple of those reads in parallel and suffocate the WS?

didibus19:01:24

Ok, you are supposed to check the result for an :error key it seems

didibus19:01:47

(defn- read
  [request]
  (let [{:keys [body error]} @(client/request request)]
    (if error
      nil ; handle error here
      (-> body (json/parse-string true)))))

mathpunk19:01:50

@UCCHXTXV4 Maybe? I decided to act like the result of read was "just a data structure" and trusted that eventually things would break due to that totally not being true. I map that read function over a whole bunch of request maps

clyfe19:01:31

map and not pmap, yeah?

mathpunk19:01:55

yep, never touched pmap

didibus19:01:28

You definitely don't need core.async. Http-kit is already async. You're just not using it correctly.

clyfe19:01:54

ok so you do them in sequence and the server still whines; check the http status codes of the dropped ones (may be 429)

clyfe19:01:46

a sleep in between them may placate that WS; or waiting a day - some WS give you "x req / day"
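In http-kit terms, that could look something like this sketch (the 5-attempt cap and base delay are arbitrary choices; client is the http-kit client namespace from the earlier snippet):

```clojure
(defn request-with-backoff [request]
  (loop [attempt 0]
    (let [{:keys [status] :as resp} @(client/request request)]
      (if (and (= 429 status) (< attempt 5))
        (do (Thread/sleep (* 1000 (long (Math/pow 2 attempt)))) ; 1s, 2s, 4s ...
            (recur (inc attempt)))
        resp))))
```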

clyfe19:01:06

is that a short url expander by any chance?

didibus19:01:30

From the code you provided, you are actually doing one request at a time

didibus19:01:45

I suspect the issue is not that you are being throttled, just that you don't handle errors from the request

didibus19:01:01

So when your request fails for some reason, you just drop it and never retry or anything

mathpunk19:01:07

I'm reviewing the http-kit docs. Is this probably the use case? "Combined, concurrent requests, handle results synchronously" http://http-kit.github.io/client.html#combined

clyfe19:01:35

That little @ makes your requests sequential

didibus19:01:22

Do you have to be concurrent? I would start sequential in your case, figure out why your requests are being dropped, fix that, then if you want to speed things up look into making it concurrent

mathpunk19:01:30

:thumbsup::skin-tone-2:

didibus19:01:34

Something like:

(defn- read
  [request]
  (let [{:keys [body error]} @(client/request request)]
    (if error
      [:error error]
      (-> body (json/parse-string true)))))

(mapv read coll-of-requests)

didibus19:01:58

And like log a metric or an error for each instance of :error

mathpunk19:01:19

yeah that makes sense, and then figure out if it's 429ing me or if something else is occurring

mathpunk19:01:21

speed is not of the essence at all

mathpunk19:01:46

thanks y'all!

noisesmith19:01:56

and I find that API usage can't be as general as all that - e.g. one service I've used refused to ever return an error code for failures, you'd need to look inside the json-encoded body to check for errors

👍 1
didibus19:01:08

Yes, also, I don't know what your application is, but if it is a server, it could be handling concurrent requests itself. So even though your requests to the API are sequential, your application could concurrently be making many sequential requests, and you could still be throttled.

noisesmith19:01:48

that's a great point - if your code is a server with built in parallelism, and the main thing you do is talk to someone else's API, a common pattern is that you end up with some stateful object providing access to that API (in the extreme case it can't even just coordinate on one vm - it has to collaborate with other vms providing the same service)

noisesmith19:01:28

in order to respect limits that aren't imposed on the IP level, but on the credential level

mathpunk19:01:19

i dunno about all that, I've just been working my way through the Gitlab API docs, because I'm tired of scrolling through the browser interface looking for what was the last passing/first failing job

noisesmith19:01:40

oh then you don't need to mess with all that

mathpunk19:01:09

yeah this is a pretty bone-head program, which is why i wrote it on the "pretend it's local until it's clearly broken" principle

mathpunk19:01:11

and that day has come