
Updated it. Hopefully it's clearer now @g3o next time you update from the usermanager repo or read the code?


Sweet, thank you very much!

Michael Stokley00:01:59

i'd like to make sure that clojure code i evaluate locally is instrumented by default (ideally without modifying the source code i'm evaluating). putting

:instrument {:injections [(do (require '[clojure.spec.test.alpha :as stest]
                                       '[clojure.spec.alpha :as s])
                              (s/check-asserts true)
                              (stest/instrument))]}
in my .lein/profiles.clj seems to do the trick. is there a simpler way to accomplish this? i'm vaguely concerned that this adds a lot of overhead to on-the-fly form evaluation in emacs

Michael Stokley00:01:36

if i were smart about emacs i'd figure out how to run that as a part of a modified jack in command, perhaps


@michael740 Be aware that (stest/instrument) will only instrument functions that are already defined and have a spec.


It won't instrument any code that gets loaded afterward -- you'd need to call (stest/instrument) again after defining those new functions.


user=> (require '[clojure.spec.test.alpha :as st])
user=> (st/instrument)
user=> (require '[clojure.spec.alpha :as s])
user=> (defn foo [x] (* x x))
user=> (s/fdef foo :args (s/cat :x int?))
user=> (foo "a")
Execution error (ClassCastException) at user/foo (REPL:1).
class java.lang.String cannot be cast to class java.lang.Number (java.lang.String and java.lang.Number are in module java.base of loader 'bootstrap')
user=> (st/instrument)
user=> (foo "a")
Execution error - invalid arguments to user/foo at (REPL:1).
"a" - failed: int? at: [:x]

Michael Stokley01:01:02

thanks @seancorfield - that's a great point, and something i'm wrestling with as we speak.

Michael Stokley01:01:38

it doesn't automagically keep my repl state in good condition as i work, but at least it loads up existing specs in the beginning. i've been working on code for the last few days that i had specced out in 2020 - i'd completely forgotten to instrument it during local development. facepalm


What I tend to do is have my test namespace instrument my source namespace (specifically), so whenever I run tests -- which I do fairly often via a hot key while developing -- my source functions have instrumentation enabled.


That way, as I add new functions and Specs, whenever I run tests, they get instrumented.
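A minimal sketch of that setup (the namespace names are hypothetical, and clojure.spec must already be on the classpath):

```clojure
;; my.app-test -- instruments my.app every time this namespace loads
(ns my.app-test
  (:require [clojure.test :refer [deftest is]]
            [clojure.spec.test.alpha :as stest]
            [my.app]))

;; enumerate-namespace lists every symbol in my.app, so functions and
;; specs added since the last test run get instrumented too
(stest/instrument (stest/enumerate-namespace 'my.app))
```

Because the form runs at load time, rerunning the tests (or just reloading the test namespace) refreshes instrumentation.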

Michael Stokley01:01:26

come to think of it - i was originally concerned that using leiningen's :injections would be overkill since it prepends the injections to all forms. but perhaps not, given what we're saying about how a single call to stest/instrument can become stale


Hi, is there any clojure idiom to separate database code from general data/functions code? I'm an experienced software engineer with a background in Java/Go/Rust/Python and I'm having trouble "organizing" the persistence layer. I've looked at some established projects such as Metabase, and it seems the db code is scattered "all over the place". Is that considered a good practice? Thank you!


Normally you wrap the DB calls in functions that only create, read, update or delete from the DB and take a db-client as an argument.


Then you try and use those functions as much as you can exclusively from the top-level API functions.


But in practice, you'll often break that rule, at least a bit, so you could end up "using those functions directly". The question then is where do you get the DB client instance from? The best practice for this would be to carry it down to the place that needs it, injecting it at every function call along the way. People often do that where they pass down a context map of some sort, and the db-client is a key inside it.


Since you still should try to keep it relatively flat, there shouldn't be that many places where you need to "pass it down" more than 2 levels, so it can be quite manageable.
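A sketch of that context-map style; here a plain nested map stands in for a real db client, purely for illustration:

```clojure
;; the db client rides along under a key in the context map
(defn find-user [{:keys [db-client]} user-id]
  ;; with a real client this would be a query; here it's a map lookup
  (get-in db-client [:users user-id]))

(defn user-summary [ctx user-id]        ; just passes ctx down one level
  (select-keys (find-user ctx user-id) [:id :name]))

(user-summary {:db-client {:users {1 {:id 1 :name "foo" :secret "x"}}}} 1)
;; => {:id 1, :name "foo"}
```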


Another alternative is very similar to OOP langs. The functions that do CRUD directly access the db-client state. Normally that's managed with some "component-like" library. At its simplest, imagine a namespace with a (def db-client (DbClient.)) and the CRUD functions in the namespace just get the client from that directly.


In practice, a component library is used, which will create a sort of "instance" for the namespace and inject the client into it for the functions to use.


Interesting... I've seen those approaches before, wasn't really sure if they were good practices or not. Thank you!


I looked a bit at metabase. It seems they do the latter. They use their own pseudo-ORM called toucan. When their app launches, they set the default database driver for toucan to use based on an environment configuration. So that's how you specify if you want to use RedShift, MySQL, Postgres, etc. From this point on, toucan can be used like a "singleton component". You pass it a query spec, which is not SQL, but some vector representation of some SQL query (they use honeysql for it). Toucan and/or HoneySQL (not sure which one does the hard work) will then generate the SQL for the default database that was configured when the app started.


But Toucan adds some ORM features too. So you can tell it to insert into a Model, and it'll generate the SQL and do the insert for you.

Michael Stokley03:01:25

thanks for asking this question, @U01JPGC1PQQ. i also come from java/python and what little i've seen in clojure crud apps also suggests that it's not idiomatic to attempt any kind of separation between persistence layer interfacing and business logic

Michael Stokley03:01:55

it was so striking that it really did make me curious whether there was some broad OO based rejection of sequestering the persistence layer in the clojure community


You have to ask yourself, why were you creating a DAL (data access layer) in Java?

Michael Stokley03:01:19

ostensibly it was to hedge against future persistence-layer swap outs.

Michael Stokley03:01:51

and to separate concerns - business logic from business-agnostic third party interfacing


You can swap DBs fairly easily if you're only using CRUD functions with next.jdbc (or any of the JDBC-based wrappers) but there definitely are differences between DBs that make anything more complex less portable.
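For example, the next.jdbc.sql "friendly" helpers keep basic CRUD portable across JDBC-backed databases (a sketch; `ds` is assumed to be a configured datasource):

```clojure
(require '[next.jdbc.sql :as sql])

;; the same four calls work regardless of which JDBC DB backs ds
(sql/insert!   ds :users {:name "foo"})
(sql/get-by-id ds :users 1)
(sql/update!   ds :users {:name "bar"} {:id 1})
(sql/delete!   ds :users {:id 1})
```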


In that sense, I do think the same is done in Clojure. In that the CRUD operations are often isolated in their own functions. In the case of Metabase, they're using their own ORM. So when you do (db/insert! User :name foo) toucan will handle whichever concrete DB you set it up to use, so no changes to your code necessary


In reality, projects almost never switch persistence layers so ORM and much of that sort of stuff is a giant waste.

👍 8

So metabase can "look" like it is not isolating the CRUD operations in functions, but it is, inside "db/insert!" itself.

Michael Stokley03:01:52

@seancorfield - i've wondered that myself. i've also wondered - if you did switch persistence layers, wouldn't that be so traumatic as to require major surgery anyway? idk. i'm not that experienced overall to have a sense of it.


I think the other aspect for a DAL in Java, is that you want to normalize the data you get back and send to the DB, so that you get one of your model Classes, and send one of your model Classes. In Clojure, that's just not a problem, because you send data and get data back, and work with it directly. (well toucan actually tries to mimic a more Java OO style here, by defining models and abstracting the DB into it).

Michael Stokley04:01:55

that makes a lot of sense!


Most likely you wouldn't just switch DB, you'd be doing a re-architecture of your data model and storage which maybe requires a different DB. Like going from relational to NoSql or something like that. And ya, that'd be traumatic anyway, and require a lot to change; a DAL would not protect you from any of that work.


And if you change DB because like your contract expired, say you had Oracle DB and want to go free with say MariaDB. Even without a DAL, that's a much more minor break, since they both still use SQL, and they both use a JDBC driver to interact with them.

Michael Stokley04:01:10

at my last workplace, we specifically guarded against changing db types, too. we fully expected to switch from, eg, relational to nosql. so we cut up all our database calls into separate network calls - no joins! we did all the joining in the application layer, in the business logic layer


@michael740 There's a small subset of SQL that's completely portable and a larger subset that is mostly portable between the "big" DBs. Take a look at the tests in next.jdbc for plenty of examples of how different DBs can be 🙂

Michael Stokley04:01:51

it was a tremendous amount of boilerplate and network calls, i could hardly believe it. but it was taken quite seriously by the staff engineers.


That sounds like a typical Java monstrosity, suffering from over-engineering and speculative generality 😛


As an aside, I was able to inspect the Metabase codebase in 10 minutes, and get a pretty good grasp of how it works. Keep in mind I have no experience with Toucan or Metabase. And I've actually never done anything with Ring or Compojure (even though I know a little about them from reading up on them).

Michael Stokley04:01:04

@didibus - rejecting "speculative generality" - is that clojurian?


Good question, I would say so, though I don't know if I've heard it referred to as speculative generality in Clojure-world that much. I think it's less rejecting it, and more encouraging things that are its opposite.


Clojure tends to encourage small, simple, direct solutions to problems, but done in a way that they can accrete features over time without causing a bunch of things to break in the process, and without making a mess of the code base.


We did migrate a segment of our data from MySQL to MongoDB and back again. Even with the shift from RDBMS to document-based storage, we were able to keep a lot of the very basic CRUD functions the same (in fact, within our one API, we supported both DBs dynamically and used the table name/collection name in the call as the discriminator).

🙌 4

Everything beyond that very basic CRUD stuff had to be completely rewritten each time. But the simplicity of Clojure code made it relatively tractable. It would have been a nightmare in Java...


Just the fact that on average, more of your business logic will be pure, automatically means less to refactor.

Michael Stokley04:01:58

yeah... i think that makes a lot of sense. you don't need as much structure because everything is lighter-weight anyhow

Michael Stokley04:01:03

it's like going ultralight backpacking - you switch from heavy hiking boots with ankle support to trail runners because a) you're saving weight and b) you've already saved so much weight with the other gear that the support just doesn't mean as much at that point


@U01JPGC1PQQ Going back to your original Q: the driving idea is to organize your code as much as possible where your business logic is as pure as can be, so you should strive for code that does all of the DB reading, then runs all the pure business logic, then performs all the DB updates.


In practice, your code is likely to be less structured but try to aim for that. It makes for more testable code and more maintainable code.
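The shape being described, sketched with hypothetical helpers (declared only so the sketch loads):

```clojure
(declare fetch-order apply-pricing save-order!)  ; hypothetical helpers

;; impure read -> pure core -> impure write
(defn process-order! [db order-id]
  (let [order  (fetch-order db order-id)   ; all DB reads up front
        priced (apply-pricing order)]      ; pure business logic, easy to test
    (save-order! db priced)))              ; all DB writes at the end
```

The pure middle step can be unit-tested with plain maps, no DB required.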


You can take a Clojure app pretty far with just:

(def db ...) ; some jdbc data source config

(defn some-api [...]
  (-> (get-the-thing db)
      (put-the-thing-back))) ; or return the transformed thing


But remember that "it's just data" so what you store in the DB and what you read back from the DB is typically just hash maps (and sequences of them). That means that reading/writing is a near 1:1 mapping.


Sounds good... lots of good info here 🙂


Other than Metabase, any other prominent Clojure open-source project to learn from?


Any recommendations?


(I'm interested in apps - e.g. services, apis, etc, not libraries)


I'd check out as another good large example


@U01JPGC1PQQ Overall there aren't many Clojure apps available as open source (unfortunately) so there's very little to learn from.




This one looks interesting as well: the project too, just found it, pretty cool


Though this is a datomic question, I thought it belongs here as I think I'm missing something trivial

  (def db-uri "datomic:)

  (d/create-database db-uri)

  (def conn (d/connect db-uri))

  (def db (d/db conn))

  (def movie-schema [{:db/ident       :movie/title
                      :db/valueType   :db.type/string
                      :db/cardinality :db.cardinality/one
                      :db/doc         "The title of the movie"}

                     {:db/ident       :movie/genre
                      :db/valueType   :db.type/string
                      :db/cardinality :db.cardinality/one
                      :db/doc         "The genre of the movie"}

                     {:db/ident       :movie/release-year
                      :db/valueType   :db.type/long
                      :db/cardinality :db.cardinality/one
                      :db/doc         "The year the movie was released in theaters"}])

  (def first-movies [{:movie/title        "Explorers"
                      :movie/genre        "adventure/comedy/family"
                      :movie/release-year 1985}
                     {:movie/title        "Demolition Man"
                      :movie/genre        "action/sci-fi/thriller"
                      :movie/release-year 1993}
                     {:movie/title        "Johnny Mnemonic"
                      :movie/genre        "cyber-punk/action"
                      :movie/release-year 1995}
                     {:movie/title        "Toy Story"
                      :movie/genre        "animation/adventure"
                      :movie/release-year 1995}])

  (d/transact conn movie-schema)

  (d/transact conn first-movies)

  (def all-movies-q '[:find ?e
                      :where [?m :movie/title ?e]])

  (d/q all-movies-q db)
I get the following error when I run (d/q all-movies-q db)
Execution error (Exceptions$IllegalArgumentExceptionInfo) at datomic.error/arg (error.clj:79).
:db.error/not-an-entity Unable to resolve entity: :movie/title


I've tried both the dev and mem protocols separately and both return the same error. I had a dev transactor running when I was using the dev protocol.


there is a #datomic channel you can try asking in, but my guess would be that transact executes asynchronously, so your first and second transactions overlap. I don't really know, though -- you'd need to check the documentation to confirm that and see how you can wait on the result of a transaction
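A hedged sketch of that suggestion: deref the futures returned by d/transact so the transactions complete, and take a fresh db value afterwards (the snippet above captured db before either transaction ran, and a Datomic db value is an immutable snapshot):

```clojure
;; derefing blocks until each transaction is applied
@(d/transact conn movie-schema)
@(d/transact conn first-movies)

;; grab a *new* snapshot after the transactions, then query it
(d/q all-movies-q (d/db conn))
```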


I can deref the transaction. I'll try this and also ask in the datomic channel thanks!


I'm trying to install Clojure on Windows (have been using Clojure on MacOS for years) following Invoke-Expression (New-Object System.Net.WebClient).DownloadString('') The installation finished OK, I restarted Powershell and tried clj:

clj : The 'clj' command was found in the module 'ClojureTools', but the module could not be loaded. For more
information, run 'Import-Module ClojureTools'.
At line:1 char:1
+ clj
+ ~~~
    + CategoryInfo          : ObjectNotFound: (clj:String) [], CommandNotFoundException
    + FullyQualifiedErrorId : CouldNotAutoloadMatchingModule
I'm using Parallels Desktop VM running on MacOS


Ok, running the suggested command helped

Import-Module ClojureTools
Import-Module : File C:\Users\jumar\Documents\WindowsPowerShell\Modules\ClojureTools\ClojureTools.psm1 cannot be loaded because running scripts is disabled on this system. For more information, see about_Execution_Policies at https:/
At line:1 char:1
+ Import-Module ClojureTools
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : SecurityError: (:) [Import-Module], PSSecurityException
    + FullyQualifiedErrorId : UnauthorizedAccess,Microsoft.PowerShell.Commands.ImportModuleCommand
# run powershell as admin and then:
Set-ExecutionPolicy Unrestricted

Piotr Brzeziński08:01:57

Is it usual to get a bit overwhelmed by the peg game development while going through braveclojure? 🙂 I feel like it was a leap of faith compared to previous chapters.


Are there other resources than "" about ML with clojure? I feel like many projects are abandoned and this one is full of blind links. Should one just use the usual suspects like tensorflow and their ports?


It’s true that there have been a few abandoned projects. ML and data science never got the momentum to take off in Clojure. Some people are hard at work trying to make it happen. Dragan is one of them. Currently, the direction people are moving in seems to be developing better interop with languages like Python, R, Julia and other languages typically used for data science.


Carin Meier has some tutorials for Clojure ML that use interop.


Hm, I was hoping my google-fu was weak and that this wasn't the answer


I'll check out Carin Meier, that will get me a bit further I think. Thank you Simon


NP. Go visit the #data-science channel too, especially the one on Zulip where the Clojure data science community is organised:


Never heard of Zulip...


You have now 😉


Looks like a slack-reskin 😄


no, it's different. Yeah, another platform. Sometimes I wish for one central thing for everything.


That's my currently open tab

👍 4
Ramon Rios15:01:27

Folks, i've been working on an exercise for a while and can't think of anything smart enough to solve the problem: I'm trying to reduce a data structure to show the number of sectors per prefix, for example:

{"1" {"Technology" 2
        "Clothing" 1}
   "44" {"Banking" 1}}
I was able to reduce enough to have this :
[{:sector "Technology", :prefix "1"} {:sector "Clothing", :prefix "1"} {:sector "Technology", :prefix "1"} {:sector "Banking", :prefix "44"}]
And then my mind blew up 😞 . How would you guys solve this?


You can use group-by :prefix to what you produced already.

user=> (def h [{:sector "Technology", :prefix "1"} {:sector "Clothing", :prefix "1"} {:sector "Technology", :prefix "1"} {:sector "Banking", :prefix "44"}])
user=> (group-by :prefix h)
{"1" [{:sector "Technology", :prefix "1"} {:sector "Clothing", :prefix "1"} {:sector "Technology", :prefix "1"}], "44" [{:sector "Banking", :prefix "44"}]}


you want to end up with a datastructure like {"1" 2 "44" 1}?

Ramon Rios15:01:53

like this

{"1" {"Technology" 2
        "Clothing" 1}
   "44" {"Banking" 1}}


oh that's desired, and the flat list is the starting point?


(->> [{:sector "Technology", :prefix "1"} {:sector "Clothing", :prefix "1"} {:sector "Technology", :prefix "1"} {:sector "Banking", :prefix "44"}]
     (group-by :prefix)
     (map (fn [[k vs]] [k (reduce (fn [m x] (update m (:sector x) (fnil inc 0)))
                                  {} vs)]))
     (into {}))

Antonio Bibiano16:01:36

you can also try to use something like update-in

Antonio Bibiano16:01:09

(defn updater
  [result, m]
  (update-in result 
             [(:prefix m) (:sector m)] 
             #(if % (inc %) 1)))

(reduce updater {} my-input)
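Another compact option, sketched on the same data: group by prefix, then let frequencies do the per-sector counting:

```clojure
(->> [{:sector "Technology", :prefix "1"} {:sector "Clothing", :prefix "1"}
      {:sector "Technology", :prefix "1"} {:sector "Banking", :prefix "44"}]
     (group-by :prefix)                                        ; prefix -> maps
     (map (fn [[prefix ms]] [prefix (frequencies (map :sector ms))]))
     (into {}))
;; => {"1" {"Technology" 2, "Clothing" 1}, "44" {"Banking" 1}}
```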


I have studied a chapter about concurrency.


but I wonder, if I want to use slurp, do I then also need to use a future?


(slurp "project.clj") should work just fine. no need for future. you can of course use other threads and async mechanisms as you like but certainly no need to


ok, weird then that promises and futures are explained but you do not need them for the challenges


i don't know what exercises you're going through but i don't have high confidence in them from what i've seen


i'm doing the exercises from the brave book @dpsutton


oh. never mind then. i thought you were going through something else.


i like that book


i'm now at the first chapter about concurrency


a few weeks ago I tried the "clojure from the ground up" but that did not explain things well, and it had weird challenges in my opinion


The usage of slurp in that chapter is for slurping from URLs. So as an exercise it's expected to be done in a future.


oke, I thought that already


but other than that there's indeed no good reason to use a future in those exercises.

Antonio Bibiano16:01:43

about those I was a bit puzzled about getting the first result

Antonio Bibiano16:01:01

looks like it's pretty hard with the current way google responds


Do not spoil it, I want to do it tomorrow

isaac omy16:01:07

hello from indonesia 😄

👋 3

You put slurp in a future if you want to slurp things concurrently or do other things concurrently while you slurp. If you don't mind doing the slurp sequentially with other things you can use it directly without a future
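A minimal sketch of the concurrent case (the URLs are placeholders, not from the exercise):

```clojure
;; both slurps run concurrently on separate threads;
;; deref (@) blocks until each one finishes
(let [a (future (slurp "https://example.com/a"))   ; placeholder URL
      b (future (slurp "https://example.com/b"))]  ; placeholder URL
  [(count @a) (count @b)])
```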


hmm, it seems that (slurp "") does not work, I do not see any urls or whatever returned back 😞

Antonio Bibiano17:01:45

this gives a response


i get quite a bit back for that:

Antonio Bibiano17:01:33

but i don't think the response contains the results


at a glance, it seems like that massive payload of javascript would make the result show up, if you ran it


@U0EGWJE3E it's not accidental that it's usable in a browser but a big mess for machine consumption


Google's response was much easier to digest years ago. It's unfortunately no longer the case.


Just slurp no longer works with google, since they expect a user-agent header; otherwise they throw a 403 back at you.

Antonio Bibiano17:01:15

I think you might have better luck searching something like clojuredocs or github


Here's a little function you can use instead of slurp to get a result from google instead of an error code:

(defn request [url]
  (let [con (.openConnection (java.net.URL. url))]
    (.setRequestProperty con "user-agent" "Brave Clojure")
    (slurp (.getInputStream con))))


also your original string was broken: = is part of the url syntax but got escaped to %3d


hmm, or maybe use another function then

Antonio Bibiano17:01:26

surely there must be a common library to do https requests?


@U01JARXUA75 with your code I get the same mess back


yeah, the mess google returns is a giant blob of javascript which contains the data that is then rendered on their page.


I didn't say it would make google return something useful, just that it won't return a 403 😄 (which (slurp "") does for me)


then it looks like I cannot do the challenges from the chapter i'm now in. Pity


just use something else instead of google

Antonio Bibiano17:01:52

my plan was to change where I search


Can i then not better use clj-http with this code: (client/get "" {:query-params {"q" "foo, bar"}}) ?


oke, then I have a plan for tomorrow

Antonio Bibiano17:01:06

right now I think I will search

Antonio Bibiano17:01:15

and was trying to find a place to search the java docs


another search engine that doesn't just return a blob of JS is Ecosia ""


that was challenge 1.


How do you find these sorts of search engines?


That just happened to be one that I have heard of in the past. I saw Ecosia in an advert on YouTube once


I never heard of it


maybe I try duckduckgo also


I checked that before I tried Ecosia. Unfortunately duckduckgo also returns mostly javascript


I get a 500 here

(slurp "")


yeah it also needs a user agent


hmm, I think I need 2 for the last assignment of the brave book


Create a new function that takes a search term and search engines as arguments, and returns a vector of the URLs from the first page of search results from each search engine.


and another one: how do I get the urls from the output?


I simply did not do that when I did the exercises... 😅 If I remember correctly I just returned the length of the output for each response.


ok, we'll see tomorrow. Found a few tutorials about web scraping that I want to try


I don't really know what the book author expects there. It seems like a very daunting task for someone who is just in the process of learning the language. Almost reminds me of the "Draw the rest of the f***ing owl" picture 😄

💯 3

now really time for family


I think it's one for someone who is more familiar with clojure. The last ones are always very advanced


I guess one fairly straightforward way would be to use a regex to extract all parts of the string that start with href=" and end with " .
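A quick sketch of that regex idea (fine for an exercise, though a real HTML parser is more robust):

```clojure
;; pull out every href="..." value from an HTML string
(defn hrefs [html]
  (map second (re-seq #"href=\"([^\"]*)\"" html)))

(hrefs "<a href=\"/one\">1</a> <a href=\"/two\">2</a>")
;; => ("/one" "/two")
```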


first find a second one that does not return a blob


(slurp "") ClojureDocs returns mostly HTML.


not a general purpose search engine, but good enough for the exercise


looks like that can help me with the url parts


that looks very nice. I have not heard of Enlive before. Will be a nice addition to my toolbelt 😄


I've only written scraping code once before, and manually implemented searching for certain elements in the parsed DOM


Wow, I hope I can ever write like this


It’s been a long day and I can’t figure this out (trying to create Hugo frontmatter from a list of things):

(str "foo:\n"
     (->> [1 2 3]
          (map #(str "  - " %)))) ;=> "foo:\nclojure.lang.LazySeq@103175be"


@pez the str for a lazy seq is basically broken


you can use pr-str


I get the same result if I just replace for pr-str, though.


user=> (pr-str "foo:\n" (->> [1 2 3] (map #(str " - " %))))
"\"foo:\\n\" (\" - 1\" \" - 2\" \" - 3\")"


of course pr-str has the problem that it prints strings with " around them - you probably want to use format for more nuance


or string/join
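For frontmatter specifically, string/join keeps the newlines between items (a sketch on the same data):

```clojure
(require '[clojure.string :as string])

(str "foo:\n"
     (string/join "\n" (map #(str "  - " %) [1 2 3])))
;; => "foo:\n  - 1\n  - 2\n  - 3"
```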


Ah, I replaced the wrong str 😃


Just use mapv


@pez this is probably more like what you want

user=> (str "foo:\n" (->> [1 2 3] (map #(str " - " %)) (apply str)))
"foo:\n - 1 - 2 - 3"

❤️ 3

Yes, that works great!


mapv gave me this:

"foo:[\"  - 1\" \"  - 2\" \"  - 3\"]"


Ya, it depends what you want exactly.


Also, I don't know if it's fair to say str on lazyseq is broken. I think it was intentional for it to not realize the elements


A seq will also print its elements, it's only lazy-seq that doesn't. So I think it was on purpose.


@didibus that's false, it realizes all the elements in order to print the hash value it displays


it can't calculate that hash if there are unrealized elements


Hum, you're right, so it is broken, weird. I thought that was the memory address, not the hash lol


at this point it might be necessary to keep that printing behavior for legacy reasons, but the behavior is just broken

Alex Miller (Clojure team)17:01:49

this has been repeatedly mentioned various places but I am not aware of a jira / ask clojure question for it

Alex Miller (Clojure team)17:01:41

if not, can we make one?


I can make one


wow - I had assumed this would be done to death by now, I guess everyone's just worked around it


Seems like a different issue, but maybe it's the root cause of what's happening for str on lazy-seq?


Hum, on second look, I don't think so. I'll create an ask entry for it


Seems eduction suffers the same fate as lazy-seq

Alex Miller (Clojure team)18:01:14

that one may have been intentional, don't recall for sure


At least eduction doesn't seem to execute the loop over the collection the way that str on lazy-seq does

Alex Miller (Clojure team)18:01:26

eduction is a pending computation - I don't think you want it to eagerly evaluate on toString

Alex Miller (Clojure team)18:01:45

similar to lazy seq in that regard

Alex Miller (Clojure team)18:01:15

but seems like not printing and also forcing for hash can't make sense for lazyseq


Do you consider the "issue" to be that str on lazy-seq realize the elements, or that it doesn't stringify like seq does?

Alex Miller (Clojure team)18:01:01

well I'm undecided w/o asking rich but it does not seem like the current behavior can make sense across both of those dimensions


Ok, I'll try and word it with both context then.


Even in a legacy reason, I can't really imagine how someone would rely on this behavior?


Also seems ClojureScript str doesn't do this, and returns a string of the elements between parens


I'm making a bunch of requests to a web service, and I think that I'm dropping a bunch of data because I don't know how to write async programs and I bet the web service doesn't want me to send 100 requests all at once. Is it time to learn core.async? Is that overpowered and I should learn about queues? Am I barking up the wrong tree?


clojure or cljs?


(buffer n) - Returns a fixed buffer of size n. When full, puts will block/park.


(chan 10) - Pass a number to create a channel with a fixed buffer


on the jvm, I wouldn't use core.async for an io problem unless most of the work needed was complex coordination of results


there's too many gotchas


on jvm, everything you put in the go block (http api calls, db driver calls, other io) must be via a non-blocking driver


in js that's by default


sure, but on the jvm we also have http clients with built in thread pooling and throttling, we don't even have to open the core.async can of worms

👍 3

I just don't know that adding core.async to "I don't know how to write async programs" is going to fix anything

💯 3
😆 6

I might start with just introducing an Executor with a fixed size threadpool and run all your http requests on that
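A hedged sketch of that suggestion using java.util.concurrent directly (`urls` stands in for whatever collection of requests you have; Clojure fns implement Callable, so invokeAll accepts them):

```clojure
(import '(java.util.concurrent Executors))

;; at most 4 requests in flight at a time; the pool queues the rest
(let [pool    (Executors/newFixedThreadPool 4)
      tasks   (mapv (fn [url] (fn [] (slurp url))) urls)
      futures (.invokeAll pool tasks)]   ; blocks until all tasks finish
  (.shutdown pool)
  (mapv deref futures))                  ; collect the results
```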


it's on the jvm


or, on jvm, wrap blocking calls in (a/thread ...)


an Executor in this case is basically a kind of work or job queue


hmmm i don't know most of the words you're using


an executor is a Java class?


Why even go async?


maybe check out the examples in the readme (I haven't used claypoole, but I gather it provides a convenience layer over using executors directly)


for context, here's what I've been doing: forcing the promise to finish in a let statement, and then doing other stuff:

(defn- read [request]
  (let [response @(client/request request)]
    (-> response :body (json/parse-string true))))


i just figure, that's gotta be wrong -- my reason for using the @ operator there is, gosh, otherwise it just says "promise" instead of being the data I want


The client returns you a promise?


ah, you must be using the http client from http-kit?


maybe "async" isn't even the word! i just figure i need to do something so that if the web service is mad at me, I can exponentially back off (or whatever)


http-kit is already going to be using some kind of queue internally for this stuff, so you may not need to care about it


perhaps i should go ask in that channel..... i thought there was a chance this case was already handled


Ya, I'm not sure how http-kit handle errors on the promise


and that i'm just holding http-kit wrong


I mean, I don't use it, but internally http-kit has many queues and executors


I don't see where core.async is involved; do you launch multiple of those reads in parallel and suffocate the WS?


Ok, you are supposed to check the result for an :error key it seems


(defn- read [request]
  (let [{:keys [body error]} @(client/request request)]
    (if error
      ;; handle error here
      (-> body (json/parse-string true)))))


@UCCHXTXV4 Maybe? I decided to act like the result of read was "just a data structure" and trusted that eventually things would break due to that totally not being true. I map that read function over a whole bunch of request maps


map and not pmap, yeah?


yep, never touched pmap


You definitely don't need core.async. Http-kit is already async. You're just not using it correctly.


ok so you do them in sequence and the server still whines; check the http status codes for the dropped requests (may be 429)


a sleep in between them may placate that WS; or waiting a day - some WS give you "x req / day"


is that a short url expander by any chance?


From the code you provided, you are actually doing one request at a time


I suspect the issue is not that you are being throttled, just that you don't handle errors from the request


So when your request fails for some reason, you just drop it and never retry or anything


I'm reviewing the http-kit docs. Is this probably the use case? "Combined, concurrent requests, handle results synchronously"


That little @ makes your requests sequential


Do you have to be concurrent? I would start sequential in your case, figure out why your requests are being dropped, fix that, then if you want to speed things up look into making it concurrent
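For reference, the only difference between the two with http-kit is where the deref happens (sketch; function names are made up):

```clojure
(require '[org.httpkit.client :as client])

;; Sequential: each @ blocks until that response arrives, so the next
;; request isn't even sent until the previous one finishes.
(defn fetch-all-sequential [requests]
  (mapv (fn [req] @(client/request req)) requests))

;; Concurrent: fire every request first (http-kit returns a promise
;; immediately), then deref the already-in-flight responses.
(defn fetch-all-concurrent [requests]
  (let [promises (mapv client/request requests)]
    (mapv deref promises)))
```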




Something like:

(defn- read [request]
  (let [{:keys [body error]} @(client/request request)]
    (if error
      [:error error]
      (-> body (json/parse-string true)))))

(mapv read coll-of-requests)


And like log a metric or an error for each instance of :error
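Concretely, something like this (sketch; `report-errors` is a made-up name) to split out and log the `[:error e]` tuples that `read` returns:

```clojure
;; Hypothetical sketch: separate the [:error e] tuples produced by
;; `read` above from the successful results, logging each failure
;; and returning only the successes.
(defn report-errors [results]
  (let [{errors true oks false}
        (group-by #(and (vector? %) (= :error (first %))) results)]
    (doseq [[_ e] errors]
      (println "request failed:" e))
    oks))
```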


yeah that makes sense, and then figure out if it's 429ing me or if something else is occurring


speed is not of the essence at all


thanks y'all!


and I find that API usage can't be as general as all that - eg. one service I've used refused to ever return an error code for failures, you'd need to look inside the json encoded body to check for errors

👍 3

Yes, also, I don't know what your application is, but if it is a server, it could be handling concurrent requests itself. So even though your requests to the API are sequential, your application could concurrently be making many sequential requests, and you could still be throttled.


that's a great point - if your code is a server with built in parallelism, and the main thing you do is talk to someone else's API, a common pattern is that you end up with some stateful object providing access to that API (in the extreme case it can't even just coordinate on one vm - it has to collaborate with other vms providing the same service)


in order to respect limits that aren't imposed on the IP level, but on the credential level


i dunno about all that, I've just been working my way through the Gitlab API docs, because I'm tired of scrolling through the browser interface looking for what was the last passing/first failing job


oh then you don't need to mess with all that


yeah this is a pretty bone-head program, which is why i wrote it on the "pretend it's local until it's clearly broken" principle


and that day has come

fappy22:01:15 says

Returns a channel which will receive the result of the body when completed, then close.
Is there a good way to tell if the launched thread died a horrible death? What would the <! on the channel receive? I tried the following:
(a/go (println (a/<! (a/thread (/ 1 0)))))
it prints out: nil and returns ManyToManyChannel


you can see what a/thread does here:

(defn thread-call
  "Executes f in another thread, returning immediately to the calling
  thread. Returns a channel which will receive the result of calling
  f when completed, then close."
  [f]
  (let [c (chan 1)]
    (let [binds (clojure.lang.Var/getThreadBindingFrame)]
      (.execute thread-macro-executor
                (fn []
                  (clojure.lang.Var/resetThreadBindingFrame binds)
                  (try
                    (let [ret (f)]
                      (when-not (nil? ret)
                        (>!! c ret)))
                    (finally
                      (close! c))))))
    c))
it runs the body f in a try catch and will close the channel


What do you mean by: "Died a horrible death" ?


I guess any of these are interesting to me … although maybe one of them just doesn’t happen on the JVM? 1. thread crashes (seg faults?) … other threads keep running but this one evaporates 2. thread has an uncaught exception and comes to an “end” 3. thread is hanging — it made a call to something and the call never returns


I guess based on what @dpsutton said, I can return some :happy-ending from the thread and if I get nil instead of that, then I’ll know something bad happened


since a take on a closed channel is nil
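One hedged way to get the exception itself back instead of that nil (sketch): catch inside the thread body and return the throwable, so it becomes the channel's value:

```clojure
(require '[clojure.core.async :as a])

;; Sketch: catch inside the body so the exception becomes the
;; channel's value rather than a silent nil from the closed channel.
(def result
  (a/<!! (a/thread
           (try
             (/ 1 0)
             (catch Throwable t t)))))

(instance? ArithmeticException result)
;; => true
```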


1. Seg fault would cause your application to crash and its process to terminate. 2. The channel will close and return nil. The UncaughtExceptionHandler will be called, if set. Otherwise exception just vanishes. 3. If thread is sleeping forever, or doing an infinite loop that will never end, then the <!! will wait forever as well.

thanks2 3

the uncaught exception handler will never be called. there's a try catch immediately around the invocation of your code


It's called in my test


I think you assume the absence of a "catch" means that it catches everything and does nothing, but I think the absence of a "catch" means that it rethrows


Well, more that the exception is not caught, so it is still being thrown, even though a finally block will execute


I don't think it will count as a new "cause", so it's not really rethrown, just continues to bubble up as normal


(oh i missed the lack of catch there. good eye)

😉 3
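For completeness, a sketch of installing a default UncaughtExceptionHandler (a raw Thread and a promise are used here just to observe the handler firing):

```clojure
;; Sketch: install a default UncaughtExceptionHandler so exceptions
;; that escape worker threads get reported instead of vanishing.
(def report (promise))

(Thread/setDefaultUncaughtExceptionHandler
 (reify Thread$UncaughtExceptionHandler
   (uncaughtException [_ thread ex]
     (deliver report (.getMessage ex)))))

;; Throw from a fresh thread; the handler receives the exception.
(.start (Thread. #(throw (RuntimeException. "boom"))))

(deref report 2000 :timeout)
;; => "boom"
```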