This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2018-07-16
Channels
- # beginners (48)
- # cider (21)
- # clara (6)
- # cljdoc (3)
- # cljs-dev (11)
- # cljsrn (5)
- # clojure (30)
- # clojure-canada (1)
- # clojure-dusseldorf (2)
- # clojure-italy (10)
- # clojure-losangeles (2)
- # clojure-nl (4)
- # clojure-russia (8)
- # clojure-spain (18)
- # clojure-uk (39)
- # clojurescript (84)
- # core-async (17)
- # cursive (22)
- # data-science (27)
- # datomic (27)
- # docker (3)
- # editors (5)
- # emacs (2)
- # figwheel-main (18)
- # fulcro (54)
- # hoplon (3)
- # hyperfiddle (2)
- # immutant (4)
- # jobs (1)
- # jobs-discuss (1)
- # lein-figwheel (7)
- # leiningen (3)
- # lumo (1)
- # onyx (5)
- # re-frame (64)
- # reagent (5)
- # reitit (7)
- # ring-swagger (6)
- # shadow-cljs (118)
- # specter (23)
- # tools-deps (38)
I want to write a simple web crawler. How do I send a request with a cookie in Clojure?
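One common approach is the clj-http client library (an assumption; the question doesn't name a library). clj-http takes a :cookies option, a map of cookie name to a map with a :value key. The URL and cookie below are placeholders:

```clojure
(require '[clj-http.client :as client])

;; Send a request with an explicit cookie.
;; "https://example.com" and the "session" cookie are placeholders.
(client/get "https://example.com"
            {:cookies {"session" {:value "abc123"}}})

;; Responses also carry a :cookies key, so a crawler can thread a
;; session from one request into the next:
(let [resp (client/get "https://example.com/login")]
  (client/get "https://example.com/data"
              {:cookies (:cookies resp)}))
```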
@vale This is how I get around the issue:
(def data-source
(when-not *compile-files*
(make-datasource options)))
*compile-files*
is set when AOT compilation is happening, so the when-not prevents the eval. It's a special variable
set in the environment by Clojure. Have a look at this: https://stackoverflow.com/questions/1986961/how-is-the-var-name-naming-convention-used-in-clojure
Is there any way to use https://github.com/weavejester/ns-tracker to trigger a reload of a namespace whenever a configuration file changes? Say I have a “resources” directory with “selectors.edn” in it, is there a way to automatically reload the namespace that reads that config file in whenever it is changed?
Never mind, I found a small library that I was able to use to reload the config namespace on resource-file change: https://github.com/pocket7878/file-tracker
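For reference, this can also be done without a library. A sketch using the JDK's java.nio.file WatchService directly (this is not file-tracker's actual API; the "resources" directory and my-app.config namespace are placeholder names):

```clojure
(import '[java.nio.file FileSystems Paths StandardWatchEventKinds WatchEvent$Kind])

(defn watch-and-reload!
  "Blocks, re-requiring ns-sym whenever a file in dir changes."
  [dir ns-sym]
  (let [watcher (.newWatchService (FileSystems/getDefault))
        path    (Paths/get dir (into-array String []))]
    (.register path watcher
               (into-array WatchEvent$Kind
                           [StandardWatchEventKinds/ENTRY_MODIFY]))
    (loop []
      (let [k (.take watcher)]              ; blocks until a change event
        (doseq [_ (.pollEvents k)]
          (require ns-sym :reload))
        (.reset k)
        (recur)))))

;; run it off the main thread:
;; (future (watch-and-reload! "resources" 'my-app.config))
```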
Hi everyone, I want to know how you are setting up your development environment, particularly if you are using Visual Studio Code. Any input would be greatly appreciated 🙂
I’m trying to pipeline a stream of XML. What I have was working for smaller files, and I was under the impression that I was streaming, but everything locks up when I pass it a very large xml file. Here’s a snippet of my let block
content [(clojure.data.xml/parse (java.io.FileInputStream. filepath))]
from    (a/chan 1 (comp (mapcat :content)
                        (filter #(-> % :tag (= :TagOfInterest)))))
_       (a/onto-chan from content)
to      (a/chan 10)
_       (a/pipeline 1 to xf from true ex-handler)
I assume you are simply consuming from the to channel in a loop?
also, what does xf look like?
when you say consume, do you mean from to? If so, I'm just piping like this (also a snippet from the let block)
res-ch (a/promise-chan)
into-ch (a/into [] to)
_ (a/pipe into-ch res-ch)
when it was working on smaller sets I assumed it was, but now that it locks up on larger files I assume I’m missing something
> “Parses the source, which can be an InputStream or Reader, and returns a lazy tree of Element records. Accepts key pairs with XMLInputFactory options, see http://docs.oracle.com/javase/6/docs/api/javax/xml/stream/XMLInputFactory.html and xml-input-factory-props for more information. Defaults coalescing true.”
https://github.com/clojure/data.xml/blob/master/src/main/clojure/clojure/data/xml.clj#L86
I mean, I dunno, I'm not sure it should be freezing, but IO definitely should not be done on the core.async threadpool; processing XML records that are lazily realized from a file like that is a no-go
so I was under the impression that the reading and parsing was not happening on the core.async threadpool.
parse returns a lazy sequence, which is then spooled onto the from channel with onto-chan, which should get back pressure because from has a finite buffer (1 in this case)
onto-chan doesn't examine the contents of the collection, so the result of xml is not traversed and not realized by onto-chan
onto-chan is also implemented using a go loop, so even if it did, it would be doing it on the core.async thread pool
I am not sure that would cause it to lock up, because the io from the file, while technically blocking, should be guaranteed to make forward progress and eventually complete
calling mapcat on [x] is slightly odd - would you have multiple items in your real use case?
pipeline has a variant, pipeline-blocking that safely performs ops in a threadpool outside the core.async go block pool, but you would still need to consume the channel from pipeline to drive consumption
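A minimal sketch of that shape (using (map inc) as a stand-in for real blocking work, and small placeholder channels and inputs):

```clojure
(require '[clojure.core.async :as a])

(let [from (a/chan 1)
      to   (a/chan 10)]
  ;; blocking work runs on a dedicated threadpool, not the go-block pool
  (a/pipeline-blocking 4 to (map inc) from)
  (a/onto-chan from (range 5))           ; closes from when done
  ;; consuming `to` is what drives the pipeline; results keep input order
  (a/<!! (a/into [] to)))
;; => [1 2 3 4 5]
```

Without that final take from to, the pipeline would stall once its buffers fill, which is consistent with the lock-up described above.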
ah, yeah, sorry. that's left over from when I was exploring via (def children (mapcat :content))
since it's nicer to use a named mapcat transducer when you're calling children repeatedly. totally unnecessary in this context though, you're right.
(stole the idea from https://juxt.pro/blog/posts/xpath-in-transducers.html)
in this case there’s currently no io going on in the transducer, xf, that’s being passed to a/pipeline, just a simple projection
OK, right, I'm just saying don't expect xf to run against the data if you aren't consuming from the pipeline
which you may or may not be doing given what you shared, just thought I'd mention
ah, right, you mean from the end of the pipeline? yeah, sorry, super snipped; we're pulling from to
:thumbsup: