2018-07-03
Channels
- # aleph (3)
- # beginners (139)
- # boot (3)
- # cider (12)
- # cljs-dev (18)
- # clojure (100)
- # clojure-dev (21)
- # clojure-dusseldorf (5)
- # clojure-germany (1)
- # clojure-italy (35)
- # clojure-nl (26)
- # clojure-spec (4)
- # clojure-uk (60)
- # clojurescript (11)
- # clojutre (4)
- # cursive (21)
- # data-science (21)
- # datomic (47)
- # editors (3)
- # emacs (2)
- # events (4)
- # figwheel (2)
- # fulcro (28)
- # jobs (27)
- # jobs-discuss (21)
- # lein-figwheel (3)
- # midje (2)
- # off-topic (20)
- # om-next (4)
- # onyx (10)
- # overtone (1)
- # pedestal (2)
- # portkey (14)
- # re-frame (71)
- # reagent (44)
- # reitit (11)
- # remote-jobs (1)
- # ring-swagger (4)
- # shadow-cljs (64)
- # spacemacs (11)
- # testing (2)
- # tools-deps (8)
- # vim (8)
@bronsa Sadly, after much printing I have discovered that eventually some of the contents of two were being checked with (list? x). The 'broken' two contained a LazySeq where the 'working' two had a PersistentList.
I just wish I hadn't spent half a day finding out about it
it's semi-documented on clojuredocs http://clojuredocs.org/clojure.core/list_q
;; So seq? might be what you are looking
;; for when you want to test listness.
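For anyone skimming, the difference the docs above are pointing at, in a nutshell:
(list? '(1 2 3))          ;=> true   (an actual PersistentList)
(list? (map inc [1 2 3])) ;=> false  (a LazySeq, like the 'broken' case above)
(seq? '(1 2 3))           ;=> true
(seq? (map inc [1 2 3]))  ;=> true   (seq? is the broader "listness" test)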
I actually looked it up there and skimmed the examples a bit. But it appears I did not completely rtfm, as this is the last line
I think I was on tilt and could not accept the outrageousness of the bug I was facing.
the warning could / should probably be much clearer and more emphatic
To be fair, my code was pretty big and involved, and it was at the end of a big series of steps after I had mostly worked with literals in the repl. Then I bumped into this when using real data, and there were too many things going on to notice the silly list? hiding somewhere.
But, yeah. If it's such a bad idea to generally use list? maybe it could help newbies if it weren't so easy to end up there.
Does anyone see what I'm doing wrong here? I have a deps.edn that looks like this:
{:deps
 {org.wikidata.wdtk/wdtk-dumpfiles {:mvn/version "0.8.0"}}}
I expect to see a class in that library like this, but get an error:
$ clj
Clojure 1.9.0
user=> org.wikidata.wdtk.wikibaseapi.WikibaseDataFetcher
CompilerException java.lang.ClassNotFoundException: org.wikidata.wdtk.wikibaseapi.WikibaseDataFetcher, compiling:(NO_SOURCE_PATH:0:0)
I'm trying to follow some java code here https://github.com/Wikidata/Wikidata-Toolkit-Examples/blob/master/src/examples/FetchOnlineDataExample.java#L30
and here https://www.mediawiki.org/wiki/Wikidata_Toolkit#Download_and_installation
Is there some way to list the classes in a dependency? https://github.com/Wikidata/Wikidata-Toolkit-Examples/blob/master/pom.xml pulls in a bunch of other libraries
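The "list the classes" question never got answered in the thread, but one rough way to do it is to open the cached jar directly with java.util.jar.JarFile. The ~/.m2 path below is an assumption about where tools.deps cached the artifact (clj -Spath prints the real locations), and jar-classes is just an illustrative name:
(require '[clojure.string :as str])
(import '(java.util.jar JarFile))

(defn jar-classes
  "List the .class entries inside a single jar file."
  [jar-path]
  (->> (enumeration-seq (.entries (JarFile. jar-path)))
       (map #(.getName %))
       (filter #(str/ends-with? % ".class"))))

;; hypothetical local-repo path for the artifact declared above
(jar-classes (str (System/getProperty "user.home")
                  "/.m2/repository/org/wikidata/wdtk/wdtk-dumpfiles/0.8.0/wdtk-dumpfiles-0.8.0.jar"))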
I've been thinking about stuff like that recently. There are a lot of things in core that are landmines, like contains?
I wonder what the best way would be to support a streamlined-core at the library level
To me contains? is very useful. The fact that it only works on indexed data structures is OK. As opposed to list?, where not only does it not do what you think, it also isn't ever useful.
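Concretely, contains? asks about keys/indices rather than values, which is the usual surprise:
(contains? {:a 1 :b 2} :a)  ;=> true   (key lookup)
(contains? [:x :y :z] 1)    ;=> true   (index 1 exists)
(contains? [:x :y :z] :y)   ;=> false  (it does not search values)
;; (contains? '(:x :y) 0)   ; throws IllegalArgumentException rather than scanning a list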
Clojure is a small language, I think learning what each function in clojure.core is for isn't a huge problem, and doesn't require a new wrapper.
hey folks - curious to know how people deal with libraries which use extend-protocol in contradictory ways. Say you have one lib that wants to cast all DB dates into strings and you want to require that lib, but your lib wants to cast all dates into joda dates. Is there a way to scope your protocols by namespace/context?
libs should make these parts optional when they want to provide this, or make the codec first class (e.g. an argument)
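A hypothetical sketch of that "codec as an argument" idea (fetch-rows and run-query are made-up names, not any real library's API): the caller picks the date coercion, so two libraries never fight over a global extend-protocol.
(defn- run-query
  "Stand-in for whatever the library does internally."
  [_db _query]
  [{:id 1 :created_at (java.util.Date. 0)}])

(defn fetch-rows
  "Date handling is a first-class argument instead of a global protocol extension."
  [db query {:keys [coerce-date] :or {coerce-date identity}}]
  (map #(update % :created_at coerce-date) (run-query db query)))

;; one caller wants strings, another wants java.time instants:
(fetch-rows nil nil {:coerce-date str})
(fetch-rows nil nil {:coerce-date #(.toInstant %)})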
I was originally working on a ClojureScript project, but the errors got so incomprehensible that I decided to try developing it in Clojure instead. However, I can't figure out how to get the same REPL experience
What editor/IDE are you using?
why does this evaluate to true (realized? (iterate dec 1))
from my understanding iterate should be lazy and since I'm not using doall or just evaluating it in the repl it shouldn't be realized
ah, I get that when wrapping it in something like take or while it shows as false, but I didn't know about the quirks of realized?
either you have values from a sequence or you don't, don't test whether the sequence itself has been touched and how
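The behaviour being discussed, at the REPL (the delay line is just for contrast with the typical use):
(realized? (iterate dec 1))           ;=> true   (the seed value is already in hand)
(realized? (take 5 (iterate dec 1)))  ;=> false  (take wraps it in a fresh, untouched LazySeq)
(realized? (delay :x))                ;=> false  (delays and promises are the more typical use)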
Hi, I'm trying to process an HTTP stream using core.async. I saw this answer on stack overflow. https://stackoverflow.com/questions/30125909/processing-a-stream-of-messages-from-a-http-server-in-clojure But my function hangs when I call (line-seq ) on the stream. I'm able to write the stream to a file but stuck on processing each message and putting it on a channel.
this function just hangs forever https://gist.github.com/jdkealy/1cf1bbd9458ecdf3d7b47571c35585b8
@noisesmith Yeah, it's relatively small, but having the parts of clojure that require a "wtf" google at hand at all times is something that I personally don't want.
Like, I would rather have to look up and explicitly import realized? than to have it always at hand
Wait, what's wrong with realized?
I only use it on promises and delays
I think knowing when specific functions are appropriate isn't too much to ask, especially with the core lib being as small and relatively internally consistent as it is
I would find code that ignored clojure.core and used some "better core" frustrating. I would take these concerns into account if I was making a new language though
Because it's using a new language that I don't know.
I asked the Q on Stack Overflow in case anyone wants the SO cred https://stackoverflow.com/questions/51159807/line-seq-freezes-on-java-io-bufferedreader-in-clojure
1) I don't see any usage of core.async here 2) are you sure there are any line breaks in the input? what if you temporarily split on bytes instead?
line-seq will block until it finds at least one newline or end of input, depending on the input source you might need something more custom
hey sorry i haven't implemented the core.async part, as i have nothing to put on a channel yet
If I do the following and tail the file, the file grows in length
(with-open [in-stream (:body (client/get url {:as :stream}))
            out-stream (->> "streamoutput.txt"
                            io/as-file
                            io/output-stream)]
  (io/copy in-stream out-stream))
it might be easier to test your code with a StringReader, then if it only misbehaves on your stream and not a StringReader, you know it's something about that stream not looking like the input you are testing with
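A quick way to do that, since io/reader will wrap a StringReader in the BufferedReader that line-seq needs:
(require '[clojure.java.io :as io])

(with-open [rdr (io/reader (java.io.StringReader. "one\ntwo\nthree\n"))]
  (doall (line-seq rdr)))
;;=> ("one" "two" "three")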
So this is my URL https://stream.tradier.com/v1/markets/events?sessionid=0be1dcde-752c-42c5-b725-1ff74f8d914e&symbols=mu
there's something up with that host, it's either flaky or doing client detection - one time it timed out, another it gave me a 401
try your code on any actual site (like a google search or whatever)
I got an invalid xml file, all on a single line (possibly streamed and infinitely large)
yeah, line-seq on that wouldn't be a great idea
but xml-seq might work ?? - I don't know if it streams properly
I once tried to consume xml as a lazy stream; I got nodes lazily, but the underlying library didn't clean up properly unless I consumed all the way to the end, leading to a resource leak in my use case
it's meant to be used with parse, and I'm not certain it's well behaved for streaming
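For reference, xml-seq just tree-walks an already-parsed document, e.g. from clojure.xml/parse; the parsing is what actually reads the stream:
(require '[clojure.xml :as xml])

(let [tree (xml/parse (java.io.ByteArrayInputStream.
                        (.getBytes "<root><item>1</item><item>2</item></root>")))]
  (->> (xml-seq tree)
       (filter #(= :item (:tag %)))
       (mapcat :content)))
;;=> ("1" "2")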
:thumbsup:
> wow this XML sucks
why repeat yourself :D
"wow this is xml"
I have an odd problem with my uberjar... my code creates a URL (io/resource) successfully when I "lein run", but when I "java -jar" the URL is null
the other day there was an argument about whether an argument n controlled the degree of parallelism of a function. but it wasn't pmap, i don't think. does anyone recall that?
the one about pmap was the only one like that, but n isn't an argument to pmap, it's calculated based on CPU count
but the value in pmap's let binding was called n
ahhh. it was the n in its implementation that y'all were discussing. thanks @noisesmith
the answer ended up being that the true parallelism ends up being n+chunk-size
where chunk-size is implemented by the source collection you map over
and n is the number of cpus you have available to the vm, +2
thanks. we were just talking about it and i remembered loosely following but not what the conclusion was
another way of putting it is "it uses futures to stay n+2 steps ahead of the realization of the lazy-seq" - so chunking just means you realize items faster and don't actually speed up the dispatch if you look at it that way
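For reference, the n being discussed comes straight from pmap's own let binding (as of current Clojure versions):
;; pmap's look-ahead window
(+ 2 (.. Runtime getRuntime availableProcessors))
;;=> 10 on an 8-core machine, for example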
empty? does not like transient maps. I went with checking for a count of zero, but did I miss something? Did not find an empty?!
CompilerException java.lang.IllegalArgumentException: Don't know how to create ISeq from: clojure.lang.PersistentHashMap$TransientHashMap
transients are weird
Has been known about for years. Looks like Rich has marked it for fixing in version 1.10
it just seems like you run into things like that easily if you leave a narrow happy path with transients
Or rather Alex Miller has marked it so, which is the next best thing
Yes, the narrow happy path with transients is to convert a persistent collection to a transient, do only 'update' operations ending in ! on it, then convert it back to a persistent collection.
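That happy path, sketched out (assoc-many is just an illustrative name):
(defn assoc-many
  "transient -> batched assoc! calls -> persistent!"
  [m kvs]
  (persistent!
    (reduce (fn [t [k v]] (assoc! t k v))
            (transient m)
            kvs)))

(assoc-many {} [[:a 1] [:b 2]])
;;=> {:a 1, :b 2}

;; whereas calling empty? on the transient itself throws the exception quoted above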
Oh, cool, overnight I decided independently to see if a vector would be faster for my use case, and @noisesmith will be happy to learn it will have 26 items. Life is good.
Actually, it looks as if a simple int-array will be as fast and support mutation. Learning a lot.
Nope, did much better with the transient. Learned a lot. ๐
exactly