This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2016-09-06
Channels
- # admin-announcements (10)
- # alda (78)
- # arachne (33)
- # bangalore-clj (2)
- # beginners (11)
- # boot (70)
- # chestnut (8)
- # cljsjs (5)
- # cljsrn (4)
- # clojure (212)
- # clojure-art (1)
- # clojure-berlin (1)
- # clojure-brasil (27)
- # clojure-canada (6)
- # clojure-colombia (12)
- # clojure-dev (6)
- # clojure-greece (29)
- # clojure-hk (2)
- # clojure-italy (7)
- # clojure-russia (51)
- # clojure-spec (12)
- # clojure-uk (18)
- # clojurescript (115)
- # clojurex (8)
- # component (1)
- # crypto (41)
- # css (5)
- # cursive (31)
- # datomic (17)
- # defnpodcast (7)
- # emacs (9)
- # flambo (1)
- # funcool (4)
- # juxt (29)
- # off-topic (1)
- # om (122)
- # onyx (12)
- # pedestal (1)
- # planck (10)
- # portland-or (1)
- # re-frame (30)
- # reagent (4)
- # rum (3)
- # slack-help (2)
- # specter (20)
- # sql (3)
Hey there, I have a function which works when run in "lein repl" but only runs "partially" when run with "lein run" or an uberjar. By "partially" I mean (-main) calls (a), which calls (b), which calls (c) and stops, while (d) and (e) should follow in the call stack. If I call (a) from "lein repl" it calls (b) all the way through to (e) as expected. Any ideas?
ok, I'll just post my code out, wait a minute
anything lazy to you?
If I call (rename-pngs-in-folder) directly from the repl, it works. However, if I call (-main) it stops at (rename-files)
possibly; just wrapping the calls to rename-pngs-in-folder
with doall
would tell you for sure
ok, let me try
I saw filter
in one of the fns, which is definitely lazily evaluated: https://clojuredocs.org/clojure.core/filter
lein run and uberjar both work. Thanks for the hint, saved my day.
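For reference, a minimal sketch of the pitfall resolved above (function names are hypothetical, not the poster's actual code):

```clojure
;; map returns a lazy seq: the side-effecting fn only runs when the seq is
;; realized. The REPL realizes it to print the result; `lein run` does not.
(defn rename-pngs [files]
  (map (fn [f]
         (println "renaming" f)      ; side effect, deferred by laziness
         (str f ".renamed"))
       files))

;; Forcing realization with doall makes the side effects run under
;; `lein run` and in an uberjar as well:
(defn -main [& _]
  (doall (rename-pngs ["a.png" "b.png"])))
```

(run! or doseq would also work here, and avoid retaining the results.)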
experience shines
(defn supply
[f eof]
(reify clojure.lang.IReduceInit
(reduce [_ rf init]
(loop [res init]
(let [v (f)]
(if (identical? v eof)
res
(let [res (rf res v)]
(if (reduced? res)
@res
(recur res)))))))))
;; dual to (line-seq rdr)
(supply #(or (.readLine rdr) ::eof) ::eof)
almost, it doesn't do the reduction. it's equivalent to (take-while #(not-sentinel %) (repeatedly f))
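A small check of that equivalence, using an atom-backed source in place of a Reader (an assumption for this example):

```clojure
;; A side-effecting source that yields 1, 2, 3, then the ::eof sentinel.
(def source (atom [1 2 3]))

(defn next-val! []
  (if-let [[v & more] (seq @source)]
    (do (reset! source more) v)
    ::eof))

;; lazy-seq counterpart of the reducible `supply`: pull until the sentinel.
(def drained
  (take-while #(not (identical? % ::eof)) (repeatedly next-val!)))
```

Here `(reduce + 0 drained)` consumes the source exactly as reducing the `supply` version would.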
Any way to extend a protocol for a particular sequence of a precise type? Like I have a sequence of strings and a sequence of ints, and I want each to have a different implementation. Does Clojure know types to that degree, or are all sequences typed as sequences of Objects?
Also, is there a way I can set assert to false but only for one namespace and not the whole app?
what is the best library for reading from Excel documents?
@grounded_sage there is a Java lib called Apache POI you can use. I’m not sure if there is a Clojure wrapper for it.
thanks. I'm still new to programming and wanting to apply it to a small produce business I am starting. Trying to assess Elixir vs Clojure ecosystem. Elixir has a nicer runway but once in the air I'm sure Datomic would be an amazing asset to be able to use.
@jannikko if you want bi-directional routing for your Compojure app, you could try compojure-api. Its reverse routing is described here: https://github.com/metosin/compojure-api/wiki/Routing#bi-directional-routing
@grounded_sage @alexmiller There is a Clojure wrapper for Apache Poi: https://github.com/mjul/docjure
there you go :)
That one looks great! Man, Clojure just has so much clarity in its syntax. Might finish up the chapter I am on with Programming Elixir and go back to Brave and True haha.
insert usual newbie disclojure. I'm a bit stuck and this is when I usually learn the most... but I am having a bit of a mental block here after trying for a while. I have built a map with this structure, and I want to evaluate it all: if :a has x attributes, and :b has y attributes, and :c has z attributes, do one thing. If :a has y, and :b has x, and :c has z, something else, and so on.
@tom yes! take a look at =>*
and =>
(https://github.com/plumatic/schema/blob/master/src/cljx/schema/core.cljx#L1073)
I'm stuck on how to for/doseq/if/cond through the map and treat the entries collectively; I can only evaluate one k v at a time, but I want to evaluate k v, k1 v1, k2 v2 together
i’d implement something like this:
(defn parse-cup [attrs] <parse the attributes here>)
(defn check-cup-lifted [cups]
(let [parsed-cups (zipmap (keys cups) (map parse-cup (vals cups)))]
(cond
(and (= (:a parsed-cups) :x) (= (:b parsed-cups) :y) (= (:c parsed-cups) :z)) (do-one-thing)
(and (= (:a parsed-cups) :y) (= (:b parsed-cups) :x) (= (:c parsed-cups) :z)) (do-something-else))))
Cool, thanks @caio . That gives me a different angle, and I was definitely thinking a loop of sorts wasn't quite it, so the above makes sense. I’ll give that a go in the morning
Looking for advice on adhering to http://jsonapi.org/ with Clojure. Right now I’ve built my API with compojure-api
@alexmiller updated those benchmarks per your suggestions https://gist.github.com/nathanmarz/b7c612b417647db80b9eaab618ff8d83
@nathanmarz (and you should get with the Java 1.8 :)
@grounded_sage: There's also https://github.com/Swirrl/clj-excel which is our fork of clj-excel
... Not sure of the differences to docjure, it might be better... but if you want another fallback this one fixes some issues with the original clj-excel
, which has fallen out of maintenance
there are also a bunch of other forks of it on github though - all I can say is this one works for us 🙂
@didibus you can create a class that will implement the interfaces necessary to behave like a LazySeq. I'm not sure it makes sense to talk about a protocol that is compatible with LazySeq? Perhaps I'm misunderstanding you. Can you share some more details about what you want to accomplish?
@pjstadig I guess I'm kind of trying to subclass it. Basically, I want a LazySeq, but I want to add more behaviour to it. But when I do type polymorphism, I want the type of the one with more behaviour to be different from LazySeq.
@pjstadig Maybe I need to explain my situation a little better. Basically, I have a protocol which I want to extend LazySeq with. But if the LazySeq contains Tokens, I need a different implementation than if the LazySeq contains other types. So I was thinking of creating a TokenLazySeq which is compatible with LazySeq, but in my code, I would always use that type when I create a seq of Tokens. This way, my protocol can have a different implementation for that.
@pjstadig So I want to define what it means for two LazySeq to be fuzzy-equal. But a LazySeq that contains Token should have a different meaning, so it means something different for it to be fuzzy-equal
@didibus so you want to compare two lazy sequences where =
would return false, but your fuzzy equal would return true because it is more lenient?
@didibus what is an example of something that =
would say is not equal, but your lenient equal would say is?
@pjstadig And depending on the type of objects inside LazySeq I need a different kind of lenience.
If I want to save data (text, some files/images) for a desktop app, could you suggest a good strategy? A SQL database seems too heavy (and painful for a user to install)
@pjstadig One can be that only the majority of elements must be equal. Another one can force all elements to be equal. Another one can force at least one element to be equal, etc.
@didibus does order matter? could your lenient equal return true, but if you switch the order of the sequence it would return false?
Is (defn symbol ^{:some :metadata})
deprecated? I seem to have read that on Stack Overflow, and metadata added with the caret-map doesn't show up when doing (meta #'symbol)
@pjstadig Either way, I feel like there should be a way to create custom sequences. What if I want to have a list of recipes? Can't I make this explicitly typed as a ListOfRecipes instead of LazySeq? Or must I resort to metadata
@pjstadig I mean, it's much more complicated than that. Fuzzy-equal also calls fuzzy-equals on the elements. And yes, order does matter.
@hlolli you can add metadata to a defn with either (defn ^{:some :metadata} symbol ,,,)
or (defn symbol {:some :metadata} ,,,)
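A quick check of both forms (fn names are hypothetical):

```clojure
;; Caret-map before the name: defn copies the symbol's metadata to the var.
(defn ^{:doc "one"} foo [] 1)

;; Attr-map form: a map after the docstring is merged into the var's metadata.
(defn bar "docstring" {:author "me"} [] 2)

;; (:doc (meta #'foo))    ;=> "one"
;; (:author (meta #'bar)) ;=> "me"
```

Note that metadata placed *after* the name, as in the original question, attaches to the arglist rather than the var, which is why it doesn't show up in (meta #'symbol).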
@didibus what you are describing is functions, and I think you'll find that you don't need to subclass LazySeq to add behavior, you just need to write a fuzzy-equals
function that operates on (possibly lazy) sequences
@pjstadig No, I need a function which varies in implementation based on the type of objects contained inside a LazySeq
@didibus correct, and that can be done with functions. there's no need to subclass LazySeq for that
equality isn't even something that is compatible with a lazy seq, you need to fully consume a lazy seq to come to a conclusion about equality
@pjstadig But in the case of my fuzzy-equal, I don't always need to consume everything.
@pjstadig Sometimes, it means equal as soon as one element is equal. So you don't have to consume everything
@pjstadig I guess it sounds like I need to create a type which manually implements all of the LazySeq protocol.
(some (fn [[x y]] (fuzzy-equals x y)) (map vector seq1 seq2))
would return falsy if fuzzy-equals returns false for at least one pair of elements
similarly (every? (fn [[x y]] (fuzzy-equals x y)) (map vector seq1 seq2))
would return truthy if fuzzy-equals returns true for every pair of elements
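Putting those two combinators together; fuzzy-equals here is a stand-in definition for the sketch (numbers within 1 of each other count as equal):

```clojure
;; Stand-in element comparison, just for illustration.
(defn fuzzy-equals [x y] (<= (Math/abs (- x y)) 1))

;; "equal as soon as one element matches" - short-circuits via some
(defn any-match? [s1 s2]
  (boolean (some (fn [[x y]] (fuzzy-equals x y)) (map vector s1 s2))))

;; "all elements must match" - short-circuits on the first failure
(defn all-match? [s1 s2]
  (every? (fn [[x y]] (fuzzy-equals x y)) (map vector s1 s2)))

;; (any-match? [1 10] [2 20]) ;=> true  (1 is within 1 of 2)
;; (all-match? [1 10] [2 20]) ;=> false (10 vs 20 fails)
```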
@pjstadig You don't understand my problem. The issue I have is that, I need a different type of equality logic for different kinds of LazySeq. That is, depending on the type contained inside the LazySeq
I guess I'm suggesting that it is not useful to think of LazySeqs with different contents as being different kinds of LazySeqs, but perhaps you are right and I've misunderstood your problem.
@pjstadig So I'd need something like: (fn [lazySeq] (cond (every? #(instance? String %) lazySeq) (do-string-sequence-fuzzy-equal lazySeq) (every? #(instance? Token %) lazySeq) (do-token-sequence-fuzzy-equal lazySeq)))
Well, it's because I'm simplifying the whole thing. But basically, I have different types of things grouped together. And I want to fuzzy-equal these groups together. But only groups which have similar things can be fuzzy-equaled, and fuzzy-equaling each group means something different to my domain depending on the type of things they contain.
I could group them using records, and then it would all work. Each group would have its own record type, and then I can extend my fuzzy-equal protocol for each of these types.
But I don't want record semantics, because I need them grouped in order. Also, I like the performance gains I get from the laziness. And I don't care about keys, only values.
Ideally, I'd have methods to create, say, a person-seq and a token-seq and a vehicle-seq. Each would return a sequence of persons, tokens, or vehicles based on which one you called. Those functions would assert that everything inside is of the same type. So I get back a bunch of LazySeqs which my code guarantees contain the same type. One has tokens, one has vehicles, and one has persons. Now I want to call (fuzzy-equal person-seq1 person-seq2) and have it do what should be done when it's persons. If I call (fuzzy-equal vehicle-seq1 vehicle-seq2) I need it to do what is required for vehicles.
i think it makes sense to have functions that would create homogeneous sequences. I don't think fuzzy-equals needs to care. It can compare each pair of elements from a couple of sequences, and how you would combine those element comparisons (i.e. did at least one of them match? did all of them match?) is a different question that can be answered by a function like some
or every?
if fuzzy-equals
was trying to compare a vehicle to a person, couldn't it just return false
?
and i guess what i mean by fuzzy-equals
is a function that compares two single elements
i'm trying to separate the idea of comparing elements with the idea of comparing sequences
Well, the behaviour I need is that a sequence of data that relates to the same thing is all used together to decide equality. But, a sequence of sequences is different.
if you want to define some sequence comparison functions that assert that their arguments are sequences of elements that are all the same type, that could work, too
the point is you are talking about defining behavior, and behavior is what functions do. you do not need to subclass something to get behavior
It doesn't really matter to me whether I use a protocol or not. But I need a way to identify the type of a LazySeq in a way that's more specific than the LazySeq type. I need to distinguish between different LazySeqs
I was trying not to use metadata, because sometimes it gets inadvertently lost as you apply transforms on the seq
i'm still not sure i agree with the approach, but if you want to create a new class that behaves like a LazySeq but has a different type depending on the type of its elements, but you don't want record semantics, then you can use deftype instead of defrecord
@didibus you would not subclass LazySeq. If you want to subclass you should use proxy
. You could use deftype
and implement a bunch of different interfaces (like Seqable and IPersistentCollection).
Hum, ok, can I deftype and implement LazySeq, but just reuse the clojure implementation of it?
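A minimal deftype sketch along the lines suggested above (names and the per-type rules are hypothetical): wrap the seq in a distinct type, implement Seqable so it still works with seq functions, and dispatch the protocol on the wrapper type.

```clojure
(defprotocol FuzzyEq
  (fuzzy= [a b]))

;; Distinct wrapper type; Seqable makes it usable with seq fns like first/map.
(deftype TokenSeq [s]
  clojure.lang.Seqable
  (seq [_] (seq s)))

(extend-protocol FuzzyEq
  TokenSeq
  ;; token rule (made up for the sketch): order-insensitive comparison
  (fuzzy= [a b] (= (set (seq a)) (set (seq b))))
  Object
  ;; default rule: plain element-wise seq equality
  (fuzzy= [a b] (= (seq a) (seq b))))

;; (fuzzy= (TokenSeq. [1 2]) (TokenSeq. [2 1])) ;=> true
;; (fuzzy= '(1 2) '(2 1))                       ;=> false
```

The wrapper does not give you LazySeq's caching or chunking; it only delegates to the wrapped seq, which can itself be lazy.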
Could you suggest a simple way to handle file uploads from a user in a web app?
@majenful if that fits your requirement, S3 provides a way to upload directly from the client without the payload going through your servers
@majenful google "S3 presigned upload url"
I saw @yogthos's solution in the Luminus app, but there is too much Java interop for my taste
@majenful are you talking about this: http://www.luminusweb.net/docs/routes.md#handling_file_uploads ?
it seems reasonable to me
At the bottom in ring the body of an http request is an inputstream - so you can read/write it where you want... or mix in a middleware to e.g. coerce x/www-form-encoded
(or whatever it's called) into a java tempfile etc..
most default middleware stacks have something like that included
@val_waeselynck I thought one of the libraries already had functions for this
if the file is small enough and it's text - you can just slurp
the body too
Is there any alternatives to https://github.com/sjl/metrics-clojure?
general clojure convention question: should non-referentially transparent functions be marked with a bang(!)?
like, if I have something that asks for the current mouse pointer location, should it be, e.g. (pointer-location) or (pointer-location!)
@idiomancy maybe. Clojure core itself is not consistent
https://github.com/pjstadig/reducible-stream/blob/master/src/pjstadig/reducible_stream.clj#L158
majenful: pretty sure you could just use (with-open [upload (:body request)] (
one difference is the luminus stuff is using the nio apis though... so it might be more efficient - depending on the webserver. The above will block the thread handling the request
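For reference, a plain-interop sketch of that blocking approach (save-upload! is a hypothetical name; it assumes the standard Ring request shape with an InputStream under :body):

```clojure
(require '[clojure.java.io :as io])

;; Copy a Ring request body (an InputStream) to a temp file.
;; Blocks the request-handling thread, as noted above.
(defn save-upload! [request]
  (let [tmp (java.io.File/createTempFile "upload" ".bin")]
    (with-open [in (:body request)]
      (io/copy in tmp))
    tmp))

;; usage sketch, with an in-memory stream standing in for a real request:
;; (slurp (save-upload! {:body (io/input-stream (.getBytes "hi"))})) ;=> "hi"
```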
@rickmoynihan will check, thanks
pjstadig: Hey it's funny you've posted that - I've been toying with using reducer/transducer streams too
@rickmoynihan i wrote a blog post about it. I think the readme contains much of the same information http://paul.stadig.name/2016/08/reducible-streams.html
pjstadig: I've been looking at it from a performance (and of course resource handling) perspective. Initially I've been looking at CSV processing... basically the standard clojure CSV parsers all abstract over them as lazy-sequences... my benchmarks show that reimplementing one in terms of reducers/transducers should be able to improve performance by between 2-10x - and that's before you start layering transformations onto the parsed results
but I've not actually implemented a real parser yet
just a crude simplistic one using (.readLine buf)
and a split on ,
... though I've been thinking also about trying to parallelise the processing
it’s trickier than that due to quoted and multi-line cells
alexmiller: yeah I know 🙂 like I said I haven't implemented a proper parser yet
but it seems like you could reuse the guts of an existing one to leverage those parts
adding something like this to data.csv for example would be a great addition
that's why my estimate is so large 🙂 - I was actually planning on taking your clojure.data.csv
and reducerifying it
(not my clojure.data.csv btw :)
but was trying to get a ballpark on performance before I did
ahhh sorry - contribs one 🙂
on that note, I posted this yesterday that will come in handy @rickmoynihan https://clojurians.slack.com/archives/clojure/p1473135664000261
but also saw there's a java one (univocity) that's supposed to be one of the fastest
@ghadi: I think you're probably right... about reducification being the biggest win... but I reckon there's some mileage in parallelising... I've been researching it a little... and the results from e.g. widefinder/widefinder2 some years back indicate it's possible. I have a hunch that one trick is that you still (even with SSDs) need to read the file sequentially (otherwise you blow cache lines etc.)... so you basically have a single thread reading it as a sliding buffer/window using a BufferedInputStream - then, because readers are a lot slower than input streams (I'm guessing due to 16-bit characters in Java), you run each reader on a separate thread... assigning them a chunk of the underlying buffer, so the CSV parsing itself happens in parallel - along with the object allocations. Computations then get layered on top of each reader as transducers.
you'll probably need quite a large buffer to make it worthwhile... coordination overhead could be a killer though
There's a lot of great talks from Guy Steele where he talks about parallel prefix algorithms... But for a large amount of applications, I think it's really worthwhile to get rid of seqs and just do eager reductions on the stack
yeah - it's a real shame most of that widefinder stuff is no longer online 😞 Oracle destroyed Sun
yeah I've seen a lot of Steele's talks
Clojure 1.7 did this with range
. I have a JDBC library that does reducible result sets: https://github.com/ghadishayban/squee/blob/7844619da17907083a00d4621663ac5ac079b6d3/src/squee/impl/resultset.clj#L53-L82
yeah - I was planning on doing something similar with sparql
you can also achieve the chimera of both seq and reducible: http://dev.clojure.org/jira/secure/attachment/15735/CLJ-1906-successions.patch
if you reify Seqable and IReduceInit, then you get to choose between a caching seq and a non caching reducible
reducible implementations have a definite pattern, kind of like the formula that people learn for making lazy-seqs, just a tad more boilerplate.
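A minimal sketch of that Seqable + IReduceInit combination: one value that offers both a seq view and a non-caching reduce.

```clojure
;; Callers choose: `reduce` goes through IReduceInit (no caching, no
;; per-element allocation); seq fns like `take` go through Seqable.
(defn both-range [start end]
  (reify
    clojure.lang.Seqable
    (seq [_] (seq (range start end)))
    clojure.lang.IReduceInit
    (reduce [_ rf init]
      (loop [acc init, i start]
        (if (< i end)
          (let [acc (rf acc i)]
            (if (reduced? acc) @acc (recur acc (inc i))))
          acc)))))

;; (reduce + 0 (both-range 0 5)) ;=> 10    ; via IReduceInit
;; (take 2 (both-range 0 5))     ;=> (0 1) ; via Seqable
```

Unlike the patch linked above, this simple version recomputes the seq on each call rather than caching it.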
ghadi: that's exactly what I was doing! 🙂
I really wished clojure.lang.Seq
was a protocol though
an old gist of regex splitting as a reducibles https://gist.github.com/ghadishayban/7002262
but c'est la vie
thanks these links are useful
yeah - it's definitely not as popular as it should be... also once you do it, it's infectious... as you want everything to behave the same
e.g.
- could really benefit from defining CollReduce
on Reader
etc... I'd do it myself, but I don't own the protocol or the type
Hi, I've a HugSQL question with a JDBC driver over a postgres db. I wanted to carry out a date operation https://www.postgresql.org/docs/current/static/functions-datetime.html
In a where
clause such as WHERE date = :date - integer '7'
(note that date
is a column of my table). When I execute the generated HugSQL function with a :date
parameter (joda date type), I've got
PSQLException ERROR: operator does not exist: date = integer
Hint: No operator matches the given name and argument type(s). You might need to add explicit type casts.
Position: 249 org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse (QueryExecutorImpl.java:2182)
When I'm trying to specify the type with a WHERE date = date :date - integer '7'
, I've got another Exception
PSQLException ERROR: syntax error at or near "$1"
Position: 256 org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse (QueryExecutorImpl.java:2182)
I fixed it by putting this date operation in an ON
after a JOIN
. But I would like to understand the previous errors
ghadi can you explain what you mean "get rid of seqs and just do eager reductions on the stack?"
i can only guess: reduce the lazy seq into a single value so that you don't have a huge memory allocation, and just recur on the acc value of the reducer?
lazy-seqs basically consist of a function that produces a value and a thunk -- when you call the thunk it produces another seq (value + thunk). All those values require storage and memory allocation. Instead, if you alter the definition of a collection into "something you can reduce over", you can divorce yourself from the need to allocate so much. The only thing you need to store is a couple of things on the stack to accumulate. Here is a loose version of range
as a reducible collection:
(defn rrange [start end]
(reify clojure.lang.IReduceInit
(reduce [_ rf init]
(loop [acc init
i start]
(if (< i end)
(let [acc (rf acc i)]
(if (reduced? acc)
@acc
(recur acc (inc i))))
acc)))))
(if only it were that easy :)
so it's a bit boilerplate-y... but as you can see, acc
and i
are the only thing that are ever stored, and they're only stored on the stack and mutated
says you - I’m old school
efficiency was more important in my day
i'm thinking it's gonna operate on a lazy seq. and if it's realized, you would gain no benefits?
(reduce + 0 (rrange 5 10))
<- all work is done by the reducing function, in this case +
I think there is a misunderstanding here -- rrange above is a function that returns a 'virtual collection' i.e. something that can be reduced
calling reduce
does that. But it does it in a way that only one item is produced at a time
Nope, still no lazy seqs at all. It's just an implementation of an interface that defines 'something that can be reduced over'
i was the one who asked if it was equivalent to (reduce f init (take-while not-sentinel coll))