This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2019-05-07
Channels
- # announcements (7)
- # beginners (123)
- # calva (27)
- # cider (23)
- # clj-kondo (4)
- # cljsrn (7)
- # clojure (29)
- # clojure-dev (7)
- # clojure-europe (4)
- # clojure-italy (4)
- # clojure-nl (16)
- # clojure-uk (47)
- # clojurescript (1)
- # code-reviews (4)
- # cursive (4)
- # data-science (4)
- # datomic (30)
- # duct (4)
- # fulcro (4)
- # graphql (1)
- # kaocha (4)
- # mount (8)
- # off-topic (13)
- # overtone (1)
- # pedestal (2)
- # planck (3)
- # re-frame (9)
- # reagent (50)
- # ring (12)
- # shadow-cljs (38)
- # spacemacs (5)
- # testing (13)
- # tools-deps (55)
- # vim (30)
- # xtdb (13)
Some people keep calling dynamically typed languages "untyped", which sounds wrong to my Russian ear because to my understanding untyped == typeless.
@nxtk Right. Clojure is not untyped. It is fairly strongly typed. It is just dynamically typed -- so type checks occur at runtime, not compile-time.
(hence the various ClassCastException errors you can get from incorrect Clojure code)
And Clojure also has the ability to define new types (`deftype`, `defrecord`, `defprotocol`, etc.) that are actual Java (class) types.
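For instance, a minimal sketch (with a made-up record name) showing that `defrecord` produces a real JVM class:

```clojure
;; Point is a hypothetical record; defrecord compiles it to a Java class.
(defrecord Point [x y])

(def p (->Point 1 2))

(instance? Point p)  ;; => true -- a genuine JVM type check
(:x p)               ;; => 1 -- records still behave like maps
```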
Hi ppl, what are you using to create fake data? I found a library called Faker but it seems rather incomplete compared to other languages such as Python
@iagwanderson Can you be a bit more specific about what you're looking for? `clojure.spec` allows for some pretty impressive "fake data" creation.
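A minimal sketch of the spec approach, with hypothetical specs (your real shapes -- company names, addresses, dates -- would differ):

```clojure
(require '[clojure.spec.alpha :as s])

;; Hypothetical specs, just for illustration.
(s/def ::company-name string?)
(s/def ::employees pos-int?)
(s/def ::company (s/keys :req [::company-name ::employees]))

(s/valid? ::company {::company-name "Acme" ::employees 42})  ;; => true

;; With org.clojure/test.check on the classpath you can then generate data:
;; (require '[clojure.spec.gen.alpha :as gen])
;; (gen/sample (s/gen ::company) 5)  ;; five maps of "fake" data
```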
I have some schemas in GraphQL that I want to populate with fake data (company names, full names, dates, numbers, addresses) because some of my providers do not have staging environments and I want to provide this feature to my direct users
I just discovered that you can use `into` with a transducer. I’m looking at some code written by someone else that uses the form `(into [] f coll)`. How is this different from `(mapv f coll)`? Are there performance gains (especially if you are `into`-ing into a `{}`)? Just wondering if there is anything worth refactoring in code that I’ve written.
You mention `(into {} ...)` and that is different from `(into [] ...)`
I find `(into [] (map f) coll)` a little clearer in intent than `(mapv f coll)` -- since the former makes it clear you are applying a transform to `coll` and "pouring the result into a vector".
Actually, `(into [] (map f) coll)` and `(mapv f coll)` are pretty much identical.
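A quick sketch of that equivalence, using a made-up `f`:

```clojure
(defn f [x] (* x x))  ;; hypothetical transform

;; Both build a vector eagerly; mapv even uses a transient vector
;; internally, so they behave and perform very similarly:
(mapv f [1 2 3 4])           ;; => [1 4 9 16]
(into [] (map f) [1 2 3 4])  ;; => [1 4 9 16]
```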
Yeah, with `{}` I was curious if there are any performance gains to be had. I’m just so used to doing `map`-`reduce`
There's `reduce-kv` for reducing a map.
Where the transducer form starts to shine is when you are combining multiple `map`/`filter` operations. Then it's definitely both clearer and faster.
I don't use `eduction` at all.
(into [] (comp (map f) (filter p) (map g)) coll)
gotcha. So my first instinct would be to write
(->> coll
     (map f)
     (filter p)
     (map g)
     vec)
That creates intermediate collections for each piece of the pipeline.
The transducer version does not.
ahh. Good to know. I’ll have to hunt for some places where I can apply this. But in the case of just one transformation, there would not be a major difference (other than how the code “reads”) doing it with `(into res xf coll)` vs. a single `map` or `reduce-kv`, right? (Assuming a list and a vector are equally fine)
`map` is lazy, `mapv` is not.
`mapv` uses a transient vector so it's efficient like `into []`. `reduce-kv` relies on the collection supporting `kv-reduce` via a protocol. I'd have to dig into the source to see how that plays out.
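For reference, a small `reduce-kv` sketch:

```clojure
;; reduce-kv walks a map's entries directly (via the kv-reduce protocol)
;; instead of seq-ing the map into MapEntry objects first:
(reduce-kv (fn [acc k v] (assoc acc k (inc v)))
           {}
           {:a 1 :b 2 :c 3})
;; => {:a 2, :b 3, :c 4}
```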
oh! Didn’t know that either. Lazy sequences seem to be a bit of a bottleneck in some of the things I’m doing, or at least they’re taking a lot of horizontal space on the async-profiler flamegraphs.
Yeah I have one `keep-indexed` function that is really slowing things down -- I tried using `map` with the `coll` as `c1` and some `range` as `c2` and it was even slower. But I didn’t try `mapv`
There are situations where laziness is very important and you need it. Most situations neither need it nor care enough about performance to avoid it.
`mapv` with multiple collections won't be any faster than `map`
Look at the source. `mapv` is efficient for a single collection.
Interestingly, `keep-indexed` has a transducer arity that will likely be more efficient for you.
user=> (quick-bench (keep-indexed (fn [x y] x) (range 100)))
Evaluation count : 17056068 in 6 samples of 2842678 calls.
Execution time mean : 35.779785 ns
Execution time std-deviation : 4.136766 ns
Execution time lower quantile : 29.745389 ns ( 2.5%)
Execution time upper quantile : 39.122814 ns (97.5%)
Overhead used : 1.639129 ns
nil
user=> (quick-bench (into [] (keep-indexed (fn [x y] x)) (range 100)))
Evaluation count : 229200 in 6 samples of 38200 calls.
Execution time mean : 2.739593 µs
Execution time std-deviation : 134.067772 ns
Execution time lower quantile : 2.606451 µs ( 2.5%)
Execution time upper quantile : 2.893934 µs (97.5%)
Overhead used : 1.639129 ns
nil
Big difference in speed. The wrong way in this case 🙂
So it isn't faster.
I sort of assumed it would be...
Criterium is your friend here.
Oh, I wonder if my benchmark check isn't realizing the lazy sequence... just a sec...
Hahaha... yeah, just shoot me! Lazy benchmarks are really fast when they don't do anything!
user=> (quick-bench (into [] (keep-indexed (fn [x y] y)) (range 100 200)))
Evaluation count : 232302 in 6 samples of 38717 calls.
Execution time mean : 2.605172 µs
Execution time std-deviation : 61.792867 ns
Execution time lower quantile : 2.533126 µs ( 2.5%)
Execution time upper quantile : 2.667096 µs (97.5%)
Overhead used : 1.639129 ns
nil
user=> (quick-bench (doall (keep-indexed (fn [x y] y) (range 100 200))))
Evaluation count : 149862 in 6 samples of 24977 calls.
Execution time mean : 4.093292 µs
Execution time std-deviation : 40.998696 ns
Execution time lower quantile : 4.052769 µs ( 2.5%)
Execution time upper quantile : 4.145466 µs (97.5%)
Overhead used : 1.639129 ns
nil
So, yes, the transducer version of `keep-indexed` is faster... which at least matches my expectations! @tabidots
At least this way you can combine the transducer form of `keep-indexed` with other things to avoid creating intermediate collections.
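For example (a sketch with made-up steps), the transducer arity of `keep-indexed` composes with other transducers so everything happens in one pass, with no intermediate collections:

```clojure
;; Keep values at even indices, then increment them -- one pass:
(into []
      (comp (keep-indexed (fn [i x] (when (even? i) x)))
            (map inc))
      (range 10))
;; => [1 3 5 7 9]
```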
Wow, thanks so much for this tip! It just shaved ~9.5s off of the running time for this algorithm I’m working on, given a certain very large test value 😮 went from 41s to 38s yesterday and now 29s. Awesome! And now I see the power of `(into [] xf coll)` 😁
Cool!
Clojure performance can be a bit mysterious at times but avoiding laziness and avoiding boxed/checked math can really help.
Sometimes you just have to break out the ol' `loop`/`recur` tho'...
hehe yeah I’m currently on a campaign to eliminate all the `loop`/`recur`s where I can 😆
hi, is it possible to execute a function without having it required in the namespace? I thought fully-qualified names would do that, but it's not working
my project name is clj-integrator and I have a function called first-name in the namespace faker
so I placed it in an edn file like `{:Name clj-integrator.faker/first-name}` and I expected to read the edn file and call the function from another namespace
no, you need to cause the namespace to be loaded somehow
one helpful function for that (as of Clojure 1.10) is `requiring-resolve`, which takes a symbol, requires the namespace, resolves the symbol to a var, and returns it
so you can do something like `((requiring-resolve 'faker/first-name))` - the inner parens will resolve the function `first-name` in the namespace `faker`. The outer parens invoke the returned var, which will invoke the referenced function. You could place arguments in the outer parens as well if needed.
note that project/artifact name is not part of your namespace unless you explicitly create a directory segment and namespace name to include it. it was unclear to me whether you had done that so I did not include it
yes, I have done that. But that's exactly what I was looking for. Didn't know about requiring-resolve
`@` is syntax for the `deref` function
`@foo` == `(deref foo)`
there's a guide for characters like that
So I’ve used that with atoms, but in this case I’m assuming it’s forcing computation. I think the aleph http client is returning a future. Does that sound correct, or is there something lazy going on with threading?
sounds correct, deref works with futures too
will block until future returns
the threading is incidental here
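A quick illustration that `@`/`deref` blocks on a future until its value is ready:

```clojure
;; A future that takes ~50ms to produce a value:
(def f (future (Thread/sleep 50) 42))

@f                ;; blocks briefly, => 42
(= @f (deref f))  ;; => true -- @ is just reader sugar for deref
```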
I'm trying to generate data from this spec I made, but I'm getting an error Unable to construct gen at: [:type] for: :type
`::type` is an opaque function to spec so it doesn't know how to gen. Here, the easiest fix is to change that spec to a set: `#{"NetworkGraph"}`
any time you have an enumerated value, that's probably better as it will gen automatically
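A minimal sketch of the set-as-spec idea:

```clojure
(require '[clojure.spec.alpha :as s])

;; A set works as both a predicate and a source for generation:
(s/def ::type #{"NetworkGraph"})

(s/valid? ::type "NetworkGraph")  ;; => true
(s/valid? ::type "Other")         ;; => false

;; With test.check on the classpath, (s/gen ::type) picks from the set.
```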
some other tips...
`::version` could be `(s/nilable string?)`
`::metric` could be `(s/nilable string?)`
`::properties` could just be `map?` (also it's defined twice)
`::cost` could be `(s/or :double double? :int int?)` - will gen, etc. (but will conform differently so that may not be what you want)
I have a question about making an http post request using aleph's client. I can't seem to pass in authorization headers.
When I try to pass in headers as a map, I get an IllegalArgumentException about how only ' ' and '\t' are allowed after '\n'
Is this most likely a bug in aleph or netty, or am I passing the wrong thing into "headers"? I assumed "headers" would just be a map, so I'm not sure how I can mess that up.
(defn make-auth-request! [url headers]
  (d/chain (http/post url {:body    "grant_type=client_credentials"
                           :headers headers})
           :body))
I'm just passing in a map with a single "Authorization" header for basic authentication.
Honestly it's probably me not understanding the documentation well enough, but it seems like it would be straightforward.
I'm just trying to make an authenticated http request. I've done this with clj-http without issue.
I have another simple question. I have a list that I'd like to convert into a map. I'm struggling to find an example. `(:k1 "v1" :k2 "v2" :k3 "v3")` to `{:k1 "v1", :k2 "v2", :k3 "v3"}`
(apply hash-map l)
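Applied to the list above:

```clojure
(def l '(:k1 "v1" :k2 "v2" :k3 "v3"))

(apply hash-map l)
;; => {:k1 "v1", :k2 "v2", :k3 "v3"}

;; An alternative that builds the same map pair by pair:
(into {} (map vec) (partition 2 l))
```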
Exception in thread “main” Syntax error compiling deftype* at (flatland/ordered/set.clj:19:1).
my first guess is an old version of flatland which a more recent clojure compiler doesn't like
Hi, I am using the lacinia and lacinia-pedestal libraries and I want to deploy my project. All my schema definitions are under a folder at resources/schemas/. However, after I built the jar file using `lein uberjar` and tried to run it, I got the following error: Caused by: java.lang.IllegalArgumentException: Not a file: jar:file:/graphql/integrador.jar!/schemas It seems that I need to include something in my project.clj to include the schemas. (ps: my project is called integrador)
@iagwanderson resources inside a jar can't be accessed with the File API, if you are using it, you can replace it with the resource API
usually this is a question of replacing `(java.io.File. foo)` or `(io/file foo)` with `(io/resource foo)` - nothing else about the code needs to change
I created the following function to list all schema files on the folder:
(defn list-files-resource
"Function to list all files inside a `folder-name` that
is present in resources/."
[folder-name]
(-> (io/resource folder-name)
io/file
file-seq
rest))
that's trickier - you need to find all resources that are children of that path
file-seq and io/file are not usable without unpacking the jar into the file system
You may just need to add `"resources"` to your `:resource-paths` vector in project.clj
this is one way of doing that https://stackoverflow.com/a/22363700
@donaldball that doesn't help - the file api doesn't work on things inside jars
I speculate that his uberjar simply isn’t including the schema resources
no, the file api never works on things that are inside jars
this is a common problem
One thing that I ended up finding tremendously useful in adding a big new feature to one of my open-source projects this winter was learning about the `FileSystems` API introduced in Java 7 (https://docs.oracle.com/javase/8/docs/api/java/nio/file/FileSystems.html). Using that via Clojure Java interop allowed me to effectively mount a Jar file as if it were a file system, and read and write file entries inside the Jar file, so my users only had to deal with a single file, but I could organize complex text and binary values inside it for them.
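A minimal sketch of that idea via interop. The jar path is made up for the demo, and it assumes a Unix-style temp directory; the `"create" "true"` option tells the zip filesystem provider to create the archive if it doesn't exist:

```clojure
(import '[java.nio.file FileSystems Files OpenOption]
        '[java.net URI])

(let [path (str (System/getProperty "java.io.tmpdir")
                "/demo-" (System/currentTimeMillis) ".jar")
      uri  (URI. (str "jar:file:" path))]
  ;; Mount the (new) jar as a FileSystem and write an entry into it.
  ;; Varargs methods need an explicit array from Clojure.
  (with-open [fs (FileSystems/newFileSystem uri {"create" "true"})]
    (Files/write (.getPath fs "/hello.txt" (make-array String 0))
                 (.getBytes "hi from inside the jar")
                 (make-array OpenOption 0))))
```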
interesting - a lot more than what we usually need, but it's really cool to know how to do it
Yes, this was definitely the only time I have needed to do something like this, and I don’t expect to again soon! I also suspect it is a rather lightly used part of the Java class library, because I found some bugs in it that I had to work around. 😆
resources is on the resource-paths config by default, you need to go out of your way to make that not work
@donaldball while it's possible the resource isn't inside that jar, using io/file guarantees not finding them, even if they are there
I wrote several functions to manually convert it, but was curious if there was an existing way. Right now I'm converting `_` to `-`, prepending `:`, and then converting that to a symbol.
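A hand-rolled sketch of that kind of conversion (going straight to a keyword rather than via a symbol; the function name is made up):

```clojure
(require '[clojure.string :as str])

;; Replace underscores with dashes and build a keyword directly --
;; the keyword function supplies the leading colon.
(defn snake->keyword [s]
  (keyword (str/replace s "_" "-")))

(snake->keyword "first_name")  ;; => :first-name
```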
there's a project camel-snake-kebab which does that kind of transform
sometimes the transforming is just introducing complexity - don't discount the utility of just using the original strings instead of re-casing and using keywords
I could use them as they are, I'm still trying to figure out what best practice is for these things.
It was definitely beneficial for me to write the code to convert, but I may delete it now. I learned how to do it at least.
I like to distinguish readability vs. beauty - improvements that actually improve readability are good, if they just improve beauty without making things more readable than they were before, it's probably not a net gain