2019-02-28
i've been seeing obscure errors in my stacktraces like "in unnamed module of loader 'app'" since upgrading to Java 11 and Clojure 1.10. others seeing this? example:
class clojure.lang.PersistentList cannot be cast to class clojure.lang.IFn (clojure.lang.PersistentList and clojure.lang.IFn are in unnamed module of loader 'app')
I have seen those - my understanding is that it's just letting you know which JVM module (Jigsaw and all) was responsible for the classes listed. But the actual exception is that you're using a list as a function somewhere.
I believe with Jigsaw there's an implicit top-level module if you don't define your own modules
presumably that’s what ‘app’ is
yeah, that looks like a regular error with a little extra java module information appended
user=> (())
Execution error (ClassCastException) at user/eval602 (REPL:1).
class clojure.lang.PersistentList$EmptyList cannot be cast to class clojure.lang.IFn (clojure.lang.PersistentList$EmptyList and clojure.lang.IFn are in unnamed module of loader 'app')
user=>
That’s standard for Java ClassCastExceptions now
In Java 9+
Is there a common name for this function in Clojure?
(defn <*>
  [fs xs]
  (reduce-kv (fn [m k f]
               (assoc m k (f (get xs k))))
             (cond (vector? xs) []
                   (associative? xs) {})
             fs))
it applies each f in fs to each x in xs, while trying to keep the original data structure
so, I’ve seen things like map-vals
that’s basically (into {} (map (fn [[k v]] [k (f v)])) coll)
it’s very questionable to me that there’s value in not knowing whether you have a vector or a map
hmm ok, just realised that it's almost equivalent to
(defn <*> [fs xs]
  (reduce-kv update xs fs))
the concept is roughly equivalent to the Applicative typeclass in Haskell, or like juxt with multiple arguments
(def data {:date "2019-02-28"
           :price "$20.50"
           :id "0a18abb7-dd4f-4aad-a49a-acb46cf23955"})
(<*> {:date parse-date
      :price parse-currency
      :id parse-uuid}
     data)
@qythium You can use empty instead of that condition (from 30 minutes ago). (empty xs) will produce [] if xs is a vector and {} if xs is a hash map.
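A minimal sketch of that suggestion, keeping the rest of the earlier definition unchanged:
(defn <*> [fs xs]
  (reduce-kv (fn [m k f]
               (assoc m k (f (get xs k))))
             ;; empty returns an empty collection of the same concrete type as xs,
             ;; so the (cond (vector? xs) [] (associative? xs) {}) is no longer needed
             (empty xs)
             fs))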
For the above example, I'd probably be tempted to use conforming specs...
(s/def ::date parse-date)
(s/def ::price parse-currency)
(s/def ::id parse-uuid)
(s/conform (s/keys :req-un [::date ::price ::id]) data)
where each of the parse-* functions accepted a string and either returned the parsed value or else s/invalid. (But you'll get howls of protest from the folks who say specs should not conform from string to non-string types 🙂 )
That's interesting, I've only used spec for instrumenting functions during development - is it common practice to use it in this way for actual application logic?
Also, in the above example, what should happen to keys that aren't in the parse hash map? Clojure's principles would suggest that any additional key/value pairs should be passed through unchanged.
We use spec very heavily in production code. We use it for API parameter validation (and we use heuristics on top of explain-data to produce user-facing error messages).
Can you elaborate a little on these heuristics? We like spec but have found it very troubling to have to wrap any user-facing validation (since the messages are pretty atrocious for those who don't speak Clojure).
We have a hash map for each API that maps symbols (or occasionally sequences of symbols) to error codes, and we walk the explain-data, looking at :path and :pred, and build a reductions of partial combinations of those, then walk that list looking for a match in our API hash map.
Missing parameters match to a contains? check on the API parameter name. Incorrect types match to a pair of the API parameter name and the spec operator, e.g., int?.
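A very rough sketch of that approach, with every name here invented for illustration (this is not the actual production code):
(require '[clojure.spec.alpha :as s])

(s/def ::age int?)
(s/def ::params (s/keys :req-un [::age]))

;; hypothetical per-API table: fragments of :path/:pred mapped to error codes
(def api-error-codes
  {[:age 'clojure.core/int?] :age-must-be-an-int
   [:age]                    :age-invalid})

(defn problem->code
  "Try the most specific combination of :path and :pred first, then fall back."
  [{:keys [path pred]}]
  (some api-error-codes [(conj (vec path) pred) (vec path)]))

(defn explain->codes [spec value]
  (some->> (s/explain-data spec value)
           ::s/problems
           (keep problem->code)))

(explain->codes ::params {:age "forty"})
;; => (:age-must-be-an-int)
The real heuristic described above builds a reductions of partial path/pred combinations, but the lookup idea is the same.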
Wow, that seems like a lot of work, but is sort of what I imagined would be required to prettify those. Is that code public? Thinking of building something similar at work in the medium-long term.
I'm hoping to extract the generic heuristic code into a library and open-source it at some point. Right now, it's a bit tangled up with some of our proprietary code.
@UFMHWUULB Have you looked at expound?
I have, and I think it's pretty solid. But for a lot of purposes I want to leak even less Clojure/spec stuff in error messages. So if a user sends JSON to my application and gets back this error message, they aren't confused by seeing the actual keys, maps, predicates etc. that I used internally.
For me that's just meant trying my best to wrap any spec checks and give "Clojure-free" messages, but it's tedious and failure-prone
Maybe there isn't really a great solution, since failing a spec inherently means getting "you were missing key :some-key in some-map", and you can't really send that out to the world without revealing internals of Clojure code
If you need to produce domain-specific error messages, you pretty much have to have domain-specific heuristic code... but if I can reduce it to generic data walking and lookup + a simple hash map, then it would be worth releasing as an OSS lib...
We also use spec around our persistence model, to strip hash maps down to just the expected keys (which we extract from the s/keys in the spec via s/form) and to check that the values conform to what the database expects.
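A hypothetical sketch of that key-stripping idea (the spec and helper names here are made up, not their code):
(require '[clojure.spec.alpha :as s])

(s/def ::name string?)
(s/def ::age int?)
(s/def ::person (s/keys :req-un [::name] :opt-un [::age]))

(defn keys-of
  "Pull the unqualified keys out of an s/keys spec via its form."
  [spec]
  (let [{:keys [req-un opt-un]} (apply hash-map (rest (s/form spec)))]
    (map (comp keyword name) (concat req-un opt-un))))

(select-keys {:name "Rich" :age 10 :extra "dropped"} (keys-of ::person))
;; => {:name "Rich", :age 10}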
We have three layers of spec: API parameters (conforming from string to simple values), domain model (this isn't used a huge amount right now), persistence model.
We don't use instrumentation a great deal (we use it around some tests). We use s/exercise to produce "random example" data for unit tests. We have a few st/check tests as well, and some "occasionally run" generative tests.
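For example (not their actual specs; generated values are random):
(s/exercise (s/tuple pos-int? keyword?) 3)
;; => a seq of 3 [generated-value conformed-value] pairs, e.g.
;;    ([[1 :a] [1 :a]] [[2 :B/c] [2 :B/c]] [[1 :d] [1 :d]])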
(we started using early releases of spec in production during the early 1.9 alpha cycle)
In the above case, missing keys are dealt with by passing values through unchanged if there's no corresponding function
(defn <*>
  "apply functions in `fs` to values in `xs`"
  [fs xs]
  (reduce-kv update xs fs))
> (<*> [dec inc str] [1 2 3 4 5])
[0 3 "3" 4 5]
> (<*> [dec inc str] [1 2])
[0 3 ""]
And in the case of hash maps?
Ah, OK. Not what I expected.
I wasn't expecting the function to add new keys to the map being processed. But that's a reasonable behavior too.
I guess that using spec would return s/invalid or throw an exception? Unless using opt keys, which I recall were going to be deprecated in the future
I don't think a decision has been made on that yet.
Specs should return either a successfully conformed value or s/invalid. Throwing an exception is not useful there.
What we do for specs that conform strings is we wrap their parsing in try/catch, and in the catch we return s/invalid.
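A minimal sketch of that pattern (parse-date here is a stand-in, not their actual code):
(require '[clojure.spec.alpha :as s])

(defn parse-date [date-str]
  (java.time.LocalDate/parse date-str))   ; throws on bad input

(s/def ::date
  (s/conformer
    (fn [v]
      (try
        (parse-date v)
        (catch Exception _ ::s/invalid)))))

(s/conform ::date "2019-02-28")   ;; => the parsed LocalDate
(s/conform ::date "not a date")   ;; => :clojure.spec.alpha/invalid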
How do I define a custom return value for s/conform?
E.g. I have a spec like
(s/def ::nested
  (s/tuple (s/tuple #{"s"} int?)))
and I'm only interested in the int. Also, is there any way to recursively conform specs?
> (s/conform ::int-string "3")
3
> (s/conform (s/keys :req-un [::int-string])
             {:int-string "3"})
{:int-string "3"}
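One hedged option (a sketch, not an official mechanism) is to wrap the spec in s/conformer and dig the interesting part out of the conformed value yourself; it doesn't address recursively conforming through s/keys, though:
(s/def ::nested
  (s/conformer
    (fn [x]
      (let [c (s/conform (s/tuple (s/tuple #{"s"} int?)) x)]
        (if (s/invalid? c)
          ::s/invalid
          ;; keep only the int from the nested tuple
          (get-in c [0 1]))))))

(s/conform ::nested [["s" 42]])  ;; => 42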
Hey, how do Clojure people run db migrations? In .NET I have my project set to auto-migrate a DB on startup in production if there are things to migrate. So I just have to git push and I'm done deploying a new version of my app. Anything similar in Clojure?
I used https://github.com/weavejester/ragtime; it worked out pretty okay for me
I generally call the migrate function in the main function, which applies it every time the server starts in an idempotent manner
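A minimal sketch of that setup using ragtime's documented API (the db-spec and resource path are placeholders):
(require '[ragtime.jdbc :as jdbc]
         '[ragtime.repl :as repl])

(def ragtime-config
  {:datastore  (jdbc/sql-database {:dbtype "postgresql" :dbname "myapp"})
   :migrations (jdbc/load-resources "migrations")})

(defn -main [& args]
  ;; already-applied migrations are skipped, so this is safe on every start
  (repl/migrate ragtime-config)
  ;; ...then start the application as usual
  )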
@U7ERLH6JX Nice, thanks
Hi @U7ERLH6JX - nice to meet you at ClojureD 🙂
Btw, there’s also a GraalVM version of migratus: https://github.com/leafclick/pgmig. It should be super fast to run; I didn’t try it.
Nice to meet you too @borkdude 😄
does somebody here have experience with integrating single sign-on via OAuth or SAML 2? Wondering which of the many Java libraries to choose. Today I only need Azure Active Directory support, but it would be handy if the library handled more…
hi guys! is there a canonical way to get a subspecification from a clojure.spec? something like get-in where you would supply a spec path? The use case is to validate the results of `update-in` when you have a spec for the root, without checking the entire data structure
I guess I could work from spec/form, but maybe there is a better way.
Out of a binding vector with destructuring:
[{::keys [foo bar]
  baz ::else/omg
  {:keys [tomato]} :potato
  :as self}]
...how would you get a mapping of the extracted variables to their paths? Like the following
{foo    [::sth/foo]
 bar    [::sth/bar]
 baz    [::else/omg]
 tomato [:potato :tomato]
 self   []}
Destructuring in macros is backed by a function called destructure, which rewrites the binding vector into a binding vector without destructuring. It won’t give you paths but it’s halfway there
That function is intentionally not doc’ed (as it’s an impl detail) but it takes a binding vector and returns a binding vector so you can invoke it by quoting your vector
The next step of going from that binding vector to paths is probably not fun, but it’s possible
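For example (output elided and approximate; the gensym'd names and exact forms vary by Clojure version):
(clojure.core/destructure '[{:keys [tomato] :as self} m])
;; => a flat binding vector roughly like
;;    [map__123 m
;;     map__123 (if (seq? map__123) ... map__123)
;;     self     map__123
;;     tomato   (get map__123 :tomato)]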
what would be the best way to log an exception’s ex-data using logging? Say with logback as the logging backend. E.g. if I did
(log/errorf (ex-info "test" {:foo 42}))
and it produced something like
- host date | "test" {:foo 42}
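One hedged option with clojure.tools.logging (a sketch, not necessarily the best way) is to pass the throwable as the first argument and include the ex-data in the message yourself:
(require '[clojure.tools.logging :as log])

(try
  (throw (ex-info "test" {:foo 42}))
  (catch clojure.lang.ExceptionInfo e
    ;; ex-message needs Clojure 1.10; (.getMessage e) works on older versions
    (log/error e (str (ex-message e) " " (pr-str (ex-data e))))))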
I was teaching a bunch of people Clojure. I asked them to implement the game 2048, and one of the constraints I had was to use as few defn as possible. https://gist.github.com/craftybones/42f180cf753db07b6694de87f4182ece
I’d appreciate it if people can critique that solution.
Quite a few people landed at roughly the same solution
@srijayanth many of those partials could be transducer arities instead, using comp to combine them, then using a transducing context at the top level
@noisesmith thanks. I hadn’t yet taught them transducers at that point, but point taken
Also, I didn’t realise that partition returns a transducer
Darn, so cool 🙂
yeah, I think all your partials are valid transducers
your comps would need to reverse order of course
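An illustrative comparison of the two styles and the reversed comp order (not taken from the gist):
(def row [2 2 4 0])

;; partial style - comp applies right to left
((comp (partial map inc) (partial filter pos?)) row)
;; => (3 3 5)

;; transducer style - comp composes left to right, run in a transducing context
(into [] (comp (filter pos?) (map inc)) row)
;; => [3 3 5]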
Of course.
But it's ok otherwise, isn't it? It's remarkable how a game such as 2048 can be written almost entirely without a single "variable", letting a managed stack do all the work
for transpose, there are few cases where list is better than vector, and I suspect this isn't one of them
both are eager, you can easily seq on a vector, you can't easily do indexed lookup on a list
and in many cases creating a vector is lower overhead (one contiguous object instead of a linked series of allocations)
and agreed, it's a good demonstration of how elegant and powerful higher order functions can be
To be clear, a seq is eager, so seq on a vector and a seq on a list should theoretically be the same, so why the vector?
list is eager
But wouldn’t the seq force both?
the difference is that vectors are more efficient to allocate, and they have fast indexed lookup
there's no forcing in question, both are eager
Right. That’s what I assumed.
Ok, got it
the allocation is the performance you are talking about
lists have few (no?) features that make them superior to vectors
the allocation and the lookup time
lists make better stacks?
vectors are great stacks, you just pile onto the end
lists are much faster than vectors as stacks btw
oh! - good to know
I mean the data structure modification is a pointer, so that should be the guess (but it's easy to see in a perf test)
using criterium and a random series of stack operations I was able to confirm that, though the difference was smaller than the smaller standard deviation
you'll get more variability with the vector I suspect as it will depend on how deep in the data structure you are updating
so for an empty vector, it would likely be pretty fast, but you might see a different story on one with a few dozen elements
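A quick sketch of how one might check that with criterium (numbers will vary):
(require '[criterium.core :as crit])

;; a couple of conj/pop stack operations on a list vs a vector
(crit/quick-bench (-> '() (conj 1) (conj 2) pop pop))
(crit/quick-bench (-> []  (conj 1) (conj 2) pop pop))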
and peek?
peek / pop work on both vectors and lists
right
I remember reading somewhere about peek performance.
or last. I can’t remember which
peek is faster than last, and happens to find the last element of a vector
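For reference, peek/pop follow the efficient end of each collection:
(peek [1 2 3])   ;; => 3  (end of a vector)
(peek '(1 2 3))  ;; => 1  (front of a list)
(pop  [1 2 3])   ;; => [1 2]
(pop  '(1 2 3))  ;; => (2 3)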
ok. Thanks.
vectors are indexed, so all lookups by index are faster
(of course you lose this advantage by calling seq, but you are still no worse off than you were with a list)
I am guessing lists are there primarily for homoiconicity?
and we always need more parentheses, of course
yeah, you need lists from macros (but only weird macros are built out of direct calls to list)
That’s true. Regular old quoting seems to cover most needs
Is the idea of LightTable dead? There seems to be a brave soul who every now and again revives it from the dead
I am more interested in the idea itself even if the project doesn’t rekindle itself
One of the things that the original video from the kickstarter campaign spoke about was a function as a unit of code as opposed to a file and the ability to live debug as you built stuff
I thought both those ideas had a lot of merit and would have liked to see it go some distance
yes and no. The repl by itself is certainly powerful, but the repl that he showcased and seemed to be aiming at was far more powerful and was real time, often highlighting values overlaid on the original code etc
I haven’t yet seen/used REBL. Is that what I want? I have the talk bookmarked and am meaning to watch it
I dunno, in general gui tools tend to pass me by because I do all my work ssh'ed into a vm specifically setup for work programming
The original demo for LightTable was a radically different way to edit/work with functions -- but in testing with actual users it proved too alien/confusing so they scaled back to a much more "normal" editor experience. The main "novel" features it included were inline result display and auto-evaluation of code as you typed (which, frankly, I found a bit too dangerous/annoying and quickly turned that off).
so LightTable wasn't very compelling, REBL itself isn't super compelling, but it is built on some protocols which are usable elsewhere, which is kind of nice
Ok. There was this fancy node.js repl that doesn’t seem to be maintained anymore which was quite cool as well
Can’t remember the name now
I think several editors provide the inline results and auto-eval options now and LT is virtually abandoned (I think).
@seancorfield - A lot of tools have this thing where they are born again later and somehow seem to catch on
REBL is "different" because it's in-process and it's very focused on data viewing/visualization/exploration right now. It'll be interesting to see where it goes.
@borkdude - LOL 😄
@borkdude - yeah, unlike fish that die and are never born again
I’ll definitely try out REBL
Liquid is also a very interesting project: again, it's an in-process editor, that's fairly vim-like, but written entirely in Clojure and using curses-style terminal UI (so it can run in an ssh session on a remote box, for example).
btw, this is also kinda cool if you work in a terminal: https://github.com/denisidoro/floki. Just spit your edn to some file and then inspect it. I’m not sure if I’ll use it, since I can inspect things with CIDER pretty well, but it’s cool that this kind of thing can be written in CLJS now
Just hanging out in these channels is a wealth of information. Thank you @borkdude and @seancorfield
https://github.com/mogenslund/liquid -- see the demo videos for insight into the workflow. I really like it (and will like it more after it completes its transition to more vim-like key bindings!).
@seancorfield - the best of vim and emacs but with Clojure instead of elisp.
Seems cool
This is a ridiculously cool project. @seancorfield