This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2017-03-31
Channels
- # arachne (4)
- # beginners (21)
- # boot (36)
- # cider (59)
- # cljsrn (8)
- # clojure (260)
- # clojure-filipino (3)
- # clojure-greece (3)
- # clojure-italy (15)
- # clojure-russia (58)
- # clojure-spec (54)
- # clojure-uk (99)
- # clojureremote (5)
- # clojurescript (65)
- # core-matrix (1)
- # cursive (17)
- # data-science (9)
- # datascript (7)
- # datomic (33)
- # emacs (8)
- # hoplon (2)
- # jobs (1)
- # jobs-discuss (2)
- # lein-figwheel (2)
- # lumo (2)
- # numerical-computing (1)
- # off-topic (22)
- # om (78)
- # onyx (17)
- # parinfer (3)
- # pedestal (5)
- # perun (1)
- # powderkeg (19)
- # protorepl (37)
- # re-frame (3)
- # rum (2)
- # spacemacs (1)
- # uncomplicate (8)
- # unrepl (78)
- # untangled (29)
- # yada (41)
Played a few hours with a SOAP implementation in Clojure; seems like all is good. Wrote a simple example application to test it: https://github.com/rmuslimov/clj-soap-srv. Actually just extended the existing soap-box
by adding stuartsierra/component
is there a general rule of thumb for N and K where: on a collection of N elements, if you're making K changes, it makes sense to (1) transient! (2) make changes (3) persistent! ?
For any N and K it will be faster, but you make a trade off. You are mutating in place, so you don't accumulate anything except the final output
sometimes the intermediate steps in a long calculation can be reused in different contexts, and in that case if you used transients and didn't bother to save those calculations somewhere, they will be discarded
@bcbradley: my mental model is that the cost is always: persistent: k * X; transient: C1 + k * x + C2, where x < X is the per-element cost, but with transient! there is a setup and teardown cost of C1 and C2 ==> are you telling me C1 = C2 = 0 ?
C1 = making a transient! out of a persistent C2 = making a persistent out of a transient how much do C1 and C2 cost compared with x / X ?
i believe there isn't really any set-up/tear-down to speak of; it's like flipping a flag that says it's ok to mutate fields in objects
@tbaldridge : any idea who wrote the transient code and could tell us precisely how much C1 / C2 cost? 🙂
@bcbradley: I think there's the following issue: for C1, there must be some cost
because we can't modify the underlying "persistent" object -- so we must be creating a "transient layer" over it
but maybe this transient layer is copy-on-write, so maybe you're right in that its cost is very minimal
i've yet to have an example, even a small one with one element and one access where it was slower
@qqq @bcbradley the transient! and persistent costs are not high, it's basically a bit flip and the allocation of a single record
transient
calls this code: https://github.com/clojure/clojure/blob/master/src/jvm/clojure/lang/PersistentHashMap.java#L283-L285
persistent
https://github.com/clojure/clojure/blob/master/src/jvm/clojure/lang/PersistentHashMap.java#L283-L285
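A minimal sketch (not from the conversation above; names are illustrative) contrasting the two update loops being discussed: the persistent version allocates path copies on every `assoc`, while the transient version calls `transient` once, mutates in place with `assoc!`, and pays the single `persistent!` flip at the end.

```clojure
;; Sketch: same result, two build strategies.
(defn build-persistent [n]
  (loop [m {} i 0]
    (if (< i n)
      (recur (assoc m i i) (inc i))   ; new path-copied nodes each step
      m)))

(defn build-transient [n]
  (loop [m (transient {}) i 0]
    (if (< i n)
      (recur (assoc! m i i) (inc i))  ; mutates nodes already marked editable
      (persistent! m))))              ; C2: just nulls the shared edit box

;; (= (build-persistent 1000) (build-transient 1000)) ;=> true
```

Timing both (e.g. with `time` or criterium) is an easy way to see the k * X vs. C1 + k * x + C2 trade-off for your own N and K.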
Hey! Would someone mind pointing me in the right direction? I'm getting an "unable to resolve symbol" when I use a parameter from an anonymous function:
@tbaldridge : is the following idea correct?:
1. transient
= O(1) time / space, does NOT make a deep copy; just a pointer of "hey, I'm a copy of BLAH"
2. modifications on a transient are stored as a "diff" of the underlying map
3. the main savings comes from "if we're modifying something that's in the diff, we just overwrite"
4. is the above correct?
@qqq somewhat, I wouldn't describe it as a diff, more as we mark anything we modify as "under modification by transient" so that in subsequent updates by the same transient we won't copy the node again, we'll mutate it.
@tbaldridge : so this is unclear on my part, but the code for updating a transient is basically: if (not under modification by transient) { create a new node of sorts, just like in the persistent case; } else { // we are being modified by transient: just update it in place }
correct
this also means that if we make a bunch of updates to different keys, transient should not be faster
well no, since hashmaps and vectors are trees of nodes with ~32 items each. So if you modify the same nodes of the tree more than once you won't have to copy them
and all the keys/values share at least one root node, so there's that
let N = number of nodes; updating persistent = log32(N) new nodes created; updating transient = likely lots of sharing all the way up to the root
In addition, the "flag" we mark the nodes with is a box, so all the nodes under modification share the same box. That means that when we call persistent
we just set the contents of the box to null
and that changes the flags on all nodes.
@tbaldridge : any other cool tricks we should know about transients, or does this close the lecture ?
Here is an awesome blog post about transient that deserves more exposure 🙂 http://hypirion.com/musings/understanding-clojure-transients
I just realized something even better: persistent = log32(n) new nodes created, each of which has to copy over up to 31 values (since each node holds 32 entries); transient = most likely just 1 update
yep, and I can't take any credit for it (as I had nothing to do with the writing of transients), but you have good handle on them now
I recently started using transients in a complex function, and got about 2 times the performance. Sometimes it's needed to switch to persistent though, as the things you can do with transients are very limited.
I need a (async/closed? channel)
function, which is apparently a code smell.
What's the correct way to pipeline a channel c1
to another c2
and wait until c1
is closed/consumed entirely?
So obvious, and yet.... Thanks a lot @U052SB76M, I can pipeline to a promise with a simple (filter nil?)
transducer ❤️
Is there an easy and clean way to wait in a function for an update of an atom? I now resorted to just looping with async till the value changes. I can find things about promises and futures, but I can't really use them in my case.
add-watch
Because I'm using quil in functional mode and have to supply an update function, but the update is the change of the atom; just calling the update function on each change will not work. Unless I make the function I add with add-watch unlock it or something, but then I still need some loop inside the update function to make it blocking.
I don't understand your last sentence but what's stopping you to use any blocking method? For example a core async channel? You "get" from it wherever you want to block, and you "put" to it from the watcher?
or any other blocking queue, or a Lock
thanks, don't know much of core.async, but blocking if nothing is available is exactly what I was looking for
core.async is probably overkill if you use it only for this
(let [l (java.util.concurrent.CountDownLatch. 1)]
  (future (Thread/sleep 2000) (.countDown l))
  (.await l))
I already needed it for something else, but there I just copy-pasted the code without really knowing what it was doing. I don't really like to use Java in Clojure, but it will probably be more performant than async, right?
more importantly it's simpler
a promise sounds like the best solution proposed so far
pun unintentional
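A hypothetical sketch of the promise-based approach being suggested: `add-watch` delivers a promise when the atom changes, and `deref` blocks until then (the names `wait-for-change` and `::waiter` are illustrative, not from the discussion).

```clojure
;; Block until an atom's value changes, using a watch + promise.
(defn wait-for-change [a]
  (let [p (promise)]
    (add-watch a ::waiter
               (fn [_key _ref old new]
                 (when (not= old new)
                   (remove-watch a ::waiter)  ; one-shot: unhook ourselves
                   (deliver p new))))
    p))

;; usage:
;; (let [a (atom 0)
;;       p (wait-for-change a)]
;;   (future (Thread/sleep 100) (reset! a 42))
;;   @p)  ; blocks, then returns the new value
```

No Java interop or core.async needed for the simple case, though the watcher-puts/consumer-takes channel version generalizes better to repeated updates.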
During rich hickey and stu holloway's talk at the conj, I think I remember them both making a case for in general, "ignoring" fields your code doesn't know about in some input data structure. Does that ring a bell? Anyone had much experience applying that pattern? Thinking about applying it in a case here but I'm concerned that it might lead to an error propagating to the point where it's hard to find the real cause (e.g. a typo in an actual expected field causing us to ignore a real developer error)
I had, we have tons of config files for our stuff and sadly have to manage lots of different deployed versions of the same services. Not having to bother about the format of the config stuff as long as we add to it is useful; it allows us to always be backward compatible from the ops side of things.
then the "typo" problem should be caught by your validation with :req :req-un for crucial stuff imho, or warnings yes
when we were using schema we had a lot of breakage caused by the strict validation (mostly devs: "why doesn't x work with that config file since I downgraded to backport patch foo")
@cddr that's another matter, makes sense to be more strict here I'd say, or defer to your "writer" to not send unwanted params (or send params blindly)
Yeah, code higher up in the stack could quite easily select-keys
to filter out unwanted stuff.
(into {} (map (fn [[k v]] [(keyword "foo" (name k)) v])) {:a 1 :b 2})
I guess you could do more or less the same with reduce-kv or zipmap (probably slower)
@conan @mpenet with specter (setval [MAP-KEYS NAMESPACE] "foo" any-map)
that's 3x faster and also doesn't change sorted maps into unsorted maps
@nathanmarz I have a question, why instead of uppercase symbols you didn't use namespaced keywords for special specter query identifiers/operators?
there are no special operators
everything is a first-class object
that implements the RichNavigator
interface
Still, with some macro magic it could have been done that way (since most of the surface api is composed of macros anyway)
that said, 3x the perf of into + xform for something like that is impressive (I am a fan/user anyway)
yes, but the additional indirection wouldn't get you anything
actually it would lose you compile-time error checking
if you misspell ALL
as ALLL
, you get a compile-time error now
with special keywords as navigators, you wouldn't get an error until runtime
yeah i can do it by walking the map as well, i want to namespace nested keys as well. the idea being to pull some non-namespaced JSON from several sources, and namespace them all nicely in my clojure app
There is a #sql room, but briefly: "best" isn't something you can get consensus on, but there seem to be two schools of thought that hold sway nowadays: SQL snippets in files à la hugsql, or SQL queries as data structures à la honeysql.
@donaldball thanks!
why would you define a function and declare ^:no-doc
. What purpose would this serve?
we're adding the ELK stack at work so i'm interested in use cases for your lib spandex
hey does (json/parse-string …) preserve order from: [cheshire.core :as json]
the order of the json
so if you store a vector as json will the cheshire parse retain the order?
not keys
["Samsung" "Colorado Time" "Electro-Mech" "Spectrum" "GV Pro Tables" "WatchFire" "BSN Sports" "Eversan" "Scoretronics" "Mitsubishi" "Fair-Play" "Yesco" "OES" "SignCo" "Sideline Interactive" "Panasonic" "All American" "Local Sign Dealer" "TS Sports" "EverBrite" "Sportable" "Toshiba" "Varsity" "LSI" "Sharp" "Daktronics" "Optec" "Allied Scoring Tables" "Varsity Image Scorers Tables" "Other" "Front Row" "PowerAd"]
json vector
haha true
but still having ordering issues when writing and reading from redis
@mpenet actually you're wrong 🙂 https://github.com/dakrone/cheshire/issues/73
mentioned in that ticket is that there's an implementation detail that smaller maps (7 and fewer i think) will "preserve order" because they are a different type than larger maps under the hood, but this is an internal mechanism and not to be banked on
Same in JSON btw. Don't rely on key order there either. It's up to the impl to do whatever it wants
> From RFC 7159 -The JavaScript Object Notation (JSON) Data Interchange Format (emphasis mine): An object is an unordered collection of zero or more name/value pairs, where a name is a string and a value is a string, number, boolean, null, object, or array. An array is an ordered sequence of zero or more values
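A small sketch of the distinction being made, assuming cheshire is on the classpath: JSON arrays round-trip to vectors in order, while object key order is an implementation detail you should not rely on.

```clojure
(require '[cheshire.core :as json])

;; Arrays are ordered: a vector round-trips in order.
(json/parse-string (json/generate-string [3 1 2]))
;=> [3 1 2]

;; Objects are unordered: the parsed map has the right entries,
;; but iteration order over its keys is unspecified.
(json/parse-string "{\"a\":1,\"b\":2,\"c\":3}")
;=> a map with entries {"a" 1, "b" 2, "c" 3}, key order not guaranteed
```

So for the redis use case above: store the vector, not a map, if order matters.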
what are midje prerequisites? is it a new way to do software development? I think there's something fundamentally deep here, but I have no idea what
https://github.com/marick/Midje/wiki/Describing-one-checkable's-prerequisites 'unfinished' seems to suggest otherwise
I've used midje for years, and I like it. It's a pretty cool approach to testing. That said, if I were learning a testing framework today, I'd probably put more effort into clojure.spec and generative testing.
@manutter51 they aren't mutually exclusive, are they?
qqq at a quick glance that looks like regular mocking / stubbing, unless I'm missing something?
@schmee: Hmm, this is the first time I'm seeing mocking/stubbing; maybe this is why I'm so surprised.
@cpmcdaniel right, they're not mutually exclusive at all, I'm just thinking priorities.
it is a very common approach in other programming languages, not so much in Clojure though
midje (and similar) test the "does it break when I do this" side of things, and clojure.spec/generative testing looks at the "what does it take to break this?" side.
here is an excellent talk about this from Conj 2015: https://www.youtube.com/watch?v=Tb823aqgX_0
one of the problems with bottom up is that sometimes I try to build a plane bottom up -- and end up having all the components for building a car.
yeah, doing things bottom-up really forces you to think through the design before you build anything, or that can happen 🙂
the problem probably is: I'm terrible at not-being-distracted and terrible at prioritizing -- so going "bottom up", I'm tempted to just wander around; whereas going "top down" there are unit tests telling me "get this piece of code to work"
because as you say, different design styles lead to naturally different ways of testing
I'm not suggesting "unit testing > generative testing." I am suggesting that having a bunch of failing unit tests may allow me to better achieve flow. The point is not testing; the point is always having concrete examples in my mind and using them to guide what to write next.
I can see the confusion though -- I started out asking about midje; when I should have asked about mocking/stubbing.
in 20/20 hindsight, let me change the question to: What is the best clojure mocking/stubbing framework to use?
I work with Ruby and I'm used to RSpec-style testing, and I still haven't found an approach I feel completely comfortable with in Clojure
on my last project I used https://clojure-expectations.github.io/introduction.html just to try something different
worse yet, mock/stub isn't the right word; oftentimes, people are trying to mock/stub OBJECTS/DATA ... I want to mock/stub NOT-YET-WRITTEN-FUNCTIONS; basically the "wishful thinking" of Abelson / Sussman
I guess you can write a function that returns a hardcoded value and use that as a sort of stub
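One lightweight way to do the "wishful thinking" style without midje is a placeholder var swapped out with `with-redefs` (a sketch; `fetch-user` and `greet` are made-up names, not from the discussion).

```clojure
;; Not-yet-written function: declared so callers compile.
(defn fetch-user [id]
  (throw (ex-info "not implemented yet" {:id id})))

;; Code written top-down against the wished-for function.
(defn greet [id]
  (str "hello, " (:name (fetch-user id))))

;; In a test, stub the unwritten function with a hardcoded value.
(with-redefs [fetch-user (fn [_id] {:name "Ada"})]
  (greet 1))
;=> "hello, Ada"
```

When the real `fetch-user` lands, the tests keep working and the stub can be deleted.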
if you had some data that always came in the exact same format and never needed to be extended to include new structures or fields, what would you use?
Also, you want to be able to provide a nice view to the people using it, probably in a clojure persistent data structure like a map
with defrecord it looks kind of like a map and is efficient, but I don't need equality testing or some other stuff
@bcbradley : I just use plain maps, as defrecord/deftype give me weird errors if I'm not careful with reloading.
you can close over a function by putting the (fn ...) in lexical scope of something else
@bcbradley what are you trying to accomplish?
@pesterhazy I'm making a game engine in clojure and am using GLFW for the windowing system. I have to reify instances of the interfaces here https://javadoc.lwjgl.org/org/lwjgl/glfw/package-summary.html (there are about 19)
Because of the way I set up the engine, the events are buffered up per frame and dealt with in one go-- I don't want the engine's architecture to gravitate around these asynchronous callbacks
So at some point there is going to be dispatch based on event type, during event processing
I want the user of the library to be able to define their own processing pipeline for the events, so that probably means I don't want to use protocols on the event types (because the event types don't define their own behavior, the user does)
hm not sure if it makes sense to use defmulti anywhere except at top level
@bcbradley I don't think I understood everything 100% but: It's perfectly fine to do a defmethod
in code, ie, not at the top level. So there you can also close over variables
in EDN can I reference back to a key in a map from earlier in the file? with tags perhaps?
Some example code will probably help. @bcbradley
@ccann There are a few config libraries that can do that, yes. I'm too lazy to search 🙂
(defmulti react class)
(defmethod react CharEvent [event])
(defmethod react CharModsEvent [event])
...
(defn events! [window]
(let [events (atom [])]
(GLFW/glfwSetCharCallback window
(reify GLFWCharCallbackI
(invoke [_ window codepoint]
(swap! events conj (CharEvent. window codepoint)))))
(GLFW/glfwSetCharModsCallback window
(reify GLFWCharModsCallbackI
(invoke [_ window codepoint mods]
(swap! events conj (CharModsEvent. window codepoint mods)))))
...
(defn future [timeline]
(let [now (peek timeline)
pending-events @(get-in now [:events :pending] [])
consumed-events (get-in now [:events :consumed] [])]
(-> timeline
(assoc-in [(- (count timeline) 1) :events :pending] (atom []))
(assoc-in [(- (count timeline) 1) :events :consumed] (count pending-events))
(conj
(->> now
(assoc :x :y)
(conj timeline))))))
(defn past [timeline] (subvec timeline 0 (max 0 (- (count timeline) 1))))
(defn present [timeline] (peek timeline))
(defn moments [initial-timeline] (map present (iterate future initial-timeline)))
(defn window! [x y name]
(when (GLFW/glfwInit)
(let [window (GLFW/glfwCreateWindow x y name 0 0)
moments (moments [{:events {:consumed [] :pending (events! window)}}])]
Moments is an infinite sequence of game states; the various events are accumulated in an atom. I'm trying to batch the handling of those events and allow the user to dispatch on the event type so they can handle it their own way
Most of this is still a work in progress but you should be able to get the gist of what I'm doing now
@bcbradley Silly question: why not translate the events into clj data structures (maps, vectors) and use core.async
for the event handling? That should be a nice API.
clj maps can take up to around 10x more space than they need (i'm not sure if it optimizes for very small maps or not)
space is important because the game engine i'm writing is unique in that it does not destroy previous states
Maps with less than 8 elements are backed by an array. But if memory is an issue then you might consider deftype
i guess the issue is basically this: there is no way to efficiently dispatch on a class type without multimethods or protocols
multimethods need to be defined with defmulti and defmethod, and it gets kind of wonky if you want to close over a defmethod
@bcbradley So if you want users to be able to react to event in realtime you can either use callbacks or core.async (if you want to provide a clojure friendly api).
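A rough sketch of the core.async shape being suggested, assuming the callbacks are turned into plain event maps (all names here are illustrative, not from @bcbradley's engine):

```clojure
(require '[clojure.core.async :as async])

;; Callbacks put plain maps on a buffered channel instead of
;; conj-ing records into an atom.
(def events (async/chan 1024))

(defn on-char [window codepoint]
  ;; would be called from inside the reified GLFWCharCallbackI
  (async/put! events {:type :char :window window :codepoint codepoint}))

;; The user defines their own processing pipeline, e.g. dispatching
;; on :type with an ordinary function or multimethod:
;; (async/go-loop []
;;   (when-some [e (async/<! events)]
;;     (handle-event e)
;;     (recur)))
```

Dispatching on a `:type` key in a map avoids the per-event-class reify/defmethod pairing while keeping the data open for users to extend.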
Is core.cache
considered the most "up-to-date" cache library, or should I be looking somewhere else
@psalaberria002 thanks!
i've been using flow types for javascript and it's a delight. any reason the same couldn't be done for clojure? core.typed doesn't seem quite the same, but I could be wrong
the conceptual idea of flow could be captured as a function and applied to some opaque data (could be a clojure persistent structure, a class, a primitive or whatever)
nothing stops you from making an algebra of different operations parallel to the conceptual ideas of flow in javascript
the benefit of that is that unadorned data is practically opaque (because of how you are using it), but not necessarily absolutely opaque (anyone can see the data and extend your set of operations with their own, if they want)
i mean just ensuring i'm doing null checks and passing the correct parameters into functions, since so many functions pass around maps, is really nice. i'm catching lots of errors immediately. i suppose spec is sort of handling this but there is something nice about getting red squigglies in your editor.
then again spec lets you do all kinds of checks that would be impossible at compile time, so that's the tradeoff i suppose
being able to model a flow with types is also nice. It serves as both documentation and also promptly indicates to the user of a function what errors they should be handling. With the added bonus that you can force someone to handle errors too. I find myself only thinking about the happy path when using Clojure or Java, etc., but with a more expressive type system, tackling the "unhappy" path head first is not just an afterthought (which sometimes only surfaces in production).
having said that, I don't know how Clojure would look with types though. It might not be as enjoyable as it currently is.
We've tried adding core.typed to our code base several times and given up each time.
A lot of very idiomatic code is extremely hard to declare types for that cover all the possibilities.
We've also tried using Schema for function signatures a couple of times and also given up on that (but for different reasons).
My gut feeling is that the bolt-on types are attempting to be more powerful than they need to be.
Both core.typed and schema are very expressive, but 90% of the time I just want a terse notation to make my code fail to compile if something's a String when I expect an Integer or similar.
I'd happily deal with it if the compiler said "I can't tell, you'll just have to take your chances at runtime" a lot of the time if it would catch simple bugs like (defn foo [^String bar] ...) (foo 1.0)
it's a tradeoff between the safety over dumb mistakes like this vs. the burden of trying to encode very complicated data structures and business-domain into the type system
in my experience, mistakes like these rarely leak into production, and when they do, the root cause is weak tests anyway. often those are not expensive to fix either
the real hairy and expensive ones are the ones you won't catch unless you can encode your entire business logic into the type system, and that's very hard. when people think "types", it can be on each side of the spectrum
e.g. clojure.spec and dependent-types almost reach in the middle, from different sides of the spectrum, in terms of expressiveness
Yeah, if you want to make business logic errors into compile time errors then the super-powerful type systems are the only way to go.
an awful lot of runtime errors can be eliminated if the type system could just say "you can't add a vec2 and a string"
@qqq the whole point is that, if you're working w/ types like "vec2" and "string", those aren't really a strong case for use of types
There've been plenty of times when a function expects vec2 for its third argument, I pass it a string, and only realize it at run time. I would love to know that at compile time.
I see. My point is that once you have more powerful type systems, you also abandon the notion of functions that work on bland types like these. Just look how many types there is to represent a string on Haskell (A: there's String, Text, ByteString, OverloadedStrings), and some libraries also introduce their own.
And with dependent types you push even more semantic into the type system, like creating a CustomerNameString that enforces a length property, such that concatenating two naive strings and passing along to a function that expects CustomerNameString won't type check
To be clear, I don't want all of my code to be typed. I want two groups of code that can easily call each other: the typed part uses a simple type system, like OCaml's -- no Haskell, no dependent types; the untyped part is as Clojure is right now. Then, whenever an untyped fn calls a typed fn, there's a dynamic check at the boundary. There are absolutely ideas that current type systems, even with dependent types, don't represent well -- and are better represented as untyped Clojure maps. However, there are other pieces of code where, if I had even OCaml's type system, it'd kill 99% of my bugs without reducing my productivity.
@qqq do you have an example of the kind of code that you kill 99% of the bugs w/ static type checking?
@hcarvalhoaves : bad code I write; often, after I finish figuring out what a runtime error is, I realize "oh, if I just had OCaml's type system, it would have been caught, at compile time, three stack frames up"
we're heavy users of schema where I work; in my experience it decreases the distance between where the error is and where it manifests. Otherwise I find complaining at compile vs. runtime equivalent for productivity (unless you're only checking runtime errors in production, but then again the root cause is another matter IMO)
I'm not arguing this is right for everyone. Some people have amazing attention to detail and just don't make simple type errors.
I think the biggest productivity gap is the IDE support, yes
otherwise my hypothesis is that the difference between a compiler showing you an error and runtime showing you an error is more psychological/philosophical than practical, if you don't get cryptic errors (e.g. schema does not match vs. NullPointerException)
One of the projects I work on still has some core.typed annotations in it left over from when we used it. Most annotations are like (t/ann foo [-> (t/Option String)])
and no more complex than that. It was pretty rare to have something that needed a complicated annotation. Most of those were for validating the presence of map keys.
Getting the type error at compile time is way more than a philosophical difference. If you want to catch those issues you either need static type checking or a bunch of tests and the former is much cheaper in developer hours.
IMO non-trivial test cases catch these mistakes, and you need them anyway unless you can encode everything in the type system, so that's my point. Of course you can attempt a balance. The argument of how much test surface is enough is age old, so I'll stop here because no conclusion will be reached 🙂
There's three types of programmers: (1) those who make type errors, but not logic errors (if it compiles, it runs); (2) those who make logic errors, but not type errors (apparently these people exist); (3) those who make neither kind of error (and are paid > 1M / year). Type systems are good for category 1 programmers, of which I am one.
A Haskeller would never agree that a program that fails to type-check is correct at any level 🙂 But I guess I get your point