This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2019-01-02
Channels
- # announcements (1)
- # beginners (15)
- # calva (6)
- # cider (72)
- # clojure (105)
- # clojure-europe (2)
- # clojure-france (1)
- # clojure-italy (4)
- # clojure-nl (2)
- # clojure-uk (32)
- # clojurescript (14)
- # code-reviews (10)
- # cursive (8)
- # data-science (2)
- # datomic (38)
- # events (1)
- # fulcro (31)
- # graphql (1)
- # hyperfiddle (47)
- # java (4)
- # jobs (4)
- # off-topic (18)
- # overtone (2)
- # parinfer (12)
- # pathom (19)
- # pedestal (4)
- # philosophy (2)
- # portkey (22)
- # re-frame (42)
- # reagent (1)
- # rum (1)
- # shadow-cljs (36)
- # specter (3)
- # tools-deps (2)
@rutledgepaulv if you're taking code as input at runtime, I think that's basically the definition of what eval is for
if the code has unbound locals, which I think is what you're describing, I've used a trick to wrap it in a function that takes the named arguments you want to supply e.g.,
(eval `(fn [{:keys [~@some-arg-names]}] ~the-expression-with-unbound-locals))
then you can call that function with a map
I think that would do the trick. thanks for the idea!
should’ve known I was trying to do too much in one macro + eval instead of just using a function lol
thanks @gfredericks that allowed me to do what I was trying to do.
I’m not actually trying to rebind merge to inc.. just an example. lol
though a “guess what function I replaced your original function with” game sounds fun
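The trick described above, spelled out as a runnable sketch (the names some-arg-names and the-expression are made up for illustration):

```clojure
;; Wrap an expression containing unbound locals in a fn whose map
;; argument destructures exactly those names, then eval the whole form.
(def some-arg-names '[x y])
(def the-expression '(+ x y))

(def compiled
  (eval `(fn [{:keys [~@some-arg-names]}] ~the-expression)))

(compiled {:x 1 :y 2}) ;=> 3
```

The syntax-quote keeps the destructured names unqualified because they come from a plain quoted vector, so the eval'd fn binds them as ordinary locals.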
Is there something like https://github.com/lightyear/sql-logging#options but for Clojure? tldr it helps you see if you have costly SQL being emitted, and actually points out the culprit LOC
On second thoughts, probably this library wouldn't make that much sense in Clojure, given "magical" ORMs aren't the norm here... so emitted SQL should have no surprises ...but still I can see myself using some simple metrics, e.g. query x took 30ms
@vemv I believe there are Java libraries that can act as a proxy for the actual JDBC driver and provide that sort of statistical logging.
User guide http://ttddyy.github.io/datasource-proxy/docs/current/user-guide/index.html
If you have any problems integrating it with clojure.java.jdbc, ping me in the #sql channel and I'll see if I can help.
@vemv you can always just hook the function yourself too if you can find the place you want to instrument
np! and if you want to see what code led to it you could create an exception and parse the stack trace (or I think later versions of java have mechanisms to reflect on the stack without making exceptions). Obviously this solution has potential issues with laziness or the arguments not containing the info you want, but it’s often a quick easy option for local dev and can be applied to almost any clojure library.
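A minimal sketch of the hooking idea above, using a throwaway Exception to capture the caller's stack frame. The names are illustrative, and the frame offset (2 here) skips the wrapper's own frames and may need adjusting in your setup:

```clojure
;; Wrap any function so each call logs the stack frame that invoked it.
(defn log-call-site [f]
  (fn [& args]
    (let [frames (.getStackTrace (Exception.))]
      ;; frames 0-1 are this wrapper; frame 2 is (roughly) the caller.
      (println "called from:" (str (aget frames 2)))
      (apply f args))))

;; Usage sketch: instrument a var in place, e.g.
;; (alter-var-root #'my.db/run-query log-call-site)
```

As noted above, laziness can move the interesting frames around, so this is a quick local-dev tool rather than a robust tracer.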
a leading zero often means that the following number is octal
8, however, is not a valid digit in octal
(read-string "010") is also 8 (base 10) instead of the 10 (base 10) you would have expected
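The reader behavior in question, for the curious. Clojure also has an explicit NrDIGITS radix syntax (N from 2 to 36) that avoids the leading-zero ambiguity entirely:

```clojure
(read-string "010")  ;=> 8  (leading zero means octal)
(read-string "0x10") ;=> 16 (hex)
(read-string "8r10") ;=> 8  (explicit radix notation, unambiguous)
;; (read-string "08") throws NumberFormatException, since 8 is not an octal digit
```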
WHY they decided to have 0 be the prefix for octal instead of 0o is beyond me, as the hex/binary prefixes are 0x and 0b respectively…
Because everything dating back to C did it. Technically back to B, C's predecessor. B's predecessor BCPL used # for octal but B used that as an operator. So leading 0 means octal in almost every language out there today.
Yeah, but, “because language X did it” is not a technically sound argument IMO 😉
Octal was much more common back in the day. I worked with octal a lot. Then later "graduated" to hex. So those early conventions were very strong for a generation of programmers, maybe even two generations. And those were the folks that designed and built all the languages we use today.
Right, it makes sense. But endlessly sticking with the conventions of your predecessors ends up with situations where newer developers wonder why a leading zero changes read-string to a base that is hardly used anymore.
At some point, someone needs to decide that that is crazy, and throw it out of (newer) languages
BCPL -> B -> C. There could have been a P but instead we got C++ and D. Although we did also get PL/1. Programmers are such wits 🙃
Since BCPL was influenced by CPL which in turn was influenced by ALGOL 60, I guess you could say A was ALGOL? 😀
I wonder if I'm the only one here who has programmed in BCPL and ALGOL 60? 😕
(Fun fact: Rust, which is a relatively young language, chose to adopt 0o for its octal syntax)
So obviously people are not "endlessly" sticking with conventions of predecessors?
Using 0o, 0b, 0x as prefixes is certainly more consistent.
Haha, you are right andy. I actually made my statement before looking up whether newer-generation languages would have adopted it. As often, Rust serves as a positive surprise
Swift also goes for 0o whereas Go goes for the traditional 0. Interesting
Is leading 0o used for anything else? It would be nice if we could see older languages adopting that pattern too. At least then, if 0o and 0 both represented octal, people could start writing code where it was clearer that a number was intended to be octal
I would say that alternatives in syntax are actually worse than having inconsistent, but only one, syntax
Yay Java. I remember that one hitting me way back when I was doing Java for my degree...
if i have an input array of strings, and i want to pick those that are also in a predefined set of valid strings, is it better to use (filter #{"valid1" "valid",,,} input) or some other way?
I think with the set is quite elegant
def the set of accepted strings somewhere, or make a predicate is-allowed-string? I guess
yeah that's the plan. i was wondering if for example set/intersection would be a better choice?
intersection would work too if your input is a set
(remember, all set operations are explicitly undefined for non-sets)
i'd need to convert the input to a set, making me again wonder if that's faster than running distinct on it
test it ^^
Benchmark it with criterium
Unless performance is a bottleneck, pick the appropriate semantics.
i.e, if order doesn't matter and you don't care about duplicates
use clojure.set/intersection
in the normal case it isn't (as only 3-4 elements should be in the input), but i'd rather not accidentally create an attack vector
else (filter #{"input1" "input2" ...} input)
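Side by side, on made-up toy data, the two approaches differ in exactly the semantics mentioned above:

```clojure
(require '[clojure.set :as set])

(def valid #{"a" "b" "c"})
(def input ["b" "c" "d" "c"])

(filter valid input)                 ;=> ("b" "c" "c")  keeps order and duplicates
(set/intersection valid (set input)) ;=> #{"b" "c"}     order and duplicates gone
```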
distinct has an … interesting implementation
user=> (time (->> input distinct (filter valid)))
"Elapsed time: 0.229339 msecs"
user=> (time (->> input distinct (filter valid) sort))
"Elapsed time: 0.581802 msecs"
user=> (time (set/intersection valid (set input)))
"Elapsed time: 0.219935 msecs"
user=> (time (sort (set/intersection valid (set input))))
"Elapsed time: 0.199401 msecs"
why is sort so slow? the resulting filtered collection in the second case is only 4 elements, why is it taking such a toll?
you have to be careful on some of those - filter is lazy and you aren’t forcing realization
so the first timing isn’t doing the filter at all
all of the other ops here are eager so will force realization
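To illustrate the laziness caveat above (data here is made up):

```clojure
(def valid #{"a" "b"})
(def input ["a" "x" "b"])

(time (filter valid input))          ; times only building the lazy seq
(time (doall (filter valid input)))  ; doall forces realization, so the
                                     ; filtering work is actually timed
```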
timing a single instance of any of these is also likely meaningless
doall would work
but really you should time 1000s of instances of these ops to compare
like (dotimes [i 5] (time (dotimes [i 1000] (->> input distinct (filter valid) sort))))
the outer loop is 5 trials, the inner loop is 1000 reps per trial
you’ll usually notice it gets faster as the jit warms up
critierium is a fancier version of this
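A criterium sketch of the same comparison, assuming criterium is on the classpath (e.g. the criterium/criterium dependency):

```clojure
(require '[criterium.core :as crit]
         '[clojure.set :as set])

(def valid #{"a" "b" "c"})
(def input ["b" "c" "d"])

;; quick-bench runs warm-up rounds plus many samples, then reports the
;; mean time and std-deviation, and warns if the JIT is disabled.
(crit/quick-bench (doall (filter valid input)))
(crit/quick-bench (set/intersection valid (set input)))
```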
hah, I did not expect that. I expected converting to a set to be a pricey operation
user=> (time (dotimes [m 10000] (->> (input) distinct (filter valid) sort)))
"Elapsed time: 693.849881 msecs"
user=> (time (dotimes [m 10000] (sort (set/intersection valid (set (input))))))
"Elapsed time: 279.379693 msecs"
i didn't expect this big a difference
input being a list?
try making it a set in the first one, i wonder if that'll speed it up at all
i suspect using the filter set predicate is using some sort of set-based stuff under the hood that makes comparisons to other sets much faster than comparing against a list
the only issue is if your input has duplicates
and you need them for some reason
user=> (time (dotimes [m 10000] (->> (input) set (filter valid) sort)))
"Elapsed time: 272.51252 msecs"
user=> (time (dotimes [m 10000] (->> (input) set (set/intersection valid) sort)))
"Elapsed time: 271.146103 msecs"
basically no difference
@vale Just FYI, (time (dotimes ..)) doesn't always give you accurate benchmarks -- check out criterium for a better approach.
(uh-oh, is Alex going to correct me?)
with a big enough N and enough sample points, dotimes is sufficient, imo
the fact that it loudly warns you if JIT is turned off is also a bonus
I find the benefit of NOT using criterium is that you can see when the jit is sufficiently warm, as the samples stabilize
generally this is much faster than using criterium
I also don’t find that usually the gc stuff in criterium is worth much
and if you don’t use leiningen you never run into the jit thing :)