This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2022-02-28
(def b {false 0
        true 1})

(defn pow
  "pow is an APL function that applies f to a n times"
  [n f a]
  (letfn [(stop [_ _ x] x)
          (id [x] x)
          (start [m t]
            (let [done? (b (> m 1))
                  again ([stop pow] done?)
                  doit  ([id f] done?)]
              (again (dec m) f (doit t))))]
    (start n a)))
(pow 3 #(* 3 %) 3)
;; => 27
(pow 0 #(* 3 %) 3)
;; => 3
Branchless repeated application, although this example hits integer overflow even at fairly small values of n
Is there a way I might flatten the call stack?
Why is that?
I suppose what I meant is that this is pure function application with no "shortcutting" or control flow expressions, like cond, if, when, and the like
If there is branching happening under the covers, I wouldn't know, and I'd be interested to hear where that's happening
Jvm bytecode doesn't expose simd stuff, so branchless code just makes things weird looking for little benefit.
if you are trying to write branchless code to do simd stuff, at that level a jvm method call (which lookup on a vector is) is going to be a branch (I don't do too much below the level of jvm bytecode, but I am pretty sure that is the case)
Like, bytecode wise, the jvm doesn't give you any conditional operation that doesn't involve branching
But I'm not using any conditionals or control flow statements
Like, when you invoke a vector like that, this code runs https://github.com/clojure/clojure/blob/master/src/jvm/clojure/lang/APersistentVector.java#L180
you mean vector indexing isn't just indexing?
But even if you were using arrays, below the surface level of the jvm, those array accesses are also going to get a bounds check (which the jit may or may not optimize away)
Well my purpose is just to monkey around with functional patterns
(pow 111111111 #(+ 1 % ) 5)
;; => 110101105
I can call on big inputs using what's in the paste
If it is to take advantage of the processor's SIMD features, the JVM is just not going to cut it (I believe there are some experimental vector APIs in the latest JVM release that you can turn on with a flag, maybe)
I'm coming from APL and I was just wondering what it would look like to implement one of APL's primitives, ⍣ (power), where ((f⍣n) a)
applies f to a n times, each application using the previous result
and I was trying to see if there was a strictly "data flow" way to implement it
iterate accumulates results
yes, power doesn't
I was not suggesting that iterate is pow, but that iterate is a similar thing on clojure to pow in apl
({1 + ⍵}⍣10) 0
10
this applies 1 + to the argument 10 times
({2 × ⍵}⍣10) 1
1024
but accumulation is easy to add
({⍵, +/ ¯2↑ ⍵}⍣10) 0 1
0 1 1 2 3 5 8 13 21 34 55 89
This gives the first 10 iterations of Fibonacci, for instance
(pow 10 #(apply list (apply + (take 2 %)) %) [1 0])
;; => (55 34 21 13 8 5 3 2 1 1 0)
or using the pow I just implemented
but maybe I don't want accumulated results, maybe I just want the last result
which iterate for large inputs would mean getting the last item in a linked list
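For what it's worth, a common way to take just the n-th result of iterate without retaining the whole seq is nth (a sketch, not from the thread):

```clojure
;; nth walks the lazy seq without holding its head, so it is the usual
;; way to get only the final value of n iterations
(nth (iterate #(* 3 %) 3) 2)
;; => 27
```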
And if you implemented pow in the standard recursive definition with if and a base case and a recursive case it wouldn't consume stack
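A sketch of that standard definition, presumably using recur so it runs in constant stack; it mirrors the outputs of the pow above, which applies f (max 0 (dec n)) times:

```clojure
;; branchful but stack-safe version of the same function
;; (name pow-recur is mine, not from the thread)
(defn pow-recur [n f a]
  (if (> n 1)
    (recur (dec n) f (f a))
    a))

(pow-recur 3 #(* 3 %) 3)
;; => 27
(pow-recur 0 #(* 3 %) 3)
;; => 3
```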
of course, but then I wouldn't have discovered an interesting functional pattern 😛
But I'm not using if or recur 😛
The point isn't the semantics, the point was just to see if this would be possible with pure function application
like Church numerals aren't exactly performant
But they're fascinating anyway
Then just encode booleans as functions, then you can write less tortured looking code while still saying it is all functions
(pow (b (> 0 1)) someFunction input)
applies someFunction to input 0 times
yes, so the point of encoding booleans as 0/1 is to conditionally apply a function without a branch
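A minimal sketch of what booleans-as-functions could look like here (names are mine, not from the thread): a Church-style boolean is itself the selector, so no vector lookup is needed.

```clojure
;; Church-style booleans: true returns its first argument, false its second
(def church {true  (fn [t f] t)
             false (fn [t f] f)})

;; conditionally apply a function, selecting with a function
;; instead of indexing into a vector
(defn maybe-apply [pred f x]
  (((church pred) f identity) x))

(maybe-apply (> 2 1) #(* 3 %) 5)
;; => 15
(maybe-apply (> 1 2) #(* 3 %) 5)
;; => 5
```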
Sure that's a fair point, and I might do that if I plan to use this in live code somewhere
I suppose that's a good lesson learned from this experiment: the internal representation doesn't have to reflect the API
how do I read from out?
this profiler says "the following is printed to out"
It isn't being printed out
cider in emacs
So likely the messages are going to the stdout of the jvm process, not to the repl buffer
Because nREPL runs in a client/server setup, it is pretty much always the case that *out*
is not the same thing as the server (JVM) process's stdout
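A quick way to see the difference (sketch): *out* is a dynamic var bound per nREPL session, while System/out is the JVM process's stdout stream.

```clojure
;; goes to *out*, i.e. whatever the nREPL session has bound
;; (typically the REPL buffer in the client)
(println "hello via *out*")

;; goes to the JVM process's stdout, e.g. the server's own output buffer
;; or wherever the launching process captures it
(.println System/out "hello via System/out")
```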
so how do I read from that?
Well I want to know what the output is somehow
I'm just using default installation of cider, I don't know anything much more than getting it installed
If cider is launching the JVM and managing it within Emacs, then the JVM's stdout should go to a buffer
Well I've looked through my buffers, I don't have anything extra
I don't know anything about any of that
I was told lein isn't the modern way of managing dependencies, so I don't know anything about lein
or who is doing what with the JVM
So, that is all assuming that the tufte code is actually running and printing stuff out somewhere
It's running, or at least I've imported the references, and I am calling the macros it exports
(profile {}
(p :pow (kata/pow 111111111 #(- %) 5)))
This does stuff, there's just no expected output
a println just before that p form does in fact print
Oh, I seem to have failed to run that println handler
I had copied that
I guess I mispasted ... Apologies
(profile {}
(dotimes [_ 10]
(p :last (last (take 111111111 (iterate #(- %) 5))))
(p :iter (take 111111111 (iterate #(- %) 5)))
(p :pow (pow 111111111 #(- %) 5))))
;; => nil
pId nCalls Min 50% ≤ 90% ≤ 95% ≤ 99% ≤ Max Mean MAD Clock Total
:last 10 12.66s 13.06s 13.64s 13.71s 13.71s 13.71s 13.13s ±2% 2.19m 64%
:pow 10 7.15s 7.35s 7.83s 8.13s 8.13s 8.13s 7.45s ±3% 1.24m 36%
:iter 10 2.88μs 3.38μs 4.82μs 41.09μs 41.09μs 41.09μs 7.18μs ±94% 71.83μs 0%
Accounted 3.43m 100%
Clock 3.43m 100%
How would you get the final value from the take approach? I tried using last
and you can see the results. I also tried the other naive approach:
(first (reverse (take ...)))
and that blows the heap so I can't even profile it
of course :iter is much faster, insignificant time compared to the others, but actually getting the :last result takes nearly double the time of the inefficient :pow
Trying into []
I suppose I should have expected these results
(p :vec (last (into [] (take 111111111 (iterate #(- %) 5)))))
:vec 1 18.13s 18.13s 18.13s 18.13s 18.13s 18.13s 18.13s ±0% 18.13s 72%
Hi, what is the best course to get started with ClojureScript? With functions too, such as atom
I think Learn Reagent by @U8A5NMMGD is a good one. Perhaps combined with a Clojure(Script) book and/or tutorial
Protocols are great for objects. But what is the best approach for static interfaces (when I never have instance members or use this)? Should I just use a map of anonymous functions?
I am. I have several implementations of an API (the implementations are just collections of defs). And I want to write a parameterized test suite that takes an implementation as an argument, so I can reuse the same suite across the different implementations.
I've made plugin kind of things in the past that tried to avoid a kind of this or self argument, so all static, and it is painful
How do you mock/stub for tests, and when business requirements changed and you need two implementations running at once, etc
To give a bit more context. What I’m trying to do is write tests of Datomic, datascript, datahike, asami and other Datomic like databases. So from a test I want to be able to create databases, transact on and query them.
If I do it with protocols I will have an implementation object that just has references to create-database, connect, transact, and q of each implementation. But that seems to not be what protocols are meant for.
That actually seems to be what protocols are for, in my opinion. But I question whether those underlying things all satisfy any one protocol. Can you program against an interface without caring about the underlying db? My gut is that you cannot
Ok, I’ll try both and report back. Thank you! 🙂
I agree with @dpsutton that seems like a protocol (and I disagree that you won't have a this :)
Thank you both 🙂
The thing is I end up with this. And that doesn’t seem right to me:
(defprotocol Implementation
(create-database [this db-name])
(connect [this db-name])
(transact [this connection data])
(make-db [this connection])
(-q [this query params]))
(defrecord DatomicImpl []
Implementation
(create-database [_ db-name]
(datomic.api/create-database (str "datomic:mem://" db-name)))
(connect [_ db-name]
(datomic.api/connect (str "datomic:mem://" db-name)))
(transact [_ connection tx]
(datomic.api/transact connection tx))
(make-db [_ connection]
(datomic.api/db connection))
(-q [_ query params]
(apply datomic.api/q query params)))
where this falls apart is assuming tx and query are the same for all impls of this protocol
I ask myself: if I'm never using this, should I be making a record? Why not just a map of functions?
each impl should be in charge of translating into local specifics, and issuing the query/transactions
Yes definitely. But that was also my purpose. To write tests that the implementations are doing the same. And find the cases where they’re not.
if the users of a protocol implementation know which specific implementation is being used, something is wrong
even if you never use it in the implementation, it will be used in the callers when selecting which implementation to use
Yes the equivalent map of functions looks like this
(def map-impl
{:create-database (fn [db-name] (datomic.api/create-database (str "datomic:mem://" db-name)))
:connect (fn [db-name] (datomic.api/connect (str "datomic:mem://" db-name)))
:transact datomic.api/transact
:db datomic.api/db
:q datomic.api/q})
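A hypothetical caller of such a map, binding just what it needs via destructuring (stub functions stand in for datomic.api here so the sketch is self-contained):

```clojure
;; stand-in for the real map of functions above
(def stub-impl
  {:create-database (fn [db-name] (str "created " db-name))
   :connect         (fn [db-name] (str "connected " db-name))})

;; the caller destructures only the operations it uses
(let [{:keys [create-database connect]} stub-impl]
  [(create-database "test-map")
   (connect "test-map")])
;; => ["created test-map" "connected test-map"]
```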
And in the caller I think it’s nicer because I can just use destructuring to bind the functions I need.
that doesn't really resonate with me: with protocols you don't have to destructure at all, the names are top-level
and have call-site caching for better performance
Ok thanks. It’s just that here the impl parameter bothers me a bit, as I know it’s not being used in the implementation object.
_ (create-database impl "test-map")
connection (connect impl "test-map")
agree about “care about knowing”. If you have to know which implementation you are doing, probably better to just make regular functions that are specific in your test setup. I had feared a bit about this earlier with “But i question whether those underlying things all satisfy any one protocol.”
In @soren's case above, what would the benefits/drawbacks be of instead using multimethods like this?
(defmulti create-database
(fn [{:keys [db-type]} db-name] db-type))
Isn't that a more data-centric representation of a database?
conceptually, I think these are the same - they're both opportunities for open polymorphism
protocols are narrower in use, relying on the type of the first arg (but faster, and they encompass multiple functions); multimethods are more general, single-method oriented
in this case, the ability to bundle methods probably makes it a better choice, but you could just as easily do it with multimethods
multimethods don't have a good story for local anonymous implementations (mostly useful when testing)
Wouldn't this be a sufficiently local story?
(defmethod create-database ::test-db-type
[_ db-name]
;; ...
)
it isn't local though, have you ever tried to get a pull request through code review that has a defmethod inside a deftest?
suppose I want to create a v2 ns of an existing ns. I would like the v2 to support the same API as the existing ns. Should I just def all the vars from v1 to v2? e.g. (def foo v1/foo)
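A sketch of that def-forwarding approach (the namespace names are hypothetical); one caveat worth knowing is that a plain def forwards only the value:

```clojure
(ns myapp.v2
  (:require [myapp.v1 :as v1]))

;; forwards the var's value, but metadata such as :doc and :arglists
;; stays on the v1 var and is not carried over
(def foo v1/foo)
```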
if foo can be def'ed unchanged from v1/foo, then leave it where it is and use v1/foo when needed
that works
although I do think it would be convenient for callers to just have one ns to worry about. One API.
refer, perhaps
or (ns my.ns (:use …))