This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2019-04-14
Channels
- # announcements (2)
- # beginners (88)
- # calva (17)
- # cider (25)
- # cljdoc (22)
- # cljs-dev (10)
- # clojure (194)
- # clojure-italy (1)
- # clojure-nl (8)
- # clojurescript (24)
- # data-science (11)
- # datomic (1)
- # fulcro (6)
- # jobs (1)
- # leiningen (4)
- # nyc (1)
- # off-topic (70)
- # pedestal (6)
- # quil (4)
- # shadow-cljs (59)
- # vim (8)
Is this guaranteed to match up elements in a set `s` with their mapped ones?
(zipmap s (map f s))
@qythium Yes, both `zipmap` and `map` call `seq` on their argument -- and `seq` returns the same ordering on repeated calls for the same object (collection).
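A small REPL sketch of why that guarantee is enough (the set `s` and function `f` here are arbitrary examples):

```clojure
;; Because `seq` yields a stable order for the same collection, zipping a
;; set with a mapping of itself pairs each element with f of that same
;; element -- whatever order seq happens to choose.
(def s #{:a :b :c})
(def f name)

(def m (zipmap s (map f s)))

;; every key maps to (f key):
(every? (fn [k] (= (m k) (f k))) s)
;; => true
```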
The following probably happens because of something in `with-in-str`. What would be the best workaround?
(with-in-str (str (char 13))
(.read *in*))
; => 10
Got it by binding `*in*` to a `StringReader` manually.
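A sketch of that workaround. The `10` above comes from the `LineNumberingPushbackReader` that `with-in-str` wraps around its input, which (via `java.io.LineNumberReader`) normalizes line terminators to `\n`; binding a plain `StringReader` bypasses it:

```clojure
;; Bind *in* to a raw StringReader so the carriage return (char 13)
;; comes through unmodified instead of being normalized to \n (10).
(defn read-raw-char []
  (binding [*in* (java.io.StringReader. (str (char 13)))]
    (.read *in*)))

(read-raw-char)
;; => 13
```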
actually, think i just answered my own question - guess they will be GC’d as usual, with no particular guarantees
In general, Clojure values are just Java objects, so nothing special. One possible exception is keywords, which are interned and shared across the runtime
what is the current recommended way to do desktop GUI in Clojure? Still Seesaw? I would like to avoid the ClojureScript/Electron route as much as possible. Also, are there any GUI designer tools that work with either Seesaw or whatever is more recommended now?
there are a couple other viable options:
- https://github.com/cljfx/cljfx
- https://github.com/fn-fx/fn-fx
but I haven’t done any Clojure GUI work, so can’t compare
hey there 🙂 Anyone know a small real-world code snippet that demonstrates why Clojure can't be statically analysed? (e.g. something `def`ing things based on a data file or similar)
There is a library that generates defs from files with SQL queries; I don't remember its name
oh right, that's a good one
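A hypothetical sketch in that spirit (the macro and names are made up, not from any real library): a macro that creates vars from data, so a static analyzer can't know `answer` or `greeting` exist without expanding, i.e. effectively running, the macro:

```clojure
;; A macro that defines a var for each key in a compile-time map literal.
;; Libraries that generate defs from SQL files pose the same problem for
;; static analysis, just with the data read from disk.
(defmacro def-from-map [m]
  `(do ~@(for [[k v] m]
           `(def ~(symbol (name k)) ~v))))

(def-from-map {:answer 42 :greeting "hi"})

answer    ;; => 42
greeting  ;; => "hi"
```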
public Seq(IPersistentVector v, int i){
this.v = v;
this.i = i;
}
seems to be about it
I'm not sure if a conversion takes place. IIUC, 'sequence' is an abstraction that collections implement. So certain functions will treat it sequentially
Well, it's a trie underneath, so there's a sort of conversion for any operation (depending on the size)
But, yeah, IMO it's not like a "conversion" to a different thing... more so a particular view over it.
The docs say that the vector is contiguous… does that mean they are placed side by side in memory?
@nathantech2005 persistent collections are trees of 32-element arrays
is 32 element the same as 32 bits?
So it's more likely the elements within one of those arrays are contiguous in mem than in other parts
It says the sequence is based on the ISeq, IPersistentList
Perhaps I'm wrong and there is a significant cost to using a collection as a sequence. But I didn't think it was any heavier than its most efficient ability to iterate across the collection.
My guess is you are correct. Because I’d guess that the sequence is just the edge nodes of the trie structure in memory
Like, a seq is printed as a list, but that's not to say the vector is being converted into a list first.
I am reading the docs. How are lists and vectors FIFO or LIFO?
One notable difference though in the docs… `conj` adds to the front of a list, but to the end of a vector
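A quick REPL illustration of that difference (the values are arbitrary):

```clojure
;; conj adds at whichever end is cheap for the concrete collection:
(conj '(1 2 3) 0)  ;; => (0 1 2 3)  front of the list
(conj [1 2 3] 0)   ;; => [1 2 3 0]  end of the vector
```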
I am highly interested in this… let’s thread the conversation
Seqs on vectors are just views into the vector and are highly efficient
@alexmiller so the sequence just walks down the edge of the trie?
Effectively, yes. I mean you can read the code, it’s pretty straightforward I think
Normally seqs have some overhead in caching the resulting sequence (effectively a linked or chunked linked list)
Vectors seq is more efficient as it just piggybacks the vector
but Clojure doesn’t use any linked lists underneath? Right?
Clojure lists are linked lists
wow… I thought they were array based
Sequences are usually linked lists
Chunked seqs are linked lists of arrays
So more memory efficient
ahhh ok. I must be reading conflicting info from the web
because some docs said that Clojure lists were array based, next to each other in memory, touching
Java arrays are
wait a second… why don’t I see any middle insertion methods on the Clojure ISeq (lists) if they are linked lists?
there is only conj, adding at beginning and end… which would point to using an array structure
In general Clojure provides only “efficient” operations
The efficient place to add elements to a linked list is at the head
I thought a linked list can have insertion done at any point, in constant time?
because it just looks up the memory address
I can’t just insert based on the nth term?
a linked list is really just a node pointing to another node
so I have to walk the linked list to find the nth term?
wow. I was really confused about that.
it's a tree without branching 😉
I am coining “unary tree”
why would anyone want to use one?
if the insertion writes are linear complexity… makes no sense to me
ahh, so you use them to chain functions together?
No, I just mean that because Clojure is a lisp we always use lists to apply functions, with the function in the operator position of the list
well, unless you want to prepend: that's O(n) in a vec and O(1) in a list. The moral of the story: different data structures have different trade-offs
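The same trade-off shows up in `peek` and `pop`, which always work at each collection's "efficient end" (a quick REPL sketch):

```clojure
;; peek/pop use the O(1) end of each structure:
(peek '(1 2 3)) ;; => 1       head of the list
(peek [1 2 3])  ;; => 3       tail of the vector
(pop '(1 2 3))  ;; => (2 3)
(pop [1 2 3])   ;; => [1 2]
```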
I must have read some erroneous docs… that said Clojure uses arrays, contiguous, for sequences… when I was comparing it to Elixir
https://clojure.org/reference/sequences read up on sequences here.
I think Lennart was the one who pointed out linked lists don’t do well on the CPU l1, l2 cache
http://insideclojure.org/2015/01/02/sequences/ and http://insideclojure.org/2016/03/16/collections/ are some longer things I’ve written that you might find useful
because they can be spread all over RAM and can’t be loaded in one step like arrays can, contiguous
I did not, but I have read it sometimes
And some of the faq entries https://clojure.org/guides/faq
that's true. and Clojure does some tricks mentioned earlier (chunking) to bridge this gap. The gain you get is that lazy sequences can be used interchangeably with standard collection types when this interface is pervasive.
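You can see the chunking mentioned above directly at the REPL (a small sketch):

```clojure
;; Vector seqs are chunked: elements are realized 32 at a time from the
;; underlying 32-element array nodes, which is friendlier to the CPU
;; cache than walking a plain one-node-at-a-time linked list.
(chunked-seq? (seq (vec (range 100)))) ;; => true
(chunked-seq? (seq '(1 2 3)))          ;; => false
```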
if you care about cache misses, you are fairly deep into optimising tho, first write code, then make it fast 😉
Elixir uses linked lists as their basic type, for lists… and because the CPU can’t grab the data in 1 step… it makes threading less efficient… when the CPU has to lock memory… and do several steps to get the linked lists from all over RAM
I think this is a reason why Clojure is faster and has lower level threading models
@lennart.buit but you posted that info on Java Fibers… Which is like Actors… but it sounds lower level too. If clojure takes advantage of Fibers… I think that will be fantastic
@clojurians-slack no shared memory
the only way to share is off-site
or through message passing
this is a big reason I love clojure. We can thread stuff with shared memory… and we can also setup the actor model
This is the reason Clojure is better at number crunching… But can also handle horizontal scaling with Actors. And Fibers can be a game changer for us.
correct. Elixir is setup for distributed computing
I am not knocking it. It’s great for its purpose. But I do think Clojure is more robust for number crunching, and horizontal and vertical scale
ya, it could make Clojure scale horizontally from day 1, without extra setup… just like Elixir
i remember tim baldridge recently laying out why Clojure is not great for number crunching. I think it was on a reddit post about game design in Clojure?
@dpsutton Clojure isn’t good for number crunching?
read the discussion here: https://www.reddit.com/r/Clojure/comments/b7zymd/sean_murray_at_gdc_2019_a_new_perspective/ tim's username is halgari
i've never done heavy number crunching in clojure. so i like to listen to arguments and experiences from people much smarter than me 🙂
immutable data structures will always be at a disadvantage when it comes to raw efficiency
@dpsutton he is talking about game programming (C, C++)
Well, the flexibility of persistent data structures is never going to be as fast as typed arrays
Games can’t handle the pauses in Garbage Collectors
I agree with @lilactown
Clojure uses tricks though… like lazy-seq
Games aren't unique in realtime programming. Any allocation strategy is going to fail if you abuse it.
I think pauses in the garbage collector is ok for non user facing stuff, e.g. machine learning
I’d gladly make the trade-off if it means I don’t code it in C
I benchmarked a small test. Clojure lazy seq is as fast as C.
i'm assuming your terminal printing was the same speed in a C program as with clojure 🙂
my clojure lazy seq on the nth term of fibonacci was as fast as C
around 1 second to find the 40th term
well, I did use Cython, from a Python recursion script
I didn’t write it in raw C
also, comparisons of language A is as fast as language B usually highly depends on your implementations in A & B. I can write number crunching Python thats faster than C if I butcher the C impl.
I am going to write it in raw C, then compare it to Clojure.
but Clojure optimizes recursion with a lazy-seq. So I cheated a little.
let me write some more today… and try a naive recursion in Clojure compared to C. I am sure C is faster in that case though
it's a totally different algo. you did the count up version in clojure and the naive exponential version in python
ya, it’s not accurate enough. I cheated.
lol. well let's not claim Clojure is as fast as C if there is 1) no C and 2) different algos used in the Python versus Clojure versions 🙂
I think I should restate it as… Clojure can use tricks to speed up to C level performance
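For reference, the lazy-seq "trick" in question is usually something like this: a lazy, linear-time Fibonacci instead of the naive exponential recursion (a common idiom, not the exact code from the benchmark above):

```clojure
;; A self-referential lazy sequence of Fibonacci numbers. Each element
;; is computed once, so (nth fibs n) is linear, not exponential.
(def fibs
  (lazy-cat [0 1] (map + fibs (rest fibs))))

(nth fibs 40)
;; => 102334155
```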
I’ll rewrite it today, with a more accurate comparison… and share it
I am biased towards Clojure… you can see my bias
Like, could a fiber backed atom be as simple as a cljs atom? Or does interruptibility on the jvm complicate that?
In general, Clojure should be approximately as fast as Java. JIT-optimized Java is generally fast enough for the majority of programming uses. Where it’s not, there are options to use faster libs as a consumer.
My very sloppy Clojure use is 80-99% as fast as Java. Once in a while I tune it up with just a tiny bit of effort. Mostly it is sufficient.
ha. can you find the point where clojure's single computation is faster than c's recursion?
There may be certain kinds of problems where persistent data structures are already the most performant abstraction. Like an editor that can edit files larger than what's in memory. I think you have to implement some shared-structure thing anyway.
@dpsutton hahahaha this is an interesting comparison.. working on it now
persistent data structures are a convenience for programmers to ensure certain types of design correctness
there are lower overhead ways of ensuring correctness but they cost developers more at design/review time
It doesn't have to be only about correctness. the shared structure of persistent data structures allows for efficiencies that are harder to achieve without that shared structure.
persistent data structures may be the minimal polynomial complexity for certain kinds of problems that have shared structure
There's this YouTube video of a guy who built an editor using persistent data structures implemented in C++. They call them postmodern data structures lol
This might be it https://youtu.be/sPhpelUfu8Q
another efficiency perk of Clojure's persistent data structures: snapshots. So if you have some problem that needs a consistent view of snapshots of data across some threads, mid-computation, then persistent data structures already solved the problem for you. People rarely use the snapshot capability though AFAIAA
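A minimal sketch of that snapshot property: "modifying" a persistent collection leaves every earlier version intact, so any thread holding the old value keeps a consistent view with no locking:

```clojure
;; v2 shares structure with v1, but v1 is untouched -- a free snapshot.
(def v1 [1 2 3])
(def v2 (conj v1 4))

v1 ;; => [1 2 3]
v2 ;; => [1 2 3 4]
```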
Why would a function have a `*` at the end of its name? My guess is that it’s a convention indicating that the function generates a lazy sequence. For example `evens*`.
It's often used to indicate an "implementation" function. You'll often see it as the name of a function behind a macro.
You'll also see it for the implementation of some special forms in Clojure itself. Both `fn` and `let` are technically "Special Forms" but they are implemented as macros that expand to "calls" to `fn*` and `let*` respectively (which are implemented directly in the compiler).
Where I've also seen it used a lot is for functions that implement a cache: if `foo` is the cached version of the function, it will often be implemented in terms of `foo*`, which will be the uncached version of the function.
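A hypothetical sketch of that caching convention (`slow-add*` and `slow-add` are made-up names, not from any real library):

```clojure
;; slow-add* is the real (uncached) implementation; slow-add is the
;; memoized public face, following the foo / foo* convention.
(defn slow-add* [a b]
  (Thread/sleep 100) ;; stand-in for expensive work
  (+ a b))

(def slow-add (memoize slow-add*))

(slow-add 1 2) ;; => 3 (computed the first time)
(slow-add 1 2) ;; => 3 (served from the cache)
```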
Does that help @aryyya.xyz ?
@seancorfield Hmm. I think I understand. In other languages I would expose the modules function with a friendly name and then have another function behind the scenes that does the actual work (because it needs extra parameters for example). So this is a convention for that sort of pattern?
Yes, it's a fairly common pattern for implementation detail functions.
@seancorfield In JavaScript I would usually pre- or post-fix an `_` for this same purpose.
Yeah, I've seen that convention in C/C++ as well.