
Is this guaranteed to match up elements in a set s with their mapped ones? (zipmap s (map f s))


@qythium Yes, both zipmap and map call seq on their argument -- and seq returns the same ordering on repeated calls for the same object (collection).
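A quick REPL sketch of why this works (the set contents here are illustrative):

```clojure
;; seq returns the same ordering each time for the same collection,
;; so the nth key lines up with the nth mapped value:
(def s #{:a :b :c})
(zipmap s (map name s))
;; evaluates to a map equal to {:a "a", :b "b", :c "c"} --
;; whatever order seq picks, each key k is paired with (name k)
```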


The following probably happens because of something in with-in-str. What would be the best workaround?

(with-in-str (str (char 13))
  (.read *in*))
; => 10


Got it by binding *in* to a StringReader manually.
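For reference: with-in-str wraps the string in a clojure.lang.LineNumberingPushbackReader (built on java.io.LineNumberReader), which normalizes \r and \r\n to \n -- hence the 10. A sketch of the StringReader workaround mentioned:

```clojure
;; binding *in* to a raw StringReader skips the line-terminator
;; normalization done by with-in-str's LineNumberingPushbackReader:
(binding [*in* (java.io.StringReader. (str (char 13)))]
  (.read *in*))
;; => 13
```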


what are the implications of using clojure values as keys in a java.util.WeakHashMap?


actually, think i just answered my own question - guess they will be GC’d as usual, with no particular guarantees


In general, Clojure values are just Java objects, so nothing special. One possible exception are keywords, which are interned and shared across the runtime
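A small illustration of the interning:

```clojure
;; two occurrences of the same keyword literal are the same object:
(identical? :foo :foo) ;; => true
;; ordinary values are merely equal, not necessarily identical:
(identical? [1 2] [1 2]) ;; => false -- equal but distinct objects
```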


what is the current recommended way to do desktop GUI in clojure? Still seesaw? I would like to avoid the clojurescript/electron route as much as possible. Also, are there any GUI designer tools that work with either seesaw/whatever is more recommended now?


there are a couple of other viable options: - - but I haven’t done any Clojure GUI work, so can’t compare


hey there 🙂 Anyone know a small real-world code snippet that demonstrates why Clojure can't be statically analysed? (e.g. something defing things based on a data file or similar)


There is a library that generates defs from files with SQL queries; I don't remember its name


oh right, that's a good one

Nate Sutton19:04:04

is it expensive to convert a vector to a sequence?

Nate Sutton19:04:45

it seems like (hashmap|vector|set) -> sequence happens a lot and I don't have a sense for how expensive that is

Nate Sutton19:04:57

and not sure how to get a sense of that


public Seq(IPersistentVector v, int i){
		this.v = v;
		this.i = i;
seems to be about it


two pointer assignments


I'm not sure if a conversion takes place. IIUC, 'sequence' is an abstraction that collections implement. So certain functions will treat it sequentially

Nate Sutton19:04:15

so first and rest and next just index into the vector?


Well, it's a trie underneath, so there's a sort of conversion for any operation (depending on the size)


But, yeah, IMO it's not like a "conversion" to a different thing... More so a particular view over it.

hipster coder19:04:56

The docs say that the vector is contiguous… does that mean they are placed side by side in memory?

Nate Sutton19:04:19

I was hoping it was something like that

Nate Sutton19:04:36

I couldn't imagine how expensive it would be without that being the case


@nathantech2005 persistent collections are trees of 32-element arrays

hipster coder19:04:30

is 32 element the same as 32 bits?


So it's more likely the elements within one of those arrays are contiguous in mem, than in other parts


No, in java and js, those arrays are not typed

hipster coder19:04:46

It says the sequence is based on the ISeq, IPersistentList


Perhaps I'm wrong and there is a significant cost to using a collection as a sequence. But I didn't think it was any heavier than its most efficient ability to iterate across the collection.

hipster coder19:04:46

My guess is you are correct. Because I’d guess that the sequence is just the edge nodes of the trie data tree structure in memory


Like, a seq is printed as a list, but that's not to say the vector is being converted into a list first.

hipster coder19:04:26

I am reading the docs on how lists and vectors are FIFO or LIFO

hipster coder19:04:03

1 notable difference though in the docs… conj adds to the front of a sequence, but adds to the end of a vector
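The conj behavior in question, at the REPL:

```clojure
;; conj adds wherever it's cheapest for the collection:
(conj '(1 2 3) 0) ;; => (0 1 2 3)  -- front of a list
(conj [1 2 3] 4)  ;; => [1 2 3 4]  -- end of a vector
```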

hipster coder19:04:55

I am highly interested in this… let’s thread the conversation


Seqs on vectors are just views into the vector and are highly efficient

hipster coder19:04:10

@alexmiller so the sequence just walks down the edge of the trie?


Effectively, yes. I mean you can read the code, it’s pretty straightforward I think


Normally seqs have some overhead in caching the resulting sequence (effectively a linked or chunked linked list)


Vectors seq is more efficient as it just piggybacks the vector
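This piggybacking is visible at the REPL -- the seq over a vector is a chunked view backed by the vector's 32-element nodes:

```clojure
(def v (vec (range 100)))
(def s (seq v))
(chunked-seq? s) ;; => true
(class s)        ;; => clojure.lang.PersistentVector$ChunkedSeq
(first s)        ;; => 0
```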

hipster coder19:04:35

but Clojure doesn’t use any linked lists underneath? Right?


Clojure lists are linked lists

hipster coder19:04:06

wow… I thought they were array based


Sequences are usually linked lists


Chunked seqs are linked lists of arrays


So more memory efficient

hipster coder19:04:41

ahhh ok. I must be reading conflicting info from the web

hipster coder19:04:02

because some docs said that Clojure lists were array based, next to each other in memory, touching


Are Java or js arrays really guaranteed to be contiguous in physical memory?


Java arrays are

hipster coder19:04:39

wait a second… why don’t I see any middle insertion methods on the Clojure ISeq (lists) if they are linked lists?

hipster coder19:04:11

there is only conj, adding at beginning and end… which would point to using an array structure


In general Clojure provides only “efficient” operations


The efficient place to add elements to a linked list is at the head

hipster coder19:04:17

I thought a linked list can have insertion done at any point, in constant time?

hipster coder19:04:24

because it just looks up the memory address


if you have a linked list you have a node and a pointer to the rest


so to insert in the middle you have to start following the links

hipster coder19:04:43

I can’t just insert based on the nth term?


sure. but first ya gotta get there
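A sketch of the cost difference (insert-at is a hypothetical helper, not a core function):

```clojure
;; prepending to a list is O(1):
(cons 0 '(1 2 3)) ;; => (0 1 2 3)
;; a middle "insert" has to walk past the first n nodes first, O(n):
(defn insert-at [n x coll]
  (concat (take n coll) [x] (drop n coll)))
(insert-at 2 :x '(a b c d)) ;; => (a b :x c d)
```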

Lennart Buit19:04:01

a linked list is really just a node pointing to another node

hipster coder19:04:11

so I have to walk the linked list to find the nth term?


its like a scavenger hunt

hipster coder19:04:32

wow. I was really confused about that.

Lennart Buit19:04:58

its a tree without branching 😉


It's a vine ;)

Lennart Buit19:04:42

I am coining “unary tree”

hipster coder19:04:45

why would anyone want to use one?

hipster coder19:04:02

if the insertion writes are linear complexity… makes no sense to me


For function application ;)

hipster coder19:04:49

ahh, so you use them to chain functions together?


Really, vectors are almost always preferred


No, I just mean that because Clojure is a lisp we always use lists to apply functions, with the function in the operator position of the list

Lennart Buit20:04:35

well unless you want to prepend, that's O(n) in a vec and O(1) in a list. The moral of the story: different datastructures have different trade-offs

hipster coder20:04:49

I must have read some erroneous docs… that said Clojure uses arrays, contiguous, for sequences… when I was comparing it to Elixir

hipster coder20:04:27

I think Lennart was the one who pointed out linked lists don’t do well on the CPU l1, l2 cache

hipster coder20:04:45

because they can be spread all over RAM and can’t be loaded in 1 step like arrays can, contiguous

Lennart Buit20:04:56

I did not, but I have read it sometimes


that's true. and Clojure does some tricks mentioned earlier (chunking) to bridge this gap. The gain you get is that lazy sequences can be used interchangeably with standard collection types when this interface is pervasive.

Lennart Buit20:04:26

if you care about cache misses, you are fairly deep into optimising tho, first write code, then make it fast 😉

hipster coder20:04:57

Elixir uses linked lists as their basic type, for lists… and because the CPU can’t grab the data in 1 step… it makes threading less efficient… when the CPU has to lock memory… and do several steps to get the linked lists from all over RAM

hipster coder20:04:17

I think this is a reason why Clojure is faster and has lower level threading models

hipster coder20:04:28

@lennart.buit but you posted that info on Java Fibers… Which is like Actors… but it sounds lower level too. If clojure takes advantage of Fibers… I think that will be fantastic


Does Elixir have shared memory between processes?

hipster coder20:04:10

the only way to share is off-site

hipster coder20:04:30

or through message passing

hipster coder20:04:08

this is a big reason I love clojure. We can thread stuff with shared memory… and we can also setup the actor model

hipster coder20:04:03

This is the reason Clojure is better at number crunching… But can also handle horizontal scaling with Actors. And Fibers can be a game changer for us.


Aye, I don't think elixir is designed for high performance, local number crunching


Which is fine

hipster coder20:04:20

correct. Elixir is setup for distributed computing

hipster coder20:04:05

I am not knocking it. It’s great for its purpose. But I do think Clojure is more robust for number crunching, and horizontal and vertical scale


Fibers / project loom will be interesting

hipster coder20:04:16

ya, it could make Clojure scale horizontally from day 1, without extra setup… just like Elixir


i remember tim baldridge recently laying out why Clojure is not great for number crunching. I think it was on a reddit post about game design in Clojure?

hipster coder20:04:27

@dpsutton Clojure isn’t good for number crunching?


i'm looking for his post


i've never done heavy number crunching in clojure. so i like to listen to arguments and experiences from people much smarter than me 🙂


immutable data structures will always be at a disadvantage when it comes to raw efficiency

hipster coder20:04:21

@dpsutton he is talking about game programming (C, C++)


Well, the flexibility of persistent data structures is never going to be as fast as typed arrays

hipster coder20:04:01

Games can’t handle the pauses in Garbage Collectors




What lilactown said

hipster coder20:04:36

Clojure uses tricks though… like lazy-seq


Games aren't unique in realtime programming. Any allocation strategy is going to fail if you abuse it.

hipster coder20:04:07

I think pauses in the garbage collector is ok for non user facing stuff, e.g. machine learning

hipster coder20:04:27

I’d glady make the trade off if it means I don’t code it in C


sure, obviously batch processing is just a race to the finish line.

hipster coder20:04:57

I benchmarked a small test. Clojure lazy seq is as fast as C.


Yeah, despite being super flexible, Clojure is pretty fast


i don't believe you. (re: lazy seq is as fast as C)


i'm assuming your terminal printing was the same speed in a C program as with clojure 🙂

✔️ 1
hipster coder20:04:21

my clojure lazy seq on the nth term of fibonacci was as fast as C

hipster coder20:04:41

around 1 second to find the 40th nth term

hipster coder20:04:14

well, I did use cython, from python recursion script

hipster coder20:04:18

I didn’t write it in raw C

Lennart Buit20:04:25

also, comparisons of “language A is as fast as language B” usually highly depend on your implementations in A & B. I can write number-crunching Python that’s faster than C if I butcher the C impl.


i heard PyPy is faster than cython


(implementations matter)


i don't see any C in there

hipster coder20:04:05

I am going to write it in Raw C. then compare it to clojure.


the python version is the naive recursive solution

hipster coder20:04:06

but Clojure optimizes recursion with a lazy-seq. So I cheated a little.
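Probably a lazy definition along these lines -- each term is computed once from the two before it, so reaching the nth term is linear rather than the naive exponential recursion:

```clojure
;; self-referential lazy Fibonacci sequence; 0N/1N keep it in bigints
(def fibs
  (lazy-cat [0N 1N] (map + fibs (rest fibs))))
(take 10 fibs) ;; => (0N 1N 1N 2N 3N 5N 8N 13N 21N 34N)
(nth fibs 40)  ;; => 102334155N
```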


I wonder if/how clojure's concurrency abstractions will fit with loom/fibers

hipster coder20:04:09

let me write some more today… and try a naive recursion in Clojure compared to C. I am sure C is faster in that case though


it's a totally different algo. you did the count up version in clojure and the naive exponential version in python

hipster coder20:04:27

ya, it’s not accurate enough. I cheated.


lol. well let’s not claim Clojure is as fast as C if there is 1) no C and 2) different algos used in the python versus clojure versions 🙂


but i like your enthusiasm and willingness to actually benchmark

hipster coder20:04:24

I think I should restate it as… Clojure can use tricks to speed up to C level performance

hipster coder20:04:06

I’ll rewrite it today, with a more accurate comparison… and share it


i look forward to it.

hipster coder20:04:34

I am biased towards Clojure… you can see my bias


i am too 🙂


Like, could a fiber backed atom be as simple as a cljs atom? Or does interruptibility on the jvm complicate that?


In general, Clojure should be approximately as fast as Java. JIT-optimized Java is generally fast enough for the majority of programming uses. Where it’s not, there are options to use faster libs as a consumer.


99% of users never need that though

✔️ 1

My very sloppy Clojure use is 80-99% as fast as Java. Once in a while I tune it up with just a tiny bit of effort. Mostly it is sufficient.

hipster coder20:04:51

omg. I mis-spoke. Clojure Binet Formula is as fast as C recursion

😆 1

ha. can you find the point where clojure's single computation is faster than c's recursion?


There may be certain kinds of problems where persistent data structures are already the most performant abstraction. Like an editor that can edit files larger than what's in memory. I think you have to implement some shared-structure thing anyway.

hipster coder20:04:50

@dpsutton hahahaha this is an interesting comparison.. working on it now


you were using 40 earlier i think.


persistent data structures are a convenience for programmers to ensure certain types of design correctness


Or if you were trying simulate a universe of stars that don't fit in memory all at once


that convenience is overhead


it's not going to pay off except in time correctness


time to correctness


Unless the problem itself looks embarrassingly like a persistent data structure problem


there are lower overhead ways of ensuring correctness but they cost developers more at design/review time


what is a persistent data structure problem?


It doesn't have to be only about correctness. the shared structure of persistent data structures allows for efficiencies that are harder to achieve without that shared structure.


Especially over infinite data sets for instance


a "convenience for programmers" - doesn't this describe every single tool?


harder how? in developer effort? that's what i've been saying.


mg: as in the opposite of fundamental to computation


compilers are "convenience for programmers", you should be writing assembler yourself


that's asinine


Harder to reason about, sure. But in terms of polynomial difficulty,


persistent data structures may be the minimal polynomial complexity for certain kinds of problems that have shared structure


Can you be little more concrete about that?


well I think the premise of the question is fundamentally kind of silly


Polynomial complexity as I usually hear it refers to computational complexity


Navigating a procedurally generated universe of stars that don't fit in memory


that sounds like an application not a computation problem


Hmm, perhaps


There's this YouTube video of a guy that built an editor using persistent data structures implemented in c++. They call them postmodern data structures lol


It'd be pretty interesting to see that implemented in clojure


Def check out that video sometime! It's worth a watch


another efficiency perk of Clojure's persistent data structures: snapshots. So if you have some problem that needs a consistent view of snapshots of data across some threads, mid-computation, then persistent data structures already solved the problem for you. People rarely use the snapshot capability though AFAIAA
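A minimal sketch of that snapshot property, using an atom:

```clojure
(def state (atom {:count 0}))
;; deref gives an immutable snapshot; later swaps can't change it:
(let [snapshot @state]
  (swap! state update :count inc)
  [(:count snapshot) (:count @state)])
;; => [0 1]
```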


Why would a function have a * at the end of its name? My guess is that it’s a convention indicating that the function generates a lazy sequence. For example evens*.


It's often used to indicate an "implementation" function. You'll often see it as the name of a function behind a macro.


You'll also see it for the implementation of some special forms in Clojure itself. Both fn and let are technically "Special Forms" but they are implemented as macros that expand to "calls" to fn* and let* respectively (which are implemented directly in the compiler).
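The expansion is visible at the REPL:

```clojure
;; fn and let are macros over the compiler-level special forms:
(macroexpand '(fn [x] x))    ;; => (fn* ([x] x))
(macroexpand '(let [x 1] x)) ;; => (let* [x 1] x)
```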


Where I've also seen it used a lot is for functions that implement a cache: if foo is the cached version of the function, it will often be implemented in terms of foo* which will be the uncached version of the function.
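A sketch of that caching pattern (slow-lookup is a hypothetical example function):

```clojure
(defn slow-lookup* [k]   ;; the uncached implementation
  (Thread/sleep 100)     ;; stand-in for expensive work
  (str "value-" (name k)))

(def slow-lookup (memoize slow-lookup*)) ;; the cached, public version

(slow-lookup :a) ;; => "value-a" (slow the first time, cached after)
```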


@seancorfield Hmm. I think I understand. In other languages I would expose the module's function with a friendly name and then have another function behind the scenes that does the actual work (because it needs extra parameters, for example). So this is a convention for that sort of pattern?


Yes, it's a fairly common pattern for implementation detail functions.


@seancorfield In JavaScript I would usually pre- or post-fix an _ for this same purpose.


Yeah, I've seen that convention in C/C++ as well.


Side note, something I love about Lisps is using a dash as-a-separator for names. It’s by far the most natural way to do this in my view and it’s a shame most other languages don’t do the same.