#off-topic
2018-01-25
Ryan Radomski00:01:12

I was attempting to explain to an imperative programmer how immutable data structures force you to think about change differently. I can't quite explain this "difference": I can come up with examples, but not a definition or a word. Is there a word for this structure of programming? It's possible to think about process in this way in imperative languages as well, so I'm reluctant to call it functional.

Ryan Radomski01:01:15

More specifically the "difference" I'm talking about is exemplified in RH's example of when you change your address, it's not like you go in and reach into Fred's brain and change your address there immediately. Instead that information is changed locally and it needs to propagate to Fred.
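
For illustration, a minimal Java sketch of that idea, using hypothetical Address and Person classes: "moving" derives a new value, and Fred keeps seeing the old one until the change reaches him.

final class Address {
    final String street;
    Address(String street) { this.street = street; }
}

final class Person {
    final String name;
    final Address address;
    Person(String name, Address address) { this.name = name; this.address = address; }

    // "Moving" produces a new Person; the old value is untouched.
    Person withAddress(Address newAddress) { return new Person(name, newAddress); }
}

class PropagationDemo {
    public static void main(String[] args) {
        Person me = new Person("Ryan", new Address("1 Old St"));
        Person fredsView = me; // Fred remembers the value he was given
        Person moved = me.withAddress(new Address("2 New St"));
        // Fred still sees the old address until someone hands him `moved`.
        System.out.println(fredsView.address.street); // 1 Old St
        System.out.println(moved.address.street);     // 2 New St
    }
}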

andy.fingerhut01:01:06

I doubt a word without a definition is going to help explain it to anyone. Sorry, I don't have a definition for you, but I would be interested to hear any proposed definitions. Rich Hickey's "Database as a Value" talk has a slide showing what he calls the "Epochal Time Model", but I think that name is pretty empty until you learn the ideas from that talk, or something similar.

Ryan Radomski02:01:04

I was thinking about homing in on that concept, but I'm wondering about something much more specific: not that objects don't mutate themselves, but that they don't mutate their method parameters. For example, I often see this situation in OOP:

public class Bear {
  private Cub cub;

  public Bear addCub(Cub c) {
    cub = c;
    c.addBear(this); // this is what I'm talking about: mutating the parameter
    return this;
  }
}

Ryan Radomski02:01:22

Mutating the object itself has the same control flow as the functional counterpart. They're very close to the same semantics, apart from wanting to peer into the old version; but the mutation of parameters violates that control flow, and that was what I was trying to explain.

Ryan Radomski02:01:32

It's common to encapsulate control in methods like this instead of mutating the cub elsewhere

Ryan Radomski02:01:48

Immutable data forces you to never ever do anything of the sort

Ryan Radomski02:01:30

But not only that: you are unable to do

Bear b = new Bear();
b.addCub(new Cub());
System.out.println(b.getCub());

and instead are forced to do

Bear b = new Bear();
System.out.println(b.addCub(new Cub()).getCub());

With a mutable Bear, either one is valid, so it's simply a matter of style, or you could say a pattern; there's no name for the second style. When I was talking with that programmer, I was saying that when I disallow myself from programming in the first style, my code becomes remarkably more coherent.

tbaldridge02:01:58

I like to think of it thusly: functional programming separates data from the manipulation of data. Data never changes. My name is Timothy; if I change it to Tiberius, that doesn't change the fact that my name was Timothy. Thus I think of time itself as immutable. The world never changes, we just get a new instance of the world every millisecond (or less :D)

tbaldridge02:01:07

So once you have data, and that data never changes, FP falls out as the proper way to work with that data. You have functions that take one state of some data and create a new set of data. No code exists on the object, because that really doesn't make a lot of sense, unless the methods on the objects returned a new object each time.
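
A minimal sketch of that shape in Java (a hypothetical example; Map.of needs Java 9+): the data is a plain map, and the manipulation is a free-standing function from one state to a new one.

import java.util.HashMap;
import java.util.Map;

class Names {
    // A function that takes a state of the data and returns a new state.
    static Map<String, Object> rename(Map<String, Object> person, String newName) {
        Map<String, Object> next = new HashMap<>(person); // copy the old state
        next.put("name", newName);
        return next;
    }

    public static void main(String[] args) {
        Map<String, Object> t0 = Map.of("name", "Timothy");
        Map<String, Object> t1 = rename(t0, "Tiberius");
        System.out.println(t0); // {name=Timothy}  -- the old fact still holds
        System.out.println(t1); // {name=Tiberius}
    }
}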

tbaldridge02:01:39

The next question most people have is "how on earth does that perform", and that's when I wave my hands and say "It's magic, but I can hand you a few dozen papers about it if you want".

tbaldridge02:01:54

A lot of programmers can also come to understand this if they've used strings in Java or C#, where strings are immutable. Thankfully mutable strings were deprecated decades ago.
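
The string analogy in jshell-style snippet form:

String name = "timothy";
String shouted = name.toUpperCase(); // returns a brand-new String
// `name` is still "timothy"; no method on String can change it in place.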

Ryan Radomski02:01:10

I find that a lot easier to explain! Functional programming reflects reality and process more accurately. I have to code imperatively at work, so I simply pretend that the first style is disallowed; although I don't get the benefit of actually receiving new Bears, if I code as if I only got new Bears, even though they're mutating, I have much less complexity.

Ryan Radomski02:01:40

The style in that second example adapts to imperative programming pretty well and makes for remarkably cleaner code in that space.

tbaldridge02:01:31

And for some of that it's good to realize that immutable programming has really only been possible in the past 15 years, with the invention of really good GCs; things that are possible today were just too expensive in the past.

tbaldridge02:01:00

Most of Datomic hinges on that idea: storage and memory are so cheap these days that mutating data in place is just not needed anymore.

tbaldridge02:01:14

Unless you're doing something that has to be super high performance.

qqq02:01:22

@tbaldridge: is it good GC or cheap RAM? As long as there are no circular references (which is a bit painful), one can get most of the benefits of GC with just ref counting.

tbaldridge02:01:46

But even then, people like Carmack (creator of Doom) are figuring out ways to do this in AAA game engines.

tbaldridge02:01:32

It's GCs; reference counting isn't even close to GCs in terms of performance. And on a multicore system it's even worse.

Ryan Radomski02:01:15

We don't need the super high performance at my current job. The biggest hurdle for them is that they think it's impossible to organize programs with that control flow. If they mutate a Person.email, their first question is "how does the rest of my system get that change?" That sort of organization is what I want to describe: how to code in a way where all of your objects aren't wired together.
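
One common answer, sketched in hypothetical Java (this is essentially Clojure's atom, spelled as an AtomicReference): keep one well-known mutable reference to an immutable value instead of wiring objects to each other. The rest of the system "gets the change" by re-reading that one reference, and always sees a consistent snapshot.

import java.util.concurrent.atomic.AtomicReference;

class AppState {
    static final class Person {
        final String name;
        final String email;
        Person(String name, String email) { this.name = name; this.email = email; }
        Person withEmail(String e) { return new Person(name, e); }
    }

    // The only mutable cell in the system: a reference to an immutable value.
    static final AtomicReference<Person> current =
            new AtomicReference<>(new Person("Ryan", "old@example.com"));

    public static void main(String[] args) {
        current.updateAndGet(p -> p.withEmail("new@example.com"));
        // Readers don't hold references into the old object's guts;
        // they just re-read `current` when they need the latest state.
        System.out.println(current.get().email); // new@example.com
    }
}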

qqq02:01:48

@tbaldridge: What do you mean for multicore? All you need is an atomic compare-and-swap for the inc/dec of the ref count, and if it hits 0, free.

tbaldridge02:01:35

Nope, two problems: atomic CAS is super slow when you do it this often.

tbaldridge02:01:24

And there’s the ABA problem: the memory can be freed between the navigation to an object and the incrementing of the count.
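
A toy Java sketch of the scheme being debated (hypothetical, counter only; Java can't actually free an object): each retain/release is an atomic update, but no CAS on the count can protect the window between loading a pointer to an object and incrementing that object's count, which is the race described above.

import java.util.concurrent.atomic.AtomicInteger;

final class RefCounted {
    private final AtomicInteger count = new AtomicInteger(1);

    void retain() { count.incrementAndGet(); }  // atomic, but costly under contention

    void release() {
        if (count.decrementAndGet() == 0) {
            reclaim();
        }
    }

    // The race: thread A reads a pointer to this object, then thread B's
    // release() drops the count to 0 and reclaims it before A's retain() runs.
    private void reclaim() { /* free resources; in C++ this would be delete */ }
}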

tbaldridge02:01:04

More info there @qqq

qqq02:01:49

Thanks! Reading now. Never heard of this problem until now.

tbaldridge02:01:27

And for reference, a CAS can cost about 300 cycles. Not bad for use in an atom; really bad for every pointer dereference in your program.

qqq02:01:53

It's not every pointer dereference. It's every pointer "change value"

qqq02:01:57

Reading is free.

tbaldridge02:01:40

No, because you have to let people know you’re reading

tbaldridge02:01:00

Otherwise they may delete the object in the middle of you reading it

qqq02:01:07

They can't

qqq02:01:12

because if you're reading, the count is > 0

tbaldridge02:01:08

If it’s a linked list and you hold onto the head, that may work

qqq02:01:43

To clarify, I'm thinking of C++-style RAII ref-counting pointers

qqq03:01:13

so if you're reading something it means: 1. there is some var in the block holding a ref-counted pointer, and 2. that ref doesn't get released until the end of the block

qqq03:01:48

slightly changing topic:

qqq03:01:57

I rarely (never) use STM in Clojure.

qqq03:01:16

So if we had Erlang-style per-process heaps (with some special global shared heap), the cost of ref-count GC would be even lower.

qqq03:01:37

So 'per-process objects' get ultra-cheap ref-counted GC, and 'shared objects' are about as expensive as Clojure atoms.

tbaldridge03:01:34

@qqq finally done with some evening family stuff. So I can talk at a keyboard now. All that taken into consideration it's still faster to use GCs when there are large numbers of objects being created and destroyed.

tbaldridge03:01:29

Think of an immutable collection, like a hashmap: 2-3 levels of arrays. Each insert requires the cloning of two arrays and the creation of a root object. Each array has roughly 32 slots in it.

tbaldridge03:01:53

With a GC it's as simple as allocating an array (a pointer bump on most systems) and copying the data, super fast.

tbaldridge03:01:27

With RC it's the cost of creating an array, then copying the data, then walking all 32 pointers (pulling in their cache lines), and incrementing every ref count.

tbaldridge03:01:46

Deleting a hashmap is the same, 32+ decrements, 32 branches (to check if the refcount == 0), etc.
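
A sketch of that node-cloning step in Java (a hypothetical 32-slot node, not Clojure's actual implementation). Under a tracing GC the copy is one allocation plus an arraycopy; under RC you would additionally walk all 32 child pointers, pulling in their cache lines to bump each count.

final class Node {
    static final int WIDTH = 32;
    final Object[] slots;

    Node(Object[] slots) { this.slots = slots; }

    // Path-copying "insert": never mutate, return a new node sharing children.
    Node withSlot(int idx, Object value) {
        Object[] copy = new Object[WIDTH];
        System.arraycopy(slots, 0, copy, 0, WIDTH);
        copy[idx] = value;
        // Under refcounting, a loop over all 32 children would go here,
        // incrementing each one's count (and decrementing them on delete).
        return new Node(copy);
    }
}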

tbaldridge03:01:44

In addition, most GCs are copy-compacting: every so often they remove garbage, then take all the survivors and move them to a new place in memory. This compacts the heap so that long-lived objects all sit in the same region of memory, which can massively improve cache hit rates.

qqq04:01:01

@tbaldridge: what you say sounds reasonable. I've implemented a ref-counting system before (as it's literally adding another field to pointers), but I've never implemented a generational compacting GC, so at this point I simply don't know enough about the constants to make an intelligent statement.

qqq04:01:02

Anyone have a good reference on (1) how BLAS/ATLAS does matrix multiply and (2) how they tune the various constants? I know that they do blocking to get better cache behaviour, but I want the details.
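
Not the ATLAS details, but a minimal Java sketch of the blocking idea: split the loops into tiles so that the working set of three tiles stays cache-resident. BLOCK is a tunable assumption; ATLAS-style libraries pick such constants empirically per machine, and their real kernels add register tiling, packing, and SIMD on top.

class Matmul {
    static final int BLOCK = 64; // tile size, tuned so three tiles fit in cache

    // c += a * b, all n x n matrices; plain ikj order, but tiled.
    static void matmul(double[][] a, double[][] b, double[][] c, int n) {
        for (int ii = 0; ii < n; ii += BLOCK)
            for (int kk = 0; kk < n; kk += BLOCK)
                for (int jj = 0; jj < n; jj += BLOCK)
                    // multiply one tile; the a, b, and c tiles stay hot in cache
                    for (int i = ii; i < Math.min(ii + BLOCK, n); i++)
                        for (int k = kk; k < Math.min(kk + BLOCK, n); k++) {
                            double aik = a[i][k];
                            for (int j = jj; j < Math.min(jj + BLOCK, n); j++)
                                c[i][j] += aik * b[k][j];
                        }
    }
}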

qqq06:01:14

RANT: how is it that the JVM has libs for everything, but not for BLAS / end rant

andy.fingerhut06:01:08

Maybe because people would rather take Fortran/C implementations of BLAS and just call them as native libs from Java?

tbaldridge13:01:52

@qqq what @andy.fingerhut said. Also, SIMD support on the JVM is really bad, as is control over memory layout.

qqq14:01:05

@tbaldridge: are JVM float[] / int[] not even guaranteed to be contiguous blocks in memory?

tbaldridge14:01:26

Oh they are, but the GC may move them around, they may not be aligned (SIMD often requires that memory be aligned), and you can't do a Foo[] without the underlying array just being an array of pointers.

tbaldridge14:01:17

Fun fact about that: apparently Aeron (Martin Thompson's project) runs the fastest on .NET after a ton of optimizations. The JVM doesn't give them enough control over memory layout, and C++ is slower due to the lack of JIT and GC.

tbaldridge14:01:29

Apparently .NET hits the sweet spot there.

dpsutton16:01:02

@alexmiller > "17. Rate the priority of making improvements to Clojure in these areas." What does the namespace option mean in this context on the survey?

Alex Miller (Clojure team)16:01:09

there have been various ideas around improving namespaces over the years. currently they are Java objects holding mutable maps, which is the source of some issues. having immutable namespaces is something we’ve thought about for a long time, and it would help with things like aliasing (don’t copy refers, just point), partial load failures, etc.

mpenet17:01:08

Something inspired by OCaml modules/functors?

Alex Miller (Clojure team)16:01:38

lighter weight keyword namespace aliasing could potentially be put in this category too

dpsutton16:01:13

ah neat. thanks

souenzzo16:01:37

how do you know which artifact a namespace came from?

bronsa16:01:36

there’s no relationship between artifacts and namespaces

bronsa16:01:09

but you could scan the classpath for jars and then scan those jars

souenzzo16:01:03

Is there a way to get the current list of dirs on the classpath?
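
One common approach in plain Java: read the java.class.path system property. (This reflects the classpath the JVM was launched with; it won't see entries added later by custom classloaders.)

import java.io.File;

class ClasspathDump {
    public static void main(String[] args) {
        String cp = System.getProperty("java.class.path");
        for (String entry : cp.split(File.pathSeparator)) {
            System.out.println(entry); // each dir or jar on the classpath
        }
    }
}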

qqq17:01:31

Is there a good analysis of why AWS / GCP hasn't killed Box / Dropbox yet ? It seems like if they just build a nice client and charge by the usage, and bundle it with PRIME ... . Also, anyone know of a good analysis on why GDrive isn't winning ?

seancorfield17:01:41

@qqq Did you read Steve Yegge's blog post about why he left Google after 13 years? I think it answers why GCP / GDrive etc isn't "winning"...

qqq17:01:10

@seancorfield: funnily enough I was just reading it; extrapolating from that, are you suggesting the argument is: someone wants to build it, 3-4 teams claim it via 'cookie licking', but no one builds it?

qqq17:01:03

actually, I was just reading the followup article, not the original: https://medium.com/@steve.yegge/google-doesnt-necessarily-need-innovation-95cea96d0eeb?r=1

jgh20:01:08

going back to the RC stuff in C++ from last night: I think you’ll see most performance-centric development discourage using it. First of all there are performance implications, as mentioned by tbaldridge, but it’s also not great at enforcing clear ownership. You can, as part of your coding guidelines, enforce that there be only one shared_ptr and everybody else has to use weak_ptr to access it, but it’s not like the language is going to tell you to stop being foolish if you just hold shared_ptr everywhere and suddenly can’t figure out why nothing is being released. I think it’d be cool to see something like Rust’s concept of borrowing with unique_ptr.

jgh20:01:15

In many ways it’s like Objective-C prior to ARC. The iPhone blew up, and suddenly there were a massive number of developers who had never really done reference counting before, working with ObjC and getting confused about how to avoid memory leaks due to retain cycles. At least with shared_ptr the intention is explicit.

tbaldridge20:01:32

Even shared_ptr isn't safe 😛

tbaldridge20:01:26

@jgh ^^. They've tried fixing that recently, but the most recent work in those areas in C++17 comes down to a stdlib implementation of hazard pointers or RCU.

jgh20:01:45

you mean unique_ptr?

tbaldridge20:01:53

Fun thing is though, RCU, hazard pointers, and the hybrid RC discussed in some places basically all boil down to a mini GC for certain types, or GC for young objects with RC for old objects.

tbaldridge20:01:07

Nope @jgh, shared_ptr can't be accessed by two threads at once

jgh20:01:15

ohh i see what you mean, yeah.

jgh20:01:28

i think the counter is atomic but basically nothing else is

tbaldridge20:01:46

If you build a linked list where the "car" and "cdr" are shared_ptr<Cons>, you still run into the ABA problem and your program segfaults.

jgh20:01:10

I've seen arguments that RC (and really any automated memory management) falls within the spectrum of GC somewhere, which is a view that makes sense.

tbaldridge20:01:55

It does, I liked this definition: "Traditional GCs track live objects, forgetting garbage. RCs look for garbage and actively reclaim it"

tbaldridge20:01:10

Which makes RC great for C++ because you want ~Foo() to be called

jgh20:01:20

“RC’s are grenades that explode when objects become garbage ;)”

tbaldridge20:01:38

Good paper on the subject ^^

tbaldridge20:01:56

They get really close to the performance of GCs with multithreading support by doing a few things:

tbaldridge21:01:27

1) If the current thread has already incremented the refcount, you don't need to do it again; simply increment/decrement the delta of all changes when you're done with the object. 2) Never delete objects when the count reaches 0. Instead, put "dead" objects into a list, and every so often stop all threads and reconcile the refcounts. This gets rid of the ABA problem: nothing will get killed while you're using it, because nothing is deleted until all threads agree that it's dead.

tbaldridge21:01:18

3) Pin refcounts. Once a refcount hits 255, it's considered "live forever" (or perhaps available to a GC sometime in the future), so stop doing any RC on that object at all.

tbaldridge21:01:33

4) All objects start out with a refcount of 0, already in the "dead list". When it comes time to reconcile, stop all threads, increment the refcounts of everything that's alive, and remove those objects from the dead list; after that, delete everything left in the dead list. This means that fast, short-lived allocations are never refcounted at all.

tbaldridge21:01:06

5) Any object that only exists within the local thread (this requires escape analysis) doesn't need atomic RC and can use a simple inc/dec.
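
A toy Java sketch of points 2 and 3 (hypothetical; a real implementation lives inside the runtime, not in user code): counts saturate at 255 and stop generating RC traffic, and release never frees anything directly, it only parks the object for the next stop-the-world reconcile.

import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicInteger;

final class PinnedRef {
    static final int PINNED = 255;
    static final Queue<PinnedRef> deadList = new ConcurrentLinkedQueue<>();

    private final AtomicInteger count = new AtomicInteger(0);

    void retain() {
        int c;
        do {
            c = count.get();
            if (c >= PINNED) return;           // pinned: no more RC work, ever
        } while (!count.compareAndSet(c, c + 1));
    }

    void release() {
        int c;
        do {
            c = count.get();
            if (c >= PINNED) return;           // pinned objects are immortal
        } while (!count.compareAndSet(c, c - 1));
        if (c - 1 == 0) deadList.add(this);    // defer: the reconcile pass frees it
    }
}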

jgh21:01:58

interesting, though this is a bit antithetical to refcounting as it’s expected to work in e.g. C++ and ObjC, since it requires a garbage collector.

jgh21:01:52

(or a “totally not a garbage collector, don’t worry about it guys”) 😉

tbaldridge22:01:34

@jgh on that note, do you have any info on how ARC works in Objective-C / Swift?

tbaldridge22:01:52

I haven't been able to find any docs on how they solve the multithreading issues.

tbaldridge22:01:41

lol wow, apparently it does something like that paper: "Another problem of ARC is its indeterminacy. You have no control over when a variable is going to be released and its memory freed. In order to help the developer to handle the memory allocated for objects, ARC defines structures of code called “autorelease pools”. An autorelease pool is an area where you declare objects that would get allocated by ARC, and then, after the execution reaches the end of the autorelease pool, all allocated objects would (hopefully) be released. The problem is that ARC usually defines one main autorelease pool in the project, basically enclosing the main function. That means that, unless you take care of memory management, and define another autorelease pools, the memory of your program can grow without control, potentially reaching a very high memory imprint on the system. You can assign an object to nil, thus indicating ARC that you are releasing that object because you no longer need it, but you cannot really infer when (and if) ARC decides to actually free that object."

jgh22:01:10

autorelease pools were around prior to ARC; you just had to mark objects as autorelease, and they had naming idioms around whether you were getting a retained or autoreleased object returned from a function, etc.

jgh22:01:22

there isn’t anything magical about them, they basically just drain when they go out of scope or are told to drain

jgh22:01:00

@autoreleasepool { ... } is just notation for NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init]; ... [pool drain]; (drain releases the pool itself)

jgh22:01:29

so any objects marked autorelease in that scope are cleaned up when the pool drains.

jgh22:01:36

you’d probably want to throw a scope block in there, or call another function inside, the manual way; I’m not sure what would happen if you mark an object as autorelease and it didn’t fall out of scope before the pool drains (at least with the shorthand you get the scope)

jgh22:01:53

but yeah, I think ARC basically implicitly marks everything as autorelease, so if you were to alloc a lot of objects in a tight loop without an autorelease pool you’d end up with a lot of memory usage

jgh22:01:25

it’s still not exactly what that paper is suggesting, because it’s ultimately a passive system that cleans up when pools are deallocated; it doesn’t interrupt threads or anything to make sure there aren’t race conditions and other concurrency fun times.

jgh22:01:20

although I guess maybe it’s different with Swift or something, if it’s indeterministic