
I just did a straw poll of the 25 top errors in my Rollbar page.


18 NPE, 4 CCE and a couple of randoms


If I count them by number of occurrences rather than just number of errors, that difference is exaggerated even further.


Like 10x further or more, just by eyeballing it.


Actually, I couldn’t resist - of the top 25, 70% of the occurrences are NPE, 9% CCE and then randoms.


I find it very interesting that some (Clojure) folks say they virtually never encounter NPEs and others say NPEs are by far the most common error they encounter...


I don't doubt either set of people. So that suggests that there are two very different types of (Clojure) developers.


I will say that almost the only time I hit NPEs is in Java interop code -- not in my Clojure code. So maybe the propensity for NPEs depends on how much you have to interact with Java?


Maybe the difference is also in how you make sure your data is valid? I mean, what are possible ways to get NPEs in Clojure? Accessing keys in a map that don't exist, for instance? (:foo m) returns nil, which blows up later as an NPE, so better use (get m :foo "default") or run a validation function beforehand and handle these? I also think that the level of interaction with Java plays a role here, like @seancorfield said. @cfleming: Would be interesting to see some examples of your NPEs


Virtually everything I do has to interact with Java


But I don’t blame Java for the problems, since NPEs were a much smaller proportion of my errors when I wrote Java.


The problem, I think, is that once you get nulls into your system, the lack of types makes it very difficult to track them.


I get a lot of NPEs when I inadvertently pass nulls back to IntelliJ, because I wasn’t expecting some value I passed to be null


And again, when relying on nil punning to avoid NPEs, unless you're aware that that's what you're doing, IMO you're just swallowing errors elsewhere in your code.


Hm... let's say I write a library for data transformation. It takes its data from a CSV file which may lack columns. Part of my transformation algorithm will be how I handle non-existent columns. In Java you will most probably just put a null somewhere inside (be it a list or a map or your object) and then start null-checking everywhere, forgetting some of the checks. That's common practice. And there they are. Then your small library turns into a full application with a cloud backend and all hell breaks loose, because your core does not handle nulls the way it should. I mean, somehow you have to represent the absence of a value in your code, and no type system will save you from doing that.
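That CSV scenario can be sketched in plain Java (all names here are hypothetical): a missing column silently becomes a null in the map, and the NPE only surfaces later, far from where the null entered, unless you remember the guard.

```java
import java.util.HashMap;
import java.util.Map;

public class CsvNulls {
    // Parse one row; "email" is an optional column and may be absent.
    static Map<String, String> parseRow(String name, String email) {
        Map<String, String> row = new HashMap<>();
        row.put("name", name);
        row.put("email", email); // HashMap happily stores null
        return row;
    }

    // Later, in a different layer, the null finally matters.
    static String emailDomain(Map<String, String> row) {
        String email = row.get("email");
        if (email == null) {            // forget this check and you NPE below
            return "unknown";
        }
        return email.substring(email.indexOf('@') + 1);
    }

    public static void main(String[] args) {
        Map<String, String> row = parseRow("Ada", null);
        System.out.println(emailDomain(row)); // prints "unknown"
    }
}
```

The point being: nothing at the point of insertion warns you, so every consumer has to repeat the null check.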


A good one (like Kotlin’s) will tell me when I get it wrong.


Nullable types are distinct from non-null types, and cannot be dereferenced.


Essentially, every nullable type is really like a Maybe type.


Since Kotlin has perfect Java interop as a main goal, you can get NPEs at the Java/Kotlin border if you don’t handle them correctly, but they cannot spread further into your code.


@cfleming: I would not argue that it helps in a few cases. But what about nulls in lists or maps?


That’s the thing - in Kotlin, you would have to specify a type for the data in the lists or maps, and that type would either be nullable or not.


So you can't pass null somewhere where you didn’t expect it.


OK, just to make sure we're talking about the same thing. You have interop with Java, and in your Java code you fill an ArrayList of type Foo. But some of those Foos are actually null, and list.add happily pushes them into the list. Now, in your Kotlin code, which knows the list only contains Foos, how does it handle the null occurrence in that case?
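The Java side of that scenario looks something like this (Foo is a hypothetical placeholder type): generics perform no runtime null check, so a List&lt;Foo&gt; can freely contain null elements.

```java
import java.util.ArrayList;
import java.util.List;

public class NullInList {
    static class Foo {
        String name() { return "foo"; }
    }

    static List<Foo> makeFoos() {
        List<Foo> foos = new ArrayList<>();
        foos.add(new Foo());
        foos.add(null); // add happily pushes null into the list
        return foos;
    }

    public static void main(String[] args) {
        for (Foo f : makeFoos()) {
            // calling f.name() on the null element would throw an NPE,
            // so the consumer is forced to guard every access
            System.out.println(f == null ? "<null>" : f.name());
        }
    }
}
```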


I would receive a List<Foo?>, i.e. a list of potentially nullable foos.


Is every interop type potentially a ? type?


Here’s how this works in Kotlin. Types you get back from Java are actually called platform types, written Foo!. But they’re non-denotable, i.e. they can’t be written in the program itself.


If you assign a Foo! to a Foo, Kotlin will insert a null check at that point so you’ll get an NPE then.


Assigning to a Foo? is ok, of course, since you’re saying you’ll deal with the null later.


Since you can’t write these types down and they’re always inferred, they implicitly only ever appear in the function you called the interop from.


The collection question is an interesting one, I’m actually not sure what the rules are - one sec.


OK, I'm starting to understand the point


The doc actually doesn’t talk about the reflection case.


I’ll ask in their slack channel.


Ok, there’s no runtime check for collection class elements.


So you could potentially pass some nulls around in types you thought weren’t null via collections or some other generic object.


However not-null parameters of public methods always have runtime checks inserted into the bytecode, so the idea is you catch them ASAP.
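The fail-fast idea is easy to sketch in plain Java: check not-null parameters at the public boundary so the error surfaces immediately, instead of letting the null propagate. (Kotlin compiles similar checks into the bytecode automatically; here it's done by hand with Objects.requireNonNull.)

```java
import java.util.Objects;

public class FailFast {
    static int length(String s) {
        // Fails right here with a clear message rather than letting the
        // null travel deeper into the program before exploding.
        Objects.requireNonNull(s, "s must not be null");
        return s.length();
    }

    public static void main(String[] args) {
        System.out.println(length("hello")); // prints 5
        try {
            length(null);
        } catch (NullPointerException e) {
            System.out.println("caught at the boundary: " + e.getMessage());
        }
    }
}
```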


I guess I'll have another look at Kotlin when I have time again. Thanks for the explanation @cfleming


It’s not perfect obviously, but it’s enough to make me consider switching more of my code to Kotlin - NPEs give me a ton of problems, and the fact that they propagate so far into my code makes tracking them down a nightmare.


@cfleming: are your NPE problems also due to having to interop a lot with IntelliJ code?


NPEs are certainly hell in Java. Kotlin’s approach is interesting but, as you said, still has holes in it. Frege approaches Java interop by mapping nullable values to Maybe directly (as I recall, it’s been a few months since I did much with Frege).


What makes them such a problem in Java is that a) they pop up everywhere, b) library functions don't, in general, accept null as "empty", only "nothing", and c) there's no clean idiom for dealing with them. So you're kinda stuck with boilerplate if ( v != null ) … everywhere.
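The two styles side by side, as a small sketch (hypothetical function names): the classic if (v != null) guard, versus java.util.Optional, which is about the closest Java gets to a clean idiom for "nothing".

```java
import java.util.Optional;

public class NullIdioms {
    static String shoutWithChecks(String v) {
        if (v != null) {               // the boilerplate guard
            return v.toUpperCase();
        }
        return "";
    }

    static String shoutWithOptional(String v) {
        return Optional.ofNullable(v)  // null becomes Optional.empty()
                .map(String::toUpperCase)
                .orElse("");           // one explicit default at the end
    }

    public static void main(String[] args) {
        System.out.println(shoutWithChecks(null));   // prints ""
        System.out.println(shoutWithOptional("hi")); // prints "HI"
    }
}
```

Even with Optional, nothing forces callers to use it, so the boilerplate tends to win in practice.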


Clojure sort of embraces a), treats nil as both "empty" and "nothing" and "falsey", and has plenty of core functions that offer clean idioms for handling nil.


@seancorfield: Kotlin used to do the same as Frege, i.e. all types from Java were potentially nullable, but it was unworkable for interop-heavy projects.


@borkdude: Not sure, since that’s my main experience. I don’t have many pure-Clojure projects, at a minimum I usually end up using a Java lib somewhere.


What I tend to find in the pure-Clojure projects is that I probably get fewer NPEs, but I have the same number of null-related errors; they just get hidden by nil punning and the like.


Which I’m increasingly thinking I don’t want.


@cfleming: In Frege you declare the Java interop to either return some type t or Maybe t depending on whether it’s nullable. You also declare if something is mutable (or not), and whether it’s side-effecting (using the ST monad).


So I guess that’s more or less equivalent to what Kotlin does now, i.e. you can assign your platform types to Foo or Foo?


(Kotlin doesn’t have the mutable or effect systems)


The problem is if you get it wrong.


i.e. the interop boundary with Java is never truly safe unless you make it so safe it’s annoying.


> so safe it's annoying


sounds like most strict environments


I’m pretty annoyed with Clojure right now.


but find me another lisp that does the web stack even a tenth as well.


and for me clojure's power w/r/t webstack comes specifically from its interop to the jvm.


But I think there are ways to make that layer better, and helping with nullability would be huge, for me at least.


over my paygrade


i'm just a lowly webdev with pretensions to mathematics and physics.


zero interest in writing even toy programming languages, would rather write 6dof n-body simulators.


Frankly, I can't even recall the last time an NPE hit me. I do see 'type' errors occasionally, but NPEs? That's definitely rarer than hen's teeth. @seancorfield is correct - this stuff (NPEs , type errors) seems to say way more about the developers involved (on both sides) than the languages/tools.


I guess I’m just null prone.


i have high expectations of my compiler.


I don’t think it’s about the developers — I think it’s about the style of programming: if you do a lot of Java interop, I think you’re way more likely to run into NPEs. I think @cfleming’s experience supports that, and that’s how it feels based on the Clojure I’ve written over the last six years.


i get a lot of NPEs out of Datomic as well.


mostly during refactors, surprise surprise.


Interesting… do you have any sense of what is at the root of that?


null arguments, typically.


Yeah, I’m not sure I buy the argument that it’s only interop that provokes NPEs in Clojure


OK, yes, that is more accurate - style is a much better description


I mean, (subs (first xs) 10) is as NPE prone as Java


Well, subs just delegates to String. That’s Java interop IMO.
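What that delegation means concretely, in plain Java: (subs (first xs) 10) ends up as a call to String.substring, so when the first element is null the NPE is thrown by the Java method, not by any Clojure code. A minimal sketch (subs here is a hypothetical stand-in for the Clojure function):

```java
import java.util.Arrays;
import java.util.List;

public class SubsDelegates {
    // Roughly what Clojure's subs compiles down to: a straight delegation.
    static String subs(String s, int start) {
        return s.substring(start); // NPE originates here when s is null
    }

    public static void main(String[] args) {
        List<String> xs = Arrays.asList("a longer string here");
        System.out.println(subs(xs.get(0), 10));

        List<String> empty = Arrays.asList((String) null);
        try {
            subs(empty.get(0), 10);  // (first xs) was nil
        } catch (NullPointerException e) {
            System.out.println("NPE from String.substring");
        }
    }
}
```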


I think this has been brought up before - it seems that the style involving fewer of these 'form' issues revolves around very incremental bottom-up development with incremental testing, and then aggregating both function and test


I wish Clojure’s thin veneer over String actually handled nil better. A substring of nil in Clojure should just be nil, as should almost every "string operation" on it.


That would make string handling much more idiomatic.


In that case, any string work is NPE prone? I mean, every Clojure program ever uses Strings ubiquitously


Or any numeric work.


I think it depends on the style of string handling code but, yeah, when I do occasionally get an NPE, it’s almost always in String handling these days (as opposed to string handling 😸 )


I suppose that depends on the work - any collection function type stuff should not (in general) have this NPE bomb with strings


Right, and strings occupy a weird middle ground with Clojure — they’re "sort of" collections and they’re "sort of" a primitive type too.


I would love to see a more consistent collection-y approach to strings where it didn’t impact performance too much.


And we already have (str nil) => "" so we have a semantic for nil-as-string so maybe (subs nil 10) should be a StringIndexOutOfBoundsException instead...


…and that’s just a failure to check a boundary since (count nil) is valid and returns 0.