
Haway & morning


Anyone know if iterator-seq is lazy? (Or at least releases the head)


I'm having OOM exceptions with a library & I'm trying to debug.


@agile_geek: it returns a lazy chunked seq


since 1.7 at least
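(Java's Stream API has the same lazy-pull behaviour, so it's an easy place to see what "lazy" buys you; a minimal sketch, not related to the actual hbase code, showing that only the elements actually pulled are ever produced:)

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.stream.Stream;

public class LazyDemo {
    public static void main(String[] args) {
        AtomicInteger produced = new AtomicInteger();
        // An effectively infinite lazy source: elements are only
        // produced when the terminal operation pulls them.
        long sum = Stream.iterate(0, i -> i + 1)
                .peek(i -> produced.incrementAndGet())
                .limit(5)              // only 5 elements are ever realized
                .mapToLong(i -> i)
                .sum();
        System.out.println("sum=" + sum + " produced=" + produced.get());
        // -> sum=10 produced=5
    }
}
```

(Clojure's chunked seqs are the same idea except they realize up to 32 elements per chunk, which is usually harmless but is exactly the kind of implementation detail being complained about here.)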


So that's worrying. This is exactly the situation where imperative programming wins hands down for me. I can't figure out where some resource is being held, but in an imperative loop I would know categorically when that might happen. The sequence abstraction is bleeding implementation details!


I don't want to have to figure out why this is happening.


I just want to ship code


@agile_geek: ooo i haven't had one o' them for ages... what ru doing ?


Working with a scan returned from hbase


I've rewritten it imperatively. I didn't have time to figure out what was wrong.
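(The imperative shape that makes the lifetime obvious looks roughly like this; none of the names are the real scan code, it's just the loop pattern: one row in scope per iteration, nothing retaining the head:)

```java
import java.util.Iterator;
import java.util.List;

public class ImperativeScan {
    // The row type and the "work" are placeholders for whatever the
    // real scan yields; the point is only the shape of the loop.
    static long processAll(Iterator<String> rows) {
        long count = 0;
        while (rows.hasNext()) {
            String row = rows.next(); // exactly one row in scope at a time
            // ... do work with row ...
            count++;
        } // each row is eligible for GC as soon as the iteration moves on
        return count;
    }

    public static void main(String[] args) {
        System.out.println(processAll(List.of("a", "b", "c").iterator()));
        // -> 3
    }
}
```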


It's this stuff that makes me fall back on Java


I can rewrite 10 times faster than figure out the functional way!


Serious point tho. For a language that prides itself on simplicity you need to know an awful lot about how core fns r implemented to avoid this. I suspect the real issue was an into in the lib I was using around hbase
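(Clojure's into is eager: it reduces the entire input seq into a collection, so everything upstream gets fully realized. A rough Java analogue of eager collection vs a streaming fold:)

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class EagerDemo {
    public static void main(String[] args) {
        // Eager (the "into" shape): collect materializes every element
        // before anything downstream runs, so the whole result is in
        // memory at once -- fine for 1,000 rows, fatal for a full table.
        List<Integer> all = Stream.iterate(0, i -> i + 1)
                .limit(1_000)
                .collect(Collectors.toList());
        System.out.println(all.size()); // 1000 elements held simultaneously

        // Streaming alternative: fold as you go, one element live at a time.
        long sum = Stream.iterate(0, i -> i + 1)
                .limit(1_000)
                .mapToLong(i -> i)
                .sum();
        System.out.println(sum); // 499500
    }
}
```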


altho to be fair - laziness is the thing obscuring the causes of your prob, and lazy vs eager isn't a functional thing - there are certainly eager functional languages (tho i'm not aware of lazy imperative langs)


but you still need to understand the laziness to reason about the side effects.


or in this case the lack of laziness somewhere.


Though combining laziness and side effects is generally a bad idea


I meant the side effects of laziness itself or in this case the side effects of eagerness


Basically something in the lib I'm using is eager not lazy, but I need to understand which fns are eager and which lazy just to use them safely. With an imperative loop I would explicitly know all this


My point is you have to have detailed knowledge of the implementation details of the fns just to use them, and I thought it was supposed to be simple.


> At first I thought I could just wrap ResultScanner iterator-seq in a take. That seemed to work on small tables. However on a large table (defined here as anything over a few thousand rows), the scan would blow up. It turned out that in order to minimize the number of network calls, a scanner by default will try to get all the matching entries and shove them into a ResultScanner iterator. For this use case, there isn't a limiting criterion on a scanner, so a scan will try to "fetch" the entire table into memory. That's one large ResultScanner object! No good. What to do?
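(A toy version of the usual fix: fetch rows in fixed-size pages instead of pulling everything at once. In real HBase the knob for this is Scan.setCaching, which caps rows fetched per RPC; everything below is a self-contained simulation, not HBase code:)

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.NoSuchElementException;

// A toy paged scanner: pulls rows in fixed-size batches so at most one
// batch is ever in memory. fetchPage() stands in for one network call.
public class PagedScanner implements Iterator<Integer> {
    private final int totalRows;  // stands in for the table size
    private final int batchSize;  // rows fetched per "network call"
    private int nextRowToFetch = 0;
    private List<Integer> page = new ArrayList<>();
    private int pos = 0;
    public int fetches = 0;       // how many "RPCs" we made

    PagedScanner(int totalRows, int batchSize) {
        this.totalRows = totalRows;
        this.batchSize = batchSize;
    }

    private void fetchPage() {
        page = new ArrayList<>();
        pos = 0;
        fetches++;
        for (int i = 0; i < batchSize && nextRowToFetch < totalRows; i++) {
            page.add(nextRowToFetch++);
        }
    }

    @Override public boolean hasNext() {
        if (pos >= page.size() && nextRowToFetch < totalRows) fetchPage();
        return pos < page.size();
    }

    @Override public Integer next() {
        if (!hasNext()) throw new NoSuchElementException();
        return page.get(pos++);
    }

    public static void main(String[] args) {
        PagedScanner s = new PagedScanner(10, 3);
        long sum = 0;
        while (s.hasNext()) sum += s.next();
        // 10 rows in batches of 3 -> 4 fetches, never the whole table at once
        System.out.println("sum=" + sum + " fetches=" + s.fetches);
        // -> sum=45 fetches=4
    }
}
```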


I think part of the issue will be interfacing with hbase and how juggling that resource all the way through the driver works, so I think you left simple a while ago


Some drivers handle that better than others (which is a bit of a pain I'll admit)