#clojure
2020-03-04
pablore01:03:20

Proxy question 🙂 Can I access the this object in the methods provided in a proxy? i.e.:

(defn my-instance [args]
  (proxy [MyAbstractClass] [args]
    (myMethod [this] (...))))

pablore01:03:30

My intuition is yes, but when I pass this proxy to an object that uses it, I get an arity exception when myMethod is called

pablore01:03:03

and the method I’m overriding has this signature:

void myMethod() { ... }

hiredman01:03:22

You don't put this as an argument for proxy

hiredman02:03:12

this is automatically bound in proxy methods

pablore02:03:41

😮 so how should I write it?

hiredman02:03:51

Take out the this
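A minimal sketch of the corrected form, using the same hypothetical MyAbstractClass from above; this is in scope automatically inside each method body:

(defn my-instance [args]
  (proxy [MyAbstractClass] [args]
    (myMethod []           ; no explicit this parameter
      ;; `this` is bound automatically by proxy to the proxy instance
      (println this))))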

pablore02:03:16

omg it works thanks

Alex Miller (Clojure team)02:03:28

that's the only anaphoric macro in clojure (that invents a special symbol)

vemv08:03:16

Possible (random) insight: by preferring mapv to map, one fails faster (in case something goes wrong), and importantly with clearer stacktraces. Stacktraces where laziness is involved tend to involve a lot more Clojure internals, burying the actual culprit; e.g. maybe the defn at fault lives in a different namespace than the namespace that threw the exception. wdyt?

Alex Miller (Clojure team)13:03:24

by preferring (into [] ...) to mapv you get all the perf benefits of transducers (and transients)
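For illustration, the two spellings side by side; the into form goes through a transducer and a transient vector, and transformations compose without intermediate collections:

(mapv inc [1 2 3])                                    ;=> [2 3 4]
(into [] (map inc) [1 2 3])                           ;=> [2 3 4]
(into [] (comp (map inc) (filter even?)) (range 10))  ;=> [2 4 6 8 10]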

vemv14:03:59

Nice! I would have imagined the impl of mapv would have been upgraded after the introduction of transducers

Alex Miller (Clojure team)14:03:26

mapv/filterv are vestigial functions now that transducers exist

👍 4
vemv14:03:39

That's good to know. I never got that message

deep-symmetry17:03:11

Nor did I! Perhaps clj-kondo should highlight it?

Alex Miller (Clojure team)17:03:20

I wouldn't put it as strongly as deprecate

Alex Miller (Clojure team)17:03:29

mapv is certainly more concise, and in cases where you know the data is small, you're not stacking transformations, etc., I still use it

Alex Miller (Clojure team)18:03:27

mapv and filterv were yet another small step on the "why do we keep building the same family of functional transformations in new contexts" journey

orestis09:03:56

In cases where I read from the database and the results might be consumed much later in a wildly different context, I have a thin wrapper that just calls first to force at least the first chunk of the sequence. This way any database errors will be thrown in the context where I can log the query, the user running it, etc.
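A minimal sketch of such a wrapper, with hypothetical names:

(defn forcing-first
  "Realizes the first chunk of a lazy result seq so that database errors
  are thrown here, where the query and user can still be logged."
  [results]
  (first results) ; throws in this context if the query is broken
  results)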

vemv10:03:27

That's interesting. It preserves the laziness, up to a point. One could call it semi-laziness :)

cursork12:03:08

Isn't that wrapper just equivalent to calling seq ? Given that it checks that the collection is non-empty?

vemv12:03:27

👀 Got you, one could (->> xs (map f) (seq)) to stay lazy while failing fast. Maybe sequence would be better, to avoid getting nil back for empty input.
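For reference, the difference on empty input:

user=> (seq (map inc []))
nil
user=> (sequence (map inc []))
()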

orestis09:03:50

Is there a way to extend keyword lookup to some random Java class that implements Map? So I can do: (:foo x) and that behind the scenes would call (.get x (name :foo))

vemv10:03:31

Probably not, as clojure.lang.ILookup is an interface (as opposed to a protocol, which can be extend-ed)

noisesmith16:03:59

the lookup behavior of keywords is in the java implementation (implementing IFn or AFn), so there's no good way to shadow or replace it. it's not a syntactic feature of clojure that keywords do lookup; it's just that clojure tries to invoke things and IFn tells you what happens if you do that

orestis09:03:44

I think it's possible by wrapping it with a deftype but I'd like to avoid it -- this is mainly for performance reasons and to lower GC pressure.
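For completeness, a sketch of the deftype route: keyword invocation goes through clojure.lang.ILookup, so implementing valAt is enough (StringKeyMap is a made-up name):

(deftype StringKeyMap [^java.util.Map m]
  clojure.lang.ILookup
  (valAt [_ k] (.get m (name k)))
  (valAt [_ k not-found]
    (if (.containsKey m (name k))
      (.get m (name k))
      not-found)))

(def x (StringKeyMap. (doto (java.util.HashMap.) (.put "foo" 1))))
(:foo x) ;=> 1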

caleb.macdonaldblack10:03:18

What's up with this error? Cannot cast javafx.scene.input.MouseButton to [Ljavafx.scene.input.MouseButton; Why the [L ?

bronsa10:03:55

means array of

👍 4
deep-symmetry17:03:39

Yeah, it’s a strange Java class file format internal detail that leaks into Clojure.
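It shows up at the REPL too; the error above means something expected an array of MouseButton but received a single one:

user=> (class (make-array String 0))
[Ljava.lang.String;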

teodorlu11:03:00

Hello! I've got a big Java application at work that I'd like to poke around in using Clojure. I've never "called Clojure from Java" or used a Clojure project without Leiningen / tools-deps or similar before. Is there a recommended resource/guide for this use case? Would be nice if:
• Simple to get a REPL up and connected to CIDER
• I could use tools-deps (or lein) to specify additional dependencies
• I could do this with minimal impact to the Java app

vlaaad12:03:52

I’m not sure about nrepl, but clojure supports socket repls out of the box via system properties; this is as close to “minimal impact” as it gets
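Concretely, the documented clojure.server.repl system property can be added to however the app is normally launched (the port is arbitrary, and app.jar stands in for the real launch command):

java -Dclojure.server.repl="{:port 5555 :accept clojure.core.server/repl}" -jar app.jar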

vlaaad12:03:21

does that java app use something like maven or gradle? clojure is “just a jar” that you can add using your build tool’s way to specify dependencies

sogaiu12:03:53

depending on the jdk, one of the following may work: https://github.com/dkz/liverepl (this version has some nrepl support iiuc). the original is: https://github.com/djpowell/liverepl -- may need some tweaking

sogaiu12:03:49

the original does not provide nrepl support and i think it predates the official socket repl (it provides its own method iiuc)

sogaiu12:03:37

i think they are more likely to work out of the box with jdk <= 8. i think i got one of them working with a more recent jdk, but i'd have to dig a bit to confirm -- i don't tend to do nrepl-related things much though, so even if this worked it would be via a socket repl.

teodorlu09:03:07

Hey @U47G49KHQ and @UG1C3AD5Z -- thanks for the replies. Sorry it took me a while to get back to you. We're using Gradle on OpenJDK 11. Thanks for the links. If I understand correctly, my options are:
1. Link Clojure's jar, add a system property to the process invocation, and connect via socket REPL
2. Link Clojure's jar, nREPL and CIDER, and add Java code to start a CIDER-enabled nREPL server in the existing codebase
3. Use Liverepl, which if I understand correctly lets the Clojure code live in a normal Clojure project but "remote connects" to the Java process. It's not clear to me how this would work, whether there's synchronization needed, or whether it eventually "joins" Java processes.
I appreciate your insight! I feel like I have a bit more to go on now.

sogaiu09:03:08

iirc, option 3 injects clojure into the running java process -- the java process then runs a socket repl of sorts and one connects to that. if you are just poking around, this can be a short-term option. however, the liverepl option is a bit dated and getting it working with recent jdks may require some tweaking. also if the java application is using java 9 modules or something "different" regarding the classpath, it may introduce additional complications. if you have aspirations of eventually using clojure within the application, i think vlaaad's approach makes sense to start with. another reason to go that way is that the application is using jdk > 8.

🙏 4
dominicm11:03:59

I'm having an issue with transit-clj where it seems that transit is somehow causing very small packets of data to be flushed. This is causing me massive overheads in transfer size. I haven't been able to find the cause/config for this though, any pointers?

dominicm11:03:22

Okay. May have found it, but someone's confirmation would be valuable. https://github.com/cognitect/transit-java/blob/cff7111c2081fc8415cd9bd6c6b2ba518680d660/src/main/java/com/cognitect/transit/impl/AbstractEmitter.java#L189 This appears to be the problem. Every time something is emitted, it's flushed... But that's going to cause immediate writing. I have no idea why one would want this.

ghadi14:03:08

@dominicm can you post the problem/repro case and not lead with the presumed solution?

ghadi14:03:37

I'll radiate it to the core team but the issue needs to be better first

dominicm14:03:59

I wanted to do that, but I'm still figuring out how to put something together exactly 🙂. I'd need to create a "slow" output stream or something in order to demonstrate the problem, I think.

ghadi14:03:39

ok. Let me know if/when you do. The issue #43 just uses an ordinary output-stream

dominicm14:03:00

Will be doing that today. Although I have definitely tracked this down to flushing. Would demonstrating the frequency of flushing be sufficient?

Alex Miller (Clojure team)14:03:22

I think the complementary use case is when you want to see each message on the stream as soon as it's ready

Alex Miller (Clojure team)14:03:37

these inherently fight in flushing behavior

Alex Miller (Clojure team)14:03:03

and maybe the best option is to make autoflushing a policy choice

Alex Miller (Clojure team)14:03:43

if you're not autoflushing, who's responsible? at what size/frequency? that's basically a flow control problem.

dominicm14:03:51

Isn't that the default for streams? As long as I'm using a java.io.OutputStream and not opting into a java.io.BufferedOutputStream

borkdude14:03:07

I found that first writing to a byte array output stream and then to a file was much, much faster than directly to the file: https://github.com/borkdude/clj-kondo/blob/99dc96c5359ee5420390049cb3ad30eb934a4d5b/src/clj_kondo/impl/cache.clj#L41
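A sketch of that pattern, where write-transit, data, and file are placeholders for whatever produces the bytes and wherever they go:

(require '[clojure.java.io :as io])

(let [baos (java.io.ByteArrayOutputStream.)]
  (write-transit baos data)              ; hypothetical writer
  (with-open [os (io/output-stream file)]
    (.writeTo baos os)))                 ; one bulk write to the file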

Alex Miller (Clojure team)14:03:33

what happens if you do use a bufferedoutputstream?

ghadi14:03:58

@dominicm i'm not disputing flushing -- a cursory skim of the code indicates that it's flushing after every entry in a map, not after full objects

ghadi14:03:45

I'm just saying the ticket should be something like "I'm using this thing in an ordinary way, and it's slow" later -- is it flushing after every key?

dominicm14:03:00

@alexmiller The .flush forces the bufferedoutputstream to immediately clear its buffer.

ghadi14:03:16

the problem is supported by seeing a difference on BAOS vs FileOutputStream

ghadi14:03:21

BAOS ignores flush, I think

dominicm14:03:30

Yeah, flush makes no sense for BAOS.

dominicm14:03:40

I suppose my exact use case would require bringing in something like jetty/pedestal to demonstrate the overheads involved from http. I could make an output stream which prints some newlines & length when each chunk writes to output.

ghadi14:03:35

hypothesis: you should see a performance difference between writing to BAOS and writing to jio/output-stream wrapping anything. Does reality reflect that?

ghadi14:03:52

IMHO no need to make a special OutputStream... on-label problems stronger than off-label problems

pinkfrog14:03:42

I wonder why gc kicks in in the first form, while in the second it cannot. The decompiled Java code looks the same.

pinkfrog14:03:53

(decompile (let [r (range 139)] (first r) (last r)))

;; heap space OOM
(decompile (let [r (range 1e9)] (last r) (first r)))

Alex Miller (Clojure team)14:03:42

you're holding on to the head of r, and then building a 1e9 long sequence to get the last value

Alex Miller (Clojure team)14:03:14

so the whole sequence is instantiated in memory

Alex Miller (Clojure team)14:03:41

I don't know what decompile is

pinkfrog14:03:12

user=> (doc decompile)
-------------------------
clj-java-decompiler.core/decompile
([form])
Macro
  Decompile the form into Java and print it to stdout. Form shouldn’t be quoted.
nil

pinkfrog14:03:44

My understanding is that it is the r that holds the sequence. So both forms should OOM. However, the first form doesn’t.

Alex Miller (Clojure team)14:03:09

well the first form is only 139 elements, which is not much memory?

pinkfrog14:03:12

Sorry. I pasted wrong. The number should be the same 1e9.

Alex Miller (Clojure team)14:03:05

in the first one, the last that walks the sequence is in tail position - the Clojure compiler can clear the r reference before evaluating it
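Concretely, with both ranges at 1e9 as intended:

;; safe: last is the final use of r and sits in tail position, so the
;; compiler clears r before last walks the seq, and it can be GC'd as it goes
(let [r (range 1e9)] (first r) (last r))

;; OOMs: r is still needed for the later (first r), so the head is retained
;; while last realizes the whole sequence
(let [r (range 1e9)] (last r) (first r))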

pinkfrog14:03:29

https://bpaste.net/PQ5Q It is the decompiled Java code. Is it more the runtime Java GC, or the Clojure compiler? I don’t see much difference between the decompiled code of the first and second forms.

dominicm14:03:20

I'm not having a performance issue though. My problem is directly with flushes against my output streams causing me to send very small buffers over http 🙂. I can't see anything particularly like this in Java itself.

Alex Miller (Clojure team)14:03:14

isn't that a performance problem?

ghadi14:03:29

please lead with that in the ticket

ghadi14:03:50

I'm using HTTP C-T-E and 6.7MB of Transit is making 10MB of HTTP chunking

Alex Miller (Clojure team)14:03:58

I find your problem description to make sense

Alex Miller (Clojure team)14:03:46

on the solution side, seems like a custom stream impl could selectively ignore flushes

dominicm14:03:50

This is more workaround than solution. Ignoring flushes is potentially dangerous. I think I can get this right for my particular use case, but if you're not careful about flushing at the right time, the end of your response may be cut off. I'll likely be reading the source for the output stream I'm receiving to ensure that its .close method doesn't utilize .flush.

dominicm14:03:06

Which also means my solution isn't general purpose.
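A sketch of the selective-ignore idea mentioned above, flushing once on close so the tail of the response isn't cut off (java.io.FilterOutputStream delegates writes, byte-at-a-time for arrays, which is fine for a sketch):

(defn ignore-flushes ^java.io.OutputStream [^java.io.OutputStream out]
  (proxy [java.io.FilterOutputStream] [out]
    (flush [])           ; swallow intermediate flushes
    (close []
      (.flush out)       ; one final flush before closing
      (.close out))))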

ghadi14:03:22

It makes sense to me too, but a clearer ticket is easier to act upon for someone with less context than us

dominicm14:03:18

I'm happy to reorder the paragraphs and change the title. But an actual repro for my exact problem seems like it would be too large to be useful? Maybe I've been trained to make bad repros?

dominicm14:03:12

The description change is made. I'm happy to make a repro in the way that is most useful to someone 🙂

ghadi14:03:25

thanks, it seems small but that's helpful @dominicm

dominicm14:03:11

@ghadi It's no problem. The previous ordering was entirely my attempt at being as helpful as possible, I apologize that it had the opposite effect.

ghadi14:03:58

@dominicm you're great! I've caught myself lately leading with solutions and it's burned me around understanding problems clearly.

ghadi14:03:34

is there anything special to trigger Chunked-Transfer in pedestal responses?

dominicm14:03:57

I don't think so. I think that pedestal's http.clj transit integration takes care of it automatically. I would assume that as the API returns a function taking an outputstream, it can't know the length and has to automatically fall back to chunked-transfer. One potential thing to note is that I happen to know this is Jetty behavior, but I'm unclear about whether other web servers would consume the whole outputstream and then calculate the length. It's unlikely, but possible.

restenb16:03:37

is there a "nice" way to get map-indexedto accept multiple lists? e.g. (map-indexed (fn [idx foo bar baz] ...) foo-list bar-list baz-list)

chrisulloa16:03:20

combine the lists with concat?

restenb16:03:09

how would that help? concat would just give me one big list

bfabry16:03:18

(map-indexed (fn [idx [a b c]] ...) (map vector a-list b-list c-list))

dpsutton16:03:51

(map (fn [id x y z] ...)
     (range)
     coll
     coll
     coll)

❤️ 8
✔️ 4
bfabry16:03:37

ooh, clever

restenb16:03:44

@dpsutton hell yeah I like that 😄

dpsutton16:03:10

has the benefit of not having to remember where the index is. i always forget if it's first or last

borkdude17:03:33

clj-kondo reads type annotations and uses them for linting. this resulted in a false positive for byte:

(defn byte
  "Coerce to byte"
  {:inline (fn  [x] `(. clojure.lang.RT (~(if *unchecked-math* 'uncheckedByteCast 'byteCast) ~x)))
   :added "1.0"}
  [^Number x] (clojure.lang.RT/byteCast x))

So now clj-kondo thinks that byte always takes a number or nil (due to the nullability of objects in Java). But it actually accepts more than that when you look at the impl of byteCast, e.g. characters. So is the type annotation wrong?

borkdude17:03:55

False positive:

$ clj -A:clj-kondo --lint - <<< "(byte \a)"
<stdin>:1:7: warning: Expected: number or nil, received: character.

borkdude17:03:18

user=> (instance? Number \a)
false

borkdude17:03:48

in what way is the type hint used here by Clojure / JVM?

hiredman17:03:46

this is why type hints are different from type annotations

hiredman17:03:07

it is just a hint the compiler can use if it needs it

borkdude17:03:14

I know, but the type hint seems redundant and misleading then, maybe?

borkdude17:03:46

if this isn't important, I'll just ignore it, won't make a JIRA issue and override the type config in clj-kondo

hiredman17:03:33

it is definitely redundant, it isn't used

Alex Miller (Clojure team)17:03:41

not something I'm concerned about

denik19:03:06

Looking for resources on deploying & running Clojure apps based on tools.deps. Any recommendations?

ghadi19:03:27

Datomic Cloud Ions are really nice @denik

ghadi19:03:52

if not using Ions, usually I make an uberjar and ship it to whatever ECS / K8S

denik19:03:42

thanks @ghadi. Ions are too involved (AWS lock-in etc). Re: shipping uberjars, what starts your jar/app on the destination machine?

ghadi19:03:18

java -cp your.jar clojure.main -m your.entry.namespace

denik19:03:34

preferably I'd use a git push (similar to dokku/heroku) and have everything else run automatically. the complexity of dokku is frightening though.

Ahmed Hassan02:03:02

Complexity of using it, or of reading the dokku codebase? Dokku has a good "git push" story.

denik12:03:55

The codebase / size / deps

Ahmed Hassan12:03:10

It handles many operations which we'd otherwise have to deal with manually; won't an alternative with the same features have a similar size?

Ahmed Hassan12:03:40

The Dokku codebase is largely composed of bash scripts.

Ahmed Hassan12:03:13

What alternative ways do you recommend to Dokku?

denik15:03:46

there’s little to recommend. Piku doesn’t work as well as it looks.

denik15:03:11

I posted because I don’t know good alternatives

Ahmed Hassan05:03:28

So, do you prefer bash scripts over these solutions?

ghadi19:03:02

you are describing Datomic Cloud Ions pretty well 🙂

ghadi19:03:02

it's git push, and your code runs inside the database. disclaimer: I work for Cognitect but not on Datomic. I do a bunch of AWS work

ghadi19:03:23

otherwise in other systems your git push triggers your CI workflow

ghadi19:03:41

something (GitHub Actions/GitLab/Jenkins) picks up an event

ghadi19:03:55

checks out code, runs tests, builds jars, deploys jars

ghadi19:03:13

it's a whole bunch of glue scripts

thom19:03:15

Heroku supports Clojure, I thought? If that’s something you’re comfortable with.

lukasz19:03:48

setting up a reliable "git push to aws" pipeline is a lot of work imho

lukasz19:03:14

if you want to wire up all of the bits yourself

denik19:03:24

almost ripped my eyes out looking at the internals of a few PaaS frameworks

denik19:03:33

thanks @borkdude not the most convenient but simplest. I'll probably go w/ that

borkdude19:03:15

I've been running (hobby) Clojure apps like that since 2012, not digital ocean but similar

denik20:03:17

Anticipating at least weekly deploys for this app and want to minimize downtime

thom19:03:33

Elastic Beanstalk is dead easy on AWS and you can deploy from the command line. Not sure what you’re gaining by running it all yourself.

kenny19:03:23

@denik https://gumroad.com/l/aws-good-parts has a tried and true methodology of deploying on ec2. I highly recommend it.

hiredman20:03:06

the uberjar approach is tried and true, but eventually leads to complaints about the size of artifacts. there is potential to do much better with tools.deps: deploy each dependency only if it isn't already deployed, and then just ship a single text file containing a classpath
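One way to sketch that with the clj CLI: resolve once at build time and ship the resulting classpath file (the jar paths it names must exist on the server, via whatever sync mechanism you already have):

# resolve dependencies once, at build time
clojure -Spath > classpath.txt
# on the server, after syncing the jars and classpath.txt:
java -cp "$(cat classpath.txt)" clojure.main -m your.entry.namespace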

ghadi20:03:20

same disclaimer as I had above, but Datomic Ions do this already ^

borkdude20:03:22

why not run tools.deps "over there"?

denik20:03:51

That's what I want to do. Done w/ lein

hiredman20:03:48

minimize the amount of code to run over there

ghadi20:03:58

you could, but then you make a larger failure domain

hiredman20:03:12

same reason it hurt me to see people are seriously using lein run in production

ghadi20:03:12

I think it's problematic too.

hiredman20:03:15

in a production environment you want to minimize the possible failures and differences between servers; introducing lein and dependency resolution into the mix, where that is happening on each server independently, is no good

hiredman20:03:09

so ideally you resolve dependencies once, and each copy of your server gets those exact dependencies instead of resolving again independently

kenny20:03:51

> the uberjar approach is tried and true, but eventually leads to complaints about the size of artifacts.

Is this a real issue for people? We have a large enterprise application with quite a large dependency tree, and the artifact size has never been an issue for us. We depend on some large native libraries (e.g., Intel MKL) but those are pre-installed onto the image we deploy to. The jar is always under 100mb, which can be downloaded in a couple seconds.

hiredman20:03:16

It can be; it depends to some degree on how you are doing builds, how many builds, and how fat the pipe is between where you do builds and where you deploy

hiredman20:03:58

At my last job it was just slightly annoying (only one real artifact and fast pipes), but at my current job we have multiple uberjars with a lot of overlap in dependencies (so for a new dep the total jar size grows by dep size * number of uberjars) and a small pipe between builds and deploys

bfabry20:03:20

if you're building and deploying (trunk-based dev) on every commit to a distributed bunch of machines for a project with 100 devs, that's gonna add up

kenny20:03:17

Ah true. We're only doing 10s of deploys a day at the moment. You're referring to data transfer fees adding up, right?

bfabry20:03:01

the most likely problem for that scenario yeah

kenny21:03:28

I'm trying to read a large csv file (~1.1gb) using data.csv. data.csv is supposed to read in data lazily to minimize memory usage. I am seeing the java proc go up to 20gb of memory while reading the csv as shown below. I'm not sure why it would go up to 20gb if it is processing the csv lazily. Am I doing something wrong here, or is this much memory expected when reading a large csv file?

(with-open [rdr (io/reader "data.csv")]
  (let [rows (csv/read-csv rdr)
        header (nth rows 0)
        data-rows (rest rows)]
    (vec (drop-while (constantly true) data-rows))))

bfabry21:03:02

drop-while will walk over the whole sequence, realising it, while data-rows is still retaining the head

kenny21:03:14

From VisualVM, I can see it's holding on to a bunch of strings.

hiredman21:03:45

drop-while doesn't realize it, the call to vec does

kenny21:03:08

Right. So the issue is it gets realized and the let is holding onto the head?

hiredman21:03:22

the call to vec is realizing it

hiredman21:03:32

you are reading all the data in and turning it in to a vector

kenny21:03:39

An empty vector

bfabry21:03:42

yes, sorry, the call to vec is realising the lazy seq created by drop-while, which contains the lazy seq from data.csv

hiredman21:03:34

ah, wasn't paying attention to the drop while predicate

hiredman21:03:22

just because the strings are in memory that doesn't mean they are being held

hiredman21:03:08

that could also just mean they haven't been gc'ed yet. how large is your max heap, and have you actually seen an out of memory error?

kenny21:03:15

No oom. Max heap is 30gb.

kenny21:03:33

Actually the original code seems fine.

hiredman21:03:03

jvm will basically use whatever you give it, so by giving it such a large heap, it may delay gc'ing until it has allocated close to 30gb

kenny21:03:55

Forcing a gc does clean up all that. To be clear, vec is realizing the lazy seq but data-rows is not holding on to the entire csv.

bfabry21:03:07

seems I need to re-read some stuff

bfabry21:03:04

something something locals clearing

ghadi21:03:45

have you tried reading with a 512MB max heap?

ghadi21:03:12

that will confirm both hiredman's reasoning and your correct implementation

kenny21:03:22

Good idea. Will try that

ghadi21:03:20

keep in mind that as soon as you remove the artificial drop-while argument, your call to vec becomes a liability

kenny22:03:49

I do get an OOM when running with 512m.

kenny23:03:46

Same with 1g

kenny23:03:44

2g as well. Perhaps holding on to rows in the let is pulling everything into memory?

kenny23:03:27

This does not oom

(with-open [rdr (io/reader "data.csv")]
  (vec (drop-while (constantly true) (csv/read-csv rdr))))

kenny23:03:50

It's definitely the rows in the let binding. Removing that fixed this.

andy.fingerhut23:03:18

There have been changes to Clojure in the past several years that add locals clearing in some cases where it was not done before, to avoid some 'holding onto head' cases similar to this. It isn't clear to me whether this is a case of "should hold onto head" or "locals clearing should prevent that".

bfabry23:03:59

I think with-open could kill locals clearing

ghadi23:03:25

Don’t guess; look at the bytecode

ghadi23:03:46

Need to know what clojure version if someone wants to repro

hiredman23:03:29

I don't spend as much time looking at the compiler as I have done in the past, but I think the way it determines when to clear locals might be the most inscrutable part of it. I have a toy re-implementation of core.async's go macro that does some pretty standard dataflow analysis, which results in always clearing locals right after last use, but is iterative (I think I read somewhere that if you arrange things right the algorithm will be linear, which I have not done).

hiredman23:03:51

what the clojure compiler does is supposed to be better in some way (faster? simpler?); it generates a tree of usages and compares paths in that tree, and it always seems to need tinkering with

bfabry23:03:57

public static Object invokeStatic() {
        final Object rdr = ((IFn)user$fn__287.const__0.getRawRoot()).invoke("data.csv");
        Object invoke;
        try {
            final Object rows = ((IFn)user$fn__287.const__1.getRawRoot()).invoke(rdr);
            final Object header = RT.nth(rows, RT.intCast(0L));
            final Object data_rows = ((IFn)user$fn__287.const__4.getRawRoot()).invoke(rows);
            invoke = ((IFn)user$fn__287.const__5.getRawRoot()).invoke(((IFn)user$fn__287.const__6.getRawRoot()).invoke(((IFn)user$fn__287.const__7.getRawRoot()).invoke(Boolean.TRUE), data_rows));
        }
        finally {
            ((Reader)rdr).close();
        }
        return invoke;
    }

bfabry23:03:51

would locals clearing, if it were happening, look like a setting of those variables to null before one of those steps in invoke?

bfabry23:03:57

(I dunno how accurate it is but clj-java-decompiler is damn cool)

✔️ 4
hiredman23:03:07

I dunno exactly what it would look like in whatever you are using to turn the bytecode into Java, maybe something like rows = null; after the last usage

hiredman23:03:03

> To make the output clearer, clj-java-decompiler by default disables https://clojuredocs.org/clojure.core/*compiler-options* for the code it compiles. You can re-enable it by setting this compiler option to false explicitly, like this:

hiredman23:03:22

Object rdr = ((IFn)user$fn__196.const__0.getRawRoot()).invoke("data.csv");
Object invoke2;
try {
    Object rows = ((IFn)user$fn__196.const__0.getRawRoot()).invoke(rdr);
    RT.nth(rows, RT.intCast(0L));
    final IFn fn = (IFn)user$fn__196.const__3.getRawRoot();
    final Object o = rows;
    rows = null;
    Object data_rows = fn.invoke(o);
    final IFn fn2 = (IFn)user$fn__196.const__4.getRawRoot();
    final IFn fn3 = (IFn)user$fn__196.const__5.getRawRoot();
    final Object invoke = ((IFn)user$fn__196.const__6.getRawRoot()).invoke(Boolean.TRUE);
    final Object o2 = data_rows;
    data_rows = null;
    invoke2 = fn2.invoke(fn3.invoke(invoke, o2));
}
finally {
    final Object target = rdr;
    rdr = null;
    Reflector.invokeNoArgInstanceMember(target, "close", false);
}
return invoke2;

hiredman23:03:56

is what it looks like if you don't turn off locals clearing (and in fact locals are being cleared)

hiredman23:03:38

you can see it is confused by the bytecode that isn't directly translatable to java

hiredman23:03:01

(the o = rows right before rows = null)

bfabry23:03:21

so data_rows is being cleared. but o2 is defined as data_rows and is not

bfabry23:03:32

oh you're saying that's garbage, interesting

hiredman23:03:32

so my guess is @U083D6HK9 is using some tooling that turns locals clearing off

hiredman23:03:20

it is, the bytecode that is emitted loads the value on to the stack (no direct representation in java source) from the local and then nulls out the local

kenny23:03:28

Typing this in my repl does indeed sound like that is true

*compiler-options*
=> {:disable-locals-clearing true}

bfabry23:03:56

mystery solved

kenny23:03:18

Next question is who is setting that & why is it being set to that 🙂

hiredman23:03:38

my guess would be you are using cursive

kenny23:03:43

You would be correct

hiredman23:03:59

cursive likely turns it off by default for the debugger

bfabry23:03:00

oh you can disassemble to bytecode instead, nice

kenny23:03:39

Ah, I am using the debugger-enabled repl.

kenny23:03:38

Yup, that is it. I see a little switch for it. That clears things up 🙂 Thanks

dpsutton21:03:54

no core.match channel, but wondering if anyone knows why core.match regex matching is built on re-matches rather than re-find

seancorfield22:03:43

re-find is unanchored, re-matches is whole-string (anchored)

seancorfield22:03:56

(off the top of my head)

seancorfield22:03:35

user=> (re-matches #"foo" "foobar")
nil
user=> (re-find #"foo" "foobar")
"foo"
@dpsutton

borkdude22:03:24

(a little bit meta: finding re-find in an app called re-find: https://borkdude.github.io/re-find.web/?args=%23%22foo%22%20%22foobar%22&ret=%22foo%22 )

😀 4