
@seancorfield: indeed, there needs to be a law of software engineering where for any global all-user-flag, there must be a corresponding per-user-flag 🙂


are there any professional grade ray tracers written in java/clojure? I've found a number of projects without commits for 6+ months


1. The answer I'm looking for is not clojure-spec, core.typed, or spectrum. 2. Is there a port of TLA+ (invented by Leslie Lamport) to Clojure? 3. I figure if it's invented by a Turing Award winner and used at AWS, it's probably worth learning.


@qqq I didn't realize I could use the X to close out those chatty insertions... sometimes they are ok, mostly though they are too much chatter. Thanks for mentioning it!


@bherrmann @qqq you can turn them off in your preferences


under Messages & Media -> Inline Media & Links


@sundarj: got rid of inline links; thanks!


@qqq "professional grade raytracer" and "java/clojure" don't really belong in the same sentence. Most professional raytracers these days are hybrid GPU systems.


For example Blender's latest raytracer runs about 80% of the math in the GPU via OpenCL and CUDA.


While I understand the general thrust of your POV here, Neanderthal makes GPU use via OpenCL and CUDA directly available to Clojure users. And it works remarkably well with performance comparable to any C based use.


matrix math is one thing, optimized volume search structures are another. The complexity in raytracing comes not so much from the raw amount of math involved, but from the nasty combination of math and branching logic. While generalized solutions exist, these are often highly optimized, hand-coded C++ and CUDA combinations.


Doing that quickly on the JVM, a platform that makes memory layout really hard to do properly, is not an easy task.


Multiplying matrices via the GPU and wiring that up via a JVM language is fairly trivial, it's just optimization work. Writing a BSP tree or the like on the JVM is quite hard.
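To illustrate the memory-layout pain: since the JVM (pre-Valhalla) has no value types or structs, tree nodes become scattered heap objects unless you pack their fields into a flat primitive array by hand. A minimal sketch of that pattern; the field layout and stride here are illustrative, not from any real raytracer:

```java
// Sketch: packing BVH/BSP-style nodes into one flat float[] so the data
// sits contiguously in memory, instead of as pointer-chased heap objects.
public class FlatNodes {
    static final int STRIDE = 8; // minX,minY,minZ,maxX,maxY,maxZ,left,right

    final float[] data;

    FlatNodes(int count) {
        data = new float[count * STRIDE];
    }

    void setBounds(int node, float minX, float minY, float minZ,
                   float maxX, float maxY, float maxZ) {
        int base = node * STRIDE;
        data[base]     = minX;
        data[base + 1] = minY;
        data[base + 2] = minZ;
        data[base + 3] = maxX;
        data[base + 4] = maxY;
        data[base + 5] = maxZ;
    }

    float minX(int node) { return data[node * STRIDE]; }
    float maxX(int node) { return data[node * STRIDE + 3]; }

    public static void main(String[] args) {
        FlatNodes nodes = new FlatNodes(2);
        nodes.setBounds(1, -1f, -1f, -1f, 1f, 2f, 3f);
        System.out.println(nodes.minX(1)); // -1.0
        System.out.println(nodes.maxX(1)); // 1.0
    }
}
```

All the index arithmetic the compiler would do for you in C++ you end up writing by hand, which is part of why this is "quite hard" on the JVM.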


And once you've built all that on the JVM you still don't have SIMD available so you're left with Java's rather crappy math primitives, or you have to ship it all to the GPU.


Contrast that with CUDA and C++ where these days you can have CUDA reach directly into main memory and work with native C/C++ structs.
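For contrast, the closest the JVM gets to sharing a native struct is hand-laying-out an off-heap direct ByteBuffer, which native code bound via JNI (or wrappers like JCuda) can read without an extra copy. A minimal sketch:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Sketch: an off-heap direct buffer laid out by hand as a 3-float vector.
// This memory lives outside the GC heap, so a native library could read
// it in place -- the JVM's rough analogue of handing CUDA a C struct.
public class OffHeapVec3 {
    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocateDirect(3 * Float.BYTES)
                                   .order(ByteOrder.nativeOrder());
        buf.putFloat(0, 1.0f);
        buf.putFloat(4, 2.0f);
        buf.putFloat(8, 3.0f);
        System.out.println(buf.getFloat(4)); // 2.0
        System.out.println(buf.isDirect());  // true
    }
}
```

Workable, but every field access is manual offset bookkeeping rather than a typed struct member.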


So all that to say, building this on the JVM doesn't buy you much, which is probably the reason most (all?) professional raytracers are built in unmanaged languages.


Yeah, I get all that and in Neanderthal's case it does push out to GPU and make use of SIMD via MKL etc. In large measure it is a simpler means of pushing all the calculation out to native and GPU routines and gathering back the results. If you need new kernels, you still would need to do that by hand via C...


does anyone know if there’s a way to simply get the top level document names in an elasticsearch query?


Define top level; if you mean indices you can hit /_cat/indices




@mpenet meaning that instead of seeing the full template, I just want to see the name of each template that exists on the server


@tbalashov: interesting, I was unaware ray tracing is now popular on the GPU. I was under the impression that ray tracing, due to light bouncing around + octrees, had lots of branching, which doesn't play well with GPU parallelism. Are these new algorithms, or are my assumptions false?


@qqq no, that's true but two things have changed more recently. GPU cores are less dependent on parallel memory loads, so you don't pay as much when the cores go out of sync. Although branching hurts cores in the same warp, we have more cores now. The GTX 1080ti has 3584 cores, and IIRC there's ~32 cores in a warp? So even if you assume a worst case where the cores in a warp never align, that still gives you over 100 logical cores. And often it will be much better.


But very few raytracers run 100% in the GPU; instead, if you can do your raytracing in an optimized structure that has fewer branches, you can use the GPU as a batch processor. Send over 100k rays, and it sends back how far each ray got until it intersected a volume. Or perhaps it also sends back the idx of the polygon it hit. Then you calculate all the reflections, and execute another batch, etc.
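That batch "submit rays, get back hit distances" shape can be sketched on the CPU; the names are illustrative and a single sphere stands in for a real scene (a GPU would run the per-ray loop in parallel):

```java
// Sketch of the batch pattern described above: a flat array of rays in,
// a hit distance per ray out (-1 for a miss).
public class RayBatch {
    // Ray layout: ox,oy,oz,dx,dy,dz (directions assumed normalized)
    static float[] intersectBatch(float[] rays) {
        int n = rays.length / 6;
        float[] hitT = new float[n];
        for (int i = 0; i < n; i++) {
            int b = i * 6;
            hitT[i] = hitSphere(rays[b], rays[b + 1], rays[b + 2],
                                rays[b + 3], rays[b + 4], rays[b + 5]);
        }
        return hitT;
    }

    // Unit sphere at the origin; returns ray parameter t, or -1 on miss
    static float hitSphere(float ox, float oy, float oz,
                           float dx, float dy, float dz) {
        float b = ox * dx + oy * dy + oz * dz;
        float c = ox * ox + oy * oy + oz * oz - 1f;
        float disc = b * b - c;
        if (disc < 0) return -1f;
        float t = -b - (float) Math.sqrt(disc);
        return t >= 0 ? t : -1f;
    }

    public static void main(String[] args) {
        float[] rays = {
            0, 0, -3,  0, 0, 1,   // aimed at the sphere: hits at t = 2
            0, 0, -3,  0, 0, -1,  // aimed away: misses
        };
        float[] t = intersectBatch(rays);
        System.out.println(t[0]); // 2.0
        System.out.println(t[1]); // -1.0
    }
}
```

The host then uses the returned distances to spawn the next bounce's batch, which is exactly the round-trip loop described in the message.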


And that's pretty much the whole idea of NVidia OptiX, you give it a bunch of polygons and then use the library as a "query engine" for batches of rays.


But in general GPUs have just gotten "that good" that at their worst they still beat CPUs at raw raytracing performance.