This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2022-07-29
Channels
- # announcements (3)
- # babashka (47)
- # beginners (88)
- # calva (17)
- # clj-kondo (8)
- # cljdoc (1)
- # clojars (9)
- # clojure (98)
- # clojure-europe (53)
- # clojure-norway (2)
- # clojure-seattle (1)
- # clojure-uk (5)
- # clojurescript (20)
- # cursive (11)
- # data-oriented-programming (1)
- # data-science (3)
- # datahike (1)
- # datascript (3)
- # events (3)
- # graalvm (5)
- # honeysql (7)
- # hyperfiddle (1)
- # jobs-discuss (10)
- # leiningen (3)
- # malli (16)
- # music (4)
- # nbb (17)
- # off-topic (45)
- # pathom (9)
- # portal (7)
- # releases (1)
- # shadow-cljs (80)
- # sql (15)
- # tools-build (5)
- # xtdb (23)
This https://gist.github.com/drewverlee/4635bc04475670309e488b81089dec96 seems to have two Exceptions in it; I'm confused about which is the root cause. I assume the root is closer to the top, so the first Exception is the namespace one?
You can simulate with your own exceptions:
user=> (throw (Exception. "top level" (Exception. "inner")))
Execution error at user/eval151 (REPL:1).
inner
user=> *e
#error {
:cause "inner"
:via
[{:type java.lang.Exception
:message "top level"
:at [user$eval151 invokeStatic "NO_SOURCE_FILE" 1]}
{:type java.lang.Exception
:message "inner"
:at [user$eval151 invokeStatic "NO_SOURCE_FILE" 1]}]
:trace
[[user$eval151 invokeStatic "NO_SOURCE_FILE" 1]
[user$eval151 invoke "NO_SOURCE_FILE" 1]
[clojure.lang.Compiler eval "Compiler.java" 7194]
[clojure.lang.Compiler eval "Compiler.java" 7149]
...
The spec failure is the relevant one. There's an ns
form somewhere that is no longer valid. (I assume you had figured that out but just wanted to confirm it)
I will say that output like that is why I will not use any "stack trace prettifiers" 🙂
Oh interesting, I wasn't even thinking about the st being modified. I'll try turning it off and seeing how it looked.
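For finding the root programmatically: clojure.stacktrace/root-cause does this, or here's a tiny sketch by hand:

```clojure
;; walk the cause chain down to the innermost exception
(defn root-cause [^Throwable t]
  (if-let [cause (.getCause t)]
    (recur cause)
    t))

(.getMessage (root-cause (Exception. "top level" (Exception. "inner"))))
;; => "inner"
```

(In the #error map above that's the :cause entry, i.e. the last item in :via.)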
Are there any good doc generators that work with a schema framework like malli? I've been hoping to find something automated that helps document API schemas and large maps in general.
We're working on something like this for schema for our documentation at http://polytope.com (it's a work in progress, so not much to show at the moment), that we might open source if there's interest.
Another option would be to export the malli schema to json schema and produce a doc from that
@U15RYEQPJ that would be great. I'm kind of amazed that there aren't any good doc generators out there. I feel like this is especially needed in clojure as you can't rely on a type system to do formal documentation.
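In case it helps with the JSON Schema route: malli has malli.json-schema/transform built in. A sketch, with a made-up schema just to show the shape:

```clojure
(require '[malli.json-schema :as json-schema])

;; hypothetical schema, purely for illustration
(json-schema/transform
  [:map
   [:name :string]
   [:age [:int {:min 0}]]])
;; => a JSON-Schema-shaped map: {:type "object", :properties {...}, :required [...]}
```

From there any off-the-shelf JSON Schema doc generator should work.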
How would I do this in leiningen? https://stackoverflow.com/a/69883701 I have a deeply nested message and I'm trying to render into json so I can convert it to edn.
So you basically need a JSON library? If that's the reason to use the google cloud lib then I would reach for an alternative https://github.com/clojure/data.json
I have a protobuf message that I'm trying to convert into json
It's not cloud, sorry; that just seems to be the error I'm having?
(:import
com.google.protobuf.util.JsonFormat)
I get a class not found exception with this; it's what I'm trying to use
Assuming you have the compiled schema codec, easiest would be wrapping it with pronto then serializing to json https://github.com/AppsFlyer/pronto
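One note on the class-not-found: JsonFormat lives in the separate protobuf-java-util artifact, not in protobuf-java, so it also needs to be declared as a dependency (in Leiningen, something like [com.google.protobuf/protobuf-java-util "3.21.2"] in :dependencies). With that on the classpath, a sketch (msg stands in for your compiled message instance):

```clojure
(ns example.proto-json
  (:import (com.google.protobuf.util JsonFormat)))

;; msg is assumed to be an instance of a compiled protobuf message class
(defn message->json [msg]
  (-> (JsonFormat/printer)
      (.print msg)))
```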
I want to dynamically recompute some data whenever it's accessed when running in a REPL, but to precompute the same data (and then not recompute it) during build time when producing an uberjar. Is this possible?
depends on what kind of data you are talking about. It could be possible by having a special env config which would be turned on in the dev environment.
well, it's certainly possible computation-wise
I guess my problem is that the data required to compute the thing isn't available when running the uberjar
which is why I want to compute it at compile time
I'm thinking I should be able to wrap the computation in a macro to get this effect, but I'm not sure how
hmm... isn't delay the opposite of what I want?
I don't think this is achievable and feels fairly suspicious. With aot compilation you get something that's executed twice - during compilation and when the namespace is loaded.
I was thinking of something along the following lines:
(defn compute [] ...)
(defmacro precompute []
  (compute))
(defn get-data
  [...]
  (if (= "true" (System/getProperty "recompute.please"))
    (compute)
    (precompute)))
eeeh, I'd rather not
I want this to be as smooth as possible, so I don't want to add any extra build steps
Why?
E.g. we store version of our product in version.edn
which is generated during build time and read at runtime
(I wrote that Why? a bit earlier 🙂 - fair enough, although I would probably do it this way)
so a few drawbacks with using files:
• I'd need logic for reading and writing the file, which could get complicated because it's a big data structure with a bunch of binary blobs.
• In development, I'd need to do something completely different (ignore the file)
but @U06BE1L6T, are you saying that the macro approach I outlined above won't work?
I'm not 100% sure, but I don't think you'll have your data lying around waiting for you to pick it up. The result of AOT compilation is a recipe for loading the file; I don't think the bytecode would actually store your data (and you say the structure is big, so another reason not to do it really)
I don't really mind if the generated bytecode is large
Maybe if compute
returns something very simple like a string or a number it could work (i.e. be stored in the bytecode)
But I'm not really sure this will work with more complicated structures.
I think you are really just trying to avoid the inevitable, that is, storing this somewhere explicitly 🙂.
I'm usually compiling the uberjar to a graalvm native binary anyway
perhaps I could make sure the data is available on disk during that step
wait, actually
wouldn't this indicate that my approach should work? https://stackoverflow.com/a/48105049
that's pretty similar to my use-case: the macro loads data from disk that's not available in the uberjar
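Right, that's the key point: a macro body runs at compile time, so whatever value it returns gets baked into the compiled output. A tiny sketch:

```clojure
;; the macro body runs when this form is compiled, not when the code runs
(defmacro compiled-at []
  (str (java.time.Instant/now)))

;; after AOT compilation, `built` holds the compile-time timestamp,
;; even if the uberjar is run much later
(def built (compiled-at))
```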
ah okay, I tried it and it gives me
java.lang.RuntimeException: Can't embed object in code, maybe print-dup not defined: [B@6146d811
so it kind of works, except that the compiler can't represent byte arrays
right
let's hope using nippy is fast enough
alright, next question: let's say I want to generate a file at JAR build time
do I need to set up a separate entrypoint for this, or is there some way to automatically run some code only while building the jar?
A string is just a byte array plus an encoding ... if you wanted to use a macro to embed a byte array in some class file, you could encode it as a string and call .getBytes on it or something?
@U0P0TMEFJ good point... I'll try walking the data and see how it goes
if you write the macro to def
a var then you can just re-eval the macro to reload the data in dev?
well, I still need to know if I should re-eval the macro or not
in dev I want to do that when using the data, not when I load the file
...actually, perhaps the string approach would be wasteful? I presume you're thinking of something like:
(def serialized
  (clojure.walk/postwalk #(if (bytes? %) ^{:bytes true} [(String. ^bytes %)] %) my-data))
(def deserialized
  (clojure.walk/prewalk #(if (:bytes (meta %)) (.getBytes ^String (first %)) %) serialized))
wouldn't this cause an unnecessary duplication of my byte arrays?
String.getBytes
seems to create a copy
...and, it seems like String
actually uses char arrays, not byte arrays
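FWIW, if you do go the string route, ISO-8859-1 is the safe charset for it: each of the 256 byte values maps to exactly one char and back, so the round-trip is lossless (UTF-8 would mangle arbitrary bytes):

```clojure
;; round-tripping arbitrary bytes through a String via ISO-8859-1
(let [bs (byte-array [0 100 -1 -128])]
  (java.util.Arrays/equals
   bs
   (.getBytes (String. bs "ISO-8859-1") "ISO-8859-1")))
;; => true
```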
I ran some benchmarks and it seems like it's pretty fast at least
hopefully it doesn't eat too much memory
the other thing to consider here is that if you are trying to write out a big precomputed data structure that has binary blobs and similar into your bytecode, then every single type that's used must have a print-dup
implementation that works for a round-trip.
doing an io/resource with a nippy freeze/thaw to round trip the data and making your get-data
function read from an environment variable that says "DEVELOPMENT=true" or something to decide to compute the data each time would be the way I'd recommend for this.
although I see you've gone over some of this already.
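Roughly what I'd sketch (compute and data.nippy are placeholders, and it assumes com.taoensso/nippy is a dependency):

```clojure
(require '[clojure.java.io :as io]
         '[taoensso.nippy :as nippy])

(defn compute []                       ; placeholder for the expensive computation
  {:example "data"})

(defn get-data []
  (if (= "true" (System/getenv "DEVELOPMENT"))
    (compute)                          ; dev: recompute on every call
    ;; prod: thaw the blob that the build step put on the classpath
    (with-open [in (io/input-stream (io/resource "data.nippy"))]
      (nippy/thaw (.readAllBytes in)))))
```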
Makes sense. Right now I'm not bothering with writing to a file; I'm walking over the data structure and base64ing all binary objects.
It works and it's a bit faster than nippy, but I haven't measured the memory/disk impact yet. Could be that I'm wasting a lot of space
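For reference, the walk I'm using looks roughly like this (a sketch; the ::b64 marker vector is there because a bare base64 string would be indistinguishable from a real string in the data):

```clojure
(require '[clojure.walk :as walk])
(import '(java.util Base64))

(defn encode-blobs [data]
  (walk/postwalk
   #(if (bytes? %)
      [::b64 (.encodeToString (Base64/getEncoder) %)] ; tag so decoding can find it
      %)
   data))

(defn decode-blobs [data]
  (walk/postwalk
   #(if (and (vector? %) (= ::b64 (first %)))
      (.decode (Base64/getDecoder) ^String (second %))
      %)
   data))
```

(Caveat: this breaks if the real data ever contains a two-element vector starting with ::b64.)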
oh, it's surprising to me that this would be faster than nippy
Yeah. Presumably the overhead of serializing the rest of the data structure dominates with nippy
as a tangent, you can look at https://github.com/Datomic/fressian - I haven't used it but they say it's the fastest option among (edn, transit, fressian)
I quickly played with that and it's not really what I would like to use - e.g. when you deserialize a vector you get ArrayList
back.
See https://github.com/jumarko/clojure-experiments/pull/19
interesting, I didn't know about that
weird that it gives you arraylist
Is there a way to serialize (and deserialize) a bunch of clojure data while leveraging the structural sharing they enjoy in the live process?
I guess something like
(let [a (range 1000)
b (conj a 1001)]
(serialize [a b]))
where (serialize [a b])
would take advantage of the structural sharing
no, it doesn't:
(let [a (range 1000)
b (conj a 1001)]
(println "a" (count (nippy/freeze a)))
(println "b" (count (nippy/freeze b)))
(println "[a b]" (count (nippy/freeze [a b]))))
prints :
a 2878
b 2881
[a b] 5756
the count there is the size in bytes
I'm not sure it is possible, but being able to do something like a heap dump but with deserialization would be pretty useful I think
Another terminology question… Is anyone aware of a more established term than the "Epochal Time Model" or "succession model" for the general pattern of having a pointer (identity) through which change flows and which dereferences to stable values? It's a pattern which you see in many places in computing; but is there a more widely established term that describes the concept, rooted more in a comp-sci or software-engineering discipline rather than A. N. Whitehead's metaphysics?
I see in the HOPL paper Rich cites standard ML as the inspiration for this
but wonder if it predates that
non-atomic updates also break referential transparency, which is bad! https://en.wikipedia.org/wiki/Referential_transparency
Can't avoid philosophy once you dig in enough 🙂 https://plato.stanford.edu/entries/time/#MoviSpotTheo
I also don't remember whether it addresses the epochal time model directly, but Data and Reality is a great book for addressing these types of questions at the boundary of both philosophy and software engineering.
Thanks for the responses… do atomic references not also imply extra implementation details, things like compare-and-swap? The Whitehead reference to Principia Mathematica is interesting; though I think I'll leave reading that to Kurt Gödel - rumour is it's not much of a page turner, and Kurt used his copy as the touch paper to burn down all of mathematics 🔥 🙂 The Data and Reality book recommendation looks great; I might have to order a copy. These days I do find myself very much drawn to exploring the boundary of philosophy and engineering too.