This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2020-05-08
Channels
- # announcements (5)
- # babashka (46)
- # beginners (206)
- # boot (1)
- # bristol-clojurians (1)
- # calva (9)
- # chlorine-clover (27)
- # cider (1)
- # clara (10)
- # clj-kondo (105)
- # cljsrn (2)
- # clojars (1)
- # clojure (104)
- # clojure-europe (6)
- # clojure-nl (2)
- # clojure-uk (18)
- # clojurescript (44)
- # conjure (10)
- # core-async (34)
- # cursive (28)
- # data-science (6)
- # datomic (14)
- # emacs (44)
- # events (1)
- # figwheel-main (1)
- # fulcro (13)
- # graphql (9)
- # helix (12)
- # kaocha (2)
- # meander (4)
- # off-topic (2)
- # pathom (1)
- # quil (1)
- # re-frame (21)
- # shadow-cljs (49)
- # spacemacs (6)
- # xtdb (8)
Issue resolved! thanks to the #cider crew. It was a cider-nrepl mismatch that I had.
Quick question. Is transit (format) suitable for persisting to disk or is it strictly for conveying values between applications? I seem to recall in the early days RH said some representations may change, so don’t persist to disk, but presumably it’s out of alpha now and it’s safe to persist to disk in transit format?
it's as safe to persist to disk as any other format. it has not changed since it was first released. the caveat on the spec is - "readers and writers are expected to use the same version of Transit and you are responsible for migrating/transforming/re-storing that data when and if the transit format changes".
version 0.8 is the only version of the transit format
@alexmiller Per the original warning, it sounded like Transit was still subject to breaking changes (unlike, say, EDN). Is that no longer the case?
It was released 6 years ago and no reason has been found to change it. I don’t know how else to say it’s stable.
There are no plans for changes, breaking or otherwise
If someone I trust and who controls something tells me, “This is subject to change,” I believe them. It doesn’t matter the time horizon of the change.
promising backwards compatibility for a wire format would potentially compromise the core goals of a wire format (speed). even if it hasn't happened in practice
so far, nothing. but if there was an advancement in information theory that allowed transit to become dramatically more efficient but would break backwards compatibility I would want them to take it
Like, you’re a) in imaginary land, and b) saying that GZIP should not be a finalized spec
gzip includes a header. if you want transit that's written to disk and that you're guaranteed to be able to decode one day, then write the version of transit you used along with your file
that would make transit more chatty and more complicated to fulfill a use-case that is explicitly not in its design goals
I dunno what you think “More chatty” means. There’s zero more back and forth for including version metadata.
The best way to say, “it’s stable” is to say, “It’s finalized. No more breaking changes.”
when has anyone ever said software is final? that is not a thing.
Apologies if I seem confrontational. I’m just trying to reconcile what I’m hearing. I’m gonna take a stab and say: There were once vague ideas for improvements which would require breaking changes. Rich hasn’t let those ideas go, but also has no plans to work on Transit for the foreseeable future. Is that accurate? If so, I’m completely cool with that state of things. I will just continue preferring EDN for disk! 😄
no, I'd say when it was released it was unknown whether breaking changes might be required. it's been 6 years and we haven't found any.
there are no additional plans that I'm aware of
Gotcha. Why the delay to go 1.0? Is there a hesitance, or is it more a matter of priorities?
it's completely unimportant?
are you saying that seeing "1.0" is a bigger signal than "it has not changed in 6 years" ?
I'm a little perplexed. Rich has specifically called out the importance of going 1.0 in the past.
I think I'm going to work on making stuff instead
I always see < 1.0 from cognitect as, "subject to change." Y'all have always been very explicit with that expectation. In numerous ways.
pretty sure that is not what we intend and not something we have ever said
all software is subject to change
1.0 is not precious
Let me be clear: I’m not saying, “never change.” I’m saying, “Agree not to make breaking changes.”
that has nothing to do with version number
that's something you should strive for whether it is 0.1 or 6.0
Rich has talked about using "alpha" as a marker for "we are still making breaking changes"
we've recently updated lots of libs to 1.0+ because other people assign some magical meaning to 1.0, but that's not our perspective
that's a semver thing
that's exactly what I just said
I’m asking for exactly that. The 1.0 isn’t the most important bit. The promises are. And all I’ve heard are, “Look around you! It’s fine.”
I don't understand what you want
well, I'm not promising that and it's impossible for anyone to promise that
and if they do promise it, you should probably not believe them
but as a baseline, that is always our default mode of operation
look, the spec says what it says and that's where we are with the caveats provided. it's up to you to make a judgement to evaluate how you feel about that if you're going to build on top of it.
when I evaluate a library for stability, I put some, but not much, value on the actual version number. I look at many factors - who made it, are they still using it, how was it developed, could I fork it and take on maintenance if I felt unhappy about direction, whatever
you know who made transit, you know how we work, you know 6 years of history, so make your own judgement
@alexmiller thank you!
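For what it's worth, the write-the-version-alongside-the-file idea suggested above could look something like this. A minimal sketch, assuming the com.cognitect/transit-clj library is on the classpath; `write-versioned` and `read-versioned` are hypothetical names, and the wrapping map is a convention invented here, not part of transit:

```clojure
(require '[cognitect.transit :as transit])
(import '(java.io ByteArrayOutputStream ByteArrayInputStream))

(defn write-versioned
  "Write value as transit JSON, wrapped in a map that records the
  transit format version used (hypothetical convention)."
  [value]
  (let [out (ByteArrayOutputStream.)
        w   (transit/writer out :json)]
    (transit/write w {:transit-version "0.8" :payload value})
    (.toByteArray out)))

(defn read-versioned
  "Read bytes produced by write-versioned; callers can inspect
  :transit-version before deciding how to handle :payload."
  [^bytes bs]
  (transit/read (transit/reader (ByteArrayInputStream. bs) :json)))
```

`(read-versioned (write-versioned {:a 1}))` yields `{:transit-version "0.8", :payload {:a 1}}`, so a future reader could branch on the recorded version if the format ever changed.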
I've read https://www.grammarly.com/blog/engineering/building-etl-pipelines-with-clojure-and-transducers/ from Grammarly on ETL with transducers. We have a very similar setup but, for us, it's kind of broken. We need to collect errors that occur during the process and return them. The type of error(s) that may have occurred will dictate varying actions from us. The flow goes something like this:
1) Return an IReduceInit that pulls data from a paginated API
2) if, at any point, that API returned an error, the error map should not be passed down the transform pipeline
3) dissect and transform items
4) any transformed items that are invalid should be collected and not included in the pipeline results
5) store the items
6) analyze errors
I'm considering something like what's in the snippet. Transducers make handling the "happy path" really nice. Getting the things that are malformed/errors out is less clear. I couldn't find any info online on approaches to this sort of process. Curious if folks doing ETL have done something similar.
I'd be tempted to use a compound data structure with {:data [...] :errors [...]}
and a transform over the processing functions, something like (defn lift-data [f] (fn [s] (update s :data f)))
and (defn lift-error [f] (fn [s] (update s :errors f)))
so the transformers would be created using lift-data
so they only act on the :data
key, and the invalid items could be processed using functions made via lift-error
I think I'm following you. This would mean having all transformers down the pipeline take in and spit out that data structure.
right, and using those generator functions clarifies which part of the structure you transform, without risking breaking the structure
for extraction you can just compose the function with :errors or :data; you only need the lift function for in-domain transforms
and this is just one way to do it - I think there's an advantage though because it makes the function definitions clearer if the lifting is a middleware wrapping them
line 20 could use (lift-data constantly) with valid, and (lift-error concat) with invalid
but actually that's not very useful
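Putting the pieces above together, a minimal runnable sketch of the compound-structure idea (the pipeline contents below are made up for illustration):

```clojure
;; Compound structure: every step takes and returns {:data [...] :errors [...]}.
(defn lift-data [f]
  (fn [s] (update s :data f)))

(defn lift-error [f]
  (fn [s] (update s :errors f)))

;; Hypothetical pipeline: square the valid items, tag each error with a stage.
(def process
  (comp (lift-data (partial mapv #(* % %)))
        (lift-error (partial mapv #(assoc % :stage :transform)))))

(process {:data [1 2 3] :errors [{:item "7"}]})
;; => {:data [1 4 9], :errors [{:item "7", :stage :transform}]}
```

Because each lifted function only touches its own key, composing them in either order is safe, and the overall shape of the structure can't be broken by any single step.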
Pulling out all those transducers into functions really cleans it all up:
(def to-process
(eduction
(comp
(page-data-xf :cognitect.anomalies/category)
(page-items-at-xf :items)
(validate-items-xf int?))
[{:items [1 2 3]}
{:items [4 5 "7" 6]}
{:cognitect.anomalies/category :cognitect.anomalies/incorrect}]))
that looks nice, yeah
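The helper transducers aren't shown in the log; one guess at how they might be defined (the names come from the snippet above, the bodies are assumptions):

```clojure
(defn page-data-xf
  "Drop pages that carry an error under error-key (assumed behavior)."
  [error-key]
  (remove error-key))

(defn page-items-at-xf
  "Flatten each page into the items stored at items-key (assumed behavior)."
  [items-key]
  (mapcat items-key))

(defn validate-items-xf
  "Keep only items satisfying pred (assumed behavior)."
  [pred]
  (filter pred))

(into []
      (comp (page-data-xf :cognitect.anomalies/category)
            (page-items-at-xf :items)
            (validate-items-xf int?))
      [{:items [1 2 3]}
       {:items [4 5 "7" 6]}
       {:cognitect.anomalies/category :cognitect.anomalies/incorrect}])
;; => [1 2 3 4 5 6]
```

In the real pipeline described above, the dropped pages and invalid items would presumably be routed into the error side of the structure rather than silently discarded as they are in this sketch.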
I'm trying to convert my macro to the following expansion:
(re-frame.core/reg-event-db
  :some-name
  (fn [db [foo bar]]
    (assoc db :foo foo :bar bar)))
and I have this so far
(defmacro db-event [name params & body]
  `(re-frame.core/reg-event-db
     ~name
     (fn [db ~params]
       ~@body)))
But this doesn't really work when I try with
(db-event :foo [foo bar] (assoc db :foo foo :bar bar))
I get the following:
------ WARNING - :undeclared-var -----------------------------------------------
Resource: :1:17
Use of undeclared Var vendo.macros/foo
--------------------------------------------------------------------------------
------ WARNING - :undeclared-var -----------------------------------------------
Resource: :1:21
Use of undeclared Var vendo.macros/bar
--------------------------------------------------------------------------------
------ WARNING - :undeclared-var -----------------------------------------------
Resource: :1:33
Use of undeclared Var vendo.macros/db
--------------------------------------------------------------------------------
------ WARNING - :undeclared-var -----------------------------------------------
Resource: :1:41
Use of undeclared Var vendo.macros/foo
--------------------------------------------------------------------------------
------ WARNING - :undeclared-var -----------------------------------------------
Resource: :1:50
Use of undeclared Var vendo.macros/bar
--------------------------------------------------------------------------------
(re-frame.core/reg-event-db {:foo nil, :bar nil} (cljs.core/fn [vendo.macros/my-db nil]))
How do I fix this?
I suggest looking at the output of macroexpand 🙂 You can do this: (macroexpand '(db-event :foo [foo bar] (assoc db :foo foo :bar bar)))
Also, if you want db as the actual name of a locally bound symbol inside the macro expansion (i.e., as the literal name of a fn parameter), you need ~'db inside the defmacro form.
By using the tilde in front of the params, you’re telling it to evaluate them, but what is a foo? What is a bar? The program must know.
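Putting the advice above together, a sketch of the corrected macro (name is renamed to event-name here to avoid shadowing clojure.core/name; otherwise this follows the ~'db suggestion):

```clojure
(defmacro db-event [event-name params & body]
  `(re-frame.core/reg-event-db
     ~event-name
     ;; ~'db emits a literal, unqualified db symbol in the expansion,
     ;; so the handler body can refer to it without namespace qualification.
     (fn [~'db ~params]
       ~@body)))

;; (db-event :some-name [foo bar] (assoc db :foo foo :bar bar))
;; now expands (roughly) to:
;; (re-frame.core/reg-event-db :some-name
;;   (fn [db [foo bar]] (assoc db :foo foo :bar bar)))
```

One caveat: re-frame passes the whole event vector as the handler's second argument, so in practice the destructuring form may need to account for the leading event id.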
How do I unroll a map as parameters?
Suppose I have the following:
(let [foo {:bar1 1 :bar2 2}]
  (assoc {} ...))
What can I replace the ... with so that I get (assoc {} :bar1 1 :bar2 2)?
@U010Z4Y1J4Q Questions like this are better suited to #beginners where folks have opted in to helping with this sort of stuff.
I've invited you back in there (again).
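For the record, one way to get the effect asked about above (a sketch; apply with mapcat flattens the map into alternating key/value arguments):

```clojure
(let [foo {:bar1 1 :bar2 2}]
  ;; each map entry is a [k v] pair; mapcat identity flattens them
  (apply assoc {} (mapcat identity foo)))
;; => {:bar1 1, :bar2 2}

;; though when the goal is just to fold a whole map in, merge is simpler:
(let [foo {:bar1 1 :bar2 2}]
  (merge {} foo))
;; => {:bar1 1, :bar2 2}
```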
Hello, I am reading about MongoDB, and I saw that they provide so many commands that we can run with runCommand(....)
is there a reason to use the Java driver or the Clojure wrappers like Monger, instead of runCommand()?
I get so so many Grammarly ads... haha.... never knew they used Clojure! Now I'll feel much better about seeing the ads. I turned off personalized ads on YouTube so that's why they come through so much.
Hi guys, I have some questions regarding startup times. There are a lot of terms tossed around. AOT - ahead-of-time compilation: I see this flag in lein, and I heard it's part of JDK 11. Another term that comes up is GraalVM and native-image. So in the context of startup times and performance, what are the differences? I presume GraalVM and native-image produce a binary that is statically linked (with a different VM), whereas AOT in Java produces a jar and requires a runtime env? So what are the implications for startup time?
In simplest terms: native image is just that - everything is pre-compiled for you. In terms of starting the JVM with your application jar, you have the overhead of the JRE starting, your application classes being loaded, the Clojure runtime, etc.
Oh ok, yeah, I skipped the JRE impact. Thanks, I had a hard time understanding why GraalVM is so great (and then you actually read about the problems and limitations) if the JDK supports AOT, but that makes sense.
@roguas "Wheras aot in java produces a jar" -- just to clarify: AOT in Clojure is not actually needed. You can build a JAR or "uberjar" without using AOT at all. You can run such a jar like this
java -cp path/to/the.jar clojure.main -m your.entry.namespace
If you AOT compile your Clojure application (not library) starting from your.entry.namespace, and you have (:gen-class) in that namespace and a public -main function, and the JAR file also includes a "manifest" file that names your.entry.namespace as the Main-Class, then you can run such a jar like this:
java -jar path/to/the.jar
Just want to point out that there's more to the process of producing an uberjar that can be run directly via java -jar than just AOT compilation.
also on a Linux system you can just treat a jar like that as an executable (no need to explicitly invoke java)
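The entry namespace described above might look like this (a minimal sketch; your.entry.namespace and the greeting are placeholders):

```clojure
(ns your.entry.namespace
  ;; (:gen-class) tells AOT compilation to emit a named Java class with a
  ;; static main method, which the manifest's Main-Class entry can point at.
  (:gen-class))

(defn -main
  "Entry point invoked by `java -jar path/to/the.jar`."
  [& args]
  (println "Hello from" (or (first args) "the uberjar")))
```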
It's probably also worth noting that even with such a jar (AOT compiled, with a designated main class etc), you can still run a Clojure REPL:
java -cp path/to/the.jar clojure.main
or even run some other -main function in another namespace
java -cp path/to/the.jar clojure.main -m another.namespace
That will run the (compiled) clojure.main/-main function, which will then call another.namespace/-main
True, yes, you can just double-click a JAR file on some systems too.
Ok, so there is a measurable startup penalty when using the JDK (AOT or JIT), as opposed to GraalVM?
It depends on a lot of things. If you're trying to run something as if it were a command line script, you'll notice the startup time of the JVM and, especially, the compile-on-demand time of Clojure if it's anything but a fairly simple program. If you AOT compile the Clojure code for the uberjar, that compile-on-demand time will go away but you'll still have the startup time of the JVM and the time it takes to load all the compiled classes from the JAR file. If you can get your code compiled and acceptable to Graal to produce a native image, that startup time will be very small -- but you lose a number of dynamic features of Clojure doing that.
I don't find startup time to be much of a problem in practice because I run my REPL for days (or weeks) so I only pay that startup time overhead once a week or so -- and our production uber JARs are AOT compiled and they're long-lived server processes anyway so it doesn't matter if they take "a few seconds" to startup.
What are the features that go missing in the case of Graal? Is the usability of Clojure in that case questionable?
It's static: you cannot dynamically load any classes, meaning you cannot define new functions, vars, etc. Think of it as an immutable version of your program. It has extra restrictions in terms of compilation (you need to type-hint your code), and you're restricted in terms of library choice. It really depends on what you're after - if it's a simple Clojure interpreter with fast startup time, check out #babashka
The codebase already exists; I'm just making the rounds trying to optimize it / retrofit it into FaaS.
It's sufficiently large; I managed to compile it (with Graal) but had to downgrade to 1.8.
Nice - which FaaS, if you don't mind me asking? I have a couple of AWS Lambdas here and there using just plain old Clojure on the JVM and so far it's working fine. Admittedly, they are tiny so startup time is not an issue.
Yeah, Lambda. Still, this is something that is latency-sensitive. Not HPC latency, but not 2 seconds either.
And there's a #graalvm channel if you need to dive deep on that tech...