This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2023-03-03
Channels
- # announcements (15)
- # babashka (143)
- # babashka-sci-dev (2)
- # beginners (35)
- # biff (11)
- # calva (5)
- # cider (8)
- # clerk (4)
- # clj-kondo (58)
- # cljdoc (6)
- # clojure (88)
- # clojure-denmark (1)
- # clojure-europe (77)
- # clojure-nl (1)
- # clojure-norway (16)
- # clojure-uk (1)
- # clojurescript (19)
- # clr (32)
- # code-reviews (158)
- # datahike (5)
- # datomic (10)
- # deps-new (3)
- # fulcro (12)
- # graalvm (20)
- # honeysql (23)
- # hyperfiddle (32)
- # kaocha (17)
- # membrane (6)
- # observability (1)
- # other-languages (2)
- # pathom (5)
- # practicalli (12)
- # reagent (4)
- # reitit (7)
- # releases (1)
- # sci (25)
- # shadow-cljs (52)
Hey guys, I have a question about spec. Is there a convenient way to work with maps where the keys are strings rather than keywords? (Why: it's a CLJS project that exports fns for consumption from JS. I'd much rather not convert JS string keys to qualified keywords and then back as the library does a lot of calculation and I want to keep the overhead low and surprises to the minimum.) I don't really see anything like that mentioned on https://clojure.org/guides/spec. Just keywords. Which is fair enough, that's the Clojure way, but what do I do with incoming JS objects then?
I had something like this for the standard keyword keys:
(s/def ::product-name string?)
(s/def ::sales-price number?)
(s/def ::cost-price number?)
(s/def ::product (s/keys :req [::product-name ::sales-price ::cost-price]))
Is there a way to make it work somehow for the same keys in string format? And in case it's not workable for string keys and I really do need to convert these into keywords, how do I do that? I know js->clj has a :keywordize-keys option but I'm not sure what to pass in so that I get a fully qualified keyword?
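In case it helps later readers: spec can describe string-keyed maps without s/keys, e.g. via s/map-of plus explicit per-key predicates. A rough sketch (the spec name and key strings here are illustrative, not from the original project):

```clojure
(require '[clojure.spec.alpha :as s])

;; Sketch: spec a string-keyed map directly, no keyword conversion.
;; ::string-product and the key names are illustrative.
(s/def ::string-product
  (s/and (s/map-of string? any?)
         #(string? (get % "productName"))
         #(number? (get % "salesPrice"))
         #(number? (get % "costPrice"))))

(s/valid? ::string-product
          {"productName" "Widget" "salesPrice" 10 "costPrice" 4})
;; => true
```

This trades the per-key reuse of s/keys for locality, but avoids converting keys on every call.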
I believe spec only works with fully qualified keywords (right?), which in case of loading JS data is a tad confusing.
Actually there's no way to get fully qualified keywords from js->clj; I can see that :keywordize-keys is only a boolean and there's no way to pass in a fn.
OK never mind I'm trying with (fully qualified) keywords then:
(defn camel->lisp
[string]
(-> string
(clojure.string/replace #"(.)([A-Z][a-z]+)" "$1-$2")
(clojure.string/replace #"([a-z0-9])([A-Z])" "$1-$2")
(clojure.string/lower-case)))
(defn convert-key [key]
(keyword (str "bizmentor.data/" (camel->lisp key))))
(declare ->load-js)
(defn handle-value [value]
  (cond
    (map? value) (->load-js value)
    ;; js->clj turns JS arrays into vectors, which are not seq?,
    ;; so sequential? is the right check here
    (sequential? value) (map handle-value value)
    :else value))
(defn ->load-js
  "Loads a JS object as a CLJS map with namespaced keyword keys."
  [js-object]
  (into {} (map (fn [[key value]]
                  [(convert-key key) (handle-value value)])
                (js->clj js-object))))
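As a standalone sanity check of the conversion, the camel-case helper can be exercised on plain strings with no JS interop (repeated here so the snippet is self-contained):

```clojure
(require '[clojure.string :as str])

;; Same camelCase -> lisp-case conversion as above, repeated so
;; this snippet runs on its own.
(defn camel->lisp [s]
  (-> s
      (str/replace #"(.)([A-Z][a-z]+)" "$1-$2")
      (str/replace #"([a-z0-9])([A-Z])" "$1-$2")
      (str/lower-case)))

(camel->lisp "salesPrice")                           ;; => "sales-price"
(keyword "bizmentor.data" (camel->lisp "costPrice"))
;; => :bizmentor.data/cost-price
```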
Ages ago, I wrote about adding New Relic traceability to Clojure functions: https://corfield.org/blog/2013/05/01/instrumenting-clojure-for-new-relic-monitoring/ For reasons, I've had to revisit this today so I decided to wrap this up in a macro because boilerplate. I'd appreciate feedback on this monstrosity (in a thread) and suggestions for making this both more idiomatic and more generic and perhaps getting to a releasable library version... 🧵
(defmacro defn-trace
"Like defn- but creates a function that is traceable via New Relic.
Also creates an interface and a type to attach annotations to."
[sym & fn-tail]
(let [s (name sym)
s* (str s "-impl*")
t (str "_" (str/replace s "-" "_"))
inr (str "INR" t)
nr (str "NR" t)]
`(do
(defn- ~(symbol s*) ~@fn-tail)
(definterface ~(symbol inr) (~'_nr [~'args]))
(deftype ~(symbol nr) [] ~(symbol inr)
(~(with-meta '_nr {`com.newrelic.api.agent.Trace {:dispatcher true}})
[~'_ ~'args]
(apply ~(symbol s*) ~'args)))
(defn- ~sym [& ~'args] (. (new ~(symbol nr)) ~'_nr ~'args)))))
So the idea is you use defn-trace instead of defn- (it's private by default for ... reasons), and you get a function that can be called as normal but, behind the scenes, is annotated to create a new "transaction" in New Relic so every call is monitored and reported, allowing you to see database and external system access within that call (as if it were an HTTP request handled by a "standard" web server that New Relic supports).
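For readers without the New Relic agent on the classpath, here's a stripped-down analogue of the same shape (private impl fn plus a generated wrapper), with the @Trace annotation swapped for a println. Purely illustrative, not the real macro; defn-logged is a made-up name:

```clojure
;; Illustrative only: same defn-plus-wrapper shape as defn-trace,
;; with the New Relic annotation replaced by a println so it runs anywhere.
(defmacro defn-logged
  [sym & fn-tail]
  (let [impl (symbol (str (name sym) "-impl*"))]
    `(do
       (defn- ~impl ~@fn-tail)
       (defn ~sym [& args#]
         (println "calling" '~sym)  ; stand-in for the @Trace annotation
         (apply ~impl args#)))))

(defn-logged add2 [a b] (+ a b))
(add2 1 2)  ;; prints "calling add2", returns 3
```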
This produces (by default) transactions called OtherTransaction/Custom/ws. for a function ws.frankieee-cli.main/process-segments which isn't bad from an observability p.o.v. (the _nr is sort of irrelevant).
But this means you can have background processes with the New Relic Java agent enabled and get full process reporting under its APM.
I never really shared this anywhere, but I'll drop it here since I think it's relevant :) https://github.com/RutledgePaulV/newrelic-clj
Yeah, looks relevant. :rolling_on_the_floor_laughing: Thanks. Maybe I can just get rid of my code altogether.
Will do, thanks for giving it a try :)
I cut a Clojars release. Please note I also moved all the namespaces under a io.github.rutledgepaulv package since I felt uncomfortable sitting on the top level newrelic-clj. https://clojars.org/io.github.rutledgepaulv/newrelic-clj.
Regarding our side conversation, I have not added any hooks for customizing the transaction names based on response codes. I'd suggest writing your own wrap-transaction-naming middleware function to replace mine and selectively renaming the current transaction in there after inspecting the response. Based on my understanding I don't think adding a hook to the library function would save you a meaningful amount of code. If you disagree please share an example of how it would help.
Cheers!
Thanks, Paul! I'll switch us over at work on Monday and let you know how that goes.
One thing I noticed with the new release is the generated pom.xml file doesn't have enough information in it for http://cljdoc.org to import it: https://cljdoc.org/builds/66432
You can add SCM data to the write-pom call in build.clj to solve this.
:scm {:tag (str "v" version) :url "
should do it, I think.
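Hedging: I haven't run this against that repo, but a write-pom call in build.clj with SCM data might look roughly like this (the repo URL is a placeholder, since the original message truncated it, and class-dir/lib/version/basis are assumed to be defined elsewhere in build.clj):

```clojure
;; Sketch of a build.clj fragment -- :url is a placeholder.
(b/write-pom {:class-dir class-dir
              :lib       lib
              :version   version
              :basis     basis
              :scm       {:tag (str "v" version)
                          :url "https://github.com/your-org/your-lib"}})
```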
@U04V70XH6 curious how that integration goes! Do you guys like NewRelic in general?
Paul's library is replacing integration we've already written - so it means less code for us to maintain, but we're already doing all of this.
docs appear now: https://cljdoc.org/d/io.github.rutledgepaulv/newrelic-clj/1.0.1
I haven't yet picked up your web transaction middleware. I think you're right about the transaction-naming middleware -- ours has several quirks that are specific to our environment and we might as well just use it instead of yours. I've switched our ignore transaction and ignore apdex functions over to yours and a couple of our transaction naming calls. Once I've added your web transaction middleware, I'll update our naming middleware to use your set transaction name function instead so I can retire as much of our code as possible -- which will basically be just our transaction naming middleware and our telemetry/metric stuff (which wraps NR's telemetry lib, not the agent).
I briefly thought it could make sense for you to ignore the transactions related to those crawler requests (using nr/omit-transaction) but it's probably still useful telemetry to know how much and what type of activity you're seeing from illegitimate users 🙂.
and of course you can still leverage io.github.rutledgepaulv.newrelic-clj.internals/extract-path-template
to extract the routing params.. it makes some heroic attempts to recover routing params even if you use compojure's "context" which ruins a lot of the data you'd otherwise find on the request if you didn't use it 🙂
Yeah, I may have another run at that another day. I spent today trying to switch from our own custom route-wrapping middleware back to Compojure's built-in wrap-routes ... and it did not go well 😞
Another case from clojure.tools.analyzer.jvm. Let's consider the following example. (Class/forName "[D") used in extend-type works, but looks like it's invalid usage. Is it valid or not? If not, how do I extend such classes?
(defprotocol DoubleArrayProto
(dot-product [a1 a2]))
(extend-type (Class/forName "[D")
DoubleArrayProto
(dot-product [a1 a2] (reduce + (map * a1 a2))))
(dot-product (double-array [1 2 3]) (double-array [-4 5 -6]))
;; => -12.0
(require 'clojure.tools.analyzer.jvm)
(clojure.tools.analyzer.jvm/analyze '(extend-type (Class/forName "[D")
DoubleArrayProto
(dot-product [a1 a2] (reduce + (map * a1 a2)))))
;; Error
1. Unhandled clojure.lang.ExceptionInfo
Class not found: (Class/forName "[D")
{:class (Class/forName "[D"),
:ast
{:name a1__#0,
:op :binding,
:env
{:context :ctx/expr,
:locals {},
:ns playground,
:once false,
:file "...playground.clj",
:column 38,
:line 25},
:o-tag (Class/forName "[D"),
:variadic? false,
:arg-id 0,
:form a1,
:tag (Class/forName "[D"),
:local :arg},
:file "...playground.clj",
:column 38,
:line 25}
Oh, this is valid!
(extend (Class/forName "[D")
DoubleArrayProto
{:dot-product (fn [a1 a2] (reduce + (map * a1 a2)))})
This one came up before. See #tools-analyzer. There is a JIRA issue, but it's up for debate if this pattern is supported or not. Right now it happens to work just by accident.
Use extend to solve it for now
Also see this discussion: https://clojurians.slack.com/archives/CHY97NXE2/p1677266201633939
It was raised just 7 days ago! Thanks @U04V15CAJ, extend is enough indeed.
When should I be including a pom.xml in my git repository? I need to build one for clojars, but that can be done through the compile/deploy step. I see that #malli has one, #honeysql has a "template/pom.xml", and most leiningen projects (from a brief look) don't have one.
deployment of an artifact to a Maven repo needs a pom. whether that's in your repo or not does not matter for that purpose (can be generated during deployment etc)
using a git dep of that repo from clj needs either a deps.edn or secondarily a pom.xml
(deps.edn is checked first)
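So a project consumed as a git dep just needs something like this in the consuming project's deps.edn (the coordinate, tag, and sha here are made up for illustration):

```clojure
;; deps.edn of a consuming project -- coordinate, tag, and sha are illustrative
{:deps {io.github.someuser/somelib
        {:git/tag "v1.0.0" :git/sha "abc1234"}}}
```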
cool, thanks
One advantage of storing pom.xml is that GitHub's dependency scanning will work as if it's a Java project, this might or might not be helpful
ah yes, it will probably report "used by" stuff when you check in a pom.xml, but they should really support reading deps.edn files ;)
There was an effort to natively support Clojure in dependency scanning but it was abandoned, so I wouldn't count on anything related happening anytime soon See here: #C02PZRZPC4R
@UEENNMX0T My OSS projects have a "template" pom.xml that is used to generate the final version that is included in the library JAR. This is because a) I want more information in the POM than tools.build generates by default and b) I don't want an invalid/outdated POM at the top of the repo because it confuses other tooling (that scans GH repos for pom.xml files).
I plan to update deps-new to follow the same path soon -- I have an open issue for that.
Cool, that's a nice way to do it. I read your build.clj, it helped me a lot
When switching to tools.build I think I did what @U04V70XH6 does: I only specify what tools.build currently cannot fill in (or maybe could not; it might be able to do more now). I've been calling these anemic poms. I'm not sure where I got that term from. https://github.com/clj-commons/rewrite-clj/blob/main/pom.xml
Triple checking that this logic is lazy and i have some grasp of how this will actually work under the hood.
(->> input
(apply concat)
(partition-all 5)
(run! f))
If input is a really long list, i can expect that it's not going to first process all the input in the first apply concat stage. Instead it's probably going to grab about 32 elements at a time because of clojure's internal chunking, then apply concat until it can create a partition of 5. So if everything in input was a side effect, i would expect like 64 side effects at a time, the results of which would get processed by run! in partitions of 5 until they were done, so like 60/5. Then another 64, etc.
apply needs to know the exact number of elements you pass in the collection. That will "turn off" laziness.
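The chunk-at-a-time behaviour described above is easy to observe with a side-effecting map over a chunked seq (ranges chunk in groups of 32):

```clojure
;; Count how many elements get realized when we only consume 5.
(def realized (atom 0))

(def xs (map (fn [x] (swap! realized inc) x) (range 1000)))

(doall (take 5 xs))
@realized
;; => 32 -- a whole chunk is realized even though only 5 were asked for
```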
thanks del
Sorry, I confused myself. The question is the answer itself.
---
(def types
(map type->inner
(json/read-value (io/resource "foo.json"))))
How do I choose whether types is built at compile time (into the jar) or at runtime? It doesn't matter to me whether it reads from resources or regular files, if that changes the solution.
context:
The final solution is more complex and there are multiple defs referring to the same data in different ways.
These are static files, so they can be injected into the jar; there's no need to read them from scratch each time the app starts. So it is like a static DB built into the jar as EDN data. But it's all about cold-start time: I want to make it as short as possible.
Why not a real DB? Because it is a hobby project and I would have to pay for a DB running 24h. I use Google Cloud Run, so the app runs only when there is a request, which limits the cost of running the app to one coffee per month 🙂 But when I do make a request I want a fast response.
Correct me if I'm wrong, but I understood that jars are just archives, which would include your resources directory.
See here: https://clojure.org/guides/tools_build#_source_library_jar_build and similarly the section about uberjars.
Both build with the source files and the resources directory into the jar. In the case of the uberjar the external libs are also compiled and put into the archive. Since you're writing an application you would use the uberjar option.
You basically end up with a bunch of class files and whatever static files (or say compiled JS/CSS etc.) in your uberjar.
---
Possible alternatives that I'm only aware of (ask someone with more experience or try things out!)
If you really want to embed the data into your application (so you don't have to read them from disk) you would probably have to just use a def somewhere, maybe in a separate namespace.
I also assume that graalvm supports embedding files into the binary. Since you are worried about cold starts, graalvm might be a useful option. There also might be options for jar building that speed up cold starts as well as JIT caching, AoT (read here: https://clojure.org/reference/compilation) etc.
I'm personally not aware of other options.
> Correct me if I’m wrong, but I understood that jars are just archives, which would include your resources directory. correct
the whole point is to start app as fast as possible. So reading data and make all kind of transformations and building multiple trees referencing to data takes time.
Aha! then why not just call a function in your build step that does all of this in advance?
I was hoping I could just do (def foo (read-during-compilation-and-process "bar.json")) somehow
I misunderstood your question entirely 🙂 I thought you are worried about actually fetching the files
if you aot-compile source code into jar - types will be constructed at compilation time
I think it's actually very easy to get confused about this! I'm used to languages either being dynamic like JS/Node, PHP (even stateless), or fully static/compiled like Go. Clojure is a REPL, so it compiles expression by expression, which is kind of both dynamic and static if that makes sense. I think in many other languages you have to do something special in order to read and process a file at compile time. Like using a code generator, or embedding them explicitly in the binary etc.
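If you want to guarantee the read happens at compile time rather than relying on AOT of a top-level def, a macro is one way to do it. A sketch (embed-edn is a made-up helper name, and the path in the usage comment is illustrative):

```clojure
(require '[clojure.edn :as edn])

;; Sketch: macros run at compile time, so the slurp + parse happens
;; during compilation; only the resulting data literal is compiled in.
(defmacro embed-edn [path]
  (list 'quote (edn/read-string (slurp path))))

;; usage (path is illustrative):
;; (def types (embed-edn "resources/types.edn"))
```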
> I think it’s actually very easy to get confused about this! Especially when you are tired after whole week and to rest you do hobby project… :)
Given this lazy list
((1 2) (3 4) (5 6) ...)
what clojure code would produce a lazy list whose inner lists contain the same elements as above but regrouped to a given size? e.g. if the size was 3 it would be:
((1 2 3) (4 5 6) (7 8 9))
I feel like i have to reach for loop/recur, but maybe there are some high-level functions i should consider?
user=> (def c (partition 2 (range)))
#'user/c
user=> (take 10 c)
((0 1) (2 3) (4 5) (6 7) (8 9) (10 11) (12 13) (14 15) (16 17) (18 19))
user=> (take 10 (->> c (mapcat identity) (partition 3)))
((0 1 2) (3 4 5) (6 7 8) (9 10 11) (12 13 14) (15 16 17) (18 19 20) (21 22 23) (24 25 26) (27 28 29))
You can do this with transducers
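For reference, a transducer version: cat flattens the inner collections and partition-all regroups them, processing incrementally rather than building an intermediate flat seq (note that partition-all's transducer emits vectors):

```clojure
(def c (partition 2 (range)))

;; cat flattens the pairs; (partition-all 3) regroups by 3.
(take 3 (sequence (comp cat (partition-all 3)) c))
;; => ([0 1 2] [3 4 5] [6 7 8])
```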
Thanks @U2FRKM4TW and @U7RJTCH6J. Those are both good approaches. mapcat calls apply, does apply have to realize all the elements in the list?
I guess I'm confused at the response here then https://clojurians.slack.com/archives/C03S1KBA2/p1677862625065739?thread_ts=1677862625.065739&cid=C03S1KBA2 Laziness in clojure is one of those topics i never feel i fully have a handle on.
actually there was an interesting kind of related post on the sbcl subreddit recently. someone wrote a big complicated function and kept running out of stack space when trying to run it, but as far as they could tell it should be executing in constant stackspace, but it turned out they were calling apply with a large list, and in sbcl apply needs to push all the arguments on to the stack
which means for sbcl there is a pretty definitive answer to (reduce max 0 ...) vs (apply max 0 ...), but clojure passes rest args as a seq, so a pointer to the heap: nothing goes on the stack, nothing is counted
now, apply concat might still not be lazy, but that will depend on concat, not apply
(actually it was on r/lisp and just cross posted to r/sbcl https://www.reddit.com/r/lisp/comments/11gg3pr/sbcl_control_stack_exhausted/)
thanks for the explanation @U0NCTKEV8 🙏
Is there a way to do structural search on clojure code?
For example, find matches for (+ ?x 1) where ?x could be any valid clojure expression. Kibit (http://jonase.github.io/kibit-demo/#56) is the closest thing I can find, but the functionality is not exposed so that I can just search a file for matches.
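Absent a ready-made tool, a tiny matcher over read forms gets surprisingly far. A sketch where ?-prefixed symbols act as wildcards (all names here are made up; a real tool would also handle vectors/maps, variadic patterns, and binding of ?x):

```clojure
(require '[clojure.string :as str])

;; Toy structural matcher: symbols starting with ? match any subform.
(defn match? [pattern form]
  (cond
    (and (symbol? pattern)
         (str/starts-with? (name pattern) "?"))
    true

    (and (sequential? pattern) (sequential? form))
    (and (= (count pattern) (count form))
         (every? true? (map match? pattern form)))

    :else (= pattern form)))

(match? '(+ ?x 1) '(+ (inc a) 1))  ;; => true
(match? '(+ ?x 1) '(- a 1))        ;; => false
```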