This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2020-11-13
Channels
- # aleph (1)
- # announcements (18)
- # babashka (11)
- # beginners (112)
- # business (1)
- # calva (19)
- # cider (8)
- # clj-kondo (63)
- # cljsrn (10)
- # clojure (188)
- # clojure-australia (1)
- # clojure-dev (38)
- # clojure-europe (112)
- # clojure-nl (3)
- # clojure-provo (1)
- # clojure-spec (22)
- # clojure-uk (108)
- # clojurescript (37)
- # cryogen (4)
- # cursive (8)
- # data-science (1)
- # datomic (13)
- # emacs (9)
- # events (1)
- # fulcro (26)
- # funcool (3)
- # graalvm (2)
- # graphql (11)
- # helix (8)
- # jobs (1)
- # jobs-discuss (7)
- # nrepl (3)
- # off-topic (72)
- # pathom (10)
- # pedestal (1)
- # reagent (6)
- # reitit (7)
- # remote-jobs (1)
- # shadow-cljs (28)
- # xtdb (12)
If I have a custom list type that I've implemented with `deftype`, and I want to make it so that I can call `(map)` on it, what do I have to do? Implement `ISeq`, `ISeqable`, or both?
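For what it's worth, a minimal JVM-side sketch (the `PairList` type is made up for illustration): `map` and the other sequence functions only need the argument to be seqable.

```clojure
;; Minimal sketch: on the JVM, `map` only needs the collection to be
;; seqable, i.e. implement clojure.lang.Seqable. ISeq is only needed
;; if the type should itself *be* a seq.
(deftype PairList [a b]
  clojure.lang.Seqable
  (seq [_] (seq [a b])))

(map inc (PairList. 1 2)) ;; => (2 3)
```

(In ClojureScript the corresponding protocol is `ISeqable`.)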
Hi 👋 Bit of a crapshoot, but anyone had success instrumenting async web requests with newrelic and netty server running via clojure? I'm looking to add in newrelic instrumentation on specific endpoints for a netty server run via aleph. I'm following this old guide from sean https://corfield.org/blog/2013/05/01/instrumenting-clojure-for-new-relic-monitoring/, and this lib https://github.com/TheClimateCorporation/clj-newrelic implements the same approach and works great for us when instrumenting code running on clojure services running jetty. Following the newrelic docs on async I get either transactions with incorrect times (as deferred value is returned immediately I assume) or I can see in logs transactions are not created despite adding :dispatcher and :async to true on the method annotation.
@adfarries No, unfortunately. Our Jetty-based services are monitored just fine via New Relic, but our one Netty-based service shows almost nothing useful.
We've used New Relic's "metrics publish" library (I think you can d/l the source from their GitHub repo and build it yourself), and the NR agent to allow us to write plugins that send all sorts of specific metrics to New Relic -- they don't show up in APM, only under Plugins, but you have a lot of control over what you publish.
I used to really enjoy using New Relic. We wrote some code to send statsd metrics to DataDog, that seems to be working well.
We're very heavily in bed with NR at this point, both front end and back end. It would take a seismic shift for us to switch to a different metrics solution I think...
Not to say NR is "perfect" (it's not!) but it's "good enough" and easy for everyone to use, from backend to frontend to product management etc...
https://github.com/newrelic/metrics_publish_java -- I see V2 is archived, we're on V1 from ages ago.
(I suspect, at some point, we'll have to revisit/update all our plugins)
@seancorfield thanks. I was hopeful that updating to latest agent, agent-api etc. and following NR's docs on custom async instrumentation would get me there, but unfortunately I'm just not seeing much of value. Segments 'work' in a very naive sense but make no sense in the transaction context
will keep chipping away + try with plugins, if I find a sensible solution I might post it up somewhere
I have a core.cache question (I think, or at least I’m musing about implementing something with core.cache): I want to generate on-disk temp PDF files and have them disappear after say, an hour. The cache holds an identifier and a path to the file, to be fed out of a jetty server with the identifier in the request URL. It’s fine if the URL 404's after an hour.
I got to thinking: would a TTL cache be able to handle this use case? I’m trying to figure out if I could directly supply a function that unlinks the on-disk PDF on eviction, or if this is best done by implementing a custom version of the TTL cache with `CacheProtocol` `evict` wrapped?
does the pdf have to be written to disk or could it be cached in memory?
that's fair.
another option I might consider is to have a temporary folder and have a background process that runs every hour and deletes any files older than an hour
it makes worrying about "what happens if the server crashes or restarts" type of issues less of a problem
Yeah, that’s probably OK too… just yank the existing keys out of the cache as a set and if it’s not in the set, farewell PDF.
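A sketch of that background-sweep idea, assuming plain JDK interop and a placeholder temp-dir path; the names here are illustrative, not from any library:

```clojure
;; Delete any regular file in `dir` older than `max-age-ms`.
(defn sweep-old-files! [^java.io.File dir max-age-ms]
  (let [cutoff (- (System/currentTimeMillis) max-age-ms)]
    (doseq [^java.io.File f (.listFiles dir)
            :when (and (.isFile f) (< (.lastModified f) cutoff))]
      (.delete f))))

;; Run the sweep hourly on a daemon thread ("/tmp/pdfs" is a placeholder).
(defonce sweeper
  (doto (Thread. #(while true
                    (sweep-old-files! (java.io.File. "/tmp/pdfs")
                                      (* 60 60 1000))
                    (Thread/sleep (* 60 60 1000))))
    (.setDaemon true)
    (.start)))
```

As noted above, this also survives server restarts: stale files are swept on the next pass regardless of any in-memory cache state.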
I was also considering stealing from https://github.com/ring-clojure/ring/blob/master/ring-core/src/ring/middleware/multipart_params/temp_file.clj
I would not recommend core.cache for that kind of thing, look at guava's caches
/ looks at https://github.com/google/guava/wiki, expires from unexpectedly broadened horizons /
yeah, I had just got to thinking an `evict` callback would let me be lazier, but it sounds like I should probably think this through a little better.
Yeah, as the maintainer of `core.cache`, I would agree with @U0NCTKEV8 on this @U08BW7V1V -- I don't think it's a great match for what you're trying to do.
what is a current library for Clojure for building REST APIs? And does anyone have any decent documentation?
Consider https://github.com/metosin/muuntaja for data coercion and for routing https://github.com/metosin/reitit
My stack is ring+jetty (aleph at work but I think jetty is enough for most use cases), https://github.com/exoscale/interceptor for the request lifecycle and https://github.com/exoscale/ex for error handling. For routing I use https://github.com/juxt/bidi
I worry when I go to the GitHub page for yada and see no check-ins in years. Or does this just mean it's pretty stable / finished?
why does everyone only trust software that changes all the time? seems counter-intuitive... :)
not used to using libraries that aren't full of bugs... and need constant patching 😄
@alexmiller there is a difference between ‘all the time’ and ‘not updated in 9 years’ 😉
in any case - i read the source and very often libraries that look like abandonware are just simple enough to not be worried
I think that there is a difference between not being updated because it’s been abandoned and not being updated because it’s stable.
There are libraries that were written for 1.3 that still work with 1.10, so why update it if it isn’t broken and it works?
@st3fan I would say that in Clojure ecosystem it’s common that a library is not being updated for years and it’s still compatible with the latest Clojure and etc.
Cognitect has been very good about not breaking backwards compatibility.
@qmstuart We have been using yada for years. It's powering https://covid-search.doctorevidence.com/.
cool, i only need this api to be dead simple. It won't be public; it will be running on a machine and only taking a single POST request from that same machine.
i think where it gets tricky is with libraries that depend on Java packages or have to interact with the outside world - as an example, I was just looking at clj-yaml, which wraps an 8-year-old version of SnakeYAML - it is entirely possible that it has security issues
fyi, the company behind yada is working on another framework (apex) but that's not production ready
This is an active research area for Malcolm, but I've also been backporting pieces into yada where possible (e.g. I got reap working, and fixed a few bugs)
we also have a lab thing that wraps vertx server (it's called helix internally), I guess there could be crossed interests here
I think so, yeah. Although I think (personally) that vertx isn't too interesting in the bigger picture. Just as a precedent for guiding the design of ring2's web sockets and such.
I think it would be great if apex could be bigger than just juxt, so I would welcome this co-operation from a community perspective
also the work that is happening in malli seems to co-incide with all the json schema stuff maybe
@st3fan you should be looking at https://github.com/clj-commons/clj-yaml for the most recent version which has a recent version of SnakeYAML (also available in bb)
@qmstuart for something super basic, ring jetty + compojure is probably the "easiest"
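A minimal sketch of that "easiest" option: a bare ring handler with no routing library at all (the `/ingest` path is made up, and the jetty startup is left commented since it needs `ring-jetty-adapter` on the classpath):

```clojure
;; A plain ring handler for a single POST endpoint; "/ingest" is
;; illustrative. Everything is just data in, data out.
(defn handler [request]
  (if (and (= :post (:request-method request))
           (= "/ingest" (:uri request)))
    {:status 200 :headers {"Content-Type" "text/plain"} :body "ok"}
    {:status 404 :headers {} :body "not found"}))

;; With ring-jetty-adapter as a dependency:
;; (require '[ring.adapter.jetty :as jetty])
;; (jetty/run-jetty handler {:port 3000 :join? false})
```

Because the handler is a pure function of the request map, it is trivial to test at the REPL before wiring in a server.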
@borkdude i just pulled in snakeyaml and wrote a 3 line parse
.. that is also another great option IMO - do not pull in massive deps if your needs are simple
@qmstuart Actually, babashka will let you create a ring app with its built-in http server. if it's not performance critical, that might be the "cheapest" option in terms of project setup, etc: you only need one file
@qmstuart the thing i am working on only has one endpoint so i did not even bother with a router .. i am just using ring+jetty
yeah, we have a bash script / ps script on windows. It collects data and just want it to go to our local api, which can do some processing locally and fire it off to our rabbitmq queue
if i add another endpoint i may just add a mini dispatcher myself, because a few lines of code is much simpler than new DSLs 🙂
babashka doesn't even have a router at the moment. Just use `(:uri request)` and `(:method request)` and write your own thing ;)
@borkdude, single small file would be nice. Current alternative is to write it all in Go (to get single small file), but I hate Go
This is the shortest example: https://github.com/borkdude/babashka/blob/master/examples/httpkit_server.clj
One caveat: I don't know if babashka can talk to your RabbitMQ. It doesn't have a built-in library specifically for that.
thanks, i'll have a look. bb seems a great alternative to trying to get graal native stuff working
(where mini-dispatcher could very well be a defmulti dispatching on the path .. clojure has enough on board to do a lot of things very easily)
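That defmulti mini-dispatcher idea could look like this sketch (routes and names are illustrative, not from the actual project):

```clojure
;; Dispatch on [method uri] - a whole "router" in a few lines.
(defmulti route (juxt :request-method :uri))

(defmethod route [:post "/ingest"] [_req]
  {:status 200 :body "accepted"})

(defmethod route :default [_req]
  {:status 404 :body "not found"})
```

Adding an endpoint is just another `defmethod`, which keeps the "few lines of code instead of a new DSL" property mentioned above.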
i’m rewriting a GitHub app from Go to Clojure .. I love Go, but I’m currently at 50% functionality converted to Clojure in like 10% of code 🙂 🙂 🙂
using Go after using things like C#, F#, and Clojure, it just feels like i'm missing so many features. Stuff that is easy in those languages feels really complicated in Go when all you have is for loops and ifs
I don't feel I'm missing any features in Clojure. I mean, a lot of things I use regularly in Clojure, C# and F# are missing from Golang 🙂
I'm in the same boat. Also: working with dynamic datastructures in Golang is such a pain. Reflect much? 😬
I know a project that uses Go but generates the Go code in Clojure: https://github.com/tzzh/pod-tzzh-aws
can I do: `#:person{name "joe"}` instead of `{:person/name "joe"}` without having a valid namespace `person` defined? Seems to work sometimes, but not others.
Warning :undeclared-ns in app/client.cljs at 17:1
`No such namespace: person, could not locate person.cljs, person.cljc, or JavaScript source providing "person"`
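For reference (hedged, since the failing form isn't shown): the plain `#:ns{...}` literal never requires the namespace to exist, while the auto-resolving `#::alias{...}` form does require a resolvable alias, which can produce "No such namespace" errors like the one above.

```clojure
;; The plain namespaced-map literal needs no `person` namespace to
;; exist; note the keys should normally be keywords, not bare symbols.
#:person{:name "joe"} ;; => {:person/name "joe"}

;; The auto-resolving form is different: #::person{:name "joe"} only
;; reads if `person` is a namespace alias in the current namespace.
```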
hard to tell. Is everything clear now, or have you found the example that errors that you don't understand? Also, this might be better served in #beginners as it's related to a bit of syntax
Is there any profiling lib that can easily point me to the slowest functions on the code? I am using ptaoussanis/tufte which is almost what I want, except I have to manually instrument my code.
there's nothing easy in my experience - laziness can move the cost of expensive calculations around at runtime
but a proper profiler (yourkit, visualvm) can help you find hot spots, once you get past the ways clojure breaks java assumptions
https://github.com/clojure-goes-fast/clj-async-profiler flame graphs are often a good quick check
Facing this possible memory leak in one of our Clojure apps, has anyone come across something similar before?
it looks like something is creating a lot of nio channels and never closing them http://www.docjar.com/docs/api/java/nio/channels/SelectionKey.html
perhaps you have a global value holding connections to clients?
Can’t really tell, it’s a pretty big app but this has started happening in the last month.
I’ll read the docs
the clojure libs that would use nio channels would be eg. ring implementations, websocket libs
or maybe someone started doing prematurely optimized file IO and isn't closing the files
Is there a pattern of code that I can look for? Those channels being stored in some var?
the first place to look would be something that holds onto io handles so they can't be collected
but with nio you could well have some internal state that isn't cleaned up and closed on gc, so you'd want to look for file / network handles that get created but never get closed
also check if you have some process that accepts connections or watches files, that would be a likely culprit for creating all the io handles
Thanks for the pointers, going to have an interesting weekend hunting this
also, yourkit / visualvm have a graph view of who allocates / holds objects by type, so you could look for the owners of those objects
(likely that will be some internal of a lib like netty, and then you figure out who is using that lib's api etc.)
Right, but we haven’t enabled VisualVM in prod and so can’t set it up any time soon. It would be really hard to pinpoint the culprit locally. But I think I already sort of know where the problem might be
Is there a reason `clojure.set/intersection` does not have a zero-arity version which returns an empty set? This would make it usable in combination with `reduce` to find values that appear in every entry of a sequence. I’m guessing it is because returning the empty set is not always something one would want to do (i.e. there is some ambiguity in what the behavior of the zero-arity version would be).
user=> (reduce set/intersection [#{:a :b :c} #{:b :c :d} #{:a :b :d}])
#{:b}
I don't see a problem here
one of the few cases where implicitly using the first item in coll as the accumulator is a win
oh, right
(reduce set/intersection [])
Execution error (ArityException) at user/eval176 (REPL:1).
Wrong number of args (0) passed to: clojure.set/intersection
but if set/intersection actually behaved correctly for zero args, requiring an init arg would make things worse
having implicit first arg acc actually fixes it
I guess your init acc could be "the set of all possible clojure values", but we have no way to represent it currently :D
@noisesmith exactly. @dpassen1 That is true but I’m not sure it is relevant for this discussion. The 1-arity version of `intersection` returns the set you pass in. The zero arity (which would be called when the sequence to `reduce` is empty) would correctly return the empty set.
@mafcocinco the hacky evil way to do this would be to reify the right interfaces so as to create an object that acts like it contains all values
the right fix is an improved set/intersection :D
Yeah. I solved it by creating a local `intersects` function with the proper arity, as it is internal code. It just struck me as odd, and I suppose it is a good sign of how well Clojure was put together that I assumed there was some logical reason vs. an oversight/mistake.
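That local-wrapper workaround might look like this sketch (the name `intersection*` is made up, standing in for the `intersects` function mentioned above):

```clojure
(require '[clojure.set :as set])

;; Like clojure.set/intersection, but with a zero-arity case so it can
;; be used with `reduce` over a possibly-empty sequence of sets.
(defn intersection*
  ([] #{})
  ([s] s)
  ([s1 s2 & more] (apply set/intersection s1 s2 more)))

(reduce intersection* [#{:a :b :c} #{:b :c :d}]) ;; => #{:b :c}
(reduce intersection* [])                        ;; => #{}
```

Note the zero-arity returning `#{}` is a pragmatic choice, not a true identity element, as the rest of the thread discusses.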
@mafcocinco The usual reason is that in many cases there is no obvious identity element. E.g. `-` has no identity element, so `(-)` doesn't work. `intersection` is similar.
is it wrong that the `(intersection)` is `#{}`?
haha, my hack idea (never a good one in the first place), won't work because it wants to walk the first arg
perhaps mathematically it is but I’m interested in what the pragmatic reason is for not doing it.
usually the return of that is the identity element. `(+)` is 0 because x + 0 = 0 + x = x for all x
oh right, #{} isn't the identity, just a fixed point
right. That makes sense. So in order to support `(intersection)` we would need a set-of-everything.
the identity would be my above (impossible) set of all possible clojure values
if `intersection` supported functions in addition to set objects, the identity would be the set of all things
btw, using `apply` is sometimes better than `reduce`, since reduce will use the 0- or 2-arg arity of the function, while in many cases the varargs version is optimized.
e.g. in the case of the set functions it will usually begin with the set that is of optimal size
Good to know. thanks!
similar for `str`: when you use `reduce` I think you will have many `StringBuilder`s under the hood, but with `apply` only one
yeah, reduce would create a new StringBuilder for each arg
and `(reduce str [x])` wouldn't even make a string
It's also quite easy to write a reducing fn for turning things into a stringbuilder in a single pass. Then you can transduce with it
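A sketch of that single-pass reducing fn (name `str-rf` is made up): one `StringBuilder` threaded through the whole reduction, with `transduce` calling the completing arity at the end to produce the string.

```clojure
;; Reducing fn: init creates a StringBuilder, steps append, completion
;; turns the builder into a String.
(defn str-rf
  ([] (StringBuilder.))
  ([^StringBuilder sb] (.toString sb))
  ([^StringBuilder sb x] (.append sb (str x))))

(transduce (map identity) str-rf [1 "a" :b]) ;; => "1a:b"
```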
(ins)user=> (reduce str [42])
42
(ins)user=> (apply str [42])
"42"
i always felt like `clojure.string/join` would be more optimized but now i see it just delegates to `(apply str coll)`
lol
what would the require do?
use `alias`, that's what `:as` does under the hood
clojure uses a mutable database internally to keep track of what's been loaded
just creating the ns doesn't update the db
What's weird though is that create-ns will look it up and return the existing ns if there is one, or create a new one. So I'd have assume if creating a new one, it be added as well
Also, something doesn't add up, because: `(alias 'foo 'foo.bar)` fails if you don't first call `(create-ns 'foo.bar)`
alias needs the ns to exist
no, that's not what alias is using
$ clj
Clojure 1.10.1
(cmd)user=> (contains? (loaded-libs) 'clojure.set)
false
(ins)user=> (require 'clojure.set)
nil
(cmd)user=> (contains? (loaded-libs) 'clojure.set)
true
(ins)user=> (create-ns 'foo.bar)
#object[clojure.lang.Namespace 0x2bffa76d "foo.bar"]
(ins)user=> (contains? (loaded-libs) 'foo.bar)
false
well, it's not using the one `loaded-libs` provides, at the very least
Right, so it means there are two different collections of namespaces. I guess one for loaded and one for not loaded ?
it's likely using the list returned by all-ns
all-ns is all namespaces, only some are considered loaded
but all-ns is a list of namespace objects, it's not keyed for lookup
it is all the namespaces period, no matter how you create them
wait, by "those" in "those would only be the ones created by create-ns" what do you mean?
because all-ns contains every namespace object period
Like, I'd expect that I can `create-ns`, which creates the ns, and then load it with `require`, but it seems you can't
create-ns is about in memory
Which happens as a side effect of load
But require is about load and there is nothing to load
Or do you mean with `ns`?
If it’s already loaded, the load is short circuited
@didibus things like `:as` and `:refer` etc. in `require` are conveniences, and that usage of require is a no-op
right
and put you in it too