This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2022-01-04
(defn- os-write
  [^protojure.internal.grpc.io.OutputStream this b off len]
  (-write-impl this (.state this) b off len))

(defn- os-write-byte
  [^protojure.internal.grpc.io.OutputStream this b]
  (-write-impl this (.state this) b 0 (count b)))

(defn- os-write-int
  [^protojure.internal.grpc.io.OutputStream this b]
  (let [b (bit-and 0xff b)]
    (-write-impl this (.state this) (byte-array [b]) 0 1)))
you have to make a single function that handles the arguments for all the overloaded methods
that might not be true of gen-class, it has been a while since I've used it, but it is true of other forms of interop, lemme find the gen-class docs
but I couldn't figure out the byte[] variant… tried os-write-byte, os-write-bytes, os-write-[B
something you might want to do is instead of using gen-class use reify to implement https://docs.oracle.com/javase/7/docs/api/java/nio/channels/ReadableByteChannel.html then https://docs.oracle.com/javase/7/docs/api/java/nio/channels/Channels.html#newInputStream(java.nio.channels.ReadableByteChannel) to turn it in to an inputstream
unfortunately I am required to support java.io.[Input|Output]Stream for the interop I am doing
as I mentioned, the Channels class provides adapters from channels to io streams, so if you don't need a named class, that is nicer
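To make that reify-plus-Channels route concrete, here's a minimal sketch. The in-memory byte source is purely illustrative (a real implementation would pull from the gRPC state, not a byte array); the `Channels/newInputStream` adapter call is the part the source is describing.

```clojure
(import '(java.nio ByteBuffer)
        '(java.nio.channels Channels ReadableByteChannel))

;; Sketch: a ReadableByteChannel over an in-memory byte array, adapted
;; to a java.io.InputStream via Channels/newInputStream. Illustrative
;; only; a real channel would read from your actual data source.
(defn bytes->input-stream [^bytes src]
  (let [pos (atom 0)
        ch  (reify ReadableByteChannel
              (isOpen [_] true)
              (close [_])
              (read [_ buf]
                (let [p @pos]
                  (if (>= p (alength src))
                    (int -1) ; end of stream
                    (let [n (min (.remaining buf) (- (alength src) p))]
                      (.put buf src p n)
                      (swap! pos + n)
                      (int n))))))]
    (Channels/newInputStream ch)))
```

With this, `(slurp (bytes->input-stream (.getBytes "hello")))` reads the bytes back through a plain `InputStream`, no named class required.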
but I just checked the gen-class docstring, and I am pretty sure it is the case that you need to handle all possible argument overloads in a single function
all methods named "write" on the gen-class will call the same function in the implementing namespace
which is often what people end up doing, but it is kind of terrible, and you also run into this same type vs. name method overloading
you can do it with proxy and reify as well, it works the same way, the methods with a given name, all go to the same impl regardless of argument types
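That single-impl behavior is easy to see with `proxy`. A sketch (the atom-backed `sink` is a made-up stand-in for real output state): one `write` fn receives every overload and has to dispatch on arity and argument type, exactly as a gen-class impl fn would.

```clojure
;; Sketch: with proxy (as with gen-class), every `write` overload lands
;; on the same fn, so it must dispatch on arity and argument type.
(def sink (atom []))

(def out
  (proxy [java.io.OutputStream] []
    (write
      ([b]
       (if (bytes? b)
         (swap! sink into (seq b))            ; write(byte[] b)
         (swap! sink conj (bit-and 0xff b)))) ; write(int b)
      ([b off len]                            ; write(byte[] b, int off, int len)
       (swap! sink into (->> (seq b) (drop off) (take len)))))))

(.write out 65)                     ; int overload
(.write out (.getBytes "BC"))       ; byte[] overload
(.write out (.getBytes "xDEy") 1 2) ; ranged overload
```

All three calls hit the same `write` fn; the two-arity case further branches on `bytes?` to tell `write(int)` from `write(byte[])`.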
if it's “standard” and you already understand leiningen then there isn't really any difference
if it has, or has the potential to, grow out of that mould, or anyone else needs to learn it from scratch, I'm really starting to come around to deps.edn's more straightforward model
I reluctantly agree. Though, I've worked on large projects that are over 5 years old in lein and I see no point in migrating them.
@UEQDV142J At work we switched from lein
to boot
back in 2015 because we needed more customization and programmaticability.
Then we switched to CLI / deps.edn
in 2018 to get a simpler tool chain that still provided those benefits.
I personally think the CLI is a much simpler, cleaner tool set for running Clojure programs and it's much more amenable to complex programmability.
But for the "easy" tasks that Leiningen provides out of the box -- running tests, starting an nREPL server, building a JAR -- it's more "work" to do that with the CLI. But then I don't use nREPL at all (nor does my colleague) and we have more complex needs for running tests and building JARs...
Hello all! I have a general question around composing edn files. We store configuration data in edn files (1 file per <env, service> we are running), and a lot of this data is duplicated in multiple files. In an ideal world, I'd like to define data in a single place, and then "import" / "merge" different files to generate the final configuration map. The challenge with just storing data as maps and using merge
in application code is that the order in which these data maps compose is now external to the data itself. I'm wondering if there is a way to describe the dependency in the data itself, and in general how everyone solves this problem. For example: I'd like a data file to look like this:
(ns config_ec2
  (:require [settings.config :as sc]))

(sc/imports 'databases :only [:pg :redis :kafka :mongodb])
(sc/imports 'aws)

{:cron "* * 24 * * *"
 :bulk-async? true
 :region :nv
 :start-nrepl-server false
 ;; Overrides a single key from aws config data, while keeping all the
 ;; other keys.
 :aws-data-bucket {:key "some-key"
                   :secret "some-secret"
                   :bucket "some-bucket"}}
https://github.com/juxt/aero does have an option to include other edn files
We have a similar situation that is currently unsolved for us, but here are some ideas I've been mulling: there is no way to "import" in edn, but you can make arbitrary tagged elements if you control the reader
#merge-maps [ "../../path.edn" "../../path-2.edn" ]
You can also have all these edn files “truly” live in clj files and export the final config as part of a build step, but idk how to deal with custom elements like #config/env
that we have pretty much everywhere. Seems like aero makes use of that first way for a lot of stuff, but makes like a full DSL out of it
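The `#merge-maps` idea above can be sketched with `clojure.edn` in a few lines. The tag name and the one-map-per-file layout are assumptions from the example (and aero's `#include` covers similar ground):

```clojure
(require '[clojure.edn :as edn])

;; Sketch: a #merge-maps tagged element that slurps each edn file
;; (assumed to contain a map) and merges left to right, so later
;; files override earlier ones.
(defn merge-maps-reader [paths]
  (apply merge (map (comp edn/read-string slurp) paths)))

(defn read-config [s]
  (edn/read-string {:readers {'merge-maps merge-maps-reader}} s))

;; hypothetical usage, with base.edn and ec2.edn on disk:
;; (read-config "#merge-maps [\"base.edn\" \"ec2.edn\"]")
```

The merge order now lives in the data itself (the vector order inside the tag) rather than in application code.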
@U052TCFJH suggested looking through https://www.clojure-toolbox.com/ for this, which I'm currently doing. In the meantime, would love to understand pros / cons of various approaches.
@U3JH98J4R : How are you handling the problem of reading tagged literals? do you define the readers in the application repo or in the settings library? We use this approach in some services, but it can lead to hairy dependency management issues if the readers are maintained in a different library
Another approach to this is to not fear the duplication in the final edn files, but write a little bb program or Clojure program that can take some template files and emit the final files. This has the benefit that the runtime configuration is completely static: there is no abstraction, merge order, etc. at runtime, and you can visually inspect the actual config map for prod.
I have yet to see anyone go that complicated, but I have worked with several in house edn map merging configuration file systems, one of which was open sourced https://github.com/sonian/carica
I think the ultimate is a rules engine here because of the way they run until quiescence, basically a database that you assert some basic fact in like "a config file exists here" and then there is a rule that slurps in more facts from the config file, etc etc
@U051S5XR3 here's something I wrote for a little study project (uses juxt/aero). Does it come close to your requirement? It pulls secrets from environment-specific secrets.edn files (.gitignored, of course):
• the edn config for aero: https://gitlab.com/nilenso/cats/-/blob/master/resources/config.edn#L1 (The commit message has context about how it's supposed to work.)
• actually configuring the app (read via aero, apply via mount): https://gitlab.com/nilenso/cats/-/blob/master/src/clj/cats/config.clj
Caveat emptor; not exactly a backend developer here, so I'd love to know a better way to do the same thing.
Hey, anybody got the following error from Vim & vim-fireplace trying to evaluate some word with ‘cp’?
Exception in thread "nREPL-worker-4" java.lang.reflect.InaccessibleObjectException: Unable to make field protected java.io.Reader accessible: module java.base does not "opens " to unnamed module @2eb231a6
Internally the jvm got some changes in 9 that split up the standard library. The warning you are getting on 11 says you are using a module that you didn't declare you wanted to use
Oh I see. But it was not a warning, but an error - the evaluation of the form did not work at all.
17 makes some things that were warnings on 11 into errors
It sounds like this error is that some code is trying to use internals or set accessibility where it shouldn't
yeah, this kind of code is not valid in 17 without setting additional jvm flags https://github.com/nrepl/nrepl/blob/8223894f6c46a2afd71398517d9b8fe91cdf715d/src/clojure/nrepl/middleware/interruptible_eval.clj#L32-L40
so prob worth an issue on nrepl. seems like there's at least one other use of setAccessible in there too https://github.com/nrepl/nrepl/blob/dfecc450ca936dd5716ba144f4d262d53b33be04/src/clojure/nrepl/util/completion.clj#L22-L26
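Until that's fixed upstream, the usual stopgap on JDK 17 is to explicitly open the package the error names. A sketch as a deps.edn alias (the alias name is made up; adjust the package to match your error message):

```clojure
;; deps.edn (fragment): open java.base/java.io to unnamed modules,
;; which is what nREPL's setAccessible call needs on JDK 17.
{:aliases
 {:jdk17-opens
  {:jvm-opts ["--add-opens" "java.base/java.io=ALL-UNNAMED"]}}}
```

The same `--add-opens java.base/java.io=ALL-UNNAMED` pair works as plain JVM args with any launcher.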
Created, thanks! https://github.com/nrepl/nrepl/issues/263
As it's a new year I'm trying out Java 17, the latest long-term support version. It's now on a point release (17.0.1+12) so it shouldn't have any major issues 🙂 Has anyone experienced any issues with Java 17 and Clojure or Clojure libraries?
We've been using JDK17 (adoptium) in production for a while now (pretty much since 17 came out), with no issues observed so far.
bear in mind that LTS is a marketing term by Oracle, so unless you throw money at Oracle for a support license, LTS is meaningless (i.e., going to JDK18 from JDK17 will give you the same benefits as moving from 8 or 11 to 17).
Don't they backport fixes to LTS versions while skipping versions that aren't supported anymore?
And by "they" I mean not just Oracle but all companies making JDKs and labeling them as LTS.
If you use stuff like Corretto, sure. I think the premise behind LTS is how long into the future it receives updates. For usual versions it's six months, but IIRC for 17 it's two years.
There was a lot of talk recently on t'interwebs about so-called LTS and the general consensus was that it's pure marketing by Oracle - it so happens that the distros sorta agree to call 17 LTS. However, there's no absolute reason to do so (it just fits in with the marketing strategy). You would get the same benefits of new features/bugfixes if you keep rolling with the JDK releases, rather than wait for backports to happen.
But isn't it even less risky to stay on 17 till another LTS arrives than to keep on jumping to the newest available version right away or once your current version is not supported anymore? There definitely have been backwards incompatible changes.
There's nothing special about LTS - in order to obtain LTS you need to pay oracle money for the support
IMHO, it's far less risky to keep moving forwards, from 17, to 18, to 19 and so on, rather than wait 2 years between 17 and the next so-called LTS, then have to figure out what needs to change.
With a 6 month cadence from 17 to 18, you're more likely to catch anything and fix it promptly than 2 years down the line.
> There's nothing special about LTS This is factually false, at least when it comes to JDK 11. JDK 11 is still receiving updates. JDK 10 and JDK 12 do not. So, I would say that JDK 11, being LTS, is still supported, while 10 and 12 are not.
And given the alternatives out there, are there any compelling reasons to do so (if you aren't willing to pay for support)?
I honestly don't follow. JDK 11.0.13 is publicly available, I can use it without being on premier support.
By "support" I don't mean that I, p-himik, am personally supported by Oracle. I mean that Oracle supports the JDK by providing updates to it, regardless of who ends up using that JDK.
Was still using it in December, now on 17. At least on the project that's currently open.
If you were using the Oracle JDK 11 LTS and if you did not have a support license from them, then you would apparently be in breach of their licensing conditions.
We use Adoptium 17, but we'll be keeping current as 18, 19 and so on as they come out, since we'll get all the benefits of security/bugfixes on each release, without worrying about a big-bang, 17 to 21 upgrade.
I stick with the LTS versions of Java, as I don't wish to regularly update Java. I've always used the JVM for Clojure, so most of the new features tend not to be relevant. I use OpenJDK (or AWS Corretto if deploying to that environment) and OpenJDK LTS versions get security updates until at least the next LTS version is out. So for me, LTS is about installing and forgetting about Java so I can get on with the long list of other things I'd like to do 🙂 The LTS has nothing to do with Oracle support to me (I don't particularly value such support)
Alright, now it makes sense, thanks! @U11EL3P9U Initially it wasn't that clear that there's a difference between Oracle LTS and someone else's LTS. Although maybe it has changed once again? A recent article: https://www.infoq.com/news/2021/10/oracle-jdk-free-again/
I didn't expect this topic to get as much discussion as it has 🙂 Yes, Java is free again (not surprisingly). To me, Java (the JVM) is just a specialisation of an operating system to run Clojure code, and it's what I can do on top of it that is of interest, not the JVM itself.
I hope my original answer is still okay - we've been using 17 since it came out (and me beforehand for the RCs) and we've had smooth sailing so far.
I do agree with @U11EL3P9U that https://adoptium.net/ is very good if you do want to test new releases and have a shorter upgrade cycle
I have been using Temurin docker images within some GitHub actions (clojure-setup) and it's very nice to use.
Oracle JDK builds from the same source as Adoptium (i.e., OpenJDK); it's just that Oracle throws in some proprietary stuff too.
FWIW, tried using Adoptium about a year ago (not sure which version - should've been the latest at that point) and 10% of shadow-cljs release builds started to fail for incredibly obscure reasons. Decided to just keep on using OpenJDK.
Yes, Oracle still haven't got rid of all that proprietary code in the Oracle JDK (although they have had a good try over the years).
FWIW, we went from 8 to 11 to 17 in production. Our reasoning is that the versions that everyone -- including consortiums that "manage" their own OpenJDK distro -- deem to be "LTS" continue to get updates until (after) the next "LTS" version appears, but the non-LTS versions do not, and you would potentially be forced to jump versions every six months to stay up to date with such updates. We don't want to have to change JVM versions every six months to stay current with updates. In particular, for us New Relic was a problem: their monitoring agent lagged on JDK 16 support until just before JDK 17 dropped, and the agent would not run on JDK 17 until they released an updated agent for that, a few weeks after JDK 17 was GA.
We had been testing on 14, 15, 16 in the meantime but we literally could not update production until New Relic provided support for those JDKs.
(unless you fork their agent and maintain your own fork that doesn't block it from running on newer versions -- but given the bytecode weaving it does and the sort of weird bugs they have to find and fix, I wouldn't want to do that)
General Availability usually I think. Like the "gold" release of something.
Bing agrees with me: https://www.bing.com/search?q=what+does+GA+mean+in+software 🙂
I think LTS really does mean something, and it's an oversimplification to say moving across major versions will give you the same benefits as updating patch releases within a release (LTS or not). Typically it will give you the same benefits; but the important thing is that the cost of making that change may be drastically different. Updating across patches within a release means you're a lot less likely to see breaking changes than when upgrading across JDK releases; i.e. patch releases typically shouldn't remove features from the JDK or add new features.

This is important when it comes to targeting software, libraries or ops/deployments to a particular JDK. If you write a library that targets JDK 11 and upgrade to 17, you might be bitten by some of the functionality changes/removals that occurred. E.g. in the transition from 11 to 17 the RMI Activation mechanism was removed; this is almost certainly not a big deal for most people these days, but legacy stuff that uses it will no longer work. Similarly, garbage collectors and other things that may change the performance profile may change across major JDK releases, but shouldn't in a patch release. These sorts of changes can prevent certain workflows from easily upgrading.

Hence I believe it's still important to know what release of the JDK you are targeting; and it makes most sense for most Java libraries to target LTS releases (pure Clojure libraries can care a little less) unless they genuinely require a big new JDK feature, as that feature may not be easily available to many users if it requires a major JDK release. Similarly, in organisations with hundreds or thousands of JDKs deployed, it makes sense to manage them in terms of LTS releases, and not have to upgrade JDK major releases every 6 months just to receive a patch.
Now that's not to say that you can't, when operating at a small scale, just run with the latest JDK; most of the time it'll just work, as backwards compatibility is good and taken seriously. However, it won't "just work" anywhere near as often as a patch release would. One other dimension to be aware of is that class files may target a particular class file format, i.e. libraries may target a newer JDK class file format than you are running, even though they may not make use of specific API features of that JDK.
Imagine you're REPL'd into some jvm clojure process. Is there a more or less idiomatic way to determine if (a namespace|any namespaces|all namespaces) are AOT'd?
look for class files with clojure.java.io/resource
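A sketch of that tip (the function name is mine; note it only proves a namespace's AOT-compiled classes are on the classpath, not that the class form is what actually got loaded):

```clojure
(require '[clojure.java.io :as io]
         '[clojure.string :as str])

;; Sketch: an AOT-compiled namespace leaves an <ns>__init.class file on
;; the classpath, which clojure.java.io/resource can resolve. Munge the
;; namespace name the same way the compiler does (dashes to underscores,
;; dots to slashes).
(defn aot-compiled? [ns-sym]
  (-> (name ns-sym)
      (str/replace "-" "_")
      (str/replace "." "/")
      (str "__init.class")
      io/resource
      some?))
```

`(aot-compiled? 'clojure.core)` is true on a stock install, since clojure.core ships AOT-compiled in the Clojure jar.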
Nothing too specific, really, more being curious. We recently saw some of our code work differently during dev time vs. when run AOT'd and deployed into a staging environment, and wanted to reproduce the problem locally. (The problem was totally our fault.) It seemed easier to AOT and then start a local development REPL with the AOT'd classes added to the classpath, but then I wondered whether those were being properly loaded. And then I just got curious.
so far I've come up with thunkable, but can't say I'm settled on it
(defn add-report-thunkable
  [db data]
  ,,,)

(defn add-report
  [db data]
  (trampoline add-report-thunkable db data))
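For context, a runnable sketch of this thunk-plus-wrapper shape (`count-down` is a stand-in of mine for the real add-report recursion):

```clojure
;; Sketch: the * fn returns a thunk instead of recursing directly, so
;; the public wrapper can drive it with trampoline without growing the
;; call stack, even for very deep "recursion".
(defn count-down*
  [n]
  (if (pos? n)
    #(count-down* (dec n)) ; a thunk: trampoline will keep calling these
    :done))

(defn count-down
  [n]
  (trampoline count-down* n))
```

A plain recursive version would blow the stack on `(count-down 100000)`; the trampolined pair returns `:done` without deep nesting.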
This might give you some insight. 😄 https://projects.haykranen.nl/java/
Honestly I might just make it a function defined with letfn and then call it like f or something.
To keep it separate I might make it private and call it add-report*
I would add some distinct, memorable, but unclear suffix. So that an unknowing user will almost have to look up the docstring, and a knowing user will immediately know what's up. E.g. add-report-tr*
And yeah, my comment assumes that the function is public. If it's private, then I agree with Joshua.
is there ever a time you'd use the non-trampolined version? I'd think not, and if your concern is about reducing the function call overhead, you could just add an :inline meta on the trampolined version.
how so?
It won't break, it will just have the function call overhead.
no, it will return the thunk instead of a result because it won't be called with trampoline
I wasn't proposing making an :inline key with different behavior from the var it was put on.
I was suggesting adding an :inline key onto the function that already does a trampoline in its body, which would just inline the trampoline call and act the same way regardless of how it's used.
the majority of users will use add-report and be happy, but I have another library that may want access to the underlying thunk-producing function so I can do clever things like pause and resume its execution. It's not something that many people will use, but since I am making it public I want to come up with a reasonable name for it
Ah, I see. Then yes, I would recommend add-report*, since with any var that has a paired * version you need to read the docstring on the * version separately from the main var, and that docstring should say it's designed to be trampolined, which would allow you to make your alternate execution context.
Though I'm not a native speaker, I'm not sure English has a more concise way of representing it, so my recommendation for the suffix would be -to-be-trampolined
That is about as concise as you can be while being grammatically correct English, but English (especially in programming) does lend itself well to making cryptic shorthand.
And there's already a consistent pattern in Clojure functions that some-func* is like some-func but different in a subtle way you should read the docstring to understand.
aka add-report!
😛
core.logic uses o to denote this. But don't use a letter because it's not good to read.
e.g. cond -> condo
I guess it depends how much information one wants to put into the function name vs the docstring
I tend towards verbose docstrings and memorable function names. Functions are almost always complex enough to need a docstring to fully understand them anyway, so names don't need to be designed to remove the need to read them.
"there be dragons" is always memorable
fn-there-be-dragons
But I don't think anybody wants to use that
The perfect suffix would be a future adjectival participle (according to wikipedia). I could supply one in Hungarian but I doubt that would be too much help 🙂
I'm late to the party here, but I generally use func* for something that is generally intended to be called by func instead, when some intermediate wrapping is concerned (usually caching, where func* is the underlying uncached version and func is the cached version and calls it via lookup-or-miss, but I would do that for trampoline too and any other similar cases).
Actually what you could also use is lookup-deep. It's short but not a symbol.
@U028ART884X Could you explain what you mean there?
(I'm referring to a specific function in clojure.core.cache)
Yeah, the add-report* being called from add-report is being represented here because add-report* needs to be called in a trampoline, and add-report does that.
Also, I don't want to say Martynas M is a spammer, but I've had a hard time following how their comments are connected to the things they're commenting on, and that's been consistent for the last several messages they've sent.
Could just be english as a second language or something.
I'd put them in two separate namespaces with the same name: in api expose add-report, in impl or cps or thunk (:thinking_face:) have the underlying add-report.
I think it would feel pretty bad to have the implementation so far away.
I think it's pretty important to optimize for the experience of users who need to look at the source but aren't doing it from a repl with your code loaded, because that can be how many users decide if or how to use your code.
@U5NCUG8NR just had a rough day. Got too careless. It got better when I had a walk a moment ago. @U04V70XH6 If I'm correct, OP wanted to name a function that should be trampolined. So it handles some kind of deep recursion case and tries to preserve the stack. This is where the deepness comes from.
ah, I see what you mean. And that's fair that it's been a long day.
And yes, seemingly unrelated comments may provide a way to look into the problem from a different angle. But other times they throw people off. Probably those were too much.
I’m seeing a strange error. I’m running tests with (run-tests), but I get this error:
Testing projectgun.integration.auth-test
Syntax error (ClassCastException) compiling at (projectgun/integration/auth_test.clj:109:1).
mount.core.DerefableState cannot be cast to clojure.lang.IFn
Full report at:
/var/folders/96/df02xppj77g7dx698gtmwmrw0000gn/T/clojure-3408151167268530921.edn
Tests failed.
when I have:
(use-fixtures :once with-sendgrid)
but get no errors when I have:
(use-fixtures :once with-db with-sendgrid)
Why would that be?
with-db should set something in your mock functions or some global state, or should return something that has your db. And then you use mount to provide the dependency for with-sendgrid. And it doesn't work because db is not created in the first case.
Disclaimer: I'm not familiar with mount.
Also your error is not related to the testrunner as your error says, it should be related to the fact that auth_test tried to call your database and it failed because it couldn't be deref'ed
So it's the testrunner that reported the error badly. But at least they pointed you to the file.
Also it's possible that with-db could use a dirty macro.
it still gives the same error if you remove with-sendgrid too
what does it give when you do this?
(clojure.test/use-fixtures :once (fn [test-fn] (println "hi") (test-fn)))
Or even remove the fixture completely? Does it still crash? Because if it does then your test tries to reach something which isn't initialized
Which means that you don't know what you're doing and you are breaking your tests by changing this line
i.e. I'm pretty sure that with-db and with-sendgrid set some variables somewhere in your code. And your tests can't run without them.
So what I'd do in your situation is comment out all the tests in the namespace and then uncomment them one by one
and line 109 is (run-tests)
According to the documentation and implementation of use-fixtures and related functionality, it expects (apart from the first keyword) functions. So, are you sure that with-db is a function and not, say, an instance of mount.core.DerefableState?
it breaks WITHOUT with-db not with with-db
Ah, my bad. In any case, check what with-db and with-sendgrid do, especially the latter. The answer is there, so without the code it's impossible to tell for sure.
(defn with-sendgrid [f]
  (with-redefs [sendgrid/send-email (fn [type email sendgrid-data]
                                      (println "SENDing " type " to " email "with subs " sendgrid-data))]
    (f)))

(defn with-db [f]
  (start)
  (reset-db)
  (f))
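One way to see how `:once` fixtures compose (and why their order matters) is `clojure.test/join-fixtures`, which is what backs `use-fixtures` with several fixtures. A small sketch with stand-in fixtures of mine:

```clojure
(require '[clojure.test :as t])

;; Sketch: join-fixtures nests fixtures left to right, so with-a wraps
;; with-b, which wraps the test body. An order-recording atom shows the
;; execution order.
(def order (atom []))
(defn with-a [f] (swap! order conj :a) (f))
(defn with-b [f] (swap! order conj :b) (f))

(def once-fixture (t/join-fixtures [with-a with-b]))
(once-fixture (fn [] (swap! order conj :test)))
```

`@order` ends up `[:a :b :test]`: the leftmost fixture runs outermost, which is why a db-providing fixture has to come before anything that depends on the db being started.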
That's very nice, but with-db uses other functions whose implementation is unknown. Try to debug it, see where that DerefableState is coming from, figure out why it's not there when you use with-db, and implement the necessary fix.
The full stacktrace will be in /var/folders/96/df02xppj77g7dx698gtmwmrw0000gn/T/clojure-3408151167268530921.edn and that might provide more insight.
It's kind of unusual to have (run-tests) in a test file...
I haven't seen run-tests in clj sources either. Is this clojure.test? To me it looks like this call in tests could be used for front-end testing, but you have it in regular Clojure... hm
I replied more in this thread: https://clojurians.slack.com/archives/C03S1KBA2/p1641325752080700