#clojure
2022-01-04
ghaskins00:01:39

Hello all, I am struggling with a gen-class overloading scenario for OutputStream

ghaskins00:01:04

This is: given the three forms of (.write)

ghaskins00:01:33

how do I specify the (byte[]) variant?

ghaskins00:01:53

I've tried various permutations, such as:

ghaskins00:01:55

(defn- os-write                      ; intended for write(byte[], int, int)
  [^protojure.internal.grpc.io.OutputStream this b off len]
  (-write-impl this (.state this) b off len))

(defn- os-write-byte                 ; intended for write(byte[])
  [^protojure.internal.grpc.io.OutputStream this b]
  (-write-impl this (.state this) b 0 (count b)))

(defn- os-write-int                  ; intended for write(int)
  [^protojure.internal.grpc.io.OutputStream this b]
  (let [b (bit-and 0xff b)]
    (-write-impl this (.state this) (byte-array [b]) 0 1)))

ghaskins00:01:08

but os-write-byte and friends aren't mapping properly

hiredman00:01:44

you have to make a single function that handles the arguments for all the overloaded methods

hiredman00:01:53

that might not be true of gen-class, it has been a while since I've used it, but it is true of other forms of interop, lemme find the gen-class docs

ghaskins00:01:22

FWIW, I've used the type-decorated forms in the past, as discussed here

ghaskins00:01:32

but it's unclear how to handle byte[] using that

ghaskins00:01:54

os-write-int
does indeed work

ghaskins00:01:22

but I couldn't figure out the byte[] variant… I tried os-write-byte, os-write-bytes, os-write-[B

ghaskins00:01:28

(last one in desperation, heh)

ghaskins00:01:25

unfortunately I am required to support java.io.[Input|Output]Stream for the interop I am doing

ghaskins00:01:36

which means i am forced to deal with same-arity overloading

hiredman00:01:53

as I mentioned, the Channels class provides adapters from channels to io streams, so if you don't need a named class, that is nicer

ghaskins00:01:29

ohhh, i see

ghaskins00:01:34

thanks, I'll check it out
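For reference, a minimal sketch of the Channels route mentioned above, assuming it refers to java.nio.channels.Channels, which adapts NIO channels to java.io streams without needing a named class:

(import '(java.nio.channels Channels Pipe))

(let [pipe (Pipe/open)
      out  (Channels/newOutputStream (.sink pipe))   ; a java.io.OutputStream
      in   (Channels/newInputStream (.source pipe))  ; a java.io.InputStream
      buf  (byte-array 5)]
  (.write out (.getBytes "hello"))
  (.read in buf)
  (String. buf))
;; => "hello"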

hiredman00:01:11

but I just checked the gen-class docstring, and I am pretty sure it is the case that you need to handle all possible argument overloads in a single function

ghaskins00:01:17

I would welcome dropping gen-class for other reasons, so that's promising

hiredman00:01:07

all methods named "write" on the gen-class will call the same function in the implementing namespace

ghaskins00:01:38

and then I could multi-method on type, maybe

ghaskins00:01:40

let me try that too
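A minimal sketch of that single-function approach, assuming a gen-class with :prefix "os-" and the -write-impl helper from the earlier snippet; one fn covers all three overloads, dispatching on arity and bytes?:

(defn- os-write
  ([this b]
   (if (bytes? b)
     ;; write(byte[])
     (-write-impl this (.state this) b 0 (alength ^bytes b))
     ;; write(int): the low 8 bits of the int are the byte to write
     (-write-impl this (.state this) (byte-array [(bit-and 0xff b)]) 0 1)))
  ([this b off len]
   ;; write(byte[], int, int)
   (-write-impl this (.state this) b off len)))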

hiredman00:01:51

you can also proxy inputstream directly to avoid gen-class

ghaskins00:01:11

iirc, proxy/reify were not options because of the overloading

hiredman00:01:17

which is often what people end up doing, but it is kind of terrible, and you also run into this same type vs. name method overloading

hiredman00:01:11

you can do it with proxy and reify as well; it works the same way: the methods with a given name all go to the same impl regardless of argument types
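For illustration, a rough proxy version under the same rule; write-bytes! here is a made-up sink function, and as with gen-class, every write overload lands on the one implementation:

(defn make-output-stream [write-bytes!]
  (proxy [java.io.OutputStream] []
    (write
      ([b]
       (if (bytes? b)
         (write-bytes! b 0 (alength ^bytes b))                       ; write(byte[])
         (write-bytes! (byte-array [(bit-and 0xff (int b))]) 0 1)))  ; write(int)
      ([b off len]
       (write-bytes! b off len)))))                                  ; write(byte[], int, int)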

ghaskins00:01:35

ok, I'll try all of that, ty for your help

evocatus02:01:37

are there any practical advantages to using the CLI tools over Leiningen as of now?

emccue03:01:20

depends on your project

emccue03:01:48

if it's “standard” and you already understand Leiningen, then there isn't really any difference

emccue03:01:55

if it has, or has the potential to, grow out of that mould, or anyone else needs to learn it from scratch, I'm really starting to come around to deps.edn's more straightforward model
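For comparison, a minimal deps.edn; the paths and alias names here are purely illustrative:

{:paths ["src" "resources"]
 :deps  {org.clojure/clojure {:mvn/version "1.10.3"}}
 :aliases
 {:dev  {:extra-paths ["dev"]}
  :test {:extra-paths ["test"]}}}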

devn03:01:32

I reluctantly agree. Though, I've worked on large projects that are over 5 years old in lein and I see no point in migrating them.

seancorfield05:01:00

@UEQDV142J At work we switched from lein to boot back in 2015 because we needed more customization and programmability.

seancorfield05:01:28

Then we switched to CLI / deps.edn in 2018 to get a simpler tool chain that still provided those benefits.

seancorfield05:01:16

I personally think the CLI is a much simpler, cleaner tool set for running Clojure programs and it's much more amenable to complex programmability.

seancorfield05:01:00

But for the "easy" tasks that Leiningen provides out of the box -- running tests, starting an nREPL server, building a JAR -- it's more "work" to do that with the CLI. But then I don't use nREPL at all (nor does my colleague) and we have more complex needs for running tests and building JARs...

h0bbit05:01:16

Hello all! I have a general question around composing edn files. We store configuration data in edn files (1 file per <env, service> we are running), and a lot of this data is duplicated in multiple files. In an ideal world, I'd like to define data in a single place, and then "import" / "merge" different files to generate the final configuration map. The challenge with just storing data as maps and using merge in application code is that the order in which these data maps compose is now external to the data itself. I'm wondering if there is a way to describe the dependency in the data itself, and in general how everyone solves this problem. For example: I'd like a data file to look like this:

(ns config_ec2
  (:require [settings.config :as sc]))

(sc/imports 'databases :only [:pg :redis :kafka :mongodb])
(sc/imports 'aws)


{:cron "* * 24 * * *"
 :bulk-async? true
 :region :nv
 :start-nrepl-server false
 ;; Overrides a single key from aws config data, while keeping all the
 ;; other keys.
 :aws-data-bucket {:key "some-key"
                   :secret "some-secret"
                   :bucket "some-bucket"}}

rdivyanshu05:01:07

https://github.com/juxt/aero does have an option to include other edn files
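A rough sketch of what that could look like with aero's #include and #merge tags; the file names are made up:

;; config_ec2.edn
#merge [#include "databases.edn"
        #include "aws.edn"
        {:cron "* * 24 * * *"
         :bulk-async? true
         :region :nv
         :aws-data-bucket {:key "some-key"}}]

;; reading it
(require '[aero.core :as aero])
(aero/read-config "config_ec2.edn")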

emccue05:01:23

We have a similar situation that is currently unsolved for us, but some ideas I've been mulling: there is no way to "import" in edn, but you can make arbitrary tagged elements if you control the reader

#merge-maps [ "../../path.edn" "../../path-2.edn" ]
You can also have all these edn files “truly” live in clj files and export the final config as part of a build step, but idk how to deal with custom elements like #config/env that we have pretty much everywhere
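A hedged sketch of that tagged-element idea with plain clojure.edn and the made-up #merge-maps tag above; files listed later win the merge:

(require '[clojure.edn :as edn])

(declare read-config)

(defn- merge-maps [paths]
  (apply merge (map read-config paths)))

(defn read-config [path]
  (edn/read-string {:readers {'merge-maps merge-maps}}
                   (slurp path)))

;; a config file could then contain:
;; #merge-maps ["../../path.edn" "../../path-2.edn"]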

emccue05:01:06

seems like aero makes use of that first way for a lot of stuff, but makes like a full dsl out of it

h0bbit06:01:10

@U052TCFJH suggested looking through https://www.clojure-toolbox.com/ for this, which I'm currently doing. In the meantime, would love to understand pros / cons of various approaches.

h0bbit06:01:44

@U3JH98J4R : How are you handling the problem of reading tagged literals? do you define the readers in the application repo or in the settings library? We use this approach in some services, but it can lead to hairy dependency management issues if the readers are maintained in a different library

emccue06:01:27

right now we don't. It's just on the todo list

dpsutton06:01:28

Another approach to this is to not fear the duplication in the final edn files, but write a little bb program or clojure program that can take some template files and emit the final files. This has the benefit that the runtime configuration is completely static, with no abstraction or merge order at runtime, and you can visually inspect the actual config map for prod.
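A small sketch of that approach; it runs in bb or clj, and the file layout is invented:

(require '[clojure.edn :as edn]
         '[clojure.pprint :as pp])

(defn emit-config!
  "Merge a base template with an env-specific overlay and write a fully
  static config file that can be inspected as-is."
  [base-path env-path out-path]
  (let [base  (edn/read-string (slurp base-path))
        env   (edn/read-string (slurp env-path))
        final (merge base env)]   ; merge order is explicit, and only at build time
    (spit out-path (with-out-str (pp/pprint final)))))

;; (emit-config! "templates/base.edn" "templates/ec2.edn" "config/ec2.edn")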

hiredman06:01:29

I think for configuration, really, what you want is a rules engine

hiredman06:01:39

I have yet to see anyone go that complicated, but I have worked with several in-house edn-map-merging configuration systems, one of which was open-sourced: https://github.com/sonian/carica

hiredman06:01:13

I think the ultimate here is a rules engine, because of the way they run until quiescence: basically a database that you assert some basic fact into, like "a config file exists here", and then a rule slurps in more facts from the config file, etc. etc.

adi16:01:35

@U051S5XR3 here's something I wrote for a little study project (uses juxt/aero). Does it come close to your requirement? It pulls secrets from environment-specific secrets.edn files (.gitignored, of course):
• the edn config for aero: https://gitlab.com/nilenso/cats/-/blob/master/resources/config.edn#L1 (The commit message has context about how it's supposed to work.)
• actually configuring the app (read via aero, apply via mount): https://gitlab.com/nilenso/cats/-/blob/master/src/clj/cats/config.clj
Caveat emptor; not exactly a backend developer here, so I'd love to know a better way to do the same thing.

mx200010:01:52

Hey, anybody got the following error from Vim & vim-fireplace when trying to evaluate a word with ‘cp’?

mx200010:01:55

Exception in thread "nREPL-worker-4" java.lang.reflect.InaccessibleObjectException: Unable to make field protected java.io.Reader  accessible: module java.base does not "opens " to unnamed module @2eb231a6

mx200010:01:15

Clojure 1.10.3, nrepl 0.9.0, openJDK 17.0.1 on mac OS

mx200010:01:53

Fixed by using openjdk@11

Lennart Buit11:01:38

Internally the jvm got some changes in 9 that split up the standard library. The warning you are getting on 11 says you are using a module that you didn't declare you wanted to use

mx200011:01:13

Oh I see. But it was not a warning, but an error - the evaluation of the form did not work at all.

Alex Miller (Clojure team)13:01:29

17 makes some things that were warnings on 11 into errors

Alex Miller (Clojure team)13:01:37

It sounds like this error is that some code is trying to use internals or set accessibility where it shouldn't

Alex Miller (Clojure team)14:01:29

so prob worth an issue on nrepl. seems like there's at least one other use of setAccessible in there too https://github.com/nrepl/nrepl/blob/dfecc450ca936dd5716ba144f4d262d53b33be04/src/clojure/nrepl/util/completion.clj#L22-L26
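A possible stopgap until such an issue is resolved, offered as an assumption rather than a documented nREPL fix: open the offending package to unnamed modules via JVM options, e.g. in a deps.edn alias. The exact package depends on the field's declaring class, which is elided in the error above.

{:aliases
 {:nrepl {:jvm-opts ["--add-opens" "java.base/java.io=ALL-UNNAMED"]}}}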

mx200014:01:38

Thank you Alex.

practicalli-johnny11:01:08

As it's a new year I'm trying out Java 17, the latest Long Term Support version. It's now on a point release (17.0.1+12) so it shouldn't have any major issues 🙂 Has anyone experienced any issues with Java 17 and Clojure or Clojure libraries?

dharrigan11:01:41

We've been using JDK17 (adoptium) in production for a while now (pretty much since 17 came out), with no issues observed so far.

dharrigan11:01:45

bear in mind that LTS is a marketing term by Oracle, so unless you throw money at Oracle for a support license, LTS is meaningless (i.e., going to JDK18 from JDK17 will give you the same benefits as moving from 8 or 11 to 17).

p-himik11:01:49

Don't they backport fixes to LTS versions while skipping versions that aren't supported anymore?

p-himik11:01:23

And by "they" I mean not just Oracle but all companies making JDKs and labeling them as LTS.

Ben Sless12:01:43

If you use stuff like Corretto, sure. I think the premise behind LTS is how long into the future it receives updates. For usual versions it's six months but iirc for 17 it's two years

dharrigan12:01:50

There was a lot of talk recently on t'interwebs about so-called LTS and the general consensus was that it's pure marketing by Oracle - it so happens that the distros sorta agree to call 17 LTS. However, there's no absolute reason to do so (it just fits in with the marketing strategy). You would get the same benefits of new features/bugfixes if you keep rolling with the JDK releases, rather than wait for backports to happen.

dharrigan12:01:09

(after all, the JVM is very very rigorously tested before releases anyhoooo)

dharrigan12:01:41

(it's also less risky, i.e., to jump from 17 to 18, than from 11 to 17)

p-himik12:01:57

But isn't it even less risky to stay on 17 till another LTS arrives than to keep on jumping to the newest available version right away or once your current version is not supported anymore? There definitely have been backwards incompatible changes.

dharrigan12:01:44

Again, LTS is pure marketing.

dharrigan12:01:59

There's nothing special about LTS - in order to obtain LTS you need to pay oracle money for the support

dharrigan12:01:37

IMHO, it's far less risky to keep moving forwards, from 17, to 18, to 19 and so on, rather than wait 2 years between 17 and the next so-called LTS, then have to figure out what needs to change.

dharrigan12:01:06

With a 6 month cadence from 17 to 18, you're more likely to catch anything and fix it promptly than 2 years down the line.

p-himik12:01:09

> There's nothing special about LTS
This is factually false, at least when it comes to JDK 11. JDK 11 is still receiving updates. JDK 10 and JDK 12 do not. So, I would say that JDK 11, being LTS, is still supported, while 10 and 12 are not.

dharrigan12:01:43

Sure, but only if you are on "premier support" from Oracle.

dharrigan12:01:11

If you're using the Oracle JDK that is too

dharrigan12:01:42

And given the alternatives out there, are there any compelling reasons to do so (if you aren't willing to pay for support)?

p-himik12:01:46

I honestly don't follow. JDK 11.0.13 is publicly available, I can use it without being on premier support.

p-himik12:01:26

By "support" I don't mean that I, p-himik, am personally supported by Oracle. I mean that Oracle supports the JDK by providing updates to it, regardless of who ends up using that JDK.

p-himik12:01:10

Same deal with NodeJS LTS versions, same deal with Ubuntu LTS versions.

dharrigan12:01:15

Are you using the Oracle JDK 11 LTS?

p-himik12:01:02

Was still using in December, now on 17. At least on the project that's currently open.

p-himik12:01:10

But how is it relevant?

p-himik12:01:52

Ah, that's OpenJDK. But again - how is this relevant?

dharrigan12:01:58

If you were using the Oracle JDK 11 LTS and if you did not have a support license from them, then you would apparently be in breach of their licensing conditions.

dharrigan12:01:52

We use Adoptium 17, but we'll be keeping current with 18, 19 and so on as they come out, since we'll get all the benefits of security/bugfixes on each release, without worrying about a big-bang 17-to-21 upgrade.

practicalli-johnny12:01:25

I stick with the LTS versions of Java, as I don't wish to regularly update Java. I've always used the JVM for Clojure so most of the new features tend not to be relevant. I use OpenJDK (or AWS Corretto if deploying to that environment) and OpenJDK LTS versions get security updates until at least the next LTS version is out. So for me, LTS is about installing and forgetting about Java so I can get on with the long list of other things I'd like to do 🙂 The LTS has nothing to do with Oracle support to me (I don't particularly value such support)

p-himik12:01:33

Alright, now it makes sense, thanks! @U11EL3P9U Initially it wasn't that clear that there's a difference between Oracle LTS and someone else's LTS. Although maybe it has changed once again? A recent article: https://www.infoq.com/news/2021/10/oracle-jdk-free-again/

practicalli-johnny12:01:14

I didn't expect this topic to get as much discussion as it has 🙂 Yes, Java is free again (not surprisingly). To me, Java (the JVM) is just a specialisation of an operating system to run Clojure code, and it's what I can do on top of it that is of interest, not the JVM itself.

dharrigan12:01:56

It's an interesting topic 🙂

dharrigan12:01:23

I hope my original answer is still okay - we've been using 17 since it came out (and me beforehand for the RCs) and we've had smooth sailing so far.

👍 1
practicalli-johnny12:01:26

I do agree with @U11EL3P9U that https://adoptium.net/ is very good if you do want to test new releases and have a shorter upgrade cycle

practicalli-johnny12:01:55

I have been using Temurin docker images within some GitHub actions (clojure-setup) and it's very nice to use.

dharrigan12:01:00

Oracle JDK is built from the same source as Adoptium (i.e., OpenJDK); it's just that Oracle throws in some proprietary stuff too.

p-himik12:01:17

FWIW, tried using Adoptium about a year ago (not sure which version - should've been the latest at that point) and 10% of shadow-cljs release builds started to fail for incredibly obscure reasons. Decided to just keep on using OpenJDK.

practicalli-johnny12:01:58

Yes, Oracle still haven't got rid of all that proprietary code in the Oracle JDK (although they have had a good try over the years).

seancorfield17:01:45

FWIW, we went from 8 to 11 to 17 in production. Our reasoning is that the versions that everyone -- including consortiums that "manage" their own OpenJDK distro -- deem to be "LTS" continue to get updates until (after) the next "LTS" version appears, but the non-LTS versions do not and you would potentially be forced to jump versions every six months to stay up to date with such updates. We don't want to have to change JVM versions every six months to stay current with updates. In particular, for us New Relic was a problem: their monitoring agent lagged on JDK 16 support until just before JDK 17 dropped and the agent would not run on JDK 17 until they released an updated agent for that, a few weeks after JDK 17 was GA.

👍 1
seancorfield17:01:25

We had been testing on 14, 15, 16 in the meantime but we literally could not update production until New Relic provided support for those JDKs.

seancorfield17:01:41

(unless you fork their agent and maintain your own fork that doesn't block it from running on newer versions -- but given the bytecode weaving it does and the sort of weird bugs they have to find and fix, I wouldn't want to do that)

p-himik17:01:30

On a separate topic - what does "GA" above stand for? "Globally available"?

seancorfield17:01:40

General Availability usually I think. Like the "gold" release of something.

rickmoynihan10:01:08

I think LTS really does mean something, and it's an oversimplification to say moving between major versions will give you the same benefits as updating patch releases within a release (LTS or not). Typically it will give you the same benefits; but the important thing is that the cost of making that change may be drastically different. Essentially, updating across patches within a release means you're a lot less likely to see breaking changes than when upgrading across JDK releases, i.e. patch releases typically shouldn't remove features from the JDK or add new features.

This is important when it comes to targeting software, libraries or ops/deployments to a particular JDK. i.e. if you write a library that targets JDK 11 and upgrade to 17, you might be bitten by some of the functionality changes/removals that occurred… e.g. in the transition from 11 to 17 the RMI Activation mechanism was removed… this is almost certainly not a big deal for most people these days, but legacy stuff that uses it will no longer work. Similarly, garbage collectors and other things that may change the performance profile can change across major JDK releases, but shouldn't in a patch release. These sorts of changes can prevent certain workflows from easily upgrading.

Hence I believe it's still important to know what release of the JDK you are targeting; and it makes most sense for most Java libraries to target LTS releases (pure Clojure libraries can care a little less) unless they genuinely require a big new JDK feature, as that feature, if it requires a major JDK release, may not be easily available to many users. Similarly, in organisations with hundreds or thousands of JDKs deployed it makes sense to manage them in terms of LTS releases, and not have to upgrade JDK major releases every 6 months just to receive a patch.

Now that's not to say that you can't, when operating at a small scale, just run with the latest JDK; most of the time it'll just work, as backwards compatibility is good and taken seriously… however it won't "just work" anywhere near as often as a patch release. One other dimension to be aware of is that class files may target a particular class file format, i.e. libraries may target a newer JDK class file format than you are running… even though they may not make use of specific API features of that JDK.

imre16:01:31

Imagine you're REPL'd into some jvm clojure process. Is there a more or less idiomatic way to determine if (a namespace|any namespaces|all namespaces) are AOT'd?

ghadi16:01:18

specifically "your/namespace__init.class"
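One hedged way to check from the REPL, building on that hint: an AOT'd namespace has a <ns>__init.class visible on the classpath. Note this only says the compiled class is present, not that it is the version actually loaded.

(require '[clojure.java.io :as io]
         '[clojure.string :as string])

(defn aot-compiled? [ns-sym]
  (-> (name ns-sym)
      munge
      (string/replace "." "/")
      (str "__init.class")
      io/resource
      some?))

;; (aot-compiled? 'clojure.core) ;=> true, since clojure.jar ships AOT classes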

imre17:01:56

thank you

ghadi16:01:35

but what are you actually trying to discern?

imre17:01:39

Nothing too specific, really, more being curious. We recently saw some of our code work differently during dev time versus when run AOT'd and deployed into a staging environment, and wanted to reproduce the problem locally (the problem was totally our fault). It seemed easier to AOT and then start a local development REPL with the AOT'd classes added to the classpath, but then I wondered whether those were being properly loaded. And then I just got curious.

lilactown17:01:12

what would one call a function that is meant to be trampolined?

lilactown17:01:30

looking for something succinct enough to put in the name of the fn

lilactown17:01:59

so far I've come up with thunkable but can't say I'm settled on it

(defn add-report-thunkable
  [db data]
  ,,,)

(defn add-report
  [db data]
  (trampoline add-report-thunkable db data))
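For context, a sketch of the shape such a pair usually takes; the countdown-style body here is invented purely to show the thunk-returning style:

(defn add-report-thunkable
  "Returns either a final result or a zero-arg thunk; call via trampoline."
  [db data]
  (if (empty? data)
    db                                            ; final result, trampoline stops
    #(add-report-thunkable (conj db (first data)) ; thunk, trampoline calls it again
                           (rest data))))

(defn add-report
  [db data]
  (trampoline add-report-thunkable db data))

;; (add-report [] [:a :b :c]) ;=> [:a :b :c]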

lilactown17:01:55

clicking the enterprisify button 3 times and calling it a day 😛

Joshua Suskalo17:01:46

Honestly I might just make it a function defined with letfn and then call it like f or something. To keep it separate I might make it private and call it add-report*

p-himik17:01:38

I would add some distinct, memorable, but unclear suffix. So that an unknowing user will almost have to look up the docstring, and a knowing user will immediately know what's up. E.g. add-report-tr*.

p-himik17:01:33

And yeah, my comment assumes that the function is public. If it's private, then I agree with Joshua.

lilactown17:01:00

not something many consumers will use though. probably just me 😄

Joshua Suskalo17:01:16

is there ever a time you'd use the non-trampolined version? I'd think not, and if your concern is about reducing the function call overhead, you could just add an :inline meta on the trampolined version.

hiredman17:01:06

that is terrible

☝️ 1
hiredman17:01:56

any higher order usage of the function will break

Joshua Suskalo17:01:07

It won't break, it will just have the function call overhead.

hiredman17:01:32

no, it will return the thunk instead of a result because it won't be called with trampoline

Joshua Suskalo17:01:59

I wasn't proposing making an :inline key with different behavior from the var it was put on.

Joshua Suskalo17:01:50

I was suggesting adding an :inline key onto the function that already does a trampoline in its body that would just inline the trampoline call, which would act the same way regardless of how it's used.

lilactown17:01:37

the majority of users will use add-report and be happy, but I have another library that may want access to the underlying thunk-producing function so I can do clever things like pause and resume its execution. It's not something that many people will use, but since I am making it public I want to come up with a reasonable name for it

lilactown17:01:07

perhaps I'm overthinking it and add-report-tr* or simply add-report* will suffice

Joshua Suskalo17:01:08

Ah, I see. Then yes, I would recommend add-report*, since for any var which has a paired * version you need to read the docstring on the * version separately from the main var, and that should say that it's designed to be trampolined, which would allow you to make your alternate execution context.

imre17:01:44

Though I'm not a native speaker, I'm not sure English has a more concise way of representing it, so my recommendation for the suffix would be -to-be-trampolined

Joshua Suskalo17:01:52

That is about as concise as you can be while being grammatically correct English, but English (especially in programming) does lend itself well to making cryptic shorthand.

Joshua Suskalo17:01:29

And there's already a consistent pattern in clojure functions that some-func* is like some-func but different in a subtle way you should read the docstring to understand.

lilactown17:01:38

add-report-dont-call-this-fn-or-youll-be-fired

p-himik17:01:22

React has very similar names in its internals. :D

Martynas Maciulevičius17:01:44

core.logic uses o to denote this. But don't use a letter because it's not good to read. e.g. cond -> condo

imre17:01:45

I guess it depends how much information one wants to put into the function name vs the docstring

Joshua Suskalo17:01:28

I tend towards verbose docstrings and memorable function names. Functions are almost always complex enough to need a docstring to fully understand them anyway, so names don't need to be designed to remove the need to read them.

Martynas Maciulevičius17:01:52

"there be dragons" is always memorable fn-there-be-dragons But I don't think anybody wants to use that

imre17:01:55

The perfect suffix would be a future adjectival participle (according to wikipedia). I could supply one in Hungarian but I doubt that would be too much help 🙂

lilactown17:01:28

I'm going with add-report* ty all for participating in the naming exercise 😄

seancorfield17:01:12

I'm late to the party here, but I generally use func* for something that is intended to be called by func instead, when some intermediate wrapping is involved (usually caching, where func* is the underlying uncached version and func is the cached version that calls it via lookup-or-miss, but I would do that for trampoline too and any other similar cases).
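A sketch of that cached/uncached pairing using clojure.core.cache.wrapped; it assumes org.clojure/core.cache is on the classpath, and fetch-user* stands in for real work:

(require '[clojure.core.cache.wrapped :as cache])

(defonce user-cache (cache/lru-cache-factory {} :threshold 512))

(defn fetch-user*
  "Underlying, uncached version."
  [db id]
  {:id id :db db})   ; stand-in for the real lookup

(defn fetch-user
  "Public, cached version; callers normally use this one."
  [db id]
  (cache/lookup-or-miss user-cache id (fn [id] (fetch-user* db id))))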

Martynas Maciulevičius18:01:19

Actually what you could also use is lookup-deep. It's short but not a symbol.

seancorfield18:01:05

@U028ART884X Could you explain what you mean there?

seancorfield18:01:31

(I'm referring to a specific function in clojure.core.cache)

Joshua Suskalo18:01:22

Yeah, the add-report* being called from add-report is being represented here because add-report* needs to be called in a trampoline, and add-report does that.

Joshua Suskalo18:01:01

Also, I don't want to say Martynas M is a spammer, but I've had a hard time following how their comments are connected to the things they're commenting on, and that's been consistent for the last several messages they've sent.

Joshua Suskalo18:01:15

Could just be english as a second language or something.

Ben Sless18:01:11

I'd put them in two separate namespaces with the same function name: in api expose add-report; in impl or cps or thunk (🤔) have the underlying add-report

Joshua Suskalo18:01:22

I think it would feel pretty bad to have the implementation so far away.

lilactown18:01:26

I've done that before in other libs and don't hate it

Joshua Suskalo18:01:44

I think it's pretty important to optimize for the experience of users who need to look at the source but aren't doing it from a repl with your code loaded, because that can be how many users decide if or how to use your code.

Ben Sless18:01:50

It's pretty much how aws-api does it, too with async api and blocking api

Martynas Maciulevičius19:01:39

@U5NCUG8NR just had a rough day. Got too careless. It got better when I had a walk a moment ago. @U04V70XH6 If I'm correct OP wanted to name a function that should be trampolined. So it handles some kind of a deep recursion case and tries to preserve the stack. This is where the deepness comes from.

Joshua Suskalo19:01:04

ah, I see what you mean. And that's fair that it's been a long day.

Martynas Maciulevičius19:01:18

And yes, seemingly unrelated comments may provide a way to look into the problem from a different angle. But other times they throw people off. Probably those were too much.

zendevil.eth19:01:12

I’m seeing a strange error. I’m running tests with (run-tests), but I get this error:

Testing projectgun.integration.auth-test
Syntax error (ClassCastException) compiling at (projectgun/integration/auth_test.clj:109:1).
mount.core.DerefableState cannot be cast to clojure.lang.IFn

Full report at:
/var/folders/96/df02xppj77g7dx698gtmwmrw0000gn/T/clojure-3408151167268530921.edn
Tests failed.
when I have:
(use-fixtures :once with-sendgrid)
but get no errors when I have:
(use-fixtures :once with-db with-sendgrid)
Why would that be?

Martynas Maciulevičius21:01:43

with-db should set something in your mock functions or some global state, or should return something that has your db. And then you use mount to provide dependency for with-sendgrid. And it doesn't work because db is not created in the first case. Disclaimer: I'm not familiar with mount.

Martynas Maciulevičius21:01:20

Also, your error is not related to the test runner, despite what the message suggests; it should be related to the fact that auth_test tried to call your database and it failed because it couldn't be deref'ed

Martynas Maciulevičius21:01:46

So it's the test runner that reported the error badly. But at least it pointed you to the file.

Martynas Maciulevičius22:01:17

Also it's possible that with-db could use a dirty macro.

zendevil.eth09:01:56

it still gives the same error if you remove with-sendgrid too

Martynas Maciulevičius09:01:16

what does it give when you do this? (clojure.test/use-fixtures :once (fn [test-fn] (println "hi") (test-fn)))

Martynas Maciulevičius09:01:02

Or even remove the fixture completely? Does it still crash? Because if it does then your test tries to reach something which isn't initialized

Martynas Maciulevičius09:01:37

Which means that you don't know what you're doing and you are breaking your tests by changing this line

Martynas Maciulevičius09:01:00

i.e. I'm pretty sure that with-db and with-sendgrid set some variables somewhere in your code, and your tests can't run without them.

Martynas Maciulevičius09:01:48

So what I'd do in your situation is comment out all the tests in the namespace and then uncomment them one by one

zendevil.eth19:01:42

and line 109 is (run-tests)

p-himik20:01:17

According to the documentation and implementation of use-fixtures and related functionality, it expects (apart from the first keyword) functions. So, are you sure that with-db is a function and not, say, an instance of mount.core.DerefableState?

zendevil.eth20:01:01

it breaks WITHOUT with-db not with with-db

p-himik20:01:14

Ah, my bad. In any case, check what with-db and with-sendgrid do, especially the latter. The answer is there, so without the code it's impossible to tell for sure.

zendevil.eth20:01:56

(defn with-sendgrid [f]
  (with-redefs [sendgrid/send-email (fn [type email sendgrid-data]
                                      (println "SENDing " type " to " email "with subs " sendgrid-data))]
    (f)))

(defn with-db [f]
  (start)
  (reset-db)
  (f))

p-himik20:01:03

That's very nice, but with-db uses other functions whose implementation is unknown. Try to debug it, see where that DerefableState is coming from, figure out why it's not there when you use with-db, and implement the necessary fix.

seancorfield20:01:45

The full stacktrace will be in /var/folders/96/df02xppj77g7dx698gtmwmrw0000gn/T/clojure-3408151167268530921.edn and that might provide more insight.

seancorfield20:01:56

It's kind of unusual to have (run-tests) in a test file...

Martynas Maciulevičius21:01:31

I haven't seen run-tests in clj sources either. Is this clojure.test? To me it looks like this call in tests could be used for front-end testing, but you have it in regular Clojure... hm