
Another bug report for the Polylith docs: the "copy" button in the code snippet on this page doesn't work in Firefox. That wouldn't be a problem, but when I highlight the text and manually hit ctrl+c, some JS on the page is interfering and preventing the copy from happening, so I can't really copy that snippet 😕. Tested on latest Firefox; booted up my Chrome test browser and it works as expected.


We use GitBook to publish the documentation and we cannot change the page source code.


The bug has been reported to GitBook and they have promised to fix the bug.


When running tests with `poly test`, the value of `*file*` in a brick test namespace seems incorrect. It ends up being `workspace-dir/<path for test namespace>`, i.e. missing `components/<component-name>/test`.


Hi @U0HFRSY0M. I don’t think I really understand. Maybe you can elaborate a bit and give an example? What do you mean by `*file*`? The namespaces live in each src and test directory under workspace-root/components and workspace-root/bricks, so I’m not sure what you mean.


I’m referring to `clojure.core/*file*`, which is bound to the file name of the namespace being loaded.


To see it, just put a top-level `(prn *file*)` in a test namespace and run `poly test`.


`poly test` creates a classloader to run tests in isolation for each project in which the test is included. The poly command runs from the workspace root, loads the test namespaces, and runs the tests within them one by one. The documentation describes the whole process of running tests with Polylith. I haven’t used `*file*` before, but I found documentation which says it is only useful while compiling a file and no longer useful afterwards. Maybe the way we evaluate the test namespaces and run tests in a different classloader causes this.


I think the basic issue is that the current directory isn’t changing. This also means that if the component does any file-system access other than through resources, the same path refers to different locations depending on whether the test is run at the brick level or at the workspace level.


Maybe Polylith could provide a `*project-root*` var that it sets to the root of the brick when running a brick’s tests.


Tests that cared could check whether it was bound, and use it to absolutise relative paths as needed.
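A minimal sketch of how that proposal could look (the `*project-root*` dynamic var is hypothetical, not an existing Polylith feature):

```clojure
(ns my-component.path-helper
  (:require [clojure.java.io :as io]))

;; Hypothetical dynamic var that a test runner would bind to the brick root.
(def ^:dynamic *project-root* nil)

(defn absolutise
  "Resolve a brick-relative path against *project-root* when it is bound,
   falling back to the current working directory otherwise."
  [relative-path]
  (if *project-root*
    (.getCanonicalPath (io/file *project-root* relative-path))
    (.getCanonicalPath (io/file relative-path))))
```

A test that cared would then wrap its file access in `(binding [*project-root* …] …)` or rely on the runner to do so.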


I don’t understand why Polylith should do that. I don’t think we are doing anything wrong with running tests; we are just calling the default test runner. Isn’t it easier to add a string variable at the top of your test namespace to see where it is? I still don’t understand the use case, nor whether Polylith is doing something it shouldn’t.


Say I want to inspect the brick’s deps.edn file in a test. What is the path to that file? This isn’t my exact use case, but it illustrates the problem. I guess I could do `(first (filter fs/exists? [<workspace relative path to deps.edn> <brick relative path to deps.edn>]))`, but that seems error-prone. Or am I missing something obvious?


You can easily construct an absolute path from where your test namespace is, since you know where that file is. Polylith’s own codebase does that in several places, if I’m not mistaken. I would say this is the easiest way of achieving what you are trying to do. You could also include polylith/clj-api as a dependency and extract paths from the workspace data, but I would say that is overkill.
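One way to sketch that suggestion (assuming the brick keeps a known file on its classpath; the resource name below is purely illustrative): locate the file via `clojure.java.io/resource` and derive directories from it.

```clojure
(ns my-component.brick-paths
  (:require [clojure.java.io :as io]))

(defn brick-resources-dir
  "Given the name of a resource known to live at the root of the brick's
   resources directory, return that directory as a java.io.File, or nil
   when the resource is not on the classpath (or lives inside a jar)."
  [resource-name]
  (when-let [url (io/resource resource-name)]
    (when (= "file" (.getProtocol url))
      (.getParentFile (io/file (.getFile url))))))
```

From that directory you could then walk up to the brick root and reach its deps.edn, regardless of where the repo is checked out.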


I don’t see how to specify an absolute path if a test should work in whatever location the repo is checked out to.


I have polylith/clj-api as a dependency anyway, so I’ll just use that.


I have a “pluggable backend” situation where a component implements the library’s api, and two other components provide alternative implementations. I’m wondering how to manage this situation in polylith?


I’ve seen libraries call require and resolve at runtime, but that seems like it’d only work as intended in a Polylith project or base, where you would only import one of the two implementations. The Polylith dev profile would instead load all the implementations, and which one ends up being used at runtime becomes an implementation detail…


@U0CJ8PTE1 For the development project, you can use profiles to select which implementation is used by default and also how to run a REPL and/or tests using the other implementation(s).


Ah yes good point! If the api errors when it finds more than one implementation that could be pretty solid…
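A hedged sketch of that idea (all namespace names are invented for illustration): the api component could probe each candidate implementation namespace and fail fast unless exactly one is on the classpath.

```clojure
(ns my-library.api)

;; Candidate implementation namespaces (hypothetical names).
(def impl-namespaces
  '[my-library.impl.httpkit my-library.impl.hato])

(defn- present?
  "True when the namespace can be loaded from the classpath."
  [ns-sym]
  (try (require ns-sym) true
       (catch java.io.FileNotFoundException _ false)))

(defn implementation
  "Return the single implementation namespace on the classpath,
   throwing when zero or more than one is found."
  []
  (let [found (filterv present? impl-namespaces)]
    (case (count found)
      1 (first found)
      0 (throw (ex-info "No implementation found" {}))
      (throw (ex-info "Multiple implementations found" {:found found})))))
```

In a project or base only one implementation would be on the classpath, so the check passes; in a dev setup that accidentally pulls in both, it errors loudly instead of silently picking one.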


At work we have a component for http-client (as an interface) and we have two implementations, one for httpkit and one for Hato (`HttpClient`). We have :+default and :+httpkit as the two profiles (pulling in the Hato- and httpkit-based implementations respectively) and then our projects all specify which implementation they want.
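In deps.edn terms, a Polylith profile is just an alias whose name starts with `+` in the development project's configuration. A sketch of the setup Sean describes might look like this (component names and paths are illustrative):

```clojure
;; Workspace-root deps.edn (fragment, illustrative component names)
{:aliases
 {;; Hato-based implementation, picked up by default
  :+default {:extra-paths ["components/http-client-hato/src"
                           "components/http-client-hato/resources"]}
  ;; httpkit-based implementation
  :+httpkit {:extra-paths ["components/http-client-httpkit/src"
                           "components/http-client-httpkit/resources"]}}}
```

Starting a REPL with e.g. `clj -A:dev:+httpkit` would then select the httpkit implementation for the development project.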


That makes sense, thanks Sean!

You can find more information about profiles in the documentation.


What if you want to mock a component for your tests? E.g. you implement an external API, and you only want it to really be used in some rarely-run integration tests?


One idea is to fall back to `with-redefs` in the tests to mock individual functions.
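A small sketch of that approach using `with-redefs` from clojure.core (the function being mocked is hypothetical):

```clojure
(ns my-component.mock-test
  (:require [clojure.test :refer [deftest is]]))

;; Hypothetical function that would normally hit the external API.
(defn fetch-user [id]
  (throw (ex-info "real HTTP call -- only wanted in integration tests" {:id id})))

(deftest fetch-user-mocked
  ;; Temporarily rebind fetch-user's var root to a stub for the test body.
  (with-redefs [fetch-user (fn [id] {:id id :name "stub"})]
    (is (= {:id 42 :name "stub"} (fetch-user 42)))))
```

Since `with-redefs` rebinds the var's root, it also affects code in other components that calls the function indirectly, which is handy for mocking a whole implementation component behind its interface.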