2017-09-17
Anyone know of some sample code that tests database interactions? Seems to be one of the few places in clojure we can't use pure functions. I'm guessing it's the same as java testing, but thought I'd make sure the clojurians haven't found better ways.
Hm... Should have looked, looks like luminus came with at least one example. So that's probably a good starting place. But any other advice is always appreciated
depending on what sql features you are using, h2 has an in-memory db you can run migrations against and then test
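For reference, a rough sketch of what that can look like with clojure.java.jdbc and an in-memory H2 connection (the users table and the exact db-spec are just illustrative):
(require '[clojure.java.jdbc :as jdbc])

(def test-db {:connection-uri "jdbc:h2:mem:testdb;DB_CLOSE_DELAY=-1"})

;; create the schema (or run your migrations against test-db), then exercise it
(jdbc/execute! test-db ["CREATE TABLE users (id INT PRIMARY KEY, name VARCHAR(100))"])
(jdbc/insert! test-db :users {:id 1 :name "alice"})
(jdbc/query test-db ["SELECT name FROM users WHERE id = ?" 1])
;;=> ({:name "alice"})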
but consider that the clojure db libraries want you to give them data, so in many cases it should suffice to generate the data you would give the db library, and test the properties of that
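For example, a test like this (user->row is a hypothetical pure function that builds the row map you would hand to the db library) never needs a database at all:
(require '[clojure.test :refer [deftest is]])

(defn user->row [user]
  {:id (:id user) :name (:name user)})

(deftest user->row-shape
  (is (= {:id 1 :name "alice"}
         (user->row {:id 1 :name "alice" :password "secret"}))))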
but it uses other databases for storage
What are the advantages of that? (You must have a lot of space... but GB is very cheap, as far as I know)
thanks @noisesmith I did think about that, mocking/faking the calls to the db and just testing any logic. But in some cases they are pretty intermingled and it would be more effort than it's worth (at least that happens in java a bunch). And yeah, I use in-memory dbs in java, so familiar with that approach.
in clojure the standard attitude is that it's worth it to make the effort to not write intermingled code
unless what you are writing is specifically a database tool? in that case, it makes sense to have integration tests that run with a real database (the one you claim to support)
also, if you use a library like stuartsierra/component or weavejester/integrant you pass your database connection object as an argument to the code that uses it, which simplifies making a drop in replacement that doesn't talk to a real database
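A minimal sketch of that shape (component/integrant wiring omitted; UserStore, JdbcUserStore and register are made-up names):
(require '[clojure.java.jdbc :as jdbc])

(defprotocol UserStore
  (save-user! [store user]))

;; real implementation holds the db-spec/connection handed in by the system
(defrecord JdbcUserStore [db-spec]
  UserStore
  (save-user! [_ user]
    (jdbc/insert! db-spec :users user)))

;; drop-in replacement for tests, never touches a real database
(defrecord InMemoryUserStore [saved]
  UserStore
  (save-user! [_ user]
    (swap! saved conj user)))

;; application code only depends on the protocol
(defn register [store user]
  (save-user! store user))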
@rcustodio re: advantages of immutable database -- this article does a decent job enumerating them: http://augustl.com/blog/2016/datomic_the_most_innovative_db_youve_never_heard_of/
Is it ever appropriate to use Clojure's state management tools (Vars, Refs, Atoms, Agents) when your system doesn't need concurrency in a multithreaded environment? For example, if you're modeling a game you could push the state into a Var and update the board every time. Or you could create a new version of the board and potentially append it to a board history.
if you do the immutable / functional version first, it's trivial to also write to a state container as well if you need it
the other way around is a much bigger hassle, so there's an advantage to designing the immutable/functional version, and adding the global state as an afterthought if you find you need it
even with a global atom, you can easily make a second atom containing the history of changes (with add-watch)
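Something like this (names are illustrative):
(def app-state (atom {:board [[nil nil nil] [nil nil nil] [nil nil nil]]}))
(def history (atom []))

(add-watch app-state :history
           (fn [_key _ref _old new-state]
             (swap! history conj new-state)))

(swap! app-state assoc-in [:board 0 0] :x)
@history ;;=> [{:board [[:x nil nil] [nil nil nil] [nil nil nil]]}]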
thanks @noisesmith, that's what my intuition was.
the way I do it with the code I am working on right now (an event source based state accumulator for coordinating between servers) is a pure functional loop, and the individual client passes a callback that could optionally update a global atom
but this means that the client decides which atom to update (among other advantages that could mean coordinating with multiple clusters at once, e.g. if the role of the server was to monitor state in general and was not participating in the group state, just observing)
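Roughly this shape (not the actual code; apply-event and the sample data are made up for illustration):
(defn apply-event [state event]
  (merge state event))

(defn run-accumulator
  "Pure fold over events; the caller decides what to do with each new state
   via the on-state callback (update an atom, forward it somewhere, or ignore it)."
  [initial-state events on-state]
  (reduce (fn [state event]
            (let [state' (apply-event state event)]
              (on-state state')
              state'))
          initial-state
          events))

;; one client might keep a global view in its own atom:
(def observed (atom nil))
(run-accumulator {} [{:a 1} {:b 2}] #(reset! observed %))
@observed ;;=> {:a 1 :b 2}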
Interesting! I’ll have to think about that for a minute 🙂
Another somewhat related question is when is it appropriate (if ever) to introduce some shared state to reduce the overhead for a caller. To clarify with an example, let's say you have a set of functions which all operate on a similar datatype:
(defn foo [x y] ...)
(defn bar [x z] ...)
(defn zoo [x t] ...)
I could introduce an atom into the scope to hold that shared datatype
(def x (atom ...)) ;; or possibly a var
(defn foo [y] ...)
(defn bar [z] ...)
(defn zoo [t] ...)
Which simplifies the parameters and is more aesthetically pleasing (somewhat irrelevant), but now introduces a level of indirection and state management in the code, tests, etc…
A third option might be to allow for both ways of calling by allowing re-binding of the var when it's called with the re-used datatype.
(defn foo
([y] ...)
([y x] (binding [*x* x] ...)))
But I feel like this could lead to trouble down the road if the functions need to change to allow for more arguments. Or in other scenarios… it just feels somewhat complex, but maybe I'm just not comfortable with dynamic vars.
Here again, I'm probably overthinking it. Having a set of functions that all operate over the same datatype and so all include it as a parameter isn't really a problem. It's probably just my natural inclination to question what “looks like” redundancy. The only time I could see doing this is if maybe the shared state very rarely changed. For instance, if x was true 95% of the time in the codebase.
don't build binding or atoms into the code executing your logic - allow top level users to leverage them if they want for convenience, but they severely limit flexibility of using the code
it's not redundancy when the flexibility of providing the arg explicitly is useful (consider how much simpler testing is when you aren't worried about dynamic bindings)
Thanks again @noisesmith! This question came from reading “Elements of Clojure”. Zach has an idiom where he suggests that “No one should have to know you've used binding”. His main point seems to be just that, but his examples also seem to suggest that there is a trade-off in complexity between opening up all your functions to accept a new argument and using some state construct like a dynamic var. His example highlights that this new parameter is almost never used (it has a very common default). From the book: > The cost, however, is high: we’ve added a positional parameter which is almost never used. If we ever add more parameters, we’ll either have to switch to an option map, or start specifying turbo-mode? everywhere just so we can specify the new parameter. Any new functions which call b or c will also pay this tax.
that is a fair point
in my code base it's just option maps pretty much everywhere
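e.g. (transcode and its options are made-up names):
(defn transcode [input {:keys [turbo-mode? bitrate]
                        :or   {turbo-mode? false bitrate 128}}]
  {:input input :turbo? turbo-mode? :bitrate bitrate})

(transcode "song.wav" {})                   ;; callers that don't care pass an empty map
(transcode "song.wav" {:turbo-mode? true})  ;; opting in doesn't disturb other callers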
Some would consider it bad practice, but I definitely sometimes use ns-level atoms for some things instead of extra params. From my POV, Clojure is only mostly functional, and it makes sense to deviate from that when you get a big boost to clarity by doing so. I'm pretty sparing with that technique, though, because I agree with @noisesmith that you can definitely paint yourself into corners pretty easily.
@eggsyntax that's fine for an app, but please don't do that in a library
Very insightful comments; the context (lib vs app) and goals always matter. I suppose the takeaway is to be aware of that context and probably document it.
what is the difference between
'[[org.clojure/clojure "1.8.0"]]
and
`[[org.clojure/clojure "1.8.0"]]
(apostrophe vs backtick)
No difference there that I can think of. But bare symbols behave differently:
user> 'second
second
user> `second
clojure.core/second
Apostrophe is ordinary quote; backtick is syntax-quote. Lots more detail at: https://clojure.org/guides/weird_characters
@drewverlee I think it depends very much on what x is. The global atom in that context is pretty bad, I think; I would avoid it as much as possible. Using a dynamic Var for passing down configuration/options is the only global behavior that is somewhat acceptable, though it is also a bit of a sign of possible bad design. If your options need to be passed down a deep chain, you should ask yourself if it's not possible to flatten your implementation, so that you don't have branching behavior deep within the call stack, but at the top of the call stack instead. This way you can often avoid options altogether, by having very composable fns which you can combine into different compositions at the top, and that gives you your varying required behaviors. The advantage is that most options create way too many combinations, while you often really only need a handful.
But even then, I'd try to have things that access the global atom kept to the top most layer.
Like if your atom has {:player-pos [120 200]}, and you have (defn move-player ...), make it take x,y and not the atom: (defn move-player [x y direction distance] ...), and have the orchestrating function get the player-pos from the atom, and update the atom with the result of calling move-player.
If you design that way, you'll start to notice how much reuse you gain, and how many more things can actually become libraries across projects. That move-player is now very generic, and you can reuse it across games. It's untangled, decoupled, easy to test and extend. You can easily end up with a move-player namespace full of fns that all move players in different ways: (move-player-zig-zag) (move-player-arc) (move-player-linear) (move-player-curve), etc. All can return a vector of points for the move animation, and just take values, no knowledge of the global state needed.
And notice here I didn't create a move-player that takes a complicated options map. Instead I created a lot of specialized move-player fns. That's what I meant by avoiding deeply nested options. Each of these probably requires slightly different arguments, and trying to jam all that into one options map would easily create option creep.
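A minimal sketch of that split (function and key names are illustrative):
(def game-state (atom {:player-pos [120 200]}))

;; pure: takes values, returns the new position, knows nothing about the atom
(defn move-player-linear [[x y] [dx dy] distance]
  [(+ x (* dx distance)) (+ y (* dy distance))])

;; only the orchestrating fn touches the global state
(defn move-player! [direction distance]
  (swap! game-state update :player-pos move-player-linear direction distance))

(move-player! [1 0] 10)
@game-state ;;=> {:player-pos [130 200]}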
@didibus thanks, that's a great perspective. In his example there was a function chain where the last one needed the option. We can assume it wasn't possible to flatten. His point was more about what to do if you choose to use a dynamic var, not how he arrived at that choice. I asked him to clarify it though, as I feel it's important. Thanks for your insight; I might be misrepresenting the idea from the book.
If you have nested functions and only the bottom fn needs the parameter, modifying the signature of all parent fns does kind of couple them to this parameter even though they don't actually use it. So in that case a dynamic Var can be a good way to keep the parent fns more isolated from the details of the bottom fn. I think there's a monad which serves a similar purpose. Anyways, sometimes that's fine, and if not abused and kept under control it can be the simplest, most practical thing. That said, I've learned over the years that the nesting is a coupling in itself. The definition of the order of the steps of my domain logic is all tangled together by it. So I try to decompose. Threading macros can help here. Something like:
(-> input
    (first-step)
    (second-step)
    (last-step :with-turbo-mode))
Imagine the alternative, which is for first-step to call second-step at the end of its implementation, and second-step to call last-step at the end of its implementation, and now you need the option for last-step. So instead of passing the turbo-mode option through first-step and second-step, you decide to wrap the whole thing in a binding and have last-step use an implicit dynamic var instead of an argument. That's probably better than changing the signatures of first-step and second-step just to push down the options of last-step, but why is the logic of coordination between the steps encoded all over the steps themselves? Decouple that, so steps just do one thing and return, and the coordination is done at the top. That solves all your problems and decouples things even further.
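For contrast, the implicit-dynamic-var version being described looks something like this (illustrative names):
(def ^:dynamic *turbo-mode* false)

(defn last-step [x]
  (if *turbo-mode* (str x " (turbo)") x))

(defn second-step [x]
  (last-step (str x " -> second")))

(defn first-step [x]
  (second-step (str x " -> first")))

;; the caller has to know about the var to get the turbo behaviour
(binding [*turbo-mode* true]
  (first-step "input"))
;;=> "input -> first -> second (turbo)"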