
in what way does clojure's do not do what you want?


Maybe it's that i want the first function to return before executing the next


that is what do does


do (clojure) == progn (elisp) == ; (ml)


for fun, if you have to reimplement do without do, the classical way to do it is write a macro that takes something like (foo x y z) and turns it in to something like (let [_ x _ y] z)
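A minimal sketch of that trick, as a hypothetical my-do macro (Clojure's real do is a special form, so this is purely for fun):

```clojure
;; Rewrites (my-do x y z) into (let [_ x _ y] z): every expression but
;; the last is bound to a throwaway name, so they run in order, and the
;; last expression's value is returned.
(defmacro my-do [& exprs]
  (if (empty? exprs)
    nil
    `(let [~@(mapcat (fn [e] [(gensym "_") e]) (butlast exprs))]
       ~(last exprs))))

(my-do (println "side effect") (+ 1 2)) ;; prints, then returns 3
```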


Well i'm speechless i didn't know this. Sorry, and thanks for your help...


clojure also has an implicit do in a lot of places


(fn [x] y z) == (fn [x] (do y z))


good to know


i assumed they were being executed in order but didn't realize they were completing before the next was called


clojure is a strict language


I imagine you might be confused because you are dealing with multithreaded programming (at least one Clojure thread and one database thread): when some function on the Clojure thread returns, some effect it has on the database thread may still be in progress (depending on the type of database and the interface you are using)


Some Clojure expressions like (future ...) "complete", but cause more computation to occur after that, too.
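For example, the call to future returns immediately while its body keeps running on another thread:

```clojure
;; future hands the body to a pool thread and returns a reference at once.
(def f (future (Thread/sleep 200) :done))

;; deref blocks until the body has actually finished:
@f ;; => :done
```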


I think that was the case at some point and so I've carried around this notion that do wasn't doing that.


I'm glad i've been set straight.


@kkruit one thing to note is that in (do (map f coll) (g)) - thanks to map being lazy, f never gets called and coll never gets accessed


(this is true of other functions that create lazy results as well, map being just one example)


I knew that and that doall forces it to run. That could have had a hand in what got me turned around also.
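The interaction is easy to demonstrate with an atom counting how often the mapped function runs:

```clojure
;; Side effects inside a lazy map never run unless the seq is realized.
(def lazy-calls (atom 0))
(def forced-calls (atom 0))

(do (map (fn [x] (swap! lazy-calls inc) x) [1 2 3])
    :done)
;; @lazy-calls is still 0: the lazy seq was built but discarded unrealized.

(do (doall (map (fn [x] (swap! forced-calls inc) x) [1 2 3]))
    :done)
;; doall forces realization, so @forced-calls is now 3.
```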


Hi. Is there a function that behaves like partial but inserts the call time arguments before the creation time arguments?


Rephrasing: Is there another way to write the last map below and get the same result without changing runner2?

(defn runner [arg1 arg2 item]
  (str arg1 arg2 item))

(defn runner2 [item arg1 arg2]
  (str arg1 arg2 item))

(map (partial runner "x" "y") [1 2 3])

(map #(runner2 % "x" "y") [1 2 3])


@cristibalan not in core, but the implementation is quite easy:
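The original snippet isn't preserved here, but a minimal sketch might look like this (partial-right is a hypothetical name, not a core function; runner2 is copied from the question above):

```clojure
;; Like partial, but the creation-time arguments are appended *after*
;; the call-time arguments.
(defn partial-right [f & created-args]
  (fn [& call-args]
    (apply f (concat call-args created-args))))

;; runner2 from the question:
(defn runner2 [item arg1 arg2]
  (str arg1 arg2 item))

(map (partial-right runner2 "x" "y") [1 2 3])
;; => ("xy1" "xy2" "xy3")
```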


Is there a reason why it explicitly defines the first few arities instead of jumping directly to & more? Maybe so it shows up nicer in docs or autocomplete?


it's a performance optimisation


the clojure.core functions do the same thing
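The pattern looks like this (sum is just an illustrative name; clojure.core/+ is structured the same way): the common low arities are spelled out so those calls avoid allocating a rest-args seq, and only higher arities pay for & more.

```clojure
(defn sum
  ([] 0)
  ([a] a)
  ([a b] (+ a b))
  ([a b & more] (reduce + (+ a b) more)))
```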


I see. Thanks.


Unrelated question. Is it really bad style to have a multiple-arity function in which the single-arg version handles a collection and the multiple-arg version handles a single element with extra args? All examples of multiple arity seem to use it only for default arguments.


Thanks. I indeed do try to avoid those things, but the reminder never hurts 🙂 Do I get special exemption if my coll fn is more complex in that it uses a loop/recur over another collection inside to reprocess all the items repeatedly?


I guess I fall in the second part of the argument, where he suggests using two separately named functions, which is indeed my current implementation. I'm considering joining them into a single multiple-arity process fn instead of having something like process-item [item arg1] and process-all [items [arg1 arg2]]. Would this be against multiple-arity conventions?


i would not do that. one of the things i like about Clojure's standard library (and i think should be emulated in user code) is that every function has a consistent, predictable performance profile. conj always has the same time complexity, regardless of data structure, because it dispatches to a data structure-specific algorithm, instead of adding to a specific place. (conj () 1 2) is (2 1), whereas (conj [] 1 2) is [1 2], because adding to the front of a list is fast, whereas for vectors, it's adding to the back.
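Those conj examples check out at the REPL:

```clojure
(conj '() 1 2) ;; => (2 1), lists grow at the front
(conj [] 1 2)  ;; => [1 2], vectors grow at the back
```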


predictability is important


if you can't predict what a piece of code is going to do, you can't reason about it


process is more convenient, but much less simple, which means it's much less understandable


especially with the arity overloading, it means you have to mentally keep track of how many arguments you're passing to it at all times, to tell whether it'll do the collection thing or the single-item thing


so that immediately makes things like apply and partial much more difficult to use


arity overloading is intended to be used for multiple arities of a single function, not to combine two functions into one


have you watched Simple Made Easy?


Oh, wow. Just noticed the unread thread now. Thanks for the follow-up. I agree with your points, and it did feel icky to me as well to have the function mean completely different things based on arity. I think one of the things that pushed me into wanting to merge was feeling the ns grew too much (it grew even more since). I'm now moving to extract those process-related operations to a separate ns, which should hopefully lead to more but smaller/simpler functions.


I remember watching that a while ago before actually doing any clojure, I'll try watching it again now that I have some actual experience with the language.


fair enough!


Hm, I'm going to go for it and discuss in code review.


Thanks again for the links.


while running lein repl i am getting

Retrieving com/google/protobuf/protobuf-java/3.0.2/protobuf-java-3.0.2.jar from central
Could not transfer artifact from/to central (): GET request of: com/google/protobuf/protobuf-java/3.0.2/protobuf-java-3.0.2.jar from central failed
Could not find artifact in clojars ()
This could be due to a typo in :dependencies, file system permissions, or network issues.
If you are behind a proxy, try setting the 'http_proxy' environment variable.
Exception in thread "Thread-1" clojure.lang.ExceptionInfo: Could not resolve dependencies {:suppress-msg true, :exit-code 1}
	at clojure.core$ex_info.invokeStatic(core.clj:4617)
	at clojure.core$ex_info.invoke(core.clj:4617)
	at leiningen.core.classpath$get_dependencies_STAR_.invokeStatic(classpath.clj:311)
	at leiningen.core.classpath$get_dependencies_STAR_.invoke(classpath.clj:265)
	at clojure.lang.AFn.applyToHelper(
	at clojure.lang.AFn.applyTo(
	at clojure.core$apply.invokeStatic(core.clj:646)
	at clojure.core$memoize$fn__5708.doInvoke(core.clj:6107)
	at clojure.lang.RestFn.invoke(
	at leiningen.core.classpath$get_dependencies$fn__3844.invoke(classpath.clj:332)
	at leiningen.core.classpath$get_dependencies.invokeStatic(classpath.clj:330)
	at leiningen.core.classpath$get_dependencies.doInvoke(classpath.clj:324)
	at clojure.lang.RestFn.invoke(
	at clojure.lang.AFn.applyToHelper(
	at clojure.lang.RestFn.applyTo(
	at clojure.core$apply.invokeStatic(core.clj:652)
	at clojure.core$apply.invoke(core.clj:641)
	at leiningen.core.classpath$resolve_managed_dependencies.invokeStatic(classpath.clj:441)
	at leiningen.core.classpath$resolve_managed_dependencies.doInvoke(classpath.clj:428)
	at clojure.lang.RestFn.invoke(
	at leiningen.core.eval$prep.invokeStatic(eval.clj:85)
	at leiningen.core.eval$prep.invoke(eval.clj:73)
	at leiningen.core.eval$eval_in_project.invokeStatic(eval.clj:362)
	at leiningen.core.eval$eval_in_project.invoke(eval.clj:356)
	at leiningen.repl$server$fn__5864.invoke(repl.clj:244)
	at clojure.lang.AFn.applyToHelper(
something like this, and I am not connected to any proxy. Can somebody help?


Hi! Are operations on atoms (like swap! or reset!) considered side effects? Especially, would a function like this be a pure function?


I'd say, "yes" to the first question and "no" to the second question.


IMO, a pure function is one where if you invoke it with the same parameters, it always returns the same result.


Perhaps a stronger version would say that the invocation of a pure function would, in addition, not impact the results of other functions.


seems like a pure function to me if you consider the same argument to mean the same atom with the same value


but that would be an argument about what the "same" is


@dpsutton so modifying an atom in this case counts as function output?


In general I'm leaning towards @dorab's point of view, but it would imply that I basically cannot write pure functions that modify state in reagent


I agree with @dorab as well. The "same" atom can have different values so you lose referential transparency


Basically, calling the function twice with the same arguments leads to different outputs, which is a clear sign there's a side effect
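A tiny illustration of that (bump! is a made-up example function): the swap! is a side effect, so the same argument gives a different result on each call.

```clojure
(defn bump! [a] (swap! a inc))

(def counter (atom 0))
(bump! counter) ;; => 1
(bump! counter) ;; => 2, same argument, different result
```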


@wojciech526 if you really value pure functions, you will end up moving to a solution like re-frame. the basic solution is to create some kind of proxy object like an “action” or an “event” in a pure function. then the framework does all of the mutation for you


is there a way to include a namespace's functions globally for use in the repl? For context: I want to use debux's functions without requiring the macros in each file. I'm connected to a browser-repl via shadow


(use 'the.namespace)


ah, hmm. ClojureScript and macros might be a bit weird tho


let me know if that doesn't work


will do, thanks


seems like namespaces are quite gimped in clojurescript. can't use use at all


probably have to give up on the dream, or do some editor hack


If I split up a namespace into different files using

(ns a)
(load "dir/file")

and then inside of the file

(in-ns 'a)

how would I require/use dependencies not across the namespace but only in that file? (in-ns 'a (:require [])) doesn't seem to work; I get a class-not-found error anyway.


require can be used without the ns form


don't use load though


And that would require just for the code inside of the file?


what does this mean?


to be clear, namespaces are global, a require anywhere requires for everywhere (but aliases or imports (via :refer or :as) are per namespace)


I was splitting a namespace into separate files, and those files used (in-ns 'a), which did not allow for (:require). Does it make sense to require outside of the ns, or to go to the defining file for the namespace and require there? I just didn't know.


If you are in the same ns, you can see the libs that are already required in that ns, regardless of file. But defining one ns in multiple files is so obfuscated as to be effectively hostile to anyone trying to read or maintain your code.


:require in ns is syntax that gets translated into a call to require; since ns is a macro, it can define syntax. in-ns and require are not macros, so they don't define syntax, and you need to quote anything that shouldn't be evaluated (e.g. the symbol naming the ns)
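Concretely, the two spellings look like this:

```clojure
;; Called as a plain function, require evaluates its arguments, so the
;; libspec must be quoted:
(require '[clojure.string :as str])
(str/join "," [1 2 3]) ;; => "1,2,3"

;; Inside the ns macro, :require is unevaluated syntax, so no quote:
;; (ns my.app
;;   (:require [clojure.string :as str]))
```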


That's where I was trying to understand if I could require outside of the namespace, so that the require applied only at file scope, not across the namespace.


require modifies a namespace, it isn't meaningful at any other scope


Yeah I get that now.


a namespace isn't a file (but it simplifies things a lot if we can pretend they are, which is why people tell you to not split namespaces into multiple files or define multiple namespaces in one file)


and the :require syntax is specific to the ns form (it is auto-translated to require by ns internally)


stick to one namespace per file


I'm struggling with when there is a need to use the quoted form of the ns and when not. Like (ns a) vs (in-ns 'a).


don't use in-ns


don't spread a namespace over multiple files


clojure.core splits across multiple files, what's wrong with that?


clojure.core is a monster


clojure.core is very old and tries very hard not to break things between releases, so it is hard to spin parts of it out in to other namespaces


clojure.core also has to contend with language bootstrapping issues


I'll keep it in mind. This is just me playing around so not like it matters too much. 🙂 I was breaking up a namespace to multiple namespaces just to keep it in separate files but some of it didn't make sense.


I'm playing with clara-rules (, and organizationally, if I have hundreds of records that define the rules for one system, I didn't want them with the rules or logic handlers. Could I separate them into a new namespace a.b.records?


@UANK2VBHA I responded to some points in a separate thread, but also keep in mind there is a #clara slack channel here in clojurians if you have Clara-specific questions.


not sure what you mean


Don't worry about it.


It's not really a technical issue.


clara unfortunately has a kind of java centric view of defrecords


@U0NCTKEV8 @UANK2VBHA I think that is a generalization that isn’t entirely true. Clara has some default type-based dispatch that takes advantage of defrecords and/or Java types. This is meant to facilitate smooth interop with a Java-based ecosystem of facts. If you are going to use defrecord types as your type, then you have to do the Java interop since Clojure doesn’t provide that type by means of something like a var.


However, the type dispatch system of Clara is completely extensible/pluggable


Also, out-of-the-box, Clara supports clojure.core/type and Clojure’s ad hoc hierarchies if you want to describe derived types.



(defrule demo-rules
  [:my-type [{:keys [x]}] (= ?x x)]
  (insert! ^{:type :another} {:x ?x}))

clojure.core/type will first look in the meta of the object for the :type key. So you can use plain maps in Clara as much as you want this way, but you can also change it to something other than clojure.core/type if you don’t want to do :type meta.
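That lookup order is easy to check directly:

```clojure
;; clojure.core/type prefers :type metadata over the concrete class:
(type ^{:type :another} {:x 1}) ;; => :another
(type {:x 1}) ;; => the map's class, since there is no :type meta
```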


See for more on changing the default type-based dispatch behaviors.


Oh neat, I was going to ask about that


I think it is an easily overlooked feature


It sort of reminds me of multimethods though in how it is extended


Yeah, I can see that.


so you have to import the java class that the defrecord creates (while still also requiring the namespace where the defrecord is created)


and you'll run in to interop issues


e.g. if you run the defrecord again, it creates a new type with the same name, and if you don't recompile the clara rule, it will still be trying to match against the old type
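That failure mode can be reproduced directly at the REPL:

```clojure
;; Re-evaluating a defrecord mints a brand-new class with the same name:
(defrecord Point [x y])
(def p (->Point 1 2))

(defrecord Point [x y]) ;; e.g. re-evaluated during a REPL reload

(instance? Point p) ;; => false: p is an instance of the *old* Point class
```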


You mean in one session or if you're in the repl?


@UANK2VBHA @U0NCTKEV8 Yep, this is a classic example of the need to write “reload safe” code at the REPL. defrecords are not really reload safe. If they are re-evaluated they create a brand new java.lang.Class instance that isn’t considered the same as the previous eval of that same defrecord. There are ways to deal with this. You can do something like conditionally do a defrecord if it doesn’t exist etc. The potemkin library actually had a reference to this problem and a possible solution provided if you were curious more on the subject. I’m not saying to use the lib or not, just that it discusses the non-reload safe nature. However, as I said in the other thread, you don’t have to deal with defrecord/Java types to write Clara rules if that is not useful to your domain.


(although reloading the defrecord is unlikely to happen in the middle of a session, but depending on whatever, who knows)


Yeah I couldn't think of a case where I would reload the record outside of the application being down.


Besides the repl


What I am trying to say is there are some well known stumbling blocks around interop (people often do things like import the java type without requiring the clojure namespace) you may run in to using clara, and this is #beginners


Did not understand the problems with interop


Can you pls elaborate?


@U7ANZ2MTK This is a bad pattern

(ns something)

(defrecord MyRec [x])

(ns another
  (:import [something MyRec]))

The namespace another is not ensuring that the namespace something has been loaded/compiled into memory. This means the Java type something.MyRec may not exist yet.


:import is only syntactic sugar to give you a shorthand way to refer to a class name. It has no compilation side effects. Clojure's :require and :use are different: they may cause "just-in-time" (aka JIT) compilation of Clojure namespaces into memory.


What you need to do to always ensure you are not going to accidentally :import a record Java class is this

(ns something)

(defrecord MyRec [x])

(ns another
  (:require [something])
  (:import [something MyRec]))

Always use :require on every namespace your namespace has any direct dependency on. Anything at all, any var references, any records, protocols, types, or side-effect extensions like defmethod or extend etc.


I'll keep it in mind. Thanks.