This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2017-07-12
Channels
- # aleph (10)
- # beginners (79)
- # boot (81)
- # chestnut (3)
- # cider (9)
- # cljs-dev (336)
- # cljsrn (17)
- # clojure (121)
- # clojure-boston (1)
- # clojure-italy (4)
- # clojure-nl (1)
- # clojure-russia (218)
- # clojure-spec (32)
- # clojure-uk (98)
- # clojurescript (109)
- # cloverage (1)
- # core-async (5)
- # cursive (17)
- # datascript (15)
- # datomic (38)
- # editors (4)
- # emacs (6)
- # graphql (1)
- # hoplon (140)
- # instaparse (1)
- # jobs (2)
- # klipse (1)
- # leiningen (4)
- # lumo (2)
- # mount (103)
- # off-topic (3)
- # om (8)
- # onyx (19)
- # parinfer (32)
- # pedestal (3)
- # precept (32)
- # re-frame (33)
- # reagent (24)
- # remote-jobs (11)
- # rum (1)
- # spacemacs (1)
- # specter (37)
- # unrepl (4)
- # untangled (43)
- # vim (11)
does anyone have experience with clojure.walk/postwalk? it says it walks each form, but sometimes there are strange vectors showing up where there should be maps.. not really sure how to write what I’m trying to write
looking for a generic way to walk a structure and convert something like
{:a 1 :b {:c {:d 2}}}
into
{:a 1 :b {:c {:d 2 :id :c} :id :b}}
@lwhorton hash-maps are made of two element vectors (entries) - you can see it if you call seq on one
technically they are clojure.lang.MapEntry instance, and you can check for it via map-entry?
@U050SC7SV they will not be MapEntry instances when processed in postwalk, postwalk returns vectors
and vector? returns true for instances of MapEntry
now, there are definitely cases where I changed my algorithm so I could use prewalk instead just because prewalk gives you MapEntry instead of vector
you can do that transform above by checking for hash-maps inside each hash-map, and updating them to include their id key before returning it
so your conditional would check if the arg was a map, then if it is check for maps in vals of the map, and update those vals
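[For reference, a sketch of the transform described above using postwalk. The helper name tag-child-maps is made up; this is one way to do it, not the only way:]

```clojure
(require '[clojure.walk :as walk])

;; for every map, tag any map-valued entry with :id = the key it sits under;
;; everything else (including the vectors postwalk makes from map entries)
;; passes through untouched
(defn tag-child-maps [form]
  (if (map? form)
    (into {}
          (map (fn [[k v]]
                 (if (map? v) [k (assoc v :id k)] [k v])))
          form)
    form))

(walk/postwalk tag-child-maps {:a 1 :b {:c {:d 2}}})
;; => {:a 1, :b {:c {:d 2, :id :c}, :id :b}}
```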
I have a stupid question. I want to make a project with a different project as a foundation - is there a simple way to get Lein to do that so I don't have to manually edit a bunch of files? Sorry if there's something obvious, Google did not yield results
why not just recursively copy the old project then edit?
it's possible to make your own template, but that seems like overkill to make one project
the project has a bunch of files, and I'm messing with something new (importing React components into Reagent), so mostly I just wanted to avoid the risk of forgetting something.
thanks @noisesmith ill work on that
I have defined a logger format in log4j.properties, and the file is kept inside the resources folder of my Clojure API. But the logs are not formatted as specified in log4j.properties; they show up on the console like this: [qtp681891967-13] INFO clojure-dauble-business-api.core - Function begins from here
I also specified the log4j.properties path in project.clj with this line: :profiles {:dev {:resource-paths ["resources"]}}
As per log4j.properties, logs should be appended to the app.log file, but that is not happening. Where am I going wrong? Can anybody help me out?
I figured out the solution to the logging issue: the dependency [org.slf4j/slf4j-simple "1.7.12"] was overriding the log4j.properties configuration.
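[For reference, one way to stop a conflicting SLF4J binding from being pulled in is an :exclusions entry in project.clj. The coordinates below are illustrative, not from the original project:]

```
;; illustrative project.clj excerpt: exclude slf4j-simple from whichever
;; dependency drags it in, so the log4j.properties configuration wins
:dependencies [[some.group/some-lib "1.0.0"
                :exclusions [org.slf4j/slf4j-simple]]]
```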
i've been trying for a couple days to resolve this problem when i run lein repl: Error loading clojure.tools.nrepl.server: Could not locate clojure/tools/nrepl/server__init.class or clojure/tools/nrepl/server.clj on classpath
it's due to one of those sticky situations in clojure where sometimes building a project depends on the actual editor used by the developer. give the project to another developer with a different IDE and it blows up
i think it's the only language i've used where a project's build process can be coupled with the code editor
@hmaurer @rauh i managed to resolve the problem by installing and using cider. not my preference but it's a quick solution for now
You were simply missing a dependency on nrepl. Cider jack in injects a few repl only dependencies. Not all editors do this.
are environment variables cached somehow? I removed one that was set in my project.clj, and instead put it in a separate profiles.clj. Even after a lein clean and a restart of cider, calling the environ function to read an environment variable reveals the old value that was removed.
@ajs not sure how that's working but i can wager a guess. Emacs starts up with an environment and its possible that this process, since it is spawning the others, is preserving that environment you started with. perhaps restart emacs and see if the environment is updated then
Is there a function to process the values of a map and get back a map with the original keys and updated values? A mapmap
function? I find myself doing this a lot and use reduce like:
(defn process-map
[m]
(reduce #(let [v (get m %2)
new-v (some-processing-here v)] (assoc %1 %2 new-v)) {} (keys m)))
but seems like this is a useful method to have built in. There's the map-vals
function from https://github.com/weavejester/medley
wouldn't merge-with suffice for this? possibly a bit obscure looking:
(merge-with (fn [_ v] (some-processing v)) m m)
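[A minimal map-vals along the lines of the medley function could look like this. This is a sketch, not medley's actual implementation:]

```clojure
;; apply f to every value of the map, keeping the keys
(defn map-vals [f m]
  (into {} (map (fn [[k v]] [k (f v)])) m))

(map-vals inc {:a 1 :b 2})
;; => {:a 2, :b 3}
```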
I have an atom with a vector with an map, how would I update the value :done of one item?
[{:id "1", :title "Learn Clojure", :done "active"} {:id "2", :title "Learn ClojureScript", :done "active"}]
@shidima_ again, with Specter: (setval [ATOM ALL #(= (:id %) your-id) :done] :your-value your-atom)
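[For comparison, a plain-Clojure sketch without Specter, assuming items are matched by :id; the names todos and set-done! are made up:]

```clojure
(def todos
  (atom [{:id "1", :title "Learn Clojure", :done "active"}
         {:id "2", :title "Learn ClojureScript", :done "active"}]))

;; walk the vector inside a swap!, updating :done on the matching item
(defn set-done! [id new-status]
  (swap! todos
         (fn [items]
           (mapv #(if (= (:id %) id)
                    (assoc % :done new-status)
                    %)
                 items))))

(set-done! "1" "done")
(:done (first @todos))
;; => "done"
```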
I’d like a version of empty?
that doesn’t throw on numbers (etc), but simply informs me that, no, the number 9
(for example) is not empty, whereas ""
is empty. What’s the least unintelligent way to achieve this?
Basically a protocol that dispatches on type. Seqs and strings, return empty, java.lang.Object: return false.
True, but perhaps better than straining the input to empty?
through a protocol and hoping that I’m not straining too aggressively. Or not aggressively enough.
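[A sketch of that protocol idea: seqables and strings defer to empty?, nil is empty, and any other Object is simply not empty. The names Emptiable and emptiable? are made up:]

```clojure
(defprotocol Emptiable
  (emptiable? [x]))

(extend-protocol Emptiable
  nil
  (emptiable? [_] true)
  String
  (emptiable? [s] (empty? s))
  clojure.lang.Seqable
  (emptiable? [coll] (empty? coll))
  Object
  (emptiable? [_] false))

(emptiable? 9)    ;; => false
(emptiable? "")   ;; => true
(emptiable? [1])  ;; => false
```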
@ghadi I’m writing a library where I won’t have control over how the JVM is started up. Does shimdandy still work there? From the readme it looks like it wont
Boot uses shimdandy
I’m working on https://github.com/arohner/spectrum. Spectrum uses tools.analyzer. Tools.analyzer uses eval, which breaks defrecords and protocols when files are reloaded. I’m looking for a way to isolate the code reloading
AIUI, it’s compiler.java that’s creating new classes via eval, so it’s the compiler’s classloader that matters?
how is one supposed to use loop/recur
to reduce a nested collection?
given {:children {1 {:children {3 {:children {}} 2 {:children {}}}}}}}
(defn recurse-children [acc root]
  (reduce (fn [a node]
            (if (empty? (:children node))
              (do-accumulation a node)
              (map #(recurse-children a %) (:children node))))
          acc
          (:children root)))
^ this is what I want to do, but I know that will result in an overflow
I think if we provided a custom classloader that extends DynamicClassLoader we could be able to isolate the evaluation that happens through tools.analyzer and the clojure runtime
except if I try to rewrite that in loop/recur
, i will end up with some sort of (map #(recur acc %) (:children node))
at some point, which I know isn’t valid either
@bronsa I’m happy to work on it now, but I’m not following yet. which behavior in the CL needs to change?
@lwhorton it can be done breadth first if you add an extra accumulator for nested children, so you process items if leaves, or append to the accumulator if branches, then recur on the vector of branches
DCL has a map that goes classname -> class, and that's how the Compiler resolves classes, if analyze+eval did evaluation using a different classloader using a different class cache, it might not impact the regular evaluation context (so no redefining of defrecords)
@lwhorton that said, the result often looks weird enough that it’s worth just using non-optimized recursion until you know you have inputs too big for that
as always, a big help @noisesmith .. don't think I’ll hit that limit for quite some time
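[A loop/recur version of the breadth-first idea described above might look like the sketch below. do-accumulation here is a stand-in that just counts leaf nodes; substitute your own reducer:]

```clojure
;; stand-in accumulator: count leaves
(defn do-accumulation [acc node]
  (inc acc))

;; breadth-first, stack-safe traversal: accumulate leaves, queue up branches
(defn recurse-children [acc root]
  (loop [acc acc
         queue [root]]
    (if (empty? queue)
      acc
      (let [node (first queue)
            children (vals (:children node))]
        (recur (if (empty? children)
                 (do-accumulation acc node)
                 acc)
               (into (subvec queue 1) children))))))

(recurse-children 0 {:children {1 {:children {3 {:children {}}
                                              2 {:children {}}}}}})
;; => 2 (two leaf nodes)
```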
The most recent thing I tried doing was 1) grabbing the current classpath. 2) creating a new URLClassLoader using the current classpath URLs, but setting the parent classloader to (.getParent (.getClassLoader clojure.lang.RT))
. 3) Using classlojure to eval-in
in that supposedly isolated classloader.
ideally t.a.jvm/analyze-ns shouldn't behave differently than normal clojure namespace reloading
I’m also trying to get analysis for e.g. clojure.core, so normal reloading is not good enough
I've cried many many tears trying to make that the case and I thought that's how it behaved now
hm, deftypes might get corrupted as t.a.jvm does an internal eval to set up reflection contexts :( so not as easy as just not using eval
I have to go now but I'll think about it. I'm sure there's a way to make this less painful
@aymat316 the easy way is to install that jar in your local cache (which is lein install
for lein projects) then you can just add it to your deps
though when you deploy you’ll probably need to make sure the deployed jar can find the artifact
which might mean deploying the jar to clojars (for open source you want to share) or might mean building an uberjar on your local machine or hosting a secure private maven repo of your own
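[Concretely, the lein install workflow looks like this; the coordinates are illustrative:]

```
cd path/to/the-library-project
lein install    # builds the jar and installs it into your local ~/.m2

# then in the consuming project's project.clj:
#   :dependencies [[some.group/some-lib "0.1.0"]]
```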
In Datomic, I can test my queries on an inline datastructure, like (d/q '[:find ?e :in ...] [[123 :att :val] [456 :att :val]])
Does something similar exist for SQL tables, so that I do not have to spawn a real database for integration tests?
there are file-backed and memory-backed SQL databases, but none of them are entirely compatible with the popular client/server SQL DBs
generally, the way I deal with wanting to run things entirely in memory is to create a protocol that represents a set of queries, then have some kind of in-memory store that satisfies that protocol, along with something that satisfies the protocol by storing in a database, and swap one for the other in tests or whatever
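[A sketch of that protocol approach: one protocol, an in-memory implementation for tests, and a SQL-backed record (not shown) that would satisfy the same protocol in production. All names here are made up:]

```clojure
(defprotocol UserStore
  (find-user [store id])
  (save-user [store user]))

;; test implementation: an atom-backed map keyed by :id
(defrecord InMemoryUserStore [state]
  UserStore
  (find-user [_ id] (get @state id))
  (save-user [_ user]
    (swap! state assoc (:id user) user)
    user))

(def store (->InMemoryUserStore (atom {})))
(save-user store {:id 1 :name "Ada"})
(find-user store 1)
;; => {:id 1, :name "Ada"}
```

In tests you construct the in-memory record; in production you construct the SQL-backed one; the calling code only sees the protocol.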
@grav did you try the code sample you shared? it should work 🙂
@grav http://www.h2database.com/html/main.html is an in-memory Java SQL DB
@grav there tend to be enough vendor specific differences between SQL databases that @hiredman’s solution is the only one i’ve had success with
worked on a codebase that tried to do sqlite tests with postgres prod and bugs persistently crept in
You go to production with the isolation level you have, not the isolation level you want to have.
@grav docker compose offers a decent way of running integration tests against a throwaway copy of your actual db