This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2020-06-03
Channels
- # babashka (17)
- # beginners (166)
- # calva (97)
- # cider (4)
- # clara (2)
- # clj-kondo (46)
- # cljsrn (5)
- # clojure (334)
- # clojure-canada (1)
- # clojure-dev (144)
- # clojure-europe (14)
- # clojure-germany (5)
- # clojure-nl (10)
- # clojure-spec (1)
- # clojure-uk (46)
- # clojurescript (50)
- # conjure (1)
- # core-async (52)
- # core-typed (5)
- # cursive (3)
- # datomic (3)
- # emacs (11)
- # figwheel (16)
- # figwheel-main (9)
- # fulcro (29)
- # graalvm (19)
- # graphql (14)
- # helix (46)
- # hoplon (4)
- # hugsql (2)
- # jobs (2)
- # jobs-discuss (1)
- # juxt (15)
- # kaocha (6)
- # off-topic (9)
- # pedestal (7)
- # portkey (7)
- # re-frame (10)
- # reagent (29)
- # shadow-cljs (13)
- # spacemacs (70)
- # sql (13)
- # tools-deps (26)
- # xtdb (23)
Dear all, I have a Ring server. When I run it with java -jar target/kimim-standalone.jar it works fine, but if I build it in Docker it reports this error: Exception in thread "main" Syntax error compiling at (core.clj:109:7).
If it's an uberjar and AOT'd, I wouldn't expect it to be doing any compilation when it runs...
Can you provide more details about the error @U011J2CQT0F?
# docker logs incanter
2020-06-03 04:11:50.245:INFO::main: Logging initialized @4281ms
WARNING: read already refers to: #'clojure.core/read in namespace: km.core, being replaced by: #'clojure.data.json/read
Exception in thread "main" Syntax error compiling at (core.clj:109:7).
    at clojure.lang.Compiler$InvokeExpr.eval(Compiler.java:3707)
    at clojure.lang.Compiler$InvokeExpr.eval(Compiler.java:3701)
    at clojure.lang.Compiler$InvokeExpr.eval(Compiler.java:3701)
    at clojure.lang.Compiler$DefExpr.eval(Compiler.java:457)
    at clojure.lang.Compiler.eval(Compiler.java:7181)
    at clojure.lang.Compiler.load(Compiler.java:7635)
    at clojure.lang.RT.loadResourceScript(RT.java:381)
    at clojure.lang.RT.loadResourceScript(RT.java:372)
    at clojure.lang.RT.load(RT.java:463)
    at clojure.lang.RT.load(RT.java:428)
    at clojure.core$load$fn__6824.invoke(core.clj:6126)
    at clojure.core$load.invokeStatic(core.clj:6125)
    at clojure.core$load.doInvoke(core.clj:6109)
    at clojure.lang.RestFn.invoke(RestFn.java:408)
    at clojure.core$load_one.invokeStatic(core.clj:5908)
    at clojure.core$load_one.invoke(core.clj:5903)
    at clojure.core$load_lib$fn__6765.invoke(core.clj:5948)
    at clojure.core$load_lib.invokeStatic(core.clj:5947)
    at clojure.core$load_lib.doInvoke(core.clj:5928)
    at clojure.lang.RestFn.applyTo(RestFn.java:142)
    at clojure.core$apply.invokeStatic(core.clj:667)
    at clojure.core$load_libs.invokeStatic(core.clj:5985)
    at clojure.core$load_libs.doInvoke(core.clj:5969)
    at clojure.lang.RestFn.applyTo(RestFn.java:137)
    at clojure.core$apply.invokeStatic(core.clj:667)
    at clojure.core$require.invokeStatic(core.clj:6007)
    at clojure.core$require.doInvoke(core.clj:6007)
    at clojure.lang.RestFn.invoke(RestFn.java:408)
    at ring.server.leiningen$load_var.invokeStatic(leiningen.clj:7)
    at ring.server.leiningen$load_var.invoke(leiningen.clj:5)
    at ring.server.leiningen$get_handler.invokeStatic(leiningen.clj:14)
    at ring.server.leiningen$get_handler.invoke(leiningen.clj:10)
    at ring.server.leiningen$serve.invokeStatic(leiningen.clj:20)
    at ring.server.leiningen$serve.invoke(leiningen.clj:16)
    at clojure.lang.Var.invoke(Var.java:384)
    at km.core.main$_main.invokeStatic(main.clj:1)
    at km.core.main$_main.invoke(main.clj:1)
    at clojure.lang.AFn.applyToHelper(AFn.java:152)
    at clojure.lang.AFn.applyTo(AFn.java:144)
    at km.core.main.main(Unknown Source)
Caused by: java.lang.Exception: Directory does not exist: resources
    at ring.middleware.file$ensure_dir.invokeStatic(file.clj:17)
    at ring.middleware.file$ensure_dir.invoke(file.clj:12)
    at ring.middleware.file$wrap_file.invokeStatic(file.clj:72)
    at ring.middleware.file$wrap_file.invoke(file.clj:59)
    at ring.middleware.file$wrap_file.invokeStatic(file.clj:70)
    at ring.middleware.file$wrap_file.invoke(file.clj:59)
    at clojure.lang.AFn.applyToHelper(AFn.java:156)
    at clojure.lang.AFn.applyTo(AFn.java:144)
    at clojure.lang.Compiler$InvokeExpr.eval(Compiler.java:3702)
    ... 39 more
The interesting point is that I can run it with java -jar directly, but when wrapped in Docker, it fails.
Exception in thread "main" Syntax error compiling at (core.clj:109:7).
And the place of the error in the code is as the picture shows:
Well, there's your error: Caused by: java.lang.Exception: Directory does not exist: resources
The stack trace shows that the wrap-file middleware for Ring is expecting a resources folder to be present. I suspect it is present locally when you run the JAR, but you haven't built it into the JAR file, so when it is run inside Docker there is no resources folder.
Yes, I understand now. The resources directory is not there in Docker; it is only under my local project. Then another question: how do I change this line of code to use the files from the standalone JAR?
I don't know how you are building your JAR, but I would have expected resources to be included in it?
Does it have any files in it (locally)?
Yes, I can find the files included in the JAR. It seems it cannot locate "resources" from the JAR.
Hmm, I just tried to build a lein uberjar with an empty resources folder and realized that it normally puts the contents of resources into the JAR, not the directory itself, so the code would look for files directly, not with the resources prefix.
Ah, how are you calling wrap-file in your middleware? I think that might be the issue...
The root-path argument to wrap-file is intended to be relative to the classpath -- but the classpath includes the contents of both src and resources, not the folders themselves. The expectation is that you have a folder inside resources, such as public, and put your (static) files inside that, then call (wrap-file "public"), i.e., naming a folder inside resources.
So maybe I need to call it twice:
(wrap-file "resources")
(wrap-file "public")
Here "public" is the only folder in my resources. This way, it may work both locally and in Docker.
Remove the (wrap-file "resources") call -- that is what is failing inside Docker.
If resources is on your classpath (locally -- and it should be), then (wrap-file "public") is correct -- both locally and in Docker.
(FWIW, this confused me the first time I tried to work with static files in Ring)
Now it cannot find public locally: Caused by: java.lang.Exception: Directory does not exist: public
That says you don't have resources on your classpath locally -- so that needs fixing.
Does it now work inside Docker?
Docker shows the same exception. Though public is included in JAR:
drwxrwxrwx 0 3-Jun-2020 12:04:24 public/
drwxrwxrwx 0 1-Jun-2020 17:36:10 public/css/
-rw-rw-rw- 160 1-Jun-2020 18:00:34 public/css/main.css
-rw-rw-rw- 522 1-Jun-2020 18:02:00 public/index.html
drwxrwxrwx 0 3-Jun-2020 12:04:34 public/js/
I found one solution, that copy all the resources files to docker:
RUN mkdir -p resources
COPY resources/* resources/
Now lein ring server-headless, java -jar xxxx.jar, and Docker can all provide the service...
OK, I went and checked our code and also ring-defaults. wrap-file seems to require actual files on disk -- and therefore a directory outside the JAR; wrap-resource works with the classpath and files inside the JAR.
So I think if you switch from wrap-file to wrap-resource, your problems will go away.
Yes, @U04V70XH6, when using wrap-resource, both local and Docker work fine with files inside the resources folder. Thank you.
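For illustration, a minimal sketch of the suggested fix (the namespace and handler here are hypothetical placeholders, not the original project's code): wrap-resource resolves its argument against the classpath, so "public" finds resources/public when running locally and public/ inside the uberjar.

```clojure
(ns km.example
  (:require [ring.middleware.resource :refer [wrap-resource]]
            [ring.middleware.content-type :refer [wrap-content-type]]))

;; Placeholder handler standing in for the real application routes.
(defn handler [request]
  {:status 200
   :headers {"Content-Type" "text/plain"}
   :body "hello"})

;; wrap-resource serves static files from the classpath, so "public"
;; resolves both to resources/public locally and to public/ in the JAR.
(def app
  (-> handler
      (wrap-resource "public")
      (wrap-content-type)))
```

Requests that don't match a static file fall through to the wrapped handler, which is why wrap-resource goes outside it in the threading.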
Side note: I am surprised to see ring.server.leiningen in your stack trace; lein ring is meant to be a dev convenience and is not for production.
--> cat posting.clj
(require '[clj-http.client :as client])
(-> (client/post ""
      {:body "Hello world"})
    :body
    println)
--> clj -Sdeps '{:deps {clj-http {:mvn/version "3.10.1"}}}' posting.clj
{"args":{},"data":"Hello world","files":{},"form":{},"headers":{"x-forwarded-proto":"https","x-forwarded-port":"443","host":"","x-amzn-trace-id":"Root=1-5ed75a83-2024ebd14f608aa9fc8f04a3","content-length":"11","accept-encoding":"gzip, deflate","content-type":"text/plain; charset=UTF-8","user-agent":"Apache-HttpClient/4.5.10 (Java/11.0.7)"},"json":null,"url":""}
This line executes the file posting.clj with the additional dependency clj-http (which will be fetched automatically):
clj -Sdeps '{:deps {clj-http {:mvn/version "3.10.1"}}}' posting.clj
Like @U2FRKM4TW says, it's not possible to tell since you omitted the code around it. Now, there's a chance you lifted this from an argument list in a function call, in which case this is probably setting the key :params of a destructured options map to {:name "John"}.
I’m getting an error on this part. Here is the full code:
(ns twitter-mute.core
  (:require [twttr.api :as api]
            [twttr.auth :refer [env->UserCredentials]]))

(def creds (env->UserCredentials))

; get following list
; loop through it
; mute each user

(api/mutes-users-create
  creds :params {:screen_name "jack"})
The error is this
Evaluating file: core.clj
Syntax error (NullPointerException) compiling at (src/..../core.clj:7:1).
null
Evaluation of file core.clj failed: class clojure.lang.Compiler$CompilerException
"syntax error" actually means something blew up while loading the file
since you have two top level side effects (constructing the creds, mutes-users-creat) it's ambiguous which one had the error
sounds like the creds might not have something in them the code needs - try printing the creds object
also, if you wrap the api call in a function, it will be easier to iterate
I mean "write code and debug"
top level side effects make that harder, and are almost never appropriate in finished code
I made an issue here by the way, in case that offers any clarity: https://github.com/chbrown/twttr/issues/10
you'd want defn rather than fn, but yes
the root cause here is almost certainly that env->UserCredentials needs some data from the env that isn't present
most people use a plugin (e.g. lein-env) to propagate it on startup
the jvm doesn't have a built-in portable way to set process env vars, so it's usually set up outside the process by tooling
So you’re saying the best option is to get lein-env? I’m learning Clojure and would like to do as minimal an install as possible
lein-env is a plugin you can add to a project
the other alternative is some startup script that sets the env vars based on the .env file
also, as a more root design issue, nothing in clojure apps should rely directly on environment variables, since you can't easily set them in a running repl, there should be some second mechanism that lets you configure them inside the running process (that's an opinion of course, but one won through years of practice)
see for example juxt/aero, which gives a syntax for grabbing things from the environment but also lets you specify the values arbitrarily without restarting https://github.com/juxt/aero - particularly useful when combined with integrant https://github.com/weavejester/integrant
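To make the aero suggestion concrete, here is a minimal sketch (the config file contents and key names are assumptions for illustration): the #env reader tag pulls a value from the environment, and #or supplies a fallback so the app still configures itself in a REPL where the env var isn't set.

```clojure
(require '[aero.core :as aero])

;; Write an assumed example config file. #env reads an environment
;; variable; #or falls back to the next value when it is unset.
(spit "config.edn"
      "{:twitter {:consumer-key #or [#env TWITTER_CONSUMER_KEY \"dev-key\"]}}")

(def config (aero/read-config "config.edn"))

(get-in config [:twitter :consumer-key])
;; value of TWITTER_CONSUMER_KEY, or "dev-key" when it is unset
```

Because the config is just data read at a well-defined point, you can re-read it with different values inside a running process instead of relying on process env vars alone.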
I am dealing with a very large (largest I have seen) clj codebase, counting 40k+ lines. It is very difficult to navigate around, and I believe I have already asked earlier about stateful "components" and usage of atoms, but I am also struggling with some lower-level code.

As most code in Clojure follows this pipeline architecture, when you start processing something it's usually difficult to have reasonable assumptions in the deeper parts (especially if they interact with some Java code). Something starts as a map and gets transformed; further down the line it is difficult to reason about the shape of the data. Currently I am looking at clojure.spec with high hopes of it giving some clarity - what are your experiences?

Also, I saw a lot of a pattern where you have some general map, let's call it "element", and you pass it down as-is through your call stack so that each function can extract whatever it needs as a "prelude" to the actual op - I am currently unsure if it's so great, but when this pattern is no longer present I struggle even more. Generally I have a hard time understanding the expectations of dependent functions and usually have to investigate code at multiple levels to have a reasonable dose of confidence that what I have provided should be OK.

I am asking for general suggestions and tips - working with a large codebase means I won't be able to apply things willy-nilly, so the more general and gradual the suggestion the better 🙂 and also, how's spec? :)
Spec is fine. But IMO documentation and examples are much better for clarity, if that's clarity alone that you're seeking. Start from the lowest level and try to document each function and provide a small example for it to grasp the meaning better. Unless some functions do something incomprehensible and ineffable as an example, you should be fine.
I suppose. The main problem for me is the shape of the data, in some descriptive fashion. Because the term familiar in this community, "place-oriented programming", is happening very often via (let [name (first (:some-val x))] ...) without any semantics as to what this data represents.
There isn't a silver bullet answer to this. But some things I find useful dealing with similar problems:
• Documentation and spec can help with understanding function requirements.
• Asserts can help with debugging and the development workflow. I personally have noticed these are tragically missing from many Clojure libraries and applications, so when you get it wrong you are punished with an obscure error.
• Tests can notify you of breaking changes to the requirements of functions.
• Removal of nesting and unnecessary structure lets you focus on the keys and their relationships.
• In addition to the above, I find using sets for structure and set logic for composition of structure much easier to reason about with open domains than nested structures. When structure obviously makes the code simpler, ignore this. Nothing is always a good idea.
If it's not already happening, namespacing keys can of course help with understanding semantics and being able to lean on tools like spec.
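A small sketch of the spec-plus-namespaced-keys idea (the :element/* specs here are hypothetical, standing in for the "element" map from the discussion): the namespaced keys make the shape self-describing, and s/keys documents what a function expects.

```clojure
(require '[clojure.spec.alpha :as s])

;; Hypothetical specs for the "element" map discussed above.
(s/def :element/id uuid?)
(s/def :element/name string?)
(s/def :element/tags (s/coll-of keyword? :kind set?))
(s/def :app/element (s/keys :req [:element/id :element/name]
                            :opt [:element/tags]))

(s/valid? :app/element {:element/id (java.util.UUID/randomUUID)
                        :element/name "widget"})
;; => true

;; A function contract can be documented (and instrumented) with fdef:
(defn process-element [el] (:element/name el))
(s/fdef process-element :args (s/cat :el :app/element))
```

With specs registered, s/explain gives a readable description of why a malformed map fails, which addresses the "what shape is this data?" question directly.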
alas, assert throws an Error instead of Exception 😢
Sometimes too many (low impact) little functions can also be a problem. Better to have one place to understand the requirements for a 'unit of useful work' than many. It's hard to write good functions; one bad function is better than 100 bad functions.
@USGKE8RS7 not sure what the issue is here, to me the difference is sound - i.e your fault or not. If you wanna catch all throwable things just catch throwable?
Check out malli: https://github.com/metosin/malli Using this on my current site and it's a lot easier to get going with.
It has human error messages built in, it's been enormously helpful for me to debug production logs as to why something might be broken.
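To show what "human error messages" means here, a minimal malli sketch (the User schema is a made-up example): schemas are plain data, and malli.error/humanize turns an explain result into readable messages suitable for logs.

```clojure
(require '[malli.core :as m]
         '[malli.error :as me])

;; Hypothetical schema; malli schemas are just data.
(def User
  [:map
   [:name string?]
   [:age [:int {:min 0}]]])

(m/validate User {:name "Ada" :age 36})
;; => true

;; Human-readable explanations, handy for production logs:
(me/humanize (m/explain User {:name "Ada" :age -1}))
;; returns a map of readable error strings under :age
```

Compared to a raw explain data structure, the humanized form is what you would typically attach to a log line when a payload fails validation.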
So I'm typically an Emacs user BUT ... for situations like this I reach for IntelliJ+Cursive, because it has really excellent code navigation / jump-to-source and excellent step-through debugging. I don't usually reach for them except when I'm completely lost on large projects. Just thought I'd share a point I didn't see otherwise mentioned.
Emacs can do all that.
An attempt to make sense of this mess. Rectangle=clojure lib, diamond=logging facade, round=logging framework. ACL=apache commons logging (previously JCL jakarta commons logging), JUL=java.util.logging
Would you please make a blog post about the mess? (Similar to what you did with dates)
Well, depending on how deep you want to go, there’s also https://github.com/fzakaria/slf4j-timbre
Can’t forget https://github.com/stuartsierra/log.dev, either.
In that case, https://github.com/BrunoBonacci/mulog 😛
omg, it is endless
Not going to include log.dev as it's just wiring up slf4j+logback, but it's good to know about as that's basically the setup I would recommend, wrapped in a nice package.
mu-log is interesting, in an "après moi, le déluge" let's-rebuild-the-world-from-scratch kind of way. Also it uses unicode symbols (or at least recommends doing so), which works but is not officially supported.
it's an SLF4J Logger that would replace logback in that diagram @U07FP7QJ0
interesting! Seems there aren't a lot of libs that implement an SLF4J backend directly. Lots of options for piping it into other frameworks though. What's the motivation for writing your own?
I think I'm probably in a "don't do that, do this" position. I am trying to maintain a counter for the # of subscribers. When I transition from 0->pos? I want to register a side effect (listen in Postgres & start a hot loop to poll). Once the subscriber count is back to 0, I'd like the loop to end (easily achieved via a sleep & interrupt in my case). It gets more tricky when I start adding failure & recovery to my loop though. It means I need to be able to "stop the world" and prevent any updates to the state while getting the # of subscribed listeners, otherwise I will listen in Postgres for channels that are no longer wanted. My current idea uses a thread & core.async to do RPC, to allow for blocking until the listen has been added (can't perform the task that will trigger the notification until the listen is added). I hate it. What should I do instead?
what about add-watch on an atom holding the number of subscribers, which turns the polling on/off?
often you'd hold more than just a subscriber count (even a map of subscriber id to ancillary data) but a count should be derivable
if the goal is that only one pending task ever exist against the db, you could use an agent to coordinate, and send it actions which are implicitly queued
(that is, all agent actions are queued, it doesn't retry or optimistically execute, so agents are good for wrappping stateful external resources)
The problem with an agent is that you can't wait for the action to complete, to know that you're able to continue.
if you can lift into something async or a callback, you can do it via await, which is effectively a callback agent action that doesn't change the data, just wakes your code up when everything before it in the queue (at the time of the await call) is finished
The watch would still need to communicate with the polling thread. Also, the polling thread needs to be aware of the subscribers in order to reconnect when the connection fails, but it needs to stop any swaps from happening during that time
sure, with an agent you don't have swap period
Yeah, I think I worried that await would block forever & was a bit racey. But I'm not sure that's true.
await is an agent action via send-off, it doesn't wait for all actions, just implicitly, via the queue, on the ones that were sent before it
so if you do (send a f) then (await a), you have the guarantee that f has already been called by the time await returns
the tricky bit is the error state, that's the part of agents I don't like
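The send/await ordering guarantee being discussed can be sketched in a few lines (a toy counter, not the subscriber state from the thread): by the time await returns, every action sent before it has been applied.

```clojure
;; Minimal demonstration of the agent send/await ordering guarantee.
(def counter (agent 0))

(dotimes [_ 5]
  (send counter inc))

;; Blocks until the five incs sent above have all run.
(await counter)

@counter
;; => 5
```

If an action throws, the agent enters a failed state and subsequent sends throw until agent-error is cleared with restart-agent, which is the error-handling wart mentioned above.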
you can simulate an agent in core.async with a go-loop that reads a channel and executes a thread and parks on that thread for each input
Hmm, yes. Although in my case we're really talking about a thread anyway, so error state is already bad.
Yeah, that's basically what I've implemented. Although loops don't play nice with sql exceptions.
(go-loop [state []]
  (let [msg (<! c)
        result (<! (async/thread (f state msg)))]
    (recur result)))
the basic skeleton I guess
right - exceptions are the gotcha all around, async makes managing them more complex period
the problem with that go-loop is that it has no await, so I was custom implementing that to allow subscribers to know when their msg had been processed.
you could implement await via a second chan: poll on that and the query chan, and if the second chan is hit, expect a result chan and just >! true onto it
that gives the same semantics - a special kind of action that just wakes you up and implicitly lets you know anything before it is done
My poll has a timeout, so using (async/timeout 200) will trigger the loop and then restart, or an action will come in
another option is to attach a "result chan" or "result promise" to each incoming thing, and ensure all queriers use that
Hmm, now I think about the fact that if I used an agent I'd need to somehow do the polling itself somewhere else still, and manage state updates to that thing… core.async sounds more appealing.
instead of the explicit guarantee of the agent serializing actions, you rely on the single go-loop managing all the state in its closed over context
but those end up being identical if done properly
OK. I think I've got a potentially easier option to explore with this agent thing, time to REPL it a bit and see where I get.
This would be lovely and immutable if catch/recur were friends:
(loop [conn (jdbc/get-connection db-spec)]
  (try
    (.getNotifications conn)
    (catch SQLException _
      (.close conn)
      (Thread/sleep 200)
      (recur (jdbc/get-connection db-spec)))))
(loop [conn (jdbc/get-connection db-spec)]
  (let [notifications (try .... (catch SQLException _ (.close conn) ::fail))]
    (if (= notifications ::fail)
      (recur (do (Thread/sleep 200)
                 (jdbc/get-connection db-spec)))
      ...)))
One problem is that the mutable state actually needs to be accessible to the agent so it can trigger any listens/unlistens as appropriate.
so notifications would be sent along with some function to the agent?
so the second arm of the branch could be (send-off a conn notifications)
So maybe recovery in the thread happens by sending off to the agent to re-establish the connection…
you know via lexical scope that nobody else touches that particular conn before the agent does
you could also store the connection in the agent itself, and recover connection via the registered agent error handler
This is difficult because it's coordinating 2 threads pretty much, and they both need access to shared state.
first off, is it possible for them to use separate connections?
if not, I'd either supply the connection to the agent as part of the action (knowing nothing else touches it before the agent gets to it), or put the connection in the agent and ensure the error handler and sent function know how to manage it
but separate connections is simpler
in that case I'd be inclined to pass the connection + the notifications as args to the agent send
the rest of the system can deref the agent to see the up-to-date state of data, or use send to interact with the notification system, and the loop (in a future most likely) would periodically update the notifications and connection by sending to the agent
Oh, and the thread owns the connection… I didn't like that at first, but I think that makes sense… ooh.
right, I think the thread (aka the loop right?) is the natural owner of the connection
that loop could also deref the agent to look for operations that should use the connection, before polling / sending
oh - I made some assumption that other code would want to read / access this stuff
if it can be self contained, you don't even need an agent, it can all be in the one loop
yeah, my instinct is exposure as data / observability but you could just fire a callback in a new thread and skip the agent thing
I wouldn't callback in the same loop that polls - putting the callback invocation in a future is safer, decomplects things
(doseq [n ns] (future (cb n))) vs (doseq [n ns] (cb n)) feels basically the same :thinking_face:
just make sure if anybody cares about the callback erroring that the right try/catch picks that up, because like any async mechanism the errors can otherwise drop on the floor silently
the difference is that cb can slow down or halt your polling
and to address that you now need monitoring / restarts etc.
futures isolate the failure to the function run in the future, the caller can try/catch if failures matter
it puts the responsibility for not being brittle in a better place
Or I could just assume good citizens (I'm pretty much just gonna use this for putting onto a channel). With a future I have to worry about running out of memory if a lot of them get queued up.
yes, they do - also you can use backpressure to avoid memory errors without having to trust your caller (this matters even when you write all the callers - it's easier to maintain a system where failures push to the edges)
especially async tends to make errors deeper into the infrastructure harder to debug and fix, the more they push to the edges the easier everything gets
(that's my experience with microservices / executor systems at least)
you can replace the future with a fixed size executor pool plus a mechanism to impose bindings
yeah - manifold makes that easy, as does claypoole
I could avoid this issue also by just using a channel instead of a callback as my api
that's true
Btw @noisesmith, your insight here & in general is appreciated. :)
> if it can be self contained, you don't even need an agent, it can all be in the one loop
Well, 2 thoughts. One is that maybe I should create a connection per subscriber. Although with parallel connections that seems like a bad idea, so never mind :). But the loop needs something external to do the actual listen/unlisten, right?
Hmm, another problem with the loop owning the connection is that the agent then can't add a listen until the thread has started up and established a connection for it to use.
> the agent then can't add a listen
my precondition was not needing the agent
The documentation process. The goal is to have a prototype by the end of the week.

I can see several problems related to documentation. When I get to work on big projects that have existed for a long time, I usually spend a lot of energy understanding what is happening, what the main elements are, the domain entities. It might be because documentation gets out of sync with the code, but it could be something else. What I'm going to do is address this problem and produce a prototype by the end of the week. So if you have the time, interest and energy, please contribute with insights. Thank you. https://github.com/JpOnline/Blog/blob/master/documentation_sprint.md
Not sure what kind of input you're looking for. Ideas for improving some Clojure project's documentation? Improving cljdoc?
Sorry, I was expecting to edit the original with questions faster. I see cljdoc as something similar to javadoc; I'm thinking about something that makes the process of documenting easier (it would be nice to have some analyzer also, but I don't have a lot of ideas in this area).
I would start with trying to compile which libraries have great documentation and then try to figure out what those libraries are getting right
I think it's an interesting idea to investigate what tools these people are using. Do you have examples of such libraries or tools?
I updated [my github](https://github.com/JpOnline/Blog/blob/master/documentation_sprint.md) with a solution sketch and some maybe-useful ideas. It would be nice if you let me know what you think about these features and how they can be improved. The main ideas are:
- Using labels to define multiple levels of documentation.
- Running a basic static analysis to identify the functions and where they are called, to generate a diagram where you can filter elements you want to show/hide.
- Running this analysis in any language.
- Warning the dev about missing code referenced in the doc; maybe it was deleted or its name changed.
- Showing usage examples of the code (tests) close to the code they test.
Hey! I had a look, I have a little bit of feedback: 1. are you aware of http://cljdoc.org? it seems it exists in the same space you are exploring, providing tools for creating both high-level and low-level documentation, including [[wikilink]]-style syntax to link between different things 2. what is the actual problem you are trying to solve? you mentioned “problem with documentation” — what is that problem? You also mention that it requires a lot of energy to get into new projects — I think this is inevitable no matter the documentation, because you have to learn the system and how parts of it interact together. Do you think understanding system and its moving parts is about documentation?
I'm aware of http://cljdoc.org. It focuses on a different problem; it's more about API docs, i.e. documentation to facilitate the use of code. The problem I'm trying to mitigate is the difficulty of understanding pre-existing code, usually when you join a new team or just need to work on some part you are not used to. It might be inevitable to spend more energy in these situations, but I think documentation plays a big role in how much energy you'll spend in the end. Documentation is simply information other people left for you about the code; if this information is up to date and right, it will definitely help you. There's a cost to maintaining documentation, and if this cost is higher than the benefits you get from it, you will avoid wasting resources on it. So I'm trying to increase the value by extracting some information automatically, and to decrease the cost by making it clear which parts are out of date.
does anyone have a preference between https://github.com/Olical/depot and https://github.com/slipset/deps-ancient/
I prefer depot, only because it doesn't require leiningen.
KISS principle and all that…
That was my understanding back when I first checked it out, but it might have changed since then.
I stand corrected - it can be used from leiningen.
But otherwise it is standalone.
Though it uses the same https://github.com/xsc/lein-ancient/tree/master/ancient-clj as the lein-ancient plugin.
So I guess I don't have a good reason for choosing depot over deps-ancient after all. 😉
for the sake of experimentation, i'd like to wrap a list of functions (project wide) with an outer logging function without modifying the existing code. is alter-var-root my best option?
yes, the pattern is pretty straightforward:
(alter-var-root #'foo/bar (fn [f] (fn [& args] (report args) (apply f args))))
slightly more complex if you also want to access e.g. the name of the function in the wrapper, but still doable
user=> (alter-var-root #'clojure.string/join (fn [f] (fn [& args] (println "joining" args) (apply f args))))
#object[user$eval215$fn__216$fn__217 0xce9b9a9 "user$eval215$fn__216$fn__217@ce9b9a9"]
user=> (clojure.string/join ", " [:a :b :c])
joining (, [:a :b :c])
":a, :b, :c"
there are some gotchas to it, but robert.hooke uses "hooks" for this sort of thing https://github.com/technomancy/robert-hooke
• fixed link
i think there was another library for heavy namespace manipulation (possibly quite old)? but i can't remember the name
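For comparison with the raw alter-var-root pattern, a small robert.hooke sketch (the greet function is a made-up example): add-hook passes the original function as the first argument to your hook, so a logging wrapper stays a one-liner and hooks can later be removed.

```clojure
(require '[robert.hooke :refer [add-hook]])

(defn greet [who] (str "hello, " who))

;; A hook receives the original function followed by its arguments,
;; so a logging wrapper is just:
(add-hook #'greet
          (fn [f & args]
            (println "calling greet with" args)
            (apply f args)))

(greet "world")
;; prints the log line, then returns "hello, world"
```

Unlike hand-rolled alter-var-root wrapping, hooks compose: several hooks can be stacked on one var and stripped off again without restoring the original by hand.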
Is there a way to set the dependencies cache location for clojure cli tools to something other than $HOME/.m2/repository globally? As in, via an environment variable?
I wouldn't be surprised if it respected the env var maven uses (via the underlying lib) M2_HOME
hello everyone, is anyone aware of a lib to deal with http://ndjson.org/ in clojure?
my problem is that a service expects a file with one json entry per line, but the clojure code that I have produces a [{}, {}, {}] format
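You don't strictly need a dedicated NDJSON lib for the writing side; a minimal sketch, assuming clojure.data.json is on the classpath (the function name maps->ndjson is made up here):

```clojure
(require '[clojure.data.json :as json]
         '[clojure.string :as str])

;; NDJSON is just one JSON object per line, so mapping write-str over
;; the collection and joining with newlines is enough.
(defn maps->ndjson [maps]
  (str/join "\n" (map json/write-str maps)))

(maps->ndjson [{:a 1} {:a 2} {:a 3}])
;; => "{\"a\":1}\n{\"a\":2}\n{\"a\":3}"

;; then e.g. (spit "out.ndjson" (maps->ndjson data))
```

For very large collections you would stream line by line with a writer instead of building the whole string, but the per-line shape is the same.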
Sorry, forgot to mention I tried MAVEN_OPTS to no avail. As in:
MAVEN_OPTS: "-Dmaven.repo.local=<something>"
I'll try that tks folks
I should have tried before suggesting; it doesn't look like M2_HOME has any effect
Yep, just tried it. Googling for a bit seems like that's just like JAVA_HOME, for the mvn executable
per https://medium.com/@dainius_jocas/using-gitlab-ci-cache-for-clojure-dependencies-31bb9bf5f003, it can be passed as the parameter :mvn/local-repo as well
That will do it. I'll just set some variable and use that as a param
Tks again
set :mvn/local-repo for this
you can set it on the command line with clj -Sdeps '{:mvn/local-repo "foo"}'
does anyone develop their apps purely in the cloud? e.g. startup your jvm in some cloud compute and connect to it via remote REPL?
Yup! EC2 instance running the app inside docker, with regular file mounts that I edit transparently over SSH with TRAMP.
But for what you describe we have a couple VPCs with ZK, Datomic, and etc. running. So we just connect to the right VPC and run the part of the app locally that you want. I got tired of spinning fans, so I just put the whole machine on the VPC 🙂
TRAMP gets annoying if you have large pings. While I was abroad, I switched to editing files locally + Unison process syncing them.
we’re going around and around at work about all these issues developing locally, where we have to run so many docker containers/jvms that require a lot of local ops work that isn’t really useful
@lilactown I'd be concerned about the ability to work when offline, such as when flying or something. We have a bunch of services needed to support our apps but we just have them setup locally so docker-compose up
is the only command needed to bring them all up (and the config is all kept in a dev repo so everyone can have the same experience).
@hiredman Isn't your dev setup done via a "remote" server that runs all your services/JVMs etc and you just connect to a REPL in it?
Ah, yeah, right.
for context, our current workflow for developing a single service is to start it up and have it interact with other services in the dev cluster
@lilactown Maybe if your devs have Comcast as their ISP they wouldn't want to rely on Internet connectivity... :rolling_on_the_floor_laughing:
it just gets real complicated when we need to run 2+ services locally because we need to configure our local services to talk to each other
I have a linux vm running locally on another machine that I ssh into, and do all my work dev on
a nice thing about this is I pretty much never have to worry about starting/restarting services, I just leave them running
I do end up using containers, but I run them via podman (docker alternative) and use user level systemd stuff to manage them
podman is really nice. At this point, I don't have anything complicated so I just get along with plain pods and bash scripts to run them.
the neat thing about podman + user-level systemd is that all the state is in my user's home directory, so when I rebuild the OS or whatever I just move my home over, and the next time I log in all the services are there running
Right now I just mount local dirs within my ~. Do you have any link that shows how to use podman with user-level systemd?
I just write unit files and put them in ~/.config/systemd/user and then use all the normal systemd commands but with the --user flag
I don't use arch but they always have helpful docs https://wiki.archlinux.org/index.php/Systemd/User
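To make the pattern concrete, a minimal sketch of such a user unit running a container via podman (the service name and image are hypothetical):

```ini
# ~/.config/systemd/user/myservice.service
[Unit]
Description=My podman-managed dev service

[Service]
ExecStart=/usr/bin/podman run --rm --name myservice docker.io/library/redis
ExecStop=/usr/bin/podman stop myservice
Restart=on-failure

[Install]
WantedBy=default.target
```

Then `systemctl --user daemon-reload` and `systemctl --user enable --now myservice`; with lingering enabled (`loginctl enable-linger`) the services keep running when you log out.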
I try really hard to limit the ops work my local setup requires, but it can be tough
I've worked at places where we didn't have any code locally, and it didn't seem worth it to me. Lots of downtime waiting for server maintenance, lots of awkward solutions for people who aren't comfortable using vim or emacs over SSH.
Haven't heard of such places before. Would you mind elaborating on the reason behind it? Was it difficult to set up everything locally? Security?
At that shop it was mostly to avoid the work involved in setting up local dev environments and keeping them on the same versions of various packages and libraries as our production target. At one point the deprecated cloud IDE we were using started to show cracks and we ended up setting up local environments for everyone just to save time, but management still wanted a centralized solution for themselves to use. You need a better reason than laziness, it's not easier.
All solvable of course, but someone had to solve it
I think what I really want is to be able to quickly deploy changes to a personal cluster/stack
alternatively, we could write apps that have boundaries such that they can be developed in isolation
one can dream at least
like for 80% I could just connect to a socket or nREPL connection and hack at things. periodically do a clean refresh of the service
we got pretty far with plugging transducers into kafka
because that means kafka becomes an implementation detail, and most dev can just use the same transducer in a non-kafka context
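The appeal is that a transducer is context-free, so the same xform can be exercised against a plain vector in dev and handed to the Kafka wiring (e.g. via jackdaw) in prod. A sketch with made-up event shapes:

```clojure
;; Sketch: the same transducer runs over a vector, a core.async channel,
;; or a Kafka topology — Kafka becomes an implementation detail.
(def process-events
  (comp (filter :valid?)                  ; drop invalid events
        (map #(update % :amount * 100))   ; dollars -> cents
        (map #(select-keys % [:id :amount]))))

;; In dev, exercise it against plain data:
(into []
      process-events
      [{:id 1 :amount 2.5 :valid? true}
       {:id 2 :amount 9.0 :valid? false}])
;; => [{:id 1, :amount 250.0}]
```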
but yeah, that requires a lot of buy-in in terms of coding style and architecture, and the expedient thing is just relying on the service
(short term, at the very least)
@noisesmith sounds interesting -- do you have that written up somewhere?
we have a private repo where we experimented with this
the jackdaw lib for kafka does have this feature merged and a small demo, but the nature of this stuff is you need something very big and very complex before the approach is vetted
and that's a high bar
because there are a lot of silly ways to develop an app that look great in a minimal demo :D
yeah we have a lot of services and use kafka / zookeeper / some other microservice-y stuff for service discovery and communication
the other thing I’m thinking of is, I think similar to what you’re saying, having the ability to run things in the same JVM and have them communicate in-memory
or minimize / isolate the complexity so that you can use a service with a vector of inputs, and get a vector of outputs back for confirmation
in-memory being necessary but not sufficient for that, of course
but it’s still complicated by the need to run locally sql stores / other cloud services that are just bleh
right - but my dream (usually unrealized) is that those are still sources/sinks of structured data, so there should be a design that lets you plug them safely
like one of the things we are struggling with atm is we want to validate some really complex SQL queries that run in presto
yeah - in the design I'm talking about the query would be part of the "messy" stuff
in some cases pulling complexity out of the query and into pure clojure is feasible, in some cases, of course, that's not going to be viable
but at the very least you can ensure presto is the only live component you need to dev against to make that work
this service needs presto, this one needs postgres, this one needs some other thing
it’s further complicated in that we are (I think wisely) building abstractions on top of this
so e.g. we have a service for running queries against sql stores. so if I want to exercise the system, I need to run that service + the service that knows the query that needs to be run
I suspect (but don't have robust proof) that having adaptor protocols for all external resources should make things more robust and make it easier to use placeholder data during dev, but sadly I don't have a strong example that really proves it
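The adaptor-protocol idea can be sketched in a few lines; the protocol and record names here are hypothetical:

```clojure
;; Sketch: code depends on the protocol, so dev can swap in an
;; in-memory placeholder for the real presto/postgres-backed store.
(defprotocol QueryStore
  (run-query [this sql] "Run sql and return a seq of row maps."))

;; Dev/test implementation backed by canned data:
(defrecord FakeStore [rows]
  QueryStore
  (run-query [_ _sql] rows))

(run-query (->FakeStore [{:user "a"} {:user "b"}]) "SELECT ...")
;; => [{:user "a"} {:user "b"}]
```

The production implementation would wrap the live store; everything downstream of the protocol can be developed against `FakeStore`.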
right
but is it more work than the ops drudgery of keeping everything hooked up and healthy locally?
because there's a lot of friction to the cloud, and the little sharp corners of tools and stacks of tools start to add up
in my experience
then you're putting the time into the tooling stack I guess, probably messing with things like k8s
for institutional reasons, I'm motivated to solve as much in the architecture of my app instead of relying on the security and ops teams
(we have a lot of laws and stakeholder promises about our data...)
fintech - the laws are more lenient but the intrinsic motivation for others to break our stuff is higher
I updated [my github](https://github.com/JpOnline/Blog/blob/master/documentation_sprint.md) with a solution sketch and some maybe-useful ideas. It would be nice if you let me know what you think about these features. How could it be improved? The main ideas are: - Using labels to define multiple levels of documentation. - Running a basic static analysis to identify functions and where they are called, to generate a diagram where you can filter which elements to show/hide. - Running this analysis in any language. - Warning the dev about missing code referenced in the docs, maybe because it was deleted or renamed. - Showing usage examples of the code (tests) close to the code they test.