#clojure
2020-06-03
kimim04:06:24

dear all, I have a Ring server. When I run it as java -jar target/kimim-standalone.jar it works fine, but when I run it in Docker it reports this error: Exception in thread "main" Syntax error compiling at (core.clj:109:7).

kimim04:06:30

the Dockerfile is basically the same as the command line:

kimim04:06:48

# Run the JAR
CMD java -jar kimim-standalone.jar

seancorfield04:06:19

If it's an uberjar and AOT'd, I wouldn't expect it to be doing any compilation when it runs...

seancorfield04:06:02

Can you provide more details about the error @U011J2CQT0F?

kimim04:06:40

# docker logs incanter
2020-06-03 04:11:50.245:INFO::main: Logging initialized @4281ms
WARNING: read already refers to: #'clojure.core/read in namespace: km.core, being replaced by: #'clojure.data.json/read
Exception in thread "main" Syntax error compiling at (core.clj:109:7).
    at clojure.lang.Compiler$InvokeExpr.eval(Compiler.java:3707)
    at clojure.lang.Compiler$InvokeExpr.eval(Compiler.java:3701)
    at clojure.lang.Compiler$InvokeExpr.eval(Compiler.java:3701)
    at clojure.lang.Compiler$DefExpr.eval(Compiler.java:457)
    at clojure.lang.Compiler.eval(Compiler.java:7181)
    at clojure.lang.Compiler.load(Compiler.java:7635)
    at clojure.lang.RT.loadResourceScript(RT.java:381)
    at clojure.lang.RT.loadResourceScript(RT.java:372)
    at clojure.lang.RT.load(RT.java:463)
    at clojure.lang.RT.load(RT.java:428)
    at clojure.core$load$fn__6824.invoke(core.clj:6126)
    at clojure.core$load.invokeStatic(core.clj:6125)
    at clojure.core$load.doInvoke(core.clj:6109)
    at clojure.lang.RestFn.invoke(RestFn.java:408)
    at clojure.core$load_one.invokeStatic(core.clj:5908)
    at clojure.core$load_one.invoke(core.clj:5903)
    at clojure.core$load_lib$fn__6765.invoke(core.clj:5948)
    at clojure.core$load_lib.invokeStatic(core.clj:5947)
    at clojure.core$load_lib.doInvoke(core.clj:5928)
    at clojure.lang.RestFn.applyTo(RestFn.java:142)
    at clojure.core$apply.invokeStatic(core.clj:667)
    at clojure.core$load_libs.invokeStatic(core.clj:5985)
    at clojure.core$load_libs.doInvoke(core.clj:5969)
    at clojure.lang.RestFn.applyTo(RestFn.java:137)
    at clojure.core$apply.invokeStatic(core.clj:667)
    at clojure.core$require.invokeStatic(core.clj:6007)
    at clojure.core$require.doInvoke(core.clj:6007)
    at clojure.lang.RestFn.invoke(RestFn.java:408)
    at ring.server.leiningen$load_var.invokeStatic(leiningen.clj:7)
    at ring.server.leiningen$load_var.invoke(leiningen.clj:5)
    at ring.server.leiningen$get_handler.invokeStatic(leiningen.clj:14)
    at ring.server.leiningen$get_handler.invoke(leiningen.clj:10)
    at ring.server.leiningen$serve.invokeStatic(leiningen.clj:20)
    at ring.server.leiningen$serve.invoke(leiningen.clj:16)
    at clojure.lang.Var.invoke(Var.java:384)
    at km.core.main$_main.invokeStatic(main.clj:1)
    at km.core.main$_main.invoke(main.clj:1)
    at clojure.lang.AFn.applyToHelper(AFn.java:152)
    at clojure.lang.AFn.applyTo(AFn.java:144)
    at km.core.main.main(Unknown Source)
Caused by: java.lang.Exception: Directory does not exist: resources
    at ring.middleware.file$ensure_dir.invokeStatic(file.clj:17)
    at ring.middleware.file$ensure_dir.invoke(file.clj:12)
    at ring.middleware.file$wrap_file.invokeStatic(file.clj:72)
    at ring.middleware.file$wrap_file.invoke(file.clj:59)
    at ring.middleware.file$wrap_file.invokeStatic(file.clj:70)
    at ring.middleware.file$wrap_file.invoke(file.clj:59)
    at clojure.lang.AFn.applyToHelper(AFn.java:156)
    at clojure.lang.AFn.applyTo(AFn.java:144)
    at clojure.lang.Compiler$InvokeExpr.eval(Compiler.java:3702)
    ... 39 more

kimim04:06:08

it seems it's not AOT'd, as I can find the clj src files in the .jar

kimim04:06:58

but the interesting point is that I can run it with java -jar directly, yet when it's wrapped in Docker it fails.

kimim04:06:25

Exception in thread "main" Syntax error compiling at (core.clj:109:7). And the place of the error in the code is as the picture shows:

seancorfield04:06:17

Well, there's your error: Caused by: java.lang.Exception: Directory does not exist: resources

seancorfield04:06:29

The stack trace shows that the wrap-file middleware for Ring is expecting a resources folder to be present. I suspect it is present locally when you run the JAR but you haven't built it into the JAR file so when it is run inside Docker, there is no resources folder.

kimim04:06:52

Yes, I understand now. The resources directory is not there in Docker; it only exists under my local project. Then another question: how should I change this line of code to use the files from the standalone JAR?

seancorfield05:06:16

I don't know how you are building your JAR but I would have expected resources to be included in it?

seancorfield05:06:33

Does it have any files in it (locally)?

kimim05:06:06

yes, I can find the files included in the JAR. It seems it cannot locate "resources" from the JAR.

seancorfield05:06:33

Hmm, I just tried to build a lein uberjar with an empty resources folder and realized that it normally puts the contents of resources into the JAR, not the directory itself, so the code would look for files as resources directly, not with the resources prefix.

seancorfield05:06:59

Ah, how are you calling wrap-file in your middleware? I think that might be the issue...

kimim05:06:39

I call wrap-file as below:

(wrap-file "resources")

seancorfield05:06:58

The root-path argument to wrap-file is intended to be relative to the classpath -- but the classpath includes the contents of both src and resources not the folders themselves.

seancorfield05:06:56

the expectation is that you have a folder inside resources, such as public, and put your (static) files inside that, then call (wrap-file "public"), i.e., naming a folder inside resources.

kimim05:06:59

So maybe I need to call twice:

(wrap-file "resources")
      (wrap-file "public")
Here "public" is the only folder in my resource. This way, it may work both locally and in docker.

seancorfield05:06:47

Remove the (wrap-file "resources") call -- that is what is failing inside Docker

seancorfield05:06:21

if resources is on your classpath (locally -- and it should be) then (wrap-file "public") is correct -- both locally and in Docker.

seancorfield05:06:06

(FWIW, this confused me the first time I tried to work with static files in Ring)

kimim05:06:50

Now it cannot find public locally: Caused by: java.lang.Exception: Directory does not exist: public

seancorfield05:06:37

That says you don't have resources on your classpath locally -- so that needs fixing.

seancorfield05:06:48

Does it now work inside Docker?

kimim05:06:05

Docker shows the same exception. Though public is included in JAR:

drwxrwxrwx         0   3-Jun-2020  12:04:24  public/
  drwxrwxrwx         0   1-Jun-2020  17:36:10  public/css/
  -rw-rw-rw-       160   1-Jun-2020  18:00:34  public/css/main.css
  -rw-rw-rw-       522   1-Jun-2020  18:02:00  public/index.html
  drwxrwxrwx         0   3-Jun-2020  12:04:34  public/js/

kimim05:06:14

I found one solution, which is to copy all the resources files into the Docker image:

RUN mkdir -p resources
COPY resources/* resources/

kimim05:06:44

Now lein ring server-headless, java -jar xxxx.jar, and Docker can all provide the service...

seancorfield05:06:41

OK, I went and checked our code and also ring-defaults. wrap-file seems to require actual files on disk -- and therefore a directory outside the JAR -- whereas wrap-resource works with the classpath and files inside the JAR.

seancorfield05:06:00

So I think if you switch from wrap-file to wrap-resource your problems will go away.
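
For illustration, a minimal sketch of what that change might look like (the namespace and handler names here are made up, not from the original project):

(ns example.handler
  (:require [ring.middleware.resource :refer [wrap-resource]]
            [ring.middleware.content-type :refer [wrap-content-type]]))

;; hypothetical base handler
(defn app-routes [request]
  {:status 404 :headers {} :body "not found"})

;; wrap-resource serves files from the classpath, so resources/public/...
;; is found both when running locally and from inside the uberjar
(def app
  (-> app-routes
      (wrap-resource "public")
      (wrap-content-type)))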

kimim06:06:49

Thanks. I will try this method again.

kimim06:06:33

Yes. @U04V70XH6, when using wrap-resource, both local and Docker work fine with files inside the resources folder. Thank you.

noisesmith13:06:05

side note, I am surprised to see ring.server.leiningen in your stack trace, lein ring is meant to be a dev convenience and it's not for production

tio07:06:16

Can someone point me to where I can see a sample of creating a Clojure script?

tio07:06:47

Just want to post information to an API, without making an app.

tio07:06:55

Don’t see anything online that I, as a beginner, can understand.

tio07:06:48

Or ClojureScript if that is the way to go…

solf08:06:58

@tiotolstoy

--> cat posting.clj
(require '[clj-http.client :as client])

(-> (client/post ""
                {:body "Hello world"})
    :body
    println)
--> clj -Sdeps '{:deps {clj-http {:mvn/version "3.10.1"}}}' posting.clj
{"args":{},"data":"Hello world","files":{},"form":{},"headers":{"x-forwarded-proto":"https","x-forwarded-port":"443","host":"","x-amzn-trace-id":"Root=1-5ed75a83-2024ebd14f608aa9fc8f04a3","content-length":"11","accept-encoding":"gzip, deflate","content-type":"text/plain; charset=UTF-8","user-agent":"Apache-HttpClient/4.5.10 (Java/11.0.7)"},"json":null,"url":""}

solf08:06:10

This line executes the file posting.clj with the additional dependency clj-http (which will be fetched automatically)

clj -Sdeps '{:deps {clj-http {:mvn/version "3.10.1"}}}' posting.clj
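
For a slightly more permanent setup, the same dependency could live in a deps.edn next to the script (a sketch; note the group/artifact form of the coordinate):

;; deps.edn
{:deps {clj-http/clj-http {:mvn/version "3.10.1"}}}

;; then simply:
;; clj posting.clj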

tio08:06:31

Great! Saving this for future use!

tio08:06:54

What is params doing here?

:params {:name "john"}

p-himik09:06:40

Impossible to tell without any context.

simongray13:06:42

Like @U2FRKM4TW says it's not possible to tell since you omitted the code around it. Now, there's a chance you lifted this from an argument list in a function call, in which case this is probably setting the key :params of a destructured options map to {:name "John"}

tio18:06:44

I’m getting an error on this part. Here is the full code:

(ns twitter-mute.core
  (:require [twttr.api :as api]
            [twttr.auth :refer [env->UserCredentials]]))

(def creds (env->UserCredentials))

; get following list
; loop through it
; mute each user
(api/mutes-users-create
 creds :params {:screen_name "jack"})

tio18:06:00

The error is this

Evaluating file: core.clj
Syntax error (NullPointerException) compiling at (src/..../core.clj:7:1).
null

Evaluation of file core.clj failed: class clojure.lang.Compiler$CompilerException

noisesmith18:06:27

"syntax error" actually means something blew up while loading the file

tio18:06:47

It errors out on that api line.

tio18:06:02

But this is the example from the README in the library docs.

noisesmith18:06:06

since you have two top level side effects (constructing the creds, mutes-users-create) it's ambiguous which one had the error

noisesmith18:06:30

sounds like the creds might not have something in them the code needs - try printing the creds object

tio18:06:40

It errors out on mutes-users-create

noisesmith18:06:51

also, if you wrap the api call in a function, it will be easier to iterate

tio18:06:23

Sorry I’m new, why do I need to iterate?

tio18:06:32

Just want to get it working

noisesmith18:06:35

I mean "write code and debug"

noisesmith18:06:18

top level side effects make that harder, and are almost never appropriate in finished code

tio18:06:43

Makes sense. Like so?

(fn [] (api/users-show creds :params {:screen_name "jack"}))

tio18:06:09

I made an issue here by the way, in case that offers any clarity: https://github.com/chbrown/twttr/issues/10

noisesmith18:06:24

you'd want defn rather than fn, but yes
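
A sketch of what that wrapper might look like (the function name is made up; creds comes from the snippet above):

(defn mute-user!
  "Mute a single Twitter user by screen name."
  [screen-name]
  (api/mutes-users-create creds :params {:screen_name screen-name}))

;; then, from the REPL:
;; (mute-user! "jack")

Keeping the call inside a defn means loading the file no longer fires the API request, so you can load the namespace first and then experiment.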

noisesmith18:06:05

the root cause here is almost certainly that env->UserCredentials needs some data from the env that isn't present

tio18:06:25

It didn’t error out anymore after wrapping in a function.

tio18:06:31

Going to try to call it, thank you 🙂

tio18:06:30

So you’re right; the env vars returned are nil. How do I source my .env file in Clojure?

noisesmith18:06:01

most people use a plugin (eg. lein-env ) to propagate it on startup

noisesmith18:06:30

the jvm doesn't have a built in portable way to set process env vars, so it's usually set up outside the process by tooling

tio18:06:19

So you’re saying the best option is to get lein-env? I’m learning Clojure and would like to keep the install as minimal as possible

noisesmith19:06:50

lein-env is a plugin you can add to a project

noisesmith19:06:09

the other alternative is some startup script that sets the env vars based on the .env file
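
If you go the script route, here is a rough sketch of reading a .env file from Clojure itself (note: the JVM cannot set its own environment variables, so this only gives you a map of values that you would then hand to whatever needs them):

(require '[clojure.string :as str])

(defn load-dotenv
  "Parse KEY=value lines from a .env file into a map."
  [path]
  (->> (str/split-lines (slurp path))
       (remove #(or (str/blank? %) (str/starts-with? % "#")))
       (map #(str/split % #"=" 2))
       (into {} (map (fn [[k v]] [(str/trim k) (str/trim v)])))))

;; (load-dotenv ".env") ;=> {"CONSUMER_KEY" "..." ...}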

tio04:06:55

OK, thank you! I appreciate it so much 🙂

noisesmith15:06:15

also, as a more root design issue, nothing in clojure apps should rely directly on environment variables, since you can't easily set them in a running repl; there should be some second mechanism that lets you configure the values inside the running process (that's an opinion of course, but one won through years of practice)

noisesmith15:06:49

see for example juxt/aero which gives a syntax for grabbing things from the environment, but also lets you specify the values arbitrarily without restarting https://github.com/juxt/aero - particularly useful when combined with integrant https://github.com/weavejester/integrant
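
For reference, a tiny sketch of the aero idea (the keys and env var names are invented):

;; config.edn
{:twitter {:consumer-key #or [#env TWITTER_CONSUMER_KEY "dev-placeholder"]}}

;; reading it
(require '[aero.core :as aero])
(def config (aero/read-config "config.edn"))

The #env tag pulls the value from the environment when present, and the #or form provides a fallback so the config still reads in a REPL without that variable set.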

tio08:06:07

I’m very new and English is not my first language. Apologies.

rmxm10:06:07

I am dealing with a very large (largest I have seen) clj codebase counting 40k+. It is very difficult to navigate around stuff and I believe I have already asked earlier regarding stateful "components" and usage of atoms, but I am also struggling with some lower level code. As most code in Clojure follows this pipeline architecture, when you start processing something it's usually difficult to have reasonable assumptions in the deeper parts (especially if they interact with some Java code). Something starts as a map and gets transformed; further down the line it is difficult to reason about the shape of the data. Currently I am looking at clojure.spec with high hopes of it giving some clarity - what are your experiences? Also I saw a lot of a pattern where you have some general map, let's call it "element", and you pass it down as-is through your call stack so that each function can extract whatever it needs as a "prelude" to the actual op - I am currently unsure if it's so great, but when this pattern is no longer present I struggle even more. Generally I have a hard time understanding the expectations of dependent functions and usually have to investigate code at multiple levels to have a reasonable dose of confidence that what I have provided should be ok. I am asking for general suggestions and tips - working with a large codebase means I won't be able to apply things willy-nilly, so the more general and gradual the suggestion the better 🙂 and also, how's spec? :)

p-himik11:06:20

Spec is fine. But IMO documentation and examples are much better for clarity, if that's clarity alone that you're seeking. Start from the lowest level and try to document each function and provide a small example for it to grasp the meaning better. Unless some functions do something incomprehensible and ineffable as an example, you should be fine.

rmxm11:06:01

I suppose. The main problem for me is the shape of the data, described in some explicit fashion. Because the pattern this community calls "place oriented programming" happens very often, via things like (let [name (first (:some-val x))] ...) without any semantics as to what this data represents.

wotbrew11:06:20

There isn't a silver bullet answer to this. But some things I find useful dealing with similar problems: • Documentation and spec can help with understanding function requirements. • Asserts can help with debugging and the development workflow. I personally have noticed these are tragically missing from many Clojure libraries and applications, so when you get it wrong you are punished with an obscure error. • Tests can notify you of breaking changes to the requirements of functions • Removal of nesting and unnecessary structure lets you focus on the keys and their relationships. • In addition to above I find using sets for structure and set logic for composition of structure much easier to reason about with open domains than nested structures. When structure obviously makes the code simpler, ignore this. Nothing is always a good idea.
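
As a small illustration of the spec + assert point above, a sketch with invented key names:

(require '[clojure.spec.alpha :as s])

(s/def ::id string?)
(s/def ::quantity pos-int?)
(s/def ::element (s/keys :req-un [::id ::quantity]))

(defn process-element [element]
  ;; fail loudly at the boundary instead of deep inside the pipeline
  {:pre [(s/valid? ::element element)]}
  (update element :quantity inc))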

wotbrew11:06:16

If it's not already happening, namespacing keys can of course help with understanding semantics and being able to lean on tools like spec.

Matti Uusitalo11:06:03

alas, assert throws an Error instead of Exception 😢

wotbrew11:06:19

Sometimes too many (low impact) little functions can also be a problem. Better to have one place to understand the requirements for a 'unit of useful work' than many. It's hard to write good functions; one bad function is better than 100 bad functions.

wotbrew11:06:55

@USGKE8RS7 not sure what the issue is here; to me the difference is sound - i.e. your fault or not. If you wanna catch all throwable things, just catch Throwable?

wotbrew11:06:09

Perhaps I'm missing something?

naomarik11:06:20

Check out malli: https://github.com/metosin/malli Using this on my current site and it's a lot easier to get going with.

naomarik11:06:28

It has human error messages built in, it's been enormously helpful for me to debug production logs as to why something might be broken.
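
A minimal sketch of that humanized-errors workflow (the schema is illustrative):

(require '[malli.core :as m]
         '[malli.error :as me])

(def Element
  [:map
   [:id string?]
   [:quantity pos-int?]])

(-> (m/explain Element {:id 42})
    (me/humanize))
;; => something like {:id ["should be a string"], :quantity ["missing required key"]}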

Lone Ranger21:06:42

So I'm typically an Emacs user BUT ... for situations like this I reach for Intellij+Cursive, because it has really excellent code navigation / jump-to-source and excellent step-thru debugging. I don't usually reach for them except when I'm completely lost on large projects. Just thought I'd share a point I didn't see otherwise mentioned

Matti Uusitalo17:06:46

Emacs can do all that.

plexus12:06:18

An attempt to make sense of this mess. Rectangle=clojure lib, diamond=logging facade, round=logging framework. ACL=apache commons logging (previously JCL jakarta commons logging), JUL=java.util.logging

plexus12:06:36

Any libraries or links that I should add?

plexus12:06:44

Added Timbre

kirill.salykin13:06:28

Would you please make a blog post about the mess? (Similar to what you did with dates)

plexus13:06:45

yes, I very well may 🙂

❤️ 4
flowthing13:06:14

Well, depending on how deep you want to go, there’s also https://github.com/fzakaria/slf4j-timbre

😞 4
flowthing13:06:50

(Also, Onelog -> Unilog?)

plexus13:06:18

I don't know Unilog

flowthing13:06:31

Oh! Well, there’s also Unilog. 😛

flowthing13:06:46

I’m not sure which of those fit into your diagram, though.

plexus13:06:58

ALL OF THE THINGS

kirill.salykin13:06:01

omg, it is endless

flowthing13:06:37

Those are just the things I’ve come across.

flowthing13:06:43

There’s probably many more.

plexus13:06:54

Getting somewhere

plexus13:06:51

Not going to include log.dev as it's just wiring up slf4j+logback, but it's good to know about as that's basically the setup I would recommend, wrapped in a nice package.

plexus13:06:06

mu-log is interesting, in an "après moi, le déluge", let's-rebuild-the-world-from-scratch kind of way. Also it uses unicode symbols (or at least recommends doing so), which works but is not officially supported.

ghadi13:06:17

I have a library that I may be able to open source soon

ghadi14:06:42

it's an SLF4J Logger that would replace logback in that diagram @U07FP7QJ0

plexus14:06:23

interesting! Seems there aren't a lot of libs that implement an SLF4J backend directly. Lots of option for piping it into other frameworks though. What's the motivation for writing your own?

ghadi14:06:27

logback and log4j are bloated, even straight up structured logging is tricky

ghadi14:06:16

for writing to stdout in containers, not much is necessary

plexus14:06:21

structured logging as in logging data structures?

ghadi14:06:37

logging JSON for CloudWatch

plexus14:06:15

ah right, interesting

ghadi14:06:20

it turns slf4j method calls into data, then calls clojure functions (user provided)

ghadi14:06:59

it's 2 files of java. Hoping to release it soonish

plexus14:06:32

do you have a code name? then I can add a placeholder 🙂

dominicm13:06:02

I think I'm probably in a "don't do that, do this" position. I am trying to maintain a counter for # of subscribers. When I transition from 0->pos? I want to register a side effect (listen in postgres & start a hot loop to poll). Once the subscriber count is back to 0, I'd like the loop to end (easily achieved via a sleep & interrupt in my case). It gets more tricky when I start adding failure & recovery to my loop though. It means I need to be able to "stop the world" and prevent any updates to the state while getting the # of subscribed listeners, otherwise I will Listen in postgres for channels that are no longer wanted. My current idea uses a thread & core.async to do RPC, to allow for blocking until the listen has been added (can't perform the task that will trigger the notification until the listen is added). I hate it. What should I do instead?

noisesmith13:06:34

what about add-watch on an atom holding the number of subscribers, which turns the polling on/off?

noisesmith13:06:09

often you'd hold more than just a subscriber count (even a map of subscriber id to ancillary data) but a count should be derivable
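
A sketch of that shape (subscriber map in an atom, polling toggled on the 0 <-> positive transition; start-polling! and stop-polling! are placeholders):

(def subscribers (atom {}))  ; subscriber id -> ancillary data

(add-watch subscribers ::poller
  (fn [_key _ref old new]
    (cond
      (and (zero? (count old)) (pos? (count new))) (start-polling!)   ; 0 -> pos
      (and (pos? (count old)) (zero? (count new))) (stop-polling!)))) ; pos -> 0

;; subscribe:   (swap! subscribers assoc sub-id {:chan ch})
;; unsubscribe: (swap! subscribers dissoc sub-id)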

noisesmith13:06:21

if the goal is that only one pending task ever exist against the db, you could use an agent to coordinate, and send it actions which are implicitly queued

noisesmith13:06:18

(that is, all agent actions are queued, it doesn't retry or optimistically execute, so agents are good for wrappping stateful external resources)

dominicm14:06:35

The problem with an agent is that you can't wait for the action to complete, to know that you're able to continue.

noisesmith14:06:35

if you can lift into something async or a callback, you can do it via await, which is effectively a callback agent action that doesn't change the data, just wakes your code up when everything before it in the queue (at the time of the await call) is finished

dominicm14:06:22

The watch would still need to communicate with the polling thread. Also, the polling thread needs to be aware of the subscribers in order to reconnect when the connection fails, but it needs to stop any swaps from happening during that time

noisesmith14:06:47

sure, with an agent you don't have swap period

dominicm14:06:11

Yeah, I think I worried that await would block forever & was a bit racey. But I'm not sure that's true.

noisesmith14:06:51

await is an agent action via send-off, it doesn't wait for all actions, just implicitly, via the queue, on the ones that were sent before it

noisesmith14:06:57

so if you do (send a f) then (await a) you have the guarantee that f has already been called by the time await returns
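
In code the guarantee looks roughly like this (listen! stands in for whatever side effect the action performs):

(def a (agent {:listening #{}}))

(send-off a (fn [state]
              (listen! "orders")                        ; runs on the agent's thread
              (update state :listening conj "orders")))
(await a)
;; once await returns, the send-off above has already executed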

noisesmith14:06:29

the tricky bit is the error state, that's the part of agents I don't like

noisesmith14:06:16

you can simulate an agent in core.async with a go-loop that reads a channel and executes a thread and parks on that thread for each input

dominicm14:06:04

Hmm, yes. Although in my case we're really talking about a thread anyway, so error state is already bad.

dominicm14:06:28

Yeah, that's basically what I've implemented. Although loops don't play nice with sql exceptions.

noisesmith14:06:43

(go-loop [state []]
  (let [msg (<! c)
        result (<! (async/thread (f state msg)))]
    (recur result)))
the basic skeleton I guess

noisesmith14:06:06

right - exceptions are the gotcha all around, async makes managing them more complex period

dominicm14:06:37

the problem with that go-loop is that it has no await, so I was custom implementing that to allow subscribers to know when their msg had been processed.

noisesmith14:06:41

you could implement await via a second chan, poll on that and the query chan, if the second chan is hit, expect a result chan, and just >! true onto it

dominicm14:06:08

yeah, that's pretty much what I'm doing :)

noisesmith14:06:16

that gives the same semantics - a special kind of action that just wakes you up, and implicitly lets you know anything before it is done

dominicm14:06:39

My poll has a timeout, so using (async/timeout 200) will trigger the loop and then restart, or an action will come in

noisesmith14:06:56

another option is to attach a "result chan" or "result promise" to each incoming thing, and ensure all queriers use that
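
A rough sketch of the result-promise variant (handle-msg and the message shape are placeholders):

(require '[clojure.core.async :as async :refer [chan go-loop <! >!!]])

(def requests (chan))

(go-loop [state {}]
  (let [{:keys [msg result]} (<! requests)
        state' (<! (async/thread (handle-msg state msg)))]
    (deliver result state')
    (recur state')))

;; caller side: blocks until the loop has processed this particular message
(let [result (promise)]
  (>!! requests {:msg {:op :listen :channel "orders"} :result result})
  @result)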

dominicm14:06:09

Ah, that might be a little nicer actually.

dominicm14:06:19

That would probably simplify a bit of my code quite a bit.

dominicm14:06:41

Hmm, now I think about the fact that if I used an agent I'd need to somehow do the polling itself somewhere else still, and manage state updates to that thing… core.async sounds more appealing.

noisesmith14:06:22

instead of the explicit guarantee of the agent serializing actions, you rely on the single go-loop managing all the state in its closed over context

noisesmith14:06:33

but those end up being identical if done properly

dominicm14:06:06

OK. I think I've got a potentially easier option to explore with this agent thing, time to REPL it a bit and see where I get.

dominicm14:06:27

I suppose I'd need to track the thing being polled in a volatile or something

dominicm15:06:55

This would be lovely and immutable if catch/recur were friends:

(loop [conn (jdbc/get-connection db-spec)]
  (try
    (.getNotifications conn)
    (catch SQLException _
      (.close conn)
      (Thread/sleep 200)
      (recur (jdbc/get-connection db-spec)))))

dominicm15:06:03

Instead I have to use a volatile :(

noisesmith15:06:57

(loop [conn (jdbc/get-connection db-spec)]
  (let [notifications (try .... (catch SQLException _ (.close conn) ::fail))]
    (if (= notifications ::fail)
      (recur (do (Thread/sleep 200)
                 (jdbc/get-connection db-spec)))
      ...)))

dominicm15:06:51

Hmm, dunno why I reached for mutable state instead of that…

dominicm15:06:37

One problem is that the mutable state actually needs to be accessible to the agent so it can trigger any listens/unlistens as appropriate.

dominicm15:06:21

Not a total blocker, but it kills some of the elegance of the agent-based solution

noisesmith15:06:21

so notifications would be sent along with some function to the agent?

dominicm15:06:59

I should be clearer - the current conn needs to be available to the agent.

noisesmith15:06:07

so the second arm of the branch could be (send-off a conn notifications)

dominicm15:06:28

So maybe recovery in the thread happens by sending off to the agent to re-establish the connection…

noisesmith15:06:42

you know via lexical scope that nobody else touches that particular conn before the agent does

dominicm15:06:16

get-connection is called in a different Thread - the thread that is polling.

noisesmith15:06:19

you could also store the connection in the agent itself, and recover connection via the registered agent error handler

dominicm15:06:01

This is difficult because it's coordinating 2 threads pretty much, and they both need access to shared state.

noisesmith15:06:28

first off, is it possible for them to use separate connections?

noisesmith15:06:20

if not, I'd either supply the connection to the agent as part of the action (knowing nothing else touches it before the agent gets to it), or put the connection in the agent and ensure the error handler and sent function know how to manage it

noisesmith15:06:34

but separate connections is simpler

dominicm15:06:09

they have to share - the state is around the connection.

dominicm15:06:31

a connection listens to notifications, so they have to share.

noisesmith15:06:35

in that case I'd be inclined to pass the connection + the notifications as args to the agent send

noisesmith15:06:09

the rest of the system can deref the agent to see the up-to-date state of data, or use send to interact with the notification system, and the loop (in a future most likely) would periodically update the notifications and connection by sending to the agent

dominicm15:06:35

Oh, and the thread owns the connection… I didn't like that at first, but I think that makes sense… ooh.

dominicm15:06:20

I hadn't registered that the thread would own the conn in that case.

noisesmith15:06:33

right, I think the thread (aka the loop right?) is the natural owner of the connection

dominicm15:06:59

yeah, the thread/loop are the same.

dominicm15:06:12

Why would you send the notifications to the agent?

noisesmith15:06:12

that loop could also deref the agent to look for operations that should use the connection, before polling / sending

noisesmith15:06:36

oh - I made some assumption that other code would want to read / access this stuff

noisesmith15:06:51

if it can be self contained, you don't even need an agent, it can all be in the one loop

dominicm15:06:54

I don't want to track it forever, so I was going to just fire a callback.

dominicm15:06:12

but could just do that from the loop right?

noisesmith15:06:34

yeah, my instinct is exposure as data / observability but you could just fire a callback in a new thread and skip the agent thing

noisesmith15:06:04

I wouldn't callback in the same loop that polls - putting the callback invocation in a future is safer, decomplects things

dominicm15:06:47

(doseq [n ns] (future (cb n))) vs (doseq [n ns] (cb n)) feels basically the same :thinking_face:

noisesmith15:06:51

just make sure if anybody cares about the callback erroring that the right try/catch picks that up, because like any async mechanism the errors can otherwise drop on the floor silentsly

noisesmith15:06:15

the difference is that cb can slow down or halt your polling

noisesmith15:06:29

and to address that you now need monitoring / restarts etc.

noisesmith15:06:51

futures isolate the failure to the function run in the future, the caller can try/catch if failures matter

noisesmith15:06:07

it puts the responsibility for not being brittle in a better place

dominicm15:06:08

Or I could just assume good citizens (I'm pretty much just gonna use this for putting onto a channel). With a future I have to worry about running out of memory if a lot of them get queued up.

dominicm15:06:50

heh, futures use the agent thread pool :)

noisesmith15:06:10

yes, they do - also you can use backpressure to avoid memory errors without having to trust your caller (this matters even when you write all the callers - it's easier to maintain a system where failures push to the edges)

noisesmith15:06:44

especially async tends to make errors deeper into the infrastructure harder to debug and fix, the more they push to the edges the easier everything gets

dominicm15:06:01

With a future, how do you use backpressure?

noisesmith15:06:04

(that's my experience with microservices / executor systems at least)

noisesmith15:06:33

you can replace the future with a fixed size executor pool plus a mechanism to impose bindings
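
A bare-bones sketch of that, using java.util.concurrent directly (the pool and queue sizes are arbitrary; the bounded queue plus CallerRunsPolicy is what actually gives you the backpressure):

(import '(java.util.concurrent ThreadPoolExecutor TimeUnit
                               ArrayBlockingQueue ThreadPoolExecutor$CallerRunsPolicy))

(def callback-pool
  (ThreadPoolExecutor. 4 4 0 TimeUnit/MILLISECONDS
                       (ArrayBlockingQueue. 64)
                       (ThreadPoolExecutor$CallerRunsPolicy.)))

;; instead of (future (cb n)):
(.execute callback-pool (fn [] (cb n)))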

dominicm15:06:00

ah, makes sense. Historically I've used manifold for that.

dominicm15:06:07

Wanted to check I wasn't missing something :)

noisesmith15:06:24

yeah - manifold makes that easy, as does claypoole

dominicm15:06:52

I could avoid this issue also by just using a channel instead of a callback as my api

dominicm15:06:25

Btw @noisesmith, your insight here & in general is appreciated. :)

dominicm15:06:34

I always enjoy our talks

dominicm15:06:01

> if it can be self contained, you don't even need an agent, it can all be in the one loop Well, 2 thoughts. One is that maybe I should create a connection per-subscriber. Although with parallel connections, that seems like a bad idea, so nevermind :). But the loop needs something external to do the actual listen/unlisten right?

dominicm15:06:34

Hmm, another problem with the loop owning the connection is that the agent then can't add a listen until the thread has started up and established a connection for it to use.

dominicm15:06:46

jdbc is hard 🤕

dominicm15:06:03

I think a single loop makes sense again, coordination is hard.

noisesmith15:06:40

> the agent then can't add a listen
my precondition was not needing the agent

dominicm16:06:19

ah, my mistake :)

Jp Soares13:06:00

The documentation process. The goal is to have a prototype by the end of the week. I can see several problems related to documentation. When I get to work on big projects that have existed for a long time, I usually spend a lot of energy understanding what is happening, what the main elements are, the domain entities. It might be because documentation gets out of sync with the code, but it could be something else, and what I'm going to do is address this problem and produce a prototype by the end of the week. So if you have time, interest and energy, please contribute with insights. Thank you. https://github.com/JpOnline/Blog/blob/master/documentation_sprint.md

vlaaad14:06:49

Not sure what kind of input you look for. Ideas for improving some clojure project’s documentation? improving cljdoc?

Jp Soares16:06:00

Sorry, I was expecting to edit the original with questions sooner. I see cljdoc as something similar to javadocs; I'm thinking about something that makes the process of documenting easier (it would be nice to have some analyzer as well, but I don't have a lot of ideas in this area).

Jp Soares16:06:04

I updated the original thread post with questions we can discuss now 😁

phronmophobic16:06:58

I would start with trying to compile which libraries have great documentation and then try to figure out what those libraries are getting right

Jp Soares18:06:08

I think this is an interesting idea to investigate what tools these people are using. Do you have examples of such libraries or tools?

Jp Soares11:06:14

I updated [my github](https://github.com/JpOnline/Blog/blob/master/documentation_sprint.md) with a solution sketch and some maybe-useful ideas. It would be nice if you let me know what you think about these features and how they can be improved. The main ideas are:
- Using labels to define multiple levels of documentation.
- Running a basic static analysis to identify functions and where they are called, to generate a diagram where you can filter the elements you want to show/hide.
- Running this analysis in any language.
- Warning the dev about missing code referenced in the doc; maybe it was deleted or its name changed.
- Showing usage examples of the code (tests) close to the code they test.

vlaaad12:06:13

Hey! I had a look, I have a little bit of feedback:
1. Are you aware of http://cljdoc.org? It seems it exists in the same space you are exploring, providing tools for creating both high-level and low-level documentation, including [[wikilink]]-style syntax to link between different things.
2. What is the actual problem you are trying to solve? You mentioned "problem with documentation" -- what is that problem? You also mention that it requires a lot of energy to get into new projects -- I think this is inevitable no matter the documentation, because you have to learn the system and how its parts interact together. Do you think understanding a system and its moving parts is about documentation?

Jp Soares19:06:42

I'm aware of http://cljdoc.org. It focuses on a different problem: it's more about API docs, i.e. documentation to facilitate the use of code. The problem I'm trying to mitigate is the difficulty of understanding pre-existing code, usually when you join a new team or just need to work on some part you are not used to. It might be inevitable to spend more energy in these situations, but I think the documentation plays a big role in how much energy you'll spend in the end. Documentation is simply information other people left for you about the code; if this information is up to date and right, it will definitely help you. There's a cost to maintaining documentation, and if this cost is higher than the benefits you get from it, you will avoid wasting resources on it. So I'm trying to increase the value by extracting some information automatically and to decrease the cost by making it clear what parts are out of date.

deactivateduser18:06:26

I prefer depot, only because it doesn’t require leiningen.

deactivateduser18:06:22

KISS principle and all that…

lvh18:06:11

Wait: I thought deps.ancient doesn’t either

lvh18:06:18

Does it just shell out to lein or something

deactivateduser19:06:48

That was my understanding back when I first checked it out, but it might have changed since then.

deactivateduser19:06:57

I stand corrected - it can be used from leiningen.

deactivateduser19:06:10

But otherwise is standalone.

deactivateduser19:06:34

So I guess I don’t have a good reason for choosing depot over deps-ancient after all. 😉

joshkh19:06:07

for the sake of experimentation, i'd like to wrap a list of functions (project wide) with an outer logging function without modifying the existing code. is alter-var-root my best option?

noisesmith19:06:17

yes, the pattern is pretty straightforward:

(alter-var-root #'foo/bar (fn [f] (fn [& args] (report args) (apply f args))))
slightly more complex if you also want to access e.g. the name of the function in the wrapper, but still doable

noisesmith20:06:29

user=> (alter-var-root #'clojure.string/join (fn [f] (fn [& args] (println "joining" args) (apply f args))))
#object[user$eval215$fn__216$fn__217 0xce9b9a9 "user$eval215$fn__216$fn__217@ce9b9a9"]
user=> (clojure.string/join ", " [:a :b :c])
joining (,  [:a :b :c])
":a, :b, :c"

joshkh20:06:20

perfect, thanks for confirming

noisesmith20:06:01

there are some gotchas to it, but robert.hooke uses "hooks" for this sort of thing https://github.com/technomancy/robert-hooke • fixed link

joshkh20:06:00

ah yes! i was using robert.hooke to profile some db queries

joshkh20:06:50

i think there was another library for heavy namespace manipulation (possibly quite old)? but i can't remember the name

Lucio Assis20:06:25

Is there a way to set the dependencies cache location for clojure cli tools to something other than $HOME/.m2/repository globally? As in, in an environment variable?

noisesmith20:06:59

I wouldn't be surprised if it respected the env var maven uses (via the underlying lib) M2_HOME

plins20:06:18

hello everyone, is anyone aware of a lib to deal with http://ndjson.org/ in clojure? my problem is that a service expects a file with one json entry per line, but the clojure code that I have has a [{}, {}, {}] format
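
For what it's worth, newline-delimited JSON doesn't really need a dedicated lib; a sketch with clojure.data.json (cheshire would look much the same):

(require '[clojure.data.json :as json]
         '[clojure.string :as str])

(defn write-ndjson
  "Write records as one JSON object per line."
  [path records]
  (spit path (str (str/join "\n" (map json/write-str records)) "\n")))

;; (write-ndjson "out.ndjson" [{:a 1} {:a 2} {:a 3}])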

Lucio Assis20:06:24

Sorry, forgot to mention I tried MAVEN_OPTS to no avail

Lucio Assis20:06:46

As in:

MAVEN_OPTS: "-Dmaven.repo.local=<something>"

Cory20:06:48

it uses M2_HOME afaik

Lucio Assis20:06:56

I'll try that tks folks

noisesmith20:06:11

I should have tried before suggesting; it doesn't look like M2_HOME has any effect

Lucio Assis20:06:31

Yep, just tried it. Googling for a bit seems like that's just like JAVA_HOME, for the mvn executable

Cory20:06:44

i think it's M2 rather

Lucio Assis20:06:31

That will do it. I'll just set some variable and use that as a param

Alex Miller (Clojure team)20:06:15

clj will not use MAVEN_OPTS or M2_HOME

👍 4
Alex Miller (Clojure team)20:06:25

set :mvn/local-repo for this

Alex Miller (Clojure team)20:06:53

you can set it on the command line with clj -Sdeps '{:mvn/local-repo "foo"}'
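
Or, to make it stick, the same key can live in a deps.edn (a sketch; the path is arbitrary):

;; project deps.edn, or ~/.clojure/deps.edn to apply it everywhere
{:mvn/local-repo ".m2-cache"}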

lilactown21:06:28

does anyone develop their apps purely in the cloud? e.g. start up your JVM on some cloud compute and connect to it via a remote REPL?

aisamu22:06:27

Yup! EC2 instance running the app inside docker, with regular file mounts that I edit transparently over SSH with TRAMP.

aisamu22:06:29

But for what you describe: we have a couple of VPCs with ZK, Datomic, etc. running. So we just connect to the right VPC and run locally the part of the app you want. I got tired of spinning fans, so I just put the whole machine on the VPC 🙂

aisamu22:06:17

TRAMP gets annoying if you have large pings. While I was abroad, I switched to editing files locally + Unison process syncing them.

lilactown01:06:47

What’s unison process syncing?

aisamu02:06:46

Like a continuous rsync

lilactown21:06:01

we’re going around and around at work about all these issues developing locally, where we have to run so many docker containers/jvms that require a lot of local ops work that isn’t really useful

seancorfield21:06:01

@lilactown I'd be concerned about the ability to work when offline, such as when flying or something. We have a bunch of services needed to support our apps but we just have them setup locally so docker-compose up is the only command needed to bring them all up (and the config is all kept in a dev repo so everyone can have the same experience).

lilactown21:06:20

sure, offline dev will suffer. that doesn’t seem like a strict requirement for us

seancorfield21:06:22

@hiredman Isn't your dev setup done via a "remote" server that runs all your services/JVMs etc and you just connect to a REPL in it?

hiredman21:06:46

I ssh in, run emacs in the terminal there

seancorfield21:06:59

Ah, yeah, right.

lilactown21:06:08

for context, our current workflow for developing a single service is to start it up and have it interact with other services in the dev cluster

seancorfield21:06:35

@lilactown Maybe if your devs have Comcast as their ISP they wouldn't want to rely on Internet connectivity... :rolling_on_the_floor_laughing:

parrot 4
☝️ 4
lilactown21:06:14

it just gets real complicated when we need to run 2+ services locally because we need to configure our local services to talk to each other

hiredman21:06:07

I have a linux vm running locally on another machine that I ssh into, and do all my work dev on

hiredman21:06:41

a nice thing about this is I pretty much never have to worry about starting/restarting services, I just leave them running

hiredman21:06:07

in general I try to avoid docker, it is a real ops sink

hiredman21:06:45

I do end up using containers, but I run them via podman (docker alternative) and use user level systemd stuff to manage them

lilactown21:06:05

gotcha interesting

p-himik21:06:15

podman is really nice. At this point, I don't have anything complicated so I just get along with plain pods and bash scripts to run them.

hiredman21:06:15

the neat thing about podman + user level systemd stuff is all the state is in my user's home directory, so when I rebuild the os or whatever I just move my home over and the next time I login all the services are there running

p-himik21:06:48

Right now I just mount local dirs within my ~. Do you have any link that shows how to use podman with user-level systemd?

hiredman21:06:40

I just write unit files and put them in ~/.config/systemd/user and then use all the normal systemd commands but with the --user flag

hiredman21:06:22

I don't use arch but they always have helpful docs https://wiki.archlinux.org/index.php/Systemd/User

👍 4
hiredman21:06:23

I try really hard to limit the ops work my local setup requires, but it can be tough

Michael J Dorian21:06:40

I've worked at places where we didn't have any code locally, and it didn't seem worth it to me. Lots of downtime waiting for server maintenance, lots of awkward solutions for people who aren't comfortable using vim or emacs over SSH.

nick11:06:04

Haven't heard about such places before. Would you mind elaborating on the reason behind it? Difficult to set up everything locally? Security?

Michael J Dorian13:06:57

At that shop it was mostly to avoid the work involved in setting up local dev environments and keeping them on the same versions of various packages and libraries as our production target. At one point the deprecated cloud IDE we were using started to show cracks and we ended up setting up local environments for everyone just to save time, but management still wanted a centralized solution for themselves to use. You need a better reason than laziness, it's not easier.

Michael J Dorian21:06:23

All solvable of course, but someone had to solve it

lilactown21:06:47

I don’t know if I would want no code locally

lilactown21:06:26

I think what I really want is to be able to quickly deploy changes to a personal cluster/stack

noisesmith21:06:20

alternatively, we could write apps that have boundaries such that they can be developed in isolation

noisesmith21:06:24

one can dream at least

lilactown21:06:27

like for 80% I could just connect to a socket or nREPL connection and hack at things. periodically do a clean refresh of the service

lilactown21:06:36

yeah, that hasn’t worked out in practice thus far 😂

noisesmith21:06:53

we got pretty far with plugging transducers into kafka

noisesmith21:06:17

because that means kafka becomes an implementation detail, and most dev can just use the same transducer in a non-kafka context
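
A toy sketch of that idea (the xform and records are invented): the same transducer runs over a plain vector in dev and can be handed to the Kafka plumbing in production.

(def process-events
  (comp (map #(update % :amount bigdec))
        (filter #(pos? (:amount %)))))

;; dev / tests: no kafka anywhere
(into [] process-events [{:amount "10.5"} {:amount "-3"}])
;; => [{:amount 10.5M}]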

noisesmith21:06:42

but yeah, that requires a lot of buy in in terms of coding style and architecture and the expedient thing is just relying on the service

noisesmith21:06:58

(short term, at the very least)

seancorfield21:06:12

@noisesmith sounds interesting -- do you have that written up somewhere?

noisesmith21:06:44

we have a private repo where we experimented with this

noisesmith21:06:24

the jackdaw lib for kafka does have this feature merged and a small demo, but the nature of this stuff is you need something very big and very complex before the approach is vetted

noisesmith21:06:29

and that's a high bar

noisesmith21:06:14

because there are a lot of silly ways to develop an app that look great in a minimal demo :D

lilactown21:06:16

yeah we have a lot of services and use kafka / zookeeper / some other microservice-y stuff for service discovery and communication

lilactown21:06:25

that I don’t really have all of the context for

lilactown21:06:56

the other thing I’m thinking of is, I think similar to what you’re saying, having the ability to run things in the same JVM and have them communicate in-memory

noisesmith21:06:26

or minimize / isolate the complexity so that you can use a service with a vector of inputs, and get a vector of outputs back for confirmation

noisesmith21:06:43

in-memory being neccessary but not sufficient for that of course

lilactown21:06:54

but it’s still complicated by the need to run locally sql stores / other cloud services that are just bleh

noisesmith21:06:35

right - but my dream (usually unrealized) is that those are still sources/sinks of structured data, so there should be a design that lets you plug them safely

lilactown21:06:52

like one of the things we are struggling with atm is we want to validate some really complex SQL queries that run in presto

noisesmith21:06:25

yeah - in the design I'm talking about the query would be part of the "messy" stuff

noisesmith21:06:30

in some cases pulling complexity out of the query and into pure clojure is feasible, in some cases, of course, that's not going to be viable

noisesmith21:06:53

but at the very least you can ensure presto is the only live component you need to dev against to make that work

lilactown21:06:04

right it just balloons out from there IME

lilactown21:06:31

this service needs presto, this one needs postgres, this one needs some other thing

lilactown21:06:52

it’s further complicated where we are (I think wisely) building abstractions on top of this

lilactown21:06:22

so e.g. we have a service for running queries against sql stores. so if I want to exercise the system, I need to run that service + the service that knows the query that needs to be run

noisesmith21:06:09

I suspect (but don't have robust proof) that having adaptor protocols for all external resources should make things more robust and make it easier to use placeholder data during dev, but sadly I don't have a strong example that really proves it
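
A sketch of that adaptor-protocol shape (all names invented): one protocol per external resource, with an in-memory implementation for dev.

(defprotocol QueryStore
  (run-query [this sql params]))

;; production: backed by presto/jdbc (elided)
;; dev: canned data, no live services required
(defrecord InMemoryStore [tables]
  QueryStore
  (run-query [_ sql params]
    (get tables sql [])))

;; handler code depends only on the protocol
(defn orders-report [store]
  (run-query store "select * from orders" {}))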

lilactown21:06:31

yeah, it’s just a lot of work y’know?

lilactown21:06:51

like, why can’t I just run all this code! in the cloud! where compute is cheap!

noisesmith21:06:55

but is it more work than the ops drudgery of keeping everything hooked up and healthy locally?

noisesmith21:06:22

because there's a lot of friction to the cloud, and the little sharp corners of tools and stacks of tools start to add up

noisesmith21:06:25

in my experience

lilactown21:06:35

yeah I guess I want to solve that friction

lilactown21:06:40

or understand it more, I guess

lilactown21:06:51

since I haven’t even tried what I’m thinking

lilactown21:06:00

other than using datomic cloud, which is very similar to what I’m thinking about

noisesmith21:06:10

then you're putting the time into the tooling stack I guess, probably messing with things like k8s

noisesmith21:06:47

for institutional reasons, I'm motivated to solve as much in the architecture of my app instead of relying on the security and ops teams

noisesmith21:06:05

(we have a lot of laws and stakeholder promises about our data...)

lilactown21:06:14

understandable

lilactown21:06:27

I worked in health care before this gig 😄

noisesmith21:06:00

fintech - the laws are more lenient but the intrinsic motivation for others to break our stuff is higher
