2023-10-18
Channels
- # announcements (12)
- # babashka (6)
- # beginners (62)
- # calva (3)
- # cider (41)
- # clerk (5)
- # clojure (192)
- # clojure-bay-area (1)
- # clojure-europe (14)
- # clojure-norway (97)
- # clojure-uk (6)
- # clojuredesign-podcast (4)
- # clojurescript (30)
- # code-reviews (7)
- # cursive (32)
- # datahike (4)
- # datomic (35)
- # docker (8)
- # emacs (8)
- # events (1)
- # fulcro (13)
- # helix (19)
- # hoplon (4)
- # hyperfiddle (37)
- # jobs-discuss (10)
- # membrane (11)
- # missionary (19)
- # off-topic (28)
- # polylith (8)
- # portal (10)
- # practicalli (8)
- # re-frame (31)
- # reitit (6)
- # shadow-cljs (39)
- # timbre (3)
- # vim (1)
- # xtdb (6)
What's the best-fit channel for sharing solutions to leetcode-type problems?
I do not know of one. If no one else does, it sounds like a channel I would like! Some of the most informative threads on c.l.lisp followed from shared code everyone dissected. Maybe #C053PTJE6?
There's a #C01J99RN4G5 channel (I don't know its contents or purpose).
Seems dead, I guess I'll populate it :)
I have successfully compiled my first Clojure web app to a .jar file, which runs on my machine with java -jar myapp-0.1.1-standalone.jar
Next, I must package it in a Docker image, deploy it to Kubernetes (in Microsoft Azure) via GitHub Actions, and then run it there.
Does anyone know of a blog post or similar on how to do something like that?
I found these articles. Leaving them here, in case they are useful to others as well: https://www.metosin.fi/blog/packaging-clojure https://medium.com/@mprokopov/deployment-of-clojure-app-to-production-with-docker-9dbffeac6ef5 https://practical.li/blog/posts/build-and-run-clojure-with-multistage-dockerfile/
I don't see any article links - could you paste them in again? - as I'd be interested in reading about how to do this :-)
A multi-stage Dockerfile is an effective way to build the Clojure app in CI and deploy a minimal container in the cloud ☁️☁️☁️ https://practical.li/engineering-playbook/continuous-integration/docker/clojure-multi-stage-dockerfile/
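For reference, here's a sketch of such a multi-stage Dockerfile (untested; the image tags, directory layout, and jar name are assumptions based on a default Leiningen project):

# Build stage: official Clojure image with Leiningen
FROM clojure:temurin-21-lein AS build
WORKDIR /app
# Fetch dependencies first so this layer caches independently of source changes
COPY project.clj .
RUN lein deps
COPY . .
RUN lein uberjar

# Run stage: JRE-only image, much smaller than the build image
FROM eclipse-temurin:21-jre
WORKDIR /app
COPY --from=build /app/target/uberjar/myapp-0.1.1-standalone.jar app.jar
ENTRYPOINT ["java", "-jar", "app.jar"]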
@UV1JWR18U Weird! I still see the links in my previous comment, but here they are again 🙂 https://www.metosin.fi/blog/packaging-clojure https://medium.com/@mprokopov/deployment-of-clojure-app-to-production-with-docker-9dbffeac6ef5 https://practical.li/blog/posts/build-and-run-clojure-with-multistage-dockerfile/
That’s awesome, @U05254DQM! Thanks for the link. I think I found an older version of that site, but the one you shared is presented much more nicely.
If I have a “multi-module” project with libA and libB as subdirectories, can I define, at the root level a common deps.edn and “include” that from the libA/libB deps.edn files, so that I get consistent versions for common dependencies like cheshire etc? Or would I maybe create an empty lib “deps” with just a deps.edn, and then depend on that from both libraries?
you could make a sibling to libA and libB called common that declares the shared deps, and libs A & B could use cheshire transitively from common
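A sketch of what that might look like (coordinates and version are illustrative):

;; common/deps.edn: the one place shared versions are declared
{:deps {cheshire/cheshire {:mvn/version "5.12.0"}}}

;; libA/deps.edn: pick up the shared deps via :local/root
{:deps {common/common {:local/root "../common"}}}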
Okay, yep
if you’re OK with a preprocessing step that updates your deps files to keep them in sync, I’ve previously had good success using this approach: https://github.com/exoscale/deps-modules
the shared common utils approach is fine until you want more granular control of which deps are “common”
I’m just used to maven parents and <dependencyManagement> sections where we just declare the maven coords and in the children we only reference the artefact but without the version. Maybe your preprocessing does something similar to that.
Yeah, you can achieve something similar by using :override-deps and merging with a second "base" deps.edn file.
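Roughly like this (sketch; the alias name and version are illustrative):

;; root deps.edn: pin shared versions behind an alias
{:aliases
 {:pinned {:override-deps {cheshire/cheshire {:mvn/version "5.12.0"}}}}}

;; then invoke tooling with the alias, e.g. clj -A:pinned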
But there are many gotchas - e.g. the official Clojure tooling doesn't allow passing in multiple files, so you often see hacks that override the path to your home deps.edn file (which is always read and merged with your project-level one).
I've seen this break in too many ways where some tooling, linter, editor, etc. doesn't work the way you'd like and you get lots of paper cuts
I like the exoscale approach where it's a preprocessing step, you can just commit the file to git, and all your tooling works without knowing about any of this.
Thanks for sharing this!
Another example on the opposite side of the spectrum to solve this problem is #C013B7MQHJQ which "takes over" your classpath/project structure/etc. But then they work really hard to make all the tooling, etc. work for you :)
If you search for monolith or monorepo on #C6QH853H8 or maybe the general Slack search (especially with keywords like :override-deps), I'm sure you'll find lots of threads that go into all the nuance of what kind of paper cuts you can expect with various solutions.
I've been writing large-ish transducing functions, which are just data pipelines that do a bunch of stuff and persist things to a db but don't actually produce anything in the end. To use this pipeline I've been writing this:
(transduce my-data-pipeline-xf (constantly nil) input-data)
is there a better way of doing the call above? Something that executes a transduce but doesn't need this reducing function - i.e. in my case (constantly nil)
I don't think there's anything wrong with that (see run!), but it might be more beneficial to have your "write to db" function be the reducing function in this case.
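A sketch of both options (write-to-db! is a hypothetical stand-in for the actual persistence function):

;; Option 1: run! over an eduction, purely for side effects
(run! write-to-db! (eduction my-data-pipeline-xf input-data))

;; Option 2: make the db write the reducing function itself
(transduce my-data-pipeline-xf
           (fn ([acc] acc)                           ; completion
               ([acc item] (write-to-db! item) acc)) ; step: persist each item
           nil
           input-data)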
I do several writes to the db within the pipeline at different steps
Maybe that's ok. Without knowing too much about the use case, I would be worried about how the pipeline handles failures if writes are happening independently.
Ok great, thanks for the answer
I'm scraping a paginated API which returns json that references other paginated APIs, so I use iteration inside this transducing pipeline to go over all of these objects and persist them into my db
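For reference, clojure.core/iteration (Clojure 1.11+) over one such API might look like this (fetch-page! and the :next-cursor/:items keys are assumptions):

;; the step fn fetches one page; :kf extracts the token for the next request,
;; :vf extracts the items; iteration stops when the step fn returns nil
(def pages
  (iteration (fn [cursor] (fetch-page! {:cursor cursor}))
             :kf :next-cursor
             :vf :items
             :initk nil))

;; pages is reducible, so it can feed the pipeline directly, e.g.
;; (transduce (comp cat my-data-pipeline-xf) rf init pages)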
Ideally, you would build up either one large transaction, or maybe a sequence of transactions that get atomically applied with some appropriate error handling. A typical case where that's awkward is if you are writing to multiple destinations.
But, even for multiple output destinations, it's nice to have a reified log of logical transactions that gets produced separately from the process that applies them and does error handling.
That makes sense but I'm not sure how to do this with a transducer - for example
My pipeline has steps A, B, and C; at each step I want to write something to the db and then modify the object passed into each step. This modified object gets sent to the next step
Does the modified object depend on the result of writing to the db?
No it doesn't
in reality what this looks like is
map f
cat
write to db
map g
write to db
map h
cat
cat
filter
map
write to db
etc.
YMMV and it depends, but one approach that I like is to just pass along a map through the pipeline that accumulates more information.
At the very end (i.e. the reducing function), take the map and apply all the I/O.
How does this map sit next to the object I'm passing?
you just add another key-value to the map
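A sketch of the idea (f, g, and write-to-db! are stand-ins): each step assoc'es its result onto the map, and only the reducing function touches the database.

(def xf
  (comp (map (fn [x] {:foo (f x)}))                    ; step A records f's result
        (map (fn [m] (assoc m :bar (g (:foo m))))))) ; step B records g's result

(transduce xf
           (fn ([acc] acc)
               ([acc m] (write-to-db! m) acc)) ; all I/O happens here
           nil
           input-data)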
How does this work with cat? I definitely understand the bigger picture of what you're saying but don't fully get the specifics
Mind if I dm you?
What would be your idea for how to pass this map with a cat?
It kinda depends. https://github.com/cgrand/xforms can be helpful.
Will take a look, thanks!
I haven't tested this code, but here's some pseudocode with xforms:
(comp
 (mapcat f)
 ;; fan the stream out to parallel sub-pipelines; each one tags its
 ;; output with a distinct key so the reducing fn can tell them apart
 (x/multiplex
  [(map (fn [x] {:foo x}))
   (comp
    (map g)
    (x/multiplex
     [(map (fn [g] {:g g}))
      (comp (map h)
            cat
            cat
            (filter f)
            (map i)
            (map (fn [x] {:other x})))]))]))
It gets kind of messy, but you end up with a sequence of maps that are marked with the data. At the end, you just have something that checks the keys and writes them to the db.
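Something like this at the end (hypothetical db helpers):

(defn write! [m]
  (cond
    (:foo m)   (write-foo! (:foo m))
    (:g m)     (write-g! (:g m))
    (:other m) (write-other! (:other m))))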
I probably wouldn't write it as one big hairy transducer, but write the pipeline using a format close to the way I think about the problem and then have another function that takes the description and returns the pipeline.
I see!
That makes a lot of sense
the function that takes the description and returns a pipeline can also add error handling or you can do the error handling in the reducing function that writes to the db
and you can have a policy that independently decides: • quit on first error • ignore errors • log errors, but proceed • log errors, but quit after X errors • etc
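A sketch of a pluggable policy wrapped around the db-writing reducing function (with-error-policy is made up, not an existing library):

(defn with-error-policy
  "Wraps reducing fn rf. policy is called with the running error count
   and the exception, and returns :quit or :continue."
  [rf policy]
  (let [errors (atom 0)]
    (fn
      ([] (rf))
      ([acc] (rf acc))
      ([acc x]
       (try
         (rf acc x)
         (catch Exception e
           (case (policy (swap! errors inc) e)
             :quit     (reduced acc)
             :continue acc)))))))

;; e.g. log errors but quit after 10:
;; (with-error-policy write-rf
;;   (fn [n e] (println "error:" (ex-message e)) (if (>= n 10) :quit :continue)))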
What do you mean by "description"?
Also, generating a sequence of transactions can be more efficient since it's trivial to batch and pipeline them.
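For instance (sketch; write-batch! is hypothetical):

;; group the generated transactions into batches of 100 before applying them
(run! write-batch!
      (eduction (comp my-data-pipeline-xf (partition-all 100))
                input-data))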
> What do you mean by "description"? The way I like to think about it is if you wanted to describe your pipeline to someone else or "future you", what would you write down on a whiteboard or piece of paper.
That description can almost always be mapped to some terse edn description.
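As a rough sketch of that idea (the ops table and description format are made up for illustration):

(def ops {:map map, :mapcat mapcat, :filter filter})

(defn desc->xf
  "Turns a terse description like [[:mapcat f] [:filter even?] [:map inc]]
   into a single transducer."
  [desc]
  (apply comp (map (fn [[op f]] ((ops op) f)) desc)))

;; (into [] (desc->xf [[:mapcat seq] [:filter even?] [:map inc]]) [[1 2] [3 4]])
;; => [3 5]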
Interesting, I see
But this is all just general advice and food for thought based on similar projects I've done in the past. It may or may not be overkill for your use case. Depending on the number of items in the pipeline, I might just write something simple like:
(let [foos (into []
                 (mapcat f)
                 input)
      bars (into []
                 (map g)
                 foos)
      bazs (into []
                 (comp (map h)
                       cat
                       cat)
                 bars)]
  ;; run transactions
  )
It does mean you will have to hold all the intermediate results in memory, but that's not always a problem.