#clojure
2021-07-06
xlfe04:07:13

Can anyone explain the following - whether it's a bug in org.clojure/math.combinatorics? or not?

(require '[clojure.math.combinatorics :as combo])

(defn order-preserving-combinations
  [a]
  (filter
    (fn [c]
      (= a (flatten c)))
    (combo/partitions a)))

(let [a (order-preserving-combinations [7 4 2 1 3 2 6 3 4])
      b (order-preserving-combinations [7 4 2 1 3 2 6 3 3])]
  (print "a" (count a) "b" (count b)))
The length of a is 0 while the length of b is 8 - live Nextjournal REPL at https://nextjournal.com/xlfe/bug-in-orgclojuremathcombinatorics-?

indy04:07:43

I think (combo/partitions a) doesn't guarantee order and so (= a (flatten c)) is false. Hence (order-preserving-combinations [7 4 2 1 3 2 6 3 4]) is empty

xlfe04:07:54

thanks indy

dpsutton04:07:33

you have an expectation that the partitions will retain the same order across all of the partitions as the original collection?

xlfe04:07:50

ah right. I need only the partitions that are in the same order as the original array, hence the filter. hmm

xlfe04:07:51

what I found weird was that b works but a doesn't. (consistently)

xlfe04:07:36

given that combo/partitions doesn't guarantee order, what's the best way to get all partition combinations that do preserve order?
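
(One way to do this without filtering, sketched below: an order-preserving partition of a sequence is just a choice of split points between contiguous blocks, so there are 2^(n-1) of them. `ordered-partitions` is a hypothetical helper, not part of math.combinatorics; it only relies on `combo/subsets`, which does exist.)

(require '[clojure.math.combinatorics :as combo])

;; Every way to cut coll into contiguous, order-preserving blocks:
;; pick any subset of the n-1 "gaps" between elements as split points.
(defn ordered-partitions [coll]
  (let [v (vec coll)
        n (count v)]
    (for [splits (combo/subsets (range 1 n))]
      (mapv #(subvec v %1 %2)
            (cons 0 splits)
            (concat splits [n])))))

(count (ordered-partitions [7 4 2 1 3 2 6 3 4])) ;=> 256, i.e. 2^8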

phronmophobic05:07:28

I don't know enough about what partitions is supposed to do to say, but it is interesting that there is a different algorithm for a collection of unique items than for a collection with duplicates, https://github.com/clojure/math.combinatorics/blob/master/src/main/clojure/clojure/math/combinatorics.cljc#L943

phronmophobic05:07:27

either way, it doesn't seem like order matters in a partition:

> (->> (combo/partitions [7 4 2 1 3 2 6 3 4])
       (filter #(= 1 (count %))))
(([7 4 4 2 2 1 3 3 6]))

Nazral06:07:27

Has anybody ever used dear imgui in clojure? I've found these java bindings for it https://github.com/SpaiR/imgui-java but no clojure ones

Nazral06:07:38

In any case I'm not sure how to build java interop with this specific part of the example they give:

public static void main(String[] args) {
        launch(new Main());
    }
how does this translate to Clojure? https://github.com/SpaiR/imgui-java
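
(A rough, untested sketch of how that entry point might translate, assuming imgui-java's imgui.app.Application class is publicly launchable and exposes the configure/process lifecycle methods from its README example; the window title is just a placeholder:)

(import '(imgui.app Application))

(defn -main [& _args]
  ;; `proxy` plays the role of the Main subclass in the Java example.
  (Application/launch
    (proxy [Application] []
      (configure [config]
        (.setTitle config "Hello from Clojure")) ; assumed Configuration setter
      (process []
        ;; build the ImGui frame here each tick
        ))))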

Nazral06:07:20

Never mind, I accidentally managed to get it working 😄

greg08:07:02

Hi, I've got a question about code style and best practices. The inspiration for this question is the saying that it is better to have 100 functions that operate on one data structure than 10 functions that operate on 10 data structures. Have a look at the below example:

(defn process [{:keys [_get-rate-fn] :as opts} txs]
  (let [opts (merge {:report-currency "GBP" :precision 4} opts)]
    (-> {:txs txs}
        (process-common/coerce-data)
        (process-common/fetch-rates opts)
        (process-common/add-totals)
        (process-common/convert-currency opts)
        (process-common/calc pl-rules)
        (process-common/calc-summary))))
There is a process fn containing a pipeline of operations. Everything starts from a map holding one key: txs, which is a collection of data. Let's call it map-at-hand; the output is the same map-at-hand but modified and extended (holding the same txs but coerced and modified, plus extra data like which rates are used and the results of the rules calculation for the given report). The question is about passing parameters to each step in the pipeline. Some of them take one argument (only that map-at-hand), some of them take two, where the second is the opts (holding stuff like :report-currency, digits precision, or the data source for rates). I was wondering, wouldn't it be better to just merge that map and opts into one map passed through the steps, kind of a context that changes at each step? On the one hand some of the functions might have access to data they don't need; on the other hand, it is just easier to manage and there is no problem with positional arguments. What are your thoughts on this?
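
(For illustration, the two shapes being compared might look like this - the step bodies and keys below are made up:)

;; 1. map-at-hand + opts (the current style): each step takes the data
;;    map first and destructures only the opts keys it needs.
(defn convert-currency [data {:keys [report-currency precision]}]
  ;; placeholder body - a real step would convert (:txs data)
  (assoc data :report-currency report-currency :precision precision))

;; 2. One merged "context" map threaded through every step: easier to
;;    pass around, but every step can read - and overwrite - any key.
(defn convert-currency* [{:keys [txs report-currency precision] :as ctx}]
  ;; placeholder body
  (assoc ctx :converted-txs txs))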

solf08:07:17

Imo it becomes painful when the functions you use don’t have the same conventions: The data map sometimes in 1st, sometimes in 2nd position. The options being passed sometimes as a map, sometimes as positional arguments. Etc. Your example has a convention: data in first arg, optional arguments in a map as 2nd argument. It looks fine to me.

solf08:07:51

Merging both maps into one has an additional problem: how to separate the actual data (that might need to be sent back in an http response, or stored in a db) from the optional arguments.

greg10:07:18

Separation of actual data from opts is fairly simple, just selecting a few keys I need. I think, though, that passing the whole context map (map-at-hand combined with opts), with all its content, to one of the sub-functions (e.g. some fns used from add-totals) smells bad. It would break the Interface Segregation Principle. Functions would know too much. At least if we talk about OOP. But we are not OOPing here. What do you think about the ISP principle in the context of Clojure and that example?

solf11:07:34

> Separation of actual data from opts is fairly simple, just selecting a few keys I need.
Having to select at the end which keys are relevant data and which are not is actually a smell for me 😅

Maxime D11:07:30

Hi @U023TQF5FM3, just reading about OOP vs FP and trying to "unlearn" years of OOP ;) https://learning.oreilly.com/library/view/object-oriented-vs-functional/9781492048138/ (I have an O'Reilly subscription; the author is Richard Warburton). You can find one of his talks here: https://www.youtube.com/watch?v=NHN_mqyjOCg.

solf11:07:38

Functions don't really know too much, unless they are looking through the whole opts object, but why would they? They can destructure it at the signature level (`(fn [_ {:keys [a b]}] ...)`) and keep only the values that are relevant to them. This is actually another argument in favor of keeping two separate maps: in your original example, opts is effectively immutable. It gets passed through multiple functions, but it cannot be mutated by those functions. If you mix all of them in a single map, each function could mutate the `opts` values (since the map would be threaded through them)

greg12:07:55

@U027H4FHMBK I wouldn't say that all of OOP is crap and needs to be unlearned 🙂 Some of the guidelines are not that bad. I mean, there is some stuff that you definitely should not replicate when writing FP - I wrote some OOP in Clojure in the past, I learned my lesson 😅😂 Anyway, I will definitely watch that talk. Thanks for posting it 😉
@U7S5E44DB
> Functions don't really know too much
Yeah, I had the same thought, I just didn't know yet why mixing them smells bad 🙂
> It gets passed through multiple functions, but it cannot be mutated by those functions. (...) If you mix all of them in a single map, each function could mutate the `opts` values
Yes, the mutation. It makes sense to keep opts impossible to mutate, although I'm not applying the same rule to the map-at-hand (first arg).
> Having to select at the end which keys are relevant data and which are not is actually a smell for me
Yeah, I know, but I'm doing it already. Some of the steps create keys that are only needed temporarily - within the -> thread - by some of the other steps. Without that, I would end up with a complicated let. Such a let would show what each step takes and produces, but overall the process function wouldn't be as readable (it wouldn't be as clear what the process does), so I moved the interface of what each step needs and returns into the step functions. At the moment the process fn doesn't contain the key-selection step, but it will be there. I thought that as long as it stays within the boundaries of the function it is alright. In the end, if it were one big let, I wouldn't return the temporary data. Same here.

nate sire13:07:22

what does "txs" stand for? is it data related to ecommerce?

greg13:07:49

it refers to purchases, sales, withdrawals, deposits - actually all operations that affect the balance on the trading account. Why do you ask?

nate sire13:07:20

just reading the code and wondering if txs was a Clojure idiom

nate sire13:07:19

txs is transactions

nate sire13:07:12

ahh, I see... the function is doing currency conversions based on the British pound?

ghadi14:07:22

@U023TQF5FM3 depending on requirements, you may want to use an interceptor-style framework to execute all the steps in the pipeline

ghadi14:07:20

you may want to terminate the flow early, or report on individual steps generically

greg14:07:14

@U050ECB92 can you share a link on this approach? Googling doesn't give me anything specific

ghadi14:07:57

you can use pedestal's interceptor library divorced from HTTP

ghadi14:07:31

(I use pedestal interceptors to handle some things for Kafka and SQS processing. There's one interceptor to deserialize, one to validate the message, one to handle, etc.)
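
(A minimal sketch of that approach, divorced from HTTP - it assumes the io.pedestal.interceptor dependency is on the classpath, and the step names and bodies here are placeholders:)

(require '[io.pedestal.interceptor :as i]
         '[io.pedestal.interceptor.chain :as chain])

(def coerce-data
  (i/interceptor
    {:name  ::coerce-data
     :enter (fn [ctx] (update ctx :txs vec))}))             ; placeholder coercion

(def add-totals
  (i/interceptor
    {:name  ::add-totals
     :enter (fn [ctx] (assoc ctx :total (count (:txs ctx)))) ; placeholder calculation
     :error (fn [ctx ex] (assoc ctx :error (ex-message ex)))}))

;; chain/execute threads the context through the :enter fns in order (and
;; :leave fns in reverse); chain/terminate lets a step stop the flow early.
(defn process [opts txs]
  (chain/execute (merge {:txs txs} opts)
                 [coerce-data add-totals]))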

greg14:07:56

Ok, so this is something similar to what I'm doing. I'll have a look at it. Thanks a lot @U050ECB92

greg14:07:24

At this point it might be a little overkill, but I will add a note in the comments in case it needs refactoring when it expands into a more complicated pipeline

ghadi14:07:27

yeah -- same end result, but the mechanism is slightly different, a bit more ceremony

greg14:07:55

@U01F83MR4KV it is the current default. It should most probably be refactored out of the scope of this function. Solving problems when they are problems 😄

👍 2
nate sire14:07:33

yea, later, you can remove the hardcoded "GBP" and rename a couple things to explain the code... after you decide on the best design pattern.

nate sire14:07:12

@U050ECB92 would an interceptor pattern be more applicable than a transducer (morphing collections into other collections of data, without knowing about data details) ? is a transducer more abstract than an interceptor?

greg14:07:48

@U01F83MR4KV I think transducers fit collection transformations better. In my case, and in the case of interceptors, we are talking about the transformation of a single map (if I understand the interceptor pattern correctly). Of course you can use transducers here (functions returning a transformation function), and you can combine them, but there is no benefit to using them in this case.

nate sire14:07:46

ahhh. transducers would be actually changing the collection keys... not just changing some data

nate sire14:07:09

transducers would be overkill for this

greg14:07:15

The rule of thumb I use is: if I see a ->> thread, it is time to (not necessarily apply, but at least) consider transducers.
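
(A generic illustration of that rule of thumb, with made-up data and keys:)

(def txs [{:amount 10 :settled? true}
          {:amount 5  :settled? false}
          {:amount 7  :settled? true}])

;; a ->> pipeline over the collection...
(->> txs (filter :settled?) (map :amount) (reduce +))      ;=> 17

;; ...maps directly onto a transducer, avoiding intermediate sequences:
(transduce (comp (filter :settled?) (map :amount)) + txs)  ;=> 17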

emccue11:07:01

Best way to generate a secret, `rails secret` style?

greg11:07:36

I've never used rails. Do you mean some kind of a keystore stored within the project?

nate sire12:07:37

AWS has a couple tools for "secret" management.

nate sire12:07:57

this is the most secure option because there is no chance your secrets are leaked out

nate sire12:07:58

many times, I see projects keeping AWS keys in their secrets... which is a big security risk. You should use AWS IAM and AWS Secrets Manager.

nate sire12:07:38

One time, I saw part of a code base leaked out to the public... with EC2 keys... and the team had a $25k AWS server bill the next morning from crypto miners hijacking their account... but AWS was kind enough to waive the fees.

nate sire14:07:46

@U3JH98J4R are you referring to encrypted secrets introduced in Rails 5.1?

emccue14:07:19

rails just has a command that spits out a secret you can put on aws or in your config or whatever

emccue14:07:52

like for signing a jwt or session store or whatever
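
(For the generation part itself, a minimal Clojure sketch using java.security.SecureRandom - 64 random bytes rendered as 128 hex characters, which is roughly what `rails secret` produces:)

(import 'java.security.SecureRandom)

(defn generate-secret
  ([] (generate-secret 64))
  ([num-bytes]
   (let [bs (byte-array num-bytes)]
     (.nextBytes (SecureRandom.) bs)
     ;; hex-encode each byte, e.g. for pasting into an .env or parameter store
     (apply str (map #(format "%02x" %) bs)))))

(generate-secret) ;=> a 128-character hex string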

greg14:07:53

@U3JH98J4R Ok, so you do refer to a kind of keystore within the app. I don't think there is any ready-to-go solution for that. If you want to ship your app with secrets embedded in your binary, you probably need something based on Java keystores. As an alternative, you can always pass the secrets via env vars, or use AWS Secrets Manager as @U01F83MR4KV already mentioned.

emccue14:07:14

yeah no, currently doing both env vars and parameter store

emccue14:07:34

just generating the things without being on full secret manager is annoying

emccue14:07:59

i literally just use the rails command, but i'm starting to feel silly

nate sire14:07:22

let me check if aws cli has a command to generate secrets... I like my cli too...

nate sire14:07:34

awesome, aws cli to the rescue :)

emccue14:07:51

that sounds like it is for storing it in secrets manager

emccue14:07:16

i just want to generate a string i can copy paste into my .env or parameter store

nate sire14:07:28

where do you need to store your secrets? in an exe? or on a server?

nate sire14:07:00

I'd be careful using .env because if it gets added into the repo... then leaked

nate sire14:07:33

make sure your code stays private... Bitbucket vs Github just as example

emccue14:07:13

.env is not even in the repo directory and that's for local dev

emccue14:07:26

we inject secrets via secure strings in parameter store

nate sire14:07:34

just for your local dev

nate sire14:07:47

you might want to add a git pre-push hook... that refuses to push if a .env file is added to the repo

emccue14:07:58

that part is already handled

emccue14:07:19

it's just not even in the repo directory and we have a command to load in the stuff

emccue14:07:45

but it's also not where we store prod stuff

nate sire14:07:31

ok... just giving you my tips based on how we handled this at a cyber security company... we didn't allow .env secrets

nate sire14:07:40

it's not an "either or"... just my opinion on picking the safer route

nate sire14:07:43

you should be ok... the biggest risk with .env secrets is GitHub leaks

nate sire14:07:40

example... an .env file gets added to a repo before .gitignore is applied

jaihindhreddy15:07:01

Is it possible to loop over the keys of a transient map? Out of curiosity, I'm trying to implement a version of group-by that uses transient vectors as the map-values for the reduction, makes them persistent in a loop, and then makes the whole map persistent. So far, I didn't find a way to "loop over" the transient map, so I'm separately tracking the keys in a set, like this:

(import 'java.util.HashSet)

(defn group-by-transient [f coll]
  (let [ks (HashSet.)
        grouped (reduce
                  (fn [ret x]
                    (let [k (f x)]
                      (.add ks k)
                      (assoc! ret k (conj! (or (get ret k) (transient [])) x))))
                  (transient {}) coll)]
    (doseq [k ks] (assoc! grouped k (persistent! (get grouped k))))
    (persistent! grouped)))

noisesmith15:07:53

what's the goal here - produce less garbage, consume less memory, spend less time creating the result?

quoll15:07:06

OK, I have 3 points here…
1. The API doesn’t provide this. Yes, it’s theoretically possible to delve into the transient structure, to duplicate how the persistent version creates a seq, but that would need a LOT of code, and almost certainly lead to bugs. You’re basically writing your own implementation at that point. The best way to do it would be to track on the side, like you’re doing.
2. Why use a java.util object? You’re scoped to a let block, so a Clojure volatile would work:

(defn group-by-transient [f coll]
  (let [ks (volatile! #{})
        grouped (reduce
                  (fn [ret x]
                    (let [k (f x)]
                      (vswap! ks conj k)
                      (assoc! ret k (conj! (or (get ret k) (transient [])) x))))
                  (transient {}) coll)]
    (doseq [k @ks] (assoc! grouped k (persistent! (get grouped k))))
    (persistent! grouped)))
3. I’ve actually gone down a similar track before, using nested transient objects when I’m accumulating a lot of data in one hit. It ended up NOT being any faster! So I would measure what benefits you’re actually getting

jaihindhreddy15:07:06

Purely curiosity. But yeah, producing less garbage was my intent. I wanted to measure it with a few different kinds of data to see what happens. edit: Just playing around with transients. I would absolutely not do something like this in a real-life situation.

noisesmith15:07:39

@U051N6TTC if the goal is to do less work (fewer allocations for example), putting an immutable set in a volatile doesn't really help

noisesmith15:07:34

and I suspect that a version of this code that simply used mutable objects would be less complex, and much better performing

quoll15:07:46

yup, but a transient can go there just as easily. And jumping to that seemed like more than I needed to say

noisesmith15:07:20

I've never seen transients make code simpler

quoll15:07:50

You’ll see that I responded before the goal was stated. But even so, once the choice to use Clojure is made, avoiding the Clojure constructs for this seems counterintuitive. I agree that the HashSet. will be faster, do less work, and have less code in this instance. I balk at it though, since I mostly write in .cljc files

quoll15:07:14

Also, there’s already a commitment to transients here, so why not go all the way? 🙂

(defn group-by-transient [f coll]
  (let [ks (volatile! (transient #{}))
        grouped (reduce
                  (fn [ret x]
                    (let [k (f x)]
                      (vswap! ks conj! k)
                      (assoc! ret k (conj! (or (get ret k) (transient [])) x))))
                  (transient {}) coll)]
    (doseq [k (persistent! @ks)] (assoc! grouped k (persistent! (get grouped k))))
    (persistent! grouped)))

quoll15:07:24

If I were after raw performance, then yes, I’d probably use Java/JavaScript objects. (I don’t target .net)

noisesmith15:07:05

right, and the first question to answer is "is the performance gain worth the effort"? too often in the real world ™️ things like this are motivated by curiosity rather than need and make application code worse

✔️ 2
quoll15:07:01

That was my point 3. I needed nested maps, and while it was faster to build everything with transients, at the end I had to walk the tree to make everything persistent, and I lost all the performance benefits. This use case is simpler, but it reminded me of what I saw. After all, this code is not doing lots of insertions into the ends of the same vectors over and over. It’s jumping back and forth between vectors, and adding one at a time. It’s most likely faster to execute the reduce operation, but not necessarily by very much. Then the final step of moving to persistence will take time.

💯 2
quoll15:07:54

It’s an interesting exercise, and I encourage this pursuit for the sake of curiosity.

quoll15:07:23

What it taught me was persistent structures work better than I expected 🙂

2
noisesmith15:07:35

haha I guess I get a little prickly because this sort of curiosity has led to bad code (at my hands and by collaborators) but I think we are in agreement here

noisesmith16:07:26

btw here's a first stab at using mutability under the hood (function is still pure - no escape of mutable impl details):

(import (java.util HashMap ArrayList))

(defn dirty-group-by
  [f coll]
  (let [grouped (HashMap.)]
    (doseq [element coll]
      (let [k (f element)
            existing (.get grouped k)
            entry (or existing
                      (ArrayList.))]
        (.add entry element)
        (.put grouped k entry)))
    (into {}
          (map (fn [entry]
                 [(.getKey entry) (vec (.getValue entry))]))
          grouped)))

💯 2
jaihindhreddy16:07:58

Well, my curiosity about this was piqued when I saw https://gist.github.com/ericnormand/33d040d1568d1f0253707e42a1cdba85 in Eric Normand's newsletter this week, and I was trying to make it faster (https://gist.github.com/ericnormand/33d040d1568d1f0253707e42a1cdba85#gistcomment-3804146). ^ That was the goal 😅

noisesmith16:07:28

haha I didn't realize this was a challenge, might be time to do some benchmarking
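
(If it does come to benchmarking, criterium gives steadier numbers than a bare `time` - a quick sketch, assuming the criterium dependency is on the classpath and the definitions above are loaded:)

(require '[criterium.core :as crit])

(def input (doall (range 10000)))
(crit/quick-bench (group-by even? input))       ; built-in group-by as a baseline
(crit/quick-bench (dirty-group-by even? input)) ; the mutable version above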

noisesmith15:07:10

OK I got around to testing it and the perf is hilariously bad

user=> (def input (doall (range 10000)))
#'user/input
user=> (time (dotimes [_ 10000] (group-by even? input)))
"Elapsed time: 7657.273975 msecs"
nil
user=> (time (dotimes [_ 10000] (dirty-group-by even? input)))
"Elapsed time: 121684.700624 msecs"
nil

3
jaihindhreddy21:07:19

Well, there's reflective code in there. Here's an impl with typehints to get rid of reflection:

(import (java.util HashMap ArrayList HashMap$Node))

(defn dirty-group-by
  [f coll]
  (let [grouped (HashMap.)]
    (doseq [element coll]
      (let [k (f element)
            entry (or (.get grouped k) (ArrayList.))]
        (.add ^ArrayList entry element)
        (.put grouped k entry)))
    (into {}
          (map (fn [^HashMap$Node entry]
                 [(.getKey entry) (vec (.getValue entry))]))
          grouped)))
And the measurements:
user=> (def input (doall (range 10000)))
#'user/input
user=> (time (dotimes [_ 10000] (group-by even? input)))
"Elapsed time: 8568.308124 msecs"
nil
user=> (time (dotimes [_ 10000] (dirty-group-by even? input)))
"Elapsed time: 2955.335518 msecs"
nil
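
(As an aside, the reflective call sites in the un-hinted version can be spotted ahead of time by turning on reflection warnings before evaluating it:)

(set! *warn-on-reflection* true)
;; re-evaluating the un-hinted dirty-group-by then prints a reflection
;; warning for each call the compiler cannot resolve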

jmckitrick15:07:25

I'm trying my hand at streaming data from an endpoint using next.jdbc. We have an existing endpoint that does not use datafiable-row but it seems to work. My endpoint does not work without it. Otherwise, they are very similar: run a query (sqlvec format) via plan, reduce over a writer, write-csv, and wrap it all in a piped-input-stream.
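
(For context, a rough sketch of that pattern - the SQL, clojure.data.csv, and the raw java.io pipes below stand in for the real endpoint's helpers; the key point is that rs/datafiable-row realizes each row inside the plan reduction so things like vals work on it:)

(require '[next.jdbc :as jdbc]
         '[next.jdbc.result-set :as rs]
         '[clojure.data.csv :as csv]
         '[clojure.java.io :as io])
(import '(java.io PipedInputStream PipedOutputStream))

(defn csv-stream [ds sqlvec]
  (let [in  (PipedInputStream.)
        out (PipedOutputStream. in)]
    (future
      (with-open [w (io/writer out)]
        (reduce (fn [_ row]
                  ;; realize the row so it behaves like a full hash map
                  (csv/write-csv w [(vals (rs/datafiable-row row ds {}))]))
                nil
                (jdbc/plan ds sqlvec))))
    in))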

seancorfield16:07:33

Happy to try to help here if you want to take it to #sql and ask a specific question -- I'm not sure what your actual question is in the above.

jmckitrick18:07:50

Oh sorry, ok. I might have figured it out, but I'll be sure to move to #sql if not. Thanks!

jmckitrick15:07:56

The other endpoint does do some business logic on the data, but it does not call datafiable-row for sure.

dspiteself15:07:08

Is anyone working on functional effects systems in Clojure?

noisesmith16:07:45

what does that even mean without a type system?

noisesmith16:07:37

we can't make those kinds of guarantees because we don't have the right guardrails to even make it exist

noisesmith16:07:51

(as I understand it at least)

dspiteself16:07:18

It would be a "effectful program" as datastructure that could be "ran" by a runtime.

dspiteself16:07:01

not saying people would not shoot themselves in the foot.

noisesmith16:07:03

OK but before that even means anything, you need a system with certain guarantees, none of which clojure offers

noisesmith16:07:24

we have the io! macro, but it isn't used consistently, even in clojure.core

noisesmith16:07:10

it seems like a cool idea, I'm just skeptical that clojure would be the place to do it

👀 3
noisesmith16:07:13

LOL - the wiki article for effects systems mentions java's checked exceptions, and clojure notably ignores that entirely

noisesmith16:07:12

on the other hand, if that's enough to count, maybe something interesting could be done in clojure, I'd be fascinated to be wrong here

isak16:07:30

Of those I think only the re-frame effect concept comes close, but you could argue it is just a convention. I.e., never do X in a Y.

noisesmith16:07:45

oh, does component count as an effects system because it uses data to describe things with effects then arranges / orders them for you?

phronmophobic16:07:47

I'm not sure who is in charge of naming and can't really say whether they are "effects systems", but generally, I think they're all different ways (with different trade-offs) to solve the problem of wrangling side effects.

isak16:07:52

I think effect systems are about more than managing side effects, because if that were enough you could say a function naming convention is also an effect system (e.g., a rule that side-effecting functions must end with a !, and you can never call a ! function from a non-`!` function.)

phronmophobic16:07:40

Most of them are mini-DSLs where you pass in a map which gets "run" by the system and hopefully is structured enough to reason about and debug

noisesmith16:07:40

if I were cynical I could say "oh, you mean an interpreter?"

noisesmith16:07:56

I think there's something more interesting going on here though

phronmophobic16:07:02

and you would be right

isak16:07:21

It is probably a subjective standard, because I think most people would agree Haskell has it, even though I think you can technically do unsafe IO anywhere IIRC

phronmophobic16:07:21

right, clojure programs that solve this problem rely less on static proofs and tend more towards DSLs with data

noisesmith16:07:59

I claimed haskell had an effects system once and got angrily corrected by a haskeller, since then I've avoided thinking I really know what effects systems are

🙂 4
😂 2
noisesmith16:07:12

what would a clojure effects system help me do? what would it make easier / harder?

dspiteself16:07:09

zio is really the reference for me for a fast runtime on the jvm. https://github.com/leonoel/missionary is the best match I have found, but as far as I can tell you cannot inspect the "effectful program"

dspiteself16:07:10

What I would like is an effect system in Clojure, as another way to composably handle retry and resource management in async programs.

noisesmith16:07:58

there's a lot here I don't get, but I'm suspicious because I've observed a strong pattern where attempts to wrap effects / state in pure abstractions increase complexity instead of decreasing it, and the higher up in the abstraction stack the "purity" is introduced the more complexity is added. but once again this is only my gut instinct and there's a bunch here I don't understand

dspiteself16:07:23

I was playing around with zio for the first time yesterday and I have to say I found it really nice. Oh and it has built in tracing to give you real stack traces for async computations.

noisesmith16:07:35

that's a nice feature

dspiteself17:07:10

A simplified version of the idea would basically wrap each "effect" in a data structure as a thunk that the runtime calls, following the rules for evaluating the data structure.
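
(A toy sketch of that idea - every name here is invented, it's just the shape: the "effectful program" is plain data whose leaves are thunks, and a small runtime walks it and decides how each node runs:)

(defn effect  [thunk] {:op :effect :thunk thunk})
(defn then    [a b]   {:op :then :first a :second b})
(defn recover [a b]   {:op :recover :try a :fallback b})

(defn run [node]
  (case (:op node)
    :effect  ((:thunk node))
    :then    (do (run (:first node)) (run (:second node)))
    :recover (try (run (:try node))
                  (catch Exception _ (run (:fallback node))))))

;; Because the program is just data, a retry/timeout policy can be woven
;; into the structure before it is run - the "instrument after the fact" part.
(run (then (effect #(println "connect"))
           (recover (effect #(throw (ex-info "boom" {})))
                    (effect #(println "fallback")))))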

phronmophobic17:07:04

https://github.com/Day8/re-frame-10x has a nice dashboard for inspecting what happened when events fired. I think fulcro does as well. It should be possible to instrument any of the "systems for effects" I listed above.

noisesmith17:07:41

what about plumatic/graph - you provide a hash map from tokens to calculations, then give it a rule for simplifying (e.g. it could be sync or async and may or may not use threads) https://github.com/plumatic/plumbing

noisesmith17:07:05

(nb. the calculations can refer to tokens in the map and are auto-resolved)

noisesmith17:07:55

I guess that reifies the rules but not the states

dspiteself17:07:20

some building blocks are:
- evaluate b after a is completed
- evaluate b if a fails
- evaluate a and b concurrently
- retry a x times if a fails
- time out a if it does not complete within x ms

dspiteself17:07:38

again you would want the smallest set of primitives.

dpsutton17:07:14

didn't zach tellman make a macro that behaves somewhat like this? at least for a starting point

dspiteself17:07:38

and there is nothing stopping you from compiling the data structure after you have had a chance to edit it with your policy

dspiteself17:07:49

yea, he wrote let-flow; I use it all the time

dspiteself17:07:28

but it just adds callbacks to a pipeline

dspiteself17:07:26

making a promise or deferred chain would be one implementation of a runtime

isak17:07:19

This is a little different, but somewhat related, in case you haven't seen it: https://github.com/resilience4clj/resilience4clj-circuitbreaker#effects

dspiteself17:07:21

likely one of the faster implementations could be making a state machine like core async's go macro does

noisesmith17:07:34

to some outside observers, I think this would all sound like we've delved so deep into our own navels that we've somehow forgotten what a "program" is, and we've become so absorbed in our data structures that we want to redefine a program as a special case of data structure (I guess this reaches back to the old "code is data / data is code" of lisp)

respatialized17:07:37

whoops, I am doing exactly this with state machines right now

noisesmith17:07:14

I recognize it because I've caught myself more than once doing it myself

noisesmith17:07:09

but I also recognize that these abstractions can be valuable - I just get fuzzy on when they are worth the effort to sort out vs. when they will waste my time and lead to code that is worse than the naive solution

2
respatialized17:07:14

I am doing it intentionally though, mostly to see how far I can take the concept without trying to reinvent a type system

respatialized17:07:25

It's an experiment rather than a production system

Noah Bogart17:07:42

these deep dives/navel gazing moments are also some of the most fun to be had in programming, i’ve found. generally don’t have a lot that’s directly applicable after i’m done but i’ve learned a lot about the problem domain and sated the curiosity

respatialized14:07:03

https://fabricate-site.github.io/fabricate/finite-schema-machines.html here it is, my experience report on "redefin[ing] a program as a special case of data structure" with the help of malli schemas

dspiteself17:07:07

probably right

dspiteself17:07:29

I just want to instrument retry policy after the fact

dspiteself17:07:57

and this is one of the few ideas that would make that tractable

noisesmith17:07:44

so there's some overlap with task monitoring / work queues etc. as well

dspiteself17:07:31

Yea all at the cost of a more sophisticated runtime.

Adam Helins18:07:31

Is there anything happening these days when it comes to hot reloading Java classes from the REPL? A long time ago I used Virgil but it was often misbehaving

jdkealy19:07:15

How do you put an env var in project.clj? Datomic has you put credentials in project.clj like this

:repositories {"" {:url ""
                                   :username ""
                                   :password "my-long-password"}}

seancorfield19:07:01

I think you can ~ escape code to be executed in Leiningen project.clj files? So :password ~(System/getenv "DATOMIC_PASSWD") might work.
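
(Roughly like this in a project.clj - the repository name, URL, and env-var names below are placeholders; the ~ unquote itself is standard Leiningen behaviour:)

:repositories [["private-repo" {:url      "https://example.com/repo"
                                :username ~(System/getenv "REPO_USERNAME")
                                :password ~(System/getenv "REPO_PASSWORD")}]]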

seancorfield19:07:15

(it's been years since I used lein tho')

dpsutton19:07:34

i see this in cider-nrepl. No idea if this is a plugin or standard but seems convenient: https://github.com/clojure-emacs/cider-nrepl/blob/master/project.clj#L57-L58

jdkealy19:07:18

cool thanks, i'll try these