#announcements
2023-03-28
Matthew Downey03:03:44

Soon to be obviated with ChatGPT plugins around the corner, but I wrote this Chrome extension with CLJS to render ChatGPT code output in iframes. https://github.com/matthewdowney/rendergpt I've found it useful for e.g. drawing SVGs or describing a type of element I want and having it build it, with a REPL-like experience, but it is likely more useful for me because I am extremely bad at front end 🙂 I also got it to render PlantUML syntax, and I've tested e.g. pasting in some AWS CDK code and having it build a diagram of what the AWS deployment looks like. Open to suggestions if there's any other sort of structured output that would be useful to render too!

clojure-spin 16
metal 6
4
👍 6
pez10:03:03

So very cool! Looks like demo material for a #C02V9TL2G3V meeting to me. WDYT, @U066L8B18 ?

Daniel Slutsky10:03:50

Wonderful 👀 Sure, @UP7RM6935 it would be lovely if you wish to discuss it on one of the meetings of the #C02V9TL2G3V group. https://scicloj.github.io/docs/community/groups/visual-tools/

Matthew Downey12:03:02

Sure, that sounds like fun, thank you guys!

Matthew Downey23:04:18

This RenderGPT Chrome extension was accepted by the Chrome Web Store (https://chrome.google.com/webstore/detail/rendergpt/faedgcadnkineopgicfikgggjjapeeon), in case there's anyone who wanted to try it out but didn't want to install from source.

👀 4
🎉 6
pez09:04:16

Installed. Rated 5 stars. 😃

❤️ 2
Alys Brooks03:03:21

We released a security update for uri: https://github.com/lambdaisland/uri/releases/tag/v1.14.120. Fetch also uses uri and has been updated: https://github.com/lambdaisland/fetch/releases/tag/v1.3.74

🎉 16
renewdoit06:03:00

Lasagna-Pull's first public release, 0.4.150, provides an intuitive pattern for querying complex, deeply nested data structures. https://github.com/flybot-sg/lasagna-pull
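To give a flavor of the pattern-pulling idea, here is a minimal sketch in plain Clojure — this is an illustration of the concept only, not lasagna-pull's actual API or syntax (see the README for that):

```clojure
;; Sketch of pattern-based pulling: a pattern map mirrors the shape of the
;; data, '? marks a value to keep, and nested maps recurse.
(defn pull
  "Walk `pattern` against `data`, keeping only what the pattern names."
  [pattern data]
  (reduce-kv
    (fn [acc k v]
      (cond
        (= v '?) (assoc acc k (get data k))
        (map? v) (assoc acc k (pull v (get data k)))
        :else    acc))
    {}
    pattern))

(pull '{:a ? :b {:c ?}}
      {:a 1 :b {:c 2 :d 3} :e 4})
;; => {:a 1, :b {:c 2}}
```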

🎉 18
👍 6
borkdude08:03:23

Looks pretty neat! It reminds me of meander, but I don't see a reference to it in the README. Do you know of that project, and how does yours differ from it? Why is the expression with variables quoted in lasagna but the variables extracted are not? I'm asking out of pure curiosity. I see you've included a clj-kondo hook 💯

borkdude10:03:37

Btw, I tested your lib with babashka and it works there too: https://clojurians.slack.com/archives/CLX41ASCS/p1679998087911039 Pretty cool

renewdoit21:03:08

Thank you for mentioning meander; I did not know of it when writing lasagna-pull. The syntax of lasagna-pull does look like it, but that is just a coincidence. 😁 I studied meander just now, and I think it has a more extensive scope than lasagna-pull and is far more complex in terms of implementation because of those additional features. I will update the README later to compare them.

renewdoit21:03:50

Regarding your question about symbol quoting in qfn macro: it is a limitation of the current implementation using Clojure’s reader; I will try to make it consistent if possible.

renewdoit22:03:37

I appreciate your work on babashka and clj-kondo, and I enjoy using them very much. Thanks for testing the babashka compatibility of lasagna-pull; I will put a badge in the README; also, thanks for your quick contribution!

borkdude22:03:40

no problem, I was just curious, not necessary to change it

chrisn17:03:53

cnuernber/streams is a small concept library for efficiently building and executing Monte Carlo simulations. It is based on the concept of lazy noncaching streams (eduction with transducers is similar). It allows you to do arithmetic on streams, similar to how array languages let you do vector arithmetic on arrays, so it makes setting up your simulation a bit simpler, and the code is of course pretty efficient - https://github.com/cnuernber/streams
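The lazy noncaching stream idea the message mentions can be approximated in plain Clojure with `eduction`; the sketch below is a hand-rolled Monte Carlo pi estimate, not the streams library's actual API:

```clojure
;; An eduction is lazy and noncaching: each consumer re-runs the
;; transducer pipeline, so nothing is held onto between uses.
(defn uniform-stream
  "An infinite, noncaching stream of uniform doubles in [0, 1)."
  []
  (eduction (map (fn [_] (rand))) (range)))

(defn estimate-pi
  "Monte Carlo estimate of pi from n (x, y) samples in the unit square:
  the fraction landing inside the unit quarter-circle approaches pi/4."
  [n]
  (let [xs   (into [] (take n) (uniform-stream))
        ys   (into [] (take n) (uniform-stream))
        hits (count (filter (fn [[x y]] (<= (+ (* x x) (* y y)) 1.0))
                            (map vector xs ys)))]
    (* 4.0 (/ hits (double n)))))

(estimate-pi 100000)
;; => approximately 3.14
```

A library like streams pushes the same shape down to primitive arithmetic, which is where the efficiency comes from.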

🎉 36
metal 10
👍 6
otfrom20:03:26

This seems relevant to my interests

chrisn23:03:10

That is very cool - makes sense in retrospect

chrisn13:03:21

OK - using the rng interface is helpful but the per-double protocol dispatch is adding a bit -

streams.api> (crit/quick-bench (let [r (fast-r/rng :mersenne)]
                                 (dotimes [idx 10000]
                                   (fast-p/drandom r))))
Evaluation count : 5292 in 6 samples of 882 calls.
             Execution time mean : 113.685269 µs
    Execution time std-deviation : 314.674972 ns
   Execution time lower quantile : 113.321373 µs ( 2.5%)
   Execution time upper quantile : 114.082204 µs (97.5%)
                   Overhead used : 2.011787 ns
nil
streams.api> (fast-r/rng :mersenne)
#object[org.apache.commons.math3.random.MersenneTwister 0x3c43ff33 "org.apache.commons.math3.random.MersenneTwister@3c43ff33"]
streams.api> (crit/quick-bench (let [r (fast-r/rng :mersenne)]
                                 (dotimes [idx 10000]
                                   (.nextDouble ^org.apache.commons.math3.random.RandomGenerator r))))
Evaluation count : 7014 in 6 samples of 1169 calls.
             Execution time mean : 86.089678 µs
    Execution time std-deviation : 463.845944 ns
   Execution time lower quantile : 85.495519 µs ( 2.5%)
   Execution time upper quantile : 86.708926 µs (97.5%)
                   Overhead used : 2.011787 ns
nil
I propose an additional protocol function that returns the fastest way to sample from the generator for uniform or gaussian - something like (let [f (fast-p/rand-fn rng :uniform)] (f)).
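The shape of that proposal can be sketched in plain Clojure: return a sampling fn that closes over the concrete, type-hinted generator, so each sample is a direct method call rather than a protocol dispatch. This uses `java.util.Random` for self-containment; the actual proposal targets the commons-math `RandomGenerator`, and `rand-fn` here is a hypothetical name:

```clojure
(import 'java.util.Random)

;; Instead of dispatching a protocol method per double, hand back a
;; closure over the hinted generator; callers pay dispatch once.
(defn rand-fn
  "Return a 0-arity sampling fn for `dist`, one of :uniform or :gaussian."
  [^Random rng dist]
  (case dist
    :uniform  (fn [] (.nextDouble rng))
    :gaussian (fn [] (.nextGaussian rng))))

(let [f (rand-fn (Random. 42) :uniform)]
  [(f) (f)]) ;; two uniform doubles in [0, 1)
```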

chrisn13:03:40

I propose something similar for distributions - I can analyze the function and see if the rng or the distribution produces doubles, longs, or potentially something in ND space.

chrisn14:03:08

well, at least doubles, longs, or objects - if the object is IFn$D or IFn$L, then I know it cannot produce anything but doubles or longs, respectively.

chrisn14:03:23

If it is generally faster to sample into a double buffer and then pull from that, then perhaps we could do that.

chrisn14:03:09

a mersenne gaussian stream is a lot slower than a mersenne uniform stream 🙂.

chrisn14:03:03

Nope, what we have is good enough I think for now - I do think there are faster ways if you know you are generating batches of numbers.

chrisn14:03:06

nm - distributions take so long to calculate that the protocol dispatch time is noise.

chrisn14:03:30

only applies to samplers and then only applies to uniform samplers.

yogthos20:03:10

made a little library to express workflows using a state machine https://github.com/yogthos/maestro

👀 20
metal 14
clojure-spin 14
🎉 8
fuad15:03:28

Looking good! I believe I remember reading an article on your blog about structuring Clojure apps this way. It also reminds me of https://lambdaisland.com/blog/2020-03-29-coffee-grinders-2 I'm interested in trying something like this out in the app I'm currently working on, and I might give this a try.

👍 2
yogthos15:03:35

Yeah, the coffee grinder pattern is pretty similar to what I'm thinking of as well. And this is extending the idea I mentioned in the blog. The original implementation I gave there used multimethods, but I think the one aspect missing with that approach is visibility into the state machine, since transitions are implicit. Using a map to describe the state machine makes it clear how the states interact with one another. I'm currently using a similar approach at work, and it's been working out well. I have an event-based system where I need to track state and react to events as they come in. Using this sort of state machine made it a lot easier to reason about. Let me know how things go; suggestions and PRs are very welcome.
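The map-described FSM idea can be sketched in a few lines of plain Clojure — illustrative only, not maestro's actual API: each state pairs a handler that computes new data with a dispatch fn that picks the next state from that data.

```clojure
;; A map describes the machine, so the transitions are explicit data
;; rather than implicit in how functions call each other.
(def machine
  {:start     {:handler  (fn [data] (assoc data :value 1))
               :dispatch (fn [_] :increment)}
   :increment {:handler  (fn [data] (update data :value inc))
               :dispatch (fn [data] (if (< (:value data) 3)
                                      :increment
                                      :done))}
   :done      {:handler identity}})   ;; no :dispatch = terminal state

(defn run
  "Drive `machine` from `state`, threading `data` through handlers
  until a state without a dispatch fn is reached."
  [machine state data]
  (let [{:keys [handler dispatch]} (machine state)
        data' (handler data)]
    (if dispatch
      (recur machine (dispatch data') data')
      data')))

(run machine :start {})
;; => {:value 3}
```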

Jakub10:03:29

Very cool! 1. I am curious about your thoughts on how to handle side effects. Would it be adding events for effects with impure handlers, or some other way? I am thinking separate impure handlers would probably also be good for testing; one could just assoc into the map to stub those out. 2. How would you compare the library to other FSM implementations, e.g. https://github.com/metosin/tilakone?

yogthos13:04:04

1. I would treat the top-level handlers as small independent programs that deal with side effects such as IO. The handler gets the current state and accesses whatever resources are needed, then returns a new state that gets passed on to the next state. I talk about this approach a bit more here https://yogthos.net/posts/2022-12-18-StructuringClojureApplications.html 2. Compared to other FSM libraries, my goal was to focus on decoupling state computations from the routing. It aims to provide a general way to organize the application flow at a high level. Each state handler does the computation, and then the dispatches decide what needs to happen next based on the state of the data. Typically, these tasks get conflated and become implicit in how the functions are chained together. I wanted to make this flow explicit.

Jakub16:04:03

Thanks, that makes a lot of sense in the context of the blog post. In the past, I have wondered about ways to make certain computations more explicit to improve understanding. FSM seems like it should be a good fit, but traditional FSM libraries felt a bit clunky for computation workflows and appear to be better suited for situations where an external actor is feeding inputs. I'm going to give it a try.

yogthos15:04:12

Yeah, that’s basically what I found as well. I tried using a few and wasn’t really happy with the ergonomics of expressing this kind of thing. One thing I’d like to add is generating Mermaid diagrams from the FSM spec. Seeing workflows visually would be really nice for reasoning about them.

Jakub19:04:17

Indeed, generating Mermaid/Graphviz visualizations sounds great. I wonder what the most frictionless way would be. Perhaps it might be possible to write a clj-kondo hook to find all maestro.core/compile occurrences and dump all machines in a codebase; that way, if a new one is added, it will get picked up automatically.

yogthos19:04:24

There are a few ways to do it. I made it so that the specs can be serialized to EDN, so technically you could stick them in a db or something as well.