#beginners
2023-12-20
Marius16:12:18

Is there a way to make prn-str deterministic, so that it returns the same string when, e.g., an equal map is passed? I observed that the order of keys of a map might switch. Or is there another way/library to serialize Clojure maps to strings that would meet this criterion? (Background: I have serialized Clojure maps in DB columns and would need to check whether they are equal in SQL.)

clyfe16:12:26

prn-str is deterministic

clyfe16:12:11

> I observed that the order of keys of a map might switch.
Is it the same map?

respatialized16:12:59

serializing to json or jsonb, which are pretty widely supported in SQL implementations, may be a better fit for k/v data than strings

respatialized16:12:32

the comparison for equality may also be more efficient if you preserve the structure of the data than if you're doing string comparisons

respatialized16:12:30

depending on your impl and what modules you have enabled, there may also be non-json column types that support k/v data (e.g. hstore in https://www.postgresql.org/docs/current/hstore.html)

ghadi16:12:59

no guarantees about maps or sets, if you want determinism you have to make it

🎯 1
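
One way to "make it" deterministic is to canonicalize the value before printing, e.g. by recursively converting maps (and sets) to their sorted variants. A minimal sketch, assuming map keys and set elements are mutually comparable (e.g. all keywords):

(require '[clojure.walk :as walk])

(defn canonical
  "Recursively replace maps/sets with sorted ones so print order is stable."
  [x]
  (walk/postwalk (fn [v]
                   (cond (map? v) (into (sorted-map) v)
                         (set? v) (into (sorted-set) v)
                         :else v))
                 x))

(prn-str (canonical {:b 2 :a 1})) ;; always "{:a 1, :b 2}\n"
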
ghadi16:12:18

equality checking on exact representation match seems brittle

ghadi16:12:31

is there a key you can derive from the map?

respatialized16:12:19

https://github.com/replikativ/hasch is another library for canonical serialization and hashing of Clojure values, but you will likely not be able to rely much on your SQL engine for equality comparisons if you use it

delaguardo16:12:14

https://github.com/DotFox/jsonista.jcs this extension for jsonista ensures canonical JSON serialisation. It's not exactly what you asked for, but I still want to share it in case you can switch to JSON as a storage format

Marius17:12:13

@UCCHXTXV4 it's not the identical map, but a map with equal content. Regarding JSONB: it's an option, but sometimes things get lost "in translation" during a JSON roundtrip (Clojure maps are more powerful than JSON maps), which is why I serialized to a TEXT field. Also, I don't need to use SQL to dig into the serialized field.

delaguardo17:12:40

Another way to get an equality token is to use a hashing library, for example https://github.com/arachne-framework/valuehash. Useful if you don't need to store a textual representation for postprocessing

🎯 4
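
If the textual representation isn't needed, the equality token can be a hash of a canonical rendering. A rough sketch using only JDK classes (not valuehash's actual API; see its README for that), reusing the canonical helper sketched above:

(import 'java.security.MessageDigest)

(defn value-hash
  "SHA-256 hex digest of the canonical pr-str of a value;
  equal values yield equal digests."
  [x]
  (let [md (MessageDigest/getInstance "SHA-256")
        bs (.digest md (.getBytes (pr-str (canonical x)) "UTF-8"))]
    (apply str (map #(format "%02x" %) bs))))

(= (value-hash {:a 1 :b 2}) (value-hash {:b 2 :a 1})) ;; => true
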
Marius17:12:58

Yeah I think I’m going to use a hash value then. Thanks to all of you who have responded so quickly, I very much appreciate that and it’s what makes this community so awesome!

💯 1
Marius07:12:36

True, that might work as well! Thanks @U051H1KL1

Marius08:12:38

Ah, Puget is just pretty printing, so only serialization; I would also need deserialization of the values stored in the DB.

nezaj17:12:30

(defn foo []
  @(future
     (throw (ex-info "foo" {:bar :baz}))))
(try (foo)
     (catch Exception e
       (type e)))
;; java.util.concurrent.ExecutionException
Hey team, a bit of a noob question. I am writing a system that bubbles up ExceptionInfo errors. At the top level, I catch ExceptionInfo and, based on ex-data, provide some user-friendly error messages. I was surprised to see that if I used futures, exceptions would end up wrapped in an ExecutionException, so my top-level try-catch would not work. To solve this, my current solution is to unwrap ExecutionException at the top level too. But this made me think maybe I was thinking about things the wrong way. What is the Clojurian way to "bubble up" certain kinds of exceptions, which we can show to users? What's the right way to do exception handling when dealing with futures et al., which wrap exceptions?

delaguardo17:12:34

your exception is wrapped with java.util.concurrent.ExecutionException but it is not lost. Try calling ex-cause
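
A quick sketch, building on the foo defn above:

(try (foo)
     (catch java.util.concurrent.ExecutionException e
       (ex-data (ex-cause e))))
;; => {:bar :baz}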

delaguardo17:12:05

imho, it is better to always catch Exception instead of ExceptionInfo. Then you can analyse it to react according to the logic of your application.
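
A rough shape of that idea (do-work, render-user-error and render-unexpected-error are hypothetical placeholders, not from this thread):

(try (do-work)
     (catch Exception e
       (let [cause (or (ex-cause e) e)]         ; unwrap e.g. ExecutionException
         (if-let [data (ex-data cause)]
           (render-user-error data)             ; hypothetical: user-friendly message from ex-data
           (render-unexpected-error cause)))))  ; hypothetical: generic fallback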

dpsutton17:12:59

(let [chain (Exception. "outer" (Exception. "middle" (ex-info "my error" {:data :stuff})))]
  (last (take-while some? (iterate ex-cause chain))))

dpsutton17:12:55

aha! i thought something like that existed and should have looked

nezaj18:12:59

Awesome, thanks team!

nezaj18:12:44

root-cause is a win 🙂
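
root-cause here presumably refers to clojure.stacktrace/root-cause, which walks the cause chain for you:

(require '[clojure.stacktrace :refer [root-cause]])

(ex-data (root-cause (ex-info "outer" {} (ex-info "inner" {:bar :baz}))))
;; => {:bar :baz}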

sheluchin21:12:01

I'm trying to use babashka/http-client to test the performance of my endpoints. I'm not sure I trust the results.

(def request-times (atom []))
(defn perform-request
  [request-fn]
  (let [start (System/currentTimeMillis)
        response-future (request-fn)]
    (future
      (let [_response (deref response-future)
            end (System/currentTimeMillis)]
        (swap! request-times conj (- end start))))))

(mapv perform-request (repeat 200 my-req-fn))
Does this seem like a reasonable approach to measuring how long all of the individual requests take?

borkdude21:12:20

I'd change request-times to an atom + map and then use a unique id for each request, add the start time, and use a callback to add the end time. Then at the end, collect all the deltas

borkdude21:12:51

the above should work too but it uses an extra future for each request

sheluchin21:12:00

@U04V15CAJ for the callback part, do you mean through :async-then? If I'm passing in the collection of req-fns using map-indexed to generate the unique id, I'm not sure how I should get the unique ids into the :async-then callback fn.

borkdude21:12:44

use a closure
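
A sketch of the atom+map idea with a closure over the request id (assumes my-req-fn returns a java.util.concurrent.CompletableFuture, as babashka/http-client's :async mode does):

(def request-times (atom {})) ; id -> {:start ms :end ms}

(defn timed-request!
  [id req-fn]
  (swap! request-times assoc id {:start (System/currentTimeMillis)})
  (.whenComplete (req-fn)
                 (reify java.util.function.BiConsumer
                   (accept [_ _resp _err] ; the callback closes over id
                     (swap! request-times assoc-in [id :end] (System/currentTimeMillis))))))

(run! (fn [[i f]] (timed-request! i f))
      (map-indexed vector (repeat 200 my-req-fn)))

;; once all requests have completed, collect the deltas
(map (fn [{:keys [start end]}] (- end start)) (vals @request-times))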

borkdude21:12:03

but perhaps your method is good enough

sheluchin21:12:53

Ah, okay. Yes, I can make it work like that. I do wonder if the cost of the extra future is significant enough to merit re-writing it. Could the extra future skew the results?

sheluchin21:12:25

@U04V15CAJ Alright. Thanks for suggesting an alternate implementation here!

Ben Sless07:12:46

You should use wrk2, it's a way more accurate tool. If not, at least use a monotonic clock (time nanos, not millis)
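
For example, timing one request with the monotonic timer (System/currentTimeMillis follows the wall clock and can jump; System/nanoTime cannot):

(let [start (System/nanoTime)
      _     @(my-req-fn)
      end   (System/nanoTime)]
  (/ (- end start) 1e6)) ;; elapsed milliseconds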

Ben Sless07:12:38

You probably want to see how your service handles concurrent requests; this is why I recommend wrk