This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2022-10-05
Channels
- # announcements (14)
- # aws (7)
- # babashka (28)
- # beginners (16)
- # calva (2)
- # cider (1)
- # clj-commons (8)
- # clj-kondo (29)
- # clojure (213)
- # clojure-europe (39)
- # clojure-losangeles (2)
- # clojure-norway (9)
- # clojure-spec (2)
- # clojurescript (11)
- # community-development (1)
- # conjure (2)
- # cursive (6)
- # datalevin (2)
- # datomic (8)
- # emacs (29)
- # events (1)
- # fulcro (22)
- # graalvm (14)
- # improve-getting-started (1)
- # jobs (1)
- # lambdaisland (5)
- # leiningen (4)
- # lsp (7)
- # malli (13)
- # meander (11)
- # membrane (13)
- # off-topic (23)
- # polylith (9)
- # re-frame (4)
- # reagent (7)
- # reitit (6)
- # releases (2)
- # sql (58)
- # testing (8)
- # tools-deps (18)
- # web-security (2)
what does it mean to be in the clojure namespace? I notice that clojure.spec.alpha is included in a new Clojure project (1.11.?) but clojure.tools.logging isn't.
The main thing is: usually, if a namespace name starts with clojure, that means it belongs in some way to rhickey
There is no enforcement, but Rich has asserted that clojure.* belongs to him and we should attempt no landings there
roger that makes sense.
At one point there was a stronger distinction to be made between the set of namespaces that come with the clojure language, and "contrib" as a kind of central set of extra libraries
The distinction is still kind of there, but spec has started to blur the line, in that it is a separate artifact from clojure, but clojure depends on it
@U0DJ4T5U1 See https://clojure.org/dev/contrib_libs for more information and a list of Contrib libraries. Technically clojure.spec.alpha and clojure.core.specs are separate libraries (but still considered "core") and Clojure itself depends directly on those, which is why you see this:
seanc@Sean-win-11-laptop:~/clojure/empty$ clojure -Stree
org.clojure/clojure 1.11.1
. org.clojure/spec.alpha 0.3.218
. org.clojure/core.specs.alpha 0.2.62
-- they are literally transitive dependencies of org.clojure/clojure.
As someone who maintains five of the Contrib libs, I'm happy to answer any more Qs you have 🙂
I was mostly trying to understand how to find things. I know I can just require clojure.spec.alpha, so I tried the same with clojure.tools.logging and discovered I couldn't.
in my case, the answer is that I can use emacs to search for namespaces that are available. Or at least I think I can...
or I'm sure there are a couple of ways that all result in roughly the same information.
Thanks for the offer! Don't have any questions at the moment.
Yeah, figuring out what dependencies are needed is not always intuitive, given that namespaces and group/artifact IDs have no connection to each other and may not be at all related in some projects...
All these namespaces are in Clojure itself: https://clojure.github.io/clojure/ and if you drill into https://clojure.github.io/clojure/clojure.core-api.html you'll see three namespaces "attached" to clojure.core: protocols, reducers, and server (which further reinforces the incorrect mental model of them somehow being "nested"!).
I see this https://clojure.github.io/clojure/clojure.core-api.html#clojure.core.reducers
> clojure.core.reducers
> A library for reduction and parallel folding. Alpha and subject to change.
sorry, I keep hitting enter too soon, because I have had to mess with my keyboard. Anyway, yeah, I see how it's attached.
Yeah, it's just an artifact of the documentation generator really that they're listed "nested" like that -- but it seems common across several Clojure doc gen tools 😐
gotcha. yeah, I mean, I nest my files according to the namespace so it's hard to break that abstraction.
Heh, and that convention which Clojure follows reinforces it further 🙂
Are there any good examples of using https://github.com/clojure/tools.logging in a Clojure deps-built project? Would such a thing be useful for others, since I'm going to end up having to produce some for myself to understand it better? I noticed there were a couple of code examples in the https://cljdoc.org/d/org.clojure/tools.logging/0.4.1/doc/readme and now they're gone. But not really gone, because Google is bumping that example link up because people are using it. Which is going to be weird if they use the old version.
Logging on the JVM is a bit of a mess and people have different opinions on which JVM logging library to use (with clojure.tools.logging). LambdaIsland had a good article about setting this up -- but at work we chose log4j2 rather than what they picked (but the setup is very similar).
Not sure what "clojure deps built project" has to do with it -- whether you use Leiningen or the CLI has pretty much zero impact on how c.t.l is used... except perhaps around the JVM property for selecting the logging implementation if you want something that isn't the default.
> Not sure what "clojure deps built project" has to do with it -- whether you use Leiningen or the CLI has pretty much zero impact on how c.t.l is used... except perhaps around the JVM property for selecting the logging implementation if you want something that isn't the default.
That's good to know for sure. I'm building up a toy project as I tear down the one where logging isn't working; hopefully I'll meet it in the middle and figure out the problem.
https://lambdaisland.com/blog/2020-06-12-logging-in-clojure-making-sense-of-the-mess
ok cool. I'll dive into that again with anger this time.
All of our projects at work have this:
;; use log4j 2.x:
org.apache.logging.log4j/log4j-api {:mvn/version "2.19.0"}
;; bridge into log4j:
org.apache.logging.log4j/log4j-1.2-api {:mvn/version "2.19.0"}
org.apache.logging.log4j/log4j-jcl {:mvn/version "2.19.0"}
org.apache.logging.log4j/log4j-jul {:mvn/version "2.19.0"}
org.apache.logging.log4j/log4j-slf4j-impl {:mvn/version "2.19.0"}
and then all of our processes are started with:
-Dclojure.tools.logging.factory=clojure.tools.logging.impl/log4j2-factory
to ensure we get log4j2 instead of one of the others on the classpath!:heart_on_fire:
we're using
ch.qos.logback/logback-classic {:mvn/version "1.1.7"}
org.clojure/tools.logging {:mvn/version "1.2.4"}
I should probably look at the logback-classic docs.
You probably want to add the various bridging libraries explicitly to ensure "all" logging for all your dependencies is routed through logback and so you can control all of it centrally via logback config.
reading this right now https://www.marcobehler.com/guides/java-logging and https://lambdaisland.com/blog/2020-06-12-logging-in-clojure-making-sense-of-the-mess
Because I forgot to sign up for "CS 400: Java Logging and Languish" *laughter*
Am I 80% correct in assuming that a Clojure/Java project needs to gather the output from all the logging frameworks in my project (including nested deps) into the logging framework it wants? Are libraries that do that "bridging libraries"?
Yes, if you want to have control over all of your app's logging -- even that coming from dependencies -- then you need those libraries that route logging from the various log libraries into the one you want to use/control.
It's really unfortunate that Java has ended up with so many and they're all slightly incompatible in subtle ways. But at least the bridging libs exist to paper over it.
I want to say something philosophical about how time and tide makes us sinners and saints all.
Hahaha... At least with logging, once you have a configuration that "works" you can mostly forget about it. Modulo security issues forcing you to update the whole stack.
One thing that troubles me is things like this:
> The problem with JCL is, that it relies on https://articles.qos.ch/thinkAgain.html? to find out which logging implementation it should use - at runtime. And that can lead to a lot of pain. In addition, you’ll find that the API is somewhat inflexible, that it comes with a fair amount of cruft, and there’s simply better alternatives out there, nowadays.
from https://www.marcobehler.com/guides/java-logging Now, am I going to click that link and read the argument on why JCL was a "hack"? Maybe the author was horribly, horribly wrong and JCL was brilliant! What's the "professional" thing to do? The answer is to write your own logging framework and not publish it anywhere.
It took us a while to get it working to our satisfaction and, for the most part, any one given library is as good as the others for casual use -- and bridging libraries exist for all of them to all of the others, I think. But we like log4j2 because a) you can configure it with .properties files instead of XML, b) you can control the configuration at startup with env vars and/or JVM properties, and c) you can have it auto-reload the config when it changes at runtime (useful for long-lived processes). But with log4j2 not being the first in the list c.t.l searches for, it is a bit of a pain to have to specify the JVM property for all processes to override c.t.l's default behavior 😞
Given the order that c.t.l looks for implementations https://github.com/clojure/tools.logging/blob/master/src/main/clojure/clojure/tools/logging/impl.clj#L249-L253 it's definitely "easier" to go with slf4j + logback per Lambda Island's article...
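If you'd rather not pass `-D` to every process, the same property can also be set from code -- a minimal sketch, assuming (as the tools.logging docs describe) that the factory is resolved when the namespace first loads:

```clojure
;; Sketch: select the log4j2 factory programmatically instead of via -D.
;; This must run before clojure.tools.logging is required for the first
;; time, since the logger factory is chosen when that namespace loads.
(System/setProperty "clojure.tools.logging.factory"
                    "clojure.tools.logging.impl/log4j2-factory")

;; After this, (require '[clojure.tools.logging :as log]) would pick
;; log4j2 rather than searching the classpath in its default order.
```

This only helps for processes whose entry point you control; for anything started by external tooling, the JVM property on the command line is still the safer bet.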
Yeah, I'm getting a Java perspective a bit, then I'm going to read that one.
Probably my biggest pet peeve with slf4j is that it treats error and fatal as the same error level 😡 :
seanc@Sean-win-11-laptop:~/clojure$ cat src/example.clj
(ns example
  (:require [clojure.tools.logging :as log]))

(defn -main [& args]
  (log/info "Hello, world!")
  (log/warn "Hello, world!")
  (log/error "Hello, world!")
  (log/fatal "Hello, world!"))
seanc@Sean-win-11-laptop:~/clojure$ cat deps.edn
{:deps {org.clojure/tools.logging {:mvn/version "1.2.4"}
        org.slf4j/slf4j-api {:mvn/version "1.7.30"}
        org.slf4j/jul-to-slf4j {:mvn/version "1.7.30"}
        org.slf4j/jcl-over-slf4j {:mvn/version "1.7.30"}
        org.slf4j/log4j-over-slf4j {:mvn/version "1.7.30"}
        org.slf4j/osgi-over-slf4j {:mvn/version "1.7.30"}
        ch.qos.logback/logback-classic {:mvn/version "1.2.3"}}}
seanc@Sean-win-11-laptop:~/clojure$ clojure -M -m example
22:14:02.022 [main] INFO example - Hello, world!
22:14:02.023 [main] WARN example - Hello, world!
22:14:02.024 [main] ERROR example - Hello, world!
22:14:02.024 [main] ERROR example - Hello, world!
Grrr!!!! With other logging libraries, that last line would be FATAL instead.
Is "CONSOLE" a valid logback.xml appender-ref? I don't see it listed in the https://logback.qos.ch/manual/appenders.html.
<root level="DEBUG">
  <appender-ref ref="CONSOLE" />
</root>
Maybe... maybe our project made an appender-ref named CONSOLE?
I don't care that it's XML; what troubles me is that I can't find what values it can support. It's like typed but not in a way that's useful.
well, it seems to work* so it's almost certainly an OK value from somewhere. I'm still not sure.
Here’s my minimal logback.xml:
<configuration>
  <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>
  <root level="info">
    <appender-ref ref="STDOUT"/>
  </root>
</configuration>
It could be that CONSOLE exists as a default appender, like you hypothesize. Maybe better to make it explicit by defining it, though?
@U06BEJGKD by "defining it" do you mean write an implementation for console? Or change CONSOLE to STDOUT?
The latter - include an <appender> section with a name that you then reference in the root logger. In my example: name="STDOUT" is referenced in <appender-ref ref="STDOUT"/>.
ah thanks! Yeah ok, so our configuration does have that.
<appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
  <!-- encoders are assigned the type
       ch.qos.logback.classic.encoder.PatternLayoutEncoder by default -->
  <encoder>
    <charset>UTF-8</charset>
    <pattern>%msg%n</pattern>
  </encoder>
  <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
    <level>DEBUG</level>
  </filter>
</appender>
XML: it's like a programming language, only everyone defines what it means.
I come from a Java background so am used to all the logging options and config. But when I started contributing to cljdoc noticed it uses https://github.com/pyr/unilog. Maybe that would be of help for those who want to avoid some of the complexity?
Hi guys, I'm looking for a library to stop, persist, and restore Clojure code at specific points, for a workflow engine. Something like core.async (or continuation-passing style) but with more control and customization; it's OK with some additional restrictions!
I did quite extensive research about a year ago and found nothing. As a result my colleague made a stack based programming language that sits on top of clojure 🤷 - https://github.com/DeLaGuardo/jjoy/blob/main/src/jjoy/core.cljc#L188
(if (= :ok (async-fn ...))
  (let [res-1 (async-fn-2 ...)
        res-2 (async-fn-2' ...)]
    (async-fn-2'' res-1 res-2))
  (async-fn-3 ...))
Hi everybody! Does anybody know a way of creating object IDs for JVM objects, so remote systems can refer to them by ID? I have tried System/identityHashCode but have found collisions pretty easily. Also I don't want to use UUIDs because then remote clients can't cache results.
Also tried to keep a map of obj->incremental_id, but not all objects are hashable, like (def o (range))
You want a remote system to have an id to refer to an (currently!) in-memory, transient object?
yes, it's for a dev tool. I'm retaining the pointers so the GC doesn't collect them, and a remote system (an external GUI) should be able to "ask questions" about them by id
> Also I don't want to use uuids because then remote clients can't cache results.
How does usage of UUIDs prevent caching exactly?
Thought it sounded familiar and then found your older question about CLJS. :D https://clojurians.slack.com/archives/C03S1L9DN/p1663071702855139
haha yes, I have solved it for CLJS by monkey patching the objects
> How does usage of UUIDs prevent caching exactly?
so one way of doing it is: every time the client calls (give-me-list-of-objects-a) you create a uuid for each of them, update a map of uuid->obj, and return the list of uuids. Then you do the same for (give-me-list-of-objects-b). The thing is, those will be different uuids even if the objects are the same. So if the client is calling a cachable (print-object uuid), it will miss the cache. Does it make sense?
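The cache-miss problem described there can be sketched like this (the function and var names are made up for illustration, not from any library):

```clojure
;; Naive scheme: mint a fresh UUID on every listing call. The same object
;; then gets a different id each time, so any client-side cache keyed by
;; id will always miss.
(defn list-with-fresh-ids [objs]
  (mapv (fn [o] {:id (java.util.UUID/randomUUID) :obj o}) objs))

(def objects-a [(Object.) (Object.)])

(def ids-1 (mapv :id (list-with-fresh-ids objects-a)))
(def ids-2 (mapv :id (list-with-fresh-ids objects-a)))
;; ids-1 and ids-2 differ even though the listed objects are identical
```

The fix discussed below is to make id assignment stable: an object should get its id once, and every later listing should reuse it.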
> Mmm. What prevented you from using an external ID registry?
wdym. The problem with a registry is that you need a map from {obj -> id}, and you can't hash infinite sequence objects, for example
1. A client sends a message to the server with (give-me-list-of-objects-a)
2. The server calls that function and serializes its result in a way that:
a. Checks the internal registry of UUIDs
b. Generates UUIDs for objects that haven't been encountered yet
c. Reuses already generated and stored UUIDs for all other objects
d. Includes UUIDs of the serialized objects in the serialized result
3. The server sends the serialized result+UUIDs to the client
4. The client deserializes the data and updates its own bidirectional object<->UUID registry
5. Whenever the client needs a particular object, it just asks the server by UUID
That's exactly how e.g. https://bokeh.org/ works, which I used to maintain. Well, almost - the IDs there are stored in the objects themselves because only the Bokeh-specific data is referenced and because Python and JS are mutable. But in principle using a registry that's separate from the data itself is not that different.
> 2.b Generates UUIDs for objects that haven't been encountered yet
how do you know an object hasn't been encountered yet?
And, as I've mentioned in that older thread, with this approach there's no need for UUIDs because you can just generate IDs as ever increasing integers.
how can you create such a map when you have unhashable objects like infinite lazy sequences?
the problem when I tried that solution was how to handle objects like (def o (range))
Check in some way whether the collection is potentially infinite (there's been some discussions) and store them in the registry by their System/identityHashCode. Of course, internally the object<->ID registry is not a simple map now but rather something like:
{:hashable object<->ID
 :unhashable object-identity-hash-code<->ID}
The object-identity-hash-code<->ID is a crooked bidirectional map that's comprised of object-identity-hash-code->ID and ID->object.
yeah, but you end up again using identityHashCode, which has collisions
I mean, I guess using it less, you reduce the risk
I'm trying to imagine how it all works in my head. And I think I understand why an ID->object mapping is needed - after all, a client needs to be able to ask the server about some particular object by mentioning its ID.
But when do you need the object->ID mapping?
When you are listing objects not by id and need to ensure that the same object has the same id
sorry, got a phone call
> Check in some way whether the collection is potentially infinite the thing is that a lot of objects are lazy seqs and you can't know if it is infinite or not
> But in which cases does it have collisions? when I'm running on millions of objects I find collisions pretty quickly when using identityHashCode
> But when do you need the object->ID mapping?
this is for returning the objects: you need to know, for an object you are about to return, whether you already have an assigned id, so you always return the same id for the same object, allowing caching on the client
damn System/identityHashCode, don't know why it is called identity when it's so easy to find twin objects:
(defn find-twins []
  (let [hs (java.util.HashSet.)]
    (loop [i 0]
      (let [o (Object.)
            o-hash (System/identityHashCode o)]
        (if (.contains hs o-hash)
          (println "Found after " i)
          (do
            (.add hs o-hash)
            (recur (inc i))))))))
dev> (find-twins)
Found after 71390
dev> (find-twins)
Found after 108724
dev> (find-twins)
Found after 21259
> the thing is that a lot of objects are lazy seqs and you can't know if it is infinite or not
You can't know, yes - that's why I said "potentially". :)
There were discussions here on how to detect something like this. A seq, realized?, not a range, maybe something else.
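A hedged sketch of such a heuristic -- `possibly-infinite?` is a made-up name, and Iterate/Repeat/Cycle are the concrete classes Clojure 1.7+ returns for iterate/repeat/cycle; this is a best-effort guess, not a reliable test:

```clojure
(defn possibly-infinite?
  "Best-effort heuristic: treats unrealized pending seqs and the
   iterate/repeat/cycle seq types as potentially infinite. It has
   false positives (a finite but unrealized lazy seq) by design."
  [x]
  (or (and (instance? clojure.lang.IPending x) (not (realized? x)))
      (instance? clojure.lang.Iterate x)   ; e.g. (range) => (iterate inc' 0)
      (instance? clojure.lang.Repeat x)
      (instance? clojure.lang.Cycle x)))
```

The false positives are the point of the word "potentially": anything flagged here just falls into the identity-keyed side of the registry.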
> when I'm running on millions of objects I find collisions pretty quickly when using identityHashCode
And are you sure that previously generated objects haven't been GC'ed?
I tweaked find-twins to be sure they are not GC'ed but it's the same
(defn find-twins []
  (let [hs (java.util.HashSet.)
        ohs (java.util.HashSet.)]
    (loop [i 0]
      (let [o (Object.)
            o-hash (System/identityHashCode o)]
        (if (.contains hs o-hash)
          (println "Found after " i)
          (do
            (.add hs o-hash)
            (.add ohs o)
            (recur (inc i))))))))
also, identity hash code can't avoid generating duplicates. The "identity" part of "identity hash code" means that it's a hash code generated from the identity of the object, which means the first time the hash is generated it uses the object's location in memory and is then cached.
It's identical to the default implementation of Object#hashCode. The entire point is that it generates few duplicates, but for any sufficiently large number of objects it will generate duplicates.
dev> (find-twins)
Found after 112484
dev> (find-twins)
Found after 44218
dev> (find-twins)
Found after 33766
The reason to use identityHashCode is if the hashCode of the object you're using can't be guaranteed to have good properties for hashing, like infinite seqs, which by Clojure's spec must hash to the same thing as a collection with the same elements.
> but for any sufficiently large number of objects it will generate duplicates.
like 33k :face_with_rolling_eyes:
so I guess the only strategy would be for the JVM to add an incremental id to the instance header, but that's for sure a waste of memory
Yeah. You can't expect it to not generate duplicates. The only true way to see if two things are different objects is to stop the world with the GC and compare references, but since references aren't stable over time because of the GC, it must be done via Object#equals if you want a stable idea of equality over time (which also really requires the objects to be immutable). So the only other way to do that is to introduce this mapping from id->object, but you can never store it the other way around if you don't want to use equality checks to validate things are the same.
Interesting. But still, you can use it. After all, hash maps rely on hashes that result in collisions from time to time.
> When you are listing objects not by id and need to ensure that the same object has the same id
Not sure I understand this part though @U5NCUG8NR. Can you give a concrete user-facing example?
@U0739PUFQ the JVM already stores what amounts to a counter on the object header, and lots of other things. The Hotspot object header is something like 16 bytes already. Adding one more counter is substantial, but it's not like it isn't already happening.
That "counter" being the 32 bit identity hash code of the object, plus other things that are in the header.
@U2FRKM4TW you need a two-way mapping for this to work, obj-id <-> obj; the obj->obj-id direction is the problematic one for infinite sequences
@U2FRKM4TW what was stated earlier in the thread was the need for a function like (list-objects-of-type-a), since this is for a debugging tool. This requires you to just list out all objects, which requires you to have a mapping of ids for them. Technically, if you're keeping a registry of all objects, you could just iterate over your map of id->object and return the keys, but this requires you to maintain that map for all objects at all times.
Which perhaps maintaining the map for all objects at all times is fine, depending on how you're doing this, in which case just do that.
And probably prefer a java hash map of some kind to a clojure one for memory reasons.
yeah you can always iterate over your current object pool searching with equals for the id, but that doesn't scale for big pools
Ah, so (list-objects-of-type-a) is actually a hard requirement, from the perspective of an end-user? Huh.
no, that's not what I mean either. On the client side you can never do a direct question of object to id; you only need object->id to list out all objects of some type. So just keep a map for each type that's relevant, which gets updated when you create objects, and iterate the entire map every time you request all objects. You never need to do a linear search for single items.
this allows you to only ever have oid->object maps
> you only need object->id to list out all objects of some type
...why not type->objects? How does object->id help with that?
this is what I'm getting at. You literally never need to have the object be a key because the only time you need an object->id correspondence is when you're listing all objects.
At least from the current problem statement.
> You literally never need to have the object be a key
Right, that's my impression as well, and I'm trying to understand why @U0739PUFQ claims the opposite.
Because of the step you proposed for caching uuids, of "for new objects generate a new id, for objects we've seen before fetch the cached id for it"
the problem with that approach is that it needs to have an object->id mapping, which jpmonettas is correct about. The way to avoid it is to not generate ids lazily, so that we can just iterate the list of all ids and know that it will contain references to all objects.
maybe it helps if I explain what I'm trying to do. The debugger I'm writing instruments any Clojure code so that when it runs, it will retain the pointers to all sub-expression values through time (this is our object pool). After that, the debugger UI (possibly running remotely) should be able to inspect and work with those objects. Like: give me all the object ids under this function F's frame. Now pprint me the object with id OID. Etc.
Like I'm suggesting above you can have this work by proactively storing every object in a map when it's created and then at the end you can just iterate over the map to get all the objects' ids
this doesn't require you to realize infinite sequences, or otherwise have well-behaved data.
> proactively storing every object in a map how would that map look like?
just oid->object, and it doesn't require linear scanning ever unless you're literally listing all objects.
it allows your standard hash table lookup time for objects by id, and you never need to look up ids by object ever because you know that all objects will always be in the map so you can just iterate the ids directly when you want to list objects to the client.
if you have multiple disjoint sets of objects separated by type (or some other identifiable trait) then I recommend having multiple maps to allow listing to be linear with the number of objects of that type rather than linear with objects in the live set.
hmmm I think I get it; and in my indexes (there is some indexing pointing to objects) I should use the id instead of the object pointer
yes, object pointers aren't stable
so it's kind of reifying pointers earlier, at storage time
yes, at the time of object creation you generate a stable id, a uuid, and store it.
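The design that emerged here could be sketched like this -- `pools`, `store!`, `list-oids`, and `lookup` are illustrative names, not from any library:

```clojure
;; Keep only oid->object maps, one per category ("type"). Unhashable
;; values (e.g. infinite seqs) are fine because they only ever appear
;; as map *values*, never as keys. Ids are minted at storage time.
(def pools (atom {}))  ; category -> {oid object}

(defn store!
  "Mint a stable id when the value is first stored and register it."
  [category obj]
  (let [oid (str (java.util.UUID/randomUUID))]
    (swap! pools assoc-in [category oid] obj)
    oid))

(defn list-oids
  "Listing = iterate the keys; no object->id lookup is ever needed."
  [category]
  (keys (get @pools category)))

(defn lookup [category oid]
  (get-in @pools [category oid]))
```

Since every stored object is paired with its oid at creation, the client always already has the id of anything it holds, and the server never has to look up an id by object.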
> just oid->object, and it doesn't require linear scanning ever unless you're literally listing all objects.
Consider this workflow:
1. A client asks for a particular vector of objects
2. The server iterates that vector, generates oid->object for all the objects, stores it somewhere (probably merges it into an already existing map), serializes the objects along with their IDs, and sends all that to the client
3. The client then asks again for that vector
4. The server now... does what?
your steps are wrong. The ids are generated when the vector was created, not when it was requested.
> yes, at the time of object creation you generate a stable id, a uuid, and store it.
I think that should work; I will have to change my design but I think you are right
This means the server always iterates the oid->object map and returns all keys
no matter if it's the first request or the hundredth
And again, if you have disjoint sets of objects, you keep them in different maps so that you don't have to filter.
If you don't have disjoint sets of objects then you need to use filtering on the value to determine if the key is included, but that's also easy with clojure sequence operations, it's just a bit slower.
In either case an object existing implies it has an id already, so you never need to "look up the id by the object" because if you already have a single object you're working with on the client it means basically by definition that you have its id.
So you just request by id; and on the server you also never request id by object, because you are never working with objects outside of a context where you're also in a map iteration and the key is visible.
So yes some operations require linear scans, but this is a hard requirement when you're dealing with objects that have no tractable concept of equality (as infinite sequences don't)
I'll try to implement that pointer reifying on object creation and see how it goes
Thanks @U5NCUG8NR and @U2FRKM4TW for your help! I was stuck and ended up with a possible solution
Glad I could be of some help, I'd love to hear how it goes.
Maybe it's not a concern for the task at hand, but what about mutable collections?
E.g. both a server and a client know the ID of some HashMap.
The client already has that map but would like to get its updated state. Ideally, the server would just respond with some representation of that map that includes just the IDs of the objects, so that the objects aren't serialized every time.
But how does it know those IDs in this case, given that the map could've been changed in some dark depths?
this is actually part of how the concept of equality breaks down when you throw mutability in.
it's actually why egal, the equality concept that Clojure drew on as inspiration, doesn't consider two mutable objects to be equal unless they are identical to each other. And unfortunately there's no way to determine if maps are equal if we can't check whether their keys and values are equal (which we can't, because equality is not well-behaved in this system).
That's why my mindset from the get-go was about object<->oid and assigning IDs at the point of serialization, as that's how Bokeh deals with it.
Confused myself in the middle of the discussion though, because multitasking is bad.
if we have the request for a map send along all the oids of the keys and values it expects, though, you can detect new keys approximately: look up the oid for a given key, check if that object is a key in the map, and then check the identityHashCode of the value against the identityHashCode of the object for the oid that got passed for that value.
Right but it's a hard constraint that we can't have objects as keys in a map @U2FRKM4TW because it's a requirement that this system behaves well with objects that do not have well-behaved hash code or equality semantics.
> Maybe it's not a concern for the task at hand, but what about mutable collections?
the debugger "at value store time" stores a snapshot if the thing is deref-able; things like mutable objects aren't supported, but this is a small percentage of Clojure codebases
> it's a hard constraint that we can't have objects as keys in a map
You can, indirectly, by using a hash map implementation that does some checks on the object and, if it can't be hashed, uses its identityHashCode.
No, that still doesn't work because you will have collisions that must be resolved with an equality check which is poorly behaved in this system.
> things like mutable objects aren't supported
That helps somewhat, because it means you don't have to worry about the thing I mentioned above about requesting maps and needing to send the expected key and value ids
> things like mutable objects aren't supported
Ah, huh.
In the CLJS thread you mentioned that you wanted for the functionality to work on any JS object though. So either that task was unrelated or I guess the requirements have changed.
> that still doesn't work because you will have collisions that must be resolved with an equality check
The equality check will be a simple identical?. Why is it poorly behaved?
hmm. That's a good point; I wasn't thinking about using identical? because my brain was on things that would allow stable references over a wire and over time.
mutable objects can't be supported because retaining a pointer to them doesn't make any sense, since they can change, and coming back later to explore the value at that point in time is going to be wrong
General equality is poorly behaved in this system because it doesn't halt.
identity is pretty much fine though.
> In the CLJS thread you mentioned that you wanted for the functionality to work on any JS object though.
yes, the id thing I needed for every object. By "not supported" I mean that the debugger will signal that the value is a mutable one and you shouldn't trust it (or something along those lines)
> doesn't make any sense since they can change, and coming after to explore the value at that point in time is going to be wrong
In the "at value store time" model - yeah, definitely. I'm thinking in the "at value request time" model, so we're talking about different things - here, a client would be able to refresh its knowledge of the state.
yeah, I'm not worried about that functionality because it is not going to be useful for mutable objects, no matter how you implement it
I almost never use debuggers, but I disagree here. Requesting a state, then running some code, then re-requesting that state can be really useful to see what exactly running that code has changed.
so this debugger is all around immutable and derefable values, which I think cover most of the use cases
I am going to say that this sounds a lot like an interactive version of sayid, and maybe could be implemented on top of that with a parser
> I almost never use debuggers, but I disagree here. Requesting a state, then running some code, then re-requesting that state can be really useful to see what exactly running that code has changed
But how can you store all the intermediate states of that previous find-twins function, for example? You would have to serialize the mutable HashSet every time, and you never know whether you can serialize a mutable object; it can also be incredibly big.
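One hedged workaround at store time is to snapshot the mutable collection into an immutable value, so later mutations can't corrupt the recorded state; a sketch (cost and feasibility depend on the collection's size):

```clojure
(import 'java.util.HashSet)

(def hs (HashSet.))
(.add hs 1)
(.add hs 2)

;; Snapshot: copy the mutable set into an immutable Clojure set.
(def snapshot (into #{} hs))

;; Mutating the original no longer affects the snapshot:
(.add hs 3)
snapshot ;; => #{1 2}
```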
ah, I guess technically sayid doesn't track values through forms inside a function
> I am going to say that this sounds a lot like an interactive version of sayid, and maybe could be implemented on top of that with a parser
This debugger has existed for about 2 years already (https://github.com/jpmonettas/flow-storm-debugger) and IMHO it is much more powerful than sayid.
it also supports ClojureScript
Fair enough. I don't really keep up to date on debuggers, especially because while I do use debuggers, I really don't like firing up another debugger window to attach to my process.
I never use portal or flow-storm or rebl or any similar tooling.
I created it because I'm kind of tired of randomly adding print statements, especially for complicated algorithms or deeply nested data.
That's fair. I usually just use the cider debugger.
Which is about to get a lot better with JDK 19 and monkeypatching core.async to use virtual threads.
flow-storm is exactly like the cider debugger, but it also supports time travel, works on ClojureScript, and offers various ways of exploring what happened during an execution, including a programmatic one, since you have access from your REPL to the execution indexes.
it can also instrument entire codebases, so you can do stuff like instrument the entire ClojureScript compiler, compile something, and then step over everything that happened, explore the execution at different levels, inspect values, etc.
all with one command
fair enough
> but how can you store all the intermediate states of that previous find-twins function for example?
> you never know if you can serialize a mutable object
Hmm. My thinking comes from what I've seen with Portal - it can show you some representation of any object. So if you want a similar thing, you need to serialize only that representation and not the whole object. But I have no idea how it actually tracks whether an object has changed or not. E.g. I think I can expand a hash map, then change it on the server, and try to expand it further on the client - no idea what happens in Portal there.
But, if I'm not confused, Portal is like REBL or Reveal: a way of visualizing a specific value you just tapped.
> Stay tuned. We're working on some interesting stuff
@U050ECB92 nice!! Around debugging?
I guess I'll have to wait to the next Conj 😛
> if I'm not confused portal is like REBL
"Yes, but." I was wrong - it doesn't peek into your state over the wire. It either peeks into the state in its own process or serializes values if you work over the wire, so there are limitations in both cases: https://cljdoc.org/d/djblue/portal/0.31.0/doc/guides/shadow-cljs#portalweb-vs-portalshadowremote
I'm a little late to the party, but I know clerk offers streaming potentially infinite values to the browser for visualization.
It sounds like you've already got a plan, but it's possible to use infinite lazy sequences as keys in Clojure maps if you wrap them first:
(defprotocol PWrapped
  (-unwrap [this]))
(deftype APWrapped [obj]
  clojure.lang.IHashEq
  (hasheq [_] (System/identityHashCode obj))
  Object
  (hashCode [_] (System/identityHashCode obj))
  (equals [this that]
    (if (instance? APWrapped that)
      (identical? obj (-unwrap that))
      false))
  PWrapped
  (-unwrap [_] obj))
(defn wrap [o]
(->APWrapped o))
(def my-range (range))
(def my-range2 (range))
(def my-obj-map {(wrap my-range) "my-range"
(wrap my-range2) "my-range2"})
(count my-obj-map) ;; 2
(vals my-obj-map) ;; ("my-range" "my-range2")
(get my-obj-map (wrap my-range)) ;; "my-range"
(get my-obj-map (wrap my-range2)) ;; "my-range2"
(get my-obj-map (wrap (range))) ;; nil
I think this works even when System/identityHashCode has a collision:
(defprotocol PWrapped
  (-unwrap [this]))
(deftype APWrapped [obj]
  clojure.lang.IHashEq
  (hasheq [_] 42)
  Object
  (hashCode [_] 42)
  (equals [this that]
    (if (instance? APWrapped that)
      (identical? obj (-unwrap that))
      false))
  PWrapped
  (-unwrap [_] obj))
(defn wrap [o]
(->APWrapped o))
(def my-range (range))
(def my-range2 (range))
(def my-obj-map {(wrap my-range) "my-range"
(wrap my-range2) "my-range2"})
(count my-obj-map) ;; 2
(vals my-obj-map) ;; ("my-range" "my-range2")
(get my-obj-map (wrap my-range)) ;; "my-range"
(get my-obj-map (wrap my-range2)) ;; "my-range2"
(get my-obj-map (wrap (range))) ;; nil
Does anyone have a snippet showing how to use rewrite-clj to deep merge Clojure data into the original EDN document? I guess I'll need to use a zipper and descend the tree…
Depending on what you are doing, https://github.com/borkdude/rewrite-edn might be interesting. Drop by #rewrite-clj if you have questions.
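For the simpler cases, rewrite-edn can update a node while preserving the original whitespace and comments; a minimal sketch based on its README (the input string here is made up):

```clojure
(require '[borkdude.rewrite-edn :as r])

;; Parse the original EDN text into nodes, keeping formatting intact.
(def nodes (r/parse-string "{:deps {foo/bar {:mvn/version \"0.1.0\"}}}"))

;; assoc-in returns new nodes; untouched parts keep their layout.
(str (r/assoc-in nodes [:deps 'baz/qux] {:mvn/version "0.2.0"}))
```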
implemented it successfully
(defn- map-node? [zloc]
  (some-> zloc z/node n/tag (= :map)))

(defn- children
  "Return the child zlocs of a :map zloc, alternating keys and vals."
  [zloc]
  (take-while some? (iterate z/right (z/down zloc))))

(defn deep-merge
  "Deep merge 2 zippers."
  [zloc override-zloc]
  (if (and (map-node? zloc) (map-node? override-zloc))
    (reduce
     (fn [zloc [k v]]
       (let [k (z/sexpr k)]
         (->> (if-let [prev-v (z/get zloc k)] (deep-merge prev-v v) v)
              z/node
              (z/assoc zloc k))))
     zloc
     (partition-all 2 (children override-zloc)))
    (z/replace zloc (z/node override-zloc))))
if you can simplify that let me know
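A hypothetical usage sketch of the deep-merge above, assuming it and its requires (rewrite-clj.zip as z, rewrite-clj.node as n) are loaded; the result is compared as data with z/sexpr, since the exact whitespace produced by z/assoc may vary:

```clojure
(require '[rewrite-clj.zip :as z])

(def base     (z/of-string "{:a 1 :b {:c 2}}"))
(def override (z/of-string "{:b {:d 3}}"))

;; Nested keys from the override are merged into the base document;
;; keys only present in the base are left untouched.
(z/sexpr (deep-merge base override))
```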
Congrats @U66G3SGP5!
Howdy folks, anyone know if it's possible to get permission to access the Clojure Jira? I'm from Amazon/AWS and I want to be able to comment on an issue related to Maven credential customization (for CI use cases): https://clojure.atlassian.net/browse/TDEPS-99 I tried creating an account, and it was created successfully, but the account doesn't have permission to access Jira.
Thanks!
there is a new tools.deps feature that might take care of this https://clojurians.slack.com/archives/C6QH853H8/p1663368713858269
Oh yeah, that could give me a much nicer workaround than patching the clojure script.
in the future, the best way to start this convo is either with a question in #tools-deps or on https://ask.clojure.org !
Thanks Alex! First time trying to level up from user to question-asker, haha
Hey people! Maybe this is a noob question, but do you know how we define deprecation warnings in Clojure?
It depends on who your audience is. Sometimes some metadata attached to the var works ((defn ^:deprecated something [])). Sometimes you need a little more visibility, like how core.memoize does it: https://github.com/clojure/core.memoize/blob/master/src/main/clojure/clojure/core/memoize.clj
Clojure core does something like the former as well (just search through clojure.core’s src code)
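A minimal sketch of the metadata approach (the var names here are made up):

```clojure
;; ^:deprecated attaches {:deprecated true} to the var's metadata;
;; linters such as clj-kondo will then warn at call sites.
(defn ^:deprecated old-sum
  "DEPRECATED: prefer `new-sum`."
  [a b]
  (+ a b))

(defn new-sum [a b]
  (+ a b))

;; The flag is visible on the var:
(:deprecated (meta #'old-sum)) ;; => true
```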
A common convention is to use the :deprecated metadata key; see https://guide.clojure.style/#deprecated.