Daniel Shriki 10:04:35

(every? {:a :b?} [:a :b?])
returns false?

Daniel Shriki 10:04:27

(every? {:a :b?} [:a])
returns true. For some reason, the question mark makes it return the wrong result

Daniel Shriki 10:04:09


(contains? {:a :b?} :b?)
returns false.


It's not about the question mark. Seems like you're using {} where you meant to use #{}. The former is a map, the latter is a set.
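A quick REPL illustration of the difference:

```clojure
;; A map used as a function looks up keys, so ({:a :b?} :b?) is nil
;; because :b? is a value, not a key. A set used as a function
;; tests membership.
(every? {:a :b?} [:a :b?])   ;; false: ({:a :b?} :b?) => nil
(every? #{:a :b?} [:a :b?])  ;; true: both elements are in the set
(contains? #{:a :b?} :b?)    ;; true
```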

Daniel Shriki10:04:54

haha right, my bad 🙂 tnx


Hi. For this deftype, why add the protocol IReactiveAtom but not implement it. What’s the purpose?


For which deftype? Seems like you didn't attach the code you meant to.


> but not implement it
But the protocol is empty - it has no methods to implement, so specifying it doesn't necessitate any implementation. It usually means that it's a marker protocol - just so you can call satisfies? on it somewhere. You can see it in the implementation of cursor.

thanks3 1
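A minimal sketch of the marker-protocol pattern (the names IMarker and Tagged are made up for illustration):

```clojure
;; A marker protocol declares no methods; a type lists it only so
;; that satisfies? can identify instances later.
(defprotocol IMarker)

(deftype Tagged []
  IMarker)  ; nothing to implement

(satisfies? IMarker (Tagged.))  ;; => true
(satisfies? IMarker "a string") ;; => false
```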

Huh, TIL this works:

user=> (into {} [{:a 1} {:b 2} {:c 3 :d 4}])
{:a 1, :b 2, :c 3, :d 4}
I've usually used (reduce into {} [{:a 1} {:b 2} {:c 3 :d 4}])

👍 2

I guess it's down to:

user=> (conj {} {:a 1})
{:a 1}
user=> (conj {} [:a 1])
{:a 1}


map conj accepts maps and entries
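Which explains both results above: into is built on conj, and map conj merges maps, [k v] vectors, and MapEntries alike:

```clojure
;; conj on a map accepts another map, a [k v] vector, or a MapEntry:
(conj {} {:a 1} {:b 2})          ;; => {:a 1, :b 2}
(conj {} [:a 1])                 ;; => {:a 1}
(conj {} (first {:a 1}))         ;; => {:a 1}  (first of a map is a MapEntry)

;; so (into {} coll) is effectively (reduce conj {} coll):
(into {} [{:a 1} {:b 2 :c 3}])   ;; => {:a 1, :b 2, :c 3}
```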


I'm looking to store edn data as bytes that I can read in the future. It seems like the options are:
• write edn to a string via pr
• nippy, which I would also consider, but this seems to indicate that long-term storage might not be an intended use case (although I know people are doing it anyway):
> Nippy was never designed to be a format portable between platforms. Its design tradeoffs emphasize performance and simplicity of [JVM] implementation over portability. If the goal is portability, a different approach may be sensible.
Are there any other options I'm missing that I should consider?


Also, is long-term storage an intended use case for fressian? It seems to be the case, but it's not explicitly listed in its documentation.


Oh! @U5FV4MJHG and @U0510902N have made an episode on this very topic. They discuss Fressian and Nippy, amongst other things. You might find it helpful. #clojuredesign-podcast

👍 1

Do you know if they discuss long term storage?


A bit if I recall correctly, but only briefly. That was in conjunction with Nippy and data format backwards compatibility.


yes, I remember mentioning long-term storage


we tested serializing data with older versions of nippy and deserializing them with later versions, and things went ok


I've used nippy as a somewhat long-term storage solution in the past


edn is, I believe, the only option where multiple readers exist written in different languages


I'm ok if it's clojure only


Another option is transit, which is much faster than edn while maintaining many of the semantics


I just mean, depending on what your definition of long term is: the more implementations exist, the easier it must be to reimplement if something changes in the future


Writing edn from clojure can be problematic

😬 1

You have to use pr or prn, which have control knobs that are often set in the repl, because the repl also uses prn


I've had storage of event logs corrupted because the snapshot function was called in a repl where *print-level* or whatever was bound

👍 1

Then we had a fork of tools.reader to try and recover that


@U0510902N I have ruled out transit because there's a big warning at the top of the readme that says transit is not intended for storing data long-term:
> NOTE: Transit is intended primarily as a wire protocol for transferring data between applications. If storing Transit data durably, readers and writers are expected to use the same version of Transit and you are responsible for migrating/transforming/re-storing that data when and if the transit format changes.

👍 1

@U0NCTKEV8, yep. This is my snippet for using pr to write edn. Hopefully, it writes right.

(with-open [w (io/writer fname)]
  (binding [*print-length* false
            *out* w]
    (pr obj)))


There is the other knob, depth maybe?


You're right. There are a few other dynamic vars that control how printing is handled. I should probably explicitly set all of them.
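A sketch of what setting all of them might look like (the helper name write-edn! is made up for illustration), pinning every printing var to a known-safe value so repl-local bindings can't leak in:

```clojure
(require '[clojure.java.io :as io])

;; Hypothetical helper: bind every dynamic var that affects pr's
;; output before writing, so the result is always readable edn.
(defn write-edn! [fname obj]
  (with-open [w (io/writer fname)]
    (binding [*print-length*         nil   ; never elide with "..."
              *print-level*          nil   ; never elide with "#"
              *print-meta*           false
              *print-dup*            false
              *print-readably*       true
              *print-namespace-maps* false
              *out*                  w]
      (pr obj))))
```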


FWIW, cljdoc uses nippy to store edn in SQLite blobs.


XTDB uses nippy for persistently storing documents


I get that people are using nippy for long-term storage, but it doesn't seem like a good fit to me based on:
1. no formal written spec
2. > Nippy was never designed to be a format portable between platforms. Its design tradeoffs emphasize performance and simplicity of [JVM] implementation over portability. If the goal is portability, a different approach may be sensible.
I'm happy to be persuaded if the situation has changed or changes in the future, but my #1 requirement is writing data that I can read at some indefinite point in the future without a hassle.


Yeah I know - I made that issue


Yeah, not suggesting it is a good idea, just sharing a usage.


I don't think fressian has a spec either


I'm open to puzzling through writing one, though, for one or both if there is interest


There are indications that Fressian is used to serialize data for Datomic in place. That gives me at least some confidence that long-term storage might be a design goal. Either way, this problem is a lot harder than I thought. Just searching public Clojure code for *print-readably*, I couldn't find a single usage that explicitly sets all 6 dynamic vars that affect printing via pr and friends. There was only 1 usage that set 5 of 6.


Yeah, I am not surprised. I just have a very peculiar scar: what are the odds that you have a repl you use all the time for ops work in production that binds those things, you have a function that transfers event logs from short-term to long-term storage, part of that process is serializing to edn, and you call that function in your production repl while debugging some other issue?


And then somehow (I forget, it's been years), sometime later when you attempt to rebuild from your event logs, you get errors because of the ... scattered throughout
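That failure mode is easy to reproduce at the repl:

```clojure
;; With *print-length* bound, pr-str silently elides data:
(binding [*print-length* 2]
  (pr-str [1 2 3 4 5]))   ;; => "[1 2 ...]"  -- no longer valid edn

;; With *print-level* bound, nested structures become "#":
(binding [*print-level* 1]
  (pr-str {:a {:b 1}}))   ;; => "{:a #}"     -- also unreadable
```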


@U7RJTCH6J At the risk of branding myself a heretic, have you considered using something non-Clojure, like Protocol Buffers? It also has a binary format. And there are Clojure wrappers for it.


I'm not against using a non-clojure library, but I really do want to use most clojure data types like maps, sets, symbols, vectors, lists, keywords, etc as well as some other basic data types like instants and uuids. Also, the fact that "you cannot fully interpret [a proto buf] without access to its corresponding .proto file" is a big drawback.

👍 1

@U7RJTCH6J Out of sheer curiosity, why must the data be stored in a binary format instead of plain text, and is compression a requirement?


It doesn't. I'm just listing options for comparison.

👍 1

Aha! I misunderstood your initial post and thought it had to be binary. If that's not a requirement, perhaps the safest long-term option is just plain ol’ EDN.

👍 1

There's only one real requirement: I want to write edn data that I can read at some indefinite point in the future without a hassle. Hassles include:
• Requiring a specific library version
• Requiring a specific version of clojure
• Requiring a specific JVM version
• Requiring a particular operating system
Bonus points for:
• speed
• efficiency
• readable from other languages/platforms
• readable from multiple libraries


It sounds like EDN would satisfy those requirements, as there are libraries to read and write it for many different programming languages, etc.

👍 1
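A quick check that a plain edn round trip preserves core Clojure data, using the safe reader in clojure.edn:

```clojure
(require '[clojure.edn :as edn])

;; pr-str writes readable edn; clojure.edn/read-string reads it back
;; without evaluating code. #inst and #uuid are supported by default.
(let [data {:name "example"
            :tags #{:a :b}
            :ids  [1 2 3]
            :when #inst "2020-01-01T00:00:00.000-00:00"}]
  (= data (edn/read-string (pr-str data))))   ;; => true
```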

at penpot, where space efficiency, speed, and backward compatibility are important, we are currently using fressian for our long-term storage

🙏 1

a combination of fressian + zstd


I know I'm a bit late to the party. I can comment a bit more on the experience that @U0510902N and I had with Nippy. We have nippy-serialized data from 5 years ago that we can still read out just fine. Nippy makes a best effort to preserve all sorts of underlying types (e.g. java.time.Instant vs java.util.Date). If you want to avoid any of that "magic", you can wrap your serialization/deserialization to transform the tree into the most basic of Clojure types (e.g. use a long integer for an instant instead of a java.time.Instant). The core set of types has been extremely stable for many years.

💡 1
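The "avoid the magic" idea above might be sketched as a pre-serialization pass that walks the tree and lowers rich types to core ones (basify is a made-up name; the result would then go to nippy/freeze or any other serializer):

```clojure
(require '[clojure.walk :as walk])

;; Hypothetical helper: replace java.time.Instant values with plain
;; epoch-millis longs, so only core Clojure types are serialized.
(defn basify [data]
  (walk/postwalk
    (fn [x]
      (if (instance? java.time.Instant x)
        (.toEpochMilli ^java.time.Instant x)
        x))
    data))

(basify {:id 42 :created-at (java.time.Instant/ofEpochMilli 0)})
;; => {:id 42, :created-at 0}
```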

We've never had any trouble going from "old" nippy data to "new" nippy data, just the other way around. Over time, Nippy has added more capabilities, so the version you serialize with should be the lowest version you use to deserialize with.


For what it's worth, we've never had any trouble requiring a specific version of Nippy.


There have been breaking changes, but not on any of the core types. It's related to advanced features and how native Java classes are handled. Take a look at the changelog and search for "breaking":


For cljdoc, we were at one point serializing Exception objects. We stopped doing that and moved to a simple map representation instead. I did spend some time on it, but in the end, the upgrade was fine.

💡 1

Yeah. I was concerned about the changes that came with the security fix and it turned out to be a no-op for us because we avoided serializing classes.


@U08JKUHA9 Thanks for sharing clj-cbor. That's new to me. It looks really interesting.


Have you done any tests or analysis to determine that there aren't currently any version dependencies on OS, clojure version, nippy version? The quote makes it sound like this sort of compatibility is explicitly not a design goal: > Nippy was never designed to be a format portable between platforms. Its design tradeoffs emphasize performance and simplicity of [JVM] implementation over portability. If the goal is portability, a different approach may be sensible. Given that, I don't see how even I can rely on this to be stable in the future.


I'm not saying other people shouldn't use nippy (I also use it for other use cases), but I don't think it's a good fit for me for this particular use case.


Yeah, that language was concerning to me. Since I've used it for many years now and it has been very stable, I think what he meant by it is that it's not designed to be an interchange format. For that, use something cross-platform or spec based (like Message Pack, Protocol Buffers, JSON, EDN, etc).


If you are exchanging data between services (like in a database or HTTP posts), you have to be careful to avoid a service encoding nippy using a new feature


For example, when LZW compression was introduced.


Is there any documentation or otherwise that indicates that long-term, durable storage is an explicit design goal for nippy?


I mean, it could be helpful to write out the features in a spec and maybe make a toy alternative implementation


Could also name the data format nipple, so that's fun


For example, the protocol buffer documentation has "The format is suitable for both ephemeral network traffic and long-term data storage."


The docs for thaw:
> ** By default, supports data frozen with Nippy v2+ ONLY **
> Add `{:v1-compatibility? true}` option to support thawing of data frozen with
> legacy versions of Nippy.


If you really want to get into the weeds, check out the source - it's pretty readable. Take a look at the type list: types don't change. Even deprecated types are kept around so they can be deserialized:


There are several types marked as future removal candidates


I would expect those to move to read-only and end up in the deprecated section. Also, since it's type-based, you can make sure the content you serialize does not include those types.


For me, personally, I'm relying on the fact that Nippy has had exceptional backward compatibility for many years now, and all the changes thus far have been edge cases. Even with those breaking changes, the docs have a clear migration strategy.


Something like cbor that doesn't even attempt to do much beyond the core types might be a better fit in some cases. You don't have to worry about the exotic problems that come with trying to handle arbitrary Java classes.


This might be a very naive thought, so brace yourselves 😅 Would it make sense to have a Clojure library that can “convert” Java classes and objects to/from Clojure’s core data structures as a free-standing thing? Then that could be used as a first step, passing the resulting data structures along to something like CBOR. It’s like extracting “the Java stuff” from Nippy into its own library to enable broader utility.


eh? I think for compatibility basically we should have a translation from the serialized data model <-> EDN, which is different from get/set to maps


He was talking about something that would be a standalone library that could be used as the first step for that. Are you saying it wouldn't fill that role?


No, because it specifically takes a class with getters and derives a conversion to a map


nippy actually serializes data in java's serialization format - which is extremely cursed w.r.t. its behavior in the JVM, but conceptually can still be read in as maps instead of directly to the intended classes


so if a reader/writer were made independent of the jvm's implementation that could be used


There are some things you are missing, but I don't care enough to argue

👍 1

final class Thing implements Serializable {
    private final double n;

    public Thing() {
        this.n = Math.random();
    }

    double n() {
        return this.n;
    }
}
What I'm trying to get at is that there is no way to round trip stuff like this with that approach, but nippy can via its serialization whitelist fallback. So if we are trying to replace "the java parts" of nippy, we would need to be able to work with some model of that representation.


Ok I don't really disagree with that, I just looked at it as a toolkit that helps with some patterns, but would need help with others. I didn't think of it as a complete solution or anything.


Nippy itself doesn't work automatically for everything; it also needs help sometimes

Вадим Вадим 20:04:23

Hello everyone! I am new to Clojure, so sorry for my question :) What is your opinion about the future of the Clojure language? Is the number of new projects in Clojure growing year after year?

Вадим Вадим 22:04:24

But what about the number of NEW projects in Clojure? Is it growing year after year or not? If the number of new projects is decreasing, then it means that the language is dying


If that number is not in that article, I doubt you'll find it anywhere. But the language is far from dying. At the very least, this community keeps on growing, and the amount of tools, including novel ones, keeps on growing. There's now very solid funding behind the language as well. I won't say more on the topic, because this question pops up here and there once every couple of months - it should be easy to find other answers, some of which go into much more detail.

👍 1

It really depends on the definition you want to attach to "dying". A language doesn't really "die" - "death" isn't a property of languages like it is of living things. If you're worried that it won't receive new features or updates, for example, you could say a language that stops receiving them is "dead". By that definition, fewer projects using Clojure than in previous years wouldn't mean it's dying. The question would be more: have the language maintainers and creators moved on to other things? And the answer is they haven't - they are very much active on Clojure, and new releases are being cut actively.

That said, I have no idea if the number of projects started using Clojure is going up or down. If you care about that, I'd ask: does it matter what kind of projects they are? I feel it should. If they were all student or beginner projects on GitHub, all attempting to build a simple thing over and over again, those don't seem super relevant. A better consideration would be: are there useful projects being started - things that get deployed to prod and serve an actual business, or libraries that you can then build upon yourself to help you build things that serve real users? That seems a more relevant metric. The latter means you get the opportunity to use existing libs to help you deliver working, useful software. The former might mean that indirectly there will be more job opportunities in Clojure in the future, and that, even if the core maintainers moved on, others would be motivated to pick up the mantle since they rely on it for real, useful software.


So, now that I've planted the seed that there might be better qualifiers for defining "dying" and "alive" for a language, I can say the following for Clojure:

The core maintainers and creators are actively working on it, as in, on a daily basis. They work for a company that uses Clojure to build a database called Datomic, which has real customers paying real money for it. That company is itself owned by a parent company called Nubank, a unicorn from Brazil now valued at $33 billion, which has already gone public on the New York Stock Exchange, see here: This is the company bankrolling Clojure development. They themselves make heavy use of Clojure, and they also use Datomic, the database built and maintained by the same maintainers and creators as Clojure. That's pretty "alive" already in my opinion.

On top of that, there's an active community of people on this Slack and other channels, where you can see people post and comment on a daily basis. Many also use Clojure at their work, as I do. So there's another set of projects that definitely use Clojure for real, useful work that pays bills. You also see new blog posts and libraries being announced almost every week - check the announcement and news-and-blog channels to see, and not all of them get announced here. There are people running podcasts, more than one, and there is more than one Clojure conference. Many developers are donated money on a recurring basis to work on open source Clojure libraries; it's hard to tally up all the GitHub Sponsors, Patreons, and Open Collectives, but the biggest representation of that is Clojurists Together, seen here: which pays thousands of dollars a month to sponsor open source work.

And then, there's also more than one implementation of Clojure! There are people who build and actively maintain a JS-based implementation called ClojureScript, which is similarly actively maintained, getting new updates and features. There is someone who maintains an implementation called ClojureCLR, which targets .NET - also actively maintained, though at a slower pace. There's an interpreted implementation built in Clojure itself, with a self-contained version called Babashka and a Node version called Node Babashka. There's someone who maintains a self-hosted implementation called Planck, etc. And there are people about to launch a new implementation of Clojure that will run on the DartVM and support Flutter.

So we know that we have:
1. Active core maintainers backed by a billion-dollar company
2. An active community
3. An active open source scene
4. An active alternate-implementation scene
5. Other companies actively using it in production
6. An active language seeing new features, patches, and releases
7. Active donations towards Clojure-related open source

I'd say from that, it already well deserves to be called "alive" and far from "dying". And even if I don't know the number of new projects started in Clojure per year, I don't think that matters: you can confidently decide to depend on it for useful, real production software, since it seems well positioned to be supported, maintained, and evolved for years to come, with a healthy community and open source scene.

❤️ 5
👍 4

The future of Clojure is extremely positive. Jobs have continued to increase (meaning an increase in projects) and the worldwide community continues to grow. It's the 13th anniversary of the London Clojurians this year and we still get a regular stream of new members. Clojure is not a hyped-up language and does not invest in marketing, so adoption is a considered decision by developers, which leads to high retention in the language and community.


> adoption is a considered decision by developers which leads to a high retention to the language and community This is a very salient bit, thanks!

👍 1