#beginners
2022-03-28
leifericf11:03:56

Dear fellow beginners! Please consider joining the #growth and/or #improve-getting-started channels to share your experiences with learning Clojure. Here are some prompts to get you thinking:
• How did you first become aware of Clojure?
• What was your reaction when you first heard about Clojure?
• If applicable: What made you overcome your initial skepticism about Clojure?
• How did you become convinced to try Clojure for yourself?
• How would you describe your experience when you first tried to use Clojure?
• Which aspects of Clojure did you find particularly difficult to understand?
• What would have made your “getting started experience” smoother?
Any insights and hot takes are much appreciated!

sova-soars-the-sora16:03:27

awesome! i think for me, having a live REPL and getting the synergy of the REPL figured out was the biggest stepping stone

👍 1
Savo17:03:21

Hello everyone! I wrote a guide on Overtone, mostly aimed at beginners: how to set it up and get to first sounds. Any feedback is appreciated. Thanks. https://savo.rocks/posts/overtone-basic-setup/

🙌 2
1
arielalexi20:03:45

Hey 🙂 I am working on the time-to-sleep between 2 events, and I am using this formula to calculate the time difference between 2 events: (- (:ts next-event) (:ts event)). My data looks like this (they are maps): {... :ts "1633392058.124700" ...} {... :ts "1633392055.124600" ...}. I parsed the data from strings to floats:

(- (Float/parseFloat (:ts next-event)) (Float/parseFloat (:ts event))) 
the problem with parsing :ts in this format is that the expected result of the difference is not the actual result. For example, for the :ts values 1633392058.124700 and 1633392055.124600, the result I get from the formula is 0.0 (the parsing converts the time to 1.633392E9) and not 3.0001001358. Is there a better way to calc the difference between two strings that represent timestamps? Or maybe a better way of parsing the data in order to keep the formula? tnx!

Alex Miller (Clojure team)20:03:34

Double/parseDouble maybe?

🚀 1
Alex Miller (Clojure team)20:03:11

Float is single-precision (32-bit), Double is double-precision (64-bit)
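A minimal REPL sketch of that precision gap, using the timestamps from this thread: float's ~7 significant digits cannot distinguish the two values, so both round to the same 32-bit number.

(- (Float/parseFloat "1633392058.124700")
   (Float/parseFloat "1633392055.124600"))
;; => 0.0

(- (Double/parseDouble "1633392058.124700")
   (Double/parseDouble "1633392055.124600"))
;; => 3.0001001358032227  ; double keeps ~15-16 significant digits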

Martin Půda20:03:48

(->> [{:ts "1633392058.124700"} {:ts "1633392055.124600"}]
     (map :ts)
     (map parse-double)
     (apply -))
=> 3.0001001358032227

1
arielalexi20:03:17

using Double/parseDouble instead of Float/parseFloat fixed the problem I had. thank you both 🙏

Alex Miller (Clojure team)20:03:15

fyi, the ->> solution there will be slower (but if it's readable to you, that may be ok)

Alex Miller (Clojure team)20:03:41

your original solution will return primitive floats (or doubles, later) and you'll get primitive arithmetic on the -, which is about 100x faster than the boxed arithmetic you'll get in the other solution

🚀 2
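The two shapes of the same computation, for reference (the ~100x figure is Alex's estimate, not measured here):

;; primitive: parseDouble returns a primitive double, so `-` compiles to
;; a primitive subtraction
(- (Double/parseDouble "1633392058.124700")
   (Double/parseDouble "1633392055.124600"))

;; boxed: parse-double and the sequence fns traffic in java.lang.Double
;; objects, so `-` goes through boxed arithmetic
(apply - (map parse-double ["1633392058.124700" "1633392055.124600"]))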
dpsutton20:03:17

(also, is it clear to you why the original result was 0 with Float and returns a correct result when using Double?)

arielalexi20:03:15

@dpsutton the reason is that we have more bits. Type float, 32 bits long, has a precision of about 7 significant digits; it can store values across a very large or very small range (±3.4 × 10^38 or 10^-38), but with only those 7 significant digits. Type double, 64 bits long, has a precision of about 15 digits and a bigger range (×10^±308). I think this is why with float I got 0, whereas with double I got numbers

💯 1
dpsutton20:03:13

Awesome. Didn’t want the solution to be opaque. Glad you know the reason the type change resolves it

😊 1
arielalexi20:03:34

tnx for checking 🙂

Paavo Pokkinen20:03:56

I defined a simple "hello world" handler for ring.adapter.jetty and launched it with run-jetty. However, evaluating the handler with some changes in the REPL does not modify the response given by the server; a full restart of the application is required. Why is that?

ghadi20:03:39

the most common pathology here is passing the value inside a var to a web server like jetty. Remember, vars are boxes

ghadi20:03:13

when you redefine that var, jetty or the route handler don't see the changes, because they have the value inside the var

Paavo Pokkinen20:03:23

This is my code:

Paavo Pokkinen20:03:27

(ns paavo.k-backend
  (:gen-class)
  (:require [ring.adapter.jetty :as jetty]))

(defn handler [request]
  {:status 200
   :headers {"Content-Type" "text/html"}
   :body "Hello dssddWodrld"})

(defn -main
  "I don't do a whole lot ... yet."
  [& args]
  (jetty/run-jetty handler {:port 3000
                            :join? false}))

ghadi20:03:40

pass the var itself

ghadi20:03:41

#'handler

ghadi20:03:48

rather than handler
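Putting that into Paavo's -main, a minimal sketch of the fix:

(defn -main
  [& args]
  ;; #'handler is the Var itself; it is deref'd on every invocation,
  ;; so re-evaluating (defn handler ...) in the REPL takes effect immediately
  (jetty/run-jetty #'handler {:port 3000
                              :join? false}))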

Paavo Pokkinen20:03:57

That works. Trying to understand the difference... passing plain handler is like passing the current value of "handler", instead of being like a pointer?

seancorfield21:03:55

@U038K5JQM2T Yes, the Var (`#'...`) provides a level of indirection.

👍 1
ghadi21:03:44

The key thing is that the box (the var) and the contents inside both implement the ‘function’ interface

1
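A quick REPL check of that "both are callable" point (assuming the handler from above is defined):

(ifn? handler)   ;; => true, the fn itself
(ifn? #'handler) ;; => true, the Var delegates invocation to its current value
(#'handler {})   ;; same response map as (handler {})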
andrewzhurov20:03:29

So you know how a Clojure program is a bag of mutable files. Is there any work on having it instead as an immutable structure?

quoll20:03:23

A jar file sort of works here, doesn't it? Admittedly, it is possible to edit them, but they're reasonably fixed

andy.fingerhut21:03:39

On Unix/Linux/macOS, turn off the write permissions on the source files? Not sure what kind of immutability you are looking for.

andy.fingerhut21:03:51

And yes, there are multiple kinds of immutability 🙂

andy.fingerhut21:03:59

At least, I distinguish different kinds of immutability by "what is the set of back-door / maybe-not-in-the-set-of-operations-you-consider-normal operations that can actually mutate the things". e.g. of course you can turn off write permissions on files, but if someone can turn the write permissions back on, then they can easily write the files again.

andy.fingerhut21:03:36

Haskell data structures are immutable "normally", but if you load some unsafe add-ons, they can mutate any byte of memory in a process, so immutability goes out the window.

razum2um23:03:18

nope guys, this thread (not sure why beginners) is
• not about prohibiting source code modifications
• not about immutable single-artifact code distribution
this is about the idea of treating the codebase as a data structure and persisting it accordingly, but without the MESS of a smalltalk-like environment, and without repeating CL-dump-the-RAM deployments - because both of those dev environments treat the codebase as a mutable world (…because THE world is mutable, you would say - but wait, Church’s lambda doesn’t exist there either, only the turing machine). this is about the idea of treating the codebase as at least an immutable map with namespaced fn-names, and only modifying it accordingly:

(swap my-codebase :fn/name a-macro-which-modifies-that-fn)
I know that it’s pretty much possible in the repl (until restart :trollface:), and the repl is even (maybe) persisted in dotfiles, however this is not enough: now both are just side-effects and the file is the source of truth. but a file is inherently mutable (like a row in traditional RDBMSes) and shouldn’t be the way to go. datomic, on the other hand, exposes bytes in an immutable nature, but it is a fact-collection - in a nutshell, an array of EAV (entity-attribute-values) - and unfortunately this is another direction as well. but this idea is about a CQRS nature, and cljc/jar files are just the side-effects of swap-ing the code (i.e. the storage should resemble kafka more than datomic), i.e. a code change is just another event in a system which is “playable” and “rewindable”

👍 1
1
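A toy sketch of that swap idea in plain Clojure (all names hypothetical): the "codebase" is an immutable map from namespaced fn names to their source forms, and edits are ordinary data transformations.

(def codebase
  (atom {'my.app/sum '(fn [xs] {:sum (reduce + xs)})}))

;; a "macro which modifies that fn": rewrite the form to also return a count
(swap! codebase update 'my.app/sum
       (fn [[_ args body]]
         (list 'fn args
               (list 'assoc body :count (list 'count (first args))))))

((eval (get @codebase 'my.app/sum)) [1 2 3])
;; => {:sum 6, :count 3}

;; each value of the map is itself immutable, so "playable/rewindable"
;; history is just a matter of keeping the old map values around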
respatialized23:03:25

https://www.unison-lang.org/learn/the-big-idea/ I wonder how many ideas Clojure can steal from Unison (a language with content-addressed and immutable code). It seems like you could do neat things with a Lisp by storing (parts of) program state and code in a persistent data structure or even a CRDT. Unfortunately, you wouldn't be able to get one of the major benefits of Unison's immutable code (trivial serialization of arbitrary programs) in Clojure, because you can't serialize all of a function's dependencies. Jars are immutable but not (clojure) data, so you can't peel them apart after compilation, and therefore whatever code you're sending over the wire has to have the same classpath as your origin point.

👀 1
👍 1
1
respatialized23:03:22

https://github.com/Datomic/codeq codeq gets part of the way to "code as immutable structure", but it doesn't have the "liveness" of a REPL or smalltalk image - though that may be in part because no one has built REPL tooling on top of it (as far as I know)

👀 1
nice 1
respatialized23:03:12

here's another early stage project that's pushing things in a similar direction: https://github.com/repl-acement/editors

nice 1
respatialized23:03:22

I think the "dream" for something like this would be being able to diff the state of two running programs
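Not the full dream, but clojure.data/diff (a real core namespace) already gives structural diffs of Clojure data, so two "codebase as data" snapshots could be compared directly; a sketch with hypothetical snapshots:

(require '[clojure.data :as data])

(data/diff {'app/f '(fn [x] x)}
           {'app/f '(fn [x] (inc x))})
;; => [{app/f [nil nil x]}
;;     {app/f [nil nil (inc x)]}
;;     {app/f [fn [x]]}]
;; i.e. [only-in-old only-in-new in-both]: only the changed subform
;; shows up on each side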

razum2um23:03:12

> I think the “dream” for something like this would be being able to diff the state of two running programs
good thinking. I also stopped at the point where we'd need hardware-accelerated "immutable"-emulating memory, because we HAVE TO re-use semiconductors in the physical silicon world - i.e. mutate the low-level RAM. but can we prototype a software emulator for a machine like that?

1
respatialized23:03:25

why not just replace the GC in your program with a queue that persists infrequently used objects/values to disk as part of mark-and-sweep 😅

quoll23:03:35

Oh, that grepl looks nice!

💯 1
respatialized23:03:53

some other projects have looked at how one might store program state in "larger than RAM" contexts https://github.com/jimpil/duratom https://github.com/polygloton/durable-atom https://github.com/jackrusher/spicerack
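The core trick those libraries share can be sketched with nothing but clojure.core - an atom whose every state change is written through to disk (path hypothetical; the real libraries also handle atomicity, serialization formats, and alternate backends):

(def backing-file "/tmp/state.edn")

(def state
  (atom (try (read-string (slurp backing-file))
             (catch Exception _ {}))))  ; fall back to empty state

(add-watch state :persist
           (fn [_key _ref _old new-value]
             (spit backing-file (pr-str new-value))))

(swap! state assoc :visits 1) ; the new value is now also on disk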

quoll23:03:51

It reminds me of something I did in 2015: https://github.com/quoll/cast Back then, Asami didn't exist (hence it uses Datomic), and neither did the various Clojure parsers (hence, it uses a modified LispReader.java from the Clojure sources)

nice 2
🆒 1
💪 1
quoll23:03:27

But it parses Clojure, then stores the whole lot in Datomic

razum2um23:03:58

> other projects have looked at how one might store program state
yeah, duratom is my way to go in all small pet projects and self-serve utilities. but that is about the usual (data-)state only

respatialized23:03:56

I've been led down this exact rabbit hole by the seemingly simple problem of "my static website generator shouldn't waste time recomputing pages that have already been rendered"

razum2um23:03:28

> modified LispReader.java
yeah, in my very basic prototypes i realised there's a big gap between what we write using macros, what in particular gets executed, and what will become a subject for swap-modification later (and should be stored as code-data). ideally we need a basic, macro-sugar-free core lang (clojure.core is already quite polluted, unfortunately)

1
phronmophobic23:03:55

This past weekend, we were diving through https://github.com/nextjournal/clerk implementation which touches on a number of these problems, https://youtu.be/1bdUfq-8XLM

razum2um23:03:15

basically, macros break the rule "there should be just one way to introduce branching given a code node" - but we already have if and when - or "there should be just 1 way to wrap a computation node and make it a let-binding"

👍 1
phronmophobic23:03:16

you can always macro expand to bottom out at the clojure special forms
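For example, with clojure.walk/macroexpand-all (assuming this is the "bottom" meant here; it expands recursively until only special forms and fn calls remain):

(require '[clojure.walk :as walk])

(walk/macroexpand-all '(when (pos? x) (println x)))
;; => (if (pos? x) (do (println x)))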

phronmophobic23:03:31

additionally, all jvm clojure code eventually emits jvm bytecode which can provide a bottom (although it's not a trivial bottom)

razum2um23:03:32

> you can always macro expand
yes, i know, but the mental trick is that in a-macro-which-modifies-that-fn, a real person prefers to manipulate the tree he submitted. but logically we should operate on the macroexpand outcome only (on the very-very basic building blocks)

1
phronmophobic23:03:18

Are there good problem statements for this type of work? repl-acement has a pretty interesting take, https://github.com/repl-acement/editors/blob/main/BACKGROUND.md

👍 1
razum2um23:03:21

but even let in clojure.core is not a basic primitive :trollface: looks like i need a new lang definition where macroexpand-all treats let and if as "low-level" and doesn't expand them

1
1
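Indeed, a single macroexpand-1 shows let bottoming out at the let* special form:

(macroexpand-1 '(let [x 1] x))
;; => (let* [x 1] x)
;; let* and if are special forms; let is "just" a macro over let*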
seancorfield00:03:47

How would your conceptual (swap! code :fn/name some-macro) handle stateful changes that are inherent in certain Clojure forms, such as ns (and require / refer / import, etc.), or top-level set! calls, or other top-level forms that have side-effects (such as invoking a macro to generate a bunch of def / defn forms)?

seancorfield00:03:15

(and, yeah, this thread is far beyond #beginners at this point but I'm not entirely sure where it belongs given that it is Clojure-related, sort of)

quoll01:03:10

> a real person prefers to manipulate over a tree he submitted. Yes, but I might prefer that too.

🙃 1
ahungry01:03:16

The entire idea of non-source-file "code" is imo a solution in search of a problem - I was going to link unison, as I had read about it before and tinkered with it a little - but there is so much around files that'd need to be reinvented; you'd be reinventing all 18 wheels of a semi (VCS system, peer review system, patch systems, file backup systems, editor systems). as far as i can recall, unison, despite being around with this idea for a while now, only has basic repl image support

ahungry01:03:39

could you dev faster with such a system? probably not - could you have more reproducible builds? maybe - but there are way simpler ways to achieve a deterministic (re)build - you could target something like nix and ensure you mirror all your own deps and it'd be much less effort

phronmophobic01:03:40

> The entire idea of non-source file "code" is imo a solution in search of a problem
I agree that there's an impedance mismatch with the world of programming tools that expect text files, but there's a tremendous amount of power unlocked when you can treat code as data. I think non-source-file "code" is a prerequisite for building programs via direct manipulation. For example:
• https://youtu.be/1gGd7pKSpRM
• https://www.youtube.com/watch?v=jC2_O5Jh_Rg
• https://github.com/LeifAndersen/interactive-syntax-clojure
• my current experimentation, https://clojurians.slack.com/archives/C02V9TL2G3V/p1646782532013739

😋 1
👀 1
💪 2
1
andrewzhurov08:03:50

Wow, so many ideas mentioned. Loving it! ❤️

andrewzhurov08:03:21

> datomic’s on the other hand is exposing bytes in an immutable nature but this is a fact-collection, in the very nutshell it’s an array of EAV (entity-attribute-values), unfortunately, this is another direction as well
@U04V1HS2L Why is an array of EAVs an unfortunate data model?

andrewzhurov08:03:30

> But this idea is about a CQRS nature and cljc/jar files are just the side-effects of swap-ing the code (i.e. storage should resemble kafka more than datomic)
I'm not sure there is a need for an event system in order to have the program as an immutable structure. If we have a persistent immutable data structure, then history is kinda.. there, all the time. E.g., git. It may be nice to capture the user's intent in order to derive different data models out of it later on, though. Intent, captured in such events, is the source of truth, it seems.. :thinking_face:

andrewzhurov08:03:43

> cljc/jar files are just the side-effects of swap-ing the code
If we treat events as the source of truth, then an immutable data structure is just a data model derived out of them, and text files are a representation of it / yet another derived view.
events => immutable data structure
immutable data structure + names / place-based addressing => bag of text files

1
andrewzhurov08:03:52

> it seems like you could do neat things with a Lisp by storing (parts of) program state and code in a persistent data structure or even a CRDT.
The line between code and the data it operates upon kinda gets blurred, right? I.e., there is just data (code included), and more data can be derived out of it (and then, later on, be used to derive more data (becoming code)). Macro-time, run-time and other *-times become one. Or, rather, time ceases to exist at all.. as programs become data that is free to be interpreted to derive more data at our leisure, and paused without loss of run-time state, as there is no 'run-time' anymore.. huh? :thinking_face:

andrewzhurov08:03:59

> it seems like you could do neat things with a Lisp by storing (parts of) program state and code in a persistent data structure or even a CRDT
Remember the git conflicts that appear regularly when we fight for the same place in place-based addressing with text files? The neat thing about having CRDTs manage code is forgetting the word 'conflict' exists. 😄

andrewzhurov09:03:17

> unfortunately you wouldn't be able to get one of the major benefits of Unison's immutable code (trivial serialization of arbitrary programs) in Clojure because you can't serialize all of a function's dependencies. jars are immutable but not (clojure) data, so you can't peel them apart after compilation, and therefore whatever code you're sending over the wire has to have the same classpath as your origin point.
Having a Clojure project, couldn't you crawl its place-based-addressable codebase, reading it all as data? E.g., read the codebase of a project as data, fetch dependent libraries, repeat. At the end we have a data structure of all dependent Clojure code, leaving us only with platform dependencies such as .jars. Can't be that simple, right? 😄
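Reading a project's sources as data is the easy half, at least; a minimal sketch with clojure.core/read (file path hypothetical, and real code would also need to resolve reader conditionals, deps, etc.):

(require '[clojure.java.io :as io])

(defn file->forms [path]
  (with-open [r (java.io.PushbackReader. (io/reader path))]
    (doall (take-while #(not= ::eof %)
                       (repeatedly #(read {:eof ::eof} r))))))

(file->forms "src/my/app.clj")
;; => a seq of top-level forms, as plain Clojure data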

andrewzhurov09:03:29

> At the end we have a datastructure of all dependent Clojure code, leaving us only with platform dependencies such as .jars
+ environment variables + environment (JDK version, native libs, etc). In order to have a reproducible program, these two would need to be specified as well. Perhaps these dependencies can be explicitly captured as data, too, and a Nix or Guix description derived out of them. .. that makes it a biiit more complex though, yeah.. 🙂

andrewzhurov10:03:00

> codeq gets part of the way to "code as immutable structure" but it doesn't have the "liveness" of a REPL or smalltalk image, but that may be in part because no one has built REPL tooling on top of it (as far as I know)
Neat! codeq looks promising; it seems to take steps in the same direction. I wonder how granular they go - is it down just to defs, or all the way to code as data? They mention that they dig for the semantics of code https://hyp.is/awIsyK9GEey13JdEQAiLFw/blog.datomic.com/2012/10/codeq.html and they parse something called :codec/code, it appears https://hyp.is/GkeZoq9HEeyqKA_DVqXXTA/blog.datomic.com/2012/10/codeq.html. I wonder what's inside?

andrewzhurov10:03:29

> I think the "dream" for something like this would be being able to diff the state of two running programs
Having code as data, will there be a run-time at all? As any evaluation of code results in more data which accretes on top.. a kind of first-class cache (huuh? 😄)

andrewzhurov10:03:26

> But it parses Clojure, then stores the whole lot in Datomic
That sounds like it! I will take a keen look. Interestingly, https://github.com/repl-acement/editors seems to enhance codeq in the exact direction of having code as data. It sounds plain lit to have tooling that, given a git repo of a Clojure project, can derive a temporal data representation of it and its dependencies. :the_horns: I wonder, can codeq + cast be used to achieve that? @U051N6TTC @U04V5V0V4

andrewzhurov10:03:59

> basically, macros break the rule "there should be just one way to introduce branching given a code node" - but we already have if and when
> or "there should be just 1 way to wrap a computation node and make it a let-binding"
It bugs me too that macro-time and run-time are two different times - can we just have one? 😄 Or, rather, not have time at all.. (having code as data and one set of abilities on data, with no regard to whether some data is meant to be interpreted as code later on or used by code) i.e., make macro-time abilities available at run-time (removing the need for macro-time / blurring the line between the two)

andrewzhurov11:03:57

> you can always macro expand to bottom out at the clojure special forms
Could we instead bring macro-time abilities to run-time? Perhaps it would make the language simpler?

quoll11:03:40

cast is a bit stale now, but it could be adapted however you want.

😃 1
😏 1
andrewzhurov11:03:06

> How would your conceptual (swap! code :fn/name some-macro) handle stateful changes that are inherent in certain Clojure forms, such as ns (and require / refer / import, etc.) or top-level set! calls or other top-level forms that have side-effects (such as invoking a macro to generate a bunch of def / defn forms)?
Namespaces are a way to do place-based addressing. When we have the codebase as data, it seems possible to derive a place-based-addressable view of it (as namespaced defs, serialized as text files), or a content-addressable view (to host fns on IPFS or smth) if we feel so fancy. Are namespaces crucial for eval? In the end, for eval, they serve to tie code together. Having that code already tied as data in the first place, do we need to introduce place-based addressing just to ditch it later on, once we've resolved the code back to a tied version? Same goes for defs, as they are part of place-based addressing. E.g., a program in Unison is an immutable data structure, with names existing only on the user level, as a way for a user (programmer) to make sense of code by naming/labeling bits of that data structure.

andrewzhurov11:03:11

> (and, yeah, this thread is far beyond #beginners at this point but I'm not entirely sure where it belongs given that it is Clojure-related, sort of)
Yeah, I've chosen the best place to discuss such a lightweight topic. 😄 Is there a better place? @U04V70XH6 Slack's inability to have a tree structure of discussion makes my rambling a scary wall of text ^ too :(

andrewzhurov11:03:10

> The entire idea of non-source file "code" is imo a solution in search of a problem - I was going to link unison as I had read about it before and tinkered with it a little - but there is so much around files that'd need to be reinvented, you'd be reinventing all 18 wheels of a semi (VCS system, peer review system, patch systems, file backup systems, editor systems)
I'm with you on the impracticality of structural editing, as it is not easy and we have a rich text ecosystem that we are used to. But mainly due to the 'not easy' part, as peeps' heads are the hardest thing to change, whereas the 'rich text ecosystem' is stuck solving lots of problems of its own: adding pseudo-structural editing (paredit & co), linting & the need to fight over how to present/style semantics, mitigating conflicts when fighting for places, tools for tying up place-based addressing - so many problems would be gone right away, and there are easy solutions for some others.
Since a place-based textual representation can be derived out of a program-as-an-immutable-data-structure, people could derive their personal views:
Want to structure folders and files in a particular way? Go ahead, derive your personal place-based representation.
Want to format things with tabs, fancy indented? Sure! It's your personal view of the data; shape it to your taste.
Want to name things differently? E.g., to have first as fst? Ok. Your choice. Name stuff however works best for you.
Want (deref stuff) instead of @stuff, and just don't like the macro notation that much? Ok, have it your way; render it macroexpanded.
... You get the idea. 😄
Ditching text editing is not practical, I get it. We are used to it. A solution to that is to have bidirectionality: data <=> files. So one can both edit files to get a data representation, and create a new data representation and see it as files. files => data is the hard bit, but taking a look at cast it seems possible! data => files doesn't seem that frightening. 🙂 So we don't need to ditch the text-based approach and tooling! Although, having the program as data, a whole new range of abilities opens up to us, so we may choose to create some novel tooling as successors to stuff like text formatting and linting.

andrewzhurov12:03:07

> could you dev faster with such a system? probably not
I think you could dev faster with such a system. I base my belief on blue-sky thinking and on the belief of folks from facebook who were doing a similar project, described https://www.facebook.com/notes/kent-beck/prune-a-code-editor-that-is-not-a-text-editor/1012061842160013. What appeals is that it seems to simplify the language, give meta-level power at run-time, eliminate semantic conflicts, eliminate syntactic conflicts, open content-addressable distribution possibilities (and decouple content-based addressing from code), and allow for powerful codebase-wide editing capabilities. Looks like a wonderland.. 😄 where is that rabbit hole?!

andrewzhurov12:03:11

> could you have more reproducible builds? maybe - but there are way simpler ways to achieve a deterministic (re)build - you could target something like nix and ensure you mirror all your own deps and it'd be much less effort
It seems we won't be able to achieve reproducible builds by merely having the Clojure codebase as data, because it won't capture platform dependencies (such as .jars) and environment dependencies (env vars, JDK version), as mentioned https://clojurians.slack.com/archives/C053AK3F9/p1648545629929359?thread_ts=1648500209.863569&cid=C053AK3F9. But perhaps these dependencies can be captured as data and a Nix or Guix description derived out of them, indeed allowing for reproducible computation. @U96DD8U80

razum2um22:03:26

@U04V70XH6
> How would it handle stateful changes that are inherent, such as ns
Think of ns as ${PWD} in a shell - it just changes the user's point of view and allows resolving relative paths. It's a shortcut, and it's resolved at the moment of enter-pressing. Imagine defn supported an explicit ns in the fn-name (even if it just implicitly switched into the specified ns and returned back after the definition) - in that case, if we forget about jvm/class-files/etc, the whole program with multiple ns-es could be written inside 1 single file (or read: repl session). Besides, tbh, even in a standard clojure repl I prefer fully-qualified names (thanks, autocomplete) because they're place-independent.
> (and require/refer/import, etc)
good point. what about the ruby/rails autoloading convention for a fully-qualified name? however, unlike ruby, all fully-qualified names can be statically extracted from a clojure form ahead of time. still, implicitly this is a question about "how to thread a classpath/classloader modification" - yes, unfortunately the classloader has to be mutable, but if we only allow imports of immutable sources (import not by name/version, but by aliased-shasum-say-git-sha), then this modification will at least be deterministic, and imo ok.
> or top-level set! calls or other top-level forms that have side-effects (such as invoking a macro to generate a bunch of def/defn forms)?
shall we consider set! and multi-def macros a bad practice for the purposes of this discussion? again, I understand they're flexible and may be convenient, but tbh, in my practice I only set them once in the repl-session init or in the main.clj file, and consider the values constants.

razum2um22:03:38

@U96DD8U80
> you'd be reinventing all 18 wheels of a semi
> VCS system, peer review system
Unfortunately YES. We need structural diffing, and more… Seriously guys, during your next code review, try to catch yourself in the very, very tiny gap of mind-parsing the diff to get "ok, here we added a branch", "here we extracted a computation into a binding and passed it into two places". Plain text loses this communication: the writer "flattens" his idea into plaintext, and the reader "parses/reconstructs" it. So this is partially not only a rant against files, but a challenge to plaintext as a code medium as well.
> patch systems
Kinda, but look: if you've got functions (I know that in clojure you cannot comp macros; think instead of composing functions using a rewrite-clj-style lib), those functions are more powerful and reusable than knuth-morris-pratt matching text against different texts (because they capture the semantics of a specific token).
> file backup systems
This should die :) For text-based content, seriously, I use VCS 100% and teach non-tech people things like gitbook.
> editor systems
This was my initial thinking more than 5 years ago :) Take a look: https://github.com/razum2um/cljsh
> a solution in search of a problem
Likely true; that's why it didn't evolve anywhere 😢
> could you dev faster with such a system
Yes. Initial guess: this large reinvention of a lot of the usual stuff buys another experience. My slight guess is that it could enable truly powerful refactoring tools, because the runtime is aware of how the code was written. And besides, do you even know how many man-hours are spent by ide-makers only because they have to live with the assumption that after a re-read from the fs a file can be 100% changed and everything should be rebuilt from scratch?

razum2um22:03:47

@U7RJTCH6J > an impedance mismatch with the world of programming tools that expect text files, but there’s a tremendous of amount of power unlocked when you can treat code as data. 💯

razum2um22:03:58

@U0ZQT0K2N
> Why is an array of EAVs an unfortunate data model?
On the high level, code is a graph of functions and values - I see only 2 entities, each with a limited, defined set of attributes (compare with the best EAV usage in e-commerce, storing goods' parameters). On the low level, code is an array of instructions and jumps (or even an array of Turing machine instructions) - EAV doesn't fit that either. Here I'm talking about constructing the graph using the same things which are stored inside it, i.e. if I get a business requirement to change a function (defn f1 [xs] {:sum (reduce + xs)}) to return the average alongside the sum, I shouldn't wrap the body with let and modify text; I should be able to tell:
• take the innermost expression and make a lexical closure from it using the name sum (or gensym :)
◦ so I can reference it more than once, but w/o an actual call
• compute the size of the collection (note: here the runtime already knows this doesn't use any previous fn content and can be automatically parallelized if that makes sense)
◦ see more https://github.com/plumatic/plumbing#graph-the-functional-swiss-army-knife - search for def stats-graph there
• return the same result assoc-ed with sum (referenced from step 1) divided by the count
Note: here we can prove (!) that we didn't change the initial behaviour; following the speculation in Rich's talk: we provided more, which means we're 100% compatible.
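A plain-Clojure before/after of the transformation described above (the structural edit itself left abstract; names hypothetical):

(defn f1 [xs] {:sum (reduce + xs)})

(defn f1' [xs]
  (let [sum (reduce + xs)  ; innermost expression lifted into a named binding
        n   (count xs)]    ; independent of sum, so automatically parallelizable
    {:sum sum :avg (/ sum n)}))

(f1' [1 2 3]) ;; => {:sum 6, :avg 2}
;; f1' returns a superset of f1's result: "providing more" keeps callers compatible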

andrewzhurov07:03:26

> Besides, tbh, even in a standard clojure repl I prefer fully-qualified names (thanks, autocomplete) because they're place-independent
This is my preference as well. But it's a good mention that it is a preference - it's subjective to the person and the use-case. E.g., I actually prefer to express myself in a concise way; I'd like to spend the least amount of effort for the machine to understand what I want to express, be it via unique names (`clojure.core/first`) with autocomplete, short names (`clojure.core/fst`), and/or context-dependent ones (`first` or fst). Other folks may have their own tastes. Perhaps somebody would like to name the value of reduce foldl. The bottom line is, it's a matter of how we present/style semantics. Where semantics is our program, as data/value, and names/labels are a way to view semantics - a concern of the user level. E.g., the way it's done in Unison: a program is a value, and users keep personal dictionaries of how to label parts of that value to make sense of it. I.e., names are not part of a program; they are a means of place-based addressing. Place-based addressing does not affect a program's semantics - people can have their own place-based addressing preferences for the same semantics. E.g., some may prefer /handlers/user/main.clj, some may prefer /user/handlers/main.clj, and some may find it useful to switch between these two presentations depending on the use-case (e.g., when going through a user's stuff we'd like to have the program indexed as /user/handlers/... /user/domain/..., and when going through, say, handlers, we'd prefer /handlers/user/... /handlers/order/...). More on how we can view semantics is mentioned https://clojurians.slack.com/archives/C053AK3F9/p1648555150814799?thread_ts=1648500209.863569&cid=C053AK3F9.

andrewzhurov07:03:58

>> you'd be reinventing all 18 wheels of a semi
>> VCS system, peer review system
> Unfortunately YES.
Ditching the current text-based tooling seems to require a lot of effort to recreate its semantics-based analogues, and switching from a text-based ecosystem to a semantics-based one has a learning barrier that would be hard to overcome in one step. We can solve both of these problems with a gradual migration, letting the semantics-based ecosystem coexist with the text-based one. Then people can stay productive with the familiar text-based ecosystem and gradually learn the neat tricks they can do with the semantics-based one. That would require bidirectionality, semantics <=> text, i.e., data <=> files. This idea is also mentioned https://clojurians.slack.com/archives/C053AK3F9/p1648555150814799?thread_ts=1648500209.863569&cid=C053AK3F9.

andrewzhurov08:03:30

>> you'd be reinventing all 18 wheels of a semi
>> editor systems
> This was my initial thinking more than 5 years ago :)
> Take a look https://github.com/razum2um/cljsh
+1 Clojure is a language that works on data. A Clojure program is data. Shan't we use the same language to modify a program in that language? (We kinda do it with macros, but with limited expressivity; having the whole program as data gives us the full expressive power of Clojure to play with our codebase at any -time.) Bolting a UI on top may be nice, of course. E.g., representing the program as a graph and having arrow keys bound to walk that graph (using, e.g., clojure.zip), with structural ops such as delete current node, etc. The idea of having macro-level abilities at run-time is also mentioned https://clojurians.slack.com/archives/C053AK3F9/p1648544152439699?thread_ts=1648500209.863569&cid=C053AK3F9, https://clojurians.slack.com/archives/C053AK3F9/p1648549289158929?thread_ts=1648500209.863569&cid=C053AK3F9, https://clojurians.slack.com/archives/C053AK3F9/p1648550639522069?thread_ts=1648500209.863569&cid=C053AK3F9 and https://clojurians.slack.com/archives/C053AK3F9/p1648552137265519?thread_ts=1648500209.863569&cid=C053AK3F9. It seems to be a solution to many problems, as I find myself rambling about it around a lot. 😄

andrewzhurov08:03:29

> a solution in search of a problem
The text-based ecosystem is a solution to the problem of 'how do we manage programs'. We are used to it, we take it as dogma, but that does not mean it is a good one; it results in so much accidental complexity. I.e., the problem is out there. A semantic/structural source representation is an alternative to the text-based one.

1
andrewzhurov09:03:43

> how many man-hours spent by ide-makers only because they have to live with the assumption that after a re-read from the fs a file can be 100% changed and everything should be rebuilt from scratch
Leaving aside the need to parse semantics out of text before actually providing anything useful. Just like programs in the Unix ecosystem struggle communicating via text.

andrewzhurov09:03:59

> On low level code is an array of instructions and jumps (or even array of a Turing machine instructions) - EAV doesn’t fit for that A sequence of instructions can be captured as data in an EAV data model.

andrewzhurov09:03:05

> Here I'm talking about constructing a graph using the same things which are stored inside it, i.e. if I get a business requirement to change a function (defn f1 [xs] {:sum (reduce + xs)}) to return the average alongside the sum, I shouldn't wrap the body with let and modify text; I should be able to tell
EAV can serve as the underlying data model to capture a program; doing so we don't lose semantics, and the same operations should be possible on it. E.g., capturing the list (+ 1 2) in an EAV data model will still allow treating it as a list. In addition to that, however, we get the ability to treat it as EAV, having a uniform querying/tx interface whether it is a list, map or set 😉
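For instance, (+ 1 2) could be captured as cons-cell-style EAV triples (entity names hypothetical); the same linked-list trick covers instruction sequences too:

[[:cell-1 :head '+] [:cell-1 :tail :cell-2]
 [:cell-2 :head 1]  [:cell-2 :tail :cell-3]
 [:cell-3 :head 2]  [:cell-3 :tail nil]]
;; walking :head/:tail recovers the list; querying by entity, :head, or
;; :tail gives the uniform EAV interface mentioned above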

andrewzhurov09:03:18

I don't think EAV is the best uniform data model, however, as it is too specific, a bit redundant and has problems with expressivity.

quoll13:03:02

As a graph data person, I’d like to dig in on what you mean by this please

quoll13:03:00

To wit: • I don't follow “too specific” • I assume “bit redundant” refers to some of the extra structure needed for forms that can be expressed concisely in other contexts (like a map in Clojure doesn't need to mention the entity node over and over, and a list can be described with just its elements and not show you the cons cells) • I’m quite lost on what you mean by “problems with expressivity”

andrewzhurov16:03:13

> As a graph data person, I’d like to dig in on what you mean by this please
@U051N6TTC Awesome - your domain knowledge is far superior; I'd love to run it by you. Perhaps you'll bust it. 🙂 Happy to see that end. Atm my head is running on fumes; I'll get back with point replies tomorrow. I had a ramble on why the triples model is no honey https://unisonlanguage.slack.com/archives/C031Q7N7VJ8/p1645345267823129 (EAVs included), in case you'd like to check it meanwhile.

andrewzhurov06:03:23

> eav is a too specific data model
The EAV model is of shape #{[e a v]}; it introduces a set, a vector, and place-based addressing. While capturing less specific models, such as lists, maps and sets, it carries an overhead of wasted specificity. It's like modeling a multiset (bag, non-unique set) via a list, where:

multiset = list - order
Better to have the least specific data structure as the modeling block, accreting specificity on top of it as needed. E.g.:
list = multiset + order
set = multiset + uniqueness constraint
map = set + labels
eav = list + eav shape constraint
How would a sorted map look? :thinking_face: What seems to be the least specific data structure?

andrewzhurov07:03:47

> eav data model is a bit redundant
This refers to e - the entity identifier. (@U051N6TTC correctly assumed:)
• The purpose of entities is to address content.
◦ Creating addressing ahead of time, in the hope that it will fit a use case, does not seem ideal. Better to have ad-hoc addressing.
◦ And in doing so, entities get baked into the content.
If we have content-addressing, we can address any piece of content ad-hoc, as we need, without baking addressing into content. E.g., the CID (of IPLD) as a way to address arbitrary content ad-hoc.

andrewzhurov07:03:17

> eav model has problems with expressivity
Don't get me wrong, the expressivity of triples is good, perhaps the best out there, but it has its limits. One expressivity constraint comes from the [e a v] shape, as you cannot extend a triple with arbitrary metadata. Some systems extend it to [e a v txTime] or [e a v txTime logicalTime], but these are yet other constrained models. Another expressivity constraint comes from the graph model at large: an edge cannot reference an edge. And it's often desired to accrete an edge with metadata, leaving us with the need to reify the edge somehow. To get around this constraint, some folks created even more specific tooling on top, allowing triples to appear as e, a, v - RDF*. And there are many other approaches https://www.ontotext.com/knowledgehub/fundamentals/what-is-rdf-star/; none offers a clear-win solution, alas. This problem arises from the plain inexpressivity of the original data model.

quoll11:03:03

Perhaps I should wait until I've had coffee before making this attempt, but I'm going to push back on some of this...
> EAV model is of shape #{[e a v]}, it introduces: set, vector and uses place-based addressing.
That's serialization. The model is a directed, labeled graph model.
> While capturing less specific models, such as lists, maps, sets it results in overhead of specificity wasted.
Again, I don't see this. It's using a specific data model to represent all other data. Scheme does it with cons cells everywhere, which is like a directed unlabeled graph. I think that the labeled graph approach is much better. Is it the best approach to represent all data? No, not in every circumstance. But it's relatively efficient, it's extremely flexible, and it offers features not available in other approaches (like paths through the graph, for instance).
> How would a sorted map look like? :thinking_face:
Try drawing the data structure on a whiteboard... There. That's it. That's what it would look like. Maybe you drew a tree? Nodes with pointers to children? How many children per node? It can all be represented with a graph. The most annoying things in graphs are "arrays". RDF tried this with "Containers"... generally people don't like this (because of the whole open-world semantics, and the potential for "gaps"), so they typically adopt linked lists instead. But it's all available. It's more awkward with Datomic, given that all your predicates have to be declared ahead of time, but it still works.

quoll11:03:16

There is a bit of redundancy, whereby:

{:a1 "v1"
 :a2 "v2"
 :a3 "v3"}
Is serialized to:
[(node) :a1 "v1"]
[(node) :a2 "v2"]
[(node) :a3 "v3"]
In which the (node) appears 3 times. But you seem to imply that the existence of (node) is an issue. No... that just implies the existence of the thing that is the wrapping {} and labels it. This happens with in-memory maps as well. Now, the repetition of the node value is redundant, but again, that's only in some serialization formats, and some storage formats. (Considering Asami, JSON can serialize without a node value, and in-memory graphs need not repeat the node in memory)

quoll11:03:51

On the final point, the only expressivity issue that I really see is the one of reification. Yes, RDF reification is indeed awful. It really does call out for a better way, though that's more about speed and efficiency than about the data model. But that's why RDF* and SPARQL* were invented. It's also why Asami has a reifying value for every triple (I really do need to get that API released, but it's in the indexes already!)

andrewzhurov08:04:21

>> EAV model is of shape #{[e a v]}; it introduces a set, a vector, and place-based addressing.
> That's serialization. The model is a directed, labeled graph model.
That's a good mention! I got it wrong; indeed the model is a directed labeled graph, which, in turn, can be modeled in many different ways, but those are interchangeable, whereas the original model stays put. Is it fair to put it this way:
• Directed labeled graph model
◦ can be modeled via EAV
▪︎ can be modeled via #{[e a v]}
▪︎ can be modeled via {e {a v}}
◦ can be modeled via SPO
▪︎ can be modeled via RDF/XML
▪︎ can be modeled via Turtle syntax
▪︎ can be modeled via HDT
I jumped on a wrong target here:
> EAV model is of shape #{[e a v]}, it introduces: set, vector and uses place-based addressing.
The point being that the directed labeled graph data model is specific, and this specificity is not needed when modeling less specific data structures - e.g., a directed graph, a graph, a multiset. And, on the contrary, less specific data structures can be used to express additional specificity as needed: a directed labeled graph can be modeled via a directed graph, a directed graph via a graph, a graph via a multiset. So, it seems they can be sorted by specificity (from high to low) as such:
• directed labeled graph
• directed graph
• graph
• multiset
Another point is that a unifying data structure should be the least specific, yet able to express specificity as needed - and that points us to the multiset.

andrewzhurov08:04:31

> I think that the labeled graph approach is much better. Oh, I'm curious! Why so?

andrewzhurov08:04:47

> (about the directed labeled graph)
> Is it the best approach to represent all data? No, not in every circumstance. But it's relatively efficient, it's extremely flexible, and it offers features not available in other approaches (like paths through the graph, for instance).
I used to like the flexibility of the DLG; I started with Datomic and later played around with RDF, and DLGs are the most flexible data structures I've tried so far - I like em. But I'd been putting my hopes on them as the ultimate data structure, and as I found, they do have shortcomings, which saddens me - hence the bitter tone. Nowadays I have an idea and a hope that multisets can be the ultimate data structure, as the least specific and yet as expressive as all the others. Like, any data structure (that I can think of (by no means exhaustive 😄)) can be expressed via multisets:

set = multiset + uniqueness constraint
list = multiset + order
map = multiset + labels
sorted map = multiset + order + labels
edge = multiset of two nodes
directed edge = {:from <from> :to <to>} = map + shape requirement
directed labeled edge = {:from <from> :to <to> :label <label>} = map + shape requirement = multiset + labels + shape requirement

andrewzhurov08:04:35

> It can all be represented with a graph.
I think so. Perhaps, as any data structure can be modeled in any other data structure. But speaking of the best data structure to unify them all, I'd want it to be as barebones as possible, i.e., the basic building block of structure, and to me the DLG doesn't seem to be it; my hopes are on multisets atm.

andrewzhurov08:04:25

> The most annoying things in graphs are "arrays". RDF tried this with "Containers"... generally people don't like this (because of the whole open-world semantics, and the potential for "gaps"), so they typically adopt linked lists instead. But it's all available. It's more awkward with Datomic, given that all your predicates have to be declared ahead of time, but it still works.
It seems to me that an index is a way to represent order via position, whereas a linked list is a way to represent order via relation. So both seem to be models that capture order. I'm fine with either, although the linked list seems less error-prone and captures the semantics more explicitly (since order is about the relation of how 'this thing relates to that thing', for every thing), whereas the need to keep track of places (`idx`s) seems like accidental complexity. Maybe there are better ways to model order, but I don't know of them.

andrewzhurov08:04:37

An interesting thing: if we add a DAG constraint to our DLG, we can have e = hash(#{[a v]})
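Sketching that with Clojure's own hash as a stand-in for a real content hash (helper name hypothetical):

(defn entity-id [attr-value-pairs]
  (hash (set attr-value-pairs)))  ; equal content => equal id

(entity-id [[:name "bob"] [:age 42]])
;; works only under a DAG constraint: a cycle would need an id
;; before its own content could be hashed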

andrewzhurov09:04:38

> But you seem to imply that the existence of (node) is an issue.
Yup
> No... that just implies the existence of the thing that is the wrapping {} and labels it. This happens with in-memory maps as well.
Oh, that's interesting.. I believed that a multiset could be modeled without a node. But now I am not certain. A node seems not to be in the semantics of a multiset, but I can't think of a way to model it without one.. That's a really good point. I'm at a loss..

andrewzhurov09:04:42

> But you seem to imply that the existence of (node) is an issue.
Regarding the node in a DLG: I thought it was redundant because I believed that e could be automatically calculated for an entity as hash(#{[a v]}), as just mentioned https://clojurians.slack.com/archives/C053AK3F9/p1648803337011379?thread_ts=1648500209.863569&cid=C053AK3F9, i.e., putting content-addressing to use. But then I realized that a DLG does not have a DAG constraint by default, and I don't know of a way to content-address content with cycles. So, by default in a DLG, e is not redundant, although with a DAG constraint added we can use the hash of the content in its stead.

andrewzhurov09:04:19

> Yes, RDF reification is indeed awful. It really does call out for a better way, though that's more about speed and efficiency than about the data model. But that's why RDF* and SPARQL* were invented.
This problem seems to arise in the first place due to the DLG model being specific, and it's being solved by bolting more specific tooling on top. Taking an alternative route, can the problem be avoided altogether by having a less specific foundation? E.g., a DLG modeled via a DG has L as a node out of the box, right?

andrewzhurov09:04:50

> It's also why Asami has a reifying value for every triple (I really do need to get that API released, but it's in the indexes already!)
That sounds interesting, although I have a worry about being able to express nested reification (as is possible with RDF*, as shown https://hyp.is/bRgdtLGfEeyULZcVJw6QoQ/www.ontotext.com/knowledgehub/fundamentals/what-is-rdf-star/) with the approach of reifying ahead of time.

andrewzhurov09:04:48

To sum it up, the idea is that modeling program->data is better done via the most primitive structural block for data there is, which seems to be the multiset.

andrewzhurov09:04:14

Awaiting your take on it, @U051N6TTC. 🙂

quoll17:04:12

> Is it fair to put it this way:
> • Directed labeled graph model: can be modeled via EAV (#{[e a v]}, {e {a v}}) or via SPO (RDF/XML, Turtle, HDT)
“Modeled“ is the wrong word here. Perhaps “implemented”, “realized”, or “serialized”. Also, you distinguish between EAV and SPO. Why? So, a directed labeled graph can be realized with #{[e a v]} or {e {a #{v}}}, or (if the identifiers are allocated URIs) serialized into an RDF format (which includes RDF/XML (yuck!), TTL, and HDT).
> So, it seems they can be sorted by specificity (from high to low) as such: directed labeled graph, directed graph, graph, multiset. Another point is that a unifying data structure should be the least specific, yet able to express specificity as needed - and that points us to the multiset.
I disagree. The fact that data can be represented using simpler constructs may be tempting, but this is often inefficient and can create significant cognitive overhead. For instance, it’s possible to express triples using pairs (e.g. cons cells), but no one wants to see that, and it makes processing the data more difficult. However, the more complex the data type, the less efficient simpler types become to express it. Examples there include relational tables full of NULLABLE columns, or RDF (which is typically interpreted as binary predicates) using rdf:type to express a unary predicate. The graph/triples/binary-predicates representation is a good tradeoff between breaking data down into its components and not breaking it so far down as to be inefficient and difficult to work with. This statement addresses a number of your follow-up messages as well.
> Taking an alternative route, can the problem be avoided altogether by having a less specific foundation? E.g., a DLG modeled via a DG has L as a node out of the box, right?
It’s actually the other way around. Just like a triple is inefficient to represent with pairs, a quad (which is what a reified triple is) is inefficient to represent with triples. If you try to go to simpler data structures, you’ll make your problem worse, not better.
> To sum it up, the idea is that modeling program->data is better done via the most primitive structural block for data there is, which seems to be the multiset.
It’s certainly tempting to go all the way down to the most basic possible components, and there can certainly be benefits there. For instance, the triple representation of data provides easy indexing in all dimensions. But it makes sequences awkward, and workarounds need to be created. The pair representation seemed too inefficient to me, but in trying to explain this to someone, I started noticing parts that I hadn’t seen before, and ended up designing a new triple index that was very similar to the one in https://www.researchgate.net/publication/228910462_Efficient_Linked-List_RDF_Indexing_in_Parliament (which came later).

👍 1
andrewzhurov10:04:03

I see two good points you make regarding the inefficiency of representing specific data structures via less specific ones (e.g. DLG via DG (e.g., triples via pairs (e.g., #{[e a v]} via #{[e v]}))):
1. Cognitive inefficiency, as the programmer has a DG interface/tools but actually works with a DLG, and could use a more specific/tailored interface/tooling.
> For instance, it's possible to express triples using pairs (e.g. cons cells), but no one wants to see that, and it makes processing the data more difficult.
Using pairs as the interface for working with a DLG sounds like an experience straight out of a horror movie. 😄 I'm with you on that. However, I've been thinking of a more pleasant interface: we could represent triples as pairs and have a Datalog-like interface on top of the pairs. E.g., represent #{[e a v]} as #{[e v]}. E.g.,

#{[:person :name :bob]}
as
#{[:triplet-person-name-bob :entity-person]
  [:triplet-person-name-bob :attribute-name]
  [:triplet-person-name-bob :value-bob]

  [:entity-person :entity]
  [:entity-person :person]

  [:attribute-name :attribute]
  [:attribute-name :name]

  [:value-bob :value]
  [:value-bob :bob]
  }
A Datomic-like interface for a DG could be similar to that of DLG. E.g., :where
[[?person :name :bob]]

[[?triplet-person-name-bob ?entity-person]
 [?triplet-person-name-bob ?attribute-name]
 [?triplet-person-name-bob ?value-bob]

 [?entity-person :entity]
 [?entity-person ?person]

 [?attribute-name :attribute]
 [?attribute-name :name]

 [?value-bob :value]
 [?value-bob :bob]]
Wow, I'm surprised how alike the interfaces on pairs and triples are. Though now that I think on it, it resembles the story of querying quads! Less surprised now. 😄 However, needing to query a DLG as a DG, even with a Datomic-like interface, still looks like a horror scene. Perhaps it's possible to combine the two interfaces, allowing queries over both triples and pairs (just as we can already query quads). That would let us model stuff in the DLG as before and resort to the DG when L is not needed. On a side note, I wonder what kind of benefits we could get out of a DG model. One neat benefit I can think of: a DG that represents a DLG allows more powerful meta-level play, as now L is a node (becoming a first-class citizen of the graph model). Perhaps another benefit of having DLG as DG is that we could use some cool DG algorithms: (-> DLG dlg->dg cool-DG-algorithm)

andrewzhurov10:04:25

2. Representing specific data structures via less specific ones is performance-wise inefficient, as there can be clever optimizations on the specific data structures, I take it (e.g., HDT). I think that's a good pragmatic point; e.g., HDT is blazing fast for reads, and usually that is of high value for a production system. I imagine a similar tool could be built for a DG, but I would expect it to be less performant due to the more generic data model. An alternative approach could be: having DLG-as-DG data, we could derive its (usual) DLG-as-DLG representation for such use-cases. That seems to allow having the expressivity of DLG-as-DG and the performance of DLG-as-DLG at the same time. I.e., to have the bidirectionality DLG<=>DG, with DLG->DG for expressivity and DG->DLG for access to existing production-quality software.

andrewzhurov11:04:53

Side points:
>> DLG can be modeled via EAV, which can be modeled via #{[e a v]}
> “Modeled“ is the wrong word here. Perhaps “implemented”, “realized”, or “serialized”.
I think what I wanted to express was that there is an abstract concept (DLG) and it can be.. realized as EAV, which can be realized as #{[e a v]}. "Modeled" is not the right word, I agree; "realized" seems to fit. Thanks for pointing it out. 🙂
> Also, you distinguish between EAV and SPO. Why?
I wanted to express how a DLG is realized, and wrongly pointed to a realization (`#{[e a v]}`) as to a model (DLG). I think your point is that we're discussing the DLG data model, and model-wise the realizations are the same. I see it now and agree.

andrewzhurov10:04:55

I have corrected my terminology error of calling a "pair" a "tuple" in https://clojurians.slack.com/archives/C053AK3F9/p1649068683431679?thread_ts=1648500209.863569&cid=C053AK3F9. Now it should make more sense. If I'm not talking bollocks in the first place. 🙂 Am I, @U051N6TTC? 😄 It seems we could have a DLG represented as a DG, allowing for better meta-level expressivity, and yet keep its DLG-as-DLG representation for when we need compatibility with existing tools and performance.
