This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2024-01-31
Channels
- # aleph (24)
- # announcements (2)
- # aws (1)
- # babashka (2)
- # beginners (46)
- # calva (15)
- # chlorine-clover (1)
- # clojure-europe (27)
- # clojure-nl (3)
- # clojure-norway (13)
- # clojure-uk (7)
- # clojurescript (16)
- # datomic (29)
- # emacs (4)
- # fulcro (16)
- # hugsql (6)
- # hyperfiddle (65)
- # lsp (9)
- # malli (3)
- # off-topic (29)
- # pedestal (1)
- # releases (1)
- # shadow-cljs (52)
- # specter (5)
- # xtdb (1)
XTDB is amazing and uses, under the hood, a tool called Nippy to serialize native Clojure datatypes, giving you access to vectors, sets, all your favorites, as things you can write to the DB in the value part of key, value.
Datomic does not allow this, and because of this constraint the rendering time and retrieval time seem far superior, which is very interesting. It comes with some save-to-db trade-offs (Datomic is less convenient in some ways): you must figure out how to represent things in Datomic. For attributes/properties that can have many values you can simply have multiple transactions and retrieve all of the things ever transacted. It feeds into how you think about designing the data and the lookup.
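A hedged illustration of the contrast described above, with made-up attribute names and assuming an XTDB node (node, 1.x API) and a Datomic peer connection (conn) already exist:

(require '[xtdb.api :as xt])

;; XTDB: the document value holds native Clojure collections directly (Nippy-serialized)
(xt/submit-tx node
  [[::xt/put {:xt/id :song-1
              :song/title "Blue"
              :song/tags #{"jazz" "ballad"}              ; a set, stored as-is
              :song/sections [:intro :verse :chorus]}]]) ; a vector, stored as-is

(require '[datomic.api :as d])

;; Datomic: the multi-valued attribute is modeled with :db.cardinality/many,
;; so each tag becomes its own datom (schema must be transacted first)
@(d/transact conn [{:db/ident       :song/title
                    :db/valueType   :db.type/string
                    :db/cardinality :db.cardinality/one}
                   {:db/ident       :song/tags
                    :db/valueType   :db.type/string
                    :db/cardinality :db.cardinality/many}])
@(d/transact conn [{:song/title "Blue"
                    :song/tags  ["jazz" "ballad"]}])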
Datomic is, in my opinion, the perfect solution for Electric applications. I am still getting everything to work just right, but the lookup speed is significantly better which will make a huge difference in live applications.
However, for the development phase, XTDB is amazing. Absolutely a joy to work with, and it lets you fly schema-less. That is only partially true: when you write to the DB, you must still conform to some degree of "everything you are looking for is there", otherwise you will get entries that will not reveal themselves in queries that constrain on the missing attributes. OK, no problem.
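A hedged sketch of that caveat, continuing the made-up :song/* attributes and node from the sketch above: a document that lacks an attribute simply never matches :where clauses on it, and no error is raised.

(xt/submit-tx node [[::xt/put {:xt/id :song-2 :song/title "Untitled"}]]) ; no :song/tags
(xt/sync node)   ; wait for indexing
(xt/q (xt/db node)
      '{:find  [?title]
        :where [[?e :song/title ?title]
                [?e :song/tags  ?tag]]})
;; :song-2 silently drops out of the result set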
So I would recommend: if your app is very ambitious and you don't know the exact schema you need, consider using XTDB to start, and develop it to a strong, stable level.
I spent some time writing all my "schema" over to a notebook (my sketchbook) and I found that we only have 7 data types across 2 different somewhat interwoven music apps.
while both have datalog, it isn’t a trivial change to go from XTDB to Datomic (or vice versa)
Not knowing this when I started, using XTDB made great logical sense. Now, knowing the shapes rather clearly, codifying them into something more robust... Tatut brings up a great point.
But, Tatut, you can, in theory, massage anything from XTDB shapes to a Datomic shape.
however, perhaps after a certain point of production being alive and ongoing, it would not be a fun transition.
so I think early on it is wise to consider what one needs. But yeah, very good point, it's not as easy as drop in and go; it's a little more nuanced because Datomic limits what goes in, more sacrosanct with more performance guarantees, while XTDB is a delight to work with and in as well.
I don’t think I’ve ever seen an actual “running in production” system change databases… there’s just so many devils in the details
if you use the lucene index for full text search on XTDB, that has no equivalent afaict on Datomic… you would need something external for that
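A hedged sketch of what that looks like on the XTDB side, assuming the xtdb-lucene module is configured on the node; the attribute and search string are made up, and the exact relation-binding shape may vary by version:

(xt/q (xt/db node)
      '{:find  [?e ?v]
        :where [[(text-search :song/title "blue") [[?e ?v]]]]})
;; returns entity ids plus the matched values from the Lucene index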
Is there a way to get Datomic to "react to changes" in Electric? Not sure how to turn it into a live ref and not just an instance in time for the db.
I noticed that the shared snippets (see here https://clojurians.slack.com/archives/CL85MBPEF/p1685970532152309?thread_ts=1684166929.251079&cid=CL85MBPEF) of how to get a flow of Datomic db changes can't be used with the reload workflow (tools-namespace refresh etc.). If you use them out of the box, you will notice that some changes are ignored and then 2 to 3 transactions become visible at once. This is because each item in Datomic's transaction queue can only be consumed once. After the first namespace refresh/reload there are 2 consumers (the first one is dangling). To overcome this you must properly delete the queue consumer before refresh (add it to the stop command in mount, component, or whatever state-management library you use). It took me quite some time to figure this out, so I put it here in the hope it helps others :)
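A minimal sketch of that stop-command approach, assuming a mount-managed system and the Datomic peer API; conn is assumed to be your Datomic connection and handle-tx! stands in for whatever feeds your Electric flow:

(require '[datomic.api :as d]
         '[mount.core :refer [defstate]])

(declare conn)                             ; assumed: your Datomic peer connection

(defn handle-tx! [tx-report]               ; illustrative handler
  (prn (:tx-data tx-report)))

(defstate tx-listener
  :start (let [queue    (d/tx-report-queue conn)
               running? (atom true)]
           (future
             (while @running?
               ;; poll with a timeout so the loop can notice shutdown
               (when-let [tx-report (.poll ^java.util.concurrent.BlockingQueue queue
                                           100 java.util.concurrent.TimeUnit/MILLISECONDS)]
                 (handle-tx! tx-report))))
           running?)
  :stop (do
          ;; the crucial part: tear the consumer down before a tools-namespace refresh,
          ;; otherwise the dangling consumer keeps stealing items from the queue
          (reset! tx-listener false)
          (d/remove-tx-report-queue conn)))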
Fwiw, Electric v3 + the newest missionary m/signal + defonce solves this problem. The solution (m/signal + defonce) doesn't work in Electric today because Electric v2 still uses the legacy m/reactor and m/signal! (with a !).
Let me elaborate: the snippet you linked to actually doesn't contain m/signal! at all, because m/signal! is illegal at global scope (m/signal! is only valid inside the scope of an m/reactor). Missionary's new m/signal (no bang) resolves this (also removing the m/reactor API entirely), but Electric isn't upgraded to the new missionary yet; that's part of Electric v3.
m/signal + defonce is the combination that guarantees a stable tx queue singleton instance
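A heavily hedged sketch of what that combination could look like (missionary's newer m/signal semantics may differ by version; !latest-db is an illustrative name for an atom fed by a single tx-queue consumer):

(require '[missionary.core :as m])

(defonce !latest-db (atom nil))   ; updated by the one-and-only tx-report-queue consumer

(defonce >db
  ;; defonce keeps the same instance across reloads; m/signal shares one
  ;; underlying subscription among all consumers of the flow
  (m/signal (m/watch !latest-db)))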
Thanks for your answer. I think that it is not possible with v3 either. I am not talking about hot code reloading, but tools-namespace refresh, which destroys all namespaces. So even defonce vars will be recreated after refresh.
Did you consider shutting down the Electric app on tools-namespace refresh?
IIUC it should be sufficient after v3 to shut down the app on refresh (which will dispose all managed resources, including the global tx listener)
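A hedged sketch of that suggestion in the usual reloaded-workflow shape; start-app! and stop-app! are placeholders for your real Electric entrypoint:

(ns user
  (:require [clojure.tools.namespace.repl :as tn]))

(defn start-app! [] nil)   ; placeholder: start the Electric app here
(defn stop-app!  [] nil)   ; placeholder: dispose the Electric app here

(defn reset []
  (stop-app!)                           ; disposes managed resources, incl. the tx listener
  (tn/refresh :after 'user/start-app!)) ; start again once namespaces are reloaded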
What’s the news? electric-starter isn’t a deprecated repo anymore?
it's been rebuilt
Okay. What happened to fiddle being default? I was migrating and also building a new project on fiddle so curious what the status is.
electric-fiddle is still a great place to start, we continue to move all our demos into it
if you're just fiddling around, fiddle is probably what you want. If you're making a real application, electric-starter-app has a simpler entrypoint for you to copy paste (though we still insist that you need to understand every line of your entrypoint, copy pasting is not a real solution)
Cool. Yea I’m quite familiar with electric now so not starting out. I’ll stick to the electric-starter. I thought fiddle was the primary/only future repo for starting and tracking changes. I hadn’t dug in fully but also felt the entry points were simpler in starter. Thanks for the update.
I deployed the xtdb-demo fiddle as is. After a while it crashes (might be my environment or some XTDB thing). I haven’t tried to replicate the failure locally yet. Where do I look for logs (with the fiddle setup)? What are the current memory requirements? My droplet has 1 GB.
if you're ssh'ing into a linux box on digital ocean, i think capturing logs is not automatic, you have to write the bash code to capture logs
It does sound like an OOM
try starting at 8gb and working your way down
Ok will try that first. Then… Biff has some logging setup I should be able to reuse.
java ... >> logs.txt 2>&1
something like this
Wow, Droplet price goes up $40/mo for move to 8gb.
maybe that’s for a whole package though. will look for just memory.
No, couldn’t find other options (to only resize memory)
I bumped it to 2 GB and will work up. Is it your general opinion that the uberjar way is worth it for the simplicity? Maybe on a cloud deploy (http://fly.io or DO App Platform) it would be cheaper for 8 GB if I’m not using it continuously (internal app)
to be clear, electric does not use much memory, you are using an in process database
i don’t know how much memory xtdb needs, but i do think $40 rounds to $0
certainly is negligible compared to the debugging cost
True. Just orienting towards a reliable, minimal deploy experience I can use in other projects as well. I’m understanding XTDB is adding significant extra (memory, complexity) here.
:face_with_peeking_eye: I believe my crashing was just my ssh session terminating, which was killing the process. Didn’t know about that. I just used tmux now to prevent this.
how much work would it take to get specter to work inside of an e/def? guessing a lot because of the macros and precompilation
i don’t know anything about specter but i will point out that core.match works (or at least worked before the IC changeset; i don’t know if we’ve re-validated it yet)
not a blocking issue for me, but just fyi I'm getting the following error trying to transform a watch:
(ns app.test
  (:require [hyperfiddle.electric :as e]
            [com.rpl.specter :as sp :refer-macros [select transform]]))
(def !my-list (atom [1 2 3 4 5 6]))
(e/def my-list (sp/transform [sp/ALL] inc (e/watch !my-list)))
{:error
{:cause "Cannot resolve 'var', maybe it's defined only on the client or needs to be referred in :require-macros."
:data {:form var, :in my-list, :context "server"}
:location "hyperfiddle.electric.impl.lang$fail_BANG_ at lang.clj:148" ...}}
while you’re there can you please capture the macroexpansion and paste here
the specter macroexpansion, not the electric one
is this what you're looking for?
(macroexpand `(sp/transform [sp/ALL] inc (e/watch !my-list)))
=> (com.rpl.specter.impl/compiled-transform* (com.rpl.specter/path [com.rpl.specter/ALL]) clojure.core/inc (hyperfiddle.electric/watch app.test3/!my-list))
ha i guess, we’ll have to look more closely then to understand what’s wrong
you can fall back to calling into a wrapper fn from Electric, where the wrapper uses specter
There are versions of specter fns that have different pre-compilation expectations, but they are probably still macro-heavy.
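A hedged sketch of the wrapper-fn workaround mentioned above: keep the Specter macro call inside a plain Clojure(Script) function so the Electric compiler only ever sees an ordinary function call. Names are illustrative.

(ns app.wrappers
  (:require [com.rpl.specter :as sp])) ; in ClojureScript, add :include-macros true

(defn inc-all [xs]
  ;; the sp/transform macroexpansion happens here, outside any Electric code
  (sp/transform [sp/ALL] inc xs))

;; then, in the Electric namespace:
;; (e/def my-list (app.wrappers/inc-all (e/watch !my-list)))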