#fulcro
2022-12-17
Andrey11:12:37

Hello. I decided to try Fulcro, starting with a template and a video. I noticed that the template uses Pathom 2.4. As far as I understand, Pathom 3 has made great progress in performance, but how do I connect it to the template, and will it work? Perhaps there is already a template with Pathom 3 on board.

tony.kay15:12:56

Look for a pathom3 branch on the fulcro-rad-template. I think that's where I did a sample. There's not much to it really.

👍 1
Andrey15:12:45

Thank you, I found it in the pathom3 branch.

Jakub Holý (HolyJak)11:12:34

Hi! If you are new to Fulcro you might also want to check out https://fulcro-community.github.io/guides/tutorial-minimalist-fulcro/ Feedback welcome!

Andrey11:12:17

Yes, thank you. I am actively studying it. I like the ideas, but it has turned out to be not so easy to get it running and use it in production.

Andrey11:12:35

I want to run fulcro-rad with Datomic, Pathom 3, auth, and an API for external interaction with mobile apps, but I still can’t figure out how to put it all together. I lack experience, and I did not find examples or videos implementing these pieces. What exists now explains the essence of Fulcro, but the essence is easier to understand from working examples (this is me talking to myself) than from theory.

Jakub Holý (HolyJak)12:12:21

Well, https://github.com/fulcrologic/fulcro-template and https://github.com/fulcrologic/fulcro-rad-demo/ are working examples, no? Plus https://fulcro-community.github.io/main/awesome-fulcro/README.html links to a few more real apps. For RAD + Datomic + Pathom 3 you can use https://github.com/fulcrologic/fulcro-rad-demo/tree/pathom3 . Authorization is app-specific, so you need to make your own. Typically, I guess, your resolvers would check whether the user is authorized to see the data and not return it if not. Not sure what you mean by “API for external interaction with mobile apps”. Mobile apps can also talk to Pathom. Or do you want a separate REST API? That seems somewhat orthogonal to Fulcro itself…
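A minimal sketch of that idea as a Pathom 3 resolver, assuming an authentication middleware has already placed a :current-user map into the parsing env; the attribute names and the fetch-orders helper are hypothetical:

(ns example.resolvers
  (:require
    [com.wsscode.pathom3.connect.operation :as pco]))

(defn fetch-orders
  "Stub standing in for real data access (e.g. a Datomic query)."
  [_env _account-id]
  [])

;; Inputs are inferred from the destructuring of the second argument,
;; so this resolver requires :account/id and :account/owner-id.
(pco/defresolver account-orders
  [{:keys [current-user] :as env} {:account/keys [id owner-id]}]
  {::pco/output [{:account/orders [:order/id :order/total]}]}
  {:account/orders
   (if (= (:user/id current-user) owner-id)
     (fetch-orders env id)
     ;; Not authorized: return nothing rather than leak the data.
     [])})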

tony.kay18:12:03

NOTES:
• The React Native stuff is out of date. I was making apps a couple of years ago, and the experience was “ok” with React Native, but not fabulous. Hopefully it has improved. I suspect the template for native is broken.
• The auth stuff is really meant for demonstration purposes only. RAD cannot possibly solve that problem for you in real production apps, so you’ll need to write your own.
• Do you have experience with Datomic? In a production setting it is critical that you understand it so that you can properly optimize it for large numbers of users. Optimizing it is very different from optimizing a standard SQL database.
• Launching in production: again, that is something Fulcro itself isn’t involved in, but the template, as-is, should build easily to an uberjar that can just be run via java -jar …. Of course you’ll need to set up the front end of that (load balancer, SSL, etc.). If you’re using Datomic Cloud you can put that into their Ion framework, but to be honest I found that to be a poor fit for our production needs, and ended up revamping our infra (2 years in) to not do that. We run Datomic Cloud as a networked database. This has the advantages that we are not tied to the dependencies and versions of the Datomic Cloud database server, we can tune the performance of our app without interference from the query engine, we can do blue-green deployments, and many others. The downsides are that, since we don’t deploy any code to Datomic, we cannot use transaction functions (we use CAS instead; see the sketch below), we had to hand-tune index-pull to work better by wrapping it, and autoscaling is a bit more complicated. YMMV.
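For reference, a minimal sketch of the CAS approach using Datomic’s built-in :db/cas transaction function via the client API; conn and the entity/attribute names are assumptions for illustration:

(require '[datomic.client.api :as d])

;; Compare-and-swap: the transaction only succeeds if :account/balance
;; is currently 100, so no custom transaction functions have to be
;; deployed to the Datomic Cloud nodes.
(d/transact conn
            {:tx-data [[:db/cas [:account/id "a-123"] :account/balance 100 90]]})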

❤️ 2
tony.kay18:12:08

See tools.build for how to build an uberjar. My build.clj in production is:

(ns build
  (:require
    [clojure.tools.build.api :as b]))

(def class-dir "target/classes")
(def basis (b/create-basis {:project "deps.edn"}))
(def uber-file "target/app.jar")

(defn clean [_]
  (b/delete {:path "target"}))

(defn uber
  "Build a standalone uberjar. Pass :main to override the default entry namespace."
  [{:keys [main]}]
  (clean nil)
  ;; Copy server sources and resources, leaving out CLJS build artifacts.
  (b/copy-dir {:src-dirs   ["src/main" "resources"]
               :ignores    [#"^goog/.*\.(js|html|gif|css)$"
                            #"externs.zip"
                            #"brepl_client.js"
                            #".*\.cljs$"]
               :target-dir class-dir})
  ;; AOT-compile the server code.
  (b/compile-clj {:basis     basis
                  :src-dirs  ["src/main"]
                  :class-dir class-dir})
  ;; Assemble the jar, resolving duplicate files pulled in by dependencies.
  (b/uber {:class-dir         class-dir
           :uber-file         uber-file
           :main              (or main 'com.company.your_server_main)
           :conflict-handlers {"^data_readers.clj[cs]?$"                                    :data-readers
                               "^META-INF/services/.*"                                      :append
                               "(?i)^(META-INF/)?(COPYRIGHT|NOTICE|LICENSE)(\\.(txt|md))?$" :append-dedupe
                               "^.*/LICENSE(\\.(txt|md))?$"                                 :ignore
                               :default                                                     :ignore}
           :basis             basis}))

Jakub Holý (HolyJak)18:12:28

I was at one point considering running an app as a Datomic Ion so thanks for your insights! And I would be very interested in more details about the ".. and many others" 🙏

tony.kay18:12:44

Ions are great if you’re going to essentially use them as a lambda API. In Fulcro we don’t, which means putting a Ring app inside of the Datomic Cloud servers.

tony.kay18:12:49

This complicates everything:
• Slower deployment times for autoscaling Datomic under high demand. Our software would take 5 mins to compile and start, in addition to Cloud itself. Now I pre-compile our code (not possible in Cloud) and the Datomic nodes start much faster, as does our software. Autoscaling is much more responsive.
• Rolling deployment (the only mode for Datomic Cloud) means that if you’re under heavy load and deploy a new version, you have to wait a very long time (10 minutes per node to deploy… so with 6 nodes running, it was an hour), and during that time you have two versions of your app on servers. We were seeing times hitting an hour or more. If ANY node fails (sometimes a glitch botches one), then they ALL roll back. If rollback fails, OMG it’s a mess. Meaning you might spend an entire day trying to roll out a new release, all while giving customers a negative experience.
• Our new infra uses blue-green deployment for our code.
  ◦ All-at-once deployment to a new set of nodes.
  ◦ Uses the load balancer to switch traffic from the old set to the new set.
  ◦ If there is a problem with the new version (sometimes happens: you released garbage), a single-click instant rollback has the load balancers redirect back to the old servers.
  ◦ Prior-version servers can stay up for up to a day, if you want.
• Startup logic is a mess because you have no good hook for starting your server components. They recommend using memoize on stuff. It’s a mess. I had to write a Ring middleware handler that would block requests while calling mount/start on the first request (see the sketch below).
• Datomic Cloud hard-pins all the deps it uses. If you use different versions (or NEED different versions) of those, then you’re stuck doing workarounds or vendorizing code.
• Crashes were hard to diagnose. Was it our code or Datomic?
• Out-of-memory errors: nodes would become zombies. So when we made a coding mistake that caused OOM, it killed the database node, but not really… it just became a zombie. Having control of the JVM ourselves (now), we can set nodes to exit on OOM and auto-restart, making for better 24/7 uptime without human intervention.
• Not possible to do load balancer sticky sessions with Ions.
• You pay a Datomic “tax” for all of your application code’s CPU use. A Datomic Cloud node costs about 1.5 to 2x the EC2 price, so when your code is running on that node, you’re essentially paying more for that resource than you would running it on a raw EC2 instance.
• When you run your code on stock instances you can:
  ◦ Pay less for the equivalent instance.
  ◦ Choose WAY more instance types.
  ◦ Provision things in ways that better meet your application needs (e.g. we go with large-memory instances with fewer CPUs and run more instances to improve reliability, using load balancer sticky sessions; if one app node crashes, it only affects a fraction of our users).
• I had to hand-patch their AWS scripts in the Ion deployment jar to add in my own stuff to ensure our software started cleanly, or deployments would “succeed” that were actually bad. This is a weakness of not having a startup hook built into the Datomic Cloud system.
• Upgrading Datomic Cloud meant our deps would change, as would possibly critical parts of our infra (Datomic Cloud controls a bunch of resources like the load balancers). This meant that certain upgrades could break our entire system.
There were more, but those are the ones I remember off the top of my head 😄
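A rough sketch of that stopgap middleware, assuming mount manages the server components; the namespace and function names are made up, and in the Ion setting it would wrap the Ring handler given to the ion entry point:

(ns example.middleware
  (:require [mount.core :as mount]))

(defn wrap-ensure-started
  "Blocks incoming requests until the mount components have started.
  The first request forces the delay (running mount/start exactly once);
  concurrent requests block until it has finished."
  [handler]
  (let [started (delay (mount/start))]
    (fn [request]
      @started
      (handler request))))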

👍 2
Jakub Holý (HolyJak)18:12:31

Many thanks! And, BTW, Merry Christmas!

tony.kay18:12:45

Happy Newton’s Birthday!

😻 1
tony.kay19:12:17

If you’re using Datomic Cloud, then look into using the AWS CDK to set up your own infra elements. You can’t automate the setup of blue-green deployment, but you can do most everything else. My new infra using the CDK can be tweaked in TypeScript in just a few seconds and updated via cdk deploy. It’s quite nice. I have CodePipeline building and deploying the software on a CodeCommit commit. So git push release main is all I have to do to release.

❤️ 1
Andrey21:12:16

Thanks, very interesting. I’ve mostly worked with Postgres and haven’t worked with Datomic yet, so it was helpful to see the other side of the coin. Have you had any production experience with XTDB? As far as I know, its philosophy is similar to Datomic’s, but XTDB is free. It is not clear, though, how it will behave in production. What is your opinion on the XTDB + Fulcro combination?

tony.kay22:12:39

XTDB had significantly slower query speed last time I checked. If you’re making a production app for a real business, I’d go with Datomic or PostgreSQL. Datomic On-Prem might be the more affordable route, but the scalability isn’t as easy.

👍 1
tony.kay22:12:58

The query scaling of Datomic is awesome (reads are never blocked by writes), as are the auditing and the ability to undo accidental changes by looking at the reified transaction. But creating indexes (tuples) is a permanent operation, and learning to use index-range and index-pull is critical once your database gets big (see the sketch below). PostgreSQL is easier to tune (create/delete indexes on demand), has better cross-region backup support, and has faster transaction speed, but writes can interfere with reads, and the reasoning model behind transaction isolation is much more complicated. Datomic Cloud has some documented limitations that you should take very seriously:
• Don’t store large values (e.g. long strings). Strings like email addresses are fine, but bigger things should go into S3 or something and only the URL goes in Datomic.
• Don’t expect to be able to write at “logging” speeds. E.g. the write speeds are good for user-driven data, but not for saving logging info.
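For anyone curious, a minimal sketch of index-pull via the Datomic client API; db is assumed to be a database value, and the attributes in the selector and :start are hypothetical (the :start attribute must be indexed):

(require '[datomic.client.api :as d])

;; Walk the AVET index starting at a date, pulling a small selector per
;; datom, instead of running a full Datalog query over a large database.
(take 100
      (d/index-pull db
                    {:index    :avet
                     :selector [:order/id :order/created-at]
                     :start    [:order/created-at #inst "2022-01-01"]}))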

👍 2