This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2018-09-10
Channels
- # beginners (151)
- # cider (41)
- # cljdoc (7)
- # cljs-dev (6)
- # clojure (92)
- # clojure-dev (5)
- # clojure-italy (26)
- # clojure-losangeles (1)
- # clojure-nl (10)
- # clojure-russia (3)
- # clojure-spec (23)
- # clojure-uk (82)
- # clojurescript (56)
- # clojutre (1)
- # core-async (3)
- # cursive (15)
- # datomic (26)
- # editors (3)
- # emacs (3)
- # events (2)
- # figwheel-main (192)
- # fulcro (66)
- # leiningen (12)
- # mount (1)
- # off-topic (131)
- # portkey (6)
- # re-frame (38)
- # reagent (10)
- # reitit (7)
- # ring-swagger (55)
- # shadow-cljs (21)
- # spacemacs (11)
- # tools-deps (48)
What does using fulcro look like in these setups: 1) fulcro FE with a third-party BE, 2) fulcro FE+BE with a third-party FE for the API? Does it make sense then?
What I am trying to ask is whether fulcro is designed so that the FE+BE should only be used together. If I didn't use both, would I lose the benefits?
@kwladyka If you use the fulcro backend out of the box it makes things a lot simpler. If it's a different type of backend then you have to write the networking layer. For example, if it's a RESTful API: http://book.fulcrologic.com/#RESTAPI
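(As a rough sketch of what "write the networking layer" means: Fulcro 2-era clients let you supply a custom remote. The protocol shape below is from memory — check `fulcro.client.network` for the real one — and `edn->rest-request` plus its behavior are made-up illustrations.)

```clojure
;; Hypothetical client-side remote that talks to a REST backend instead
;; of the stock fulcro server. Fulcro 2-era protocol, from memory.
(ns example.rest-remote
  (:require [fulcro.client.network :as net]))

(defn edn->rest-request
  "Made-up translator: turn an EDN query/mutation into REST call(s),
  returning a js/Promise of data shaped to match the original query."
  [edn]
  (js/Promise.resolve {}))

(defrecord RestRemote []
  net/FulcroNetwork
  (send [this edn ok error]
    ;; Translate the query, then hand the reshaped response back to
    ;; fulcro via the `ok` callback (or report failure via `error`).
    (-> (edn->rest-request edn) (.then ok) (.catch error)))
  (start [this] this))
```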
if your backend is in GraphQL, it's worth looking at pathom. Datomic pull syntax maps pretty well to GraphQL. https://wilkerlucio.github.io/pathom/DevelopersGuide.html#_graphql_integration_todo
@kwladyka For the graphql backend there are two fulcro examples in the pathom repo https://github.com/wilkerlucio/pathom/tree/master/workspaces/src/com/wsscode/pathom/workspaces/graphql
So I could use a fulcro FE with a third-party GraphQL backend (not a fulcro BE) and have almost the same advantages as if I used a fulcro BE?
Yep. Pathom provides the networking glue for converting GraphQL <-> om-next/Datomic pull syntax.
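(For flavor, here is roughly how the two query styles line up — a hand-written illustration, not Pathom's actual translation output; the attribute names are invented:)

```clojure
;; A pull-syntax (EQL) query: a vector of attributes, joins as maps.
(def eql-query
  [{:app/current-user
    [:user/name
     {:user/friends [:user/name :user/email]}]}])

;; The GraphQL query it roughly corresponds to:
(def graphql-query
  "{ currentUser { name friends { name email } } }")
```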
@kwladyka they have a good feature match, but there are still differences. To me the biggest one is the mindset: GraphQL focuses primarily on types, so you are always thinking about the container names (Person, Company, Address...), while Fulcro and Pathom have a more attribute-oriented design. IME this makes a big difference as your system grows: types tend to get bloated and confusing to understand, while relying directly on attributes keeps your dependencies much simpler and easier to compose over time
I never tried re-frame + GraphQL, but I'm currently working on porting a big re-frame app (50k+ LOC in cljs) to fulcro. The main issue with the current re-frame app is that it got into a spaghetti state, with so much code dedicated to coordinating data fetching. I don't blame re-frame though; I think it's a problem with a REST model applied to an SPA, it just doesn't scale...
but then, if you start thinking about graph APIs, they give you this feature that when you are at a node you can "reach" for other things, and this is what takes most of the complexity out
then you start to want your components to express their data needs, and then you will need to normalize your db... at some point, if you are not using Fulcro, I think you will just be reinventing it 🙂
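("Normalize your db" here means roughly the shape Fulcro keeps client state in: entities live in tables keyed by ident, and joins become idents pointing into those tables. A hand-written illustration with invented attribute names:)

```clojure
;; The UI tree you fetch from the server...
(def tree
  {:person/id 1 :person/name "Ada"
   :person/friends [{:person/id 2 :person/name "Grace"}]})

;; ...is stored normalized: one table per entity type, joins as idents,
;; so every entity exists in exactly one place.
(def normalized-db
  {:person/by-id {1 {:person/id 1 :person/name "Ada"
                     :person/friends [[:person/by-id 2]]}
                  2 {:person/id 2 :person/name "Grace"}}
   :root/current-person [:person/by-id 1]})
```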
so that's the thing to me about re-frame: it doesn't have the graph mindset built in. You can surely take advantage of a GraphQL API to reduce the data-fetching complexity, but you keep hitting those issues unless you change a lot about how it works
What about https://github.com/oliyh/re-graph ?
never tried it, but I don't see queries co-located with components; it seems to facilitate running the queries, but not managing the data after you get it (and how to update it / keep it consistent)
of course at some point other solutions will come on top of re-frame, but I'm not sure how good they will be, considering they are not in the core of how re-frame works
unless re-frame changes in a big way 🙂
@wilkerlucio Think it was mentioned but haven't tried it. Technically fulcro has a built-in Java server backend, but if I wanted I could just write the backend with pathom as a Node.js service, right?
I guess there is no other way to form my own opinion than to write something in fulcro. But it is a huge +1 to a long queue of things to learn 😕
@claudiu yes, totally. I have projects in prod running with this setup, and with the new parallel parser (which works on node too) this can be a pretty nice setup. At nubank we use just pathom running on the JVM, and we're currently trying out the new parallel parser with some peers
yeah, it can be time consuming, but building something with the tech is the best way IMO to understand it
I try to avoid "big" frameworks in Clojure; I prefer to choose all the small parts myself instead of being forced to use what somebody else chose. This is my biggest concern, but fulcro could be good about it.
and you can consider fulcro just the FE part; I always used it like that. Although it provides a lot, it lets you pick the pieces you want. It uses a query language based on the Datomic pull syntax, but you can consider that a separate thing (just as we consider Relay and Apollo separate things, although they both use GraphQL)
think fulcro is "big" in the "batteries included" sense; quite surprising how much flexibility you have. A lot of things are just optional nice-to-haves if you want to use the default stuff.
But it does help out a lot, e.g.: normalized db, co-located initial-app-state, load markers out of the box, global error handling, a networking layer that batches up loads & makes sure mutations run before loads.
really nice that this is handled by fulcro; I don't have to invent the stuff that does this.
@kwladyka It was an unintended consequence that Fulcro ended up being seen as a “framework”. I originally had the different bits split out into smaller “choose your own bits” libraries, and it was a pain constantly doing releases of a bunch of different bits that then all had to be tested…it was a lot of time overhead. I combined the bits into a single library for my own convenience and time constraints. The cljs side with dead-code elimination means “if you don’t require it, you generally don’t get it”, so there was no real advantage (other than appearances) to splitting those bits apart (though I regret adding a few that I didn’t want to maintain ;p). The server-side stuff that you “need” amounts to a very small amount of clj code (basically the API handler hook in server.clj). The rest of the clojure code is completely optional (e.g. the config stuff). I’ve even made the Clj dependencies “provided” with dynamic loading. If you don’t use Fulcro’s server stuff, you don’t even get the JVM server deps in the resulting uberjar.
At some point I may split them back apart for reasons of perception. As far as networking goes, you just need to be able to talk EDN (I use transit+json). Nothing really special or complicated.
and even that constraint can be relaxed if you put a networking component on the client side that can convert the EDN to/from your desired endpoint protocol.
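(The "just talk EDN" part really is as plain as printing and reading data. A minimal sketch using only `clojure.edn` from the standard library — transit+json, mentioned above, is the optimized equivalent of the same idea; the query in it is an invented example:)

```clojure
(ns example.edn-wire
  (:require [clojure.edn :as edn]))

;; Client serializes a request (a mutation plus a read) as a string...
(def request-string
  (pr-str '[(app/do-thing {:id 1}) :app/stats]))

;; ...server reads it back as plain data, dispatches on it, and writes
;; the response the same way. No special wire format needed.
(def parsed (edn/read-string request-string))
```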
Hello, I am on Fulcro 2.2 and currently having some performance issues. I am getting long (~90s) periods of 0 fps after merging data from the backend. The biggest section where I have seen this happen is in the reconciler code (according to the Chrome profiler). Does the reconciler actually do the rendering? The Chrome profiler claims it’s all “Scripting”. Does anyone have any ideas on things I could try to get more responsiveness?
just to understand better: you get some big blob of data, and after that any state updates are super slow, is that right?
roughly yeah, but it’s more that it’s unresponsive while merging (reconciling) the specific blob of data. I am noticing general slowness after repeated performance testing as well
first I can suggest you try upgrading; 2.2 seems quite old and a lot about rendering has changed since
but anyway, another thing that's time-consuming is generating the tree from the db. Fulcro optimizes a lot there and tries to reduce which parts are re-computed, but how effective that is can vary according to your component structure/organization
having lots of components with idents helps
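(To illustrate "components with idents" — a sketch from memory of Fulcro 2's `defsc` options; check the current docs for the exact option names:)

```clojure
(ns example.ui
  (:require [fulcro.client.primitives :refer [defsc]]
            [fulcro.client.dom :as dom]))

;; The :ident option tells Fulcro where this component's data lives in
;; the normalized db, so refreshes and tree regeneration can be
;; localized to just this entity instead of re-deriving everything.
(defsc Person [this {:person/keys [id name]}]
  {:query [:person/id :person/name]
   :ident [:person/by-id :person/id]}
  (dom/div name))
```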
okay those are good suggestions, thanks! Seems like a good reason to upgrade. I’ll try to make the jump to 2.6
also, is the tree generation part of the reconciler? I noticed that function hogged the main thread for 90 seconds straight (at least that’s what I assume 0 fps means)
yes it is, you can try to look for the fn db->tree
(the name is a bit changed by munging)
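(`db->tree` takes a query, a starting entity (or the root), and the full normalized db for resolving idents. Roughly — Fulcro 2-era signature, from memory, with invented data:)

```clojure
(ns example.denorm
  (:require [fulcro.client.primitives :as prim]))

(def db
  {:person/by-id {1 {:person/id 1 :person/name "Ada"
                     :person/friends [[:person/by-id 2]]}
                  2 {:person/id 2 :person/name "Grace"}}})

;; Denormalize: walk the query, following idents through `db` to
;; rebuild the props tree the UI will render. This is the per-render
;; work being discussed above.
(prim/db->tree
  [:person/name {:person/friends [:person/name]}]  ; query
  (get-in db [:person/by-id 1])                    ; starting entity
  db)                                              ; tables for ident lookup
;; => roughly {:person/name "Ada", :person/friends [{:person/name "Grace"}]}
```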
okay perfect, I’ll read into it
There is a performance section in the book, as well. I doubt you’ll see significant performance differences with an upgrade. In fact, I would first go to 2.5.12 and test there, as the 2.6 rendering is a bit slower than 2.5.12 (though further optimizations are planned).
A 90 second pause is definitely a problem, and you should describe more about what you’re doing…I have a feeling you’ve got something more serious wrong with your code.
OH, do you have Fulcro Inspect installed? Devtools? Those can also cause performance problems. Try running without them and see if it is better.
@U9E12PLN8 There’s a good chance it is tools/extensions and not Fulcro itself, unless you’re doing something extreme (and you haven’t told us enough to diagnose that).
Yeah so, this is the most data-heavy sync (into a single ident) from the backend to the frontend, e.g. 12,000 rows × 10 columns worth of data. This same process works fine (there are perhaps 1 or 2 second blips of unresponsiveness) when we are loading < 2,000 rows × 10 columns. After merging the data, we render it into a table (but only render 50 rows as HTML). This isn’t strictly related to tools/extensions, since the issue appears in production (with very long sessions of unresponsiveness). The 90s I recorded was in dev with Fulcro Inspect & cljs devtools though.
but essentially, we just (swap! state update-in ident merge model)
(as well as some additional swaps into other various spots). The main slowdown appears to be in the reconciler, more specifically split between:
react-dom’s flattenSingleChildIntoContext
takes up 83.3%
and measureLifeCyclePerf
spends its 15.3% printing cljs$core$pr_writer_impl
(not sure where it’s printing to though)
The data blob on the backend is a component entity (it only has one parent), and we mirror that on the frontend. As a result, it is fairly deep in the client-side database:
[:widget/by-id 17592186085115 :widget/data-source :data-source/current-stream]
The data blob shouldn’t be getting normalized though as we don’t query into it
I’m going to investigate fixed-data-table, as I think the performance hit is actually in the React rendering code. Though perhaps I should try some other profiling tools to verify (react-perf or maybe tufte)?
just to make sure, could you try rendering a stubbed UI (or just the first 2 rows of the table)?
that way you can confirm if the issue is actually in the reconciler or in react’s rendering
do you need to render all 12,000 rows? 😅
make sure you stub well above the part where you construct fancy table cells with all those tooltips and formatting
if fancy cells are the cause you could disable those for large tables
Hey @U09FEH8GN, I think I confused myself: I noticed the fulcro reconciler calling React’s forceUpdate fn and thought that directly triggered a render. Does the render get triggered after the reconciliation (through some event or something)? fixed-data-table itself only renders around 50 DOM elements, but I think the solution I am leaning towards is some sort of lazy pagination for the rows. We only need the extra rows if the user actually scrolls (except for webviews). Tables with < 1000 rows don’t lag for more than 1 second
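(The lazy-pagination idea is just windowing the rows before they ever reach the query/render path. A minimal sketch — `page-of-rows` and its parameters are made-up names:)

```clojure
(defn page-of-rows
  "Return the window of `rows` for zero-based `page`, `page-size` rows
  at a time. Only this slice gets merged/rendered; the rest stays put
  until the user scrolls to it."
  [rows page page-size]
  (->> rows (drop (* page page-size)) (take page-size) vec))

(def rows (mapv #(hash-map :row/id %) (range 12000)))

(count (page-of-rows rows 0 50))           ;; => 50
(:row/id (first (page-of-rows rows 3 50))) ;; => 150
```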
@U9E12PLN8 I suggest you check this lib: https://github.com/bvaughn/react-virtualized it seems to do a pretty good job at "buffering" the rows to render
The table library he’s using does something similar already
@U9E12PLN8 Render is triggered during reconciliation. You do a transact (or a load finishes)…UI queries run at the targeted components needing refresh, then it pushes the render off to React. The two heavy hitters are db->tree, which does the heavy lifting to satisfy queries, and React itself on the refresh. Any defsc will short-circuit (PureComponent-like) if props have not changed. But yeah, sounds like you’re struggling with React.
Thanks for the tips, I will jump back on this the week after next. I’ve noticed that there are additional hangups with sente deserializing the data. I think the 90s I saw in the fulcro reconciler was an unrealistic anomaly, probably because of the devtools and fulcro inspect. Profiling in prod, I’ve noticed several smaller blips of unresponsiveness, and the reconciliation is usually one of the smaller blips! Sente deserialization seems to cause 0 fps as well. I’ll keep digging once I get back and let you all know the results, but I have a feeling breaking the large responses into pages will make everything much smoother!