#fulcro
2017-10-09
roklenarcic08:10:48

If you want to use REST, you need to write your own remote implementation on the client
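
For context, a rough sketch of what such a custom remote might look like in ClojureScript. This assumes the `FulcroNetwork` protocol shape of this era (a `send` taking the request EDN plus ok/error callbacks); check `fulcro.client.network` in your version for the exact method names and arities, and note that `edn->rest-request` and `rest-response->tree` are hypothetical helpers you would write yourself.

```clojure
(ns my-app.rest-remote
  (:require [fulcro.client.network :as net]))

;; Hypothetical helpers: translate between the EDN Fulcro hands the remote
;; and your REST endpoints (stubbed here for illustration).
(defn edn->rest-request
  "Map a Fulcro query/mutation onto a REST verb + URL."
  [base-url edn]
  {:method "GET" :url (str base-url "/todo")})

(defn rest-response->tree
  "Reshape the REST payload into the tree the original EDN query expects."
  [edn payload]
  payload)

;; Rough sketch of a REST-backed remote. The protocol/method shapes are an
;; assumption about the fulcro.client.network of this era (start's arity in
;; particular differed between versions); verify before copying.
(defrecord RestRemote [base-url]
  net/FulcroNetwork
  (send [this edn ok-callback error-callback]
    (let [{:keys [method url]} (edn->rest-request base-url edn)]
      (-> (js/fetch url #js {:method method})
          (.then (fn [resp] (.json resp)))
          (.then (fn [json] (ok-callback
                             (rest-response->tree edn (js->clj json :keywordize-keys true)))))
          (.catch (fn [err] (error-callback err))))))
  (start [this] this))
```

You would then hand an instance of this to the client, typically via the `:networking` option when creating the Fulcro client (again, check the option name for your version).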

alpox08:10:40

I have a few questions regarding the use of Fulcro: 1. Is it production-ready for large-scale apps? Also in terms of guaranteed future maintenance? 2. Is there a good way to interact with complicated SQL databases (PostgreSQL) with dynamic and sometimes custom queries?

tony.kay20:10:29

@clojure388 The fulcro template does HTML routing and demonstrates what you need…including the SSR for it. But you don’t use REST itself on the wire

tony.kay20:10:15

@alpox We have a number of commercial users. There are no guarantees in this life. Period. Did you try Angular 1??? How many companies use(d) it? Sorry, you’re in a world where things change. In terms of SQL, see fulcro-sql. In terms of custom queries…yes: you write them, you shape them, you respond with them. Just like any other app.

tony.kay20:10:28

The beauty of Clojure + open source = the code is relatively short, it is MIT licensed. If you adopted it, it would not be your major expense ever. It is pretty stable (much of the code hasn’t needed touching in quite some time), and if there was a bug you needed fixed, you can just fix it…it’s why open-source works in general.

tony.kay20:10:27

If you want a guarantee…hm, I guess you can buy one of those. I’ll gladly guarantee that I’ll maintain it for as long as you want if you’ll agree to guarantee to pay me $100k/yr to do so 🙂

tony.kay20:10:03

Alternatively, you could throw some Fulcro consulting my way though my company (http://fulcrologic.com), and that will likely get you a similar result 😉

tony.kay21:10:54

In all seriousness: the best bet you’ve got is adoption. The more people adopt a technology, the more the community will treat it as something to maintain. Additional contributors will appear, companies will pay people to do bug fixes, etc, etc. The reason you don’t worry about React (did you ask Facebook about guarantees???) is that a multi-billion-dollar company uses it, but consider that if Facebook decides to radically revamp React, Facebook won’t care much about how it affects your company…you either port or maintain the old version. Still, the wider the adoption, the better protected you are. But I would strongly encourage you to think about what I said first: what you’re getting with Fulcro is some code that your team could tractably maintain, without the expense of having written it. Same with any other open-source library. If maintenance stops, you do what you do with any other one: keep using it because it still works, or fork and fix it if necessary. That happens all the time, we worry way too much about it, and we never weigh it against how much money the original choice saved us. People belly-ache that “such and such” is no longer maintained and we adopted it…well, adopting it saved you tens of thousands of development dollars, and I bet it will never cost you that much to do a bugfix here or there.

tony.kay22:10:03

So, you’re much better off evaluating this: how well does it help you solve your problems? That is where your money goes. If it is better at solving the problem you have, then it is probably going to be better for a lot of other people, too.

tony.kay22:10:49

So, there you have a win-win: you end up with a better overall software system, possibly lower maintenance costs (depending on your own design skill in using it), etc.

tony.kay22:10:46

If I were doing small things that have a relatively short life I’d be tempted to use Reagent because it is easier to throw something together if you don’t already know it (I’d still use Fulcro, but I already know it well). Fulcro is aimed at longer/larger full-stack software projects where the growth over time means that the simplicity of the model matters more.

roklenarcic22:10:27

Tony did you have any good ideas about fulcro-sql joins where things are filtered (`deleted` columns in joined tables)?

tony.kay22:10:33

When you make large systems, “easy” is a losing proposition. Tack that “easy” thing together with that other “easy” thing. Ever tried to internationalize an app that uses jQuery UI’s calendar with some other library’s form support, etc etc etc? Nightmares.

tony.kay22:10:50

@roklenarcic Oh, I have ideas 🙂 Just have not implemented them yet.

tony.kay22:10:06

@roklenarcic I am leaning towards a notation in the join data like this:

`[:invoice/id :invoice_items/invoice_id [:item/quantity] :invoice_items/item_id :item/id]`

tony.kay22:10:28

where the middle vector is data that exists on the join table, but you name it to indicate which direction it “flows”

tony.kay22:10:52

e.g. in the above example the quantity column on the invoice_items table would appear on the item entity as :item/quantity
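
To make the proposal concrete, here is a sketch of what that notation might look like as data, along with the graph result it would produce. None of this was implemented in fulcro-sql at the time, and the `:invoice/items` join key and surrounding map shape are illustrative.

```clojure
;; Proposed join-table notation: the middle vector names data that lives on
;; the join table (invoice_items.quantity) and says which direction it
;; "flows" (toward the item side, as :item/quantity).
(def proposed-join
  {:invoice/items
   [:invoice/id :invoice_items/invoice_id [:item/quantity]
    :invoice_items/item_id :item/id]})

;; A query like [{:invoice/items [:item/id :item/name :item/quantity]}]
;; would then resolve :item/quantity from the join table and attach it to
;; each item entity in the result:
(comment
  {:invoice/items
   [{:item/id 1 :item/name "Widget" :item/quantity 3}
    {:item/id 2 :item/name "Gadget" :item/quantity 1}]})
```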

tony.kay22:10:56

how does that sound?

roklenarcic22:10:27

that sounds ok, but you do your joining in code if I'm not mistaken

tony.kay22:10:03

it seems more efficient than the arbitrarily large and complex table-based join that you’d generate otherwise

roklenarcic22:10:34

this could lead to problems when the joined table has from/to columns or a `deleted` flag, and 99% of the rows you pull from the DB get filtered away on the server

tony.kay22:10:56

true. Good point, especially for filtering

tony.kay22:10:33

should additionally support filtering, but I’d lean towards that being part of the API instead of config

tony.kay22:10:50

The first step is making the data accessible. Filtering is an issue at each layer of the graph, because any table might have that same kind of encoding (marking rows deleted instead of deleting them)

roklenarcic22:10:51

true, you always run into trouble when trying to get something as complex as SQL filters done in code

tony.kay22:10:26

I mean, that use-case (deleted) is so common that perhaps it is a feature all on its own

tony.kay22:10:36

but I’d hope we could generalize it

roklenarcic22:10:41

now that I'm thinking about it, I should really rather use history tables

tony.kay22:10:09

well, let’s not change subjects 🙂 This particular thing is needed

tony.kay22:10:27

data on join tables is one topic, filtering is another

tony.kay22:10:32

and history tables yet another

tony.kay22:10:37

which shall we talk about?

tony.kay22:10:56

oh, and optimization…

tony.kay22:10:02

so, four topics

tony.kay22:10:09

One idea on filtering is just by column. The namespace/column convention means that you could send in a set of filters to be applied (by keyword), and whenever that property is in a query, the filter would be supplied to the SQL.

tony.kay22:10:59

something like `{:invoice_item/deleted true :account/from "2017-01-11"}` as a parameter to `run-query`
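
A sketch of what that “sideband” parameter might look like in use. The `run-query` argument list below is illustrative only, not the actual fulcro-sql signature, and `db`/`schema` are just placeholder names.

```clojure
;; Filters keyed by the namespace/column convention; whenever the graph
;; walk touches the corresponding table, the criterion would be added to
;; the generated WHERE clause.
(def filters
  {:invoice_item/deleted true
   :account/from         "2017-01-11"})

(comment
  ;; Hypothetical call shape: the trailing filters argument is the proposed
  ;; extension being discussed here.
  (run-query db schema :account/id #{42}
             [:account/id :account/name {:account/invoices [:invoice/id]}]
             filters))
```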

tony.kay22:10:47

ideally we’d support parameters from the client in the graph query, as intended, as well.

tony.kay22:10:57

`[:account/id :account/name (:account/deleted {:eq false})]` would be a legal Fulcro client query that would be trivial to morph on the server

roklenarcic22:10:21

Sure that could work

tony.kay22:10:10

The AST support for queries makes it relatively easy to take a client query that doesn’t need to know about filters (like :deleted) and apply them in the server logic before passing them into the graph walking code.

tony.kay22:10:44

e.g. the client sends `[:account/id :account/name]` and you pre-process that into the suggested form above.
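
A minimal sketch of that pre-processing step, using om.next’s public `query->ast`/`ast->query` helpers (Fulcro was still built on om.next at this point; later versions re-export these from `fulcro.client.primitives`). The `add-filters` name and the idea of passing filters as a prop-to-params map are assumptions for illustration.

```clojure
(ns my-app.server-filters
  (:require [om.next :as om]))

(defn add-filters
  "Append parameterized filter props to a client query that doesn't know
   about them. `filters` is a map of prop keyword -> filter params."
  [query filters]
  (let [ast          (om/query->ast query)
        ;; Build AST prop nodes carrying the filter params.
        filter-nodes (mapv (fn [[k params]]
                             {:type :prop :dispatch-key k :key k :params params})
                           filters)]
    ;; Splice them into the root's children and turn it back into a query.
    (om/ast->query (update ast :children into filter-nodes))))

(comment
  (add-filters [:account/id :account/name]
               {:account/deleted {:eq false}})
  ;; => [:account/id :account/name (:account/deleted {:eq false})]
  )
```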

tony.kay22:10:53

either way…flexible

tony.kay22:10:05

but it’s the same mechanism to support in the graph traversal query code

tony.kay22:10:42

My one concern is queries that hit the same table at different levels of the graph

tony.kay22:10:53

I’m not certain that is a problem, but it bugs me

tony.kay22:10:09

let me restate that: it is technically a problem, but I’m not sure it matters (in that it might be so rare as to be a non-issue…hand write the few cases that you hit)

tony.kay22:10:43

`[:person/id :person/name {:person/spouse ...}]` is a trivial query that hits the same table at an arbitrary number of levels in the resulting graph

tony.kay22:10:23

`{:person/deleted false}` is a filter that globally applies…but should `{:person/age {:gt 20}}` apply to just the first level, or all?

tony.kay22:10:56

and you cannot even encode that on the query with the UI graph notation…you’re stuck with it as an external parameter.

tony.kay22:10:51

I guess you could do `{:person/age {:gt 20 :max-depth 1}}` to limit where the filter was applied…and just have a few ways of specifying depth ranges
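
Purely as an illustration of that depth-limiting idea, something like the following (hypothetical name) is what the query engine would consult at each level of the graph walk before applying a filter.

```clojure
(defn filter-applies?
  "Does this filter spec apply at the given graph depth (1 = root entities)?
   A spec with no :max-depth applies everywhere."
  [{:keys [max-depth]} depth]
  (or (nil? max-depth) (<= depth max-depth)))

(comment
  (filter-applies? {:gt 20 :max-depth 1} 1)   ; => true  (applied at the root level)
  (filter-applies? {:gt 20 :max-depth 1} 2)   ; => false (skipped on the nested :person/spouse level)
  (filter-applies? {:gt 20} 3))               ; => true  (no limit: applies at every depth)
```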

tony.kay22:10:43

Now that I’m working through it more, I think the primary mechanism is the “sideband parameter”, and the “in query” mechanism is an alternate way to pass the information into the query engine. The former is more powerful and general purpose, whereas the latter is useful for the data-driven flavoring.

roklenarcic22:10:14

yeah global filters are going to have that problem