#keechma
2019-06-02
carkh17:06:56

@mihaelkonjevic when i navigate to one of my counter detail pages (it's just another component), i see that the subscription first returns a nil value for my counter. then the datasource's processor method is called, and everything updates as expected. I seem to remember that one of the design goals of the dataloader was to avoid such corner cases. So i guess i'm doing something wrong again?

carkh17:06:22

what i'm expecting is: the counter detail component is only shown when the data is ready

carkh17:06:36

three-tiered counter app going on here =) we don't want any shenanigans like a loading spinner!

mihaelkonjevic19:06:39

@carkh for each datasource you have a -meta subscription, so if you registered the counter under the :counter datasource, you can subscribe to the :counter-meta datasource, which will hold information about the datasource - there you can check if the datasource is :pending or :loaded. As for the datasource’s subscription being nil, that is a correct result at that point in time - since the data is not loaded yet (so the datasource will be in the :pending state).
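
(For reference, a minimal sketch of gating the counter detail render on the -meta subscription. The namespace and component names are illustrative, and the assumption that the meta map exposes the :pending / :loaded state under a :status key is mine, not from the conversation.)

```clojure
(ns my-app.ui.counter-detail
  (:require [keechma.ui-component :as ui]))

(defn render [ctx]
  ;; only render the detail once the :counter datasource reports :loaded
  (let [counter @(ui/subscription ctx :counter)
        status  (:status @(ui/subscription ctx :counter-meta))] ;; assumed :status key
    (when (= :loaded status)
      [:div.counter-detail "Count: " (:count counter)])))

(def component
  (ui/constructor
   {:renderer render
    :subscription-deps [:counter :counter-meta]}))
```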

carkh19:06:21

ok so do this manually

mihaelkonjevic19:06:27

The dataloader loads data asynchronously even if you return the data synchronously from the loader fn - most datasources are going to be async anyway. But it should be returned within one Reagent render cycle

mihaelkonjevic19:06:07

the reason for the async behavior is that the dataloader uses channels under the hood to orchestrate the promises
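
(A hedged sketch of a datasource whose loader returns a value synchronously; the dataloader still delivers it asynchronously, as described above. The namespace, the :counter / :kv target and the assumption that map-loader lives in keechma.toolbox.dataloader.core are illustrative.)

```clojure
(ns my-app.datasources
  (:require [keechma.toolbox.dataloader.core :as dataloader]))

(def datasources
  {:counter {:target [:kv :counter]
             ;; reload whenever the params change; `true` here means "always load"
             :params (fn [_prev _route _deps] true)
             :loader (dataloader/map-loader
                      (fn [_req]
                        ;; a plain synchronous value; a promise would work the same way,
                        ;; and either way delivery goes through the dataloader's channels
                        {:count 0}))}})
```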

carkh19:06:33

ok good thanks.... one more question

carkh19:06:51

pipeline controllers do not have access to the app-db-atom?

carkh19:06:04

like regular controllers

carkh19:06:17

just the value itself?

carkh19:06:27

i'm bombarding you with questions, feel free to send me away =)

mihaelkonjevic19:06:55

don’t worry about it, I enjoy answering questions about keechma 🙂

mihaelkonjevic19:06:12

so, normal pipeline functions don’t have access to the atom, but there is an escape hatch if you really need it. What is your use case?

carkh19:06:42

i said earlier that i had made an app-db logging controller

carkh19:06:48

it's a regular controller

carkh19:06:11

i was trying to perfect this and debounce the logging because i get many logs of the app-db

mihaelkonjevic19:06:42

ok, so there is something that you can use for that use case - tasks

mihaelkonjevic19:06:55

it’s not documented yet but it’s pretty battle-tested

carkh19:06:02

so i want to try using a pipeline controller to make use of the exclusive and delay-pipeline things
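
(A rough sketch of what that could look like, assuming exclusive and delay-pipeline live in keechma.toolbox.pipeline.core; the controller and command names are illustrative. The idea: each :log-app-db command restarts the exclusive pipeline, and delay-pipeline acts as the debounce window.)

```clojure
(ns my-app.controllers.app-db-logger
  (:require [keechma.toolbox.pipeline.core :as pp :refer-macros [pipeline!]]
            [keechma.toolbox.pipeline.controller :as pp-controller]))

(def controller
  (pp-controller/constructor
   (fn [_route-params] true) ;; controller is always running
   {:log-app-db (pp/exclusive
                 (pipeline! [value app-db]
                   ;; a new :log-app-db command cancels the in-flight run,
                   ;; so only "quiet periods" longer than 250ms get logged
                   (pp/delay-pipeline 250)
                   (js/console.log app-db)))}))
```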

mihaelkonjevic19:06:24

we are using them for animations and some other advanced needs, like event handlers inside pipeline controllers. The idea behind tasks is that they can be inserted into the pipeline and (potentially) block the pipeline until they’re done. The task processor fn will run on every signal from the producer fn, and there is a built-in task that can be used to listen to app-db changes

carkh19:06:31

(though i could do this from a regular controller, i wanted to go "modern")

carkh19:06:55

mhh that's in the toolbox?

mihaelkonjevic19:06:14

We still use regular controllers when there is a need, so pipeline controllers just solve one specific use case

mihaelkonjevic19:06:20

Yeah, tasks are in the toolbox

carkh19:06:34

i'll have to investigate this

mihaelkonjevic19:06:37

Let me do a real quick implementation of the logger with tasks

carkh19:06:15

well i'm doing this for training purposes more than actual need... don't go out of your way for that

carkh19:06:55

i must say until now every worry or doubt about keechma has been squashed, but i'm glad i started with a silly app rather than the real thing

mihaelkonjevic19:06:20

so the app-db-change-producer already debounces the changes, so it should be called less frequently than if you just used add-watch
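
(A very rough sketch of the task-based logger. blocking-task! and app-db-change-producer are names from this conversation, but the namespaces, the argument order and the shape of the map passed to the processor fn are my assumptions, so treat it as a sketch to be checked against the toolbox source.)

```clojure
(ns my-app.controllers.app-db-logger-task
  (:require [keechma.toolbox.tasks :as t]
            [keechma.toolbox.pipeline.core :refer-macros [pipeline!]]
            [keechma.toolbox.pipeline.controller :as pp-controller]))

(def controller
  (pp-controller/constructor
   (fn [_route-params] true)
   {:start (pipeline! [value app-db]
             ;; blocking task: its processor runs on every (already debounced)
             ;; app-db change signal, and it is stopped with the controller
             (t/blocking-task!
              t/app-db-change-producer  ;; assumed location of the producer
              :app-db-logger
              (fn [{:keys [app-db]}]    ;; assumed processor argument shape
                (js/console.log app-db)
                app-db)))}))
```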

carkh19:06:15

reading it

mihaelkonjevic19:06:47

We are using tasks in cases where you want to have a subprocess running and potentially updating app-db multiple times before releasing and letting the pipeline continue. They were created because we needed a way to do state-based animations

mihaelkonjevic19:06:09

for instance, the wait-dataloader-pipeline! function (which you can use inside a pipeline to wait until the dataloader is done loading all datasources) uses tasks under the hood
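
(For illustration, how that is typically used as a pipeline step, assuming wait-dataloader-pipeline! lives in keechma.toolbox.dataloader.controller; the controller and command names are made up.)

```clojure
(ns my-app.controllers.report
  (:require [keechma.toolbox.pipeline.core :refer-macros [pipeline!]]
            [keechma.toolbox.pipeline.controller :as pp-controller]
            [keechma.toolbox.dataloader.controller :as dataloader-controller]))

(def controller
  (pp-controller/constructor
   (fn [_route-params] true)
   {:load-report (pipeline! [value app-db]
                   ;; blocks this pipeline until every datasource has loaded
                   (dataloader-controller/wait-dataloader-pipeline!)
                   (js/console.log "dataloader finished, app-db is populated"))}))
```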

carkh19:06:31

i need to digest this and play with it

mihaelkonjevic19:06:23

also, blocking tasks are “managed” - which means they will be stopped when the controller is stopped. If you use the non-blocking-task! flavor, you must stop them manually

carkh19:06:07

alright! thanks again

carkh19:06:51

hum i see the :on-start key, where are those listed?

mihaelkonjevic19:06:06

:on-start is the same as :start for pipeline functions, but we added another name when we allowed synchronous start and stop functions too. So instead of (constantly true) for params, the first argument can be an object that has :params, :start and :stop functions - these are the lifecycle functions that are called synchronously (just like in the regular controllers) and must return app-db. So it made sense to have :on-start and :on-stop in pipeline functions to signal that they are async
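
(A hedged sketch of the shape described above: a map with synchronous :params / :start / :stop as the constructor's first argument, and async :on-start / :on-stop pipelines next to the other pipelines. The (params, app-db) argument order for the sync functions is my assumption, mirroring regular controllers; the names are illustrative.)

```clojure
(ns my-app.controllers.example
  (:require [keechma.toolbox.pipeline.core :refer-macros [pipeline!]]
            [keechma.toolbox.pipeline.controller :as pp-controller]))

(def controller
  (pp-controller/constructor
   {:params (fn [route-params] (:page route-params))
    :start  (fn [_params app-db]
              ;; synchronous, like a regular controller's start - must return app-db
              (assoc-in app-db [:kv :example :started?] true))
    :stop   (fn [_params app-db]
              (assoc-in app-db [:kv :example :started?] false))}
   {:on-start (pipeline! [value app-db]
                (js/console.log "async start pipeline"))
    :on-stop  (pipeline! [value app-db]
                (js/console.log "async stop pipeline"))}))
```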

carkh19:06:19

alright, i understand

carkh19:06:30

thanks again for your kind help, i'll be sure to bother you again soon =)