#untangled
2016-04-27
currentoor00:04:11

If I want to make a few remote reads in parallel, how would I accomplish that?

cjmurphy01:04:07

Could not find artifact navis:untangled-client:jar:0.4.8-SNAPSHOT in clojars ()

cjmurphy01:04:47

Would it be okay to put that one up too?

kauko08:04:18

Hey, how do I use the lein template for untangled? When I do lein new untangled bazbaz, it "fails to resolve the version"

kauko08:04:51

Do I need to somehow add the template myself before I can use it?

ethangracer15:04:42

@currentoor: what do you mean by in parallel?

ethangracer15:04:54

@cjmurphy: 0.4.8 has been released, no snapshot version

ethangracer15:04:27

is there something you’re looking for in a snapshot version that isn’t in 0.4.8?

ethangracer15:04:48

@kauko: i’m having the same issue, not too sure. @adambros any thoughts?

tony.kay15:04:46

@kauko the template is not usable yet. Use todomvc as a template for now

tony.kay15:04:52

@adambros: If you wanted to pull todomvc and turn it into a base template that does work (without any special features yet), that would probably be nice for satisfying the basic need.

kauko15:04:04

ah ok, cool beans. Might want to add a mention of that to the repo readme 🙂

tony.kay15:04:10

yeah, will do

kauko15:04:04

How ready for use do you think untangled is? I know it's alpha since om next is alpha, but I'd still like to hear your thoughts.

tony.kay15:04:31

We're releasing a product to production in about 8 weeks.

tony.kay15:04:05

There are definitely "missing" things, but most of them are small and well understood. Refinements to the unhappy-path error handling, for example, are in progress.

tony.kay15:04:30

documentation is coming along, but we're kind of swamped, so that is kind of a hole...but the tutorial has most of what you'd need to know

currentoor17:04:14

@ethangracer: by in parallel I mean, if I have multiple calls to load-field in the same UI transition but I don't want them to be batched together.

tony.kay17:04:01

Why don't you want them batched?

currentoor17:04:23

I have a dashboard with several widgets, and these widgets have very different data requirements (some external APIs and some internal).

currentoor17:04:33

I don't want the loading of a slow widget to block the others.

tony.kay17:04:55

post-mutation

currentoor17:04:11

what do you mean?

tony.kay17:04:20

use the post-mutation to trigger the follow-on load

tony.kay17:04:36

queue one in the UI, then queue the next one after load finishes

tony.kay17:04:42

that would serialize them
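
A rough sketch of that chaining, assuming load-field takes a :post-mutation option and that client mutations are registered on the untangled.client.mutations/mutate multimethod; the field names, mutation symbol, and flag here are illustrative, not part of the library:

(require '[untangled.client.data-fetch :as df]
         '[untangled.client.mutations :as m])

;; the first load names a post-mutation that runs only after the server
;; response has been merged into the app state
(df/load-field this :widget/chart-data :post-mutation 'widgets/chart-loaded)

;; the post-mutation records that the first load finished; whatever keys
;; off that flag (e.g. the next widget firing its own load-field from its
;; lifecycle) now happens strictly after the first load, serializing them
(defmethod m/mutate 'widgets/chart-loaded
  [{:keys [state]} _ _]
  {:action (fn [] (swap! state assoc :widgets/chart-ready? true))})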

tony.kay17:04:12

It is a valid use-case, so we could add a little sugar so you could mark loads that you want to serialize

tony.kay17:04:44

We also want to add support for "future-based" loads, where you spin something off on the server but get an immediate response, and then poll for the result.

tony.kay17:04:13

It would be easy enough to add all of the bits of that so you could just specify a flag on a load, and it would switch to that behavior

tony.kay17:04:25

Useful for long-running things

tony.kay17:04:35

like reports

currentoor17:04:54

so with the post-mutation approach, it would load the widgets one by one, correct?

tony.kay17:04:09

the post-mutation happens after the response, so by definition, yes

currentoor17:04:31

i was hoping for something like make these four requests at once

tony.kay17:04:32

the only complication is that writing that in post-mutation isn't ideal from a clarity standpoint

tony.kay17:04:47

yeah, which is why I'm talking about adding support for your use-case(s)

tony.kay17:04:15

I'm feeling like the future-based approach is something we're going to need soon, and would solve your case

currentoor17:04:34

what about a websocket based solution?

tony.kay17:04:41

network is irrelevant

tony.kay17:04:57

we're adding that too, but this could be implemented on top of either

currentoor17:04:59

i see, so polling is just an implementation detail?

tony.kay17:04:11

would be hidden from you

tony.kay17:04:19

(load-data ... :future true)

tony.kay17:04:26

is all you'd see

tony.kay17:04:32

or something like that

tony.kay17:04:08

basically: respond immediately...don't wait for the response...but when the response arrives, normalize it into the app state and call the post-mutation

tony.kay17:04:37

so, we could still batch the requests, but their responses would process in parallel and arrive when complete

currentoor17:04:58

yeah that would be cool

tony.kay17:04:12

Our default sequential processing is needed for optimistic update reasoning, but this parallel processing is needed for reports/dashboards

tony.kay17:04:29

also don't want to tie up a network connection (and have timeouts) for long-running things

currentoor17:04:52

yeah the sequential processing has worked pretty great for us so far

currentoor17:04:05

optimistically creating reports and widgets is a breeze

tony.kay17:04:23

yeah, really nice overall feel.

tony.kay17:04:03

Let's open an issue on untangled-client for this...we're going to need it within the next few weeks for our product, and I was already thinking about it, which is why I had so many ideas about it so quickly 😉

currentoor17:04:22

nice, you want me to do it?

tony.kay17:04:15

You are welcome to give it a shot...should be fun 🙂

currentoor17:04:37

@tony.kay: LOL i meant write the issue!

tony.kay17:04:47

nope, got that

currentoor17:04:12

well i'd be happy to help in any way i can

tony.kay17:04:15

I'm going to coordinate with @mahinshaw since he's working on the network layer at the moment. I've got the arch in my head.

tony.kay17:04:53

I could probably get something working pretty quickly...as a rough draft. Minimal configuration, etc. Wouldn't be ideal for production, but would be easy to refine from there

currentoor17:04:17

i'm looking forward to seeing what you come up with

currentoor17:04:07

I'm calling load-field inside componentWillMount. Is that frowned upon?

(componentWillMount [this]
    (let [{:keys [widget/data-source]} (om/props this)]
      (if (empty? data-source)
        (df/load-field this :widget/data-source))))

currentoor17:04:41

so not frowned upon?

tony.kay17:04:17

so, I'd ask the #C06DT2YSY channel if transact! is ok from componentWillMount

tony.kay17:04:45

load-field is just a helper in front of transact.

tony.kay17:04:59

My guess is that it is perfectly fine

tony.kay17:04:21

unless, of course, your transact could cause it to unmount and then something could cause it to re-mount...

mahinshaw17:04:15

@tony.kay: It might be worth making that cache a component

tony.kay17:04:55

yes, of course...otherwise server restarts during dev would be hosed

mahinshaw17:04:22

right, I’m mostly thinking from a pluggable/configurable perspective

brianosaurus19:04:33

@tony.kay … I’ve noticed that with-db-fixture somehow masks components started in let bindings within. For example this will start the scheduler:

(let [scheduler (component/start (scheduler/make-scheduler))]
  (with-db-fixture db-comp
    ...))

while this won’t:

(with-db-fixture db-comp
  (let [scheduler (component/start (scheduler/make-scheduler))]
    ...))

tony.kay19:04:47

with-db-fixture is a macro...evaluated at compile time

tony.kay19:04:59

that might affect it

tony.kay19:04:09

I'm not seeing why without looking deeper...use the source 🙂

brianosaurus19:04:46

🙂 yeah, I poked around real quick; nothing jumped out. Just pointing it out. We’re on a deadline this coming Monday, so I’m in hurry-up mode ATM. Our thought is to either open an issue or just fix it, but that will have to be later next week or the week thereafter.

brianosaurus19:04:58

Also, FYI (I’ll probably either submit a patch or an issue for this)

[varname form & {:keys [migrations seed-fn log-level db-key] :or {log-level :fatal db-key :mockdb}}]
  `(t/with-level ~log-level
     (let [~varname (db-fixture ~db-key :migration-ns ~migrations :seed-fn ~seed-fn)]
       (try ~form (finally (component/stop ~varname))))))
If the migration fails it doesn’t report why (since there isn’t a catch)…instead the component/stop just says it can’t call stop on nil.

brianosaurus19:04:51

I fat-fingered a migration and was confused for a while wondering where the nil came from.

tony.kay19:04:02

sure, PRs welcome 🙂

tony.kay19:04:44

there are a number of things that do not give adequate error messages....at least we're in good company there, eh?

mahinshaw19:04:09

@brianosaurus: Did you get logs about your migrations failing? Because you should have.

brianosaurus19:04:44

@mahinshaw: Sorry, I misspoke. The migration was good, but the seed tried to shove an integer into an :instant during fixture setup. Once I fixed my mistake it all started to work very well (except timbre/info doesn’t output inside of (provided ...), and also the scheduler not starting).

brianosaurus20:04:12

Don’t get me wrong, THANK YOU THANK YOU for untangled. I’d like to help with PRs once my crunch is over.

mahinshaw20:04:50

@brianosaurus: No problem. I was interested in logs to see where the issue propagated from. In the case of seed errors, there is a catch around those, and the log uses a fatal level.

brianosaurus20:04:43

ahh, ok. thanks for clarifying

mahinshaw20:04:42

The DatabaseComponent in untangled.datomic.impl.components is where the migration and seeding happens.

therabidbanana21:04:51

Wondering if someone has a cleaner way to do this with untangled-spec - what I want is something like this:

(assertions
  channel-result => {:db/id _}
)
Confirming that the channel receives something with a :db/id whose value I don't care about. I end up using =fn=> and destructuring in the fn:
(assertions
  channel-result =fn=> (fn [{:keys [db/id]}] id)
)

therabidbanana21:04:15

I feel like maybe we'd need some sort of matcher library for that? Realizing I could shorten a bit using the keyword as the fn: (`channel-result =fn=> :db/id`)

tony.kay21:04:29

I'm trying to avoid too much syntactic sugar. The fn arrow gives you the power to make your own and typically the core library functions already give you nice expressiveness...as you just found

tony.kay21:04:55

@adambros: has added a little set of contains functions in there somewhere

tony.kay21:04:18

@therabidbanana: I usually prefer something more like (contains? channel-result :db/id) => true

tony.kay21:04:43

for looking in at things...just access the data structure and say what you want to find there

therabidbanana21:04:16

That's fair, I'm just used to all of the weird things rspec matchers provide in Ruby, I guess.

tony.kay21:04:47

yeah, it is very tempting...but then you have a whole new language to learn, when everyone already knows the core stuff

tony.kay21:04:59

and it rarely improves clarity much

therabidbanana21:04:23

(contains? channel-result :db/id) => true definitely gives a better failure message than channel-result =fn=> :db/id, so I'll probably go with that.

tony.kay21:04:17

that is the other thing about even supplying =fn=>: hard to make a quickly comprehensible failure message

tony.kay22:04:09

@ethangracer: @currentoor OK, a first draft of parallel loading is on the feature/untangled-7 branch of untangled-server, untangled-client, and the cookbook (sample recipe). I'll push the server/client libs as SNAPSHOTs to clojars. Use that branch of the cookbook to see how to use it.

tony.kay22:04:48

DEFINITELY NOT ready for prime-time...it does some really nasty stuff...but the API looks right, requires no changes on the server, and requires almost nothing on the client...so the API is right 🙂

tony.kay22:04:24

server: 0.4.8-SNAPSHOT

tony.kay22:04:49

client: 0.4.9-SNAPSHOT

tony.kay22:04:35

just add :background true to load-data or load-field
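
Adapting currentoor's componentWillMount snippet from earlier, the call-site change would presumably be just the extra flag (untested against the snapshot):

;; same load as before, but marked to run outside the sequential queue
(df/load-field this :widget/data-source :background true)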

tony.kay22:04:46

probably buggy, as well 🙂

tony.kay22:04:52

also leaks memory on the server like a sieve and is more chatty than a housewife in a '50s sitcom

tony.kay22:04:58

as I said...not production code

tony.kay22:04:52

@currentoor: There are definitely things you could fix about it, if you want some suggestions.

tony.kay22:04:19

Ideally, to support this feature, we'd change the networking layer to use long-polling or websockets full-time.

tony.kay22:04:31

like the Lift framework in Scala

tony.kay22:04:29

This is a really important feature for webapps. It comes up in every non-trivial one I've ever worked on: how to cope with users that can submit long-running tasks without killing your server or making the user unable to do anything else.

tony.kay22:04:19

On the plus-side, the internals of Om make it trivial to deal with the fact that they might "navigate away"....once the result is in the db, it is in the db...doesn't matter what changes on the screen while the bg job runs

tony.kay22:04:07

might be nice to integrate a caching story on the server...e.g. "keep the result of this query for x minutes, in case someone else asks for it"

tony.kay22:04:23

but I think that is easy enough to implement yourself in your query processing
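
A minimal sketch of that kind of do-it-yourself cache in server-side query processing: stash results in an atom keyed by the query, stamped with the time they were computed, and reuse them while they are younger than the TTL. None of these names are untangled-server API; they are illustrative only.

(defonce query-cache (atom {}))   ; query -> {:at <ms>, :value <result>}

(defn cached-read
  "Return the cached result for query if it is younger than ttl-ms,
   otherwise call compute, cache its return value, and return it."
  [query ttl-ms compute]
  (let [now   (System/currentTimeMillis)
        entry (get @query-cache query)]
    (if (and entry (< (- now (:at entry)) ttl-ms))
      (:value entry)
      (let [value (compute)]
        (swap! query-cache assoc query {:at now :value value})
        value))))

;; e.g. in the read handler for an expensive report, keep results for 5 minutes:
;; (cached-read [:report/summary params] (* 5 60 1000)
;;              #(run-expensive-report db params))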

tony.kay22:04:34

On a related but different note: we can do some nifty things to make HTTP caching work without much coding (based on the Om wiki tutorial on caching)

adambrosio22:04:28

@brianosaurus: I’ve actually fixed the error being swallowed in the DatabaseComponent; I just haven’t committed it yet. If I remove the try/catch here: https://github.com/untangled-web/untangled-datomic/blob/master/src/untangled/datomic/impl/components.clj#L56 errors propagate properly and you will see what’s wrong with your seed function.

adambrosio22:04:40

@therabidbanana: https://github.com/untangled-web/untangled-spec/blob/master/src/untangled_spec/contains.cljc#L59 is my current attempt at having something similar to what midje provides, just as a function. My current gripe, which I haven’t had time to think about, is how I want =fn=> checkers to be able to report failures in a custom and more detailed way.

adambrosio22:04:45

@brianosaurus: Did you try it yourself? An easy workaround is to wrap your seed function in a try/catch; the error will still be swallowed, but you can at least print it and find it in your terminal.

adambrosio22:04:06

so like (try (link-and-load-seed-data …) (catch Exception e (println e)))

currentoor22:04:06

@tony.kay: how do you feel about sente?

currentoor22:04:13

we were thinking about having a core.async/go loop monitor the datomic transaction log and push relevant stuff to the right users using sente
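
A rough sketch of that shape, with two caveats: Datomic's tx-report-queue is a blocking queue, so a dedicated thread (core.async/thread) fits better than a go block, and the sente send fn plus the two app-specific functions (relevant-users, tx->push-data) are assumptions supplied by the caller, not anything untangled provides.

(require '[clojure.core.async :as async]
         '[datomic.api :as d])

(defn start-tx-pusher!
  "Consume the Datomic transaction report queue on a dedicated thread and
   push each transaction to the users who should see it, via sente's send fn.
   relevant-users and tx->push-data are app-specific functions supplied by you."
  [conn chsk-send! relevant-users tx->push-data]
  (let [queue (d/tx-report-queue conn)]
    (async/thread
      (loop []
        (let [tx-report (.take queue)]   ; blocks until the next transaction
          (doseq [uid (relevant-users tx-report)]
            (chsk-send! uid [:app/tx-pushed (tx->push-data tx-report)]))
          (recur))))))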

adambrosio22:04:41

@currentoor: i think @mahinshaw is finishing up a cookbook on that

mahinshaw23:04:36

@currentoor: There is a bit of setup for that, but it will work. I have hooked up traffic with a websocket, and am moving to server push in a minute. I will post here when I get the cookbook code pushed to github.