This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2016-09-28
Channels
- # arachne (2)
- # aws (5)
- # aws-lambda (5)
- # beginners (4)
- # boot (25)
- # cljs-dev (270)
- # cljsjs (1)
- # cljsrn (72)
- # clojars (5)
- # clojure (201)
- # clojure-belgium (5)
- # clojure-brasil (4)
- # clojure-italy (2)
- # clojure-korea (2)
- # clojure-russia (24)
- # clojure-spec (24)
- # clojure-uk (22)
- # clojurebridge (1)
- # clojurescript (125)
- # cloverage (3)
- # cursive (41)
- # datomic (37)
- # dirac (4)
- # emacs (2)
- # hoplon (421)
- # lein-figwheel (1)
- # leiningen (5)
- # luminus (2)
- # mount (1)
- # off-topic (18)
- # om (44)
- # om-next (4)
- # onyx (44)
- # pedestal (3)
- # proton (9)
- # re-frame (21)
- # reagent (21)
- # ring-swagger (12)
- # specter (9)
- # sql (2)
- # untangled (62)
- # vim (16)
I seem to have gotten the Travis CI build of U.C. working better. I reduced the memory footprint of the JVM instances and made sure all of the versions of things used to run the automated tests were up to date; hopefully that has fixed the erratic failures when launching the browser that runs the tests.
It’s looking to me like `load-data` is deleting fields on subsequent queries with fewer keywords. e.g. if I first call `(load-data [:several :keys {:with [:some :joins]}])`, and later I call `(load-data [:several :keys])`, then the data under the `:with` key is gone. has anyone else seen the same behavior?
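roughly what I’m doing, as a sketch (the keys are made up, following the call shapes above):

```clojure
;; Hypothetical repro sketch, not real application code.
;; First load asks for the join:
(load-data [:several :keys {:with [:some :joins]}])
;; app-state now has data under :several, :keys, and :with

;; later, a narrower load:
(load-data [:several :keys])
;; after this, the data under :with is gone from app-state,
;; even though :with was not part of the second query
```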
doesn’t seem appropriate to me since I might want to update the data at `:several` and `:keys` but not another key
@tony.kay thought it might be a post-mutation but I’m sure it isn't
it definitely could be some other bug
but the data for all fields not specified in the second load is gone
yes to using get-query
not sure what you mean by similar root
I don’t think I’m excluding anything relevant
yes, normalization is involved
I have a survey that is already normalized in app-state
then I’m pulling down a survey request, which itself pulls down certain details about that survey
the already normalized survey data is a superset of the survey data pulled down with the request
but all of that survey data uses the same ident, because both refer to the same survey
by the time the post-mutation runs, the data is already incorrect
seems that way
that’s my running hypothesis
having a hard time tracking it down
merge-idents is a separate step in the low-level Om merge. I'm trying to remember if we override that or just the general data merge
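the distinction I mean, as a toy sketch (illustrative data, not the actual Om internals):

```clojure
;; Both loads normalize to the same ident, e.g. [:survey/by-id 1].
(def app-state
  {:survey/by-id {1 {:id 1 :title "Survey" :questions [:q1 :q2]}}})

;; a narrower load returns a subset for the same survey:
(def incoming
  {:survey/by-id {1 {:id 1 :title "Survey"}}})

;; replacing the entry at the ident drops :questions ...
(assoc-in app-state [:survey/by-id 1]
          (get-in incoming [:survey/by-id 1]))
;; => {:survey/by-id {1 {:id 1, :title "Survey"}}}

;; ... while merging at the ident keeps it:
(update-in app-state [:survey/by-id 1]
           merge (get-in incoming [:survey/by-id 1]))
;; => {:survey/by-id {1 {:id 1, :title "Survey", :questions [:q1 :q2]}}}
```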
correct
aaah, that would make a lot of sense
if it marks keys as missing
I’m not sure I ever bothered to learn it 😬
I probably should
well, it is a bit involved... here is the basic idea: 1. It uses the query AND the result 2. It follows both through the recursive logic of the query and the structure of the data 3. If something is asked for in Q and not in R, then it is marked as missing 4. If it is asked for in Q and IS in R, it should pick the proper merge (or recursive step)
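a toy version of the marking step, just for flat keywords and one-level joins (NOT the real implementation):

```clojure
;; Sketch: walk the query; any key asked for in Q but absent from the
;; result R gets tagged, so the later merge knows it was queried for.
(defn mark-missing [result query]
  (reduce
    (fn [acc ele]
      (cond
        ;; plain keyword: tag it if the result did not contain it
        (keyword? ele)
        (if (contains? acc ele) acc (assoc acc ele :missing))

        ;; a join like {:with [:some :joins]}: recurse into the subquery
        (map? ele)
        (let [[k subquery] (first ele)]
          (if (map? (get acc k))
            (update acc k mark-missing subquery)
            (assoc acc k :missing)))

        :else acc))
    result
    query))

(mark-missing {:several 1} [:several :keys])
;; => {:several 1, :keys :missing}
```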
shouldn’t this all happen before looking at the app-state?
and then just do a deep merge?
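i.e. something like a naive deep merge (just a sketch of the alternative) would keep the unqueried keys around:

```clojure
;; Naive recursive merge: merge nested maps, otherwise take the newer value.
(defn deep-merge [& maps]
  (if (every? map? maps)
    (apply merge-with deep-merge maps)
    (last maps)))

(deep-merge {:with {:some 1 :joins 2} :several 3}
            {:several 4 :keys 5})
;; => {:with {:some 1, :joins 2}, :several 4, :keys 5}
```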
@adambros wrote the algorithm, so if you find a problem, it might be best to get his help
well I think you’re right
it is definitely deleting stuff from the app state
but it shouldn’t be
only from the remote result
haha ok
i’ll take the plunge and see what I can figure out
thanks
sounds good to me
I'm running into an issue where when I get a 401 I have a network-error-callback
that redirects to the logout page
but for some reason it gets into an infinite loop when I go to a page that does a load-field
BUT I can get out of the loop when I switch to a different tab and it does the logout correctly
any ideas?