At the moment I'm doing all my testing manually, but it's quite annoying and I'm sure there's a more productive way to do it.
@manu I've ended up at least writing tests for view-generation code that can run directly in jsc. If you look at https://facebook.github.io/jest/docs/en/tutorial-react-native.html you could argue that testing generated Hiccup (in the case of using Re-Frame) is more flexible than diffing a snapshot file.
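The appeal of asserting on generated Hiccup is that the "render" result is plain data. A minimal sketch of the idea, in plain JavaScript rather than ClojureScript and with hypothetical names (a real Re-Frame view would return Hiccup vectors, which work the same way):

```javascript
// Hypothetical view function: returns Hiccup-style nested arrays
// (tag, attribute map, children) instead of rendered components.
function loginButton(enabled) {
  return ["button", { disabled: !enabled, className: "login" }, "Log in"];
}

// Because the result is plain data, a test can assert on exactly the
// parts it cares about, instead of diffing an entire snapshot file.
const [tag, attrs, label] = loginButton(false);
console.assert(tag === "button");
console.assert(attrs.disabled === true);
console.assert(label === "Log in");
```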
@manu We’re running our tests in Node.js, but mainly testing logic and not components (spec + re-frame = zero component issues)
@manu I've recently set up some initial integration testing, using Appium and http://webdriver.io: https://github.com/OkLetsPlay/lets-bet-integration-test
Basically, integration testing exercises the production app just as a user would. You can script sessions, saying where to tap/swipe, what to type, and what should be visible and where.
Nothing fine-grained, but it's a good automated way to navigate through the app and make sure the full system (front-end and back-end) is working properly.
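A scripted session of that kind might look roughly like the sketch below. This is an illustration only: the selectors and app path are hypothetical, it assumes a local Appium server, and the exact capability names depend on your Appium version.

```javascript
// Pure helper: build Appium capabilities for an Android emulator.
// (The "appium:" vendor-prefixed keys are an assumption based on
// Appium's capability convention; adjust for your setup.)
function androidCaps(appPath) {
  return {
    platformName: "Android",
    "appium:automationName": "UiAutomator2",
    "appium:app": appPath,
  };
}

// Sketch of a user-style session: tap, type, expect visible.
// Needs a running Appium server; webdriverio is loaded lazily so the
// pure helper above can be used without it installed.
async function smokeTest(appPath) {
  const { remote } = require("webdriverio");
  const driver = await remote({
    hostname: "localhost",
    port: 4723,
    capabilities: androidCaps(appPath),
  });
  try {
    // Hypothetical accessibility ids in the app under test.
    await driver.$("~login-button").click();
    await driver.$("~username-field").setValue("test-user");
    const banner = await driver.$("~welcome-banner");
    console.assert(await banner.isDisplayed());
  } finally {
    await driver.deleteSession();
  }
}
```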
We combine this with unit testing (which is not open source yet). I'd like to get something like Jest's snapshots going as well, but that'll have to be built from scratch, since Jest requires Metro bundling, which takes several minutes for our app and is a real pain.
Am I right in understanding that http://webdriver.io is for testing the Web frontend and Appium for the Mobile frontend?
@pesterhazy Sort of. Appium is the back-end which talks to the emulator. wdio is the front-end which provides the API for interacting with the UI. There are a few options for front-end libraries, with mostly the same API, but they will all talk to Appium.
Finding docs for getting all of that going, especially with ClojureScript, was not straightforward. I have some blog posts in the works on it, as well as on using promesa for the async tests.
@pesterhazy wdio speaks a protocol. The webdriver protocol. Appium is a server which implements that protocol for mobile devices instead of the web.
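Underneath, the WebDriver protocol is just HTTP + JSON: the client POSTs to `/session` to start a session, then issues commands against `/session/<id>/...`. A rough sketch of the request shape, using a hypothetical helper that only builds the request and sends nothing:

```javascript
// The protocol wdio speaks, made concrete: a W3C WebDriver "new
// session" request is a POST to /session with the desired
// capabilities under "alwaysMatch". Appium is simply a server that
// answers these requests for mobile devices instead of browsers.
function newSessionRequest(baseUrl, capabilities) {
  return {
    method: "POST",
    url: `${baseUrl}/session`,
    body: JSON.stringify({ capabilities: { alwaysMatch: capabilities } }),
  };
}

const req = newSessionRequest("http://localhost:4723", { platformName: "Android" });
// req.url → "http://localhost:4723/session"
```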
The tests run nicely in my Android emulator, even in debug builds with figwheel enabled.
If you're running integration tests, and you use spec, I think you should absolutely be using Orchestra. It's the perfect time to make sure all of your data is the correct shape as the "user" uses the "production" app.
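What Orchestra's instrumentation buys you is that every call checks the spec'd argument and return shapes, so bad data surfaces the moment it appears during an integration run. The effect can be sketched in JavaScript with a hypothetical wrapper (this is an analogue, not Orchestra's actual API):

```javascript
// Rough analogue of spec instrumentation: wrap a function so each
// call validates its arguments and return value against predicates,
// throwing immediately on a shape mismatch.
function instrument(fn, { args: argPreds, ret: retPred }) {
  return function (...callArgs) {
    callArgs.forEach((a, i) => {
      if (!argPreds[i](a)) {
        throw new Error(`arg ${i} failed its check: ${JSON.stringify(a)}`);
      }
    });
    const result = fn(...callArgs);
    if (!retPred(result)) {
      throw new Error(`return value failed its check: ${JSON.stringify(result)}`);
    }
    return result;
  };
}

// Hypothetical example: a bet amount must be a positive number.
const placeBet = instrument(
  (amount) => ({ amount, placed: true }),
  {
    args: [(a) => typeof a === "number" && a > 0],
    ret: (r) => r.placed === true,
  },
);
```

With this in place, driving the app through an integration test exercises the checks on every spec'd call, which is exactly why instrumenting during integration runs is so valuable.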
If you need to rerun CI 20% of the time, it almost negates the benefits of the 80% of cases where things work fine.
We're not yet running these in CI. I've just been setting them up in the past couple weeks and I'm trying to sort out how to best orchestrate two devices using the app together (since our app is about connecting people to challenge each other).
When running the tests, I have not yet seen spurious failures. Running on a CI machine, all the damn time, will help bring those out though.
I'm really looking forward to being able to automate full app playthroughs between two users. Especially on a CI machine, though, which can likely only run an Android emulator, that may be tough to pull off.
Could make a service behind an API (so a second "device" isn't needed) for starting fake client sessions, changing bets, chatting, etc., but that service would have to have its paws in the internals of the back-end to make things work. It'd be the easiest approach, but it would break with internal changes.
Manually testing the whole app after every update is not only a big time sink, but a big money sink, in terms of paying QA personnel (who are human and make mistakes).
It can be automated well, I think; it's just not an easy problem. At this point, it still seems worth the time. That may change.
It's also possible to get lost in the attempt to automate testing. Hopefully not the case for you!