This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2019-01-15
Channels
- # announcements (5)
- # architecture (17)
- # aws (2)
- # bangalore-clj (1)
- # beginners (157)
- # boot (22)
- # boot-dev (2)
- # cider (64)
- # clara (2)
- # cljs-dev (3)
- # clojure (30)
- # clojure-art (2)
- # clojure-australia (1)
- # clojure-belgium (1)
- # clojure-denver (1)
- # clojure-dusseldorf (1)
- # clojure-europe (8)
- # clojure-finland (2)
- # clojure-italy (9)
- # clojure-nl (21)
- # clojure-spec (261)
- # clojure-switzerland (3)
- # clojure-uk (67)
- # clojurescript (57)
- # clojurewerkz (2)
- # cursive (3)
- # datomic (27)
- # emacs (12)
- # figwheel-main (2)
- # fulcro (48)
- # garden (67)
- # graphql (41)
- # jobs (8)
- # kaocha (8)
- # liberator (2)
- # lumo (1)
- # off-topic (19)
- # parinfer (9)
- # perun (4)
- # re-frame (50)
- # reagent (7)
- # remote-jobs (4)
- # ring-swagger (20)
- # rum (6)
- # shadow-cljs (170)
- # specter (3)
- # tools-deps (19)
- # vim (3)
sounds to me like you're trying to monitor low-level data (server logs) at too high a level (black-box testing)
I've learned that we have a service which escalates certain kinds of events by their severity
note: this is my first testing job and you will not offend me if you ask me any obvious questions or have obvious comments because they won't be obvious to me
i do know that i am spending a lot of time at the e2e tippy top of "The Testing Pyramid" but until our API stabilizes more I don't think that I will be writing tests at a lower layer
@mathpunk just a thought but, because log lines have timestamps (which I'm assuming you can parse and get at), you can just capture all the data the system under test slurps or spits, and do the analysis later, obviating the need to tail or watch files.
Sort of like how simulant does it.
You can model the actions a user can do, and the reactions of the system, and refine these models as you generatively test/simulate them against the system.
I found this talk useful https://www.youtube.com/watch?v=zjbcayvTcKQ
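The "capture everything, analyse later" suggestion above can be sketched in a few lines of Clojure. This is a minimal illustration, not a real harness: the `<ISO-8601 timestamp> <LEVEL> <message>` log-line format and all the names here are assumptions for the sake of the example.

```clojure
;; Sketch: parse the timestamp out of each log line as it arrives and
;; accumulate the parsed events in an atom, so analysis can happen after
;; the run instead of while tailing files. The log format is assumed.
(ns log-capture.core
  (:require [clojure.string :as str])
  (:import [java.time Instant]))

(def captured-events (atom []))

(defn parse-line
  "Parses a hypothetical `<ISO-8601 timestamp> <LEVEL> <message>` log line."
  [line]
  (let [[ts level & msg] (str/split line #"\s+")]
    {:timestamp (Instant/parse ts)
     :level     (keyword (str/lower-case level))
     :message   (str/join " " msg)}))

(defn capture!
  "Records one parsed log line for later analysis."
  [line]
  (swap! captured-events conj (parse-line line)))

(defn events-of-severity
  "Example of a query you would run afterwards over the captured data."
  [severity]
  (filter #(= severity (:level %)) @captured-events))
```

Because the events carry their own timestamps, ordering and timing analysis can all be done offline over `@captured-events`.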
simulation testing is the goal.... but our app is so big that there are a lot of plain example-based tests to be written, just to make sure that i understand what user actions are available
^ I feel your pain
It seems to me like doing it generatively is going to save you a lot of effort, because it's very easy to make assumptions about how a system will behave by reasoning inductively, something that generative tests relentlessly point out. So once you've invested in setting up a quick feedback loop doing this, you'll get to a solid model in no time (fingers crossed)
We have a couple of interactive applications with some fairly complex semantics, and we wrote specs for the possible valid sequences of actions, and then we generatively test that when we get the app into a given state, the expected properties hold true...
It's tricky to do but can be pretty powerful. Mind you, as @mathpunk says, first you need to know a) what actions are available and b) how actions can be combined!
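The approach described above, spec'ing the valid sequences of actions and then generatively checking that properties hold after driving the app through them, can be sketched with clojure.spec and test.check. Everything here (the action set, the toy `step` reducer, the property) is invented for illustration; a real app would have its own actions and state.

```clojure
;; Sketch: a regex spec describes valid action sequences, a reducer
;; applies each action to an app state, and a test.check property
;; asserts an invariant of the final state for generated sequences.
(ns action-model.core
  (:require [clojure.spec.alpha :as s]
            [clojure.test.check :as tc]
            [clojure.test.check.properties :as prop]))

;; a valid session starts with :login and ends with :logout,
;; with any number of item actions in between (all hypothetical)
(s/def ::session
  (s/cat :start  #{:login}
         :middle (s/* #{:add-item :remove-item})
         :end    #{:logout}))

(defn step
  "Applies one action to the app state (a toy reducer for illustration)."
  [state action]
  (case action
    :login       (assoc state :logged-in? true)
    :logout      (assoc state :logged-in? false)
    :add-item    (update state :items inc)
    :remove-item (update state :items (fn [n] (max 0 (dec n))))))

;; property: however the generated session unfolds, the item count
;; in the resulting state never goes negative
(def items-never-negative
  (prop/for-all [actions (s/gen ::session)]
    (let [final (reduce step {:logged-in? false :items 0} actions)]
      (<= 0 (:items final)))))

(comment
  ;; run the generative test from a REPL
  (tc/quick-check 100 items-never-negative))
```

The nice part of using a regex spec (`s/cat`, `s/*`) for the sequences is that the same spec both validates recorded user sessions and generates new ones to simulate.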