
RE: Integration test patterns — I started at a new company at the beginning of the month, where the entirety of their integration tests are for the API. In some of my work, I've been hitting situations where the functionality I want to test both (a) isn't exposed via the API and (b) requires HTTP/DB/etc. Naturally, these would fit into the "integration test" bucket. One of the more tenured folks on the team thinks we should create new API endpoints for this other functionality we want to test, rather than simply testing the specific functions themselves. To me, this seems a bit overkill, but I wanted to make sure I understand the idea behind, and the benefits of, this approach. Does anyone have familiarity with this line of reasoning, or resources I could read around it?


One of the things we do with some tests around our API is to spin up a test instance actually inside our tests -- so the test and the API code are running in the same JVM -- so we can mock/manipulate various parts of the stack directly in the tests. That has obviated any need for test-only API endpoints.


This is a bit more advanced than what we're doing afaict. What are you using to do this?


Our services all use embedded servers -- Jetty or http-kit -- so it's just a function call in a test fixture to start a server in the same JVM process and shut it down after the test completes. We use Component, so `start`/`stop` is a natural part of the test fixture lifecycle.
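For anyone unfamiliar with the pattern, here's a minimal sketch of an in-process http-kit server driven by a `clojure.test` fixture. The handler, port, and namespace names are hypothetical, and a real Component-based setup would start/stop the whole system rather than a bare server:

```clojure
;; Sketch: start an embedded http-kit server in the test JVM,
;; run the tests against it, then shut it down.
(ns example.api-test
  (:require [clojure.test :refer [deftest is use-fixtures]]
            [org.httpkit.server :as server]
            [org.httpkit.client :as http]))

(defn handler [_req]
  {:status 200 :headers {"Content-Type" "text/plain"} :body "ok"})

(defn with-server [f]
  ;; run-server returns a zero-arg function that stops the server
  (let [stop (server/run-server handler {:port 8081})]
    (try (f)
         (finally (stop)))))

(use-fixtures :once with-server)

(deftest in-process-round-trip
  ;; http-kit's client returns a promise; deref to get the response map
  (is (= 200 (:status @(http/get "http://localhost:8081/")))))
```

Because everything runs in one process, the test can also `require` application namespaces directly and redefine or mock pieces of the stack before hitting the endpoint.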


Thanks, that makes sense


We do essentially what @seancorfield is saying, but we’re using Integrant/Duct. So you can trivially spin up a system, or a subset of it (with the transitive deps), all running, and then integration-test against that in process. We have also done it out of process for smoke tests, but I prefer the in-process stuff because it’s much faster and easier to manage. The benefit of Integrant over Component is that you can easily meta-merge test config overrides on top of what would otherwise be your production/base config. Also, Integrant/Duct can give you access to the full prepped config for the whole system, along with the system itself. It’s by far the cleanest way I’ve found to do this kind of thing.
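To illustrate the meta-merge idea mentioned above, here's a rough sketch using the standalone `meta-merge` library that Duct builds on. The config keys and values are hypothetical:

```clojure
;; Sketch: layer test-only overrides on top of a base Integrant config.
(ns example.system-test
  (:require [integrant.core :as ig]
            [meta-merge.core :refer [meta-merge]]))

(def base-config
  {:example/db     {:jdbc-url "jdbc:postgresql://db-host/app"}
   :example/server {:port 8080
                    :db   (ig/ref :example/db)}})

(def test-overrides
  ;; only the bits that differ under test
  {:example/db {:jdbc-url "jdbc:h2:mem:test"}})

;; meta-merge deep-merges the overrides onto the base config,
;; so tests run the same system shape with test-only settings.
(def test-config (meta-merge base-config test-overrides))

;; (ig/init test-config)                 ; start the whole system
;; (ig/init test-config [:example/db])   ; or just a subsystem
```

Passing a vector of keys to `ig/init` is what makes "spin up a subset of the system with its transitive deps" a one-liner.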


Yup, we use Component and our tests only spin up the subsystem(s) they actually need. Our DB setup is too large/complex to try to replicate in-memory because the DDL is rarely compatible across DB types. I keep looking at Integrant and may do a spike to switch over to it on a branch... but it would be a pretty daunting task at this point (we have 82,000 lines of code and we use Component really heavily).


Great, thanks for the input @U06HHF230 --- always nice to hear how others are solving this


Where possible we try to test at the function level but we have found value in a high-level test suite that deliberately exercises the API -- to verify error responses for bad input, as well as some "happy path" responses.
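That kind of high-level suite can stay small. A sketch of the shape, with a hypothetical endpoint and assuming a server is already running (e.g. via a fixture like the in-process one discussed earlier):

```clojure
;; Sketch: one bad-input case and one happy-path case against the API.
(ns example.api-contract-test
  (:require [clojure.test :refer [deftest is]]
            [org.httpkit.client :as http]))

(deftest bad-input-returns-400
  ;; malformed JSON should be rejected with a client error
  (let [{:keys [status]} @(http/post "http://localhost:8081/users"
                                     {:headers {"Content-Type" "application/json"}
                                      :body    "not-json"})]
    (is (= 400 status))))

(deftest happy-path-returns-200
  (let [{:keys [status]} @(http/get "http://localhost:8081/users/123")]
    (is (= 200 status))))
```

Everything else -- the "inner workings" -- can then be covered by function-level tests that require the namespaces directly.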


Right @seancorfield --- I definitely see value in high-level tests that verify an API is working as expected. And our integration-test suite can work the same way you describe, allowing us to directly require namespaces from application code, then mock/etc., which is what I think makes the most sense to cover the "inner workings", for lack of a better term


> create new api endpoints for this other functionality we want to test, rather than simply testing the specific functions themselves

imo it sounds very unconventional :man-shrugging::skin-tone-2: I'm a fan of minimizing the number of integration tests; there were some nice explanations and videos on this topic by Testdouble, e.g. see


Right @metametadata, just wanted to make sure it wasn't just ignorance on my part