Introduction
Testing is a critical part of building reliable software. While many teams focus heavily on unit tests covering 70–80% of the codebase, integration testing is often overlooked. Yet, integration tests play a crucial role in validating how different parts of your app work together. In the classic testing pyramid, integration and end-to-end tests sit at the top, but they’re usually the first to be skipped.

Integration testing helps capture real user journeys and key flows across your app, making sure everything works smoothly without crashes or unexpected behavior. It goes beyond isolated units and ensures that your screens, ViewModels, repositories, and data sources interact correctly.
In this article, I’ll walk you through how to set up proper integration tests using one of my projects, the DogBreed App 🐶.
App Use Case: Understanding the DogBreed App
It’s important to understand how the DogBreed App works. Here’s a quick overview in five steps:
- The DogBreed App fetches and displays various dog breeds by querying a remote API. It’s built entirely with Kotlin, Jetpack Compose, Hilt, and other modern Android libraries.
- The app contains a single MainActivity that hosts multiple destinations.
- The main screen is a bottom navigation layout with two tabs: All Breeds and Favorites.
- Tapping on a breed opens a Breed Details screen, which displays the breed’s sub-breeds. From here, users can add or remove a breed from their favorites.
- Each sub-breed is also clickable. Tapping on one opens a gallery screen showing images specific to that sub-breed.
Video illustration:

To learn more, you can take a look at the README.
Building an app without integration tests is like testing car parts in isolation: the engine runs perfectly, the brakes work flawlessly, and the steering is smooth as butter, but nobody bothered to check whether the brakes actually stop the car while the engine is running.
Your unit tests might be pristine, covering every edge case in your repository logic. But what happens when that repository tries to talk to your database? When your ViewModel attempts to update the UI? When a user actually uses your app the way humans do?
Project setup
Before we start writing integration tests, let’s break down the relevant parts of the DogBreed App codebase. The app is fully built with Kotlin and Jetpack Compose, and follows a clean architecture approach with three main layers:
- UI layer — built with Compose
- Domain layer — includes ViewModels and UseCases
- Data layer — where things get interesting for integration testing
The app uses Hilt for dependency injection. For example, the DogBreedsRepositoryImpl depends on two key components:
- DogBreedApiService — fetches breed data from a remote API
- DogBreedDao — handles local caching via Room
class DogBreedsRepositoryImpl @Inject constructor(
    private val dogBreedDao: DogBreedDao,
    private val dogBreedService: DogBreedApiService
) : DogBreedsRepository
When writing integration tests, the goal is to ensure your app behaves almost exactly as it would in production; that’s the whole point. Integration tests launch your app just like it would on a real device or emulator, but instead of calling real APIs or hitting a real database, we swap in fake backend responses and, optionally, an in-memory Room database. The Room part isn’t strictly required, but it’s highly recommended if you’re using Room in your app.
The key here is: we don’t mock the internal layers of our app. We use the actual repositories, use cases, ViewModels, and UI code, just like in production. This allows us to validate that everything works end to end, while still keeping the test environment controlled and predictable.
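To make this concrete, here is a minimal, framework-free Kotlin sketch of the principle. All names here (BreedService, BreedRepository, and so on) are hypothetical, not the app's real classes: the point is that the repository logic runs unchanged, and only the service at the network edge is swapped for a fake.

```kotlin
// The boundary we fake in tests: in production this would call a remote API.
interface BreedService {
    fun fetchBreeds(): List<String>
}

// Fake edge: returns stubbed data instead of making a network call.
class FakeBreedService : BreedService {
    override fun fetchBreeds() = listOf("akita", "basenji")
}

// The "real" repository, used unchanged in both production and tests.
class BreedRepository(private val service: BreedService) {
    private val cache = mutableListOf<String>()

    fun breeds(): List<String> {
        if (cache.isEmpty()) cache += service.fetchBreeds()
        return cache
    }
}

fun main() {
    // Only the outermost dependency is faked; the caching logic is exercised for real.
    val repo = BreedRepository(FakeBreedService())
    println(repo.breeds())      // first call hits the (fake) service
    println(repo.breeds().size) // second call is served from the cache
}
```

Because the repository's real caching code runs, a bug in the cache logic would still surface in the test, which is exactly what mocking the repository itself would hide.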
Now, how do we do this? Walk with me.
Swapping Real Dependencies with Fake Ones
We first need to know how we provide these dependencies. In the DogBreed App, we use NetworkModule to provide the dependency like so:
@InstallIn(SingletonComponent::class)
@Module
object NetworkModule {

    // ...

    @Provides
    @Singleton
    fun provideDogBreedService(retrofit: Retrofit): DogBreedApiService {
        return retrofit.create(DogBreedApiService::class.java)
    }
}
And for the database:
@Module
@InstallIn(SingletonComponent::class)
object DatabaseModule {

    // ...

    @Provides
    @Singleton
    fun providesBreedsDao(database: DogBreedDatabase): DogBreedDao {
        return database.breedDao()
    }
}
To provide fake API responses, we need to replace the production module with a fake one. Fortunately, Hilt lets us do this easily with @TestInstallIn: we can create a fake implementation of DogBreedApiService that returns stubbed data.
Here’s DogBreedApiService:
interface DogBreedApiService {

    @GET("breeds/list/all")
    suspend fun getAllDogBreeds(): DogBreedsApiModel

    @GET("breed/{breedName}/images/random")
    suspend fun getBreedRandomImage(
        @Path("breedName") breedName: String
    ): BreedImageApiModel

    @GET("breed/{breedName}/{subBreedName}/images")
    suspend fun getSubBreedImages(
        @Path("breedName") breedName: String,
        @Path("subBreedName") subBreedName: String
    ): SubBreedImageApiModel
}
Here’s FakeDogBreedApiService we’ll use in the test:
class FakeDogBreedApiService : DogBreedApiService {

    override suspend fun getAllDogBreeds(): DogBreedsApiModel {
        return DogBreedsApiModel(
            breeds = mapOf(
                "australian" to listOf("cattle", "kelpie", "shepherd"),
                "affenpinscher" to emptyList(),
                "african" to emptyList(),
                "airedale" to emptyList(),
                "akita" to emptyList(),
                "appenzeller" to emptyList(),
                "basenji" to emptyList(),
            ),
        )
    }

    override suspend fun getBreedRandomImage(breedName: String): BreedImageApiModel {
        return BreedImageApiModel(
            imageUrl = "https://images.dog.ceo/breeds/" +
                "$breedName/fake_${breedName}_${System.currentTimeMillis()}.jpg",
        )
    }

    override suspend fun getSubBreedImages(
        breedName: String,
        subBreedName: String
    ): SubBreedImageApiModel {
        val fakeImages = (1..5).map {
            "https://images.dog.ceo/breeds/$breedName-$subBreedName/fake_${subBreedName}_$it.jpg"
        }
        return SubBreedImageApiModel(images = fakeImages)
    }
}
Take note that we’re still using the same DogBreedApiService interface, just providing a different implementation (FakeDogBreedApiService) for tests.
Now we tell Hilt to use this implementation during tests:
@Module
@TestInstallIn(
    components = [SingletonComponent::class],
    replaces = [NetworkModule::class]
)
object TestNetworkModule {

    @Provides
    @Singleton
    fun provideFakeDogBreedApiService(): DogBreedApiService {
        return FakeDogBreedApiService()
    }
}
The same goes for the database, where we swap in an in-memory Room database:
@Module
@TestInstallIn(
    components = [SingletonComponent::class],
    replaces = [DatabaseModule::class]
)
object TestDatabaseModule {

    @Provides
    @Singleton
    fun provideInMemoryDatabase(@ApplicationContext context: Context): DogBreedDatabase {
        return Room.inMemoryDatabaseBuilder(
            context,
            DogBreedDatabase::class.java
        ).build()
    }

    @Provides
    @Singleton
    fun provideTestBreedsDao(database: DogBreedDatabase): DogBreedDao {
        return database.breedDao()
    }
}
To use @TestInstallIn, make sure to include the Hilt Android testing dependency:
hilt-android-testing = { group = "com.google.dagger", name = "hilt-android-testing", version.ref = "hilt-version" }
androidTestImplementation(libs.hilt.android.testing)
Writing the test
Now that we understand how the app works and the layers involved, let’s write a real integration test. (Note that the app is already fully unit-tested; you can explore the tests in the repo.)
Now, let’s write a test for this scenario:
- When the app launches and the API responds successfully, the user should see a list of breeds.
- Clicking on a breed should open the breed details screen.
- The breed should initially not be marked as liked.
- The user taps the like button to favorite the breed.
- The like icon updates to reflect the new state.
- The user presses the back button to return to the list.
- Tapping the same breed again should show the details screen, and the breed should still appear as liked.
@Test
fun addDogBreedToFavorites() {
    composeTestRule.apply {
        waitUntilAtLeastOneExists(hasText("All Breeds"))
        onNodeWithText("affenpinscher").performClick()

        // breed details screen
        waitUntilAtLeastOneExists(hasText("Affenpinscher"))
        onNodeWithContentDescription("click to add breed as favorite")
            .assertIsDisplayed()

        // tap to add to favorites
        onNodeWithContentDescription("click to add breed as favorite")
            .performClick()
        onNodeWithContentDescription("click to remove breed as favorite")
            .assertIsDisplayed()

        // navigate back to the breeds list
        onNodeWithContentDescription("navUp").performClick()
        waitUntilAtLeastOneExists(hasText("All Breeds"))
        onNodeWithText("affenpinscher").performClick()

        // confirm the breed is still a favorite
        waitUntilAtLeastOneExists(hasText("Affenpinscher"))
        onNodeWithContentDescription("click to remove breed as favorite")
            .assertIsDisplayed()
    }
}
This scenario perfectly captures a user journey, showing how a real user would interact with the app, validating that the UI state and data persistence are correct across navigation.
Testing Failure Scenarios
We’ve tested the happy path; of course, we also want to test how the app behaves when something goes wrong, for instance, when the API fails.
To simulate this, we can introduce a flag in our FakeDogBreedApiService:
companion object {
    var allBreedAPIErrorOccurred = false
}

override suspend fun getAllDogBreeds(): DogBreedsApiModel {
    if (allBreedAPIErrorOccurred) throw Exception("Error occurred")
    return DogBreedsApiModel(
        breeds = mapOf(
            "australian" to listOf("cattle", "kelpie", "shepherd"),
            "affenpinscher" to emptyList(),
            "african" to emptyList(),
            "airedale" to emptyList(),
            "akita" to emptyList(),
            "appenzeller" to emptyList(),
            "basenji" to emptyList(),
        ),
    )
}
With that in place, testing the error scenario becomes simple:
@Test
fun canSeeBreedsScreen_errorOccurred() {
    FakeDogBreedApiService.allBreedAPIErrorOccurred = true
    composeTestRule.apply {
        waitUntilAtLeastOneExists(hasText("All Breeds"))
        onNodeWithTag("ErrorScreen").assertIsDisplayed()
    }
}
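The reason this flag needs care: it lives in a companion object, so it is shared global state across all tests. Here is a minimal, framework-free sketch of the pitfall (hypothetical names, not the app's real classes), showing why a stale flag would bleed into the next test unless it is reset:

```kotlin
// Sketch of a fake service with a static error flag, as in the article's
// FakeDogBreedApiService, but with hypothetical names and no Android deps.
class FakeService {
    companion object {
        // Shared across all instances and all tests until explicitly reset.
        var errorOccurred = false
    }

    fun getBreeds(): List<String> {
        if (errorOccurred) throw IllegalStateException("Error occurred")
        return listOf("akita", "basenji")
    }
}

fun main() {
    val service = FakeService()

    // "Test 1" flips the flag to exercise the error path.
    FakeService.errorOccurred = true
    println(runCatching { service.getBreeds() }.isFailure) // the call now fails

    // Without a reset, "Test 2" would inherit the flag; resetting it in a
    // @Before-style setup step restores the happy path.
    FakeService.errorOccurred = false
    println(service.getBreeds().size)
}
```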
And of course, since the error flag persists across tests, you’ll want to reset it before each test.

To run integration tests with Hilt and Jetpack Compose, we need to set up two test rules:
@get:Rule(order = 0)
val hiltRule = HiltAndroidRule(this)

@get:Rule(order = 1)
val composeTestRule = createAndroidComposeRule<MainActivity>()
The Hilt rule initializes Hilt’s dependency injection before the test starts. It must run first, which is why we give it order = 0.
The Compose rule launches the specified activity (MainActivity in our case) and gives us access to testing APIs like onNodeWithText(...) and assertIsDisplayed().
class NavTest {

    @get:Rule(order = 0)
    val hiltRule = HiltAndroidRule(this)

    @get:Rule(order = 1)
    val composeTestRule = createAndroidComposeRule<MainActivity>()

    @Before
    fun setUp() {
        hiltRule.inject()
        FakeDogBreedApiService.allBreedAPIErrorOccurred = false
    }

    @Test
    fun addDogBreedToFavorites() {
        // ...
    }
}
Robot Pattern for Compose Tests
To make our tests more readable and maintainable, we can introduce the robot pattern. This pattern helps organize UI test code by abstracting away low-level UI interactions like tapping buttons, checking UI states, or entering text into reusable classes. This leads to cleaner, intention-revealing tests.
Let’s create a robot for the All Breeds screen:
fun ComposeTestRule.allBreedsRobot(function: AllBreedsRobot.() -> Unit) =
    function(AllBreedsRobot(composeTestRule = this))

@OptIn(ExperimentalTestApi::class)
class AllBreedsRobot(private val composeTestRule: ComposeTestRule) :
    ComposeTestRule by composeTestRule {

    init {
        waitUntilAtLeastOneExists(hasText("All Breeds"))
    }

    fun clickOnBreed(breed: String) {
        onNodeWithText(breed).performClick()
    }

    fun errorShown() {
        onNodeWithTag("ErrorScreen").assertIsDisplayed()
    }
}
And another for the Breed Details screen:
fun ComposeTestRule.breedDetailsRobot(
    breedName: String,
    function: BreedDetailsRobot.() -> Unit
) = function(BreedDetailsRobot(breedName = breedName, composeTestRule = this))

@OptIn(ExperimentalTestApi::class)
class BreedDetailsRobot(
    breedName: String,
    private val composeTestRule: ComposeTestRule
) : ComposeTestRule by composeTestRule {

    init {
        waitUntilAtLeastOneExists(hasText(breedName))
    }

    fun addToFavouritesDisplayed() {
        onNodeWithContentDescription("click to add breed as favorite")
            .assertIsDisplayed()
    }

    fun clicksAddFavourites() {
        onNodeWithContentDescription("click to add breed as favorite")
            .performClick()
    }

    fun removeFromFavouritesDisplayed() {
        onNodeWithContentDescription("click to remove breed as favorite")
            .assertIsDisplayed()
    }

    fun clicksRemoveFavourites() {
        onNodeWithContentDescription("click to remove breed as favorite")
            .performClick()
    }

    fun clicksASubBreed() {
        onNodeWithText("shepherd").performClick()
    }

    fun pressBack() {
        onNodeWithContentDescription("navUp").performClick()
    }

    fun seesSubBreed() {
        onNodeWithText("Sub breeds").assertIsDisplayed()
    }

    fun seesEmptySubBreedText() {
        onNodeWithText("No sub breeds listed").assertIsDisplayed()
    }
}
We use the Robot Pattern to keep tests clean and focused. Each screen gets a robot class that handles UI actions, while extension functions with lambdas let us use them in a scoped, readable way. This makes tests more expressive, less repetitive, and easier to maintain.
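If the extension-function-plus-receiver-lambda trick looks unfamiliar, here is a minimal, framework-free sketch of the same mechanism (all names hypothetical): the lambda's receiver is the robot, so calls inside the block resolve against that robot without any qualification.

```kotlin
// A stand-in for a screen under test (hypothetical; no Compose involved).
class Screen(val title: String)

// The robot records interactions so we can observe what happened.
class ScreenRobot(private val screen: Screen) {
    val log = mutableListOf<String>()

    fun seesTitle() { log += screen.title }
    fun clicks(item: String) { log += "click:$item" }
}

// Same shape as allBreedsRobot/breedDetailsRobot: an extension function
// taking a lambda with ScreenRobot as its receiver.
fun Screen.robot(block: ScreenRobot.() -> Unit): ScreenRobot =
    ScreenRobot(this).apply(block)

fun main() {
    val robot = Screen("All Breeds").robot {
        seesTitle()             // resolves on the ScreenRobot receiver
        clicks("affenpinscher")
    }
    println(robot.log) // [All Breeds, click:affenpinscher]
}
```

The scoped block reads like a script of user actions, which is exactly the readability win the robot pattern is after.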
Now let’s refactor addDogBreedToFavorites:
@Test
fun addDogBreedToFavorites() {
    composeTestRule.apply {
        allBreedsRobot {
            clickOnBreed("affenpinscher")
        }
        breedDetailsRobot("Affenpinscher") {
            addToFavouritesDisplayed()
            clicksAddFavourites()
            removeFromFavouritesDisplayed()
            pressBack()
        }
        allBreedsRobot {
            clickOnBreed("affenpinscher")
        }
        breedDetailsRobot("Affenpinscher") {
            removeFromFavouritesDisplayed()
        }
    }
}
With this version, you can immediately understand what the test is trying to verify, without getting distracted by the boilerplate of UI interaction details.
To see the full test, you can check here
A screen recording of how the test works:

🚀 Why Integration Testing Matters
Unit tests are great for isolated logic, but they don’t tell you how your app behaves end-to-end. Integration tests fill that gap by verifying that components like ViewModels, repositories, UI, and DB work together as expected.
They catch real issues like:
- State not persisting across screens — e.g., a user favorites a breed, navigates back, and the favorites icon unexpectedly resets because the state wasn’t stored in the DB correctly.
- Missing navigation arguments causing crashes — e.g., tapping on a breed in AllBreedsScreen fails because BreedDetailsScreen expects a breedName argument that isn’t passed, causing MainActivity to crash on start.
- UI not syncing with actual data — e.g., the favorites list still shows removed breeds because the UI layer isn’t observing the updated Room database state.
The Trade-offs
Integration tests take longer to run and require more setup, such as fake APIs, in-memory databases, and dependency injection, depending on the size of your codebase.
They can also introduce cost and complexity when integrated into your CI pipeline, especially with tools like Firebase Test Lab, where test runs may consume time quotas or incur actual charges.
If not well-isolated, they can also become flaky and harder to maintain over time.
Final Thoughts
Integration tests boost the integrity of your app and your confidence that it won’t break where it matters the most.
This article was previously published on proandroiddev.com.



