All modern Android apps adopt some architectural variation, MVVM and MVI being the most common. Regardless of the architecture, they all define a layer built around the Repository pattern. I recently dived into some refactoring in the repository layer and noticed a few gotchas. I started this as my own memo, but realized that some of the notes & code may help others avoid the same pitfalls.

Principles of Repository Layer

Why do we need any application architecture at all? Because it helps us manage complexity by breaking the app down into modules with distinct, cohesive responsibilities. The Repository layer is no exception. Here are the distinct responsibilities of this layer:

Data Mapping

A repository mediates between the domain and the data source. It maps raw data into domain models, so that the Domain layer only needs to deal with domain models for its business logic. Thanks to the repositories, all network data models are concealed from the Domain layer.

Take GraphQL as an example: the network data models are generated from the GQL schema, which is defined by the backend services. We don’t want those externally defined network models to leak into any layer beyond the repositories. So repositories handle the data mapping and hide the underlying network mechanism from the Domain layer.
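To make this concrete, here is a minimal sketch of such a mapper. `NetworkAccount` stands in for a schema-generated network model and `Account` for the app-owned domain model; both names and fields are assumptions for illustration, not from the original article:

```kotlin
// Hypothetical schema-generated network model (names are assumptions).
data class NetworkAccount(val account_id: String, val display_name: String?)

// Domain model owned by the app, independent of the network schema.
data class Account(val id: String, val name: String)

// The repository maps network models to domain models, so generated
// types never leak past this layer. It also normalizes awkward shapes,
// e.g. replacing a nullable field with a sensible default.
fun NetworkAccount.mapToAccountDomain(): Account =
    Account(id = account_id, name = display_name ?: "Unknown")
```

With this in place, only the repository ever touches `NetworkAccount`; everything above it works with `Account`.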


Caching

If a data cache is used, repositories should encapsulate the underlying caching mechanism, such as in-memory or on-disk caches.


Caches should support concurrent queries from the Domain layer. Modern Android apps use coroutines, which achieve asynchrony by suspending and resuming work via implicit Continuations. Older Android apps may use parallel processing with multiple threads. Either way, a Repository layer with a cache must be concurrency-safe.

Single Source-of-truth

Since the Repository layer takes on and encapsulates data mapping, caching & concurrency-safety, it should become the single source-of-truth for the corresponding domain models in the entire application. Watch out for other sources of the same domain models or sub-models elsewhere in the app: they are probably false sources and should be deprecated. This is especially important when concurrency is involved, since those unsafe sources can cause race conditions.

Sample Code for Concurrency-safe Cache

A common use case for the Repository pattern is to support a cache. To be concurrency-safe, we want to ensure that accesses (reads & writes) to the cache are synchronized. Here is a snippet illustrating an in-memory cache in a concurrent environment, with a few callouts:

private val mutex = Mutex() // #1
private var cachedAccount: Account? = null

suspend fun getAccount(): Account? {
    return cachedAccount ?: mutex.withLock { // #2, #3, #6
        cachedAccount ?: withContext(Dispatchers.IO) { // #4, #5, #6
            val networkModel = ... // perform network query and response parsing
            networkModel.mapToAccountDomain().also { cachedAccount = it }
        }
    }
}




  1. Mutex is the Kotlin coroutines idiom for a mutually exclusive lock.
  2. When the cache is available (assuming non-null means available), we return the cached value immediately.
  3. When the cache is unavailable, the mutex allows only the 1st coroutine to execute the withLock{} lambda. All other concurrent coroutines wait for access to the mutex lock.
  4. The first coroutine performs the network operation on an I/O thread, maps the network model to the domain model, saves it to the cache and returns it.
  5. After the first coroutine finishes its execution in the withLock{} lambda, the next coroutine waiting on the lock can enter withLock{}. It must check again whether the cache is already available as a result of the 1st coroutine. This is important: the waiting coroutines already passed the 1st cache check and were suspended at the mutex entry. If they don’t re-check the cache inside withLock{}, they would repeat the network query, causing inefficiency and possible errors. That is why we check cachedAccount a 2nd time; it serves a different purpose in a concurrent environment.
  6. Notice that for both cache checks, if the cache is available, no switch to the I/O dispatcher is needed; we return the cached value immediately on the same thread.

The callouts above are there to achieve both safety & efficiency. The same principles should be applicable to other caching, concurrency or networking models.
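The double-check pattern can be exercised end to end with a small self-contained sketch. The fake fetch, the `fetchCount` counter, and the `delay` standing in for network latency are all assumptions added for illustration; the point is that many concurrent callers trigger only one fetch:

```kotlin
import kotlinx.coroutines.*
import kotlinx.coroutines.sync.Mutex
import kotlinx.coroutines.sync.withLock

data class Account(val id: String)

class AccountRepository {
    private val mutex = Mutex()
    private var cachedAccount: Account? = null
    var fetchCount = 0 // counts how often the fake "network" is hit

    suspend fun getAccount(): Account =
        cachedAccount ?: mutex.withLock {
            // 2nd check: a coroutine that waited on the lock may find
            // the cache already populated by the 1st coroutine.
            cachedAccount ?: withContext(Dispatchers.IO) {
                delay(50) // simulate network latency
                fetchCount++
                Account("user-1").also { cachedAccount = it }
            }
        }
}

// Launch many concurrent callers; only one should perform the fetch.
suspend fun cacheDemo(): Int = coroutineScope {
    val repo = AccountRepository()
    List(10) { async { repo.getAccount() } }.awaitAll()
    repo.fetchCount
}
```

Without the second `cachedAccount` check inside `withLock{}`, the same run would report up to 10 fetches instead of 1.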

StateFlow from Repository

I also want to say a few words about Kotlin’s StateFlow, which has become popular thanks to a common Android use case: implementing the state-holder observable pattern as a substitute for the traditional Android LiveData.

StateFlow is not only useful in a ViewModel to expose observables to the UI layer; it can also be applied to the Repository layer. For example, when a user logs out, we want to observe the login state and clear all caches for that user. In that case, we can expose a flow for the login state, so that when the user logs in or out, the StateFlow emits a new state. Here is an over-simplified snippet:

private val _loginState = MutableStateFlow<Boolean>(false)
val loginStateFlow: StateFlow<Boolean> = _loginState.asStateFlow()

fun loggedIn() {
    _loginState.value = true
}

fun loggedOut() {
    _loginState.value = false
}

// Call-site
suspend fun observeLoginState() = coroutineScope { // #1
    repository.loginStateFlow.collect { ... } // #2, #3
}

There are a few attributes of StateFlow that we should be aware of, in order to apply it in the right situations and use it in the right way:

  1. Like other Flow APIs, the producer side (the flow builder) is not suspending and does not require a coroutine scope. But the subscriber (consumer) side, with terminal operations like collect{}, has to run in a coroutine scope.
  2. A StateFlow can be subscribed to by multiple consumers. StateFlow is “hot”, which means the producer side emits items proactively. This differs from many other Flow APIs, which are “cold”, meaning they are lazy and do not start emitting until a consumer subscribes with a terminal operation.
  3. StateFlow always replays the last item. That means when a consumer subscribes, it receives the most recent item first, followed by subsequent emissions.
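The replay behavior in particular is worth seeing in action. This is a small sketch (names and the `yield()`-based scheduling are assumptions for a single-threaded test, not production code) showing that a late subscriber first receives the value emitted before it subscribed:

```kotlin
import kotlinx.coroutines.*
import kotlinx.coroutines.flow.MutableStateFlow

// A late subscriber to a StateFlow immediately receives the latest
// value (replay of 1), then any subsequent emissions.
suspend fun replayDemo(): List<Boolean> = coroutineScope {
    val loginState = MutableStateFlow(false)
    loginState.value = true // emitted before anyone subscribes

    val received = mutableListOf<Boolean>()
    val job = launch {
        loginState.collect { received += it } // first gets the replayed `true`
    }
    yield() // let the collector receive the current value
    loginState.value = false
    yield() // let the collector receive the new emission
    job.cancel()
    received
}
```

The collector never sees the initial `false`: it was overwritten by `true` before the subscription, and StateFlow only keeps the latest value.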
Final Thought

Hope these notes & tips around the repository layer help others solve common problems and avoid pitfalls.

This article was originally published on December 11, 2022


