
If you’ve ever felt confused about when coroutines actually run concurrently versus asynchronously, or why some operations seem to block despite using suspend functions, you’re not alone. Today, we’ll build some intuition around these concepts with practical examples that you can run and see for yourself.
Quick note: While coroutines technically use thread pools rather than individual threads, we’ll use “threads” as a mental model here to keep things simpler.
Setting Up Our Experiment
Let’s start by creating two fundamental building blocks that will help us visualize the differences between blocking and non-blocking work.
Simulating Blocking Work
First, we need something that represents CPU-intensive work, the kind that keeps a thread busy:
suspend fun workOnBlockingTask(time: Long) {
    Thread.sleep(time)
    yield()
}
Here’s what’s happening: Thread.sleep() simulates real computational work (think sorting a massive list or processing images). The thread is genuinely busy during this time and can’t do anything else.
The yield() call is crucial: it creates a suspension point that allows other coroutines on the same thread to run. Without it, our coroutine would hog the thread until completion, defeating the purpose of cooperative multitasking.
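To make that concrete, here's a minimal sketch (assuming kotlinx-coroutines is on the classpath, with shortened sleeps and hypothetical labels): without yield(), the first coroutine finishes both of its steps before the second one even starts; with yield() after each step, the two take turns.

```kotlin
import kotlinx.coroutines.*

// Without a suspension point, coroutine "A" monopolizes the single
// runBlocking thread and runs to completion before "B" starts.
fun runWithoutYield(): List<String> {
    val events = mutableListOf<String>()
    runBlocking {
        launch {
            Thread.sleep(20); events += "A1"   // busy work, never suspends
            Thread.sleep(20); events += "A2"
        }
        launch {
            Thread.sleep(20); events += "B1"
            Thread.sleep(20); events += "B2"
        }
    }
    return events
}

// With yield() after each step, the event loop re-queues the current
// coroutine and lets its sibling run, so the steps interleave.
fun runWithYield(): List<String> {
    val events = mutableListOf<String>()
    runBlocking {
        launch {
            Thread.sleep(20); events += "A1"; yield()
            Thread.sleep(20); events += "A2"; yield()
        }
        launch {
            Thread.sleep(20); events += "B1"; yield()
            Thread.sleep(20); events += "B2"; yield()
        }
    }
    return events
}

fun main() {
    println(runWithoutYield())  // [A1, A2, B1, B2]
    println(runWithYield())     // [A1, B1, A2, B2]
}
```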
Simulating Non-blocking Work
Now for something that represents I/O operations — work that involves waiting rather than computing:
suspend fun workOnNonBlockingTask(time: Long) {
    delay(time)
}
This is fundamentally different. delay() suspends the coroutine and frees up the thread entirely. Think of it like making an API call: once you send the request, you’re just waiting for the response. Your thread doesn’t need to sit there tapping its fingers; it can go help other coroutines while waiting.
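A quick way to convince yourself that delay() really frees the thread is to overlap two delays on runBlocking's single thread (a sketch, assuming kotlinx-coroutines; exact timings will vary a little by machine):

```kotlin
import kotlinx.coroutines.*
import kotlin.system.measureTimeMillis

// Two 100 ms delays launched on the same thread overlap almost
// completely, so total wall time is ~100 ms rather than 200 ms.
fun overlappingDelays(): Long = measureTimeMillis {
    runBlocking {
        launch { delay(100) }
        launch { delay(100) }
    }
}

fun main() {
    println("Elapsed: ${overlappingDelays()} ms")  // roughly 100 ms
}
```

If delay() kept the thread busy, the two launches would serialize and the total would be ~200 ms instead.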
Building Our Test Actions
Now let’s create four actions that will help us see these differences in practice:
// blocking work
suspend fun actionA() {
    println("A >> STARTED")
    workOnBlockingTask(500)
    println("A >> Step 1 done")
    workOnBlockingTask(300)
    println("A >> Step 2 done")
    println("A >> DONE")
}

suspend fun actionB() {
    println("B >> Started")
    workOnBlockingTask(200)
    println("B >> Step 1 done")
    workOnBlockingTask(400)
    println("B >> Step 2 done")
    println("B >> DONE")
}

// non-blocking work
suspend fun actionC() {
    println("C >> Started")
    workOnNonBlockingTask(500)
    println("C >> Step 1 done")
    workOnNonBlockingTask(300)
    println("C >> Step 2 done")
    println("C >> DONE")
}

suspend fun actionD() {
    println("D >> Started")
    workOnNonBlockingTask(200)
    println("D >> Step 1 done")
    workOnNonBlockingTask(400)
    println("D >> Step 2 done")
    println("D >> DONE")
}
Actions A and B simulate CPU-intensive work, while C and D simulate I/O operations. The timing is intentionally different so we can see how they interleave.
Experiment 1: Sequential Execution (The Baseline)
Let’s start with the most basic approach:
// blocking work
val blockingTime = measureTimeMillis {
    runBlocking {
        actionA()
        actionB()
    }
}
println("Blocking operations: $blockingTime ms")

// non-blocking work
val nonBlockingTime = measureTimeMillis {
    runBlocking {
        actionC()
        actionD()
    }
}
println("Non-blocking operations: $nonBlockingTime ms")
Results:
// blocking work
A >> STARTED
A >> Step 1 done
A >> Step 2 done
A >> DONE
B >> Started
B >> Step 1 done
B >> Step 2 done
B >> DONE
Blocking operations: 1441 ms

// non-blocking work
C >> Started
C >> Step 1 done
C >> Step 2 done
C >> DONE
D >> Started
D >> Step 1 done
D >> Step 2 done
D >> DONE
Non-blocking operations: 1419 ms
No surprises here: everything runs sequentially, taking about 1400ms total for each group. This is our baseline.
Experiment 2: Concurrent Execution (The Plot Thickens)
Now let’s see what happens when we use launch to run things concurrently:
// blocking work
val blockingTime = measureTimeMillis {
    runBlocking {
        launch { actionA() }
        launch { actionB() }
    }
}
println("Blocking operations: $blockingTime ms")

// non-blocking work
val nonBlockingTime = measureTimeMillis {
    runBlocking {
        launch { actionC() }
        launch { actionD() }
    }
}
println("Non-blocking operations: $nonBlockingTime ms")
Results:
// blocking work
A >> STARTED
B >> Started
A >> Step 1 done
B >> Step 1 done
A >> Step 2 done
A >> DONE
B >> Step 2 done
B >> DONE
Blocking operations: 1442 ms

// non-blocking work
C >> Started
D >> Started
D >> Step 1 done
C >> Step 1 done
D >> Step 2 done
D >> DONE
C >> Step 2 done
C >> DONE
Non-blocking operations: 815 ms
This is where it gets interesting!
The blocking operations still take the same time (~1400ms) because even though we’re “concurrent,” we’re really just doing cooperative multitasking on a single thread. Action A runs until it hits yield(), then B gets a chance, then back to A, and so on. It’s more like polite turn-taking than true parallelism.
But look at the non-blocking operations: we’ve nearly halved the execution time! Here’s what’s actually happening step by step:
- Time 0 ms: C starts and immediately hits delay(500), suspending for 500 ms
- Time 0 ms: since C is suspended, D starts immediately and hits delay(200), suspending for 200 ms
- Time 0–200 ms: both coroutines are suspended; the thread is idle, waiting for the first timer to fire
- Time 200 ms: D wakes up, prints "Step 1 done", then hits delay(400), suspending for another 400 ms
- Time 200–500 ms: both coroutines suspended again, thread idle
- Time 500 ms: C wakes up, prints "Step 1 done", then hits delay(300), suspending for 300 ms
- Time 500–600 ms: both suspended, thread idle
- Time 600 ms: D wakes up (200 + 400), prints "Step 2 done" and completes
- Time 800 ms: C wakes up (500 + 300), prints "Step 2 done" and completes
The key insight is that both coroutines start their delays at nearly the same time, so their timers run in parallel even though we’re on a single thread. The thread doesn’t need to actively wait: it just sets up the timers and handles the callbacks when they fire. This is the beauty of non-blocking I/O: the “waiting” happens outside of our thread’s execution time.
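There's a flip side to this worth sketching: a timer can fire while the thread is busy, but the suspended coroutine can only resume once the thread is free again. In this hypothetical mix of blocking and non-blocking work (assuming kotlinx-coroutines), the delay's timer fires at 50 ms, yet the coroutine resumes only at around 150 ms:

```kotlin
import kotlinx.coroutines.*
import kotlin.system.measureTimeMillis

// The first coroutine's 50 ms timer fires while the second coroutine
// is still blocking the only thread, so it resumes ~100 ms late.
fun blockedTimer(): Long = measureTimeMillis {
    runBlocking {
        launch { delay(50) }          // timer fires at ~50 ms...
        launch { Thread.sleep(150) }  // ...but the thread is busy until ~150 ms
    }
}

fun main() {
    println("Elapsed: ${blockedTimer()} ms")  // roughly 150 ms, not 50 + 150 = 200 ms
}
```

This is why a single Thread.sleep() (or any long CPU-bound stretch) on a shared dispatcher can delay every other coroutine scheduled on it.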
Experiment 3: Asynchronous Execution (True Parallelism)
Now let’s add a dispatcher to get true parallelism:
// blocking work
val blockingTime = measureTimeMillis {
    runBlocking(Dispatchers.Default) {
        launch { actionA() }
        launch { actionB() }
    }
}
println("Blocking operations: $blockingTime ms")

// non-blocking work
val nonBlockingTime = measureTimeMillis {
    runBlocking(Dispatchers.Default) {
        launch { actionC() }
        launch { actionD() }
    }
}
println("Non-blocking operations: $nonBlockingTime ms")
Results:
// blocking work
A >> STARTED
B >> Started
B >> Step 1 done
A >> Step 1 done
B >> Step 2 done
B >> DONE
A >> Step 2 done
A >> DONE
Blocking operations: 834 ms

// non-blocking work
C >> Started
D >> Started
D >> Step 1 done
C >> Step 1 done
D >> Step 2 done
D >> DONE
C >> Step 2 done
C >> DONE
Non-blocking operations: 814 ms
Now we’re talking! Both scenarios run in about 800ms because we have true parallelism. Multiple threads are working simultaneously, whether the work is blocking or non-blocking.
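To confirm that the speedup for blocking work really comes from the extra worker threads, you can restrict Dispatchers.Default to a single worker with limitedParallelism (available since kotlinx-coroutines 1.6, still marked experimental in some versions); the blocking timings then collapse back to the sequential numbers. A scaled-down sketch:

```kotlin
import kotlinx.coroutines.*
import kotlin.system.measureTimeMillis

// Times two 100 ms blocking tasks launched on the given dispatcher.
fun timeBlockingWork(dispatcher: CoroutineDispatcher): Long = measureTimeMillis {
    runBlocking(dispatcher) {
        launch { Thread.sleep(100) }
        launch { Thread.sleep(100) }
    }
}

fun main() {
    // Multiple workers: the sleeps overlap, ~100 ms total.
    println(timeBlockingWork(Dispatchers.Default))
    // One worker: the sleeps serialize again, ~200 ms total.
    println(timeBlockingWork(Dispatchers.Default.limitedParallelism(1)))
}
```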
Verifying What’s Actually Happening
Let’s add some debugging to see what’s going on under the hood. Add these lines to the beginning of each action (swapping the “A” prefix for the action’s own letter):
println("A >> Job: ${coroutineContext[Job]}")
println("A >> Dispatcher: ${coroutineContext[ContinuationInterceptor]}")
println("A >> Thread: ${Thread.currentThread().name}")
Sequential Execution Results:
// blocking work
A >> Job: BlockingCoroutine{Active}@57855c9a
A >> Dispatcher: BlockingEventLoop@3b084709
A >> Thread: main
[...]
B >> Job: BlockingCoroutine{Active}@57855c9a
B >> Dispatcher: BlockingEventLoop@3b084709
B >> Thread: main
// non-blocking work
C >> Job: BlockingCoroutine{Active}@184f6be2
C >> Dispatcher: BlockingEventLoop@56aac163
C >> Thread: main
[...]
D >> Job: BlockingCoroutine{Active}@184f6be2
D >> Dispatcher: BlockingEventLoop@56aac163
D >> Thread: main
Everything runs on the main thread, sharing the same job and the same dispatcher within each runBlocking scope.
Concurrent Execution Results:
// blocking work
A >> Job: StandaloneCoroutine{Active}@3712b94
A >> Dispatcher: BlockingEventLoop@2833cc44
A >> Thread: main
[...]
B >> Job: StandaloneCoroutine{Active}@536aaa8d
B >> Dispatcher: BlockingEventLoop@2833cc44
B >> Thread: main
// non-blocking work
C >> Job: StandaloneCoroutine{Active}@2bbf4b8b
C >> Dispatcher: BlockingEventLoop@30a3107a
C >> Thread: main
[...]
D >> Job: StandaloneCoroutine{Active}@6b57696f
D >> Dispatcher: BlockingEventLoop@30a3107a
D >> Thread: main
Still on the main thread, but now each launch creates its own job while sharing the dispatcher.
Asynchronous Execution Results:
// blocking work
A >> Job: StandaloneCoroutine{Active}@b9bf36d
A >> Dispatcher: Dispatchers.Default
A >> Thread: DefaultDispatcher-worker-2
[...]
B >> Job: StandaloneCoroutine{Active}@44799410
B >> Dispatcher: Dispatchers.Default
B >> Thread: DefaultDispatcher-worker-3
// non-blocking work
C >> Job: StandaloneCoroutine{Active}@656b6164
C >> Dispatcher: Dispatchers.Default
C >> Thread: DefaultDispatcher-worker-3
[...]
D >> Job: StandaloneCoroutine{Active}@79e8c7c0
D >> Dispatcher: Dispatchers.Default
D >> Thread: DefaultDispatcher-worker-2
Now we see different worker threads being used, confirming true parallelism.
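As a side note, kotlinx-coroutines ships a debug mode (the kotlinx.coroutines.debug system property, also settable via the -Dkotlinx.coroutines.debug JVM flag) that appends each coroutine’s name and id to the thread name, which makes this kind of tracing much easier. A small sketch; the property has to be set before any coroutine machinery is loaded:

```kotlin
import kotlinx.coroutines.*

// With debug mode on, thread names gain an "@name#id" suffix,
// e.g. "main @demo#2", so coroutines are distinguishable in logs.
fun debugThreadName(): String {
    System.setProperty("kotlinx.coroutines.debug", "on")
    var name = ""
    runBlocking {
        launch(CoroutineName("demo")) {
            name = Thread.currentThread().name
        }
    }
    return name
}

fun main() {
    println(debugThreadName())  // e.g. "main @demo#2"
}
```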
The Key Takeaways
I hope this article helps clarify some of the confusion around these concepts. Here’s what we’ve learned:
Blocking vs Non-blocking isn’t just about using suspend functions: it’s about whether your work actually frees up the thread or keeps it busy.
Concurrent vs Asynchronous is about whether you’re sharing a single thread cooperatively or using multiple threads simultaneously.
But honestly, the best way to truly understand these concepts is to run the code yourself. Try modifying the timing values, experiment with different dispatchers, add more actions, or mix blocking and non-blocking work within the same coroutine. Play around with it until the behavior becomes intuitive.
The debugging output is particularly enlightening: seeing those thread names and job instances change as you modify the code will give you a much deeper understanding than any article can provide.
Have you experienced similar “aha moments” with coroutines? Or do you have other tricky concurrency scenarios you’d like to explore? Let me know in the comments: I love diving into these kinds of practical problems with fellow engineers.
If you need quick reminders about coroutine syntax and patterns, check out my Advanced Kotlin Coroutine Cheat Sheet for Android Engineers.
This article was previously published on proandroiddev.com.



