Documentation Index
Fetch the complete documentation index at: https://mintlify.com/AmolPardeshi99/android-performance-skills/llms.txt
Use this file to discover all available pages before exploring further.
Incorrect coroutine dispatcher usage is one of the most common causes of ANRs and jank in Android apps. Misusing Dispatchers.Default for I/O, calling runBlocking on the main thread, or leaking GlobalScope coroutines can all block the UI thread or cause deadlocks.
Dispatcher selection
Choose your dispatcher based on the type of work, not convenience. Using the wrong dispatcher can exhaust thread pools or block the UI thread.

| Work type | Dispatcher |
|---|---|
| Network (blocking HTTP), file read/write, SQLite, SharedPreferences | Dispatchers.IO |
| CPU-intensive: sorting, mapping large collections, crypto, image decoding | Dispatchers.Default |
| UI update, View access, Compose state mutation | Dispatchers.Main (or Dispatchers.Main.immediate) |
| Single-threaded sequential write of shared mutable state | Dispatchers.IO.limitedParallelism(1) |
| Room (suspend functions) | None needed; Room runs suspend DAO calls on its own background executors |
| Retrofit (suspend functions) | None needed; Retrofit suspend calls run the request on OkHttp's background threads |
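The limitedParallelism(1) row above can be sketched as follows — a minimal, self-contained example (the `Counter`/`hammer` names are illustrative, not from any library) showing that a single-parallelism view of Dispatchers.IO serializes writes to shared mutable state without an explicit lock:

```kotlin
import kotlinx.coroutines.*

// A view of Dispatchers.IO restricted to one task at a time.
val writeDispatcher = Dispatchers.IO.limitedParallelism(1)

class Counter {
    var value = 0
        private set

    // Every increment is dispatched to the single-parallelism view,
    // so no two increments ever run concurrently.
    suspend fun increment() = withContext(writeDispatcher) { value++ }
}

// Launches `times` concurrent callers and waits for all of them.
suspend fun hammer(counter: Counter, times: Int) = coroutineScope {
    repeat(times) { launch { counter.increment() } }
}

fun main() = runBlocking {
    val counter = Counter()
    hammer(counter, 1_000)
    println(counter.value) // 1000 — no lost updates despite 1000 concurrent callers
}
```

The same serialization could also be done with a Mutex; the dispatcher approach keeps the guarantee in one place instead of at every call site.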
I/O work on Dispatchers.Default — thread pool starvation
Dispatchers.Default has a fixed thread pool sized to CPU core count (minimum 2). Blocking those threads with I/O (network, disk, database) starves CPU-bound work and can cause thread starvation deadlocks.
```kotlin
// ❌ BAD: blocking file I/O on Default — exhausts CPU thread pool
suspend fun loadConfig(): Config = withContext(Dispatchers.Default) {
    File("/data/config.json").readText().let { gson.fromJson(it, Config::class.java) }
}

// ✅ CORRECT: I/O on Dispatchers.IO — elastic pool designed for blocking calls
suspend fun loadConfig(): Config = withContext(Dispatchers.IO) {
    File("/data/config.json").readText().let { gson.fromJson(it, Config::class.java) }
}

// ✅ CPU transform on Dispatchers.Default — correct use
suspend fun transformItems(raw: List<RawItem>): List<DomainItem> =
    withContext(Dispatchers.Default) { raw.map { it.toDomain() } }
```
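A quick way to see where work actually lands: in kotlinx.coroutines, Dispatchers.IO and Dispatchers.Default share one worker pool, and IO elastically allows more of those workers to block. The worker-name prefix checked below is an implementation detail of current kotlinx.coroutines versions, used here only to make the dispatch visible:

```kotlin
import kotlinx.coroutines.*

// Returns the name of the thread a given dispatcher ran us on.
suspend fun workerNameOn(dispatcher: CoroutineDispatcher): String =
    withContext(dispatcher) { Thread.currentThread().name }

fun main() = runBlocking {
    // Both print names like "DefaultDispatcher-worker-N": shared pool,
    // different blocking policies.
    println(workerNameOn(Dispatchers.IO))
    println(workerNameOn(Dispatchers.Default))
}
```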
JSON parsing on the main thread
API response post-processing — gson.fromJson, moshi.adapter().fromJson, proto.parseFrom — is CPU-bound work. It does not look like blocking code, but on large payloads it can take 10–500 ms. When it runs on the main thread (for example, inside a viewModelScope.launch {} without a withContext switch), it contributes directly to ANRs.
```kotlin
// ❌ BAD: Retrofit suspend function delivers on Main thread;
// post-processing (mapping) therefore runs on Main
class UserViewModel(private val api: UserApi) : ViewModel() {
    fun loadUser(id: String) {
        viewModelScope.launch { // defaults to Dispatchers.Main in viewModelScope
            val dto = api.fetchUser(id) // main-safe: Retrofit runs the request on background threads
            // The following mapping runs back on Main because we're in Dispatchers.Main context
            val user = dto.toUser() // ← Main thread CPU work; risky for large DTOs
            _uiState.value = UiState.Success(user)
        }
    }
}
```
```kotlin
// ✅ CORRECT: explicit withContext for any transformation after suspension
class UserViewModel(private val api: UserApi) : ViewModel() {
    fun loadUser(id: String) {
        viewModelScope.launch {
            val user = withContext(Dispatchers.IO) {
                val dto = api.fetchUser(id) // network on IO
                dto.toUser()                // mapping stays on IO
            }
            _uiState.value = UiState.Success(user) // back on Main
        }
    }
}
```
```kotlin
// ✅ BEST: push all I/O + transformation to the Repository layer
class UserRepository(private val api: UserApi) {
    // This function is main-safe; callers don't need to add withContext
    suspend fun getUser(id: String): User = withContext(Dispatchers.IO) {
        api.fetchUser(id).toUser()
    }
}
```
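A common refinement of the repository pattern above is to inject the dispatcher instead of hard-coding it, so tests can substitute a deterministic one. This is a sketch with stand-in types (`UserApi`, `UserDto`, `User`, and `toUser` are hypothetical, implied by the examples above):

```kotlin
import kotlinx.coroutines.*

// Stand-ins for the types the examples above assume.
data class UserDto(val id: String, val name: String)
data class User(val id: String, val name: String)
fun UserDto.toUser() = User(id, name)

interface UserApi { suspend fun fetchUser(id: String): UserDto }

class UserRepository(
    private val api: UserApi,
    // Dispatchers.IO in production; tests can pass a test dispatcher.
    private val ioDispatcher: CoroutineDispatcher = Dispatchers.IO,
) {
    // Still main-safe: callers never need their own withContext.
    suspend fun getUser(id: String): User = withContext(ioDispatcher) {
        api.fetchUser(id).toUser()
    }
}

fun main() = runBlocking {
    val fakeApi = object : UserApi {
        override suspend fun fetchUser(id: String) = UserDto(id, "Ada")
    }
    println(UserRepository(fakeApi).getUser("42")) // User(id=42, name=Ada)
}
```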
RxJava observeOn placement — operations after switch run on Main
Every map, flatMap, or filter placed after observeOn(mainThread()) runs on the main thread. observeOn(mainThread()) must be the last operator before subscribe.
```kotlin
// ❌ BAD: map{} is placed after observeOn(mainThread()) → JSON parsing on Main
api.fetchOrderRx()
    .subscribeOn(Schedulers.io())
    .observeOn(AndroidSchedulers.mainThread()) // ← switch to Main happens here
    .map { json -> gson.fromJson(json, Order::class.java) } // ← runs ON MAIN
    .subscribe { render(it) }

// ✅ CORRECT: all transformations before observeOn; only terminal UI work after
api.fetchOrderRx()
    .subscribeOn(Schedulers.io())
    .map { json -> gson.fromJson(json, Order::class.java) } // still on IO thread
    .observeOn(AndroidSchedulers.mainThread())
    .subscribe { render(it) } // UI update on Main
```
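The placement rule has a direct Flow analogue: flowOn affects only the operators upstream of it, so transformations placed above flowOn never touch the collector's thread. A minimal sketch with stand-in string "parsing" (the function name and payload are illustrative):

```kotlin
import kotlinx.coroutines.*
import kotlinx.coroutines.flow.*

// flowOn moves everything UPSTREAM of it to the given dispatcher;
// collection stays on the caller's context (Main, in a UI scope).
fun orderNames(): Flow<String> =
    flow { emit("{\"name\":\"widget\"}") }              // pretend network emission
        .map { json ->                                  // "parse" while still upstream
            json.substringAfter("\"name\":\"").substringBefore("\"")
        }
        .flowOn(Dispatchers.IO)                         // emission + map run on IO

fun main() = runBlocking {
    orderNames().collect { println(it) }                // widget — collected on the caller
}
```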
runBlocking on the main thread
Never call runBlocking in Activity, Fragment, ViewModel, or Service code. runBlocking blocks the calling thread until the coroutine finishes; on the main thread this stalls the entire UI pipeline, and the system raises an ANR once input dispatch has been blocked for about 5 seconds.
runBlocking is only appropriate in unit test entry points and pure CLI main() functions.
```kotlin
// ❌ ANR: runBlocking in Activity lifecycle callback
class SplashActivity : AppCompatActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        // Blocks Main thread — system cannot process input events — ANR in 5s
        val config = runBlocking { configRepository.fetch() }
    }
}

// ❌ DEADLOCK: inner coroutine needs Main; Main is blocked by runBlocking
fun getDataSync(): Data = runBlocking {
    withContext(Dispatchers.Main) { // tries to post to Main — which is blocked by runBlocking → deadlock
        heavyWork()
    }
}

// ✅ CORRECT: always use lifecycle-scoped coroutines, never block Main
class SplashActivity : AppCompatActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        lifecycleScope.launch {
            val config = configRepository.fetch()
            applyConfig(config)
        }
    }
}
```
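For contrast, this is the one shape where runBlocking is appropriate: a CLI or test entry point, where blocking the calling thread is the whole point. `fetchConfig` is a hypothetical stand-in for configRepository.fetch():

```kotlin
import kotlinx.coroutines.*

// Stand-in suspend function simulating an I/O-backed config fetch.
suspend fun fetchConfig(): String {
    delay(50)               // simulated network/disk latency
    return "dark-mode=on"
}

fun main() = runBlocking {  // entry point: no UI thread to block
    println(fetchConfig())  // dark-mode=on
}
```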
GlobalScope — uncontrolled lifetime and hidden leaks
Never use GlobalScope.launch in production Android code. Coroutines launched on GlobalScope escape structured concurrency: the caller cannot cancel them, their failures bypass the caller's error handling, and a hung coroutine lives for the entire process lifetime.
```kotlin
// ❌ BAD: GlobalScope escapes structured concurrency
fun syncData() {
    GlobalScope.launch(Dispatchers.IO) { repo.sync() }
    // Cannot cancel this from the caller; failures never reach the caller;
    // if repo.sync() hangs indefinitely, this coroutine lives forever
}

// ✅ CORRECT: inject a lifecycle-bound or application-scoped CoroutineScope
class SyncUseCase @Inject constructor(
    @ApplicationScope private val scope: CoroutineScope // DI-provided, cancelled at app death
) {
    fun syncData(): Job = scope.launch(Dispatchers.IO) { repo.sync() }
}

// Application-scope definition (DI module)
@Provides @Singleton @ApplicationScope
fun provideApplicationScope(): CoroutineScope =
    CoroutineScope(SupervisorJob() + Dispatchers.Default)
```
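The payoff of owning the scope can be shown in a few lines: cancelling the scope cancels in-flight work, which GlobalScope never allows from the caller. `syncForever` is a hypothetical stand-in for a repo.sync() that hangs:

```kotlin
import kotlinx.coroutines.*

// A sync call that never completes on its own.
suspend fun syncForever() { delay(Long.MAX_VALUE) }

fun main() = runBlocking {
    val appScope = CoroutineScope(SupervisorJob() + Dispatchers.Default)
    val job = appScope.launch { syncForever() }
    delay(100)
    appScope.cancel()        // teardown: every child of the scope is cancelled
    job.join()
    println(job.isCancelled) // true — the hung work did not outlive its scope
}
```

Had the same work been launched on GlobalScope, there would be no handle through which teardown could reach it.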
Excessive withContext switching
Each withContext call involves rescheduling the coroutine on a different thread pool. While cheaper than thread context switches, they accumulate on hot paths. Batch I/O work into a single withContext block rather than ping-ponging between dispatchers.
```kotlin
// ❌ BAD: ping-pong between dispatchers — N context switches for N steps
suspend fun processOrder(id: String) {
    withContext(Dispatchers.Main) { showLoading() }
    val order = withContext(Dispatchers.IO) { api.fetch(id) }
    withContext(Dispatchers.Main) { showOrder(order) }
    val history = withContext(Dispatchers.IO) { api.fetchHistory(id) }
    withContext(Dispatchers.Main) { showHistory(history) }
}

// ✅ CORRECT: batch all I/O, single context switch back to Main
suspend fun processOrder(id: String) {
    showLoading() // already on Main (viewModelScope default)
    val (order, history) = withContext(Dispatchers.IO) {
        val order = api.fetch(id)
        val history = api.fetchHistory(id)
        Pair(order, history)
    }
    showOrder(order) // back on Main — single switch
    showHistory(history)
}
```
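Since the two fetches in the batched block are independent, they can also run concurrently with async inside the same withContext, still paying only one switch back to the caller. A runnable sketch with stand-in suspend functions (`fetchOrder`/`fetchHistory` are hypothetical, simulating the api calls):

```kotlin
import kotlinx.coroutines.*

// Stand-ins for the two independent network calls.
suspend fun fetchOrder(id: String): String { delay(100); return "order-$id" }
suspend fun fetchHistory(id: String): String { delay(100); return "history-$id" }

suspend fun loadOrderScreen(id: String): Pair<String, String> =
    withContext(Dispatchers.IO) {
        val order = async { fetchOrder(id) }        // starts immediately
        val history = async { fetchHistory(id) }    // runs concurrently
        order.await() to history.await()            // ~100 ms total, not ~200 ms
    }

fun main() = runBlocking {
    println(loadOrderScreen("7")) // (order-7, history-7)
}
```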
lazy thread safety mode misuse
Kotlin’s lazy {} defaults to LazyThreadSafetyMode.SYNCHRONIZED, which acquires an intrinsic monitor lock on every access until initialized. This is unnecessary — and adds CPU overhead — for properties accessed only from a single thread such as the main thread.
lazy mode reference:

| Mode | Lock behavior | Use case |
|---|---|---|
| SYNCHRONIZED | Acquires intrinsic lock; init runs exactly once | Multi-thread, expensive init, must run once |
| PUBLICATION | No lock; multiple threads may init, first result stored | Multi-thread, idempotent init |
| NONE | No synchronization at all | Single-thread (Main-only UI properties) |
```kotlin
// ❌ BAD: SYNCHRONIZED mode on a Main-only UI property — lock overhead per access before init
class HomeFragment : Fragment() {
    private val adapter by lazy { HomeAdapter() } // default = SYNCHRONIZED; unnecessary lock
    private val animation by lazy { LottieAnimation() }
}

// ✅ CORRECT: NONE mode for single-thread / Main-only properties
class HomeFragment : Fragment() {
    private val adapter by lazy(LazyThreadSafetyMode.NONE) { HomeAdapter() }
    private val animation by lazy(LazyThreadSafetyMode.NONE) { LottieAnimation() }
}

// ❌ BAD: SYNCHRONIZED lazy on a hot-path multi-thread object — lock contention
// During initialization, all threads wait on the single lock
class IconManager {
    private val cache by lazy { buildHeavyIconCache() } // 200ms init while others wait
}

// ✅ CORRECT: PUBLICATION mode if initialization is idempotent and can run in parallel
class IconManager {
    // Multiple threads may init but only the first result is stored — no lock wait
    private val cache by lazy(LazyThreadSafetyMode.PUBLICATION) { buildHeavyIconCache() }
}
```
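The contract all three modes share — initializer runs on first access and its result is cached — can be verified in a few lines; NONE simply skips the lock. `Holder` is an illustrative class, not from the examples above:

```kotlin
// Counts how many times the lazy initializer actually runs.
class Holder {
    var initCount = 0
        private set
    val label: String by lazy(LazyThreadSafetyMode.NONE) {
        initCount++
        "ready"
    }
}

fun main() {
    val h = Holder()
    println(h.initCount) // 0 — lazy has not run yet
    println(h.label)     // ready
    println(h.label)     // ready (initializer not re-run)
    println(h.initCount) // 1
}
```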