
When users say “the app feels slow”, they’re rarely wrong — but they’re also rarely specific. As Android developers, we need real, actionable data to understand what’s happening beneath the surface. That’s where performance monitoring comes in.

Most of us rely on tools like Firebase Performance Monitoring or Sentry Performance, and they’re incredibly useful for catching high-level issues like slow app startup, frozen frames, or network latency. But what happens when you want to measure something your app uniquely cares about?

  • How long does your dependency injection take?
  • How much time passes between screen navigation and user interactivity?
  • How long does it take to load that hero image on your homepage?

These aren’t generic metrics — they’re specific to your app, your flows, and your UX.

In this article, I’ll show you how to build a simple, flexible performance tracker for Android. It will let you define and record custom traces for app-level and screen-level performance events, and make those traces observable in tools like:

  • ✅ Firebase Performance Monitoring
  • ✅ Sentry Performance
  • ✅ Your own in-house or logging-based SDK

No magic. No heavy frameworks. Just a clean abstraction you can drop into any app and plug into any observability stack.

Let’s build it.

Step 1: Define the Core Tracker Interface

We begin with the simplest and most important part of the system: the API for starting and stopping named performance traces.

interface PerformanceTracker {
    fun startTrace(name: String)
    fun stopTrace(name: String)
}

Simple In-Memory Tracker implementation:

import android.os.SystemClock

class InMemoryPerformanceTracker : PerformanceTracker {
    private val traces = mutableMapOf<String, Long>()

    override fun startTrace(name: String) {
        traces[name] = SystemClock.elapsedRealtime()
    }

    override fun stopTrace(name: String) {
        val start = traces.remove(name) ?: return
        val duration = SystemClock.elapsedRealtime() - start
        println("Trace '$name' took $duration ms")
    }
}
💡 What This Does
  • startTrace(name): Marks the beginning of a timed block with the given name (e.g., "app_start", "login_screen")
  • stopTrace(name): Marks the end of that block and calculates the duration

This minimal interface is intentionally simple. It gives you complete control over when and what to measure — no assumptions, no overhead.

🛠️ Example Use Case

 

performanceTracker.startTrace("app_startup")
// your app init process
performanceTracker.stopTrace("app_startup")

 

In this case, we’re measuring how long app initialization takes — something no third-party SDK can track for you unless you instrument it yourself.
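As a concrete sketch, here is how this could be wired into an Application subclass (MyApp and initDependencies are hypothetical names; the trace name "app_startup" is arbitrary):

```kotlin
// Hypothetical Application subclass; "app_startup" and initDependencies()
// are illustrative, not part of the tracker itself.
class MyApp : Application() {

    val performanceTracker: PerformanceTracker = InMemoryPerformanceTracker()

    override fun onCreate() {
        performanceTracker.startTrace("app_startup")
        super.onCreate()
        // Eagerly initialize DI, logging, feature flags, etc.
        initDependencies()
        performanceTracker.stopTrace("app_startup")
    }

    private fun initDependencies() {
        // e.g., build your dependency graph here
    }
}
```

Because the tracker is a plain interface, you can hold it on the Application (or in your DI graph) and reuse the same instance from any screen.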

Step 2: Add More Metadata to Traces
Why We Need Attributes and Metrics in Traces

So far, we’ve focused on measuring durations — but real performance tracking isn’t just about time. It’s about context.

That’s where attributes and metrics come in.

Attributes: Add Meaningful Context

Attributes are key-value pairs (usually strings) that describe the environment, conditions, or type of the trace. They help you group, filter, and debug performance data later.

💡 Example attributes:

 

"device_class" → "high"
"build_type" → "release"
"screen_type" → "feed"
"navigation_flow" → "login → dashboard"

 

Why they matter:
  • Let you compare performance across different devices or builds
  • Help identify regressions in specific flows (e.g., onboarding vs returning user)
  • Can be indexed or filtered in platforms like Firebase and Sentry
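In practice, attaching attributes to an active trace reads like this (a sketch only: the addAttribute(traceName, key, value) call is the API we add to the tracker in Step 4, and the names are illustrative):

```kotlin
// Illustrative only: addAttribute(...) is introduced in Step 4.
tracker.startTrace("home_screen")
tracker.addAttribute("home_screen", "device_class", "high")
tracker.addAttribute("home_screen", "build_type", "release")
// ... screen renders ...
tracker.stopTrace("home_screen")
```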
📊 Metrics: Measure Custom Numeric Values

Metrics are numeric values you record inside a trace — they could be durations, counts, sizes, or anything measurable.

💡 Example metrics:

 

"image_load_time_ms" → 128
"viewmodel_init_time_ms" → 42
"db_query_count" → 3
"retries" → 2

 

Why they matter:
  • Let you track inner performance inside a larger span
  • Show how sub-operations contribute to total time
  • Can be visualized over time to detect anomalies or trends
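Recording metrics inside a trace looks much the same (again a sketch: addMetric(traceName, key, value) is added to the tracker in Step 4; the names and values are examples):

```kotlin
// Illustrative only: addMetric(...) is introduced in Step 4.
tracker.startTrace("feed_load")
tracker.addMetric("feed_load", "db_query_count", 3)
tracker.addMetric("feed_load", "image_load_time_ms", 128)
tracker.stopTrace("feed_load")
```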
Step 3: Add a Listener Interface for Trace Events

Now that we can start and stop traces, we want a way to react when a trace is active — for example:

  • Send it to Firebase or Sentry
  • Log it to Logcat or your own analytics SDK
  • Build a custom dashboard

That’s where TraceListener comes in:

Create a Listener Interface

 

interface TraceListener {
    fun onStart(traceName: String)
    fun onStop(trace: PerformanceTrace)
    fun onAttributeAddedToTrace(traceName: String, attrName: String, attrValue: Any)
    fun onMetricAddedToTrace(traceName: String, metricName: String, metricValue: Any)
}

 

Implement a Lightweight In-Memory Trace

To back each active trace at runtime, we need a lightweight object that will:

  • Track the start time
  • Collect attributes and metrics
  • Produce a PerformanceTrace when finished

Here’s what that looks like:

class InMemoryTrace {
    private val startTime = SystemClock.elapsedRealtime()
    private val attributes = mutableMapOf<String, Any>()
    private val metrics = mutableMapOf<String, Any>()

    fun stop(): Long = SystemClock.elapsedRealtime() - startTime

    fun addAttribute(key: String, value: Any) {
        attributes[key] = value
    }

    fun addMetric(key: String, value: Any) {
        metrics[key] = value
    }

    fun toPerformanceTrace(name: String): PerformanceTrace {
        return PerformanceTrace(name, stop(), attributes.toMap(), metrics.toMap())
    }
}
Create a PerformanceTrace Data Class

When a trace ends, we need to collect all the relevant data in one place so it can be passed to listeners or reported to SDKs.

Here’s the model:

data class PerformanceTrace(
    val name: String,
    val durationMs: Long,
    val attributes: Map<String, Any> = emptyMap(),
    val metrics: Map<String, Any> = emptyMap()
)
💡 What This Does

This class holds all the meaningful data about a completed trace:

  • name: The unique name of the trace (e.g., "home_screen", "api_call")
  • durationMs: Total time the trace took, in milliseconds
  • attributes: Optional metadata about the trace (e.g., "device_class", "build_type")
  • metrics: Optional numeric measurements (e.g., "image_load_time_ms", "db_query_count")

This object is passed to TraceListener.onStop(...), so your reporters (Firebase, Sentry, etc.) have all the context they need.
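For example, a minimal debug listener that just logs completed traces might look like this (LogcatTraceListener is a hypothetical name; Log is android.util.Log):

```kotlin
import android.util.Log

// Hypothetical debug listener: prints every trace event to Logcat.
class LogcatTraceListener : TraceListener {

    override fun onStart(traceName: String) {
        Log.d("Perf", "Trace started: $traceName")
    }

    override fun onStop(trace: PerformanceTrace) {
        Log.d("Perf", "Trace '${trace.name}' took ${trace.durationMs} ms " +
            "attrs=${trace.attributes} metrics=${trace.metrics}")
    }

    override fun onAttributeAddedToTrace(traceName: String, attrName: String, attrValue: Any) {
        Log.d("Perf", "[$traceName] attribute $attrName=$attrValue")
    }

    override fun onMetricAddedToTrace(traceName: String, metricName: String, metricValue: Any) {
        Log.d("Perf", "[$traceName] metric $metricName=$metricValue")
    }
}
```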

Step 4: Update the Tracker

 

class InMemoryPerformanceTracker : PerformanceTracker {

    private val traces = mutableMapOf<String, InMemoryTrace>()
    private val listeners = mutableSetOf<TraceListener>()

    override fun startTrace(name: String) {
        traces[name] = InMemoryTrace()
        listeners.forEach { it.onStart(name) }
    }

    override fun stopTrace(name: String) {
        val trace = traces.remove(name) ?: return
        val result = trace.toPerformanceTrace(name)
        listeners.forEach { it.onStop(result) }
    }

    fun addAttribute(traceName: String, key: String, value: Any) {
        traces[traceName]?.addAttribute(key, value)
        listeners.forEach { it.onAttributeAddedToTrace(traceName, key, value) }
    }

    fun addMetric(traceName: String, key: String, value: Any) {
        traces[traceName]?.addMetric(key, value)
        listeners.forEach { it.onMetricAddedToTrace(traceName, key, value) }
    }

    fun addListener(listener: TraceListener) = listeners.add(listener)
    fun removeListener(listener: TraceListener) = listeners.remove(listener)
}
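
Putting it together, a screen-load measurement might now look like this (myListener is a placeholder for any TraceListener implementation; names are illustrative):

```kotlin
val tracker = InMemoryPerformanceTracker()
tracker.addListener(myListener) // any TraceListener implementation

tracker.startTrace("profile_screen")
tracker.addAttribute("profile_screen", "build_type", "release")
// ... load data, render the UI ...
tracker.addMetric("profile_screen", "db_query_count", 2)
tracker.stopTrace("profile_screen") // listeners receive the completed PerformanceTrace
```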

 

Why This Matters

By exposing a listener interface:

  • You can connect multiple backends to the same tracker (e.g., Firebase + Sentry + custom logger)
  • You avoid tight coupling between the tracker and any specific SDK
  • You support real-time sync with external tracing tools — which expect addAttribute() or addMetric() to be called before stop()
Example Use

Let’s say you have a Firebase integration. It might look like this:

class FirebasePerformanceListener : TraceListener {

    private val firebaseTraces = mutableMapOf<String, Trace>()

    override fun onStart(traceName: String) {
        val trace = Firebase.performance.newTrace(traceName)
        trace.start()
        firebaseTraces[traceName] = trace
    }

    override fun onAttributeAddedToTrace(traceName: String, attrName: String, attrValue: Any) {
        firebaseTraces[traceName]?.putAttribute(attrName, attrValue.toString())
    }

    override fun onMetricAddedToTrace(traceName: String, metricName: String, metricValue: Any) {
        (metricValue as? Number)?.let { firebaseTraces[traceName]?.putMetric(metricName, it.toLong()) }
    }

    override fun onStop(trace: PerformanceTrace) {
        firebaseTraces.remove(trace.name)?.stop()
    }
}

And a Sentry integration:

class SentryPerformanceListener : TraceListener {

    private val sentryTraces = mutableMapOf<String, ITransaction>()

    override fun onStart(traceName: String) {
        sentryTraces[traceName] = Sentry.startTransaction(traceName, "custom.trace")
    }

    override fun onAttributeAddedToTrace(traceName: String, attrName: String, attrValue: Any) {
        sentryTraces[traceName]?.setTag(attrName, attrValue.toString())
    }

    override fun onMetricAddedToTrace(traceName: String, metricName: String, metricValue: Any) {
        sentryTraces[traceName]?.setData(metricName, metricValue)
    }

    override fun onStop(trace: PerformanceTrace) {
        sentryTraces.remove(trace.name)?.finish(SpanStatus.OK)
    }
}

Each time a trace begins or is enriched with data, your listener is notified in real time — making it possible to keep your 3rd-party SDK in sync.
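Assuming the Firebase and Sentry snippets above live in TraceListener implementations named FirebasePerformanceListener and SentryPerformanceListener (hypothetical names), fanning a single trace out to several backends is just:

```kotlin
val tracker = InMemoryPerformanceTracker()

// Hypothetical class names wrapping the listener snippets shown above.
tracker.addListener(FirebasePerformanceListener())
tracker.addListener(SentryPerformanceListener())
tracker.addListener(LogcatTraceListener()) // e.g., a local debug logger

// Every trace started from here on is mirrored to all registered backends.
tracker.startTrace("checkout_flow")
```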

🧾 Why We Return PerformanceTrace in onStop()

You might wonder: why bother returning a full PerformanceTrace object when third-party SDKs like Firebase or Sentry already handle everything internally?

Here’s why it matters:

1. Firebase and Sentry don’t expose tracked data
  • When you start a trace in Firebase or Sentry, they record timing and data internally
  • But once the trace is stopped, you can’t access the collected values
  • There’s no way to review the result locally, or repurpose the data elsewhere
2. PerformanceTrace gives you full access to all metadata

By wrapping the result of a trace in a self-contained object (name, durationMs, attributes, metrics), you can:

  • Log it locally in debug builds
  • Send it to your own backend
  • Write tests against performance-critical flows
  • Build dashboards or monitor regressions over time
3. Avoid duplicating work

Imagine having to manually measure and re-structure timing data just to submit it to your own system in addition to Firebase:

val start = SystemClock.elapsedRealtime()
// do work
val duration = SystemClock.elapsedRealtime() - start

// Firebase handles this internally
firebaseTrace.putMetric("duration", duration)

// Your own backend? You’d need to do this again:
yourCustomBackend.log("trace_name", duration)

Instead, you can rely on the tracker to give you one clean result:

override fun onStop(trace: PerformanceTrace) {
    yourBackendReporter.submit(trace)
}

No duplication. No fragile stopwatches. One single trace → many destinations.

This gives your performance system flexibility that Firebase and Sentry alone can’t offer — without giving up their strengths.
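As a final sketch, a listener that forwards completed traces to your own backend could look like this (BackendReporter and its submit method are hypothetical; only onStop matters here, since the PerformanceTrace already carries every attribute and metric):

```kotlin
// Hypothetical reporter interface for your in-house pipeline.
interface BackendReporter {
    fun submit(trace: PerformanceTrace)
}

// Forwards finished traces; intermediate events are ignored on purpose,
// because onStop receives the complete, self-contained result.
class BackendTraceListener(private val reporter: BackendReporter) : TraceListener {
    override fun onStart(traceName: String) = Unit
    override fun onAttributeAddedToTrace(traceName: String, attrName: String, attrValue: Any) = Unit
    override fun onMetricAddedToTrace(traceName: String, metricName: String, metricValue: Any) = Unit

    override fun onStop(trace: PerformanceTrace) {
        reporter.submit(trace) // one object, all context: name, duration, attrs, metrics
    }
}
```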

Conclusion

Building your own performance tracker isn’t about reinventing the wheel — it’s about owning your observability.

By walking through this simple but flexible design, we’ve seen how you can:

✅ Measure any user-defined code path
✅ Enrich traces with attributes and custom metrics
✅ Report to one or many backends (like Firebase, Sentry, or your own)
✅ Access all collected data directly — no black boxes, no guesswork

Most third-party SDKs offer great observability tools, but they’re limited to predefined events and often don’t expose your own data.

With this in-house tracker, you:

  • Track exactly what matters to your team
  • Use the same trace in multiple destinations
  • Log, test, or debug performance in ways SDKs simply don’t support
Full Source Code on GitHub

All code from this article is available here:
👉 github.com/akniyetc/performance-tracker

This article was previously published on proandroiddev.com.
