As an Android developer, I love finding a neat trick or piece of code in an application's codebase. It not only extends my knowledge, it is also genuinely interesting to see how other developers approach particular problems.
One of the most convenient, smooth and solid applications I use as my daily driver is the Telegram messenger. Since the Android app's source code is available on GitHub, I sometimes like to dig into the codebase to see how the Telegram developers implemented a feature. The approaches they use are often really interesting, so I decided to share a couple of things I encountered in their codebase.
Splitting devices by performance classes
The first interesting thing is splitting devices into performance classes. Android fragmentation is huge, and if you want your app to be as smooth as possible on any device, it makes sense to adapt some behavior to the hardware power of the device.
Telegram splits all devices into three performance classes, LOW, AVERAGE and HIGH. The performance class is derived from hardware information such as the CPU core count, CPU frequencies and the memory class.
public class SharedConfig {

    private static int devicePerformanceClass = -1; // -1 means "not measured yet"

    public final static int PERFORMANCE_CLASS_LOW = 0;
    public final static int PERFORMANCE_CLASS_AVERAGE = 1;
    public final static int PERFORMANCE_CLASS_HIGH = 2;

    public static int getDevicePerformanceClass() {
        if (devicePerformanceClass == -1) {
            int androidVersion = Build.VERSION.SDK_INT;
            int cpuCount = Runtime.getRuntime().availableProcessors();
            int memoryClass = ((ActivityManager) ApplicationLoader.applicationContext.getSystemService(Context.ACTIVITY_SERVICE)).getMemoryClass();
            int totalCpuFreq = 0;
            int freqResolved = 0;
            for (int i = 0; i < cpuCount; i++) {
                try {
                    RandomAccessFile reader = new RandomAccessFile(String.format(Locale.ENGLISH, "/sys/devices/system/cpu/cpu%d/cpufreq/cpuinfo_max_freq", i), "r");
                    String line = reader.readLine();
                    if (line != null) {
                        totalCpuFreq += Utilities.parseInt(line) / 1000;
                        freqResolved++;
                    }
                    reader.close();
                } catch (Throwable ignore) {}
            }
            // average of the per-core max frequencies (MHz), or -1 if none could be read
            int maxCpuFreq = freqResolved == 0 ? -1 : (int) Math.ceil(totalCpuFreq / (float) freqResolved);

            if (androidVersion < 21 || cpuCount <= 2 || memoryClass <= 100
                    || cpuCount <= 4 && maxCpuFreq != -1 && maxCpuFreq <= 1250
                    || cpuCount <= 4 && maxCpuFreq <= 1600 && memoryClass <= 128 && androidVersion <= 21
                    || cpuCount <= 4 && maxCpuFreq <= 1300 && memoryClass <= 128 && androidVersion <= 24) {
                devicePerformanceClass = PERFORMANCE_CLASS_LOW;
            } else if (cpuCount < 8 || memoryClass <= 160
                    || maxCpuFreq != -1 && maxCpuFreq <= 2050
                    || maxCpuFreq == -1 && cpuCount == 8 && androidVersion <= 23) {
                devicePerformanceClass = PERFORMANCE_CLASS_AVERAGE;
            } else {
                devicePerformanceClass = PERFORMANCE_CLASS_HIGH;
            }
            if (BuildVars.LOGS_ENABLED) {
                FileLog.d("device performance info (cpu_count = " + cpuCount + ", freq = " + maxCpuFreq + ", memoryClass = " + memoryClass + ", android version " + androidVersion + ")");
            }
        }
        return devicePerformanceClass;
    }
}
Device performance class
Based on the device's performance class, Telegram decides whether to show certain animations, sets blur parameters, chooses the particle count for particle animations, and defines the size of the area in which the camera stream is drawn.
public static boolean canBlurChat() {
    return getDevicePerformanceClass() == PERFORMANCE_CLASS_HIGH;
}

private static int measureMaxParticlesCount() {
    switch (SharedConfig.getDevicePerformanceClass()) {
        default:
        case SharedConfig.PERFORMANCE_CLASS_LOW:
        case SharedConfig.PERFORMANCE_CLASS_AVERAGE:
            return 100;
        case SharedConfig.PERFORMANCE_CLASS_HIGH:
            return 150;
    }
}

boolean animationEnabled =
        MessagesController.getGlobalMainSettings().getBoolean("view_animations", true) &&
        SharedConfig.getDevicePerformanceClass() != SharedConfig.PERFORMANCE_CLASS_LOW;
Examples of usage of getDevicePerformanceClass()
The idea of splitting devices into performance classes is not unique, and there are already a few existing solutions for it. Meta has a library that does something similar, and Google recently released an alpha version of its own library for determining the device's performance class.
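For completeness, newer Android versions also expose a performance class signal directly in the platform. Below is a minimal sketch of reading it, assuming Android 12+ where Build.VERSION.MEDIA_PERFORMANCE_CLASS reports the media performance class a device declares (0 if it declares none). This is just an illustration of the platform API, not how Telegram or the libraries above work.

import android.os.Build

// Minimal sketch: read the platform-declared media performance class on Android 12+.
// Returns null when the device declares no performance class or runs an older Android version.
fun platformMediaPerformanceClass(): Int? {
    if (Build.VERSION.SDK_INT < Build.VERSION_CODES.S) return null
    // MEDIA_PERFORMANCE_CLASS is an SDK_INT-style value (e.g. 31 for "performance class 12"),
    // or 0 if the manufacturer did not declare one.
    val mpc = Build.VERSION.MEDIA_PERFORMANCE_CLASS
    return if (mpc == 0) null else mpc
}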
An interesting approach to animation
There are quite a few ways to run an animation on Android, each with its pros and cons, but there is one I had never encountered before. It's very simple yet pretty elegant. The idea behind it can be shown with a couple of lines of code.
public class CustomView extends View {

    public CustomView(Context context) {
        super(context);
    }

    @Override
    protected void onDraw(Canvas canvas) {
        super.onDraw(canvas);
        // slightly change some value compared to the previous onDraw call
        // do the actual drawing using the canvas
        invalidate();
    }
}
Now we have an animation loop: each time onDraw() is called we invalidate the view, so it will be drawn again on the next draw pass. OK, but where is the animation? You're right, it's not an animation yet. For it to become one, the animated value (it could be anything: a color, a translation, whatever) should change slightly from one draw pass to the next, so that to the user it looks like an animation.
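To make that concrete, here is a minimal sketch of the pattern (the view and its names are mine, not taken from the Telegram code): a square whose rotation angle advances by a small step on every draw pass.

import android.content.Context
import android.graphics.Canvas
import android.graphics.Color
import android.graphics.Paint
import android.util.AttributeSet
import android.view.View

class SpinnerView @JvmOverloads constructor(
    context: Context,
    attrs: AttributeSet? = null
) : View(context, attrs) {

    private val paint = Paint(Paint.ANTI_ALIAS_FLAG).apply { color = Color.MAGENTA }
    private var angle = 0f

    override fun onDraw(canvas: Canvas) {
        super.onDraw(canvas)
        // slightly change the animated value compared to the previous pass
        angle = (angle + 2f) % 360f
        canvas.save()
        canvas.rotate(angle, width / 2f, height / 2f)
        canvas.drawRect(width / 2f - 40f, height / 2f - 40f, width / 2f + 40f, height / 2f + 40f, paint)
        canvas.restore()
        // schedule the next draw pass
        invalidate()
    }
}

Each pass advances the angle by 2 degrees, so at roughly 60 fps the square completes a full turn in about three seconds.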
A good use case for this is animating sound amplitudes (from voice recording, for example), since the recorder just emits a stream of amplitude values and the view should be able to animate between them quickly.
A stream of amplitudes looks just like an array of float values in the range 0 to 1200f: [0f, 5f, 646.5f … 700f, 400f, 200f, … ]. The idea is: each time a new value is dispatched to the view, we set it as the new target amplitude; then, until a new value arrives, each onDraw() call slightly adjusts the current amplitude towards that target.
Let's split this concept into a few parts.
The first part is simply updating the view with a new amplitude value coming from the hardware. At this step we also calculate a small amplitude delta: the value that will be added to or subtracted from the current amplitude on each onDraw() call. The greater this delta, the faster the drawn amplitude moves.
class DynamicView @JvmOverloads constructor(
    context: Context,
    attrs: AttributeSet? = null,
    defStyleAttr: Int = 0
) : View(context, attrs, defStyleAttr) {

    private var animateToAmplitude = 0f
    private var amplitude = 0f
    private var deltaAmplitude = 0f

    var speed = Speed.HIGH

    init {
        setWillNotDraw(false)
    }

    override fun onDraw(canvas: Canvas) {
        super.onDraw(canvas)
        // draw something depending on the current amplitude value
        invalidate()
    }

    fun setAmplitude(value: Float) {
        animateToAmplitude = value
        val diff = animateToAmplitude - amplitude // current amplitude stored in the view
        deltaAmplitude = if (animateToAmplitude > amplitude) {
            diff / (100f + 600f * speed.coef)
        } else {
            diff / (100f + 1000f * speed.coef)
        }
    }

    enum class Speed(val coef: Float) {
        HIGH(0.35f),
        SLOW(0.50f)
    }
}
Updating the amplitude from a stream of values
Don't mind all the random-looking numbers in this Gist; they are just picked so that the deltaAmplitude variable stays relatively small.
The second part is to actually update the current amplitude using this deltaAmplitude variable and draw on the canvas. For this example I will just draw a circle whose radius represents the current amplitude. Telegram draws something called a blob instead.
Telegram blob
class DynamicView @JvmOverloads constructor(
    context: Context,
    attrs: AttributeSet? = null,
    defStyleAttr: Int = 0
) : View(context, attrs, defStyleAttr) {

    private var lastUpdateTime: Long = System.currentTimeMillis()
    // .. fields and setAmplitude() from the previous snippet,
    // plus a Paint and the min/max radius used to draw the circle

    override fun onDraw(canvas: Canvas) {
        super.onDraw(canvas)
        val delta = System.currentTimeMillis() - lastUpdateTime
        calculateNextFrame(delta)
        val radius = minRadius + (maxRadius - minRadius) * amplitude / MAX_AMPLITUDE
        canvas.drawCircle(width / 2f, height / 2f, radius, paint)
        lastUpdateTime = System.currentTimeMillis()
        invalidate()
    }

    private fun calculateNextFrame(dt: Long) {
        if (animateToAmplitude != amplitude) {
            amplitude += deltaAmplitude * dt
            // clamp so we never overshoot the target amplitude
            amplitude = if (deltaAmplitude > 0) {
                amplitude.coerceAtMost(animateToAmplitude)
            } else {
                amplitude.coerceAtLeast(animateToAmplitude)
            }
        }
    }

    fun setAmplitude(value: Float) {
        //..
    }

    private companion object {
        private const val MAX_AMPLITUDE = 1200f
    }
}
Calculating the next frame of the animation
The key thing in the Gist above is the calculateNextFrame function. It takes dt, the time elapsed between subsequent onDraw() calls, and based on it and deltaAmplitude calculates the next amplitude value to be drawn on the canvas.
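To get a feel for the numbers: suppose the current amplitude is 0, a new value of 600 arrives and speed is HIGH (coef = 0.35). Then deltaAmplitude = 600 / (100 + 600 × 0.35) ≈ 1.94. With dt around 16 ms per frame at 60 fps, the amplitude grows by roughly 31 per frame, so it reaches the target in about 19 frames, a little over 0.3 seconds.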
The last thing is to dispatch some random amplitude values to the view and see how it handles them.
class MainActivity : AppCompatActivity() {

    private val handler = Handler(Looper.getMainLooper())

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)
        val dynamic = findViewById<DynamicView>(R.id.dynamic1)
        val button = findViewById<Button>(R.id.button)
        button.setOnClickListener {
            button.isEnabled = false
            for (i in 0 until 100) {
                handler.postDelayed({
                    val ampl = Random.nextFloat() * 1200
                    dynamic.setAmplitude(ampl)
                    if (i == 99) {
                        button.isEnabled = true
                        dynamic.setAmplitude(500f)
                    }
                }, i * 100L)
            }
        }
    }
}
Dispatching amplitudes to view
And by combining two DynamicViews in one layout and giving them different speeds, we can see pretty nice results.
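A minimal sketch of that setup could look like the following (it assumes the layout contains a second DynamicView with the id dynamic2; the ids and the driving code are mine, not from the article's Gists). Both views receive the same amplitude values but interpolate towards them at different speeds.

// Hypothetical sketch: two DynamicViews driven with the same values at different speeds.
val fast = findViewById<DynamicView>(R.id.dynamic1).apply { speed = DynamicView.Speed.HIGH }
val slow = findViewById<DynamicView>(R.id.dynamic2).apply { speed = DynamicView.Speed.SLOW }

button.setOnClickListener {
    val amplitude = Random.nextFloat() * 1200f
    fast.setAmplitude(amplitude)
    slow.setAmplitude(amplitude)
}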