Sometimes I go to hackathons where I can implement any crazy idea. Today I will tell you how to build a mobile game prototype with unusual controls in just a couple of hours: the character will react to a smile and a wink.

The idea for such a game came to me during a hackathon several years ago. The format gave me one working day for development, that is, 8 hours. I chose the Android SDK to get the prototype done in time. A game engine might have been a better fit, but I don’t know any of them well enough.

The concept of controlling a game with emotions was inspired by another game, where you steered the character by changing the volume of your voice. Someone has probably already used emotions for game controls, but I knew of few such examples, so I settled on this format.

Watch out, the video is loud!

All we need is Android Studio on the computer. If you don’t have a real Android device to run the app on, you can use an emulator with the webcam enabled.

Creating a project with ML Kit

ML Kit is a great tool to impress a hackathon jury, because your prototype gets to use AI! It also makes it easy to embed machine learning features into a project, such as detecting objects in a frame, translation, and OCR.

What matters for us is that ML Kit has a free on-device API for detecting smiles and open or closed eyes.

Previously, we had to register the project in the Firebase console before doing anything with ML Kit. Now we can skip this step for on-device functionality.
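If you were starting from a blank project instead of the sample, the on-device face detection API would come from a single Gradle dependency. A sketch for build.gradle.kts; the version number is only an example, check the ML Kit release notes for the current one:

// Module-level build.gradle.kts; the version is an example
dependencies {
    implementation("com.google.mlkit:face-detection:16.1.5")
}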

Removing the unnecessary parts

Let’s take the official sample and remove what we don’t need, so that we don’t have to write the camera handling logic from scratch.

First, download the sample and try running it. Explore the Face detection mode: it will look like this article’s preview image.

Note that the link points to a specific past commit: the tutorial is based on that particular version of the sample. You can certainly use the latest version instead, but then you’ll have to match the diffs and adapt the ideas to it.

Manifest

Let’s start editing AndroidManifest.xml. We remove all activity tags except the first one and put CameraXLivePreviewActivity in its place, so that the app starts with the camera right away. In the android:value attribute we leave only face, to exclude unnecessary model resources from the APK.

<meta-data
    android:name="com.google.mlkit.vision.DEPENDENCIES"
    android:value="face" />

<activity
    android:name=".CameraXLivePreviewActivity"
    android:exported="true"
    android:theme="@style/AppTheme">
    <intent-filter>
        <action android:name="android.intent.action.MAIN" />
        <category android:name="android.intent.category.LAUNCHER" />
    </intent-filter>
</activity>

Full diff.

Camera

To save time, we won’t delete the now-unused files; instead, we’ll focus on the elements of the CameraXLivePreviewActivity screen.

On line 117, we will set the face detection mode:

private String selectedModel = FACE_DETECTION;

On line 118, we will turn on the front camera:

private int lensFacing = CameraSelector.LENS_FACING_FRONT;

At the end of the onCreate method we will hide the settings on lines 198–199:

findViewById(R.id.settings_button).setVisibility(View.GONE);
findViewById(R.id.control).setVisibility(View.GONE);

We could stop at this point. But if the FPS counter and the face grid are visually distracting, you can turn them off as follows:

In the VisionProcessorBase.java file, we remove lines 213–215 to hide the FPS:

graphicOverlay.add(
       new InferenceInfoGraphic(
          graphicOverlay, currentLatencyMs, shouldShowFps ? framesPerSecond : null));

In the FaceDetectorProcessor.java file, we remove lines 75–78 to hide the face grid:

for (Face face : faces) {
    graphicOverlay.add(new FaceGraphic(graphicOverlay, face));
    logExtrasForTesting(face);
}

Full diff.

Emotion recognition

Smile detection is turned off by default, but it’s straightforward to enable. It’s not for nothing that we took the sample code as a basis! We’ll extract the parameters we need into a separate class and declare a listener interface:

// Inside FaceDetectorProcessor.java
public class FaceDetectorProcessor extends VisionProcessorBase<List<Face>> {

    public static class Emotion {
        public final float smileProbability;
        public final float leftEyeOpenProbability;
        public final float rightEyeOpenProbability;

        public Emotion(float smileProbability, float leftEyeOpenProbability, float rightEyeOpenProbability) {
            this.smileProbability = smileProbability;
            this.leftEyeOpenProbability = leftEyeOpenProbability;
            this.rightEyeOpenProbability = rightEyeOpenProbability;
        }
    }

    public interface EmotionListener {
        void onEmotion(Emotion emotion);
    }

    private EmotionListener listener;

    public void setListener(EmotionListener listener) {
        this.listener = listener;
    }

    @Override
    protected void onSuccess(@NonNull List<Face> faces, @NonNull GraphicOverlay graphicOverlay) {
        if (!faces.isEmpty() && listener != null) {
            Face face = faces.get(0);
            if (face.getSmilingProbability() != null
                    && face.getLeftEyeOpenProbability() != null
                    && face.getRightEyeOpenProbability() != null) {
                listener.onEmotion(new Emotion(
                        face.getSmilingProbability(),
                        face.getLeftEyeOpenProbability(),
                        face.getRightEyeOpenProbability()));
            }
        }
    }
}

We will configure the FaceDetectorProcessor in the CameraXLivePreviewActivity class to enable emotion classification and subscribe to the emotion state. Then we convert the probabilities into boolean flags. For testing, we’ll add a TextView to the layout and show the emotions as emoticons.
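In code, this boils down to two things: enabling classification in the detector options (without it, the probabilities come back null) and thresholding the probabilities into flags. A minimal Kotlin sketch, assuming a 0.5 threshold (the exact value is a matter of taste, and the sample itself routes its options through preferences):

// Classification must be enabled, otherwise smile/eye probabilities are null
val options = FaceDetectorOptions.Builder()
    .setClassificationMode(FaceDetectorOptions.CLASSIFICATION_MODE_ALL)
    .build()

// Flags consumed by the game code below; the 0.5f threshold is an assumption
data class EmotionFlags(
    val isSmile: Boolean,
    val isLeftEyeOpen: Boolean,
    val isRightEyeOpen: Boolean
)

private const val THRESHOLD = 0.5f

fun toFlags(emotion: FaceDetectorProcessor.Emotion) = EmotionFlags(
    isSmile = emotion.smileProbability > THRESHOLD,
    isLeftEyeOpen = emotion.leftEyeOpenProbability > THRESHOLD,
    isRightEyeOpen = emotion.rightEyeOpenProbability > THRESHOLD
)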

Full diff.

Divide and conquer

We are making a game, so we need somewhere to draw its elements. We’ll assume it runs on a phone in portrait mode and split the screen into two parts: the camera at the top and the game at the bottom.

Controlling a character with a smile is a difficult task, and a hackathon leaves little time to implement advanced mechanics. So our character will collect goodies along the way while staying on either the top or the bottom line of the playing field. As a complication, we’ll bring the eyes into play: catch a goody with at least one eye closed and the points are doubled.

If you want to implement different gameplay, here are a few other options:

  • a Guitar Hero/Just Dance analog, where you have to show a certain emotion in time with the music;
  • a race with obstacles, where you have to reach the finish line within a time limit or without crashing;
  • a shooter, where the player shoots enemies with a wink.

We will display the game in a custom Android View. In its onDraw method, we will draw the character on the Canvas. For the first prototype, we’ll restrict ourselves to geometric primitives.
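One possible skeleton for such a view, as a sketch (the real code in the diff is organized differently; the helper methods are the ones defined in the following sections):

import android.content.Context
import android.graphics.Canvas
import android.util.AttributeSet
import android.view.View

class GameView @JvmOverloads constructor(
    context: Context,
    attrs: AttributeSet? = null
) : View(context, attrs) {

    override fun onSizeChanged(w: Int, h: Int, oldw: Int, oldh: Int) {
        super.onSizeChanged(w, h, oldw, oldh)
        // The view knows its size from here on, so game objects can be measured
        initializePlayer()
    }

    override fun onDraw(canvas: Canvas) {
        super.onDraw(canvas)
        // Update the state, draw the frame, then request the next one
        movePlayer()
        drawPlayer(canvas)
        invalidate()
    }
}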

Player

Our character is a square. We’ll set its size and fix its horizontal position on the left during initialization, since it always stays in the same place. Its Y position will depend on the player’s smile. We will calculate all absolute values relative to the size of the game area: it’s easier than picking hard-coded sizes, and the game will also look right on other devices.

private var playerSize = 0
private var playerRect = RectF()

// Initialize the size depending on the screen size
private fun initializePlayer() {
    playerSize = height / 4
    playerRect.left = playerSize / 2f
    playerRect.right = playerRect.left + playerSize
}

// The current emotion flags
private lateinit var flags: EmotionFlags

// Set the position depending on the smile flag
private fun movePlayer() {
    playerRect.top = getObjectYTopForLine(playerSize, isTopLine = flags.isSmile).toFloat()
    playerRect.bottom = playerRect.top + playerSize
}

// Get the object's top Y coordinate
// so that it is centered on the first or the second line
private fun getObjectYTopForLine(size: Int, isTopLine: Boolean): Int {
    return if (isTopLine) {
        width / 2 - width / 4 - size / 2
    } else {
        width / 2 + width / 4 - size / 2
    }
}

// Keep the paint to reuse it in onDraw
private val playerPaint = Paint(Paint.ANTI_ALIAS_FLAG).apply {
    style = Paint.Style.FILL
    color = Color.BLUE
}

// Draw the square in onDraw
private fun drawPlayer(canvas: Canvas) {
    canvas.drawRect(playerRect, playerPaint)
}
Cake

Our character “runs” and tries to catch cakes to score as many points as possible. We use the standard trick of switching to a frame of reference relative to the player: the character stands still, and the cakes fly towards him. If the cake’s square intersects the player’s square, we count a point. And if at least one of the player’s eyes is closed at that moment, we count two points ¯\_(ツ)_/¯

There will also be only one cake in our universe. When the character eats it, the cake moves back off the screen to a random line at a random coordinate.

// Cake state; the speed value here is my own choice, tune to taste
private var cakeSize = 0
private var cakeRect = RectF()
private var previousTimestamp = 0L
private var cakeSpeed = 0.0005f // in screen widths per millisecond

// Move the cake off the screen right away
private fun initializeCake() {
    cakeSize = height / 8
    moveCakeToStartPoint()
}

private fun moveCakeToStartPoint() {
    // Choose a random position beyond the right edge
    cakeRect.left = width + width * Random.nextFloat()
    cakeRect.right = cakeRect.left + cakeSize
    // Choose a random line
    val isTopLine = Random.nextBoolean()
    cakeRect.top = getObjectYTopForLine(cakeSize, isTopLine).toFloat()
    cakeRect.bottom = cakeRect.top + cakeSize
}

// Move the cake according to the elapsed time
private fun moveCake() {
    val currentTime = System.currentTimeMillis()
    val deltaTime = currentTime - previousTimestamp
    val deltaX = cakeSpeed * width * deltaTime
    cakeRect.left -= deltaX
    cakeRect.right = cakeRect.left + cakeSize
    previousTimestamp = currentTime
}

// If the cake and the player intersect, count a point
private fun checkPlayerCaughtCake() {
    if (RectF.intersects(playerRect, cakeRect)) {
        score += if (flags.isLeftEyeOpen && flags.isRightEyeOpen) 1 else 2
        moveCakeToStartPoint()
    }
}

// If the player misses the cake, move it off the screen again
private fun checkCakeIsOutOfScreenStart() {
    if (cakeRect.right < 0) {
        moveCakeToStartPoint()
    }
}

The score display will be very simple: we’ll draw the number at the top center of the screen. We only need to account for the text height and add a top margin for looks.

private val scorePaint = Paint(Paint.ANTI_ALIAS_FLAG).apply {
    color = Color.GREEN
    textSize = context.resources.getDimension(R.dimen.score_size)
}

private var score: Int = 0
private var scorePoint = PointF()

private fun initializeScore() {
    val bounds = Rect()
    scorePaint.getTextBounds("0", 0, 1, bounds)
    val scoreMargin = resources.getDimension(R.dimen.score_margin)
    scorePoint = PointF(width / 2f, scoreMargin + bounds.height())
    score = 0
}

Let’s see what kind of game we’ve ended up with.

We’ll add some graphics so we won’t be ashamed to show the game at the hackathon presentation!

Images

We don’t know how to draw impressive graphics. Fortunately, there are websites with free game assets. I liked this website, although for some unknown reason it is currently unavailable directly.

Animation

We draw on a Canvas, which means we have to implement animation ourselves. If we have a set of animation frames, programming them is easy: we introduce a class for an object whose images alternate over time.

class AnimatedGameObject(
    private val bitmaps: List<Bitmap>,
    private val duration: Long
) {
    fun getBitmap(timeInMillis: Long): Bitmap {
        val mod = timeInMillis % duration
        val index = (mod / duration.toFloat()) * bitmaps.size
        return bitmaps[index.toInt()]
    }
}

The background also needs to be animated to create the effect of motion. Keeping a whole series of background frames in memory is expensive, so we’ll do something trickier: draw a single image with a time-based shift. The outline of the idea:
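In Kotlin, the trick might look like this sketch (the function and the period constant are illustrative, not taken from the diff): the same bitmap is drawn twice with a time-based horizontal offset, so the two copies form a seamless, endlessly scrolling background.

private fun drawBackground(canvas: Canvas, background: Bitmap, timeInMillis: Long) {
    val periodMs = 3_000L // duration of one full scroll cycle; an example value
    val offset = (timeInMillis % periodMs) / periodMs.toFloat() * background.width
    canvas.drawBitmap(background, -offset, 0f, null)
    canvas.drawBitmap(background, background.width - offset, 0f, null)
}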

Full diff.

It’s hardly a masterpiece, but it’s okay for an overnight prototype. You can find the code here. It runs locally without any additional steps.

Finally, I’ll add that ML Kit Face Detection can be useful in other scenarios as well.

For example, you can analyze all the people in the frame and make sure that everyone is smiling with their eyes open, to take the perfect selfie with friends. Detecting multiple faces in a video stream works out of the box, so this isn’t a difficult task.
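As a sketch, the shutter could be gated on a check like this (the helper and its 0.5 thresholds are my own, not part of ML Kit):

// Take the shot only when every detected face smiles with both eyes open
fun readyForSelfie(faces: List<Face>): Boolean =
    faces.isNotEmpty() && faces.all { face ->
        (face.smilingProbability ?: 0f) > 0.5f &&
            (face.leftEyeOpenProbability ?: 0f) > 0.5f &&
            (face.rightEyeOpenProbability ?: 0f) > 0.5f
    }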

Face contour recognition from the Face Detection module helps replicate the masks that are now popular in almost every camera app. And if we add interactivity through smile and wink detection, they become doubly fun to use.

Face contour recognition can be used for more than just entertainment. Anyone who has tried to crop their own photo for official documents will appreciate it: we take the face contour and automatically crop the photo to the desired aspect ratio with the correct head position. The gyroscope can help determine the proper shooting angle.
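A sketch of reading the face oval through the contour API (the detector options and FaceContour.FACE are real ML Kit API; the bounding-box helper and any cropping logic around it are illustrative):

// Contours are enabled separately from classification
val contourOptions = FaceDetectorOptions.Builder()
    .setContourMode(FaceDetectorOptions.CONTOUR_MODE_ALL)
    .build()

// Bounding box of the face oval, a starting point for a document-photo crop
fun faceOvalBounds(face: Face): RectF? {
    val points = face.getContour(FaceContour.FACE)?.points ?: return null
    if (points.isEmpty()) return null
    return RectF(
        points.minOf { it.x },
        points.minOf { it.y },
        points.maxOf { it.x },
        points.maxOf { it.y }
    )
}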

Let’s become friends on Twitter, Github, and Facebook!

Clap, share and follow me if you like it🐱‍💻
