
Guide to Reduce UI Test Flakiness on Android

Updated: Aug 8

Introduction

Flakiness in automated UI testing can be one of the most frustrating challenges developers face. When building an Android application, having a solid suite of tests is crucial for ensuring functionality and preventing regressions. However, UI tests, particularly those built using frameworks like Espresso, are often plagued by intermittent failures that are hard to diagnose and fix. These flaky tests can lead to wasted time, reduced confidence in the test suite, and ultimately, less robust software.


In this comprehensive guide, we will explore various strategies to reduce flakiness in Espresso-based Android UI tests. These strategies can also be applied to other testing frameworks, ensuring more stable and reliable automated tests. From disabling animations to using custom wait conditions and leveraging retry mechanisms, we will cover all aspects of minimizing flakiness. Let's dive into the specifics of making your automated UI tests more dependable.



Understanding Flakiness in Automated UI Testing


What is Flakiness?

Flakiness in the context of automated testing refers to the inconsistent results of test cases. A test that passes or fails sporadically without any changes in the codebase or the test environment is considered flaky. Flaky tests can lead to false positives or false negatives, making it difficult to trust the test results.



Common Causes of Flakiness

Several factors contribute to the flakiness of automated UI tests, including:

  • Timing Issues: Tests may proceed before the UI elements are fully loaded or ready.

  • Resource Contention: Concurrent processes competing for system resources can lead to test instability.

  • Environmental Variations: Differences in hardware, operating systems, and network conditions can affect test outcomes.

  • Animations and Transitions: UI animations can interfere with the timing of test actions.


Impact of Flaky Tests

Flaky tests can severely impact the development process by:

  • Reducing Confidence: Developers may lose trust in the test suite's ability to catch bugs.

  • Wasting Time: Debugging and rerunning flaky tests consume valuable development time.

  • Impairing Continuous Integration (CI): Flaky tests can cause CI pipelines to fail, delaying deployment and delivery.



Disabling Animations in Tests


Importance of Disabling Animations

Animations and transitions in the UI can cause tests to fail intermittently because they introduce timing variability. Disabling animations ensures that UI elements are in a consistent state, making tests more predictable.


Automating the Disabling of Animations

To disable animations during instrumented tests, enable the animationsDisabled option in the testOptions block of your module's build.gradle file:

gradle

android {
    ...
    testOptions {
        animationsDisabled = true
    }
}

This setting only applies to instrumented tests run from the command line via Gradle. For tests run from within Android Studio, you need to disable animations on the device itself, either manually through the steps below or with the scripted approach sketched after them:

  1. Open the device's Settings.

  2. Navigate to Developer options.

  3. Set the Window animation scale, Transition animation scale, and Animator duration scale to Off.
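
On CI machines and shared devices, toggling Developer options by hand doesn't scale. The same three settings can be flipped with adb (settings put global window_animation_scale 0, and likewise for transition_animation_scale and animator_duration_scale), or from test code. Below is a minimal Kotlin sketch, assuming an instrumented test running on API 21+; disableAnimations is a hypothetical helper, not a library API:

kotlin

import android.os.ParcelFileDescriptor.AutoCloseInputStream
import androidx.test.platform.app.InstrumentationRegistry

// Hypothetical helper: sets the three animation scales to 0 from inside an
// instrumented test. Runs with the shell's permissions via UiAutomation.
fun disableAnimations() {
    val automation = InstrumentationRegistry.getInstrumentation().uiAutomation
    listOf(
        "settings put global window_animation_scale 0",
        "settings put global transition_animation_scale 0",
        "settings put global animator_duration_scale 0"
    ).forEach { command ->
        // Drain and close the output stream so each command finishes before we proceed.
        AutoCloseInputStream(automation.executeShellCommand(command)).use { it.readBytes() }
    }
}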



Implementing Wait Conditions


Handling Timing Issues with Idling Resources

Espresso provides the Idling Resources API to handle timing issues by ensuring the application is idle before proceeding with test actions. However, using Idling Resources often requires adding code to the production codebase, which is not ideal.
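
For context, here is a minimal sketch of the counting pattern. The EspressoIdlingResource wrapper and MyScreenTest class below are common conventions rather than library classes; production code increments the counter before starting background work and decrements it when the work completes, and the test registers it so Espresso waits until the counter reaches zero:

kotlin

import androidx.test.espresso.IdlingRegistry
import androidx.test.espresso.idling.CountingIdlingResource
import org.junit.After
import org.junit.Before

// Hypothetical app-side wrapper around Espresso's CountingIdlingResource.
object EspressoIdlingResource {
    val resource = CountingIdlingResource("GLOBAL")

    fun increment() = resource.increment()

    fun decrement() {
        if (!resource.isIdleNow) {
            resource.decrement()
        }
    }
}

// In the test class, register the resource so Espresso synchronizes with it.
class MyScreenTest {
    @Before
    fun registerIdlingResource() {
        IdlingRegistry.getInstance().register(EspressoIdlingResource.resource)
    }

    @After
    fun unregisterIdlingResource() {
        IdlingRegistry.getInstance().unregister(EspressoIdlingResource.resource)
    }
}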


Custom WaitForCondition Method

A more flexible approach is to use a custom waitForCondition method. This method checks a condition in a loop and waits until the condition is met or a timeout occurs:

kotlin

import androidx.annotation.IntRange
import java.util.concurrent.Callable

/**
 * Waits for the given `condition` to return `true`.
 * If the timeout elapses before the condition returns `true`, this method throws an AssertionError.
 * @param reason    Reason printed when waiting for the condition times out.
 * @param condition Condition to wait for.
 * @param timeout   Timeout in milliseconds.
 */
fun waitForCondition(reason: String, condition: Callable<Boolean>, @IntRange(from = 0) timeout: Long) {
    val end = System.currentTimeMillis() + timeout

    try {
        while (!condition.call()) {
            if (System.currentTimeMillis() > end) {
                throw AssertionError(reason)
            }
            // Poll roughly once per frame (16 ms) to avoid busy-waiting.
            Thread.sleep(16)
        }
    } catch (e: Exception) {
        // Rethrow checked exceptions (e.g. InterruptedException) unchecked.
        // The AssertionError above is an Error, so it passes through untouched.
        throw RuntimeException(e)
    }
}
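
For example, a test could block until some piece of application state flips; dataRepository.isLoaded below is a hypothetical stand-in for whatever signal your app exposes:

kotlin

waitForCondition(
    reason = "Timed out waiting for data to load",
    condition = Callable { dataRepository.isLoaded }, // hypothetical app state
    timeout = 5000
)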

Waiting for Views to Appear

In addition to waiting for conditions, you may need to wait for specific views to appear in the UI. You can use a custom waitForView method for this purpose:

kotlin

import android.view.View
import androidx.test.espresso.PerformException
import androidx.test.espresso.UiController
import androidx.test.espresso.ViewAction
import androidx.test.espresso.matcher.ViewMatchers.isDisplayed
import androidx.test.espresso.util.HumanReadables
import androidx.test.espresso.util.TreeIterables
import java.util.concurrent.TimeoutException
import org.hamcrest.Matcher
import org.hamcrest.Matchers
import org.hamcrest.StringDescription

fun waitForView(viewMatcher: Matcher<View>, timeout: Long = 10000, waitForDisplayed: Boolean = true): ViewAction {
    return object : ViewAction {
        override fun getConstraints(): Matcher<View> {
            return Matchers.any(View::class.java)
        }

        override fun getDescription(): String {
            val matcherDescription = StringDescription()
            viewMatcher.describeTo(matcherDescription)
            return "wait for a specific view <$matcherDescription> to be ${if (waitForDisplayed) "displayed" else "not displayed"} during $timeout millis."
        }

        override fun perform(uiController: UiController, view: View) {
            uiController.loopMainThreadUntilIdle()
            val startTime = System.currentTimeMillis()
            val endTime = startTime + timeout
            val visibleMatcher = isDisplayed()
            do {
                // Walk the view hierarchy looking for a view that matches and is visible.
                val viewVisible = TreeIterables.breadthFirstViewTraversal(view)
                    .any { viewMatcher.matches(it) && visibleMatcher.matches(it) }

                if (viewVisible == waitForDisplayed) return
                // Yield to the main thread before checking again.
                uiController.loopMainThreadForAtLeast(50)
            } while (System.currentTimeMillis() < endTime)

            // The timeout elapsed without the view reaching the desired state.
            throw PerformException.Builder()
                .withActionDescription(this.description)
                .withViewDescription(HumanReadables.describe(view))
                .withCause(TimeoutException())
                .build()
        }
    }
}

Using this method, you can wait for a view to appear in your tests like this:

kotlin

onView(isRoot()).perform(waitForView(withText("My awesome view")))

Using the Retry Feature of Espresso


Addressing Touch Injection Issues

Touch injection in Espresso can be problematic: if the main thread is busy while the touch is being injected, a tap intended as a simple click can be registered as a long press. To mitigate this, you can use Espresso's rollback-and-retry support for clicks.


Implementing Click Retry

Espresso's default click() action accepts an optional rollback action that runs if the tap is accidentally registered as a long press; after the rollback, the click is attempted again. Here's an example:

kotlin

onView(withText("My Button")).perform(click(pressBack()))

If Espresso detects that the tap turned into a long press, it performs the rollback action (here, pressing the back button to dismiss whatever the long press opened, such as a context menu) and then retries the click.



Automatically Retrying Flaky Tests


Marking Flaky Tests

To handle flaky tests more robustly, you can implement an automatic retry mechanism. This involves marking tests that should be retried with a custom annotation, such as FlakyTest, and updating your test runner to retry these tests.


Implementing Automatic Retries

A full implementation of automatic retries would require significant changes to your test infrastructure, including modifying the test runner. However, the basic idea is to rerun flaky tests multiple times, only marking them as failures if they consistently fail.
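
As a starting point, here is a minimal sketch built on a JUnit 4 TestRule instead of a custom runner. The FlakyTest annotation and RetryRule class are illustrative names, not library classes (androidx.test also ships its own @FlakyTest marker you could reuse); tests without the annotation run exactly once:

kotlin

import org.junit.rules.TestRule
import org.junit.runner.Description
import org.junit.runners.model.Statement

// Marker for tests that are allowed to be retried.
@Retention(AnnotationRetention.RUNTIME)
@Target(AnnotationTarget.FUNCTION)
annotation class FlakyTest

// Reruns @FlakyTest-annotated tests up to maxRetries times,
// failing only if every attempt fails.
class RetryRule(private val maxRetries: Int = 3) : TestRule {
    override fun apply(base: Statement, description: Description): Statement {
        // Tests not marked as flaky run once, unchanged.
        if (description.getAnnotation(FlakyTest::class.java) == null) return base

        return object : Statement() {
            override fun evaluate() {
                var lastFailure: Throwable? = null
                repeat(maxRetries) {
                    try {
                        base.evaluate()
                        return // Attempt passed; stop retrying.
                    } catch (t: Throwable) {
                        lastFailure = t
                    }
                }
                throw lastFailure!! // Every attempt failed; report the last error.
            }
        }
    }
}

In a test class, you would attach the rule with @get:Rule val retry = RetryRule() and mark individual tests with @FlakyTest.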



Avoiding UI Tests When Possible


Code Design Considerations

Often, the need for extensive UI tests indicates a suboptimal code design. By structuring your code to minimize the need for UI tests, you can reduce flakiness.


Using Design Patterns

Design patterns such as Model-View-Presenter (MVP) can help separate business logic from UI logic, making it easier to test each component independently. This reduces the need for complex UI tests and results in more stable tests overall.
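
To make this concrete, here is a minimal sketch of the MVP split; LoginView and LoginPresenter are illustrative names. Because the presenter never touches Android framework classes, its logic can be covered by a plain JVM unit test instead of an Espresso test:

kotlin

import org.junit.Assert.assertEquals
import org.junit.Test

// The view contract: the presenter only knows this interface.
interface LoginView {
    fun showError(message: String)
    fun navigateToHome()
}

// Business logic lives here, free of any Android dependencies.
class LoginPresenter(private val view: LoginView) {
    fun onLoginClicked(username: String, password: String) {
        if (username.isBlank() || password.isBlank()) {
            view.showError("Username and password are required")
        } else {
            view.navigateToHome()
        }
    }
}

// A plain JUnit test with a hand-rolled fake: no emulator, no flakiness.
class LoginPresenterTest {
    @Test
    fun `blank credentials show an error`() {
        var error: String? = null
        val presenter = LoginPresenter(object : LoginView {
            override fun showError(message: String) { error = message }
            override fun navigateToHome() {}
        })

        presenter.onLoginClicked("", "")

        assertEquals("Username and password are required", error)
    }
}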



Conclusion

Flakiness in automated UI testing is a common challenge, but with the right strategies, it can be significantly reduced. By disabling animations, implementing custom wait conditions, using the retry feature of Espresso, and avoiding unnecessary UI tests, you can achieve more stable and reliable test results.


By following these tips, you can improve the stability of your automated UI tests and ensure a smoother development process. Remember, the goal is to build a robust test suite that gives you confidence in your application's quality and allows you to catch and fix issues early.



Key Takeaways

  • Flakiness in automated UI tests can be significantly reduced with proper strategies.

  • Disabling animations ensures a consistent test environment.

  • Custom wait conditions help handle timing issues effectively.

  • The retry feature of Espresso addresses touch injection problems.

  • Automatic retries provide a fallback for inherently flaky tests.

  • Minimizing UI tests through better code design and patterns like MVP leads to more reliable tests.



FAQs


What is flakiness in automated testing?

Flakiness refers to the inconsistent results of test cases, where tests may pass or fail intermittently without any changes to the codebase or test environment.


Why are animations problematic for UI tests?

Animations introduce timing variability, which can cause tests to fail intermittently. Disabling animations ensures that UI elements are in a consistent state, making tests more predictable.


How can I handle timing issues in UI tests?

You can handle timing issues by using the Idling Resources API provided by Espresso or by implementing custom wait conditions to ensure that the application is idle before proceeding with test actions.


What is the retry feature of Espresso?

The retry feature of Espresso allows you to handle accidental long presses by providing a rollback action that is executed when Espresso detects a long press instead of a tap.


How can I automatically retry flaky tests?

You can implement an automatic retry mechanism by marking flaky tests with a custom annotation and updating your test runner to rerun these tests multiple times, only marking them as failures if they consistently fail.


How can design patterns help reduce the need for UI tests?

Design patterns like MVP separate business logic from UI logic, making it easier to test each component independently. This reduces the need for complex UI tests and results in more stable tests overall.


What is the impact of flaky tests on the development process?

Flaky tests can reduce confidence in the test suite, waste development time, and impair continuous integration pipelines by causing intermittent test failures.


Why is it important to disable animations in CI environments?

Disabling animations in CI environments ensures that tests run in a consistent state, reducing the variability introduced by animations and making test results more reliable.

