Embrace Expected Failures in XCTest
Testing is a crucial part of building a great app: Great tests can help you track down important issues before release, improve your workflow, and provide a quality experience upon release. For issues that can't be immediately resolved, however, XCTest can help provide better context around those problems with XCTExpectFailure. Learn how this API works, its strict behavior, and how to improve the signal-to-noise ratio in your tests to identify new issues more efficiently.
-
♪ Bass music playing ♪
Wil Addario-Turner: Hi, welcome to "Embrace Expected Failures in XCTest." My name is Wil, and in this session, I'm going to discuss ways of improving the data you get when you run your project's tests. To begin with, let's consider why we test our code in the first place. Of course, at a high level, it's how we ensure the quality of the product. But in more concrete terms, I would say it's to discover bugs before we ship and not afterwards. Now, testing is an investment. It takes resources to create, run, and maintain tests. As with any investment, we want to maximize our returns while minimizing our cost. This session focuses on tools for reducing the maintenance cost.
By maintenance, I'm primarily referring to how you handle failures when they occur in your test suites. When a test that's been passing begins to fail, that's a valuable piece of new information. This indicates either a flaw in the product, a problem in the test itself, or some issue in one of the dependencies -- that is, all of the frameworks and subsystems on top of which the product sits. Regardless of the type of problem, once that failure has been registered, subsequent reports of the same failure are significantly less valuable, because they represent information that you already have. Ideally, any new failure is triaged and fixed quickly. However, your team may not be able to resolve a problem right away, which means that the failure quickly goes from being a valuable piece of new information to a noisy distraction.
Given a known failure in your tests that cannot be immediately resolved, what tools are available for managing the noise? Two approaches that might come to mind are disabling and skipping. Let's consider the tradeoffs for these, and then we'll talk about the best tool -- and the topic of this session -- XCTest's expected failures API.
Xcode lets you disable tests in the test plan or scheme. You can use this for known test failures, and one advantage is that your test code will continue to be compiled. However, since the code won't execute, you won't see it in the test report. This reduced visibility makes it harder to track as an issue that needs to be resolved. Where this feature -- the ability to choose which tests are enabled or disabled -- really shines is in curating collections of tests for specific purposes. But it's rarely the best way to handle a known failure.
XCTSkip is another way you might manage a failing test. With this approach, not only does the code continue to get built with your tests, it also executes up until the point where XCTSkip is called. This means that it's included in the test report, giving you much better visibility of the issue. However, it doesn't execute all of your test, which means you lose out on potentially useful information in the form of new issues and changes to the existing issue. XCTSkip is a great tool for managing configuration-based limitations on your test, such as requiring a specific OS version or device type. In the example here, the test will be skipped if it's not running on an iPad.
This brings us to XCTExpectFailure, a set of functions in XCTest specifically designed for managing known failures. In Swift, it has a number of overloads for different use cases, and Objective-C provides the same capabilities with several distinct functions. With this API, your test executes normally, but the results are changed as follows: a failure in the test will be reported as an expected failure, and a failure in the test suite containing that test will be reported as a pass, unless of course some other test in it fails. This eliminates the noise generated by the failure, making it easier to see whether there are any other issues in your tests.
Of course, suppressing the noise doesn't solve the underlying issue. So to help you keep track of it, the API takes a failure reason. This string documents the problem in your code, and you can even embed a URL for your issue-tracking system. Xcode's test report UI shows expected failures just as it does normal failures or skipped tests. When you hover, if the failure reason contains a URL, an issue-tracking button appears that lets you jump out to the link.
So let's see how this works! I have here a simple project with some unit tests for my VendorAccount class. I'm going to run the tests, and when they finish, we'll see that one is failing while the other is passing. You can see three test result icons: one for each test -- a red X for the failing test and a green check for the passing test -- and one for the test suite, which gets a red X because one of its tests has failed, so we consider the suite itself to have failed. Now I'm going to add a call to XCTExpectFailure at the beginning of the failing test. You can see the failure reason begins with a URL that references the bug I've filed to keep track of this failure. Now I'll rerun the tests and we'll see how this affects the outcome. OK, so the red X icon for the failing test has changed to a gray X, which is the indicator for an expected failure. What's even more interesting is that the test suite icon has changed from a red X to a green dash. This icon indicates that the test suite has passed with a mixed state, meaning that one or more of its tests did not pass, but was either a skip or an expected failure. So that's how easy it is to use XCTExpectFailure to handle a failing test. Now let's take a closer look at the API.
The first consideration when using XCTExpectFailure is which API variant to call. There are two approaches: a stateful approach, where you call XCTExpectFailure and any subsequent failure in the test is treated as expected; alternatively, you can use the scoped approach, in which you wrap the failing code in a closure passed to XCTExpectFailure. Let's look at some examples. Here's a very simple test that calls some function in my project. The test begins to fail because the function is no longer returning true. Here's what it looks like to use the stateful expected failure approach, just as we did in the demo. Alternatively, we could use the scoped approach by wrapping the failing code in a closure trailing the call to XCTExpectFailure. This means any failure in the code outside the closure will be reported normally.
The API also supports nesting. In other words, you can call the API more than once in a test, including inside the closure from another call. This is an important consideration when using the API in test library code. For example, if a common utility function begins to fail, many tests could be impacted, some of which might already be using XCTExpectFailure for different issues. When a failure occurs in the context of nested calls to XCTExpectFailure, the issue is matched against the nearest call site first, and if rejected by the matcher, will be passed on to the next call and so on, with stack semantics for the calls to XCTExpectFailure. For this reason, with shared code, it's best to use the closure-based API to limit the effects on test state.
The next thing to consider is how precisely to match the issue. By default, any failure in the affected scope is caught, but you can be more selective by specifying an issue-matching filter. In this example, we construct an object of type XCTExpectedFailure.Options and define its issueMatcher. The matcher is passed the XCTIssue object with the failure details, so you have full access to that information in determining whether or not to match. If the matcher rejects the failure, then it won't be handled as an expected failure. This can be useful in detecting when new problems show up in the code being tested.
The options object also has a property that can be used to disable the expected failure in certain configurations. For example, my test may be passing on macOS but failing on iOS, so I only want to expect failures on iOS. To achieve that, I disable the expected failure via the options, but only for platforms where I don't need it.
So what happens when your expected failures stop failing? Usually this means the underlying issue has been resolved, which is great. But how does XCTExpectFailure behave? If you're still calling the API and no failure is occurring, it will generate a new and distinct failure. We call this an "unmatched expected failure," and it's part of the strict behavior that is the default for XCTExpectFailure. This behavior helps you maintain your code by prompting you to remove unnecessary calls to the API.
But what about tests that only fail some of the time? These fall into two categories, the first of which is deterministic and includes environmental or other knowable conditions, such as the earlier example of a test that only fails on certain platforms. On the other hand, some failures are inherently nondeterministic. These might be caused by timing issues, unreliable ordering dependencies, or concurrency bugs.
For nondeterministic failures, the strict behavior isn't helpful; it just generates noise. Once again, the options object provides a way to control this. The isStrict flag, which defaults to true, can be turned off. Then, if XCTExpectFailure does not catch a failure, it will still allow the test to pass. In Swift, you can also specify the strict behavior as a direct parameter to XCTExpectFailure. Disabling strict behavior is a great way to handle flaky or nondeterministic tests in your project.
As an aside, when you need to investigate a nondeterministic failure, Xcode makes it easy to run a test multiple times, stopping when it fails or some other condition is met. This can be really helpful in tracking down failures in flaky tests. For more about this, watch the session "Diagnose Unreliable Code with Test Repetitions."
So that's XCTExpectFailure -- APIs in XCTest for improving the signal-to-noise ratio in your test suite results. This helps you identify new issues more efficiently, leading to higher-quality code. Thanks for watching! ♪
-
3:31 - XCTSkip unless device is iPad
try XCTSkipUnless(UIDevice.current.userInterfaceIdiom == .pad, "Only supported on iPad")
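As a usage sketch, the same guard might sit inside a complete test method like this; the class and test names are hypothetical, not part of the session's sample code:
import XCTest
import UIKit

final class PadOnlyTests: XCTestCase {
    // Hypothetical iPad-only test: on any other idiom, XCTSkipUnless throws,
    // and the test is reported as skipped rather than failed.
    func testMultitaskingLayout() throws {
        try XCTSkipUnless(UIDevice.current.userInterfaceIdiom == .pad, "Only supported on iPad")
        // iPad-only assertions would go here.
    }
}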
-
4:31 - XCTExpectFailure
XCTExpectFailure("<https://dev.myco.com/bugs/4923> myValidationFunction is returning false")
-
7:14 - Scoped XCTExpectFailure
XCTExpectFailure("<https://dev.myco.com/bugs/4923> fix myValidationFunction") { XCTAssert(myValidationFunction()) }
-
8:34 - XCTExpectFailure with issue matcher
let options = XCTExpectedFailure.Options()
options.issueMatcher = { issue in
    return issue.type == .assertionFailure
}
XCTExpectFailure("https://dev.myco.com/bugs/4923 fix myValidationFunction", options: options)
-
9:03 - Disable XCTExpectFailure for some platforms
let options = XCTExpectedFailure.Options()
#if os(macOS)
options.isEnabled = false
#endif
XCTExpectFailure("https://dev.myco.com/bugs/4923 fix myValidationFunction", options: options) {
    XCTAssert(myValidationFunction())
}
-
10:39 - Disable strict XCTExpectFailure behavior via options
let options = XCTExpectedFailure.Options()
options.isStrict = false
XCTExpectFailure("https://dev.myco.com/bugs/4923 fix myValidationFunction", options: options) {
    XCTAssert(myValidationFunction())
}
-
10:53 - Disable strict XCTExpectFailure behavior via parameter
XCTExpectFailure("<https://dev.myco.com/bugs/4923> fix myValidationFunction", strict: false) { XCTAssert(myValidationFunction()) }
-