VMC’s SDET3/DEV 1 Chris Stephenson looks at when and why to use element IDs, the usefulness of events for automation, test content, and other insights to help get your UI automation running smoothly and accurately.
Yep, it happened: your manager just announced that all projects will now have automated UI testing. Why?! It’s been tried. It never gets us anywhere. After hours spent running the tests, more often than not they’re broken, which means more hours trying to figure out what broke them. When they aren’t broken, they’re reporting “bugs” that aren’t bugs, just out-of-date tests. And then there are the bugs that DO get through the UI automation.
The benefits from automation are well known for things like networking and API calls. What if we could get some of the same benefits in our UI test suite? It can be done, but not without some work. Let’s take a look at the UI automation pain points and see what developers can do to help fix them.
“I’m Going to Need to See Some ID”
UI automation is slow. To some degree, that is inherent in the process. Unlike a unit test or API call test, the UI automation needs to bring up the UI, with all of the underlying objects. This takes time and memory. Once this is done, the tests can actually start performing their assigned actions. In most cases, that’s something along the lines of scanning through all of the elements of the UI under test, then doing a series of comparisons against various properties to find the element that needs to be interacted with. Frequently, that goes something like this:
“Go find the 2nd element that is a container, then look inside that for an element that has ‘Eat at Joes’ in the title.”
So now the automation has to scan all the elements by type looking for containers, then dig into the children of the second container and read the titles of all of those elements until it finds ‘Eat at Joes’. Alternately, the test could do this:
“Go find the element with ID ‘JoesSign1’.”
Now the automation scrubs through just the IDs of all the elements, which are intentionally made readily available for accessibility support, and returns the element with the matching ID. In most cases, this is a couple of orders of magnitude faster.
By simply editing the automation ID to something unique, a great deal of time can be saved. But that’s not all. Let’s look at that first test command again:
“Go find the 2nd element…”
The automation scans through the object model from top to bottom, root to leaf. We’re adding a new feature to the product that is going to put another container-type element between the 1st and 2nd container elements. What just happened to our test that is running that command? Yeah, busted. So again, simply updating an existing field to something unique will go a long way toward speeding up and stabilizing UI automation. If you only do this one thing, and stop reading here, you will have done wonders for your UI automation. But wait, there’s more.
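To make the two lookup styles concrete, here is a minimal sketch. The `Element` tree, `find_nth_container`, and `find_by_id` helpers are hypothetical stand-ins for whatever your automation framework actually provides; the point is what happens to each query when a new container appears.

```python
# Hypothetical, minimal model of a UI element tree.
class Element:
    def __init__(self, kind, element_id=None, title=None, children=None):
        self.kind = kind
        self.element_id = element_id
        self.title = title
        self.children = children or []

def walk(root):
    """Depth-first traversal, root to leaf."""
    yield root
    for child in root.children:
        yield from walk(child)

def find_nth_container(root, n):
    """Positional lookup: fragile -- breaks if a container is inserted."""
    containers = [e for e in walk(root) if e.kind == "container"]
    return containers[n - 1] if len(containers) >= n else None

def find_by_id(root, element_id):
    """ID lookup: one pass over the tree, and it survives layout changes."""
    return next((e for e in walk(root) if e.element_id == element_id), None)

# Original layout: the sign lives in the 2nd container.
ui = Element("window", children=[
    Element("container"),
    Element("container", children=[
        Element("label", element_id="JoesSign1", title="Eat at Joes"),
    ]),
])

assert find_nth_container(ui, 2).children[0].title == "Eat at Joes"  # works... for now

# A new feature inserts a container between the 1st and 2nd ones:
ui.children.insert(1, Element("container", title="NewFeaturePanel"))

assert not find_nth_container(ui, 2).children                 # positional query: busted
assert find_by_id(ui, "JoesSign1").title == "Eat at Joes"     # ID query: still fine
```

The ID lookup also never needs the "dig into the children and read every title" second pass, which is where the speed difference comes from.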
“The Waiting Is the Hardest Part”
I can’t begin to tell you how many tests I’ve refactored, or worse, written, with something like this in the code:
“Press the big red button, then wait for… oh, I don’t know, 30 seconds?… then check to see what happened.”
Really? You hit the button to make it do something. You know it’s going to do something, that’s why a button was there. Why are you waiting for a set time? Just listen for the something to be done. Oh. Right. There is no OnButtonDone event to listen to. This is the root cause of the “UI automation takes forever to run” complaint. Nothing slows automation down as consistently or as heavily as Thread.Sleep();. As a developer, you made that button. It does a thing. You know what that thing is. You know when it’s done. So add a simple OnButtonDone event, and fire it when your button is done doing its thing.
I hear you. “Just reduce the wait time – problem solved”. Sure, 30 seconds seems excessive. Unless your button is going to query a database that frequently has a lot of work to slog through, and the database is set to a 30-second timeout. If I don’t wait at least as long as the timeout, I run the risk of calling the test a failure even though the data came back valid, if a little slow. But what about the days when the database isn’t busy? When the data returns in 30ms? How much time could be saved, test over test, by being able to definitively know when it is safe to check the return data? The results of guessing at wait times are generally painful, all for want of a simple OnButtonDone event.
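The pattern above can be sketched in a few lines. The `Button` class, its `done` event, and the 30-second upper bound are all hypothetical stand-ins; the idea is simply to signal completion instead of guessing at a delay.

```python
import threading
import time

class Button:
    """Hypothetical product control that signals when its work is done."""
    def __init__(self):
        self.done = threading.Event()   # the "OnButtonDone" signal
        self.result = None

    def press(self):
        def work():
            # Stand-in for the real work (e.g. a database query).
            time.sleep(0.05)            # fast today; could take 30s tomorrow
            self.result = "ok"
            self.done.set()             # fire OnButtonDone
        threading.Thread(target=work).start()

button = Button()
start = time.monotonic()
button.press()

# Instead of sleeping a fixed 30 seconds, wait on the event, keeping the
# DB timeout as an upper bound so a hung button still fails the test.
finished = button.done.wait(timeout=30)
elapsed = time.monotonic() - start

assert finished and button.result == "ok"
assert elapsed < 10.0   # we waited ~50ms, not a fixed 30 seconds
```

On a busy day the same test waits the full duration of the work, and on a quiet day it finishes almost instantly; either way it never waits longer than it has to.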
“What, Exactly, Are We Looking at Here?”
Many products are designed to provide users with a way to interact with some kind of dynamic content. As such, the products tend to use the content as a support and driver in UI development and real-time behavior. Unfortunately, automated tests are far more difficult to design in a similarly adaptable manner because they need to be able to definitively say “yes, that worked” or “nope, it’s busted.” Changes in content dramatically increase the complexity of UI automation, and correspondingly reduce its stability and maintainability.
The best way to get around this particular problem is to provide test content. You already have a mechanism to swap content easily (that’s the point of your product), so use that feature to give the tests a little love. Build out test content and provide an access method to the tests that can be used to replace the live content. Alternately, provide your test engineer with a mechanism for inserting their own content at test run-time.
To be clear, this is for testing around truly dynamic content that changes without alteration of the product code, and can potentially change the way the product code behaves. Anything with a data-driven catalog of “things” that are altered outside of the product and subsequently displayed or interacted with by the product user (any inventory system, ever) would qualify. If the “dynamic content” is feature changes, or a set number of scripted states that can’t be altered without rebuilding the product, then this doesn’t really apply.
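A sketch of what that test hook might look like, under stated assumptions: the `Product`, `LiveCatalog`, `TestCatalog`, and `set_content_provider` names are all hypothetical, since the real seam is whatever your product already uses to swap content.

```python
class Product:
    """Hypothetical product that renders whatever its content provider serves."""
    def __init__(self, provider):
        self._provider = provider       # the content seam the product already has

    def set_content_provider(self, provider):
        """Test hook: replace live content with known test content."""
        self._provider = provider

    def render_titles(self):
        return [item["title"] for item in self._provider.items()]

class LiveCatalog:
    def items(self):
        # In the real product this hits a service; contents change constantly.
        return [{"title": "Whatever is in the catalog today"}]

class TestCatalog:
    """Fixed content: the test can now assert exact, stable results."""
    def items(self):
        return [{"title": "Eat at Joes"}, {"title": "Joes Diner"}]

product = Product(LiveCatalog())
product.set_content_provider(TestCatalog())   # inject test content at run-time
assert product.render_titles() == ["Eat at Joes", "Joes Diner"]
```

Because the test controls the content, a changed catalog no longer shows up as a failed test.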
“So, When You Say ‘No’, What You Really Mean…”
You’re building an awesome product, and you don’t want an ugly error message detracting from that when some underlying mechanism glitches, so those errors get filed away in some dusty log directory somewhere, assuming they get filed away at all. But those glitches are exactly what tests are looking for.
I agree that smacking the user with an error isn’t the preferred course, but test automation needs this data. Specifically, what happened, when, and why, in real-time. Looping back to the OnButtonDone concept, throw in a little bit of data for the tests to catch. A simple success/fail bool is a good start. An enum that can be referenced at run-time to determine the “why” of the failure is even better.
If I’m writing a test that opens a UI, pushes a button, then reads the resulting dialog, I’ll be able to file a much cleaner bug if my test tells me “I hit the button, and it failed because the DB is not responding” instead of “I ran the test, and the dialog wasn’t there.” My test doesn’t know about a database. It doesn’t care. It’s just telling me what the button told it. This way, no one needs to spend hours trying to figure out why the dialog wasn’t there – it was called out in the test.
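A minimal sketch of that success/fail bool plus reason enum, with all names (`FailureReason`, `ButtonResult`, `press_button`) hypothetical:

```python
from dataclasses import dataclass
from enum import Enum, auto

class FailureReason(Enum):
    NONE = auto()
    DB_NOT_RESPONDING = auto()
    VALIDATION_FAILED = auto()

@dataclass
class ButtonResult:
    """Payload for an OnButtonDone-style event: what happened, and why."""
    success: bool
    reason: FailureReason = FailureReason.NONE

def press_button(db_available):
    # Stand-in for the real handler: report the outcome instead of hiding it.
    if not db_available:
        return ButtonResult(False, FailureReason.DB_NOT_RESPONDING)
    return ButtonResult(True)

result = press_button(db_available=False)
if not result.success:
    # The test can now file "button failed: DB_NOT_RESPONDING" instead of
    # "the dialog wasn't there."
    message = f"I hit the button, and it failed because {result.reason.name}"

assert result.reason is FailureReason.DB_NOT_RESPONDING
```

The test still knows nothing about the database; it just relays what the button reported.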
“Are You Talking to Me?”
I certainly hope so. Clear and constant communication is the best defense against UI automation problems. When a developer and a tester work closely together on a product, automated tests turn out faster, more robust, and more accurate. A tester can’t really tune their automation until the product feature is checked in and functional. They can pre-build and mock out feature behavior, however, and will be far closer to accurate with direct guidance from the developer who is working on the feature. This minimizes the gap between feature complete and automation complete.
Constant communication also clears out a lot of the “false positives” test automation tends to be known for. If the automation is expecting the result of the button push to be a confirmation dialog, and that dialog was removed to minimize user clicks, well, the test will fail on what is ultimately a “good” behavior. It may be that “everyone knew” that the feature was changing, but it doesn’t hurt anything, and helps a great deal, if you touch base with the test engineer anyway.
I hope this helps to clear some things up, and perhaps explains why your test engineers are frequently muttering under their breath about IDs, events, and content.