Last month I found an interesting conversation on Twitter about UI tests that started with a tweet from Alan Page:
I'm not against discussions on the invalidity of the test automation pyramid.
— Alan Page (@alanpage) May 3, 2018
If you don't like it, you use whatever model you want as long as it suggests you write AS FEW UI TESTS AS POSSIBLE.
seriously - stop your infatuation with UI tests
Not long after that Simon Stewart, the Selenium project Lead, tweeted in response:
Listen, fellow mortals. This is truth. I love what we can do with UI tests, but each one you write needs to be incredibly valuable to offset the cost of writing and maintaining it. https://t.co/aJjHTxpUcz
— Simon Mavi Stewart (@shs96c) May 4, 2018
Both tweets drew some interesting responses. Some agree that fewer UI tests is a good thing, some disagree, and some saw this as a problem with tooling and offered alternatives. It’s always good to see healthy discussion on topics like this, but I couldn’t help but feel that some points need more elaboration.
Testing the UI or Testing Through the UI
Finding myself getting frustrated with all this 'UI Tests' are slow chat. I'm not a semantic person, I can just about spell it. But let's be clear about 'UI Tests'. When most complain about them they means tests that start with an interaction on the UI.
— Richard Bradshaw (@FriendlyTester) May 8, 2018
In the time Richard and I have spent helping teams with their automation and delivering Automation in Testing, we’ve noticed a common pattern: an overwhelming number of Automators are heavily biased towards building automated checks via the user interface layer. We see this not only in day-to-day work but also in job roles, blogs on automation and available training. They are almost always focused on UI automation tools such as Selenium, Cypress, Sikuli, the list goes on…!
The tweets from Alan and Simon are a challenge to this common assumption that we should automate everything via the UI, and I welcome it. Not everything has to be automated via the UI. Our UI is just the shop window to a business that goes all the way to the back of the store and beyond.
One of the traps a lot of Automators fall into is a lack of appreciation for the risks being mitigated by their automated checks. The assumption is that human testers do things via the UI, so automation should be done through the UI as well. This results in complex automated checks that are vulnerable to changes in the product and prone to general flakiness.
But not all risks exist within the UI. A lot of products contain most of their complex logic within backend services or libraries that are hidden from the UI to make things easier for the user. So if the risk exists in the backend, then why are we testing it through the UI?
One technique we like to use in Automation in Testing when designing our checks is to ask ourselves the following question:
Am I ‘Testing the UI or Testing Through the UI’?
This question (TuTTu for short) helps focus our attention on the specific risks we are interested in checking. If the risk is around storing data, then we are ‘Testing Through the UI’ and we don’t need the UI; we can drive the check via a different interface, such as an HTTP API or a library API. If the risk is around rendering UI components according to a style guide, then we are ‘Testing the UI’ and we need to drive the check via the product’s UI. Additionally, once we have confirmed we are ‘Testing the UI’, this tells us the backend isn’t our focus, the UI is. This can allow us to run our check against a full-stack system, or we could look to stub the backend, a discussion for another post in the future.
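To make that concrete, here is a minimal sketch of ‘Testing Through the UI’ at the HTTP layer. The base URL and the /api/messages endpoint are hypothetical, invented purely for illustration; the point is that a data-storage risk can be checked without a browser in sight.

```typescript
// A minimal sketch of 'Testing Through the UI' at the HTTP layer.
// The base URL and the /api/messages endpoint are hypothetical.
import assert from "node:assert";
import { test } from "node:test";

test("a stored message can be read back", async () => {
  const base = "http://localhost:8080"; // hypothetical service under test

  // Exercise the data-storage risk directly; no browser involved.
  const created = await fetch(`${base}/api/messages`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ text: "Hello" }),
  });
  assert.equal(created.status, 201);

  const { id } = (await created.json()) as { id: string };
  const fetched = await fetch(`${base}/api/messages/${id}`);
  const message = (await fetched.json()) as { text: string };

  // The risk is 'is the data stored correctly?', checked at the closest interface.
  assert.equal(message.text, "Hello");
});
```

If a check like this fails, we know the problem lives in the backend, not in a page object or a flaky locator.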
The important thing is that the risk determines the approach, not the tool.
UI as an abstract term
See a lot of comments about automated "UI tests". Regardless of full stack or independent testing, what do you mean by UI? Are you testing the HTML, CSS, or JS and what risks are you testing for that manifest in those technologies? #automation #testing 1/2
— Mark Winteringham (@2bittester) June 9, 2018
When we talk about techniques such as TuTTu, it can be perceived that we are anti-UI tests, but that’s not the case. UI checks are useful and required, but only when they are focused on risks that live within the UI.
The term ‘User Interface’ can be useful as a way to refer to multiple technologies working together to deliver an experience users can interact with. But when we start thinking about risks, we need to be aware of those different technologies. We need to break them down and think about what risks might affect each one.
For example, a Web Browser is typically considered a single application that presents a Web page, but actually it’s a mix of different libraries that work together to deliver a UI. We have HTML that provides the structure of the page, CSS that provides styling, and JavaScript that provides behaviour. Each of these technologies has different goals, syntaxes and behaviours that we should consider.
If I have determined that I am ‘Testing the UI’ with a check focused on the risk of rendering UI components according to a style guide, which specific technologies am I checking? This question is important because it affects my tooling choices. If I am interested in how the HTML is structured, then tools that inspect the HTML and/or the DOM will be useful to me. If I am interested in how the CSS affects the look of the page, then tools that can do visual comparisons are of interest to me. If I am interested in how the JavaScript behaves in creating UI components, then tools that build a small DOM outside of the browser, so I can execute the JavaScript and observe its behaviour, might be useful to me.
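As a sketch of that last option, here is a check of JavaScript behaviour that uses jsdom to build a small DOM without a browser. The toggleMenu function is a hypothetical component, invented for illustration.

```typescript
// A sketch of checking JavaScript behaviour with a small DOM built by jsdom,
// outside the browser. toggleMenu is a hypothetical component for illustration.
import assert from "node:assert";
import { test } from "node:test";
import { JSDOM } from "jsdom";

// Hypothetical behaviour under test: clicking the button toggles the menu.
function toggleMenu(button: HTMLElement, menu: HTMLElement): void {
  button.addEventListener("click", () => menu.classList.toggle("open"));
}

test("clicking the button opens the menu", () => {
  const dom = new JSDOM(`<button id="b">Menu</button><nav id="m"></nav>`);
  const document = dom.window.document;
  const button = document.getElementById("b")!;
  const menu = document.getElementById("m")!;

  toggleMenu(button, menu);
  button.dispatchEvent(new dom.window.Event("click"));

  // The behaviour risk, checked without rendering anything.
  assert.ok(menu.classList.contains("open"));
});
```

This is the same approach test runners such as Jest take when configured with a jsdom environment: the JavaScript risk gets checked without paying the cost of driving a real browser.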
Let Risk be your guide
To create useful automated checks that help a team, the obsession with driving every check through the UI has to stop. Our checks need to return information that is useful to us. For example, when a check fails, we should be able to quickly identify which part of the product is failing. This can only be achieved by combining our understanding of the product and its technologies with our analytical testing skills.
By driving risk identification through discussion and other testing activities around our product, we can use our knowledge of the tech stack to determine the best interface at which to mitigate each risk. This allows us to work towards checks that are focused on specific risks, leverage the interface closest to the risk in question, and create useful feedback that teams can react to depending on the outcome.