TRIMS - A Mnemonic For Valuable Automation in Testing

Richard Bradshaw | 8 mins read time

TRIMS is a mnemonic for creating valuable automation that truly supports your testing. The TRIMS heuristic encourages us to focus on five key areas when strategising and designing automated checks. Here is TRIMS at a high level; each letter is elaborated on in the rest of the post.

T: Targeted. Targeted at a specific risk and automated on the lowest layer the testability allows.

R: Reliable. To maximise their value, checks need to avoid flakiness; we need them to be deterministic.

I: Informative. Passing and failing checks need to provide as much information as possible to aid exploration.

M: Maintainable. Automated checks are subject to constant change, so we need a high level of maintainability.

S: Speedy. Execution and maintenance need to be as fast as the testability allows to achieve rapid feedback loops.

Targeted

The T stands for targeted and is intended to encourage us to think about two key aspects of strategy and design. The first is risk: we need to be selective about what we automate, because we can't automate everything. I've seen folk try; they believe they get close, but then enter the very difficult-to-break cycle of break-fix-break-fix. They believe all the checks are equal and have to fix them all. To break that cycle, they need to look at what risk each check is mitigating and delete the ones of little value.

We need to focus on risk from the beginning, being selective about what we automate. This mnemonic isn't aimed at identifying those risks; we have great heuristics like RCRCRC by Karen Johnson to help with that. Its purpose is to make us question the value and purpose of what we're automating.

The second aspect of targeted is focused on the implementation of the automated check. What is the best layer/seam to implement this check on? We prefer the term seam within AiT. To us, layers sound like completely independent pieces stacked on top of each other, whereas in reality they are connected in some way; they are sewn together. However, we can get between them if our testability allows. Testability is the key component of this aspect of targeted. We may identify that the lowest layer we can mitigate the risk on is the API layer; however, our testability doesn't support it. Perhaps we don't have API automation skills within the team. We then bring risk back into the picture, and weigh the risk of automating at a higher seam such as the UI against the risk of delaying the automation of the check while the team improve their API automation skills, and in turn their testability.
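
To make that concrete, here's a minimal sketch of the same risk, bad credentials being rejected, targeted at two different seams. It assumes Python with requests and Selenium; the URL, endpoint and locators are all hypothetical:

```python
import requests
from selenium import webdriver
from selenium.webdriver.common.by import By

def test_login_rejects_bad_password_api():
    # API seam: the lowest seam (here) that exposes the risk; fast and stable.
    response = requests.post(
        "https://example.test/api/login",  # hypothetical endpoint
        json={"username": "richard", "password": "wrong-password"},
    )
    assert response.status_code == 401

def test_login_rejects_bad_password_ui():
    # UI seam: mitigates the same risk, but slower and more fragile.
    driver = webdriver.Chrome()
    try:
        driver.get("https://example.test/login")  # hypothetical URL
        driver.find_element(By.ID, "username").send_keys("richard")
        driver.find_element(By.ID, "password").send_keys("wrong-password")
        driver.find_element(By.ID, "login").click()
        assert "Invalid credentials" in driver.find_element(By.ID, "error").text
    finally:
        driver.quit()
```

If the testability is there, the API check mitigates the risk with far less machinery; the UI version only earns its place while that lower seam is out of reach.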

Think risk. Think seam. Think testability.

Reliable

One of the key benefits of automated checks is their rapid execution speed; in most cases, they execute significantly quicker than a human could perform the same steps. However, it takes a significant amount of work, and a lot of skill, to achieve such speeds. There are several key elements to get right when implementing a check, and we created SACRED within AiT to keep track of them. I spoke about SACRED at Selenium Conference Berlin 2017 in my talk Your Tests Aren't Flaky, You Are!, and Mark did too in his talk REST APIs and WebDriver: In Perfect Harmony. SACRED stands for State Management, Actions (used to be Algorithm), Codified Oracle, Execution, Reporting and Deterministic. I'm not going to elaborate on them in this post, but the Reliable in TRIMS encourages us to get all those elements right, so that we have automated checks that are deterministic: checks that alert us to genuine change we need to explore, not false positives from flaky execution. A huge amount of the value that automated checks bring can be washed away by investigating and fighting flaky executions. When we get it right, we create rapid feedback loops that we trust to provide reliable information, which in turn informs testing and team decisions.
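
To give a flavour of just one of those elements, Deterministic, here's a minimal sketch of synchronising on an explicit condition rather than a fixed sleep, one of the most common sources of flaky execution. It assumes Python with Selenium; the locator is hypothetical:

```python
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

def wait_for_welcome_banner(driver, timeout=10):
    # Poll until the element is actually visible, rather than sleeping for
    # a fixed time and hoping the page has caught up. The check now depends
    # on the application's state, not on how fast this particular run was.
    return WebDriverWait(driver, timeout).until(
        EC.visibility_of_element_located((By.ID, "welcome-banner"))
    )
```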

Think deterministic. Think time. Think rapid feedback loops.

Informative

Automated checks pass, and automated checks fail, or as we prefer to say within AiT, automated checks detect change. We don't know if the change is good or bad, but something has changed, and we see the detection of those changes as invitations to explore. We need our automated checks to provide as much information as possible to aid that exploration. The intent of the check needs to be clear, usually achieved with good naming conventions; a well-named check sets the framing for any exploration. The intent should also be mirrored in the codified oracle used within the assertion/approval, to make it clear to the team how we measure whether the intent has been met.
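
As a minimal sketch of that pairing: the check's name states the intent, and the assertion, our codified oracle, mirrors it. The VAT behaviour and values here are purely illustrative:

```python
def vat_total(net: float, rate: float = 0.20) -> float:
    # Illustrative stand-in for the behaviour under check.
    return round(net * (1 + rate), 2)

def test_vat_is_applied_to_uk_orders():
    # The name frames any exploration; the oracle mirrors the intent:
    # a 100.00 net order should total 120.00 with 20% VAT applied.
    assert vat_total(100.00) == 120.00
```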

However, the main focus of informative is on providing as much information as possible to the team upon the detection of change. I like to call it giving your robot a voice: "I detected change Richard, here's a pile of information that I think will aid your exploration into this change". Things like log files, screenshots, application logs, JSON, XML and so forth. It depends on your context and testability, but a good practice for identifying them is to record your usual activities when a check fails and look to automate some of them. We are trying to do everything we can to make the exploration as rapid as possible, so we can take appropriate actions and keep those all-important feedback loops as rapid as possible.
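
As one possible sketch of giving your robot a voice, here's a pytest hook that saves a screenshot whenever a check detects change. It assumes your suite provides a Selenium driver via a fixture named "driver"; the fixture name and artifacts path are illustrative:

```python
# conftest.py
import os
import pytest

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    report = outcome.get_result()
    # Only act when the check itself detected change, not on setup/teardown.
    if report.when == "call" and report.failed:
        driver = item.funcargs.get("driver")  # hypothetical fixture name
        if driver is not None:
            os.makedirs("artifacts", exist_ok=True)
            driver.save_screenshot(f"artifacts/{item.name}.png")
```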

Think debugging. Think decision making. Think exploration.

Maintainable

As mentioned already, checks will detect change and actions will need to be taken; in the majority of cases those actions will be changes to the automated check itself. Perhaps we need to add some additional actions to a check, perhaps our codified oracles need maintaining, or we need to update to the latest version of a library. It could be a whole host of things. Therefore, we need to take advantage of design patterns and good coding practices to make our lives easier in these scenarios. Not all automation requires code, of course, but even with GUI/keyword-driven tools there will be approaches and patterns that can be followed to increase maintainability.
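
For coded UI automation, the Page Object pattern is a common example of such a design pattern. A minimal sketch, assuming Python with Selenium and hypothetical locators: the locators and actions live in one place, so a UI change means one fix rather than edits across every check.

```python
from selenium.webdriver.common.by import By

class LoginPage:
    # Hypothetical locators, kept in one place for the whole suite.
    USERNAME = (By.ID, "username")
    PASSWORD = (By.ID, "password")
    SUBMIT = (By.ID, "login")

    def __init__(self, driver):
        self.driver = driver

    def log_in(self, username, password):
        # Checks call this intent-level action; only this class knows the UI.
        self.driver.find_element(*self.USERNAME).send_keys(username)
        self.driver.find_element(*self.PASSWORD).send_keys(password)
        self.driver.find_element(*self.SUBMIT).click()
```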

In most cases, maintenance is the biggest contributor to not achieving rapid feedback loops with our automated checks. However, if we factor in the ideas from informative and follow good maintainability practices, we can keep those checks running reliably and providing their valuable information.

Think design. Think clean code. Think good practices.

Speedy

Finally, speedy. We've mentioned rapid feedback loops throughout the mnemonic, but we felt speed needed its own letter to bring some context into play. Speedy's focus is to encourage us to make execution and maintenance as rapid as our testability allows, or as rapid as is needed. Not all teams need <10 minute build times, nor could all teams achieve that even if they wanted to. Therefore, we need to identify our time goals and take the appropriate actions to achieve and maintain them. Faster doesn't always mean better, especially if it comes at the cost of reliability and informativeness.
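
One way to keep a time goal honest is to codify it rather than leave it implied. Here's a minimal sketch assuming pytest; the budget value is hypothetical, and a teardown assertion is just one way to surface a breached goal:

```python
import time
import pytest

SUITE_BUDGET_SECONDS = 600  # hypothetical team target: a 10-minute feedback loop

@pytest.fixture(scope="session", autouse=True)
def suite_time_budget():
    start = time.monotonic()
    yield  # the whole suite runs here
    elapsed = time.monotonic() - start
    # Reported as a teardown error if the suite drifts over its agreed budget.
    assert elapsed <= SUITE_BUDGET_SECONDS, (
        f"suite took {elapsed:.0f}s, budget is {SUITE_BUDGET_SECONDS}s"
    )
```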

Think context. Think execution. Think maintenance.

Conclusion

So, that’s TRIMS. Apply it to your existing automated checks, and if you need to, go ahead and TRIM them. If you’re about to embark on a new automation project, think about TRIMS upfront and use it to guide the implementation of your automated checks.

Want to learn more about Test Automation?

We offer various paid and free services to help you and your team go further by taking advantage of Automation in Testing principles.

Author

Richard Bradshaw

@FriendlyTester

Software Tester, speaker and trainer at Friendly Testing. BossBoss at Ministry of Testing. Whiteboard Testing creator. Striving to improve the testing craft.
