Stop asking AI for help – start working with it!

It’s Monday morning. A tester opens their laptop, sips coffee, and stares at a Jira board crammed with user stories. By 9:15 a.m., they’re already asking AI for help:

“Generate some test cases for this login flow.”

Later, it’s, “Summarise these 47 defect logs for me.” 

By lunchtime, a developer chimes in, “Write a Selenium script for checkout.”

This is the reality in many test engineering teams today – AI is treated like a sidekick. Handy, efficient, but ultimately stuck on the sidelines.

Now imagine a different day. AI analyses requirements in real time, flags ambiguous acceptance criteria, and suggests edge cases before they even reach refinement. Regression suites are dynamically optimised, and high-risk areas are predicted after each commit. Broken automation scripts self-heal overnight. The team wakes up to actionable insights, not endless firefighting.

That’s the shift we’re talking about. AI isn’t a helper you occasionally tap on the shoulder – it’s a collaborator shaping how we design, plan, and execute testing. Not just saving minutes, but transforming the way quality is delivered.

The problem with the “AI as a tool” mindset in testing

Remember when automation first became mainstream in testing? Teams rushed to “automate everything,” turning every manual test into a script. On paper, it looked like progress. In reality, many ended up with bloated, brittle suites that were expensive to maintain and rarely kept pace with change.

Today, the same mistake is repeating with AI. Most teams use it transactionally:

“Generate a script,” “Write some Gherkin scenarios,” “Find me edge cases.” Outputs are helpful, but shallow. Testers spend more time fixing AI-generated code than learning from it. Leaders celebrate “time saved,” but the underlying processes remain unchanged.

It’s like giving testers faster typewriters instead of inventing the word processor. The form changes, but the workflow stays the same. Efficiency improves, but true transformation doesn’t happen.

The real opportunity – and challenge – is to stop treating AI as a sidekick and start using it as a co-pilot, embedded in every step of testing.

AI as a constant collaborator in test engineering

Picture a tester reviewing new user stories in a Salesforce release. Instead of manually drafting test cases, an AI-powered assistant scans the requirements, highlights ambiguous statements, and suggests potential edge cases based on historical defects. The tester validates and enhances the recommendations, focusing on tricky scenarios that only human judgment can catch.

Instead of imagining AI replacing entire regression runs, think of how it fits within the practices teams already trust. A developer pushes code through the pipeline. The familiar automation and DevOps practices – CI/CD pipelines, pull requests, orchestration – still do the heavy lifting. At key points, AI steps in for the fuzzier tasks: analysing change impact to flag high-risk areas, suggesting gaps in coverage, or surfacing tricky edge cases in the IDE. But it’s always the human expert who reviews, validates, and decides what action to take. Automation ensures consistency, AI brings adaptability, and human judgment ties it all together. Rather than firefighting failed scripts, the team focuses on improving test design and quality.
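The change-impact step described here can be sketched in simplified form. Everything below is illustrative: the file paths, defect counts, and scoring weights are invented assumptions, not any team's actual implementation. In a real pipeline, the changed files would come from the commit diff and the defect history from the team's tracker, with an AI model refining the scoring.

```python
# Hypothetical sketch: a lightweight change-impact check a CI step could
# run after each commit. Paths, counts, and weights are invented for
# illustration; real inputs would come from `git diff` and a defect tracker.

HISTORICAL_DEFECTS = {          # defects previously traced to each module
    "checkout/payment.py": 9,
    "checkout/cart.py": 4,
    "login/auth.py": 2,
}

def risk_score(path: str, lines_changed: int) -> float:
    """Blend churn (lines changed) with the module's defect history."""
    history = HISTORICAL_DEFECTS.get(path, 0)
    return lines_changed * 0.1 + history * 1.0

def flag_high_risk(diff: dict, threshold: float = 5.0) -> list:
    """Return changed files whose risk score meets the threshold,
    highest risk first - candidates for extra review or targeted tests."""
    scored = {path: risk_score(path, n) for path, n in diff.items()}
    return sorted((p for p, s in scored.items() if s >= threshold),
                  key=lambda p: -scored[p])

if __name__ == "__main__":
    commit_diff = {"checkout/payment.py": 12, "login/auth.py": 3}
    print(flag_high_risk(commit_diff))
```

The point of the sketch is the division of labour: the pipeline runs the check consistently on every commit, the scoring adapts as defect history grows, and a human still decides what to do with the flagged files.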

Even planning transforms. AI analyses historical defects, code changes, and business-critical processes to predict which areas require the most attention. Test leads can prioritise cycles, optimise resource allocation, and reduce risk – without spending hours manually crunching spreadsheets.

The pattern is clear: AI stops being reactive and becomes a strategic partner. From design to execution, planning to reporting, it co-creates workflows with humans, amplifying intelligence, reducing mundane work, and driving higher-quality outcomes.

At Assurity, AI isn’t a tool – it’s a collaborator

At Assurity, we don’t treat AI as a sidekick – it’s embedded as a co-pilot across the test engineering lifecycle. Here’s how this looks in practice:

Design: When working on complex transformations, we suggest using AI to analyse user stories and requirements in real time. AI highlights ambiguities, identifies potential edge cases, and suggests workflow combinations that could fail under unusual conditions.

Planning: Assurity champions a “human in the loop” approach as a cornerstone of our AI adoption strategy, keeping human expertise central to every decision. We apply AI where it strengthens test planning and test strategy development, and elsewhere only where it amplifies human capability rather than replaces it.

Execution: During automation runs, AI can be incorporated into pipelines to support code and test reviews, highlight changes or failures, and propose potential fixes structured as separate, composable tasks that teams can adopt where useful – allowing QA engineers to focus on critical issues and exploratory testing.

Reporting: AI turns raw results into clear, actionable insights. Post-release trends, risk areas, and defect correlations are automatically summarised and contextualised for stakeholders, enabling faster, more informed decision-making.
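The reporting step could start from something as simple as the aggregation below. The result records are invented for illustration; a real run would parse a test-runner report (JUnit XML, Allure JSON, or similar) and hand these figures to an AI summariser to phrase for stakeholders.

```python
# Hypothetical sketch of the "raw results -> actionable summary" step.
# The records are invented; real input would come from a test-runner report.

from collections import Counter

def summarise(results: list) -> dict:
    """Condense raw test results into the figures a stakeholder summary
    (human- or AI-written) would be built from."""
    status = Counter(r["status"] for r in results)
    failing_areas = Counter(r["area"] for r in results if r["status"] == "failed")
    return {
        "total": len(results),
        "passed": status.get("passed", 0),
        "failed": status.get("failed", 0),
        # the functional area with the most failures, or None if all passed
        "riskiest_area": failing_areas.most_common(1)[0][0] if failing_areas else None,
    }

results = [
    {"area": "checkout", "status": "failed"},
    {"area": "checkout", "status": "failed"},
    {"area": "login", "status": "passed"},
]
print(summarise(results))
```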

Across all these stages, the pattern is consistent: AI amplifies human impact, allowing our teams to focus on strategy, creativity, and business-critical quality assurance, while the collaborator handles repetitive tasks.

At Assurity, this approach isn’t theoretical – it’s how we help clients embed AI into their testing practices, moving from incremental efficiency gains to truly transformative ways of working.

Teams that embrace this approach don’t just save time – they redefine quality. Faster pipelines, smarter coverage, reduced risk, and higher confidence in every release become the norm.

The question for test engineering leaders isn’t whether AI will help – they know it will. The question is: how do we work with AI to transform the way we deliver quality?

More reading:

  1. AssurityIntelligence: AI-powered quality engineering for faster, smarter testing.

Keen to discuss and share ideas? Connect with me on LinkedIn or via my email.
