SPEAKERS

Rasika A Vyawahare

Rasika is a Quality Lead at IBM India with 12 years of experience in Quality Assurance, Quality Control, Process Improvement, and Software Testing. She believes "good quality brings not only trust but also long-term relationships with customers". She has strong experience in cloud orchestration, deployment, and management. Currently, she is working with a team to automate the deployment of the OpenShift Container Platform and IBM Cloud Paks on IBM Cloud Pak System. Test automation is key to Agile development, and maintaining and enhancing test automation is everyone's responsibility. Her team is currently building test automation that improves quality while reducing time to market.

Topic: Adaptive Test Automation in the Agile Test Cycle


Abstract:
In an Agile development team, the code churn rate is very high, and product owner feedback changes the design and functions iteratively. This makes functional and integration tests unstable to maintain. Even with CI/CD pipelines, adapting the tests to a new function request is often harder than adding the function itself, because the test system must determine whether the function is robust and does not break the existing automation. As a result, the test automation team is always playing catch-up with development. The development team often works around this by keeping the automation team in sync with its changes, but that slows the development cycle and makes it less productive.

In this talk, we will cover how test automation can keep pace with an Agile development team that is churning out constantly evolving code. Our current automation system takes advantage of the latest tools to interpret each change, adapt to it, update the test report, and analyse the extent of the change before the CI/CD pipeline picks it up for propagation through the various levels. It provides proactive insights by reading tags and keywords inserted by the development team, helping the test engineer understand the degree and areas of change in depth. Once the code is in staging, the test automation runs intuitively, flags breaking points to recommend changes for the automation engineer to adapt the flow, or alerts the development team to fixes required in their code. It performs static analysis of the automation code and visually draws the entire team's attention to how the development changes will impact the existing automation bucket.
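As a rough illustration of this change-interpretation step, here is a minimal Python sketch of a check-in hook that reads developer-inserted tags from the latest commit and maps them to potentially impacted suites. It assumes a Git-based workflow; the [key:value] tag format, the TEST_MAP table, and the file names are hypothetical placeholders for illustration, not our actual schema.

# Sketch of the "interpret the change" step: read developer-inserted
# [key:value] tags from the latest commit and look up potentially
# impacted automation suites. TEST_MAP and the tag format are assumed.
import re
import subprocess

# Hypothetical mapping from product areas to the suites that cover them.
TEST_MAP = {
    "network": ["tests/test_network_config.py"],
    "storage": ["tests/test_volume_provisioning.py"],
    "deploy":  ["tests/test_cluster_deploy.py"],
}

def parse_change_tags(commit_message: str) -> dict:
    """Extract [key:value] tags the development team inserts into commits."""
    return dict(re.findall(r"\[(\w+):(\w+)\]", commit_message))

def impacted_suites(commit_ref: str = "HEAD") -> list:
    """Report which automation suites the latest check-in is likely to touch."""
    message = subprocess.run(
        ["git", "log", "-1", "--format=%B", commit_ref],
        capture_output=True, text=True, check=True,
    ).stdout
    tags = parse_change_tags(message)
    area = tags.get("area", "")
    print(f"Change area: {area or 'unknown'}, impact: {tags.get('impact', 'n/a')}")
    return TEST_MAP.get(area, [])

if __name__ == "__main__":
    for suite in impacted_suites():
        print("Review for breakage:", suite)

In practice the lookup would be driven by richer signals (diff contents, static analysis) rather than commit tags alone, but the shape of the hook is the same: detect the change, classify it, and surface the affected automation before the pipeline promotes the build.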

We use the following principles in our automation development:
1. With every check-in, identify the code updates and suggest their impact on existing automation scripts.
2. If changes to the test environment are needed, identify and apply them.
3. Identify the changes required in the test data and provide the details needed to update it.
4. Generate a traceability matrix showing whether test cases/test suites exist for the updated code (see the sketch after this list).
5. If a defect is fixed with a code check-in, generate a test script for defect verification.
6. Identify the areas where newly introduced code is likely to break the test automation, and raise an alert for the affected feature(s).
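To make principle 4 concrete, the following minimal sketch checks whether each changed source file has a matching test suite. The src/ and tests/ layout and the test_<module>.py naming rule are assumptions for illustration, not our production conventions.

# Sketch of principle 4: a traceability check that flags updated source
# files with no corresponding test suite. Assumes src/foo.py is covered
# by tests/test_foo.py; adapt the convention to the real repository.
import pathlib
import subprocess

def changed_source_files(base: str = "HEAD~1") -> list:
    """List Python source files touched since the given base commit."""
    out = subprocess.run(
        ["git", "diff", "--name-only", base, "--", "src"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [pathlib.Path(line) for line in out.splitlines() if line.endswith(".py")]

def traceability_matrix(changed: list) -> dict:
    """Map each changed source file to whether a matching test suite exists."""
    matrix = {}
    for src in changed:
        candidate = pathlib.Path("tests") / f"test_{src.name}"
        matrix[str(src)] = candidate.exists()
    return matrix

if __name__ == "__main__":
    for source, covered in traceability_matrix(changed_source_files()).items():
        status = "covered" if covered else "MISSING test suite"
        print(f"{source}: {status}")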

The full paper will cover the general direction for building such an end-to-end system, some snapshots of our design principles, and a quick overview of the functional use cases. We also plan to show a full flow chart of how the system was built, the pitfalls we encountered, and how we successfully stood up a system using AI/ML workflows that adapts and works to enhance automated validations.
