SPEAKERS

Lavanya Kalaiselvan

Having worked for over a decade in the software industry, Lavanya Kalaiselvan assures the quality of applications and is always delighted to share her knowledge.

She started her career as a performance tester and expanded her boundaries with knowledge acquired across diverse technologies. Over time, she developed an interest in DevOps and has been involved in multiple automation activities that make testing simpler and easier. Recognising the potential of AI and deep learning, she is currently applying these techniques to improve the performance engineering cycle and to overcome its associated challenges.

Topic: Re-defining DevOps for seamless Performance Testing


Abstract: In recent times, DevOps and continuous testing have become industry buzzwords and gained significant importance in the software industry. The 2019 State of DevOps report produced by Splunk, Puppet and DORA (DevOps Research and Assessment) supports this: elite high-performing DevOps organizations deploy around 1,460 times per year, with lead times (commit to deploy) 106 times faster than low-performing organizations.

Despite the hype, the failure rate in implementing DevOps is high, as the transformation is often limited or constrained to certain organisational levels. A concern commonly cited by those who have not yet brought performance testing into DevOps is that it is "too time consuming" and cannot run at the rate at which they need to release the product.

One area to which DevOps is rarely extended is continuous performance testing, owing to its difficulty and to traditional performance testing practice. Even when an organisation embraces continuous performance testing, performance tuning after an anomaly remains a constraint on establishing a successful Continuous Delivery model.


We have faced a range of challenges while trying to address the high demand for stable code with small time deltas; the ever-changing agile world means that we constantly need to be one step ahead and "shift left" as early as possible, which is difficult when the process is not autonomous. It has therefore become the need of the hour to establish a definitive DevOps model for the performance testing lifecycle, one that integrates the different phases of performance testing using cutting-edge technologies. Hence, in this paper we discuss the following aspects of the proposed DevOps model, aided by proficient customised tools and advanced machine learning models.


1. Strategies to achieve continuous performance testing, and how we automated every aspect of performance testing to make it a perfect fit for the DevOps model
• A process to automate NFRs and set goals for each performance testing scenario, so that results are validated against those goals automatically
• Automated creation of application scripts, eliminating most manual intervention for newly added scenarios/NFRs in the CI/CD model
• How to plug a newly created scenario into the performance testing tool and trigger test sets by designing the workload pattern automatically via APIs
• Low-volume/early performance testing in the development environment
• Component-level performance tests to run multiple performance tests in parallel, reducing the overall performance testing timeline
• Automation of manual processes such as environment shakedown, data preparation and test pre-requisites, aligning them to the DevOps model
• An automated result analyser to validate the deployment status
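The automated NFR validation described above can be sketched as a simple goal-checking step in the pipeline. This is a minimal illustration, not the authors' actual tooling: the metric names, thresholds and comparison rules below are assumptions chosen for the example.

```python
# Hypothetical sketch: validate performance test results against NFR goals
# automatically. Metric names and thresholds are illustrative assumptions.

def evaluate_nfrs(results, nfrs):
    """Compare measured metrics against NFR goals.

    results: dict of metric name -> measured value
    nfrs:    dict of metric name -> (rule, threshold), where rule is
             "max" (value must not exceed threshold) or "min"
             (value must reach threshold).
    Returns (passed, failures) where failures lists violated NFRs.
    """
    failures = []
    for metric, (rule, threshold) in nfrs.items():
        value = results.get(metric)
        if value is None:
            failures.append(f"{metric}: no measurement recorded")
            continue
        ok = value <= threshold if rule == "max" else value >= threshold
        if not ok:
            failures.append(f"{metric}: {value} violates {rule} {threshold}")
    return (not failures, failures)

# Example goals: p95 response time under 2 s, error rate under 1%,
# throughput of at least 50 transactions per second.
nfrs = {
    "p95_response_s": ("max", 2.0),
    "error_rate_pct": ("max", 1.0),
    "throughput_tps": ("min", 50.0),
}
results = {"p95_response_s": 1.4, "error_rate_pct": 0.3, "throughput_tps": 62.0}
passed, failures = evaluate_nfrs(results, nfrs)
```

In a CI/CD pipeline, a non-empty `failures` list would fail the build stage, which is what makes the validation fully automatic.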


2. A deep learning-based recommendation engine for performance anomalies
• A Python log parser to analyse application/database logs
• A deep learning algorithm, implemented with the Keras API on a TensorFlow backend, to isolate the issue (classification)
• A feed-forward neural network to identify the performance hotspot from throughput, response behaviour and utilisation, compared against historical data to provide recommendations (implementation in progress)
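The Python log parser in the first bullet above could look like the following minimal sketch. The log line format, component names and severity levels are assumptions for illustration; a real parser would match the formats of the actual application and database logs.

```python
import re
from collections import Counter

# Hypothetical sketch of a log parser that counts error/warning signatures
# per component, as a first step toward isolating the application under
# concern. The log format below is an assumption, not the authors' format.

LOG_LINE = re.compile(
    r"^(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})\s+"
    r"(?P<level>ERROR|WARN|INFO)\s+"
    r"(?P<component>\S+)\s+-\s+(?P<message>.*)$"
)

def parse_errors(lines):
    """Return a Counter of (component, level) pairs for ERROR/WARN lines."""
    counts = Counter()
    for line in lines:
        m = LOG_LINE.match(line)
        if m and m.group("level") in ("ERROR", "WARN"):
            counts[(m.group("component"), m.group("level"))] += 1
    return counts

sample = [
    "2024-05-01 10:00:01 INFO  PaymentRouter - payment routed",
    "2024-05-01 10:00:02 ERROR MQGateway - queue depth threshold breached",
    "2024-05-01 10:00:03 ERROR MQGateway - queue depth threshold breached",
    "2024-05-01 10:00:04 WARN  DBPool - connection wait exceeded 500 ms",
]
counts = parse_errors(sample)
```

Counts like these, combined with performance metrics, would then form the feature vectors fed to the Keras classifier described in the second bullet.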


3. A case study conducted on a payments-based application, in which 60% of the proposed solution was implemented
• The high-value payment processing engine is complex, spanning 6 MQ-based applications and 23 interfaces to route a payment to completion, which makes it a difficult application for CI alignment and performance hotspot identification
• Early performance testing in the development environment with success-criteria evaluation
• A regression test is triggered after deployment completes; data preparation, test pre-requisites and post-run result analysis were automated
• When performance issues arise, deep learning-based algorithms assess all involved application and database logs to isolate the application under concern, identify the performance hotspot and provide recommendations
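To make the hotspot-identification idea concrete, here is a tiny pure-Python forward pass of a one-hidden-layer feed-forward network over throughput, response-time and utilisation features. The weights below are made up for illustration; in the approach described above they would be learned (via Keras/TensorFlow) from historical test data.

```python
import math

# Illustrative feed-forward pass scoring whether a set of performance
# metrics indicates a hotspot. All weights here are hand-picked examples,
# not trained parameters.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def hotspot_score(features, w_hidden, b_hidden, w_out, b_out):
    """One hidden layer, one sigmoid output in [0, 1]."""
    hidden = [
        sigmoid(sum(w * f for w, f in zip(ws, features)) + b)
        for ws, b in zip(w_hidden, b_hidden)
    ]
    return sigmoid(sum(w * h for w, h in zip(w_out, hidden)) + b_out)

# Features normalised to [0, 1]: low throughput, high response time,
# high CPU utilisation -- a pattern that should score as a likely hotspot.
features = [0.2, 0.9, 0.95]
w_hidden = [[-2.0, 3.0, 3.0], [-1.0, 2.0, 2.0]]
b_hidden = [-1.0, -0.5]
w_out = [2.5, 2.0]
b_out = -1.0
score = hotspot_score(features, w_hidden, b_hidden, w_out, b_out)
```

The score is then compared against historical baselines to decide whether to raise a recommendation, as outlined in section 2.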


Benefits:

• Faster time to market: the solution reduced test execution duration by 55%.
• Reduced dependency on manual effort: test analysis and performance tuning effort improved by 30%, giving an overall testing effort reduction of 40%.
• Improved cost efficiency: the overall effort reduction improved cost efficiency by 45% in delivering a performance-intensive product.
