Modern Test Automation Tools
--
Welcome to the page containing AI-based testing tools developed by Yuvaraja Paramasivam. Some tools draw inspiration from popular commercial AI tools but are built with my own logic, while others are entirely original creations. These tools are designed to significantly reduce the effort and cost associated with traditional test automation methods. Unlike the traditional approach, where only test execution is automated, this modern approach automates not just test execution but also test scenario identification, test script generation, test result analysis, test maintenance, and test selection for execution. These tools represent a step forward in the evolution of test automation, making the testing process more efficient and cost-effective.
AI BOT for API Testing (In 2024)
Imagine having an intelligent digital assistant that streamlines API negative testing. When you provide an API request as input, it autonomously performs negative testing, identifies potential issues, and notifies you of any defects. This AI-powered tool goes beyond just testing — it generates test scripts, compiles them, fixes any compilation errors, reviews and executes the scripts, analyzes test results, and pinpoints issues in the API. Finally, it consolidates all findings into a detailed report, highlighting deviations across API endpoints.
Now, imagine the impact — what if this AI bot could detect 2 to 5 defects in most API endpoints it tests? At the same time, hundreds of automated tests are generated in parallel, significantly increasing test coverage. Best of all, from test creation to defect identification, everything happens within minutes, making the entire process seamless and highly efficient. Read More
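To make the workflow concrete, here is a minimal sketch of the negative-testing core, assuming a JSON API request as input. The endpoint, payload, and mutation rules below are hypothetical placeholders for illustration, not the bot's actual implementation, which also generates, compiles, and repairs full test scripts.

```python
# Illustrative sketch of automated negative testing for a JSON API.
# The endpoint, payload, and mutation rules are hypothetical examples.
import copy
import requests

BASE_REQUEST = {
    "url": "https://api.example.com/orders",   # hypothetical endpoint
    "payload": {"customer_id": 42, "quantity": 3, "coupon": "SAVE10"},
}

def negative_variants(payload):
    """Yield (description, mutated_payload) pairs derived from a valid payload."""
    for field in payload:
        missing = copy.deepcopy(payload)
        del missing[field]
        yield f"missing field '{field}'", missing

        wrong_type = copy.deepcopy(payload)
        wrong_type[field] = [None]            # deliberately invalid type
        yield f"wrong type for '{field}'", wrong_type

        oversized = copy.deepcopy(payload)
        oversized[field] = "x" * 10_000       # boundary / length abuse
        yield f"oversized value for '{field}'", oversized

def run_negative_tests(base):
    findings = []
    for description, bad_payload in negative_variants(base["payload"]):
        response = requests.post(base["url"], json=bad_payload, timeout=10)
        # A robust API should reject every invalid payload with a 4xx status.
        if not 400 <= response.status_code < 500:
            findings.append({"case": description, "status": response.status_code})
    return findings

if __name__ == "__main__":
    for defect in run_negative_tests(BASE_REQUEST):
        print("Potential defect:", defect)
```

Each variant that is not rejected with a 4xx status is recorded as a potential defect, which is the kind of deviation the bot consolidates into its final report.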
AI Visual Testing (In 2023)
Visual testing is a software testing technique that focuses on evaluating the visual aspects of an application, website, or software system. It involves checking how the application looks on various devices, browsers, and operating systems, and ensuring that it meets the design and branding guidelines. Visual testing can help detect issues like layout, formatting, and alignment problems that cannot be found through other testing methods. It plays a crucial role in delivering a high-quality user experience and making sure that the product meets the desired standards.
Pixel-to-pixel comparison is a traditional method of visual testing that compares the entire test image to the base image, pixel by pixel. In this approach, the visual output of the software is compared to the base image as a whole, and any differences in color or shape between the two images are flagged as potential issues. This method provides a high level of accuracy, as it verifies every single pixel in the image, but it can also lead to false positive results, as even minor differences in the images can be flagged as issues.
The AI layout parser approach, on the other hand, breaks down the base image into smaller sections, or “layout blocks”, and compares each section to the corresponding section in the test image. This approach allows for a more detailed and granular comparison of the visual output, as it verifies the position, size, and layout of individual elements and sections of the user interface. The layout parser approach is less susceptible to false positive results, as it only flags differences that affect the layout or arrangement of elements in the user interface, rather than minor differences in the images as a whole.
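The contrast between the two approaches can be sketched as follows. Note that a real layout parser detects semantic blocks (header, navigation, content) with a trained model; the fixed grid used here is only a simplified stand-in, and the image paths are placeholders.

```python
# Illustrative contrast between pixel-to-pixel comparison and a block-based
# comparison. Both functions assume the base and test screenshots share the
# same resolution.
import numpy as np
from PIL import Image

def pixel_to_pixel_diff(base_path, test_path, tolerance=0):
    base = np.asarray(Image.open(base_path).convert("RGB"), dtype=np.int16)
    test = np.asarray(Image.open(test_path).convert("RGB"), dtype=np.int16)
    diff = np.abs(base - test) > tolerance
    return diff.mean()                        # fraction of differing pixel channels

def block_diff(base_path, test_path, rows=8, cols=4, threshold=0.05):
    base = np.asarray(Image.open(base_path).convert("L"), dtype=np.float32)
    test = np.asarray(Image.open(test_path).convert("L"), dtype=np.float32)
    h, w = base.shape
    flagged = []
    for r in range(rows):
        for c in range(cols):
            ys = slice(r * h // rows, (r + 1) * h // rows)
            xs = slice(c * w // cols, (c + 1) * w // cols)
            delta = np.abs(base[ys, xs] - test[ys, xs]).mean() / 255.0
            if delta > threshold:
                flagged.append((r, c, round(delta, 3)))
    return flagged                            # only blocks whose content changed

# Example usage (paths are placeholders):
# print(pixel_to_pixel_diff("base.png", "test.png"))
# print(block_diff("base.png", "test.png"))
```

A small vertical shift inflates the pixel-to-pixel score across the whole image, while the block-based view localizes the change to the affected regions, which is the intuition behind flagging only layout-level differences.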
In the below animation, the Test Image is missing its header and all other elements have shifted a few pixels towards the top. A pixel-to-pixel comparison will register that the two images are entirely dissimilar. However, when AI-based analysis is employed, it pinpoints the absence of the header and marks it as the primary difference between the two images.
Test Result Analyzer
Advanced Version (In 2022)
The Test Result Analyzer is a powerful tool designed to classify automated test results with precision. Whether it is a bug in the application under test, a failure in the automated test, or a failure due to infrastructure, the Test Result Analyzer can identify and categorize the issue with ease. The tool is built using NLP and computer vision technology, making it a highly sophisticated solution for automated testing. Utilizing a supervised learning approach, it uses previous test results as training data, allowing it to continually improve its accuracy and efficiency. Say goodbye to manual analysis of repetitive failures and hello to a smarter, more efficient testing process with the Test Result Analyzer. The below animation shows how the Test Result Analyzer works. Read More
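As a minimal sketch of the NLP half of this idea, previously labelled failure logs can train a supervised text classifier that categorizes new failures by root cause. The labels and log lines below are invented examples; the actual tool also applies computer vision to screenshots.

```python
# Supervised classification of failure logs into root-cause categories.
# Training examples and labels here are illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

training_logs = [
    ("AssertionError: expected order total 100 but was 90", "application_defect"),
    ("NoSuchElementException: unable to locate element #submit", "script_failure"),
    ("java.net.ConnectException: connection refused to selenium-grid:4444", "infrastructure_failure"),
    ("HTTP 500 returned from /checkout during payment step", "application_defect"),
    ("StaleElementReferenceException while clicking login button", "script_failure"),
    ("Timed out waiting for test VM to become available", "infrastructure_failure"),
]

texts, labels = zip(*training_logs)
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
classifier.fit(texts, labels)

new_failure = "ElementNotInteractableException when selecting the country dropdown"
print(classifier.predict([new_failure])[0])   # expected: script_failure
```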
Basic Version (In 2018)
This solution groups tests based on failure reason, which speeds up analysis by allowing us to focus on one test from each error group instead of analyzing every failed test case. This efficient approach saves time and resources while providing accurate and reliable results. The below video demonstrates how the solution works.
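A simple way to picture the grouping is to normalize each error message into a signature and bucket failures by that signature. The normalization rule and sample failures below are illustrative only.

```python
# Group failed tests by failure reason so only one test per group needs
# manual analysis. Sample failures and the normalization rule are examples.
import re
from collections import defaultdict

failures = {
    "test_login_valid":   "TimeoutException: page 'dashboard?id=1443' did not load",
    "test_login_invalid": "TimeoutException: page 'dashboard?id=9021' did not load",
    "test_checkout_visa": "AssertionError: expected total 120.50 but was 0.00",
    "test_checkout_amex": "AssertionError: expected total 88.00 but was 0.00",
}

def signature(message):
    """Normalize an error message into a group key (numbers stripped)."""
    return re.sub(r"\d+(\.\d+)?", "<n>", message)

groups = defaultdict(list)
for test_name, message in failures.items():
    groups[signature(message)].append(test_name)

for sig, tests in groups.items():
    print(f"{sig}\n  -> analyse one of: {tests}")
```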
Self-Healing in Selenium (In 2021)
Self-healing in test automation refers to the ability of the test automation system to automatically detect and repair failures in the test execution process. This approach eliminates the need for manual intervention, reducing downtime and ensuring a continuous testing process. The ultimate goal of self-healing in test automation is to improve the reliability and stability of the testing process, allowing teams to focus on other important tasks. The below demo video shows how our self-healing solution works.
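One common way to realize this in Selenium is a fallback-locator strategy: when the primary locator breaks, alternative locators are tried and the one that works is recorded so the test can be repaired. The sketch below illustrates that idea with placeholder locators and a placeholder URL; it is not the full solution.

```python
# Minimal self-healing sketch: try fallback locators when the primary breaks.
from selenium import webdriver
from selenium.common.exceptions import NoSuchElementException
from selenium.webdriver.common.by import By

def find_with_healing(driver, locators):
    """Try each (By, value) locator in order; return the element and the locator used."""
    last_error = None
    for how, value in locators:
        try:
            element = driver.find_element(how, value)
            return element, (how, value)
        except NoSuchElementException as error:
            last_error = error
    raise last_error

driver = webdriver.Chrome()
driver.get("https://example.com/login")            # placeholder URL

login_button_locators = [
    (By.ID, "login-btn"),                          # primary locator
    (By.CSS_SELECTOR, "button[type='submit']"),    # fallback 1
    (By.XPATH, "//button[contains(., 'Log in')]"), # fallback 2
]

button, healed_locator = find_with_healing(driver, login_button_locators)
if healed_locator != login_button_locators[0]:
    print("Self-healed: primary locator replaced by", healed_locator)
button.click()
driver.quit()
```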
Exploratory Testing using AI (In 2021)
I have implemented a solution using deep reinforcement learning for monkey testing of software applications. I have developed intelligent monkey agents that can perform exploratory testing on the application under test. These agents are trained using deep Q-learning, which is a type of reinforcement learning algorithm that enables them to learn from their actions and experience.
To begin with, I created three types of monkey agents — dumb, smart, and brilliant monkeys. The dumb monkey randomly performs actions like clicking elements, selecting elements from drop-downs, and refreshing pages without any knowledge of the application under test. The smart monkey has some knowledge about the application and navigates to different functionality, while storing the actions and their results. The brilliant monkey learns from the experience collected by the smart monkey and identifies vulnerabilities in the application, attempting to crash it.
I used Python and Keras to implement the deep Q-learning algorithm. The algorithm works by training the monkeys to take actions that maximize the expected reward, which is a function of the state of the application and the action taken. The reward function is designed to encourage the monkeys to perform actions that lead to the discovery of potential defects in the application.
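The core of that training loop can be sketched as below. The state encoding, action set, and hyperparameters are illustrative assumptions; the real agents observe the application UI and act on it through a driver.

```python
# Sketch of a deep Q-learning core in Keras for the monkey agents.
import numpy as np
from tensorflow import keras

STATE_SIZE = 10        # e.g. encoded features of the current page (assumed)
ACTIONS = ["click_random_element", "select_dropdown_option", "refresh_page"]
GAMMA, EPSILON = 0.95, 0.1

def build_q_network():
    model = keras.Sequential([
        keras.layers.Input(shape=(STATE_SIZE,)),
        keras.layers.Dense(32, activation="relu"),
        keras.layers.Dense(32, activation="relu"),
        keras.layers.Dense(len(ACTIONS)),          # one Q-value per action
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

q_network = build_q_network()

def choose_action(state):
    """Epsilon-greedy: mostly exploit learned Q-values, sometimes explore."""
    if np.random.rand() < EPSILON:
        return np.random.randint(len(ACTIONS))
    q_values = q_network.predict(state[np.newaxis], verbose=0)[0]
    return int(np.argmax(q_values))

def train_step(state, action, reward, next_state, done):
    """One Q-learning update: move Q(s, a) toward r + gamma * max_a' Q(s', a')."""
    target = q_network.predict(state[np.newaxis], verbose=0)[0]
    future = 0.0 if done else np.max(q_network.predict(next_state[np.newaxis], verbose=0)[0])
    target[action] = reward + GAMMA * future
    q_network.fit(state[np.newaxis], target[np.newaxis], verbose=0)
```

The reward passed into train_step is what steers the agent: actions that surface errors or crashes earn higher rewards, so the brilliant monkey gradually prefers the action sequences most likely to expose defects.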
To train the monkeys, I used a dataset of inputs and outputs that were generated by running the application under test with random inputs. I also used a feedback loop to continuously improve the performance of the monkeys by providing feedback on their actions and adjusting the reward function.
Once the monkeys are trained, they can be used to perform exploratory testing on the application under test. The monkeys will generate input to the application and test its functionality, while reporting any defects or vulnerabilities that they discover.
Overall, the use of deep reinforcement learning for monkey testing has proved to be an effective approach for uncovering potential defects in software applications. The intelligent monkey agents developed using this approach can perform exploratory testing more efficiently and effectively than traditional manual testing approaches.
AI Known Error Database (In 2020)
In today’s software development landscape, defect management tools like Quality Center, Jira, and Bugzilla have become an essential part of project management. These tools help in tracking, managing, and resolving defects that arise during the software development process. However, these tools typically rely on text-based search functionalities that may have some limitations, such as difficulties in handling misspelled words and context-based search.
To address these limitations, an AI-based search tool has been developed that leverages Word2Vec models to understand the meaning of the text in defects. The tool can handle misspelled words and perform context-based search, providing more accurate and relevant search results. Additionally, the tool can offer recommendations and suggestions for search terms, making it easier for users to find the information they need quickly and efficiently. This AI-based known error database is a valuable addition to any project, enhancing the defect management process and enabling teams to deliver high-quality software more efficiently. The below video shows how context-based defect search works.
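A minimal sketch of the embedding-based search is shown below: defect summaries and the query are embedded as averaged word vectors and ranked by cosine similarity. The defect records and query are invented examples, and a production system would train on the full defect history rather than a handful of summaries.

```python
# Context-based defect search with Word2Vec embeddings (illustrative data).
import numpy as np
from gensim.models import Word2Vec
from gensim.utils import simple_preprocess

defects = [
    "Login page throws 500 error when password contains special characters",
    "Checkout total is wrong after applying discount coupon",
    "Profile picture upload fails for files larger than 2 MB",
    "Session expires immediately after signing in on mobile browser",
]

tokenised = [simple_preprocess(text) for text in defects]
model = Word2Vec(sentences=tokenised, vector_size=50, min_count=1, epochs=100)

def embed(text):
    vectors = [model.wv[w] for w in simple_preprocess(text) if w in model.wv]
    return np.mean(vectors, axis=0) if vectors else np.zeros(model.vector_size)

def search(query, top_k=2):
    query_vec = embed(query)
    scores = []
    for text in defects:
        doc_vec = embed(text)
        denom = np.linalg.norm(query_vec) * np.linalg.norm(doc_vec) or 1.0
        scores.append((float(np.dot(query_vec, doc_vec) / denom), text))
    return sorted(scores, reverse=True)[:top_k]

print(search("discount coupon gives wrong amount at checkout"))
```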
Error Assertion (In 2020)
I implemented Error Assertion, a powerful tool that utilizes a Convolutional Neural Network (CNN) model to detect application errors from screenshots of a webpage. The tool has proven to be a game-changer in the field of software testing, providing a more efficient and reliable method of identifying application errors compared to traditional HTML DOM based implementations.
To begin with, I trained the CNN model used by Error Assertion on a dataset of screenshots that included both correct and incorrect web pages. This training enabled the model to identify patterns in the images that corresponded to application errors, allowing it to detect even subtle differences between correct and incorrect web pages.
Once the CNN model was trained, I integrated Error Assertion into the test automation framework. The tool was configured to capture screenshots of web pages during the testing process and pass them through the CNN model for error detection. Any detected errors were then reported back to the automation framework, allowing us to quickly identify and address any issues that were found.
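The shape of that integration can be sketched as follows. The architecture, image size, and dataset layout are assumptions for illustration; the real model was trained on screenshots of correct and incorrect pages collected from the application under test.

```python
# Sketch of Error Assertion: a small CNN classifies a page screenshot as
# "error" or "ok", and a helper raises an assertion inside the test framework.
import numpy as np
from tensorflow import keras

IMG_HEIGHT, IMG_WIDTH = 224, 224

def build_error_classifier():
    return keras.Sequential([
        keras.layers.Input(shape=(IMG_HEIGHT, IMG_WIDTH, 3)),
        keras.layers.Rescaling(1.0 / 255),
        keras.layers.Conv2D(16, 3, activation="relu"),
        keras.layers.MaxPooling2D(),
        keras.layers.Conv2D(32, 3, activation="relu"),
        keras.layers.MaxPooling2D(),
        keras.layers.Flatten(),
        keras.layers.Dense(64, activation="relu"),
        keras.layers.Dense(1, activation="sigmoid"),   # P(error page)
    ])

model = build_error_classifier()
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Training data: folders of labelled screenshots, e.g. screenshots/ok and screenshots/error.
# train_ds = keras.utils.image_dataset_from_directory(
#     "screenshots", image_size=(IMG_HEIGHT, IMG_WIDTH), label_mode="binary")
# model.fit(train_ds, epochs=5)

def assert_no_error(screenshot_path, threshold=0.5):
    """Raise if the trained model believes the screenshot shows an error page."""
    image = keras.utils.load_img(screenshot_path, target_size=(IMG_HEIGHT, IMG_WIDTH))
    batch = np.expand_dims(keras.utils.img_to_array(image), axis=0)
    error_probability = float(model.predict(batch, verbose=0)[0][0])
    assert error_probability < threshold, f"Error page detected (p={error_probability:.2f})"
```

In the framework, assert_no_error (or its equivalent) is called right after each screenshot is captured, so a failed prediction surfaces as a normal test assertion.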
One of the major advantages of Error Assertion is its ability to detect errors in a more accurate and reliable way compared to traditional HTML DOM based implementations. This makes it an invaluable tool for ensuring the quality and reliability of web applications, particularly those with complex user interfaces.
In conclusion, by implementing Error Assertion in our test automation framework, we were able to significantly improve the efficiency and accuracy of our testing processes. The tool’s ability to detect application errors from screenshots using a CNN model has made it a valuable asset in our testing toolkit.
My Articles
Below, you will find a list of articles that I have authored on the significance of incorporating AI into test automation, as well as the various ways in which AI can alleviate some of the laborious tasks associated with test automation. Please take a look: