Forget the Hype: 6 (Realistic) AI-Powered Techniques to Enhance Your Software Quality

Let's address the elephant in the room. It seems like nearly every article on Medium and other platforms discusses the benefits of AI and how it can revolutionize your daily work. They often promise that with a few simple tricks, you'll never have to work again. However, after reading these articles, you may find that many of their claims are impractical, or that the articles themselves seem to have been written by AI.

In this blog, I'm going to take a different approach.

I won't promise to 10x your workflow, show you how AI will take over your job, or find all the bugs in your software. Instead, I'll share six practical use cases where AI, specifically large language models (LLMs) like ChatGPT, can assist you in achieving better software quality. Yes, you read that right—they can assist you, not replace you.

So, let's dive into the topic and explore six ways LLMs can help software developers and QA engineers improve the quality of their software development.

If you're intrigued by this topic, be sure to subscribe to my YouTube channel, where I'll be diving into this and related subjects in the coming weeks:

https://www.youtube.com/channel/UC9xxd7ESzQV4HYOuzxQwqiQ

1. AI Pair Programming Partner

Let's start with writing and testing code. The best-known tool in this space is GitHub Copilot, though similar alternatives exist. These tools assist you not only in writing code but also in writing unit tests. GitHub Copilot, which is built on OpenAI models trained in part on public GitHub code, operates within your IDE to read context and suggest code.

Here's how you can use Copilot in your coding process (a small code sketch follows the list):

  1. Start with Unit Tests: Based on your clear requirements, you can ask Copilot to generate some initial unit tests.

  2. Verify the Tests: Ensure that the generated tests correctly cover your requirements.

  3. Implement Code: Write the necessary code to pass these tests.

  4. Enhance Tests: Ask the LLM to generate improved or additional unit tests based on your code, designed to catch potential issues or edge cases.

  5. Iterate and Improve: Implement the necessary code changes to pass these new tests.
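
To illustrate steps 1 through 4, here is a minimal sketch in Python with pytest. The slugify requirement and its implementation are invented for illustration; this is the shape of the loop, not a definitive recipe.

```python
# Step 3: the implementation written to make the tests pass.
def slugify(title: str) -> str:
    return "-".join(title.lower().split())

# Step 1: the unit tests the LLM generated from the requirement
# "slugify lower-cases a title and joins its words with hyphens".
def test_lowercases_and_hyphenates():
    assert slugify("Hello World") == "hello-world"

def test_collapses_multiple_spaces():
    assert slugify("Hello   World") == "hello-world"

# Step 4: an LLM-suggested edge case that drives the next iteration.
def test_strips_surrounding_whitespace():
    assert slugify("  Hello World  ") == "hello-world"
```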

You probably saw what I did there: I used the LLM as a pair programming partner in a Test-Driven Development (TDD) approach. This is a method I tried myself, and I liked the outcome. I have to admit that I only used it on a personal project where the code was relatively simple, but I think you can see how this role for LLMs could be beneficial.

AI can slot neatly into TDD. By generating test cases before the actual implementation, it helps ensure that each piece of code is checked against the specified requirements, promoting better design and functionality from the start. This integration into the TDD red/green/refactor loop can improve both the speed and the quality of development. Of course, the quality of the generated tests depends heavily on the prompts you give the LLM, but the concept seems very promising to me.

Many developers who care about proper testing enjoy pair programming sessions and the input of another programmer while writing code. Using AI as a TDD partner strikes me as a genuinely useful application. However, keep in mind that there is always a push-pull between how much you lead the design of the code and how much the tooling does. You should always have a solid concept before starting to write code; AI can only assist you in reaching your goal.

2. AI Brainstorming Your Test Scenarios

After you have written your code, the next step is to have it reviewed. While AI can assist in this process, I would still recommend having a human do the final code review. The reason is that AI may not fully grasp the complete concept or final form of the software and could miss important information and context. Therefore, relying on a human for code review is still essential… sorry, developers.

After the review, the QA engineer's role begins, where they test the part of the software handed over to them. Depending on the specific code written, there will be different tests to conduct. Nevertheless, LLMs can assist in thinking through and generating these test scenarios. The core of all test planning is identifying and addressing the risks that might appear in the software.

For this approach, I am a big fan of cheat sheets. For example, if there's an input field to test, numerous cheat sheets are available that can help you find all the different possibilities for bugs. LLMs can generate these cheat sheets very effectively as a starting point. This allows QA engineers to ensure comprehensive test coverage and address potential risks more efficiently.
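
To make this concrete, here is a minimal sketch in Python with pytest of turning cheat-sheet entries for an input field into checks. The validate_username function and its 3-to-20-character rule are invented for illustration.

```python
import pytest

def validate_username(name: str) -> bool:
    """Hypothetical validator under test: 3 to 20 visible characters."""
    stripped = name.strip()
    return 3 <= len(stripped) <= 20

# Classic input-field cheat-sheet entries: boundaries, whitespace,
# empty input, and a suspicious payload.
@pytest.mark.parametrize("value,expected", [
    ("abc", True),            # lower boundary
    ("a" * 20, True),         # upper boundary
    ("ab", False),            # just below minimum
    ("a" * 21, False),        # just above maximum
    ("   ", False),           # whitespace only
    ("", False),              # empty input
    ("<script>alert(1)</script>", False),  # naive XSS probe
])
def test_username_validation(value, expected):
    assert validate_username(value) == expected
```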

When using LLMs for test scenario generation, you should decide what to test and use the model to suggest ideas, not to dictate the testing process. This keeps the testing aligned with the identified risks and the specific needs of the system under test. Be precise in your prompts: specify what needs to be tested and what the known constraints are. This applies not only to testing the UI parts of the software but also to API testing, penetration testing, security testing, and more. The AI's job is to surface possible test scenarios, not to execute them. Once you have a list of things to test, apply your critical thinking to expand or refine the suggestions based on your knowledge and the specifics of the software.
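
For instance, a precise prompt might read like this (a sketch; the field and its constraints are invented for illustration):

```
Suggest test scenarios for a date-of-birth input field.
Constraints: format DD.MM.YYYY, users must be at least 18,
the field is required. List scenarios only; do not write code.
```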

3. AI Generating Your Test Data

This is a well-known use case and one that I have used myself in the past. For example, consider testing an upload field that accepts files up to 10 MB. In the past, you would have to generate such files manually and think of every test data variation that might be necessary. With LLMs, you can ask for such a file in the desired format, or for a small script that creates one, and then vary it: 11 MB, 9 MB, or 10.5 MB, as well as files that are corrupt or in an unsupported format. AI can be incredibly helpful for generating test data, significantly accelerating the work of software QA engineers. By automating the creation of diverse test files, it frees you to focus on the more critical aspects of testing while still covering potential issues comprehensively.
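
As a minimal sketch of that script idea (the file names and the 10 MB limit are just examples), the generated code might look like this:

```python
import os

def make_test_file(path: str, size_mb: float) -> None:
    """Create a file of roughly size_mb megabytes of random bytes."""
    with open(path, "wb") as f:
        f.write(os.urandom(int(size_mb * 1024 * 1024)))

# Boundary and edge cases around a 10 MB upload limit; renaming a
# text file to .pdf also gives you a cheap "corrupt" variant.
for name, size in [("exact_10mb.pdf", 10), ("over_11mb.pdf", 11),
                   ("under_9mb.pdf", 9), ("edge_10_5mb.pdf", 10.5)]:
    make_test_file(name, size)
```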

If you've already used AI to identify different test scenarios, you can reuse those prompts to generate the test data needed to perform the tests. When your data is spread across related tables, the SQL statements that define the database schema can guide the LLM in generating consistent, structured test data. This way, the generated data adheres to the required schema and relationships, which makes the tests meaningful.
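
A minimal sketch of the idea (the schema and the generator are invented for illustration): paste the DDL into the prompt, and the LLM typically returns a generator along these lines, keeping the foreign keys consistent.

```python
import random

# Hypothetical schema you might paste into the prompt so the model
# respects column types and the foreign-key relationship:
SCHEMA = """
CREATE TABLE users  (id INTEGER PRIMARY KEY, email TEXT NOT NULL);
CREATE TABLE orders (id INTEGER PRIMARY KEY,
                     user_id INTEGER REFERENCES users(id),
                     total_cents INTEGER CHECK (total_cents >= 0));
"""

def generate_inserts(n_users: int, n_orders: int) -> list[str]:
    """Generate INSERT statements that satisfy the schema above."""
    stmts = [f"INSERT INTO users VALUES ({i}, 'user{i}@example.com');"
             for i in range(1, n_users + 1)]
    for oid in range(1, n_orders + 1):
        uid = random.randint(1, n_users)  # keep the FK consistent
        stmts.append(f"INSERT INTO orders VALUES ({oid}, {uid}, "
                     f"{random.randint(0, 50000)});")
    return stmts

print("\n".join(generate_inserts(3, 5)))
```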

By leveraging LLMs in this way, you can streamline the process of creating test data, ensuring it aligns with your database's schema and constraints. This not only saves time but also enhances the accuracy and reliability of your tests, making your QA processes more efficient and robust.

4. AI Assisting in UI Test Automation

Let's now talk about test automation. In my opinion, the best way to use AI when writing test automation code is to optimize specific parts of the process rather than trying to automate everything at once. In my experience, generating an entire automated UI test in one go leads to extensive rework. Instead, use AI selectively at specific points in the process: rather than asking for a complete end-to-end test for a certain UI element, write your code first and then ask for improvements or ways to simplify it. This keeps you in control of the automation while leveraging AI's strengths, and the incremental improvements tend to produce more reliable and maintainable tests.
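
For example (a minimal Playwright-for-Python sketch; the URL and selectors are placeholders), you might ask the LLM to harden a flaky step and get back something like this:

```python
from playwright.sync_api import sync_playwright, expect

# Hypothetical login check; URL and selectors are placeholders.
with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.goto("https://example.com/login")

    page.fill("#email", "qa@example.com")
    page.fill("#password", "secret")
    page.click("button[type=submit]")

    # Before: time.sleep(5) followed by a raw text check.
    # After (LLM suggestion): an explicit, auto-retrying assertion.
    expect(page.locator(".welcome-banner")).to_be_visible(timeout=5000)

    browser.close()
```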

Success in using AI for automated checks comes from identifying the specific tasks in the process where it provides the most value, such as optimizing code snippets or suggesting improvements. Test automation engineers should lead the creation process, deciding when and where AI tools can accelerate their work; chosen carefully, those tasks noticeably enhance productivity without giving up control.

Another interesting use case for AI in testing is transferring tests to another framework. For example, if your company wants to move from Cypress to Playwright, AI can assist you in quickly converting the code. I tried this myself when I needed to transfer code written in Python to Java. The LLM did a great job, although I still had some minor fixes to do. This capability can be very beneficial for a test engineer who wants to transfer tests quickly and then review them. However, as always, you need to check and verify the output. Don't use AI-generated code without knowing what the outcome should look like. Always review the converted code to ensure it meets your requirements and standards.

5. AI Enhancing Exploratory Testing

Let's get to the fun part of testing: exploratory testing. LLMs can play a pivotal role in identifying potential risks and creating charters for exploratory testing sessions. One significant challenge in risk identification is overcoming biases, such as functional fixedness, which can cause testers to overlook certain risks. LLMs can help analyze known risks and propose additional ones, offering a fresh perspective that might reveal overlooked areas.

The risks suggested by LLMs will likely be a mix of useful and irrelevant ones. Testers should evaluate these suggestions critically, selecting those that add value to their testing process. The primary goal here is to use LLMs to inspire new thinking and perspectives rather than accepting their suggestions blindly. By doing so, testers can enhance their exploratory testing approach, ensuring a more comprehensive and effective assessment of potential risks.

Combining prompts with testing heuristics can generate a wide range of test ideas. Testers can then evaluate these ideas, keeping the valuable ones and discarding the rest, so the testing process stays focused and efficient. LLMs can also help explain errors and defects that are not immediately clear to the tester, providing immediate answers when questions arise instead of forcing a round of extensive googling. Used this way, LLMs make exploratory testing more effective and less time-consuming, letting testers concentrate on the critical issues.
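
As a concrete illustration (the feature and the chosen heuristic are just examples), such a prompt might look like this:

```
You are helping me plan an exploratory testing session.
Feature: CSV import for customer records (max 5,000 rows).
Using the SFDIPOT heuristic (Structure, Function, Data,
Interfaces, Platform, Operations, Time), suggest 15 test ideas
grouped by dimension, focusing on data-quality risks.
```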

6. AI Creating Documentation

AI can assist you significantly when it comes to documentation. One of the primary benefits is its ability to write comments in the code, providing explanations of what each section is doing. This not only improves code readability but also aids future developers in understanding and maintaining the codebase.
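
For instance (a small sketch; the helper function is made up), you might paste an undocumented function into the LLM and get it back annotated like this:

```python
def unique_emails(users: list[dict]) -> set[str]:
    """Return the set of distinct, normalized email addresses.

    Addresses are lower-cased and stripped of surrounding whitespace
    so that 'A@x.com ' and 'a@x.com' count as the same user.
    """
    return {u["email"].strip().lower() for u in users if u.get("email")}
```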

Additionally, AI can summarize what a certain code block is or should be doing in text form. This feature is invaluable for creating high-level overviews and detailed documentation that bridge the gap between code and human comprehension. Such summaries can quickly convey the purpose and functionality of complex algorithms and processes, making it easier for teams to collaborate and for new members to get up to speed.

Everyone knows the struggles of onboarding a new person to a project. Well-done documentation, supported by AI, can be very beneficial for the entire software team, ensuring that knowledge is easily accessible and transferable.

Moreover, AI can help create test reports during the testing phase. As it analyzes test results, it can generate reports that describe the testing process, its findings, and any bugs detected. It can also take your rudimentary notes from exploratory testing and turn them into a polished report that is easy for everyone to understand. Of course, a human needs to review the AI-generated documentation. But having a rough outline provided by AI can overcome the common hesitation around documentation: when a basic structure is already in place, many people find it easier to finish the job. The result is thorough, clear documentation that benefits the entire team by making information readily accessible and easy to comprehend.
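
A sketch of such a prompt (the session notes are invented):

```
Turn these raw session notes into a short test report with the
sections Summary, Coverage, Findings, and Open Questions:
"login ok in chrome+ff / password-reset mail slow, ~40s?? /
umlauts in name field break the profile page / no time for mobile"
```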
