The significance of writing unit tests is well understood among software developers. However, adding tests to an already-finished project can be a daunting task. Fortunately, we don’t have to do it alone – we can enlist ChatGPT, our very own version of Tony Stark’s AI assistant J.A.R.V.I.S., to help us.

Since its introduction, ChatGPT has become a highly sought-after tool for various tasks, including software development. Built on a highly advanced language model developed by OpenAI, it can generate human-like text. It has been trained on a huge dataset, giving it a deep understanding of language that allows it to produce relevant responses. It can be tailored to specific use cases, such as generating code snippets, answering technical questions or providing recommendations based on user input. Given these capabilities, it is not uncommon for developers to seek ChatGPT’s assistance with tasks like unit testing in their projects.

What do we write tests for?

In this article, we will show how ChatGPT assisted us by generating unit tests for an existing project written in Swift. The focus was on testing view model classes that contain gateways and limited logic.

The classes used in our example have been slightly modified to serve as a good example of code before testing, but have been shortened for clarity.

Good preparation is the key

Before beginning the process of writing unit tests, it is essential to have a good understanding of the existing project and the classes that need to be tested. By taking the time to inspect the classes and injecting every dependency through the constructor, we ensured that ChatGPT generated clearer and more comprehensive tests.
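To illustrate what such a constructor-injected view model might look like, here is a minimal sketch – the protocol and property names (SettingsGatewayProtocol, notificationsEnabled) are our own simplifications, not the project’s real API:

```swift
import Foundation

// Hypothetical protocol abstracting the gateway so a mock can be injected in tests.
protocol SettingsGatewayProtocol {
    func fetchSettings(completion: @escaping (Result<Bool, Error>) -> Void)
}

final class SettingsViewModel {
    private let gateway: SettingsGatewayProtocol

    // Every dependency arrives through the initializer, which lets
    // tests substitute mocks for the real implementations.
    init(gateway: SettingsGatewayProtocol) {
        self.gateway = gateway
    }

    private(set) var notificationsEnabled = false

    func loadSettings() {
        gateway.fetchSettings { [weak self] result in
            if case .success(let enabled) = result {
                self?.notificationsEnabled = enabled
            }
        }
    }
}
```

Because the gateway is a protocol, a test can hand the view model a stub that completes synchronously, with no network involved.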

Additionally, we included Quick and Nimble frameworks, which are popular testing libraries for the Swift programming language. Quick is a behavior-driven development framework that provides a simple, readable syntax for writing tests. Nimble is a framework that makes it easy to write assertions in your tests. In comparison to Xcode assertions and tests, they offer a more intuitive and expressive syntax, making it easier to write clear and concise tests.

How to get started

First, we copied the class below and gave ChatGPT a simple command: “Please write tests using Quick and Nimble for the following Swift code”. When making a request for code from ChatGPT, it is important to provide as much information as possible. Failing to do so, such as simply asking “Write a test for this class,” would result in a significantly different outcome.

ChatGPT generated a well-structured test that meets all the required specifications:

  1. The test is a subclass of the QuickSpec class:
    • It has a spec() method that includes the class SettingsViewModel being tested and all necessary mock dependencies
  2. ViewModel and mocks are initialized through a beforeEach closure
  3. The describe blocks are named after the methods being tested
  4. The context blocks divide the various cases of execution and behavior for the method
  5. The it blocks contain the actual test, where the method is called on the viewModel and verified using the expect clause
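The structure described above can be sketched roughly as follows – the class and mock names mirror our example, the spec body is abridged, and depending on the Quick version, spec() may need to be declared as a class func:

```swift
import Quick
import Nimble

// Abridged sketch of the generated structure, not the actual output.
final class SettingsViewModelSpec: QuickSpec {
    override func spec() {
        var viewModel: SettingsViewModel!
        var gatewayMock: SettingsGatewayMock!

        // ViewModel and mocks are re-created before every example.
        beforeEach {
            gatewayMock = SettingsGatewayMock()
            viewModel = SettingsViewModel(gateway: gatewayMock)
        }

        // describe blocks are named after the method under test.
        describe("loadSettings") {
            // context blocks separate the execution cases.
            context("when the gateway call succeeds") {
                it("calls the gateway exactly once") {
                    viewModel.loadSettings()
                    expect(gatewayMock.fetchSettingsCallCount).to(equal(1))
                }
            }
        }
    }
}
```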

As can be seen, the output process was interrupted. With the “Continue previous response” command, ChatGPT can usually pick up where it left off; in some cases, though, it starts over from the beginning, which is one of the challenges of using ChatGPT for testing. By issuing the command “Please continue the following test” and providing the interrupted test, we got it to complete the remaining code, resulting in the following output:

ChatGPT efficiently generated a substantial amount of code, reducing the time and effort required. However, modifications were necessary to ensure proper functionality. Upon testing in Xcode, we encountered several errors. We did not have implementations for SettingsViewModelDelegateMock, SettingsCoordinatorDelegateMock, UserDefaultsManagerMock and SettingsGatewayMock.

To address these issues, we asked ChatGPT to generate a mock for the UserDefaultsManager class. Here’s the result:
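The mock it produced followed a common pattern: record every call and serve stubbed values instead of touching the real UserDefaults. A sketch of that idea, assuming a hypothetical UserDefaultsManagerProtocol (the real project’s API may differ):

```swift
import Foundation

// Assumed protocol; the actual UserDefaultsManager surface may differ.
protocol UserDefaultsManagerProtocol {
    func set(_ value: Bool, forKey key: String)
    func bool(forKey key: String) -> Bool
}

// Mock that records calls and serves stubbed values, so tests never
// read or write the real UserDefaults store.
final class UserDefaultsManagerMock: UserDefaultsManagerProtocol {
    private(set) var setCalls: [(value: Bool, key: String)] = []
    var stubbedValues: [String: Bool] = [:]

    func set(_ value: Bool, forKey key: String) {
        setCalls.append((value, key))
        stubbedValues[key] = value
    }

    func bool(forKey key: String) -> Bool {
        stubbedValues[key] ?? false
    }
}
```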

Additionally, we asked ChatGPT to generate mocks for the delegates by providing the following command: “Please continue previous test and write mocks for delegate and coordinatorDelegate”:
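Delegate mocks tend to be the simplest of all: they only need to record that each callback fired. A sketch under assumed protocol shapes (the real delegates’ methods will differ):

```swift
// Assumed delegate protocols; the project's actual callbacks may differ.
protocol SettingsViewModelDelegate: AnyObject {
    func settingsDidUpdate()
    func settingsDidFail(with message: String)
}

protocol SettingsCoordinatorDelegate: AnyObject {
    func settingsDidRequestClose()
}

// Each mock just counts invocations and captures arguments for assertions.
final class SettingsViewModelDelegateMock: SettingsViewModelDelegate {
    private(set) var didUpdateCallCount = 0
    private(set) var failMessages: [String] = []

    func settingsDidUpdate() { didUpdateCallCount += 1 }
    func settingsDidFail(with message: String) { failMessages.append(message) }
}

final class SettingsCoordinatorDelegateMock: SettingsCoordinatorDelegate {
    private(set) var didRequestCloseCallCount = 0
    func settingsDidRequestClose() { didRequestCloseCallCount += 1 }
}
```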

Finally, to complete our testing process, we tasked ChatGPT with writing mocks for the SettingsGateway class. All we had to do was provide the class and issue the command: “Please write mocks for the SettingsGateway class”:

and ChatGPT generated the following:

We were not entirely satisfied with the initial results, as the mocked property values were missing and the generated class did not actually conform to SettingsGatewayProtocol. To rectify this, we asked ChatGPT to correct the previous mock with a simple command: “Please correct the previous mock”, and the updated result was as follows:

After analyzing the result, we noticed that the changeSettings(body: SettingsBody) method only returned results, but not errors. To resolve this, we added an additional property, errorToReturn, to the mock.
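The resulting pattern can be sketched like this – the protocol shape and the SettingsBody/SettingsResponse structs are our assumptions, but the errorToReturn idea is the one described above: if it is set, the stubbed call fails; otherwise it succeeds.

```swift
import Foundation

// Assumed shapes; the real SettingsBody/SettingsResponse are richer.
struct SettingsBody { let notificationsEnabled: Bool }
struct SettingsResponse { let success: Bool }

protocol SettingsGatewayProtocol {
    func changeSettings(body: SettingsBody,
                        completion: @escaping (Result<SettingsResponse, Error>) -> Void)
}

final class SettingsGatewayMock: SettingsGatewayProtocol {
    // Stubbed outcomes: setting errorToReturn makes the call fail,
    // otherwise resultToReturn is delivered as a success.
    var resultToReturn = SettingsResponse(success: true)
    var errorToReturn: Error?
    private(set) var changeSettingsCallCount = 0

    func changeSettings(body: SettingsBody,
                        completion: @escaping (Result<SettingsResponse, Error>) -> Void) {
        changeSettingsCallCount += 1
        if let error = errorToReturn {
            completion(.failure(error))
        } else {
            completion(.success(resultToReturn))
        }
    }
}
```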

Additionally, we asked ChatGPT to generate an object with specific values with the command: “Please create an object for the following struct”:

What it returned was a changeSettingsResponse object, which was used as a return value in the changeSettings method:
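To give a flavour of this step, here is a hypothetical version of such a fixture – the struct’s fields and values are illustrative only, not the project’s real model:

```swift
// Hypothetical response struct; field names are illustrative only.
struct ChangeSettingsResponse: Equatable {
    let success: Bool
    let message: String
    let updatedAt: String
}

// A fixture with concrete values, ready to be returned by the mock.
let changeSettingsResponse = ChangeSettingsResponse(
    success: true,
    message: "Settings updated",
    updatedAt: "2023-01-01T00:00:00Z"
)
```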

This can greatly reduce the amount of time and effort spent on manual object creation, especially in instances where multiple properties need to be instantiated.

As a result, we received the final version of the SettingsGatewayMock:

After a few more modifications, we were able to resolve the remaining errors when running the tests. To make the gateway call succeed in a given test, we stubbed the mock to return a successful result; to make it fail, we set the mock’s stubbed error instead. We also added the correct initializer for UserInfo, and with these modifications the test suite was complete.

However, our tests failed during execution. To resolve this issue, we had to replace ‘to’ with ‘toEventually’ in the expect clauses. Both ‘to’ and ‘toEventually’ apply a matcher to a value; the difference between them lies in the timing of evaluation. ‘to’ checks whether the actual value matches the expected value immediately, while ‘toEventually’ keeps re-evaluating the actual value until it matches the expected value or a timeout is reached. The latter is particularly useful when testing asynchronous code, which is why we chose ‘toEventually’ in our case.
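The difference looks like this in practice – a fragment assuming Nimble is available and a view model with a hypothetical isLoading property:

```swift
import Nimble

// `to` evaluates the expectation once, immediately:
expect(viewModel.isLoading).to(beFalse())

// `toEventually` re-evaluates until the expectation passes or times out,
// which suits state that is updated asynchronously by the gateway:
expect(viewModel.isLoading).toEventually(beFalse(), timeout: .seconds(1))
```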

Our tests were now running smoothly. With a solid foundation of tests in place, adding more tests manually, or relying on ChatGPT’s assistance to generate additional tests, allows us to thoroughly cover all possible scenarios.

How effective is ChatGPT's assistance?

While the previous example may seem straightforward, using ChatGPT on a real-world project required some effort to get the desired outcome. However, with clear communication and a thorough understanding of the project and of what needed to be tested, ChatGPT proved to be a great assistant. Given clear instructions, it quickly generated a significant amount of code and saved time on repetitive tasks such as preparing mocks.


ChatGPT is a useful tool for developers seeking to streamline their testing process. Despite occasional setbacks, such as errors or unavailability, it has proven to be a great time-saving tool, especially for projects that have not been developed using the test-driven development approach. It is an excellent resource for developers of all levels, including those new to testing, who can practice and learn to write effective tests over time.

So, to become the Iron Man of unit testing, we can use ChatGPT as a personal J.A.R.V.I.S. for generating test foundations and creating mocks, and, with some adjustments, ensure that the tests are effective and accurate.