The significance of writing unit tests is well understood among software developers. However, adding tests to an already finished project can be a daunting task. Fortunately, we don't have to do it alone – we can leverage ChatGPT, our very own version of Tony Stark's AI assistant J.A.R.V.I.S., to help us.
Since its introduction, ChatGPT has become a highly sought-after tool for various tasks, including software development. Built on a highly advanced language model developed by OpenAI, it can generate human-like text. It has been trained on a huge dataset, giving it a deep understanding of language that allows it to produce relevant responses. It can be tailored to specific use cases, such as generating code snippets, answering technical questions or providing recommendations based on user input. Given these capabilities, it is not uncommon for developers to seek ChatGPT's assistance for tasks like unit testing in their projects.
What do we write tests for?
In this article, we will show how ChatGPT helped us generate unit tests for an existing project written in Swift. The focus was on testing view model classes that include gateways and limited logic.
The classes used in our example have been slightly modified to serve as a good example of code before testing, but have been shortened for clarity.
Good preparation is the key
Before beginning the process of writing unit tests, it is essential to have a good understanding of the existing project and the classes that need to be tested. By taking the time to inspect the classes and injecting every dependency through the constructor, we made sure that ChatGPT generated clearer and more comprehensive tests.
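To illustrate why constructor injection matters for testability, here is a minimal, self-contained sketch. The names (GreetingViewModel, GreetingGateway) are hypothetical, not from the project: a dependency hidden behind a protocol and handed in through the initializer can be replaced by a recording mock in tests, with no singletons to work around.

```swift
import Foundation

// Hypothetical protocol: abstracts the dependency so a test double can stand in.
protocol GreetingGateway {
    func fetchGreeting() -> String
}

final class GreetingViewModel {
    private let gateway: GreetingGateway

    // The dependency arrives through the constructor, not a shared singleton,
    // which is what lets tests inject a mock.
    init(gateway: GreetingGateway) {
        self.gateway = gateway
    }

    func greeting() -> String {
        gateway.fetchGreeting()
    }
}

// A test double that records that it was used and returns a canned value.
final class GreetingGatewayMock: GreetingGateway {
    private(set) var fetchCalled = false

    func fetchGreeting() -> String {
        fetchCalled = true
        return "Hello from the mock"
    }
}

let mock = GreetingGatewayMock()
let viewModel = GreetingViewModel(gateway: mock)
print(viewModel.greeting())  // Hello from the mock
print(mock.fetchCalled)      // true
```

With this shape in place, the prompt to ChatGPT can focus on behavior, because every collaborator is visible in the initializer signature.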
Additionally, we included Quick and Nimble frameworks, which are popular testing libraries for the Swift programming language. Quick is a behavior-driven development framework that provides a simple, readable syntax for writing tests. Nimble is a framework that makes it easy to write assertions in your tests. In comparison to Xcode assertions and tests, they offer a more intuitive and expressive syntax, making it easier to write clear and concise tests.
How to get started
First, we copied the class below and gave ChatGPT a simple command: “Please write tests using Quick and Nimble for the following Swift code”. When making a request for code from ChatGPT, it is important to provide as much information as possible. Failing to do so, such as simply asking “Write a test for this class,” would result in a significantly different outcome.
import Foundation
import RxSwift

protocol SettingsViewModelDelegate: AnyObject {
    func settingsDidChange()
    func showProgressIndicator()
    func hideProgressIndicator()
}

protocol SettingsCoordinatorDelegate: AnyObject {
    func showErrorScreen()
}

class SettingsViewModel {
    weak var delegate: SettingsViewModelDelegate?
    weak var coordinatorDelegate: SettingsCoordinatorDelegate?
    let userInfo: UserInfo
    var notificationsEnabled: Bool
    var privacyEnabled: Bool
    let userDefaultsManager: UserDefaultsManager
    let gateway: SettingsGateway
    let disposeBag = DisposeBag()

    init(userInfo: UserInfo, userDefaultsManager: UserDefaultsManager = UserDefaultsManager(), gateway: SettingsGateway = SettingsGateway()) {
        self.userInfo = userInfo
        self.userDefaultsManager = userDefaultsManager
        self.notificationsEnabled = userDefaultsManager.notificationsEnabled
        self.privacyEnabled = userDefaultsManager.privacyEnabled
        self.gateway = gateway
    }

    func changeSettings(notificationsEnabled: Bool, privacyEnabled: Bool) {
        guard let userId = userDefaultsManager.userId else {
            return
        }
        delegate?.showProgressIndicator()
        let body = SettingsBody(userId: userId, notificationsEnabled: notificationsEnabled, privacyEnabled: privacyEnabled)
        gateway
            .changeSettings(body: body)
            .subscribeOnBackgroundObserveOnMain()
            .subscribe(onSuccess: { [weak self] response in
                guard let self else { return }
                self.notificationsEnabled = response.notificationsEnabled
                self.privacyEnabled = response.privacyEnabled
                self.userDefaultsManager.notificationsEnabled = response.notificationsEnabled
                self.userDefaultsManager.privacyEnabled = response.privacyEnabled
                self.delegate?.settingsDidChange()
            }, onFailure: { [weak self] _ in
                guard let self else { return }
                self.delegate?.hideProgressIndicator()
                self.coordinatorDelegate?.showErrorScreen()
            }).disposed(by: disposeBag)
    }
}
ChatGPT generated a well-structured test that meets all the required specifications:
- The test is a subclass of the QuickSpec class
- It has a spec() method that includes the class SettingsViewModel being tested and all necessary mock dependencies
- ViewModel and mocks are initialized through a beforeEach closure
- The describe blocks are named after the methods being tested
- The context blocks divide the various cases of execution and behavior for the method
- The it blocks contain the actual test, where the method is called on the viewModel and verified using the expect clause
import Quick
import Nimble
import RxSwift
@testable import Test_Project

class SettingsViewModelSpec: QuickSpec {
    override func spec() {
        var viewModel: SettingsViewModel!
        var delegate: SettingsViewModelDelegateMock!
        var coordinatorDelegate: SettingsCoordinatorDelegateMock!
        var userDefaultsManager: UserDefaultsManagerMock!
        var gateway: SettingsGatewayMock!

        beforeEach {
            delegate = SettingsViewModelDelegateMock()
            coordinatorDelegate = SettingsCoordinatorDelegateMock()
            userDefaultsManager = UserDefaultsManagerMock()
            gateway = SettingsGatewayMock()
            viewModel = SettingsViewModel(
                userInfo: UserInfo(),
                userDefaultsManager: userDefaultsManager,
                gateway: gateway
            )
            viewModel.delegate = delegate
            viewModel.coordinatorDelegate = coordinatorDelegate
        }

        describe("changeSettings") {
            context("when the user ID exists") {
                beforeEach {
                    userDefaultsManager.userId = "123"
                }

                context("when the gateway call is successful") {
                    beforeEach {
                        gateway.changeSettingsResult = .success(SettingsResponse(
                            notificationsEnabled: true,
                            privacyEnabled: false
                        ))
                        viewModel.changeSettings(notificationsEnabled: true, privacyEnabled: false)
                    }

                    it("calls showProgressIndicator on delegate") {
                        expect(delegate.showProgressIndicatorCalled).to(beTrue())
                    }

                    it("updates notificationsEnabled and privacyEnabled on the view model") {
                        expect(viewModel.notificationsEnabled).to(beTrue())
                        expect(viewModel.privacyEnabled).to(beFalse())
                    }

                    it("updates notificationsEnabled and privacyEnabled on the userDefaultsManager") {
                        expect(userDefaultsManager.notificationsEnabled).to(beTrue())
                        expect(userDefaultsManager.privacyEnabled).to(beFalse())
                    }

                    it("calls settingsDidChange on delegate") {
                        expect(delegate.settingsDidChangeCalled).to(beTrue())
                    }

                    it("doesn't call hideProgressIndicator on delegate") {
                        expect(delegate.hideProgressIndicatorCalled).to(beFalse())
                    }

                    it("doesn't call showErrorScreen on coordinatorDelegate") {
                        expect(coordinatorDelegate.showErrorScreenCalled).to(beFalse())
                    }
                }

                context("when the gateway call fails") {
                    beforeEach {
                        gateway.changeSettingsResult = .failure(APIError.unknown)
                        viewModel.changeSettings(notificationsEnabled: true, privacyEnabled: false)
                    }

                    it("calls showProgressIndicator on delegate") {
                        expect(delegate.showProgressIndicatorCalled).to(beTrue())
                    }

                    it("doesn't update notificationsEnabled and privacyEnabled on the view model") {
                        expect(viewModel.notificationsEnabled).to(beFalse())
                        expect(viewModel.privacyEnabled).to(beFalse())
                    }

                    it("
As can be seen, the output was cut off mid-generation. With the "Continue previous response" command, ChatGPT can usually pick up where it left off; in some cases, though, it starts over from the beginning, which is one of the challenges of using ChatGPT this way. By issuing the command "Please continue the following test" and providing the interrupted test, it was able to complete the remaining code, resulting in the following output:
                    it("doesn't update notificationsEnabled and privacyEnabled on the view model") {
                        expect(viewModel.notificationsEnabled).to(beFalse())
                        expect(viewModel.privacyEnabled).to(beFalse())
                    }

                    it("doesn't update notificationsEnabled and privacyEnabled on the userDefaultsManager") {
                        expect(userDefaultsManager.notificationsEnabled).to(beFalse())
                        expect(userDefaultsManager.privacyEnabled).to(beFalse())
                    }

                    it("doesn't call settingsDidChange on delegate") {
                        expect(delegate.settingsDidChangeCalled).to(beFalse())
                    }

                    it("doesn't call hideProgressIndicator on delegate") {
                        expect(delegate.hideProgressIndicatorCalled).to(beFalse())
                    }

                    it("doesn't call showErrorScreen on coordinatorDelegate") {
                        expect(coordinatorDelegate.showErrorScreenCalled).to(beFalse())
                    }
                }
            }
        }
    }
}
ChatGPT efficiently generated a substantial amount of code, reducing the time and effort required. However, the output needed modifications before it would work: running it in Xcode produced several errors, because there were no implementations for SettingsViewModelDelegateMock, SettingsCoordinatorDelegateMock, UserDefaultsManagerMock and SettingsGatewayMock.
To address these issues, we asked ChatGPT to generate a mock for the UserDefaultsManager class. Here’s the result:
class UserDefaultsManagerMock: UserDefaultsManager {
    var mockUserId: String?
    var mockNotificationsEnabled: Bool = false
    var mockPrivacyEnabled: Bool = false

    override var userId: String? {
        get {
            return mockUserId
        }
        set {
            mockUserId = newValue
        }
    }

    override var notificationsEnabled: Bool {
        get {
            return mockNotificationsEnabled
        }
        set {
            mockNotificationsEnabled = newValue
        }
    }

    override var privacyEnabled: Bool {
        get {
            return mockPrivacyEnabled
        }
        set {
            mockPrivacyEnabled = newValue
        }
    }
}
Additionally, we asked ChatGPT to generate mocks for the delegates by providing the following command: “Please continue previous test and write mocks for delegate and coordinatorDelegate”:
class SettingsViewModelDelegateMock: SettingsViewModelDelegate {
    var showProgressIndicatorCalled = false
    var settingsDidChangeCalled = false
    var hideProgressIndicatorCalled = false

    func showProgressIndicator() {
        showProgressIndicatorCalled = true
    }

    func settingsDidChange() {
        settingsDidChangeCalled = true
    }

    func hideProgressIndicator() {
        hideProgressIndicatorCalled = true
    }
}

class SettingsCoordinatorDelegateMock: SettingsCoordinatorDelegate {
    var showErrorScreenCalled = false

    func showErrorScreen() {
        showErrorScreenCalled = true
    }
}
Finally, to complete our testing process, we tasked ChatGPT with writing mocks for the SettingsGateway class. All we had to do was provide the class and issue the command: “Please write mocks for the SettingsGateway class”:
class SettingsGateway {
    private let apiClient: APIClientProtocol

    init(apiClient: APIClientProtocol = APIClient.shared) {
        self.apiClient = apiClient
    }

    func changeSettings(body: SettingsBody) -> Single<ChangeSettingsResponse> {
        let request = ChangeSettingsRequest(body: body)
        return apiClient.perform(request: request)
    }
}
and ChatGPT generated the following:
class SettingsGatewayMock: SettingsGatewayProtocol {
    var changeSettingsResult: Single<ChangeSettingsResponse>!

    func changeSettings(body: SettingsBody) -> Single<ChangeSettingsResponse> {
        return changeSettingsResult
    }
}
We were not entirely satisfied with this initial result: the mocked property values were missing, and the original class did not in fact conform to any SettingsGatewayProtocol. To rectify this, we asked ChatGPT to correct the previous mock with a simple command: "Please correct the previous mock", and the updated result was as follows:
class SettingsGatewayMock: SettingsGateway {
    var changeSettingsResult: Single<ChangeSettingsResponse> = Single.just(ChangeSettingsResponse(success: true))

    override func changeSettings(body: SettingsBody) -> Single<ChangeSettingsResponse> {
        return changeSettingsResult
    }
}
After analyzing the result, we noticed that the changeSettings(body: SettingsBody) method could only return successful responses, never errors. To resolve this, we added an additional property, errorToReturn, to the mock.
Additionally, we asked ChatGPT to generate an object with specific values with the command: “Please create an object for the following struct”:
struct ChangeSettingsResponse: Codable {
    let status: Int
    let notificationsEnabled: Bool
    let privacyEnabled: Bool
}
What it returned was a changeSettingsResponse object, which was used as a return value in the changeSettings method:
let changeSettingsResponse = ChangeSettingsResponse(status: 200, notificationsEnabled: true, privacyEnabled: false)
This can greatly reduce the amount of time and effort spent on manual object creation, especially in instances where multiple properties need to be instantiated.
As a result, we received the final version of the SettingsGatewayMock:
class SettingsGatewayMock: SettingsGateway {
    var errorToReturn: Error?
    var changeSettingsResult: Single<ChangeSettingsResponse> = Single.just(ChangeSettingsResponse(status: 200, notificationsEnabled: true, privacyEnabled: false))

    override func changeSettings(body: SettingsBody) -> Single<ChangeSettingsResponse> {
        if let errorToReturn {
            return Single.error(errorToReturn)
        }
        return changeSettingsResult
    }
}
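The stub-or-fail idea behind this mock can be shown without RxSwift. Below is a minimal, self-contained sketch using Swift's built-in Result type; all names here (GatewayStub, Response, TestError) are illustrative, not the project's real types. Setting an error before the call flips the stub from the success path to the failure path.

```swift
import Foundation

// Illustrative stand-ins for the real types. The real gateway returns an
// RxSwift Single, but the stub-or-fail pattern works the same with Result.
struct Response {
    let notificationsEnabled: Bool
    let privacyEnabled: Bool
}

enum TestError: Error {
    case network
}

final class GatewayStub {
    // When set, the stub fails; otherwise it returns the canned response.
    var errorToReturn: Error?
    var result = Response(notificationsEnabled: true, privacyEnabled: false)

    func changeSettings() -> Result<Response, Error> {
        if let errorToReturn {
            return .failure(errorToReturn)
        }
        return .success(result)
    }
}

let stub = GatewayStub()

// Success path: no error configured yet.
if case .success(let response) = stub.changeSettings() {
    print(response.notificationsEnabled)  // true
}

// Failure path: inject an error before the call.
stub.errorToReturn = TestError.network
if case .failure = stub.changeSettings() {
    print("failed as configured")
}
```

The design keeps each test's setup down to a single property assignment, which is exactly how the spec's beforeEach blocks configure the gateway.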
After making a few modifications, we were able to resolve the remaining errors when running the tests. To make the gateway call successful, we replaced:
gateway.changeSettingsResult = .success(SettingsResponse(
    notificationsEnabled: true,
    privacyEnabled: false
))
with
gateway.errorToReturn = nil
And to make the gateway call fail, we added:
gateway.errorToReturn = NSError(domain: "TestError", code: 0, userInfo: nil)
instead of
gateway.changeSettingsResult = .failure(APIError.unknown)
We also added the correct initializer for UserInfo, and with these modifications we successfully completed the tests.
However, our tests failed during execution. To resolve this, we had to replace 'to' with 'toEventually' in the expect clauses. Both are used to verify that the actual value matches the expected one; the difference lies in the timing of evaluation. 'to' checks the value immediately, while 'toEventually' repeatedly re-evaluates it until it matches or a timeout is reached. The latter is particularly useful when testing asynchronous code, which is why we chose 'toEventually' in our case.
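As a rough illustration of the difference, here is a toy sketch of the polling behavior (this is not Nimble's actual implementation, and the eventually helper is a hypothetical name): the expectation is re-evaluated in a loop until it holds or the deadline passes, which is what lets asynchronous work land before the assertion gives up.

```swift
import Foundation

// Toy sketch of 'toEventually'-style polling: repeatedly evaluate a
// predicate until it holds or a timeout elapses.
func eventually(timeout: TimeInterval = 1.0,
                pollEvery interval: TimeInterval = 0.01,
                _ predicate: () -> Bool) -> Bool {
    let deadline = Date().addingTimeInterval(timeout)
    while Date() < deadline {
        if predicate() { return true }
        Thread.sleep(forTimeInterval: interval)
    }
    return predicate()
}

// An immediate check ('to') would fail here, because the flag is set on a
// background queue after a short delay; polling succeeds once it lands.
var settingsDidChangeCalled = false
DispatchQueue.global().asyncAfter(deadline: .now() + 0.05) {
    settingsDidChangeCalled = true
}
print(eventually { settingsDidChangeCalled })  // true
```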
Our tests were now running smoothly. With a solid foundation of tests in place, adding more tests manually, or relying on ChatGPT’s assistance to generate additional tests, allows us to thoroughly cover all possible scenarios.
How effective is ChatGPT's assistance?
While the previous example may seem straightforward, using ChatGPT on real-world projects required some effort to get the desired outcome. With clear communication and a thorough understanding of the project and what needs to be tested, however, ChatGPT proved to be a great assistant. Given precise instructions, it quickly generated a significant amount of code and saved time on repetitive tasks such as preparing mocks.
Conclusion
ChatGPT is a useful tool for developers seeking to streamline their testing process. Despite occasional setbacks, such as errors or unavailability, it has proven to be a great time-saving tool, especially for projects that have not been developed using the test-driven development approach. It is an excellent resource for developers of all levels, including those new to testing, who can practice and learn to write effective tests over time.
So, to become the Iron Man of unit testing, let ChatGPT serve as your personal J.A.R.V.I.S.: it can generate test foundations and create mocks, and with some adjustments, you can ensure the tests are effective and accurate.