AI prompts are just requirements written for a robot’s eyes

When you think about it, prompts in the context of generative AI models such as GPT-3, Dall-E 2, Midjourney, etc. are nothing more than a set of requirements written for an AI “implementation agent” — rather than for your garden-variety human one.


The title might seem somewhat oversimplified (and clickbait-y), but if we examine what prompts truly are, we can probably agree that the following definition is not a bad approximation:

A precise set of expectations and instructions written for an AI “implementation agent”, with the ultimate goal of being interpreted by said agent and turned into a tangible artifact that matches those expectations as closely as possible.

Said artifact can be a piece of software, an essay, an image, a 3D printed object, or anything else the particular “implementation agent” can produce.

What I’m arguing here is that this definition holds true for any requirements you might write as a business analyst, functional designer, or requirements engineer/architect — if we simply replace the “AI implementation agent” with a “development/implementation team”.

If I’m right about that, however, then requirements and prompts for generative AI models are really cut from the same cloth, and it stands to reason that many (or indeed most) good practices for writing requirements should apply to the newly conceived craft of writing good AI model prompts, called “prompt engineering”, and vice versa.

That last assertion is actually somewhat testable, so let’s investigate this a bit further.

What are the good practices for writing prompts?

As with anything else, you can write prompts/requirements poorly or you can write them well. The funny thing is that (unless you are truly bad at it) both will most likely look good enough to a layman, who won’t be able to tell the difference.

Those charged with implementing those requirements, however, certainly will.

That is the primary reason for a peculiar effect you might have observed yourself: most of the issues in any project tend to manifest themselves in the development phase and not sooner. Implementation is simply the first time that the preparation work, and the requirements resulting from it, are truly put to the test.

To prevent that as much as possible, there are numerous good practices we can follow when writing requirements. The exact same is true for writing prompts — that craft is called “prompt engineering” and is (somewhat expectedly) rather similar to the craft of requirements engineering.

To figure out what the good practices for writing prompts for AI models are, we’ll consult the following guides as a reference:

  • The Prompt Engineering Guide at promptingguide.ai
  • The prompt engineering tips from the OpenAI help page

So let's check out those resources and see how the practices listed within them apply to requirements writing in general.

It’s important to note that we will not go over each and every one of the tips and good practices listed in our reference material — that would make for a *very* long and rather redundant article. We’ll instead group them into a few important general concepts and discuss those.

Concept #1: Be specific / avoid imprecision

This is the golden rule of writing any requirement, in my opinion. There is no place for vagueness when you define what you expect the system to do for you. Although that seems like self-evident common knowledge, it’s very easy to get wrong. Just ask any developer who has had to work from imprecise or unspecific requirements.

There are two main reasons that the requirements and prompts need to be specific and precise.

The first is that once any piece of requirements gets to the point of actually being implemented, anything missing from it, any imprecision, ambiguity, or vague definition, WILL be interpreted and filled in with assumptions by the implementation agent, be it human or otherwise. You typically won’t be there at that point, and the bigger the assumption the team needs to make in order to progress, the more likely they are to guess wrong and endanger the project as a result.

And before you say “I don’t need to write my requirements to be very specific and precise, because I am an agile product owner, we discuss those things in refinement sessions, and the team asks for clarifications when they get to a particular story because we have great communication and all” — let me point out that you ARE STILL providing all the specifics and precise details; you just do it “just-in-time” instead of upfront.

The second reason is that you want to be able to determine, without a doubt, whether the implemented system is truly aligned with the requirements important to the project stakeholders — and the more precise the requirements are, the easier that task is.

As an example, consider the following requirement: “The load time of the homepage is significantly reduced”. This is very vague — if the load time was 5 seconds before and is 2.5 seconds now, that’s certainly a significant reduction, but both of those load times suck big time. You might try “The homepage loads quickly” — a bit better, but it doesn’t cut it either, because everyone has a different notion of what “loads quickly” means. What you need instead is something along the lines of “The TTFB for the homepage is less than 150 milliseconds under no load, and no longer than 300 milliseconds under a load of 100 concurrent requests”.
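A nice side effect of that level of precision is that the requirement becomes directly testable. As a minimal sketch, assuming Python with the requests library and a hypothetical https://example.com/ homepage URL (and noting that requests’ elapsed time is only an approximation of TTFB), a check for the “no load” half of that requirement could look like this:

import requests

# requests' elapsed measures the time from sending the request until the
# response headers are parsed -- a reasonable approximation of TTFB.
response = requests.get("https://example.com/", stream=True)
ttfb_ms = response.elapsed.total_seconds() * 1000
assert ttfb_ms < 150, f"TTFB was {ttfb_ms:.0f} ms, expected < 150 ms"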

If you think “I’m not that technical, I cannot know these numbers”, see concept #4 below — know your domain.

Precision and specificity work the same way for AI models. Here is an example of a precise, specific prompt, as stated in promptingguide.ai:

Extract the name of places in the following text.
Desired format:
Place: <comma_separated_list_of_company_names>
Input: “Although these developments are encouraging to researchers, much is still a mystery. “We often have a black box between the brain and the effect we see in the periphery,” says Henrique Veiga-Fernandes, a neuroimmunologist at the Champalimaud Centre for the Unknown in Lisbon. “If we want to use it in the therapeutic context, we actually need to understand the mechanism.”
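And to show there is no magic in how such a prompt reaches the model: it travels as plain text. Here is a minimal sketch, assuming the official openai Python package with its v1-style client; the model name and the truncated input are illustrative, not part of the guide’s example:

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "Extract the name of places in the following text.\n"
    "Desired format:\n"
    "Place: <comma_separated_list_of_company_names>\n"
    "Input: ..."  # the article text from the example above goes here
)

# The precise format instruction is simply part of the plain-text prompt.
response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative; any capable chat model works
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)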

Also, see if you can spot the difference in precision between these two prompts:

Explain the concept of prompt engineering. Keep the explanation short, only a few sentences, and don’t be too descriptive.

vs

Use 2–3 sentences to explain the concept of prompt engineering to a high school student.

Note that the latter, besides being shorter, is much more precise and makes interpreting “what the author really wants” significantly easier than the former.

Concept #2: Illustrate what you mean using a carefully selected set of examples

If a picture is worth a thousand words, then a good example is worth a thousand lines of requirements. This is not an exaggeration. There is an excellent book on the subject called Specification by Example, and I recommend it wholeheartedly.

What examples provide the reader with is context, and that is extremely important because however well and tightly your requirements might be written, they will NOT be “water-tight”: there will still be some residual ambiguities and imperfect sentence constructions, i.e. residues of the fact that requirements writers tend to be mostly human. (And no, generative AI models will not change that just yet.)

As we stated previously, once you get to the implementation, any ambiguity or “hole” will invariably be filled with a set of assumptions by the implementation agent/team, and it’s a simple fact of life that the more context the person (or AI model/robot) has, the better their assumptions will be. Put otherwise, the more of the context that the requirements “live in” is provided to them, the more their assumptions will be in line with what you tried to convey.

And examples are probably the best way to provide a lot of context with very little content. (Examples have the best context-to-content ratio that I know of.)

Writing good examples is not a trivial task, though, and I’d strongly recommend reading the book mentioned above. You won’t be sorry.

When it comes to AI model prompts, there are several such techniques: few-shot prompting, chain-of-thought, and a bunch more, listed in the reference materials mentioned before.

Here is an example of the chain-of-thought technique (see also few-shot prompting) used in an actual prompt (source: promptingguide.ai) to achieve correct results for a type of query that was, for a long time, a hard nut to crack for LLMs in general:

The odd numbers in this group add up to an even number: 4, 8, 9, 15, 12, 2, 1.
A: Adding all the odd numbers (9, 15, 1) gives 25. The answer is False.
The odd numbers in this group add up to an even number: 17, 10, 19, 4, 8, 12, 24.
A: Adding all the odd numbers (17, 19) gives 36. The answer is True.
The odd numbers in this group add up to an even number: 16, 11, 14, 4, 8, 13, 24.
A: Adding all the odd numbers (11, 13) gives 24. The answer is True.
The odd numbers in this group add up to an even number: 17, 9, 10, 12, 13, 4, 2.
A: Adding all the odd numbers (17, 9, 13) gives 39. The answer is False.
The odd numbers in this group add up to an even number: 15, 32, 5, 13, 82, 7, 1.
A:

Resulting in the following answer:

Adding all the odd numbers (15, 5, 13, 7, 1) gives 41. The answer is False.
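If you script this kind of prompting, the exemplars are typically just concatenated ahead of the actual question. Here is a minimal sketch in plain Python (no particular model API assumed), reusing two of the exemplars above:

EXEMPLARS = [
    ("4, 8, 9, 15, 12, 2, 1",
     "Adding all the odd numbers (9, 15, 1) gives 25. The answer is False."),
    ("17, 10, 19, 4, 8, 12, 24",
     "Adding all the odd numbers (17, 19) gives 36. The answer is True."),
]

def build_few_shot_prompt(question: str) -> str:
    # Each exemplar demonstrates the reasoning pattern the model should imitate.
    parts = [
        f"The odd numbers in this group add up to an even number: {numbers}.\nA: {answer}"
        for numbers, answer in EXEMPLARS
    ]
    parts.append(f"The odd numbers in this group add up to an even number: {question}.\nA:")
    return "\n".join(parts)

print(build_few_shot_prompt("15, 32, 5, 13, 82, 7, 1"))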

Concept #3: Don’t state what NOT TO DO, but rather state what you want DONE INSTEAD

Negative or prohibitive requirements or instructions (“don’t use X”, “don’t ask for Y”, etc.) are more common in the world of non-human implementation agents than human ones, mostly owing to the fact that humans are (generally speaking) intelligent and possess common sense, while AI models really aren’t and don’t.

However, the underlying principle of “don’t state what NOT to do, but rather state what to DO INSTEAD” illustrates the importance of framing the requirements/instructions just right for the particular reader’s eyes.

The reason is that whenever you don’t explicitly state what YOU WANT to happen, you leave a *lot* of solution space open for interpretation and implicitly state “Whatever you produce here, I’m fine with it”. That is a VERY dangerous thing to do with AI models: while humans have common sense at their disposal to keep them from wildly staggering through the solution space, AI models typically don’t. They will therefore often produce results that are quite “off” from what you’d reasonably expect, and may even ignore your explicit instruction not to do something.

When you state what you WANT done, however, you neatly circumvent this problem of leaving too much solution space open to an implementation agent that has no real intelligence or common sense to make sense of it the way a human would.

That said, this concept applies to standard requirements writing just as well, as it can be seen as a specific aspect of the “be specific/precise” rule — which really boils down to “don’t leave too much of the solution space open for interpretation”. That is always a good idea, especially when you write for a team of inexperienced developers.

An example of such a prompt could be:

You will be given a movie name and should provide a 3 to 5 sentence description of that movie. In case you are not familiar with the movie, do not produce the description but provide only the following reply: “I have never heard about that movie in all of my months of existence so far”. The movie is “Fear and Loathing in Las Vegas”

The response would be what we expected:

“Fear and Loathing in Las Vegas” is a surreal and darkly comedic film directed by Terry Gilliam, based on Hunter S. Thompson’s novel. The movie follows the drug-fueled journey of journalist Raoul Duke (played by Johnny Depp) and his attorney Dr. Gonzo (played by Benicio Del Toro) as they embark on a wild and chaotic road trip to Las Vegas to cover a motorcycle race. Along the way, they encounter bizarre characters, engage in mind-altering drug experiences, and confront the disintegration of the American Dream. The film is known for its eccentric storytelling, hallucinatory visuals, and a scathing commentary on the counterculture of the 1960s and 1970s.

Now if we try asking for a movie we know doesn’t exist:

The movie is “Brko’s imaginary movie”

The response would not be some hallucinated one, but just:

I have never heard about that movie in all of my months of existence so far.

And just for s**t and giggles, here is the hallucinated reply as well — the one we’d get if we didn’t specify what we don’t want the system to do and what we want done instead. This one was produced using the GPT-3 model directly (via the OpenAI playground), as ChatGPT has some hidden prompting going on behind the scenes at the beginning of each conversation thread that prevents it from hallucinating too much.

“Brko’s Imaginary Movie” is a mind-bending psychological thriller that explores the blurred lines between reality and imagination. The film follows Brko, a troubled artist who becomes consumed by his own vivid fantasies. As Brko’s imaginary world begins to intertwine with his real life, he struggles to discern what is real and what is mere fabrication of his own mind. As the line between reality and imagination continues to blur, Brko must confront his inner demons and unravel the truth before losing himself completely in his own creation.
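In API terms, a guard instruction like the one above typically goes into a system message, so that it applies to every movie the user asks about. A minimal sketch, again assuming the openai v1-style client with an illustrative model name:

from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You will be given a movie name and should provide a 3 to 5 sentence "
    "description of that movie. In case you are not familiar with the movie, "
    "do not produce the description but provide only the following reply: "
    '"I have never heard about that movie in all of my months of existence so far."'
)

def describe_movie(title: str) -> str:
    # The guard is phrased positively: it states what TO reply when unsure.
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": f'The movie is "{title}"'},
        ],
    )
    return response.choices[0].message.content

print(describe_movie("Fear and Loathing in Las Vegas"))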

Concept #4: Understand the domain you’re prompting in as much as you can

This is not a prompt engineering technique as such, but it is an essential concept nonetheless, in my opinion.

The reason that LLMs and generative AI models won’t replace us all anytime soon is that they still need someone to prompt them, and to prompt them competently at that. So, just as you really need to understand the domain that you’re writing requirements for, the same goes for AI model prompting. The good thing with LLMs, though, is that you can increase your knowledge of the domain in “baby steps”, sharpening the focus of your prompts as your domain knowledge grows.

The analogy that works for me is that of a head chef in a fancy restaurant. That person typically doesn’t dice onions, make a roux, or peel and boil potatoes — regardless of the fact that he or she can certainly do so competently. All that “grunt work” is done by cooks who are skilled at their work and who provide pretty much all the components of any fancy meal — but the head chef is the one who combines them, seasons them perfectly, and ultimately elevates the components into the superior end product. It also comes as no surprise that head chefs tend to really understand the domain of cooking, and especially what constitutes a meal that its consumers would acknowledge as fancy and high-end.

The point is that this is rather analogous to how generative AI is used in practice — it (figuratively speaking) replaces the cooks in the analogy above, dicing and sautéing the figurative onions and boiling the figurative potatoes faster than any human could, but it still requires the head chef to make sense of its output (if we care about quality, that is).

And, in my opinion at least, the only way to make sense of any output is to understand the nuts and bolts of the domain of that output (while staying VERY AWARE of the fact that you’re probably not an expert, but rather someone who understands *just* the nuts and bolts) — and to be able to identify when you need to consult someone who understands the domain better than you do.

The same goes for classic requirements — if you’re designing a mobile application, an embedded system, or an API, you *have* to understand how those systems work, what they are composed of, and how they operate “under the hood”. Not to the level that you could implement them yourself, obviously, but at least to the level that you can confidently explain them to a layperson in a way that makes sense to both you and them.

So what does it mean for a business analyst or a functional designer such as myself?

Let's consider some tips from the OpenAI help page related to their generative AI models:

  • Be specific, descriptive, and as detailed as possible about the desired context, outcome, length, format, style, etc.
  • Articulate the desired output format through examples
  • Reduce “fluffy” and imprecise descriptions

I dare say that if I hadn’t told you in advance that these come from the OpenAI help page, you wouldn’t be able to tell them apart from good practices for writing requirements in general.

Hopefully, everything written so far has demonstrated that “classic” requirements and AI prompts have quite a few things in common. We’ve just scratched the surface, though, and I strongly encourage you to read the prompt engineering resources listed at the beginning of this article.

But what does it mean for a functional designer/business analyst/requirements engineer?

Well, as any non-fiction writer can tell you, you write for your audience. For example, I tend to write requirements differently for a team with a strong understanding of the domain and a lot of shared context than I would for a group of juniors, or for a team fresh to the project or the domain.

The same concept applies here — when you are writing prompts, you are still writing requirements; it’s just that your audience is non-human.

The consequence, in my opinion, is that:

Those who can write requirements well (requirements engineers, functional designers, and business analysts), and “tailor” them for the project context and intended audience, are extremely well-positioned to become decent prompt engineers quickly.

The reason is that most of the skills, including pretty much all of the hard-to-learn written communication skills, are already there. Only the prompting techniques specific to the AI model in question still have to be learned, and those are really not hard — provided you are technically inclined enough, and you should be if you’re designing and documenting requirements for IT systems (see “understand the domain”).

Sounds interesting. How should I proceed?

A very smart (and also very crazy) person stated that “paranoia is just another word for ignorance”, and probably the biggest obstacle you face when getting involved with AI is the misconception that “that AI thing is cool, but it’s also crazy complex and hard to understand unless you have a Ph.D. in math — it’s just too complicated for me”.

Luckily for you, nothing could be further from the truth. So here is what I’d suggest you try:

Step 1:

You should first learn what neural networks are, as they are a pretty simple concept, really — don’t let the fact that there is math involved scare you off. You can get a solid grasp of the fundamentals and the general workings of an average neural network with basic high school math (or even less than that).
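To illustrate just how little math is involved, here is a complete forward pass of a tiny neural network: nothing but multiplication, addition, and one “squashing” function. This is a minimal sketch in Python with numpy, and the weights are made up rather than trained:

import numpy as np

def sigmoid(x):
    # Squashes any number into the range (0, 1).
    return 1.0 / (1.0 + np.exp(-x))

# A network with 2 inputs, one hidden layer of 3 neurons, and 1 output.
W1 = np.array([[0.2, -0.4, 0.7],
               [0.5,  0.1, -0.3]])
b1 = np.array([0.1, 0.0, -0.2])
W2 = np.array([[0.6], [-0.8], [0.4]])
b2 = np.array([0.05])

x = np.array([1.0, 0.5])            # the input
hidden = sigmoid(x @ W1 + b1)       # weighted sums, then squash
output = sigmoid(hidden @ W2 + b2)  # same thing again
print(output)                       # a single number between 0 and 1

Training is “just” the process of nudging those weights until the outputs match the examples you feed in, which is the part the math-heavier resources cover.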

Then expand on what you’ve just learned by reading up on what generative models are and what they add to the basic neural network concept (you can ask ChatGPT to recommend some good “math lite” resources).

That first step will help you stop being afraid of “AI” and understand that the “I” part of “AI” is a pretty specific and constrained one — not what you might think from reading and watching all the pop-science media headlines and listening to some pundits.

It will also help you get a grasp of what exactly that thing you’re writing requirements (prompts) for is; i.e. it will get you acquainted with your new audience/readers.

Step 2:

The second step should be reading about prompt engineering and comparing those concepts with what you already know about writing requirements (expectations + instructions) in general.

Step 3:

And the third step is rather simple — experiment, experiment, and then experiment some more. Play around with actual generative models such as GPT, Dall-E 2, and Midjourney — any model you can get your hands on, really — and try to “feel” how they respond to the things you ask them to produce for you (they each have their strong and weak points). Then try to work around the obstacles you encounter by tweaking your prompts with prompt engineering techniques to get better results.
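A small harness makes that experimentation more systematic than eyeballing one-off chats. Here is a minimal sketch for comparing prompt variants side by side, assuming the openai v1-style client and reusing the two prompts from concept #1:

from openai import OpenAI

client = OpenAI()

VARIANTS = [
    "Explain the concept of prompt engineering. Keep the explanation short, "
    "only a few sentences, and don't be too descriptive.",
    "Use 2-3 sentences to explain the concept of prompt engineering "
    "to a high school student.",
]

for prompt in VARIANTS:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative
        messages=[{"role": "user", "content": prompt}],
    )
    # Compare the outputs side by side and keep whichever variant works best.
    print(f"--- PROMPT: {prompt}\n{response.choices[0].message.content}\n")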

And here are some resources to get you started:

This one is an excellent, (mostly) math-free, and expertly presented introduction to neural networks, where the concepts are explained through a very simple and easy-to-understand example.

And then, if you are not afraid of math, you can go over the following videos, which should give you a solid understanding of what we’re talking about when we say “AI” or “neural network”, and show you that there is no magic involved.

Conclusion

If you write requirements for a living and you are good at it — learn how to prompt generative AI models competently, and add prompt engineering to your set of skills. It pays well, and generative AI *is* the disruptive, world-altering technological advance we’ve been awaiting for so long. And if you are anything like me and love technology that has a real impact, then it should feel good to be a part of it.

Therefore, do yourself a favor, and don’t miss this opportunity.

I know I won’t.
