The other day I got tagged in a Twitter thread started by Wicked Witch of the Test about people with a background in linguistics who've ended up in testing. That prompted me to think about the language concepts I've found valuable in my day job. I started listing them, and then realised how many of them I've mentioned here over the years.
This post is one of an occasional series collecting some of those thoughts.
In this series so far we've looked at words and syntax. In both cases we've found that natural language is an imprecise medium for communication.
- We might know the same words and grammar as others
- ... but they will have their own idea about what they mean
- ... and even where we agree there is ambiguity
- ... and all of us, the world, and the language are evolving
- ... all the time.
Today we'll add semantics which, in a pleasing twist, is itself
ambiguous, meaning the study of meaning and also some specific meaning.
Sounds very formal, you might say. Where and when could semantics be relevant to testers in their day-to-day work?
Well, erm, just about everywhere and all the time!
On a recent project my team wanted to provide input validation rules for
clients of a service to allow them to check user input without a round-trip to
the server. Initially this seemed simple: we'd have a list of rules with names
like "minimum," "maximum," "minimumLength," "maximumLength," and
"allowableCharacters."
But it didn't take long before we were wondering how these constraints might apply to different data types, how to make behaviour consistent across them, and how to combine multiple rules on the same input.
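To make that concrete, here's a minimal Python sketch of the kind of checker we had in mind. The rule names come from our list, but the function and its behaviour are invented for illustration:

```python
# A hypothetical checker: the rule names match ours, everything else is
# invented for illustration.

def validate(value, rules):
    errors = []
    if "minimum" in rules and value < rules["minimum"]:
        errors.append("below minimum")
    if "minimumLength" in rules and len(value) < rules["minimumLength"]:
        errors.append("too short")
    if "allowableCharacters" in rules and any(
            c not in rules["allowableCharacters"] for c in str(value)):
        errors.append("disallowed character")
    return errors

# Straightforward for a number...
print(validate(5, {"minimum": 10}))       # ['below minimum']

# ...but what should "minimum" mean for a string? Python happily compares
# alphabetically here, which may or may not be what a user expects.
print(validate("abc", {"minimum": "b"}))  # ['below minimum'] -- really?

# And when rules are combined, must all pass, or any, and in what order?
print(validate("42", {"minimumLength": 3,
                      "allowableCharacters": "0123456789"}))  # ['too short']
```

Even this toy version silently takes positions on type coercion, string comparison, and an all-rules-must-pass combination policy.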
These are questions of semantics, of meaning, and for non-trivial applications
they can quickly get tricky. We would have got it very wrong (for us and our
users) if the semantics of our constraint language were hard to understand or,
worse, ambiguous.
So we paused and looked around for existing
frameworks we could borrow from, such as
constraint validation in HTML 5. Artificial languages such as these, which are machine-parsable, tend to be
smaller and less prone to ambiguity than natural languages, from necessity and
through the careful work of the designers.
But semantics isn't
confined to formal languages. Your whole product has semantics for its
users, whether you intend it and consciously craft it or not. And it can be
different for different users.
For example, in a UI we'd generally like a consistent experience in which the semantics of a button with a cross in it (say "close") can be combined with the semantics of other components such as a window or dialog box to mean "close the window" or "close the dialog". If that's not the case, where does it leave a user in their ability and confidence to navigate the software easily?
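As a toy illustration (the classes and function here are invented, not any real UI toolkit), the appeal is that one meaning of "close" composes with whatever it's attached to:

```python
# Invented toy classes, not a real UI toolkit: one "close" meaning,
# combined with different containers.

class Window:
    name = "window"

class Dialog:
    name = "dialog"

def close(container):
    # The semantics of the cross button, whatever it decorates.
    return f"close the {container.name}"

print(close(Window()))  # close the window
print(close(Dialog()))  # close the dialog
```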
Outside of the user experience, a shared understanding of the terms a team is using can help to avoid unnecessary friction and increase information sharing and collaboration. However, arguing over nuances rather than getting on and doing can also block everything else.
Context plays a part in where I'll put myself on that particular spectrum.
Sometimes a project needs a terminological grounding to stop stakeholders
talking past one another (as
Iain McCowatt found while establishing testing principles across teams at a
large corporation); on other occasions, only sucking it and seeing will help a team to get a feel for the problems at hand (as my team at Linguamatics did when trying to work out team values and mission).
What I do favour in general, though, is being explicit
about what any definitions are when we have them, or that we're agreeing to
proceed without them for the moment when we don't.
If you've been
around testing for a while you've probably seen "it's just semantics" and
"it's not just semantics" tossed casually into conversations. For
example, Dan Ashby in reply to
Casey Rosenthal on a Twitter thread:
I don't think it's just semantics here. It seems to be the meaning behind the words that's causing problems. Not the words itself. There's an entire craft relating to exploratory testing that seems to be missing from your radar.
A lot of time and effort from many people has gone into defining what testing
is or isn't. I've
done it myself
and there are numerous different versions. So what does testing mean?
The concept of a namespace is interesting here. You may be familiar with the idea already from package structures in
computer languages, where the reuse of names with different semantics is
enabled by adding a label to indicate the version being used. For example:
dan.exploratory-testing means something different to
casey.exploratory-testing. We only need to use the differentiating
labels when there is a chance of ambiguity.
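In Python terms it might look like this, with classes standing in for the modules that would normally provide the namespaces, hyphens swapped for underscores to make valid identifiers, and both definitions invented placeholders rather than either party's actual position:

```python
# Placeholder definitions, invented for illustration; classes stand in
# for the modules that would normally provide the namespaces.

class dan:
    exploratory_testing = ("a skilled, structured craft of simultaneous "
                           "learning, design, and execution")

class casey:
    exploratory_testing = "unscripted interaction with a running system"

# Bare "exploratory_testing" is ambiguous; the qualifying label resolves it.
print(dan.exploratory_testing)
print(casey.exploratory_testing)
```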
While this can
certainly solve one kind of problem, it also risks confusion or even
us-and-them opposition between groups who favour particular namespaces, or
perhaps between those in a namespace and those who are not even aware that
there is a namespace.
Semantics is not just a linguistic
construct. It has potential social impacts too.
In Linguistics,
compositionality
is the idea that the meaning of a sentence or phrase is built from the
meanings of its parts guided by the syntax. Take a sentence like "James tests software"; we might break it down into simple syntax like this:
[S [N James] [VP [V tests] [N software] ] ]
Where a sentence (S) is made up of a noun (N) and a verb phrase (VP), and VP
is itself broken down into a verb (V) and another noun.
We first
build the meaning of VP. From our understanding of "tests" and "software" we
can have an idea what it means to be testing software rather than testing
saliva samples, or electrical circuits, or widgets coming off a production
line.
Next we combine the meaning of "tests software" with "James"
to understand that it's a specific person that is being asserted to test
software rather than some other person, or a team, or Selenium.
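A toy sketch of this in Python (the tree shape and the glosses are invented, and real compositional semantics is far richer):

```python
# Invented toy example: the "meaning" of each node is computed only from
# the meanings of its children, guided by the syntactic label.

tree = ("S", ("N", "James"), ("VP", ("V", "tests"), ("N", "software")))

def meaning(node):
    label, *children = node
    if label in ("N", "V"):              # leaf: look up the word itself
        return children[0]
    if label == "VP":                    # verb + object -> an activity
        verb, obj = (meaning(c) for c in children)
        return f"{verb} {obj}"
    if label == "S":                     # subject + activity -> an assertion
        subject, activity = (meaning(c) for c in children)
        return f"it is asserted that {subject} {activity}"

print(meaning(tree))  # it is asserted that James tests software
```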
This initially seems appealing and logical. From a testing perspective there are helpful analogies to be taken from it, such as the example from earlier of the close button applying in the same way to windows and dialog boxes.
But
there are less helpful analogies too. Who hasn't had a conversation about the
risks of relying on the testing of subsystems for testing the whole system at
some point in their career? I had one last week.
In fact, the principle of compositionality doesn't hold in natural language either, or at least not straightforwardly in all cases. Take sarcasm, where building the meaning of
a sentence by composing the meanings of individual words is likely to miss
other cues that give the intended interpretation. Often the meaning is the
complete opposite of "what was said."
In philosophy,
the context principle
recognises that it can be dangerous to look for the meaning of some term
outside of the context in which it is being used.
Context-driven testing
wraps that insight into two of its own principles:
- The value of any practice depends on its context.
- There are good practices in context, but there are no best practices.
Without some universal understanding of a thing, an understanding that applies
regardless of any context, we can't claim that any actions on it will have
particular value.
Image: Wikimedia