An article in CallCentreHelper lists sarcasm as one of its top ten “oh-no” moments. However, there is no speech analytics solution on the market that can effectively pinpoint sarcasm in your call center. Should you be concerned? Not necessarily.
Although sarcasm is a glamorous research topic, it is not a significant driver of caller sentiment or top-line revenue. “Given the volumes of text we analyse, errors created by irony or sarcasm are very small and fall well within acceptable limits,” says Mark Westaby of Spectrum Consulting, a market research firm in London. The Boston-based text analytics company Luminoso agrees: “Thus far, we have found sarcasm to be a statistically insignificant problem, as the overwhelming majority of text is sincere.”
In addition, sarcasm isn’t necessary for evaluating agent performance. After mining calls for hundreds of customers, our analysts have confirmed that sarcasm in the call center is actually quite rare; it never appears among a customer’s top ten concerns about their call center. They have found that accurate categories and call scoring are far more reliable for identifying agent performance problems and for driving marketing revenue.
Sarcasm is also notoriously difficult for machines to detect. First, it is not easy to define and is subject to competing theories from psychology, linguistics, and neuroscience. “Before you can program a computer to do something cool, you have to understand what the cool thing is,” says Noah Goodman of Stanford University. Second, even humans struggle to distinguish what is sarcastic from what is not. It requires a non-trivial amount of knowledge about the world and about the speaker, and a high sensitivity to non-verbal cues. When two annotators were asked to label spoken sentences as sarcastic or non-sarcastic, their agreement rate was not far above chance. The human judges of sarcastic messages on Twitter didn’t fare much better.
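The “not far above chance” point is usually quantified with a chance-corrected agreement score such as Cohen’s kappa, which is near zero when two annotators agree no more often than their label frequencies would predict. Here is a minimal sketch; the ten labels are invented for illustration and are not drawn from any study:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two annotators.

    Returns 1.0 for perfect agreement and roughly 0.0 when
    agreement is no better than chance.
    """
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement if each annotator labeled at random
    # according to their own label frequencies.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(freq_a[label] * freq_b[label] for label in freq_a) / (n * n)
    if expected == 1:
        return 1.0
    return (observed - expected) / (1 - expected)

# Hypothetical labels for ten utterances (1 = sarcastic, 0 = sincere):
annotator_1 = [0, 0, 1, 0, 1, 0, 0, 1, 0, 0]
annotator_2 = [0, 1, 0, 0, 1, 0, 1, 0, 0, 0]
print(round(cohens_kappa(annotator_1, annotator_2), 2))  # → 0.05
```

Although the two annotators agree on 6 of 10 utterances, kappa is close to zero: with these label frequencies they would agree about that often by luck alone.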
The fact that sarcasm is rare and hard to label is bad news for machine learning algorithms, which need many labeled examples from a relevant data set. Since sarcasm is not a statistically significant problem in the first place, it doesn’t make sense to invest in the costly process of collecting and annotating training data.
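A back-of-the-envelope calculation shows why rarity makes annotation expensive. The prevalence figure below is assumed purely for illustration: if a phenomenon occurs in roughly 1% of utterances, collecting even 1,000 positive training examples means annotating on the order of 100,000 utterances.

```python
import math

def utterances_to_annotate(positives_needed, prevalence):
    """Minimum number of utterances to label, on average, to collect
    `positives_needed` examples of a phenomenon occurring at rate
    `prevalence` (a fraction between 0 and 1)."""
    return math.ceil(positives_needed / prevalence)

# Assumed 1% prevalence, 1,000 positive examples wanted:
print(utterances_to_annotate(1000, 0.01))  # → 100000
```

And because annotators themselves barely agree on what counts as sarcastic, each of those utterances would realistically need multiple labelers, multiplying the cost further.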
If sarcasm is difficult to deal with in written form, then it should be much easier to spot in a call recording, right? Not exactly: a highly cited study on sarcasm in spoken dialogue deals with only one phrase (“Yeah, right”) and was performed on manually transcribed audio. Unfortunately, with acoustic features alone the system produced unbalanced results. Judging from its low F-measure, the system either retrieved few instances of sarcasm, or could not reliably identify the ones it retrieved as sarcastic, or both. An acoustics-only system would face much steeper challenges in a contact-center environment with no manual transcription and millions of utterances to interpret.
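To see why a low F-measure implies one (or both) of those failure modes, recall that F-measure is the harmonic mean of precision (how many retrieved instances were truly sarcastic) and recall (how many truly sarcastic instances were retrieved), so it stays low unless both are high. The numbers below are invented for illustration, not taken from the cited study:

```python
def f_measure(precision, recall, beta=1.0):
    """Weighted harmonic mean of precision and recall (F1 when beta=1)."""
    if precision + recall == 0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# A detector that flags many utterances but is usually wrong:
print(round(f_measure(precision=0.20, recall=0.80), 2))  # → 0.32

# A detector that is usually right but misses most sarcasm:
print(round(f_measure(precision=0.80, recall=0.10), 2))  # → 0.18
```

Either failure mode drags the score down, which is why a single low F-measure cannot tell you which problem the system had.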
Given that sarcasm is nearly impossible for a machine to identify reliably and that it doesn’t drive quality in the contact center, sarcasm should not be the focal point of your speech analytics implementation. Instead, CallMiner chooses to focus on other aspects of speech analytics that provide much more accurate, reliable indicators of agent performance and top-line growth: analyzing 100% of your calls (no sampling), reliably categorizing and scoring each call with Semantic Building Blocks™, and getting that feedback into front-line agents’ hands as quickly as possible with MyEureka and EurekaLive.
Image Courtesy: http://www.someecards.com/
Originally posted here: http://callminer.com/sarcasm-in-the-call-center-yeah-right/
About the Author: Anya Korneyeva
Anya is a senior speech scientist at CallMiner. She has a Master’s in Computational Linguistics from Brandeis University, where she focused on speech recognition and machine learning. Her previous experience also includes managing global projects in the translation and localization industry.