2021 Trends for Enhancing Digital Customer Experiences

A CTO Chat on the Technology Outlook for 2021

In this episode of AGENT511’s podcast, The Digital Voice of the Customer, host Raed Adhami (co-founder and CTO of AGENT511) and guest Jason Perry, CTO of Mindgrub, discuss technologies that companies of all kinds can implement to improve customer experiences in 2021. Hot topics covered in this 30-minute session include app clips, artificial intelligence, and chatbots.

Listen - or peruse the transcript below - to learn how these cutting-edge technologies could help you cut through the noise and improve your customer experience.

Transcript

Raed Adhami: Welcome everybody to the digital voice of the customer podcast series by AGENT511. I am Raed Adhami, Co-Founder and CTO of AGENT511, a leading customer engagement software firm, which for over a decade has been serving municipal government, major utilities, financial institutions, and Fortune 100 brands.

Today we bring you the second part of our latest podcast titled CTO Chat for 2021: Technology Outlook. 2020 was defined by COVID-19, as will at least the rest of 2021. So with this episode, we want to focus on the top technology trends that have emerged in an effort to differentiate and combat the digital noise and the shift in societal trends, particularly as they relate to technology. I'm really excited to be joined again by Jason Perry, CTO of Mindgrub Technologies. Mindgrub is a full-service agency and consultancy based in Baltimore, Maryland.

They specialize in digital marketing strategy, design, and engineering. Mindgrub, which partners with AGENT511, has been around since 2002 and has since developed award-winning mobile, web, and marketing experiences for global leaders such as Exelon, NASA, and Wendy's.

Jason, good to see you again. We're both on video and audio. I believe the podcast will be on audio, but as we discussed in our previous one, it's great to have those visual cues, and that empathy as we actually walk through it.

So one of the things I want to ask you about, Jason, is this: as we see users increasingly turning to mobile and tablet devices as the platforms of choice, and we see an explosion of usage, what are some of the mobile features that you're most excited about for 2021? Whether it's on phones, tablets, networks, et cetera, that enable this type of experience.

Jason Perry: Even with the pandemic, it's been a really exciting tech year, especially on the Apple side: the release of Apple’s own silicon, but also the iPhone 12 and the new release of iOS 14.

There are a lot of really cool things that come from it. Some of the things that I find really interesting right now, of course, are widgets and app clips. I haven't had a chance to play too much with these yet, but the idea of an app clip is that by scanning a QR code or something like that, it can download a light version of the application that only has limited functionality.

And if you've experienced the world like I have, one of the things you've probably noticed is that QR codes are kind of the new way of distributing menus in a lot of restaurants. Especially if you're eating outside, instead of getting a menu, there's potentially a QR code sitting on the table.

So the idea that you can now scan that QR code and almost instantly have a limited portion of an application load, with a certain functionality, I think is really cool. It’s something that Android has had the ability to do, using progressive web apps. So it's not new in the Android world.

I'm a big Apple guy and a big iOS guy, but my Director of Mobile, Kenny, is an Android guy and he always likes to make sure that he points at me and lets me know this functionality existed beforehand. So I'll give–

Raed Adhami: They love to do that. That's for sure.

Jason Perry: Figured I'd give them, just throw them a little bone.

Raed Adhami: So how is that different? How is an app, let’s say, for a menu different if I load it on a QR code versus what would be a more traditional use of a QR code, which is loading a URL for a web-based menu? Maybe give us an example.

Jason Perry: We both do work in the utility sector, so let's imagine a case where your local utility sends you a bill and you open the bill at this moment. For whatever reason, you haven't actually gone and downloaded their mobile application. The bill has a QR code. You scan the QR code and immediately a light sliver of the app loads in that gives you the ability to do a guest checkout process - to go ahead and pay your bill right there without actually downloading the app.

But then it gives you the option to continue on, and in this case, if you're an Apple user you can use something like Apple Pay. So instead of having to put in your credit card information, you could scan the bill, the app loads, use FaceID or touch your finger, and pay your bill all in one action without necessarily having the app already there.

Raed Adhami: That's incredible. And so you get essentially ephemeral application-like functionality for the moment that you need it to do things that you normally wouldn't be able to do on the web. You don't have to log in, sign in, do this, and you potentially could use your payment whether it's Apple Pay or Android pay – is it called Android pay?

Jason Perry: There’s Samsung Pay and there’s Google Pay, right?

Raed Adhami: Okay, whatever the payment framework is for Android and, I mean that changes. I can imagine that being used as hotel room keys and checking out and all these things. Yeah.

Jason Perry: There's also a lot of throwaway apps that are out there, and I mean this in a positive way.

The perfect example: you're in a national park. You want the maps, you want all the information, all of the functionality there in an app. You want it for the experience, but you're not necessarily looking to keep an app on your phone for the next five years. It's a one-time experience. I think in our last podcast, we were talking about taking the kids to Rushmore for class, right?

You’re at Rushmore and you just want to scan a code or tap a beacon. And there's a light enough version of the app that you can see maps, you can see times, you can see where to go, you can see all the information.

And then once you're done, it's done. This is disposable, right?

Raed Adhami: Right. And with the appropriate privacy and security framework, it becomes a frictionless experience for the user. I can see how that's really something.

Jason Perry: I think that the part of mobile that's always been very interesting to me is the ability to take advantage of where you physically are. And I know that a lot of innovation that's happening with Google Maps and Apple Maps, and also in this case of location and QR codes and beacons, is really about understanding as much about where you're located and being able to use that information to contextually provide you with whatever data is necessary. I think that this is just a step in that evolution of using information that's around you. 

Raed Adhami: Yeah, and the other thing I'm excited about is the 5G rollout nationwide. You've got the incredible Apple silicon that you were talking about, which really changes the amount of processing power available on those devices. And then you're getting true depth in your cameras because there are multiple lenses on every phone.

So the amount of AR - augmented reality - that can happen based on app clips - let’s go back to the Mount Rushmore example. I'm at Mount Rushmore. I scan something and now I point my phone at it and I can get this highly interactive experience to see what's going on after I experienced it in person. I can learn more about it.

So I'm not entirely sure how it's going to change things because it's hard to predict, but it's definitely a different paradigm than what we had before.

Jason Perry: I definitely want to talk about Silicon - but one of the things you mentioned too that's new with this standard now is LIDAR as part of the phone. I think all the Pro models now on the Apple side have LIDAR.

Before, with augmented reality, the difficulty was that you were doing all this depth sensing by guessing where things are. You're trying to use what's essentially a flat image to guess the distance I'm standing from, say, the stairs of my house. LIDAR allows real-time measuring by sending out a laser. It can actually figure out what the distance is, which enables that depth field and makes the ability to create augmented worlds so much more amazing.

We have a few clients that we’re developing AR mobile apps for, and the difference between AR on a regular phone versus AR on a phone with LIDAR – it's night and day.

I know that we're going to see glasses or other AR devices in the future, but the idea that, who knows, in the near future, you may be able to walk into a park surrounding Mount Rushmore and just stare at a code and have a mini-app load right in front of your eyes that shows you all the information and things that are around you – AR is super exciting. I think we're just starting to see the beginnings of what's possible with the releases that are happening right now on both the Android and the iOS side.

Raed Adhami: Absolutely. And I think to add to that, one of the things that will begin to happen is you start realizing that because of the ability to have just-in-time information, you can literally start reconfiguring the way you use your own brain away from information retention and more towards information processing.

You don't need to remember anything anymore. You just need to know how to interpret it. You need to know how to analyze it. So I walk into a place and I use AR and it tells me a whole bunch of things about it. And now I have all that information available to me through other ways, too.

And it kind of changes the mindset of thinking. We used to think of ourselves kind of like a system on a chip. We remember things, we analyze things, we keep things in short-term memory, and then we have things in long-term memory. We have basic principles for operating. And I think having an enormous amount of data or memory sitting in your pocket – or generated for you in real time, contextual, knowing what you know and what you don't know, presented in a certain way – really decreases the value of remembering and increases the value of your ability to analyze. That's the idea a lot of these AI models, or at least some of their principles, are built on.

Jason Perry: It's like we went from a realm where it was incredibly hard to get access to information – you were lucky if you had encyclopedias at home – to a world where there are endless amounts of information, but it's still hard to decipher and cut through the noise. And now to a world where, through contextual cues, we can use technology and computers, things like AI, that can actually start to understand enough to break these things down to the bits that really matter for us – to deliver the things you need, the things that are important to you, the things that will help you right now.

It's like the difference between before asking the computer, “What do I need to know?” to now the computer is like, “Let me look at where you are, look at what you're doing. Let me look at your calendar. Let me look at all this information, and look at the person you last messaged. Let me take all this contextual information and analyze it and suggest what I think the things are that are important to you based on what you've done.”

I mean, I don't know if you get it from your phone or on other devices, but one of the crazy things is when it's like, “You know what? This is usually the time you leave to go pick up your kids.” I still get those alerts, and sometimes it's like, “Yeah, that's right. That's what I should be doing right now.”

Raed Adhami: So let’s talk about AI, which I find very exciting and transformational. Probably closest to our area is how it impacts customer service, and the ability to use this kind of technology for utilities and government organizations and companies to provide their customer service in a new way. Have you looked into that at all? AI, natural language processing - What are your thoughts on where that stands now?

Jason Perry: I think AI and chatbots – robots we can communicate with – are getting incredibly realistic. At that first tier of communication, they're doing an amazing job of parsing information and beginning to respond and direct people to where they need to go.

I'm going to take a big circle here, so I apologize for this, but for home insurance, I use a product called Lemonade. If you haven't heard of it, Lemonade is built around the idea of using AI and machine learning to help you reduce the cost of your home insurance. A lot of your interfacing is with a chatbot that you communicate with, that answers your questions and essentially handles your quoting process.

If you have a claim, you talk to the bot. And I believe there was a case where they were able to get money into someone's bank account less than a minute after the claim came in, using artificial intelligence to determine, based on different trends, whether or not the claim was legitimate.

Like, that's the kind of stuff that's possible now. And when we talk with our utility clients, or really any clients in the customer service world, the thing I keep stressing is that we’ve moved from this world where you work with a mobile app, or you make a phone call, or you use a website, or you talk to Siri or Google Assistant, to this idea that we're looking for an omnipresent source. When I communicate, I'm talking to you, the brand, and I don't care what mechanism I'm using. You should remember all the communication that has happened across all of these mechanisms. If I'm talking to a person or a bot in customer service, they should know about what I just did in the mobile app. And if I'm on the website, the website should know what I'm doing on the mobile app right now, at the same time.

And so I think AI is going to bridge that gap. It's going to make it easier for those communication mechanisms. Be it things that start as text messages on my phone, or move to chat on the website that then move to a physical conversation... It's going to allow those things to start to become one consistent conversation that's happening, and that allows me to move across all my mechanisms.

Raed Adhami: That's an excellent point. I think that's the big struggle. There are few things that are more irritating than calling for customer service and answering six questions to help you identify yourself and then being transferred somewhere else, and then you have to answer those questions again. It's like there’s gotta be a way. And I understand what they're doing. They’re, in a sense, protecting the consumer. You don't want people to make changes on your behalf.

But I do have an interesting question for you. This is a little bit of research that I did for this podcast: when would you say the first chatbot was developed? Just give me kind of a range. Give me, I dunno, like a couple years range or however you want to answer.

Jason Perry: I'm going to go with late seventies to early eighties.

Raed Adhami: Well, you went low, and you were a lot closer than I thought you'd be. It's actually 1966. 1966 was the Eliza project at MIT, and it was the first real attempt at natural language processing. Natural language processing, as a term, falls within narrow AI.

Let me back up for a second. Narrow AI is a specific kind of artificial intelligence that's applied to solve specific problems. So while the narrow AI system that you're interacting with seems intelligent, it's not generally intelligent. It is trained to do certain things within its area. So customer-service NLP, for example, would be AI that can understand language, even patterns that it has not seen before. It can kind of deduce them through deep learning, but it's just language. It can't go drive a car. So that's kind of where it started.

And these transcripts are really entertaining. The first conversations with Eliza and someone saying, “I'm here, how are you feeling?” “Oh, I'm unhappy.” And then the answer was something like, “Well, do you think talking to me would make you happy?” Things like that.
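For readers curious what Eliza-era natural language processing actually looked like, its rule-based approach can be sketched in a few lines of Python. This is a hypothetical, heavily simplified illustration of 1960s-style pattern matching, not the original MIT program:

```python
import re

# A tiny Eliza-style responder: ordered (pattern, template) rules.
# The first rule whose pattern matches the whole utterance wins,
# and any captured text is spliced into the response template.
RULES = [
    (r"i am (.*)", "Why do you say you are {0}?"),
    (r"i feel (.*)", "Do you often feel {0}?"),
    (r".*\bmother\b.*", "Tell me more about your family."),
]
DEFAULT = "Please tell me more."

def respond(utterance: str) -> str:
    # Normalize: lowercase and strip trailing punctuation.
    text = utterance.lower().strip().rstrip(".!?")
    for pattern, template in RULES:
        match = re.fullmatch(pattern, text)
        if match:
            return template.format(*match.groups())
    return DEFAULT
```

The real Eliza added keyword ranking and pronoun reflection (swapping "my" for "your", and so on), but the core idea is the same: no understanding, just patterns and fill-in-the-blank templates, which is why its conversations read the way Raed describes.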

So it started way back then, and of course, since then, there have been massive advances. I think one of the mind-blowing stats I saw was that from 2016 to 2019, there was a 3,000% increase in the number of bots on Facebook. It went from 11,000 to 300,000. I think China is also a leading user, because the population density creates a really favorable ROI.

Anything from general customer service to ordering pizza – all that stuff is done on bots. And it is the experience you're talking about. I pick it up and say, “Hey, I want an extra-large with pepperoni,” and the response is, “We'll be there in 20 minutes.” I don't need to say who I am. I don't need to give them a credit card number. They have all that. They just send it. And that's sort of the dream, if we can get there, but it's not always done well.

Jason Perry: By the way, I went low because I figured it was a trick question. So I figured I had to go to the early years.

But sometimes I think we take for granted how many things we use on a daily basis that already have AI, bots, and technology like this baked in. I mean, there are things like search, of course. We're talking about natural language processing and understanding the relationships between words – there's so much that goes into that.

There's so much AI that we’re now at a point where we have things like Google Assistant and Siri that are taking our voice, converting it to text, and then trying to figure out what we mean. It's just a crazy realm.

I bought some drones recently, and I've been playing with them, flying them around the neighborhood. I flew one a few blocks away, and there's a button that's like, “Come home,” and you just have to press the button. The thing flies back the way it came and lands right in my backyard. It's like, it's insane that this little thing has enough processing power to figure all this out and use this information in this way.

Raed Adhami: And it was delivered to you the next day from Amazon?

Jason Perry: Yeah, it was delivered with two-hour shipping.

Raed Adhami: We got some Same Day, which is always just, I dunno, it just seems excessive to me. It's like, no, it's okay. Tomorrow is fine.

Jason Perry: I know you're going to ask something, but I was at a GameStop. This was pre-pandemic, when you could actually go to stores, and I picked up a video game and went to buy it.

Just out of curiosity, I looked on Amazon, and it was cheaper on Amazon. So I bought it with two-hour delivery, and it was there by the time I got home. And it's just like, this is ridiculous. Like, why would I? No.

Raed Adhami: Yeah, it's tough. It's tough to compete with. And while the platforms themselves have gotten really good, I feel like there's a design ethic to chatbots – and there's actually a good amount of research on this – that has not been internalized.

With a lot of the chatbots I see, people seem to think, or the implementation seems to imply, that as long as the bot understands what people are saying, that's enough. But there are a couple of problems with that.

Jason Perry: I think there are two things. One is the fact that the really good chatbots put the really bad ones to shame. There's a level of comfort and expectation that builds as we start to use these things more regularly – a bar that you have to at least meet. And there are a lot of companies not meeting that bar, which makes it look as if this technology is not quite as ready for prime time as it is.

The second thing – I had a really long conversation with a friend around UX for language processing, UX around language. We think about how things look visually, but it's rare that we think about how people talk to technology and how it responds, or how people write, and the certain tics and things that you have to look for.

I don't know if you remember it – I was at Google I/O for the presentation where they launched the feature in Android that calls restaurants for you and makes a reservation by talking to a real person. And it would even do things like add in the “mmhmm”s and the pausing noises, things like that, that give the feeling. It's one thing to give back information. It's another thing to make it feel like you're talking to someone.

Raed Adhami: You're absolutely right. And you maybe got to that organically, but that's actually a well-studied concept. It's referred to as anthropomorphic design cues, or ADCs.

And what it is is – so anthropomorphism is just something that isn't human feeling human, right? The attribution of human-like qualities to a non-human entity, like, in this case, a chatbot. ADCs really increase compliance when the chatbot is viewed as a social actor by the user. I mean, we all know that this isn't a person.

Actually, research has shown that it is a mistake for a company to pretend that their chatbot is a human, even if it might look like a human or come pretty close in how it formulates sentences and how it understands you. You want to be clear about that. But adding ADCs to your bot makes an enormous difference in compliance.

There's a bunch of data on this. There's a 2020 study done in Denmark that showed that the baseline compliance rate for a chatbot is in the low sixties, percentage-wise. If you just add ADCs, it becomes 84%.

And ADCs can be very simple, right? They can be verbal. They can be non-verbal. So a verbal ADC is like the perception of intelligence. So when you talk to this thing, it talks back to you, and you can tell that it gets what you're saying. You're like, well, okay. You kind of have a sense of respect for it, like this thing could help me.

But there are also non-verbal ADCs: animations, facial expressions, giving it a name, things like that. That just jumps your compliance rate – it increases it from the low sixties to over 80%. So that's one mistake I see happen.

There are really three kinds of design pitfalls for these chatbots. The first is a lack of ADCs. The second is the rollout. Studies have actually shown that if you shift completely to self-service, particularly at the beginning of a customer relationship, it doesn't work.

So there seems to be a model where, if you're establishing a customer relationship, you don't want it to be fully automated – at least for the more complicated things. Obviously, if you're just buying something on Amazon, or it's a very simple bot that answers a couple of questions, you can fully automate that. But beyond that, it's a mistake.

You start with that, and then slowly, as the relationship deepens, you introduce more and more technology. So there's kind of a slow roll.

And the third one, which I thought was really interesting, is what the study referred to as the foot-in-the-door technique. You may have heard of it. It’s a very common way – a very powerful way, actually – to get people to do things.

So this is like: I ask you for something very small, and because you perceive me as a social actor and perceive yourself as a good person, you'll do that for me. And then I'll ask you for something a little bit bigger, and a little bit bigger, and eventually I'll make my biggest ask.

So one example where that happens is when you go buy a car. “Hey, would you like a cup of coffee?” “Sure.” “Would you like to sit down?” “Sure.” “Would you like to take a test drive?” “Sure.” “Would you like to just look at some numbers?” “Sure. We could do that.”

If you just walked in and they said, “Hey, you want to buy a car today? Come on, let's look at the numbers,” it wouldn't work the same way.

So the similar concept works for chatbots. You start with very low interaction to establish the credibility of this thing. And then you can ask for more things.

So if you take those three things and do them right as you're designing a chatbot, you're looking at reducing the approximately $1.3 trillion that companies spend every year on customer service inquiries. Study after study shows you could reduce that by maybe 30% a year, and that properly designed chatbots could take care of 80% of all routine interactions.

Jason Perry: I believe it. I mean, I bet if we looked at the types of interactions that we have on a day-to-day basis, they're simple little things. The truly complex interactions with the brands we deal with – the ones that really need human intervention – are probably very rare.

Raed Adhami: Yeah. I mean, with Mindgrub, think about building brand new screens for brand new users – just traditional UI development. You've got a process. You've got designers, you've got UI folks, you've got UX folks. You've got visibility into all of that.

The same amount of design needs to go into chatbots because that is your new interface.

Jason Perry: It has to.

Raed Adhami: It probably requires different modeling tools, because you’re kind of modeling more of a tree of work.

Where could the user go in terms of their requests? And all of that: the way you wireframe it, conceptualize it, and implement it. But the tools are completely different. And just because it's a simpler UI does not mean it requires less design expertise – it actually requires more design.
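The "tree of work" described above can be sketched as a minimal data structure. This is a hypothetical Python illustration, not any particular chatbot platform; real tools layer intent recognition, slot-filling, and fallback handling on top of a structure like this:

```python
from dataclasses import dataclass, field

# A minimal model of a chatbot conversation tree: each node carries
# the bot's prompt plus the user intents that branch from it.
@dataclass
class Node:
    prompt: str
    branches: dict = field(default_factory=dict)

tree = Node(
    "How can I help you today?",
    {
        "pay bill": Node(
            "Would you like to pay the full balance?",
            {
                "yes": Node("Payment scheduled. Anything else?"),
                "no": Node("How much would you like to pay?"),
            },
        ),
        "report outage": Node("What's your service address?"),
    },
)

def walk(node: Node, *intents: str) -> str:
    """Follow a sequence of user intents down the tree, returning the final prompt."""
    for intent in intents:
        # An unrecognized intent simply re-prompts at the current node.
        node = node.branches.get(intent, node)
    return node.prompt
```

Here `walk(tree, "pay bill", "yes")` follows the bill-payment branch to its confirmation prompt. Wireframing a chatbot means designing every path through a tree like this, including what happens off-script, which is why the design effort is greater than the simple UI suggests.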

Jason Perry: I think that's a common misconception: that because something is simple, it's quick and easy to design and build. But making things simple takes a lot, and it also means there are things that didn’t make the cut – all the things that had to be removed or rethought to make the interface work.

I think there's a lot of that in chatbots and communication: not only the natural language processing and understanding, but making sure that the interaction feels, like you were saying, somewhat more genuine or relatable.

It's like the difference between a kid playing with a toy versus a dog. One doesn't react, the other one does, and the kid responds to that reaction. It’s the same thing you're looking for: that sense of something that you don't want to hurt, in a way.

Raed Adhami: Because it's anthropomorphic. You think it has feelings, and then you start behaving a certain way.

There's a lot of evidence showing that with the way the initial chatbots were built, the average interaction was longer and more arduous for the customer, and the use of profanity by the customer was higher, because they're frustrated – this thing just seems like a stupid program, so you say whatever. I mean, I've caught myself telling my kid to say “please” to Alexa. It's like, you can't just be rude to Alexa. You have to say please.

To the folks listening out there, if you're thinking about doing this: don't buy the hype that just any bot works. The technology is incredible, and it can work when designed properly. But if you want compliance – if you want people to stick with your chatbot and actually save that call to the call center – look into incorporating ADCs, and look into slowly increasing the complexity of requests.

Don't just come in and say, “You're now fully automated and you will always be going through this chatbot.” Roll it out slowly – not at the very beginning of your customer journey, but make a reference to it and increase its role over time, so that you get that extra compliance. The potential is huge.

I mean, it's twenty-four seven. It's multi-lingual. It's multi-platform. It's got authentication built in. If done properly, it can really put a dent in the customer service problems that we see.

Jason Perry: I think there's also a lot to be said about timing. One of the struggles during this pandemic is that a lot of customer care centers have obviously had to drastically reduce their size or figure out ways to work remotely, and the availability windows, especially to talk to a human, are just smaller.

I feel like I've had to interact – either by requirement or because it's the easiest way – through chatbots or some other mechanism with a lot of the brands I deal with. Especially between eight and five, I stay pretty busy with meetings, and if something happens at six, sometimes that's the only interaction I can have.

When something is built right and gives me the responses that help me get to a solution, it's amazing. It's a very different experience.

Raed Adhami: Absolutely. Absolutely. Okay. Well, I think we've hit our time here. Thank you so much, Jason, for being here. This was a lot of fun and educational for me. And thanks to everybody for listening.

For more information on these topics, please reach out to AGENT511 and Mindgrub Technologies. We'd be happy to work with you, or discuss various philosophical and technical issues that you may be thinking about.

And the Digital Voice of the Customer podcast series by AGENT511 will be back soon with more content. Until then, bye-bye. Thank you, Jason, and thank you to the audience for listening.


Stay up to date with the latest technology insights. Subscribe to our quarterly newsletter.