Authenticity and ‘AI’
I was in a training program five or six years ago with a group of other advisors, all with different tenures. The training had to do with communication: verbal, non-verbal, written, unwritten. It was hilarious listening to how differently we all communicated.
There were some of the most articulate, professional-sounding people I’ve ever heard in that group. They spoke like they were old-school news anchors.
There were also some people whose mastery of non-verbal communication was at another level. Their body language matched their tonality, and their use of silence between sentences was perfect.
Then there was one person who could summarize three paragraphs into a three-sentence analogy without losing an ounce of context. That person sounded just like Derek Thompson (look him up if you’re not sure).
There were a ton of gifted communicators, but plenty of bad ones too. I was in that latter group.
I still have my feedback reports to this day.
- “Yells instead of speaks,” said one person. I still speak loudly. I’m the youngest of three; how else can I get my point across?
- Another person wrote, “Overuses bullet points in written communications.” If you’ve seen a follow-up email from me, you know this is still true. I believe blocks of prose are disrespectful in corporate communications.
- There was also a comment that said, “Contributes little in group conversations.” An old mentor helped me with this one (thanks, SP). If I agree with what everyone else is saying, why do I need to pipe up? I can just nod.
Then there was this one: “Communicates in an informal, bordering on unprofessional, manner. He’s trying to be authentic but he’s pushing it.” That comment irked me, and it still does. But with the introduction of ‘AI’ to the world of communication, maybe that “bordering on unprofessional” style has aged better than expected.
There’s a new meme across the internet: the “Bro wrote this with ChatGPT” meme. It basically means, “They didn’t put much effort into this, and probably don’t care.” I’m sure ‘AI’ will get better and better, maybe even so good we can’t notice it (the way we do now, with em dashes everywhere and half-hearted witticisms as sign-offs), but does that really mean the writer (or prompter) cared?
This goes back to what I wrote in this post. Imagine receiving a market update from your advisor right after the market tanks 25%, and it reads exactly like every other advisor’s. Where is the authenticity? And how important is the authenticity? I know I’d want an update written by my advisor, not a robot.
It’s already reached a point where I’m certain entire businesses will be created solely to make ‘AI’ generated content look and feel original. Because right now, it’s not.*
I communicate a certain way. The market takes it on the chin; it doesn’t “drop by 5% intraday.” Dough is a fine term for money. If a client concern sounds like it could lead to a potentially catastrophic financial decision, I just say, “That’s unwise” and explain why. Sometimes I might even say, “That sounds like a dumb idea.”
I would much rather communicate authentically than like everyone else. It’s why this website reads the way it does, and it’s why I write on this page regularly. That matters even more now that the “Bro wrote this with ChatGPT” meme exists.
Authenticity may well be another word for “he actually wrote this.” And maybe that’s worth more than inauthentic polish.
Then again, maybe I should remember what that mentor told me, and just keep my mouth shut.
*Quick aside: I’m told customer service will be one of the first jobs replaced by ‘AI.’ Tell me, when you call a customer service line today, and the robot answers, do you just keep hitting zero or shouting “agent” until you get a human on the phone? Of course you do. Because humans are authentic, and humans can bend (but not break) the rules. Talking to a robot is the equivalent of talking to a policy and procedure manual.