A Balancing Act in the Age of AI: The Discomfort of the Grey Area of Authenticity

Practice what you preach

Across the myriad keynotes, workshops, discussions and other work I do both in education and beyond, no one can ever question the heart and soul I put into what I deliver. It always amazes me how much some people underestimate the hours and, often, days it can take to construct a presentation, craft a session or create a programme. That does not even account for the months spent curating content and thinking about what role it may play going forward. Likewise, I am proud of the honesty and forthrightness of the content and discourse I offer to the AI and education community online. Yet I do not hide the fact that I use chatbots to assist my writing, AI image generators to create images, voice-to-text programs, tools like Notion to curate content, and many other AI and non-AI tools. I stand by this slide that I put up in most keynotes I deliver:

Recent keynote slide

The grey area

Despite all this, I think it is important to share the following:

I admit that I have a certain degree of unease and uncertainty in working with AI

I don't know how much is too much. I am not sure when I have gone too far, when a piece of my work becomes too altered, too infiltrated by a chatbot; when my work loses its authenticity.

Aren't these the same issues playing out around us in education for students, in businesses, and in personal communications? Clearly, some people have fewer concerns than others, but I am positive that the majority of people experience similar inquietudes.

I particularly like this graphic by @jmattmiller on Twitter, which highlights the issues in education around responsible use of AI:

I don't agree with some of this model, but what it highlights is that anything away from the extremities of the continuum (i.e. the first point and the last point on the right of the diagram) falls into a grey area of authenticity. It is difficult to establish a consensus as to whether these uses of AI are acceptable or not.

Boundary pushing with use of AI

Lately, I have been using chatbots more extensively in the posts and articles I write. I have been trying to establish an improved workflow that harnesses the power of these AI tools to lessen the demands of sharing and also to augment my contributions. In my last article, The Crucial Triad in the Era of AI Disruption: Identity, Authenticity and Sharing, it was interesting to see the following comments:

Bridget Pearce - 'How much of this article was written by ChatGPT?', 'I found myself parsing the text trying to identity the real human thoughts and voice behind the GAI.', 'I could be wrong but that sounds like classic GPT to me. I’d be more interested to hear how YOU would describe how you came to be who you are. I’d like to see you, Nick, in there. You seem like an interesting guy who has achieved heaps. I would love to hear about it from the man himself. I think your voice is lost here.',

In other words, for Bridget, I had gone too far into the grey area and lost some authenticity. Rather than be offended by these comments, I embraced them; in many ways, they initiated this article. The discussion Bridget and I had in the comments led to this:

It’s a new world and I am certainly grappling with how to strike the right balance when copiloting with technology myself.

That cements my sense that many of us are dealing with a balancing act in the age of AI, and we will not always get it right, or right in some people's eyes. Conversely, Vince W. commented: 'I really love the reflections here', which shows that there are others with whom the content resonated and for whom its authenticity was not diluted.

Conclusion

As we work through what is acceptable and what is not, what is authentic and what is not, what is too grey and what is an acceptable amount of grey, I do wonder where the parameters will be set. My reply conveyed my feelings about that article:

I think we all are Bridget. Sometimes I think the balance tips one way too much and then other times the opposite for me. I appreciate the insight. There is a lot of my writing in there and definitely the thoughts and origins are my own. I would argue, however, that the piece reflects my journey. The personal insight it gives was not generated by AI.

I would argue that my response highlights that perhaps the focus need not be on the writing itself, but on its foundations, its intent and the message it conveys. Could those be created by a chatbot without the user giving of themselves?

All of this article was written by Dr Nick Jackson with no assistance from any AI, Grammarly or any other tool except the LinkedIn environment and its built-in spell checker.
