With artificial intelligence (AI) on the scene, writers now have many app and chatbot choices for checking grammar, punctuation, and spelling. But when it comes to creating content, most still choose to keep it real and let their own creativity take the lead.
I’ve dipped my toe in the sea of AI-assisted writing and marveled at its ability to swiftly churn out content, help organize thoughts, and compose lists. A remarkable time-saver. But I don’t let it speak for me. I don’t use anything AI produces verbatim, even though my more cleverly crafted prompts may produce higher-quality output.
Instead, I use AI as a sounding board, inviting it to brainstorm ideas. Rather than replacing my writing, I see it as a silent teammate, enhancing my own talent.
Following this overview of AI chatbots as collaborative writing tools, you’ll find my list of AI’s Limitations and Gaps in editing. Copilot volunteered some of these points during a general Bing search.
The top contenders in the chatbot arena are ChatGPT, Gemini, and Claude. My initial practice was with Copilot (formerly Microsoft’s Bing Chat, accessed from the browser’s menu). From 2021 to the end of 2024, OpenAI’s ChatGPT Large Language Model (LLM) was integrated with Copilot. Since the end of 2024, Copilot has ended its alliance with OpenAI and runs on its own. Though Copilot is built into Microsoft’s applications, I’ve conversed with it less and now log directly into ChatGPT using my Google credentials.
Google’s Gemini is powered by the Gemini LLM. Gemini and Copilot are designed to integrate with your data.
Claude (created by Anthropic) is yet another LLM, rolled out in 2023 with free and paid versions; it has recently been upgraded to Claude 3.7 Sonnet. I have conversed with it and find its responses to be very similar to ChatGPT’s. See one comparison of the two.
While writing this blog, a Gemini message popped up on my Android phone, inviting me to engage. I declined for now, as I don’t want AI butting in on my phone activities. I have played with Gemini on the Chrome browser only a little. For more on Gemini, a general comparison of it vs. Copilot and ChatGPT, and key differences between the paid versions, read this Popular Science article.
As of this writing, the Chatbot Arena LLM Leaderboard (user platform) ranks ChatGPT-4o #1, followed by Claude, then Gemini. Elon Musk’s Grok AI currently lags a fair distance behind. Originally, Grok had no free version, but as of February 2025, Grok 3 is free to all X users. There is still a paid "Super Grok" version for X Premium subscribers.
NOTE: Due to privacy and data protection concerns, I have not evaluated China’s DeepSeek AI (it launched for public use in January of 2025).
Small Language Models (SLMs) such as Grammarly, Scribbr, Wordtune, Hemingway, and others can be helpful, but understand this: they don’t catch every problem and often make inappropriate or incorrect suggestions, introducing new errors. This has been my experience and is a consensus among editors. I’ve had fun playing with the free versions of SLM apps and find it interesting when one arrives at slightly different corrective advice than another.
Microsoft Word’s grammar check function is not particularly robust, but it finds most misspelled words. Despite its limitations, Word remains the preferred writing canvas for editors working with authors due to its Track Changes function, which allows for easy tracking of edits and comments.
My late 2024 conversation with Copilot revealed how it treated new content we shared with it and what its capabilities were at that time (while it was still integrated with ChatGPT).
Copilot stated that to avoid plagiarism, it generates responses based on patterns and information it has learned but does not copy and paste from its training data. That data is not cataloged by the AI, so it only cites references for any new information it pulls from the internet. It is designed to generate original content. All the same, it’s a good idea to take a close look at critical or sensitive outputs in academic, professional, or legal contexts where plagiarism could have serious consequences. If you use an LLM for these purposes, it may be wise, at the very least, to run the output through a plagiarism checker (there’s a free option on Scribbr).
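For readers curious about what a plagiarism check actually does, here is a minimal sketch in Python: it flags long word sequences that a piece of text shares with a known source. This is purely illustrative and nothing like how Scribbr or any commercial checker works; real tools compare against vast indexed databases rather than a single source string.

```python
# Toy n-gram overlap check: flag passages that share long word sequences
# with a known source. Illustrative only; commercial plagiarism checkers
# compare against huge indexed databases, not one source string.

def ngrams(text: str, n: int = 5) -> set[tuple[str, ...]]:
    """Return the set of n-word sequences in a text (lowercased, whitespace-split)."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_ratio(candidate: str, source: str, n: int = 5) -> float:
    """Fraction of the candidate's n-grams that also appear in the source."""
    cand = ngrams(candidate, n)
    if not cand:
        return 0.0
    return len(cand & ngrams(source, n)) / len(cand)

if __name__ == "__main__":
    source = "AI generates responses based on patterns it has learned from training data."
    candidate = "The chatbot generates responses based on patterns it has learned, not by copying."
    print(f"5-gram overlap: {overlap_ratio(candidate, source):.0%}")
```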
Credit: CopySoWrite/DALL-E
According to Ethan Mollick in his 2024 book Co-Intelligence: Living and Working with AI, “there is no way to detect whether or not a piece of text is AI-generated.” Don’t bother using an AI detector app: none of them is foolproof, and they often return false positives (pasting text from grammar apps into your work can inadvertently introduce embedded code, which may also trigger false AI detection). Still, there are ways that editors and very astute readers can tell when AI has done some of the writing. The article “13 Ways to Detect AI Content” by Satya D (SEO specialist and Surferseo.com blog contributor) offers some insight.
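To see why detectors are so easy to fool, here is a toy Python sketch of one crude statistical signal some heuristics lean on: “burstiness,” or how much sentence length varies. The assumption (and it is only an assumption) is that human prose mixes short and long sentences more than AI prose does, which is exactly the kind of shaky signal that produces false positives.

```python
# Toy "burstiness" score: variation in sentence length within a text.
# A crude, unreliable signal; one reason AI detectors misfire on human writing.

import re
import statistics

def burstiness(text: str) -> float:
    """Standard deviation of sentence lengths (in words); higher means more varied."""
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths)

sample = ("I tried the chatbot. It answered instantly, in tidy paragraphs of "
          "nearly identical length, every single time. Uncanny. Useful, though.")
print(f"Sentence-length std dev: {burstiness(sample):.1f} words")
```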
What happened during pretraining, or machine learning? AI was fed data from the open web and books (by computer scientists and underpaid entry-level workers), which explains why biases and misinformation could not be completely filtered out. Apparently, no authors were contacted to obtain their permission. Some publishers have since openly allowed LLMs to be trained on their books, while other publishers are careful to negotiate contract clauses with authors and agents spelling out whether such content may be used for LLM training.
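For a rough picture of what “feeding data” to a model looks like, here is a toy sketch in Python (nothing like production-scale training): pretraining slices raw text into context windows paired with the next word, and the model is rewarded for predicting that next word correctly, over and over, across billions of pages.

```python
# Toy view of pretraining data preparation: raw text becomes
# (context, next-word) pairs, and the model learns to predict the next word.
# Real LLMs operate on subword tokens and billions of documents.

text = "AI was fed data from the open web and books"
words = text.split()

# Build (context, next word) training pairs from a sliding window.
context_size = 3
pairs = [
    (words[i:i + context_size], words[i + context_size])
    for i in range(len(words) - context_size)
]

for context, target in pairs:
    print(f"given {context!r} -> predict {target!r}")
```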
Wes Cowley, a fellow freelance editor who spent many years in the IT industry writing and editing technical and business material, is firm about his position never to use LLMs in his writing or editing work. His reasons for this are fully explained on his website.
But wait, there’s more! Rachel Baron reported on AsiaEdit’s extensive testing of generative AI tools for editing work in a recent article for Science Editor. For a simplified version of the main points, read Wes Cowley’s summary.
While AI may not be the perfect editor, its role in writing and learning is evolving rapidly, with implications for both education and the workplace.
Mollick, a Wharton professor specializing in management and innovation, actively incorporates AI tools into student learning. In his aforementioned book (pp. 94–96), he delves into the underlying reasons why LLMs are prone to hallucinate. That said, while some educators ban the use of AI in their students’ coursework, Mollick requires that his students use AI for various assignments. He explains that today, students are already using AI as a learning tool, and many teachers have begun to use AI in preparing for class.
On the future of AI in the classroom, Mollick says: “We’ll find a practical consensus that will allow AI to be integrated into the learning process without compromising the development of critical skills. Just as calculators did not replace the need for learning math, AI will not replace the need for learning to write and think critically . . . we must do so—it’s too late to put the genie back in the bottle.”
Even the free online education platform Khan Academy uses AI tutoring. A 7:20 video explains how chatbots and LLMs work. For the latest in classroom AI-assisted learning, watch this CBS 60 Minutes report on how Khanmigo, an AI-powered learning tool developed by Khan Academy in partnership with OpenAI, is being tested in schools.
Mollick’s research on this topic as explained in his books (some of which was performed in partnership with lead researchers from other universities) is fascinating and easy to grasp for non-tech types. He is certain that learning to work in tandem with LLMs will ultimately be beneficial, and I would add: especially with continued standards for AI safety and security. “Jobs don’t consist of one automatable task, but rather a set of complex tasks that still require human judgment,” he says. The world of work is headed in this direction, whether we like it or not.
Looking for an editor? If you’re interested in working with me, complete my form or email me about your project. I will contact you to discuss your needs.
After many hours of my prompts and OpenAI’s DALL-E iterations, I finally chose this graphic design as my CopySoWrite business logo, and I am very pleased with it. One of the reasons it took so long: AI does not “see” its illustrations as we do. When I really liked an image but wanted adjustments or changes, DALL-E was incapable of editing that specific image, even when the prompt included a re-uploaded copy of it (it may be more advanced now). DALL-E also could not add text within the graphic, so I had to import the image into Canva and insert the CopySoWrite text on the logo.
Additional recommended reading:
Why great AI produces lazy humans, Ethan Mollick, Big Think, March 26, 2024.
How to succeed at the “jagged technological frontier” of AI, Kevin Dickinson, October 23, 2023.
‘Worse’ AI Counterintuitively Enhances Human Decision Making and Performance, Adam Hadhazy, July 5, 2022.