Artificial intelligence (AI) chatbots have entered the public eye in recent times, drawing both scrutiny and favor. ChatGPT in particular raised awareness after its release by OpenAI. Countries have responded in a variety of ways, with many threatening to ban it and Italy already doing so. That raises the question of what the harm is and to what extent others should be allowed to restrict individuals' use of AI.
Paid software providing services similar to ChatGPT has been around for a while for content creation, especially online. One of the unique aspects of ChatGPT is that it is available to the public for free. People could be exposed to articles written entirely by AI without being aware of it, which makes them susceptible to fake news in a world where it is already hard to distinguish news from propaganda.
Controversy has followed software of this nature, and this program is no exception. Its answers are drawn from online databases and sources, and for that reason its content amounts to plagiarism, as with other AI-made products. For those who may be using it to write their essays, the answers are often wrong or heavily biased toward one side of an argument.
First-year English education major Ben Armitage had this to say about the use of software like ChatGPT: “If a student or writer were to use AI software to generate content and then pass it off as their own work without acknowledging the source, they would be committing plagiarism. This is because the content generated by the AI software is not their own original work, but rather a product of the software’s algorithms and pre-existing data. Furthermore, plagiarism is not only a violation of ethical standards, but it is also considered academic misconduct and can have serious consequences. In academic settings, plagiarism can result in a failing grade, suspension, or even expulsion from school. In professional settings, it can lead to a loss of reputation and credibility.”
Still, not everyone sees it as an absolute wrong, given the complexity of the issue. Etown English professor Dr. Tara Moore offers AI software as an option for certain sections of an assignment in her web-writing class because her students may come across it in their future careers. Students can use it to write their headings, but not the content itself. Interestingly, use of the software was optional, and most students chose not to use it. Since the project only started this semester, long-term trends are not yet clear, but it may suggest that students who intend to pursue careers in writing prefer to present only their original work.
Moore described the implications for future writing: “There is not a clear answer yet. We do not know everything about the software yet. We are not comfortable with students replacing the work of their cognitive growth by relying on artificial intelligence.”
In addition, Moore brought up an interesting point: when Wikipedia came out, professors had a similar fear, but over time that fear subsided and its use became a matter of policy. Students who leaned on Wikipedia for their essays were easy to spot, and they had to find other approaches to their writing. In time, professors may likewise have to decide to what extent, if any, they want their students to use AI software in their writing and adapt to the times.
With all the speculation surrounding AI and a world changing around it, much is up in the air when it comes to policy in schools, the workforce and even the country. Students should always be wary of plagiarizing, put their best foot forward when submitting work and keep up with any AI policies that may arrive in the near future.