This is probably not the post you would expect on Halloween. My defense, however, is that AI is a bit of a scary topic to broach in this day and age.
I mostly mean this as a joke, but in all seriousness, using AI makes me…uneasy, despite the fact that I am part of the generation that should be embracing it.
As part of our work at the History, Philosophy, and Newspaper Library, staff and graduate assistants often lead classes on conducting historical research. In my first year of instruction, I generally stayed away from the topic of AI. Now that it has become obvious to me that more and more students are using AI in a variety of ways for research, it is increasingly important that we address it as part of the research process.
Before I jump into the specific issue of using AI for historical research, I would like to suggest reading my previous post on the essay collection entitled AI and (Mis)Information from our new book collection so you can better understand where some of my information and opinions may come from.

Now, jumping straight in: as part of a class introducing secondary sources this semester, I asked ChatGPT to generate a list of sources and topic overviews to compare with actual bibliographies and encyclopedias. To my general surprise, the information it generated was… fine. In this case, “fine” mostly means that it avoided the major problem of citing sources that do not exist. I still made sure to mention fabricated sources as a possibility to the students, but ultimately it wasn’t something I had to worry about here. The biggest issue ended up being the lack of breadth in the results. My prompt asked ChatGPT to create a list of sources and a summary covering abolitionism from 1760 to 1815. While it did provide sources on that topic, the students noted that they focused primarily on abolitionists in Europe and the North American colonies. The class did cover Europe and North America, but the students were also being encouraged to look into Central America and the Caribbean.
I certainly could have tailored my question to ask about those specific areas, but I hadn’t even noticed the issue until the students brought it to my attention, since the topic is not one of my subject specialties. This is concerning because, had I not looked beyond what AI was showing me, I would have kept going down a very narrow line of thinking. Even though asking further questions would have gotten me to a wider range of resources, I probably wouldn’t have realized what I needed to ask about based solely on what I already had from ChatGPT. At the end of the day, AI will give you exactly what you ask for; it won’t do any of the synthesizing or broadening for you. Well… it won’t do a good job of it, at the very least.

This lack of breadth stems from a larger issue: there are many sources that AI is not allowed to comb through in order to build its knowledge base. Most of the major AI systems that students have been known to use, such as ChatGPT, rely on machine learning. These systems are trained on a “databank” that they then draw on to answer questions, and many of them use information freely available on the internet as that databank. When it comes to academic research, quite a bit of it is held in repositories that are not freely available. This likely means that the information AI provides you is not based on academic research. I can’t fully confirm that assumption beyond what I have already said about AI being unable to get past paywalls, but I would always err on the side of caution when it comes to the sourcing of your information.
Regardless, this means that there is a wealth of information accessible through academic libraries that you will miss if you use AI as your primary search tool. That makes it all the more important for history students to keep interacting with their librarians and library websites to get access to the best possible range of information.

Along with materials that may be behind paywalls, there are also hundreds of thousands of materials that have simply never been digitized. Primary sources (sources produced during the time of the event being studied, usually firsthand accounts) are often the backbone of historical research. However, it is extremely difficult for an institution to get everything it holds digitized and made machine readable. For instance, HPNL holds most, if not all, of the University’s microfilm collections, and while there are efforts to get certain titles digitized, the majority of items have yet to be put online. We still get hundreds of requests per year for reels of microfilm, and I know that our University Archives also gets a large number of requests for items that have not been digitized. Given that, AI simply cannot provide you with an extensive look into the primary sources that are essential to developing new research and ideas.
Now, despite all of that, I am not saying that you should never use AI for research. It would be naive of me to say anything like that considering the rate at which AI is becoming more central to our lives.
What I AM saying is that there are right ways and wrong ways.
AI can be a helpful tool for a starting search, but as it stands it cannot do the bulk of the research for you. AI systems such as the one being beta-tested by JSTOR are a great example of this. That system will answer questions you ask about the content of the piece you are reading while pointing you to the passage in the text it is drawing from. It will also suggest related works you can review. However, when asked to give an overview of a general topic, it quickly becomes clear that it cannot write you an essay or a book review. These types of systems will help give you an overview of the work you are looking at, but they will not come to conclusions about the wider scope of the topic. Even a system with a wider scope, such as our friend ChatGPT, can be a good tool for helping you understand general terms or topics you keep seeing in your searches.
The AI skeptic in me still wants to encourage students to read the piece for themselves, but I know that if I had had this resource available to me as an undergraduate, I would absolutely have used it, no question.
All in all, I think the key is to make sure that you are still being critical about what you are reading. You cannot take what AI gives you at face value, just as you can’t necessarily take anything in a book at face value. You also have to keep trying to look beyond the information placed in front of you. Injecting a layer of your own ideas and synthesis over past events is the point of historical research, so make sure they are YOUR ideas so you can continue to be successful moving forward in this profession. As with anything in the machine world, the tool is not usually the problem; it is how people use it.