{"id":7923,"date":"2025-04-01T11:27:15","date_gmt":"2025-04-01T16:27:15","guid":{"rendered":"https:\/\/wordpress.library.illinois.edu\/hpnl\/?p=7923"},"modified":"2025-04-01T11:27:15","modified_gmt":"2025-04-01T16:27:15","slug":"ai-and-misinformation-a-new-book-review","status":"publish","type":"post","link":"https:\/\/wordpress.library.illinois.edu\/hpnl\/blog\/ai-and-misinformation-a-new-book-review\/","title":{"rendered":"AI and (Mis)Information: A New Book Review"},"content":{"rendered":"<p><span style=\"font-weight: 400\">AI has been a hot topic around the world lately. And rightfully so. Artificial intelligence is a technological development that we have all heard about and has been rapidly growing for the last decade. It was only a few years ago that my class&#8217;s syllabi started including statements on the use of AI for classes as students were continually caught submitting work they had not completed themselves. Since then, AI has become more and more and more integrated into every part of our lives. Most major search engines have AI built in and you cannot expect to interact with social media without seeing some kind of strange, AI generated content. 
As AI has become an unavoidable part of our day-to-day lives, debates have sprung up in multiple circles about how and when AI should be used.\u00a0<\/span><\/p>\n<figure id=\"attachment_7924\" aria-describedby=\"caption-attachment-7924\" style=\"width: 232px\" class=\"wp-caption alignright\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-7924\" src=\"http:\/\/wordpress.library.illinois.edu\/hpnl\/wp-content\/uploads\/sites\/19\/2025\/04\/misinformation_cover-200x300.jpg\" alt=\"Image of the cover of Truth Seeking in an Age of (Mis)Information Overload.\" width=\"232\" height=\"348\" srcset=\"https:\/\/wordpress.library.illinois.edu\/hpnl\/wp-content\/uploads\/sites\/19\/2025\/04\/misinformation_cover-200x300.jpg 200w, https:\/\/wordpress.library.illinois.edu\/hpnl\/wp-content\/uploads\/sites\/19\/2025\/04\/misinformation_cover.jpg 432w\" sizes=\"auto, (max-width: 232px) 100vw, 232px\" \/><figcaption id=\"caption-attachment-7924\" class=\"wp-caption-text\">Image provided by <a href=\"https:\/\/sunypress.edu\/Books\/T\/Truth-Seeking-in-an-Age-of-Mis-Information-Overload\">SUNY Press<\/a><\/figcaption><\/figure>\n<p><span style=\"font-weight: 400\">As a library and information science student, I have seen how information professionals, whether they are working with seasoned researchers, students, or the public, are watching more and more people rely on AI as a research tool. In many cases, this reliance can erode critical research skills and encourage the spread of misinformation as people increasingly trust the information AI produces. Although I have been warned to expect misinformation spread by AI and have seen it firsthand in the form of fake citations and quotes, I know I am not an authority on the subject. 
So to further inform myself on this issue, I picked up a good ol\u2019 book and got to reading.<\/span><\/p>\n<p><span>For this blog post, I will be engaging primarily with the first part of a new book from our collection, <\/span><i><span>Truth-Seeking in an Age of (Mis)Information Overload <\/span><\/i><span>(2024), entitled \u201cMisinformation and Artificial Intelligence.\u201d This section is composed of two essays: \u201cIt Is Artificial, But Is It Intelligent?\u201d by E. Bruce Pitman and \u201cDisinformation, Power, and the Automation of Judgments: Notes on<\/span>\u00a0the Algorithmic Harms to Democracy\u201d by Ewa P\u0142onowska Ziarek.\u00a0<!--more--><\/p>\n<p>&nbsp;<\/p>\n<figure id=\"attachment_7937\" aria-describedby=\"caption-attachment-7937\" style=\"width: 379px\" class=\"wp-caption alignleft\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-7937\" src=\"http:\/\/wordpress.library.illinois.edu\/hpnl\/wp-content\/uploads\/sites\/19\/2025\/04\/60f6feb4be651f666b46194a_AI-vs-Machine-Learning-vs-Deep-Learning-e1743522783558-1024x1008.jpg\" alt=\"Image explaining the differences between Artificial Intelligence, Machine Learning, and Deep Learning. 
Image is formatted as a small circle for deep learning with the description &quot;Machine learning algorithms with brain-like logical structure of algorithms called artificial neural networks&quot; inside a larger circle for machine learning with the description &quot;Gives computers &quot;the ability to learn without being explicitly programmed&quot;&quot; and a large circle encapsulating both other circles for AI with the description &quot;The theory and development of computer systems able to perform tasks normally requiring human intelligence&quot;\" width=\"379\" height=\"373\" srcset=\"https:\/\/wordpress.library.illinois.edu\/hpnl\/wp-content\/uploads\/sites\/19\/2025\/04\/60f6feb4be651f666b46194a_AI-vs-Machine-Learning-vs-Deep-Learning-e1743522783558-1024x1008.jpg 1024w, https:\/\/wordpress.library.illinois.edu\/hpnl\/wp-content\/uploads\/sites\/19\/2025\/04\/60f6feb4be651f666b46194a_AI-vs-Machine-Learning-vs-Deep-Learning-e1743522783558-300x295.jpg 300w, https:\/\/wordpress.library.illinois.edu\/hpnl\/wp-content\/uploads\/sites\/19\/2025\/04\/60f6feb4be651f666b46194a_AI-vs-Machine-Learning-vs-Deep-Learning-e1743522783558-768x756.jpg 768w, https:\/\/wordpress.library.illinois.edu\/hpnl\/wp-content\/uploads\/sites\/19\/2025\/04\/60f6feb4be651f666b46194a_AI-vs-Machine-Learning-vs-Deep-Learning-e1743522783558.jpg 1222w\" sizes=\"auto, (max-width: 379px) 100vw, 379px\" \/><figcaption id=\"caption-attachment-7937\" class=\"wp-caption-text\">Image from Levity article <a href=\"https:\/\/levity.ai\/blog\/difference-machine-learning-deep-learning\">&#8220;Deep Learning vs. Machine Learning \u2013 What\u2019s The Difference?&#8221;<\/a><\/figcaption><\/figure>\n<p><span style=\"font-weight: 400\">In his article, Pitman discusses two major types of AI systems: machine learning (ML) systems and deep neural network (DNN) systems. 
ML systems are a class of algorithms that \u201clearn\u201d from a training dataset, which they then rely on to answer questions. DNNs aim to emulate the human brain by setting up layers of \u201cneurons\u201d connected in a preconceived geometric pathway, which the system then uses to identify the presence (or absence) of signals or data that would lead it to give a certain answer (Pitman 20). This is my attempt (as a non-computer-science person) to simplify the complex math and ideas that Pitman explains. To learn more about these types of systems, I would suggest reading these essays for yourself or using other sources such as this <\/span><a href=\"https:\/\/www.ibm.com\/think\/topics\/ai-vs-machine-learning-vs-deep-learning-vs-neural-networks\"><span style=\"font-weight: 400\">IBM article<\/span><\/a><span style=\"font-weight: 400\"> to supplement your learning.<\/span><\/p>\n<p><span style=\"font-weight: 400\">After explaining how these systems operate, Pitman evaluates the ability of AI to make unbiased and accurate decisions. Pitman points out that trusting the answers AI provides can be risky, as the system is simply comparing whatever prompt you give it to the training datasets it was provided. Considering this, \u201cAI systems are (often) biased. These deep networks require enormous amounts of data on which to train and, so, very often, these training datasets whether through inattention or a lack of care, are not comprehensive and tend to under-represent minority communities that are already disadvantaged in society\u201d (Pitman 20). Pitman gives the examples of Black people\u2019s faces being mis-recognized by AI at a much higher rate than White people\u2019s and Amazon\u2019s recruiting tool showing a clear bias against applicants who identify as women (20). 
This is troubling on multiple levels, as it shows that AI systems can tend to reinforce untrue viewpoints based on the limited information they have been given by their creators.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400\">Ultimately, AI systems are designed to recognize patterns and give answers based on those patterns, which can lead them to give answers that aren\u2019t actually accurate. But how would they know? They are operating on a limited set of data and simply cannot be critical about the information they provide because they have been coded to produce answers in a certain format. <\/span>At the end of the day, Pitman makes it clear that he is \u201cnot here to rant against AI systems and DNNs. But [he does] wish to rant against the uncritical, unsupervised, unchecked use of DNNs\u201d (25).<\/p>\n<p><span style=\"font-weight: 400\">Ewa P\u0142onowska Ziarek\u2019s essay echoes this warning against fully trusting AI as an information source. Ziarek aims to prove that capitalistic AI technologies \u201cweaken political agency and understanding by the ever-increasing automation of judgements, debates, and decisions by algorithmic procedures\u201d (35). To start off the chapter, Ziarek discusses the National Science and Technology Council\u2019s December 2022 report entitled \u201cRoadmap for Researchers on Priorities Related to Information Integrity Research,\u201d which aimed to give guidance to researchers trying to minimize the amount of disinformation produced and circulated by AI (30). While the report seems, at first glance, to support integrating more diverse perspectives into AI training datasets, Ziarek posits that \u201ca proposed engagement with diverse communities is not based on a participatory collaboration and understanding but rather driven by a top down, and at the time patronizing approach\u201d (32). 
The creators of these systems are less concerned with the actual inclusion of different opinions than they are with capitalizing on all the information they can get their hands on.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400\">The goal of gathering this information is not to make a system that can compare and critically analyze multiple viewpoints; it is to produce whatever information will get any sort of community to interact with it. This prioritization of profit makes the algorithms that companies use unpredictable, as we cannot be certain what they are including, excluding, or prioritizing. As if to exclude citizens\u2019 input even further, companies are extremely secretive about the algorithms they use, making it impossible to know what information you are interacting with. Ziarek also emphasizes that AI systems operate in such a way that they cannot take into account actual human understandings of the world, let alone replicate them (38). AI cannot understand the complexity of human interactions or the hundreds of different ways a single decision can ripple outward based on human reactions. 
Considering that, AI systems are not reliable tools for making decisions on complex issues.\u00a0<\/span><\/p>\n<figure id=\"attachment_7947\" aria-describedby=\"caption-attachment-7947\" style=\"width: 391px\" class=\"wp-caption alignleft\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-7947 \" src=\"http:\/\/wordpress.library.illinois.edu\/hpnl\/wp-content\/uploads\/sites\/19\/2025\/04\/gov.ai_-e1743524017263.jpg\" alt=\"Image of a scale on a computer screen made up of zeroes and ones.\" width=\"391\" height=\"251\" srcset=\"https:\/\/wordpress.library.illinois.edu\/hpnl\/wp-content\/uploads\/sites\/19\/2025\/04\/gov.ai_-e1743524017263.jpg 468w, https:\/\/wordpress.library.illinois.edu\/hpnl\/wp-content\/uploads\/sites\/19\/2025\/04\/gov.ai_-e1743524017263-300x192.jpg 300w\" sizes=\"auto, (max-width: 391px) 100vw, 391px\" \/><figcaption id=\"caption-attachment-7947\" class=\"wp-caption-text\">Image from Ivanti article <a href=\"https:\/\/www.ivanti.com\/blog\/pentagon-ai-principles\">&#8220;Could Pentagon AI Principles Be a Model for Future Government AI Regulation?&#8221;<\/a><\/figcaption><\/figure>\n<p><span style=\"font-weight: 400\">Much to Ziarek\u2019s distress, however, government agencies have already taken to using AI to do who knows what. As with the algorithms that major companies use, we cannot be certain which algorithms government agencies are using or even what they are using them for. Ziarek discusses an extremely influential study commissioned by the Administrative Conference of the United States in 2020 to evaluate government use of AI. At that time, of the 142 major federal agencies studied, \u201cnearly half of them have already adopted AI, including areas of law enforcement, health, financial regulation, adjudication\u201d and in communicating with the public about their rights (38). This is a little alarming considering what we have already discussed. 
AI is simply incapable of understanding human behavior and can clearly produce biased information based on its algorithms, so why would we ever rely on it to make decisions or interact with other humans for us?\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400\">It is unlikely that I\u2019ll ever get a satisfying answer to that question as we continue to rely on AI more and more and do less and less critical research on our own. An unsatisfying answer I can think of right now is: because it is easier. As with anything in nature, humans tend to follow the path of least resistance. It is much easier to ask ChatGPT to give you sources or answer a question for you than it is to actually dig through those sources yourself. But by delegating those kinds of tasks to AI, we lose the chance to form our own opinions based on what we have actually read. In reading this book, I had to parse through Pitman and Ziarek\u2019s ideas to form my own understanding. If I had just asked AI for a summary, I wouldn\u2019t have had the chance to think critically about their findings and ideas.<\/span><\/p>\n<p><span style=\"font-weight: 400\">And hey, this may make me seem like an anti-AI purist, but I\u2019ve used my fair share of AI, whether I\u2019ve realized it or not. That AI summary from Google is pretty enticing sometimes when I need a quick answer to \u201cHow to get my car free from ice.\u201d I\u2019ve also seen some of my STEM-major friends use AI to generate practice questions based on their field\u2019s standards for studying. All in all, AI is not outright bad, but reliance on it as a decision-making or thinking tool can be. These two essays show that AI is not as all-knowing and reliable as it may seem. We should all be aware of the risk of disinformation and think a little more critically about the answers ChatGPT and Google\u2019s AI Overviews give us. 
Myself included.<\/span><\/p>\n<p><span style=\"font-weight: 400\">Want to find more insightful and thought-provoking books like this one? Check out the History, Philosophy, and Newspaper Library\u2019s New Book sections both in person and <a href=\"https:\/\/i-share-uiu.primo.exlibrisgroup.com\/discovery\/search?query=any,contains,%22*%22&amp;pfilter=rtype,exact,books,AND&amp;tab=LibraryCatalog&amp;search_scope=MyInstitution&amp;sortby=date_d&amp;vid=01CARLI_UIU:CARLI_UIU&amp;facet=newrecords,include,90%20days%20back&amp;mfacet=library,include,5899%E2%80%93160736410005899,1&amp;lang=en&amp;offset=0\">online<\/a> to browse the most up-to-date publications we have to offer!<\/span><\/p>\n","protected":false},"excerpt":{"rendered":"<p>AI has been a hot topic around the world lately, and rightfully so. Artificial intelligence is a technology we have all heard about, and it has been growing rapidly for the last decade. It was only a few years ago that my classes&#8217; syllabi started including statements on the use of AI 
[&hellip;]<\/p>\n","protected":false},"author":873,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[1],"tags":[199,201,180,61,200],"class_list":["post-7923","post","type-post","status-publish","format-standard","hentry","category-uncategorized","tag-artificial-intelligence","tag-book-review","tag-new-books","tag-philosophy","tag-theory"],"acf":[],"_links":{"self":[{"href":"https:\/\/wordpress.library.illinois.edu\/hpnl\/wp-json\/wp\/v2\/posts\/7923","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/wordpress.library.illinois.edu\/hpnl\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/wordpress.library.illinois.edu\/hpnl\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/wordpress.library.illinois.edu\/hpnl\/wp-json\/wp\/v2\/users\/873"}],"replies":[{"embeddable":true,"href":"https:\/\/wordpress.library.illinois.edu\/hpnl\/wp-json\/wp\/v2\/comments?post=7923"}],"version-history":[{"count":20,"href":"https:\/\/wordpress.library.illinois.edu\/hpnl\/wp-json\/wp\/v2\/posts\/7923\/revisions"}],"predecessor-version":[{"id":7955,"href":"https:\/\/wordpress.library.illinois.edu\/hpnl\/wp-json\/wp\/v2\/posts\/7923\/revisions\/7955"}],"wp:attachment":[{"href":"https:\/\/wordpress.library.illinois.edu\/hpnl\/wp-json\/wp\/v2\/media?parent=7923"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/wordpress.library.illinois.edu\/hpnl\/wp-json\/wp\/v2\/categories?post=7923"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/wordpress.library.illinois.edu\/hpnl\/wp-json\/wp\/v2\/tags?post=7923"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}