The topic that all three articles address is artificial intelligence. Artificial intelligence was created with the goal of achieving human-level general intelligence, and it has become one of the most talked-about topics thanks to increased data volumes and improvements in computing power and storage. Yet even with such success, AI has started to become a weapon of mass misinformation, used to create deepfakes, in other words, digitally altered videos, images, or audio. AI has even become involved in politics, such as the 2024 presidential race, where AI-generated audio and imagery appeared in campaign ads.

A news source contained in one of the articles reported that tech companies are aware of AI creating false information and want to prevent that from happening by working with the government, but they also want to make it clear that there will be downsides to doing so. As mentioned before, this source carried factual weight because CNN got its information straight from the CEO of OpenAI. The language and word choice used in these articles was professional, with words that described AI and how it affects society in negative and dangerous ways.

In one of the articles from CNN, there was a subjective slant favoring one point of view over another: the CNN reporter stated that tech companies are aware of AI creating false information and that the companies will do the work to prevent it. The reporter then asked Noah Giansiracusa — author of “How Algorithms Create & Prevent Fake News” — whether he had doubts that the companies would actually do that work. Giansiracusa responded that yes, there are doubts; with so much money going into AI, tech companies may want to do the right thing, but at the same time all they can really think about is the money, because success and money are what matter most to them.
Some examples of bias that I detected in these articles were: AI creating information without any consent, with that information being false (confirmation bias); people being influenced by AI without their knowledge, believing that what they see on social media is true (framing effect); and people asking tech companies to do something about AI creating untrue information, while the companies fail to listen even though they are aware of the situation (reactance).