{"id":1879,"date":"2026-03-08T22:51:55","date_gmt":"2026-03-09T02:51:55","guid":{"rendered":"https:\/\/openlab.bmcc.cuny.edu\/mmp100-stein\/?p=1879"},"modified":"2026-03-08T22:54:00","modified_gmt":"2026-03-09T02:54:00","slug":"ai-generative-image-overview","status":"publish","type":"post","link":"https:\/\/openlab.bmcc.cuny.edu\/mmp100-stein\/images-module\/ai-generative-image-overview\/","title":{"rendered":"AI Generative Image Overview"},"content":{"rendered":"\n<h2 class=\"wp-block-heading\">From Text to Pixels: How Generative AI Creates Images<\/h2>\n\n\n\n<p>When you use generative AI to make an image, you\u2019re working with a system that has been trained to recognize and rebuild visual patterns \u2014 not just to draw, but to recreate structure from noise.<\/p>\n\n\n\n<p>In text generation, the AI predicts the next word in a sequence based on patterns it has learned. For images, the data is pixels \u2014 millions of color values that form shapes, textures, and lighting patterns. The model learns from billions of training images, each converted into numbers that describe how pixels relate to each other. Over time, it builds statistical \u201cmaps\u201d of what things like trees, faces, or clouds tend to look like.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">The Diffusion Process: Learning to Remove Noise<\/h2>\n\n\n\n<p>During training, the model is given an image to which random noise has been gradually added until the image becomes pure static. 
Then it learns the reverse process \u2014 how to take that noisy image and gradually remove the noise to recover the original picture.<\/p>\n\n\n\n<p>By repeating this millions of times, the model learns a general rule:<\/p>\n\n\n\n<p><em>Starting with random noise, here\u2019s how to remove the noise in a way that reveals something that looks like the images the model has been trained on<\/em>.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Human Input: Beyond the Data<\/h2>\n\n\n\n<p>It\u2019s important to understand that creating these models isn\u2019t just about the billions of images. People are essential at every step.&nbsp; Initially, vast datasets of images are gathered.&nbsp; Human labelers often categorize these images and write descriptive captions \u2013 detailed text descriptions of what&#8217;s in the image (e.g., &#8220;a golden retriever playing fetch in a park&#8221;). These captions become crucial for connecting the visual content with language.&nbsp; Furthermore, AI trainers are employed to fine-tune the models, evaluating their output and adjusting the training process.&nbsp; Even seemingly simple tasks like verifying that images are not duplicates or that they are safe for public display require human oversight.&nbsp; Finally, platforms like reCAPTCHA (or similar systems) often utilize human interaction to help distinguish real images from automatically generated ones, improving data quality.<\/p>\n\n\n\n<p>When you generate a new image, the AI starts with noise and applies that learned denoising process \u2014 guided by your text prompt. Each step removes a bit more noise, revealing colors and shapes that match your description. It doesn\u2019t \u201ccopy\u201d any one training image; instead, it uses what it has learned about visual structure to create a new combination. 
However, without the images it was trained on, the model could not be made.<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"512\" height=\"347\" src=\"https:\/\/openlab.bmcc.cuny.edu\/mmp100-stein\/wp-content\/uploads\/sites\/4563\/2026\/03\/512px-X-Y_plot_of_algorithmically-generated_AI_art_of_European-style_castle_in_Japan_demonstrating_DDIM_diffusion_steps.png\" alt=\"X-Y plot of algorithmically-generated AI art of European-style castle in Japan demonstrating DDIM diffusion steps\" class=\"wp-image-1922\" srcset=\"https:\/\/openlab.bmcc.cuny.edu\/mmp100-stein\/wp-content\/uploads\/sites\/4563\/2026\/03\/512px-X-Y_plot_of_algorithmically-generated_AI_art_of_European-style_castle_in_Japan_demonstrating_DDIM_diffusion_steps.png 512w, https:\/\/openlab.bmcc.cuny.edu\/mmp100-stein\/wp-content\/uploads\/sites\/4563\/2026\/03\/512px-X-Y_plot_of_algorithmically-generated_AI_art_of_European-style_castle_in_Japan_demonstrating_DDIM_diffusion_steps-300x203.png 300w\" sizes=\"auto, (max-width: 512px) 100vw, 512px\" \/><figcaption class=\"wp-element-caption\">&#8220;X-Y plot of algorithmically-generated AI art of European-style castle in Japan demonstrating DDIM diffusion steps&#8221; by Benlisquare is licensed under CC BY-SA 4.0.<\/figcaption><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\">Prompting and Iteration<\/h2>\n\n\n\n<p>Your prompt gives the AI a direction \u2014 it turns words into a kind of \u201cmap\u201d that influences what it reveals during denoising.&nbsp; The quality of the prompt directly impacts the quality of the image.<\/p>\n\n\n\n<p><\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Ethical Considerations: Thinking Critically About AI Image Generation<\/h2>\n\n\n\n<p>AI image generation is a powerful technology with significant ethical implications. 
To use AI ethically, it&#8217;s crucial to understand these implications, both in how the AI is trained and in how it&#8217;s used.<\/p>\n\n\n\n<p><strong>1. Bias in Training Data:<\/strong> AI learns from the data it\u2019s fed. The massive datasets used to train image generation models are compiled from the internet, and the internet reflects existing societal biases. This means AI can reproduce and even amplify harmful stereotypes related to gender, race, age, ability, religion, and more. For example, a prompt including &#8220;CEO&#8221; might disproportionately generate images of men in suits, reinforcing a biased perception of leadership. The humans involved in curating and labeling these datasets, as well as those who provide feedback to train the AI to avoid certain outputs (a process called <em>human-in-the-loop reinforcement learning<\/em>), also bring their own biases into the loop. Even seemingly neutral captions can subtly reinforce stereotypes. <\/p>\n\n\n\n<p><strong>2. Copyright and Ownership:<\/strong> The images used to train these models are often copyrighted. While AI doesn&#8217;t &#8220;copy&#8221; images directly, there\u2019s ongoing debate about whether the generated images infringe on the copyrights of the original artists.\u00a0 The legal landscape is still changing, and it&#8217;s important to be aware of the potential copyright implications of using AI-generated images.\u00a0 Think about how a prompt referencing a specific artist&#8217;s style raises these issues. <\/p>\n\n\n\n<p>While there are potential copyright issues with how models are trained, at the current time, images generated by AI are not copyrightable. This means that if you&#8217;re doing work that you or a client wants to copyright, you should not use AI-generated images.<\/p>\n\n\n\n<p><strong>3. Misuse and Potential Harm:<\/strong> AI image generation can be misused to create deepfakes, spread misinformation, or generate harmful content. 
It\u2019s essential to consider the potential impact of your creations and to use this technology responsibly.\u00a0 <\/p>\n\n\n\n<p>For example, on the social media platform X, the Grok AI allowed people to edit photos that other users had posted, including letting people other than the original poster alter images so that their subjects appeared to wear revealing clothing. This caused public controversy, including bans in certain countries and investigations by the <a href=\"https:\/\/oag.ca.gov\/news\/press-releases\/attorney-general-bonta-launches-investigation-xai-grok-over-undressed-sexual-ai\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">State of California<\/a> and the <a href=\"https:\/\/ico.org.uk\/about-the-ico\/media-centre\/news-and-blogs\/2026\/02\/ico-announces-investigation-into-grok\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">United Kingdom<\/a>.<\/p>\n\n\n\n<p>Non-consensual, sexually explicit material is never appropriate, and there are many other types of images that could be inappropriate, including images that could be used to impersonate someone or to create false narratives. Think about the implications of the images you create with generative AI.<\/p>\n\n\n\n<p><strong>4. Transparency and Accountability:<\/strong> It&#8217;s important to be transparent about the fact that an image was AI-generated. 
This helps to avoid misleading viewers and promotes accountability.&nbsp; Consider adding a disclaimer when sharing AI-generated images, especially if they could be misinterpreted as real.<\/p>\n\n\n\n<p><\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Related Posts<\/h2>\n\n\n\n<ul class=\"wp-block-list\">\n<li><a href=\"https:\/\/openlab.bmcc.cuny.edu\/mmp100-stein\/images-module\/generative-ai-image-tools\/\" data-type=\"post\" data-id=\"1901\">Generative AI Image Tools<\/a><\/li>\n\n\n\n<li><a href=\"https:\/\/openlab.bmcc.cuny.edu\/mmp100-stein\/images-module\/image-prompting-guide\/\" data-type=\"post\" data-id=\"1919\">Image Prompting Guide<\/a><\/li>\n<\/ul>\n\n\n\n<p><\/p>\n","protected":false},"excerpt":{"rendered":"<p>From Text to Pixels: How Generative AI Creates Images When you use generative AI to make an image, you\u2019re working with a system that has been trained to recognize and rebuild visual patterns \u2014 not just to draw, but to recreate structure from noise. In text generation, the AI predicts the next word in a 
[&hellip;]<\/p>\n","protected":false},"author":16,"featured_media":1923,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"portfolio_post_id":0,"portfolio_citation":"","portfolio_annotation":"","openlab_post_visibility":"","footnotes":""},"categories":[34],"tags":[53,33],"class_list":["post-1879","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-images-module","tag-gen-ai","tag-images"],"_links":{"self":[{"href":"https:\/\/openlab.bmcc.cuny.edu\/mmp100-stein\/wp-json\/wp\/v2\/posts\/1879","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/openlab.bmcc.cuny.edu\/mmp100-stein\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/openlab.bmcc.cuny.edu\/mmp100-stein\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/openlab.bmcc.cuny.edu\/mmp100-stein\/wp-json\/wp\/v2\/users\/16"}],"replies":[{"embeddable":true,"href":"https:\/\/openlab.bmcc.cuny.edu\/mmp100-stein\/wp-json\/wp\/v2\/comments?post=1879"}],"version-history":[{"count":6,"href":"https:\/\/openlab.bmcc.cuny.edu\/mmp100-stein\/wp-json\/wp\/v2\/posts\/1879\/revisions"}],"predecessor-version":[{"id":1980,"href":"https:\/\/openlab.bmcc.cuny.edu\/mmp100-stein\/wp-json\/wp\/v2\/posts\/1879\/revisions\/1980"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/openlab.bmcc.cuny.edu\/mmp100-stein\/wp-json\/wp\/v2\/media\/1923"}],"wp:attachment":[{"href":"https:\/\/openlab.bmcc.cuny.edu\/mmp100-stein\/wp-json\/wp\/v2\/media?parent=1879"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/openlab.bmcc.cuny.edu\/mmp100-stein\/wp-json\/wp\/v2\/categories?post=1879"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/openlab.bmcc.cuny.edu\/mmp100-stein\/wp-json\/wp\/v2\/tags?post=1879"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}