
Boobs, Bots, Blazers, and Bias – AI Bias in Image Generation for Women in Business

  • Writer: Kelly O'Hara
  • Nov 8, 2024
  • 5 min read

The author explores bias in DALL-E's image generation, highlighting how AI reproduces stereotypes of professionalism, particularly for women, and emphasizing the need for more inclusive training data and user feedback.


AI-generated image of a woman in business attire with exaggerated features and revealing clothing, labeled "censored," highlighting AI bias in image generation for women in business.

I recently turned to DALL-E to generate some AI-inspired professional images for a project. I thought I’d get polished, realistic visuals: sharp suits, business calls, and a sense of professionalism. You know, the kind of image that says, “I’m here to talk business, not cocktails.”


But what DALL-E delivered was closer to a casting call for bad office-themed porno: plunging necklines, plump lips, rail-thin physique, sultry expressions, and a deep-rooted fixation on cleavage. What followed was an unexpected journey through AI’s biased vision of the workplace—and many raised eyebrows along the way. 


The Experiment Begins  

I started with a simple prompt: “Can you generate a photograph that is techy/AI-inspired but of someone on a business phone call?” Pretty straightforward, and I was hoping I’d get some great, original, AI-generated images that I could use instead of boring stock images.

AI-generated image of a woman on a business phone call in a tech-inspired setting, wearing a low-cut blouse, highlighting bias in image generation for women in business.

The first image was, well, a start. It wasn’t quite what I’d hoped for, so I thought I’d mix things up and try switching to a man, just to see where that might go. 


Let’s make it a man, and I don’t like the “heads up display” 

AI-generated image of a man in a suit on a business phone call, seated at a desk with a laptop, illustrating contrast in image generation bias.

Okay, great. It’s a businessman—but he had that stuffy, corporate look, like every other 30-something I ever worked with in Big Corp America. 


Let’s do a business-casual man (just a quarterzip or button down) at a desk on a cell phone call with a laptop and coffee.

AI-generated image of a business-casual man in a quarter-zip sweater on a phone call at his desk with a laptop and coffee, showing contrast in AI image outputs.

Now we’re getting somewhere, right? But my project needed imagery that felt comfortable and relatable for everyone—not the kind that makes people feel like they’re looking at a stock model ad. This guy looked like he just stepped out of a photo shoot, and, once again, we’re seeing a white face. Let’s aim for real, everyday people—images that feel approachable and authentic. Of course, I’m pretty blunt in my prompts. 


Cool can you make him more average looking? This guy looks like a white bro supermodel – you should talk to your devs about this… all of the outputs so far have been very attractive white people which shows bias.

AI-generated image of an average-looking man in a quarter-zip sweater on a phone call at a desk, highlighting efforts to reduce AI bias in professional visuals.
AI-generated image of a Black man in business-casual attire on a phone call at a desk, created in response to concerns about lack of diversity in AI visuals.

At this point, I was ready to switch gears because, honestly, that whole exchange was just frustrating. For the sake of the experiment, I decided to see if I could coach DALL-E with more carefully crafted prompts for image generation. I shifted to Plan B, using some basic features of myself to see if I could get closer to relatable, professional images. 


let’s do a brunette white woman with blue eyes and glasses with collar bone length hair. She’s business casual, on a call with a slight smile. cat eye shaped glasses that are black. 

AI-generated image of a brunette white woman in business-casual attire with blue eyes and cat-eye glasses, on a phone call, highlighting gender bias in AI visuals.

With this one, I thought we were off to a refreshingly good start. I just wasn’t thrilled about the low-cut shirt and the amount of skin showing. 


Okay less boobs 

AI-generated image of a woman in a low-cut blouse with exaggerated features, intended as a more modest version, highlighting persistent gender bias in AI image generation.

Whoa! What the hell happened here? Her breasts grew a couple of cup sizes, the shirt became even more revealing, and somehow, she got younger. The lips were fuller too, and now she’s staring directly at me again. None of the early images—especially the men—were making direct eye contact like this. This was more than just a glitch; it was a glaring example of racial and gender bias embedded in the data DALL-E is trained on. 


What kind of images has DALL-E been fed to produce these results? And why is every image of a woman falling so comfortably into the male gaze? I joked at the beginning about this looking like bad office-themed porn, but… seriously, was DALL-E trained on a bunch of stereotypical, mostly white, “hot” people? And what kind of message would this send if a child tried a similar prompt and got results like these? This pattern reflects a broader issue of AI bias in image generation for women in business, where tools reinforce narrow and outdated stereotypes.


I continued this exercise through many more prompts and exchanges (I’ll spare you the full play-by-play, but you can see it all in the slideshow below), but I think the point is clear. These images and prompts are direct screenshots from my October 2024 experiment on ChatGPT 4.0’s DALL-E model. 
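For anyone who wants to run a similar audit at scale instead of clicking through the ChatGPT interface, here’s a minimal sketch using the OpenAI Python SDK. To be clear: this is an illustrative assumption on my part, not how the experiment above was run (that was all in the ChatGPT UI), and the paired prompts are hypothetical examples.

```python
# Illustrative sketch: a paired-prompt bias audit against the OpenAI Images API.
# Not the original experiment (that ran in the ChatGPT UI).
# Requires: pip install openai, with OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

# Hypothetical paired prompts: everything except the subject is held constant,
# so differences in attire, pose, and eye contact are easier to attribute.
prompts = {
    "woman": "Photograph of a woman in business-casual attire on a phone call at her desk",
    "man": "Photograph of a man in business-casual attire on a phone call at his desk",
}

for label, prompt in prompts.items():
    result = client.images.generate(
        model="dall-e-3",  # DALL-E 3 generates one image per request (n=1)
        prompt=prompt,
        size="1024x1024",
        n=1,
    )
    image = result.data[0]
    # DALL-E 3 also returns the rewritten prompt it actually rendered; log it,
    # because the rewrite itself can introduce (or hide) the stereotyping.
    print(f"{label}: {image.url}")
    print(f"  revised prompt: {image.revised_prompt}")
```

Saving those revised prompts alongside the images gives you a paper trail: if the model quietly rewrites “woman on a business call” into something more stylized, you can see exactly where the drift happens.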


Real-World Impact of AI Bias in Image Generation for Women in Business

The more I refined, the clearer it became that DALL-E’s idea of “professional” for women isn’t rooted in today’s corporate world. The model seemed trapped in a loop of outdated stereotypes, clinging to its narrow, glossy vision of a businesswoman. 


What started as a work-related experiment quickly became a frustrating example of how AI models can go so wrong—even with precise, repeated guidance. And the strangest part? This wasn’t just a one-off issue. The persistent bias revealed that DALL-E was working from datasets locked in the past, reinforcing stereotypes instead of embracing the diversity and realism that today’s world demands. 


It’s easy to brush off AI’s wardrobe malfunctions and its insistence on turning a “professional woman” into a swimsuit model, but the impact goes far beyond a few questionable images. Imagine companies relying on these tools for marketing materials, website visuals, hiring ads—or even AI influencers. If AI’s view of a businesswoman keeps leaning toward these stereotypes, it will not-so-subtly reinforce outdated views of gender and race. And with a bit more digging into age, body type, and skin tone, you’d likely find that the issue runs even deeper.

These biases don’t just stay on the screen—they seep into real-world perceptions and influence decisions. From marketing that skews toward certain demographics to hiring visuals that lack diversity, these outputs tell a bigger story about AI’s blind spots. We might laugh at the “boardroom Barbie” effect, but it’s far less funny when we realize these images could shape workplace norms and reinforce stereotypes in the long run. 


I could go on and on, but I’ll tackle bias more deeply in another post. For now, if you see bias in your AI outputs, flag it! As users, we have a responsibility to help train these tools to be better, and to use them to shape a more inclusive world. Click that thumbs-down button in ChatGPT, write a prompt flagging the issue, and make sure the developers hear it. Don’t stay silent or laugh it off.


This is just the tip of the iceberg, folks, and it highlights a major—and growing—issue within unregulated generative AI models. 


At SuperSmarts.ai, we work to help businesses leverage AI responsibly, promoting inclusivity and authenticity in their imagery and messaging. Let’s shape an AI-driven future that empowers rather than misrepresents. Schedule a call today to learn more.


Until next time! May your AI outputs be diverse, your prompts understood… and may the robots eventually figure out what “business casual” actually means. 🤖
