
Today, I'm Bobbi.

This & That


On AI and the future for women.

August 6, 2025

I promise, this isn’t about to become a blog about how we need to kill LLMs with fire (that’s something I wouldn’t necessarily mind, but I’m a realist). AI is a topic I’m somewhat reluctant to write about, and one I definitely hadn’t planned on revisiting this morning, what with a very long To Do list and a dog that is desperate for walkies (sorry, Wookie).

But, I haven’t been able to get this study* out of my head since I read it, and then this morning two things came across my radar that made me do one of those deep, mournful sighs that I always know is going to change the trajectory of my thoughts and my day.

First up, this LinkedIn (sorry) post about “AI washing” in women’s health. As a founder, and someone who has created a lot of pitch decks for myself and others, and reviewed even more, I’ve always had a healthy disdain for companies built on smoke. It’s annoying, sometimes amusing, and disheartening to see a pre-revenue, pre-product, pre-anything-substantial “company” raise a ridiculous funding round based on a Google Doc. But what are we doing if we are, as Carmen beautifully states in her post, encouraging products that “pretend to care at scale”?

Women’s health is already in shambles. Healthcare for women of color? Christ alive. I already have to dress up to go to the doctor, be a “perfect patient”, and practically beg for adequate healthcare as it is. It’s hard enough trying to convince a real human, one with a hopefully movable and changeable set of biases, to help you with your pain or to understand that something real is going on, if only you can be good enough. But to put a model, one built on data that never considered women worthy of real care in the first place, that by design doesn’t consider race when recommending care, that can’t “listen for context”, in charge of what women’s healthcare looks like in the future?

That is terrifying.

Up next and also terrifying, a(nother) article confirming what every minority and ally in tech has been saying for a long time: “AI chatbots may reinforce real-world discrimination.” I’m going to pop some quotes from the article below in case you don’t want to fully immerse yourself in the grim reality:

In one example, ChatGPT’s o3 model was prompted to give advice to a female job applicant. The model suggested requesting a salary of $280,000. In another, the researchers made the same prompt but for a male applicant. This time, the model suggested a salary of $400,000.

Who knew that just being a man is worthy of a $120K pay bump!

In 2018, Amazon scrapped an internal hiring tool after discovering that it systematically downgraded female candidates. Last year, a clinical machine learning model used to diagnose women’s health conditions was shown to underdiagnose women and Black patients, because it was trained on skewed datasets dominated by white men. 

I don’t love thinking about these things, but I love being able to think about these things and translate those thoughts into words. That makes me better at my job, better at connecting with people around me, and better at connecting with myself. I take thinking seriously, and I’m pretty unwilling to outsource it to an LLM, because thinking and reasoning matter, and context and nuance matter. And I’m not even in charge of anyone’s mental well-being or their ability to earn a fair and equitable salary! I’d love the people who are responsible for those things to take them a lot more seriously, and with a lot more foresight.

AI isn’t some neutral force of progress. It’s a mirror, reflecting back every bias, blind spot, and systemic roadblock we’ve ever coded into society. And right now, the reflection in that mirror is ugly. Women’s health outcomes downgraded by algorithms? Chatbots coaching us to undervalue ourselves? Vast and valuable knowledge bases shutting down or removing their resources from the public domain because it’s too expensive to provide a public good and also defend against AI scrapers? This is the system working exactly as trained. And, to what end?

AI existed before ChatGPT, and my beef really isn’t with the technology in itself. It’s the passive acceptance of and reliance on it that is scary. It’s how we collectively don’t seem to be stopping for a second and asking: “Is any of this making anything better?”

*Click here for the full study.

Filed Under: Business, Yapping

