
The current state of AI ethics

Interview with Laura Miller


After a nice summer break here at the AI Think Tank Podcast, I had the pleasure of sitting down with Laura Miller. Laura is the founder of NextGen Ethics, an AI consultancy dedicated to embedding ethics into the lifecycle of AI development. She is not only a philosopher and award-winning ethicist but also a passionate advocate for creating technology that aligns with the greater good of society.

The conversation touched on a wide range of topics, from the challenges of AI in moderating social media content to the broader implications of AI on global sustainability goals. This episode was packed with thought-provoking insights, and I’m excited to share a recap of the key points we discussed.

The next frontier in content moderation

Laura’s company, NextGen Ethics, has been working on a personalized solution for moderating harmful content on social media platforms. One of the core issues she addressed is the responsibility tech companies have (or in some cases, fail to take) when it comes to curating the vast amount of content on their platforms.

“We’re no longer waiting for tech companies to come in with a magic solution. Instead, we’re creating one of our own, guided by the communities that are using these platforms,” Laura explained. She emphasized that the term “users” can be problematic, noting its association with addictive behaviors, particularly for Gen Z. This generation is not just the future; they’re nearly half of the world’s population and are facing one of the greatest mental health crises in history.

Her company is moving quickly in this space and even reached the semi-finals of the Startup World Cup, a sign of how much momentum this work has gained. Laura stressed the urgency of acting now rather than waiting for government regulation to catch up: “Waiting for regulation or businesses to act in our best interests is not enough. We must take matters into our own hands.”
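
The episode stayed at the level of principles rather than implementation, but the core idea, letting each community set its own moderation rules instead of accepting a platform-wide default, can be sketched in a few lines. The following is a toy illustration only, with a simple word-list score standing in for a real toxicity classifier; none of the names reflect NextGen Ethics’ actual system.

```python
# A toy sketch of community-guided, personalized content moderation.
# This illustrates the general idea, not NextGen Ethics' system:
# each community sets its own thresholds instead of a platform-wide rule.
from dataclasses import dataclass

@dataclass
class CommunityPolicy:
    """Moderation preferences chosen by a community, not by the platform."""
    name: str
    toxicity_threshold: float  # posts scoring above this are hidden
    blocked_topics: set[str]

def toxicity_score(text: str) -> float:
    """Placeholder for any real toxicity classifier (e.g., an ML model)."""
    hostile_words = {"idiot", "hate", "stupid"}
    words = text.lower().split()
    return sum(w in hostile_words for w in words) / max(len(words), 1)

def allowed(post: str, policy: CommunityPolicy) -> bool:
    """Apply the community's own rules rather than a one-size-fits-all filter."""
    if any(topic in post.lower() for topic in policy.blocked_topics):
        return False
    return toxicity_score(post) <= policy.toxicity_threshold

strict = CommunityPolicy("support-group", toxicity_threshold=0.05,
                         blocked_topics={"self-harm"})
print(allowed("You are an idiot", strict))  # False: exceeds the threshold
```

The design point is that the threshold and blocked topics live with the community, so two communities can apply very different standards to the same post.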


The ethics of AI: What we should be asking

A big part of our conversation centered around how developers and companies are thinking about AI and its impact. Laura’s stance was clear: AI isn’t just software you develop and push out; it has far-reaching implications and responsibilities attached.

“We’re at a huge moment in AI, particularly in education,” she said, “but the talk about education gives me pause. It’s not just about educating the public; it’s also about educating developers and making them realize this isn’t like any other software.” She emphasized the importance of building relationships between developers, deployers, and the public, creating a three-way dialogue that can ensure the technology is deployed not only effectively but also ethically.

I was fascinated by how she broke down these responsibilities: “You can’t just build AI in your basement, push it out the door, and wash your hands of it. Developers need to have an ongoing relationship with the technology they create and the people who will be affected by it.”

AI’s impact on global development and inequality

One of the more striking moments in our conversation was when Laura connected AI with the global struggle to meet sustainable development goals. “Every single one of the sustainable development goals that we’re supposed to meet by 2030? We’re on track to miss all of them,” she told me, “and AI is going to play a massive role in either exacerbating or alleviating these global challenges.”

The idea that AI can help or harm efforts around climate change, education, and even resource distribution is something that’s not always front and center in AI discussions. For Laura, this was personal. She urged that ethics must become a priority, not just something added on after the fact. “Laws are the bare minimum; they can tell you what’s legal, but they can’t tell you what’s right,” she asserted. At some point, society will face a crisis, and ethics will be what guides us through it.

The battle for data ownership

We also tackled the complex issue of data ownership in the age of AI, something Laura believes is in a state of chaos. She argued that “if you post it online, it’s no longer yours.” Laura has personal experience with this, having deliberately kept her own copyrighted material off her website to avoid it being scraped by AI tools. “We’ve lost ownership of our information, our content, and our ideas to the AI systems that scrape everything we post for training data.”

This becomes especially problematic in the realm of creative work. “Imagine being an artist or a writer and realizing that everything you create could be fed into a model that others will use to generate their own works,” she said, noting that there’s a tension between the progress AI makes possible and the erosion of individual rights to intellectual property.

Friend of the show and retired CTO of RealNetworks, Reza Rassool, shared his thoughts live with us. Reza founded Kwaai AI, a non-profit organization working to democratize AI for all people. His insights complemented and deepened many of the ethical concerns Laura raised throughout the episode, particularly around data ownership, AI deployment, and the ethical responsibilities of developers. Their ideas intersected in several key ways, creating a rich dialogue on the future of AI and ethics.


1. Data ownership and privacy

Laura was clear in her concern about the loss of data ownership in the current AI landscape. She argued that once content is posted online, it can be scraped and used for training AI models without consent, a major ethical violation in her eyes. She pointed out how even copyrighted content can be absorbed into AI systems, stripping creators of their intellectual property.

Reza’s discussion of personalized AI offered a potential solution to this issue. His advocacy for small language models that run locally, controlled by the user, aligns with Laura’s concerns about protecting personal data and intellectual property. By allowing individuals and businesses to bring their own data into AI systems, Reza’s model mitigates the risk of data being scraped and misused by large corporations.

“Personal AI puts data control back in the hands of the user,” Reza explained, which directly addresses Laura’s point that “we’ve lost ownership of our information” due to the vast scraping practices of cloud-based AI systems.
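
To make Reza’s point concrete, here is a minimal sketch of what “personal AI” can look like in practice: a small language model downloaded once and then run entirely on local hardware, so prompts and data never leave the machine. The model name and prompt are illustrative assumptions, not anything Kwaai AI ships; the sketch uses the open-source Hugging Face transformers library.

```python
# A minimal sketch of "personal AI": a small language model running
# entirely on local hardware, so prompts and data never leave the machine.
# The model name below is an example; any small, locally hosted model works.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "microsoft/phi-2"  # illustrative small model; swap in your own

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

def ask_locally(prompt: str, max_new_tokens: int = 100) -> str:
    """Generate a completion without sending data to any external service."""
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

print(ask_locally("Summarize my meeting notes in one sentence:"))
```

Once the weights are on disk, nothing in this loop requires a network connection, which is exactly the property that keeps personal data out of the scraping pipelines Laura described.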

2. Ethics in AI development

Laura stressed that developers have a moral responsibility not just to create technology but to consider its long-term societal impact. She emphasized that AI isn’t just software you develop and release; it carries significant ethical obligations. “We need to recognize that developers now carry different duties and obligations they haven’t had before,” she said.

Reza supported this point by emphasizing the importance of building AI systems that reflect the values of the people who create and use them. His work on Retrieval-Augmented Generation (RAG) allows AI to be built around a specific, ethically controlled knowledge base, reducing the risk of bias and misinformation. By giving users the ability to train AI on their own curated datasets, Reza’s approach adds an additional layer of responsibility and control, aligning with Laura’s call for developers to take ownership of the tools they create.
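
RAG is a general pattern, and a minimal sketch helps show why it supports this kind of control. The tiny corpus, model name, and helper functions below are illustrative assumptions rather than Kwaai AI’s code; the key property is that the model is pointed at a user-curated knowledge base instead of the open web.

```python
# A minimal sketch of Retrieval-Augmented Generation (RAG) over a
# user-curated knowledge base. Corpus and names are illustrative only.
from sentence_transformers import SentenceTransformer, util

# 1. The user controls exactly which documents the AI may draw on.
curated_docs = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Support hours are Monday through Friday, 9am to 5pm.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")
doc_embeddings = embedder.encode(curated_docs, convert_to_tensor=True)

def retrieve(query: str, top_k: int = 1) -> list[str]:
    """Return the curated passages most relevant to the query."""
    query_embedding = embedder.encode(query, convert_to_tensor=True)
    hits = util.semantic_search(query_embedding, doc_embeddings, top_k=top_k)[0]
    return [curated_docs[hit["corpus_id"]] for hit in hits]

def build_prompt(query: str) -> str:
    """Ground the language model in retrieved context, not open web data."""
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("Can I return an item after two weeks?"))
```

Because retrieval only ever touches the curated corpus, answers stay grounded in documents the user has vetted, which is the extra layer of responsibility and control Reza described.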

3. Bias in AI models

One of Laura’s key concerns was the inherent bias present in large datasets used to train most AI models. She argued that bias exists not just in data but in the developers and systems that create AI, and that AI models trained on vast, uncontrolled datasets will inevitably inherit societal biases. She also highlighted that AI often reflects mainstream, surface-level understandings of ethical concerns without diving into deeper, more complex issues like intersectionality and global impacts.

Reza’s critique of bloated language models reinforced Laura’s concerns. He argued that large language models, by design, are filled with general knowledge that can often be irrelevant, outdated, or biased. His solution, to use smaller, specialized models trained on personal or specific datasets, helps counteract these biases. By limiting the scope of the data and allowing users to build their own ethical frameworks, Reza’s approach reduces the influence of the often biased, generalized data that Laura was worried about.

As Reza noted, “You don’t need every conflicting idea ever presented on the internet to come to your simple query,” which is a direct response to Laura’s concerns about bias being baked into AI systems trained on massive, unfiltered datasets.

4. The need for ethical AI in global development

Both Laura and Reza are focused on the broader, global implications of AI. Laura stressed that AI could either help or hinder progress towards the UN’s Sustainable Development Goals (SDGs), and she warned that we are currently on track to miss these goals because of the unregulated and often harmful deployment of AI.

Reza’s emphasis on decentralizing AI and making it more accessible to individuals and communities ties into this concern. His work aims to give more people access to AI tools without relying on the large tech companies that, as Laura pointed out, have little incentive to prioritize ethical concerns over profit. By making AI more personal and accessible, Reza’s approach could help smaller businesses and developing nations, which often don’t have the resources to compete with the tech giants.

5. Human-centric AI

Both Laura and Reza underscored the importance of keeping humans at the center of AI development and deployment. Laura warned that AI systems cannot replace human empathy, judgment, or connection, especially in areas like customer service, education, and mental health. She believes that some businesses may even thrive by choosing not to over-automate, maintaining personal, human connections with their customers.

Reza echoed this sentiment with his focus on local, personalized AI systems. By putting AI in the hands of individuals and allowing them to shape it around their own data and needs, Reza’s approach keeps humans at the center of AI development, ensuring that the technology is used as a tool to augment human capabilities rather than replace them.

AI and small businesses: What’s the ROI?

One of the practical challenges many businesses are grappling with is how to integrate AI into their workflows in a way that makes sense and, more importantly, delivers a return on investment. Laura pointed out that while large tech companies have the resources to experiment with AI, small and medium-sized businesses (SMBs) are often left trying to figure out how to use AI tools effectively. “There’s a huge tech gap,” she explained. “Many companies are struggling just to find the right skills, let alone implement AI in a way that makes sense for their business.”

For a lot of SMBs, the focus is on where AI can save time, like automating emails or helping with marketing. But Laura warned that this approach misses the bigger picture. “It’s about more than just finding shortcuts; it’s about understanding the long-term implications of using AI. If you’re using AI to replace people, you still need experts who know whether the AI’s output is accurate or ethical.”

The personal side of AI: Relationships, trust, and authenticity

At the heart of all of Laura’s work is a concern for maintaining human relationships and authenticity in a world increasingly dominated by automation. We discussed the importance of human-to-human interaction, especially as more and more companies lean on chatbots and automated systems to handle customer service.

“There’s going to be a niche for not modernizing,” Laura said. “We need to think carefully about who we want to be as businesses and what kind of experience we want to offer. Do you want to be a company where your customers talk to a bot, or do you want to be a company where they feel seen and heard by a human?”

She believes that there’s value in keeping things personal, even as AI advances. “We can’t lose sight of the fact that people need real connections. We saw this during COVID: people missed interacting with one another. As we build these systems, we have to remember that technology can’t replace that feeling of being understood by another human being.”


Final thoughts: AI as a reflection of society

Laura left us with a powerful reflection: AI is a mirror of the society that creates it. “We are the data set for AI. Everything we put into it—our biases, our values, our flaws—will be reflected back to us. If we want AI to become the best of us, we have to show it the best of us.”

It was a sobering and inspiring conversation, reminding me that we are all responsible for the future of AI. It’s not just up to the tech giants or the government to steer this ship—it’s up to all of us. “AI can be the tool that helps us solve some of our biggest challenges,” Laura said, “but only if we put ethics at the heart of it.”

As we wrapped up the conversation, I couldn’t help but feel optimistic. Yes, there are enormous challenges ahead, but with thinkers like Laura leading the charge, there’s hope that AI will ultimately serve the greater good.

Watch the full episode of the AI Think Tank Podcast, Episode 13 with Laura Miller, here:

Join us as we continue to explore the cutting-edge of AI and data science with leading experts in the field. 

Subscribe to the AI Think Tank Podcast on YouTube. Would you like to join the show as a live attendee and interact with guests? Contact us.
