Be the first to know.
Get our A.I. weekly email digest.

The Regulatory Landscape of AI: A Conversation with Jared Bowns on Innovation and Accountability

Article #6 of Confronting AI Series: AI's rapid evolution has outpaced regulation. In this Q&A, Jared Bowns explores how adaptive frameworks, accountability, and ethical oversight can help engineers manage the risks and responsibilities of building trustworthy AI systems.

author avatar

06 Aug, 2025. 8 minutes read

This is the final article in our multi-part Confronting AI series, brought to you by Mouser Electronics. Based on the Methods: Confronting AI e-magazine, this series explores how artificial intelligence is reshaping engineering, product design, and the ethical frameworks behind emerging technologies. Each article examines real-world opportunities and unresolved tensions—from data centers and embedded ML to regulation, adoption, and ethics. Whether you're developing AI systems or assessing their broader impact, the series brings clarity to this rapidly evolving domain.

"AI Everywhere" explains how AI extends beyond LLMs into vision, time series, ML, and RL.

"The Paradox of AI Adoption" focuses on trust and transparency challenges in AI adoption.

Repowering Data Centers for AI explores powering data centers sustainably for AI workloads.

Revisiting AI’s Ethical Dilemma revisits ethical risks and responsible AI deployment.

Overcoming Constraints for Embedded ML presents ways to optimize ML models for embedded systems.

The Regulatory Landscape of AI discusses AI regulation and balancing safety with innovation.


Introduction

The rapid rise of artificial intelligence (AI) has sparked intense debates about regulation, ethics, and societal impacts. Unlike other technologies, AI launched ahead of regulatory frameworks, leading to controversies over training data, privacy, and intellectual property. The European Union's 2023 AI Act was one of the first major legislative responses to AI, but questions remain about how to regulate this transformative technology without stifling innovation. 

In this discussion, Jared Bowns, a veteran of enterprise AI implementation and regulatory strategy, shares insights into the challenges and opportunities in navigating AI's regulatory landscape. The conversation explores how engineers and organizations can balance innovation with safety, fairness, and social good, while still enabling future progress.


Interviewer: Jared, please introduce yourself.

Jared Bowns: I'm the head of data and AI practice at Elyxor, where I focus on software consulting, technical strategy, and implementation. Over the past decade, I've had the privilege of working on transformative AI projects, including my time as vice president of engineering at DataRobot. There, I founded the explainable AI team to enhance transparency and compliance, collaborating with government agencies and industry leaders to shape regulatory frameworks. My passion lies in making AI both innovative and accessible.

Exploring the Concept of Adaptive Regulation

Interviewer: What does the concept of adaptive regulation mean to you, and how can it be implemented effectively in the AI industry?

Jared Bowns: Adaptive regulation means having flexible rules that can evolve over time—particularly in industries like AI, where breakthroughs happen every few weeks. Whether from startups or [from] established players like Google, Amazon, or OpenAI, the pace of change in AI requires frameworks that maintain safety without stifling innovation. Static legislation won't work. Instead, we need a dedicated regulatory body—similar to the [US] Federal Trade Commission or the Securities and Exchange Commission—focused exclusively on developing and maintaining an appropriate framework for AI.

Interviewer: How can regulatory frameworks evolve to keep pace with rapid AI advancements without hindering innovation?

Jared Bowns: I'll be the first to admit that expecting the government to handle everything perfectly is optimistic. Fostering a partnership among leading researchers, the public sector, and the private sector is key. AI's impact varies significantly across industries, which means a one-size-fits-all regulatory approach won't work. A good example of a tailored approach is the United Kingdom's AI Airlock Program.[1] It provides a safe space to experiment while learning lessons that can inform emerging regulatory frameworks. Its public-private collaborations have been successful and could serve as models for other areas.

Interviewer: Given lawsuits like those against OpenAI, where do you see the balance between fostering innovation and ensuring safety?

Jared Bowns: Litigation is inevitable as industries evolve. These early stages are often messy, and we've already seen copyright lawsuits, like those involving OpenAI. However, these cases also show the need for accountability and guardrails. For example, concerns about monopolistic behavior, when companies consolidate power by funding early-stage startups, could stifle innovation. Regulation must prevent monopolies without stifling the progress of smaller players. Companies must balance innovation with accountability to ensure fair competition and ethical outcomes.

Interviewer: Can you share examples of regulatory approaches in other industries that might apply to AI?

If we look at historical examples from the 1950s and 1960s, many of the most transformative technologies came from government-funded research or collaborations between the public and private sectors. Similarly, AI development may benefit from a mix of public and private investment that aligns innovation with societal goals.

Characterizing Predictive AI Versus Generative AI

Interviewer: What are the key differences between predictive AI and generative AI, and why is distinguishing between them important?

Jared Bowns: Predictive AI and generative AI might sound similar because they both fall under the AI umbrella, but their use cases are very different. Predictive AI focuses on analyzing historical data to forecast outcomes, like predicting customer churn or determining the likelihood of equipment failure. It uses historical data to identify patterns and calculate probabilities. Generative AI, on the other hand, creates new content—text, images, or simulations. While predictive AI excels in decision-making based on probabilities, generative AI is better suited for creative tasks like brainstorming or simulating possibilities. Using the right tool for the job is important; using the wrong type of AI for a specific problem may lead to poor results.

Interviewer: What unique challenges do predictive AI and generative AI present in terms of safety, reliability, and ethical deployment?

Jared Bowns: Predictive AI's biggest challenge is data quality. If the input data are biased, the outputs will reflect that, reinforcing stereotypes or even discriminating against specific groups. This is particularly problematic in regulated industries like finance or insurance. Generative AI faces its own challenges, such as hallucinations, where the model generates plausible-sounding but false or nonsensical content. [Another] concern is misuse—deepfakes and fabricated content are already prevalent on social media. Strong guardrails and careful oversight are needed for both types [of AI] to ensure safe deployment.

As AI systems gain influence in critical decisions, the push for fairness, accountability, and adaptable regulation becomes a shared responsibility across sectors. Image generated using OpenAI’s Dall-EInterviewer: How do misconceptions about AI determinism affect the way organizations choose and implement AI solutions?

Jared Bowns: A misconception [exists] that AI outputs are always accurate or truthful. This belief may lead to overreliance on AI, treating it as an infallible source of truth. In reality, AI systems operate on statistical probabilities and may make mistakes. Organizations must approach AI with the right mindset, using it as a tool for insights rather than as a definitive decision-maker.

Interviewer: Why is selecting the correct type of AI—predictive or generative—for a given task crucial?

Jared Bowns: Choosing the right tool for the job is essential. Predictive AI is suited for analyzing patterns and forecasting outcomes, while generative AI excels at creating unique content or summarizing large documents and text. Using the wrong type of AI may result in missed insights or poor performance, so understanding their strengths and weaknesses before deployment is important.

Regulating for Social Impact

Interviewer: What socioeconomic disparities might arise from widespread AI adoption?

Jared Bowns: AI is already reshaping industries at every level, from white-collar roles in law to blue-collar manufacturing jobs and customer service. In the legal field, AI is taking over tasks like research, contract review, and document drafting—work that traditionally provided entry-level employees with career growth opportunities.[2] Similarly, in manufacturing, investments in humanoid robots are transforming tasks like assembly and packaging, with examples like Amazon warehouses showcasing robots performing end-to-end operations without human involvement.

Even customer service is undergoing a revolution, with AI-powered chatbots handling increasingly complex interactions and tasks. As automation accelerates, entry-level roles are disappearing across industries, narrowing career pathways and deepening socioeconomic divides. To prevent a two-tiered economy where only those who can adapt to and leverage AI thrive, proactive measures like upskilling programs and equitable AI policies are crucial to ensuring that these advancements benefit society as a whole.

Interviewer: How can regulations ensure equitable access to AI's benefits?

Jared Bowns: Investing in workforce retraining and education is critical. Public funding for AI infrastructure, especially in underserved areas, can help bridge the gap. By expanding access to AI tools and creating opportunities for upskilling, we can ensure broader participation in the AI-driven economy.

Interviewer: How can regulations promote social good while preventing negative impacts on vulnerable populations?

Jared Bowns: Transparency and fairness must be at the core of regulations. For example, mandatory bias audits can ensure AI systems don't perpetuate discrimination. Subsidies for companies developing AI solutions that address social challenges, like improving healthcare or education, can also make a meaningful impact.

Ensuring Accountability

Interviewer: In light of recent events where AI has caused harm, how important is establishing clear accountability mechanisms?

Jared Bowns: Accountability in AI is a growing concern as these systems become more integrated into daily life. Recent incidents, like those involving Character.AI, highlight the risks of operating without clear mechanisms for assigning responsibility.[3] Simple measures, such as age restrictions, might help, but they don't address the deeper issue: When harm occurs, it's often unclear who is liable—the developer, trainer, infrastructure provider, or even the user.

Open-source models further complicate this, as no single entity may be accountable. Without clear frameworks, victims may have no recourse, eroding trust in AI systems. Governments and regulators must act swiftly to establish rules that mitigate harm, rebuild trust when failures occur, and prepare for the increasing role of AI in critical areas of life.

Interviewer: What steps can regulatory bodies take to ensure humans remain accountable for AI outcomes?

Jared Bowns: Transparency is key. Companies should be required to document how AI models are trained and deployed. Assigning roles like chief AI officer can also ensure oversight and accountability at the organizational level.[4]

For practical implementation, this could involve several steps:

  • Impact assessments: Evaluating the potential societal and environmental impacts of an AI system before its production release.

  • Robust testing frameworks: Developing frameworks where AI models are exposed to a wide range of scenarios to uncover potential issues before deployment. Borrowing from software engineering's concept of chaos engineering, we could simulate various failure scenarios to stress-test AI systems.

  • Red teaming: Many in the industry are already employing red teams, dedicated groups tasked with trying to make models behave in unintended or unethical ways. Codifying this process as part of the development pipeline could identify weaknesses before they reach the public.

Interviewer: What lessons can the industry learn from incidents where AI systems have caused harm?

Jared Bowns: Negative incidents highlight the importance of rigorous testing and monitoring. Continuous oversight and a commitment to ethical practices can prevent future harm and build trust in AI technologies.

Interviewer: Thank you for your time, Jared. This has been very informative.

Jared Bowns: Thank you. It's been a pleasure.


Conclusion

AI’s future will not be defined solely by its technical capabilities but by the systems of accountability, transparency, and equity that govern its use. As Jared Bowns highlights, responsible AI development demands more than innovation, it requires foresight, collaboration, and a deep commitment to mitigating harm. Whether you're building predictive models for infrastructure or deploying generative tools in customer-facing applications, regulation isn’t a barrier but a blueprint for trust.

References

[1] UK Department for Science, Innovation and Technology. AI Airlock: the regulatory sandbox for AIaMD. London: GOV.UK; 2024. Available from: https://www.gov.uk/government/collections/ai-airlock-the-regulatory-sandbox-for-aiamd

[2] Molly K. How AI Could Break the Career Ladder. Bloomberg. 2024 Nov 15. Available from: https://www.bloomberg.com/news/articles/2024-11-15/ai-replacing-entry-level-jobs-could-break-the-career-ladder

[3] Dan J. New Lawsuits Targeting Personalized AI Chatbots Highlight Need for AI Quality Assurance and Safety Standards. Natl Law Review. 2024 Jan 6. Available from: https://natlawreview.com/article/new-lawsuits-targeting-personalized-ai-chatbots-highlight-need-ai-quality-assurance

[4] Kelly J. The rise of the Chief AI Officer. Forbes. 2024 May 28. Available from: https://www.forbes.com/sites/jackkelly/2024/05/28/the-rise-of-the-chief-ai-officer/


This article marks the conclusion of our Confronting AI series. Throughout the series, we examined how AI is reshaping everything from data centers and embedded systems to ethics and regulation. But to truly confront AI is to go beyond its mechanics—it means asking where we apply it, whom it benefits, and what values it encodes. These are not abstract questions; they are engineering, policy, and societal challenges that demand ongoing reflection and responsible action.

The road ahead is uncertain, but our choices today—technical, ethical, and regulatory—will determine whether AI becomes a force for progress or division. Confronting AI, then, is not a one-time exercise but an ongoing responsibility.

This article was originally published in “Methods: Confronting AI,” an e-magazine by Mouser Electronics. It has been substantially edited by the Wevolver team and Ravi Y Rao for publication on Wevolver.

24,000+ Subscribers

Stay Cutting Edge

Join thousands of innovators, engineers, and tech enthusiasts who rely on our newsletter for the latest breakthroughs in the Engineering Community.

By subscribing, you agree to ourPrivacy Policy.You can unsubscribe at any time.