At the November MarTech Conference, Tim Hillison, founder and CMO of Entry Point 1, moderated a session called “Consent, compliance and trust in the AI age,” a practical discussion at the intersection of speed, regulation and trust.
He was joined by Jeanne Jennings, CEO of Email Optimization Shop; Neil Jennings, lawyer and founder of GLF Strategic Compliance; and Chris Black, a fractional marketing-operations leader — together unpacking privacy-first personalization, dynamic consent management and how to balance performance with responsibility as the rules keep evolving.
The conversation began with a live poll that set the tone for the session. When attendees were asked which challenge kept them up at night — AI consent compliance, changing regulations or performance versus privacy — most chose the last: performance versus privacy. The tension between delivering hyper-relevant, AI-driven experiences and maintaining ethical, compliant marketing practices is clearly top of mind.
Keeping humans in the loop
The panel’s first challenge was how to preserve human judgment when algorithms make more of the calls. Jeanne Jennings cautioned that AI is a tool, not a value system. “It’s not a substitute for good marketing judgment,” she said. Her example, an email program where executives received content irrelevant to them while the vendor insisted the algorithm must be right, captured a common risk. “When someone says the AI is right and the people are wrong, that’s when trust breaks,” she explained. The solution, she said, lies in regular human oversight: QA, spot-checking and good old-fashioned gut checks.
Neil Jennings, approaching from a governance lens, agreed this is as much about culture as code. “The algorithm isn’t always right,” he said. “We need QA processes and governance to identify bias and ensure accountability.” He noted that manipulation isn’t one tidy legal concept; it spans user interfaces, algorithmic bias and systemic transparency.
Trust: Feeling or metric?
When asked whether trust should be treated as an emotion or as a measurable outcome, the panelists agreed it must be both. Neil Jennings described it as “a consumer expectation and a legal outcome,” noting the rising cost of getting it wrong — from Facebook’s Cambridge Analytica fallout to the FTC’s fines against brands like Experian and Honda. Compliance is measurable in penalties and lawsuits; trust, he argued, is measurable in brand value and customer retention. Jeanne Jennings framed compliance as “the floor,” with trust as “the ceiling.” Her experience in email marketing has shown that going beyond legal minimums pays off in deliverability and loyalty.
“You’ll know when you’ve lost trust,” she said, “because the metrics will tell you.”
Black extended this view, pointing out that trust now lives across the entire organization, not just marketing.
“Prospects don’t just see our website and emails anymore,” he said. “They see Glassdoor reviews, G2 ratings, screenshots and social sentiment — all exposed by AI in real time.” That transparency, he argued, demands a company-wide commitment to consistency between what a brand says and how it behaves.
Turning consent into a living system
The discussion turned to operationalizing consent as a dynamic, real-time process.
“We’ve had the checkbox for years,” said Black. “Now consent is a live signal.”
He explained that marketing stacks often interpret consent differently — one system halts all emails, another only stops marketing messages, while ads continue to follow the user online. The outcome, he said, is a compliance paradox: technically legal, but trust-negative.
To solve this, Black urged cross-functional alignment on what each consent state actually means and how it triggers or suppresses actions across all systems. He described new dashboards that help visualize trust by unifying data from CRM, automation and customer success platforms. With today’s APIs and server integrations, he said, “we can finally see where consent breaks down.”
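What that alignment can look like in code is a shared definition of consent states that every system reads the same way. The sketch below is purely illustrative, in TypeScript; the state names, channels and policy values are hypothetical, not drawn from any stack the panel described:

```typescript
// Hypothetical consent states a cross-functional team might agree on.
// Names and rules are illustrative, not from any specific platform.
type ConsentState =
  | "full_opt_in"        // marketing, transactional and ad targeting allowed
  | "transactional_only" // receipts and account notices only
  | "opted_out";         // suppress everything except required notices

interface ChannelPolicy {
  marketingEmail: boolean;
  transactionalEmail: boolean;
  adRetargeting: boolean;
}

// One shared source of truth, so email, ads and CRM can't each
// interpret the same checkbox differently.
const CONSENT_POLICY: Record<ConsentState, ChannelPolicy> = {
  full_opt_in:        { marketingEmail: true,  transactionalEmail: true, adRetargeting: true  },
  transactional_only: { marketingEmail: false, transactionalEmail: true, adRetargeting: false },
  opted_out:          { marketingEmail: false, transactionalEmail: true, adRetargeting: false },
};

// Every downstream system asks the same question the same way.
function isAllowed(state: ConsentState, channel: keyof ChannelPolicy): boolean {
  return CONSENT_POLICY[state][channel];
}

// Example: an ad-platform sync checks consent before adding a user to an audience.
if (!isAllowed("transactional_only", "adRetargeting")) {
  console.log("Suppress: user has not consented to ad retargeting.");
}
```

The design point is the single lookup table: the paradox Black described, where one system halts all email while another keeps retargeting, usually arises because each system encodes its own private version of this mapping.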
Neil Jennings added a governance perspective, identifying four essentials for real-time compliance: technical reliability, transparency, autonomy and necessity. “If you can answer ‘yes’ to all four,” he said, “you’re building trust the right way.” Jeanne Jennings echoed this sentiment, emphasizing that companies must use AI to scale governance, not replace it.
Balancing speed and responsibility
One recurring theme was that governance doesn’t slow progress — it enables it. Neil Jennings argued that leaders need to understand their organization’s appetite for risk and avoid “AI FOMO.” Many firms, he said, treat fines as a cost of doing business, but that approach is unsustainable as penalties escalate. The goal is to distinguish between what AI can do and what it should do. Jeanne Jennings added that over-delegating authority to AI erodes accountability: “You can’t outsource responsibility.”
The conversation highlighted that compliance is no longer a checkbox but a competitive advantage. “It’s how you build confidence,” Hillison summarized. “When teams can move fast without cutting corners, you build trust that compounds.”
Beyond compliance: Trust as brand equity
Asked how brands can move beyond compliance, Black offered a striking metaphor: “Trust isn’t the ceiling — it’s the moat.”
He pointed to Apple as an example of how responsible data practices build loyalty even when innovation slows. “That trust buys time, forgiveness and brand equity,” he said. Jeanne Jennings agreed, adding that marketers should ask, “Is it smart to do this?” rather than simply, “Is it legal?” Her example, a B2B campaign that openly asked for industry information rather than inferring it, illustrated how transparency can strengthen relationships instead of risking them.
Measuring trust in practice
In the audience Q&A, participants asked how to quantify something as intangible as trust. Neil Jennings simplified it: “Compare what you told people you’d do with what you actually did. Shrink that gap.”
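One rough way to operationalize that advice, offered here as a sketch rather than anything the panel prescribed, is to audit each public commitment against observed behavior and track the share that were broken. The record shape and field names below are hypothetical:

```typescript
// Hypothetical audit record: a promise made to customers and whether
// behavior matched it. Illustrative only; not from the panel.
interface Commitment {
  promise: string; // what you told people you'd do
  kept: boolean;   // what you actually did
}

// Trust gap = share of commitments not kept. Lower is better.
function trustGap(commitments: Commitment[]): number {
  if (commitments.length === 0) return 0;
  const broken = commitments.filter((c) => !c.kept).length;
  return broken / commitments.length;
}

// Example: two of three commitments kept yields a gap of about 0.33.
const audit: Commitment[] = [
  { promise: "Unsubscribes take effect within 24 hours", kept: true },
  { promise: "We never sell personal data", kept: true },
  { promise: "Ad targeting stops when you opt out", kept: false },
];
console.log(trustGap(audit)); // 0.333...
```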
Jeanne Jennings added a more personal guideline: “Treat your customers the way you’d want your family treated.”
Another question raised whether AI note-taking tools were safe for executive meetings. Both Black and Neil Jennings warned of discoverability risks and urged companies to establish clear governance policies before deploying them in sensitive contexts.
The road ahead
In closing, the panel predicted that trust will soon be both more measurable and more human.
Black envisioned it being tracked in CRMs like a performance metric, while Jeanne Jennings foresaw a widening gap between brands that use AI as a conduit for human connection and those that delegate responsibility to it. Neil Jennings predicted increased scrutiny of AI vendors and backend algorithms as regulators evolve beyond surface-level UI concerns.
Ultimately, the group agreed that the future of marketing isn’t about eliminating risk — it’s about earning confidence at the speed of change. The brands that win won’t be those that move fastest, but those that move fastest with integrity.