Published on
April 16, 2025

Will the real human in the loop please stand up?

Gina Clarkin
Product Marketing Manager
6 minutes


Even if you weren’t around in the 1950s, chances are you’ve heard some variation of this line from the classic television game show, To Tell the Truth. In the show, celebrity panelists were introduced to three contestants, each claiming to be the same person. The real person had to tell the truth, while the impostors could lie. After a round of questioning, the panelists voted on who they believed was the real deal. The big reveal—often a surprise—delivered the show's signature drama. The format proved compelling enough to last through multiple revivals, staying on air in various forms until 2022.

Today, a new version of this guessing game is playing out in the world of generative AI for customer service. But unlike the lighthearted game show, the stakes here are much higher—especially for companies whose competitive edge depends on delivering outstanding customer experiences.

What is a HILA?

HILA, or Human-in-the-Loop Assistance, is a concept rooted in the AI/ML world that is now critical to generative AI contact center solutions. As providers race to bring AI agents into customer-facing roles, there’s an understandable focus on mitigating risk—ensuring responses are safe, compliant, and on-brand.

There are legitimate challenges to overcome for generative AI to truly revolutionize the contact center, and ensuring generative AI agent success with human-in-the-loop is part of this transformation. 

There are multiple interpretations of HILA, but the common thread is clear: AI should assist humans, not the other way around. HILA is about amplifying human decision-making—not requiring it for every single task.

Let’s Meet Our Contestants

HILA #1

This version of HILA prioritizes safety by requiring human agents to approve AI-generated responses before they reach customers. While this approach ensures accuracy and compliance—particularly valuable when monitoring new or sensitive intents—it also introduces friction. The approval workflow adds delays, increases handling time, and reduces the cost-efficiency of automation, even for interactions that the AI could likely manage reliably on its own. 
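For concreteness, the approve-before-send gate this first HILA describes could be sketched roughly as follows. This is a hypothetical illustration, not GenerativeAgent's implementation; the function names and the intent allowlist are invented for the example.

```python
# Hypothetical approve-before-send gate: every AI draft waits for a human
# verdict before reaching the customer, trading latency for safety.

import queue

review_queue: "queue.Queue[tuple[str, str]]" = queue.Queue()

def propose(draft: str, intent: str) -> None:
    """AI submits a draft reply, tagged with its intent, for human review."""
    review_queue.put((intent, draft))

def review(approved_intents: set[str]) -> list[str]:
    """Human reviewer (simulated here by an intent allowlist) clears each
    queued draft; only approved drafts are released to the customer."""
    sent = []
    while not review_queue.empty():
        intent, draft = review_queue.get()
        if intent in approved_intents:
            sent.append(draft)  # approved: release to the customer
    return sent

propose("Your refund has been issued.", "refund_status")
propose("I can cancel your account now.", "account_cancel")
sent = review({"refund_status"})  # human clears only the routine intent
```

Note the trade-off the paragraph describes: every reply, even a routine one, sits in the queue until a human acts on it.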

HILA #2

Here, AI handles interactions until it encounters a scenario it can't confidently resolve—like comparing billing cycles or navigating nuanced policy exceptions. At that point, it hands off to a human, typically with a detailed summary to streamline the transition. While helpful in certain situations, this method can degrade the customer experience by introducing a forced transfer and increasing reliance on more expensive human agents.

HILA #3

There’s something different about this HILA: it’s designed to collaborate flexibly with the AI agent, stepping in behind the scenes when necessary, unblocking the AI, and then letting the AI carry the interaction to a successful resolution.

So, who is the real human in the loop?

Okay, so we spoiled the surprise. But imagine flipping the script: AI leads the interaction, and humans support it—stepping in with guidance or approval when necessary, but without taking over the conversation. No hard handoffs. No context lost. Just continuous collaboration behind the scenes.

This is the GenerativeAgent HILA paradigm: the human-in-the-loop agent. Human agents supporting AI supporting customers.

Let that sink in.

By putting humans in a supporting role—where their judgment elevates the AI rather than replaces it—we unlock scalable, safe automation while maintaining high-quality customer interactions.

How ASAPP’s GenerativeAgent® leverages HILA

Here are the approaches we take with GenerativeAgent to put the HILA model into practice—blending human input with AI automation to support better outcomes.

Real-Time Human-AI Collaboration: GenerativeAgent reaches out to human agents in real time when it needs clarity, input, or permission—without handing off the conversation. This preserves continuity and keeps the interaction fully automated from the customer’s point of view.
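The consult-without-handoff pattern described above can be sketched in miniature. Everything here is hypothetical, class names, the policy question, and the callback are invented for illustration, and none of it reflects GenerativeAgent's actual API; the point is only that the human answers a narrow question while the AI keeps ownership of the conversation.

```python
# Hypothetical sketch of the consult pattern: the AI keeps the conversation
# and pauses only the blocked step while a human answers a targeted question.

from dataclasses import dataclass, field

@dataclass
class Conversation:
    customer_id: str
    transcript: list[str] = field(default_factory=list)

def handle_turn(convo: Conversation, draft_reply: str, needs_human: bool,
                ask_human) -> str:
    """AI produces a draft reply; if it is blocked, it consults a human in
    the background instead of transferring the whole conversation."""
    if needs_human:
        # Human sees one targeted question, not the entire interaction.
        answer = ask_human("Is this fee waiver within policy?")
        draft_reply = f"{draft_reply} (per policy: {answer})"
    convo.transcript.append(draft_reply)  # the AI still owns the conversation
    return draft_reply

# Usage: the human answers a single question; the customer never changes channels.
convo = Conversation(customer_id="c-123")
reply = handle_turn(convo, "Yes, I can waive that fee", True,
                    ask_human=lambda q: "approved up to $25")
```

The customer-facing transcript stays continuous; from the customer's point of view, the interaction remains fully automated.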

Agent-Centered Workspace Design: A next-gen interface gives human agents relevant context, smart summaries, and intuitive tools. It captures their tacit knowledge and enables new ways of collaborating with AI—without burdening them with repetitive tasks.

Continuous Optimization: GenerativeAgent captures human decision rationale to learn and improve over time, elevating automation’s potential for your contact center without additional configuration effort.

A workspace built for this new paradigm

This new model—humans assisting AI assisting customers—requires a different kind of workspace. Not one built around handling conversations, but one designed to support judgment calls.

ASAPP developed its HILA experience through deep research with contact center agents and direct input from enterprise customers. The result: an interface designed specifically for human-in-the-loop agents, enabling fast, fluid collaboration with AI in a streamlined workflow.

Flexible, role-based human-in-the-loop design

GenerativeAgent lets you define the role of the human in the loop and decide how they support AI across your workflows. You stay in control of which intents AI handles, when and how it should seek human help, and what happens when something goes wrong.

Why it matters for your contact center strategy

When GenerativeAgent HILA is built into the core of your AI strategy, it changes what’s possible, giving contact centers practical advantages that make scaling automation safer, smarter, and more manageable. With GenerativeAgent HILA, contact centers can:

  • Expand automation safely: Define which intents the AI should handle and exactly what happens when it hits a roadblock.
  • Improve AI performance: Leverage human feedback to continuously optimize GenerativeAgent’s responses and decision-making.
  • Increase capacity while maintaining quality: Reduce reliance on expensive human agents by letting GenerativeAgent handle complex interactions, with intelligent human support as needed.
  • Accelerate adoption and expansion: Bridge gaps in data, policy, and tooling with real-time human support—so you can roll out automation faster and at greater scale.

Common scenarios for human support

  • Knowledge or system gaps: When GenerativeAgent lacks access to certain data or APIs, a human agent can fill in the blanks—or directly execute system tasks AI can't access.
  • Authorization rules: For sensitive actions—like offering discounts or closing accounts—humans can provide explicit approvals based on company policy.
  • Customer request: If a customer asks for a human, you can choose to transfer—or have the AI consult a human to keep the interaction moving, especially useful when queues are long.
  • System-initiated triggers: When something’s unclear (e.g., API errors, ambiguous data), GenerativeAgent can ask a human for help mid-interaction—resolving the issue without starting over.
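As a rough sketch of how these scenarios might map to actions, consider the routing logic below. The trigger names, the policy table, and the queue-wait heuristic are all invented for illustration; they are not GenerativeAgent configuration options.

```python
# Hypothetical routing sketch: for each trigger, decide whether to transfer
# the conversation to a human or consult a human while the AI keeps ownership.

CONSULT = "consult"    # human answers a question; AI continues
TRANSFER = "transfer"  # conversation moves to a human agent

POLICY = {
    "knowledge_gap": CONSULT,     # human fills in missing data or runs a task
    "authorization": CONSULT,     # human approves the sensitive action
    "customer_request": CONSULT,  # configurable: could also be TRANSFER
    "system_error": CONSULT,      # e.g. API errors, ambiguous data
}

def route(trigger: str, queue_wait_minutes: int) -> str:
    """Return how a human should be involved for a given trigger.
    When queues are long, prefer consulting so the customer keeps moving."""
    action = POLICY.get(trigger, TRANSFER)  # unknown triggers fail safe
    if trigger == "customer_request" and queue_wait_minutes < 2:
        return TRANSFER  # short queue: honor the customer's request directly
    return action
```

The design choice worth noting is the fail-safe default: anything the policy table does not recognize goes to a human outright rather than letting the AI improvise.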

This kind of flexible, consultative support model helps organizations adopt automation faster and expand its use across more complex workflows—without compromising on control or quality.

Final thoughts

Generative AI is poised to reshape the contact center—but unlocking its full potential requires a smarter approach to human-in-the-loop collaboration. The HILA model ensures your AI is not only ready for production today, but also able to continuously improve and scale tomorrow.

As the technology evolves, so does the role of the human agent. It’s no longer about choosing between AI or people—it’s about empowering both to do what they do best. With the right human-in-the-loop strategy, companies can find the ideal balance between automation and judgment, leading to more efficient operations and better customer experiences.


About the author

Gina Clarkin
Product Marketing Manager

Gina Clarkin is a product marketing manager at ASAPP. She works to bring advanced technologies to market that help companies better solve real-world problems. Prior to joining ASAPP, she honed her product marketing craft at tech companies with firmware, wireless, and contact center solutions.

