Evaluating AI feedback tools? Here are 7 critical data safety questions every HR leader should ask — and how Huckleberry answers each one.
You're evaluating AI-powered feedback platforms. Your team is excited about the product. But somewhere between the demo and the business case, someone's going to ask: "What happens to our people's data?"
Good. That's exactly the right question.
AI feedback tools collect some of the most sensitive information in your organization — how people work, where they struggle, what their peers really think. Before you sign anything, here are seven questions worth asking every vendor. And because we'd rather show our cards than ask you to take our word for it, here's how Huckleberry answers each one.
1. Where does the voice data actually go?
If the tool uses voice, audio is being captured somewhere. Ask where it goes, how long it's kept, and who can access it.
Our answer: Voice data never touches Huckleberry's servers. Calls are handled entirely by our voice partner, ElevenLabs, using their own encryption. Audio is retained for up to 30 days for quality purposes, then deleted. What your employees and managers see are AI-generated summaries, rephrased for constructive growth, with names and identifying details removed. Nobody at Huckleberry, at your company, or anywhere in the management chain ever hears the raw audio or sees the transcript.
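If you'd rather picture the design decision than take the description on faith, here's a toy sketch; the names are hypothetical and this isn't our production code. The point is structural: the stored record simply has no field for audio or a transcript.

```python
from dataclasses import dataclass

# Hypothetical sketch of a stored feedback record under this architecture.
# There is deliberately no audio or transcript field, because neither ever
# reaches the application's own storage; only the summary is persisted.

@dataclass(frozen=True)
class FeedbackSummary:
    recipient_id: str   # the person the feedback is for
    summary: str        # AI-generated, rephrased, identifying details removed
    created_at: str     # ISO 8601 timestamp

def persist_summary(record: FeedbackSummary, store: dict) -> None:
    """Persist the summary keyed by recipient. Raw audio never enters this path."""
    store.setdefault(record.recipient_id, []).append(record)

store: dict = {}
persist_summary(
    FeedbackSummary("u_123", "Communicates priorities clearly; could delegate more.",
                    "2025-06-01T10:00:00Z"),
    store)
```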
2. Is our data used to train your AI models?
If a vendor can't give you a clear answer here, that's your answer.
Our answer: No. Your data is never used to train AI models. We have explicit opt-outs configured with all our AI providers.
3. Who owns the feedback data?
This one matters more than people think. If the company owns everything, employees hold back. If nobody's thought it through, you've got a problem waiting to happen.
Our answer: The individual owns their feedback. Your company gets access to anonymized, aggregated team insights — not the raw feedback itself. When someone leaves, their personal profile goes with them. This was a founding design decision, not a policy we added later. We believe people share more honestly when they know the data is theirs.
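Here's the shape of that lifecycle as an illustrative sketch (hypothetical names, not our actual code): deleting the person removes everything keyed to them, while team aggregates were never keyed to individuals in the first place.

```python
# Toy sketch of the offboarding rule described above: the individual's
# profile and personal feedback are deleted, while the company keeps only
# anonymized, aggregated team insights, which contain no per-person records.

def offboard_employee(user_id: str, profiles: dict, team_aggregates: dict) -> None:
    # Remove everything keyed to the person: their profile and raw feedback.
    profiles.pop(user_id, None)
    # Team aggregates are untouched: nothing in them points back to an
    # individual, so there is nothing individual-level left to delete.

profiles = {"u_123": {"name": "A. Example", "feedback": ["..."]}}
team_aggregates = {"team_7": {"clarity_of_goals": 0.72}}
offboard_employee("u_123", profiles, team_aggregates)
assert "u_123" not in profiles
```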
"We use AI" isn't an answer. You need to know who handles what.
Our answer: Two AI providers handle core processing. ElevenLabs does voice and transcription (audio only). OpenAI does text analysis and summarization (text only — never audio). Both operate under data processing agreements, and all data is encrypted in transit. We're happy to share the full sub-processor list.
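To make the separation concrete, here's a minimal sketch with stubbed-out provider calls; the function names are ours for illustration, not real ElevenLabs or OpenAI APIs. The structural point is that the audio path and the text path never cross.

```python
# Minimal sketch of the provider separation described above, with stubs in
# place of the real SDK calls. Audio goes only down the voice path; the
# text-analysis path never accepts raw audio.

def voice_provider_transcribe(audio: bytes) -> str:
    return "transcript..."   # stand-in for the voice partner's API (TLS, DPA)

def text_provider_summarize(text: str) -> str:
    return "summary..."      # stand-in for the text provider's API (TLS, DPA)

def process_voice(audio_bytes: bytes) -> str:
    """Audio is handed only to the voice/transcription provider."""
    assert isinstance(audio_bytes, bytes)
    return voice_provider_transcribe(audio_bytes)

def process_text(transcript: str) -> str:
    """Text analysis only ever receives text, never raw audio."""
    assert isinstance(transcript, str), "text path rejects anything but text"
    return text_provider_summarize(transcript)

summary = process_text(process_voice(b"\x00fake-audio"))
```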
5. What happens when feedback contains sensitive details about other people?
Someone will mention a colleague by name. A specific incident. A team dynamic. What happens to that before anyone sees it?
Our answer: We're actively building AI that strips identifying details, third-party names, and toxic or unproductive content before feedback reaches anyone. Our goal is that what comes through is the growth signal, the insight that helps someone improve, without the parts that could cause harm. This is a hard problem and we're committed to getting it right rather than rushing it.
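Because the production model is still in development, here's a deliberately simple stand-in to make the idea concrete: a regex and deny-list redaction pass over a hypothetical roster. A real system would rely on trained models rather than pattern matching, but the before-delivery placement is the point.

```python
import re

# Toy redaction sketch: strip known colleague names and obvious identifiers
# before feedback reaches anyone. ROSTER is a hypothetical name list; the
# real system is an AI model, not a deny-list.

ROSTER = {"Priya", "Daniel", "Marta"}   # hypothetical colleague names

def redact(feedback: str) -> str:
    # Replace any roster name with a neutral placeholder.
    for name in ROSTER:
        feedback = re.sub(rf"\b{re.escape(name)}\b", "[a colleague]", feedback)
    # Strip obvious identifiers such as email addresses.
    feedback = re.sub(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b", "[email]", feedback)
    return feedback

print(redact("Daniel interrupted Priya twice; email him at dan@corp.com."))
# -> "[a colleague] interrupted [a colleague] twice; email him at [email]."
```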
6. How do you keep the AI from amplifying bias?
When AI summarizes feedback from multiple people, it can amplify existing biases if it's not designed carefully. This is worth asking about.
Our answer: We focus on behavioural patterns rather than subjective ratings — what someone does, not how one reviewer feels about them. By collecting input from managers, peers, direct reports, and others, it's structurally harder for any one person's bias to dominate. We haven't solved workplace bias (nobody has), but we've built the architecture to minimize where it can take hold.
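Here's a rough sketch of what that structure buys you, with hypothetical names and toy numbers: require corroboration across rater groups, and use a median so one extreme reviewer can't drag the signal.

```python
from statistics import median

# Illustrative sketch of multi-source aggregation: a behavioural pattern is
# only surfaced when it's corroborated by more than one rater group, and the
# score is a median, so a single extreme reviewer can't dominate.

def surface_pattern(observations: dict[str, list[float]], min_groups: int = 2):
    """observations maps rater group ('manager', 'peer', 'report') to scores."""
    groups_seen = [g for g, scores in observations.items() if scores]
    if len(groups_seen) < min_groups:
        return None   # uncorroborated by a second perspective: don't surface it
    all_scores = [s for scores in observations.values() for s in scores]
    return median(all_scores)   # robust to one outlier rater

print(surface_pattern({"manager": [4.0], "peer": [3.5, 4.5], "report": [1.0]}))
# -> 3.75: the single 1.0 outlier barely moves the result, where a plain
#    mean would be dragged down to 3.25.
```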
7. What security measures protect our data today?
Roadmaps are nice. But what protects your data right now?
Our answer: Encryption in transit and at rest, role-based access controls, audit logging, daily encrypted backups, a secure development lifecycle, background checks on staff, and an incident response plan. We're currently progressing through SOC 2 Type II certification, and we're happy to share the timeline and our full Security & Data Governance Summary with your IT team.
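For a flavor of two of those controls, here's a minimal sketch of role-based access checks paired with audit logging; the roles, actions, and identifiers are hypothetical, not our production configuration.

```python
import logging
from datetime import datetime, timezone

# Toy sketch of RBAC plus audit logging: every access attempt is checked
# against a role's permission set, and every attempt is logged either way.

logging.basicConfig(level=logging.INFO)
audit = logging.getLogger("audit")

PERMISSIONS = {
    "hr_admin": {"read_team_aggregates", "export_reports"},
    "manager":  {"read_team_aggregates"},
    "employee": {"read_own_profile"},
}

def access(user_id: str, role: str, action: str) -> bool:
    allowed = action in PERMISSIONS.get(role, set())
    # Log every attempt, allowed or denied: that's the audit trail.
    audit.info("%s user=%s role=%s action=%s allowed=%s",
               datetime.now(timezone.utc).isoformat(),
               user_id, role, action, allowed)
    return allowed

access("u_42", "manager", "export_reports")   # denied, but still logged
```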
Every HR leader and security team evaluating AI feedback tools should pause to ask hard questions. What happens to the data? Who sees it? How does the AI handle sensitive information? Could this backfire?
Those aren't objections. Those are exactly the right questions. Every vendor in this space should be able to answer them clearly, specifically, and without flinching.
We published this because trust architecture matters more than feature demos. Because if the guardrails aren't real, the product isn't either.