Research Assistant: Understanding our AI-Powered Survey Analysis

Learn about sentiment scoring, bias reduction, and insight extraction


Our Research Assistant feature uses cutting-edge artificial intelligence to analyze freeform text responses from your surveys. This article explains how this powerful tool works behind the scenes, helping you gain valuable insights from your customer feedback.

How It Works

1. Survey Response Collection

The process begins with the collection of freeform text responses from surveys answered on your website, in your app, or via email. These responses can cover a wide range of topics and experiences.

2. AI-Powered Analysis

We use Anthropic's advanced language model to analyze these survey responses. The AI acts as an expert research assistant, able to understand and interpret human language in context.
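
If you're curious what this looks like in practice, here is a minimal sketch of one call to Anthropic's Messages API using the official Python SDK. The model name, prompt wording, and parameters shown are illustrative assumptions, not the exact configuration Research Assistant uses internally.

```python
# Minimal sketch only: model name, prompt, and parameters are assumptions
# for illustration, not Research Assistant's internal configuration.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

message = client.messages.create(
    model="claude-3-5-sonnet-latest",  # assumed model name for illustration
    max_tokens=50,
    system=(
        "You are an expert research assistant. Classify the sentiment of the "
        "survey answer as P (positive), N (negative), M (mixed) or U (unsure), "
        "followed by a confidence score between 0.00 and 1.00."
    ),
    messages=[
        {"role": "user", "content": 'Survey answer: "The new dashboard saves me a lot of time."'},
    ],
)

print(message.content[0].text)  # e.g. "P 0.95"
```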

3. Sentiment Analysis

At the core of the analysis is sentiment determination. The AI reads each answer and categorizes it as:

  • Positive (P): The response expresses satisfaction or a good experience

  • Negative (N): The response expresses dissatisfaction or a poor experience

  • Mixed (M): The response contains both positive and negative elements

  • Unsure (U): The sentiment is unclear or the question doesn't lend itself to sentiment analysis
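
These four categories map naturally onto a small, fixed label set. The sketch below shows one way such labels might be represented and parsed in Python; the helper function and fallback behavior are illustrative assumptions, not Research Assistant's actual code.

```python
from enum import Enum

class Sentiment(Enum):
    POSITIVE = "P"
    NEGATIVE = "N"
    MIXED = "M"
    UNSURE = "U"

def parse_label(raw: str) -> Sentiment:
    """Map a raw one-letter label (e.g. from a model reply) to a Sentiment.

    Falls back to UNSURE when the label is unrecognized, mirroring the idea
    of not forcing a sentiment onto an unclear answer.
    """
    try:
        return Sentiment(raw.strip().upper()[:1])
    except ValueError:
        return Sentiment.UNSURE

print(parse_label("p"))   # Sentiment.POSITIVE
print(parse_label("??"))  # Sentiment.UNSURE
```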

4. Context-Aware Interpretation

The AI considers the context of each question when analyzing the responses. This means it's not just looking for positive or negative words, but understanding the meaning behind the responses in relation to what was asked.
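
A simple way to picture this is that the survey question travels with the answer. The sketch below, with assumed prompt wording and a hypothetical helper function, shows how a question and its freeform answer might be combined so the sentiment is judged against what was actually asked.

```python
def build_analysis_prompt(question: str, answer: str) -> str:
    """Combine the survey question and the freeform answer into one prompt,
    so sentiment is judged in the context of what was asked (hypothetical helper)."""
    return (
        f"Survey question: {question}\n"
        f"Respondent's answer: {answer}\n\n"
        "Considering what the question asked, classify the answer's sentiment "
        "as P, N, M, or U and give a confidence score between 0.00 and 1.00."
    )

# The same words can carry different sentiment depending on the question asked:
print(build_analysis_prompt("What would you improve?", "Faster exports, please."))
```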

5. Confidence Scoring

For each sentiment classification, the AI provides a confidence score between 0.00 and 1.00. This indicates how certain the AI is about its interpretation. Higher scores (closer to 1.00) indicate higher confidence.
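
One practical use of these scores is deciding which classifications to accept automatically and which to send to a person. The sketch below uses made-up result records and an assumed threshold of 0.70; choose a cut-off that matches your own tolerance for error.

```python
# Hypothetical result records: (answer, label, confidence), as described above.
results = [
    ("Love the new reports!", "P", 0.97),
    ("It's fine, I guess.", "M", 0.58),
    ("N/A", "U", 0.41),
]

REVIEW_THRESHOLD = 0.70  # assumed cut-off; tune to your own tolerance

needs_human_review = [r for r in results if r[2] < REVIEW_THRESHOLD]
auto_accepted = [r for r in results if r[2] >= REVIEW_THRESHOLD]

print(f"{len(auto_accepted)} accepted, {len(needs_human_review)} flagged for review")
```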

Ensuring Accuracy and Minimizing Bias

We've implemented several measures to ensure accurate results and minimize bias:

  1. Independent Analysis: Each question/response pair is considered independently, preventing previous responses from influencing the analysis of subsequent ones (see the sketch after this list).

  2. Confidence Transparency: By providing confidence scores, we acknowledge that some interpretations may be less certain than others. This allows for human review of low-confidence results.

  3. Nuanced Interpretation: The system recognizes subtle cues. For example, expressions of wanting something better are considered negative if the current experience isn't meeting those expectations.

  4. "Unsure" Category: When a question doesn't lend itself to sentiment analysis, the AI uses the "Unsure" category instead of forcing a sentiment.

Limitations and Best Practices

While Research Assistant is highly advanced, it's important to remember:

  1. AI interpretation, while sophisticated, is not perfect.

  2. The confidence scores provided allow you to gauge the reliability of each interpretation.

  3. For best results, we recommend using Research Assistant as part of a broader research and analysis strategy, combining AI insights with human expertise.

Our Research Assistant feature offers a powerful way to gain rapid, consistent insights from large volumes of survey responses. By leveraging advanced AI technology, we provide you with valuable understanding of customer sentiment and experiences, helping you make data-driven decisions to improve your products and services.

For more information or assistance with using the Research Assistant feature, please contact our support team.
