Vibe Querying with MCP - Episode 2: Vibing with Product Managers - Analyzing Support Cases to Build a Roadmap

by Marie Forshaw, Jerod Johnson | May 28, 2025


Watch how a product manager uses natural language queries with CData MCP Servers and Salesforce data to identify the root causes of customer support cases and prioritize development work to build a roadmap.

Welcome to a new era of data exploration: Vibe Querying with MCP. This content series is here to show business professionals how conversational AI and MCP technology can help effortlessly access, analyze, and interpret real-time business data.

Watch Now: Vibe Querying with MCP - Episode #2

In our second episode, "Vibing for Product Management," hosts Jerod and Marie are joined by Jaclyn Wands, Director of Product Management for CData’s Sync and DBAmp products, for a deep-dive analysis of support case data using CData's MCP Server for Salesforce.

Introducing MCP, CData MCP Servers, and Vibe Querying

Model Context Protocol (MCP) is a game-changing protocol designed to securely and efficiently link AI models with external business data sources and tools. With MCP, users chat naturally with their data, seamlessly blending insights into everyday decisions.

CData MCP Servers supercharge MCP by hooking it up with over 350 business data sources. This means AI models like Claude can effortlessly pull and analyze real-time business data, cutting out all the usual hassle of traditional reporting and BI.

Vibe Querying is your new best friend—it uses natural language with AI to explore business data intuitively, without bogging you down in the technical details of data schemas, SQL, APIs, or perfectly curated prompts. Quick, natural, and instantly actionable—perfect for product managers and business pros alike.

Build a product roadmap with Claude: Goal of the episode

Jaclyn’s goal is to analyze support cases for the CData Sync product (an ETL/ELT product) and identify where we should focus our development efforts to have the greatest impact on customers. In this episode, we demonstrate how product managers can transform customer support analysis from a manual, time-intensive process into conversational data exploration.

The setup: Understanding customer support architecture

Before diving into queries, Jaclyn explained her data landscape: "Our customers email us directly at [email protected]. Our support teams watch specific inboxes for our products, and engineers work tickets directly in Salesforce. Everything is connected—support tickets link to accounts, opportunities, Jira tickets, and even solution engineer notes."

This interconnected data structure in Salesforce becomes the perfect playground for vibe querying, allowing us to explore relationships between support patterns, customer accounts, and product issues.
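
To make that interconnected structure concrete, here's roughly the flavor of SQL a CData MCP Server for Salesforce could run across those linked objects. The Case, Account, and Opportunity objects and keys below follow standard Salesforce schema; the Jira links and solution engineer notes live in org-specific custom fields, so they're left out of this sketch.

```python
# A rough sketch (not the actual query from the episode) of the SQL
# an MCP server could execute over the linked Salesforce objects.
# Standard objects and keys only; custom Jira/SE-note fields vary
# by org and are omitted.
CASE_CONTEXT_SQL = """
SELECT c.CaseNumber, c.Subject,
       a.Name AS AccountName,
       o.Name AS OpportunityName
FROM [Case] c
JOIN Account a ON c.AccountId = a.Id
LEFT JOIN Opportunity o ON o.AccountId = a.Id
"""
```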

The queries and insights

Note: Any numbers and customer names presented below have been obfuscated (by a very neat Claude instruction).

Query 1: The foundation - getting all support cases

Like any good data analysis, Jaclyn started with the broadest possible view to establish a comprehensive baseline.

The ask: "Pull all support cases for CData Sync from 2024. Do not limit yourself to top 20s and top 10s, but actually pull all the data for CData Sync support cases."

What Claude did:

  • Explored available Salesforce tables and identified the Cases table
  • Found cases where the group equals "sync support"
  • Retrieved all cases from 2024 and 2025
  • Began analyzing patterns across 1,400+ support cases

The insight: Claude immediately started categorizing common error types, identifying patterns like authentication errors (500+ cases) and connection errors (1,462 cases). This broad foundation gave us the complete picture needed for deeper analysis.
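
That categorization step is easy to picture in code. Here's a minimal sketch, in the spirit of the NLTK script Jaclyn mentions later in the episode, of bucketing case subjects by keyword—the buckets and keywords here are illustrative assumptions, not the actual categories Claude derived.

```python
from collections import Counter

# Illustrative keyword buckets -- assumptions for this sketch, not
# the actual categories Claude derived from the real case data.
ERROR_BUCKETS = {
    "authentication": ("auth", "login", "credential", "permission"),
    "connection": ("connection", "timeout", "refused", "unreachable"),
}

def categorize(subject: str) -> str:
    """Assign a case subject to the first bucket whose keywords match."""
    lowered = subject.lower()
    for bucket, keywords in ERROR_BUCKETS.items():
        if any(k in lowered for k in keywords):
            return bucket
    return "other"

# Toy sample standing in for the 1,400+ cases pulled from Salesforce.
subjects = [
    "OAuth credential expired for SQL Server connection",
    "Connection timeout syncing to Snowflake",
    "How do I schedule a job?",
]
print(Counter(categorize(s) for s in subjects))
# Counter({'authentication': 1, 'connection': 1, 'other': 1})
```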

Query 2: Drilling into configuration issues

With the foundation established, Jaclyn zeroed in on one of the most promising areas for quick wins—configuration problems that could be solved with simple defaults.

The ask: "Focus on default configuration improvements. What connectors are they related to? What are the configurations by connector? Give me example tickets and the number of tickets per configuration."

What Claude did:

  • Analyzed configuration-related support cases by data source
  • Identified SQL Server (a data source for ETL/ELT) as having the most authentication issues
  • Found Windows authentication as a primary pain point
  • Quantified the impact: 32 tickets per year for this single issue
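
Under the hood, that breakdown is just a grouped count. A minimal sketch, assuming each case has already been tagged with a connector and a configuration category (both tags are hypothetical stand-ins for whatever Claude extracted from the real cases):

```python
from collections import Counter

# Toy case records -- the connector/config tags are hypothetical.
cases = [
    {"connector": "SQL Server", "config": "Windows authentication"},
    {"connector": "SQL Server", "config": "Windows authentication"},
    {"connector": "SQL Server", "config": "service account permissions"},
    {"connector": "Salesforce", "config": "redirect URL"},
]

# Count tickets per (connector, configuration) pair, most common first.
by_config = Counter((c["connector"], c["config"]) for c in cases)
for (connector, config), n in by_config.most_common():
    print(f"{connector}: {config} -- {n} ticket(s)")
```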

The insight: Simple configuration defaults could eliminate significant support volume. SQL Server Windows authentication, service account permission validation, and redirect URL detection emerged as quick wins.

Key finding: "Windows authentication as a primary default would improve the experience. Service account permission validation... Us being able to detect things like redirect URLs and actual permissions upfront in Sync would actually reduce the back and forth that users have with connections," shared Jaclyn.

Query 3: Solution development and impact analysis

Moving from problem identification to actionable solutions, Jaclyn demonstrated how to get development-ready specifications directly from the data analysis.

The ask: "For SQL Server, export an example case for SQL Server Windows authentication, show the problem, identify the exact fix, and outline the development work required to update the primary default."

What Claude did:

  • Found specific customer examples from support cases
  • Analyzed the technical problem and current workarounds as identified in the support case resolution
  • Outlined required UI component updates and backend changes
  • Estimated development effort: 2-3 sprints for implementation
  • Calculated impact: Would eliminate 32 support tickets annually

The insight: Claude provided actionable development specifications, complete with effort estimates and business justification—everything needed to create actual development tickets.
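
As a back-of-the-envelope sanity check on that business justification, the arithmetic is simple. In this sketch only the 32 tickets per year comes from the episode; the per-ticket hours and sprint length are placeholder assumptions:

```python
# Back-of-the-envelope payback estimate. Only TICKETS_PER_YEAR comes
# from the episode; every other number is a placeholder assumption.
TICKETS_PER_YEAR = 32    # from the support-case analysis
HOURS_PER_TICKET = 8.0   # hypothetical: cross-team time per ticket
DEV_HOURS = 3 * 80       # hypothetical: ~3 sprints of one engineer

hours_saved_per_year = TICKETS_PER_YEAR * HOURS_PER_TICKET
payback_years = DEV_HOURS / hours_saved_per_year
print(f"~{hours_saved_per_year:.0f} hours saved/year, "
      f"payback in ~{payback_years:.1f} years")
# ~256 hours saved/year, payback in ~0.9 years
```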

Query 4: Strategic prioritization

The final piece of the puzzle was organizing all the identified opportunities into an actionable roadmap that balances effort with impact. As Jaclyn shared, no R&D team has infinite resources, so the key to a good product roadmap is identifying where to spend effort for the highest impact.

The ask: "Going back to your original analysis, rank all work from low effort/high impact, medium effort/high impact, and high effort/high impact."

What Claude did:

  • Categorized all identified solutions by effort and impact
  • Provided ticket reduction estimates for each category
  • Organized fixes from immediate wins to strategic initiatives

The results:

Low effort, high impact (start here):

  • Default timeout configuration: 89 cases, 2 weeks effort, 25% ticket reduction
  • Automatic trial license extension: 156 cases, 1 week effort, 15% ticket reduction
  • SQL Server Windows authentication default: 32 cases, 3 weeks effort, 8% ticket reduction

Medium effort, high impact:

  • Intelligent memory management: 67 cases, 6 weeks effort, 18% ticket reduction
  • Batch size optimization: 45 cases, 4 weeks effort, 12% ticket reduction

High effort, high impact:

  • Advanced license management platform: 201 cases, 12 weeks effort, 35% ticket reduction
  • Communication management platform: 134 cases, 10 weeks effort, 28% ticket reduction
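
The bucketing itself is simple enough to reproduce in a few lines. A sketch using the figures above, with hypothetical week thresholds standing in for however Claude drew the tiers:

```python
# Reproduce the three effort buckets from the figures above. The
# week thresholds are assumptions that happen to fit this data.
fixes = [
    ("Default timeout configuration", 89, 2),
    ("Automatic trial license extension", 156, 1),
    ("SQL Server Windows authentication default", 32, 3),
    ("Intelligent memory management", 67, 6),
    ("Batch size optimization", 45, 4),
    ("Advanced license management platform", 201, 12),
    ("Communication management platform", 134, 10),
]

def effort_tier(weeks: int) -> str:
    if weeks <= 3:
        return "low effort"
    if weeks <= 6:
        return "medium effort"
    return "high effort"

for name, cases, weeks in sorted(fixes, key=lambda f: f[2]):
    print(f"{effort_tier(weeks):>13}: {name} ({cases} cases, {weeks} wk)")
```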

The power of "purposeful prompting"

Throughout the episode, Jaclyn emphasized a key technique that separates effective vibe querying from casual AI interaction: "purposeful prompting," her term for being very specific about what you want and how to validate it:

"I need you to go look for this and return it, and then return where you found it so I can check your work. So it almost takes all of that legwork out from me and over to Claude, allowing me to make very specific decisions based on the output data instead of having to clean the data, review the data, and then make the decision."

Advanced technique: Pre-mapping your data

In our bonus segment, Jaclyn revealed a powerful technique for making future vibe querying sessions more efficient and accurate: creating reusable "table maps" that accelerate subsequent queries.

The setup query: "We need to analyze competitor mentions in opportunities. Here's what we know: Accounts table has account names and IDs, Opportunities table has opportunity names and account IDs. Where are we most likely to find competitor data? In activities within opportunities table and solution engineer notes."

Claude's response: Built a comprehensive map of table relationships, identifying:

  • Key tables needed for competitive analysis
  • Primary and foreign key relationships
  • Specific fields containing competitor mentions
  • SE Request tables for solution engineer insights

This pre-mapping approach reduces future query time and creates reusable resources for the entire product team. Jaclyn has created a Confluence Wiki where she stores these mappings for shared use with her team. They can also be used as standing instructions for Claude or ChatGPT.
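
A table map doesn't need special tooling—a plain snippet your team can paste into a prompt works. Here's a sketch of what one entry might look like; the core relationships follow standard Salesforce schema, while the SE Request object and competitor-mention fields are assumptions about where that data might live.

```python
# A sketch of a reusable "table map" entry. Core Salesforce
# relationships are standard; fields flagged as custom are
# assumptions about where competitor mentions might live.
TABLE_MAP = {
    "Account":     {"key": "Id", "fields": ["Name"]},
    "Opportunity": {"key": "Id",
                    "joins": {"AccountId": "Account.Id"},
                    "fields": ["Name", "StageName"]},
    "Task":        {"joins": {"WhatId": "Opportunity.Id"},
                    "fields": ["Description"],   # activity notes
                    "search_for": "competitor mentions"},
    "SE_Request__c": {                           # custom object (assumed)
        "joins": {"Opportunity__c": "Opportunity.Id"},
        "fields": ["Notes__c"],
        "search_for": "solution engineer insights",
    },
}
```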

From analysis to action: Real business impact

What truly sets this episode apart isn't just the technical capability—it's how seamlessly the queries translated into concrete business value and actionable next steps:

  • Quantified ROI: Each fix comes with specific ticket reduction numbers
  • Resource planning: Development effort estimates help with sprint planning
  • Cross-team impact: Understanding how support reduction affects multiple teams
  • Continuous improvement: The ability to re-run analysis after implementing fixes

As Jaclyn noted: "A support case doesn't just take up a support engineer's time. It might take up their boss's time, product's time, and definitely engineering time if it's a defect. You're talking about cross-team ROI."

The evolution from manual to conversational

One of the most striking moments in the episode came when Jaclyn reflected on how dramatically her workflow has evolved. Describing her previous approach, she shared: "I wrote a Python script with NLTK to parse every single support case via a CSV document... After I did that, I was able to go in and pull those samples, compare cases, really human-in-the-loop analysis."

Now, that same comprehensive analysis happens through natural conversation, allowing product managers to focus on decision-making rather than data wrangling.

Key takeaways for product managers

For product managers looking to implement similar data-driven approaches, Jaclyn's methodology offers a clear blueprint for success.

  1. Start broad, then narrow: Begin with comprehensive data pulls before drilling into specifics
  2. Always validate: Ask for examples and source references to verify AI findings
  3. Prioritize by effort and impact: Use the three-bucket system (low/med/high effort, all high impact)
  4. Create reusable assets: Build table maps and Confluence documentation for your team
  5. Quantify business impact: Always include ticket reduction numbers and ROI calculations

Ready to build a product roadmap with a chat?

The future of product management is conversational, and the tools to get started are available today.

The combination of CData MCP Servers with AI tools like Claude is democratizing advanced data analysis for product managers. You don't need to be a data scientist—you just need to know how to ask the right questions. 

Want to start vibe querying your own support data? Download free MCP server betas at cdata.com/solutions/mcp and join our community at the CData subreddit. Until next time, stay curious and keep vibing!

Try CData MCP Server Beta

As AI moves toward more contextual intelligence, CData MCP Servers can bridge the gap between your AI and business data.

Try the beta