MCP & Claude Skills: Using Both to Build Agentic Workflows

by Jerod Johnson | November 11, 2025

We wanted to know: Are Claude Skills more efficient than MCP? And if so, when should you use MCP and when should you use Claude Skills? We ran the same queries through both methods and compared token usage side by side. The results surprised us—not just in terms of performance, but in how the tools complemented each other.

The AI agent community recently latched onto a hot debate: Skills vs MCP. Which one is better? Which should you build first? Simon Willison, a key voice in the space, recently posed the question head-on: "What if Claude Skills replace the need for tools like MCP entirely?" (simonwillison.net)

It’s a fair question, until you start building with both.

At CData, we work with teams building production-grade agent workflows on our CData Connect AI platform. So, we put the question to the test. We ran the same queries through Claude Skills and Connect AI's MCP server, measured the token costs, and compared the results.

What we found was clear: Skills and MCP solve different problems. You need both. And when used together, they follow a natural lifecycle that improves agent performance and cuts token usage.

In part one of this series, we'll define the tools and share the findings from our experiment. In part two, we'll show you exactly how we used the Connect AI MCP Server and Claude Code to create Skills to streamline our data analysis and optimize our token usage.

Let’s define the tools:

  • MCP (Model Context Protocol): An open standard that allows AI models to communicate with external data sources, tools, and services, creating a standardized way for them to share context and functionality. (modelcontextprotocol.io)

  • Claude Skills: Folders containing instructions, scripts, and resources that teach Claude how to perform specific, repeatable tasks and workflows. These are often deterministic code blocks designed to be run in a terminal outside of the LLM's context. (docs.claude.com)
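To make the second definition concrete, a Skill is literally a folder. The layout below is a minimal sketch — the folder name, description text, and helper script are hypothetical — but the SKILL.md file with YAML frontmatter follows the documented format:

```
salesforce-objects/          <- hypothetical Skill folder
├── SKILL.md                 <- instructions Claude reads to decide when to use the Skill
└── scripts/
    └── get_objects.py       <- deterministic code run outside the LLM's context

# SKILL.md
---
name: salesforce-objects
description: List the Salesforce objects available through Connect AI.
---
Run scripts/get_objects.py to retrieve the object list directly from the
REST API instead of exploring via MCP tool calls.
```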

In theory, you could build with just one. In practice, they complement each other. MCP helps agents discover what’s possible. Skills help them execute what’s already known—faster, cheaper, and more reliably.

We ran real benchmarks to prove it. Same data. Same questions. Two different approaches. Here's what actually works.

The benchmarks — what we actually tested

To move past theory, we ran controlled tests using MCP and then Claude Skills based on the initial MCP discovery on the Connect AI platform. We issued the same set of queries to test each method and measured token usage using Claude’s tokenizer. The goal: understand when each tool performs best and why.

We tested three common scenarios:

  1. Data discovery — “What Salesforce objects do I have access to?”

  2. Simple query — “Show me the top 10 accounts by revenue.”

  3. Cross-system join — “Which top accounts have the most open Zendesk tickets?”

The method

To perform the tests, we first ran each prompt using Claude Code connected to the Connect AI MCP server. Once the LLM had explored the dataset and constructed the appropriate request, we added that request to a Claude Skill that sends it directly through Connect AI's REST API. This led to a significant reduction in token usage, but it was only possible after the LLM explored the data using Connect AI's MCP server.

The table below shows how many tokens each approach used and the difference.

| Scenario | Operation | MCP Alone | MCP with Skills | Difference |
| --- | --- | --- | --- | --- |
| Data discovery | List tables | 10,912 | 3,842 | 65% fewer |
| Simple query | Query top accounts | 1,513 | 1,006 | 34% fewer |
| Cross-system join | Find accounts with open tickets | 2,069 | 871 | 58% fewer |


Let's explore what the agent did using each approach for the first question.

Sample question: What Salesforce tables do I have access to?

Using MCP

Using the MCP server, the agent made use of the available MCP tools to list the user's tables (or objects) in Salesforce.

  1. Use getCatalogs to list the available catalogs (connections)

  2. Use getSchemas to list the available schemas for Salesforce (ways to model Salesforce)

  3. Use getTables to list the available tables (objects) for Salesforce

This method is extremely flexible, allowing the LLM to explore the data and construct proper queries. However, the responses to each of these tool calls include full schema metadata for every available table in a verbose JSON structure, resulting in significant token usage.
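The three-step walk above can be sketched as a simple traversal. Everything here is illustrative: `call_tool` stands in for whatever MCP client issues the tool calls, and the tool names are the ones listed above.

```python
# Illustrative sketch of the MCP discovery flow. `call_tool` is a stand-in
# for an MCP client; getCatalogs/getSchemas/getTables are the Connect AI
# tool names described above.
def discover_tables(call_tool):
    """Walk catalogs -> schemas -> tables, the way the agent does via MCP."""
    found = []
    for catalog in call_tool("getCatalogs"):                     # connections
        for schema in call_tool("getSchemas", catalog=catalog):  # data models
            for table in call_tool("getTables", catalog=catalog, schema=schema):
                found.append(f"{catalog}.{schema}.{table}")
    return found
```

In practice, each of those calls returns verbose JSON, and every byte of it lands in the model's context — which is where the 10,912-token figure in the table comes from.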

Using Skills

After the MCP exploration, we updated our Connect AI Skill with the code required to list tables (which represent objects) and return an optimized response. Once the Skill was in place, the agent could list the user's tables in a single step.

  1. Use the get_salesforce_objects method from the Skill to list the tables for our specific Salesforce connection.

This pattern repeated for the other questions. When the question was first asked, the agent used the MCP tools to answer the user's question. This generally meant using any schema information available in its context or memory to construct a request that Connect AI could process (using SQL).

Once we knew how to write the request (SQL), we updated the Skill with code to submit the specific request and return an optimized response outside of the LLM's context. This saves tokens because the LLM isn't responsible for constructing the request or parsing a verbose JSON response.
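Here is a minimal sketch of what such a Skill script might look like. The endpoint URL, the SQL text, and the response shape are all assumptions for illustration — not the real Connect AI API. The point is that the heavy lifting (a fixed query plus response trimming) happens in plain code, outside the model's context.

```python
import json
import urllib.request

# Hypothetical endpoint and query — placeholders, not the real Connect AI API.
CONNECT_AI_URL = "https://example.invalid/api/query"
SQL = "SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES"  # discovered via MCP

def compact(rows, key="TABLE_NAME"):
    """Trim a verbose JSON result down to just the names the user asked for,
    so the LLM never has to parse the full payload."""
    return [row[key] for row in rows]

def get_salesforce_objects():
    """Submit the fixed, already-discovered request directly over REST."""
    body = json.dumps({"query": SQL}).encode()
    req = urllib.request.Request(
        CONNECT_AI_URL, data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        payload = json.load(resp)
    return compact(payload.get("results", []))
```

Because the SQL is frozen and the response is trimmed before it ever reaches Claude, the model spends tokens only on the final answer.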

Key takeaway: MCP for discovery, Skills for execution

The key takeaway is this: MCP is critical for learning how to interact with external systems. Only after you have a usable understanding of the system can you write Skills for execution.

After our testing, the pattern became clear: use MCP to discover and use Skills to execute.

Even small token differences add up in production. But those savings aren’t possible until the agent understands what data exists, how it’s structured, and what questions are worth asking. That’s where MCP shines.

Understand your data today with Connect AI

CData Connect AI has a built-in MCP server that allows you to easily explore your data using natural language. Once you know how the LLM translates your question into a request, you can turn that request into a repeatable Skill that answers the same question again and again with reduced token usage.

Stay tuned for part two of this series, where we'll show you exactly how we created a Skill based on our exploration.

In the meantime, sign up for a free trial of Connect AI and get a deeper understanding of your data through natural language today.

Explore CData Connect AI today

See how Connect AI excels at streamlining business processes for real-time insights.

Get the trial