AI agents are transforming how sales teams prospect, how marketers analyze audiences, and how recruiters source candidates. But there is one data source that nearly every B2B agent needs and almost none can reliably access: LinkedIn.
This guide shows you how to give your AI agent safe, reliable LinkedIn access using the OutX API. We will walk through working code for LangChain, Claude (Anthropic tool_use), and the Model Context Protocol (MCP), plus the guardrails you need to keep your LinkedIn account safe.
If you are building an AI agent for B2B workflows, LinkedIn data is not optional. It is the single richest source of professional context on the planet: sales agents need it for prospect research, marketing agents for audience analysis, and recruiting agents for candidate sourcing.
The common thread: your agent needs structured LinkedIn data it can reason over, not a screenshot or a browser session it has to navigate.
LinkedIn's official API is restricted to approved partners. You cannot just sign up and start pulling profile data. The Marketing API and Sales Navigator API require partnership applications that take months and are routinely denied for smaller companies.
That leaves two common workarounds, both with serious downsides:
Cloud scraping services spin up headless browsers in data centers. LinkedIn detects these through IP reputation, browser fingerprinting, and behavioral analysis. The result: your account gets restricted or banned. Even if the scraping service uses its own accounts, the data is stale and the service can disappear overnight.
Browser automation tools (Puppeteer, Playwright, Selenium) control a browser on your machine. They are fragile, break whenever LinkedIn changes its DOM, and still trigger detection if the automation patterns are too mechanical.
AI agents make this worse. An LLM calling a scraping endpoint in a tight loop will burn through rate limits and trigger every anti-automation flag LinkedIn has. You need a layer between your agent and LinkedIn that handles timing, session management, and compliance.
OutX takes a fundamentally different approach. Instead of scraping LinkedIn from the cloud, OutX uses a Chrome extension that runs in your real browser session. Here is why that matters for AI agents:
The OutX Chrome extension operates within your authenticated LinkedIn session. To LinkedIn, it looks like you are browsing normally, because you are. There is no data center IP, no headless browser fingerprint, no credential sharing with a third party.
Your agent sends a request (e.g., "fetch this profile") and gets back a task ID. It polls for the result. This async model naturally prevents the burst-request patterns that get accounts flagged.
POST /api/task -> { task_id: "abc123" }
GET /api/task/abc123 -> { status: "completed", data: {...} }
OutX publishes a skill file that describes every API endpoint, parameter, and response format in a structured way that LLMs can parse directly. Drop it into your agent's system prompt and the agent knows how to use the API without you writing wrapper code.
The llms.txt file at the docs root follows the emerging standard for AI-readable site descriptions. AI agents and tools that support llms.txt can automatically discover OutX's capabilities.
Before writing any agent code, get the OutX API working:
Step 1: Install the Chrome extension. Download from the OutX website and pin it in your browser. Sign in with your OutX account. Keep this browser window open while your agent runs.
Step 2: Get your API key. Go to your OutX dashboard and copy your API key. You will use this in the x-api-key header for all API calls.
Step 3: Download the skill file. Grab the skill file from outx.ai/docs/outx-skill.md. This is the document your agent will use to understand the API.
Step 4: Test a simple call.
import requests
import time
API_KEY = "your-api-key-here"
BASE_URL = "https://api.outx.ai"
HEADERS = {"x-api-key": API_KEY, "Content-Type": "application/json"}
# Create a task to fetch a LinkedIn profile
response = requests.post(
    f"{BASE_URL}/api/task",
    headers=HEADERS,
    json={
        "type": "fetch-profile",
        "params": {
            "linkedin_url": "https://www.linkedin.com/in/example-profile"
        }
    }
)
task = response.json()
task_id = task["task_id"]

# Poll for results (respect rate limits)
while True:
    time.sleep(15)  # Wait 15 seconds between polls
    result = requests.get(f"{BASE_URL}/api/task/{task_id}", headers=HEADERS)
    data = result.json()
    if data["status"] == "completed":
        print(data["data"])
        break
    elif data["status"] == "failed":
        print("Task failed:", data.get("error"))
        break
If that returns profile data, you are ready to wire it into an agent.
LangChain is the most popular framework for building AI agents. Here is how to create OutX tools that a LangChain agent can use.
import requests
import time
from langchain.tools import tool
API_KEY = "your-api-key-here"
BASE_URL = "https://api.outx.ai"
HEADERS = {"x-api-key": API_KEY, "Content-Type": "application/json"}
def _run_outx_task(task_type: str, params: dict) -> dict:
    """Submit a task to OutX and poll for the result."""
    response = requests.post(
        f"{BASE_URL}/api/task",
        headers=HEADERS,
        json={"type": task_type, "params": params}
    )
    response.raise_for_status()
    task_id = response.json()["task_id"]
    for _ in range(20):  # Max ~5 minutes of polling
        time.sleep(15)
        result = requests.get(
            f"{BASE_URL}/api/task/{task_id}",
            headers=HEADERS
        )
        data = result.json()
        if data["status"] == "completed":
            return data["data"]
        elif data["status"] == "failed":
            return {"error": data.get("error", "Task failed")}
    return {"error": "Task timed out"}

@tool
def fetch_linkedin_profile(linkedin_url: str) -> dict:
    """Fetch a LinkedIn profile by URL. Returns name, headline,
    current role, company, location, summary, and recent activity."""
    return _run_outx_task("fetch-profile", {
        "linkedin_url": linkedin_url
    })

@tool
def search_linkedin_posts(query: str, limit: int = 10) -> dict:
    """Search LinkedIn posts matching a query. Returns post content,
    author info, engagement metrics, and timestamps."""
    return _run_outx_task("search-posts", {
        "query": query,
        "limit": limit
    })

@tool
def fetch_company_profile(company_url: str) -> dict:
    """Fetch a LinkedIn company page. Returns company name, size,
    industry, description, and recent updates."""
    return _run_outx_task("fetch-company", {
        "company_url": company_url
    })
from langchain_openai import ChatOpenAI
from langchain.agents import AgentExecutor, create_tool_calling_agent
from langchain_core.prompts import ChatPromptTemplate
# Load the OutX skill file as context
with open("outx-skill.md", "r") as f:
    skill_context = f.read()
llm = ChatOpenAI(model="gpt-4o", temperature=0)
tools = [fetch_linkedin_profile, search_linkedin_posts, fetch_company_profile]
prompt = ChatPromptTemplate.from_messages([
    ("system", f"""You are a B2B sales research assistant. You have access
to LinkedIn data through the OutX API.

IMPORTANT RULES:
- Wait at least 15 seconds between API calls
- Never make more than 4 requests per minute
- Always cite the LinkedIn data you reference

API Reference:
{skill_context}"""),
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}"),
])

agent = create_tool_calling_agent(llm, tools, prompt)
executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

# Example: Research a prospect
result = executor.invoke({
    "input": "Research the VP of Engineering at Acme Corp on LinkedIn. "
             "Find their profile, check their recent posts, and draft "
             "a personalized outreach message about our DevOps platform."
})
print(result["output"])
For the full LangChain integration guide with more examples, see outx.ai/docs/integrations/langchain.
If you are using Claude directly through the Anthropic API, you can define OutX endpoints as tools in the tool_use format.
import anthropic
client = anthropic.Anthropic()
tools = [
    {
        "name": "fetch_linkedin_profile",
        "description": (
            "Fetch a LinkedIn profile using the OutX API. "
            "Returns structured profile data including name, headline, "
            "current company, role, location, summary, and recent posts. "
            "This is an async operation - the function handles polling internally."
        ),
        "input_schema": {
            "type": "object",
            "properties": {
                "linkedin_url": {
                    "type": "string",
                    "description": "The full LinkedIn profile URL, e.g. https://www.linkedin.com/in/username"
                }
            },
            "required": ["linkedin_url"]
        }
    },
    {
        "name": "search_linkedin_posts",
        "description": (
            "Search for LinkedIn posts matching a keyword query. "
            "Returns post content, author details, engagement counts, "
            "and timestamps. Use this to find relevant discussions "
            "and trending topics."
        ),
        "input_schema": {
            "type": "object",
            "properties": {
                "query": {
                    "type": "string",
                    "description": "Search query for LinkedIn posts"
                },
                "limit": {
                    "type": "integer",
                    "description": "Max results to return (default 10, max 50)",
                    "default": 10
                }
            },
            "required": ["query"]
        }
    }
]
def handle_tool_call(tool_name: str, tool_input: dict) -> str:
    """Execute an OutX API call and return the result."""
    if tool_name == "fetch_linkedin_profile":
        result = _run_outx_task("fetch-profile", {
            "linkedin_url": tool_input["linkedin_url"]
        })
    elif tool_name == "search_linkedin_posts":
        result = _run_outx_task("search-posts", {
            "query": tool_input["query"],
            "limit": tool_input.get("limit", 10)
        })
    else:
        result = {"error": f"Unknown tool: {tool_name}"}
    return str(result)
def run_agent(user_message: str) -> str:
    messages = [{"role": "user", "content": user_message}]
    while True:
        response = client.messages.create(
            model="claude-sonnet-4-20250514",
            max_tokens=4096,
            tools=tools,
            messages=messages
        )
        # If the model wants to use a tool, execute it
        if response.stop_reason == "tool_use":
            tool_block = next(
                b for b in response.content if b.type == "tool_use"
            )
            tool_result = handle_tool_call(
                tool_block.name, tool_block.input
            )
            messages.append({"role": "assistant", "content": response.content})
            messages.append({
                "role": "user",
                "content": [{
                    "type": "tool_result",
                    "tool_use_id": tool_block.id,
                    "content": tool_result
                }]
            })
        else:
            # Model is done, return the text response
            return response.content[0].text
# Example usage
result = run_agent(
    "Look up the LinkedIn profile for linkedin.com/in/example-cto "
    "and tell me what topics they post about most frequently."
)
print(result)
This same pattern works with GPT-4o via the OpenAI function calling API. The tool definitions are nearly identical; you just swap input_schema for parameters.
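As a hedged sketch of that swap, here is the fetch_linkedin_profile tool rewritten in OpenAI's Chat Completions tools format (the description text is abbreviated; only the wrapper keys differ from the Anthropic version):

```python
# Same tool as above, but in OpenAI function-calling format:
# the schema moves under "function.parameters" and the entry is
# wrapped in {"type": "function", ...}.
openai_tools = [
    {
        "type": "function",
        "function": {
            "name": "fetch_linkedin_profile",
            "description": (
                "Fetch a LinkedIn profile using the OutX API. "
                "Returns structured profile data."
            ),
            "parameters": {
                "type": "object",
                "properties": {
                    "linkedin_url": {
                        "type": "string",
                        "description": "Full LinkedIn profile URL",
                    }
                },
                "required": ["linkedin_url"],
            },
        },
    }
]

# Would be passed to the API roughly as:
# client.chat.completions.create(model="gpt-4o", tools=openai_tools, messages=...)
print(openai_tools[0]["function"]["name"])
```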
The Model Context Protocol is an open standard for connecting AI agents to external tools and data sources. OutX fits naturally into the MCP architecture.
The skill file at outx.ai/docs/outx-skill.md is designed to work as an MCP resource. Point your MCP server at it and any connected agent can discover OutX's capabilities automatically:
{
  "resources": [
    {
      "uri": "https://outx.ai/docs/outx-skill.md",
      "name": "OutX LinkedIn API",
      "description": "Full API reference for accessing LinkedIn data through OutX",
      "mimeType": "text/markdown"
    }
  ]
}
You can also expose OutX endpoints directly as MCP tools:

{
  "tools": [
    {
      "name": "outx_fetch_profile",
      "description": "Fetch a LinkedIn profile via OutX API. Returns structured data about a person including their role, company, and recent activity.",
      "inputSchema": {
        "type": "object",
        "properties": {
          "linkedin_url": {
            "type": "string",
            "description": "Full LinkedIn profile URL"
          }
        },
        "required": ["linkedin_url"]
      }
    }
  ]
}
Any MCP-compatible client (Claude Desktop, Cursor, Windsurf, custom agents) can connect to an MCP server exposing these tools and immediately start using LinkedIn data.
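To make the dispatch concrete, here is a minimal, stdlib-only sketch of how an MCP server might handle a JSON-RPC "tools/call" request for outx_fetch_profile. This is illustrative, not the official MCP SDK, and fetch_profile is a stub standing in for the OutX client call shown earlier:

```python
import json

def fetch_profile(linkedin_url: str) -> dict:
    # Stub standing in for an OutX API call (outx.run_task("fetch-profile", ...))
    return {"linkedin_url": linkedin_url, "status": "stub"}

def handle_request(raw: str) -> str:
    """Dispatch a JSON-RPC 2.0 tools/call request to the matching tool."""
    req = json.loads(raw)
    if req["method"] == "tools/call" and req["params"]["name"] == "outx_fetch_profile":
        args = req["params"]["arguments"]
        result = fetch_profile(args["linkedin_url"])
        # MCP tool results are returned as content blocks
        payload = {"content": [{"type": "text", "text": json.dumps(result)}]}
    else:
        payload = {"content": [{"type": "text", "text": "unknown tool"}], "isError": True}
    return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": payload})

request = json.dumps({
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "outx_fetch_profile",
        "arguments": {"linkedin_url": "https://www.linkedin.com/in/example"},
    },
})
print(handle_request(request))
```

In practice you would use an MCP server library rather than hand-rolling the JSON-RPC layer, but the request/response shape is the same.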
For more on AI integrations, see outx.ai/docs/ai.
AI agents are fast. LinkedIn's anti-automation systems are faster. If your agent hammers the API without guardrails, you will lose your LinkedIn account. Follow these rules.
import os
import requests
import time
# Good: API key from environment
API_KEY = os.environ["OUTX_API_KEY"]
# Good: Rate limiting built into the helper
class OutXClient:
    def __init__(self):
        self.api_key = os.environ["OUTX_API_KEY"]
        self.base_url = "https://api.outx.ai"
        self.last_call = 0
        self.min_interval = 15  # seconds

    def _wait(self):
        elapsed = time.time() - self.last_call
        if elapsed < self.min_interval:
            time.sleep(self.min_interval - elapsed)
        self.last_call = time.time()

    def fetch_profile(self, linkedin_url: str) -> dict:
        self._wait()
        response = requests.post(
            f"{self.base_url}/api/task",
            headers={
                "x-api-key": self.api_key,
                "Content-Type": "application/json"
            },
            json={
                "type": "fetch-profile",
                "params": {"linkedin_url": linkedin_url}
            }
        )
        response.raise_for_status()
        task_id = response.json()["task_id"]
        return self._poll(task_id)

    def _poll(self, task_id: str, max_attempts: int = 20) -> dict:
        for _ in range(max_attempts):
            self._wait()
            result = requests.get(
                f"{self.base_url}/api/task/{task_id}",
                headers={"x-api-key": self.api_key}
            )
            data = result.json()
            if data["status"] == "completed":
                return data["data"]
            elif data["status"] == "failed":
                raise Exception(f"Task failed: {data.get('error')}")
        raise TimeoutError("Task polling timed out")
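One guardrail worth adding on top of the client above: retry transient failures with exponential backoff instead of letting a single failed task crash a long-running agent. A minimal sketch, assuming the wrapper and delay values are your own choices (they are not part of any OutX SDK):

```python
import time

def with_backoff(run, retries: int = 3, base_delay: float = 30.0, sleep=time.sleep):
    """Call `run` up to `retries` times, doubling the delay after each failure.

    `run` stands in for any OutX call, e.g. lambda: client.fetch_profile(url).
    Generous delays (30s, 60s, 120s) keep the agent well under rate limits.
    """
    for attempt in range(retries):
        try:
            return run()
        except Exception:
            if attempt == retries - 1:
                raise  # out of retries; surface the error
            sleep(base_delay * (2 ** attempt))

# Demo with a flaky function that succeeds on the third try.
calls = []
def flaky():
    calls.append(1)
    if len(calls) < 3:
        raise RuntimeError("transient failure")
    return {"status": "completed"}

result = with_backoff(flaky, sleep=lambda s: None)  # no-op sleep for the demo
print(result)
```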
Let us put it all together. This agent takes a LinkedIn profile URL, fetches the person's data, analyzes their recent activity, and drafts a personalized outreach message.
import os
import time
import requests
from langchain_openai import ChatOpenAI
from langchain.tools import tool
from langchain.agents import AgentExecutor, create_tool_calling_agent
from langchain_core.prompts import ChatPromptTemplate
# --- OutX API setup ---
class OutXClient:
    def __init__(self):
        self.api_key = os.environ["OUTX_API_KEY"]
        self.base_url = "https://api.outx.ai"
        self.last_call = 0

    def _wait(self):
        elapsed = time.time() - self.last_call
        if elapsed < 15:
            time.sleep(15 - elapsed)
        self.last_call = time.time()

    def run_task(self, task_type: str, params: dict) -> dict:
        self._wait()
        resp = requests.post(
            f"{self.base_url}/api/task",
            headers={
                "x-api-key": self.api_key,
                "Content-Type": "application/json"
            },
            json={"type": task_type, "params": params}
        )
        resp.raise_for_status()
        task_id = resp.json()["task_id"]
        for _ in range(20):
            self._wait()
            result = requests.get(
                f"{self.base_url}/api/task/{task_id}",
                headers={"x-api-key": self.api_key}
            )
            data = result.json()
            if data["status"] == "completed":
                return data["data"]
            elif data["status"] == "failed":
                return {"error": data.get("error")}
        return {"error": "Timed out"}
outx = OutXClient()
@tool
def fetch_profile(linkedin_url: str) -> dict:
    """Fetch a LinkedIn profile. Returns name, headline, role, company,
    location, about section, and recent posts."""
    return outx.run_task("fetch-profile", {
        "linkedin_url": linkedin_url
    })

@tool
def search_posts(query: str) -> dict:
    """Search LinkedIn posts by keyword. Returns matching posts with
    content, author, and engagement data."""
    return outx.run_task("search-posts", {"query": query, "limit": 5})

@tool
def draft_outreach(prospect_name: str, context: str) -> str:
    """Draft a personalized LinkedIn outreach message based on
    prospect research. Returns the draft message text."""
    llm = ChatOpenAI(model="gpt-4o", temperature=0.7)
    response = llm.invoke(
        f"Write a short, personalized LinkedIn connection request "
        f"message (under 300 characters) for {prospect_name}. "
        f"Context about them: {context}. "
        f"Be genuine, reference something specific, no hard sell."
    )
    return response.content
# --- Agent setup ---
llm = ChatOpenAI(model="gpt-4o", temperature=0)
tools = [fetch_profile, search_posts, draft_outreach]
prompt = ChatPromptTemplate.from_messages([
    ("system", """You are a B2B sales research assistant. Your job is to:
1. Fetch the prospect's LinkedIn profile
2. Analyze their role, company, and recent activity
3. Search for their recent LinkedIn posts to understand their interests
4. Draft a personalized outreach message

RULES:
- Be thorough in your research before drafting outreach
- Reference specific details from their profile and posts
- Keep outreach messages genuine and non-pushy
- Always wait for each tool call to complete before making the next one"""),
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}"),
])
agent = create_tool_calling_agent(llm, tools, prompt)
executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
# --- Run it ---
if __name__ == "__main__":
    result = executor.invoke({
        "input": (
            "Research the person at linkedin.com/in/example-vp-sales "
            "and draft an outreach message. Our product is an AI-powered "
            "sales engagement platform that helps teams find warm leads "
            "through LinkedIn social listening."
        )
    })
    print("\n" + "=" * 60)
    print("AGENT OUTPUT:")
    print("=" * 60)
    print(result["output"])
This agent fetches the prospect's profile, analyzes their role and recent posts, and drafts a tailored outreach message. The entire flow respects rate limits, handles async polling, and keeps API keys out of the code.
Can I use OutX with frameworks other than LangChain?
Yes. OutX is a REST API, so it works with any framework or language. The examples above use LangChain and Anthropic's Python SDK, but the same patterns apply to CrewAI, AutoGen, Haystack, or plain HTTP requests in any language.
Do I need the Chrome extension running at all times?
Yes. The Chrome extension is what provides the authenticated LinkedIn session. Your agent's API calls are routed through the extension. If the browser is closed or the extension is disabled, tasks will fail. Keep a dedicated browser window open while your agent runs.
How many profiles can I fetch per day?
This depends on your OutX plan and LinkedIn's own behavioral limits. Check the rate limits documentation for current numbers. As a general rule, stay under 100 profile fetches per day and space them at least 15 seconds apart.
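That daily cap is easy to enforce in code. A hedged sketch (DailyBudget is illustrative, and 100/day is the guidance above, not an official OutX limit):

```python
import datetime

class DailyBudget:
    """Refuse further profile fetches once a daily cap is reached."""

    def __init__(self, max_per_day: int = 100):
        self.max_per_day = max_per_day
        self.day = datetime.date.today()
        self.count = 0

    def allow(self) -> bool:
        today = datetime.date.today()
        if today != self.day:  # new day: reset the counter
            self.day, self.count = today, 0
        if self.count >= self.max_per_day:
            return False
        self.count += 1
        return True

# Demo with a tiny cap of 2: third call is refused.
budget = DailyBudget(max_per_day=2)
print([budget.allow() for _ in range(3)])  # [True, True, False]
```

Check the budget before every fetch and have the agent report "budget exhausted" instead of silently retrying.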
Is this compliant with LinkedIn's terms of service?
OutX operates through your own authenticated browser session, similar to how you would manually browse LinkedIn. It does not scrape from external servers, share credentials, or create fake accounts. That said, review LinkedIn's current terms and your own compliance requirements before deploying at scale.
Can multiple agents share one OutX account?
One OutX account maps to one Chrome extension session. If you need multiple agents running concurrently, each needs its own OutX account and browser session. This also helps with rate limiting since each account has independent limits.
What data format do the API responses use?
All responses are JSON. Profile data includes structured fields (name, headline, company, location, summary) plus arrays for experience, education, and recent posts. See the API quickstart for full response schemas.
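If you want typed access in your agent code, you can map the JSON onto a small dataclass. The field names below (name, headline, company, location, recent_posts) are assumptions based on the description above; check the API quickstart for the actual response schema:

```python
from dataclasses import dataclass, field

@dataclass
class LinkedInProfile:
    # Field names are illustrative, not the confirmed OutX schema.
    name: str
    headline: str
    company: str
    location: str
    recent_posts: list = field(default_factory=list)

    @classmethod
    def from_api(cls, data: dict) -> "LinkedInProfile":
        # Tolerate missing keys so partial responses don't raise
        return cls(
            name=data.get("name", ""),
            headline=data.get("headline", ""),
            company=data.get("company", ""),
            location=data.get("location", ""),
            recent_posts=data.get("recent_posts", []),
        )

sample = {"name": "Jane Doe", "headline": "VP Engineering", "company": "Acme"}
profile = LinkedInProfile.from_api(sample)
print(profile.name, profile.company)
```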
Can I use this with Claude Desktop or Cursor via MCP?
Yes. Define OutX endpoints as MCP tools in your MCP server configuration, and any MCP-compatible client can use them. The skill file at outx.ai/docs/outx-skill.md provides the tool descriptions in a format that works well as an MCP resource.
If you are building AI agents that need LinkedIn data, OutX gives you the reliable, safe access layer that makes the rest of your stack work. No scraping bans, no stale data, no fragile DOM selectors. Just a clean API backed by your real LinkedIn session.