
Introduction to Looker Code Mode MCP

· 5 min read

Today we're introducing the lkr code-mode MCP server, which lets your LLM orchestrate all of Looker's APIs through one simple interface. The Model Context Protocol (MCP) is a great way to connect AI agents to external tools. But as agents connect to bigger APIs, we run into a big problem: context bloat. Looker tried to address this by shipping a trimmed-down MCP server that exposes only a few select APIs, but that's limiting if you want to build complex workflows that require many back-and-forth tool calls. Code Mode flips this on its head: the LLM writes code to orchestrate the entire workflow in one go, which is why developers are moving toward Code Mode for these use cases.

Think of traditional MCP as requiring a separate phone call to a worker for every step of a project (e.g., "Check the file," "Now read the first line," "Now delete the file"). Code Mode is like sending the worker a short Python script that does all three steps in one go. It cuts out the back-and-forth, reduces miscommunication, and gets the job done much faster.

What exactly is Code Mode?

Normally, if you have an API with hundreds of endpoints, you have to feed the AI the full JSON schema for every tool you want it to use. That eats up almost your entire token budget just explaining what each tool does, leaving little room for the actual conversation. Code Mode takes a different approach: instead of listing hundreds of separate tools, it gives the LLM a compact, typed interface (basically a small SDK). The AI writes a script (usually in Python or JavaScript) to do what it needs, then runs it in a secure sandbox (such as a V8 isolate or a Python sandbox).
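To make that concrete, here is a toy sketch. The function names and canned data are made up (this is not a real API); the point is that the model sees a couple of typed functions instead of hundreds of tool schemas, and writes a single script that the sandbox runs in one round trip.

```python
# A toy illustration of the idea, not a real SDK. The model gets a tiny typed
# interface and writes one script against it.

def list_dashboards() -> list[dict]:
    # stands in for a real API call; canned data keeps the sketch runnable
    return [{"title": "Revenue", "query_id": "q1"}, {"title": "Churn", "query_id": "q2"}]

def run_query(query_id: str, result_format: str = "json") -> list[dict]:
    return [] if query_id == "q2" else [{"total": 42}]

# The script the LLM writes and the sandbox executes in a single round trip:
empty = [d["title"] for d in list_dashboards() if not run_query(d["query_id"])]
print(empty)  # -> ['Churn']
```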

Cloudflare and Anthropic have been pushing this pattern because it shifts the model from "calling tools one by one" to "writing code to get the job done."

Why is it better?

  • You collapse a massive API into a tiny interface, which saves a ton of tokens. Cloudflare reported cutting its token usage by 99.9% when it tried this.
  • The agent can write loops and conditionals and process data all in one go, instead of the conversation ping-ponging back to the LLM after every single tool call.
  • Running code is deterministic. It either works or it doesn't, making it much easier to debug than an LLM guessing which tool to call next.

How we built it at lkr.dev

We wanted to solve this token-bloat problem while still getting the full Looker API and SDK, so we built a Python-based MCP server called lkr code-mode.

Here is how it works under the hood (with a simplified sketch after the list):

  • Instead of giving the agent hundreds of Looker SDK methods, we give it exactly one: run_python_code(code: str).
  • The tool spins up the Looker SDK, finds all the available methods, and passes them into the sandbox as global functions.
  • We use the Monty sandbox to run the code, so it can't mess with your local filesystem or network.
  • We convert complex Looker objects into standard Python dictionaries so the script can handle them easily.
  • If the session expires, Code Mode automatically pops up the PKCE auth browser to refresh the token without failing the run.
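To show the shape of this, here is a heavily simplified sketch of the single-tool pattern. It is not the actual lkr code-mode source: it assumes the official MCP Python SDK (FastMCP) and the Looker Python SDK, and it uses a bare exec() purely as a stand-in for the Monty sandbox, which is what provides the real isolation.

```python
# Simplified sketch of a single-tool "Code Mode" MCP server. Not the real
# lkr code-mode implementation; exec() here stands in for the Monty sandbox.
import inspect
import io
from contextlib import redirect_stdout

import attr
import looker_sdk
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("lkr-code-mode")
sdk = looker_sdk.init40()  # credentials from looker.ini / environment


def to_plain(value):
    """Recursively turn Looker SDK (attrs-based) models into plain dicts/lists."""
    if attr.has(type(value)):
        return attr.asdict(value)
    if isinstance(value, (list, tuple)):
        return [to_plain(v) for v in value]
    return value


def wrap(method):
    # every SDK call returns plain Python data so the agent's script can use it directly
    return lambda *args, **kwargs: to_plain(method(*args, **kwargs))


# Expose every public SDK method to the sandboxed script as a global function.
SDK_GLOBALS = {
    name: wrap(fn)
    for name, fn in inspect.getmembers(sdk, inspect.ismethod)
    if not name.startswith("_")
}


@mcp.tool()
def run_python_code(code: str) -> str:
    """Run a Python snippet with every Looker SDK method available as a global."""
    buffer = io.StringIO()
    with redirect_stdout(buffer):
        exec(code, {**SDK_GLOBALS})  # stand-in for running inside the Monty sandbox
    return buffer.getvalue()


if __name__ == "__main__":
    mcp.run()
```

The real server layers on the Monty sandbox, the PKCE re-auth flow, and more careful result conversion, but the core trick is the same: one tool, with the whole SDK reachable from inside the agent's script.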

Check out the full Code Mode Docs and the CLI README for setup details.

What Can You Do with Looker Code Mode?

With a full Python environment and access to the Looker SDK, you can build some pretty cool workflows. Here are a few ideas:

Instance Governance & Cleanup

  • The "Marie Kondo" Content Archiver: Automatically find and archive dashboards and Looks that haven't been viewed in over 90 days.
  • Orphaned Schedule Rescuer: Find scheduled emails where the owner's account has been disabled and reassign them to prevent silent failures.
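For example, the schedule rescuer could look roughly like this, written here as a standalone script against the Looker Python SDK (inside Code Mode the same calls would be available as sandbox globals); the new owner's user id is a hypothetical placeholder.

```python
import looker_sdk
from looker_sdk import models40 as models

sdk = looker_sdk.init40()  # credentials from looker.ini / environment
NEW_OWNER_ID = "123"       # hypothetical admin user to take over orphaned schedules

for plan in sdk.all_scheduled_plans(all_users=True, fields="id,name,user_id"):
    owner = sdk.user(plan.user_id)
    if owner.is_disabled:
        sdk.update_scheduled_plan(
            plan.id, body=models.WriteScheduledPlan(user_id=NEW_OWNER_ID)
        )
        print(f"Reassigned schedule {plan.name!r} from disabled user {owner.display_name}")
```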

Developer & Performance Tools

  • Dashboard Performance Profiler: Test the load time of every tile on a dashboard by running queries asynchronously to find bottlenecks (see the sketch after this list).
  • LookML "Impact Radius" Analyzer: Search for all Dashboards and Looks containing a specific field before deleting or changing it.
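A rough sketch of the profiler idea, again as a standalone Looker Python SDK script (the same calls would be sandbox globals in Code Mode). It times tiles sequentially for brevity; a production version would fan queries out with create_query_task, and the dashboard id below is a placeholder.

```python
import time

import looker_sdk

sdk = looker_sdk.init40()
dash = sdk.dashboard("42")  # hypothetical dashboard id

timings = []
for tile in dash.dashboard_elements or []:
    if not tile.query_id:
        continue  # skip text tiles, Look-backed tiles, merged results, etc.
    start = time.perf_counter()
    sdk.run_query(query_id=tile.query_id, result_format="json")
    timings.append((tile.title or tile.id, time.perf_counter() - start))

# slowest tiles first
for title, seconds in sorted(timings, key=lambda t: t[1], reverse=True):
    print(f"{seconds:6.2f}s  {title}")
```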

Dynamic Automation & Alerting

  • Smart Escalation Router: Dynamically route alerts to managers based on data conditions and user attributes.
  • The "Morning Briefing" Generator: Create personalized daily digest dashboards on the fly and export them as PDFs.
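As a flavor of the briefing idea, here is a hedged standalone sketch using the Looker Python SDK's dashboard render tasks; the dashboard id, filter string, and output filename are placeholders, and a real version would loop over recipients and personalize the filters per user.

```python
import time

import looker_sdk
from looker_sdk import models40 as models

sdk = looker_sdk.init40()

task = sdk.create_dashboard_render_task(
    dashboard_id="42",          # hypothetical dashboard id
    result_format="pdf",
    body=models.CreateDashboardRenderTask(
        dashboard_style="tiled",
        dashboard_filters="Region=EMEA",  # hypothetical per-recipient personalization
    ),
    width=1280,
    height=800,
)

# Poll until rendering finishes, then write the PDF to disk.
while sdk.render_task(task.id).status not in ("success", "failure"):
    time.sleep(2)

with open("morning_briefing.pdf", "wb") as f:
    f.write(sdk.render_task_results(task.id))
```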

Advanced Migrations & Syncing

  • Environment Synchronizer: Replicate folder structures, permissions, and roles from a staging instance to production.
  • Bulk Onboarding Machine: Onboard 100+ users in seconds from a CSV, setting up credentials, user attributes, and row-level security.
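And a sketch of the bulk-onboarding idea as a standalone Looker Python SDK script; the CSV filename, its columns, and the user-attribute id used for row-level security are all hypothetical.

```python
import csv

import looker_sdk
from looker_sdk import models40 as models

sdk = looker_sdk.init40()
REGION_ATTRIBUTE_ID = "17"  # hypothetical user attribute backing row-level security

with open("new_hires.csv", newline="") as f:  # hypothetical columns: first,last,email,region
    for row in csv.DictReader(f):
        user = sdk.create_user(
            body=models.WriteUser(first_name=row["first"], last_name=row["last"])
        )
        sdk.create_user_credentials_email(
            user_id=user.id, body=models.WriteCredentialsEmail(email=row["email"])
        )
        sdk.set_user_attribute_user_value(
            user_id=user.id,
            user_attribute_id=REGION_ATTRIBUTE_ID,
            body=models.WriteUserAttributeWithValue(value=row["region"]),
        )
        print(f"Created {row['email']}")
```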

What others are saying

  • Cloudflare has been writing a lot about this pattern, showing how Code Mode lets agents drive its massive API without blowing through token limits.
  • On Reddit (r/ClaudeAI, r/LLMDevs), developers agree that this isn't replacing MCP, but rather making it actually usable for big projects. A lot of the discussion focuses on how to build secure sandboxes.
  • On Hacker News, the consensus is that LLMs are just better at writing code than trying to figure out complex JSON tool schemas.

The Bottom Line

Code Mode is a big deal for making AI agents actually useful for complex tasks. By letting them write code instead of just calling API endpoints one by one, we can work around token limits and build much more reliable automation.