Google makes the web ready for agentic AI with the new standard WebMCP

Imagine an AI that actually understands how a website works instead of staring at it like a confused tourist with the map upside down. That is exactly what Google is now trying to solve. With WebMCP, the AI no longer has to guess where the Submit button is and can instead communicate directly with the site. Less chaos, less computing power, fewer digital breakdowns.

The end of screenshot chaos

So far, AI agents have treated the web as an image. They take screenshots, send them to a vision model, and try to figure out where to click. Moving a button a few pixels can cause the entire flow to fail.

With the Web Model Context Protocol (WebMCP), the game changes. Instead of the AI guessing, the website states exactly which tools are available and how they are used. It is like switching from guesswork to clear instructions. The result is faster interaction, fewer errors, and significantly lower computational costs.

Two ways to an agent-ready website

Developers have two main ways to make a website ready for AI agents.

Declarative method with HTML

The simplest path is to add new attributes directly in the HTML. By using attributes like toolname and tooldescription in form tags, functions can be exposed as clear tools.

Chrome reads these attributes and automatically creates a structured schema that the AI model can interpret. A flight booking form then becomes a defined tool with specific input fields.

When an AI submits the form, a special event is triggered that signals that it is an agent and not a human initiating the actions. The backend can thus handle the request in the correct way.

Simply put, each function gets a clear nameplate and a manual.

Imperative method with JavaScript

For more advanced applications, there is a deeper integration via JavaScript. Here, navigator.modelContext.registerTool() is used to register functions directly in the browser.

The developer defines the tool’s name, a description, and a JSON schema for input. For example, when the AI agent wants to add a product to the shopping cart, the registered function is called in real time within the user’s current session.

This means that the agent does not need to log in again or bypass security layers. Everything happens in a controlled manner within the current user session. Perfect for multi-step flows such as payments or booking processes.
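Because the API is still experimental and only present in flagged Chrome builds, it is worth feature-detecting before registering anything. A minimal sketch (the helper name is ours; navigator.modelContext and registerTool come from the proposal described in this article):

```javascript
// Returns true only if the WebMCP entry point looks usable.
// navigator.modelContext is experimental, so always check before calling it.
function webmcpAvailable(nav) {
  return !!nav
    && typeof nav.modelContext === "object"
    && nav.modelContext !== null
    && typeof nav.modelContext.registerTool === "function";
}

// In a page: if (webmcpAvailable(navigator)) { /* register tools here */ }
```

Injecting the navigator object as a parameter keeps the check testable outside the browser.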

Performance that actually matters

The transition from image-based interpretation to structured JSON communication is not just technically elegant. It makes a real difference.

  • Latency decreases because no screenshots need to be uploaded and analyzed.
  • Accuracy increases because the model works with structured data instead of interpreting pixels.
  • Costs decrease because text-based schemas are significantly cheaper to process than high-resolution images in a language model.
  • Reports indicate up to 67 percent reduced computational load and an accuracy around 98 percent. That is the difference between guessing and knowing.

The technical core: navigator.modelContext

Everything revolves around the new navigator.modelContext object. It exposes four central methods.

  • registerTool() makes a function visible to the AI agent.
  • unregisterTool() removes the function from the AI agent’s access.
  • provideContext() sends additional metadata, such as user preferences, to the agent.
  • clearContext() clears shared data and strengthens privacy.

It functions as a control panel where the developer decides exactly what the agent is allowed to do.
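To see how the four methods fit together, here is a tiny in-memory stand-in for navigator.modelContext (our own mock for illustration, not the browser implementation):

```javascript
// Minimal mock of the modelContext surface described above.
// The real object lives on navigator.modelContext in flagged Chrome builds.
function createMockModelContext() {
  const tools = new Map();
  let context = null;
  return {
    registerTool(tool) { tools.set(tool.name, tool); }, // expose a function
    unregisterTool(name) { tools.delete(name); },       // withdraw it again
    provideContext(data) { context = data; },           // share metadata
    clearContext() { context = null; },                 // privacy: wipe it
    // Inspection helpers for this sketch only (not part of the proposal):
    hasTool(name) { return tools.has(name); },
    getContext() { return context; },
  };
}
```

Walking a tool through register, provide, clear, and unregister against this mock makes the control-panel idea concrete before touching the real API.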

Safety first, always

Safety is a given. WebMCP is built on a permission-first principle: the AI agent cannot perform sensitive actions without the browser acting as an intermediary.

In many cases, the user receives a confirmation before anything is carried out. The user retains control while the agent does the heavy lifting. At the same time, there is the option to clear context data to avoid sensitive information being stored unnecessarily.
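One way to honor that permission-first idea in your own tool handlers is to gate sensitive actions behind an explicit confirmation step. A sketch (the wrapper and the injected confirmFn are our assumptions, not part of the spec; in a page, confirmFn would show a dialog to the user):

```javascript
// Wraps a tool handler so it only runs after confirmFn resolves to true.
// Injecting confirmFn keeps the gate itself testable outside the browser.
function withConfirmation(confirmFn, handler) {
  return async function gated(args) {
    const ok = await confirmFn(args);
    if (!ok) {
      return { status: "cancelled" }; // user said no: do nothing
    }
    return handler(args);
  };
}
```

The agent still does the heavy lifting, but the user keeps the final say on anything irreversible.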

This is how you get started with WebMCP

Starting to work with WebMCP requires three basic steps.

First, the correct version of Chrome needs to be used. The features are initially tested via the Early Preview Program and in newer versions like Chrome 146. Apply for access through Google’s developer program and enable relevant experimental features.

The next step is to identify which features on the website are suitable as tools. Start simple. A booking form, a contact request, or a product that can be added to the shopping cart are good candidates.

If the website is relatively simple, the declarative method can be used. Add attributes to the form and describe the features clearly. The descriptions should be specific and structured to avoid misinterpretations from language models.

For more complex flows, the imperative method is implemented. Register tools via navigator.modelContext.registerTool() and define clear JSON schemas. Test how different models interpret the descriptions and adjust until the behavior is stable.
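Before a handler touches its arguments, it also pays to validate them against the declared schema, since a model may produce malformed input. A minimal checker for a flat parameters object like { productId: { type: "string" } } (our own sketch; a production implementation would use a full JSON Schema validator):

```javascript
// Checks that every declared parameter is present and has the right
// primitive type. Returns a list of problems, empty if the input is valid.
function validateArgs(parameters, args) {
  const errors = [];
  for (const [name, spec] of Object.entries(parameters)) {
    if (!(name in args)) {
      errors.push(`missing parameter: ${name}`);
    } else if (typeof args[name] !== spec.type) {
      errors.push(`wrong type for ${name}: expected ${spec.type}`);
    }
  }
  return errors;
}
```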

Finally, the security flows should be thoroughly tested. Ensure that the user always has the final say and that sensitive data can be cleared with clearContext.

Getting started is not just about technology. It’s about rethinking how a website communicates. From graphical interface to structured function.

And here are some examples

Google has built WebMCP into Chrome and is making it available as an early preview in Chrome 146 Canary behind an experimental flag. The standard is being developed jointly by engineers from Google and Microsoft and is managed under the W3C umbrella, which bodes well for broad, vendor-neutral adoption.

There are two ways for developers to make their site “agent-ready”:

The declarative API is the simple version. Here you add new attributes directly in HTML forms. A form to search for flights might look something like this:

html

<form toolname="searchFlights" tooldescription="Search for flights">
  <input name="origin" type="text" required>
  <input name="destination" type="text" required>
  <input name="date" type="date" required>
  <button type="submit">Search</button>
</form>

The result? Chrome automatically reads these tags and creates a schema the AI agent understands. The agent no longer needs to figure out what the form does, it knows.
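Roughly, the generated schema might look like this (illustrative only; the article does not document the exact format Chrome produces, so field names here are an assumption based on the form above):

```json
{
  "name": "searchFlights",
  "description": "Search for flights",
  "parameters": {
    "origin": { "type": "string", "required": true },
    "destination": { "type": "string", "required": true },
    "date": { "type": "string", "format": "date", "required": true }
  }
}
```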

The imperative API is for more complex flows and requires JavaScript. Here you register tools programmatically via navigator.modelContext.registerTool(). An e-commerce example might look like this:

javascript

navigator.modelContext.registerTool({
  name: "addToCart",
  description: "Add a product to the shopping cart",
  parameters: {
    productId: { type: "string" },
    quantity: { type: "number" }
  },
  execute: async ({ productId, quantity }) => {
    // cart is the site's own shopping-cart module
    return await cart.add(productId, quantity);
  }
});

Now the AI agent can call this function directly without needing to visually find the “Buy” button.

A new standard for the agentic web

WebMCP marks the beginning of a more structured relationship between AI and websites. Instead of interpreting pixels, AI gains access to a toolkit of defined functions.

It is more than an update. It is a shift in how the web is intended to be used. Less guessing. More structure. Fewer digital panic attacks.

For those building digital services, the question is not whether AI will use the web. The question is how well prepared the website is when it happens.
