Google Chrome Lets Users Remove On-Device AI Model from Enhanced Protection

CyberSecureFox 🦊

Google is expanding the use of artificial intelligence in Chrome security while simultaneously giving users more control over these technologies. In the experimental Chrome Canary build, a new option has appeared that allows users to disable and remove the local AI model that powers the browser’s Enhanced Protection feature.

Chrome Enhanced Protection and the role of on-device AI

Enhanced Protection in Google Chrome is an advanced security mode designed to provide more aggressive protection against phishing, malicious sites, unsafe downloads and suspicious extensions. Compared with Standard Protection, it performs real-time checks against Google's Safe Browsing service and acts in a more proactive way.

In 2023, Google began strengthening Enhanced Protection by integrating on-device AI models. While technical specifications of these models are not publicly disclosed, available information and industry analysis indicate that they are used to detect malicious patterns in real time. This allows Chrome to warn users about emerging threats that may not yet be present in traditional blocklists or reputation databases.

Beyond scanning web pages, Google states that the AI model is also used for deeper inspection of suspicious downloads. Instead of relying only on file signatures, the model can evaluate behavioral traits, download context and other indicators typically associated with malware, improving detection of previously unseen threats.

Why Chrome uses a local AI model on the endpoint

The key architectural change is that the AI model is deployed locally on the user’s device, not only in Google’s cloud. This on-device approach addresses several security and privacy objectives that are increasingly important in modern browser security.

From a security standpoint, local inference reduces latency. Threat analysis happens directly on the endpoint, which is critical for blocking phishing pages that may exist only for a few hours. Faster decisions at the browser level reduce the attack window and limit user exposure.

From a privacy perspective, on-device processing minimizes the amount of browsing data that has to be sent to the cloud. Potentially sensitive signals used for detection can remain on the device, which aligns with data minimization principles important for regulated industries and privacy-conscious users.

At the same time, Google does not disclose the architecture, training data, or full capabilities of the AI model used in Chrome. This lack of transparency is a regular point of discussion among security researchers and privacy advocates, who are increasingly demanding clearer documentation and auditable behavior from AI-based security tools.

How to disable and remove the on-device AI model in Chrome Canary

According to industry reports and testing of the latest builds, Chrome now exposes an option to turn off and delete the local AI model via a setting called On-device GenAI. At the time of writing, this control is available in the Chrome Canary channel, which Google uses to test new and experimental features.

Step-by-step: disabling On-device GenAI in Chrome Canary

To remove the local AI model in Chrome Canary, users can follow these steps:

1. Open Google Chrome Canary.
2. Navigate to Settings → System.
3. Locate the On-device GenAI option.
4. Toggle this option off.

Once On-device GenAI is disabled, the local AI model is deleted from the device. Chrome will no longer use it for Enhanced Protection, and likely not for other upcoming features that depend on on-device generative AI capabilities.

Google is expected to roll out this setting to the stable version of Chrome in future releases, but no official timeline has been announced. As with many Chrome security features, the option will probably appear gradually across platforms and regions.

Security, privacy and performance impact of disabling on-device AI

Impact on protection against phishing and malware

The main security trade-off of disabling the local AI model is a reduction in proactive detection capabilities. Without AI-based behavioral analysis, Chrome falls back more heavily on classic mechanisms such as blocklists, signatures and reputation services. These remain effective against known threats, but are less agile in detecting novel or fast-moving attack campaigns.

Modern phishing campaigns frequently use short-lived domains and convincingly cloned login pages for banks, cloud services and corporate portals. Industry threat reports from major vendors consistently highlight sustained growth in these types of browser-based attacks. In such scenarios, Enhanced Protection backed by AI can provide a useful additional layer, especially for home users and small businesses that do not have dedicated security teams.

Privacy, compliance and data governance considerations

On the other hand, turning off On-device GenAI may be justified for organizations handling highly sensitive data or operating under strict regulatory regimes. In many regulated environments, any extra process or model that inspects user content is evaluated as a potential data-processing risk, even when it runs locally.

Security and compliance teams often prefer to explicitly control which AI capabilities are active in their environment, document them in data protection impact assessments and align them with internal security policies. The new Chrome setting provides a more transparent way to enforce such policies at the browser level.

Performance and resource usage on endpoint devices

Running local AI models typically consumes disk space, RAM and CPU/GPU resources. While Google optimizes models for client devices, the overhead can still be noticeable on older or low-spec systems. For users experiencing slowdowns or high resource usage while browsing, disabling on-device AI may slightly improve responsiveness and battery life, though at the cost of some security benefits.

Growing trend: on-device AI security with explicit user and admin control

The appearance of the On-device GenAI toggle reflects a broader trend in browser security: vendors are increasingly embedding AI into protection mechanisms, but are also under pressure to provide clear, user-visible controls over how these models operate. This is relevant not only for individual users, but also for enterprises implementing zero-trust and data governance strategies.

It is reasonable to expect that Google will further expand Chrome enterprise policies to let administrators centrally enable, disable or restrict on-device AI features on corporate workstations. For sectors such as finance, healthcare and government, centralized policy enforcement over AI-driven browser security will be a critical requirement.
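Chrome already ships a related enterprise policy, GenAILocalFoundationalModelSettings, which controls whether the browser downloads its local foundational model; whether the Enhanced Protection model will be governed by this or a similar policy is an assumption, and the exact value semantics should be verified against Google's current policy documentation. As a sketch, on Linux an administrator could pre-disable on-device model downloads with a managed policy file such as /etc/opt/chrome/policies/managed/genai.json (filename arbitrary):

```json
{
  "GenAILocalFoundationalModelSettings": 1
}
```

Here 1 corresponds to "do not download the model", while 0 leaves the default download behavior in place. On Windows the equivalent setting would be deployed via Group Policy or the registry under HKLM\Software\Policies\Google\Chrome.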

For most individual users without special regulatory constraints, keeping Enhanced Protection with on-device AI enabled remains a strong recommendation, provided it is combined with multi-factor authentication, regular software updates, careful handling of browser extensions and cautious behavior with downloads and links. At the same time, the new option empowers those who need stricter control to align Chrome’s AI capabilities with their risk appetite, performance needs and compliance obligations.
