A US federal jury has convicted former Google engineer Linwei (Leon) Ding of stealing confidential information about Google’s artificial intelligence (AI) infrastructure and channeling it to entities linked to the People’s Republic of China. The case has quickly become a landmark example of economic espionage targeting high-performance computing and AI platforms, underscoring how damaging a single insider can be for even the most mature technology companies.
Economic espionage and trade secret theft charges
According to the US Department of Justice, the jury found Ding guilty on all 14 counts: seven counts of economic espionage and seven counts of theft of trade secrets. Each trade secret count carries a potential sentence of up to 10 years in prison, while each economic espionage count is punishable by up to 15 years. Formal sentencing is expected to follow.
Ding joined Google in 2019 and worked on its internal AI infrastructure. His role granted him access to sensitive documentation on the architecture of AI supercomputers, large-scale compute clusters, and custom machine learning (ML) solutions. In the current global race for AI capabilities, such architectural knowledge is treated as strategic intellectual property on par with model weights and proprietary algorithms.
How Google’s AI infrastructure data was exfiltrated
Prosecutors established that between May 2022 and April 2023, Ding stole more than 2,000 pages of internal Google documents related to its AI and high-performance computing stack. These materials reportedly described compute topologies, software stacks, job orchestration mechanisms, and other critical components that enable Google to build and scale AI supercomputers.
To bypass security monitoring and data loss prevention (DLP) controls, Ding used a multi-step exfiltration technique. He first copied portions of internal documents into the Apple Notes application on his corporate MacBook. He then exported those notes as PDF files and uploaded them to a personal Google Cloud account. Because the workflow resembled normal user behavior and relied on permitted applications, traditional DLP tools were less likely to flag the activity as malicious.
This pattern is consistent with what many incident response teams observe: insiders often use “living off the land” techniques—abusing legitimate tools and workflows—to blend in with normal traffic and evade automated detection.
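Each step in that chain is unremarkable on its own; the detection opportunity lies in correlating the sequence. The sketch below illustrates the idea, assuming a hypothetical, simplified telemetry schema. The event names, domains, and four-hour window are illustrative only and are not drawn from any specific product or from the case record.

```python
from datetime import datetime, timedelta

# Hypothetical, simplified telemetry records (schema is illustrative only).
EVENTS = [
    {"user": "jdoe", "time": datetime(2023, 4, 3, 21, 10), "type": "sensitive_repo_access"},
    {"user": "jdoe", "time": datetime(2023, 4, 3, 21, 40), "type": "local_pdf_export"},
    {"user": "jdoe", "time": datetime(2023, 4, 3, 22, 5),  "type": "upload",
     "destination": "personal-cloud.example.com"},
]

CORPORATE_DOMAINS = {"corp-drive.example.com"}  # assumed allow-list of sanctioned destinations
WINDOW = timedelta(hours=4)                     # illustrative correlation window


def correlate_exfil_chain(events, user):
    """Flag a user if sensitive-repo access, a local PDF export, and an upload
    to a non-corporate destination all occur within one time window:
    each step alone looks benign, the combination does not."""
    user_events = sorted((e for e in events if e["user"] == user), key=lambda e: e["time"])
    for anchor in (e for e in user_events if e["type"] == "sensitive_repo_access"):
        window = [e for e in user_events
                  if anchor["time"] <= e["time"] <= anchor["time"] + WINDOW]
        exported = any(e["type"] == "local_pdf_export" for e in window)
        uploaded = any(e["type"] == "upload" and
                       e.get("destination") not in CORPORATE_DOMAINS for e in window)
        if exported and uploaded:
            return True
    return False


if __name__ == "__main__":
    print(correlate_exfil_chain(EVENTS, "jdoe"))  # True -> raise for analyst review
```

In practice this kind of correlation would run inside a SIEM or UEBA platform over endpoint, proxy, and cloud audit logs rather than as a standalone script, but the logic is the same: judge the sequence, not the individual actions.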
Chinese tech ties, hidden conflicts, and “talent program” links
While employed at Google, Ding was simultaneously engaged with at least two Chinese technology companies. In one, he served as chief technology officer. In 2023, he founded Shanghai Zhisuan Technology Co., becoming its CEO. To investors, he reportedly promised to build an AI infrastructure platform “comparable to Google’s,” strongly suggesting an intent to commercialize knowledge derived from his employer’s proprietary systems.
Ding did not disclose to Google either his outside roles or his repeated travel to China, violating standard conflict-of-interest and disclosure policies. To conceal his absence from the office, he allegedly asked a colleague to scan his corporate badge at building entrances, creating the false impression that he was physically present and working on-site. The scheme unraveled after Google became aware of a public investor presentation Ding gave in China in late 2023.
In February 2025, prosecutors added formal economic espionage charges after uncovering that Ding had applied to a Shanghai government-sponsored “talent program.” In his application, he cited a goal of helping China reach “international-level” computing infrastructure. Investigators say he aimed to assist two Chinese state-controlled entities in building an AI supercomputer and designing specialized ML accelerators. This elevated the case beyond mere corporate theft to a national-level technology transfer risk.
Cybersecurity analysis: insider threats to AI and HPC infrastructure
The Ding case illustrates the unique challenges of defending against insider threats in AI and high-performance computing environments. Unlike external attackers, insiders operate with valid credentials and legitimate access paths, making their behavior far harder to distinguish from normal administrative activity. Industry studies such as the Verizon Data Breach Investigations Report and CERT’s insider threat research have repeatedly shown that trusted users are a significant source of data breaches, particularly in sectors holding valuable intellectual property.
Why perimeter and device trust are no longer sufficient
Many organizations still rely heavily on perimeter security and trust corporate devices by default. In this incident, Ding’s use of note-taking software and PDF exports—common productivity tasks—did not initially appear suspicious. This highlights the limits of static rules or simple content-based DLP alone. When critical design documents, infrastructure runbooks, and architecture diagrams can be transformed and moved through benign-looking channels, context and behavior become as important as content inspection.
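To see why content inspection alone struggles here, consider exact-match fingerprinting: the moment a document is retyped into a notes app or re-exported as a PDF, its byte-level fingerprint changes completely. The toy example below uses made-up document text to show the effect; real DLP products apply fuzzier matching, but the underlying limitation of purely content-centric rules remains.

```python
import hashlib

# A toy illustration (not any specific DLP product): exact-match fingerprints
# break as soon as content is retyped, reflowed, or re-exported in another format.
original = b"Cluster topology: 4096 accelerators, 3-tier fabric, scheduler vX"
copied_into_notes = b"Cluster topology - 4096 accelerators, 3-tier fabric, scheduler vX\n"


def fingerprint(blob: bytes) -> str:
    """Return a SHA-256 digest of the raw bytes."""
    return hashlib.sha256(blob).hexdigest()


print(fingerprint(original))
print(fingerprint(copied_into_notes))
# The two digests share nothing, so a hash-based blocklist never fires;
# detection has to weigh context (who, from where, to where) as well as content.
```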
Behavioral analytics, least privilege, and continuous monitoring
Modern defenses for AI infrastructure require User and Entity Behavior Analytics (UEBA), strict access segmentation, and a robust implementation of the principle of least privilege. UEBA tools can detect anomalies such as large-scale document exports, unusual file formats, or atypical activity outside normal working hours. Access to AI supercomputer designs, orchestration frameworks, and ML optimization pipelines should be tightly limited on a need-to-know basis, with detailed logging and regular review of high-risk access patterns.
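As a rough illustration of the kind of anomaly UEBA tooling looks for, the sketch below scores one day's document-export volume against a user's own baseline and weights the score up for off-hours activity. The counts, weighting, and alert threshold are hypothetical and would need tuning against real telemetry.

```python
from statistics import mean, pstdev

# Hypothetical daily document-export counts for one engineer over 30 days;
# all numbers here are illustrative, not tuned production values.
baseline = [3, 5, 2, 4, 6, 3, 4, 5, 2, 3, 4, 5, 3, 2, 4,
            3, 5, 4, 2, 3, 6, 4, 3, 5, 2, 4, 3, 5, 4, 3]
today_exports = 240             # e.g. a bulk PDF export
today_off_hours_fraction = 0.8  # share of activity outside working hours


def export_anomaly_score(history, today, off_hours_fraction):
    """Z-score of today's export volume against the user's own baseline,
    weighted up when most of the activity happens off-hours."""
    mu, sigma = mean(history), pstdev(history) or 1.0
    z = (today - mu) / sigma
    return z * (1.0 + off_hours_fraction)


score = export_anomaly_score(baseline, today_exports, today_off_hours_fraction)
print(f"anomaly score: {score:.1f}")
if score > 10:  # illustrative cut-off
    print("escalate: unusual bulk export for this user")
```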
Practical lessons for protecting AI trade secrets and supercomputing platforms
1. Harden access to AI infrastructure knowledge. Source code, architecture diagrams, deployment blueprints, and performance-tuning playbooks for AI clusters must be treated as crown-jewel assets. Enforce role-based access control, multifactor authentication, and mandatory logging and review for any access to highly sensitive repositories.
2. Combine DLP with UEBA and context-aware monitoring. Traditional DLP should be augmented with behavioral analytics that can correlate multiple weak signals—such as mass copying of documents, conversion to unusual formats, or uploads to personal cloud accounts—into a high-confidence alert, especially for privileged engineers; a minimal sketch of this kind of correlation appears after this list.
3. Actively manage conflicts of interest and external engagements. Organizations building advanced AI and high-performance computing capabilities should maintain rigorous processes for declaring outside work, board roles, and startup involvement. Periodic reviews, background checks for highly privileged staff, and clear policies on working with foreign entities reduce opportunities for undisclosed side activities.
4. Build a security-aware culture around intellectual property. Employees need to understand that architecture documents and infrastructure designs are trade secrets with legal protections comparable to patents or source code. Regular training on insider risk, economic espionage, and reporting channels for suspicious behavior is essential, particularly in research, cloud, and AI engineering teams.
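Returning to the second recommendation, the sketch below shows one simple way to fold several weak signals into a single risk score that can drive an escalation decision. The signal names, weights, and threshold are assumptions chosen for illustration only.

```python
# A minimal sketch of combining weak DLP/UEBA signals into one score;
# signal names and weights are hypothetical, not a vendor scheme.
SIGNAL_WEIGHTS = {
    "bulk_document_copy": 0.3,
    "unusual_export_format": 0.2,
    "personal_cloud_upload": 0.4,
    "off_hours_activity": 0.1,
    "privileged_role": 0.2,
}


def insider_risk_score(observed_signals):
    """Sum the weights of observed signals, capped at 1.0 for a normalized score."""
    return min(1.0, sum(SIGNAL_WEIGHTS.get(s, 0.0) for s in observed_signals))


observed = ["bulk_document_copy", "unusual_export_format",
            "personal_cloud_upload", "privileged_role"]
score = insider_risk_score(observed)
print(f"risk score: {score:.2f}")
if score >= 0.7:  # illustrative escalation threshold
    print("open an insider-risk case for analyst review")
```

The design point is that none of these signals would justify an alert in isolation; scoring them together is what turns routine-looking productivity activity into a reviewable case.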
The conviction of Linwei Ding is a reminder that in the era of large-scale AI and cloud supercomputing, defending intellectual property is as much about governance, access control, and culture as it is about firewalls and encryption. Organizations developing AI platforms and high-performance computing environments should reassess their insider threat posture, strengthen monitoring of high-value assets, and invest in comprehensive cybersecurity programs that anticipate not only external attacks, but also the rare yet highly damaging trusted insider.