from Hotznplotzn@lemmy.sdf.org to world@lemmy.world on 24 Feb 05:10
https://lemmy.sdf.org/post/51341873
cross-posted from: lemmy.sdf.org/post/51341402
A series of patents filed over the past two years indicates that institutions across China are working out how to use AI to improve grassroots surveillance.
Reading between the lines, a dry little document released by the Fujian Police Academy in December last year is a small window onto the future of authoritarianism.
The academy, which operates directly under the Fujian provincial government and conducts research to improve public security mechanisms, proposes a new method for detecting an abnormal build-up of people into “potential mass incidents” (潜在群体性事件), an oft-used bureaucratic euphemism for collective protests, riots, demonstrations, strikes, and other forms of organized public unrest. The method uses an AI model fed with data from sound sensors, cameras, and official reports. The system flags an incident as soon as it starts to develop, giving the police advance warning; if it overlooks an incident, the video footage and recordings are reviewed to improve detection in future. This is machine learning in the service of AI-based surveillance.
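The detect-then-retrain loop the patent reportedly describes can be illustrated with a toy sketch. The patent gives no implementation details, so every class name, feature, weight, and threshold below is invented; only the three signal sources (sound sensors, cameras, official reports) and the review-after-a-miss step come from the article.

```python
# Toy illustration of a flag-and-review loop: alert when a combined score
# crosses a threshold, and loosen the threshold after a missed incident.
# All names, weights, and thresholds are hypothetical.

class IncidentDetector:
    def __init__(self, threshold=0.8):
        self.threshold = threshold  # score above which an alert is raised

    def score(self, noise_level, crowd_density, report_count):
        # Crude weighted blend of the three signal sources the article
        # mentions: sound sensors, street cameras, official reports.
        return (0.4 * noise_level
                + 0.4 * crowd_density
                + 0.2 * min(report_count / 5, 1.0))

    def flag(self, noise_level, crowd_density, report_count):
        return self.score(noise_level, crowd_density, report_count) >= self.threshold

    def review_missed_incident(self, noise_level, crowd_density, report_count):
        # The "learning" step: after an overlooked incident, move the
        # threshold toward the score that incident produced, so similar
        # events are caught next time.
        missed_score = self.score(noise_level, crowd_density, report_count)
        if missed_score < self.threshold:
            self.threshold = (self.threshold + missed_score) / 2
```

In this sketch a loud, dense gathering with several reports scores well above the default threshold and is flagged, while each missed incident permanently lowers the bar, which is one way such a system drifts toward over-flagging.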
[…]
Throughout the past year, institutions across China, both private and state-owned, have proposed variations of the same system: taking big data from China’s extensive surveillance system — including input from street cameras and satellites, noise sensors, social media posts, as well as reports from social services — and feeding it into AI models to aid predictive policing. This is part of the government’s vision of a fusion of human and machine response, making for a more robust domestic security system.
The trend does not bode well for the most vulnerable sections of Chinese society.
[…]
While some institutions are making use of Chinese AI models for these projects, Western ones are also being considered. In August 2025 Guizhou Normal University suggested using OpenAI’s GPT models as a “core reasoning tool” in a system to predict “social governance incidents” based on reports of an individual’s “personality traits,” “long-term emotional states” or “degree of exposure to negative cultural influences.” The patent does not specify how data on “negative cultural influences” would be collected, though any such system would depend on extensive pre-existing surveillance infrastructure. While OpenAI has banned individual Chinese users from accessing its products since 2024, businesses in China can still access OpenAI models through Microsoft Azure.
[…]
How would these inventions impact society? The systems described in these patents would likely fall hardest on the most vulnerable members of Chinese society. The algorithms are programmed around catch-all risk categories commonly associated with violent or disorderly behavior, with little apparent regard for individual circumstances. Guizhou’s risk monitoring system assesses an individual’s danger level using factors that include a “criminal record, drug abuse record, serious mental illness,” as well as tense relationships with family members. It is not clear how the algorithm would make allowances for those, say, whose criminal record consists of minor offences rather than major ones, or whose family relationships are tense because they live with abusive parents or spouses.
[…]
It is also a chance to exert greater control over a system that has persistently caused trouble for local authorities. The Southwestern University of Political Science and Law in Chongqing has created a risk monitoring system specifically targeted at petitioners, individuals who are seeking redress for a wrong done to them either by a local cadre or peer. Petitioners are frequently driven to increasingly desperate acts after years spent navigating a grievance system that rarely produces results — a dynamic that authorities have long treated as a public order problem rather than a governance failure.
The invention would place sensors and cameras in spaces where citizens meet officials, sending a warning to police when heightened emotion is detected through noise sensors and facial recognition software. But the algorithm is also programmed to take “Life Observations” into account. Subjects are considered high risk if they have spread inflammatory comments on social media more than three times in one month, have not had steady employment for over a year, lack any social security, are homeless, or are reported as “not going out [of the house] for a long time (≥ 7 days).”
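Re-expressed as code, the breadth of these criteria becomes plain. The thresholds below come from the article's description of the patent; the function and field names are invented for illustration, and the real system's logic is not public.

```python
# Toy restatement of the "Life Observations" criteria. Only the thresholds
# are taken from the reported patent; everything else is hypothetical.

def is_high_risk(inflammatory_posts_last_month: int,
                 months_without_steady_employment: int,
                 has_social_security: bool,
                 is_homeless: bool,
                 days_without_leaving_home: int) -> bool:
    return (inflammatory_posts_last_month > 3          # "over three times in one month"
            or months_without_steady_employment > 12   # "over a year"
            or not has_social_security                  # no social security at all
            or is_homeless
            or days_without_leaving_home >= 7)          # "≥ 7 days"
```

Because the criteria are joined by OR, any single condition is sufficient: simply lacking social security, or staying home for a week, flags a subject as high risk regardless of every other factor.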
[…]
#world