Olga Bavrina

EU AI Act Bans Social Scoring and Preventive Policing. With Exceptions 🤔

Imagine a world where the police can spot a criminal before the crime actually happens. A world where a dating app can tell you whether your potential date is prone to violence. A world where you are accepted or rejected for a job based on your facial features and voice timbre.


This is not a scene from an Isaac Asimov novel. AI is marching across the world, and this is the future we are on the verge of. The last thin line between a cyberpunk society with predictive policing and a diverse, humanistic one is the law.


The EU AI Act, adopted on 13 March 2024, bans AI applications that threaten citizens’ rights, including social scoring, preventive policing, and systems that exploit emotions and appearance. There are several “buts” here, not to mention an ethical dilemma that cannot be denied [1].


The Idea of Social Scoring


The idea of social scoring is very natural. Every person is born with a certain score, and every action they take affects this score. Say buying a pack of cigarettes takes a couple of points away, while volunteering adds a few.
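To make the mechanics concrete, here is a minimal toy sketch of such a scoring ledger. All of the actions, point values, and the baseline score below are invented for illustration; no real system is being described.

```python
# A toy social-scoring ledger. Every rule and value here is hypothetical.
SCORE_RULES = {
    "buy_cigarettes": -2,  # a couple of points taken away
    "volunteer": +5,       # additional points earned
}

class Citizen:
    def __init__(self, name: str, score: int = 100):
        self.name = name
        self.score = score  # everyone starts with the same baseline

    def record_action(self, action: str) -> None:
        # Every recorded action nudges the score up or down.
        self.score += SCORE_RULES.get(action, 0)

alice = Citizen("Alice")
alice.record_action("buy_cigarettes")
alice.record_action("volunteer")
print(alice.name, alice.score)  # Alice 103
```

The unsettling part is not the arithmetic, which is trivial, but what the score is later used for.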


One concern is that society could use such a score to make inferences about what people with certain scores might do — about things that haven’t yet happened:


  • Is this person likely to abuse social services or insurance companies?

  • Is this person likely to be violent?

  • Is this person likely to commit a crime?


How likely must the crime be before it counts as a threat? Should we take action?


As you can see, the idea of social scoring and proactive policing is not new at all. It’s so old that humanity has experienced its devastating consequences many times. It’s only the means that evolve. 


Social Scoring and AI Deanonymization Today


Facial recognition, credit scoring, and AI-boosted decision-making were not born yesterday. They were developed as proactive solutions for risk minimization and remote identification.


There are multiple global examples launched over the last decade:


UK police have been using public facial recognition systems via street cameras since 2016 (first launched by the London Met Police). The reported accuracy is controversial, with the most optimistic assessment at 80%, which means innocent people being detained by the police because they resemble a suspect is not a rare occurrence [2].
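A quick back-of-the-envelope calculation shows why even an 80% detection rate produces many wrongful stops. All of the numbers below are assumptions chosen for illustration, not figures from the Met Police trials.

```python
# Base-rate arithmetic for crowd-scale facial recognition.
# Every figure here is an assumption, not reported trial data.
crowd_size = 100_000          # people scanned in one day
suspects_in_crowd = 10        # actual watchlist members present
true_positive_rate = 0.80     # the "optimistic" 80% detection rate
false_positive_rate = 0.001   # 0.1% of innocent faces wrongly matched

true_hits = suspects_in_crowd * true_positive_rate
false_alarms = (crowd_size - suspects_in_crowd) * false_positive_rate
precision = true_hits / (true_hits + false_alarms)

print(f"Expected true hits:    {true_hits:.0f}")     # ~8
print(f"Expected false alarms: {false_alarms:.0f}")  # ~100
print(f"Flagged people who are real suspects: {precision:.1%}")  # ~7.4%
```

With only a handful of real suspects in a huge crowd, even a tiny false-positive rate means most of the people flagged, and potentially detained, are innocent.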


Public venues in the US and UK, like bars and clubs, have been using PatronScan, originally intended for age verification, to ban unwanted patrons from particular places. Of course, this effort is well-intended, but from time to time it can be too personal and biased [3].


C2C (sharing-economy) services like Airbnb and taxi platforms all run credibility scoring for clients. The same can be said of banks and insurance companies, which launched client scoring long before it became mainstream.


AI-boosted targeted advertising and newsfeeds are also a headache for many users, who can be deeply offended by what an AI decides is relevant to them. No matter how social networks try to soften this with their “I don’t want to see this” options, users can still end up depressed and anxious because of the content selected for them.


During the COVID-19 pandemic, the Israeli Ministry of Health approved the launch of an app that monitors one’s surroundings to determine COVID exposure within the previous two weeks.


Finally, there is China’s Social Credit System, launched in its current form in 2014. It is a working system that literally affects the lives of individuals and businesses. The consequences of a low score range widely, from lower interest on bank savings to internet throttling and hotel bans [4].


Notably, well-established regulations like the GDPR (2016) require service providers to keep their scoring rules transparent so that customers can predict the consequences of their actions. This, however, undermines the preventive idea of scoring: once the rules are clear, it also becomes clear how to game the system.
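Continuing the hypothetical ledger from above, the gaming problem is easy to demonstrate: once a rule table is published, it becomes a searchable optimization target rather than a deterrent.

```python
# Once scoring rules are public, offsetting a penalty is a simple search.
# The rule table is the same invented one as in the earlier sketch.
SCORE_RULES = {
    "buy_cigarettes": -2,
    "volunteer": +5,
    "donate_blood": +3,
}

def cheapest_recovery(points_needed: int) -> list[str]:
    """Greedily repeat the highest-value positive action to offset a penalty."""
    positive = sorted(
        ((value, action) for action, value in SCORE_RULES.items() if value > 0),
        reverse=True,
    )
    best_value, best_action = positive[0]
    plan = []
    while points_needed > 0:
        plan.append(best_action)
        points_needed -= best_value
    return plan

# A bad actor can precompute exactly how to cancel any penalty:
print(cheapest_recovery(7))  # ['volunteer', 'volunteer']
```

Transparency protects honest customers, but it hands the same playbook to anyone who wants to game the score.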


The (Not-So) Banned AI Applications


To address these social scoring concerns, the AI Act provides a closed list of prohibited technology applications, most of which carry “except for” clauses. Namely [5]:


  • using subliminal, purposefully manipulative, or deceptive techniques to materially distort behavior, leading to significant harm;


  • exploiting vulnerabilities of a person or group due to specific characteristics, leading to significant harm;


  • biometric categorization systems that individually categorize a person based on sensitive information, except for labelling or filtering lawfully acquired biometric datasets in the area of law enforcement;


  • social scoring systems;


  • real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes, except for a narrow set of situations such as searching for victims of serious crimes or preventing an imminent terrorist threat;


  • predictive policing based solely on profiling or personality traits, except when supporting human assessments based on objective, verifiable facts linked to criminality;


  • facial recognition databases based on untargeted scraping;


  • inferring emotions in workplaces or educational institutions, except for medical or safety reasons.


But the question has to be asked: does this mean that all of the above is fine as long as certain conditions are met?


EU AI Act - The Bottom Line


The EU AI Act is the first, and a rather tentative, attempt to bring AI technology under control.


Stating, in effect, that “public facial recognition is a violation of citizens’ rights, BUT...” loads the Act with implications: namely, that it is acceptable for governments to curtail human rights for a supposed greater good. This is opening Pandora’s box.


This new initiative is raw but needed; the areas where it has room to grow include the following:


Development of frameworks and standards that let us validate AI Act compliance or identify gaps. Although the terms are blurry, the fines of up to 7% of global turnover are defined very precisely.


A new level of data protection. Given that governments are allowed to run social scoring for the purposes of preventing severe crime, a leak of score data and algorithms would be disastrous for global security.


Criteria for AI usage by non-government-affiliated service providers. The fine line between gamification, content adjustment, and product decisions on one side, and privacy violation and discrimination on the other, should be drawn as precisely as possible.


An adjustment strategy for service providers who now find themselves violating the AI Act even though they operated legally when they launched.


The global perspective of such an act. Since it is enforced in the EU jurisdiction only, it can contradict upcoming regulations in other jurisdictions, so global companies will need to keep an eye on developments and be prepared to react.


PartnerAlly Can Help


Compliance is not something that can be built in a day and then left untouched once and for all. Reliable, maintainable compliance programs that not only formally cover the business but genuinely strengthen it are an important part of company culture and strategy, regardless of the domain. Staying aware and ready to act is vital in a constantly changing world; that’s why fully delegating compliance is a risk. At the same time, we know very well how much time and effort it takes to build sustainable compliance programs.


With our compliance automation services and expertise, we can help you offload the paperwork and stay sharp and aware.





