A legal challenge was heard today in the European Court of Justice over a controversial EU-funded research project that uses artificial intelligence for facial “lie detection” with the aim of speeding up immigration checks. The transparency lawsuit against the European Union’s Research Executive Agency (REA), which oversees the bloc’s funding programs, was filed in March 2019 by Patrick Breyer, an MEP for Pirate Party Germany and a civil liberties activist who has previously sued the Commission successfully over a refusal to disclose documents.
He is seeking the release of documents related to the ethical evaluation, legal admissibility, marketing and results of the project. He also hopes to establish the principle that publicly funded research must comply with the EU’s fundamental rights, and in the process help avoid public money being wasted on AI “snake oil”. “As the EU continues to develop dangerous surveillance and control technologies, and will also fund arms research in the future, I expect a landmark ruling that will allow public debate on publicly funded research conducted in the service of private-sector interests,” Breyer said in a statement following today’s hearing. “With my transparency lawsuit, I want the court to rule once and for all that taxpayers, scientists, the media and members of parliament have a right to information on publicly funded research – especially in the case of pseudoscientific and Orwellian technology such as the iBorderCtrl video lie detector.”
The court has not yet set a date for its decision in the case, but Breyer said the judges questioned the agency “intensively and critically” for more than an hour – and revealed that documents related to the AI technology, which have not been made public but which the judges have reviewed, “contain a lot of information” on such ethical questions.
The presiding judge also asked whether it would not be in the interest of EU-funded research to demonstrate its trustworthiness by releasing more information about controversial projects such as iBorderCtrl.
AI ‘lie detection’
The research in question is controversial because the notion of an accurate lie detector machine remains science fiction, and for a plausible reason: there is no evidence of any “universal psychological signal” of deception.
Yet this commercial, AI-fueled research and development experiment in building a video “lie detector” – in which test subjects answer questions put to them by a virtual border guard while the system scans their webcam-captured facial expressions for what it describes as “biomarkers of deceit”, in an attempt to score the truthfulness of their expressions (yes, really) – scored over €4.5 million/$5.4 million in EU research funding under the bloc’s Horizon 2020 scheme.
The iBorderCtrl project ran from September 2016 to September 2019, with funding spread among 13 private or non-profit entities in several member states (including the UK, Poland, Greece and Hungary).
According to a written response from the Commission to questions from Breyer challenging the lack of transparency, the project’s research reports do not appear to have been made public yet.
Back in 2019, The Intercept was able to test the iBorderCtrl system for itself. The video lie detector falsely accused its reporter of lying, judging that he had given four false answers out of 16 and assigning him an overall score of 48. The publication reported that a police officer who evaluated the result said it would have triggered the system’s suggestion that the subject be sent for further checks, had the system been operating for real at the border.
The Intercept said it had to file a data access request – a right established under EU law – to obtain a copy of its reporter’s results. It also cited Ray Bull, a professor of criminal investigation at the University of Derby, who described the iBorderCtrl project as “not credible”, given the lack of evidence that the approach is an accurate way to measure deception.