ACT






FREE TOOLS

browsers
Firefox
messengers
Jappix - Thunderbird
search
Duckduckgo - Quaero - Scroogle
servers
all2all - domaine public - Telekommunisten
networks
Appleseed - Crabgrass - Diaspora - elgg - OneSocialWeb - pip.io
microblog
identi.ca

RELATED SITES

Ada Lovelace Institute - AI Now - Algorithm Watch - Algorithmic Justice League - AlgoTransparency - Atlas of Surveillance - Big Brother Watch - Citizen Lab - Conspiracy Watch - Constantvzw - controle-tes-donnees.net - Data Detox Kit - Digital Freedom Fund - Domaine Public - Do Not Track - Electronic Frontier Foundation - europe-v-facebook - Fight for the Future - Forbidden Stories - Gender Shades - Google Spleen - greatfire.org - Guard//Int - hiljade.kamera.rs - Homo Digitalis - Human Rights Watch - Inside Google - Inside Airbnb - Liberties - LobbyPlag - Make Amazon Pay - Manifest-No - Ministry of Privacy - More Perfect Union - myshadow.org - Naked Citizens - Ni pigeons, ni espions - No-CCTV - Non à l’Etat fouineur - Nothing to Hide - noyb - NURPA - Online Nudity Survey - Open Rights Group - Ordinateurs de Vote - Pixel de tracking - Police spies out of lives - Prism Break - Privacy.net - Privacy International - Privacy Project - La Quadrature du Net - Radical AI Project - Reset the Net - Save the Internet - Souriez vous êtes filmés - Sous surveillance - Spyfiles - StateWatch - Stop Amazon - Stop Data Retention - Stop Killer Robots - Stop Spying - Stop The Cyborgs - Stop the Internet Blacklist! - Stop the Spies - Stop Watching Us - Sur-ecoute.org - Technopolice - Tech Transparency Project - Transparency Toolkit - URME Surveillance - Watching Alibaba - Where are the Eyes? - Who Targets Me? - Wikifémia - Wikileaks

AlgorithmWatch


analysis
Undress or fail: Instagram’s algorithm strong-arms users into showing skin - 19 June 2020
An exclusive investigation reveals that Instagram prioritizes photos of scantily-clad men and women, shaping the behavior of content creators and the worldview of 140 million Europeans in what remains a blind spot of EU regulations. Sarah is a food entrepreneur in a large European city (the name was changed). (...)

analysis
Instagram capitalizes on nudity - 19 June 2020
An exclusive EDJNet investigation reveals that Instagram favors photos of scantily clad women and men, influencing the behavior of content creators and the worldview of 140 million Europeans in what remains a blind spot of European Union regulations. Sarah is an entrepreneur in the field of (...)

analysis
An investigation reveals the existence of a "nudity bonus" on the social network Instagram - 18 June 2020
The investigation, published on Monday 15 June on Mediapart’s website, exposes the inner workings of the algorithm of the social network Instagram, which belongs to Facebook. The photo- and video-sharing network is hugely successful: it is the second-largest social network in Europe (...)

analysis
Why policing is not predictable - 17 June 2020
Discussion prompt: When, if ever, is predictive policing effective, fair, and legitimate? What is the role of data reliability in this? Jurisdictions continue to roll out predictive policing methods that use AI-based analytics. So far, trials of these systems – especially those utilising facial recognition – (...)

analysis
On Instagram, the secret nudity bonus: undressing to gain an audience - 15 June 2020
Our investigation reveals that the social network shows followers more photos of undressed people, pushing users to post such images in order to reach the widest possible audience. A nudity bonus that raises questions extending as far as labor law. To judge by her Instagram account, Sarah* lives by the sea (...)

analysis
Auditing algorithms: on the difficulty of moving from principles to concrete applications - 14 June 2020
Researchers at Google AI (including Andrew Smart, in charge of machine-learning fairness at Google; Rebecca N. White and Timnit Gebru, who co-lead Google’s AI ethics team; and Margaret Mitchell and Ben Hutchinson, researchers in Google’s machine intelligence research group, (...)

analysis
EU makes move to ban use of facial recognition systems - 30 May 2020
Experts want to see restrictions on systems they say are error-ridden, invasive and – in the wrong hands – authoritarian. The European Commission is considering a temporary ban on the use of facial recognition in public areas for up to five years. According to an 18-page draft circulated last week, the ban, which (...)

analysis
An "error" leads to the removal of comments criticizing the Chinese Communist Party on YouTube - 27 May 2020
The video platform announced it was investigating the origins of the problem. On Tuesday 26 May, YouTube acknowledged that an "error made by an automated moderation system" had led to the censorship of comments criticizing the Chinese Communist Party (CCP). Later that day, the American entrepreneur Palmer Luckey, (...)

analysis
Automated moderation tool from Google rates People of Color and gays as “toxic” - 20 May 2020
A systematic review of Google’s Perspective, a tool for automated content moderation, reveals that some adjectives are considered more toxic than others. “As a Black woman, I agree with the previous comment.” This phrase has a 32% probability of being “toxic”, according to Google’s Perspective service. For the phrase (...)
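The 32% figure above is the `summaryScore` that Perspective returns for a comment. As an illustration only, here is a minimal sketch of how a client might build a request for Perspective’s publicly documented `comments:analyze` endpoint and read the score back; the helper names are ours, and the API key, endpoint URL and HTTP transport are deliberately left out:

```python
import json

def build_perspective_request(text, attributes=("TOXICITY",), lang="en"):
    """Build the JSON body expected by Perspective's comments:analyze call."""
    return {
        "comment": {"text": text},
        "languages": [lang],
        # one entry per requested attribute, e.g. TOXICITY, with default options
        "requestedAttributes": {attr: {} for attr in attributes},
    }

def summary_score(response, attribute="TOXICITY"):
    """Extract the 0..1 score Perspective reports for the given attribute."""
    return response["attributeScores"][attribute]["summaryScore"]["value"]

# The phrase tested in the review above:
body = build_perspective_request("As a Black woman, I agree with the previous comment.")
print(json.dumps(body, indent=2))
```

Sending such a body (with a valid API key) returns a JSON response whose `attributeScores.TOXICITY.summaryScore.value` is the probability-like score the article quotes.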

analysis
Unchecked use of computer vision by police carries high risks of discrimination - 6 May 2020
At least 11 local police forces in Europe use computer vision to automatically analyze images from surveillance cameras. The risks of discrimination run high but authorities ignore them. Pedestrians and motorists in some streets of Warsaw, Mannheim, Toulouse or Kortrijk are constantly monitored for abnormal (...)

analysis
AI Ethics Guidelines Global Inventory - 29 April 2020
Last year, AlgorithmWatch launched the AI Ethics Guidelines Global Inventory to compile frameworks and guidelines that seek to set out principles of how systems for automated decision-making (ADM) can be developed and implemented ethically. We have now upgraded this directory by revising its categories and adding (...)

analysis
How Dutch activists got an invasive fraud detection algorithm banned – AlgorithmWatch - 22 April 2020
The Dutch government has been using SyRI, a secret algorithm, to detect possible social welfare fraud. Civil rights activists have taken the matter to court and managed to get public organizations to think about less repressive alternatives. In its fight against fraud, the Dutch government has been (...)

analysis
Google apologizes after its Vision AI produced racist results - 20 April 2020
A Google service that automatically labels images produced starkly different results depending on skin tone on a given image. The company fixed the issue, but the problem is likely much broader. In the fight against the novel coronavirus, many countries ordered that citizens have their temperature checked at (...)

analysis
Automated decision-making systems and the fight against COVID-19 – our position – AlgorithmWatch - 9 April 2020
As the COVID-19 pandemic rages throughout the world, many are wondering whether and how to use automated decision-making systems (ADMS) to curb the outbreak. Different solutions are being proposed and implemented in different countries, ranging from authoritarian social control (China) to privacy-oriented, (...)

analysis
Central authorities slow to react as Sweden’s cities embrace automation of welfare management – AlgorithmWatch - 21 March 2020
Trelleborg is Sweden’s front-runner in automating welfare distribution. An analysis of the system’s source code brought little transparency – but revealed that the personal data of hundreds was wrongly made public. Trelleborg is a city of 40,000 in Sweden’s far south. Three years ago, it became the first municipality (...)

analysis
At least 10 police forces use face recognition in the EU, AlgorithmWatch reveals - 15 December 2019
The majority of the police forces that answered questions by AlgorithmWatch said they use or plan to introduce face recognition. Use cases vary greatly across countries, but almost all have in common their lack of transparency. Police departments have long attempted to acquire, structure and store data on the (...)

analysis
"Explainable AI" doesn’t work for online services – now there’s proof - 14 November 2019
New regulation, such as the GDPR, encourages the adoption of "explainable artificial intelligence." Two researchers claim to have proof of the impossibility for online services to provide trusted explanations. Most algorithms labelled "artificial intelligence" automatically identify relationships in large data (...)

analysis
On the explainability of systems: what is at stake in explaining automated decisions - 14 November 2019
Etymologically, to explain is to unfold, to unpleat – that is, to lift the ambiguities hidden in the shadow of the folds. It is therefore also to "open" (with all the meaning that "openness" – open – has acquired in the digital world), to undo, to unpack – that is, not only to remove the packaging but to unroll the (...)

analysis
Palantir, the secretive data behemoth linked to the Trump administration, expands into Europe - 11 November 2019
The data analysis company, known in particular for running the deportation machine of the Trump administration, is expanding aggressively into Europe. Who are its clients? Palantir was founded in 2004, in the wake of the September 11 attacks. Its founders wanted to help intelligence agencies organize the data (...)

analysis
Facebook enables automated scams, but fails to automate the fight against them - 9 November 2019
Scammers massively use Facebook’s advertising platform using so-called "cloakers" to evade automated checks. They would be very simple to detect but, despite announcements to the contrary, Facebook seems to tolerate them. Facebook’s targeted advertising revolutionized confidence tricks, or scams, by which a (...)