Bias


analysis
Warning Signal: the messaging app’s new features are causing internal turmoil - 26 January 2021
The fast-growing encrypted messaging app is making itself increasingly vulnerable to abuse. Current and former employees are sounding the alarm. On January 6th, WhatsApp users around the world began seeing a pop-up message notifying them of upcoming changes to the service’s privacy policy. The changes were (...)

analysis
Developing Algorithms That Might One Day Be Used Against You - 25 January 2021
Machine learning algorithms serve us the news we read, the ads we see, and in some cases even drive our cars. But there’s an insidious layer to these algorithms: They rely on data collected by and about humans, and they spit our worst biases right back out at us. For example, job candidate screening algorithms may (...)

information
analysis
Sexist, homophobic, ableist... A South Korean chatbot taken offline after going off the rails - 19 January 2021
A conversational program available on Facebook Messenger absorbed the hateful remarks of some of its users before being deactivated. A hugely popular South Korean chatbot, a conversational robot that let users chat with what was presented as a 20-year-old female student, was taken offline this week (...)

analysis
The dark side of open source intelligence - 16 January 2021
Internet sleuths have used publicly available data to help track down last week’s Washington D.C. rioters. But what happens when the wrong people are identified? In May, a video of a woman flouting a national Covid-19 mask mandate went viral on social media in Singapore. In the clip, the bare-faced woman argues (...)

Read
Dark Data - 15 January 2021
A practical guide to making good decisions in a world of missing data. In the era of big data, it is easy to imagine that we have all the information we need to make good decisions. But in fact the data we have are never complete, and may be only the tip of the iceberg. Just as much of the universe is composed of (...)

analysis
The limits of the prediction society - 11 January 2021
In a stimulating syllabus (.pdf) for a course held at Princeton University in autumn 2020 (see the programme and the associated documents), computer science professor Arvind Narayanan (@random_walker) and sociology professor Matt Salganik (@msalganik) have taken on an important subject: the limits of (...)

analysis
The coming war on the hidden algorithms that trap people in poverty - 8 January 2021
A growing group of lawyers are uncovering, navigating, and fighting the automated systems that deny the poor housing, jobs, and basic services. Miriam was only 21 when she met Nick. She was a photographer, fresh out of college, waiting tables. He was 16 years her senior and a local business owner who had worked (...)

analysis
Aided by Palantir, the LAPD Uses Predictive Policing to Monitor Specific People and Neighborhoods - 4 January 2021
A new report details the Los Angeles Police Department’s use of algorithms to identify “hot spots” and “chronic offenders” and target them for surveillance. Police stops in Los Angeles are highly concentrated within just a small portion of the population, and the Los Angeles Police Department has been using targeted (...)

analysis
My Data Rights - 1 January 2021
A feminist review of AI, privacy and data protection to enhance digital rights. Are we all equal in the eyes of AI? What are the opportunities and challenges for marginalised groups in society with Artificial Intelligence? What control do we have over our data as personal information is collected, stored and (...)

analysis
Flawed Facial Recognition Leads To Arrest and Jail for New Jersey Man - 1 January 2021
A New Jersey man was accused of shoplifting and trying to hit an officer with a car. He is the third known Black man to be wrongfully arrested based on face recognition. In February 2019, Nijeer Parks was accused of shoplifting candy and trying to hit a police officer with a car at a Hampton Inn in Woodbridge, (...)

analysis
Unreliable algorithms could be deciding who gets a covid vaccine first - 30 December 2020
When front-line workers at Stanford Health Care were passed over for the first wave of coronavirus vaccines, officials at the hospital in Palo Alto, Calif., blamed the “very complex algorithm” it had built to decide employees’ place in line. But unlike the sophisticated machine-learning algorithms that underpin the (...)

analysis
The “écoscore”, an environmental label that could favour industrial food - 22 December 2020
Food products could soon carry an environmental label of their own, known as the “écoscore”. But for now, the chosen method tends to favour products from intensive agriculture, which could thus be rated better than organic ones. If you were asked to rate the ecological cost of products (...)

analysis
New report highlights the risks of AI on fundamental rights - 20 December 2020
The European watchdog for fundamental rights published a report on Artificial Intelligence. AlgorithmWatch welcomes some of the recommendations, and encourages a bolder approach. The European Union Agency for Fundamental Rights (FRA), which supports European institutions and member states on related issues, (...)

complaint
What is robodebt? When was it introduced and what did it do? - 11 December 2020
There are many questions about the government’s controversial robodebt scheme. Let’s start with what it actually is. Robodebt is the common name given to the Online Compliance Intervention, an automated debt recovery program that was introduced by the federal government in mid-2016. The robodebt system was (...)

analysis
Robodebt class action: Coalition agrees to pay $1.2bn to settle lawsuit - 11 December 2020
Some 400,000 Australians will share $112m in extra compensation, lawyers say. The Australian government has agreed to a $1.2bn settlement for a class action brought on behalf of hundreds of thousands of robodebt victims. In a deal struck the day a federal court trial was set to begin, 400,000 people will share in (...)

analysis
Dealing With Bias in Artificial Intelligence - 5 December 2020
Three women with extensive experience in A.I. spoke on the topic and how to confront it. This article is part of our Women and Leadership special section, which focuses on approaches taken by women, minorities or other disadvantaged groups challenging traditional ways of thinking. Bias is an unavoidable feature (...)

analysis
Google parts ways with a researcher specialising in AI bias: what happened? - 5 December 2020
A heated controversy is stirring the academic world in the United States, and artificial intelligence research in particular. Google has parted ways with a computer scientist specialising in algorithmic bias, but the circumstances of her departure are disputed. It is an affair that is currently (...)

analysis
Google Employees Say Scientist’s Ouster Was ‘Unprecedented Research Censorship’ - 4 December 2020
Hundreds of Google employees have published an open letter following the firing of a colleague who is an accomplished scientist known for her research into the ethics of artificial intelligence and her work showing racial bias in facial recognition technology. That scientist, Timnit Gebru, helped lead Google’s (...)

analysis
Fnac’s toxic algorithms - 4 December 2020
When search results surface problematic books.

information
Sexist bias at Deepl - 2 December 2020
Sexist bias in Deepl’s translation of the article https://nbcnews.com/news/world/uber-made-big-promises-kenya-drivers-say-it-s-ruined-n1247964 (see the blue box)