31/5/15

Invitation

This week I am devoting this space to extending an invitation to attend the presentation of a book written by Tamara, my partner in life and the mother of our two children. Few know as well as I do the effort she put into writing it, the pride she feels for her grandfather, and the joy she takes in having completed the work. Most weeks my readers find here notes about what I write. Today, I invite them to read Tamara's book. I will soon return to one of the topics they are used to.

16/5/15

Letter to Google on the implementation of the so-called right to be forgotten

A year has passed since the Court of Justice of the European Union (CJEU) ruled that Google and search engines in general are "responsible" for the processing of the personal data that appears on websites. According to the judgment, a person may ask that certain personal information be removed from search results when it is "inappropriate, irrelevant and outdated" and provided there is no public interest. Without prejudice to what I have published before about the misnamed "right to be forgotten", once Google decided to comply with the judgment I joined an initiative of some 80 academics around the world asking the company to make transparent the processes through which it is implementing the Court's decision.

Here I reproduce (in English) the letter, which has had considerable impact in various international media outlets, among them The Guardian, TechCrunch, V3, ITPro, Telegraph, Wired, NOS, The Register, International Business Times, Vice Motherboard, London Review of Books, SC Magazine, Search Engine Land, Information Week, and Wall Street Journal. Google's response to the letter is reflected here.

Full text of the letter demanding more transparency from Google over how it processes ‘right to be forgotten’ requests:

May 13, 2015

What we seek

Aggregate data about how Google is responding to the more than 250,000 requests to delist links, thought to contravene data protection laws, from name search results. We should know if the anecdotal evidence of Google’s process is representative: What sort of information typically gets delisted (e.g., personal health) and what sort typically does not (e.g., about a public figure), in what proportions and in what countries?

Why it’s important

Google and other search engines have been enlisted to make decisions about the proper balance between personal privacy and access to information. The vast majority of these decisions face no public scrutiny, though they shape public discourse. What’s more, the values at work in this process will/should inform information policy around the world. A fact-free debate about the RTBF is in no one’s interest.

Why Google

Google is not the only search engine, but no other private entity or Data Protection Authority has processed anywhere near the same number of requests (most have dealt with several hundred at most). Google has by far the best data on the kinds of requests being made, the most developed guidelines for handling them, and the most say in balancing informational privacy with access in search.

One year ago, the European Court of Justice, in Google Spain v AEPD and Mario Costeja González, determined that Google and other search engines must respond to users’ requests under EU data protection law concerning search results on queries of their names. This has become known as the Right to Be Forgotten (RTBF) ruling. The undersigned have a range of views about the merits of the ruling. Some think it rightfully vindicates individual data protection/privacy interests. Others think it unduly burdens freedom of expression and information retrieval. Many think it depends on the facts.

We all believe that implementation of the ruling should be much more transparent for at least two reasons: (1) the public should be able to find out how digital platforms exercise their tremendous power over readily accessible information; and (2) implementation of the ruling will affect the future of the RTBF in Europe and elsewhere, and will more generally inform global efforts to accommodate privacy rights with other interests in data flows.

Google reports that it has received over 250,000 individual requests concerning 1 million URLs in the past year. It also reports that it has delisted from name search results just over 40% of the URLs that it has reviewed. In various venues, Google has shared some 40 examples of delisting requests granted and denied (including 22 examples on its website), and it has revealed the top sources of material requested to be delisted (amounting to less than 8% of total candidate URLs). Most of the examples surfaced more than six months ago, with minimal transparency since then. While Google’s decisions will seem reasonable enough to most, in the absence of real information about how representative these are, the arguments about the validity and application of the RTBF are impossible to evaluate with rigour.

Beyond anecdote, we know very little about what kind and quantity of information is being delisted from search results, what sources are being delisted and on what scale, what kinds of requests fail and in what proportion, and what are Google’s guidelines in striking the balance between individual privacy and freedom of expression interests.

The RTBF ruling addresses the delisting of links to personal information that is “inaccurate, inadequate, irrelevant, or excessive for the purposes of data processing”, and which holds no public interest. Both opponents and supporters of the RTBF are concerned about overreach. Because there is no formal involvement of original sources or public representatives in the decision-making process, there can be only incidental challenges to information that is delisted, and few safeguards for the public interest in information access. Data protection authorities seem content to rely on search engines’ application of the ruling’s balancing test, citing low appeal rates as evidence that the balance is being appropriately struck. Of course, this statistic reveals no such thing. So the sides do battle in a data vacuum, with little understanding of the facts – facts that could assist in developing reasonable solutions.

Peter Fleischer, Google global privacy counsel, reportedly told the 5th European Data Protection Days on 4 May that, “Over time, we are building a rich program of jurisprudence on the [RTBF] decision.” (Bhatti, Bloomberg, 6 May). It is a jurisprudence built in the dark. For example, Mr. Fleischer is quoted as saying that the RTBF is “about true and legal content online, not defamation”. This is an interpretation of the scope and meaning of the ruling that deserves much greater elaboration, substantiation, and discussion.

We are not the only ones who want more transparency. Google’s own Advisory Council on the RTBF in February 2015 recommended more transparency, as did the Article 29 Working Party in November 2014. Both recommended that data controllers should be as transparent as possible by providing anonymised and aggregated statistics as well as the process and criteria used in delisting decisions. The benefits of such transparency extend to those who request that links be delisted, those who might make such requests, those who produce content that is or might be delisted, and the wider public who might or do access such material. Beyond this, transparency eases the burden on search engines by helping to shape implementation guidelines and revealing aspects of the governing legal framework that require clarification.

Naturally, there is some tension between transparency and the very privacy protection that the RTBF is meant to advance. The revelations that Google has made so far show that there is a way to steer clear of disclosure dangers. Indeed, the aggregate information that we seek threatens privacy far less than the scrubbed anecdotes that Google has already released, or the notifications that it is giving to webmasters registered with Google webmaster tools. The requested data is divorced from individual circumstances and requests. Here is what we think, at a minimum, should be disclosed:

What we seek

      1. Categories of RTBF requests/requesters that are excluded or presumptively excluded (e.g., alleged defamation, public figures) and how those categories are defined and assessed.
      2. Categories of RTBF requests/requesters that are accepted or presumptively accepted (e.g., health information, address or telephone number, intimate information, information older than a certain time) and how those categories are defined and assessed.
      3. Proportion of requests and successful delistings (in each case by % of requests and URLs) that concern categories including (taken from Google anecdotes): (a) victims of crime or tragedy; (b) health information; (c) address or telephone number; (d) intimate information or photos; (e) people incidentally mentioned in a news story; (f) information about subjects who are minors; (g) accusations for which the claimant was subsequently exonerated, acquitted, or not charged; and (h) political opinions no longer held.
      4. Breakdown of overall requests (by % of requests and URLs, each according to nation of origin) according to the WP29 Guidelines categories. To the extent that Google uses different categories, such as past crimes or sex life, a breakdown by those categories. Where requests fall into multiple categories, that complexity too can be reflected in the data.
      5. Reasons for denial of delisting (by % of requests and URLs, each according to nation of origin). Where a decision rests on multiple grounds, that complexity too can be reflected in the data.
      6. Reasons for grant of delisting (by % of requests and URLs, each according to nation of origin). As above, multi-factored decisions can be reflected in the data.
      7. Categories of public figures denied delisting (e.g., public official, entertainer), including whether a Wikipedia presence is being used as a general proxy for status as a public figure.
      8. Source (e.g., professional media, social media, official public records) of material for delisted URLs by % and nation of origin (with top 5-10 sources of URLs in each category).
      9. Proportion of overall requests and successful delistings (each by % of requests and URLs, and with respect to both, according to nation of origin) concerning information first made available by the requestor (and, if so, (a) whether the information was posted directly by the requestor or by a third party, and (b) whether it is still within the requestor’s control, such as on his/her own Facebook page).
      10. Proportion of requests (by % of requests and URLs) where the information is targeted to the requester’s own geographic location (e.g., a Spanish newspaper reporting on a Spanish person about a Spanish auction).
      11. Proportion of searches for delisted pages that actually involve the requester’s name (perhaps in the form of % of delisted URLs that garnered certain threshold percentages of traffic from name searches).
      12. Proportion of delistings (by % of requests and URLs, each according to nation of origin) for which the original publisher or the relevant data protection authority participated in the decision.
      13. Specification of (a) types of webmasters that are not notified by default (e.g., malicious porn sites); (b) proportion of delistings (by % of requests and URLs) where the webmaster additionally removes information or applies robots.txt at source; and (c) proportion of delistings (by % of requests and URLs) where the webmaster lodges an objection.

As of now, only about 1% of requesters denied delisting are appealing those decisions to national Data Protection Authorities. Webmasters are notified in more than a quarter of delisting cases (Bloomberg, May 6). They can appeal the decision to Google, and there is evidence that Google may revise its decision. In the remainder of cases, the entire process is silent and opaque, with very little public process or understanding of delisting.

The ruling effectively enlisted Google into partnership with European states in striking a balance between individual privacy and public discourse interests. The public deserves to know how the governing jurisprudence is developing. We hope that Google, and all search engines subject to the ruling, will open up.

Sincerely yours,

Ellen P. Goodman
Professor
Rutgers University School of Law
Co-Director
Rutgers Institute for Information Policy & Law
@ellgood
Julia Powles
Researcher
University of Cambridge, Faculty of Law
@juliapowles
Database of Academic Commentary
Jef Ausloos
Researcher
KU Leuven, ICRI/CIR – iMinds
Paul Bernal
Lecturer in Information Technology, Intellectual Property and Media Law
UEA School of Law
Eduardo Bertoni
Global Clinical Professor, New York University School of Law
Director of the Center for Studies on Freedom of Expression and Access to Information (CELE)
Palermo University School of Law
Reuben Binns
Researcher
University of Southampton
Michael D. Birnhack
Professor of Law
Tel-Aviv University, Faculty of Law
Eerke Boiten
Director of Cyber Security Centre
University of Kent
Oren Bracha
Howrey LLP and Arnold, White & Durkee Centennial Professor
University of Texas School of Law
George Brock
Professor of Journalism
City University London
Sally Broughton Micova
LSE Fellow & Acting Director, LSE Media Policy Project
London School of Economics and Political Science
Ian Brown
Professor of Information Security and Privacy
University of Oxford, Oxford Internet Institute
Robin Callender Smith
Professorial Fellow in Media Law, Centre for Commercial Law Studies
Queen Mary University of London
Caroline Calomme
MJur candidate
University of Oxford
Ignacio Cofone
Researcher
Erasmus University Rotterdam
Julie E. Cohen
Mark Claster Mamolen Professor of Law & Technology
Georgetown Law
Ray Corrigan
Senior Lecturer in Maths, Computing and Technology
Open University
Jon Crowcroft
Marconi Professor of Communications Systems
University of Cambridge, Computer Laboratory
Angela Daly
Postdoctoral Research Fellow, Swinburne University of Technology
Research Associate, Tilburg University - TILT
Richard Danbury
Postdoctoral Research Fellow
University of Cambridge, Faculty of Law
Leonhard Dobusch
Assistant Professor on Organization Theory
Freie Universitaet Berlin
Lilian Edwards
Professor of Internet Law
University of Strathclyde
Niva Elkin-Koren
Professor of Law
University of Haifa
David Erdos
University Lecturer in Law and the Open Society
University of Cambridge, Faculty of Law
Gordon Fletcher
Senior Lecturer in Information Systems
University of Salford
Michelle Frasher
Non-resident Visiting Scholar, Fulbright-Schuman Scholar
University of Illinois, European Union Center
Brett M. Frischmann
Professor of Law
Benjamin N. Cardozo School of Law
Martha Garcia-Murillo
Professor of Information Studies
Syracuse University
David Glance
Director, UWA Centre for Software Practice
University of Western Australia
Andres Guadamuz
Senior Lecturer in IP Law
University of Sussex
Edina Harbinja
Law Lecturer
University of Hertfordshire
Woodrow Hartzog
Associate Professor, Samford University, Cumberland School of Law
Affiliate Scholar, Stanford Law School, Center for Internet & Society
Andrew Hoskins
Professor
University of Glasgow
Martin Husovec
Legal Advisor, European Information Society Institute
Affiliate Scholar, Stanford Law School, Center for Internet & Society
Agnieszka Janczuk-Gorywoda
Assistant Professor
Tilburg University - TILEC
Lorena Jaume-Palasí
PhD candidate and Lecturer
Ludwig Maximilians University
Bert-Jaap Koops
Professor of Regulation and Technology
Tilburg University - TILT
Paulan Korenhof
Researcher
Tilburg University - TILT
Aleksandra Kuczerawy
Researcher
KU Leuven, ICRI/CIR – iMinds
Stefan Kulk
Researcher
Utrecht University
Rebekah Larsen
MPhil candidate
University of Cambridge, Judge Business School
David S. Levine
Associate Professor, Elon University School of Law
Visiting Research Collaborator, Princeton Center for Information Technology Policy
Affiliate Scholar, Stanford Law School, Center for Internet & Society
Michael P. Lynch
Professor of Philosophy and Director, Humanities Institute
University of Connecticut
Orla Lynskey
Assistant Professor of Law and Warden, Sidney Webb House
London School of Economics and Political Science
Daniel Lyons
Associate Professor of Law
Boston College Law School
Ian MacInnes
Associate Professor, School of Information Studies
Syracuse University
Robin Mansell
Professor, Department of Media and Communications
London School of Economics and Political Science
Alan McKenna
Lecturer
University of Kent Law School
Shane McNamee
Research Assistant, Research Centre for Consumer Law
University of Bayreuth
Maura Migliore
LL.M. candidate, Centre for Commercial Law Studies
Queen Mary University of London
Christian Moeller
Internet Policy Observatory, Center for Global Communication Studies, Annenberg School for Communication, University of Pennsylvania
University of Applied Sciences Kiel
Maria Helen Murphy
Lecturer in Law
Maynooth University
Andrew Murray
Professor of Law
London School of Economics and Political Science
John Naughton
Professor, Wolfson College
University of Cambridge
Abraham Newman
Associate Professor, School of Foreign Service
Georgetown University
Kieron O’Hara
Senior Research Fellow, Electronics and Computer Science
University of Southampton
Marion Oswald
Senior Fellow, Head of the Centre for Information Rights
University of Winchester
Pablo A. Palazzi
Professor of Law
San Andres University
Frank Pasquale
Professor of Law
University of Maryland Carey School of Law
Richard J. Peltz-Steele
Professor
University of Massachusetts Law School
Artemi Rallo
Constitutional Law Professor and Former Director, Spanish Data Protection Agency
Jaume I University
Giovanni Sartor
Professor of Legal Informatics and Legal Theory
European University Institute
Evan Selinger
Associate Professor of Philosophy
Rochester Institute of Technology
Sophie Stalla-Bourdillon
Associate Professor in IT law
University of Southampton
Konstantinos Stylianou
Fellow, Centre for Technology and Society
FGV Direito Rio
Dan Jerker B. Svantesson
Professor
Bond University Faculty of Law
Damian Tambini
Research Director and Director of the Media Policy Project
London School of Economics and Political Science
Judith Townend
Director, Centre for Law and Information Policy
Institute of Advanced Legal Studies
Alexander Tsesis
Professor of Law
Loyola University School of Law
Siva Vaidhyanathan
Robertson Professor, Department of Media Studies
University of Virginia
Peggy Valcke
Professor of Law, Head of Research
KU Leuven - iMinds
Alfonso Valero
Principal Lecturer, College of Business Law & Social Sciences
Nottingham Law School
Brendan Van Alsenoy
Researcher
KU Leuven, ICRI/CIR - iMinds
Joris van Hoboken
Research Fellow
New York University School of Law
Asma Vranaki
Postdoctoral Researcher, Centre for Commercial Law Studies
Queen Mary University of London
Kevin Werbach
Associate Professor of Legal Studies & Business Ethics
University of Pennsylvania, The Wharton School
Abby Whitmarsh
Web Science Researcher
University of Southampton
Tijmen Wisman
PhD candidate and Lecturer
VU University Amsterdam
Lorna Woods
Professor of Internet Law
University of Essex
Nicolo Zingales
Assistant Professor
Tilburg University - TILEC



9/5/15

Irresponsibility of Intermediaries

The week now ending brought certain events that reaffirm a thesis that has been argued for some time: Internet intermediaries are irresponsible! But careful, read that properly: they are "ir-responsible" in the literal sense: they are not responsible. For anyone not closely following Internet regulation, a brief clarification is worthwhile: they are not liable for content they did not create. And perhaps another clarification helps to understand the point: according to an OECD definition, "Internet intermediaries facilitate or participate in transactions between third parties on the Internet. They give access to, host, transmit and index content, products and services originated by third parties on the Internet, or provide Internet services to third parties."

The international events are essentially two: both at the celebration of World Press Freedom Day, established by UNESCO and held on May 3 in Riga, Latvia, and at the meeting of the Freedom Online Coalition held on May 4-5 in Ulaanbaatar, Mongolia, the "Manila Principles" were presented. These six principles, formally launched in April of this year, are the result of a global initiative by academics and civil society organizations: a set of guiding principles and best practices for limiting intermediary liability and thereby promoting freedom of expression and innovation. Principle 1 states that intermediaries should be shielded by law from liability for third-party content. The same principle also makes clear that intermediaries must never be required to monitor content proactively as part of an intermediary liability regime.

Let us now turn to another important event, this time in Argentina: Sala 1 of the Cámara Nacional de Apelaciones en lo Criminal y Correccional de la Capital Federal confirmed the dismissal of charges against Alberto Nakayama, Matias Botbol and Hernán Botbol, among others. Their names may not be what connects us to the subject of intermediaries, unless we recall that they were the defendants in the case known as "Taringa".


In the Chamber's decision, the echoes of what the "Manila Principles" establish are evident. The Argentine judges held that the content the complaint challenged as illegal was reached through links pointed to by Taringa; that is, it was not part of Taringa's own content but third-party material. For this reason, the judges held "that no positive conduct of illegitimate reproduction of another's work is established, nor any breach of the objective duty of care, insofar as [...] there is no obligation to verify the exchanged material ex ante, but only afterwards, once it has been reported."

In short, intermediaries are not responsible. But it must be said: those in Congress who have drafted bills proposing that intermediaries be held liable are also irresponsible, in the sense of not acting with responsibility, that is, without a careful study of what they are doing.