28/5/13

A Technological Solution to the Challenges of Online Defamation

This piece was originally published on Global Voices Advocacy at http://advocacy.globalvoicesonline.org/2013/05/28/a-technological-solution-to-the-challenges-of-online-defamation/

When people are insulted or humiliated on the Internet and decide to take legal action, their cases often follow a similar trajectory. Consider this scenario:
A public figure, let’s call her Senator X, enters her name into a search engine. The results surprise her — some of them make her angry because they come from Internet sites that she finds offensive. She believes that her reputation has been damaged by certain content within the search results and, consequently, that someone should pay for the personal damages inflicted.
Her lawyer recommends appealing to the search engine – the lawyer believes that the search engine should be held liable for the personal injury caused by the offensive content, even though the search engine did not create the content. The Senator is somewhat doubtful about this approach, as the search engine will also likely serve as a useful tool for her own self-promotion. After all, not all sites that appear in the search results are bothersome or offensive. Her lawyer explains that while the authors behind the offensive results will likely be difficult to track down, they should also be held liable. At that point, one option is to request that the search engine block any offensive sites related to the individual’s name from its searches. Yet the lawyer knows that this cannot be done without an official petition, which will require a judge’s intervention.
“We must go against everyone – authors, search engines – everyone!” the Senator will likely say. “Come on!” says the lawyer, “let's move forward.” However, it does not occur to either the Senator or the lawyer that there may be an alternative approach to that of classic courtroom litigation. The proposal I make here suggests a change to the standard approach – a change that requires technology to play an active role in the solution.
Who is liable?
The “going against everyone” approach poses a critical question: Who is legally liable for content that is available online? Authors of offensive content are typically seen as primarily liable. But should intermediaries such as search engines also be held liable for content created by others?
This last question raises a more specific, procedural question: Which intermediaries will be the subjects of scrutiny and viewed as liable in these types of situations? What exactly is an ‘intermediary’, and how do we evaluate where an intermediary’s responsibility lies? To answer these questions, we must distinguish intermediaries that simply connect individuals to the Internet (e.g. Internet service providers) from those that host content or offer content search functions.
What kind of liability might an intermediary carry?
This brings us to the second step in the legal analysis of these situations: Which model should we use to define an intermediary’s responsibility? Various models have been debated in the past. Leading concepts include:
  • strict liability, under which the intermediary is legally answerable for all offensive content it carries, regardless of its knowledge of or involvement in that content
  • subjective liability, under which the intermediary’s liability depends on its own conduct and on what it was or is aware of
  • conditional liability (a variation on subjective liability), under which an intermediary that was notified that it was hosting or directing users to illegal content, and did nothing in response, becomes legally answerable for that content.
These three options for determining liability for offensive online content have been included in certain legislation and applied in judicial decisions around the world. But none of the three provides a perfect standard. As a result, experts continue to search for a definition of liability that will satisfy those who have a legitimate interest in preventing the damage that offensive online content can cause.
How are victims compensated?
Now let’s return to the example presented earlier. Consider the concept of Senator X’s “satisfaction.” In these types of situations, “satisfaction” is typically economic — the victim will sue for a certain amount of money in “damages”, and she can target anyone involved, including the intermediary.
Interestingly, in the offline world, alternatives have been found for victims of defamation: For example, the “right to reply” aims to aid anyone who feels that his or her reputation or honor has been damaged and allows individuals to explain their point of view.
We must also ask whether the right to reply contradicts freedom of expression. It is critical to recognize that freedom of expression is a human right recognized by international treaties; technology should be able to offer a solution to online defamation similar to the right to reply without putting that freedom at risk.
Solving the problem with technology
In an increasingly online world, we have unsuccessfully attempted to apply traditional judicial solutions to the problems faced by victims like Senator X. There have been many attempts to apply traditional standards because lawyers are accustomed to using them in other situations. But why not change the approach and use technology to provide that “satisfaction”?
The idea of including technology as part of the solution, when it is also part of the problem, is not new. If we combine the possibilities that technology offers us today with the older idea of the right to reply, we could change the broader focus of the discussion.
My proposal is simple: some intermediaries (like search engines) should create a tool that allows anyone who feels that he or she is the victim of defamation and offensive online content to denounce and criticize the material on the sites where it appears. I believe that for victims, the ability to say something and to have their voices heard on the sites where others will come across the information in question will be much more satisfactory than a trial against the intermediaries, where the outcome is unknown.
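To make the idea more concrete, here is a minimal sketch, in Python, of the kind of record an intermediary might keep behind such a reply tool. Everything in it (the ReplyRecord and ReplyStore names, the fields, the example URL) is a hypothetical illustration of my own, not part of any existing search-engine API.

# A minimal, hypothetical sketch of the data an intermediary could keep for a
# "right to reply" tool. The names and fields below are illustrative only;
# they are not part of any existing search-engine API.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class ReplyRecord:
    """A reply attached to a specific URL by the person who feels defamed."""
    target_url: str    # page containing the allegedly defamatory content
    author_name: str   # person exercising the right to reply
    reply_text: str    # their side of the story, shown next to the content
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


class ReplyStore:
    """Indexes replies by URL so they can be shown wherever the page appears."""

    def __init__(self) -> None:
        self._replies: dict[str, list[ReplyRecord]] = {}

    def add_reply(self, record: ReplyRecord) -> None:
        self._replies.setdefault(record.target_url, []).append(record)

    def replies_for(self, url: str) -> list[ReplyRecord]:
        return self._replies.get(url, [])


# Usage: the victim attaches her reply to the offending page, and the
# intermediary can surface it whenever that page shows up in results.
store = ReplyStore()
store.add_reply(ReplyRecord(
    target_url="https://example.com/offensive-article",
    author_name="Senator X",
    reply_text="This article misrepresents the facts; here is my response.",
))
for reply in store.replies_for("https://example.com/offensive-article"):
    print(reply.author_name, "replied:", reply.reply_text)

The point of the sketch is simply that attaching a reply to a URL, rather than removing the URL, preserves both the original content and the victim's voice.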
This proposal would also help to limit regulations that impose liability on intermediaries such as search engines. This is important because many of the regulations that have been proposed are technologically impractical. Even when they can be implemented, they often result in censorship; requirements that force intermediaries to filter content regularly infringe on rights such as freedom of expression or access to information.
This proposal may not be easy to implement from a technical standpoint. But I hope it will encourage discussion about the issue, given that a tool like the one I have proposed, although with different characteristics, was once part of Google’s search engine (the tool, “Google Sidewiki”, has since been discontinued). It should be possible to improve upon this tool, adapt it, or do something completely new with the technology it was based on in order to help victims of defamation clarify their opinions and speak their minds about these issues, instead of relying on courts to impose censorship requirements on search engines. This tool could provide much greater satisfaction for victims and could help prevent the violation of the rights of others online as well.
Critics may argue that people will not read the disclaimers or statements written by “defamed” individuals and that the impact and spread of the offensive content will continue unfettered. But this is a cultural problem that will not be fixed by placing liability on intermediaries. As I explained before, the consequences of doing so can be unpredictable.
If we continue to rely on traditional regulatory means to solve these problems, we’ll continue to struggle with the undesirable results they can produce, chiefly increased controls on information and expression online. We should instead look to a technological solution as a viable alternative that cannot and should not be ignored.

5 comments:

  1. Just the other day this occurred to me: upon a request from a third party (a model, a senator, etc.), search engines could display a symbol in the search results announcing that there is some dispute involving those pages (where they have been identified), and on entering the sites a notice would appear with the affected person's rebuttal and/or an explanation of the dispute. At least until a judge orders the search engines whether or not to take the content down. If after some time the matter is not taken to court, the search engine removes the notice. What do you think?

  2. Thanks for the comment. My idea goes in that direction! Tomorrow I will publish this same piece in Spanish.

    Replies
    1. Especially because the other day I heard one of the lawyers for the models/celebrities, and one of his arguments was that the time it takes to get content taken down when going through the courts is harmful to those whose honor has been affected. I think this approach would greatly reduce that harm.
      (I'm Atilio, by the way. I later realized that no contact information appears in my Blogger profile.)

  3. Eduardo, the proposal is interesting, although I still cannot see whether it would really be effective in finding a solution and repairing the harm caused by defamation. It is true that the possibility of replying acts as a kind of release for the affected person, who could immediately give his or her version of the facts. Jules Verne wrote a book entitled "Paris in the Twentieth Century" in which he imagined that the press would disappear around 1960 because newspapers devoted themselves solely to publishing defamation while, at the same time, the affected parties were allowed to exercise their right to reply. As a result, Verne imagined, newspapers ended up being mere vehicles for transmitting replies and counter-replies instead of information. The question is whether the proposal to reply to defamation (which of course is a human right) would not generate an endless exchange of accusations and mutual defamation on the web between the one who started it and the one who was affected in the first place.

  4. On the other hand, the emotional "release" one gets from replying to defamation may not be effective in some cases. For example, if a website is devoted solely to insulting a person, it would be difficult to reply without committing another insult. It might even be advisable not to express any opinion at all. However, whoever is affected (especially a person without public fame) will naturally want the site where they are being defamed to be taken down or rendered ineffective. To that end, until some kind of online Court of Justice emerges, I see no solution other than turning to the courts of the affected person's place of residence to seek, through a "medida autosatisfactiva" (an expedited injunction), the removal of the offending site, of course subject to prior review by a judge who will weigh whether the public interest in the dissemination of ideas, opinions and information is at stake. I omitted to identify myself: Manuel Larrondo, Argentine, lawyer, professor of Communication Law at the Facultad de Periodismo of the Universidad Nacional de La Plata.
