On some criticisms of the digital platforms bill

On October 28 of this year, representatives of the Global Network Initiative (GNI) published an open letter to the members of the Commission on Future Challenges, Science, Technology and Innovation of the Senate of Chile, expressing concern about draft law bulletin No. 14.561-19, which would regulate digital platforms in Chile. The letter offers a series of arguments warning that the bill may put “at risk the rights to freedom of expression and privacy in Chile.” Our general impression is that those of its criticisms that can be addressed point to issues still to be developed, and could therefore have been framed more naturally as constructive criticism or recommendations, although this would have been out of line with the overall tone of the document. We consider that four of these points can be addressed in this text, and we comment on them below.
First, the letter points to a tension between, on the one hand, the obligation of digital platforms (DPs) to monitor their users’ accounts to identify illegal content and thus avoid fines and penalties and, on the other, restrictions on their ability to moderate harmful but legal content. It seems clear to us that there is no tension here. A platform may have the function of identifying illegal and even harmful content while having limited power to decide what to do with it. In fact, it seems ethically reasonable that greater power to regulate content should be reserved for civil society or the institutions that represent it. For example, with respect to content that is not illegal (and therefore carries no obligation of removal) but is nonetheless harmful (for example, because it is highly addictive), DPs should be limited to flagging it (for example, by means of some kind of digital tag, as is done in warnings for offensive content), and users should be able to decide what to do with it, provided they are of legal age.
A second criticism that we believe can be addressed is that the content restrictions cover too little on the one hand, and too much on the other. On the one hand, it is argued that some of the most harmful forms of expression, such as online harassment and targeted disinformation, are not covered. But these are not properly forms of expression at all; they are, rather, offenses, which is why a sufficiently comprehensive enumeration of the various civil offenses, including those specific to the world of cybercrime, is necessary. These offenses are included in the content that, as the bill says, “can be considered civilly defamatory, harmful …”.
On the other hand, the letter indicates that there is content that could be considered civilly defamatory or harmful but that should not be removed from the digital space if we respect the principle of equivalence, since it is not illegal outside the digital space. This seems to us a good point. But we do not conclude from it that such content should be allowed; rather, we should weigh the fact that it is far more harmful in the digital space than in the analog one, given the reach it achieves and the volume of reactions it can generate. Equivalence, in this case, would have to be measured by the effects that content can have on users. An adjustment to the equivalence criterion will be needed in any case, because the virtual world can produce an intensity of effects that does not occur in the analog world. Indeed, the degree of harm or the type of effects that a piece of data can have on people is one of the criteria normally used to classify information as sensitive.
A third argument put forward is that the neutrality proposed in Article 5 could allow authorities or other powerful figures to influence the variety of content allowed on digital platforms, insofar as it limits the ability of platforms to downgrade or curate possibly harmful, but not illegal, content. It seems to us that this objection omits the fact (already mentioned by the authors themselves) that Article 6 correctly limits that neutrality. Moreover, neutrality itself could help reduce harm by limiting the ability to privilege content, a common practice on platforms such as Facebook. For example, increasing the ranking weight of certain hate-provoking posts (those that generate more angry-face reactions), thereby encouraging political polarization, could be considered a form of interference that conflicts with the required neutrality.
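To make that mechanism concrete, the kind of content privileging described above can be pictured as a ranking function that weights each reaction type differently. The following Python sketch is purely illustrative: the names, data structures and exact weights are our own assumptions, not Facebook’s actual system (a roughly fivefold weighting of “angry” reactions in Facebook’s feed ranking has been reported in the press, but the code here is only schematic).

```python
from dataclasses import dataclass

# Hypothetical per-reaction weights: an "angry" reaction counts five
# times as much as a "like", so posts that provoke anger rank higher.
REACTION_WEIGHTS = {"like": 1.0, "love": 1.0, "angry": 5.0}

@dataclass
class Post:
    text: str
    reactions: dict[str, int]  # reaction type -> count

def engagement_score(post: Post) -> float:
    """Weighted sum of reactions; the weights, not the raw totals, decide rank."""
    return sum(REACTION_WEIGHTS.get(kind, 1.0) * count
               for kind, count in post.reactions.items())

def rank_feed(posts: list[Post]) -> list[Post]:
    """Order a feed by engagement score, highest first."""
    return sorted(posts, key=engagement_score, reverse=True)

if __name__ == "__main__":
    feed = [
        Post("calm news item", {"like": 300}),
        Post("inflammatory claim", {"angry": 80, "like": 20}),
    ]
    for post in rank_feed(feed):
        print(f"{engagement_score(post):6.1f}  {post.text}")
    # The inflammatory post scores 420.0 and outranks the calm one (300.0),
    # despite receiving far fewer reactions in total.
```

A neutrality rule such as the one in Article 5 would constrain precisely this asymmetric weighting, not the removal of illegal content.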
Finally, the authors are concerned that the “right of consumers to deactivate programs” in Article 10 is vague and unrealistic to implement. This seems to us a strange and itself rather vague objection. To begin with, the fact that something is technically difficult (the authors do not indicate why it would be) does not mean it is not ethically necessary. In fact, this point of Article 10 is essential to the due process the authors claim to defend, since it requires a human mediator capable of understanding the reasons users may present regarding the handling of their content or of themselves. Moreover, it follows from the digital/analog equivalence with which the authors agree.
In summary, we consider that regulating digital platforms is a complex issue in which several disciplines converge: it is not only a technological matter, but one with ethical, philosophical, economic and mental-health dimensions. The focus of the digital platforms bill in question is to protect people’s fundamental rights from the advance of technologies that can be disruptive to the human condition and its dignity. Large platforms use advanced algorithmic systems capable of affecting the human mind and brain (an impact exemplified by the technique of “psychological targeting”), which is why they have come to be considered indirect neurotechnologies.
Digital data analytics can be used to infringe on our mental privacy and freedom of thought, not only by revealing what we think and feel, but also by providing the opportunity to identify and exploit our moments of weakness for commercial benefit. As human rights lawyer Susie Alegre points out, “what has been described as ‘the attention economy’ is using psychological techniques to make devices and platforms ‘sticky’, so that it is difficult for us to leave them and stop generating data that these platforms can use.” The problem is not limited to the way in which this attentional control can affect our mental health; it is, above all, the psychological and socio-political use that can be made of the information obtained at the expense of our health, influencing our minds in the political sphere and polarizing societies through disinformation.
An example of what the bill seeks to regulate is the Metaverse announced by Mark Zuckerberg. In this regard, Markus Gabriel, director of the International Center for Philosophy in Bonn, pointed out that “Meta is an extremely dehumanizing and immoral system. It is a drug, an ideology, a pure propaganda machine. Even worse than Facebook and that is already a big problem because without Facebook we would not have these conspiracy theories, the anti-vaccines … Meta is going to create even bigger problems than the populism of Trump and Bolsonaro. That’s why I think we have to ban it.”
Of course, the digital platforms bill does not propose a ban, but rather to hold the big platforms accountable for the technology they use to capture and hold users’ attention without their consent and without warnings about the effects on their health, their moral autonomy and their mental privacy. It would introduce hard-law guidelines where currently there is only ineffective self-regulation with no consequences for the big platforms, a situation which, for users, amounts to living in a kind of digital Wild West.

The content expressed in this opinion column is the sole responsibility of its author, and does not necessarily reflect the editorial line or position of El Mostrador.
