For a sociology of robots

Apropos of some conversations in the past few days.

The idea that automation will replace a significant share of current jobs, and that, truth be told, progress in AI will involve replacing not just repetitive tasks but many that require advanced 'human' skills, has come up several times. It is worth recalling that in the last few weeks we have seen a robot surgeon, an AI defeating the world champion of Go, the autonomous SpaceX landing on a vessel on the high seas, advances in plans for driverless cars, and the list goes on. The impact of automated systems on different areas of social life (for example, on equity markets) has been a fact for some time now. However, worrying only about the question of jobs, and about moving workers into more complex, higher-quality jobs ('we will all be programmers'), is a very narrow view of what the development of robots with more advanced AI capabilities may involve.

The first scenario is one in which this type of robot is kept as a 'tool': it can make decisions much more effectively (and at lower cost) than human beings, but it is not able to make decisions about objectives. To put it more precisely, it does not have the ability to re-program its own parameters and dimensions. It can buy and sell stocks better (and much faster) than any person, but it cannot decide whether it wants to perform that activity, or whether it wants to behave according to those parameters (it cannot ask itself whether it really wants to devote itself to maximizing profit).

In this situation, then, what we would have in principle would be, so to speak, the complete victory of the capital factor (understanding the robot as a form of capital). In a scenario in which the labour factor is not necessary, what becomes of it? Note that, from the perspective of capitalism, a problem remains: where is the demand? I produce driverless cars, and I produce them only with robots, but someone has to buy them. The sum of the demand of the owners of that capital (those who own the robots and the factories) may not be sufficient. It is curious and instructive to note that several Silicon Valley companies, among those with the most clarity on this point, are supporting initiatives like basic income, and the reason given is precisely that in the future there will be no jobs for human beings.

Now, is that true? Because today there are many jobs and whole sectors of the economy that operate at levels of productivity clearly below those of the technological frontier. If they continue to exist now, why could they not continue to exist in the situation under discussion? If getting a job remains essential for one's livelihood, then people will be bound to find, or to generate, jobs, even when those jobs are not highly productive.

Which, by the way, brings us to another issue: what if there is demand for work done by human beings? Consider that there are activities that we (or at least some of us) prefer to have done by human beings: perhaps a robot waiter would be more efficient and cause fewer problems, but are there not people who prefer to be served by a human? After all, there are still chess championships, even though we know that AI plays better. There is a demand to see human beings do things. Anyone who has earned income by participating in a reality show has experienced that demand. One possibility, then, is a future in which all the 'serious', 'productive' work is done by robots, low-productivity work is done by humans in the informal sector, and formal human work focuses not so much on high-quality jobs (as many argue) but on entertainment work.

 

All this under the assumption that robots remain 'tools'. But what if they acquire the capability they were denied in the first scenario? What if they can decide for themselves what is desirable? In other words, what if they acquire full autonomy: not only deciding about X using pre-defined criteria, but creating their own criteria. What alternatives can we think of in that scenario?

The first is the robot as a slave. It is still considered property, a tool that has an owner. But it is a slave that, by hypothesis, has far greater capabilities than its owners. On the other hand, we can expect that many humans would declare that in this case robots should be considered persons, with rights of their own (and we have enough science-fiction stories to think that this would be a common position). It is, then, not a very stable situation. In the long run, it is unlikely that such property could be maintained.

The second is the robot as a 'new species'. Being so superior to humans, the question becomes: what will they decide to do with human beings? The alternatives here are very diverse. They range from elimination (all the Skynet scenarios), to indifference and distance (something the movie Her explored a few years ago, and in some sense the path followed by Asimov's stories as well), to keeping us as the pets of a benevolent owner (and here again the relationship can vary: being a hamster-type pet is not the same as being a cat-type pet; and depending on how the robots view human needs, it could range from the initial situation in Wall-E to the one that appears in the end credits of the same movie), to the 'natural park' (where the robots concern themselves with the survival of a species they recognize as close to them, say, the way we look after the other apes). And probably other alternatives: by hypothesis, the robots are capable of thinking up new ones, they have creative ability, and so they may conceive of options beyond those we have come up with so far for dealing with species that have some form of consciousness. Still, it is instructive to observe the relationships we have with those species; it is the closest thing we have to a picture of how a species with clearly superior intellectual and symbolic capacities would relate to us.

The third covers all the fusion scenarios. The only way human beings could 'sustain themselves' in that future would be to become cyborgs: in other words, incorporating the capabilities of robots into their own bodies or, more generally, acquiring new capabilities. At which point the premise of the scenario disappears: the distinction is lost.

 

In any case, with regard to automation, we should not forget the idea that we always overestimate the effects of a new technology in the short term and underestimate them in the long term (the phrase is originally Roy Amara's). And we should also remember that these effects are not direct effects of the technologies. In the first scenario, the robot as a tool, whether we end up with masses of people on low incomes or with a population liberated from the necessity of work depends on social dynamics, not only on technological ones. In the second scenario, the robot with full autonomy, the various outcomes depend on choices (whether made by the robots or by humans). Although, by the way, perhaps the most basic thing would be to abandon the idea that technology is something separable from the social systems in which it is embedded.

Sociology, in any case, would do well to think about all of these developments as social dynamics (something that, in reality, I assume it has already begun to do; a later follow-up to these reflections should review the literature on this), and in particular to think, in terms of social relations, about what happens in the case of robots with full autonomy.
