There's an interesting article going around today about robotics as it applies to our socioeconomic system. It's always dangerous to use artificial intelligence research as a basis for speculation about how the world actually works, but I think there's something in this.
What I think is in this is a sense of distance. We tend to see social injustice as a case of a bad person harming a good person, or as a bad system creating harm that falls upon many, and is generally supported by a few despite its badness, because those few benefit from it.
What the article talks about is an experiment where some robots are put in an environment where they have to cooperate in order to survive. All of the robots are given the same programming, but the environment is a real physical environment, meaning that it is variable, not deterministic. Over time, something that the researchers describe in terms of a social hierarchy forms. The researchers claim that they were not expecting such a thing to happen, and indeed their programming was expected to produce a non-hierarchical situation.
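The article doesn't reproduce the researchers' code, but the mechanism it describes — identical programs, a noisy environment, and outcomes that compound over time — can be sketched as a toy simulation. Everything specific here (the agent count, the Gaussian noise model, the size of the winner's reinforcement) is my own assumption, not anything from the study; the point is only that symmetry plus noise plus feedback is enough to produce a stable ranking.

```python
import random

def simulate(n_agents=10, n_rounds=2000, seed=42):
    """Toy model: identical agents meet in noisy pairwise contests,
    and each win slightly reinforces the winner's future chances."""
    rng = random.Random(seed)
    score = [0.0] * n_agents  # every agent starts out identical
    for _ in range(n_rounds):
        a, b = rng.sample(range(n_agents), 2)
        # the contest outcome depends on accumulated advantage
        # plus environmental noise (the "non-deterministic" part)
        if score[a] + rng.gauss(0, 1) > score[b] + rng.gauss(0, 1):
            winner = a
        else:
            winner = b
        score[winner] += 0.1  # small positive feedback for the winner
    return sorted(score, reverse=True)

scores = simulate()
# Despite identical starting conditions, the sorted scores are
# steeply unequal: early lucky wins compound into a hierarchy.
```

No agent in this sketch is "programmed" to dominate; the ranking emerges from the interaction between a symmetric rule and random variation, which is the reading of the experiment I find interesting.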
So why did the hierarchy form? The nice thing about this experiment is that the actors are robots. Computer programs. While people sometimes tend to personify computers, in general I think we all know that computers are machines; a computer with a soul, whether or not we believe such a thing is possible, is at best the exception, not the rule.
So we can't ascribe the outcome of this experiment to a soul. Rather, it is an emergent behavior of a supposedly cooperative environment. What I find personally interesting about this is that it matches my experience of the world better than the "evil actor" model - the model where every ill of the world can be blamed on the intentional or stupidly ignorant actions of some scapegoat.
I don't mean that there aren't criminals, nor that there is no need for law enforcement. But what I do mean is that when we find ourselves in the midst of a dysfunctional environment, there is a commonly held idea that the way to resolve the dysfunction is either to eliminate certain bad actors, or to reform them. And this idea is probably wrong. It would be a wonderful thing if we could rest from our long labor of tilting at this particular windmill, and seek out real solutions to the world's pain.
(Yes, this is an awful lot of philosophy to suck from the marrow of a single magazine article. So sue me. :')
2 Comments:
=v= Not to be too reductionist or anything, but couldn't hierarchical behavior amongst robots be just as easily a consequence of the structure of their internal programming?
It must be a consequence of the interaction between their programming and the environment. Possibly the same thing would have happened if the environment were entirely deterministic rather than being mildly non-deterministic, in which case it would be solely a result of their programming.
But the people doing the study suggest an alternative hypothesis which seems more plausible than the idea that this is solely an issue of programming, particularly considering that all the robots were running the same program. It's that hypothesis that I find interesting.
I can think of two reasons to express skepticism about this. The obvious one is that drawing conclusions about human behavior from the behavior of robots seems like a bit of a stretch.
But the reason I'm drawing the parallel between the robots' behavior and that of humans in society is because I think we are currently operating from a position of weakness. If the source of the world's pain is bad people, then all we have to do is identify the bad people and stop them from being bad. This is the tactic that I see employed most frequently, and it's the basis of every war ever fought, and every political prisoner ever jailed.
But if the source of the world's pain is a systemic problem, then we can never solve it by stopping bad actors. Stopping bad actors is like getting mad at a stick instead of at the person who's hitting you with it.
Of course one way to protect yourself is to take away the stick, but if the person who wants to hit you still wants to hit you, you've achieved only a temporary solution.
If there is a thing that *makes* bad actors, and it is a mechanical problem, not some incomprehensible evil that lurks in the hearts and minds of some members of our society, then possibly we can do something about it. But if we devote all our energy to identifying and stopping bad actors, then we are doomed to live our lives in a war zone.
Honestly, though, aren't I preaching to the converted? Isn't the reason you got into sociology that you already accept there may be systemic problems that don't reside in the hearts and minds of individuals?