At one time, artificial intelligence and automation promised us more leisure time and a more fulfilling life. But as the technology develops, a crueler truth is emerging: such happiness may be reserved for the rich, while the poor are left only with unemployment.
Amazon has reportedly developed a system that tracks employees' "Time off Task" and can automatically fire workers who fail to meet its criteria. More than 300 workers have reportedly been sacked for low productivity. An undercover writer found that some employees at an Amazon warehouse in the UK felt so pressured that they were using bottles as toilets.
If that still sounds futuristic, or like an isolated case at one notoriously demanding tech company, think again. According to a 2018 survey by the research company Gartner, 22 percent of organizations are using employee-movement data, and 17 percent are tracking work-computer usage. Workplace surveillance through technology is clearly on the rise.
Some would argue that monitoring staff improves a company's productivity, and that a "cold-hearted" but objective machine, evaluating people by algorithm, would be free of human bias. But can a program recognize an already unequal reality and correct for it?
If such a machine were put in charge, it would probably sack most of the women first, since by its metrics they appear the least productive: they take two to three months of maternity leave when pregnant, and many cannot afford to work extra hours because they must rush home to do chores and make dinner (labor that, oddly enough, has never been recorded or paid).
Judging by current data alone, women would seem to be poor choices for chief executive: there are only 24 women on Fortune's 2018 list. Yet according to The New York Times, experts say that deep bias against women, rather than individual choices, is mainly responsible for this imbalance.
If we feed already-biased data into deep-learning systems, how can we expect the computer to generate results free of human bias? A machine can do a better job than any human at math and data analysis. But ultimately, it falls to humans to make value judgments about those results.
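The point can be made concrete with a toy sketch. Below, a hypothetical "objective" model simply learns promotion rates from invented historical records in which one group was promoted less often for reasons of past human bias, not merit; the numbers and groups are illustrative assumptions, not real data.

```python
# Hypothetical historical records: (group, promoted) pairs.
# Group "B" was promoted less often due to past human bias, not performance.
history = [("A", True)] * 70 + [("A", False)] * 30 \
        + [("B", True)] * 40 + [("B", False)] * 60

def learn_rates(records):
    """Estimate P(promoted | group) from historical records."""
    rates = {}
    for group in {g for g, _ in records}:
        outcomes = [promoted for g, promoted in records if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    return rates

model = learn_rates(history)
print(model["A"])  # 0.7
print(model["B"])  # 0.4
# The "objective" model now favors group A for promotion — it has
# faithfully learned the historical bias, not any real difference in merit.
```

The model's math is flawless; it is the data that encodes the inequality, which is exactly why the value judgment cannot be delegated to the machine.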
There is indeed a need for a system of regulation in the office environment, and AI seems bound to be part of it in the future. Which raises the age-old question already posed by films like Modern Times and The Matrix: are we friends with robots, or slaves to them?
Over a hundred years ago, when laborers worked more than 10 hours a day, six days a week, for extremely low wages, they won the battle for the eight-hour day with blood. Perhaps today there are more peaceful ways to resolve the increasingly explicit conflicts between workers and capital.
One way out could be timelier policy responses to new technologies. For a long time we have hailed advances in AI, and the idea of regulating it has been frowned upon, as technological innovation seems inherently at odds with "dirty politics." But once AI enters management and starts determining who gets fired, the stakes become too high. While employees are deeply concerned about whether the data collected from them is being used responsibly, there are few guidelines or laws to inform employers or address staff concerns. As Carl Miller, a researcher at the London-based think tank Demos, rightly points out, "in all honesty we need more innovation there than in tech itself."
Google made a move when it tried to establish an AI ethics board, though the effort failed amid serious public doubts about its members. Still, it was at least an encouraging attempt to put AI in an ethical context. And such efforts should not be limited to the coders designing the digital future we will live in. As we embrace a future of automation and AI deployment, understanding and training for workers are also in dire need. It's either that, or we risk a future where humans are beneath robots.