Why Machines Won’t Displace Human Workers in the Knowledge Economy is a short thought experiment, in the spirit of all Noodles, written in response to a post in Wired. In Here’s How to Keep the Robots From Stealing Our Jobs, John Hagel posited that a major rationale for the Knowledge Economy firm would be its role as a “knowledge platform” that enables people to accelerate their learning and productivity. I highly recommend the post, which sparked many intelligent comments.
It’s obvious that many people have difficulty imagining the world toward which we are hurtling, a world in which machines are getting “smarter” and able to “compete” for work roles that humans now fill. In writing The Social Channel App, I thought long and hard about the Knowledge Economy and people’s roles in it. Its main thesis is that everything, from states and enterprises to people and products, will be differentiated in the Social Channel and that “humanness” will assume a much more visible importance in the economy.
The shift from the Industrial Economy to the Knowledge Economy forms the context of the Social Channel App. I don’t know exactly how people will work in that economy, but I suspect that we’ll get closer to an answer if we imagine human societies prior to the Industrial Economy, which focused on creating wealth through mechanization. Let’s try to imagine ourselves before machines were such a part of our lives. I’m not a sociologist, but since humans are so plastic, it’s logical that we have become more mechanical in how we see ourselves and, therefore, in how we see the world.

I argued in “Digital Transformation’s Personal Issue” that digital social technologies are enabling people to “re-personalize” the economy because people can be heard, can respond to and care for each other, can ridicule each other, the whole spectrum, at scale. This gives participants relatively more personal interactions than they had in the mass-media world. Humans are profoundly social, so it will be more intolerable for a person to be ignored (treated impersonally) than to be criticized, because criticism is at least relatively personal (if painful). This is changing people’s expectations of all interactions.
This may be a big part of the answer to how the Knowledge Economy will develop and how people will “work.”
Because machines have no free will, they cannot be personal. My client work, in which I analyze thousands of human conversations in digital social venues, consistently shows that people value human interaction.
I think part of the reason is that other people have free will, which makes human interactions potentially pleasing; a person cannot expect a predetermined response from another being with as much free will as humans have. If you expect something and can be assured of getting it, it has less value than if there’s uncertainty. Of course, certain interactions have the most value when there’s virtually no variance, but a huge portion of human interactions have higher value precisely because of free will and uncertainty.
Just as most thought leaders didn’t predict the rise of the high-tech industry, we have difficulty predicting how people will work in the Knowledge Economy. Given the above observations, I hypothesize that most people will be occupied in customer experience roles in which machines can’t compete.
“But wait,” I have thought before, “algos are getting better, so machines will become more humanlike, and your hypothesis is out the window!” I think people are too smart for that; they won’t be fooled very easily. Because an algo runs a machine whose will comprises only what was designed in (even if the machine can learn), it has no free will, so people will inherently value it less. Some interactions with machines that have “personality algos” will be valuable, but others won’t; that will depend on the human’s use case.
This hypothesis rests on a big assumption that may turn out to be false, in which case humans will be toast.
That assumption is that humans will still be the customer. The customer matters because s/he has free will and decides whether to employ a company or person (or buy a product). If machines make more buying decisions than people, humans lose their hegemony, relevance, and permission to live, assuming humans cost more to sustain than most machines do.
It’s hard for me to imagine that happening because machines have no free will, so they can’t be customers. However, machines already consume far more data than people do, so in that sense they are bigger customers. For now, though, they aren’t paying, so they don’t have the customer’s influence.
What do you think?