Autonomy, Artificial Intelligence, and the Internet of Things reflects on how people’s autonomy will be affected by software-powered devices and systems that are rapidly permeating our individual and social lives.
Although this noodle has a strong personal angle for me, I also have the unusual benefit of having regular conversations with people who are leading the redesign of our “environment.” By superimposing digital devices, sensors, and “intelligence” onto the physical world, designers, engineers, policy makers, behavioral economists, neuroscientists, nanoscientists, and investors, to name a few, are changing how we perceive and interact with our “world,” so I’ll bring insights from those conversations here as well. Finally, I’ll consider creator and user points of view on autonomy, artificial intelligence, and the Internet of Things.
Reflections on Autonomy
In a dream this week, I was in a super high-tech building with elevators controlled by key fobs (and, of course, a central system). I had to visit several floors in sequence for meetings; however, the elevator wouldn’t let me go where I needed to go. It kept displaying inane messages on its screen, so I had to go down to the security guard to get help.
Autonomy is the ability to choose how to act. Where to go. When. How. I think it’s core to life’s meaningfulness. Maybe it’s just me thinking that, but I don’t think so.
How about you? Have you ever lost your autonomy? I have, and I experienced it as profoundly sad, like a prison with invisible bars. I didn’t even fully realize its impact for many years. This is probably why I think about it now, why I feel it. I think autonomy is like love; one might not appreciate it until it’s gone. I learned how to get my autonomy back, so now I’m very present with how precious it is to me.
On a philosophical level, I think autonomy is central to spiritual life. If you practice a religion, probably a big part of it is making choices, each of which has religious or spiritual meaning. Alternatively, evolutionists assert that choices largely determine one’s degree of success in passing on one’s genes, and economists maintain that people make (autonomous) decisions rationally. Autonomy is the ability to act. Action results from exercising autonomy. Action is commitment and is very meaningful, even if one doesn’t think about it, but its meaning increases when one acts with full awareness.
Primates (most scientists think) are unique in their ability to be self-aware. Because you and I can be aware of our thoughts and feelings and intentions, this adds nuances to our autonomy and meaning to our actions. Awareness is variable as our brains have to manage everything that’s going on around us and within us.
A powerful and subtle thing I observe about autonomy is that it’s a package that comes with the freedom to make choices but also the responsibility of having made those choices. And sometimes one makes choices by action while at other times one chooses by not acting. Humans’ large brains present us with a wide range of choices because we can think.
Finally, I observe that autonomy is not only a human issue. I have lived with many kinds of animals over the years, and every cat, raccoon, dog, woodchuck, toucan, and parrot had a will that s/he wanted to exercise. They seemed happier when they could do what they wanted, when they wanted.
Why I Have Mixed Feelings About Artificial Intelligence and the Internet of Things
Autonomy is one of the critical issues of our time, right now, so here I hope to start you thinking about it at many levels, if you’re not already, and to start a discussion. I am a creator of some things as well as a user of others. You probably are, too, so I invite you to think of autonomy from both creator and user perspectives.
Autonomy is coming to a head now due to all the investor- and creator-led excitement for “autonomous” vehicles, “smart” devices, robots, and artificial intelligence that are quickly forming a web—an Internet of things—around our lives. Your autonomy, my autonomy, and every other person’s autonomy will be affected in surprising ways by these devices and the systems that run them. Yes, their creators say that they use cutting edge design principles. They assert that big data and analytics enable systems to “learn” about us, so they can serve us better. I think that most of them really believe this and mean well. For example, autonomous vehicle enthusiasts insist that smart cars will make traveling safer.
However, there’s a big flaw in this argument. I observe that subjectivity is part of our existence as individuals. It’s hard-wired into us, our default setting. I work hard to be aware of my subjectivity, in business and personal parts of my life, and I know other people who do, too, and it takes constant commitment. Still other people just exist within their subjectivity without being aware of it. Therefore, there’s a whole range of awareness of subjectivity among people. And each person’s awareness varies with what’s going on around him or her at each moment. In sum, this means that when people design things for other people, they invariably, if unknowingly, limit how other people can do things even when their intent is to serve and empower people who use their “thing” (product, service). They limit their users’ autonomy. I write this as a creator and a user.
Think about when you try to get something done on a website, or an app, or large corporate system, or a device like a smoke alarm. Or building entry systems, or elevators. Maybe my brain just works really illogically, but I don’t think it does. I constantly encounter frustration, and I use products built by some of the best designers and engineers in the world. Most of my personal and professional friends constantly share stories about their frustrations when they try to use things. Think about your own experiences.
Since I perceive that, overall, creators are already doing the best they can, and the human-built systems with which I interact often provide mediocre or downright rotten user experiences, I doubt that the “Internet of things” will be all it’s cracked up to be, or that artificial intelligence will be as breakthrough as its creators have long envisioned (the field has been around for decades). Human subjectivity is the root cause of the problem, and it’s not changing. Yes, devices will give real-time data on users’ behavior, and analytics will enable designers and engineers to use this voluminous user data, but this will result in only incremental improvement because it doesn’t counteract the core issue: human subjectivity in creators and users.
Very few websites, apps, and smart devices enable me to interact without significant friction, so I expect “smart” devices to create significant friction when I am interacting with them. They won’t have the right options for me, and they’ll have tons of options I don’t care about. Overall, though, I expect a loss of autonomy as more things become “smart” because their “intelligence” will be biased by design or chance. Sometimes this will serve me, and other times it will create more friction.
Artificial intelligence aims to mechanize learning itself, and singularity enthusiasts foresee machines’ “intelligence” surpassing humans’. These developments will likely affect the life of everyone reading this post, within his/her lifetime and in surprising ways.
What We Can Do as Creators
- Since subjectivity is the key limiting factor, we can diminish its limitations by creating diverse teams of people from different disciplines. We can encourage people to voice different points of view and to have vigorous debates when we’re designing devices, systems, products, services, and policies. This costs more money, but it results in better design, happier users, more positive reviews, and more sales and/or profit.
- I am very excited about the growth of the design disciplines because many of them take an explicitly user-centric point of view. Moreover, many designers I know focus on developing empathy with users. I belong to several professional groups of various branches of design such as service design, user experience design, user experience strategy, and product design. One of their key principles is starting with the user rather than starting with the technology (product/service). Having designers well represented on teams can help.
- Use ethnographic research, which studies people in their “natural habitats” and strives to impose minimal assumptions before field observation. I use it extensively, and it’s breakthrough because it observes first, then forms hypotheses and tests them, iteratively. It’s very helpful in correcting my and my teams’ subjectivity by keeping the focus on users in their environments.
- Co-create with users as much as possible because their subjectivity is priceless for the creator team, especially when you invite diverse users to your project. Realize that they will introduce complexity into your process because they’ll surface many exceptions your team overlooked.
- Refocus your teams away from finding markets for your products to empowering user outcomes (also known as “jobs to be done”). I have repeatedly seen that exceptionally few people want products or services; rather, they buy the ability to use products or services to attain outcomes that are personally meaningful to them.
- Strive to be humble. I try to approach situations by grounding myself in my ignorance rather than what I think I know. This is a way that every person can learn to reduce subjectivity. I also remind myself that I don’t know what’s right for any other person. I find this difficult because it makes my brain work harder, and it goes against my instincts and emotions, which seem to want to be proud in what I know.
- Give ethics a front row seat at your table. One way to do this is to practice transparency consistently; explicitly define and disclose your motivations to users/buyers. I do this on experiential social media projects by sharing what the team is about and why it’s doing what it’s doing. Even though behavioral economics research suggests that disclosure often doesn’t have as much impact as one might expect, do your part to give people the insight; they are responsible for their awareness and choices.
- Subjectivity has little to do with intelligence. There is no escaping the former (or the latter ;^). We can only try to minimize subjectivity.
What We Can Do as Users (and Buyers)
- You can maximize your autonomy by being present with how/when/why/where you use things. This is a subtle and powerful point. You can decide how you want your relationship with every tool and technology to be, but to retain your autonomy, you must actively choose the conditions for using things. For example, many people have lost their autonomy to their smartphones because they can’t exist without them. Many people can’t use or draw maps or plan travel because they depend on GPS. Still others increasingly rely on digital personal assistants (Siri, Google Now, Cortana, et al.). In brilliant irony, “autonomous” cars will diminish people’s autonomy while traveling. Depending on something often means losing one’s autonomy. This is a choice we all make, whether or not we’re aware of it. An immensely enlightening and sobering point of view on the process of a culture losing autonomy is Neil Postman’s Technopoly. I can’t over-recommend it.
- Give feedback; let creators hear your experience and opinions. This also means writing reviews. In addition to your emotional feedback, give facts, because they help creators put your experience in context and learn more from you.
- Fight mediocrity by striving for quality. Choose better products when you can because your choice is a powerful feedback loop.
- Organize with other people to fight the loss of your autonomy. I expect that the loss of autonomy will happen gradually, so it’s important to look for it, and confront it when that makes sense to you. This often presents as a person or group citing a problem and “deciding” they know what’s best for you. Of course, this is happening constantly, so pick your battles.
- Losing autonomy is the slow boil, and it often happens by accident, so be vigilant. Although very few creators intend to take away your autonomy, it can still happen as the result of choices they make for their own reasons. Therefore, don’t depend on things without thinking about it. Hard. Because increasingly “smart” things will try to think for you and act for you. Cars, refrigerators, watches, appliances: all objects will increasingly have “intelligence.” Things’ makers always claim to “serve” you, but when you forget how to do it yourself, you lose the choice, and your autonomy is limited to the choices their creators make for their reasons, not necessarily your reasons. Things’ “autonomousness” displaces your autonomy. Constantly reevaluate the choices you make.
- For contrast, recall that, during the hunter-gatherer and agrarian economies, our “world” was filled with simple inanimate objects, and we were the primary actors, guided by our intelligence. During the industrial economy, objects became more complex but were still largely inanimate, i.e. dumb, in our context here. In today’s knowledge economy, most objects of “value” will be “smart” and controlled by software that will interact with us and influence our decisions (and autonomy). They will also be actors.
- Be creative and purposeful with how you use devices and systems, and create pockets of time and space that minimize them. For example, I take special pleasure in using hand tools and going days without using my iPhone or Air. Of course, even “dumb” products are limited by their makers’ research, skill, and commitment to users, but their strengths and weaknesses are usually more explicit since there are fewer variables when you use them. “Interactive” things have more variables.
- Even though none of this is intentional, unless you’re aware of it, you’ll lose autonomy anyway. Another unintended result of “smart” devices is that they set constraints on the actions we take when we use them. So they distract us from what we really want; they set the options. They can reduce our self-awareness and curtail our autonomy. This is an unintended yet insidious process, but we have the ability to control it through our awareness.
- Study behavioral economics, the “study of nudging” because it will soon be prevalent in policies, interfaces and devices. It’s very powerful because behavioral economists study behavior, and their findings can be used in the design of devices, rules, websites, and other interfaces to influence behavior. Like any tool, it can be used to serve different purposes. A simple example is the “default setting” on an application; yes, users can choose whether to leave it or change it, but the default is rarely changed due to the power of suggestion and implied endorsement. B.E. will be increasingly baked into all interfaces that we use, so it will undoubtedly affect our autonomy.
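The power of the default setting can be sketched in a few lines of code. This is a hypothetical illustration, not any real product’s settings API; the names (`Settings`, `share_usage_data`, `effective_choice`) are mine. The point it shows: whoever picks the default makes the choice for every user who never visits the settings screen.

```python
# Hypothetical sketch of a "default setting" as a behavioral nudge.
# All names here are illustrative, not drawn from any real application.

from dataclasses import dataclass


@dataclass
class Settings:
    # The creator picks this default; it is an "opt-out" design.
    share_usage_data: bool = True


def effective_choice(user_changed_it: bool, user_value: bool = False) -> bool:
    """Return the setting that actually takes effect for a user.

    Most users never open the settings screen, so for them the
    creator's default is the choice that governs their behavior.
    """
    settings = Settings()
    if user_changed_it:
        settings.share_usage_data = user_value
    return settings.share_usage_data


# The few users who act retain their choice; everyone else
# inherits the creator's.
print(effective_choice(user_changed_it=False))                    # default wins
print(effective_choice(user_changed_it=True, user_value=False))   # user wins
```

In behavioral economics terms, the default is the nudge: it carries the power of suggestion and implied endorsement, so the creator’s choice becomes most users’ choice.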
Other Insights
- I study primates because I strive to learn about humans at a more abstract (less subjective) level. I’ve learned that sociality is primates’ defining feature. That means our survival is largely dependent on cooperating with other members of our bands. Due to our subjectivity, we’re not aware of how gifted we are with adjusting our behavior based on social context and the other people present. Since we’re not aware of this, we can’t duplicate it in the “smart” things we make. Robin Dunbar finds that one of the key evolutionary reasons for our large brains is the need to navigate complex nuanced trade-offs with multiple inter-dependencies with other people, so it’s likely that the “smartness” that we put into things will be inferior in many situations. Happily, though, they will be better than us in other use cases (situations).
- “Smart” devices and systems have no will, but they have rules and software that may feel like it (i.e. artificial intelligence). However, they have no inherent autonomy and no will (it must be designed, at least to start); they are only the result of teams’ best efforts. I riffed on this in the context of privacy in the Rewiring the World review (see Privacy at the end).
- I try to never assume that I know what’s best for any other person or user because it’s not true, and it violates their personhood. This is true even when a person says through words or actions that s/he wants you to think or act on his/her behalf. I always ask people what they think is best for them.
- Autonomy is curious because it includes the right not to exercise it. For example, you’re on a team that’s designed something, and you have a range of user feedback. Many users seem to accept your product but don’t seem delighted by it. They aren’t exercising their autonomy by giving you feedback. That doesn’t give you the right to make decisions for them. If they remain uninvolved, you will have to make decisions without their input, but always realize this is second best; it’s never preferred to getting real user input.
- I have learned that ignorance is under-rated. I have seen this repeatedly in my life. If I think I “know” something, I’m far more likely to miss things. If I remind myself that I don’t know things, I’m more observant and open, and I think I do better work. Instead of assuming knowledge, I try to think in terms of hypotheses that I test. I think this helps me to serve people better. But I realize that subjectivity still limits my ability to understand and serve.
- I’ve learned that when I support someone’s autonomy, it deeply touches the other person. Because my action essentially tells him/her that I care. Especially when their autonomy constrains me in some way. This can become a strong bond between me and the other person. More on this in Reflections on Trust.