We have the technical know-how, but are humans fully ready to allow machines to do most of our thinking for us?
The topic has become quite pervasive. Tech giants like Google and Uber and car manufacturers like Ford, GM and Tesla are beginning to publicize their investments in the research and development of self-driving consumer vehicles and other “smart” systems. Such systems promise to make our lives easier, more efficient and, ultimately, safer.
Yet concerns abound about the safety, privacy and effectiveness of these growing technologies. And understanding the usage and applications of autonomous systems, such as self-driving vehicles, requires an in-depth discussion beyond the mere novelty of smart products.
Dr. Michael Francis is the chief of advanced programs and a senior fellow at United Technologies Research Center (UTRC). He leads the development of advanced aerospace technologies, including autonomous and intelligent systems, as well as unmanned vehicles.
As Dr. Francis puts it, autonomy itself is a technology that is changing very rapidly. And while the technology is here in some form, it is still in its infancy.
The human still has to be engaged with the system, he cautions, debunking the notion that autonomy, as we understand it on a consumer level, means completely giving up control to our prized machines.
Take, for instance, the recently reported Tesla crash that killed one of the company's car owners when his vehicle struck a tractor trailer while driving itself along the highway. As details emerged following the tragic accident, it was revealed that the driver was completely disengaged while the vehicle was operating.
Tesla responded in a statement: “Neither Autopilot nor the driver noticed the white side of the tractor trailer against a brightly lit sky, so the brake was not applied.” Even on autopilot, the company asserts, drivers must keep their hands on the wheel at all times.
The same instructions apply across the board to all “autonomous” vehicles and systems. There is still some human interaction required -- at least for now.
As Dr. Francis explains, there is a tendency for the operator to be mentally distracted while everything is going fine. If something goes wrong, it's hard for our brains to respond quickly and take action when they need to.
And therein lies our challenge in employing autonomous systems.
But first, a little background. Though the discussion on self-driving cars has become highly popular over the last few years, these technologies have been in existence for quite some time, adeptly serving in advanced military aviation systems.
For example, this technology began to emerge as early as the mid-1800s, when English engineer and inventor Robert Whitehead designed and developed a torpedo (known as the Whitehead torpedo) that could propel itself underwater for several hundred yards. Through several early iterations, the torpedo proved useful to naval fleets, and its design served as an early skeletal template for the self-guided weapons of war and aviation systems that would follow.
Dr. Francis’s experience draws on the history of this evolution and the eventual common use of autonomous technology. Prior to joining UTRC, he led several pioneering aviation programs during a U.S. Air Force career spanning more than 20 years.
Among his achievements: playing an integral role in the development of several autonomous combat systems over the last 25 years, such as the Unmanned Combat Air Vehicle and the Joint DARPA-USAF-Navy Unmanned Combat Air Systems programs in the 1990s.
On the consumer level, we’re not exempt from this technology in everyday use. Take, for instance, the commercial flights we board every day. Most of the aircraft we walk through on the way to our seats are equipped with autonomous digital control systems that allow the pilot and co-pilot to tell the computer what they want the airplane to do.
For example, the computer determines “which controls” to move and by “how much.” It runs through an entire decision cycle 30 to 60 times per second, continuously assessing whether the airplane is controllable and stable.
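The decision cycle described above can be sketched in miniature as a fixed-rate feedback loop. This is a minimal, purely illustrative example -- the function names, gains and the single "pitch" control here are hypothetical simplifications, not drawn from any real avionics system:

```python
# Illustrative sketch of a digital control decision cycle: compare the
# commanded state to the measured state, then correct "which control"
# by "how much". Real fly-by-wire systems are vastly more complex.

def control_cycle(target_pitch, actual_pitch, gain=0.8):
    """One decision cycle: compute the deviation from the pilot's
    command and return a proportional correction (hypothetical gain)."""
    error = target_pitch - actual_pitch
    return gain * error

def run(target_pitch, actual_pitch, cycles=50):
    """Run 50 cycles -- roughly one second at a 50 Hz loop rate."""
    for _ in range(cycles):
        correction = control_cycle(target_pitch, actual_pitch)
        actual_pitch += correction * 0.1  # toy model of aircraft response
    return actual_pitch

final = run(target_pitch=5.0, actual_pitch=0.0)
print(final)  # converges toward the commanded 5.0 degrees
```

Because the loop repeats many times per second, each individual correction can be small; the stability the passage describes comes from the sheer frequency of the cycle, not from any single large adjustment.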
If human interaction is still required on some level alongside autonomous machines, when, then, can we expect these technologies to be independent of human engagement? The answer centers less on our ability to program for autonomy and more on engineering autonomous machines that can operate with high precision -- whether on the road, in the operating room or in the sky.
In the future, Dr. Francis adds, human-machine integration will lose importance, while human-machine intelligence integration will gain far more significance. The biggest problem area, and the biggest opportunity for the technology sector to solve, is the machine’s ability to deal with things it hasn’t seen before -- detecting an anomaly and knowing how to handle it correctly.
The challenge of managing the unknown will keep humans employed for a few more decades at least!
Image credit: Wikimedia Commons
Sherrell Dorsey is a social impact storyteller, social entrepreneur and advocate for environmental, social and economic equity in underserved communities. Sherrell speaks and writes frequently on the topics of sustainability, technology, and digital inclusion.