Ever since computers came on the scene in the middle of the previous century (yes, it really has been that long), pundits and other great thinkers have envisioned these machines taking over any number of manual tasks, freeing workers and everyone else to work at other things or simply relax.
One of the latest applications being touted for computer technology is the so-called "driverless" automobile -- a car that simply allows passengers to sit back and be taken to their destinations with no thought of having to sit behind a wheel and pay attention to the road ahead.
According to a recent Forbes article about Google's attempt to develop such a vehicle, the driverless car will be guided by GPS technology and a battery of intricate sensors. Google Chairman Eric Schmidt is, in fact, quoted as saying, "It's really an error that we're allowed to drive the car."
So why not just crank these things out and be done with the bother of driving? Because the technology is far from fully developed, meaning that it is risky -- for individuals in the vehicle and for the companies that insure such cars.
In the Forbes piece, Bryan Reimer, a research scientist at the Massachusetts Institute of Technology's AgeLab, said he's not ready to trust anyone with a vehicle that so fully turns the driver into a passenger.
Reimer said, "We have case study upon case study of how individuals are terrible overseers of autonomous systems." Even in cars that can largely function without drivers, the article added, human operators need to remain alert to be able to take over at a moment's notice. "In those situations, inattention could be fatal."
Now think for a moment about the many technologies available to companies overall and risk managers in particular. We have no trouble trusting applications and methods that have proven successful in our own industry and elsewhere, but we are not so eager to adopt newer technologies that promise the moon. We would love, of course, to turn more and more things over to automated systems that we can simply turn on and promptly ignore, but the reality is that very few such systems exist.
To address the question above: No, technology in itself is not a risk factor. But, technology that is not fully tested in its intended operational environment certainly could be a danger. Further, technology for which we, as humans, are not ready is also a problem (as illustrated by the response to development of a driverless car).
This is a problem inherent with technology development, marketing and distribution.
Because technology tends to be developed at an incredibly rapid pace (thanks, ironically, to the automation involved in the development process), we are often caught completely unprepared when a given technology becomes a real product with pie-in-the-sky benefits.
It takes a while to figure out how to use a new technology in a way that does not open up risks for us and our customers.
A key lesson for risk managers is not to allow the wave of technology implementation to get ahead of our ability to deal with it.
ARA TREMBLY is founder of The Tech Consultant and The Rogue Guru Blog. He can be reached at firstname.lastname@example.org.
March 1, 2013
Copyright © 2013 LRP Publications