Risky Business: How we perceive risk- and how we must do MUCH better (part 1)

I said I was going to discuss risk last week; today seems like the perfect day to do so. Yesterday was the anniversary of the (almost) Iranian Revolution. The younger generation (about ½ of all Iranian citizens) was dismayed at the election “results” (not likely to have been as advertised) and, for a few weeks, took to the streets and tried to change the way the country was governed. It failed (at least as of today). Most of the leaders understood there was risk in this position; I am not sure they understood that the risk meant their government would callously kill them. Likewise, the Tiananmen Square protests in China- it’s not likely that those folks expected 3,000 to be killed. Or, to date things back further, the students of 1968 did not believe that the Chicago police would “billy-club” and beat them. These are political- not technological- risks.

A much smaller group was actually at risk from the DC sniper (though you would have been hard pressed to convince the “sky is falling” DC denizens of the era of that). I had long and hard discussions (right, that’s the word) with my ex-wife about allowing my (then 10 year old) son to play in the streets and in the parks. I recited a wonderful article published in the Jerusalem Report earlier that year (aptly titled “The Fxxx-It Factor”) about how Jerusalem residents were reacting to the terror campaign under way at the time. The article described four different types of people. The first group mapped out where the terrorists exploded their bombs and excluded those areas from potential travel (since they considered these areas to be prime targets). The second group used the same maps but considered those areas inherently safe (since the terrorists would not likely attack the same area twice). The third group thought they employed more science in their analysis- they mapped out the locations bombed, looked for similar places, and avoided both types. The fourth group, the one with which I identify, said fxxx it, whatever happens- happens; you can’t predict random events. I had determined that the snipers were not choosing their targets but selecting them at random. (That, it turns out, was not totally true- the snipers were shooting random targets up until they could kill John Muhammad’s wife, a death he hoped would be attributed to yet another random shooting.)

Why do I bring up these political events? Because if we can’t determine what constitutes risk and acceptable risk in our daily lives, how can we discern technological risks, where we barely understand the basic concepts? We know drinking and driving is dangerous. (I will not waste your time and mine discussing whether the statutory limits have any bearing on this fact.) We know that driving our cars without wearing seat belts can, when and if we have an accident, lead to catastrophic results. We know that running across an eight-lane divided highway with a speed limit of 70 miles per hour is risky behavior.

What about smoking cigarettes? When my parents were young, it was not considered to be risky behavior. Now, we know it entails significant risk. (Recent data even demonstrated there are some 10,000 gene pair mutations per cigarette smoked. The problem: there is no one-to-one correspondence between these mutations and the incidence of cancer or respiratory failure.)

In the past, we believed implicitly that our government or our businesses (one, the other, or both) had all the (right) answers. Our culture (abetted by programs such as Star Trek, MacGyver, and the like) has taught us (incorrectly) that engineers and creative individuals can save the day- up to the very last minute; however, that is not the way of the world. Most of our technological developments are incremental- they do not come by great leaps. Nixon declared the “War on Cancer” 40 years ago, but only now does our progress seem to yield breakthrough treatments for different cancers each and every month; and these have been years (decades?) in the making, building on previous developments.

There is a big difference between political risk and technological risk- and how we perceive them. Scientists and engineers are trained not to accept vague concepts or ill-defined hypotheses. These technologists need solid data and well-developed theories upon which to base their designs and prognoses. (This phenomenon may actually explain Jimmy Carter’s presidency well- he was always waiting for new data!)

We need a proper assessment of risks to balance against the costs. Instead of making policy, technologists define risk and employ the current state of knowledge to advise the political deciders. When designing medical devices, the technologists weigh the risks inherent in the use of the device against the benefits to be derived from its use; the FDA then determines whether that balance is acceptable. This division of labor- deciding what constitutes an acceptable risk- was eloquently defined by the Honorable John Quarles, then the General Counsel of the Environmental Protection Agency, some 40 years ago, when he was called to testify before Congress. The members wanted Quarles to guarantee that drinking water would be safe under this new agency (the successor to the Federal Water Quality Administration). His response was that it was Congress’ responsibility to determine acceptable risk- whether that was to be 1 person per 100,000 dying, 1 person per million, etc. Once a political decision was made regarding the appropriate risk (the acceptable death rate), one could design the proper water system to meet or exceed that standard.
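
To make Quarles’ point concrete, here is a minimal sketch (my own illustration, not his; the population figure is hypothetical) of how a politically chosen death rate becomes an engineering target:

```python
# Hypothetical illustration: turning an acceptable death rate (a political
# decision) into a design bound for a water system (an engineering problem).

acceptable_rate = 1 / 1_000_000  # say Congress picks 1 death per million people per year
population_served = 500_000      # hypothetical service area

# The system may not be expected to cause more than this many deaths per year:
max_expected_deaths = acceptable_rate * population_served
print(f"Design bound: {max_expected_deaths:.2f} expected deaths per year")
# -> 0.50; every treatment and distribution stage must then be engineered so
#    that the combined probability of fatal contamination stays under this bound.
```

The engineering question is then well posed- and the value judgment stays with Congress, exactly as Quarles argued.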

There is a process, “probabilistic risk assessment”, that is supposed to account for all avenues of failure- machines AND people. It requires one to set the maximum acceptable risk- and to identify and eradicate problem areas. [The nuclear power industry has been using it since the 1970s; NASA, since the 1980s. Obviously, the biggest “hole” in this assessment is human frailty, stupidity, or intent to harm.]

In this process, risk is assessed by determining the severity (magnitude) of the possible adverse event and the probability of its occurrence. The combination (multiplication) of the harm done (people hurt or killed, dollars lost) and the likelihood of occurrence then determines the expected loss. Probabilistic assessment is, at best, only part of risk assessment: one must also quantify the damage such an event would cause, and then minimize both the damage and the possibility it will occur. Something that causes 1 cent of damage every 10 years is not as “risky” as something that causes 10 million dollars of damage every 30 years; likewise, the risk posed by the injury of 1 person per day for ten years is much lower than that posed by the death of 1 million people once every ten years. We need to learn how to discern those differences.
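
A back-of-the-envelope calculation (mine, not any agency’s methodology; the figures are the hypothetical ones above) shows why that multiplication matters:

```python
# Expected annual loss = severity (damage per event) x probability (events per year).
# Comparing the two hypothetical scenarios from the paragraph above.

def expected_annual_loss(damage_per_event: float, events_per_year: float) -> float:
    """Combine magnitude and likelihood into a single expected-loss figure."""
    return damage_per_event * events_per_year

loss_a = expected_annual_loss(0.01, 1 / 10)        # 1 cent of damage every 10 years
loss_b = expected_annual_loss(10_000_000, 1 / 30)  # $10 million every 30 years

print(f"A: ${loss_a:,.4f}/year    B: ${loss_b:,.2f}/year")
# A: $0.0010/year    B: $333,333.33/year- more than eight orders of magnitude
# apart, even though scenario B's event is three times rarer.
```

The arithmetic is the easy part; weighing thousands of small injuries against one mass-casualty event is where the real discernment comes in.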

I will extend my remarks tomorrow (since I just noticed there are more than 1000 words here!)

Roy A. Ackerman, Ph.D., E.A.
