

PROCEEDINGS 
of  the 




1994 

Battlefield 

Atmospherics 

Conference 


Las Cruces, New Mexico


29  November  - 1  December  1994 


BATTLEFIELD  ENVIRONMENT  DIRECTORATE 
U.S.  Army  Research  Laboratory 

White  Sands  Missile  Range 
New  Mexico 


APPROVED FOR PUBLIC RELEASE; DISTRIBUTION IS UNLIMITED.




NOTICES 


Disclaimers 


The findings in this report are not to be construed as an official Department of the Army
position,  unless  so  designated  by  other  authorized  documents. 

The  citation  of  trade  names  and  names  of  manufacturers  in  this  report  is  not  to  be 
construed as official Government endorsement or approval of commercial products or
services  referenced  herein. 


Destruction  Notice 


When  this  document  is  no  longer  needed,  destroy  it  by  any  method  that  will  prevent 
disclosure  of  its  contents  or  reconstruction  of  the  document. 


Proceedings 
of  the 

1994  Battlefield  Atmospherics  Conference 


29 November - 1 December 1994


Sponsor 


Battlefield  Environment  Directorate 
U.S.  Army  Research  Laboratory 
White  Sands  Missile  Range,  New  Mexico 


Conference  Chairmen 

Mr.  Edward  D.  Creegan 
Mr.  John  R.  Elrick 

U.S.  Army  Research  Laboratory 


PROGRAM  COMMITTEE 


Battlefield  Environment  Directorate 
White  Sands  Missile  Range,  New  Mexico 


Conference  Manager 

Mr.  Edward  D.  Creegan  (505)  678-4684 


Conference  Chairman 

Mr.  John  R.  Elrick  (505)  678-3691 


Conference  Advisor 

Dr.  Richard  C.  Shirkey 


Conference  Support 

Protocol/Public  Affairs  Office 
Ann  Berry 
Elizabeth  Moyers 


Conference  Technical  Support 
Technical  Publication  Branch 
Maria  J.  Campolla 




CONTENTS 


Preface . . . 

SESSION  I:  SIMULATION/ANALYSIS 

Visualization of Obscuration and Contrast Effects Using the BEAMS Models  .  3

Donald  W.  Hoock  and  Patsy  S.  Hansen,  U.S.  Army  Research  Laboratory; 

John  C.  Giever  and  Sean  G.  O’Brien,  New  Mexico  State  University 

A  Portable  System  for  Data  Assimilation  in  a  Limited  Area  Model  .  15 

Keith  D.  Sashegyi  and  Rangarao  V.  Madala,  Naval  Research  Laboratory; 

Frank  H.  Ruggiero,  Phillips  Laboratory;  Sethu  Raman,  North  Carolina 
State  University 

Effect  of  High  Resolution  Atmospheric  Models  on  Wargame  Simulations .  25 

Scarlett  D.  Ayres,  U.S.  Army  Research  Laboratory 

An  Assessment  of  the  Potential  of  the  Meteorological  Office  Mesoscale  Model 

for  Predicting  Artillery  Ballistic  Messages .  37 

Jonathan  D.  Turton  and  Peter  F.  Davies,  Defence  Services  Division,  Meteorological 
Office,  United  Kingdom;  Maj  Tim  G.  Wilson,  Projects  Wings,  Royal  School 
of  Artillery,  United  Kingdom 

Results  of  the  Long-Range  Overwater  Diffusion  (LROD)  Experiment .  47 

James F. Bowers, U.S. Army Dugway Proving Ground; Roger G. Carter and
Thomas  B.  Watson,  NOAA  Air  Resources  Laboratory 

Modeled  Ceiling  and  Visibility  .  57 

Capt  Robert  J.  Falvey,  U.S.  Air  Force  Environmental  Technical  Applications  Center 

A  New  PCFLOS  Tool  . 65 

K.  E.  Eis,  Science  and  Technology  Corporation 

The  Influence  of  Scattering  Volume  on  Acoustic  Scattering  by  Atmospheric 

Turbulence .  75 

Harry  J.  Auvermann,  U.S.  Army  Research  Laboratory;  George  H.  Goedecke 
and  Michael  DeAntonio,  New  Mexico  State  University 

Relationship  Between  Aerosol  Characteristics  and  Meteorology  of  the  Western 

Mojave .  85 

L.  A.  Mathews,  and  J.  Finlinson,  Naval  Air  Warfare  Center;  P.  L.  Walker,  Naval 
Postgraduate  School 




SESSION  II:  OPERATIONAL  WEATHER 

Evaluation  of  the  Navy’s  Electro-Optical  Tactical  Decision  Aid  (EOTDA) 

S.  B.  Dreksler,  S.  Brand,  and  A.  Goroch,  Naval  Research  Laboratory 

U.S.  Army  Battlescale  Forecast  Model  . 

Martin  E.  Lee,  James  E.  Harris,  Robert  W.  Endlich,  Teizi  Henmi, 
and  Robert  E.  Dumais,  U.S.  Army  Research  Laboratory;  Maj  David  I.  Knapp, 
Operating  Location  N,  Air  Weather  Service;  Danforth  C.  Weems,  Physical 
Science  Laboratory 

Development  and  Verification  of  a  Low-Level  Aircraft  Turbulence  Index  Derived 
from  Battlescale  Forecast  Model  Data  . 

Maj David I. Knapp and MSgt Timothy J. Smith, Operating Location N,

Air  Weather  Service;  Robert  Dumais,  U.S.  Army  Research  Laboratory 

Current  and  Future  Design  of  U.S.  Navy  Mesoscale  Models  for  Operational  Use 
R.  M.  Hodur,  Naval  Research  Laboratory 

Combat  Weather  System  Concept  .  . . 

James  L.  Humphrey,  Maj  George  A.  Whicker,  Capt  Robert  E.  Hardwick, 

2nd  Lt  Jahna  L.  Wollard,  and  SMSgt  Gary  J.  Carter, 

Air  Weather  Service 

Small  Tactical  Terminal  Concept  and  Capabilities . 

2nd  Lt  Stephen  T.  Barish,  George  N.  Coleman  III,  and  Maj  Tod  M.  Kunschke, 
Air  Weather  Service 

Operational  Use  of  Gridded  Data  Visualizations  at  The  Air  Force  Global 
Weather  Central  . 

Kim  J.  Runk  and  John  V.  Zapotocny,  Air  Force  Global  Weather  Central 

Theater  Forecast  Model  Selection . . . 

R.  M.  Cox,  Defense  Nuclear  Agency;  J.  M.  Lanicci,  Air  Force 
Global  Weather  Central;  H.  L.  Massie,  Jr.,  Air  Weather  Service 

Air  Weather  Service:  Evolving  to  Meet  Tomorrow’s  Challenges . 

Col William S. Weaving, Maj Dewey E. Harms, Capt Donald H. Berchoff,
and  Capt  Timothy  D.  Hutchison,  Air  Weather  Service 

Air  Force  Weather  Modernization  Planning . 

Lt  Col  Alfonse  J.  Mazurowski,  Air  Weather  Service 


Uses  of  Narrative  Climatologies  and  Summarized  Airfield  Observations 

for  Contingency  Support  .  181 

Kenneth  R.  Walters,  Sr.  and  Capt  Christopher  A.  Donahue,  U.S.  Air  Force 
Environmental  Technical  Applications  Center 

USAFTAC  Dial-in  Access .  185 

Capt  Kevin  L.  Stone  and  Robert  G.  Pena,  U.S.  Air  Force  Environmental  Technical 
Applications  Center 

Astronomical  Models  Accuracy  Study .  191 

Capt  Chan  W.  Keith  and  Capt  Thomas  J.  Smith,  U.S.  Air  Force 
Environmental  Technical  Applications  Center 

Atmospheric  Transmissivity  in  the  1-12  Micron  Wavelength  Band  for 

Southwest  Asia . 201 

Capt  Chan  W.  Keith,  Rich  Woodford,  U.S.  Air  Force  Environmental 
Technical  Applications  Center 

SESSION  III:  BATTLE  WEATHER 

Owning  the  Weather:  It  Isn’t  just  for  Wartime  Operations  .  211 

R. J. Szymber, M. A. Seagraves, James L. Cogan, and O. M. Johnson,

U.S.  Army  Research  Laboratory 

The  Real  Thing:  Field  Tests  and  Demonstrations  of  a  Technical  Demonstration 

Mobile  Profiler  System .  221 

J.  Cogan,  E.  Measure,  E.  Creegan,  D.  Littell,  and  J.  Yarbrough, 

U.S.  Army  Research  Laboratory;  B.  Weber,  M.  Simon,  A.  Simon, 

D.  Wolfe,  D.  Merritt,  D.  Weurtz,  and  D.  Welsh,  Environmental 
Technology  Laboratory,  NOAA 

Characterizing  the  Measured  Performance  of  CAAM . .  231 

Abel  J.  Blanco,  U.S.  Army  Research  Laboratory 

Evaluation  of  the  Battlescale  Forecast  Model  (BFM)  .  245 

T.  Henmi  and  M.  E.  Lee,  U.S.  Army  Research  Laboratory; 

MSgt  T.  J.  Smith,  Air  Weather  Service 

Verification  and  Validation  of  the  Night  Vision  Goggle  Tactical  Decision  Aid .  255 

John  R.  Elrick,  U.S.  Army  Research  Laboratory 




SESSION  IV:  BOUNDARY  LAYER 


Clutter  Characterization  Using  Fourier  and  Wavelet  Techniques  .  263 

J.  Michael  Rollins,  Science  and  Technology  Corporation;  William  Peterson, 

U.S.  Army  Research  Laboratory 

Validation  Tools  for  SWOE  Scene  Generation  Process .  273 

Max  P.  Bleiweiss,  U.S.  Army  Research  Laboratory;  J.  Michael  Rollins, 

Science  and  Technology  Corporation 

The  Vehicle  Smoke  Protection  Model  Development  Program .  281 

David  J.  Johnston,  OptiMetrics,  Inc.;  William  G.  Rouse,  Edgewood  Research, 
Development  and  Engineering  Center 

Development  of  a  Smoke  Cloud  Evaluation  Plan  . 291 

M.  R.  Perry,  Batelle;  W.  G.  Rouse  and  M.  T.  Causey,  Edgewood  Research, 
Development  and  Engineering  Center 

Analysis of Water Mist/Fog Oil Mixtures . . . 301

William  M.  Gutman  and  Troy  D.  Gammill,  Physical  Science  Laboratory; 

Frank  T.  Kantrowitz,  U.S.  Army  Research  Laboratory 

New  Millimeter  Wave  Transmissometer  System  . 309 

Robert  W.  Smith,  U.S.  Army  Test  and  Evaluation  Command; 

William  W.  Carrow,  EOIR  Measurements,  Inc. 

SESSION  V:  ATMOSPHERIC  PHYSICS 

Wind Field Measurement with an Airborne cw-CO2-Doppler-Lidar (ADOLAR) . 323

S.  Rahm  and  Ch.  Werner,  German  Aerospace  Establishment  DLR 

Behavior of Wind Fields through Tree Stand Edges . 331

Ronald  M.  Cionco,  U.S.  Army  Research  Laboratory;  David  R.  Miller, 

The  University  of  Connecticut 

Transilient  Turbulence,  Radiative  Transfer,  and  Owning  the  Weather .  345 

R.  A.  Sutherland,  T.  P.  Yee,  and  R.  J.  Szymber,  U.S.  Army 
Research  Laboratory 

Forecasting/Modeling  the  Atmospheric  Optical  Neutral  Events  Over  a 
Desert  Environment  . 

G.  T.  Vaucher,  Science  and  Technology  Corporation;  R.  W.  Endlich, 

U.S.  Army  Research  Laboratory 


SESSION  I  POSTERS  :  SIMULATION  AND  ANALYSIS 


Combined  Obscuration  Model  for  Battlefield  Induced  Contaminants  -  Polarimetric 

Millimeter  Wave  Version  (COMBIC-PMW)  . 367 

S.  D.  Ayres,  B.  Millard,  and  R.  Sutherland,  U.S.  Army  Research  Laboratory 

A  Multistream  Simulation  of  Multiple  Scattering  of  Polarized  Radiation  by 

Ensembles  of  Non-Spherical  Particles  . 381 

Sean  G.  O’Brien,  Physical  Science  Laboratory 

Combined  Obscuration  Model  for  Battlefield  Induced  Contaminants-Radiative 

Transfer  Version  (COMBIC-RT)  . 391 

Scarlett  D.  Ayres,  Doug  Sheets,  and  Robert  Sutherland,  U.S.  Army 
Research  Laboratory 

Emissive  Smoke  Modeling  for  Imaging-Infrared  Seeker/Tracker  Simulation  . 401 

Joseph  L.  Manning,  Charles  S.  Hall,  and  Sheri  M.  Siniard,  Computer 
Science  Corporation 

SESSION  II  POSTERS:  OPERATIONAL  WEATHER 

Performance of the U.S. Army Battlefield Forecast Model During

Operation Desert Capture II . . .

R.  E.  Dumais,  Jr.,  U.S.  Army  Research  Laboratory 

A  Weather  Hazards  Program  Used  for  Army  Operations  on  IMETS . 423 

Jeffrey  E.  Passner,  U.S.  Army  Research  Laboratory 

SESSION  III  POSTERS:  BATTLE  WEATHER 

Comparison  of  Radiometer  and  Radiosonde  Derived  Temperature  Profiles  Measured 

at  Wallops  Island,  VA . 433 

Edward  M.  Measure,  U.S.  Army  Research  Laboratory;  Dick  R.  Larson, 

Physical  Science  Laboratory;  Francis  Schmidlin  and  Sean  McCarthy, 

NASA  Goddard  Space  Flight  Center  Facility,  Wallops  Island,  VA 

The  Integrated  Weather  Effects  Decision  Aid  Threat  Module . . . 443 

David  P.  Sauter,  U.S.  Army  Research  Laboratory;  Carl  H.  Chesley 
and  Andrew  R.  Spillane,  Science  and  Technology  Corporation 

Owning  the  Weather  Battlefield  Observations  Framework . 449 

Richard  J.  Szymber  and  James  L.  Cogan,  U.S.  Army  Research  Laboratory 


Electro-Optical  Climatology  Microcomputer  Version  2.2  Demonstration  (EOCLIMO)  461 
Capt  Matthew  R.  Williams,  U.S.  Air  Force  Environmental  Technical 
Applications  Center 


SESSION  IV  POSTERS:  BOUNDARY  LAYER 


Technical Exchange with Australia . . . 467

James Gillespie and Patti Gillespie, U.S. Army Research Laboratory

Improvements  to  Modeling  of  Polarimetric  Scattering . 

Michael  DeAntonio,  National  Research  Council  Post  Doc,  U.S.  Army 
Research  Laboratory 


Atmospheric  Acoustic  Characterization  in  Support  of  BAT- Vehicle 
Field  Testing  . 

John  R.  Fox,  U.S.  Army  Research  Laboratory;  Prasan  Chintawongvanich, 
Physical  Science  Laboratory 


ARL  Remote  Sensing  Rover  as  a  Ground  Truth  Monitor  at  the  XM-21  Challenge 
System  Field  Test  . 

Frank T. Kantrowitz and Dale U. Foreman, U.S. Army Research Laboratory;
William  M.  Gutman,  Physical  Science  Laboratory 

Lidar  Observations  During  Smoke  Week  XIV 

M.  P.  Bleiweiss  and  R.  A.  Howerton,  U.S.  Army  Research  Laboratory 

Enhanced  Photon  Absorption  in  Multi-Component  Aerosol  Clouds 

Young  P.  Yee  and  Robert  A.  Sutherland,  U.S.  Army  Research  Laboratory 

Visualization  of  the  MADONA  Data  Base  and  Use  of  Selected  Sequences  in  a 

Wind  Flow  and  Diffusion  Simulation  System . 

Harald  Weber  and  Welfhart  aufm  Kampe,  German  Military  Geophysics  Office 


SESSION  V  POSTERS:  ATMOSPHERIC  PHYSICS 

Temperature  Profile  of  the  Nocturnal  Stable  Boundary  Layer  over  Homogeneous 
Desert  Using  LA-Teams  . 

R.  Todd  Lines,  New  Mexico  State  University;  and  Young  P.  Yee, 

U.S.  Army  Research  Laboratory 

Comparison  of  Boundary-Layer  Wind  and  Temperature  Measurements 
with  Model  Estimations . 

R.  J.  Okrasinski,  Physical  Science  Laboratory;  A.  Tunick, 

U.S.  Army  Research  Laboratory 




Optical  Turbulence  Measurements  at  Apache  Point  Observatory  . 545 

Frank  D.  Eaton,  John  R.  Hines,  and  William  H.  Hatch,  U.S.  Army 
Research  Laboratory;  James  J.  Drexler  and  James  Northrup,  Lockheed 
Engineering  and  Sciences  Company 

The  APRF  SODAR:  Bridging  the  Lower  Boundary  Layer  . 553 

John  Hines,  Frank  Eaton,  Scott  McLaughlin,  and  William  Hatch, 

U.S.  Army  Research  Laboratory;  G.  Hoidale,  W.  Flowers  and 
L.  Parker-Sedillo,  Science  and  Technology  Corporation 

A  Look  at  Thermal  Turbulence  Induced  Radar  Echoes  in  or  Near 

Rain  Clouds  at  the  Atmospheric  Profiler  Research  Facility . .  563 

William  H.  Hatch,  U.S.  Army  Research  Laboratory 

Border  Area  Air  Quality  Study  . 569 

B.  W.  Kennedy,  J.  M.  Serna,  J.  R.  Pridgen,  D.  Kessler,  J.  G.  Moran, 

G.  P.  Steele,  and  R.  Okrasinski,  Physical  Science  Laboratory; 

J.  R.  Fox,  R.  Savage,  and  D.  M.  Garvey,  U.S.  Army  Research  Laboratory 

APPENDICES 

Appendix  A:  Agenda . 579 

Appendix  B:  List  of  Attendees . 587 

Author  Index . 607 




PREFACE 


The 1994 Battlefield Atmospherics Conference was held 29 November through
1  December  1994  at  White  Sands  Missile  Range,  New  Mexico,  under  the  sponsorship  of 
the  U.S.  Army  Research  Laboratory,  Battlefield  Environment  Directorate,  White  Sands 
Missile  Range,  New  Mexico.  The  conference  included  oral  presentations,  posters,  and 
demonstration  sessions  on  five  topics:  Simulation  and  Analysis,  Operational  Weather, 
Battle  Weather,  Boundary  Layer,  and  Atmospheric  Physics.  The  conference  had  219 
attendees,  including  representatives  from  Denmark,  France,  Germany,  Israel,  The 
Netherlands,  and  the  United  Kingdom. 

The  genesis  of  the  Battlefield  Atmospherics  Conference  was  the  Electro-Optical  Systems 
Atmospheric  Effects  Library  (EOSAEL)  and  Tactical  Weather  Intelligence  (TWI) 
Conference set up to analyze the 1973 Israeli War and the effective use of smoke by the
Israeli  forces  to  defeat  electro-optical  systems. 

In  1991,  in  an  effort  to  encompass  additional  aspects  of  battlefield  atmospheric  effects 
such  as  acoustic  transmission,  the  conference  became  known  as  the  Battlefield 
Atmospherics  Conference. 

The  reader  will  find  the  items  related  to  the  conference  itself  (the  agenda  and  the  list  of 
attendees)  in  the  appendices.  An  author  index  is  included  after  the  appendices. 




Session  I 


SIMULATION/ANALYSIS 




VISUALIZATION  OF  OBSCURATION  AND  CONTRAST 
EFFECTS  USING  THE  BEAMS  MODELS 


Donald  W.  Hoock 
Patsy  S.  Hansen 

Battlefield  Environment  Directorate 
U.S.  Army  Research  Laboratory 
White  Sands  Missile  Range,  NM  88002-5501 

John  C.  Giever 
Sean  G.  O’Brien 
Physical  Science  Laboratory 
New  Mexico  State  University 
Las  Cruces,  NM  88003-0002 


ABSTRACT 

Interest  in  using  highly  interactive,  real-time  computer  simulations  as  development, 
analysis,  planning  and  training  tools  continues  to  expand  within  DoD  and  industry. 
Interactive  simulations  range  from  graphical  manipulations  of  3-D  scientific  data 
to  realistic  3-D  virtual  and  "fly-through"  environments  in  real-time.  Improvement 
in  both  real-time  graphics  hardware  and  wider  access  to  off-the-shelf  visualization 
software  has  particularly  stimulated  a  user  demand  for  better,  "physically  correct" 
models  of  processes,  effects  and  appearances.  One  such  improved,  physics-based 
model  for  support  of  battlefield  environment  simulations  is  the  U.  S.  Army 
Research  Laboratory,  Battlefield  Environment  Directorate,  Battlefield  Emission 
and  Multiple  Scattering  Model  (BEAMS).  BEAMS  computes  both  radiance  (color 
values)  and  partial  obscuration  (opacity)  of  inhomogeneous  battlefield  clouds  of 
obscurants,  smoke,  dust,  haze  and  fog  layers.  In  achieving  a  stable,  accurate 
solution,  BEAMS’  long  calculations  are  far  from  real-time.  A  full  BEAMS  3-D 
radiative  transfer  calculation  produces  diffuse  radiance  outputs  in  26  directions  for 
each  volume  element  of  its  non-uniform  cloud  concentration  distribution.  Thus, 
for  real-time  use  in  representing  the  "color"  and  "opacity"  of  battlefield  clouds  in 
simulators,  it  is  necessary  to  develop  simpler,  parametric  representations  of 
BEAMS  outputs.  These  outputs  are  now  being  analyzed  to  produce  one  of  a 
number  of  "environmental  representation"  products  to  support  real-time 
visualization of the battlefield environment. This paper addresses relative errors
between: using a simple mean radiance profile derived from many sets of BEAMS
calculations;  using  actual  transmittance  distributions  through  a  given  cloud  along 
with  the  scaled  (limiting)  path  radiance  averaged  from  many  BEAMS  calculations; 
and  performing  a  full  BEAMS  calculation  for  the  entire  cloud  at  each  point  in 
time.  BEAMS  outputs  for  these  approaches  to  cloud  visualizations  are  compared. 




1.  INTRODUCTION 

Given a concentration distribution and a wavelength-dependent mass extinction for the aerosol
obscurant, one can compute transmittance to an observer's position for every line of sight through an
obscurant cloud. In visualization, one applies cloud transmittance as a 2-D map or array of the
fractions of background radiance that will show through a cloud. However, transmittance is
only part of the picture: background radiance values can only be reduced (never increased) by just transmittance,
so by only using transmittance, every cloud appears dark against its background.

For clouds of obscurants, one usually also requires a scattered or emitted radiance that gives the cloud its own appearance
(color value) or wavelength-dependent signature. This is called the "path radiance" of the cloud.
It depends on external illumination, the cloud concentration and extinction per unit concentration,
the relative amount of scattering versus absorption from individual particles, and the wavelength-
dependent scattering pattern with angle (the phase function) for the type of obscurant particles
in the cloud. Except for optically "thin" clouds, the path radiance is also dependent on the many
possible paths over which the radiant power can be multiply-scattered before emerging.

The Battlefield Emission and Multiple Scattering Model (BEAMS) (Hoock et al. 1993; O'Brien
1993) is an approach to computing the steady-state diffuse path radiance for finite clouds of non-
uniform concentration. Typical run times for the BEAMS model to compute a 3-D cloud path
radiance distribution can be tens of minutes to hours. It gives the path radiance in 26 solid angles
at volume elements inside and on the surface of the cloud. A two-dimensional version (BEAMS-2D)
has also been developed (Hoock and Giever 1993; 1994) for haze and fog in vertically stratified layers.
BEAMS-2D requires a few seconds to minutes to compute the multiply-scattered path radiance
distributions in and among the layers in 34 directions (17 upward and 17 downward solid angles).

Real-time and near real-time interactive scene visualizations obviously cannot embed the
BEAMS codes directly into the scene generation process. However, to represent clouds of
obscurants, smoke, dust, haze and fog with physical accuracy one must give them both a correct
transparency (transmittance) and color value (path radiance). Thus, the current approach for
interactive simulations is to pre-compute databases or scenario-
dependent data sets (called environmental representations), such as scene illumination, visibility
and obscurant cloud radiance. It is these tables or simple parametric curve fits that are then used
in imaging and non-imaging combat simulations. Thus, it is first necessary to
determine if BEAMS model outputs are general enough to apply to a sufficiently large range of
cloud and illumination scenarios in battlefield environment simulations.

This paper addresses the extent to which BEAMS outputs can be reduced to useful data to
support real-time battlefield scene simulation. The relevant parameters are described in section
2. The BEAMS methodology is briefly reviewed in section 3, and scenario-dependent inputs are
given in section 4. Sections 5 and 6 are a case study of the analysis of BEAMS outputs. In
particular, the question is to what accuracy the consolidated outputs of many BEAMS runs
can represent the path radiance (color values) of real-time simulated battlefield obscurant
clouds. Section 7 gives conclusions.




2.  CLOUD  TRANSMITTANCE  AND  RADIANCE 

Assume  that  the  obscurant  cloud  dimensions  and  mass  concentration  C(x,y,z)  distribution 
throughout  the  cloud  are  known.  These  can  be  from  a  dynamic  model  of  transport  and  diffusion. 
Or,  they  can  be  provided  by  a  mean  obscurant  concentration  model  such  as  the  Combined 
Obscuration  Model  for  Battlefield  Contaminants  (COMBIC)  (Ayres  and  DeSutter  1994)  and  the 
2-D  or  3-D  concentration  fluctuations  provided  by  the  Statistical  Texturing  Applied  To 
Battlefield-Induced  Contaminants  (STATBIC)  model  (Hoock  1991).  We  also  assume  that  the 
wavelength-dependent mass extinction coefficient α(λ) is known. It can be obtained from the
Electro-Optical Systems Atmospheric Effects Library (EOSAEL) phase function database
(PFNDAT) (Davis et al. 1994) or computed from particle size distribution and wavelength-
dependent refractive index via a shape-dependent particle scattering model, such as the AGAUS
Mie code for spherical particles from EOSAEL (Miller, 1983). Transmittance T at wavelength
λ over a path from 0 to L through concentration C(s) = C(Ω·s) in direction Ω has concentration
length CL and optical depth τ:

T(\lambda; x) = e^{-\tau(\lambda)} = e^{-\alpha(\lambda)\,CL} = e^{-\alpha(\lambda)\int_0^L C(s)\,ds}   (1)


The change in radiance L(Ω;s) along the path in direction Ω is given in terms of all incoming
radiances Lin(Ω';s), the obscurant and wavelength-dependent single scattering albedo ω (ratio of
scattering to extinction), and a scattering phase function P(Ω',Ω), by the radiative transfer equation

\frac{dL(\Omega;s)}{ds} = -\alpha(\lambda)\,C(s)\left[\,L(\Omega;s) - \omega\int L_{in}(\Omega';s)\,P(\Omega;\Omega')\,d\Omega'\,\right]   (2)


If the incoming illuminating radiance has the same relative directional dependence over a finite
path of length Δs, so that an average incoming illumination can be defined over that path, then
one can define a "limiting path radiance" Ls(Ω) as:

L_s(\Omega) = \omega\int L_{in}(\Omega';\ \mathrm{averaged\ over\ }\Delta s)\,P(\Omega;\Omega')\,d\Omega'   (3)

The result, in terms of optical depth τ, transmittance T and path radiance Lp(Ω;s), is:

L(\Omega;s) = e^{-\tau}\,L(\Omega;0) + L_p(\Omega;s) = T(s)\,L(\Omega;0) + \left[\,1 - T(s)\,\right]L_s(\Omega)   (4)

3.  THE  BEAMS  MODEL 

Equations  1  through  4  have  direct  implications  to  rendering  propagation  effects  of  haze,  fog  and 
obscurant clouds. If L(Ω;0) is the scene background and L(Ω;s) is the perceived radiance at the
observer  position  s  after  passing  through  the  cloud,  then  the  first  term  is  the  transmitted 
background radiance through the cloud, and Lp(Ω;s) is the radiance (color value) observed from




the cloud itself. Furthermore, Ls(Ω) is the maximum (limiting) radiance from the cloud as the
transmittance goes to zero over the path. It can be used with graphics hardware that allows a
current (background) pixel color value to be blended toward a limiting value (thick cloud Ls)
linearly with an opacity factor [1 - T(s)]. While Ls is implicitly dependent on optical depth and
position in the cloud (since these affect the incoming illumination Lin), Ls has far less dependence
on local variations in τ than does Lp(Ω;s). We will exploit this in the following sections.

To compute Lp(Ω;s) and Ls(Ω) we use the BEAMS model. In the 3-D version of this model the
cloud  is  gridded  into  cubical  elements,  each  with  its  own  concentration  and  scattering  properties. 
The  radiance  is  broken  into  26  solid  angles  connecting  each  element  with  its  nearest  neighbors. 
The phase function P(Ω',Ω) is integrated over the incoming solid angle and averaged over the
outgoing  solid  angle  to  produce  a  26x26  transfer  matrix  of  incoming  and  outgoing  radiance  which 
can  be  used  in  place  of  the  integral  in  the  radiative  transfer  equation.  The  incident  illuminations, 
both  direct  and  diffuse,  on  the  outside  of  the  cloud  are  held  constant  as  boundary  conditions. 


The internal cloud elements are repeatedly swept over from different directions, redirecting
scattered or emitted radiance out of each element. When internal radiance fields settle down to
"final" values, then the outgoing radiance at the cloud boundaries is the Lp(Ω;s) radiance of the
cloud. The average Ls(Ω) is then computed through the cloud to each boundary point. Because
of the many angular averages involved in integrating the incoming radiance in each element, specific contributions
of  external  scene  elements  to  the  incoming  radiance  are  not  as  important  as  the  average  diffuse 
scene  illumination.  Strong  direct  (for  example  solar)  radiation  incident  on  the  cloud  is  used  to 
determine  the  direct-to-diffuse  radiance  source  terms  in  the  cloud  elements.  Thus,  the  resulting 
diffuse  radiance  from  the  cloud  can  usually  be  determined  from  basic  scene  illumination  inputs. 
The  BEAMS-2D  version  for  stratified  layers  is  similar,  although  standard  "doubling  techniques" 
are  used  instead  of  iterations  to  determine  the  solutions  (Hoock  and  Giever  1994). 
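The "sweep until the internal radiance field settles" idea can be illustrated, in drastically simplified form, by a one-dimensional two-stream analogue with isotropic scattering. This sketch is not the BEAMS algorithm (which uses 26 solid angles and a 26x26 transfer matrix on a 3-D grid); it only shows the kind of fixed-point iteration involved, and every name and value in it is an assumption for illustration.

import numpy as np

def two_stream_sweeps(n_layers, dtau, omega, top_down, bottom_up, n_iter=200):
    """Sweep a two-stream (downward/upward) radiance field in a 1-D slab until
    it settles.  Each layer has optical depth dtau and single-scattering albedo
    omega; top_down and bottom_up are boundary radiances held fixed."""
    T = np.exp(-dtau)                            # per-layer direct transmittance
    down = np.full(n_layers + 1, float(top_down))
    up = np.full(n_layers + 1, float(bottom_up))
    for _ in range(n_iter):
        # isotropic source function in each layer from the current field
        j_mean = 0.25 * (down[:-1] + down[1:] + up[:-1] + up[1:])
        src = omega * j_mean
        for i in range(n_layers):                # downward sweep
            down[i + 1] = down[i] * T + src[i] * (1.0 - T)
        for i in range(n_layers - 1, -1, -1):    # upward sweep
            up[i] = up[i + 1] * T + src[i] * (1.0 - T)
    return down, up

down, up = two_stream_sweeps(n_layers=20, dtau=0.2, omega=0.99,
                             top_down=1.0, bottom_up=0.3)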

4.  SCENARIO-DEPENDENT  VARIABLES 

Scenario  inputs  can  thus  be  identified  as  three  types: 

o Scene illumination in the form of: Sun angle (θ,φ) or similar direct beam source; ratio
of direct to diffuse incident irradiance; incident relative diffuse radiance on each element
of the cloud boundary; reflectance or albedo Ag of the boundary below the cloud (if any).

o Cloud inputs in the form of: Number of cloud elements (Nx,Ny,Nz); length of the side
of each element; concentration array C(i,j,k) for each element.

o Optical inputs in the form of: Obscurant type; phase function P; single scattering albedo
ω; mass extinction coefficient α; steady-state emission source term for each element.

Outputs, as previously described, are Lp(Ωm, sijk) and Ls(Ωm, sijk) for m = 1 to 26 directional solid
angles, and ijk = coordinates of elements on the surface or interior of the cloud. The optical
depths τ(i,j,k) of each element are computed simply by multiplying the concentration, mass
extinction and element size. They are summed along lines of sight for total optical path τ's.
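A minimal sketch of how the per-element optical depths just described could be formed and accumulated along a grid-aligned line of sight is given below; the array layout, axis choice and numbers are assumptions for illustration rather than the BEAMS data structures.

import numpy as np

def element_optical_depths(conc, alpha, cell_size):
    """tau(i,j,k) = concentration * mass extinction * element side length."""
    return conc * alpha * cell_size

def path_optical_depth(tau, i, j):
    """Total optical path: sum the element optical depths along a line of
    sight, here taken (for simplicity) straight along the k axis at (i, j)."""
    return np.sum(tau[i, j, :])

# Hypothetical 8x8x8 cloud of white phosphorus smoke, 2 m elements.
conc = np.random.uniform(0.0, 0.05, size=(8, 8, 8))   # g/m^3, illustrative
tau = element_optical_depths(conc, alpha=4.3, cell_size=2.0)
total_tau = path_optical_depth(tau, i=4, j=4)
transmittance = np.exp(-total_tau)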




5.  CASE  STUDY  -  RADIANCE  FROM  SMOOTH  VERSUS  STRUCTURED  CLOUDS 

Assuming  incident  scene  illumination  and  obscurant  optical  properties  are  fixed  or  vary  slowly, 
one  would  still  like  to  account  for  radiance  changes  due  to  cloud  shape  and  concentration  changes 
as  it  evolves  and  moves  downwind.  It  would  be  particularly  nice,  for  purposes  of  cloud  rendering 
and  real-time  simulation,  if  one  could  reuse  pre-computed  tables  or  curves  of  cloud  radiance  for 
a variety of cloud configurations. Since Ls mainly averages over incoming illumination in all
directions, one would hope that the Ls computed for smooth clouds and average concentrations
(and  optical  depths)  is  still  approximately  valid  for  structured  clouds  and  fluctuations  in  cloud 
concentration  about  the  mean.  Actual  cloud  output  radiance  Lp,  however,  is  expected  to  vary 
greatly  about  the  mean,  correlating  closely  to  the  structure  observed  in  cloud  appearance. 

To  test  these  assumptions,  a  series  of  cases  have  been  run.  The  obscurant  used  is  white 
phosphorus smoke at a visual (0.55 µm) wavelength. The mass extinction for phosphorus smoke
in the visual is 4.3 m²/g, the single scattering albedo is 0.9912, and the phase function is from
the PFNDAT database. A terrain surface albedo Ag of 0.3 is assumed. Solar zenith angles of
0,  30,  60  and  90  degrees  were  run  at  a  fixed  azimuth  placing  it  over  the  positive  y-axis  (y-z 
plane).  The  ratio  of  direct  sunlight  irradiance  (normal  to  propagation)  to  diffuse  irradiance  (on 
to  a  horizontal  surface)  was  taken  as  10,  representing  a  clear  day.  The  cloud  was  given  a  variety 
of  simple  rectangular  shapes  with  coordinates  x  downwind,  y  crosswind  and  z  vertical.  Cloud 
dimensions,  as  simple  ratios  of  X:Y:Z  lengths  per  side,  were  assigned  as:  1:1:1,  2:1:1,  4:1:1, 
2:2:1,  4:2:1  and  4:4:1.  The  number  of  blocks  used  (8  to  64)  and  optical  depths  were  varied  in 
runs  ranging  from  0.25  to  64  across  the  total  Y-dimension  of  the  cloud  (crosswind  width). 

Limiting  path  radiances  for  uniform  concentration  clouds  were  first  computed.  Several  of  the 
outputs  are  shown  in  figs.  1  through  4.  In  each  case  the  sun  is  at  an  azimuth  of  90  deg  and  a 
zenith angle of 60 deg. Figure 1 shows Ls computed for rays emerging at normal angles from
the centers of each cloud face. Note the large value when the optical depth is low and the cloud
is between the observer and sun. Figures 2 and 3 are other cases for radiance emerging at
various directions from the cloud. Figure 4 plots the output Ls emerging across the face of the
cloud,  with  sun  at  the  right. 

For optical depths below about 10 the variation is small. Above τ = 10 distinct darkening or
shadowing appears. This behavior has been found to be parameterized by the simple relation

L_s(\mathrm{corrected}) = L_s(\mathrm{uncorrected})\; e^{-s\,(\mathrm{perimeter}/\mathrm{area})}   (5)

which represents the "escape" of solar radiance through the sides of an optically thick cloud as
the product of the physical (not τ) distance s from the major radiance source (sun) into the cloud,
divided by the ratio of the cross sectional area of the cloud perpendicular to s to the perimeter
of this area. (Effectively the exponent is thus the area to volume ratio of the cloud up to the
distance s into the cloud.) These curves were then used to approximate the limiting path radiance
Ls for non-uniform clouds of the same average optical depth across the cloud. STATBIC was
used to generate these 3-D cloud concentrations, simulating the statistical properties of
concentration fluctuations in homogeneous, Kolmogorov turbulence. Figure 5 shows single x-y-z
plane cross-sections through one of the STATBIC-generated input arrays of 3-D concentration
fluctuations. Brighter regions represent greater concentrations.

Figures 1 through 4 (uniform cloud concentration; X:Y:Z = 1:1:1, Solar Az = 90, Zen = 60, Fdir/Fdif = 10, Ag = 0.3):

Figure 1. Limiting Path Radiance from Center of 6 Cloud Faces, Normal Angles, Showing Solar Angles.

Figure 2. Ls Limiting Path Radiance as in Figure 1, but for Outgoing Angles at 45 degrees.

Figure 3. Ls Limiting Path Radiance as in Figs. 1 and 2, But for Upward and Downward Look Angles.

Figure 4. Variation in Output Radiance Across a 16 m Cloud with Darkening Shadows at τ > 10.

Figure 5. STATBIC Concentrations Used in Simulations. Cuts are through the x-y-z Planes.

Figure 6. Radiance Example Output from BEAMS Run Using STATBIC Inputs.

As baseline cases, the non-uniform concentrations were run directly in BEAMS to obtain their
resulting radiances. Then, for comparison, the same non-uniform concentrations were used, but
with Ls values for each direction Ω and the mean optical depth across the cloud. Concentration
fluctuations lead to optical depth fluctuations τ'. So the proposed rapid (but approximate) cloud
radiance calculation is just

L_{cloud}(\Omega) = L_p(\Omega;\tau_m+\tau') \approx \left[\,1 - T(\Omega;\tau_m+\tau')\,\right] L_s(\Omega;\tau_m)   (6)

(that is, T fluctuating but Ls taken from the mean, uniform cloud).
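The rapid approximation of eq. 6, together with the thick-cloud correction of eq. 5, might be organized along the following lines. This is a hedged sketch only: the tabulated Ls value, cloud geometry and fluctuation field stand in for the products actually produced by BEAMS and STATBIC.

import numpy as np

def ls_corrected(ls_uniform, s, perimeter, area):
    """Eq. 5: darken the limiting path radiance of an optically thick cloud by
    exp(-s * perimeter / area), where s is the physical distance into the cloud
    from the sun side and perimeter/area describe the cross section
    perpendicular to s."""
    return ls_uniform * np.exp(-s * perimeter / area)

def fast_cloud_radiance(tau_mean, tau_fluct, ls_mean):
    """Eq. 6: combine a fluctuating transmittance with the limiting path
    radiance of the smooth (mean) cloud: Lp ~ [1 - T(tau_m + tau')] Ls(tau_m)."""
    tau_total = np.clip(tau_mean + tau_fluct, 0.0, None)   # keep tau >= 0
    return (1.0 - np.exp(-tau_total)) * ls_mean

# Illustrative use: a mean optical depth of 2.5 with random fluctuations
# standing in for a STATBIC realization, and a tabulated mean Ls of 0.6.
tau_fluct = np.random.normal(0.0, 0.75, size=(64, 64))
lp = fast_cloud_radiance(tau_mean=2.5, tau_fluct=tau_fluct, ls_mean=0.6)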


Figures 6 and 7 compare the outputs of the full calculation for radiance Lp (fig. 6) and limiting
path radiance Ls (fig. 7). Note that the resulting limiting radiance is quite smooth even with
the cloud fluctuations present. This supports the idea that most (but not all) of the fluctuation
is in the transmittance, not Ls. Table 1 quantitatively compares the absolute and mean squared
error between method #1 (complete BEAMS calculation for each fractal cloud realization) and
method #2 (the rapid calculation using Ls for a uniform cloud superimposed on transmission
fluctuations) for representative sets of runs. The most relevant values are those for error in Lp.
The fluctuations in Lp from both the full calculation and from the approximate result from Eq. 6 are
proportional to the input fluctuations in transmission, as one would expect.


TABLE 1. Error Analysis of BEAMS Output Diffuse Radiance Comparing Face-Averaged
Differences in Full Calculation for Non-Uniform Cloud and Using Ls
from Full Calculation for Uniform Cloud and Fluctuating Transmittance

Radiance Case                       Ls(Exact Fluct.) - Ls(Fast Param.)    Lp(Exact Fluct.) -    Lp(Exact Fluct.) -
(Solar Az 90, Zen 60;               % Absolute Error     % RMS Error      Lp(Uniform)           Lp(from Ls Table)
Fdir/Fdif = 10; Ag = 0.3)                                                 % Absolute Error      % Absolute Error

τ = 0.14 - 0.16, various rays       0.3-0.9%             0.2-0.7%         9-16%                 4-6%
τ = 0.234, Az 180, Zen 90           0.86%                0.77%            5%                    1.8%
τ = 0.56 - 0.66, various rays       1.1-2.9%             0.9-1.9%         8-14%                 4-8%
τ = 0.995, (180, 90) best case      0.83%                0.79%            1.3%                  4%
τ = 2.2 - 2.6, various rays         4.0-5.8%             3.8-4.5%         6-12%                 4-8%
τ = 3.75, (180, 90) best case       1.06%                0.91%            1.0%                  2%
τ = 8.9 - 10.6, various rays        3-10%                2.5-7%           4-10%                 5-8%
τ = 15, (180, 90) best case         4.0%                 2.9%             4.0%                  5%
τ = 36 - 43, various rays           4-8%                 3-8%             4-8%                  6-10%


Figure: STATBIC Textured Curved Surface (Cone).


6.  USE  OF  PARAMETRIC  RADIANCE  VALUES 

Scene  visualization  of  obscurant  clouds,  haze  and  fog  requires  both  transparency  (transmittance) 
and  color  value  (path  radiance)  from  realistic,  three-dimensional,  non-uniform  distributions  of 
aerosol  concentrations.  In  actual  simulators  the  cloud  is  rendered  into  a  2-D  screen  image  by  a 
variety  of  techniques.  A  common  approach  in  real-time  simulations  is  to  use  a  two-dimensional 
"billboard"  picture  of  a  cloud  as  it  would  be  seen  from  the  current  (and  perhaps  several)  observer 
positions.  This  picture-icon  is  placed  as  a  small,  simple  scene  object  which  is  kept  turned  toward 
the  observer  and  blended  to  the  background  by  being  totally  transparent  at  its  edges.  BEAMS 
can  provide  color  values  and  opacity  through  the  entire  cloud  for  this  approach,  although  values 
should  be  changed  as  the  "billboard"  is  turned.  A  more  ambitious  approach  to  giving  the  cloud 
a  3-D  presence,  as  in  fig.  8,  uses  a  semi-transparent  cloud  image  wrapped  over  a  3-D  object  cloud 
"surface"  as  a  semi-transparent  texture.  A  third  method  uses  many  small,  flat  semi-transparent 
disks  or  planes  that  represent  component,  textured  "puffs"  in  the  cloud.  These  are  distributed 
throughout  the  3-D  volume  of  the  cloud  region  and  turned  to  face  the  observer.  They  usually 
overlap  so  that  one  perceives  the  combined  color  and  attenuation  of  nearer  elements  in  front  of 
farther  ones.  This  is  shown  in  fig.  9  for  a  real-time  3-D  fly-through  simulation  of  smoke  from 
an  M2  Bradley.  Finally,  given  enough  time,  one  can  fully  render  the  most  accurate  propagation 
representation  of  the  cloud  as  a  complete  ensemble  of  semi-transparent  volume  elements  (voxels) 
of  different  optical  depths  and  color  values  in  a  3-D  cloud  volume. 
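For the overlapping "puff" approach described above, nearer semi-transparent elements are perceived in front of farther ones, which in computer graphics terms is back-to-front compositing with the standard "over" operator. A minimal sketch follows; the puff colors and opacities are hypothetical, and real simulators perform this blending in graphics hardware.

def composite_puffs(background_rgb, puffs):
    """Blend semi-transparent puffs over a background pixel, back to front.
    Each puff is (rgb, opacity), with opacity = 1 - transmittance; the standard
    "over" operator is out = opacity * puff + (1 - opacity) * behind."""
    color = list(background_rgb)
    for rgb, opacity in puffs:                   # puffs ordered farthest first
        color = [opacity * cp + (1.0 - opacity) * cb
                 for cp, cb in zip(rgb, color)]
    return tuple(color)

# Two invented smoke puffs in front of a dark background pixel.
pixel = composite_puffs((0.1, 0.1, 0.1),
                        [((0.7, 0.7, 0.7), 0.4), ((0.8, 0.8, 0.8), 0.5)])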

Whichever  approach  is  used  requires  two  arrays  (over  a  2-D  surface  or  in  a  3-D  volume)  of 
computer  graphics  parameters.  The  first  is  opacity  of  the  cloud  (usually  rendered  in  the  cpu  or 
automatically  in  the  graphics  hardware  as  a  random  dithered  matrix  of  clear  pixels  or  sub-pixel 
points  mixed  in  appropriate  ratios  with  opaque  colored  points).  Physically,  the  graphics  opacity 
has  a  complementary  relation  (opacity  =  1  -  transparency)  to  the  physical  transmittance 
(transparency)  of  the  cloud  at  the  given  wavelength.  The  second  array  is  the  color  value 
(typically RGB) of the cloud itself (Lp) or a limiting blending color (Ls). The latter uses opacity
as  a  linear  interpolator  between  the  unobscured  background  and  the  totally  opaque  cloud.  In  this 
case Ls is the color of the cloud when totally opaque. Opacity and Ls are direct inputs to the
Silicon  Graphics  "fog  function",  for  example,  which  renders  visibility  effects  using  its  internally 
computed  ranges  (z-buffer)  from  the  observer  to  each  scene  pixel  (Hoock  and  Giever,  1994). 

Figure  10  is  from  a  real-time  3-D  simulator  of  an  airfield  over  Ft.  Hunter-Liggett  terrain.  Haze 
effects make a similar use of τ (determined from visibility and the Koschmieder relation) with
a solar-angle dependent Ls. The "monolith" at the end of the runway is a black cube, 100 m on
a side. It has been placed into this 3-D simulation to measure the accuracy of SGI Performer
real-time software to achieve the objective definition of meteorological visibility when given
physically-correct Computer Image Generator (CIG) inputs. This scene is completely analogous
to the test procedures done on the basic SGI GL language "fog function" presented by us at last
year's BAC conference (Hoock and Giever, 1993). Figure 11 shows various simulations of
reduced visibility due to haze, solar illumination and fog that can be generated using τ and Ls
from  the  BEAMS-2D  program.  The  scene  is  from  a  near  real-time  virtual  3-D  representation  of 
White  Sands  Missile  Range  (USGS  terrain)  and  a  rendered  tank  in  the  foreground. 
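The haze rendering mentioned above derives an extinction coefficient from the meteorological visibility through the Koschmieder relation (for the conventional 2 percent contrast threshold, extinction = 3.912/V). A small sketch follows, with the visibility and range values purely illustrative.

import math

def koschmieder_extinction(visibility_m, contrast_threshold=0.02):
    """Koschmieder relation: extinction = -ln(threshold) / visibility, i.e.
    3.912 / V for the conventional 2 percent contrast threshold."""
    return -math.log(contrast_threshold) / visibility_m

def haze_transmittance(visibility_m, range_m):
    """Transmittance over a slant range through uniform haze."""
    return math.exp(-koschmieder_extinction(visibility_m) * range_m)

# A target viewed at 5 km range in 10 km meteorological visibility
# (numbers illustrative of the runway "monolith" test, not taken from it).
T = haze_transmittance(visibility_m=10000.0, range_m=5000.0)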




7.  CONCLUSIONS 


Rapid  progress  is  taking  place  in  the  incorporation  of  physics-based  environmental  effects  in 
interactive,  real-time  scene  simulations  of  the  battlefield  environment.  One  aspect,  dealing  with 
the  visualization  of  obscurants,  smoke,  dust,  haze  and  fog,  is  to  properly  simulate  the  obscuration 
and  radiance  effects  of  these  clouds  on  propagation.  These  effects  impact  target  acquisition, 
weapon  engagement,  identification  friend  or  foe,  concealment,  deception,  visual  cues,  mobility 
and  the  general  "realism"  of  scenes.  The  non-real-time  BEAMS  model  is  thus  being  used  to 
generate  propagation  data  that  can  be  used  as  data  sets  to  support  real-time  3-D  synthetic 
environment  simulations.  We  have  found  that  it  is  feasible  to  combine  tabulated  (or  parametric) 
mean values of limiting path radiance Ls (dependent on cloud type, mean transmittance and sun
angle)  with  simulated  fluctuations  in  transmittance  about  the  mean.  In  the  limited  case  study 
done  here,  the  relative  error  in  using  this  approach  over  the  full  (and  very  time  consuming) 
multiple  scattering  calculations  for  each  cloud  realization  is  overall  typically  under  14%. 

ACKNOWLEDGEMENTS 

ARL/BED  particularly  acknowledges  support  from  the  Joint  Project  Office  for  Smoke/Obscurants 
and  Special  Countermeasures  for  the  development  of  the  BEAMS  models.  And,  under  support 
from  the  Defense  Modeling  and  Simulation  Office  project  Environmental  Effects  for  Distributed 
Interactive  Simulation  (E2DIS),  the  BEAMS  model  outputs  are  being  analyzed  as  one  of  the 
environmental  representation  products  that  can  support  real-time  and  near  real-time  3-D  scene 
visualization  of  the  battlefield  environment.  The  authors  thank  Mr.  Mario  Torres  of  Science  and 
Technology  Corp.  and  Mr.  Steven  McGee  of  Physical  Science  Laboratory,  NMSU  for  their  help 
in  generating  scenes  for  figures  for  this  paper. 

REFERENCES 

Ayres, S. and S. DeSutter, 1993. EOSAEL 92: Vol 4. Combined Obscuration Model for Battlefield
Induced  Contaminants  (COMBIC).  In  press,  U.S.  Army  Research  Laboratory,  Battlefield 
Environment  Directorate,  White  Sands  Missile  Range,  NM  88002-5501. 

Davis,  B.,  A.  Wetmore,  D.  Tofsted,  R.  Shirkey,  R.  Sutherland  and  M.  Seagraves,  1994.  EOSAEL 
92:  Vol  19.  Aerosol  Phase  Function  Database  PFNDAT.  In  press,  U.S.  Army  Research 
Laboratory,  Battlefield  Environment  Directorate,  White  Sands  Missile  Range,  NM  88002. 

Hoock,  D.,  1991.  "Modeling  Time  Dependent  Obscuration  for  Simulated  Imaging  of  Dust  and 
Smoke  Clouds."  In  Proceedings  of  the  SPIE,  SPIE  Conference  Vol  1486,  pp.  164-175. 

Hoock, D. and J. Giever, 1993. "Methods for Representing the Atmosphere in Interactive Scene
Visualizations."  In  Proceedings  of  the  1993  Battlefield  Atmospherics  Conference,  U.S. 
Army  Research  Laboratory,  Battlefield  Environment  Directorate,  White  Sands  Missile 
Range,  NM  88002-5501,  pp  405-419. 




Hoock,  D.,  J.  Giever  and  S.  O’Brien,  1993.  "Battlefield  Emission  and  Multiple  Scattering 
(BEAMS),  a  3-D  Inhomogeneous  Radiative  Transfer  Model."  In  Proceedings  of  the  SPIE, 
SPIE  Conference  Vol  1967,  pp  268-277. 

Hoock,  D.  and  J.  Giever,  1994.  "Modeling  Effects  of  Terrain  and  Illumination  on  Visibility  and 
the  Visualization  of  Haze  and  Aerosols."  In  Proceedings  of  the  SPIE,  SPIE  Conference 
Vol  2223,  pp  450-461. 

Miller,  A.,  1983.  Mie  Code  AGAUS  82,  ASL-CR-83-0100-3,  U.S.  Army  Atmospheric  Sciences 
Laboratory,  White  Sands  Missile  Range,  NM  88002-5501.  (Now  in  reprints  as  EOSAEL 
92:  Vol  1.  Mie  Code  AGAUS). 

O’Brien,  S.,  1993.  "Comparison  of  the  BEAMS  2.2  Radiative  Transfer  Algorithm  with  other 
Radiative  Transfer  Methods."  In  Proceedings  of  the  1993  Battlefield  Atmospherics 
Conference,  U.S.  Army  Research  Laboratory,  Battlefield  Environment  Directorate,  White 
Sands  Missile  Range,  NM  88002-5501,  pp  421-435. 




A  PORTABLE  SYSTEM  FOR  DATA  ASSIMILATION 
IN  A  LIMITED  AREA  MODEL 


Keith  D.  Sashegyi  and  Rangarao  V.  Madala 
Naval  Research  Laboratory 
Washington,  DC  20375,  U.S.A. 

Frank  H.  Ruggiero 
Phillips  Laboratory 
Hanscom  AFB,  MA  01731,  U.S.A. 

Sethu  Raman 

North  Carolina  State  University 
Raleigh,  NC  27695,  U.S.A. 


ABSTRACT 

A  numerical  weather  prediction  system  has  been  developed  for  assimilating  regional 
and  mesoscale  data  in  a  high  resolution  limited  area  model.  The  system,  which  can  be 
run  on  both  high  performance  workstations  and  super  computers,  has  been  used  to 
study  the  assimilation  of  upper  air  soundings,  surface  observations,  and  precipitation 
estimates derived from satellite. The model's grid system is nested in the horizontal, with
a  fine  resolution  nest  covering  the  area  of  interest  surrounded  by  two  coarser 
resolution  nests.  An  efficient  iterative  analysis  scheme  is  used  for  interpolating 
atmospheric  sounding  data  and  surface  observations  to  the  model  grid.  A  sequential 
coupling  of  the  mass  and  wind  analyses  is  used  for  the  upper  air  data  outside  of  the 
tropical  regions.  Surface  observations  of  wind,  relative  humidity  and  potential 
temperature are analyzed on the lowest model vertical level. An iterative adjustment of
the  surface  fluxes,  and  the  winds,  temperature  and  humidity  in  the  planetary 
boundary  layer  then  follows.  A  normal  mode  initialization  with  the  diabatic  heating 
derived  from  observed  precipitation  is  used  to  balance  the  initial  mass  and  wind 
fields.  During  the  first  three  hours  of  a  subsequent  forecast,  the  observed  diabatic 
heating  is  merged  with  the  model  generated  diabatic  heating.  The  major  impact  of  the 
assimilation  scheme  is  in  the  enhancement  of  the  mesoscale  circulations  and 
precipitation  in  the  first  twelve  hours  of  model  forecast.  On  the  workstation,  a  twelve 
hour  period  of  assimilation  followed  by  a  24  hour  forecast  can  be  produced  within 
two  hours  of  clock  time  when  two  grids  are  used. 


1.  INTRODUCTION 

High  resolution  regional  weather  prediction  models  have  been  successfully  used  in  the  past  to 
study many mesoscale weather systems (Anthes 1990) and recently to provide operational
forecasts (Benjamin et al. 1991). Now with the introduction of new high temporal and spatial
resolution  observing  systems  such  as  the  Doppler  Radar  network,  automatic  surface  observing 
stations  and  the  GOES  I  satellite,  there  will  be  a  dramatic  increase  in  the  amount  of  data  available 
for  the  running  of  high  resolution  limited  area  models.  The  large  volume  of  data  produced  by  these 




new  remote  sensing  systems  will  limit  the  amount  of  data  that  can  be  included  in  the  operational 
weather  prediction  systems  at  central  weather  forecasting  centers.  A  high  resolution  weather 
prediction  model  run  at  a  local  center  would  be  better  able  to  utilize  the  data  from  such  a  local 
observing  system  for  producing  short  range  weather  forecasts  in  the  local  region.  Further  with  the 
increasing  computational  power  and  memory  now  available  in  desktop  workstations,  it  has  become 
possible  to  run  a  quite  sophisticated  weather  prediction  system  on  a  workstation.  Recently,  Cotton 
et  al.  (1994)  have  demonstrated  the  running  of  the  Regional  Atmospheric  Modeling  System 
(RAMS)  on  a  RISC  workstation  at  Colorado  State  University.  While  the  detailed  cloud 
microphysics  used  in  the  model  improved  the  forecasts,  this  was  at  an  increased  cost  in  CPU  time 
required  to  run  the  model.  In  the  very  near  future  with  further  increases  of  computational  power  in 
these  workstations,  it  should  be  possible  to  run  a  high  resolution  weather  prediction  system  on  a 
workstation  at  a  local  site,  utilizing  the  available  high  resolution  data  to  produce  accurate  short 
range  weather  forecasts  for  the  local  region. 


At the Department of Defense there is a great need for accurate short range regional and mesoscale
weather forecasts in support of military operations which can be used in different regions around
the globe. It is envisioned that a portable weather prediction system that can run on a workstation
will be able to produce high resolution short range 3-12 hour forecasts for this purpose. The short
range forecasts of high resolution would complement the larger scale forecasts which would be
available from a central site, such as the Navy's Operational Global and Regional
Atmospheric Prediction Systems (NOGAPS, NORAPS). These central site data sets, with high
resolution surface conditions (elevation, sea surface temperature, albedo, etc.) and available
observations, could be transmitted by satellite from the central site to the local site, such as a Navy
ship (fig. 1).


Figure 1. Illustration of future battlefield weather forecast system based on a high performance workstation on a
Naval ship with a high speed satellite communications link to an on-shore weather forecasting center.




Then, utilizing any local observations and high resolution satellite observations, very
high resolution analyses and forecasts could be run on the local high performance workstation to
support the military operations. Such a system will depend on the availability of high speed satellite
communications between the local military operation and the central weather forecasting site.
Several recent trials have demonstrated that transmission speeds of up to 1.5 megabits/sec can be
achieved between a Navy ship and a shore site (Masud 1994; Government Computer News, July 11, 1994, pp. 45, 47).
With these speeds, the data needed for the initial conditions and boundary values could easily be
transmitted to the ship within 10 minutes for use on the workstation system.


At the Naval Research Laboratory we have ported a simplified version of our numerical weather
prediction system to a high performance workstation. As a demonstration of the concept of a local
analysis and forecasting system using current technology, the limited-area modeling system is run
on the workstation using upper-air data collected during the Genesis of Atlantic Lows Experiment
(GALE) which was conducted over the southeastern U.S. during the winter of 1986. A 12-hour
period prior to the start of the forecast run is used to assimilate the observations into the model
using an intermittent data assimilation method as in Harms et al. (1992). A 12 hour prediction with
a 10-layer version of the numerical model utilizing a coarse mesh covering the continental U.S. and
a fine mesh covering the eastern U.S. took 30 mins of CPU time on the workstation. The analysis
component itself used 12 mins of CPU time to produce analyses at 19 pressure levels with a
hundred soundings.


2.  THE  ANALYSIS/FORECAST  SYSTEM 

The  intermittent  data-assimilation  method  is  used  as  in  Harms  et  al.  (1992)  to  assimilate  upper  air 
observations  during  a  12-hour  period  prior  to  the  start  of  the  forecast.  During  the  assimilation 
period,  the  numerical  model  forecast  provides  the  first  guess  or  background  for  the  analysis  of  the 
new  upper  air  data  at  3-hourly  intervals.  A  diabatic  initialization  procedure  is  used  to  balance  the 
mass  and  wind  fields.  The  assimilation  is  first  started  from  an  operational  analysis  which  is 
interpolated  to  the  model  grids  and  initialized.  After  the  assimilation,  short  range  predictions  of  12- 
24  hours  are  produced. 


2.1  Forecast  Model 

The forecast model used was developed at the Naval Research Laboratory and is described in detail
in the reports by Madala et al. (1987) and Harms et al. (1992). This is a hydrostatic primitive
equations model in terrain-following sigma coordinates with a triple nested grid network in the
horizontal. Spherical coordinates are used in the horizontal, with the mass and momentum
variables staggered on a C grid. The model uses the split-explicit method of time integration
(Madala 1981). The finite-difference scheme in flux form is second-order accurate, and in the
absence of sources and sinks, conserves total mass, energy and momentum. The model uses
horizontal diffusion of second order and includes large-scale precipitation, dry convective
adjustment and a modified Kuo cumulus parameterization scheme. A multi-level planetary
boundary layer scheme utilizes similarity theory in the surface layer and vertical turbulent mixing above
(Gerber et al. 1989; Holt et al. 1990). The mixing is modeled using a turbulent kinetic energy
equation (Detering and Etling 1985). The lateral boundary values for the coarse grid are derived
from 12 hourly operational analyses and forecasts by linearly interpolating in time. Boundary
values for an inner grid are provided by the integrations on the coarser grid. The model variables at
each grid boundary are updated each time step, using the relaxation scheme of Davies (1976).

In  the  version  of  the  model  used  on  the  workstation,  10  equally  spaced  vertical  layers  are  used  and 
the  boundary  layer  is  parameterized  using  a  single  layer  with  the  fluxes  computed  from 


* Government Computer News, July 11, 1994, pp. 45, 47.




generalized similarity theory as in Chang (1981). For the demonstration on the workstation two
nested grids are used, where the model's coarse grid covers the continental U.S. from 40° to
140°W and 10° to 70°N with a resolution of 2.0° latitude and 1.5° longitude. The fine grid covers
the eastern U.S. from 58° to 102°W and 23.5° to 56.5°N with a grid spacing one third that of the
coarse grid (approx. 50 km). For comparison, the full version of the model with the multi-layer
PBL is run on the Cray supercomputer with 16 layers in the vertical and a third fine mesh. The
third grid run on the Cray covers the south-eastern U.S. from 90°W to 70°W and 29.5°N to 40.5°N
with a resolution of 2/9 degree in longitude and 1/6 degree in latitude (about 20 km).

2.2  Analysis  Method 


Our  analysis  method  uses  the  successive  corrections  scheme  of  Bratseth  (1986),  which  converges 
to  the  same  solution  as  that  obtained  by  optimum  interpolation.  Such  iterative  analysis  schemes  are 
generally  more  efficient  than  the  optimum  interpolation  method,  which  requires  solving  a  large 
linear  system  of  equations  (Sashegyi  and  Madala  1994).  The  Bratseth  scheme,  in  which  the 
weights  are  also  based  on  the  statistical  correlations  of  the  forecast  error,  is  therefore  a  very 
attractive  method  for  use  in  a  portable  system  to  be  run  on  a  workstation.  This  method  has  been 
successfully  applied  operationally  in  the  multivariate  analysis  scheme  in  Norway  by  Grpnas  and 
Midtb0  (1987).  In  our  application  of  the  scheme  (Sashegyi  et  al.  1993),  univariate  analyses  of  the 
mass  and  wind  fields  are  initially  produced.  To  provide  a  coupling  of  the  mass  and  wind  fields,  the 
mass  analysis  is  enhanced  using  gradient  information  derived  from  estimates  of  the  geostrophic 
wind.  The  wind  analysis  is  used  to  provide  the  initial  estimate  of  the  geostrophic  wind.  The  wind 
analysis  is  then  also  updated  to  reflect  the  new  geostrophic  wind.  The  components  of  the  analysis 
method  are 

(a)  data  preparation  and  quality  control, 

(b)  univariate  analyses  of  the  mass  and  wind  field, 

(c)  enhancement  of  the  geopotential  gradient,  and 

(d)  enhancement  of  the  wind  field. 

The  analysis  scheme  is  described  in  more  detail  in  Sashegyi  et  al.  (1993)  and  Harms  et  al.  (1992). 
We  now  briefly  describe  each  of  these  components  in  turn. 

2.2.1  Data  preparation  and  quality  control.  Sounding  data  are  smoothed  in  the  vertical 
and  retained  at  50  mb  levels.  The  soundings  are  sorted  into  5°  latitude-longitude  boxes  for  each 
pressure  level  from  1000  mb  to  100  mb.  We  perform  a  "gross"  check  and  a  simplified  "buddy" 
check in which observations with large deviations from the first-guess or from neighboring
observations are removed. Observations in close proximity of each other are averaged to generate

super  observations  and  any  remaining  isolated  observations  are  eliminated.  If  an  operational 
analysis  is  available  at  the  time,  bogus  data  derived  from  the  operational  analysis  can  be  used  in 
regions  where  we  have  no  soundings. 

2.2.2 Univariate analyses of the mass and wind field. Univariate analyses of sea-
level pressure, geopotential, the u- and v-wind components and the relative humidity are conducted
on 19 pressure levels at 50 mb steps from 100 mb to 1000 mb, using a 1.5° latitude/longitude grid.
In the successive corrections method of Bratseth (1986), the analysis weights are derived from the
forecast error covariance, and include a "local data density", which reduces the weights in regions
of higher data density and prevents extrapolation into data void regions (Bratseth 1986). In the
method, the background field is updated by the latest analysis after each iteration or pass, where
the interpolated value at an analysis grid point after n such iterations is given by

\phi_{a,x}(n+1) = \phi_{a,x}(n) + \sum_{j=1}^{J} w_{x,j} \left[ \phi_{o,j} - \phi_{a,j}(n) \right]    (1)
where \phi_{o,j} is one observation at location r_j (of a total of J such observations), w_{x,j} is the weight
for that observation and \phi_{a,x} is the analyzed value at a grid point r_x. In the previous successive
corrections schemes, the updated analyzed values \phi_{a,x} were then interpolated to the observation
locations using a polynomial interpolation method, in order to compute the observation corrections
for the next iteration. Here, an "observation estimate" is computed instead by using the same
interpolating equation as was used for the analyzed values in eq. (1),

\phi_{a,i}(n+1) = \phi_{a,i}(n) + \sum_{j=1}^{J} w_{i,j} \left[ \phi_{o,j} - \phi_{a,j}(n) \right]    (2)


A starting guess for the analysis \phi_{a,x}(1) and observation estimate \phi_{a,j}(1) are derived from the
background forecast \phi_b by a cubic polynomial interpolation. Instead of using empirical weights as
in earlier schemes, the weights in each equation are defined in terms of the covariance of the
corrections to the background forecast, which are then reduced by dividing by a local data density:

w_{x,j} = \frac{\rho_{x,j}}{m_j}    (3)

w_{i,j} = \frac{\rho_{i,j} + \varepsilon^2 \delta_{ij}}{m_j}    (4)

where the local data density is defined by

m_j = \frac{M_j}{\sigma^2} = \sum_{i=1}^{J} \left( \rho_{i,j} + \varepsilon^2 \delta_{ij} \right)    (5), (6)


The \rho_{x,j} and \rho_{i,j} are the values of the correlation function for the true background forecast errors
(\phi_t - \phi_b) between values at an observation location r_j and at a grid point r_x, and between the
values at observation locations r_i and r_j, respectively. Here we have assumed that the observation
errors are not correlated with the forecast errors. The variance of the background forecast errors is
\sigma^2, \varepsilon^2 is the ratio of the observation error variance to the background forecast error variance
\sigma^2, and \delta_{ij} is the Kronecker delta function (one for i = j, zero otherwise). The error correlation
function \rho(r) for the mass and humidity is modeled by a Gaussian function,

\rho(r) = e^{-r^2/d^2}    (7)


which is a function of the distance r and the length scale d; d is 600 km. For the components of the
wind field the correlation functions are reduced across the direction of the flow using

\rho_u = \left[ 1 - \frac{(y - y_j)^2}{d_u^2} \right] \rho(r)    (8)

\rho_v = \left[ 1 - \frac{(x - x_j)^2}{d_u^2} \right] \rho(r)    (9)

where d_u is 700 km and (x, y) and (x_j, y_j) are the positions of the analysis grid point and the
observation, respectively. After the first three or four iterations the length scales d and d_u are reduced
to 330 km and 380 km, respectively, for one additional iteration, to speed convergence of the
scheme (see also Grønås and Midtbø 1987).
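To make the iteration concrete, the following one-dimensional Python sketch applies eqs. (1)-(7) to a toy problem. The grid, the three observations, and the error parameters are invented, and the real analysis is of course two-dimensional and multivariate; the sketch only illustrates how the Gaussian correlations, the local data density, and the two update equations fit together.

```python
import numpy as np

# Minimal 1-D sketch of the Bratseth (1986) successive-corrections update,
# eqs. (1)-(7): grid values and "observation estimates" are corrected with
# weights built from a Gaussian correlation divided by a local data density.
# Grid, observations, and error parameters below are illustrative only.

def bratseth_analysis(xg, bg, xo, yo, d=600.0, eps2=0.25, npass=4):
    """xg, bg: grid locations (km) and background values on the grid.
       xo, yo: observation locations and values.  d: length scale (km).
       eps2: observation-to-background error variance ratio."""
    rho_xo = np.exp(-((xg[:, None] - xo[None, :]) / d) ** 2)   # grid-obs correlation
    rho_oo = np.exp(-((xo[:, None] - xo[None, :]) / d) ** 2)   # obs-obs correlation
    m = (rho_oo + eps2 * np.eye(len(xo))).sum(axis=0)          # local data density, eq. (5)
    w_x = rho_xo / m                                           # eq. (3)
    w_o = (rho_oo + eps2 * np.eye(len(xo))) / m                # eq. (4)

    phi_a = bg.copy()                       # analysis, started from the background
    phi_est = np.interp(xo, xg, bg)         # observation estimate, first guess
    for _ in range(npass):
        innov = yo - phi_est                # corrections at the observations
        phi_a = phi_a + w_x @ innov         # eq. (1)
        phi_est = phi_est + w_o @ innov     # eq. (2)
    return phi_a

# Example: analyze three "height" observations against a flat background.
xg = np.linspace(0.0, 3000.0, 61)                 # grid point every 50 km
bg = np.zeros_like(xg)
xo = np.array([800.0, 1500.0, 1600.0])
yo = np.array([12.0, 20.0, 18.0])
print(bratseth_analysis(xg, bg, xo, yo).max())
```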

2.2.3  Enhancement  of  the  geopotential  gradient.  We  use  the  analyzed  wind  as  an 
initial  estimate  of  the  geostrophic  wind,  which  is  then  used  to  extrapolate  the  geopotential  to  the 
grid  point  locations  for  a  further  iteration  of  the  geopotential  analysis,  in  a  fashion  similar  to 
Cressman  (1959).  That  is, 

(10) 

where \nabla\phi_{o,j} is the gradient derived from the horizontal wind at the observation location using the
geostrophic  relation.  A  fixed  correlation  length  scale  of  600  km  is  used  for  the  re-analysis.  An 
updated  geostrophic  wind  estimate  is  then  defined  by  the  new  geopotential  gradient.  Three  further 
iterations  of  the  geopotential  are  used  for  the  geostrophic  wind  estimate  to  converge. 

2.2.4  Enhancement  of  the  wind  gradient.  The  geostrophic  wind  changes  produced  by 
the  geopotential  enhancement  are  then  used  to  update  the  univariate  wind  analysis  as  in  Kistler  and 
McPherson  (1975),  where  the  updated  wind  is  given  by 

v^* = v + \Delta v_g    (11)

for geostrophic wind changes \Delta v_g. Four additional passes of the wind univariate analysis are then
used to enhance the ageostrophic components of the wind.

The  final  analyzed  corrections  on  pressure  surfaces  are  interpolated  to  the  horizontal  model  grids 
using  a  cubic  polynomial  interpolation.  Both  the  background  forecast  fields  and  the  new  analyzed 
fields  on  pressure  surfaces  are  interpolated  to  the  sigma  levels  of  the  model.  Analysis  corrections 
are  then  recomputed  on  the  sigma  levels  to  update  the  model  forecast  fields. 

2.3  Surface  Analysis  and  Boundary  Layer  Adjustment 

To utilize the large volume of surface observations which are available, analyses of potential
temperature, relative humidity and wind are carried out on the model's lowest sigma layer with a
horizontal grid of 0.5° resolution that covers the domain of the middle grid. For surface pressure,
the observed and model forecast surface pressures are reduced to sea level following a procedure
similar to Benjamin and Miller (1990). In our case we use a lapse rate computed from the virtual
temperature at 255 and 105 mb above the surface, extrapolating the virtual temperature to the
surface to define an effective mean surface temperature. Univariate analyses are then produced for
the lowest model layer as in the upper air analysis described in section 2.2.2. For the analysis of
sea level pressure, the Gaussian correlation function in eq. (7) is used with a correlation length
scale d of 300 km, as in Miller and Benjamin (1992). For potential temperature, humidity and the u
and v components of the wind, the Gaussian correlation functions are similarly modified as in
Miller and Benjamin. The potential temperature and wind in the planetary boundary layer are then
adjusted by a forward integration of the vertical diffusion equation for a number of time steps.
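The sea-level reduction step can be illustrated with a small sketch. The code below assumes a simple hypsometric reduction using a lapse rate estimated from the virtual temperatures at the two levels above the surface, which conveys the general idea described above but is not the exact Benjamin and Miller (1990) formulation; the station values in the example are invented.

```python
import math

# Sketch of a sea-level pressure reduction: estimate a lapse rate from virtual
# temperatures at two levels above the surface, extrapolate Tv to the surface,
# then apply the hypsometric equation over the station elevation using the
# layer-mean Tv.  Illustrative only; not the operational formulation.

RD = 287.05   # gas constant for dry air, J kg-1 K-1
G = 9.80665   # gravity, m s-2

def sea_level_pressure(p_sfc_hpa, z_sfc_m, tv_low_k, tv_high_k, dz_low_m, dz_high_m):
    """p_sfc_hpa: surface pressure; z_sfc_m: station elevation above sea level;
       tv_low_k / tv_high_k: virtual temperatures at the two levels above the
       surface; dz_low_m / dz_high_m: heights of those levels above the surface."""
    lapse = (tv_low_k - tv_high_k) / (dz_high_m - dz_low_m)   # K per m
    tv_sfc = tv_low_k + lapse * dz_low_m                      # extrapolated surface Tv
    tv_mean = tv_sfc + 0.5 * lapse * z_sfc_m                  # mean Tv of fictitious layer
    return p_sfc_hpa * math.exp(G * z_sfc_m / (RD * tv_mean))

print(round(sea_level_pressure(900.0, 1000.0, 282.0, 270.0, 900.0, 2300.0), 1))
```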




2.4  Diabatic  Normal  Mode  Initialization 


In this forecasting system, the updated forecast fields are initialized on the coarse and fine grids for
the first three vertical modes of the numerical model as described in Sashegyi and Madala (1993)
using the vertical mode scheme of Bourke and McGregor (1983). For each vertical mode of the
forecast model, the equations for the vorticity \zeta, divergence D and a generalized geopotential \Phi
are

\partial \zeta / \partial t + f D = A_{\zeta}    (12)

\partial D / \partial t - f \zeta + \nabla^2 \Phi = A_D    (13)

\partial \Phi / \partial t + g h_k D = A_{\Phi}    (14)

where h_k is the equivalent depth for the kth vertical mode, f is the Coriolis parameter, g is the
acceleration due to gravity, and the terms on the right hand sides of the equations include the non-
linear advection, friction and cumulus heating. The generalized geopotential is defined by \Phi =
p_s [\phi - \phi_s + R T^* - \phi^*], where p_s is the surface pressure, \phi the geopotential, \phi_s the surface
geopotential, and T^* and \phi^* a mean temperature and geopotential profile. The filtering conditions used
to remove the fast inertia-gravity waves are

\partial D / \partial t = \partial (f \zeta - \nabla^2 \Phi) / \partial t = 0    (15)

with the further condition that the linearized potential vorticity \zeta - f \Phi / (g h_k) is unchanged by the
procedure. The amplitude of the inertia-gravity modes depends only on the divergence D and the
ageostrophic "vorticity" f \zeta - \nabla^2 \Phi, and setting their tendencies to zero initially effectively removes
the inertia-gravity waves. In applying these conditions for the first three vertical modes, the scheme
is solved iteratively. This and other methods which can be used to apply normal mode initialization
to a limited area model are further discussed in Sashegyi and Madala (1994).

For the initialization on the fine grid, boundary values for the mass field and the tangential wind are
updated using the results of the initialization on the coarse grid. As in Harms et al. (1992; 1993),
diabatic forcing is included as a fixed forcing function in the initialization, where the diabatic
heating rates are computed from a merged field of observed and model-produced rainfall. A
reverse Kuo cumulus parameterization scheme is used to convert these prescribed rain rates into
vertical heating profiles in regions where the lower atmosphere is convectively unstable. During the
first three hours of a subsequent forecast, the prescribed heating rates (used in the initialization) are
linearly combined with the model generated heating rates. The weighting factor for the prescribed
heating rate is one initially and decreases as a sine function to zero after three hours of integration
(Harms et al. 1993).
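The blending of prescribed and model-generated heating during the first three hours of the forecast can be sketched as follows. The quarter-cosine form of the weighting factor is an assumption made here purely for illustration (the text specifies only that the weight is one initially and decreases as a sine function to zero after three hours), and the heating values are invented.

```python
import math

# Sketch of the heating-rate blend during the first 3 h of the forecast: the
# prescribed (initialization) heating is weighted by a factor that is one at
# t = 0 and falls sinusoidally to zero at t = 3 h.  The exact functional form
# is an assumption here; a quarter-cosine is one such curve.

BLEND_HOURS = 3.0

def blended_heating(q_prescribed, q_model, t_hours):
    if t_hours >= BLEND_HOURS:
        return q_model
    w = math.cos(0.5 * math.pi * t_hours / BLEND_HOURS)   # weight: 1 -> 0 over 3 h
    return w * q_prescribed + (1.0 - w) * q_model

for t in (0.0, 1.5, 3.0):
    print(t, round(blended_heating(10.0, 4.0, t), 2))
```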

3.  DISCUSSION 

As an example, three-hourly upper air soundings, which were collected during the second
Intensive Observing Period (IOP) of GALE, were used to generate analyses and forecasts with our
analysis/forecast system on the workstation. The 1000 mb analysis for 1200 UTC 25 January
1986 is shown in Fig. 2. The cold air damming and the strong temperature gradient along the East
Coast, which were generated by the first guess forecast, are retained in the analysis of the upper air




Figure  2.  The  analyzed  1000  mb  temperature,  winds  and  sea-level  pressure  for  12  UTC  25  January  1986.  The  solid 
contours  of  sea  level  pressure  are  every  4  mb,  dashed  contours  of  temperature  every  5°C,  and  vectors  indicate  the 
direction  and  magnitude  of  the  winds. 


Figure  3.  The  analyzed  sea  level  pressure,  temperature  and  winds  at  the  lowest  model  level  for  6  UTC  25  January. 
Contours  as  in  Fig.  2. 


soundings.  The  low  over  the  Great  Lakes,  which  was  too  weak  in  the  first  guess  (Sashegyi  et  al. 
1993)  was  corrected  by  the  analysis.  The  surface  analysis  produced  by  the  16  layer  Cray  version 
of  the  model  is  shown  in  Fig.  3  for  0600  UTC  25  January  as  the  strong  temperature  gradient  was 
developing  across  the  coastline.  On  this  higher  resolution  grid,  the  confluence  of  the  flow  and  the 
temperature gradient along the coastline are stronger, but the general features were similar to those
produced on the coarser grid on the workstation. The prediction of rainfall during the first 12-
hours  of  integration  is  much  improved  by  using  the  12-hour  period  of  assimilation  prior  to  running 
the  forecast  (Harms  et  al.  1992).  In  Fig.  4,  the  rainfall  seen  over  the  Carolinas  and  across  the 
Florida  panhandle  in  the  first  six  hours  of  the  forecast  was  produced  as  a  result  of  the  assimilation 
on  the  workstation. 

4.  CONCLUSIONS 

An analysis/forecasting system with a 12-hour period of assimilation prior to the running of a
forecast  was  run  on  a  high  performance  workstation.  The  intermittent  scheme  with  3  hour  updates 
successfully  assimilates  upper  air  observations,  maintaining  the  ageostrophic  circulations  generated 
by  the  forecast  model.  A  higher  resolution  planetary  boundary  layer,  a  third  horizontal  grid  of  finer 
resolution  and  a  surface  analysis  in  the  full  model  were  run  on  the  Cray  super  computer  for 
comparison.  With  10  layers  used  in  the  vertical  and  quite  a  coarse  operational  analysis  used  as  the 
starting  point  for  the  assimilation,  good  results  were  achieved  in  a  reasonable  CPU  time  of  two 
hours  on  the  workstation  when  compared  with  the  full  run  on  the  Cray. 


Figure  4.  Six  hour  forecast  of  accumulated  precipitation  in  cm  valid  at  6  UTC  26  January  1986.  Contours  0.1  cm, 
every  0.25  cm  up  to  1.0  cm  and  then  every  0.5  cm. 




ACKNOWLEDGMENTS 


Support  for  this  research  was  provided  by  SPAWAR  of  the  U.S.  Navy  and  by  basic  research  programs 

at  the  Naval  Research  Laboratory.  The  computing  was  supported  in  part  by  a  grant  of  HPC  time  from 

the  DOD  HPC  Shared  Resource  Center  for  use  on  the  Cray  YMP-EL  at  NRL. 

REFERENCES 

Anthes,  R.A.,  1990:  "Recent  applications  of  the  Penn  State/NCAR  mesoscale  model  to  synoptic, 
mesoscale  and  climate  studies."  Bull.  Amer.  Meteor.  Soc.,  71,  1610-1629. 

Benjamin,  S.G.,  K.A.  Brewster,  R.  Brammer,  B.F.  Jewett,  T.W.  Schlatter,  T.L.  Smith  and  P.A.  Stamus, 
1991:  "An  isentropic  three-hourly  data  assimilation  system  using  ACARS  aircraft  observations." 
Mon.  Wea.  Rev.,  119,  888-906. 

Benjamin, S.G. and P.A. Miller, 1990: "An alternative sea level pressure reduction and a statistical
comparison of geostrophic wind estimates with observed surface winds." Mon. Wea. Rev., 118,
2099-2116.

Bourke,  W.,  and  J.L.  McGregor,  1983:  "A  nonlinear  vertical  mode  initialization  scheme  for  a  limited 
area  prediction  model."  Mon.  Wea.  Rev.,  Ill,  2285-2297. 

Bratseth,  A.M.,  1986:  "Statistical  interpolation  by  means  of  successive  corrections."  Tellus,38A,  439- 
447. 

Chang,  S.W.,  1981:  "Test  of  a  planetary  boundary-layer  parameterization  based  on  a  generalized 
similarity  theory  in  tropical  cyclone  models."  Mon.  Wea.  Rev.,  109,  843-853. 

Cotton,  W.R.,  G.  Thompson  and  P.W.  Mielke  Jr.,  1994:  "Real-Time  mesoscale  prediction  on 
workstations."  Bull.  Amer.  Meteor.  Soc.,  75,  349-363. 

Cressman,  G.  1959:  "An  operational  objective  analysis  system."  Mon.  Wea.  Rev.,  87,  367-374. 

Davies, H.C., 1976: "A lateral boundary formulation for multi-level prediction models." Quart. J.
Roy.  Meteor.  Soc.,  102,  405-418. 

Detering,  H.W.  and  D.  Etling,  1985:  "Application  of  the  E-eps  turbulence  model  to  the  atmospheric 
boundary layer." Bound.-Layer Meteorol., 33, 113-133.

Gerber,  H.S.,  S.W.  Chang  and  T.R.  Holt,  1989:  "Evolution  of  a  marine  boundary  layer  jet."  J.  Atmos. 
Sci.,46,  1312-1326. 

Grønås, S. and K.H. Midtbø, 1987: "Operational multivariate analyses by successive corrections."
Collection  of  papers  presented  at  WMO/IUGG  numerical  weather  prediction  symposium,  Tokyo, 
4-8  August  1986,  J.  Meteor.  Soc.  Japan,  61-74. 

Harms,  D.E.,  K.D.  Sashegyi,  R.V.  Madala,  and  S.  Raman,  1992:  Four-dimensional  data  assimilation 
of  GALE  data  using  a  multivariate  analysis  scheme  and  a  mesoscale  model  with  diabatic 
initialization. NRL Memo. Rep. No. 7147, Naval Research Laboratory, Washington, D.C., 219 pp.
[NTIS A256063].

Harms,  D.E.,  R.V.  Madala,  S.  Raman  and  K.D.  Sashegyi,  1993:  "Diabatic  initialization  tests  using  the 
Naval Research Laboratory limited area numerical weather prediction model." Mon. Wea. Rev.,
121, 3184-3190.

Holt,  T.R.,  S.W.  Chang  and  S.  Raman,  1990:  "A  numerical  study  of  the  coastal  cyclogenesis  in  GALE 
lOP  2:  Sensitivity  to  PBL  parameterization."  Mon.  Wea.  Rev.,  118,  234-257. 

Kistler, R.E., and R.D. McPherson, 1975: "On the use of a local wind correction technique in four-
dimensional data assimilation." Mon. Wea. Rev., 103, 445-449.

Madala, R.V., 1981: "Efficient time integration schemes for atmosphere and ocean models." Finite
Difference Techniques for Vectorized Fluid Dynamic Calculations, Chpt. 4, Springer Verlag, pp

Miller,  P.A.  and  S.G.  Benjamin,  1992:  "A  system  for  the  hourly  assimilation  of  surface  observations 
in  mountainous  and  flat  terrain."  Mon.  Wea.  Rev.,  120,  2342-2359. 

Sashegyi,  K.D,  D.E.  Harms,  R.V.  Madala,  and  S.  Raman,  1993:  "Application  of  the  Bratseth  scheme 
for  the  analysis  of  GALE  data  using  a  mesoscale  model."  Mon.  Wea.  Rev.,  121,  2331-2350. 

Sashegyi,  K.D.  and  R.V.  Madala,  1994:  "Initial  Conditions  and  Boundary  Conditions."  Mesoscale 
Modeling  of  the  Atmosphere,  Meteorological  Monographs,  Vol.  25,  No.  47,  Chpt.  1,  Amer. 

Meteor.  Soc.,  pp  1-12. 

Sashegyi,  K.D.  and  R.V.  Madala,  1993:  "Application  of  vertical-mode  initialization  to  a  limited-area 
model  in  flux  form."  Mon.  Wea.  Rev.,  121,  207-220. 




EFFECT  OF  HIGH-RESOLUTION 
ATMOSPHERIC  MODELS  ON  WARGAME  SIMULATIONS 

Scarlett  D.  Ayres 

Battlefield  Environment  Directorate 
U.S.  Army  Research  Laboratory 
White  Sands  Missile  Range,  New  Mexico  88002-5501 


ABSTRACT 

Battlefield weather conditions have affected, sometimes determined, the outcome
of military conflicts and the resultant global order for generations. An area of
continuing concern for military strategists, the operations research community, and
soldiers throughout history has been atmospheric variability and its impact on the
battlefield. The Combat-Induced Atmospheric Obscurants (CIAO) system is a
prototype computer-based atmosphere modeling and simulation system designed
to demonstrate the impact of the effects of advanced high-resolution atmospheric
models on force-on-force wargame simulations, such as the Combined Arms and
Support Task Force Evaluation Model; thus impacting tactics and doctrine derived
from the simulations. Wargames use low-resolution atmospheric models that tend
to ignore some of the more realistic effects of the battlefield environment and
weather that could prove highly significant on the wargame outcome. In the past
this limitation was necessary because of computer restrictions and the
unavailability of appropriate atmospheric models. The goal of
the CIAO system is to determine the impact of advanced high-fidelity, high-
resolution obscuration models on simulated battles. A poster/paper was presented
at the 1994 Battlefield Atmospherics Conference detailing the purpose of the
models included in CIAO. This paper illustrates the expected effect of these
models on the wargame.

1.  INTRODUCTION 

The Combined Arms and Support Task Force Evaluation Model (CASTFOREM) deals with a
plane-parallel atmosphere with wind varying neither with direction nor speed in the horizontal
direction. Terrain can cause a nonlinear inertial character of the flow interacting with the terrain
surface; terrain sheltering and channeling, wakes, and flow separation are not represented.
CASTFOREM uses a fairly detailed smoke model (Combined Obscuration
Model for Battlefield Induced Contaminants (COMBIC) (Ayres, DeSutter 1993)) to determine
the degradation of transmission caused by smoke. However, COMBIC uses a static
boundary layer model. The wind field direction and horizontal windspeed profile in COMBIC
are assumed to be uniform everywhere in the scenario. Wind fields and diffusion are modified
by the effects of complex terrain and surface properties in the real world. COMBIC is a flat-
terrain model. It allows only a uniform boundary layer wind field that is assumed to apply over
the entire geographic region. COMBIC smoke flows through hills instead of over and around
them. The CIAO system adds high-resolution atmospheric and modified smoke models to the




CASTFOREM  wargames  to  more  realistically  simulate  the  battlefield  atmosphere.  The  models 
discussed  are  SANDIA,  High-Resolution  Wind  (HRW),  Onion  Skin,  and  Radiative  Energy 
Balance  Redistribution  (REBAR),  and  the  radiative  transfer  (RT)  and  polarimetric  millimeter 
(PMW)  version  of  COMBIC. 

2.  CASTFOREM 

CASTFOREM  is  a  high-resolution,  two-sided,  force-on-force,  stochastic,  event-sequenced, 
systemic  simulation  of  a  combined  arms  conflict  (Mackey  et  al.  1992).  CASTFOREM  represents 
tactics  through  the  use  of  decision  tables,  and  it  embeds  an  expert  system  for  battlefield  control. 
Battle  orchestration  up  to  the  battalion  level  is  accomplished  strictly  through  the  use  of  decision 
tables.  CASTFOREM  provides  extensive  line-of-sight  (LOS)  calculations  along  various  observer- 
to-target  directions,  accounting  for  terrain,  elevation,  and  vegetation.  CASTFOREM  also 
accounts  for  intervening  atmospheric  conditions  that  can  include  effects  of  combat  induced 
obscurants  through  the  use  of  the  COMBIC  model.  Digitized  terrain  is  included  but  is  not,  at 
present,  coupled  with  the  meteorological  conditions.  The  original  CASTFOREM  assumes 
homogeneous  weather,  considered  constant  in  time  and  space. 

3.  IMPROVEMENTS 
3.1  Effects  of  HRW 

Modifying  COMBIC  for  complex  terrain  would  greatly  increase  the  run  time,  a  fact  that  could 
adversely  affect  CASTFOREM  users.  Instead  of  using  a  simple  wind  model  with  a  complex 
smoke  model,  it  was  decided  to  use  a  complex  wind  model  with  a  simplified  smoke  model.  The 
HRW  model  developed  at  the  Army  Research  Laboratory  can  be  used  to  determine  wind  fields 
and, in conjunction with rudimentary smoke clouds produced by the SANDIA and Onion Skin
models,  examines  the  effects  that  terrain-induced  wind  fields  can  have  on  the  modern  battlefield. 
The  HRW  model  is  a  high-resolution  micro-alpha  scale,  two-dimensional,  surface  layer  wind  and 
temperature  model  (Cionco  1985;  Cionco,  Byers  1993).  The  model  supplies  high-resolution 
calculations  of  surface  layer  wind,  temperature,  and  turbulence  parameters  at  selected  grid  points 
over  a  limited  area,  considering  both  the  terrain  topography  and  thermal  structure.  SANDIA  and 
Onion  Skin  are  highly  parameterized  smoke  obscuration  models.  SANDIA  treats  smoke  as  binary 
entities  in  either  of  two  electro-optical  (EO)  bandpasses  (visible  and  infrared)  (Sutherland, 
Banks 1986). Onion Skin models smoke as if it were layered like an onion. SANDIA and Onion
Skin are easily modified to have the smoke clouds follow the wind streamlines, as determined
by HRW, for a particular terrain. The effect on wargaming can be seen in figure 1. Figure 1a
represents the prevalent smoke representation in CASTFOREM. The smoke blows in a constant
direction, unmodified by the existing terrain. Figure 1b illustrates the effect on wargaming when
SANDIA,  combined  with  HRW,  produces  smoke  clouds  that  follow  the  complex  wind  field.  The 
LOS  is  obscured  by  smoke;  whereas,  it  is  not  obscured  in  figure  la.  The  smoke  follows  the 
complex  wind  field  when  HRW  is  included;  thus,  changing  position  changes  the  effectiveness  of 
the  smoke  screen. 




Figure  1.  The  simplified  smoke  model  (a)  allows  smoke  to  blow  in  one  direction. 
An  advanced  atmospheric  model  like  HRW  (b)  allows  smoke  to  flow  with  the 
complex  wind  field  generated  from  the  complex  terrain  data. 




SANDIA  was  modified  to  compute  the  location  of  the  clouds  by  utilizing  the  complex  wind  fields 
generated  by  HRW.  The  wind  fields  are  computed  for  a  height  of  10  m  from  the  terrain  surface. 
The  standard  windspeed  profile  was  used  to  allow  the  windspeed  to  vary  with  height.  The  profile 
is  defined  as  follows: 


u(z) / u(z_ref) = (z / z_ref)^p    (1)

where

z = height above terrain
z_ref = 10 m
u(z), u(z_ref) = windspeed at height z and at 10 m
p = exponent, which depends on surface roughness and stability.
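A minimal sketch of how this profile is applied is given below; the exponent value used in the example is an assumed illustrative figure, since p varies with surface roughness and stability.

```python
# Sketch of the profile in eq. (1): HRW supplies the wind at z_ref = 10 m, and
# the speed at other heights is scaled by a power law whose exponent p depends
# on surface roughness and stability.  The example exponent (0.25) is an
# illustrative assumption only.

def windspeed_at_height(u_ref, z, z_ref=10.0, p=0.25):
    """Scale the 10 m windspeed u_ref (m/s) to height z (m) with exponent p."""
    return u_ref * (z / z_ref) ** p

# e.g. a 5 m/s wind at 10 m scaled to other heights in the surface layer
for z in (2.0, 10.0, 50.0, 100.0):
    print(z, round(windspeed_at_height(5.0, z), 2))
```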

CASTFOREM passes the initial position of the smoke cloud, the LOS information, and the time
into SANDIA. SANDIA determines the size and new position of the cloud using HRW complex
wind fields. For a given threshold, SANDIA determines if the LOS intersects a cloud and is
defeated. HRW can be run off line to generate the wind field output used to compute the
updated cloud location and size. The updates are used to determine if the LOSs are affected by
the clouds, which saves computer time, an important consideration to CASTFOREM users.
SANDIA was ascertained to be twice as fast as COMBIC. The addition of the algorithm to
compute complex wind driven smoke clouds should not slow CASTFOREM.

3.2  Effects  of  Adding  Elevation  Data  to  Smoke  Models 

CASTFOREM  uses  an  algorithm  that  determines  if  the  LOS  can  acquire  the  target  through  the 
complex terrain; if so, the LOS is passed into the COMBIC or SANDIA smoke model to
determine  if  the  LOS  is  obscured.  The  smoke  models  are  utilized  as  if  all  the  smoke,  observers, 
and  targets  are  on  the  same  level.  The  height  of  the  LOS  is  the  height  of  the  sensor  above  the 
ground.  However,  the  observer  might  be  on  one  hill,  the  target  on  another,  and  the  smoke  in  a 
valley  between  them  in  a  complex  terrain  scenario.  Figure  2  illustrates  the  effect  of  complex 
terrain  on  an  obscured  scenario.  Figure  2a  presents  the  normal  way  of  modeling  an  obscured 
battlefield  scenario.  Figure  2b  presents  the  new  methodology  for  taking  terrain  elevation  into 
account.  Figure  2b  shows  that  the  tank  acquires  the  target  because  the  LOS  passes  above  the 
smoke  to  reach  the  targets  on  the  hill. 
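The elevation enhancement can be illustrated with a short geometric sketch. The function below simply tests whether the straight line of sight between two sensors at their terrain elevations passes above the top of a smoke cloud sitting on lower terrain; the geometry and all numbers are invented, and the actual CIAO/CASTFOREM bookkeeping is more detailed.

```python
# Sketch of the elevation enhancement in figure 2b: observer, target, and smoke
# cloud each carry a terrain elevation, and the line of sight is tested against
# the cloud top at the cloud's along-path position.  Illustrative only.

def los_clears_cloud(obs_xz, tgt_xz, cloud_x, cloud_ground_z, cloud_top_above_ground):
    """Return True if the LOS height at the cloud position exceeds the cloud top.
       obs_xz / tgt_xz: (along-path distance in m, sensor height above MSL in m)."""
    (x0, z0), (x1, z1) = obs_xz, tgt_xz
    frac = (cloud_x - x0) / (x1 - x0)                 # fractional distance to the cloud
    los_height = z0 + frac * (z1 - z0)                # LOS height (MSL) at the cloud
    return los_height > cloud_ground_z + cloud_top_above_ground

# Observer on a 300 m hill, target on a 280 m hill, smoke in a 150 m valley.
print(los_clears_cloud((0.0, 302.0), (4000.0, 282.0), 2000.0, 150.0, 60.0))
```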

3.3  Effects  of  Onion  Skin-HRW  (OS-HRW) 

The  Onion  Skin  model  is  an  extension  of  the  SANDIA  model  previously  described;  however,  the 
smoke  clouds  are  not  modeled  as  binary  entities,  but  are  resolved  into  layers  representing  various 
thresholds  of  optical  thickness.  Thus,  clouds  can  be  played  at  a  higher  resolution  without  much 
loss in computational speed. Another advantage is that the cumulative effects of multiple clouds
can be modeled with OS-HRW; whereas, only the binary option for a single cloud can be used with
SANDIA. As with SANDIA, the OS-HRW approach can be made much more compatible with
complex  wind  models  such  as  HRW.  Figure  3  illustrates  the  Onion  Skin  concept.  Figure  3a 




Figure  2.  The  prevalent  methodology  in  modeling  smoke  is  to  pretend  the  terrain  has 
no elevation (a). CIAO includes an enhancement that allows elevation to be included
in  modeling  smoke  by  COMBIC  and  SANDIA  (b). 




Figure  3.  The  LOS  does  not  encounter  the  SANDIA  produced  clouds  (ellipses)  (a)  so  it 
is  not  defeated.  The  LOS  goes  through  enough  outer  edges  of  the  Onion  Skin  produced 
clouds  (b)  so  it  is  defeated. 


shows SANDIA produced clouds specified by an optical depth of three. Optical depth is the
product of the mass extinction coefficient and concentration length (CL). The LOS from the
observer to the target is not defeated because it does not go through any part of the cloud
(represented by the ellipses). However, figure 3b shows that the LOS goes through enough outer
layers of the onion-like cloud to build up an optical depth of three and be defeated. The OS-HRW
model increases the number of LOSs obscured in wargames, as compared to the SANDIA model,
which means increased survivability for the targets.
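The Onion Skin accumulation of optical depth can be sketched as follows. The code assumes nested spherical shells with increasing concentration toward the centre and sums the concentration length picked up in each shell along the line of sight; the shell shapes, concentrations, and extinction coefficient are illustrative assumptions, not Onion Skin model values.

```python
import math

# Sketch of the Onion Skin idea in figure 3b: the cloud is resolved into nested
# shells of increasing concentration toward the centre, concentration-length
# (CL) is accumulated along the LOS through each shell, and the LOS is defeated
# when optical depth (alpha * CL) reaches a threshold.  All values invented.

def onion_skin_optical_depth(radii_m, concentrations, miss_distance_m, alpha):
    """radii_m: shell outer radii (largest first); concentrations: mass
       concentration inside each shell boundary (g/m^3), same order;
       miss_distance_m: perpendicular distance of the LOS from the cloud centre;
       alpha: mass extinction coefficient (m^2/g)."""
    chords = [2.0 * math.sqrt(max(r * r - miss_distance_m ** 2, 0.0)) for r in radii_m]
    cl = 0.0
    for i, chord in enumerate(chords):
        inner = chords[i + 1] if i + 1 < len(chords) else 0.0
        cl += concentrations[i] * (chord - inner)      # path length within this shell only
    return alpha * cl                                  # optical depth (dimensionless)

tau = onion_skin_optical_depth([120.0, 80.0, 40.0], [0.02, 0.05, 0.12],
                               miss_distance_m=60.0, alpha=1.5)
print(round(tau, 2), "defeated" if tau >= 3.0 else "not defeated")
```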


3.4  Effects  of  REBAR 

A large area smoke screen (LASS) that endures a long time can significantly reduce the solar
irradiance and drive the local atmosphere toward more stable conditions (Yee, Sutherland 1993).
Smoke operations can be affected because the rise and diffusion of smoke is critically dependent
upon the stability class. Figure 4 shows how critical the Pasquill Category (PC) is in determining
the height and width of the cloud. The stability of the atmosphere is related to PC as follows:
PC = A, extremely unstable; PC = B, moderately unstable; PC = C, slightly unstable; PC = D,
neutral; PC = E, slightly stable; PC = F, moderately stable; and PC = G, extremely stable. Note
how width and height increase with increasing instability. The overall concentration decreases
as the cloud increases in size. Thus, any factor influencing the stability should be modeled in
the wargames.


[Figure 4 plots: (a) cloud diffusive width (2.16 σ) and (b) cloud diffusive height (2.16 σ) versus downwind distance (m) for different Pasquill categories A-F.]

Figure  4.  Width  (a)  and  height  (b)  of  smoke  clouds  versus  downwind  distance  for 
different  Pasquill  Stabilities. 


Figure 5 illustrates the effect of aerosol-induced radiative damping on turbulence and Pasquill
Stability for different optical depths (τ). Figure 5a shows that thicker smoke clouds tend to
prevent the atmosphere from becoming more turbulent. Similarly, figure 5b shows the effect of
smoke clouds in increasing the stability of the atmosphere as modeled using the REBAR model.
Notice that at noon (solar elevation = 90°) an unsmoked atmosphere would be extremely unstable
(PC = 1). However, the atmosphere becomes only slightly unstable when a dense LASS is
present. The REBAR model is the first attempt to model this important aspect of LASSs. The
CIAO system will use REBAR to determine the impact of radiative damping on the battlefield.
Significant reductions in the amount of smoke are expected to occur if the wargame developers
are aware of the depressed conditions caused by the LASS because neutral conditions are often
ideal for smoke deployment and a desired smoke screen can be maintained with less smoke. If
the wargame developers are not aware that less smoke is necessary, the battlefield might be over
smoked, inhibiting target acquisition on both sides. Inhibition of target acquisition might be
advantageous to the side modeled with the best ability to observe through an obscured
environment. A smart commander might create a LASS in the early morning to inhibit the
development of turbulence.

3.5  Effects  of  COMBIC-RT 

Models  like  CASTFOREM  directly  relate  transmission  to  EO  system  performance  and  smoke 
effectiveness  by  considering  only  the  directly  transmitted  signal.  However,  EO  systems  respond 
not  only  to  directly  transmitted  radiation  but  also  to  contrast.  The  contribution  caused  by  path 
radiance,  which  may  be  caused  by  scattering  of  ambient  radiation  (sun,  sky)  into  the  path  of 
propagation, emission along the path, or both, must be determined in order to compute contrast. Path
radiance has a directional nature, causing asymmetries to exist between target and observer. The
target or observer has an optical advantage not present when only the direct transmission
component is modeled. The LASS model was developed to model these effects. The radiative
transfer algorithms were integrated into COMBIC, forming COMBIC-RT, to enable COMBIC to compute path
radiance.
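The role of path radiance in contrast can be illustrated with a standard apparent-contrast relation, in which the apparent radiances of target and background are the attenuated inherent radiances plus the same path radiance. The sketch below uses that common textbook formulation rather than the COMBIC-RT algorithm itself, and all radiance values are invented.

```python
# Sketch of the contrast effect discussed above.  Apparent target and background
# radiances are the directly transmitted inherent radiances plus a path radiance
# scattered or emitted into the line of sight, so apparent contrast is degraded
# by more than the transmission alone.  Common formulation; values illustrative.

def contrast_transmission(direct_transmission, path_radiance, background_radiance):
    """Ratio of apparent contrast to inherent contrast for a given path radiance."""
    t, n_p, n_b = direct_transmission, path_radiance, background_radiance
    return t / (t + n_p / n_b)

# Same smoke transmission, but different path radiance for sun-in-front versus
# sun-behind viewing directions (path radiance is directional).
for label, n_path in (("sun in front", 8.0), ("sun behind", 2.0)):
    print(label, round(contrast_transmission(0.3, n_path, 10.0), 3))
```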




Most  target  acquisition  models  work  by  determining  the  number  of  resolvable  cycles  across  the 
target,  which  directly  relates  to  the  target  contrast  at  the  aperture  of  the  nonthermal  sensor.  It 
is  possible  to  determine  the  probability  of  acquisition  of  a  given  target  through  a  LASS  cloud 
at  any  given  point  in  space  and  time  using  COMBIC-RT  and  a  target  acquisition  model  like  the 
one  in  CASTFOREM  (Ayres,  Sutherland  1994),  providing  a  direct  measure  of  the  effectiveness 
of  smoke.  Figure  6  shows  the  effect  of  sun  angle  on  detection  probabilities  for  different  optical 
depths  (t).  The  probability  of  detection  for  t  of  1  varies  from  34  percent  for  the  sun  in  front 
of  the  observer  to  63  percent  for  the  sun  behind  the  observer,  as  expected.  A  force  with  the  sun 
behind  it  has  a  tactical  advantage. 

Figure 7 shows the effect that the observer azimuth angle (defined clockwise with respect to
North) can have on contrast transmission. Contrast transmission is shown for five CL values.
The scenario is for early morning and the zenith angle of the observer is 10°. Notice that low
contrast transmission occurs when the observer is looking into the sun (0°) and high contrast
transmission occurs with the sun to the back (180°) of the observer. Further, note that the curve
flattens as the CL increases. The degree to which a force with the sun in their opponent's eyes
has a tactical advantage can depend upon the density of the LASS. However, it must be noted
that  very  thick  clouds  can  reflect  all  light  and  cause  inverse  situations. 


[Figure 6 plot: photosimulation experiment, LASS model results (effect of sun angle); detection probability (%) versus optical depth, with separate curves for sun-to-front and sun-to-rear viewing.]

Figure  6.  Plot  of  detection  probability  as  a 
function  of  optical  depth  for  various  solar 
azimuth  angles. 


[Figure 7 plot: contrast transmission versus observer azimuth angle for five CL values.]


Figure 7. Plot of contrast transmission versus
observer azimuth angle.


3.6  Effects  of  COMBIC-PMW 

Perhaps  the  greatest  single  parameter  describing  the  effectiveness  of  an  obscurant  is  the  mass 
extinction  coefficient.  The  mass  extinction  coefficient  is  used  in  Beer’s  law,  along  with  the  path 
integrated  concentration,  to  determine  the  obscurant  optical  depth  and,  hence,  transmission  for 
a  specified  wavelength.  The  mass  extinction  coefficient  lumps  together  electromagnetic  (EM) 
radiation  scattering  out  of  the  LOS  and  EM  absorption  radiation  along  the  LOS.  The  mass 
extinction coefficient is used by smoke models, such as COMBIC, to determine degradation of
the atmosphere caused by battlefield obscurants. The COMBIC model was originally developed
for the Electro-Optical Systems Atmospheric Effects Library (EOSAEL) to model aerosols for
which spherical symmetry can be assumed to describe the physical and optical properties of the
aerosols. Whereas this is a reasonable assumption when considering the older, conventional
obscurants such as fog oil and white phosphorus, the approximation breaks down for newer
developmental obscurants designed to be effective at longer wavelengths. Many of the new
millimeter wave (MMW) and radar obscurants are highly nonspherical. The propagation of EM
radiation in any medium containing particles is governed by the combination of absorption,
emission, and scattering, making the particles a subject of great importance in determining effects
of obscurants on EM radiation. Scattering and absorption by particles depend upon the size,
shape, refractive index, and concentration of the particles. Mathematically determining the
radiation field scattered by particles of arbitrary shape at any point in space can be quite difficult.
Exact analytical solutions are only available for the sphere and infinite cylinder. However, the
scattering properties of simple geometries have been well studied (Bowman et al. 1987).
Numerical techniques and approximate analytical methods are used to analyze the properties,
usually over a limited range of conditions. New techniques are required to model the obscurants.

One such technique is COMBIC-PMW (Ayres et al. 1994), which is a merger between
COMBIC and techniques that account for the optical and mechanical behavior of finite
cylinders. The techniques determine EM properties, such as the ensemble orientation averaged
extinction, absorption, and scattering, as well as mechanical properties, such as fall velocity and
angular orientation of the obscurant particles when released into the turbulent atmospheric
boundary layer.

Extinction for MMW obscurants can vary widely depending upon the fiber properties. Such
intrinsic particle properties as shape and bulk density, and bulk EM properties (complex indices
of refraction), must be determined for accurate extinction determination. Furthermore, ensemble
characteristics such as the orientation distribution of the obscurant cloud and incident beam
properties such as polarization must also be included. Orientation distribution is
needed because particle scattering phase functions and attenuation can depend strongly on the
orientation of the particle relative to the polarization of the illuminating radiation. Also, the
direction of the LOS can be of significance in determining obscurant effectiveness, although
current wargames use one value for obscurant extinction per scenario. For example, there can
be differences in extinction for horizontal and vertical LOSs when particles are preferentially
oriented. The vertical LOS is exposed to this preferred orientation, while the horizontal LOS is
exposed to a randomly oriented ensemble, if the particles are released in a stable atmosphere and
oriented with their long axis horizontal. All these characteristics affect the computation of
extinction for cylindrical obscurants, which can affect the loss-exchange-ratios used to describe
the results of the wargame.
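The basic transmission calculation underlying these comparisons can be sketched with Beer's law. In the example below, the two mass extinction coefficients stand for an ensemble viewed along horizontal and vertical lines of sight with a preferred particle orientation; both coefficients and the concentration length are invented numbers used only to show the orientation effect, not COMBIC-PMW output.

```python
import math

# Sketch of the transmission calculation discussed in this section: Beer's law
# combines a mass extinction coefficient with the path-integrated concentration
# (CL), and for preferentially oriented fibers the effective extinction
# coefficient can differ between horizontal and vertical lines of sight.
# All numbers below are illustrative assumptions.

def transmission(mass_extinction_m2_per_g, cl_g_per_m2):
    """Beer's law: T = exp(-alpha * CL)."""
    return math.exp(-mass_extinction_m2_per_g * cl_g_per_m2)

CL = 1.2                       # path-integrated concentration, g/m^2
ALPHA_HORIZONTAL = 0.8         # fibers seen end-on along a horizontal LOS
ALPHA_VERTICAL = 2.0           # fibers seen broadside along a vertical LOS

print("horizontal LOS T =", round(transmission(ALPHA_HORIZONTAL, CL), 3))
print("vertical   LOS T =", round(transmission(ALPHA_VERTICAL, CL), 3))
```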

4.  SUMMARY 


Battlefield weather conditions have affected, sometimes determined, the outcome of military
conflicts and the resultant global order for generations. Accounting for atmospheric variability
has been an area of continuing concern for military strategists, planners, and soldiers throughout
history. With the emphasis on restructuring the Armed Forces into a streamlined fighting force
equipped with advanced technology weapon systems, it is necessary to develop tactics, doctrine,
and weapon systems to minimize friendly and collateral casualties while destroying the enemy's
capability to fight. Obscurants can be a very important tool on the battlefield. Obscurants are
often described as low technological countermeasures to the high technological weapons of today.

It  is  imperative  that  the  technological  tools  can  realistically  depict  the  battlefield  with  accurate 
physics  and  engineering  algorithms.  This  paper  shows  that  the  effectiveness  of  obscurants  is 
influenced in many ways by the atmosphere; therefore, better atmospheric algorithms must be
included in the wargames that define so much of the tactics and doctrine. The CIAO system is
an important part of the effort to produce a better atmospheric algorithm. In particular, the
improved algorithm includes: (a) terrain effects on smoke transport, (b) contrast effects caused
by multiple scattering, and (c) polarimetric effects of nonspherical particles.

ACKNOWLEDGMENTS 

The  author  would  like  to  thank  Robert  Sutherland  and  Doug  Sheets  of  the  Battlefield 
Environment  Directorate,  Army  Research  Laboratory  and  Steve  LaMotte  of  the  Physical  Sciences 
Laboratory,  NM  for  their  advice  and  assistance  in  developing  the  CIAO  model. 

REFERENCES 

Ayres  S.  D.,  and  S.  DeSutter,  1993.  Combined  Obscuration  Model  for  Battlefield  Induced 
Contaminants  (COMBIC)  Users  Guide.  In  Press,  Department  of  the  Army,  U.S.  Army  Research 
Laboratory,  Battlefield  Environment  Directorate,  White  Sands  Missile  Range. 

Ayres, S. D., and R. A. Sutherland, 1994. "Combined Obscuration Model for Battlefield Induced
Contaminants-Radiative Transfer Version (COMBIC-RT)." In 1994 Battlefield Atmospherics
Conference,  In  Press,  Department  of  the  Army,  U.S.  Army  Research  Laboratory,  Battlefield 
Environment  Directorate,  White  Sands  Missile  Range,  NM  88002-5501. 

Ayres,  S.  D.,  R.  A.  Sutherland,  and  J.  B.  Millard,  1994.  "Combined  Obscuration  Model  for 
Battlefield  Induced  Contaminants-Polarimetric  Millimeter  Wave  Version  (COMBIC-PMW). 
In  1994  Battlefield  Atmospherics  Conference,  In  Press,  Department  of  the  Army,  U.S.  Army 
Research  Laboratory,  Battlefield  Environment  Directorate,  White  Sands  Missile  Range,  NM 
88002-5501. 

Bowman,  J.  J.,  T.  B.  A.  Senior,  and  P.  L.  E.  Uslenghi,  1987.  Electromagnetic  and  Acoustic 
Scattering  by  Simple  Shapes.  Hemisphere  Publishing  Corporation,  ISBN  0-89116-885-0. 

Cionco,  R.  M.,  1985.  Modeling  Windfields  and  Surface  Layer  Wind  Profiles  Over  Complex 
Terrain  and  Within  Vegetative  Canopies.  The  Forest- Atmosphere  Interaction.  Editors:  Hutchison 
and  Hicks.  D.  Reidel  Publishing  Co.,  Holland. 

Cionco,  R.  M.,  and  J.  H.  Byers,  1993.  "A  Method  for  Visualizing  the  Effects  of  Terrain  and 
Wind  Upon  Battlefield  Operations."  In  Proceedings  of 1 993  Battlefield  Atmospherics  Conference. 
U.S.  Army  Research  Laboratory,  Battlefield  Environment  Directorate,  White  Sands  Missile 
Range,  NM  88002-5501. 

Mackey,  D.  C.,  Dixon,  D.  S.,  Jensen,  K.  G.,  Loncarich,  and  J.  T.  Swaim,  1992.  CASTFOREM 
(Combined  Arms  and  Support  Task  Force  Evaluation  Model)  Update:  Methodologies.  U.S.  Army 
TRADOC  Technical  Report  TRAC-WSMR-TD-92-011. 




Sutherland,  R.  A.,  and  D.  E.  Banks,  1986.  "Smoke  Modeling  in  the  Trasana  Wargames-The 
Comprehensive  Smoke  Study."  In  Smoke  Symposium  X,  Volume  1,  pp.  259-268,  Aberdeen 
Proving  Ground,  MD. 

Yee, Y. P., and R. A. Sutherland, 1993. "The Radiative Energy Balance and Redistribution Model,
REBAR."  In  Proceedings  of  the  1993  Battlefield  Atmospherics  Conference.  U.S.  Army  Research 
Laboratory,  Battlefield  Environment  Directorate,  White  Sands  Missile  Range,  NM  88002-5501 . 




AN  ASSESSMENT  OF  THE  POTENTIAL  OF  THE  METEOROLOGICAL  OFFICE 
MESOSCALE  MODEL  FOR  PREDICTING  ARTILLERY  BALLISTIC  MESSAGES 


Jonathan  D  Turton,  Peter  F  Davies 
Defence  Services  Division,  Meteorological  Office 
Bracknell,  Berkshire,  RG12  2SZ,  UK 
and 

Maj.  Tim  G  Wilson 

Developments  Division,  HQ  Director  Royal  Artillery,  Larkhill,  Salisbury,  Wilts,  SP4  8QT,  UK 


ABSTRACT 

An assessment of the potential of using data from the Met Office Mesoscale Unified Model
(MM)  for  producing  artillery  ballistic  messages  was  made  by  the  Royal  Artillery  and  the 
Meteorological  Office,  Defence  Services  during  Summer  1994.  This  paper  reports  the 
results  of  this  assessment. 

MM forecasts of vertical profiles for Larkhill were compared with routine radiosonde
ascents  made  at  Larkhill.  Specifically,  wind  and  temperature  data  were  compared  over  the 
various  height  zones  used  in  artillery  ballistics.  In  addition,  both  the  MM  data  and  the 
measurements  were  applied  in  a  ballistic  model  to  evaluate  the  likely  impact  on  gunnery 
accuracy  that  would  be  achieved  using  MM  data  for  meteorological  corrections. 

The  implications  of  the  results  of  this  assessment  are  discussed  in  terms  of  the  potential 
use of the MM for making routine ballistic forecasts for (i) artillery training ranges in the
UK  and  (ii)  incorporating  model  predictions  into  future  Royal  Artillery  battlefield 
meteorological  systems. 


1.  INTRODUCTION 

The motivation for this work is two-fold. Firstly, Larkhill met office is responsible for providing ballistic
meteorology for a number of Army training ranges around the UK. Many of these ranges are distant from
Larkhill and artillery meteorological (artymet) soundings are seldom made. The only available upper air
data comes from those trials ranges with an on-site met office and the UK Met Office upper air network.
Thus there is often no measured upper air data for the training ranges, from which to determine ballistic
messages (SBMM, SCMM). However, the Larkhill met office, through their meteorological information
system, the Outstation Display System (ODS), do have access to all the observational data that is available,
together with forecast upper air winds and temperatures at WMO standard levels from the limited area
version of the Met Office Unified Model (Cullen, 1993). Consequently, the accuracy of these ballistic
messages is very dependent upon the skill of the Larkhill forecasters and their ability to interpret the
available data. It is postulated that the provision of site-specific mesoscale model profiles for the UK
training areas could provide valuable information to assist in this task.




Secondly, there is the potential application of mesoscale models to enhance the accuracy of in-theatre
meteorological information obtained using the Royal Artillery BMETS (Battlefield METeorological
System), or to improve the quality of ballistic information when there are insufficient BMETS deployed.
Typically, atmospheric conditions account for some 30% - 70% (depending upon range) of the total error
budget for the accuracy of artillery fire and, as longer range artillery pieces come into service, the
requirement for better meteorological data becomes more critical. In the future it is anticipated that
mesoscale models, or rather battlefield-scale models, will be used to provide optimum battlefield
meteorology for artillery purposes. The use of such models is already being investigated by the US Army
in the Computer Assisted Artillery Meteorology program (Grunwald, 1993; Spalding et
al., 1993). A possible future concept for UK artymet, CMETS (Computerised METeorological System),
may well embrace this approach and so this work is a useful precursor to any CMETS studies.


2.  RADIOSONDE  DATA 

The radiosonde currently used at all UK upper air sounding stations and ranges is the Vaisala
PC-Cora system. The PC-Cora system has been described by Nash (1991) and uses the standard Vaisala
RS80 sonde for temperature and humidity measurements. At Larkhill the winds are determined by tracking
a radar target using a Cossor 353C wind-finding radar. Soundings are usually made several times a day
(typically in summer around 06Z and 10Z), with additional ascents being made as required. The RS80
temperature and humidity sensors have an accuracy of ±0.2 °C and ±2% and give measurements every 2 s
(≈10 m) from launch. Winds at Larkhill are computed from 30 s of radar tracking data, with reported
values updated at 2 s intervals during flight. Previous studies (Edge et al., 1986) of the reproducibility of
Cossor radar winds have found that the rms vector errors attributable to the radar are about 0.4 m/s at 20
km range. Thus the wind errors are typically 0.4 m/s up to
9000 m height, increasing to 0.8 m/s at 20000 m height.

In this study, archived 2 s data were used to compute winds and temperatures for the ballistic zones. These
zones are given in Table 1. Winds were computed as mean winds through the zones whilst the pressures
and virtual temperatures were for the zone mid points. The PC-Cora systems at the range stations have
software to produce specialised artillery ballistic data, i.e. Standard Ballistic Met Messages
(SBMM) and Standard Artillery Computer Met Messages (SCMM).
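The zone processing can be sketched in a few lines. The following fragment groups 2 s samples into height zones, takes the mean wind through each zone and interpolates the virtual temperature to the zone midpoint; the synthetic ascent at the end is invented, and the real PC-Cora software handles far more (pressure, quality control, message encoding).

```python
import numpy as np

# Sketch of the zone processing described above: 2 s sounding samples are
# grouped into ballistic height zones (Table 1), the zone wind is the mean wind
# through the zone, and the virtual temperature is taken at the zone midpoint.
# The synthetic profile below is purely illustrative.

def zone_statistics(z, u, v, t_virtual, zone_bounds):
    """z: sample heights (m); u, v: wind components (m/s); t_virtual: K.
       zone_bounds: list of (bottom, top) tuples, e.g. [(0, 200), (200, 500)]."""
    out = []
    for bottom, top in zone_bounds:
        in_zone = (z >= bottom) & (z < top)
        mid = 0.5 * (bottom + top)
        out.append({
            "zone": (bottom, top),
            "u_mean": float(u[in_zone].mean()),
            "v_mean": float(v[in_zone].mean()),
            "tv_mid": float(np.interp(mid, z, t_virtual)),
        })
    return out

# Synthetic ascent: samples every ~10 m up to 2 km.
z = np.arange(0.0, 2000.0, 10.0)
u = 5.0 + 0.004 * z
v = 1.0 + 0.002 * z
tv = 288.0 - 0.0065 * z
for row in zone_statistics(z, u, v, tv, [(0, 200), (200, 500), (500, 1000)]):
    print(row)
```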


3.  THE  MET  OFFICE  MESOSCALE  MODEL 

The Met Office Mesoscale Unified Model (MM) is integrated within the operational Unified Model (UM)
suite which is run routinely at the Met Office, Bracknell. The suite, which is described by Cullen (1993),
consists of global, "limited area" and mesoscale versions of the Unified Model. The global version has 19
vertical levels up to 4.6 mb (typically 35-40 km) with a horizontal resolution of 0.833° in latitude and 1.25°
in longitude (giving a typical grid spacing of about 90 km). The "limited area" model covers an area
extending from North America in the west to Russia in the east, covering Greenland to the north and North
Africa to the south, and has a horizontal grid length of about 50 km.

The mesoscale version of the model has a grid length of 0.15° (about 17 km) on a 92x92 grid covering an
area of about 1500 km x 1500 km and has 31 levels up to 4.6 mb, with an increased number of levels in the
troposphere. These levels are shown in Table 2; the heights are approximate since the levels are defined
using a hybrid sigma/pressure co-ordinate system. There are 28 levels up to 20000 m as normally required
for artillery ballistics. The MM can be run for a number of relocatable areas, with a standard version
being run for the UK region, together with two "crisis area" windows covering the Gulf and the region
around the former Yugoslavia. However, this paper will concentrate on the UK version of the MM which
is run four times each day to produce forecasts out to t+24 (hrs).


SCMM zones:

Zone   Zone range (m)    Zone midpoint (m)
 00    surface                   0
 01    0 - 200                 100
 02    200 - 500               350
 03    500 - 1000              750
 04    1000 - 1500            1250
 05    1500 - 2000            1750
 06    2000 - 2500            2250
 07    2500 - 3000            2750
 08    3000 - 3500            3250
 09    3500 - 4000            3750
 10    4000 - 4500            4250
 11    4500 - 5000            4750
 12    5000 - 6000            5500
 13    6000 - 7000            6500
 14    7000 - 8000            7500
 15    8000 - 9000            8500
 16    9000 - 10000           9500
 17    10000 - 11000         10500
 18    11000 - 12000         11500
 19    12000 - 13000         12500
 20    13000 - 14000         13500
 21    14000 - 15000         14500
 22    15000 - 16000         15500
 23    16000 - 17000         16500
 24    17000 - 18000         17500
 25    18000 - 19000         18500
 26    19000 - 20000         19500

SBMM zones:

Zone   Zone range (m)    Zone midpoint (m)
 00    surface                   0
 01    0 - 200                 100
 02    200 - 500               350
 03    500 - 1000              750
 04    1000 - 1500            1250
 05    1500 - 2000            1750
 06    2000 - 3000            2500
 07    3000 - 4000            3500
 08    4000 - 5000            4500
 09    5000 - 6000            5500
 10    6000 - 8000            7000
 11    8000 - 10000           9000
 12    10000 - 12000         11000
 13    12000 - 14000         13000
 14    14000 - 16000         15000
 15    16000 - 18000         17000

Table 1. Heights of (top) Standard Artillery Computer Meteorological Message (SCMM) zones and
(bottom) Standard Ballistic Meteorological Message (SBMM) zones.


  Level   Height (m)   Level   Height (m)   Level   Height (m)
  1       10           11      1365         21      6800
  2       40           12      1600         22      7900
  3       100          13      1870         23      9040
  4       190          14      2200         24      10260
  5       300          15      2600         25      11750
  6       435          16      3080         26      13700
  7       595          17      3640         27      16200
  8       770          18      4300         28      19700
  9       955          19      5050         29      23850
  10      1155         20      5870         30      29000

Table 2. Approximate heights (m) of levels in the Met Office MM.




In this study site-specific profiles for Larkhill (51.2°N, 1.8°W) were interpolated from the four surrounding MM grid points and model data were extracted at each grid level. The heights (above 10 m) were then recomputed using the hydrostatic relationship. (Typically at low levels, at 1000 m, the recomputed heights differed from the nominal heights by ≈10 m, whilst higher up, at 2000 m, the differences increased to ≈100 m.) The data were extracted from the midnight run of the model, for 06Z (t+6) and 10Z (t+10), to coincide with the Larkhill radiosonde ascents. The MM data were then interpolated to the midpoints of the standard zones (Table 1) and used to produce SCMM data (winds and virtual temperatures for the 26 zones up to 20000 m) and SBMM data (ballistic winds and temperatures for the 15 zones up to 18000 m). Both the MM and radiosonde data were archived over a two month period July/August 1994. Radiosonde ascents were only made at Larkhill on weekdays giving 59 ascents for comparison purposes.
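As an aside, the hydrostatic recomputation of level heights mentioned above amounts to integrating the hypsometric equation upward from a known base height. The following is a minimal sketch of that step, assuming pressure and virtual temperature are available at each model level; the function name and the sample values are illustrative only, not MM output.

    import math

    G = 9.80665      # gravitational acceleration (m s^-2)
    R_D = 287.05     # gas constant for dry air (J kg^-1 K^-1)

    def hydrostatic_heights(z_base_m, p_hpa, t_virtual_k):
        """Integrate the hypsometric equation upward from a known base height.
        p_hpa and t_virtual_k are ordered from the lowest level up; each layer's
        mean virtual temperature is approximated by the average of its bounds."""
        heights = [z_base_m]
        for i in range(1, len(p_hpa)):
            t_mean = 0.5 * (t_virtual_k[i - 1] + t_virtual_k[i])
            dz = (R_D * t_mean / G) * math.log(p_hpa[i - 1] / p_hpa[i])
            heights.append(heights[-1] + dz)
        return heights

    # Illustrative values only.
    print(hydrostatic_heights(10.0, [1000.0, 950.0, 900.0], [285.0, 283.0, 281.0]))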


4.  COMPARISON  OF  MODEL  PROFILES  WITH  ACTUAL  DATA 
4.1  Wind  Profiles 

For artillery ballistics the quality of the wind data is the most critical factor and this is generally quantified in terms of the vector wind error. Fig. 1 shows the rms vector wind errors (MM - radiosonde) against height (for the SCMM zone midpoints). Up to 5500 m (zone 12) the errors are similar for both the t+6 and t+10 predictions, being typically 2.5-3 m/s. The errors increase from 6500 m to 12500 m (zones 13 to 19) and here the t+10 winds have larger errors. This region is associated with the jet stream, near the tropopause, where there are strong winds and shears. Above 13500 m (zone 20) the errors reduce to about 2-3 m/s. Over all 26 zones the average rms errors were 3.1 m/s (t+6) and 3.3 m/s (t+10).

In an earlier study examining the potential capability of Met Office models to provide ballistic data (Whitelaw, 1989) the average model rms error, from the Met Office "fine-mesh" model, up to 20000 m was 3.0 m/s for the analysis (t+0) and 4.7 m/s for a t+3 forecast. The "fine-mesh" model has now been replaced by the "limited area" model and the mesoscale model has since been integrated into the UM suite. The accuracy of the available model winds has clearly improved.


Figure 1. Mesoscale Model rms vector wind errors (m/s) against height. The solid line shows the predictions for t+6, the dashed line for t+10.


Wenckebach (1991) looked at SBMM and SCMM data derived from operational models run by the German Military Geophysical Office; these were a Boundary Layer Model (BLM) for the near surface region (zones 0 to 2) and a 9-level baroclinic model for zones 3 to 18. The results showed that for zones 3 to 18 the rms vector error was in the range 3 to 5 m/s, and tended to increase with height. The errors for the lower winds, zones 0 to 2, were typically 2 m/s (zone 0) and 3.5 m/s (zones 1 and 2). At these lower levels the t+6 forecast winds were better than those for t+12; however at the higher levels there was no significant difference between the t+6 and the t+12 winds.


4.2  Temperature  Profiles 

Fig. 2 shows the rms errors (MM - radiosonde) for the temperature data. At levels below 10000 m the rms errors are generally < 1°C, the t+6 predictions being slightly better than those for t+10, particularly nearer the surface. At higher levels the rms errors increased up to a maximum of 2.3°C.


Figure 2. Mesoscale Model rms temperature errors (°C) against height. The solid line shows the predictions for t+6, the dashed line for t+10.


4.3  Ballistic  Winds 

The  ballistic  wind  is  that  wind,  constant  in  speed  and  direction  up  to  a  specified  zone,  which  would  produce 
the  same  displacement  of  a  shell  as  the  actual  wind  profile,  and  can  be  computed  fi-om  the  actual  winds  by 
applying  standard  weighting  factors  to  the  winds  within  the  various  SBMM  zones.  For  each  ^osonde 
and  model  profile  a  ballistic  wind  profile  was  computed  and  the  vector  differences  (MM  -  radiosonde)  were 
calculate  Fig.  3  shows  the  profile  of  the  rms  ballistic  vector  wind  errors. 
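To make the weighted-mean idea concrete, the sketch below forms a ballistic wind as a weighted vector mean of zone winds. It is only an illustration of the principle: the official weighting factors are tabulated in artillery meteorological procedures and are not given in this paper, so the weights and wind values shown are placeholders.

    import math

    def ballistic_wind(zone_winds, weights):
        """Weighted vector mean of zone winds (a sketch of the idea, not the
        official computation).  zone_winds: (speed m/s, direction deg the wind
        blows from) per zone; weights: placeholder factors summing to 1."""
        u = v = 0.0
        for (speed, direction), w in zip(zone_winds, weights):
            rad = math.radians(direction)
            u += w * (-speed * math.sin(rad))   # eastward component
            v += w * (-speed * math.cos(rad))   # northward component
        mean_speed = math.hypot(u, v)
        mean_dir = math.degrees(math.atan2(-u, -v)) % 360.0
        return mean_speed, mean_dir

    # Placeholder example: three zones with equal weights.
    print(ballistic_wind([(5.0, 270.0), (8.0, 250.0), (12.0, 240.0)],
                         [1 / 3, 1 / 3, 1 / 3]))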


Figure 3. Mesoscale Model rms ballistic wind errors (m/s) against height. The solid line shows the predictions for t+6, the dashed line for t+10.


As with the actual winds, the largest errors are associated with the jet stream. However, the errors in the ballistic winds are generally less than those for the actual winds.

One of the ways of quantifying the representativeness of meteorological data for ballistic predictions is to describe it in terms of its equivalent "staleness". Blanco (1988) gives some simple algebraic formulae expressing meteorological variability in terms of a time staleness; that for the ballistic wind is given in Eq. (1):

σ²_bw = 0.061 (1 + 0.03455 v_bw - 0.05846 z_b)² t_s    (1)

where σ²_bw is the variance in the ballistic wind (kts²), v_bw is the ballistic wind speed (kts), z_b is the height to which the ballistic wind is evaluated (km) and t_s is the staleness (min). The staleness can also be equated to a spatial separation through the generally accepted relationship that, over fairly level terrain, a time staleness of 1 hour is equivalent to a spatial displacement of 30 km. Fig. 4 shows an illustration of the equivalent time staleness for a ballistic wind up to 8 km (line 10), in which a ballistic wind speed of 30 kts was specified. In this illustration a ballistic wind error of 3 kts (1.5 m/s) is equivalent to a staleness of 1 hour and an error of 6 kts (3 m/s) is equivalent to a staleness of 4 hours.


Figure 4. Illustrating the equivalent time staleness of the ballistic wind, up to 8000 m (line 10), in terms of the rms error in the ballistic wind.


It should be noted that Eq. (1) gives a typical staleness based on a statistical analysis of upper air measurements, but that the staleness for any particular situation will depend upon the homogeneity of the atmosphere (i.e. the synoptic situation). Given the errors in the ballistic winds, as shown in Fig. 3, it is possible to use Eq. (1) to estimate the typical equivalent time staleness. In doing this the square of the rms vector ballistic wind error was applied in Eq. (1). Figure 5 shows the estimated equivalent staleness for the MM ballistic winds relative to the on-site radiosonde winds. At the lowest levels (zones 1 to 3) the model data has an equivalent staleness of 4 hours or more; this reflects the fact that site-specific low level winds are strongly influenced by the local topography (which is not resolved in the model). Above zone 3 the equivalent staleness is lower, typically 1½ to 4 hours, with the larger values applying to the t+10 winds. These figures are similar to the results of Wenckebach (1991), who concluded that model derived messages (above zone 2) were preferable to stale measured ones when the staleness exceeded 3 hours.
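The staleness estimate described above is simply Eq. (1) solved for t_s with the squared rms vector ballistic wind error substituted for the variance. A minimal sketch of that inversion is given below, assuming the reconstruction of Eq. (1) shown earlier; the function name is ours and the example values are those of the Fig. 4 illustration.

    def equivalent_staleness_minutes(rms_error_kts, v_bw_kts, z_b_km):
        """Time staleness (min) whose expected ballistic wind variance, per
        Eq. (1), equals the squared rms vector ballistic wind error."""
        variance = rms_error_kts ** 2                                  # sigma^2_bw (kts^2)
        weight = 0.061 * (1 + 0.03455 * v_bw_kts - 0.05846 * z_b_km) ** 2
        return variance / weight

    # Ballistic wind to 8 km at 30 kts: a 3 kt error maps to roughly 1 hour.
    t_s = equivalent_staleness_minutes(3.0, 30.0, 8.0)
    print(f"staleness ~ {t_s:.0f} min, ~{t_s / 60.0 * 30.0:.0f} km over level terrain")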




Figure 5. Equivalent time staleness of the mesoscale model winds. The solid line shows the staleness for the winds from the t+6 forecast, the dashed line for the t+10 forecast.


5.  BALLISTIC  PREDICTIONS 

Whilst the above gives an indication of the accuracy of the forecast winds and temperatures from the MM, it does not provide any specific information on the likely impact of using model winds for artillery ballistics. To do this the data were applied in a fire control computer to determine the expected targeting errors, which were quantified in terms of the range (distance) and line (deflection) corrections. These computations were done for a typical Royal School of Artillery training scenario, for an FH70 gun, firing charge 8. Other factors specified were a charge temperature of 21°C and a muzzle velocity of 820 m/s. The specified target was 23 km due north of the gun. Figure 6 shows the computed range and line corrections using the on-site radiosonde ascents. These corrections correspond to the miss distances that would be expected if meteorology was ignored (i.e. assuming a standard ballistic atmosphere) and assuming that the radiosonde data accurately characterised the meteorological conditions.


Figure 6. Showing the computed gun corrections (range correction against line correction, in m) based on the measured meteorological conditions.


Figure 6 shows that most of the points fall in the quadrant for which a negative (reduced) range correction is needed together with a negative (westerly) line correction. This is because the predominant wind was from the south-west, with a tendency to carry a shell further towards the north-east. The results give rms corrections of 12 m (line) and 484 m (range). Similar calculations were also made using the MM data. An assessment of the accuracy of the ballistic forecasts using the MM data can then be made by comparing the computed corrections against those made using the radiosonde data. Assuming that all other factors which contribute to the artillery error budget are constant, then the differences in range and line corrections (MM - sonde) give an indication of the targeting error introduced by using MM data rather than real-time on-site radiosonde measurements. The differences are shown in Figure 7, where the rms errors are 7 m in line and 228 m in range. Thus for a typical training scenario, use of the MM data could reduce the meteorological error by over 50%.
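The rms figures quoted above are straightforward to reproduce once paired corrections are available; the short sketch below shows the arithmetic. The correction values in it are hypothetical and serve only to illustrate the calculation, not the study's data.

    import math

    def rms(values):
        """Root-mean-square of a sequence of corrections or differences (m)."""
        return math.sqrt(sum(v * v for v in values) / len(values))

    # Hypothetical paired (range, line) corrections in metres for three occasions.
    mm_corrections = [(-350.0, -10.0), (-420.0, -15.0), (-280.0, -8.0)]
    sonde_corrections = [(-500.0, -12.0), (-470.0, -11.0), (-390.0, -14.0)]

    range_diffs = [m[0] - s[0] for m, s in zip(mm_corrections, sonde_corrections)]
    line_diffs = [m[1] - s[1] for m, s in zip(mm_corrections, sonde_corrections)]
    print(f"rms range error {rms(range_diffs):.0f} m, rms line error {rms(line_diffs):.1f} m")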


Figure 7. Targeting errors (range error against line error, in m) resulting from using MM data instead of radiosonde measurements.


6.  DISCUSSION  AND  CONCLUSIONS 

The above results give an indication of the expected accuracy of ballistic met messages derived from MM data. The accuracy of these messages is dominated by the quality of the wind data. The results show that the MM derived winds are least reliable near to the surface (i.e. in the lowest 1000 m up to zone 3). For ballistic messages, above zone 3, the model winds are equivalent to measured winds with a staleness of 1½ to 4 hours; this equates to a spatial displacement of 45 to 120 km.

The training ranges are all more than 130 km away from the nearest station making radiosonde ascents (and at weekends, when the range stations are closed, may be much further from the nearest ascent). Even if the radiosonde data were available for the time of interest (which is rarely the case), the forecaster would still have to interpolate between ascents. It is concluded that there would be a clear benefit in making site-specific MM data for the training ranges available to forecasters, as it will give them much more representative meteorological profile information to work with. Plans are now in hand to allow forecasters to access these data, to provide them with software to view and edit the data, and to automatically produce SBMM and SCMM. It is considered preferable to allow the forecasters to edit/quality control the data, rather than provide a totally automated hands-off facility, as there will be occasions when the model information provides poor guidance (e.g. in mobile synoptic situations when timing errors occur). This facility will also allow them to modify the low level winds on the basis of local surface wind information.




6.1  BMETS 


BMETS (Battlefield METeorological System) is expected to enter service with the Royal Artillery in 1995 and will replace the current, but dated, AMETS (Artillery METeorological System). BMETS will employ a modern, ground passive RDF radiosonde tracking system to obtain meteorological data. Each BMETS detachment will consist of 2 light wheeled vehicles with trailers carrying a total detachment of 4 men. One system will be deployed with each field artillery and MLRS (Multiple Launch Rocket System) regiment. BMETS will provide the capability of generating hourly ballistic meteorological messages, Basic Wind Reports and WMO TEMP messages. Typically 6 BMETS would deploy with a Division and operate about 20 km apart. BMETS will have an automated interface to BATES (the Battlefield Artillery Target Engagement System) which will allow messages to be passed between units.

BMETS  will  provide  much  more  timely  and  more  densely  spaced  ballistic  meteorological  data  on  the 
battlefield.  This  will  lead  to  significant  improvements  in  the  efficiency  of  artillery  fire,  both  operationally 
and during training. However, as artillery ranges increase, the met induced error increases accordingly and
becomes  particularly  significant  when  considering  deep  operations  where  instrumental  measurement  is 
particularly  difficult.  In  order  to  provide  more  representative  meteorological  data  to  support  deep  and 
depth  operations,  it  will  be  necessary  to  use  meteorological  models. 

6.2  Future  Concepts  -  CMETS 

There is currently a project underway at the Met Office to develop a portable workstation-based high resolution crisis area model which can be run in a secure environment (e.g. at the Principal Forecast Office, HQSTC). This will include a full dynamic data assimilation scheme which will allow it to assimilate all the available in-theatre data, e.g. from BMETS and other military sources (which for security reasons cannot be used in NWP schemes run at Bracknell) and so should be capable of providing the best quality theatre NWP. This capability is expected to become operational in a few years time and it will then become the primary source for short-period meteorological forecasts for crisis areas. However, it is envisaged that it will be a theatre-scale model, with a resolution ≈15 km, rather than a battlefield-scale capability. The results presented here suggest that the present MM is capable of giving the most representative meteorological information when the radiosonde measurements are between 1½ and 3 hours old (or 45 to 120 km distant), although these figures should be reduced for forecasts up to t+6. (It is worth noting that no upper air data from Larkhill was assimilated into the MM for the forecasts assessed here, such that
these  figures  are  representative  of  its  accuracy  away  from  sources  of  upper  air  data.)  Thus,  away  from  the 
BMETS  network,  e.g.  in  the  target  area,  it  would  be  expected  that  the  model  would  be  the  best  source  of 
meteorological  information. 

The CMETS (Computerised METeorological System) concept is to provide "now-casts" (i.e. for 0 to 3 hrs ahead) of ballistic messages based on meteorological profiles at the point of the vertex of a shell's trajectory, or even along a shell or rocket trajectory. This will necessitate information on battlefield locations and demands that any processing/modelling is done at a lower echelon. An idea, currently being considered, is to use a PC/workstation-based battlefield-scale model. This would cover, say, a 200 km x 200 km area, with a horizontal resolution of 5 km or better, containing orographic information. This model would receive background fields derived from the portable theatre-scale models run at a higher echelon and the BMETS data (and any other available target area data) in order to provide an optimum 3-dimensional analysis of the meteorological conditions. Additional detail in the boundary layer wind field could be diagnosed using a complex terrain model (e.g. based on mass continuity or linear inviscid flow) which should improve the low level winds (which is where the current MM winds are poorest). Thus the aim is to provide a mobile computer work-station with the communications facilities to update the meteorological field both from strategic background information from a higher echelon and from tactical data available in-theatre. Using this information, it is hoped to provide mission specific ballistic data to artillery users.


REFERENCES 

Blanco A J, 1988: Methodology for Estimating Wind Variability, ASL-TR-0225, US Army Atmospheric Sciences Laboratory, White Sands Missile Range, NM, US.

Cullen M J P, 1993: The Unified Forecast/Climate Model, Meteorol. Mag., 122, 81-94.

Edge P, Kitchen M, Harding J and Stancombe J, 1986: The Reproducibility of RS3 Radiosonde and Cossor WF Mk IV Radar Measurements. Observational Services Memorandum OSM No 35, Meteorological Office, Bracknell, UK.

^wdd Maj A A, 1993: Computer Assisted Artillery Meteorology (CAAM). Briefing to the NAAG ISWG.3 at Army Research Laboratory, Battlefield Environment Directorate on 10 March 1993.

Nash J, 1991: Implementation of the Vaisala PC-Cora upper air sounding system at operational radiosonde stations and test ranges in the United Kingdom. In Proceedings of the 7th Symposium on Meteorological Observations and Instrumentation, New Orleans, LA, US, pp 270-275.

Spalding J B, Kellner N G and Bonner R S, 1993: Computer-Assisted Artillery Meteorological System Design. In Proceedings of the 1993 Battlefield Atmospherics Conference, Las Cruces, NM, USA, pp 45-.

Wenckebach K, 1991: On the accuracy of meteorological messages computed from numerical forecasts. German Military Geophysical Office. Presentation given at the NATO Panel IV SP2/LSWG 3 Joint Symposium on Ballistic Meteorology, May 1991, NATO HQ, Brussels, Belgium.

Whitelaw A, 1989: Study of Future Artillery Meteorological Techniques. Technical Note 3, Short Term Forecasting. Logica Report 242.20103.003.




RESULTS  OF  THE  LONG-RANGE  OVERWATER 
DIFFUSION  (LROD)  EXPERIMENT 


James  F.  Bowers 
West  Desert  Test  Center 
Dugway,  Utah  84022-5000 

Roger  G.  Carter  and  Thomas  B.  Watson 
NOAA  Air  Resources  Laboratory 
Idaho  Falls,  Idaho  83402 


ABSTRACT 

The Long-Range Overwater Diffusion (LROD) Experiment was a Joint Services, multi-agency project to help fill the data gap on the alongwind diffusion (especially at intermediate to long range) of a vapor or aerosol cloud instantaneously released to the atmosphere. LROD was conducted northwest of the island of Kauai, Hawaii in July 1993. As described in detail in a 1993 Battlefield Atmospherics Conference paper, the experiment consisted of a series of crosswind line source releases of sulfur hexafluoride (SF6) from a C-130 transport. The tracer cloud was tracked to 100 km using an aircraft-mounted continuous SF6 analyzer. The SF6 cloud was also sampled by continuous analyzers on boats at downwind distances of up to 100 km. This paper summarizes the LROD results and provides an overview of the data that will be available to researchers and model developers.

1.  BACKGROUND 

Current  atmospheric  transport  and  diffusion  models  commonly  assume  that  the  alongwind 
and  crosswind  diffusion  rates  are  the  same  because  little  is  known  about  alongwind 
diffusion.  However,  both  short-range  diffusion  experiments  (Nickola,  1971)  and 
theoretical  analyses  (Wilson,  1981)  indicate  that  this  is  a  poor  assumption.  Little  data 
exist  to  characterize  alongwind  diffusion,  especially  at  distances  of  more  than  a  few 
kilometers, because: (1) alongwind diffusion usually is not an issue when modeling
continuous  sources  of  air  pollution,  (2)  total  dosages  traditionally  have  been  assumed  to 
be  more  important  for  hazard  assessments  than  concentration  exposure  histories,  and  (3) 
samplers  capable  of  making  time-resolved  concentration  measurements  have  not  been 
readily  available  until  recently. 

The  data  gap  on  alongwind  diffusion  affects  the  accuracy  of  model  predictions  of  the 
transport and diffusion of any material from a short-term atmospheric release. These releases often present an immediate threat to life and property when they involve the
accidental  release  of  a  toxic  substance  from  a  failed  containment  vessel  (for  example, 
rupture  of  a  chlorine  tank  car).  If  diffusion  models  are  used  in  these  cases  to  make 
decisions  about  evacuation  routes  and  priorities,  erroneous  model  assumptions  about 
alongwind  diffusion  could  have  disastrous  consequences. 

The Long-Range Overwater Diffusion (LROD) Experiment was conducted near Kauai, Hawaii in July 1993 to help fill the data gap on alongwind diffusion, especially at intermediate to long range. The experiment was conducted over water rather than land primarily because it was desired that meteorological conditions be essentially constant over distances of 100 km or more for days at a time. (Steady-state mesoscale meteorological conditions were desired to facilitate both experiment conduct and the interpretation of experiment results.) Because the experiment was conducted over water, a secondary objective was to acquire data that will contribute to meteorologists' understanding of atmospheric transport and diffusion processes over oceans. The design of the LROD experiment is discussed in a paper presented at the 1993 Battlefield Atmospherics Conference (Bowers, 1993). This paper briefly summarizes the results of the experiment.

2.  EXPERIMENTAL  DESIGN  AND  CONDUCT 

The design and conduct of the LROD experiment are discussed in detail by Bowers et al. (1994) and summarized in a paper presented at the 1993 Battlefield Atmospherics Conference (Bowers, 1993). Briefly, LROD consisted of 13 crosswind releases of inert, nontoxic sulfur hexafluoride (SF6) from a C-130 transport flying 90 m above the ocean's surface. The tracer cloud, which formed a 100-km crosswind line source, was tracked to 100 km downwind using a continuous SF6 analyzer mounted in a twin-engine aircraft.

The sampling aircraft repeatedly measured the alongwind concentration profile 150 m above the ocean as it flew through the cloud in a series of downwind and upwind passes. The SF6 cloud was also sampled by continuous analyzers on five boats at downwind distances of up to 100 km. Because all aircraft- and boat-based SF6 concentration measurements were made near the midpoint of the 100-km line source, the measured concentrations should be unaffected by diffusion from either end of the cloud, even at 100 km downwind. Meteorological measurements were made from one of the boats and a specially instrumented single-engine aircraft.

The unseasonably high seas experienced in Hawaiian waters during LROD often prevented the small sampling boats from going out into open ocean. Even when the boats were able to leave port, the scientists on the boats were generally incapacitated by seasickness. Consequently, the boat-based SF6 measurements were quite limited (six trials with one or more sampling boats). However, the aircraft-based SF6 measurements were highly successful, yielding over 230 measurements of the alongwind cloud concentration profile. The only significant problems with the aircraft sampling system were a data logger problem during Trial 1 and a Global Positioning System (GPS) failure during Trial 12, which resulted in termination of aircraft sampling at 47 km. Because of the data logger problem during Trial 1, aircraft sampling data are not available for this trial.

3.  LROD  EXPERIMENT  RESULTS 

3.1  SF6 Dissemination

The SF6 tracer was released from the C-130 in liquid form. Because SF6 has a boiling point of -63.9 °C, the liquid SF6 vaporized almost instantaneously and was quickly mixed with ambient air by the aircraft's wake turbulence. The dissemination rate was measured by a flow meter, and the SF6 cylinders were weighed before and after use. The average SF6 dissemination rate was 12 g/m for Trial 1 and 5 g/m for the remaining trials.

Because  the  concentrations  measured  during  Trial  1  were  much  higher  than  anticipated, 
the  dissemination  rate  was  reduced  after  the  first  trial  by  decreasing  the  flow  rate  and 
increasing  the  speed  of  the  C-130.  The  dissemination  rate  standard  deviations  average 
less  than  5  percent  of  the  corresponding  mean  dissemination  rates,  which  indicates  that 
the  dissemination  rate  was  fairly  uniform. 

3.2  SF6 Sampling

The aircraft SF6 concentration measurements were paired with GPS position and time information during data acquisition. During post-experiment data processing, the SF6 concentrations were converted from millivolts to parts per trillion (ppt) by volume using
the  calibrations  made  during  each  trial.  The  positions  were  also  adjusted  to  account  for 
the  measured  9-s  delay  between  the  time  when  air  entered  the  sampling  inlet  on  the 
aircraft's exterior and the time when it reached the continuous analyzer. For convenience in data analysis, the positions were converted from longitude and latitude to a
Cartesian  coordinate  system  of  the  type  used  in  Gaussian  diffusion  models.  For  each 
trial,  the  origin  of  the  coordinate  system  was  placed  at  the  midpoint  of  the  dissemination 
line,  the  positive  x  axis  extended  in  the  downwind  direction,  and  the  y  axis  was  positive 
to  the  right  when  looking  downwind.  Because  of  the  easterly  trade  winds  during  the 
experiment,  the  x  axis  pointed  approximately  to  the  west  and  the  y  axis  pointed  approxi¬ 
mately  to  the  north.  Time  was  converted  to  seconds  after  the  C-130  was  at  the  midpoint 
of  the  dissemination  line. 

If the SF6 cloud had been stationary, the aircraft measurements could be used to estimate the alongwind cloud width under the assumption that cloud expansion was negligible for a single pass. However, the cloud transport speed was 10 to 20 percent of the aircraft's ground speed. Consequently, it was necessary to correct the downgrid coordinates of the aircraft concentration measurements to remove the effects of the cloud's motion. Assuming that the cloud's alongwind expansion was negligible during each pass and that neither the cloud transport speed nor the aircraft ground speed varied during the pass, the corrected downgrid coordinate of a concentration measured at time t is

x' = x + u(t_0 - t)    (1)

where x is the uncorrected coordinate, u is the cloud transport speed, and t_0 is the time when the aircraft passed through the cloud's center of mass. The cloud transport speed was determined from the motion of the cloud's center of mass as determined from the aircraft measurements. Equation (1) was used to construct the alongwind SF6 concentration profile at time t_0 for each pass through the cloud.
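The correction in Eq. (1) is a simple shift of each sample to the cloud's position at the reference time t_0. The sketch below applies it to a short pass; the sample coordinates, times, and transport speed are invented for illustration and are not LROD data.

    def correct_downgrid(x_m, t_s, t0_s, u_m_s):
        """Apply Eq. (1): shift a measurement made at time t to the cloud's
        position at t0 (the time the aircraft crossed the cloud's center of
        mass), assuming a constant cloud transport speed u during the pass."""
        return x_m + u_m_s * (t0_s - t_s)

    # Illustrative pass: (downgrid coordinate in m, time in s), cloud at 12 m/s.
    samples = [(40000.0, 1800.0), (40200.0, 1810.0), (40400.0, 1820.0)]
    t0 = 1810.0
    print([correct_downgrid(x, t, t0, 12.0) for x, t in samples])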

Figure 1 shows examples of upwind and downwind aircraft SF6 cloud concentration
profiles  both  before  and  after  the  correction  for  cloud  motion.  Because  the  concentration 
profiles  in  the  figure  are  from  a  trial  that  had  one  of  the  highest  cloud  transport  speeds, 
the  correction  for  cloud  motion  is  more  evident  than  for  most  of  the  other  trials.  As 
shown  by  the  figure,  the  corrected  profile  is  narrower  than  the  original  profile  for  the 
downwind  traverse  and  broader  for  the  upwind  traverse. 


Figure 1. Corrected (dashed line) and original (solid line) SF6 cloud concentration profiles for aircraft Pass 1 (downwind) and Pass 2 (upwind) through the cloud during Trial 12.


The LROD data can be used to test or develop a number of types of diffusion models, but the type of model most frequently used for operational applications is the Gaussian puff or plume model. As suggested by Figure 1, the individual cloud concentration profiles usually were not Gaussian in appearance. Nevertheless, most of the profiles could be described reasonably well by a Gaussian distribution. The LROD report contains best-fit Gaussian cloud parameters for three different fitting methods, but the method that appeared to give the best overall representation of the actual profiles was the peak/area match method. In this method, the fitted peak concentration χ_0 was set equal to the measured peak concentration and the Gaussian alongwind dispersion coefficient σ_x was computed from

σ_x = CL / (√(2π) χ_0)    (2)

where CL is the alongwind-integrated concentration. Note that this fitting method ensures that the fitted and actual profiles account for the same total mass. Figure 2 shows the peak/area Gaussian fits to the corrected cloud concentration profiles from Figure 1.
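A minimal sketch of the peak/area match follows. The report does not state how CL was integrated numerically, so the trapezoid rule used here is an assumption, and the profile values are made up for illustration.

    import math

    def peak_area_sigma(concentrations_ppt, x_coords_m):
        """Peak/area match: fit a Gaussian whose peak equals the measured peak
        and whose area equals the alongwind-integrated concentration, giving
        sigma_x = CL / (sqrt(2*pi) * chi_0)."""
        chi_0 = max(concentrations_ppt)
        # Alongwind-integrated concentration CL (ppt*m) by the trapezoid rule.
        cl = sum(0.5 * (concentrations_ppt[i] + concentrations_ppt[i + 1])
                 * (x_coords_m[i + 1] - x_coords_m[i])
                 for i in range(len(x_coords_m) - 1))
        return cl / (math.sqrt(2.0 * math.pi) * chi_0)

    # Made-up profile for illustration only.
    x = [0.0, 500.0, 1000.0, 1500.0, 2000.0]
    c = [1.0, 40.0, 100.0, 35.0, 2.0]
    print(f"sigma_x ~ {peak_area_sigma(c, x):.0f} m")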


Figure 2. Measured (solid line) and fitted (dashed line) SF6 cloud concentration profiles for aircraft Passes 1 and 2 of Trial 12. Measured profiles have been corrected for cloud motion.


The boat continuous analyzer measurements were processed in the same manner as the aircraft measurements, including conversion to trial grid coordinates and correction for cloud motion. Each boat continuous analyzer SF6 concentration profile was compared with the aircraft profiles for the two passes made nearest to the boat during that trial. Figure 3 shows an example of two aircraft profiles superimposed on a boat profile. In this case, the differences between the two consecutive aircraft profiles are greater than the differences between the boat profile and the first aircraft profile. A statistical comparison of the Gaussian model cloud parameters σ_x and χ_0 estimated from the boat and aircraft measurements showed that the differences are not significant at the 95 percent confidence level. Thus, the aircraft σ_x and χ_0 should be representative of σ_x and χ_0 near the surface, at least at the downwind distances of the available boat measurements (60 and 100 km).




Figure 3. Comparison of aircraft and boat continuous SF6 analyzer measurements for Trial 5, Boat 5 (100 km).


3.3  Meteorological  Measurements 

Standard  surface  and  upper-air  (radiosonde)  measurements  were  made  from  Boat  1 
approximately  10  km  downwind  from  the  dissemination  line  during  the  first  three  trials. 
Because  of  high  seas,  it  was  not  possible  to  keep  Boat  1  in  the  experiment  area  for  days 
at  a  time  during  the  remaining  trials.  Beginning  with  Trial  6,  Boat  1  was  sent  each  day 
to  a  position  15-20  km  south  of  Kauai,  outside  of  the  island’s  wake.  The  observations 
should  therefore  be  representative  of  conditions  in  open  ocean,  but  not  necessarily  of 
conditions  in  the  experiment  area  200  km  to  the  northwest.  Wind,  temperature,  and 
humidity measurements were made at a nominal height of 10 m on the boat's main mast
and  sea  surface  temperature  was  obtained  from  a  thermometer  mounted  on  the  hull.  The 
wind  observations  were  corrected  for  pitch,  roll,  and  boat  heading.  In  addition  to  the 
Boat  1  radiosonde  soundings,  standard  synoptic  radiosonde  soundings  are  available  from 
the  Lihue  Airport  on  the  east  shore  of  Kauai. 

The meteorological research aircraft, a specially instrumented Rutan Long-EZ, entered the experiment area after the C-130 had completed SF6 dissemination. Flying a track approximately parallel to and 45 km south of the sampling line, the Long-EZ began each trial by measuring the vertical profiles of wind, temperature, and humidity as it slowly descended from 2500 m above mean sea level (MSL) to about 25 m MSL. The Long-EZ then flew to the dissemination line and back while it measured the vertical fluxes of sensible and latent heat, momentum, and carbon dioxide (CO2). After completing the flux runs, the Long-EZ again measured meteorological profiles as it slowly ascended from 25 to 2500 m MSL during its return flight to the dissemination line. The only aircraft meteorological measurements included in the LROD final report (Bowers et al., 1994) are the vertical profiles of temperature and dewpoint and two derived parameters: potential temperature and equivalent potential temperature. The wind, turbulence, and flux measurements will be provided in a supplemental report.

Table 1 summarizes the meteorological conditions at the start of each LROD trial. The 10-m wind speeds and atmospheric stabilities are based on the Boat 1 surface observations. Both the Naval Postgraduate School overwater stability classification scheme (Schacher et al., 1982) and the widely used Turner (1964) scheme give the Pasquill stability category as the neutral D category. The inverse Obukhov lengths were calculated from Boat 1 observations using the bulk methods of Wu (1986). The cloud transport speeds were determined from the aircraft measurements of the motion of the SF6 cloud's center of mass and the mixing depths were calculated from the SF6 mass balance at long downwind distances.


Table 1. Summary of LROD Trial Meteorological Conditions*

  Trial   u10m (m/s)   u (m/s)   Stab.   1/L (m^-1)   Hm (m)
  2       8.2          10.5      D       -0.001       810
  3       9.8          10.1      D       -0.001       1155
  4       -            11.1      -       -            815
  5       -            11.5      -       -            1495
  6       7.7          10.7      D       -0.003       735
  7       9.3          12.0      D       -0.001       1110
  8       10.3         12.7      D       0.003        1005
  9       10.3         10.1      D       0.000        665
  10      5.1          9.9       D       0.002        435
  11      10.3         10.3      D       0.000        635
  12      11.3         13.5      D       0.002        715
  13      11.3         15.6      D       0.002        765

  * u10m = 10-m wind speed, u = cloud transport speed, Stab. = Pasquill stability
    category, 1/L = inverse Obukhov length, Hm = mixing depth.


4.  DISCUSSION  OF  RESULTS 

Figure 4 shows all of the LROD aircraft σ_x measurements plotted as a function of downwind distance. Figure 4 also shows the Pasquill-Gifford σ_y (lateral dispersion coefficient) curves for the D (neutral), E (stable), and F (very stable) Pasquill stability categories (Turner, 1970) because many current diffusion models for instantaneous releases assume that σ_x can be approximated by σ_y. The measured σ_x values range from near the Pasquill-Gifford curve for D stability to well below the curve for F stability. Thus, although meteorological conditions were similar during all trials, there were significant trial-to-trial variations in σ_x. It therefore appears that the LROD data set is suitable for use in investigating the quantitative relationship between alongwind puff growth and meteorological conditions. If this relationship can be established, σ_x can be predicted for other meteorological conditions and other settings, including over land.


Figure  4.  Alongwind  dispersion  coefficient  versus  downgrid  distance  for  all  LROD 
trials. 




5.  SUMMARY 


The  LROD  experiment  yielded  a  unique  data  set  that  should  contribute  to  an  improved 
understanding  of  both  the  physics  of  alongwind  diffusion  and  atmospheric  transport  and 
diffusion  processes  over  oceans.  The  data  presented  in  the  LROD  final  report  will  likely 
meet  the  needs  of  most  model  developers  and  researchers.  For  those  who  require  more 
detailed information, the 4-Hz (aircraft) and 1-Hz (boat) continuous analyzer SF6
concentration  measurements  are  available  from  the  Meteorology  Division,  West  Desert 
Test  Center  (formerly  the  Materiel  Test  Directorate,  U.S.  Army  Dugway  Proving 
Ground). 

ACKNOWLEDGMENTS 

The  sponsors  of  the  LROD  experiment  were  the  Joint  Contact  Point  Directorate  (Project 
D049),  U.S.  Army  Dugway  Proving  Ground,  Dugway,  Utah;  Chemical  and  Biological 
Defense  Division,  Brooks  Air  Force  Base,  Texas;  and  Naval  Surface  Warfare  Center, 
Dahlgren,  Virginia. 

REFERENCES 

Bowers,  J.  F.,  1993:  "Overview  of  the  Long-Range  Overwater  Diffusion  (LROD) 
Experiment." In Proceedings of the 1993 Battlefield Atmospherics Conference,

U.S.  Army  Research  Laboratory,  White  Sands  Missile  Range,  NM  88002-5501,  pp 
157-170. 

Bowers,  J.  F.,  G.  E.  Start,  R.  G.  Carter,  T.  B.  Watson,  K.  L.  Clawson,  and  T.  L. 

Crawford,  1994:  "Experimental  Design  and  Results  for  the  Long-Range  Overwater 
Diffusion  (LROD)  Experiment."  Report  No.  DPG/JCP-94/012,  U.S.  Army 
Dugway  Proving  Ground,  Dugway,  UT  84022-5000. 

Nickola, P. W., 1971: "Measurements of the Movement, Concentration, and Dimensions
of Clouds Resulting from Instantaneous Point Sources." J. Appl. Meteor., 8:
962-973. 

Schacher,  G.  E.,  D.  E.  Speil,  K.  L.  Davidson,  and  C.  W.  Fairall,  1982:  "Comparison 
of  Overwater  Stability  Classification  Schemes  with  Measured  Wind  Direction 
Variability." Naval Postgraduate School Report No. NPS-61-82-0062 prepared for
Bureau  of  Land  Management,  Los  Angeles,  CA. 

Turner, D. B., 1964: "A Diffusion Model for an Urban Area." J. Appl. Meteor., 3:
83-91. 

Wilson, D. J., 1981: "Along-wind Diffusion of Source Transients." Atmos. Env., 15:
489-495. 




Wu,  J.,  1986:  "Stability  Parameters  and  Wind-Stress  Coefficients  Under  Various 
Atmospheric Conditions." J. Atmos. and Ocean. Tech., 333-339.


MODELED CEILING AND VISIBILITY


Robert  J.  Falvey 
United  States  Air  Force 
Environmental  Technical  Applications  Center 
Simulation  and  Techniques  Branch 
859  Buchanan  Street 
Scott  AFB,IL  62225-5116 


ABSTRACT 


Modeling  distributions  of  climatological  data  using  mathematical 
equations  is  an  effective  data  compression  technique.  Since  ceiling  and 
visibility  data  are  not  normally  distributed,  modeling  their  distributions 
is  accomplished  using  the  Weibull  family  of  curves.  Cumulative 
frequency  distributions  using  20  years  of  ceiling  and  visibility  data  are 
analyzed  and  Weibull  curves  are  fit  to  the  data.  The  Weibull  coefficients 
are  calculated  and  stored  for  use  with  the  microcomputer  based  MODCV 
software.  The  software  uses  these  coefficients,  along  with  current 
conditions  and  serial  correlations  in  a  first-order  Markov  process,  to 
produce  conditional  and  unconditional  probability  forecasts  of  ceiling  and 
visibility and joint ceiling and visibility probabilities. The MODCV
software uses the standard Microsoft Windows interface which allows the
user to quickly select conditional and unconditional probabilities. The
output consists of bar graphs and tables of ceiling and visibility
probabilities for ten wind stratifications and eight user-selected forecast
times up to 72 hours in the future.

1. INTRODUCTION

In  the  past,  climatological  requirements  of  the  typical  weather  station  have  been  satisfied  by 
bulky  paper  copies  of  the  Revised  Uniform  Standard  Surface  Weather  Observations 
(RUSSWO), Wind Stratified Conditional Climatology (WSCC) tables, and Air Weather
Service (AWS) Climatic Briefs. As microcomputers have become integrated into weather
station  operations,  the  opportunity  to  complement  and/or  replace  these  printed  summaries  with 
electronic  climatological  databases  is  possible.  The  MODeled  Ceiling  and  Visibility 
(MODCV)  software  was  originally  designed  to  replace  the  WSCC  tables.  However,  in  the 
original  version  of  the  program,  the  data  was  not  wind  stratified.  Since  winds  play  a  vital 
role  in  both  ceiling  and  visibility,  this  lack  of  wind  stratification  made  the  software  fall  short 
of  the  end-user's  needs.  The  methodology  described  below  is  the  same  as  in  the  original 
version  and  is  taken  directly  from  Kroll  and  Elkins  (1989).  The  results  of  the  wind  stratified 
modeling  are  described  in  section  3. 




2. METHODOLOGY

MODCV  was  developed  to  provide  rapid  transportable  access  to  unconditional  and  conditional 
probability forecasts of ceiling and visibility based on climatological data.

2.1 Unconditional Probability

Unconditional climatology data, which is simply the relative frequency of occurrence that a
certain condition was observed, is easily tabulated at any location that has a representative
observational record. The unconditional probability that the visibility is below 1 mile at 12
GMT  is  calculated  by  summing  the  number  of  times  the  visibility  at  12  GMT  was  below  1 
mile  and  dividing  by  the  number  of  observations  at  12  GMT.  This  is  the  method  used  to 
produce  the  RUSSWO  summaries. 
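As a minimal illustration of this relative-frequency tabulation, the snippet below computes such a probability from a (hypothetical) set of 12 GMT visibility reports; the station record and threshold are invented for the example.

    def unconditional_probability(observations, threshold):
        """Relative frequency that the observed value is below the threshold,
        e.g. visibilities (miles) reported at 12 GMT over the period of record."""
        below = sum(1 for v in observations if v < threshold)
        return below / len(observations)

    # Hypothetical 12 GMT visibilities (miles) from a short record.
    print(unconditional_probability([0.5, 2.0, 7.0, 10.0, 0.25, 6.0], 1.0))  # -> 1/3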


A  very  powerful  alternative  to  tabulating  each  condition  into  frequency  tables  is  to  model  the 
Cumulative  Distribution  Function  (CDF).  This  process  involves  the  use  of  mathematical 
equations  to  fit  the  cumulative  frequency  distribution.  If  the  variable  being  modeled  is 
continuous,  the  cumulative  probability  associated  with  any  value  of  that  variable  can  be 
calculated  using  the  equation: 


P = F(x)    (1)

where  F(x)  is  the  function  that  models  the  distribution  of  the  CDF.  In  MODCV,  therefore, 
given  any  threshold  value  of  x,  this  function  is  used  to  calculate  the  unconditional  probability 
that x will be below that threshold value.

2.2  Conditional  Probability 

The basic component of the conditional ceiling and visibility is based on the Ornstein-Uhlenbeck (O-U) stochastic process, a first order Markov process for which each value of a random variable x_t is considered a particular value of a stationary stochastic process. The stochastic model relates a value of x at time t (x_t) to an earlier value of x at time zero (x_0). A frequent assumption in statistical application is that the variable is normally distributed. Unfortunately, many meteorological variables, such as ceiling and visibility, are not normally distributed. Non-normal variables can be transformed into normal distributions through a process called transnormalization. Transnormalization involves expressing the raw variables in terms of their equivalent normal deviate (END). This process is discussed by Boehm (1976). Once the variable has been normalized, the joint density function associated with x_0 and x_t becomes:


f(x_0, x_t) = [1 / (2πσ²√(1-ρ²))] exp{ -[(x_0-μ)² - 2ρ(x_0-μ)(x_t-μ) + (x_t-μ)²] / [2σ²(1-ρ²)] }    (2)


where ρ is the serial correlation between successive values of x and where μ and σ are the mean and standard deviation of x. Since we are interested in the probability of x_t given the initial value of x_0, a conditional distribution of x is required. If x can be approximated by a first-order Markov equation, then x_t is dependent only upon x_0. If successive observations of x have a bivariate normal distribution, the conditional distribution of x_t is normal with a mean of:

E[(x_t|x_0)] = μ + ρ(x_0 - μ)    (3)

and  a  variance  of: 

var[(x_t|x_0)] = σ²(1 - ρ²)    (4)

Equations 3 and 4 are basic to the first-order Markov equation. A value of x_t can be calculated using:

x_t = μ + ρ(x_0 - μ) + σ√(1-ρ²) η_t    (5)

If the variable is distributed normally with a mean of zero and a variance of one, equation 5 reduces to:

x_t = ρ x_0 + √(1-ρ²) η_t    (6)


where ρ is the correlation coefficient between x_0 and x_t separated by time interval t, and η_t is a random normal number. The process is considered to be Markov if ρ = ρ_h^t, where ρ_h is the hour-to-hour correlation associated with x. If ρ_h is a constant, then this process is considered stationary and is known as the Ornstein-Uhlenbeck or OU process.

Application of the OU process to meteorological variables is well documented in Gringorten (1966), Sharon (1967), and Whiton and Berecek (1982). Its use with variables whose time series have a random component and adhere to the restrictions of the Markov process is justifiable. Stationarity is a feature that is especially favorable for application to weather variables since predictions derived from stationary processes will converge toward the mean as time increases. Thus, the conditional probabilities will converge to unconditional probabilities as the forecast time period increases.

From equation 6, we can conclude that, for a specific value of x_0, the value of η_t will exceed a minimum value η_min as frequently as x_t exceeds a minimum value x_min given an initial value x_0. In terms of probability,

P(η_t > η_min) = P(x_t > x_min | x_0)    (7)

Now we replace the value of η_min with x(t|0), the normalized value corresponding to the conditional probability of x_t. Thus, equation 7 becomes:

P(x_t | x_0) = P(x_t > x_min | x_0)    (8)

where P(x_t|x_0) is the conditional probability of x_t given the value of x_0, P(x_t) is the unconditional probability of x at time t, and P(x_0) is the unconditional probability of x at time zero. MODCV uses equation 8 to calculate conditional probabilities of ceiling and visibility.
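One common way to realise this construction in code is to convert the unconditional probabilities to equivalent normal deviates, apply the OU correlation of equation 6, and take the conditional normal probability. The sketch below illustrates that general END/OU approach; the function name is ours, and the exact algebra of equation 8 in the paper may differ in detail from this rendering.

    from statistics import NormalDist

    def conditional_probability(p_uncond_t, p_uncond_0, rho_hourly, lead_hours):
        """Probability that the variable lies below the threshold whose
        unconditional probability is p_uncond_t at lead time t, given a current
        observation whose climatological cumulative probability is p_uncond_0."""
        nd = NormalDist()
        rho = rho_hourly ** lead_hours          # Markov assumption: rho = rho_h**t
        z_t = nd.inv_cdf(p_uncond_t)            # END of the threshold at time t
        z_0 = nd.inv_cdf(p_uncond_0)            # END of the initial observation
        return nd.cdf((z_t - rho * z_0) / (1.0 - rho * rho) ** 0.5)

    # Example: climatological probability 0.20 of visibility below 1 mile, current
    # observation at the 5th percentile of the distribution, rho_h = 0.95, t = 3 h.
    print(conditional_probability(0.20, 0.05, 0.95, 3))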


2.3 Modeling Cumulative Distributions

Observations for a station's entire period of record (POR) are extracted from USAFETAC's database and binned by month, hour, and wind direction category (calm, 22.5° either side of N, NE, E, SE, S, SW, W, NW, and all). The Cumulative Distribution Function (CDF) is fit using the Weibull family of curves. Since ceiling and visibility distributions are not normal, fitting their CDF requires the Weibull's flexibility. The use of the Weibull curve for modeling ceiling and visibility is well documented (Somerville and Bean, 1979; Somerville and Bean, 1981; and Whiton and Berecek, 1982).

Using the transnormalization process, Equivalent Normal Deviates, or ENDs, are calculated for each month, hour, and wind category. Once the variables are normalized, the Weibull and Reverse Weibull are used to fit the visibility and ceiling CDF, respectively. The equations are expressed as:


P = 1 - e^(-α x_c^β)    (9)

and

P = e^(-α x_c^β)    (10)

where α and β are the modeling coefficients, x_c is some threshold of ceiling or visibility, and P is the probability that an actual ceiling or visibility observation (x) will be less than x_c. The values of the empirical cumulative distribution are regressed on the Weibull and Reverse Weibull distribution function.
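The sketch below evaluates Eq. (9) and fits α and β by least squares on the linearised Weibull form ln(-ln(1 - P)) = ln(α) + β ln(x). The paper does not spell out the numerical regression method, so this linearised fit is an assumption, and the thresholds and frequencies shown are invented.

    import math

    def weibull_cdf(x, alpha, beta):
        """Eq. (9): modeled probability that visibility is below threshold x."""
        return 1.0 - math.exp(-alpha * x ** beta)

    def fit_weibull(thresholds, empirical_cdf):
        """Least-squares fit of alpha, beta on the linearised Weibull form.
        (A Reverse Weibull fit for ceiling would proceed analogously.)"""
        xs = [math.log(x) for x in thresholds]
        ys = [math.log(-math.log(1.0 - p)) for p in empirical_cdf]
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        beta = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
                / sum((x - mx) ** 2 for x in xs))
        alpha = math.exp(my - beta * mx)
        return alpha, beta

    # Illustrative visibility thresholds (miles) and empirical cumulative frequencies.
    alpha, beta = fit_weibull([0.5, 1.0, 3.0, 7.0], [0.05, 0.10, 0.30, 0.60])
    print(alpha, beta, weibull_cdf(1.0, alpha, beta))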

The  modeling  coefficients  are  used  by  the  microcomputer  program  to  calculate  normalized 
probabilities  which  are  inverse  transnormalized  to  convert  the  END  probability  back  to  an 
actual  probability.  This  process  allows  the  user  to  generate  conditional  and  unconditional 
climatological  forecasts  of  ceiling,  visibility,  and  joint  ceiling  and  visibility.  The  user  selects 
a  wind  direction  category  and  in  the  case  of  conditional  climatology,  an  initial  value  of 
ceiling  and/or  visibility.  The  program  then  displays  the  probability  of  ceiling  and/or 
visibility  at  user  specified  thresholds  and  times  out  to  72  hours  in  either  tabular  or  bar  graph 
form,  as  shown  in  Figures  1  and  2. 




Figure  1.  MODCV  tabular  output 


Figure  2.  MODCV  graphical  output 


3. MODEL VERIFICATION

Verification  of  MODCV  was  conducted  for  eight  locations  around  the  world  in  order  to  test 
the  model  under  different  climatological  regimes.  MODCV  was  tested  against  Wind 
Stratified  Conditional  Climatology  (WSCC)  tables  which  are  produced  by  USAFETAC  OL-A 
located in Asheville, NC. The goal was to provide a compact computer based product that was
at  least  as  good  as  the  bulky  WSCC  tables. 


3.1  Brier  Skill  (P)  Score 


The Brier Skill or "P" score (Brier, 1950) is a statistical technique used to measure the amount of skill in the probability forecast. The P-score equation is:

P = (1/N) Σ_{i=1..N} Σ_{j=1..r} (f_ij - E_ij)²    (11)

where r is the number of forecast categories, N is the number of days, f is the probability forecast of the event occurring in the category, and E takes the value of one or zero according to whether the ceiling or visibility occurs in that category. P ranges from zero for a perfect forecast to two for no skill. The number of forecast categories was determined from the WSCC tables for each of the eight stations.
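For concreteness, the following is a minimal rendering of the P-score above applied to hypothetical forecasts; the category probabilities and outcomes are invented for the example.

    def p_score(forecast_probs, observed_category):
        """Brier 'P' score over N days and r categories: 0 is a perfect
        forecast, 2 is no skill.
        forecast_probs: N lists of r category probabilities;
        observed_category: N indices of the category that occurred."""
        n = len(forecast_probs)
        total = 0.0
        for probs, obs in zip(forecast_probs, observed_category):
            for j, f in enumerate(probs):
                e = 1.0 if j == obs else 0.0
                total += (f - e) ** 2
        return total / n

    # Two days, three categories (illustrative numbers only).
    print(p_score([[0.7, 0.2, 0.1], [0.1, 0.3, 0.6]], [0, 2]))  # -> 0.20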

Verification  data  was  collected  during  the  months  of  April  and  November  of  1988.  Because 
of  the  sheer  bulk  of  the  data,  only  the  3-hour  and  24-hour  forecasts  were  verified.  For  each 
day  in  April  and  November,  MODCV  and  WSCC  forecasts  were  compared  to  observed 
conditions  and  a  P-score  was  calculated  for  each.  The  MODCV  P-scores  averaged  over  the 
month  were  consistently  lower  than  the  WSCC  P-scores  at  both  the  3-hour  and  24-hour 
forecast times for both ceiling and visibility. Figures 1-3 show the average P-scores of all nine stations combined for the months of April, November and for both months combined, respectively. Note that the scores are consistently lower (better) for the MODCV forecasts, especially at the 3 hour point. Table 1 lists the numeric P-scores for each of the nine stations averaged over the 30 days of April and November. Also included are the averages for all stations for April, November, and for both months combined.


Table  1.  P-Scores  for  9  stations  for  April  and  November  for  MODCV  and  WSCC. 


4.  SUMMARY 


MODCV was originally created to give forecasters an easy-to-use microcomputer program to make climatological data easier to use. Unfortunately, the fielded program lacked the wind stratification needed to provide useful probabilistic guidance. Wind stratification has been added and the results indicate that MODCV provides more accurate climatological forecasts than the WSCC tables. Also, the P-Scores calculated using the new version are smaller than those calculated using the original version (not shown). Since the conclusion at that time was that MODCV was practical for operational use, it can be concluded that the new version can be used operationally.




5.  REFERENCES 

Boehm, A. R., 1976: Transnormalized Regression Probability, AWS-TR-75-259, Air Weather Service, Scott AFB, IL.

Brier, G. W., 1950: "Verification of Forecasts Expressed in Terms of Probability," Mon. Wea. Rev., 78: 1-3.

Gringorten, I. I., 1966: A Stochastic Model of the Frequency and Duration of Weather Events, United States Air Force Cambridge Research Laboratories, Bedford, MA.

Kroll, J. T., and H. A. Elkins, 1989: The Modeled Ceiling and Visibility (MODCV) Program, USAFETAC/TN-89/001, United States Air Force Environmental Technical Applications Center, Scott AFB, IL 62225-5116.

Somerville, S. J., and P. N. Bean, 1979: Stochastic Modeling of Climatic Probabilities, AFGL-TR-79-0222, United States Air Force Geophysics Laboratory, Bedford, MA.

Somerville, S. J., and P. N. Bean, 1981: Some Models for Visibility for German Stations, AFGL-TR-81-0144, United States Air Force Geophysics Laboratory, Bedford, MA.

Whiton, R. C., and E. M. Berecek, 1982: Basic Techniques in Environmental Simulation, USAFETAC/TN-82-004, Air Weather Service, Scott AFB, IL.




A  NEW  PCFLOS  TOOL 


K.E. Eis
STC-METSAT

Fort  Collins,  CO  80521,  U.S.A. 


ABSTRACT 


Probability  of  Cloud-Free-Line-of-Sight  (PCFLOS)  is  a  powerful  tool  used  by  all 
components  of  the  DoD  in  weapons  and  sensor  development.  STC-METSAT  has 
just  completed  the  development  of  a  medium-resolution  (12  to  15  km) 
3-dimensional  satellite-derived  database  under  DOE  sponsorship  that  provides 
PCFLOS  interrogation  of  unprecedented  resolution  in  the  Korean  and  Iraq  areas 
of  interest.  The  database  and  PCFLOS  extraction  software  includes  the  13  month 
period  of  record  from  April  1990  through  April  1991.  This  package  allows 
user-defined  target  and  sensor  altitudes  and  unlike  any  other  PCFLOS  analysis 
package,  can  provide  azimuthally  dependent  results.  Analysis  of  PCFLOS  using 
this  package  helped  confirm  earlier  investigations  into  the  behavior  of  PCFLOS 
and  its  dependence  not  only  on  mean  cloud  cover,  but  also  on  cloud  structure  in 
the  area  of  investigation.  The  study  includes  both  temporal  and  spatial  variance 
analysis  of  the  new,  satellite-based  PCFLOS. 

1.  INTRODUCTION 

Airborne sensors such as RAPTOR are affected by both cloud obscuration and clear-air IR
wavelength  attenuation.  Previous  methods  of  quantifying  these  degradations  have  used 
statistically-based  Probability  of  Cloud-Free-Line-of-Sight  (PCFLOS)  algorithms  that  use  little, 
or  no,  satellite  data  and  are  dominated  by  low-resolution  surface  databases.  This 
DOE-sponsored  study  provides  a  13-month  database  over  the  Iraqi  and  Korean  areas  and  the 
appropriate  extraction  software  to  compute  PCFLOS  directly  from  satellite  data  for  the  period 
April  1990  to  the  end  of  April  1991.  Temporal,  spatial,  and  frequency  analyses  were  generated 
from  this  database. 

This 15-km resolution 3-hourly database can be improved to a 5-km resolution, 1-hour global
analysis  using  the  STC  developed  Climatological  and  Historical  ANalysis  of  Clouds  for 
Environmental  Simulations  (CHANCES)  database  (Reinke  et  al.  1993).  This  paper  will 




describe  the  elements  of  the  tool  and  outline  some  of  the  results  obtained  from  the  analysis  in 
regard  to  temporal  and  spatial  variations,  cloud  structure,  and  azimuthal  behavior  of  PCFLOS. 

2.  BACKGROUND 

The  method  used  to  create  a  3-dimensional  cloud  field  was  fully  developed  in  a  Phase  I  study 
(Eis  1993).  A  geosynchronous  IR  image  is  first  interrogated  for  clouds.  The  IR  radiance  is  then 
converted  to  a  blackbody  temperature,  which  is  then  converted  to  cloud  top  height  by  using  an 
interpolated  rawinsonde  value  computed  using  local  rawinsonde  stations  analyzed  with  a  Barnes 
algorithm. 
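For readers unfamiliar with the analysis step mentioned above, the sketch below shows a single-pass, Gaussian-weighted (Barnes-style) interpolation of station values to a grid point.  The smoothing parameter, station coordinates, and values are illustrative assumptions; the paper does not give the actual analysis settings.

    import math

    def barnes_analysis(grid_point, stations, kappa_km2=250000.0):
        """stations: list of ((x_km, y_km), value).  Returns the distance-weighted value."""
        num = den = 0.0
        for (x, y), value in stations:
            d2 = (x - grid_point[0]) ** 2 + (y - grid_point[1]) ** 2
            weight = math.exp(-d2 / kappa_km2)      # Gaussian weight falls off with distance
            num += weight * value
            den += weight
        return num / den

    # Three hypothetical rawinsonde stations reporting a mandatory-level temperature (K)
    obs = [((0.0, 0.0), 272.0), ((600.0, 100.0), 268.0), ((200.0, 500.0), 275.0)]
    print(barnes_analysis((300.0, 200.0), obs))      # analyzed value at one grid point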

The  cloud/no  cloud  algorithm  is  a  modification  of  the  International  Satellite  Cloud  Climatology 
Project  (ISCCP)  and  High  Resolution  Satellite  Cloud  Climatologies  (HRSCC)  cloud  detection 
algorithms.  Only  IR  images  were  used  for  cloud  detection.  Infrared  satellite  imagery  from 
GMS  (Korea)  and  METEOSAT  3  and  4  (Iraq)  was  used  for  the  time  period  from  April  1990 
through  April  1991.  The  details  of  the  cloud/no  cloud  algorithm  are  beyond  the  scope  of  this 
paper  but  are  fully  developed  in  the  Raptor  II  report  (Eis  1994).  Figures  1  and  2  show  the 

locations  of  the  rawinsonde  data  used  in  the  estimation  of  the  vertical  cloud  portion  of  the  data 
base. 


Figure  1 .  Iraq  area  of  interest.  Dots  indicate  upper  air  data  locations. 




Figure  2a.  Korean  (North)  area  of  interest.  Dots  indicate  upper  air  data  locations. 


Figure  2b.  Korean  (South)  area  of  interest.  Dots  indicate  upper  air  data  locations. 

The  analyzed  height  and  temperature  fields  at  the  mandatory  levels  were  merged  with  the 
temperature  of  the  cloudy  pixels  to  determine  the  cloud-top  height.  A  linear  interpolation 
between  levels,  or  an  extrapolation  if  the  cloud  was  above  the  uppermost  level,  was  used  to  find 
the  height  of  the  cloud  top.  Again,  the  details  of  the  cloud  top  and  base  algorithms  can  be  found 
in Eis (1994).
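A minimal sketch of the interpolation step described above is given below; the profile values are invented for illustration, and the actual algorithm in Eis (1994) works from the Barnes-analyzed mandatory-level fields.

    def cloud_top_height(t_cloud, levels):
        """levels: (height_m, temp_K) pairs from low to high; t_cloud: pixel blackbody temp."""
        for (z0, t0), (z1, t1) in zip(levels, levels[1:]):
            if min(t0, t1) <= t_cloud <= max(t0, t1):
                return z0 + (t_cloud - t0) * (z1 - z0) / (t1 - t0)   # linear interpolation
        # colder than the uppermost level: extrapolate with the top-layer lapse rate
        (z0, t0), (z1, t1) = levels[-2], levels[-1]
        return z1 + (t_cloud - t1) * (z1 - z0) / (t1 - t0)

    profile = [(1500, 281.0), (3000, 272.0), (5800, 253.0), (9600, 228.0)]   # illustrative
    print(round(cloud_top_height(240.0, profile)))                          # about 7800 m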




To determine the cloud base, a check was first made to see if the temperature minus the dew
point at 850 and 700 mb was less than a specified threshold; in this case, 5 K was used for both
Iraq and Korea.  If the lower levels of the atmosphere were determined to be moist, the cloud
base was placed at the LCL, on the assumption that the clouds were caused by moisture near
the surface (e.g. a thunderstorm).  If the cloud top was less than the LCL, the base was assigned
a value of the top height minus 250 m.  This condition could be brought about by a poor
objective analysis, very low clouds with emittance of less than one, or a partly cloudy field of
view of the satellite.  The cloud base was never allowed to go below the LCL.
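The rule set just described can be summarized in a few lines of Python.  This is a paraphrase of the stated logic only; the handling of a dry lower atmosphere and the other special cases in the full algorithm (Eis 1994) are not reproduced here.

    def cloud_base(top_m, lcl_m, dewpt_dep_850_k, dewpt_dep_700_k, threshold_k=5.0):
        """Returns a cloud-base height (m) for the moist case described in the text."""
        moist = dewpt_dep_850_k < threshold_k and dewpt_dep_700_k < threshold_k
        if not moist:
            return None                    # dry case: handled elsewhere in the full algorithm
        if top_m < lcl_m:
            return top_m - 250.0           # poor analysis, low emittance, or partly cloudy FOV
        return lcl_m                       # surface-based moisture (e.g., a thunderstorm)

    print(cloud_base(top_m=6200.0, lcl_m=1400.0, dewpt_dep_850_k=3.0, dewpt_dep_700_k=4.0))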

3.  ERROR  CONSIDERATIONS 

There are several sources of error associated with the creation of the 3-dimensional database.
The cloud detection algorithm was developed to match IR images with the eye's response;
consequently, subvisual cirrus will not be detected.  The cloud detection algorithm was tuned
for each day.  Cloud detection over cold land was manually edited for 146 of the 1259 images
used in this study (106 for the Korean sector and 40 for the Iraqi sector).

Cloud  height  values  are  the  database  parameter  with  the  largest  possible  error.  In  both  the 
Korean  and  Iraqi  cases  the  distances  between  sampling  points  is  well  over  500  km.  The  result  is 
that  there  is  significant  temporal  and  spatial  spreading  of  the  data  to  each  grid  point.  If  an  air 
mass  discontinuity  between  two  adjacent  stations  is  sensed,  we  have  no  way  of  determining 
where  along  the  station-to-station  line  the  front  lies. 

The  cloud  base  and  top  values  were  analyzed  statistically  for  the  Iraq  data  to  show  some  of  the 
bulk  behavior  of  the  cloud/no  cloud  and  top  and  bottom  assignment  algorithms.  Figure  3  shows 
the  normalized  histogram  of  cloud  top  temperatures,  as  measured  with  the  METEOSAT  3  and  4 
satellites  for  the  Iraq  area  of  interest  for  all  13  months  of  0000  UTC  interrogated  pixels. 

Figure 3.  Normalized histogram of cloud top temperatures for Iraq, 390 days at 0000 UTC.




Note the assigned temperatures behave quite well and appear to fall into a Chi-squared
distribution.  This is logical since most continuous, bounded meteorological parameters (i.e.,
humidity and winds are bounded by zero and cloud heights by the earth's surface) exhibit this
type of statistical behavior.  The "noise" in the trace was expected.  It represents local effects
caused by terrain, water bodies, etc.  Figure 4 shows the normalized distribution of computed
cloud top heights (in meters) for the same Iraq sector data set.  You would expect these
distributions (temperature and height) to be very similar since height is directly derived from
temperature.  The only departures from the Chi-squared distribution are the spikes at 100, 2800,
3250, 6000, 7700, 9800, and 16400 meters, due to the dominance of mandatory levels in the
rawinsonde data.

Figure 5 is the distribution of cloud base assignments.  Again, the spikes show how the
rawinsonde's mandatory levels affect the cloud base values.  Cloud base height clustering is seen
in ceiling statistics, but there is clearly more error associated with base height assignment than
any other part of this analysis.

Figure 4.  Normalized distribution of computed cloud top heights (meters) for Iraq, 390 days at 0000 UTC.


Figure 5.  Distribution of cloud base assignments (height in meters ASL) for Iraq, 390 days at 0000 UTC.




4.  PCFLOS  TOOL  DESCRIPTION 

Eis (1994) describes the input variables and output format in detail.  Basically, the user is
allowed to specify the location and elevation of both the target and sensor.  Other specifications
allow the user to integrate over azimuth, time, or date.  Range is typically the independent
variable.  The algorithm interrogates each path length for a clear/obscured path and averages
over the user-defined parameter ranges to produce a PCFLOS value.  In addition to PCFLOS,
the algorithm also computes the mean cloud cover in the circle whose radius is defined by the
maximum range set by the user.
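The interrogation loop can be pictured with the toy sketch below: sample the sensor-to-target path through a 3-D cloud field at many azimuths and count the clear paths.  The cloud_at() lookup, grid geometry, and step counts are stand-ins invented for illustration; the delivered C software works against the actual 15-km database.

    import math

    def pcflos(cloud_at, sensor, target_alt_km, max_range_km, n_azimuths=36, n_steps=50):
        """sensor: (x_km, y_km, alt_km).  Returns the fraction of azimuths with a clear path."""
        clear = 0
        for i in range(n_azimuths):
            az = 2.0 * math.pi * i / n_azimuths
            x0, y0, z0 = sensor
            x1 = x0 + max_range_km * math.sin(az)          # target position at this azimuth
            y1 = y0 + max_range_km * math.cos(az)
            obscured = any(
                cloud_at(x0 + (x1 - x0) * s, y0 + (y1 - y0) * s, z0 + (target_alt_km - z0) * s)
                for s in (k / n_steps for k in range(n_steps + 1)))
            if not obscured:
                clear += 1
        return clear / n_azimuths

    # Toy cloud field: a deck between 2 and 3 km everywhere east of x = 10 km
    print(pcflos(lambda x, y, z: x > 10.0 and 2.0 <= z <= 3.0, (0.0, 0.0, 18.0), 0.0, 100.0))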

5.  ANALYSIS 

Several  analyses  were  performed  using  the  new  PCFLOS  tool.  These  included:  High  resolution 
composite  climatologies  (not  discussed  in  this  paper,  see  Reinke  (1993)),  temporal  correlation 
analysis,  cloud  structure  effects,  spatial  analysis,  and  azimuthal  variance. 


5.1  Temporal  Correlation  Analysis 

A  3-hourly  database  (0000,  0300,  0600,  0900,  1200,  1500,  1800,  and  2100  UTC)  was  created 
using  the  November  1990  imagery  for  both  Iraq  and  Korea.  In  order  to  show  the  temporal 
behavior  we  ran  each  hour  available  for  a  selected  day  in  November  1990  for  four  points  in  the 
Iraqi  and  Korean  areas.  In  all  cases  we  used  the  following  parameters:  Sensor  height  =  18  km, 
azimuth  values  used  the  full  360  degrees,  and  target  altitudes  were  0,  2.5,  5,  7.5,  10,  12.5,  and 
15  km  for  the  low  resolution  cases  and  0,  .5,  1,  1.5,  2,  2.5,  and  3  km  for  the  high’ resolution 
cases.  The  purpose  of  this  analysis  was  to  explore  what  changes  occur  in  PCFLOS  over  short  (3 
hour)  time  spans.  The  following  example  was  for  western  Iraq  where  many  of  the  SCUD  TELs 

were  located  during  Desert  Storm.  As  you  will  see,  there  were  major  differences  at  the  3-hourlv 
intervals. 


PCFLOS went from a cloud-free, unobstructed view at 100 km to 20 percent PCFLOS at 25 km
just 3 hours later (midnight to 0300 UTC).  The rapid variation in PCFLOS and mean cloud
cover indicates that a smoothed temporal climatology will contain large errors.  Most examples
studied with the limited data set confirm that PCFLOS values vary rapidly and, in fact, vary
faster than the temporal variance in mean cloud cover.

5.2  Cloud  Structures  Interrogated  Over  Korea 

We interrogated a flight path (sensor at 18 km) over Korea along the 38th parallel.  The mean
cloud cover for the January days (sensor at 18 km, target at the surface) is depicted in Figure 6.
Again, the rapid variation between 0 and 100 values indicates frontal-produced clouds.  The
spatial variance of the PCFLOS is plotted in Figure 7.  Note that for a full 360 degree azimuthal
integration and with a target-sensor separation of 100 km, the variance of the daily PCFLOS is
less than that of the mean cloud cover.  This is in contrast to the temporal variance study (Figure 8).




Lastly, Figure 9 depicts the mean cloud cover probability versus PCFLOS scatter diagram.  It
shows a markedly larger scatter on the clear cloud end of the scatter diagram than does the Iraq
case.  This is evidence of cloud structure impacting PCFLOS more than the variation in mean
cloud cover.


Figure 6.  Mean Cloud Cover over Korea, January 1991.

Figure 8.  Korea Temporal Variance (Daily).




Figure  9.  Korea  PCFLOS  versus  Mean  Cloud  Freeness. 

5.3  Azimuthally  Dependent  Analysis 

Up  to  now,  all  of  the  analyses  have  used  an  azimuthally  independent  method  where  all  PCFLOS 
values  are  derived  using  target  and  sensor  range  and  heights.  The  delivered  database  can  also  be 
set  for  an  analysis  of  azimuths  integrated  over  sectors  of  less  than  360°. 

In Figure 10, the 0 to 25 km range shows several periods of totally clear cloud cover where all
four quadrants show 100 percent PCFLOS conditions (3-8, 10-12, 15, 16, 19, 20, 22, 23, and
28) for January.  On the 2, 3, 9, 18, 24, and 25 of January, the conditions were overcast in all
directions, causing CFLOS values to be 0.  All the other dates show wide variances in CFLOS.
For instance, if Baghdad were to be attacked on January 1, and if the pilot were to approach
within 25 km from the southeast, the CFLOS shows a totally obscured condition.

Figure 11 shows similar behavior.  A clearing trend is evident from January 24-28.  Note that
on January 1, a northwest approach would still have a CFLOS value of over 90 percent.

6.  SUMMARY 

The new PCFLOS tool gives an unprecedented window into the behavior of clouds, PCFLOS,
and cloud structure as they relate to military operations.  In the age of stealth aircraft and smart
munitions, knowledge of the detailed behavior of clouds at a specific target (azimuthally
variable) could be used to mask the delivery system's IR signature.  A mission planner could
select an attack profile using day, time, and direction information derived from an advanced
version of this tool.  The database and extraction software are on 8-mm media and are written in
UNIX-based ANSI standard C.

The  utility  of  this  tool  is  even  higher  in  regions  of  the  world  with  strong  terrain-anchored  clouds 
such  as  coast  lines  (navy  and  amphibious  operations)  and  mountains.  The  fact  that  there  was 
such  a  strong  azimuthal  dependence  in  western  Iraq  indicates  the  utility  extends  to 
non-mountainous  areas  also. 




ACKNOWLEDGMENTS 


This paper summarizes the work performed under University of California, Lawrence Livermore
National Laboratory Subcontract No. B218795.  The author wishes to thank Dr. Thomas H.
Vonder Haar and Mr. Donald L. Reinke of STC-METSAT for their scientific expertise and
assistance in analyzing the data.  The valuable support and cooperation provided by Mr. Dennis
Hakala, Technical Monitor, Lawrence Livermore National Laboratory, during the period of this
task are gratefully acknowledged.  The author would also like to thank John M. Forsythe for
developing the database and Mr. Mark Ringerud for developing the PCFLOS algorithm and
extraction code.  Readers who would like more information about the database and extraction
code should contact the author, with a copy of the request to Mr. Dennis Hakala, University of
California, Lawrence Livermore National Laboratory, P.O. Box 808, Livermore, CA 94551.

REFERENCES 

Eis, K.E., T.H. Vonder Haar, J.M. Forsythe and D.L. Reinke, 1993: "Cloud Free Line of Sight
Model Differences," Proceedings of the 1993 Battlefield Atmospherics Conference, U.S. Army
Research Lab, White Sands Missile Range, New Mexico 88002-5501, pp. 863-870.

Eis, K.E., 1994: Raptor II - Climatology Studies in Relation to Cloud Type Occurrences, Final
Report to the University of California, Lawrence Livermore National Laboratory, P.O. Box 808,
Livermore, CA 94551, March 1994.

Reinke, D.L., T.H. Vonder Haar, K.E. Eis, J.M. Forsythe, and D.N. Allen, 1993:
"Climatological and Historical ANalysis of Clouds for Environmental Simulations
(CHANCES)," Proceedings of the 1993 Battlefield Atmospherics Conference, U.S. Army
Research Lab, White Sands Missile Range, New Mexico 88002-5501, pp. 863-870.




THE INFLUENCE OF SCATTERING VOLUME ON ACOUSTIC SCATTERING

BY  ATMOSPHERIC  TURBULENCE 


Harry  J.  Auvermann 

Army  Research  Laboratory,  Battlefield  Environment  Directorate 
White  Sands  Missile  Range,  New  Mexico 

George H. Goedecke and Michael DeAntonio
Dept. of Physics, New Mexico State University
Las  Cruces,  New  Mexico 


ABSTRACT 

From a complete set of fluid equations, a complete set of coupled linear
differential equations for the acoustic pressure, temperature, mass density,
and velocity in the presence of stationary turbulence may be derived.  To
first order in the turbulent temperature variation and flow velocity, these
coupled acoustic equations yield an acoustic wave equation given in the
literature.  Further reduction of this wave equation results in a second
equation given in the literature which is good for turbulent length scales a
much greater than the acoustic wavelength λ.  The length scale a_s of the
scattering volume is found to be just as important as a and λ in predicting
the general behavior of acoustic scattering by turbulence.  In particular, if a
<< a_s, then the first Born temperature and velocity scattering amplitudes
for any ratio a/λ are the usual ones predicted by the first equation, and both
the forward and backward velocity scattering are essentially zero for
solenoidal turbulent flow velocity.  The latter is not true if a > a_s.  If a >
a_s >> λ, then the first Born scattering amplitudes are those predicted by
the second equation.  If λ ≥ a > a_s, other forms result for the scattering
amplitudes.  Implications of these findings for predicting results of
acoustical scattering experiments, where the scattering volume is often ill
defined, are discussed.

1.  INTRODUCTION 

This  is  the  third  paper  given  at  Battlefield  Atmospherics  Conferences  that  deals  with 
acoustic  scattering  by  atmospheric  turbulence.  In  the  first  paper  (Auvermann,  Goedecke, 
DeAntonio  1992),  experimental  evidence  was  presented  showing  that  atmospheric 
turbulence  near  the  ground  was  neither  homogeneous  nor  isotropic,  two  conditions 




required  for  the  usual  statistical  model  of  turbulence  to  be  valid.  An  alternate  model 
consisting  of  a  collection  of  isolated  vortices  of  different  sizes  was  proposed.  This  model 
was  termed  a  structural  model  previously.  The  more  descriptive  name  of  Turbule 
Ensemble  Model  (TEM)  will  now  be  adopted.  A  turbule  is  an  isolated  inhomogeneity  of 
either  fluid  temperature  or  fluid  velocity.  The  first  paper  (Auvermann,  Goedecke, 
DeAntonio  1992)  presented  a  general  formulation  of  the  method  by  which  acoustic  signal 
levels  in  shadow  zones  may  be  estimated  using  TEM.  The  second  paper  (Auvermann, 
Goedecke,  DeAntonio  1993)  showed  how  TEM  may  be  used  to  explain  theoretically  the 
extreme  variability  of  acoustic  shadow  zone  signals  that  has  been  documented 
experimentally.  In  TEM,  the  scattering  pattern  of  the  various  individual  turbules  is 
assumed  known.  The  analysis  proceeds  by  assuming  a  distribution  function  for  the  sizes, 
and  then  locating  the  turbules  of  each  size  randomly  within  the  atmospheric  region  of 
interest.  The  shadow  zone  signal  is  then  the  summation  of  the  contributions  from  each 
turbule.  To  carry  out  the  summation  in  the  first  paper  (Auvermann,  Goedecke, 

DeAntonio  1992),  a  uniform  concentration  of  turbules  (number  of  turbules  per  unit 
volume)  was  assumed  accompanied  by  a  reasonable  estimation  of  the  volume  from  which 
the  detector  could  receive  signals.  The  summation  in  the  second  paper  (Auvermann, 
Goedecke,  DeAntonio  1993)  was  carried  out  directly  because  a  relatively  small  number  of 
turbules  was  postulated,  the  position  and  size  of  each  being  chosen  randomly  within 
appropriate  limits. 


In  this  paper,  the  problem  of  determining  the  volume  from  which  significant  scattering  can 
occur  is  addressed  in  a  more  rigorous  manner.  Acoustical  signals  of  interest  to  the  Army 
are  in  general  low  frequency.  The  import  of  this  is  that  the  wavelengths  of  interest  are 
large  compared  to  the  dimensions  of  either  source  or  detector.  Therefore,  both  source 
and  detector  are  nearly  omni-directional  and  thus  cannot  serve  to  define  a  scattering 
volume.  This  is  a  complication  not  usually  experienced  in  optical  scattering  scenarios. 

For  example,  optical  scattering  by  atmospheric  aerosol  is  usually  modeled  with  a  narrow 
beam  from  a  laser  source  and  a  narrow  field-of-view  detector  system,  the  overlap  or  union 
of  the  two  defining  a  small  scattering  volume.  This  geometric  construction  is  the  usual 
way  scattering  volume  is  defined.  In  this  paper  (except  in  section  4),  scattering  volume 
will  denote  the  volume  from  which  significant  scattering  can  occur.  An  additional 
simplification  that  can  be  taken  advantage  of  in  aerosol  scattering  is  the  fact  that  the 
largest  aerosol  particle  dimension  is  small  compared  to  the  dimension  of  the  scattering 
volume.  Even  though  acoustic  wavelengths  are  large,  turbule  sizes  can  be  larger,  and 
may  approach  the  scattering  volume  dimension.  In  section  2,  the  scattering  pattern  of 
individual  turbules  is  used  to  define  a  scattering  volume  as  a  function  of  turbule  size. 

Then, in section 3, scattering cross-section modified by number concentration is used to
determine the relative contributions from the various size classes.  Section 4 contains
general results that may be applied to determine if surface integrals from scattering theory
may be ignored, as is usually done in optics.  Conclusions that may be drawn from this
work are discussed in section 5.

The symbols for some of the variables, parameters and mathematical operations are
summarized in the following list.  Others are defined in the text.  Bold quantities in the




list  (for  example  r)  are  three-vectors. 


NOTATION 

a            turbule characteristic size, m
c∞           asymptotic acoustic wave speed = 344 m s⁻¹
∂ᵢ           ∂/∂xᵢ ;  x₁ = x;  x₂ = y;  x₃ = z
exp(−jωt)    time dependence of acoustic wave
f            acoustic wave frequency = 500 Hz
k            propagation vector, m⁻¹
k            magnitude of k = ω/c∞
λ            wavelength, m
r            position vector, m
r            magnitude of r
r₁           integration variable position vector
t            time variable, s
τ(r)         turbulence field temperature difference ratio
v₀(r)        turbulent flow velocity in the absence of the acoustic wave
v₀ᵢ          i-th component of v₀
ζ            scattering volume limit angle for velocity turbule ensemble
Σ            velocity turbule ensemble relative scattering cross-section
χ            size parameter = ka
ψ            scattering angle between k and r
ω            wave angular frequency = 2πf

2.  SCATTERING  PATTERN  INFLUENCE  ON  SCATTERING  VOLUME 

The theory of acoustical scattering from turbules is too lengthy to be covered in this
paper.  It is covered elsewhere (Goedecke, DeAntonio, Auvermann 1994a).  The
following is a brief synopsis of how scattering patterns are determined theoretically.
Beginning with a complete set of fluid equations in density, pressure, temperature, and
velocity, each variable is assumed to be made up of a time independent part representing
the inhomogeneous medium plus a small time dependent part representing the acoustic
wave.  Expressions representing the above are substituted in the fluid equations and terms
second order or higher in the ratios (v₀/c∞, τ) are discarded.  Assuming harmonic time
dependence, the resulting wave equation is


a 

Coo 

ai 

exp(-ja)t) 

f 

k 

k 

X 

r 

r 

ti 

t 

T(r) 

Vo(r) 

Voi 

r 

I 

X 

0) 


+  k2)T7(r)  =  d.jd^T](j)  +  2jo)-'ai(Vo,0,0i)Tj(f)  =  -47rS(T)r7(r) 


(1) 


where  the  summation  convention  is  used  for  repeated  subscripts  and 


S(T)  =  StOO  +  S^(f)  =  -(47r)''ai(T0i)  -  (j/27r(o)ai(Vo,a,0i). 


(2) 


77 


[S_T(r), S_v(r)] define the scattering operators for (temperature, velocity) inhomogeneities,
respectively.  η(r) is the acoustic wave field quantity, the ratio of the acoustic pressure to
the total fluid pressure.

The Green's function solution for an incident plane wave is written down, and a Born
approximation is made in the scattering integral.  The Born approximation involves
replacing the field in the scattering integral by the incident field.  The result is

$$f(\mathbf{k}_f,\mathbf{k}) = \int d^3r_1\,\exp(-j\mathbf{k}_f\cdot\mathbf{r}_1)\,S(\mathbf{r}_1)\,\exp(j\mathbf{k}\cdot\mathbf{r}_1) \qquad (3)$$

where f is called the scattering amplitude.  Equation (3) has been used (Goedecke, DeAntonio,
Auvermann 1994a) to derive the scattering cross-section for temperature and velocity
turbules.  The scattering cross-section is equal to the scattering amplitude multiplied by its
complex conjugate, and has units of length squared.  Only velocity turbules will be
considered further in this paper.

It has been shown (Goedecke, DeAntonio, Auvermann 1994b) that an isotropic ensemble
of turbules having a given scale length a but arbitrary velocity morphology except for
∇·v₀ = 0 can be replaced by an ensemble with velocity given by v₀ = Ω × r f(r),
where f(r) is a function of the distance r from turbule "center", and Ω is a randomly
oriented angular velocity.  A Gaussian form of f(r) has proved convenient (Goedecke
1992), so that

$$\mathbf{v}_0(\mathbf{r}) = \mathbf{\Omega}\times\mathbf{r}\,\exp(-r^2/a^2) \qquad (4)$$

The  scattering  cross-section  obtained  from  the  theory  outlined  above  was  given  in  the 
report  (Goedecke  1992).  To  simplify  the  presentation  in  this  paper,  the  cross-section 
averaged  over  orientation  angles  will  be  used.  This  cross-section  is  (Goedecke  1992) 


$$\bar{\sigma}_v(\chi,\psi) = \frac{\pi a^2}{3}\left(\frac{\Omega\,a\,\chi^3}{4\,c_\infty}\right)^{2}[\sin\psi\cos\psi]^2\exp\{-\chi^2[1-\cos\psi]\} \qquad (5)$$


Two normalized expressions are defined in eq. (6) below for the purpose of illustrating
the behavior of eq. (5):

$$\sigma_{vN}(\chi,\psi) = 4\,[\sin\psi\cos\psi]^2\exp\{-\chi^2[1-\cos\psi]\}, \qquad
\sigma'_{vN}(\chi,\psi) = \chi^{6}\,[\sin\psi\cos\psi]^2\exp\{-\chi^2[1-\cos\psi]\} \qquad (6)$$

The first contains all of the angular dependence and the second includes the multiplicative
size parameter dependence.  These two functions are plotted in




the next figures to show the nature of the velocity cross section in the (χ, ψ) domain.
Figure 1 shows the angle dependent part, the axis label "Sigma" being σ_vN.  Hereafter, the
axis label "Chi" is χ and the axis label "Psi" is ψ.  Figure 2 shows the influence of the
size parameter on the cross-section, the axis label "Sigma" being σ′_vN.


Figure 1.  Normalized velocity turbule cross-section (angle part only).

Figure 2.  Velocity turbule scattering cross-section.


The ranges of the independent variables χ and ψ were chosen in figure 2 to illustrate
interesting features of the function σ′_vN.  When χ is 13.4 and ψ is 0.1, σ′_vN peaks at
very nearly unity.  For ψ not near zero, the exponential drives σ′_vN to zero.  As χ
increases beyond the values in the figure, the peak of σ′_vN continues to increase.
However, the width of the peak decreases and the value of ψ at the peak decreases.

These  curves  for  the  cross-section  are  not  the  entire  story  because  other  factors  influence 
the  scattered  signal  at  a  detector.  Consideration  of  these  other  factors  is  undertaken  in  the 
next  section. 

3.  NUMBER  CONCENTRATION  INFLUENCE  ON  SCATTERING  VOLUME 

In  the  previous  section,  the  dependence  of  the  cross-section  upon  size  parameter  was 
shown  to  emphasize  the  importance  of  large  turbules  which  have  narrow  scattering 
patterns.  To  do  an  incoherent  summation  for  the  signal  scattered  from  an  ensemble  of 
turbules,  the  first  expedient  to  employ  is  to  simply  add  the  effects  of  the  many  by 
multiplying  by  a  number  concentration.  Since  the  scattering  volume  as  a  function  of 
turbule  size  is  the  desired  quantity  in  this  paper,  the  number  concentration  as  a  function  of 
size  is  necessary.  An  estimate  of  this  has  been  made  (Goedecke,  DeAntonio,  Auvermann 
1994c).  Furthermore,  an  estimate  of  the  characteristic  velocities  of  the  turbules  is 
necessary.  The  following  power  law  scaling  functions  are  first  assumed: 




$$\frac{N_\alpha}{N_1} = \left(\frac{a_1}{a_\alpha}\right)^{p}, \qquad \frac{v_\alpha}{v_1} = \left(\frac{a_\alpha}{a_1}\right)^{\nu} \qquad (7)$$

The meaning of eq. (7) is as follows.  The largest turbules are identified by the subscript
1.  The largest turbules have the concentration N₁.  Their characteristic size is a₁.  The
largest velocity turbules have characteristic velocities v₁.  Other sized turbules are
identified by the index α.  The exponents (p, ν) are chosen so that a homogeneous
isotropic ensemble of turbules matches the Kolmogorov spectrum.  The results are
(Goedecke, DeAntonio, Auvermann 1994c) p = 3, ν = 1/3.  An interesting feature of
this result is the concentration exponent being 3.  This means that the packing fraction is
the same for all size classes.  The velocity scaling exponent is 1/3 as may be derived from
a simple energy cascade calculation.
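The scalings of eq. (7) with p = 3 and ν = 1/3 are easy to tabulate; the sketch below uses illustrative values for N₁, a₁, and v₁ (not taken from the paper) and shows that the packing quantity N_α a_α³ is the same for every size class.

    N1, A1, V1 = 1.0e-3, 10.0, 3.44     # illustrative largest-class concentration, size, speed

    def turbule_class(a_alpha, p=3.0, nu=1.0 / 3.0):
        n_alpha = N1 * (A1 / a_alpha) ** p        # concentration scaling, eq. (7)
        v_alpha = V1 * (a_alpha / A1) ** nu       # velocity scaling, eq. (7)
        packing = n_alpha * a_alpha ** 3          # proportional to the packing fraction
        return n_alpha, v_alpha, packing

    for a in (10.0, 5.0, 1.0, 0.682):
        print(a, turbule_class(a))                # packing value is identical for each size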

It is now possible to write the size parameter dependence of the cross-section for each
size of turbule.  Multiplying eq. (5) by N_α and using eq. (7) yields, for the cross-section
per unit volume for turbule size a_α and size parameter χ_α,


$$\sigma_v(k,\psi) = \frac{\pi a_1^2 N_1 v_1^2}{48\,c_\infty^2}\,(k a_1)^{1/3}\,\chi_\alpha^{17/3}\,[\sin\psi\cos\psi]^2\exp\{-\chi_\alpha^2[1-\cos\psi]\} \qquad (8)$$


The size parameter subscript will be dropped hereafter.  The largest turbule size has been
assumed to be 10 m, which is appropriate for a velocity turbule centered ten meters above
the ground.  The size parameter for this size turbule is 91.325.  A velocity turbule is
produced by wind shear with the wind velocity zero at ground level.  Further assuming
that the velocity ratio (v₁/c∞) is 0.01, this size turbule would have a characteristic velocity
of 3.44 m s⁻¹.  This velocity would be produced by a wind gradient with this velocity at
turbule center at 10 m height and twice this velocity at turbule upper edge at 20 m height.
The scenario including the ground is more complex than is possible to treat in this paper.
Rather, the assumption will be made that both source and detector are in free space and
that the atmospheric turbulence is homogeneous and isotropic with characteristics of that
at 10 m height.  Considering the line joining source and detector a reference line from
which to measure scattering angles and that the source is a long distance away, the
scattering angle is the offset angle from the detector to a differential scattering volume.

The total signal received by the detector will be a volume integral of the differential
scattering volume times the expression in eq. (8).  A further simplification will be taken in
that no further specification of scenario parameters will be given.  Thus, no range
dependent effects will be considered.  The volume integration will be confined to a shell
around the detector of radius R_s and thickness ΔR_s.  Figure 3 depicts this geometry on a
plane section through the scattering volume under consideration.  The scattering angle is ψ
and the view angle is ψ′.  These two angles are equal for this choice of source wave
incident direction.  The differential scattering volume will then be the ring around the axis
whose cross-section is shown in the figure and is [2π R_s² ΔR_s sin(ψ′) dψ′].  Equation (9)
below, derived from eq. (8), gives the specific mathematical form for the velocity turbule
ensemble scattering cross-section.


$$\Sigma_v(\chi,\zeta) = \frac{\pi^2 a_1^2 v_1^2 N_1 (k a_1)^{1/3} R_s^2\,\Delta R_s}{24\,c_\infty^2}\;\chi^{17/3}\int_0^{\zeta} d\psi'\,\sin^3\psi'\,\cos^2\psi'\,\exp\{-\chi^2[1-\cos\psi']\} \qquad (9)$$

Figure 3.  Scattering volume geometry.

The expression involving the size parameter χ and the scattering volume limit angle ζ in
eq. (9) will be plotted in figure 4 for the case of ζ = π.  This is the case in which the
cross-section for an entire spherical shell has been determined and which contains the
maximum of the expression.  The maximum occurs at χ = 91.325 and is 3701.82.  That
which has been plotted in figure 4 is the ratio of the scattering cross-section to this value,
so the maximum in figure 4 is unity.  The curves for which the relative scattering
cross-section is 0.01 and 0.95 are plotted in figure 5.  A few of the calculated points of
this ratio, Σ_vN(χ, ζ), are listed below:

    Σ_vN(88.5592, π) = 0.95        Σ_vN(91.3250, 0.033716) = 0.95
    Σ_vN(60.2799, π) = 0.50        Σ_vN(91.3250, 0.020059) = 0.50
    Σ_vN(6.2246, π)  = 0.01        Σ_vN(91.3250, 0.005969) = 0.01
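Assuming the form of eq. (9) shown above, the χ- and ζ-dependent factor can be evaluated numerically with a few lines of Python; it reproduces the quoted maximum near 3702 at χ = 91.325, ζ = π, and the 0.50 and 0.01 relative values listed above.

    from math import sin, cos, exp, pi

    def sigma_vn(chi, zeta, n=20000):
        """chi**(17/3) times the angular integral of eq. (9), midpoint rule with n steps."""
        dpsi = zeta / n
        total = 0.0
        for i in range(n):
            psi = (i + 0.5) * dpsi
            total += sin(psi) ** 3 * cos(psi) ** 2 * exp(-chi * chi * (1.0 - cos(psi)))
        return chi ** (17.0 / 3.0) * total * dpsi

    peak = sigma_vn(91.325, pi)
    print(peak)                              # ~3702, cf. the 3701.82 quoted above
    print(sigma_vn(60.2799, pi) / peak)      # ~0.50
    print(sigma_vn(6.2246, pi) / peak)       # ~0.01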


These curves are interpreted as an aid to forming a turbule distribution as follows.  As an
example limit angle, take 0.05 radian on figure 5 at the largest size parameter.  This is
larger than the size for the 0.95 contour.  Extend a cone of this central angle out to 200 m
and it will contain an entire 10 m radius turbule.  Even if turbules were placed in a
hexagonal close-pack configuration, the six turbules in the next circle can at most
contribute 0.05 to the total scattering.  Thus, our distribution need contain only one
turbule of the largest size, or at the most seven.  Assume that the shell thickness ΔR_s is
20.0 m, large enough to accommodate a collection of the largest turbules.  At the other
end of the distribution, consider those whose size parameter is 6.2246.  The radius of
these would be 0.682 m.  Some 5,047,000 of these could be packed into the entire
spherical shell at a range of 200 m.  However, this entire ensemble would scatter no more
than 0.01 times that scattered by the single turbule of the largest size.  Consider now
turbules with size parameter 88.5592 or radius 9.697 m.  The complete shell would need
to be filled with turbules of this size to scatter 0.95 times the scatter of the largest turbule.
The number of these would be in the neighborhood of 1,755.  For turbules of size
parameter 60.2799 or radius 6.600 m, the number in the shell would be 5,566.


Figure 4.  Relative velocity scattering cross-section.

Figure 5.  Scattering volume limit angles for velocity.


These would contribute only 0.50 times the scatter of the largest turbule.  Although integration
with respect to size parameter has not been attempted, and therefore it cannot be said
definitely, it does seem likely that the scatter from turbules of less than 0.682 m radius will
total a great deal less than 0.01 times the scatter from those of larger radius.  This means
that an upper bound has been established above on the number of turbules that need to be
placed in a representative distribution for the scenario considered.  It is clear that the
larger turbules dominate the scattering to be considered.
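The turbule counts quoted in the two preceding paragraphs can be roughly reproduced with a simple volume argument.  Assigning each turbule an effective volume of 2πa³ is an assumption made here to match the printed numbers; the paper does not state how the counts were obtained.

    from math import pi

    SHELL_RADIUS_M, SHELL_THICKNESS_M = 200.0, 20.0          # geometry from the text

    def turbules_in_shell(a_m):
        shell_volume = 4.0 * pi * SHELL_RADIUS_M ** 2 * SHELL_THICKNESS_M
        return shell_volume / (2.0 * pi * a_m ** 3)           # assumed effective volume 2*pi*a^3

    for radius in (9.697, 6.600, 0.682):                      # radii quoted in the text
        print(radius, round(turbules_in_shell(radius)))       # ~1755, ~5565, ~5.04e6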


4.  APPLICABILITY  OF  STANDARD  SCATTERING  THEORY 


Standard scattering theory for scattering from a single localized scatterer involves a plane
wave incident on the scatterer and a Green's function solution of the wave equation.  The
quantities obtained are the scattering amplitude and the scattering cross-section in the far
field of the scatterer.  In the analysis of the previous section, standard theory results were
used for the scattering from individual turbules, in the Born approximation.  The
summation of cross-sections for the collection of turbules in each size range neglects the
coherence properties of the scattered signals, which is appropriate for the ensemble average
scattering by a collection of scatterers having random locations, except in the near forward
direction (Goedecke, DeAntonio, Auvermann 1994c).  Standard scattering theory for an
ensemble of scatterers is usually applied when the detector is in the far field of the
scattering volume occupied by the ensemble.  For the scenario used in this paper, the
detector is assumed to be in the "near field" of the scattering volume, but the far field of
each turbule.  This will not actually be true for the larger turbules in a more realistic
scenario.  Also, in many scenarios, the incident wave is not plane.  The importance of




deviations  from  standard  far  field  scattering  geometry  has  not  been  determined. 

Some general conclusions may be drawn from the standard scattering theory, general in
the sense that they do not depend on the form of the temperature and velocity distributions
considered (Goedecke, DeAntonio, Auvermann 1994a).  Using the formal result of eq.
(3), it is recognized that the volume integral need only extend over a finite scattering
volume.  In this section, the expression scattering volume is thought of as the intersection
of the illuminated region and the detector field-of-regard, perhaps made of limited extent
by the use of a parabolic reflector.  Although this volume may have a complicated shape,
a single scale length a_s is ascribed to it for convenience.  The effect of turbulence outside
the scattering volume rapidly goes to zero.  Integration of eq. (3) by parts yields two
terms, the first being essentially the Fourier transform of the turbulence distribution
(either temperature or velocity) and the other a surface integral over the scattering volume
surface.  In the commonly treated case where a << a_s, there are many turbules in the
scattering volume and the surface integral will be negligibly small.  The result is the
standard theory that has been used in the first parts of this paper.  If, however, the turbule
scale a is greater than a_s, the surface integral is not negligible.  The result is the same as
our result above except the cosine factor, which makes scattering at right angles to the
incident wave zero, is not present.  In the third case, where the wavelength is greater than
the turbule size which is in turn greater than a_s, the surface integral is again not
negligible.  The scattering pattern can deviate appreciably from the standard results.

5.  CONCLUSIONS 

Battlefield  scenarios  in  which  acoustic  propagation  can  have  significant  importance  will 
often  be  such  that  standard  scattering  theory  will  not  fully  apply.  This  may  occur  when 
the  wavelength,  the  length  scale  of  the  turbulence,  and  the  length  scales  of  some  of  the 
turbulent  eddies,  and  the  length  scale  of  the  scattering  volume  are  of  the  same  order,  or 
when  the  detector  is  not  in  the  far  field  of  the  scattering  volume  and/or  the  individual 
turbules.  Thus,  definition  of  the  scattering  volume  is  an  important  element  in  calculating 
turbulence  scattering. 

Using specific examples of turbule morphologies and a particular simplified scenario, it
was shown that large scale turbulence dominates scattering and that the scattering pattern
of these large scale entities tends to limit the effective scattering volume.  The scenario
involved a 4π detector assumed to be in the far field of each turbule in an ensemble of
randomly located turbules of different sizes.

Future work will need to deal with more complex scenarios.  However, considerable
information may be derived with the use of standard scattering theory, such information
giving an initial approximation to the true result.  Also to be included are the effects of
ground reflections, shadow zones, and the 1/r² effects of source and scattered fields.  All
of these effects may further limit effective scattering volumes and/or modify the current
results.




Concerning the enormous fluctuations of scattered signals measured in shadow zones
(Auvermann, Goedecke, DeAntonio 1993), these must occur because of relative motion
among a moderate number of large entities.  This situation requires consideration of
scattering amplitudes rather than cross-sections.  Even if the signal from all turbules at
6.22 size parameter were in phase and the signal from all turbules at 91.325 size
parameter were in phase separately, the former could produce only a plus or minus 20%
variation in the overall summed signal.  The experiment showed the variation was 100%,
indicating that the relative amplitudes of the different significant contributors are nearly
equal.


REFERENCES 

Auvermann, H. J., and G. H. Goedecke, 1992, "Acoustical Scattering from Atmospheric
Turbulence," Proceedings of the 1992 Battlefield Atmospherics Conference, 1 - 3 Dec.,
1992, Ft. Bliss, Texas.

Auvermann, H. J., G. H. Goedecke and M. D. DeAntonio, 1993, "Fluctuations of
Acoustic Signals Scattered by an Ensemble of Turbules," Proceedings of the 1993
Battlefield Atmospherics Conference, 30 Nov. - 2 Dec., 1993, Las Cruces, New Mexico.

Goedecke, G. H., 1992, "Scattering of Acoustical Waves by a Spinning Atmospheric
Turbule," Contractor Report CR-92-0001-2, U.S. Army Atmospheric Sciences
Laboratory, White Sands Missile Range, NM 88002.

Goedecke, G. H., M. DeAntonio and H. J. Auvermann, 1994a, "First-order acoustic
wave equations and scattering by atmospheric turbulence," (submitted for publication in
the Journal of the Acoustical Society of America).

Goedecke, G. H., M. DeAntonio and H. J. Auvermann, 1994b, "Acoustic scattering by
atmospheric turbulence I: Individual and randomly oriented turbules," (submitted for
publication in the Journal of the Acoustical Society of America).

Goedecke, G. H., M. DeAntonio and H. J. Auvermann, 1994c, "Acoustic scattering by
atmospheric turbulence II: Homogeneous isotropic ensembles," (submitted for publication
in the Journal of the Acoustical Society of America).




RELATIONSHIP  BETWEEN  AEROSOL  CHARACTERISTICS 
AND  METEOROLOGY  OF  THE  WESTERN  MOJAVE 

L.A.  Mathews  and  J.  Finlinson 
Naval  Air  Warfare  Center 
China  Lake,  CA  93555,  USA 

P.L.  Walker 

Naval  Postgraduate  School 
Monterey,  CA  93943,  USA 


ABSTRACT 

The Visibility Impact Study was an intense, comprehensive project intended to measure aerosol size,
chemical composition and optical properties.  Sites at Tehachapi Pass, Antelope Valley and China
Lake were instrumented with nephelometers, aerosol filter samplers, meteorological instruments and,
in the case of the Antelope Valley and Tehachapi Pass, with aerosol sizing instruments, operated
continuously from mid-July through mid-September 1990.  Most data collected were for ambient
conditions.  Also, data were collected for intensive smog conditions in the Tehachapi Pass and for
windy conditions on the high desert.  Four six-hour filter samples were collected daily in the
Tehachapi Pass.  The purpose of this report is to present some results of analysis of the aerosol data
and to relate the observed aerosol characteristics with meteorological conditions.  Usually, polluted
air is transported into the western Mojave from Los Angeles through the Soledad and Cajon Passes
and from the San Joaquin Valley through the Tehachapi Pass, primarily during thermal lows in the
San Joaquin Valley and high desert which occur most frequently in the summer.  Polluted air at
China Lake originates in the San Joaquin Valley.  Fine particle (0-2.5 μm) concentrations by mass are
35-40% organic carbon and 30% sulfates, nitrates and elemental carbon.  The remainder is dust.
The organic carbon component of the Tehachapi aerosols increased dramatically during some
intensive periods.  Also, large amounts of sulfur were observed for some of these periods.  Wind and
dust conditions occur during Rocky Mountain highs causing flows from the northeast.  Dust mass
and composition dependence on wind speed were determined at each of the sites from filter data.
The dust mode aerosols are made of clays and those clays have been identified.  Their composition
is wind speed independent for speeds up to 10 m/s, i.e. there is no silicate mode.  Dust mass is wind
speed independent up to 6 m/s.  Beyond that, dust mass is exponentially related to wind speed by m
= 0.55 exp(0.59u).  Dust mass computed from size distributions also exhibits the 6 m/s threshold.

1.  INTRODUCTION 

Objectives  of  the  present  work  are  to  characterize  aerosols  of  the  Mojave  Desert  and  to  relate  those 
characteristics  to  meteorological  conditions.  Simultaneous  visibility,  meteorological  and  aerosol 
data  were  taken  at  four  wide-spread  locations  in  the  western  Mojave  Desert  of  California  starting 
the  first  week  in  July  and  running  through  the  second  week  in  September  1990.  Locations  of  the 
sampling  sites  are  shown  in  the  first  figure.  Size  distribution  data  were  taken  at  Tehachapi  Pass  and 
on  Edwards  Air  Force  Base  in  the  Antelope  Valley  2  miles  south-east  of  Rogers  dry  lake.  Filter 
sampling  was  performed  daily  on  the  China  Lake  dry  lake  in  the  Indian  Wells  Valley,  Edwards  and 
Tehachapi. 




Figure  1 .  Topographical  Map  of  the  Mojave  Desert,  San  Joaquin  Valley,  and  Los  Angeles  Basin  Marked  with  the 
Measurement  Sites  at  Tehachapi,  Edwards  Air  Force  Base  and  China  Lake.  Air  pollution  is  transported  from  the  San 
Joaquin  Valley  to  China  Lake  and  Antelope  Valley  through  the  Tehachapi  Pass  and  from  the  Los  Angeles  Basin  into  the 
Antelope  Valley  through  Soledad  and  Cajon  Passes. 


The  Antelope  Valley  is  part  of  the  southwestern  Mojave  Desert  beginning  fifty  miles  north  of  Los 
Angeles  International  Airport.  The  Mojave  Desert  is  in  the  rain  shadow  of  the  Sierra  Nevada  and 
Tehachapi  Mountains  to  the  west  and  the  San  Gabriel  Mountains  to  the  south  leaving  the  desert 
relatively  dry  and  cloud  free.  The  Antelope  Valley  is  separated  from  the  Los  Angeles  and  San 
Fernando  Valley  air  basins  by  the  San  Gabriel  Mountains.  The  Tehachapi  Mountains,  to  the  west, 
separate  the  Antelope  Valley  from  the  San  Joaquin  Valley.  The  Indian  Wells  Valley  lies  in  the 
north-western  Mojave  Desert  200  km  northeast  of  Los  Angeles  at  the  southern  entrance  to  the 
Owens  Valley.  It  is  bounded  by  the  Sierra  Nevada  Mountains  to  the  west  and  the  Panamint  range 
to  the  northeast. 

Combustion aerosols are transported into the Mojave from the San Joaquin Valley through the
Tehachapi Pass and through the Soledad and Cajon Passes from the Los Angeles air basin.'  Thus
the valley's atmosphere contains a spatially and temporally complex mixture of aerosols of urban,
industrial and desert origin.  Combustion aerosols at China Lake originate in the San Joaquin Valley.

The prevailing air flow tends to be from the west or northwest most of the year, with a shift to
southwesterly flows in the summer.  The actual flow patterns and wind directions in the lower levels
of the atmosphere are controlled by the locations of high and low pressure systems.  In the summer
months solar heating in the desert creates a thermal low pressure area which tends to persist through
the night, generating flow into the desert for most of the day.  Wind speeds are typically highest
in the afternoon and lowest in the morning.

The  prevailing  mesoscale  flow  patterns  tend  to  be  dominated  in  the  lower  levels  by  topography  and 
thermal  effects.  Without  convective  mixing  associated  with  wind,  the  air  in  the  Mojave  tends  to  be 
stable,  with  mixing  depths  comparable  to,  but  somewhat  higher  (in  the  afternoon)  than  the  heights 
of  the  mountain  ranges  which  separate  the  desert  from  the  coastal  valleys.  Thus,  the  prevailing 
westerly  flows  tend  to  be  channeled  into  the  desert  through  passes  in  the  mountain  ranges  most  of 
the  day.  Flow  into  the  Indian  Wells  Valley  is  from  the  San  Joaquin  Valley  through  the  Tehachapi 
Pass.  This  flow  bypasses  the  Indian  Wells  Valley  in  the  morning,  but  flows  through  it  in  the 
afternoon. 


2.  EXPERIMENTAL  PROCEDURE 

2.1  Meteorological  Data 

Radiosonde data were taken daily at 0230 PST at Edwards and at 0530 at China Lake at 1000 foot
intervals from the surface to 100,000 feet.  Wind speed and direction were obtained at China Lake and
Edwards with Handar Wind Speed and Direction Sensors while temperature and dew point were
obtained with a Handar Temp/RH Probe.  Climatronics instruments were used to take similar data
at Telhill and Tehachapi.  The meteorological instruments were placed ten meters above the ground
and recorded data 24 hours a day.

2.2  Aerosol  Filter  Samplers 

Continuous,  twenty-four  hour  samples  were  taken  with  Wedding  2X4  Filter  Samplers.  These 
samplers  acquired  one  coarse  and  three  fine  aerosol  samples  daily.  The  coarse  filter  sample  and  two 
of  the  fine  filter  samples  were  collected  on  Teflon  filters  for  mass,  absorption,  and  elemental 
composition.  The  second  fine  Teflon  filter  serves  as  a  data  quality  check.  The  third  fine  particle 
filter  was  quartz  fiber  and  was  used  for  elemental  and  organic  carbon  capture. 

Two five hour samples were taken daily from 0700 to 1200 and from 1200 to 1700 PDT at Edwards,
Tehachapi and China Lake using NEA Sequential Filter Samplers (SFS).  The PM10 size fraction
was transmitted through a Sierra-Andersen 2541 size-selective inlet into a plenum.  At Edwards and
China Lake the PM2.5 fraction was obtained using a Bendix 240 Cyclone PM2.5 Inlet.  At
Tehachapi the PM2.5 fraction was obtained using a Desert Research Institute (DRI) MEDVOL
Model 3030F with a Bendix 240 Cyclone PM2.5 Inlet.  Two sets of filters were used simultaneously
in both the PM10 and PM2.5 SFS.  One set consisted of a Teflon-membrane filter which collected
particles for gravimetric and x-ray fluorescence (XRF) analyses.  The other fine filter holder
contained a quartz-fiber filter.  Deposits on this filter are submitted to ion and carbon analyses.




Nitrate, sulfate, ammonium, chlorine and potassium masses were determined gravimetrically as each
was chemically extracted from the Teflon filter.  XRF analysis was performed on Teflon-membrane
filters.  Analyses were performed using a Kevex Corporation Model 700/8000 energy dispersive x-
ray (EDX) fluorescence analyzer.  The analyses were controlled, spectra acquired, and elemental
concentrations calculated by software implemented on an LSI-11/23 microcomputer interfaced to
the analyzer.

2.3  Particle  Sizers 

Particle sizers were located at Edwards and Tehachapi for the 1990 experiment.  The sizers were
mounted four meters above the ground and took 20-minute data twenty-four hours a day.  At
Edwards aerosol size distributions were obtained with a TSI Differential Mobility Particle Sizer
(DMPS) for particles with diameters in the range 0.01 to 0.8 μm and with an APS-33 Aerosol Particle
Sizer for particle diameters ranging from 0.5 to 30 μm.  Aerosol size distribution measurements were
also taken at Tehachapi with a TSI Electrical Aerosol Analyzer (EAA) for particles in the size range
0.01 to 0.6 μm and a Laser Aerosol Spectrometer (LAS-X model) Optical Particle Counter for particles
in the size range 0.09 to 3 μm.

3.0  AEROSOLS  DURING  AMBIENT  METEOROLOGICAL  CONDITIONS 
3.1  Composition 

The average composition of the aerosols captured on the NEA SFS five-hour samplers at Tehachapi,
Edwards and China Lake is tabulated in Tables 1, 2 and 4.  The anomalously high organic content
of accumulation mode aerosols in the western Mojave (Table 1) has been observed several times
before and there has been some concern that this is due to contamination.  Therefore, blank quartz-
fiber filters were randomly examined for organic carbon contamination before use in the field.  They
were heated for at least three hours at 900 °C before use and kept refrigerated prior to heating.


TABLE 1.  PM2.5 Composition (in μg/m³) Averaged Over the Period of the Project.

                 Edwards    China Lake    Tehachapi          Tehachapi Intensive
  Ion            Mass       Mass          Mass      %        Mass      %
  Chloride       0.04       0.033         0.037     0.4      0.08      0.94
  Nitrate        0.5        0.34          0.96      9.8      0.15      1.76
  Sulfate        1.76       1.37          1.61      16.4     2.37      27.8
  Ammonium                                0.66      6.7      0.93      10.9
  Organic        4.46       4.57          4.87      49.5     3.13      36.7
  Soot           0.78       0.7           1.7       17.3     1.86      21.8
  Total Mass                              9.84               8.52



Table 1 is a tabulation of PM2.5 aerosol composition captured on glass fiber filters. (PM2.5 aerosols are actually a mix of accumulation and dust mode aerosols.) The optical properties of sulfate and nitrate aerosols depend upon whether they are present as acids or ammonium compounds. The acids are clear liquids, whereas the ammonium compounds are white, hygroscopic solids. Unfortunately, ammonium ion mass was only recorded for Tehachapi. For both normal and intensive smog periods there is just enough ammonium present in Table 1 to neutralize the sulfuric and nitric acids. Thus, the sulfates and nitrates appear as ammonium sulfate and ammonium nitrate. The source of the organic carbon is yet to be determined. It may reside in the atmosphere in vapor form that subsequently condensed on the filters, in aerosol form, or some combination thereof.
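The neutralization argument can be checked with simple stoichiometry. The sketch below is not from the original analysis; the molar masses and the assumption that the end products are (NH4)2SO4 and NH4NO3 are ours. It computes the ammonium needed to fully neutralize the Tehachapi sulfate and nitrate in Table 1 and compares it with the measured ammonium.

    # Hypothetical stoichiometric check (not part of the original analysis).
    # Molar masses used: NH4+ = 18, SO4-- = 96, NO3- = 62 g/mol.

    def ammonium_required(sulfate, nitrate):
        """Ammonium mass (ug/m3) needed to neutralize sulfate and nitrate (ug/m3)."""
        return 2 * (18.0 / 96.0) * sulfate + (18.0 / 62.0) * nitrate

    # Tehachapi values from Table 1 (normal and intensive smog periods).
    for label, sulfate, nitrate, observed_nh4 in [
            ("Tehachapi", 1.61, 0.96, 0.66),
            ("Tehachapi intensive", 2.37, 0.15, 0.93)]:
        needed = ammonium_required(sulfate, nitrate)
        print(f"{label}: need {needed:.2f} ug/m3 NH4+, observed {observed_nh4:.2f}")

For the intensive period the required and observed ammonium agree almost exactly; for the normal period they agree to within roughly 25 percent, consistent with the statement above.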

Table 2 is a tabulation of the elemental composition of PM10 aerosol at Edwards and China Lake. Table 3 is a tabulation by percent of the elemental composition of the most common clays. Comparison of Tables 2 and 3 shows that the best match, except for excess sulfur and calcium, is with illite clay, which is common to deserts. (The compositions of the listed clays are averages obtained from an extensive review of the literature; thus, compositional agreement cannot be expected to be exact.) It remains to be determined whether the excess sulfur and calcium are associated with the clay. Calcium and silicon masses strongly correlate for both Edwards and China Lake; so, calcium is a clay component. On the other hand, silicon and sulfur masses are not at all correlated for Edwards and are only weakly correlated for China Lake. Thus, sulfur is not a dust mode component at Edwards. Gypsum (CaSO4) could be present at China Lake. However, sulfur and calcium masses do not correlate at China Lake. Other possible sources of dust mode sulfur are sodium and potassium sulfates. Unfortunately, there was no check for sodium. Sulfur and potassium are weakly correlated at China Lake in the same way as are sulfur and silicon.


TABLE 2. Averaged Mass and Relative Composition of PM10 Aerosols Caught on Teflon Filters.

                 Edwards                 China Lake
Element          Mass        %           Mass        %
Al               2.5         20.7        1.62        19.6
Si               5.87        48.5        4           48.46
P                0.004       0.03        0.0053      0.06
S                0.69        5.7         0.63        7.6
Cl               0.0076                  0.068
K                0.87        12          0.55        6.6
Ca               0.78        6.4         0.68        8.3
Ti               0.11        0.9         0.06        0.7
Mn               0.025                   0.015
Fe               1.28        10.6        0.71        8.6
Ba               0.039                   0.0004
La               0.018                   0.013
Total Clay       12.1                    8.26

The most likely source of the PM10 sulfur is accumulation mode sulfates. PM2.5 sulfur masses from sulfates at both Edwards and China Lake are only slightly less than the PM10 masses. Thus, at Edwards sulfur and dust have totally independent origins, whereas some of the sulfur is associated with clay at China Lake.
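The clay-component argument above rests on simple mass correlations between elements across the five-hour filter samples. A minimal sketch of such a check is below; the sample arrays are placeholders, since the individual filter masses are not reproduced in this paper.

    import numpy as np

    def pearson_r(x, y):
        """Pearson correlation coefficient between two equal-length mass series."""
        x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
        return float(np.corrcoef(x, y)[0, 1])

    # Placeholder five-hour PM10 elemental masses (ug/m3) for one site; the real
    # per-sample values are not tabulated in this paper.
    silicon = [5.2, 6.1, 4.8, 7.3, 5.5]
    calcium = [0.71, 0.82, 0.63, 0.95, 0.74]
    sulfur  = [0.70, 0.66, 0.72, 0.65, 0.71]

    # A high Si-Ca correlation supports calcium being a clay (dust mode) component;
    # a near-zero Si-S correlation suggests sulfur has a separate origin.
    print("r(Si, Ca) =", round(pearson_r(silicon, calcium), 2))
    print("r(Si, S)  =", round(pearson_r(silicon, sulfur), 2))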


TABLE  3.  Composition  of  Clays  Averaged  from  Data  Taken  from  All  Over  the  World  (Reference  5). 


PM10 samples were not taken at Tehachapi Pass. Nevertheless, the composition of dust mode aerosols needs to be known for this location. Because the China Lake site is located on a dry lake bed, its dust composition may not be typical of deserts at all. The Edwards and Tehachapi sites are not on a dry lake bed. The composition of aerosols captured on PM2.5 Teflon filters is tabulated in Table 4.


TABLE 4. Reconstruction of Composition of Dust Mode Aerosol Captured on PM2.5 Teflon Filter.


Sulfur mass is in agreement with that captured on the PM10 Teflon filters, which is to be expected if most of the sulfur is in the accumulation mode. The clay components are underrepresented relative to sulfur since PM2.5 filters capture only part of the dust mode particles. If the sulfur fraction is adjusted to match that of PM10, then the distribution of the other elements on the PM2.5 filters falls in line with that of the PM10 except for a greater abundance of clay trace elements. Perhaps these elements reside on the surface of alumino-silicate particles so that they represent a greater fraction of the volume of the smallest dust particles. In conclusion, the dust mode composition at Tehachapi Pass is also illite clay.

4.  METEOROLOGICAL  EFFECTS  ON  AEROSOL  CHARACTERISTICS 
4.1  Wind  Speed  Dependence  of  Aerosol  Mass 

Wind speed dependence of the mass (in µg/m³) of PM10 aerosols in Figure 2 was obtained by plotting the five-hour integrated mass captured on the NEA Sequential Filter Sampler at Edwards against wind speeds averaged over the same period. The NEA samplers were operated from 0700 to 1200 and again, with new filters, from 1200 to 1700 PDT. The U-shaped dashed line in the figure is a least-squares fit to the data. A more sensible fit would be no wind speed dependence below 6 m/sec and an exponential fit that approximates the least-squares curve for greater wind speeds. This fit is indicated by the solid lines in the figure. Thus, the wind speed dependence of dust mode mass at Edwards is given by Equation 1.

m = 7 µg/m³                          for u < 6 m/sec
m = 0.13 exp(0.66u) µg/m³            for u > 6 m/sec                    (1)
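For convenience, Equation 1 can be coded directly; the sketch below simply encodes the piecewise fit quoted above (the 6 m/sec breakpoint and the coefficients are taken from the equation).

    import math

    def edwards_pm10_dust_mass(wind_speed):
        """Equation 1: dust mode mass (ug/m3) at Edwards vs. wind speed u (m/sec)."""
        if wind_speed < 6.0:
            return 7.0                              # no wind speed dependence below 6 m/sec
        return 0.13 * math.exp(0.66 * wind_speed)   # exponential regime above 6 m/sec

    # Example: near the breakpoint the two branches roughly agree (~7 ug/m3).
    print(edwards_pm10_dust_mass(5.0), round(edwards_pm10_dust_mass(6.0), 1))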


Figure 2. Wind Speed Dependence of Dust Mass at Edwards Air Force Base.

Figure 3. Wind Speed Dependence of Dust Mass at China Lake.


Wind  speed  dependence  of  dust  mass  at  China  Lake  is  plotted  in  Figure  3.  There  is  no  wind  speed 
dependence  for  wind  speeds  up  to  12  m/sec  according  to  the  filter  measurements.  This  outcome  is 
consistent  with  anecdotal  information. 




It was hypothesized in the last section that gypsum is a major dust component. The wind speed dependences of sulfur and calcium for Edwards AFB are plotted in Figure 4. The wind speed dependence of calcium mass is similar to that of dust as a whole. On the other hand, sulfur has no wind speed dependence. Calcium is probably a dust component, whereas sulfur has some other origin. The most likely source of the sulfur is ammonium sulfate and nitrate.


Edwards aerosol mass in the size range 0.06 to 1 µm diameter is dominated by accumulation mode particles typically made of the substances listed in Table 1. Aerosols in the size range 1 to 10 µm are typically clays. There is supposed to be yet another size distribution mode, due to blowing sand, made of quartz. Therefore, the composition of the PM10 captured particles might also be wind speed dependent. Figure 5 is a plot of the ratio of PM10 silicon to aluminum versus the short-term averaged wind speed. Clearly, not even a part of a silicon-dominated sand mode is being captured, even at wind speeds of 9 m/sec.


Figure 4. Different Wind Speed Dependence of Calcium and Sulfur at Edwards Implies Different Origins.

Figure 5. Mass Ratio of Silicon to Aluminum at Edwards Does not Change with Wind Speed, Indicating the Absence of the Blowing Sand Mode.


4.2  Size  Distribution 

The size distributions of Figures 6 and 7 are for pre-storm winds at 0200, maximum aerosol loading at 1600 with a wind speed of 8.9 m/sec, and post-storm conditions at 2400. These size distributions were calculated from aerodynamic particle data assuming a particle specific gravity of 2.7. The pre-storm aerosol environment was dominated by accumulation mode particles with light dust loading. The post-storm atmosphere was dominated by residual dust with no apparent accumulation mode particles. Maximum particulate loading at wind speeds near 10 m/sec was dominated by dust mode clay particles, with the possible presence of a larger mode with a mode diameter of about seven microns.
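The paper does not spell out how the aerodynamic particle data were converted to the plotted size distributions. A common approach, sketched below under the stated specific gravity of 2.7 (and assuming spherical particles with no shape-factor correction), divides the aerodynamic diameter by the square root of the specific gravity; this is our assumption, not necessarily the procedure actually used.

    import math

    def geometric_diameter(aero_diameter_um, specific_gravity=2.7):
        """Approximate volume-equivalent diameter (um) from aerodynamic diameter (um).

        Assumes spherical particles and unit-density aerodynamic calibration,
        i.e. d_geo = d_aero / sqrt(specific_gravity).  This is a common
        approximation, not necessarily the exact conversion used in the paper.
        """
        return aero_diameter_um / math.sqrt(specific_gravity)

    # Example: a 10 um aerodynamic particle maps to roughly a 6 um clay particle.
    print(round(geometric_diameter(10.0), 1))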




Wind  Speed  and  Dust  Loading  Calculated  from  Size  Distribution  Data  Taken  at  Edwards 


5.  DISCUSSION 


Results from the present study differ in several significant ways from those of Longtin et al. (reference 4), whose model is incorporated in LOWTRAN 7 (reference 5). First, the accumulation (or water soluble) mode is predominantly organic carbon instead of being composed of sulfates. That may be due to the proximity to large urban, industrial, and oil producing and refining regions. Second, the Longtin model does not even include the dust mode, which is the predominant continental large aerosol mode. It may be that the Sahara Desert, from which part of their model is extracted (reference 6), has been depleted of small particle clays. Last, there is a hint in the current study of the presence of the so-called blowing sand mode, which is supposed to appear when wind speeds exceed 10 m/sec. However, the composition of this mode is that of clay, not sand.

6.  REFERENCES 

1. Trijonis, J., et al., "RESOLVE Project Final Report", Naval Weapons Center, China Lake, Calif., December 1987. 179 pp. NWC TP 6869.

2. Pye, K., Aeolian Dust and Dust Deposits, Academic Press, New York, 1987.

3. Weaver, C.E., and Pollard, L.D., The Chemistry of Clay Minerals, Elsevier, New York, 1973.

4. Longtin, D.L., Shettle, E.P., Hummel, J.R., and Pryce, J.D., "A Desert Aerosol Model for Radiative Transfer Studies", in Aerosols and Climate, Hobbs, P.V., and McCormick, M.P., eds., Deepak Publishing, 1988.

5. Kneizys, F.X., Shettle, E.P., Gallery, W.O., Chetwynd, J.H., Jr., Abreu, L.W., Selby, J.E.A., Clough, S.A., and Anderson, G.P., "Atmospheric Transmittance/Radiance: Computer Code LOWTRAN 7", Air Force Geophysics Laboratory, Hanscom AFB, Mass. 01731, August 1988. 146 pp. AFGL-TR-88-0177.

6. d'Almeida, G.A., "On the variability of desert aerosol radiative characteristics", J. Geophys. Res., 92(D3), 3017 (March 20, 1987).




Session  II 


OPERATION  WEATHER 




EVALUATION OF THE NAVY'S ELECTRO-OPTICAL TACTICAL DECISION AID (EOTDA)


S. B. Dreksler

Computer Sciences Corporation
Monterey, CA 93943-5502

S. Brand and A. Goroch
Naval Research Laboratory Monterey
Monterey, CA 93943-5502


ABSTRACT 


The  Naval  Research  Laboratory  EOTDA  evaluation  program  evaluates  the 
accuracy  and  utility  of  the  EOTDA  in  forecasting  forward  looking  infrared  (FLIR) 
system  performances.  The  program  has  focused  on  the  collection  of  performance 
data  for  three  specific  FLIR  systems  in  use  by  Naval  training  squadrons.  Both  the 
model  and  analysis  methods  have  been  refined  since  the  earlier  work  described  by 
Scasny  and  Sierchio  at  the  1992  Battlefield  Atmospherics  Conference.  The 
EOTDA  model  now  uses  physical  background  models  rather  than  empirical 
backgrounds.  Analysis  procedures  include  a  variation  of  target/background  pairs 
to  allow  for  typical  uncertainties.  The  results  of  the  analysis  are  discussed,  and 
future  direction  will  be  described. 


1.  INTRODUCTION 

The strike warfare community requires accurate meteorological analyses and forecasts to properly plan and effectively execute tactical operations. While meteorological information itself is very important, it is generally of more value to the tactical decision-maker if it is presented in a tactically relevant form. An example of such an environmental tool is the Electro-Optical Tactical Decision Aid (EOTDA), under development at the Naval Research Laboratory, Monterey. This product was derived from the Mark III EOTDA, which was originally developed by the USAF Phillips Laboratory (Freni et al., 1993).

EOTDAs  are  models  that  predict  the  performance  of  electro-optical  weapon  systems  and  night 
vision  goggles,  based  on  environmental  and  tactical  information.  Performance  is  expressed  in 
terms  of  maximum  detection  and  lock-on  ranges  or  designator  and  receiver  ranges.  The 




EOTDAs consist of three microcomputer-based programs supporting infrared (IR) (8-12 µm), visible (0.4-0.9 µm), and laser (1.06 µm) systems. Each program is comprised of three submodels: an atmospheric transmission model, a target contrast model, and a sensor performance model.

As  part  of  any  development  process,  it  is  important  to  have  a  good  understanding  of  the 
environmental  sensitivity  of  any  meteorological  decision  aid.  It  is  also  equally  important  to 
thoroughly  evaluate  these  applications  under  various  environmental  conditions  to  establish  their 
strengths  and  weaknesses.  This  documentation  of  strengths  and  weaknesses  can  assist 
operational  users  and  can  help  direct  research  and  development  efforts. 

Since the initial phase of this evaluation, the EOTDA has been upgraded to version 3.0 from versions 2.0 and 2.2¹. The primary modifications in the EOTDA included different methods of entering target, background and weather data. Additional generic targets were added so the user could specify particular parameters for dams, bridges, buildings, bunkers, Petroleum/Oil/Lubricant (POL) tanks, power plants, and runway targets. The number of backgrounds was reduced from over 150 empirical backgrounds to six generic first-principles-of-physics backgrounds: Vegetation, Soil, Snow, Water, Concrete, and Asphalt. The user can adjust several parameters for each background. The method for entering meteorology data was simplified, and data are now entered using the standardized Terminal Aerodrome Forecast (TAF).

2.  METHODOLOGY 

The Navy's EOTDA evaluation consists of the comparison between calculated and observed sensor performance. Actual sensor performance data were collected in conjunction with normal training missions, Navy and DoD test programs, ship deployments, reserve cruises, naval exercises, and Naval Postgraduate School experiments. This paper focuses on forward looking infrared (FLIR) data because the largest amount of FLIR data had been collected to date. Also, our experience indicates that the IR EOTDA is the most used EOTDA module; therefore, it is the most important to evaluate.

To collect both operational and meteorology data, a two-sided "knee-board" information card was distributed to the participants. Scasny and Sierchio (1992) provide a more detailed description. Side one of the data cards was used by the weapon system operators (WSOs)² for recording FLIR detection ranges and sensor, target and background information. The WSOs were asked to choose detection targets that were closest in description to those targets in the EOTDA Target List or those that could be reproduced using the generic models option. To obtain the operational sensor detection data, FLIR operators were asked to begin approach of the target from beyond maximum detection range and to maintain a constant altitude and airspeed. As the aircraft neared the target, the operator logged the appropriate data. If possible,

¹Version 2.2 was an update to version 2.0 and was specifically used during DESERT SHIELD/STORM operations.

²The Weapon System Operator (WSO) is the person who records the sensor range and other tactical data. In different scenarios, the weapon system operator can be the Pilot, the Bombardier/Navigator, an Electronic Warfare Officer or any other member of the air crew.




multiple passes over the target at different altitudes and approach angles were accomplished. After the flight was completed, the data card was given to weather personnel for completion of weather data on the reverse side of the card. In addition to the card, the weather personnel were requested to provide copies of their observation sheets for the day preceding and the day of the data collection. This provided approximately 12 to 24 hours of background information for input into the EOTDA. The meteorology information required is wind, temperature, humidity, and cloud cover.

The completed data sets were entered into the Mark III EOTDA. The EOTDA was run for each completed data set, and the calculated output was then compared to the observed values of detection range. Due to the importance of choosing the correct target/background pair, and the fact that we did not always have exact knowledge of the precise target and background, we ran the model with different backgrounds and sometimes different targets. For example, if the target was a POL tank, the model was run with the POL tank as full, empty and half empty. If the background was described as grass, the model was run with several different vegetation and soil types.

Data previously run using EOTDA versions 2.0 or 2.2 were rerun using version 3.0. The resultant data between the two versions were compared. Data collected since the EOTDA was upgraded to version 3.0 were not run using previous versions.

3.  ASSUMPTIONS  AND  LIMITATIONS 

Our data collections were done without actually interacting with the WSOs prior to or after the mission. Because of this lack of interaction, we would expect the EOTDAs, in many cases, to predict longer ranges than seen by the WSO, since the WSO may not have reported the target at the maximum detection range. Additionally, we do not know exactly what targets or background parameters the WSO saw at detection range, resulting in an uncertainty in selecting the target-background pair. The interactions and feedback in an operational environment have been shown to improve the skill of the EOTDAs (Kelly and Goforth, 1994).

Weather  biases  are  always  present.  In  many  cases,  the  weather  used  by  the  model  was  taken  at 
the  nearest  reporting  station  and  not  at  the  target  site.  However,  we  were  able  to  develop  a 
fairly  accurate  12-24  hour  weather  history.  This  weather  history  is  necessary  to  initialize  the 
thermal  contrast  model.  In  many  real-time  operations,  weather  data  could  be  less  accurate, 
since  forecast  data  instead  of  archived  data  would  be  used  as  the  input  to  initialize  the  EOTDA 
model. 

No  temperature  or  moisture  measurements  of  the  targets  or  backgrounds  were  available.  In  an 
operational  setting,  model  output,  observations  and  feedback  would  provide  some  insight  as  to 
the  nature  of  these  variables. 

Statistical  tests  for  each  sensor  were  limited  because  the  independence  of  the  data  collected 
could  not  be  established  at  this  time.  This  is  not  a  simple  task,  since  independence  is  a  function 
of  differences  in  dates/times  of  data  collected  as  well  as  differences  in  target/background  pairs, 
approach  angle,  flight  altitude  of  aircraft,  time  of  day,  etc. 




3.1  EOTDA  Limitations 

In addition to the data collection limitations, several assumptions were made during the development of the EOTDA (Dunham and Schemine, 1993). The major model assumptions are: 1) the target is in the sensor's field of view, and time in view is not an issue; 2) targets are all ground-based, and operating vehicular targets have been operating long enough to reach thermal equilibrium; 3) the immediate background around the target is homogeneous; 4) high-value-target detection criteria are similar to those used for vehicular targets; 5) the atmosphere is horizontally homogeneous with only two vertical layers; 6) cloud cover is continuous (scattered or broken coverages are not modeled); and 7) there exists a cloud-free line of sight between the sensor and the target.

4.  RESULTS  AND  DISCUSSION 

Sensor data from three FLIR sensors have been obtained and preliminary analysis has begun. The present analysis examines the data collected to date. The following paragraphs discuss the comparison of the observed versus predicted detection ranges. Due to the classification of sensor data when associated with sensor nomenclature, the sensors will be identified in this paper solely as sensor 1, sensor 2, and sensor 3. For a complete discussion of the sensor results, refer to Dreksler et al., 1994.

4.1  Best  Choice 

As  mentioned  earlier,  for  every  observed  detection  range,  we  ran  many  input  combinations  of 
the  EOTDA  varying  the  backgrounds  and  the  complexity  of  the  scene.  In  some  cases  the  target 
was  also  varied.  Backgrounds  were  reported  by  the  WSO;  however,  these  were  usually  brief 
comments,  such  as  grass  field.  Several  similar  backgrounds  were  evaluated.  For  the  grass  field 
example,  background  selections  were  made  from  the  following  categories:  growing  states 
(intermediate,  dormant  or  growing),  coverages  (dense,  medium  or  sparse),  and  soil  moisture 
(dry,  wet,  or  intermediate). 

To  help  us  build  the  backgrounds,  we  examined  the  season  and  the  previous  rainfall  data.  Then 
we  selected  background  parameters  such  as  dormant,  sparse,  dry  for  California  summer  or 
growing,  dense,  or  wet  for  New  England  spring.  In  many  cases  we  selected  three  or  four 
different  realistic  vegetation  backgrounds,  varying  the  soil  moisture  from  wet  to  intermediate  or 
varying  the  coverage  from  dense  to  medium.  We  then  made  separate  EOTDA  model  runs  for 
each  background.  We  believe  this  is  a  good  method  to  provide  a  basis  for  analysis  because  an 
experienced  EO  forecaster  will  make  many  EOTDA  runs  per  mission,  and  then  will  decide  on 

an  EOTDA  forecast  based  on  his  knowledge  of  what  the  target-background  pair  will  look  like 
to  the  WSO. 

A measure of model success that was examined was the best case (where the background was chosen to provide a range closest to that observed; see Figure 1). We refer to this as "best choice" and use it extensively during the analyses.

4.2  Background  Analysis 

Since one of the major upgrades between versions 2.0 and 3.0 was the change in backgrounds from empirical (version 2.0) to first principles (version 3.0), we were interested to see how well each background performed.




Figure 1. Example of all EOTDA runs for one mission. The best choice is determined by selecting the case with the smallest error. A best choice case for each EOTDA version (2.0 and 3.0) is selected as annotated.

Figure 2. Comparison of EOTDA versions 2.0 and 3.0 best choice selections for the water background (TDA-predicted versus observed detection range, both in kft).


We divided all the best choice runs from all three sensors by background type. We then compared version 2.0 and version 3.0 backgrounds.

4.2.1.  Water  Background 

Figure 2 shows the results of using the water background for each of the three sensors. For EOTDA version 2.0, sensors 1 and 2, 50% (3 of 6) of the best choice runs were within 20% of the observed value and 83% (5 of 6) of the best choice runs were within 50% of the observed value. By comparison, for EOTDA version 3.0, sensors 1 and 2, 33% (2 of 6) of the best choice runs were within 20% of the observed value and 50% (3 of 6) of the best choice runs were within 50% of the observed value. However, for sensor 3, EOTDA version 2.0, all 5 cases had errors in excess of 225% and an average error of 355%. For version 3.0, the same 5 cases had errors in excess of 159% and an average error of 172%. For sensor 3, the EOTDA version 3.0 average of the 10 cases was 120%.

Version  3.0  had  percent  errors  42%  lower  and  root  mean  square  (RMS)  errors  25%  lower  than 
version  2.0  for  the  same  11  cases.  But  overall,  the  error  using  the  water  background  was  over 
90%  for  version  3.0.  The  empirical  water  background  did  a  better  job  of  representing  the  actual 
background  for  those  cases  using  sensors  1  and  2.  However,  when  the  empirical  background 
was  off,  it  was  far  off.  The  first  principles  water  background  appears  to  be  more  consistent,  but 
still  produced  large  errors.  The  following  describes  how  well  the  water  background  compared 
with  the  other  backgrounds. 

4.2.2.  Other  Backgrounds 

Figures 3 and 4 compare the backgrounds regardless of sensor type. The Y-axis is in percent for the percent error analyses and in thousands of feet (kft) for the RMS analyses. Comparing the same 33 cases (Figure 3) for the other backgrounds, version 2.0 was slightly better for soil and asphalt, while version 3.0 was slightly better for vegetation and water. There was only one case where concrete was the background, and there were no cases that used a snow background. The soil background (7 cases) gave the best results for each version, 26% error for version 2.0 and 33% error for version 3.0. Asphalt was the next most accurate background (3 cases), 79% error for version 2.0 and 94% error for version 3.0. However, the RMS error from version 3.0 was considerably lower than the version 2.0 error: 39,000 feet versus 61,000 feet. For the water and vegetation backgrounds, version 3.0 had lower percent errors and lower RMS errors. However, version 3.0 still had errors of over 100 percent for these backgrounds.
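The statistics quoted in this section are mean unsigned percent errors and RMS errors of the best-choice predictions, grouped by background. A minimal sketch of that bookkeeping, with placeholder case records, is below.

    import math
    from collections import defaultdict

    def error_stats_by_background(cases):
        """Mean percent error (%) and RMS error (kft) of predictions, per background."""
        groups = defaultdict(list)
        for background, predicted_kft, observed_kft in cases:
            groups[background].append((predicted_kft, observed_kft))
        stats = {}
        for background, pairs in groups.items():
            pct = [abs(p - o) / o * 100.0 for p, o in pairs]
            rms = math.sqrt(sum((p - o) ** 2 for p, o in pairs) / len(pairs))
            stats[background] = (sum(pct) / len(pct), rms)
        return stats

    # Placeholder best-choice cases: (background, predicted kft, observed kft).
    cases = [("soil", 30.0, 28.0), ("soil", 22.0, 25.0), ("water", 60.0, 25.0)]
    print(error_stats_by_background(cases))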

These results are supported by the Schemine and Dunham (1993) assessment of the first principles background signature predictions in the Target Contrast Model. Schemine and Dunham (1993) showed the soil model provided excellent signature predictions, while the foliage (vegetation) model produced fairly good predictions and the water and concrete models were highly inaccurate. The greatest inaccuracies were found in the water background model, due primarily to two factors. First, the bi-directional effects from the reflection of the sky temperature are not included in the water background model. Second, during model initialization, the water background model sets the water's initial core temperature equal to the air temperature at the initial input time. Both problems need to be addressed.




Figure 4. Comparison of all best choice selections subdivided by background: Total (33/88), Asphalt (3/4), Concrete (1/7), Soil (7/27), Vegetation (11/24), Water (11/24), Snow (0/2). The numbers in parentheses are the number of cases for that background from EOTDA versions 2.0 and 3.0, respectively. The Y-axis is in percent for the percent error analyses and in thousands of feet (kft) for the RMS analyses.


5.  SECOND  PHASE 


The  second  phase  of  the  EOTDA  evaluation  is  designed  for  controlled  operational  validation  of 
the  EOTDA.  In  this  phase,  each  range  data  point  used  in  the  analysis  is  supervised  by  the 
analyst  in  cooperation  with  the  WSO  and  the  local  weather  detachment.  The  WSO  is  provided 
with  a  detailed  briefing  of  sensor  and  weather  information  and  data  requirements  by  the  analyst. 
After  the  mission,  the  WSO  is  debriefed  by  the  analyst  about  activity  during  the  mission. 

The  purpose  of  this  detailed  analysis  is  to  determine  the  detailed  chronology  of  each  event, 
including  detection,  classification  and  identification  ranges.  These  data  will  identify  the  detailed 
operational  and  environmental  characteristics  controlling  the  utility  of  the  EOTDA.  It  has  been 
found  in  previous  analyses  that  when  information  gathering  duties  are  added  to  the  already  full 
workload  of  air  crews  and  met  personnel,  data  quality  suffers  substantially. 

There are two types of analyses being conducted: aboard ship and at an air station. The procedure at the Naval Air Station involves the analyst obtaining environmental information from the weather detachment prior to the mission pre-brief. At the pre-brief, the analyst provides support to the weather briefer in identifying conditions expected over the target. The analyst also obtains details on the mission profile, including the aircraft operations plan, target definition, and the sensor utilization plan. During the mission the analyst remains with flight control, noting the location of the aircraft and communications between the aircraft and flight control. It is anticipated that some of the sensor ranges will be noted by the WSO, and these should be duly recorded to the nearest second by the analyst. After the mission, the analyst debriefs the WSO, answering any questions which arise from the mission communications or in the review of data taken by the WSO. If video records of the mission are available, these are to be reviewed by the analyst with the WSO.

The aboard-ship evaluation will be conducted with the LAMPS SH-60B helicopter as the primary airborne sensor platform and the Navy Mast Mounted Sight (NMMS) as the shipborne platform. The helicopter procedures are similar to the procedures at the air station. It is expected that the weather team deployed aboard the ship will provide similar air-WSO briefings supported by the analyst. During the mission, the analyst maintains station at the Combat Information Center (CIC) to monitor communications and helicopter location using standard ship resources. The analyst will debrief the sensor operator to obtain complete information about each contact. The NMMS is used for surface-surface classification and identification. The modus operandi includes obtaining a target range and bearing from another source (radar, sonar, or pilot report), and then searching the expected target region. The control is usually mounted on the bridge with video supplied to CIC. With this arrangement the analyst can maintain a watch over the NMMS, although communication with the NMMS operator may be limited. An important element of this data set is to recognize when the operator is cued to a target, and when the operator actually detects the target.

6.  CONCLUSIONS  AND  RECOMMENDATIONS 

This  paper  has  described  the  evaluation  of  the  sensor  performance  model  portions  of  the 
EOTDA  for  Navy  and  Marine  Corps  EO  sensors.  As  previously  mentioned,  data  are  limited 
and  many  more  data  points  must  be  collected  under  various  environmental  and  operational 
conditions  to  show  a  statistically  sound  representation  for  each  sensor's  performance.  Efforts  to 




collect  additional  data  for  these  sensors  and  others  are  ongoing  and  will  provide  the  needed 
information  to  produce  a  useful  evaluation  of  the  EOTDA's  prediction  performance  for  the  full 
range  of  Navy  and  Marine  Corps  electro-optical  sensors. 

We  are  continually  looking  for  ways  to  collect  more  controlled  data  for  more  sensors  under 
different  scenarios.  Most  data  were  collected  from  just  a  few  sources  with  few  scientific 
controls.  No  thermal  or  moisture  measurements  of  the  targets  or  backgrounds  were  taken. 
These  data  collection  efforts  require  interactions  with  participants  before  the  data  collection 
mission,  as  well  as  briefings  following  the  missions.  We  need  to  measure  the  weather  at  the 
target  and  to  determine  the  background  parameters  at  detection,  and  we  need  to  get  all 
targeting  information.  We  also  need  to  explore  the  impact  of  using  atmospheric  numerical 
output  as  input  to  the  EOTDAs. 

Based  on  previous  sensitivity  studies  (Keegan,  1990),  one  of  the  most  important  parameters  is 
the  target-background  pair.  The  first  principles  of  physics  backgrounds  were  more  consistent, 
while  the  empirical  backgrounds  were  as  likely  to  over-predict  or  under-predict. 

The  first  principles  of  physics  water  background  appeared  to  be  more  consistent  than  the 
empirical  water  background,  but  it  still  produced  large  errors.  A  more  detailed  examination  of 
the  physics  of  the  water  background  is  needed.  As  a  start,  the  ability  to  set  the  initial  sea 
surface  temperature  is  needed  and  also  to  include  the  bi-directional  effects  from  the  reflection 
of  the  sky  temperature.  The  utility  of  a  nonhomogeneous  water  background  could  be  studied 
for  applicability  to  account  for  moving  targets  and  target  heading  changes. 

To  overcome  model  limitations  cited  earlier,  the  transmissivity  model  could  be  examined,  and 
we  could  study  the  effects  of  improving  the  vertical  resolution  and  of  including  near  surface 
aerosols.  Human  factors  studies  could  also  be  examined.  As  mentioned  earlier,  the  EOTDA  is 
not  concerned  with  the  target  search  process.  It  assumes  that  the  target  is  in  the  sensor's  field  of 
view.  This  assumption  could  lead  to  over  predictions  of  target  detection. 

The efforts to collect the data discussed above proved to be an enlightening experience. Detailed information was obtained about the varying methods used in the different types of tactical aviation missions. As scientific researchers, one can get lost in the realm of the "way things should be done" without legitimate knowledge of the "way things are done" in the operator's or user's world. Discussions with sensor operators provided insight into the "way things are done" and, as is always the case in research, the test plan was adjusted. Continuing discussions with the aviation community and sensor operators on their tactics will provide the required information to collect useful sensor detection data.

Strong interaction with the WSOs also provides insights into the way the environmentalist needs to interact with the tactical users. For example, Air Force weather forecasters at Cannon AFB, NM, working closely with tactical pilots, were able to achieve remarkable success; that is, over 82% of the time, the forecasters predicted detection and lock-on ranges within 10% of what was observed (Kelly and Goforth, 1994). We need to learn to understand the limits of these tools as well as the extent of tactical/environmental interactions while using them.

In  summary,  the  EOTDA  evaluation  program  has  observed  the  following:  1)  results  differed  for 
each sensor; 2) predicted versus observed errors were particularly large for over-water targeting; 3) soil backgrounds gave excellent results; 4) some biases became evident for
selected  sensors;  5)  more  controlled  experiments/evaluations  are  needed  to  determine  causes  of 
error;  and  6)  some  errors  in  data  sets  were  induced  by  mission  tactics  and  not  algorithm  errors. 
The  present  or  second  phase  of  the  EOTDA  evaluation  will  attempt  to  remedy  some  of  the 
"lessons  learned"  during  the  first  phase  by  including  direct  interaction  with  the  WSOs  prior  to 
and  after  the  missions.  This  should  provide  a  firmer  basis  for  assisting  operational  users  as  well 
as  help  redirect  research  and  development  efforts. 

ACKNOWLEDGMENT 

The  Navy  effort  in  the  tri-service  development  of  the  Electro-Optical  Tactical  Decision  Aid  is 
sponsored  by  the  Oceanographer  of  the  Navy  (OP-096)  through  the  Space  and  Naval  Warfare 
Systems  Command  Program  Office,  Washington  D.C.,  program  element  0603207N. 

REFERENCES 

Dreksler, S. B., S. Brand, J. M. Sierchio, and K. L. Scasny, 1994: Electro-Optical Tactical Decision Aid Sensor Performance Model Evaluation. NRL/MR/7543-94-7216, Naval Research Laboratory, Monterey, CA 93943-5502. (In Final Review)

Dunham, B. M., and K. L. Schemine, 1993: Intermediate Grade Infrared TDA Analyst's Manual. Battelle Report for Period March 1993 through September 1993. WL-TR-94-1084, Wright Laboratory, Wright-Patterson AFB, OH 45433-7409.

Freni, J. M. L., M. J. Gouveia, D. A. DeBenedictis, I. M. Halberstam, D. J. Hamann, P. F. Hilton, D. B. Hodges, D. M. Hoppes, M. J. Oberlatz, M. S. Odle, C. N. Touart, and S-L. Tung, 1993: Electro-Optical Tactical Decision Aid (EOTDA) Users Manual Version 3. Hughes-STX Scientific Report No. 48. PL-TR-93-2002, Phillips Laboratory, Hanscom AFB, MA 02731-5000.

Keegan, T. P., 1990: EOTDA Sensitivity Analysis. STX Scientific Report No. 44(II). GL-TR-90-0251(II), Phillips Laboratory, Hanscom AFB, MA 02731-5000.

Kelly, J. L., and B. K. Goforth, 1994: ACC Cannon AFB User EOTDA Experience. Proceedings of the Weather Impact Decision Aids (WIDA) Conference, Las Vegas, NV, 22-23 March 1994.

Scasny, K. L., and J. M. Sierchio, 1992: Mark III Electro-Optical Tactical Decision Aid Sensor Performance Model Evaluation. Proceedings of the Battlefield Atmospherics Conference, Fort Bliss, Texas, 1-3 December 1992.

Schemine, K. L., and B. M. Dunham, 1993: Infrared Tactical Decision Aid Background Signature Model Assessment. Battelle Report for Period March 1993 through August 1993. WL-TR-94-1064, Wright Laboratory, Wright-Patterson AFB, OH 45433-7409.




U.S.  ARMY  BATTLESCALE  FORECAST  MODEL 


Martin E. Lee, James E. Harris, Robert W. Endlich,
Teizi  Henmi,  and  Robert  E.  Dumais 
U.S.  Army  Research  Laboratory 
White  Sands  Missile  Range,  NM  88002,  USA 

Major  David  I.  Knapp 

Operating  Location  N,  Headquarters  Air  Weather  Service 
White  Sands  Missile  Range,  NM  88002,  USA 

Danforth  C.  Weems 

Physical  Science  Laboratory,  New  Mexico  State  University 
Las  Cruces,  NM  88003,  USA 


ABSTRACT 

The  U.S.  Army  Research  Laboratory  (ARL),  Battlefield  Environment 
Directorate  (BED)  is  conducting  research  and  development  to  satisfy  Army 
Science  and  Technology  Master  Plan,  Science  and  Technology  Objectives 
(STOs)  which  call  for  target  area  meteorology  and  automated  decision  aid 
capabilities  by  FY95.  This  STO  technology  will  be  provided  to  the  battlefield 
soldier  through  the  Integrated  Meteorological  System  (IMETS)  and  the  Army 
Battle  Command  System.  ARL/BED  is  working  to  provide  the  IMETS  Block 
II  an  operational  Battlescale  Forecast  Model  (BFM)  by  FY95  that  will 
accurately  forecast  target  area  weather  and  provide  input  weather  data  to 
automated weather effects decision aids. This paper describes how ARL/BED envisions the BFM and subsequent models will be used operationally on the tactical battlefield, both from a general and technical perspective, and discusses future improvements that are planned.

1.  INTRODUCTION 

The U.S. Army Research Laboratory (ARL), Battlefield Environment Directorate (BED) is conducting research and development (R&D) to satisfy Army Science and Technology Master Plan, Science and Technology Objectives (STOs) IV.K.1 and IV.K.3. STO IV.K.1 calls for a 12 hour target area weather forecasting capability by FY95, and 24 hours by FY97. STO IV.K.3 requires development of automated weather decision aids by FY95 and FY97 that use




artificial  intelligence  techniques  to  provide  the  Army  Battle  Command  System  (ABCS)  the 
capability  to  assess  and  exploit  battlefield  environmental  effects  for  tactical  advantage.  This 
STO  technology  will  be  provided  to  the  battlefield  soldier  through  the  Integrated 
Meteorological  System  (IMETS)  and  the  ABCS.  Thus  ARL/BED  is  working  to  provide  the 
IMETS  Block  II  with  an  operational  mesoscale  meteorological  model  by  FY95  that  will 
provide  it  the  capability  to  forecast  target  area  weather  and  provide  input  weather  data  to 
automated  weather  effects  decision  aids. 

In  meteorology,  the  mesoscale  domain  can  range  from  2,000  km  (often  referred  to  as  the 
regional  or  theater  scale)  to  2  km,  which  is  near  the  microscale.  Of  primary  interest  to  the 
Army  is  an  intermediate  mesoscale  domain  of  approximately  500  km  which  ARL/BED  refers 
to  as  the  battlescale.  ARL/BED's  focus  therefore  is  to  develop  a  battlescale  model  capable  of 
forecasting  battlefield  and  target  area  meteorology  at  the  accuracies  sufficient  to  support  Army 
operations  and  automated  decision  aids.  A  model  capable  of  these  accuracies  will  significantly 
improve  the  intelligence  preparation  of  the  battlefield  process  and  specifically  the  planning  and 
execution  of  deep  strike  fire  support  missions,  increase  the  first  round  hit  probability  of  artillery 
ballistic  systems,  and  prevent  using  high  cost  "smart"  munitions  and  "precision  strike"  assets 
in  atmospheric  conditions  that  would  render  them  ineffective. 

To  achieve  the  milestones  above  and  build  a  strategy  for  future  development,  ARL/BED  has 
elected  to  rely  on  the  Navy  and  Air  Force  to  perform  the  basic  research  of  mesoscale  model 
development,  freeing  ARL/BED  to  concentrate  its  resources  on  adapting  and  applying  this 
research  to  specific  Army  applications.  This  strategy  is  consistent  with  the  Joint  Directors  of 
Laboratories  Project  Reliance,  whereby,  the  Navy  was  assigned  the  lead  in  mesoscale  modeling 
research  within  the  DOD  R&D  laboratory  community,  with  the  Army  and  Air  Force  R&D 
laboratories  agreeing  to  adapt  the  Navy's  model  for  service  specific  applications.  The  Air  Force 
is  also  pursuing  an  independent  initiative  outside  the  DOD  R&D  laboratory  environment  to 
evaluate  mesoscale  modeling  technology  to  insure  that  the  most  suitable  technology,  federal 
or  nonfederal,  is  adopted  for  Air  Force  and  Army  battlefield  weather  support. 

2.  CURRENT  SITUATION 

Currently, neither the Navy Operational Regional Atmospheric Prediction System (NORAPS), which is currently being tested for operational cases using 45 km single mesh grids (Liou et al., 1994), nor the Air Force's Relocatable Window Model is well suited for forecasting small scale weather features within complex terrain domains ≤ 500 x 500 km² important to Army battlefield operations. However, as mentioned above, both the Air Force and Navy have either in-house research efforts underway or they are supporting research through contractual mechanisms to develop a model that will support the smaller domain at the accuracies that the Army requires. Until this technology matures to the point that it can be used operationally, and to satisfy STO and IMETS milestones, ARL/BED has adapted a hydrostatic model, HOTMAC (Higher Order Turbulence Model for Atmospheric Circulation), which was initially developed by Dr. Yamada while at Los Alamos National Laboratory (Yamada and Bunker, 1989). ARL/BED




scientists have subsequently, with Dr. Yamada's assistance, improved and tailored HOTMAC for Army applications. This Army version of HOTMAC is called the Battlescale Forecast Model (BFM). The BFM uses the hydrostatic approximation, is relatively fast, numerically stable, easy to use, and has detailed boundary layer physics, a most important feature for Army operations.

ARL/BED  will  continue  to  work  jointly  with  the  Navy  and  Air  Force  to  identify,  develop,  and 
evaluate  an  objective  model  capable  of  the  accuracies  required  by  the  Army.  ARL/BED  will 
contribute  its  expertise  in  boundary  layer  physics  and  complex  terrain  interactions  as  this 
development  process  evolves.  However,  until  the  objective  model  is  considered  mature  enough 
for  operational  use,  ARL/BED  will  use  the  Army  BFM  to  satisfy  its  near  term  STO  and  IMETS 
milestone  requirements.  Once  the  objective  model  is  judged  ready  for  operational  use,  then 
ARL/BED  will  replace  the  BFM  with  the  objective  model  and  adapt  it  for  Army  applications. 
The  purpose  of  this  paper  is  to  describe  how  ARL/BED  envisions  the  BFM  and  subsequent 
models  will  be  used  operationally  in  the  field,  both  from  a  general  and  technical  perspective, 
and  to  discuss  future  improvements  that  are  planned. 

3.  GENERAL  CONCEPT  OF  OPERATIONS 

The  BFM  takes  into  account  local  effects  on  weather  patterns  which  may  take  an  experienced 
forecaster  years  to  learn  for  a  particular  area.  Running  a  mesoscale  model  on  a  workstation 
computer  offers  the  SWO  the  opportunity  to  produce  a  fine-tuned  local  forecast  for  unfamiliar 
areas  with  accuracy  far  superior  to  the  large-scale  products  currently  available  in  the  battlefield. 
The  BFM  automatically  incorporates  knowledge  of  local  terrain,  important  battlefield  weather 
observations,  and  centrally-produced  boundary  conditions  close  to  the  Area  of  Operations 
(AO)  to  produce  its  mesoscale  forecast  gridded  fields  (see  section  4.3.b).  The  BFM  predicts 
battle  scale  weather  features  causing  localized  effects  often  missed  by  the  coarse-grid  resolution 
output  from  global  and  regional  models.  These  large-scale  models  do  not  incorporate  high 
resolution  terrain  and  timely  local  observations;  the  BFM  does,  and  thus  is  able  to  more 
accurately  characterize  battlefield  weather  both  spatially  and  temporally. 

The BFM will essentially serve as the automated forecast portion of the Local Analysis and Forecast Program (LAFP). In a battlefield scenario, the BFM will automatically determine the influence of terrain and local features on atmospheric conditions, which the forecaster has heretofore been determining manually and subjectively in the LAFP process. The BFM calculates intercepted solar radiant energy that is converted to budgeted atmospheric and terrestrial thermal energy over complex gridded terrain, which is translated into pressure-gradient-driven, mesoscale wind production (e.g., predicted daytime heating of mountains and high terrain reverses nocturnally forced localized downslope drainage flows into upslope flows and vice versa).

BFM initialization will include all observations from the AO such as data from nearby airfield/brigade weather teams, soundings from the division or corps Artillery Met teams, other




deployed military units' weather observation data, and any indigenous observations being transmitted. Data from global or theater scale models are also used in this process, and this is discussed later. Observations from Automated Meteorological Surface Sensors, Unmanned Aerial Vehicle meteorological sensors, and meteorological satellites will also be included in the initialization process as they become available in the future. Initialization can also consist of only the observation taken from the Tactical Operations Center. In the event that no observations are available from the AO, the BFM will be initialized using only the gridded analysis and/or forecast data from global or theater scale models.

Boundary  meteorological  conditions  are  automatically  input  for  the  region  surrounding  the 
mesoscale  AO.  Typically,  these  data  would  be  derived  from  grid  point  data  closest  to  the  AO 
taken  from  global  or  regional  model  output  valid  at  analysis  and  forecast  times  of  interest.  The 
BFM  forecast  is  executed  using  these  boundary  conditions  and  AO  raw  data  as  initialization 
guidance  and  solves  towards  the  forecast  solution  dictated  by  the  global/theater  scale  forecast 
boundary  condition  gridded  data.  Thus,  large-scale  flow  patterns  produced  by  the  BFM  will 
automatically  solve  towards  the  global/theater  model's  forecast  solution. 

Looking  ahead,  we  envision  the  use  of  mesoscale  forecast  models  as  part  of  the  LAFP  at  most 
fixed  airfields,  for  test  range  and  shuttle  operations,  and  in  more  civilian  applications  such  as 
air  pollution  episodes,  natural  disasters  and  emergencies,  etc.  In  the  immediate  future,  the 
BFM  offers  the  SWO  deployed  to  an  unfamiliar  AO  the  opportunity  to  accurately  predict  the 
weather  on  the  battlescale  in  real  time  at  resolutions  never  before  possible. 

4.  BFM  IMPLEMENTATION  PLAN 

4.1  Technical  Characteristics 

The BFM selected for inclusion in the IMETS Block II software deliverable was developed to provide operational short-range (≤ 12 hour) forecasts. The BFM is suitable for use within battlescale areas (≤ 500 km x 500 km). The basic equations for the BFM are the conservation relationships for mass, momentum, potential temperature, water vapor mixing ratio, and mean turbulent kinetic energy. The composite influence of diurnally forced solar, atmospheric, and terrestrial radiation effects on evolving Planetary Boundary Layers (PBLs) over complex terrain is accurately simulated by the BFM.

Second-moment¹ mean turbulence equations in the BFM are solved by assuming certain relationships between unknown higher-order turbulence moments and known lower-order moments. Presently, the BFM assumes hydrostatic equilibrium and uses the Boussinesq approximation. The Boussinesq approximation is the assumption that the modeled fluid is


¹A second statistical moment is the double correlation, or normalized covariance, of two turbulent quantities (Stull, 1988).




incompressible to the extent that thermal expansion produces buoyancy (Huschke, 1959; and Houghton, 1985). Buoyancy forces are retained in a hydrostatic basic state with respect to pressure and density [p₀, ρ₀] via the inclusion of small pressure and density deviations [p′, ρ′]. This assumption holds as long as the (ρ₀ + ρ′)/ρ₀ term in the vertical equation of motion is close to unity (i.e., density variations are only considered when they are closely coupled to gravity).
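For reference, a standard textbook statement of this decomposition (our notation; it is not taken from the BFM documentation) is

    \[ p = p_0(z) + p', \qquad \rho = \rho_0(z) + \rho', \qquad \frac{dp_0}{dz} = -\rho_0 g , \]
    \[ \frac{Dw}{Dt} = -\frac{1}{\rho_0}\frac{\partial p'}{\partial z} - g\,\frac{\rho'}{\rho_0} , \]

and the hydrostatic assumption further reduces the vertical momentum equation to \( \partial p'/\partial z = -\rho' g \).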

4.2 BFM Initialization Interface

The  BFM  X-window  interactive  initialization  interface,  summarized  in  fig.  1,  will  provide  users 
with  the  following  flexibility  when  initializing  all  model  executions: 

a. Users will specify the center of the BFM forecast domain via: i) an input longitude and latitude; or ii) a user-specified Military Grid Reference System (MGRS) input, consisting of a UTM zone, x, and y coordinate; or iii) if a map display is active, by using a mouse to graphically point and click at the desired center point on the map background.

b. Table 1 depicts all possible combinations of grid spacing and grid point array configurations to produce the BFM's horizontal dimensions. These options will be specified by users to structure model domains that can range from 40 x 40 km² to 500 x 500 km². An estimate of the model run-time for the specified number of grid points and grid spacing will be displayed on the screen with a display of the selected grid extent before the user executes the BFM program.

c. The BFM is initialized with user-selected data inputs. All available and current data will be listed for the automatically defaulted model initialization time. The default initialization times may commence at any hourly interval from 00:00 UTC (i.e., 00, 01, 02, ... 23 Z), whichever is the most relevant to the local IMETS hardware system time and which can also be supported with currently available data; this time will be posted on the interface screen. The primary Boundary Condition (BC) initialization data, as a function of pressure, consist of alternating, 3-dimensional coarse grid (381 km x 381 km grid spacing) sets of 12, 24, and 36 hour forecasts of wind, temperature, water vapor, and geopotential height.

Currently, boundary condition data are obtained from the Global Spectral Model (GSM),
which is regularly transmitted by the U.S. Air Force (USAF) Global Weather Central via
the Automated Weather Distribution System (AWDS). These GSM data have alternating
valid times of 00:00 and 12:00 UTC for: i) analyses; and ii) 12, 24, 36, and 48 hour
forecast  fields.  Future  plans  call  for  obtaining  boundary  conditions  from  the  Navy 
Operational  Global  Atmospheric  Prediction  System  (NOGAPS)  if  it  replaces  the  GSM 
as  the  AWDS'  global  model,  and  finally  a  higher  resolution  theater  level  model,  such  as 
NORAPS,  when  one  becomes  operational  on  AWDS. 




[Figure 1 interface mockup: panels for selecting the model domain center (latitude/longitude, MGRS/UTM x and y, or mouse point-and-click on the map), grid spacing (2.0, 5.0, or 10.0 km), grid array size (21x21, 31x31, 41x41, or 51x51), and initialization date/time/hour; displays of the estimated model run-time and the date/time groups of the initial and final boundary condition, upper-air (UA), and surface (SFC) data; and switches to accept the setup parameters and execute the BFM.]

Figure 1. BFM initialization user interface concept.


Table 1. BFM domains in km² as a function of user-specified BFM grid spacing.

  Grid Spacing |          Number of BFM Grid Points (array size)
               |    21x21          31x31          41x41          51x51
  -------------+---------------------------------------------------------
  2 km         |  40x40 km²      60x60 km²      80x80 km²     100x100 km²
  5 km         | 100x100 km²    150x150 km²    200x200 km²    250x250 km²
  10 km        | 200x200 km²    300x300 km²    400x400 km²    500x500 km²
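
The entries in Table 1 follow directly from the grid geometry: an N x N array with spacing d spans (N - 1) x d km on a side. A minimal sketch of that relationship (a hypothetical helper, not part of the BFM or IMETS software):

    def bfm_domain_extent_km(n_points: int, spacing_km: float) -> float:
        """Edge length (km) of a square BFM domain with n_points per side."""
        # e.g. 21 points at 2 km -> 40 km; 51 points at 10 km -> 500 km (cf. Table 1)
        return (n_points - 1) * spacing_km

    for spacing in (2.0, 5.0, 10.0):
        print(spacing, [bfm_domain_extent_km(n, spacing) for n in (21, 31, 41, 51)])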

Typically, the set of GSM data necessary to produce a 12 hour BFM prediction will
consist of 12, 24, and 36 hour GSM forecasts. This is due to the receipt time-lag of
GSM data in the field, which is typically ≥ 4 hours. For example, a user may desire to run
the BFM for a twelve hour period commencing at 07:00 UTC and ending at 19:00 UTC
on January 28th. To produce hourly boundary conditions for this forecast, three GSM
forecasts will be required (i.e., the 12, 24, and 36 hour GSM forecasts): between 07:00
UTC and 12:00 UTC January 28th, hourly boundary conditions will be produced via
interpolation between the GSM 12 hour forecast valid at 00:00 UTC January 28th and
the 24 hour GSM forecast valid at 12:00 UTC January 28th; and finally, for the period
between 12:00 and 19:00 UTC on January 28th, BFM hourly boundary conditions will be
produced by interpolating between the GSM 24 hour forecast valid at 12:00 UTC
January 28th and the 36 hour GSM forecast valid at 00:00 UTC on January 29th.
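
A minimal sketch of this hourly time interpolation (linear weighting between two bounding GSM valid times; the function name, array shapes, and NumPy usage are illustrative assumptions, not the IMETS implementation):

    import numpy as np

    def interp_boundary_condition(field_t0, field_t1, t0_hr, t1_hr, t_hr):
        """Linearly interpolate a 3-D GSM field to an intermediate valid time."""
        w = (t_hr - t0_hr) / (t1_hr - t0_hr)   # 0 at t0, 1 at t1
        return (1.0 - w) * field_t0 + w * field_t1

    # e.g., the 07:00 UTC boundary condition from forecasts valid 00:00 and 12:00 UTC:
    # bc_07z = interp_boundary_condition(gsm_valid_00z, gsm_valid_12z, 0.0, 12.0, 7.0)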

During BFM execution, GSM data sets (Δt ≥ 12 hours) are linearly interpolated in time
and 3-dimensional space to produce hourly forecast boundary condition data that
coincide with the selected BFM time and space domain. The date/time group (or
non-availability) of GSM, and/or radiosonde upper-air, and/or surface data applicable to the
selected model domain will be listed on the user's display. If GSM data are not
available, radiosonde data with or without surface data can be used as initialization fields
to produce short term forecasts of up to six hours. The user has interactive window
mouse/switch control (e.g., □ = an unselected option, ■ = an activated option) over the
selection of the possible initialization observations, GSM analyses, and forecast data
stream to the BFM in this mode. The capability to review and manually edit radiosonde
and surface BFM input observation data will also exist.

After  all  initialization  inputs  satisfy  the  user's  specifications  for  the  model  forecast  run, 
the  user  starts  execution  of  the  BFM  via  a  window  interface  switch.  Actual  model 
execution  begins  only  after  an  automated  initialization  data  program  transforms  GSM, 
surface  and  upper  air  data  into  a  BFM  compatible  format.  Upon  selecting  this  switch, 
if  the  model  domain  has  been  altered  from  the  previous  BFM  forecast  run,  the  user  will 
be  prompted  to  load  the  terrain  data  from  a  Defense  Mapping  Agency  compact  disk 
applicable  to  the  model  domain  selected.  After  this  operation  is  completed,  model 
execution  commences. 

4.3  BFM  Output  Interface 

Upon completion of the model forecast run, a BFM X-window interactive output interface,
summarized  in  figure  2,  will  provide  users  with  the  following  BFM  forecast  data  output  review 
capabilities: 

a. The principal BFM calculations consist of forecasted 3-dimensional: 1) u and
v horizontal wind vector components; 2) potential temperature; and 3) liquid water
potential, which is the combination, within each grid volume, of all predicted:
i) liquid water; and ii) water vapor converted to liquid water. These 3-dimensional
forecast fields will be saved at 3 hour intervals over a 12 hour forecast period
commencing at the user-specified initialization time.

b.  In  the  field  data  output  mode,  users  will  be  able  to  graphically  analyze  BFM 
wind  speed  and  direction,  ambient  temperature,  and/or  the  relative  humidity  via  line 
and/or  shaded  contours  (including  velocity  vectors  and/or  streamlines  for  wind 
fields) at 7 different levels: i) 10 m above ground level (AGL); ii) 250 m AGL; iii)
500 m AGL; iv) 1,000 m AGL; v) 1,500 m AGL; vi) at the 700 mb constant
pressure  surface;  and  vii)  at  the  500  mb  constant  pressure  level.^ 

c.  Users  will  also  be  able  to  graphically  review  vertical  profiles  of  wind  speed  and 
direction,  temperature,  and/or  relative  humidity  (in  percent)  at  user  specified  points 
within  the  model  domain.  These  profiles  are  constructed  by  vertically  interpolating 
results  between  the  model  domain  terrain  ground  level  and  the  highest  model  level, 
using a linear interpolation scheme. Users will be able to select any point within
the model domain for vertical profile output, using a mouse to point and click
on the map background and/or manual keyboard entry of point location
data. Relevant radiosonde and/or surface observation sites within selected
BFM model domains will be graphically identified on the map background display
along with the locations of domain-bounded GSM grid data points.

d. As indicated in the Parameter Selection decision frame (fig. 2), the option to
select two-dimensional horizontal field predictions of the occurrence of fog and/or
stratus is also planned for inclusion in the IMETS Block II BFM. These outputs
will be presented in planar Cartesian coordinates above sea level, unlike the
remaining parameters, which are in either terrain following Sigma coordinates (10
m - 1,500 m AGL) or pressure coordinates (700 mb and 500 mb surfaces).

Planar Cartesian coordinates correspond more closely to the horizontal stratification of
fog and/or stratus along geopotential surfaces than do terrain following Sigma or
pressure surfaces.

5.0  FUTURE  PLANS 

ARL/BED  plans  to  provide  beta  releases  of  the  BFM  to  the  Tactical  Fusion  Systems  Branch, 
Software  Engineering  Directorate  (SED),  Communication  and  Electronics  Command  Research 
and  Development  Engineering  Center  (CERDEC),  Fort  Huachuca,  Arizona  during  1-3QFY95 
for testing and evaluation. The final version of the BFM for the IMETS Block II will be
delivered to SED/CERDEC in 4QFY95 with supporting software documentation to include
software design specifications, software test procedures, a software user's guide, and verification
and validation statistics/reports. SED/CERDEC will then integrate the BFM software into the
IMETS Block II and, in partnership with Air Force weather personnel, accredit it for operational
use.


^ Mesoscale atmospheric models are typically designed to focus attention primarily on
internal PBL dynamics and interactions near the earth's surface. As a result, global or synoptic
scale models are more suitable for making high altitude (e.g., ≤ 500 mb) transport and diffusion
predictions, which do not fluctuate significantly compared to the same predictions in the PBL.




[Figure 2 interface mockup: panels for selecting the time into the forecast period (analysis, +03, +06, +09, or +12 h forecast), the parameter (horizontal winds, temperature, relative humidity, or 2-d fog/stratus prediction) and its units, field data versus vertical profile data, the output level (10, 250, 500, 1,000, or 1,500 m AGL; 700 mb; 500 mb), line or shaded contours, wind field vectors or streamlines, and mouse or manual (Long/Lat or MGRS) selection of the vertical profile location; the model initialization date/time is also displayed.]

Figure 2. BFM output user interface concept.


Since  the  objective  mesoscale  model  referred  to  in  paragraph  2.0  will  probably  not  be  available 
until  FY97  or  later,  ARL/BED  plans  to  improve  the  BFM  for  the  IMETS  Block  II  and  III,  with 
improvements  delivered  to  SED/CERDEC  in  FY96  and  FY97  as  they  become  available. 
Planned  improvements  include  extending  the  BFM  maximum  forecast  limit  of  12  hours  to  24 
hours  in  the  IMETS  Block  III  delivery,  better  quality  control  and  editing  of  input  data,  and 
increased  output  parameters  to  include  turbulence  and  icing  index,  temperature  and  moisture 
advection,  vorticity,  visibility,  precipitation,  improved  cloud  predictions,  and  meteorological 
satellite data assimilation. Other I/O interface refinements, resulting from the anticipated
maturing of a "user feedback-product improvement cycle," will also be implemented. Once the
objective model matures, it will replace the BFM, provided its performance in comparison
to the BFM justifies replacement.




REFERENCES 


Houghton,  D.D.  (Editor),  1985:  Handbook  of  Applied  Meteorology,  John  Wiley  &  Sons,  1461  pp. 

Huschke,  R.E.  (Editor),  1959:  Glossary  of  Meteorology,  American  Meteorological  Society,  638  pp. 

Liou, Chi-Sang, R. Hodur, and R. Langland, 1994: "NORAPS: A Triple Nest Mesoscale Model,"
Proceedings of the Tenth American Meteorological Society Conference on Numerical
Weather Prediction, Portland, Oregon.

Stull, R.B., 1988: An Introduction to Boundary Layer Meteorology, Kluwer Academic Publishers,
666 pp.

Yamada, T., and S. Bunker, 1989: "A Numerical Model Study of Nocturnal Drainage Flows with
Strong Wind and Temperature Gradients," Journal of Applied Meteorology, Volume 28,
545-554.




DEVELOPMENT  AND  VERIFICATION  OF  A  LOW-LEVEL  AIRCRAFT 
TURBULENCE  INDEX  DERIVED  FROM  BATTLESCALE  FORECAST  MODEL  DATA 

Major  David  I.  Knapp  and  MSgt  Timothy  J.  Smith 
Operating  Location  N,  Headquarters  Air  Weather  Service 
White  Sands  Missile  Range,  NM 


Robert Dumais

U.S.  Army  Research  Laboratory 
White  Sands  Missile  Range,  NM 


ABSTRACT 

Improving  the  accuracy  of  low-level  aircraft 
turbulence  forecasts  is  addressed  using  high  resolution 
gridded  data  and  terrain  fields  to  derive  localized 
horizontal  and  vertical  wind  flow  patterns.  Two 
objective upper-level aircraft turbulence indices are
tested  at  lower  levels  using  mesoscale  model  data  in  an 
effort  to  calculate  "first  guess"  estimates  of 
potential  low-level  turbulence  areas.  The  Turbulence 
Index  (TI)  is  the  product  of  two  independent  terms, 
vertical  wind  shear  and  the  sum  of  the  horizontal 
deformation  and  convergence.  The  Panofsky  Index  (PI) 
is  a  function  of  horizontal  wind  speed  and  the 
Richardson  Number.  The  TI,  its  independent  terms,  and 
the  PI  are  calculated,  stratified  and  evaluated  using 
multivariate  linear  regression  for  specific  low-level 
layers  across  three  CONUS  mesoscale  regions  for  20 
cases  from  January  to  April  1993.  Using  results  from 
the  regression  analyses,  new  low-level  turbulence 
forecast  equations  are  proposed  for  future  refinement 
and  verification. 


1. INTRODUCTION

Staff  Weather  Officers  and  forecasters  supporting  U.S. 
Army  aviation  operations  provide  low-level  turbulence 
analyses  and  forecasts  for  fixed  and  rotary  wing  aviation 
missions.  New  advancements  in  the  use  of  "smart"  munitions 
and unmanned aerial vehicles sensitive to turbulence make
these forecasts even more important to mission accomplishment.
Forecasters generally rely on empirical low-level turbulence
forecast rules that have been used for years, resulting in the
habit of consistently underforecasting or overforecasting
suspected turbulence areas.




Upper-level  instabilities  believed  to  cause  turbulence 
have  been  approximated  using  the  components  of  Petterssen's 
(1956)  frontogenesis  equation.  Mancuso  and  Endlich  (1966) 
found  that  the  deformation  and  vertical  wind  shear  components 
of  this  equation  were  independently  correlated  with  the 
frequency  of  moderate  or  severe  turbulence.  Ellrod  and  Knapp 
(1992)  went  further  by  deriving  a  turbulence  index  (TI)  based 
on certain assumptions applied to Petterssen's equation. Assuming
that frontogenesis results in an increase in vertical wind
shear (VWS), horizontal deformation (DEF), and horizontal
convergence (CVG), a similar increase in turbulence
occurrence should also be expected. For a given layer the
index is stated as:

TI = VWS x (DEF + CVG),     (1)

The range of TI values associated with turbulence occurrence
was found to be model-dependent, based on grid resolution and
other physical and dynamic parameter calculations unique to
each model. Typical TI values ranged from 1.0 to 12.0
(x 10^-7 s^-2), with the highest values correlated with moderate and
greater turbulence intensities.
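
A minimal sketch of eq. (1) on a single layer of gridded winds, using the standard finite-difference forms of the terms (Ellrod and Knapp, 1992); the function name, array layout, and use of NumPy are illustrative assumptions rather than the study's code:

    import numpy as np

    def turbulence_index(u_lo, v_lo, u_hi, v_hi, dz_m, dx_m, dy_m):
        """TI = VWS x (DEF + CVG) for one layer of 2-D wind fields (m/s)."""
        u, v = 0.5 * (u_lo + u_hi), 0.5 * (v_lo + v_hi)   # layer-mean winds
        du_dy, du_dx = np.gradient(u, dy_m, dx_m)          # axis 0 = y, axis 1 = x
        dv_dy, dv_dx = np.gradient(v, dy_m, dx_m)

        deformation = np.sqrt((du_dx - dv_dy) ** 2 + (dv_dx + du_dy) ** 2)
        convergence = -(du_dx + dv_dy)
        vws = np.sqrt((u_hi - u_lo) ** 2 + (v_hi - v_lo) ** 2) / dz_m

        return vws * (deformation + convergence)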

At  levels  below  upper-level  jet  stream  maxima,  Ellrod 
and  Knapp  found  the  TI's  performance  to  be  unreliable.  This 
was  attributed  to  coarse  synoptic  scale  model  grid-point 
resolution  that  missed  the  more  subtle  features  in  low-level 
wind  fields  contributing  to  turbulence  occurrence.  The 
resolution problem is addressed here by deriving the TI from
mesoscale model grid data. The purpose of this study is to
evaluate the TI and its individual component indices at lower
levels. This is part of a project to find or develop an
accurate objective low-level turbulence forecast technique
for future use by Air Force Staff Weather Officers supporting
the U.S. Army.

2. MODEL AND TURBULENCE TECHNIQUE DESCRIPTIONS

The Higher Order Turbulence Model for Atmospheric
Circulations (HOTMAC) was used to provide a mesoscale nowcast
analysis of meteorological variables using observed upper-air
and surface data reported at rawinsonde locations. This
analysis is produced by horizontally interpolating observed
rawinsonde wind data onto the mesoscale grid with a
resolution of 20 km. In the vertical, wind observations are
interpolated to predefined levels. Utilizing a
transformation to a terrain-following vertical coordinate
system, 22 staggered levels are retained to allow for high
resolution (Yamada and Bunker, 1989), with eight levels
within the first 1,000 feet AGL, five levels from 1,000-5,000
feet AGL, and eight levels from 5,000-16,000 feet AGL. (NOTE:
Hereafter, levels AGL will be in hundreds of feet, i.e.,
010-050, 050-160, etc.) Rawinsonde data reported at 00 UTC
provide the raw input to the mesoscale objective analysis for
this study. The TI is calculated from HOTMAC's u and v wind
components at each grid point in the horizontal and vertical.
Layer average values of VWS, DEF, and CVG are used for the
final TI calculation at every grid point for each prescribed
layer.


3. TECHNIQUE VERIFICATION

The  TI  has  been  tested  using  data  in  the  Huntsville, 
Chicago,  and  Denver  mesoscale  regions  (Fig  1)  for  twenty 
00  UTC  cases  during  February,  March,  and  April  1993.  Point 
verification  was  used  to  verify  the  TI's  effectiveness  using 
pilot  reports  (PIREPs)  from  2130  UTC  to  0230  UTC  for  each 
case  studied.  PIREP  data  included  reports  of  turbulence 
level (s)  and  intensity,  as  well  as  reports  of  no  turbulence 
for  each  case.  If  a  PIREP  did  not  contain  any  turbulence 
remarks,  it  was  not  counted  as  a  valid  report.  A  program  was 
written  to  extract  values  for  the  TI  and  its  components  at 
the  grid  point  closest  to  each  verifying  report.  Turbulence 
reports  in  the  vicinity  of  thunderstorms  were  not  included  in 
the  verification  process.  Data  from  the  National  Lightning 
Detection  Network  were  used  to  filter  out  these  reports  in  or 
near  active  cloud-to-ground  lightning  strikes  during  the 
times  studied.  Reports  from  heavier  aircraft  (i.e.,  civilian 
airliners  and  military  transports)  were  also  eliminated  from 
the  database  to  keep  findings  specific  to  the  lighter-weight 
aircraft  used  by  the  Army.  The  majority  of  the  reports  in 
the  database  were  from  these  lighter  aircraft,  resulting  in 
only  5%  of  the  reports  being  filtered  out. 

Turbulence  reports  were  assigned  numerical  values  based 
on  intensity  to  establish  correlation  coefficients  when 
compared to grid point output from each of the techniques.
Intensities,  abbreviations,  and  corresponding  numerical 
values  are  listed  in  Table  1.  Correlation  coefficients  (r) 
for  the  TI  at  different  turbulence  intensities  in  all  three 
regions  studied  for  the  010-040  and  040-070  levels  are  shown 
in Tables 2 and 3, respectively. The TI is shown to be a
poor  indicator  of  NEG  and  LGT  intensities.  However,  for 
reports  of  LGT-MDT  or  greater  turbulence,  r-values  centered 
around  .80  show  a  good  relationship  between  calculated  TI 
values  and  intensities  for  both  layers  below  070. 
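
For context, the r values reported here and in Tables 2 through 5 are presumably ordinary (Pearson) correlation coefficients between the Table 1 intensity codes and the collocated index values; a minimal sketch with hypothetical numbers (not the study's data):

    import numpy as np

    intensities = np.array([0, 1, 2, 3, 2, 0, 4, 1])                    # Table 1 codes at PIREP points
    ti_values   = np.array([1.2, 2.5, 6.0, 8.1, 5.5, 0.9, 10.3, 3.0])   # collocated TI values

    r = np.corrcoef(intensities, ti_values)[0, 1]
    print(f"r = {r:.2f}")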




Figure  1.  Mesoscale  regions  used  for  turbulence  study. 
RAOB  locations  indicated  with  3-letter  identifiers. 


Table 1. Turbulence intensities and abbreviations with
associated numerical values.

  Intensity          Pilot Report    Numerical Value
  ---------------    ------------    ---------------
  No Turbulence      NEG or NIL      0
  Light              LGT             1
  Light-Moderate     LGT-MDT         2
  Moderate           MDT             3
  Moderate-Severe    MDT-SVR         4
  Severe             SVR             5
  Severe-Extreme     SVR-XTR         6
  Extreme            XTR             7

Table 2. Correlation coefficients (r) relating
turbulence parameters to turbulence intensities from
010-040 in the three mesoscale regions studied for 20
cases from February-April 1993. INTENSITY values taken
from Table 1. # is the number of pilot reports in the
sample.

  INTENSITY    #     TI     VWS    [DEF+CVG]    VWS AND [DEF+CVG]
  ---------   ---   ----   ----   ----------   -----------------
  ALL          54   .58    .21      .51              .51
  0-1          37   .22    .07      .21              .21
  >1           17   .80    .49      .80              .81
  >2            9   .80    .32      .81              .86

Table 3. Same as Table 2, for the 040-070 layer.

  INTENSITY    #     TI     VWS    [DEF+CVG]    VWS AND [DEF+CVG]
  ---------   ---   ----   ----   ----------   -----------------
  ALL          66   .58    .24      .58              .59
  0-1          48   .10    .04      .07              .08
  >1           18   .79    .49      .77              .78
  >2           11   .84    .64      .76              .80

4.  TOWARD  AN  IMPROVED  TURBULENCE  INDEX 

In a case study of moderate and greater turbulence
reports occurring across the Huntsville region (Knapp et
al., 1993), turbulence reports occurred in regions of varied
VWS and DEF+CVG. Extreme turbulence was reported
near the center of an area of maximum DEF+CVG with a minimum
of VWS. Reports of moderate and moderate-severe turbulence
occurred in local maxima of VWS with only moderate values of
DEF+CVG.  No  distinctive  pattern  from  either  term  could  be 
seen  as  dominating  the  other.  Based  on  this  case,  and  on  the 
19  other  cases  examined  for  each  mesoscale  region,  r-values 
were  also  calculated  independently  for  each  of  the  TI's  terms 
(VWS  and  DEF+CVG) .  These  are  also  summarized  in  Tables  2  and 
3.  As  for  the  TI,  VWS  and  DEF+CVG  performed  poorly  as 
turbulence  indicators  for  the  NEG  and  LGT  intensities,  while 
statistics  dramatically  improve  for  greater  intensities.  In 
both  layers,  DEF+CVG  correlated  better  with  turbulence 
intensity  than  did  VWS,  implying  the  importance  of  horizontal 
wind  flow  changes  in  low-level  turbulence  generation.  Notice 
also  that  VWS  r-values  increased  significantly  in  the  higher 
layer  (Table  3) .  DEF+CVG  compared  favorably  to  the  TI  as  an 
independent  low-level  LGT-MDT  and  greater  turbulence 
indicator. Treating the TI's two independent terms together
in a multivariate analysis shows this new combination
slightly outperforming the TI from 010-040 (Table 2), with the
reverse occurring from 040-070 (Table 3).

Another index considered was the Panofsky Index (PI),
which has been used by the Navy to forecast low-level
turbulence up to 850 mb (Boyle, 1990). The formula for this
index is

PI = (windspeed)^2 x [1.0 - Ri/Ri_crit]     (2)

where windspeed is the average speed in the prescribed layer
in m/s, Ri is the Richardson Number, and Ri_crit is a critical
Richardson Number. This critical value should theoretically be .25
for very fine scale data, but in this study (using 3,000 foot
layers), as well as for the Navy's purposes (1000 mb - 850 mb
layer), the best empirically derived value is 10.0. The PI
takes into account vertical wind shear as well as the
vertical lapse rate by virtue of the Richardson Number.
Studies for the Huntsville region included the PI as an
additional turbulence index. As an independent term
considered by itself, r-values for the PI were insignificant
for all layers. However, when treating it as an independent
term in a multivariate analysis with DEF+CVG, r-values exceed
all others at both low levels studied (Tables 4 and 5).
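
A minimal sketch of eq. (2) for a single layer; the bulk-Richardson-number form below is a common textbook definition and an assumption here (the paper does not spell out how Ri is computed), and the function name and inputs are illustrative:

    import numpy as np

    G = 9.81  # m/s^2

    def panofsky_index(u_lo, v_lo, u_hi, v_hi, theta_lo, theta_hi, dz_m, ri_crit=10.0):
        """PI = (layer-mean wind speed)^2 * (1 - Ri/Ri_crit), after eq. (2)."""
        speed = 0.5 * (np.hypot(u_lo, v_lo) + np.hypot(u_hi, v_hi))     # layer-mean speed (m/s)
        shear_sq = ((u_hi - u_lo) ** 2 + (v_hi - v_lo) ** 2) / dz_m ** 2
        theta_mean = 0.5 * (theta_lo + theta_hi)
        ri = (G / theta_mean) * (theta_hi - theta_lo) / dz_m / np.maximum(shear_sq, 1e-10)
        return speed ** 2 * (1.0 - ri / ri_crit)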




Table  4.  Same  as  Table  2,  for  the  010-040  layer  in  the 
Huntsville  Region. 


Table  5.  Same  as  Table  4,  for  the  040-070  layer. 


Based  on  the  high  correlation  coefficients  shown  in 
Tables  2  through  5,  new  indices  for  predicting  low-level 
turbulence  intensities  are  derived  as  linear  regression 
equations.  Using  data  for  the  010-040  layer  which  produced 
the  r-values  depicted  in  Tables  2  and  4  for  all  turbulence 
intensities,  the  following  equations  can  be  derived  as  a 
starting  point  for  future  refinements  and  verification: 

Y = .0065(TI) + 1.0888     (3)

Y = .0078(DEF+CVG) - .0003(VWS) + 1.074     (4)

Y = .0089(DEF+CVG) + .0006(PI) + 1.1751     (5)

where Y is turbulence intensity as defined in Table 1; TI,
VWS, and DEF+CVG are in units as depicted in Figs 3, 4, and
5, respectively; and the PI is not scaled.
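
As a worked illustration of eqs. (3)-(5), evaluated with the coefficients quoted above (the helper function is hypothetical, and the inputs must already be in the units of the original figures):

    def turbulence_intensity_estimates(ti, def_cvg, vws, pi):
        """First-guess turbulence intensity (Table 1 scale) from eqs. (3)-(5)."""
        y3 = 0.0065 * ti + 1.0888
        y4 = 0.0078 * def_cvg - 0.0003 * vws + 1.074
        y5 = 0.0089 * def_cvg + 0.0006 * pi + 1.1751
        return y3, y4, y5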

5. CONCLUSIONS

A  previously  validated  upper-level  aircraft  turbulence 
index  (TI)  was  studied  using  mesoscale  model  gridded  analysis 
data  in  an  effort  to  develop  a  useful  objective  low-level 
index for use by military forecasters. The TI output was
shown to be strongly correlated with LGT-MDT or greater
intensity for light-weight aircraft. A TI case study
examining  the  specific  contributions  of  each  term  of  the  TI 
was  accomplished.  This  led  to  further  correlating  turbulence 
occurrence  and  intensity  for  each  pilot  report  independently 
with  each  term  of  the  TI.  DEF+CVG  proved  to  be  comparable  to 
the TI as a turbulence indicator. Combining DEF+CVG with VWS
as  two  independent  variables  in  a  regression  improved 
performance  as  correlation  coefficients  exceeded  those  of  the 
TI  from  010-040.  Another  tool,  the  Panofsky  Index  (PI),  was 
combined  as  an  independent  term  with  DEF+CVG  to  produce 
correlation  coefficients  which  exceed  all  others. 

New  turbulence  potential  tools  derived  as  multivariate 
linear  regression  equations  were  proposed  for  further  study. 
These  equations  will  be  refined  by  increasing  the  database 
from  which  they  were  derived  with  additional  data  from  the 
winter  1994  season.  Final  equations  will  then  be  verified 
for  both  intensity  and  horizontal  extent  of  turbulence 
forecast  areas  using  an  independent  data  set  from  1992. 


REFERENCES 

Boyle, J.S., 1990: Turbulence Indices Derived From FNOC
Fields and TOVS Retrievals, NOARL Technical Note 47,
Naval Oceanographic and Atmospheric Research Laboratory,
Stennis Space Center, MS 39529-5004.

Ellrod,  G.P.,  and  D.I.  Knapp,  1992:  "An  Objective  Clear-Air 
Turbulence  Forecasting  Technique:  Verification  and 
Operational  Use."  Wea.  Forecasting,  7:  150-165. 

Knapp,  D.I.,  T.J.  Smith,  and  R.  Dumais,  1993:  "Evaluation  of 
Low-Level  Turbulence  Indices  on  a  Mesoscale  Grid."  In 
Proceedings  of  the  1993  Battlefield  Atmospherics 
Conference,  U.S.  Army  Research  Laboratory,  White  Sands 
Missile  Range,  NM  88002-5501,  pp  501-514. 

Mancuso,  R.L.,  and  R.M.  Endlich,  1966:  "Clear  Air  Turbulence 
Frequencies  as  a  Function  of  Wind  Shear  and 
Deformation."  Mon.  Wea.  Rev.,  94:  581-585. 

Petterssen,  S.,  1956:  Weather  Analysis  and  Forecasting, 

Vol  1.  McGraw-Hill  Book  Co.,  428  pp. 

Yamada,  T.,  and  S.  Bunker,  1989:  "A  Numerical  Model  Study  of 
Nocturnal  Drainage  Flows  With  Strong  Wind  and 
Temperature Gradients." J. Appl. Meteor., 28:545-553.


CURRENT  AND  FUTURE  DESIGN  OF  U.  S.  NAVY  MESOSCALE  MODELS  FOR 

OPERATIONAL  USE 

1994  BATTLEFIELD  ATMOSPHERICS  CONFERENCE 


R.  M.  Hodur 

Naval  Research  Laboratory 
Monterey,  CA  93940-5502 


ABSTRACT 

The  Naval  Research  Laboratory  (NRL),  which  has  developed,  implemented,  and  improved 
Navy  operational  mesoscale  models  for  over  10  years,  plans  a  series  of  further  significant 
improvements.  The  current  operational  system,  the  Navy  Operational  Regional  Atmospheric 
Prediction  System  (NORAPS),  has  produced  over  35,000  operational  forecasts  over  the  past 
12  years.  This  has  been  possible  through  the  computer  support  and  cooperation  with  personnel 
at  the  Fleet  Numerical  Meteorology  and  Oceanography  Center  (FNMOC),  co-located  with  NRL 
in  Monterey.  NORAPS  is  a  complete  data  assimilation  system  that  contains  4  major 
components:  1)  quality  control  to  maintain  consistency  and  integrity  of  the  incoming  data,  2) 
multivariate  optimum  interpolation  analysis,  3)  nonlinear  vertical  mode  initialization,  and  4) 
a hydrostatic forecast model with physical parameterizations for cumulus, radiation, and the
planetary  boundary  layer.  Three  major  design  changes  are  being  developed  and  tested.  The 
first  is  the  incorporation  of  horizontally  nested  grids,  which  will  allow  for  high-resolution  (10 
km  or  less)  over  limited  domains.  The  second  is  the  development  of  a  nonhydrostatic 
atmospheric  model  which  will  replace  the  hydrostatic  model  in  NORAPS  within  the  next  two 
years.  The  third  project  is  a  redesign  of  the  operational  mesoscale  system  so  that  it  can  be 
ported  to  other  mainframes  and/or  workstations  with  little  or  no  modifications.  This  will  allow 
for  use  of  the  prediction  system  by  other  labs,  universities,  regional  forecast  centers,  and 
aboard  Navy  ships.  These  design  changes  will  ensure  that  the  U.  S.  Navy  maintains  state  of 
the  art  mesoscale  forecasting  capabilities,  particularly  in  littoral  regions,  for  the  years  to  come. 


1.  INTRODUCTION 

The  U.  S.  Navy  is  poised  to  move  into  a  new  era  of  operational  numerical  weather  prediction 
(NWP).  The  first  era  began  over  20  years  ago  when  FNMOC  implemented,  and  began 
running  on  a  twice-daily  basis,  a  hemispheric  model  based  on  the  primitive  equations  using  a 
grid  spacing  of  381  km  and  5  levels  (Kesel  and  Winninghoff  1972).  The  focus  of  this  model 
was  to  provide  3  day  forecasts  of  systems  of  synoptic-scale  and  larger.  The  next  era  in  U.  S. 




Navy  NWP  began  about  a  decade  later  with  the  introduction  of  the  Navy  Operational  Global 
Atmospheric  Prediction  System  (NOGAPS,  Rosmond  1981),  the  Navy  Operational  Regional 
Atmospheric  Prediction  System  (NORAPS,  Hodur  1982),  and  the  arrival  of  high-speed  vector 
processors.  The  purpose  of  NOGAPS  was  to  provide  3-5  day  global  forecasts  for  systems  of 
synoptic-scale  and  larger.  NORAPS,  on  the  other  hand,  using  a  horizontal  grid-spacing 
approximately  one-third  that  of  NOGAPS,  was  implemented  to  produce  24-48  h  forecasts  over 
given  regions  of  the  world.  The  NORAPS  forecasts  were  used  as  an  "early-look"  when  run 
before  NOGAPS,  and  to  provide  mesoscale  forecast  information  due  to  its  higher  resolution. 
Over  the  past  decade,  both  NOGAPS  and  NORAPS  have  improved  due  to  increased  computer 
size and memory, and also due to improvements in the prediction systems themselves. Now,
with  the  arrival  of  another  generation  of  supercomputers  featuring  vector  and  parallel 
processing,  the  Navy  is  preparing  for  another  era  of  improved  NWP  products.  The 
improvements  to  NOGAPS  and  computer  technology  have  already  been  such  that  NOGAPS 
is  now  capable  of  mesoscale  forecasts  previously  performed  by  NORAPS.  This  implies  that 
we  must  push  mesoscale  modeling  to  higher  resolutions,  which  calls  for  a  redesign  of  the 
model  equations  and  parameterizations.  It  is  expected  that  these  improvements  to  our 
mesoscale  effort  will  lead  to  NWP  forecasts  that  can  directly  support  future  battlescale 
missions. 

The  purpose  of  this  paper  is  to  describe  the  current  and  future  design  of  Navy  mesoscale  NWP. 
A  description  of  the  currently  used  mesoscale  model,  NORAPS,  is  given  in  Section  2.  A 
description of the system that will replace NORAPS, the Coupled Ocean/Atmosphere
Mesoscale Prediction System (COAMPS), is given in Section 3. Section 4 presents the work
being  conducted  to  allow  for  the  use  of  NORAPS  and/or  COAMPS  on  a  workstation.  A 
summary  is  presented  in  Section  5. 


2.  NORAPS 

The  structure  of  NORAPS  is  basically  unchanged  from  that  described  by  Hodur  (1987),  that 
is,  NORAPS  is  a  mesoscale  data  assimilation  system  with  four  major  components:  quality 
control,  analysis,  initialization,  and  forecast  model.  The  system  is  designed  so  that  it  can  be 
run over any region of the world using any of the following map projections: Mercator,
Lambert  Conformal,  polar  stereographic,  or  spherical.  The  grid  spacings  and  grid  dimensions 
can  be  set  to  any  value  within  the  speed  and  memory  limitations  of  the  computer  system. 
Global  databases  of  terrain,  surface  roughness,  albedo,  and  ground  wetness  are  bilinearly 
interpolated  to  the  model  grid  for  each  forecast  application. 
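
A minimal sketch of that bilinear interpolation step for one model grid point (a generic textbook formulation; the array layout, regular lat/lon spacing, and function name are assumptions, not the NORAPS FORTRAN):

    import numpy as np

    def bilinear_interp(field, lat, lon, lat0, lon0, dlat, dlon):
        """Bilinearly interpolate a regular lat/lon field to (lat, lon)."""
        y = (lat - lat0) / dlat          # fractional row index
        x = (lon - lon0) / dlon          # fractional column index
        j, i = int(np.floor(y)), int(np.floor(x))
        wy, wx = y - j, x - i
        return ((1 - wy) * (1 - wx) * field[j, i]
                + (1 - wy) * wx * field[j, i + 1]
                + wy * (1 - wx) * field[j + 1, i]
                + wy * wx * field[j + 1, i + 1])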

The NORAPS analysis is now based on the multivariate optimum interpolation (OI) technique
using the volume method similar to that described by Lorenc (1986). The D-values and u- and
v-components are analyzed at 10, 20, 30, 50, 70, 100, 150, 200, 250, 300, 400, 500, 700, 850,
925,  and  1000  mb.  Wind  observations  are  used  from  radiosondes,  pibals,  aircraft  reports 
including  ACARS  data,  SSM/I,  surface  reports  over  the  water,  and  cloud  track  winds.  D- 
values  and/or  thicknesses  are  obtained  from  radiosondes,  and  DMSP  and  NOAA  satellites.  All 
data are subject to the complete quality control (QC) system, which includes the National
Meteorological Center's (NMC) complex QC of radiosondes (Collins and Gandin 1990) as well
as  the  more  traditional  QC  techniques  described  in  Baker  (1992)  and  Norris  (1990). 

The NORAPS hydrostatic forecast model is based on the primitive equations using the sigma-p
vertical coordinate system. The horizontal grid is the Arakawa and Lamb (1977) staggered
"C" grid. The nonlinear vertical mode initialization described by Bourke and McGregor (1983)
has been incorporated as an integral part of the forecast model. Recent improvements of the
physical parameterizations include the use of the Louis (1982) surface layer parameterization,
the Detering and Etling (1985) turbulence parameterization, and the Harshvardhan et al. (1987)
radiation scheme, which includes cloud interactions.

Several  improvements  of  NORAPS  are  currently  being  worked  on.  The  first  is  horizontally 
nested grids, which will allow for high resolution (10 km or less) over given areas of interest.
Another advantage of the nested grid structure is to move the boundary zone, where NORAPS
and NOGAPS fields are blended together, as far away from the area of interest as possible.
The second improvement is the ability to predict aerosols. Currently, we allow for generation
of sea-salt aerosols over water, specification of a point source of aerosols at any point in the grid,
advection, diffusion, fallout, and rainfall scavenging. The third improvement is in the PBL
parameterization  in  which  we  are  examining  methods  to  introduce  counter-gradient  flux  terms 
into  the  model,  thereby  giving  us  more  realistic  temperature  and  moisture  profiles. 


3.  COAMPS 

The  use  of  a  hydrostatic  mesoscale  model,  such  as  NORAPS,  with  resolutions  finer  than  about 
10 km, can pose problems in certain situations. These occur when the hydrostatic assumption
is violated, i.e., when vertical accelerations become significant. Events such as convection,
sea  breezes,  and  topographic  flows  often  exhibit  strong  nonhydrostatic  effects  and  these  need 
to  be  included  for  proper  simulation.  To  account  for  these  effects,  NRL  is  developing  a  new 
mesoscale  model  using  the  fully  compressible  form  of  the  primitive  equations  following  Klemp 
and  Wilhelmson  (1978).  This  model  is  the  atmospheric  component  of  the  Coupled 
Ocean/ Atmosphere  Mesoscale  Prediction  System  (COAMPS).  The  other  component  of 
COAMPS is a hydrostatic ocean model (Chang 1985). The two models can be used separately
or in a fully coupled mode. Although the ocean model plays a vital role in the basic research
we  conduct  with  COAMPS,  the  remainder  of  this  section  will  focus  on  the  details  and  plans 
for  the  nonhydrostatic  atmospheric  model  only. 

COAMPS  has  been  designed  to  make  the  transition  from  a  hydrostatic  model  within  NORAPS 
to  a  nonhydrostatic  model  as  easy  as  possible.  This  has  been  done  by  incorporating  many  of 
the  details  already  in  NORAPS  into  COAMPS.  This  includes  the  data  QC  and  multivariate  01 
analysis.  COAMPS  also  has  the  same  global  relocatability  features  found  in  NORAPS,  and 
the  user  can  set  the  type  of  grid  projection,  the  number  of  nested  grids  (a  maximum  of  3  is 
allowed),  as  well  as  the  grid  dimensions  and  resolutions  of  each  mesh. 

The COAMPS atmospheric model is based on the sigma-z vertical coordinate. The prognostic
variables are the u-, v-, and w-components of the wind, the perturbation Exner function (related
to perturbation pressure), potential temperature, water vapor, cloud droplets, raindrops, ice
crystals, snowflakes, and turbulent kinetic energy (TKE). The choice of five moisture variables
allows for the explicit prediction of clouds, rain, and snow, using the Rutledge and Hobbs
(1983) explicit moist physics scheme. For resolutions coarser than 5-10 km, such as for the
coarser meshes of a nested grid simulation, cumulus parameterization must still be used. For
this, we have included a scheme developed for mesoscale convective events (Kain and Fritsch
1990; Kain 1993). The 1-1/2 order TKE prediction scheme presented by Deardorff (1980) is
used. The Harshvardhan et al. (1987) radiation scheme, used in NORAPS, is also included
in  COAMPS. 


4.  PORTABILITY 

Until  very  recently,  the  computational  power  needed  to  execute  numerical  models  such  as 
NORAPS  or  COAMPS  existed  only  on  large  mainframes.  However,  workstation  technology 
has  now  improved  to  the  point  where  these  models  can  be  tested  on  them,  although  the  best 
performance  is  still  on  vectorized,  multi-processor  machines.  Given  the  pace  of  workstation 
technology,  it  is  expected  that  over  the  next  few  years,  realistic,  operationally  useful  forecasts 
will  be  produced  on  workstations. 

To  take  advantage  of  this  emerging  technology,  NRL  is  leading  a  program,  the  purpose  of 
which is to make the mesoscale prediction systems, NORAPS and COAMPS, easy to port to
other  systems.  The  benchmark  operational  systems  will  still  reside  at  NRL/FNMOC  in 
Monterey,  but  execution  of  a  single  program  can  build  a  file  containing  all  the  source  code  and 
the  surface  parameters  databases  that  are  required  to  run  and  install  either  system.  This  file 
can  then  be  sent,  via  tape,  internet,  etc.,  to  another  machine  in  which  the  install  program  is 
used  to  install  the  system.  At  this  point,  the  remote  site  must  have  the  ability  to  get  fields  for 
the  first  guess  and  boundary  conditions,  as  well  as  observational  data. 

Of  course,  there  are  certain  constraints  on  the  portability  of  these  systems.  First,  the  source 
code  is  written  in  the  FORTRAN  language.  Currently,  we  adhere  to  FORTRAN  77  standards, 
but  will  be  transitioning  to  FORTRAN  90  standards  over  the  next  year  or  so.  Second,  the 
prediction  systems  require  the  use  of  dynamic  memory  allocation.  While  this  is  a  standard 
feature  in  FORTRAN  90,  it  only  exists  as  extensions  on  some  FORTRAN  77  compilers. 
Third,  it  is  required  that  the  operating  system  be  UNIX,  or  a  compatible  version,  such  as 
UNICOS,  on  Cray  machines.  Generalized  scripts  to  execute,  build,  and  install  NORAPS  and 
COAMPS  are  written  for  the  UNIX  operating  system.  In  addition,  the  generalized  database 
that  we  use  in  these  systems  is  based  on  the  existence  of  a  UNIX  environment. 

The portability of NORAPS and COAMPS is rapidly becoming a reality. Recently, we have
ported  each  prediction  system  to  other  Cray  systems  and  have  been  able  to  perform  data 
assimilation  experiments  the  same  day.  Porting  to  smaller  workstations  is  still  under 
development.  The  long  term  goal  is  to  be  able  to  use  NORAPS  or  COAMPS  for  data 
assimilation  at  a  regional  center  or  onboard  a  Navy  ship.  The  first  step  toward  accomplishing 




this  goal  will  be  taken  in  the  SHAREM  110  exercise  in  the  Gulf  of  Oman  during  February 
1995.  During  this  time,  NORAPS  will  be  run  at  FNMOC  for  the  Gulf  of  Oman  area  and  the 
forecast  fields  will  be  sent  to  a  remote  station  near  the  Gulf  of  Oman.  There  will  be  a 
workstation at this location on which the NORAPS multivariate OI analysis is installed.
Analyses  will  be  generated  at  this  site  using  the  NORAPS  forecast  fields  for  the  first  guess  and 
all  on-scene  observations.  This  will  serve  as  a  test  for  the  communication,  personnel, 
hardware,  training,  and  timing  necessary  to  extend  this  to  a  full  on-scene  predictive  capability. 

5.  SUMMARY 

The  U.  S.  Navy  is  committed  to  improving  its  mesoscale  NWP  capabilities.  The  major  focus 
is  on  improving  its  efforts  in  the  numerical  prediction  of  mesoscale  events  in  littoral  regions. 
Recent  improvements  to  our  current  operational  mesoscale  model,  NORAPS,  such  as  improved 
boundary  layer  and  radiation  parameterizations  and  horizontally  nested  grids  make  it  a  practical 
choice  for  now.  We  are  also  developing  a  new  mesoscale  forecast  system,  COAMPS,  which 
will  use  all  the  functionality  already  found  in  NORAPS,  but  which  is  also  better  suited  to 
mesoscale  prediction  since  it  uses  a  nonhydrostatic  formulation  and  can  perform  explicit 
prediction  of  precipitation  processes.  The  operational  switch  from  NORAPS  to  COAMPS  is 
expected  to  occur  within  the  next  2  years.  Finally,  both  NORAPS  and  COAMPS  have  been 
designed  so  as  to  be  used  on  systems  other  than  mainframe  supercomputers.  This  feature 
makes  these  systems  attractive  for  porting  to  other  mainframes  for  research  or  to  workstations 
at  other  labs,  regional  centers,  or  onboard  Navy  ships. 


ACKNOWLEDGMENTS 

The support of the sponsors, the Office of Naval Research under program element 0602435N, and
Space  and  Naval  Warfare  Systems  Command  under  program  element  0603207N,  is  gratefully 
acknowledged. 


REFERENCES 

Arakawa, A., and V. R. Lamb, 1977: "Computational design of the UCLA general circulation
model." Methods in Computational Physics, Vol. 17, Academic Press, pp 173-265.

Baker, N. L., 1992: "Quality control for the Navy operational atmospheric database." Wea.
Forecasting, 7:250-261.

Bourke, W., and J. L. McGregor, 1983: "A nonlinear vertical mode initialization scheme for a
limited area prediction model." Mon. Wea. Rev., 111:2285-2297.

Chang, S. W., 1985: "Deep ocean response to hurricanes as revealed by an ocean model with
free surface. Part I: Axisymmetric case." J. Phys. Oceanogr., 15:1847-1858.

Collins, W. G., and L. S. Gandin, 1990: "Comprehensive hydrostatic quality control at the
National Meteorological Center." Mon. Wea. Rev., 118:2152-2167.

Deardorff, J. W., 1980: "Stratocumulus-capped mixed layers derived from a three-dimensional
model." Bound.-Layer Meteor., 18:495-521.

Detering, H. W., and D. Etling, 1985: "Application of the E-ε turbulence model to the
atmospheric boundary layer." Bound.-Layer Meteor., 33:113-133.

Harshvardhan, R. Davies, D. Randall, and T. Corsetti, 1987: "A fast radiation
parameterization for atmospheric circulation models." J. Geophys. Res., 92:1009-1016.

Hodur, R. M., 1982: "Description and evaluation of NORAPS: The Navy Operational Regional
Atmospheric Prediction System." Mon. Wea. Rev., 110:1591-1602.

Hodur, R. M., 1987: "Evaluation of a regional model with an update cycle." Mon. Wea.
Rev., 115:2707-2718.

Kain, J. S., and J. M. Fritsch, 1990: "A one-dimensional entraining/detraining plume model
and its application in convective parameterization." J. Atmos. Sci., 47:2784-2802.

Kain, J. S., 1993: "Convective parameterization for mesoscale models: The Kain-Fritsch
scheme." The Representation of Cumulus Convection in Numerical Models, Meteor.
Monogr. No. 24, Amer. Meteor. Soc., pp 165-170.

Kesel, P. G., and F. J. Winninghoff, 1972: "The Fleet Numerical Weather Central operational
primitive-equation model." Mon. Wea. Rev., 100:360-373.

Klemp, J., and R. Wilhelmson, 1978: "The simulation of three-dimensional convective storm
dynamics." J. Atmos. Sci., 35:1070-1096.

Lorenc, A. C., 1986: "Analysis methods for numerical weather prediction." Quart. J. Roy.
Meteor. Soc., 112:1177-1194.

Louis, J. F., M. Tiedtke, and J. F. Geleyn, 1982: "A short history of the operational PBL
parameterization at ECMWF." Workshop on Planetary Boundary Layer Parameterization,
ECMWF, Reading, pp 59-79. [Available from The European Centre for Medium-
Range Weather Forecasts, Shinfield Park, Reading RG2 9AX, U.K.]

Norris, B., 1990: "Preprocessing and general data checking and validation." Meteorological
Bulletin of the ECMWF, M1.4-3, European Centre for Medium-Range Weather
Forecasts, Reading, U.K.

Rosmond, T. E., 1981: "NOGAPS: Navy operational global atmospheric prediction system."
Preprints Fifth Conf. Numerical Weather Prediction, Monterey, CA, pp 74-79.

Rutledge, S. A., and P. V. Hobbs, 1983: "The mesoscale and microscale structure of
organization of clouds and precipitation in midlatitude cyclones. VIII: A model for the
"seeder-feeder" process in warm-frontal rainbands." J. Atmos. Sci., 40:1185-1206.




COMBAT  WEATHER  SYSTEM  CONCEPT 


Mr  James  L.  Humphrey 

Science  Applications  International  Corporation 
Headquarters  Air  Weather  Service 
Scott  Air  Force  Base,  Illinois  62225-5206 


Maj  George  A.  Whicker,  Capt  Robert  E.  Hardwick, 
2nd  Lt  Jahna  L.  Wollard,  SMSgt  Gary  J.  Carter 
Headquarters  Air  Weather  Service 
Scott  Air  Force  Base,  Illinois  62225-5206 


ABSTRACT 


Current weather information is critical in a deployed environment; however, in a combat
environment it can mean the difference between mission success and failure. Air Force units supporting
Joint  Forces  Commander,  Air  Force,  and  Army  combat  operations  require  the  means  to  produce 
and  apply  environmental  information  to  support  the  employment  of  military  power.  The  Combat 
Weather  System  (CWS)  will  enable  users  to  provide  combat  and  support  forces  the  required 
timely  and  accurate  global,  theater,  and  local  weather  information  for  effective  planning, 
deployment,  employment,  and  redeployment  in  response  to  worldwide  crises.  The  CWS  will 
integrate highly capable automated weather observing and forecasting systems into a lightweight,
easily transportable system capable of meeting the CWS critical mission, which is to
support  launch  and  recovery  of  aircraft.  Headquarters  Air  Weather  Service,  Directorate  of 
Program  Management  and  Integration  is  the  CWS  Standard  Systems  Manager  and  is  the 
interface  with  the  Implementing  Agency,  Electronic  Systems  Center,  Weather  Systems  Division. 
There will be two major components of CWS: observing and forecasting. The observing
components will provide accurate observations of more weather elements and distribute these
data  more  quickly  than  current  systems  to  the  weather  forecast  system.  The  forecasting  portion 
provides  a  platform  on  which  forecasters  can  integrate  observations  and  generate  tailored 
forecast  information  quicker  and  make  it  readily  available  for  operational  customers  through  the 
automated  Theater  Battle  Management  Command,  Control,  Communications,  Computer  and 
Intelligence systems. The mission areas supported by CWS will be, for the Air Force: Counter Air,
Strategic Attack, Interdiction, Close Air Support, Strategic Airlift, Aerial Refueling,
Aeromedical Evacuation, Operation Support, Airlift, Electronic Combat, Surveillance and
Reconnaissance, Special Operations, Base Operability and Defense, and Logistics; and for the Army:
Aviation, Air Defense, Close Combat (Heavy/Light), Land Combat Engineering Support,
Special  Operations,  Fire  Support,  Biological,  and  Chemical  operations.  The  CWS  is  being 
acquired to provide the warfighter, planner, and commander with current, timely, and accurate
weather information.




1.  INTRODUCTION 


Air  Force,  Army,  Joint  and  Combined  warfighters  require  detailed  weather  observations  and 
forecasts  across  the  depth  and  breadth  of  the  combat  zone  to  refine  mission  tactics  and  to 
manage  combat  resources.  The  system  to  meet  these  needs  must  be  a  small,  lightweight, 
modular  system  for  maximum  functionality.  It  must  be  durable,  quickly  activated,  packaged  for 
rapid deployment, and field maintainable. The system's modular design must allow for an initial
deployment capability that can be expanded, as required, to a more capable system.


Figure 2.1. Proposed CWS System




The  system  must  also  provide  responsive,  reliable,  accurate  weather  information  in  near  real 
time  directly  to  the  warfighter/decisionmakers.  Combat  Weather  System  (CWS)  was  proposed 
and developed to meet these needs. The CWS was to have two components: the Tactical Forecast
System (TFS) and the Tactical Weather Observing System (TWOS). Recent funding cuts have
canceled all funding for CWS beyond FY95. As a result, CWS, as originally planned, has been
canceled. The system was to be fielded in FY96 to FY99. While Air Weather Service (AWS) is
no longer able to field a deployed weather system as defined by the operational users in the
CWS Operational Requirements Document (ORD) 211-89-I/III, there still remain valid
mission needs. USAF Statement of Operational Need (SON) 211-89, Tactical Weather
Observing  Systems  (TWOS),  5  Mar  90,  and  USAF  SON  212-89,  Tactical  Forecast  System 
(TFS),  3  Aug  90,  define  the  needs.  AWS  has  developed  a  plan  to  acquire  and  field  a  TFS  and 
TWOS  capability  that  will  satisfy  the  stated  needs  for  initial  combat  operations. 


2.  REQUIREMENTS 


The  primary  objective  is  to  integrate  highly  capable  forecasting  and  automated  weather 
observing  systems  with  combat  planning  and  execution  systems  (Figure  2.1).  The  TFS  and 
TWOS  will  enhance  the  effectiveness  of  combat  operations  by  improving  the  capability  of 
deployed  weather  forces  to  produce  comprehensive  and  timely  weather  decision  products  for 
combat  zone  commanders,  planners,  and  aircrews.  The  critical  and  most  basic  mission  for  the 
TFS  and  TWOS  is  to  support  launch  and  recovery  of  aircraft  by:  providing  tailored  weather 
support,  which  includes  receipt  of  data,  ingesting,  displaying,  processing,  and  disseminating  data 
and  products  (excluding  administrative  functions);  and  observing  weather  elements  (cloud 
height,  cloud  amount,  surface  wind  speed  and  direction,  surface  visibility,  surface  free  air 
temperature,  and  surface  pressure).  This  primary  objective  and  critical  mission  must  be  met 
while  meeting  the  users  stated  set-up/tear-down  times,  weight  and  size  requirements.  The  set- 
up/tear-down  requirement  is  for  two  people  in  full  chemical  protective  clothing  to  be  able  to 
complete  either  flmction  in  6  or  less  hours.  The  size  requirement  is  for  the  system  to  fit  within 
the  2/5  standard  463L  airlift  pallet.  The  TWOS  will  provide  accurate  observations  of  weather 
elements  and  distribute  these  data  more  quickly  than  current  systems  permit  to  the  weather 
forecast  system  and  operational  customers.  In  turn,  the  TFS  will  integrate  these  observations 
and  generate  tailored  forecast  information  quicker  and  distribute  it  faster  to  operational 
customers  through  the  automated  Theater  Battle  Management  (TBM)  Command,  Control, 
Communication,  Computer,  and  Intelligence  (C4I)  systems.  The  weather  operator  will  use  the 
TFS  to  build  weather  products  to  meet  the  needs  of  the  Air  Force's  C4I  systems.  The  main 
weather  data  source  will  be  Air  Force  Global  Weather  Central  (AFGWC)  via  long  haul 
communications  reach  back  capability.  TFS  will  be  interoperable  (be  able  to  exchange  data) 
with  other  services;  e  g..  Navy,  environmental  support  systems  at  the  AF  Component  Theater 
Weather  Center  (TWC),  and  at  lower  levels  as  appropriate.  Additionally,  TFS  must  respond  to 
single  points  of  failure  both  within  and  outside  the  theater  and  degrade  gracefully  to  a  point 
where  it  will  flmction  with  whatever  data  is  available  (e.g.,  complete  data  set,  set  of  theater 
observations  only,  or  single-station).  The  TFS  must  produce  the  most  accurate  analysis  and 




forecast fields that current state-of-the-art technology and science can provide, to meet operational
customer requirements.


3.  OPERATIONAL  CONCEPT 


TFS  will  provide  a  first-in  and  eventually  a  sustainment  capability  for  the  conduct  of  weather 
operations  and  will  be  interoperable  with  automated  C4I  systems  such  as  the  Contingency 
Theater  Automated  Planning  System  (CTAPS),  Wing  Command  and  Control  System  (WCCS), 
and  Command  and  Control  Information  Processing  System  (C2IPS)  (Figure  3.1). 


Figure  3.1  CWS  Communications  and  Systems  Flow  Chart 


Figures  3.2  and  3.3  show  examples  of  TFS  display  screens. 




Figure  3.2  TFS  screen  in  quadrant  view 


Figure 3.3 TFS screen with pop-up menu


Weather  data  in  the  C4I  weather  data  base  will  be  in  a  standard  relational  data  base  format  that 
will  enable  personnel  on  C4I  user  positions  to  overlay  weather  products  on  products  from  any 
other  functional  area.  C4I  customers  can  use  their  systems  to  access  weather  information 
(observations,  forecasts,  warnings,  and  advisories,  etc.)  and  locally  generated  mission  forecast 
products.  In  addition,  weather  operators  using  the  TFS  will  make  available  gridded  weather 
data  fields  on  the  C4I  data  base  for  use  by  automated  mission  planning  and  intelligence 
applications.  The  TFS  will  also  allow  weather  operators  to  access  and  display  information  from 
the C4I data base to generate mission-specific forecasts; e.g., display flying operations data from
the  Air  Tasking  Order  (ATO)  and  target  information  from  the  Intelligence  Summaries  to  tailor 
and  provide  departure,  enroute,  and  recovery  flight  weather  briefings.  TFS  fielding  will  allow  a 
shift  in  weather  operator  duties;  a  decrease  in  face-to-face  support  and  an  increase  in  weather 
data  base  interpretation,  manipulation,  and  systems  management  duties.  However,  weather 
operators  will  still  be  required  to  augment  automated  weather  observations  and  provide  direct 
support  to  customers  upon  their  request.  Exact  configuration  of  the  TFS  and  TWOS 
deployed/employed  will  vary  depending  on  the  mission  and  customer  supported. 
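To make the overlay concept above concrete, the short sketch below shows how a gridded weather parameter might sit in a relational table and be pulled out for overlay by a C4I user position. It is only an illustrative sketch in Python using SQLite; the table name, column names, and parameters are assumptions, not the actual C4I weather data base schema.

    import sqlite3

    # Illustrative schema (assumed, not the fielded one): one row per grid
    # point, parameter, and valid time.
    conn = sqlite3.connect(":memory:")
    conn.execute("""CREATE TABLE wx_grid (
                        valid_time TEXT, parameter TEXT,
                        lat REAL, lon REAL, value REAL)""")

    # The TFS ingest would populate the table; two hypothetical sample rows.
    conn.executemany("INSERT INTO wx_grid VALUES (?, ?, ?, ?, ?)",
                     [("1994-12-01T00:00", "cloud_base_ft", 33.5, 44.0, 2500.0),
                      ("1994-12-01T00:00", "visibility_mi", 33.5, 44.0, 3.0)])

    # A C4I user position could then select one parameter for overlay on a
    # product from another functional area (e.g., an ATO route plot).
    rows = conn.execute("""SELECT lat, lon, value FROM wx_grid
                           WHERE parameter = 'visibility_mi'
                             AND valid_time = '1994-12-01T00:00'""").fetchall()
    print(rows)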


4.  PROGRAM  EXECUTION 


Under  the  revised  phased  program  (Figure  4.1),  the  software  developed  under  the  original  CWS 
program,  exploiting  existing  Automated  Weather  Distribution  System  (AWDS)  and  Combat  Air 
Forces  Weather  Software  Package  (CAFWSP)  software,  would  become  the  TFS  software 
baseline.  AWS  would  then  purchase  the  CAF  standard  hardware  on  which  the  TFS  software 
would  be  hosted.  Under  this  plan,  the  total  number  of  TFSs  would  be  reduced  to  44  systems. 


[Figure 4.1 Program Schedule — schedule chart of program events: TFS Software Baseline, TFS Hardware (44 systems), Mod TFS Software, Additional TFS Hardware, and Mod Existing TACMET.]


These systems will be single user positions versus the three-position systems in the original program. This represents a great reduction in total numbers; however, operational users would have a standard, deployable, state-of-the-art system by late 1995. This would ensure compatibility


139 


between MAJCOMs in the deployed environment. In 1997 the plan is to acquire the remaining TFSs to meet the user's requirements. Additionally, AWS would seek to modify the TFS Software Baseline in the FY98-01 time frame to incorporate changes to the AWDS software and to integrate a tactical automated observing capability. This tactical automated observing capability would be achieved via modification programs to replace and upgrade existing Tactical Meteorological (TACMET) systems, i.e., the Transportable Cloud Height Detector (GMQ-33), Tactical Meteorological Observing System (TMQ-34), and Tactical Wind Measuring Set (TMQ-36). These modified systems may be modular enough to meet most of the original TWOS requirements.

5.  CONCLUSION 


In today's environment of shrinking budgets, new and innovative approaches to meeting operational requirements will have to be considered and employed. As a representative of the user, AWS is deeply committed to meeting the user's operational requirements. With the planned phased approach, a modicum of success can be salvaged from a program broken by fiscal constraints and a redirection of national priorities.


6.  REFERENCES 

Air Force PMD 2326(3)/PE0604707F/0305111F/0305117F/0305123F, Program Management Directive for the Weather System (WXSYS) - IWSM, 20 Jun 94.

USAF Statement of Operational Need (SON 211-89), Tactical Weather Observing Systems (TWOS), 5 Mar 90.

USAF SON 212-89, Tactical Forecast System (TFS), 3 Aug 90.

CWS Operational Requirements Document I (ORD I), 26 Mar 93.

Air Force Systems Command/Military Airlift Command Mission Area Analysis, Weather 2000, 20 Sep 84.

Air Force Systems Command Electronic Systems Division Technical Alternatives Analysis, 30 Sep 91.

Concept Paper for Weather Support to Air Force Theater Operations 1995-2005, 5 May 92.


140 




SMALL  TACTICAL  TERMINAL  (STT)  CONCEPTS  AND  CAPABILITIES 


2Lt  Stephen  T.  Barish 

Mr. George N. Coleman III, Maj Tod M. Kunschke
Directorate  of  Systems  and  Communications 
Headquarters  Air  Weather  Service 
Scott  Air  Force  Base,  Illinois  62225-5206 


ABSTRACT 

Real-time satellite imagery is vital to first-in deployed troops and aircraft; it can enhance the mission's effectiveness and ensure a maximum safety margin for deployed personnel. It is often the only source of meteorological data available upon deployment. As such, Air Force weather teams require the capability to receive, process, and display real-time satellite imagery in order to support Army, Air Force, and Joint Force combat operations world-wide. The Small Tactical Terminal (STT) will provide users with a stand-alone platform to receive and process real-time polar orbiting meteorological satellite data from the Defense Meteorological Satellite Program (DMSP) and National Oceanic and Atmospheric Administration (NOAA) satellites. It will receive near real-time geostationary weather facsimile (WEFAX) data from both foreign and domestic satellites. There are three configurations of the STT, each designed for use during specific phases of any conflict. The Basic STT is a first-in asset intended for rapid deployment to the theater and receives low-resolution imagery. The Enhanced STT consists of a modular kit added to the Basic STT and will be used in the sustainment phase of the operation. It adds the capability to receive high resolution data from DMSP and NOAA satellites. The Joint Task Force Satellite Terminal (JTFST) consists of a modular kit added to the Enhanced STT and is a sustainment phase asset intended to provide weather support to the theater commander. It adds the capability to receive high resolution data from geostationary satellites. All STT configurations will interface with the following weather forecasting systems: Tactical Forecast System (TFS), Transportable Automated Weather Distribution System (TAWDS), and Integrated Meteorological System (IMETS). They will provide the warfighter, mission planner, and commander real-time satellite imagery for use in operations from the first-in through sustainment phases of a conflict. Headquarters Air Weather Service, Directorate of Systems and Communications is the STT User Representative and interface to the Implementing Agency, Space and Missile Center, Defense Meteorological Satellite Program.


1.  INTRODUCTION 

The  United  States  Air  Force  and  Army  provide  forces  for  the  world-wide  conduct  of  combat 
operations.  Military  personnel  operate  in  both  peace-time  and  war-time  scenarios.  Past 
experience  has  shown  a  direct  correlation  between  mission  effectiveness  and  accurate, 


141 




dependable  knowledge  of  present  and  future  weather  conditions.  In  a  war-time  scenario,  the 
ability  to  observe  and  forecast  weather  conditions  in  both  the  local  area  of  operations,  as  well 
as  over  hostile  territory,  vastly  enhances  the  ability  of  Army  and  Air  Force  pilots  to 
successfully  complete  their  missions  and  return  home  safely.  In  peace-time,  accurate  weather 
observations  give  pilots  critical  information  needed  to  fly  sorties  safely,  while  ground  forces’ 
mobility can be enhanced significantly.

One of the most effective sources of real-time weather data, and the sole source over enemy-
held  territory,  is  meteorological  satellite  data.  The  Air  Force  has  long  recognized  the  need  of 
its  combat  planners,  commanders,  and  pilots  to  have  recent,  accurate,  and  dependable  weather 
observations  in  order  to  accomplish  its  mission.  However,  the  Air  Force  has  a  capability 
shortfall  in  meeting  its  combat  weather  operations  commitments. 

The  lessons  learned  from  Operation  DESERT  SHIELD/DESERT  STORM  showed  current 
tactical  direct  satellite  readout  terminals  do  not  meet  the  full  needs  of  deployed  forces.  Current 
systems  do  not  provide  a  fast  enough  data  refresh  rate,  often  have  limited  data  reception,  and 
are  too  large  to  be  easily  transported.  Additionally,  current  systems  require  too  many 
personnel  to  operate.  While  interim  systems  were  procured  to  fill  the  gap  in  the  short  run,  a 
long-term  solution  was  needed.  Thus,  Headquarters  USAF  directed  Air  Weather  Service 
(AWS)  and  Space  and  Missile  Center  (SMC)  to  acquire  a  small,  light-weight,  tactical,  semi- 
automated,  two-person  transportable  tactical  satellite  imagery  receiver,  the  Small  Tactical 
Terminal  (STT). 

2.  OPERATIONAL  CONCEPT 

In today's global society, threats to the United States' national security and foreign interests can crop up virtually anywhere with very little notice. The way the Air Force mobilizes has
changed  to  reflect  this.  Combat  forces  must  now  deploy  with  little  to  no  notice  to  any  location 
world-wide,  taking  with  them  their  support  organizations  and  services.  Weather  operations 
are  no  exception. 

Weather  teams  in  the  deployed  environment  will  retrieve  their  equipment  from  the  airlift  or 
other  mode  of  transportation  used  to  transport  them  in-theater.  In  general,  there  will  not  be  an 
established  source  of  meteorological  data.  This  is  particularly  true  for  those  Air  Force  weather 
personnel  dedicated  to  supporting  Army  units.  Once  deployed  to  a  secure  area,  a  weather 
team  will  set  up  for  operations.  The  STT  will  be  one  of  the  first  weather  systems  deployed. 

It  will  provide  real-time  satellite  imagery,  and  enable  weather  teams  to  begin  performing  their 
duties  within  an  hour  of  arrival.  It  will  operate  24  hours  per  day,  for  a  minimum  duration  of 
30  days  without  re-supply,  with  no  dependence  on  outside  communications.  It  is  light-weight, 
highly  reliable,  and  easily  maintainable,  all  of  which  enhance  its  capability  as  a  combat  system. 

The  STT  will  operate  in  three  configurations,  each  with  its  own  concept  of  operations.  The 
Basic STT (BSTT) (Fig 2.1), weighing 470 pounds, is the bare-base system. It is deployed
with  the  initial  cadre  of  weather  forecasters  and  observers  to  deployed  units  in-theater.  The 
BSTT  gives  weather  teams  low  resolution  satellite  data  from  a  variety  of  sources,  both  foreign 


142 




and domestic. It allows weather teams to give mission planners, commanders, and pilots up-to-the-minute satellite imagery over the theater.


Figure 2.1: Basic STT


The  Enhanced  STT  (ESTT)  configuration  is  composed  of  the  BSTT  and  an  Enhancement  Kit. 
The  ESTT  receives  high-resolution  imagery  from  polar  orbiting  satellites  in  addition  to  the  low 
resolution  data  available  on  the  BSTT.  The  enhancement  kit  will  typically  follow  the  BSTT  to 
the  theater  within  the  first  30  days  of  deployment,  although  it  can  be  deployed  at  any  time. 
The  ESTT  weighs  790  pounds  and  is  intended  to  operate  as  a  sustainment  asset. 


Figure 2.2: Enhanced STT


143 




The  final  configuration,  the  Joint  Task  Force  Satellite  Terminal  (JTFST),  is  a  modular  addition 
to  the  ESTT.  The  JTFST  receives  high-resolution  data  from  geostationary  satellites.  The 
JTFST kit will deploy during the first 30 days and will be used at theater weather centers and
Joint  Task  Force  command/control  centers.  This  configuration  is  still  in  its  design  phase,  but 
certain  key  physical  parameters  are  known.  The  completed  JTFST  will  weigh  less  than  3000 
lbs.  It  will  receive  high-resolution  geostationary  data.  It  will  allow  operators  to  send  satellite 
products  to  remote  customers  via  a  facsimile  network,  or  via  the  Air  Force  weather  link  into 
the command, control, communications, computers, and intelligence (C4I) network.


[Figure 2.3: Joint Task Force Satellite Terminal — illustration labeling the APT antenna, the RDS/RTD/HRPT antenna, the receiver sub-assembly, and the high resolution display, mouse, and external keyboard.]


3.  MAINTENANCE  CONCEPT 

The  STT  was  designed  to  be  a  highly  reliable  system.  However,  it  also  had  to  be  capable  of 
being  maintained  by  weather  personnel  rather  than  dedicated  technicians.  This  concept  of 
Operator  Maintenance  allows  the  fielding  of  a  new  system  without  additional  maintenance 
personnel.  This  approach  provides  unique  challenges  to  the  program.  First,  there  is  a  design 
issue;  weather  personnel  are  not  trained  as  technicians,  and  the  equipment  and  any 
maintenance  actions  are  designed  with  this  in  mind.  Second,  there  is  a  training  issue;  weather 
personnel  have  never  performed  Operator  Maintenance  before,  and  the  learning  curve  will  be 
steep  initially.  Third,  there  is  a  maintenance  issue;  preventative  maintenance  is  essential  to 
keep  any  equipment  operating  correctly.  The  STT  is  designed  to  simplify  preventative 
maintenance. 

The  solutions  to  these  problems  are  found  in  the  maintenance  concept  and  in  the  system 
design.  In  the  field,  weather  teams  will  remove  and  replace  large  components  of  the  system 


144 




called  Line  Replaceable  Units  (LRUs).  The  STT  LRUs  will  be  large  items,  such  as  a  receiver 
sub-assembly  or  the  computer.  Removal  and  replacement  of  LRUs  will  not  require  weather 
teams  to  open  any  chassis,  or  replace  any  electrical  circuit  cards  or  components.  All  LRUs  can 
be  removed  without  the  use  of  hand  tools.  This  minimizes  the  level  of  technical  expertise 
necessary  to  maintain  the  system. 

When  a  system  fails  in  the  field,  the  operator  will  initiate  a  Built  In  Test  (BIT)  routine,  and 
identify  the  problem.  The  BIT  will  isolate  the  problem,  and  direct  the  operator  to  a  set  of 
actions  to  correct  the  malfunction.  The  operator  will  remove  the  faulty  LRU,  replace  it  with  a 
spare,  and  return  the  faulty  LRU  to  the  supply  office  at  his/her  location. 

The faulty item will then be shipped back to the maintenance depot at Sacramento Air Logistics Center (SM-ALC), McClellan AFB, CA. The depot will repair the faulty LRU in-house or send it to the manufacturer for repair.

Additionally,  when  the  faulty  item  is  shipped  back  to  depot,  a  replacement  spare  will  be 
shipped back to the deployed weather team. As no single removal/replacement action will take longer than 30 minutes, the down time of the system will be minimal. In the event an LRU
fails  and  there  is  no  spare  on  site,  the  system  is  designed  to  be  redundant.  If  any  single  source 
of  data  becomes  unavailable,  the  remaining  data  sources  will  still  be  available.  This 
redundancy  assures  the  operator  mission  capability  in  virtually  all  failure  modes. 

These  maintenance  procedures  will  be  extensively  addressed  during  initial  skills  training,  as 
well  as  through  an  aggressive  recurring  training  program.  Additionally,  the  system's  software 
has  a  built  in  help  program  designed  to  provide  quick  access  to  these  procedures.  The  training 
effort  will  be  minimized  as  the  maintenance  actions  will  be  virtually  identical  with  the  set  up 
and  tear  down  actions. 

The  only  other  maintenance  actions  are  preventative  maintenance  instructions.  These  are 
limited  to  the  cleaning  and  changing  of  filters,  and  occasional  loading  or  cleaning  of  the  printer. 
Otherwise,  the  STT  requires  no  preventative  maintenance.  This  addresses  the  third  issue,  since 
there  will  be  no  calibration  of  the  equipment  necessary  to  keep  it  working  properly. 

4.  THEORY  OF  OPERATION 

The  STT  system  is  designed  as  a  direct  satellite  read  out  terminal.  The  system's  antennas 
receive  telemetry  and  data  from  polar  orbiting  and  geostationary  meteorological  satellites.  The 
receivers  then  synchronize  (bit  synch)  the  data  stream  and  pass  the  data  along  to  the  COMSEC 
equipment  for  decryption  (if  necessary).  Once  the  data  is  decrypted,  the  receiver  then  sends  it 
to  the  processing  equipment  for  framing  into  an  image.  The  data  are  processed  into  visible  and 
infrared  imagery,  and  assorted  meteorological  products.  The  processing  equipment  is  also  the 
operator's  point  of  interaction.  By  using  a  graphical  user  interface  (GUI)  the  operator  can 
manipulate  and  enhance  the  data,  resulting  in  better  observations  and  forecasts. 


145 




The STT hardware is designed to be rugged and transportable. It is capable of operating in extremes from 0°C to 55°C (for the computer equipment; antennas are operable down to -45°C). All connectors are ruggedized, environmentally sealed, and require no tools to
connect.  In  fact,  all  fasteners  and  connectors  on  the  system  are  captive  hardware  and  cannot 
be  separated  from  the  system.  The  entire  system  can  be  assembled  without  using  any  hand 
tools.  Size  and  weight  of  the  hardware  were  minimized  wherever  possible  without  reducing 
effectiveness. 

Much of the hardware exploits state-of-the-art technology. The computer that operates the STT software uses a Sun SPARC 10/41 microprocessor, one of the more powerful microprocessors currently available. This highly capable processor is packaged in a specially designed, ruggedized laptop using an active-matrix color 10.2-inch LCD monitor. In the enhanced configuration, a 16-inch color monitor is added, along with an external keyboard and mouse to allow operators to interact with the machine more effectively. The receivers were designed specifically for this system. Each receiver is contained on a single PC board, all of which are contained in a single chassis. Also contained in the receiver sub-assembly are the Communications Security (COMSEC) devices and a removable hard drive. The COMSEC devices were also specially engineered for this system (to minimize weight). Figure 4.1 shows a block diagram of the system's design.


[Figure 4.1: Enhanced STT Block Diagram — block diagram showing the R/F antenna equipment and COMSEC equipment feeding the receiver (demodulator/bit sync) and the processing equipment (10-inch internal color display, internal keyboard/trackball, RAM, hard disk, floppy drive, serial port), with auxiliary equipment (printer, power inverter, generator, and transit cases); basic and removable enhancement equipment/data paths are distinguished.]


The  robust  hardware  is  complemented  by  a  well-designed  and  stable  software  package.  The 
STT  software  was  designed  in  accordance  with  the  Software  Engineering  Institute's  principles, 
and in accordance with applicable sections of DOD-STD-2167A. This design approach yielded
a  mature  software  package  capable  of  fully  exploiting  the  unique  advantages  of  the  system's 
hardware. 


146 




The  software  operates  in  the  Unix  environment,  is  Unix/POSIX  compliant,  and  uses  an  X- 
Windows/Motif interface. The Graphical User Interface (GUI) allows the operator to quickly
and  easily  perform  a  wide  variety  of  meteorological  analyses  on  the  data,  with  little  wait  time. 
The  true  benefit  of  the  GUI  is  found  in  its  simplicity.  All  functions  of  the  software  can  be 
accessed  quickly,  and  with  a  minimum  of  instructions.  The  GUI  performs  multiple  tasks:  it 
provides  an  interactive  interface  with  the  data,  monitors  equipment  status,  informs  the  operator 
of  equipment  failure,  logs  significant  events  occurring  in  both  hardware  and  software 
(including  both  failure  data  and  corrective  maintenance  data),  and  provides  a  computer  based 
instruction  (CBI)  module  that  covers  set  up  and  tear  down  of  the  equipment,  use  of  the 
COMSEC  devices,  and  maintenance  actions. 

The  software  ingests  data  at  the  same  time  the  operator  is  interactively  analyzing  previous 
passes,  thus  minimizing  time  delays.  Typically,  an  operator  will  be  able  to  analyze  a  pass 
within  2  minutes  of  the  end  of  the  pass.  Any  stored  product  will  be  available  within  1  minute 
of  request,  and  any  printed  hard  copy  will  be  available  within  5  minutes  of  request.  In  short, 
the  system  provides  products  in  a  prompt  fashion  to  allow  the  operator  to  brief  customers  with 
the  latest  data  available. 

5.  IMAGERY  AND  PRODUCTS 

The STT is designed to receive data from both polar orbiting and geostationary satellites. Table 5.1
shows  the  satellite  data  each  configuration  of  the  STT  will  receive. 


Satellite                     BSTT    ESTT    JTFST

DMSP RDS (Vis/IR)              X       X       X

DMSP Microwave Sensors         X       X       X

NOAA APT (Vis/IR)              X       X       X

NOAA HRPT                              X       X

Geostationary (WEFAX)          X       X       X

Geostationary (High-Res)                       X


Table 5.1: STT Data Reception


Polar orbiting satellites provide real-time coverage of an area of interest at a horizontal resolution that is generally greater than that offered by geostationary satellites. There are two types of domestic polar satellites received by the STT: Defense Meteorological Satellite Program (DMSP) satellites and National Oceanic and Atmospheric Administration (NOAA) satellites. The STT also receives selected foreign polar orbiting satellites, including the Russian METEOR and Chinese FENG-YUN.

Geostationary satellites provide near real-time weather facsimile (WEFAX) data at a significantly lower resolution than that available from polar orbiting satellites. However, the geostationary imagery gives weather personnel a quick, synoptic-scale view of the weather in the area of interest. Combined with the ability to set up animation loops, this imagery gives operators a powerful tool to brief their customers. Selected units


147 




receiving the Joint Task Force Satellite Terminal (JTFST) configuration will also have real-time access to high resolution data from geostationary satellites. The STT will receive GOES,
GOES-NEXT,  METEOSAT,  and  GMS  satellites. 

6.  ANALYSIS  TOOLS 

The  STT  provides  weather  teams  with  satellite  analysis  tools  never  before  available  in  the 
combat  environment.  Users  will  be  able  to  enhance  the  data  using  a  variety  of  tools  including 
image  filters  and  color  enhancement.  The  STT  allows  the  user  to  zoom  into  the  image,  in 
effect  magnifying  the  image.  Any  given  image  can  be  displayed  in  the  satellite  (overhead) 
projection,  mercator  projection,  or  polar  stereographic  north/south  projection.  This  allows  the 
operator  to  select  the  best  way  to  view  the  data  for  the  given  location.  The  STT  also  allows 
operators  to  annotate  images  with  meteorological  symbols  and  text,  thereby  adding  to  the 
information  available  to  the  customer.  Additionally,  unique  tools  allow  the  STT  to  position  the 
cursor/mouse  at  specific  points  of  latitude/longitude.  This  ability  is  enhanced  by  the  use  of  the 
Global  Positioning  System  (GPS),  making  the  STT's  latitude/longitude  fixes  extremely 
accurate. 
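As a rough illustration of the kind of mapping involved in placing a cursor fix at a given latitude/longitude on a polar stereographic display, the sketch below applies the standard spherical-Earth north polar stereographic equations. It is a generic textbook formulation, not the STT's actual mapping code, and the example coordinates are arbitrary.

    from math import radians, tan, sin, cos, pi

    EARTH_RADIUS_KM = 6371.0  # mean spherical Earth radius

    def polar_stereo_xy(lat_deg, lon_deg, central_lon_deg=0.0, k0=1.0):
        """Standard spherical north polar stereographic projection:
        returns map-plane x, y in km for a latitude/longitude point."""
        phi = radians(lat_deg)
        lam = radians(lon_deg - central_lon_deg)
        r = 2.0 * EARTH_RADIUS_KM * k0 * tan(pi / 4.0 - phi / 2.0)
        return r * sin(lam), -r * cos(lam)

    # Example: a cursor fix at 35 N, 128 E on a map centered on 125 E.
    print(polar_stereo_xy(35.0, 128.0, central_lon_deg=125.0))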


The  most  innovative  tool  provided  by  the  STT  is  its  ability  to  generate  Environmental  Data 
Records  (EDRs)  from  the  Satellite  Data  Records  (SDRs)  generated  by  the  microwave  sensors 
on  board  the  DMSP  spacecraft.  These  EDRs  give  the  operator  detailed  information  about  the 
environmental  conditions  of  a  region.  These  EDRs  can  be  viewed  as  images,  and  enhanced  as 
such, or they can be viewed as contours and overlaid on top of DMSP imagery. Again, this
capability  can  dramatically  enhance  the  quality  of  forecasts  and  briefings  given  to  customers. 

7.  EXTERNAL  INTERFACES 

All  the  data  in  the  world  is  useless  if  it  cannot  be  used  to  meet  the  customer's  needs.  In  the 
world  of  combat  weather  operations,  many  of  the  customers  are  commanders  and  planners. 
Most of these customers work through the command, control, communications, computers, and intelligence (C4I) wide area network set up in theater. There are several combat weather systems which provide inputs to this network. The Transportable Automated Weather Distribution System (TAWDS), the Integrated Meteorological System (IMETS), and the Tactical Forecast System (TFS) are three with which the STT interfaces. These systems act as primary sources of meteorological satellite data for the entire deployed C4I community.

TAWDS is a currently fielded system that will be modified in late FY95 to accept STT products. These products will be restricted to images, Satellite Data Records (SDRs, the raw output of the microwave sensors), and EDRs. The images will be transmitted as rasters, the SDRs and EDRs as Uniform Gridded Data Fields (UGDFs). This will allow the TAWDS to overlay these UGDFs on the images and/or other products. IMETS is an Army communications system providing access to the C4I network. It broadcasts meteorological data and forecasts to deployed users via a set of high frequency radios. Its software is similar to that resident on the TAWDS, and it will receive the same products. TFS is the combat forecasting system of the future. It will send weather data from a wide variety of sources to the local C4I network. The STT will act as a front end satellite data receiver/processor for this


148 




system. It is important to note that while the STT will have a one-way interface to TAWDS and IMETS, it will have a two-way interface with the TFS, allowing the TFS to remotely log on and operate the STT. This automation allows fewer personnel to accomplish more work in the deployed environment. Raster images and UGDFs will be transmitted to the TFS.

8.  CONCLUSION 

Combat  operations  and  weather  go  hand  in  hand.  Accurate  weather  forecasts  enable  pilots  to 
avoid  dangerous  weather  conditions,  while  ground  forces  are  prepared  to  find  easier  routes  to 
travel,  and  positions  more  advantageous  to  their  mission.  Knowledge  of  future  weather 
conditions  is  of  critical  importance  when  planning  aviation  missions.  Fuel  loads,  flight  safety, 
take-off  and  landing  are  all  central  elements  of  any  aviation  operation,  and  all  are  impacted  by 
the  weather.  On  the  ground  operations  side,  knowledge  of  the  weather  and  surface  conditions 
can  allow  armor  divisions  to  move  more  rapidly,  and  prevent  them  from  being  bogged  down  in 
soft  terrain.  Troop  safety  is  enhanced  by  warning  of  hazardous  weather  conditions,  thus 
enabling  commanders  to  take  protective  actions. 

Many of today's modern weapon systems employ electro-optical guidance systems. Weather
conditions  can  have  significant  impact  on  the  effectiveness  of  these  guidance  systems.  Even 
aircraft  relying  on  free-fall  gravity  bombs,  or  Army  troops  moving  through  a  battlefield  are 
affected  by  weather  conditions,  such  as  visibility  and  precipitation.  Without  a  dependable 
weather  forecast,  mission  planners  may  not  be  able  to  identify  achievable  mission  goals. 
Without  accurate  knowledge  of  current  conditions,  combat  commanders  may  send  their  crews 
into  hazardous  situations  where  their  mission  goals  cannot  be  achieved.  In  today’s  military 
where  budget  limitations  force  the  services  to  operate  more  efficiently,  these  types  of 
limitations  are  unacceptable. 

Throughout  the  history  of  warfare,  accurate  and  dependable  knowledge  of  the  weather  has 
been  critical  to  successful  mission  execution.  When  properly  utilized,  weather  forecasts  can 
act  as  a  force  multiplier,  enhancing  the  combat  effectiveness  of  our  air  and  ground  forces. 
Currently,  weather  personnel  in  the  combat  environment  are  dependent  on  sparse  sources  of 
data  to  support  their  customers.  The  field  of  combat  weather  operations  has  a  critical 
deficiency  in  meeting  the  needs  of  combat  planners,  commanders,  and  pilots. 

Air Weather Service and Space and Missile Center are meeting the challenge. The delivery of
the  STT,  beginning  in  July  1995,  will  substantially  enhance  the  quality  of  today’s  combat 
weather  operations  by  providing  real-time  imagery  and  products  in-theater,  and  by  doing  so  in 
a  largely  automated  fashion.  This  will  allow  combat  weather  personnel  to  concentrate  on  using 
the  data,  rather  than  gathering  it.  The  Small  Tactical  Terminal  will  become  a  mission  critical 
piece  of  equipment,  and  in  conjunction  with  expert  Air  Force  weather  personnel,  will 
dramatically  improve  the  safety  and  effectiveness  of  the  combat  operations  of  the  United  States 
Air  Force  and  Army. 


149 




9.  REFERENCES 

General Operational Requirement For a Pre-Strike Surveillance/Recon System (PRESSURS), MAC 508-78, 28 Dec 78.

Air Force Space Command (AFSPC) Mission Need Statement (MNS) for Environmental Sensing (ES), AFSPC MNS 035-92, 6 Jan 93.

Operational Requirement Document (ORD) for the Follow-On Defense Meteorological Satellite Program (DMSP), AFSPC ORD 035-92-1-A, 27 Dec 93.

Program Management Directive (PMD) for Defense Meteorological Satellite Program (DMSP), PMD 3015(35)/PE030516F/PE0305162F, 3 Sep 93.

Small Tactical Terminal (STT) System/Segment Specification, CDRL A024/DI-CMAN-80008A, Contract F040701-93-C-0007, 11 Apr 94.

Air Force Systems Command/Military Airlift Command Mission Area Analysis, Weather 2000, 20 Sep 84.

Air Force Systems Command Electronic Systems Division Technical Alternatives Analysis, 30 Sep 91.

Concept Paper for Weather Support to Air Force Theater Operations 1995-2005, 5 May 92.


150 


OPERATIONAL  USE  OF  GRIDDED  DATA  VISUALIZATIONS 
AT  THE  AIR  FORCE  GLOBAL  WEATHER  CENTRAL 

Kim  J.  Runk  and  John  V.  Zapotocny 
Headquarters,  Air  Force  Global  Weather  Central 
Offutt AFB, Nebraska 68113


ABSTRACT 

In  modem  weather  support  operations,  the  forecaster  is  forced  to  process  and  assimilate 
tremendous  volumes  of  data  in  a  short  period  of  time.  Thus,  it  is  becoming  increasingly 
important  to  provide  forecasters  with  tools  which  enable  them  to  use  that  information  more 
effectively  to  quickly  and  accurately  assess  the  state  of  the  atmosphere  and  evaluate  the 
meteorological  processes  affecting  the  forecast.  The  Air  Force  Global  Weather  Central 
(AFGWC)  has  experimented  with  several  such  tools  which  convert  gridded  data  sets  into 
image  visualizations  and  animations  for  use  in  the  operational  forecast  routine.  These 
visualization  tools  have  proven  to  be  useful  aids  for  enhancing  a  forecaster’s  ability  to 
assimilate  the  data,  providing  a  greater  sense  of  weather  system  temporal  evolution  and 
numerical  model  continuity.  This  paper  will  discuss  and  illustrate  some  of  the  data 
visualization  methods  which  have  been  developed  at  AFGWC.  Particular  attention  will  be 
given  to  imagery  products  created  from  gridded  data  unique  to  AFGWC,  such  as  the  RWM 
(Relocatable  Window  Model),  the  HALT  (high  altitude  turbulence)  model,  and  SSM/I 
(Special  Sensor  Microwave  Imager)  mosaic  grids. 


1.  INTRODUCTION 

A large assortment of gridded data sets is produced to support operations at AFGWC. Those which are built for use on the Satellite Data Handling System (SDHS), which is the primary delivery system for weather analysis data at AFGWC, are generally formatted into either polar stereographic projection grids, tropical Mercator grids, or cross-sectional grids. Spatial resolution of these grids ranges from 48 km to 381 km, depending on the application. A discussion of the
operational  grid-to-imagery  generation  process  on  the  SDHS  can  be  found  in  Zapotocny  (1993). 

There  are  also  a  number  of  gridded  data  sets  which  are  created  on  Unix-based  workstations  for  the 
purpose  of  providing  supplemental  data  visualizations  to  AFGWC  forecasters.  Since  these  tools 
are  delivered  as  prototypes,  forecasters  can  provide  critical  feedback  to  programmers,  thereby 
participating  in  the  definition  and  refinement  of  the  final  software  configuration. 


151 


2.  VISUALIZING  OBSERVATIONAL  DATA  AND  NUMERICAL  MODEL  OUTPUT 


The ability to highlight, or even isolate, key meteorological features through color imaging is generally superior to the cluttered appearance presented by overlaying several contoured fields. This
is  particularly  true  when  images  are  animated  in  time  series.  During  critical  decision  periods, 
when  time  is  short  and  events  are  unfolding  rapidly,  the  task  of  evaluating  whether  a  given  region 
is  becoming  more  favorably  disposed  toward  a  specific  weather  condition  can  be  made  more 
manageable  through  creative  use  of  model  derived  imagery  products. 

A  number  of  operational  and  prototype  data  analysis  techniques  developed  by  Sterling  Software 
and  the  Product  Improvement  Branch  at  AFGWC  provide  the  forecaster  with  the  flexibility  to 
interactively  define  the  image  display  structure.  This  has  been  well  received  by  forecasters  since 
the  suite  of  tools  favored  by  one  is  not  necessarily  the  same  as  that  which  is  preferred  by  another. 
One popular technique involves the analyst selecting specific fields, assigning minimum (or maximum) threshold values, then masking out values which do not fall within the assigned range. That
data  set  is  then  colored  using  values  defined  by  another  field.  As  an  example,  hourly  grids  of 
surface divergence with an absolute value greater than 2 x 10^-5 sec^-1 could be colored with a palette
defined  by  all  values  of  moisture  convergence  or  by  the  surface-based  lifted  index  from  the  same 
array.  By  employing  this  form  of  colorized  displays  in  animation,  the  forecaster  can  fashion  a 
more  ordered  and  focused  portrayal  of  the  parameters  of  interest. 
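A minimal sketch of this mask-then-color technique is given below, assuming NumPy arrays on a common analysis grid; the field names, grid size, and threshold value are illustrative placeholders rather than AFGWC's actual implementation.

    import numpy as np

    # Hypothetical hourly fields on the same analysis grid.
    divergence = np.random.uniform(-5e-5, 5e-5, (61, 61))          # sec**-1
    moisture_convergence = np.random.uniform(0.0, 30.0, (61, 61))  # arbitrary units

    # Step 1: keep only grid points whose surface divergence magnitude meets
    # the analyst-assigned threshold; mask out everything else.
    threshold = 2.0e-5
    keep = np.abs(divergence) >= threshold

    # Step 2: color the surviving points using values from a second field, so
    # the display shows where divergence is strong, shaded by moisture.
    colored = np.where(keep, moisture_convergence, np.nan)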

Several  visualization  tools  have  been  designed  specifically  for  viewing  or  evaluating  in-house 
model  output.  A  variety  of  display  formats,  including  user-defined  plan  views,  cross-sections,  and 
animations  of  both  observed  and  derived  fields  are  being  developed  for  operational  use. 

Four  general  types  of  displays  are  notable  for  the  unique  value  they  add  to  the  forecast  process: 

(1)  Color-enhanced  images  overlaid  with  contours.  This  format  enables  the  user  to  distinguish 
features  in  animation  more  easily;  patterns  and  trends  often  become  more  distinctive. 


(2)  Time-height  cross-sections  of  individual  RWM  fields.  These  perspectives  are  often  more 
revealing  than  viewing  a  single  level  in  an  instant  of  time. 


(3)  Along-track  displays  of  aviation  hazards  tools.  This  technique  facilitates  tailoring  briefings 
to specific mission requirements utilizing either Global Spectral Model (GSM), RWM,
or  HALT  model  grids. 


(4)  Model  error  field  diagnostics.  These  displays  provide  a  quick,  objective  evaluation  of 
recent model performance, assisting the forecaster in determining the need for, and the scope of, necessary adjustments to current model guidance.


152 


AFGWC  has  also  begun  to  explore  some  new  techniques  which  are  very  useful  for  initializing 
numerical  guidance,  and  for  nowcasting  convective  development.  Superimposing  tropopause 
level  isentropic  potential  vorticity  and  low  level  equivalent  potential  temperature  on  water  vapor 
imagery  is  one  such  example.  Juxtaposition  of  these  two  fields  with  a  well-defined  dry  prod  shows 
strong  correlation  with  cyclogenesis.  Building  composites  of  hourly  changes  in  various  surface- 
based  indices  of  static  stability,  lid  strength,  and  other  convective  predictors  overlaid  on  visual 
satellite  imagery  is  another  example. 

3.  VISUALIZATIONS  UTILIZING  DMSP  IMAGERY 

Because  AFGWC  supports  operations  and  contingencies  worldwide,  the  organization  is  often 
called  upon  to  provide  weather  forecasts  in  regions  for  which  data  availability  is  extremely  limited. 
In  such  cases,  pass-by-pass  animations  of  polar  orbiter  imagery  are  valuable  aids  for  identifying 
synoptic  trends.  Animation  frames  are  created  by  mapping  routines  which  convert  images  with 
different  local  swath  orientation  to  a  common  map  projection.  This  permits  a  stable  image  looping 
capability  over  a  fixed  region  in  areas  with  limited  or  non-existent  geosynchronous  coverage. 

For some limited applications, AFGWC has begun to utilize three-dimensional data visualizations in analyzing satellite imagery. The three-dimensional view is produced using a combination of surfacing routines and image mapping. The infrared component is used to produce a wire-mesh surface whose height values correspond to brightness temperatures. The visual component is then mapped onto that surface, yielding a three-dimensional visualization of the original image. For extended animation sequences, the IR is generally mapped onto itself since continuity of visual data is lost at night.
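One way to reproduce this wire-mesh-plus-texture idea with commodity tools is sketched below; the random arrays stand in for co-registered IR and visible imagery, and the choice of Matplotlib is an assumption made purely for illustration.

    import numpy as np
    import matplotlib.pyplot as plt

    # Stand-ins for co-registered IR brightness temperatures (K) and
    # normalized visible counts on the same pixel grid.
    ir = 300.0 - 80.0 * np.random.rand(64, 64)
    vis = np.random.rand(64, 64)

    x, y = np.meshgrid(np.arange(64), np.arange(64))
    height = 300.0 - ir   # colder (higher) cloud tops plot higher

    # The IR defines the surface; the visible channel is draped on it as shading.
    ax = plt.figure().add_subplot(projection="3d")
    ax.plot_surface(x, y, height, facecolors=plt.cm.gray(vis),
                    rstride=2, cstride=2, linewidth=0)
    plt.show()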

Several analysis techniques exploiting the capabilities of SSM/I data (NRL Cal/Val, 1991) have
been  integrated  into  operations  at  AFGWC.  When  blended  with  corresponding  conventional 
observations,  these  data  can  provide  significant  insight  into  the  character  of  the  synoptic  setting. 

(1) Intercomparison of horizontally and vertically polarized 85 GHz imagery with IR imagery. These perspectives have proven to be useful for detecting thunderstorms concealed beneath large cirrus canopies. This technique is extremely valuable for positioning tropical cyclone circulations.

(2) Analysis of multichannel algorithms used to estimate maritime surface windspeeds. Estimating tropical cyclone gale wind radii or evaluating the extent and intensity of wind fields surrounding large polar storms are primary applications of this technique.

(3) Bichannel differential between 37 GHz and 19 GHz. Imagery derived from these data permits qualitative evaluation of surface moisture conditions (a minimal sketch of this differencing follows the list).
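The bichannel differencing in item (3) can be sketched as follows, assuming the two channels have already been resampled to a common grid; the array names, sizes, and random values are illustrative only.

    import numpy as np

    # Stand-ins for SSM/I 37 GHz and 19 GHz brightness temperatures (K)
    # resampled to a common grid; off-swath points would be flagged invalid.
    tb37 = np.random.uniform(180.0, 280.0, (128, 128))
    tb19 = np.random.uniform(180.0, 280.0, (128, 128))
    valid = np.ones(tb37.shape, dtype=bool)

    # Bichannel differential used for qualitative surface moisture evaluation.
    diff = np.where(valid, tb37 - tb19, np.nan)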


153 


4.  FUTURE  DIRECTION 


At  AFGWC,  emphasis  will  continue  to  be  placed  on  improving  our  capability  to  provide  timely 
and  accurate  weather  information  to  the  warfighter.  For  example,  several  projects  are  in  progress 
now to refine our aviation hazards algorithms, such as incorporating the Schultz-Politovich scheme (Schultz and Politovich, 1992) into our aircraft icing forecast algorithm, and upgrading the HALT model (based on Bacmeister et al., 1994) to include background shear.

AFGWC  will  soon  implement  an  upgrade  to  faster,  more  powerful  microcomputers  for  operational 
production.  This  robust  workstation  environment  will  make  widespread  application  of  the  types 
of  techniques  discussed  in  this  paper  much  more  feasible.  In  addition,  forecasters  will  have  tools 
at their disposal which permit them to interact with the data themselves, to create their own visualizations and algorithms; in short, to employ new technologies and data sources more creatively
and  effectively. 


While  it  is  true  that  our  applications  development  is  generally  oriented  toward  operations  at  a 
weather  central,  many  of  these  display  capabilities  could  be  readily  adapted  to  a  tactical 
environment.  In  fact,  a  number  of  our  products  are  already  accessible  to  deployed  troops  via  the 
Air  Force  Dial-In  System  (AFDIS).  Details  regarding  the  AFDIS  are  outlined  in  a  companion 
paper  in  these  proceedings  (Engel,  1994). 

REFERENCES 


Bacmeister, J.T., P.A. Newman, B.L. Gary, and K.R. Chan, 1994: "An Algorithm for Forecasting Mountain Wave Related Turbulence in the Stratosphere." Wea. Forecasting, 9: 241-253.

Engel, Gregory T., 1994: "Operational Applications of the Air Force Dial-In System." Proceedings, 1994 Battlefield Atmos. Conf., (in press).

Hollinger, J.P. (coordinator), 1991: Naval Research Laboratory DMSP SSM/I Calibration/Validation Report, Vol. 2. NRL, Washington, D.C., 257 pp.

Schultz, P., and M.K. Politovich, 1992: "Toward the Improvement of Aircraft Icing Forecasting for the Continental United States." Wea. Forecasting, 7: 491-500.

Zapotocny, J.V., 1993: "Meteorological Applications Tools for Generating Images from Gridded Data on the Satellite Data Handling System at AFGWC." Preprints, 10th Intl. Conf. on Interactive Info. and Proc. Sys. for Meteo., Oceano., and Hydro., Nashville, TN, Amer. Meteor. Soc., 37-38.


154 


THEATER  FORECAST  MODEL  SELECTION 
R.  M.  Cox 

Defense  Nuclear  Agency 
Alexandria,  VA  22310 

J. M. Lanicci

Air  Force  Global  Weather  Central 
Offutt AFB, NE 68113

H.  L.  Massie,  Jr. 

Air  Weather  Service 
Scott  AFB,  IL  62225 

ABSTRACT 

Recent  contingencies  including  Operation  DESERT  STORM  have  shown  a  need  for 
a  finer-resolution  weather  forecast  capability  to  aid  decision  making  in  theater-level  combat 
air and land operations. To address this need, Air Force Weather (AFW) and the Defense
Nuclear  Agency  (DNA)  have  begun  a  joint  effort  to  create  a  Theater  Forecast  Model  (TFM) 
architecture  from  government  and  commercial  off-the-shelf  hardware  and  software. 

The  concept  calls  for  global  model  data  (temperature,  pressure,  geopotential,  winds, 
and  humidity)  of  approximately  100  km  x  100  km  horizontal  resolution  to  provide  boundary 
and  initial  conditions.  The  TFM  will  have  a  horizontal  domain  of  approximately  2400  km 
X  2400  km  and  a  horizontal  grid  mesh  resolution  of  at  least  40  km  with  a  goal  of  becoming 
approximately  10  km.  It  will  run  a  36-hour  forecast  within  1-hour  after  data  assimilation. 
The  TFM  will  use  in-theater  observations  as  it  generates  0  to  36  hour  forecast  data  for  theater 
applications  (clouds,  visibility,  present  weather,  aviation  hazards,  etc.). 

AFW  and  DNA  are  comparing  four  mesoscale  models  to  the  current  Air  Force  Global 
Weather  Central  Relocatable  Window  Model  to  determine  which  is  best  suited  for  theater 
operations.  The  models  include  the  Colorado  State  Regional  Atmospheric  Modeling  System, 
the  National  Center  for  Atmospheric  Research  (NCAR)/Pennsylvania  State  Mesoscale  Model, 
the  Navy  Operational  Regional  Analysis  and  Prediction  System,  and  the  DNA  Operational 
Multiscale  Environment  model  with  Grid  Adaptivity.  Comparisons  include  model  numerics, 
physics,  fidelity,  accuracy,  sustainability,  maintainability,  flexibility,  and  extensibility. 

Global  model  data  from  NCAR  and  AFW  will  provide  boundary  and  initial  conditions 
for  test  cases  over  five  topographically  and  seasonally  complex  regions.  Accuracy 
measurements  will  include  interpolated  grid  point  to  rawinsonde  paired  difference  root  mean 
square  error,  mean  absolute  error,  relative  error,  bias,  and  evaluations  of  standard  map  sets. 

This  paper  will  outline  AFW  and  DNA  activities  to  ensure  selection  of  a  model  best 
suited  to  joint  needs.  A  brief  overview  of  numerical  weather  prediction  limitations,  the  TFM 
approach,  theater  requirements,  and  selection  requirements  will  be  presented. 


155 


1.  INTRODUCTION 


Recent  contingencies  including  DESERT  STORM  have  shown  a  need  for  finer- 
resolution  weather  forecast  capabilities  to  aid  decision  making  for  a  myriad  of  theater-level 
combat  air  and  land  operations.  Centralized  facilities  like  Air  Force  Global  Weather  Central 
(AFGWC)  generate  much  of  the  theater  weather  information  for  these  contingencies. 
However,  the  centralized,  reach-back  approach  takes  more  time  to  transmit  weather 
information  into  and  out  of  the  theater. 

The  use  of  timely  in-theater  observations  can  greatly  increase  Theater  Forecast  Model 
(TFM) accuracy and value to the decision maker. Because observations are perishable from a modeling perspective, most theater observations do not arrive in time to be used in the centrally run models.

To  address  the  need  for  timely  and  accurate  theater  weather  forecasts,  including  the 
benefits  from  in-theater  observations,  the  Air  Force  Chief  of  Staff  has  approved  Air  Force 
Weather  (AFW)  Mission  Need  Statements  (MNS)  for  the  Combat  Weather  System  (CWS)  and 
the  Global  Theater  Weather  Analysis  and  Prediction  System  (GTWAPS).  These  programs 
require  a  TFM  to  supply  theater  warfighters  with  theater-optimized  weather  information. 

The  Air  Weather  Service  (AWS)  pre-screened  numerous  mesoscale  models  before 
finally  settling  upon  four  primary  candidates  for  the  TFM.  The  candidate  models  for  this 
study  include  the  Colorado  State  University  (CSU)  Regional  Atmospheric  Modeling  System 
(RAMS),  the  National  Center  for  Atmospheric  Research  (NCAR)/Pennsylvania  State 
University Mesoscale Model (MM5), the Navy Operational Regional Analysis and Prediction
System  (NORAPS  6),  and  the  Defense  Nuclear  Agency’s  (DNA)  Operational  Multiscale 
Environment  model  with  Grid  Adaptivity  (OMEGA).  After  detailed  comparison  tests,  one 
of  the  four  models  will  be  selected  for  adaptation  and  transition  to  a  Department  of  Defense 
(DoD)  standard,  theater  weather  architecture. 


2.  NUMERICAL  WEATHER  PREDICTION  LIMITATIONS 


Despite improvement over the past two decades, Numerical Weather Prediction (NWP) still has limitations. We will first review some basic NWP limitations before providing a general discussion of our approach to the TFM selection.

There has always been a trade-off in NWP between resolution and computer power. Whenever the horizontal grid resolution is increased by a factor of two, the computer time required to produce a forecast increases by roughly a factor of eight, because the number of grid points doubles in both the x and y directions and the time step must also be halved to maintain numerical stability. This limitation is magnified by the necessity to resolve geographically induced meteorological features. As is well known, weather patterns often result from a given terrain feature. Whether it is a coastal pattern (land/sea breeze) or a mountain flow (lee-side cyclogenesis), the model must be able to resolve these terrain-induced features in order to forecast meteorological phenomena in an accurate and timely fashion. However, that resolution requires a fine-scale numerical grid, which, as stated earlier, requires more computational power. The new generation of computer workstations may well put this limitation behind us in the not-too-distant future.
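The scaling rule above can be written down directly; the short sketch below is only a back-of-the-envelope illustration of the rule of thumb, not a timing model for any particular system.

    def relative_cost(refinement_factor):
        """Relative CPU cost when the horizontal grid spacing is divided by
        'refinement_factor': grid points double in x and in y for each factor
        of two, and the time step shrinks by the same factor."""
        return refinement_factor ** 3

    # Halving the spacing (e.g., 40 km -> 20 km) costs about 8x;
    # 40 km -> 10 km costs about 64x.
    print(relative_cost(2), relative_cost(4))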

Model spin-up time presents another challenge. When gridded data fields are used for the model's initial conditions, a 0- to 12-hour period is needed for the model to adjust to numerical artifacts created by the differences between the initial fields and the model equations. This spin-up time can be reduced by using a data assimilation procedure. Although data assimilation procedures such as nudging may reduce spin-up time, there is no single "best" technique.
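For readers unfamiliar with nudging, the sketch below shows the idea in its simplest Newtonian-relaxation form: the model tendency is supplemented by a term that relaxes the state toward observations on a chosen time scale. This is a generic illustration, not the assimilation scheme of any of the candidate models.

    def nudge_step(x, x_obs, model_tendency, dt, tau):
        """One nudging (Newtonian relaxation) step: relax the model state x
        toward the observed value x_obs on time scale tau while integrating
        the ordinary model tendency."""
        return x + dt * (model_tendency + (x_obs - x) / tau)

    # Example: relax a 290 K first-guess temperature toward a 285 K observation
    # with a one-hour relaxation time scale and ten-minute steps.
    x = 290.0
    for _ in range(6):
        x = nudge_step(x, x_obs=285.0, model_tendency=0.0, dt=600.0, tau=3600.0)
    print(x)   # drifts toward 285 K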

Many  physical  processes  are  simulated  in  mesoscale  models.  This  simulation  or 
parameterization  is  a  very  challenging  task.  If  the  parameterization  is  in  error,  then  the 
resultant  atmospheric  simulation  will  be  suspect.  Several  areas  which  are  parameterized  in 
NWP  models  include  radiation,  soil  moisture,  evapotranspiration,  cloud  energetics,  surface 
fluxes,  etc.  These  parameterizations  are  of  significance  because  often  in  meteorology  we 
have  unobserved  or  inadequately  observed  parameters.  Also,  the  process  may  exist  on  a  scale 
not resolved by the observational network. Most NWP models have the above-mentioned parameterizations; however, the values used for a given variable may be in error because the process has not been studied fully enough to provide a sound basis for the value.


3.  APPROACH  FOR  THEATER  FORECAST  MODEL  SELECTION 


These limitations and others can and will have significant impact on the results produced by an NWP model. To compound that impact, this effort seeks to port a model that generally operates on a supercomputer to a workstation. The model must produce a 36-hour forecast one hour after data assimilation. To accomplish this task, the model must be effectively downsized. One can develop an engineering version of the model that will operate more quickly than its first-principles version; this downsizing will allow for tradeoffs between run time and accuracy.

Under the auspices of the DNA and AFW, a modeling team is porting and downsizing the candidate models, first on a supercomputer and then on a workstation-class machine. This approach will provide insight into what can be optimized within the model to meet the TFM run times and still achieve acceptable forecast accuracies.

All the models have a four-dimensional data assimilation scheme to help model
stability  and  reduce  model  spin-up  time.  The  models  also  have  a  flexible  domain,  which  is 
important  when  relocating  the  model  operational  forecast  region. 

These models will predict most but not all required theater weather elements out to 36 hours. Other needed weather elements will require the development of applications
models,  which  will  get  their  basic  input  from  gridded  TFM  data  (pressure,  temperature, 
winds,  and  humidity). 


157 


4. THEATER FORECAST MODEL REQUIREMENTS


The AFW Functional Area Plan (FAP) includes joint operational requirements for the TFM. These requirements call for a basic fine-resolution weather analysis and forecast capability to support theater combat and non-combat operations.

The  TFM  will  ingest  Gridded  Data  Fields  (GDFs)  along  with  additional  observational 
and  model  data  to  calculate  "derived"  parameters  such  as  present  weather,  visibility,  and 
clouds.  The  TFM  will  also  provide  higher  resolution  GDFs  for  use  in  other  applications  as 
directed  by  theater  operations. 

The  AFGWC  will  receive  GDFs  of  basic  parameters  like  temperature,  pressure, 
winds,  and  dew  point  from  the  Navy  at  approximately  1  degree  x  1  degree  horizontal 
resolution.  The  relocatable  TFM  will  use  these  GDFs  for  its  lateral  boundary  conditions  and 
initial  conditions.  Next,  the  AFGWC  will  provide  additional  meteorological  information, 
e.g.,  theater  observations,  satellite  imagery,  cloud  information,  etc.,  for  the  TFM  data 
assimilation. 

The TFM is required to operate with a horizontal domain of 2400 km x 2400 km. Its horizontal resolution is required to be 40 km, with an objective resolution of 10 km. Theater operations dictate producing a 36-hour forecast within a 1-hour run time after data ingest. Initial plans call for two cycles per day, eventually becoming eight cycles per day.

To  identify  and  assemble  government  and  commercial  off-the-shelf  technologies  to 
meet  these  needs,  AFW  has  developed  a  coordinated  TFM  strategy  with  the  DNA,  Argonne 
National  Laboratory  (ANL),  and  the  Phillips  Laboratory  (PL)  Geophysics  Directorate.  This 
strategy  falls  under  the  auspices  of  the  Electronic  Systems  Center  (ESC)  and  includes  various 
proof-of-concept  studies  on  the  candidate  TFM  technologies.  The  ESC  has  outlined  TFM 
requirements  in  their  report  number  E-1243U,  dated  15  December  1993. 


5.  THEATER  FORECAST  MODEL  SELECTION  CRITERIA 


A team of scientists from AWS, AFGWC, ANL, PL, and DNA met at AFGWC and
developed  the  criteria  which  will  be  used  to  ensure  the  model  best  suited  for  theater 
operations  is  chosen.  It  was  the  intent  of  these  individuals  to  provide  objective  measures, 
which  could  be  easily  followed  for  model  comparison  and  subsequent  selection.  The  team 
considered  model  configuration,  theaters  of  operation,  and  verification  criteria. 

Initially the team wanted to ensure the models had the necessary numerics, physics, and run options to fulfill the TFM requirements and also allow for graceful degradation in data-denied scenarios. The horizontal grid spacing will be 40 km with a computational
domain  of  71  x  71,  and  a  verification  domain  of  61  x  61.  The  model  will  have 
approximately  20  vertical  levels.  AFW  and  NCAR  will  provide  observational  and  first  guess 
data  fields  for  initial  conditions  and  lateral  boundary  conditions.  Each  model  will  be  allowed 
to  use  whatever  observations  and  variables  it  can  incorporate  in  its  analysis  scheme.  Models 
will  be  evaluated  out  to  the  36-hour  forecast  period  and  compared  at  3-hour  intervals. 


158 


To  ensure  a  relocatable,  worldwide  TFM  capability,  test  cases  will  be  run  over  five 
topographically  and  seasonally  complex  regions.  The  regions  are  Alaska,  Central  America, 
Middle  East,  Korea,  and  the  United  States.  Data  collection  for  each  test  case  will  cover  a 
72-hour  period.  Output  from  the  TFM  candidates  will  be  compared  against  each  other  and 
the  AFGWC  Relocatable  Window  Model  (RWM).  To  be  a  viable  candidate,  a  model  must 
outperform  the  RWM.  The  data  will  cover  specific  seasons  in  each  of  the  theaters. 

The verification criteria include the analysis and the 6-, 12-, 24-, and 36-hour forecasts for the
u-component  and  v-component  of  the  wind,  temperature,  pressure,  relative  humidity,  and 
specific  humidity.  These  forecast  periods  and  variables  will  be  evaluated  at  the  surface  and 
mandatory  upper-air  levels.  The  evaluation  will  include  a  measurement  of  accuracy  for  each 
of  the  variables,  forecast  periods,  and  atmospheric  levels.  Measures  of  accuracy  will  include 
mean absolute error, relative error, bias, and root mean square error determined from the
paired differences between model grid point values interpolated to rawinsonde locations and the
corresponding rawinsonde observations. The accuracy
comparisons  will  also  include  comparisons  of  standard  map  sets  from  each  model  for  given 
time periods. After all the above measures have been compiled for each of the models, AFW
and  DNA  will  decide  which  model  best  meets  the  TFM  meteorological  accuracy 
requirements. 

The  atmospheric  accuracy  of  each  model  will  be  evaluated  against  the  computational 
requirements  and  technology  transition  factors,  including  the  model’s  sustainability, 
maintainability,  flexibility,  and  extensibility.  Each  candidate  model  will  have  timing  statistics 
gathered  on  its  operation  on  a  supercomputer  and  a  workstation.  The  final  selection  of  a 
model  will  depend  on  which  one  best  satisfies  timing,  accuracy,  and  technology  transition 
requirements. 


6.  CONCLUSION 


Recent  DoD  contingencies  have  demonstrated  the  need  for  high  quality  and  timely 
meteorological  information  at  the  theater  level  of  operations.  To  address  this  need,  the  Air 
Force  Chief  of  Staff  approved  a  MNS  for  CWS  and  GTWAPS.  AFW  and  DNA  have 
undertaken  a  joint  effort  to  ensure  theater  operators  in  future  contingencies  will  receive  value- 
added  meteorological  information  when  they  require  it. 

The  selection  of  the  TFM  will  take  place  within  the  next  12  months  after  a  series  of 
tests  are  completed.  These  tests  will  compare  four  leading  mesoscale  numerical  weather 
prediction  models  against  each  other.  The  tests  will  be  conducted  using  data  from  five 
geographically,  topographically,  and  seasonally  complex  regions.  Accuracy  comparisons  will 
involve  basic  meteorological  variables  for  forecast  periods  of  0  -  36  hours.  The  model  best 
meeting  the  theater  forecast  requirements,  including  technology  integration  factors,  will  be 
selected. 


159 


AIR  WEATHER  SERVICE: 


EVOLVING  TO  MEET  TOMORROW'S  CHALLENGES 


Col William S. Weaving, Maj Dewey E. Harms, Capt Donald H. Berchoff, and

Capt  Timothy  D.  Hutchison 
Headquarters  Air  Weather  Service 
Scott  AFB,  Illinois,  62225-5206,  USA 


ABSTRACT 

Air  Weather  Service  (AWS)  has  undergone  a  notable  evolution  since  activation  on 
1 July 1937. Through all the changes, the basic AWS mission remains the same: to
assist  the  warfighter  in  any  way  possible  to  achieve  victory  on  the  battlefield.  With 
the  rapid  advancement  of  technology,  and  the  resulting  increase  in  technical 
training  requirements,  AWS'  role  as  the  technical  leader  for  Air  Force  and  Army 
weather  units  is  as  vital  as  ever.  To  ensure  military  weather  capability  keeps  pace 
with  technology,  a  number  of  major  programs  and  initiatives  are  underway  within 
AWS to upgrade its two major production centers: Air Force Global Weather
Central (AFGWC) and the USAF Environmental Technical Applications Center
(USAFETAC). At the headquarters, AWS continues to evolve and adapt to
improve  technology  transition,  equipment  acquisition,  and  training.  Working 
closely  with  the  Pentagon  and  other  major  commands,  AWS  continues  to  field  a 
host  of  new  standard  weather  systems.  With  the  rapid  integration  and  the  limitless 
potential  of  these  new  weather  systems,  effective  technology  transition  through 
innovative  training  methods  is  extremely  critical.  With  this  in  mind,  AWS  is 
working  not  only  to  field  these  systems,  but  has  also  set  in  place  a  mechanism  for 
assuring  their  maximum  exploitation.  This  paper  will  concentrate  on  Air  Weather 
Service's  role  in  providing  centralized  products  to  the  warfighter  and  in  improving 
those  forecaster  skills  necessary  to  optimize  exploitation  of  new  and  existing 
technology. 


1.  INTRODUCTION 

During  most  of  our  nation's  military  history,  weather  personnel  have  played  a  vital  role  in 
maximizing  the  effectiveness  of  the  warfighter  by  accurately  identifying  windows  of 
opportunity for aviators to seize. Whether providing decision assistance in the
Normandy invasion or the Berlin Airlift operation, or hunkering down with the United
Nations coalition forces during Operation DESERT STORM, Air Weather Service has


161 


always  been  ready  to  provide  weather  advice  to  military  decision  makers.  In  most 
instances, the advice is a key ingredient to mission success. According to the book Air
Weather Service: A Brief History, 1937-1991, during the Berlin airlift, low clouds, fog,
freezing  rain,  and  turbulence  frequently  impacted  airlift  activities.  The  success  of  the  airlift 
mission, and in turn, the future of the residents of Berlin depended on the ability of our
aircrews  to  deliver  ample  supplies  and  break  the  Soviet  stranglehold  on  the  city.  As 
history  shows,  despite  the  frequent  occurrences  of  inclement  weather,  the  Berlin  airlift  was 
a  success.  Precise  forecasts  played  a  major  role  then  and  still  do  today.  Despite  the 
development of "all weather" aircraft, low ceilings and visibility, and hazardous weather
still  impact  mission  effectiveness.  Additionally,  weather  can  impact  the  aircrew  "rules  of 
engagement".  During  DESERT  STORM,  pilots  were  required  to  visually  acquire  targets 
before  firing.  This  requirement  made  accurate  cloud  forecasts  an  absolutely  critical 
ingredient  for  mission  success. 

AWS  has  gone  through  many  structural  and  organizational  changes  since  activation  on 
1 July 1937. For 54 years, AWS maintained command and control of all Air Force and
Army  weather  units  worldwide.  This  changed  in  1991  as  world  events  dictated  major 
changes  in  the  direction  of  national  policies. 

In  August  1991,  as  part  of  the  25  percent  Department  of  Defense  (DoD)  manpower 
reduction, a new era in AWS began when the Air Force directed the transfer of Air Force
weather  field  units  from  Headquarters  AWS  (HQ  AWS)  to  local  operational  commanders. 
The  purpose  was  to  substantially  streamline  middle  management,  and  assign  the  weather 
field  units  directly  to  the  local  wing  commander.  This  initiative  meant  the  total 
deactivation  of  six  AWS  weather  wings  and  associated  subordinate  weather  squadrons. 
HQ  AWS  and  its  remaining  subordinate  agencies  moved  out  from  under  Military  Airlift 
Command  and  became  a  field  operating  agency  reporting  directly  to  the  Pentagon.  Today, 
the  Directorate  of  Weather  at  the  Pentagon  assumes  responsibility  for  Air  Force 
atmospheric  and  space  policies,  plans,  and  resources  while  the  focus  of  HQ  AWS  and  its 
subordinate  centers  is  on  meeting  the  present  and  future  operational  needs  of  the  Air 
Force,  Army,  and  other  military  and  government  agencies. 


Although AWS has undergone a notable evolution, the basic mission remains the same as
it was over 57 years ago: to assist the warfighter in every way possible to achieve victory
on the battlefield. The AWS role as the technical leader for Air Force and Army weather
units is vital. Internally, AWS continues to evolve and adapt to increase the peacekeeper's
ability to use weather to gain all possible advantage. This paper will concentrate on Air
Weather  Service's  role  in  providing  centralized  products  to  the  warfighter  and  in 
improving  those  forecaster  skills  necessary  to  optimize  exploitation  of  new  and  existing 
technology. 


162 


2.  THE  NEW  STRUCTURE:  CONCEPT  OF  OPERATIONS 

AWS  provides  centralized  weather  products  and  technical  assistance  to  the  Air  Force, 
Army,  selected  DoD  agencies,  and  classified  programs  of  the  highest  priority.  Centralized 
weather  information  is  critical  to  operational  planners,  weapon  system  designers,  and  field 
units  who  rely  heavily  upon  centrally  produced  analyses  and  forecasts  and  climatological 
studies. Due to the large amount of data and the complexity of global, regional, and
mesoscale models, producing atmospheric products and services for DoD warfighters is beyond
the resources of the operational military commands. AWS provides these products and
services  during  peace  and  war. 


Figure  1  depicts  the  operational  weather  architecture  as  the  Air  Force  moves  into  the  21st 
Century.  Conceptually,  observational  data  that  is  collected  and  used  in  global 
(hemispheric)  analysis  and  forecast  models  will  be  sent  to  a  centralized  weather  facility  as 
input  to  theater-scale  (mesoscale)  forecast  models.  Here,  "theater"  refers  to  a  domain 
approximately  2500  km  by  2500  km  centered  over  the  area  of  interest  where  military 
operations  are  occurring.  The  theater  model  will  use  this  data  along  with  other  observed 
and  model  output  data  to  produce  finer  resolution  mesoscale  forecasts.  The  primary  goals 
are  to  provide  timely,  accurate  observations  and  forecasts  to  help  ensure  successful  air, 
ground,  and  sea  battlefield  operations. 


Figure  1.  Weather  Support  Concept  into  the  21st  Century 


163 


2. 1  Headquarters  Air  Weather  Service 

HQ  AWS,  which  is  located  about  15  miles  east  of  St.  Louis  MO  at  Scott  Air  Force  Base 
(AFB)  IL,  provides  meteorological  technical  expertise  to  the  Air  Force  and  the  Army,  and 
directs  the  operations  of  its  subordinate  units.  It  provides  oversight  for  the 
standardization  and  interoperability  of  Air  Force  and  Army  weather  units  worldwide,  plans 
for and fields standard weather systems, transitions new technology to field units, develops
standardized training programs, and assesses the quality and technical goodness of
weather information (AWS Mission Directive 49-1, 1993).


Figure  2.  Air  Weather  Service  Organizational  Structure 


2.2  Centralized  Facilities 

Essential  components  of  the  Air  Force  weather  concept  are  centralized  weather  facilities 
capable  of  producing  tailored  global  and  theater  weather  products  to  enhance  operations 
worldwide. The current Air Force centralized weather functional architecture is dedicated
to the synthesis of worldwide weather data, the ingest and manipulation of
numerous meteorological satellite (METSAT) datasets, daily operational runs of global
weather  analysis  and  prediction  models,  storage  of  the  data  and  imagery  files  within  a 


164 


centralized  database  structure,  and  the  generation  of  gridded  data  field,  graphical,  and 
alphanumeric  forecast  products  for  global  and  theater  applications.  Centralized  products 
normally will be used to enhance command, control, communications, computers, and
intelligence (C4I) activities. Although these activities are frequently decentralized, they require
consistent,  automated  weather  information  at  multiple  decision  points. 

2.2. 1  Air  Force  Global  Weather  Central  (AFGWC) 

Centralized  weather  information  is  provided  to  military  forces  for  planning,  training, 
resource protection, and operational decision assistance. AFGWC (located at Offutt AFB,
Omaha NE) primarily provides this information to fixed weather units located at Air Force
bases, Army posts, and tactical units within a theater of operations.

AFGWC  is  designated  as  the  DoD  center  for  theater-scale  weather  analyses  and  forecasts, 
meteorological  satellite  (METSAT)  data  processing,  and  cloud  analyses  and  forecasts. 
Before  the  turn  of  the  century,  AFGWC  will  run  theater  weather  analysis  and  forecast 
models  for  not  only  the  Air  Force  and  the  Army,  but  all  warfighters  conducting  operations 
on land, sea, and in the air. The Navy's Fleet Numerical Meteorological and
Oceanographic (METOC) Center (FNMOC) will provide global model gridded data as
one  source  of  input  for  the  AFGWC  theater-scale  model. 


165 


Figure  3  depicts  the  flow  of  centralized  operational  weather  information  in  the  future. 
AFGWC will receive Gridded Data Fields (GDFs) and/or spectral coefficient datasets of
"basic" meteorological parameters (e.g., temperature, pressure, winds, and dew point)
from the Navy. AFGWC theater and hemispheric models will use these datasets along
with additional observational, satellite, hemispheric, and internal model data to calculate
"derived" parameters (e.g., clouds, visibility, etc.) and provide theater and hemispheric
uniform  GDFs  (UGDFs).  UGDFs  will  be  the  field  weather  teams'  primary  source  of 
centralized  data. 

AFGWC will ingest all available foreign and domestic observations from atmospheric and
satellite  data  sources  to  build  an  accurate  environmental  database.  In  addition  to  currently 
available  information,  these  data  include  automated  surface  and  upper-air  meteorological 
observations,  automated  tactical  surface  observations,  wind  and  thermodynamic  profiler 
information,  and  aircraft  upper-air  meteorological  observations. 

AFGWC  will  use  regional  data  assimilation  systems  which  feed  atmospheric  data  into  its 
theater  analysis  and  forecast  models.  The  atmospheric  and  satellite  data  collected  will  be 
processed and incorporated into the AFGWC environmental data base on a standardized
grid. The observational data base will be automatically updated on a regular basis as
newer data become available. Specialists will use weather workstations to tailor the
output of the regional/theater models periodically (e.g., every 6 hours) and update the
forecast portion of the data base. The workstations will allow specialists to generate four-
dimensional visualization of the analysis and forecast fields (Air Force Weather Support
System  Concept  Paper  2015,  1994). 


AFGWC  production  work  centers  will  consist  of  personnel  trained  to  produce  standard, 
routine  meteorological  products  and  mission-tailored  products  serving  warfighters'  needs 
worldwide.  Also,  AFGWC  will  operate  theater  cells  as  required,  which  will  have  the 
responsibility  for  operational  execution  of  theater-scale  models  for  a  specified  theater.  Up 
to  two  cells  will  be  required  to  cover  two  regional  conflicts  simultaneously.  Each  cell  will 
manipulate  data  available  from  the  Centralized  Database  Management  System  (CDMS) 
and  run  theater  model(s).  Each  individual  cell  will  tap  into  the  centralized  database, 
updating information for its respective area of interest. The CDMS itself is the central
computer repository for weather information: analyses, forecasts, METSAT imagery, and
cloud  information.  The  CDMS  will  be  capable  of  simultaneous  communications  with  each 
theater  cell,  as  well  as  any  field  request  for  information. 


In  addition  to  near  real-time  forecasting  information,  there  must  be  a  timely  response  to  all 
climatological  information  requests  to  enhance  theater  operations  and  contingencies 
anywhere  in  the  world.  For  instance,  military  planners  needed  climatological  information 
on  cloud  cover,  frequencies  of  precipitation,  and  diurnal  ranges  of  temperature  for  Kuwait 
within  hours  of  the  initial  Iraqi  invasion. 


166 


2.2.2  USAF  Environmental  Technical  Applications  Center  (USAFETAC) 

USAFETAC  (located  at  Scott  AFB  IL)  collects,  maintains,  and  applies  climatological 
information  to  determine  the  environmental  effects  on  military  operations  and  systems  and 
to  meet  requirements  of  the  Air  Force,  Army,  and  other  military  and  civilian  agencies. 
USAFETAC's  data  processing  and  archival  center  at  Asheville  NC  receives  worldwide 
observations  from  AFGWC  and  combines  this  information  with  that  received  at  the 
National  Climatic  Data  Center  from  other  sources  to  continually  build  and  maintain  a 
climatological  archival  database  available  for  global  and  theater  applications. 

Dedicated  USAFETAC  personnel  provide  tailored  climatological  products  for  any  DoD 
mission  upon  request.  Some  requests  will  be  supported  by  products  generated 
automatically by computer. The majority of requests received (dial-in or message) from
DoD customers, however, will be evaluated by trained specialists, who provide tailored
support, particularly for those requests requiring significant climatological expertise.
Climatological assistance ranges from preparation of historical area weather data
for  contingencies  to  providing  data  for  combat  simulation  war  games. 

Environmental  simulation  data  will  provide  combat  simulation  models  the  ability  to 
present  statistically  representative  weather  in  both  time  and  space  (ground/air/underwater) 
domains.  These  models  will  integrate  weather  information  into  combat  (war  gaming)  and 
weapon  simulators,  taking  advantage  of  fractals,  data-compression  techniques,  and  other 
advances  in  statistics,  to  provide  more  realistic  training  to  the  warfighter. 

2.3  Technology  Transition  and  Training 

With  the  rapid  integration  of  new  technology  and  the  limitless  potential  of  new  weather 
systems,  effective  technology  transition  through  innovative  training  methods  is  as 
important  as  ever  in  assuring  maximum  exploitation  of  new  systems.  Throughout  the 
operational  weather  community,  the  key  to  meeting  the  training  challenges  of  tomorrow 
lies  in  keeping  pace  with  technology.  New  systems  such  as  the  WSR-88D  Doppler 
weather  radar  offer  a  wealth  of  information,  and  frequently,  users  don't  have  the  time  or 
knowledge  to  build  up  quick  system  expertise.  These  problems  are  further  compounded 
within  the  military.  Besides  keeping  pace  with  the  rapid  technological  advances,  the 
military also contends with shrinking weather unit staffs and a younger forecaster work
force, both of which are at their lowest levels ever. Ironically, as the need for training is
increasing  due  to  new  technology,  the  available  manning  resources  dedicated  to  providing 
the  training  is  decreasing.  The  challenge  of  the  future  is  to  develop  training  and 
technology  transition  programs  that  keep  pace  with  technology  and  overcome 
acknowledged  manpower  constraints.  AWS  is  working  in  this  area,  exploring  innovative 
training  approaches  for  new  weather  system  acquisitions,  and  follow-on  training,  which 
are  responsive  and  meet  the  unique  requirements  of  today's  Air  Force. 


167 


AWS  manages  training  requirements  for  all  new  standardized  Air  Force  weather  systems 
deployed  to  the  field.  Working  closely  with  Headquarters,  United  States  Air  Force  and 
the  operating  commands,  AWS  performs  new  system  task  analyses  using  the  Air  Force 
Instructional  System  Design  (ISD)  process.  In  the  past,  training  on  new  systems  was 
usually  worked  out  late  in  the  system  acquisition  phase.  The  ISD  process  places  more 
focus  on  analyzing  the  training  requirements  early  in  the  acquisition  phase.  Proper 
application of ISD principles enables the best mix and match of training for the operator in
today's  high  tech  environment.  Recognizing  requirements  early  in  the  acquisition  process 
allows  for  smoother  and  more  efficient  implementation  of  new  training  concepts  such  as 
on-line  training.  "On-line"  training  enables  the  user  to  train  on  the  same  system  used  in 
day-to-day  operations;  as  a  result,  it  reduces  the  need  for  a  human  trainer,  saves  operator 
time,  and  provides  user  help  on  demand.  On-line  training  will  carry  new  systems 
acquisition  training  into  the  21st  century. 

Besides  seeking  innovative  training  methods,  AWS  is  attempting  to  make  standardized 
training/transition programs more flexible. The complexity of new computer systems and
their growing number of potential applications make training/transition program flexibility
critical.


Following  the  deployment  of  new  weather  systems  and  completion  of  initial  system 
training,  AWS  becomes  the  focal  point  for  technology  transition  and  exploitation.  AWS 
provides direct assistance to the field through various training methods by developing and
transitioning new and/or existing meteorological techniques, and by exploiting technical
capabilities of fixed-base or tactical weather station equipment. It manages the transition
of new forecast techniques to the field through a regional approach that encourages active
field unit participation through regional working group meetings/conferences and bulletin
boards.


AWS manages the exchange of technical information between the field units and AWS
through 11 regional managers. Regions are broken out based on geographic location and
climatology, with six regions in the CONUS, three in the Pacific, and two in Europe.

Regional managers seek to facilitate the exploitation of technology within a region by
tailoring assistance to a sub-synoptic level and acting as the central focal point for
technology transition and technique sharing within a region. Communication is achieved
through regional bulletin boards, telephone, and annual regional conferences and working
groups. Regional managers assure similar initiatives are not worked simultaneously in
other regions. The crossfeed concept relieves unit workload by eliminating redundancy in
initiatives  and  maximizes  use  of  techniques  already  developed  in  the  field.  Additionally, 
rather  than  depending  solely  on  the  development  initiatives  from  a  centralized  facility,  the 
concept  utilizes  the  intellect  of  forecasters  in  the  field.  Regional  managers  work  closely 
with  a  group  of  functional  experts  who  specialize  in  AWDS,  WSR-88D,  meteorological 
computer  applications,  and  theater  forecast  techniques.  These  experts  provide  technical 
assistance to the regional managers and the field during technique development and assure
technical  goodness  of  the  final  product. 


168 


AWS  also  provides  on-site  technical  assistance  through  Meteorological  Enhancement 
Teams  (METs).  The  MET  is  a  team  of  functional  and  regional  experts  sent  to  the  field  to 
present  one-day,  multi-media  presentations  on  various  meteorological  topics.  AWS  visits 
units  by  request  only  and  tailors  each  MET  presentation  to  meet  the  needs  of  the 
individual  unit.  The  purpose  of  the  MET  is  to  introduce  new  or  existing  meteorological 
techniques  and  to  show  the  forecaster  how  to  exploit  the  new  technology  in  applying  these 
techniques.  MET  teams  have  already  presented  over  100  seminars  on  topics  ranging  from 
"Fog  and  Stratus  Forecasting  Problems"  to  "Severe  Weather  Forecasting." 

AWS  is  also  working  to  modernize  the  AFW  follow-on  training  (FOT)  program.  The  new 
program  exploits  the  use  of  interactive  courseware  (instructional  software  and  hardware) 
and  Computer  Aided  Instruction  (CAI)  replacing  the  old  slide  projection  technology.  In 
1991,  AWS  fielded  the  first  of  253  Multimedia  Training  Systems  (MTS)  to  weather  units 
worldwide. The MTS provides interactive video, CD-ROM, and a VCR to run self-paced
FOT courseware to enhance forecaster technical skills and supplement formal school and
on-the-job  training  (OJT).  Interactive  courseware  enhances  retainability  of  information  by 
engaging  the  learner's  senses  and  involving  the  individual  in  the  learning  process  by 
presenting  the  material  through  a  variety  of  media  such  as  videotape,  compact  disk, 
videodisk,  graphic  animation,  and  sound.  Students  proceed  at  their  own  pace  and  can  exit 
and  enter  the  course  as  required.  This  feature  ensures  maximum  flexibility  so  the  students 
can fit training into their busy schedules. The first videodisk modules fielded included
"Workshop  on  Doppler  Radar  Interpretation"  and  "Boundary  Detection  and  Convective 
Initiation."  Military  forecasters  successfully  used  both  these  modules  to  prepare  for 
formal  WSR-88D  training  courses. 

Development  of  interactive  software  is  a  lengthy  process,  often  taking  1  to  2  years  per 
module.  Occasionally,  circumstances  require  the  quick  fielding  of  a  training  module  to 
respond  to  a  noted  technical  shortfall.  In  these  instances,  AWS  exploits  the  capability  of 
CAI.  CAI  is  not  as  visually  effective  as  interactive  courseware,  but  because  development 
time  is  less  than  a  year,  it  provides  an  effective  method  of  quickly  addressing  a  critical 
technical  deficiency.  In  1992,  a  deficiency  was  noted  in  the  forecasting  of  icing  and 
turbulence.  AWS  quickly  developed  two  CAIs  on  icing  and  turbulence  forecasts  using  the 
WSR-88D. The WSR-88D Operational Support Facility at Norman, Oklahoma, also
develops  CAIs;  they  have  already  developed  ten  CAIs  on  WSR-88D  algorithms.  Both 
CAIs  and  interactive  courseware  are  an  integral  part  of  the  weather  FOT  training  plan. 

AWS  is  also  an  active  participant  with  the  National  Weather  Service  (NWS)  and  the  Naval 
Oceanography  Command  (NOC)  in  the  Cooperative  Program  for  Operational 
Meteorology,  Education,  and  Training  (COMET).  COMET  was  originally  developed  by 
the  NWS  as  part  of  their  modernization  program  to  place  emphasis  on  improving  the 
professional  background  and  operational  capabilities  of  meteorologists  to  use  mesoscale 
data.  Three  COMET  programs  have  been  developed  to  meet  the  objectives  of  improving 


169 


mesoscale  forecasting:  a  Distance  Learning  Program,  an  Outreach  Program,  and  a 
Residence  Program. 


AWS,  NWS,  and  NOC  provide  funds  as  sponsors  in  the  Distance  Learning  Program.  The 
objective  of  the  program  is  to  provide  professional  development  education  for  operational 
weather forecasters, university faculty, and other meteorologists who do not have the time
or  money  to  attend  COMET  resident  courses.  Training  is  mainly  conducted  via 
interactive  courseware  that  is  developed  for  in-station  use.  The  two  modules  used  to 
successfully  prepare  military  forecasters  for  formal  WSR-88D  training,  "Workshop  on 
Doppler Radar Interpretation" and "Boundary Detection and Convective Initiation," were
both  the  result  of  COMET  initiatives. 

Additionally,  AWS  provides  funds  for  research  under  the  Outreach  Program  and 
participates in the Residence Program. The Outreach Program creates partnerships
between  the  academic  research  community  and  operational  weather  forecasters  that  focus 
on  resolving  forecast  problems  of  utmost  concern  to  the  operational  weather  community. 
For  example,  under  the  Outreach  Program,  North  Carolina  State  University  was  paired 
with  AWS  to  work  on  nowcasting  convective  activity  during  space  shuttle  launches  and 
landings. The Residence Program brings meteorologists together with nationally
recognized  experts  for  the  purpose  of  improving  their  collective  understanding  of 
mesoscale  meteorology.  The  program  conducts  courses,  symposiums,  and  workshops 
that  provide  operational  weather  forecasters,  hydrologists,  and  other  atmospheric 
scientists with new skills and concepts in mesoscale meteorology. Through the years,
AWS' association with COMET has been very productive, and prospects for the future are
just  as  bright. 


3.  CONCLUSIONS 

Over  the  last  few  years,  AWS  has  experienced  significant  structural  and  organizational 
changes and technological advances. Even though AWS no longer has operational control
of  Air  Force  field  weather  units,  its  role  in  providing  decision  assistance  to  the  warfighters 
has  not  diminished.  AWS  still  provides  centralized  analysis  and  forecast  information  to 
operations  worldwide  and  the  technical  weather  expertise  to  the  Air  Force  and  the  Army. 


170 


AIR  FORCE  WEATHER  MODERNIZATION  PLANNING 


Alfonse  J.  Mazurowski 
Headquarters  Air  Weather  Service 
Scott  Air  Force  Base,  Illinois,  62225-5206,  USA 


ABSTRACT 

Air  Force  modernization  planning  results  in  the  development  of  documents  which  evaluate 
all  aspects  of  specific  functions,  pinpoint  deficiencies,  and  demonstrate  how  the  Air  Force 
plans  to  affordably  satisfy  those  deficiencies  to  achieve  required  capabilities.  The  Air 
Force  Weather  (AFW)  Functional  Area  Plan  (FAP)  is  a  25-year  modernization  planning 
document  that  details  the  programs  and  laboratory  technologies  required  to  enhance 
operational capabilities. Laboratory funding will be based, in part, on the laboratories' role in
satisfying Technology Needs documented in FAPs and Major Command Mission Area
Plans  (MAPs).  A  FAP  Integrated  Product  Team  (FAPIPT),  consisting  of  weather 
representatives  from  Headquarters  United  States  Air  Force  and  Army,  Army  and  Air 
Force  major  commands,  product  centers,  and  laboratories,  was  formed  to  construct  the 
plan.  A  FAP  Steering  Group,  made  up  of  senior  Air  Force  leaders  in  weather,  provided 
oversight  and  guidance  for  the  FAPIPT.  The  FAP  addresses  the  two  major  segments  of 
AFW: unit capabilities, which includes fixed-base weather station and tactical unit areas;
and  centralized  facilities,  which  includes  the  Air  Force  Global  Weather  Central,  United 
States  Air  Force  Environmental  Technical  Applications  Center,  and  the  Air  Force  Space 
Forecast  Center.  The  FAP  was  developed  by  compiling  a  comprehensive  set  of  customer 
weather  requirements,  evaluating  AFW's  capability  to  provide  those  products,  and 
satisfying  any  capability  deficiencies  through  the  development  of  a  modernization 
roadmap.  The  modernization  roadmap  in  the  FAP  contains  planned  acquisition  programs 
for  hardware  and  software  in  addition  to  descriptions  of  Critical  Enabling  Technologies 
that  are  the  contribution  of  science  and  technology  programs  to  correct  deficiencies.  The 
FAP  is  a  living  document  that  will  be  updated  annually.  Through  successful  execution  of 
the modernization roadmap, Air Force Weather will be able to respond to the complex,
evolving  requirements  of  tomorrow's  operational  missions. 

1.  INTRODUCTION 

During  the  1992  Fall  Corona,  the  Air  Force  Chief  of  Staff  initiated  the  Air  Force's  Year  of 
Equipping  by  charging  the  major  commands  (MAJCOMs)  and  Field  Operating  Agencies  (FOAs) 
to  build  25-year  modernization  plans.  These  plans  are  called  Mission  Area  Plans  (MAPs). 


171 


Functional  areas  such  as  weather,  intelligence,  communications,  and  security  police  were  tasked 
to  develop  FAPs,  since  they  are  not  specifically  operational  mission  areas,  but  rather  are 
backbone,  cross-cutting  functions  which  support  all  mission  areas.  The  FAP  follows  the  MAP 
format  and  methodology  as  documented  in  Air  Force  Policy  Directive  10-14,  Draft,  and  Air 
Force  Instruction  10-1401,  Draft. 

The  purpose  of  the  completed  MAP/FAP  is  to  guide  the  development  of  the  Program  Objective 
Memorandum (POM) and to push technology development through Air Force Materiel
Command's Technology Master Process.

2.  MODERNIZATION  PLANNING 

MAPs/FAPs  are  developed  through  a  modernization  planning  process  which  uses  a  mission  area 
assessment  (MAA),  mission  needs  analysis  (MNA),  and  an  assessment  of  possible  solutions.  The 
MAA  process  evaluates  force  structure,  the  operational  environment,  and  the  threat  we  expect  to 
encounter while conducting the assigned mission. The MAA process uses a strategy-to-task
(STT)  evaluation  of  operational/support  mission  tasks  requiring  certain  capabilities  (current  and 
programmed).  These  tasks  are  derived  from  the  National  Military  Strategy  and  identify  what 
capabilities are needed to achieve military objectives. The STT is a framework used to better
understand and communicate how Air Force Weather's activities support the nation's security
needs.

During  the  MNA,  operational  tasks  are  analyzed  to  determine  the  factors  which  impact  the 
current  and  programmed  capability  to  accomplish  identified  operational  objectives.  The  MNA 
ultimately  identifies  deficiencies  in  current  and  future  capabilities  to  provide  adequate  weather 
products  for  operational  missions. 

After deficiencies are identified, an assessment of possible solutions is accomplished. Non-
material solutions, such as doctrine, tactics, techniques, procedures, and training, are examined
to  determine  if  changes  in  these  areas  can  solve  the  deficiencies.  If  not,  new  technologies  needed 
to  improve  the  warfighting  capability  in  the  field  are  identified  and  prioritized  through  interaction 
among  AFW,  the  supported  operational  customers  (the  warfighters),  and  Air  Force  laboratories. 

3.  DEVELOPMENT  OF  THE  AFW  FAP 

In  August  1993,  HQ  USAF/XOW  tasked  Air  Weather  Service  (AWS)  to  develop  an  AFW  MAP 
(which  subsequently  became  the  AFW  FAP).  A  FAPIPT,  co-chaired  by  AWS  (the  user 
command)  and  Electronic  Systems  Center's  system  program  director  for  weather  programs 
(AFMC  Product  Center)  and  consisting  of  weather  representatives  from  Headquarters  United 
States  Air  Force  and  Army,  Army  and  Air  Force  major  commands,  product  centers,  and 
laboratories,  was  charged  to  develop,  document,  and  continually  update  the  FAP.  An  AFW 
FAPIPT  Steering  Group,  comprised  of  senior  weather  functional  managers  from  the  Air  Force 
major  commands,  the  FOA,  and  Headquarters  Air  Force  and  Army,  was  organized  to  provide 
direction  for  the  FAP  planning  process. 


172 


In October 1993, the Steering Group directed the FAPIPT to limit the initial focus of the then-
MAP to evaluating the Air Force Global Weather Central's (AFGWC's) capability to provide
Theater  Battle  Management  (TBM)-required  meteorological  and  oceanographic  (METOC) 
products.  The  FAPIPT’s  evaluation  of  AFGWC's  capabilities  identified  the  hardware  and 
software  deficiencies  in  satisfying  the  centralized  product  requirement  for  wartime  operations. 
The  initial  MAP  was  completed  on  18  February  1994. 

In  March  1994,  HQ  USAF/XOW  directed  AWS  to  expand  the  scope  of  the  AFW  MAP  to 
include  all  areas  of  AFW  activities  and  redesignate  it  as  a  FAP.  Now  the  FAP  addresses  the  two 
major  segments  of  AFW:  centralized  facilities  and  unit  capabilities.  The  centralized  area 
includes  the  three  meteorological  centers  within  AFW:  AFGWC  (operational  weather  products), 
USAF  Environmental  Technical  Applications  Center  (USAFETAC)  (climatological  products), 
and  Air  Force  Space  Forecast  Center  (AFSFC)  (space  environmental  products).  The  unit  area 
concentrates  on  fixed  base  weather  station  (BWS)  and  tactical  weather  unit  operations.  In 
keeping  with  the  concept  of  "train  in  peace  as  we  fight  in  war,"  the  far-term  goal  is  to  combine 
the  software  and  hardware  functionalities  of  the  fixed  BWS  and  tactical  weather  unit  to  the 
maximum  extent  possible. 

4.  DESCRIPTION  OF  THE  AFW  FAP 

The  AFW  FAP  is  a  weather  modernization  plan  which  will  serve  as  a  roadmap  for  weather 
operations  through  2019.  It  highlights  deficiencies  that  still  need  to  be  corrected  to  meet  mission 
requirements  into  the  21st  century.  This  plan  is  a  living  document  which  is  updated  at  least 
annually  to  initiate  and  validate  POM  actions.  It  also  provides  the  basis  for  technology  programs 
to  be  developed  which  will  chart  AFMC's  science  and  technology  investment  strategy. 

4.1  Centralized  Facilities 

4.1.1  AFGWC  is  designated  as  the  DoD  center  for  theater-scale  weather  analysis  and  forecast 
models,  meteorological  satellite  (METSAT)  data  processing,  and  cloud  analyses  and  forecasts. 
The  primary  mission  of  AFGWC  is  to  provide  centralized  weather  products  to  US  combat  forces 
to  include  unified  and  specified  commands  and  all  Air  Force  and  Army  major  commands. 
AFGWC uses supercomputers, large computer mainframes, minicomputers, and workstations to
manage  the  enormous  amount  of  incoming  weather  data  and  run  complex  weather  analysis  and 
forecast  models  to  meet  customer  needs. 

AFGWC  was  assessed  on  its  capability  to  provide  the  required  suite  of  1340  METOC  products 
in  uniform  gridded  data  field  (UGDF)  format  as  identified  through  the  TBM  program.  As  a 
result  of  this  assessment,  shortfalls  in  AFGWC's  capability  to  satisfy  stated  product  requirements 
were  determined.  Hardware  shortfalls  were  primarily  due  to  inadequate  processing  power, 
internal  communications,  and  data  storage.  Software  shortfalls  included  limited  or  no  capability 
to  provide  certain  METOC  parameters. 

Solutions  to  correct  AFGWC  hardware  shortfalls  are  planned  through  a  combination  of  upgrade 
and  modernization  programs: 


173 


Satellite Data Handling System (SDHS) is an interactive computer system within AFGWC for
centralized weather forecast generation and dissemination. The SDHS ingests meteorological
data, both conventional and satellite, for interactive display and manipulation at the forecaster
consoles. It provides the user-interface and computational support for producing specific
meteorological analysis and forecast products required to support AFGWC and its external
customers. The SDHS Upgrade (SDHSU) program provides enhancements to the SDHS and
bridges the gap until the scheduled SDHS replacement program (SDHS II) comes on-line. The
enhancements include a logistical upgrade/replacement of the user-interfaces, further
development of the Air Force Global Weather Central Dial-In System, storage expansion,
upgrade of processors, and hardware/software modifications to receive new meteorological
satellite data types/formats and foreign meteorological satellite data. In addition to these
enhancements, three more user-interfaces will be included in SDHSU. The SDHSU will
alleviate hardware saturation and will completely meet identified TBM
METOC requirements. The SDHS II program will modernize the SDHS hardware architecture
and  will  support  data  access  and  retrieval  from  a  centralized  weather  data  base  to  meet 
anticipated  future  operational  requirements. 


Cloud  Depiction  and  Forecast  System  (CDFS)  uses  polar  orbiting  satellite  imagery  and 
continually  builds  and  updates  the  Satellite  Global  Data  Base  (SGDB)  upon  receipt  of  the  polar 
satellite  imagery.  Also,  CDFS  builds  a  global  cloud  analysis  based  upon  satellite  imagery  and 
conventional meteorological data. CDFS provides all customers with global satellite imagery,
cloud analyses, and forecast products. CDFS II is an acquisition program that will replace
satellite data processing hardware and will develop a new cloud analysis model, thereby
improving cloud forecasts for TBM.

Global  Theater  Weather  Analysis  and  Prediction  System  (GTWAPS)  will  replace  the  Automated 
Weather  Analysis  and  Prediction  System  (AWAPS)  with  an  open  system  architecture 
workstation environment which will host advanced, high-resolution theater weather analysis and
forecast models to meet TBM resolution requirements for theater-scale operations. The CDFS
II and GTWAPS programs, along with other software initiatives (e.g., laboratory technology
efforts, utilization of Navy data, AFGWC software development), will meet TBM METOC
product requirements.


Weather Information Processing System (WIPS) is made up of two areas: Weather Information
Processing-Production (WIPP) and Weather Information Processing-Development (WIPD).
WIPP primarily receives and processes conventional weather data. Conventional weather data
are surface and upper air data accumulated from weather stations around the world.
WIPD serves as a development system and back-up to some of the hardware on WIPP. The
WIPS Expansion (WIPS-E) Program and the WIPS processor modification program procure
hardware to alleviate current saturation and expand the capability of WIPS to satisfy
TBM METOC requirements. The WIPS-R program will replace the existing
WIPS computer hardware and operating system to satisfy future operational requirements.


174 


HYPERchannel  (HYPER)  is  the  means  by  which  required  information  is  transported  from 
computer  system  to  computer  system  within  AFGWC  and  ultimately  to  the  Communications 
Front-End  Processor  for  transmission  to  the  field.  The  HYPERchannel  Replacement  Program 
procures  the  hardware  and  software  to  replace  the  existing  HYPER  communication  link  which  is 
not  compliant  with  Air  Force  Open  System  Standards  and  does  not  have  sufficient  throughput 
capability  to  meet  TBM  METOC  product  requirements. 

Software  enhancements  are  planned  to  support  modernization  plans  and  future  improvements  in 
AFGWC's  capability.  Key  technologies  are  needed  to  meet  TBM  METOC  requirements  for 
clouds,  surface  visibility,  present  weather  conditions,  snow  depth,  soil  moisture,  turbulence, 
icing,  thunderstorms,  volcanic  ash,  and  layered  visibility. 

4.1.2  USAFETAC is the AFW center for global climatological products. Its mission is to
archive worldwide atmospheric and space environmental data and to prepare analyses and studies
from  manual  and  computer  manipulation  of  this  data  for  DoD  applications.  Air  Force  and  Army 
combat  planning  and  employment  decisions,  development  of  weapon  systems,  and  national 
programs  are  examples  of  their  products.  USAFETAC  uses  two  large  computer  mainframes, 
minicomputers,  workstations,  and  large  amounts  of  mass  storage  to  manage  the  tremendous 
amounts  of  climatological  data  required  for  product  generation. 

USAFETAC product evaluation used customer-identified, coordinated climatological product
requirements. The following assessed product categories attempt to cover all types of customer
needs:  climatic  summaries,  descriptive  climatology,  electromagnetic  propagation,  engineering 
climatologies,  studies/product  improvement,  tailored  operational  support,  forensic  studies,  and 
simulation/modeling.  Hardware  shortfalls  included  inadequate  processing  power  and  mass 
storage.  Software  shortfalls  were  primarily  due  to  the  lack  of  worldwide  data  availability  and 
insufficient  simulation  models. 

Hardware  upgrades  through  the  USAFETAC-Replacement  (ETAC-R)  program  are  planned  to 
satisfy  the  above  deficiencies.  ETAC-R  will  replace/upgrade  computer  systems  required  at 
USAFETAC  and  AFGWC  in  order  to  provide  climatological  products  to  DoD  customers 
worldwide.  At  USAFETAC,  the  program  replaces  its  existing  mainframe  computer  system  and 
associated  storage  with  a  cluster  of  workstations  and  state-of-the-art  storage  devices.  This 
equipment will provide the processing required to run high-resolution mesoscale models, which
may partially solve the data availability shortfall, and storage for rapidly expanding databases. At
AFGWC,  ETAC-R  upgrades  the  AFGWC  Centralized  Database  (CDB)  and  allows  the  new 
ETAC  systems  access  to  the  CDB. 

Software solutions include in-house training programs, increasing the availability of additional
worldwide  data,  and  development  of  advanced  climatic  models. 

4.1.3  The  AFSFC  provides  space  environmental  forecasts,  warnings,  and  anomaly  assessments 
to  enhance  the  capability  of  DoD  forces  worldwide.  As  the  basis  for  these  products,  the  AFSFC 
collects  data  from  a  mix  of  global  networks  of  DoD-unique,  national,  and  international  ground- 
and  space-based  instrumentation  which  monitors  the  solar  and  near-earth  environment.  This  data 


175 


is processed into a wide range of products by various computer algorithms which model the
space  environment.  Assistance  includes  alerts  of  solar  and  geomagnetic  events  and  assessments 
of  event  impacts  on  satellite  drag,  satellite  performance,  radar  correction,  early  warning  radar 
and  space  surveillance  performance,  and  communications  effectiveness. 


AFSFC  uses  three  computer  systems,  large  databases,  and  communications  in  both  their 
hardware  and  software  operational  clusters.  These  systems  run  specific  application  programs  for 
customer  products  while  a  separate  computer  cluster  is  used  for  unclassified  software 
development.  The  assessment  of  hardware  capabilities  led  to  a  determination  of  shortfalls  in 
computer  processing  and  storage  capabilities.  Software  shortfalls  include  the  inability  to  provide 
solar  and  geophysical  alerts,  analyses,  and  forecasts  to  the  level  of  accuracy  required  by  users. 

The solution to the hardware and software limitations is the AFSFC Replacement (AFSFC-R)
Program.  The  AFSFC-R  program  provides  for  replacement  of  AFSFC's  four  Digital  VAX 
computer  systems  (clusters)  and  current  database  environment  with  a  system  of  new  computers 
(to  include  replacement  of  the  Uninterruptable  Power  Supply).  Additionally,  the  program 
includes  upgraded  external  communication  interface  hardware  and  software  for  each  of  the 
operational  clusters.  The  program  also  contains  software  transition  and  integration  support  for 
application  software  to  be  transitioned  from  the  current  AFSFC  Digital  VAX  environment  to  the 
new  vendor's  hardware.  The  new  hardware  will  include  the  capability  to  satisfy  software 
solutions  by  processing  new  model  software  being  developed  under  the  Space  Environmental 
Technology  Transition  (SETT)  program.  This  program  also  includes  development  of  the  follow- 
on  SETT  models.  The  program  will  comprise  an  integrated  effort  beginning  in  the  early  2000s  to 
accomplish  both  the  Operational  Software  Development  (OSD)  and  the  software  maintenance 
for  the  Ionospheric  Models,  the  Magnetospheric  Model,  the  Neutral  Atmospheric  Models,  the 
Integrated  Space  Environmental  Models  (ISEM),  and  for  the  coupling  of  these  follow-on  space 
models.  These  improved,  follow-on  space  models  will  concentrate  on  the  advanced  development 
of  algorithms  that  will  use  new  space  measurements  to  improve  model  accuracy. 

4.2  Unit  Areas 

Unit  Areas  include  both  tactical  and  fixed  BWSs.  Functions  of  these  units  include  weather 
warning,  observing,  forecasting,  briefing,  and  resource  protection.  Weather  personnel  use  a 
combination  of  visual  observations  and  equipment-sensed  measurements  to  observe,  record,  and 
report  weather  elements.  Fixed  BWSs  take  advantage  of  state-of-the-art  computer  hardware  and 
software  technology  improvements  to  provide  operational  products  at  Army  posts  and  Air  Force 
bases  worldwide.  Tactical  units  currently  rely  on  less  sophisticated  systems  to  provide  decision 
assistance  to  deployed  combat  forces. 


Assessments were made from a hardware and software viewpoint to determine the tactical and
fixed BWSs' capability to satisfy customer requirements for METOC products. In the near term,
there will be different solutions to achieve desired capability at tactical and fixed BWSs.
However, in the far term, the software and as much of the hardware solution as possible should
be the same.


176 


4.2.1  AF  Systems'  Enhancements 

The  following  programs  are  planned  to  satisfy  Unit  Area  shortfalls: 

Tactical  Forecasting  System  (TFS)  is  a  hardware/software  upgrade  of  AFW  tactical  forecasting 
capability.  TFS  must  be  a  small,  lightweight,  modular  system  that  is  rapidly  deployable,  durable, 
quickly  activated,  and  field  maintainable.  TFS'  modular  design  must  allow  for  an  initial 
deployment  capability  that  can  be  expanded,  as  required,  to  a  more  capable  system.  TFS  will 
provide  responsive,  reliable,  accurate  weather  information  in  near  real  time.  In  addition,  the 
system's products will flow to other system locations using the in-theater communication
system.  The  system's  modules  at  the  in-theater  forecast  center  must  provide  a  theater-scale 
weather  analysis  and  forecast  model  capability  that  will  produce  tailored,  accurate,  and  reliable 
forecast  products  in  a  more  responsive  fashion  than  is  currently  possible. 


Modifications  to  existing  tactical  observing  systems,  GMQ-33s,  TMQ-34s,  and  TMQ-36s,  are 
required  to  provide  an  automated  capability  to  determine:  cloud  amount,  cloud  heights, 
visibility,  surface  pressure,  surface  wind  speed  and  direction,  surface  temperature,  surface  dew 
point,  and  precipitation  (amounts  and  type)  and  the  capability  to  add  additional  sensors  to 
determine  lightning  (direction  and  range),  nighttime  illumination,  present  weather,  soil  moisture 
and  soil  temperature,  precipitation  (fall  rates),  and  cloud  type.  They  must  automatically  collect 
and  transmit  data  directly  to  the  TFS.  The  TFS  will  then  transmit  the  quality  controlled  data  to 
the  C4I  systems  making  it  available  to  operational  customers. 

Modifications to the current upper air observing system, MARWINs, must produce a lightweight,
deployable subsystem that determines vertical profiles of wind speed and direction, temperature,
pressure, geopotential heights, and dew point. The base station will transmit this data to the
TFS.

Modifications to the current manual surface observing system, Belt Weather Kits, must produce a
single-person portable system consisting of components that can be hand held or ground mounted to
manually  determine  cloud  heights,  visibility,  surface  pressure,  surface  wind  speed  and  direction, 
surface  temperature,  surface  dew  point,  precipitation  amount,  and  infrared  visibility. 

The  solution  for  remote,  automated  capability  must  be  an  expendable,  lightweight,  and  small 
sensor  system  deployable  to  remote  areas  under  friendly  or  enemy  control.  It  must  automatically 
determine  cloud  heights;  visibility;  surface  pressure;  surface  wind  speed  and  direction;  cloud 
coverage;  cloud  type;  infrared  visibility;  precipitation  type,  rate,  and  amount;  surface 
temperature;  and  surface  dew  point.  These  determinations  will  be  automatically  transmitted  to  a 
designated  TFS  location. 

Meteorological  Operational  Capability  (MOC)  develops  and  procures  observing  and  data 
processing  systems  to  meet  Army  and  Air  Force  operational  requirements  in  the  fixed  BWS 
environment.  This  program  replaces  and  improves  existing  fixed  meteorological  observing  and 
processing  systems,  improving  support  to  the  planning  and  execution  of  aerospace  operations, 
while  satisfying  critical  flight  safety  and  resource  protection  requirements.  The  MOC  will  build 


177 


upon  technological  advances  developed  under  the  TFS  and  tactical  observing  programs.  The 
transition of tactical systems technology back into the fixed base/post environment supports the
"train in peace as you fight in war" concept, ensuring combat and peacetime support systems are
as similar as possible, ultimately reducing unit wartime training requirements. The MOC must
ingest  all  available  sources  of  METOC  information  and,  from  a  single  user  position,  quality 
control,  format,  display,  process,  analyze,  and  archive  all  required  observed  and  forecasted 
weather  data  and  products.  The  MOC  must  disseminate  this  meteorological  information  to  local 
C4I  systems  and  worldwide  weather  communications  systems.  Improved  weather  observing 
capabilities  must  provide  continuous  and  automatic  sensing,  collection,  quality  control,  and 
display  of  local  weather  conditions.  Also,  new  automated  observing  capabilities  must  provide 
lightning  detection  for  ground  refueling,  munitions  safety  and  support  to  base  and  post  central 
computer  facilities,  measurement  of  wind  and  temperature  vertical  profiles  for  wind  shear 
detection  and  warning,  and  measurement  of  slant  range  visibility  to  improve  flight  safety. 
Improved  forecasting  capabilities  must  include  the  integration  of  a  local  environmental  forecast 
model  (designed  to  improve  short-range  forecasting),  replacement  or  upgrade  of  existing 
meteorological  data  manipulation  and  display  systems,  and  an  integrated  platform  dedicated  to 
the timely collection, assimilation, processing, and dissemination of all required METOC
information. 

NEXRAD  is  a  hardware/software  development  and  upgrade  to  current  weather  radar  systems. 
The NEXRAD will replace a majority of current fixed radars (FPS-77s and FPQ-21s) and
improve  weather  forecasts.  Doppler  and  computer  technology  will  allow  better  storm  detection 
and  assessment  of  severity,  improve  warning  accuracy,  increase  warning  lead-times,  and  permit 
the  automated  exchange  of  digital  radar  data  with  civil  agencies. 

The Automated Weather Distribution System Pre-Planned Product Improvement (AWDS P3I)
program  develops,  procures,  installs,  and  maintains  evolutionary  AWDS  improvements.  A 
timely  processing  improvement  will  increase  the  responsiveness  and  processing  abilities  of  the 
original  system  to  meet  increasing  system  demand  and  operational  requirements. 
Communications  and  computer  systems  interfaces  between  AWDS  and  customer  C4I  systems, 
weather  satellite  receiving  systems,  and  other  weather  systems  will  allow  timely  forecasting  and 
dissemination  of  critical  weather  information  to  customer  decision  makers.  A  Remote  Briefing 
Capability  (RBC)  will  allow  AWDS  to  provide  selected  alphanumeric  and  graphic  products  to 
customer  facilities  both  on  and  off  base/post  for  briefings.  Software  improvements  include 
severe  weather  algorithm  calculations,  solar  and  lunar  data  calculations,  toxic  corridor 
calculations,  high  resolution  grid  processing,  improved  archival  abilities,  and  model  climatology 
processing.  AWDS  P3I  will  support  the  migration  of  this  software  to  an  open  systems 
environment. 

4.2.2  Army  Systems'  Enhancements 

The following Army programs are planned to satisfy tactical unit shortfalls:

The Integrated Meteorological System (IMETS), AN/TMQ-40, is predominantly a non-
developmental  item  that  provides  automation  and  communications  support  to  Air  Force  Weather 




Teams  assigned  to  Army  Intelligence  (G2/S2)  Sections  at  echelons  from  separate  brigade 
through  the  echelon-above-corps  level  and  Special  Operations  Forces.  IMETS  will  receive, 
process,  and  collect  weather  forecasts,  observations,  and  climatological  data  used  to  produce 
timely,  accurate  products  tailored  to  meet  supported  commander's  requirements  for  state-of-the- 
art  weather  support.  IMETS  produces,  displays,  and  disseminates,  over  the  Army  Tactical 
Command  and  Control  System  (ATCCS),  weather  forecasts  and  decision  aids  that  compare  the 
impact  of  current,  projected,  or  hypothesized  weather  conditions  on  both  friendly  and  enemy 
capabilities. 

The Meteorological Measuring System (MMS), AN/TMQ-41, is under development by the Army
Research  Laboratory  (ARL),  Battlefield  Environment  Directorate.  The  system  will  have  the 
capability to provide meteorological support to Army artillery operations.  The information provided will be the same as that now provided by radiosonde-based systems, but it will be obtained through the use of profiling radars, ground-based sensors, and meteorological satellites.  This system will provide more frequent atmospheric profiles than are currently available.

4.2.3  Science  and  Technology  Programs 

Key  technologies  are  needed  to  support  modernization  plans  and  future  improvements  in 
capability  for  meeting  shortfalls  in  wartime  and  peacetime  requirements.  Software  development 
research for battlefield operations is needed for the following:

A  Theater-Scale  Analysis  Procedure  (TAP)  capable  of  ingesting  and  fusing  observations 
available  in-theater,  including  ground,  upper-air,  and  satellite  data  in  a  nearly  continuous  manner 
and  of  providing  timely  analyzed  values  of  wind,  temperature,  moisture,  and  surface  pressure. 

Analytical, statistical, and artificial intelligence (AI) (e.g., expert systems and neural networks) techniques and models to serve single station and regional data analysis and forecast scenario requirements, including battlefield models capable of predicting/inferring precipitation rates, soil moisture, vertical density variations (atmospheric refractive index structure), ceiling, visibility (obscurants, e.g., dust and smoke), height of the low-level (atmospheric boundary layer) inversion, and severe and extreme low-level turbulence (e.g., due to downslope windstorms).

In  addition,  a  theater-scale  numerical  weather  prediction  model  will  be  selected,  prototyped, 
evaluated,  and  validated.  The  model  will  have  the  capability  of  providing  reliable,  very  high 
resolution  forecasts  of  atmospheric  elements,  including  wind,  temperature,  pressure,  and 
moisture. 

Research  is  also  needed  to  develop  improved  battlefield  atmospheric  sensors.  Sensors, 
algorithms,  and  strategies  necessary  to  automate  the  detection  of  the  elements  of  the  surface 
observation  will  be  developed. 

R&D  is  needed  to  develop  improved  algorithms  for  the  NEXRAD.  The  following  research 
efforts are needed to meet customer requirements:




Severe  Storms  (automated  mesocyclone  identification  and  prediction  of  supercell 
tornadoes) 

Aviation  Hazards  (icing  and  turbulence) 

Storm  Structure  (automated  wind  analysis  and  non-severe  weather) 

Tropical  Cyclone  Analysis  (storm  strength  and  wind  analysis) 

5.  FUTURE  FAP  PLANNING 

Currently,  several  studies  are  underway  that  will  determine  future  requirements  and  operational 
concepts  of  AFW.  One  example  is  a  study  scheduled  for  completion  in  December  1994  to 
provide  options  on  the  capability  of  a  centralized  weather  unit  on  the  battlefield.  A  second 
example  is  the  AFW  architecture  study  scheduled  for  completion  in  December  1995  to  determine 
several  functional  and  physical  model  options  for  future  AFW  development.  A  third  effort  is  the 
study  to  determine  joint-level  communication  connectivity  needs.  These  studies  will  be  used  to 
update and modify the FAP's operational concept and adjust future modernization plans.  These studies may generate additional customer requirements and initiate acquisition and technology programs to address newly identified deficiencies.  The modernization planning
process  described  is  continually  evolving  to  optimally  meet  the  changing  requirements  of  our 
nation's  defense.  As  new  technological  improvements  enhance  and  change  the  mission 
capabilities  of  AFW's  customers,  our  challenge  is  to  keep  AFW  at  the  forefront  of  technology 
and  poised  to  provide  the  warfighter  with  the  best  possible  decision  assistance. 




USE OF NARRATIVE CLIMATOLOGIES AND SUMMARIZED AIRFIELD
OBSERVATIONS  FOR  CONTINGENCY  SUPPORT 


Kenneth R. Walters, Sr. and Christopher A. Donahue
U.  S.  Air  Force  Environmental  Technical  Applications  Center 
Scott  Air  Force  Base,  Illinois  62225-5116 

ABSTRACT 

A  recurring  problem  for  the  United  States  military  is  supporting 
contingency  operations  worldwide.  Integral  to  effective  planning 
of  such  operations  is  detailed  knowledge  of  regional  climate  and 
weather.  USAF  Environmental  Technical  Applications  Center 
(USAFETAC)  fills  this  void  by  providing  tailored  narrative  studies 
and  summarized  airfield  observational  statistics.  Planners  use 
the narrative studies to ascertain weather-associated problems for
the  entire  area  in  question.  Summarized  airfield  observational 
statistics  give  detailed  airfield  information  for  both  ground  and 
air  operational  planning.  Additional  tailored  specialized  studies 
are provided as requested.  Packages are prepared and transmitted
electronically  to  worldwide  users  within  as  little  as  eight  hours 
of  request  receipt.  Examples  will  be  presented  at  the  conference 
for  various  recent  real  world  contingencies. 

1.  INTRODUCTION 

The  Readiness  Support  Branch  of  the  United  States  Air  Force 
Environmental  Technical  Applications  Center  (USAFETAC)  serves  as 
the  focal  point  for  climatological  support  to  contingency 
operations  levied  on  the  USAF  and  USA.  Products  go  through  the 
Unified  and  Specified  Commands  to  all  subordinate  United  States 
military  commands  as  determined  by  the  tasking  Unified  Command 
senior  meteorological/oceanographic  (METOC)  officer.  Such 
tailored,  point  or  area  specific  products  are  often  the  only 
climatological  information  available  to  planners  who  are 
responding  on  very  short  notice  to  unforeseen  deployment  of  the 
United  States  military.  Where  possible,  these  products  are 
operationally  tailored. 

2.  SCOPE AND CONTENT OF SUPPORT

Two  products  form  the  core  of  such  support:  a  narrative  study  and 
summarized  airfield  weather  observations  for  airfields  to  be  used 
either  in  the  area  of  operations  (AOR)  or  by  forces  enroute. 

The focus of the narratives varies greatly from one study to the next, both spatially and temporally.  These normally range from "point" studies, which cover the weather for a city-sized area, up to "small area" studies such as that done for the former




Yugoslavia.  Such  studies  may  concentrate  on  a  specific  time 
period  or  cover  the  complete  annual  cycle.  The  emphasis  is  placed 
on  weather  affecting  the  type  of  operations  planned,  which  can 
cover  the  gamut  of  military  operations. 

Such  studies  begin  by  covering,  in  as  much  detail  as  possible,  the 
terrain, vegetation, and, if required, the flora and fauna of the
area  of  interest.  Greatest  detail  is  in  the  small  point  studies, 
such  as  the  heights  of  nearby  hills,  ridges,  and  mountain  ranges; 
a  discussion  of  current  speeds  and  flooding  potential  of  local 
rivers  and  drainage  systems;  and  a  discussion  of  indigenous  flora 
and  fauna.  This  last  item  is  not  always  possible. 

Next  a  very  brief  discussion  of  the  synoptic,  mesoscale,  and  local 
meteorological  factors  which  drive  the  local  climate  is  provided. 
The  discussion  is  aimed  at  the  audience  specified  by  the 
requestor.  Normally,  at  least  a  limited  meteorological  background 
is  assumed,  but  some  have  been  tailored  for  non-meteorologists. 

The  core  of  the  study  is  a  description  of  the  weather  cycle  for 
the  requested  time  period.  These  are  usually  divided  by  local 
seasons — which  are  not  necessarily  the  classical  temperate  zone 
ones.  Conditions  that  impact  military  operations  are  highlighted. 
In  some  studies,  only  those  factors  (fog,  low  clouds,  heavy  rain, 
high winds, flood, and so on) that have adverse effects are
discussed.  Included are discussions and frequencies of the "rare events": severe thunderstorms, dense fog, heavy snows, and the like.

Finally, a subjective "confidence factor" is assigned.  As most of
these  studies  concern  areas  for  which  both  studies  and  raw  data 
are  sparse,  such  confidence  factors  are  necessary  if  the  users  are 
to  fully  integrate  the  information  into  contingency  operations. 
This  allows  us  to  discuss  our  evaluation  of  the  quantity  and 
quality  of  the  information  used  in  preparing  the  study. 

Source  material  encompasses  the  total  USAFETAC  information  base. 
The primary source is the superb collections of USAFETAC's Air
Weather  Service  Technical  Library.  Its  over  500,000  documents 
make  it  arguably  the  largest  dedicated  atmospheric  sciences 
library  in  the  United  States.  Summarized  numerical  data  is 
extracted  from  the  Air  Weather  Service  Climatic  Database 
maintained by USAFETAC's Operating Location A (OL-A) at Asheville, NC.  Located in the Federal Climate Complex with the National
Climatic  Data  Center  and  the  Navy's  Fleet  Numerical  Meteorology 
and  Oceanography  Detachment  Asheville,  OL-A  has  immediate  access 
to  the  combined  data  bases.  If  we  are  lucky,  the  area  of  interest 
is  within  the  area  covered  by  one  of  our  regional  climatologies. 
Research  is  simplified  greatly  by  such  a  coincidence. 

These  studies  are  normally  transmitted  via  secure 
telecommunication links to the requestor(s); illustrations are
necessarily  kept  to  a  minimum  due  to  the  very  short  amount  of  time 




allowed to complete such studies.  During "DESERT SHIELD", for
example,  such  studies  were  routinely  researched,  prepared,  and 
sent  within  24  hours  after  receiving  the  request.  Studies  are 
under  10  pages. 

The  second  product  is  one  or  more  airfield  climatological  data 
summaries.  Such  summaries  are  highly  desirable  for  those 
locations  where  flight  operations  are  planned.  These  give  not  only 
standard  mean  and  extreme  monthly  temperature  and  precipitation 
information,  but  also  percent  frequencies  of  occurrence  of 
selected  joint  ceiling  and  visibility  values  by  three  hour  blocks 
(00-02  local  time,  03-05,  etc),  prevailing  and  extreme  winds,  mean 
number  of  days  with  fog,  dust,  and  so  on.  Such  a  summary  can  be 
prepared,  if  sufficient  data  is  available  in  the  database,  within 
4  to  8  hours  after  request  receipt.  Observation  availability  is, 
of  course,  key  in  preparing  such  summaries.  Data  receipt  from 
many  Third  World  areas  is  somewhat  limited.  Observations  may  not 
be  available  for  nighttime  hours;  often  observations  are 
transmitted  only  every  three  hours  even  during  the  day.  Such 
constraints  often  mean  that  only  a  "limited  hour  summary"  can  be 
prepared.  In  the  worst,  and  most  frustrating,  cases  insufficient 
observations  are  available  to  prepare  such  a  summary. 
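To illustrate the kind of tabulation these airfield summaries contain, the sketch below counts the percent frequency of a joint ceiling/visibility condition by three hour local-time block from a list of observations.  It is a hypothetical Python example with an assumed data layout, not the USAFETAC software that produces the operational summaries.

    # Hypothetical sketch: percent frequency of a joint ceiling/visibility
    # condition by 3-hour local-time block (00-02 local = block 0, 03-05 = 1, ...).
    from collections import defaultdict

    def joint_ceiling_vis_frequency(obs, min_ceiling_ft, min_vis_m):
        """obs: iterable of (local_hour, ceiling_ft, visibility_m) observations.
        Returns {block: percent of obs with ceiling and visibility at or above
        the supplied thresholds}."""
        total, meets = defaultdict(int), defaultdict(int)
        for hour, ceiling, vis in obs:
            block = hour // 3
            total[block] += 1
            if ceiling >= min_ceiling_ft and vis >= min_vis_m:
                meets[block] += 1
        return {b: 100.0 * meets[b] / total[b] for b in sorted(total)}

    # A handful of made-up observations: (local hour, ceiling in feet, visibility in meters)
    sample = [(1, 300, 1600), (4, 1500, 8000), (4, 25000, 9999), (13, 800, 4800)]
    print(joint_ceiling_vis_frequency(sample, min_ceiling_ft=1000, min_vis_m=4800))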

Other  products  may  be  provided  as  requested  by  the  senior  METOC 
officer.  For  example,  climatological  refractive  index  profiles 
can,  under  certain  conditions,  be  extremely  useful  in  determining 
radar performance.  Electro-optical climatologies are vital for
determining  just  what  sensors  will  work  effectively  under  various 
weather  conditions.  Wet-bulb  globe  temperature  climatologies  are 
crucial  in  tropical  regions  in  establishing  deployed  personnel 
work  schedules.  Conversely,  climatological  wind  chill  factors  for 
certain  seasons  and  areas  of  the  world  are  equally  vital. 
Engineers  tasked  to  build  everything  from  runways  to  ports  in  the 
AOR  require  specialized  temperature  and  precipitation 
climatologies. 

3.  USES 

Effective planning for any military operation requires a thorough
knowledge  and  integration  of  the  effects  of  weather  and/or  climate 
into  such  plans.  Papers  in  the  9th  Applied  Climatology  Conference 
discuss  the  uses  of  climatology  in  long  term,  or  "deliberate" 
planning.  Here  we  focus  on  the  rapid,  or  "contingency,"  response 
to  unforeseen,  rapidly  developing  operations.  Few  military 
operations  are  not  affected  by  weather. 

The  specialized  climatological  packages  discussed  in  this  paper 
allow  planners  to  modify  deploying  equipment  to  effectively 
operate  in  the  particular  conditions  of  the  small  area  of 
interest.  Often  all  this  requires  is  slight  modification  of 
already  existing  plans.  On  occasion,  it  may  require  more 
comprehensive  changes.  Such  information  becomes  especially 
important  now  that  most  operations  occur  in  Third  World  areas 




where little current meteorological data is available; both the Somalia and the Rwanda Humanitarian Relief Operations are excellent examples.

Experience  has  shown  the  narrative  studies  are  best  for  general 
discussions  of  area  conditions  and  for  ground  operations. 
Summarized  point  data  are  best  for  air  operations  over  a  specific 
point,  if  such  observational  data  are  available.  In  some  cases, 
such  climatological  contingency  packages  provide  the  major  portion 
of the on-site weather information available to deploying forces, including weather support personnel.  This is particularly true for
Special  Operations  Forces.  These  packages  not  only  provide  the 
deploying  weather  personnel  with  badly  needed  background 
information  on  the  area  concerned,  they  also  provide  packages 
that,  often  with  little  modification,  can  be  given  to  flying  and 
ground  operations  personnel  to  acquaint  them  with  expected  general 
conditions.  Such  packages  do  not  take  the  place  of  real-time 
weather  support;  they  do  provide  invaluable  information  which 
allows  effective  planning  for  immediate  operations. 

Such  packages  mark  a  departure  from  standard  climatological 
support  as  it  has  been  provided  for  the  past  20  years.  The 
combination  of  computer  sophistication  and  the  availability  of  a 
large  atmospheric  sciences  library  allows  preparations  of  such 
packages  on  very  short  notice  and  rapid  dissemination  to  DoD  units 
worldwide.  Such  centrally  prepared  and  distributed  climatological 
packages  ensure  that  weather  personnel  and  operations  staff  at  all 
levels  are  provided  with  identical  information.  To  quote  General 
of  the  Air  Force  "Hap"  Arnold,  "Weather  is  the  essence  of 
successful  air  operations."  Given  its  effect  on  present  and 
future  weapons  systems,  this  is  now  true  for  all  operations.  Our 
climatological  contingency  packages  are  designed  to  help  maximize 
weather  support  quality. 




USAFETAC  DIAL-IN  ACCESS 


Kevin  L.  Stone  and  Robert  G.  Pena 
USAF  Environmental  Technical  Applications  Center 
Scott  AFB,  Illinois  62225 

ABSTRACT 

Describes  the  USAF  Environmental  Technical  Applications  Center's 
(USAFETAC's) Online Climatology Dial-In Service (Dial-In).  Dial-In allows
Department  of  Defense  (or  any  U.S.  Government  agency)  users  to  gain  direct 
access,  using  a  PC  and  modem,  to  certain  climatological  applications  available  on 
USAFETAC's  IBM  3090  mainframe  computer.  Dial-In  uses  a  batch-type 
communication  technique  called  "Advanced  Program-to-Program  Communication 
(APPC)."  Dial-In  works  cooperatively  with  commercial  APPC  software  to  allow 
information  exchange  between  a  PC  and  the  IBM  mainframe.  A  menu  system 
allows  users  to  run  pre-selected  programs  to  receive  standard  output.  Applications 
currently available are divided into three categories: Surface (12 applications), Upper-Air (2 applications), and Utilities (3 applications).

1.  INTRODUCTION 

The  USAF  Environmental  Technical  Applications  Center's  (USAFETAC's)  Online  Climatology 
Dial-In  Service  (Dial-In)  allows  Department  of  Defense  or  any  U.S.  Government  agency  users 
direct  access,  using  a  personal  computer  (PC)  and  modem,  to  certain  climatological  applications 
available  on  USAFETAC's  IBM  3090  mainframe  computer.  Dial-In,  which  became  operational 
in  1992,  makes  climatological  data  quickly  and  easily  available  to  USAFETAC's  customers. 

A user can log in to Dial-In by entering a valid user ID and password.  The system allows users
to  run  pre-selected  programs  and  receive  standard  output.  The  output  can  be  downloaded  to  the 
remote  PC.  The  program  features  "point  and  click"  commands  to  control  3-D  buttons  that 
prompt  the  user  with  dialog  windows  for  required  input  information. 

Dial-In  has  a  messaging  capability  that  allows  communication  between  remote  users  and 
USAFETAC.  Seventeen  applications  are  currently  available  and  are  divided  into  three 
categories:  surface  (twelve  applications),  upper-air  (two  applications),  and  utility  (three 
applications). 

2.  HARDWARE/SOFTWARE  CONFIGURATION 

Dial-In  uses  a  batch-type  communication  technique  called  "Advanced  Program-to-Program 
Communication  (APPC)."  Dial-In  works  cooperatively  with  commercial  APPC  software  to 




allow  information  exchange  between  a  PC  and  the  IBM  mainframe.  To  access  Dial-In  the  end 
user  needs  an  IBM  or  compatible  286  based  PC  with  640KB  main  memory,  1.5MB  of  available 
hard-disk space, MS DOS version 3.2 or better, EGA or better graphics display (256KB memory), and a Hayes compatible 2400 baud or better modem.  The end user connects to a PC
at  USAFETAC  which  serves  as  an  asynchronous  controller.  The  controller  contains  a 
coprocessor  board  and  control  program  allowing  up  to  eight  simultaneous  users.  The 
commercial  APPC  software  communicates  with  the  mainframe  computer  host  through  the 
asynchronous  controller. 

3.  MAIN  DISPLAY 

The  Dial-In  main  display  screen  (see  Figure  1)  consists  of  two  sections:  Control  Buttons  and 
the  Log  Area. 


Figure 1.  Main Display (control buttons along the left side of the screen; transaction log area on the right).


3.1  Control  Buttons 

The  Control  Buttons  line  the  left  side  of  the  main  display.  Activating  the  control  buttons  allows 
the  user  to  enter  into  any  of  three  types  of  applications  (surface,  upper  air  or  utility),  a  job  status 
viewer,  a  module  to  download  the  job  results,  other  modules  to  view  and  delete  results,  and  a 
message  viewer  allowing  the  user  to  read  messages  stored  on  the  mainframe  computer.  The 
user  can  send  a  message  to  USAFETAC  by  activating  the  Send  Message  button. 

3.2  The  Log  Area 

The  log  area  appears  on  the  right  two-thirds  of  the  screen  displaying  a  summary  of  transactions. 
The  log  is  copied  to  a  file  and  stored  on  the  user's  PC  for  a  history  of  the  Dial-In  session. 




4.  APPLICATIONS 


Surface  applications  run  against  the  DATSAV2  surface  data  set.  The  data  set  consists  of 
worldwide  weather  observations  collected  through  the  USAF  Automated  Weather  Network 
(AWN); decoded at the Air Force Global Weather Central (AFGWC), Offutt AFB, Nebraska;
and  stored  on  magnetic  tape  at  USAFETAC,  Scott  AFB,  Illinois  and  at  USAFETAC's  Operating 
Location  A  (OL  A),  Asheville,  North  Carolina.  The  database  contains  synoptic,  METAR, 
SMARS,  AMOS,  AERO,  MARS,  and  airways  observations. 

Upper  air  applications  run  against  the  DATSAV  Upper- Air  data  set  which  contains  rawinsonde 
and  pilot  balloon  observations  derived  from  reports  received  at  AFGWC  over  the  AWN.  These 
observations  are  quality  checked  before  they  are  sent  to  USAFETAC. 

Utility  applications  are  general  purpose  utilities  which  help  with  locating  Block  Station 
information.  These  applications  run  against  the  Air  Weather  Service  Master  Station  Catalog 
(AWSMSC)  which  is  a  comprehensive  listing  of  environmental  observing  sites  current  to  the 
last  nine  months.  For  each  station,  it  lists  the  name,  identifier,  location,  types  of  data  reported, 
field  elevation,  pressure  reporting  locations,  equipment  types,  etc. 

4.1  Surface  Applications 

4.1.1  A  Summary  (Weather  Conditions) 

This  program  provides  output  that  is  equivalent  to  part  A  of  the  Surface  Observation  Climatic 
Summary (SOCS).  The program provides both a total occurrence count and a percent frequency of occurrence for a specified period of record (POR) for the following weather
categories:  thunderstorms,  rain  and/or  drizzle,  freezing  rain  and/or  drizzle,  snow  and/or  sleet, 
hail,  fog,  smoke  and/or  haze,  blowing  snow,  dust  and/or  sand. 

4.1.2  Conditional  Weather  Summary 

This  program  provides  the  mean  number  of  days  a  selected  surface  weather  element  (e.g.,  fog, 
rain, precipitation) or a combination of two elements occurred for each month of a specified POR.  The data are arranged in hourly and 3-hourly groups in local time.  A range of values
may  be  used  to  further  describe  an  element  (e.g.,  visibility  from  4800  to  8000  meters). 

4.1.3  Distribution  Summary 

This  program  prints  hourly,  monthly,  or  annual  cumulative  frequency  distributions  for  density 
altitude, pressure altitude, or dry bulb temperature for a specified POR.

4.1.4  Ceiling  Durations 

This program provides the duration each time the ceiling is below a specified level for a specified POR.  The beginning and ending date and hour are provided for each duration.
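The duration bookkeeping such a product implies can be sketched as a simple run-length scan of an hourly record: open a run when the ceiling drops below the threshold and close it, recording the begin and end times, when the ceiling recovers.  The Python below is an illustrative sketch with an assumed data layout, not the Dial-In mainframe program.

    # Illustrative sketch: periods during which the ceiling stays below a threshold.
    from datetime import datetime, timedelta

    def ceiling_durations(hourly_obs, threshold_ft):
        """hourly_obs: list of (datetime, ceiling_ft) in time order, one per hour.
        Returns a list of (begin, end, duration_hours) for runs below threshold_ft."""
        runs, start = [], None
        for when, ceiling in hourly_obs:
            if ceiling < threshold_ft and start is None:
                start = when                              # run begins
            elif ceiling >= threshold_ft and start is not None:
                runs.append((start, when, (when - start).total_seconds() / 3600.0))
                start = None                              # run ends
        if start is not None:                             # run still open at end of record
            last = hourly_obs[-1][0] + timedelta(hours=1)
            runs.append((start, last, (last - start).total_seconds() / 3600.0))
        return runs

    obs = [(datetime(1994, 1, 1, h), c) for h, c in
           [(0, 5000), (1, 800), (2, 600), (3, 1200), (4, 400), (5, 300), (6, 2500)]]
    print(ceiling_durations(obs, threshold_ft=1000))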




4.1.5  Mean  Coincident  Temperature 


This  program  gives  the  mean  frequency  of  occurrence  of  a  primary  temperature  with  a  mean 
coincident  secondary  temperature  for  each  primary  temperature  range.  It  provides  the  number 
of  occurrences  within  a  range  of  a  temperature  type  and  the  average  corresponding  value  of 
another  specified  temperature  type.  This  information  may  be  used  in  temperature  or  design 
studies.  The  possible  primary  and  secondary  temperature  types  are  dry  bulb,  wet  bulb,  and  dew 
point. 
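A minimal sketch of the binning this description implies, assuming simple fixed-width bins for the primary temperature (the operational bin definitions and units may differ), is:

    # Hypothetical sketch: mean coincident secondary temperature per primary bin.
    def mean_coincident(pairs, bin_width=5.0):
        """pairs: iterable of (primary_temp, secondary_temp) observations.
        Returns {bin_lower_bound: (count, mean_secondary_temp)}."""
        sums, counts = {}, {}
        for primary, secondary in pairs:
            lower = bin_width * (primary // bin_width)    # e.g., 72 falls in the 70-74 bin
            sums[lower] = sums.get(lower, 0.0) + secondary
            counts[lower] = counts.get(lower, 0) + 1
        return {b: (counts[b], sums[b] / counts[b]) for b in sorted(sums)}

    # Example with dry bulb as the primary and wet bulb as the secondary temperature
    print(mean_coincident([(71, 65), (73, 66), (78, 69), (92, 74), (94, 75)]))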

4.1.6  Percent  Cloud  Free  Line  of  Sight 

The Percent Cloud Free Line of Sight program provides matrices of percent probability of cloud
free  line  of  sight  above  a  selected  location.  The  matrices  give  average  percent  values  by  month 
for  three  hourly  periods.  The  angles  above  the  location  are  computed  every  10  degrees  from  0 
to  80  degrees.  Surface-derived  databases  require  a  specified  Block  Station  number.  Satellite- 
derived  databases  can  be  run  for  any  latitude  and  longitude  point. 

4.1.7  Phenomena  Summary 

This  program  supplies  output  that  is  equivalent  to  part  A  of  the  Surface  Observation  Climatic 
Summary  (SOCS).  The  program  accesses  data  already  in  the  computer  database;  however,  the 
period  of  record  may  not  be  as  complete  as  what  is  available  for  the  "A  Summary"  program 
which  runs  all  available  tape  data.  The  program  provides  both  a  total  occurrence  count  and  a 
percent  frequency  of  occurrence  for  a  specified  POR  for  the  following  weather  categories: 
thunderstorms,  rain  and/or  drizzle,  freezing  rain  and/or  drizzle,  snow  and/or  sleet,  hail,  fog, 
smoke  and/or  haze,  blowing  snow,  dust  and/or  sand. 

4.1.8  Precipitation  Summary 

The  Precipitation  Summary  presents  precipitation,  temperature,  and  sky  cover  data  for  a  selected 
station  and  POR.  The  data  is  derived  from  the  DATSAV  database  and  can  be  obtained  for  any 
reporting station back through the year 1973.  The program is relatively fast and produces a large amount of data but is limited to an 11 year POR per request due to size limitations in the program.

4.1.9  Surface  Package 

The  Surface  Package  program  produces  a  percent  frequency  of  occurrence  for  specified  elements 
and  POR.  A  list  of  over  50  elements  may  be  compared  against  each  other  for  a  specified  block 
station.  The  program  is  designed  for  either  a  one  element  percent  frequency  of  occurrence  or 
a  detailed  comparison  of  many  elements. 




4.1.10  Temperature,  Relative  Humidity,  and  Wind  Climo  Summary 

This  program  provides  the  following  tables  of  climatological  statistics  using  a  specified  POR  not 
to  exceed  30  years. 

1)  Monthly/annual  temperature  and  relative  humidity  statistics. 

2)  Percent  frequency  of  occurrence  of  wind  direction  and  wind  speed  (knots)  for  both 
sustained  winds  and  gusts. 

3)  Monthly/annual  winds  (knots). 

4)  Maximum  wind  occurrence  -  the  five  highest  values  per  year. 

4.1.11  Wind  Chill 

The Wind Chill program provides the percent frequency of occurrence of equivalent chill temperature (wind chill).  The frequency distributions are given for user specified temperature
categories  and  POR. 
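One way such a distribution could be tallied is sketched below.  The equivalent chill temperature formula itself is passed in as a placeholder function, and the demonstration formula is invented; the operational program applies its own chill calculation.

    # Illustrative sketch: percent frequency of equivalent chill temperature by category.
    def chill_frequency(obs, category_bounds, chill_function):
        """obs: iterable of (temperature, wind_speed); category_bounds: sorted upper
        bounds of the chill-temperature categories.  Returns one percent value per
        category, plus a final catch-all category above the last bound."""
        counts = [0] * (len(category_bounds) + 1)
        n = 0
        for temp, wind in obs:
            chill = chill_function(temp, wind)
            idx = next((i for i, bound in enumerate(category_bounds) if chill <= bound),
                       len(category_bounds))
            counts[idx] += 1
            n += 1
        return [100.0 * c / n for c in counts]

    # Placeholder chill function for demonstration only (NOT the operational formula)
    demo_chill = lambda temp_f, wind_kt: temp_f - 1.5 * wind_kt
    print(chill_frequency([(10, 20), (25, 5), (-5, 15)], [-20, 0, 20], demo_chill))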

4.1.12  Windspeed  Analysis 

The  Windspeed  Analysis  program  provides  the  five  strongest  wind  speeds  (sustained  or  gust) 
for  each  year  and  month  of  a  specified  POR. 

4.2  Upper  Air  Applications 

4.2.1  Probability  of  Icing 

The  Probability  of  Icing  program  computes  icing  information  for  a  selected  rawinsonde  site  and 
determines  the  probability  of  icing  at  the  following  levels:  1000,  850,  700,  500,  and  400  mb. 
The  probabilities  are  multiplied  by  a  correction  factor  from  AWSM  105-39  (AWS/TR-80/001) 
based  on  20,000  aircraft  flights  in  icing  conditions  to  obtain  the  potential  for  icing. 

4.2.2  Upper  Air  Reader 

The  Upper  Air  Reader  program  extracts  RAOB  data  for  a  particular  station  from  USAFETAC's 
climatic  database.  Pressure,  temperature,  moisture,  and  wind  data  are  interpolated  to  either 
pressure  or  height.  Pressure  interpolation  is  from  the  surface  to  100  mb  in  100  mb  intervals. 
Height  interpolation  is  from  the  surface  to  50,000  feet  MSL  in  1,000  foot  intervals. 
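A minimal sketch of one plausible form of the pressure interpolation, assuming linear interpolation in the logarithm of pressure between reported levels (the actual Upper Air Reader method is not documented here), is:

    # Hypothetical sketch: interpolate a sounding value to a target pressure level.
    import math

    def interp_to_pressure(levels, target_p):
        """levels: list of (pressure_mb, value) ordered from the surface upward
        (decreasing pressure).  Returns the value interpolated linearly in ln(p)
        at target_p, or None if the target lies outside the sounding."""
        for (p1, v1), (p2, v2) in zip(levels, levels[1:]):
            if p1 >= target_p >= p2:                      # target lies within this layer
                frac = (math.log(p1) - math.log(target_p)) / (math.log(p1) - math.log(p2))
                return v1 + frac * (v2 - v1)
        return None

    sounding = [(1000, 15.0), (850, 8.0), (700, -2.0), (500, -20.0)]   # (mb, temperature C)
    print([interp_to_pressure(sounding, p) for p in (900, 800, 600)])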

4.3  Utility  Applications 

4.3.1  Nearest  50  Stations 

The  Nearest  50  Stations  program  provides  the  50  closest  active  weather  stations  to  a  given 
point.  The  program  searches  the  Air  Weather  Service  Master  Station  Catalog  and  helps  locate 
stations  which  may  be  used  for  climatological  studies. 
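The underlying search can be sketched as a great-circle distance ranking over the station catalog.  The Python below is a hypothetical illustration; the catalog fields and function names are assumed.

    # Hypothetical sketch: rank catalog stations by great-circle distance from a point.
    import math

    def nearest_stations(lat, lon, stations, n=50):
        """stations: iterable of (name, lat_deg, lon_deg).  Returns the n closest
        stations as (distance_km, name), using the haversine formula."""
        def haversine_km(lat1, lon1, lat2, lon2):
            r = 6371.0                                    # mean Earth radius, km
            p1, p2 = math.radians(lat1), math.radians(lat2)
            dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
            a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
            return 2 * r * math.asin(math.sqrt(a))

        ranked = sorted((haversine_km(lat, lon, s_lat, s_lon), name)
                        for name, s_lat, s_lon in stations)
        return ranked[:n]

    catalog = [("KSTL", 38.75, -90.37), ("KBLV", 38.55, -89.85), ("KSUS", 38.66, -90.65)]
    print(nearest_stations(38.54, -89.84, catalog, n=2))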




4.3.2  Station  Locator 


Station  locator  helps  find  the  best  reporting  station(s)  for  either  surface  or  upper  air  data.  The 
program  selects  and  displays  reporting  stations  in  specified  areas  and  provides  the  frequency  of 
reports  for  surface  or  upper  air  data.  Five  degree  squares  or  less  are  the  optimum  input, 
especially  in  data  dense  areas. 

4.3.3  TAFVER  II  Statistics 

The  TAFVER  II  program  is  an  automated  tool  designed  to  measure  the  quality  of  weather 
forecasting  support  provided  by  the  Air  Force  weather  community.  The  program  verifies  all 
Terminal  Aerodrome  Forecasts  (TAFs)  issued  by  Air  Force  weather  forecasters,  providing  the 
corresponding  observations  are  available. 

5.  SUMMARY 

The  Dial-In  service  was  fielded  to  provide  USAFETAC's  customers  with  more  responsive 
support.  The  system  allows  remotely  located  users  to  access  USAFETAC's  mainframe 
computer,  which  benefits  USAFETAC  and  the  customer.  A  menu  system  allows  the  user  to  run 
pre-selected  programs  to  receive  standard  output.  Programs  construct  commonly  requested 
summarized  profiles  of  meteorological  variables.  The  output  can  then  be  downloaded  to  the  end 
user's  PC.  The  Dial-In  service  was  implemented  to  augment  the  traditional  system  to  help 
relieve  backlogs  and  provide  information  quickly,  not  as  a  replacement  to  the  traditional  system. 

REFERENCES 

Pena, Robert G., 1994: USAFETAC Online Climatology Dial-In Service Users Manual,
USAFETAC  TN-94002,  USAF  Environmental  Technical  Applications  Center,  Scott  Air  Force 
Base,  IL,  80  pp. 






ASTRONOMICAL MODELS ACCURACY STUDY

Chan  W.  Keith  and  Thomas  J.  Smith 
USAF Environmental Technical Applications Center
Scott  AFB,  Illinois  62225-5116 


ABSTRACT 

Several astronomical routines and their accuracy in calculating solar and lunar event times were compared.  The Naval Observatory MICA (Multiyear Interactive Computer Almanac) model was used as ground truth and assumed to be correct.  Programs compared to MICA were the USAFETAC Nitelite for Windows model; LightPC version 4.2, developed by Sgt Schweinfurth of Detachment 5, 5th Weather Squadron; the Illum routine used in the USAFETAC program INSOL; and the Army model, NVG versions 4.0 and 5.1.  Model output for a series of latitudes between the equator and 80 degrees north was compared.

1.  INTRODUCTION 

The  USAF  Environmental  Technical  Applications  Center  (USAFETAC)  performed  a  study  to 
determine  the  accuracy  of  various  astronomical  event  time  calculation  routines.  Model  accuracy 
was  determined  by  using  the  models  to  calculate  sunrise,  sunset,  moonrise,  moonset,  and  various 
twilight  times  over  a  three  year  period,  and  comparing  these  values  to  an  acceptable  standard. 
The  Naval  Observatory's  MICA  (Multiyear  Interactive  Computer  Almanac)  model  was  used  as 
ground  truth  in  the  study.  Programs  compared  to  MICA  were  (a)  the  USAFETAC  Nitelite  for 
Windows model, (b) LightPC version 4.2, developed by Sgt Schweinfurth of Det 5, 5th WS, (c) the Illum routine used in the USAFETAC program INSOL and the Electro-Optical Tactical
Decision  Aid  (EOTDA)  software,  and  (d)  versions  4.0  and  5.1  of  the  Army  model,  NVG. 
Statistical  parameters  including  average  error,  root  mean  square  error,  standard  deviation,  and 
total error were computed.  Model output for a series of latitudes between the equator and 80 degrees north was compared.  The Illum routine was the most accurate model compared to MICA.  Accuracy degraded only slightly at the extreme northern latitudes.  The next most accurate model was LightPC.  Nitelite performance was nearly equivalent to LightPC at latitudes below 60 degrees north, but accuracy decreased rapidly at the higher latitudes.  The latest version
of  NVG  performed  comparably  with  Nitelite  in  computing  the  sunrise,  sunset  and  moonrise,  but 
performance  degraded  somewhat  for  the  other  event  times.  The  average  errors  for  all  of  the 
models  except  NVG  were  less  than  3  minutes  for  all  of  the  latitudes  and  event  times  tested. 

All  of  the  models  are  capable  of  running  on  a  desktop  PC,  although  our  version  of  ILLUM  92 
was run on an IBM RS/6000 workstation using IBM's XL FORTRAN compiler, which defaults
to  double  precision  for  floating  point  calculations.  Output  from  LIGHTPC  version  3.2  and  earlier 
is  assumed  to  be  identical  to  NITELITE,  since  they  used  the  same  algorithm  for  computing  the 
sun  and  moon  locations. 





2.  MODEL DESCRIPTIONS

2.1  NITELITE 

NITELITE is the latest USAFETAC-developed program produced for calculating astronomical data.  It is the only program tested that is available in a Windows version.  It computes the beginning and ending of nautical twilight, sunrise and sunset, moonrise and moonset, and percent
illumination  information.  The  output  is  in  an  easy  to  read  graphical  format,  but  the  program  will 
not  output  the  data  to  a  file  in  a  tabular  format.  As  a  result,  a  slight  modification  was  made  to 
the  program  to  save  the  data  to  a  file  for  comparison  purposes. 

The  algorithm  that  calculates  the  solar  and  lunar  data  was  originally  developed  by  A.  C.  van 
Bochove  (1982)  in  FORTRAN;  the  NITELITE  version  contains  a  few  minor  corrections.  The 
algorithm was converted to Visual BASIC and adapted for use within the Windows program on the desktop computer.  The program is maintained by USAFETAC/SYS.  NITELITE uses the same algorithm (ILLUM, 1987) as LIGHTPC version 3.2, a USAFETAC program, and we
assume  LIGHTPC  v3.2  performance  would  be  nearly  identical  to  NITELITE. 

2.2  LIGHTPC

This program is an updated version of the original LIGHT program developed by 1Lts D. Payne and J. Morrison from USAFETAC.  This version was produced by Sgt J. Schweinfurth of Det 5, 5th WS, and was developed specifically for determining solar and lunar event times, and for night vision goggle support.  In addition to computing event times, it can compute percent illumination and nighttime darkness data.  The program is user friendly and menu driven.  Processing time, however, is somewhat slow for large amounts of data (i.e., more than several months).

LIGHTPC  calculations  are  based  primarily  on  the  methods  developed  by  A.  C.  van  Bochove 
(1982),  with  a  number  of  corrections  and  updates.  The  solar  semidiameter  was  assumed  to  be 
a  constant  16  minutes  (')  of  arc.  The  lunar  semidiameter  and  parallax  were  individually 
computed  and  included  as  correction  terms  in  the  calculations  of  event  times.  Upgrades  to  this 
program  can  not  be  readily  obtained. 

2.3  ILLUM 92

Capt  M.  Raffensberger,  in  USAFETAC/SYT,  developed  the  INSOL  program  in  1994,  primarily 
to  compute  the  daily  cumulative  insolation  at  the  surface  and  the  top  of  the  atmosphere  as  an  aid 
for  forecasting  fog  dissipation.  The  original  program  did  not  provide  the  solar  and  lunar 
information  directly,  but  since  it  made  the  astronomical  computations  and  used  the  data 
internally,  the  program  was  modified  to  output  that  information  so  that  its  accuracy  could  be 
determined. 

The  algorithm  is  based  on  the  program  developed  by  A.  C.  van  Bochove  and  Erlich  (1982)  and 
was  modified  by  Sidney  Wood  (1986),  Paul  Hilton  (1987),  Maria  Gouveia  (1989)  and  Dan 






DeBenedictis (1992) of Hughes STX.  The ILLUM subroutine within the INSOL program was originally used in the Electro-Optics Tactical Decision Aid (EOTDA) program by Hughes STX; it computed solar position at 15 minute intervals, but was modified to make the computations every 20 seconds to ensure the event times we obtained were to the nearest minute.  Solar and lunar locations were adjusted to account for a standard refractive index produced by the atmosphere, which corresponded to 34' of the solar or lunar path arc length.  The sun's semidiameter (16') and moon's semidiameter (16') were also included in the adjustment, since event times are based on the upper limb of the disk.  A search was then conducted to find the positions and times corresponding most closely to the appropriate solar and lunar locations described within the definitions of the various twilights and rise and set times.
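That search strategy can be sketched as a fixed-step scan for threshold crossings: step through the day at 20-second intervals and note where the solar elevation crosses 50' (34' refraction plus 16' semidiameter) below the horizon.  In the Python sketch below the elevation function is a caller-supplied placeholder, not the ILLUM 92 ephemeris, and the interface is assumed for illustration.

    # Hedged sketch of a fixed-step rise/set search; solar_elevation_deg is a placeholder.
    def find_crossings(solar_elevation_deg, day_start_s, day_end_s,
                       threshold_deg=-50.0 / 60.0, step_s=20):
        """Scan at step_s intervals; return (rise_seconds, set_seconds), either may be None."""
        rise = set_ = None
        prev = solar_elevation_deg(day_start_s)
        for t in range(day_start_s + step_s, day_end_s + 1, step_s):
            cur = solar_elevation_deg(t)
            if prev < threshold_deg <= cur:
                rise = t                                  # upward crossing: rise event
            elif prev >= threshold_deg > cur:
                set_ = t                                  # downward crossing: set event
            prev = cur
        return rise, set_

    # Demonstration with a toy elevation curve (NOT a real ephemeris)
    import math
    toy_elevation = lambda t: 40.0 * math.sin(2 * math.pi * (t - 21600) / 86400.0)
    print(find_crossings(toy_elevation, 0, 86400))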

2.4  NVG 

The  Night  Vision  Goggles  program  (NVG)  was  developed  and  recently  updated  (version  5.1)  by 
the  Army  Research  Laboratory  (ARL),  primarily  for  computing  nighttime  natural  illumination 
values.  User inputs include observer location, date and time, and current meteorological
information  for  specific  parameters.  The  model  is  capable  of  calculating  solar  and  lunar  position, 
rise  and  set  times,  two  forms  of  twilight  times,  and  illumination  information.  The  calculated  sun 
and  moon  positions  are  based  on  the  methods  developed  in  1982  by  A.  C.  van  Bochove. 

The  program  is  menu  driven  and  some  background  knowledge  of  the  program  is  helpful,  since 
some of the input parameters require a specific format.  Weaknesses of this software include: (1) calculations are limited to the area between 64° S and 64° N, and (2) the program will only
send  the  output  to  the  screen  or  printer,  and  not  to  an  output  file  for  further  processing.  (ARL 
provided  a  modified  version  of  the  program  that  did  allow  for  data  to  be  sent  to  an  output  file.) 
One year of data from v4.0 was also used in the study.  This model includes horizontal refraction and disk semidiameter corrections (34' and 16', respectively) in the event time computations.

According  to  D.  Sauter  (personal  communication),  NVG  searches  for  the  beginning  of  nautical 
and  civil  twilight  (BNT,  BCT),  sunrise  and  sunset  (SR,  SS),  then  uses  the  time  differential 
between  BNT  and  BCT  from  SR  to  get  an  approximated  end  of  civil  and  nautical  twilight  (ECT, 
ENT).  While  this  procedure  is  a  useful  time  saving  tool,  it  can  cause  a  non-occurring  event  to 
be predicted at higher latitudes.  This occurs when the sun stays above (below) an elevation of 12° below the horizon, the nautical twilight threshold, for example, for many weeks in the summer (winter).
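Read that way, the approximation simply mirrors the morning twilight offsets from sunrise about sunset.  The short sketch below shows this interpretation; it is our reading of the description, not ARL code, and the function name is invented.

    # Sketch of the described shortcut: estimate evening twilight from morning offsets.
    from datetime import datetime, timedelta

    def approximate_evening_twilight(bnt, bct, sunrise, sunset):
        """All arguments are datetimes for the same day; returns (ect, ent)."""
        ect = sunset + (sunrise - bct)                    # end of civil twilight
        ent = sunset + (sunrise - bnt)                    # end of nautical twilight
        return ect, ent

    day = datetime(1994, 6, 21)
    bnt = day + timedelta(hours=4, minutes=30)
    bct = day + timedelta(hours=5, minutes=10)
    sr = day + timedelta(hours=5, minutes=40)
    ss = day + timedelta(hours=20, minutes=20)
    print(approximate_evening_twilight(bnt, bct, sr, ss))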

2.5  MICA 

This Naval Observatory program superseded the Floppy Almanac in 1992 and was created for astronomers, surveyors, meteorologists, and navigators to provide high precision astronomical data for a variety of astronomical objects, including all of the planets, the moon, and many stars.  Stated accuracy is less than 1 minute of time.  A sample of the parameters that can be calculated are three forms of twilight times, rise and set times, positions of celestial bodies, and fraction illumination.  The calculated celestial body positions, as described in the MICA Users' Manual, are based on methods presented by C. A. Smith et al., and ephemeris data provided by E. M.





Standish, Jr.  The program is user friendly, fast, and menu driven.  A constant solar semidiameter of 16' of arc and a constant horizontal refraction (34') are assumed.  The program individually
computes  the  lunar  semidiameter  and  parallax  and  includes  them  in  the  event  time  computations. 

3.  METHODS

3.1  Scope
Each program's data, except that from NVG v4.0, was compared to output from MICA for the 3 year period of record from 1 Jan 94 through 31 Dec 96.  Output from NVG v4.0 was limited to 1 Jan 94 through 31 Dec 94.

The locations for which event times were calculated for each model corresponded to a constant longitude of 90° W and the latitudes given by Table 1.  The latitudes were limited to the northern hemisphere.  It was assumed that the results obtained apply to the southern hemisphere as well, and that model errors did not vary for different longitudes.  Latitudes above 80° N were not used in this study because MICA accuracy decreased within a few degrees of the poles.


Table 1.  Latitudes for computing event times used within this study for each of the models.

    LAT (deg N)   NITELITE   ILLUM   LIGHTPC   NVG
         0           ✓          ✓        ✓       ✓
        10           ✓          ✓        ✓       ✓
        20           ✓          ✓        ✓       ✓
        30           ✓          ✓        ✓       ✓
        40           ✓          ✓        ✓       ✓
        50           ✓          ✓        ✓       ✓
        60           ✓          ✓        ✓       ✓
        65           ✓          ✓        ✓       NA
        70           ✓          ✓        ✓       NA
        75           ✓          ✓        ✓       NA
        80           ✓          ✓        ✓       NA

Table 2.  Event times predicted by each model for purposes of this study.

    Event   NITELITE   ILLUM   LIGHTPC   NVG
    BNT        ✓          ✓        ✓       ✓
    BCT        NA         ✓        ✓       ✓
    SR         ✓          ✓        ✓       ✓
    SS         ✓          ✓        ✓       ✓
    ECT        NA         ✓        ✓       ✓
    ENT        ✓          ✓        ✓       ✓
    MR         ✓          ✓        ✓       ✓
    MS         ✓          ✓        ✓       ✓

Event categories indicated in Table 2 are available from the models.  The program times were compared with the MICA times, and the statistical analyses described below were performed.

3.2  Statistical Parameters

Several  statistical  parameters  were  computed 
for  each  latitude  for  the  results  generated  by 
the  models.  The  following  statistical 
parameters  were  computed  (Wilmott,  1982). 






Total error - the total number of minutes of error over the period of record:

    \text{Total Error} = \sum_{i=1}^{N} (P_i - O_i)                                    (1)

where N is the total number of observations for the period of record, O_i is the MICA predicted event time, and P_i is the model predicted event time.  The total error provides an indication of the model bias, whether the model consistently predicts events to occur either before or after the MICA model.  To determine the bias of model results, it is necessary to consider the magnitude of the total error in conjunction with the magnitude of the total absolute error, described below.

Total absolute error - the total number of minutes of the absolute value of the error over the period of record:

    \text{Total Absolute Error} = \sum_{i=1}^{N} |P_i - O_i|                           (2)

The total absolute error describes the overall magnitude of the error for each model.  This statistic is not normalized by the number of events and will vary significantly with the number of events in the period of record.

Average error - the total number of minutes of the absolute value of the error over the period of record divided by the total number of observations:

    \text{Average Error} = \frac{1}{N} \sum_{i=1}^{N} |P_i - O_i|                      (3)

This is referred to as mean absolute error by Wilmott (1982), and it weights all errors equally in its determination.  This statistic is normalized by the number of observations, so it will not change if a larger period of record is used, as long as the sample size is representative.

Root Mean Square Error (RMSE) - the square root of the sum of the individual errors squared divided by the total number of observations:

    \text{RMSE} = \sqrt{\frac{1}{N} \sum_{i=1}^{N} (P_i - O_i)^2}                      (4)

The RMSE weights larger individual errors more heavily than the smaller errors.  The RMSE is also normalized by the total number of observations.
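For illustration, equations (1) through (4) translate directly into a few lines of Python; the sketch below treats event times simply as minute counts and is not part of the study's software.

    # Sketch: the four error statistics for one event type at one latitude.
    import math

    def event_time_stats(predicted, observed):
        """predicted, observed: equal-length sequences of event times in minutes."""
        diffs = [p - o for p, o in zip(predicted, observed)]
        n = len(diffs)
        total_err = sum(diffs)                            # equation (1)
        total_abs_err = sum(abs(d) for d in diffs)        # equation (2)
        avg_err = total_abs_err / n                       # equation (3)
        rmse = math.sqrt(sum(d * d for d in diffs) / n)   # equation (4)
        return total_err, total_abs_err, avg_err, rmse

    print(event_time_stats([361, 359, 362, 360], [360, 360, 360, 360]))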




4.  RESULTS 




4.1  Solar  Events 

In general, the models predicted the solar event times much more accurately at the low latitudes than high latitudes, particularly above 60° N.  All of the models exhibited some seasonal trend.  The models tended to work best near the equinoxes (March and September) and worst near the solstices (June and December).  Results from NVG v5.1 did exhibit a slight degradation of performance over time.

Figure  1  depicts  the  frequency,  in  percent, 
that  each  model  predicted  the  sunrise  time 
within  1  minute  of  MICA,  as  a  function  of 
latitude.  Below 60° N, all of the models
except  NVG  were  within  1  minute  at  least 
80  percent  of  the  time.  NVG  v4.0 
performed  worse  than  version  5.1,  and  since 
the  latest  version  is  strictly  an  update  of 
v4.0,  only  v5.1  results  are  shown.  For  all 
latitudes  and  events,  ILLUM  92  predicted  at 
least  90  percent  of  the  events  correctly.  At 
60° N and below, the number of correctly
predicted  events  generally  exceeded  95 
percent  for  this  model.  LIGHTPC  was  the 
second  best  performing  model,  and 
predictions  were  within  1  minute  over  90 
percent of the time over the entire range of latitudes.  There is a decreasing trend in the performance of NITELITE with increasing latitude.  NITELITE correctly predicts as little as 30 percent of any specific event at 80° N.  Predicted event times were within 1 minute of the MICA determined time at least 67 percent of the time and within 2 minutes at least 83 percent of the
time  for  any  given  latitude.  NVG  sunrise  times  within  1  minute  decreased  to  53  percent  for  the 
higher  latitudes,  but  the  model  was  within  2  minutes  at  least  80  percent  of  the  time.  NVG 
predicted  sunset  more  accurately  than  sunrise. 

Table  3  provides  error  statistics  for  a  typical  midlatitude.  Statistical  analyses  from  other  latitudes 
showed  similar  results.  The  ILLUM  92  model  performs  the  best  of  the  group,  compared  to 
MICA.  The maximum absolute error did not exceed 3 minutes for ILLUM 92, except for one occurrence of a 31 minute error at 65° N.  Average errors were under 0.1 minute.  The total error
for  the  beginning  event  times  are  slightly  negative,  meaning  the  model  forecasted  the  beginning 
times  early  more  often  than  late  overall.  It  also  predicted  the  event  ending  times  slightly  early. 

Error  statistics  for  LIGHTPC  show  it  to  be  the  next  best  performing  model.  Total  errors  indicate 
a  positive  bias,  meaning  the  model  predicted  event  times  were  slightly  later  than  MICA  predicted 
event  times.  Average  errors  ranged  from  0.02  minutes  at  the  equator  to  0.64  minutes  at  50°  N. 


Figure 1.  Frequency of model predicted sunrise times within 1 minute of MICA sunrise times (curves for NITELITE, ILLUM, LIGHTPC, and NVG).






Table 3.  Statistical analysis results for the model predicted solar event times for 40° N.

    Model / Event     Tot No   Tot Abs    Tot Err   Avg Err   Max Abs   RMSE    Num Mssd   Num Non
                       Obs     Err (min)   (min)     (min)    Err (min) (min)     Evnt       Occ
    NITELITE BNT       1096      506        404      0.46        2       0.76       0          0
    ILLUM 92 BNT       1096       15         -5      0.01        1       0.12       0          0
    LIGHTPC BNT        1096      496        410      0.45        2       0.74       0          0
    NVG BNT            1096      478       -370      0.44        2       0.66       0          0
    ILLUM 92 BCT       1096       13         -9      0.01        1       0.11       0          0
    LIGHTPC BCT        1096      324        138      0.30        1       0.54       0          0
    NVG BCT            1096      423       -209      0.39        2       0.64       0          0
    NITELITE SR        1096      422        120      0.39        1       0.62       0          0
    ILLUM 92 SR        1096       23          5      0.02        1       0.14       0          0
    LIGHTPC SR         1096      394        116      0.36        1       0.60       0          0
    NVG SR             1096     1066        294      0.97        2       1.14       0          0
    NITELITE SS        1096      436         12      0.40        1       0.63       0          0
    ILLUM 92 SS        1096       33          7      0.03        1       0.17       0          0
    LIGHTPC SS         1096      449         29      0.41        1       0.64       0          0
    NVG SS             1096      705        283      0.64        2       0.83       0          0
    ILLUM 92 ECT       1096       18         18      0.02        1       0.13       0          0
    LIGHTPC ECT        1096      302        -16      0.28        1       0.52       0          0
    NVG ECT            1096     2084       1482      1.90        4       2.19       0          0
    NITELITE ENT       1096      478       -244      0.44        2       0.68       0          0
    ILLUM 92 ENT       1096       27         27      0.02        1       0.16       0          0
    LIGHTPC ENT        1096      490       -234      0.45        2       0.69       0          0
    NVG ENT            1096     2183       1603      1.99        4       2.28       0          0

Event  errors  are  distributed  much  more  widely  for  NITELITE,  and  some  of  the  events  show  a 
skewed distribution in the statistics.  Below 60° N, the errors were approximately the same magnitude as those of LIGHTPC, but at the higher latitudes, LIGHTPC performance was
significantly  better  than  NITELITE. 


Error  statistics  for  NVG  v5.1  were  the  least  accurate  in  the  study,  and  they  exhibited  a  definite 





bias for several of the events.  The model predicted the beginning of nautical and civil twilight early.  Likewise, the model predicted the ending of civil and nautical twilight late, on average.  Errors for sunrise were larger than those for sunset.  NVG does not produce results poleward of 64°, so study results are limited to 0° N through 60° N latitude.

4.2  Lunar  Events 


Overall, the lunar event time predictions followed trends similar to those of the solar event times.  All of the models exhibited a decrease in accuracy with increasing latitude.  The results suggest a minor periodic trend for all of the models.  None of the models exhibited a decrease in accuracy over time.

Except  for  NVG  v5.1,  there  were  fewer 
correctly  predicted  lunar  event  times  than 
the  solar  event  times,  but  as  Figure  2 
demonstrates,  the  number  of  moonrise 
times  within  1  minute  is  nearly  equivalent 
to  those  of  sunrise  times.  NVG  predicted 
moonrise  much  more  accurately  than 
sunnse.  All  of  the  models  were  within  1 
minute at least 80 percent of the time for latitudes below 65° N.  The ILLUM 92
predicted  event  times  were  within  1 
minute  of  the  actual  event  over  95 
percent  of  the  time  for  any  latitude. 

LIGHTPC  and  NVG  v5.1  performed  as 
well  as  ILLUM  92  at  the  low  and  middle 
latitudes,  but  LIGHTPC  degraded  more 
quickly  at  the  higher  latitudes.  NITELITE 
was within 1 minute over 90 percent of the time up to 60° N.

The maximum absolute error for ILLUM 92 was only 12 minutes, and this model performed best in predicting lunar events.  Table 4 provides results for a typical midlatitude.  The maximum absolute error for LIGHTPC was only one minute from the equator through 65° N, and there were no missed events for that range of latitudes.  Prediction times rapidly deteriorated above that latitude, and the maximum absolute error increased to as much as 26 minutes.  The maximum absolute error for NITELITE was only 1 minute through 40° N, and 4 minutes through 65° N.  NVG v5.1 moonrise errors were comparable with NITELITE; however, NVG performance degraded somewhat in predicting moonset.

5.  CONCLUSIONS

Although the four models tested used variations of the same algorithm, updates and improvements in processing speed caused large variations in model performance results.


Figure 2.  Frequency of model predicted moonrise times within 1 minute of MICA moonrise times (curves for NITELITE, ILLUM, LIGHTPC, and NVG).




Table 4. Statistical analysis results for the model predicted lunar event times for 40° N (MR = moonrise, MS = moonset).

40° N            Tot No   Tot Abs    Tot Err   Avg Err   Max Abs    RMSE    Num Mssd   Num Non
                 Obs      Err (min)  (min)     (min)     Err (min)  (min)   Evnt       Occ
NITELITE MR      1058     340        38        0.32      1          0.57    0          0
ILLUM 92 MR      1058     84         76        0.08      1          0.28    0          0
LIGHTPC MR       1058     86         58        0.08      1          0.29    0          0
NVG MR           1058     375        -279      0.36      2          0.60    0          0
NITELITE MS      1059     343        117       0.32      1          0.57    0          0
ILLUM 92 MS      1059     81         -17       0.08      1          0.28    0          0
LIGHTPC MS       1059     76         12        0.07      1          0.27    0          0
NVG MS           1059     861        -861      0.81      2          0.99    0          0

The most accurate model compared to the Naval Observatory's model, MICA, was the ILLUM 92 routine found in the USAFETAC-sponsored INSOL and Hughes STX provided EOTDA programs. This routine consistently produced the smallest average and root mean square errors. However, in its current state, the routine is not readily suitable for directly computing and displaying astronomical event times. Time-saving methods could be incorporated into the routine to allow it to determine event times more quickly, and the program needs to be converted into a user-friendly format, such as making it Windows compatible.

LightPC version 4.2 is the next most accurate model. The high accuracy mode at the extremely high latitudes and the easy-to-use menus make this model a solid performer. A significant drawback to this program is the inaccessibility of the program code for maintenance and upgrade purposes, such as adding a full-time high accuracy mode and changes to the output format.

Nitelite for Windows works fairly well at the low and middle latitudes, but accuracy decreases rapidly above 60° N. The program is driven by friendly prompts and can display the data in an easy-to-read graphics format, but more options, such as the ability to save to a file and the computation of civil twilight, may be desirable. If this program continues to be used, it should be updated with the newer ILLUM 92 routine for improved accuracy.

The Army model, NVG, had the largest errors for all latitudes when compared to MICA. Presumably, much of the error for several of the events can be eliminated by a simple correction factor within the program itself. The model uses prompts and help screens for the input parameters that it requires. Although the documentation that comes with the program claims that the program is not for operational use, a few modifications would make it slightly more accurate. An increase in accuracy for all of the event times and the capability to write to an output file are two changes that could be made.
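The "simple correction factor" mentioned above amounts to removing the mean signed error for a given event type. The sketch below is only an illustration of that idea, not an actual NVG modification; in practice a separate constant per event type (and possibly per latitude band) would be needed.

    def constant_bias_correction(predicted_minutes, mica_minutes):
        # Estimate the mean signed error from a validation sample and return a
        # function that removes that constant bias from future predictions.
        bias = sum(p - r for p, r in zip(predicted_minutes, mica_minutes)) / len(predicted_minutes)
        return lambda event_time: event_time - bias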



All of the models assumed a constant correction for the effects of refraction (34') when computing event times. Variations in the refractive index occur due to natural atmospheric variations, such as inversions or stable layers, causing all of the models, including MICA, to produce erroneous results.

The magnitude of these errors is partially a function of the magnitude of the atmospheric variations in the index of refraction; in the tropics, variations are relatively small, while at the midlatitudes and higher, variations will be larger. A more important contributor to the magnitude of the errors is the sun's apparent path through the sky, or more precisely, the maximum zenith angle of the sun. This zenith angle decreases with increasing latitude. At the higher latitudes the sun remains near the horizon for a longer period, so variations in the refractive index will produce a larger error than the same refractive index variation at lower latitudes. Near the poles this variation forces larger errors and may be sufficient to incorrectly predict the occurrence or nonoccurrence of an event.
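The latitude dependence described above can be made concrete with the standard rise/set geometry. The following sketch is illustrative only and is not taken from any of the models under study; it converts a departure from the assumed 34' refraction correction into a timing error using the rate of change of solar altitude at the horizon, |dh/dt| = (0.25 deg/min) cos(lat) cos(dec) sin(H0), where H0 is the hour angle at rise or set.

    import math

    OMEGA = 360.0 / (24 * 60)      # Earth's rotation rate, about 0.25 degrees per minute

    def rise_set_timing_error(delta_refraction_arcmin, lat_deg, dec_deg):
        # Approximate rise/set timing error (minutes) caused by a refraction
        # departure of delta_refraction_arcmin from the assumed constant value.
        lat, dec = math.radians(lat_deg), math.radians(dec_deg)
        cos_h0 = -math.tan(lat) * math.tan(dec)          # cosine of hour angle at rise/set
        if abs(cos_h0) >= 1.0:
            return float("inf")                          # sun does not rise or set that day
        sin_h0 = math.sqrt(1.0 - cos_h0 ** 2)
        altitude_rate = OMEGA * math.cos(lat) * math.cos(dec) * sin_h0   # deg of altitude per minute
        return (delta_refraction_arcmin / 60.0) / altitude_rate

Because the altitude rate shrinks toward the poles, the same refraction anomaly that shifts a tropical rise time by a fraction of a minute shifts a high-latitude event by several times as much, consistent with the behavior noted here.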

This report was designed to provide accuracy information to users or potential users of the models studied, to allow more informed decisions to be made based on model output. If the reader is looking for simple and accurate event times or fractional illumination information in tabular format, we suggest obtaining MICA from the address in the bibliography. If a graphical format is needed, then NITELITE will be required. For specific support and output requirements needing a high degree of accuracy, such as support to night vision goggle users, additional programs or modifications to the existing programs will have to be sought.

BIBLIOGRAPHY

Duncan, Louis D., and David P. Sauter, Natural Illumination under Realistic Weather Conditions, Atmospheric Sciences Laboratory, ASL TR-0212, White Sands Missile Range, NM 88002, 198?.

Duncan, Louis D., and Gavino Zertuche, Night Vision Goggles (NVG) Software User's Guide, Version 4.0, U.S. Army Research Laboratory Battlefield Environment Directorate, White Sands Missile Range, NM 88002-5501.

MICA for DOS User's Guide, Astronomical Applications Department, U.S. Naval Observatory, Washington, DC 20392-5420.

van Bochove, A. C., The Computer Program "ILLUM": Calculation of the Positions of the Sun and Moon and the Natural Illumination, Physics Laboratory TNO, National Defense Research Organization TNO, P.O. Box 96864, 2509 JG The Hague, The Netherlands, 1982.

Willmott, Cort J., Some Comments on the Evaluation of Model Performance, Bulletin of the American Meteorological Society, 63, 11, 1982, pp. 1309-1313.



ATMOSPHERIC  TRANSMISSIVITY  IN  THE  1  TO  12  MICRON  WAVELENGTH  BAND 

FOR  SOUTHWEST  ASIA 


Richard  A.  Woodford  and  Chan  W.  Keith 
USAF  Environmental  Technical  Applications  Center 
Scott  AFB,  Illinois  62225-5116 


ABSTRACT 

The  USAF  Environmental  Technical  Applications  Center  (USAFETAC)  performed 
a  study  to  analyze  the  atmospheric  transmittance  for  the  path  between  a  variety  of 
target  altitudes  and  a  satellite  based  sensor  over  southwest  Asia  during  Desert 
Storm,  and  to  compare  these  results  to  transmittances  computed  for  different 
climate  regimes.  The  atmospheric  transmittance  computer  model,  LOWTRAN  7, 
was  used  to  compute  transmittances  for  specific  atmospheric  conditions.  Model 
target  height  varied  between  the  surface  and  10  km  above  ground  level.  Actual 
sounding  data  from  Baghdad,  Iraq;  Howard  AFB,  Panama;  and  Pyongyang,  North 
Korea  for  the  period  from  January  1973  through  December  1988  was  used  in  the 
model  to  provide  a  climatological  reference  for  the  analysis.  Atmospheric  slant 
path  model  (ASPAM)  output  for  Iraq  during  January  and  February  1991  and 
climatologically  averaged  soundings  from  specific  climate  regions  provided  data 
sources  for  additional  LOWTRAN  7  computations.  Using  the  individual  data  sets, 
transmissivities were computed for the wavelength bands 1-2.5 µm, 3-4 µm, 1-4 µm, 1-8 µm, and 8-12 µm, and for individual wavelengths between 8 and 12 µm. Results
indicate  that  the  ASPAM  generated  data  set  had  considerably  lower 
transmissivities  than  the  climatological  average  over  Baghdad,  but  were  not  as  low 
as  typical  values  computed  for  the  tropical  (Howard  AFB)  or  midlatitude 
(Pyongyang)  environments. 

1.  INTRODUCTION 

This  report  is  a  synopsis  of  two  separate  studies  evaluating  atmospheric  transmittance  values  at 
select  locations  in  Southwest  Asia.  The  objective  of  the  first  study  was  to  derive  atmospheric 
transmittance  values  for  the  8  to  12  micron  wavelength  band,  then  determine  if  those  values  were 
lower  than  "normal"  or  expected  values  for  the  area.  The  second  study  extended  the  range  to 
include the 1 to 8 micron wavelength band. The atmospheric transmittance model LOWTRAN7 (Kneizys et al., 1988) was used to generate both average and total transmittance values for the bandwidth in question.

LOWTRAN7  was  run  using  two  sensor/source  geometries.  The  first  series  of  model  runs  held 
the  sensor  directly  above  the  source  (i.e.,  at  nadir).  The  second  series  increased  the  transmittance 
path  length  by  moving  the  sensor  to  a  30  degree  viewing  angle  (i.e.,  30  degrees  off-nadir). 
Source  altitudes  were  varied  incrementally  from  the  surface  to  10  km  above  the  surface. 
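For a single wavelength, the effect of the 30 degree off-nadir geometry can be approximated with a plane-parallel (secant) scaling of the vertical optical depth. The sketch below is only an illustration of that geometry, not the LOWTRAN7 calculation; it ignores refraction, earth curvature, and the spectral structure within a band.

    import math

    def off_nadir_transmittance(tau_nadir, view_angle_deg):
        # Slant-path optical depth ~ vertical optical depth * sec(theta), so a
        # monochromatic transmittance scales as tau_nadir ** sec(theta).
        sec_theta = 1.0 / math.cos(math.radians(view_angle_deg))
        return tau_nadir ** sec_theta

    # Example: a nadir transmittance of 0.54 falls to roughly 0.49 at 30 degrees off-nadir.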




2.  DATA  SOURCES 


The study was accomplished using three primary data sources. We will refer to the data throughout the rest of this report as datasets 1, 2, and 3. Dataset 1 included the January/February 1991 vertical temperature/moisture profiles for several select Southwest Asia locations. For this set, actual sounding data were not available at the locations requested by USAF Environmental Technical Applications Center's (USAFETAC's) customer for the study, so vertical temperature and moisture profiles were generated for those locations by USAFETAC's Atmospheric Slant Path Analysis Model, ASPAM (Koermer, 1984). Dataset 2 consisted of average climatological profiles of temperature and dewpoint; these profiles are not restricted to Southwest Asia, but are representative of several different geographic regions. Dataset 3 was made up of actual sounding data covering a 15 year period of record (POR) from January 1973 through December 1988 for the following locations: Baghdad, Iraq; Howard AFB, Panama; and Pyongyang, North Korea. These sites were selected because USAFETAC believes they are representative of a desert, tropical, and continental-type climate, respectively.

3.  BACKGROUND 


The overall distribution of atmospheric gases and aerosols determines the spectral absorbency of the atmosphere. Any change in a beam of radiation passing through a layer of air is determined in part by the concentration and temperature of these resident constituents. Atmospheric transmittance in the 1 to 12 micron wavelength band is greatly affected by tri-atomic molecular absorption. The absorption, however, is not continuous across the band. There are regions where the atmosphere is nearly transparent to electromagnetic radiation and absorption is at a minimum. These windows are located roughly between 1 to 2.5 microns, 3 to 4 microns, 8 to 9.5 microns, and 10 to 12 microns (Environmental Research Institute of Michigan, 1978).


USAFETAC's customer for this study originally supplied datasets 1 and 2, and requested an evaluation of atmospheric transmittance values based on those sets. We felt that because dataset 1 consisted of atmospheric profiles which might have been overly smoothed, dataset 1 might misrepresent actual conditions at the associated locations by, for example, eliminating temperature inversions. Dataset 2 was generated by compiling several hundred soundings averaged to produce one mean temperature, pressure, and moisture profile per geographic region noted. These soundings are mean values for the selected geographic regions and also may not be representative of actual conditions. The temperature and moisture profiles have been smoothed considerably; consequently, the resulting vertical profile may never actually occur. It is for this reason that USAFETAC suggested providing statistical distributions based upon atmospheric transmittance values generated by inputting actual soundings into LOWTRAN7.

The LOWTRAN7 computer code is capable of modeling many atmospheric parameters of transmittance and radiance over wavelengths from 0.2 microns to infinity (Kneizys et al., 1988); however, in this study, LOWTRAN7 was used to determine atmospheric transmittance only.




4.  APPROACH 

Datasets 1 and 2 were input directly into LOWTRAN7, and model runs were made. Both total and average transmittance values for each location in each dataset were generated. Only the average transmittance values per specified bandwidth were used in the analysis. Dataset 3 consisted of USAFETAC's archived soundings for Baghdad, Howard AFB, and Pyongyang. USAFETAC soundings for those three stations covering a 15 year POR were input into LOWTRAN7 to produce both total and average transmittance values. The results of these model runs were then sorted by location, month, viewing angle, bandwidth, and source altitude. They were then processed through a statistical analysis package, SAS (SAS Institute, 1994), to create frequency distributions. Values derived from these distributions were then compared to the values computed for datasets 1 and 2.
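The sorting and frequency-distribution step, performed here with SAS, can be mirrored in a short script. The sketch below is not the USAFETAC processing; the file name and column names are assumptions used only for illustration.

    import pandas as pd

    # Assumed columns: location, month, view_angle, band, source_alt_km, transmittance
    runs = pd.read_csv("lowtran7_runs.csv")

    # Selected percentiles of average transmittance for each grouping, analogous
    # to the SAS frequency distributions used in the study.
    pct = (runs.groupby(["location", "month", "view_angle", "band", "source_alt_km"])
               ["transmittance"]
               .quantile([0.01, 0.10, 0.50, 0.90, 0.99])
               .unstack())

    # Percentile rank of a dataset 1 value against one climatological distribution,
    # e.g. Baghdad, January, 1-8 micron band, surface source.
    subset = runs.query("location == 'Baghdad' and month == 1 and band == '1-8' and source_alt_km == 0")
    rank = (subset["transmittance"] <= 0.45).mean() * 100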

5.  ASSUMPTIONS  /  LIMITATIONS 

A no cloud/no rain scenario was assumed for all datasets; however, the presence of water vapor in the atmosphere was the primary source of atmospheric extinction. Adding both cloud cover and rain would significantly reduce transmissivity.

The LOWTRAN7 model assumed default atmospheric profiles for O3, CH4, N2O, CO, CO2, O2, NO, SO2, NO2, NH3, HNO3, and for other background aerosols, such as dust and smoke. No effort was made to adjust the modeled extinction for higher or lower concentrations of these constituents.

Assumptions  specific  to  datasets  1,  2  and  3  are  as  follows:  (1)  Station  elevations  were  assumed 
to  be  24  meters  above  mean  sea  level.  NOTE:  This  input  into  LOWTRAN7  is  used  to  modify 
aerosol  profiles  below  6  km.  (2)  Dataset  1  consisted  of  atmospheric  profiles  interpolated  to 
needed  levels  by  ASPAM.  The  boundary  layer  wind  speed  was  extracted  from  the  dataset- 
supplied  surface  wind  speed.  (3)  Dataset  2  was  generated  by  compiling  several  hundred 
soundings  averaged  to  produce  one  mean  temperature,  pressure  and  moisture  profile,  per 
geographic  region  noted.  (4)  In  dataset  3,  a  6  meter  per  second  surface  wind  speed  was  used  to 
initialize  the  boundary  layer  aerosol  model  within  LOWTRAN7  for  Baghdad  and  Pyongyang. 
A  3  meter  per  second  surface  wind  speed  was  assumed  for  Howard  AFB. 

6.  RESULTS 

Tables 1, 2, and 3 on the following pages compare dataset 1 average surface transmittances with those calculated for dataset 3. In the 1 to 8 micron band, water vapor is the dominant absorber, with reduced transmittances indicated at approximately 2 and 3 microns. This held the average surface transmittance for dataset 1 to approximately 0.45. When compared to climatological records (dataset 3), 0.45 is expected to occur only 1% of the time at Baghdad. At Pyongyang, average transmittances could be expected to be less than 0.45 26% of the time, while at Howard, we would expect values to be at or below this average (0.45) 90% of the time. (All comparisons are being made to January dataset 3 data.)




TABLE 1: 50th Percentile Values of Average Transmittance for Baghdad vs Average Transmittance Values for Dataset 1.

                     MICRON BAND
BAGHDAD          1-8          1-2.5        3-4          1-4
JANUARY          0.54         0.65         0.62         0.58
FEBRUARY         0.53         0.63         0.60         0.56
BEST             0.54(JAN)    0.65(JAN)    0.62(JAN)    0.58(JAN)
WORST            0.50(MAY)    0.59(MAY)    0.54(SEP)    0.54(MAY)
ANNUAL           0.52         0.60         0.58         0.55
DATA SET 1       0.45         0.54         0.48         0.49
*FREQUENCY       1%           1%           3%           2%

*Percentile ranking of Average Transmittance Values for Dataset 1 vs Baghdad's January Cumulative Frequency of Occurrence.


TABLE 2: 50th Percentile Values of Average Transmittance for Pyongyang vs Average Transmittance Values for Dataset 1.

                     MICRON BAND
PYONGYANG        1-8          1-2.5        3-4          1-4
JANUARY          0.48         0.54         0.65         0.51
FEBRUARY         0.47         0.53         0.64         0.50
BEST             0.47(JAN)    0.54(JAN)    0.65(JAN)    0.51(JAN)
WORST            0.35(JUL)    0.42(AUG)    0.40(JUL)    0.37(JUL)
ANNUAL           0.42         0.47         0.54         0.44
DATA SET 1       0.45         0.54         0.48         0.49
*FREQUENCY       26%          50%          1%           42%

*Percentile ranking of Average Transmittance Values for Dataset 1 vs Pyongyang's January Cumulative Frequency of Occurrence.


In  the  1  to  2.5  micron  band,  water  vapor  is  still  the  dominant  absorber  with  reduced  transmittance 
starting  to  show  up  at  2  microns.  The  effects  are  quite  pronounced  at  2.5  microns.  The  average 
surface  transmittance  for  dataset  1  across  this  band  was  0.54,  20%  higher  than  the  0.45  average 
arrived  at  when  considering  the  entire  1  to  8  micron  band.  Based  upon  dataset  3  runs,  0.54  is 
expected  to  occur  only  1%  of  the  time  at  Baghdad.  At  Pyongyang,  average  transmittances  could 
be  expected  to  be  less  than  this  value  (0.54)  50%  of  the  time,  while  at  Howard,  94%  of  the  time 
we  would  expect  values  to  be  at  or  below  0.54. 




TABLE 3: 50th Percentile Values of Average Transmittance for Howard AFB vs Average Transmittance Values for Dataset 1.

                     MICRON BAND
HOWARD           1-8          1-2.5        3-4          1-4
JANUARY          0.40         0.51         0.41         0.45
FEBRUARY         0.41         0.52         0.42         0.46
BEST             0.41(FEB)    0.52(FEB)    0.42(FEB)    0.46(FEB)
WORST            0.38(JUN)    0.45(JUN)    0.36(JUN)    0.40(JUN)
ANNUAL           0.39         0.47         0.38         0.42
DATA SET 1       0.45         0.54         0.48         0.49
*FREQUENCY       90%          94%          96%          95%

*Percentile ranking of Average Transmittance Values for Dataset 1 vs Howard AFB's January Cumulative Frequency of Occurrence.


Similar  results  were  noted  when  evaluating  the  3  to  4,  1  to  4,  and  8  to  12  micron  bandwidths. 
In  general,  the  dataset  1  values  would  be  expected  to  occur  more  often  at  Howard,  rather  than 
at  Pyongyang  or  Baghdad.  In  the  8  to  12  micron  wavelength  band,  there  are  two  noticeable 
regions  where  the  transmissivity  values  dropped  considerably.  They  were  at  9.5  and  12  microns. 
Ozone  absorption  was  at  a  peak  near  9.5  microns.  The  12  micron  wavelength  was  affected  by 
absorption  due  to  water  vapor. 

Atmospheric  transmittance  values  are  presented  in  tabular  form  for  each  location.  Data  from  the 
months  of  January  and  February  are  evaluated,  since  this  period  corresponded  with  the  dataset 
1  time  frame.  Also  presented  are  data  for  the  month  with  the  best  transmissivity,  the  month  with 
the  worst  transmissivity,  and  finally  an  average  transmissivity  for  all  months.  The  best  month 
is  defined  as  the  month  with  the  lowest  cumulative  percentage  of  occurrence  of  transmissivities 
equal  to  or  less  than  0.75.  Based  on  this  definition,  the  best  month  contained  the  fewest 
occurrences  of  transmissivity  values  at  or  below  0.75.  Similarly,  the  worst  month  was  defined 
as  the  month  with  the  highest  cumulative  percentage  of  occurrence  of  transmissivity  values  at  or 
below  0.75.  Transmissivity  in  the  8  to  12  micron  band  at  Baghdad  is  highest  during  the  months 
of  January  and  February.  In  general,  over  half  of  the  transmissivities  reported  at  all  altitudes 
were  greater  than  0.80.  The  two  poorest  months  were  judged  to  be  August  and  September, 
August  being  the  worst.  August  surface  transmissivity  values  were  less  than  0.75  more  than 
50%  of  the  time,  but  0.65  or  less  only  10  percent  of  the  time.  Transmissivity  values  rapidly 
improved  with  an  increase  in  target  altitude  above  the  surface  boundary  layer.  Transmissivities 
at  2  km  and  above  were  generally  0.70  or  greater  for  any  month. 

Data from Howard AFB was used to demonstrate transmissivity values from a tropical regime. February was the month with the highest transmissivity values; in general, values were greater than 0.50 over 50% of the time at all altitudes.

The two poorest months were June and July, July being the worst. A dramatic increase in low level moisture gave July a transmissivity value of less than 0.40 at the surface more than 50% of the time. Transmissivity values rapidly improved above the surface boundary layer, and generally exceeded 0.65 at and above 2 km.

Data  from  Pyongyang  demonstrated  a  much  more  seasonal  bias  in  the  transmissivity  than  did  the 
other  locations.  January  and  February  were  the  months  with  the  highest  reported  transmissivity 
values,  January  being  the  best.  In  general,  over  50%  of  the  transmissivities  reported  at  all 
altitudes  were  greater  than  0.80.  The  poorest  month  was  July,  where  a  dramatic  increase  in  low 
level  moisture  resulted  in  a  transmissivity  value  of  less  than  0.45  at  the  surface  more  than  50% 
of the time. Transmissivity values again rapidly improved as altitudes above the surface boundary layer were evaluated, and nearly always exceeded 0.60 at source altitudes of 2 km or
greater. 

USAFETAC evaluated the 50th percentile frequency of occurrence to estimate the mean. In the 1 to 8 micron case, surface transmissivity is highest during the month of January at Baghdad (0.54). The poorest month was May (0.50). The annual average surface transmissivity was 0.52. Transmissivity values rapidly improved with altitude.

Data  from  Howard  AFB,  Panama  showed  February  had  the  highest  transmittance  value  of  0.41, 
while  June  was  the  poorest  month  at  0.38.  The  abundance  of  low  level  moisture  keeps 
transmissivity  values  near  the  0.40  value. 

Pyongyang  data  demonstrated  a  much  more  seasonal  bias  in  transmittance  values.  January  had 
the  highest  transmittance  of  0.48,  while  July  was  the  poorest  at  0.35.  Annually,  values  were  less 
than  0.42  at  the  surface  50%  of  the  time. 

Transmissivity values again rapidly improved as source altitudes above the boundary layer were evaluated. Similar results were found for the 3 to 4, 1 to 2.5, and 1 to 4 micron cases. Values of absolute humidity from dataset 1 were compared with the absolute humidity values from the Baghdad sounding, and are depicted in Table 4. The average surface absolute humidity for dataset 1 was 11.032 grams per cubic meter (g/m³), and ranged from 6.121 g/m³ to 13.89 g/m³. The surface absolute humidity from the Baghdad sounding averaged approximately 7.5 g/m³, and humidities greater than 11 g/m³ occurred 1.8% of the time. At approximately 2 km (data used was taken from 7000 feet), the absolute humidity from dataset 1 averaged 4.48 g/m³ and ranged from 0.9232 g/m³ to 7.021 g/m³, compared to the average Baghdad humidity of 2.6 g/m³. At this level, humidities greater than 4.0 g/m³ occurred 11.8% of the time. This demonstrates that dataset 1 atmospheric profiles, at these 2 levels, do contain higher concentrations of water vapor than the average Baghdad 15 year POR sounding, but they do not exceed the extremes. Above 2 km, the humidities from dataset 1 continue to be slightly higher than the Baghdad soundings; as a result, transmissivities are slightly lower. However, the transmissivities from dataset 1 fall within the range of the expected transmissivity values for January.
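Absolute humidity values like those in Table 4 follow directly from sounding temperature and dewpoint. The sketch below uses the Magnus approximation for vapor pressure together with the ideal gas law for water vapor; it is illustrative only and is not necessarily the formulation used in this study.

    import math

    def absolute_humidity_g_m3(temp_c, dewpoint_c):
        # Vapor pressure (hPa) from the Magnus approximation, then
        # rho_v = e / (R_v * T); the factor 216.7 converts hPa and K to g/m^3.
        e_hpa = 6.112 * math.exp(17.67 * dewpoint_c / (dewpoint_c + 243.5))
        return 216.7 * e_hpa / (temp_c + 273.15)

    # Example: 25 C air with a 13 C dewpoint gives about 10.9 g/m^3, comparable
    # to the dataset 1 surface average of 11.032 g/m^3.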



TABLE 4. Absolute humidity for dataset 1, and for the archived data for Baghdad.

DATA SET 1        SURFACE ABSOLUTE       UPPER LEVEL (7000 FT)
LOCATION          HUMIDITY (g/m³)        ABSOLUTE HUMIDITY (g/m³)
01A               9.064                  3.836
02A               13.89                  6.013
03A               6.121                  3.396
04A               8.657                  2.116
05A               8.261                  3.574
06A               9.099                  0.9232
07A               11.10                  7.021
BAGHDAD           7.5                    2.6

7.  CONCLUSIONS 


Review of preliminary results of combinations of transmissivity vs wavelength, altitude, and sensor view angle shows no great surprises: (1) Dataset 1 atmospheric profiles do contain higher concentrations of water vapor than the average Baghdad 15 year POR sounding, but they do not exceed the extremes. Transmissivities fall within the range of expected transmittance values for January. (2) Transmissivity values increased as a function of source altitude. The decrease in water vapor with increasing distance from the surface contributed to the increase in transmissivity. (3) Transmissivity values for all cases were lower given the increased path length in the 30 degree off-nadir case. (4) The selection of the bandwidth interval affected the average transmittance values calculated. There are select regions in the 1 to 12 micron band that are "opaque" to electromagnetic wave propagation. We ran LOWTRAN7 over the entire 1 to 12 micron bandwidth interval. Transmittance values calculated were substantially lower than values calculated in select subintervals of the 1 to 12 micron band. The reductions ranged anywhere from 7% to nearly 20%. This reduction was primarily due to inclusion of the "opaque" regions mentioned above. Also, when we used a large step size, on the order of 1 micron, for the calculations, the "opaque" regions were effectively masked in the model output. To help eliminate this bias, we calculated transmittances in selected spectral bands located within the atmospheric windows.
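The band-averaging and step-size effects noted in item (4) can be illustrated with a simple numerical average over a spectral transmittance curve. The sketch below operates on hypothetical arrays rather than LOWTRAN7 output; resampling with a coarse step can skip the narrow "opaque" regions so that they no longer influence the computed band average.

    import numpy as np

    def band_average(wavelengths_um, transmittance, lo_um, hi_um, step_um=None):
        # Average spectral transmittance over [lo_um, hi_um]. If step_um is given,
        # the spectrum is first resampled at that interval, which can mask narrow
        # opaque absorption features, as noted above.
        w = np.asarray(wavelengths_um, dtype=float)
        t = np.asarray(transmittance, dtype=float)
        if step_um is not None:
            w_sampled = np.arange(lo_um, hi_um + 1e-9, step_um)
            t = np.interp(w_sampled, w, t)
            w = w_sampled
        in_band = (w >= lo_um) & (w <= hi_um)
        return float(t[in_band].mean())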

There were differences between the interpolated dataset 1 cases provided by USAFETAC's customer and the desert source soundings selected by USAFETAC (Baghdad). January Baghdad surface transmissivities (8 to 12 micron band) were greater than 0.80 50% of the time. Transmissivities for dataset 1 averaged near 0.59 for sources at the surface. This value (0.59) has a very small probability of occurrence (1%) at Baghdad, but is more likely to occur (10%) at Howard AFB.



Dataset 1 showed lower surface transmissivities throughout the year when compared to the Baghdad 15 year POR, but did not approach the reduction experienced in a true tropical environment such as at Howard AFB. A tropical environment shows surface transmissivities on the order of 0.40, depending on the time of year.

It should be noted that dataset 1 was based upon ASPAM-derived data. Previous research at USAFETAC (O'Connor, 1994) has shown ASPAM-derived data to have a slight bias toward reporting higher absolute humidity values than may actually be present.

The  data  extracted  from  the  Pyongyang  upper-air  soundings  show  a  definite  seasonal  bias,  with 
lower transmissivities in the summer months, all tied to an increase in absolute humidity.

This  study  was  an  example  of  how  climatological  data  might  be  used  in  determining  expected 
atmospheric  transmittance  values.  USAFETAC  plans  to  apply  this  approach  to  geographic 
locations  worldwide. 


REFERENCES 

Environmental Research Institute of Michigan for the Office of Naval Research, Department of the Navy, The Infrared Handbook, 1978.

Kneizys, F.X., et al., Users Guide to LOWTRAN 7, AFGL-TR-88-0177, 1988, Air Force Geophysics Laboratory, Hanscom AFB, MA.

Koermer, James P., and J.P. Tuell, Improved Point Analysis Model (IPAM) Functional Description, 1984, Internal Working Document, Air Force Global Weather Central, Offutt AFB, NE.

O'Connor, Lauraleen, and Charles R. Co^^n, Atmospheric Slant Path Analysis Model Baseline Study, 1994, USAFETAC/PR-94/001, Scott AFB, IL.

SAS Institute, Cary, NC, SAS/STAT User's Guide, Version 6, Fourth Edition, 1990.




Session  III 

BATTLE  WEATHER 




OWNING  THE  WEATHER: 

IT  ISN’T  JUST  FOR  WARTIME  OPERATIONS 

R.J. Szymber, M.A. Seagraves, J.L. Cogan, and O.M. Johnson
U.S.  Army  Research  Laboratory 
White  Sands  Missile  Range,  NM  88002-5501 


ABSTRACT 

Owning  the  Weather  (OTW)  technologies  that  provide  state-of-the-art  weather 
support  for  Army  tactical  operations  and  battlefield  simulations  may  also  be 
used  to  support  certain  Army  Operations  Other  Than  War  (OOTW),  as  well  as 
civilian  and  commercial  applications.  Types  of  OTW  technologies  and  products 
that  may  be  used  in  applications  other  than  tactical  situations  include  remote 
sensing,  atmospheric  characterization,  scene  visualization,  and  atmospheric 
models.  OTW  products  can  be  used  in  Army  humanitarian  assistance  and 
disaster  relief,  peace  enforcement,  and  peacekeeping  operations,  as  well  as  in 
civilian  applications  such  as  air  and  noise  pollution  control,  environmental 
cleanup,  global  climate  change  programs,  transportation,  forestry,  and 
agriculture.  Unique  OTW  meteorological  testbeds  are  used  in  product 
development.  Interactions  and  partnerships  with  other  government  agencies  and 
private  industry  help  to  pave  the  way  for  technology  transitions. 

1.  INTRODUCTION 

In his essays on "The Art of War" written more than 2,000 years ago, Sun Tzu asserted,
"Know  the  enemy,  know  yourself;  your  victory  will  never  be  endangered.  Know  the  ground, 
know  the  weather;  your  victory  will  then  be  total."  Historically,  weather  has  decisively 
impacted  battlefield  success,  and  future  warfighters  prepared  to  exploit  weather  and  terrain 
effects will also benefit in battle. Today's Army doctrine (Dept. of the Army, 1993) states,
"The  commander  who  can  best  measure  and  take  advantage  of  weather  conditions  has  a  decided 
advantage  over  his  opponents.  By  understanding  the  effects  of  weather,  seeing  the  opportunities 
it  offers,  and  anticipating  when  they  will  come  into  play,  the  commander  can  set  the  terms  for 
battle to maximize his performance and take advantage of limits on enemy forces."

Owning  the  Weather  (OTW)  is  the  Army  vision  for  improved  battlefield  weather  support  to 
Force XXI, the force projection Army of the 21st century. It is critical to out-thinking the enemy, winning the information war, and executing precision strikes. OTW is defined
as  the  use  of  advanced  knowledge  of  the  environment  and  its  effects  on  friendly  and  enemy 
systems,  operations,  and  tactics  to  gain  a  decisive  advantage  over  opponents.  The  OTW 
strategy involves the observation, collection, processing, forecasting, and distribution of timely battlefield environmental conditions. This information is transformed into weather intelligence
and  decision  aids  for  final  battlefield  exploitation  of  the  weather. 

OTW  will  provide  a  digitized  picture  of  battlefield  weather  and  its  effects  for  Intelligence 
Preparation of the Battlefield (IPB) to support mission planning, situation awareness, synchronized battle management, and advanced decision and execution support. The Integrated
Meteorological  System  (IMETS)  will  collect  data  from  various  sources  and  distribute  timely 
battlescale  weather  information  to  multiple  command  elements  via  the  All  Source  Analysis 
System.  This  information  will  be  used  in  tactical  decision  aids  (TDA’s)  resident  on  computers 
in  all  battlefield  functional  areas  to  provide  commanders  and  soldiers  with  real-time  and 
predicted  environmental  effects  on  missions  and  systems. 

OTW  capabilities  and  products  that  provide  state-of-the-art  weather  support  for  Army  tactical 
operations  and  battlefield  simulations  are  ideal  for  supporting  Army  force  projection  operations 
and  joint  military  missions,  including  operations  other  than  war  (OOTW).  OTW  technologies 
also have important dual-use civilian and commercial environmental applications that can
contribute  to  defense  conversion  and  technology  transfer. 

1.1  OTW  Battlefield  Sensing 

Weather  conditions  must  be  observed  before  they  can  be  forecast  and  converted  into  weather 
intelligence.  A  suite  of  complementary  and  synergistic  space-based,  airborne,  and  ground- 
based  sensing  systems  provides  real-time  observations  at  required  accuracies,  resolutions,  and 
coverage.  All  available  data  are  collected,  validated,  and  assimilated  to  build  a  complete 
horizontal  and  vertical  picture  of  the  atmosphere  over  friendly  and  enemy  controlled  territory. 

Battlefield  sensing  systems  and  technologies  include: 

-  meteorological  satellites; 

-  Automatic  Meteorological  Sensor  System; 

-  Meteorological  Measuring  Set; 

-  Target  Area  Meteorological  Sensors  System  (TAMSS),  that  is,  the  Mobile  Profiler 
System  (MPS),  Unmanned  Aerial  Vehicles  (UAV)  with  meteorological  sensors  and  dropsonde 
payloads,  and  Computer  Assisted  Artillery  Meteorology  software;  and 

-  remote  sensing  and  data  fusion  techniques. 

1.2  OTW  Processing,  Analysis,  and  Dissemination 


The  IMETS  receives,  processes,  analyzes,  and  distributes  mission-specific  observations, 
forecasts,  and  weather  intelligence.  The  IMETS  is  a  mobile,  tactical,  automated  weather  data 
system  designed  to  provide  timely  weather  and  environmental  effects  forecasts,  observations, 
and  decision  aid  information  to  appropriate  command  elements  through  the  Army  Battle 
Command  System.  It  includes  a  battlescale  (mesoscale)  forecast  model  and  satellite 
communications. 




1.3  OTW  Battle  Decision  Aids  and  Displays 

TDA’s  permit  commanders  to  rapidly  war  game  courses  of  action;  determine  probable  effects 
on  friendly  and  enemy  systems,  tactics,  and  doctrine;  and  incorporate  weather  effects  into 
tactical  planning  and  operations.  Decision  aids  not  only  provide  information  about  weather 
effects,  but  also  show  the  commander  if  and  when  weather  conditions  give  a  competitive  edge 
over  the  enemy. 

IPB,  TDA’s,  and  war  gaming  enable  the  commander  to  quickly  and  accurately  analyze  the 
effects  of  weather  on  impending  operations.  Examples  of  these  types  of  products  are: 

-  Integrated  Weather  Effects  Decision  Aid, 

-  IPB  weather  analysis  overlays, 

-  mobile  generator  smoke  screen  TDA, 

-  night  vision  goggles  TDA,  and 

-  electro-optics  TDA’s. 

1.4  OTW  Technology  Exploitation  of  Weather 

Training,  combat  simulations,  weapon  development,  and  system  testing  and  evaluation  are  all 
areas  where  the  exploitation  of  weather-related  technology  results  in  having  advantages  over 
threats  and  in  making  adverse  weather  a  force  multiplier.  Some  examples  of  these  exploitation 
technologies  and  products  include: 

-  atmospheric  scene  visualization  of  battlefield  obscurants  (smoke,  dust,  haze,  and  fog), 

-  atmospheric  transport  and  diffusion  modeling  over  urban  and  complex  terrain, 

-  target  contrast  change  characterization,  and 

-  simulation  of  optical  turbulence  effects. 

2.  OPERATIONS  OTHER  THAN  WAR  (OOTW) 

The  Army  has  evolved  from  the  Cold  War  doctrine  and  structure  to  a  new  strategic  era  of 
force  projection,  and  has  consequently  modified  its  weather  support  architecture  to  support  new 
missions.  War,  that  is,  a  major  regional  conflict,  remains  the  baseline  objective  for  OTW 
support  to  the  Army.  However,  many  new  missions  are  now  likely  and  a  major  regional 
conflict  is  only  one  of  many  contingencies  for  which  the  Army  must  provide  weather  support. 
These  new  missions  require  more  flexible,  mobile  forces  to  respond  to  the  wider  range  of 
unpredictable  threats  and  situations.  Tailored  weather  information  is  vital  to  the  success  of 
these  noncombat  operations.  Planning  now  considers  and  integrates  components  of  other 
service  into  joint  task  force  meteorological  and  oceanographic  support.  Split-base  operations 
provide  support  from  the  CONUS  or  theater  to  complement  capabilities  deployed  with  the  joint 
task  force. 

OOTW  are  military  activities  during  peacetime  and  conflict  that  do  not  necessarily  involve 
armed clashes between two organized forces (Dept. of the Army, 1993). Today's Army
conducts  OOTW  as  part  of  a  joint  team  and  usually  in  conjunction  with  other  government 
agencies. The Army has participated in OOTW supporting national interests throughout its history. However, the pace, frequency, and types of OOTW have increased over the last 25
years.  Furthermore,  the  future  will  likely  see  a  growing  percentage  of  the  Army’s  activities 
committed  to  OOTW  (Eden,  1994). 

In  general,  OOTW  have  weather  support  requirements  different  from  those  for  war.  During 
war,  the  peacetime  weather  infrastructure  is  usually  not  available  and  all  indigenous  sources 
of  local  weather  data  in  the  war  zone  may  be  denied  or  lost.  Therefore,  the  full  range  of  OTW 
support  capabilities  is  required  in  a  war  situation.  In  OOTW,  availability  of  weather  data  from 
the  existing  peacetime  indigenous  sources  will  likely  continue,  allowing  a  much  smaller 
weather  support  element  to  deploy  to  support  missions  with  a  greater  reliance  on  indirect 
support  from  the  CONUS  or  theater  weather  facilities.  The  exception  to  this  is  OOTW 
conducted  in  remote,  under-developed  areas  where  no  weather  infrastructure  exists,  such  as 
in Rwanda. Generally, noncombat missions enable a small weather team deployed to the
contingency  area  to  incorporate  all  available  indigenous  weather  information  relayed  to  the 
center,  integrate  it  with  data  from  other  sources,  tailor  and  repackage  it,  and  transmit  it  in 
minutes.  Also,  OOTW  may  occur  in  relatively  benign  environments  where  weather  support 
concepts  and  procedures  are  much  different  from  those  in  high-  to  mid-intensity  conflicts. 

2.1  Disaster  Relief 

Disaster  relief  operations  occur  when  emergency  humanitarian  assistance  is  provided  by  DoD 
forces  to  prevent  loss  of  life  and  destruction  of  property  resulting  from  man-made  or  natural 
disasters.  The  diverse  capabilities  of  the  Army  make  it  ideally  suited  for  disaster  relief 
missions.  Assistance  provided  by  U.S.  forces  is  designed  to  supplement  efforts  by  civilian 
agencies  who  have  primary  responsibility  for  such  assistance. 

OTW  technologies  are  especially  well-suited  to  provide  support  during  severe  weather  and 
other  weather-related  disasters  such  as  hurricanes,  tornado  outbreaks,  flash  flooding,  windstorm 
fires,  and  toxic  air  pollution  episodes.  For  example,  weather  support  was  critical  during  the 
disaster  relief  the  Army  provided  during  the  landfall  and  aftermath  of  Hurricane  Andrew  in 
1992  in  southern  Florida.  In  this  type  of  operation,  a  detailed  knowledge  of  predicted  weather 
conditions  and  their  effects  on  coastal  zones,  infrastructure,  transportation,  and  public  safety 
is  important  in  quickly  achieving  stated  objectives,  while  avoiding  added  injuries  and 
destruction  related  to  the  weather. 

2.2  Peace  Enforcement  and  Peacekeeping  Operations 

Peace  enforcement  is  military  intervention  designed  to  forcefully  restore  peace  between 
belligerents  engaged  in  combat.  Peacekeeping  operations  use  military  forces  to  supervise  a 
cease-fire  and/or  separate  the  parties  at  the  request  of  the  disputing  groups.  OTW  support  is 
required  to  initially  project  the  force  and  for  subsequent  ground  and  aerial  reconnaissance 
efforts  to  collect  intelligence.  During  these  operations  Army  combat  power  that  benefits  from 
the  OTW  capabilities  may  need  to  be  applied. 




2.3  Noncombat  Evacuation  Operations 

Noncombat  evacuation  operations  relocate  threatened  civilians  from  hazardous  locations  in 
foreign countries. These operations usually involve U.S. citizens whose lives are in danger.
OTW  support  is  critical  to  the  success  of  these  operations,  as  the  failed  Iranian  hostage  rescue 
mission  demonstrated  in  1980.  The  failure  of  this  operation  was  directly  attributed  to  the 
weather,  specifically,  the  effects  of  unexpected  dust  and  sand  storms  encountered  along  the 
helicopters’  flight  paths  and  at  staging  points. 

OTW  supports  overall  mission  planning  and  execution,  to  include  elements  of  concealment  and 
surprise,  by  predicting  weather  conditions  and  their  effects  along  planned  flight  paths  and 
loading  zones.  During  the  successful  1983  Grenada  action,  for  example,  weather  was  an 
important  factor  in  that  overcast  cloud  conditions  prevented  Russian  satellites  from  observing 
our  aircraft  and  ships,  adding  to  the  element  of  surprise. 

3.  NONMILITARY/CIVILIAN  APPLICATIONS 

3.1  Air  and  Noise  Pollution 

Of  the  many  civilian  and  commercial  applications  for  OTW  technologies,  the  problem  of  air 
pollution  is  one  that  impacts  a  great  number  of  people.  The  air  pollution  problem  has  reached 
such  a  significance  that  urban  areas  that  do  not  show  positive  efforts  to  meet  Environmental 
Protection  Agency  standards  will  be  penalized  by  reductions  in  federal  funding.  The  pollution 
problem  along  segments  of  the  U.S. -Mexico  border  has  been  noted  as  quite  serious, 
particularly  in  the  El  Paso-Ciudad  Juarez  area.  It  often  becomes  evident  at  White  Sands 
Missile Range (WSMR) when southerly wind flow carries the dark brown layer of contaminants northward into New Mexico. Concern for this regional situation has prompted state
agencies  of  Texas  and  New  Mexico,  and  city  governments  of  El  Paso  and  Ciudad  Juarez  to 
form  the  "Paso  del  Norte  Task  Force,"  dedicated  to  cooperation  in  attempting  to  correct  the 
problem. The Task Force welcomed U.S. Army Research Laboratory (ARL) proposals to enter into Cooperative Research and Development Agreements (CRDA), which would benefit
all  partners  by  pooling  resources  and  technologies  in  efforts  to  alleviate  the  mutual  problem. 
The  partnerships  augment  the  civilian  atmospheric  monitoring  capabilities  with  state-of-the-art 
direct  and  remote  sensing  instruments,  which  can  continually  monitor  the  state  of  the 
atmosphere.  Some  of  the  capabilities  are  fixed  in  place  at  WSMR,  but  many  of  the  instruments 
can  be  transported  to  observation  sites  where  needed.  Major  benefits  to  the  Army  include  the 
testing  and  evaluation  of  transport  and  diffusion  models  in  an  urban  and  complex  terrain 
environment,  and  the  display  of  results  on  a  computer-based  geographic  information  system. 

The  Mobile  Profiler  System  (MPS)  proved  its  value  as  a  primary  source  of  atmospheric  data 
during  the  Los  Angeles  Free  Radical  Experiment,  a  multi-agency  and  bi-national  (U.S.  and 
Canada)  air  pollution  experiment  in  the  Los  Angeles  basin  during  September  1993.  The  MPS 
consists  of  a  radar  wind  profiler,  a  Radio  Acoustic  Sounding  System,  a  ground-based 
microwave  radiometer,  and  other  instruments,  as  well  as  a  meteorological  satellite  receiver. 
It provided vertical profiles of wind and temperature nearly non-stop throughout the four weeks of the experiment, at an unprecedented level of detail in time and space as shown, for example,
in  figure  1.  These  profiles  were  averaged  and  displayed  as  often  as  every  3  minutes  at  vertical 
resolutions as fine as 100 m from the surface to 3-5 km (wind) and to 0.8-1.6 km (virtual
temperature).  Combining  these  profiles  with  those  derived  from  satellite  data  extended  the 
maximum  height  to  over  14  km.  The  profiles  from  the  MPS  were  used  with  measurements 
of  concentrations  of  pollutant  species  (ozone,  particulates,  etc.)  to  describe  the  transport  and 
diffusion  of  those  pollutants  within  the  local  area. 

Meteorological  models  of  the  type  found  in  the  IMETS  use  these  MPS  profiles  to  derive 
descriptions  and  forecasts  of  atmospheric  conditions  throughout  a  mesoscale  region.  These 
analyses  and  forecasts,  in  turn,  provide  essential  input  into  regional  transport  and  diffusion 
models  also  being  developed  as  part  of  the  OTW  effort.  More  information  on  the  experiment 
and the role of the MPS may be found in Wolfe et al. (1994) and Cogan et al. (1994).



Figure  1.  Time/height  display  of  the  MPS  radar  wind  profiler  data.  Wind  arrows  have 
conventional  meaning  except  a  full  barb  represents  10  m/s  and  a  half  barb  represents  5  m/s. 
Sounding  derived  from  15  min  of  data  displayed  every  half  hour. 




Noise  pollution  is  another  area  where  OTW  technologies  can  benefit  the  civilian  community. 
An  acoustic  testbed  is  available  to  validate  acoustic  models  that  predict  noise  impacts  of 
planned  urban  developments,  such  as  airport  placement,  industrial  development  sites,  and 
traffic  routing.  The  testbed  is  located  in  an  isolated  area  where  an  exceptionally  wide  range 
of  frequencies  may  be  produced  without  public  disturbance.  Scientists  performing  acoustic 
research  have  also  provided  assistance  in  dealing  with  the  noise  pollution  associated  with 
munitions  testing  in  the  Aberdeen  Proving  Ground  area. 

3.2  Environmental  Cleanup 

Environmental  cleanup  operations  include  the  transportation  and  disposal  of  toxic/hazardous 
materials  that  have  the  potential  for  environmental  disasters  in  the  event  of  accidents.  OTW 
technologies  with  potential  in  this  area  include  the  continuous  remote  monitoring  of  hazardous 
waste  sites  and  the  prediction  of  toxic  corridors  resulting  from  potential  and  actual  chemical 
spills  or  nuclear  radiation  releases.  The  monitoring  capability  was  recently  demonstrated  at 
WSMR  when  routine  operation  of  the  Remote  Sensing  Rover,  a  portable  Fourier  transform 
spectrometer,  detected  the  presence  of  ammonia  gas  as  a  byproduct  in  the  smoke  from  a  forest 
fire  at  a  distance  of  8  km  from  the  Rover.  The  gas  was  determined  to  have  been  produced 
from ammonia fertilizer compounds used in slurry for fighting the fire. While the concentration of the gas was not hazardous, its detection demonstrated one of the many areas for potential
civilian  applications. 

3.3  Global  Climate  Change 

Understanding  climatic  change  and  effects  of  human  activities  requires  an  intensive  effort  to 
monitor  the  atmosphere,  generate  essential  input  to  environmental  models,  and  provide  data 
to  update  and  check  the  quality  of  those  models.  OTW  technology  can  assist  in  fulfilling  those 
requirements.  The  MPS  demonstrated  the  ability  to  continuously  monitor  the  atmosphere  over 
a  period  of  nearly  a  month.  The  future  MPS  will  have  the  ability  to  provide  high-quality  data 
for  more  extended  periods.  These  data  will  feed  mesoscale  meteorological  models  in  the 
IMETS,  permitting  high-resolution  analysis  and  short-term  forecasts  over  regional  scales.  Both 
the  MPS  and  the  IMETS  can  be  deployed  in  remote  areas  not  normally  accessible  by  more 
conventional  measurement  and  analysis  systems.  The  mobility  of  these  systems  allows  them 
to  be  placed  in  a  variety  of  locations  on  short  notice  at  a  relatively  low  cost.  At  the  same  time, 
their  durability  and  reliability  result  in  lower  costs  of  operation,  repair,  and  maintenance. 

The  measurements  provided  by  a  network  of  MPS’s  may  be  supplemented  by  meteorological 
sensors and dropsondes carried by small manned or unmanned aircraft. The use of these
instruments  on,  or  deployed  by,  aircraft  provides  detailed,  quantitative  data  over  wide  areas 
during  experiments  or  unusual  weather.  The  dropsondes  can  also  become  small  meteorological 
ground  (or  sea-surface)  stations  when  they  reach  the  surface.  Measurements  from  these  sensors 
serve  as  input  to  analysis  and  forecast  systems  such  as  the  IMETS,  further  increasing  the 
ability  of  mesoscale  models  to  accurately  depict  the  atmosphere. 




Global  monitoring  will  be  carried  out  by  future  space  earth-observing  systems  that  provide 
wide-area coverage. To obtain accurate and detailed measurements, satellite remote sensors
require  calibration  both  before  and  after  launch,  and  periodically  during  their  lifetimes.  This 
calibration  and  validation  (cal/val)  process  depends  on  having  high  quality  "ground  truth" 
measurements.  The  MPS  and  airborne  sensors  can  provide  these  data  with  both  high  accuracy 
and  wide  coverage.  The  MPS  can  be  moved  to  a  large  variety  of  locations  around  the  globe, 
and  the  airborne  sensors  and  processors  may  be  fitted  in  light  military  and  civil  aircraft.  These 
systems  will  allow,  for  the  first  time,  cal/val  over  many  different  climatic  regions  at  an 
affordable  cost,  as  opposed  to  the  common  practice  of  taking  data  over  a  few  limited  areas  and 
extrapolating  those  results  for  the  entire  earth.  The  unique  earth-target  provided  by  the  White 
Sands  National  Monument,  the  world-class  ARL  Atmospheric  Profiler  Research  Facility,  and 
a  vast  array  of  other  meteorological  instrumentation  make  WSMR  an  ideal  site  for  satellite 
sensor  cal/val. 

3.4  Transportation 

OTW  sensors  can  provide  real-time  data  for  aircraft  safety  and  hazard  avoidance.  The  MPS 
can  generate  wind  profiles  at  airports  that  will  enable  rapid  response  warnings  of  hazardous 
conditions  such  as  down-bursts  and  other  sudden  changes  in  wind  speed  or  direction,  including 
vertical  motion.  Current  experimental  sites  at  airports  such  as  Denver,  CO  provide  horizontal 
wind  information  as  often  as  every  15  min.  However,  even  this  relatively  rapid  refresh  rate 
may  not  be  adequate  for  rapidly  developing  situations,  for  example,  such  as  a  gust  front  or 
sudden  down-burst.  The  3  min  refresh  time  of  the  current  MPS  enables  the  detection  of  these 
events  in  time  to  provide  adequate  warning.  Another  useful  MPS  capability  is  the  production 
of  accurate  wind  data  in  the  presence  of  overflying  aircraft  or  birds,  either  of  which  produce 
erroneous  results  in  current  types  of  radar  wind  profiling  systems.  In  the  presence  of  strong, 
rapidly-changing convective conditions, radar profilers may not produce reliable winds.
Nevertheless,  the  MPS  will  generate  information  that  will  indicate  that  the  wind  data  are 
unreliable,  and  will  provide  estimates  of  probable  errors  that  may  be  used  as  indicators  of 
hazardous  conditions.  Unlike  many  lidars,  the  MPS  will  produce  profiles  through  clouds  and 
fog.  It  will  be  able  to  move  to  a  location  most  suitable  for  the  active  runway  and  start 
operations  in  less  than  an  hour.  The  MPS  satellite  receiver  will  help  provide  advance  warning 
of  potential  severe  weather  that  could  affect  airport  operations.  IMETS  models  may  be  used 
to  extend  the  area  of  coverage  to  the  mesoscale  region  around  the  airport,  taking  advantage  of 
other  sources  of  data,  for  example,  from  the  National  Weather  Service.  Civilian  versions  of 
decision  aids  will  permit  controllers  and  others  to  perform  their  functions  more  efficiently. 
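One simple way such a rapid-refresh profile stream could be turned into an alert is to flag large vector wind changes between successive profiles in the lowest layers. The sketch below is purely illustrative; the threshold, layer depth, and interface are assumptions and do not represent an MPS or IMETS algorithm.

    import numpy as np

    def flag_wind_change(u_prev, v_prev, u_now, v_now, heights_m,
                         threshold_ms=10.0, max_height_m=1500.0):
        # Return the heights (m) at which the vector wind changed by more than
        # threshold_ms between two successive profiles (e.g., 3 minutes apart).
        du = np.asarray(u_now, dtype=float) - np.asarray(u_prev, dtype=float)
        dv = np.asarray(v_now, dtype=float) - np.asarray(v_prev, dtype=float)
        change = np.hypot(du, dv)
        h = np.asarray(heights_m, dtype=float)
        return h[(change > threshold_ms) & (h <= max_height_m)]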

The  models  and  decision  aids  of  the  IMETS  will  be  able  to  provide  short-term  warning  of 
hazardous  conditions  for  ground  transportation.  Examples  include  ice  on  roadways,  fog,  high 
winds,  and  flooding.  Data  can  be  extracted  from  existing  systems,  such  as  rawinsondes  and 
surface  stations,  and  specifically  deployed  systems  such  as  the  MPS.  Several  MPS’s,  and 
perhaps  light  aircraft  with  meteorological  sensors  and  dropsondes,  would  be  sent  to  locations 
with  a  high  potential  for  hazardous  weather  events.  The  mobility  and  consequent  low  cost  of 
deploying  OTW  systems  would  allow  the  movement  of  these  systems  to  locations  of  interest, 
even  in  remote  areas. 




ARL is in the process of making technologies available to civilian efforts in the development
of  the  Intelligent  Vehicle  Highway  System  (IVHS).  A  low-visibility  warning  system  being 
developed  as  part  of  the  IVHS  will  provide  early  warning  to  transportation  officials  and  traffic 
of  rapidly  deteriorating  highway  visibility  due  to  the  sudden  onset  of  blowing  dust,  snow, 
smoke, fog, etc. To reduce development costs resulting from waiting for natural obscuring
phenomena  in  which  to  validate  the  system,  the  developer  plans  to  make  use  of  some  of  ARL’s 
technologies  such  as  an  artificial  fog  generator  and  remote  measurement  systems. 


3.5  Forestry 

Application of OTW systems to forestry is closely related to environmental monitoring and providing short-term warning of hazardous conditions. Of particular concern are the atmospheric conditions that affect the spread of forest and brush fires. During the unusually hot and dry summer of 1994 at WSMR, NM, a series of brush fires burned much of the vegetation covering the Organ and San Andres mountains. Sudden and frequently
unexpected  changes  in  wind  speed  and  direction  caused  the  fire  to  spread  rapidly,  at  one  point 
threatening  the  WSMR  main  post  area.  Other  recent  examples  include  fires  in  the  Los  Angeles 
area  in  1993,  in  Yellowstone  National  Park  in  1988,  and  near  Glenwood  Springs,  Colorado  in 
1994  The  fact  that  sudden  changes  in  atmospheric  conditions  cause  fires  to  rapidly  spread 
when  they  may  have  been  thought  to  be  under  control  is  well  known  to  many  charged  with 
fighting  and  controlling  fires.  The  combination  of  high  temperature  (>  40  "C),  very  jow 
relative  humidity  (10-15%),  and  highly  variable  wind  conditions  led  to  a  highly  volatile  fire 
weather"  situation  that  fed  the  fire  at  WSMR.  In  the  Colorado  fire,  a  sudden  shift  in  wind 
direction  over  rugged  terrain  led  to  the  death  of  14  fire  fighters. 

The  MPS  and  other  OTW  sensors  can  provide  invaluable  information  for  analysis  and 
prediction  of  atmospheric  conditions  that  affect  forest  fires.  Especially  valuable  is  the  mobility 
of the MPS and its ability to reach remote areas accessible only by 4-wheel-drive vehicles.
The  essential  capability  is  the  production  of  profiles  of  wind  for  the  lowest  few  kilometers 
every  3  min,  tracking  sudden  changes  during  highly  variable  conditions.  The  MPS  also 
includes  instruments  that  can  provide  equally  frequent  data  for  temperature  and  total  moisture 
in  the  lowest  kilometers.  The  satellite  data  acquired  by  the  MPS  will  provide  a  general  picture 
of  the  atmospheric  situation  over  a  large  area  around  the  location  of  the  fire.  The  airborne 
instruments  being  developed  may  be  carried  or  dropped  by  fire  fighting  or  observation  aircraft 
to  provide  required  atmospheric  data  above  the  fire  location.  The  very  low  weight,  size,  and 
power  requirement  of  these  instruments  permit  them  to  be  added  on  a  non-interference  basis, 
such  as  in  a  removable  pod. 

The  data  gathered  by  these  OTW  sensors  and  more  conventional  instruments  may  be  analyzed 
by  software  in  the  highly  mobile  IMETS  to  provide  rapid  analyses  and  predictions  for  the  area 
of and nearby the fire. The predictions will enable fire fighters to place personnel and
equipment ahead of time in places where the fire is expected to spread, and to avoid potential




high-risk  areas.  The  overall  result  is  the  ability  to  control  and  extinguish  fires  quickly  with 
less  expenditure  of  resources,  and  with  less  danger  to  personnel. 

3.6  Agriculture 

Agriculture has always been extremely dependent upon weather conditions, especially floods,
droughts, and freezes, but OTW technologies promise some applications that are not as readily
evident as the daily weather reports and forecasts available through the news media. For
example, ARL's FM-CW radar is used primarily to provide profiles of atmospheric turbulence,
but it is so sensitive that it can detect airborne insects. The atmospheric transport of adult moths
is a critical concern in combating infestations of the Fall Army Worm, for example. This
capability is advantageous for detecting and monitoring insect migration, thereby permitting
more timely, effective, and efficient use of pesticides and minimizing contamination of the
environment by chemicals.

4.  CONCLUSIONS 

State-of-the-art OTW technology that shows great promise in providing land warfare weather
support has many applications outside of wartime operations. Knowledge of atmospheric
conditions and their effects is essential to success in Army OOTW, such as disaster relief,
peace enforcement, peacekeeping, and noncombat evacuation operations. OTW is the next
warfighting edge for enabling land force dominance by leveraging the power of information
and technology to increase the lethality, survivability, and tempo of operations in war and
OOTW. In addition, much of this technology will be extremely useful in a wide variety of
civilian applications such as air and noise pollution control, environmental cleanup, global
climate change analyses, transportation safety, forest fire control, and agriculture.

REFERENCES 


Cogan,  J.  L.,  E.  M.  Measure,  E.  D.  Creegan,  D.  Littell,  and  J.  Yarbrough,  1994,  "The  Real 
Thing:  Field  Tests  and  Demonstrations  of  a  Technical  Demonstration  Mobile  Profiler 
System."  In  Proceedings  of  the  1994  Battlefield  Atmospherics  Conference,  U.S.  Army 
Research  Laboratory,  White  Sands  Missile  Range,  NM  88002-5501. 


Dept. of the Army, 1993, Field Manual 100-5, Operations. U.S. Army Training and Doctrine
Command, ATTN: ATDO-A, Fort Monroe, VA 23651-5000.

Eden, MAJ Steve, 1994, "Preserving the Force in the New World Order." Military Review,
No. 6, pp 2-7.


Wolfe,  D.,  B.  Weber,  D.  Wuertz,  D.  Welsh,  D.  Merritt,  S.  King,  R.  Fritz,  K.  Moran,  M. 
Simon,  A.  Simon,  J.  L.  Cogan,  D.  Littell,  and  E.  M.  Measure,  1994,  "An  Overview 
of  the  Mobile  Profiler  System,  Preliminary  Results  from  Field  Tests  during  the  Los 
Angeles  Free-Radical  Study."  Submitted  to  Bull  Amer.  Meteor.  Soc. 




THE  REAL  THING:  FIELD  TESTS  AND  DEMONSTRATIONS  OF 
A  TECHNICAL  DEMONSTRATION  MOBILE  PROFILER  SYSTEM 


J.  Cogan,  E.  Measure,  E.  Creegan,  D.  Littell,  and  J.  Yarbrough 
U.S.  Army  Research  Laboratory 
Battlefield  Environment  Directorate 
White  Sands  Missile  Range,  NM  88002-5501 

and 


B.  Weber,  M.  Simon,  A.  Simon,  D.  Wolfe,  D.  Merritt, 
D.  Weurtz,  and  D.  Welsh 
Environmental  Technology  Laboratory 
National Oceanic and Atmospheric Administration
Boulder,  CO  80303 


ABSTRACT 

A  near  real-time  sounding  of  the  atmosphere  from  the  surface  to  >30  km  over  a  battlefield 
may  be  obtained  by  combining  atmospheric  profiles  from  an  array  of  ground-based  remote 
sensors  and  meteorological  satellites.  This  type  of  capability  is  essential  for  optimum  use  of 
Army  assets  such  as  artillery  and  defense  against  biological  and  chemical  attack,  as  well  as  a 
variety of civilian applications. This paper briefly describes the technical demonstration (TD)
Mobile  Profiler  System  (MPS),  and  outlines  the  method  for  merging  data  from  the  satellite  and 
ground-based  systems.  The  processing  software  allows  acquisition  of  valid  ground-based  data 
with  a  refresh  time  as  short  as  3  min  and  in  the  presence  of  overflying  birds  or  aircraft. 
Results  from  field  tests  at  sites  in  White  Sands  Missile  Range,  NM;  near  Los  Angeles,  CA; 
and  at  Ft.  Sill,  OK  indicate  the  capability  of  the  TD  MPS  to  generate  useful  atmospheric 
soundings  for  the  field  artillery  and  a  variety  of  other  military  and  civilian  users. 

1.  INTRODUCTION 

The  Mobile  Profiler  System  (MPS)  is  being  developed  to  provide  military  and  civilian  users 
with  atmospheric  soundings  in  close  to  real  time.  Applications  include  avoidance  of  hazardous 
wind  conditions  at  airfields  and  training  ranges,  and  obscurant  and  pollution  monitoring. 
Szymber  et  al.  (1994)  discuss  potential  military  applications  in  operations  other  than  war  and 
related  civilian  uses.  The  type  of  systems  found  in  the  Technical  Demonstration  (TD)  MPS 
are  described  in  Cogan  and  Izaguirre  (1993),  Miers  et  al.  (1992),  their  references,  Weber  and 
Weurtz  (1990),  Hassel  and  Hudson  (1989),  and  Strauch  et  al.  (1987).  This  paper  briefly 
describes  the  TD  MPS,  provides  an  outline  of  the  combined  sounding  technique,  and  presents 
examples  of  actual  data. 




2.  SYSTEM  DESCRIPTION 


The TD MPS consists of a 924-MHz radar profiler operating in a five-beam mode for winds,
a Radio Acoustic Sounding System (RASS) for virtual temperature (Tv), a ground-based
microwave radiometer for Tv and humidity, a small ground station for temperature, pressure,
humidity, and wind velocity, and a small satellite receiving system for acquiring and processing
satellite sounder data for temperature and humidity. Satellite sounding heights are computed
for the standard pressure levels, and wind velocity is calculated using the geostrophic
assumption. Temperature is converted to Tv as required. Pressure versus height is computed
from  the  measured  sounding  data  and,  in  the  future,  may  be  measured  for  the  lower  part  of  the 
sounding  using  the  microwave  radiometer.  The  main  electronic  components  and  some  of  the 
sensors  of  the  TD  MPS  are  housed  in  or  on  a  9-m  trailer.  The  radar  antenna,  the  four  RASS 
sources,  the  satellite  antenna,  and  the  microwave  radiometer  are  deployed  around  the  trailer. 
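
For readers unfamiliar with the conversions involved, the short sketch below illustrates how virtual temperature and heights at the standard pressure levels may be computed from temperature, moisture, and pressure. It is not taken from the TD MPS software; the constants, function names, and sample values are assumptions made for illustration only.

    import math

    RD = 287.04   # dry-air gas constant, J kg^-1 K^-1
    G0 = 9.80665  # standard gravity, m s^-2

    def virtual_temperature(t_kelvin, mixing_ratio):
        """Virtual temperature Tv = T(1 + 0.61 w), with w in kg/kg."""
        return t_kelvin * (1.0 + 0.61 * mixing_ratio)

    def hypsometric_heights(pressures_hpa, tv_kelvin, z0_m=0.0):
        """Integrate the hypsometric equation upward through a sounding.
        Pressures decrease upward (hPa); Tv is given at the same levels (K);
        z0_m is the geometric height of the first level (m)."""
        heights = [z0_m]
        for k in range(1, len(pressures_hpa)):
            tv_bar = 0.5 * (tv_kelvin[k - 1] + tv_kelvin[k])   # layer-mean Tv
            dz = (RD * tv_bar / G0) * math.log(pressures_hpa[k - 1] / pressures_hpa[k])
            heights.append(heights[-1] + dz)
        return heights

    # Illustrative standard-level data: pressure (hPa), temperature (K), mixing ratio (kg/kg).
    p = [850.0, 700.0, 500.0, 300.0]
    t = [288.0, 278.0, 262.0, 235.0]
    w = [0.008, 0.005, 0.002, 0.0004]
    tv = [virtual_temperature(ti, wi) for ti, wi in zip(t, w)]
    for pk, zk in zip(p, hypsometric_heights(p, tv, z0_m=1450.0)):
        print(f"{pk:6.1f} hPa  ->  {zk:8.1f} m")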

A  single  workstation  controls  the  satellite  terminal  and  processor,  while  a  PC  operates  the 
radar and collects data from the remaining ground systems. A second workstation serves as
the  primary  processor  and  data  manager.  Up  to  two  balloon  systems  may  be  run  from  the 
trailer  to  obtain  comparison  data,  as  during  the  Los  Angeles  Free  Radical  Experiment  (LAFRE) 
in  the  Los  Angeles  basin.  Further  tests  have  been  held  at  White  Sands  Missile  Range 
(WSMR),  NM;  near  Boulder,  CO;  and  at  Ft.  Sill,  OK. 

3.  PROCESSING  AND  COMBINING  METHOD 

New  algorithms  help  operate  the  ground-based  sensors  and  considerably  enhance  the  quality 
control  of  the  output.  Examples  include  updated  software  for  the  radar  wind  profiler  and 
RASS.  Entirely  new  routines  eliminate  most  of  the  problems  arising  from  birds  or  aircraft 
flying  through  the  main  radar  beam  or  side  lobes  (Merritt  1994).  In  the  few  cases  in  which 
the  atmospheric  return  cannot  be  separated  out  of  contaminated  data,  the  output  for  those 
particular  times  and  altitudes  can  be  flagged  as  unreliable.  New  techniques  under  development 
include neural-net-based methods for converting radiances from the ground-based microwave
radiometer into Tv profiles and for converting satellite radiances into temperature and dewpoint
profiles (e.g., Bustamante et al. 1994).


Merging  algorithms  are  described  in  Cogan  and  Izaguirre  (1993)  and  their  references.  Ground- 
based  systems  provide  detailed  soundings  for  the  lower  troposphere,  while  a  satellite  sounder 
covers  the  atmosphere  from  about  2  or  3  km  up  to  >30  km.  Profiles  from  ground-based 
systems  are  combined  to  form  a  single,  multivariable  sounding.  The  satellite  sounding  is 
weighted  relative  to  the  MPS  location  and  time  and  merged  with  the  ground-based  sounding. 
Normally,  satellite  and  ground-based  profiles  overlap;  if  not,  satellite  data  for  each  variable 
are  extrapolated  down  to  the  uppermost  level  of  each  ground-based  profile.  For  each  variable, 
routines  within  the  merging  program  adjust  the  satellite  profiles  starting  at  the  satellite  sounding 
level  immediately  above  the  highest  level  of  each  ground-based  profile.  The  merged  profiles 
are  entered  in  a  single  file  to  form  a  combined  sounding. 
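
The general idea of the merging step can be visualized with the minimal sketch below. It is not the published algorithm of Cogan and Izaguirre (1993); the simple offset adjustment applied to the satellite profile at the junction with the ground-based profile, and all variable names and sample values, are assumptions made for illustration only.

    import numpy as np

    def merge_profiles(z_gnd, v_gnd, z_sat, v_sat):
        """Merge a ground-based profile with a satellite profile for one variable.
        Satellite levels above the top of the ground-based profile are kept after
        removing the offset between the satellite value interpolated to the junction
        height and the topmost ground-based value. Heights ascend, in metres."""
        z_top = z_gnd[-1]                              # highest ground-based level
        offset = v_gnd[-1] - np.interp(z_top, z_sat, v_sat)
        keep = z_sat > z_top                           # satellite levels above the junction
        z_merged = np.concatenate([z_gnd, z_sat[keep]])
        v_merged = np.concatenate([v_gnd, v_sat[keep] + offset])
        return z_merged, v_merged

    # Example: a low-level temperature profile merged with a satellite sounding.
    z_g = np.array([100.0, 500.0, 1000.0, 2000.0, 3500.0])
    t_g = np.array([295.0, 292.0, 288.0, 281.0, 271.0])
    z_s = np.array([3000.0, 5000.0, 10000.0, 20000.0, 30000.0])
    t_s = np.array([276.0, 262.0, 225.0, 215.0, 228.0])
    z_m, t_m = merge_profiles(z_g, t_g, z_s, t_s)
    print(np.column_stack([z_m, t_m]))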




4.  DATA 


This  section  presents  samples  of  data  acquired  at  WSMR,  NM  and  Ft.  Sill,  OK.  Figure  1 
presents a plot from the Ft. Sill data of radar profiler winds in the form of standard wind barbs
in  which  speeds  are  in  meters  per  second  instead  of  knots.  The  abscissa  is  time  in  hours  UTC, 
from  1130  to  1330  on  15  September  1994.  The  included  scale  (original  in  color)  at  the  bottom 
of  the  chart  also  gives  an  indication  of  wind  speed.  Radar  wind  profiles  appear  every  15  min, 
but  satellite  wind  profiles  generally  are  available  every  2  to  6  h.  For  each  satellite  pass,  the 
same  satellite  sounding  is  input  into  the  processing  program  until  the  next  satellite  pass  or  until 
the  current  satellite  sounding  reaches  a  maximum  time  staleness  (e.g.,  6  h).  Adjustments  to 
the  satellite  winds  are  only  slight  as  a  consequence  of  the  very  light  wind  at  uppermost  radar 
heights  (about  4  km  in  figure  1). 
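
The rule of reusing a satellite sounding until the next pass, or until it becomes too stale, can be illustrated with the toy sketch below. The 6-h limit follows the example quoted above; the function name and data structures are assumptions and are not part of the actual processing program.

    from datetime import datetime, timedelta

    MAX_STALENESS = timedelta(hours=6)   # assumed staleness limit (the "e.g., 6 h" above)

    def select_satellite_sounding(soundings, analysis_time):
        """Return the most recent satellite sounding that is not too stale.
        soundings is a list of (valid_time, data) pairs in any order; None is
        returned when every available sounding exceeds MAX_STALENESS."""
        usable = [(t, d) for t, d in soundings
                  if timedelta(0) <= analysis_time - t <= MAX_STALENESS]
        return max(usable, key=lambda pair: pair[0]) if usable else None

    passes = [(datetime(1994, 9, 15, 9, 40), "pass A"),
              (datetime(1994, 9, 15, 13, 5), "pass B")]
    print(select_satellite_sounding(passes, datetime(1994, 9, 15, 13, 15)))  # -> pass B
    print(select_satellite_sounding(passes, datetime(1994, 9, 15, 20, 30)))  # -> None (stale)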


'  / 


r 


r 

r 


aHe:  Fi.  Sill _  WINDS  inainj  ment:  Cogon  Intaqrolion 


A  A 

r  r 
r  f 


A 

r 

r 


i . 


a 


A 

r 

[ 


A  A 

r  r 
r  f 


rf  '/  1  '/  '/  '/  '/ 

£ 


p 


p 

Wj 


Figure  1.  Time-height  display  of  combined  wind  velocity 
profiles  derived  from  radar  profiler  and  satellite  data. 


The  TD  MPS  can  generate  a  variety  of  useful  profiles.  Figures  2  through  7  show  profiles  of 
wind  (15  and  3  min)  and  temperature  (15  and  3  min)  as  well  as  3-min  graphs  of  vertical 
velocity  and  an  indicator  of  error  (error  estimate).  Figure  2  presents  15-min  wind  profiles 
from  the  924-MHz  radar  profiler  from  1600  to  2000  UTC  on  29  July  1994.  The  surface  layer 
(up  to  1  km)  shows  light  and  variable  winds  capped  by  a  layer  of  westerly  winds  around  5  m/s 
that  gradually  veer  with  height  reaching  speeds  near  10  m/s  from  the  northeast  at  about  4  km. 
Toward  the  end  of  this  period  the  wind  becomes  variable  around  2  to  3  km.  Figure  3  shows 
3-min  winds  for  part  of  the  period  of  figure  2  (i.e.,  1830  to  1930).  The  general  pattern  is 
roughly  the  same  as  for  the  15-min  winds  for  the  same  period  even  though  more  variability  is 




apparent  (e.g.,  more  variable  wind  direction  and  changes  in  maximum  height).  Particularly 
noticeable  is  the  zone  of  missing  data  around  1920.  It  did  not  appear  in  figure  2  because  the 
15-min  average  displays  a  sounding  if  at  least  one  3-min  profile  occurs  within  the  averaging 
period.  This  smoothing  effect  also  is  apparent  in  the  soundings  from  the  RASS.  The 
15-min  averages  (figure  4)  smoothed  the  3-min  values  (figure  5),  especially  near  the  surface, 
eliminating  some  holes  in  the  data  created  by  removal  of  questionable  values  by  the  quality 
control  algorithm.  Figures  4  and  5  show  useful  RASS  data  to  about  1.5  km.  Under 
unfavorable  conditions  the  maximum  height  may  reach  only  up  to  0.7  to  1.0  km. 
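
The behavior of the 15-min display can be illustrated with the sketch below, in which a range gate is reported whenever at least one valid 3-min value falls within the averaging window. The function name and sample values are assumptions made for illustration only.

    def average_window(profiles):
        """Average the 3-min profiles in one 15-min window, gate by gate.
        Each profile is a list of values per range gate, with None marking
        gates removed by quality control; a gate is reported as long as at
        least one valid 3-min value exists in the window."""
        averaged = []
        for gate_values in zip(*profiles):            # walk the gates across profiles
            valid = [v for v in gate_values if v is not None]
            averaged.append(sum(valid) / len(valid) if valid else None)
        return averaged

    # Five 3-min wind-speed profiles (m/s); the third gate is missing in all but one.
    window = [[4.0, 5.2, None, 8.1],
              [4.4, None, None, 7.9],
              [None, 5.0, 6.5, 8.4],
              [4.1, 5.1, None, None],
              [4.3, 4.9, None, 8.0]]
    print(average_window(window))   # the third gate is still reported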




Figures 2 and 3. Time-height displays of 15-min and 3-min wind profiles
from the 924-MHz radar profiler.




Figure 4. Time-height display of 15-min virtual temperature (Tv) profiles
from the RASS.


Figure 5. Time-height display of 3-min Tv profiles from the RASS.


Figure  6.  Time-height  display  of  3-min  vertical  velocity 
profiles.




[Figure 7 display header: site: White Sands Missile Range; error estimate (m/s); instrument: wind profiler]


Figure  7.  Time-height  display  of  3-min  profiles  of  error 
estimate. 


In the TD MPS the software computes and displays two other useful quantities, vertical
velocity  (figure  6)  and  error  estimate  (figure  7).  Vertical  velocity  in  this  system  is  computed 
from  the  four  oblique  beams.  The  vertical  velocity  may  be  used  in  investigations  of 
atmospheric dynamics, as well as providing a correction to the RASS measurements of Tv.
More  details  on  the  algorithm  may  be  found  in  Weurtz  et  al.  (1988).  Figure  7  presents  a 
measure  of  the  error  in  the  computed  horizontal  wind  in  the  form  of  error  estimates 
(Weurtz  et  al.  1988).  These  estimates  primarily  indicate  the  nonuniformity  of  the  wind  at  the 
specified  height  interval  over  the  averaging  period  (e.g.,  100  m  and  3  min).  For  example,  the 
radial  wind  velocity  from  the  north  beam  should  be  of  equal  magnitude  and  opposite  sign  of 
that  from  the  south  beam  for  a  uniform  wind  field  in  a  particular  layer.  Normally,  the 
assumption  of  uniformity  is  not  exact,  especially  at  higher  altitudes,  but  in  the  absence  of 
strong  convection  should  be  close  enough  to  allow  useful  measurements.  If  the  error  estimate 
is  high  relative  to  the  horizontal  wind  speed,  the  user  at  least  knows  that  the  measurement  at 
that  height  and  time  is  unreliable. 
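
To make the beam geometry concrete, the sketch below derives the horizontal wind, the vertical velocity, and a crude nonuniformity indicator from the radial velocities of four oblique beams. It is not the algorithm of Weurtz et al. (1988); the 15 degree tilt angle, the sign conventions, and the particular error indicator are assumptions made for illustration only.

    import math

    ZENITH_ANGLE = math.radians(15.0)   # assumed off-vertical tilt of the oblique beams

    def winds_from_beams(vr_n, vr_s, vr_e, vr_w, zenith=ZENITH_ANGLE):
        """Wind components from four oblique-beam radial velocities (positive away
        from the radar). For a uniform wind the opposed radial velocities differ
        only by sign apart from the common vertical-motion contribution, so the
        disagreement between the two beam pairs is used here as an error indicator."""
        s, c = math.sin(zenith), math.cos(zenith)
        v = (vr_n - vr_s) / (2.0 * s)        # meridional component
        u = (vr_e - vr_w) / (2.0 * s)        # zonal component
        w_ns = (vr_n + vr_s) / (2.0 * c)     # vertical velocity from the N-S pair
        w_ew = (vr_e + vr_w) / (2.0 * c)     # vertical velocity from the E-W pair
        return u, v, 0.5 * (w_ns + w_ew), abs(w_ns - w_ew)

    # A nearly uniform 10 m/s southerly wind with weak ascent.
    print(winds_from_beams(2.68, -2.50, 0.11, -0.09))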

The  profiles  presented  in  figures  2  through  5  suggest  that  for  many  atmospheric  situations 
15-min averaged profiles may be sufficient for certain applications in which relatively small,
very  short  term  changes  are  not  critical.  An  example  would  be  routine  analyses  and  forecasts 
for  mesoscale  areas.  However,  for  applications  such  as  detection  of  hazardous  winds  at 
airfields,  defense  against  chemical  attack,  or  fighting  forest  fires,  3-min  profiles  may  be 
extremely  important.  Szymber  et  al.  (1994)  present  some  applications  for  operations  other  than 
war  that  require  profiles  with  a  very  short  refresh  time. 




5.  COMPARISONS 


Tv profiles from RASS, satellite, and combined RASS and satellite were compared with
soundings  from  rawinsonde  using  the  LAFRE  data.  A  limited  set  of  comparisons  for  3  days 
indicated  standard  deviations  of  differences  (sdd)  between  combined  soundings  and  rawinsondes 
of  around  1.5  to  2.8  K.  The  sdd  for  the  combined  sounding  was  lower  for  the  satellite  alone 
but  usually  higher  for  the  RASS.  The  magnitudes  of  the  mean  differences  (mmd)  for  combined 
soundings  relative  to  rawinsondes  were  <0.7  K.  The  one  case  of  large  sdd  (2.8  K)  and  mmd 
(0.7  K)  may,  in  part,  be  a  result  of  the  large  sdd  and  mmd  of  the  satellite  profile  (2.4  and 
2.8  K,  respectively).  The  sdd  and  mmd  for  radar  profiler  wind  speeds  were  in  line  with 
values  reported  in  the  literature  (Miers  et  al.  1992),  about  1.5  to  2.5  m/s  and  <1  m/s, 
respectively. Wind speeds derived from satellite data relative to those from rawinsonde varied
widely  depending  on  atmospheric  conditions  and  time  and  distance  from  the  ground-based 
sounding,  ranging  from  about  3  to  4  m/s  to  over  15  m/s.  Wind  direction  differences  varied 
from  around  10°  to  over  70°.  These  differences  are  in  line  with  values  found  in 
Miers  et  al.  (1992)  and  others.  A  possible  method  for  reducing  differences  in  wind  speed  and 
direction  is  the  use  of  an  analysis  model  to  produce  a  satellite  sounding  at  the  location  (and 
possibly  time)  of  the  ground-based  profiles  (e.g.,  Caracena,  1992). 
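
The sdd and mmd statistics quoted above can be computed as in the following sketch; the profile values shown are illustrative numbers, not LAFRE data.

    import numpy as np

    def sdd_and_mmd(profile_a, profile_b):
        """Standard deviation of differences (sdd) and magnitude of the mean
        difference (mmd) between two collocated profiles of one variable."""
        diff = np.asarray(profile_a, dtype=float) - np.asarray(profile_b, dtype=float)
        return diff.std(ddof=1), abs(diff.mean())

    # Combined-sounding Tv versus rawinsonde Tv at common levels (K).
    combined = np.array([293.1, 290.4, 287.0, 282.6, 276.9, 270.2])
    rawin    = np.array([292.4, 291.5, 286.1, 283.9, 275.6, 271.0])
    sdd, mmd = sdd_and_mmd(combined, rawin)
    print(f"sdd = {sdd:.2f} K, mmd = {mmd:.2f} K")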

To  gain  an  idea  of  the  quality  of  the  rawinsonde  data,  wind  soundings  from  two  similar 
systems  receiving  data  from  one  sonde  were  compared:  (1)  MARWIN  and  (2)  Cross  Chain 
Loran  Atmospheric  Sounding  System  (CLASS).  Usually,  wind  speeds  and  directions  tracked 
each other within 1 m/s and 10°. However, poor agreement occurred occasionally, with as
much  as  2  to  3  m/s  and  70°  for  100-m  layers.  A  possible  partial  explanation  is  that  the 
MARWIN  software  has  more  extensive  built-in  checks  and  somewhat  smooths  the  data. 
Nevertheless,  the  user  should  make  sure  each  sounding  contains  valid  data  and  apply 
appropriate  quality  controls. 

6.  CONCLUSION 

The  TD  MPS  is  a  mobile  system  that  combines  the  capabilities  of  an  array  of  remote  sensors 
to  provide  atmospheric  soundings  with  a  rapid  refresh  rate  that  can  greatly  reduce  the  error 
caused  by  time  staleness.  The  MPS  is  a  true  dual-use  system,  capable  of  providing  data  that 
have  a  variety  of  applications.  The  rapid  refresh  capability  is  of  great  value  for  fire  support, 
airfield  operations,  and  chemical  and  biological  defense.  The  ability  to  generate  a  picture  of 
very short term flow and Tv patterns can lead to a better understanding of the atmosphere and
to  better  modeling  at  smaller  scales.  As  shown  in  the  LAFRE,  this  system  can  be  invaluable 
for  pollution  studies.  Use  of  the  MPS,  especially  if  tied  to  prognostic  models,  could  help 
reduce  damage  from  forest  fires,  and  lower  the  cost  of  fighting  them  in  both  lives  and  material. 




7.  REFERENCES 


Bustamante,  D.,  A.  Dudenhoffer,  and  J.  Cogan,  1994.  "Neural  Network  Derived  Thermal 
Profiles:  Analysis  and  Comparison  with  Rawinsonde  Data."  In  Proceedings  of  the 
1994  Battlefield  Atmospherics  Conference,  White  Sands  Missile  Range,  NM,  (in  press). 

Caracena,  F.  1992.  "The  Use  of  Analytic  Approximations  in  Providing  Meteorological  Data 
for  Artillery."  In  Proceedings  of  the  1992  Battlefield  Atmospherics  Conference,  Ft. 
Bliss,  TX,  pp  189-198. 

Cogan,  J.,  and  A.  Izaguirre,  1993.  A  Preliminary  Method  for  Atmospheric  Soundings  in  Near 
Real  Time  Using  Satellite  and  Ground  Based  Remotely  Sensed  Data.  ARL-TR-240, 
U.S.  Army  Research  Laboratory,  White  Sands  Missile  Range,  NM. 

Hassel, N., and E. Hudson, 1989. "The Wind Profiler for the NOAA Demonstration Network."
Instruments and Observing Methods Rep. No. 35, Fourth WMO Technical
Conference on Instruments and Methods of Observation (TECIMO-IV), Brussels,
WMO/TD, pp 261-266.

Merritt,  D.A.,  1994.  "A  Statistical  Averaging  Method  for  Wind  Profiler  Doppler  Spectra." 
J.  Atmos.  Ocean.  Tech.,  submitted. 

Miers,  B.,  J.  Cogan,  and  R.  Szymber,  1992.  A  Review  of  Selected  Remote  Sensor 
Measurements  of  Temperature,  Wind,  and  Moisture,  and  Comparison  to  Rawinsonde 
Measurements.  ASL-TR-0315,  U.S.  Army  Atmospheric  Sciences  Laboratory,  White 
Sands  Missile  Range,  NM. 

Strauch,  R.  G.,  B.  L.  Weber,  A.  S.  Frisch,  C.  G.  Little,  D.  A.  Merritt,  K.  P.  Moran,  and 
D.  C.  Welsh,  1987.  "The  Precision  and  Relative  Accuracy  of  Profiler  Wind 
Measurements." J. Atmos. Oceanic Technol., 4:563-571.

Szymber, R. J., M. A. Seagraves, J. L. Cogan, and O. M. Johnson, 1994. "Owning the
Weather:  It  Isn’t  Just  for  Wartime  Operations."  In  Proceedings  of  the  1994  Battlefield 
Atmospherics  Conference,  White  Sands  Missile  Range,  NM,  (in  press). 

Weber,  B.  L.,  and  D.  B.  Weurtz,  1990.  "Comparisons  of  Rawinsonde  and  Wind  Profiler 
Measurements." J. Atmos. Oceanic Technol., 7:157-174.

Weurtz,  D.  B.,  B.  L.  Weber,  R.  G.  Strauch,  A.  S.  Frisch,  C.  G.  Little,  D.  A.  Merritt,  K. 
P.  Moran,  and  D.  C.  Welsh,  1988.  "Effects  of  Precipitation  on  UHF  Wind  Profiler 
Measurements."  J.  Atmos.  Ocean.  Tech.,  5:450-465. 




CHARACTERIZING  THE  MEASURED  PERFORMANCE  OF  CAAM 


Abel  J.  Blanco 

Army  Research  Laboratory 
Battlefield  Environment  Directorate 
White  Sands  Missile  Range,  New  Mexico  88002-5501,  USA 


ABSTRACT 

The  Computer  Assisted  Artillery  Meteorology  (CAAM)  provides  a 
proposed  artillery  meteorological  (MET)  message  that  can 
significantly  improve  predicted  artillery  fire.  The  CAAM  design 
allows  the  artillery  commander  to  use  tailored  MET  messages 
computed  by  an  advanced  physics  model  using  recent  MET  data  input 
rather  than  his  stale  dedicated  station  message  for  adjusting  a 
first-round-hit  artillery  fire  mission.  This  paper  presents  two 
important  kinds  of  estimates  describing  the  performance  of  CAAM 
using  data  collected  in  the  desert  and  mountains  of  southern  New 
Mexico.  These  include  the  best  single  estimate  and  the 
confidence  interval  estimate  derived  from  measured  upper  air  data 
versus  nowcasted  and  forecasted  results.  In  complex  terrain  the 
confidence  interval  improves  with  the  number  of  available 
initializing  MET  stations.  Simulated  cannon  impact  displacements 
effected  by  wind,  virtual  temperature,  and  pressure  parameters 
are  tabulated  for  the  evaluation  of  an  analytical  objective 
analysis  algorithm  and  a  physical,  time  dependent,  three- 
dimensional  hydrodynamic  forecasting  model  used  in  CAAM. 


1.  INTRODUCTION 

The  Computer  Assisted  Artillery  Meteorology  (CAAM)  research  was  designed 
to  include  a  two  phase  approach.  The  first  phase  included  the  solution  for 
the  immediate  need  of  improving  the  accuracy  of  the  current  cannon/rocket 
systems  and  the  developmental  long  range  weapon  systems.  A  short  suspense 
for  supporting  the  actual  firings  of  an  engineering  development  weapon  system 
led  to  the  design  and  implementation  of  the  Time  Space  Weighted  (TSW)  CAAM. 
This  proposal  is  described  as  a  met  message  manager,  "nowcasting"  technique 
(Blanco et al., 1993). Based on centralizing all available met data, this
objective  analysis  algorithm  automatically  tailors  a  best  met  message  for  a 
particular  user.  Through  a  peer  review  including  the  Army  designer,  the 
weapon development contractors, and the Army Research Laboratory (ARL), the
Project  Office  selected  the  TSW  algorithm  from  available  proposals.  The 
selected  technology  was  portable  and  required  a  PC  environment.  This 




methodology  derives  the  expected  met  variability  due  to  time  staleness  and 
space  separation  of  the  collected  data.  It  follows  the  same  science  applied 
by the U.S. Army Materiel System Analysis Activity (AMSAA) in their
development of the Cannon Artillery Delivery Accuracy Model (CADAM), which has
been used to define met accuracy requirements in the development of the new
weapon systems (Reichelderfer and Barker, 1993). The TSW assigns weights to the
available field data and applies functional relationships such that time
staleness plays a more important role than space separation in computing the
best met to be disseminated.
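
The weighting idea can be illustrated with the minimal sketch below. The exponential form and the decay scales used here are assumptions chosen only for illustration; the actual TSW functional relationships are those described in Blanco et al. (1993).

    import math

    TIME_SCALE_H   = 2.0    # assumed staleness scale (h)
    SPACE_SCALE_KM = 40.0   # assumed separation scale (km), deliberately weaker

    def tsw_weight(staleness_h, separation_km):
        """Weight for one available met message relative to the point of
        application; staleness is given the dominant role."""
        return math.exp(-staleness_h / TIME_SCALE_H) * math.exp(-separation_km / SPACE_SCALE_KM)

    def best_met(messages):
        """Weighted average of a single met variable over all available
        messages, given as (value, staleness_h, separation_km) tuples."""
        weights = [tsw_weight(s, d) for _, s, d in messages]
        return sum(w * v for w, (v, _, _) in zip(weights, messages)) / sum(weights)

    msgs = [(6.2, 0.5, 55.0),   # fresh but distant
            (4.8, 4.0, 10.0),   # close but four hours stale
            (5.5, 2.0, 25.0)]
    print(f"tailored value: {best_met(msgs):.2f}")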

The other phase, a longer term proposal, involves a time-dependent,
three-dimensional Higher Order Turbulence Model for Atmospheric Circulations
(HOTMAC) CAAM. Because of the complexity of this model, the required platform
consists of an HP 9000 Series 750 computer. The HOTMAC CAAM computer runs a
complex suite of software that will manage all tactical communications and
data  sharing.  HOTMAC  is  a  computer  code  that  forecasts  wind,  virtual 
temperature,  and  pressure  over  complex  surface  conditions.  This  model  is 
based  on  a  set  of  second-moment  turbulence  equations  and  can  be  used  under 
quite  general  conditions  of  flow  and  thermal  stratification.  Effects  of 
short  and  long  wave  solar  radiation,  tall  tree  canopies,  and  topography  are 
included  in  the  model.  The  surface  temperatures  are  computed  from  a  heat 
conduction  equation  for  the  soil  and  a  heat  energy  balance  equation  at  the 
surface.  The  model  assumes  hydrostatic  equilibrium  and  uses  the  Boussinesq 
approximation.  A  terrain-following  vertical  coordinate  system  is  used  in 
order  to  increase  the  accuracy  in  the  treatment  of  surface  boundary 
conditions.  Vegetation  plays  an  active  role  in  the  apportionment  of 
available  heat  energy  between  convective  (sensible  and  latent)  and  conductive 
(into  the  soil)  components. 

Based  on  the  U.S.  Army  Field  Artillery  School  requirements  and  funding 
from  the  Project  Manager,  Electronic  Warfare,  Reconnaissance,  Surveillance, 
Target  Acquisition,  ARL  defined  and  monitored  the  development  of  two  research 
proto-type  systems  that  allow  significant  meteorological  (met)  accuracy 
improvements  for  the  field  artillery.  These  systems  provide  state-of-the-art 
software  and  hardware  developments  that  allow  automated  field  data 
integration,  meteorological  modeling,  and  dissemination  techniques.  The  Fire 
Control  Centers  (FCC)  are  automatically  refreshed  with  met  data  that  can 
enhance the first round hit capability for predicted fire. No longer will
the  FCC  be  delayed  in  waiting  for  final  met  adjustments  because  CAAM  will 
automatically  refresh  met  messages  for  the  particular  user.  The  TSW  CAAM  was 
developed in-house (Vidal, 1994); and the HOTMAC CAAM was developed by
the  Physical  Science  Laboratory  under  contract  with  the  ARL,  Battlefield 
Environment Directorate (Spalding et al., 1993).

This  report  defines  a  methodology  that  demonstrates  the  worth  of  the 
proposed  CAAM  models.  Available  met  data  from  White  Sands  Missile  Range,  New 
Mexico,  has  been  utilized  to  test  and  evaluate  CAAM  performance.  To 
qualitatively  demonstrate  the  long  term  capability,  data  from  the  Target  Area 
Meteorology  Data  Experiment  (TAMDE)  conducted  during  July  -  September  1992, 
is used to derive a spatial analysis for a 60 by 220 km area. The CAAM
forecasting  performance  capability  is  also  evaluated  with  this  data  set.  The 
other  set  of  data  collected  during  the  Proto-type  Artillery  Sub-System  (PASS) 
Field  Experiment  conducted  within  the  same  area  during  November  and  December 
1974  is  used  to  quantitatively  reveal  the  significant  improvement  over  the 
current  doctrinal  method  of  adjusting  artillery  fire  for  met  variations. 
Simulated 155mm rocket-assisted round impact displacements are tabulated and




analyzed  results  are  graphically  presented  to  demonstrate  the  significant 
improvement  afforded  by  each  proposal.  This  improvement  enhances  the 
predicted  fire  accuracy  such  that  the  new  capability  approaches  the  accuracy 
afforded  by  registration/transfer  fire  techniques.  Both  proto-type  systems 
are  automatic  and  all  modeling  input  and  output  are  transparent  to  the 
operator  except  for  met  data  editing  and  final  recommendation  to  disseminate 
the  best  met  messages.  The  systems  were  designed  so  that  a  high  school 
graduate  can  effectively  operate  and  efficiently  interact  with  the  FCC. 


2.  STATEMENT  OF  PROBLEM 

All  effective  artillery  fire  includes  meteorological  (MET)  aiming 
adjustments  to  compensate  for  the  variations  of  atmospheric  wind,  temperature 
and  density.  Many  times  the  current  doctrine  of  utilizing  data  from  a 
dedicated  met  station  is  not  representative  of  the  actual  met  effects 
experienced  by  unguided  projectiles.  This  has  become  most  noticeable  for 
extended  range  artillery.  The  classical  mechanics  for  predicting  unguided 
projectile  trajectories  are  well  known  and  automated  at  artillery  FCC.  Under 
standard  conditions,  this  simulation  science  is  assumed  to  be  exact.  Since 
the atmospheric conditions are rarely standard, the U.S. Army Field Artillery
deploys  met  teams  to  measure  atmospheric  conditions  within  the  battle  area. 
These teams are not co-located with the artillery fire systems, and the
balloon-borne sensor may drift away from or towards the point of application
depending  on  the  general  wind  flow  within  the  battle  area. 

MET  data  time  staleness  must  be  significantly  reduced  if  the  artillery 
commander  is  to  maintain  effective  predictive  fire  for  future  long-range 
targets.  CAAM  provides  the  field  artillery  with  inexpensive  techniques  for 
automating  representative  met  corrections  by  retrieving,  analyzing,  and 
disseminating  best  met  data.  All  battlefield  MET  messages  are  received 
through  existing  tactical  field  equipment.  These  messages  are  cataloged  with 
respect  to  time  staleness  and  space  separation  from  the  point  of  application. 
The  CAAM  design  allows  the  artillery  commander  to  use  tailored  MET  messages 
computed  by  an  objective  analysis  or  an  advanced  physics  model  using  recent 
MET  data  input  rather  than  the  normally  stale  dedicated  station  message. 
This  derived  message  enhances  the  first  round  hit  probability  for  current  and 
future  artillery  fire  systems.  Aiming  adjustments  will  accurately  deliver 
carrier  projectiles  and  compensate  for  met  effects  on  target  area  parachute 
delivered sub-munitions, scan and search patterns, chemical bursts, and wind
gliding  warheads.  CAAM  provides  a  proposed  artillery  MET  message  that  can 
significantly  improve  predicted  artillery  fire.  With  the  CAAM  the  field 
commander  can  review  simulated  results  revealing  his  expected  artillery 
accuracy  before  his  mission  engagement. 


3.  EVALUATION  METHODOLOGY 

The  two  data  bases  utilized  in  the  comparison  evaluation  are  the 
following: the 1992 TAMDE (Grace, 1993) and the 1974 PASS (Blanco and Traylor,
1976).  The  tactical  scenario  addressed  was  a  battle  area  covering  60  by  220 
km.  The  emphasis  was  placed  on  a  60  by  40  km  area  corresponding  to  a  more 
representative  application  of  cannon/rocket  artillery.  Both  data  base 
experiments were conducted to help define meteorological effects on unguided




projectiles.  Temporal  and  spatial  variability  of  atmospheric  conditions  were 
the  focus  in  these  programs.  The  data  bases  may  not  be  representative  of 
other climates and regions, but their uniqueness is that the sets contain
simultaneous upper air soundings from as many as nine stations, with artillery
computer messages simultaneously collected at 2-hour intervals over periods of
as long as ten hours, with the start and ending times varying with each
day.  The  TAMDE  data  base  is  used  to  qualitatively  demonstrate  the  analysis 
over  a  60  by  220  km  area  that  includes  complex  terrain.  It  is  also  used  to 
demonstrate  the  forecasting  capabilities. 

Paired (measure/estimate) statistics are used in quantitatively comparing
the  accuracy  and  confidence  limits  in  evaluating  the  worth  of  proposed  CAAM 
solutions  for  improving  the  artillery  accuracy.  The  emphasis  is  placed  on 
the  artillery  miss  instead  of  the  actual  met  parameter.  CAAM  does  not  have 
to  exactly  predict  the  weather  conditions,  but  it  is  designed  to  accurately 
predict  artillery  fire  accuracy.  The  projectile's  weight,  velocity,  and 
flight  time  determine  the  met  effect  it  experiences  along  the  trajectory. 
For example, the fine-scale weather conditions that affect smoke particles have
minor effects in aiming a 95-lb artillery shell. Note, however, that the
met is a major contributor in the total artillery error budget, and that the
gross 0.2- to 2.0-km averages are used in adjusting artillery fire for met
variability. 

Since actual artillery firings have not been completed with the proposed
CAAM solutions, expected accuracies are simulated using a demonstrator
Battery Computer System (BCS) fire control. For example, a measured met
message and a 27 km target range firing problem are input to the BCS, and
the aiming solution is then assumed to represent the "truth" impact.
Following a similar procedure using the derived aiming angles and the
estimated met message, a new impact is computed. If the estimated and
measured met messages are the same, then the computed impacts should be
identical. A bad estimated met message should then produce a large impact
difference from the one derived by using the measured met message. The best
solution is identified when the paired accuracy difference is equal to zero,
or the difference is well within the lethal radius of the delivered warhead.
Note that the individual difference is derived from the comparison between
computed impacts using the estimated (nowcasted TSW or forecasted
HOTMAC) and the measured met messages.
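
The paired-impact bookkeeping can be illustrated as follows. The BCS itself is not reproduced here; the impact coordinates and the 50-m lethal radius in the example are placeholders, not fielded values.

    import math

    def impact_delta(truth_impact, estimate_impact):
        """Range and cross deltas (m) between the impact computed with the
        measured ("truth") met message and the impact computed with an
        estimated message, each given as (range_m, cross_m)."""
        return (estimate_impact[0] - truth_impact[0],
                estimate_impact[1] - truth_impact[1])

    def within_lethal_radius(d_range, d_cross, lethal_radius_m=50.0):
        """Crude acceptance test: is the paired difference inside the
        (placeholder) lethal radius of the delivered warhead?"""
        return math.hypot(d_range, d_cross) <= lethal_radius_m

    # Illustrative simulated impacts for one 27 km fire mission.
    truth   = (27000.0, 0.0)     # aiming solution from the measured met message
    nowcast = (27020.0, -32.0)   # aiming solution re-run with an estimated message
    dr, dc = impact_delta(truth, nowcast)
    print(dr, dc, within_lethal_radius(dr, dc))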

A  day  from  each  of  the  met  data  base  experiments  is  selected  to  describe 
the worth of the proposed CAAM solutions. All data days will be analyzed and
results  will  be  documented  in  the  final  report.  The  purpose  of  this  report 
is to present preliminary results and provide the status for the on-going
applied  research.  The  most  variable  weather  day  from  each  experiment  was 
selected  to  reveal  a  maximum  improvement.  September  2,  1992  is  selected  to 
present  the  HOTMAC  spatial  analysis  and  forecasting  capability  over  a  60  by 
220  km,  complex  terrain,  battlefield  area.  December  7,  1974  is  used  to 

present  the  expected  improvement  afforded  by  the  HOTMAC  and  TSW  CAAM 
solutions . 

With  this  constraint  the  sample  size  is  limited  to  analysis  of  computer 
met messages simultaneously collected at 2-hour intervals over a period of ten
hours.  Table  1  presents  the  met  message  pairing  used  in  deriving  time 
staleness  results  for  the  current  doctrine  of  adjusting  artillery  fire  to 
compensate  for  met  variability.  Table  2  presents  the  met  message  pairing 
used  in  deriving  the  TSW  estimates.  For  example,  if  station  1  is  considered 
the  truth  station,  then  the  following  staggered  releases  define  the  expected 




time  staleness:  to  estimate  Stn  1  at  0715  use  Stn  1  at  0515  and  Stn  2  at 
0715;  and  to  estimate  Stn  1  at  0915  use  the  above  releases  plus  Stn  3  at 
0915.  TSW  always  uses  a  fresh  message  except  for  the  case  of  simulating  six 
hour  staleness  referenced  to  Stn  1.  In  reality  at  the  sixth  hour  the 
commander  using  Stn  1  would  have  realtime  data  because  the  release  cycle  is 
repeated  maintaining  a  new  message  every  2  hours  among  the  three  available 
stations.  The  same  cycle  is  repeated  to  derive  other  replicates  defining  the 
sample  size  used  in  this  preliminary  analysis.  One  can  start  at  0715  and 
pair  messages  to  increase  the  replicate  size.  All  HOTMAC  estimates  are 
derived  from  the  first,  0515,  met  message.  The  hourly  forecasts  are 
dependent  on  only  one  message.  The  same  cycle  is  repeated  to  derive  other 
replicates  defining  the  sample  size;  for  example  use  the  0715  to  forecast, 
then use the 0915 to forecast, etc.


Table 1. Pairing met messages for deriving time staleness sample size.

Message                        Staleness (h)
(local time)              2         4         6
0515                      -         -         -
0715                      1         -         -
0915                      1         1         -
1115                      1         1         1
1315                      1         1         1
1515                      1         1         1
Total sample              5         4         3

Table 2. Pairing met messages for deriving TSW estimates.

Local time        Staggered releases             Time staleness
(Stn 1)        Stn 1     Stn 2     Stn 3       2 h     4 h     6 h
0515             X
0715                       X                     1
0915                                 X                   1
1115                                                             1

0715             X
0915                       X                     1
1115                                 X                   1
1315                                                             1

0915             X
1115                       X                     1
1315                                 X                   1
1515                                                             1

Total sample                                     3       3       3




4.  PERFORMANCE  CHARACTERISTICS 


The  first  performance  check  is  on  HOTMAC  CAAM's  ability  to  provide  a 
qualitative  spatial  analysis  for  an  area  60  by  220  km  using  only  one 
initializing  met  message.  Frame  1  on  figure  1  presents  the  contour  map  for 
the  desert  and  mountainous  region.  The  reporting  stations  are  identified  as 
circles  within  the  simulated  battle  area.  Notice  that  the  open  circle 
located  at  the  lower  left  corner  at  about  the  (6,8)  coordinates  indicates  the 
station  used  to  start  HOTMAC  forecasting  hourly  computer  met  messages  for  the 
entire area. The darkened circles represent the location of the other stations
used  to  evaluate  the  accuracy  of  the  computed  estimates  for  each  of  the  met 
parameters affecting the artillery accuracy. The index represents a
normalized 4-km gridded Universal Transverse Mercator unit internal to CAAM.
All terrain features are retrieved at this interval; however, CAAM is
designed to compute the met message on an 8-km grid. Each complete square in
the  map  represents  40  by  40  km  and  the  entire  area  contains  more  than  the  60 
by  220  km  requirement.  In  this  case  the  required  area  is  oriented  due  north 
but  CAAM  has  the  capability  to  rotate  the  desired  area  in  any  direction. 

Using the September 2, 1992 (fifth day of TAMDE) data, frame 2 displays
the  wind  vector  plot  for  the  terrain  following  1227  m  level.  Note  the 
speeding  up  of  the  wind  at  the  location  of  the  elevated  terrain.  The  wind 
starts  as  westerly  with  changing  direction  as  it  travels  through  the  mountain 
canyons.  The  current  doctrine  assumes  that  the  met  message  collected  at  the 
open  circle  location  is  representative  of  the  entire  area.  One  can  realize 
the  CAAM  improvement  is  already  significant.  Frame  3  reveals  a  more 
representative  description  because  there  are  three  open  circles  representing 
three  met  messages  initializing  the  HOTMAC  CAAM.  In  this  case  the  two 
darkened  circles  represent  available  data  for  comparison  with  the  generated 
estimates  at  the  corresponding  locations.  Examining  the  station  at  about 
coordinate (14,32), one can see that the one-station run overestimates the
wind  speed  and  predicts  the  wrong  direction.  The  mountain  provides  a 
stronger  sheltering  than  the  model  predicted.  Frame  4  represents  the  most 
realistic  description  of  the  wind.  Here  all  stations  are  used  to  initialize 
CAAM  and  simulate  the  effect  of  dropsondes  in  the  target  area.  This  case 
study  reveals  that  space  interpolation  results  contain  the  best  confidence  in 
the  estimated  results.  An  application  for  knowing  the  target  area  winds  is 
the  management  of  air  delivery  of  supplies  or  personnel  to  a  particular 
sector  within  the  battle  area. 

The other qualitative performance check on CAAM is how well it can
forecast.  Figure  2  presents  the  terrain  contour,  0600,  and  0900  wind  vector 
plots.  Generally,  CAAM  estimates  a  persistence  forecast  and  for  this  case 
the  wind  field  was  indeed  following  this  same  pattern.  The  pattern  is  much 
smoother  at  0900  than  at  the  initialization  time  at  0600.  In  the  following 
section  one  can  quantitatively  see  that  the  forecasting  capability  is 
significantly  better  than  the  current  doctrine  of  using  stale  data  that  may 
be  as  much  as  6  hours  old. 


5.  EXPECTED  ARTILLERY  IMPROVEMENTS 

Using  the  December  7,  1974  (julian  day  341)  data  from  the  other  met  data 
base  one  can  quantify  the  improvement  afforded  by  the  two  proposed  CAAM 
solutions.  Figure  3  presents  the  station  locations  on  a  20  km  grid.  Using 




Figure 1. Terrain contour and wind vectors spatial analysis.




the  Table  1  scenario,  the  stations  are  identified  as  follows:  Stn  1  is  tsx; 
Stn  2  is  oro;  and  Stn  3  is  meg.  The  Table  2  staggered  releases  are  followed 
to  predict  the  actual  met  messages  measured  at  tsx.  To  compute  the  current 
met  accuracy  one  pairs  the  appropriate  met  messages  outlined  in  Table  1  and 
inputs into the BCS to compute a firing angle solution for a 155mm
rocket-assisted round fired at a 27 km range target. After completing the
differences  from  the  simulated  impacts  as  described  in  the  above  section, 
Table 3 presents the comparison results.


Table 3. The tsx delta BCS output for jday 341.

current doctrine tsx analysis

stale(h)     pairs            Range(m)    Cross(m)    QE(mil)
   2       0715   0515           116         62        547.2
   4       0915   0515            51         15        547.2
   6       1115   0515           228        112        547.2
   2       0915   0715           -59        -44        541.9
   4       1115   0715           111         50        541.9
   6       1315   0715           333        133        541.9
   2       1115   0915           174         97        544.7
   4       1315   0915           397        181        544.7
   6       1515   0915           452        254        544.7
   2       1315   1115           220         81        537.0
   4       1515   1115           274        150        537.0
   2       1515   1315            55         65        527.6

HOTMAC tsx0515 forecasting tsx

   2       0715   0515           -68        -40        541.9
   4       0915   0515           -35          8        544.7
   6       1115   0515          -210        -83        537.0

TSW_tom nowcasting tsx

   2       0715   0715            20        -32        541.9
   4       0915   0915            93         10        544.7
   6       1115   0915           -92        -79        537.0

Note that the deltas for the two hour staleness vary with the time of
the  day.  For  the  range  component  the  smallest  variabilities  are  listed  as 
occurring  during  the  0915-0715  and  1515-1315  periods.  The  -59m  and  the  55m 
represent  the  expected  error  for  firing  at  0915  and  1515  with  met  aiming 
adjustments  from  0715  and  1315.  Another  observation  is  that  the  results  for 
one case reveal that the four hour stale data (0915-0515) provide more accurate
results than the two hour stale data (0715-0515). This is the behavior of
the  weather;  it  is  unpredictable  and  never  standard.  As  we  group  the  results 




in  a  small  sample  and  derive  the  mean  and  standard  deviation,  the  results  can 
be presented in another arrangement. The root-mean-square values are fitted
to a function of the time staleness raised to the one-half power. Figure 4
presents  the  statistical  results  and  demonstrates  the  accuracy  of  the  fit 
with  the  data  located  on  the  derived  curve.  The  solid  line  curve  represents 
the accuracy of the current doctrine. If one aims with two hour stale data,
one can expect about a 150-m miss in range and about a 75-m miss in the cross component.
The  other  two  curves  represent  the  expected  accuracy  afforded  by  the  two 
proposals:  HOTMAC  using  one  met  message  at  the  start  of  the  day,  and  TSW 
using  all  messages  available  from  the  staggered  balloon  releases.  The 
scenario  presented  in  Table  2  is  used  in  deriving  the  Figure  4  results.  TSW 
is  always  using  current  data  collected  at  the  other  stations  except  for  the 
six  hour  staleness. 
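
The one-parameter fit can be reproduced with the short sketch below. The rms values are illustrative (only the roughly 150-m, two-hour range miss is quoted in the text), and the least-squares routine is an assumption, not the one actually used in deriving figure 4.

    import numpy as np

    def fit_sqrt_law(staleness_h, rms_miss_m):
        """Least-squares fit of rms = a * sqrt(t); returns the coefficient a."""
        x = np.sqrt(np.asarray(staleness_h, dtype=float))
        y = np.asarray(rms_miss_m, dtype=float)
        return float(np.sum(x * y) / np.sum(x * x))

    # Illustrative current-doctrine range misses (m) at 2, 4, and 6 h staleness.
    t   = [2.0, 4.0, 6.0]
    rms = [150.0, 215.0, 260.0]
    a = fit_sqrt_law(t, rms)
    print([round(float(a * np.sqrt(ti)), 1) for ti in t])   # fitted curve at the sample points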


Table 4. The hms delta BCS output for jday 341.

current doctrine hms analysis

stale(h)     pairs            Range(m)    Cross(m)    QE(mil)
   2       0745   0545            96         23        544.6
   4       0945   0545           133         44        544.6
   6       1145   0545           261         96        544.6
   2       0945   0745            49         22        540.8
   4       1145   0745           174         73        540.8
   6       1345   0745           287        170        540.8
   2       1145   0945           122         50        538.5
   4       1345   0945           232        146        538.5
   6       1545   0945           331        230        538.5
   2       1345   1145           105         96        533.3
   4       1545   1145           208        174        533.3
   2       1545   1345           106         73        529.1

HOTMAC tsx0515 forecasting hms

   2       0745   0515           -88         30        540.8
   4       0945   0515          -150         15        538.5
   6       1145   0515          -269        -34        533.3

TSW_tom nowcasting hms

   2       0745   0715            -7         30        540.8
   4       0945   0915           -42          7        538.5
   6       1145   0915          -162        -40        533.3

Table 4 presents the results for estimating met data at the hms station
from  using  data  from  the  tsx,  oro,  and  meg  stations.  Again  the  HOTMAC  using 
only  one  station  and  TSW  using  the  staggered  release  schedule  reveal  a 
significant  improvement  over  that  expected  from  the  current  method  of  using  a 




dedicated  met  station  that  may  provide  6  hour  stale  data.  Figure  5  presents 
the  graphical  comparison  that  used  historical  met  data  to  estimate  the  met  at 
a 50 km location. Comparing figures 4 and 5 reveals that the range (wind,
temperature,  and  pressure)  variability  at  hms  is  lower  than  that  at  tsx. 
Note  that  the  solid  line  curves  are  derived  from  actual  measurements  at  each 
station  as  defined  in  Table  1.  The  other  curves  represent  how  well  the 
estimates  do  in  predicting  the  actual  measurements.  A  general  conclusion  is 
that  the  two  proposals  do  better  when  estimating  closer  to  where  the 
initializing  data  are  collected.  The  cross  component  comparison  in  Figure  5 
indicates  that  the  HOTMAC  temperature  and  pressure  forecasts  are  not  as 
accurate as those estimated by TSW using the 0715 and 0915 observations.
Perhaps  the  single  station  HOTMAC  can  be  improved  by  using  surface 
observations in order to adjust for large changes in the pressure due to
fronts  moving  in.  TSW  estimates  at  a  distance  greater  than  50  km  reveal  a 
significant  improvement  over  the  current  doctrine  of  using  stale  data  from  a 
dedicated  station. 


6.  SUMMARY 

The  ARL,  Battlefield  Environment  Directorate,  has  completed  the 
development of two prototype CAAM systems. The performance of the CAAM
two-phase approach has been qualitatively characterized for spatial analysis and
forecasting. For a single variable-weather day, quantitative results have
been  tabulated.  Graphical  comparisons  reveal  impressive  results.  The  TSW  is 
already  implemented  in  the  software  of  an  engineering  development  fire 
control  weapon  system.  It  is  also  under  review  and  evaluation  for  inclusion 
in  future  FCC.  This  preliminary  analysis  presents  results  that  quantify  the 
improvement  afforded  by  TSW  under  variable  met  conditions.  The  TSW 
improvement  is  significant  and  the  algorithm  is  portable  and  requires  no 
change  to  the  actual  tactical  procedure  in  adjusting  artillery  fire  for  met 
variability.  There  are  other  days  that  reveal  no  significant  improvement 
because  of  the  homogeneity  of  the  weather;  the  wind  remains  strong  and 
persistent  in  its  direction  through  the  day.  For  very  light  wind  days  the 
expected  improvement  is  also  insignificant.  For  example  under  these 
conditions  the  time  staleness  takes  a  minor  role  because  the  four  hour  old 
message  continues  to  be  a  good  estimate  of  the  present  weather  conditions. 
The  final  report  will  document  all  cases  from  available  TAMDE  and  PASS  data 
and  present  the  general  capability  of  TSW.  Based  on  the  on-going  research 
ARL  is  already  compiling  a  list  of  improvements  for  TSW. 

The  HOTMAC  using  a  single  met  message  has  been  demonstrated  to 
significantly  improve  upon  the  current  method  of  adjusting  artillery  fire. 
In  the  same  variable  met  day  that  TSW  was  evaluated,  HOTMAC  revealed 
impressive results. The added advantage is that the staggered release met
message  schedule  is  not  required  in  enhancing  the  first  round  hit  capability. 
This  means  that  the  artillery  commander  can  continue  with  his  dedicated  met 
station  and  update  or  forecast  his  met  message  until  he  receives  fresh  met 
data.  However,  this  is  not  the  case  for  the  spatial  analysis  for  the  60  by 
220 km area. One needs more initialization observations in order to obtain
accurate  results.  If  the  application  is  over  a  time  when  no  fronts  are 
passing  then  the  single  message  initialization  may  yield  acceptable  results. 
Many  areas  of  improvement  have  been  identified  and  the  final  report  will 




document  the  status  and  future  plans  for  HOTMAC.  This  approach  in  the  CAAM 
research  has  a  longer  implementation  schedule. 

Actual artillery firings using these two proposals are being planned.
The  quick  implementation  of  TSW  was  accepted  by  the  weapon  system  developer 
in  order  to  show  the  required  accuracy  during  the  initial  system  technical 
demonstration.  The  on-going  research  findings  can  easily  be  incorporated  by 
installing  the  new  revision  into  the  weapon's  fire  control  subroutine.  The 
CAAM  specifications,  requirements,  and  design  were  established  to  allow 
portable  revision  control.  Because  of  the  modular  development,  the  HOTMAC 
can  be  the  final  revision  of  CAAM  by  replacing  TSW,  the  intermediate  CAAM 
solution. 


REFERENCES 

1.  Blanco,  Abel,  Edward  Vidal,  and  Sean  D'Arcy,  1993:  "Time  and  space 
weighted computer assisted artillery message," Proceedings of the 1993
Battlefield  Atmospherics  Conference,  Army  Research  Lab,  WSMR,  NM. 

2.  Blanco, A. J., and L. E. Traylor, 1976: "Artillery meteorological
analysis of project PASS," ECOM-5804, U.S. Army Atmospheric Sciences
Laboratory, WSMR, NM.

3.  Grace,  John,  1993:  "TAMDE  -  The  variability  of  weather  over  an  army 
division size area," Proceedings of the 1993 Battlefield Atmospherics
Conference,  Army  Research  Lab,  WSMR,  NM. 

4.  Reichelderfer, Magan, and Craig Barker, 1993: "155-mm howitzer
accuracy and effectiveness analysis," Note DN-G-32, U.S. Army
Materiel  System  Analysis  Activity,  Aberdeen  Proving  Ground,  MD. 

5.  Spalding, John B., Natalie G. Kellner, and Robert S. Bonner, 1993:
"Computer-assisted artillery meteorology system design," Proceedings
of  the  1993  Battlefield  Atmospherics  Conference,  Army  Research  Lab, 
WSMR,  NM. 

6.  Vidal,  Edward,  1994:  Personal  communication.  Army  Research 
Laboratory,  Battlefield  Environment  Directorate,  WSMR,  NM. 

7.  Yamada, T., and S. Bunker, 1989: "A Numerical Model Study of
Nocturnal Drainage Flows with Strong Wind and Temperature Gradients,"
J. Appl. Meteorol., 28:545-554.




EVALUATION  OF  THE  BATTLESCALE  FORECAST  MODEL  (BFM) 


T.  Henmi  and  M.  E.  Lee 
U.S.  Army  Research  Laboratory 
White  Sands  Missile  Range,  NM  88002-5501,  USA 

T.  J.  Smith 

Operating  Location  N,  Air  Weather  Service 
White  Sands  Missile  Range,  NM  88002-5501 ,  USA 


ABSTRACT 

The  performance  of  the  Battlescale  Forecast  Model  (BFM),  developed  at  the 
U.S.  Army  Research  Laboratory  (ARL)  to  produce  an  operational  short-range 
( <  12  h)  mesoscale  forecast,  is  evaluated.  The  model  test  domain  centers  on 
the  White  Sands  Missile  Range  (WSMR),  NM  where  observation  data  from 
Surface  Automated  Meteorological  System  (SAMS)  10-m  towers  and 
Atmospheric  Profiler  Research  Facility  (APRF)  profilers  are  available.  Three 
different  initialization  approaches  are  examined  to  identify  optimal  model 
initialization methods.  Statistical parameters such as the mean residual and
standard deviation of residual are calculated for hourly forecast fields of
surface wind and temperature by comparing twenty-five 12-h forecast calculations
with the corresponding observations.  Results indicate that incorporation
of  surface  wind  observation  data  into  the  initial  field  is  essential  to  produce 
good,  short-range  BFM  forecasts. 

1.  INTRODUCTION 

The  Army  Research  Laboratory  (ARL)  developed  the  Battlescale  Forecast  Model  (BFM)  to 
produce an operational, short-range ( < 12 h) forecast over an area of < 500 x 500 km.  The
BFM  will  become  a  major  component  of  the  Integrated  Meteorological  System  (IMETS) 
Block  2  software.  The  BFM  is  composed  of  two  major  programs.  A  program  called  3DO^ 
creates  initial  and  boundary  values  for  the  forecast  model  by  processing  selected  U.S.  Air 
Force  Global  Spectral  Model  (GSM)  forecast  field  output  data,  and/or  upper-air  sounding  and 
surface  observation  data,  if  available.  The  BFM  was  adapted  from  a  mesoscale 
meteorological  model  called  the  Higher  Order  Turbulence  Model  for  Atmospheric  Circulation 
(HOTMAC)  (Yamada,  Bunker  1989).  HOTMAC  has  been  used  extensively  at  ARL 
(Henmi  et  al.  1987;  Henmi  1990;  1992)  to  simulate  the  evolution  of  locally  forced 
circulations caused by surface heating and cooling over meso-β and meso-γ scale areas.  HOTMAC
is  numerically  stable  and  easy  to  use,  and  thus  suitable  for  operational  use.  Details  of  the 
BFM  are  described  in  Henmi  et  al.  (1993;  1994). 




In  this  study,  the  forecasting  capability  of  the  BFM  is  evaluated  by  comparing  forecast  results 
with  surface  and  upper-air  data  observed  by  the  White  Sands  Missile  Range  (WSMR)  Surface 
Automated  Meteorological  System  (SAMS)  and  Atmospheric  Profiler  Research  Facility 
(APRF) profilers, respectively.  To find an appropriate method to initialize the model, three
initialization  methods  were  selected,  and  25  comparisons  between  12-h  forecasts  and 
observations  were  made  at  hourly  intervals.  The  purpose  of  this  paper  is  to  describe  the  three 
initialization  methods  and  the  method  evaluation  results. 

2.  MODEL  DOMAIN  AND  OBSERVED  DATA 

The  study  area  centered  on  WSMR,  NM.  Figure  1  shows  the  terrain  elevation  distribution 
of the selected BFM domain, covering a 250 x 250 km area.  The latitude and longitude of
the domain center are 33.20° N and 106.41° W, respectively.  Meteorological variables are
calculated  at  51  x  51  horizontal  grid  points  x  16  vertical  grid  points  with  a  unit  horizontal 
grid  distance  of  5  km.  The  upper  atmosphere  model  boundary  is  7000  m  above  the  highest 
surface  terrain  elevation  in  the  domain.  The  locations  of  selected  WSMR  SAMS  sites  are 
marked  by  Arabic  numbers  in  figure  1. 


Figure  1.  Selected  WSMR  BFM  model  domain  (250  x 
250  km).  Contour  lines  are  drawn  every  200  m.  The 
locations  of  SAMS  sites  are  marked  by  Arabic  numbers. 


GSM output is reported on grid points spaced 381 km apart on mandatory pressure surfaces.
A  three-dimensional  objective  analysis  of  GSM  data  is  made  over  an  area  covering 
800  X  800  km  centered  on  the  BFM  domain. 




Twelve-hour forecast computations producing hourly outputs were made for 25 cases selected
from the months of February and March 1994.  Hourly averaged values of surface wind and
temperature  were  used  for  comparison. 

3.  INITIALIZATION  METHODS 

Based on the case study reported in Henmi et al. (1994), the BFM was initialized by three
different methods, described in sections 3.1 through 3.3.  Additionally, two computed data
sets, described in sections 3.4 and 3.5, are compared with observations.


3.1  Initialization Using GSM

GSM uses a normalized pressure (σ = p/p_s, where p_s is the surface pressure) vertical coordinate.  GSM analysis and 12-hour
forecast  values  of  horizontal  wind  components,  temperature,  dew-point  depression,  and 
geopotential  height  on  mandatory  pressure  levels  were  used  to  produce  three-dimensional 
fields  for  BFM  initialization  and  time-dependent  boundary  values. 

HOTMAC uses a z* vertical coordinate, defined in the following manner:

    z* = H̄ (z - z_g) / (H - z_g)                                        (1)

where

    z*  = the transformed vertical coordinate
    z   = the Cartesian vertical coordinate
    z_g = the ground elevation above mean sea level (MSL)
    H̄   = the material surface top of the model
    H   = the corresponding height in the Cartesian coordinates.

For simplicity, H is defined as

    H = H̄ + z_g,max                                                      (2)

where z_g,max is the maximum value of z_g.




Because  different  vertical  coordinates  are  used  in  GSM  and  HOTMAC,  the  following  two 
steps  are  needed: 

(1)  Horizontal interpolation of wind components (u,v), temperature, mixing ratio, and
geopotential height from GSM grid points to BFM grid points on constant pressure surfaces.
Barnes' method (1964) is used for the horizontal interpolation.

(2)  Vertical interpolation of the variables from BFM constant pressure surfaces to z* surfaces
at BFM grid points using a linear interpolation method.  (A sketch of these two interpolation
steps is given below.)
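
As an illustration only (this is not the operational 3-D objective analysis code), the following
Python sketch shows the shape of these two steps: a single Barnes-type pass with Gaussian
distance weights, followed by linear vertical interpolation.  The length scale kappa_km, the
single-pass treatment, and all array names are assumptions made for this sketch.

    import numpy as np

    def barnes_pass(src_x, src_y, src_val, tgt_x, tgt_y, kappa_km=150.0):
        """One Barnes-type weighting pass: each BFM grid point receives a
        Gaussian-distance-weighted mean of the GSM values (illustrative length scale)."""
        out = np.empty(len(tgt_x))
        for i, (x, y) in enumerate(zip(tgt_x, tgt_y)):
            r2 = (src_x - x) ** 2 + (src_y - y) ** 2
            w = np.exp(-r2 / kappa_km ** 2)
            out[i] = np.sum(w * src_val) / np.sum(w)
        return out

    def pressure_to_zstar(p_levels_hpa, values, p_at_zstar_hpa):
        """Linear vertical interpolation from mandatory pressure levels to the pressures
        of the BFM z* levels; arrays are ordered from the lowest level upward
        (decreasing pressure), so they are reversed for np.interp."""
        return np.interp(p_at_zstar_hpa[::-1], p_levels_hpa[::-1],
                         values[::-1])[::-1]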

GSM  synoptic  scale  variations  of  meteorological  variables  are  incorporated  into  the  model 
equations  by  nudging  (Hoke,  Anthes  1976). 

For  12-h  forecasting,  both  the  current  analysis  and  the  12-h  forecast  fields  from  the  GSM  are 
analyzed  using  the  above  method,  and  hourly  data  are  generated  by  a  linear  interpolation 
between  the  two  time  periods.  The  first  hourly  analysis  field  data  are  assimilated  by  using 
the  nudging  method  for  the  hour  preceding  the  initiation  of  forecast  computation.  The  next 
hourly  data  are  assimilated  into  the  forecast  1  h  into  the  forecast  period;  the  process  is 
repeated  hourly  over  the  12-h  forecast  period. 
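
As a rough illustration (not the BFM source code), the two ingredients just described, linear
time interpolation of the GSM driving fields and the Newtonian relaxation ("nudging") update of
Hoke and Anthes (1976), reduce to a few lines; the relaxation coefficient g_per_s and the time
step below are assumed values.

    import numpy as np

    def hourly_driving_field(analysis_t0, forecast_t12, hour):
        """Hourly field obtained by linear interpolation in time between the
        current GSM analysis and the 12-h GSM forecast (0 <= hour <= 12)."""
        w = hour / 12.0
        return (1.0 - w) * analysis_t0 + w * forecast_t12

    def nudge(model_field, driving_field, dt_s=60.0, g_per_s=1.0e-3):
        """One Newtonian relaxation step of the model field toward the driving field."""
        return model_field + dt_s * g_per_s * (driving_field - model_field)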

Out of 16 vertical layers, nudging was applied only in the 9 upper layers (corresponding to
z* heights greater than 151 m).

3.2  Initialization With GSM and Mean Surface Wind Direction and Speed

Wind directions derived from GSM data in layers near the surface were frequently and
significantly different from those observed.  Thus, to improve the agreement between
computed and observed wind vectors in short-range forecasts, mean surface wind data is
incorporated  into  initial  fields.  From  all  the  selected  SAMS  data  obtained  over  WSMR,  mean 
surface  wind  vector  components  were  calculated  at  the  initial  time  of  forecast,  and  logarithmic 
wind  profiles  were  assumed  from  the  surface  (z*  =  10  m)  to  the  seventh  layer  (z*  =  151  m). 
Linear profiles were then interpolated between the 7th and 10th model layers, above which
only  GSM  data  is  used  to  initialize  BFM  grid  points. 
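
A minimal sketch of this profile blending follows, with assumed layer indices, roughness
length, and placeholder arrays for the model z* heights and the GSM-derived profile.

    import numpy as np

    def blended_wind_profile(u_mean_10m, u_gsm, z_star, z0=0.1):
        """Logarithmic profile of the mean 10-m SAMS wind up to the 7th layer, a linear
        blend from the 7th to the 10th layer, and GSM values above.  z_star is a NumPy
        array of z* heights (m); z0 is an assumed roughness length (m)."""
        u = np.array(u_gsm, dtype=float)
        i7, i10 = 6, 9                         # 7th and 10th layers (0-based, assumed)
        # logarithmic profile from the surface layer (10 m) up to the 7th layer (151 m)
        u[:i7 + 1] = u_mean_10m * np.log(z_star[:i7 + 1] / z0) / np.log(10.0 / z0)
        # linear transition from the 7th-layer value to the GSM value at the 10th layer
        u[i7:i10 + 1] = np.interp(z_star[i7:i10 + 1],
                                  [z_star[i7], z_star[i10]],
                                  [u[i7], u_gsm[i10]])
        return u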

3.3  Nudging  of  Individual  Surface  Wind  Data  at  Initial  Time 


In addition to the mean surface observation process described in section 3.2, individual SAMS
site  wind  observation  data  obtained  at  initialization  times  are  assimilated  into  model 
calculations  at  the  grid  points  adjacent  to  the  SAMS  locations.  The  method  of  surface  wind 
data  assimilation  is  described  in  Henmi  et  al.  (1994). 

3.4  Surface  Data  Nudging  Every  Three  Hours 

In section 3.3, surface wind data is nudged only at the first hour of model computation.  This
method was extended so that SAMS wind data are assimilated into model
calculations every 3 h.  For instance, the data observed at 5, 8, 11, 14, and 17 local standard
time  (LST)  are  nudged  for  1  h  starting  at  4,  7,  10,  13,  and  16  LST. 




3.5  Linear  Interpolation  of  GSM  Data 

For comparison purposes, the three-dimensional GSM data set, created by the method
described in section 3.1 at two time periods, is linearly interpolated in time, and the
resulting data (at hourly intervals) are compared with observations.

4.  STATISTICAL  PARAMETERS 

To  examine  the  differences  in  the  results  using  methods  described  in  sections  3.1  through  3.5, 
the  following  statistical  parameters  are  calculated  hourly  by  using  the  data  from  the  25 
different  cases. 

4.1  Mean  Residual 

The  difference  between  observed  and  forecast  values  of  a  meteorological  parameter  can  be 
written  as 

    F_res = F_obs - F_for                                                (3)

where F represents a meteorological parameter and the subscripts res, obs, and for
denote residual, observation, and forecast, respectively.

A mean residual for 12 forecast hours is defined as

    F̄_res(t) = [1 / (m × n)] Σ_(i=1..m) Σ_(j=1..n) F_res,ij(t)           (4)

where m represents the number of forecast cases, and n represents the number of SAMS data
at forecast time t.


4.2  Standard  Deviation  of  Residual 


The  standard  deviation  of  residual  of  a  meteorological  parameter  is  defined  as 


    σ_res(t) = { [1 / (m × n)] Σ_(i=1..m) Σ_(j=1..n) [F_res,ij(t) - F̄_res(t)]² }^(1/2)    (5)

where σ_res(t) is the standard deviation of residual at forecast time t.

Improved  forecast  calculations  result  in  mean  residuals  converging  to  zero  in  conjunction  with 
smaller  standard  deviations  of  residual.  Perfect  agreement  between  observation  and  forecast 
results  in  zero  values  for  both  parameters. 
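
For a single meteorological parameter, the two statistics above can be accumulated as in the
following sketch; the array layout (case, forecast hour, station) and the use of NaN for
missing SAMS reports are assumptions made for illustration.

    import numpy as np

    def residual_stats(obs, fcst):
        """Mean residual and standard deviation of residual for each forecast hour.

        obs, fcst : arrays of shape (m_cases, n_hours, n_stations); missing data as NaN.
        Returns (mean_res, std_res), each of length n_hours.
        """
        res = obs - fcst                          # F_res = F_obs - F_for
        mean_res = np.nanmean(res, axis=(0, 2))   # average over cases and stations
        std_res = np.nanstd(res, axis=(0, 2))     # spread about the hourly mean
        return mean_res, std_res

    # example with placeholder data: 25 cases, 12 forecast hours, 20 SAMS sites
    obs = np.random.randn(25, 12, 20)
    fcst = obs + 0.5 * np.random.randn(25, 12, 20)
    mean_res, std_res = residual_stats(obs, fcst)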




5.  RESULTS 


In  figures  2  through  4,  the  mean  residuals  (mean  curves)  and  standard  deviations  (upper-  and 
lower-bound  curves)  are  plotted  as  a  function  of  time. 



Figure  2.  Temporal  variations  of  mean  residual  (mean  curves)  and  standard  deviation  (upper- 
and  lower-bound  curves)  for  methods  in  sections  3.1(a)  and  3.2(b).  Upper  plots  represent 
the  surface  x  wind  vector  components,  middle  plots  are  the  y  wind  vector  components,  and 
bottom  plots  are  of  surface  temperature. 


Comparisons  of  figures  2  through  4  reveal  the  following: 

(1)  Differences between figures 2(a) and 4 indicate that the BFM produced significantly
improved forecast fields of wind and temperature (section 3.1) compared to the linear
interpolation of GSM data (section 3.5).  In general, the values of mean residuals and
standard deviations are smaller in figure 2(a) than in figure 4.  The physical scheme of the
model produced better agreement with observation data than simple interpolation of GSM data
in time and space.

(2)  From figures 2(a) and (b), the initialization using the mean wind speed and direction
(section 3.2) produced better forecast fields than the GSM data initialization (section 3.1).
Substantial improvements in both x and y components of wind vectors were obtained.  As can
be seen in figure 2(a), the mean residual values of both wind vector components were
negative.  This means that BFM forecast calculations initialized with GSM data produced




larger wind vector components than observed.  Conversely, the mean residual values in figure
2(b) are much closer to zero, indicating that on average the BFM, using mean wind speed
and direction at model initialization times, produced surface wind vector component
magnitudes similar to those actually observed.  Even temperature mean residuals in
figure 2(a) show larger negative values throughout the 12-h forecast calculation than in
figure 2(b).  Initial temperature fields in the methods in sections 3.1 and 3.2 are identical for
all 25 simulations.  The boundary layer scheme of the BFM produced improved temperature
predictions using the method in section 3.2, compared to the method in section 3.1.  Although
it is not clearly understood, the logarithmic profiles of wind components assumed in the method
in section 3.2 may combine to produce good surface temperature predictions in the
boundary layer.  Further studies are needed to understand this problem.


(3)  Comparison between figures 2(b) and 3(a) indicates that nudging the surface wind vector
components at the forecast initial time (section 3.3) produced better forecast results in wind
fields for a few hours during the early stage of calculation, but during the later stage of
calculation the forecast method in section 3.2 produced superior agreement between predicted
and observed parameters.  This can be inferred from the larger standard deviations in both x and
y wind components in the last several hours of forecast calculation.  Nudging of surface wind
components that are not dynamically balanced with the numerical schemes of the model may
be the reason for the results of the method in section 3.3.  Temperature fields show little
difference between the methods in sections 3.2 and 3.3.




(4)  The  method  in  section  3.4  produced  the  best  agreement  between  predicted  and  observed 
parameters.  In  this  method,  observed  wind  vector  components  were  assimilated  into  model 
calculations  by  nudging  every  3  h.  Figure  3(b)  shows  smaller  standard  deviations  at  3,  6, 
9,  and  12  h  when  the  data  were  nudged  during  the  previous  1  h.  It  should  be  noted  that, 
although  the  nudging  of  dynamically  unbalanced  wind  vectors  is  done  repeatedly,  the 
numerical  scheme  of  the  model  is  stable  enough  to  prevent  numerical  instability. 


Figure 4.  Temporal variations of mean residual and standard deviation for temporal and
spatial interpolation of GSM data (section 3.5).


6.  SUMMARY 

Comparison of forecast results from the method in section 3.1 with the space and time
interpolation of GSM data (section 3.5) clearly shows that the BFM produced substantially improved
forecast fields over a simplistic interpolation of GSM data.  Initialization by
methods  in  sections  3.2  and  3.3  produced  further  improvement  over  the  method  in 
section  3.1,  confirming  that  incorporation  of  observed  data  into  initial  fields  is  important. 

In the present study, all the cases simulated were in February and March 1994.  Forecast
fields of moisture were not compared with observations because the observed data were not
reliable.  In a future study, cases will be simulated for the summer months, for which data
have been archived.




7.  REFERENCES 


Barnes, S. L., 1964.  "A Technique for Maximizing Details in Numerical Weather Map
Analysis."  J. Appl. Meteor., 5:396-409.

Henmi, T., R. E. Dumais, Jr., and T. J. Smith, 1993.  "Operational Short-range Forecast
Model for Battlescale Area."  In Proceedings of 1993 Battlefield Atmospherics Conference,
BED, U.S. Army Research Laboratory, White Sands Missile Range, NM, pp 569-578.

Henmi, T., M. Lee, and T. J. Smith, 1994.  Evaluation Study of Battlescale Forecast Model
(BFM) using WSMR Observation Data, U.S. Army Research Laboratory, White Sands
Missile Range, NM 88002-5501.

Hoke, J. E., and R. A. Anthes, 1976.  "The Initialization of Numerical Models by a
Dynamic-Initialization Technique."  Mon. Wea. Rev., 104:1551-1556.

Yamada, T., and S. Bunker, 1989.  "A Numerical Study of Nocturnal Drainage Flows with
Strong Wind and Temperature Gradients."  J. Appl. Meteor., 28:545-554.




VERIFICATION  AND  VALIDATION 


OF  THE 

NIGHT  VISION  GOGGLE  TACTICAL  DECISION  AID 

John  R.  Elrick 

U.S.  Army  Research  Laboratory 
Battlefield  Environment  Directorate 
White  Sands  Missile  Range,  New  Mexico,  88002-5501,  USA 


ABSTRACT 

The  night  vision  goggle  (NVG)  tactical  decision  aid  (TDA)  is  a  computer 
software  application  used  to  determine  the  suitability  of  NVG  use  based 
on  existing  or  forecast  meteorological  conditions.  The  TDA  combines 
solar  and  lunar  ephemeris  data  with  the  general  effects  of  clouds  and 
precipitation  on  illumination  levels  based  on  the  weather  data  contained  in 
standard  weather  observations.  This  TDA  is  one  of  a  suite  of  TDAs  that 
was  delivered  in  the  Block  I,  Integrated  Meteorological  System  (IMETS) 
release  to  the  Program  Executive  Office  Command  and  Control  Systems, 

Project  Director,  IMETS.  Although  the  NVG  TDA  was  acceptance  tested 
before  it  was  released,  it  was  never  formally  verified  and  validated.  The 
verification  and  validation  (V&V)  described  here  are  results  of  Battlefield 
Environment  Directorate  efforts  to  include  V&V  as  part  of  future  software 
releases  to  U.S.  Army  weather  support  personnel.  The  accurate,  early 
identification of software problems and their correction prior to operational
applications  are  integral  parts  of  providing  physically  and  theoretically 
sound  products  to  the  end  user.  The  methods  that  were  used  in  the  V&V 
of  the  NVG  TDA  are  discussed  and  some  of  the  pertinent  findings  are 
presented. 

1.  INTRODUCTION 

The  night  vision  goggle  (NVG)  tactical  decision  aid  (TDA)  is  part  of  a  suite  of  computer 
software  applications  that  is  designed  to  be  included  in  the  Integrated  Meteorological  System 
(IMETS).  It  is  a  product  that  will  provide  battlefield  decision  makers  with  detailed 
information  about  the  solar  and  lunar  illumination  levels  at  user-specified  locations,  with 
general  consideration  for  cloud  cover  and  precipitation.  It  was  one  of  the  TDAs  that  was  part 
of  the  block  I  IMETS  release  from  the  U.S.  Army  Research  Laboratory’s  Battlefield 




Environment  (BE)  Directorate  to  the  Program  Executive  Office  Command  and  Control 
Systems  (PEO  CCS)  headquartered  at  Fort  Monmouth,  NJ.  The  block  I  release  was  delivered 
to  the  PEO  CCS  project  director  (PD)  IMETS  as  the  initial  step  in  a  three-block  transition 
process  to  field  an  operational  IMETS.  The  final  version  of  the  IMETS  will  be  used  by  U.S. 
Air Force Staff Weather Officers (SWOs) to support future Army operations with modern
hardware  and  software  specifically  tailored  to  the  concept  of  a  highly  mobile  fighting  force 
capable  of  worldwide  deployment.  The  IMETS  will  be  used  at  the  echelon-above-corps  level 
down  to  the  separate  brigade  and  special  operation  force  level  where  SWO  support  is 
necessary  and  defined  by  Army  doctrine. 

2.  DEFINITIONS

The  following  definitions,  taken  from  Army  Regulation  5-11,  "Army  Model  and  Simulation 
Management Program" (1992), were used in the V&V of the NVG TDA (M&S in these
definitions  refers  to  model  and  simulation): 

"a.  Verification. 

(1)  Verification  is  the  process  of  determining  that  M&S  accurately 
represent  the  developer’s  conceptual  description  and  specifications. 
Verification  evaluates  the  extent  to  which  the  M&S  has  been  developed 
using  sound  and  established  engineering  techniques.  The  verification 
process  involves  identifying  and  examining  the  stated  and  pertinent 
unstated assumptions in the M&S, examining interfaces with input databases,
ensuring that source code accurately performs all intended and
required  calculations,  reviewing  output  records,  performing  structured 
walk-through  techniques  to  determine  if  M&S  logic  correctly  performs 
intended  functions,  and  performing  M&S  sensitivity  analyses.  Unexpected 
sensitivity  (or  lack  of  sensitivity)  to  key  inputs  may  highlight  a  need  to 
review  the  M&S  algorithm  for  omissions  or  errors. 

(2)  Verification  also  includes  appropriate  data  certification  and  M&S 
documentation  (e.g.,  programmer’s  manual,  user’s  guide,  and  analyst’s 
manuals). 

(3)  Verification  should  normally  be  performed  by  an  independent  V&V 
(IV&V)  agent  but  remains  the  responsibility  of  the  M&S  proponent  to 
ensure  accomplishment. 

b.  Validation.  Validation is the process of determining the extent to which
M&S  accurately  represent  the  real-world  from  the  perspective  of  the 
intended  use  of  the  M&S.  The  validation  process  ranges  from  single 
modules  to  the  entire  system.  Ultimately,  the  purpose  is  to  validate  the 
entire  system  of  M&S  data,  and  operator-analysts  who  will  execute  the 




M&S.  Validation  methods  will  incorporate  documentation  of  procedures  and  results 
of  any  validation  effort." 

The  above  document  cites  the  types  of  validation  that  may  be  used  in  the  process  described 
here.  "Face  Validation"  or  the  determination  that  an  M&S,  based  on  the  software 
performance,  seems  reasonable  to  people  knowledgeable  about  the  system  being  modeled  was 
used  in  this  V&V  effort  in  conjunction  with  "Peer  Review"  where  people  who  are  very 
familiar  with  the  technical  area  being  modeled  evaluate  its  internal  representativeness  and  the 
accuracy  of  the  output  of  the  M&S. 

3.  TECHNICAL  DOCUMENT  REVIEW 

A  comprehensive  review  of  the  technical  references  used  in  the  NVG  TDA  research  and 
development  (R&D)  effort  was  made  along  with  a  complete  review  of  the  technical 
documentation  associated  with  the  release  of  the  NVG  TDA  to  PD  IMETS.  This  review  was 
necessary  for  the  validation  of  the  physical  principles  used  in  the  computer  model.  The 
validation  described  is  of  the  "conceptual"  variety  described  by  Dale  K.  Pace  in  his  article 
"Modeling  and  Simulation"  (1993)  because  of  the  maturity  level  of  the  TDA.  The  following 
paragraphs  present  the  findings  of  this  review. 

The  first  major  document  reviewed  was  the  basis  for  the  illumination  calculations.  The 
computer  program  ILLUM  (van  Brochove  1982)  was  the  technical  basis  for  all  illumination 
values  reported  by  the  NVG  TDA.  This  program  was  used  for  all  solar  and  lunar  ephemeris 
computations.  A  reasonable  "constant"  value  for  natural  illumination  without  solar  or  lunar 
contribution  is  presented.  There  is  a  full  explanation  of  the  FORTRAN  computer  code  used 
to develop the model.  ILLUM calculates the illumination based on the geographical latitude
and  longitude  of  an  earth-based  observer  (user)  for  clear  skies.  Infrequent  solar  and  lunar 
phenomena,  such  as  eclipses,  are  considered  and  the  application  warns  of  their  occurrence. 
A natural illumination value of 1.1 × 10'^ lux (lumens (lm) m^-2) without solar or lunar
contribution and a full-moon illumination value of 0.267 lux are consistent with the RCA
Electro-Optics Handbook (1978).

Another  major  contributor  to  this  development  was  AFGL-TR-82-0039,  Solar  Radiance  Flux 
Calculations  from  Standard  Meteorological  Observations  (Shapiro  1982).  Shapiro’s  work  and 
associated  computer  models  were  used  to  include  the  effects  that  clouds  have  on  the 
illumination  reaching  the  ground.  In  its  most  complex  form,  the  computer  model  described 
will calculate the solar radiation incident at or near the earth's surface through n layers of the
atmosphere via a system of 2n + 2 linear equations.  These equations comprise a closed
set  of  equations  that  account  for  the  physical  processes  of  reflection,  absorption,  and 
transmission  of  the  electromagnetic  radiation  along  its  path. 
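
To make the structure of such a system concrete, the sketch below sets up and solves a generic
adding-type (reflection/transmission) flux balance for n layers.  The layer coefficients and
surface albedo are invented values, and this is not Shapiro's parameterization; it only
illustrates why 2n + 2 linear equations close the problem.

    import numpy as np

    def layer_fluxes(t, r, albedo, s0=1.0):
        """Solve for downward (D) and upward (U) fluxes at the n+1 interfaces bounding
        n layers (layer 1 at the top): 2n + 2 equations in 2n + 2 unknowns."""
        n = len(t)
        size = 2 * (n + 1)
        A = np.zeros((size, size))
        b = np.zeros(size)
        D = lambda i: i                  # column index of D_i
        U = lambda i: n + 1 + i          # column index of U_i

        A[0, D(0)] = 1.0                 # D_0 = s0 (flux incident at the top)
        b[0] = s0
        row = 1
        for i in range(1, n + 1):        # downward flux emerging below layer i
            A[row, D(i)] = 1.0
            A[row, D(i - 1)] = -t[i - 1]
            A[row, U(i)] = -r[i - 1]
            row += 1
        for i in range(1, n + 1):        # upward flux emerging above layer i
            A[row, U(i - 1)] = 1.0
            A[row, U(i)] = -t[i - 1]
            A[row, D(i - 1)] = -r[i - 1]
            row += 1
        A[row, U(n)] = 1.0               # surface reflection: U_n = albedo * D_n
        A[row, D(n)] = -albedo

        x = np.linalg.solve(A, b)
        return x[:n + 1], x[n + 1:]      # D_0..D_n and U_0..U_n

    # three layers (high, middle, low clouds, top to bottom) with made-up coefficients
    down, up = layer_fluxes(t=[0.9, 0.7, 0.6], r=[0.05, 0.2, 0.3], albedo=0.2)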

To  be  consistent  with  the  standard  methodology  of  reporting  cloud  cover,  the  n-layers  are 
taken  as  three  discrete  cloud  layers  representing  low,  middle,  and  high  clouds.  Specific 
radiative  transfer  coefficients  were  developed  that  are  dependent  on  cloud  amount  and 




thickness  for  each  layer.  The  effects  of  the  earth’s  surface  (albedo)  are  considered.  Shapiro 
tested the model he described against independent data and found it to be accurate.  The
calculations presented in this work are based on simple scattering theory and Monte Carlo
simulations.  Nine  cloud  types  were  chosen  to  represent  those  clouds  commonly  observed. 
The  World  Meteorological  Organization  Synoptic  Cloud  Code  recognizes  other  cloud  types 
but they are seldom observed.

The  case  where  precipitation  is  occurring  is  the  least  reliable  of  the  solar  flux  calculations 
used.  Only  a  small  number  of  precipitation  events  led  to  a  small  number  of  case  studies. 
During  precipitation,  the  cloud  types  present  can  be  very  complex  and  ground-based 
observers  can  see,  and  therefore  report,  cloud  types  and  amounts  up  to  and  including  the 
lowest  overcast  layer.  Because  of  this,  worst-case  thick  clouds  are  assumed  at  all  levels  when 
precipitation  is  occurring. 

Neither  Shapiro’s  work  nor  the  NVG  TDA  computer  model  accounts  for  such  radiative 
transfer  processes  as  aerosol  and  molecular  scattering  and  absorption.  Ozone  and  water  vapor 
absorption  are  treated  only  in  the  most  rudimentary  way.  This  seems  a  most  reasonable 
approach  in  light  of  the  other  simplifying  assumptions  and  the  accuracy  of  the  input 
observational  data.  Even  with  the  simplified  treatment  presented  by  Shapiro,  the  process  of 

radiative transfer is very complex and is handled in a physically realistic and rigorously
complete way.

Finally, the Technology Exploitation Weather TestBed (TEWTB) User's Guide and Technical
Reference for the Block I Integrated Meteorological System (IMETS) (Elrick et al. 1992) was
reviewed  for  content  and  consistency.  This  document  was  prepared  by  scientists  and 
engineers  from  the  Physical  Science  Laboratory  and  the  BE  Directorate.  It  is  intended  as  a 
reference  manual  for  individuals  who  are  unfamiliar  with  the  IMETS  but  who  possess  some 
basic  computer  operating  skills. 

This  document  has  some  inconsistent  references  to  "night  vision  devices"  that  could  include 
such  low-light-level  equipment  as  starlight  scopes  and  tank  gunner’s  sights.  The  NVG  TDA 
is  geared  toward  aviation  applications  for  nighttime  flying  operations. 

Occasionally,  computer  jargon  is  used  in  the  text.  This  could  present  a  limitation  to  its  use 
by  operators  who  are  not  familiar  with  computer  nomenclature.  In  other  instances  incorrect 
reference  is  made  to  operations  being  "GO"  or  "NO  GO"  based  on  the  existing  or  forecast 
weather.  Conditions  should  be  identified  as  being  "FAVORABLE,  MARGINAL,  or 
UNFAVORABLE;"  TDAs  are  not  intended  to  be  directives;  they  are  intended  instead  to  be 
intelligent  planning  guides  for  battlefield  decision  makers  based  on  existing  doctrine  and 
equipment  limitations. 

This  document  has  a  final  weakness.  It  does  not  contain  definitions  for  some  of  the  terms 
used.  Terms  such  as  nautical  and  civil  twilight  need  to  be  defined  for  operators  who  are  not 
familiar with them.  Inclusion of a complete set of definitions will make this document a
valuable  reference  guide  that  will  stand  on  its  own  merit. 




4.  SOFTWARE  TESTING 


During  the  period  1-18  November  1994,  the  NVG  TDA  computer  software  was  exercised  for 
a  total  of  11.5  hours  in  seven  separate  sessions.  This  testing  was  done  on  the  system  known 
locally  as  the  ACCS4,  which  is  a  nonrugged  commercial  version  of  the  Army  common 
hardware  system.  The  most  current  version  of  the  block  I IMETS  baseline  software  resides 
on  this  computer  and  is  identical  to  the  baseline  version  on  the  Army  Command  and  Control 
System.  The purpose of this testing was to evaluate the accuracy of the software application and
its  "user  friendliness." 

Several  errors  in  the  software  were  noted  and  documented  according  to  the  established 
configuration  management  practices  employed  in  the  BE  Directorate.  There  are  undoubtedly 
other  errors  that  were  not  detected  because  of  the  inability  to  examine  every  possible 
scenario.  The  ephemeris  data  that  are  an  output  of  the  TDA  were  compared  with  data  used 
at  the  official  meteorological  forecasting  and  observing  station  at  White  Sands.  All  ephemeris 
data  were  found  to  be  within  plus  or  minus  5  minutes  of  the  data  provided  to  the  local  station 
by  the  Nautical  Almanac  Office  (1993).  This  is  well  within  the  acceptable  operating  envelope 
for  most  Army  operations. 

For  this  stage  of  the  IMETS  R&D  effort,  the  software  is  acceptable.  The  minor  errors  found 
and  documented  during  this  test  should  be  fixed  before  final  fielding.  Other  errors  that 
surface  need  to  be  documented  and  corrected  before  future  baseline  releases. 

5.  CONCLUSIONS  AND  RECOMMENDATIONS 

The  NVG  TDA  is  complete  and  accurate  based  on  its  stage  of  development  in  the  IMETS 
release  cycle.  It  is  based  on  sound  physical  principles,  and  it  is  certainly  usable  and 
trustworthy  for  limited  operational  considerations.  Before  the  software  can  be  fielded,  it  must 
be  made  absolutely  "user-friendly"  and  any  errors  noted  in  this  and  future  V&V  efforts  must 
be  corrected.  The  TDA  must  be  fully  tested  and  independently  evaluated  in  each  baseline 
stage  of  its  development  before  it  is  fielded  as  part  of  a  fully  operational  IMETS.  As  in  the 
past,  software  developers  must  periodically  test  changes  and  upgrades  to  verify  the 
correctness  of  the  computer  code  and  its  interaction  with  its  associated  computer  hardware 
platform.  IV&V  must  be  thoroughly  conducted,  as  part  of  sound  configuration  management 
practices,  before  the  IMETS  blocks  II  and  III  releases. 

REFERENCES 

Some  references  that  are  listed  here  were  not  referenced  in  this  paper  but  were  cited  in  the 
original  V&V  effort.  They  are  included  here  for  completeness. 

Army  Model  and  Simulation  Management  Program.  Army  Regulation  5-11,  Headquarters, 
Department  of  the  Army,  Washington,  D.C.,  10  June  1992 




Burks, J. D., 1993, Verification, Validation, and Assessment for the Technology Exploitation
Weather TestBed (TEWTB), PSL-93/57, Physical Science Laboratory, Las Cruces,
NM.

Electro-Optics  Handbook,  Technical  Series  EOH-11,  RCA,  Solid  State  Division,  Electro- 
Optics  Devices,  Lancaster,  PA,  reprinted  5-78 

Elrick,  J.  R.,  D.  C.  Shoop,  P.  V.  Laybe,  R.  R.  Lee,  J.  E.  Passner,  J.  B.  Spalding,  J.  D. 
Brandt,  D.  C.  Weems,  A.  W.  Dudenhoeffer,  G.  E.  McCrary,  and  S.  H.  Cooper, 
1992, Technology Exploitation Weather TestBed (TEWTB) User's Guide and
Technical Reference for the Block I Integrated Meteorological System (IMETS),
PSL-92/60,  Physical  Science  Laboratory,  Las  Cruces,  NM 

Harris,  J.  E.,  1992,  "A  Technology  Exploitation  Weather  TestBed  for  Army  Applications," 
Proceedings  of  the  Eighth  International  Conference  on  Interactive  Information  and 
Processing  Systems  for  Meteorology.  Oceanography,  and  Hydrology.  American 
Meteorological  Society,  45  Beacon  Street,  Boston,  MA  02108-3693,  pp.  5-9 

Lunar  Ephemeris  Tables  for  White  Sands  Missile  Range,  NM,  for  1993,  Nautical  Almanac 
Office,  U.S.  Naval  Observatory,  Washington,  D.C. 


Pace, D. K., 1993, "Modeling and Simulation," Phalanx, The Bulletin of Military Operations
Research, 26(3):27-29, ISSN 0195-1920

Shapiro,  R.,  1982,  Solar  Radiative  Flux  Calculations  from  Standard  Surface  Meteorological 
Observations,  AFGL-TR-82-0039,  Scientific  Report  No.  1,  Systems  and  Applied 
Sciences  Corporation,  Air  Force  Geophysics  Laboratory,  Hanscom,  MA. 

van Brochove, Ir. A. C., 1982, The Computer Program ILLUM: Calculation of the
Positions of Sun and Moon and Natural Illumination, PHL 1982-13, Physics
Laboratory TNO, National Defence Research Organization TNO (the Netherlands)




Session  IV 

BOUNDARY  LAYER 




CLUTTER  CHARACTERIZATION  USING  FOURIER  AND  WAVELET  TECHNIQUES 


J.  Michael  Rollins 

Science and Technology Corporation
Las  Cruces,  New  Mexico  88011,  U.S.A. 

William  Peterson 
U.S.  Army  Research  Laboratory 
Battlefield Environment Directorate
White  Sands  Missile  Range,  New  Mexico  88002,  U.S.A. 


ABSTRACT 

Clutter  is  a  feature  of  scene  content  that  can  confuse  an  observer  or 
automatic  algorithm  trying  to  locate  and  track  a  particular  target  object.  It 
consists  of  variations  in  the  radiance  field  that  either  camouflage  an  object  or 
divert  perceptual  attention  away  from  the  object  location.  The  ability  to 
measure  and  quantify  clutter  is  playing  a  growing  role  in  reliably  estimating 
target  acquisition  ranges. 

Two  methods  for  characterizing  clutter  are  presented.  They  involve  Fourier 
and  wavelet  analysis.  Results  of  clutter  analysis  on  terrain  images  are  shown 
and  the  merits  of  the  two  characterization  methods  are  discussed. 

1.  INTRODUCTION 

An  important  aspect  of  scene  characterization  is  measuring  to  what  extent  natural  features 
of  the  terrain  interfere  with  the  ability  to  distinguish  manmade  objects.  Scene  characteristics 
that make such discernment difficult are qualitatively called "clutter."  Various quantitative
metrics  for  the  assessment  of  clutter  have  been  proposed,  most  of  which  are  related  in  some 
way  to  the  scene  variance.  Other  types  of  analysis  may  provide  metrics  giving  similar 
information,  such  as  the  fractal  dimension  and  parameters  derived  from  wavelet  power 
distributions. 

The presence of significant intensity variation in an image does not alone constitute clutter.
The spatial extent of patches of intensities significantly different from the average intensity
is also an important factor.  If bright or dark patches are present that are of the same
general spatial dimensions as an object of interest, the discernment of the object is the most
challenging, especially if the patches are of the same brightness as the object of interest.
If a measure can provide a reliable generalization about the rough dimensions of clutter, it
is a useful tool in the validation of terrain simulators that might be used to predict detection
and recognition ranges in the presence of clutter of varying spatial structure.




The  simplest  approach  to  clutter  characterization  is  simply  to  analyze  the  scene  in  terms  of 
pixel  blocks  of  a  certain  size— most  often,  the  size  of  a  real  or  hypothetical  object  of 
interest.  To  further  study  scene  features  that  constitute  clutter  of  the  same  spatial  dimension 
as  a  manmade  object  of  interest,  techniques  that  describe  the  spatial  extent  of  scene 
information  can  be  useful.  Such  techniques  include  Fourier  analysis  and  wavelet  analysis. 
The  first  is  used  to  describe  scene  information  in  terms  of  a  superposition  of  two- 
dimensional  sinusoids  and  is  most  useful  when  the  image  pattern  to  be  analyzed  is 
distributed  homogeneously  across  the  entire  image  region  of  interest.  The  second  describes 
scene  information  in  terms  of  a  superposition  of  very  localized  functions  called  wavelets, 
and is most useful in characterizing discrete features of limited spatial extent within the
image  region  of  interest. 

Fourier  techniques  are  attractive  because  they  have  been  used,  studied,  and  interpreted  so 
thoroughly.  Wavelet  techniques  are  attractive  because  wavelet  analysis  appears  to  have 
similarities  to  the  human  visual  decomposition  process.  The  more  closely  an  analysis 
procedure  emulates  biological  vision,  the  more  directly  predictions  can  be  made  of  visual 
performance  under  variable  environmental  conditions.  Both  Fourier  and  wavelet  techniques 
have  an  advantage  over  the  clutter  metric  in  that  each  frequency  or  wavelet  band  contains 
completely  unique  and  independent  information  about  the  size  and  relative  weighting  of 
scene  features.  The  full  set  of  Fourier  and  wavelet  information  gives  a  complete  description 
of  the  image. 

This  paper  presents  the  results  of  clutter  analysis  of  two  different  types  of  terrain— forest 
and  desert.  The  terrain  images  were  obtained  at  two  field  tests  sponsored  by  the  Smart 
Weapons  Operability  Enhancement  Joint  Test  and  Evaluation  (SWOE  JT&E)  program  at 
Grayling,  Michigan,  and  Yuma,  Arizona.  The  analysis  involves  four  metrics:  direct 
measurement  of  clutter  in  terms  of  a  hypothetical  object  five  pixels  wide  and  three  pixels 
high,  autocorrelation  function  (ACF)  slope  correlation  length  and  fractal  dimension,  and  the 
Haar  wavelet  horizontal  centroid.  The  ACF  and  fractal  dimension  are  derived  from  Fourier 
frequency  analysis.  The  latter  three  metrics  are  not  sought  for  replacing  the  first  as  contrast 
measures,  but  rather  for  providing  information  on  the  spatial  extent  of  clutter.  Special 
attention  is  given  to  the  various  measures  in  the  case  where  the  clutter  metric  is  relatively 
high.  Finally,  problems  specific  to  the  type  of  wavelet  analysis  used  are  discussed, 

2.  SIMPLE  CLUTTER  METRIC 

A  simple,  often-used  metric  for  clutter  is  based  on  the  horizontal  and  vertical  dimensions 
of  a  target  in  the  image  field.  This  metric  is  given  by  Schmeider  and  Weathersby  (1983) 
as 


    C = [ (1/M) Σ_(i=1..M) σ_i² ]^(1/2)                                  (1)

where M is the number of image blocks obtained by partitioning the image into pixel blocks
whose horizontal and vertical dimensions are twice those of the object, and σ_i² is the variance




within  each  block.  If  no  target  is  present  in  the  image,  a  block  size  can  be  specified  based 
on  a  hypothetical  target  at  the  center  of  the  image.  For  instance,  in  this  study,  pixels  in  the 
center  of  the  image  represent  a  distance  of  approximately  0.5  m  (horizontally)  and  a 
hypothetical  object  five  pixels  wide  and  three  pixels  high  was  used  in  the  specification  of 
the  image  block  partitioning.  This  metric  is  easy  to  calculate  and  produces  results  that  agree 
with  visual  assessment.  Unfortunately,  while  the  block  size  is  related  to  the  target  size,  the 
variances  are  calculated  on  a  pixel  basis  and  the  relationship  of  this  metric  to  image  features 
of  specific  sizes  is  rather  tenuous. 
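
A minimal NumPy sketch of this block-variance metric follows; the block dimensions (twice an
assumed 5 x 3 pixel object) and the random test image are placeholders.

    import numpy as np

    def clutter_metric(image, block_w=10, block_h=6):
        """Equation (1): square root of the mean of per-block pixel variances,
        with block dimensions twice those of the (hypothetical) target."""
        img = np.asarray(image, dtype=float)
        variances = []
        for i in range(img.shape[0] // block_h):
            for j in range(img.shape[1] // block_w):
                block = img[i * block_h:(i + 1) * block_h,
                            j * block_w:(j + 1) * block_w]
                variances.append(block.var())
        return np.sqrt(np.mean(variances))

    print(clutter_metric(np.random.rand(256, 256)))   # example on a random "scene"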

3.  FOURIER  ANALYSIS 

A  number  of  metrics  based  on  the  Fourier  power  spectrum  can  be  used  to  describe  scene 
data.  One  of  the  most  popular  is  the  correlation  length,  which  is  a  measure  of  local 
uniformity.  The  correlation  length  can  be  determined  in  two  ways— by  integrating  over  the 
power  spectrum  and  dividing  by  the  variance  or  by  modelling  the  ACF  as  a  decaying 
exponential  and  determining  the  semilog  slope  of  the  decay.  The  latter  method  has  been 
found  more  useful  in  this  study. 

The  autocorrelation  function  in  one  dimension  (t)  is  given  by 

    ACF(t) ∝ exp(-a t)                                                   (2)


where  a  is  a  constant. 

Figures  1  and  2  show  typical  radiance  fields  for  the  Grayling  and  Yuma  sites  respectively. 
For  a  small  region  of  interest  in  the  Yuma  scene  (figure  3)  the  two-dimensional 
autocorrelation is shown in figure 4.  The ACF(0,0) in the upper left corner is the most
intense  pixel  and  the  function  decreases  from  this  point  monotonically  in  each  direction. 
The  decaying  exponential  model  is  well  suited  to  this  function. 
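
One possible semilog-slope estimate of the correlation length is sketched below for a
one-dimensional, mean-removed signal, using the FFT-based autocorrelation; the fitting range
(max_lag) and the synthetic test row are assumptions.

    import numpy as np

    def correlation_length(signal, max_lag=16):
        """Fit log ACF(t) ~ -t / L over the first max_lag lags (equation 2) and return L."""
        x = np.asarray(signal, dtype=float)
        x = x - x.mean()                              # remove the DC term first
        n = len(x)
        power = np.abs(np.fft.fft(x, 2 * n)) ** 2     # zero-padded power spectrum
        acf = np.fft.ifft(power).real[:n]
        acf /= acf[0]                                 # normalize so ACF(0) = 1
        lags = np.arange(1, max_lag)
        keep = acf[lags] > 0                          # the log needs positive values
        slope = np.polyfit(lags[keep], np.log(acf[lags][keep]), 1)[0]
        return -1.0 / slope                           # correlation length in pixels

    # synthetic first-order autoregressive row with a known decay rate
    rng = np.random.default_rng(0)
    row = np.zeros(512)
    for i in range(1, 512):
        row[i] = 0.9 * row[i - 1] + rng.normal()
    print(correlation_length(row))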

A  somewhat  different  measure  is  the  fractal  dimension,  which  is  an  indicator  of  the 
"roughness”  of  the  image  texture.  The  fractal  dimension  is  based  on  the  slope  of  the  power 
spectrum (Peitgen and Saupe, 1988).  More accurately, assume the power spectrum can be
characterized such that

    S(f) ∝ 1 / f^β                                                       (4)

where f = (k² + l²)^(1/2) and β is some constant (k and l are the two-dimensional Fourier
transform indices).




Taking the log of both sides and defining the fractal dimension as

    D = (7 - β) / 2                                                      (5)

the result is

    log[S(f)] = (2D - 7) log f                                           (6)


If  the  downward  slope  of  the  power  spectrum  is  substantial,  implying  principally  low- 
frequency  components,  the  fractal  dimension  is  low.  Conversely,  when  the  slope  of  the 
power  spectrum  is  very  shallow,  approaching  white  noise,  the  fractal  dimension  is  very 
high.  Most natural terrain exhibits a fractal dimension between 1 and 3.  Since the fractal dimension is a
measure  of  the  "roughness"  of  a  signal,  it  is  a  convenient  measure  for  analyzing  the  clutter 
content  of  an  image.  A  fractal  dimension  is  a  real-valued  metric  that  has  the  clearest 
meaning  when  it  happens  to  correspond  to  an  integer  value  such  as  1  or  2.  A  signal  with 
a  fractal  dimension  of  2  is  represented  by  a  simple  two-dimensional  topographical  surface. 
As  deviations  from  one  pixel  to  the  next  increase  in  intensity,  the  roughness  increases,  as 
does  the  fractal  dimension. 
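
A sketch of one way to estimate β from the radially averaged power spectrum and convert it to
the fractal dimension through the relation used above follows; the log-frequency binning and
the white-noise test image are assumptions.

    import numpy as np

    def fractal_dimension(image):
        """Fit the slope of the radially averaged power spectrum, S(f) ~ 1/f**beta,
        and return D = (7 - beta) / 2 following equation (5)."""
        img = np.asarray(image, dtype=float)
        img = img - img.mean()
        power = np.abs(np.fft.fft2(img)) ** 2
        fy, fx = np.meshgrid(np.fft.fftfreq(img.shape[0]),
                             np.fft.fftfreq(img.shape[1]), indexing="ij")
        f = np.sqrt(fx ** 2 + fy ** 2).ravel()
        s = power.ravel()
        keep = f > 0
        logf, logs = np.log(f[keep]), np.log(s[keep])
        edges = np.linspace(logf.min(), logf.max(), 20)
        idx = np.digitize(logf, edges)
        fm = [logf[idx == i].mean() for i in range(1, 20) if np.any(idx == i)]
        sm = [logs[idx == i].mean() for i in range(1, 20) if np.any(idx == i)]
        beta = -np.polyfit(fm, sm, 1)[0]
        return (7.0 - beta) / 2.0

    print(fractal_dimension(np.random.rand(128, 128)))   # near-white noise: small beta, high D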

4.  WAVELET  PROCESSING 

In  addition  to  the  one-dimensional  power  spectrum  analysis,  a  study  was  made  using  wavelet 
analysis.  Fourier  analysis  is  concerned  with  the  frequency  (and  phase)  content  of  an  image; 
wavelet  analysis  is  concerned  with  its  scale  (and  translation)  content.  Wavelet  analysis 
recasts  image  content  in  terms  of  objects  of  various  sizes  (scales)  and  positions 
(translations).  As  this  area  of  study  is  new,  convenient  metrics  for  encapsulating  trends  in 
the  wavelet  representation  have  not  been  developed  and  tested  as  thoroughly  as  has  the 
fractal  dimension  in  Fourier  analysis. 

In  the  previous  section,  a  Fourier  analysis  of  terrain  images  was  described.  The  power 
spectra provide a catalogue of the frequency content of the images.  The Fourier transform
compares  image  patterns  to  sinusoidal  basis  functions  differing  in  frequency  and  phase. 

While  there  is  a  relationship  between  object  sizes  and  frequency  content,  the  Fourier-based 
methods  applied  to  real  scenes  match  periodic  basis  functions  with  nonperiodic  data.  The 
result  is  that  objects  of  finite  extent  in  space  have  a  very  widespread,  nonlocalized  signature 
in  the  frequency  domain. 

A  logical  response  to  this  situation  is  to  search  for  a  basis  of  functions  that  are  themselves 
at  least  somewhat  local  in  space  and  then  to  catalog  image  content  according  to  this  basis. 
Since  the  basis  functions  are  of  finite  extent,  the  word  "scale"  is  more  meaningful  than 
"frequency"  in  indexing  the  functions.  If  an  orthonormal  basis  of  local  functions  is 




available,  a  transform  analogous  to  the  Fourier  transform  can  be  developed.  One  such 
transform  developed  recently  is  the  continuous  wavelet  transform,  defined  in  one  dimension 
as  (Chui,  1992): 


    (W_ψ f)(c, b) = |c|^(-1/2) ∫ f(x) ψ̄((x - b) / c) dx                  (7)

where c is a scaling factor and b is a shift (translation) along the axis of support.  The
function ψ is a wavelet; W_ψ f is the wavelet transform; the overscore denotes the complex
conjugate.

Most of the energy in the function ψ is concentrated in a small interval [c, d]; that is,

    ∫_c^d |ψ(x)|² dx  ≈  ∫_(-∞)^(∞) |ψ(x)|² dx                           (8)


For  scale  p  and  translation  q,  the  Haar  wavelet  defined  on  the  interval  [0,1]  is  given  by 
(Jain  1989) 


    ψ_(p,q)(x) =   (1/√N) · 2^(p/2),     (q - 1)/2^p  ≤  x  <  (q - ½)/2^p
                  -(1/√N) · 2^(p/2),     (q - ½)/2^p  ≤  x  <  q/2^p
                   0,                    elsewhere                       (9)

where

    0 ≤ p ≤ log2(N) - 1
    q = 0, 1            for p = 0                                       (10)
    1 ≤ q ≤ 2^p         for p ≠ 0

The Haar coefficient index k is given by

    k = 2^p + q - 1                                                     (11)


As  with  the  Fourier  analysis,  it  is  instructive  to  use  a  metric  that  can  represent  the  extensive 
information  contained  in  the  wavelet  transform  of  an  image.  For  this  study,  a  metric  was 




used  that  roughly  characterizes  the  mean  horizontal  scale  of  image  features.  This  is  the 
horizontal  centroid  given  in  equation  12  (Bleiweiss  et  al.  1994;  Rollins  et  al.,  1994). 


    Hor. cen. = [ Σ_m Σ_n  m ( Σ_(k=INT(2^(m-1)))^(2^m - 1)  Σ_(l=INT(2^(n-1)))^(2^n - 1)  W²(k,l) ) ]
                / [ Σ_(k=0)^(N-1)  Σ_(l=0)^(N-1)  W²(k,l) ]                                  (12)


where W(k, l) is the two-dimensional wavelet transform coefficient at indices k, l.

This  equation  partitions  the  wavelet  transform  domain  into  regions  (bands,  indexed  by  m  and 
n)  of  the  same  wavelet  scale  and  obtains  the  energy  in  each  band.  The  horizontal  centroid 
specifies  the  center  of  mass  of  energy  location  in  terms  of  the  horizontal  band  index.  The 
value  gives  a  rough  indication  of  dominant  horizontal  feature  sites  in  the  logarithmic 
domain.  No  pair  of  indices  should  be  simultaneously  zero  in  this  expression  to  avoid 
considering  global  brightness  offsets  (i.e.,  DC  term).  The  centroid  expression  is  only  valid 
for  the  Haar  wavelet,  because  it  is  the  only  wavelet  for  which  the  DC  information  is 
captured  entirely  within  a  single  coefficient. 
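
As a rough sketch only (not the authors' code), the following computes a horizontal band
centroid from a standard, separable orthonormal Haar decomposition, following the band
partition described above; the transform ordering, the band limits, and the square
power-of-two test image are assumptions.

    import numpy as np

    def haar_1d(x):
        """Full 1-D orthonormal Haar transform; coefficient k = 2**p + q - 1 lies in
        scale band p, with k = 0 holding the DC (average) term."""
        out = np.asarray(x, dtype=float).copy()
        n = len(out)
        while n > 1:
            half = n // 2
            avg = (out[0:n:2] + out[1:n:2]) / np.sqrt(2.0)
            dif = (out[0:n:2] - out[1:n:2]) / np.sqrt(2.0)
            out[:half], out[half:n] = avg, dif
            n = half
        return out

    def horizontal_centroid(image):
        """Energy-weighted mean horizontal band index (band 0 = DC), excluding the
        term in which both band indices are zero."""
        W = np.apply_along_axis(haar_1d, 1, np.asarray(image, dtype=float))
        W = np.apply_along_axis(haar_1d, 0, W)        # columns after rows
        nbands = int(np.log2(W.shape[0])) + 1         # bands 0 .. log2(N)
        band = lambda m: (0, 1) if m == 0 else (2 ** (m - 1), 2 ** m)
        num = den = 0.0
        for mh in range(nbands):                      # horizontal bands (column index)
            c0, c1 = band(mh)
            for mv in range(nbands):                  # vertical bands (row index)
                r0, r1 = band(mv)
                if mh == 0 and mv == 0:
                    continue                          # skip the pure DC term
                energy = np.sum(W[r0:r1, c0:c1] ** 2)
                num += mh * energy
                den += energy
        return num / den

    print(horizontal_centroid(np.random.rand(32, 32)))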

While  the  Haar  wavelet  and  others  commonly  used  are  convenient  for  image  analysis,  they 
lack  an  important  quality  of  Fourier  power  spectrum  analysis— shift  invariance.  The  Fourier 
power  spectrum  signature  of  an  image  feature  does  not  change  with  small  shifts.  The 
wavelet  transform  and  wavelet  power  spectrum  of  an  image  feature,  however,  can  change 
with  even  a  one-pixel  shift. 

5.  RESULTS 

In  this  study,  terrain  images  were  selected  from  Grayling,  Michigan,  and  Yuma,  Arizona, 
sites.  In  each  scene,  areas  containing  grass  and/or  bare  soil  were  treated  separately  from 
regions containing thick foliage or trees, resulting in four regions of interest.  In each
region,  the  five  metrics  were  calculated  within  a  32x32-pixel  window  advancing  1  pixel 
horizontally  at  a  time.  The  following  paragraphs  discuss  some  salient  results  of  the  study. 

5.1  Merit  Criteria 

In  assessing  the  effectiveness  of  the  metrics,  several  types  of  criteria  were  used.  Visual 
assessment,  level  of  sensitivity  to  scene  changes,  and  correlation  with  the  simple  clutter 
metric can all provide information about the metric's validity in representing
the  spatial  extent  of  scene  clutter. 

For  the  metrics  whose  results  can  be  cast  conveniently  in  terms  of  size,  such  as  correlation 
length  and  wavelet  centroid,  it  is  useful  to  compare  the  metric  values  with  a  visual 
assessment  of  scene  feature  sizes.  The  scenes  were  assessed  with  respect  to  32x32-pixel 




regions of interest, and objects smaller than about 3 pixels are difficult to discern visually
in these images.  Therefore, values for these metrics larger than 32 pixels or smaller than
5 pixels would obviously call the validity of the metric into question.

The regions examined in these studies were chosen to be visually homogeneous.  If a metric
displays  a  large  standard  deviation  about  its  mean  across  a  given  region,  that  would  suggest 
that  the  metric  is  too  sensitive  to  very  small  changes  in  the  scene. 

The  simple  clutter  metric  varies  in  its  accuracy  with  the  size  of  the  clutter  being  examined. 
As  scene  features  approach  sizes  for  which  the  clutter  metric  is  increasingly  responsive,  it 
is  desirable  that  the  other  metrics  have  a  strong  correlation  with  the  clutter  metric.  Such 
correlation  would  show  that  the  sizes  each  measures  accurately  reflect  the  same  scene 
features. 

5.2  Simple  Clutter  Metric 

The clutter metric indicated negligible clutter for the Grayling images while showing
considerable activity for the Yuma images, with a large number of discrete objects in the
size order of the imaginary target.  For the Yuma images, the clutter metric was of higher
intensity than in the Grayling images, indicating that distinct background features here were
more  often  of  similar  size  to  the  imaginary  target  and  had  higher  contrast  than  those  in  the 
Grayling  scenes. 

5.3  ACF  Correlation  Length 

For  the  Grayling  images,  the  average  value  for  the  correlation  length  was  15.56  pixels  in 
the  area  with  foliage  and  had  a  standard  deviation  of  0.58  pixels.  For  the  bare/grassy  area 
the mean correlation length was 8.90 pixels and had a standard deviation of 0.87 pixels.
The  DC  term  was  removed  before  the  generation  of  the  ACF,  so  that  the  discolorations 
within  the  bare  areas  are  more  important  than  the  average  intensity.  Thus  the  correlation 

length demonstrated significant class separation between the thickly vegetated and
bare/grassy  areas. 

The Yuma correlation lengths for vegetated areas and bare areas were very similar
(6.02 ± 0.76 and 6.38 ± 0.47 pixels, respectively) due to the small average size of the
vegetation.  With a high average intensity for the clutter metric, correlating the other
metrics against the clutter metric showed a strong relationship between the correlation
length and the clutter metric (|ρ| = 0.93 for vegetation and |ρ| = 0.80 for bare soil).
The sensitivity of the clutter metric and correlation length to each other demonstrated the
useful auxiliary information that can be provided by the correlation length to further
measure the geometric extent of the clutter content when the clutter present is close to the size
of the target.  In essence, in conditions of significant clutter where the contrast is
appreciable, the information the correlation length conveys is rather specifically about the
clutter, whereas the other metrics are providing scene information somewhat less specific
to the clutter.  For the other metrics, |ρ| was less than about 0.35 in every case.




5.4  Fractal Dimension


For  the  Grayling  images,  the  fractal  dimension  had  a  mean  of  1.81  in  the  vegetated  area  and 
2.06  in  the  bare  area  with  a  standard  deviation  of  0.08  in  each.  The  lower  fractal 
dimension  within  the  trees  indicates  the  presence  of  larger,  more  distinct  features  than  in 
the  grassy  regions. 

For  the  Yuma  images,  the  fractal  dimension  of  the  bare  soil  area  had  a  mean  of  2.00  and 
a standard deviation of 0.17, while the area with foliage had a mean of 1.89 and a standard
deviation  of  0.28.  The  fractal  dimension  was  again  slightly  lower  in  the  area  of  thick 
foliage.  In  general,  the  fractal  dimension  means  did  not  differ  appreciably  between  regions 
of  apparently  different  clutter  content. 

5.5  Wavelet  Metrics 

In the Grayling region containing a fir tree, the horizontal centroid of the wavelet energy
bands  had  a  mean  value  of  0.74  with  a  standard  deviation  of  0.04.  This  value  is  between 
band  0  and  band  1,  closer  to  band  1.  Band  0  represents  the  DC  component  (i.e.,  average 
intensity  in  the  horizontal  direction)  whereas  band  1  represents  features  of  one  half  the 
breadth  of  the  32x32-pixel  region  of  interest.  A  value  of  0.74  for  this  centroid  indicates  the 
presence  of  a  feature  of  width  somewhat  greater  than  16  pixels,  which  is  in  fair  agreement 
with  visual  assessment  and  in  rough  agreement  with  the  correlation  length.  In  the  grassy 
area,  the  centroid  mean  increases  to  1.25  with  a  standard  deviation  of  0.02,  indicating  the 
presence  of  clutter  between  8  and  16  pixels  in  breadth.  This  result  is  in  fairly  strong 
agreement  with  the  correlation  length. 

In  the  Yuma  region  with  foliage,  the  horizontal  centroid  had  a  mean  of  1.34  and  a  standard 
deviation  of  0.09,  indicating  the  presence  of  features  between  8  and  16  pixels  in  breadth, 
somewhat  larger  than  the  6.02  pixel  correlation  length.  In  the  bare  soil  region,  the  centroid 
had  a  mean  of  1.16  and  a  standard  deviation  of  0.10.  There  are  no  prominent  discrete 
features  here,  however,  and  the  result  is  somewhat  ambiguous  in  descriptive  meaning  in 
terms  of  the  presence  of  discrete  clutter. 

6.  CONCLUSIONS 

In  conclusion,  metrics  derived  from  both  Fourier  and  wavelet  representations  of  the  image 
provide  concise  and  useful  descriptions  of  clutter  content  in  terms  of  clutter  size.  This  study 
indicates  that  in  the  Fourier  transform-based  analysis,  the  correlation  length  is  more  reliable 
than  the  fractal  dimension  in  ascribing  a  rough  size  to  clutter  content  in  a  scene,  at  least  in 
terms  of  separation  of  means  between  classes  and  in  terms  of  agreement  with  visual 
assessment.  The  centroid  obtained  from  the  wavelet  analysis  also  gives  useful  information 
regarding  the  horizontal  size  of  discrete  clutter,  but  has  the  disadvantage  of  correlating  scene 
features  with  wavelet  basis  functions  that  are  frozen  in  specific  positions.  This  problem 




may  be  somewhat  alleviated  by  the  use  of  complex-supported  harmonic  wavelets,  which 
allow  phase  shifting  of  the  basis  functions  and  thus  better  congruence  to  scene  features.  An 
investigation  into  the  use  of  harmonic  wavelets  in  assessing  clutter  is  ongoing. 

Specification  of  a  number  such  as  the  correlation  length  can  provide  an  additional  parameter 
in  a  scene  generation  process.  For  instance,  a  two-dimensional  autocorrelation  map  can  be 
generated  corresponding  to  equation  2  in  each  direction.  The  map  is  then  Fourier 
transformed and the square root of the resulting power spectrum is taken.  This process
produces  a  transfer  function  that  can  be  used  to  filter  another  two-dimensional  random  map 
of  white  Gaussian  noise,  producing  a  synthetic  "clutter"  map  with  the  desired  correlation 
length.  Such  clutter  maps  can  then  be  used  to  modify  probability  of  detection  and 
recognition  of  synthetic  targets  in  the  presence  of  clutter  of  various  size  distributions. 
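As a hedged illustration of this scene generation idea, the sketch below builds a separable exponential autocorrelation map from specified correlation lengths, converts it to an amplitude transfer function, and filters white Gaussian noise with it (NumPy only). The exponential ACF form and the cyclic treatment of offsets are assumptions made for illustration, not a description of an existing SWOE utility.

    import numpy as np

    def synthetic_clutter(shape, corr_len_x, corr_len_y, seed=None):
        # Build a separable exponential autocorrelation map, turn it into an
        # amplitude transfer function, and filter white Gaussian noise with it.
        ny, nx = shape
        rng = np.random.default_rng(seed)
        x = np.abs(np.fft.fftfreq(nx) * nx)        # cyclic horizontal offsets
        y = np.abs(np.fft.fftfreq(ny) * ny)        # cyclic vertical offsets
        acf = np.exp(-y[:, None] / corr_len_y - x[None, :] / corr_len_x)
        transfer = np.sqrt(np.abs(np.fft.fft2(acf)))   # square root of the power spectrum
        noise = rng.standard_normal(shape)
        return np.fft.ifft2(np.fft.fft2(noise) * transfer).real

    # e.g. clutter = synthetic_clutter((128, 128), corr_len_x=6.0, corr_len_y=6.0)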

REFERENCES 

Bleiweiss,  M.P.,  J.M.  Rollins,  and  C.  Chaapel,  1994.  "Analysis  of  Infrared  Background 
Scenes  from  the  Grayling  I  SWOE  JT&E  Field  Test."  In  Proceedings  of  the  1993 
Battlefield  Atmospherics  Conference,  U.S.  Army  Research  Laboratory,  White  Sands 
Missile  Range,  NM  88002,  pp  281-295. 

Chui,  C.K.,  1992.  An  Introduction  to  Wavelets.  Vol.  1  of  the  series  Wavelet  Analysis  and 
its  Applications,  Academic  Press,  San  Diego,  California. 

Jain,  A.K.,  1989.  Fundamentals  of  Digital  Image  Processing,  Prentice-Hall,  Englewood 
Cliffs,  New  Jersey. 

Peitgen,  H.-O.,  and  D.  Saupe,  eds.,  1988.  The  Science  of  Fractal  Images.  Spiinger- 
Verlag,  New  York. 

Rollins,  J.M.,  C.  Chaapel,  and  M.P.  Bleiweiss,  1994.  "Spatial  and  Temporal  Scene 
Analysis."  In  Characterization  and  Propagation  of  Sources  and  Backgrounds,  SPIE 
Proceedings,  International  Society  for  oi)tical  Engineering,  Vol.  2223,  W.R  Watkins 
and  D.  Clement,  eds,  pp  521-532. 


Schmeider,  D.E.,  and  M.R.  Weathersby,  1983.  "Detection  Performance  in  Clutter  with 
Variable  Resolution."  IEEE  Trans.  Aerospace  &  Elect.  Syst.,  19(4):622-630. 


VALIDATION  TOOLS  FOR  SWOE  SCENE  GENERATION  PROCESS 


Max  P.  Bleiweiss 
U.S.  Army  Research  Laboratory 
Battlefield  Environment  Directorate 
White  Sands  Missile  Range,  New  Mexico  88002,  U.S.A. 

J.  Michael  Rollins 

Science  and  Technology  Corporation 
Las  Cruces,  New  Mexico  88011,  U.S.A 


ABSTRACT 

The  Smart  Weapons  Operability  Enhancement  (SWOE)  scene  generation 
process  involves  the  simulation  of  infrared  radiance  fields  produced  by  a 
variety  of  terrain  and  vegetation  environments  under  diverse  weather 
conditions.  As  part  of  the  validation  procedure,  an  ensemble  of  land-based 
and  airborne  terrain  images  were  captured  at  two  locations— Grayling, 
Michigan,  and  Yuma,  Arizona.  These  mid  and  far  infrared  images  were 
compared  on  a  frame-by-frame  basis  to  synthetic  images  of  the  same  scenes 
generated  by  the  SWOE  process.  Statistical  procedures  as  well  as  metrics  for 
feature  segmentation  were  implemented  in  the  comparison  to  assess  the 
accuracy  with  which  the  SWOE  process  generates  a  facsimile  of  the  real  scene 
radiances.  A  description  of  the  image  registration  process,  the  analytical  tools 
used,  and  qualitative  results  for  the  SWOE  process  are  presented. 

1.  INTRODUCTION 

This  paper  describes  some  of  the  image  analysis  methodology  used  in  the  validation  of  the 
Smart  Weapons  Operability  Enhancement  (SWOE)  Joint  Test  and  Evaluation  (JT&E)  scene 
generation  process.  A  diverse  collection  of  measurement  and  analysis  techniques  was  applied 
for the dual purpose of comparing real and synthetic images and of investigating methods for
automatic segmentation of homogeneous regions of interest within images of natural terrain.

The  primary  means  of  validation  involves  the  comparison  of  real  and  synthetic  image 
histograms  using  the  chi  square  statistic.  A  secondary  effort,  reported  here,  seeks  to  develop 
a  tool  chest  of  various  metrics  that  can  be  used  to  compare  and  contrast  image  features  within 
real  images  and  between  real  and  synthetic  images.  Metrics  that  display  invariance  to 
temporal  changes  in  environmental  conditions  are  of  special  interest  in  this  paper. 


273 


2.  REAL  IMAGE  ACQUISITION 

As  described  previously  (Bleiweiss  et  al.  1994a),  a  large  number  of  paired  far  and  mid 
infrared  images  of  natural  terrain  were  acquired  using  an  AGEMA  BRUT  880  system.  The 
imagery  was  acquired  at  sites  near  Grayling,  Michigan,  and  Yuma,  Arizona.  The  acquisition 
took  place  during  periods  of  greatest  seasonal  change  (near  the  spring  and  fall  equinoxes)  to 
capture  as  wide  a  range  of  variability  in  scene  radiances  as  possible. 

A  subset  of  the  images  acquired  was  selected  randomly  and  was  used  for  comparison  with 
corresponding  synthetic  images  generated  by  the  SWOE  process.  A  typical  real  image  is 
shown  in  figure  1,  with  its  synthetic  counterpart  in  figure  2.  A  comprehensive  presentation 
and  analysis  of  the  results  is  given  in  a  report  that  is  being  prepared  by  the  SWOE  JT&E 
office.  Qualitative  comparison  results  are  discussed  in  this  paper  to  highlight  the  utility  of 
the  various  scene  characterization  metrics  used  in  this  image  analysis. 


Figure 1. Mid infrared image from Grayling site.
Figure 2. Synthetic counterpart of image in Figure 1.


3.  IMAGE  COMPARISON  PROCESS 

The  metric  comparison  of  real  and  synthetic  images  involved  the  following  process: 

1.  Based  on  knowledge  of  the  artificial  scene  generation  process,  designate 
appropriate  image  analysis  tools  as  determined  from  experience  and  literature 
searches.  These  tools  should  be  able  to  provide  succinct,  objective,  and 
intuitively  meaningful  descriptions  of  image  regions  of  interest  for  comparison 
between  real  and  synthetic  counterparts. 


274 


2. Align images through objective registration process so that comparisons will
always  involve  exactly  the  same  scene  features.  Without  proper  registration, 
objective  comparison  is  impossible. 

3.  Apply  each  designated  metric  to  the  same  regions  of  interest  in  each  real  and 
synthetic  image. 

4.  Investigate  the  differences  between  the  real  and  synthetic  metrics  for 
corresponding images and determine a useful measure of significance for the
differences. If the metrics are for contrasting dissimilar types of terrain,
measure the mean and standard deviation for each homogeneous type of terrain
and  establish  whether  or  not  the  metrics  demonstrate  sufficient  discrimination 
between  classes.  If  the  metrics  or  the  histogram  values  are  to  be  used  to 
establish  whether  regions  of  interest  are  from  the  same  population,  use 
techniques  to  compare  distributions  such  as  chi-square  techniques. 

5.  Based  on  the  resulting  agreement  or  disagreement  from  the  comparisons, 
observe  the  performance  of  the  scene  generation  process  in  terms  of  each 
metric  and  propose  corrective  modifications  to  the  process  as  necessary. 

4.  IMAGE  REGISTRATION 

In  order  to  make  valid  comparisons  between  real  and  synthetic  images,  it  is  necessary  to 
register  each  image  from  a  site  to  a  given  reference  image.  This  process  ensures  that  the 
regions of interest selected for analysis in each image correspond to the same area of terrain.
Otherwise, the comparison becomes meaningless with offsets greater than 1 to 2 pixels.
During the registration process, the initial misregistration was typically 5 to 7 pixels in both
the horizontal and vertical directions. If the region of interest is within a tree, this amount
of offset is enough to mistakenly include a large number of pixels from the background,
which is of a considerably different statistical population in terms of radiance values and
texture.  Since  different  images  of  a  scene  may  contain  different  features  near  the  boundaries, 
only  a  subset  of  each  image  is  common  to  all  images.  In  this  registration  process,  a  128x128- 
pixel  region  was  cropped  from  the  original  larger  image. 

The method of registration ultimately employed in this effort involved the use of normalized
cross correlation between a given image and the reference. In essence, the image is
converted to its two-dimensional finite Fourier transform (FFT) and normalized at each
coefficient to have a magnitude of 1. The same operation is performed on the reference
image. The FFT arrays are multiplied at each coefficient, and the resulting array is inverse
transformed.
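As a rough illustration of this normalized (phase) correlation step, the following sketch assumes real-valued images of equal size and uses NumPy; the conjugation of the reference spectrum is the usual form of the cross-power spectrum and is an assumption here, since the text simply says the arrays are multiplied.

    import numpy as np

    def phase_correlation_offset(image, reference):
        # Estimate integer-pixel misregistration via normalized cross correlation.
        F = np.fft.fft2(image)
        G = np.fft.fft2(reference)
        cross = F * np.conj(G)                       # cross-power spectrum
        cross /= np.maximum(np.abs(cross), 1e-12)    # unit magnitude at each coefficient
        corr = np.fft.ifft2(cross).real              # normalized correlation surface
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        # interpret peaks past the midpoint as negative (cyclic) offsets
        dy = peak[0] if peak[0] <= corr.shape[0] // 2 else peak[0] - corr.shape[0]
        dx = peak[1] if peak[1] <= corr.shape[1] // 2 else peak[1] - corr.shape[1]
        return dy, dx, corr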

The  registration  process  was  two  tiered.  The  first  tier  consisted  of  the  display  of  the 
reference image and the image to be registered superimposed over the same area of the
screen.  Arrow  keys  were  assigned  movements  that  allowed  the  registration  image  to  be 
moved  with  respect  to  the  reference  image.  In  this  way,  a  specific  tree  as  seen  in  both 


275 


images,  for  instance,  could  be  visually  aligned.  This  process  requires  that  the  registration 
image  have  enough  contrast  to  contribute  useful  feature  information  to  the  superposition. 
Even  when  contrast  is  comparatively  high,  the  sharpness  of  the  imagery  is  usually  not 
sufficient  for  visual  registration  better  than  2  or  3  pixels.  Thus  the  visual  registration  process 
is  followed  by  the  slower  but  more  accurate  phase  correlation  process  (Tian  and  Huhns  1986) 
in the second tier. The output array from the phase correlation process represents the
normalized correlation between the two images. The position of maximum magnitude
determines the amount of misregistration in each direction. The offsets required for
registration  can  then  be  recast  as  phase  shifts  for  each  FFT  coefficient  for  the  image  to  be 
registered.  Once  the  phase  shifts  are  performed  in  the  frequency  domain,  an  inverse 
transform  is  implemented  and  the  resulting  image  is  registered  correctly. 

The frequency domain registration techniques can be used to implement registration accurate
to the sub-pixel level. In the normalized phase correlation map, a sub-pixel location of the
true maximum can be determined (accurate to about 0.1 pixel) by using two-dimensional
quadratic interpolation over the pixel with the maximum amplitude and its eight nearest
neighbors. In our procedure, we returned misregistration offsets with quarter-pixel accuracy
to the frequency domain and implemented the corresponding phase shifts.
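A minimal sketch of the sub-pixel peak refinement follows, using separable one-dimensional quadratic fits through the peak and its immediate neighbors rather than a full nine-point surface fit; this is a simplification of the procedure described above.

    import numpy as np

    def subpixel_peak(corr):
        # Locate the correlation maximum, then refine it with parabolic fits.
        iy, ix = np.unravel_index(np.argmax(corr), corr.shape)

        def parabolic_offset(fm, f0, fp):
            denom = fm - 2.0 * f0 + fp
            return 0.0 if denom == 0 else 0.5 * (fm - fp) / denom

        ny, nx = corr.shape
        dy = parabolic_offset(corr[iy - 1, ix], corr[iy, ix], corr[(iy + 1) % ny, ix])
        dx = parabolic_offset(corr[iy, ix - 1], corr[iy, ix], corr[iy, (ix + 1) % nx])
        return iy + dy, ix + dx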

It  is  important  to  note  that  phase  shifts  corresponding  to  quarter-pixel  translations  require  a 
four-fold  lengthening  of  the  signal  vectors  in  the  frequency  domain.  Suppose,  for  instance, 
that  a  given  row  in  the  frequency  domain  has  N  =  128  elements.  The  row  must  be 
lengthened  to  512  elements,  the  first  and  last  64  being  the  same  as  before,  but  those  in  the 
center  being  assigned  a  value  of  zero.  This  represents  the  same  information  on  a  finer 
resolution grid. The center values being set to zero correctly indicates that no information
at frequencies higher than the 63rd harmonic is present. The phase shifting is implemented
and  the  resulting  data  are  inverse  transformed,  subsampled  to  the  original  resolution,  and 
stored  as  the  registered  image. 
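A minimal sketch of this zero-padded, quarter-pixel phase shift for a single row is given below (NumPy). Placing the original coefficients in the first and last N/2 positions mirrors the description above; the loose treatment of the Nyquist term is an acceptable simplification for illustration.

    import numpy as np

    def subpixel_shift_row(row, shift, upsample=4):
        # Shift a 1-D signal by a sub-pixel amount (in original pixels) using
        # frequency-domain zero padding; N is assumed even (e.g. 128).
        n = row.size
        spec = np.fft.fft(row)
        m = n * upsample
        padded = np.zeros(m, dtype=complex)
        padded[:n // 2] = spec[:n // 2]          # first N/2 coefficients kept
        padded[-(n // 2):] = spec[-(n // 2):]    # last N/2 coefficients kept
        k = np.fft.fftfreq(m)                    # frequencies on the finer grid
        padded *= np.exp(-2j * np.pi * k * shift * upsample)   # phase shift
        fine = np.fft.ifft(padded).real * upsample             # rescale amplitude
        return fine[::upsample]                  # subsample back to the original grid

An image is handled by applying the same operation along rows and then along columns, or equivalently with a two-dimensional phase ramp.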

Suppose  a  high-resolution  grid  had  not  been  used  in  the  frequency  domain,  but  a  quarter-pixel 
phase  shift  had  been  implemented.  The  resulting  image  upon  inverse  transforming  would  be 
complex,  not  real.  This  is  due  to  the  fact  that  phase  shifts  corresponding  to  sub-pixel 
amounts  destroy  the  frequency  domain  arrangement  in  which  the  coefficients  above  N/2  are 
complex  conjugates  of  those  below  N/2.  This  arrangement  is  required  to  represent  real 
images. 


The  ability  to  use  the  frequency  domain  in  shifting  the  images  was  also  important  in  some 
cases where the images could not otherwise be registered because the real image scenery was
a little too far to the left of the synthetic scenery to have a 128x128 pixel intersection. Using
the  cyclical  representation  of  the  image  in  the  frequency  domain,  such  images  were  scrolled 
slightly  into  an  adjacent  cycle  so  that  as  many  features  in  the  main  image  cycle  could  align 
with  those  in  the  reference  image  as  possible. 

Positions  of  certain  features  in  the  newly  registered  images  were  noted  and  compared  to  the 
positions  of  the  same  features  in  the  reference  images.  It  was  found  that  the  phase  correlation 


276 


process  worked  very  well  in  most  cases.  Conditions  for  which  the  visual  registration 
overrode  the  phase  correlation  results  were  rare  and  occurred  principally  in  the  mid  infrared 
images  when  the  sun  angle  was  low  enough  to  cause  bright  outlines  and  shadows,  changing 
the  apparent  structural  content  of  the  images.  Occasionally,  when  the  signal-to-noise  level 
of  an  image  was  very  low,  the  phase  correlation  method  indicated  a  maximum  at  an  obviously 
incorrect  position.  However,  the  phase  correlation  method  in  general  was  much  more 
accurate  in  measuring  the  offset  or  indicating  correct  registration  than  the  purely  visual 
superposition  method. 

5.  METRICS  USED 

In  a  previous  paper  (Bleiweiss  et  al.  1994a),  several  metrics  for  the  characterization  of 
histogram  information,  texture  and  structure  of  image  features  were  demonstrated.  These 
included the mean, median, maximum, minimum, variance, standard deviation, absolute
deviation, skewness, kurtosis, autocorrelation integral correlation lengths, variance-based
clutter, gray-level co-occurrence matrix (GLCM) statistics, and wavelet compaction. In
addition  to  these,  autocorrelation  function  (ACF)  slope  correlation  lengths  and  fractal 
dimension  are  discussed  here. 

The  correlation  length  determined  from  the  slope  of  the  autocorrelation  function  is  based  on 
the representation of the ACF as an exponential (Ben-Yosef et al. 1985):

    ACF(r) ≈ exp(-a r)                                                  (1)

where r is the offset from ACF(0). The correlation length may thus be represented as

    l = 1/a                                                             (2)

The slope a is determined numerically by calculating ln[ACF(r)] close to r = 0 and
performing a linear regression with respect to r. The fractal dimension is similarly based on
the slope of the power spectrum, and is discussed further elsewhere (Rollins and Peterson,
these Proceedings).
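A minimal sketch of the slope-based correlation length of equations (1) and (2) follows, assuming a row-averaged one-dimensional ACF and a small number of lags for the regression; the lag range actually used in the study is not specified here.

    import numpy as np

    def acf_slope_correlation_length(region, max_lag=5):
        # Row-averaged ACF; fit ln ACF(r) = -a r near r = 0 and return l = 1/a.
        region = np.asarray(region, dtype=float)
        region = region - region.mean()
        acf = np.zeros(max_lag + 1)
        for row in region:
            full = np.correlate(row, row, mode='full')
            mid = full.size // 2                      # zero-lag index
            acf += full[mid:mid + max_lag + 1]
        acf /= acf[0]                                 # normalize so ACF(0) = 1
        r = np.arange(max_lag + 1)
        slope = np.polyfit(r, np.log(np.maximum(acf, 1e-12)), 1)[0]
        a = -slope
        return 1.0 / a if a > 0 else np.inf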

First-order  statistics  are  useful  in  assessing  the  macroscopic  performance  of  the  thermal 
radiance  and  diffusion  model  employed  in  the  SWOE  process.  The  emphasis  here  is  on  the 
accuracy of the temporal aspect of the model and its ability to predict thermal evolution as
a function of time based on initial conditions. Second-order statistics such as the GLCM
metrics and correlation lengths are useful in assessing reliable texture simulation and provide
information on the accuracy of fine spatial detail simulated by the SWOE process. Second-
order statistics are also used for their characterization of the structure present in a region of
interest.  Second-order  statistics  that  demonstrate  invariance  to  the  diurnal  cycle  are  desirable. 


277 


6.  IMAGERY  MEASUREMENTS  AND  COMPARISON  RESULTS 
6.1  Evaluation  of  Metrics 

At  this  time,  the  analysis  of  real-to-synthetic  measurement  comparisons  for  significance  is 
ongoing.  The  results  will  be  given  in  the  final  SWOE  report.  The  first-order  comparison 
made  between  histograms  of  the  real  and  synthetic  images  uses  the  chi-square  statistic  at  95% 
confidence  to  accept  or  reject  the  hypothesis  that  the  samples  come  from  the  same 
distribution.  A  secondary  effort  is  the  evaluation  of  the  metrics  themselves  as  useful  tools 
in the comparison of like regions or contrast of dissimilar ones. A complete set of images
registered  with  sub-pixel  accuracy  has  been  produced  and  compared  to  their  corresponding 
synthetic  images.  Based  on  initial  correlation  analysis,  the  first-order  histogram  measures 
correlate  well  between  the  real  and  synthetic  images  and  better  than  the  second-order 
measures.  The  second-order  measures  showing  the  highest  correlation  were  the  GLCM 
entropy  and  the  ACF  slope-based  correlation  length.  The  wavelet-based  metrics  showed  poor 
agreement. 
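For the first-order histogram comparison mentioned above, a hedged sketch of a two-sample chi-square test at 95% confidence is given below (NumPy and SciPy). The bin count and the (R - S)^2/(R + S) form of the statistic are illustrative assumptions, not the exact implementation used in the SWOE validation.

    import numpy as np
    from scipy.stats import chi2

    def same_distribution(real_img, synth_img, bins=64, alpha=0.05):
        # Two-sample chi-square comparison of gray-level histograms.
        lo = min(real_img.min(), synth_img.min())
        hi = max(real_img.max(), synth_img.max())
        r_hist, _ = np.histogram(real_img, bins=bins, range=(lo, hi))
        s_hist, _ = np.histogram(synth_img, bins=bins, range=(lo, hi))
        mask = (r_hist + s_hist) > 0                       # ignore empty bins
        stat = np.sum((r_hist[mask] - s_hist[mask]) ** 2 /
                      (r_hist[mask] + s_hist[mask]))
        dof = mask.sum() - 1
        critical = chi2.ppf(1.0 - alpha, dof)
        return stat <= critical, stat, critical            # True = same population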

In  some  cases,  it  may  be  desirable  for  image  metrics  to  exhibit  invariance  to  environmental 
changes. For example, table 1 gives results for some of the metrics in regions containing
foliage  over  the  full  testing  period,  during  which  seasonal  and  diurnal  changes  took  place. 

Table 1. Selected Metrics from Grayling and Yuma Sites

Metric                     Grayling                         Yuma
                           Mean    Std. Dev.   Ratio        Mean    Std. Dev.   Ratio
Clutter                    0.07    0.09        0.778        0.37    0.33        1.121
Correlation Length         2.43    1.82        1.335        3.87    0.97        3.990
Fractal Dimension          1.72    0.59        2.915        1.80    0.32        5.625
Horizontal Centroid        1.12    0.33        3.394        1.30    0.26        5.000
GLCM Entropy               0.67    0.95        0.705        3.46    1.07        3.234
GLCM Contrast              0.19    0.18        1.05         2.91    3.43        0.848

The data in table 1 show that the fractal dimension and the horizontal centroid display more
invariance to diurnal effects than the other measures, if the ratio of the mean to the standard
deviation is used to indicate invariance.


It  should  be  noted  that  even  though  the  Grayling  and  Yuma  regions  of  interest  included 
foliage,  the  Grayling  region  was  completely  within  a  single  coniferous  tree,  while  the  Yuma 
region  contained  as  much  foliage  as  possible  but  also  contained  bare  soil,  with  a  significantly 
different  apparent  radiance.  Thus  the  Grayling  region  can  be  characterized  as  a  radiance  field 
with dull, amorphous structures, while the Yuma region is described by stark contrast
between  foliage  and  the  background.  As  such  contrast  is  necessary  for  visual  identification 


278 


Figure 3. Plot of GLCM contrast with diurnal cycle from Yuma site region of interest.
Figure 4. Plot of fractal dimension with diurnal cycle from Yuma site region of interest.


of  discrete  objects,  examining  regions  with  both  foliage  and  background  has  its  own 
importance  in  designing  simulators  to  be  used  in  predictions  of  recognition  range.  Such 
contrast  for  boundaries  between  different  textures  is  also  important  to  the  reliability  of  the 
correlation  length  as  an  intuitive  measure  of  structure  extent  in  these  images. 

Plots  of  the  GLCM  contrast  and  fractal  dimension  for  a  Yuma  region  with  thick  foliage  are 
given  in  figures  3  and  4,  respectively.  The  sensitivity  of  the  contrast  to  the  diurnal  cycle  is 
clear,  while  the  fractal  dimension  seems  to  indicate  a  greater  invariance  to  diurnal  changes. 

7.  CONCLUSIONS 

In  this  paper,  a  description  of  some  of  the  ongoing  SWOE  image  analysis  has  been  presented. 
The  registration  technique  has  been  demonstrated  to  be  both  necessary  and  accurate.  We 
have shown that the chosen image metrics are uniquely descriptive of scene content. Certain
metrics have been found to show a degree of spatial invariance (Bleiweiss et al. 1994b) or
temporal invariance, as seen above. For instance, this work has shown that the fractal
dimension and the horizontal centroid retain much of the same information about the texture
and structure in a scene from trial to trial, regardless of changes in thermal conditions.


REFERENCES 


Ben-Yosef, N., K. Wilner, S. Simhony, and G. Feigin, 1985. "Measurement and Analysis
of  2-D  Infrared  Natural  Background."  Applied  Optics,  24(14):2109-2113. 

Bleiweiss,  M.P.,  M.  Rollins,  and  C.  Chaapel,  1994a.  "Analysis  of  Infrared  Background 
Scenes from the Grayling I SWOE JT&E Field Test." In Proceedings of the 1993
Battlefield  Atmospherics  Conference,  U.S.  Army  Research  Laboratory,  White  Sands 
Missile  Range,  New  Mexico,  pp  282-295. 

Bleiweiss,  M.P.,  M.  Rollins,  C.  Chaapel,  and  R.  Berger,  1994b.  "Analysis  of  Real  Infrared 
Scenes Acquired for SWOE JT&E." In Proceedings of the 1994 International
Geoscience  and  Remote  Sensing  Symposium,  in  press. 

Rollins,  J.M.,  and  W.  Peterson,  these  Proceedings.  "Clutter  Characterization  Using  Fourier 
and  Wavelet  Techniques. " 

Tian,  Q.,  and  M.N.  Huhns,  1986.  "Algorithms  for  Subpixel  Registration."  Computer 
Vision, Graphics, and Image Processing, 35:220-233.


280 


THE  VEHICLE  SMOKE  PROTECTION  MODEL  DEVELOPMENT  PROGRAM 


David  J.  Johnston 
OptiMetrics,  Inc. 

Bel  Air,  Maryland  21015-6181 

William  G.  Rouse 

U.S.  Army  Edgewood  Research,  Development,  and  Engineering  Center 
Aberdeen  Proving  Ground,  Maryland  21010-5423 


ABSTRACT 

This  paper  reports  on  work  in  progress  to  adapt  existing  methodologies  to 
develop  a  Vehicle  Smoke  Protection  Model.  The  objective  of  this  effort  is  to 
produce  a  data-rich  model  that  will  become  the  standard  technique  for  simulating 
on-vehicle  smoke  protection  systems.  Other  types  of  obscurants  and  dispensing 
mechanisms  may  also  be  included.  Software  will  be  designed  and  constructed 
using  object-oriented  techniques  so  that  the  simulation  modules  can  be  used  in  a 
stand-alone  mode  or  adapted  for  use  in  other  applications,  including  distributed 
interactive  simulations. 


1.  INTRODUCTION 

In  1993,  the  Defense  Science  Board  convened  a  Task  Force  on  Simulation,  Readiness,  and 
Prototyping  to  assess  the  impact  of  simulation  technology  on  U.S.  forces.  In  its  findings,  the 
task  force  enthusiastically  embraced  distributed  interactive  simulation  (DIS)  for  training 
applications  and  strongly  encouraged  its  continued  use.  In  addition,  it  recognized  that  DIS 
could  transform  the  acquisition  process  if  it  were  used  to  support  materiel  development,  combat 
development,  training  development,  and  operational  testing.  As  a  result,  several  advanced 
technology  demonstrations  (ATD)  have  been  planned  which  will  make  extensive  use  of  this 
simulation  technology. 

A  major  effort  is  now  underway  to  enhance  the  DIS  architecture  so  that  it  can  be  used  in  ATDs 
and  similar  applications.  This  is  being  accomplished  through  a  rigorous  standardization  process 
with  the  voluntary  cooperation  of  numerous  organizations  from  government,  industry,  and 
academia.  The  architectural  enhancements  are  required  because  DIS  cannot  currently  simulate 
complex  battlefield  interactions  in  a  physically  realistic  manner  and  it  has  insufficient  resolution 
for  detailed  system  studies.  When  DIS  is  finally  ready  to  support  dynamic  effects,  many 
operations  will  be  added  to  the  DIS  environment  that  cannot  currently  be  modeled  with 


281 


acceptable  fidelity.  This  includes  the  employment  of  smoke  and  obscurants  on  the  virtual 
battlefield. 

A  number  of  techniques  have  been  developed  over  the  years  to  model  the  production,  transport, 
diffusion,  and  effect  of  smoke  and  obscurants  on  the  battlefield.  Two  of  these,  GRNADE  and 
COMBIC,  have  gained  considerable  acceptance  and  are  part  of  the  Electro-Optical  Systems 
Atmospheric  Effects  Library  (EOSAEL).  GRNADE  simulates  multiple-round  salvos  of  tube- 
launched  grenades  (L8A1  and  M76)  and  is  used  for  self-screening  analysis  (Davis,  Sutherland 
1987).  COMBIC  is  a  more  comprehensive  model  that  simulates  several  obscurant  sources, 
including:  high  explosive  dust;  vehicular  dust;  phosphorus  and  hexachloroethane  munitions; 
diesel  oil  fires;  generator-disseminated  fog  oil  and  diesel  fuel;  and,  other  screening  aerosols 
(Hoock  et  al.  1987).  It  is  used  in  numerous  and  diverse  applications. 

While  GRNADE  and  COMBIC  are  accepted  standards,  they  are  somewhat  dated  and  do  not 
explicitly  simulate  many  of  these  systems,  sources,  and  materials  currently  in  service  or  under 
consideration.  In  addition,  a  number  of  deficiencies  have  been  noted  which  influence  their 
fidelity.  Given  the  expanded  role  of  simulation  technology  in  research,  development,  and 
acquisition,  an  immediate  need  exists  for  an  updated  standard.  This  is  particularly  true  for  on- 
vehicle  smoke  protection  systems  because  they  are  most  likely  to  be  included  in  DIS  simulations. 

OptiMetrics,  Inc.  is  addressing  this  need  by  developing  a  Vehicle  Smoke  Protection  Model. 
This  effort  is  being  conducted  under  contract  to  the  U.S.  Army  Tank-Automotive  Research, 
Development,  and  Engineering  Center  (TARDEC)  and  in  cooperation  with  the  U.S.  Army 
Edgewood  Research,  Development,  and  Engineering  Center  (ERDEC).  This  paper  reports  on 
that  work  in  progress. 

2.  OBJECTIVES  AND  SCOPE 

The  objective  of  this  program  is  to  produce  a  data-rich  model  that  will  become  the  standard 
technique  for  simulating  on-vehicle  smoke  protection  systems.  This  will  be  achieved  by  building 
upon  and  enhancing  the  standard  methods  for  simulating  smoke  and  obscurant  production  (i.e. 
GRNADE and COMBIC) and related models. Self-screening systems will be emphasized, but
other  types  of  obscurants  and  dispensing  mechanisms  may  also  be  included. 

The  goal  is  to  increase  the  resolution  of  the  simulation  process,  improve  its  overall  fidelity,  and 
package  the  product  in  a  manner  that  will  facilitate  its  use  in  a  wide  variety  of  applications, 
including DIS simulators. The program is focused on smoke production. Consequently,
existing predictive techniques for transport, diffusion, radiative transfer, etc. will be used to the
maximum extent possible and the Vehicle Smoke Protection Model will not deviate significantly
from current procedures. Puffs and plumes will, for example, still be described by three-
dimensional Gaussian distributions.

Rapid obscuration systems (ROS) will be addressed first, followed by obscuration reinforcing
systems (ORS) and all other dispensers. Any vehicle, dispensing system, grenade, or obscurant


282 


may be modeled from user-specified parameters, but a descriptive database will be constructed
and it will include most fielded and developmental items. The vehicle database will, for example,
include the M1 family of main battle tanks; M2/3 family of fighting vehicles; M88A1E1
Improved Recovery Vehicle; CATTB/CCATTD; Breacher; Heavy Assault Bridge; Armored
Gun System; and M93 Reconnaissance Vehicle (FOX).

3.  APPROACH 

The  Vehicle  Smoke  Protection  Model  development  program  is  being  conducted  in  three  phases: 
analysis,  design,  and  development.  The  program  is  currently  in  the  analysis  and  design  phases, 
which  are  being  conducted  concurrently. 

In  the  analysis  phase,  GRNADE,  COMBIC,  and  related  models  are  being  examined  to  identify 
the  simulation  techniques  that  are  used  for  different  sources  and  materials.  The  algorithms  and 
parameters will be described in a set of flow charts with supporting documentation. They will
also be evaluated using the Concentration and Path Length (CL) Product Visualization Utility
(paragraph 4) to determine how well they simulate the smoke production process. If the
algorithms produce satisfactory results, they will be included in the Vehicle Smoke Protection
Model without modification. Otherwise, an alternative simulation technique will be sought.

In  the  second  phase,  the  Vehicle  Smoke  Protection  Model  is  being  designed  using  object- 
oriented  techniques  (Coad,  Yourdon  1991).  A  preliminary  design  is  described  in  paragraph  5. 
The  results  of  this  phase  will  be  documented  in  a  report  and  submitted  to  ERDEC  and  the  Army 
Research  Laboratory  -  Battlefield  Effects  Directorate  (ARL-BED)  for  evaluation. 

The Vehicle Smoke Protection Model will be coded and implemented in the development phase
using object-oriented programming techniques. The model will be completely self-contained and
may be used in a stand-alone mode to support the analysis of smoke and obscurant effectiveness.
In addition, its classes may be used independently to provide similar functionality in other
applications. This will be particularly useful in the DIS arena.

4.  CL-PRODUCT  VISUALIZATION  UTILITY 

As  described  in  paragraph  3,  a  CL-Product  Visualization  Utility  has  been  developed  to  aid  in  the 
analysis  of  smoke  production  algorithms  and  investigate  alternatives.  This  utility  operates  on 
IBM  or  compatible  personal  computers  and  runs  under  Microsoft  Windows™.  It  was  adapted 
from the CL computational routines in GRNADE (EOSAEL-1992 version).

The  CL-Product  Visualization  Utility  operates  on  a  set  of  files  that  record  puff  and  plume 
information  in  a  specified  format.  These  files  can  be  produced  by  GRNADE  and  COMBIC 
(with  proper  modifications)  or  by  any  other  application  (e.g.,  a  spreadsheet)  that  can  describe 
puffs  and/or  plumes  as  a  function  of  time.  For  a  given  puff  or  plume  and  for  each  sample  (time 
increment)  in  the  descriptive  data,  the  utility  computes  a  CL-product  matrix  for  the  front,  side, 
and  top  views  as  depicted  in  Figure  1.  The  results  are  then  displayed  in  accordance  with  a 


283 


mapping  scheme  that  associates  a  range  of  CL-products  with  a  specified  color.  This  enables  the 
user  to  visualize  how  the  obscurant  material  is  distributed  in  three-dimensional  space  and  how 
that  distribution  changes  with  time. 
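To illustrate the computation, the following sketch evaluates the CL-product matrix for the top view of a single Gaussian puff, using the closed-form result that integrating a three-dimensional Gaussian concentration along the view axis leaves a two-dimensional Gaussian. The function name, grids, and parameters are illustrative; this is not the GRNADE or COMBIC routine itself.

    import numpy as np

    def cl_product_top_view(mass, center, sigmas, grid_x, grid_y):
        # CL-product matrix (mass per unit area) for one Gaussian puff seen from
        # above: the 3-D concentration integrated along the vertical axis reduces
        # to a 2-D Gaussian scaled by the total puff mass.
        cx, cy, _ = center
        sx, sy, _ = sigmas
        X, Y = np.meshgrid(grid_x, grid_y)
        return (mass / (2.0 * np.pi * sx * sy)) * np.exp(
            -((X - cx) ** 2) / (2.0 * sx ** 2) - ((Y - cy) ** 2) / (2.0 * sy ** 2))

    # Front and side views follow by integrating along y or x instead; summing the
    # matrices of every puff and plume at a time step gives the full CL map.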



Figure  1.  CL-Product  Visualization  Utility  computational  procedure. 

To  illustrate  how  this  utility  is  being  used  in  the  analysis  phase,  consider  the  manner  in  which 
GRNADE  simulates  the  M76  grenade.  The  model  computes  a  detonation  point  that  is  thirty 
(30) meters from the launcher along a calculated azimuth and four (4) meters above the ground.
The  detonation  occurs  0.75  seconds  after  launch  and  a  simulated  smoke  cloud  is  formed. 
GRNADE  models  this  cloud  as  a  small  spherical  initial  burst  puff  (Figure  2)  that  grows  larger 
and  moves  downwind  as  a  function  of  time.  The  CL-Product  Visualization  Utility  displays  these 
changes in a series of frames, such as those depicted in Figures 3 and 4.


Figure  2.  M76  Grenade  modeled  as  a  spherical  puff. 

It  has  been  suggested  that  this  simulation  does  not  accurately  model  the  manner  in  which  the 
M76  grenade  functions,  particularly  in  the  initial  build-up  phase.  Field  tests  have  demonstrated 
that  the  obscurant  is  released  very  rapidly  after  the  grenade  is  detonated  and  a  large  smoke  cloud 
is  formed  almost  instantaneously.  Furthermore,  the  cloud  does  not  have  a  spherical  shape;  the 


284 


Figure  3.  CL-Product  Visualization  Utility  sample  output  (M76  Grenade  modeled  as  a  spherical 
puff one second after launch).


Figure  4.  CL-Product  Visualization  Utility  sample  output  (M76  Grenade  modeled  as  a  spherical 
puff  ten  seconds  after  launch). 


285 


obscurant  material  is  generally  distributed  about  the  detonation  point  to  form  a  toroidal  puff.  To 
experiment  with  alternatives,  GRNADE  was  modified  to  model  the  resulting  smoke  cloud  as  a 
collection  of  six  small  spherical  sub-puffs,  which  are  distributed  about  the  detonation  point  to 
form  a  torus  (Figure  5).  The  CL-Product  Visualization  Utility  was  then  used  to  examine  this 
approach  and  determine  if  it  improved  the  fidelity  of  the  simulation  process  (Figures  6  and  7). 
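As a rough sketch of the alternative geometry, the fragment below places six equal sub-puff centers on a horizontal ring about the detonation point; the ring radius and the equal split of the fill mass are illustrative assumptions rather than values taken from GRNADE or field data.

    import numpy as np

    def toroidal_subpuffs(detonation_point, ring_radius, total_mass, n_subpuffs=6):
        # Place equal-mass sub-puff centers on a horizontal ring about the
        # detonation point (ring_radius and the equal mass split are assumptions).
        x0, y0, z0 = detonation_point
        angles = 2.0 * np.pi * np.arange(n_subpuffs) / n_subpuffs
        centers = np.column_stack((x0 + ring_radius * np.cos(angles),
                                   y0 + ring_radius * np.sin(angles),
                                   np.full(n_subpuffs, z0)))
        return centers, total_mass / n_subpuffs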


Figure  5.  M76  Grenade  modeled  as  a  toroidal  puff  with  six  spherical  sub-puffs. 




Figure  6.  CL-Product  Visualization  Utility  sample  output  (M76  Grenade  modeled  as  a  toroidal 
puff  one  second  after  launch). 


286 


Figure  7.  CL-Product  Visualization  Utility  sample  output  (M76  Grenade  modeled  as  a  toroidal 
puff  ten  seconds  after  launch). 

5.  VEHICLE  SMOKE  PROTECTION  MODEL  PRELIMINARY  DESIGN 

A preliminary design has been developed for the Vehicle Smoke Protection Model and it will
serve  as  the  foundation  for  the  evolving  design.  In  its  simplest  form,  the  model  consists  of  the 
six  classes  depicted  in  Figure  8. 


Figure  8.  Vehicle  Smoke  Protection  Model  preliminary  design. 


In  this  design,  vehicles  are  loaded  with  expendable  material,  such  as  smoke  grenades  and  fog  oil. 

The  vehicles  carry  this  material  until  an  initiation  event  occurs,  at  which  time  the  obscurant  is 


287 


released and a smoke cloud is produced. The formation and fate of the cloud is influenced by
the terrain and atmosphere. This simple representation has been expanded, as depicted in Figure
9,  and  additional  details  will  be  added  as  the  design  matures. 


In  the  expanded  design,  the  vehicle  class  comprises  numerous  components,  which  (for  a  given 
vehicle)  might  include  grenade  discharger  tubes  and/or  smoke  generators.  The  location  and 
orientation of these components must be known when a grenade is launched or obscurant
material is released because they determine where, in three-dimensional space, the smoke cloud
is formed. This will be particularly important when the Vehicle Smoke Protection Model is
included in DIS applications where vehicle position and orientation cannot be known a priori.
Similarly,  the  state  of  rotating  and  articulated  components  cannot  be  predicted  in  advance. 
Consequently,  the  Vehicle  Smoke  Protection  Model  must  include  a  general  method  for 
computing  component  location  and  orientation  given:  a  mounting  hierarchy;  vehicle  location 
and  orientation;  and,  the  state  of  components  in  the  mounting  hierarchy.  Note:  the  mounting 
hierarchy  might  be  different  from  the  parts  breakdown  structure.  This  methodology  must  be 
compatible  with  the  DIS  standard  (Institute  for  Simulation  and  Training  1994). 


This  point  is  illustrated  by  the  MlAl  tank  in  Table  1  and  Figure  10.  The  vehicle  has  a  number 
of  components,  each  of  which  can  be  considered  to  have  its  own  coordinate  system. 
Components  are  mounted  with  an  offset  (three-dimensional  translation)  and  orientation  (three- 
dimensional  rotation)  in  accordance  with  the  vehicular  design.  However,  some  components  are 
also free to move within specified constraints. The vehicle (i.e., its hull) is free to move about
the  terrain  and  assume  any  orientation  that  the  topography  permits.  The  turret  is  free  to  rotate 
in  any  direction  about  its  axis. 

Table 1. Example M1A1 parts breakdown structure and mounting hierarchy.

Whole        Part             Mounted On       Translation (in)            Rotation (deg)
                                               ΔX      ΔY      ΔZ          Yaw      Pitch    Roll
M1A1 Tank    Turret           M1A1 Tank        5       0       -30         0-360    0        0
             VEESS            M1A1 Tank        -156    6       -26         180      -30      0
             M250 Launcher    Turret           5       50      -24         0        25       -12.6
             RH tube #1       RH discharger    0       0       0           0        0        0
             RH tube #2       RH discharger    0       0       0           -10      0        0
             RH tube #3       RH discharger    0       0       0           -20      0        0
             RH tube #4       RH discharger    0       0       0           -30      0        0
             RH tube #5       RH discharger    0       0       0           -40      0        0
             RH tube #6       RH discharger    0       0       0           -50      0        0

M250  Launcher  (right  discharger) 


Figure  10.  Example  MlAl  mounting  hierarchy. 


289 


Given this configuration, consider tube #3 on the right-hand discharger. Its location and
orientation are dependent upon: (1) the location and orientation of the vehicle (hull) with respect
to  the  terrain;  (2)  the  offset  and  orientation  of  the  turret  with  respect  to  the  hull;  (3)  the  offset 
and  orientation  of  the  right-hand  discharger  with  respect  to  the  turret;  and,  (4)  the  offset  and 
orientation  of  the  tube  with  respect  to  the  right-hand  discharger.  The  DIS  standard  specifies 
how  this  information  will  be  expressed  and  reported  to  networked  simulators  through  message 
protocols.  The  computational  procedures  for  calculating  component  location  and  orientation 
are  well  established  and  widely  used  in  such  fields  as  robotic  control  (Paul  1981). 
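A minimal sketch of that chained computation using 4x4 homogeneous transforms is shown below. The Z-Y-X (yaw-pitch-roll) rotation order and the hull pose are illustrative assumptions; the mount offsets follow the example values in Table 1 (inches and degrees), with the turret yaw fixed arbitrarily within its 0-360 degree range.

    import numpy as np

    def pose_matrix(translation, yaw, pitch, roll):
        # 4x4 homogeneous transform from a translation and Z-Y-X Euler angles (deg).
        y, p, r = np.radians([yaw, pitch, roll])
        cy, sy = np.cos(y), np.sin(y)
        cp, sp = np.cos(p), np.sin(p)
        cr, sr = np.cos(r), np.sin(r)
        Rz = np.array([[cy, -sy, 0.0], [sy, cy, 0.0], [0.0, 0.0, 1.0]])
        Ry = np.array([[cp, 0.0, sp], [0.0, 1.0, 0.0], [-sp, 0.0, cp]])
        Rx = np.array([[1.0, 0.0, 0.0], [0.0, cr, -sr], [0.0, sr, cr]])
        T = np.eye(4)
        T[:3, :3] = Rz @ Ry @ Rx
        T[:3, 3] = translation
        return T

    # Hull pose is a placeholder; the mount offsets follow Table 1.
    hull     = pose_matrix((1000.0, 2000.0, 0.0), yaw=45.0, pitch=2.0, roll=0.0)
    turret   = pose_matrix((5.0, 0.0, -30.0), yaw=90.0, pitch=0.0, roll=0.0)
    launcher = pose_matrix((5.0, 50.0, -24.0), yaw=0.0, pitch=25.0, roll=-12.6)
    tube3    = pose_matrix((0.0, 0.0, 0.0), yaw=-20.0, pitch=0.0, roll=0.0)
    tube3_in_world = hull @ turret @ launcher @ tube3
    launch_point = tube3_in_world[:3, 3]     # where the grenade puff originates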

7.  SUMMARY 

Starting  from  vehicle  position  and  attitude  in  variable-terrain  scenarios,  the  Vehicle  Smoke 
Protection  Model  will  be  able  to  provide  the  location,  orientation,  and  initial  cloud 
characteristics  for  diffusion  and  transport  in  any  battlefield  model.  The  resolution  can  be  varied 
to  support  small-scale  one-on-one  simulations  or  large-scale  organizational  wargames.  The 
software  will  be  developed  using  object-oriented  techniques  so  that  it  can  be  readily  used  in 
many  applications. 

REFERENCES 

Coad,  P.  and  E.  Yourdon,  1991:  Object-Oriented  Design,  Prentice-Hall,  Inc.,  Englewood 
Cliffs,  New  Jersey. 

Davis,  R.E.  and  R.A.  Sutherland,  1987:  EOSAEL  87,  Volume  14,  Self-Screening  Applications 
Module  GRNADE.  U.S.  Army  Laboratory  Command,  Atmospheric  Sciences 
Laboratory  Technical  Report,  TR-0221-14,  White  Sands  Missile  Range,  New  Mexico. 

Hoock,  D.W.,  R.A.  Sutherland,  and  D.  Clayton,  1987:  EOSAEL  87,  Volume  11,  Combined 
Obscuration  Model  for  Battlefield-Induced  Contaminants  (COMBIC).  U.S.  Army 
Laboratory  Command,  Atmospheric  Sciences  Laboratory  Technical  Report,  TR-0221- 
11,  White  Sands  Missile  Range,  New  Mexico. 

Institute  for  Simulation  and  Training,  University  of  Central  Florida,  1994:  Proposed  IEEE 
Standard  Draft,  Standard  for  Information  Technology  -  Protocols  for  Distributed 
Interactive  Simulation  Applications,  Version  2.0  (Fourth  Draft),  Orlando,  Florida. 

Paul, R.P. 1981: Robot Manipulators: Mathematics, Programming, and Control. The
Computer Control of Robot Manipulators, The MIT Press, Cambridge, Massachusetts.


290 


DEVELOPMENT  OF  A  SMOKE  CLOUD  EVALUATION  PLAN 


M.  R.  Perry 
Battelle 

Columbus,  Ohio,  43201,  USA 
W.  G.  Rouse  and  M.  T.  Causey 

Edgewood  Research,  Development,  and  Engineering  Center  (ERDEC) 
Aberdeen  Proving  Ground,  Maryland,  21010,  USA 


ABSTRACT 

This  paper  describes  a  methodology  for  field  test  design  intended  to  achieve 
repeatability  in  smoke  cloud  evaluation.  The  objective  is  to  establish  standard  test 
and  data  analysis  procedures  for  the  characterization  of  smoke  clouds.  Obscurant 
output  rates,  dissemination  durations,  and  obscurant  particle  characteristics  will  be 
related  to  effective  cloud  size  and  duration  for  visible,  infrared  (IR)  and  millimeter 
(MM)  frequency  regions  of  the  spectrum.  If  relationships  can  be  established,  they 
may  be  used  later  within  a  standardized  Test  Operations  Procedure  (TOP)  for  smoke 
generator  cloud  characterization.  The  Research  and  Technology  Directorate, 
Armored  Systems  Modernization  Team  of  ERDEC  will  be  conducting  a  field  test 
using  visible,  infrared,  and  millimeter  wave  smoke/obscuration  generator  systems  at 
Dugway  Proving  Ground  (DPG),  UT,  in  September  1994.  Three  types  of  smoke 
generators  will  be  used  during  the  trials:  XM56,  MM  Cutter,  and  MM  Wafer  Storage 
and  Dispensing  System  (WSDS).  The  XM56  produces  visible  screening  with  a 
visible  to  near-IR  (NIR)  obscurant  disseminated  at  two  temperatures,  IR  screening 
with  two  types  of  visible  to  far-IR  (FIR)  obscurant,  or  a  combination  of  both  visible 
and  IR.  The  Cutter  and  WSDS  produce  MM  screening  clouds  by  disseminating  two 
types  of  MM  obscurants  of  various  lengths  and  diameters.  More  than  23 
combinations  of  obscurant  will  be  disseminated.  There  will  be  four  main  categories 
of  equipment:  cloud  monitoring,  aerosol  sampling,  obscurant  consumption 
monitoring,  and  meteorological  monitoring  equipment.  The  test  approach  will  focus 
on  measurement  of  aerosol  parameters  near  the  point  of  generation  only,  and  on 
measurement  of  the  macroscopic  obscurant  cloud  properties  down  range.  This  will 
lead  to  an  ability  to  evaluate  smoke  generator  performance  without  meteorological 
constraints  on  production  testing. 


291 


1.  INTRODUCTION 


The  goal  of  this  task  is  to  establish  standard  test  designs  and  data  analysis  procedures  for  the 
characterization  of  smoke  clouds.  Obscurant  output  rates,  dissemination  durations,  and  obscurant 
particle  characteristics  will  be  related  to  effective  cloud  size  and  duration  for  visible,  IR,  and  MM 
obscuring  clouds.  If  relationships  can  be  established,  they  will  be  used  later  within  a  Test 
Operations  Procedure  (TOP)  for  smoke  generator  cloud  characterization. 

1.1  Test  Objectives 

This  paper  will  define  repeatable  methods  for  determining  effective  cloud  size/duration, 
dissemination  parameters  and  obscurant  parameters  that  will  be  used  to  test  the  following 
hypotheses: 

Effective cloud size = f1(Generator parameters, Aerosol properties)

Effective cloud duration = f2(Generator parameters, Aerosol properties)

The details of f1 and f2, and any other independent parameters influencing cloud effectiveness will
be  assessed  once  a  relationship  has  been  confirmed.  The  specific  objectives  that  will  test  the  above 
stated  hypotheses  are  listed  below: 

a.  Establish  procedures  for  determining  effective  cloud  size  and  effective  cloud  duration. 

b.  Establish  procedures  for  determining  smoke  generator  operation  parameters  (effective 
cloud  formation  time,  delay  time,  generation  time,  dissemination  duration,  and  feed  rate). 

c.  Establish  procedures  for  monitoring  obscurant  parameters  (size  distribution  and 
condition). 

d.  Evaluate  effective  cloud  size  as  a  function  of  generator  and  obscurant  parameters. 

e.  Evaluate  effective  cloud  duration  as  a  function  of  generator  and  obscurant  parameters. 

1.2  Approach 

To  meet  the  above  objectives,  the  test  approach  focuses  on  measurement  of  aerosol  parameters  at 
the  point  of  generation  only,  and  on  measurement  of  the  macroscopic  obscurant  cloud  properties 
down  range.  The  expectation  is  that  the  resulting  obscurant  cloud  can  be  predicted  with  substantial 
accuracy  from  the  characterization  of  the  generator  output.  This  will  enable  developers  to  evaluate 
the  performance  of  smoke  generators  free  from  the  usual  meteorological  constraints. 


292 


1.3  Test  Scope 


This  paper  describes  procedures  for  determining  size  and  duration  of  smoke/obscurant  clouds. 
Included  are  descriptions  of  the  field  test  equipment  needed  to  provide  the  required  data  from  the 
field  tests.  In  addition,  this  paper  describes  the  method  for  analyzing  the  field  test  data.  The 
procedures  are  appropriate  for  existing  and  developmental  smoke/obscurant  clouds  that  screen 
visible,  near,  mid,  far  infrared,  and  millimeter  wavelengths. 

2.  Test  Equipment  and  Material 

2.1  Test  Location  and  Grid  Layout 

The  test  will  be  conducted  at  the  Romeo  Grid  at  Dugway  Proving  Grounds,  UT.  The  test  grid 
layout,  shown  in  Figure  1,  illustrates  the  location  of  the  monitoring  equipment  and  the  smoke 
generators  within  the  test  grid.  The  smoke  generator(s)  will  operate  from  the  north  or  south  launch 
pads (LP2 and LP1, respectively) based on wind direction, to allow the smoke cloud to travel in
front  of  the  cloud  monitoring  equipment.  All  particle  sampling  equipment  will  be  located  within 
5  meters  of  the  launch  pads. 

2.2  Cloud  Generating  Equipment 

Three  types  of  smoke  generators  will  be  used  during  the  trials;  XM56,  MM  Cutter,  and  MM  Wafer 
Storage  and  Dispensing  System  (WSDS).  Table  1  summarizes  the  screening  spectrum  for  each 
generator.  The  following  sections  describe  each  of  the  smoke  generators. 

Table 1. Summary of the intended screening spectrum of each of the smoke generators being used
during the field test

Smoke Generator    Vis-NIR    Vis-FIR    MM
XM56               X          X
Cutter                                   X
WSDS                                     X

2.3  Field  Test  Equipment 

There  are  four  main  categories  of  field  test  equipment:  cloud  monitoring,  aerosol  sampling, 
obscurant  consumption  monitoring,  and  meteorological  equipment.  Tables  2-5  list  all  the  field  test 
equipment,  the  parameters  monitored,  the  organization(s)  that  will  support  the  equipment  and  the 
applicable  spectrum(s). 


293 


294 


Figure  1.  Test  Grid  Layout.  Romeo  Test  Grid,  Dugway  Proving  Ground,  Utah. 


Table 2. Listing of the cloud monitoring equipment that will be used during the field test

Equipment | Parameters Monitored | Org. | Applicable Spectrum
Millimeter Wave Radar Obscurant Characterization System (MROCS) | MM backscatter and 2-way attenuation for heights of 1, 3.5, and 6 m over a 40 degree horizontal FOV | Eglin AFB(1) | MM
Atmospheric Transmission Large-Area Analysis System (ATLAS) | FIR transmittance over a 20 degree horizontal FOV | ASL(2) | IR
Mobile Image Processing System (MIPS) | Visible and FIR cloud growth | DPG(3) | Vis, IR
Multi-Path Transmissometer/Radiometer (MPTR) | Visible and IR 1-way attenuation at 3.5 m height over a 26 degree horizontal FOV | ASL | Vis, IR
Research Visible and Infrared Transmissometer (REVIRT) | Visible, IR, and MM one-way signal attenuation at 3.5 m height | ASL | Vis, IR, MM
Full Grid FOV Camera | Visible images of the entire grid during smoke generator operation | DPG | Vis
Tank Thermal Sight (TTS)/Visible Split Image Recording System | TTS and visible images will be combined, producing a visible/infrared split-image video of the test grid during the smoke generator trials | DPG | Vis, IR

(1) Eglin Air Force Base
(2) Atmospheric Science Laboratory
(3) Dugway Proving Ground


Table 3. Listing of the aerosol sampling and analysis equipment that will be used during the field test

Equipment | Parameters Monitored | Org. | Applicable Spectrum
Cascade Impactor/Microbalance | Measures size distribution of visible-NIR obscurant particles | Battelle | Vis
Cyclone Sampler/Elzone Analysis | The Cyclone sampler captures samples of visible-FIR obscurant; Elzone analysis measures size distribution and concentration of the particles | ERDEC(4) | IR
Guillotine/Hercules Radar Chamber | The Guillotine sampler captures MM obscurant on sticky paper; optical and radar analysis of samples provide number of particles per unit area and a relative measure of dissemination effectiveness, respectively | ERDEC/Hercules(5) | MM
Electrostatic Ball/Pulse Counter | Measures size distribution and concentration of MM obscurant | ETI(6) | MM

(4) Edgewood Research, Development, and Engineering Center
(5) Hercules, Inc.
(6) Engineering Technology, Inc.


295 


Table 4. Listing of the meteorological parameters that will be monitored during the field test

Parameter                               Height of Measurements (m)
Horizontal Wind Speed and Direction     2, 4, 8, 16, and 32
Vertical Wind                           6
Temperature                             2
Dew Point                               2
Pasquill Stability Category             8

All parameters will be monitored at a 1 Hz rate, except for the vertical wind components, which
will be monitored at a 10 Hz rate. DPG will be responsible for monitoring meteorological
conditions.

Table 5. Listing of the obscurant consumption monitoring procedures that will be implemented
during the field test

Procedure | Purpose | Org. | Applicable Spectrum
Identify Trial Timing From Smoke Generator Monitoring Camera Video | Determine effective cloud formation time, delay time, generation time, dissemination duration, and feed rate | DPG | Vis, IR, MM
Weigh Auxiliary Visible-NIR Container | Determine amount of visible-NIR obscurant consumed during trial | ERDEC | Vis
Weigh Visible-FIR Obscurant Required to Reload XM56 Hopper | Determine amount of visible-FIR obscurant consumed during trial | ERDEC | IR
Weigh MM Obscurant For Cutter | Determine amount of MM obscurant consumed during trial | ERDEC | MM
Count Number of Wafers Disseminated by WSDS | Determine amount of MM obscurant consumed during trial | ERDEC/Battelle | MM

2.  ANALYTICAL  PROCEDURES 

This  section  describes  how  the  acquired  field  test  data  will  be  used  to  satisfy  the  test  objectives. 

2.1  Test  Objective  a:  Establish  procedures  for  determining  effective  cloud  size  and  effective 
cloud  duration. 

Effective cloud size is defined as the horizontal extent (m) of a cloud that is at a predetermined
height and attenuation level. Effective cloud duration is defined as the maximum time in which
there are consecutive effective cloud sizes. MROCS and MIPS data will be the primary data
sources used to determine cloud size and duration in the MM and visible/IR spectral regions,


296 


respectively.  ATLAS  and  MPTR  data  will  be  used  to  approximate  visible  and  IR  signal 
attenuation  levels  associated  with  the  MIPS  data. 
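As an illustration of these two definitions, the sketch below derives an effective cloud size from a row of transmittance samples taken at the reference height and an effective cloud duration from the resulting time series; the threshold, sample spacing, and minimum size are placeholders to be supplied by the test plan rather than values from this paper.

    import numpy as np

    def effective_cloud_size(transmittance, spacing_m, threshold=0.5):
        # Longest contiguous horizontal run of samples at or below the
        # transmittance threshold, converted to meters.
        screened = np.asarray(transmittance) <= threshold
        best = run = 0
        for flag in screened:
            run = run + 1 if flag else 0
            best = max(best, run)
        return best * spacing_m

    def effective_cloud_duration(sizes_by_time, dt_s, min_size_m):
        # Longest run of consecutive time steps whose effective cloud size meets
        # the minimum size, converted to seconds.
        effective = np.asarray(sizes_by_time) >= min_size_m
        best = run = 0
        for flag in effective:
            run = run + 1 if flag else 0
            best = max(best, run)
        return best * dt_s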

MROCS Data: The MROCS MM attenuation and backscatter data will be used to calculate
effective  MM  cloud  size  and  duration.  The  MROCS  data  will  be  analyzed  using  Battelle's 
"Computer  Program  for  Analysis  of  Millimeter  Wave  (MMW)  Attenuation  Data",  dated  December, 
1993.  The  program  may  require  modifications  if  the  MROCS  data  format  has  been  changed.  In 
addition,  the  MROCS  data  will  be  compared  to  the  REVIRT  data  for  validation. 

MIPS  Data:  The  MIPS  visible  and  FIR  cloud  growth  data  will  be  used  to  calculate  effective  visible 
and  FIR  cloud  size  and  duration.  Cloud  length,  height,  and  duration  values  will  be  taken  directly 
from  the  3DCAV  cloud  dimensioning  program.  These  data  will  be  compared  with  ATLAS  and 
MPTR  data  for  establishing  the  MIPS  cloud  transmission  level. 

ATLAS Data: The ATLAS FIR transmittance contour plots will be used to calculate effective FIR
cloud  size  and  duration.  Cloud  length,  height,  and  duration  values  will  be  measured  directly  off 
the  contour  plots.  The  size  scaling  factors  used  in  the  measurements  will  be  calculated  from  the 
ATLAS  contour  plot  frame  radians  and  the  distance  from  the  cloud  center  to  ATLAS.  ATLAS 
transmittance  contour  plots  will  be  compared  with  MPTR  FIR  transmittance  data.  In  addition, 
ATLAS plots will be used to establish the FIR transmittance level of the MIPS cloud dimensioning
data.

MPTR  Data:  The  MPTR  FIR  transmission  data  will  be  compared  with  ATLAS  contour  plots.  The 
MPTR  visible  and  FIR  transmission  data  will  be  used  to  attempt  to  establish  the  transmission  level 
of  the  MIPS  cloud  dimensioning  data. 


2.2  Test  Objective  b:  Establish  procedures  for  determining  smoke  generator  operation  parameters 
(effective cloud formation time, delay time, generation time, dissemination duration, and feed rate).


Table 6. Definitions of the smoke generator operation time parameters that will be monitored
during the field test

Parameter                  Time difference between               and
Cloud formation time       generator start                       formation of an effective cloud
Delay time                 generator start                       initial obscurant dissemination
Generation time            generator start                       generator stop
Dissemination duration     initial obscurant dissemination       final obscurant dissemination

Smoke Generator Data Sheets: The Smoke Generator Data Sheets will contain trial-specific
parameters for every trial, which will be used in correlating smoke generator operation


297 


with  particle  results  and  cloud  screening,  size,  and  duration  results.  The  data  sheets  will  also 
contain  obscurant  consumption  weights  which  are  required  for  determining  feed  rates.  Key 
information  from  the  data  logs  will  be  imported  into  a  summary  table. 

Smoke Generator Monitoring Camera Videos: Video recordings of the smoke generator during
operation  will  be  used  to  determine  the  delay  time,  generation  time,  and  dissemination  duration  of 
the  smoke  generators  for  each  trial.  Smoke  generator  start  and  stop  times  will  be  indicated  by  the 
smoke  generator  operator. 

Feed rate will be determined by dividing the weight of the obscurant materials consumed by the
dissemination  duration. 

2.3  Test  Objective  c:  Establish  procedures  for  monitoring  obscurant  parameters  (size  distribution 
and  condition). 

Cascade Impactor Visible Obscurant Sampler Data: Visible-NIR obscurant sampling data will be
used  to  characterize  the  obscurant  as  it  exits  the  XM56.  Average  concentration  and  size 
distribution  data  for  the  hot  and  cold  obscurant  will  be  compared. 

Elzone Analysis of Horn Samples: Elzone analysis of the Horn samples will be used to
characterize  the  two  types  of  visible-FIR  obscurants  as  they  exit  the  XM56.  Particle  size 
distribution  data  will  be  compared. 

Optical  and  Radar  Chamber  Analysis  of  Guillotine  Samples:  Optical  and  radar  chamber  analysis 
of  the  Guillotine  samples  will  be  used  to  characterize  the  MM  obscurant  materials  as  they  exit  the 
Cutter  and  WSDS.  Optical  analysis  will  provide  number  of  particles  per  unit  area  and  percent 
clumping.  The  radar  chamber  analysis  will  provide  MM  attenuation  data  which  will  be  compared 
to  MM  obscurant  standards  available  at  Hercules. 

Electrostatic Ball MM Obscurant Sampler Data: The Electrostatic Ball detector data will be used to characterize the MM obscurant materials as they exit the Cutter and WSDS. Concentration and length
distribution  data  will  be  reported. 

2.4  Test  Objective  d:  Evaluate  cloud  size  as  a  function  of  generator  and  obscurant  parameters. 

Effective  cloud  size  results  from  the  MROCS  and  MIPS  data  will  be  compared  with  smoke 
generator parameters and obscurant particle results. The primary goal is to determine if there is
a  relationship  between  feed  rate  and  effective  cloud  size  for  each  type  of  obscurant  material. 




2.5 Test Objective e: Evaluate cloud duration as a function of generation time and obscurant parameters.

Effective cloud duration results from the MROCS and MIPS data will be compared with smoke generator operation timing and obscurant particle results. The primary goal is to determine if there is a relationship between generation time and effective cloud duration for each type of obscurant material.

2.6  Additional  information  that  will  be  analyzed  from  the  data. 

2.6.1  Homogeneity  of  multispectral  screening  clouds. 

REVIRT Data: The REVIRT visible, IR, and MM signal attenuation data will be used to determine the multispectral screening effectiveness of clouds containing multiple obscurants. Attenuation levels and screening times will be compared. REVIRT data will also be used to validate MROCS data. REVIRT LOSs will be correlated with MROCS corner reflectors, and the MM attenuation data will be compared.

2.6.2  Approximate  Obscurant  Dissemination  Velocity. 

Smoke Generator Video Monitoring: Video images will be used to estimate the velocity of the exiting obscurant material. As the initial obscurant exits the ejector, the distance it travels per unit time will be monitored. Obscurant travel distance will be approximated using maps with known dimensions within the FOV. Travel time will be approximate because of the limitation of the 30 frame/second video image speed.

2.6.3  Additional  Support  data 

Full Grid FOV Monitoring Camera Videos: The Full Grid FOV video tapes will be used to qualitatively assess each trial. Selective images will be used in the final briefing package.

TTS/Visible Split Image Videos: Selective split image frames will be used in the final briefing package. The images incorporated into the briefing package will be of each of the screening clouds produced (i.e., visible, IR, MM, and combinations).

Meteorological Data: Wind speed, wind direction, temperature, dew point, and stability category data will be used to assess the effects of the meteorological conditions on the generated clouds.

3.  QUICK-LOOK  RESULTS 

This paper was submitted for the Battlefield Atmospherics Conference just two weeks after the completion of the above described field test. As a result, the quantitative field test data were not available for analysis. Listed below are qualitative assessments of the quick-look data that were available during the field test.

3.1 MPTR Data

Preliminary  MPTR  data  (Valdez  1994)  were  reviewed  to  compare  the  screening  effectiveness  of 
the  visible-NIR  obscurant  disseminated  hot  and  cold.  The  quick-look  data  suggest  that  the  hot 

disseminated  obscurant  screened  visible-NIR  signals  more  effectively  and  for  a  longer  period  of 
time. 

3.2 MROCS Data

Preliminary  MROCS  data  (Mijangos  1994)  were  reviewed  to  compare  the  effectiveness  of  the 
various  lengths  and  diameters  of  the  MM  obscurant.  The  quick-look  data  suggest  that  the  shorter 

lengths  and  shorter  diameter  particles  screened  the  MM  signals  more  effectively  and  for  a  longer 
period  of  time. 

3.3  Smoke  Generator  Data  Sheets 

The  Smoke  Generator  Data  Sheets  accurately  documented  the  obscurant  consumption  during  each 
trial.  This  information  will  significantly  increase  the  ability  to  relate  the  dissemination  parameters 
with  the  resulting  cloud. 

4.  OVERVIEW 

This  paper  illustrates  repeatable  procedures  which  can  be  used  to  monitor  and  analyze 
smoke/obscurant  source  parameters,  aerosol  characteristics,  and  effectiveness  (size,  duration, 
attenuation,  and  wavelength).  These  procedures  will  provide  data  that  can  be  used  to  evaluate 
cloud size/duration as a function of generator and obscurant parameters. The point of this effort is to demonstrate that, by recording generator parameters and point-of-exit aerosol data, generator performance can be adequately defined in terms of anticipated cloud effectiveness.

REFERENCES

Perry, M., Kuhlman, M., Kogan, V., Rouse, W., and Causey, M., 1994: Study of Test Methods for Visible, Infrared, and Millimeter Smoke Clouds - DPG: Sept. 1994. Test Plan, Battelle and ERDEC, Contract No. DLA900-86-C-2045, Task 182.

Mijangos,  Adrian,  1994:  MROCS  Quick-Look  MM  Attenuation  Plots,  Unpublished,  Supplied  to 
Michael  Causey  (ERDEC)  during  DPG  Field  Test,  Eglin  AFB,  Florida. 

Valdez, Robert, 1994: MPTR Quick-Look Visible-IR Transmittance Plots, Unpublished, Reviewed by Mark Perry (Battelle) during DPG Field Test.




ANALYSIS  OF  WATER  MIST/FOG  OIL  MIXTURES 

William  M.  Gutman  and  Troy  D.  Gammill 
Physical  Science  Laboratory 

New  Mexico  State  University,  Las  Cruces,  New  Mexico  88003 
Frank  T.  Kantrowitz 

Army Research Laboratory Battlefield Environment Directorate
White  Sands  Missile  Range,  New  Mexico  88002 

ABSTRACT 

The  Army  Research  Laboratory  Mobile  Atmospheric  Spectrometer  (MAS)  has  been  used  to 
optically  characterize  obscurants  at  numerous  tests  over  the  past  several  years.  These  have 
included  Smoke  Weeks  XIII,  XIV,  and  XV  as  well  as  the  recent  Large  Area  Smoke  Screen 
Experiment  (LASSEX).  The  MAS  spectrometers  are  usually  operated  as  transmissometers  at 
4 cm^-1 spectral resolution, and this configuration provides approximately 200 measurement channels in the 8-12 µm region and 600 in the 3-5 µm region.

At LASSEX, the principal MAS line-of-sight was approximately parallel to the nephelometer line but offset by approximately 30 m. This was sufficiently close to permit realistic time-adjusted correlation comparisons between MAS transmittance measurements and nephelometer-based mass loading data for most materials. Time adjustment was necessary to correct for the time for the material to be transported from the nephelometer line to the MAS line-of-sight or vice versa.
One  of  the  generator  systems  tested  at  LASSEX  could  combine  water  mist  with  fog  oil  smoke. 
Trials  were  conducted  with  that  generator  system  with  the  separate  materials  and  with  the 
combined  materials.  Nephelometers  normally  cannot  distinguish  between  components  of  a  multi- 
component  mixture.  By  using  distinctive  absorption  features  of  separate  components,  however, 
MAS  transmittance  data  offer  a  means  to  estimate  mass  loading  for  the  separate  components, 
although  the  difficulty  of  collecting  water  mist  with  a  filter  sampler  introduces  considerable 
uncertainty  into  the  calibration  of  the  nephelometer  data.  MAS  transmittance  spectra  were  used  to 
investigate  the  properties  of  water  mist/fog  oil  smoke  mixtures,  and  results  of  that  investigation 
are  presented. 

1.  INTRODUCTION 

For the past several years, the Army Research Laboratory Mobile Atmospheric Spectrometer has
been  used  to  characterize  the  infrared  transmissive  properties  of  various  obscurant  materials. 
Measurements  have  been  made  at  Smoke  Weeks  XIII*,  XIV^,  and  XV  as  well  as  at  the  Large 
Area  Smoke  Screen  Experiment  (LASSEX)  which  was  conducted  at  Eglin  Air  Force  Base, 
Florida  during  May,  1994.  Over  the  period  of  time  spanned  by  these  tests,  steady  improvements 
have  been  made  in  the  data  acquisition  repetition  rate,  the  signal-to-noise  ratio  of  the  spectra,  and 
the  processing  algorithms. 

2.  DATA  ACQUISITION  AND  REDUCTION 

As currently configured, the primary spectroscopic instruments in the MAS are two Fourier transform spectrometers. The original instrument is capable of 0.04 cm^-1 spectral resolution. A second instrument capable of 0.5 cm^-1 spectral resolution was added prior to LASSEX, but data collected with the original instrument are the subject of this paper. The original spectrometer is particularly well suited to the field measurement environment. The instrument uses corner reflectors rather than flat mirrors, and it is, therefore, essentially immune to the thermally-induced misalignment that can severely limit the reproducibility of flat-mirror systems. A corner reflector spectrometer achieves this immunity without the complexity of a dynamic alignment system.

2.1 Test Configuration

At LASSEX, most of the MAS data were collected in transmissometer mode. A source was set up on the west side of the test grid at "M1" while the MAS van containing the spectrometer was set up on the east side of the grid at "S3." The source was a 1000 °C temperature-controlled blackbody collimated with a modified 60-inch searchlight. The receiver optics for the spectrometer consisted of the main 31-inch Coudé-mounted Cassegrain telescope. The Coudé telescope mount greatly facilitates spectrometer repointing when required, for example, to collect radiance spectra of munitions set on the test grid. As will be discussed below, a rotating-blade shutter was used to block the source on command from the receiver in order to obtain background and path radiance spectra. The source and spectrometer were set up so that the line-of-sight was approximately parallel to, and 30 m south of, the nephelometer line. All transmittance spectra were collected at 4 cm^-1 spectral resolution.

2.2  Measurement  Methodology 

The transmittance of a sample of material at radiation frequency ν is defined as the ratio of the radiant power at frequency ν exiting the material to the radiant power at that frequency incident upon the material, i.e.,

    T(ν) = P_exit(ν) / P_incident(ν)

Absolute  atmospheric  transmittance  is  quite  difficult  to  measure  over  a  path  of  significant  length 
(of  the  order  of  hundreds  of  meters)  because,  except  in  special  cases,  it  is  impossible  to  collect 
the  entire  beam  of  radiation  at  the  receiver.  The  beam  spreads  because  of  the  finite  size  of  any 
non-laser  source,  because  of  diffraction,  and  because  of  atmospheric  turbulence.  The  only 
satisfactory  broadband  absolute  atmospheric  transmittance  measurement  methodology  that  has  yet 
been  demonstrated  is  to  measure  the  transmittance  at  discrete  frequencies  using  a  low  divergence 
laser  whose  spot  size  and  spread  and  wander  footprint  at  the  receiver  are  small  enough  to  allow 
collection  of  the  entire  beam.  Even  with  this  approach,  a  large  collecting  aperture  is  required  for 
most  path  lengths  of  interest,  especially  during  high  turbulence  parts  of  the  day.  The 
transmittance  at  one  or  more  discrete  frequencies,  however,  can  be  used  to  normalize  the  result  of 
a  broadband  relative  transmittance  measurement.  The  broadband  measurement  is  made  by 
comparing  a  spectrum  collected  at  the  desired  path  length  with  a  spectrum  collected  over  a  very 
short  atmospheric  path.  This  is  usually  called  a  zero-path  spectrum.  Dividing  the  long  path 
spectrum  by  the  zero-path  spectrum  corrects  for  the  shape  of  the  instrument  response  function. 
The  resulting  spectrum  is  the  unnormalized  transmittance  of  the  part  of  the  path  that  is  different 
between  the  two  measurements.  Normalization  with  the  laser  measurement  corrects  for  the 
spreading  of  the  beam.  Measurements  of  this  type  are  extremely  difficult. 
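A minimal numerical sketch of the two-step procedure just described, assuming the spectra are already available as arrays on a common frequency grid (the array names and the single normalization frequency are illustrative assumptions, not part of the measurement procedure itself):

    import numpy as np

    def absolute_transmittance(long_path, zero_path, freq, laser_freq, laser_t):
        """Broadband absolute transmittance from a relative measurement.
        Dividing the long-path spectrum by the zero-path spectrum removes the
        instrument response; rescaling so the result matches the laser-measured
        transmittance laser_t at laser_freq removes the beam-spreading factor."""
        relative = np.asarray(long_path, float) / np.asarray(zero_path, float)
        idx = int(np.argmin(np.abs(np.asarray(freq, float) - laser_freq)))
        return relative * (laser_t / relative[idx])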


In the case of an obscurant measurement, what is usually required is not the absolute transmittance of the entire path, but rather the transmittance of the path with the obscurant relative to the clear air path. The spreading factors and the instrument response function remain unchanged when comparing the path with the obscurant to the path without the obscurant, and so a simpler measurement methodology can be applied. In the absence of path radiance and background effects, the relative transmittance of the obscurant is the point-by-point ratio of the obscurant spectrum to the clear air spectrum. Path radiance and background radiation, i.e., scattered sunlight, and radiation emitted by the obscurant, by the optical elements in the beam path, and by the part of the background that is not obscured by the source, may contaminate the raw signals and thus must be removed. Therefore,

    T(ν) = (S_obscurant - S_path) / (S_clear - S_bkg)     (1)

It is understood that all of the quantities on the right hand side of Eq. 1 are functions of ν.
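A minimal sketch of the Eq. 1 ratio, assuming the four raw spectra are available as NumPy arrays on a common frequency grid; this illustrates the arithmetic only and is not the actual MAS processing code:

    import numpy as np

    def relative_transmittance(s_obscurant, s_path, s_clear, s_bkg):
        """Point-by-point relative transmittance, Eq. 1:
        T(v) = (S_obscurant - S_path) / (S_clear - S_bkg).
        All four inputs are raw spectra sampled on the same frequency grid."""
        num = np.asarray(s_obscurant, float) - np.asarray(s_path, float)
        den = np.asarray(s_clear, float) - np.asarray(s_bkg, float)
        return num / den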

The MAS measurement methodology is to measure these four quantities over as short a time interval as possible. It is especially important for the obscurant and path radiance data to be close together in time because of the rapidly changing nature of the typical obscurant cloud. Clear air data typically were acquired both before and after each obscurant release. Whenever possible, pre-trial clear air data were used in the analysis because they were less likely to be contaminated by residual obscurant that may not have been apparent visually. Obscurant spectra were alternated with path radiance spectra. Path radiance spectra were obtained by blocking the source with the remotely controlled rotating shutter. Two-way communication with the shutter helped to ensure that it was always in its correct state. Each spectrum was the result of the addition of two interferograms. This had the effects of reducing the signal variability resulting from atmospheric turbulence and of improving the signal-to-noise ratio. The acquisition of each coadded interferogram required approximately 0.5 s, but because of the overhead associated with transfer and storage of the data, the time between spectra was approximately 1.5 s. Because of the requirement to alternate between obscurant and path radiance spectra, the overall measurement period was approximately 3 s. Transmittance spectra at LASSEX were collected at a nominal spectral resolution of 4 cm^-1 and covered the transparent parts of the atmospheric spectrum between 800 and 3000 cm^-1 (12.5 and 3.3 µm). Data acquisition was controlled by a computer program to ensure that opening and closing of the shutter were properly timed with respect to the data acquisition. Fourier transformation of interferograms into spectra was accomplished after the conclusion of each trial. Computation of transmittance spectra from the raw spectra was controlled by a computer program. Several computational aids have been developed to facilitate analysis of the data. These include a program to display time sequence movies of spectra in a simulated three-dimensional space.


3.  COMPARISON  WITH  NEPHELOMETER  MEASUREMENTS 

The proximity of the MAS line-of-sight to the nephelometer line permits realistic correlation comparisons between MAS transmittance measurements and nephelometer-based mass loading for most materials. The correlation function, expressed as a function of the time between the spectral measurement and the nephelometer measurement, can be defined as

    C(t) = ∫ a(τ) T(t - τ) dτ

where a is the nephelometer signal and T is the transmittance measured with the MAS. The correlation function normally has its peak value at t = x / V_perp, where x is the displacement between the MAS line-of-sight and the nephelometer line, and V_perp is the mean magnitude of the wind component perpendicular to the line-of-sight. Previous work has demonstrated good correlation with the nephelometer measurements under some meteorological conditions.
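The time-lag search implied by this correlation can be sketched as follows; the sketch assumes evenly sampled, equal-length series and uses -ln T as an extinction-like proxy so that the peak of the correlation is positive, neither of which is taken from the original analysis:

    import numpy as np

    def best_lag(nephelometer, transmittance, dt):
        """Return the lag (s) that maximizes the correlation between the
        nephelometer signal and -ln(T), an extinction-like proxy for the
        obscurant loading along the MAS line-of-sight. Both series must be
        evenly sampled at interval dt and have the same length."""
        a = nephelometer - np.mean(nephelometer)
        b = -np.log(np.clip(transmittance, 1e-6, None))
        b = b - np.mean(b)
        corr = np.correlate(a, b, mode="full")
        lags = np.arange(-len(a) + 1, len(a)) * dt
        return lags[np.argmax(corr)]

    # For a line-of-sight displacement x and mean perpendicular wind V_perp,
    # the recovered lag should be close to x / V_perp.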

3.  MULTICOMPONENT  SMOKES 


Obscurant performance is often enhanced by combining materials. The combination of graphite with fog oil, for example, provides good obscuration from the visible through the infrared region. Measurement of the individual components of multicomponent smokes can be difficult. By using distinctive absorption features of the separate components, however, spectrally-resolved transmittance data offer a potential means to estimate the contribution from the separate components. Obviously, for this approach to be successful, at least one component of a two-component mixture must have some distinctive spectral feature.

3.1  Water  Mist/Fog  Oil  Mixtures 

One of the generator systems tested at LASSEX could combine water mist with fog oil smoke. The water component is particularly troublesome to measure. Because of its high volatility, it is not possible to use filter samplers to obtain reliable mass concentration estimates.

A  number  of  trials  were  conducted  at  LASSEX  with  the  water  mist/fog  oil  system.  During  most 
of  the  trials,  both  components  were  generated.  One  trial  was  conducted  with  pure  water  mist  and 
numerous  fog  oil  spectra  have  been  collected  with  the  MAS  in  support  of  other  trials. 

3.2  Measured  Spectra 
3.2.1  Fog  Oil 


Figure 1 is a pure fog oil spectrum. Fog oil is a poor infrared attenuator, and that fact is obvious from this figure. The strong absorption band at the high frequency end of the spectrum is characteristic of fog oil. The band originates from the C-H single bond and is, therefore, common to most hydrocarbons. This absorption band in fog oil spectra is probably the result of gaseous material that vaporizes in the generator or that evaporates from the droplets in the fog. In general, the strength of this feature correlates with the overall attenuation level in the spectrum. The depth of this feature, then, can be used to estimate the contribution of fog oil to the attenuation in other parts of the spectrum, to estimate the fog oil contribution to the attenuation of a mixture, and to estimate the fog oil concentration-length product. The feature between 2000 and 2200 cm^-1 that appears to be a weak absorption band is a weak water vapor feature that did not ratio out perfectly because of small variations in the water content between the clear air and the obscurant spectra. It is not uncommon to see this band in MAS obscurant transmittance spectra.

3.2.2  Water  Mist 


Figure 2 shows three spectra of water mist. These spectra were collected during LASSEX Trial 50. There are several interesting characteristics common to these spectra:

1. The overall transmittance level is fairly low, at least compared with fog oil. Water mist can be a fairly good infrared obscurant.

2. The water vapor absorption band that was discernible in the fog oil spectrum is slightly more evident in the water mist spectra. It is not surprising that the water vapor content of the air might be enhanced by the presence of water droplets in the path.

3. The low frequency end of the spectrum is up-turned slightly. The origin of this effect is unknown at this time, but it seems to be common in the water mist spectra. Its strength appears to be directly related to the overall attenuation level. Although it is not particularly strong, it offers promise as an aid in estimating the concentration-path length product of water particles.

4. The high frequency end of the spectra contains an absorption feature similar to the hydrocarbon absorption band but much weaker. This feature probably results from unburned hydrocarbons in the turbine exhaust of the generator.


The weak absorption band between 2000 and 2200 cm^-1 holds little promise as a potential way to analyze the separate water vapor and fog oil concentrations because it may be present in both spectra. The slightly up-turned transmittance near 850 cm^-1, although weak, is unique to the water spectra. The height of this feature above the baseline correlates with the overall transmittance level, and so it should be usable to aid the analysis. Because this feature is relatively weak, the uncertainty level in the resulting estimate of the transmittance and concentration-path length product would be relatively high. The presence of the weak hydrocarbon absorption in the water mist spectra does not appear to represent a serious problem because it is so weak compared with the fog oil spectra.


LASSEX Fog Oil Spectrum


Figure  1.  Typical  transmittance  spectrum  of  pure  fog  oil  collected  during  LASSEX  Trial  075. 




LASSEX Water Mist Spectra (transmittance, 0 to 1, vs. spatial frequency, 500 to 3000 cm^-1)

Figure  2.  Three  water  mist  transmittance  spectra  collected  during  LASSEX  Trial  050.  These 
spectra  exhibit  among  the  strongest  attenuation  observed  for  water  mist. 

3.2.3 Fog Oil/Water Mist Mixture

Figure 3 is a spectrum of the combined water mist and fog oil smoke. Both the strong hydrocarbon absorption band of fog oil and the up-turned baseline at 850 cm^-1 of water mist are evident in the spectrum.

3.3  Analysis 


By correlating the strength of the up-turned baseline in the pure water mist spectra with the average transmittance, and the depth of the hydrocarbon absorption relative to the baseline, it is straightforward to arrive at estimated values for the transmittance levels that would result from the individual components of the mixture if present at the same concentration alone. The estimated value for the fog oil is 0.96±0.02; the corresponding estimate for the water mist is lower. The product of the two estimates is 0.83±0.05, and the actual combined transmittance was measured to be approximately 0.9. Given the limited data set that has so far been examined, this appears to be reasonable agreement. No attempt was made to estimate the concentration-path length products for the separate components. This step will require detailed analysis of nephelometer data sets for both water mist and fog oil.
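One way such an analysis could be organized is sketched below; the spectral windows and the calibration coefficients relating feature strength to transmittance are placeholders that would have to come from the pure-material trials, so this is an outline of the approach rather than the analysis actually performed:

    import numpy as np

    def band_average(freq, spectrum, lo, hi):
        """Mean transmittance over a spectral window [lo, hi] in cm^-1."""
        mask = (freq >= lo) & (freq <= hi)
        return float(np.mean(spectrum[mask]))

    def component_estimates(freq, mixture, water_cal, oil_cal):
        """Estimate the transmittance each component would produce alone.
        water_cal and oil_cal are (slope, intercept) pairs relating the
        strength of each marker feature (the ~850 cm^-1 upturn for water
        mist, the C-H band depth for fog oil) to broadband transmittance;
        both pairs must come from regressions on the pure-material trials.
        The window limits below are illustrative placeholders."""
        baseline = band_average(freq, mixture, 900.0, 1100.0)
        upturn = band_average(freq, mixture, 820.0, 870.0) - baseline
        ch_depth = baseline - band_average(freq, mixture, 2800.0, 3000.0)
        t_water = water_cal[0] * upturn + water_cal[1]
        t_oil = oil_cal[0] * ch_depth + oil_cal[1]
        return t_water, t_oil, t_water * t_oil  # product vs. measured mixture T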




LASSEX Fog Oil/Water Mist Spectrum (transmittance vs. spatial frequency in cm^-1)


Figure  3.  Typical  spectrum  of  fog  oil/water  mist  mixture  collected  during  LASSEX  Trial  076. 

4.  CONCLUSIONS  AND  FUTURE  DIRECTIONS 

Spectrally resolved transmittance measurements appear to offer an effective means of estimating the optical properties of the individual components of mixed obscurants, for which no other method has been demonstrated. Future work in this area will be directed toward confirming and refining these results, and toward obtaining concentration-length products for the separate components.


REFERENCES 

1. Peterson, W. A., D. M. Garvey, and W. M. Gutman, "Spectrally Resolved Transmittance Measurements at Smoke Week XIII," Proceedings of the 1991 Battlefield Atmospherics Conference, U.S. Army Atmospheric Sciences Laboratory, White Sands Missile Range, New Mexico.

2. Kantrowitz, F. T., W. M. Gutman, T. D. Gammill, and J. V. Rice, "High Resolution Spectroscopy at Smoke Week XIV," Proceedings of the Smoke/Obscurant Symposium XVII, Johns Hopkins University, Laurel, Maryland.




NEW  MILLIMETER  WAVE  TRANSMISSOMETER  SYSTEM 
ROBERT  W.  SMITH 

U.S. Army Test and Evaluation Command
Ft. Belvoir Meteorological Team

WILLIAM W. CARROW
EOIR Measurements, Inc.
Spotsylvania, Va.

ABSTRACT 

The TECOM Ft. Belvoir Meteorological Team and the Night Vision and Electronic Sensor Directorate of CECOM contracted with EOIR Measurements, Inc. to develop a new instrument which would provide atmospheric transmission data in the 35 gigahertz region. The desired instrument would have complete redundancy, long path length, compact size, stable microwave performance, easy field setup and alignment, standard data output, low development risk, and above all, low development cost. The design by EOIR consists of mostly off-the-shelf components with a design goal of measuring 1 percent transmission over a 5 km path in a rainfall of 64 millimeters per hour. To achieve simplicity of design and field use, and to keep the cost down, two innovations have been made. First, a new antenna design that uses optical refraction principles replaces the large and cumbersome parabolic antennas and, second, an open loop frequency design, as opposed to a frequency tracking receiver, allows for the use of less expensive transmitters and receivers. In this paper we will describe the instrument, present the test procedure, and look at some of the data.


1. INTRODUCTION

For  many  years  the  Army's  Ft  Belvoir  Meteorological  Team 
has  been  making  measurements  of  atmospheric  transmission  in 
the  visual  and  infrared  regions  of  the  spectrum.  With  the 
increasing  interest  in  the  millimeter  wave  region,  we  have 
been  asked  to  extend  our  capability.  Propagation  effects  are 
very important to millimeter wave systems. They include attenuation and scattering by precipitation, fog, and dust and clear air absorption by water vapor and oxygen. Millimeter wave propagation models do exist, but it remains difficult to make accurate predictions of attenuation due to differences in data bases and in the assumptions which are made during the calculations. Some of the measurements required for predictions have significant uncertainties which can affect the results. These include rainrate, drop size and distribution, water content of fog, and the extent of precipitation or fog. Additionally, predictive models do not handle rapidly changing conditions caused by either changing weather conditions or by battlefield obscurants. Because of these considerations, we decided to obtain a system which would measure the attenuation loss directly. A brief survey of existing systems found many problems, but primarily their cost exceeded the resources available.


3. DESIGN GOALS

The process of developing our system started with a set of features which had to be met. We required a path length of at least 5 kilometers, a compact size, stable microwave performance, easy field setup and alignment, output capable of computer processing, low development risk, and, above all, low system cost. We will expand on each of these considerations.

The system has been built with complete redundancy since we have two stand-alone source units and two stand-alone radiometer units. Since the computer is a standard PC, we can substitute one unit for another. This gives the additional flexibility of measuring two separate paths if desired.

Size and weight have been carefully controlled so that the entire source or radiometer is contained in an environmental housing four feet long and eight inches in diameter. These housings weigh about 30 pounds each. Figure 1 provides a very simplified representation of the transmissometer system.

The performance design goal for this system is to allow the user to make accurate path loss measurements down to 1% transmission over a 5 km path length. This corresponds to operating in over 64 mm/hr of rainfall.

The two primary mechanisms responsible for unstable operation (power and frequency drift) are changes in ambient temperature and power supply variations. To address the power problem, extensive use is made of tightly regulated supplies; components use two stages of power regulation to assure isolation from power line or generator fluctuations. Temperature stability is obtained by the use of a temperature-stabilized housing for the microwave head. Calculated frequency drift over a range of -20 C to +40 C ambient is on the order of 5 MHz. This worst case drift is easily accommodated by the 10 MHz bandwidth of the radiometer. Worst case power drift is calculated to be on the order of 0.4% over the same range of temperatures. Because of excellent power supply regulation, the temperature induced variations will dominate the performance considerations.

The system has about a 70 mr field of view (4 degrees). While this is considered narrow for radio systems, it is significantly wider than the 3 mr used on our optical transmissometers and greatly eases alignment considerations. In addition, each unit is provided with an alignment system consisting of a narrow beam intense spotlight and a high quality 24 power aiming scope.

Data  output  from  the  radiometer  to  the  user  will  be  0  to 
10  volts  analog  and  RS232  serial  digital.  These  signals 
represent  0  to  100  %  transmission.  The  power  source  provides  a 
0  to  10  volt  power  monitor  signal  where  10  volts  represents  a 
relative  100  %  power  output.  We  do  not  anticipate  using  lock- 
amp  processing.  Finally  the  source  and  radiometer  have  rear 
panel  displays  to  assist  in  alignment. 

All  key  components  of  this  system  are  commercially  avail¬ 
able.  The  most  critical  components,  the  microwave  transmit  and 
receive  heads,  are  derived  from  police  traffic  radar  systems 
which have had over 20 years of in-the-field use. The entire system, consisting of two complete separate transmissometers, is the result of an exhaustive survey of the U.S. microwave industry, is about 1/2 the cost of the nearest competitor, and is specifically designed for the needs of the Ft. Belvoir Meteorological Team.


4 .  SYSTEM  CONCEPT 

The  initial  concept  for  this  transmissometer  system 
borrows  heavily  from  the  present  Barnes  Optical  transmissome¬ 
ter  system  presently  being  used  by  the  meteorological  team. 
That  is,  a  known  amount  of  energy  of  the  desired  frequency  is 
transmitted into space towards a companion radiometer. The radiometer is separated from the source by a known path
length.  After  correction  for  free  space  loss  over  the  path 
length,  and  any  system  loss,  the  amount  of  energy  received  by 
the  radiometer  is  considered  to  represent  the  propagation  loss 
of  the  path. 

The free space loss is calculated from the formula: path loss = 1/(4 pi R^2). System loss will be measured by a method
of  suitably  attenuating  a  very  near  field  signal  such  that 
path  atmospheric  loss  can  be  considered  zero.  This  calibration 
procedure  will  be  more  fully  developed  once  we  have  field 
tested  the  unit. 




Considerations unique to microwave systems require some departure from classical optical concepts. However, the system presented here, for the most part, adheres to the above concepts. The source for this system uses a 34 GHz police radar head of about 50 mW output power. The difference in signal attenuation between 34 and 35 GHz can be assumed to be very small since these frequencies lie in an absorption minimum for water vapor, and the small difference will not affect rainfall attenuation. A power monitor circuit has been included on the source which allows the user to monitor the power output of the source and to make periodic corrections to the calibration factor used to compute the percent transmission. At a later date, the output power information will be modulated onto the 34 GHz signal to give the capability to monitor the source performance from the receiver site. Finally, the microwave section, or "front end," of the source environmental housing is temperature stabilized at about 35 degrees C (+/- 3 deg) to help obtain the required frequency and power stability. A cutaway view of the environmental housing is provided in figure 2, and a block diagram of the system is at figure 3.

The microwave head of the 34 GHz radiometer is also temperature stabilized in the same manner and for the same reasons as at the source. The receiver signal handling method used is known as superheterodyne detection. This is the same method used in all modern radio and TV receivers. Superheterodyne detection provides superior signal to noise performance relative to unheterodyned, lockamp-assisted radiometers such as are used on the optical transmissometer system. The primary benefits of superheterodyne detection are three. First, the received signal is immediately down converted (or heterodyned) to a more friendly frequency (34 GHz to 30 MHz in this case) in the microwave head. The new frequency contains all of the information present in the original frequency. The lower intermediate frequency (i.f.) of 30 MHz, however, eliminates the need for a chain of difficult-to-tune microwave circuits. Second, 30 MHz amplifiers and filters are standard electronic items, readily available at low cost. This is an important consideration for field maintenance. Third, in general, i.f. amplifiers are used because they provide a stable tuned circuit with very high gain and low noise.

A second frequency conversion takes place when the 30 MHz signal is chopped at 1 kHz. (We will evaluate the use of a reference signal chopped at the source during the field evaluation, but it is difficult and may not be needed.) The 1 kHz signal is then further filtered and amplified. Finally, a precision demodulator circuit converts the 1 kHz signal to a voltage representing the received signal strength. The system uses six inch diameter dielectric lenses




to  collimate  and  collect  the  transmitted  energy.  The  efficien¬ 
cy  of  these  lenses  allows  them  to  replace  more  costly  and  much 
more  cumbersome  parabolic  metal  dish  antennas. 

System performance has been calculated for a 5 kilometer path length. A free space (no attenuation) signal to noise ratio of 36 dB is predicted. Assuming that a minimum signal to noise ratio of 3 dB is needed to make a usable reading, we have a dynamic range of 33 dB, or from 100% down to 0.05% transmission. It is likely that other factors will cause the minimum readable signal to go to the 0.1 to 1.0 percent range, but this is a healthy performance range. Calculations indicate that it should be possible to make 5 kilometer path measurements in precipitation in excess of 64 mm/hr. Our field evaluation of the system will of course confirm these predictions.
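The decibel arithmetic behind these numbers can be reproduced directly; the sketch below only restates the figures quoted in the preceding paragraph:

    free_space_snr_db = 36.0      # predicted signal-to-noise ratio, no attenuation
    minimum_usable_snr_db = 3.0   # assumed threshold for a usable reading

    dynamic_range_db = free_space_snr_db - minimum_usable_snr_db      # 33 dB
    min_transmission = 10.0 ** (-dynamic_range_db / 10.0)             # ~0.0005

    print("dynamic range = %.0f dB, minimum transmission = %.2f %%"
          % (dynamic_range_db, 100.0 * min_transmission))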


5. DATA PROCESSING

The software for the transmissometer was developed by Chris Wolfson of EOIR Measurements, Inc., who was also one of the development engineers. The voltage from the receiver output is sent to a computer by RS-232, where it is converted into
transmission  values.  The  software  has  a  number  of  configura¬ 
tion  files  which  are  accessed  by  menus.  The  main  menu  offers 
the  following  selections: 

OPERATION  MENU  -  collects  and  processes  data 
CONFIGURATION  MENU  -  sets  the  test  parameters 
CALIBRATION  MENU  -  saves  calibration  and  setup  data 
PRINTOUT  MENU  -  controls  printout  of  selected  data 

The  first  step  after  setup  and  alignment  uses  the  calibration 
menu.  In  this  step  the  calibration  distance,  signal  strength 
and  attenuator  setting  are  recorded  along  with  the  specific 
run values for distance and attenuator setting. This menu is followed by the configuration menu, which is used to set run ID, sampling, and recording intervals. Finally, the operation
menu  is  called.  Here  one  has  the  choice  of  timed  start/stop, 
user  commanded  start/stop,  or  continuous  readout.  When  in  the 
continuous  mode,  the  data  is  only  displayed,  not  stored.  The 
sample  interval  can  be  varied  from  many  times  a  second  to  a 
few  seconds.  The  recording  interval  specifies  the  averaging 
period.  The  last  five  recording  period  data  points  are  dis¬ 
played. 

The transmission values are calculated using the following equation:

    T = (S_m / S_c) * (A_c / A_m) * (D_m / D_c)^2 * 100

where T is transmission in percent corrected for path loss,
      S is signal strength,
      A is attenuator setting,
      D is distance or path length,

and the subscripts c and m refer to calibration or measured values.

The software contains several diagnostic routines; however, these are not on a menu at this time.
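A sketch of the transmission calculation is given below. The exact form of the correction is an assumption on our part (measured signal scaled by the attenuator ratio and by the square of the distance ratio, consistent with the 1/(4 pi R^2) free-space loss), so it should be read as an illustration rather than as the EOIR software itself:

    def percent_transmission(s_m, a_m, d_m, s_c, a_c, d_c):
        """Percent transmission corrected for free-space path loss.
        s: signal strength, a: attenuator factor applied to the signal,
        d: path length; subscripts m and c denote the measurement run and
        the calibration run. The (d_m / d_c)**2 term undoes the
        1/(4*pi*R**2) spreading loss; the direction of the attenuator
        ratio assumes a is recorded as a linear transmission factor."""
        return 100.0 * (s_m / s_c) * (a_c / a_m) * (d_m / d_c) ** 2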


FIELD  EVALUATION 

The  instrument  was  set  up  at  the  NVESD  compound  at  FT 
A.P.  Hill,  Va  on  30  August.  The  day  was  very  clear  and  dry. 
The first step was calibration and checkout. We first wanted to demonstrate that the received signal intensity under good transmission conditions followed the free space loss curve. To do this we set up at 500, 2000, and 5000 meters and did the necessary alignment procedures. We then took intensity data at each of the attenuation settings from each distance. The calibration distance was the 500 meter point, assuming that there we would have negligible transmission loss. We normalized the data collected at each distance and calculated the standard deviation. Figure 4 shows the results with the averaged data for each distance and the 3 sigma standard deviation. While the data did not fall exactly on the free space loss curve, the shape appears the same and the losses represent a constant percentage regardless of range. The difference is attributed to system noise and less than perfect calibration. Also, small errors in calibration distance can cause significant errors at the longer ranges. Armed with this system verification we proceeded to the collection of transmission data.


DATA  PRESENTATION 

Data will be presented for three rain events. The first event was on 26 and 27 September. This was the first day of rain after we set up the instrument. There were four measurement sites down range at 1, 2, 3, and 5 km. The rainrate data in figure 5 shows three periods of significant rain fall. The millimeter wave transmission data shown in figure 6 clearly was affected during these periods. However, the data dropped below zero percent, which was bothersome. After investigating this data we discovered that the problem was in a linearization routine which had been turned on in the software. This routine had the effect of incorrectly distorting the received signal below about the 5% level. This routine proved to be not only erroneous but also unnecessary and was removed. Another interesting result was the reduction in transmission after the rain ended. Figure 7 shows visibility data for that period. Most of the time it was below 5 km in fog. A few days after the first event we noticed some problems with power stability and response to calibration, so the instrument was returned for maintenance. After its return we again set up and calibrated. After a period of no rain, we finally had another chance on October 20. Here about 3 mm/hr of rain reduced the transmission to about 18%. Data for this event are shown in figures 8, 9, and 10. The noise in the millimeter data after 2230 hrs is caused by over ranging which we did not edit out. This was probably due to the wet ground. On October 23 we captured another rain event as shown in figure 11. Figure 12 provides the transmission data. Again the periods corresponded well. Figure 13 provides the visibility data. On this occasion our PMS precipitation probe was functional, and some of the size distribution data is presented in figures 14 and 15. After this day, we had to pack the instrument for shipment to Alaska where it was to be used in snow conditions. However, this test was canceled, so snow data will be collected later.

CONCLUSION 

We are very encouraged by the first data from the millimeter wave transmissometer, but we have a lot to learn about its use. The addition of the millimeter wave transmissometer is eagerly awaited by the Ft. Belvoir Meteorological Team. It will extend their transmission measurement capability into an important region of future system development. The system has already been scheduled for two field tests. It is important to understand that this new system has been designed to fit in the overall data collection system used by the team so that no additional resources will be required for operational use. The second unit is nearly completed and incorporates changes identified during the testing of the first unit.




FIG 1. 34 GHZ TRANSMISSOMETER SYSTEM

FIG 2. ENVIRONMENTAL HOUSING

FIG 3. SYSTEM BLOCK DIAGRAM

FIG 4. INITIAL FIELD PERFORMANCE TEST

FIG 5. SEPT 26,94 RAINRATE VS TIME

FIG 6. SEPT 26,94 MILLIMETER WAVE TRANSMISSION

FIG 7. SEPT 26,94 VISIBILITY VS TIME


Session  V 

ATMOSPHERIC  PHYSICS 




WIND FIELD MEASUREMENT WITH AN
AIRBORNE CW-CO2-DOPPLER-LIDAR (ADOLAR)


S. Rahm and Ch. Werner
German Aerospace Establishment DLR
82234 Oberpfaffenhofen, Germany


ABSTRACT 

The small scale wind field in the boundary layer is an important parameter, e.g., for the detection of fluxes of pollutants. For this purpose a compact cw CO2 Doppler lidar has been developed that can perform measurements from the ground as well as from an aircraft. In the airborne setup this instrument can easily be installed in the research aircraft FALCON F20 of the DLR. The instrument consists of two racks, one electronic rack (size 56 X 66 X 96 cm) and an optical rack (41 X 62 X 125 cm), which carries the transceiver, the laser, and the interferometer optics, all together mounted at two sides of an optical breadboard. The transceiver consists of a 150 mm diameter off-axis telescope and a Germanium wedge, which provides the conical scan with a cone angle of 60°. The interface to the atmosphere is a Germanium window installed in the bottom of the aircraft. One critical part is the elimination of the Doppler shift due to the platform motion. This can be done at a low flight level by the use of the ground return. At higher flight levels, where the ground return is not available, the built-in inertial reference system (IRS) of the aircraft will be used for this task. Tests with this instrument demonstrated that the wind field can be measured from the aircraft. For battlefield operations (ground based or airborne) the system should have an automatic operation mode. The wind measurement requirements are: wind speed 1-30 m/s with an accuracy of 1 m/s and 5° in direction. The time of one measurement to get the mean wind depends, for the ground based system, on the atmospheric stability and surface roughness length. It will not exceed 60 s in the worst case.

1.  INTRODUCTION 

The knowledge of the three dimensional wind field is mandatory for the description of transport phenomena, e.g., fluxes of pollutants or dust, and also for small scale meteorological effects. To obtain this wind field at the condition of clear air (no rain or fog), a Doppler lidar is the appropriate instrument. For the continuous wave (cw) CO2 lidar, the energy is focused by the telescope into the region of investigation. Some of the radiation is scattered back by small aerosol particles drifting with the wind speed through the sensing volume. The back scattered radiation is collected by the telescope and detected by a coherent technique. With the laser Doppler method one gets the radial wind component along the beam axis. To determine the magnitude and direction of the wind, some form of scanning is required. With a ground based Doppler lidar the wind of only a small region can be observed. Therefore an airborne system is a good approach to obtain information about the wind of a larger area in a relatively short time. On the other hand, with an airborne system one has to deal with some additional problems, as there are the influence of vibrations, the safety requirements and, most important, the elimination of the platform motion by an appropriate signal processing. This presentation will deal with these problems as well as with the system design and the results of a first test flight.

2.  THEORY  OF  THE  WIND  EVALUATION 

One possibility to measure this wind field is the use of a conical scanning Doppler lidar. The principle of such a lidar is quite simple. Monochromatic light is transmitted into the atmosphere and scattered back by aerosols. In this process the line of sight (LOS) component of the velocity causes a Doppler shift [eq. (1)].

    Δν = (2 / λ) v_LOS     (1)

At a wavelength λ = 10.6 µm, 1 m/s LOS velocity corresponds to a Doppler shift Δν = 189 kHz. The Doppler shift is detected by an optical heterodyning technique. If the local oscillator has the same frequency as the transmitted light, the lidar system is called homodyne, and in the other case heterodyne. If several measurements during one conical scan are evaluated, it is possible to calculate the three dimensional wind field by applying a sine fit, for example. For this, the wind field is assumed to be homogeneous in each level over the measured area. This technique is well approved for ground based systems (Schwiesow et al. 1985), (Bilbro 1980). On the other hand, only a few attempts have been made to integrate a Doppler lidar into an aircraft (Bilbro 1980)(Bilbro et al. 1986)(Woodfield et al. 1983), and none of these systems were applying a conical scan. If the laser Doppler system is used on board an aircraft, the speed of the aircraft modifies the Doppler shift [eq. (2)].

    v_LOS = v_LOS(wind) + v_LOS(carrier)     (2)

where v_LOS(carrier) is the carrier speed and v_LOS(wind) is the wind speed, both with respect to the lidar line-of-sight (LOS). Normally the wind field is the interesting parameter, therefore the platform motion has to be subtracted by means of signal processing. This will be done at a low flight level by the use of the ground return. At higher flight levels, where the ground return is not available, the built-in inertial reference system (IRS) of the aircraft can be used for this task. The platform velocity contribution is the main problem, together with the pointing accuracy, but as shown below the data of the IRS can fulfil these strong requirements.
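As an illustration of the sine-fit retrieval described above, the sketch below fits the LOS speeds from one scanner revolution to the magnitude of a sinusoid, as expected from a homodyne system that measures only |v_LOS|; the scan geometry is simplified to a single cone half angle, so this is a sketch of the technique and not the DLR processing chain:

    import numpy as np
    from scipy.optimize import least_squares

    def fit_wind(scan_angle_deg, los_speed, cone_half_angle_deg=30.0):
        """Fit |A*cos(theta - phi) + B| to the LOS speeds of one conical scan.
        A homodyne lidar measures only the magnitude of the Doppler shift,
        hence the absolute value. A relates to the horizontal wind, phi to
        its direction relative to the scan reference, B to the component
        along the cone axis."""
        theta = np.radians(np.asarray(scan_angle_deg, float))
        v = np.asarray(los_speed, float)

        def residuals(p):
            amp, phi, offset = p
            return np.abs(amp * np.cos(theta - phi) + offset) - v

        guess = [np.ptp(v) / 2.0, 0.0, np.mean(v)]
        amp, phi, offset = least_squares(residuals, guess).x
        # The horizontal wind projects onto the LOS with sin(cone half angle).
        speed = abs(amp) / np.sin(np.radians(cone_half_angle_deg))
        return speed, np.degrees(phi) % 360.0, offset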

3.  SYSTEM  SETUP 

The Doppler lidar ADOLAR is a homodyne system. This means that the magnitude of the wind field can be detected but not the sign. The principal setup of this system can be seen in figure 1.




Figure 1. Principal layout of the transceiver and the interferometer optic. In the real system, the telescope is mounted at the front side of the breadboard and the interferometer optic at the rear.

The laser is a CM1000 from Laser Ecosse with an output power of approximately 3.5 W cw. The laser is operating at a single longitudinal mode (SLM), transversal at TEM00, and is p-polarised. The laser beam passes a lens, which is used to adapt the beam parameters of the laser to the telescope, the Brewster window B, and the quarter wave plate, which converts the polarisation to a circular one. After passing the beam splitter BS (R = 10 %) the radiation is coupled into the 15 cm off-axis telescope from Lambda/Ten Optics, which focuses the light to 180 m distance. The wedge scanner provides the conical scan with a half cone angle of 30°. At the measurements described below, one revolution of the scanner needs 20 s. The back scattered light goes the same way back, passes again the quarter wave plate, where its polarisation is converted to the s-state, so that it can be focused onto the detector via the Brewster window B, a mirror, and a lens. The local oscillator (lo) is realised by the beam splitter BS, a CaF2 plate to adapt the power, and a mirror. The lo beam is also focussed on the detector and then heterodyned with the back scattered light. The mixing efficiency m (Kingston 1978) is estimated to be m ≈ 0.3 from the calculated beam parameters (Gaussian for the lo and an Airy pattern for the received light). The detector is an LN2 cooled MCT diode with an active size of 200 X 200 µm and a quantum efficiency η = 0.53, from Kolmar. The electric beat signal gets amplified with a bandwidth from 1 to 20 MHz. The low cut off frequency is necessary to eliminate EMI from the



laser power supply and the high cut off frequency to reduce the effect of aliasing of noise. The amplified signal is digitised with a sampling rate of 20 MHz and a resolution of 8 bit. One measurement contains 8 kByte of data, which represent a duration of about 0.4 ms. The repetition rate at these measurements was, at 2 Hz, quite low. One thing to be mentioned is that the Nyquist frequency was, at 10 MHz, rather low due to a malfunction of the ADC. But the aliasing effect which occurred at some few measurements has been resolved.

To obtain a mechanically stiff and robust setup, the interferometer optic together with the laser is
mounted  at  one  side  of  an  optical  breadboard  900  X  300  mm  from  Newport,  and  the  telescope  is 
fixed  at  the  other  side.  The  breadboard  itself  is  mounted  in  a  light  weight  aluminium  frame, 
which  compensates  the  average  pitch  angle  of  5.5°  of  the  aircraft.  The  scanner  is  fixed  below  the 
telescope  at  the  frame.  Figure  2  shows  the  installation  of  the  optical  part  in  the  Falcon  F20 
aircraft. 


To  reduce  the  influence  of  vibration  the  whole  instrument  is  connected  with  shock  mounts  to  the 
aircraft. The electronic equipment, like the cooling unit for the laser, the A/D converter, the computer, the spectrum analyser for quick look, etc., is mounted in a standard electronic rack of size 0.55 x 0.65 x 0.95 m in front of the operator.




4.  MEASUREMENT  OF  THE  WIND  FIELD 


In the case of an airborne Doppler lidar we had to deal with some problems. The most important one is the elimination of the platform motion in the detected Doppler signal. ADOLAR was originally designed as a test bed to gain experience concerning the points described above. However, the results of the first test flight, discussed below, were so encouraging that it is now planned to upgrade this cw-system to an operational airborne Doppler lidar for the detection of small scale wind fields. A flight test was performed on May 19, 1994. Test results of the different signatures are shown in figure 3. During this test flight, measurements at several height levels have been performed. The most interesting scenario, which will be described here, was a part of the flight at 315 m height over the "Ammersee," a lake in Bavaria. On this day a rather strong wind was blowing, which was ideal for the test of the system, since each measurement contains information about the LOS velocity from both the aerosols and the ground return, mostly at different frequencies, so that they can be distinguished from each other. The algorithm for the signal processing is quite simple. The data set of one measurement (8 kByte) is divided into 16 parts of length 512 byte. Each of them is processed with an FFT and the resulting power spectra are averaged.
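A compact sketch of this averaging step: divide one 8 kByte record into 16 blocks of 512 samples, FFT each block, average the power spectra, and convert the strongest peak to an LOS speed using the 189 kHz per m/s scale factor quoted earlier. The numerical constants restate values from the text; everything else is illustrative:

    import numpy as np

    SAMPLE_RATE_HZ = 20e6      # digitiser sampling rate (from the text)
    HZ_PER_M_PER_S = 189e3     # Doppler shift per 1 m/s LOS speed at 10.6 um

    def averaged_spectrum(record, block_len=512):
        """Average the power spectra of consecutive blocks of one record."""
        n_blocks = len(record) // block_len          # 16 blocks for 8192 samples
        blocks = np.reshape(record[:n_blocks * block_len], (n_blocks, block_len))
        return np.mean(np.abs(np.fft.rfft(blocks, axis=1)) ** 2, axis=0)

    def los_speed(record, block_len=512):
        """LOS speed from the strongest spectral peak (DC bin excluded)."""
        spectrum = averaged_spectrum(record, block_len)
        freqs = np.fft.rfftfreq(block_len, d=1.0 / SAMPLE_RATE_HZ)
        peak = np.argmax(spectrum[1:]) + 1
        return freqs[peak] / HZ_PER_M_PER_S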


LOS  [m/s] 


Figure  3.  Doppler  lidar  signals  for  different  targets 

Figure 3 shows 3 single measurements at different scan directions and different times (flight altitude) for a first overview. There is a land and a sea surface return, each together with an aerosol (wind) signal. This is caused by the focal distance of 200 m and the stronger return of land and sea surface from outside the focal volume (figure 2). The wind signal is to the left of the land signal and to the right of the sea signal, caused by the scanning and the aircraft velocity. The LOS difference is on the order of 8 m/s. The third signature comes from a cloud.




The strong narrow peaks in the spectra (figure 3) belong to the ground return and the weak broad peaks to the aerosol signal. The spectral width of the peak is an indicator of the coherence time, which corresponds to the turbulence. The ground return is therefore narrow, and the peak belonging to the aerosol is rather broad due to the velocity distribution in the focal regime. The intensity of the ground return in the case of water is rather low. A ground return from the land is normally about 10 - 15 dB stronger. As can be seen (figure 3), the two peaks change their absolute position as well as their relative position to each other during the scan. This effect and the estimation of an average wind vector will now be discussed in more detail (figure 4).


Figure 4. a) Doppler shift of the ground return with |sin| fit; b) Doppler shift of the aerosol with |sin| fit; c) Difference b)-a); d) Corrected Doppler shift of the wind field. The abscissa corresponds to the position of the scanner.

The centre frequencies of both peaks (aerosol and ground return) are estimated for the measurements of 3 scanner revolutions (figures 4a and 4b). To each couple of points a |sin|




function has been fitted with a least squares fit procedure. For a homodyne system, the influence of the platform motion cannot simply be eliminated by calculating the difference aerosol minus ground return; this leads to the rather confusing result shown in figure 4c. Therefore the graph (figure 4c) was divided into two groups of areas, where the sign of one arbitrarily chosen group was changed (shaded in figure 4c). This operation leads to the graph in figure 4d. There it can be seen that the measured values and the |sin| fit mostly lie quite close together; only a few points do not fit the sinusoid. There are different possible reasons for this effect. First, changes of the attitude of the aircraft and small changes of its speed are not considered at all in this discussion; second, due to the strong wind and the low measurement level, a lot of turbulence and variation of the wind speed can be expected during the observation time.
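A minimal sketch of this fitting step, assuming a simple amplitude/phase/offset parameterisation of the |sin| curve (this is not the authors' code; the parameterisation and the synthetic data are assumptions), could look like the following.

import numpy as np
from scipy.optimize import least_squares

def abs_sin_model(params, angle_rad):
    """|sin| model: amplitude * |sin(angle - phase)| + offset."""
    amplitude, phase, offset = params
    return amplitude * np.abs(np.sin(angle_rad - phase)) + offset

def fit_abs_sin(angle_rad, los_velocity):
    """Least-squares fit of the |sin| curve to LOS velocities over the scan."""
    residuals = lambda p: abs_sin_model(p, angle_rad) - los_velocity
    guess = [np.ptp(los_velocity) / 2.0, 0.0, los_velocity.min()]
    return least_squares(residuals, guess).x

# Synthetic scan over three scanner revolutions (as in figure 4).
angles = np.linspace(0.0, 6.0 * np.pi, 120)
truth = abs_sin_model([8.0, 0.4, 100.0], angles)
measured = truth + 0.3 * np.random.randn(angles.size)
print("fitted amplitude, phase, offset:", fit_abs_sin(angles, measured))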

From the coefficients of the |sin| fit an average wind vector was estimated in reference to the aircraft x-axis, and the result was compared with the data from the inertial reference system (IRS) of the aircraft. These results are shown in table 1.

Table 1. Comparison of the measured wind field (all values in m/s if not otherwise indicated). The results at the level of 159 m over ground were obtained by the Doppler lidar, and the results at the aircraft level by the IRS.

                                      Lidar      IRS
ground speed                          104.5      104.1
horizontal wind speed at 315 m          -         19.2
horizontal wind angle at 315 m          -         61.7°
vertical wind speed at 315 m            -          0.1
horizontal wind speed at 159 m         13.5        -
horizontal wind angle at 159 m         74.3°       -
vertical wind speed at 159 m            0.9        -

The ground speed results from the lidar and the IRS agree well. The difference in the wind parameters is due to the different levels, 159 m over ground for the lidar and 315 m for the aircraft.

These points are the most important results of this campaign. The good agreement between the lidar and the IRS concerning the ground speed is the basis for the evaluation of a three-dimensional wind field with an airborne Doppler lidar. Furthermore, it has been proved that it is possible to measure a Doppler shift from aerosols with this instrument. Before the next flight the following points will be changed or improved. An acousto-optic modulator will be integrated to obtain a heterodyne system so that magnitude and sign of the Doppler shift can be measured. Accordingly, the sampling rate of the ADC must be higher (≈ 100 MHz). To establish a more sophisticated signal processing, the information about the attitude and velocity of the aircraft and the exact scanner position is required. These parameters have to be stored simultaneously together with the digitised data. With all these points improved, it should be possible to establish an operational Doppler lidar for the measurement of small scale wind phenomena.
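The benefit of the planned heterodyne upgrade can be sketched numerically; this is an illustration only, and the 10.6 µm wavelength and 27 MHz offset frequency are assumed values, not instrument specifications. A homodyne receiver only measures the magnitude of the Doppler shift, while an offset frequency preserves its sign.

import numpy as np

WAVELENGTH = 10.6e-6   # laser wavelength in metres (assumed)
F_OFFSET = 27.0e6      # assumed acousto-optic modulator offset frequency, Hz

def doppler_shift(v_los):
    """Doppler shift for a line-of-sight velocity (positive = towards the lidar)."""
    return 2.0 * v_los / WAVELENGTH

for v in (+5.0, -5.0):                        # two opposite LOS velocities, m/s
    homodyne = abs(doppler_shift(v))          # sign of the shift is lost
    heterodyne = F_OFFSET + doppler_shift(v)  # sign survives as an offset from F_OFFSET
    print(f"v = {v:+.1f} m/s  homodyne: {homodyne/1e3:.1f} kHz  "
          f"heterodyne: {heterodyne/1e6:.3f} MHz")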








BEHAVIOR  OF  WIND  FIELDS  THROUGH  TREE  STAND  EDGES 

Ronald  M.  Cionco 
Battlefield  Environment  Directorate 
US  Army  Research  Laboratory 
White  Sands  Missile  Range,  NM,  88002-5501  USA 

David  R.  Miller 

Natural  Resources  Management  and  Engineering  Department 
University  of  Connecticut 
Storrs,  CT,  06269-4087,  USA 


ABSTRACT 

Recently  several  investigators  have  indicated  that  the  forest  edge  effect  involves 
the  generation  of  form  drag  forces,  the  appearance  of  a  large  pressure  gradient, 
the  upward  (or  downward)  deflection  of  mean  flow,  the  transport  of  momentum 
into  the  leading  edge  of  the  canopy,  and  the  advection  of  the  flow  characteristics 
conditioned  by  the  upstream  surface  across  the  edge.  The  purpose  of  this  paper 
is  to  quantify  the  effects  of  atmospheric  stability  and  wind  regime  on  these  edge 
flow processes. To analyze these effects, the huge Project WIND canopy flow and micrometeorological data base collected by the USA ASL (now the US ARL) was used. The WIND data are tailored for this type of study. Other known data sets
are  notably  limited  for  this  purpose.  These  raw  wind  data  sets  were  conditionally 
selected  for  periods  when  the  wind  was  +/-  20  degrees  of  perpendicular  to  the 
stand edge. This procedure resulted in 132 thirty-minute periods at the orchard edge and 94 periods at the forest edge. The 30 minute data runs were classified by z/L into three categories: free convection (z/L < -1), mixed convection (-1 < z/L < 0), and stable (z/L > 0), as similarly defined by Panofsky and Dutton.
Results  of  this  research  demonstrate  that  the  airflow  properties  conditioned  by  the 
upwind  surface  such  as  friction  velocity,  mixing  length,  and  turbulence 
characteristics  are  advected  for  varying  distances  across  and  through  the  tree 
stand  edge  depending  on  the  atmospheric  stability. 


1.  INTRODUCTION 

Albini (1981), Li et al. (1990), Miller et al. (1991) and most recently Klaassen (1992) have modeled the air flow across forest edges. They have indicated that the edge effect involves the generation of form drag forces, the appearance of a large pressure gradient, the upward (or downward) deflection of mean flow, the transport of momentum into the windward edge of the canopy, and the advection across the edge of the flow characteristics conditioned by the upstream surface. Very few field measurements are available to verify these and subsequent




models. Those that are available were made for limited studies (Miller et al., 1991; Raynor, 1971; Thistle, 1988; Wang, 1989; Kruijt, 1993). The data sets are therefore difficult to transfer to other sites and conditions because of instrumentation, spatial sampling, and fetch constraints. None are comprehensive enough to analyze the interactions of the forest edge and the state of the atmospheric boundary layer (stability, etc.) on the local mean wind field except for the Project WIND data base.

During Project WIND, comprehensive (spatial and temporal) micrometeorological measurements were made across the edge of an orchard and a pine forest in north central California (Cionco, 1989), conducted by the US Army Research Laboratory (formerly the USA Atmospheric Sciences Laboratory). The purpose of this paper is to use the measured wind fields at these two edges to quantify the effects of the atmospheric stability and wind regime on the mean wind flow through and over the tree stand edges. Note that although the forest setup will be mentioned, this paper will limit its scope to reporting on the results of the analysis of the Orchard Site.


2.  METHODS 


2.1  Field  Measurements 

The field measurements of Project WIND were conducted in and about the Sacramento River Valley of Northern California during the period beginning June 1985 and ending October 1987 (Cionco, 1989). One site was a geometrically uniform almond orchard on the flat terrain of the Sacramento River Valley. The other site was a more complex coniferous forest on the west slopes of the Sierra Nevada Mountains.

In each phase, data were collected over a two week time span for selected periods, resulting in two full sets of daytime (1000 to 1600 hrs), nighttime (2200 to 0400 hrs), and transition (sunrise and sunset) periods and two full 24 hour diurnal periods (1000 to 1000). The four phases of Project WIND were conducted during synoptic meteorological regimes of weak marine incursion, frontal activity - Jan/Feb 86, shallow convection - Apr/May 86, and subsidence - Sep/Oct 87.


Identical sets of eight-level micrometeorological towers were located at both the orchard and forest sites at three positions during each phase, as presented in Table 1. One tower (OT3) is located deep in the canopy, 24 tree heights (H) from the canopy's edge. The second tower (OT2) is placed just inside (2.5H) the canopy's edge. The third tower (OT1) was on the extensive and uniformly cut open field, 24 H from the canopy's edge in the clearing. Note that OT1 provides the reference profile of the surface layer ambient flow for this study. The sensor heights, variables measured, measurement frequencies and tower locations were reported in detail by Cionco (1989). Complete profiles of the wind components (u,v,w), temperature (T) and relative humidity (RH) were measured at each tower. Note that the orchard canopy was 8 m tall whereas the forest canopy averaged 18 m tall.




Table 1. Micrometeorological Tower Instrumentation for the Orchard

SENSOR HT    OT1               OT2               OT3
2.0TH        uvw, T/ΔT, RH     uvw, T/ΔT, RH     uvw, T/ΔT, RH
1.7          uvw, T/ΔT         uvw, T/ΔT, RH     uvw, T/ΔT, RH
1.45         uvw, T/ΔT         uvw, T/ΔT, RH     uvw, T/ΔT, RH, Rs
1.25         uvw, T/ΔT         uvw, T/ΔT, RH     uvw, T/ΔT, RH
1.00TH       uvw, T/ΔT         uvw, T/ΔT, RH     uvw, T/ΔT, RH
0.75         uvw, T/ΔT         uvw, T/ΔT, RH     uvw, T/ΔT, RH
0.50         uvw, T/ΔT         uvw, T/ΔT, RH     uvw, T/ΔT, RH
0.25         uvw, T/ΔT, RH     uvw, T/ΔT, RH     uvw, T/ΔT, RH, Rs
Sfc          Rn, P, Hs                           Hs


2.2  Data  Reduction 

The  raw  data  sets  were  conditionally  sampled  to  select  all  the  periods,  at  least  one  hour  long, 
when  the  wind  direction  was  within  +  or  -  20  degrees  of  normal  to  the  stand  edge.  The  data 
were  then  split  into  even  30  minute  run  periods.  This  procedure  resulted  in  132  thirty  minute 
periods  at  the  orchard  edge  (60  into  the  edge  and  72  with  the  wind  out  of  the  edge)  and  112 
thirty  minute  periods  at  the  forest  edge  (19  in  and  93  out). 
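A sketch of this conditional selection step is given below. It is illustrative only, not the original reduction code; the edge-normal azimuth, the data layout, and the wrap-around handling are assumptions.

import numpy as np

def within_normal_sector(wind_dir_deg, edge_normal_deg, half_width_deg=20.0):
    """True where the wind direction lies within +/- half_width_deg of the
    direction normal to the stand edge (or its reciprocal, i.e. flow out of the edge)."""
    diff = (np.asarray(wind_dir_deg) - edge_normal_deg + 180.0) % 360.0 - 180.0
    return (np.abs(diff) <= half_width_deg) | (np.abs(np.abs(diff) - 180.0) <= half_width_deg)

# Example: edge normal assumed at 270 deg; keep winds blowing into or out of the edge.
dirs = np.array([265.0, 300.0, 95.0, 180.0])
print(within_normal_sector(dirs, 270.0))   # -> [ True False  True False]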

Mean fluxes of heat (θ*) and moisture (q*), the mixing length (l_m), and the resulting stability parameter (z/L) were calculated from the above-canopy profiles by Monin-Obukhov surface layer similarity (Obukhov, 1971), where the method of Rachele and Tunick (1992) and Tunick et al. (1994) was used in place of the diabatic influence functions of Paulson (1970) and Benoit (1977).

The 30 minute data runs were classified by z/L (measured in the open field) into three categories: free convection (z/L < -1), mixed convection (-1 <= z/L <= 0), and stable (z/L > 0). These are similar to the stability classes defined by Panofsky and Dutton (1984), except that we group periods when mechanical turbulence dominates and when mechanical turbulence is dampened as follows:


z/L                    Panofsky and Dutton Interpretation      This Classification

Strongly negative      Heat convection dominant                Free convection
Negative, but small    Mechanical turb. dominant               Mixed convection
Zero                   Solely mechanical turbulence            Mixed convection
Slight positive        Slight damping of turbulence            Stable
Strong positive        Mech. turb. severely reduced            Stable


These groupings were used because the number of 30 minute runs with slightly positive or zero z/L was limited.
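Written out as a small illustrative function (the thresholds follow the grouping above; everything else is assumed):

def stability_class(z_over_L):
    """Classify a 30 minute run by the open-field stability parameter z/L."""
    if z_over_L < -1.0:
        return "free convection"      # heat convection dominant
    if z_over_L <= 0.0:
        return "mixed convection"     # mechanical turbulence dominant or neutral
    return "stable"                   # turbulence damped

for zl in (-2.3, -0.4, 0.0, 0.7):
    print(f"z/L = {zl:+.1f} -> {stability_class(zl)}")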




Rawinsonde measurements of the upper air profiles were available every two hours. The stability classifications determined from the surface tower data were compared to the static stability of the boundary layer determined from the rawinsonde data by the method of Stull (1993). Only runs in which both measurements of stability agreed were used. Table 2 lists the number of 30 minute runs in each stability/wind direction category.

Table 2. Numbers of 30 minute runs in each Wind Direction/Stability Category used in the Analysis.

Wind Direction      Free    Mixed    Stable
Orchard:
  Into edge           6       18       36
  Out of edge        21       28       23
Forest:
  Into edge           7       10        2
  Out of edge         1        5       87

3.  RESULTS 

The  forest  edge  and  orchard  edge  results  were  very  similar.  Therefore  the  results  are  presented 
here  for  the  orchard  only  to  meet  space  limitations. 

3.1  Profile  adjustments  across  the  edge 

Figure  1  presents  average  vertical  profiles  from  all  of  the  half  hour  periods  for  the  three 
stability  classes  (stable,  mixed  convection,  free  convection)  at  each  tower  location  with  the 
wind  blowing  either  into  (figure  1  a,b,c)  or  out  of  (figure  1  d,e,f)  the  stand  edge.  For 
comparison,  all  the  profiles,  even  those  outside  the  orchard,  are  scaled  by  the  wind  at  8 
meters  which  is  the  height  of  the  orchard  canopy. 

In the open field (tower OT1, figure 1a,d) the measurements were all in the surface layer well above the short crop canopy, and the scaled profiles demonstrate the effects of stability on the surface layer flow. The profiles from mixed and free convection conditions are very similar, with slightly less shear during free convection conditions. The profiles in stable boundary layers diverge drastically from the profiles during convective boundary layer conditions, with the overall shear much larger, as expected in conditions with little vertical turbulent mixing.

Comparison of the OT1 profiles with wind into and out of the edge shows essentially no difference during convective conditions, but during stable conditions the mean profile has significantly less shear. Apparently the greater mixing capacity above the orchard is transported further than 24 tree heights out of the edge during stable boundary layers but is transported less than 24 tree heights during convective and neutral conditions. Therefore the assumption of infinite fetch is violated at this location during stable conditions but not in a convective boundary layer.

Figure 1. Mean Wind Profiles: with wind into the edge: a. at OT1; b. at OT2; c. at OT3; and with wind out of the edge: d. at OT1; e. at OT2; f. at OT3.

The wind flowing into the edge at 2.5H inside the orchard edge (figure 1b) shows high shear above the canopy, as the wind compresses over the top of the canopy, and a relatively high wind penetrating the subcanopy trunk space, as noted in previous edge studies. Stability affects the magnitude of these flows significantly. In free convection conditions the below canopy penetration is maximized, while in stable conditions the above canopy shear is maximized. Obviously the strength of vertical mixing in the upwind surface layer has a significant effect on momentum absorption and wind penetration into the edge.

At 2.5H inside the leeward orchard edge (OT2, figure 1e), the mixed convection profile is essentially the same as inside the canopy. But the stable and free convection profiles show the effect of the relaxation of drag 2.5H downstream from their position. The free convection profile shows greater shear above the canopy, and the stable profile shows less shear above the canopy, than is present inside the orchard away from the edge. Obviously the downward vertical motion at the leeward edge overshadows the effects of stability at this location.




At 24H inside the windward orchard edge (OT3, figure 1f), the relative effects of stability are similar to outside the canopy when no edge effects are present. The mixed and free convection profiles are very similar, while the stable profile shows greater shear above the canopy. Below the canopy, no subcanopy maximum is present and the least penetration of momentum occurs during stable conditions, as expected. Comparison of this to figure 1c, the same position but leeward of the edge, shows the same profile during mixed convection conditions, but the stable and the free convection profiles diverge significantly from figure 1f. The free convection profile has less shear and the stable profile has more shear above the canopy than when the edge was downwind. These characteristics were developed at the edge (figure 1b) and apparently were transported horizontally further than 24H into the edge.

3.2  Scaling  Parameter  Adjustment  Across  the  Edge 

Table 3 presents the mean values of the friction velocity, U*, calculated from the profile data above the canopies, for each stability and wind direction classification at the three orchard tower locations. Comparison of the OT1 and OT3 values shows that U* in the roughness sublayer above the orchard canopy, at OT3, was about two to three times that in the surface layer above the open field, reflecting the greater mechanical turbulence above the rougher surface. In all cases the greatest difference between the open and orchard friction velocities was during stable conditions and the least difference was in free convection conditions.

Table 3. Mean U*, θ* and q* Values Across The Orchard Edge In Different Stability Classes.

                       OT1                      OT2                      OT3
Wind Dir         Free   Mixed   Stb       Free   Mixed   Stb       Free   Mixed   Stb
Into Edge:
  u*             .22    .39     .15       .19    .28     .24       .39    .83     .41
  θ*            -.6    -.27     .005     -1     -.7      .08     -10     -1.2     .1
  q*             0      0       .0007     0     -.006   -.0003     0     -.0002   .012
Out of Edge:
  u*             .21    .25     .03       .22    .46     .10       .30    .62     .42
  θ*            -1.1   -.8      .04      -1.1   -.17     .04      -6.5   -1.0     .11
  q*             .004   .002    .004     -.002  -.0009  -.002     -.018  -.003   -.018

The OT2 edge tower data were intermediate between the open and orchard values. But the edge tower generally shows U* closer to the open conditions when the wind is from the open field and closer to the orchard values when the wind was from the orchard, reflecting the horizontal advection of the upwind conditions. This is demonstrated visibly when the values in Table 3 are plotted as horizontal profiles in figure 2. The plots show concave curves when the wind is from the open field and convex curvature when the wind is from the orchard. The exception to this is during stable conditions, which show an opposite trend when the flow is out of the edge.


Figure 3 compares the stability parameters measured simultaneously above the smooth field and rough orchard ((z-d)/L above the orchard, with z-d = 8 m, versus z/L above the open field). In stable conditions, the flow over the field is significantly less turbulent than that above the orchard. Thus conditions are less stable above the orchard due to higher mechanical mixing (i.e. orchard (z-d)/L < open z/L). In a convective boundary layer the air above the orchard tended to be more unstable than that above the open field when the open z/L was less than -1. Apparently when these conditions of strong convection (high heat flux and low wind speeds) occurred, the drag of the orchard canopy slowed the wind to nearly zero and thus induced nearly free convection above the orchard. When mixed convection dominated the boundary layer (-1 < z/L < 0), the above relationship was reversed, as shown in the inset graph in figure 3.

Vertical profiles of momentum flux inside the orchard (OT3) and just inside the edge (OT2) are shown in figure 4. The open area tower is not shown because all the measurement levels were in the surface layer and the momentum flux was essentially constant. The orchard tower (OT3), with homogeneous conditions, showed profiles similar to other tree stands in the literature, with maximum momentum flux above the canopy and decreasing values below the canopy top.

Remembering that the momentum flux values at the edge tower are not vertical, but are calculated perpendicular to the streamlines, the edge tower shows a maximum at the bottom of the canopy and a minimum just above the canopy top when the wind is into the edge, reflecting the horizontal penetration and downward deflection of momentum below the canopy. The above canopy minimum reflects the speed up over the top of the canopy, an increase in horizontal advection and a subsequent reduction in cross streamline turbulent transport.

When the mixed convection wind is from the edge, the edge profile is similar to the profile inside the canopy except that the profile is more vertical and shows lower values at all but the below canopy level. The lower vertical momentum transport is a reflection of the relaxation of the flow as it diverges and pours over the edge.

The θ* values show that the sensible heat flux was higher over the orchard and lowest over the open field. The very low (≈ 0) q* values reflected the lack of evapotranspiration in the arid open field. The orchard was periodically irrigated during the growing season and therefore generally showed a non-zero humidity gradient.

4.  DISCUSSION 

4.1  Mixing  Length  Adjustments 

Klaassen (1992) pointed out that the mixing length (l_m) adjustment across the edge was nonlinear, with advection of the mixing length from a different height a significant influence. He modeled the change in l_m across the edge as the sum of an advection term and an adjustment term, where the advection is:

\left( \frac{\partial l_m}{\partial x} \right)_{adv} = \frac{w}{u} \frac{\partial l_m}{\partial z}   (2)

and the adjustment term is:

\left( \frac{\partial l_m}{\partial x} \right)_{adj} = q \left( 1 - \frac{l_m}{l_\infty} \right)   (3)


where q is the "rate of adjustment" constant and l_∞ is the fully adjusted mixing length. Klaassen fit the above equation to the data of Bradley and arrived at a value for q. Using the profile data presented here a different value of q is a better fit, and there is no change with stability. There is, however, a significant change in the advection term with stability because the ratio w/u changes. Mean values of w/u at the edge (2.5H inside the edge) and interior (24H inside the edge) are plotted in figure 5 for each stability class.
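Assuming the reconstructed form of the advection and adjustment terms above, a simple forward-difference sketch of the downstream adjustment might look as follows. The numbers are placeholders only; q, w/u, the vertical gradient and the fully adjusted length are not fitted results from this study.

def mixing_length_adjustment(l0, l_inf, q, w_over_u, dl_dz, dx, n_steps):
    """March the mixing length downstream of the edge with the advection
    term (w/u)*dl/dz and the relaxation term q*(1 - l/l_inf)."""
    l = l0
    path = [l]
    for _ in range(n_steps):
        advection = w_over_u * dl_dz          # vertical advection of the upwind mixing length
        adjustment = q * (1.0 - l / l_inf)    # relaxation toward the fully adjusted value
        l += (advection + adjustment) * dx
        path.append(l)
    return path

# Placeholder numbers purely for illustration (lengths in metres, q dimensionless).
profile = mixing_length_adjustment(l0=0.5, l_inf=2.0, q=0.01, w_over_u=0.05,
                                   dl_dz=0.02, dx=1.0, n_steps=5)
print([round(v, 3) for v in profile])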

Outside  the  edge  in  the  open  the  mean  flow  was  horizontal  and  parallel  at  all  levels.  The 
slight  non-zero  values  in  these  average  profiles  are  the  average  leveling  errors  of  the 
anemometers.  In  the  interior  canopy  with  no  edge  effects  present  (OT3)  the  mean  flow  above 
the  canopy  is  essentially  horizontal.  Below  the  canopy  there  is  a  general  downward  motion 
reflecting  the  periodic  penetration  of  gusts  from  above. 

Near the edge with the wind blowing into the stand (OT2, figure 5a), the general upward flow over the top and the upward movement of air that penetrated the side of the stand below the canopy are apparent. The large positive below canopy values during stable flow are reflections of the relative absence of turbulent drag as air is forced into the side of the stand and then moves upward. In convective conditions, the kinetic energy of air forced into the side of the stand is dissipated more rapidly by the turbulence and the mean flow at 2.5H inside the edge does not move upward as readily.

Near the edge with the wind blowing outward (OT2, figure 5), the above canopy flow was essentially the same as the interior flow except when the mechanical mixing was strong (mixed convection), where a slight upward motion can be seen. Below the canopy, the motion was the opposite of that in the interior canopy, with a general upward motion seen during convective conditions. The exception is during stable conditions, where the flow has changed from downward in the interior to horizontal near the edge. The general upward motion during turbulent conditions ahead of the drag release at the edge was accompanied by a slowdown of the wind at this position.

The streamline slope is an important indicator of how rapidly the wind field is adjusting to the height change in the new surface. Thus from figure 5, when the air is flowing out of the orchard edge, we can see that the flow reacts to the drag release at the edge well before reaching the edge. We can infer that the adjustment length of the vertical component is longest above the canopy in stable conditions and shortest in free convection.

Figure 5. Profiles of the streamline angle, W/U (degrees), at towers OT1, OT2 and OT3, with wind into the stand (a) and with wind out of the stand (b). Horizontal bars indicate standard deviations.

5.  CONCLUSIONS 

Stability has a major effect on the canopy-air interaction at tree stand edges. In a stable boundary layer, the air over the tall canopy is less stable than that over the open field due to higher turbulent mixing induced by the rougher canopy. In a convective boundary layer, the absolute wind speed interacts with the two canopies differently. At very low wind speeds, the canopy drag reduces the wind enough that free convection conditions occur above the orchard while mixed convection dominates the open field. At moderate and high wind speeds, the higher turbulence over the orchard keeps the air very close to neutral while the open field is dominated again by mixed convection.

The strength of vertical mixing in the upwind surface layer has a significant positive correlation with momentum absorption and wind penetration into the edge. At the leeward edge, in contrast, the downward vertical motion overshadows the effects of stability and the flow reacts to the drag release well before reaching the edge.

The greatest difference between the open and orchard scaling parameter u* was during stable conditions and the least difference was in free convection conditions. The temperature scaling parameter, θ*, showed that the sensible heat flux was higher over the tall canopy and lowest over the open field. q* was zero over the non-irrigated open fields, but indicated measurable latent heat flux occurring over the tree canopies.

Upwind  conditions  are  advected  horizontally  across  the  edges  and  the  adjustment  length 
depends  on  stability.  The  adjustment  length  of  both  the  u  and  w  components  are  longest  above 
the  orchard  in  stable  conditions  and  shortest  in  free  convection. 

REFERENCES 

Albini, F. 1981. A Phenomenological Model for Wind Speed and Shear Stress Profiles in Vegetation Cover Layers. J. Appl. Meteorol. 20:1325-1335.

Benoit, R. 1977. On the Integral of the Surface Layer Profile-Gradient Functions. J. Appl. Meteorol. 16:859-860.

Cionco, R. M. 1989. Design and Execution of Project WIND. Proceedings of 19th Conference on Agr. and Forest Meteorol., Charleston, SC. AMS, Boston, MA.

Klaassen, W. 1992. Average Fluxes from Heterogeneous Vegetated Regions. Boundary Layer Meteorol. 58:329-354.

Kruijt, B. 1994. Turbulence Over Forest Downwind of an Edge. PhD Dissertation, Dept. of Physical Geography, University of Groningen, The Netherlands. 156 p.

Li, Z. J., J. D. Lin, and D. R. Miller. 1990. Air Flow Over and Through a Forest Edge: A Steady-state Numerical Simulation. Boundary Layer Meteorol. 51:179-197.

Miller, D. R., J. D. Lin, and Z. N. Lu. 1991. Air Flow Across an Alpine Forest Clearing: A Model and Field Measurements. Agr. and Forest Meteorol. 56:209-225.

Obukhov, A. M. 1971. Turbulence in an Atmosphere with a Nonuniform Temperature. Boundary Layer Meteorol. 2:7-29.

Panofsky, H. A. and J. A. Dutton. 1984. Atmospheric Turbulence. John Wiley, N. Y.

Paulson, C. A. 1970. The Mathematical Representation of Wind Speed and Temperature Profiles in the Unstable Atmospheric Surface Layer. J. Appl. Meteorol. 9:857-861.

Rachele, H. and A. Tunick. 1992. Energy Balance Model for Imagery and Electromagnetic Propagation. Technical Report ASL-TR-0311, US Army Atmospheric Sciences Laboratory, White Sands Missile Range, NM 88002-5501.

Raynor, G. 1971. Wind and Temperature Structure in a Coniferous Forest and a Contiguous Field. Forest Science 17:351-363.

Stull, R. B. 1991. Static Stability - An Update. Bull. Am. Meteorol. Soc. 72(10):1521-1529.

Thistle, H. W., Jr. 1988. Air Flow Through a Deciduous Forest Edge Using High Frequency Anemometry. Ph.D. Dissertation. Department of Natural Resources Management and Engineering, University of Connecticut, Storrs, CT 06268. 211 pp.

Tunick, A., H. Rachele, F. V. Hansen, T. A. Howell, J. L. Steiner, A. D. Schneider and S. R. Evett. 1994. Rebal '94 - A Cooperative Radiation and Energy Balance Field Study for Imagery and E. M. Propagation. Bull. American Meteorol. Soc. 73(3):421-430.

Wang, Y. 1989. Turbulence Structure, Momentum and Heat Transport in the Edge of Broad Leaf Tree Stands. Ph.D. Dissertation. Dept. of Natural Resources Management and Engineering & Civil Engineering, University of Connecticut, Storrs, CT 06268. 137 pp.




Proceedings of the 1994 Battlefield
Atmospherics Conference, 29 Nov - 1 Dec, 1994
White Sands Missile Range, New Mexico


TRANSILIENT  TURBULENCE,  RADIATIVE  TRANSFER, 
AND  OWNING  THE  WEATHER 

R.A.  Sutherland,  Y.P.  Yee  and  R.J.  Szymber 
U.S.  Army  Research  Laboratory 
Battlefield  Environment  Directorate 
White  Sands  Missile  Range,  New  Mexico  88002-5501 


ABSTRACT 

A  major  technical  barrier  encountered  in  modeling  radiative  processes  in  the 
atmospheric  boundary  layer  involves  making  proper  account  of  turbulent  and 
radiative  interactions.  Exact  solutions  are  not  possible  due  to  the  problem 
of  closure  of  the  underlying  differential  equations  and  the  complexity  of 
both  the  turbulent  and  radiative  processes.  The  most  direct  effect  of  the 
radiative  interaction  is  to  alter  the  energy  balance  at  the  surface  and  cause 
differential  heating  in  the  aerosol  layer.  These  effects  then  alter  the  local 
vertical profiles of temperature, aerosol concentration, and other meteorological variables which have an effect on the overall stability of the layer.
However  most  conventional  micro-meteorological  models  either  ignore  radiative 
processes  entirely  or  utilize  sub-grid  parameterization  schemes  which  may  not 
be applicable to the modern, aerosol-laden, "dirty battlefield" environment.
On  the  other  hand  many  conventional  radiation  models  ignore  the  turbulent 
interaction  by  focusing  only  on  cases  where  the  turbulent-radiative  heat  flux 
ratio  is  small.  In  this  paper  we  offer  an  approximate  solution  to  handle  both 
radiation  and  turbulence  using  a  modified  two-stream  radiative  transfer 
scheme  of  McDaid  (1993)  in  combination  with  a  relatively  new  " transilient" 
turbulence  theory  of  Stull  (1987)  and  others.  In  this  paper  we  extend  the 
Stull  method  to  incorporate  radiative  interactions  making  special  account  for 
such  radiative  processes  as  absorption,  extinction,  thermal  emission,  and 
multiple  scattering.  The  research  is  relevant  to  Army  applications  involving 
modeling  and  simulation  of  boundary  layer  processes  and  contributes  to  the 
scientific  basis  of  programs  in  “Owning  the  Weather"  and  limited  weather 
modification. 


1.  INTRODUCTION 

The  Army  has  had  a  longstanding  interest  in  simulating  and  modeling  the 
effects  of  the  "dirty  battlefield"  on  boundary  layer  micro-meteorological 
processes.  The  main  emphasis  in  the  recent  past  has  been  on  the  direct  effect 
of  aerosols  on  electromagnetic  propagation.  More  recently  it  has  been 
realized  that  these  same  processes  can  have  a  significant  effect  on  critical 
environmental  parameters  (Yee,  et.al.  1993a, b)  and  boundary  layer  destabil¬ 
ization  processes  (Lines  and  Yee,  1994;  Grisogono  and  Keislar,  1992; 
Grisogono,  1990  and  Telford,  1994) .  Other  relevant  work  on  a  larger  scale  and 
not  directly  involving  the  turbulent  reaction  has  been  published  for  fogs  by 
Bergstrom  and  Cogley  (1979)  and  Saharan  dust  by  Carlson  and  Benjamin  (1980)  . 
More  recently  the  idea  of  "Limited  Weather  Modification"  through  the  use  of 
aerosol  obscurants  and  artificial  fogs  has  been  considered  in  the  ARL  "Own 
the  Weather"  program  (Szymber  and  Cogan,  1994)  .  Also  the  relevance  of  such 
a  capability  to  the  Army  mission  is  discussed  in  the  recent  STAR  21  report 
published  by  the  National  Research  Council  (STAR  21  -  Strategic  Technologies 
for  the  Army  of  the  Twenty-First  Century,  1993)  .  However  one  of  the  major 
technical  barriers  in  modeling  the  relevant  physical  processes  has  been  in 
the  understanding  of  the  nature  of  turbulent-radiative  interactions  and  the 
attendant  effects  on  the  structure  of  the  atmospheric  boundary  layer  and  it 
is  this  problem  that  we  are  addressing  here. 




2.  PHYSICS  OF  THE  PROBLEM 

One of the questions that arose during the initial phases of this work concerned the magnitude of the radiative effect as compared to the turbulent effect. Whereas it is certainly true that for strong winds the effect of turbulence will dominate the radiative effect, it is not so clear which is more significant at the lower wind speeds. We can gain some insight into this question by examining fogs, where the radiative effect has been studied. This is done in the plot of figure 1, which is based upon an expression from Oliver et al. (1978) which we have modified slightly to express the ratio of radiative-to-turbulent heat flux density as a function of the friction velocity, u*.


Figure  1.  Plot  showing  the  radiative-to-turbulent  heat  flux  ratios  as  a 
function  of  friction  velocity  for  fog  thickness  of  10,  50,  and  100  meters. 


As expected, the plot shows the radiative flux to dominate at low wind speeds (friction velocity) and to diminish as the wind speed increases. Note, for example, that for a layer thickness of 50 meters the radiative flux is greater than the turbulent flux for all values of u* less than about 0.20 m/s and is at least 10 percent of the turbulent flux out to a value of 1.0 m/s. The calculations, although approximate, do give a qualitative indication of the significance of radiative effects.

The physics of the problem is explained with the aid of the sketches in figure 2, which show a hypothetical example of the effect of radiation and turbulence on profiles of temperature, T, and aerosol concentration, C. Figure 2a sets the initial condition of a hypothetical early morning temperature inversion over an isothermal layer near the surface. It is assumed that, at some height, the inversion gives way to a lapse condition indicated by the upper dashed line. In figure 2a we also assume a Gaussian aerosol concentration profile with a maximum in the isothermal layer.

The  effect  of  the  solar  radiation  is  to  first  set  up  an  "energy  balance"  at 
the  surface  that  results  in  an  increase  in  the  sensible  heat  flux  density, 
H.  The  second  effect  is  to  cause  heating  of  the  entire  layer  at  a  rate 
dependent  upon  the  concentration  and  radiative  properties  of  the  underlying 
aerosol. Because of the presence of the higher aerosol concentration near the surface, the change in the profile due to radiative heating will be largest near the surface. This is also where surface induced radiative and




turbulent  heat  fluxes  are  most  significant.  The  overall  effect  is  shown  in 
figure  2b  as  an  increase  in  the  temperature  of  the  isothermal  layer  and  the 
development  of  an  unstable  lapse  condition  near  the  surface.  In  this  step  the 
upper  level  inversion  remains  almost  unaffected  by  the  direct  heating  due  to 
the  lower  aerosol  concentration  at  this  level.  Also  during  this  step  the 
aerosol  concentration  is  not  directly  affected  by  the  radiative  heating. 


Figure 2. Sketch demonstrating the stages of the radiative turbulent interaction and the effects on profiles of temperature, T, and aerosol concentration, C.


In figure 2c we show the combined effect of radiative cooling and the induced turbulence, which tends to counteract the radiative forcing by producing an upward "mixing" of the hot air from the surface with the relatively colder air above. This step may be viewed as a (turbulent) reaction to the unstable layer created near the surface. Note from figure 2c that the overall effect results in a tendency toward neutral. Note also in figure 2c that the concentration profile has also changed due to the actual movement during the mixing process. The final step of the process, which takes place simultaneously with the turbulent reaction, is radiative cooling by thermal emission to an extent dependent upon both the temperature profile and the concentration levels.

In this paper we present our results in modeling the processes sketched in figure 2 using a combination of radiative transfer theory and a relatively new "transilient" approach to modeling the turbulent interaction due to Stull and co-workers at the University of Wisconsin (Stull, 1984, 1986, 1993; Stull & Takehiko, 1984; Stull & Driedonks, 1987; see also Cuxart et al., 1994).

3.  RADIATIVE  MODELS 

The radiative transport model consists of two parts: one to treat the effect of radiative heating of both the air column and the surface (i.e. radiative "forcing") and one to account for radiative cooling due to thermal self emission (i.e. radiative "reaction"). The radiative transport model is composed of two parts: one treating solar band (shortwave) radiation and




another treating thermal band (longwave) radiation. In both cases effects of multiple scattering and absorption are treated using a modified two-stream formulation originally due to Adamson (1975) as modified by McDaid (1993). This particular model has the advantage of relative simplicity and can be modified to treat inhomogeneities using first order corrections developed by Sutherland (1988).

Both  models,  and  the  turbulence  model  described  later,  assume  a  five  level 
aerosol  layer  as  illustrated  in  the  sketch  of  figure  3.  Each  layer  is  assumed 
to  be  homogeneous  and  described  by  a  single  value  for  temperature,  wind 
speed,  humidity,  and  aerosol  concentration. 


Figure 3. Sketch describing the five layer model. Note that the optical depth, τ, is referenced positive downward.


The  equations  for  calculating  the  radiative  fluxes  at  each  layer  interface 
are  written  in  general  as  follows: 

Shortwave:

F^{s}(\tau) = \mu_0 F_0 \left[ e^{-\tau/\mu_0} + T^{*}(\tau;\omega_0,g) - R^{*}(\tau;\omega_0,g) \right] + A_g G^{s} E_2(\tau_0 - \tau)   (1)

Longwave:

F^{l}(\tau) = D \left[ T^{*}(\tau;\omega_0,g) - R^{*}(\tau;\omega_0,g) \right] + A_g G^{l} E_2(\tau_0 - \tau)   (2)

where τ is the optical depth at any level inside the layer and τ_0 is the




total optical depth of the layer. In eq. (1) F_0 is the solar band irradiance incident at a zenith angle θ_0 [μ_0 = |cos(θ_0)|] at the top of the layer, and in eq. (2) D is the total thermal band downwelling hemispherical irradiance at the top of the layer. In both expressions A_g is the surface albedo and G is the downwelling surface irradiance, both taken as appropriate to the particular bandpass of interest (i.e. shortwave or longwave). Other quantities are the aerosol scattering albedo, ω_0, and the optical phase function asymmetry parameter, g, both of which are a function of the aerosol type and the bandpass under consideration. In both expressions E_2(x) is the well known exponential integral, and the functions R* and T* are the diffuse transmission and reflection operators, which are, strictly, complex functions of the optical depth that account for effects of multiple scattering and absorption and are described in greater detail elsewhere (Sutherland, 1988). For purposes here we use a less accurate but nevertheless useful approximation based upon a modified two-stream approximation due to McDaid (1993). Some typical values of the R* and T* functions are plotted in figure 4.
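The exponential integral E_2 appearing in eqs. (1) and (2) is available in standard numerical libraries; as a small illustrative sketch (not part of the original model code), it can be evaluated over a range of optical depths as follows.

import numpy as np
from scipy.special import expn   # generalized exponential integral E_n(x)

tau = np.linspace(0.01, 3.0, 7)
e2 = expn(2, tau)                # E_2(tau), as used in the flux expressions above
for t, v in zip(tau, e2):
    print(f"tau = {t:4.2f}   E2(tau) = {v:.4f}")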




Figure 4. Representative plots of the multiple scattering functions for diffuse reflection, R*, and transmission, T*.


In all of the above expressions the surface irradiance, G, is approximated
to  account  for  the  effects  of  the  aerosol  layer  as : 

Shortwave : 


Longwave : 


The  net  radiative  flux  at  any  level  in  the  layer  is  determined  by  repeated 
application  of  eqs.  (1)  and  (2),  then  the  time  rate  of  change  of  temperature 
due  to  radiative  heating  for  each  level  i  is  given  by: 




\frac{dT_i}{dt} = \frac{\Delta F_i^{\,s,l}}{\rho C_p\,\Delta z}   (5)

\frac{dT_1}{dt} = \frac{\Delta F_1^{\,s,l} + H}{\rho C_p\,\Delta z}\,; \quad i = 1   (6)


where the quantity ρC_p is the volumetric specific heat of air, Δz is the sub-layer thickness, and ΔF_i is the net radiative flux density entering the i-th layer for either the shortwave (superscript s) or longwave (superscript l) spectral regime. As indicated, the second expression applies only to the layer nearest the surface and utilizes the modeled surface heat flux density, H.
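A sketch of eqs. (5) and (6) in code form is given below; it is illustrative only. The 100 m layer thickness, the value of ρC_p, and the per-layer flux numbers are assumptions; only the 22.1 W/m² surface heat flux quoted later for the clear air case comes from the text.

import numpy as np

RHO_CP = 1.2e3      # volumetric specific heat of air, J m^-3 K^-1 (approximate)
DZ = 100.0          # assumed sub-layer thickness, m (five layers spanning 0-500 m)

def heating_rate(delta_flux, surface_heat_flux):
    """Temperature tendency (K/s) of each layer from its net radiative flux
    divergence; the lowest layer (i = 1) also receives the surface heat flux H."""
    forcing = np.asarray(delta_flux, dtype=float).copy()
    forcing[0] += surface_heat_flux          # layer nearest the surface, eq. (6)
    return forcing / (RHO_CP * DZ)

# Arbitrary net flux convergence per layer (W m^-2), lowest layer first.
dF = [30.0, 12.0, 6.0, 3.0, 1.0]
print(heating_rate(dF, surface_heat_flux=22.1) * 3600.0)   # K per hour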

In  practice  the  above  expressions  are  used  in  a  matrix  formulation  relating 
temperature  rate  of  change  to  height.  For  the  radiative  forcing  terms  this 
results  in  a  diagonal  matrix.  For  the  self  emission  terms  however  there  is 
a  need  to  account  for  transfer  of  (thermal)  radiation  from  one  level  to  the 
next  as  well  as  the  self  emission.  This  results  in  a  full  matrix  with 
elements  approximated  by: 

R_{ij} = e_i\,e_j\,(\sigma T_j^4)\,E_2\lvert \tau_i - \tau_j \rvert\,; \quad i \neq 1

R_{1j} = e_g\,e_j\,(\sigma T_j^4)\,E_2\lvert \tau_1 - \tau_j \rvert\,; \quad i = 1


where σ is the Stefan-Boltzmann constant and the layer emissivity e_i is given simply as e_i = (1 - ω_0)(1 - e^{-Δτ}), where Δτ is the layer optical thickness. As before, the first layer (i = 1) is an exception and requires accounting for the surface emissivity.


4.  TURBULENT  REACTION  MODEL 

We now turn attention to the turbulent reaction model, where we borrow strongly from the theory of "transilient turbulence" developed over the years by Stull and co-workers at the University of Wisconsin. A complete description of the theory can be found in the cited references, and in the following paragraphs we give only a cursory description. The direct effect of the radiative forcing is to alter the temperature profile, and this, in turn, results in the creation of an unstable sub-layer region as explained in the discussion of figure 2. The creation of this instability then sets up (turbulent) motions in the layer which tend to oppose the cause of the forced instability. The degree to which this happens, and whether or not the reaction will be turbulent or non-turbulent, depends upon several factors including the temperature and wind speed profiles and the general environmental conditions. One quantifiable measure of the strength of the instability is the Richardson Number given by:

Ri = \frac{(g/T_0)\,\partial T/\partial z}{(\partial U/\partial z)^2}

where T is air temperature, U is wind speed, g is the acceleration of gravity, and T_0 is a reference temperature. The Richardson Number represents the ratio




of the thermal (static) to mechanical (turbulent) fluxes. Large values imply a stable layer and small values imply an unstable layer. Another important quantity is the turbulent kinetic energy, E_turb, which is a complicated function of both the thermal and mechanical forces and is given in one dimensional differential form as:

\frac{dE_{turb}}{dt} = -\overline{u'w'}\,\frac{\partial U}{\partial z} - \overline{v'w'}\,\frac{\partial V}{\partial z} + \frac{g}{T_0}\,\overline{w'\theta'} - \epsilon_{turb}   (12)

where u', v', w', and θ' represent turbulent fluctuations in wind and potential temperature, and U, V, and Θ represent their time averaged counterparts. The quantity ε_turb is the turbulent energy dissipation rate.
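As a sketch (not the authors' code), the Richardson Number of the expression above can be estimated from discrete level data by finite differences; the example profile values below are assumed.

import numpy as np

G = 9.81   # gravitational acceleration, m s^-2

def gradient_richardson(z, temperature, wind_speed, t_ref=288.0):
    """Finite-difference estimate of Ri = (g/T0) dT/dz / (dU/dz)^2 between
    adjacent model levels."""
    dT_dz = np.diff(temperature) / np.diff(z)
    dU_dz = np.diff(wind_speed) / np.diff(z)
    return (G / t_ref) * dT_dz / dU_dz ** 2

# Five-level example profile (heights in m, T in K, U in m/s); values assumed.
z = np.array([50.0, 150.0, 250.0, 350.0, 450.0])
T = np.array([288.0, 288.0, 288.0, 289.0, 291.0])
U = np.array([3.0, 4.0, 5.0, 6.0, 7.0])
print(gradient_richardson(z, T, U))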


In a classic series of papers, Stull and co-workers have worked up a theoretical scheme that utilizes the above expressions in a formulation representing the time dependent turbulent reaction effect on any scalar property. The result, when adapted to our five layer model, is expressed in matrix form as:

[T_j] = [M(t)]\,[T_i]   (13)

[C_j] = [M(t)]\,[C_i]   (14)

where [T] and [C] are five component column vectors representing the initial (subscript i) and final (subscript j) temperature and concentration profiles and [M(t)] is the time dependent "transilient turbulence" reaction matrix.

For  all  of  the  work  here  we  calculated  the  turbulent  reaction  matrix  using 
the  FORTRAN  program  described  by  Stull  (1986)  which  we  applied  to  the  wind 
components  as  well  as  concentration  and  temperature. 
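A sketch of eqs. (13) and (14) is given below; it is illustrative only, and the entries of the transilient matrix are made-up mixing fractions, not output of Stull's FORTRAN program. Each row of M gives the fraction of air arriving at a destination level from each source level, so the rows must sum to one to conserve heat and mass.

import numpy as np

# Assumed 5x5 transilient matrix for one time step: M[j, i] is the fraction
# of air ending at level j that came from level i (rows sum to 1).
M = np.array([
    [0.70, 0.20, 0.05, 0.03, 0.02],
    [0.20, 0.60, 0.15, 0.03, 0.02],
    [0.05, 0.15, 0.60, 0.15, 0.05],
    [0.02, 0.03, 0.15, 0.60, 0.20],
    [0.02, 0.03, 0.05, 0.20, 0.70],
])
assert np.allclose(M.sum(axis=1), 1.0)      # conservation check

T_initial = np.array([15.0, 15.0, 15.0, 16.0, 18.0])   # initial profile of Table 2, lowest level first
C_initial = np.array([3.0, 2.0, 1.0, 0.5, 0.2])        # arbitrary aerosol concentrations

T_final = M @ T_initial      # eq. (13)
C_final = M @ C_initial      # eq. (14)
print(T_final, C_final)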


5.  RESULTS  AND  DISCUSSION 

As a test of the full radiative-turbulent theory we used the "clear air" example described by Stull (1986) as our baseline, added an assumed aerosol concentration profile, and reworked the example to include the radiative effect. The various aerosol and environmental parameters used in the study are listed in Table 1 and results are shown in Table 2. For the example shown, the momentum forcing fluxes were assumed to be zero and all inputs were assumed constant in time.

Table 1. Aerosol and environmental parameters used in the study.

Short wave flux density                      W/m²
Long wave flux density                  50   W/m²
Surface albedo (shortwave)              0.15
Surface albedo (longwave)               0.10
Aerosol albedo (shortwave)              0.60
Aerosol albedo (longwave)               0.20
Asymmetry parameter (shortwave)         0.750
Asymmetry parameter (longwave)          0.000
Aerosol concentration                   0.003 g/m³
Extinction coefficient                  1.00  km⁻¹

In Table 2, the case 1 example shows the effect as calculated ignoring aerosol loading (i.e. the "clear air" approximation) and case 2 shows the results as calculated using the full radiative-turbulent model without self emission. Case 3 results include self emission and Case 4 includes self emission but omits the turbulent reaction. The final column represents Stull's original model using our calculated heat flux density for the clear air case (22.1 W/m²).

From comparisons between case 1 and case 2 in Table 2 we see that for this particular example the effect of the aerosol loading gives rise to an overall radiative contribution of about 1/2 °C per hour. Comparing all cases we also see that the initial change is largest near the surface and tends to decrease with height, and that the profile tends toward isothermal as time proceeds. It is important to note that the results here were extrapolated over time assuming a constant solar and infrared loading. In applications there would be some variation over this time span. For simplicity we have also assumed a constant aerosol concentration.


Table 2. Results of the modeling exercise showing temperature profiles (°C).

        HEIGHT   INITIAL   CASE 1   CASE 2   CASE 3   CASE 4   STULL

1 HR    450      18.0      18.00    18.67    17.94    17.94    18.00
        350      16.0      16.00    16.58    16.07    16.19    16.00
        250      15.0      15.17    15.58    16.08    15.33    15.17
        150      15.0      15.21    15.60    16.24    15.39    15.21
        50       15.0      15.27    15.63    16.95    18.47    15.27

2 HR    450      18.0      18.00    19.34    17.41    17.88    18.00
        350      16.0      15.73    17.16    17.20    16.38    15.68
        250      15.0      15.45    16.18    17.34    15.66    15.50
        150      15.0      15.48    16.20    17.49    15.77    15.53
        50       15.0      15.64    16.23    18.18    22.07    15.58

4 HR    450      18.0      18.00    20.68    18.86    17.76    18.00
        350      16.0      15.83    18.32    19.02    16.75    15.68
        250      15.0      15.85    17.39    19.16    16.33    15.87
        150      15.0      15.88    17.40    19.30    16.54    15.90
        50       15.0      16.04    17.44    20.01    29.70    15.96

6 HR    450      18.0      18.00    22.02    20.61    17.64    18.00
        350      16.0      16.15    19.49    20.78    17.11    16.18
        250      15.0      16.17    18.59    20.92    16.98    16.20
        150      15.0      16.20    18.61    21.07    17.31    16.23
        50       15.0      16.36    18.64    21.79    37.99    16.28


Perhaps the most marked result from the study is the effect of ignoring the turbulent reaction, as evidenced by case 4, where the temperature change is in excess of 20 °C for the lowest level. This unrealistic result represents the case of ignoring any exchange at all and as such represents an extreme example. It is also interesting to note from comparing case 2 and case 3 that the effect of adding the radiative reaction is to cause cooling at some levels and heating at others. This may appear unusual at first because this term generally implies cooling by self emission. It occurs because, in our model, there is an added term due to multiple scattering which tends to trap the radiation; however, the most significant cause of the increase is the increased transport in the first layer due to radiation from the surface.


6. SUMMARY AND CAVEATS

The modeling exercises reported here have demonstrated the significance of both the radiative and turbulent heating effects in boundary layer modeling, and the importance of treating both in micro-meteorological models. In particular, the radiative, or "aerosol loading", component has been shown to be more significant than some have assumed for "dirty" atmospheric conditions. On the other hand, there is more work to be done in developing the model for applicability over a wider set of scenarios, in comparing with measurements, and in the theoretical treatment of wind profile effects.


352 


ACKNOWLEDGEMENTS


Portions of this work were funded under the 1994 ARL Director's Research
Initiative Program. We also wish to acknowledge the contribution of Dr. David
Miller, University of Connecticut, for first bringing the subject of
transilient turbulence to our attention.


REFERENCES 


Adamson, D., 1975. The Role of Multiple Scattering in One-Dimensional
Radiative Transfer. NASA Technical Note (NASA TN D-8084), Langley Research
Center, Hampton, VA.

Bergstrom, R.W., and A.C. Cogley, 1979. "Scattering of Emitted Radiation from
Inhomogeneous and Nonisothermal Layers." J. Quantitative Spectroscopy and
Radiative Transfer, 21:279-292.

Carlson, T.N., and S.G. Benjamin, 1980. "Radiative Heating Rates for Saharan
Dust." Journal of the Atmospheric Sciences, 37:193-213.

Cuxart, J., P. Bougeault, P. Lacarrere, and J. Noilhan, 1994. "A Comparison
Between Transilient Turbulence Theory and the Exchange Coefficient Model
Approaches." Boundary Layer Meteorology, 67:251-276.

Grisogono, B., and R.E. Keislar, 1992. "Radiative Destabilization of the
Nocturnal Boundary Layer over Desert." Boundary Layer Meteorology, 52:221-225.

Grisogono, B., 1990. "A Mathematical Note on the Slow Diffusive Character of
Long-wave Radiative Transfer in the Stable Atmospheric Boundary Layer."
Boundary Layer Meteorology, 52:221-225.

Lines, R.T., and Y.P. Yee, 1994. "Temperature Profile of the Nocturnal
Boundary Layer over Homogenous Desert Using LA-TEAMS." Proceedings of the
1994 Battlefield Atmospherics Conference, U.S. Army Research Laboratory,
White Sands Missile Range, NM 88002-5501 (in press).

McDaid, W.J., 1993. A Modified Two-Stream: Improvements over the Standard
Two-Stream. Master's Thesis, New Mexico State University, Las Cruces, NM 88005.

Oliver, D.A., W.S. Lewellen, and G.G. Williamson, 1978. "The Interaction
Between Turbulent and Radiative Transport in the Development of Fog and Low
Level Stratus." Journal of the Atmospheric Sciences, 35:301-316.

Stull, R.B., 1984. "Transilient Turbulence Theory. Part I: The Concept of
Eddy-Mixing across Finite Distances." Journal of the Atmospheric Sciences,
41(23):3351-3367.

Stull, R.B., and T. Takehiko, 1984. "Transilient Turbulence Theory. Part II:
Turbulent Adjustment." Journal of the Atmospheric Sciences, 41(23):3368-3379.

Stull, R.B., and T. Takehiko, 1984. "Transilient Turbulence Theory. Part III:
Bulk Dispersion Rate and Numerical Stability." Journal of the Atmospheric
Sciences, 41(1):50-57.

Stull, R.B., 1986. "Transilient Turbulence Algorithms to Model Mixing Across
Finite Distances." Environmental Software, 2(1):4-12.

Stull, R.B., and A.G.M. Driedonks, 1987. "Applications of the Transilient
Turbulence Parameterization to Atmospheric Boundary-Layer Simulations."
Boundary Layer Meteorology, 40:209-239.

Stull, R.B., 1993. "Review of Non-local Mixing in Turbulent Atmospheres:
Transilient Turbulence Theory." Boundary Layer Meteorology, 62:21-96.

353 


Sutherland, R.A., 1988. "Methods of Radiative Transfer for Electro-Optical
..." Proceedings of the 1988 Army Science Conference, Vol. ..., pp. 221-234.

Szymber, R.J., and J.L. Cogan, 1994. "Owning the Weather Battlefield Observations
Framework." Proceedings of the 1994 Battlefield Atmospherics Conference, U.S.
Army Research Laboratory, White Sands Missile Range, NM 88002-5501 (in
press).

Telford, J.W., 1994. "Comment on 'Radiative Destabilization of the Nocturnal
Boundary Layer over the Desert.'" Boundary Layer Meteorology, 68:327-328.

Yee, Y.P., R.A. Sutherland, R.E. Davis, S.W. Berrick, and M.M. Orgill, 1993.
"The Radiative Energy Balance and Redistribution (REBAR) Program."
Proceedings of the 1993 Battlefield Atmospherics Conference, U.S. Army
Research Laboratory, White Sands Missile Range, NM 88002-5501, pp. 181-194.

Yee, Y.P., R.A. Sutherland, H. Rachele, and A. Tunick, 1993. "Effects of
Aerosol-Induced Radiative Interactions on Stability and Optical Turbulence."
Proceedings of the Society of Photo-Optical Instrumentation Engineers.

STAR 21: Strategic Technologies for the Army of the Twenty-First Century,
1993. Prepared by the Board on Army Science and Technology, National Research
Council, National Academy Press, Washington, D.C.


354 


FORECASTING/MODELING  THE  ATMOSPHERIC  OPTICAL  NEUTRAL  EVENTS 

OVER  A  DESERT  ENVIRONMENT 

G.T.  Vaucher 

Science  and  Technology  Corporation 
White  Sands  Missile  Range,  New  Mexico  88002,  U.S.A. 

R.W.  Endlich 

U.S.  Army  Research  Laboratory 
White  Sands  Missile  Range,  New  Mexico  88002,  U.S.A. 


ABSTRACT 

Optical  turbulence  can  degrade  seeing  conditions  over  long  paths,  especially  horizontal 
paths  near  a  desert  floor.  Forecasting  the  onset  and  duration  of  optical  turbulence 
minima, which we call neutral events, requires knowledge of the local energy balance
through  the  heat  flux  cycles.  At  the  High-Energy  Laser  Systems  Test  Facility 
(HELSTF),  White  Sands  Missile  Range,  New  Mexico,  we  collected  two  months  of 
morning  and  evening  neutral  event  data  sets.  From  these,  we  determined  first-iteration 
models  for  forecasting  the  timing  of  the  morning  and  evening  turbulence  minimum. 

We  provide  a  general  definition  for  the  atmospheric  optical  neutral  event,  a  description 
of the morning and evening neutral event models over desert terrain, and an "ideal" and
a  "less  than  ideal"  set  of  case  studies  for  the  model. 

1.  INTRODUCTION 

For  years,  the  degrading  effects  of  atmospheric  optical  turbulence  (AOT)  have  plagued  scientists  dealing 
with  light/laser  propagation.  With  the  declassification  of  the  adaptive  optics  techniques  developed  by 
Starfire  Optical  Range  (SOR)  scientists,  astronomers  and  atmospheric  optical/laser  propagation 
researchers  now  have  a  viable  alternative  to  these  degrading  atmospheric  effects  along  a  slant  path 
(Fugate  and  Wild  1994).  For  those  unable  to  benefit  from  SOR’s  technology,  we  offer  this  study, 
which  integrates  the  properties  of  AOT  and  AOT  neutral  events  (NE)  into  a  forecastable  phenomenon. 

The  AOT  NE  forecasting  model  we  developed  is  based  on  a  near-surface  AOT  data  set  collected  along 
a  1-km  horizontal  desert  path  in  the  Tularosa  Basin,  White  Sands  Missile  Range  (WSMR),  NM. 
Though  the  1-km  path  is  essentially  flat,  about  50  km  to  the  east  lie  the  Sacramento  Mountains,  a  flat- 
topped  range  that  rises  about  1.5  km  above  the  basin  floor.  About  25  km  to  the  west  are  the  San 
Andres  Mountains,  a  much  more  jagged  range  also  with  maximum  elevation  around  1.5  km  above  the 
desert  floor.  To  the  north  and  south  of  the  site,  the  terrain  is  relatively  flat  with  no  major  obstructions. 

1.1  Atmospheric  Optical  Turbulence  Defined 

Light  propagates  through  the  atmosphere  in  the  form  of  a  wavefront,  "a  surface  over  which  an  optical 
disturbance has a constant phase" (Hecht and Zajac 1974). Fermat's Principle describes the optical path
length  primarily  as  a  function  of  the  index  of  refraction.  When  a  wavefront  encounters  random 
irregularities  in  the  index  of  refraction,  a  well-acknowledged  characteristic  of  the  atmosphere,  phase 
distortions  occur.  An  accumulation  of  random  phase  differences  degrades  light  propagation  and  image 


355 


system performance. Depending on the beam size and the characteristics of the index of refraction
inhomogeneities,  the  results  take  the  form  of  laser  beam  centroid  wander,  scintillation,  image  breakup 
and  blurring. 

1.2  Measuring  Atmospheric  Optical  Turbulence 

Quantifying  AOT  requires  an  understanding  of  the  AOT  phenomenon,  as  well  as  of  the  assumptions 
necessary  to  express  its  effects  in  terms  of  a  measurable  quantity.  The  following  sections  provide  a 
brief  summary  of  AOT  theory  and  a  description  of  the  main  sensors  used  for  this  study. 

1.2.1  Atmospheric  Optical  Turbulence  Parameters 

Atmospheric optical turbulence is a random process. Therefore, to quantify AOT characteristics, we
use statistics. The primary parameter employed in our study was the index of refraction structure
function, Cn². By definition,

    $C_n^2 = \langle (n_1 - n_2)^2 \rangle / r^{2/3}$                                  (1)

where <(n₁ - n₂)²> is an ensemble average of the atmospheric index of refraction differences
(effectively the index of refraction variance), and r is the separation between n₁ and n₂. An alternative
equation, using more easily measured meteorological elements, is

    $C_n^2 = \left[ 79 \times 10^{-6} \, (P/T^2) \right]^2 C_T^2$                      (2)

where P is pressure, T is temperature, and CT² is the temperature structure function (Tatarski 1961).
By definition,

    $C_T^2 = \langle (T_1 - T_2)^2 \rangle / r^{2/3}$                                  (3)

where <(T₁ - T₂)²> is the ensemble average of temperature differences. When using these structure
functions, we assume (1) horizontal homogeneity and isotropy within path r and (2) that the separation
between sample points is within the turbulence inner and outer scales (Tatarski 1961; Kolmogorov
1961; Clifford 1978).
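To make the relationship between equations (1) through (3) concrete, the short Python sketch below
evaluates CT² from a single pair of temperature samples and converts it to Cn² with equation (2); the
sample temperatures, separation, pressure, and air temperature are assumed illustrative values, and a
single sample pair stands in for the ensemble average.

    # Illustrative evaluation of equations (2) and (3); all input values are assumed.

    def ct2(t1, t2, r):
        """Temperature structure function, eq. (3): CT^2 = <(T1 - T2)^2> / r^(2/3).
        A single sample pair is used here in place of the ensemble average."""
        return (t1 - t2) ** 2 / r ** (2.0 / 3.0)

    def cn2(ct2_value, p_mb, t_k):
        """Index of refraction structure function, eq. (2):
        Cn^2 = [79e-6 * P / T^2]^2 * CT^2, with P in millibars and T in kelvin."""
        return (79.0e-6 * p_mb / t_k ** 2) ** 2 * ct2_value

    # Example: a 0.2 deg C difference over a 1-m separation at 850 mb and 300 K.
    print(cn2(ct2(25.2, 25.0, 1.0), 850.0, 300.0))   # about 2e-14 m^(-2/3)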

1.2.2  Atmospheric  Optical  Turbulence  and  Meteorological  Sensors 

The  AOT  sensors  used  to  collect  the  C/  data  were  Lockheed  Model  IV  scintillometers.  These 
instruments  essentially  measure  the  log  amplitude  variance  of  a  beam  transmitted  along  a  1-km 
horizontal  path  at  8  and  32  m  above  ground  level  (AGL). 

Aspirated thermistors and three-component anemometers measured temperature and wind profiles on
32-m towers at 2, 4, 8, 16, and 32 m AGL. Temperature differences between the 16- and 2-m levels
(ΔT) were used to characterize the heat flux. These ΔT values were observed at the 0-, 0.5-, and 1-km
positions along the scintillometer path.


356 


1.3  Neutral  Events  Defined 


To best understand the AOT "neutral event," one must first understand the AOT diurnal cycle. The
following describes a typical sequence of AOT conditions over a desert valley floor under clear skies.
The 24-hr cycle, described below, begins at 0000 hours local time. Figure 1 displays a "typical"
diurnal AOT time series along with temperature and insolation for the same time period.

Under clear skies and calm winds, the desert basin atmosphere at 0000 hours (local time) is stably
stratified, with the coldest temperatures at the lowest levels (ΔT > 0); the heat flux is negative. The
AOT is low. With the slightest wind, such as a katabatic flow, the stable layers overturn, mixing
atmospheric layers with different indices of refraction. The resulting mélange of density variations
increases the AOT. If winds decrease, AOT decreases.

As the sun rises under clear skies, the sun's rays begin to warm the soil. Over time, the soil radiates
this warmth into the lowest layers of the atmosphere. The heat flux increases, passing through zero,
and the previously stable atmosphere evolves into an adiabatic or neutral-stability atmosphere. A Cn²
minimum is observed; this is the morning AOT NE.

As morning progresses, the sun continues to warm the ground. The ground in turn warms the
atmosphere, resulting in a deeper boundary layer. The vertical temperature difference (ΔT) becomes
increasingly negative, indicating an unstable atmosphere. Cn² increases. Atmospheric convection
attempts to rebalance the unstable conditions by mixing the near-surface warm air into cooler air aloft.
The heat flux is positive, and the atmosphere is unstable, with a super-adiabatic lapse rate. This
persistent mixing intensifies the atmosphere's density (temperature and index of refraction) variations,
increasing AOT. The peak AOT occurs around midday, or soon after.

In the late afternoon, reduced insolation decreases the magnitude of the negative ΔT values. AOT also
decreases. Just before sunset, the atmosphere briefly becomes adiabatic, the heat flux goes to zero, and
AOT reaches a minimum. The second NE of the day occurs.


Twilight evolves into night, and the warmed soil strongly radiates away the energy absorbed from the
sun during the day. ΔT becomes positive. Though the atmosphere is stable, colder and heavier air from the
surrounding mountains and hills drains into the valleys. The unequal cooling and drainage create
mixing, which moderately increases AOT throughout the night.

The NE is clearly associated with sunrise and sunset. The sunrise NE occurs as the stable nighttime
atmosphere makes the transition to the unstable atmosphere of the daytime (ΔT changes from a positive
to a negative value). The sunset NE takes place as the daylight's unstable conditions progress into the
night's stable state (ΔT changes from a negative to a positive value). The common factor in the two
cases is that the atmosphere briefly becomes dry adiabatic, exhibiting the smallest index of refraction
variations along horizontal and vertical paths. In terms of actual field measurements, we found that the
ΔT values were slightly negative during the NE. This observation is consistent with the dry adiabatic
lapse rate of 9.8 °C km⁻¹.
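The sign convention above can be summarized in a few lines of code; the sketch below classifies a
measured 16-m minus 2-m temperature difference into the regimes just described, with the small
tolerance used to call a profile near adiabatic chosen only for illustration.

    def stability_from_delta_t(delta_t, tolerance=0.1):
        """Classify dT = T(16 m) - T(2 m) into the regimes described in the text.
        The 0.1 deg C tolerance is an assumed illustrative value."""
        if abs(delta_t) <= tolerance:
            return "near adiabatic (neutral event conditions)"
        return "stable (nighttime regime)" if delta_t > 0 else "unstable (daytime regime)"

    for dt in (1.2, 0.05, -0.03, -2.4):
        print(dt, "->", stability_from_delta_t(dt))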

2.  FORECASTING  ATMOSPHERIC  OPTICAL  TURBULENCE  NEUTRAL  EVENTS 

Figure 1 (Vaucher and Endlich 1993) is a "typical" AOT desert floor diurnal cycle. The relevant
features are the two Cn² minima and their correlating insolation and vertical temperature time series.


357 


Figure 1. (a) Typical index of refraction structure function
Cn². Sensors sampled at 1 m AGL, along a 1-km path. (b) Temperature and insolation.
Temperature sensors at 2, 4, 8, 16 m; insolation sampled at 32 m.


The quasi-isothermal condition and Cn² minima occurred about 1 hr after sunrise (when insolation begins
to increase), and again about 40 min before sunset (when insolation is approaching zero). Does this
NE timing, observed near the vernal equinox, also occur when the sun's position is higher in the sky,
or when clouds block the rising and setting sun? In the next few sections, we refine the typical
March observations.

2.1  Statistical  Model  for  Predicting  the  Neutral  Events 

AOT data were collected between April and May 1994 along a 1-km path in the desert environment of
the Tularosa Basin, WSMR, NM. Based on the ΔT and 8-m Cn² time series, the closest minute or
minutes of the NE were tabulated. A NE was considered to have occurred when the Cn² value reached
a minimum below 10⁻¹⁴ m⁻²/³ and the ΔT was near zero or slightly negative. When the minimum Cn²
value remained constant over an extended period, the midpoint of the period was listed as the NE time.
The reference point used to standardize the neutral event statistics was the astronomical sunrises and
sunsets tabulated for Holloman Air Force Base, about 20 miles northeast of our site. Differences
between the astronomical sunrises (sunsets) and the NE times were calculated and averaged, and a
minimum and maximum NE time (with respect to tabulated sunrise or sunset) were determined.

Based  strictly  on  the  April/May  94  data  set,  the  average  occurrence  of  the  morning  NE  was  about 
70  min  after  sunrise.  The  sunrise-NE  time  difference  ranged  between  40  and  133  min  after  sunrise. 
The  evening  NE  occurred  an  average  of  about  60  min  before  sunset,  with  the  sunset-NE  time  difference 
ranging  between  approximately  98  and  12  min  before  sunset.  During  the  calculations  and  subsequent 
analysis,  variables  were  identified  that  directly  influenced  the  NE  times.  These  parameters  are 
discussed  in  the  next  section. 
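A minimal sketch of how these statistics translate into a forecast window is given below; the function
name and the use of Python's datetime arithmetic are illustrative conveniences, not part of the
operational procedure.

    from datetime import datetime, timedelta

    # Evening-event statistics from the April/May 1994 data set (minutes before sunset).
    # Morning-event statistics (minutes after sunrise) were: mean 70, range 40 to 133.
    EVENING_MEAN, EVENING_EARLIEST, EVENING_LATEST = 60, 98, 12

    def evening_ne_forecast(sunset):
        """Return (most likely time, earliest, latest) for the evening neutral event."""
        return (sunset - timedelta(minutes=EVENING_MEAN),
                sunset - timedelta(minutes=EVENING_EARLIEST),
                sunset - timedelta(minutes=EVENING_LATEST))

    # An astronomical sunset of 1914 MST gives 1814 MST and a window of 1736 - 1902 MST,
    # the values used for the 15 June 1994 case study in section 3.1.
    likely, first, last = evening_ne_forecast(datetime(1994, 6, 15, 19, 14))
    print(likely.strftime("%H%M"), first.strftime("%H%M"), last.strftime("%H%M"))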

2.2  Field  Observations 

The  "ideal"  atmospheric  conditions  selected  consisted  of  clear  skies  and  low  wind  speeds.  In  analyzing 
the  "less  than  ideal"  cases,  we  noted  the  effects  of  cloud  cover,  moist  soil,  and  mountain  shadowing. 


The  greatest  cause  for  variation  in  NE  timing  was  cloud  cover.  Specifically,  an  isolated  stratified  cloud 
deck  obscuring  the  sun  at  sunrise  or  sunset  tended  to  delay  the  sunrise  NE  and  cause  an  earlier  sunset 
NE.  The  more  extensive  the  cloud  cover  from  the  horizon  to  the  site,  the  more  ill-defined  the  NE. 
In fact, the Cn² minima for these shrouded sunrise/sunset cases were often significantly shallower than
those  observed  under  clear  skies. 

The muddling effects of clouds on the NE can be partially explained in that sunlight passes through an
estimated 25 times more atmosphere at the horizon than at the zenith. Thus, any difference from a fully
clear sky would result in diffused and more irregular insolation. This weakened and erratic warming
of the terrain translates into a sluggish evolution between stable and unstable atmospheric conditions.

Damp  ground  was  another  major  influence  on  NE  timing  and  duration.  When  the  site  experienced  rain 
within the previous 12 hr, AOT tended to be suppressed, and the NE occurred earlier. That is, the
sunrise NE would occur sooner after sunrise, and the sunset NE would occur further ahead of sunset.


Mountain  shadowing  was  not  taken  into  account  when  the  table  of  astronomical  sunrises/sunsets  was 
calculated.  The  actual  on-site  sunrise  occurred  about  10  min  after  the  calculated  sunrise.  For 
consistency,  we  have  expressed  all  NE  time  measurements  with  respect  to  the  calculated  table  of 
astronomical sunrises and sunsets.


359 


Mountain shadowing affected the local NE times. West of the site is a very jagged mountain range,
the San Andres Mountains. The extremely irregular horizon had the same effect as cloud cover. In
the northern hemisphere mid-latitudes, the sunset point lies to the north of due west in the summer and
to the south of due west in the winter. The exact sunset location with respect to the mountain silhouette
at the local site had to be taken into account before the local NE forecast could be issued. The evenness
of the mountain range on the eastern horizon minimized this effect for the sunrise NE.

We  conducted  a  month-by-month  review  of  the  average  NE  timing  for  April  through  June.  The  sunrise 
NE  was  selected  for  study  because  of  the  more  ideal  eastern  horizon.  When  the  NE  average  and  range 
were  tabulated,  we  found  that  the  average  time  separating  sunrise  and  NE  was  about  50  min  near  the 
vernal  equinox  and  around  85  min  near  the  summer  solstice.  Each  succeeding  month  displayed  an 
increase  of  approximately  12  min.  The  fact  that  the  NE  occurred  further  from  sunrise  as  the  sun’s 
position  moved  northward  may  seem  inconsistent  at  first.  The  following  explanation  assumes  clear 
skies  and  light  winds. 

It  is  true  that  northern  hemisphere  summer  temperatures  are  warmer  than  winter  temperatures.  The 
AOT  NE,  however,  is  concerned  with  temperature  differences  and  heat  fluxes  (density  variations).  In 
the  winter,  morning  air  masses  are  cooler  than  they  are  in  summer.  Therefore,  the  solar  heat  flux 
required  to  create  an  adiabatic  environment  near  the  surface  (AOT  minimum)  in  the  winter  is  less  than 
it  is  in  summer,  when  the  air  mass  over  the  terrain  is  warmer. 

3.  CASE  STUDIES 

Two  case  studies  are  presented  below.  The  first  is  a  sunset  NE  under  "almost  ideal"  atmospheric 
conditions;  the  second  is  a  "less  than  ideal"  sunset  NE  case  study. 

3.1  Clear  Skies  Case  Study 

On 15 June 1994, the site had a high-pressure area to the south and a low-pressure area to the north,
causing westerly winds to persist throughout the day. The skies overhead were mostly clear during the
day, though scattered high clouds moved across the horizon from the northwest shortly before sunset.
These clouds were well to the north of the sunset horizon. A fire on a mountain range to the southwest
released large quantities of smoke visible from the site; however, westerly winds kept the smoke well
to the south during the entire period. During the forecasted NE, the temperature at 2 m AGL was
around 32 °C, the winds at 8 m AGL were from the west at about 5 m s⁻¹, and the dew point at 2 m
AGL was around 0 °C.

The sunset horizon was free of clouds, as were the atmosphere between the sunset horizon and the site
and the sky east of the site. Applying the statistical model to these "almost ideal" conditions, the NE
time range forecast for 15 June 1994 was the following:

Astronomical sunset:  1914 MST

Local NE based on average:  1814 MST

Range in which the NE could occur:  1736 - 1902 MST

Figure 2 displays the Cn² and ΔT time series for this June case. Placing the NE threshold at 10⁻¹⁴
m⁻²/³, the NE at both 8 and 32 m begins around 1810 MST and ends around 1848 MST. During this
period, the ΔT hovers around 0 °C. The single Cn² minimum occurs around 1824 MST, about 10 min
later than the statistical average for April/May, but well within the anticipated NE range.


360 


[Figure 2 plots: (a) Cn² at the 8- and 32-m levels and (b) ΔT at the 0-, 0.5-, and 1-km path positions, versus time (MST), with sunset marked.]


Figure 2. (a) Cn² and (b) ΔT time series for the "almost ideal" 15 June 1994 case study.


3.2  Effect  of  Overcast  Skies 


On 12 May 1994, a low-pressure area centered over eastern Arizona brought moist, unstable air over
the site from the south. Thunderstorms, rain events, and considerable cloud cover dominated this 24-hr
period. The local thunderstorm activity began soon after 0300 MDT and continued until about sunrise.
Moisture and cloud cover over the site persisted throughout the day, leading to an ill-defined and
extended evening NE. Winds around sunset were from the north at about 5 m s⁻¹. The statistical
evening NE model forecast the NE time and range as follows:

Astronomical  sunset:  1854  MST 

Local  NE  based  on  average:  1754  MST 

Range  in  which  the  NE  could  occur:  1716  -  1842  MST 

Figure 3 displays the Cn² and ΔT time series for this ill-defined NE. In contrast to the 15 June case,
the 32-m and 8-m Cn² magnitudes tended to coincide. They also lacked a single-point minimum. In
fact, there were four turbulence minima. The "best" 32-m level Cn² minimum (1708 MST) occurred before
the "best" 8-m Cn² minimum (1754 MST). Note, however, that the "best" Cn² minimum at 8 m
coincides with the forecast NE. The ΔT magnitudes hover around the 0 °C mark throughout the
extended NE period.

4.  SUMMARY 

Atmospheric  optical  turbulence  (AOT)  was  observed  in  order  to  develop  a  model  for  predicting  the  time 
of  AOT  neutral  events  (NE),  which  occur  shortly  after  sunrise  and  shortly  before  sunset  in  a  desert 
environment. The parameter used to quantify the AOT was the index of refraction structure function,
Cn². The assumptions made when using Cn² are horizontal homogeneity and isotropy within the path
and that the distance separating the two sampled points is within the turbulence inner and outer scales.
Principal sensors used for this study were Lockheed Model IV scintillometers to determine Cn² and
aspirated thermistors to measure the ΔT (16-m minus 2-m AGL temperature differences). All sampling was
done along a 1-km horizontal path.

Cn² and ΔT data for periods near sunrise and sunset from April-May 1994 were collected. Using the
astronomical  sunrise  and  sunset  for  a  local  Air  Force  base,  the  difference  between  sunrise  (or  sunset) 
and  the  NE  was  calculated.  An  average  of  the  time  differences  combined  with  the  range  in  the  time 
of  occurrence  allowed  us  to  refine  the  forecasting  model. 

Based  strictly  on  the  April/May  94  data  set,  the  average  occurrence  of  the  morning  NE  was  about 
70  min  after  sunrise.  The  difference  in  time  between  sunrise  and  the  associated  NE  ranged  between 
40  and  133  min  after  sunrise.  The  evening  NE  occurred  an  average  of  about  60  min  before  sunset, 
with  a  sunset-NE  difference  ranging  between  approximately  98  and  12  min  before  sunset. 

The  statistical  NE  model  was  tested  and  a  subsequent  analysis  identified  additional  factors  that  directly 
influence  the  NE.  The  greatest  cause  for  variation  in  the  NE  timing  was  cloud  cover.  A  shroud  of 
clouds  during  the  sunrise  or  sunset  period  tended  to  delay  the  local  NE.  In  some  cases,  a  shallower 
AOT  minimum  was  also  observed.  Since  the  sun  travels  through  more  atmosphere  at  the  horizon  than 
at  the  zenith,  one  can  expect  a  clouded  horizon  to  affect  NE  timing. 


362 


[Figure 3 plots: (a) Cn² at the 8- and 32-m levels and (b) ΔT at the 0-, 0.5-, and 1-km path positions, versus time (MST), with sunset marked.]


Figure 3. (a) Cn² and (b) ΔT time series for the "less than ideal" 12 May 1994 case study.


A  second  influence  was  soil  moisture,  which  retarded  the  effects  of  insolation.  A  third  influence  was 
mountain  shadowing  from  the  surrounding  horizons.  The  orographic  profiles  affected  the  exact  timing 
of  local  sunrise  or  sunset,  and  jagged  terrain  occulting  the  sun  had  a  similar  dulling  effect  on  the  NE 
to  that  of  cloud  cover. 

A  month-by-month  analysis  of  the  sunrise  NE  occurrence  showed  the  sunrise-to-NE  time  differential 
to  increase  by  about  12  min  per  month  during  the  spring.  The  greater  heat  flux  required  in  the  summer 
months to produce the low-level adiabatic environment associated with a Cn² minimum helps to explain
the longer sunrise-to-NE separation. An "almost ideal" (clear skies) case and a "less than ideal" (cloudy)
case were presented. When skies were clear, the observed NE generally agreed with the model forecast.
During cloudy and showery conditions, the effect of nonuniform radiation and latent heat made the simple
statistical model difficult to use.


5.  RECOMMENDATIONS 

The  above  study  is  by  no  means  an  exhaustive  investigation  of  AOT  NE  forecasting/modeling. 
Collecting  and  analyzing  a  full  annual  cycle  of  AOT  NE  data  and  quantitatively  linking  cloud  cover, 
heat  flux,  and  ground  moisture  with  the  AOT  NE  would  greatly  enhance  the  AOT  NE  forecast  model. 

ACKNOWLEDGEMENTS 

A  special  thanks  to  T.  Jameson  for  daytime  observations  of  12  May  94;  to  A.  Rishel  and  J.  Niehans 
for  data  management  and  assistance  with  the  figures;  and  to  C.  Vaucher  for  text  critique. 

REFERENCES 

Clifford, S.R., 1978. "The Classical Theory of Wave Propagation in a Turbulent Medium." Topics
in Applied Physics - Laser Beam Propagation in the Atmosphere, v. 25, Springer-Verlag,
Berlin, Germany, 325 pp.

Fugate, R.Q., and W.J. Wild, 1994. "Untwinkling the Stars - Part I." Sky and Telescope, May 1994,
24-31.

Hecht, E., and A. Zajac, 1974. Optics, Addison-Wesley Publishing Co., Reading, MA, 565 pp.

Kolmogorov, A., 1961. Turbulence, Classic Papers on Statistical Theory, ed. by S. Friedlander and
L. Topper, Interscience, New York, NY, 151 pp.

Tatarski, V.I., 1961. Wave Propagation in a Turbulent Medium. Dover Publications, Inc., New York,
NY, 285 pp.

Vaucher, G. Tirrell, and R.W. Endlich, 1993. "Intercomparison of Simultaneous Scintillometer
Measurements over Four Unique Desert Terrain Paths." Eighth Symposium on Meteorological
Observations and Instrumentation, American Meteorological Society, Boston, MA.


364 


Session  I  Posters 


SIMULATION  AND  ANALYSIS 


365 


COMBINED  OBSCURATION  MODEL  FOR  BATTLEFIELD 


INDUCED  CONTAMINANTS  -  POLARIMETRIC 

MILLIMETER  WAVE  VERSION  (COMBIC-PMW) 

S.  D.  Ayres,  J.  B.  Millard,  and  R.  A.  Sutherland 
Battlefield  Environment  Directorate 
U.S.  Army  Research  Laboratory 
White  Sands  Missile  Range,  New  Mexico  88002-5501 


ABSTRACT 

The COMBIC model was originally developed for the Electro-Optical Systems Atmospheric
Effects Library (EOSAEL) to model aerosols for which spherical
symmetry  can  be  assumed  to  describe  both  the  physical  and  optical  properties  of 
the  aerosols.  This  is  a  reasonable  assumption  when  considering  older, 
conventional  obscurants  such  as  fog  oil  and  white  phosphorus;  this  approximation 
breaks  down  for  newer  developmental  obscurants  designed  to  be  effective  at 
longer  wavelengths.  Many  of  the  new  millimeter  wave  and  radar  obscurants  are 
highly  nonspherical.  New  techniques  are  required  to  model  nonspherical 
obscurants.  COMBIC-PMW  is  a  merger  between  COMBIC  and  the  techniques 
that  account  for  the  optical  and  mechanical  behavior  of  these  nonspherical 
battlefield  aerosols.  These  new  techniques  determine  electromagnetic  properties 
such  as  the  ensemble  orientation  averaged  extinction,  absorption,  and  scattering 
as well as mechanical properties such as fall velocity and angular orientation of the
obscurant  particles  when  released  into  the  turbulent  atmospheric  boundary  layer. 

This  paper  describes  COMBIC-PMW,  its  function,  and  how  to  use  it.  The  paper 
also  describes  the  range  of  conditions  under  which  the  model  is  applicable. 

1.  INTRODUCTION 

1.1  Model  Purpose 

Millimeter  wave  (MMW)  radars  were  developed  to  provide  greater  accuracy  than  conventional 
microwave  (centimeter  wave)  radars  even  though  MMW  radars  do  not  have  the  same  all-weather 
capability.  Although  MMW  radars  have  superior  penetrability  through  smoke,  fog,  and  rain 
over  their  electro-optical  (EO)  counterparts,  they  do  not  have  the  high  resolution  of  EO  systems 
(Sundaram  1979).  MMW  systems  represent  a  compromise  in  which  most  of  the  advantageous 
characteristics  of  the  microwave  and  EO  regions  are  available  and  the  disadvantageous  effects 
are  minimized.  MMW  radar  systems  are  also  much  smaller  because  component  size  is  related 
to  wavelength.  MMW  systems  are  of  considerable  interest  for  applications  in  which  size  and 


367 


weight  restrictions  are  important,  as  in  aircraft  and  smart  munitions.  With  the  development  of 
the  MMW  systems,  the  Army  turned  to  countermeasures  that  can  defeat  these  radars.  The 
prevalent  thought  in  the  Army  is  that  conventional  battlefield  obscurants  hardly  affect  MMW 
(Knox  1979).  The  Army  is  developing  obscurants  that  can  defeat  the  MMW  systems.  These 
obscurants  are  different  from  the  obscurants  that  can  defeat  conventional  EO  systems.  Their 
primary  dimension  is  approximately  the  same  as  the  wavelength  of  the  system  (i.e.,  on  the  order 
of  millimeters).  Furthermore,  the  obscurants  are  not  spherical,  like  the  more  conventional 
obscurants. The combination of these effects creates situations not present with the traditional
smokes.  Atmospheric  turbulence  can  affect  the  orientation  of  nonspherical  particles.  The 
orientation  and  scattering  from  nonspherical  particles  leads  to  different  scattering  intensities  at 
different  angles.  New  models  are  required  to  simulate  these  obscurants. 

1.2  COMBIC 

One  of  the  original  purposes  for  developing  COMBIC-PMW  was  to  assist  in  modeling  the 
effectiveness  of  smoke  screens  used  in  wargame  simulations.  The  COMBIC  computer 
simulation  predicts  spatial  and  temporal  variation  in  transmission  produced  by  various  smoke  and 
dust  sources.  It  models  the  effects  of  reduction  in  electromagnetic  (EM)  energy  by  combining 
the  munition  characteristics  with  meteorological  information  of  an  idealized  real  world. 
COMBIC  produces  transmission  histories  at  any  of  seven  wavelength  bands  for  a  potentially 
unlimited  number  of  sources  and  lines  of  sight.  It  also  computes  concentration  length,  which 
is  the  integration  of  the  concentration  over  the  path  length.  Previous  smoke  models,  like 
COMBIC,  adequately  model  older  conventional  smokes  such  as  fog  oil  and  white  phosphorous; 
this  is  not  true  of  the  newer  developmental  obscurants.  Since  the  original  development  of 
COMBIC,  which  treats  only  spherical  obscurants,  new  obscurants  have  been  developed  for 
effectiveness  in  the  MMW  regime.  These  obscurants  are  severely  nonspherical  in  shape; 
therefore,  new  algorithms  are  required.  The  traditional  simplifying  assumption  of  spherical 
symmetry  to  describe  the  optical  and  mechanical  properties  of  the  obscurants  is  no  longer  valid 
for  the  newer  obscurants.  The  new  obscurants  are  actually  nonspherical  and  require  different 
methodologies  to  compute  their  effect  on  the  battlefield  and  the  effect  of  atmospheric  turbulence 
on  the  obscurants. 

1.3  MMW  Obscurants 

A  wide  variety  of  MMW  obscurants,  such  as  graphite,  have  been  developed  in  recent  years. 
The most efficient are the fibers modeled as finite cylinders. Very few MMW obscurants have
made it to the inventory list. The fibers can be either prepackaged, precut and packed in parallel
arrays having packing densities as high as 0.8, or precut fibers loosely packed in powder form.
The Army favors the first method (Farmer and Kennedy 1991). MMW obscurants can exist in a
multitude  of  complex  shapes  including  helix,  coils,  disks,  flakes,  cubes,  antennas,  and  their 
aggregates. 


368 


1.4  Dissemination  Methods 


In  a  dissemination  system,  such  as  a  grenade  or  rocket,  the  fibers  are  packed  coaxially  in  disks. 
Disk  thickness  corresponds  to  fiber  length.  The  aspect  ratio  of  diameter  to  fiber  length  is 
typically  1  to  1000.  The  disks  are  stacked  on  a  center-core  burster-unit  that  is  used  to  break  the 
packaging  binder  and  spread  the  fibers.  For  artillery-shell  packaging,  the  disks  must  be 
reinforced  with  a  steel  or  aluminum  superstructure.  This  design  approach  assumes  that  when 
the  disks  burst,  the  disks  separate  into  single  fibers  of  the  same  length  and  diameter  as  the 
original  fibers  used  to  make  the  disk.  Electrostatic  charge  and  other  factors  can  cause  clustering 
of  fibers  to  stick  together  along  the  long  axis  or  to  agglomerate  into  randomly  oriented  sets  of 
particles  resembling  bird  nests.  The  large  clusters  tend  to  fall  out  of  the  obscurant  cloud  at  a 
much  greater  rate  than  single  fibers.  The  large  clusters  decrease  cloud  obscuration  efficiency. 
The  decrease  in  obscuration  efficiency  results  from  a  reduction  in  extinction  efficiency  for  the 
individual  clusters  relative  to  single  particles  and  from  a  reduction  of  the  numbers  of  individual 
fibers  available  for  effective  obscuration.  COMBIC-PMW  does  not  model  this  directly  except 
through  an  empirically  derived  munition  efficiency  factor. 

In  a  fiber  cutter  MMW  smoke  generator  system,  the  obscurant  material  comes  from  the  factory 
in  multiple  strand  ropes,  called  tows,  wound  on  spools.  The  material  is  often  graphite,  although 
other  materials  have  been  employed.  The  number  of  fibers  per  tow  can  vary  from  1000  to 
48,000,  and  there  are  10  to  30  tows  per  belt.  In  a  typical  system,  the  belt  material  is  fed  to  a 
fiber  cutter  consisting  of  two  rollers  in  contact,  in  which  one  contacts  the  cutting  blades  at  fixed 
spacing  (typically  6.25  mm  or  1/4  in).  The  fiber  length  can  be  varied  by  changing  the  blade 
spacing. The motor speed is variable, allowing fiber belt speeds from 0 to 12 ft/s. Proper
selection of belt speed and belt size can produce throughputs of 0 to 10 lb/min. A Coanda flow
ejector consists of a short cylindrical shell with a high-speed sheath (generated by air pressure
expelled axially at the inside edge of the cylindrical shell). Momentum is then transferred to the
air within the cylindrical shell. This device can be used to produce air flow without mechanical
interference and within which shear flow can be carefully controlled. The Coanda flow ejector
separates and disseminates the fibers by accelerating them to very high speeds, resulting in a
nearly uniform nonbuoyant cloud.

A  wafer  storage  and  dispensing  smoke  generator  disseminates  fibrous  material  from  wafers 
containing  fibers.  Wafer  storage  and  dispensing  consists  of  a  cartridge  magazine,  four  wafer 
cartridges,  and  a  pneumatic  indexing  mechanism.  The  wafer  cartridges  are  inserted  into  four 
bores in the cartridge magazine. In a prototype system tested at the Dugway Proving Ground
(Perry et al. 1994), the wafer cartridge can contain up to 54 wafers, each 6.25 mm (1/4 in) thick.
Each wafer is approximately 20 g (0.044 lb). Pistons in the wafer cartridges discharge the wafers
into  slots  in  the  wafer  turret  motor  that  includes  two  rows  of  wafer  slots.  The  wafer  turret 
motor  rotates  until  the  wafer  lines  up  with  the  exit  from  the  turret  housing.  There  a  spring 
strikes  the  wafer’s  rear  surface,  ejecting  the  compact  fibers  into  the  turret  housing.  Ambient  air 
enters  at  a  high  velocity  and  mixes  with  the  aerosol  and  forms  the  exiting  cloud. 


369 


2.  DEFINITION  OF  PROBLEM 

2.1  Scattering 

The  propagation  of  EM  radiation  in  any  medium  containing  particles  is  governed  by  the 
combination  of  absorption,  emission,  and  scattering.  Particles  are  a  subject  of  great  importance 
in  determining  effects  of  obscurants  on  EM  radiation.  Scattering  and  absorption  depend  upon 
the  particle  size,  shape,  refractive  index,  and  concentration.  Mathematically  determining  the 
radiation  field  scattered  by  particles  of  arbitrary  shape  at  any  point  in  space  can  be  quite 
difficult.  Exact  analytical  solutions  are  only  available  for  the  sphere  and  infinite  cylinder.  The 
scattering  properties  of  simple  geometries  have  been  well  studied  (Bowman  et  al.  1987). 
Numerical  techniques  and  approximate  analytical  methods  are  used  to  analyze  these  properties, 
usually  over  a  limited  range  of  conditions.  In  this  first  attempt  to  more  effectively  model  MMW 
obscurants,  one  is  limited  to  modeling  finite  cylinders.  A  recently  held  workshop  entitled 
Second  Workshop  on  the  Electromagnetics  of  Combat  Induced  Atmospheric  Obscurants  examined 
all  aspects  of  the  scattering  problem  to  determine  status  of  existing  models,  measurement 
capabilities,  field-model  comparison,  and  where  research  needs  to  be  focused. 

Long-wavelength theory predicts that parallel rays encountering a cylinder are scattered in an axially
symmetric pattern. Short-wavelength theory predicts that parallel rays encountering a cylinder are
scattered  into  a  conical  shell  with  a  half  angle  equal  to  the  angle  between  the  incident  rays  and 
the  cylinder  axis.  For  cylinders  nearly  parallel  to  the  incident  radiation,  the  scattered  radiation 
is  contained  in  a  small  cone  near  the  forward  direction.  The  cone  half  angle  increases  as  the 
angle  increases  between  the  incoming  radiation  and  the  cylinder.  For  an  extreme  case  with  the 
cylinder  at  right  angles  with  the  incoming  radiation,  the  conical  wave  becomes  a  cylindrical 
wave  propagating  in  a  direction  perpendicular  to  the  particle  axis.  (Note,  in  this  case,  that 
backscatter  can  only  occur  when  the  particle  is  aligned  perpendicular  to  the  viewing  angle.) 

2.2  Polarization 

2.2.1  What  Is  Polarization 

Light  waves  are  transverse  in  the  far-field  approximation.  The  displacements  of  the  electric  and 
magnetic  vectors  are  not  along  the  line  of  travel,  like  sound  waves,  but  are  perpendicular  to  it. 
For  example,  if  the  direction  of  travel  of  a  given  light  beam  is  east,  the  electric  vibrations  may 
be up and down, or north and south, or along some other line perpendicular to the east-west
axis.  The  transverse  electrical  (TE)  and  transverse  magnetic  (TM)  fields  are  mutually 
perpendicular  at  any  point  in  space.  Polarized  light  is  light  in  which  the  transverse  components 
vibrate  in  a  preferred  manner.  Unpolarized  light  is  light  that  exhibits  no  long-term  preference 
as  to  vibration  pattern.  Partially-polarized  light  falls  somewhere  in  between. 


370 


2.2.2  Effects  of  Polarization 


Laboratory  obscuration  effectiveness  in  the  MMW  regime  is  strongly  dependent  upon  the  system 
polarization  mode.  Note  that  in  figure  1  this  particular  obscurant  is  much  more  effective  against 
horizontally polarized systems. Figure 1 shows mass extinction coefficients versus concentration
for horizontally and vertically polarized radiation. The mass extinction coefficient average is
0.2 m²/g for vertically polarized radiation and 0.8 m²/g for horizontally polarized radiation.
Even unpolarized light can become polarized after encounters with scatterers, although the
opposite usually occurs. Figure 2 shows plots of the (relative) magnitude of the scattered
intensity as a function of the cone azimuth angle for various values of the incident angle for the
TE and TM polarization modes (Sutherland and Millard 1994). When θinc is perpendicular to the
cylinder, scattered TM radiation reaches a minimum at 180° (backscatter). This is not true for
TE radiation, which reaches a minimum at 79° and shows a significant amount of backscatter.
Potential  counter-countermeasures  that  can  take  advantage  of  scattered  radiation  polarization 
characteristics  can  be  identified  through  studies  of  a  phase-function  plot  for  both  vertical  and 
horizontal  polarization. 


Figure 1. Chaff particle polarization effects. Mass extinction coefficients vary with
polarization mode, being higher for horizontally polarized incident radiation.
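The practical impact of this polarization dependence can be seen by inserting the two coefficients
into the usual Beer-Lambert relation, transmission = exp(-α CL), where α is the mass extinction
coefficient and CL is the concentration length computed by COMBIC along the line of sight. The
concentration length used below is an assumed example value, and the relation is quoted as the
standard one rather than as the COMBIC-PMW code itself.

    import math

    def transmission(alpha, cl):
        """Beer-Lambert transmission for mass extinction coefficient alpha (m^2/g)
        and concentration length cl (g/m^2)."""
        return math.exp(-alpha * cl)

    CL = 2.0   # g/m^2, assumed concentration length for illustration
    print(transmission(0.2, CL))   # vertical polarization:   ~0.67 (fairly transparent)
    print(transmission(0.8, CL))   # horizontal polarization: ~0.20 (much more opaque)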


371 


2.3.1  What  Causes  Particles  to  Orient 

Under certain conditions, nonspherical particles tend to adopt a preferred orientation when
falling through the atmosphere. For the long cylindrically shaped particles used to approximate
MMW obscurants, the stable mode occurs when the particle is oriented with its long axis horizontal.
Figure 3 shows the laboratory-measured orientation distribution of chaff particles 2 and 10 s
after release. At first the particle orientation is nearly uniform; however, after 10 s the
aerodynamic and gravitational forces tend to shift the distribution to the more stable mode. The
degree to which the particle orients will also affect the polarimetric extinction properties of the
ensemble. The fall velocity of a nonspherical particle is significantly lower than that of an
equivalent spherical particle of the same mass (Sutherland and Klett 1992).


372 


2.3.2  Role  of  Turbulence 

Nonspherical  particles  tend  to  adopt  a  preferred  orientation  when  falling  through  the  atmosphere 
under  quiescent  conditions.  Atmospheric  turbulence  can  cause  a  perturbation  to  the  stable-fall 
mode that can result in random tumbling for extremely turbulent conditions. The degree of
perturbation depends upon the level of turbulence and cylinder length as well as the aspect ratio
(ratio of particle diameter to length). Figure 4 shows results of the computation of the stable-fall
mode Reynolds number versus cylinder length for different aspect ratios. Note that the small
Reynolds number for the stable-fall mode implies the dominance of viscous forces over inertial forces
for MMW obscurants. The current Army belief is that stable-fall modes are the exception rather
than the rule in the turbulent atmospheric boundary layer. This belief is increasingly being
challenged.


373 


REYNOLDS  NUMBER  vs  PARTICLE  LENGTH 
Stable  Fall  Mode  (Broadside  to  Flow) 


Figure  4.  Reynolds  number  versus  particle  length  for  stable-fall 
mode  for  three  different  aspect  ratios. 


2.3.3  Effect  of  Particle  Orientation 

MMW particles will rarely all have the same orientation. The expected orientation distribution
of particles in a cloud will probably fall somewhere between completely oriented and completely
random for a turbulent atmosphere. The problem is not to compute obscuration efficiency for
a particle at tilt angle θ but to determine obscuration efficiency for a cloud of particles oriented at
all different angles but possibly having a preferred orientation. Sutherland and Klett (1992)
created a model that estimates the degree of orientation of various sized particles falling through
the turbulent atmosphere. The problem is difficult and, like most problems involving the real
atmosphere, is not exactly solved. Sutherland and Klett (1992) assumed that the mean square
tilt angle of a large ensemble of particles is proportional to the magnitude of the turbulent pressure
fluctuations. Results for cylindrically shaped particles are described elsewhere (Sutherland and
Millard 1994). The theory is valid only in the inertial subrange of turbulence, where the
behavior of the various microscale parameters is fairly well known. Some results of Sutherland
and Klett's model are shown in figures 5a and 5b. Figure 5a gives estimates of the root mean
square tilt as a function of particle length for various levels of the turbulent dissipation rate ε.
The vertical scale represents the calculated mean square tilt, varying from a value of 0° (full
horizontal orientation) to 90° (near total uniform random orientation). It is evident from
figure 5a that larger particles tend toward the stable orientation mode (δθ = 0), as is intuitively
expected. Note that as the turbulence level increases in figure 5b, the width of the distribution
increases to the point where it becomes nearly uniform (flat) at the highest turbulence levels.


374 



Figure  5.  Modeled  particle  orientation  statistics  for  long  cylindrical  fibers  (a)  mean  square  tilt 
as  a  function  of  particle  length  and  (b)  particle  orientation  distribution. 
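Because a cloud contains particles at many tilt angles, any single-particle efficiency has to be
averaged over a tilt distribution such as those in figure 5b before it is applied to the cloud. The
sketch below shows only that averaging step; the Gaussian-like weighting and the placeholder
efficiency function are assumptions for illustration and do not reproduce the Sutherland and Klett
formulation.

    import math

    def orientation_average(efficiency, rms_tilt_deg, n=181):
        """Average a single-particle efficiency(tilt_deg) over an assumed tilt
        distribution of width rms_tilt_deg (0 deg corresponds to a horizontal fiber).
        The exp(-(tilt/rms)^2) weighting is a placeholder, not the model's form."""
        tilts = [90.0 * i / (n - 1) for i in range(n)]
        weights = [math.exp(-(t / rms_tilt_deg) ** 2) for t in tilts]
        return sum(w * efficiency(t) for w, t in zip(weights, tilts)) / sum(weights)

    # Placeholder efficiency: largest for horizontal fibers, smallest for vertical ones.
    demo_q = lambda tilt: 2.0 * math.cos(math.radians(tilt)) ** 2

    print(orientation_average(demo_q, rms_tilt_deg=10.0))   # calm air: close to 2.0
    print(orientation_average(demo_q, rms_tilt_deg=90.0))   # strong turbulence: noticeably lower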


3.  PMW  RESULTS 

Unlike their spherical counterparts, nonspherical obscurants have an extinction efficiency that depends
upon the viewing angle and the level of atmospheric turbulence. It becomes necessary to model
these factors. The PMW model uses the Wentzel-Kramers-Brillouin (WKB) method to calculate
the values of the ensemble averaged efficiencies and the differential scattering cross section for
fibers with lengths much less than the wavelength (i.e., the Rayleigh regime). The WKB
method and the quasistatic model (used to calculate the absorption efficiency), as used by PMW,
are described in papers by Klett and Sutherland (1992); Evans (1991); Pederson, Pederson, and
Waterman (1985); and Pederson, Pederson, and Waterman (1984). The subroutine WKB
requires the following inputs: M, the complex index of refraction; R, the cylinder radius
(microns); L, the cylinder length (millimeters); W, the wavelength (microns); P0, the incident
polarization angle; B, the tilt distribution parameter (as shown in figure 5); D, the maximum tilt
angle (as measured from the X-Y plane); and the zenith angle (degrees). The outputs of WKB
are the absorption, extinction, and scattering ensemble averaged efficiencies and the differential
scattering cross sections for φ = π and φ = 0 with the vector polarizations. The polarization
angle is the angle measured from the vertical, clockwise, in a plane perpendicular to the incident
direction. The vector polarizations are in the same XYZ coordinate system as the incident
angle and the maximum tilt angle.
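For readers building an interface to a similar routine, the input and output lists above can be
mirrored in a simple data structure, as in the sketch below; the field names and the Python packaging
are assumptions made for illustration and do not reproduce the actual subroutine interface used by
COMBIC-PMW.

    from dataclasses import dataclass

    @dataclass
    class WKBInputs:
        m: complex          # complex index of refraction
        r_um: float         # cylinder radius (microns)
        l_mm: float         # cylinder length (millimeters)
        w_um: float         # wavelength (microns)
        p0_deg: float       # incident polarization angle
        b: float            # tilt distribution parameter (figure 5)
        d_deg: float        # maximum tilt angle, measured from the X-Y plane
        zenith_deg: float   # zenith angle (degrees)

    @dataclass
    class WKBOutputs:
        q_abs: float        # ensemble averaged absorption efficiency
        q_ext: float        # ensemble averaged extinction efficiency (the quantity COMBIC uses)
        q_sca: float        # ensemble averaged scattering efficiency
        dscs_back: float    # differential scattering cross section at phi = pi
        dscs_fwd: float     # differential scattering cross section at phi = 0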


375 


4.  COMBIC-PMW 


4.1  Description 

COMBIC-PMW  is  made  up  of  two  models:  one  that  treats  transport  and  diffusion  (the  original 
COMBIC)  and  another  that  models  the  mechanical  and  optical  properties  of  MMW  obscurants 
(PMW).  COMBIC  calls  PMW  as  a  subroutine  and  passes  the  parameters  that  define  the  MMW 
obscurant, such as length, diameter, and complex index of refraction, as well as the polarization
information described in section 3. The outputs are the extinction efficiency, absorption efficiency,
scattering efficiency, phase function, and vector polarization for both backscatter and the angle of
interest. Only the extinction efficiency is used by COMBIC. Future research will make use of
other  parameters. 

4.2  Inputs 

All  input  data  for  COMBIC-PMW  are  entered  in  standard  EOSAEL  format,  A4,6X,7E10.4. 
Input data are entered through 80-character, order-independent, "card" images. Tables 1, 2, and
3 describe the new input cards used in addition to the original COMBIC input cards.


Table 1. The PMWO card describes the properties of the MMW obscurant. The first line
shows the parameters of the record. The second line gives a typical example.
Explanation of the parameters follows.

    PMWO      FLENG     FDIAM     FINDEX    FDNSTY
    PMWO      3.4       1.0       .5        1.8

    NAME      UNITS
    FLENG     mm        Length of fiber
    FDIAM     µm        Diameter of fiber
    FINDEX              Complex index of refraction
    FDNSTY              Density of the fiber

Table 2. The TURB card describes the turbulence and lists frequencies of interest.
The first line shows the parameters of the record. The second line gives a typical
example. Explanation of the parameters follows.

    TURB      EPS       GHz(1)    GHz(2)    GHz(3)    GHz(4)
    TURB      10.0      220       140       94        70

    NAME      UNITS
    EPS                 Turbulence parameter (10 = low turbulence, 100 = medium
                        turbulence, 1000 = high turbulence)
    GHz(1-4)  GHz       COMBIC-PMW computes the transmission at 6 default MMW
                        frequencies (220, 140, 94, 70, 35, and 24 GHz). The user
                        can change the first four.


376 


Table 3. The TLOC card describes the target location and specifies whether the sensor is a
MMW sensor. The first line shows the parameters of the record. The second line gives a
typical example. Explanation of the parameters follows.

    TLOC      OBSN      XTAR      YTAR      ZTAR      TARN      PMW       PANG
    TLOC      1         2000      3         1                             45

    NAME      UNITS
    OBSN                User assigned number matching an observer
    XTAR                Target X location
    YTAR                Target Y location
    ZTAR                Target Z location
    TARN                User assigned target number. One observer can have many targets.
    PMW                 If greater than zero, then the sensor works in MMW frequencies.
    PANG                Incident polarization angle
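Assembled from the example rows in tables 1 through 3, the new cards added to an otherwise standard
COMBIC input deck might look like the fragment below; the surrounding COMBIC cards are omitted,
and the TLOC fields whose example values are not legible in the source are left blank.

    PMWO      3.4       1.0       .5        1.8
    TURB      10.0      220       140       94        70
    TLOC      1         2000      3         1                             45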


4.3  COMBIC-PMW  Results 

Figures 6 through 9 are for identical clouds. The first two plots are crosswind views of three
generators producing graphite. The second two plots are top-down views of the same three
generators. In the first three examples, the atmospheric turbulence is high (ε = 1000), and in
the fourth example, the atmospheric turbulence is light (ε = 10). The first, third, and fourth
examples are for an incident polarization angle (PANG) of 0°, and the second example is for an
incident polarization angle of 90°. The incident angle is the only difference between the first
two examples. The only difference between the third and fourth examples is the turbulence
parameter. Notice how the effectiveness of the exact same clouds changes with incident
polarization angle and also with atmospheric turbulence.


5.  CONCLUSIONS 

Past  models  that  assume  spherical  symmetry  are  not  capable  of  treating  effects  of  either  viewing 
angle  or  atmospheric  turbulence,  which  are  highly  significant  according  to  the  model  of 
Sutherland and Millard. In general, the angular scattering pattern produced by nonspherical
obscurants is much more complex than that produced by their spherical counterparts.


377 


[Figures 6 through 9: transmission along a horizontal line of sight for the cases described above.]


378 


REFERENCES 


Ayres,  S.  D.,  and  S.  DeSutter,  1993.  Combined  Obscuration  Model  for  Battlefield  Induced 
Contaminants  (COMBIC)  User’s  Guide.  In  Press,  Department  of  the  Army,  U.S.  Army 
Research  Laboratory,  Battlefield  Environment  Directorate,  White  Sands  Missile  Range,  NM. 

Bowman,  J.  J.,  T.  B.  A.  Senior,  and  P.  L.  E.  Uslenghi,  1987.  Electromagnetic  and  Acoustic 
Scattering  by  Simple  Shapes.  Hemisphere  Publishing  Corporation. 

Evans,  B.  T.  N.,  1991.  Laboratory  Technical  Report  MLR-R-11231.  Commonwealth  of 
Australia  Department  of  Defence  Materials  Research,  Ascot  Vale,  Victoria  3032,  Australia. 

Farmer,  W.  M.,  and  B.  Kennedy,  1991.  Electro-Magnetic  Properties  of  RADAR/MMW 
Obscurants.  Contract  Report  DAAL03-86-D-0001,  Bionetics  Corporation,  Hampton,  VA. 
Sponsoring  agency:  U.S.  Army  Research  Office,  Research  Triangle  Park,  NC. 

Fournier, G. R., and B. T. N. Evans, 1991. "Approximation to Extinction Efficiency for
Randomly Oriented Spheroids." Applied Optics, 30(15):2042-2048.

Klett, J. D., and R. A. Sutherland, 1992. "Approximate Methods for Modeling the Scattering
Properties of Non-spherical Particles: Evaluation of the Wentzel-Kramers-Brillouin Method."
Applied Optics, 31(3):373-386.

Knox,  J.  E.,  1979.  "Millimetre  Wave  Propagation  in  Smoke."  In  IEEE  EASCON-79 
Conference  Record,  Vol.2,  pp  357-361. 

Pederson,  N.  E.,  J.  C.  Pederson,  and  P.  C.  Waterman,  1984.  Recent  Results  in  the  Scattering 
and  Absorption  by  Elongated  Conductive  Fibers.  Panametrics,  Inc.,  221  Crescent  Street, 
Waltham,  MA  02254. 

Pederson,  N.  E.,  J.  C.  Pederson,  and  P.  C.  Waterman,  1985.  Absorption  and  Scattering  by 
Conductive  Fibers:  Basic  Theory  and  Comparison  with  Asymptotic  Results.  Panametrics, 
Inc.,  221  Crescent  Street,  Waltham,  MA  02254. 

Perry,  M.  R.,  M.  R.  Kulman,  V.  Kogan,  W.  Rouse,  and  M.  Causey,  1994.  Test  Plan  -  Study 
of  Test  Methods  for  Visible,  Infrared,  and  Millimeter  Smoke  Clouds.  ERDEC-CR-115, 
Edgewood  Research  Development  and  Engineering  Center,  Aberdeen  Proving  Ground,  MD. 

Sundaram, G. S., 1979. "Millimetre Waves - The Much Awaited Technological Breakthrough?"
International Defense Review, 11(2):271-277.


379 


Sutherland,  R.  A.,  and  J.  D.  Klett,  1992.  "Modeling  the  Optical  and  Mechanical  Properties  of 
Advanced  Battlefield  Obscurants."  In  Proceedings  of  the  1992  Battlefield  Atmospherics 
Conference. 

Sutherland,  R.  A.,  and  J.  B.  Millard,  1994.  "Modeling  the  Optical  and  Mechanical  Properties 
of  Advanced  Battlefield  Obscurants:  Alternatives  to  Spherical  Approximations."  In 
Proceedings  of  the  19th  Army  Science  Conference. 

Sutherland,  R.  A.,  and  W.  M.  Farmer,  1994.  Second  Workshop  on  the  Electromagnetics  of 
Combat  Induced  Atmospheric  Obscurants.  In  Press,  U.S.  Army  Research  Laboratory, 
Battlefield  Environment  Directorate,  White  Sands  Missile  Range,  NM  88002-5501. 


380 


A  MULTISTREAM  SIMULATION  OF  MULTIPLE  SCATTERING  OF 
POLARIZED  RADIATION  BY  ENSEMBLES  OF  NON- SPHERICAL  PARTICLES 


Sean  G.  O'Brien 
Physical  Science  Laboratory 
New  Mexico  State  University 
Las  Cruces,  New  Mexico  88003-0002 


ABSTRACT 


The Battlefield Emission and Multiple Scattering (BEAMS) model has been
modified to allow for simulation of the multiple scattering of polarized
incident  radiation  by  both  spherically  symmetric  and  non-spherical 
scatterers.  The  modified  Stokes  vector  representation  is  used  to 
characterize  the  incident  and  scattered  radiation  streams .  The  new 
model  uses  multiple  scattering  Mueller  phase  matrices  to  describe  the 
interaction  between  the  incident  radiation  and  the  spatial  volume 
containing  scattering  particles.  The  theory  behind  necessary 
modifications  to  the  BEAMS  model  is  described,  along  with  comparison 
examples  of  the  modified  model  with  the  previous  scalar  version  for 
spherical  (Mie)  particles.  Comparisons  of  total  scattered  power  between 
the  new  and  scalar  BEAMS  versions  show  good  agreement,  indicating  that 
the  coding  and  normalization  of  the  new  version  are  fundamentally  sound. 
BEAMS  simulation  examples  for  preferentially-oriented  ensembles  of  non- 
spherical  particles  are  also  provided.  Interesting  features  and 
applications  of  these  results  are  discussed. 


1 .  INTRODUCTION 

The  practical  simulation  of  interactions  of  electromagnetic  radiation  sources 
with  the  environment  has  been  an  enduring  topic  of  interest  to  military  systems 
analysts and climatological modelers. One of the more difficult aspects of such
simulations  is  the  accurate  depiction  of  radiative  transfer  through  realistic 
atmospheres  composed  of  scatterers  of  varying  size,  shape,  and  number  density. 
By  necessity,  computer  models  developed  to  consider  this  class  of  problems 
represent  compromises  in  both  execution  speed  and  accuracy.  Scene  visualization 
for  infrared  (IR)  and  millimeter  wave  (MMW)  sensors  in  battlefield  environments 
populated  by  dense  inhomogeneous  clouds  of  aerosol  obscurants  is  a  particularly 
demanding  enterprise.  Any  model  used  in  this  application  must  have  reasonably 
high  spatial  and  angular  resolution,  and  be  efficient  enough  to  allow  time- 
stepped  (but  not  necessarily  real-time)  calculations  simulating  relative  motions 
of  the  clouds,  sensors,  and  targets. 

The  BEAMS  series  of  models  (Hoock,  1987,  1991;  Hoock  et  al .  ,  1993;  O'Brien  1993) 
represents  an  evolving  effort  to  provide  practical  and  efficient  means  for 
performing  radiative  transfer  calculations  used  in  scene  visualizations.  The 
earlier versions of the BEAMS models simulated the multiple scattering of
monochromatic  scalar  (unpolarized)  radiation  from  infinite  beam  (e.g.,  solar)  and 
finite  beam  sources.  This  scalar  scattering  treatment  provided  a  foundation  for 
one  of  the  major  goals  of  the  BEAMS  development  project,  which  is  to  model  the 
multiple  scattering  of  arbitrarily  polarized  radiation  by  finite,  inhomogeneous 
aerosol  clouds.  The  latest  version  of  the  BEAMS  model  (version  4.0)  realizes 
this  objective  by  describing  the  incident  and  propagated  radiation  in  terms  of 
modified  Stokes  4-vector  streams.  These  4-vector  streams  replace  the  single 
scalar  intensity  streams  of  the  scalar  model.  In  place  of  the  phase  matrix  used 


381 


by the scalar model, BEAMS 4.0 uses the 4x4 Mueller phase matrix formalism described in the
next section. The review of the workings of BEAMS 2.2 given below will thus be brief.

2 .  THEORY  AND  IMPLEMENTATION 


2.1  Review of the BEAMS 2.2 Scalar Multistream Approach


BEAMS obtains a numerical solution to the radiative transfer problem by dividing the scenario
into cubical volume elements or "cells". Each element is treated as optically homogeneous within
its own volume and may differ optically from its neighbors. Radiative transfer
interactions between adjacent cells are handled as discrete streams of power: radiation incident
on a cell is scattered and transferred to its 26
immediate neighbors using the scattering phase matrix, with each cell's stream output powers serving as inputs
to its neighbors' oppositely-directed streams. The angular shape of that matrix need not be limited to
simple single scattering; in practice, the later versions of the model employ a multiple scattering
phase matrix of the kind described in section 2.4.
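To make the cell-and-stream bookkeeping concrete, the short sketch below simply enumerates the 26 discrete neighbor directions of a cubical cell. It is an illustration only; the function name and data layout are assumptions and are not taken from the BEAMS source code.

# Sketch: enumerate the 26 neighbor directions of a cubical cell.
# Each direction corresponds to one radiation stream exchanged with a neighbor.
# (Illustrative only; not the actual BEAMS data structures.)
import itertools

def stream_directions():
    """Return the 26 unit offsets (dx, dy, dz) from a cell to its neighbors."""
    dirs = [d for d in itertools.product((-1, 0, 1), repeat=3) if d != (0, 0, 0)]
    assert len(dirs) == 26            # 6 faces + 12 edges + 8 corners
    return dirs

if __name__ == "__main__":
    for d in stream_directions():
        print(d)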

2.2  The  Mueller  Matrix  for  Single  Scattering 

The Stokes vector used to characterize the incident and scattered radiation is
derived from the transverse electric field vector (or E vector) of the wave. The
derivation process begins with a description of elliptically polarized light in terms of the
polarization ellipse swept out by the tip of the electric field vector (Figure 1). The parallel (l)
and perpendicular (r) components of the E field
may be represented by the relations


E_l = E_{l0} \sin(\omega t - \epsilon_l)
E_r = E_{r0} \sin(\omega t - \epsilon_r)                                    (1)


where the corresponding intensities are given by I_l = E_{l0}^2 and I_r = E_{r0}^2, and the
total intensity is given by I = I_l + I_r. It is convenient to define the


382 


ratio of the minor axis to the major axis
as the tangent of a parameter \beta: \tan\beta = a/b.
The total E vector may be
decomposed into components along the
major (E_\chi) and minor (E_{\chi+\pi/2}) axes of
the polarization ellipse:

E_\chi = E_0 \sin\omega t \cos\beta
E_{\chi+\pi/2} = E_0 \cos\omega t \sin\beta                                    (2)


Projecting the components E_\chi and E_{\chi+\pi/2}
onto the parallel and perpendicular
scattering planes, and expanding the
Eq. 1 relations, it is seen that


E_l = E_0 (\sin\omega t \cos\beta \cos\chi - \cos\omega t \sin\beta \sin\chi)
    = E_{l0} (\sin\omega t \cos\epsilon_l - \cos\omega t \sin\epsilon_l)
                                                                            (3)
E_r = E_0 (\sin\omega t \cos\beta \sin\chi + \cos\omega t \sin\beta \cos\chi)
    = E_{r0} (\sin\omega t \cos\epsilon_r - \cos\omega t \sin\epsilon_r)

Equating the coefficients of the \sin\omega t and \cos\omega t terms in Eq. 3, and using
simple trigonometric identities, the amplitudes and phases in Eq. 3 are seen to
obey the relations

E_{l0}^2 = E_0^2 (\cos^2\beta \cos^2\chi + \sin^2\beta \sin^2\chi)
E_{r0}^2 = E_0^2 (\cos^2\beta \sin^2\chi + \sin^2\beta \cos^2\chi)          (4)
\tan\epsilon_l = \tan\beta \tan\chi
\tan\epsilon_r = -\tan\beta \cot\chi


Figure  1.  Polarization  ellipse  for 
elliptically  polarized  plane  wave. 


The components of the modified Stokes vector F = {I_l, I_r, U, V} can then be
defined in terms of either their fundamental forms, involving amplitudes and
phases, or forms composed of intensities and the geometry of the polarization
ellipse:

I_l = E_{l0}^2 = I (\cos^2\beta \cos^2\chi + \sin^2\beta \sin^2\chi)
I_r = E_{r0}^2 = I (\cos^2\beta \sin^2\chi + \sin^2\beta \cos^2\chi)        (5)
U = 2 E_{l0} E_{r0} \cos(\epsilon_l - \epsilon_r) = I \cos 2\beta \sin 2\chi
V = 2 E_{l0} E_{r0} \sin(\epsilon_l - \epsilon_r) = I \sin 2\beta

The  intensity  form  of  the  Stokes  vector  is  convenient  for  use  by  a  flux  transport 
model  like  BEAMS,  because  each  of  its  components  may  be  treated  as  an 
independently-propagating  stream.  The  4x4  transformation  matrix  R  that  defines 
the  scattering  process  converts  an  incoming  Stokes  vector  F  into  an  outgoing 
vector F' = {I_l', I_r', U', V'}; F' = R F.
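Eq. 5 translates directly into a few lines of code. The sketch below (in Python; the function name is illustrative and not part of BEAMS) builds the modified Stokes vector {I_l, I_r, U, V} from a total intensity I, the ellipticity parameter beta, and the orientation angle chi; the unpolarized portion of a beam is handled separately as (1/2, 1/2, 0, 0), as noted later in the text.

import numpy as np

def modified_stokes(I, beta, chi):
    """Modified Stokes vector (I_l, I_r, U, V) per Eq. 5.

    I    : total intensity
    beta : ellipticity parameter (tan(beta) = minor-to-major axis ratio)
    chi  : orientation angle of the polarization ellipse
    """
    Il = I * (np.cos(beta)**2 * np.cos(chi)**2 + np.sin(beta)**2 * np.sin(chi)**2)
    Ir = I * (np.cos(beta)**2 * np.sin(chi)**2 + np.sin(beta)**2 * np.cos(chi)**2)
    U  = I * np.cos(2*beta) * np.sin(2*chi)
    V  = I * np.sin(2*beta)
    return np.array([Il, Ir, U, V])

# Example: linearly polarized light (beta = 0) of unit intensity at chi = 45 degrees
# gives (0.5, 0.5, 1.0, 0.0).
print(modified_stokes(1.0, 0.0, np.pi/4))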


383 


"models  calculate  the  polarized  scattering  properties  of 
an  aerosol  particle  or  collection  of  particles  in  the  form  of  2x2  amplitude 

aS  outgoinrw^ies!  ®  amplitudes  of  the  incoming 


S2  s,] 


S, 


matrices  may  be  transformed  to  the  intensity  (Mueller) 
form  through  the  amplitude  definitions  for  the  Stokes  vector  components  The 

a  given  S  into  a  corresponding  R  is  given  by  (van  de  Hulst, 


where

R = \begin{pmatrix}
M_2 & M_3 & S_{23} & -D_{23} \\
M_4 & M_1 & S_{41} & -D_{41} \\
2S_{24} & 2S_{31} & S_{21}+S_{34} & -D_{21}+D_{34} \\
2D_{24} & 2D_{31} & D_{21}+D_{34} & S_{21}-S_{34}
\end{pmatrix}                                                               (7)

with

M_k = S_k S_k^*
S_{kj} = S_{jk} = (S_j S_k^* + S_k S_j^*)/2                                 (8)
-D_{kj} = D_{jk} = i (S_j S_k^* - S_k S_j^*)/2

The asterisk superscript in Eq. 8 denotes complex conjugation of the S matrix
elements, which are in general complex-valued. The elements of the Mueller
matrix R in Eq. 7 are real-valued.
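As a check on the bookkeeping in Eqs. 7 and 8, the following sketch converts a set of amplitude matrix elements S1-S4 into the corresponding Mueller matrix R. It assumes the reconstruction of Eqs. 7-8 given above (van de Hulst's modified Stokes convention); the function name is illustrative and is not part of the BEAMS code.

import numpy as np

def mueller_from_amplitude(S1, S2, S3, S4):
    """Build the 4x4 Mueller matrix R (modified Stokes basis) from the
    2x2 amplitude matrix elements S1..S4, following Eqs. 7 and 8."""
    S = {1: S1, 2: S2, 3: S3, 4: S4}
    M = {k: abs(S[k])**2 for k in S}                      # M_k = S_k S_k*
    Sjk = lambda j, k: (S[j] * np.conj(S[k])).real        # (S_j S_k* + S_k S_j*)/2
    Djk = lambda j, k: -(S[j] * np.conj(S[k])).imag       # i(S_j S_k* - S_k S_j*)/2
    return np.array([
        [M[2],         M[3],         Sjk(2, 3),              -Djk(2, 3)              ],
        [M[4],         M[1],         Sjk(4, 1),              -Djk(4, 1)              ],
        [2*Sjk(2, 4),  2*Sjk(3, 1),  Sjk(2, 1) + Sjk(3, 4),  -Djk(2, 1) + Djk(3, 4)  ],
        [2*Djk(2, 4),  2*Djk(3, 1),  Djk(2, 1) + Djk(3, 4),   Sjk(2, 1) - Sjk(3, 4)  ],
    ])

# For a spherical (Mie) scatterer S3 = S4 = 0 and R becomes block diagonal, so
# I_l and I_r scatter independently while U and V mix only through S2 S1*.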

2.3  Change  of  Coordinates  between  Planes  of  Incidence  and  Scattering 

The Stokes vector representation given by Eqs. 5-8 is directly usable only when
a single plane of scattering is studied. If a fixed scenario coordinate
system is employed (as is the case for BEAMS), the incoming and outgoing
propagation directions of the Stokes vector are essentially arbitrary. In that
case, rotation matrices must be employed to rotate the Stokes vector defined in
the meridian plane into the scattering plane before the scattering event and
back out of the scattering plane after scattering. The procedure for constructing such
rotation matrices is straightforward (Chandrasekhar, 1960). Looking in the
direction of propagation, a clockwise rotation of the reference axes through an
angle \alpha changes the orientation angle \chi of the polarization
ellipse to \chi - \alpha. The defining relations in Eq. 5 then become

I_l' = I (\cos^2\beta \cos^2(\chi-\alpha) + \sin^2\beta \sin^2(\chi-\alpha))
I_r' = I (\cos^2\beta \sin^2(\chi-\alpha) + \sin^2\beta \cos^2(\chi-\alpha))    (9)
U' = I \cos 2\beta \sin 2(\chi-\alpha)
V' = I \sin 2\beta


Expanding these expressions, grouping terms, and making appropriate
identifications from Eq. 5, Eq. 9 defines a rotation matrix L that expresses the
vector F' in the rotated coordinate system in terms of the vector F in the
initial coordinate system (i.e., F' = L F):


384 


L(\alpha) = \begin{pmatrix}
\cos^2\alpha & \sin^2\alpha & \tfrac{1}{2}\sin 2\alpha & 0 \\
\sin^2\alpha & \cos^2\alpha & -\tfrac{1}{2}\sin 2\alpha & 0 \\
-\sin 2\alpha & \sin 2\alpha & \cos 2\alpha & 0 \\
0 & 0 & 0 & 1
\end{pmatrix}                                                               (10)

The  geometry  for  a  scattering  event  in  the  BEAMS  model  is  shown  in  Figure  2 .  The 
nomenclature  for  angles  used  here  follows  that  given  by  Chandrasekhar.  Referring 
to  Fig.  2,  the  component  of  the  E  field  parallel  to  the  meridian  plane  containing 
the  Z  axis  (Z  assumed  vertical)  is  labeled  V;  the  perpendicular  component  (which 
is  parallel  to  the  XY  plane)  is  labeled  H.  The  spherical  angle  between  meridian 
plane 1 (which contains the vertical Z axis and the line of incidence through the
origin) and the scattering



plane 

(which contains the lines of incidence
and scattering through the origin) is
denoted by i_1. The spherical angle i_2
is formed by meridian plane 2
(containing the line of scattering
through the origin and the Z axis) and
the scattering plane. The scattering
angle between the incoming and
outgoing directions is \Theta, and (\theta_1, \phi_1),
(\theta_2, \phi_2) are the respective polar and
azimuth angles for the incoming and
outgoing directions. The transformation
angles i_1 and i_2 may then be
obtained from the cosine law for a
spherical triangle (Smart, 1977):


Figure  2 .  Geometry  for  Stokes  vector 
scattering  in  BEAMS  4.0. 


\cos i_1 = \frac{\cos\theta_2 - \cos\theta_1 \cos\Theta}{\sin\theta_1 \sin\Theta}
\qquad
\cos i_2 = \frac{\cos\theta_1 - \cos\theta_2 \cos\Theta}{\sin\theta_2 \sin\Theta}       (11)


A Stokes vector scattering from meridian 1 to meridian 2 in Fig. 2 must be trans-
formed by the linear transformation L(-i_1) (Eq. 10) prior to the scattering
event. After scattering, another rotation L(\pi - i_2) is performed in order to
express the scattered Stokes vector in terms of the orthogonal (V, H) components
in meridian 2. The Mueller phase matrix for scattering from meridional plane
1 to meridional plane 2 may then be stated as

P_{12}(\theta_1, \phi_1;\ \theta_2, \phi_2) = L(\pi - i_2)\, R(\cos\Theta)\, L(-i_1)    (12)


Eq. 12 is the fundamental result that allows the transition from the scalar
scattering model of BEAMS 2.2 to the polarized scattering treatment of version
4.0. The user now must input new parameters that specify the fraction of
incident flux that is polarized, the polarization angle \chi, and the axial ratio
parameter \beta of the polarized component. These quantities are used in conjunction


385 


with Eq. 5 to construct the Stokes vector for both the polarized and unpolarized
(I_l = I_r = 1/2, U = V = 0) portions of the incident radiation. After propagation
through the rectangular array of cubical scattering cells, the resulting Stokes
vectors may be analyzed to yield degree of polarization, polarization angle, and
ellipticity of polarization information for radiances at the boundary of any
individual scattering cell or for any group of such cells.
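The chain of operations in Eqs. 10-12 can be sketched as follows. The rotation matrix follows Eq. 10 as reconstructed above, the angles i_1 and i_2 come from Eq. 11, and the scattering angle between the two directions is obtained from the standard spherical law of cosines (an assumption, since that formula is not written out in the text). Function names are illustrative, quadrant handling of i_1 and i_2 is omitted, and this is not the BEAMS implementation.

import numpy as np

def rotation_L(alpha):
    """Rotation matrix L(alpha) for the modified Stokes vector (Eq. 10)."""
    c2, s2 = np.cos(alpha)**2, np.sin(alpha)**2
    s2a, c2a = np.sin(2*alpha), np.cos(2*alpha)
    return np.array([[c2,    s2,    0.5*s2a, 0.0],
                     [s2,    c2,   -0.5*s2a, 0.0],
                     [-s2a,  s2a,   c2a,     0.0],
                     [0.0,   0.0,   0.0,     1.0]])

def phase_matrix(R_of_cos_theta, th1, ph1, th2, ph2):
    """Mueller phase matrix P12 for meridian-to-meridian scattering (Eqs. 10-12).

    R_of_cos_theta : callable returning the scattering-plane Mueller matrix R
                     for a given cosine of the scattering angle.
    """
    # Scattering angle between the incoming and outgoing directions
    # (spherical law of cosines; assumed, not quoted from the paper).
    cos_T = (np.cos(th1)*np.cos(th2) +
             np.sin(th1)*np.sin(th2)*np.cos(ph2 - ph1))
    T = np.arccos(np.clip(cos_T, -1.0, 1.0))
    # Transformation angles from the spherical cosine law (Eq. 11).
    i1 = np.arccos(np.clip((np.cos(th2) - np.cos(th1)*cos_T) /
                           (np.sin(th1)*np.sin(T)), -1.0, 1.0))
    i2 = np.arccos(np.clip((np.cos(th1) - np.cos(th2)*cos_T) /
                           (np.sin(th2)*np.sin(T)), -1.0, 1.0))
    # P12 = L(pi - i2) R(cos T) L(-i1)   (Eq. 12)
    return rotation_L(np.pi - i2) @ R_of_cos_theta(cos_T) @ rotation_L(-i1)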

2.4  The  Multiple  Scattering  Mueller  Matrix 


The  Stokes  vector  formalism  allows  for  the  construction  of  a  Mueller  phase  matrix 
that  reflects  multiple  scattering  effects.  The  method  used  in  the  BEAMS  4.0 
package  is  essentially  identical  to  that  employed  by  the  scalar  BEAMS  2.2.  A 
single  scattering  model  is  first  used  to  generate  the  scattering  amplitude  matrix 
S.  The  scattering  plane  Mueller  matrix  R  is  next  generated  by  averaging  over  the 
BEAMS input streams and applying Eqs. 7 and 8. The single scattering Mueller
phase matrix P is then generated for stream-to-stream scattering geometries with
the relations of Eqs. 10-12. This result is stored in a file (named POLOUT.MAT)
with  a  format  that  is  directly  usable  by  the  BEAMS  4.0  program. 

As  in  the  scalar  version  of  BEAMS,  a  dedicated  version  of  BEAMS  (named  MSPPHMX) 
was  created  to  generate  the  multiple  scattering  phase  matrix.  This  version  does 
not  have  the  normal  BEAMS  output  routines  and  takes  its  input  from  files 
containing  the  single  scattering  Mueller  matrix  P  (POLOUT.MAT)  and  a  uniform 
aerosol  concentration  parameter  fixed  at  a  value  of  unity.  The  code  computes  the 
stream  output  Stokes  vector  radiances  for  a  uniform  cubical  5x5x5  array  of  cells. 
The  axial  optical  depth  t  of  the  identical  component  cells  is  varied  to  give 
results  for  different  total  axial  optical  depths  5t  of  the  cubical  array.  The 
output  radiances,  when  renormalized  under  energy  conservation,  provide  the 
multiple  scattering  Mueller  matrix  for  the  5t  cubical  array.  This  matrix  result 
can  be  used  for  an  individual  cell  in  a  nonuniform  rectangular  scenario  array  in 
a  BEAMS  4.0  production  run. 

The  demand  that  creation  and  storage  of  the  BEAMS  4.0  multiple  scattering  Mueller 
matrix  places  upon  computer  resources  is  considerable.  MSPPHMX  loops  over  15 
optical  depths,  creating  a  result  for  each  depth.  At  each  optical  depth,  the 
BEAMS  code  is  executed  once  for  each  input  stream  direction  (for  a  total  of  26 
separate  runs) .  If  the  aerosol  scatterers  under  study  are  spherical,  display 
some  degree  of  shape  symmetry,  or  are  randomly- oriented,  then  clearly  the  number 
of  such  runs  could  be  reduced  by  application  of  symmetry.  Such  reductions  are 
inconvenient  because  they  must  be  applied  on  a  case-by-case  basis  and  require 
care  to  avoid  errors  caused  by  inappropriate  symmetry  assumptions .  For  this 
reason,  the  MSPPHMX  software  only  considers  the  general  case  where  no  symmetry 
is  assumed. 

A  Mueller  phase  matrix  is  computed  by  MSPPHMX  at  each  of  15  optical  depths,  is 
stored  in  a  file  named  MSPPHMX. MAT,  and  contains  26x26x4x4  =  10,816  elements. 
In  a  binary  file  format,  a  file  of  15  such  phase  matrices  slightly  exceeds  half 
of  a  megabyte  in  size.  Thus,  on  any  capable  modern  computer  system,  file  size 
is  seldom  a  problem.  However,  because  BEAMS  logarithmically  interpolates  phase 
matrix  elements  over  optical  depth,  the  entire  phase  matrix  data  set  must  reside 
in  memory  during  a  BEAMS  4.0  run.  The  required  Mueller  multiple  scattering  phase 
matrix  storage,  combined  with  that  required  for  the  scenario  array  of  cubical 
aerosol  scattering  elements,  makes  the  BEAMS  4.0  model  impractical  to  use  on 
personal  computer  platforms  for  scenario  arrays  with  over  a  few  thousand  cells. 
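The phrase "logarithmically interpolates phase matrix elements over optical depth" suggests a table lookup of the following kind. This is a sketch assuming interpolation linear in the logarithm of optical depth, with placeholder data, an assumed depth spacing, and illustrative names rather than the actual MSPPHMX.MAT format.

import numpy as np

# 15 stored optical depths (spacing assumed) and placeholder phase matrix tables
# laid out as 26 input streams x 26 output streams x 4 x 4 Stokes components.
depths = np.logspace(-2, 1, 15)
phase_tables = np.random.rand(15, 26, 26, 4, 4)   # stand-in for the stored matrices

def phase_matrix_at(tau):
    """Interpolate phase matrix elements linearly in log(optical depth)."""
    x = np.log(np.clip(tau, depths[0], depths[-1]))
    xs = np.log(depths)
    i = np.clip(np.searchsorted(xs, x) - 1, 0, len(xs) - 2)
    w = (x - xs[i]) / (xs[i + 1] - xs[i])
    return (1.0 - w) * phase_tables[i] + w * phase_tables[i + 1]

print(phase_matrix_at(0.5).shape)   # (26, 26, 4, 4): 10,816 values per stored depth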


386 


3 . 0  APPLICATIONS 


3.1  Mie Scattering - Comparison of BEAMS 2.2 and BEAMS 4.0

A Mie scattering aerosol was chosen to compare the far field scattered power
predicted by the scalar BEAMS 2.2 code with the total (I_l + I_r) scattered power
yielded by version 4.0 of BEAMS. The 2x2 scattering amplitude matrix for the
Deirmendjian Cloud C.1 aerosol (at a wavelength of 0.45 μm) (Deirmendjian, 1969)
was employed for this purpose. A uniform cubical cloud with an edge length of
5 m was constructed for this case, with an axial optical depth of 5. The top (+Z
face) of the cube was illuminated by an unpolarized plane-parallel infinite beam
with a flux value of 1 W/m². Figure 3 shows how the scalar and polarized

scattering versions of the BEAMS model compared. The quantity compared is the
total scattered power exiting from the boundary surface of the cube into a solid
angle equal to 4π/26 steradians. It is apparent that the total scattered power
predicted by the two versions of BEAMS compare reasonably well for this case.
The BEAMS 4.0 results indicate that the scattered power in the forward
hemisphere (scattering angles less than 90 degrees) is not strongly polarized.
The backward hemisphere shows a marked enhancement in the horizontal
polarization component, which is consistent with the polarization properties of
Mie scatterers.

Figure 3. Comparison of detected power in the XZ plane for BEAMS 2.2 (scalar)
and BEAMS 4.0 (polarized) multiple scattering phase matrices for the
Deirmendjian C.1 aerosol.




3.2  Ice  Crystals 

The  BEAMS  4 . 0  model  can  be  used  to  predict  the  scattered  power  from  an  ensemble 
of  preferentially-oriented  or  randomly-oriented  non-spherical  aerosol  particles. 
For  the  example  shown  here,  a  solid  cylindrical  ice  rod  with  a  20:1  (length-to- 
diameter)  aspect  ratio  was  chosen.  In  order  to  simulate  results  for  a  randomly- 
oriented  particle,  the  Digitized  Green  Function  (DGF)  model  (Goedecke  and 
O'Brien,  1988)  was  used  to  generate  an  averaged  scattering  amplitude  matrix  for 
a  set  of  orientations  of  this  particle.  The  ice  rod  length  was  set  at  0.98  mm, 
the  illuminating  wavelength  was  fixed  at  3  mm,  and  a  complex  refractive  index  of 
1.78  +  0.00387  i  was  used.  Figures  4  and  5  show  the  BEAMS  4.0  radiances  for  a 
32 m x 8 m x 8 m uniform cloud of such crystals. The optical depths along the
X,  Y,  Z  axes  of  the  cloud  were  6.4,  1.6,  and  1.6,  respectively.  The 
horizontally-polarized  (H)  collimated  illuminating  source  had  a  uniformly- 
illuminated  aperture  of  0.5  m  diameter  pointing  in  the  +X  direction.  Beam  power 
was  set  at  10  W,  and  the  entry  point  of  the  beam  was  at  X  =  -16  m,  Y  =  +0.5  m, 
and  Z  =  +0.5  m.  The  emergent  radiances  shown  here  are  actually  the  total 
scattered power emitted from the cloud into a solid angle of 4π/26 steradian.


387 


Figure  4.  Orthographic  perspective 
view  of  emergent  radiances  from  ice 
crystal  cloud:  V  polarization. 


Figure  5 .  Orthographic  perspective 
view  of  radiances  from  ice  crystal 
cloud:  H  (incident)  polarization. 


The viewing direction chosen for the emergent radiance is looking down at an
azimuth of 0 degrees (in the +X direction), which corresponds to oblique
scattering in the backward hemisphere. It can be seen that the backscattered
radiance from the cross-polarized (V) component (Fig. 4) is rather weak and
diffuse compared to the H (incident) component (Fig. 5). Also, note the growing
strength and spread of the scattered radiance as the beam penetrates into the
cloud. The relatively thin transverse optical depth of the cloud appears to be
the cause of this trend.

3.3  Graphite  and  Metal  Fibers 

Preferentially-oriented graphite and copper fibers with very large aspect ratios
may be used to illustrate possible applications of the BEAMS 4.0 model. The
scattering amplitude matrices for such fibers (Klett, 1994; Sutherland and Klett,
1994) may be created for average particle orientations at different mechanical
turbulence levels. Under low turbulence conditions, an aerosol fiber will
frequently fall with its long axis wobbling in a small envelope about the
horizontal (Sutherland and Klett, 1992). Both fibers were 3 mm long, with the
graphite and copper having respective diameters of 7 μm and 10 μm. Both fibers
were also assigned bulk material densities of 2.5 g/cm³. The graphite fiber
diameter was such that it fell with a larger amplitude of wobble than that of the
copper wire fiber. The same size and optical depth were used for the aerosol
cloud as in the previous (ice crystal) example, as were the source location,
orientation, aperture size, power, and polarization. Figures 6 and 7 show the
emergent radiance results for the graphite, and Figures 8 and 9 show the copper
wire results. The direction of the emergent radiance is the same as in the
previous (ice crystal) case. It can be seen that the cross-polarized radiance
is considerably stronger for both the graphite and copper fibers than it is for
the ice crystals, although the more preferentially-oriented copper wire
scatterers show an appreciable enhancement of the H polarization radiance over
that of the V component.


388 


Figure  6 .  Emergent  radiances  for 
graphite  cloud  (V  polarization) . 


Figure  8 .  Emergent  radiances  for 
cloud  of  copper  wire  scatterers  (V 
polarization) . 


4 .  CONCLUSIONS 


Figure  7 .  Emergent  radiances  for 
graphite  cloud  (H  polarization) . 


Figure  9 .  Emergent  radiances  for 
cloud  of  copper  wire  scatterers  (H 
polarization) . 


Modifications to BEAMS 2.2 for treating Stokes vector scattering have produced
a model (BEAMS 4.0) that compares well with its predecessor for Mie scattering.
Preliminary BEAMS 4.0 results for scattering from randomly and preferentially
oriented particles are consistent with expectations. However, more testing will
be done to confirm the validity and consistency of the new model. One difficulty
with  the  BEAMS  4.0  code  is  that  it  places  considerable  demands  upon  system 
resources,  even  in  production  mode.  For  rectangular  scenario  arrays  (e.g., 
64x32x32)  with  large  numbers  of  elements,  a  BEAMS  4.0  execution  may  take  several 
hours  on  a  fairly  capable  machine  (i.e.,  a  Silicon  Graphics  Onyx)  .  This  does  not 
represent  a  problem  for  the  intended  applications  of  the  BEAMS  model,  where 
multiple  scattering  radiance  statistics  are  examined.  Nevertheless,  in  cases 
where  polarization  effects  are  not  significant,  it  is  still  preferable  to  use  one 
of  the  faster,  scalar  versions  of  BEAMS. 


REFERENCES 


Chandrasekhar, S., 1960. Radiative Transfer. Dover Publications, New York, NY.

Deirmendjian, D., 1969. Electromagnetic Scattering on Spherical Polydispersions.
American Elsevier, New York, NY.

Goedecke, G.H., and S.G. O'Brien, 1988. "Scattering by irregular inhomogeneous
particles via the digitized Green's function algorithm", Appl. Opt., 27:2431.

Hoock, D.W., 1987. "A Modeling Approach to Radiative Transfer through ...
Clouds", Proceedings of the 7th Annual EOSAEL/TWI
Conference, Las Cruces, NM, pp. 575-596.

Hoock,  D.W.,  1991.  "Theoretical  and  Measured  Fractal  Dimensions  for  Battlefield 
Aerosol  Cloud  Visualization  and  Transmission",  Proceedings  of  the  1991 
Battlefield  Atmospherics  Conference,  Ft.  Bliss,  TX,  pp.  46-55. 

Hoock, D.W., J.C. Giever, and S.G. O'Brien, 1993. "Battlefield Emission and
Multiple Scattering (BEAMS), a 3-D Inhomogeneous Radiative Transfer Model",
Proceedings of the SPIE Vol. 1967, Characterization, Propagation, and
Simulation Conference, Orlando, FL, pp. 268-277.

Klett, J.D., 1994. Scattering of Polarized Light by High Conductivity Fiber
Aerosols in Turbulent Air, Final Report, Contract No. DAAD07-91-C-0139, PAR
Associates, 4507 Mockingbird St., Las Cruces, NM 88001.

O'Brien, S.G., 1993. "... BEAMS 2.2 Radiative Transfer Algorithm ...
Radiative Transfer Methods", Proceedings of the 1993 Battlefield
Atmospherics Conference, Las Cruces, NM, pp. 421-435.

Smart, W.M., 1977. Textbook on Spherical Astronomy. Cambridge University Press.

Sutherland,  R.A.,  and  J.D.  Klett,  1992.  "Modeling  the  Optical  and  Mechanical 
Properties  of  Exotic  Battlefield  Obscurants",  Proceedings  of  the  1992 
Battlefield  Atmospherics  Conference,  Ft.  Bliss,  TX,  pp.  237-246. 

Sutherland,  R.A.,  and  J.D.  Klett,  1994.  Private  communications. 

van de Hulst, H.C., 1981. Light Scattering by Small Particles. Dover
Publications, New York, NY, 470 pp.


390 


COMBINED  OBSCURATION  MODEL  FOR  BATTLEFIELD 
INDUCED  CONTAMINANTS-RADIATIVE  TRANSFER  VERSION  (COMBIC-RT) 

Scarlett  D.  Ayres,  Doug  Sheets 
and  Robert  Sutherland 
Battlefield  Environment  Directorate 
U.S.  Army  Research  Laboratory 
White  Sands  Missile  Range,  New  Mexico  88002-5501 


ABSTRACT 

The  COMBIC  model  was  originally  developed  for  the  Electro-Optical  Systems 
Atmospheric  Effects  Library  (EOSAEL  84)  to  model  effects  of  direct  transmission 
(i.e.  Beer  Law)  only  and  ignored  the  more  complicated  effect  of  contrast 
transmission.  COMBIC-RT  represents  an  improvement  in  the  radiative  transfer 
algorithm  to  account  for  single  and  multiple  scattering,  and  hence  contrast 
transmission.  COMBIC-RT  is  a  merger  between  COMBIC  and  the  Large  Area  Smoke 
Screen  (LASS)  model  developed  in  1985.  COMBIC-RT  reverts  to  normal  COMBIC  if 
the  RT  option  is  not  exercised.  If  the  RT  option  is  exercised  then  the  outputs 
of  the  model  are  symbolic  maps  displaying  the  direct  and  diffuse  components  of 
scene  transmission  as  affected  by  a  large-area  smoke  screen  or  a  contrast 
transmission  history.  The  model  can  be  exercised  with  various  optional  inputs 
to  determine  the  effects  of  solar  angle,  solar  flux  density,  sky  radiance, 
surface  albedo,  etc.  The  COMBIC  part  of  the  model  applies  the  Gaussian 
diffusion  approximation  to  compute  obscurant  concentration  path  length  (CL 
product),  and  the  LASS  part  applies  the  plane-parallel  approximation  to  compute 
target-background  contrast  and  contrast  transmission.  The  radiative-transfer 
algorithms  are  unique  to  LASS  and  COMBIC-RT  in  the  use  of  the  extensive 
radiative-transfer  tables  originally  published  by  Van  De  Hulst  that  are  used 
together  with  novel  scaling  algorithms  to  account  for  effects  of  single  and 
multiple  scattering  along  arbitrary  slant  path  and  horizontal  lines  of  sight 
(LOS).  The  model  does  not  treat  thermal  emission  and  is  thus  restricted  to 
visible  and  near-infrared  regions.  The  obscurant  phase  function  is  taken  to  be 
of  the  Henyey-Greenstein  form  and  can  account  for  various  degrees  of  anisotropic 
scattering  as  well  as  isotropic  scattering.  The  model  accounts  for  scattering 
of  the  direct  solar  beam,  uniform  diffuse  skylight,  and  diffuse  reflection  from 
the  underlying  (earth)  surface. 

1.  INTRODUCTION 

One  of  the  original  purposes  for  developing  COMBIC-RT  model  was  to  assist  in 
modeling  the  effectiveness  of  smoke  screens  used  in  wargame  simulations.  Large 
area  self-screening  smokes  are  feasible  at  large  fixed  and  semifixed  military 
installations  such  as  air  bases,  air  fields,  and  ammunition  supply  points  where 
attack by nap-of-the-earth aircraft is a possibility. The commanders of these
military installations need to know to what degree a LASS deployment will
protect their stations from enemy aircraft, as well as how the LASS will affect
friendly aircraft. The wargame simulations will ultimately impact the doctrine
the commander will use. In these types of scenarios, contrast reduction caused
by  scattering  of  light  is  the  major  acquisition  defeat  mechanism.  This 
scattering  of  light  into  the  path  in  real  world  scenarios  can  often  be  of 
overriding  significance  in  affecting  perception.  A  natural  example  is  the 
apparent  disappearance  of  stars  in  daytime.  Another  common  example  is  the 
backscatter  from  headlights  when  driving  through  fog  with  the  brights  on.  The 
degree to which scattering can be important is indicated by the optical
properties of the medium: the mass extinction coefficient α, which combines
absorption and scattering out of the path of propagation into one term; the
single scattering albedo (ω₀), which indicates the fractional amount of
scattering; and (1 - ω₀), which indicates the fractional amount of absorption.
Conventional visible-band obscurants such as fog oil have a single scattering
albedo close to one, indicating a predominance of scattering.


391 


COMBIC-RT is made up of two sub-models: one that treats transport and diffusion
(the original COMBIC) and another that treats radiative transfer (the radiative
transfer algorithms of LASS). COMBIC uses a Gaussian formalism to calculate,
for a potentially unlimited number of smoke clouds, the obscurant path-integrated
concentration (CL) for either parallel LOSs over the extent of the entire screen
or for just individual LOSs. The radiative transfer segment performs extensive
radiative transfer calculations by using the plane parallel approximation that
essentially transforms a CL map into a radiative transfer map of contrast
transmission. Since the output of COMBIC-RT includes path radiance and
downward-directed hemispherical surface irradiance, digital maps of these
quantities may also be generated with minor code modifications. The model is
primarily applicable to situations in which the observer (for example, an
aircraft) is located above the screen and the target is located on the surface.
The LASS computer model provides a tool for the study of large area screening
systems applications and effects.


2. BACKGROUND

Models  like  CASTFOREM  directly  relate  transmission  to  Electro-Optical  (EO) 
system  performance  and  smoke  effectiveness  by  considering  only  the  directly 
transmitted  signal: 

S(r) = S(r_0)\, T                                                            (1)


where S(r) is the optical signal received by an observer at (r) from a target
at (r_0). The transmission (T) includes effects of both scattering out of the
path plus absorption along the path. However, EO systems respond not only to
directly transmitted radiation but also to contrast. Equation (1) is thus
modified to include a term representing path radiance as:

S(r) = S(r_0)\, T + S_p(r)                                                   (2)


where the contribution due to path radiance (S_p) may be due either to scattering
of ambient radiation (sun, sky) into the path of propagation or emission along
the path, or both. Path radiance has a directional nature, causing asymmetries
between target and observer: one or the other may have an optical advantage that is
not present when one models only the direct transmission component. The LASS
model was developed to model these effects. The radiative transfer algorithms
were then integrated with COMBIC, creating COMBIC-RT, to enable COMBIC to compute path radiance.


Most  target  acquisition  models  work  by  determining  the  number  of  resolvable 
cycles  across  the  target.  This  directly  relates  to  the  target  contrast  X  at 
the  sensor's  aperture  for  non-thermal  sensors,  and  for  a  slant-path  LOS: 


T_c(\pm\mu,\phi) = \frac{(A_t - A_b)\, e^{-\tau/\mu}}{A_b\, e^{-\tau/\mu} + F^*(\pm\mu,\phi)\left(1 - e^{-\tau/\mu}\right)}        (3)


where μ is the direction cosine directed upward, -μ is the direction cosine
directed downward, φ is the azimuth, τ is the optical depth, and A_t and A_b refer to the
target and background albedos. F*(±μ,φ) is called the Duntley factor, after the
pioneering work of S. Q. Duntley (Duntley, 1948), and reduces to the "sky-to-ground"
ratio for a horizontal LOS.

The probability of acquisition may be calculated using the integral

P = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{(n - n_{50})/\sigma} e^{-x^2/2}\, dx          (4)

where n is the number of resolvable cycles across the target, n_50 is the number of
resolvable cycles for an acquisition probability of 50 percent, and σ is the standard
deviation of the number of resolvable cycles across the target. Using COMBIC-RT and a target acquisition


392 




model  like  the  one  in  CASTFOREM,  it  is  possible  to  determine  the  probability  of 
acquisition  of  a  given  target  through  a  LASS  cloud  at  any  given  point  in  space 
and  time.  This  provides  a  direct  measure  of  the  effectiveness  of  smoke.  Figure 
1 shows the effect of sun angle on detection probabilities for different optical
depths (τ). The probability of detection for τ of 1 varies from 34% in the
case of the sun to the front of the observer to 63% for the sun behind the
observer. This is as expected. Most of the time, it is easier to "see" with
the sun to the back.
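Taking Eq. 4 at face value (as reconstructed above), the acquisition probability is a Gaussian cumulative distribution evaluated at (n - n_50)/σ, which can be written with the error function. The function and variable names below are illustrative only and are not taken from COMBIC-RT or CASTFOREM.

import math

def p_acquire(n, n50, sigma):
    """Probability of acquisition per Eq. 4: Gaussian integral up to (n - n50)/sigma."""
    return 0.5 * (1.0 + math.erf((n - n50) / (sigma * math.sqrt(2.0))))

# Example: when the achieved number of resolvable cycles equals n50, P = 0.5 by construction.
print(p_acquire(n=4.0, n50=4.0, sigma=1.5))   # 0.5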

Figure  2  shows  the  effect  that  the  observer  azimuth  angle  (defined  wrt  North) 
can  have  on  contrast  transmission.  Contrast  transmission  is  shown  for  five  CL 
values.  The  scenario  is  for  early  morning  and  the  zenith  angle  of  the  observer 
is  10  degrees.  Notice  that  low  contrast  transmission  occurs  when  the  observer 
is looking into the sun (0°) and high contrast transmission occurs with the sun
to the back (180°) of the observer. Further, note that the curve flattens out
as  the  CL  increases. 


3.  DEFINITION  OF  THE  PROBLEM 

In  a  typical  obscuration  scenario,  the  problem  is  to  compute  the  total  radiance, 
both  direct  and  diffuse,  reaching  an  observer  and  emanating  from  the  direction 
of the target (or background). The direct radiance includes light either
emitted  or  reflected  by  the  target  (or  background)  then  transmitted  (with  some 
loss  due  to  extinction)  along  the  LOS  to  the  observer.  The  diffuse  radiance  is 
the  path  radiance  emitted  and  scattered  by  suspended  material  (obscurants)  at 
all  points  along  the  LOS  then  transmitted  (again  with  some  loss  due  to 
extinction)  a  remaining  distance  to  the  observer.  The  scenario,  including  the 
large-area  screen,  is  assumed  to  be  irradiated  from  above  by  diffuse  sky 
radiation  and  from  below  by  diffuse  surface  radiation  (see  Figure  3).  For 
daytime  scenarios,  the  direct  solar  beam  is  included  in  the  sky  component.  For 
nighttime,  the  direct  source  may  be  the  Moon,  and  starlight  may  be  included  in 
the  diffuse  component. 

The  radiance  incident  at  the  observer  propagating  from  the  direction  of  the 
target  is  the  combination  of  the  direct  and  diffuse  component  or  more  formally: 


I(\tau;\mu,\phi) = I(0;\mu,\phi)\, e^{-\tau/\mu} + \frac{1}{\mu}\int_0^{\tau} J(\tau';\mu,\phi)\, e^{-(\tau-\tau')/\mu}\, d\tau'        (5)

where μ = |cos θ| and θ is the zenith angle of the observer with respect to the target.
The geometry is shown in Figure 4. The first term on the right-hand side is the
familiar Beer's law and represents radiance transmitted directly from the target
to the observer (the direct component). The second term on the right-hand side
represents the diffuse component. It represents contributions due to scattering
of ambient radiation into the path of propagation at all points along the path.
This equation has been extensively studied, but the major difficulty is the
optical source function, J(τ'; μ, φ), which is itself a function of incoming
radiance from all directions, so that the formal solution is really quite
complex.

The  LASS  model  makes  several  simplifying  assumptions  that  allow  rigorous 
solutions  of  Equation  5,  including  all  orders  of  multiple  scattering.  A  major 
simplification is the plane-parallel approximation, where the optical depth for
a slant path at angle θ is equal to the vertical optical depth divided by the
cosine of the angle.
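As a worked instance of this plane-parallel scaling (the subscripted symbols below are introduced here only for illustration and are not the paper's notation):

\tau_{\mathrm{slant}}(\theta) = \frac{\tau_{\mathrm{vert}}}{|\cos\theta|}, \qquad \text{e.g., } \theta = 60^\circ \;\Rightarrow\; \tau_{\mathrm{slant}} = 2\,\tau_{\mathrm{vert}}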


393 




Figure  1  Plot  of  detection 
probability  as  a  function  of  optical 
depth  for  various  solar  azimuth 
angles. 



Figure 2  Plot of contrast transmission
vs. observer azimuth angle.



Figure  3  Typical  LASS  scenario. 


The optical source function is dependent upon the phase function p(μ,φ; μ₀,φ₀),
which mathematically describes the angular scattering properties of the
obscurant. For inventory smokes, the phase function is best approximated with
the Henyey-Greenstein form,

p(\psi) = \frac{1 - g^2}{\left(1 + g^2 - 2g\cos\psi\right)^{3/2}}            (6)


394 


where ψ is the scattering angle and g is the asymmetry parameter that
determines the overall shape of the scattering phase function and can vary from
-1 for strong backscatter, to zero for isotropic scattering, and on to near +1
for strong forward scattering. The use of the Henyey-Greenstein form presumes
a spherical aerosol, which is reasonable for many obscurant types, especially
fog oil. Plots of the Henyey-Greenstein phase function for various values of
the asymmetry parameter are shown in Figure 5.
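The behaviour plotted in Figure 5 is easy to reproduce numerically. The sketch below evaluates Eq. 6 for a few asymmetry parameter values; it assumes the common normalization in which the phase function averages to one over solid angle (the paper's normalization constant is not legible here), and the function name is illustrative.

import numpy as np

def henyey_greenstein(cos_psi, g):
    """Henyey-Greenstein phase function (Eq. 6) for scattering angle psi and
    asymmetry parameter g (-1 = backscatter, 0 = isotropic, +1 = forward)."""
    return (1.0 - g**2) / (1.0 + g**2 - 2.0*g*cos_psi)**1.5

# Evaluate at 30-degree steps for a few g values, as in Figure 5.
angles = np.radians(np.arange(0, 181, 30))
for g in (0.0, 0.5, 0.75):
    print(g, np.round(henyey_greenstein(np.cos(angles), g), 3))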


Figure 4  Geometry of the path of propagation.

Figure 5  Plot of the Henyey-Greenstein phase function for various values of
the asymmetry parameter.


4.  REFLECTION  AND  TRANSMISSION  FUNCTIONS 


The  major  computational  problem  in  modeling  contrast  transmission  is  the 
determination  of  the  diffuse  transmission  and  reflection  functions.  These 
functions  account  for  effects  of  multiple  scattering  within  the  obscurant  cloud 
which  gives  rise  to  the  DIFFUSE  component  of  the  radiation  field  and  should  not 
be  confused  with  the  more  familiar  DIRECT  component  which  is  treated  with  the 
simple  Beer's  Law.  In  general  the  diffuse  component  is  difficult  to  calculate, 
even  under  the  simplifying  assumption  of  a  plane  parallel  atmosphere.  In  COMBIC- 
RT  we  use  a  combination  of  precomputed  look  up  tables  based  upon  rigorous 
solutions  (Sutherland  &  Fowler,  1986)  and  special  scaling  algorithms  to 
approximate the full angular-dependent transmission and reflection functions
accounting  for  both  absorption  and  scattering  (Sutherland,  1988). 

In Figure 6 we give an example for a Henyey-Greenstein asymmetry parameter of
0.750, which is typical of conventional visible band obscurants such as fog oil.
The reflection operator, denoted in general as R(τ; μ,φ; μ₀,φ₀), represents that
fraction of an incident plane parallel beam that is diffusely "reflected" into
the direction denoted by polar angles θ (μ = |cos θ|) and φ, where the incident
beam is from the direction denoted by (μ₀ = |cos θ₀|) and φ₀. The transmission
operator is defined in the same way except that it accounts for diffuse
"transmission".


395 


Figure 6  Plots of diffuse reflection and transmission as a function of
optical depth for several azimuth angles. Solar beam zenith direction is
μ₀ = 0.50 and viewing angle is μ = 0.10.


For example, the plots on the left in Fig. 6 show the value of the reflection
operator as a function of optical depth assuming a solar incident beam direction
μ₀ = 0.50 (θ₀ = 60°) and φ₀ = 0°. This example corresponds to the case of an airborne
observer looking downward and solar radiation "reflected" upward from the
obscurant cloud. Note that, in gen