Is it time to ditch spreadsheets for engineering (education)?


I have been teaching an undergraduate engineering course this term, and have been very frustrated that the first tool students reach for to solve any quantitative problem is a spreadsheet (generally Excel, but I think my comments here apply to spreadsheets of all flavors).

I remember when I first used a spreadsheet (Lotus 1-2-3, anyone?).  It was an eye-opener that one could replace pencil, paper, and a calculator for simple arithmetic manipulations.  After all, it was motivated by business/accounting/financial users, for whom addition, subtraction, and perhaps various interest rate calculations were all that were required.

Spreadsheets have evolved to the point where calculations of various levels of sophistication can be performed, and with the ability of some (e.g., via macros) to support programming, perhaps highly complex engineering calculations can be done in these environments.  However, just because they can be does not mean that they should be.

Maslow’s Law of the Instrument is quoted as “I suppose it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail”.  Its use is truly frequent amongst students.  Having learned that they can use spreadsheets for calculations (even if they have also learned how to program) often means that they regard spreadsheets as their preferred, and even only, tool for calculations.

I would argue that, given the prevalence of numerically sophisticated programming environments, whether your preference is Matlab, Python, Mathematica, or some other platform with high-level numerical and graphical capability (for a lot of data analysis, I personally use R), their use is to be preferred.  Here are some reasons:

  1. Without a lot of effort in constructing the spreadsheet (commenting, naming variables, etc.), it is very difficult to debug or ascertain where a calculation may have gone off the rails.  This is a particular problem when teaching.  A presentation by IBM in the context of statistical analysis notes that 88% of spreadsheets have at least one error.
  2. A corollary to the above is the difficulty in communicating the procedure used to achieve a result either to others, or to some future user — without extensive documentation.
  3. Autocorrect errors in spreadsheets have been documented for data, for example in the field of genetics.
  4. Algorithms are often not fully disclosed or referenced.  For example, below is the help screen for the Bessel Y function from Office 365.  In contrast, details of the algorithms used for this function are disclosed in Python and R, and to a lesser degree in Mathematica.  Unfortunately, details for this function’s implementation in Matlab are similarly unclear.
[Screenshot: Office 365 help screen for the Bessel Y function]



In contrast, use of a programming language encourages separation of data (e.g., as an input file) from the calculations used to process the data.  Algorithms are (generally) better documented.  The use of actual descriptive variable names (e.g., “volume” rather than “B2”) makes the logical flow of an analysis clearer.  Programming languages also facilitate (and good practice encourages) the use of comments to walk anyone reviewing the code through the logical flow.
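As a small illustration (the tank and flow names here are invented for the example, not taken from any real problem), the same calculation a spreadsheet would express as `=B2/B3` reads almost as documentation in code:

```python
# Hydraulic residence time of a mixing tank: theta = V / Q.
# Named quantities replace opaque cell references like B2 and B3.
tank_volume_m3 = 50.0        # tank volume, cubic meters
flow_rate_m3_per_hr = 12.5   # volumetric flow rate, cubic meters per hour

residence_time_hr = tank_volume_m3 / flow_rate_m3_per_hr
print(f"Hydraulic residence time: {residence_time_hr:.1f} h")
```

Anyone reviewing this later can see both the physical quantities and the logic at a glance, which is exactly what a grid of unlabeled cells obscures.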

Engineers still need to pay attention to units, and spreadsheets offer no help here.  Within programming environments, there are packages and add-ons that allow consideration and conversion of units, although some of these need further development.
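As a hedged sketch of what unit-aware calculation can look like (the conversion table and function here are invented for illustration; real packages such as pint for Python are far more complete):

```python
# Minimal unit-aware length arithmetic: every value must declare its
# unit before it can be combined, so mixed-unit mistakes fail loudly.
LENGTH_TO_M = {"m": 1.0, "ft": 0.3048, "in": 0.0254}

def to_meters(value: float, unit: str) -> float:
    """Convert a length to meters; unknown units raise a KeyError."""
    return value * LENGTH_TO_M[unit]

# Silently adding feet to meters is the classic spreadsheet trap.
pipe_run_m = to_meters(120, "ft") + to_meters(3.5, "m")
print(f"Total pipe run: {pipe_run_m:.2f} m")
```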

To some degree, I would advocate a “back to the future” approach.  In the days of pencil and paper, laying out the data and a stepwise delineation of computational steps (and assumptions) was an intrinsic skill in both the education of engineering students and in engineering practice.

The availability of spreadsheets, IMHO, has encouraged sloppy thinking in engineering calculations, and has hindered instruction in good practice.

Who is ready to teach an engineering class where spreadsheets for presentation and documentation of calculations are disallowed?  I am seriously thinking that I am.

Masking the Use of Masks

The principal messaging from health authorities in the COVID-19 outbreak to the general public has been that masks protect others from the wearer.  There are two important concepts in this simple message that need to be unpacked, and I want to put down my thoughts on them.  I believe the underlying miscommunication of concepts (and failure to update with new understandings, in some cases of old knowledge) has led to a great deal of the political debate about mask wearing (caveat — I am not a political scientist).

As a purely technical point, masks behave as filters.  Masks, especially those in use by the general public, filter symmetrically with respect to the direction of airflow.  The key implication is that particle removals are the same whether from the wearer to the surroundings (on talking, coughing, sneezing, or exhalation) or from the surroundings to the wearer (on inhalation).  The following table, excerpted from a recent paper on testing cloth mask materials with simulated gaps, shows that cloth masks of various types could achieve upwards of 54% removal of small particles and 44% of large particles.  Noteworthy is that, with or without simulated gaps, most fabric masks performed as well as or better than surgical masks.


Table excerpted from Konda et al. (2020)

| Material | <300 nm, average ± error (%) | >300 nm, average ± error (%) |
|---|---|---|
| N95 (no gap) | 85 ± 15 | 99.9 ± 0.1 |
| N95 (with gap) | 34 ± 15 | 12 ± 3 |
| surgical mask (no gap) | 76 ± 22 | 99.6 ± 0.1 |
| surgical mask (with gap) | 50 ± 7 | 44 ± 3 |
| cotton quilt | 96 ± 2 | 96.1 ± 0.3 |
| flannel | 57 ± 8 | 44 ± 2 |
| cotton (600 TPI), 1 layer | 79 ± 23 | 98.4 ± 0.2 |
| cotton (600 TPI), 2 layers | 82 ± 19 | 99.5 ± 0.1 |
| natural silk, 1 layer | 54 ± 8 | 56 ± 2 |
| natural silk, 2 layers | 65 ± 10 | 65 ± 2 |


In communicating about mask use, there is frequent use of the word “protect”.  The first definition of this word is “to cover or shield from exposure, injury, damage, or destruction”.  To the lay public, this is construed as “eliminate”.  There are two problems with the use of this word in communicating about masks.  First, no risk can be eliminated, only reduced (perhaps to a level where the residual risk is regarded as acceptable).  Second, masks must be considered as part of a multiple-barrier strategy (possibly a subject of a future post).  It is becoming clear with COVID-19 that transmission may occur not only via large particles (which the medical community has historically called droplets), but also via small particles.

Total protection by any one intervention is not necessary.  In fact, concepts from industrial and occupational health recognize that the use of PPE such as masks is a last resort, after interventions such as engineering design, administrative controls, etc. are employed.  The combination of all such interventions is what yields the needed risk reduction.

Complete elimination by controls, while an admirable goal, is not necessary.  In the current pandemic, the key desire is to reduce the reproduction number of cases below 1, so that there are diminishing numbers of cases and the case incidence rate is sufficiently low to permit testing, contact tracing, and isolation as a final means of driving illness levels down.  Hence, for example, if the uncontrolled reproduction number (R0) is 4, then only an overall 75% reduction of risk is needed.
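The arithmetic behind that last point is simple enough to sketch (the R0 values below are illustrative, not estimates for any particular pathogen): the effective reproduction number is R0 times the fraction of transmission that survives the controls, so it falls below 1 once the overall reduction exceeds 1 − 1/R0.

```python
# Overall risk reduction needed so that R_eff = R0 * (1 - reduction) < 1.
def required_reduction(r0: float) -> float:
    return 1.0 - 1.0 / r0

for r0 in (2.0, 3.0, 4.0):
    print(f"R0 = {r0:.0f}: need > {required_reduction(r0):.0%} overall reduction")
```

For R0 = 4 this gives the 75% figure.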

From the tests on homemade masks, it is clear that most materials can provide risk reductions to levels that would be desired, especially in conjunction with other measures such as improvement of indoor ventilation, social distancing, and prudence in gathering of large groups.

If I were designing a risk communication message about masks, I would phrase it along the lines of “wearing a mask reduces risk to you and those you interact with”.  Hopefully such changes will start to come about.
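Because filtration works in both directions, masking on both sides of an interaction compounds.  A short sketch, using the round 50% small-particle figure for a gapped surgical mask purely as an illustration:

```python
# If source and receiver each wear a mask, the surviving particle
# fraction is the product of the two penetrations, so the combined
# removal is 1 - (1 - eta_source) * (1 - eta_receiver).
def combined_reduction(eta_source: float, eta_receiver: float) -> float:
    return 1.0 - (1.0 - eta_source) * (1.0 - eta_receiver)

print(f"Only source masked (50%): {combined_reduction(0.50, 0.0):.0%}")
print(f"Both masked (50% each):   {combined_reduction(0.50, 0.50):.0%}")
```

Two imperfect masks thus yield a 75% combined reduction, reinforcing the “you and those you interact with” framing.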


It’s the Dose Response, Stupid

During Bill Clinton’s first campaign, James Carville hung a sign in the campaign office: “It’s the Economy, Stupid”.  Amidst the discussions on COVID-19 and SARS-CoV-2, the concept of dose-response seems to have been lost.  Too many, especially those from medical backgrounds, continue to use terms such as “infectious dose” or “minimal infectious dose”.  Ever since my first paper on microbial dose response modeling in 1983 [1], this concept has been outmoded.  But it is timely for me to restate the evidence and its implications for the current pandemic.

More than 60 years ago, Meynell and Stocker [2] outlined two prevailing views for microbial infection:

“The relationship between inoculated bacteria might take one of two extreme forms. Either they could be assumed to be acting co-operatively, death being a consequence of their joint action, or they could be regarded as acting independently, more than one bacterium usually being needed because the probability of a given bacterium being lethal is less than unity…. The situation when the LD50 dose contains many organisms is analogous to that of a poor marksman firing at a bottle. Since his aim is poor, the bottle is unlikely to have been broken after a small number of shots has been fired but if he persists he will probably hit the bottle eventually. A local observer might be aware that the bottle was broken by the action of one bullet. On the other hand, a distant observer, informed only of the total number of shots fired before the bottle broke, would not be able to exclude the hypothesis that the breakage was due to the accumulated stresses produced by all the bullets fired.

We describe (1) the hypothesis of independent action, and (2) hypotheses of synergistic action.”

In their work, by experimentation, Meynell and Stocker provided strong evidence that the hypothesis of independent action governed bacterial infections.

In work over the last 37 years, I, my students, and colleagues around the world have analyzed numerous dose-response studies of bacteria, viruses, protozoa, and fungi, in both animal and human hosts, and found ALL to fit dose-response models that are consistent with the independent action dose-response model.  Many of these have been compiled in a wiki that started when the US EPA and Department of Homeland Security co-funded the Center for Advancing Microbial Risk Assessment (CAMRA), which Joan Rose and I co-led.  The wiki is still available.

In dose-response modeling, the dose frequently used is the population-average dose.  So, for example, at an average dose of 0.1, slightly less than 10% of individuals would experience an actual dose of a single organism, a much smaller fraction would experience a dose of more than one organism, and the vast bulk of individuals would experience a dose of zero organisms.
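This can be made concrete with the Poisson distribution itself (using the same illustrative average dose of 0.1):

```python
import math

# Poisson distribution of actual doses around a population-average dose.
def poisson_pmf(k: int, mean: float) -> float:
    return math.exp(-mean) * mean ** k / math.factorial(k)

mean_dose = 0.1
p0 = poisson_pmf(0, mean_dose)   # no organisms at all
p1 = poisson_pmf(1, mean_dose)   # exactly one organism
p_more = 1.0 - p0 - p1           # two or more organisms

print(f"P(0) = {p0:.4f}, P(1) = {p1:.4f}, P(>=2) = {p_more:.4f}")
```

About 90% of individuals receive no organisms, roughly 9% receive exactly one, and under 0.5% receive more than one.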

It is emphasized that a successful infection and disease results from the growth of progeny of successful exogenous organism(s) from the exposure.  There is a significant literature on describing the dynamics of this in vivo birth-death process, and a useful starting point is the work of Williams [3, 4, 5].

The two dose-response models that have been found to fit ALL such data are the exponential and the beta-Poisson.  The exponential is derived from a random (Poisson) distribution of organisms between individuals, a probability that a single organism survives in vivo to colonize and initiate infection, and binomial variation in survival among the multiple exogenous organisms within a host.  Furthermore, it is assumed that a single surviving organism initiating infection is sufficient.  The combination of these assumptions yields the following equation:

           p = 1 − exp(−k·d)                                    (1)

where p is the proportion of individuals experiencing the average dose d who are affected, and k is the survival probability of an individual organism in vivo. 

There is variability in the propensity of microorganisms to survive in a host, so it is reasonable to describe k by a probability distribution.  Furumoto and Mickey [6] were the first to do so, using a beta distribution for this variability.  The exact result is given by a confluent hypergeometric function, which can be approximated by:

       p = 1 − [1 + (d/N50)(2^(1/α) − 1)]^(−α)                 (2)

where N50 is the average dose eliciting a 50% response, and α is the dispersion parameter (as α approaches infinity, the beta-Poisson model approaches the exponential).

The figure below illustrates the behavior of both the exponential and beta-Poisson models (at different values of alpha).  There are several salient things to observe:

  • The beta-Poisson model is never steeper than the exponential
  • At low dose, the dose-response slope is linear (a straight line on a log-log plot)
  • There is no dose at which the response probability is zero



The last point is critical.  The phrase “minimal infectious dose” is often thrown around in common parlance, and even by the medical community.  The exponential and beta-Poisson relationships show that even at very low average doses, there is still a non-zero proportion of individuals who may become affected by the pathogen (ultimately since the Poisson distribution predicts that some proportion of people will become exposed to one or more organisms).  This often discomfits those making decisions, since it indicates the impossibility of assuring certitude of zero risk, but rather explicitly or implicitly some acceptance of a level of residual risk (e.g. after a cleanup) needs to occur.
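Both models are easy to evaluate numerically.  The sketch below uses illustrative parameter values (not fitted to any pathogen) and shows the low-dose behavior and the absence of any threshold dose:

```python
import math

def exponential(dose: float, k: float) -> float:
    """Exponential dose-response: p = 1 - exp(-k*d)."""
    return 1.0 - math.exp(-k * dose)

def beta_poisson(dose: float, n50: float, alpha: float) -> float:
    """Approximate beta-Poisson: p = 1 - [1 + (d/N50)(2**(1/alpha) - 1)]**(-alpha)."""
    return 1.0 - (1.0 + (dose / n50) * (2.0 ** (1.0 / alpha) - 1.0)) ** (-alpha)

# Illustrative parameters only.
k, n50, alpha = 0.05, 100.0, 0.3

for d in (0.01, 0.1, 1.0, 10.0):
    print(f"d = {d:6.2f}: exponential = {exponential(d, k):.2e}, "
          f"beta-Poisson = {beta_poisson(d, n50, alpha):.2e}")
# Every dose, however small, yields a non-zero response probability.
```

Note that, by construction, the beta-Poisson response at d = N50 is exactly 0.5, and both curves remain positive at arbitrarily small doses.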

There are a number of extensions and embellishments of the exponential and beta-Poisson models, which I have reviewed elsewhere [7].

So what does this mean with respect to SARS-CoV-2?  As of this writing, there do not appear to be any animal data sets suitable for dose-response modeling.  In all of our prior work, we have found that data on the inhaled dose of pathogens in a competent animal species are suitable, without inter-species correction factors, for direct use in human health risk assessment.

In my group, after the SARS outbreaks, we reviewed coronavirus data that would be suitable for dose-response modeling.  We were able to fit human CoV 229E data (primarily associated with the common cold) and mouse data for several coronaviruses to exponential relationships [8].  The plots of the best-fit models (in the original paper we provide fitting statistics and uncertainties) are shown below.  There is about a 40-fold difference between the two curves.  The mouse data seemed to provide plausible estimates based on the attack rates for the SARS-1 Amoy Gardens cluster [9].



This graph shows, for example, that if we want to keep the risk to a population below 0.0001, we would need to have an inhaled dose less than 0.004 based on the mouse data, and considerably less (about 0.0001) for the human data.  The tolerable risk level is a risk management decision.  Conventionally we would apply this to a daily exposure.  Given a dose criterion, a breathing rate, and hours of exposure, we could develop concentration limits.
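That back-calculation can be sketched by inverting the exponential model.  The k below is illustrative only, reverse-engineered from the round numbers in this paragraph rather than taken from the fitted parameters in [8], and the breathing rate and exposure time are likewise assumptions:

```python
import math

# Dose limit from a tolerable risk under the exponential model,
# then a concentration limit from breathing rate and exposure time.
def dose_limit(tolerable_risk: float, k: float) -> float:
    return -math.log(1.0 - tolerable_risk) / k

k_illustrative = 0.025            # per organism; illustrative only
d_max = dose_limit(1e-4, k_illustrative)

breathing_rate_m3_per_hr = 0.6    # assumed light-activity breathing rate
exposure_hr = 8.0                 # assumed daily exposure duration
c_max = d_max / (breathing_rate_m3_per_hr * exposure_hr)

print(f"Dose limit: {d_max:.4f} organisms per day; "
      f"concentration limit: {c_max:.2e} organisms/m^3")
```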

Undoubtedly, as we gain more information on SARS-CoV-2, we will be able to ascertain the positioning of the dose-response relationship more exactly.  However, there is no reason to believe that it would differ, in the functional form of the dose-response relationship, from all other pathogens that have been investigated.

Parenthetically, I note that in indoor air, microbial risk has frequently been estimated using the approach of Wells and Riley [10].  This is essentially exponential in form as well; however, it conflates the exposure assessment and dose-response assessment components of modern risk assessment, and therefore the use of separate dose-response relationships should be preferred.
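For comparison, the Wells-Riley expression reviewed in [10] can be sketched as below (all parameter values are illustrative); note how the quanta generation rate q bundles emission and infectivity into a single term, which is the conflation at issue:

```python
import math

# Wells-Riley: P = 1 - exp(-I*q*p*t/Q), with I infectors, q quanta/h,
# p breathing rate (m^3/h), t exposure time (h), Q ventilation (m^3/h).
def wells_riley(infectors, quanta_per_hr, breathing_m3_per_hr,
                hours, ventilation_m3_per_hr):
    exposure = (infectors * quanta_per_hr * breathing_m3_per_hr * hours
                / ventilation_m3_per_hr)
    return 1.0 - math.exp(-exposure)

# Illustrative values only: 1 infector, 10 quanta/h, 0.6 m^3/h breathing,
# 2 h exposure, 500 m^3/h ventilation.
p_infect = wells_riley(1, 10.0, 0.6, 2.0, 500.0)
print(f"Wells-Riley infection probability: {p_infect:.3f}")
```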

The bottom line in assessing risks from SARS-CoV-2, to paraphrase James Carville: remember, “It’s the dose-response, stupid”.



[1] Haas, C. N. “Estimation of Risk Due to Low Doses of Microorganisms: A Comparison of Alternative Methodologies.” American Journal of Epidemiology 118 (1983): 573–82.

[2] Meynell, G. G., and B. A. D. Stocker. “Some Hypotheses on the Aetiology of Fatal Infections in Partially Resistant Hosts and Their Application to Mice Challenged with Salmonella Paratyphi-B or Salmonella Typhimurium by Intraperitoneal Injection.” Journal of General Microbiology 16 (1957): 38–58.

[3] Trevor Williams, and G. G. Meynell. “Time-Dependence and Count-Dependence in Microbial Infection.” Nature 214 (1967): 473–75.

[4] Williams, T. “The Basic Birth-Death Model for Microbial Infections.” Journal of the Royal Statistical Society Part B 27 (1965): 338–60.

[5] Williams, Trevor. “The Distribution of Response Times in a Birth-Death Process.” Biometrika 52, no. 3/4 (December 1965): 581.

[6] Furumoto, W. A., and R. Mickey. “A Mathematical Model for the Infectivity-Dilution Curve of Tobacco Mosaic Virus: Theoretical Considerations.” Virology 32 (1967): 216.

[7] Haas, Charles N. “Microbial Dose Response Modeling: Past, Present, and Future.” Environmental Science & Technology 49 (February 3, 2015): 1245–59.

[8] Watanabe, Toru, Timothy A. Bartrand, Mark H. Weir, Tatsuo Omura, and Charles N. Haas. “Development of a Dose-Response Model for SARS Coronavirus.” Risk Analysis 30 (2010): 1129–38.

[9] Li, Y., S. Duan, I. T. Yu, and T. W. Wong. “Multi-Zone Modeling of Probable SARS Virus Transmission by Airflow between Flats in Block E, Amoy Gardens.” Indoor Air 15 (April 2005): 96–111.

[10] Sze To, G. N., and C. Y. Chao. “Review and Comparison between the Wells-Riley and Dose-Response Approaches to Risk Assessment of Infectious Respiratory Diseases.” Indoor Air 20 (February 2010): 2–16.

Bridges and Distillation Columns

Distillation columns are very common in chemical engineering. Bridges are very common in civil engineering.

In chemical engineering, the design of distillation columns has to a very substantial degree been incorporated into software platforms.  While students need to understand the key assumptions undergirding the calculations in these software platforms, it is no longer necessary to do hand calculations.  The opportunity for novel research in distillation is low, except as novel materials may require considerations above and beyond those embodied in software platforms.

In civil engineering, the design of bridges (except for the very small number of high-performance or “signature” bridges) has become routinized through embodiment in codes and standards and by the use of structural analysis platforms.  While students need to understand the key assumptions undergirding the calculations in these codes, standards, and software platforms, it is no longer necessary to do hand calculations.  The opportunity for novel research in bridge design is low, except as novel materials and highly unique situations may require considerations above and beyond those embodied in codes, standards, and software platforms.

With increasing automation of engineering design, it becomes essential that students be educated for the non-routine problems that will go beyond the capability of extant software, codes, and standards.  While they must know about the codes, standards, and software, this should not be the strong focus of their education.  More and more, the design and maintenance of the routine, while important (perhaps more so in civil engineering, where systems must last a long time), will become the purview of technicians and automation.  The role of engineers in these circumstances will be to remain alert to, characterize, and adapt their activity to the unusual.

PS – as an environmental engineer specializing in water, I would say much the same thing for activated sludge, rapid sand water filtration or primary sedimentation. Though there is important work in optimization and in the automated design of resilient systems yet to be done in these contexts.

A Culinary Fable for Higher Education


In the city of Aroga, there were many, many restaurants.  They served diverse cuisine, and a diverse clientele.  Each maintained its distinctive character, and had a loyal, overlapping, following.  The Maisonnette D’Aroga served from the authentic pages of L’Escoffier, and was THE place to go for those seeking the best or seeking to make the best impression.  Gert’s Grill served wicked BLT’s, waffles and stew – and two could walk out after a satisfying meal with still enough change from a $20 to get a couple of ice cream cones on the way down the street.  Sometimes when one wanted to put on finery, the exclusive French restaurant suited one’s taste.  Sometimes when one wanted a simple greasy spoon the day before payday, the local hash house was just the right thing.

The local newspaper decided to hire a restaurant critic – the first ever in Aroga.  The critic had superb credentials, having studied at the Massachusetts Exemplary Culinary Criticism Academy, and having interned at papers in the northeast and in the San Francisco Bay.  Surely he was, so believed the editor of the Aroga Inquirer, the person to bring excellence, rationality and value to the hectic restaurant scene. 

And so “Drew” Andrew Solunstorp went about the task of eating in every restaurant in Aroga, inquiring about the quality of the raw material that each restaurant used, the quality of its china (or lack thereof), its “ambience”, and its value.  And also the fidelity of the cuisine of each restaurant to the best in its class in the country.  Certain restaurants were first tier, certain were second tier, etc.; and the select were in the top 10 of the Solunstorp Report.

Many diners of Aroga were enthralled by the Solunstorp Restaurant Reports; these reviews appeared so well reasoned and articulate, and clearly someone trained at MECCA and familiar with the global restaurant scene had the wisdom that they lacked.  Why waste time and money at restaurants that did not rank high?  After all, who would want to be seen coming out of a third or even a second tier restaurant?  So the top 10 restaurants – where in the past one could make a noon reservation for an evening dinner – were now booked two weeks in advance, and in certain third tier restaurants one could roll a bowling ball down the aisle at 8pm.  It was even rumored that some of these unfortunates would be closed down and converted into parking lots or landfills.

As time went by, one found that it became more and more difficult to get hash and eggs, egg rolls or macrobiotic vegetarian cooking.  After all (according to Drew) such restaurants simply did not “cut the mustard” and the Arogaites did not want to patronize these “lower tier” enterprises.   So, the dining scene in Aroga became bimodal, with elegant continental cuisine and fast food franchises, but little in the middle.  It was hard to find a good submarine sandwich anymore, and the businessmen missed the little “hole in the wall” that served great fish sandwiches every Friday.

Even those that survived found that they could only exist if they changed to suit Solunstorp’s taste.  He did not care for meat unless it came from cattle free-ranged in North Dakota, or for anything but marigold-yellow chickens, or for carnations on tables (rather than tulips).  Catfish and sloppy joes were so déclassé; not to mention Gert’s scrapple or Hank’s Hashhouse red beans and rice.  So inevitably, menus started showcasing beef from North Dakota, yellow-fleshed chicken … and the funeral directors in Aroga were delighted to find that carnations could be obtained much more cheaply.  But the local butchers and florists and other vendors who did not fully stock items favored by Solunstorp also found themselves at the brink of failure.  The variety of offerings by the proprietors diminished to conform to Solunstorp’s idiosyncrasies.

But just as Gert and Hank were about to fold their tents and close up, they wondered: “Why do we have to tolerate this dictatorship of taste and excellence?”  Some customers weigh some attributes differently, and even have different preferences on different days.  Why not let the customers know the consequences that arise from single-minded adherence to a sole arbiter of excellence and taste?  Shouldn’t people know that there is virtue in maintaining this diversity, and realize that not every proprietor can be in the top 10 on every scale (indeed, there are some who can be “good”, and even highly valued by particular customers, without being at the top on any scale)?  So, scrimping their resources, they were able to inform the Arogaites of what would continue to happen by relying upon a single critic as the sole metric for their dining decisions.  And indeed, the Aroga Inquirer saw what was happening, and the editor (a bit of a gourmet herself) decided to hire a suite of additional restaurant critics (including some of the restaurateurs and even “ordinary” customers) to provide a richer perspective on the dining scene.

And now, 10 years later, although Gert and Hank have both passed on to the eternal griddle, their kitchens and dining rooms are more vibrant than ever – their sons, daughters, and cousins having decided to maintain their families’ establishments.  Yes, the Maisonnette continues.  But there is also a healthy heterogeneity of choices, from the mundane to the marvelous (and the marvelous includes restaurants for all tastes and prices).  The dining scene in Aroga truly flourishes again.

The academy has been participating in the disestablishment of our own diversity and strength by not simply tolerating but indeed cooperating with the activities of a few ratings and rankings compilers.  Many institutions have consciously and overtly tailored their development, hiring, promotion, tenure and fundraising decisions not to a collegially reached set of objectives and priorities, but to a perception of what they could do to raise their position in these ratings and rankings.

In the opinion of this writer, the net effect is unhealthy for the national academic enterprise.  If all the flowers in a field were tulips, the vista would be boring; if the only meat in a supermarket were prime filet, the menu would be limited and costly; if all of our students were brown-eyed males from two-parent families where both parents were attorneys or MBAs, it would be a pretty dull classroom; if everyone only bought stocks that had gotten “A” ratings from a single broker, innovation and risk-taking would be stifled.

Is it not leading to a pretty limited academic enterprise with many universities striving to achieve against the same metric?  I certainly think so.

Presidents, provosts, deans, department heads and faculty must resist the loss of academic diversity to which this lemming-like rush will lead. We have a responsibility to educate our constituents (students, parents, trustees, donors, legislators and alumni) that evaluating the “quality” of a college or university is a much more multidimensional, multiobjective and complex task than evaluating the ranking of a basketball team.  If we believe that universities and academic programs should set their own local objectives and measure their achievements with respect to these local objectives (as is indeed the perspective of many accrediting organizations) then how can we possibly tolerate, support and sustain a single (or even a small number of) ranking(s) that purport(s) to measure the quality of universities across the United States?  Yes, we need standards and guidelines and accreditation to assure competent performance and honest management, as much as local restaurants need health departments and police forces to assure minimal levels of sanitation and absence of fraudulent behavior.  But we do not need one (or a small number) of external raters to be the metric by which we all are measured.


Who is ready to be Gert and Hank?

An Open Reply to “Crossing the Imaginary Line” – Initial Thoughts

My professional friend David Sedlak has recently published an editorial on “Crossing the Imaginary Line” in Environmental Science & Technology – a highly reputed journal of which he is editor-in-chief.  My interpretation of the gist of Professor Sedlak’s argument is that when environmental engineering & science researchers, through their scholarship, uncover significant information that merits public attention, they should work through governmental bodies and non-governmental entities so that these latter organizations can take action to effect change.  Doing otherwise, such as going directly to the media, according to Sedlak is risky because “an idealistic researcher might just step over the imaginary line that separates the dispassionate researcher from the environmental activist.”  This editorial is provoking discussion in the environmental engineering community, including amongst students, as reflected in this student blog.

I would not encourage junior faculty to engage in direct advocacy to the media before establishing a strong record in traditional scholarship, teaching and outreach. However once established, I do not share Professor Sedlak’s view that going to the media is beyond an imaginary line.

Certainly it would be preferable for researchers to use conventional government agencies and non-governmental organizations as “force multipliers” to effect change.  However, there can be circumstances where such routes are either non-existent or clogged with inertia or active hostility to action based on well-founded data and analyses.  More and more, this appears to have been the case in Flint, Michigan.

Many of us came into this profession (myself included) because we saw it as a way to have a rewarding career while benefiting people and the environment.  There are great examples of environmental engineering and science researchers taking their knowledge from the ivory tower into the public sphere.

Clearly, as academics we (are at least perceived by some to) have a privileged role in society.  According to Vesilind, ethical systems derive from moral principles.  The three key moral frameworks involved in engineering, which are combined in what we do, are duty-based (deontological), utilitarian, and virtue-based.  Deontological principles, deriving from Kant, are essentially statements of the golden rule.  Utilitarian principles (the greatest good for the greatest number) underlie much of engineering decision making; however, we recognize that they must be constrained by deontological principles.  Virtue concepts refer to the traits inherent in persons.

A key source for engineering ethical concepts is the American Society of Civil Engineers, particularly Canon 1, which states:

“Engineers shall hold paramount [emphasis added] the safety, health and welfare of the public and shall strive to comply with the principles of sustainable development in the performance of their professional duties.”

This canon, which should hold equally for the academic as for the practitioner, tells us that our first duty is to the public.

While some may be focused on developing scholarship in the realm of fundamental research, others in our field are interested in advancing and applying knowledge that maintains and improves public health and the environment. In an ideal world, university research would be immediately used by responsible government entities to effect change.  However all who have been in the field for some time can cite examples where such avenues have been imperfect. We should not shy from the necessity of applying the principle of Canon 1 when it becomes necessary.

As human beings if we witness a mugging on the street, we would perhaps first seek to call the police.  However if they don’t respond in time, we would be morally justified in intervening to stop the crime and perhaps detain the perpetrator.  

When environmental researchers have data to ascertain the likely presence of environmental damage, they should perhaps first seek to involve competent authorities or advocacy organizations.  But it could be perceived as in accordance with the duties inherent in Canon 1 if, when they find such authorities or organizations to be absent or perhaps even ineffectual, they make their findings known to the public directly.  This should not, in my opinion, be regarded as crossing an imaginary line.

Clearly, going directly to the public may help counter the environmental damage, but it may carry personal risks for those who take this route.  These risks should not include the opprobrium of their professional communities when the message is based on sound factual information and reasoning.  We do not do either our profession or the environment justice by saying that public messaging must wait for community consensus.  There is equal room in the big tent of environmental engineering and science researchers for those who wish to focus on fundamental issues, and for those who are interested in using the results of their knowledge advances to effect improvement to the environment and human health — and NEITHER should be denigrated.


We Need a Safe Breathing #Water Act – #Legionella #aerosols #IAQ

Forty years ago this month, more than 200 cases of Legionnaires' disease, resulting in 29 deaths, occurred at a hotel hosting an American Legion conference in Philadelphia — giving the disease its name and the American public its first media-amplified look at an outbreak.

Four decades later we’re still being exposed to Legionella bacteria — the rate of reported occurrences has quadrupled since 2000 according to a recent CDC report — but we’ve done little to stifle its primary vector: water in the air.

After months of investigation through the summer and into the fall of 1976, officials traced the Philadelphia outbreak to contaminated water in the hotel's cooling towers, which exposed guests to the bacteria via the air conditioning system.

Some things never change.

According to the CDC report, issued in May, most cases in the last 15 years were attributed to exposure to Legionella-contaminated potable water, frequently in aerosol form. This sort of exposure can occur from air conditioners, showers, decorative fountains, humidifiers and other places where running or falling water creates a spray that can be inhaled.

Legionella is not the only infectious agent that can multiply in water systems and cause outbreaks when water is aerosolized and inhaled. Outbreaks of non-tuberculosis Mycobacterium have also been traced to water in aerosol form. 

It’s time for a Safe Breathing Water Act.

While the Safe Drinking Water Act of 1974 and its subsequent amendments have significantly reduced the public's exposure to ingested infectious agents, such as viruses and harmful bacteria, and to chemicals, the crisis in Flint, Michigan has shown us that gaps in public health protection remain.

What was not well understood at the time of the Safe Drinking Water Act is that the pipes through which water is conveyed may serve as incubators for some bacteria, a number of which can cause illness if aerosols containing these bacteria are inhaled. 

Today bacterial amplification and exposure processes are better understood. We have also identified practices that can minimize the chances of bacterial occurrence, such as maintaining appropriate disinfectant concentrations, keeping levels of nutrients (including those that can be released via corrosion) low, and reducing leaks.

It is time to consider amending the Safe Drinking Water Act to include “safe breathing water” provisions, which would incorporate our best knowledge and practice to reduce the public's risk of inhaling Legionella, Mycobacteria and other respiratory pathogens that can be amplified in water systems and transmitted in aerosol form.

As decades of public health engineering practice have shown, prevention is more effective when implemented closer to the source of the problem. So a Safe Breathing Water Act would include closer control of distribution systems and building piping, as well as restrictions on how systems with the potential to generate large volumes of aerosol are managed.

It would also require licensing those who are responsible for maintaining water quality in large buildings. And buildings, with licensed operators, could be allowed to engage in local treatment without being considered public water systems. The act would set water quality contaminant limits that can be monitored and enforced at end-user taps and intakes of aerosol-generating equipment so as to protect not just people who drink water but also those who unwittingly breathe it as an aerosol.

This would be a suitable recognition of the lessons we’ve learned in the course of 40 years since the mysterious outbreak in Philadelphia made us reconsider all the ways we are exposed to water. 

#Denver Union Station and #LoDo – a model?

I had a wonderful two-day visit this past week to Denver, where I stayed a block and a half from Union Station.  I have not been in Denver since the light rail to the airport opened this spring.  The change in the “feel” of the LoDo (Lower Downtown) area is fantastic.  I was particularly impressed with the Union Station redevelopment itself. There are several important features that development of other urban train stations could take note of (are you listening, Philadelphia 30th Street Station and NYC Penn Station?).  As an untrained observer of the urban environment, the following in particular stand out to me:

  1. There is a diversion of heavy automobile through traffic away from the area, in favor of pedestrian and bike access
  2. Seamless integration of rail, light rail and buses.  Also integration with a free 16th Street shuttle (think if Philadelphia had a free Market St shuttle from 30th Street to City Hall or even the Delaware River)
  3. Both in the station and surrounds, there are many local eateries, coffee shops, etc (not a single national franchise in the station!).  There is a hotel in the station as well as several within a 2 block radius.  This project has clearly been catalytic for development in the 5-10 block radius (the LoDo neighborhood).

It was wonderful to be able to walk from the airport baggage claim to the Denver RTD station at the airport to take a comfortable train ride – 37 minutes or so, with 15 minute headways, to Union Station, then walk 2 blocks to my hotel – without having to navigate a single step or even a curb.  This is intelligent multimodal planning. It is still not without glitches; one of my colleagues at the meeting had a two-hour delay on his train due to a power failure.  So my advice is to plan ahead when heading to the airport.  But I had a smooth ride both ways.

Other cities should think about this as a model, although Denver has fewer short and long-haul Amtrak trains than Philadelphia or New York.







Outside of Union Station at night.

Interior of the old train hall (the old ticket windows are now a bar).

A (Baby) Step Towards One Water

As even high school students know these days, the concept of the hydrologic cycle underlies all of what we do as environmental engineering practitioners and educators.  There are several key engineered systems in the urban water cycle:

  • Water supply storage & conveyance
  • Water treatment plant
  • Finished water storage & distribution 
  • Sewer and stormwater collection system
  • Wastewater (and stormwater) treatment plants
  • Effluent discharge structure


Historically in the US, in most places, different agencies sprang up to manage the “water” and the “wastewater/stormwater” sides of this cycle.  It is obvious, however, that everything is connected to everything else, per Barry Commoner's First Law of Ecology. A few cities have progressively realized that “water is water” and developed a single agency to manage both sides of the urban cycle.  I am glad to live in one such place, where Philadelphia Water is a unified agency handling drinking water, wastewater, and stormwater.

At the professional level in the US, we have had multiple different organizations work in different subsets of the engineered water cycle.  The American Water Works Association (AWWA) historically has worked in the water supply, treatment and distribution sectors.  The Water Environment Federation (WEF) has worked on the sewerage collection, wastewater treatment and disposal sectors.  More recently with the growth of planned wastewater reuse (including for drinking water supply), the Water Reuse Association (WRA) has worked in this sector.

Internationally, there is a more rational picture.  In the early 2000s, realizing that “water is water”, the International Water Association (IWA) was formed from predecessor organizations that separately served the wastewater and the water supply & treatment sectors.

Each of the US organizations begat a parallel foundation to conduct research programs in its areas of interest: the Water Research Foundation (formerly the American Water Works Research Foundation), the Water Environment Research Foundation, and the Water Reuse Research Foundation.  Earlier this month, in a baby step towards recognizing “one water”, the latter two foundations merged to form the Water Environment & Reuse Foundation, cleverly maintaining the acronym WERF. They are to be congratulated for this, and should be inspired to go many steps further.

In reality it is high time for the organizations and foundations to take the big step.  As someone who works in the areas of disinfection and microbial risk assessment, it has long been obvious to me that there is no big qualitative difference between “dirty” water and “clean” water (some in the industry like to use the terms “clean” water and “cleaner” water).  We really need one single US association and one single US foundation.  It is time for the US Water Association and the US Water Research Foundation!  That would really align the structure of the profession with the structure of what we work on.  

Of course there also needs to be unification of the federal legislative structure governing the overall sector – and I may devote a later piece to this.

#Zika Virus From a #RiskAssessment Point of View


Zika virus appears to have a transmission cycle of infected human host –> mosquito (via blood meal) –> susceptible host (in the course of a second blood meal). To understand transmission via this route, the following need to be known:

  1. What are the levels in blood of an infected individual?
  2. What is the volume of blood ingested in feeding by a mosquito?
  3. What is the volume of disgorgement of blood by a mosquito upon a second blood meal?
  4. What is the die-off of Zika virus within a mosquito between blood meals?
  5. What is the dose-response in the human host for infection by Zika virus?

Questions (2) and (3) should be identifiable by a literature review and would not be expected to be a function of the pathogen (Zika). Question (1) may be obtainable from a review of case reports and deliberate trials in the literature, as well as from the ongoing primate trials at the University of Wisconsin, which are being done in an open science manner.

It is not anticipated that data on question (4) are available per se; however, inferences may be drawn from the persistence of other Flaviviridae under conditions analogous to carriage in the mosquito. A preliminary scan of the literature suggests prior data that could be useful in developing a dose-response relationship per question (5) for Zika. We have developed dose-response relationships for many other organisms, including several vector-borne pathogens.

The assembly of this information, when embedded in a population transmission model, can be useful for projecting consequences and estimating the effectiveness of public health interventions. To my knowledge, no such risk assessment has been undertaken.

Unfortunately, serious risk analysis currently seems to be minimized as a response tool.  Decision makers and funders need to be educated.