Table of Contents

  1. Introduction and Summary
  2. Motivating Issues
  3. Conceptual Model
  4. Assessment/Measurement Endpoints
  5. Indicators
  6. Monitoring Design
  7. The Design's Statistical Basis
  8. Regional Coordination
  9. Further Basic Research
  10. Detailed Methods
  11. Database Development

Introduction and Summary

This chapter outlines the major technical features of the bacteriology monitoring element of the Santa Monica Bay regional monitoring program. As with the other chapters in this report, it provides the context and justification for specific design decisions about what parameters to measure and where and when to measure them. The following summary highlights the major features of this aspect of the regional monitoring program.

Available monitoring and research data strongly suggest that discharge plumes from the two large municipal waste discharges (Hyperion and Whites Point) do not reach the beaches in the Bay. Risks to human health from swimming in the Bay thus stem primarily from the input of pathogens from nonpoint sources such as rivers, stormdrains, and other runoff. Since these pathogens cannot be measured directly and easily with existing technology, monitoring and management focus on a suite of indicator organisms that are presumed to correspond roughly with health risk. Levels of these indicators can change quickly from day to day depending on the amount and quality of runoff; for the same reasons, there are also large differences among locations along the shoreline.

The principal mechanism for managing health risks due to swimming is the issuance of warnings and beach closures by the LA County Department of Health Services. These actions are based on the comparison of monitoring data to water quality standards and on the professional interpretation of overall patterns of contamination. Because indicator levels exhibit high variability and change quickly in response to the nature of runoff, this management activity must occur in near real time, using monitoring data that are as current as possible.

The monitoring program is designed to respond to these management needs as efficiently as possible. It does this by:

  • concentrating routine daily sampling at piers, stormdrains, and other likely sources of contamination;
  • sampling heavily used beaches at an adequate interval, with a flexible rapid response capability to increase sampling where and when it is needed; and
  • locating inshore stations at the sites with the heaviest use and the highest potential for contamination from nonpoint sources.

In addition, the program identifies specific opportunities for further standardizing methods among the four agencies monitoring bacterial contamination in the Bay.

Motivating Issues

The primary motivation for bacteriological monitoring is the question, "How safe is it to swim in the Bay?" Regulators and managers can best address this question by meeting the following objective, adapted from the SMBRP's Comprehensive Framework for regional monitoring:

Ensure that valid public health standards are met, that illegal discharges are eliminated, and that information on swimming conditions is rapidly communicated to regulators and the public. Develop data that clarifies sources and amounts of pathogen inputs, along with a suite of effective indicators that can be sampled as needed at shoreline and nearshore stations throughout the Bay.

Monitoring is intended to produce information for three distinct purposes related to this objective (see Table 1). The first and primary purpose is to furnish regulators and managers with an effective tool for determining compliance with regulations and for undertaking actions to protect public health (beach closures and warnings). The second is to measure bacterial contamination in a way that can be used to assess relative safety and how this might be changing over time. The third is to evaluate the effectiveness of restoration actions taken to reduce pathogen inputs to the Bay.

Each purpose requires monitoring information with particular characteristics (see Table 1). We use these characteristics to help judge the utility of the monitoring design and measurement indicators and, where these fall short, to suggest where changes and/or further research might be needed. As explained further below, it is not always possible for a single monitoring program to simultaneously fulfill the needs of several different purposes. Some tradeoffs must therefore be made in order to adequately meet the needs of the primary purpose - managing compliance and public health.

Table 1.
Three primary management needs and the key characteristics of monitoring information needed to meet these needs.

  Management Need / Characteristics of Useful Information

  1. Manage compliance and health
    • Timely data (daily in many instances)
    • Data easily interpreted
    • Data clearly related to decision criteria
    • Indicators reflect health risk
    • Sites reflect human use
    • Sites are near sources of contamination

  2. Assess relative safety
    • Indicators reflect health risk
    • Data suitable for trend analysis
    • Program gives regional coverage
    • Sites reflect human use
    • Sites are near sources of contamination
    • Indicators reflect pathogen inputs
    • Reference data available

  3. Evaluate restoration actions
    • Indicators reflect pathogen inputs
    • Sites are near sources of contamination
    • Data suitable for trend and before/after analyses
    • Reference data available

Conceptual Model

An understanding of where contaminants come from, how they work their way through the ecosystem, and how they produce risks to humans is a fundamental basis for monitoring design. Such "conceptual models" help focus monitoring on key processes or parameters and on specific kinds of information most useful for decision making. They also identify critical assumptions and uncertainties that set limits on how monitoring data can be interpreted.

Figure 1 shows the interrelated processes that contribute to potential human health impacts from swimming in the Bay. Health impacts occur relatively quickly after exposure (hours to days), and levels of pathogens vary markedly from day to day, although they are generally higher in the wet season than in the dry season. Potential health impacts are diffuse and include illnesses such as eye, ear, and wound infections, skin rashes, and gastroenteritis. A primary assumption of this conceptual model is that pathogens, and therefore health risk, die off relatively rapidly upon exposure to sunlight and seawater. Management actions therefore focus on closing and opening beaches in response to current monitoring data.

There are several other key judgments and assumptions that help structure the conceptual model. The first is that nonpoint sources (storm drains -- although these are regulated as point sources -- other runoff, rivers, and sewage spills) account for the majority of pathogen inputs to the shoreline and the inshore zone. These inputs primarily reflect rainfall, although occasional events such as hydrant breaks, upstream spills, and lagoon breaches can create localized and short-lived contamination. The second is that pathogen contamination from the major sewage outfalls in the Bay (Hyperion and Whites Point) does not reach the shoreline or the inshore zone in any appreciable amount. These judgments are based on 35 years of historical bacteriological data collected between Point Dume and Malaga Cove, which show that indicator counts have not been elevated as a result of effluent from the Hyperion outfall. In addition, plume tracking studies (Appendix 1) at several locations in California show that bacterial indicators can be detected in subsurface wastewater plumes for some distance from the discharge. However, these studies also show that bacterial indicators die quickly when exposed to sunlight at the surface. Thus, it is unlikely that bacteria would survive long enough to reach the shoreline even in winter, when the lack of stratification allows the plume to surface. Finally, while low levels of indicator bacteria have been measured at depth off Whites Point as a result of effluent discharged from that outfall, process changes controlled this problem several years ago.

These judgments and assumptions require that the measured bacterial indicators, which are not themselves pathogens, accurately reflect risks to human health. Thus, a major assumption implicit in the current management and monitoring system is that non-detectable or low levels of bacterial indicators necessarily imply low levels of other pathogens that are not monitored (e.g., viruses). We now know that this is not necessarily true (see Indicators below) and research efforts will be directed at resolving this issue (see Further Research below). Since this research, especially the large-scale epidemiological study, is expensive, labor intensive, and highly specialized, funding constraints have hampered updating these aspects of the conceptual model.

Together, these judgments and assumptions form the basis for the choice of indicators and for decisions about the monitoring design itself. These issues are discussed further in following sections.

Assessment/Measurement Endpoints

The illnesses of concern are not only diffuse but also result from many other causes not related to contamination of the Bay. This makes it extremely difficult to measure the actual endpoint of human health impacts. Instead, regulatory and management attention has focused on more measurable endpoints associated with a suite of indicators. For example, in 1993, there were ten beach closures, primarily during the rainy season.

Managing Compliance and Health
Clear assessment or measurement endpoints exist for only the first of the three management needs shown in Table 1. Compliance- and health management-related actions are triggered when counts at sampling stations reach specific levels. According to the California Ocean Plan, the monthly median for total coliforms shall not exceed 1,000 cfu/100 mL, and no more than 20% of samples from a single station may exceed 1,000 cfu/100 mL in any 30-day period. Also, no single sample may exceed 10,000 cfu/100 mL when verified by a repeat sample taken within 48 hours. The recommended U.S. EPA standard for fecal coliforms is somewhat different. Based on no fewer than five samples from any single station in a 30-day period, the geometric mean of fecal coliform densities shall not exceed 200 cfu/100 mL, nor shall more than 10% of the total samples during any 60-day period exceed 400 cfu/100 mL. As yet, no formal standard exists for enterococcus, although EPA has recommended one and enterococcus levels are regularly monitored and reported.
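The numeric criteria above lend themselves to simple automated screening. The following sketch, with hypothetical function names, encodes the total coliform and fecal coliform checks as described in this paragraph; it is an illustration of the decision rules, not an implementation used by any of the agencies.

```python
from statistics import median
from math import exp, log

def total_coliform_violation(counts_30d, repeat_verified_high=False):
    """Screen one station's 30-day record (cfu/100 mL) against the
    California Ocean Plan total coliform criteria described above.
    Hypothetical helper for illustration only."""
    monthly_median_high = median(counts_30d) > 1000
    over_20pct = sum(c > 1000 for c in counts_30d) / len(counts_30d) > 0.20
    # A single sample above 10,000 counts only when a repeat sample
    # taken within 48 hours verifies the exceedance.
    single_sample_high = max(counts_30d) > 10000 and repeat_verified_high
    return monthly_median_high or over_20pct or single_sample_high

def fecal_coliform_violation(counts_30d, counts_60d):
    """Screen against the EPA-recommended fecal coliform criteria:
    geometric mean of at least five samples in 30 days <= 200, and no
    more than 10% of samples in 60 days above 400."""
    if len(counts_30d) < 5:
        return None  # not enough samples to apply the standard
    geo_mean = exp(sum(log(c) for c in counts_30d) / len(counts_30d))
    over_10pct = sum(c > 400 for c in counts_60d) / len(counts_60d) > 0.10
    return geo_mean > 200 or over_10pct
```

Note that the geometric mean is computed as the exponential of the mean log count, which is the form appropriate for the roughly lognormal distribution of indicator data.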

Any exceedances of these limits are reported to the Regional Water Quality Control Board by the two POTW dischargers (the City of Los Angeles and the County Sanitation Districts of Los Angeles) in monthly NPDES permit reports. More importantly, staff at discharge agencies review test results daily or weekly, depending on the type of sampling. They communicate these results, also on a daily or weekly basis, to the Los Angeles County Department of Health Services (DHS). When elevated counts are found, agency staff talk directly to DHS by phone to review and interpret the data and plan further actions if needed. This real-time review process enables responsible staff at discharge agencies to anticipate problems by allocating additional sampling effort to potential problem areas. When necessary, DHS can notify the public and close contaminated beaches. DHS compiles data from the discharger sampling programs, as well as its own shoreline sampling program, and submits a monthly report to the Los Angeles County Board of Supervisors.

Assessing Relative Safety
With regard to the second need in Table 1, it is not currently possible to assess absolute levels of risk and safety because of shortcomings in the available indicators (see Indicators below). In spite of this, changes in the levels of these indicators can be used to qualitatively assess trends in relative health risk, particularly risk stemming from sewage contamination; risk is assumed to rise or fall in tandem with indicator levels. For example, Heal the Bay distributes a monthly "report card" on the apparent health status of beaches in the Bay. In broad terms this assumption is probably sound: the drop of several orders of magnitude in indicator concentrations from the 1950's to the present undoubtedly reflects a reduction in human health risk. However, the lack of any formal link between indicator levels and risk makes it impossible for now to set endpoints in terms of specific, quantitative levels of human health risk.

Evaluating Restoration Actions
Neither have endpoints been established for the third management need, evaluating the success of Bay Restoration Plan actions to reduce contamination. These actions are still in the planning stage, and it would be premature to establish measurement endpoints at this time. However, sampling stations have been sited with an eye to measuring the effectiveness of these actions when they occur (see Monitoring Design below). For example, trends over time at specific storm drains could be followed, and structured comparisons between more contaminated and reference areas could be carried out.

Limitations of the Endpoints
As discussed in the next section (Indicators), these indicators do not measure human health risk directly and their relationship to actual risk is not clear. As a result, the compliance standards are not based on actual estimates of risk but instead represent commonly accepted arbitrary limits. Improved indicators that reflect actual health risks would permit the development of more meaningful standards and other endpoints. While such indicators are not currently available, it is hoped that the planned epidemiology study in the Bay will provide a clearer picture of the relationship between currently used indicators and the actual incidence of human health impacts.


Indicators

The three indicators currently used to assess bathing water standards are total coliforms, fecal coliforms, and enterococcus bacteria. Each is a natural resident of the gut of warm-blooded animals. It is therefore assumed that elevated counts of these indicators in bathing waters indicate the presence of animal and/or human waste products and thus the possible presence of pathogens. While the validity of this assumption, and consequently of one or another of these indicators, has been questioned, current knowledge is insufficient to improve the situation. However, the epidemiology study planned for mid-1995 will hopefully resolve these questions. We recommend that the suite of indicators be critically reviewed once data from this study are available.

Advantages of Available Indicators
One of the advantages of these indicators is that they are the basis for bathing water standards that enable managers to quickly assess whether or not sample results are within compliance limits. In addition, sampling and analysis procedures are straightforward and results can be obtained fairly quickly. This ensures that additional sampling can be performed and health authorities notified within 24 hours of detecting high indicator counts. A further advantage is that the ratio of fecal to total coliforms can help determine if elevated counts reflect sewage contamination or an unrelated incident such as soil erosion.
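The fecal-to-total ratio screening mentioned above can be expressed as a one-line rule. In the sketch below, the function name and the 0.1 cutoff are illustrative assumptions, not values taken from this report; the point is only that a high ratio of fecal to total coliforms is more consistent with sewage than with, say, soil erosion.

```python
def likely_sewage(fecal, total, ratio_cutoff=0.1):
    """Rough screen on paired counts (cfu/100 mL): a high fecal:total
    coliform ratio suggests sewage contamination rather than an
    unrelated source such as soil erosion. The 0.1 cutoff is
    illustrative, not a value drawn from this document."""
    if total == 0:
        return False  # no coliforms detected at all
    return fecal / total >= ratio_cutoff
```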

Disadvantages of Available Indicators
These indicators' major disadvantage is that there is no assured link between their presence in measurable concentrations and the presence of human pathogens. Neither is there an established link in Santa Monica Bay between their presence and the incidence of human health effects such as gastroenteritis. Cabelli (1983) found a correlation between elevated enterococcus concentrations and gastroenteritis incidence at marine bathing beaches on the east coast. While this has stimulated interest in using enterococcus as an indicator in monitoring programs, there is concern about the applicability of Cabelli's results to cooler west coast waters. Establishing dependable indicators must await a more relevant and thorough epidemiological study (see Further Research below).

Another disadvantage of these bacterial indicators is that most illnesses associated with swimming in the ocean are likely caused by viruses. The relationship between the behavior of these indicators (which are bacteria) and associated viral agents is unknown. For example, it has been speculated that viruses survive much longer in the marine environment than do bacteria. If true, viral pathogens may still be present in nearshore waters even though bacterial indicator counts are not elevated. Unfortunately, current methods for measuring virus concentrations in the environment are cumbersome, slow, and costly, making them unsuitable for routine monitoring use.

Methods Discrepancies
Four agencies measure indicator bacteria in Santa Monica Bay:

  • the City of Los Angeles (Hyperion)
  • the County Sanitation Districts of Los Angeles (CSDLA)
  • the Los Angeles County Department of Health Services (DHS)
  • the Los Angeles County Department of Public Works

There are differences among these agencies in the laboratory methods they use to process samples and produce indicator counts. Hyperion and CSDLA use the membrane filtration method (EPA standard method 9222B for total and 9222D for fecal coliforms), while DHS and Public Works use the multiple tube method (EPA standard method 9221B for total and 9221C for fecal coliforms). The multiple tube method is more cost-effective for measuring total coliforms alone, but more labor intensive and costly for measuring the complete suite of indicators (total and fecal coliforms and enterococcus). The membrane filtration method can also very occasionally produce contradictory results, with fecal coliform levels higher than total coliform levels, because separate analyses are run for each indicator.

A more important concern is that counts from the membrane filtration method are consistently higher than those from the multiple tube method. In a 23-day study performed over a two-month period at several shoreline stations in the winter of 1986-87, Hyperion found that the membrane filtration method produced higher values 93.7% of the time for total and 88.9% of the time for fecal coliforms. Only very rarely were the membrane filtration values as much as an order of magnitude (10x) higher. However, in 6.8% of the instances for total and 4.8% for fecal coliforms, the membrane filtration method indicated a violation when the multiple tube method did not. In only one instance (0.005%) was the opposite true.

We recommend that a longer-term goal be to standardize sampling methods across all monitoring programs. However, we also recommend that this be deferred until results of the 1995 epidemiology study are available and until proposed new methods (e.g., Colilert) are more thoroughly evaluated. Finally, any revisions must take account of the fact that Public Works' samples from the stormdrain system typically are more turbid than shoreline and inshore samples; at present, the multiple tube method is better suited for turbid samples. (See Regional Coordination below for further discussion.)

Monitoring Design

The monitoring design is based on the fundamental judgments and assumptions in the conceptual model (see above) and focuses on meeting the management needs described in Table 1.

Station Locations and Sampling Frequency
Stations are located along the shoreline and in the inshore at 30 feet depth or 1000 feet from shore, whichever is furthest (Figure 2a and 2b). Complete station descriptions are given below. The station numbering scheme has been modified. Station names now contain a code indicating the region of the Bay they are from, whether they are shoreline or inshore stations, and the agency responsible for sampling the station. They also contain a numeric designation that represents their position along the coastline and permits additional stations to be added between existing stations if needed. Both the shoreline and the inshore programs have been modified to focus more specifically on priority areas of concern and to improve the program's overall efficiency.

Detailed station descriptions for the shoreline monitoring program. Latitudes and longitudes are listed in Appendix 1.

Agency Site ID Description

Detailed station descriptions for the inshore monitoring program. Latitudes and longitudes are listed in Appendix 1.

Agency Site ID Description

There are 60 shoreline stations, sampled by the Los Angeles County Department of Health Services (DHS), Hyperion (Los Angeles City), and CSDLA (County Sanitation Districts of Los Angeles). DHS samples weekly, and Hyperion and CSDLA (with the exception of two stations) sample daily. In July of 1994, DHS and Hyperion traded responsibility for many shoreline stations, with Hyperion focusing primarily on piers and stormdrains and DHS chiefly on the most used beaches. This concentrates routine daily sampling effort at those locations where past experience shows that problems are most likely to occur and where contamination is highest. DHS's weekly sampling is adequate in most instances to identify and track potential problems at the beaches. When it is not, Hyperion's and CSDLA's field and laboratory crews provide a flexible rapid response capability that allows DHS to target additional sampling where and when it is needed. For example, since most shoreline contamination stems from piers and stormdrains, the daily sampling will typically identify potential problems quickly enough for additional sampling to be targeted at beaches when necessary.

This shoreline sampling plan and its distribution of effort among the three primary agencies involved in part reflect their respective sampling and laboratory capacities. However, the plan is also an efficient and effective use of available resources. While it might theoretically be worthwhile to increase sampling effort during the rainy season, when contamination problems are more frequent and severe, it is not practically feasible to increase and then decrease laboratory capacity throughout the year. In addition, while it is true that runoff and therefore contaminant levels are highest in the wet season, beach usage is highest in the dry season. Thus risk, roughly approximated as a combination of contaminant and usage levels, seems relatively similar in the wet and dry seasons. Further, the shoreline stations provide coverage of piers, major stormdrains, and other key runoff sources on a daily basis, which is the maximum frequency that is both feasible and scientifically meaningful. Finally, they cover the highly frequented beaches at an adequate interval, with the ability to quickly increase both sampling frequency and intensity as needed.

Figure 2b shows the locations of the 17 inshore sampling stations. Inshore stations must be sampled by boat, and cost and logistical constraints limit the number of stations that can be sampled in one day. As described above, stations are located at 30 feet depth or 1000 feet from shore, whichever is furthest. Stations are located in areas with heavy usage (e.g., kelp beds) and/or high potential for contamination from nonpoint sources. Potential inshore sites throughout the Bay were prioritized by the workgroup and available sampling effort was then assigned to them in descending order of importance.

The inshore stations, like the shoreline stations, are sampled consistently throughout the year. Again, runoff and therefore contamination are highest in the wet season, which corresponds with the lobster sport diving season. However, dive classes and casual sport diving peak in the summer. Since it is impossible to quantitatively weigh these relative risk levels, sampling simply occurs consistently year-round rather than being weighted more heavily to one season or another.

Relationship to Management Information Needs
The monitoring design primarily reflects the requirements of the first main management information need, compliance and health management. As explained in the next section, the design is therefore not optimally suited for making generalizations about health risks in the Bay. In addition, while specific restoration actions have not yet been undertaken, there are probably enough stations in the current design to permit effective comparisons between sites where restoration actions will be taken and sites where no action will occur.

The Design's Statistical Basis

Each of the three primary management needs (Table 1) can be expressed as questions that can only be answered by specific kinds of information (Table 4). Examining the statistical basis of the monitoring design helps determine whether it will successfully provide this information. Unavoidable tradeoffs result from using one monitoring design to address all three types of issues and questions shown in Tables 1 and 4. As explained in more detail below, the overriding interest in compliance and health management restricts the ability to assess relative safety in various parts of the Bay.

The following sections briefly discuss the kinds of analysis approaches best suited to the questions in Table 4, along with relevant statistical issues. Relatively straightforward methods are available for addressing compliance and health management questions and for evaluating restoration actions. Assessing relative safety requires somewhat more consideration of alternative methods.

Table 4.
The three primary management needs expressed as questions, along with the information required to answer them.

  1. Manage compliance and health
     Questions:
       • Are standards being violated?
       • Should warnings be posted?
       • Should beaches be closed?
     Required information:
       • Daily indicator measurements in high use areas
       • Daily indicator measurements in high risk areas
       • Identification of sources of spills and contamination

  2. Assess relative safety
     Questions:
       • Where are the riskiest areas?
       • When are the riskiest times?
       • Are conditions improving?
     Required information:
       • Classification of monitoring sites by degree of risk
       • Average risk measures for predefined Bay regions
       • Measures of trends over time

  3. Evaluate restoration actions
     Questions:
       • Are actions having effects?
       • Are these effects worthwhile?
     Required information:
       • Structured comparisons with reference sites
       • Comparison to larger regional context

Managing Compliance and Health
Compliance monitoring and health management are essentially site-specific activities. This follows from the conceptual model's key assumption that most contamination stems from storm drains, that pathogens die off relatively quickly in seawater, and that localized pathogen levels therefore reflect the input of nearby drains. As a result, data from monitoring sites can, for the most part, be analyzed individually. Indicator measurements at each station are separately compared to regulatory standards. If indicator values at a station exceed these standards, various actions can be taken, depending on indicator levels, the circumstances, and past data from that station. In the case of a large spill, as has occurred in the past from Ballona Creek, data from several adjoining stations may be evaluated together if the spill contaminates a section of shoreline.

Statistical procedures are uncomplicated, involving simple comparisons with regulatory standards and the computation of straightforward averages. Counts that are elevated but still below the compliance standard can be difficult to interpret and may require additional sampling to pin down the source of the contamination. It would also be helpful to examine the effect of daily variability, which is often very high, on the usefulness of monthly medians in compliance tests for total coliforms.

Assessing Relative Safety
Assessing relative safety requires addressing the questions shown in Table 4. As Table 1 shows, accomplishing this depends on having indicators that accurately reflect health risk. Without these, no adjustments to the monitoring design and no statistical analyses (no matter how sophisticated) will meet this management goal. Assuming that the research and development program can produce such indicators, the present monitoring design permits a variety of informative analyses.

However, the sampling design has one fundamental statistical feature that will limit all assessments of relative safety. Neither shoreline nor nearshore monitoring stations are randomly located. Instead, they are deliberately placed at sites (e.g., stormdrains) where human use and/or pathogen contamination are assumed to be highest. As a result, the monitoring data cannot be used to generalize about either regions of the Bay or the Bay as a whole. Attempting to do so would result in biased estimates of regional contamination because stations are deliberately placed where contamination is highest. In extreme terms, this would be analogous to generalizing about the temperature in your kitchen based on a thermometer reading taken in the flame of a stove burner. Instead, monitoring data can only be used to draw conclusions about the particular kinds of sites where sampling actually occurs. For instance, if shoreline stations are a representative subset of all stormdrains, monitoring data can support generalizations about stormdrains. Similarly, if these stations are representative of only the worst stormdrains, then conclusions can only be drawn about this class of stormdrains.

This is a limitation of sorts. However, the present sampling design does maximize safety and presents a conservative picture of relative risk, because it specifically focuses on areas where contamination is assumed to be greatest and where daily sampling can provide an early warning of potentially more widespread effects. In addition, the monitoring design reflects the overriding concern of the management system with compliance and health management. As a result, short of establishing a separate set of sampling stations, the effort to assess relative safety must necessarily accept the tradeoffs involved with this monitoring design.

Within this overall constraint, Table 4 lists three distinct kinds of information needed to answer questions about relative safety. Analyses used to generate this information must take account of several levels of temporal variability. Wet and dry seasons differ dramatically. Within the wet season, runoff and therefore contaminant outflow peak during storm events. Among storms, early season storms carry higher loads of contaminants than do storms later in the season. Finally, bacterial counts often change dramatically from one day to the next. If analyses are properly structured to account for the seasonal and storm-related variability, then daily variability can serve as the background variability in any analyses of pattern and trend.

There are also at least two levels of spatial variability to consider. Separate stations often have distinct characteristics that reflect the areas they drain. At a somewhat larger scale, regions of the Bay also have somewhat different characteristics, again reflecting the watersheds they contain.

The large amount of variability in this situation makes it difficult to draw simple conclusions about patterns and trends in indicators of risk. The daily station-by-station data are suited for ongoing compliance and health management, but the raw data are too variable and confusing for this other kind of "big picture" analysis. It would therefore be helpful to summarize and/or simplify the raw data to help answer the questions in Table 4. Data smoothing techniques could help identify and group stations with clearly similar contamination patterns over time. This would help answer the questions: Where are the riskiest areas? and When are the riskiest times? If some characteristics of the smoothed curves can be numerically defined, then a multivariate cluster and/or ordination analysis could be performed on these derived data.

This would provide a more analytical method of grouping stations with similar contamination patterns over time. Alternatively, an ANOVA, stratified by season and other important sources of variability, could be used to group similar stations. The analysis could be performed on the log-transformed data themselves, on values taken from specific points on the smoothed curves, or on some derived numerical characteristic of the curves.

Such analyses, performed on the entire set of sampling stations, would group stations with similar contamination patterns. These would not necessarily be spatially contiguous sets of stations. Instead such station groups are likely to contain similar stations scattered throughout the Bay. This would directly address the question, "Where are the riskiest areas?" It would then be up to managers and scientists to interpret and describe these relative degrees of risk in terms meaningful to the public.
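The station groups produced this way need not respect geography. The toy example below (station names and the derived curve features are invented) uses a greedy distance-threshold grouping as a stand-in for a formal cluster or ordination analysis.

```python
# Hypothetical derived features per station: (wet-season mean, storm-peak
# value), both on the log10 scale.
features = {
    "S1": (2.1, 3.0), "S2": (3.2, 4.1), "S3": (2.2, 3.1),
    "S4": (3.1, 4.0), "S5": (2.0, 2.9),
}

def dist(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def cluster(features, threshold=0.5):
    """Greedy single-link grouping: stations closer than `threshold`
    (Euclidean distance, log10 units) fall into the same group."""
    groups = []
    for name, vec in features.items():
        for g in groups:
            if any(dist(vec, features[m]) < threshold for m in g):
                g.append(name)
                break
        else:
            groups.append([name])
    return groups

groups = cluster(features)
# The resulting groups pattern together regardless of where the
# stations sit along the shoreline.
```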

Another analysis approach is to compare the relative risk associated with groups of stations from pre-defined regions of the Bay (e.g., Malibu, Santa Monica). This could be accomplished by simply presenting the area averages and confidence limits of the transformed indicator values, of the smoothed curves, or of the derived numerical characteristics of the curves. These averages could also be compared more formally with an ANOVA. For this and the other analyses suggested above, the data should first be stratified by season and the other important sources of variability.
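Presenting area averages with confidence limits could be sketched as follows. The values are hypothetical, and a normal approximation replaces the t quantiles a real small-sample analysis might use.

```python
from math import log10, sqrt
from statistics import mean, stdev

# Hypothetical log10 indicator values pooled by pre-defined area.
areas = {
    "Malibu":       [log10(x) for x in (90, 150, 60, 200, 120, 80)],
    "Santa Monica": [log10(x) for x in (700, 1200, 500, 1600, 900, 800)],
}

def ci(values, z=1.96):
    """Mean and approximate 95% confidence limits on the log10 scale
    (normal approximation; assumed for illustration)."""
    m = mean(values)
    se = stdev(values) / sqrt(len(values))
    return (m - z * se, m, m + z * se)

summaries = {area: ci(v) for area, v in areas.items()}
# Non-overlapping intervals on the log10 scale suggest a real
# difference in relative risk between the two areas.
```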

An important aspect of assessing relative risk is determining whether apparent risk is increasing, decreasing, or staying the same over time. This question is equally applicable to individual stations, to groups of similar stations, and to pre-defined areas of the Bay. The most straightforward approach to this question would be to examine confidence limits around the smoothed curves of indicator values over time at the monitoring stations. Average curves for station groups and pre-defined areas could also be compared from one time period to the next. If specific time periods must be compared (e.g., one year's wet season to the next), then a simple t-test on the two sets of data would be appropriate. Because of the large amount of data available, more sophisticated questions could be addressed with time series analysis.
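If one year's wet season is compared to the next with a simple t-test, the computation reduces to the following sketch. The data are invented, and Welch's form of the statistic is used so the two seasons need not share a variance.

```python
from math import log10, sqrt
from statistics import mean, variance

# Hypothetical wet-season log10 counts for two consecutive years
# at the same station group.
year1 = [log10(x) for x in (2400, 1600, 900, 3000, 1200, 1800)]
year2 = [log10(x) for x in (800, 500, 1100, 400, 900, 600)]

def welch_t(a, b):
    """Welch's t statistic for two independent samples: a simple way
    to compare one wet season to the next on the log scale."""
    se = sqrt(variance(a) / len(a) + variance(b) / len(b))
    return (mean(a) - mean(b)) / se

t_stat = welch_t(year1, year2)
# A large positive t suggests year 1's wet season was more contaminated.
```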

Evaluating Restoration Actions
Restoration actions are planned for specific stormdrains and creeks. A straightforward impact monitoring design can be used to determine the results of these actions. "Treatment" stormdrains at which actions are planned should be paired with similar reference stormdrains that will not be treated. Within the limits of cost and logistics, better results will be obtained if replicate treatment and reference stormdrains can be used. A BACI (Before-After-Control-Impact) analysis can then be used to judge whether the restoration actions have changed the preexisting differences between the treatment and reference sites. A key to the success of this analysis is the selection of valid reference stations against which the treatments can be compared.
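At its core, a BACI comparison asks whether the treatment-minus-reference difference changes between the "before" and "after" periods. The minimal sketch below uses invented log10 data for a single treatment-reference pair.

```python
from statistics import mean

# Hypothetical log10 counts at a paired "treatment" drain (restoration
# planned) and reference drain, before and after the action.
before = {"treatment": [3.4, 3.2, 3.5, 3.3], "reference": [2.9, 3.0, 2.8, 3.1]}
after  = {"treatment": [2.6, 2.8, 2.5, 2.7], "reference": [2.9, 2.8, 3.0, 2.9]}

def baci_effect(before, after):
    """BACI effect: the change in the treatment-minus-reference
    difference from the before period to the after period. A value
    near zero means no detectable effect beyond shared trends."""
    diff_before = mean(before["treatment"]) - mean(before["reference"])
    diff_after = mean(after["treatment"]) - mean(after["reference"])
    return diff_after - diff_before

effect = baci_effect(before, after)
# A strongly negative effect is evidence the restoration reduced
# contamination at the treated drain relative to its reference.
```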

Regional Coordination

There are four levels of regional coordination relevant to the shoreline and nearshore bacteriology monitoring program in the Bay (Table 5). At the first level, coordination among Hyperion, CSDLA, and DHS is concerned mainly with ensuring consistency among sampling and analysis methods and with removing gaps and overlaps in spatial coverage. In addition, the more limited resources available for nearshore sampling must be focused on priority problem areas. The sampling scheme presented in Figure 2 and Table 3 directly addresses these issues. Station locations have been adjusted to more closely monitor contamination sources. Responsibility for specific stations has been shifted among agencies and an additional agency has been recruited to reduce costs and make better use of existing logistical resources. Perhaps most importantly, contamination problems in the Bay as a whole have been examined together and prioritized to guarantee that monitoring effort will be directed to the most important problems and areas. Finally, station designations and data transfer formats have been standardized to make it easier to combine data from the various agencies collecting data in the Bay.

Coordination between the bacteriology program and the NPDES stormwater monitoring program is concerned chiefly with linking indicator measurements in the shoreline and nearshore zones with estimates of pathogen inputs to the Bay from stormwater. These input estimates will be modeled rather than measured directly, so it is not possible to link the two programs' sampling schemes directly. However, both programs will measure the same set of indicators and should use comparable sampling and laboratory analysis methods. Results of the two programs should be compared as a cross-check on their accuracy. Coordination should be addressed during the upcoming design of the stormwater monitoring program in early 1995.
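One simple form such a cross-check could take is a rank correlation between modeled per-storm inputs and measured shoreline peaks. The sketch below uses invented values for both programs.

```python
# Hypothetical per-storm comparison: modeled pathogen input (stormwater
# program) vs. measured shoreline indicator peak (bacteriology program).
modeled = [5.0, 2.0, 8.0, 3.0, 6.0]   # e.g., relative input per storm
measured = [3.1, 2.4, 3.6, 2.6, 3.3]  # log10 peak MPN/100 mL

def rank(values):
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0] * len(values)
    for pos, i in enumerate(order):
        ranks[i] = pos + 1
    return ranks

def spearman(a, b):
    """Spearman rank correlation (ties not handled): a simple check
    on agreement between the two programs' results."""
    ra, rb = rank(a), rank(b)
    n = len(a)
    d2 = sum((x - y) ** 2 for x, y in zip(ra, rb))
    return 1 - 6 * d2 / (n * (n * n - 1))

rho = spearman(modeled, measured)
# rho near 1 indicates the modeled inputs track the measured peaks.
```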

The third level of coordination focuses on information linkages between the bacteriology program and other aspects of the regional monitoring program (Figure 3). In particular, the bacteriology program will need information from the stormwater program on runoff volume and characteristics and from the wetlands, birds, and marine mammals programs on the sources of animal fecal material. Data sharing procedures are a major aspect of this type of coordination and are the central focus of the regional data management effort.

The fourth and final level of coordination involves connections with other bacteriology programs in the Bight. A primary issue is ensuring comparability of monitoring results by matching indicators and sampling/analysis methods. Another important issue is establishing data sharing procedures that permit data from throughout the Bight to be readily combined.

Table 5.
Four levels of regional coordination relevant to bacteriology monitoring, and the issues each kind of coordination addresses.

  1. Among bacteriology programs
    • Sampling/analysis methods
    • Lab and field QA/QC
    • Spatial gaps/overlaps
    • Timing of sampling
    • Focus on priority areas

  2. Between bacteriology and stormwater
    • Sampling/analysis methods
    • Indicator selection
    • Agreement between results

  3. With other Bay monitoring efforts
    • Availability/suitability of needed data
    • Suitability of bacteriology data for other programs
    • Data sharing procedures

  4. With other bacteriology monitoring in the Bight
    • Sampling/analysis methods
    • Lab and field QA/QC
    • Spatial gaps/overlaps
    • Indicator selection
    • Data sharing procedures

Further Basic Research

There are two primary research needs relevant to the bacteriology monitoring program. The first is the development of indicators that are immune to the problems discussed above (see Indicators) and that therefore more accurately reflect the true health risk. This will improve the soundness of management decisions based on monitoring data. The second is a formal risk assessment to determine the actual levels of health risk associated with different contaminant levels and different human activities. This will provide a solid basis for reevaluating the monitoring design to ensure that effort is being allocated to the greatest health risks.

Detailed Methods

[Not yet completed. The following topics will be covered]

Database Development

There are two main goals for the data management effort. These are to make data readily available as needed to address the issues in Tables 2 and 4 and to provide software tools to efficiently analyze and present the data. This requires that all data management procedures and database tools be focused on meeting such key information needs. Chapter X describes the regional data management concept and structure in detail. This is the larger context for the bacteriology-specific topics described here.

End User Application
End users of regional bacteriology monitoring data include the Hyperion and LACSD monitoring staffs, as well as the County Department of Health Services and the Regional Board. Once they receive raw monitoring data, they are interested in such operations as scanning them for out-of-compliance events and inspecting trends at particular stations over various time periods. Because they have broader regional responsibilities, Health Services and the Regional Board are also interested in comparing different regions of the Bay. This requires the ability to readily combine data from the City of Los Angeles and the County Sanitation Districts and then perform these operations using efficient data visualization tools such as mapping and graphing. It also requires moving data to statistical analysis software to perform the analyses described above in The Design's Statistical Basis.
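Because both agencies would report in the standardized transfer format, combining their data for a Bay-wide compliance scan could be as simple as the sketch below. Station IDs, field names, and the compliance limit are all assumed for illustration.

```python
# Illustrative records from two agencies, already in a common format
# (station IDs and field names are hypothetical).
city_data = [
    {"station": "SMB-001", "date": "1994-12-01", "total_coliform": 1600},
    {"station": "SMB-002", "date": "1994-12-01", "total_coliform": 240},
]
county_data = [
    {"station": "SMB-101", "date": "1994-12-01", "total_coliform": 900},
]

# Because both agencies use the same fields, combining is a simple
# concatenation; out-of-compliance scanning can then run Bay-wide.
combined = city_data + county_data

LIMIT = 1000  # assumed single-sample limit, MPN/100 mL, for illustration
exceedances = [r["station"] for r in combined if r["total_coliform"] > LIMIT]
```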

A simple software system is being developed to assist end users. It will load monitoring data received in standardized format (see below), incorporate data display functions such as plots and maps, and produce a variety of summary reports. [Further description when system is completed.]

Standardized Transfer Formats
Implementing a standardized transfer format is an important part of each individual program element. It is a fundamental prerequisite for the success of end user applications like the one described above. The goal of standardized transfer file formats is to formalize data exchanges. This will make it easier for data recipients to understand and use the data, improve overall data quality, and eliminate the costly task of reformatting data when they are combined and/or exchanged. Data sources can automate the production of standardized files and recipients can be assured that they will receive the same formats regardless of the source.

A "transfer file" is actually a package composed of three or more separate ASCII text files. Each package includes one Master file, one Sample/Event file, and one or more Data files. The Master file contains fields that define a sample, such as sample ID, sampling location, date, time, and data type. The Sample/Event file contains information about the sampling event, such as volume, weight, depth, sample type, and other descriptive fields. The structure of these tables is the same for all data types.

Results data are stored in one or more Data files, each specifically designed for its data type. Bacteriology data, for example, have a single Data file, while benthic infauna data comprise separate abundance and biomass files. The structure of the relevant files for bacteriology data is shown in Table 6.
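The three files in a package can be reassembled into complete records by joining on the sample ID. The sketch below uses toy comma-delimited files with invented field names, not the actual Table 6 formats.

```python
import csv
import io

# A toy transfer package: three ASCII text files, shown here as strings.
# Field names are illustrative only.
master = "sample_id,station,date,time,data_type\nB001,SMB-001,1994-12-01,0800,BACT\n"
sample_event = "sample_id,depth_m,sample_type\nB001,0.1,grab\n"
data = "sample_id,indicator,count_mpn_100ml\nB001,total_coliform,1600\nB001,enterococcus,104\n"

def load(text):
    return list(csv.DictReader(io.StringIO(text)))

# Joining the three files on sample_id reconstructs a full record.
records = {}
for row in load(master):
    records[row["sample_id"]] = dict(row)
for row in load(sample_event):
    records[row["sample_id"]].update(row)
for row in load(data):
    records[row["sample_id"]].setdefault("results", []).append(
        (row["indicator"], int(row["count_mpn_100ml"])))
```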

Table 6.
Transfer file structures and formats for the bacteriology component of the comprehensive monitoring program. See Chapter X for more detail.

Draft Program Summary: Bacteriology

The following document is a draft of the bacteriology program. It is intended for review only and does not include figures, appendices and some tables. The approximate length of this summary is 19 printed pages. Comments should be forwarded to:

Dr. Guangyu Wang
(213) 266-7568


Last Update 10/00