Objectives. We sought to determine whether monitoring rapid influenza laboratory tests improved the influenza-like illness surveillance already in place in New Mexico.

Methods. For the past 3 influenza seasons, the New Mexico Department of Health examined influenza-like illness visits and positive rapid influenza test results.

Results. The proportion of positive rapid influenza test results started to rise earlier than did the percentage of clinical visits because of influenza-like illness in each of the past 3 influenza seasons: 5 weeks earlier during the 2004–2005 season, 4 weeks earlier in 2005–2006, and 3 weeks earlier in 2006–2007. In addition, rapid influenza tests showed a spike in influenza B activity late in the 2005–2006 season that influenza-like illness syndrome surveillance did not.

Conclusions. Laboratory-based rapid influenza test surveillance required relatively few resources to implement and offered a sensitive mechanism to detect the onset of influenza activity while allowing for the distinction of influenza types.

Influenza and pneumonia constituted the eighth leading cause of death in the United States1 and the seventh leading cause in New Mexico2 in 2005. Approximately 36 000 deaths and over 200 000 hospitalizations occur annually in the United States from influenza and influenza-related causes.3,4 Data from influenza surveillance systems describe the timing, burden, and severity of seasonal influenza activity, identify circulating viral types and subtypes, and guide vaccination practices and outbreak management. The emergence of human infection with H5N1 avian influenza has focused attention on influenza surveillance systems in the context of planning for pandemic influenza.5,6 Virologic surveillance is critical for the development of vaccine for the next influenza season, and it may help to identify viruses with pandemic potential.7 Information derived from influenza surveillance has been helpful in unexpected situations, such as during the recent sporadic national shortages of vaccine. Monitoring seasonal influenza includes the development and maintenance of systems for clinical syndrome surveillance, laboratory-based surveillance, and hospitalization and influenza-related mortality surveillance.

Methods employed by states to conduct influenza surveillance vary. All states use a sentinel network of providers to report on the percentage of clinical visits because of influenza-like illness throughout the influenza season. However, the number of providers that participate and the method of recruiting and retaining providers are not standardized throughout the country. Some states have chosen to monitor laboratory data in addition to influenza-like illness by establishing sentinel laboratory networks that report influenza testing volume and proportion of positive results.8

Rapid influenza tests are relatively inexpensive and have become widely available for diagnostic purposes. Studies have shown that rapid influenza tests, when used to direct the clinical management of children with fever and hospitalized adults, can increase appropriate antiviral prescriptions, limit incorrect antibiotic use, reduce the number of additional diagnostic tests ordered (e.g., urinalyses, chest x-rays, and complete blood counts), and reduce the length of stay in an emergency department.9–12

We sought to determine whether monitoring rapid influenza laboratory test results improves the influenza surveillance already in place in New Mexico.

The New Mexico Department of Health (NMDOH) conducts statewide influenza surveillance through (1) a sentinel influenza-like illness network, (2) laboratory rapid influenza test surveillance, (3) active surveillance for influenza-associated hospitalizations in select counties,13 and (4) influenza-related mortality. We present data collected from only the first 2 of these surveillance components.

Influenza-Like Illness Surveillance

NMDOH maintains a statewide network of health care providers that voluntarily report on a weekly basis the total number of patient visits in their practice settings for any cause and the percentage of visits because of influenza-like illness. Sentinel providers include both primary care outpatient settings and hospital emergency departments. For surveillance purposes, NMDOH uses the nationally standardized influenza-like illness definition of fever above 100°F (>37.8°C) and cough or sore throat, in the absence of a known cause. Data are collected and reported to the Centers for Disease Control and Prevention (CDC) during the influenza season, defined as October through May, to contribute to the national influenza sentinel providers' surveillance network. An influenza-like illness sentinel provider site in New Mexico must report for a minimum of 16 weeks (50% of the season) to be included in the statewide surveillance system.
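
As a concrete illustration, the case definition above can be expressed as a small predicate. This is a sketch for clarity only; the parameter names are ours and do not correspond to any NMDOH reporting system.

```python
def meets_ili_definition(temp_f, cough, sore_throat, known_cause=False):
    """Nationally standardized influenza-like illness (ILI) definition:
    fever above 100 degrees F (>37.8 C) AND cough or sore throat,
    in the absence of a known cause other than influenza."""
    return temp_f > 100.0 and (cough or sore_throat) and not known_cause
```

A visit counts toward the weekly ILI percentage only when all three conditions hold; a fever with neither cough nor sore throat, or a respiratory illness explained by another known cause, does not.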

NMDOH periodically reviews the number and characteristics of the health care provider sites that participate in influenza-like illness surveillance to make the patient sample as representative of the state's population as possible. CDC recommends 1 regularly reporting sentinel provider for every 250 000 residents (and a minimum of 10 for less populous states).14 Although 10 provider sites would meet this recommendation in New Mexico, 15 or more sentinel provider sites have participated in the network since the 2003–2004 influenza season.

Sentinel influenza-like illness sites are supplied with viral isolation kits with which to submit clinical specimens to the NMDOH scientific laboratory division to confirm the diagnosis of influenza at the beginning, middle, and end of the influenza season. Influenza-like illness surveillance data are included in weekly influenza reports generated by NMDOH, disseminated on a Web site and through e-mail lists, and incorporated into summary reports mailed to health care providers statewide.

Laboratory Rapid Influenza Test Surveillance
Rapid antigen and direct fluorescent antibody influenza testing.

Laboratory-confirmed influenza cases became reportable to NMDOH during the 2003–2004 influenza season. During that season, reporting the identified case-based data and management of the information proved resource intensive and probably did not fully capture the actual burden of disease. As an alternative to case-based reporting, laboratory rapid influenza test sentinel surveillance was initiated in the 2004–2005 influenza season. It included the collection of nonidentified (i.e., aggregated) rapid influenza testing results from 16 sentinel laboratory sites across the state (14 hospital laboratories and 2 large reference laboratories).

One high-volume statewide clinical reference laboratory (laboratory A) joined the sentinel laboratory system during the 2005–2006 season and provided influenza direct fluorescent antibody (DFA) results from a standard respiratory panel. An additional 13 clinical laboratories were recruited to the surveillance network before the 2006–2007 season for a total of 30 laboratories, which consisted of all in-state commercial clinical laboratories that had the capacity to perform in-house rapid diagnostic influenza testing. NMDOH received weekly aggregated reports of rapid influenza tests that consisted of the weekly total number of rapid (and DFA from laboratory A) diagnostic tests performed for influenza and the number of positive results. Laboratories reported results as distinguishable between type A and type B influenza when that information was available.
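
The aggregation step described above (collapsing individual test results into a weekly count of tests performed and positives by type) can be sketched as follows. The field names and the encoding of outcomes are illustrative assumptions, not NMDOH's actual reporting schema.

```python
from collections import defaultdict

def weekly_aggregate(results):
    """Collapse line-level rapid influenza test results into the kind of
    aggregate weekly report NMDOH received: total tests and positives
    by influenza type per week. `results` is a list of (week, outcome)
    pairs, where outcome is "neg", "A", or "B" (illustrative encoding)."""
    report = defaultdict(lambda: {"tests": 0, "pos_A": 0, "pos_B": 0})
    for week, outcome in results:
        row = report[week]
        row["tests"] += 1          # every test counts toward volume
        if outcome == "A":
            row["pos_A"] += 1
        elif outcome == "B":
            row["pos_B"] += 1
    return dict(report)
```

Because only week-level totals leave the laboratory, the report is nonidentified by construction, which is what made this approach so much less resource intensive than case-based reporting.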

Isolation of influenza virus.

As part of an ongoing research project at the University of New Mexico, laboratory A submitted all clinical specimens with an initial DFA positive result during the 2005–2006 and 2006–2007 influenza seasons for viral isolation through culture or hemadsorption techniques. A subset of these initial DFA positive samples was forwarded to the NMDOH scientific laboratory division for confirmation and subtyping procedures. The NMDOH scientific laboratory division also performed viral isolation and subtyping on another subset of nasopharyngeal swabs submitted by sentinel provider sites in the influenza-like illness network.

Data Analysis

Influenza-like illness and reports of rapid tests for influenza were analyzed over time. New Mexico influenza-like illness data were plotted against national influenza-like illness data to determine possible correlation. Weekly New Mexico rapid and DFA influenza test data were analyzed in relation to viral isolation data to compare the 2 laboratory methods and their chronological relationship to influenza-like illness data. The number of rapid tests ordered was analyzed in relation to the percentage of tests with positive results on a weekly basis.

New Mexico has traditionally defined the onset of the influenza season as the first time a clinical influenza-like illness case is confirmed by viral isolation from the patient's clinical specimen, because rapid influenza tests have a low positive predictive value early in the season, when the prevalence of disease is low.

After we reviewed past influenza seasons (including the traditional definition of the onset of the season, graphical depictions of influenza-like illness activity throughout the season, and the accompanying laboratory data), we defined the following parameters. First, the onset of influenza-like illness and laboratory activity was considered to be the beginning of a rise in the graphed data, defined as the week of the season that directly preceded a 200% or greater increase sustained for 3 weeks or longer. We included the requirement that the 200% or greater increase be maintained for 3 weeks or longer to account for increases in rapid influenza test data early in the season that may have been caused by false-positive laboratory results. Second, the peak of the influenza season was defined as the single highest weekly point of influenza-like illness activity or percentage of positive laboratory results during the influenza season (indexed to the CDC's Morbidity and Mortality Weekly Report calendar). We defined additional peaks when large secondary rises were observed more than 5 weeks after the initial peak had descended. We applied the definitions of onset and peak consistently across the 4 seasons of data used in this study.
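
The onset and peak rules above can be sketched as a short scan over a weekly series. This is one plausible reading of the definition: the paper does not state whether each following week is compared against the candidate week itself or a smoothed baseline, so we assume the former, and we treat a "200% or greater increase" as at least a tripling.

```python
def season_onset(weekly, factor=3.0, sustain=3):
    """Index of the week that directly precedes a >=200% increase
    (i.e., at least a tripling, factor=3.0) sustained for `sustain`
    consecutive weeks, or None if no such week exists."""
    for w in range(len(weekly) - sustain):
        base = weekly[w]
        if base <= 0:
            continue  # a percentage increase from zero is undefined
        if all(weekly[w + k] >= factor * base for k in range(1, sustain + 1)):
            return w
    return None

def season_peak(weekly):
    """Index of the single highest weekly value in the season."""
    return max(range(len(weekly)), key=weekly.__getitem__)
```

The `sustain` parameter is what filters out the isolated early-season blips of false positives: a single high week with no follow-through never qualifies as an onset.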

The study period consisted of 1 full influenza season (2003–2004) of influenza-like illness sentinel provider data and the subsequent 3 complete influenza seasons for which we obtained both influenza-like illness sentinel-provider and rapid influenza test data. In 4 seasons of weekly influenza-like illness reporting, sentinel providers reported 590 923 clinical visits, of which 6702 (1.1%) were listed as being because of influenza-like illness. In 3 seasons of rapid influenza laboratory reporting, laboratories performed 27 170 influenza tests (rapid and DFA), of which 3402 (12.5%) were positive for influenza type A or B (Table 1). Viral isolation was successfully completed by culture or hemadsorption methods for 365 influenza isolates (301 [82.5%] from laboratory A) during the 2005–2006 influenza season. Of the total number of isolates, 309 (84.7%) were confirmed as influenza type A and 56 (15.3%) as type B. During the 2006–2007 influenza season, 234 isolates were confirmed by culture or hemadsorption (157 [67.1%] from laboratory A). Of the total number of isolates, 218 (93.2%) were confirmed as type A and 16 (6.8%) as type B.


TABLE 1 Four Season Comparison of Influenza-Like-Illness and Virologic Surveillance Data: New Mexico, 2003–2007


Influenza Season | ILI Sites, No. | Total Visits, No. | ILI Visits, No. (%) | Labs, No. | Total Tests, No. | Positive Tests, No. (%) | ILI Onset (Week Ending) | ILI Peak (Week Ending) | Lab Onset (Week Ending) | Lab Peak (Week Ending)
2003–2004 | 15 | 88 628 | 1356 (1.5) | | | | Nov 9, 2003 | Dec 7, 2003 | |
2004–2005 | 18 | 143 879 | 897 (0.6) | 16 | 7227 | 792 (11.0) | Jan 29, 2005 | Feb 12, 2005 | Dec 25, 2004 | Feb 26, 2005
2005–2006 | 20 | 166 129 | 2047 (1.2) | 17 | 9389 | 1351 (14.4) | Dec 24, 2005 | Jan 7, 2006 | Nov 26, 2005 | Dec 24, 2005
2006–2007 | 20 | 192 287 | 2402 (1.2) | 30 | 10 554 | 1259 (11.9) | Feb 9, 2007 | Mar 17, 2007 | Jan 20, 2007 | Mar 10, 2007
Total | | 590 923 | 6702 (1.1) | | 27 170 | 3402 (12.5) | | | |

Note. ILI = influenza-like illness. Laboratory rapid influenza test surveillance began in the 2004–2005 season; laboratory cells for 2003–2004 are therefore blank.

The onset and peak of influenza-like illness activity and of the rate of positive rapid laboratory results are shown for each influenza season in Table 1. In each of the past 3 consecutive influenza seasons, the proportion of positive rapid influenza test results started to rise earlier than did the percentage of clinical visits because of influenza-like illness: 5 weeks earlier during the 2004–2005 season, 4 weeks earlier during the 2005–2006 season, and 3 weeks earlier during the 2006–2007 season (Table 1). In 2 of those 3 seasons, the peak percentage of positive rapid influenza test results also occurred earlier than did the peak in influenza-like illness activity.
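
The lead times can be checked directly from the week-ending onset dates in Table 1, a minimal sketch being:

```python
from datetime import date

def lead_weeks(lab_onset, ili_onset):
    """Whole weeks by which laboratory onset preceded ILI onset."""
    return (ili_onset - lab_onset).days // 7

# Week-ending onset dates from Table 1
print(lead_weeks(date(2004, 12, 25), date(2005, 1, 29)))   # 2004-2005: prints 5
print(lead_weeks(date(2005, 11, 26), date(2005, 12, 24)))  # 2005-2006: prints 4
```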

Influenza activity in New Mexico varied seasonally (Figure 1). The differences among influenza seasons are apparent: an early and relatively intense season in 2003–2004 and a late and milder season in 2006–2007. Figure 2 shows that, during the 2005–2006 season, positive rapid influenza test results started to rise after the first influenza A virus–positive culture was detected. The percentage of positive rapid influenza test results began to rise earlier than influenza-like illness activity and rose sharply, along with the number of influenza isolates confirmed by the scientific laboratory division and laboratory A, beginning in late November or early December. Figure 2 also shows that the peak in New Mexico influenza-like illness activity occurred later than the peak in the percentage of positive rapid influenza test results and that influenza-like illness activity and the proportion of positive laboratory test results decreased over time in a similar fashion. This pattern was consistent in the previous (2004–2005) and subsequent (2006–2007) seasons (data not shown).

With rapid influenza test data, we compared the circulation of influenza A virus to that of influenza B virus during the 2005–2006 influenza season (Figure 3). The early peak (30% of positive tests) during December through February was caused mostly by influenza A activity (i.e., 97% of positive clinical specimens), whereas the later peak during March through May was caused mostly by influenza B (i.e., 62% of positive clinical specimens). Although rapid influenza laboratory surveillance detected this spike of influenza B activity late in the influenza season, influenza-like-illness syndrome surveillance did not detect the activity that was hidden in what appeared to be an ongoing decline in overall influenza illness (Figures 2 and 3). The 2006–2007 influenza season was a relatively mild season dominated by influenza A: during peak activity, approximately 93% of rapid tests were identified as influenza A (data not shown).

Surveillance for infectious diseases must have well-defined objectives and to achieve them must take into account representativeness, sensitivity, specificity, timeliness, and feasibility.15 Surveillance for influenza, in particular, needs to be simple and sensitive because of its seasonal nature, high burden of disease in populations, and lack of a specific clinical syndrome. The worldwide influenza surveillance that the World Health Organization established in 1952 relies on influenza cultures that laboratories report to identify global influenza activity.16 It is important that local, national, and global surveillance systems conduct virologic surveillance to monitor antigenic characteristics of circulating influenza strains; this information is used to determine annual vaccine composition and monitor new variants as they occur.

There is growing interest in exploring more-timely surveillance approaches to improve early-warning systems for influenza. For example, the European Influenza Surveillance Scheme, comprising 30 countries that collect and exchange influenza activity information, organized an expert task group to evaluate a new rapid laboratory testing surveillance network. This group evaluated the rapid testing surveillance conducted during 4 influenza seasons between 1999 and 2003 in Switzerland. It found that the average time gained, compared with that of viral cell culture, was 9 days.17 Although the group acknowledged the lower sensitivity of the rapid tests compared with cell culture and polymerase chain reaction, they also noted that the high specificity of the tests made them valuable additional tools for influenza surveillance purposes. European Influenza Surveillance Scheme members subsequently recommended that rapid influenza testing data be added to traditional surveillance as early-warning systems for changing influenza activity.

Similar to Switzerland's observations of time gained with rapid tests to help describe influenza activity over 4 seasons, New Mexico observed earlier rises in rapid influenza tests than in influenza-like illness data over 3 influenza seasons. In New Mexico before 2004, influenza-like illness surveillance was the primary tool to depict any given influenza season. However, since the 2004–2005 influenza season, both influenza-like illness and laboratory surveillance have been used to more carefully describe the onset of influenza activity and its regional spread during the annual influenza season. We found that surveillance for rapid influenza tests can enhance state-based influenza surveillance while requiring only minimal additional resources. This surveillance has proven to be relatively simple to implement, sensitive, and flexible.

In New Mexico during the past 3 influenza seasons, the monitoring of aggregate rapid tests for influenza provided more timely and more detailed information that better defined the burden of influenza in New Mexico as well as a laboratory correlate to the syndrome-based clinical information observed through influenza-like illness surveillance alone. Although the rapid tests for influenza demonstrated a comparable epidemiological pattern of influenza activity compared with influenza-like illness surveillance for each season, the initial rise of influenza activity was detected earlier through rapid testing. In addition, peaks of influenza type A as distinct from influenza type B were identified through rapid laboratory test surveillance, whereas influenza-like illness surveillance did not enable the detection of such differences. Identifying the true onset of the influenza season as early and accurately as possible helps not only in the formulation of more-specific prevention, diagnostic, and treatment guidelines but also in the enhancement of the effectiveness of communication with health care providers and the public, including targeted vaccination campaigns.

Aggregate rapid influenza test surveillance data have inherent limitations arising from the sensitivity and specificity of the tests. Rapid tests performed very early or very late in the influenza season, when prevalence of influenza is low, can include a greater number of false positive results than would be seen at other times during the season. One study, conducted among children who were hospitalized and symptomatic outpatients, suggested that the positive predictive value of rapid tests approached 80% when influenza prevalence was higher than 15%; however, the tests proved to be of limited use when the prevalence was lower than 10%.8 The World Health Organization recommends that local influenza surveillance guide the optimal use, interpretation, and confirmation of rapid influenza tests during various parts of an influenza season.18
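
The dependence of positive predictive value on prevalence follows directly from Bayes' theorem and can be made concrete with a short calculation. The sensitivity and specificity figures below are illustrative assumptions for a generic rapid test, not values reported by the cited study.

```python
def ppv(sensitivity, specificity, prevalence):
    """Positive predictive value of a test, via Bayes' theorem:
    P(disease | positive) = TP / (TP + FP)."""
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

# Assumed test performance: sensitivity 70%, specificity 95%.
print(round(ppv(0.70, 0.95, 0.15), 2))  # mid-season prevalence: prints 0.71
print(round(ppv(0.70, 0.95, 0.02), 2))  # early/late season: prints 0.22
```

With the test characteristics held fixed, the same positive result is far more believable at 15% prevalence than at 2%, which is why early- and late-season positives warrant culture confirmation.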

As shown in Figure 2, there are 2 early peaks in the percentage of rapid tests with positive results, in early and late October 2005, and an additional late peak in May 2006. These peaks were presumably influenced by false-positive results during periods when influenza virus was not being isolated in the state (i.e., when the prevalence of influenza was presumably very low) and the positive predictive value of the tests was therefore low. However, both rapid and DFA positive test results rose sharply along with a concurrent rise in the number of positive influenza cultures beginning in late November or early December and again in March or April. These parallel trend lines show that a much higher proportion of the rapid tests were true positives once the prevalence of influenza had increased in the state, compared with the earlier period when influenza virus had not yet been isolated.

Both the number of rapid tests performed and the positive test results began to rise sharply after the onset of the season, as we defined it in this study. During the 2005–2006 influenza season, the trend of the number of rapid tests performed over time was similar to the proportion of positive test results reported over the same period (Figure 3). An increase in tests ordered during a time when the positive predictive value of the test is higher suggests that the pretest probability of influenza circulating in the community influences the testing behavior of providers.19

The introduction of rapid influenza testing has raised the concern that it might lead to obtaining fewer viral isolates, thereby reducing our capability to detect circulating influenza strains early.20 To test this concern, among others, the Hawaii Department of Health evaluated the impact of incorporating rapid influenza testing into ongoing surveillance activities between the 1998–1999 and 2000–2001 influenza seasons and found coupling rapid tests with cultures to be an effective means of improving influenza surveillance.21 In New Mexico, as the number of clinical laboratories participating in the rapid influenza testing network nearly doubled, the number of specimens reported to NMDOH for isolation also nearly doubled from the 2005–2006 to the 2006–2007 influenza seasons.

The strengths of New Mexico's influenza surveillance system arise in part from its single, centralized state public health agency. The system has more influenza-like illness health providers than recommended by CDC based on population. NMDOH was able to successfully set up a sentinel influenza-like illness provider network and develop an efficient mechanism for tracking seasonal influenza laboratory tests in this large, sparsely populated, mixed urban and rural western state. After an initial successful sentinel laboratory pilot, NMDOH recruited all clinical laboratories into the surveillance network, making rapid influenza test data collection complete statewide. Furthermore, the state public health laboratory contributes to the success of influenza surveillance by making the confirmation of influenza results a high priority.

Limitations

One limitation of this study is that influenza-like illness data were compared with rapid influenza test surveillance data for only 3 influenza seasons. Another limitation is that during the course of the study, the composition of influenza-like illness and laboratory surveillance sites varied, which may have influenced the findings. In addition, the variability of each season's influenza activity may have contributed to differences in provider laboratory testing behaviors; however, this possibility did not appear to alter the sensitivity of early rapid testing. The type of aggregate rapid influenza laboratory surveillance conducted in New Mexico provides no data related to age or other demographics, and the spectrum of patients for whom testing is performed depends on the medical practice of health care providers; therefore, results may not represent all patients infected with influenza in any given season.

State and local health departments often do not have adequate resources to conduct comprehensive seasonal influenza surveillance. Developing simple systems such as that described here to monitor rapid influenza testing may prove a useful adjunct to other influenza surveillance approaches. Implementing rapid influenza laboratory surveillance may be less easily accomplished in states with local health jurisdictions; however, our experience suggests that this type of system should be explored in states where feasible.

Conclusions

National influenza surveillance consists of influenza-like illness, virologic, hospitalization, and influenza-related mortality data. New Mexico added rapid influenza test surveillance to this list. Our state-based experience with influenza surveillance demonstrates that clinical influenza-like illness syndrome surveillance can track seasonal influenza activity and burden of disease if the sentinel provider network is robust. Virologic surveillance with comprehensive viral isolation data may be overly burdensome, costly, or unavailable to some state programs. In New Mexico, rapid influenza test surveillance provided early detection of the onset of the influenza season, differentiation of circulating influenza types, and an adequate description of the seasonal epidemic pattern with respect to magnitude and time trend.

We were able to compare rapid laboratory-based influenza test surveillance not only with influenza-like illness surveillance but also with traditional influenza culture data. Our results showed favorable comparisons between these 2 types of laboratory-based influenza testing data. This comparison is encouraging, persuading us that it is worthwhile to consider the incorporation of rapid influenza tests into the array of influenza surveillance systems, particularly if comprehensive virologic surveillance is not available. To sum up, rapid laboratory test surveillance provides timely and type-specific influenza data that not only inform prevention and control activities but also require relatively few resources.

Acknowledgments

Financial and material support was provided by the New Mexico Department of Health. M. Mueller was supported by the Centers for Disease Control and Prevention, Emerging Infections Program (cooperative agreement U10/CCU622221).

The authors gratefully acknowledge the contributions of the New Mexico Department of Health Influenza Surveillance Coordinator, Catherine Avery, the influenza sentinel provider sites, and the clinical laboratories that conduct and report rapid influenza test results, all of whom made this study possible. The authors also thank Adam Aragon, Mike Breckenridge, and Judy Klauber, at the New Mexico Department of Health, Scientific Laboratory Division; Stephen Young, at TriCore Reference Laboratories; and Kathryn Henderson, at the University of New Mexico, for their invaluable services in the identification of circulating influenza virus strains in New Mexico.

Human Participant Protection

Institutional review board approval was not obtained because all findings were based on data provided by routine infectious disease surveillance conducted by the New Mexico Department of Health as dictated by state regulations.

References

1. Kung H-C, Hoyert DL, Xu J, Murphy SL. Deaths: final data for 2005. Natl Vital Stat Rep. 2008;56(10):2.
2. New Mexico Selected Health Statistics Annual Report for 2005. Santa Fe, NM: Bureau of Vital Records and Health Statistics, New Mexico Department of Health; 2007.
3. Thompson WW, Shay DK, Weintraub E, et al. Mortality associated with influenza and respiratory syncytial virus in the United States. JAMA. 2003;289:179–186.
4. Thompson WW, Comanor L, Shay DK. Epidemiology of seasonal influenza: use of surveillance data and statistical models to estimate the burden of disease. J Infect Dis. 2006;194:S82–S91.
5. Snacken R, Kendal AP, Haaheim LR, Wood JM. The next influenza pandemic: lessons from Hong Kong, 1997. Emerg Infect Dis. 1999;5:195–203.
6. The World Health Organization Global Influenza Program Surveillance Network. Evolution of H5N1 avian influenza viruses in Asia. Emerg Infect Dis. 2005;11:1515–1521.
7. Monto AS, Comanor L, Shay DK, Thompson WW. Epidemiology of pandemic influenza: use of surveillance and modeling for pandemic preparedness. J Infect Dis. 2006;194:S92–S97.
8. Grijalva CG, Poehling KA, Edwards KM, et al. Accuracy and interpretation of rapid influenza tests in children. Pediatrics. 2007;119:e6–e11.
9. Falsey AR, Murata Y, Walsh EE. Impact of rapid diagnosis on management of adults hospitalized with influenza. Arch Intern Med. 2007;167:354–360.
10. Bonner AB, Monroe KW, Talley LI, Klasner AE, Kimberlin DW. Impact of the rapid diagnosis of influenza on physician decision-making and patient management in the pediatric emergency department: results of a randomized, prospective, controlled trial. Pediatrics. 2003;112:363–367.
11. Poehling KA, Zhu Y, Tang Y-W, Edwards K. Accuracy and impact of a point-of-care rapid influenza test in young children with respiratory illness. Arch Pediatr Adolesc Med. 2006;160:713–718.
12. Abanses JC, Dowd MD, Simon SD, Sharma V. Impact of rapid influenza testing at triage on management of febrile infants and young children. Pediatr Emerg Care. 2006;22:145–149.
13. Schrag SJ, Shay DK, Gershman K, et al. Multistate surveillance for laboratory-confirmed, influenza-associated hospitalizations in children: 2003–2004. Pediatr Infect Dis J. 2006;25:395–400.
14. Centers for Disease Control and Prevention. Section 5: US influenza sentinel provider surveillance network. Presented at: Influenza Surveillance in the United States Surveillance Coordinator's Conference; August 13–14, 2003; Atlanta, GA.
15. Centers for Disease Control and Prevention. Updated guidelines for evaluating public health surveillance systems: recommendations from the Guidelines Working Group. MMWR Recomm Rep. 2001;50(RR-13):1–35.
16. Layne SP. Human influenza surveillance: the demand to expand. Emerg Infect Dis. 2006;12:562–568.
17. Thomas Y, Kaiser L, Wunderli W. The use of near patient tests in influenza surveillance: Swiss experience and EISS recommendations. Euro Surveill. 2003;8:240–246.
18. World Health Organization. WHO Recommendations on the Use of Rapid Testing for Influenza Diagnosis, 2007. Available at: http://www.who.int/csr/disease/avian_influenza/guidelines/RapidTestInfluenza_web.pdf. Accessed January 23, 2008.
19. Szklo M, Nieto FJ. Epidemiology: Beyond the Basics. Gaithersburg, MD: Aspen Publishers; 2000:362–400.
20. Gensheimer KF, Fukuda K, Brammer L, Cox N, Patriarca PA, Strikas RA. Preparing for pandemic influenza: the need for enhanced surveillance. Emerg Infect Dis. 1999;5:297–299.
21. Effler PV, Ieong M-C, Tom T, Nakata M. Enhancing public health surveillance for influenza virus by incorporating newly available rapid diagnostic tests. Emerg Infect Dis. 2002;8:23–28.


Article Citation

Joan Baumbach, MD, MPH, Mark Mueller, MPH, Chad Smelser, MD, Bernadette Albanese, MD, MPH, and C. Mack Sewell, DrPH, MS. Joan Baumbach, Mark Mueller, Chad Smelser, and C. Mack Sewell are with the New Mexico Department of Health, Santa Fe. Bernadette Albanese is with the El Paso County Department of Public Health and Environment, Colorado Springs, CO. "Enhancement of Influenza Surveillance With Aggregate Rapid Influenza Test Results: New Mexico, 2003–2007", American Journal of Public Health 99, no. S2 (October 1, 2009): pp. S372–S377.

https://doi.org/10.2105/AJPH.2007.125450

PMID: 18923127