A novel method for estimating the specificity of the Single Intradermal Comparative Cervical Tuberculin (SICCT) test for bovine tuberculosis (bTB) using surveillance test results is reported. The specificity of the SICCT test at three cut-offs was estimated from the dates, locations and skinfold measurements of all routine tests carried out in Officially TB Free (OTF) cattle herds in Great Britain (GB) between 2002 and 2008, according to their separation (by distance and time) from known infected (OTF-withdrawn) herds. The proportion of animals that tested positive was constant (P>0.20) when the distance between tested herds and the nearest infected herd exceeded 8 km. For the standard cut-off, calculated specificity was 99.98 per cent (95 per cent confidence interval ±0.004 per cent), equating to one false positive result per 5000 uninfected animals tested. For the severe cut-off it was 99.91 per cent (±0.013 per cent) and for the ultrasevere cut-off (selecting all reactors and inconclusive reactors) it was 99.87 per cent (±0.017 per cent). The estimated positive predictive value of the test averaged 91 per cent and varied with regional prevalence. This study provides further evidence of the high specificity of the SICCT test under GB conditions, suggests that over 90 per cent of cattle currently culled using this test in GB were infected, and endorses the slaughter of at least these cattle for bTB control.
- Comparative Tuberculin Skin Test
- Bovine tuberculosis
- Great Britain
- Accepted July 27, 2015.
- British Veterinary Association
This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/
Bovine tuberculosis (bTB) is a notifiable infectious disease of cattle caused by Mycobacterium bovis. This disease is the subject of a statutory national eradication programme in Great Britain (GB) that began in 1950 (MacRae 1961). The programme was initially successful, but bTB prevalence has risen since the 1980s and at least 31,700 cattle were slaughtered in GB as reactors to the Single Intradermal Comparative Cervical Tuberculin (SICCT) test in 2014 (Defra 2015).
The SICCT test is the primary screening test for bTB in cattle in GB and measures the delayed hypersensitivity to purified protein derivatives of standard cultures of Mycobacterium avium and M bovis, called tuberculins (Downs and others 2013). The supplementary Bovigam interferon-γ blood test (Prionics AG, Switzerland) is also sometimes used to help resolve bTB incidents on farms. Neither of these tests has perfect sensitivity or specificity. When specificity is less than 100 per cent, a proportion of animals not infected with M bovis will have a (false) positive reaction to the test, which leads to their slaughter (European Union 2013). A false positive test result can initiate or prolong a bTB herd incident (breakdown) during which the Official Tuberculosis Free (OTF) status of the herd is suspended (OTFS), and can also prolong an incident in which OTF status is withdrawn (OTFW). Imperfect specificity of surveillance tests would increase the costs of bTB eradication campaigns to farmers and government and undermine stakeholder confidence.
In the SICCT test, one can adjust the interpretation (cut-off) criterion that defines a positive result to manipulate the balance between sensitivity and specificity. Lowering the cut-off to increase test sensitivity tends to reduce its specificity, and vice versa. In GB, the standard interpretation of the SICCT test is used for OTF herds and OTFS herd incidents. ‘Severe interpretation’, which uses a more sensitive cut-off, is usually applied in OTFW herd incidents, in which characteristic bTB lesions are detected in a SICCT test reactor, or M bovis is isolated in laboratory cultures.
Detection of tuberculous lesions in the carcases of SICCT reactors and identification of M bovis from carcase tissues can help confirm the presence of bTB in a herd, but these methods are insensitive (Corner and others 1990, Crawshaw and others 2008). Thus, estimation of the specificity of the SICCT test using results from postmortem analyses is imperfect. To address this problem, studies have been conducted in assumed bTB-free cattle populations in various countries and specificity estimated based on the assumption that all positive responses were spurious. These studies have reported specificities for the SICCT test in the range 88.8–100.0 per cent, although importantly each study used a slightly different approach to conduct the SICCT test (reviewed in Monaghan and others 1994, de la Rua-Domenech and others 2006, Schiller and others 2010, Downs and others 2011, EFSA 2012, Hartnack and Torgerson 2012, van Dijk 2013).
This paper reports a novel method for estimating specificity in SICCT-tested cattle, based mainly on epidemiological principles rather than on each animal's postmortem test results. The prevalence of reactors in SICCT surveillance tests is known to decrease with their distance from herds known to be infected with M bovis (Nicholson and others 2013, APHA 2014). The method employs the hypotheses that (1) the decrease in reactor prevalence is a result of decreasing prevalence of true M bovis infection, (2) there is a threshold distance beyond which further increases in distance do not significantly change the proportion of animals that react to the SICCT test, and (3) this threshold denotes the distance beyond which the prevalence of M bovis infection is negligible in comparison with the prevalence of false-positive SICCT test results. Specificity was estimated from SICCT surveillance tests conducted beyond the threshold distance, for each of three different interpretations of the SICCT test.
Materials and methods
Data were downloaded from the Animal and Plant Health Agency (APHA) bTB surveillance database (APHA-SAM) into a Microsoft (MS) Access 2003 file at the start of April 2014. The data recorded all tuberculin tests performed on cattle in GB, including geographical locations, dates and results of SICCT tests in cattle herds. Further classification of the data was performed using MS Access 2003 queries, and the results were transferred to MS Excel 2010 files for statistical analysis and presentation.
The SICCT herd testing events used for calculating specificity were performed during the seven years between January 1, 2002 and December 31, 2008 by trained practising veterinarians and APHA-employed veterinarians. For each event, records were extracted of the herd identity, the total number of animals tested and the animals in those herds that had been identified as inconclusive reactors (IRs) or reactors, along with the sizes (in mm) of their reactions to bovine and avian tuberculin.
SICCT test interpretation
In this study, an animal was considered positive to the SICCT test according to each of three interpretations (cut-offs) of 72-hour reactions to tuberculins (Fig 1):
Standard cut-off: the reaction to bovine tuberculin (Bov) was greater than the reaction to avian tuberculin (Av) by more than 4 mm (European Union 2013);
Severe cut-off: Bov was greater than Av by more than 2 mm;
Ultrasevere cut-off: Bov was greater than Av.
Following APHA custom, the calculations assumed that any Bov or Av reaction equal to or smaller than 2 mm was negligible and treated it as 0 mm.
An increase in the size of the cut-off for the net reaction to bovine tuberculin is accompanied by a reduction in sensitivity so that standard interpretation is less sensitive than severe interpretation and severe interpretation less sensitive than ultrasevere interpretation. The ultrasevere cut-off would require the slaughter not only of reactors according to severe and standard cut-offs, but also IRs.
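The three cut-offs, together with the 2 mm rule above, can be sketched as a small classification function (an illustrative sketch in Python; the function and names are not from the paper, and clinical signs such as oedema are ignored, as in the study's own calculations):

```python
def classify(bov_mm, av_mm):
    """Classify a 72-hour SICCT reaction under the three cut-offs.

    Reactions of 2 mm or less are treated as 0 mm (APHA custom).
    Returns the set of cut-offs at which the animal is positive.
    """
    bov = bov_mm if bov_mm > 2 else 0
    av = av_mm if av_mm > 2 else 0
    net = bov - av  # net reaction to bovine tuberculin
    positive = set()
    if net > 0:
        positive.add("ultrasevere")   # Bov greater than Av
    if net > 2:
        positive.add("severe")        # Bov exceeds Av by more than 2 mm
    if net > 4:
        positive.add("standard")      # Bov exceeds Av by more than 4 mm
    return positive
```

Note that any animal positive at the standard cut-off is necessarily also positive at the severe and ultrasevere cut-offs, which is why the three interpretations form a nested hierarchy.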
Herds with OTF-withdrawn incidents from which M bovis transmission might occur
UK Ordnance Survey grid references of all herds were extracted from APHA-SAM; these enabled calculation of the minimum distance between each non-OTFW herd and the closest herd that was a possible source of infection. A herd with its OTF status withdrawn was considered to be a possible source of transmission to other herds in the 60 months preceding the start of its bTB incident, during its incident, and in the 60 months following the end of its incident; OTFW herds in this time window will be termed hazardous OTFW herds. Data were therefore obtained for all OTFW incidents that were ongoing at any time between January 1, 1997 and December 31, 2013. In the 60 months before the start of an OTFW incident a herd may have acquired infection during (or shortly before) the previous surveillance test, which would have been performed between one year and four years beforehand. In the 60 months after OTF status was restored, the herd may have retained one or more infected animals (Wolfe and others 2010, Conlan and others 2012, Gallagher and others 2013). The study did not attempt to optimise the durations of these 60-month precautionary periods.
Selecting herds with negligible bTB prevalence for the estimation of apparent animal-level specificity
Data for routine surveillance tests at known times and distances from OTFW incidents were used in the calculation of the apparent test specificity. Some of these surveillance tests disclosed OTFS incidents, but most of them did not affect OTF status. To avoid herds with possible undetected infection, SICCT tests conducted because of a recognised risk of M bovis infection were excluded from analysis. Tests in any herd that had experienced an OTFW incident after the start of 1986, check tests performed 6 months and 18 months after the end of an OTFS incident, and tests on individual animals in a herd (e.g. tests on animals traced from known infected herds) were excluded. Tests intended to be performed 42 days or 60 days after another test were also excluded, to avoid repeatedly measuring the same animal and the possible effects on the immune response (Coad and others 2010). Such tests included retests of IRs and short-interval tests during OTFS incidents.
Routine surveillance tests that marked the start of an OTFS incident were the only tests in which standard-interpretation reactors were counted. Any routine surveillance test that identified an IR for the first time provided additional data for calculating the results of severe or ultrasevere cut-offs.
As stated above, it was assumed that the suitability of test data for the calculation of specificity depended upon the distance between the surveillance-tested herd and the nearest hazardous OTFW herd in a five-year window. Identical analyses were performed for a series of 13 distance ranges. These were: >0–1 to >5–6 km in 1-km steps; >6–8 km; >8–10 km; >10–15 to >25–30 km in 5-km steps; and >30 km (Figs 2 and 3). Specificity reported in this paper is the estimate of specificity at a distance range that was not significantly different from the estimate of specificity at greater distances from the nearest hazardous OTFW incident. Where more than one estimate fulfilled this criterion, the estimate with the smallest 95 per cent confidence interval was selected.
Apparent animal-level specificity was calculated as 1.0 minus the proportion of animals in apparently bTB-free herds that were SICCT test reactors, at the standard, severe or ultrasevere cut-offs; therefore:

Apparent specificity = 1 − [Number of SICCT test reactors]/[Number of animals tested]
If M bovis infection is absent, apparent specificity is equal to true specificity.
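The definition above amounts to a one-line calculation (a minimal sketch; the function name is illustrative):

```python
def apparent_specificity(n_reactors, n_tested):
    """Apparent animal-level specificity: 1 minus the proportion of
    SICCT test reactors among animals tested in apparently bTB-free herds."""
    return 1.0 - n_reactors / n_tested

# One reactor per 5000 uninfected animals tested corresponds to the
# reported standard-cut-off specificity of about 99.98 per cent.
```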
SICCT test measurements (from APHA-SAM) were interpreted according to the TB64 chart for England (Fig 1). Information on oedema at the injection site was not available for all tests in 2002–2008 and has been omitted from Fig 1 because in practice it had little effect on the number of standard or severe reactors. The relationship between interpretations and cut-offs is shown in Table 1.
Positive predictive value
The positive predictive value (PPV) is the proportion of animals with positive test results that are truly infected. The calculations in this paper assume that true specificity is homogeneous everywhere in GB. Values of PPV were calculated for the years 2002–2008, for which specificities have been estimated. It was assumed that tests in hazardous OTFW incidents were performed at severe interpretation and all other tests were performed at standard interpretation. Conventionally,

PPV = [True positive reactors]/([True positive reactors] + [False positive reactors])

But because the denominator represents [Total reactors], the equivalent expression

PPV = ([Total reactors] − [False positive reactors])/[Total reactors]

is used.
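The conventional expression and the form used in the paper can be sketched and checked for equivalence (illustrative Python, not the authors' code):

```python
def ppv_conventional(true_pos, false_pos):
    """PPV = TP / (TP + FP)."""
    return true_pos / (true_pos + false_pos)

def ppv_from_totals(total_reactors, false_pos):
    """Equivalent form: since TP + FP = total reactors,
    PPV = (total reactors - FP) / total reactors."""
    return (total_reactors - false_pos) / total_reactors
```

The second form is convenient because surveillance data record total reactors directly, while the number of false positives must be inferred from the specificity estimates.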
PPV was calculated for three groups of counties in GB that varied in bTB prevalence in 2013 (APHA 2014). The groups, called Risk Areas, have been defined for England by Defra (2015), and equivalent groups in Scotland and Wales are added here for completeness. The prevalence of bTB in each group is indicated by the median and interquartile range (IQR) of the proportion of herds under movement restriction (PHMR) in each county:
The counties with a high prevalence of bTB, which include the High Risk Area of England and counties in Wales that had a similarly high prevalence in 2013. The counties in Wales (as named in APHA-SAM) were Dyfed, Gwent, Powys and West Glamorgan. The median PHMR was 9.0 per cent and the IQR 6.1–11.3 per cent;
The medium-prevalence counties, which comprise the so-called Edge Area in England (Defra 2015) and equivalent counties in Wales (Clwyd, Mid Glamorgan and South Glamorgan). Median PHMR was 1.7 per cent and IQR 0.9–4.3 per cent;
The low-prevalence counties, which comprise the Low-Risk Area of England (Defra 2015), two equivalent counties in Wales (Gwynedd and Anglesey), and the whole of Scotland. Median county PHMR was 0.3 per cent and IQR 0.2–0.5 per cent.
For the production of specificity charts, the upper and lower confidence limits were calculated from the variance between specificity estimates for 14 six-month periods on the assumption that the estimates were independent and normally distributed.
The asymptotic regression equations used were of the form:

ŷ = Asymptote − β × e^(−γ × [distance])

where ŷ was fitted to a value such as apparent specificity and the calculated parameters were Asymptote, β and γ. To do this, the Solver tool in Excel 2010 was forced to minimise the sum of squares of deviations between ŷ and observed values, weighted by [SE]^−2.
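The weighted fit performed with the Excel Solver can be reproduced with a standard least-squares routine. The sketch below is illustrative only, using synthetic, noise-free data shaped like the standard-interpretation curve reported in the Results; passing each estimate's SE as `sigma` is equivalent to weighting by [SE]^−2:

```python
import numpy as np
from scipy.optimize import curve_fit

def asymptotic(x, asymptote, beta, gamma):
    # y-hat = Asymptote - beta * exp(-gamma * distance)
    return asymptote - beta * np.exp(-gamma * x)

# Synthetic apparent-specificity values (per cent) at distance-range
# midpoints (km), generated from assumed parameters for illustration.
distance = np.array([1.5, 2.5, 3.5, 4.5, 5.5, 7.0, 9.0, 12.5, 17.5, 22.5])
spec = 99.983 - 0.0778 * np.exp(-0.528 * distance)
se = np.full_like(distance, 0.004)  # assumed constant SE

# sigma=se makes least squares weight each point by SE**-2.
params, _ = curve_fit(asymptotic, distance, spec, sigma=se,
                      p0=[99.9, 0.1, 0.5])
asymptote_hat = params[0]
```

With noise-free input the fitted asymptote recovers the generating value, which is the quantity interpreted as true specificity.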
The effect of distance on estimated specificity was calculated from the following comparisons: the estimates for >0–1 km and >1 km were compared, then the estimates for >1–2 km and >2 km, and so on until the estimates for >25–30 km and >30 km. The SEs of these comparisons were calculated from data for each six-month period, and the statistical significance of the comparisons was obtained using a t test. The test was one-tailed, because comparisons suitable for specificity calculations were those that did not increase with distance. Because several comparisons were conducted, a Šidák (1967) adjustment was applied to all significance probabilities to protect against overoptimistic attribution of significance:

P(adjusted) = 1 − (1 − P)^n

where n is the number of comparisons made.
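In code, the standard Šidák adjustment (assuming independent comparisons; the function name is illustrative) is:

```python
def sidak_adjust(p, n_comparisons):
    """Sidak-adjusted significance probability: the chance of at least
    one result this extreme among n independent comparisons."""
    return 1.0 - (1.0 - p) ** n_comparisons
```

For example, a raw P of 0.01 across 12 comparisons adjusts to about 0.11, no longer significant at the 5 per cent level.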
Results

Relation between apparent specificity and distance from the nearest potential source of infection
The increase in the apparent specificity of the SICCT test with the distance from hazardous OTFW incidents is shown in Fig 2, for standard, severe and ultrasevere cut-offs; the error bars represent 95 per cent confidence limits. The chart for standard cut-off is shown with an expanded vertical axis in Fig 3, and raw data are shown in Table 2. The threshold beyond which apparent specificity was unaffected by distance from hazardous OTFW herds was calculated from the same data.
The significance probabilities of pairwise comparisons between estimated specificities at different distances from hazardous OTFW incidents are shown in Table 3. Of the first four distance comparisons (down to ‘Over 4 km versus >3 to 4 km’), all were statistically significant (P<0.05) at the severe and ultrasevere cut-offs and three were significant (P<0.05) at the standard cut-off. At greater threshold distances fewer of the comparisons approached significance, except when >6–8 km was compared with >8 km at the severe and ultrasevere cut-offs (P=0.046 and P=0.056). No comparison at greater distances showed any increase (all P>0.20). For this reason, 8 km was taken as the threshold distance from hazardous OTFW incidents beyond which distance had no effect on apparent specificity. The distance from hazardous incidents below which the evidence for the presence of undetected infected animals was consistently significant was 4 km.
Specificity estimated by asymptotic regression
The second method for estimating SICCT test specificity was also based on the distance of tests from hazardous OTFW incidents, but was calculated by asymptotic regression. The asymptote was the predicted value that apparent specificity would approach if the distance were infinitely large. If the assumption of homogeneous specificity holds, the asymptote would represent true specificity. Figs 2 and 3 show curved lines fitted by asymptotic regression to estimates of apparent specificity at the midpoints of all except the first and last distance ranges, weighted by the SE of the estimates to the power of −2. The asymptotic regression equations for apparent specificity, with chi-square (χ2), degrees of freedom (d.f.) and significance probability of χ2 (P), are:
Standard interpretation (per cent): 99.983 − 0.0778 × e^(−0.528 × [distance]), χ²=9.46, 8 d.f., P=0.30;
Severe interpretation (per cent): 99.917 − 0.526 × e^(−0.541 × [distance]), χ²=4.94, 8 d.f., P=0.76;
Ultrasevere interpretation (per cent): 99.876 − 0.911 × e^(−0.545 × [distance]), χ²=8.37, 8 d.f., P=0.40.
A non-significant χ² value indicates a good fit. This method gives results that are well within the 95 per cent confidence limits of the estimates for distances exceeding 8 km (Table 4).
Location of herds used in the estimation of specificity
Nearly half (14,598) of the 30,483 herd tests that were selected for the estimation of specificity were performed in Scotland (Fig 4). Herds in the north-west of Scotland were the furthest from herds that experienced a hazardous OTFW incident during the period of analysis. Less than five per cent (1484) of the selected herd tests were performed in Wales. The remaining 14,401 herd tests were in two of the risk areas defined by Defra (2015) for England, most of them being in the Low-Risk Area (12,629) with a smaller number in the Edge Area (1728).
Number of reactors in the disclosing test for OTFS incidents
There were 211 OTFS incidents with at least one reactor and 24 with more than one reactor disclosed among the 30,480 herd tests selected for the estimation of specificity (Table 5). This represented a proportion of 11.4 per cent of OTFS incidents with more than one reactor (95 per cent confidence interval 7.4, 16.5 per cent), or approximately 12.9 per cent of OTFS incidents calculated by asymptotic regression. At the disclosing tests of OTFS incidents, an average of 1.194 reactors was detected (1.163 by asymptotic regression). All these values are higher than expected from a purely random distribution of false positive reactors (unpublished calculations).
Positive predictive value
The number of SICCT reactors detected by the 28.50 million animal tests performed in the high-prevalence counties between 2002 and 2008 was 159,604 (APHA-SAM data). Severe interpretation of the SICCT test would have been used for most of the tests in OTFW incidents (10.94 million tests), and standard interpretation was used for the remaining 17.57 million tests. If false-positive rates (that is, 100 per cent minus specificity) were 0.017 and 0.085 per cent for standard and severe cut-offs, respectively, one would expect to find 2987 standard and 9295 severe false-positive reactors in the high-prevalence counties of GB. This would imply that the PPV of the test in the high-prevalence counties was 92.3 per cent (Table 6), as in:

PPV = (159,604 − 2,987 − 9,295)/159,604 = 92.3 per cent
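The arithmetic behind this figure can be checked directly from the counts quoted above (a minimal sketch; variable names are illustrative):

```python
# Worked PPV check for the high-prevalence counties, using the
# reactor and false-positive counts quoted in the text.
total_reactors = 159_604
fp_standard = 2_987   # expected among 17.57 million standard-cut-off tests
fp_severe = 9_295     # expected among 10.94 million severe-cut-off tests

ppv_high = (total_reactors - fp_standard - fp_severe) / total_reactors
print(f"{ppv_high:.1%}")  # prints 92.3%
```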
Details of the calculation of PPV estimates for the medium and low-prevalence counties of GB (88.6 per cent and 77.2 per cent, respectively) are also shown in Table 6. The PPV of the test for GB as a whole was 91.8 per cent.
Discussion

The relationship between estimated specificity and distance from the nearest hazardous OTFW incident suggests that the true specificity of the SICCT test is equal to apparent specificity in herds >8 km from any herd that had OTF status withdrawn currently, or in the previous or next five years. Herds within 8 km of known infected herds are more likely to harbour undetected infected animals than herds further away. The proportion of animals reacting to the SICCT test in OTFS disclosing tests is significantly elevated if hazardous OTFW herds are up to 4 km away, suggesting that some of those reactors represent unconfirmed infected animals. This distance is consistent with the median rate of advance of the edge of endemic bTB areas in GB between 2002 and 2012, around 4 km a year (Nicholson and others 2013).
Relevance of specificity estimates
The three estimates of specificity were based on all data from routine surveillance tests in herds >8 km from known M bovis infection hazards. The calculations excluded data from contiguous tests, tracing tests, check tests, IR retests and within-incident tests. These test types were excluded because there was an elevated risk that some animals had been infected with M bovis. Premovement tests were not included because they were not performed in the first half of the study period; specificity would have been unaffected unless these tests had been performed with a different precision from other tests.
Numbers of SICCT test reactors in OTFS incidents
The proportion of OTFS incidents with more than one reactor at the disclosing test was generally larger than would be expected if reactors had been randomly distributed between herds. This was most prominent close to hazardous OTFW herds, but occurred even in herds more than 8 km away from any hazardous OTFW incident (Table 5). Two processes seem to be occurring. First, for all herds with OTFS incidents, the underlying risk of false positive reactors varied from herd to herd, possibly because of spatial clustering. Secondly, nearer to hazardous OTFW incidents (significantly within 3 km; Table 5), the number of OTFS disclosing tests with more than one reactor was increased, suggesting that some of the reactors were, in fact, infected. Surveillance maps also show this pattern, for example that for 2013 (APHA 2014, Figure 2.2). These observations are consistent with the practice in Ireland, where multireactor breakdowns are managed as if infected, irrespective of postmortem and microbiological results (Good and Duignan 2011).
Other estimates of skin test specificity in the literature
The 11 estimates of SICCT test specificity from field data reviewed by van Dijk (2013) ranged from 94.0 per cent to 100.0 per cent, with a Monte-Carlo simulated mean of 98.0 per cent, but these estimates included tests that used different approaches and were conducted under different conditions from those in the UK. Some of the published specificity estimates used herd test data without regard to their proximity to infected herds, or have erroneously assumed that all test reactors lacking pathological or bacteriological confirmation of infection were false positive (uninfected) reactors (e.g. Hartnack and Torgerson 2012).
Recent studies using latent class analysis assume that no perfect reference (‘gold’) standard exists for bTB, but are forced by the methodology to use subpopulations with diverse disease prevalence. Some of these are likely to come from animals exposed to M bovis organisms or antigens (see Hartnack and Torgerson 2012; Clegg and others 2011; EFSA 2012), which are arguably inappropriate for specificity estimation. Analyses of GB data are more consistent with the results presented here. A median value of 100 per cent with a credible interval 99 per cent to 100 per cent was found in a Bayesian framework meta-analysis (Downs and others 2011, EFSA 2012), and modal posterior estimates of 99.977 per cent and 99.997 per cent were derived from GB data by Approximate Bayesian Computation in the within-herd models of Conlan and others (2012).
The assumption of spatially homogeneous specificity
The estimates of PPV assume that true specificity is not affected by distance from infected herds. The time between surveillance tests varied between one year and four years during the study, and herds closer to hazardous OTFW incidents tended to have shorter testing intervals owing to bTB control protocols. In contrast to possible increases in specificity in frequently tested animals (Coad and others 2010), proximity to hazardous OTFW incidents was actually associated with reduced apparent specificity (Figs 2 and 3). The smooth asymptotic relationship between apparent specificity and distance from hazardous OTFW incidents suggests that there are no abrupt changes in true specificity as the distance increases.
Low-prevalence areas are exactly the areas where specificity is of greatest importance in bTB control. Those areas provided the data from which specificity has been estimated in this study.
The estimate of SICCT specificity for tests more than 8 km from OTFW incidents varied significantly from one half-year to another between 2002 and 2008 (P<0.001 for all three cut-offs). This may have been associated with batch-to-batch differences in tuberculin (Tameni and others 1998, Downs and others 2013). The coefficient of variation of false positive reactor prevalence calculated from six-monthly estimates was 40 per cent, 26 per cent and 22 per cent for the standard, severe and ultrasevere cut-offs, respectively. Most tuberculin used in GB between 2002 and 2008 was produced at Weybridge, but in the first nine months of 2006 and for the 12 months starting in April 2007 it came from Lelystad Biologicals B.V. in the Netherlands (Downs and others 2013). The data (not shown) support the finding of Downs and others that the specificity of the SICCT test was marginally greater with Lelystad tuberculin than with Weybridge tuberculin.
Positive predictive value
In GB as a whole, the PPV of the SICCT test for the mix of interpretations currently used was 91.8 per cent, meaning that 11 out of 12 SICCT test reactors were truly infected. This figure is only slightly smaller than the PPV estimate for the high-prevalence counties, and is over twice as large as the proportion of SICCT reactors that have postmortem evidence of infection in GB (30–40 per cent; Figure 6.2 in APHA 2014). It suggests that postmortem and laboratory investigations profoundly underestimate the proportion of reactors that are truly infected. In the high-prevalence counties, the PPV for ultrasevere interpretation is acceptable (88.9 per cent, 95 per cent confidence interval 87.4 per cent to 90.3 per cent), which would appear to justify culling all IRs as well as reactors from herds. This value is based on aggregate data, and precautions would be needed against removing false positive reactors from so-called ‘non-specific reactor’ herds, which yield an excessive number of such reactors. One should also remember that reactors in routine surveillance tests supplying data for the specificity calculations were assumed to be uninfected, which implies that the PPV of these particular tests was zero.
Conclusions

Values for the specificity of the SICCT test between 2002 and 2008 (with 95 per cent confidence intervals) have been calculated for surveillance tests in herds that had not suffered an OTFW incident between 1986 and 2013 and were at least 8 km from any OTFW incident in the previous or next five years. The specificity estimates for the three cut-offs were as follows:
Standard cut-off or interpretation: 99.983 per cent (99.979, 99.987 per cent);
Severe cut-off, which includes severe and standard reactors: 99.915 per cent (99.904, 99.929 per cent); and
Ultrasevere cut-off, which includes inconclusive, severe and standard reactors: 99.871 per cent (99.853, 99.887 per cent).
These values imply that the SICCT test will give rise to one false positive reactor animal for every 4760–7690, 1040–1410 or 680–885 uninfected animals tested, for standard, severe and ultrasevere cut-offs, respectively.
The average number of animals at each surveillance test throughout GB was 45.3, and there were asymptotically 1.163 false-positive standard-interpretation reactors at the disclosing test of each OTFS incident. Given these conditions, one would expect (with a 95 per cent probability) to find between five and eight new OTFS incidents in surveillance tests in every thousand uninfected herds. That is equivalent to a herd-wise specificity of between 99.2 per cent and 99.5 per cent for the average-sized test. Tests of larger herds are likely to suffer lower specificity: for example, a herd in which 250 animals are tested would have an expected herd-wise specificity of around 96.5 per cent.
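A rough sketch of this arithmetic (assuming, as above, that false positive reactors cluster at about 1.163 per disclosing test; the paper's exact calculation may differ slightly, so the figures here are approximate):

```python
def herd_specificity(n_tested, fp_rate=0.00017, reactors_per_incident=1.163):
    """Approximate herd-wise specificity: the probability that a
    surveillance test of n_tested uninfected animals discloses no
    OTFS incident, given a per-animal false-positive rate and
    clustering of false positives within disclosing tests."""
    expected_incidents = n_tested * fp_rate / reactors_per_incident
    return 1.0 - expected_incidents
```

For the average GB test of 45.3 animals this gives about 99.3 per cent, within the 99.2–99.5 per cent range quoted; for a 250-animal test it gives about 96.3 per cent, close to the approximately 96.5 per cent quoted above.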
According to the calculated PPV of the SICCT test, 91.8 per cent of reactors in GB were infected, varying between 92.3 per cent (95 per cent confidence interval 91.1 to 93.7 per cent) in the high-prevalence counties and 76.9 per cent (72.1 to 82.0 per cent) in the low-prevalence counties. The proportion of SICCT test reactors in which visible lesions were found or from which M bovis could be cultured is of the order of 30–40 per cent (APHA 2014, Figure 6.2). Thus, a small majority of SICCT test reactors with no visible lesions in each group of counties of GB – high, medium or low prevalence – is likely to be infected.
The study indicates that the SICCT test, as used in GB, has a very high specificity. The findings suggest that over 90 per cent of reactor cattle identified only by skin test in GB between 2002 and 2008 were infected and endorse the compulsory slaughter of all SICCT test reactor cattle for effective disease control.
Provenance: not commissioned; externally peer reviewed
Funding Funding for the analysis and writing of this paper and for open publication fees was provided by the UK Department for Environment, Food and Rural Affairs (Contract L, Project SB4500). JLNW is funded by the Alborada Trust and the Research and Policy for Infectious Disease Dynamics (RAPIDD) Program of the Science and Technology Directorate, Department of Homeland Security, and the Fogarty International Center, US National Institutes of Health.