
## Sensitivity Analysis

In the May 2013 meeting, it was suggested that such an analysis be carried out on the scoring matrix. Below are some comments by Eduardo Somarriba, CATIE.

Some reflections leading to the issue of a SENSITIVITY ANALYSIS for TechFit.

The basic logic is the following. FEAST tells us whether feed is an important issue at a given location and gives a good description of the farming system, context, and livestock component. TechFit offers a framework to evaluate and prioritize feed options based on expert knowledge.

In TechFit, experts compile a list of possible INTERVENTIONS (INT1-INT45, or so, in the current spreadsheet) and evaluate each of them in terms of CRITERIA (farming system, main constraints, type of commodity, requirement of production factors), each composed of FACTORS (for instance, farming systems may be pastoral, extensive-mixed pastoral-crop farms, intensive-mixed pastoral-crop farms, and commercial pastures; main constraints include quantity, quality, and seasonality; etc.), which are given GRADES (0, 1, 2, 3, 4; an ordinal sequence). By summing the grades given by the expert to all factors evaluated, a SCORE is obtained for each intervention.
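The scoring described above can be sketched as follows. This is a minimal illustration, not the actual spreadsheet logic; the intervention names, factor names, and grades are hypothetical.

```python
# Ordinal grade scale used by TechFit experts.
GRADES = (0, 1, 2, 3, 4)

# Each intervention maps factor names to expert grades.
# These entries are invented for illustration only.
interventions = {
    "INT1": {"pastoral": 3, "quantity": 2, "seasonality": 4},
    "INT2": {"pastoral": 1, "quantity": 4, "seasonality": 0},
}

def score(factor_grades):
    """The SCORE of an intervention is the sum of its factor grades."""
    assert all(g in GRADES for g in factor_grades.values())
    return sum(factor_grades.values())

# One total SCORE per intervention.
scores = {name: score(grades) for name, grades in interventions.items()}
```

In this toy example, `scores["INT1"]` is 3 + 2 + 4 = 9.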

Some conceptual questions: Is a higher score always a better intervention? Are the scores assigned to the current list of interventions (the knowledge pool) immutable? When new interventions are added, will experts score them and include them in the immutable knowledge pool? Currently, factors are evaluated independently of each other, when in practice they are hierarchical. For instance, a medium-sized pig producer in an intensive-mixed crop-livestock farming system in the mid-elevation (300-1500 m) humid tropics will look down the subset of columns (criteria) that best describe that farmer. It is also evident that, for a given farmer-community-location, not all columns are relevant. Consequently, the analysis of the knowledge TechFit provides to this farmer will involve a subset of rows, each evaluated with a subset of columns. The farmer will need to look only at the SUB-SCORES (both row and column) of this reduced matrix of knowledge. We have been focusing on the total row scores of the full matrix!
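The reduced-matrix idea can be sketched like this: keep only the columns (factors) relevant to a given farmer, then compute row and column SUB-SCORES over that subset. All names and grades below are hypothetical, assumed only for illustration.

```python
# Full matrix: interventions (rows) x factors (columns), with expert grades.
matrix = {
    "INT1": {"intensive-mixed": 4, "quality": 3, "labour": 1, "pastoral": 0},
    "INT2": {"intensive-mixed": 2, "quality": 4, "labour": 3, "pastoral": 4},
    "INT3": {"intensive-mixed": 0, "quality": 1, "labour": 2, "pastoral": 3},
}

# Columns that describe, say, a pig producer on an intensive-mixed farm;
# irrelevant columns (here "pastoral") are simply dropped.
relevant_factors = ["intensive-mixed", "quality", "labour"]

# Row sub-scores: sum over the relevant columns only.
row_subscores = {
    intervention: sum(grades[f] for f in relevant_factors)
    for intervention, grades in matrix.items()
}

# Column sub-scores: sum each relevant factor over all interventions.
col_subscores = {
    f: sum(grades[f] for grades in matrix.values())
    for f in relevant_factors
}

# Rank interventions for this farmer by row sub-score, descending.
ranking = sorted(row_subscores, key=row_subscores.get, reverse=True)
```

Note that the ranking over the reduced matrix can differ from a ranking over full-matrix row totals, which is exactly the point made above.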

Now we take on the sensitivity-analysis issues. Two main sources of error can be identified in the construction of the full matrix: 1) the precision with which the list of columns describes the situation and location being considered; additional criteria may be needed, for instance agroecological zonation and socio-economic levels, to mention just a couple. We need to think hard about determining the full set of factors that could provide a good representation of the conditions where TechFit will be used; and 2) the subjectivity in the grading of the factors; different groups of experts will assign different grades to the same factors. We need to ask as many experts as possible to grade the same factors so that we can determine the consistency of the grading of each factor. As a preliminary sensitivity analysis, I would follow the protocol below (I am sure that a statistician will have better ideas about how to do this).

Assuming that we have a good set of factors and a long list of potential interventions to choose from, we will probably end up with a reduced matrix, say 10 x 10, with rows sorted by descending row total. We then compare the row totals of the reduced matrix with a theoretical distribution of row-total values generated by simulation as follows. For each cell in the reduced matrix, generate a random number from the set of grades: 0, 1, 2, 3, or 4. For the moment, assume equal probability of selection for all grades; with more experts providing grades for the same factors, we may in the future be able to test other assumptions about the probability of selecting grades for each factor. Simulate one value for each factor in the reduced matrix and compute a row-total value. Iterate 1000 times or more and construct a theoretical cumulative frequency distribution of row totals. This graph (if desired, we can fit a non-linear function to the data, e.g. a Weibull, and obtain equations for both the probability density function and the cumulative distribution function of the row totals) tells us the probability of obtaining a row-total value equal to or lower than the observed one by simple chance.
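The simulation protocol above can be sketched in a few lines. The matrix size, iteration count, and seed are assumptions for illustration; the equal-probability grade model is the one proposed in the text.

```python
import random

GRADES = (0, 1, 2, 3, 4)
N_FACTORS = 10      # columns of the reduced matrix
N_ITER = 1000       # "iterate 1000 times or more"

def simulate_row_totals(n_factors=N_FACTORS, n_iter=N_ITER, seed=42):
    """Simulate row totals under the chance model: one random grade
    per factor, each grade drawn with equal probability."""
    rng = random.Random(seed)
    totals = []
    for _ in range(n_iter):
        totals.append(sum(rng.choice(GRADES) for _ in range(n_factors)))
    return sorted(totals)

def p_at_or_below(observed, totals):
    """Empirical cumulative frequency: P(row total <= observed)
    under the chance model."""
    return sum(t <= observed for t in totals) / len(totals)
```

An observed row total whose `p_at_or_below` value is close to 1 is one that random grading would rarely exceed; the sorted `totals` list is the empirical cumulative frequency distribution to which a Weibull curve could later be fitted.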

After going through all this thinking, various questions related to FEAST-TechFit came to my mind; maybe we can turn them into joint research projects. Three key questions: 1) Is TechFit equally useful in semiarid, subhumid, humid, and wet regions; in lowlands, mid-elevations, and highlands; and in the tropics, subtropics, and temperate zones? We (CATIE-CIAT-ILRI, and others) could try to answer these questions for the agroecological conditions of the Central American region within the platform of our pilot development projects in Nicaragua (although we also have similar actions in Honduras, Guatemala, and El Salvador; there are possible connections with our new MAPNoruega project and other activities of the GAMMA animal-production program). 2) How do we link the action research resulting from the application of FEAST-TechFit to existing development projects in selected key regions, in close partnership with development partners? The current flow chart describing TechFit ends with action research, but we need to say how to do it and, most importantly, how to embed this action research into an existing or new development initiative at selected locations. 3) How do we incorporate farmers' knowledge into the TechFit knowledge database? How do farmers' grades compare with experts' grades for the same factors? Why do they differ, and how can the gap be closed?