Outcome-Network.org

An International Database and eJournal for Outcome-Evaluation and Research

Paper

Research design, effectiveness and evidence

Abstract

Complexity as a discriminating factor. When we test new solutions, we typically reduce the number of variables in order to concentrate observations on a small set of units; in this way we can manage multidimensional problems. This is consistent with researchers' view of the strength of recommendations as a hierarchy of research designs.

Tab. 1 - Levels of evidence and strength of recommendations, SIGN-CEBM methodology (Spread 2003)

Levels of evidence

1++: High-quality meta-analyses without heterogeneity, systematic reviews of RCTs each with small confidence intervals (CI), or RCTs with a very small CI and/or very small alpha and beta.

1+: Well-conducted meta-analyses without clinically relevant heterogeneity, systematic reviews of RCTs, or RCTs with a small CI and/or small alpha and beta.

2++: High-quality systematic reviews of case-control or cohort studies, or high-quality case-control or cohort studies with a very small CI and/or very small alpha and beta.

2+: Well-conducted case-control or cohort studies with a small CI and/or small alpha and beta.

3: Non-analytic studies, e.g. case reports, case series.

4: Expert opinion.

Note: Meta-analyses with clinically relevant heterogeneity, systematic reviews of studies with large confidence intervals, and studies with large confidence intervals and/or large alpha and/or large beta are classified as "-" (minus).

Strength of recommendations

A: At least one meta-analysis, systematic review, or RCT rated as 1++ and directly applicable to the target population; or a systematic review of RCTs, or a body of evidence consisting principally of studies rated as 1+, directly applicable to the target population and demonstrating overall consistency of results.

B: A body of evidence including studies rated as 2++, directly applicable to the target population and demonstrating overall consistency of results; or evidence extrapolated from studies rated as 1++ or 1+.

C: A body of evidence including studies rated as 2+, directly applicable to the target population and demonstrating overall consistency of results; or evidence extrapolated from studies rated as 2++.

D: Evidence level 3 or 4; or evidence extrapolated from studies rated as 2+; or evidence from studies classified as "-" (minus), regardless of level.

GPP (Good Practice Point): Recommended best practice based on the clinical experience of the guideline development group, in the absence of research evidence.




This hierarchy underlying the strength of recommendations is debated, mainly because formal criteria and accuracy in data management must not prevail to the point of modifying the study objective and making it different from what it is. For example, the latest version of the stroke guidelines (2005) partly questions this approach by declaring: "This classification is based, however, on exclusively mathematical-statistical criteria, assigning the strength of evidence to meta-analyses, randomized controlled experimental studies, retrospective analyses, prospective follow-up studies, cross-sectional population studies, systematic reviews, and anecdotal evidence". This is understandable for clinical studies only, particularly those focused on outcome evaluations carried out in a sample of specific interest. It does not apply directly, but only by analogy and extension, to non-clinical evaluations, such as epidemiological information, quality-of-life evaluation, econometric models and studies aimed at optimising resource allocation; to extrapolations to populations that are not directly the target of the recommendation in question; or to situations where the object of the recommendation cannot adequately be evaluated experimentally. The same arguments are being discussed in other contexts.

To summarize, the problem of theoretical and epistemological interest can be expressed in these terms: is it enough to base the strength of evidence on the level of experimentation, classified according to the hierarchy just presented? The hierarchy takes into account the complexity of the methodology rather than the number of variables considered in the study.

For this reason, is it not necessary to consider the relationship between the complexity of the experimentation and the number of variables managed by the study? Indeed, it would be strange if the large number of variables considered had no relevance of its own when correctly interpreting results and their strength.

Simple and/or sectoral problems involve a small number of variables. Complex problems require a greater number of variables - those which impact on the problem studied - to be observed and managed in context. For example, a study may combine a high number of variables with a lower level of evidence, or vice versa. This is a matter of possible combinations of factors, which must be properly weighted by taking their functional links into account. An intuitive and descriptive way of doing this is shown in the following figure, where the two factors presented above are put into relation.

Using traditional logic, we can see that the sequence of evidence levels on the x-axis can be treated as a nominal scale. As the example shows, we can have a study at evidence level 2+ based on n variables (in this case 12) and a study at level 1+ based on n ± x variables.
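As a minimal sketch of this placement (the ordinal ranking of levels and the example studies below are assumptions for illustration, not data from the paper), each study can be plotted as a point whose x-coordinate is the position of its evidence level on the scale and whose y-coordinate is the number of variables it manages:

    # Illustrative sketch only: place studies on the two axes discussed above.
    # The ranks and the example studies are hypothetical assumptions.
    EVIDENCE_RANK = {"4": 0, "3": 1, "2+": 2, "2++": 3, "1+": 4, "1++": 5}

    studies = [
        ("Study A", "2+", 12),  # evidence level 2+ based on 12 variables
        ("Study B", "1+", 8),   # evidence level 1+ based on fewer variables
    ]

    for name, level, n_vars in studies:
        # x: position of the evidence level on the (nominal) axis
        # y: number of variables managed by the study
        print(f"{name}: x={EVIDENCE_RANK[level]}, y={n_vars}")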

If we wanted to place these studies differently, so as to take the weight of both factors (number of variables and level of evidence) into account, we would observe a different distribution of the levels of evidence. The question is whether the strength of recommendations should - in some cases - be defined not only by the level of the study but also by the quantity and nature of the variables managed by the research.
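One way to make this weighting concrete - purely as a hedged sketch, since the paper leaves the actual weighting open - is a linear combination of the two factors; the weights, normalisation bounds, and example scores below are assumptions chosen for illustration:

    # Hypothetical weighting: combine the ordinal rank of the evidence level
    # with the (normalised) number of variables managed by the study.
    EVIDENCE_RANK = {"4": 0, "3": 1, "2+": 2, "2++": 3, "1+": 4, "1++": 5}

    def combined_strength(level: str, n_variables: int,
                          w_level: float = 0.7, w_vars: float = 0.3,
                          max_rank: int = 5, max_vars: int = 20) -> float:
        """Weighted score in [0, 1]; higher suggests stronger evidence."""
        level_term = EVIDENCE_RANK[level] / max_rank
        vars_term = min(n_variables, max_vars) / max_vars
        return w_level * level_term + w_vars * vars_term

    # With level-heavy weights the 1+ study scores higher ...
    print(round(combined_strength("2+", 12), 2))            # 0.46
    print(round(combined_strength("1+", 4), 2))             # 0.62
    # ... but variable-heavy weights can reverse the ranking.
    print(round(combined_strength("2+", 12, 0.4, 0.6), 2))  # 0.52
    print(round(combined_strength("1+", 4, 0.4, 0.6), 2))   # 0.44

Under such a scheme, whether a 2+ study managing many variables outranks a 1+ study managing few depends entirely on the chosen weights, which is precisely the open question raised here.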

For problems with a multifactorial aetiology that require multiprofessional interventions and competences, the number of variables increases. Denying or reducing this number also means reducing and modifying the study itself, with the risk of obtaining useless results. Given these premises, it is important to discuss this epistemological question, at least in general terms, in order to make sense of research and experimentation results. This is necessary to give a deeper and more coherent meaning to the knowledge describing interventions and their impact, also in terms of measurable outcomes. Research designs include:

  • single cases and case series,
  • cross-over designs,
  • multiple time series,
  • case-control studies,
  • cohort studies,
  • randomized and semi-randomized trials,
  • systematic reviews and meta-analyses,
  • comparative studies.

For all these reasons, the potential meaning of experimental designs, as well as their explanatory value, should be reconsidered, since they depend on at least two determinants: the number of variables and the nature of the design used. A process based on a single variable is limiting and has no significance in itself; it acquires significance only once the predictive value is better defined by weighting the combination of variables. It would then become easier to assess their evidence and costs.

The outcomes that are simplest to measure are not necessarily the most pertinent to the individual. Indeed, there is often confusion between service outputs - such as a family following a treatment until the end - and the outcomes measured on individuals, such as a level of psychosocial adjustment or the attainment of educational goals. It is easier to measure the results of services (usually defined through process or quantity variables) than the outcomes (effectiveness) for the service user.

In some cases, but not always, these can be treated as proxy outcome measures. Identifying these pathways towards successful outcomes for service users and translating them into measurable factors is not an easy task.

Key references

Spread (2005). Italian guidelines for stroke prevention and management. Milano: www.spread.it.

Vecchiato, T. (2007). Paradigmi scientifici e intervento sociale. Studi Zancan, 3.

Zeira, A., Canali, C., Vecchiato, T., Jergeby, U., Thoburn, J., & Neve, E. Evidence-based Social Work Practice with Children and Families: A Cross National Perspective. Studi Zancan, 1.

Contact: Cinzia Canali, Fondazione Emanuela Zancan onlus, Via Vescovado 66, 35141 Padova, Italy. E-mail: cinziacanali@fondazionezancan.it. Phone: +39 049 663800. Fax: +39 049 663013.

