Comparative analysis of meta-analysis methods: when to use which?

Dieste Tubio, Oscar; Fernández, Enrique; García Martínez, Ramón and Juristo Juzgado, Natalia (2011). Comparative analysis of meta-analysis methods: when to use which?. In: "15th Annual Conference on Evaluation & Assessment in Software Engineering, EASE 2011", 11/04/2011 - 12/04/2011, Durham, UK. ISBN 978-1-84919-509-6. pp. 36-45. https://doi.org/10.1049/ic.2011.0005.

Description

Title: Comparative analysis of meta-analysis methods: when to use which?
Author(s):
  • Dieste Tubio, Oscar
  • Fernández, Enrique
  • García Martínez, Ramón
  • Juristo Juzgado, Natalia
Document Type: Conference or Workshop Paper (Unspecified)
Event Title: 15th Annual Conference on Evaluation & Assessment in Software Engineering, EASE 2011
Event Dates: 11/04/2011 - 12/04/2011
Event Location: Durham, UK
Book Title: Proceedings of 15th Annual Conference on Evaluation & Assessment in Software Engineering, EASE 2011
Date: 2011
ISBN: 978-1-84919-509-6
Subjects:
School: Facultad de Informática (UPM) [former name]
Department: Lenguajes y Sistemas Informáticos e Ingeniería del Software
Creative Commons License: Attribution - NoDerivatives - NonCommercial

Full Text

PDF, 531 kB (available for download)

Abstract

Background: Several meta-analysis methods can be used to quantitatively combine the results of a group of experiments, including the weighted mean difference (WMD), statistical vote counting (SVC), the parametric response ratio (RR) and the non-parametric response ratio (NPRR). The software engineering (SE) community has focused on the weighted mean difference method. However, other meta-analysis methods have distinct strengths, such as being usable when variances are not reported. There are as yet no guidelines indicating which method is best suited to each case.

Aim: Compile a set of rules that SE researchers can use to ascertain which aggregation method is best for use in the synthesis phase of a systematic review.

Method: Monte Carlo simulation varying the number of experiments in the meta-analyses, the number of subjects they include, their variance and effect size. We empirically calculated the reliability and statistical power in each case.

Results: WMD is generally reliable if the variance is low, whereas its power depends on the effect size and number of subjects per meta-analysis; the reliability of RR is generally unaffected by changes in variance, but it requires more subjects than WMD to be powerful; NPRR is the most reliable method, but it is not very powerful; SVC behaves well when the effect size is moderate, but is less reliable with other effect sizes. Detailed tables of results are annexed.

Conclusions: Before undertaking statistical aggregation in software engineering, it is worthwhile checking whether there is any appreciable difference in the reliability and power of the methods. If there is, software engineers should select the method that optimizes both parameters.
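To make the weighted mean difference (WMD) aggregation mentioned in the abstract concrete, here is a minimal, hedged sketch of a generic fixed-effect inverse-variance combination of per-experiment mean differences. This is a textbook illustration, not the authors' simulation code; the function name and the example numbers are invented for illustration only.

```python
def weighted_mean_difference(experiments):
    """Fixed-effect inverse-variance pooling of mean differences.

    `experiments` is a list of (mean_difference, variance) pairs,
    one per primary study. Each study is weighted by 1/variance,
    so low-variance experiments dominate the pooled estimate.
    Returns (pooled_effect, pooled_variance).
    """
    weights = [1.0 / var for _, var in experiments]
    pooled = sum(w * d for (d, _), w in zip(experiments, weights)) / sum(weights)
    pooled_var = 1.0 / sum(weights)  # variance of the pooled estimate
    return pooled, pooled_var

# Hypothetical example: three experiments reporting (mean difference, variance)
effect, var = weighted_mean_difference([(0.5, 0.04), (0.3, 0.09), (0.6, 0.02)])
```

Note that this method requires each experiment to report a variance, which is exactly the limitation the abstract points out: when variances are missing, alternatives such as the non-parametric response ratio or vote counting become attractive.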

More Information

Record ID: 11606
DC Identifier: http://oa.upm.es/11606/
OAI Identifier: oai:oa.upm.es:11606
DOI: 10.1049/ic.2011.0005
Deposited by: Memoria Investigacion
Deposited on: 13 Jul 2012 08:17
Last Modified: 20 Apr 2016 19:37