Coding Procedure

If available, we collected and coded each experiment in terms of the moderators suggested by theory or empirical evidence (see Introduction). Regarding experimenter effects, we coded experiments as blinded when the authors stated explicitly that the experimenter was not aware of the hypotheses or the condition, or if the experimenter was …

Table 1. Interrater and intrarater reliability for coded variables

Variable                                Measure   Interrater   Intrarater
Intentionality                          κ         0.70
Muscles involved                        κ         0.85
Familiarity with interaction partner    κ         1.00
Gender of interaction partner           κ         0.57
Number of interaction partners          κ         0.92
Music                                   κ         0.76
Experimenter blindedness                κ         1.00
Manipulation check                      κ         1.00
Design                                  κ         1.00
Type of MSIS                            κ         1.00
Comparison group                        κ         1.00
Outcome                                 κ         0.96
g                                       ICC       0.999
se                                      ICC       1.00

Notes. κ = Cohen's κ; ICC = intraclass correlation coefficient; g = Hedges' g; se = standard error of g.

…ate an ES. When the correlation was not available, we assumed that the scores within the two conditions were correlated at the level of r = 0.5. To pool individual effect sizes, we applied a random-effects model (DerSimonian & Laird, 1986). Whereas the fixed-effects model assumes that all studies entering the meta-analysis come from the same population, the random-effects model assumes that studies are drawn from different populations that may have different true effect sizes (e.g., study populations that differ in characteristics that can affect the effect size, such as intensity of treatment, age of participants, and so on). Consequently, under a fixed-effects model all variation in effect sizes across studies is assumed to be due to sampling error, whereas the random-effects model allows study-level variance to be an additional source of variation. As we expected heterogeneity in effect sizes, the random-effects model was more appropriate (Hedges & Vevea, 1998).

For the overall analysis (RQ1), we used only one data point per experiment. For the moderator analyses (RQ2), we carried out two separate meta-analyses, one for each class of outcome variables (attitudes vs. behavior), and again included only a single data point per experiment in each of these analyses to ensure independence among data points. Decisions regarding the selection of data points were based on the following rules. If an experiment compared the experimental group with two or more control groups, we chose the control group that differed from the experimental group in as few characteristics (other than synchrony) as possible, to prevent biases resulting from confounds (Table 2). If an experiment included two or more synchronous groups (e.g., synchrony established intentionally vs. incidentally), we chose the synchronous group that was expected to yield the greatest effect on prosociality. Expectations regarding the effectiveness of a manipulation were derived from prior research (e.g., intentional synchrony was preferred over incidental synchrony). Similarly, if a study included more than one control group of the same category, we chose the control group that was expected to have the greatest effect on prosociality. Again, we made these predictions a priori and based them on prior research.
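As a concrete illustration of this pooling step, the following Python sketch computes a repeated-measures Hedges' g with an imputed correlation of r = .5 and pools study-level effect sizes with the DerSimonian-Laird estimator of the between-study variance. It is a minimal sketch of the general procedure described above, not the authors' analysis code; the function names and the exact effect-size variance formula are our own illustrative choices.

```python
import numpy as np

def repeated_measures_g(mean_diff, sd1, sd2, n, r=0.5):
    """Hedges' g for a within-subject contrast; r is the correlation between
    conditions, imputed as .5 when not reported. Illustrative formulas only;
    details may differ from the original analysis."""
    sd_pooled = np.sqrt((sd1 ** 2 + sd2 ** 2) / 2)
    d = mean_diff / sd_pooled                  # standardized mean difference
    j = 1 - 3 / (4 * (n - 1) - 1)              # small-sample correction factor
    var_d = (1 / n + d ** 2 / (2 * n)) * 2 * (1 - r)
    return j * d, j ** 2 * var_d               # Hedges' g and its variance

def dersimonian_laird(g, v):
    """Pool effect sizes under a random-effects model using the
    DerSimonian-Laird estimator of the between-study variance tau^2."""
    g, v = np.asarray(g, dtype=float), np.asarray(v, dtype=float)
    w = 1 / v                                  # inverse-variance (fixed-effect) weights
    g_fixed = np.sum(w * g) / np.sum(w)
    q = np.sum(w * (g - g_fixed) ** 2)         # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(g) - 1)) / c)    # between-study variance
    w_re = 1 / (v + tau2)                      # random-effects weights
    g_pooled = np.sum(w_re * g) / np.sum(w_re)
    se_pooled = np.sqrt(1 / np.sum(w_re))
    return g_pooled, se_pooled, tau2

# Example with made-up effect sizes and variances:
# g_pooled, se, tau2 = dersimonian_laird([0.42, 0.15, 0.60], [0.04, 0.06, 0.05])
```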
If studies reported more than one social outcome, we calculated a combined effect size by averaging across the outcomes, as this is the more conservative approach.
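A short sketch of that averaging step is given below; it assumes the combined estimate is an unweighted mean of the outcome-level Hedges' g values and that the outcomes within an experiment are correlated when computing the variance of that mean. The parameter r_outcomes and the function name are illustrative assumptions, not values or code from the paper. Averaging in this way keeps one data point per experiment, so an experiment does not gain weight simply by reporting more outcomes.

```python
import numpy as np

def combine_outcomes(gs, vs, r_outcomes=0.5):
    """Average several outcome-level effect sizes from one experiment into a
    single data point. The variance of the mean treats the outcomes as
    correlated at r_outcomes (an assumed value, not one from the paper)."""
    gs, vs = np.asarray(gs, dtype=float), np.asarray(vs, dtype=float)
    m = len(gs)
    g_combined = gs.mean()                     # unweighted average across outcomes
    sds = np.sqrt(vs)
    # Variance of a mean of correlated estimates:
    # (1/m^2) * [ sum_i v_i + sum_{i != j} r * sd_i * sd_j ]
    off_diag = np.outer(sds, sds).sum() - vs.sum()
    var_combined = (vs.sum() + r_outcomes * off_diag) / m ** 2
    return g_combined, var_combined

# e.g., an experiment reporting both an attitude and a behavior outcome:
# g, v = combine_outcomes([0.30, 0.55], [0.05, 0.06])
```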