SPSS Statistics

  • 1.  Mixed Linear Model and Analyzing Repeated Ratings Nested in Teams

    Posted Wed January 05, 2022 12:06 PM
Hello! I am currently trying to run an analysis using a mixed linear model with repeated measures and fixed effects. My study design is this:

At the end of a project, participants within each team were given the opportunity to complete a peer evaluation and rate how psychologically safe they felt with each member of their team. This psychological safety rating is my dependent variable and is measured as a continuous variable. We want to get at how different genders perceive psychological safety with one another, so we coded the rating pairs as 1=male rates male, 2=male rates female, 3=female rates male, and 4=female rates female. I would call this a fixed factor. I am also looking to account for nesting, since these are individual ratings, so I would say that the Reviewer Number (the person who completes the peer review) is a subject, and I would guess that the Rated Participant (the person evaluated BY the reviewer) is repeated? I am not too sure, though, since time is typically what goes in that category, and we are not dealing with time here. When I run my syntax, I get an error whenever I try to account for nesting in teams:

    MIXED PS BY ReviewerGenderToRatedGender(TeamNumber)
    /CRITERIA=DFMETHOD(SATTERTHWAITE) CIN(95) MXITER(100) MXSTEP(10) SCORING(1)
    SINGULAR(0.000000000001) HCONVERGE(0.00000001, RELATIVE) LCONVERGE(0, ABSOLUTE) PCONVERGE(0,
    ABSOLUTE)
    /FIXED=ReviewerGenderToRatedGender | SSTYPE(3)
    /METHOD=REML
    /PRINT=CPS CORB COVB DESCRIPTIVES G LMATRIX R SOLUTION TESTCOV
    /REPEATED=RatedParticipantNumber | SUBJECT(ReviewerNumber) COVTYPE(DIAG)
    /EMMEANS=TABLES(OVERALL)
    /EMMEANS=TABLES(ReviewerGenderToRatedGender) COMPARE ADJ(BONFERRONI).

I am mainly interested in whether gender impacts how psychologically safe people perceive themselves to be with another gender or with the same gender, so any advice on other analysis methods is welcome as well. I have attached my data in long format, too. Please let me know if I can give any further clarification.

    Thank you!

    Courtney

    ------------------------------
    Courtney Cole
    ------------------------------

    #SPSSStatistics

    Attachment(s)

IndividualGenderRating.sav (14 KB)


  • 2.  RE: Mixed Linear Model and Analyzing Repeated Ratings Nested in Teams

    Posted Wed January 05, 2022 12:33 PM
    Hi again, Courtney.

    Could you please give us the exact error message you got from this syntax?

    Thanks.

    ------------------------------
    Rick Marcantonio
    Quality Assurance
    IBM
    ------------------------------



  • 3.  RE: Mixed Linear Model and Analyzing Repeated Ratings Nested in Teams

    Posted Wed January 05, 2022 12:35 PM
    Ignore the last message. I'll just run it myself and see what's what.

    ------------------------------
    Rick Marcantonio
    Quality Assurance
    IBM
    ------------------------------



  • 4.  RE: Mixed Linear Model and Analyzing Repeated Ratings Nested in Teams

    Posted Wed January 05, 2022 04:44 PM
    Hi again. I looked at the data and the model and couldn't really figure out the unit of analysis, so I passed it on to one of our statisticians. Here are his comments:

It looks like:

• The customer has a single continuous output to model.
• She has a fixed factor coded 1-4.
• We may want to account for within-team and within-reviewer correlation. This wasn't reflected in her comments, but it's what I am thinking of doing.
• Whether there should be a repeated measure, I'm not sure.

After looking at the sample data, it seems to me that team is the first cluster to adjust for. A plausible assumption is that records sharing the same TeamNumber are more correlated. The second cluster we may consider is reviewer: records sharing the same ReviewerNumber could be more correlated. Thus, I think a unique record is defined by the combination of TeamNumber, ReviewerNumber, and RatedParticipantNumber.

    In terms of the repeated measures (PS*), this is the part I did not understand very well. 

• First, who gave a rating to whom? How is this reflected in the data?
• Second, will the ratings a member gives to others have an impact on PS, a measure received from the other members? I would say probably not. Maybe we don't need such a repeated measure?


    ------------------------------
    Rick Marcantonio
    Quality Assurance
    IBM
    ------------------------------



  • 5.  RE: Mixed Linear Model and Analyzing Repeated Ratings Nested in Teams

    Posted Wed January 05, 2022 05:52 PM
Hello Rick! Thank you very much for taking the time to go through this with me. I think I was overthinking what I needed to do. It sounds like if I just treat each combination of TeamNumber, ReviewerNumber, and RatedParticipantNumber as its own record (so I have 372 combinations, and thus 372 data points), I could potentially run GLM or UNIANOVA and then account for nested effects by specifying the interaction terms TeamNumber*ReviewerNumber and ReviewerNumber*RatedParticipantNumber (I found this here: https://www.ibm.com/support/pages/can-you-specify-nested-designs-anova-models-spss-menus). I do agree that a team member may rate others highly or lowly in general, since these are self-assessments of perceptions (so some people may be more likely to give higher ratings on average than others), and we should definitely consider the nesting effect of being on a team. If MIXED seems impractical, I will switch to GLM.
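Based on that page, here is a rough sketch of the UNIANOVA syntax I have in mind (this is just my guess at applying the nesting approach to my variables; treating both grouping factors as random and the particular nested term are my own assumptions, not something from the page):

* Tentative sketch: reviewers nested within teams, both treated as random.
* Variable names are from my attached file.
UNIANOVA PS BY ReviewerGenderToRatedGender TeamNumber ReviewerNumber
  /RANDOM=TeamNumber ReviewerNumber
  /METHOD=SSTYPE(3)
  /DESIGN=ReviewerGenderToRatedGender TeamNumber ReviewerNumber(TeamNumber).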

I apologize for the lack of clarity in the original data. The scenario was this: we gave students an end-of-project peer review assessment, where they rated how psychologically safe they felt with each team member. The rating is produced by ReviewerNumber, since they were the ones reviewing others. RatedParticipantNumber is the individual who was rated; this person did not produce the rating but was evaluated by ReviewerNumber. Using this pattern, if ReviewerNumber was male and RatedParticipantNumber was female, ReviewerGenderToRatedGender would have a 2 to represent a male rating a female.
To address your second question, we would expect that the ratings one ReviewerNumber gives to the multiple RatedParticipantNumber participants on their team would not necessarily be related. In some instances, we found that there may be a "problem team member" who gets a PS score from a given ReviewerNumber that is significantly lower than the scores that reviewer gives the other team members. Ultimately, we do want to see if gender-to-gender ratings differ significantly, as this would point to any biases students may have about one another based on gender (even though it would be best for the students if there is minimal bias here).
    Please let me know if I can be more clear in my explanation, as I know it may seem convoluted without seeing the entire study procedure. 



    ------------------------------
    Courtney Cole
    ------------------------------



  • 6.  RE: Mixed Linear Model and Analyzing Repeated Ratings Nested in Teams

    Posted Wed January 12, 2022 10:48 AM
    Hi, Courtney. I received a note from one of our statisticians, who was on break last week. He wanted you to have this:
    ---

I believe the primary interest is whether there are differences in perceptions of psychological safety based on the gender of the reviewer and the gender of the person rated, in pairs of participants working in teams. There are four types of pairs, MM, MF, FM, and FF, where the first letter indicates the gender of the reviewer and the second the gender of the person rated, and the desire is to compare mean ratings among these four types of pairs. Things become more complicated because the same reviewer rates multiple participants, introducing possible correlations among ratings from the same reviewer. In addition, the participants are formed into distinct teams, which is another possible source of variation, and the effect of rating pair type might differ across teams. This can be thought of as an interaction between teams and the primary effect of interest, and if it exists, comparisons of the primary effect might be done separately within teams, treating team as a second fixed factor.

There are 372 observations from 137 subjects in 38 teams, with 135 subjects providing sufficient data to use in the analysis (two reviewers have all missing data for ratings), and 136 of the 137 participants rated at least once. Not all teams have observations for all reviewers rating all other team members. If both factors are used, with ReviewerGenderToRatedGender nested within TeamNumber, there are 78 fixed-effects parameters to estimate, which is not a small number for only 372 cases. Also, treating TeamNumber as a fixed effect means that you don't want to infer from these teams to a broader (perhaps hypothetical) population of teams, but to apply the results only to these teams, which seems unlikely to be the intent. It's also the case that testing comparisons among the four levels of ReviewerGenderToRatedGender separately for each TeamNumber involves a large number of tests. If an adjustment for multiple testing is applied not just within each TeamNumber but also across them (which requires some "manual" adjustment to the EMMEANS COMPARE specifications), power is likely to be very poor, while not adjusting across teams provides many opportunities for Type I errors. Thus including TeamNumber as a fixed effect has some notable drawbacks.

    Note that if TeamNumber is included as a fixed factor, it would need to be listed separately after BY on the MIXED command, and the fixed-effects model including whatever versions of the two factors would be specified on the FIXED subcommand. For example, the part of the syntax that specifies the basic model for nesting ReviewerGenderToRatedGender within TeamNumber would be:

MIXED PS BY ReviewerGenderToRatedGender TeamNumber
  /FIXED=TeamNumber ReviewerGenderToRatedGender(TeamNumber)

or

MIXED PS BY ReviewerGenderToRatedGender TeamNumber
  /FIXED=TeamNumber ReviewerGenderToRatedGender*TeamNumber

    which both fit the same fixed-effects model (which is overall equivalent to a full-factorial model on these two factors). The latter approach has the advantage of allowing specification of the interaction effect on EMMEANS, with COMPARE(ReviewerGenderToRatedGender), giving EMMEANS and comparisons among them for each TeamNumber separately. As noted above, this is probably not a great idea here, but in situations with smaller numbers of levels of the equivalent of the TeamNumber factor, it's reasonable and can be very useful.
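To make that concrete, here is a sketch of a complete command using the second specification with the interaction EMMEANS (the METHOD and REPEATED subcommands are simply carried over from the original syntax and are assumptions about what would be wanted):

MIXED PS BY ReviewerGenderToRatedGender TeamNumber
  /FIXED=TeamNumber ReviewerGenderToRatedGender*TeamNumber | SSTYPE(3)
  /METHOD=REML
  /REPEATED=RatedParticipantNumber | SUBJECT(ReviewerNumber) COVTYPE(DIAG)
  /EMMEANS=TABLES(ReviewerGenderToRatedGender*TeamNumber)
    COMPARE(ReviewerGenderToRatedGender) ADJ(BONFERRONI).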

    Handling the fact that the same reviewers are providing multiple ratings can indeed be done using the REPEATED specification with ReviewerNumber as a subject factor. The default covariance structure for the residual or R matrix in MIXED is DIAG, or diagonal, which actually allows for unequal residual variances across levels of the repeated factor within a subject, but still assumes independence among residuals within subjects. I'll admit it's a bit strange to have a default that assumes independence for repeated measures, but for some reason most mixed modeling procedures have such defaults. Anyway, given that the likely correlated residuals are based on a structure other than time, and in the absence of other specific reasons to posit particular differences among dependence levels for different pairings within subjects, it seems that a single constant covariance for related pairs might make the most sense. In a structure that doesn't assume unequal variances over levels of the repeated factor, this would be the CS or compound symmetric structure. This involves only two parameters, a variance or diagonal offset, and a common covariance. If variances across repeated levels are suspected to be potentially unequal, this can be generalized somewhat by specifying the CSH or heterogeneous compound symmetry structure. 
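As an illustration, here is the original command with the residual structure swapped from DIAG to CS (substituting COVTYPE(CSH) on the REPEATED subcommand would fit the heterogeneous version); the other subcommands are unchanged from Courtney's syntax:

MIXED PS BY ReviewerGenderToRatedGender
  /FIXED=ReviewerGenderToRatedGender | SSTYPE(3)
  /METHOD=REML
  /PRINT=SOLUTION TESTCOV
  /REPEATED=RatedParticipantNumber | SUBJECT(ReviewerNumber) COVTYPE(CS)
  /EMMEANS=TABLES(ReviewerGenderToRatedGender) COMPARE ADJ(BONFERRONI).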

    Unfortunately, here with 136 distinct levels of the repeated factor RatedParticipantNumber, the CSH structure has 137 parameters, and the DIAG structure has 136. These are too many parameters to estimate uniquely with the 135 subjects, leading to warnings. There does appear to be some evidence of unequal residual variances over levels of the repeated factor, but it's hard to judge things since unique estimates are not available. Estimating this model also takes a long time, due to the number of covariance parameters.

Standard recommendations for selecting covariance structures in mixed models involve comparing models with the same set of fixed effects. Where the smaller model is nested in the larger one (i.e., the larger model has all the same parameters plus more), one can use a likelihood-ratio test: take the difference in -2 log-likelihood values and refer it to a chi-square distribution on degrees of freedom equal to the number of additional parameters in the larger model. Alternatively, one can use the information criteria to select the better model (smaller values being preferred). Typically, the recommendation is to do these comparisons using the model with the fullest set of fixed effects under consideration. In this case, whether I use the fuller model with both fixed factors or the simpler model with just ReviewerGenderToRatedGender, the criteria point in different directions. The LR test and the AIC and AICC measures favor the CSH structure over the simpler CS structure, while the CAIC and BIC measures, which penalize additional parameters more heavily, favor the simpler CS structure.
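To make the LR test concrete with the parameter counts above: CS has 2 covariance parameters and CSH has 137, so the test statistic is the CS model's -2 log-likelihood minus the CSH model's, referred to a chi-square distribution on 137 - 2 = 135 degrees of freedom.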

The unfortunate bottom line here is that I'm not sure there's a way to analyze these data that's entirely immune from legitimate criticism. The simpler models appear not to capture a lot of the systematic variation in the data, while the more complicated models can't be well estimated with the available data. The problems with the R matrix structure for the larger models are intrinsic to the design: the number of parameters grows with the number of subjects included in the analysis, so more data won't help. The similar issue with the fixed effects isn't as severe, but as long as the number of members per team is fixed, adding more teams also wouldn't help much (and would just add to the multiple comparisons issue). One approach would be to fit the model with both fixed effects but only look at the averaged main effects of the ReviewerGenderToRatedGender factor, so only the six unique comparisons, using either of the covariance structure models, or doing it with each to see how the results compare (a sketch follows below). There again, the problems with estimation for the larger models come up, and the trustworthiness of the results is somewhat uncertain.
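A sketch of that averaged-main-effects approach, writing out the full-factorial fixed effects so the main-effect EMMEANS are available (CS is used here; substitute CSH for the other variant, and note that with empty team-by-pair-type cells some of these means may be flagged as non-estimable):

MIXED PS BY ReviewerGenderToRatedGender TeamNumber
  /FIXED=ReviewerGenderToRatedGender TeamNumber
    ReviewerGenderToRatedGender*TeamNumber | SSTYPE(3)
  /METHOD=REML
  /REPEATED=RatedParticipantNumber | SUBJECT(ReviewerNumber) COVTYPE(CS)
  /EMMEANS=TABLES(ReviewerGenderToRatedGender) COMPARE ADJ(BONFERRONI).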

    ---

    ------------------------------
    Rick Marcantonio
    Quality Assurance
    IBM
    ------------------------------