ASU Electronic Theses and Dissertations
This collection includes most ASU theses and dissertations from 2011 to the present. ASU Theses and Dissertations are available as downloadable PDFs, although a small percentage of items are under embargo. Each record includes degree information, committee members, an abstract, and any supporting data or media.
In addition to the electronic theses found in the ASU Digital Repository, ASU Theses and Dissertations can be found in the ASU Library Catalog.
Dissertations and theses granted by Arizona State University are archived and made available through a joint effort of the ASU Graduate College and the ASU Libraries. For more information or questions about this collection, visit the Digital Repository ETD Library Guide or contact the ASU Graduate College at firstname.lastname@example.org.
- Educational tests & measurements
- 2 Quantitative psychology
- 1 BN
- 1 Bayesian Estimation
- 1 Categorical Data Analysis
- 1 Conditional covariance theory
- 1 Dimensionality assessment
- 1 Educational psychology
- 1 Missing Data Theory
- 1 Missing at Random Data
- 1 Multilevel Modeling
- 1 Multiple Imputation
- 1 PPMC
- 1 PPP-value
- 1 Posterior predictive model checking
- 1 Quantitative psychology and psychometrics
- 1 bifactor models
- 1 dimensionality
- 1 discrepancy measures
- 1 latent class
- 1 latent means
- 1 measurement invariance
- 1 multidimensional
- 1 simulation
- 1 structural equation modeling
Investigation of measurement invariance (MI) commonly assumes correct specification of dimensionality across multiple groups. Although research shows that violation of the dimensionality assumption can cause bias in model parameter estimation for single-group analyses, little research on this issue has been conducted for multiple-group analyses. This study explored the effects of mismatch in dimensionality between data and analysis models with multiple-group analyses at the population and sample levels. Datasets were generated using a bifactor model with different factor structures and were analyzed with bifactor and single-factor models to assess misspecification effects on assessments of MI and latent mean differences. As baseline …
- Xu, Yuning, Green, Samuel, Levy, Roy, et al.
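The abstract above generates data from a bifactor model and analyzes it with a misspecified single-factor model. As a rough numpy sketch of that setup (hypothetical loading values and structure, not the study's actual generating models), one can build a small bifactor population, simulate item scores, and verify the implied covariance structure — the within-cluster covariance contributed by the specific factors is exactly what a single-factor analysis cannot reproduce:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5000

# Hypothetical bifactor structure: 6 items, one general factor,
# and two specific (group) factors of 3 items each.
lam_g = np.full(6, 0.6)                  # general-factor loadings
lam_s = np.zeros((6, 2))
lam_s[:3, 0] = 0.4                       # specific factor 1
lam_s[3:, 1] = 0.4                       # specific factor 2

Lam = np.column_stack([lam_g, lam_s])    # 6 x 3 loading matrix
psi = 1 - (Lam ** 2).sum(axis=1)         # unique variances (unit item variance)

# Simulate continuous item scores from orthogonal standard-normal factors
eta = rng.standard_normal((n, 3))
eps = rng.standard_normal((n, 6)) * np.sqrt(psi)
Y = eta @ Lam.T + eps

# Model-implied vs. sample covariance matrix
Sigma = Lam @ Lam.T + np.diag(psi)
S = np.cov(Y, rowvar=False)
print(np.abs(S - Sigma).max())  # small at n = 5000
```

Items sharing a specific factor covary at 0.6·0.6 + 0.4·0.4 = 0.52, while items on different specific factors covary at only 0.36; a single-factor model forces one common value, which is the kind of mismatch the study manipulates.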
Accurate data analysis and interpretation of results may be influenced by many potential factors. The factors of interest in the current work are the chosen analysis model(s), the presence of missing data, and the type(s) of data collected. If analysis models are used that a) do not accurately capture the structure of relationships in the data, such as clustered/hierarchical data, b) do not allow or control for missing values present in the data, or c) do not accurately compensate for different data types, such as categorical data, then the assumptions associated with the model have not been met and the …
- Kunze, Katie Lynn, Levy, Roy, Enders, Craig K, et al.
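One of the missing-data points in the abstract above can be made concrete with a toy sketch (an invented bivariate example, not the dissertation's design): when missingness in y depends on an observed covariate x (missing at random), a complete-case analysis that ignores the mechanism yields a biased mean, which is why models that "allow or control for" missingness matter:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000

x = rng.standard_normal(n)
y = 0.8 * x + rng.standard_normal(n) * 0.6   # y correlated with x

# MAR mechanism: y is more likely to be missing when x is low,
# so missingness depends only on the fully observed x.
p_miss = 1 / (1 + np.exp(2 * x))
miss = rng.random(n) < p_miss
y_obs = y[~miss]

# Complete-case mean overrepresents high-x (hence high-y) cases
print(y.mean(), y_obs.mean())
```

The full-data mean is near zero, but the complete-case mean is pulled noticeably upward; principled approaches such as multiple imputation or full-information maximum likelihood recover the correct estimate under MAR.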
This simulation study compared the utility of various discrepancy measures within a posterior predictive model checking (PPMC) framework for detecting different types of data-model misfit in multidimensional Bayesian network (BN) models. The investigated conditions were motivated by an applied research program utilizing an operational complex performance assessment within a digital-simulation educational context grounded in theories of cognition and learning. BN models were manipulated along two factors: latent variable dependency structure and number of latent classes. Distributions of posterior predicted p-values (PPP-values) served as the primary outcome measure and were summarized in graphical presentations, by median values across replications, and by …
- Crawford, Aaron Vaughn, Levy, Roy, Green, Samuel, et al.
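The PPMC machinery described in the abstract above can be illustrated in miniature (a simple Bernoulli model with a chi-square discrepancy, not the dissertation's Bayesian network models): draw from the posterior, simulate replicated data at each draw, and record how often the replicated discrepancy meets or exceeds the realized one — the resulting proportion is the PPP-value:

```python
import numpy as np

rng = np.random.default_rng(0)

# Data generated from the assumed model, so the PPP-value should be
# moderate rather than extreme (extreme values signal misfit).
y = rng.binomial(1, 0.7, size=50)

# Conjugate posterior for the success probability under a Beta(1, 1) prior
draws = rng.beta(1 + y.sum(), 1 + (y == 0).sum(), size=2000)

def chisq(data, theta):
    """Chi-square discrepancy D(data, theta)."""
    return np.sum((data - theta) ** 2) / (theta * (1 - theta))

# PPP-value: proportion of draws where the replicated discrepancy
# meets or exceeds the realized discrepancy
exceed = 0
for theta in draws:
    y_rep = rng.binomial(1, theta, size=y.size)
    exceed += chisq(y_rep, theta) >= chisq(y, theta)
ppp = exceed / draws.size
print(ppp)
```

PPP-values near 0 or 1 indicate that the model systematically under- or over-predicts the chosen discrepancy; the study's contribution lies in comparing which discrepancy measures are sensitive to which kinds of misfit.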
Dimensionality assessment is an important component of evaluating item response data. Existing approaches to evaluating common assumptions of unidimensionality, such as DIMTEST (Nandakumar & Stout, 1993; Stout, 1987; Stout, Froelich, & Gao, 2001), have been shown to work well under large-scale assessment conditions (e.g., large sample sizes and item pools; see e.g., Froelich & Habing, 2007). It remains to be seen how such procedures perform in the context of small-scale assessments characterized by relatively small sample sizes and/or short tests. The fact that some procedures come with minimum allowable values for characteristics of the data, such as the number of …
- Reichenberg, Ray E., Levy, Roy, Thompson, Marilyn S., et al.
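As a sketch of the kind of dimensionality check discussed above under small-scale conditions (using parallel analysis rather than DIMTEST, and invented loading values), one can compare the eigenvalues of the observed correlation matrix against the 95th percentile of eigenvalues from random data of the same size; the number of observed eigenvalues exceeding their random counterparts estimates the number of factors:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 200, 8   # small-scale: modest sample size, short test

# Simulate unidimensional item scores with loadings of 0.6
loadings = np.full(p, 0.6)
f = rng.standard_normal((n, 1))
X = f @ loadings[None, :] + rng.standard_normal((n, p)) * np.sqrt(1 - 0.36)

obs_eig = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))[::-1]

# Parallel analysis: eigenvalue thresholds from random normal data
reps = 200
rand_eig = np.empty((reps, p))
for r in range(reps):
    Z = rng.standard_normal((n, p))
    rand_eig[r] = np.linalg.eigvalsh(np.corrcoef(Z, rowvar=False))[::-1]
thresh = np.percentile(rand_eig, 95, axis=0)

# Index of the first observed eigenvalue that falls below its threshold
n_factors = int(np.argmax(obs_eig < thresh))
print(n_factors)  # 1 factor retained for this unidimensional sample
```

How reliably such criteria behave as n and p shrink — the regime the abstract targets — is exactly where procedures with minimum allowable data requirements begin to break down.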