
ASU Electronic Theses and Dissertations


This collection includes most of the ASU Theses and Dissertations from 2011 to present. ASU Theses and Dissertations are available in downloadable PDF format; however, a small percentage of items are under embargo. Information about the dissertations/theses includes degree information, committee members, an abstract, and supporting data or media.

In addition to the electronic theses found in the ASU Digital Repository, ASU Theses and Dissertations can be found in the ASU Library Catalog.

Dissertations and Theses granted by Arizona State University are archived and made available through a joint effort of the ASU Graduate College and the ASU Libraries. For more information or questions about this collection, visit the Digital Repository ETD Library Guide or contact the ASU Graduate College at gradformat@asu.edu.


Imagine that we have a piece of matter that can change its physical properties, such as its shape, density, conductivity, or color, in a programmable fashion based on either user input or autonomous sensing. This is the vision behind what is commonly known as programmable matter. Envisioning systems of nano-sensor devices, programmable matter consists of systems of simple computational elements, called particles, that can establish and release bonds, compute, and actively move in a self-organized way. In this dissertation, the feasibility of solving fundamental problems relevant to programmable matter is investigated. As a model for such self-organizing particle systems (SOPS), …
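The abstract describes particles that can establish and release bonds, compute, and move. Below is a minimal illustrative sketch, not the dissertation's formal model, of that kind of particle abstraction; the grid layout, class names, and operations are assumptions for illustration only.

```python
# Minimal sketch of a self-organizing particle abstraction (illustrative only;
# the positions, bond rules, and movement rules are assumed, not taken from
# the dissertation's model).
from dataclasses import dataclass, field

@dataclass
class Particle:
    pos: tuple                                 # grid coordinate of the particle
    bonds: set = field(default_factory=set)    # positions of bonded neighbors

class ParticleSystem:
    def __init__(self):
        self.particles = {}                    # pos -> Particle

    def add(self, pos):
        self.particles[pos] = Particle(pos)

    def bond(self, a, b):
        # Establish a bond between two occupied positions.
        self.particles[a].bonds.add(b)
        self.particles[b].bonds.add(a)

    def move(self, src, dst):
        # Move a particle to an empty position, releasing its current bonds.
        p = self.particles.pop(src)
        for q in p.bonds:
            self.particles[q].bonds.discard(src)
        p.pos, p.bonds = dst, set()
        self.particles[dst] = p
```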

Contributors
Derakhshandeh, Zahra, Richa, Andrea, Sen, Arunabha, et al.
Created Date
2017

Large-scale $\ell_1$-regularized loss minimization problems arise in high-dimensional applications such as compressed sensing and high-dimensional supervised learning, including classification and regression problems. In many applications, it remains challenging to apply the sparse learning model to large-scale problems that have massive data samples with high-dimensional features. One popular and promising strategy is to scale up the optimization problem in parallel. Parallel solvers run on multiple cores of a shared-memory system or in a distributed environment to speed up the computation, but their practical use is limited by the high dimensionality of the feature space and by synchronization overhead. In this dissertation, I carry …
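For context, a minimal sketch of the kind of $\ell_1$-regularized loss minimization the abstract refers to, solved here with plain (serial) proximal gradient descent on a least-squares loss; the step size, data shapes, and regularization weight are illustrative assumptions, not the dissertation's method.

```python
# Sketch of l1-regularized least-squares (lasso) via proximal gradient (ISTA).
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (element-wise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista_lasso(X, y, lam, n_iters=500):
    """Minimize (1/2n)||Xw - y||^2 + lam * ||w||_1 by proximal gradient."""
    n, d = X.shape
    w = np.zeros(d)
    # Inverse Lipschitz constant of the smooth term gives a safe step size.
    step = n / (np.linalg.norm(X, 2) ** 2)
    for _ in range(n_iters):
        grad = X.T @ (X @ w - y) / n                      # gradient of squared loss
        w = soft_threshold(w - step * grad, step * lam)   # sparsity-inducing prox step
    return w

# Illustrative usage on synthetic sparse data.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 1000))
w_true = np.zeros(1000); w_true[:10] = 1.0
y = X @ w_true + 0.01 * rng.standard_normal(200)
w_hat = ista_lasso(X, y, lam=0.1)
print("nonzeros recovered:", np.count_nonzero(np.abs(w_hat) > 1e-3))
```

Parallel and asynchronous variants of this kind of solver split the coordinates or data samples across workers, which is where the synchronization issues mentioned in the abstract arise.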

Contributors
Li, Qingyang, Ye, Jieping, Xue, Guoliang, et al.
Created Date
2017