Current Projects


AM++: A Generalized Active Message Framework

This project is a user-level library for active messages based on the Active Pebbles programming model. AM++ allows message handlers to be run in an explicit loop that can be optimized and vectorized by the compiler and executed in parallel on multicore architectures. Runtime optimizations, such as message combining and filtering, are also provided by the library, removing the need to implement that functionality at the application level.
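As a rough illustration of the message-combining idea described above, the sketch below buffers sends per destination and runs the handler in a plain loop over the combined batch. The class and method names are invented for this example; they are not the actual AM++ API.

```python
# Hypothetical sketch of message combining in an active-message layer.
# Names (ActiveMessageQueue, register_handler) are illustrative only.
from collections import defaultdict

class ActiveMessageQueue:
    def __init__(self):
        self.handlers = {}
        self.pending = defaultdict(list)   # (dest, tag) -> buffered payloads

    def register_handler(self, tag, fn):
        self.handlers[tag] = fn

    def send(self, dest, tag, payload):
        # Buffer instead of sending eagerly, so messages can be combined.
        self.pending[(dest, tag)].append(payload)

    def flush(self):
        # One "network" delivery per (dest, tag) pair; the handler runs in a
        # plain loop over the combined buffer, which a compiler could vectorize.
        delivered = 0
        for (dest, tag), payloads in self.pending.items():
            handler = self.handlers[tag]
            for p in payloads:             # explicit handler loop
                handler(dest, p)
            delivered += 1
        self.pending.clear()
        return delivered

# Usage: many sends to the same rank collapse into one combined delivery.
received = []
q = ActiveMessageQueue()
q.register_handler("add", lambda dest, x: received.append((dest, x)))
for x in range(5):
    q.send(0, "add", x)
deliveries = q.flush()   # 5 messages, 1 combined delivery
```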


Bokeh: A Declarative, Scalable Framework for Extensive Visualization

A new visualization system, named Bokeh, is being proposed for interactive visual explorations of large, multidimensional datasets. Domain expertise and intuition are critical to effective exploration of large data. Thus, we have chosen a novel architectural approach that tackles scalability, interactivity, and extensibility as its core challenges, while maintaining a simple conceptual model for the non-programmer end user. This project is sponsored by DARPA, subcontracted through Continuum.


C-SWARM: Center for Shock Wave-processing of Advanced Reactive Materials

CREST is conducting research, in collaboration with the University of Notre Dame and Purdue University, on a parallel multiscale and multiphysics computational framework for predictive science, using models that are verified and validated with uncertainty quantification on future high-performance Exascale computer platforms. The goal is to predict the behavior of heterogeneous materials, specifically the dynamics of their shock-induced chemo-thermo-mechanical transformations and the resulting material properties. Through adaptive Exascale simulations, we aspire to predict conditions for the synthesis of novel materials-by-design and to provide prognoses of the non-equilibrium structures that will form under shock wave processing.



DASHMM: Dynamic Adaptive System for Hierarchical Multipole Methods

Multipole methods contribute to a broad range of end-user science applications, extending from molecular dynamics to galaxy formation. Many of these applications describe very dynamic physical processes, both in their time dependence and in their range of relevant spatial scales. However, conventional implementations of multipole methods are essentially static in nature, leading to computational inefficiencies. The Dynamic Adaptive System for Hierarchical Multipole Methods (DASHMM) will employ dynamic adaptive execution methods to provide a scalable and efficient multipole method library that is easy to use. This project is sponsored by the National Science Foundation.
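The core idea behind any multipole method can be shown in a few lines: a tight, distant cluster of sources is well approximated by a single low-order expansion. The sketch below keeps only the monopole term (total mass at the center of mass) and is purely illustrative of the approximation, not of the DASHMM library itself.

```python
# Illustrative sketch of the multipole idea: replace a distant cluster of
# sources by its monopole (total mass placed at the center of mass).

def direct_potential(x, sources):
    # sources: list of (position, mass); 1/r kernel in 1D for illustration
    return sum(m / abs(x - p) for p, m in sources)

def monopole_potential(x, sources):
    total = sum(m for _, m in sources)
    com = sum(p * m for p, m in sources) / total
    return total / abs(x - com)

cluster = [(0.0, 1.0), (0.1, 2.0), (-0.1, 1.5)]   # tight cluster near origin
far = 100.0                                       # evaluation point far away
exact = direct_potential(far, cluster)
approx = monopole_potential(far, cluster)
rel_err = abs(exact - approx) / exact   # tiny because the target is far away
```

The relative error shrinks as the ratio of cluster size to distance falls, which is why hierarchical codes apply the expansion only to well-separated cells.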


DLT: Data Logistics Toolkit

DLT combines software technologies for shared storage, network monitoring, enhanced control signaling, and more efficient use of dynamically allocated circuits. Its main components are network storage server ("depot") technology based on the Internet Backplane Protocol (IBP), perfSONAR for network performance measurement and monitoring, and Phoebus for optimizing the use of network resources in long-haul data transfers.
Project webpage:


EAGER: Dynamic Data Path Management for Synchronous Vertical Storage Hierarchy

This project, funded by NSF, focuses on the development of semantics for practical unification of main memory and I/O spaces in large parallel applications.  The project aims to define a new relationship between ephemeral objects and persistent storage that will take advantage of asynchrony of operation while achieving high efficiency. The underlying concepts are provided by the ParalleX model of computation.
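As a toy illustration of unifying ephemeral objects with persistent storage, the sketch below mirrors updates of an in-memory object to a backing file so that the same named object can be re-opened after the ephemeral copy is gone. All names here are hypothetical, and the write-back is synchronous for clarity, whereas the project targets asynchronous operation.

```python
# A minimal sketch (not the project's actual interface) of treating an
# in-memory object and its persistent image as one named entity.
import os
import pickle
import tempfile

class PersistentDict:
    def __init__(self, path):
        self.path = path
        self.data = {}
        if os.path.exists(path) and os.path.getsize(path) > 0:
            with open(path, "rb") as f:
                self.data = pickle.load(f)

    def __setitem__(self, key, value):
        self.data[key] = value
        # A real design would make this write-back asynchronous; it is
        # synchronous here for clarity.
        with open(self.path, "wb") as f:
            pickle.dump(self.data, f)

    def __getitem__(self, key):
        return self.data[key]

# Usage: the same name resolves to the same state across "process lifetimes".
path = os.path.join(tempfile.mkdtemp(), "obj.bin")
d = PersistentDict(path)
d["answer"] = 42
del d                        # ephemeral copy disappears
d2 = PersistentDict(path)    # re-opened from persistent storage
```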


Evaluating Execution Models

In partnership with the Pacific Northwest National Laboratory, CREST is working to advance the state of the art in methodologies for modeling and evaluating parallel execution models. This work encompasses areas including the development of modeling methodologies and techniques designed to quantify the performance impact of execution models, the development of formal methods for modeling execution model semantics, derivation of performance models for quantitative evaluation of execution models, the design and implementation of mini-apps translated to ParalleX, and a comparative analysis quantifying the performance impact incurred by execution model primitive operations.


GEMINI: The Global Environment for Network Innovations

GENI is designed for network experimentation, making measurement critical. GEMINI will provide the extensive instrumentation needed for collecting, analyzing, and sharing real network measurements from potentially groundbreaking GENI experiments.
Indiana University researchers have partnered with the University of Kentucky and Internet2 on the $1.3 million project, which builds on the success of perfSONAR, an internationally deployed network monitoring infrastructure.
Led by CREST's Martin Swany, GEMINI is housed at IU and is one of only two Instrumentation and Measurement (I&M) awards, which are the largest GENI efforts to date.
GENI home page:
GENI Project wiki:


Graph Algorithms on Future Architectures

In collaboration with the Software Engineering Institute at Carnegie Mellon University, CREST is developing a library of graph algorithms that effectively leverage GPUs for performance and scalability.
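The workhorse pattern such libraries parallelize is level-synchronous (frontier-based) traversal: each level expands the whole frontier at once, so the inner loop maps naturally onto thousands of GPU threads. The sketch below shows the structure in plain Python, purely for illustration; it is not from the library in question.

```python
# Level-synchronous BFS: the per-frontier inner loop is the part a GPU
# implementation would execute in parallel across threads.

def bfs_levels(adj, source):
    # adj: dict mapping vertex -> list of neighbors
    level = {source: 0}
    frontier = [source]
    depth = 0
    while frontier:
        depth += 1
        next_frontier = []
        for u in frontier:              # data-parallel across the frontier
            for v in adj[u]:
                if v not in level:      # first visit fixes the BFS level
                    level[v] = depth
                    next_frontier.append(v)
        frontier = next_frontier
    return level

adj = {0: [1, 2], 1: [3], 2: [3], 3: [4], 4: []}
levels = bfs_levels(adj, 0)
```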


HOBBES: OS and Runtime Support for Application Composition

This project is sponsored by the Department of Energy and subcontracted from Sandia National Laboratory. Indiana University's contributions center on debugging and graph analytics. In debugging, a key component added to the runtime software will be diagnostic support to identify detected errors and correlate them with the underlying software faults (bugs) for future correction. IU's graph analytics contribution will explore active messages. Specific features of the active message layer include message coalescing, active routing, message reduction, and termination detection.
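Of the features listed above, termination detection can be sketched very compactly with a message-counting scheme: the computation is done when every worker is idle and globally no messages remain in flight. The single-coordinator structure and names below are assumptions for illustration, not the HOBBES design.

```python
# Minimal message-counting termination detection sketch (illustrative only).

class Worker:
    def __init__(self):
        self.sent = 0        # messages this worker has sent
        self.received = 0    # messages this worker has consumed
        self.inbox = []      # messages delivered but not yet processed

def detect_termination(workers):
    # Terminated when every worker is idle and the global send count equals
    # the global receive count (i.e., nothing is still in flight).
    all_idle = all(not w.inbox for w in workers)
    sent = sum(w.sent for w in workers)
    received = sum(w.received for w in workers)
    return all_idle and sent == received

a, b = Worker(), Worker()
a.sent += 1
b.inbox.append("msg")                    # message in flight
in_flight = detect_termination([a, b])   # not terminated yet
b.received += 1
b.inbox.pop()                            # message consumed
done = detect_termination([a, b])        # now terminated
```

Real distributed schemes must also guard against counters being read inconsistently across workers, which is what the classic multi-pass (e.g. four-counter) algorithms address.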


HPC Education through Formal On-Line On-Demand Curriculum

This project, sponsored by the NSF, will dramatically increase the accessibility of HPC education for the nation's students, independent of socioeconomic status or demographics, and provide an enhanced skilled workforce to strengthen US capabilities in computational science, engineering, and HPC system administration. The project will make possible the near-term development and dissemination of a new on-demand course in HPC for national distribution. This asynchronous course is realized through advanced Internet-based on-line services and on-demand video lectures with supporting instructional materials.


NSF Graphs: CSR: Small: High-Level Programming Languages and Environments for Scalable Graph Processing

This NSF-sponsored project proposes to study an approach to high-level, domain-specific programming languages that would enable a single representation of a graph algorithm to run efficiently and scalably on a variety of parallel architectures. Existing languages in this area are limited in scope and expressiveness. Accordingly, this research incorporates the study of appropriate abstractions to create reusable algorithm implementations, enabling scalable graph algorithms on multiple types of high-performance platforms.
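The "single representation, many platforms" idea can be sketched as a small vertex program paired with interchangeable executors: the algorithm (here, label propagation for connected components) is defined once, and each executor decides how it is scheduled. The interface and names are hypothetical, invented for this illustration.

```python
# One algorithm definition, independent of the execution strategy.
def cc_vertex_program(v, label, neighbor_labels):
    # Each vertex adopts the smallest label it can see.
    return min([label] + neighbor_labels)

def run_serial(adj, program):
    # Sequential sweeps until no label changes.
    labels = {v: v for v in adj}
    changed = True
    while changed:
        changed = False
        for v in adj:
            new = program(v, labels[v], [labels[u] for u in adj[v]])
            if new != labels[v]:
                labels[v] = new
                changed = True
    return labels

def run_bulk_synchronous(adj, program):
    # All vertices update from the same snapshot, as a parallel backend would.
    labels = {v: v for v in adj}
    while True:
        new_labels = {v: program(v, labels[v], [labels[u] for u in adj[v]])
                      for v in adj}
        if new_labels == labels:
            return labels
        labels = new_labels

adj = {0: [1], 1: [0], 2: [3], 3: [2], 4: []}
same = run_serial(adj, cc_vertex_program) == run_bulk_synchronous(adj, cc_vertex_program)
```

Both executors agree on the result; only the scheduling differs, which is precisely the separation such languages aim to exploit.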


perfSONAR: Performance focused Service Oriented Network monitoring ARchitecture

perfSONAR is an international collaboration for network monitoring. Collaborators include ESnet, GÉANT2, RNP (Rede Nacional de Ensino e Pesquisa) in Brazil, and Internet2. perfSONAR is an infrastructure for network performance monitoring, facilitating the ability to solve end-to-end performance problems on paths crossing several networks and to enable network-aware applications. It contains a set of services delivering performance measurements in a federated environment. These services act as an intermediate layer between the performance measurement tools and the diagnostic or visualization applications. This layer is aimed at making and exchanging performance measurements between networks, using well-defined protocols, and it allows for the easy retrieval of the same metrics from multiple administrative domains. perfSONAR is a service-oriented architecture: the set of elementary functions has been isolated and can be provided by different entities called services, all of which communicate with each other using well-defined protocols. perfSONAR has three contexts:

  • A consortium of organizations seeking to build network performance middleware that is interoperable across multiple networks and useful for intra- and inter-network analysis. One of the main goals is to make it easier to solve end-to-end performance problems on paths crossing several networks.
  • A protocol. It assumes a set of roles (the various service types), defines the protocol standard (syntax and semantics) by which they communicate, and allows anyone to write a service playing one of those roles. The protocol is based on SOAP XML messages conforming to the Open Grid Forum (OGF) Network Measurement Working Group (NM-WG) schema definitions.
  • Several interoperable software packages (implementations of various services) that implement an interoperable performance middleware framework.  These packages are developed by different partners.  Some parts of the software are "more important" than others because their goal is to ensure interoperability between domains (e.g. the Lookup Service and the Authentication Service). Different subsets of the software are important to each partner, with a great deal of overlap.  The services act as an intermediate layer, between the performance measurement tools and the diagnostic or visualization applications.


Phoebus and XSP

The Phoebus project seeks to encourage a paradigm shift in the way traditional edge and backbone networks are utilized in order to improve end-to-end throughput over long distances. By augmenting the current Internet model with an additional service layer, Phoebus embeds "intelligence" in the network that allows a connection to become articulated and adapt to the environment on a segment-by-segment basis. The system includes a protocol and software infrastructure that addresses many of the fundamental issues in long-distance data movement and allows the Internet infrastructure to evolve. Phoebus:
• Allows existing applications to utilize dynamic circuit allocation with no changes.
• Allows adaptation to segment-specific transport protocols.
• Automatically improves end-to-end performance without extensive host tuning.
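A toy numeric model hints at why segment-by-segment articulation can help: loss-bound TCP throughput falls with round-trip time (the simplified Mathis model, throughput ≈ MSS/(RTT·√loss)), so terminating each segment at a gateway replaces one long-RTT connection with several short-RTT ones. The numbers below are made up for illustration and say nothing about Phoebus's actual measured gains.

```python
# Toy model: compare one long-RTT TCP connection against the slowest of
# several short-RTT segments, using the simplified Mathis throughput bound.
import math

def tcp_throughput(mss_bytes, rtt_s, loss):
    # Simplified Mathis et al. model: throughput ~ MSS / (RTT * sqrt(loss)).
    return mss_bytes / (rtt_s * math.sqrt(loss))

mss, loss = 1460, 1e-4
end_to_end = tcp_throughput(mss, 0.120, loss)       # one 120 ms path
segments = [0.040, 0.040, 0.040]                    # three 40 ms segments
segmented = min(tcp_throughput(mss, r, loss) for r in segments)
speedup = segmented / end_to_end                    # ~3x in this toy model
```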
Internet2 Phoebus Page:


PXFS: Dynamic data path management for vertical storage hierarchy

Working in collaboration with Clemson University, CREST is investigating a unique approach to parallel I/O in which traditionally disjoint memory and storage namespaces are transparently and semantically unified. The goal of this National Science Foundation (NSF) sponsored project is to achieve persistency of computational objects.



Working in collaboration with Louisiana State University, New Mexico State University, and Sandia National Laboratory, this National Science Foundation (NSF) project is focused on providing a fundamental new understanding of large-scale graph processing and how to build scalable systems that efficiently solve large-scale graph problems. This research characterizes processing overheads and the limits of graph processing scalability, develops performance models that properly capture graph algorithms, defines the co-design process for the development of graph-specific hardware, and experimentally verifies CREST's approach with a prototype execution environment.


Real-time Semantics for the ParalleX Execution Model to Enable Single-Image Multicore Embedded Computing

Sponsored by the National Science Foundation, this project seeks to span the gap between supercomputers and embedded control computers through the conceptual bridge of a new execution model and its surrogate runtime system software and programming interface. The goal of this project is to extend the applicability of the ParalleX execution model to the domain of embedded computers by incorporating real-time semantics as an intrinsic property of the model.



Stencil

Funded by the Defense Advanced Research Projects Agency (DARPA), CREST is working on Stencil, a system based on a declarative language for building visualizations of dynamic data. Stencil is used internally to explore the actual runtime behavior of AM++ applications; these explorations will assist further algorithm development and tuning. Stencil is also used by the Epidemics Cyberinfrastructure Tool to display the predictions of epidemiology models.


XPS: FP: Language Support for the ParalleX Execution Model

The XPS program, funded by the National Science Foundation, aims to support groundbreaking research leading to a new era of parallel computing. XPS seeks research that re-evaluates, and possibly re-designs, the traditional computer hardware and software stack for today's heterogeneous parallel and distributed systems, and that explores new holistic approaches to parallelism and scalability.


XPRESS: eXascale Programming Environment and System Software

Funded by the Department of Energy (DOE), XPRESS will develop a revolutionary software system to enable Exascale computing for DOE mission-critical applications through a three-year research project led by Sandia National Laboratories. This collaborative effort includes CREST, along with Louisiana State University, the University of Houston, the University of Oregon, RENCI at UNC Chapel Hill, Oak Ridge National Laboratory, and Lawrence Berkeley National Laboratory. These institutions are working to develop and deliver OpenX, a system software stack for future DOE systems and applications. The conceptual and developmental components of this project will provide unprecedented capability as DOE moves toward the Exascale era.