SCS Lab has transitioned into Gnosis Research Center (GRC). This website is archived. Please visit the new website at https://grc.iit.edu.

Software, Tools, and Frameworks

Hermes: A heterogeneous-aware, multi-tiered, dynamic, and distributed I/O buffering system that aims to significantly accelerate I/O performance

Modern High-Performance Computing (HPC) systems add extra layers to the deep memory and storage hierarchy (DMSH) to increase I/O performance. However, each layer of the DMSH is an independent heterogeneous system, and data movement among the layers is significantly more complex. Hermes is a middleware service that enables, manages, supervises, and, in some sense, extends I/O buffering to fully integrate the DMSH. More information about the Hermes project can be found here.
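
To illustrate the idea of multi-tiered buffering, here is a minimal sketch of a placement decision across DMSH tiers. The tier names, capacities, and greedy policy are made-up assumptions for illustration; they are not Hermes's actual API or placement algorithm.

```python
# Illustrative sketch only: a toy data-placement decision across DMSH tiers.
# Tier names, capacities, and the policy below are hypothetical assumptions,
# not the actual Hermes API or algorithm.
from dataclasses import dataclass

@dataclass
class Tier:
    name: str
    bandwidth_gbps: float   # rough sequential bandwidth
    capacity_mb: float      # total capacity
    used_mb: float = 0.0

    def free_mb(self) -> float:
        return self.capacity_mb - self.used_mb

def place_buffer(size_mb: float, tiers: list[Tier]) -> Tier:
    """Place a buffered write in the fastest tier that still has room,
    falling back to slower tiers (and ultimately the PFS) when full."""
    for tier in sorted(tiers, key=lambda t: -t.bandwidth_gbps):
        if tier.free_mb() >= size_mb:
            tier.used_mb += size_mb
            return tier
    raise RuntimeError("no tier has room; flush to the parallel file system")

dmsh = [
    Tier("DRAM", 90.0, 1024),
    Tier("NVMe-burst-buffer", 6.0, 16384),
    Tier("PFS", 1.0, float("inf")),
]
print(place_buffer(512, dmsh).name)   # -> DRAM
```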

PortHadoop: Support Direct HPC Data Access for Hadoop Applications

PortHadoop is a software system that supports direct HPC (High Performance Computing) data access from Hadoop environments. HPC-generated data often need the data-processing power of Cloud computing for analysis, while Cloud computing applications, such as Deep Learning, need HPC's compute power. There is a growing demand for converging HPC and Cloud computing, and PortHadoop is designed and developed to meet that demand. A key component of PortHadoop is the concept of a 'virtual block', which virtually maps files on a remote HPC PFS (Parallel File System) into HDFS (Hadoop Distributed File System) in the Cloud environment and enables Hadoop applications to access data residing on the remote PFS directly, transparently, and seamlessly. PortHadoop has recently been extended to PortHadoop-R, which provides a more user-friendly interface and embraces R's capability and versatility in data analysis and visualization. PortHadoop-R is equipped with a novel, efficient strategy for diagnosis, subsetting, and visualization of HPC data under Hadoop and Spark environments, and it overlaps data processing with data transfer. More information about the PortHadoop project can be found here.
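
As a rough illustration of the 'virtual block' idea, the sketch below maps a remote PFS file onto HDFS-sized block descriptors without copying any data. The names VirtualBlock and virtual_blocks and the 128 MiB block size are assumptions for illustration, not PortHadoop's actual implementation.

```python
# Illustrative sketch only: describing a remote PFS file as fixed-size
# "virtual blocks" so that map tasks can read it directly. The class and
# field names are hypothetical, not PortHadoop's actual code.
from dataclasses import dataclass

HDFS_BLOCK_SIZE = 128 * 1024 * 1024  # 128 MiB, a common HDFS block size

@dataclass
class VirtualBlock:
    pfs_path: str   # path of the file on the remote parallel file system
    offset: int     # byte offset of this block within the PFS file
    length: int     # number of bytes covered by this block

def virtual_blocks(pfs_path: str, file_size: int,
                   block_size: int = HDFS_BLOCK_SIZE) -> list[VirtualBlock]:
    """Map a remote PFS file onto HDFS-sized virtual blocks without copying it."""
    blocks = []
    for offset in range(0, file_size, block_size):
        length = min(block_size, file_size - offset)
        blocks.append(VirtualBlock(pfs_path, offset, length))
    return blocks

# A 300 MiB file on the PFS becomes three virtual blocks:
for b in virtual_blocks("/pfs/climate/run42.nc", 300 * 1024 * 1024):
    print(b.offset, b.length)
```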

IOSIG: I/O Signatures Based Data Access Optimization

I/O Signature is a pre-defined notation that provides a simple and clear representation of data access patterns. The IOSIG software characterizes the I/O access patterns of an application in two steps: 1) a trace-collection tool records all of the application's I/O operations; 2) an offline analysis tool derives the I/O Signature from the trace. Using the information in I/O Signatures, we can apply several optimizations to I/O systems, such as data prefetching, I/O scheduling, and cost-model-based data access optimization. (IOSIG flyer)
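
For example, a very small sketch of the second (offline analysis) step might classify a trace of (offset, size) requests as sequential, strided, or random. The trace format and signature fields here are simplified assumptions, not the actual IOSIG notation.

```python
# Illustrative sketch only: deriving a simple access-pattern summary
# (an "I/O signature") from an offline trace of (offset, size) requests.
# The trace format and signature fields are hypothetical simplifications.

def classify_pattern(trace):
    """trace: list of (offset, size) tuples in request order."""
    if len(trace) < 2:
        return {"pattern": "unknown", "requests": len(trace)}
    strides = [trace[i + 1][0] - (trace[i][0] + trace[i][1])
               for i in range(len(trace) - 1)]
    if all(s == 0 for s in strides):
        pattern = "sequential"
    elif len(set(strides)) == 1:
        pattern = f"strided (stride={strides[0]})"
    else:
        pattern = "random"
    return {"pattern": pattern,
            "requests": len(trace),
            "avg_size": sum(size for _, size in trace) / len(trace)}

# Fixed-size reads with a constant gap between them:
trace = [(0, 4096), (8192, 4096), (16384, 4096), (24576, 4096)]
print(classify_pattern(trace))   # -> strided (stride=4096)
```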

PFS-IOC: Server-side I/O-Coordination in Parallel File System

Parallel file systems have become a common component of modern high-end computers to mask the ever-increasing gap between disk data access speed and CPU computing power. Recognizing that an I/O request will not complete until all involved file servers in the parallel file system have completed their parts, we propose a server-side I/O coordination scheme for parallel file systems. The basic idea is to coordinate the file servers to serve one application at a time in order to reduce completion time while maintaining server utilization and fairness. A window-wide coordination concept is introduced to serve this purpose.
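
A minimal sketch of the window idea, assuming a simple in-memory queue: within one coordination window, a server groups pending requests by application and serves the oldest application first. The data layout and window policy are illustrative assumptions, not the scheme's actual implementation.

```python
# Illustrative sketch only: a file server's queue re-ordered so that, within
# one coordination window, requests from a single application are served
# together before moving to the next application. Names are hypothetical.
from collections import defaultdict, deque

def coordinate_window(pending, window_size):
    """pending: list of (app_id, request) in arrival order.
    Returns the next `window_size` requests, grouped by application so that
    every server in the PFS services the same application at the same time."""
    by_app = defaultdict(deque)
    arrival_order = []
    for app_id, req in pending:
        if app_id not in by_app:
            arrival_order.append(app_id)
        by_app[app_id].append(req)
    window = []
    for app_id in arrival_order:           # oldest application first (fairness)
        while by_app[app_id] and len(window) < window_size:
            window.append((app_id, by_app[app_id].popleft()))
        if len(window) >= window_size:
            break
    return window

pending = [("A", "r1"), ("B", "r2"), ("A", "r3"), ("C", "r4"), ("B", "r5")]
print(coordinate_window(pending, 3))   # A's requests r1, r3 first, then B's r2
```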

ORCHECK

ORCHECK stands for "ORCHEstrated CHECKpointing". Motivated by the recognition that I/O contention is a dominant factor impeding the performance of parallel checkpointing, ORCHECK takes a systematic approach to improving that performance. The main idea of ORCHECK is to orchestrate concurrent checkpoints in an optimized and controllable way so as to minimize I/O contention. The targeted platform for ORCHECK is large-scale parallel computing systems with multi-core architectures and a parallel file system such as PVFS2.
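
As a loose illustration of controlling concurrent checkpoints, the sketch below throttles checkpoint writers with a semaphore so that only a few processes hit the parallel file system at once. The concurrency limit and function names are assumptions; this is not ORCHECK's actual orchestration algorithm.

```python
# Illustrative sketch only: throttling concurrent checkpoint writers with a
# semaphore to limit simultaneous pressure on the parallel file system.
# The concurrency limit and names are hypothetical, not ORCHECK's algorithm.
import threading, time

MAX_CONCURRENT_WRITERS = 2          # assumed contention limit of the PFS
pfs_slots = threading.BoundedSemaphore(MAX_CONCURRENT_WRITERS)

def checkpoint(rank: int, data_mb: int):
    with pfs_slots:                  # wait for an I/O slot before writing
        print(f"rank {rank}: writing {data_mb} MB checkpoint")
        time.sleep(data_mb / 1000)   # stand-in for the actual write
        print(f"rank {rank}: done")

threads = [threading.Thread(target=checkpoint, args=(r, 200)) for r in range(6)]
for t in threads: t.start()
for t in threads: t.join()
```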

GHS: Grid Harvest Service

GHS stands for Grid Harvest Service. It is a performance evaluation and task scheduling system for solving large-scale applications in a shared environment. GHS is based on a novel performance prediction model and a set of task scheduling algorithms, and it supports three classes of task scheduling: single task, parallel processing, and meta-task. The Grid Harvest Service system comprises five primary subsystems: performance evaluation, performance measurement, task allocation, task scheduling, and execution management. Working in coordination, these subsystems provide the services needed to harvest Grid computing power.
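
As an illustration of prediction-driven scheduling for a meta-task (a bag of independent tasks), the sketch below greedily assigns each task to the machine with the earliest predicted finish time. The prediction values and greedy policy are assumptions for illustration, not GHS's actual model or algorithms.

```python
# Illustrative sketch only: scheduling a meta-task onto shared machines using
# predicted completion times. The rates and the greedy policy are hypothetical,
# not GHS's actual prediction model or scheduling algorithms.

def schedule_meta_task(task_sizes, predicted_rates):
    """task_sizes: work per task; predicted_rates: predicted available
    computing power of each shared machine (work units per second).
    Greedily assigns each task to the machine that would finish it earliest."""
    finish_time = {m: 0.0 for m in predicted_rates}
    assignment = {}
    for task, size in sorted(task_sizes.items(), key=lambda kv: -kv[1]):
        best = min(finish_time,
                   key=lambda m: finish_time[m] + size / predicted_rates[m])
        finish_time[best] += size / predicted_rates[best]
        assignment[task] = best
    return assignment, max(finish_time.values())

tasks = {"t1": 100, "t2": 60, "t3": 60, "t4": 30}
rates = {"node-a": 10.0, "node-b": 4.0}    # node-a is predicted less loaded
print(schedule_meta_task(tasks, rates))    # assignment and predicted makespan
```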

Network Bandwidth Predictor (NBP)

NBP is an online network performance forecasting system built on neural network technology. Together, GHS and NBP provide a full-function task scheduling system for distributed computing.

HPCM: High Performance Computing Mobility

HPCM stands for High Performance Computing Mobility. It is a middleware that supports user-level heterogeneous process migration of legacy codes written in C, Fortran, or other stack-based programming languages by annotating the source code. It consists of several subsystems that support the main functionalities of heterogeneous process migration, including source-code pre-compiling, data collection and restoration, communication coordination and redirection, process monitoring, process scheduling, I/O redirection, and a user-friendly interface.

Scarlet: A Context Aware Infrastructure

Pervasive Computing is one of the most challenging research areas in Computer Science. Its ultimate goal is to provide 'Human Centered Computing' by understanding the user's environmental context. Most computer programs are controlled strictly by program parameters and user input: they either completely ignore this useful context information or are very difficult to extend to new platforms because of tight coupling to context sources and underlying platforms. Scarlet is a context-aware infrastructure designed to capture environmental context from devices in the environment and deliver it to context-aware applications, providing modularity, platform independence, and extensibility. It is implemented in Python and tested under Linux and Windows environments.
For more information refer to: Scarlet: A Framework for Context Aware Computing  
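
A minimal sketch of the decoupling idea, in Python since that is Scarlet's implementation language: a small publish/subscribe bus routes context events from device adapters to subscribed applications. The class and method names are hypothetical, not Scarlet's actual API.

```python
# Illustrative sketch only: decoupling context sources from context-aware
# applications through a small publish/subscribe dispatcher. Class and method
# names are hypothetical, not Scarlet's actual API.
from collections import defaultdict

class ContextBus:
    """Routes context events (e.g. location, temperature) from device
    adapters to any application that subscribed, so applications never
    talk to the underlying devices or platform directly."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, context_type, callback):
        self._subscribers[context_type].append(callback)

    def publish(self, context_type, value, source):
        for callback in self._subscribers[context_type]:
            callback({"type": context_type, "value": value, "source": source})

bus = ContextBus()
bus.subscribe("location", lambda ctx: print("app sees:", ctx))
bus.publish("location", "Stuart Building, Room 112i", source="wifi-positioning")
```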

Patents

Label-based Data Representation I/O Process and System

Abstract: A system and method for executing input/output (I/O) tasks for clients in a distributed computing system. One or more I/O requests made by a client are received. The operation instructions for the request data in the I/O requests are separated from the request data. A data representation called a data label (or label) is created for executing the operation instructions of the I/O requests. A data label corresponds to each I/O request and includes a unique identifier, information about the source and/or destination of the request data, and an operation instruction separated from the request data. The data label is pushed into a distributed label queue and is dispatched to an individual worker node according to a scheduling policy. The worker node executes the I/O tasks by executing the dispatched data label. The system and method can execute I/O tasks independently of, and decoupled from, the client applications.
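
A minimal sketch of the label idea, under the assumption of a single in-process queue standing in for the distributed label queue: each I/O request becomes a self-contained label that a worker executes independently of the client. The field names and worker loop are illustrative, not the patented implementation.

```python
# Illustrative sketch only: representing I/O requests as self-contained
# "data labels" that are queued and executed by worker nodes, decoupled from
# the client. Names and the in-process queue are hypothetical stand-ins.
import queue, uuid
from dataclasses import dataclass

@dataclass
class DataLabel:
    label_id: str      # unique identifier
    source: str        # where the request data comes from
    destination: str   # where the result should go
    operation: str     # the operation instruction, separated from the data

label_queue = queue.Queue()   # stand-in for the distributed label queue

def submit_io(source, destination, operation):
    label = DataLabel(str(uuid.uuid4()), source, destination, operation)
    label_queue.put(label)     # client returns immediately; workers do the I/O

def worker_loop():
    while not label_queue.empty():
        label = label_queue.get()
        print(f"worker executing {label.operation}: "
              f"{label.source} -> {label.destination}")

submit_io("/client/buf0", "/pfs/out.dat", "write")
submit_io("/pfs/in.dat", "/client/buf1", "read")
worker_loop()
```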

Methods and Devices for Layered Performance Matching in Hierarchical Memory

Abstract: A method of optimizing memory access in a hierarchical memory system. The method includes determining a request rate from an i'th layer of the hierarchical memory system for each of n layers in the hierarchical memory system. The method also includes determining a supply rate from an (i+1)'th layer of the hierarchical memory system for each of the n layers in the hierarchical memory system. The supply rate from the (i+1)'th layer of the hierarchical memory system corresponds to the request rate from the i'th layer of the hierarchical memory system. The method further includes adjusting a set of computer architecture parameters of the hierarchical memory system or a schedule associated with an instruction set to utilize heterogeneous computing resources within the hierarchical memory system to match a performance of each adjacent layer of the hierarchical memory system.
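
As a rough numerical illustration of matching adjacent layers, the sketch below propagates the miss traffic of layer i down as the request rate of layer i+1. The layer parameters are made-up numbers, not values or methods from the patent.

```python
# Illustrative sketch only: checking whether each memory layer's supply rate
# keeps up with the request rate coming from the layer above it. The layer
# parameters below are invented for illustration, not taken from the patent.

layers = [
    # name,       requests issued per cycle, fraction satisfied locally
    ("L1 cache",  2.00,                      0.90),
    ("L2 cache",  None,                      0.80),  # rate derived from L1 misses
    ("DRAM",      None,                      1.00),
]

def layered_rates(layers):
    """Propagate the miss traffic of layer i down as the request rate of
    layer i+1 and report each layer's demanded supply rate."""
    request_rate = layers[0][1]
    report = []
    for name, _, hit_fraction in layers:
        supplied_locally = request_rate * hit_fraction
        passed_down = request_rate - supplied_locally
        report.append((name, round(request_rate, 3), round(passed_down, 3)))
        request_rate = passed_down
    return report

for name, demand, miss_traffic in layered_rates(layers):
    print(f"{name}: must supply {demand} req/cycle, passes down {miss_traffic}")
```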

"Timing-Aware Data Prefetching for Microprocessors," United States Letters Patent No. 8,856,452 (Oct. 7, 2014), Serial No. 13/149,425, United States Department of Commerce, Patent and Trademark Office, (with Y. Chen and H. Zhu). 

Abstract: A method and apparatus for prefetching data from memory for a multicore data processor. A prefetcher issues a plurality of requests to prefetch data from a memory device to a memory cache. Consecutive cache misses are recorded in response to at least two of the plurality of requests. A time between the cache misses is determined and a timing of a further request to prefetch data from the memory device to the memory cache is altered as a function of the determined time between the two cache misses.
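
A minimal sketch of the timing idea, assuming a simple rule: if consecutive misses arrive faster than memory can respond, prefetch further ahead. The adjustment rule and parameters are illustrative assumptions, not the patented logic.

```python
# Illustrative sketch only: using the measured time between two consecutive
# cache misses to decide how far ahead to issue the next prefetch.
# The adjustment rule is a hypothetical simplification of the idea.

def next_prefetch_distance(miss_times, memory_latency, current_distance=1):
    """miss_times: timestamps (cycles) of consecutive cache misses.
    If misses arrive faster than memory can respond, prefetch further ahead;
    if they arrive slowly, prefetching one block ahead is enough."""
    if len(miss_times) < 2:
        return current_distance
    miss_interval = miss_times[-1] - miss_times[-2]
    if miss_interval == 0:
        return current_distance
    # Issue prefetches early enough that data arrives before it is needed.
    needed = -(-memory_latency // miss_interval)     # ceiling division
    return max(current_distance, needed)

# Misses 40 cycles apart with a 200-cycle memory latency -> prefetch 5 ahead.
print(next_prefetch_distance([1000, 1040], memory_latency=200))  # -> 5
```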

"Systems, Methods, and Protocols for Process Migration and Group Membership Management," Patent No. 8,335,813 (Dec. 2012), Serial No. 12/045,546, United States Department of Commerce, Patent and Trademark Office (with D. Cong) 

Abstract: A system, method, and set of protocols for dynamic group communication are provided for enabling dynamic process migration and dynamic group membership management. A process in a group receives and distributes a migration signal. Group communication continues while the processes in the group asynchronously reach a global superstep and then a synchronization point. The processes then spawn a new process on a new device and update group membership information. The new process operates in continuous execution with the new group.

"Memory Server," Patent No. 7.865,570 (January, 2011), Serial No. 11/215,321, United States Department of Commerce, Patent and Trademark Office

Abstract: A memory server provides data access as a service to clients and has a memory service architecture and components for removing data management burdens from the client processor and providing increased speed and utility for the client through aggressive prediction of client memory requirements and fast provision of data.

"Communication and Process Migration Protocols for Distributed Heterogeneous Computing," United States Letters Patent No. 7,065,549 (June 20, 2006), United States Department of Commerce, Patent and Trademark Office (with K. Chanchio)

Abstract: Communication and Process Migration Protocols instituted in an independent layer of a virtual machine environment allow for heterogeneous or homogeneous process migration. The protocols manage message traffic for processes communicating in the virtual machine environment. The protocols manage message traffic for migrating processes so that no message traffic is lost during migration and proper message order is maintained for the migrating process. In addition to correctness of migration operations, low overhead and high efficiency are achieved for supporting scalable, point-to-point communications.

"Data Collection and Restoration for Homogeneous or Heterogeneous Process Migration," Patent No. 6442663 (August 27, 2002), Serial No. 09/100,364, US patent, United States Department of Commerce, Patent and Trademark Office, (with K. Chanchio, through Louisiana State University)

Abstract: A technique for process migration between computers is disclosed, particularly for collecting the memory contents of a process on one computer in a machine-independent information stream, and for restoring the data content from the information stream to the memory space of a new process on a different computer. The data collection and restoration method enables sophisticated data structures such as indirect memory references to be migrated appropriately between heterogeneous computer environments. 
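
As a loose illustration of machine-independent collection and restoration, the sketch below flattens a linked structure into a text stream by replacing pointers with indices and rebuilds it on the other side. The encoding is a hypothetical simplification, far simpler than the patented technique.

```python
# Illustrative sketch only: flattening a linked structure into a
# machine-independent stream by replacing pointers with indices, then
# rebuilding it on the receiving side. This encoding is a made-up
# simplification, not the patented data collection/restoration method.
import json

class Node:
    def __init__(self, value):
        self.value = value
        self.next = None          # an indirect (pointer) reference

def collect(head):
    """Walk the list and encode each 'pointer' as the index of its target."""
    nodes, index = [], {}
    n = head
    while n is not None:
        index[id(n)] = len(nodes)
        nodes.append(n)
        n = n.next
    records = [{"value": n.value,
                "next": index[id(n.next)] if n.next else None} for n in nodes]
    return json.dumps(records)    # text stream, readable on any architecture

def restore(stream):
    records = json.loads(stream)
    nodes = [Node(r["value"]) for r in records]
    for node, r in zip(nodes, records):
        node.next = nodes[r["next"]] if r["next"] is not None else None
    return nodes[0] if nodes else None

a = Node(1); b = Node(2); a.next = b
head = restore(collect(a))
print(head.value, head.next.value)   # -> 1 2
```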

Address

Stuart Building
Room 112i and 010
10 W. 31st Street
Chicago, Illinois 60616

Contacts

Email: scslab@iit.edu
Phone: +1 312 567 6885