While Internet-scale services demand enormous amounts of system resources, data centers are, ironically, plagued by resource utilization imbalance across hosts. To lift the fundamental limits of the single-host paradigm, data centers are moving toward disaggregated resource designs, where resources are maintained in pools and accessed remotely through high-performance interconnect technologies such as RDMA and CXL. Our research group primarily focuses on realizing resource disaggregation, especially for system memory. We are also exploring operating system design and optimization for future computing environments.
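One way memory disaggregation can be layered under unmodified applications is to intercept page faults in user space and fill the faulting pages from a remote memory pool. The following is a minimal sketch of that idea using Linux's userfaultfd; it is an illustration, not our system's implementation, and fetch_remote_page() is a hypothetical stand-in for what would be an RDMA read from a memory server.

    /* Minimal sketch: a virtual memory region is registered with userfaultfd;
     * on first touch, a handler thread fills the page from the "remote" pool.
     * fetch_remote_page() is a hypothetical stand-in for an RDMA READ. */
    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <linux/userfaultfd.h>
    #include <poll.h>
    #include <pthread.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    static long page_size;

    /* Hypothetical: a real system would issue an RDMA READ to the memory
     * server that owns this page; here we just fill a recognizable pattern. */
    static void fetch_remote_page(void *dst, unsigned long addr)
    {
        (void)addr;
        memset(dst, 0x42, page_size);
    }

    static void *fault_handler(void *arg)
    {
        long uffd = (long)arg;
        void *page = mmap(NULL, page_size, PROT_READ | PROT_WRITE,
                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        for (;;) {
            struct pollfd pfd = { .fd = (int)uffd, .events = POLLIN };
            poll(&pfd, 1, -1);

            struct uffd_msg msg;
            if (read(uffd, &msg, sizeof(msg)) != sizeof(msg) ||
                msg.event != UFFD_EVENT_PAGEFAULT)
                continue;

            /* "Fetch" the faulting page, then install it with UFFDIO_COPY. */
            fetch_remote_page(page, msg.arg.pagefault.address);
            struct uffdio_copy copy = {
                .dst = msg.arg.pagefault.address & ~(page_size - 1),
                .src = (unsigned long)page,
                .len = page_size,
            };
            ioctl(uffd, UFFDIO_COPY, &copy);
        }
        return NULL;
    }

    int main(void)
    {
        page_size = sysconf(_SC_PAGESIZE);

        long uffd = syscall(SYS_userfaultfd, O_CLOEXEC | O_NONBLOCK);
        struct uffdio_api api = { .api = UFFD_API };
        ioctl(uffd, UFFDIO_API, &api);

        size_t len = 16 * page_size;   /* region backed by the "remote" pool */
        char *region = mmap(NULL, len, PROT_READ | PROT_WRITE,
                            MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        struct uffdio_register reg = {
            .range = { .start = (unsigned long)region, .len = len },
            .mode  = UFFDIO_REGISTER_MODE_MISSING,
        };
        ioctl(uffd, UFFDIO_REGISTER, &reg);

        pthread_t handler;
        pthread_create(&handler, NULL, fault_handler, (void *)uffd);

        /* First touch faults; the handler supplies the page's contents. */
        printf("byte 0 = 0x%02x\n", (unsigned char)region[0]);
        return 0;
    }

A real disaggregated-memory system would additionally evict cold pages back to the pool and track per-page location metadata; the sketch above shows only the fault-time fetch path.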
As the price of high-performance computers keeps dropping, it is becoming feasible for consumers to maintain multiple machines and use them to solve compute-intensive tasks. However, application software is growing more complex, making it challenging to convert existing applications, built to run on a single machine, into ones that use multiple nodes. We propose a distributed execution environment that provides a simple yet effective way of distributing threads to remote nodes, enabling legacy applications to easily utilize system resources from multiple nodes.
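As a rough illustration of the intended programming model (the names below are hypothetical and not the environment's actual interface), a thin wrapper around pthread_create() could let a runtime decide whether each thread runs locally or on a remote node; the remote path is stubbed with a local thread so the sketch stays runnable.

    /* Rough illustration only: dex_thread_create() and its placement policy
     * are hypothetical names.  The "remote" path is stubbed with a local
     * pthread so the sketch remains runnable on a single machine. */
    #include <pthread.h>
    #include <stdio.h>

    /* Hypothetical placement decision: a real runtime would consult load and
     * memory statistics of the node pool; here every other thread is "remote". */
    static int place_remotely(int tid) { return tid % 2; }

    static int dex_thread_create(pthread_t *t, void *(*fn)(void *),
                                 void *arg, int tid)
    {
        if (place_remotely(tid)) {
            /* Stand-in: a real system would ship the thread's start routine
             * and state to a remote node and run it there transparently. */
            fprintf(stderr, "thread %d would run on a remote node\n", tid);
        }
        return pthread_create(t, NULL, fn, arg);   /* unchanged legacy path */
    }

    static void *worker(void *arg)
    {
        long id = (long)arg;
        printf("worker %ld running\n", id);
        return NULL;
    }

    int main(void)
    {
        pthread_t threads[4];
        for (long i = 0; i < 4; i++)
            dex_thread_create(&threads[i], worker, (void *)i, (int)i);
        for (int i = 0; i < 4; i++)
            pthread_join(threads[i], NULL);
        return 0;
    }

The point of the sketch is that a legacy, pthread-style program needs no structural changes; thread placement becomes a runtime decision.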
Many intelligent, mission-critical services today are built on AI and big data technologies, and they inevitably consume, process, and produce massive amounts of data every second. Such services can only be realized on top of efficient data storage and management systems. We are investigating how to build intelligent, high-performance storage systems by incorporating emerging storage technologies such as key-value SSDs and storage-class memory.
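For context, a key-value SSD exposes put/get-style commands at the device level, so an application can store variable-sized values by key without a file system or a software key-value engine in between. The sketch below illustrates only that interface shape; kv_put()/kv_get() are hypothetical names, and a small in-memory table stands in for the device.

    /* Hypothetical sketch of a key-value SSD style interface: kv_put/kv_get
     * are assumed names, and an in-memory table stands in for the device. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define MAX_ENTRIES 128

    struct kv_entry { char key[32]; void *val; size_t len; };
    static struct kv_entry store[MAX_ENTRIES];
    static int nentries;

    /* Store a value under a key; with a KV-SSD this would be a single device
     * command rather than file-system writes plus metadata updates. */
    static int kv_put(const char *key, const void *val, size_t len)
    {
        for (int i = 0; i < nentries; i++)
            if (strcmp(store[i].key, key) == 0) {
                free(store[i].val);
                store[i].val = malloc(len);
                memcpy(store[i].val, val, len);
                store[i].len = len;
                return 0;
            }
        if (nentries == MAX_ENTRIES) return -1;
        snprintf(store[nentries].key, sizeof(store[nentries].key), "%s", key);
        store[nentries].val = malloc(len);
        memcpy(store[nentries].val, val, len);
        store[nentries].len = len;
        nentries++;
        return 0;
    }

    /* Retrieve a value by key; returns the number of bytes copied, or -1. */
    static int kv_get(const char *key, void *buf, size_t buflen)
    {
        for (int i = 0; i < nentries; i++)
            if (strcmp(store[i].key, key) == 0) {
                size_t n = store[i].len < buflen ? store[i].len : buflen;
                memcpy(buf, store[i].val, n);
                return (int)n;
            }
        return -1;
    }

    int main(void)
    {
        char buf[64] = {0};
        kv_put("user:42", "alice", 6);
        int n = kv_get("user:42", buf, sizeof(buf));
        printf("got %d bytes: %s\n", n, buf);
        return 0;
    }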