Junsik Shin

Research

I work at the intersection of systems and high-performance computing, with particular focus on:

  • High-performance computing applications
  • Scalability on heterogeneous clusters with accelerators such as GPUs and FPGAs

Education

M.S. / Ph.D. in Computer Science and Engineering
Seoul National University
Advisor: Prof. Jaejin Lee
B.S. in Electrical and Computer Engineering
Seoul National University

Scholarships

Yongwoon Scholarship
Yongwoon Scholarship Foundation

Honors & Awards

2nd Place, Large Language Model Track — Samsung Computer Engineering Challenge
Fastest inference on HellaSwag with LLaMA-30B on a server with four NVIDIA Tesla V100 GPUs.
Grand Prize, Open Innovation Contest for AXDIMM Technology
Synergistic approach for systems with AXDIMMs, GPUs, and NVMe devices.

Publications

To be updated.

Experience

Teaching Assistant — Seoul National University
  • 2023 Scalable High-Performance Computing (M1522.006700, Fall)
  • 2024 Scalable High-Performance Computing (M1522.006700, Spring)
  • 2024 Scalable High-Performance Computing (M1522.006700, Fall)
  • 2025 Scalable High-Performance Computing (M1522.006700, Fall)
Teaching Assistant — Samsung
  • 2022 System Architect / Expert Course — Parallel Programming
  • 2023 System Architect / Expert Course — Parallel Programming
  • 2024 System Architect / Expert Course — Parallel Programming
  • 2025 System Architect / Expert Course — Parallel Programming
Teaching Assistant — Accelerator Programming School
  • 2023 Accelerator Programming Winter School
  • 2023 Accelerator Programming Summer School
  • 2024 Accelerator Programming Winter School
  • 2024 Accelerator Programming Summer School
  • 2025 Accelerator Programming Winter School
  • 2025 Accelerator Programming Summer School
Volunteer
  • 2022 Student Volunteer, PPoPP '22 — Principles and Practice of Parallel Programming

Patents

Quantum Circuit Simulation Method Based on CXL Memory
KR Patent Application No. 10-2025-0216406
Distributed Processing System and Method for Graph Convolutional Network Inference
PCT Patent Application No. PCT/KR2025/020107
Sparse-Dense Matrix Multiplication System and Method Using High-Bandwidth Memory (HBM)
PCT Patent Application No. PCT/KR2025/018974
Method and Apparatus for Searching Collective Communication Paths in Heterogeneous Cluster Systems
PCT Patent Application No. PCT/KR2024/011432
Sparse Matrix Multiplication System Using HBM
KR Patent Application No. 10-2024-0179919
Distributed Inference System for Graph Convolutional Networks Using Multiple FPGAs and GPUs
KR Patent Application No. 10-2024-0179918
Remote Computing Device and Data Storage System
US Patent Application No. 23P0740-US
Low-Latency Remote Storage Device Driver
KR Patent Application No. 10-2023-0147969
General-Purpose Arbiter for FPGA Storage Drivers
KR Patent Application No. 10-2022-0173527
Low-Latency Storage Device Driver
KR Patent Application No. 10-2022-0173526