Thursday, January 29, 2026
Abstract: Drawing on our many years of experience in scientific data visualization, we outline the current state of the field at supercomputing centers. Using examples from climatology, astrophysics, fluid dynamics, and other domains, we highlight the key challenges involved in deploying visualization and analysis capabilities at scale. Our toolbox spans modern I/O systems, concurrent processing techniques, advanced rendering libraries, and integrated visualization environments, all designed to minimize data movement, reveal critical features, and help scientists seamlessly incorporate data visualization into HPC workflows.
Abstract: To overcome the difficulty of visualizing large-scale simulation results on HPC systems, we implemented in-situ and in-transit visualization in two in-house computational fluid dynamics (CFD) solvers: upacs-LES and LS-FLOW-HO. First, VisIt/Libsim (in-situ) and ADIOS2 (in-transit) were integrated into upacs-LES, a high-order structured-grid solver used for launch-vehicle external flow and plume-acoustics simulations. Both approaches support batch and interactive visualization. The in-situ approach is easy to implement and use, whereas the in-transit approach offers lower execution overhead and more flexibility in exploiting heterogeneous HPC systems. More recently, Kombyne, developed by Intelligent Light, was integrated into LS-FLOW-HO, a high-order unstructured hexahedral-grid solver for liquid rocket engine combustors. In-situ visualization has become indispensable for engineering analysis and has been successfully applied to a full-scale combustor simulation on the supercomputer Fugaku.
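The in-situ versus in-transit trade-off described above can be sketched in a few lines of plain Python. This is an illustrative analogue only, not the Libsim or ADIOS2 API: a background thread stands in for the separate staging resources a real in-transit system (such as ADIOS2's SST engine) would use, and render_stats stands in for a visualization kernel.

```python
import threading
import queue

def render_stats(step, field):
    """Stand-in for a visualization kernel (e.g. an isosurface extraction)."""
    return step, min(field), max(field)

def simulate(n_steps):
    """Toy solver loop producing a small 1-D 'field' each step."""
    for step in range(n_steps):
        yield step, [float((step + i) % 7) for i in range(16)]

# In-situ: the analysis runs inside the solver loop, sharing its memory.
# Easy to adopt, but rendering time adds directly to every solver step.
def run_in_situ(n_steps):
    return [render_stats(s, f) for s, f in simulate(n_steps)]

# In-transit: the solver hands each step to a separate consumer and moves
# on, so analysis can overlap computation and run on different hardware.
def run_in_transit(n_steps):
    q = queue.Queue()
    results = []

    def worker():
        while True:
            item = q.get()
            if item is None:
                return
            results.append(render_stats(*item))

    t = threading.Thread(target=worker)
    t.start()
    for step, field in simulate(n_steps):
        q.put((step, field))   # hand off, then keep computing
    q.put(None)                # sentinel: simulation finished
    t.join()
    return results
```

Both paths produce the same statistics; the difference is purely where and when the analysis runs, which is exactly why the in-transit variant costs the solver less time per step.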
Abstract: This talk presents an integrated framework for in-situ steering of large-scale CFD simulations using xR technologies. We build on Particle-Based Volume Rendering (PBVR), which converts volumetric fields into compact particle data, enabling remote, high-frame-rate visualization without regenerating geometry for each viewpoint. PBVR is parallelized on supercomputers and coupled with a file-based communication layer (“In-Situ PBVR”) that streams visualization parameters and particle data between running batch jobs and user clients. This allows multivariate volume rendering, including derived quantities such as invariants of the velocity-gradient tensor, to be explored interactively during simulations. We extend the framework to in-situ steering, where users modify boundary conditions and source terms while monitoring 3D fields and time-series plots, demonstrated on an Oklahoma City contaminant-dispersion case and achieving real-time feedback compared with the hours to days of conventional workflows. Using OpenXR, the system supports head-mounted displays and sustains 60–90 FPS for VR exploration of simulation spaces, and is further integrated with OpenFOAM for industrial studies. Finally, we show unified visualization of simulation results and large 3D point clouds, including tunnel surveys for geological disposal, toward digital-twin applications on modern HPC systems.
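The file-based communication pattern behind In-Situ PBVR can be sketched with the standard library: the client drops a small parameter file into a shared directory, and the running batch job picks it up each step and writes back a compact particle payload. The file names, JSON format, and keys below are hypothetical illustrations, not In-Situ PBVR's actual protocol.

```python
import json
import os
import tempfile

def client_set_params(workdir, opacity, variable):
    """Client side: publish visualization parameters for the running job."""
    with open(os.path.join(workdir, "params.json"), "w") as f:
        json.dump({"opacity": opacity, "variable": variable}, f)

def solver_step(workdir, step, field):
    """Solver side: read the latest parameters, emit particle data."""
    params = {"opacity": 1.0, "variable": "pressure"}  # defaults
    path = os.path.join(workdir, "params.json")
    if os.path.exists(path):
        with open(path) as f:
            params.update(json.load(f))
    # Convert the field into a (tiny) particle list: one particle per cell,
    # carrying a position index and an opacity-scaled value.
    particles = [
        {"x": i, "value": v * params["opacity"]} for i, v in enumerate(field)
    ]
    with open(os.path.join(workdir, f"particles_{step:04d}.json"), "w") as f:
        json.dump({"variable": params["variable"], "particles": particles}, f)
    return params["variable"], len(particles)

workdir = tempfile.mkdtemp()
client_set_params(workdir, opacity=0.5, variable="vorticity")
variable, count = solver_step(workdir, 0, [1.0, 2.0, 4.0])
```

Because the exchange goes through files, the batch job and the interactive client need no network connection to each other, which is what makes this scheme workable under typical supercomputer batch-queue restrictions.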
Abstract: High-performance simulations face significant challenges in managing the massive datasets produced by exascale systems, exacerbated by the growing performance gap between computational power and I/O bandwidth. In-situ analysis addresses these limitations by processing data immediately upon generation, bypassing disk I/O bottlenecks and leveraging high-performance computing (HPC) resources directly. However, implementing in-situ approaches often requires complex configurations and the development of specialized parallel analysis codes. PDI offers a lightweight and flexible solution by decoupling I/O, filtering, and analysis logic from the simulation code. Through a declarative configuration system and a plugin-based architecture, PDI enables simulation developers to expose data buffers and trigger events without embedding I/O decisions directly into their application. DEISA (Dask-Enabled In-Situ Analytics) extends PDI's capabilities by coupling MPI-parallel simulation codes with Dask-based analysis workflows. It optimizes data transfer between simulation and analysis components during execution and provides users with enhanced control through Python's extensive ecosystem.
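PDI's central idea, decoupling I/O and analysis decisions from the solver through exposed buffers and named events, can be mirrored in a short stdlib sketch. This is an analogue of the pattern only: real PDI is driven by a YAML specification tree and called from C/Fortran, and the class and handler names below are invented for illustration. In a DEISA-style setup, a handler like the one shown would forward chunks to a Dask analysis workflow instead of computing locally.

```python
class DataInterface:
    """Toy analogue of PDI: solvers expose buffers and fire events;
    what happens to the data is decided by a separate spec, not the solver."""

    def __init__(self, spec):
        # spec maps event name -> list of handlers (the "plugins").
        self.spec = spec
        self.buffers = {}

    def expose(self, name, data):
        self.buffers[name] = data

    def event(self, name):
        return [handler(self.buffers) for handler in self.spec.get(name, [])]

# Analysis logic the solver never references directly.
def mean_temperature(buffers):
    field = buffers["temperature"]
    return sum(field) / len(field)

# Solver side: no I/O or analysis decisions, just expose + event.
def run_solver(pdi, n_steps):
    means = []
    for step in range(n_steps):
        field = [20.0 + step + i for i in range(4)]  # toy field update
        pdi.expose("temperature", field)
        means.extend(pdi.event("timestep_done"))
    return means

pdi = DataInterface({"timestep_done": [mean_temperature]})
means = run_solver(pdi, 2)
```

Swapping the analysis (say, replacing the mean with a spectrum, or redirecting output to disk) only changes the spec passed to DataInterface; the solver loop is untouched, which is the decoupling benefit the abstract describes.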