HyunJae Lee

Publications

Bayesian Optimization Meets Self-Distillation

HyunJae Lee*, Heon Song*, Hyeonsoo Lee*, Gi-hyeon Lee, Suyeong Park, Donggeun Yoo

ICCV
2023

Bayesian optimization (BO) has contributed greatly to improving model performance by iteratively suggesting promising hyperparameter configurations based on observations from multiple training trials. However, only partial knowledge (i.e., the measured performances of trained models and their hyperparameter configurations) from previous trials is transferred. On the other hand, Self-Distillation (SD) only transfers partial knowledge learned by the task model itself. To fully leverage the various knowledge gained from all training trials, we propose the BOSS framework, which combines BO and SD. BOSS suggests promising hyperparameter configurations through BO and carefully selects pre-trained models from previous trials for SD, which are otherwise abandoned in the conventional BO process. BOSS achieves significantly better performance than both BO and SD across a wide range of tasks, including general image classification, learning with noisy labels, semi-supervised learning, and medical image analysis.
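
As a rough illustration of the loop described above, the sketch below pairs a BO suggestion step with self-distillation from a checkpoint kept from earlier trials; `suggest`, `train`, and `evaluate` are hypothetical placeholders, and choosing the best-scoring previous model as the teacher is an assumption for illustration, not the paper's exact selection rule.

```python
# Illustrative sketch of a BO + self-distillation loop (not the official BOSS code).
def boss(search_space, suggest, train, evaluate, num_trials):
    history = []        # (config, score) observations consumed by the BO backend
    checkpoints = []    # (score, model) pairs kept from previous trials

    for _ in range(num_trials):
        # Bayesian optimization proposes a promising configuration
        # from the observations gathered so far.
        config = suggest(search_space, history)

        # Reuse a pre-trained model from an earlier trial as the teacher;
        # here simply the best-scoring one (hypothetical selection rule).
        teacher = max(checkpoints, key=lambda c: c[0])[1] if checkpoints else None

        # Train with self-distillation from the selected teacher (if any).
        model = train(config, teacher=teacher)
        score = evaluate(model)

        history.append((config, score))
        checkpoints.append((score, model))

    # Return the best model found across all trials.
    return max(checkpoints, key=lambda c: c[0])[1]
```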

Improving Multi-fidelity Optimization with a Recurring Learning Rate for Hyperparameter Tuning

HyunJae Lee, Gihyeon Lee, Junhwan Kim, Sungjun Cho, Dohyun Kim, Donggeun Yoo

WACV
2023

Despite the evolution of Convolutional Neural Networks (CNNs), their performance is surprisingly dependent on the choice of hyperparameters. However, it remains challenging to efficiently explore a large hyperparameter search space due to the long training times of modern CNNs. Multi-fidelity optimization enables the exploration of more hyperparameter configurations within a given budget by terminating unpromising configurations early. However, it often results in selecting a sub-optimal configuration, as training with a high-performing configuration typically converges slowly in the early phase. In this paper, we propose Multi-fidelity Optimization with a Recurring Learning rate (MORL), which incorporates the CNN optimization process into multi-fidelity optimization. MORL alleviates the slow-starter problem and achieves a more precise low-fidelity approximation. Our comprehensive experiments on general image classification, transfer learning, and semi-supervised learning demonstrate the effectiveness of MORL over other multi-fidelity optimization methods such as the Successive Halving Algorithm (SHA) and Hyperband. Furthermore, it achieves significant performance improvements over a hand-tuned hyperparameter configuration within a practical budget.
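
The sketch below is one possible reading of the idea, assuming the recurring learning rate means a decay schedule that completes a full cycle within each fidelity level of a successive-halving loop, so that intermediate checkpoints behave more like converged models; `train_for` and `evaluate` are hypothetical placeholders, and the exact schedule used in the paper may differ.

```python
import math

def recurring_lr(step, steps_per_cycle, lr_max=0.1, lr_min=1e-4):
    """Cosine learning-rate cycle that restarts every `steps_per_cycle` steps,
    so the model reaches a low learning rate by the end of every fidelity
    level rather than only at the very end of training (illustrative)."""
    t = (step % steps_per_cycle) / steps_per_cycle
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * t))

def successive_halving(configs, train_for, evaluate, rung_budget, eta=3):
    """Standard successive-halving skeleton: train each surviving config for
    one more rung of budget (with the recurring schedule used inside
    `train_for`), then keep the top 1/eta fraction."""
    survivors = list(configs)
    while len(survivors) > 1:
        scores = []
        for cfg in survivors:
            train_for(cfg, rung_budget, lr_schedule=recurring_lr)
            scores.append((evaluate(cfg), cfg))
        scores.sort(key=lambda s: s[0], reverse=True)
        survivors = [cfg for _, cfg in scores[:max(1, len(scores) // eta)]]
    return survivors[0]
```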

Transformer-based Deep Neural Network for Breast Cancer Classification on Digital Breast Tomosynthesis Images

Weonsuk Lee, Hyeonsoo Lee, HyunJae Lee, Eun Kyung Park, Hyeonseob Nam, Thijs Kooi

Radiology: Artificial Intelligence
2023

The authors adopted a transformer architecture that analyzes neighboring sections of the digital breast tomosynthesis (DBT) stack. The proposed method was compared with two baselines: an architecture based on three-dimensional (3D) convolutions and a two-dimensional model that analyzes each section individually. The models were trained with 5174 four-view DBT studies, validated with 1000 four-view DBT studies, and tested on 655 four-view DBT studies, which were retrospectively collected from nine institutions in the United States through an external entity. Methods were compared using the area under the receiver operating characteristic curve (AUC), sensitivity at a fixed specificity, and specificity at a fixed sensitivity.
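
A minimal sketch of the general idea, not the paper's exact architecture: a 2D backbone encodes each section of the stack, and a transformer encoder attends across neighboring sections before a study-level head. The backbone, feature dimension, and mean pooling are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SliceTransformer(nn.Module):
    """Illustrative sketch: per-section 2D features fused across sections
    with self-attention, followed by a malignancy classification head."""

    def __init__(self, backbone: nn.Module, feat_dim: int = 512,
                 num_layers: int = 2, num_heads: int = 8):
        super().__init__()
        self.backbone = backbone  # assumed: any 2D CNN mapping a section to feat_dim
        layer = nn.TransformerEncoderLayer(d_model=feat_dim, nhead=num_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.head = nn.Linear(feat_dim, 1)  # malignancy logit

    def forward(self, stack: torch.Tensor) -> torch.Tensor:
        # stack: (B, S, C, H, W) with S sections per view
        b, s, c, h, w = stack.shape
        feats = self.backbone(stack.flatten(0, 1)).view(b, s, -1)  # (B, S, D)
        feats = self.encoder(feats)          # attention across neighboring sections
        return self.head(feats.mean(dim=1))  # pool sections -> study-level logit
```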

Reducing Domain Gap by Reducing Style Bias

HyunJae Lee*, Hyeonseob Nam*, Jongchan Park, Wonjun Yoon, Donggeun Yoo

CVPR
2021
Oral Presentation

Convolutional Neural Networks (CNNs) often fail to maintain their performance when they confront new test domains, which is known as the problem of domain shift. Recent studies suggest that one of the main causes of this problem is CNNs' strong inductive bias towards image styles (i.e., textures), which are sensitive to domain changes, rather than contents (i.e., shapes). Inspired by this, we propose to reduce the intrinsic style bias of CNNs to close the gap between domains. Our Style-Agnostic Networks (SagNets) disentangle style encodings from class categories to prevent style-biased predictions and focus more on the contents. Extensive experiments show that our method effectively reduces the style bias and makes the model more robust under domain shift. It achieves remarkable performance improvements in a wide range of cross-domain tasks, including domain generalization, unsupervised domain adaptation, and semi-supervised domain adaptation on multiple datasets.
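
As a hedged illustration of pushing a network to rely on content rather than style, the sketch below randomizes instance-level feature statistics by mixing them with those of another sample in the batch; this is a common style-randomization recipe and only approximates the disentanglement described in the paper, not the official implementation.

```python
import torch

def style_randomization(x: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    """Illustrative sketch: replace each feature map's instance-level
    statistics (its "style") with a random interpolation between its own
    statistics and those of another sample, so a classifier trained on the
    output is encouraged to use content rather than style."""
    b = x.size(0)
    mu = x.mean(dim=(2, 3), keepdim=True)
    sig = (x.var(dim=(2, 3), keepdim=True) + eps).sqrt()
    normalized = (x - mu) / sig                       # strip the original style

    perm = torch.randperm(b, device=x.device)         # pick a random partner sample
    alpha = torch.rand(b, 1, 1, 1, device=x.device)
    mixed_mu = alpha * mu + (1 - alpha) * mu[perm]
    mixed_sig = alpha * sig + (1 - alpha) * sig[perm]
    return normalized * mixed_sig + mixed_mu          # re-style with mixed statistics
```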

SRM: A Style-based Recalibration Module for Convolutional Neural Networks

HyunJae Lee, Hyo-Eun Kim, Hyeonseob Nam

ICCV
2019

Following the advance of style transfer with Convolutional Neural Networks (CNNs), the role of styles in CNNs has drawn growing attention from a broader perspective. In this paper, we aim to fully leverage the potential of styles to improve the performance of CNNs in general vision tasks. We propose the Style-based Recalibration Module (SRM), a simple yet effective architectural unit, which adaptively recalibrates intermediate feature maps by exploiting their styles. SRM first extracts style information from each channel of the feature maps by style pooling, then estimates per-channel recalibration weights via channel-independent style integration. By incorporating the relative importance of individual styles into feature maps, SRM effectively enhances the representational ability of a CNN. The proposed module can be directly integrated into existing CNN architectures with negligible overhead. We conduct comprehensive experiments on general image recognition as well as style-related tasks, which verify the benefit of SRM over recent approaches such as Squeeze-and-Excitation (SE). To explain the inherent difference between SRM and SE, we provide an in-depth comparison of their representational properties.
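
A minimal sketch of the module as described above, assuming style pooling is channel-wise mean and standard deviation and style integration is a channel-wise fully connected layer; the batch-norm-plus-sigmoid gate is an illustrative assumption rather than a confirmed detail of the released code.

```python
import torch
import torch.nn as nn

class SRM(nn.Module):
    """Illustrative Style-based Recalibration Module: style pooling followed
    by channel-independent style integration and per-channel gating."""

    def __init__(self, channels: int):
        super().__init__()
        # Channel-wise fully connected layer: one (mean, std) -> weight mapping
        # per channel, implemented as a grouped 1D convolution of kernel size 2.
        self.cfc = nn.Conv1d(channels, channels, kernel_size=2,
                             groups=channels, bias=False)
        self.bn = nn.BatchNorm1d(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.size()
        # Style pooling: channel-wise mean and std over spatial dimensions.
        mean = x.mean(dim=(2, 3))                                   # (B, C)
        std = x.var(dim=(2, 3), unbiased=False).add(1e-5).sqrt()    # (B, C)
        style = torch.stack((mean, std), dim=-1)                    # (B, C, 2)
        # Style integration: per-channel recalibration weights.
        gate = torch.sigmoid(self.bn(self.cfc(style)).squeeze(-1))  # (B, C)
        return x * gate.view(b, c, 1, 1)
```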

A Novel Service-oriented Platform for the Internet of Things

HyunJae Lee, Eunjin Jeong, Donghyun Kang, Jinmyeong Kim, Soonhoi Ha

IoT Conference
2017

As the Internet of Things (IoT) has received substantial attention in industry and academia recently, many IoT devices and platforms have been proposed and are being developed. In this paper we propose a novel IoT platform, called SoPIoT, that differs from existing IoT platforms in several aspects. Since a device is abstracted as a set of services it provides, any computing resource can be easily integrated into the platform. In addition to the general IoT use case, where smart devices provide useful services cooperatively and autonomously without user intervention, SoPIoT allows a user to define a composite service dynamically at run-time with a script-language program. To SoPIoT, the IoT system looks like a distributed system consisting of many computing resources running multiple applications concurrently, where an application corresponds to a composite service. The central middleware maps and schedules the services to the computing resources. The scalability of SoPIoT is achieved by forming a hierarchy of middlewares. The viability of the proposed IoT platform is confirmed by building a smart office test-bed. Experimental results show that a central middleware can support more than 1,000 devices.
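
A hypothetical, much-simplified sketch of the service abstraction described above: devices expose named services, a central middleware keeps a registry, and a composite service is executed as a sequence of service calls mapped onto registered devices. The class names and the trivial mapping policy are illustrative only and do not reflect SoPIoT's actual scheduler or script language.

```python
class Device:
    """A device abstracted as the set of services it provides."""
    def __init__(self, name, services):
        self.name = name
        self.services = services          # {service_name: callable}

class Middleware:
    """Toy central middleware: keeps a service registry and maps each step
    of a composite service onto some device that provides it."""
    def __init__(self):
        self.registry = {}                # service_name -> list of devices

    def register(self, device):
        for service in device.services:
            self.registry.setdefault(service, []).append(device)

    def run_composite(self, composite):
        # composite: ordered list of (service_name, args) steps
        results = []
        for service, args in composite:
            device = self.registry[service][0]   # trivial mapping policy
            results.append(device.services[service](*args))
        return results
```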