
This section presents a collection of projects developed during my undergraduate and master’s studies. I am deeply motivated by exploring diverse research domains and actively engage in experimentation, prototyping, and real-world implementation. I embrace intellectual curiosity and am willing to take risks to transform novel ideas into functional systems.
My research focuses on innovative, original topics with a strong emphasis on human-centered design. The goal of these projects is to support and empower people across a wide range of needs and application contexts. My work spans interdisciplinary areas including:

  • Brain–Computer Interfaces (e.g., EEG-to-Music, epilepsy, and sleep)

  • Language and Cognitive Disorders (including Alzheimer’s disease and aphasia)

  • Minimally Invasive Surgical Navigation

  • Sports Science and Human Performance Analysis
Across these projects, I have contributed as a developer and system designer, working with technologies that range from algorithms and signal processing to embedded systems, mobile application development, and deep learning. This cross-disciplinary experience allows me to bridge theoretical research with practical, deployable solutions.


PingPro: Turn Your Ping Pong into Pro

Sep 2025 – Present

Advisors: Sheng-Fu Liang, Fu-Zen Shaw

This application is an AI-powered digital table tennis coach, featuring automatic video clipping and visual motion feedback to support skill development and performance analysis. In real-world testing, the system has been validated by over 40 users, including national-level coaches and competitive athletes. Future development will focus on refining the underlying technology, delivering more personalized training insights, and adding automated highlight generation to make training more engaging and enjoyable.
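The automatic clipping idea can be sketched as follows. This is a minimal illustration, not the project's actual pipeline: it assumes a per-frame motion-energy score (e.g., from frame differencing) is already available, and groups consecutive high-motion frames into clips. The function name and thresholds are invented for demonstration.

```python
# Hypothetical sketch of automatic clip segmentation: group consecutive
# high-motion frames into rally clips. Thresholds are illustrative only.

def segment_clips(motion_scores, threshold=0.5, min_len=3):
    """Return (start, end) frame-index pairs for runs of high motion.

    `end` is exclusive; runs shorter than `min_len` frames are discarded.
    """
    clips, start = [], None
    for i, score in enumerate(motion_scores):
        if score >= threshold and start is None:
            start = i                      # rally begins
        elif score < threshold and start is not None:
            if i - start >= min_len:
                clips.append((start, i))   # rally ends; keep if long enough
            start = None
    if start is not None and len(motion_scores) - start >= min_len:
        clips.append((start, len(motion_scores)))
    return clips

scores = [0.1, 0.7, 0.8, 0.9, 0.2, 0.1, 0.6, 0.7, 0.8, 0.9, 0.1]
print(segment_clips(scores))  # → [(1, 4), (6, 10)]
```

In a real system the motion scores would come from video analysis and the thresholds would be tuned per camera setup; the segmentation logic itself stays this simple.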

My Role

System Design, Frontend, Backend, and AI/CV Algorithm


My Role

Embedded system design, hardware design, frontend

An EEG-Based Wearable Eye Mask and Wristband System for Continuous Home Sleep Monitoring

Nov 2024 – Present

Advisor: Sheng-Fu Liang

The device is a multimodal Bluetooth-enabled wearable sensing platform built on an nRF-based embedded system, implemented as an EEG-based eye mask and wristband. Designed for a lightweight form factor and ultra-low-power operation, the wearable integrates EEG acquisition with behavioral monitoring and communicates with mobile devices to support continuous home-based sleep monitoring. Ongoing collaborative studies focus on sleep disorders, including obstructive sleep apnea, insomnia, and REM sleep behavior disorder (RBD).
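To give a sense of the BLE data path, here is a hedged sketch of decoding one notification on the mobile side. The frame layout is assumed for illustration (it is not the actual firmware format): a 1-byte sequence counter followed by little-endian int16 EEG samples.

```python
import struct

# Hypothetical BLE notification decoder. Assumed frame layout (NOT the
# real firmware format): byte 0 is a packet sequence counter, the rest
# are little-endian int16 EEG samples.

def decode_packet(payload: bytes):
    """Split a notification payload into (sequence_number, sample_list)."""
    seq = payload[0]
    n = (len(payload) - 1) // 2            # number of int16 samples
    samples = struct.unpack_from("<%dh" % n, payload, 1)
    return seq, list(samples)

packet = bytes([7]) + struct.pack("<4h", -120, 35, 40, -5)
print(decode_packet(packet))  # → (7, [-120, 35, 40, -5])
```

The sequence counter lets the app detect dropped notifications, which matters for overnight recordings where short BLE gaps are inevitable.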


My Role

Embedded system design, hardware design, frontend

An 8-Channel EEG-Based Wearable System for Home-Based Epilepsy Monitoring and Alerting

Oct 2025 – Present

Advisor: Sheng-Fu Liang

Developed an nRF-based 8-channel EEG/ECG wearable system for home-based, real-time epilepsy monitoring and alerting, featuring a compact, lightweight hardware design and low-power, BLE-based real-time biosignal streaming. The system interfaces with a Java application for data acquisition and visualization and supports synchronized multimodal data collection across heterogeneous devices. Future work will focus on developing a low-power, real-time embedded seizure detection algorithm.


My Role

Initiator, Clinical & Technical Research, System Design, Frontend, Backend, and AI/NLP Algorithm Development

EEG2Music: Reconstructing Perceived Sound from EEG

Jul 2023 – Feb 2024

Advisors: Jia-Jin Chen, Sheng-Fu Liang

This project explores the reconstruction of perceived sound directly from EEG brain signals, aiming to bridge neural activity and auditory experience. By modeling how sound is internally represented in the brain, the system seeks to recover not only recognizable music genres but also the subtle, low-level details that contribute to human perception and sensation. To achieve this, the project adopts a Tacotron2-based sequence-to-sequence architecture, enabling the synthesis of fine-grained acoustic features from neural inputs. Looking forward, the project will continue to evolve toward a human-in-the-loop framework, enabling the generation of individualized music and auditory experiences that reflect unique neural and perceptual signatures.
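For a seq2seq model like this, continuous EEG must first be turned into a sequence of encoder inputs. The sketch below shows only that preprocessing idea, with illustrative window and hop sizes (not the values used in the project):

```python
# Hedged sketch of EEG preprocessing for a sequence-to-sequence model:
# slice a continuous signal into overlapping windows, each of which
# becomes one encoder time step. Window/hop sizes are illustrative.

def make_windows(signal, win=4, hop=2):
    """Slice a 1-D sample list into overlapping windows of length `win`."""
    return [signal[i:i + win] for i in range(0, len(signal) - win + 1, hop)]

eeg = list(range(10))  # stand-in for one EEG channel
print(make_windows(eeg))
# → [[0, 1, 2, 3], [2, 3, 4, 5], [4, 5, 6, 7], [6, 7, 8, 9]]
```

In the actual pipeline each window would cover multiple channels and feed a learned encoder, with the decoder emitting acoustic features frame by frame.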


My Role

Initiator, Clinical & Technical Research, Data analysis, Deep learning/CV/algorithm

  • Recipient of the Undergraduate Research Fellowship, Ministry of Science and Technology, Taiwan

  • Presented at the 6th Global Conference on Biomedical Engineering, Taiwan

A Novel Multimodal Deep Learning Framework for Automatic Aphasia Analysis

Feb 2024 – Mar 2025

advisor: Jia-Jin Chen

The proposed algorithm aims to overcome the complexity and challenges of aphasia severity analysis. It leverages a novel multimodal transformer to analyze aphasia severity across different behavioral modalities, such as mouth movements, textual information, and speech signals. The model attains 75% accuracy, surpassing recent state-of-the-art approaches. Furthermore, explainable AI methods are applied to extract clinically relevant evidence from speech and behavioral data.
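The fusion idea can be illustrated in miniature. The actual model fuses modalities inside a transformer; the sketch below shows only the simpler late-fusion variant, combining per-modality severity scores with hypothetical weights, as a way to convey why multiple modalities help.

```python
# Highly simplified illustration of multimodal fusion (the real model
# fuses inside a transformer). Scores and weights are invented.

def fuse_scores(scores, weights):
    """Weighted average of per-modality severity scores in [0, 1]."""
    total = sum(weights.values())
    return sum(scores[m] * w for m, w in weights.items()) / total

scores = {"speech": 0.8, "text": 0.6, "mouth": 0.7}
weights = {"speech": 0.5, "text": 0.3, "mouth": 0.2}
print(round(fuse_scores(scores, weights), 3))  # → 0.72
```

The advantage of learned (transformer) fusion over this fixed weighting is that the effective contribution of each modality can vary per patient and per utterance.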

  • Selected for the 2025 NCKU Innovation Dreams Come True Project

  • Selected for the Tainan Youth Public Participation Action Program

My Role

Clinical & Technical Research, System Design, Frontend, Backend, and AI/NLP Algorithm Development

Tainan New Care — Guiding You Toward an Alzheimer’s-Free Future

Apr 2025 – Nov 2025

Advisor: Chun-Li Tsai

This project aims to enhance long-term care for elderly patients with Alzheimer’s disease by empowering foreign caregivers through an AI-driven care application, enabling caregiving to transcend language and nationality barriers. The project was developed by a cross-disciplinary team in collaboration with the Tainan City Government and social welfare organizations. Inspired by the human brain’s language-processing mechanisms, the system incorporates an AI algorithm capable of analyzing multiple cognitive function dimensions—including memory, executive function, and attention—directly from patients’ spoken language.

News: 

  • Third Place, Best Content Award and Best Popularity Award, 2024 SOC-iCaps Interdisciplinary Competition and Professional Exhibition, Taiwan

Intelligent Navigation System for Vitreoretinal Surgery

Oct 2023 – Mar 2024

This system addresses the high technical difficulty of vitrectomy, which stems from the limited illumination and distorted visual field of the operative environment. Inspired by insights gained from a lecture, I initiated and developed an automated system that predicts the current instrument position, identifies lesion regions in real time, and tracks surgical trajectories. The system has been positively received at competitive exhibitions and by practicing surgical specialists.

My Role

Initiator, Clinical & Technical Research, System Design, Frontend, Backend, and AI/CV Algorithm


MYGO: an LLM-Controlled Robotic Dog that Mimics Real-World Dog Behavior

Mar 2025 – Jun 2025

This was probably my funniest project attempt in college—although the final robot itself turned out to be somewhat clumsy!

This project was developed as the final project for our Principles of Robotics course. Inspired by AI game agents and built upon MiniPupper and the Stanford Robot Dog Driver, the system employs a 12-degree-of-freedom mechanical structure driven by servo motors and controlled by a Raspberry Pi. A cloud-based large language model (LLM) is used to integrate visual perception and memory, enabling the robot to plan and execute a sequence of action responses that mimic real-world dog behaviors.
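The control loop's core step can be sketched as follows. This is a hedged illustration with invented primitive names, not the project's actual code: the cloud LLM returns a short text plan, which is parsed against a whitelist of servo action primitives before execution, so hallucinated actions are silently dropped.

```python
# Sketch of LLM-to-robot action planning, with invented primitive names:
# parse the LLM's comma-separated plan and keep only whitelisted actions,
# so hallucinated commands never reach the servo controller.

PRIMITIVES = {"sit", "stand", "walk_forward", "wag_tail", "bark"}

def parse_plan(llm_reply: str):
    """Turn an LLM text plan into an executable list of known primitives."""
    actions = [a.strip().lower() for a in llm_reply.split(",")]
    return [a for a in actions if a in PRIMITIVES]

reply = "wag_tail, bark, do_a_backflip, sit"
print(parse_plan(reply))  # → ['wag_tail', 'bark', 'sit']
```

Constraining the LLM's output to a fixed action vocabulary is what makes a cloud model safe to put in a physical control loop, since the robot can only ever execute moves its firmware already knows.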


My Role

Embedded system design, hardware design, frontend, backend, LLM
