
Hybrid Sign Language Recognition Framework

ICIPROB 2026 Research Project (IEEE Conference)

Research Project · Best Student Paper Award · 2025 - 2026

Initiated during my research exchange at Shibaura Institute of Technology, this work was submitted to ICIPROB 2026 (4th International Conference on Image Processing and Robotics), held on March 6-7, 2026 at the Mount Lavinia Hotel, Sri Lanka, where it received the Best Student Paper Award.

Research Overview

This research focuses on linguistically complete Sign Language Recognition (SLR), with specific attention to American Sign Language (ASL). Existing SLR systems typically focus on either camera-based hand and body perception or sensor-based 3D hand tracking, but neither stream alone captures the full grammatical structure of sign language.

The proposed system bridges this gap through a hybrid neuro-symbolic framework that combines cross-modal deep learning for perception with symbolic grammar validation for syntactic correctness. The objective is to improve recognition reliability while preserving both lexical meaning and grammatical intent.

Core Research Gaps

No multimodal system captures full ASL grammar by combining manual and non-manual markers.

Leap Motion Controller (LMC) + RGB sensor fusion remains underexplored for linguistically complete SLR.

Cross-modal attention has not been broadly used to align asynchronous hand and facial signals.

Existing pipelines lack uncertainty-aware fusion when one modality degrades.

Proposed System Diagram

Proposed hybrid neuro-symbolic sign language recognition system

The system is designed in three stages: cross-modal perception, symbolic grammar validation, and hybrid translation/refinement.

Framework Breakdown

Phase 1: Cross-Modal Perception

Fuses Leap Motion 3D hand skeletons with RGB-based face and body cues using a Cross-Modal Attention Transformer. This stage outputs lexical gloss predictions with confidence scores.
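As an illustrative sketch only (not the paper's implementation), the core cross-modal attention step can be expressed as queries drawn from hand-skeleton features attending over RGB face/body features. All dimensions and the random projection matrices here are hypothetical stand-ins for learned weights:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_attention(hand_feats, rgb_feats, d_k=16, seed=0):
    """Attend from Leap Motion hand-frame features (queries) to
    RGB face/body features (keys/values). Projections are random
    placeholders for learned parameters."""
    rng = np.random.default_rng(seed)
    Wq = rng.standard_normal((hand_feats.shape[-1], d_k))
    Wk = rng.standard_normal((rgb_feats.shape[-1], d_k))
    Wv = rng.standard_normal((rgb_feats.shape[-1], d_k))
    Q, K, V = hand_feats @ Wq, rgb_feats @ Wk, rgb_feats @ Wv
    scores = Q @ K.T / np.sqrt(d_k)      # (T_hand, T_rgb) alignment scores
    weights = softmax(scores, axis=-1)   # each hand frame attends over RGB frames
    return weights @ V                   # fused features, shape (T_hand, d_k)
```

Because the attention weights are computed per hand frame, the two streams need not be frame-synchronized, which is the property that motivates cross-modal attention for asynchronous hand and facial signals.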

Phase 2: Symbolic Grammatical Validation

Applies a Finite State Machine (FSM) to validate sign-order constraints and non-manual marker (NMM) consistency, accepting valid sequences and flagging uncertain ones for re-evaluation.
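A minimal sketch of the FSM idea, using a deliberately simplified, hypothetical sign-order grammar (roughly TIME? TOPIC COMMENT, optionally closed by a question NMM) rather than the paper's actual rule set:

```python
# Hypothetical transition table: (state, gloss tag) -> next state.
TRANSITIONS = {
    ("START", "TIME"):    "TIME",
    ("START", "TOPIC"):   "TOPIC",
    ("TIME", "TOPIC"):    "TOPIC",
    ("TOPIC", "COMMENT"): "COMMENT",
    ("COMMENT", "Q-NMM"): "ACCEPT",
}
ACCEPTING = {"COMMENT", "ACCEPT"}

def validate(tags):
    """Return True if the tagged gloss sequence follows the toy
    sign-order grammar; False signals it should be re-evaluated."""
    state = "START"
    for tag in tags:
        state = TRANSITIONS.get((state, tag))
        if state is None:
            return False  # illegal transition: flag for re-evaluation
    return state in ACCEPTING
```

In the full pipeline, a rejected sequence would be sent back to the perception stage rather than simply discarded, letting the symbolic layer act as a check on low-confidence lexical predictions.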

Phase 3: Hybrid Translation and Refinement

Converts validated gloss sequences into natural language with rule-based templates and a lightweight fallback sequence model for complex utterances.
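The translation stage's two-tier design can be sketched as follows; the templates, lexicon, and the naive word-by-word fallback below are hypothetical placeholders (the actual system would use curated templates and a lightweight learned sequence model as the fallback):

```python
# Hypothetical gloss-to-English templates for frequent patterns.
TEMPLATES = {
    ("YESTERDAY", "STORE", "I", "GO"): "I went to the store yesterday.",
}
LEXICON = {"I": "I", "GO": "go", "STORE": "store", "YESTERDAY": "yesterday"}

def fallback_translate(glosses):
    """Stand-in for the lightweight fallback sequence model:
    a naive word-by-word gloss reading."""
    words = [LEXICON.get(g, g.lower()) for g in glosses]
    return " ".join(words).capitalize() + "."

def translate(glosses):
    """Prefer an exact template match; otherwise fall back."""
    return TEMPLATES.get(tuple(glosses)) or fallback_translate(glosses)
```

Templates keep common utterances fast and deterministic, while the fallback path handles the long tail of complex utterances that no template covers.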

Key Contributions

Proposes a hybrid neuro-symbolic architecture for linguistically complete sign language recognition.

Addresses both lexical and grammatical dimensions of ASL by integrating hand, face, and posture cues.

Introduces a robustness-oriented fusion strategy for real-world sensing conditions.

Frames a practical pathway for future real-time, grammar-aware assistive communication systems.

Publication and Resources

The paper has been presented at ICIPROB 2026 and is expected to appear on IEEE Xplore soon. The official publication link will be added here once it is available.

Status: Pending official IEEE Xplore publication.

Project Award

Best Student Paper Award - ICIPROB 2026

4th International Conference on Image Processing and Robotics (IEEE Conference)

Presented on March 6-7 at Mount Lavinia Hotel, Sri Lanka.