Sign language was designed to allow hearing-impaired people to interact with others. Nonetheless, sign language is not common practice in society, which creates difficulty in communicating with the hearing-impaired community. Most existing studies of sign language recognition applied computer vision approaches; however, these approaches were limited by the viewing angle and strongly affected by background lighting. In addition, computer vision involved machine learning (ML) that required collaborative work from a team of experts, along with the use of expensive hardware. Thus, this study aimed to develop a smart wearable American Sign Language (ASL) interpretation model using a deep learning method. The proposed model applied sensor fusion to integrate features from six inertial measurement units (IMUs). Five IMUs were attached on top of the fingertips, while one IMU was placed on the back of the palm. The study revealed that ASL gesture recognition with derived features, including angular rate, acceleration, and orientation, achieved a mean true sign recognition rate of 99.81%. Conclusively, the proposed smart wearable ASL interpretation model was intended to assist hearing-impaired people in communicating with society in the most convenient way possible.
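The fusion step described above can be illustrated with a minimal sketch. This is a hypothetical example, not the paper's implementation: it assumes each of the six IMUs reports three angular-rate, three acceleration, and three orientation values per time step, and simply concatenates them into one feature vector for a downstream classifier. The function name `fuse_imu_features` and the constants are illustrative assumptions.

```python
import numpy as np

# Assumed layout: five fingertip IMUs plus one IMU on the back of the palm,
# each contributing 3 gyro + 3 accel + 3 orientation values (9 features).
N_IMUS = 6
FEATURES_PER_IMU = 9

def fuse_imu_features(imu_readings: np.ndarray) -> np.ndarray:
    """Concatenate per-IMU feature vectors into one fused feature vector.

    imu_readings: array of shape (N_IMUS, FEATURES_PER_IMU)
    returns: flat vector of length N_IMUS * FEATURES_PER_IMU (here, 54)
    """
    assert imu_readings.shape == (N_IMUS, FEATURES_PER_IMU)
    return imu_readings.reshape(-1)

# Random readings standing in for one time step of a gesture.
readings = np.random.default_rng(0).normal(size=(N_IMUS, FEATURES_PER_IMU))
fused = fuse_imu_features(readings)
print(fused.shape)  # (54,)
```

In practice, a sequence of such fused vectors over the duration of a gesture would form the input to the deep learning model.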