Investigation of Feature Extraction for Unsupervised Learning in Human Activity Detection
Abstract
The ability to understand what humans are doing is crucial for any intelligent system that is to autonomously support human daily activities. Technologies that enable this ability, however, remain underdeveloped because of the many challenges in human activity analysis. Among them are the difficulty of extracting human poses and motions from raw sensor data, whether recorded by visual or wearable sensors, and the need to recognize previously unseen activities using unsupervised learning. Furthermore, human activity analysis usually requires expensive sensors or sensing environments. With the availability of low-cost RGBD (RGB-depth) sensors, this new form of data can provide human posture estimates with a high degree of confidence. In this paper, we present our approach to extracting features directly from such data (joint positions) based on the human range of movement, and we report the results of tests that assess how well these features distinguish sixteen (16) example activities. A simple unsupervised learning method, K-means clustering, was used to evaluate the effectiveness of the features. The results indicate that the features based on range of movement significantly improve clustering performance.
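The sketch below illustrates the general idea described in the abstract: computing range-of-movement features from sequences of joint positions and clustering them with K-means. It is only a minimal illustration, not the authors' method; the helper name `range_of_movement_features`, the per-axis max-minus-min feature definition, and the assumed input shape are hypothetical, and the choice of sixteen clusters simply mirrors the sixteen example activities mentioned above.

```python
import numpy as np
from sklearn.cluster import KMeans

def range_of_movement_features(joints):
    """Compute simple range-of-movement features from a joint-position sequence.

    joints : array of shape (n_frames, n_joints, 3)
        3-D joint positions for one activity sample, e.g. as reported by an
        RGBD sensor's skeleton tracker.

    Returns a 1-D vector of per-joint ranges of motion along each axis
    (max minus min over the sequence) -- a rough stand-in for features
    based on the human range of movement.
    """
    rom = joints.max(axis=0) - joints.min(axis=0)  # shape (n_joints, 3)
    return rom.ravel()

# Hypothetical usage: `samples` is a list of joint-position sequences,
# one per recorded activity instance.
# X = np.stack([range_of_movement_features(s) for s in samples])
# labels = KMeans(n_clusters=16, n_init=10, random_state=0).fit_predict(X)
```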