Automated assessment of fitness exercises has important applications in computer- and robot-based exercise coaches deployed at home, in gymnasiums, or in care centers. In this work, we introduce AHA-3D, a labeled dataset of sequences of 3D skeletal data depicting standard fitness tests performed by young and elderly subjects, for the purpose of automatic fitness exercise assessment. To the best of our knowledge, AHA-3D is the first publicly available dataset featuring multi-generational, male and female subjects with frame-level labels, allowing for action segmentation as well as the estimation of metrics such as fall risk and autonomy in performing daily tasks. We present two baseline methods for recognition and one for segmentation. For recognition, we trained a model on the positions of the joints, achieving 88.2% ± 0.077 accuracy, and a model on joint positions and velocities, achieving 91% ± 0.082 accuracy. Using the two-sample Kolmogorov-Smirnov test, we determined that the model trained on velocities was superior. The segmentation baseline achieved an accuracy of 88.29% in detecting actions at the frame level. Our results show promising recognition and detection performance, suggesting AHA-3D's potential use in practical applications such as exercise performance assessment and correction, fitness level estimation for the elderly, and fall-risk estimation for elders.
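The two-sample Kolmogorov-Smirnov comparison mentioned above can be sketched as follows. This is a minimal illustration, not the paper's evaluation code: the per-fold accuracy lists are hypothetical stand-ins for the two models' cross-validation results, and the statistic is computed directly from the two empirical CDFs.

```python
import bisect

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic:
    the maximum vertical gap between the two empirical CDFs."""
    a, b = sorted(a), sorted(b)

    def ecdf(sample, x):
        # Fraction of values in the (sorted) sample that are <= x.
        return bisect.bisect_right(sample, x) / len(sample)

    # The gap can only change at observed values, so it suffices
    # to evaluate both ECDFs at the pooled sample points.
    return max(abs(ecdf(a, x) - ecdf(b, x)) for x in a + b)

# Hypothetical per-fold accuracies for the positions-only model
# and the positions+velocities model (illustrative values only).
acc_positions = [0.85, 0.88, 0.90]
acc_velocities = [0.89, 0.91, 0.93]
print(ks_statistic(acc_positions, acc_velocities))
```

A large statistic (relative to the critical value for the sample sizes) indicates that the two accuracy distributions differ, which is the basis for preferring one model over the other.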