Our society is increasingly susceptible to chronic stress, driven by daily worries, workload, and the wish to fulfil a myriad of expectations. Unfortunately, prolonged exposure to stress leads to physical and mental health problems. To help avoid these consequences, mobile applications combined with wearables have been studied for stress tracking. However, wearables must be worn throughout the day and can be costly. Given that most laptops have built-in cameras, using video data for personal tracking of stress levels could be a more affordable alternative. In previous work, videos have been used to detect cognitive stress during driving by measuring the presence of anger or fear through a limited number of facial expressions. In contrast, we propose the use of 17 facial action units (AUs) not restricted to those emotions. We used five one-hour videos from the dataset collected by Lau [1], which show subjects while typing, resting, and exposed to a stressor: a multitasking exercise combined with social evaluation. We performed binary classification with several simple classifiers on AUs extracted from each video frame and achieved accuracies of up to 74% in subject-independent classification and 91% in subject-dependent classification. These preliminary results indicate that the AUs most relevant for stress detection are not consistently the same across all five subjects; a strong person-specific component in classification from facial cues has also been reported in previous work.
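To make the evaluation setup concrete, the following is a minimal sketch (not the authors' code) of the classification protocol the abstract describes: per-frame feature vectors of 17 AU intensities, binary stress labels, and a simple classifier evaluated both subject-independently (leave-one-subject-out) and subject-dependently (within each subject's own frames). The synthetic data, array shapes, and choice of scikit-learn and logistic regression are all illustrative assumptions.

```python
# Hypothetical sketch of the two evaluation regimes; data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)

n_frames, n_aus, n_subjects = 5000, 17, 5
X = rng.random((n_frames, n_aus))                  # AU intensities per video frame
y = rng.integers(0, 2, n_frames)                   # 1 = stressor phase, 0 = typing/rest
subjects = rng.integers(0, n_subjects, n_frames)   # subject ID for each frame

clf = LogisticRegression(max_iter=1000)

# Subject-independent: train on four subjects, test on the held-out fifth.
indep = cross_val_score(clf, X, y, groups=subjects, cv=LeaveOneGroupOut())
print("subject-independent accuracy per fold:", indep)

# Subject-dependent: train and test within a single subject's frames.
for s in range(n_subjects):
    mask = subjects == s
    dep = cross_val_score(clf, X[mask], y[mask], cv=5)
    print(f"subject {s} within-subject accuracy: {dep.mean():.2f}")
```

Grouping frames by subject in the independent regime matters because per-frame splits would leak a subject's idiosyncratic AU patterns between train and test, which is consistent with the person-specific component the abstract reports.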