Subjective video quality assessment (VQA) strongly depends on semantics, context, and the types of visual distortions. Most existing VQA databases cover only small numbers of video sequences with artificial distortions. When newly developed Quality of Experience (QoE) models and metrics are tested, they are commonly evaluated against subjective data from such databases, which are the result of perception experiments. However, since the aim of these QoE models is to accurately predict the quality of natural videos, these artificially distorted video databases are an insufficient basis for learning. Additionally, their small size makes them only marginally usable for state-of-the-art learning systems, such as deep learning. In order to provide a better basis for the development and evaluation of objective VQA methods, we have created a larger dataset of natural, real-world video sequences with corresponding subjective mean opinion scores (MOS) gathered through crowdsourcing.

We took YFCC100m, a database of 793,436 Creative Commons (CC) video sequences, as our starting point and filtered it through multiple steps to ensure that the selected video sequences are representative of the whole spectrum of available video content, types of distortions, and subjective quality. The resulting 1,200 videos are available for download, alongside the subjective data and evaluations of the best-performing techniques available for multiple video attributes. Specifically, we have evaluated blur, colorfulness, contrast, spatial information, temporal information, and video quality.
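Two of the attributes above, spatial information (SI) and temporal information (TI), have standard definitions in ITU-T Rec. P.910: SI is the maximum over time of the spatial standard deviation of the Sobel-filtered luminance frame, and TI is the maximum over time of the standard deviation of the frame difference. The sketch below is an illustrative implementation of those P.910 formulas, not the exact code used for KoNViD-1k; it assumes grayscale frames given as 2-D NumPy float arrays.

```python
import numpy as np

def sobel_magnitude(frame):
    """Gradient magnitude of a 2-D array under the 3x3 Sobel operator
    (valid region only, so no border handling is needed)."""
    gx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    gy = gx.T
    h, w = frame.shape
    out_x = np.zeros((h - 2, w - 2))
    out_y = np.zeros((h - 2, w - 2))
    # Accumulate the correlation with each kernel via shifted slices,
    # avoiding a SciPy dependency.
    for i in range(3):
        for j in range(3):
            patch = frame[i:i + h - 2, j:j + w - 2]
            out_x += gx[i, j] * patch
            out_y += gy[i, j] * patch
    return np.sqrt(out_x ** 2 + out_y ** 2)

def si_ti(frames):
    """SI/TI as defined in ITU-T P.910.

    SI = max over frames of std(Sobel(frame))
    TI = max over frame pairs of std(frame_n - frame_{n-1})
    """
    si = max(sobel_magnitude(f).std() for f in frames)
    if len(frames) > 1:
        ti = max((b - a).std() for a, b in zip(frames, frames[1:]))
    else:
        ti = 0.0
    return si, ti
```

Higher SI indicates more spatial detail (edges, texture); higher TI indicates more motion or scene change between consecutive frames.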

The KoNViD-1k data is publicly available to the research community. Please cite the following references if you use this database in your research:

  • V. Hosu, F. Hahn, M. Jenadeleh, H. Lin, H. Men, T. Szirányi, S. Li and D. Saupe, "The Konstanz Natural Video Database" http://database.mmsp-kn.de

  • V. Hosu, F. Hahn, M. Jenadeleh, H. Lin, H. Men, T. Szirányi, S. Li and D. Saupe, "The Konstanz Natural Video Database (KoNViD-1k)", Quality of Multimedia Experience (QoMEX), 2017 Ninth International Conference on. IEEE, 2017. LINK


Video Data: KoNViD-1k 8s video sequences LINK
Subjective Data: KoNViD-1k crowdsourcing data LINK
Attribute Evaluation Data: KoNViD-1k video attributes LINK

The original 30s video sequences and per-frame video attribute evaluations can be shared upon request.


This website is hosted by the Multimedia Signal Processing Group, University of Konstanz, Germany.
More datasets will be published here in the future.