
Electronic data

  • Calligraphy_Evaluation

    Accepted author manuscript, 2.33 MB, PDF document

    Available under license: CC BY-NC: Creative Commons Attribution-NonCommercial 4.0 International License


Text available via DOI:


Evaluating Brush Movements for Chinese Calligraphy: A Computer Vision Based Approach

Research output: Contribution to conference (without ISBN/ISSN) › Conference paper › peer-review

  • Pengfei Xu
  • Lei Wang
  • Ziyu Guan
  • Xia Zheng
  • Xiaojiang Chen
  • Zhanyong Tang
  • Dingyi Fang
  • Xiaoqing Gong
  • Zheng Wang
Publication date: 1/07/2018
Number of pages: 7
Original language: English
Event: 27th International Joint Conference on Artificial Intelligence, IJCAI 2018 - Stockholm, Sweden
Duration: 13/07/2018 - 19/07/2018


Conference: 27th International Joint Conference on Artificial Intelligence, IJCAI 2018


Chinese calligraphy is a popular, highly esteemed art form in the Chinese cultural sphere and worldwide. Ink brushes are the traditional writing tool for Chinese calligraphy, and the subtle nuances of brush movements have a great impact on the aesthetics of the written characters. However, mastering brush movement is a challenging task for many calligraphy learners, as it requires many years of practice and expert supervision. This paper presents a novel approach that helps Chinese calligraphy learners quantify the quality of brush movements without expert involvement. Our approach extracts the brush trajectories from a video stream; it then compares them with example templates of reputed calligraphers to produce a score for the writing quality. We achieve this by first developing a novel neural network to extract the spatial and temporal movement features from the video stream. We then employ methods developed in the computer vision and signal processing domains to track the brush movement trajectory and calculate the score. We conducted extensive experiments and user studies to evaluate our approach. Experimental results show that our approach is highly accurate in identifying brush movements, yielding an average accuracy of 90%, and the generated score is within 3% of the scores given by human experts.
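The abstract describes comparing an extracted brush trajectory against a calligrapher's template trajectory to produce a quality score. The paper's exact signal-processing method is not given here; a common technique for aligning and comparing two time series of brush-tip positions is dynamic time warping (DTW). The sketch below is a minimal illustration of that idea, not the authors' implementation: `dtw_distance`, `movement_score`, and the `scale` parameter are hypothetical names chosen for this example.

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two 2-D point sequences.

    a, b: lists of (x, y) brush-tip positions sampled along a trajectory.
    Returns the cumulative Euclidean cost of the optimal alignment path,
    which tolerates differences in writing speed between the sequences.
    """
    n, m = len(a), len(b)
    INF = float("inf")
    # cost[i][j] = best alignment cost of a[:i] against b[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            # Euclidean distance between the two matched sample points
            d = ((a[i - 1][0] - b[j - 1][0]) ** 2
                 + (a[i - 1][1] - b[j - 1][1]) ** 2) ** 0.5
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]


def movement_score(learner, template, scale=1.0):
    """Map a DTW distance to a 0-100 score (larger distance, lower score).

    `scale` controls how quickly the score decays with distance; it is a
    free parameter here, not a value from the paper.
    """
    d = dtw_distance(learner, template)
    return 100.0 / (1.0 + scale * d / max(len(template), 1))
```

A perfectly matching trajectory scores 100, and the score falls off smoothly as the learner's trajectory deviates from the template; in practice the mapping from distance to score would be calibrated against expert ratings.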