An Automatic Video Editing and Broadcasting System for Enhanced Mixed-Mode Teaching/Learning Experience
Mixed-mode teaching has become increasingly important owing to its convenience and safety, especially under the current pandemic situation. However, students studying online often have a poor experience, since the information conveyed by the single view that most video conferencing software generates is limited. Teaching in front of a single camera also constrains the teacher's activities. Although more cameras can be deployed, delivering multiple video streams simultaneously imposes a heavy network burden, and additional camera views may distract students. To tackle these problems, this project focuses on building a multi-camera directing/editing system that automatically determines the most relevant view at each time instant, using computer vision and cloud computing technologies. The system not only records multiple synchronized videos covering all the important views, but also automatically edits the multiple video streams into a single stream that simulates students' shifting attention during class. By watching this mashup of the multi-camera videos, online students obtain an educational experience closer to that of on-site students. Meanwhile, teachers are no longer restricted to a single close-up camera and can carry out teaching activities more freely.
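The core editing step described above, selecting the most relevant camera view at each time instant and stitching the choices into a single stream, can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual method: it assumes per-segment "attention" scores for each synchronized camera have already been computed by some vision module (face/gesture detection, etc., not shown), and all names and score values are hypothetical.

```python
def select_views(scores):
    """Pick the highest-scoring camera per segment and merge consecutive
    picks into an edit decision list of (camera_id, start_seg, end_seg).

    scores: dict mapping camera_id -> list of per-segment attention scores,
    one score per synchronized time segment (an assumed input format).
    """
    cameras = sorted(scores)
    n = len(scores[cameras[0]])
    # Per-segment winner: the view we assume the students would attend to.
    picks = [max(cameras, key=lambda c: scores[c][t]) for t in range(n)]
    # Merge runs of identical picks into cut entries.
    cuts, start = [], 0
    for t in range(1, n + 1):
        if t == n or picks[t] != picks[start]:
            cuts.append((picks[start], start, t - 1))
            start = t
    return cuts

# Example with 3 hypothetical cameras and 6 synchronized segments.
scores = {
    "closeup":    [0.9, 0.8, 0.2, 0.1, 0.3, 0.7],
    "blackboard": [0.1, 0.3, 0.8, 0.9, 0.2, 0.1],
    "wide":       [0.2, 0.1, 0.3, 0.2, 0.8, 0.2],
}
print(select_views(scores))
# → [('closeup', 0, 1), ('blackboard', 2, 3), ('wide', 4, 4), ('closeup', 5, 5)]
```

In a real system the resulting edit decision list would drive a video muxer that concatenates the chosen segments from the synchronized recordings into the final single stream delivered to online students.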