I recently bought a drone for photography. But the video looks so good that I end up shooting a lot of video, too. Video is new to me, so the raw files keep piling up in my "in" tray. One day I will have to learn to edit video, or send the files to a pro to edit for me.
I decided that the least I could do is watch the videos, organize them, and trim them to just the interesting parts. Saves on disk space.
I wrote a bash script to help me:
- Loop over all videos in a dir.
- Play each video.
- Extract the clips I want.
- Tag, rate and organize the clips.
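The steps above boil down to a loop. A minimal sketch, where `play_video`, `clip_loop` and `tag_and_rate` are hypothetical helper names standing in for the real script's functions:

```shell
#!/usr/bin/env bash
# Sketch of the top-level loop over the source dir.
shopt -s nullglob    # an empty dir yields zero iterations, not a literal glob

process_dir() {
    local src="$1" video
    for video in "$src"/*.mp4 "$src"/*.MP4 "$src"/*.mov "$src"/*.MOV; do
        play_video   "$video"   # open the two MPV windows
        clip_loop    "$video"   # extract clips until I am done watching
        tag_and_rate "$video"   # star rating + xdg tags, then wastebin
    done
}
```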
The script opens each video in two MPV players. One is full screen and unscaled (watching 4K on a 2K screen means the video appears zoomed in to a 2K region). This is for pixel peeping: I can quickly check, at a glance, the raw video quality. The other MPV window acts like a PiP, taking up a quarter of the screen and showing the whole video scaled down.
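Launching the pair of players might look like this. The flags are the mpv options I believe do the job (`--video-unscaled` for the 1:1 view, `--geometry` for the small window); the exact placement values are illustrative:

```shell
# Two views of the same file: fullscreen at 1:1 pixels for quality checks,
# plus a quarter-screen, always-on-top PiP showing the whole frame.
preview_windows() {
    local video="$1"
    mpv --fs --video-unscaled=yes "$video" &       # 4K shown 1:1 on a 2K screen
    mpv --ontop --geometry=50%x50%-0-0 "$video" &  # scaled-down PiP, bottom right
    wait
}
```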
If a video is D-Log, a LUT is applied in MPV to show the video in a more natural colour (raw log video looks grey and washed out before it is processed).
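Detecting D-Log footage and picking the LUT could be as simple as this. The filename check and the `.cube` path are assumptions of the sketch; the real script might read the clip's metadata instead:

```shell
# Return the extra mpv argument that applies a rec.709 conversion LUT to
# D-Log footage, or nothing for normal footage. Filename-based detection
# and the LUT location are assumptions.
lut_args() {
    local video="$1" lut="$HOME/luts/dlog-to-rec709.cube"
    case "${video##*/}" in
        *[Dd][Ll][Oo][Gg]*) printf '%s' "--vf=lavfi=[lut3d='$lut']" ;;
        *)                  printf '' ;;
    esac
}
```

The result is then appended to the player's command line, e.g. `mpv --fs $(lut_args "$video") "$video"`.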
Hacking this together without a plan, I use simple message boxes, drawn on top of the playing videos, to control the process. Better than flicking back and forth to a terminal window.
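The "UI" is nothing more than dialogs stacked above the players. A sketch using kdialog (zenity would work the same way); the button labels are illustrative:

```shell
# One of the control dialogs: a yes/no box relabelled as Start/Skip.
ask_action() {
    if kdialog --yesno "Mark a clip starting here?" \
               --yes-label "Start" --no-label "Skip"; then
        echo start
    else
        echo skip
    fi
}
```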
When I see a good place in the video to start my cut, I press the "Start" button. An input box pops up, prefilled with the current time of the player, e.g. 00:00:09 if the video is 9 seconds in.
I watch some more of the video and notice some messy, jerky camera movement starting at 38s. I press the "End" button, and another input box pops up to capture the end time of the clip. I change it to 00:00:37 to exclude that jerky part.
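Prefilling the input box with the player's current position points at mpv's JSON IPC interface. This sketch assumes the player was started with `--input-ipc-server="$SOCK"` and that socat is available to talk to the socket:

```shell
# Ask mpv where playback currently is, format it, and offer it as the
# default value of an input box. Socket path and socat use are
# assumptions of this sketch.
SOCK=/tmp/mpv-clip.sock

hms() {                       # seconds (possibly fractional) -> HH:MM:SS
    local s=${1%.*}
    printf '%02d:%02d:%02d' $((s / 3600)) $((s % 3600 / 60)) $((s % 60))
}

current_pos() {
    echo '{ "command": ["get_property", "time-pos"] }' \
        | socat - "$SOCK" \
        | grep -o '"data":[0-9.]*' | cut -d: -f2
}

ask_time() {                  # e.g. ask_time "Clip start"
    kdialog --inputbox "$1" "$(hms "$(current_pos)")"
}
```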
Now, in the background, ffmpeg is called to extract the section of video between 9s and 37s. The cut is aligned to keyframes so the video does not need to be re-encoded: ffmpeg sets the real start time to the nearest keyframe before my start time, and the real end time to the next keyframe after my end time. This means the output clip is always a little longer than I chose; I can trim those few extra frames when I use the clip. Because nothing is re-encoded, the extraction is near-instantaneous.
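That keyframe behaviour falls out of how ffmpeg stream-copying works: with `-ss` placed before `-i` and `-c copy`, ffmpeg seeks to the keyframe at or before the requested start and then copies packets verbatim. A sketch (the output naming scheme is illustrative):

```shell
# Cut [start, end] out of src without re-encoding. The input-side seek
# snaps to the previous keyframe, which is why the clip can run a little
# long -- and why the cut is near-instant.
s_of() {                      # HH:MM:SS -> seconds
    local h=${1%%:*} rest=${1#*:}
    echo $(( 10#$h * 3600 + 10#${rest%%:*} * 60 + 10#${rest#*:} ))
}

extract_clip() {              # extract_clip src.mp4 00:00:09 00:00:37
    local src="$1" start="$2" end="$3"
    local dur=$(( $(s_of "$end") - $(s_of "$start") ))
    ffmpeg -nostdin -ss "$start" -t "$dur" -i "$src" -c copy \
           "${src%.*}_${start//:/}-${end//:/}.mp4"
}
```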
A preview of the clipped file is played back at high-speed.
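The playback check is a one-liner; 4x is an arbitrary speed choice:

```shell
# Play the freshly cut clip back fast, just to confirm the cut points.
preview_clip() {
    mpv --fs --speed=4 --really-quiet "$1"
}
```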
If the source video is long, and contains more content I want, I continue playing until I see the next clip I want to extract.
When I finish with a source file, I am asked to give a star rating (1-5) and to choose tags for the clips. For these I make use of the extended-attribute rating and file-tagging metadata (xdg). I can select any number of pre-existing tags, and add new ones. Some tags are added automatically, such as frame_rate, resolution, and colour_profile.
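Writing those attributes could use setfattr with the names KDE's Baloo indexer reads (`user.xdg.tags`, `user.baloo.rating` -- Baloo stores ratings on a 0-10 scale, two per star, if I have that right) and ffprobe for the automatic tags. The star conversion and the auto-tag naming are assumptions of this sketch:

```shell
# Persist rating and tags as extended attributes so Dolphin/digiKam can
# filter on them later.
tag_clip() {                  # tag_clip clip.mp4 4 "sea,sunset"
    local clip="$1" stars="$2" tags="$3" fps res
    fps=$(ffprobe -v error -select_streams v:0 \
              -show_entries stream=r_frame_rate -of csv=p=0 "$clip")
    res=$(ffprobe -v error -select_streams v:0 \
              -show_entries stream=width,height -of csv=p=0:s=x "$clip")
    setfattr -n user.baloo.rating -v "$(( stars * 2 ))" "$clip"
    setfattr -n user.xdg.tags -v "$tags,fps_${fps%%/*},$res" "$clip"
}
```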
Now the clips are in the output dir, and I choose to send the original file to the wastebin. The next video in the source dir starts playing, and the process continues until the dir is empty.
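Sending the source to the wastebin, rather than rm'ing it, keeps a mistake recoverable. One way to do it:

```shell
# gio (part of GLib) moves the file to the freedesktop trash, so it can
# be restored from Dolphin if I trashed the wrong thing.
discard_source() {
    gio trash "$1"
}
```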
Then, using the Dolphin file browser, or digiKam, I can click on a tag and instantly see all clips under it. I can see all videos that are 50fps and D-Log colour. Or I can filter all clips tagged "sea" and "sunset", or "mountains" and "sunrise". The result is a neat pile of trimmed and catalogued video clips, ready to be thrown into some YouTube video.
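The same AND-filtering works from the shell, too, by reading the tag attribute back with getfattr. A sketch, assuming tags are stored comma-separated in `user.xdg.tags`:

```shell
# List clips in a dir that carry every one of the given tags.
clips_tagged() {              # clips_tagged dir tag1 [tag2 ...]
    local dir="$1" f t tags ok
    shift
    for f in "$dir"/*.mp4; do
        tags=$(getfattr --only-values -n user.xdg.tags "$f" 2>/dev/null)
        ok=1
        for t in "$@"; do
            [[ ",$tags," == *",$t,"* ]] || ok=0
        done
        if (( ok )); then echo "$f"; fi
    done
}
```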
Only problem now? I'm more interested in refining the bash script than I am in learning to use Resolve to make a finished video.