Video Metadata Basics
In the 1910 book Physics by Charles Riborg Mann and George Ransom Twiss, the authors asked “If a tree falls in a forest and no one is around to hear it, does it make a sound?”
At VRmeta, we ask: what happens if you create a video or film but no one can find it?
While standard metadata provides important information, in the era of big video, we need the ability to go beneath the surface and analyze video on a frame-by-frame basis.
Enter time-based metadata. This class of metadata provides a frame-by-frame analysis of everything that is happening on-screen throughout the entire video or episode.
Time-based metadata gives you deeper insight into the video because you're analyzing the content at the DNA level, producing a digital blueprint of the content.
VRmeta gives users the ability to inject eternal discoverability into any piece of content it touches.
Examples of time-based metadata include categories for actor names, characters, locations, scenes, objects, product placement, genres, dialogue, and subject matter. In short, it creates a wealth of intelligence about your content that covers thousands, if not millions, of video elements.
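To make the idea concrete, here is a minimal sketch of what time-based metadata records could look like in practice. The field names (`start`, `end`, `category`, `value`) and the helper function are illustrative assumptions, not VRmeta's actual schema: each record tags a span of the timeline, and a lookup returns everything on-screen at a given moment.

```python
# Illustrative sketch only: time-coded tag records for one video clip.
# The schema (start, end, category, value) is hypothetical, not VRmeta's format.

def tags_at(metadata, t):
    """Return the tag values active at timestamp t (in seconds)."""
    return [m["value"] for m in metadata if m["start"] <= t < m["end"]]

clip_metadata = [
    {"start": 0.0, "end": 12.5, "category": "location",  "value": "rooftop"},
    {"start": 3.0, "end": 9.0,  "category": "object",    "value": "umbrella"},
    {"start": 8.0, "end": 20.0, "category": "character", "value": "detective"},
]

# Everything on-screen 8.5 seconds into the clip:
print(tags_at(clip_metadata, 8.5))  # → ['rooftop', 'umbrella', 'detective']
```

Indexing content this way is what makes frame-level search possible: a query for "umbrella" can return not just the clip, but the exact seconds where one appears.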
"Great content without accurate metadata is, after all, a missed opportunity"
Try us free for one month. If you enjoy your VRmeta trial, do nothing and your membership will continue automatically for as long as you choose to remain a member.
VRmeta offers members the choice of a month-to-month subscription or a discounted annual plan.