Digital video has come a long way since the early 2000s. We’ve seen picture quality improve by leaps and bounds, in tandem with the introduction of new display technologies like OLED. As consumers, we also have higher expectations than ever before, both at home and on portable devices such as smartphones and tablets. Codecs enable all of this, compressing large swathes of raw information into a video file that’s far more manageable for storage, broadcast, and distribution.
Over the years, major industry players like Google, Intel, and Apple have been interested in new ways to compress and package video. You may have heard of YouTube adopting the new AV1 standard, for example, and newer iPhone models targeting professional videographers with Apple’s ProRes codec. Indeed, there are at least a handful of different standards currently in use, each with their own strengths and weaknesses.
With so many video codecs on offer then, it’s worth discussing what they do, why the digital video industry is still fragmented, and how some of the most popular standards differ from each other. Here’s everything you need to know.
What is a video codec?
The term codec itself offers a pretty big hint to understanding how it all works — it’s simply shorthand for coder-decoder. Why are videos encoded and decoded, you ask? In simple terms, it’s because they typically carry a lot of raw data.
You may have heard that videos are essentially a series of still images. Old-school movie projectors are the best showcase of this principle. They are physically fed a reel of film and show you 24 frames per second, tricking your brain into thinking it’s a motion picture.
While you could absolutely do the same with digital images instead, the storage needed for so much data is unfathomable. According to Mozilla’s calculations, a single 30-minute video — stored in the form of raw images — would weigh well over 1TB. For context, that’s roughly eight times the total storage capacity of a typical 128GB smartphone.
A single 30-minute video — stored in the form of raw images — would weigh well over 1TB.
To that end, video storage and playback simply would not be feasible without the complex compression algorithms that codecs provide. It’s also worth noting that codecs exist for audio too, for many of the same reasons. Raw, uncompressed video and audio can both quickly balloon in size, making them impractical to edit, store, and distribute.
Related: 10 best video editor apps for Android
How do codecs work?
While codecs employ several complex compression algorithms, a few basic methods are easy to visualize. For example, what if you only stored the information that changes between one frame and the next, instead of storing full-sized images? That way, a mostly static scene lasting several minutes can be compressed significantly. A person speaking against a fixed background, for example, involves very little motion, and that’s a fairly common scenario in most videos and movies.
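As a toy sketch of that idea (real codecs operate on blocks of pixels with far cleverer math, but the principle holds), frame differencing might look like this:

```python
# Toy illustration of inter-frame (delta) compression: instead of storing
# every frame in full, store only the pixels that changed since the last one.
# Frames here are flat lists of pixel values for simplicity.

def encode_delta(prev_frame, frame):
    """Return a sparse list of (index, new_value) pairs for changed pixels."""
    return [(i, v) for i, (p, v) in enumerate(zip(prev_frame, frame)) if p != v]

def decode_delta(prev_frame, delta):
    """Rebuild the full frame from the previous frame plus the stored changes."""
    frame = list(prev_frame)
    for i, v in delta:
        frame[i] = v
    return frame

# A mostly static scene: only 2 of 8 pixels change between frames.
frame1 = [10, 10, 10, 10, 50, 50, 50, 50]
frame2 = [10, 10, 10, 10, 50, 50, 99, 99]

delta = encode_delta(frame1, frame2)
print(delta)                                  # [(6, 99), (7, 99)]
print(decode_delta(frame1, delta) == frame2)  # True
```

Storing two (index, value) pairs instead of eight full pixels is the whole trick in miniature; spread across millions of pixels and thousands of frames, the savings become enormous.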
You can also take this one step further with motion vectors and compensation algorithms. These can achieve higher compression levels by predicting where a particular pixel ends up in a future frame. If a camera is simply panning horizontally, for example, the codec can tell that a particular pixel will be displaced to the left or right after a few frames.
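Here is a drastically simplified, one-dimensional sketch of that prediction step, assuming a horizontal pan: the encoder stores a single motion vector plus the prediction error (the residual) rather than the pixels themselves:

```python
# Toy motion compensation in one dimension. If the camera pans horizontally,
# the next frame's row of pixels is roughly the previous row shifted sideways,
# so we store a single motion vector plus a (hopefully tiny) residual.

def shift_right(row, dx):
    """Predict the next row: the previous row moved dx pixels to the right,
    repeating the edge pixel to fill the gap."""
    return [row[0]] * dx + row[:len(row) - dx]

def encode_motion(prev_row, row, dx):
    """Store the motion vector and the prediction error instead of raw pixels."""
    predicted = shift_right(prev_row, dx)
    residual = [actual - p for actual, p in zip(row, predicted)]
    return dx, residual

def decode_motion(prev_row, dx, residual):
    """Reverse the process: prediction plus residual recovers the original row."""
    predicted = shift_right(prev_row, dx)
    return [p + r for p, r in zip(predicted, residual)]

prev_row = [1, 2, 3, 4, 5, 6, 7, 8]
row      = [1, 1, 2, 3, 4, 5, 6, 7]  # the camera panned: everything moved right by 1

dx, residual = encode_motion(prev_row, row, dx=1)
print(residual)  # [0, 0, 0, 0, 0, 0, 0, 0], a perfect prediction; almost nothing to store
print(decode_motion(prev_row, dx, residual) == row)  # True
```

When the prediction is good, the residual is mostly zeros, which compresses down to almost nothing. Real encoders search for the best vector per block of pixels in two dimensions, but the economics are the same.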
Codecs aim to deliver an acceptably accurate image at a fraction of the original file size.
Another compression method involves grouping nearby pixels with similar colors. When taken to the extreme, however, this is what causes the infamous “blocky” look in low-quality videos. In this instance, the compressed file simply doesn’t hold enough information for the decoder to reconstruct the original image.
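A toy version of that grouping, shrinking a row of pixels to one average value per fixed-size block (real codecs use 2D blocks and frequency transforms, but the information loss is the same in spirit):

```python
# Toy block-based compression: replace each fixed-size block of pixels with
# its average value. Push the block size too far and you get the familiar
# "blocky" look, because the per-pixel detail inside each block is discarded.

def compress_blocks(pixels, block_size):
    """Keep one rounded average value per block of pixels."""
    return [
        round(sum(pixels[i:i + block_size]) / len(pixels[i:i + block_size]))
        for i in range(0, len(pixels), block_size)
    ]

def decompress_blocks(averages, block_size):
    """Reconstruct by repeating each average; the fine detail is gone for good."""
    return [v for v in averages for _ in range(block_size)]

pixels = [100, 102, 101, 101, 200, 198, 202, 200]  # two regions of similar color
small = compress_blocks(pixels, block_size=4)      # 8 values shrink to 2
print(small)                                       # [101, 200]
print(decompress_blocks(small, block_size=4))      # [101, 101, 101, 101, 200, 200, 200, 200]
```

Notice that the reconstruction is close to, but not identical to, the original: that difference is exactly the information the decoder no longer has, and cranking the block size up is what turns it into visible blocking.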
In moderation, these compression techniques — alongside others — can put out an acceptably accurate image at a fraction of the original size. While you inevitably lose some information during compression, it’s a worthwhile tradeoff, to say the least.
Each video codec uses a slightly different approach or method to achieve compression. And as you’d expect, newer codecs are designed to retain or improve picture quality while also reducing file size.
Why do video codecs matter?
From chat applications like WhatsApp to streaming services like Netflix and Disney+, codecs open the door to many smartphone use-cases we take for granted.
Sharing a media file on a service like Facebook or Twitter, for example, typically involves re-encoding the video to a smaller size. The same is true for photos and audio files too. Similarly, streaming companies like YouTube encode and store each piece of media in multiple qualities and codecs. They will then deliver the right version depending on your device’s capabilities and connection speed.
From streaming services to chat apps, video codecs are an integral part of our connected lives. Yet, we rarely notice their impact.
Even though internet speeds have improved considerably over the years, most of us still have to deal with data caps and occasional slowness. And carrying around high-resolution video quickly eats into our limited mobile storage space. Newer codecs are explicitly designed with these constraints in mind.
As such, video codecs come in handy while recording video too. Many modern Android devices will offer the option to record in a more efficient codec, allowing you to save valuable disk space.
To put this to the test, I recorded two 20-second 4K clips on my smartphone — one in the default H.264 codec and the other in the more efficient H.265 codec (more on those in a bit). The first clip’s file size totaled 125MB, while the second weighed in at 90MB.
Those figures work out to a nearly 30% reduction in file size, just from changing one setting! What’s more, it should be possible to compress the file even further using more powerful hardware than a smartphone SoC. For streaming companies like Netflix or YouTube, moving to a more efficient codec can cut storage and bandwidth requirements by nearly half — saving staggering amounts of money in the process.
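For the curious, the arithmetic behind those numbers is straightforward. The per-hour figures below are a naive extrapolation from my two clips, not a measured result:

```python
# Back-of-the-envelope math for the recording test above: two 20-second
# 4K clips, one per codec. The hourly figures simply scale those clips up.

h264_mb, h265_mb = 125, 90  # measured file sizes in MB

saving = 1 - h265_mb / h264_mb
print(f"{saving:.0%} smaller")  # 28% smaller, i.e. nearly 30%

# An hour of footage is 180 such 20-second clips:
print(f"{h264_mb * 180 / 1000} GB vs {h265_mb * 180 / 1000} GB")  # 22.5 GB vs 16.2 GB
```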
How to tell which codecs your smartphone or device supports
Encoding and decoding video can be greatly accelerated by dedicated hardware. To that end, the chips in our televisions, mobile phones, computers, and even game consoles all support a fixed set of codecs at the hardware level. In other words, they’re capable of compressing and decompressing video files extremely efficiently using hardware acceleration. This is especially important in smartphones, because the lower processing strain translates to longer battery life.
However, you may still come across a video file that cannot be played back or opened by any app — chances are that it uses a codec your device either cannot handle or doesn’t recognize. To confirm this, you can use an app like MediaInfo to identify the video’s format and encoding details. On Android, you can use free apps like Codec Info or AIDA64 to check your device’s support for audio and video codecs. If a particular codec isn’t on the list, it’s likely because your device’s SoC doesn’t support it. The Android Developers website offers a list of mandatory codecs, in case you’re curious.
That said, modern smartphones have plenty of brute-force CPU power to decode unsupported codecs. To that end, third-party video player apps like VLC will offer to play back such files through software decoding, without any hardware acceleration. However, this can heat up your device and drain your battery over longer periods, so it’s best not to rely on it.
Related: The best media streaming devices you can buy in 2021
A brief history of video codecs
Competing codecs and standards were once a major problem for the video industry. Many popular codecs only really worked well with hardware from specific manufacturers. Thankfully though, the past few years have seen device makers converge on a handful of codecs. While fragmentation is less of a problem than it once was, it’s still worth knowing which codecs you’re likely to encounter in the real world and how we got here.
MPEG-2
MPEG-2 is perhaps the oldest video codec that’s still in circulation today. It became extremely popular in the early 2000s, back when it was used almost exclusively to compress television broadcasts and DVD movie releases. Some early Blu-ray releases used MPEG-2 for high-definition content as well.
MPEG-2 was developed for the DVD era. It became the dominant video codec in the early 2000s.
Today, virtually no new content is encoded in MPEG-2. However, decode support for it is extremely common, especially since many newer devices are backwards compatible with it. From basic DVD players to decade-old computers, it’s easy to find a device that can play back MPEG-2 files these days.
H.264 or AVC
Advanced Video Coding (AVC), or H.264 as it’s more commonly known, is the reigning king of video codecs in terms of compatibility and adoption. It grew in popularity alongside the rise of high-definition video because of its improved efficiency over previous codecs. H.264 manages to deliver similar picture quality at approximately 50% of the size of an MPEG-2 video.
H.264 was such a massive leap in efficiency over previous codecs that it quickly became the de facto standard for HD videos. This was especially true for bandwidth-constrained applications, like streaming video over the internet. In fact, the H.264 codec is what enabled YouTube to first introduce support for 720p and 1080p resolutions all the way back in 2008 and 2009, respectively. Even a decade later, you’ll find H.264 widely used for streaming video, Blu-ray discs, and television broadcasts.
Most content on the internet is encoded in the H.264 codec because of its wide compatibility.
As a result of this widespread adoption, almost all mainstream hardware and software supports the codec today. Unsurprisingly, many smartphones and digital cameras also record in H.264 to ensure maximum compatibility with other devices.
H.265 or HEVC
High Efficiency Video Coding, or HEVC, was the follow-up to the incredibly popular H.264 codec. As the name suggests, it offers a sizable jump in efficiency compared to previous codecs, making it a no-brainer for bandwidth-sensitive applications and ultra-high-resolution content.
HEVC’s rise coincided with the introduction of 4K displays and releases. To that end, the latest Blu-ray standard — Ultra HD Blu-ray — relies on the H.265 codec. You’re also likely to come across H.265 while recording 4K and 8K video on smartphones, especially if shooting in an HDR format like Dolby Vision.
See also: Everything you need to know about HDR display technology
However, HEVC failed to gain as much traction as H.264 in other areas, despite its advantages. For years, H.265 adoption was stunted by uncertainty over the codec’s licensing and royalty fees. With three different patent pools versus H.264’s single licensing group, it took years for players in the content, hardware, and software industries to warm up to HEVC. And even now, major web browsers like Google Chrome and Mozilla Firefox don’t support it at all.
VP9
Hesitation over HEVC’s patent and royalty specifics prompted Google to take matters into its own hands and develop an open-source alternative called VP9. It offers an efficiency gain of around 30% over H.264, making it an ideal choice for high-resolution video files. More importantly, VP9 is completely royalty-free, meaning that companies don’t have to pay Google anything to add support for it.
Google developed VP9 as a free and open codec to avoid dealing with H.265’s confusing royalty situation.
Google bolstered VP9 adoption when it decided to use it for 4K videos on YouTube. Starting in 2016, it also required manufacturers of Android TV devices to support the codec. Both of these were enough to propel VP9 to success, at least to a greater degree than HEVC. Unsurprisingly, virtually all smartphones, browsers, and televisions released since 2017 can handle VP9-encoded content.
However, not many content providers have adopted VP9. Besides Google’s own YouTube and Stadia platforms, only Netflix adopted it for a brief period.
AV1
AV1 is the newest video codec on this list, and it’s shaping up to be a true successor to the popular H.264. Like VP9, it is open-source and royalty-free. More importantly, though, it enjoys broader industry support than any previous codec. AV1’s development is led by the Alliance for Open Media — a cross-industry coalition of giants like Intel, Apple, Google, Adobe, Facebook, and Arm. With such backing, it’s hard to imagine AV1 faltering like HEVC and other codecs designed for the streaming era.
Facebook’s testing in 2018 found that AV1 offered 50% better compression than H.264. Another test concluded that AV1 offered a 10% and 15% reduction in file size compared to HEVC and VP9, respectively. Those figures mean that a 25GB 1080p Blu-ray movie encoded in H.264 could be compressed down to just 12-13GB using AV1 instead — all without any degradation in image quality.
The AV1 codec is backed by the Alliance for Open Media — a cross-industry coalition of giants like Intel, Apple, Google, Adobe, Facebook, and Arm.
While AV1’s specification was finalized around 2019, adoption has progressed slower than you’d expect. This is because virtually no hardware on the market offered hardware-accelerated encoding for the codec until relatively recently. Without that hardware, Moscow State University estimated that AV1 encoding was 2,500 to 3,000 times slower than its competitors.
Similarly, AV1 decoding capabilities weren’t widespread either. In the Android ecosystem, MediaTek’s Dimensity 1200 was the first chipset to include hardware acceleration for AV1 in early 2021. However, its direct competitors — the Qualcomm Snapdragon 888 and 870 SoCs — didn’t support the codec at all. Qualcomm isn’t part of the Alliance for Open Media and doesn’t support AV1 in its latest Snapdragon 8 Gen 1 chipset either.
Once hardware-level support for AV1 is more commonplace, we’ll likely see more and more services adopt it. YouTube and Netflix already use AV1 on Android, as does Google Duo. Furthermore, all major web browsers — except for Safari — support the codec.
Read more: A look at the inner workings of AV1
Apple ProRes
Unlike the other codecs on this list, ProRes is a relatively niche video codec designed almost exclusively for video editors and professionals. Simply put, videos stored in ProRes retain more information, with lower compression levels. This makes post-production work like color grading easier, since the file still retains a fair amount of raw information from the camera.
See also: 10 best filmmaker apps for Android
Of course, more information and a lower compression ratio mean that ProRes files tend to be quite a bit larger. According to Apple’s white paper detailing the codec, a single hour of 4K 30fps video encoded in ProRes will yield a file size north of 280GB! This is why ProRes is almost never used for content delivery, and only during intermediate production stages. In fact, Apple won’t even let you record 4K ProRes video on the 128GB model of the iPhone 13.
Apple ProRes is an intermediate codec that prioritizes quality over compression. It is meant for video editing and color grading, not content delivery like the other codecs.
In 2021, Apple announced that the iPhone 13 would be the first smartphone to optionally shoot videos directly in ProRes. Later in the year, drone maker DJI released the Mavic 3 Cine — its flagship consumer drone — with the ability to record in ProRes. Over on the encoding side, Apple included dedicated ProRes accelerators in its M1 Pro and M1 Max SoCs’ media engines.
And with that, you’re now up to speed on all of the popular video codecs in use today. For further reading, check out our comprehensive guide on Bluetooth audio codecs.