Opinion: FCC Rules Need To Be Updated For ATSC 3.0
First, Let’s Start Off With Some FCC Terms:
Primary Services
Ancillary or Supplementary Services
Primary Services refers to the broadcast TV streams, the main focus of a broadcaster’s license.
Ancillary or Supplementary Services are non-primary services, such as data transmission or subscription services, that are not central to a broadcaster’s main programming duties.
The FCC collects a 5% fee from the broadcaster on any revenue associated with Ancillary or Supplementary Services.
ATSC 3.0’s Data Capacity vs ATSC 1.0’s Data Capacity
Depending on the implementation, ATSC 3.0 can fit way more data than ATSC 1.0. Like, more than 2.5 times as much. Here’s a comparison of the two standards at comparable minimum SNRs of approximately 15-17 dB:
ATSC 1.0:
8-VSB 2/3: 19.39 Mbps
15.2 dB SNR
*Updated 11/18/24 for accuracy
ATSC 3.0:
Layered Division Multiplexing:
PLP0: 16-NUQAM 2/15: 3.00 Mbps
(-0.72+3 dB correction) = 2.28 dB SNR
PLP1: 64-NUQAM 10/15: 22.47 Mbps
(14.89+2.28 dB correction) = 17.17 dB SNR
3.00 Mbps + 22.47 Mbps = 25.47 Mbps
MIMO = 25.47 Mbps x 2 = 50.94 Mbps
That is 2.63x the data capacity of ATSC 1.0 (or 162.7% more), and if some additional settings are tweaked, the increase could be even higher! Again, this is still a 6 MHz RF channel at just about a 17 dB SNR. ATSC 3.0 is simply maximizing that RF channel to its (almost) full potential.
Required SNRs assume a Ricean fading channel.
Side note: If you think ATSC 3.0 is impressive with a 6 MHz RF channel, this is just the tip of the iceberg. In WiFi and 5G NR, more advanced spatial multiplexing (i.e., MU-MIMO) is used to increase data capacity substantially further by using multiple antennas to emit spatially separate RF transmissions on the same frequency. This could technically be used in the broadcast industry, but it is not included in the ATSC 3.0 spec and is thus out of the scope of this article.
This really shows just how inefficient ATSC 1.0 is, with a data capacity of only 19.39 Mbps at a 15 dB SNR.
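For anyone who wants to double-check the arithmetic, the LDM and MIMO figures above can be verified with a few lines of Python. This is just a sketch using the article’s example bitrates; the variable names are mine, not ATSC terminology:

```python
# Capacity comparison from the example configuration above.
ATSC1_MBPS = 19.39          # 8-VSB, 2/3 trellis code rate

# ATSC 3.0 Layered Division Multiplexing example
plp0_mbps = 3.00            # 16-NUQAM 2/15 (core layer)
plp1_mbps = 22.47           # 64-NUQAM 10/15 (enhanced layer)
mimo_streams = 2            # 2x2 MIMO doubles the payload

ldm_mbps = plp0_mbps + plp1_mbps
atsc3_mbps = ldm_mbps * mimo_streams

ratio = atsc3_mbps / ATSC1_MBPS
print(f"ATSC 3.0: {atsc3_mbps:.2f} Mbps")                      # → ATSC 3.0: 50.94 Mbps
print(f"{ratio:.2f}x ATSC 1.0 ({(ratio - 1) * 100:.2f}% more)")  # → 2.63x ATSC 1.0 (162.71% more)
```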
Greater Data Capacity Without Greater Expectations
Let’s say there are two hypothetical stations, one ATSC 1.0 and one ATSC 3.0, using the configurations above. The broadcaster decides to broadcast only one channel with 1080p resolution, on both the ATSC 1.0 station and the ATSC 3.0 station.
The MPEG-2 1080p video on the ATSC 1.0 station takes up 19 Mbps, and the audio and other small but necessary data takes up 0.39 Mbps.
∴ There’s no room left over for anything else with ATSC 1.0.
The HEVC 1080p video on the ATSC 3.0 station takes up 4.75 Mbps for the same visual quality (if VVC is used instead, the video would take up 2.375 Mbps at approximately the same visual quality), and the audio and other small but necessary data take up 0.2 Mbps.
∴ 45.99 Mbps is left over if HEVC is used, and 48.365 Mbps if VVC is used. That’s a lot of data capacity left over to serve the public…right?
The way the FCC rules are written, broadcasters have no obligation to use any of that remaining 45.99 Mbps (HEVC implementation) or 48.365 Mbps (VVC implementation) for broadcasting to the general public.
In this scenario, only about 5%-10% of the data capacity is actually being used for the public. Let me remind you: the public airwaves are meant to benefit the public, not to sell data capacity to businesses. Unlike the cellular carriers, which had to spend billions of dollars to acquire the frequencies they are licensed to operate on, and rightfully charge people to access their networks, broadcasters on the public airwaves paid $0 to use the frequencies they are operating on, and are only subject to a 5% fee on whatever revenue they earn from the remaining data capacity they use for datacasting (or “Ancillary or Supplementary Services”).
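The budget above can be sanity-checked with a short sketch. The codec bitrates are the illustrative estimates from this section, not measured values:

```python
# Rough budget for the hypothetical single-channel ATSC 3.0 station.
TOTAL_MBPS = 50.94          # LDM + MIMO capacity from the earlier example
AUDIO_AND_OVERHEAD = 0.2    # audio plus other small but necessary data

video_mbps = {"HEVC": 4.75, "VVC": 2.375}   # same visual quality target

for codec, video in video_mbps.items():
    used = video + AUDIO_AND_OVERHEAD
    remaining = TOTAL_MBPS - used
    pct_used = used / TOTAL_MBPS * 100
    print(f"{codec}: {remaining:.3f} Mbps left over, {pct_used:.1f}% used for the public")
```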
Rules Desperately Need Updating For ATSC 3.0. Here’s What Should Be Done
The FCC needs to implement stricter regulations regarding a minimum broadcast requirement. Currently, a broadcaster operating on the TV band† of the public airwaves is required to broadcast a minimum of one 480i SD channel, 24/7. That’s it. Historically, this makes sense. Analog NTSC broadcasts of the past took up the entire 6 MHz RF channel with 480 interlaced lines. A lot of television content during the switch from NTSC to ATSC was still being produced in 4:3 standard definition. The low-bar requirement of 480i during the digital transition allowed broadcasters that were satisfied with the 4:3 480i format to continue broadcasting in that format, just in digital form.
†54-72 MHz, 76-88 MHz, 174-216 MHz, and 470-608 MHz
The problem is, this isn’t 1996 or 2009. This is 2024, when streaming 1440p or even 4K content over a unicast 4G LTE or 5G cellular connection is not that big of a deal. 480i in this context feels incredibly antiquated. A 480i minimum is the equivalent of making the minimum passing grade in school an F instead of a D or C. If we don’t push the envelope for innovation, the fate of the public airwaves will be dire within a decade.
To make matters worse, this is a two-fold issue. Never before in the history of the TV public airwaves in the United States have broadcasters had the ability to send TV channels at varying SNRs. As I mentioned in this video, there are some examples of broadcasters making TV channels harder to receive by increasing the minimum receive SNR. I believe this needs to be stopped with some sort of regulation.
After much thought, this is what I believe the additional regulations should look like for broadcasters using ATSC 3.0:
PLP0 must contain at least one free TV channel, operating 24/7, without any kind of encryption, that serves the public interest, convenience, and necessity. PLP0 must have a minimum receive SNR of less than 15.00* dB (similar to, but slightly below, ATSC 1.0’s 15.2 dB SNR threshold):
Hypothetical ATSC 3.0 Rules
| PLP | Minimum 24/7 Video Streams | Highest Res. Video Stream 480p to less than 720p | Highest Res. Video Stream 720p to less than 2160p | Highest Res. Video Stream 2160p and greater | DRM Encryption Allowed |
|---|---|---|---|---|---|
| PLP0 | 1 | Minimum receive SNR threshold of less than 0.00* dB required | Minimum receive SNR threshold of less than 10.00* dB required | Minimum receive SNR threshold of less than 15.00* dB required | No |
| PLP1 | None | No SNR restriction | No SNR restriction | No SNR restriction | Yes |
| PLP2 | None | No SNR restriction | No SNR restriction | No SNR restriction | Yes |
| PLP3 | None | No SNR restriction | No SNR restriction | No SNR restriction | Yes |
PLP0 should also be mandated to have functioning Advanced Warning And Response Network (AWARN) alerts without requiring an internet connection. After all, this is partially what made the FCC approve ATSC 3.0 in the first place.
This is a win-win. If a broadcaster doesn’t want to broadcast in 4K, they will have to make their minimum receive SNR lower, which will make reception of over-the-air TV much easier. If a broadcaster doesn’t want to broadcast in HD, they will have to make their minimum receive SNR lower still, which will make reception even easier. Although a broadcaster couldn’t put DRM-laced channels on PLP0, they would be free to implement DRM elsewhere. Obviously, banning DRM outright would be best, but I believe this is a good compromise.
A few notes: I wanted to make sure the stipulations mentioned above were as clear and concise as possible. Also, I wanted to make sure that if the broadcaster used, say, TDM—and only a third of the time resources were used for PLP0—the lowest SNR threshold would still be enough for high quality video at those resolutions with VVC/H.266. These are very basic regulations that could easily be implemented. Additionally, I’m open to suggestions, such as modulation and code rate choices instead of using minimum SNR requirements.
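To show how mechanical these checks would be, here is a sketch of the proposed PLP0 rules from the table, written as a simple validator. The data model, field names, and function are hypothetical illustrations of mine, not FCC or ATSC constructs:

```python
from dataclasses import dataclass

# Proposed PLP0 SNR caps (dB), keyed by the height of the highest-resolution
# video stream carried (from the "Hypothetical ATSC 3.0 Rules" table).
SNR_CAP_BY_MIN_HEIGHT = [(2160, 15.00), (720, 10.00), (480, 0.00)]

@dataclass
class PLP:
    index: int
    min_receive_snr_db: float
    max_resolution_height: int   # tallest video stream carried, in lines
    has_free_24_7_channel: bool
    uses_drm: bool
    awarn_alerts_work: bool      # advanced emergency alerts without internet

def plp0_compliant(plp: PLP) -> bool:
    """Check a station's PLP0 against the proposed rules."""
    if plp.index != 0:
        raise ValueError("these rules only constrain PLP0")
    if not plp.has_free_24_7_channel or plp.uses_drm or not plp.awarn_alerts_work:
        return False
    # Apply the SNR cap matching the highest resolution carried
    for min_height, cap_db in SNR_CAP_BY_MIN_HEIGHT:
        if plp.max_resolution_height >= min_height:
            return plp.min_receive_snr_db < cap_db
    return plp.min_receive_snr_db < 0.00

# Example: a WEYS-LD-style config (one 480p channel at a 16 dB SNR, no AWARN)
weys_like = PLP(0, 16.0, 480, True, False, False)
print(plp0_compliant(weys_like))   # → False
```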
Why this nuanced take is important
I don’t believe broadcasters should be penalized with super harsh rules. I do believe that broadcasters should be serving the public with at least one TV channel that doesn’t require a high SNR to receive, and isn’t locked down with DRM. What a concept! Those hypothetical rules that I created above aren’t meant to hinder a broadcaster’s ability to use their license(s) for other means, like B2B datacasting, broadcasting dozens of channels, innovating, etc. It’s to ensure that broadcasters are at some basic level serving the public meaningfully.
It’s these situations below that made me write this:
- Sinclair’s WNYO and WUHF stations, in Buffalo, NY and Rochester, NY respectively, are reserving PLP0 for B2B datacasting, with the minimum receive SNR for any TV channel at 21 dB on PLP1 for WNYO and 19 dB on PLP1 for WUHF
- PLP0 on WTVJ in Miami, FL is reserved, with its TV channels only available on PLP1 at an 18.8 dB SNR
- WEYS-LD in Miami, FL is broadcasting only one 480p channel, and yet PLP0 (its only PLP) has a 16 dB SNR
Even in the lighthouse stage, minimum SNRs could be much lower for all of these stations.
And none of these broadcasts have functioning advanced emergency alerts at the time of writing.
Conclusion
I want broadcasters to make money, I don’t want them to be too bogged down by regulations, and I want them to innovate with ancillary services like B2B datacasting. The more money they make with OTA TV, the more likely it is to stick around. However, there needs to be a balance between serving the public (the reason for OTA TV to exist in the first place) and serving an elite few. From what I’ve seen so far with ATSC 3.0 deployments, there have been unnecessarily high minimum receive SNRs, DRM encryption, and a lack of advanced emergency alerting.
Like this article? Consider donating. Thanks for helping me keep this website ad-free.