Shrinking expenses as fiber goes deeper in the network

Jan. 3, 2019
Optical fiber provides higher performance and reliability and can handle more bandwidth, which justifies driving fiber deeper into the network (fiber deep) and closer to homes and businesses. But since fiber-deep migration is a brownfield operation, the existing HFC network could see downtime that affects customer services. Another challenge is the lack of service assurance, test and measurement at every stage of the network lifecycle.

With data rate requirements constantly increasing, multiple system operators (MSOs) are looking to optimize the coax plant, including enabling higher throughput rates over existing coaxial cable systems. The Data Over Cable Service Interface Specification (DOCSIS®) protocol, a telecommunications standard used to provide internet access via a cable modem, was developed to help accomplish that goal. Figure 1 shows the evolution of DOCSIS technology and the downstream and upstream capacity increases with each iteration.

Figure 1. Evolution of DOCSIS® capacity.

More bandwidth means a lower tolerance for noise and other detrimental effects, all of which shorten delivery distances. To deliver such high speeds, the length of coaxial cable must be reduced, necessitating the installation of optical fiber much closer to the user. This blend of fiber and coaxial cables forms the basis of a hybrid fiber/coaxial (HFC) network.

Cable providers are heavily investing in fiber by adding more than a million fiber nodes across their network, rolling out DOCSIS 3.1 across millions of homes, delivering 1-Gbps broadband services using DOCSIS 3.1 over HFC or upgrading to new converged cable access platform (CCAP) systems to shift towards a node-plus-zero architecture. With these upgrades, MSOs can better compete with fiber-based and over-the-top (OTT) service providers by delivering services including 4K or IP video. The upgrades will also enable excellent fronthaul coverage as fiber reaches multiple new access points and service providers prepare for 5G mobile.

Optical fiber provides higher performance and reliability and can handle more bandwidth, which justifies driving fiber deeper into the network (fiber deep) and closer to homes and businesses. Shorter coaxial cables and fewer amplifiers enable MSOs to reduce operational expenses (opex) related to power, maintenance and troubleshooting. But since fiber-deep migration is a brownfield operation, the existing HFC network could see downtime that affects customer services. Another challenge is the lack of service assurance, test and measurement at every stage of the network lifecycle. Training for those key tasks is also often neglected.

How Remote PHY fits into the fiber-deep architecture

For a fiber-deep architecture, several topologies are possible. One of the most promising, which also has a great deal of market traction, is Remote PHY. "PHY" refers to the physical layer: the active equipment that performs the optical-to-electrical conversion of QAM modulations. Basically, it's the equipment required to transmit and receive RF signals.

Historically, QAM modulations were done at the headend, then sent over the fiber via an analog 1550-nm signal. With Remote PHY, that action is pushed into the field to a Remote PHY node, also referred to as a Remote PHY device (RPD). This RPD is fed from the headend via a digital Ethernet link over DWDM wavelengths, providing several advantages and long-term extension of the DOCSIS evolution.

Figure 2. Traditional HFC vs. digital Ethernet signals over DWDM.

Figure 2 shows the difference between a traditional HFC architecture involving linear analog optics, versus digital Ethernet signals over DWDM optical wavelengths. The latter is part of a major network upgrade that removes the amplifiers (node plus zero) on the coax portion by bringing the fiber much closer to the home and creating smaller service groups per node.

CWDM and DWDM technology overview

Once fiber is deployed, there are two options to get more bandwidth to the node:

  1. Increase the speed of the transmitters to 1, 2.5, 10, 40 or 100 Gbps, with each option costlier than the previous one, then provide each node with a portion of the total bandwidth.
  2. Add more lanes through WDM technology. A single lane caps capacity at the transmitter's speed, whereas putting traffic through multiple lanes easily multiplies the bandwidth of a single fiber.
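As a back-of-the-envelope illustration of option 2, total link capacity is simply the per-wavelength rate multiplied by the number of wavelengths. The rates and channel counts in this sketch are illustrative, not vendor specifications:

```python
# Back-of-the-envelope aggregate capacity for a WDM link:
# total capacity = (rate per wavelength) x (number of wavelengths).
# Rates and channel counts below are illustrative, not vendor specs.

def aggregate_capacity_gbps(rate_per_lambda_gbps: float, num_wavelengths: int) -> float:
    """Total link capacity when every wavelength carries the same rate."""
    return rate_per_lambda_gbps * num_wavelengths

# A single 10-Gbps transmitter on one fiber:
single = aggregate_capacity_gbps(10, 1)      # 10 Gbps
# The same fiber carrying 16 CWDM wavelengths at 10 Gbps each:
cwdm_16 = aggregate_capacity_gbps(10, 16)    # 160 Gbps
# Up to 80 DWDM wavelengths at 10 Gbps each:
dwdm_80 = aggregate_capacity_gbps(10, 80)    # 800 Gbps

print(single, cwdm_16, dwdm_80)
```

The arithmetic is trivial, but it is the whole economic argument for WDM: the same strand of installed fiber carries 16 or 80 times the traffic without new construction.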

WDM combines multiple wavelengths using a multiplexer (mux) for transmission over a single fiber. In the case of existing fibers, such as in an HFC network, a mux can be easily installed at the headend to combine 4, 8, 16 or 18 CWDM wavelengths or up to 80 DWDM wavelengths (see Figure 3). Depending on the local density and bandwidth requirements, MSOs may begin with CWDM to gain bandwidth at a lower cost and upgrade to DWDM later or begin initially with DWDM for a significant bandwidth boost right from the start.
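The wavelength plans behind those channel counts come from the ITU-T grids: CWDM (G.694.2) defines 18 channels from 1271 to 1611 nm on a 20-nm spacing, while DWDM (G.694.1) defines channels on a fixed frequency grid, for example 100-GHz spacing anchored at 193.1 THz. The sketch below generates both grids; the 80-channel DWDM count and its placement around the anchor are illustrative assumptions:

```python
# Sketch of the standard wavelength grids a mux/demux is built around.
# CWDM (ITU-T G.694.2): 18 channels, 1271-1611 nm, 20-nm spacing.
# DWDM (ITU-T G.694.1): channels on a fixed frequency grid; the 80-channel
# count and symmetric placement around the anchor are assumptions here.

C = 299_792_458  # speed of light, m/s

cwdm_nm = [1271 + 20 * i for i in range(18)]  # 1271, 1291, ... 1611 nm

def dwdm_frequencies_thz(n_channels=80, spacing_ghz=100, anchor_thz=193.1):
    """Center frequencies on a 100-GHz grid around the 193.1-THz anchor."""
    half = n_channels // 2
    return [anchor_thz + (i - half) * spacing_ghz / 1000 for i in range(n_channels)]

def thz_to_nm(f_thz):
    """Convert an optical frequency in THz to a wavelength in nm."""
    return C / (f_thz * 1e12) * 1e9

dwdm_nm = [thz_to_nm(f) for f in dwdm_frequencies_thz()]
print(cwdm_nm[0], cwdm_nm[-1])      # 1271 1611
print(round(min(dwdm_nm), 2), round(max(dwdm_nm), 2))
```

Note how much tighter the DWDM channels sit: the anchor frequency of 193.1 THz corresponds to roughly 1552.5 nm, and 100-GHz spacing works out to well under 1 nm between neighbors, which is why DWDM fits so many more channels on one fiber.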

Figure 3. Mux and demux in an HFC network.

At the receiving end, a de-multiplexer (demux) isolates the portion of the signal required at that location. Each wavelength is then split out and sent onto its specific path, with each node getting only one or two wavelengths (see Figure 4). Since installing new fibers is very costly, the goal here is to maximize the use of existing infrastructure: a cost-efficient way to increase bandwidth without new construction.

Figure 4. Wavelength paths.

Testing of fiber-deep topologies

New fiber, technology and topology always come with new challenges. This is especially true when MSOs and their contractors have coax-expert technicians who must deal with both optical and Ethernet testing.

Let’s discuss some basic testing requirements (see Figure 5). The overall goal is to make sure that fibers have good connectors, loss is within budget, the mux and demux are good and, obviously, that the fiber is transporting the correct wavelength to the correct location.

We can split this access network into two main categories: the headend and the field, with respective testing requirements for each.

Figure 5. Testing requirements for fiber-deep architectures.

Verify connector cleanliness. Dirty connectors are the number one cause of network failure. It's important to check all optical connectors at the headend (i.e., in the patch panels) and those on the fiber jumpers. It is also essential to inspect the connectors at both ends and clean them if needed: if one is dirty, there is a risk of cross-contamination as dirt and dust transfer from one connector to the other.

In some cases, networks may feature high-density fiber patch panels including MPO connectors. With MPO connectors, it's common for at least one fiber out of the lot to be dirty. Over time, that dirt or dust can cross-contaminate other fibers. In addition, not all fibers will be active on day one. If all fibers are lit and a single fiber needs cleaning, all fibers on that link may need to go down in order to clean just one. Given that possibility, it is very important to validate all fibers during the initial installation.

Getting the loss right. The next step involves fiber characterization and, later, maintenance and troubleshooting. To characterize a fiber, an optical time domain reflectometer (OTDR) shoots an optical signal into the fiber and uses back reflection to characterize it (i.e., detect cuts, micro- and macrobends, bad splices, etc.). Remember, though, that since the networks in question include filtering devices, muxes and demuxes, it is crucial that OTDR signals are not filtered out somewhere along the way. That's why dedicated CWDM and/or DWDM OTDRs are required to test at a specific wavelength, characterize every branch of the mux and demux, measure end-to-end loss and troubleshoot the system. Such OTDRs can locate potential problems along the entire fiber link or even pinpoint the location of fiber breaks affecting a specific node. All this can be done from the headend, avoiding the need to dispatch a truck before identifying the source of the problem.
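To make the loss-budget idea concrete, here is a minimal sketch that sums the expected losses along one headend-to-node wavelength path. All per-element loss values (fiber attenuation, connector, splice, and mux/demux insertion loss) are typical assumed figures, not measurements; a real budget comes from the component datasheets and the OTDR results:

```python
# A minimal link-loss budget check, using typical (assumed) element losses.
# Real budgets come from component datasheets and OTDR measurements.

def link_loss_db(length_km, n_connectors, n_splices,
                 fiber_db_per_km=0.25,   # ~1550-nm single-mode, assumed
                 connector_db=0.5,       # per mated connector pair, assumed
                 splice_db=0.1,          # per fusion splice, assumed
                 mux_demux_db=5.0):      # mux + demux insertion loss, assumed
    """Sum the expected losses along a headend-to-node wavelength path."""
    return (length_km * fiber_db_per_km
            + n_connectors * connector_db
            + n_splices * splice_db
            + mux_demux_db)

# Hypothetical 20-km path with 4 connector pairs and 6 splices:
loss = link_loss_db(length_km=20, n_connectors=4, n_splices=6)
budget = 15.0  # dB between Tx power and Rx sensitivity, assumed
print(f"expected loss: {loss:.1f} dB, margin: {budget - loss:.1f} dB")
```

If the OTDR-measured end-to-end loss comes in well above a sum like this, some element (often a dirty connector or bad splice) is eating more than its share, and the trace shows where.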

This is complex technology, even for optical experts. That's why OTDRs are now simpler, and automated software is available to remove the complexity of setting test and configuration parameters and to help interpret results. Some of these software applications can run multiple pulse widths for intelligent fiber and mux/demux characterization at the push of a button.

Check fiber dispersion. Fiber characterization also includes dispersion testing, recommended by the ITU-T G.650.3 standard for any fiber transmitting signals at 10 Gbps or more, as is the case for Remote PHY signals between the headend and the demux. As shown in Figure 6, dispersion is pulse broadening that can lead to bit errors. Dispersion comes in two types: chromatic dispersion (CD), which is mainly due to different wavelengths (or colors) traveling at different velocities in the fiber, and polarization mode dispersion (PMD), which arises because different polarizations travel at different velocities.

Figure 6. Dispersion leads to pulse broadening, as light propagates in the fiber.

Dispersion scales with distance, so longer fibers are more prone to dispersion issues. As a rule of thumb, technicians should test any fiber longer than 10 km for dispersion. Likewise, older fibers and aerial cables suffer much more from PMD issues. Accordingly, dispersion testing should be part of any proper Remote PHY deployment.
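These rules of thumb follow from the standard first-order formulas: CD broadening grows linearly with length (delta_t = D x L x delta_lambda), while mean PMD grows with the square root of length. The coefficients and source linewidth in this sketch are typical assumed values for standard single-mode fiber, not measured data:

```python
import math

# Rule-of-thumb dispersion estimates. Coefficients are typical assumed
# values for standard single-mode fiber, not measured data.

def chromatic_broadening_ps(d_ps_nm_km, length_km, linewidth_nm):
    """Chromatic dispersion broadening: delta_t = D * L * delta_lambda."""
    return d_ps_nm_km * length_km * linewidth_nm

def pmd_ps(pmd_coeff_ps_sqrt_km, length_km):
    """Mean PMD (differential group delay) grows with sqrt(length)."""
    return pmd_coeff_ps_sqrt_km * math.sqrt(length_km)

# Hypothetical 40-km span at 1550 nm: D ~ 17 ps/(nm*km), 0.1-nm linewidth
cd = chromatic_broadening_ps(17, 40, 0.1)   # 68 ps
pmd = pmd_ps(0.1, 40)                       # ~0.63 ps
# A 10-Gbps NRZ bit slot is 100 ps, so 68 ps of CD is already significant,
# while PMD on a short, modern fiber is usually negligible.
print(cd, round(pmd, 2))
```

This is also why the 10-km threshold is a reasonable rule of thumb: halving the length halves the CD broadening, so short spans rarely accumulate enough to threaten a 10-Gbps bit slot.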

Right signal at right location. The last set of required optical tests characterizes the CWDM or DWDM signals. Once all the tests discussed above are done and the fiber plant is fully characterized, it is time to activate it. To do so, it is necessary to ensure that all transmitters (active equipment) are fully functional and that each signal reaches its intended destination. This is done with an optical spectrum analyzer (OSA), which conducts, at any given point in the network, live measurements of power per channel (i.e., measuring the power at each wavelength) and even optical signal-to-noise ratio (OSNR), which, depending on the headend configuration, may be a required test. Spectral analysis can also be performed with channel checkers, which only measure power per channel.

At the node, many wavelengths can be dropped. A simple power meter will provide the total power measurement but will not be able to discriminate whether one, two or four different wavelengths are present, or which wavelengths they are. Here again, the OSA or channel checker acts like a tunable, filtered power meter, looking at power per wavelength. This is a critical tool for Remote PHY because, at the node level, multiple wavelengths are now involved.
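A quick calculation shows why the broadband power meter falls short: it reports only the sum of all channel powers in linear units, so several moderate channels and one strong channel can produce the same reading. The channel powers in this sketch are illustrative:

```python
import math

# Why a broadband power meter can't count wavelengths: it reports only the
# total power, summed in linear units. Channel powers are illustrative.

def dbm_to_mw(p_dbm):
    """Convert a power level in dBm to milliwatts."""
    return 10 ** (p_dbm / 10)

def mw_to_dbm(p_mw):
    """Convert a power in milliwatts to dBm."""
    return 10 * math.log10(p_mw)

# Four channels arriving at a node at -3 dBm each:
channels_dbm = [-3.0, -3.0, -3.0, -3.0]
total_dbm = mw_to_dbm(sum(dbm_to_mw(p) for p in channels_dbm))
# ~3.0 dBm total: a single channel at that level would read the same.
print(round(total_dbm, 2))
```

Only a per-wavelength measurement (OSA or channel checker) can tell those two situations apart, which is exactly the failure mode that matters when a node is supposed to receive specific wavelengths.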

Summing it up

MSOs face huge bandwidth requirements; bringing fiber deeper into the network will enable the necessary increase in bandwidth availability. With this shift, however, comes new optical challenges and testing requirements. To get these right, new sets of tools are required, but they alone cannot do the job. Technicians need to be properly trained on the what, why and how of using those tools. To get uniformity and consistency throughout the plant and the network, methods of procedure (MOPs) need to be created and respected. Best practices include thorough testing and documenting to avoid unpleasant surprises down the road.

Francis Audet is manager, installation and maintenance, at EXFO. He joined EXFO in 2000 as a product line manager (PLM) for the dispersion analyzer product line. Becoming a senior PLM a few years later, he added the optical spectrum analyzer product line to his mandate. After being a member of the CTO office for 3 years (2012-2014), where his contribution was instrumental in analyzing market dynamics and implementing corresponding technical strategies for the wireline product portfolio, both for fiber and copper plants, he became system group manager, leading a team of PLMs covering the installation and maintenance product line.