IP Networking - General Overview

Network performance is defined by the following criteria:

  • Bandwidth
  • Latency
  • Packet Delay Variation
  • Error rate


Bandwidth

Bandwidth describes the amount of data that can be transported over the network per unit of time. The usual units are Mbit/s or Gbit/s.

When a bandwidth is specified, it is not always clear whether the figure refers to the net bitrate or the gross bitrate; for Ethernet, the data rates given are net bitrates. For example, 100BaseT Ethernet provides 100Mbit/s of usable data rate, while 125Mbit/s are transmitted on the physical connection, including the line coding necessary to transport the data reliably.
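The relation between net and gross bitrate is a fixed coding-overhead factor. A minimal sketch of that calculation (the function name is ours; the 4B/5B ratio follows from the 100/125 figures above):

```python
def gross_bitrate(net_mbits, data_bits, coded_bits):
    """Physical-layer bitrate for a given line coding, e.g. 4B/5B:
    5 bits are sent on the wire for every 4 bits of actual data."""
    return net_mbits * coded_bits / data_bits

# 100BaseT: 100 Mbit/s net with 4B/5B coding -> 125 Mbit/s on the wire
print(gross_bitrate(100, 4, 5))  # 125.0
```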

The bandwidth for a given technology is fixed. There is no way of increasing the bandwidth of e.g. 100Mbit/s Ethernet, except by changing to another technology such as 1000Mbit/s Ethernet (you can combine multiple links of a given technology to increase the available bandwidth, but that comes with other drawbacks).

For networks dedicated to media transport, the required bandwidth is determined by the bandwidth of each media stream and the number of media streams that need to be transported over a single link. E.g. 1 HD-SDI signal has a bandwidth of 1.485GBit/s; encapsulated as SMPTE ST2022-6 the bandwidth increases to 1.57GBit/s, so if you need to transport 3 HD video signals, the link needs to support at least 4.71GBit/s. Leaving aside 5GBit/s Ethernet, the choice in this case would be a 10GBit/s Ethernet connection.
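The link-sizing reasoning above can be sketched in a few lines; the list of available Ethernet rates is illustrative and, like the text, leaves 5GBit/s Ethernet aside:

```python
# Common Ethernet rates in Gbit/s (2.5/5 GBit/s left aside, as in the text).
ETHERNET_RATES_GBITS = [1, 10, 25, 40, 100]

def required_link(stream_gbits, num_streams):
    """Total bandwidth needed and the smallest Ethernet rate that fits it."""
    needed = stream_gbits * num_streams
    for rate in ETHERNET_RATES_GBITS:
        if rate >= needed:
            return needed, rate
    raise ValueError("no single link is fast enough")

# 3 x HD-SDI encapsulated as SMPTE ST2022-6 (1.57 Gbit/s each):
needed, link = required_link(1.57, 3)
print(f"{needed:.2f} Gbit/s needed -> use {link} Gbit/s Ethernet")
```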



Latency

Latency describes the time that information needs to travel from source to destination.

The most basic limit is the speed of light (roughly 300’000’000 meters per second in vacuum); no information can travel faster. Depending on the medium used to transport the information, the travel time is longer. E.g. in optical fibers the index of refraction is about 1.5, meaning that light travels at roughly two-thirds of its vacuum speed. That results in approximately 5µs of latency for every kilometer of fiber.
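The 5µs-per-kilometer figure follows directly from the speed of light and the refractive index; a quick check, with the constants rounded as in the text:

```python
C_VACUUM_KM_PER_S = 300_000     # speed of light in vacuum, km/s (rounded)
FIBER_REFRACTIVE_INDEX = 1.5    # typical for optical fiber

def fiber_latency_us(distance_km):
    """One-way propagation delay over a fiber of the given length, in microseconds."""
    speed_in_fiber = C_VACUUM_KM_PER_S / FIBER_REFRACTIVE_INDEX  # ~200'000 km/s
    return distance_km / speed_in_fiber * 1e6

print(fiber_latency_us(1))    # 5.0 us per kilometer
print(fiber_latency_us(100))  # 500 us for a 100 km link
```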

Of course, latency is added for other elements in the data’s path as well: packetizing of data, encoding, queueing in active network elements, etc.

For networks dedicated to media transport you want to minimize latency, as latency ultimately translates into delay between the actual event and its representation on screens or speakers. Since the distance between source and destination is usually fixed, only the processing on the path can be influenced, e.g. by limiting the number of active network elements in the data path and limiting the processing in source and destination.


Packet Delay Variation

Packet delay variation (“PDV”) is a measure of the variation in the one-way, end-to-end delay of packets. It is sometimes also referred to as “network jitter”.

In an ideal network all packets would take the same time to travel from the source to the destination, but in real networks various factors cause this time to vary. When the packets are handled by an active network element such as a switch, the processing of the packets depends on the processing load of that switch. The load is mainly related to the number of concurrent packets to process and the complexity of the processing. Due to the “bursty” nature of data transport in Ethernet networks, the load varies quickly and thus influences the packet delay variation.

Packet delay variation can only be counteracted by adding buffers at the receiving end. The incoming data is first written to a buffer. Once the buffer holds enough data to compensate for the largest packet delay variation expected on the network (plus a small safety margin), the receiving device can start reading data from the buffer at a constant rate.
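The buffer sizing described above can be sketched as follows (the packet interval and PDV figures are illustrative):

```python
import math

def jitter_buffer_depth(max_pdv_ms, safety_ms, packet_interval_ms):
    """Number of packets to buffer before playout can start at a constant rate."""
    return math.ceil((max_pdv_ms + safety_ms) / packet_interval_ms)

# E.g. one media packet per millisecond, up to 4 ms of PDV plus 1 ms of safety:
depth = jitter_buffer_depth(max_pdv_ms=4.0, safety_ms=1.0, packet_interval_ms=1.0)
print(depth)  # 5 packets buffered -> 5 ms of added latency
```

Note how the buffer depth translates one-to-one into added latency, which is why minimizing PDV matters for real-time media.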

For networks dedicated to media transport you want to minimize packet delay variation, because the buffers needed to compensate for it add to the unwanted latency in the signal path. This is usually done by limiting the number of active network elements in the data path and by using techniques like Quality of Service (QoS) to prioritize the processing of data packets carrying real-time media over other traffic.


Error rate

The error rate describes the amount of data that has been altered on the path from source to destination. In data networks this is usually related to corrupted packets (bit errors) or lost packets.

Bit errors are usually compensated for by adding redundancy to the data being transported, often in the form of error-correction data carried on the physical transport layer in addition to the actual data. Only errors that cannot be corrected by these measures will be noticeable to the user of the network and need to be handled by higher protocol layers.

Since bit errors happen randomly, the error rate describes a probability in the form of percentage of packets likely to be affected by errors.
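Given a bit error rate, the probability that a whole packet is affected follows from assuming independent, random bit errors; a minimal sketch (the BER and packet size below are illustrative):

```python
def packet_error_probability(bit_error_rate, packet_bits):
    """Probability that at least one bit in a packet is corrupted,
    assuming independent, randomly distributed bit errors."""
    return 1 - (1 - bit_error_rate) ** packet_bits

# A 1500-byte packet on a link with a bit error rate of 1e-10:
p = packet_error_probability(1e-10, 1500 * 8)
print(f"{p:.2e}")  # ~1.2e-06, i.e. roughly one packet in 800'000 is corrupted
```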

Lost packets are usually caused by overloading one or multiple network elements in the data path. E.g. if two sources each try to send 1GBit/s to a destination connected to the network with a 1GBit/s link, only half of the packets can actually be forwarded; the rest need to be dropped.
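The overload example above as a small calculation (the function name is ours):

```python
def drop_fraction(offered_gbits, link_gbits):
    """Fraction of traffic an overloaded link cannot forward."""
    if offered_gbits <= link_gbits:
        return 0.0
    return (offered_gbits - link_gbits) / offered_gbits

# Two sources sending 1 Gbit/s each towards one 1 Gbit/s destination link:
print(drop_fraction(2 * 1.0, 1.0))  # 0.5 -> half of the packets are dropped
```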

For networks dedicated to real-time media transport you want low error rates. Bit errors can be compensated for by higher protocol layers, e.g. by adding redundancy to the transported data, which allows reconstructing the original data even if some packets are corrupted or lost in transit.

Lost packets caused by overloaded paths in the network can only be handled by careful network design and management, including but not limited to correct choice of bandwidth and prioritization of traffic using Quality-of-Service.

Another method of dealing with lost data for video streams on the application layer is concealment: replacing missing data with data from previous video frames or from another part of the same frame. However, this is only a last resort, and it is preferable to ensure that all packets arrive at the destination.

Lost synchronization information can be compensated by the “flywheel” design (keeping the last known speed until new sync information is received).


Additional factors

Another factor to be observed is packet reordering: When data is transported over different network paths, it can happen that a packet that was sent first arrives at the destination after packets sent later. To process the data correctly, the packets need to be sorted back into their original order before processing. Allowing for packet reordering also requires a larger buffer, adding more latency to the processing.

In more complex networks it can also happen that a packet is duplicated and arrives twice at the destination. The receiving device and the protocols used need to provide measures to identify and discard duplicate packets.
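Both problems, reordering and duplication, are commonly solved with per-packet sequence numbers, as e.g. RTP provides. A minimal sketch of a receive-side buffer that restores order and drops duplicates (class and method names are ours):

```python
import heapq

class ReorderBuffer:
    def __init__(self):
        self._heap = []      # min-heap ordered by sequence number
        self._seen = set()   # sequence numbers already accepted
        self._next = 0       # next sequence number expected by the reader

    def push(self, seq, payload):
        if seq in self._seen or seq < self._next:
            return           # duplicate or too late: discard
        self._seen.add(seq)
        heapq.heappush(self._heap, (seq, payload))

    def pop_ready(self):
        """Return payloads that are next in sequence, in order."""
        out = []
        while self._heap and self._heap[0][0] == self._next:
            seq, payload = heapq.heappop(self._heap)
            self._seen.discard(seq)
            out.append(payload)
            self._next += 1
        return out

buf = ReorderBuffer()
for seq, data in [(1, "b"), (0, "a"), (1, "b"), (2, "c")]:  # reordered + duplicate
    buf.push(seq, data)
print(buf.pop_ready())  # ['a', 'b', 'c']
```

Note that the buffer only releases data once the gap in the sequence is filled, which is exactly why allowing for reordering costs extra latency.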


Physical Layer

The physical connections for Ethernet can either be copper cables or fiber (optical) cables.



Copper cables

Copper connections are established using twisted-pair cables, which are available in different categories. Gigabit Ethernet requires at least category 5e, while category 6 cables (or higher) are recommended for longer distances.

The connectors that terminate the copper cables are called “RJ45”.

The maximum distance for Ethernet with copper cables is defined as 100m (328ft) in the Ethernet standard. However, the distance achievable depends on the quality of the cable, the quality of the connectors and the number of connectors. It is therefore recommended to plan for a cable length of no more than 70-80m.

If you need to cover more distance, active network elements can be used to refresh the signal (e.g. a switch), but the use of optical cables is highly recommended.

While it is possible to transport 10 Gigabit Ethernet over copper cables, it is common to use fiber cables instead.



Fiber cables

Fiber cables come in two varieties, multimode and single-mode (also called mono-mode), and are used with a wide range of connectors. Multimode cables are suitable for distances up to 550m at 1GBit/s and up to 300m at 10GBit/s. Single-mode fibers are suitable for distances up to 80km at 10GBit/s, or even more.

Please be aware that any fiber connection longer than 10km will require individual tuning of the fibers and lasers to match the exact transmission power levels.


SFP Connectors

Many devices today offer a choice of connector by providing an “intermediate” connector called the “small form-factor pluggable” (SFP), or its enhanced version for higher data rates, SFP+.

The device side of the SFP connector is standardized, while the cable side can provide copper connections (on RJ45) or different types of fiber connections (with LC connectors).

When choosing an SFP, make sure to pick the correct type:

  • Data rate: 100Mb/s, 1Gb/s, 10Gb/s
  • Connector: Copper with RJ45 connector, Fiber with LC connector
  • Fiber type and distance: Multimode (up to 550m for 1GBit/s, up to 300m for 10GBit/s) vs. Single-mode (1km, 10km, 40km, 80km)

For short distances (up to ~30m), SFP modules with directly attached fiber cables exist (AOC – Active Optical Cable).

While there are copper cables with SFP connectors (direct attach copper), we strongly recommend not using them, as they have proven to cause interoperability issues. Use AOCs instead.

Some devices (especially switches) might require approved SFPs and will reject non-approved ones.

