Frequently Asked Questions

If SRT is open source, does that mean it is royalty-free?

SRT was released as open source software in April 2017, under version 2.0 of the Mozilla Public License. It is royalty-free and available to everyone on GitHub.

Can SRT carry modulated signals like DVB-T and T2, or complex streams like BTS and MPTS over the internet?

SRT is completely content agnostic. It can carry any kind of stream.

What are the bandwidth limitations when using SRT?

SRT has no specific bandwidth limitation, although a bandwidth limit can be set if required. All other limits come from the network parameters and the characteristics of the devices that handle the stream. Keep in mind that the bandwidth overhead setting exists so that lost packets can be re-sent in parallel with new content, and you have to take that into account on a rigidly rate-constrained transmission channel. A constant-rate DVB-T/T2 payload should actually be easier to handle than a random VBR stream, but don’t send a 17 Mbps payload over an 18 Mbps link with a 25% SRT overhead.
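
A minimal Python sketch of that check, using only the figures quoted in this answer (17 Mbps payload, 18 Mbps link, 25% overhead); it is an illustration, not part of SRT.

# Rough feasibility check (illustration only): does a payload fit on a
# rate-constrained link once the SRT bandwidth overhead is included?
def required_link_mbps(payload_mbps: float, overhead_pct: float) -> float:
    return payload_mbps * (1 + overhead_pct / 100.0)

# A 17 Mbps payload with 25% SRT overhead needs ~21.25 Mbps,
# so an 18 Mbps link is not enough.
print(required_link_mbps(17, 25))   # 21.25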

Would SRT be suitable for VSAT SCPC C-Band, SCPC Ku-Band or iDirect Ku-Band transmission? Would latency be adversely affected?

The loss of a packet and its recovery can be described as follows:

  1. A packet doesn’t arrive at the receiver.
  2. The next packet arrives, triggering the receiver to send back a lost packet report.
  3. The sender receives the report and schedules the packet for retransmission*.
  4. The retransmitted packet reaches the receiver.

The typical time it takes for a packet to traverse the network (the single transmission time, or “STT”) at each step will vary for different types of networks. Normally you need just one STT to receive a packet, but when a packet is lost you need three STTs plus the time interval between two consecutive packets. This aggregate time (multiplied by 2 to be safe) must still be less than the configured latency.

The RTT for VSAT links is over 540† ms, and can be much higher. Using a conservative value of 800 ms, we can calculate the single transmission time (STT) as RTT/2 = 400 ms. So for a lost packet you have: STT (originally sent packet) + packet delta (time to realize the packet is lost) + STT (send loss report) + packet delta (sender decides to retransmit) + STT (send the lost packet again) = 3 × STT plus some packet interval time (100 ms max). Multiplying this by 2 gives a minimum safe latency of 2600 ms to allow for reliable packet recovery (provided that no packet gets lost twice, and no subsequent loss happens). In practical terms, it would be better to use an even higher latency value, such as 4000 ms.
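
A minimal Python sketch of that calculation. The RTT, packet interval, and safety factor are the assumed values from this answer, not SRT defaults.

# Estimating a safe SRT latency for a high-RTT link, following the
# reasoning above. Values are illustrative assumptions.
def min_safe_latency_ms(rtt_ms: float, packet_interval_ms: float = 100.0,
                        safety_factor: float = 2.0) -> float:
    stt = rtt_ms / 2.0                          # single transmission time
    recovery_cycle = 3 * stt + packet_interval_ms
    return safety_factor * recovery_cycle       # doubled to be safe

print(min_safe_latency_ms(800))   # 2600.0 ms -> in practice use ~4000 ms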

Note

In some cases, the first retransmission may also fail. SRT compensates for this using the NAKREPORT feature (when the packet is not received in the expected timeframe, the receiver sends the loss report again). For a connection to take advantage of this feature, the configured latency should allow extra time for a second retransmission.

* When the sender receives the loss report, the lost packet is scheduled for retransmission (which has a higher priority than regular transmission) at the next possible opportunity. But this is not instantaneous. There may be a "sleep" delay as required by the bandwidth limit. Earlier regular data might already be underway in the network, or in the system network buffer. So the retransmitted packet is only sent as quickly as the application can react upon receiving the loss report.

† See: https://www.satellitetoday.com/telecom/2009/09/01/minimizing-latency-in-satellite-networks/ for more details.

What about using SRT for MCPC signals?

With multiple channels per carrier (MCPC) signals, as described above for VSAT, you need to allow sufficient latency/buffer to meet the 4 × RTT requirement. Here is some additional guidance:

  1. Use traffic shaping and idle cells to provide a CBR stream.
  2. Ensure the stream MTU is below the link MTU. You may need to do a ping test with the DF flag set.
  3. Leave yourself some bandwidth headroom above your SRT bandwidth, and remember that you’re dealing with video bandwidth + audio bandwidth + TS muxing overhead + Ethernet/IP packet overhead + SRT retransmission buffering (a worked sketch follows this list). If the satellite signal degrades and you experience persistent packet loss, SRT retransmissions can spike and the stream never catches up.
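
A minimal Python sketch of the budgeting in item 3. The TS mux and Ethernet/IP overhead percentages are placeholder assumptions for illustration; measure your own stream rather than relying on these figures.

# Illustrative link budget for item 3 above. The mux and IP overhead
# percentages are assumptions, not SRT or DVB specifications.
def srt_link_budget_mbps(video_mbps: float, audio_mbps: float,
                         ts_mux_overhead_pct: float = 4.0,   # assumed
                         ip_overhead_pct: float = 3.0,       # assumed
                         srt_overhead_pct: float = 25.0) -> float:
    ts_rate = (video_mbps + audio_mbps) * (1 + ts_mux_overhead_pct / 100)
    wire_rate = ts_rate * (1 + ip_overhead_pct / 100)
    return wire_rate * (1 + srt_overhead_pct / 100)

print(round(srt_link_budget_mbps(8.0, 0.25), 2))   # ~11.05 Mbps of link capacity needed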

If there’s an SRT connection between two devices and the network goes down, does the SRT protocol automatically reconnect when the network comes up?

Once the stream has started, the protocol maintains the connection by exchanging regular acknowledgement and negative-acknowledgement packets between the peers. In addition, keepalive packets are exchanged as necessary to maintain the connection. For network interruptions of less than 1 second, the SRT connection may be re-established automatically. For longer interruptions, it is up to the application (not SRT) to re-establish the connection. All Haivision products will try to reconnect immediately, and retry indefinitely, until a user deletes the connection from their configuration.
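
If your own application drives the connection, a retry loop along the following lines is one way to mimic that behaviour. This is a hypothetical sketch: try_connect() stands in for whatever SRT connect call your application uses and is not an SRT API.

import time

# Hypothetical caller-side retry loop; try_connect() is a placeholder
# supplied by the application, not part of SRT.
def keep_connected(try_connect, retry_interval_s: float = 1.0) -> None:
    while True:
        try:
            connection = try_connect()        # open the SRT caller connection
            connection.wait_until_broken()    # block until the link drops
        except ConnectionError:
            pass                              # connection attempt failed; retry
        time.sleep(retry_interval_s)          # retry indefinitely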

Does SRT encrypt a stream at the packet level?

No, SRT encryption occurs at the payload level. Only the data being transported is encrypted; all control information is sent in the clear (a security best practice that avoids the repeated patterns which would weaken the encryption). Note that encryption does not have an impact on the SRT stream itself, but it does place a burden on the CPUs of the sender and receiver, proportional to the number of streams and their bitrates.

Can a connection be established between devices with different versions of SRT?

All versions of SRT are backwards compatible. When establishing a connection, a newer version of SRT will “fall back” to the capabilities of the older version.

Can Pro-MPEG error recovery be used with SRT?

Pro-MPEG FEC is not used by SRT. The basic error recovery mechanism in SRT is retransmission requests, in the form of NAK (Negative Acknowledgment) packets. Whenever missing packets are detected, the receiver sends NAKs with information about those packets back to the sender. The sender responds by re-sending the missing packets.

As of version 1.4, SRT has a general-purpose mechanism for injecting extra processing instructions at the beginning and/or end of a transmission. This mechanism, based on packet filtering, was originally created as a means to add Forward Error Correction (FEC) to SRT, but it can be extended for other uses. The built-in FEC filter implements standard XOR-based FEC protection. It is also possible to set an ARQ level that decides how FEC should cooperate with retransmission.

For more information, see:
https://github.com/Haivision/srt/blob/master/docs/features/packet-filtering-and-fec.md
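
As an illustration of the configuration string format described in that document, the built-in FEC filter is enabled through the packetfilter setting. The parameter names used below (cols, rows, arq) and the URI query form should be checked against the SRT version you are running; the address and port are example values.

# Illustration only: packetfilter configuration string for the built-in
# XOR-based FEC filter, with retransmission (ARQ) applied on request.
fec_filter = "fec,cols:10,rows:5,arq:onreq"
srt_uri = "srt://203.0.113.10:9000?packetfilter=" + fec_filter   # example address/port
print(srt_uri)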

Is SRT FIPS 140-2 certified? Has it been evaluated against the FIPS 140-2 standard?

SRT uses only FIPS 140-2 approved algorithms, but it can be linked with any version of OpenSSL or other crypto libraries. If a FIPS mode does not exist in the linked library, SRT cannot enable it. Note that FIPS 140-2 does not guarantee tamper resistance across site-to-site links. For example, even when using a FIPS 140-2 validated crypto library, SRT does not provide perfect forward secrecy.

What would be an example of a firewall policy that would cause Rendezvous mode to fail, assuming outbound ports are opened and inbound traffic from established connections is allowed? Are there any edge-case policies that would explicitly prohibit traffic, or architectures that might confuse the connection process?

Dynamic Port Address Translation (PAT, also known as port mangling) will foil Rendezvous mode, as will firewalls that do not have UDP “fixup” or session tracking. Remember that UDP itself is a stateless protocol. All Rendezvous mode does is force both the sender and receiver to connect to each other simultaneously on the same source and destination port. The devices need to connect to each other’s public IP addresses/ports, and the firewalls have to allow that traffic through to the internal IP address/port of each device. If the firewalls are very strict or randomize the addresses and ports, Rendezvous mode will not work.

Rate limiting can also cause issues on enterprise networks.

What can cause an SRT connection to terminate?

An SRT connection only gets broken if it times out (no packet has been received from the peer for some period of time, about 5 seconds). As of SRT version 1.3.1, a connection timeout may also happen when the receiver application does not empty the receiver buffer fast enough; the connection is dropped to prevent it from going into an unrecoverable state.

The continuous corruption of an input source will cause the encoder process to restart, consequently resetting the SRT connection.

Often when a connection is broken, it is simply because the socket has been closed. If the application manages errors correctly, the broken connection is reported with an appropriate error number and a clear message.

What is the underlying SRT protocol structure?

SRT is based on the User Datagram Protocol (UDP), but has its own mechanisms to ensure real-time delivery of media streams over noisy networks such as the Internet.

What ports are required to be open on a firewall?

The source and destination device UDP ports are configurable. Each stream only requires a single port. The port is user-defined and can be between 1025 and 65,535. The auto-generated routes on Haivision Media Platform are created in the 31000–31100 range. Specific port requirements for firewalls may depend on the security policies of your organization. Consult with your network system administrator.

What is the bandwidth overhead requirement for a connection traversing the Internet?

BW Overhead specifies the maximum stream bandwidth overhead that can be used for recovery of lost packets. The default BW Overhead is 25%. Depending on your settings, SRT will either retransmit lost packets quickly (thereby using more bandwidth) or over a longer period (requiring less bandwidth but resulting in higher latency). 
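
A minimal Python sketch of how the overhead eats into a fixed link: given a known link capacity and the configured overhead, it returns the highest stream bitrate that still leaves room for retransmissions. The 20 Mbps link capacity is an assumed example value.

# Highest stream bitrate that leaves the configured overhead free for
# packet recovery on a link of known capacity (illustrative only).
def max_stream_mbps(link_mbps: float, overhead_pct: float = 25.0) -> float:
    return link_mbps / (1 + overhead_pct / 100.0)

print(max_stream_mbps(20.0))   # 16.0 -> keep the stream at or below ~16 Mbps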

For more information, see Bandwidth Overhead.

How does encryption affect the bandwidth?

Encryption does not affect the bandwidth. However, applying encryption is a processor-intensive task, and may have an impact on the number and bit rate of the streams an encoder is able to output.

Is SRT compatible with wireless networks?

SRT can be used over wireless networks, WiMANs (Wireless Metro Area Networks), LANs, private WANs, or the public Internet.

Does the number of “Lost Packets” increment even when an SRT retransmit is successful? Would “Lost Packets” result in visual artifacts or quality issues?

“Lost Packets” and the related “Skipped Packets” are statistics reported by an SRT decoder:

  • Lost Packets: A hole in the packet sequence has been detected. A request to re-transmit the lost packet has been sent to the source device. This lost packet may (or may not) be recovered by the re-transmit request.
  • Skipped Packets: The time to play the packet has arrived and the lost packet was not recovered, so the decoder will continue playing without it. A video or audio artifact may result from a skipped packet.

How can I determine the SRT bandwidth requirements from a decoder back to an encoder?

SRT back-channel control packets from the decoder to encoder take up a minimum of ~40 Kbps of bandwidth when the channel conditions are perfect. If there are lost packets on the link, then the SRT receiver will generate more signal traffic, proportional to the lost packet rate. A single lost packet will consume about 400 bps of the available bandwidth on the receiver side.

From the decoder back to the encoder, bandwidth usage will increase linearly with the packet loss rate.
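
A rough Python sketch of that estimate, reading the ~400 bps figure as roughly 400 bits of extra signalling per lost packet. This is an approximation for planning purposes, not an SRT specification.

# Approximate decoder-to-encoder control bandwidth (illustrative model).
def return_path_kbps(lost_packets_per_second: float) -> float:
    baseline_kbps = 40.0    # control traffic on a clean link (approximate)
    per_loss_kbps = 0.4     # ~400 bits of extra signalling per lost packet
    return baseline_kbps + per_loss_kbps * lost_packets_per_second

print(return_path_kbps(100))   # ~80 Kbps at 100 lost packets per second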

How can I determine the bandwidth available between two endpoints before starting an SRT stream?

If you have established an SRT stream, you can view bandwidth information on the Statistics pages of the source and destination devices.

If no SRT stream is currently running, you can use the iperf bandwidth measurement utility to get bandwidth and jitter information. iperf is available from the command line on both the Makito X Encoder and Decoder. You need to specify the port number, and you must stop the encoder stream and the decoder before using iperf (to release the ports).

Note

iperf should never be run on a system carrying production video.

On the Makito X Decoder (MXD), enter the following commands:

viddec <decoder# | all> stop
iperf -s -u -i 1 -p "port#"

where “decoder#” is the number of the decoder instance (you must stop all instances), and “port#” is the same as the port opened on the firewall. The “-u” parameter specifies the use of UDP packets (using TCP would cause the measured available bandwidth value to be lower).

On the Makito X Encoder (MXE), enter the following commands:

stream <stream# | all> stop
iperf -c "IP ADDRESS OF MXD" -u -b "BW" -i 1 -p "port#"

where “stream#” is the number of the stream (you must stop all streams), “BW” is the appropriate bandwidth in Mbps, and “IP ADDRESS OF MXD” is the IP address of the decoder (use the public-facing IP if traversing firewalls).

Example:

iperf -c 198.51.100.20 -u -b 5.5m -i 1 -p 20000

Result:

0.0-10.1 sec 6.50 MBytes 5.41 Mbits/sec 0.247 ms 38/ 4678 (0.81%)

where 5.41 Mbits/second is the real bandwidth, 0.247 ms is the jitter, and 0.81% is the percentage of lost packets.

Note that when using iperf in a UDP client/server configuration, the “server” listens for connections from clients, but it is the “client” that generates the traffic.

If you set a Haivision Media Gateway to SRT Listener mode, multiple decoders can simultaneously “call” the same port. However, if you set a Makito X Encoder to Listener mode, you can only connect with a single decoder. If the MXE can handle up to 300 Mbps of streaming, why can’t multiple simultaneous Callers connect to the same MXE port?

The 300 Mbps limit on the MXE applies to TS/UDP streams. TS/SRT streams are much more resource intensive (30–90 Mbps depending on the system configuration), so there is a much lower limit on the number of SRT streams that can be created on the MXE compared to the Media Gateway.

Can I stream from one encoder to multiple decoders?

No. SRT is for point-to-point connections. It does not support multicast. If you need to have an SRT stream delivered to multiple decoders, you can use the Haivision Media Gateway.

I’m seeing a “sawtooth” pattern on the Statistics page for my encoder. What does that mean?

It means the decoder is not acknowledging that it has received the packets sent from the encoder. The encoder keeps the packets it has sent in its buffer until it receives a response (this takes at least the equivalent of the round trip time). If the encoder receives acknowledgments promptly from the decoder, the buffer remains relatively empty. Otherwise, the buffer gradually fills until it reaches a point where it must drop the unacknowledged packets, creating the characteristic sawtooth pattern.

This typically occurs when you don't have enough bandwidth to transmit. The buffer value will increase up to the point where it can no longer keep up, and then will drop in a classic sawtooth pattern. In such cases, try increasing the SRT overhead, or lowering your video bit rate.

What does it mean when I see the blue line drop below the white line on the Statistics page for my decoder?

The decoder has a buffer (blue line) that it uses to hold on to what it has received to allow time for retransmission of missing packets. Normally the contents of this buffer (measured in milliseconds) should be between the latency (orange line) configured for the SRT stream, and the round trip time value (white line). If it falls below the RTT, this means the decoder has no packets to play, and not enough time to ask for more. So the video output will have some sort of artifact, such as a replayed frame or a blocking artifact. It could be that the encoder hasn't given any packets to the decoder, indicating a problem at the source or in the intervening network. But it might just be because there is nothing to display.

When should I start to worry about a rising number of dropped packets?

If the rate is going up constantly, it means that you haven’t configured enough bandwidth overhead or latency. Check the encoder statistics to see whether the Send Buffer is climbing too far above the configured latency value.

How about skipped packets on the decoder?

Sometimes a packet, let’s call it Packet B, arrives at the decoder ahead of time and sits in the queue, ready to play. But if the packet that should be played immediately before it (Packet A) arrives late (or never), the decoder skips Packet A and plays Packet B from the queue. In other words, the time to play for the skipped packet has passed, and it is either not at the decoder or arrives after the content that follows it has already played. Usually this means some type of video artifact also occurs (a replayed frame or video blocking artifacts).

You could think of a skipped packet as a packet that the decoder drops, except that it doesn’t tell the encoder. The decoder sends a positive ACK, even though the packet is “lost” from the point of view of the decoder. It might drop an entire frame because it is corrupted, but the encoder doesn't know it. This is because when things are going badly, the last thing you want to do is to increase the overhead traffic. A packet has a certain time-to-live, and if it doesn't play before that time then it is skipped.

You might see the number of skipped packets increasing on the decoder, without a corresponding effect on the number of dropped packets seen on the encoder. If the number of skipped packets on the decoder increases slowly, you should increase the SRT Latency. If the number of skipped packets on the decoder increments in large jumps, the best thing to do is lower your video bitrate, or increase your Bandwidth Overhead % if you have available capacity.
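
The decision can be pictured with a small conceptual sketch. This is not SRT code, just an illustration of the rule described above: a packet whose time to play has passed is delivered if it arrived in time, and skipped otherwise.

# Conceptual sketch only: which buffered packets are played or skipped once
# their time to play has passed. "buffer" maps sequence number ->
# (arrived, origin timestamp in ms).
def drain_due_packets(buffer: dict, now_ms: float, latency_ms: float):
    delivered, skipped = [], []
    for seq in sorted(buffer):
        arrived, origin_ts_ms = buffer[seq]
        if now_ms < origin_ts_ms + latency_ms:
            break                              # not yet due to play
        (delivered if arrived else skipped).append(seq)
        del buffer[seq]                        # either played or given up on
    return delivered, skipped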

Do I control the latency value at the source or destination?

You can control the latency, the bit rate, and the overhead percentage on the source device, but you can configure the latency on the destination device as well. The two devices will settle on the maximum of the two latency values (for example, if the source is set to 120 ms and the destination to 250 ms, both sides use 250 ms).

How do I decide whether to boost the latency or lower the bit rate?

You have to decide what is most important for you: quality or latency. If low latency is more important to you, then you may see that you are not using your bandwidth as effectively, and image quality may not be optimal. If you want more quality, you need to use the maximum available bandwidth, with minimal overhead, and therefore more latency. An SRT transmission needs either bandwidth or time. So if you are using your maximum bandwidth at a higher bitrate because you want higher quality, you’ll have to allow more time to recover from faults.

What happens if I set my latency value too low?

If the delay/latency setting is too low, you will see blue lines (Send Buffer) climbing up past the orange line (Latency) in the graph on the encoder. This will be reflected at the destination as visible artifacts, corresponding to skipped packets on the decoder.

If I see my Send Buffer repeatedly spiking by one or two seconds, by how much should I increase my latency?

In early SRT versions, there was little tolerance: the Send Buffer values had to remain below the Latency to obtain good results. In more recent SRT versions the Send Buffer can occasionally spike past the Latency by a second or so without packets being dropped. However, this can cause a problem at the destination, because while the packets are shown as “delivered” on the encoder, they may never actually arrive at the decoder.

What are “keep alive” packets?

To maintain an SRT connection, control packets must be sent at an interval of 10 ms (max). When a destination device receives a media packet, it acknowledges the reception by returning an “ACK” control packet. If the interval between “ACK” packets is greater than 10 ms, “keep alive” packets are sent to keep the connection open.

Unlike with TCP, if an SRT connection is broken, the source device may remain unaware for up to several seconds that the destination is not receiving packets, until the connection is re-established.
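
A conceptual sketch of that timing rule (the 10 ms figure is the one quoted above): if no control packet has gone out within the maximum interval, a keep-alive is sent instead.

# Conceptual illustration of the control-packet timing described above.
def keepalive_due(ms_since_last_control_packet: float,
                  max_interval_ms: float = 10.0) -> bool:
    return ms_since_last_control_packet >= max_interval_ms

print(keepalive_due(12.0))   # True -> send a "keep alive" packet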

Why do I have to specify an Outbound NAT Source Port on my source device’s firewall?

Since UDP is not session based, some firewalls will randomize the source UDP port. Let’s say your source is a Makito X Encoder behind Firewall A. If you send from a random port on the Makito X to port 20000 on the destination, Firewall A may not preserve that source port, because it assumes UDP traffic is not bidirectional. Firewall behaviour differs from one vendor to the next, so this is tricky. Return traffic has to come back to the same address and port it was sent from: if you sent from X.X.53.36 port 20000, the return traffic has to go back to X.X.53.36 port 20000. This is why the Outbound NAT Source Port setting exists; some firewalls will randomize the source port on the outbound interface.

In some cases, Rendezvous mode may be required. Rendezvous mode sets the port to be the same on both sides; firewalls will almost always allow traffic returned on the same port. Cisco has a “UDP Fixup” setting that tells firewalls to treat UDP packets as if they were session based.

It is important to consider both network (NAT) and port (PAT) address translation. It is possible, for example, for public port 20000 on a firewall to be mapped to 2500 on an internal address. Port Address Translation (PAT), or port rewriting, can actually be made to work in cases where the mapping is fixed. However, many firewalls randomize the source port through “packet mangling.”
