Introduction to RTSP
RTSP (Real Time Streaming Protocol) is an application-layer protocol proposed by RealNetworks and Netscape for efficiently transferring streaming data over IP networks.
RTSP provides control over streaming media, such as pause and fast forward, but it does not transmit the media data itself; RTSP acts as a remote control for the streaming media server.
The server can choose TCP or UDP to stream the content. RTSP is similar in syntax and operation to HTTP/1.1, but it does not emphasize time synchronization, so it is more tolerant of network latency. It also supports controlling multiple simultaneous streams on demand via multicast, which reduces network usage on the server side when a multicast-capable server is used, and makes video conferencing possible. Because of its similarity to HTTP/1.1, proxy caching also applies to RTSP; and because RTSP supports redirection, it can switch delivery to a different server, avoiding the latency caused by excessive load concentrated on a single server.
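Like HTTP/1.1, RTSP messages are plain text with a request line, CRLF-delimited headers, and a mandatory CSeq sequence header. A minimal sketch of building such a request (the URL and session ID below are hypothetical):

```python
def build_rtsp_request(method, url, cseq, headers=None):
    """Assemble an RTSP/1.0 request: request line plus CRLF-delimited headers,
    in the same shape as an HTTP/1.1 request."""
    lines = [f"{method} {url} RTSP/1.0", f"CSeq: {cseq}"]
    for name, value in (headers or {}).items():
        lines.append(f"{name}: {value}")
    return "\r\n".join(lines) + "\r\n\r\n"

# A PAUSE request, i.e. the "remote control" role described above.
request = build_rtsp_request("PAUSE", "rtsp://example.com/stream", 4,
                             {"Session": "12345678"})
print(request)
```

The control methods (PLAY, PAUSE, TEARDOWN, ...) all follow this same textual shape; only the method name and headers change.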
Introduction to RTMP
RTMP (Real-Time Messaging Protocol) is an application-layer protocol that relies on an underlying reliable transport-layer protocol (usually TCP) to guarantee the reliability of message transmission.
After the transport-layer connection is established, RTMP additionally requires the client and server to perform a handshake to establish an RTMP-level connection. Over this RTMP Connection, which sits on top of the transport-layer link, control information such as SetChunkSize and SetACKWindowSize is exchanged; these are carried as command messages. The CreateStream command then creates a Stream on the connection for transferring the actual audio, video, and control data.
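In the simple handshake, the client's first two packets are C0 (a single version byte, 3) and C1 (exactly 1536 bytes: a 4-byte timestamp, 4 zero bytes, and 1528 bytes of random fill). A minimal sketch of building them:

```python
import os
import struct
import time

def build_c0_c1():
    """Build the client's opening handshake packets for the simple RTMP handshake:
    C0 is one version byte; C1 is 1536 bytes (time + zeros + random fill)."""
    c0 = bytes([3])                                         # RTMP version 3
    timestamp = struct.pack(">I", int(time.time()) & 0xFFFFFFFF)
    zero = b"\x00\x00\x00\x00"
    random_fill = os.urandom(1528)
    c1 = timestamp + zero + random_fill                     # 4 + 4 + 1528 = 1536
    return c0, c1

c0, c1 = build_c0_c1()
```

The server replies with S0/S1/S2 in the same format, and each side echoes the other's 1536-byte packet (C2/S2) to complete the handshake.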
RTMP formats the data it transmits into units called RTMP Messages. To better achieve multiplexing, packetization, and fairness between messages, the sender does not transmit Messages directly: it divides each Message into Chunks tagged with a message ID, where each Chunk may carry a complete Message or only part of one. On the receiving end, the data length, message ID, and message type carried in the chunks are used to reassemble them into complete Messages, thereby enabling the sending and receiving of messages.
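The chunking idea above can be sketched in a few lines. This deliberately ignores RTMP's chunk headers and only illustrates the split/reassemble principle (the default RTMP chunk size is 128 bytes):

```python
def split_into_chunks(payload: bytes, chunk_size: int = 128):
    """Split one Message payload into fixed-size chunks, as the sender does.
    The last chunk may be shorter than chunk_size."""
    return [payload[i:i + chunk_size] for i in range(0, len(payload), chunk_size)]

def reassemble(chunks):
    """Reassemble chunks back into the full Message payload, as the receiver does
    (real RTMP uses the message ID and length from chunk headers to do this)."""
    return b"".join(chunks)

message = bytes(range(256)) * 2          # a fake 512-byte Message payload
chunks = split_into_chunks(message, 128)  # -> four 128-byte chunks
restored = reassemble(chunks)
```

Interleaving chunks of different message IDs on one connection is what gives RTMP its multiplexing and message fairness: a large video Message cannot monopolize the link, because small audio or control chunks can be scheduled between its chunks.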
Introduction to HLS
HLS, short for HTTP Live Streaming, is Apple's HTTP-based streaming media network transport protocol. It works by splitting the entire media stream into a sequence of small media segments downloaded over HTTP, only a few of which are fetched at a time. When starting a streaming session, the client downloads an index file, an extended M3U playlist (m3u8), which it uses to find the available media segments.
In HLS, index files can be nested; in practice there is usually just a primary index and secondary indexes. The supported media segment formats are MPEG-2 transport streams (ts), WebVTT files, and Packed Audio files.
The relationship between the index files (m3u8) and the media segments (ts) is as follows: the primary m3u8 nests secondary m3u8 playlists, and each secondary m3u8 describes its ts segments.
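That two-level structure is visible in the playlist files themselves. Below is a minimal sketch with hypothetical URIs: a primary (master) playlist pointing at two secondary variant playlists, and a secondary media playlist listing ts segments, plus a tiny helper that extracts the referenced URIs:

```python
# Hypothetical primary (master) playlist: each EXT-X-STREAM-INF tag
# describes the variant playlist named on the following line.
MASTER = """#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=1280000,RESOLUTION=640x360
low/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2560000,RESOLUTION=1280x720
high/index.m3u8
"""

# Hypothetical secondary (media) playlist: EXTINF gives each segment's
# duration in seconds; the next line names the ts segment.
MEDIA = """#EXTM3U
#EXT-X-TARGETDURATION:10
#EXTINF:9.9,
seg0.ts
#EXTINF:9.9,
seg1.ts
#EXT-X-ENDLIST
"""

def playlist_uris(text):
    """Return the non-tag lines of an M3U playlist: either nested
    variant playlists (in a master) or media segments (in a media playlist)."""
    return [line for line in text.splitlines() if line and not line.startswith("#")]

variants = playlist_uris(MASTER)   # secondary m3u8 files
segments = playlist_uris(MEDIA)    # ts segments
```

A real client picks one variant from the master playlist based on available bandwidth, then repeatedly re-fetches that media playlist (for live streams) and downloads the listed segments in order.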
Introduction to SRT
SRT (Secure Reliable Transport) is an open-source, low-latency video transport protocol. The SRT Alliance, a collaboration between Haivision and Wowza, manages and supports open-source applications of the protocol; the organization is dedicated to promoting the interoperability of video streaming solutions and advancing collaboration among video industry pioneers to enable low-latency network video transport.
SRT has three main features: security, reliability, and low latency. It supports AES encryption to secure end-to-end video transmission, and it ensures transmission stability through forward error correction (FEC). SRT is built on top of the UDT protocol, which itself runs over UDP, and solves UDT's high-latency problem. By addressing the complex timing problems of transmission, SRT enables real-time delivery of high-throughput files and ultra-high-definition video.
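SRT's actual FEC scheme is packet-based with configurable row/column layouts, but the underlying principle of forward error correction can be illustrated with the simplest case: send an XOR parity packet alongside a group of data packets, so the receiver can reconstruct any one lost packet without a retransmission round trip. A minimal sketch:

```python
def xor_parity(packets):
    """XOR a group of equal-length packets byte-by-byte into one parity packet.
    XORing any n of the n+1 packets (data + parity) yields the remaining one."""
    parity = bytearray(len(packets[0]))
    for pkt in packets:
        for i, b in enumerate(pkt):
            parity[i] ^= b
    return bytes(parity)

packets = [b"pkt0data", b"pkt1data", b"pkt2data"]  # hypothetical 8-byte packets
parity = xor_parity(packets)

# Simulate losing packet 1 in transit; recover it from the survivors + parity.
recovered = xor_parity([packets[0], packets[2], parity])
```

This is why FEC trades a little extra bandwidth (the parity packet) for latency: recovery happens locally at the receiver instead of waiting for a retransmission.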
SRT allows a direct connection between source and destination, in contrast to many existing video transmission systems, which require a centralized server to collect signals from remote locations and redirect them to one or more destinations. Central-server architectures have a single point of failure, which can also become a bottleneck during periods of high traffic. Routing signals through a hub also increases end-to-end transmission time and can double bandwidth costs, because two links are needed: one from the source to the central hub and another from the hub to the destination. By connecting source to destination directly, SRT can reduce latency, eliminate the central bottleneck, and lower network costs.
Introduction to NDI
NDI, short for Network Device Interface, is an interface transport protocol that provides ultra-low-latency, lossless transmission and interactive control over IP networks. Its biggest difference from traditional approaches is that NDI video transmission does not require conventional HDMI or SDI cabling.
NDI is an open protocol that enables video-compatible products to share video over a LAN. In an NDI production environment, any NDI device can connect to every other device, and every signal source can also be a destination, so NDI can flexibly take arbitrary signal inputs and outputs. It is a completely new IP mode of operation. The NDI protocol can transmit and receive multiple broadcast-quality signals in real time over IP networks, with low latency, frame-accurate video, and mutual discovery and communication between data streams. NDI also handles signaling, calling, and multiplexing, making system design, integration, application, maintenance, and feature expansion simpler and more flexible.