@chapter Protocols
@c man begin PROTOCOLS

Protocols are configured elements in FFmpeg which allow access to
resources that require the use of a particular protocol.

When you configure your FFmpeg build, all the supported protocols are
enabled by default. You can list all available ones using the
configure option "--list-protocols".

You can disable all the protocols using the configure option
"--disable-protocols", selectively enable a protocol using the option
"--enable-protocol=@var{PROTOCOL}", or disable a particular protocol
using the option "--disable-protocol=@var{PROTOCOL}".

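For example, to configure a build in which every protocol is first
disabled and then only the file and http protocols are re-enabled
(a sketch; the protocol selection is only an illustration):
@example
./configure --disable-protocols --enable-protocol=file --enable-protocol=http
@end example
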
The option "-protocols" of the ff* tools will display the list of
supported protocols.

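For example, to print that list with @file{ffmpeg}:
@example
ffmpeg -protocols
@end example
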
A description of the currently available protocols follows.

@section concat

Physical concatenation protocol.

Allows reading and seeking from many resources in sequence as if they
were a single resource.

A URL accepted by this protocol has the syntax:
@example
concat:@var{URL1}|@var{URL2}|...|@var{URLN}
@end example

where @var{URL1}, @var{URL2}, ..., @var{URLN} are the URLs of the
resources to be concatenated, each one possibly specifying a distinct
protocol.

For example to read a sequence of files @file{split1.mpeg},
@file{split2.mpeg}, @file{split3.mpeg} with @file{ffplay} use the
command:
@example
ffplay concat:split1.mpeg\|split2.mpeg\|split3.mpeg
@end example

Note that you may need to escape the character "|", which is special to
many shells.

@section file

File access protocol.

Allows reading from or writing to a file.

For example to read from a file @file{input.mpeg} with @file{ffmpeg}
use the command:
@example
ffmpeg -i file:input.mpeg output.mpeg
@end example

The ff* tools default to the file protocol, that is, a resource
specified with the name "FILE.mpeg" is interpreted as the URL
"file:FILE.mpeg".

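For example, the following two commands should be equivalent:
@example
ffmpeg -i input.mpeg output.mpeg
ffmpeg -i file:input.mpeg file:output.mpeg
@end example
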
@section gopher

Gopher protocol.

@section http

HTTP (Hyper Text Transfer Protocol).

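For example, to transcode a resource read over HTTP with @file{ffmpeg}
(the server and path below are only placeholders):
@example
ffmpeg -i http://server/path/resource.mpeg output.mpeg
@end example
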
@section mmst

MMS (Microsoft Media Server) protocol over TCP.

@section mmsh

MMS (Microsoft Media Server) protocol over HTTP.

The required syntax is:
@example
mmsh://@var{server}[:@var{port}][/@var{app}][/@var{playpath}]
@end example

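For example, to play a stream with @file{ffplay} (server, application
and play path are only placeholders):
@example
ffplay mmsh://server/app/playpath
@end example
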
@section md5

MD5 output protocol.

Computes the MD5 hash of the data to be written, and on close writes
this to the designated output or stdout if none is specified. It can
be used to test muxers without writing an actual file.

Some examples follow.
@example
# Write the MD5 hash of the encoded AVI file to the file output.avi.md5.
ffmpeg -i input.flv -f avi -y md5:output.avi.md5

# Write the MD5 hash of the encoded AVI file to stdout.
ffmpeg -i input.flv -f avi -y md5:
@end example

Note that some formats (typically MOV) require the output protocol to
be seekable, so they will fail with the MD5 output protocol.

@section pipe

UNIX pipe access protocol.

Allows reading and writing from UNIX pipes.

The accepted syntax is:
@example
pipe:[@var{number}]
@end example

@var{number} is the number corresponding to the file descriptor of the
pipe (e.g. 0 for stdin, 1 for stdout, 2 for stderr). If @var{number}
is not specified, by default the stdout file descriptor will be used
for writing, stdin for reading.

For example to read from stdin with @file{ffmpeg}:
@example
cat test.wav | ffmpeg -i pipe:0
# ...this is the same as...
cat test.wav | ffmpeg -i pipe:
@end example

For writing to stdout with @file{ffmpeg}:
@example
ffmpeg -i test.wav -f avi pipe:1 | cat > test.avi
# ...this is the same as...
ffmpeg -i test.wav -f avi pipe: | cat > test.avi
@end example

Note that some formats (typically MOV) require the output protocol to
be seekable, so they will fail with the pipe output protocol.

@section rtmp

Real-Time Messaging Protocol.

The Real-Time Messaging Protocol (RTMP) is used for streaming
multimedia content across a TCP/IP network.

The required syntax is:
@example
rtmp://@var{server}[:@var{port}][/@var{app}][/@var{playpath}]
@end example

The accepted parameters are:
@table @option

@item server
The address of the RTMP server.

@item port
The number of the TCP port to use (1935 by default).

@item app
It is the name of the application to access. It usually corresponds to
the path where the application is installed on the RTMP server
(e.g. @file{/ondemand/}, @file{/flash/live/}, etc.).

@item playpath
It is the path or name of the resource to play with reference to the
application specified in @var{app}; it may be prefixed by "mp4:".

@end table

For example to read with @file{ffplay} a multimedia resource named
"sample" from the application "vod" from an RTMP server "myserver":
@example
ffplay rtmp://myserver/vod/sample
@end example

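When the resource is an MP4 file, the play path typically carries the
"mp4:" prefix mentioned above; for example (hypothetical server,
application and resource names):
@example
ffplay rtmp://myserver/vod/mp4:sample
@end example
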
@section rtmp, rtmpe, rtmps, rtmpt, rtmpte

Real-Time Messaging Protocol and its variants supported through
librtmp.

Requires the presence of the librtmp headers and library during
configuration. You need to explicitly configure the build with
"--enable-librtmp". If enabled this will replace the native RTMP
protocol.

This protocol provides most client functions and a few server
functions needed to support RTMP, RTMP tunneled in HTTP (RTMPT),
encrypted RTMP (RTMPE), RTMP over SSL/TLS (RTMPS) and tunneled
variants of these encrypted types (RTMPTE, RTMPTS).

The required syntax is:
@example
@var{rtmp_proto}://@var{server}[:@var{port}][/@var{app}][/@var{playpath}] @var{options}
@end example

where @var{rtmp_proto} is one of the strings "rtmp", "rtmpt", "rtmpe",
"rtmps", "rtmpte", "rtmpts" corresponding to each RTMP variant, and
@var{server}, @var{port}, @var{app} and @var{playpath} have the same
meaning as specified for the RTMP native protocol.
@var{options} contains a list of space-separated options of the form
@var{key}=@var{val}.

See the librtmp manual page (man 3 librtmp) for more information.

For example, to stream a file in real-time to an RTMP server using
@file{ffmpeg}:
@example
ffmpeg -re -i myfile -f flv rtmp://myserver/live/mystream
@end example

To play the same stream using @file{ffplay}:
@example
ffplay "rtmp://myserver/live/mystream live=1"
@end example

@section rtp

Real-time Transport Protocol.

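For example, a minimal sketch of streaming a single media stream in
real time over RTP with @file{ffmpeg} (hostname and port are
placeholders; the payload format is chosen by the muxer):
@example
ffmpeg -re -i @var{input} -f rtp rtp://@var{hostname}:@var{port}
@end example
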
@section rtsp

RTSP is not technically a protocol handler in libavformat; it is a demuxer
and muxer. The demuxer supports both normal RTSP (with data transferred
over RTP; this is used by e.g. Apple and Microsoft) and Real-RTSP (with
data transferred over RDT).

The muxer can be used to send a stream using RTSP ANNOUNCE to a server
supporting it (currently Darwin Streaming Server and Mischa Spiegelmock's
RTSP server, @url{http://github.com/revmischa/rtsp-server}).

The required syntax for an RTSP URL is:
@example
rtsp://@var{hostname}[:@var{port}]/@var{path}[?@var{options}]
@end example

@var{options} is a @code{&}-separated list. The following options
are supported:

@table @option

@item udp
Use UDP as lower transport protocol.

@item tcp
Use TCP (interleaving within the RTSP control channel) as lower
transport protocol.

@item multicast
Use UDP multicast as lower transport protocol.

@item http
Use HTTP tunneling as lower transport protocol, which is useful for
passing proxies.
@end table

Multiple lower transport protocols may be specified; in that case they are
tried one at a time (if the setup of one fails, the next one is tried).
For the muxer, only the @code{tcp} and @code{udp} options are supported.

When receiving data over UDP, the demuxer tries to reorder received packets
(since they may arrive out of order, or packets may get lost totally). In
order for this to be enabled, a maximum delay must be specified in the
@code{max_delay} field of AVFormatContext.

When watching multi-bitrate Real-RTSP streams with @file{ffplay}, the
streams to display can be chosen with @code{-vst} @var{n} and
@code{-ast} @var{n} for video and audio respectively, and can be switched
on the fly by pressing @code{v} and @code{a}.

Example command lines:

To watch a stream over UDP, with a max reordering delay of 0.5 seconds:

@example
ffplay -max_delay 500000 rtsp://server/video.mp4?udp
@end example

To watch a stream tunneled over HTTP:

@example
ffplay rtsp://server/video.mp4?http
@end example

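Similarly, to force the TCP lower transport described above (server
name and path are again placeholders):

@example
ffplay rtsp://server/video.mp4?tcp
@end example
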
To send a stream in realtime to an RTSP server, for others to watch:

@example
ffmpeg -re -i @var{input} -f rtsp -muxdelay 0.1 rtsp://server/live.sdp
@end example

@section sap

Session Announcement Protocol (RFC 2974). This is not technically a
protocol handler in libavformat; it is a muxer and demuxer.
It is used for signalling of RTP streams, by announcing the SDP for the
streams regularly on a separate port.

@subsection Muxer

The syntax for a SAP URL given to the muxer is:
@example
sap://@var{destination}[:@var{port}][?@var{options}]
@end example

The RTP packets are sent to @var{destination} on port @var{port},
or to port 5004 if no port is specified.
@var{options} is a @code{&}-separated list. The following options
are supported:

@table @option

@item announce_addr=@var{address}
Specify the destination IP address for sending the announcements to.
If omitted, the announcements are sent to the commonly used SAP
announcement multicast address 224.2.127.254 (sap.mcast.net), or
ff0e::2:7ffe if @var{destination} is an IPv6 address.

@item announce_port=@var{port}
Specify the port to send the announcements on, defaults to
9875 if not specified.

@item ttl=@var{ttl}
Specify the time to live value for the announcements and RTP packets,
defaults to 255.

@item same_port=@var{0|1}
If set to 1, send all RTP streams on the same port pair. If zero (the
default), all streams are sent on unique ports, with each stream on a
port 2 numbers higher than the previous.
VLC/Live555 requires this to be set to 1, to be able to receive the stream.
The RTP stack in libavformat for receiving requires all streams to be sent
on unique ports.
@end table

Example command lines follow.

To broadcast a stream on the local subnet, for watching in VLC:

@example
ffmpeg -re -i @var{input} -f sap sap://224.0.0.255?same_port=1
@end example

Similarly, for watching in ffplay:

@example
ffmpeg -re -i @var{input} -f sap sap://224.0.0.255
@end example

And for watching in ffplay, over IPv6:

@example
ffmpeg -re -i @var{input} -f sap sap://[ff0e::1:2:3:4]
@end example

@subsection Demuxer

The syntax for a SAP URL given to the demuxer is:
@example
sap://[@var{address}][:@var{port}]
@end example

@var{address} is the multicast address to listen for announcements on;
if omitted, the default 224.2.127.254 (sap.mcast.net) is used. @var{port}
is the port that is listened on, 9875 if omitted.

The demuxer listens for announcements on the given address and port.
Once an announcement is received, it tries to receive that particular stream.

Example command lines follow.

To play back the first stream announced on the normal SAP multicast address:

@example
ffplay sap://
@end example

To play back the first stream announced on the default IPv6 SAP multicast address:

@example
ffplay sap://[ff0e::2:7ffe]
@end example

@section tcp

Transmission Control Protocol.

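For example, to send an MPEG-TS stream to a host that is already
listening on the given TCP port (hostname and port are only
placeholders):
@example
ffmpeg -i @var{input} -f mpegts tcp://@var{hostname}:@var{port}
@end example
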
@section udp

User Datagram Protocol.

The required syntax for a UDP URL is:
@example
udp://@var{hostname}:@var{port}[?@var{options}]
@end example

@var{options} contains a list of &-separated options of the form @var{key}=@var{val}.
The list of supported options follows.

@table @option

@item buffer_size=@var{size}
Set the UDP buffer size in bytes.

@item localport=@var{port}
Override the local UDP port to bind with.

@item pkt_size=@var{size}
Set the size in bytes of UDP packets.

@item reuse=@var{1|0}
Explicitly allow or disallow reusing UDP sockets.

@item ttl=@var{ttl}
Set the time to live value (for multicast only).

@item connect=@var{1|0}
Initialize the UDP socket with @code{connect()}. In this case, the
destination address can't be changed with udp_set_remote_url later.
If the destination address isn't known at the start, this option can
be specified in udp_set_remote_url, too.
This allows finding out the source address for the packets with getsockname,
and makes writes return with AVERROR(ECONNREFUSED) if "destination
unreachable" is received.
For receiving, this gives the benefit of only receiving packets from
the specified peer address/port.
@end table

Some usage examples of the udp protocol with @file{ffmpeg} follow.

To stream over UDP to a remote endpoint:
@example
ffmpeg -i @var{input} -f @var{format} udp://@var{hostname}:@var{port}
@end example

To stream in mpegts format over UDP using 188-byte UDP packets and a large input buffer:
@example
ffmpeg -i @var{input} -f mpegts udp://@var{hostname}:@var{port}?pkt_size=188&buffer_size=65535
@end example

To receive over UDP from a remote endpoint:
@example
ffmpeg -i udp://[@var{multicast-address}]:@var{port}
@end example

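To stream over UDP using a connected socket, as enabled by the
@code{connect} option described above (hostname and port are
placeholders):
@example
ffmpeg -i @var{input} -f mpegts udp://@var{hostname}:@var{port}?connect=1
@end example
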
@c man end PROTOCOLS