@chapter Protocols
@c man begin PROTOCOLS

Protocols are configured elements in FFmpeg which enable access to
resources that require the use of a particular protocol.

When you configure your FFmpeg build, all the supported protocols are
enabled by default. You can list all available ones using the
configure option "--list-protocols".

You can disable all the protocols using the configure option
"--disable-protocols", and selectively enable a protocol using the
option "--enable-protocol=@var{PROTOCOL}", or you can disable a
particular protocol using the option
"--disable-protocol=@var{PROTOCOL}".

The option "-protocols" of the ff* tools will display the list of
supported protocols.
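For example, to print the list with @file{ffmpeg}:
@example
ffmpeg -protocols
@end example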

A description of the currently available protocols follows.

@section concat

Physical concatenation protocol.

Allows reading and seeking from several resources in sequence as if they
were a single resource.

A URL accepted by this protocol has the syntax:
@example
concat:@var{URL1}|@var{URL2}|...|@var{URLN}
@end example

where @var{URL1}, @var{URL2}, ..., @var{URLN} are the URLs of the
resources to be concatenated, each one possibly specifying a distinct
protocol.

For example to read a sequence of files @file{split1.mpeg},
@file{split2.mpeg}, @file{split3.mpeg} with @file{ffplay} use the
command:
@example
ffplay concat:split1.mpeg\|split2.mpeg\|split3.mpeg
@end example

Note that you may need to escape the character "|" which is special for
many shells.

@section file

File access protocol.

Allows reading from and writing to a file.

For example to read from a file @file{input.mpeg} with @file{ffmpeg}
use the command:
@example
ffmpeg -i file:input.mpeg output.mpeg
@end example

The ff* tools default to the file protocol, that is, a resource
specified with the name "FILE.mpeg" is interpreted as the URL
"file:FILE.mpeg".

@section gopher

Gopher protocol.
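A Gopher resource is accessed through a URL of the usual host/path form,
for example (a hypothetical sketch, with placeholder server and path):
@example
ffplay gopher://@var{hostname}:@var{port}/@var{path}
@end example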

@section http

HTTP (Hyper Text Transfer Protocol).
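For example, to play a remote resource over HTTP with @file{ffplay}
(server and path here are placeholders):
@example
ffplay http://@var{server}/@var{path}
@end example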

@section mmst

MMS (Microsoft Media Server) protocol over TCP.
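For example, to play an MMS stream over TCP (a sketch with placeholder
server and stream path):
@example
ffplay mmst://@var{server}/@var{path}
@end example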

@section mmsh

MMS (Microsoft Media Server) protocol over HTTP.

The required syntax is:
@example
mmsh://@var{server}[:@var{port}][/@var{app}][/@var{playpath}]
@end example
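For example, to play a stream with @file{ffplay} using the syntax above
(server, application and playpath are placeholders):
@example
ffplay mmsh://@var{server}/@var{app}/@var{playpath}
@end example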

@section md5

MD5 output protocol.

Computes the MD5 hash of the data to be written, and on close writes
this to the designated output or stdout if none is specified. It can
be used to test muxers without writing an actual file.

Some examples follow.
@example
# Write the MD5 hash of the encoded AVI file to the file output.avi.md5.
ffmpeg -i input.flv -f avi -y md5:output.avi.md5

# Write the MD5 hash of the encoded AVI file to stdout.
ffmpeg -i input.flv -f avi -y md5:
@end example

Note that some formats (typically MOV) require the output protocol to
be seekable, so they will fail with the MD5 output protocol.

@section pipe

UNIX pipe access protocol.

Allows reading and writing from UNIX pipes.

The accepted syntax is:
@example
pipe:[@var{number}]
@end example

@var{number} is the number corresponding to the file descriptor of the
pipe (e.g. 0 for stdin, 1 for stdout, 2 for stderr). If @var{number}
is not specified, by default the stdout file descriptor will be used
for writing, stdin for reading.

For example to read from stdin with @file{ffmpeg}:
@example
cat test.wav | ffmpeg -i pipe:0
# ...this is the same as...
cat test.wav | ffmpeg -i pipe:
@end example

For writing to stdout with @file{ffmpeg}:
@example
ffmpeg -i test.wav -f avi pipe:1 | cat > test.avi
# ...this is the same as...
ffmpeg -i test.wav -f avi pipe: | cat > test.avi
@end example

Note that some formats (typically MOV) require the output protocol to
be seekable, so they will fail with the pipe output protocol.

@section rtmp

Real-Time Messaging Protocol.

The Real-Time Messaging Protocol (RTMP) is used for streaming
multimedia content across a TCP/IP network.

The required syntax is:
@example
rtmp://@var{server}[:@var{port}][/@var{app}][/@var{playpath}]
@end example

The accepted parameters are:
@table @option

@item server
The address of the RTMP server.

@item port
The number of the TCP port to use (1935 by default).

@item app
The name of the application to access. It usually corresponds to
the path where the application is installed on the RTMP server
(e.g. @file{/ondemand/}, @file{/flash/live/}, etc.).

@item playpath
The path or name of the resource to play with reference to the
application specified in @var{app}; it may be prefixed by "mp4:".

@end table

For example to read with @file{ffplay} a multimedia resource named
"sample" from the application "vod" from an RTMP server "myserver":
@example
ffplay rtmp://myserver/vod/sample
@end example

@section rtmp, rtmpe, rtmps, rtmpt, rtmpte

Real-Time Messaging Protocol and its variants supported through
librtmp.

Requires the presence of the librtmp headers and library during
configuration. You need to explicitly configure the build with
"--enable-librtmp". If enabled this will replace the native RTMP
protocol.

This protocol provides most client functions and a few server
functions needed to support RTMP, RTMP tunneled in HTTP (RTMPT),
encrypted RTMP (RTMPE), RTMP over SSL/TLS (RTMPS) and tunneled
variants of these encrypted types (RTMPTE, RTMPTS).

The required syntax is:
@example
@var{rtmp_proto}://@var{server}[:@var{port}][/@var{app}][/@var{playpath}] @var{options}
@end example

where @var{rtmp_proto} is one of the strings "rtmp", "rtmpt", "rtmpe",
"rtmps", "rtmpte", "rtmpts" corresponding to each RTMP variant, and
@var{server}, @var{port}, @var{app} and @var{playpath} have the same
meaning as specified for the RTMP native protocol.
@var{options} contains a list of space-separated options of the form
@var{key}=@var{val}.

See the librtmp manual page (man 3 librtmp) for more information.

For example, to stream a file in real-time to an RTMP server using
@file{ffmpeg}:
@example
ffmpeg -re -i myfile -f flv rtmp://myserver/live/mystream
@end example

To play the same stream using @file{ffplay}:
@example
ffplay "rtmp://myserver/live/mystream live=1"
@end example

@section rtp

Real-time Transport Protocol.
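For example, to send a stream in real time over RTP (a sketch; hostname
and port are placeholders for the receiving peer):
@example
ffmpeg -re -i @var{input} -f rtp rtp://@var{hostname}:@var{port}
@end example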

@section rtsp

RTSP is not technically a protocol handler in libavformat; it is a demuxer
and muxer. The demuxer supports both normal RTSP (with data transferred
over RTP; this is used by e.g. Apple and Microsoft) and Real-RTSP (with
data transferred over RDT).

The muxer can be used to send a stream using RTSP ANNOUNCE to a server
supporting it (currently Darwin Streaming Server and Mischa Spiegelmock's
RTSP server, @url{http://github.com/revmischa/rtsp-server}).

The required syntax for an RTSP URL is:
@example
rtsp://@var{hostname}[:@var{port}]/@var{path}[?@var{options}]
@end example

@var{options} is a @code{&}-separated list. The following options
are supported:

@table @option

@item udp
Use UDP as lower transport protocol.

@item tcp
Use TCP (interleaving within the RTSP control channel) as lower
transport protocol.

@item multicast
Use UDP multicast as lower transport protocol.

@item http
Use HTTP tunneling as lower transport protocol, which is useful for
passing proxies.

@item filter_src
Accept packets only from negotiated peer address and port.
@end table

Multiple lower transport protocols may be specified; in that case they are
tried one at a time (if the setup of one fails, the next one is tried).
For the muxer, only the @code{tcp} and @code{udp} options are supported.

When receiving data over UDP, the demuxer tries to reorder received packets
(since they may arrive out of order, or packets may get lost totally). In
order for this to be enabled, a maximum delay must be specified in the
@code{max_delay} field of AVFormatContext.

When watching multi-bitrate Real-RTSP streams with @file{ffplay}, the
streams to display can be chosen with @code{-vst} @var{n} and
@code{-ast} @var{n} for video and audio respectively, and can be switched
on the fly by pressing @code{v} and @code{a}.

Example command lines:

To watch a stream over UDP, with a max reordering delay of 0.5 seconds:

@example
ffplay -max_delay 500000 rtsp://server/video.mp4?udp
@end example

To watch a stream tunneled over HTTP:

@example
ffplay rtsp://server/video.mp4?http
@end example

To send a stream in real time to an RTSP server, for others to watch:

@example
ffmpeg -re -i @var{input} -f rtsp -muxdelay 0.1 rtsp://server/live.sdp
@end example

@section sap

Session Announcement Protocol (RFC 2974). This is not technically a
protocol handler in libavformat; it is a muxer and demuxer.
It is used for signalling of RTP streams, by announcing the SDP for the
streams regularly on a separate port.

@subsection Muxer

The syntax for a SAP URL given to the muxer is:
@example
sap://@var{destination}[:@var{port}][?@var{options}]
@end example

The RTP packets are sent to @var{destination} on port @var{port},
or to port 5004 if no port is specified.
@var{options} is a @code{&}-separated list. The following options
are supported:

@table @option

@item announce_addr=@var{address}
Specify the destination IP address for sending the announcements to.
If omitted, the announcements are sent to the commonly used SAP
announcement multicast address 224.2.127.254 (sap.mcast.net), or
ff0e::2:7ffe if @var{destination} is an IPv6 address.

@item announce_port=@var{port}
Specify the port to send the announcements on, defaults to
9875 if not specified.

@item ttl=@var{ttl}
Specify the time to live value for the announcements and RTP packets,
defaults to 255.

@item same_port=@var{0|1}
If set to 1, send all RTP streams on the same port pair. If zero (the
default), all streams are sent on unique ports, with each stream on a
port two numbers higher than the previous one.
VLC/Live555 requires this to be set to 1, to be able to receive the stream.
The RTP stack in libavformat for receiving requires all streams to be sent
on unique ports.
@end table

Example command lines follow.

To broadcast a stream on the local subnet, for watching in VLC:

@example
ffmpeg -re -i @var{input} -f sap sap://224.0.0.255?same_port=1
@end example

Similarly, for watching in ffplay:

@example
ffmpeg -re -i @var{input} -f sap sap://224.0.0.255
@end example

And for watching in ffplay, over IPv6:

@example
ffmpeg -re -i @var{input} -f sap sap://[ff0e::1:2:3:4]
@end example

@subsection Demuxer

The syntax for a SAP URL given to the demuxer is:
@example
sap://[@var{address}][:@var{port}]
@end example

@var{address} is the multicast address to listen for announcements on;
if omitted, the default 224.2.127.254 (sap.mcast.net) is used. @var{port}
is the port that is listened on, 9875 if omitted.

The demuxer listens for announcements on the given address and port.
Once an announcement is received, it tries to receive that particular stream.

Example command lines follow.

To play back the first stream announced on the normal SAP multicast address:

@example
ffplay sap://
@end example

To play back the first stream announced on the default IPv6 SAP multicast address:

@example
ffplay sap://[ff0e::2:7ffe]
@end example

@section tcp

Transmission Control Protocol.
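The URL takes the usual host and port form; for example, to read an
MPEG-TS stream from a remote sender with @file{ffplay} (a sketch with
placeholder hostname and port):
@example
ffplay tcp://@var{hostname}:@var{port}
@end example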

@section udp

User Datagram Protocol.

The required syntax for a UDP URL is:
@example
udp://@var{hostname}:@var{port}[?@var{options}]
@end example

@var{options} contains a list of &-separated options of the form @var{key}=@var{val}.
The following options are supported:

@table @option

@item buffer_size=@var{size}
set the UDP buffer size in bytes

@item localport=@var{port}
override the local UDP port to bind with

@item pkt_size=@var{size}
set the size in bytes of UDP packets

@item reuse=@var{1|0}
explicitly allow or disallow reusing UDP sockets

@item ttl=@var{ttl}
set the time to live value (for multicast only)

@item connect=@var{1|0}
Initialize the UDP socket with @code{connect()}. In this case, the
destination address can't be changed with udp_set_remote_url later.
If the destination address isn't known at the start, this option can
be specified in udp_set_remote_url, too.
This allows finding out the source address for the packets with getsockname,
and makes writes return with AVERROR(ECONNREFUSED) if "destination
unreachable" is received.
For receiving, this gives the benefit of only receiving packets from
the specified peer address/port.
@end table

Some usage examples of the udp protocol with @file{ffmpeg} follow.

To stream over UDP to a remote endpoint:
@example
ffmpeg -i @var{input} -f @var{format} udp://@var{hostname}:@var{port}
@end example

To stream in mpegts format over UDP using UDP packets of 188 bytes and a
large UDP buffer size (note that the "&" must be quoted or escaped so the
shell does not interpret it):
@example
ffmpeg -i @var{input} -f mpegts "udp://@var{hostname}:@var{port}?pkt_size=188&buffer_size=65535"
@end example

To receive over UDP from a remote endpoint:
@example
ffmpeg -i udp://[@var{multicast-address}]:@var{port}
@end example
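To receive only packets sent from a specific peer, using the
@code{connect} option described above (a sketch based on that
description; @var{hostname} and @var{port} are placeholders for the
sending peer's address):
@example
ffmpeg -i udp://@var{hostname}:@var{port}?connect=1
@end example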

@c man end PROTOCOLS