Imported Upstream version 2.6.1
Andreas Cadhalpun
9 years ago
0 | 0 | Entries are sorted chronologically from oldest to youngest within each release, |
1 | 1 | releases are sorted from youngest to oldest. |
2 | ||
3 | version 2.6.1: | |
4 | - avformat/mov: Disallow ".." in dref unless use_absolute_path is set | |
5 | - avfilter/palettegen: make sure at least one frame was sent to the filter | |
6 | - avformat/mov: Check for string truncation in mov_open_dref() | |
7 | - ac3_fixed: fix out-of-bound read | |
8 | - mips/asmdefs: use _ABI64 as defined by gcc | |
9 | - hevc: delay ff_thread_finish_setup for hwaccel | |
10 | - avcodec/012v: Check dimensions more completely | |
11 | - asfenc: fix leaking asf->index_ptr on error | |
12 | - roqvideoenc: set enc->avctx in roq_encode_init | |
13 | - avcodec/options_table: remove extradata_size from the AVOptions table | |
14 | - ffmdec: limit the backward seek to the last resync position | |
15 | - Add dependencies to configure file for vf_fftfilt | |
16 | - ffmdec: make sure the time base is valid | |
17 | - ffmdec: fix infinite loop at EOF | |
18 | - ffmdec: initialize f_cprv, f_stvi and f_stau | |
19 | - arm: Suppress tags about used cpu arch and extensions | |
20 | - mxfdec: Fix the error handling for when strftime fails | |
21 | - avcodec/opusdec: Fix delayed sample value | |
22 | - avcodec/opusdec: Clear out pointers per packet | |
23 | - avcodec/utils: Align YUV411 by as much as the other YUV variants | |
24 | - lavc/hevcdsp: Fix compilation for arm with --disable-neon. | |
25 | - vp9: fix segmentation map retention with threading enabled. | |
26 | - Revert "avutil/opencl: is_compiled flag not being cleared in av_opencl_uninit" | |
2 | 27 | |
3 | 28 | version 2.6: |
4 | 29 | - nvenc encoder |
34 | 59 | - Fix stsd atom corruption in DNxHD QuickTimes |
35 | 60 | - Canopus HQX decoder |
36 | 61 | - RTP depacketization of T.140 text (RFC 4103) |
37 | - VP9 RTP payload format (draft 0) experimental depacketizer | |
38 | 62 | - Port MIPS optimizations to 64-bit |
39 | 63 | |
40 | 64 |
20 | 20 | 10-bit support in spp, but maybe it's more important to mention the addition |
21 | 21 | of colorlevels (yet another color handling filter), tblend (allowing you |
22 | 22 | to for example run a diff between successive frames of a video stream), or |
23 | eventually the dcshift audio filter. | |
23 | the dcshift audio filter. | |
24 | 24 | |
25 | There is also two other important filters landing in libavfilter: palettegen | |
26 | and paletteuse, submitted by the Stupeflix company. These filters will be | |
27 | very useful in case you are looking for creating high quality GIF, a format | |
28 | that still bravely fights annihilation in 2015. | |
25 | There are also two other important filters landing in libavfilter: palettegen | |
26 | and paletteuse, both submitted by the Stupeflix company. These filters | 
27 | are very useful if you want to create high quality GIFs, a | 
28 | format that still bravely fights annihilation in 2015. | 
29 | 29 | |
30 | There are many other features, but let's follow-up on one big cleanup | |
30 | There are many other new features, but let's follow up on one big cleanup | 
31 | 31 | achievement: the libmpcodecs (MPlayer filters) wrapper is finally dead. The |
32 | 32 | last remaining filters (softpulldown/repeatfields, eq*, and various |
33 | 33 | postprocessing filters) were ported by Arwa Arif (OPW student) and Paul B |
34 | 34 | Mahol. |
35 | 35 | |
36 | Concerning API changes, not much things to mention. Though, the introduction | |
37 | of devices inputs and outputs listing by Lukasz Marek is a notable addition | |
38 | (try ffmpeg -sources or ffmpeg -sinks for an example of the usage). As | |
39 | usual, see doc/APIchanges for more information. | |
36 | Concerning API changes, there is not much to mention, though the | 
37 | introduction of device inputs and outputs listing by Lukasz Marek is a | |
38 | notable addition (try ffmpeg -sources or ffmpeg -sinks for an example of | |
39 | the usage). As usual, see doc/APIchanges for more information. | |
40 | 40 | |
41 | 41 | Now let's talk about optimizations. Ronald S. Bultje made the VP9 decoder |
42 | 42 | usable on x86 32-bit systems and pre-ssse3 CPUs like Phenom (even dual core |
43 | 43 | Athlons can play 1080p 30fps VP9 content now), so we now secretly hope for |
44 | Google and Mozilla to use ffvp9 instead of libvpx. | |
45 | ||
46 | But VP9 is not the center of attention anymore, and HEVC/H.265 is also | |
47 | getting many improvements, which includes optimizations, both in C and x86 | |
48 | ASM, mainly from James Almer, Christophe Gisquet and Pierre-Edouard Lepere. | |
44 | Google and Mozilla to use ffvp9 instead of libvpx. But VP9 is not the | |
45 | center of attention anymore, and HEVC/H.265 is also getting many | |
46 | improvements, which include C and x86 ASM optimizations, mainly from James | |
47 | Almer, Christophe Gisquet and Pierre-Edouard Lepere. | |
49 | 48 | |
50 | 49 | Even though we had many x86 contributions, it is not the only architecture |
51 | 50 | getting some love, with Seppo Tomperi adding ARM NEON optimizations to the |
60 | 59 | complete Git history on http://source.ffmpeg.org. |
61 | 60 | |
62 | 61 | We hope you will like this release as much as we enjoyed working on it, and |
63 | as usual, if you have any question about it, or any FFmpeg related topic, | |
62 | as usual, if you have any questions about it, or any FFmpeg related topic, | |
64 | 63 | feel free to join us on the #ffmpeg IRC channel (on irc.freenode.net) or ask |
65 | 64 | on the mailing-lists. |
1768 | 1768 | TOOLCHAIN_FEATURES=" |
1769 | 1769 | as_dn_directive |
1770 | 1770 | as_func |
1771 | as_object_arch | |
1771 | 1772 | asm_mod_q |
1772 | 1773 | attribute_may_alias |
1773 | 1774 | attribute_packed |
2594 | 2595 | drawtext_filter_deps="libfreetype" |
2595 | 2596 | ebur128_filter_deps="gpl" |
2596 | 2597 | eq_filter_deps="gpl" |
2598 | fftfilt_filter_deps="avcodec" | |
2599 | fftfilt_filter_select="rdft" | |
2597 | 2600 | flite_filter_deps="libflite" |
2598 | 2601 | frei0r_filter_deps="frei0r dlopen" |
2599 | 2602 | frei0r_src_filter_deps="frei0r dlopen" |
4561 | 4564 | .unreq ra |
4562 | 4565 | EOF |
4563 | 4566 | |
4567 | # llvm's integrated assembler supports .object_arch from llvm 3.5 | |
4568 | [ "$objformat" = elf ] && check_as <<EOF && enable as_object_arch | |
4569 | .object_arch armv4 | |
4570 | EOF | |
4571 | ||
4564 | 4572 | [ $target_os != win32 ] && enabled_all armv6t2 shared !pic && enable_weak_pic |
4565 | 4573 | |
4566 | 4574 | elif enabled mips; then |
5450 | 5458 | enabled atempo_filter && prepend avfilter_deps "avcodec" |
5451 | 5459 | enabled ebur128_filter && enabled swresample && prepend avfilter_deps "swresample" |
5452 | 5460 | enabled elbg_filter && prepend avfilter_deps "avcodec" |
5461 | enabled fftfilt_filter && prepend avfilter_deps "avcodec" | |
5453 | 5462 | enabled mcdeint_filter && prepend avfilter_deps "avcodec" |
5454 | 5463 | enabled movie_filter && prepend avfilter_deps "avformat avcodec" |
5455 | 5464 | enabled pan_filter && prepend avfilter_deps "swresample" |
30 | 30 | # This could be handy for archiving the generated documentation or |
31 | 31 | # if some version control system is used. |
32 | 32 | |
33 | PROJECT_NUMBER = 2.6 | |
33 | PROJECT_NUMBER = 2.6.1 | |
34 | 34 | |
35 | 35 | # With the PROJECT_LOGO tag one can specify a logo or icon that is included |
36 | 36 | # in the documentation. The maximum height of the logo should not exceed 55 |
348 | 348 | @code{concat}} protocol designed specifically for that, with examples in the |
349 | 349 | documentation. |
350 | 350 | |
351 | A few multimedia containers (MPEG-1, MPEG-2 PS, DV) allow to concatenate | |
351 | A few multimedia containers (MPEG-1, MPEG-2 PS, DV) allow one to concatenate | |
352 | 352 | video by merely concatenating the files containing them. |
353 | 353 | |
354 | 354 | Hence you may concatenate your multimedia files by first transcoding them to |
71 | 71 | configuration file. |
72 | 72 | |
73 | 73 | Each feed is associated to a file which is stored on disk. This stored |
74 | file is used to allow to send pre-recorded data to a player as fast as | |
74 | file is used to send pre-recorded data to a player as fast as | |
75 | 75 | possible when new content is added in real-time to the stream. |
76 | 76 | |
77 | 77 | A "live-stream" or "stream" is a resource published by |
3485 | 3485 | may want to reduce this value, at the cost of a less effective filter and the |
3486 | 3486 | risk of various artefacts. |
3487 | 3487 | |
3488 | If the overlapping value doesn't allow to process the whole input width or | |
3488 | If the overlapping value doesn't permit processing the whole input width or | |
3489 | 3489 | height, a warning will be displayed and according borders won't be denoised. |
3490 | 3490 | |
3491 | 3491 | Default value is @var{blocksize}-1, which is the best possible setting. |
22 | 22 | |
23 | 23 | @item probesize @var{integer} (@emph{input}) |
24 | 24 | Set probing size in bytes, i.e. the size of the data to analyze to get |
25 | stream information. A higher value will allow to detect more | |
25 | stream information. A higher value will enable detecting more | |
26 | 26 | information in case it is dispersed into the stream, but will increase |
27 | 27 | latency. Must be an integer not lesser than 32. It is 5000000 by default. |
28 | 28 | |
66 | 66 | |
67 | 67 | @item analyzeduration @var{integer} (@emph{input}) |
68 | 68 | Specify how many microseconds are analyzed to probe the input. A |
69 | higher value will allow to detect more accurate information, but will | |
69 | higher value will enable detecting more accurate information, but will | |
70 | 70 | increase latency. It defaults to 5,000,000 microseconds = 5 seconds. |
71 | 71 | |
72 | 72 | @item cryptokey @var{hexadecimal string} (@emph{input}) |
0 | 0 | @chapter Input Devices |
1 | 1 | @c man begin INPUT DEVICES |
2 | 2 | |
3 | Input devices are configured elements in FFmpeg which allow to access | |
3 | Input devices are configured elements in FFmpeg which enable accessing | |
4 | 4 | the data coming from a multimedia device attached to your system. |
5 | 5 | |
6 | 6 | When you configure your FFmpeg build, all the supported input devices |
843 | 843 | Return 1.0 if @var{x} is NAN, 0.0 otherwise. |
844 | 844 | |
845 | 845 | @item ld(var) |
846 | Allow to load the value of the internal variable with number | |
846 | Load the value of the internal variable with number | |
847 | 847 | @var{var}, which was previously stored with st(@var{var}, @var{expr}). |
848 | 848 | The function returns the loaded value. |
849 | 849 | |
911 | 911 | Compute expression @code{1/(1 + exp(4*x))}. |
912 | 912 | |
913 | 913 | @item st(var, expr) |
914 | Allow to store the value of the expression @var{expr} in an internal | |
914 | Store the value of the expression @var{expr} in an internal | |
915 | 915 | variable. @var{var} specifies the number of the variable where to |
916 | 916 | store the value, and it is a value ranging from 0 to 9. The function |
917 | 917 | returns the value stored in the internal variable. |
37 | 37 | static int zero12v_decode_frame(AVCodecContext *avctx, void *data, |
38 | 38 | int *got_frame, AVPacket *avpkt) |
39 | 39 | { |
40 | int line = 0, ret; | |
40 | int line, ret; | |
41 | 41 | const int width = avctx->width; |
42 | 42 | AVFrame *pic = data; |
43 | 43 | uint16_t *y, *u, *v; |
44 | 44 | const uint8_t *line_end, *src = avpkt->data; |
45 | 45 | int stride = avctx->width * 8 / 3; |
46 | 46 | |
47 | if (width == 1) { | |
48 | av_log(avctx, AV_LOG_ERROR, "Width 1 not supported.\n"); | |
47 | if (width <= 1 || avctx->height <= 0) { | |
48 | av_log(avctx, AV_LOG_ERROR, "Dimensions %dx%d not supported.\n", width, avctx->height); | |
49 | 49 | return AVERROR_INVALIDDATA; |
50 | 50 | } |
51 | 51 | |
66 | 66 | pic->pict_type = AV_PICTURE_TYPE_I; |
67 | 67 | pic->key_frame = 1; |
68 | 68 | |
69 | y = (uint16_t *)pic->data[0]; | |
70 | u = (uint16_t *)pic->data[1]; | |
71 | v = (uint16_t *)pic->data[2]; | |
72 | 69 | line_end = avpkt->data + stride; |
70 | for (line = 0; line < avctx->height; line++) { | |
71 | uint16_t y_temp[6] = {0x8000, 0x8000, 0x8000, 0x8000, 0x8000, 0x8000}; | |
72 | uint16_t u_temp[3] = {0x8000, 0x8000, 0x8000}; | |
73 | uint16_t v_temp[3] = {0x8000, 0x8000, 0x8000}; | |
74 | int x; | |
75 | y = (uint16_t *)(pic->data[0] + line * pic->linesize[0]); | |
76 | u = (uint16_t *)(pic->data[1] + line * pic->linesize[1]); | |
77 | v = (uint16_t *)(pic->data[2] + line * pic->linesize[2]); | |
73 | 78 | |
74 | while (line++ < avctx->height) { | |
75 | while (1) { | |
76 | uint32_t t = AV_RL32(src); | |
79 | for (x = 0; x < width; x += 6) { | |
80 | uint32_t t; | |
81 | ||
82 | if (width - x < 6 || line_end - src < 16) { | |
83 | y = y_temp; | |
84 | u = u_temp; | |
85 | v = v_temp; | |
86 | } | |
87 | ||
88 | if (line_end - src < 4) | |
89 | break; | |
90 | ||
91 | t = AV_RL32(src); | |
77 | 92 | src += 4; |
78 | 93 | *u++ = t << 6 & 0xFFC0; |
79 | 94 | *y++ = t >> 4 & 0xFFC0; |
80 | 95 | *v++ = t >> 14 & 0xFFC0; |
81 | 96 | |
82 | if (src >= line_end - 1) { | |
83 | *y = 0x80; | |
84 | src++; | |
85 | line_end += stride; | |
86 | y = (uint16_t *)(pic->data[0] + line * pic->linesize[0]); | |
87 | u = (uint16_t *)(pic->data[1] + line * pic->linesize[1]); | |
88 | v = (uint16_t *)(pic->data[2] + line * pic->linesize[2]); | |
97 | if (line_end - src < 4) | |
89 | 98 | break; |
90 | } | |
91 | 99 | |
92 | 100 | t = AV_RL32(src); |
93 | 101 | src += 4; |
94 | 102 | *y++ = t << 6 & 0xFFC0; |
95 | 103 | *u++ = t >> 4 & 0xFFC0; |
96 | 104 | *y++ = t >> 14 & 0xFFC0; |
97 | if (src >= line_end - 2) { | |
98 | if (!(width & 1)) { | |
99 | *y = 0x80; | |
100 | src += 2; | |
101 | } | |
102 | line_end += stride; | |
103 | y = (uint16_t *)(pic->data[0] + line * pic->linesize[0]); | |
104 | u = (uint16_t *)(pic->data[1] + line * pic->linesize[1]); | |
105 | v = (uint16_t *)(pic->data[2] + line * pic->linesize[2]); | |
105 | ||
106 | if (line_end - src < 4) | |
106 | 107 | break; |
107 | } | |
108 | 108 | |
109 | 109 | t = AV_RL32(src); |
110 | 110 | src += 4; |
112 | 112 | *y++ = t >> 4 & 0xFFC0; |
113 | 113 | *u++ = t >> 14 & 0xFFC0; |
114 | 114 | |
115 | if (src >= line_end - 1) { | |
116 | *y = 0x80; | |
117 | src++; | |
118 | line_end += stride; | |
119 | y = (uint16_t *)(pic->data[0] + line * pic->linesize[0]); | |
120 | u = (uint16_t *)(pic->data[1] + line * pic->linesize[1]); | |
121 | v = (uint16_t *)(pic->data[2] + line * pic->linesize[2]); | |
115 | if (line_end - src < 4) | |
122 | 116 | break; |
123 | } | |
124 | 117 | |
125 | 118 | t = AV_RL32(src); |
126 | 119 | src += 4; |
128 | 121 | *v++ = t >> 4 & 0xFFC0; |
129 | 122 | *y++ = t >> 14 & 0xFFC0; |
130 | 123 | |
131 | if (src >= line_end - 2) { | |
132 | if (width & 1) { | |
133 | *y = 0x80; | |
134 | src += 2; | |
135 | } | |
136 | line_end += stride; | |
137 | y = (uint16_t *)(pic->data[0] + line * pic->linesize[0]); | |
138 | u = (uint16_t *)(pic->data[1] + line * pic->linesize[1]); | |
139 | v = (uint16_t *)(pic->data[2] + line * pic->linesize[2]); | |
124 | if (width - x < 6) | |
140 | 125 | break; |
141 | } | |
142 | 126 | } |
127 | ||
128 | if (x < width) { | |
129 | y = x + (uint16_t *)(pic->data[0] + line * pic->linesize[0]); | |
130 | u = x/2 + (uint16_t *)(pic->data[1] + line * pic->linesize[1]); | |
131 | v = x/2 + (uint16_t *)(pic->data[2] + line * pic->linesize[2]); | |
132 | memcpy(y, y_temp, sizeof(*y) * (width - x)); | |
133 | memcpy(u, u_temp, sizeof(*u) * (width - x + 1) / 2); | |
134 | memcpy(v, v_temp, sizeof(*v) * (width - x + 1) / 2); | |
135 | } | |
136 | ||
137 | line_end += stride; | |
138 | src = line_end - stride; | |
143 | 139 | } |
144 | 140 | |
145 | 141 | *got_frame = 1; |
216 | 216 | OBJS-$(CONFIG_DVVIDEO_ENCODER) += dvenc.o dv.o dvdata.o |
217 | 217 | OBJS-$(CONFIG_DXA_DECODER) += dxa.o |
218 | 218 | OBJS-$(CONFIG_DXTORY_DECODER) += dxtory.o |
219 | OBJS-$(CONFIG_EAC3_DECODER) += eac3dec.o eac3_data.o | |
219 | OBJS-$(CONFIG_EAC3_DECODER) += eac3_data.o | |
220 | 220 | OBJS-$(CONFIG_EAC3_ENCODER) += eac3enc.o eac3_data.o |
221 | 221 | OBJS-$(CONFIG_EACMV_DECODER) += eacmv.o |
222 | 222 | OBJS-$(CONFIG_EAMAD_DECODER) += eamad.o eaidct.o mpeg12.o \ |
871 | 871 | start_subband += start_subband - 7; |
872 | 872 | end_subband = get_bits(gbc, 3) + 5; |
873 | 873 | #if USE_FIXED |
874 | s->spx_dst_end_freq = end_freq_inv_tab[end_subband]; | |
874 | s->spx_dst_end_freq = end_freq_inv_tab[end_subband-5]; | |
875 | 875 | #endif |
876 | 876 | if (end_subband > 7) |
877 | 877 | end_subband += end_subband - 7; |
938 | 938 | nblend = 0; |
939 | 939 | sblend = 0x800000; |
940 | 940 | } else if (nratio > 0x7fffff) { |
941 | nblend = 0x800000; | |
941 | nblend = 14529495; // sqrt(3) in FP.23 | |
942 | 942 | sblend = 0; |
943 | 943 | } else { |
944 | 944 | nblend = fixed_sqrt(nratio, 23); |
242 | 242 | * Parse the E-AC-3 frame header. |
243 | 243 | * This parses both the bit stream info and audio frame header. |
244 | 244 | */ |
245 | int ff_eac3_parse_header(AC3DecodeContext *s); | |
245 | static int ff_eac3_parse_header(AC3DecodeContext *s); | |
246 | 246 | |
247 | 247 | /** |
248 | 248 | * Decode mantissas in a single channel for the entire frame. |
249 | 249 | * This is used when AHT mode is enabled. |
250 | 250 | */ |
251 | void ff_eac3_decode_transform_coeffs_aht_ch(AC3DecodeContext *s, int ch); | |
251 | static void ff_eac3_decode_transform_coeffs_aht_ch(AC3DecodeContext *s, int ch); | |
252 | 252 | |
253 | 253 | /** |
254 | 254 | * Apply spectral extension to each channel by copying lower frequency |
255 | 255 | * coefficients to higher frequency bins and applying side information to |
256 | 256 | * approximate the original high frequency signal. |
257 | 257 | */ |
258 | void ff_eac3_apply_spectral_extension(AC3DecodeContext *s); | |
258 | static void ff_eac3_apply_spectral_extension(AC3DecodeContext *s); | |
259 | 259 | |
260 | 260 | #endif /* AVCODEC_AC3DEC_H */ |
163 | 163 | } |
164 | 164 | } |
165 | 165 | |
166 | #include "eac3dec.c" | |
166 | 167 | #include "ac3dec.c" |
167 | 168 | |
168 | 169 | static const AVOption options[] = { |
27 | 27 | * Upmix delay samples from stereo to original channel layout. |
28 | 28 | */ |
29 | 29 | #include "ac3dec.h" |
30 | #include "eac3dec.c" | |
30 | 31 | #include "ac3dec.c" |
31 | 32 | |
32 | 33 | static const AVOption options[] = { |
36 | 36 | OBJS-$(CONFIG_FLAC_DECODER) += arm/flacdsp_init_arm.o \ |
37 | 37 | arm/flacdsp_arm.o |
38 | 38 | OBJS-$(CONFIG_FLAC_ENCODER) += arm/flacdsp_init_arm.o |
39 | OBJS-$(CONFIG_HEVC_DECODER) += arm/hevcdsp_init_arm.o | |
39 | 40 | OBJS-$(CONFIG_MLP_DECODER) += arm/mlpdsp_init_arm.o |
40 | 41 | OBJS-$(CONFIG_VC1_DECODER) += arm/vc1dsp_init_arm.o |
41 | 42 | OBJS-$(CONFIG_VORBIS_DECODER) += arm/vorbisdsp_init_arm.o |
0 | /* | |
1 | * This file is part of FFmpeg. | |
2 | * | |
3 | * FFmpeg is free software; you can redistribute it and/or | |
4 | * modify it under the terms of the GNU Lesser General Public | |
5 | * License as published by the Free Software Foundation; either | |
6 | * version 2.1 of the License, or (at your option) any later version. | |
7 | * | |
8 | * FFmpeg is distributed in the hope that it will be useful, | |
9 | * but WITHOUT ANY WARRANTY; without even the implied warranty of | |
10 | * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU | |
11 | * Lesser General Public License for more details. | |
12 | * | |
13 | * You should have received a copy of the GNU Lesser General Public | |
14 | * License along with FFmpeg; if not, write to the Free Software | |
15 | * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA | |
16 | */ | |
17 | ||
18 | #ifndef AVCODEC_ARM_HEVCDSP_ARM_H | |
19 | #define AVCODEC_ARM_HEVCDSP_ARM_H | |
20 | ||
21 | #include "libavcodec/hevcdsp.h" | |
22 | ||
23 | void ff_hevcdsp_init_neon(HEVCDSPContext *c, const int bit_depth); | |
24 | ||
25 | #endif /* AVCODEC_ARM_HEVCDSP_ARM_H */ |
0 | /* | |
1 | * Copyright (c) 2014 Seppo Tomperi <seppo.tomperi@vtt.fi> | |
2 | * | |
3 | * This file is part of FFmpeg. | |
4 | * | |
5 | * FFmpeg is free software; you can redistribute it and/or | |
6 | * modify it under the terms of the GNU Lesser General Public | |
7 | * License as published by the Free Software Foundation; either | |
8 | * version 2.1 of the License, or (at your option) any later version. | |
9 | * | |
10 | * FFmpeg is distributed in the hope that it will be useful, | |
11 | * but WITHOUT ANY WARRANTY; without even the implied warranty of | |
12 | * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU | |
13 | * Lesser General Public License for more details. | |
14 | * | |
15 | * You should have received a copy of the GNU Lesser General Public | |
16 | * License along with FFmpeg; if not, write to the Free Software | |
17 | * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA | |
18 | */ | |
19 | ||
20 | #include "libavutil/attributes.h" | |
21 | #include "libavutil/arm/cpu.h" | |
22 | #include "libavcodec/hevcdsp.h" | |
23 | #include "hevcdsp_arm.h" | |
24 | ||
25 | av_cold void ff_hevcdsp_init_arm(HEVCDSPContext *c, const int bit_depth) | |
26 | { | |
27 | int cpu_flags = av_get_cpu_flags(); | |
28 | ||
29 | if (have_neon(cpu_flags)) | |
30 | ff_hevcdsp_init_neon(c, bit_depth); | |
31 | } |
20 | 20 | #include "libavutil/attributes.h" |
21 | 21 | #include "libavutil/arm/cpu.h" |
22 | 22 | #include "libavcodec/hevcdsp.h" |
23 | #include "hevcdsp_arm.h" | |
23 | 24 | |
24 | 25 | void ff_hevc_v_loop_filter_luma_neon(uint8_t *_pix, ptrdiff_t _stride, int _beta, int *_tc, uint8_t *_no_p, uint8_t *_no_q); |
25 | 26 | void ff_hevc_h_loop_filter_luma_neon(uint8_t *_pix, ptrdiff_t _stride, int _beta, int *_tc, uint8_t *_no_p, uint8_t *_no_q); |
140 | 141 | put_hevc_qpel_uw_neon[my][mx](dst, dststride, src, srcstride, width, height, src2, MAX_PB_SIZE); |
141 | 142 | } |
142 | 143 | |
143 | static av_cold void hevcdsp_init_neon(HEVCDSPContext *c, const int bit_depth) | |
144 | av_cold void ff_hevcdsp_init_neon(HEVCDSPContext *c, const int bit_depth) | |
144 | 145 | { |
145 | #if HAVE_NEON | |
146 | 146 | if (bit_depth == 8) { |
147 | 147 | int x; |
148 | 148 | c->hevc_v_loop_filter_luma = ff_hevc_v_loop_filter_luma_neon; |
220 | 220 | c->put_hevc_qpel_uni[8][0][0] = ff_hevc_put_qpel_uw_pixels_w48_neon_8; |
221 | 221 | c->put_hevc_qpel_uni[9][0][0] = ff_hevc_put_qpel_uw_pixels_w64_neon_8; |
222 | 222 | } |
223 | #endif // HAVE_NEON | |
224 | } | |
225 | ||
226 | void ff_hevcdsp_init_arm(HEVCDSPContext *c, const int bit_depth) | |
227 | { | |
228 | int cpu_flags = av_get_cpu_flags(); | |
229 | ||
230 | if (have_neon(cpu_flags)) | |
231 | hevcdsp_init_neon(c, bit_depth); | |
232 | } | |
223 | } |
62 | 62 | |
63 | 63 | #define EAC3_SR_CODE_REDUCED 3 |
64 | 64 | |
65 | void ff_eac3_apply_spectral_extension(AC3DecodeContext *s) | |
65 | static void ff_eac3_apply_spectral_extension(AC3DecodeContext *s) | |
66 | 66 | { |
67 | 67 | int bin, bnd, ch, i; |
68 | 68 | uint8_t wrapflag[SPX_MAX_BANDS]={1,0,}, num_copy_sections, copy_sizes[SPX_MAX_BANDS]; |
100 | 100 | for (i = 0; i < num_copy_sections; i++) { |
101 | 101 | memcpy(&s->transform_coeffs[ch][bin], |
102 | 102 | &s->transform_coeffs[ch][s->spx_dst_start_freq], |
103 | copy_sizes[i]*sizeof(float)); | |
103 | copy_sizes[i]*sizeof(INTFLOAT)); | |
104 | 104 | bin += copy_sizes[i]; |
105 | 105 | } |
106 | 106 | |
123 | 123 | bin = s->spx_src_start_freq - 2; |
124 | 124 | for (bnd = 0; bnd < s->num_spx_bands; bnd++) { |
125 | 125 | if (wrapflag[bnd]) { |
126 | float *coeffs = &s->transform_coeffs[ch][bin]; | |
126 | INTFLOAT *coeffs = &s->transform_coeffs[ch][bin]; | |
127 | 127 | coeffs[0] *= atten_tab[0]; |
128 | 128 | coeffs[1] *= atten_tab[1]; |
129 | 129 | coeffs[2] *= atten_tab[2]; |
141 | 141 | for (bnd = 0; bnd < s->num_spx_bands; bnd++) { |
142 | 142 | float nscale = s->spx_noise_blend[ch][bnd] * rms_energy[bnd] * (1.0f / INT32_MIN); |
143 | 143 | float sscale = s->spx_signal_blend[ch][bnd]; |
144 | #if USE_FIXED | |
145 | // spx_noise_blend and spx_signal_blend are both FP.23 | |
146 | nscale *= 1.0 / (1<<23); | |
147 | sscale *= 1.0 / (1<<23); | |
148 | #endif | |
144 | 149 | for (i = 0; i < s->spx_band_sizes[bnd]; i++) { |
145 | 150 | float noise = nscale * (int32_t)av_lfg_get(&s->dith_state); |
146 | 151 | s->transform_coeffs[ch][bin] *= sscale; |
194 | 199 | pre_mant[5] = even0 - odd0; |
195 | 200 | } |
196 | 201 | |
197 | void ff_eac3_decode_transform_coeffs_aht_ch(AC3DecodeContext *s, int ch) | |
202 | static void ff_eac3_decode_transform_coeffs_aht_ch(AC3DecodeContext *s, int ch) | |
198 | 203 | { |
199 | 204 | int bin, blk, gs; |
200 | 205 | int end_bap, gaq_mode; |
287 | 292 | } |
288 | 293 | } |
289 | 294 | |
290 | int ff_eac3_parse_header(AC3DecodeContext *s) | |
295 | static int ff_eac3_parse_header(AC3DecodeContext *s) | |
291 | 296 | { |
292 | 297 | int i, blk, ch; |
293 | 298 | int ac3_exponent_strategy, parse_aht_info, parse_spx_atten_data; |
2599 | 2599 | if (ret < 0) |
2600 | 2600 | goto fail; |
2601 | 2601 | |
2602 | ff_thread_finish_setup(s->avctx); | |
2602 | if (!s->avctx->hwaccel) | |
2603 | ff_thread_finish_setup(s->avctx); | |
2603 | 2604 | |
2604 | 2605 | return 0; |
2605 | 2606 |
102 | 102 | {"hex", "hex motion estimation", 0, AV_OPT_TYPE_CONST, {.i64 = ME_HEX }, INT_MIN, INT_MAX, V|E, "me_method" }, |
103 | 103 | {"umh", "umh motion estimation", 0, AV_OPT_TYPE_CONST, {.i64 = ME_UMH }, INT_MIN, INT_MAX, V|E, "me_method" }, |
104 | 104 | {"iter", "iter motion estimation", 0, AV_OPT_TYPE_CONST, {.i64 = ME_ITER }, INT_MIN, INT_MAX, V|E, "me_method" }, |
105 | {"extradata_size", NULL, OFFSET(extradata_size), AV_OPT_TYPE_INT, {.i64 = DEFAULT }, INT_MIN, INT_MAX}, | |
106 | 105 | {"time_base", NULL, OFFSET(time_base), AV_OPT_TYPE_RATIONAL, {.dbl = 0}, INT_MIN, INT_MAX}, |
107 | 106 | {"g", "set the group of picture (GOP) size", OFFSET(gop_size), AV_OPT_TYPE_INT, {.i64 = 12 }, INT_MIN, INT_MAX, V|E}, |
108 | 107 | {"ar", "set audio sampling rate (in Hz)", OFFSET(sample_rate), AV_OPT_TYPE_INT, {.i64 = DEFAULT }, 0, INT_MAX, A|D|E}, |
448 | 448 | int coded_samples = 0; |
449 | 449 | int decoded_samples = 0; |
450 | 450 | int i, ret; |
451 | int delayed_samples = 0; | |
452 | ||
453 | for (i = 0; i < c->nb_streams; i++) { | |
454 | OpusStreamContext *s = &c->streams[i]; | |
455 | s->out[0] = | |
456 | s->out[1] = NULL; | |
457 | delayed_samples = FFMAX(delayed_samples, s->delayed_samples); | |
458 | } | |
451 | 459 | |
452 | 460 | /* decode the header of the first sub-packet to find out the sample count */ |
453 | 461 | if (buf) { |
461 | 469 | c->streams[0].silk_samplerate = get_silk_samplerate(pkt->config); |
462 | 470 | } |
463 | 471 | |
464 | frame->nb_samples = coded_samples + c->streams[0].delayed_samples; | |
472 | frame->nb_samples = coded_samples + delayed_samples; | |
465 | 473 | |
466 | 474 | /* no input or buffered data => nothing to do */ |
467 | 475 | if (!frame->nb_samples) { |
998 | 998 | |
999 | 999 | av_lfg_init(&enc->randctx, 1); |
1000 | 1000 | |
1001 | enc->avctx = avctx; | |
1002 | ||
1001 | 1003 | enc->framesSinceKeyframe = 0; |
1002 | 1004 | if ((avctx->width & 0xf) || (avctx->height & 0xf)) { |
1003 | 1005 | av_log(avctx, AV_LOG_ERROR, "Dimensions must be divisible by 16\n"); |
837 | 837 | default: |
838 | 838 | s->bpp = -1; |
839 | 839 | } |
840 | } | |
841 | if (s->bpp > 64U) { | |
842 | av_log(s->avctx, AV_LOG_ERROR, | |
843 | "This format is not supported (bpp=%d, %d components)\n", | |
844 | s->bpp, count); | |
845 | s->bpp = 0; | |
846 | return AVERROR_INVALIDDATA; | |
847 | 840 | } |
848 | 841 | break; |
849 | 842 | case TIFF_SAMPLES_PER_PIXEL: |
1157 | 1150 | } |
1158 | 1151 | } |
1159 | 1152 | end: |
1153 | if (s->bpp > 64U) { | |
1154 | av_log(s->avctx, AV_LOG_ERROR, | |
1155 | "This format is not supported (bpp=%d, %d components)\n", | |
1156 | s->bpp, count); | |
1157 | s->bpp = 0; | |
1158 | return AVERROR_INVALIDDATA; | |
1159 | } | |
1160 | 1160 | bytestream2_seek(&s->gb, start, SEEK_SET); |
1161 | 1161 | return 0; |
1162 | 1162 | } |
373 | 373 | case AV_PIX_FMT_YUVJ411P: |
374 | 374 | case AV_PIX_FMT_UYYVYY411: |
375 | 375 | w_align = 32; |
376 | h_align = 8; | |
376 | h_align = 16 * 2; | |
377 | 377 | break; |
378 | 378 | case AV_PIX_FMT_YUV410P: |
379 | 379 | if (s->codec_id == AV_CODEC_ID_SVQ1) { |
278 | 278 | |
279 | 279 | // retain segmentation map if it doesn't update |
280 | 280 | if (s->segmentation.enabled && !s->segmentation.update_map && |
281 | !s->intraonly && !s->keyframe && !s->errorres) { | |
281 | !s->intraonly && !s->keyframe && !s->errorres && | |
282 | ctx->active_thread_type != FF_THREAD_FRAME) { | |
282 | 283 | memcpy(f->segmentation_map, s->frames[LAST_FRAME].segmentation_map, sz); |
283 | 284 | } |
284 | 285 | |
1350 | 1351 | |
1351 | 1352 | if (!s->last_uses_2pass) |
1352 | 1353 | ff_thread_await_progress(&s->frames[LAST_FRAME].tf, row >> 3, 0); |
1353 | for (y = 0; y < h4; y++) | |
1354 | for (y = 0; y < h4; y++) { | |
1355 | int idx_base = (y + row) * 8 * s->sb_cols + col; | |
1354 | 1356 | for (x = 0; x < w4; x++) |
1355 | pred = FFMIN(pred, refsegmap[(y + row) * 8 * s->sb_cols + x + col]); | |
1357 | pred = FFMIN(pred, refsegmap[idx_base + x]); | |
1358 | if (!s->segmentation.update_map && ctx->active_thread_type == FF_THREAD_FRAME) { | |
1359 | // FIXME maybe retain reference to previous frame as | |
1360 | // segmap reference instead of copying the whole map | |
1361 | // into a new buffer | |
1362 | memcpy(&s->frames[CUR_FRAME].segmentation_map[idx_base], | |
1363 | &refsegmap[idx_base], w4); | |
1364 | } | |
1365 | } | |
1356 | 1366 | av_assert1(pred < 8); |
1357 | 1367 | b->seg_id = pred; |
1358 | 1368 | } else { |
123 | 123 | const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(fmt); |
124 | 124 | if (!(desc->flags & (AV_PIX_FMT_FLAG_HWACCEL | AV_PIX_FMT_FLAG_BITSTREAM | AV_PIX_FMT_FLAG_PAL)) && |
125 | 125 | (desc->flags & AV_PIX_FMT_FLAG_PLANAR || desc->nb_components == 1) && |
126 | (!(desc->flags & AV_PIX_FMT_FLAG_BE) == !HAVE_BIGENDIAN) || desc->comp[0].depth_minus1 == 7) | |
126 | (!(desc->flags & AV_PIX_FMT_FLAG_BE) == !HAVE_BIGENDIAN || desc->comp[0].depth_minus1 == 7)) | |
127 | 127 | ff_add_format(&formats, fmt); |
128 | 128 | } |
129 | 129 |
503 | 503 | int r; |
504 | 504 | |
505 | 505 | r = ff_request_frame(inlink); |
506 | if (r == AVERROR_EOF && !s->palette_pushed) { | |
506 | if (r == AVERROR_EOF && !s->palette_pushed && s->nb_refs) { | |
507 | 507 | r = ff_filter_frame(outlink, get_palette_frame(ctx)); |
508 | 508 | s->palette_pushed = 1; |
509 | 509 | return r; |
659 | 659 | * It is needed to use asf as a streamable format. */ |
660 | 660 | if (asf_write_header1(s, 0, DATA_HEADER_SIZE) < 0) { |
661 | 661 | //av_free(asf); |
662 | av_freep(&asf->index_ptr); | |
662 | 663 | return -1; |
663 | 664 | } |
664 | 665 |
35 | 35 | #include "riff.h" |
36 | 36 | #include "libavcodec/bytestream.h" |
37 | 37 | #include "libavcodec/exif.h" |
38 | #include "libavformat/isom.h" | |
38 | 39 | |
39 | 40 | typedef struct AVIStream { |
40 | 41 | int64_t frame_offset; /* current frame (video) or byte (audio) counter |
772 | 773 | st->codec->codec_tag = tag1; |
773 | 774 | st->codec->codec_id = ff_codec_get_id(ff_codec_bmp_tags, |
774 | 775 | tag1); |
776 | if (!st->codec->codec_id) { | |
777 | st->codec->codec_id = ff_codec_get_id(ff_codec_movvideo_tags, | |
778 | tag1); | |
779 | if (st->codec->codec_id) | |
780 | av_log(s, AV_LOG_WARNING, "mov tag found in avi\n"); | |
781 | } | |
775 | 782 | /* This is needed to get the pict type which is necessary |
776 | 783 | * for generating correct pts. */ |
777 | 784 | st->need_parsing = AVSTREAM_PARSE_HEADERS; |
81 | 81 | FFMContext *ffm = s->priv_data; |
82 | 82 | AVIOContext *pb = s->pb; |
83 | 83 | int len, fill_size, size1, frame_offset, id; |
84 | int64_t last_pos = -1; | |
84 | 85 | |
85 | 86 | size1 = size; |
86 | 87 | while (size > 0) { |
100 | 101 | avio_seek(pb, tell, SEEK_SET); |
101 | 102 | } |
102 | 103 | id = avio_rb16(pb); /* PACKET_ID */ |
103 | if (id != PACKET_ID) | |
104 | if (id != PACKET_ID) { | |
104 | 105 | if (ffm_resync(s, id) < 0) |
105 | 106 | return -1; |
107 | last_pos = avio_tell(pb); | |
108 | } | |
106 | 109 | fill_size = avio_rb16(pb); |
107 | 110 | ffm->dts = avio_rb64(pb); |
108 | 111 | frame_offset = avio_rb16(pb); |
116 | 119 | if (!frame_offset) { |
117 | 120 | /* This packet has no frame headers in it */ |
118 | 121 | if (avio_tell(pb) >= ffm->packet_size * 3LL) { |
119 | avio_seek(pb, -ffm->packet_size * 2LL, SEEK_CUR); | |
122 | int64_t seekback = FFMIN(ffm->packet_size * 2LL, avio_tell(pb) - last_pos); | |
123 | seekback = FFMAX(seekback, 0); | |
124 | avio_seek(pb, -seekback, SEEK_CUR); | |
120 | 125 | goto retry_read; |
121 | 126 | } |
122 | 127 | /* This is bad, we cannot find a valid frame header */ |
260 | 265 | AVIOContext *pb = s->pb; |
261 | 266 | AVCodecContext *codec; |
262 | 267 | int ret; |
263 | int f_main = 0, f_cprv, f_stvi, f_stau; | |
268 | int f_main = 0, f_cprv = -1, f_stvi = -1, f_stau = -1; | |
264 | 269 | AVCodec *enc; |
265 | 270 | char *buffer; |
266 | 271 | |
330 | 335 | } |
331 | 336 | codec->time_base.num = avio_rb32(pb); |
332 | 337 | codec->time_base.den = avio_rb32(pb); |
338 | if (codec->time_base.num <= 0 || codec->time_base.den <= 0) { | |
339 | av_log(s, AV_LOG_ERROR, "Invalid time base %d/%d\n", | |
340 | codec->time_base.num, codec->time_base.den); | |
341 | ret = AVERROR_INVALIDDATA; | |
342 | goto fail; | |
343 | } | |
333 | 344 | codec->width = avio_rb16(pb); |
334 | 345 | codec->height = avio_rb16(pb); |
335 | 346 | codec->gop_size = avio_rb16(pb); |
433 | 444 | } |
434 | 445 | |
435 | 446 | /* get until end of block reached */ |
436 | while ((avio_tell(pb) % ffm->packet_size) != 0) | |
447 | while ((avio_tell(pb) % ffm->packet_size) != 0 && !pb->eof_reached) | |
437 | 448 | avio_r8(pb); |
438 | 449 | |
439 | 450 | /* init packet demux */ |
502 | 513 | case AVMEDIA_TYPE_VIDEO: |
503 | 514 | codec->time_base.num = avio_rb32(pb); |
504 | 515 | codec->time_base.den = avio_rb32(pb); |
516 | if (codec->time_base.num <= 0 || codec->time_base.den <= 0) { | |
517 | av_log(s, AV_LOG_ERROR, "Invalid time base %d/%d\n", | |
518 | codec->time_base.num, codec->time_base.den); | |
519 | goto fail; | |
520 | } | |
505 | 521 | codec->width = avio_rb16(pb); |
506 | 522 | codec->height = avio_rb16(pb); |
507 | 523 | codec->gop_size = avio_rb16(pb); |
560 | 576 | } |
561 | 577 | |
562 | 578 | /* get until end of block reached */ |
563 | while ((avio_tell(pb) % ffm->packet_size) != 0) | |
579 | while ((avio_tell(pb) % ffm->packet_size) != 0 && !pb->eof_reached) | |
564 | 580 | avio_r8(pb); |
565 | 581 | |
566 | 582 | /* init packet demux */ |
2599 | 2599 | /* try relative path, we do not try the absolute because it can leak information about our |
2600 | 2600 | system to an attacker */ |
2601 | 2601 | if (ref->nlvl_to > 0 && ref->nlvl_from > 0) { |
2602 | char filename[1024]; | |
2602 | char filename[1025]; | |
2603 | 2603 | const char *src_path; |
2604 | 2604 | int i, l; |
2605 | 2605 | |
2625 | 2625 | filename[src_path - src] = 0; |
2626 | 2626 | |
2627 | 2627 | for (i = 1; i < ref->nlvl_from; i++) |
2628 | av_strlcat(filename, "../", 1024); | |
2629 | ||
2630 | av_strlcat(filename, ref->path + l + 1, 1024); | |
2631 | ||
2628 | av_strlcat(filename, "../", sizeof(filename)); | |
2629 | ||
2630 | av_strlcat(filename, ref->path + l + 1, sizeof(filename)); | |
2631 | if (!use_absolute_path) | |
2632 | if(strstr(ref->path + l + 1, "..") || ref->nlvl_from > 1) | |
2633 | return AVERROR(ENOENT); | |
2634 | ||
2635 | if (strlen(filename) + 1 == sizeof(filename)) | |
2636 | return AVERROR(ENOENT); | |
2632 | 2637 | if (!avio_open2(pb, filename, AVIO_FLAG_READ, int_cb, NULL)) |
2633 | 2638 | return 0; |
2634 | 2639 | } |
2040 | 2040 | if (!*str) |
2041 | 2041 | return AVERROR(ENOMEM); |
2042 | 2042 | if (!strftime(*str, 32, "%Y-%m-%d %H:%M:%S", &time)) |
2043 | str[0] = '\0'; | |
2043 | (*str)[0] = '\0'; | |
2044 | 2044 | |
2045 | 2045 | return 0; |
2046 | 2046 | } |
361 | 361 | { AV_CODEC_ID_G2M, MKTAG('G', '2', 'M', '4') }, |
362 | 362 | { AV_CODEC_ID_G2M, MKTAG('G', '2', 'M', '5') }, |
363 | 363 | { AV_CODEC_ID_FIC, MKTAG('F', 'I', 'C', 'V') }, |
364 | { AV_CODEC_ID_PRORES, MKTAG('A', 'P', 'C', 'N') }, | |
365 | { AV_CODEC_ID_PRORES, MKTAG('A', 'P', 'C', 'H') }, | |
366 | { AV_CODEC_ID_QTRLE, MKTAG('r', 'l', 'e', ' ') }, | |
367 | 364 | { AV_CODEC_ID_HQX, MKTAG('C', 'H', 'Q', 'X') }, |
368 | 365 | { AV_CODEC_ID_NONE, 0 } |
369 | 366 | }; |
48 | 48 | #elif HAVE_ARMV5TE |
49 | 49 | .arch armv5te |
50 | 50 | #endif |
51 | #if HAVE_AS_OBJECT_ARCH | |
52 | ELF .object_arch armv4 | |
53 | #endif | |
51 | 54 | |
52 | 55 | #if HAVE_NEON |
53 | 56 | .fpu neon |
57 | ELF .eabi_attribute 10, 0 @ suppress Tag_FP_arch | |
58 | ELF .eabi_attribute 12, 0 @ suppress Tag_Advanced_SIMD_arch | |
54 | 59 | #elif HAVE_VFP |
55 | 60 | .fpu vfp |
61 | ELF .eabi_attribute 10, 0 @ suppress Tag_FP_arch | |
56 | 62 | #endif |
57 | 63 | |
58 | 64 | .syntax unified |
23 | 23 | * assembly (rather than from within .s files). |
24 | 24 | */ |
25 | 25 | |
26 | #ifndef AVCODEC_MIPS_ASMDEFS_H | |
27 | #define AVCODEC_MIPS_ASMDEFS_H | |
26 | #ifndef AVUTIL_MIPS_ASMDEFS_H | |
27 | #define AVUTIL_MIPS_ASMDEFS_H | |
28 | 28 | |
29 | #include <sgidefs.h> | |
30 | ||
31 | #if _MIPS_SIM == _ABI64 | |
29 | #if defined(_ABI64) && _MIPS_SIM == _ABI64 | |
32 | 30 | # define PTRSIZE " 8 " |
33 | 31 | # define PTRLOG " 3 " |
34 | 32 | # define PTR_ADDU "daddu " |