ffmpeg / 070a7b8
Imported Upstream version 2.6.1 (Andreas Cadhalpun)
41 changed file(s) with 266 addition(s) and 134 deletion(s).
00 Entries are sorted chronologically from oldest to youngest within each release,
11 releases are sorted from youngest to oldest.
2
3 version 2.6.1:
4 - avformat/mov: Disallow ".." in dref unless use_absolute_path is set
5 - avfilter/palettegen: make sure at least one frame was sent to the filter
6 - avformat/mov: Check for string truncation in mov_open_dref()
7 - ac3_fixed: fix out-of-bound read
8 - mips/asmdefs: use _ABI64 as defined by gcc
9 - hevc: delay ff_thread_finish_setup for hwaccel
10 - avcodec/012v: Check dimensions more completely
11 - asfenc: fix leaking asf->index_ptr on error
12 - roqvideoenc: set enc->avctx in roq_encode_init
13 - avcodec/options_table: remove extradata_size from the AVOptions table
14 - ffmdec: limit the backward seek to the last resync position
15 - Add dependencies to configure file for vf_fftfilt
16 - ffmdec: make sure the time base is valid
17 - ffmdec: fix infinite loop at EOF
18 - ffmdec: initialize f_cprv, f_stvi and f_stau
19 - arm: Suppress tags about used cpu arch and extensions
20 - mxfdec: Fix the error handling for when strftime fails
21 - avcodec/opusdec: Fix delayed sample value
22 - avcodec/opusdec: Clear out pointers per packet
23 - avcodec/utils: Align YUV411 by as much as the other YUV variants
24 - lavc/hevcdsp: Fix compilation for arm with --disable-neon.
25 - vp9: fix segmentation map retention with threading enabled.
26 - Revert "avutil/opencl: is_compiled flag not being cleared in av_opencl_uninit"
227
328 version 2.6:
429 - nvenc encoder
3459 - Fix stsd atom corruption in DNxHD QuickTimes
3560 - Canopus HQX decoder
3661 - RTP depacketization of T.140 text (RFC 4103)
37 - VP9 RTP payload format (draft 0) experimental depacketizer
3862 - Port MIPS optimizations to 64-bit
3963
4064
0 2.6
0 2.6.1
2020 10-bit support in spp, but maybe it's more important to mention the addition
2121 of colorlevels (yet another color handling filter), tblend (allowing you
2222 to for example run a diff between successive frames of a video stream), or
23 eventually the dcshift audio filter.
23 the dcshift audio filter.
2424
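As a quick illustration of the tblend use case mentioned above, a command along these lines should visualise the difference between successive frames (file names are placeholders and the exact option spelling follows the blend/tblend documentation):

    ffmpeg -i input.mp4 -vf tblend=all_mode=difference frame_diff.mp4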
25 There is also two other important filters landing in libavfilter: palettegen
26 and paletteuse, submitted by the Stupeflix company. These filters will be
27 very useful in case you are looking for creating high quality GIF, a format
28 that still bravely fights annihilation in 2015.
25 There are also two other important filters landing in libavfilter: palettegen
26 and paletteuse. Both submitted by the Stupeflix company. These filters will
27 be very useful in case you are looking for creating high quality GIFs, a
28 format that still bravely fights annihilation in 2015.
2929
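For the GIF use case above, the usual workflow with these two filters is a two-pass one; a minimal sketch (file names are placeholders, scaling and frame-rate tuning omitted) could be:

    ffmpeg -i input.mp4 -vf palettegen palette.png
    ffmpeg -i input.mp4 -i palette.png -lavfi paletteuse output.gif

The first pass computes an optimized 256-color palette, the second maps the video onto it.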
30 There are many other features, but let's follow-up on one big cleanup
30 There are many other new features, but let's follow-up on one big cleanup
3131 achievement: the libmpcodecs (MPlayer filters) wrapper is finally dead. The
3232 last remaining filters (softpulldown/repeatfields, eq*, and various
3333 postprocessing filters) were ported by Arwa Arif (OPW student) and Paul B
3434 Mahol.
3535
36 Concerning API changes, not much things to mention. Though, the introduction
37 of devices inputs and outputs listing by Lukasz Marek is a notable addition
38 (try ffmpeg -sources or ffmpeg -sinks for an example of the usage). As
39 usual, see doc/APIchanges for more information.
36 Concerning API changes, there are not many things to mention. Though, the
37 introduction of device inputs and outputs listing by Lukasz Marek is a
38 notable addition (try ffmpeg -sources or ffmpeg -sinks for an example of
39 the usage). As usual, see doc/APIchanges for more information.
4040
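Listing the sources or sinks of one particular device might look like the following, assuming the pulse device is available in your build:

    ffmpeg -sources pulse
    ffmpeg -sinks pulse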
4141 Now let's talk about optimizations. Ronald S. Bultje made the VP9 decoder
4242 usable on x86 32-bit systems and pre-ssse3 CPUs like Phenom (even dual core
4343 Athlons can play 1080p 30fps VP9 content now), so we now secretly hope for
44 Google and Mozilla to use ffvp9 instead of libvpx.
45
46 But VP9 is not the center of attention anymore, and HEVC/H.265 is also
47 getting many improvements, which includes optimizations, both in C and x86
48 ASM, mainly from James Almer, Christophe Gisquet and Pierre-Edouard Lepere.
44 Google and Mozilla to use ffvp9 instead of libvpx. But VP9 is not the
45 center of attention anymore, and HEVC/H.265 is also getting many
46 improvements, which include C and x86 ASM optimizations, mainly from James
47 Almer, Christophe Gisquet and Pierre-Edouard Lepere.
4948
5049 Even though we had many x86 contributions, it is not the only architecture
5150 getting some love, with Seppo Tomperi adding ARM NEON optimizations to the
6059 complete Git history on http://source.ffmpeg.org.
6160
6261 We hope you will like this release as much as we enjoyed working on it, and
63 as usual, if you have any question about it, or any FFmpeg related topic,
62 as usual, if you have any questions about it, or any FFmpeg related topic,
6463 feel free to join us on the #ffmpeg IRC channel (on irc.freenode.net) or ask
6564 on the mailing-lists.
0 2.6
0 2.6.1
17681768 TOOLCHAIN_FEATURES="
17691769 as_dn_directive
17701770 as_func
1771 as_object_arch
17711772 asm_mod_q
17721773 attribute_may_alias
17731774 attribute_packed
25942595 drawtext_filter_deps="libfreetype"
25952596 ebur128_filter_deps="gpl"
25962597 eq_filter_deps="gpl"
2598 fftfilt_filter_deps="avcodec"
2599 fftfilt_filter_select="rdft"
25972600 flite_filter_deps="libflite"
25982601 frei0r_filter_deps="frei0r dlopen"
25992602 frei0r_src_filter_deps="frei0r dlopen"
45614564 .unreq ra
45624565 EOF
45634566
4567 # llvm's integrated assembler supports .object_arch from llvm 3.5
4568 [ "$objformat" = elf ] && check_as <<EOF && enable as_object_arch
4569 .object_arch armv4
4570 EOF
4571
45644572 [ $target_os != win32 ] && enabled_all armv6t2 shared !pic && enable_weak_pic
45654573
45664574 elif enabled mips; then
54505458 enabled atempo_filter && prepend avfilter_deps "avcodec"
54515459 enabled ebur128_filter && enabled swresample && prepend avfilter_deps "swresample"
54525460 enabled elbg_filter && prepend avfilter_deps "avcodec"
5461 enabled fftfilt_filter && prepend avfilter_deps "avcodec"
54535462 enabled mcdeint_filter && prepend avfilter_deps "avcodec"
54545463 enabled movie_filter && prepend avfilter_deps "avformat avcodec"
54555464 enabled pan_filter && prepend avfilter_deps "swresample"
3030 # This could be handy for archiving the generated documentation or
3131 # if some version control system is used.
3232
33 PROJECT_NUMBER = 2.6
33 PROJECT_NUMBER = 2.6.1
3434
3535 # With the PROJECT_LOGO tag one can specify a logo or icon that is included
3636 # in the documentation. The maximum height of the logo should not exceed 55
348348 @code{concat}} protocol designed specifically for that, with examples in the
349349 documentation.
350350
351 A few multimedia containers (MPEG-1, MPEG-2 PS, DV) allow to concatenate
351 A few multimedia containers (MPEG-1, MPEG-2 PS, DV) allow one to concatenate
352352 video by merely concatenating the files containing them.
353353
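For those containers, plain file-level concatenation, or the concat protocol mentioned above, might be used like this (file names are placeholders):

    cat part1.mpg part2.mpg > whole.mpg
    ffmpeg -i "concat:part1.mpg|part2.mpg" -c copy whole.mpg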
354354 Hence you may concatenate your multimedia files by first transcoding them to
7171 configuration file.
7272
7373 Each feed is associated to a file which is stored on disk. This stored
74 file is used to allow to send pre-recorded data to a player as fast as
74 file is used to send pre-recorded data to a player as fast as
7575 possible when new content is added in real-time to the stream.
7676
7777 A "live-stream" or "stream" is a resource published by
34853485 may want to reduce this value, at the cost of a less effective filter and the
34863486 risk of various artefacts.
34873487
3488 If the overlapping value doesn't allow to process the whole input width or
3488 If the overlapping value doesn't permit processing the whole input width or
34893489 height, a warning will be displayed and according borders won't be denoised.
34903490
34913491 Default value is @var{blocksize}-1, which is the best possible setting.
2222
2323 @item probesize @var{integer} (@emph{input})
2424 Set probing size in bytes, i.e. the size of the data to analyze to get
25 stream information. A higher value will allow to detect more
25 stream information. A higher value will enable detecting more
2626 information in case it is dispersed into the stream, but will increase
2727 latency. Must be an integer not lesser than 32. It is 5000000 by default.
2828
6666
6767 @item analyzeduration @var{integer} (@emph{input})
6868 Specify how many microseconds are analyzed to probe the input. A
69 higher value will allow to detect more accurate information, but will
69 higher value will enable detecting more accurate information, but will
7070 increase latency. It defaults to 5,000,000 microseconds = 5 seconds.
7171
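Both options can be raised on the command line when stream information is spread far into the input; for example, with purely illustrative values:

    ffmpeg -probesize 10000000 -analyzeduration 10000000 -i input.ts -c copy output.mkv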
7272 @item cryptokey @var{hexadecimal string} (@emph{input})
00 @chapter Input Devices
11 @c man begin INPUT DEVICES
22
3 Input devices are configured elements in FFmpeg which allow to access
3 Input devices are configured elements in FFmpeg which enable accessing
44 the data coming from a multimedia device attached to your system.
55
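A minimal capture example using one such device, assuming a v4l2 webcam at /dev/video0 (both placeholders for whatever your system actually provides), might be:

    ffmpeg -f v4l2 -i /dev/video0 -t 10 capture.mkv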
66 When you configure your FFmpeg build, all the supported input devices
843843 Return 1.0 if @var{x} is NAN, 0.0 otherwise.
844844
845845 @item ld(var)
846 Allow to load the value of the internal variable with number
846 Load the value of the internal variable with number
847847 @var{var}, which was previously stored with st(@var{var}, @var{expr}).
848848 The function returns the loaded value.
849849
911911 Compute expression @code{1/(1 + exp(4*x))}.
912912
913913 @item st(var, expr)
914 Allow to store the value of the expression @var{expr} in an internal
914 Store the value of the expression @var{expr} in an internal
915915 variable. @var{var} specifies the number of the variable where to
916916 store the value, and it is a value ranging from 0 to 9. The function
917917 returns the value stored in the internal variable.
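For instance, st() and ld() can be chained within a single expression to reuse an intermediate value; the purely illustrative expression

    st(0, 2+3); ld(0)*ld(0)

stores 5 in variable 0 and then evaluates to 25, since a "expr1;expr2" sequence returns the value of the last expression.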
3737 static int zero12v_decode_frame(AVCodecContext *avctx, void *data,
3838 int *got_frame, AVPacket *avpkt)
3939 {
40 int line = 0, ret;
40 int line, ret;
4141 const int width = avctx->width;
4242 AVFrame *pic = data;
4343 uint16_t *y, *u, *v;
4444 const uint8_t *line_end, *src = avpkt->data;
4545 int stride = avctx->width * 8 / 3;
4646
47 if (width == 1) {
48 av_log(avctx, AV_LOG_ERROR, "Width 1 not supported.\n");
47 if (width <= 1 || avctx->height <= 0) {
48 av_log(avctx, AV_LOG_ERROR, "Dimensions %dx%d not supported.\n", width, avctx->height);
4949 return AVERROR_INVALIDDATA;
5050 }
5151
6666 pic->pict_type = AV_PICTURE_TYPE_I;
6767 pic->key_frame = 1;
6868
69 y = (uint16_t *)pic->data[0];
70 u = (uint16_t *)pic->data[1];
71 v = (uint16_t *)pic->data[2];
7269 line_end = avpkt->data + stride;
70 for (line = 0; line < avctx->height; line++) {
71 uint16_t y_temp[6] = {0x8000, 0x8000, 0x8000, 0x8000, 0x8000, 0x8000};
72 uint16_t u_temp[3] = {0x8000, 0x8000, 0x8000};
73 uint16_t v_temp[3] = {0x8000, 0x8000, 0x8000};
74 int x;
75 y = (uint16_t *)(pic->data[0] + line * pic->linesize[0]);
76 u = (uint16_t *)(pic->data[1] + line * pic->linesize[1]);
77 v = (uint16_t *)(pic->data[2] + line * pic->linesize[2]);
7378
74 while (line++ < avctx->height) {
75 while (1) {
76 uint32_t t = AV_RL32(src);
79 for (x = 0; x < width; x += 6) {
80 uint32_t t;
81
82 if (width - x < 6 || line_end - src < 16) {
83 y = y_temp;
84 u = u_temp;
85 v = v_temp;
86 }
87
88 if (line_end - src < 4)
89 break;
90
91 t = AV_RL32(src);
7792 src += 4;
7893 *u++ = t << 6 & 0xFFC0;
7994 *y++ = t >> 4 & 0xFFC0;
8095 *v++ = t >> 14 & 0xFFC0;
8196
82 if (src >= line_end - 1) {
83 *y = 0x80;
84 src++;
85 line_end += stride;
86 y = (uint16_t *)(pic->data[0] + line * pic->linesize[0]);
87 u = (uint16_t *)(pic->data[1] + line * pic->linesize[1]);
88 v = (uint16_t *)(pic->data[2] + line * pic->linesize[2]);
97 if (line_end - src < 4)
8998 break;
90 }
9199
92100 t = AV_RL32(src);
93101 src += 4;
94102 *y++ = t << 6 & 0xFFC0;
95103 *u++ = t >> 4 & 0xFFC0;
96104 *y++ = t >> 14 & 0xFFC0;
97 if (src >= line_end - 2) {
98 if (!(width & 1)) {
99 *y = 0x80;
100 src += 2;
101 }
102 line_end += stride;
103 y = (uint16_t *)(pic->data[0] + line * pic->linesize[0]);
104 u = (uint16_t *)(pic->data[1] + line * pic->linesize[1]);
105 v = (uint16_t *)(pic->data[2] + line * pic->linesize[2]);
105
106 if (line_end - src < 4)
106107 break;
107 }
108108
109109 t = AV_RL32(src);
110110 src += 4;
112112 *y++ = t >> 4 & 0xFFC0;
113113 *u++ = t >> 14 & 0xFFC0;
114114
115 if (src >= line_end - 1) {
116 *y = 0x80;
117 src++;
118 line_end += stride;
119 y = (uint16_t *)(pic->data[0] + line * pic->linesize[0]);
120 u = (uint16_t *)(pic->data[1] + line * pic->linesize[1]);
121 v = (uint16_t *)(pic->data[2] + line * pic->linesize[2]);
115 if (line_end - src < 4)
122116 break;
123 }
124117
125118 t = AV_RL32(src);
126119 src += 4;
128121 *v++ = t >> 4 & 0xFFC0;
129122 *y++ = t >> 14 & 0xFFC0;
130123
131 if (src >= line_end - 2) {
132 if (width & 1) {
133 *y = 0x80;
134 src += 2;
135 }
136 line_end += stride;
137 y = (uint16_t *)(pic->data[0] + line * pic->linesize[0]);
138 u = (uint16_t *)(pic->data[1] + line * pic->linesize[1]);
139 v = (uint16_t *)(pic->data[2] + line * pic->linesize[2]);
124 if (width - x < 6)
140125 break;
141 }
142126 }
127
128 if (x < width) {
129 y = x + (uint16_t *)(pic->data[0] + line * pic->linesize[0]);
130 u = x/2 + (uint16_t *)(pic->data[1] + line * pic->linesize[1]);
131 v = x/2 + (uint16_t *)(pic->data[2] + line * pic->linesize[2]);
132 memcpy(y, y_temp, sizeof(*y) * (width - x));
133 memcpy(u, u_temp, sizeof(*u) * (width - x + 1) / 2);
134 memcpy(v, v_temp, sizeof(*v) * (width - x + 1) / 2);
135 }
136
137 line_end += stride;
138 src = line_end - stride;
143139 }
144140
145141 *got_frame = 1;
216216 OBJS-$(CONFIG_DVVIDEO_ENCODER) += dvenc.o dv.o dvdata.o
217217 OBJS-$(CONFIG_DXA_DECODER) += dxa.o
218218 OBJS-$(CONFIG_DXTORY_DECODER) += dxtory.o
219 OBJS-$(CONFIG_EAC3_DECODER) += eac3dec.o eac3_data.o
219 OBJS-$(CONFIG_EAC3_DECODER) += eac3_data.o
220220 OBJS-$(CONFIG_EAC3_ENCODER) += eac3enc.o eac3_data.o
221221 OBJS-$(CONFIG_EACMV_DECODER) += eacmv.o
222222 OBJS-$(CONFIG_EAMAD_DECODER) += eamad.o eaidct.o mpeg12.o \
871871 start_subband += start_subband - 7;
872872 end_subband = get_bits(gbc, 3) + 5;
873873 #if USE_FIXED
874 s->spx_dst_end_freq = end_freq_inv_tab[end_subband];
874 s->spx_dst_end_freq = end_freq_inv_tab[end_subband-5];
875875 #endif
876876 if (end_subband > 7)
877877 end_subband += end_subband - 7;
938938 nblend = 0;
939939 sblend = 0x800000;
940940 } else if (nratio > 0x7fffff) {
941 nblend = 0x800000;
941 nblend = 14529495; // sqrt(3) in FP.23
942942 sblend = 0;
943943 } else {
944944 nblend = fixed_sqrt(nratio, 23);
242242 * Parse the E-AC-3 frame header.
243243 * This parses both the bit stream info and audio frame header.
244244 */
245 int ff_eac3_parse_header(AC3DecodeContext *s);
245 static int ff_eac3_parse_header(AC3DecodeContext *s);
246246
247247 /**
248248 * Decode mantissas in a single channel for the entire frame.
249249 * This is used when AHT mode is enabled.
250250 */
251 void ff_eac3_decode_transform_coeffs_aht_ch(AC3DecodeContext *s, int ch);
251 static void ff_eac3_decode_transform_coeffs_aht_ch(AC3DecodeContext *s, int ch);
252252
253253 /**
254254 * Apply spectral extension to each channel by copying lower frequency
255255 * coefficients to higher frequency bins and applying side information to
256256 * approximate the original high frequency signal.
257257 */
258 void ff_eac3_apply_spectral_extension(AC3DecodeContext *s);
258 static void ff_eac3_apply_spectral_extension(AC3DecodeContext *s);
259259
260260 #endif /* AVCODEC_AC3DEC_H */
163163 }
164164 }
165165
166 #include "eac3dec.c"
166167 #include "ac3dec.c"
167168
168169 static const AVOption options[] = {
2727 * Upmix delay samples from stereo to original channel layout.
2828 */
2929 #include "ac3dec.h"
30 #include "eac3dec.c"
3031 #include "ac3dec.c"
3132
3233 static const AVOption options[] = {
3636 OBJS-$(CONFIG_FLAC_DECODER) += arm/flacdsp_init_arm.o \
3737 arm/flacdsp_arm.o
3838 OBJS-$(CONFIG_FLAC_ENCODER) += arm/flacdsp_init_arm.o
39 OBJS-$(CONFIG_HEVC_DECODER) += arm/hevcdsp_init_arm.o
3940 OBJS-$(CONFIG_MLP_DECODER) += arm/mlpdsp_init_arm.o
4041 OBJS-$(CONFIG_VC1_DECODER) += arm/vc1dsp_init_arm.o
4142 OBJS-$(CONFIG_VORBIS_DECODER) += arm/vorbisdsp_init_arm.o
0 /*
1 * This file is part of FFmpeg.
2 *
3 * FFmpeg is free software; you can redistribute it and/or
4 * modify it under the terms of the GNU Lesser General Public
5 * License as published by the Free Software Foundation; either
6 * version 2.1 of the License, or (at your option) any later version.
7 *
8 * FFmpeg is distributed in the hope that it will be useful,
9 * but WITHOUT ANY WARRANTY; without even the implied warranty of
10 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
11 * Lesser General Public License for more details.
12 *
13 * You should have received a copy of the GNU Lesser General Public
14 * License along with FFmpeg; if not, write to the Free Software
15 * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
16 */
17
18 #ifndef AVCODEC_ARM_HEVCDSP_ARM_H
19 #define AVCODEC_ARM_HEVCDSP_ARM_H
20
21 #include "libavcodec/hevcdsp.h"
22
23 void ff_hevcdsp_init_neon(HEVCDSPContext *c, const int bit_depth);
24
25 #endif /* AVCODEC_ARM_HEVCDSP_ARM_H */
0 /*
1 * Copyright (c) 2014 Seppo Tomperi <seppo.tomperi@vtt.fi>
2 *
3 * This file is part of FFmpeg.
4 *
5 * FFmpeg is free software; you can redistribute it and/or
6 * modify it under the terms of the GNU Lesser General Public
7 * License as published by the Free Software Foundation; either
8 * version 2.1 of the License, or (at your option) any later version.
9 *
10 * FFmpeg is distributed in the hope that it will be useful,
11 * but WITHOUT ANY WARRANTY; without even the implied warranty of
12 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
13 * Lesser General Public License for more details.
14 *
15 * You should have received a copy of the GNU Lesser General Public
16 * License along with FFmpeg; if not, write to the Free Software
17 * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
18 */
19
20 #include "libavutil/attributes.h"
21 #include "libavutil/arm/cpu.h"
22 #include "libavcodec/hevcdsp.h"
23 #include "hevcdsp_arm.h"
24
25 av_cold void ff_hevcdsp_init_arm(HEVCDSPContext *c, const int bit_depth)
26 {
27 int cpu_flags = av_get_cpu_flags();
28
29 if (have_neon(cpu_flags))
30 ff_hevcdsp_init_neon(c, bit_depth);
31 }
2020 #include "libavutil/attributes.h"
2121 #include "libavutil/arm/cpu.h"
2222 #include "libavcodec/hevcdsp.h"
23 #include "hevcdsp_arm.h"
2324
2425 void ff_hevc_v_loop_filter_luma_neon(uint8_t *_pix, ptrdiff_t _stride, int _beta, int *_tc, uint8_t *_no_p, uint8_t *_no_q);
2526 void ff_hevc_h_loop_filter_luma_neon(uint8_t *_pix, ptrdiff_t _stride, int _beta, int *_tc, uint8_t *_no_p, uint8_t *_no_q);
140141 put_hevc_qpel_uw_neon[my][mx](dst, dststride, src, srcstride, width, height, src2, MAX_PB_SIZE);
141142 }
142143
143 static av_cold void hevcdsp_init_neon(HEVCDSPContext *c, const int bit_depth)
144 av_cold void ff_hevcdsp_init_neon(HEVCDSPContext *c, const int bit_depth)
144145 {
145 #if HAVE_NEON
146146 if (bit_depth == 8) {
147147 int x;
148148 c->hevc_v_loop_filter_luma = ff_hevc_v_loop_filter_luma_neon;
220220 c->put_hevc_qpel_uni[8][0][0] = ff_hevc_put_qpel_uw_pixels_w48_neon_8;
221221 c->put_hevc_qpel_uni[9][0][0] = ff_hevc_put_qpel_uw_pixels_w64_neon_8;
222222 }
223 #endif // HAVE_NEON
224 }
225
226 void ff_hevcdsp_init_arm(HEVCDSPContext *c, const int bit_depth)
227 {
228 int cpu_flags = av_get_cpu_flags();
229
230 if (have_neon(cpu_flags))
231 hevcdsp_init_neon(c, bit_depth);
232 }
223 }
6262
6363 #define EAC3_SR_CODE_REDUCED 3
6464
65 void ff_eac3_apply_spectral_extension(AC3DecodeContext *s)
65 static void ff_eac3_apply_spectral_extension(AC3DecodeContext *s)
6666 {
6767 int bin, bnd, ch, i;
6868 uint8_t wrapflag[SPX_MAX_BANDS]={1,0,}, num_copy_sections, copy_sizes[SPX_MAX_BANDS];
100100 for (i = 0; i < num_copy_sections; i++) {
101101 memcpy(&s->transform_coeffs[ch][bin],
102102 &s->transform_coeffs[ch][s->spx_dst_start_freq],
103 copy_sizes[i]*sizeof(float));
103 copy_sizes[i]*sizeof(INTFLOAT));
104104 bin += copy_sizes[i];
105105 }
106106
123123 bin = s->spx_src_start_freq - 2;
124124 for (bnd = 0; bnd < s->num_spx_bands; bnd++) {
125125 if (wrapflag[bnd]) {
126 float *coeffs = &s->transform_coeffs[ch][bin];
126 INTFLOAT *coeffs = &s->transform_coeffs[ch][bin];
127127 coeffs[0] *= atten_tab[0];
128128 coeffs[1] *= atten_tab[1];
129129 coeffs[2] *= atten_tab[2];
141141 for (bnd = 0; bnd < s->num_spx_bands; bnd++) {
142142 float nscale = s->spx_noise_blend[ch][bnd] * rms_energy[bnd] * (1.0f / INT32_MIN);
143143 float sscale = s->spx_signal_blend[ch][bnd];
144 #if USE_FIXED
145 // spx_noise_blend and spx_signal_blend are both FP.23
146 nscale *= 1.0 / (1<<23);
147 sscale *= 1.0 / (1<<23);
148 #endif
144149 for (i = 0; i < s->spx_band_sizes[bnd]; i++) {
145150 float noise = nscale * (int32_t)av_lfg_get(&s->dith_state);
146151 s->transform_coeffs[ch][bin] *= sscale;
194199 pre_mant[5] = even0 - odd0;
195200 }
196201
197 void ff_eac3_decode_transform_coeffs_aht_ch(AC3DecodeContext *s, int ch)
202 static void ff_eac3_decode_transform_coeffs_aht_ch(AC3DecodeContext *s, int ch)
198203 {
199204 int bin, blk, gs;
200205 int end_bap, gaq_mode;
287292 }
288293 }
289294
290 int ff_eac3_parse_header(AC3DecodeContext *s)
295 static int ff_eac3_parse_header(AC3DecodeContext *s)
291296 {
292297 int i, blk, ch;
293298 int ac3_exponent_strategy, parse_aht_info, parse_spx_atten_data;
25992599 if (ret < 0)
26002600 goto fail;
26012601
2602 ff_thread_finish_setup(s->avctx);
2602 if (!s->avctx->hwaccel)
2603 ff_thread_finish_setup(s->avctx);
26032604
26042605 return 0;
26052606
102102 {"hex", "hex motion estimation", 0, AV_OPT_TYPE_CONST, {.i64 = ME_HEX }, INT_MIN, INT_MAX, V|E, "me_method" },
103103 {"umh", "umh motion estimation", 0, AV_OPT_TYPE_CONST, {.i64 = ME_UMH }, INT_MIN, INT_MAX, V|E, "me_method" },
104104 {"iter", "iter motion estimation", 0, AV_OPT_TYPE_CONST, {.i64 = ME_ITER }, INT_MIN, INT_MAX, V|E, "me_method" },
105 {"extradata_size", NULL, OFFSET(extradata_size), AV_OPT_TYPE_INT, {.i64 = DEFAULT }, INT_MIN, INT_MAX},
106105 {"time_base", NULL, OFFSET(time_base), AV_OPT_TYPE_RATIONAL, {.dbl = 0}, INT_MIN, INT_MAX},
107106 {"g", "set the group of picture (GOP) size", OFFSET(gop_size), AV_OPT_TYPE_INT, {.i64 = 12 }, INT_MIN, INT_MAX, V|E},
108107 {"ar", "set audio sampling rate (in Hz)", OFFSET(sample_rate), AV_OPT_TYPE_INT, {.i64 = DEFAULT }, 0, INT_MAX, A|D|E},
448448 int coded_samples = 0;
449449 int decoded_samples = 0;
450450 int i, ret;
451 int delayed_samples = 0;
452
453 for (i = 0; i < c->nb_streams; i++) {
454 OpusStreamContext *s = &c->streams[i];
455 s->out[0] =
456 s->out[1] = NULL;
457 delayed_samples = FFMAX(delayed_samples, s->delayed_samples);
458 }
451459
452460 /* decode the header of the first sub-packet to find out the sample count */
453461 if (buf) {
461469 c->streams[0].silk_samplerate = get_silk_samplerate(pkt->config);
462470 }
463471
464 frame->nb_samples = coded_samples + c->streams[0].delayed_samples;
472 frame->nb_samples = coded_samples + delayed_samples;
465473
466474 /* no input or buffered data => nothing to do */
467475 if (!frame->nb_samples) {
998998
999999 av_lfg_init(&enc->randctx, 1);
10001000
1001 enc->avctx = avctx;
1002
10011003 enc->framesSinceKeyframe = 0;
10021004 if ((avctx->width & 0xf) || (avctx->height & 0xf)) {
10031005 av_log(avctx, AV_LOG_ERROR, "Dimensions must be divisible by 16\n");
837837 default:
838838 s->bpp = -1;
839839 }
840 }
841 if (s->bpp > 64U) {
842 av_log(s->avctx, AV_LOG_ERROR,
843 "This format is not supported (bpp=%d, %d components)\n",
844 s->bpp, count);
845 s->bpp = 0;
846 return AVERROR_INVALIDDATA;
847840 }
848841 break;
849842 case TIFF_SAMPLES_PER_PIXEL:
11571150 }
11581151 }
11591152 end:
1153 if (s->bpp > 64U) {
1154 av_log(s->avctx, AV_LOG_ERROR,
1155 "This format is not supported (bpp=%d, %d components)\n",
1156 s->bpp, count);
1157 s->bpp = 0;
1158 return AVERROR_INVALIDDATA;
1159 }
11601160 bytestream2_seek(&s->gb, start, SEEK_SET);
11611161 return 0;
11621162 }
373373 case AV_PIX_FMT_YUVJ411P:
374374 case AV_PIX_FMT_UYYVYY411:
375375 w_align = 32;
376 h_align = 8;
376 h_align = 16 * 2;
377377 break;
378378 case AV_PIX_FMT_YUV410P:
379379 if (s->codec_id == AV_CODEC_ID_SVQ1) {
278278
279279 // retain segmentation map if it doesn't update
280280 if (s->segmentation.enabled && !s->segmentation.update_map &&
281 !s->intraonly && !s->keyframe && !s->errorres) {
281 !s->intraonly && !s->keyframe && !s->errorres &&
282 ctx->active_thread_type != FF_THREAD_FRAME) {
282283 memcpy(f->segmentation_map, s->frames[LAST_FRAME].segmentation_map, sz);
283284 }
284285
13501351
13511352 if (!s->last_uses_2pass)
13521353 ff_thread_await_progress(&s->frames[LAST_FRAME].tf, row >> 3, 0);
1353 for (y = 0; y < h4; y++)
1354 for (y = 0; y < h4; y++) {
1355 int idx_base = (y + row) * 8 * s->sb_cols + col;
13541356 for (x = 0; x < w4; x++)
1355 pred = FFMIN(pred, refsegmap[(y + row) * 8 * s->sb_cols + x + col]);
1357 pred = FFMIN(pred, refsegmap[idx_base + x]);
1358 if (!s->segmentation.update_map && ctx->active_thread_type == FF_THREAD_FRAME) {
1359 // FIXME maybe retain reference to previous frame as
1360 // segmap reference instead of copying the whole map
1361 // into a new buffer
1362 memcpy(&s->frames[CUR_FRAME].segmentation_map[idx_base],
1363 &refsegmap[idx_base], w4);
1364 }
1365 }
13561366 av_assert1(pred < 8);
13571367 b->seg_id = pred;
13581368 } else {
123123 const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(fmt);
124124 if (!(desc->flags & (AV_PIX_FMT_FLAG_HWACCEL | AV_PIX_FMT_FLAG_BITSTREAM | AV_PIX_FMT_FLAG_PAL)) &&
125125 (desc->flags & AV_PIX_FMT_FLAG_PLANAR || desc->nb_components == 1) &&
126 (!(desc->flags & AV_PIX_FMT_FLAG_BE) == !HAVE_BIGENDIAN) || desc->comp[0].depth_minus1 == 7)
126 (!(desc->flags & AV_PIX_FMT_FLAG_BE) == !HAVE_BIGENDIAN || desc->comp[0].depth_minus1 == 7))
127127 ff_add_format(&formats, fmt);
128128 }
129129
503503 int r;
504504
505505 r = ff_request_frame(inlink);
506 if (r == AVERROR_EOF && !s->palette_pushed) {
506 if (r == AVERROR_EOF && !s->palette_pushed && s->nb_refs) {
507507 r = ff_filter_frame(outlink, get_palette_frame(ctx));
508508 s->palette_pushed = 1;
509509 return r;
659659 * It is needed to use asf as a streamable format. */
660660 if (asf_write_header1(s, 0, DATA_HEADER_SIZE) < 0) {
661661 //av_free(asf);
662 av_freep(&asf->index_ptr);
662663 return -1;
663664 }
664665
3535 #include "riff.h"
3636 #include "libavcodec/bytestream.h"
3737 #include "libavcodec/exif.h"
38 #include "libavformat/isom.h"
3839
3940 typedef struct AVIStream {
4041 int64_t frame_offset; /* current frame (video) or byte (audio) counter
772773 st->codec->codec_tag = tag1;
773774 st->codec->codec_id = ff_codec_get_id(ff_codec_bmp_tags,
774775 tag1);
776 if (!st->codec->codec_id) {
777 st->codec->codec_id = ff_codec_get_id(ff_codec_movvideo_tags,
778 tag1);
779 if (st->codec->codec_id)
780 av_log(s, AV_LOG_WARNING, "mov tag found in avi\n");
781 }
775782 /* This is needed to get the pict type which is necessary
776783 * for generating correct pts. */
777784 st->need_parsing = AVSTREAM_PARSE_HEADERS;
8181 FFMContext *ffm = s->priv_data;
8282 AVIOContext *pb = s->pb;
8383 int len, fill_size, size1, frame_offset, id;
84 int64_t last_pos = -1;
8485
8586 size1 = size;
8687 while (size > 0) {
100101 avio_seek(pb, tell, SEEK_SET);
101102 }
102103 id = avio_rb16(pb); /* PACKET_ID */
103 if (id != PACKET_ID)
104 if (id != PACKET_ID) {
104105 if (ffm_resync(s, id) < 0)
105106 return -1;
107 last_pos = avio_tell(pb);
108 }
106109 fill_size = avio_rb16(pb);
107110 ffm->dts = avio_rb64(pb);
108111 frame_offset = avio_rb16(pb);
116119 if (!frame_offset) {
117120 /* This packet has no frame headers in it */
118121 if (avio_tell(pb) >= ffm->packet_size * 3LL) {
119 avio_seek(pb, -ffm->packet_size * 2LL, SEEK_CUR);
122 int64_t seekback = FFMIN(ffm->packet_size * 2LL, avio_tell(pb) - last_pos);
123 seekback = FFMAX(seekback, 0);
124 avio_seek(pb, -seekback, SEEK_CUR);
120125 goto retry_read;
121126 }
122127 /* This is bad, we cannot find a valid frame header */
260265 AVIOContext *pb = s->pb;
261266 AVCodecContext *codec;
262267 int ret;
263 int f_main = 0, f_cprv, f_stvi, f_stau;
268 int f_main = 0, f_cprv = -1, f_stvi = -1, f_stau = -1;
264269 AVCodec *enc;
265270 char *buffer;
266271
330335 }
331336 codec->time_base.num = avio_rb32(pb);
332337 codec->time_base.den = avio_rb32(pb);
338 if (codec->time_base.num <= 0 || codec->time_base.den <= 0) {
339 av_log(s, AV_LOG_ERROR, "Invalid time base %d/%d\n",
340 codec->time_base.num, codec->time_base.den);
341 ret = AVERROR_INVALIDDATA;
342 goto fail;
343 }
333344 codec->width = avio_rb16(pb);
334345 codec->height = avio_rb16(pb);
335346 codec->gop_size = avio_rb16(pb);
433444 }
434445
435446 /* get until end of block reached */
436 while ((avio_tell(pb) % ffm->packet_size) != 0)
447 while ((avio_tell(pb) % ffm->packet_size) != 0 && !pb->eof_reached)
437448 avio_r8(pb);
438449
439450 /* init packet demux */
502513 case AVMEDIA_TYPE_VIDEO:
503514 codec->time_base.num = avio_rb32(pb);
504515 codec->time_base.den = avio_rb32(pb);
516 if (codec->time_base.num <= 0 || codec->time_base.den <= 0) {
517 av_log(s, AV_LOG_ERROR, "Invalid time base %d/%d\n",
518 codec->time_base.num, codec->time_base.den);
519 goto fail;
520 }
505521 codec->width = avio_rb16(pb);
506522 codec->height = avio_rb16(pb);
507523 codec->gop_size = avio_rb16(pb);
560576 }
561577
562578 /* get until end of block reached */
563 while ((avio_tell(pb) % ffm->packet_size) != 0)
579 while ((avio_tell(pb) % ffm->packet_size) != 0 && !pb->eof_reached)
564580 avio_r8(pb);
565581
566582 /* init packet demux */
25992599 /* try relative path, we do not try the absolute because it can leak information about our
26002600 system to an attacker */
26012601 if (ref->nlvl_to > 0 && ref->nlvl_from > 0) {
2602 char filename[1024];
2602 char filename[1025];
26032603 const char *src_path;
26042604 int i, l;
26052605
26252625 filename[src_path - src] = 0;
26262626
26272627 for (i = 1; i < ref->nlvl_from; i++)
2628 av_strlcat(filename, "../", 1024);
2629
2630 av_strlcat(filename, ref->path + l + 1, 1024);
2631
2628 av_strlcat(filename, "../", sizeof(filename));
2629
2630 av_strlcat(filename, ref->path + l + 1, sizeof(filename));
2631 if (!use_absolute_path)
2632 if(strstr(ref->path + l + 1, "..") || ref->nlvl_from > 1)
2633 return AVERROR(ENOENT);
2634
2635 if (strlen(filename) + 1 == sizeof(filename))
2636 return AVERROR(ENOENT);
26322637 if (!avio_open2(pb, filename, AVIO_FLAG_READ, int_cb, NULL))
26332638 return 0;
26342639 }
20402040 if (!*str)
20412041 return AVERROR(ENOMEM);
20422042 if (!strftime(*str, 32, "%Y-%m-%d %H:%M:%S", &time))
2043 str[0] = '\0';
2043 (*str)[0] = '\0';
20442044
20452045 return 0;
20462046 }
361361 { AV_CODEC_ID_G2M, MKTAG('G', '2', 'M', '4') },
362362 { AV_CODEC_ID_G2M, MKTAG('G', '2', 'M', '5') },
363363 { AV_CODEC_ID_FIC, MKTAG('F', 'I', 'C', 'V') },
364 { AV_CODEC_ID_PRORES, MKTAG('A', 'P', 'C', 'N') },
365 { AV_CODEC_ID_PRORES, MKTAG('A', 'P', 'C', 'H') },
366 { AV_CODEC_ID_QTRLE, MKTAG('r', 'l', 'e', ' ') },
367364 { AV_CODEC_ID_HQX, MKTAG('C', 'H', 'Q', 'X') },
368365 { AV_CODEC_ID_NONE, 0 }
369366 };
4848 #elif HAVE_ARMV5TE
4949 .arch armv5te
5050 #endif
51 #if HAVE_AS_OBJECT_ARCH
52 ELF .object_arch armv4
53 #endif
5154
5255 #if HAVE_NEON
5356 .fpu neon
57 ELF .eabi_attribute 10, 0 @ suppress Tag_FP_arch
58 ELF .eabi_attribute 12, 0 @ suppress Tag_Advanced_SIMD_arch
5459 #elif HAVE_VFP
5560 .fpu vfp
61 ELF .eabi_attribute 10, 0 @ suppress Tag_FP_arch
5662 #endif
5763
5864 .syntax unified
2323 * assembly (rather than from within .s files).
2424 */
2525
26 #ifndef AVCODEC_MIPS_ASMDEFS_H
27 #define AVCODEC_MIPS_ASMDEFS_H
26 #ifndef AVUTIL_MIPS_ASMDEFS_H
27 #define AVUTIL_MIPS_ASMDEFS_H
2828
29 #include <sgidefs.h>
30
31 #if _MIPS_SIM == _ABI64
29 #if defined(_ABI64) && _MIPS_SIM == _ABI64
3230 # define PTRSIZE " 8 "
3331 # define PTRLOG " 3 "
3432 # define PTR_ADDU "daddu "
610610 }
611611 opencl_ctx.context = NULL;
612612 }
613 for (i = 0; i < opencl_ctx.kernel_code_count; i++) {
614 opencl_ctx.kernel_code[i].is_compiled = 0;
615 }
616613 free_device_list(&opencl_ctx.device_list);
617614 end:
618615 if (opencl_ctx.init_count <= 0)