remove obsolete files

Author: mgthepro
Date: 2022-07-31 12:49:57 +02:00
Parent: 117076fd71
Commit: de846fb71e
3631 changed files with 0 additions and 9433291 deletions


@@ -1,79 +0,0 @@
# FFmpeg project
## Organisation
The FFmpeg project is organized as a community working toward global consensus.
Decisions are made by the ensemble of active members through voting, and are
aided by two committees.
## General Assembly
The ensemble of active members is called the General Assembly (GA).
The General Assembly is sovereign and legitimate for all its decisions
regarding the FFmpeg project.
The General Assembly is made up of active contributors.
Contributors are considered "active contributors" if they have pushed more
than 20 patches in the last 36 months in the main FFmpeg repository, or
if they have been voted in by the GA.
Additional members are added to the General Assembly through a vote after
proposal by a member of the General Assembly.
They are part of the GA for two years, after which they need a confirmation by
the GA.
## Voting
Voting is done using a ranked voting system, currently running on https://vote.ffmpeg.org/.
A majority vote means more than 50% of the expressed ballots; for example, with
21 expressed ballots, a majority requires at least 11.
## Technical Committee
The Technical Committee (TC) is here to arbitrate and make decisions when
technical conflicts occur in the project.
It will consider the merits of all positions, judge them, and make a decision.
The TC resolves technical conflicts but is not a technical steering committee.
Decisions by the TC are binding for all the contributors.
Decisions made by the TC can be re-opened after 1 year or by a majority vote
of the General Assembly, requested by one of the members of the GA.
The TC is elected by the General Assembly for a duration of 1 year, and
is composed of 5 members.
Members can be re-elected if they wish. A majority vote in the General Assembly
can trigger a new election of the TC.
The members of the TC can be elected from outside of the GA.
Candidates for election can either be suggested or self-nominated.
The conflict resolution process is detailed in the [resolution process](resolution_process.md) document.
## Community committee
The Community Committee (CC) is here to arbitrate and make decisions when
inter-personal conflicts occur in the project. It will decide quickly and
take action, for the sake of the project.
The CC can remove privileges of offending members, including removal of
commit access and temporary ban from the community.
Decisions made by the CC can be re-opened after 1 year or by a majority vote
of the General Assembly. Indefinite bans from the community must be confirmed
by the General Assembly, in a majority vote.
The CC is elected by the General Assembly for a duration of 1 year, and is
composed of 5 members.
Members can be re-elected if they wish. A majority vote in the General Assembly
can trigger a new election of the CC.
The members of the CC can be elected from outside of the GA.
Candidates for election can either be suggested or self-nominated.
The CC is governed by and responsible for enforcing the Code of Conduct.


@@ -1,91 +0,0 @@
# Technical Committee
_This document only makes sense with the rules from [the community document](community)_.
The Technical Committee (**TC**) is here to arbitrate and make decisions when
technical conflicts occur in the project.
The TC's main role is to resolve technical conflicts.
It is therefore not a technical steering committee, but it is understood that
some decisions might impact the future of the project.
# Process
## Seizing
The TC can take possession of any technical matter it sees fit.
To involve the TC in a matter, email tc@ or CC them on an ongoing discussion.
As members of the TC are developers, they can also email tc@ to raise an issue.
## Announcement
The TC, once seized, must announce itself on the main mailing list, with a _[TC]_ tag.
The TC has two modes of operation: an RFC one and an internal one.
If the TC thinks it needs input from the larger community, it can call
for an RFC. Otherwise, it can decide by itself.
If the disagreement involves a member of the TC, that member should recuse
themselves from the decision.
The decision to use an RFC process or an internal discussion is a discretionary
decision of the TC.
The TC can also reject a seizure for reasons such as:
the matter was not discussed enough previously; the TC lacks the expertise to
reach a beneficial decision on the matter; or the matter is too trivial.
### RFC call
In the RFC mode, one person from the TC posts the technical question on the
mailing list and requests input from the community.
The mail will have the following specification:
* a precise title
* a specific tag [TC RFC]
* a top-level email
* contain a precise question that does not exceed 100 words and that is answerable by developers
* may have an extra description, or a link to a previous discussion, if deemed necessary
* contain a precise end date for the answers
The answers from the community must be on the main mailing list and must have
the following specification:
* keep the tag and the title unchanged
* be limited to 400 words
* be a first-level reply, answering directly to the main email
* answer the question
Further replies to answers are permitted, as long as they conform to the
community standards of politeness, are limited to 100 words, and are not
nested more than once (max depth = 2).
After the end date, mails on the thread will be ignored.
Violations of those rules will be escalated through the Community Committee.
After all the emails are in, the TC has 96 hours to give its final decision.
Exceptionally, the TC can request an extra delay, which will be announced on the
mailing list.
### Within TC
In the internal case, the TC has 96 hours to give its final decision.
Exceptionally, the TC can request an extra delay.
## Decisions
The decisions from the TC will be sent on the mailing list, with the _[TC]_ tag.
Internally, the TC should take decisions with a majority, or using
ranked-choice voting.
The decision from the TC should be published with a summary of the reasons that
led to it.
The decisions from the TC are final, until the matters are reopened after
no less than one year.

File diff suppressed because it is too large


@@ -1,336 +0,0 @@
/*
* Functions common to fixed/float MPEG-4 Parametric Stereo decoding
* Copyright (c) 2010 Alex Converse <alex.converse@gmail.com>
*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include <stdint.h>
#include "libavutil/common.h"
#include "libavutil/thread.h"
#include "aacps.h"
#include "get_bits.h"
#include "aacpsdata.c"
static const int8_t num_env_tab[2][4] = {
{ 0, 1, 2, 4, },
{ 1, 2, 3, 4, },
};
static const int8_t nr_iidicc_par_tab[] = {
10, 20, 34, 10, 20, 34,
};
static const int8_t nr_iidopd_par_tab[] = {
5, 11, 17, 5, 11, 17,
};
enum {
huff_iid_df1,
huff_iid_dt1,
huff_iid_df0,
huff_iid_dt0,
huff_icc_df,
huff_icc_dt,
huff_ipd_df,
huff_ipd_dt,
huff_opd_df,
huff_opd_dt,
};
static const int huff_iid[] = {
huff_iid_df0,
huff_iid_df1,
huff_iid_dt0,
huff_iid_dt1,
};
static VLC vlc_ps[10];
#define READ_PAR_DATA(PAR, OFFSET, MASK, ERR_CONDITION, NB_BITS, MAX_DEPTH) \
/** \
* Read Inter-channel Intensity Difference/Inter-Channel Coherence/ \
* Inter-channel Phase Difference/Overall Phase Difference parameters from the \
* bitstream. \
* \
* @param avctx contains the current codec context \
* @param gb pointer to the input bitstream \
* @param ps pointer to the Parametric Stereo context \
* @param PAR pointer to the parameter to be read \
* @param e envelope to decode \
* @param dt 1: time delta-coded, 0: frequency delta-coded \
*/ \
static int read_ ## PAR ## _data(AVCodecContext *avctx, GetBitContext *gb, PSCommonContext *ps, \
int8_t (*PAR)[PS_MAX_NR_IIDICC], int table_idx, int e, int dt) \
{ \
int b, num = ps->nr_ ## PAR ## _par; \
VLC_TYPE (*vlc_table)[2] = vlc_ps[table_idx].table; \
if (dt) { \
int e_prev = e ? e - 1 : ps->num_env_old - 1; \
e_prev = FFMAX(e_prev, 0); \
for (b = 0; b < num; b++) { \
int val = PAR[e_prev][b] + get_vlc2(gb, vlc_table, NB_BITS, MAX_DEPTH) - OFFSET; \
if (MASK) val &= MASK; \
PAR[e][b] = val; \
if (ERR_CONDITION) \
goto err; \
} \
} else { \
int val = 0; \
for (b = 0; b < num; b++) { \
val += get_vlc2(gb, vlc_table, NB_BITS, MAX_DEPTH) - OFFSET; \
if (MASK) val &= MASK; \
PAR[e][b] = val; \
if (ERR_CONDITION) \
goto err; \
} \
} \
return 0; \
err: \
av_log(avctx, AV_LOG_ERROR, "illegal "#PAR"\n"); \
return AVERROR_INVALIDDATA; \
}
READ_PAR_DATA(iid, huff_offset[table_idx], 0, FFABS(ps->iid_par[e][b]) > 7 + 8 * ps->iid_quant, 9, 3)
READ_PAR_DATA(icc, huff_offset[table_idx], 0, ps->icc_par[e][b] > 7U, 9, 2)
READ_PAR_DATA(ipdopd, 0, 0x07, 0, 5, 1)
static int ps_read_extension_data(GetBitContext *gb, PSCommonContext *ps,
int ps_extension_id)
{
int e;
int count = get_bits_count(gb);
if (ps_extension_id)
return 0;
ps->enable_ipdopd = get_bits1(gb);
if (ps->enable_ipdopd) {
for (e = 0; e < ps->num_env; e++) {
int dt = get_bits1(gb);
read_ipdopd_data(NULL, gb, ps, ps->ipd_par, dt ? huff_ipd_dt : huff_ipd_df, e, dt);
dt = get_bits1(gb);
read_ipdopd_data(NULL, gb, ps, ps->opd_par, dt ? huff_opd_dt : huff_opd_df, e, dt);
}
}
skip_bits1(gb); //reserved_ps
return get_bits_count(gb) - count;
}
int ff_ps_read_data(AVCodecContext *avctx, GetBitContext *gb_host,
PSCommonContext *ps, int bits_left)
{
int e;
int bit_count_start = get_bits_count(gb_host);
int header;
int bits_consumed;
GetBitContext gbc = *gb_host, *gb = &gbc;
header = get_bits1(gb);
if (header) { //enable_ps_header
ps->enable_iid = get_bits1(gb);
if (ps->enable_iid) {
int iid_mode = get_bits(gb, 3);
if (iid_mode > 5) {
av_log(avctx, AV_LOG_ERROR, "iid_mode %d is reserved.\n",
iid_mode);
goto err;
}
ps->nr_iid_par = nr_iidicc_par_tab[iid_mode];
ps->iid_quant = iid_mode > 2;
ps->nr_ipdopd_par = nr_iidopd_par_tab[iid_mode];
}
ps->enable_icc = get_bits1(gb);
if (ps->enable_icc) {
ps->icc_mode = get_bits(gb, 3);
if (ps->icc_mode > 5) {
av_log(avctx, AV_LOG_ERROR, "icc_mode %d is reserved.\n",
ps->icc_mode);
goto err;
}
ps->nr_icc_par = nr_iidicc_par_tab[ps->icc_mode];
}
ps->enable_ext = get_bits1(gb);
}
ps->frame_class = get_bits1(gb);
ps->num_env_old = ps->num_env;
ps->num_env = num_env_tab[ps->frame_class][get_bits(gb, 2)];
ps->border_position[0] = -1;
if (ps->frame_class) {
for (e = 1; e <= ps->num_env; e++) {
ps->border_position[e] = get_bits(gb, 5);
if (ps->border_position[e] < ps->border_position[e-1]) {
av_log(avctx, AV_LOG_ERROR, "border_position non monotone.\n");
goto err;
}
}
} else
for (e = 1; e <= ps->num_env; e++)
ps->border_position[e] = (e * numQMFSlots >> ff_log2_tab[ps->num_env]) - 1;
if (ps->enable_iid) {
for (e = 0; e < ps->num_env; e++) {
int dt = get_bits1(gb);
if (read_iid_data(avctx, gb, ps, ps->iid_par, huff_iid[2*dt+ps->iid_quant], e, dt))
goto err;
}
} else
memset(ps->iid_par, 0, sizeof(ps->iid_par));
if (ps->enable_icc)
for (e = 0; e < ps->num_env; e++) {
int dt = get_bits1(gb);
if (read_icc_data(avctx, gb, ps, ps->icc_par, dt ? huff_icc_dt : huff_icc_df, e, dt))
goto err;
}
else
memset(ps->icc_par, 0, sizeof(ps->icc_par));
if (ps->enable_ext) {
int cnt = get_bits(gb, 4);
if (cnt == 15) {
cnt += get_bits(gb, 8);
}
cnt *= 8;
while (cnt > 7) {
int ps_extension_id = get_bits(gb, 2);
cnt -= 2 + ps_read_extension_data(gb, ps, ps_extension_id);
}
if (cnt < 0) {
av_log(avctx, AV_LOG_ERROR, "ps extension overflow %d\n", cnt);
goto err;
}
skip_bits(gb, cnt);
}
ps->enable_ipdopd &= !PS_BASELINE;
//Fix up envelopes
if (!ps->num_env || ps->border_position[ps->num_env] < numQMFSlots - 1) {
//Create a fake envelope
int source = ps->num_env ? ps->num_env - 1 : ps->num_env_old - 1;
int b;
if (source >= 0 && source != ps->num_env) {
if (ps->enable_iid) {
memcpy(ps->iid_par+ps->num_env, ps->iid_par+source, sizeof(ps->iid_par[0]));
}
if (ps->enable_icc) {
memcpy(ps->icc_par+ps->num_env, ps->icc_par+source, sizeof(ps->icc_par[0]));
}
if (ps->enable_ipdopd) {
memcpy(ps->ipd_par+ps->num_env, ps->ipd_par+source, sizeof(ps->ipd_par[0]));
memcpy(ps->opd_par+ps->num_env, ps->opd_par+source, sizeof(ps->opd_par[0]));
}
}
if (ps->enable_iid){
for (b = 0; b < ps->nr_iid_par; b++) {
if (FFABS(ps->iid_par[ps->num_env][b]) > 7 + 8 * ps->iid_quant) {
av_log(avctx, AV_LOG_ERROR, "iid_par invalid\n");
goto err;
}
}
}
if (ps->enable_icc){
for (b = 0; b < ps->nr_iid_par; b++) {
if (ps->icc_par[ps->num_env][b] > 7U) {
av_log(avctx, AV_LOG_ERROR, "icc_par invalid\n");
goto err;
}
}
}
ps->num_env++;
ps->border_position[ps->num_env] = numQMFSlots - 1;
}
ps->is34bands_old = ps->is34bands;
if (!PS_BASELINE && (ps->enable_iid || ps->enable_icc))
ps->is34bands = (ps->enable_iid && ps->nr_iid_par == 34) ||
(ps->enable_icc && ps->nr_icc_par == 34);
//Baseline
if (!ps->enable_ipdopd) {
memset(ps->ipd_par, 0, sizeof(ps->ipd_par));
memset(ps->opd_par, 0, sizeof(ps->opd_par));
}
if (header)
ps->start = 1;
bits_consumed = get_bits_count(gb) - bit_count_start;
if (bits_consumed <= bits_left) {
skip_bits_long(gb_host, bits_consumed);
return bits_consumed;
}
av_log(avctx, AV_LOG_ERROR, "Expected to read %d PS bits actually read %d.\n", bits_left, bits_consumed);
err:
ps->start = 0;
skip_bits_long(gb_host, bits_left);
memset(ps->iid_par, 0, sizeof(ps->iid_par));
memset(ps->icc_par, 0, sizeof(ps->icc_par));
memset(ps->ipd_par, 0, sizeof(ps->ipd_par));
memset(ps->opd_par, 0, sizeof(ps->opd_par));
return bits_left;
}
#define PS_INIT_VLC_STATIC(num, nb_bits, size) \
INIT_VLC_STATIC(&vlc_ps[num], nb_bits, ps_tmp[num].table_size / ps_tmp[num].elem_size, \
ps_tmp[num].ps_bits, 1, 1, \
ps_tmp[num].ps_codes, ps_tmp[num].elem_size, ps_tmp[num].elem_size, \
size);
#define PS_VLC_ROW(name) \
{ name ## _codes, name ## _bits, sizeof(name ## _codes), sizeof(name ## _codes[0]) }
static av_cold void ps_init_common(void)
{
// Syntax initialization
static const struct {
const void *ps_codes, *ps_bits;
const unsigned int table_size, elem_size;
} ps_tmp[] = {
PS_VLC_ROW(huff_iid_df1),
PS_VLC_ROW(huff_iid_dt1),
PS_VLC_ROW(huff_iid_df0),
PS_VLC_ROW(huff_iid_dt0),
PS_VLC_ROW(huff_icc_df),
PS_VLC_ROW(huff_icc_dt),
PS_VLC_ROW(huff_ipd_df),
PS_VLC_ROW(huff_ipd_dt),
PS_VLC_ROW(huff_opd_df),
PS_VLC_ROW(huff_opd_dt),
};
PS_INIT_VLC_STATIC(0, 9, 1544);
PS_INIT_VLC_STATIC(1, 9, 832);
PS_INIT_VLC_STATIC(2, 9, 1024);
PS_INIT_VLC_STATIC(3, 9, 1036);
PS_INIT_VLC_STATIC(4, 9, 544);
PS_INIT_VLC_STATIC(5, 9, 544);
PS_INIT_VLC_STATIC(6, 5, 32);
PS_INIT_VLC_STATIC(7, 5, 32);
PS_INIT_VLC_STATIC(8, 5, 32);
PS_INIT_VLC_STATIC(9, 5, 32);
}
av_cold void ff_ps_init_common(void)
{
static AVOnce init_static_once = AV_ONCE_INIT;
ff_thread_once(&init_static_once, ps_init_common);
}
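
The READ_PAR_DATA readers above all implement one delta-decoding scheme. As a
rough scalar illustration only (a hypothetical helper, not FFmpeg API):

#include <stdint.h>

/* Sketch of the decoding generated by READ_PAR_DATA:
 * dt == 1: each band is coded relative to the same band of the previous
 *          envelope (time delta);
 * dt == 0: each band is coded relative to the previous band of the same
 *          envelope (frequency delta). */
static void decode_deltas(int8_t *cur, const int8_t *prev,
                          const int8_t *delta, int num_bands, int dt)
{
    int val = 0;
    for (int b = 0; b < num_bands; b++) {
        if (dt)
            cur[b] = prev[b] + delta[b];  /* time delta-coded */
        else
            cur[b] = val += delta[b];     /* frequency delta-coded */
    }
}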


@@ -1,621 +0,0 @@
/*
* ARM NEON optimised IDCT functions for HEVC decoding
* Copyright (c) 2014 Seppo Tomperi <seppo.tomperi@vtt.fi>
* Copyright (c) 2017 Alexandra Hájková
*
* Ported from arm/hevcdsp_idct_neon.S by
* Copyright (c) 2020 Reimar Döffinger
* Copyright (c) 2020 Josh Dekker
*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include "libavutil/aarch64/asm.S"
const trans, align=4
.short 64, 83, 64, 36
.short 89, 75, 50, 18
.short 90, 87, 80, 70
.short 57, 43, 25, 9
.short 90, 90, 88, 85
.short 82, 78, 73, 67
.short 61, 54, 46, 38
.short 31, 22, 13, 4
endconst
.macro clip10 in1, in2, c1, c2
smax \in1, \in1, \c1
smax \in2, \in2, \c1
smin \in1, \in1, \c2
smin \in2, \in2, \c2
.endm
function ff_hevc_add_residual_4x4_8_neon, export=1
ld1 {v0.8h-v1.8h}, [x1]
ld1 {v2.s}[0], [x0], x2
ld1 {v2.s}[1], [x0], x2
ld1 {v2.s}[2], [x0], x2
ld1 {v2.s}[3], [x0], x2
sub x0, x0, x2, lsl #2
uxtl v6.8h, v2.8b
uxtl2 v7.8h, v2.16b
sqadd v0.8h, v0.8h, v6.8h
sqadd v1.8h, v1.8h, v7.8h
sqxtun v0.8b, v0.8h
sqxtun2 v0.16b, v1.8h
st1 {v0.s}[0], [x0], x2
st1 {v0.s}[1], [x0], x2
st1 {v0.s}[2], [x0], x2
st1 {v0.s}[3], [x0], x2
ret
endfunc
function ff_hevc_add_residual_4x4_10_neon, export=1
mov x12, x0
ld1 {v0.8h-v1.8h}, [x1]
ld1 {v2.d}[0], [x12], x2
ld1 {v2.d}[1], [x12], x2
ld1 {v3.d}[0], [x12], x2
sqadd v0.8h, v0.8h, v2.8h
ld1 {v3.d}[1], [x12], x2
movi v4.8h, #0
sqadd v1.8h, v1.8h, v3.8h
mvni v5.8h, #0xFC, lsl #8 // movi #0x3FF
clip10 v0.8h, v1.8h, v4.8h, v5.8h
st1 {v0.d}[0], [x0], x2
st1 {v0.d}[1], [x0], x2
st1 {v1.d}[0], [x0], x2
st1 {v1.d}[1], [x0], x2
ret
endfunc
function ff_hevc_add_residual_8x8_8_neon, export=1
add x12, x0, x2
add x2, x2, x2
mov x3, #8
1: subs x3, x3, #2
ld1 {v2.d}[0], [x0]
ld1 {v2.d}[1], [x12]
uxtl v3.8h, v2.8b
ld1 {v0.8h-v1.8h}, [x1], #32
uxtl2 v2.8h, v2.16b
sqadd v0.8h, v0.8h, v3.8h
sqadd v1.8h, v1.8h, v2.8h
sqxtun v0.8b, v0.8h
sqxtun2 v0.16b, v1.8h
st1 {v0.d}[0], [x0], x2
st1 {v0.d}[1], [x12], x2
bne 1b
ret
endfunc
function ff_hevc_add_residual_8x8_10_neon, export=1
add x12, x0, x2
add x2, x2, x2
mov x3, #8
movi v4.8h, #0
mvni v5.8h, #0xFC, lsl #8 // movi #0x3FF
1: subs x3, x3, #2
ld1 {v0.8h-v1.8h}, [x1], #32
ld1 {v2.8h}, [x0]
sqadd v0.8h, v0.8h, v2.8h
ld1 {v3.8h}, [x12]
sqadd v1.8h, v1.8h, v3.8h
clip10 v0.8h, v1.8h, v4.8h, v5.8h
st1 {v0.8h}, [x0], x2
st1 {v1.8h}, [x12], x2
bne 1b
ret
endfunc
function ff_hevc_add_residual_16x16_8_neon, export=1
mov x3, #16
add x12, x0, x2
add x2, x2, x2
1: subs x3, x3, #2
ld1 {v16.16b}, [x0]
ld1 {v0.8h-v3.8h}, [x1], #64
ld1 {v19.16b}, [x12]
uxtl v17.8h, v16.8b
uxtl2 v18.8h, v16.16b
uxtl v20.8h, v19.8b
uxtl2 v21.8h, v19.16b
sqadd v0.8h, v0.8h, v17.8h
sqadd v1.8h, v1.8h, v18.8h
sqadd v2.8h, v2.8h, v20.8h
sqadd v3.8h, v3.8h, v21.8h
sqxtun v0.8b, v0.8h
sqxtun2 v0.16b, v1.8h
sqxtun v1.8b, v2.8h
sqxtun2 v1.16b, v3.8h
st1 {v0.16b}, [x0], x2
st1 {v1.16b}, [x12], x2
bne 1b
ret
endfunc
function ff_hevc_add_residual_16x16_10_neon, export=1
mov x3, #16
movi v20.8h, #0
mvni v21.8h, #0xFC, lsl #8 // movi #0x3FF
add x12, x0, x2
add x2, x2, x2
1: subs x3, x3, #2
ld1 {v16.8h-v17.8h}, [x0]
ld1 {v0.8h-v3.8h}, [x1], #64
sqadd v0.8h, v0.8h, v16.8h
ld1 {v18.8h-v19.8h}, [x12]
sqadd v1.8h, v1.8h, v17.8h
sqadd v2.8h, v2.8h, v18.8h
sqadd v3.8h, v3.8h, v19.8h
clip10 v0.8h, v1.8h, v20.8h, v21.8h
clip10 v2.8h, v3.8h, v20.8h, v21.8h
st1 {v0.8h-v1.8h}, [x0], x2
st1 {v2.8h-v3.8h}, [x12], x2
bne 1b
ret
endfunc
function ff_hevc_add_residual_32x32_8_neon, export=1
add x12, x0, x2
add x2, x2, x2
mov x3, #32
1: subs x3, x3, #2
ld1 {v20.16b, v21.16b}, [x0]
uxtl v16.8h, v20.8b
uxtl2 v17.8h, v20.16b
ld1 {v22.16b, v23.16b}, [x12]
uxtl v18.8h, v21.8b
uxtl2 v19.8h, v21.16b
uxtl v20.8h, v22.8b
ld1 {v0.8h-v3.8h}, [x1], #64
ld1 {v4.8h-v7.8h}, [x1], #64
uxtl2 v21.8h, v22.16b
uxtl v22.8h, v23.8b
uxtl2 v23.8h, v23.16b
sqadd v0.8h, v0.8h, v16.8h
sqadd v1.8h, v1.8h, v17.8h
sqadd v2.8h, v2.8h, v18.8h
sqadd v3.8h, v3.8h, v19.8h
sqadd v4.8h, v4.8h, v20.8h
sqadd v5.8h, v5.8h, v21.8h
sqadd v6.8h, v6.8h, v22.8h
sqadd v7.8h, v7.8h, v23.8h
sqxtun v0.8b, v0.8h
sqxtun2 v0.16b, v1.8h
sqxtun v1.8b, v2.8h
sqxtun2 v1.16b, v3.8h
sqxtun v2.8b, v4.8h
sqxtun2 v2.16b, v5.8h
st1 {v0.16b, v1.16b}, [x0], x2
sqxtun v3.8b, v6.8h
sqxtun2 v3.16b, v7.8h
st1 {v2.16b, v3.16b}, [x12], x2
bne 1b
ret
endfunc
function ff_hevc_add_residual_32x32_10_neon, export=1
mov x3, #32
movi v20.8h, #0
mvni v21.8h, #0xFC, lsl #8 // movi #0x3FF
1: subs x3, x3, #1
ld1 {v0.8h-v3.8h}, [x1], #64
ld1 {v16.8h-v19.8h}, [x0]
sqadd v0.8h, v0.8h, v16.8h
sqadd v1.8h, v1.8h, v17.8h
sqadd v2.8h, v2.8h, v18.8h
sqadd v3.8h, v3.8h, v19.8h
clip10 v0.8h, v1.8h, v20.8h, v21.8h
clip10 v2.8h, v3.8h, v20.8h, v21.8h
st1 {v0.8h-v3.8h}, [x0], x2
bne 1b
ret
endfunc
.macro sum_sub out, in, c, op, p
.ifc \op, +
smlal\p \out, \in, \c
.else
smlsl\p \out, \in, \c
.endif
.endm
.macro fixsqrshrn d, dt, n, m
.ifc \dt, .8h
sqrshrn2 \d\dt, \n\().4s, \m
.else
sqrshrn \n\().4h, \n\().4s, \m
mov \d\().d[0], \n\().d[0]
.endif
.endm
// uses and clobbers v28-v31 as temp registers
.macro tr_4x4_8 in0, in1, in2, in3, out0, out1, out2, out3, p1, p2
sshll\p1 v28.4s, \in0, #6
mov v29.16b, v28.16b
smull\p1 v30.4s, \in1, v0.h[1]
smull\p1 v31.4s, \in1, v0.h[3]
smlal\p2 v28.4s, \in2, v0.h[0] //e0
smlsl\p2 v29.4s, \in2, v0.h[0] //e1
smlal\p2 v30.4s, \in3, v0.h[3] //o0
smlsl\p2 v31.4s, \in3, v0.h[1] //o1
add \out0, v28.4s, v30.4s
add \out1, v29.4s, v31.4s
sub \out2, v29.4s, v31.4s
sub \out3, v28.4s, v30.4s
.endm
.macro transpose8_4x4 r0, r1, r2, r3
trn1 v2.8h, \r0\().8h, \r1\().8h
trn2 v3.8h, \r0\().8h, \r1\().8h
trn1 v4.8h, \r2\().8h, \r3\().8h
trn2 v5.8h, \r2\().8h, \r3\().8h
trn1 \r0\().4s, v2.4s, v4.4s
trn2 \r2\().4s, v2.4s, v4.4s
trn1 \r1\().4s, v3.4s, v5.4s
trn2 \r3\().4s, v3.4s, v5.4s
.endm
.macro transpose_8x8 r0, r1, r2, r3, r4, r5, r6, r7
transpose8_4x4 \r0, \r1, \r2, \r3
transpose8_4x4 \r4, \r5, \r6, \r7
.endm
.macro tr_8x4 shift, in0,in0t, in1,in1t, in2,in2t, in3,in3t, in4,in4t, in5,in5t, in6,in6t, in7,in7t, p1, p2
tr_4x4_8 \in0\in0t, \in2\in2t, \in4\in4t, \in6\in6t, v24.4s, v25.4s, v26.4s, v27.4s, \p1, \p2
smull\p1 v30.4s, \in1\in1t, v0.h[6]
smull\p1 v28.4s, \in1\in1t, v0.h[4]
smull\p1 v29.4s, \in1\in1t, v0.h[5]
sum_sub v30.4s, \in3\in3t, v0.h[4], -, \p1
sum_sub v28.4s, \in3\in3t, v0.h[5], +, \p1
sum_sub v29.4s, \in3\in3t, v0.h[7], -, \p1
sum_sub v30.4s, \in5\in5t, v0.h[7], +, \p2
sum_sub v28.4s, \in5\in5t, v0.h[6], +, \p2
sum_sub v29.4s, \in5\in5t, v0.h[4], -, \p2
sum_sub v30.4s, \in7\in7t, v0.h[5], +, \p2
sum_sub v28.4s, \in7\in7t, v0.h[7], +, \p2
sum_sub v29.4s, \in7\in7t, v0.h[6], -, \p2
add v31.4s, v26.4s, v30.4s
sub v26.4s, v26.4s, v30.4s
fixsqrshrn \in2,\in2t, v31, \shift
smull\p1 v31.4s, \in1\in1t, v0.h[7]
sum_sub v31.4s, \in3\in3t, v0.h[6], -, \p1
sum_sub v31.4s, \in5\in5t, v0.h[5], +, \p2
sum_sub v31.4s, \in7\in7t, v0.h[4], -, \p2
fixsqrshrn \in5,\in5t, v26, \shift
add v26.4s, v24.4s, v28.4s
sub v24.4s, v24.4s, v28.4s
add v28.4s, v25.4s, v29.4s
sub v25.4s, v25.4s, v29.4s
add v30.4s, v27.4s, v31.4s
sub v27.4s, v27.4s, v31.4s
fixsqrshrn \in0,\in0t, v26, \shift
fixsqrshrn \in7,\in7t, v24, \shift
fixsqrshrn \in1,\in1t, v28, \shift
fixsqrshrn \in6,\in6t, v25, \shift
fixsqrshrn \in3,\in3t, v30, \shift
fixsqrshrn \in4,\in4t, v27, \shift
.endm
.macro idct_8x8 bitdepth
function ff_hevc_idct_8x8_\bitdepth\()_neon, export=1
//x0 - coeffs
mov x1, x0
ld1 {v16.8h-v19.8h}, [x1], #64
ld1 {v20.8h-v23.8h}, [x1]
movrel x1, trans
ld1 {v0.8h}, [x1]
tr_8x4 7, v16,.4h, v17,.4h, v18,.4h, v19,.4h, v20,.4h, v21,.4h, v22,.4h, v23,.4h
tr_8x4 7, v16,.8h, v17,.8h, v18,.8h, v19,.8h, v20,.8h, v21,.8h, v22,.8h, v23,.8h, 2, 2
transpose_8x8 v16, v17, v18, v19, v20, v21, v22, v23
tr_8x4 20 - \bitdepth, v16,.4h, v17,.4h, v18,.4h, v19,.4h, v16,.8h, v17,.8h, v18,.8h, v19,.8h, , 2
tr_8x4 20 - \bitdepth, v20,.4h, v21,.4h, v22,.4h, v23,.4h, v20,.8h, v21,.8h, v22,.8h, v23,.8h, , 2
transpose_8x8 v16, v17, v18, v19, v20, v21, v22, v23
mov x1, x0
st1 {v16.8h-v19.8h}, [x1], #64
st1 {v20.8h-v23.8h}, [x1]
ret
endfunc
.endm
.macro butterfly e, o, tmp_p, tmp_m
add \tmp_p, \e, \o
sub \tmp_m, \e, \o
.endm
.macro tr16_8x4 in0, in1, in2, in3, offset
tr_4x4_8 \in0\().4h, \in1\().4h, \in2\().4h, \in3\().4h, v24.4s, v25.4s, v26.4s, v27.4s
smull2 v28.4s, \in0\().8h, v0.h[4]
smull2 v29.4s, \in0\().8h, v0.h[5]
smull2 v30.4s, \in0\().8h, v0.h[6]
smull2 v31.4s, \in0\().8h, v0.h[7]
sum_sub v28.4s, \in1\().8h, v0.h[5], +, 2
sum_sub v29.4s, \in1\().8h, v0.h[7], -, 2
sum_sub v30.4s, \in1\().8h, v0.h[4], -, 2
sum_sub v31.4s, \in1\().8h, v0.h[6], -, 2
sum_sub v28.4s, \in2\().8h, v0.h[6], +, 2
sum_sub v29.4s, \in2\().8h, v0.h[4], -, 2
sum_sub v30.4s, \in2\().8h, v0.h[7], +, 2
sum_sub v31.4s, \in2\().8h, v0.h[5], +, 2
sum_sub v28.4s, \in3\().8h, v0.h[7], +, 2
sum_sub v29.4s, \in3\().8h, v0.h[6], -, 2
sum_sub v30.4s, \in3\().8h, v0.h[5], +, 2
sum_sub v31.4s, \in3\().8h, v0.h[4], -, 2
butterfly v24.4s, v28.4s, v16.4s, v23.4s
butterfly v25.4s, v29.4s, v17.4s, v22.4s
butterfly v26.4s, v30.4s, v18.4s, v21.4s
butterfly v27.4s, v31.4s, v19.4s, v20.4s
add x4, sp, #\offset
st1 {v16.4s-v19.4s}, [x4], #64
st1 {v20.4s-v23.4s}, [x4]
.endm
.macro load16 in0, in1, in2, in3
ld1 {\in0}[0], [x1], x2
ld1 {\in0}[1], [x3], x2
ld1 {\in1}[0], [x1], x2
ld1 {\in1}[1], [x3], x2
ld1 {\in2}[0], [x1], x2
ld1 {\in2}[1], [x3], x2
ld1 {\in3}[0], [x1], x2
ld1 {\in3}[1], [x3], x2
.endm
.macro add_member in, t0, t1, t2, t3, t4, t5, t6, t7, op0, op1, op2, op3, op4, op5, op6, op7, p
sum_sub v21.4s, \in, \t0, \op0, \p
sum_sub v22.4s, \in, \t1, \op1, \p
sum_sub v23.4s, \in, \t2, \op2, \p
sum_sub v24.4s, \in, \t3, \op3, \p
sum_sub v25.4s, \in, \t4, \op4, \p
sum_sub v26.4s, \in, \t5, \op5, \p
sum_sub v27.4s, \in, \t6, \op6, \p
sum_sub v28.4s, \in, \t7, \op7, \p
.endm
.macro butterfly16 in0, in1, in2, in3, in4, in5, in6, in7
add v20.4s, \in0, \in1
sub \in0, \in0, \in1
add \in1, \in2, \in3
sub \in2, \in2, \in3
add \in3, \in4, \in5
sub \in4, \in4, \in5
add \in5, \in6, \in7
sub \in6, \in6, \in7
.endm
.macro store16 in0, in1, in2, in3, rx
st1 {\in0}[0], [x1], x2
st1 {\in0}[1], [x3], \rx
st1 {\in1}[0], [x1], x2
st1 {\in1}[1], [x3], \rx
st1 {\in2}[0], [x1], x2
st1 {\in2}[1], [x3], \rx
st1 {\in3}[0], [x1], x2
st1 {\in3}[1], [x3], \rx
.endm
.macro scale out0, out1, out2, out3, in0, in1, in2, in3, in4, in5, in6, in7, shift
sqrshrn \out0\().4h, \in0, \shift
sqrshrn2 \out0\().8h, \in1, \shift
sqrshrn \out1\().4h, \in2, \shift
sqrshrn2 \out1\().8h, \in3, \shift
sqrshrn \out2\().4h, \in4, \shift
sqrshrn2 \out2\().8h, \in5, \shift
sqrshrn \out3\().4h, \in6, \shift
sqrshrn2 \out3\().8h, \in7, \shift
.endm
.macro transpose16_4x4_2 r0, r1, r2, r3
// lower halves
trn1 v2.4h, \r0\().4h, \r1\().4h
trn2 v3.4h, \r0\().4h, \r1\().4h
trn1 v4.4h, \r2\().4h, \r3\().4h
trn2 v5.4h, \r2\().4h, \r3\().4h
trn1 v6.2s, v2.2s, v4.2s
trn2 v7.2s, v2.2s, v4.2s
trn1 v2.2s, v3.2s, v5.2s
trn2 v4.2s, v3.2s, v5.2s
mov \r0\().d[0], v6.d[0]
mov \r2\().d[0], v7.d[0]
mov \r1\().d[0], v2.d[0]
mov \r3\().d[0], v4.d[0]
// upper halves in reverse order
trn1 v2.8h, \r3\().8h, \r2\().8h
trn2 v3.8h, \r3\().8h, \r2\().8h
trn1 v4.8h, \r1\().8h, \r0\().8h
trn2 v5.8h, \r1\().8h, \r0\().8h
trn1 v6.4s, v2.4s, v4.4s
trn2 v7.4s, v2.4s, v4.4s
trn1 v2.4s, v3.4s, v5.4s
trn2 v4.4s, v3.4s, v5.4s
mov \r3\().d[1], v6.d[1]
mov \r1\().d[1], v7.d[1]
mov \r2\().d[1], v2.d[1]
mov \r0\().d[1], v4.d[1]
.endm
.macro tr_16x4 name, shift, offset, step
function func_tr_16x4_\name
mov x1, x5
add x3, x5, #(\step * 64)
mov x2, #(\step * 128)
load16 v16.d, v17.d, v18.d, v19.d
movrel x1, trans
ld1 {v0.8h}, [x1]
tr16_8x4 v16, v17, v18, v19, \offset
add x1, x5, #(\step * 32)
add x3, x5, #(\step * 3 *32)
mov x2, #(\step * 128)
load16 v20.d, v17.d, v18.d, v19.d
movrel x1, trans, 16
ld1 {v1.8h}, [x1]
smull v21.4s, v20.4h, v1.h[0]
smull v22.4s, v20.4h, v1.h[1]
smull v23.4s, v20.4h, v1.h[2]
smull v24.4s, v20.4h, v1.h[3]
smull v25.4s, v20.4h, v1.h[4]
smull v26.4s, v20.4h, v1.h[5]
smull v27.4s, v20.4h, v1.h[6]
smull v28.4s, v20.4h, v1.h[7]
add_member v20.8h, v1.h[1], v1.h[4], v1.h[7], v1.h[5], v1.h[2], v1.h[0], v1.h[3], v1.h[6], +, +, +, -, -, -, -, -, 2
add_member v17.4h, v1.h[2], v1.h[7], v1.h[3], v1.h[1], v1.h[6], v1.h[4], v1.h[0], v1.h[5], +, +, -, -, -, +, +, +
add_member v17.8h, v1.h[3], v1.h[5], v1.h[1], v1.h[7], v1.h[0], v1.h[6], v1.h[2], v1.h[4], +, -, -, +, +, +, -, -, 2
add_member v18.4h, v1.h[4], v1.h[2], v1.h[6], v1.h[0], v1.h[7], v1.h[1], v1.h[5], v1.h[3], +, -, -, +, -, -, +, +
add_member v18.8h, v1.h[5], v1.h[0], v1.h[4], v1.h[6], v1.h[1], v1.h[3], v1.h[7], v1.h[2], +, -, +, +, -, +, +, -, 2
add_member v19.4h, v1.h[6], v1.h[3], v1.h[0], v1.h[2], v1.h[5], v1.h[7], v1.h[4], v1.h[1], +, -, +, -, +, +, -, +
add_member v19.8h, v1.h[7], v1.h[6], v1.h[5], v1.h[4], v1.h[3], v1.h[2], v1.h[1], v1.h[0], +, -, +, -, +, -, +, -, 2
add x4, sp, #\offset
ld1 {v16.4s-v19.4s}, [x4], #64
butterfly16 v16.4s, v21.4s, v17.4s, v22.4s, v18.4s, v23.4s, v19.4s, v24.4s
scale v29, v30, v31, v24, v20.4s, v16.4s, v21.4s, v17.4s, v22.4s, v18.4s, v23.4s, v19.4s, \shift
transpose16_4x4_2 v29, v30, v31, v24
mov x1, x6
add x3, x6, #(24 +3*32)
mov x2, #32
mov x4, #-32
store16 v29.d, v30.d, v31.d, v24.d, x4
add x4, sp, #(\offset + 64)
ld1 {v16.4s-v19.4s}, [x4]
butterfly16 v16.4s, v25.4s, v17.4s, v26.4s, v18.4s, v27.4s, v19.4s, v28.4s
scale v29, v30, v31, v20, v20.4s, v16.4s, v25.4s, v17.4s, v26.4s, v18.4s, v27.4s, v19.4s, \shift
transpose16_4x4_2 v29, v30, v31, v20
add x1, x6, #8
add x3, x6, #(16 + 3 * 32)
mov x2, #32
mov x4, #-32
store16 v29.d, v30.d, v31.d, v20.d, x4
ret
endfunc
.endm
.macro idct_16x16 bitdepth
function ff_hevc_idct_16x16_\bitdepth\()_neon, export=1
//r0 - coeffs
mov x15, x30
// allocate a temp buffer
sub sp, sp, #640
.irp i, 0, 1, 2, 3
add x5, x0, #(8 * \i)
add x6, sp, #(8 * \i * 16)
bl func_tr_16x4_firstpass
.endr
.irp i, 0, 1, 2, 3
add x5, sp, #(8 * \i)
add x6, x0, #(8 * \i * 16)
bl func_tr_16x4_secondpass_\bitdepth
.endr
add sp, sp, #640
mov x30, x15
ret
endfunc
.endm
idct_8x8 8
idct_8x8 10
tr_16x4 firstpass, 7, 512, 1
tr_16x4 secondpass_8, 20 - 8, 512, 1
tr_16x4 secondpass_10, 20 - 10, 512, 1
idct_16x16 8
idct_16x16 10
// void ff_hevc_idct_NxN_dc_DEPTH_neon(int16_t *coeffs)
.macro idct_dc size, bitdepth
function ff_hevc_idct_\size\()x\size\()_dc_\bitdepth\()_neon, export=1
movi v1.8h, #((1 << (14 - \bitdepth))+1)
ld1r {v4.8h}, [x0]
add v4.8h, v4.8h, v1.8h
sshr v0.8h, v4.8h, #(15 - \bitdepth)
sshr v1.8h, v4.8h, #(15 - \bitdepth)
.if \size > 4
sshr v2.8h, v4.8h, #(15 - \bitdepth)
sshr v3.8h, v4.8h, #(15 - \bitdepth)
.if \size > 16 /* dc 32x32 */
mov x2, #4
1:
subs x2, x2, #1
.endif
add x12, x0, #64
mov x13, #128
.if \size > 8 /* dc 16x16 */
st1 {v0.8h-v3.8h}, [x0], x13
st1 {v0.8h-v3.8h}, [x12], x13
st1 {v0.8h-v3.8h}, [x0], x13
st1 {v0.8h-v3.8h}, [x12], x13
st1 {v0.8h-v3.8h}, [x0], x13
st1 {v0.8h-v3.8h}, [x12], x13
.endif /* dc 8x8 */
st1 {v0.8h-v3.8h}, [x0], x13
st1 {v0.8h-v3.8h}, [x12], x13
.if \size > 16 /* dc 32x32 */
bne 1b
.endif
.else /* dc 4x4 */
st1 {v0.8h-v1.8h}, [x0]
.endif
ret
endfunc
.endm
idct_dc 4, 8
idct_dc 4, 10
idct_dc 8, 8
idct_dc 8, 10
idct_dc 16, 8
idct_dc 16, 10
idct_dc 32, 8
idct_dc 32, 10
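
For reference, a scalar sketch of the DC-only inverse transform the idct_dc
macro implements (a hypothetical C reference, not part of the source above):
the rounded DC value is replicated across the whole NxN coefficient block.

#include <stdint.h>

static void idct_dc_ref(int16_t *coeffs, int size, int bitdepth)
{
    /* matches the asm: add (1 << (14 - bd)) + 1, then arithmetic shift */
    int dc = (coeffs[0] + (1 << (14 - bitdepth)) + 1) >> (15 - bitdepth);
    for (int i = 0; i < size * size; i++)
        coeffs[i] = dc;
}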


@@ -1,92 +0,0 @@
/*
* Copyright (c) 2020 Reimar Döffinger
*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include <stdint.h>
#include "libavutil/attributes.h"
#include "libavutil/cpu.h"
#include "libavutil/aarch64/cpu.h"
#include "libavcodec/hevcdsp.h"
void ff_hevc_add_residual_4x4_8_neon(uint8_t *_dst, int16_t *coeffs,
ptrdiff_t stride);
void ff_hevc_add_residual_4x4_10_neon(uint8_t *_dst, int16_t *coeffs,
ptrdiff_t stride);
void ff_hevc_add_residual_8x8_8_neon(uint8_t *_dst, int16_t *coeffs,
ptrdiff_t stride);
void ff_hevc_add_residual_8x8_10_neon(uint8_t *_dst, int16_t *coeffs,
ptrdiff_t stride);
void ff_hevc_add_residual_16x16_8_neon(uint8_t *_dst, int16_t *coeffs,
ptrdiff_t stride);
void ff_hevc_add_residual_16x16_10_neon(uint8_t *_dst, int16_t *coeffs,
ptrdiff_t stride);
void ff_hevc_add_residual_32x32_8_neon(uint8_t *_dst, int16_t *coeffs,
ptrdiff_t stride);
void ff_hevc_add_residual_32x32_10_neon(uint8_t *_dst, int16_t *coeffs,
ptrdiff_t stride);
void ff_hevc_idct_8x8_8_neon(int16_t *coeffs, int col_limit);
void ff_hevc_idct_8x8_10_neon(int16_t *coeffs, int col_limit);
void ff_hevc_idct_16x16_8_neon(int16_t *coeffs, int col_limit);
void ff_hevc_idct_16x16_10_neon(int16_t *coeffs, int col_limit);
void ff_hevc_idct_4x4_dc_8_neon(int16_t *coeffs);
void ff_hevc_idct_8x8_dc_8_neon(int16_t *coeffs);
void ff_hevc_idct_16x16_dc_8_neon(int16_t *coeffs);
void ff_hevc_idct_32x32_dc_8_neon(int16_t *coeffs);
void ff_hevc_idct_4x4_dc_10_neon(int16_t *coeffs);
void ff_hevc_idct_8x8_dc_10_neon(int16_t *coeffs);
void ff_hevc_idct_16x16_dc_10_neon(int16_t *coeffs);
void ff_hevc_idct_32x32_dc_10_neon(int16_t *coeffs);
void ff_hevc_sao_band_filter_8x8_8_neon(uint8_t *_dst, uint8_t *_src,
ptrdiff_t stride_dst, ptrdiff_t stride_src,
int16_t *sao_offset_val, int sao_left_class,
int width, int height);
av_cold void ff_hevc_dsp_init_aarch64(HEVCDSPContext *c, const int bit_depth)
{
if (!have_neon(av_get_cpu_flags())) return;
if (bit_depth == 8) {
c->add_residual[0] = ff_hevc_add_residual_4x4_8_neon;
c->add_residual[1] = ff_hevc_add_residual_8x8_8_neon;
c->add_residual[2] = ff_hevc_add_residual_16x16_8_neon;
c->add_residual[3] = ff_hevc_add_residual_32x32_8_neon;
c->idct[1] = ff_hevc_idct_8x8_8_neon;
c->idct[2] = ff_hevc_idct_16x16_8_neon;
c->idct_dc[0] = ff_hevc_idct_4x4_dc_8_neon;
c->idct_dc[1] = ff_hevc_idct_8x8_dc_8_neon;
c->idct_dc[2] = ff_hevc_idct_16x16_dc_8_neon;
c->idct_dc[3] = ff_hevc_idct_32x32_dc_8_neon;
c->sao_band_filter[0] = ff_hevc_sao_band_filter_8x8_8_neon;
}
if (bit_depth == 10) {
c->add_residual[0] = ff_hevc_add_residual_4x4_10_neon;
c->add_residual[1] = ff_hevc_add_residual_8x8_10_neon;
c->add_residual[2] = ff_hevc_add_residual_16x16_10_neon;
c->add_residual[3] = ff_hevc_add_residual_32x32_10_neon;
c->idct[1] = ff_hevc_idct_8x8_10_neon;
c->idct[2] = ff_hevc_idct_16x16_10_neon;
c->idct_dc[0] = ff_hevc_idct_4x4_dc_10_neon;
c->idct_dc[1] = ff_hevc_idct_8x8_dc_10_neon;
c->idct_dc[2] = ff_hevc_idct_16x16_dc_10_neon;
c->idct_dc[3] = ff_hevc_idct_32x32_dc_10_neon;
}
}
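
For context, the generic HEVC DSP initializer is expected to call this
per-architecture hook after installing the C defaults; roughly (a sketch, not
the verbatim FFmpeg code):

void ff_hevc_dsp_init(HEVCDSPContext *hevcdsp, int bit_depth)
{
    /* ... C default function pointers assigned here ... */
    if (ARCH_AARCH64)
        ff_hevc_dsp_init_aarch64(hevcdsp, bit_depth);
}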


@@ -1,87 +0,0 @@
/* -*-arm64-*-
* vim: syntax=arm64asm
*
* AArch64 NEON optimised SAO functions for HEVC decoding
*
* Copyright (c) 2020 Josh Dekker <josh@itanimul.li>
*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include "libavutil/aarch64/asm.S"
// void sao_band_filter(uint8_t *_dst, uint8_t *_src,
// ptrdiff_t stride_dst, ptrdiff_t stride_src,
// int16_t *sao_offset_val, int sao_left_class,
// int width, int height)
function ff_hevc_sao_band_filter_8x8_8_neon, export=1
sub sp, sp, #64
stp xzr, xzr, [sp]
stp xzr, xzr, [sp, #16]
stp xzr, xzr, [sp, #32]
stp xzr, xzr, [sp, #48]
mov w8, #4
0:
ldrsh x9, [x4, x8, lsl #1] // x9 = sao_offset_val[k+1]
subs w8, w8, #1
add w10, w8, w5 // x10 = k + sao_left_class
and w10, w10, #0x1F
strh w9, [sp, x10, lsl #1]
bne 0b
ld1 {v16.16b-v19.16b}, [sp], #64
movi v20.8h, #1
1: // beginning of line
mov w8, w6
2:
// Simple layout for accessing 16bit values
// with 8bit LUT.
//
// 00 01 02 03 04 05 06 07
// +----------------------------------->
// |xDE#xAD|xCA#xFE|xBE#xEF|xFE#xED|....
// +----------------------------------->
// i-0 i-1 i-2 i-3
// dst[x] = av_clip_pixel(src[x] + offset_table[src[x] >> shift]);
ld1 {v2.8b}, [x1]
// load src[x]
uxtl v0.8h, v2.8b
// >> shift
ushr v2.8h, v0.8h, #3 // BIT_DEPTH - 3
// x2 (access lower short)
shl v1.8h, v2.8h, #1 // low (x2, accessing short)
// +1 access upper short
add v3.8h, v1.8h, v20.8h
// shift insert index to upper byte
sli v1.8h, v3.8h, #8
// table
tbx v2.16b, {v16.16b-v19.16b}, v1.16b
// src[x] + table
add v1.8h, v0.8h, v2.8h
// clip + narrow
sqxtun v4.8b, v1.8h
// store
st1 {v4.8b}, [x0]
// done 8 pixels
subs w8, w8, #8
bne 2b
// finished line
subs w7, w7, #1
add x0, x0, x2 // dst += stride_dst
add x1, x1, x3 // src += stride_src
bne 1b
ret
endfunc
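
As a scalar reference for the NEON code above (a hypothetical sketch, 8-bit
case only): build a 32-entry offset table from the four signalled offsets,
then index it with the top five bits of each source pixel.

#include <stddef.h>
#include <stdint.h>

static uint8_t clip_u8(int v) { return v < 0 ? 0 : v > 255 ? 255 : v; }

static void sao_band_ref(uint8_t *dst, const uint8_t *src,
                         ptrdiff_t stride_dst, ptrdiff_t stride_src,
                         const int16_t *sao_offset_val, int sao_left_class,
                         int width, int height)
{
    int16_t offset_table[32] = { 0 };
    for (int k = 0; k < 4; k++)
        offset_table[(k + sao_left_class) & 31] = sao_offset_val[k + 1];
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++)
            dst[x] = clip_u8(src[x] + offset_table[src[x] >> 3]); /* >> (BIT_DEPTH - 5) */
        dst += stride_dst;
        src += stride_src;
    }
}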


@@ -1,740 +0,0 @@
/*
* Argonaut Games Video decoder
* Copyright (c) 2020 Paul B Mahol
*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include "libavutil/imgutils.h"
#include "libavutil/internal.h"
#include "libavutil/intreadwrite.h"
#include "libavutil/mem.h"
#include "avcodec.h"
#include "bytestream.h"
#include "internal.h"
typedef struct ArgoContext {
GetByteContext gb;
int bpp;
int key;
int mv0[128][2];
int mv1[16][2];
uint32_t pal[256];
AVFrame *frame;
} ArgoContext;
static int decode_pal8(AVCodecContext *avctx, uint32_t *pal)
{
ArgoContext *s = avctx->priv_data;
GetByteContext *gb = &s->gb;
int start, count;
start = bytestream2_get_le16(gb);
count = bytestream2_get_le16(gb);
if (start + count > 256)
return AVERROR_INVALIDDATA;
if (bytestream2_get_bytes_left(gb) < 3 * count)
return AVERROR_INVALIDDATA;
for (int i = 0; i < count; i++)
pal[start + i] = (0xFF << 24U) | bytestream2_get_be24u(gb);
return 0;
}
static int decode_avcf(AVCodecContext *avctx, AVFrame *frame)
{
ArgoContext *s = avctx->priv_data;
GetByteContext *gb = &s->gb;
const int l = frame->linesize[0];
const uint8_t *map = gb->buffer;
uint8_t *dst = frame->data[0];
if (bytestream2_get_bytes_left(gb) < 1024 + (frame->width / 2) * (frame->height / 2))
return AVERROR_INVALIDDATA;
bytestream2_skipu(gb, 1024);
for (int y = 0; y < frame->height; y += 2) {
for (int x = 0; x < frame->width; x += 2) {
int index = bytestream2_get_byteu(gb);
const uint8_t *block = map + index * 4;
dst[x+0] = block[0];
dst[x+1] = block[1];
dst[x+l] = block[2];
dst[x+l+1] = block[3];
}
dst += frame->linesize[0] * 2;
}
return 0;
}
static int decode_alcd(AVCodecContext *avctx, AVFrame *frame)
{
ArgoContext *s = avctx->priv_data;
GetByteContext *gb = &s->gb;
GetByteContext sb;
const int l = frame->linesize[0];
const uint8_t *map = gb->buffer;
uint8_t *dst = frame->data[0];
uint8_t codes = 0;
int count = 0;
if (bytestream2_get_bytes_left(gb) < 1024 + (((frame->width / 2) * (frame->height / 2) + 7) >> 3))
return AVERROR_INVALIDDATA;
bytestream2_skipu(gb, 1024);
sb = *gb;
bytestream2_skipu(gb, ((frame->width / 2) * (frame->height / 2) + 7) >> 3);
for (int y = 0; y < frame->height; y += 2) {
for (int x = 0; x < frame->width; x += 2) {
const uint8_t *block;
int index;
if (count == 0) {
codes = bytestream2_get_byteu(&sb);
count = 8;
}
if (codes & 0x80) {
index = bytestream2_get_byte(gb);
block = map + index * 4;
dst[x+0] = block[0];
dst[x+1] = block[1];
dst[x+l] = block[2];
dst[x+l+1] = block[3];
}
codes <<= 1;
count--;
}
dst += frame->linesize[0] * 2;
}
return 0;
}
static int decode_mad1(AVCodecContext *avctx, AVFrame *frame)
{
ArgoContext *s = avctx->priv_data;
GetByteContext *gb = &s->gb;
const int w = frame->width;
const int h = frame->height;
const int l = frame->linesize[0];
while (bytestream2_get_bytes_left(gb) > 0) {
int size, type, pos, dy;
uint8_t *dst;
type = bytestream2_get_byte(gb);
if (type == 0xFF)
break;
switch (type) {
case 8:
dst = frame->data[0];
for (int y = 0; y < h; y += 8) {
for (int x = 0; x < w; x += 8) {
int fill = bytestream2_get_byte(gb);
uint8_t *ddst = dst + x;
for (int by = 0; by < 8; by++) {
memset(ddst, fill, 8);
ddst += l;
}
}
dst += 8 * l;
}
break;
case 7:
while (bytestream2_get_bytes_left(gb) > 0) {
int bsize = bytestream2_get_byte(gb);
uint8_t *src;
int count;
if (!bsize)
break;
count = bytestream2_get_be16(gb);
while (count > 0) {
int mvx, mvy, a, b, c, mx, my;
int bsize_w, bsize_h;
bsize_w = bsize_h = bsize;
if (bytestream2_get_bytes_left(gb) < 4)
return AVERROR_INVALIDDATA;
mvx = bytestream2_get_byte(gb) * bsize;
mvy = bytestream2_get_byte(gb) * bsize;
a = bytestream2_get_byte(gb);
b = bytestream2_get_byte(gb);
c = ((a & 0x3F) << 8) + b;
mx = mvx + (c & 0x7F) - 64;
my = mvy + (c >> 7) - 64;
if (mvy < 0 || mvy >= h)
return AVERROR_INVALIDDATA;
if (mvx < 0 || mvx >= w)
return AVERROR_INVALIDDATA;
if (my < 0 || my >= h)
return AVERROR_INVALIDDATA;
if (mx < 0 || mx >= w)
return AVERROR_INVALIDDATA;
dst = frame->data[0] + mvx + l * mvy;
src = frame->data[0] + mx + l * my;
bsize_w = FFMIN3(bsize_w, w - mvx, w - mx);
bsize_h = FFMIN3(bsize_h, h - mvy, h - my);
if (mvy >= my && (mvy != my || mvx >= mx)) {
src += (bsize_h - 1) * l;
dst += (bsize_h - 1) * l;
for (int by = 0; by < bsize_h; by++) {
memmove(dst, src, bsize_w);
src -= l;
dst -= l;
}
} else {
for (int by = 0; by < bsize_h; by++) {
memmove(dst, src, bsize_w);
src += l;
dst += l;
}
}
count--;
}
}
break;
case 6:
dst = frame->data[0];
if (bytestream2_get_bytes_left(gb) < w * h)
return AVERROR_INVALIDDATA;
for (int y = 0; y < h; y++) {
bytestream2_get_bufferu(gb, dst, w);
dst += l;
}
break;
case 5:
dst = frame->data[0];
for (int y = 0; y < h; y += 2) {
for (int x = 0; x < w; x += 2) {
int fill = bytestream2_get_byte(gb);
uint8_t *ddst = dst + x;
fill = (fill << 8) | fill;
for (int by = 0; by < 2; by++) {
AV_WN16(ddst, fill);
ddst += l;
}
}
dst += 2 * l;
}
break;
case 3:
size = bytestream2_get_le16(gb);
if (size > 0) {
int x = bytestream2_get_byte(gb) * 4;
int y = bytestream2_get_byte(gb) * 4;
int count = bytestream2_get_byte(gb);
int fill = bytestream2_get_byte(gb);
av_log(avctx, AV_LOG_DEBUG, "%d %d %d %d\n", x, y, count, fill);
for (int i = 0; i < count; i++)
;
return AVERROR_PATCHWELCOME;
}
break;
case 2:
dst = frame->data[0];
pos = 0;
dy = 0;
while (bytestream2_get_bytes_left(gb) > 0) {
int count = bytestream2_get_byteu(gb);
int skip = count & 0x3F;
count = count >> 6;
if (skip == 0x3F) {
pos += 0x3E;
while (pos >= w) {
pos -= w;
dst += l;
dy++;
if (dy >= h)
return 0;
}
} else {
pos += skip;
while (pos >= w) {
pos -= w;
dst += l;
dy++;
if (dy >= h)
return 0;
}
while (count >= 0) {
int bits = bytestream2_get_byte(gb);
for (int i = 0; i < 4; i++) {
switch (bits & 3) {
case 0:
break;
case 1:
if (dy < 1 && !pos)
return AVERROR_INVALIDDATA;
else
dst[pos] = pos ? dst[pos - 1] : dst[-l + w - 1];
break;
case 2:
if (dy < 1)
return AVERROR_INVALIDDATA;
dst[pos] = dst[pos - l];
break;
case 3:
dst[pos] = bytestream2_get_byte(gb);
break;
}
pos++;
if (pos >= w) {
pos -= w;
dst += l;
dy++;
if (dy >= h)
return 0;
}
bits >>= 2;
}
count--;
}
}
}
break;
default:
return AVERROR_INVALIDDATA;
}
}
return 0;
}
static int decode_mad1_24(AVCodecContext *avctx, AVFrame *frame)
{
ArgoContext *s = avctx->priv_data;
GetByteContext *gb = &s->gb;
const int w = frame->width;
const int h = frame->height;
const int l = frame->linesize[0] / 4;
while (bytestream2_get_bytes_left(gb) > 0) {
int osize, type, pos, dy, di, bcode, value, v14;
const uint8_t *bits;
uint32_t *dst;
type = bytestream2_get_byte(gb);
if (type == 0xFF)
return 0;
switch (type) {
case 8:
dst = (uint32_t *)frame->data[0];
for (int y = 0; y + 12 <= h; y += 12) {
for (int x = 0; x + 12 <= w; x += 12) {
int fill = bytestream2_get_be24(gb);
uint32_t *dstp = dst + x;
for (int by = 0; by < 12; by++) {
for (int bx = 0; bx < 12; bx++)
dstp[bx] = fill;
dstp += l;
}
}
dst += 12 * l;
}
break;
case 7:
while (bytestream2_get_bytes_left(gb) > 0) {
int bsize = bytestream2_get_byte(gb);
uint32_t *src;
int count;
if (!bsize)
break;
count = bytestream2_get_be16(gb);
while (count > 0) {
int mvx, mvy, a, b, c, mx, my;
int bsize_w, bsize_h;
bsize_w = bsize_h = bsize;
if (bytestream2_get_bytes_left(gb) < 4)
return AVERROR_INVALIDDATA;
mvx = bytestream2_get_byte(gb) * bsize;
mvy = bytestream2_get_byte(gb) * bsize;
a = bytestream2_get_byte(gb);
b = bytestream2_get_byte(gb);
c = ((a & 0x3F) << 8) + b;
mx = mvx + (c & 0x7F) - 64;
my = mvy + (c >> 7) - 64;
if (mvy < 0 || mvy >= h)
return AVERROR_INVALIDDATA;
if (mvx < 0 || mvx >= w)
return AVERROR_INVALIDDATA;
if (my < 0 || my >= h)
return AVERROR_INVALIDDATA;
if (mx < 0 || mx >= w)
return AVERROR_INVALIDDATA;
dst = (uint32_t *)frame->data[0] + mvx + l * mvy;
src = (uint32_t *)frame->data[0] + mx + l * my;
bsize_w = FFMIN3(bsize_w, w - mvx, w - mx);
bsize_h = FFMIN3(bsize_h, h - mvy, h - my);
if (mvy >= my && (mvy != my || mvx >= mx)) {
src += (bsize_h - 1) * l;
dst += (bsize_h - 1) * l;
for (int by = 0; by < bsize_h; by++) {
memmove(dst, src, bsize_w * 4);
src -= l;
dst -= l;
}
} else {
for (int by = 0; by < bsize_h; by++) {
memmove(dst, src, bsize_w * 4);
src += l;
dst += l;
}
}
count--;
}
}
break;
case 12:
osize = ((h + 3) / 4) * ((w + 3) / 4) + 7;
bits = gb->buffer;
di = 0;
bcode = v14 = 0;
if (bytestream2_get_bytes_left(gb) < osize >> 3)
return AVERROR_INVALIDDATA;
bytestream2_skip(gb, osize >> 3);
for (int x = 0; x < w; x += 4) {
for (int y = 0; y < h; y += 4) {
int astate = 0;
if (bits[di >> 3] & (1 << (di & 7))) {
int codes = bytestream2_get_byte(gb);
for (int count = 0; count < 4; count++) {
uint32_t *src = (uint32_t *)frame->data[0];
size_t src_size = l * (h - 1) + (w - 1);
int nv, v, code = codes & 3;
pos = x;
dy = y + count;
dst = (uint32_t *)frame->data[0] + pos + dy * l;
if (code & 1)
bcode = bytestream2_get_byte(gb);
if (code == 3) {
for (int j = 0; j < 4; j++) {
switch (bcode & 3) {
case 0:
break;
case 1:
if (dy < 1 && !pos)
return AVERROR_INVALIDDATA;
dst[0] = dst[-1];
break;
case 2:
if (dy < 1)
return AVERROR_INVALIDDATA;
dst[0] = dst[-l];
break;
case 3:
if (astate) {
nv = value >> 4;
} else {
value = bytestream2_get_byte(gb);
nv = value & 0xF;
}
astate ^= 1;
dst[0] = src[av_clip(l * (dy + s->mv1[nv][1]) + pos +
s->mv1[nv][0], 0, src_size)];
break;
}
bcode >>= 2;
dst++;
pos++;
}
} else if (code) {
if (code == 1)
v14 = bcode;
else
bcode = v14;
for (int j = 0; j < 4; j++) {
switch (bcode & 3) {
case 0:
break;
case 1:
if (dy < 1 && !pos)
return AVERROR_INVALIDDATA;
dst[0] = dst[-1];
break;
case 2:
if (dy < 1)
return AVERROR_INVALIDDATA;
dst[0] = dst[-l];
break;
case 3:
v = bytestream2_get_byte(gb);
if (v < 128) {
dst[0] = src[av_clip(l * (dy + s->mv0[v][1]) + pos +
s->mv0[v][0], 0, src_size)];
} else {
dst[0] = ((v & 0x7F) << 17) | bytestream2_get_be16(gb);
}
break;
}
bcode >>= 2;
dst++;
pos++;
}
}
codes >>= 2;
}
}
di++;
}
}
break;
default:
return AVERROR_INVALIDDATA;
}
}
return AVERROR_INVALIDDATA;
}
static int decode_rle(AVCodecContext *avctx, AVFrame *frame)
{
ArgoContext *s = avctx->priv_data;
GetByteContext *gb = &s->gb;
const int w = frame->width;
const int h = frame->height;
const int l = frame->linesize[0];
uint8_t *dst = frame->data[0];
int pos = 0, y = 0;
while (bytestream2_get_bytes_left(gb) > 0) {
int count = bytestream2_get_byte(gb);
int pixel = bytestream2_get_byte(gb);
if (!count) {
pos += pixel;
while (pos >= w) {
pos -= w;
y++;
if (y >= h)
return 0;
}
} else {
while (count > 0) {
dst[pos + y * l] = pixel;
count--;
pos++;
if (pos >= w) {
pos = 0;
y++;
if (y >= h)
return 0;
}
}
}
}
return 0;
}
static int decode_frame(AVCodecContext *avctx, void *data,
int *got_frame, AVPacket *avpkt)
{
ArgoContext *s = avctx->priv_data;
GetByteContext *gb = &s->gb;
AVFrame *frame = s->frame;
uint32_t chunk;
int ret;
bytestream2_init(gb, avpkt->data, avpkt->size);
if ((ret = ff_reget_buffer(avctx, frame, 0)) < 0)
return ret;
chunk = bytestream2_get_be32(gb);
switch (chunk) {
case MKBETAG('P', 'A', 'L', '8'):
for (int y = 0; y < frame->height; y++)
memset(frame->data[0] + y * frame->linesize[0], 0, frame->width * s->bpp);
if (avctx->pix_fmt == AV_PIX_FMT_PAL8)
memset(frame->data[1], 0, AVPALETTE_SIZE);
return decode_pal8(avctx, s->pal);
case MKBETAG('M', 'A', 'D', '1'):
if (avctx->pix_fmt == AV_PIX_FMT_PAL8)
ret = decode_mad1(avctx, frame);
else
ret = decode_mad1_24(avctx, frame);
break;
case MKBETAG('A', 'V', 'C', 'F'):
if (avctx->pix_fmt == AV_PIX_FMT_PAL8) {
s->key = 1;
ret = decode_avcf(avctx, frame);
break;
}
case MKBETAG('A', 'L', 'C', 'D'):
if (avctx->pix_fmt == AV_PIX_FMT_PAL8) {
s->key = 0;
ret = decode_alcd(avctx, frame);
break;
}
case MKBETAG('R', 'L', 'E', 'F'):
if (avctx->pix_fmt == AV_PIX_FMT_PAL8) {
s->key = 1;
ret = decode_rle(avctx, frame);
break;
}
case MKBETAG('R', 'L', 'E', 'D'):
if (avctx->pix_fmt == AV_PIX_FMT_PAL8) {
s->key = 0;
ret = decode_rle(avctx, frame);
break;
}
default:
av_log(avctx, AV_LOG_DEBUG, "unknown chunk 0x%X\n", chunk);
break;
}
if (ret < 0)
return ret;
if (avctx->pix_fmt == AV_PIX_FMT_PAL8)
memcpy(frame->data[1], s->pal, AVPALETTE_SIZE);
if ((ret = av_frame_ref(data, s->frame)) < 0)
return ret;
frame->pict_type = s->key ? AV_PICTURE_TYPE_I : AV_PICTURE_TYPE_P;
frame->key_frame = s->key;
*got_frame = 1;
return avpkt->size;
}
static av_cold int decode_init(AVCodecContext *avctx)
{
ArgoContext *s = avctx->priv_data;
switch (avctx->bits_per_raw_sample) {
case 8: s->bpp = 1;
avctx->pix_fmt = AV_PIX_FMT_PAL8; break;
case 24: s->bpp = 4;
avctx->pix_fmt = AV_PIX_FMT_BGR0; break;
default: avpriv_request_sample(s, "depth == %u", avctx->bits_per_raw_sample);
return AVERROR_PATCHWELCOME;
}
s->frame = av_frame_alloc();
if (!s->frame)
return AVERROR(ENOMEM);
for (int n = 0, i = -4; i < 4; i++) {
for (int j = -14; j < 2; j++) {
s->mv0[n][0] = j;
s->mv0[n++][1] = i;
}
}
for (int n = 0, i = -5; i <= 1; i += 2) {
int j = -5;
while (j <= 1) {
s->mv1[n][0] = j;
s->mv1[n++][1] = i;
j += 2;
}
}
return 0;
}
static void decode_flush(AVCodecContext *avctx)
{
ArgoContext *s = avctx->priv_data;
av_frame_unref(s->frame);
}
static av_cold int decode_close(AVCodecContext *avctx)
{
ArgoContext *s = avctx->priv_data;
av_frame_free(&s->frame);
return 0;
}
AVCodec ff_argo_decoder = {
.name = "argo",
.long_name = NULL_IF_CONFIG_SMALL("Argonaut Games Video"),
.type = AVMEDIA_TYPE_VIDEO,
.id = AV_CODEC_ID_ARGO,
.priv_data_size = sizeof(ArgoContext),
.init = decode_init,
.decode = decode_frame,
.flush = decode_flush,
.close = decode_close,
.capabilities = AV_CODEC_CAP_DR1,
.caps_internal = FF_CODEC_CAP_INIT_CLEANUP,
};
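
To make the RLEF/RLED scheme handled by decode_rle above concrete, a small
worked example (a hypothetical stream, frame width 4):

/* Stream bytes: 03 A0  00 05  02 FF
 *   03 A0 -> run:  write pixel 0xA0 three times (columns 0..2 of row 0)
 *   00 05 -> skip: count == 0 means advance 5 pixels, wrapping across
 *            rows of width 4 (lands at column 0 of row 2)
 *   02 FF -> run:  write pixel 0xFF twice */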


@@ -1,119 +0,0 @@
/*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include <stddef.h>
#include <stdint.h>
#include "atsc_a53.h"
#include "get_bits.h"
int ff_alloc_a53_sei(const AVFrame *frame, size_t prefix_len,
void **data, size_t *sei_size)
{
AVFrameSideData *side_data = NULL;
uint8_t *sei_data;
if (frame)
side_data = av_frame_get_side_data(frame, AV_FRAME_DATA_A53_CC);
if (!side_data) {
*data = NULL;
return 0;
}
*sei_size = side_data->size + 11;
*data = av_mallocz(*sei_size + prefix_len);
if (!*data)
return AVERROR(ENOMEM);
sei_data = (uint8_t*)*data + prefix_len;
// country code
sei_data[0] = 181;
sei_data[1] = 0;
sei_data[2] = 49;
/**
* 'GA94' is standard in North America for ATSC, but hard coding
* this style may not be the right thing to do -- other formats
* do exist. This information is not available in the side_data
* so we are going with this right now.
*/
AV_WL32(sei_data + 3, MKTAG('G', 'A', '9', '4'));
sei_data[7] = 3;
sei_data[8] = ((side_data->size/3) & 0x1f) | 0x40;
sei_data[9] = 0;
memcpy(sei_data + 10, side_data->data, side_data->size);
sei_data[side_data->size+10] = 255;
return 0;
}
int ff_parse_a53_cc(AVBufferRef **pbuf, const uint8_t *data, int size)
{
AVBufferRef *buf = *pbuf;
GetBitContext gb;
size_t new_size, old_size = buf ? buf->size : 0;
int ret, cc_count;
if (size < 3)
return AVERROR(EINVAL);
ret = init_get_bits8(&gb, data, size);
if (ret < 0)
return ret;
if (get_bits(&gb, 8) != 0x3) // user_data_type_code
return 0;
skip_bits(&gb, 1); // reserved
if (!get_bits(&gb, 1)) // process_cc_data_flag
return 0;
skip_bits(&gb, 1); // zero bit
cc_count = get_bits(&gb, 5);
if (!cc_count)
return 0;
skip_bits(&gb, 8); // reserved
/* 3 bytes per CC plus one byte marker_bits at the end */
if (cc_count * 3 >= (get_bits_left(&gb) >> 3))
return AVERROR(EINVAL);
new_size = (old_size + cc_count * 3);
if (new_size > INT_MAX)
return AVERROR(EINVAL);
/* Allow merging of the cc data from two fields. */
ret = av_buffer_realloc(pbuf, new_size);
if (ret < 0)
return ret;
buf = *pbuf;
/* Use of av_buffer_realloc assumes buffer is writeable */
for (int i = 0; i < cc_count; i++) {
buf->data[old_size++] = get_bits(&gb, 8);
buf->data[old_size++] = get_bits(&gb, 8);
buf->data[old_size++] = get_bits(&gb, 8);
}
return cc_count;
}


@@ -1,56 +0,0 @@
/*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#ifndef AVCODEC_ATSC_A53_H
#define AVCODEC_ATSC_A53_H
#include <stddef.h>
#include <stdint.h>
#include "libavutil/buffer.h"
#include "libavutil/frame.h"
/**
* Check AVFrame for A53 side data and allocate and fill SEI message with A53 info
*
* @param frame Raw frame to get A53 side data from
* @param prefix_len Number of bytes to allocate before SEI message
* @param data Pointer to a variable to store allocated memory
* Upon return the variable will hold NULL on error or if frame has no A53 info.
* Otherwise it will point to prefix_len uninitialized bytes followed by
* *sei_size SEI message
* @param sei_size Pointer to a variable to store generated SEI message length
* @return Zero on success, negative error code on failure
*/
int ff_alloc_a53_sei(const AVFrame *frame, size_t prefix_len,
void **data, size_t *sei_size);
/**
* Parse a data array for ATSC A53 Part 4 Closed Captions and store them in an AVBufferRef.
*
* @param pbuf Pointer to an AVBufferRef to append the closed captions. *pbuf may be NULL, in
* which case a new buffer will be allocated and put in it.
* @param data The data array containing the raw A53 data.
* @param size Size of the data array in bytes.
*
* @return Number of closed captions parsed on success, negative error code on failure.
* If no Closed Captions are parsed, *pbuf is untouched.
*/
int ff_parse_a53_cc(AVBufferRef **pbuf, const uint8_t *data, int size);
#endif /* AVCODEC_ATSC_A53_H */
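
A minimal usage sketch for ff_parse_a53_cc (a hypothetical caller; `payload`,
`payload_size`, and `frame` are assumed to come from the surrounding decoder):

AVBufferRef *a53_buf = NULL;
int ret = ff_parse_a53_cc(&a53_buf, payload, payload_size); /* >0: CCs parsed */
if (ret > 0 && a53_buf) {
    /* attach the accumulated captions to the output frame as side data */
    AVFrameSideData *sd = av_frame_new_side_data_from_buf(frame,
                                                          AV_FRAME_DATA_A53_CC,
                                                          a53_buf);
    if (!sd)
        av_buffer_unref(&a53_buf); /* attaching failed; drop our reference */
}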

File diff suppressed because it is too large


@@ -1,88 +0,0 @@
/*
* AV1 video decoder
*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#ifndef AVCODEC_AV1DEC_H
#define AVCODEC_AV1DEC_H
#include <stdint.h>
#include "libavutil/buffer.h"
#include "libavutil/pixfmt.h"
#include "avcodec.h"
#include "cbs.h"
#include "cbs_av1.h"
#include "thread.h"
typedef struct AV1Frame {
ThreadFrame tf;
AVBufferRef *hwaccel_priv_buf;
void *hwaccel_picture_private;
AVBufferRef *header_ref;
AV1RawFrameHeader *raw_frame_header;
int temporal_id;
int spatial_id;
uint8_t gm_type[AV1_NUM_REF_FRAMES];
int32_t gm_params[AV1_NUM_REF_FRAMES][6];
uint8_t skip_mode_frame_idx[2];
AV1RawFilmGrainParams film_grain;
uint8_t coded_lossless;
} AV1Frame;
typedef struct TileGroupInfo {
uint32_t tile_offset;
uint32_t tile_size;
uint16_t tile_row;
uint16_t tile_column;
} TileGroupInfo;
typedef struct AV1DecContext {
const AVClass *class;
AVCodecContext *avctx;
enum AVPixelFormat pix_fmt;
CodedBitstreamContext *cbc;
CodedBitstreamFragment current_obu;
AVBufferRef *seq_ref;
AV1RawSequenceHeader *raw_seq;
AVBufferRef *header_ref;
AV1RawFrameHeader *raw_frame_header;
TileGroupInfo *tile_group_info;
uint16_t tile_num;
uint16_t tg_start;
uint16_t tg_end;
int operating_point_idc;
AV1Frame ref[AV1_NUM_REF_FRAMES];
AV1Frame cur_frame;
// AVOptions
int operating_point;
} AV1DecContext;
#endif /* AVCODEC_AV1DEC_H */

View File

@@ -1,851 +0,0 @@
/*
* AVCodecContext functions for libavcodec
*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
/**
* @file
* AVCodecContext functions for libavcodec
*/
#include "config.h"
#include "libavutil/avassert.h"
#include "libavutil/avstring.h"
#include "libavutil/imgutils.h"
#include "libavutil/mem.h"
#include "libavutil/opt.h"
#include "libavutil/thread.h"
#include "avcodec.h"
#include "decode.h"
#include "encode.h"
#include "frame_thread_encoder.h"
#include "internal.h"
#include "thread.h"
#if CONFIG_ICONV
# include <iconv.h>
#endif
#include "libavutil/ffversion.h"
const char av_codec_ffversion[] = "FFmpeg version " FFMPEG_VERSION;
unsigned avcodec_version(void)
{
av_assert0(AV_CODEC_ID_PCM_S8_PLANAR==65563);
av_assert0(AV_CODEC_ID_ADPCM_G722==69660);
av_assert0(AV_CODEC_ID_SRT==94216);
av_assert0(LIBAVCODEC_VERSION_MICRO >= 100);
return LIBAVCODEC_VERSION_INT;
}
const char *avcodec_configuration(void)
{
return FFMPEG_CONFIGURATION;
}
const char *avcodec_license(void)
{
#define LICENSE_PREFIX "libavcodec license: "
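/* Adjacent string literals concatenate; indexing the combined literal at
 * sizeof(LICENSE_PREFIX) - 1 (i.e. the prefix's strlen) skips the prefix,
 * so only the license name itself is returned. */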
return &LICENSE_PREFIX FFMPEG_LICENSE[sizeof(LICENSE_PREFIX) - 1];
}
int avcodec_default_execute(AVCodecContext *c, int (*func)(AVCodecContext *c2, void *arg2), void *arg, int *ret, int count, int size)
{
int i;
for (i = 0; i < count; i++) {
int r = func(c, (char *)arg + i * size);
if (ret)
ret[i] = r;
}
emms_c();
return 0;
}
int avcodec_default_execute2(AVCodecContext *c, int (*func)(AVCodecContext *c2, void *arg2, int jobnr, int threadnr), void *arg, int *ret, int count)
{
int i;
for (i = 0; i < count; i++) {
int r = func(c, arg, i, 0);
if (ret)
ret[i] = r;
}
emms_c();
return 0;
}
static AVMutex codec_mutex = AV_MUTEX_INITIALIZER;
static void lock_avcodec(const AVCodec *codec)
{
if (!(codec->caps_internal & FF_CODEC_CAP_INIT_THREADSAFE) && codec->init)
ff_mutex_lock(&codec_mutex);
}
static void unlock_avcodec(const AVCodec *codec)
{
if (!(codec->caps_internal & FF_CODEC_CAP_INIT_THREADSAFE) && codec->init)
ff_mutex_unlock(&codec_mutex);
}
#if FF_API_LOCKMGR
int av_lockmgr_register(int (*cb)(void **mutex, enum AVLockOp op))
{
return 0;
}
#endif
static int64_t get_bit_rate(AVCodecContext *ctx)
{
int64_t bit_rate;
int bits_per_sample;
switch (ctx->codec_type) {
case AVMEDIA_TYPE_VIDEO:
case AVMEDIA_TYPE_DATA:
case AVMEDIA_TYPE_SUBTITLE:
case AVMEDIA_TYPE_ATTACHMENT:
bit_rate = ctx->bit_rate;
break;
case AVMEDIA_TYPE_AUDIO:
bits_per_sample = av_get_bits_per_sample(ctx->codec_id);
if (bits_per_sample) {
bit_rate = ctx->sample_rate * (int64_t)ctx->channels;
if (bit_rate > INT64_MAX / bits_per_sample) {
bit_rate = 0;
} else
bit_rate *= bits_per_sample;
} else
bit_rate = ctx->bit_rate;
break;
default:
bit_rate = 0;
break;
}
return bit_rate;
}
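/*
 * Worked example (illustrative): for 16-bit PCM at 48 kHz stereo,
 * av_get_bits_per_sample() returns 16, so the audio branch above computes
 *     bit_rate = 48000 * 2 * 16 = 1536000 b/s (1536 kb/s).
 */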
int attribute_align_arg avcodec_open2(AVCodecContext *avctx, const AVCodec *codec, AVDictionary **options)
{
int ret = 0;
int codec_init_ok = 0;
AVDictionary *tmp = NULL;
AVCodecInternal *avci;
if (avcodec_is_open(avctx))
return 0;
if (!codec && !avctx->codec) {
av_log(avctx, AV_LOG_ERROR, "No codec provided to avcodec_open2()\n");
return AVERROR(EINVAL);
}
if (codec && avctx->codec && codec != avctx->codec) {
av_log(avctx, AV_LOG_ERROR, "This AVCodecContext was allocated for %s, "
"but %s passed to avcodec_open2()\n", avctx->codec->name, codec->name);
return AVERROR(EINVAL);
}
if (!codec)
codec = avctx->codec;
if (avctx->extradata_size < 0 || avctx->extradata_size >= FF_MAX_EXTRADATA_SIZE)
return AVERROR(EINVAL);
if (options)
av_dict_copy(&tmp, *options, 0);
lock_avcodec(codec);
avci = av_mallocz(sizeof(*avci));
if (!avci) {
ret = AVERROR(ENOMEM);
goto end;
}
avctx->internal = avci;
#if FF_API_OLD_ENCDEC
avci->to_free = av_frame_alloc();
avci->compat_decode_frame = av_frame_alloc();
avci->compat_encode_packet = av_packet_alloc();
if (!avci->to_free || !avci->compat_decode_frame || !avci->compat_encode_packet) {
ret = AVERROR(ENOMEM);
goto free_and_end;
}
#endif
avci->buffer_frame = av_frame_alloc();
avci->buffer_pkt = av_packet_alloc();
avci->es.in_frame = av_frame_alloc();
avci->ds.in_pkt = av_packet_alloc();
avci->last_pkt_props = av_packet_alloc();
avci->pkt_props = av_fifo_alloc(sizeof(*avci->last_pkt_props));
if (!avci->buffer_frame || !avci->buffer_pkt ||
!avci->es.in_frame || !avci->ds.in_pkt ||
!avci->last_pkt_props || !avci->pkt_props) {
ret = AVERROR(ENOMEM);
goto free_and_end;
}
avci->skip_samples_multiplier = 1;
if (codec->priv_data_size > 0) {
if (!avctx->priv_data) {
avctx->priv_data = av_mallocz(codec->priv_data_size);
if (!avctx->priv_data) {
ret = AVERROR(ENOMEM);
goto free_and_end;
}
if (codec->priv_class) {
*(const AVClass **)avctx->priv_data = codec->priv_class;
av_opt_set_defaults(avctx->priv_data);
}
}
if (codec->priv_class && (ret = av_opt_set_dict(avctx->priv_data, &tmp)) < 0)
goto free_and_end;
} else {
avctx->priv_data = NULL;
}
if ((ret = av_opt_set_dict(avctx, &tmp)) < 0)
goto free_and_end;
if (avctx->codec_whitelist && av_match_list(codec->name, avctx->codec_whitelist, ',') <= 0) {
av_log(avctx, AV_LOG_ERROR, "Codec (%s) not on whitelist \'%s\'\n", codec->name, avctx->codec_whitelist);
ret = AVERROR(EINVAL);
goto free_and_end;
}
// only call ff_set_dimensions() for non-H.264/VP6F/DXV codecs, so as not to overwrite previously set dimensions
if (!(avctx->coded_width && avctx->coded_height && avctx->width && avctx->height &&
(avctx->codec_id == AV_CODEC_ID_H264 || avctx->codec_id == AV_CODEC_ID_VP6F || avctx->codec_id == AV_CODEC_ID_DXV))) {
if (avctx->coded_width && avctx->coded_height)
ret = ff_set_dimensions(avctx, avctx->coded_width, avctx->coded_height);
else if (avctx->width && avctx->height)
ret = ff_set_dimensions(avctx, avctx->width, avctx->height);
if (ret < 0)
goto free_and_end;
}
if ((avctx->coded_width || avctx->coded_height || avctx->width || avctx->height)
&& ( av_image_check_size2(avctx->coded_width, avctx->coded_height, avctx->max_pixels, AV_PIX_FMT_NONE, 0, avctx) < 0
|| av_image_check_size2(avctx->width, avctx->height, avctx->max_pixels, AV_PIX_FMT_NONE, 0, avctx) < 0)) {
av_log(avctx, AV_LOG_WARNING, "Ignoring invalid width/height values\n");
ff_set_dimensions(avctx, 0, 0);
}
if (avctx->width > 0 && avctx->height > 0) {
if (av_image_check_sar(avctx->width, avctx->height,
avctx->sample_aspect_ratio) < 0) {
av_log(avctx, AV_LOG_WARNING, "ignoring invalid SAR: %u/%u\n",
avctx->sample_aspect_ratio.num,
avctx->sample_aspect_ratio.den);
avctx->sample_aspect_ratio = (AVRational){ 0, 1 };
}
}
if (avctx->channels > FF_SANE_NB_CHANNELS || avctx->channels < 0) {
av_log(avctx, AV_LOG_ERROR, "Too many or invalid channels: %d\n", avctx->channels);
ret = AVERROR(EINVAL);
goto free_and_end;
}
if (av_codec_is_decoder(codec) &&
codec->type == AVMEDIA_TYPE_AUDIO &&
!(codec->capabilities & AV_CODEC_CAP_CHANNEL_CONF) &&
avctx->channels == 0) {
av_log(avctx, AV_LOG_ERROR, "Decoder requires channel count but channels not set\n");
ret = AVERROR(EINVAL);
goto free_and_end;
}
if (avctx->sample_rate < 0) {
av_log(avctx, AV_LOG_ERROR, "Invalid sample rate: %d\n", avctx->sample_rate);
ret = AVERROR(EINVAL);
goto free_and_end;
}
if (avctx->block_align < 0) {
av_log(avctx, AV_LOG_ERROR, "Invalid block align: %d\n", avctx->block_align);
ret = AVERROR(EINVAL);
goto free_and_end;
}
avctx->codec = codec;
if ((avctx->codec_type == AVMEDIA_TYPE_UNKNOWN || avctx->codec_type == codec->type) &&
avctx->codec_id == AV_CODEC_ID_NONE) {
avctx->codec_type = codec->type;
avctx->codec_id = codec->id;
}
if (avctx->codec_id != codec->id || (avctx->codec_type != codec->type
&& avctx->codec_type != AVMEDIA_TYPE_ATTACHMENT)) {
av_log(avctx, AV_LOG_ERROR, "Codec type or id mismatches\n");
ret = AVERROR(EINVAL);
goto free_and_end;
}
avctx->frame_number = 0;
avctx->codec_descriptor = avcodec_descriptor_get(avctx->codec_id);
if ((avctx->codec->capabilities & AV_CODEC_CAP_EXPERIMENTAL) &&
avctx->strict_std_compliance > FF_COMPLIANCE_EXPERIMENTAL) {
const char *codec_string = av_codec_is_encoder(codec) ? "encoder" : "decoder";
const AVCodec *codec2;
av_log(avctx, AV_LOG_ERROR,
"The %s '%s' is experimental but experimental codecs are not enabled, "
"add '-strict %d' if you want to use it.\n",
codec_string, codec->name, FF_COMPLIANCE_EXPERIMENTAL);
codec2 = av_codec_is_encoder(codec) ? avcodec_find_encoder(codec->id) : avcodec_find_decoder(codec->id);
if (!(codec2->capabilities & AV_CODEC_CAP_EXPERIMENTAL))
av_log(avctx, AV_LOG_ERROR, "Alternatively use the non experimental %s '%s'.\n",
codec_string, codec2->name);
ret = AVERROR_EXPERIMENTAL;
goto free_and_end;
}
if (avctx->codec_type == AVMEDIA_TYPE_AUDIO &&
(!avctx->time_base.num || !avctx->time_base.den)) {
avctx->time_base.num = 1;
avctx->time_base.den = avctx->sample_rate;
}
if (av_codec_is_encoder(avctx->codec))
ret = ff_encode_preinit(avctx);
else
ret = ff_decode_preinit(avctx);
if (ret < 0)
goto free_and_end;
if (!HAVE_THREADS)
av_log(avctx, AV_LOG_WARNING, "Warning: not compiled with thread support, using thread emulation\n");
if (CONFIG_FRAME_THREAD_ENCODER && av_codec_is_encoder(avctx->codec)) {
unlock_avcodec(codec); // we will instantiate a few encoders, so unlock to avoid falsely detecting a locking problem
ret = ff_frame_thread_encoder_init(avctx, options ? *options : NULL);
lock_avcodec(codec);
if (ret < 0)
goto free_and_end;
}
if (HAVE_THREADS
&& !(avci->frame_thread_encoder && (avctx->active_thread_type&FF_THREAD_FRAME))) {
ret = ff_thread_init(avctx);
if (ret < 0) {
goto free_and_end;
}
}
if (!HAVE_THREADS && !(codec->caps_internal & FF_CODEC_CAP_AUTO_THREADS))
avctx->thread_count = 1;
if ( avctx->codec->init && (!(avctx->active_thread_type&FF_THREAD_FRAME)
|| avci->frame_thread_encoder)) {
ret = avctx->codec->init(avctx);
if (ret < 0) {
codec_init_ok = -1;
goto free_and_end;
}
codec_init_ok = 1;
}
ret=0;
if (av_codec_is_decoder(avctx->codec)) {
if (!avctx->bit_rate)
avctx->bit_rate = get_bit_rate(avctx);
/* validate channel layout from the decoder */
if (avctx->channel_layout) {
int channels = av_get_channel_layout_nb_channels(avctx->channel_layout);
if (!avctx->channels)
avctx->channels = channels;
else if (channels != avctx->channels) {
char buf[512];
av_get_channel_layout_string(buf, sizeof(buf), -1, avctx->channel_layout);
av_log(avctx, AV_LOG_WARNING,
"Channel layout '%s' with %d channels does not match specified number of channels %d: "
"ignoring specified channel layout\n",
buf, channels, avctx->channels);
avctx->channel_layout = 0;
}
}
if (avctx->channels && avctx->channels < 0 ||
avctx->channels > FF_SANE_NB_CHANNELS) {
ret = AVERROR(EINVAL);
goto free_and_end;
}
if (avctx->bits_per_coded_sample < 0) {
ret = AVERROR(EINVAL);
goto free_and_end;
}
if (avctx->sub_charenc) {
if (avctx->codec_type != AVMEDIA_TYPE_SUBTITLE) {
av_log(avctx, AV_LOG_ERROR, "Character encoding is only "
"supported with subtitles codecs\n");
ret = AVERROR(EINVAL);
goto free_and_end;
} else if (avctx->codec_descriptor->props & AV_CODEC_PROP_BITMAP_SUB) {
av_log(avctx, AV_LOG_WARNING, "Codec '%s' is bitmap-based, "
"subtitles character encoding will be ignored\n",
avctx->codec_descriptor->name);
avctx->sub_charenc_mode = FF_SUB_CHARENC_MODE_DO_NOTHING;
} else {
/* input character encoding is set for a text based subtitle
* codec at this point */
if (avctx->sub_charenc_mode == FF_SUB_CHARENC_MODE_AUTOMATIC)
avctx->sub_charenc_mode = FF_SUB_CHARENC_MODE_PRE_DECODER;
if (avctx->sub_charenc_mode == FF_SUB_CHARENC_MODE_PRE_DECODER) {
#if CONFIG_ICONV
iconv_t cd = iconv_open("UTF-8", avctx->sub_charenc);
if (cd == (iconv_t)-1) {
ret = AVERROR(errno);
av_log(avctx, AV_LOG_ERROR, "Unable to open iconv context "
"with input character encoding \"%s\"\n", avctx->sub_charenc);
goto free_and_end;
}
iconv_close(cd);
#else
av_log(avctx, AV_LOG_ERROR, "Character encoding subtitles "
"conversion needs a libavcodec built with iconv support "
"for this codec\n");
ret = AVERROR(ENOSYS);
goto free_and_end;
#endif
}
}
}
#if FF_API_AVCTX_TIMEBASE
if (avctx->framerate.num > 0 && avctx->framerate.den > 0)
avctx->time_base = av_inv_q(av_mul_q(avctx->framerate, (AVRational){avctx->ticks_per_frame, 1}));
#endif
}
if (codec->priv_data_size > 0 && avctx->priv_data && codec->priv_class) {
av_assert0(*(const AVClass **)avctx->priv_data == codec->priv_class);
}
end:
unlock_avcodec(codec);
if (options) {
av_dict_free(options);
*options = tmp;
}
return ret;
free_and_end:
if (avctx->codec && avctx->codec->close &&
(codec_init_ok > 0 || (codec_init_ok < 0 &&
avctx->codec->caps_internal & FF_CODEC_CAP_INIT_CLEANUP)))
avctx->codec->close(avctx);
if (HAVE_THREADS && avci->thread_ctx)
ff_thread_free(avctx);
if (codec->priv_class && avctx->priv_data)
av_opt_free(avctx->priv_data);
av_opt_free(avctx);
if (av_codec_is_encoder(avctx->codec)) {
#if FF_API_CODED_FRAME
FF_DISABLE_DEPRECATION_WARNINGS
av_frame_free(&avctx->coded_frame);
FF_ENABLE_DEPRECATION_WARNINGS
#endif
av_freep(&avctx->extradata);
avctx->extradata_size = 0;
}
av_dict_free(&tmp);
av_freep(&avctx->priv_data);
av_freep(&avctx->subtitle_header);
#if FF_API_OLD_ENCDEC
av_frame_free(&avci->to_free);
av_frame_free(&avci->compat_decode_frame);
av_packet_free(&avci->compat_encode_packet);
#endif
av_frame_free(&avci->buffer_frame);
av_packet_free(&avci->buffer_pkt);
av_packet_free(&avci->last_pkt_props);
av_fifo_freep(&avci->pkt_props);
av_packet_free(&avci->ds.in_pkt);
av_frame_free(&avci->es.in_frame);
av_bsf_free(&avci->bsf);
av_buffer_unref(&avci->pool);
av_freep(&avci);
avctx->internal = NULL;
avctx->codec = NULL;
goto end;
}
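/*
 * Illustrative sketch (not part of the original file): the canonical
 * public-API sequence that ends in avcodec_open2() above.
 * open_decoder_example() is a hypothetical helper.
 */
static AVCodecContext *open_decoder_example(enum AVCodecID id)
{
    const AVCodec *codec = avcodec_find_decoder(id);
    AVCodecContext *avctx = codec ? avcodec_alloc_context3(codec) : NULL;
    if (!avctx)
        return NULL;
    if (avcodec_open2(avctx, codec, NULL) < 0)
        avcodec_free_context(&avctx); /* frees priv_data and extradata too */
    return avctx;
}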
void avcodec_flush_buffers(AVCodecContext *avctx)
{
AVCodecInternal *avci = avctx->internal;
if (av_codec_is_encoder(avctx->codec)) {
int caps = avctx->codec->capabilities;
if (!(caps & AV_CODEC_CAP_ENCODER_FLUSH)) {
// Only encoders that explicitly declare support for it can be
// flushed. Otherwise, this is a no-op.
av_log(avctx, AV_LOG_WARNING, "Ignoring attempt to flush encoder "
"that doesn't support it\n");
return;
}
// We haven't implemented flushing for frame-threaded encoders.
av_assert0(!(caps & AV_CODEC_CAP_FRAME_THREADS));
}
avci->draining = 0;
avci->draining_done = 0;
avci->nb_draining_errors = 0;
av_frame_unref(avci->buffer_frame);
#if FF_API_OLD_ENCDEC
av_frame_unref(avci->compat_decode_frame);
av_packet_unref(avci->compat_encode_packet);
#endif
av_packet_unref(avci->buffer_pkt);
av_packet_unref(avci->last_pkt_props);
while (av_fifo_size(avci->pkt_props) >= sizeof(*avci->last_pkt_props)) {
av_fifo_generic_read(avci->pkt_props,
avci->last_pkt_props, sizeof(*avci->last_pkt_props),
NULL);
av_packet_unref(avci->last_pkt_props);
}
av_fifo_reset(avci->pkt_props);
av_frame_unref(avci->es.in_frame);
av_packet_unref(avci->ds.in_pkt);
if (HAVE_THREADS && avctx->active_thread_type & FF_THREAD_FRAME)
ff_thread_flush(avctx);
else if (avctx->codec->flush)
avctx->codec->flush(avctx);
avctx->pts_correction_last_pts =
avctx->pts_correction_last_dts = INT64_MIN;
if (av_codec_is_decoder(avctx->codec))
av_bsf_flush(avci->bsf);
#if FF_API_OLD_ENCDEC
FF_DISABLE_DEPRECATION_WARNINGS
if (!avctx->refcounted_frames)
av_frame_unref(avci->to_free);
FF_ENABLE_DEPRECATION_WARNINGS
#endif
}
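/*
 * Illustrative note: avcodec_flush_buffers() is intended for seeking (or
 * for reusing one context on a new stream of the same codec):
 *
 *     // after av_seek_frame() succeeds:
 *     avcodec_flush_buffers(avctx);
 *     // ... resume avcodec_send_packet()/avcodec_receive_frame() as usual
 */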
void avsubtitle_free(AVSubtitle *sub)
{
int i;
for (i = 0; i < sub->num_rects; i++) {
av_freep(&sub->rects[i]->data[0]);
av_freep(&sub->rects[i]->data[1]);
av_freep(&sub->rects[i]->data[2]);
av_freep(&sub->rects[i]->data[3]);
av_freep(&sub->rects[i]->text);
av_freep(&sub->rects[i]->ass);
av_freep(&sub->rects[i]);
}
av_freep(&sub->rects);
memset(sub, 0, sizeof(*sub));
}
av_cold int avcodec_close(AVCodecContext *avctx)
{
int i;
if (!avctx)
return 0;
if (avcodec_is_open(avctx)) {
if (CONFIG_FRAME_THREAD_ENCODER &&
avctx->internal->frame_thread_encoder && avctx->thread_count > 1) {
ff_frame_thread_encoder_free(avctx);
}
if (HAVE_THREADS && avctx->internal->thread_ctx)
ff_thread_free(avctx);
if (avctx->codec && avctx->codec->close)
avctx->codec->close(avctx);
avctx->internal->byte_buffer_size = 0;
av_freep(&avctx->internal->byte_buffer);
#if FF_API_OLD_ENCDEC
av_frame_free(&avctx->internal->to_free);
av_frame_free(&avctx->internal->compat_decode_frame);
av_packet_free(&avctx->internal->compat_encode_packet);
#endif
av_frame_free(&avctx->internal->buffer_frame);
av_packet_free(&avctx->internal->buffer_pkt);
av_packet_unref(avctx->internal->last_pkt_props);
while (av_fifo_size(avctx->internal->pkt_props) >=
sizeof(*avctx->internal->last_pkt_props)) {
av_fifo_generic_read(avctx->internal->pkt_props,
avctx->internal->last_pkt_props,
sizeof(*avctx->internal->last_pkt_props),
NULL);
av_packet_unref(avctx->internal->last_pkt_props);
}
av_packet_free(&avctx->internal->last_pkt_props);
av_fifo_freep(&avctx->internal->pkt_props);
av_packet_free(&avctx->internal->ds.in_pkt);
av_frame_free(&avctx->internal->es.in_frame);
av_buffer_unref(&avctx->internal->pool);
if (avctx->hwaccel && avctx->hwaccel->uninit)
avctx->hwaccel->uninit(avctx);
av_freep(&avctx->internal->hwaccel_priv_data);
av_bsf_free(&avctx->internal->bsf);
av_freep(&avctx->internal);
}
for (i = 0; i < avctx->nb_coded_side_data; i++)
av_freep(&avctx->coded_side_data[i].data);
av_freep(&avctx->coded_side_data);
avctx->nb_coded_side_data = 0;
av_buffer_unref(&avctx->hw_frames_ctx);
av_buffer_unref(&avctx->hw_device_ctx);
if (avctx->priv_data && avctx->codec && avctx->codec->priv_class)
av_opt_free(avctx->priv_data);
av_opt_free(avctx);
av_freep(&avctx->priv_data);
if (av_codec_is_encoder(avctx->codec)) {
av_freep(&avctx->extradata);
#if FF_API_CODED_FRAME
FF_DISABLE_DEPRECATION_WARNINGS
av_frame_free(&avctx->coded_frame);
FF_ENABLE_DEPRECATION_WARNINGS
#endif
}
avctx->codec = NULL;
avctx->active_thread_type = 0;
return 0;
}
static const char *unknown_if_null(const char *str)
{
return str ? str : "unknown";
}
void avcodec_string(char *buf, int buf_size, AVCodecContext *enc, int encode)
{
const char *codec_type;
const char *codec_name;
const char *profile = NULL;
int64_t bitrate;
int new_line = 0;
AVRational display_aspect_ratio;
const char *separator = enc->dump_separator ? (const char *)enc->dump_separator : ", ";
const char *str;
if (!buf || buf_size <= 0)
return;
codec_type = av_get_media_type_string(enc->codec_type);
codec_name = avcodec_get_name(enc->codec_id);
profile = avcodec_profile_name(enc->codec_id, enc->profile);
snprintf(buf, buf_size, "%s: %s", codec_type ? codec_type : "unknown",
codec_name);
buf[0] ^= 'a' ^ 'A'; /* first letter in uppercase */
if (enc->codec && strcmp(enc->codec->name, codec_name))
snprintf(buf + strlen(buf), buf_size - strlen(buf), " (%s)", enc->codec->name);
if (profile)
snprintf(buf + strlen(buf), buf_size - strlen(buf), " (%s)", profile);
if ( enc->codec_type == AVMEDIA_TYPE_VIDEO
&& av_log_get_level() >= AV_LOG_VERBOSE
&& enc->refs)
snprintf(buf + strlen(buf), buf_size - strlen(buf),
", %d reference frame%s",
enc->refs, enc->refs > 1 ? "s" : "");
if (enc->codec_tag)
snprintf(buf + strlen(buf), buf_size - strlen(buf), " (%s / 0x%04X)",
av_fourcc2str(enc->codec_tag), enc->codec_tag);
switch (enc->codec_type) {
case AVMEDIA_TYPE_VIDEO:
{
char detail[256] = "(";
av_strlcat(buf, separator, buf_size);
snprintf(buf + strlen(buf), buf_size - strlen(buf),
"%s", enc->pix_fmt == AV_PIX_FMT_NONE ? "none" :
unknown_if_null(av_get_pix_fmt_name(enc->pix_fmt)));
if (enc->bits_per_raw_sample && enc->pix_fmt != AV_PIX_FMT_NONE &&
enc->bits_per_raw_sample < av_pix_fmt_desc_get(enc->pix_fmt)->comp[0].depth)
av_strlcatf(detail, sizeof(detail), "%d bpc, ", enc->bits_per_raw_sample);
if (enc->color_range != AVCOL_RANGE_UNSPECIFIED &&
(str = av_color_range_name(enc->color_range)))
av_strlcatf(detail, sizeof(detail), "%s, ", str);
if (enc->colorspace != AVCOL_SPC_UNSPECIFIED ||
enc->color_primaries != AVCOL_PRI_UNSPECIFIED ||
enc->color_trc != AVCOL_TRC_UNSPECIFIED) {
const char *col = unknown_if_null(av_color_space_name(enc->colorspace));
const char *pri = unknown_if_null(av_color_primaries_name(enc->color_primaries));
const char *trc = unknown_if_null(av_color_transfer_name(enc->color_trc));
if (strcmp(col, pri) || strcmp(col, trc)) {
new_line = 1;
av_strlcatf(detail, sizeof(detail), "%s/%s/%s, ",
col, pri, trc);
} else
av_strlcatf(detail, sizeof(detail), "%s, ", col);
}
if (enc->field_order != AV_FIELD_UNKNOWN) {
const char *field_order = "progressive";
if (enc->field_order == AV_FIELD_TT)
field_order = "top first";
else if (enc->field_order == AV_FIELD_BB)
field_order = "bottom first";
else if (enc->field_order == AV_FIELD_TB)
field_order = "top coded first (swapped)";
else if (enc->field_order == AV_FIELD_BT)
field_order = "bottom coded first (swapped)";
av_strlcatf(detail, sizeof(detail), "%s, ", field_order);
}
if (av_log_get_level() >= AV_LOG_VERBOSE &&
enc->chroma_sample_location != AVCHROMA_LOC_UNSPECIFIED &&
(str = av_chroma_location_name(enc->chroma_sample_location)))
av_strlcatf(detail, sizeof(detail), "%s, ", str);
if (strlen(detail) > 1) {
detail[strlen(detail) - 2] = 0;
av_strlcatf(buf, buf_size, "%s)", detail);
}
}
if (enc->width) {
av_strlcat(buf, new_line ? separator : ", ", buf_size);
snprintf(buf + strlen(buf), buf_size - strlen(buf),
"%dx%d",
enc->width, enc->height);
if (av_log_get_level() >= AV_LOG_VERBOSE &&
(enc->width != enc->coded_width ||
enc->height != enc->coded_height))
snprintf(buf + strlen(buf), buf_size - strlen(buf),
" (%dx%d)", enc->coded_width, enc->coded_height);
if (enc->sample_aspect_ratio.num) {
av_reduce(&display_aspect_ratio.num, &display_aspect_ratio.den,
enc->width * (int64_t)enc->sample_aspect_ratio.num,
enc->height * (int64_t)enc->sample_aspect_ratio.den,
1024 * 1024);
snprintf(buf + strlen(buf), buf_size - strlen(buf),
" [SAR %d:%d DAR %d:%d]",
enc->sample_aspect_ratio.num, enc->sample_aspect_ratio.den,
display_aspect_ratio.num, display_aspect_ratio.den);
}
if (av_log_get_level() >= AV_LOG_DEBUG) {
int g = av_gcd(enc->time_base.num, enc->time_base.den);
snprintf(buf + strlen(buf), buf_size - strlen(buf),
", %d/%d",
enc->time_base.num / g, enc->time_base.den / g);
}
}
if (encode) {
snprintf(buf + strlen(buf), buf_size - strlen(buf),
", q=%d-%d", enc->qmin, enc->qmax);
} else {
if (enc->properties & FF_CODEC_PROPERTY_CLOSED_CAPTIONS)
snprintf(buf + strlen(buf), buf_size - strlen(buf),
", Closed Captions");
if (enc->properties & FF_CODEC_PROPERTY_LOSSLESS)
snprintf(buf + strlen(buf), buf_size - strlen(buf),
", lossless");
}
break;
case AVMEDIA_TYPE_AUDIO:
av_strlcat(buf, separator, buf_size);
if (enc->sample_rate) {
snprintf(buf + strlen(buf), buf_size - strlen(buf),
"%d Hz, ", enc->sample_rate);
}
av_get_channel_layout_string(buf + strlen(buf), buf_size - strlen(buf), enc->channels, enc->channel_layout);
if (enc->sample_fmt != AV_SAMPLE_FMT_NONE &&
(str = av_get_sample_fmt_name(enc->sample_fmt))) {
snprintf(buf + strlen(buf), buf_size - strlen(buf),
", %s", str);
}
if ( enc->bits_per_raw_sample > 0
&& enc->bits_per_raw_sample != av_get_bytes_per_sample(enc->sample_fmt) * 8)
snprintf(buf + strlen(buf), buf_size - strlen(buf),
" (%d bit)", enc->bits_per_raw_sample);
if (av_log_get_level() >= AV_LOG_VERBOSE) {
if (enc->initial_padding)
snprintf(buf + strlen(buf), buf_size - strlen(buf),
", delay %d", enc->initial_padding);
if (enc->trailing_padding)
snprintf(buf + strlen(buf), buf_size - strlen(buf),
", padding %d", enc->trailing_padding);
}
break;
case AVMEDIA_TYPE_DATA:
if (av_log_get_level() >= AV_LOG_DEBUG) {
int g = av_gcd(enc->time_base.num, enc->time_base.den);
if (g)
snprintf(buf + strlen(buf), buf_size - strlen(buf),
", %d/%d",
enc->time_base.num / g, enc->time_base.den / g);
}
break;
case AVMEDIA_TYPE_SUBTITLE:
if (enc->width)
snprintf(buf + strlen(buf), buf_size - strlen(buf),
", %dx%d", enc->width, enc->height);
break;
default:
return;
}
if (encode) {
if (enc->flags & AV_CODEC_FLAG_PASS1)
snprintf(buf + strlen(buf), buf_size - strlen(buf),
", pass 1");
if (enc->flags & AV_CODEC_FLAG_PASS2)
snprintf(buf + strlen(buf), buf_size - strlen(buf),
", pass 2");
}
bitrate = get_bit_rate(enc);
if (bitrate != 0) {
snprintf(buf + strlen(buf), buf_size - strlen(buf),
", %"PRId64" kb/s", bitrate / 1000);
} else if (enc->rc_max_rate > 0) {
snprintf(buf + strlen(buf), buf_size - strlen(buf),
", max. %"PRId64" kb/s", enc->rc_max_rate / 1000);
}
}
int avcodec_is_open(AVCodecContext *s)
{
return !!s->internal;
}

View File

@@ -1,118 +0,0 @@
/*
* AVS3 related definitions
*
* Copyright (C) 2020 Huiwen Ren, <hwrenx@gmail.com>
*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#ifndef AVCODEC_AVS3_H
#define AVCODEC_AVS3_H
#define AVS3_NAL_START_CODE 0x010000
#define AVS3_SEQ_START_CODE 0xB0
#define AVS3_SEQ_END_CODE 0xB1
#define AVS3_USER_DATA_START_CODE 0xB2
#define AVS3_INTRA_PIC_START_CODE 0xB3
#define AVS3_UNDEF_START_CODE 0xB4
#define AVS3_EXTENSION_START_CODE 0xB5
#define AVS3_INTER_PIC_START_CODE 0xB6
#define AVS3_VIDEO_EDIT_CODE 0xB7
#define AVS3_FIRST_SLICE_START_CODE 0x00
#define AVS3_PROFILE_BASELINE_MAIN 0x20
#define AVS3_PROFILE_BASELINE_MAIN10 0x22
#define AVS3_ISPIC(x) ((x) == AVS3_INTRA_PIC_START_CODE || (x) == AVS3_INTER_PIC_START_CODE)
#define AVS3_ISUNIT(x) ((x) == AVS3_SEQ_START_CODE || AVS3_ISPIC(x))
#include "libavutil/avutil.h"
#include "libavutil/pixfmt.h"
#include "libavutil/rational.h"
static const AVRational ff_avs3_frame_rate_tab[16] = {
{ 0 , 0 }, // forbidden
{ 24000, 1001},
{ 24 , 1 },
{ 25 , 1 },
{ 30000, 1001},
{ 30 , 1 },
{ 50 , 1 },
{ 60000, 1001},
{ 60 , 1 },
{ 100 , 1 },
{ 120 , 1 },
{ 200 , 1 },
{ 240 , 1 },
{ 300 , 1 },
{ 0 , 0 }, // reserved
{ 0 , 0 } // reserved
};
static const int ff_avs3_color_primaries_tab[10] = {
AVCOL_PRI_RESERVED0 , // 0
AVCOL_PRI_BT709 , // 1
AVCOL_PRI_UNSPECIFIED , // 2
AVCOL_PRI_RESERVED , // 3
AVCOL_PRI_BT470M , // 4
AVCOL_PRI_BT470BG , // 5
AVCOL_PRI_SMPTE170M , // 6
AVCOL_PRI_SMPTE240M , // 7
AVCOL_PRI_FILM , // 8
AVCOL_PRI_BT2020 // 9
};
static const int ff_avs3_color_transfer_tab[15] = {
AVCOL_TRC_RESERVED0 , // 0
AVCOL_TRC_BT709 , // 1
AVCOL_TRC_UNSPECIFIED , // 2
AVCOL_TRC_RESERVED , // 3
AVCOL_TRC_GAMMA22 , // 4
AVCOL_TRC_GAMMA28 , // 5
AVCOL_TRC_SMPTE170M , // 6
AVCOL_TRC_SMPTE240M , // 7
AVCOL_TRC_LINEAR , // 8
AVCOL_TRC_LOG , // 9
AVCOL_TRC_LOG_SQRT , // 10
AVCOL_TRC_BT2020_12 , // 11
AVCOL_TRC_SMPTE2084 , // 12
AVCOL_TRC_UNSPECIFIED , // 13
AVCOL_TRC_ARIB_STD_B67 // 14
};
static const int ff_avs3_color_matrix_tab[12] = {
AVCOL_SPC_RESERVED , // 0
AVCOL_SPC_BT709 , // 1
AVCOL_SPC_UNSPECIFIED , // 2
AVCOL_SPC_RESERVED , // 3
AVCOL_SPC_FCC , // 4
AVCOL_SPC_BT470BG , // 5
AVCOL_SPC_SMPTE170M , // 6
AVCOL_SPC_SMPTE240M , // 7
AVCOL_SPC_BT2020_NCL , // 8
AVCOL_SPC_BT2020_CL , // 9
AVCOL_SPC_UNSPECIFIED , // 10
AVCOL_SPC_UNSPECIFIED // 11
};
static const enum AVPictureType ff_avs3_image_type[4] = {
AV_PICTURE_TYPE_NONE,
AV_PICTURE_TYPE_I,
AV_PICTURE_TYPE_P,
AV_PICTURE_TYPE_B
};
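/*
 * Illustrative sketch (not part of the original header): mapping the 4-bit
 * frame_rate_code from a sequence header to an AVRational. The {25, 1}
 * fallback for forbidden/reserved codes is an assumption of this example,
 * not something the specification mandates.
 */
static inline AVRational avs3_frame_rate_example(int ratecode)
{
    AVRational r = ff_avs3_frame_rate_tab[ratecode & 0xF];
    return r.num ? r : (AVRational){ 25, 1 };
}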
#endif /* AVCODEC_AVS3_H */

View File

@@ -1,179 +0,0 @@
/*
* AVS3-P2/IEEE1857.10 video parser.
* Copyright (c) 2020 Zhenyu Wang <wangzhenyu@pkusz.edu.cn>
* Bingjie Han <hanbj@pkusz.edu.cn>
* Huiwen Ren <hwrenx@gmail.com>
*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include "avs3.h"
#include "get_bits.h"
#include "parser.h"
static int avs3_find_frame_end(ParseContext *pc, const uint8_t *buf, int buf_size)
{
int pic_found = pc->frame_start_found;
uint32_t state = pc->state;
int cur = 0;
if (!pic_found) {
for (; cur < buf_size; ++cur) {
state = (state << 8) | buf[cur];
if (AVS3_ISPIC(buf[cur])){
cur++;
pic_found = 1;
break;
}
}
}
if (pic_found) {
if (!buf_size)
return END_NOT_FOUND;
for (; cur < buf_size; ++cur) {
state = (state << 8) | buf[cur];
if ((state & 0xFFFFFF00) == 0x100 && AVS3_ISUNIT(state & 0xFF)) {
pc->frame_start_found = 0;
pc->state = -1;
return cur - 3;
}
}
}
pc->frame_start_found = pic_found;
pc->state = state;
return END_NOT_FOUND;
}
static void parse_avs3_nal_units(AVCodecParserContext *s, const uint8_t *buf,
int buf_size, AVCodecContext *avctx)
{
if (buf_size < 5) {
return;
}
if (buf[0] == 0x0 && buf[1] == 0x0 && buf[2] == 0x1) {
if (buf[3] == AVS3_SEQ_START_CODE) {
GetBitContext gb;
int profile, ratecode;
init_get_bits(&gb, buf + 4, buf_size - 4);
s->key_frame = 1;
s->pict_type = AV_PICTURE_TYPE_I;
profile = get_bits(&gb, 8);
// Skip bits: level(8)
// progressive(1)
// field(1)
// library(2)
// resv(1)
// width(14)
// resv(1)
// height(14)
// chroma(2)
// sample_precision(3)
skip_bits(&gb, 47);
if (profile == AVS3_PROFILE_BASELINE_MAIN10) {
int sample_precision = get_bits(&gb, 3);
if (sample_precision == 1) {
avctx->pix_fmt = AV_PIX_FMT_YUV420P;
} else if (sample_precision == 2) {
avctx->pix_fmt = AV_PIX_FMT_YUV420P10LE;
} else {
avctx->pix_fmt = AV_PIX_FMT_NONE;
}
}
// Skip bits: resv(1)
// aspect(4)
skip_bits(&gb, 5);
ratecode = get_bits(&gb, 4);
// Skip bits: resv(1)
// bitrate_low(18)
// resv(1)
// bitrate_high(12)
skip_bits(&gb, 32);
avctx->has_b_frames = !get_bits(&gb, 1);
avctx->framerate.num = avctx->time_base.den = ff_avs3_frame_rate_tab[ratecode].num;
avctx->framerate.den = avctx->time_base.num = ff_avs3_frame_rate_tab[ratecode].den;
s->width = s->coded_width = avctx->width;
s->height = s->coded_height = avctx->height;
av_log(avctx, AV_LOG_DEBUG,
"AVS3 parse seq HDR: profile %d; coded size: %dx%d; frame rate code: %d\n",
profile, avctx->width, avctx->height, ratecode);
} else if (buf[3] == AVS3_INTRA_PIC_START_CODE) {
s->key_frame = 1;
s->pict_type = AV_PICTURE_TYPE_I;
} else if (buf[3] == AVS3_INTER_PIC_START_CODE){
s->key_frame = 0;
if (buf_size > 9) {
int pic_code_type = buf[8] & 0x3;
if (pic_code_type == 1 || pic_code_type == 3) {
s->pict_type = AV_PICTURE_TYPE_P;
} else {
s->pict_type = AV_PICTURE_TYPE_B;
}
}
}
}
}
static int avs3_parse(AVCodecParserContext *s, AVCodecContext *avctx,
const uint8_t **poutbuf, int *poutbuf_size,
const uint8_t *buf, int buf_size)
{
ParseContext *pc = s->priv_data;
int next;
if (s->flags & PARSER_FLAG_COMPLETE_FRAMES) {
next = buf_size;
} else {
next = avs3_find_frame_end(pc, buf, buf_size);
if (ff_combine_frame(pc, next, &buf, &buf_size) < 0) {
*poutbuf = NULL;
*poutbuf_size = 0;
return buf_size;
}
}
parse_avs3_nal_units(s, buf, buf_size, avctx);
*poutbuf = buf;
*poutbuf_size = buf_size;
return next;
}
AVCodecParser ff_avs3_parser = {
.codec_ids = { AV_CODEC_ID_AVS3 },
.priv_data_size = sizeof(ParseContext),
.parser_parse = avs3_parse,
.parser_close = ff_parse_close,
.split = ff_mpeg4video_split,
};
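/*
 * Illustrative sketch (not part of the original file): driving this parser
 * through the public API. parse_avs3_stream_example() is a hypothetical
 * helper and error handling is trimmed for brevity.
 */
static void parse_avs3_stream_example(AVCodecContext *avctx,
                                      const uint8_t *data, int size)
{
    AVCodecParserContext *pc = av_parser_init(AV_CODEC_ID_AVS3);
    while (pc && size > 0) {
        uint8_t *frame_data;
        int frame_size;
        int used = av_parser_parse2(pc, avctx, &frame_data, &frame_size,
                                    data, size,
                                    AV_NOPTS_VALUE, AV_NOPTS_VALUE, 0);
        data += used;
        size -= used;
        /* frame_size > 0 means frame_data holds one complete frame */
    }
    av_parser_close(pc);
}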

View File

@@ -1,159 +0,0 @@
/*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include "bsf_internal.h"
#include "cbs_bsf.h"
static int cbs_bsf_update_side_data(AVBSFContext *bsf, AVPacket *pkt)
{
CBSBSFContext *ctx = bsf->priv_data;
CodedBitstreamFragment *frag = &ctx->fragment;
uint8_t *side_data;
buffer_size_t side_data_size;
int err;
side_data = av_packet_get_side_data(pkt, AV_PKT_DATA_NEW_EXTRADATA,
&side_data_size);
if (!side_data_size)
return 0;
err = ff_cbs_read(ctx->input, frag, side_data, side_data_size);
if (err < 0) {
av_log(bsf, AV_LOG_ERROR,
"Failed to read extradata from packet side data.\n");
return err;
}
err = ctx->type->update_fragment(bsf, NULL, frag);
if (err < 0)
return err;
err = ff_cbs_write_fragment_data(ctx->output, frag);
if (err < 0) {
av_log(bsf, AV_LOG_ERROR,
"Failed to write extradata into packet side data.\n");
return err;
}
side_data = av_packet_new_side_data(pkt, AV_PKT_DATA_NEW_EXTRADATA,
frag->data_size);
if (!side_data)
return AVERROR(ENOMEM);
memcpy(side_data, frag->data, frag->data_size);
ff_cbs_fragment_reset(frag);
return 0;
}
int ff_cbs_bsf_generic_filter(AVBSFContext *bsf, AVPacket *pkt)
{
CBSBSFContext *ctx = bsf->priv_data;
CodedBitstreamFragment *frag = &ctx->fragment;
int err;
err = ff_bsf_get_packet_ref(bsf, pkt);
if (err < 0)
return err;
err = cbs_bsf_update_side_data(bsf, pkt);
if (err < 0)
goto fail;
err = ff_cbs_read_packet(ctx->input, frag, pkt);
if (err < 0) {
av_log(bsf, AV_LOG_ERROR, "Failed to read %s from packet.\n",
ctx->type->fragment_name);
goto fail;
}
if (frag->nb_units == 0) {
av_log(bsf, AV_LOG_ERROR, "No %s found in packet.\n",
ctx->type->unit_name);
err = AVERROR_INVALIDDATA;
goto fail;
}
err = ctx->type->update_fragment(bsf, pkt, frag);
if (err < 0)
goto fail;
err = ff_cbs_write_packet(ctx->output, pkt, frag);
if (err < 0) {
av_log(bsf, AV_LOG_ERROR, "Failed to write %s into packet.\n",
ctx->type->fragment_name);
goto fail;
}
err = 0;
fail:
ff_cbs_fragment_reset(frag);
if (err < 0)
av_packet_unref(pkt);
return err;
}
int ff_cbs_bsf_generic_init(AVBSFContext *bsf, const CBSBSFType *type)
{
CBSBSFContext *ctx = bsf->priv_data;
CodedBitstreamFragment *frag = &ctx->fragment;
int err;
ctx->type = type;
err = ff_cbs_init(&ctx->input, type->codec_id, bsf);
if (err < 0)
return err;
err = ff_cbs_init(&ctx->output, type->codec_id, bsf);
if (err < 0)
return err;
if (bsf->par_in->extradata) {
err = ff_cbs_read_extradata(ctx->input, frag, bsf->par_in);
if (err < 0) {
av_log(bsf, AV_LOG_ERROR, "Failed to read extradata.\n");
goto fail;
}
err = type->update_fragment(bsf, NULL, frag);
if (err < 0)
goto fail;
err = ff_cbs_write_extradata(ctx->output, bsf->par_out, frag);
if (err < 0) {
av_log(bsf, AV_LOG_ERROR, "Failed to write extradata.\n");
goto fail;
}
}
err = 0;
fail:
ff_cbs_fragment_reset(frag);
return err;
}
void ff_cbs_bsf_generic_close(AVBSFContext *bsf)
{
CBSBSFContext *ctx = bsf->priv_data;
ff_cbs_fragment_free(&ctx->fragment);
ff_cbs_close(&ctx->input);
ff_cbs_close(&ctx->output);
}

View File

@@ -1,131 +0,0 @@
/*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#ifndef AVCODEC_CBS_BSF_H
#define AVCODEC_CBS_BSF_H
#include "cbs.h"
typedef struct CBSBSFType {
enum AVCodecID codec_id;
// Name of a frame fragment in this codec (e.g. "access unit",
// "temporal unit").
const char *fragment_name;
// Name of a unit for this BSF, for use in error messages (e.g.
// "NAL unit", "OBU").
const char *unit_name;
// Update the content of a fragment with whatever metadata changes
// are desired. The associated AVPacket is provided so that any side
// data associated with the fragment can be inspected or edited. If
// pkt is NULL, then an extradata header fragment is being updated.
int (*update_fragment)(AVBSFContext *bsf, AVPacket *pkt,
CodedBitstreamFragment *frag);
} CBSBSFType;
// Common structure for all generic CBS BSF users. An instance of this
// structure must be the first member of the BSF private context (to be
// pointed to by AVBSFContext.priv_data).
typedef struct CBSBSFContext {
const AVClass *class;
const CBSBSFType *type;
CodedBitstreamContext *input;
CodedBitstreamContext *output;
CodedBitstreamFragment fragment;
} CBSBSFContext;
/**
* Initialise generic CBS BSF setup.
*
* Creates the input and output CBS instances, and applies the filter to
* the extradata on the input codecpar if any is present.
*
* Since it calls the update_fragment() function immediately to deal with
* extradata, this should be called after any codec-specific setup is done
* (probably at the end of the AVBitStreamFilter.init function).
*/
int ff_cbs_bsf_generic_init(AVBSFContext *bsf, const CBSBSFType *type);
/**
* Close a generic CBS BSF instance.
*
* If no other deinitialisation is required then this function can be used
* directly as AVBitStreamFilter.close.
*/
void ff_cbs_bsf_generic_close(AVBSFContext *bsf);
/**
* Filter operation for CBS BSF.
*
* Reads the input packet into a CBS fragment, calls update_fragment() on
* it, then writes the result to an output packet. If the input packet
* has AV_PKT_DATA_NEW_EXTRADATA side-data associated with it then it does
* the same thing to that new extradata to form the output side-data first.
*
* If the BSF does not do anything else then this function can be used
* directly as AVBitStreamFilter.filter.
*/
int ff_cbs_bsf_generic_filter(AVBSFContext *bsf, AVPacket *pkt);
// Options for element manipulation.
enum {
// Pass this element through unchanged.
BSF_ELEMENT_PASS,
// Insert this element, replacing any existing instances of it.
// Associated values may be provided explicitly (as additional options)
// or implicitly (either as side data or deduced from other parts of
// the stream).
BSF_ELEMENT_INSERT,
// Remove this element if it appears in the stream.
BSF_ELEMENT_REMOVE,
// Extract this element to side data, so that further manipulation
// can happen elsewhere.
BSF_ELEMENT_EXTRACT,
};
#define BSF_ELEMENT_OPTIONS_PIR(name, help, field, opt_flags) \
{ name, help, OFFSET(field), AV_OPT_TYPE_INT, \
{ .i64 = BSF_ELEMENT_PASS }, \
BSF_ELEMENT_PASS, BSF_ELEMENT_REMOVE, opt_flags, name }, \
{ "pass", NULL, 0, AV_OPT_TYPE_CONST, \
{ .i64 = BSF_ELEMENT_PASS }, .flags = opt_flags, .unit = name }, \
{ "insert", NULL, 0, AV_OPT_TYPE_CONST, \
{ .i64 = BSF_ELEMENT_INSERT }, .flags = opt_flags, .unit = name }, \
{ "remove", NULL, 0, AV_OPT_TYPE_CONST, \
{ .i64 = BSF_ELEMENT_REMOVE }, .flags = opt_flags, .unit = name }
#define BSF_ELEMENT_OPTIONS_PIRE(name, help, field, opt_flags) \
{ name, help, OFFSET(field), AV_OPT_TYPE_INT, \
{ .i64 = BSF_ELEMENT_PASS }, \
BSF_ELEMENT_PASS, BSF_ELEMENT_EXTRACT, opt_flags, name }, \
{ "pass", NULL, 0, AV_OPT_TYPE_CONST, \
{ .i64 = BSF_ELEMENT_PASS }, .flags = opt_flags, .unit = name }, \
{ "insert", NULL, 0, AV_OPT_TYPE_CONST, \
{ .i64 = BSF_ELEMENT_INSERT }, .flags = opt_flags, .unit = name }, \
{ "remove", NULL, 0, AV_OPT_TYPE_CONST, \
{ .i64 = BSF_ELEMENT_REMOVE }, .flags = opt_flags, .unit = name }, \
{ "extract", NULL, 0, AV_OPT_TYPE_CONST, \
{ .i64 = BSF_ELEMENT_EXTRACT }, .flags = opt_flags, .unit = name }
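/*
 * Illustrative sketch (not part of the original header): the minimal shape
 * of a bitstream filter built on these helpers. All "example" names are
 * hypothetical; a real user also declares an AVBitStreamFilter with
 * ff_cbs_bsf_generic_filter/close as its callbacks.
 */
#if 0
static int example_update_fragment(AVBSFContext *bsf, AVPacket *pkt,
                                   CodedBitstreamFragment *frag)
{
    /* inspect or edit frag->units here; pkt == NULL means extradata */
    return 0;
}

static const CBSBSFType example_type = {
    .codec_id        = AV_CODEC_ID_H264,
    .fragment_name   = "access unit",
    .unit_name       = "NAL unit",
    .update_fragment = &example_update_fragment,
};

static int example_init(AVBSFContext *bsf)
{
    return ff_cbs_bsf_generic_init(bsf, &example_type);
}
#endif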
#endif /* AVCODEC_CBS_BSF_H */

View File

@@ -1,369 +0,0 @@
/*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include "cbs.h"
#include "cbs_internal.h"
#include "cbs_h264.h"
#include "cbs_h265.h"
#include "cbs_sei.h"
static void cbs_free_user_data_registered(void *opaque, uint8_t *data)
{
SEIRawUserDataRegistered *udr = (SEIRawUserDataRegistered*)data;
av_buffer_unref(&udr->data_ref);
av_free(udr);
}
static void cbs_free_user_data_unregistered(void *opaque, uint8_t *data)
{
SEIRawUserDataUnregistered *udu = (SEIRawUserDataUnregistered*)data;
av_buffer_unref(&udu->data_ref);
av_free(udu);
}
int ff_cbs_sei_alloc_message_payload(SEIRawMessage *message,
const SEIMessageTypeDescriptor *desc)
{
void (*free_func)(void*, uint8_t*);
av_assert0(message->payload == NULL &&
message->payload_ref == NULL);
message->payload_type = desc->type;
if (desc->type == SEI_TYPE_USER_DATA_REGISTERED_ITU_T_T35)
free_func = &cbs_free_user_data_registered;
else if (desc->type == SEI_TYPE_USER_DATA_UNREGISTERED)
free_func = &cbs_free_user_data_unregistered;
else
free_func = NULL;
if (free_func) {
message->payload = av_mallocz(desc->size);
if (!message->payload)
return AVERROR(ENOMEM);
message->payload_ref =
av_buffer_create(message->payload, desc->size,
free_func, NULL, 0);
} else {
message->payload_ref = av_buffer_alloc(desc->size);
}
if (!message->payload_ref) {
av_freep(&message->payload);
return AVERROR(ENOMEM);
}
message->payload = message->payload_ref->data;
return 0;
}
int ff_cbs_sei_list_add(SEIRawMessageList *list)
{
void *ptr;
int old_count = list->nb_messages_allocated;
av_assert0(list->nb_messages <= old_count);
if (list->nb_messages + 1 > old_count) {
int new_count = 2 * old_count + 1;
ptr = av_realloc_array(list->messages,
new_count, sizeof(*list->messages));
if (!ptr)
return AVERROR(ENOMEM);
list->messages = ptr;
list->nb_messages_allocated = new_count;
// Zero the newly-added entries.
memset(list->messages + old_count, 0,
(new_count - old_count) * sizeof(*list->messages));
}
++list->nb_messages;
return 0;
}
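/* Illustrative note: the 2 * n + 1 growth gives capacities 1, 3, 7, 15, ...,
 * so repeated ff_cbs_sei_list_add() calls append in amortized constant time. */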
void ff_cbs_sei_free_message_list(SEIRawMessageList *list)
{
for (int i = 0; i < list->nb_messages; i++) {
SEIRawMessage *message = &list->messages[i];
av_buffer_unref(&message->payload_ref);
av_buffer_unref(&message->extension_data_ref);
}
av_free(list->messages);
}
static int cbs_sei_get_unit(CodedBitstreamContext *ctx,
CodedBitstreamFragment *au,
int prefix,
CodedBitstreamUnit **sei_unit)
{
CodedBitstreamUnit *unit;
int sei_type, highest_vcl_type, err, i, position;
switch (ctx->codec->codec_id) {
case AV_CODEC_ID_H264:
// (We can ignore auxiliary slices because we only have prefix
// SEI in H.264 and an auxiliary picture must always follow a
// primary picture.)
highest_vcl_type = H264_NAL_IDR_SLICE;
if (prefix)
sei_type = H264_NAL_SEI;
else
return AVERROR(EINVAL);
break;
case AV_CODEC_ID_H265:
highest_vcl_type = HEVC_NAL_RSV_VCL31;
if (prefix)
sei_type = HEVC_NAL_SEI_PREFIX;
else
sei_type = HEVC_NAL_SEI_SUFFIX;
break;
default:
return AVERROR(EINVAL);
}
// Find an existing SEI NAL unit of the right type.
unit = NULL;
for (i = 0; i < au->nb_units; i++) {
if (au->units[i].type == sei_type) {
unit = &au->units[i];
break;
}
}
if (unit) {
*sei_unit = unit;
return 0;
}
// Need to add a new SEI NAL unit ...
if (prefix) {
// ... before the first VCL NAL unit.
for (i = 0; i < au->nb_units; i++) {
if (au->units[i].type < highest_vcl_type)
break;
}
position = i;
} else {
// ... after the last VCL NAL unit.
for (i = au->nb_units - 1; i >= 0; i--) {
if (au->units[i].type < highest_vcl_type)
break;
}
if (i < 0) {
// No VCL units; just put it at the end.
position = au->nb_units;
} else {
position = i + 1;
}
}
err = ff_cbs_insert_unit_content(au, position, sei_type,
NULL, NULL);
if (err < 0)
return err;
unit = &au->units[position];
unit->type = sei_type;
err = ff_cbs_alloc_unit_content2(ctx, unit);
if (err < 0)
return err;
switch (ctx->codec->codec_id) {
case AV_CODEC_ID_H264:
{
H264RawSEI sei = {
.nal_unit_header = {
.nal_ref_idc = 0,
.nal_unit_type = sei_type,
},
};
memcpy(unit->content, &sei, sizeof(sei));
}
break;
case AV_CODEC_ID_H265:
{
H265RawSEI sei = {
.nal_unit_header = {
.nal_unit_type = sei_type,
.nuh_layer_id = 0,
.nuh_temporal_id_plus1 = 1,
},
};
memcpy(unit->content, &sei, sizeof(sei));
}
break;
default:
av_assert0(0);
}
*sei_unit = unit;
return 0;
}
static int cbs_sei_get_message_list(CodedBitstreamContext *ctx,
CodedBitstreamUnit *unit,
SEIRawMessageList **list)
{
switch (ctx->codec->codec_id) {
case AV_CODEC_ID_H264:
{
H264RawSEI *sei = unit->content;
if (unit->type != H264_NAL_SEI)
return AVERROR(EINVAL);
*list = &sei->message_list;
}
break;
case AV_CODEC_ID_H265:
{
H265RawSEI *sei = unit->content;
if (unit->type != HEVC_NAL_SEI_PREFIX &&
unit->type != HEVC_NAL_SEI_SUFFIX)
return AVERROR(EINVAL);
*list = &sei->message_list;
}
break;
default:
return AVERROR(EINVAL);
}
return 0;
}
int ff_cbs_sei_add_message(CodedBitstreamContext *ctx,
CodedBitstreamFragment *au,
int prefix,
uint32_t payload_type,
void *payload_data,
AVBufferRef *payload_buf)
{
const SEIMessageTypeDescriptor *desc;
CodedBitstreamUnit *unit;
SEIRawMessageList *list;
SEIRawMessage *message;
AVBufferRef *payload_ref;
int err;
desc = ff_cbs_sei_find_type(ctx, payload_type);
if (!desc)
return AVERROR(EINVAL);
// Find an existing SEI unit or make a new one to add to.
err = cbs_sei_get_unit(ctx, au, prefix, &unit);
if (err < 0)
return err;
// Find the message list inside the codec-dependent unit.
err = cbs_sei_get_message_list(ctx, unit, &list);
if (err < 0)
return err;
// Add a new message to the message list.
err = ff_cbs_sei_list_add(list);
if (err < 0)
return err;
if (payload_buf) {
payload_ref = av_buffer_ref(payload_buf);
if (!payload_ref)
return AVERROR(ENOMEM);
} else {
payload_ref = NULL;
}
message = &list->messages[list->nb_messages - 1];
message->payload_type = payload_type;
message->payload = payload_data;
message->payload_ref = payload_ref;
return 0;
}
int ff_cbs_sei_find_message(CodedBitstreamContext *ctx,
CodedBitstreamFragment *au,
uint32_t payload_type,
SEIRawMessage **iter)
{
int err, i, j, found;
found = 0;
for (i = 0; i < au->nb_units; i++) {
CodedBitstreamUnit *unit = &au->units[i];
SEIRawMessageList *list;
err = cbs_sei_get_message_list(ctx, unit, &list);
if (err < 0)
continue;
for (j = 0; j < list->nb_messages; j++) {
SEIRawMessage *message = &list->messages[j];
if (message->payload_type == payload_type) {
if (!*iter || found) {
*iter = message;
return 0;
}
if (message == *iter)
found = 1;
}
}
}
return AVERROR(ENOENT);
}
static void cbs_sei_delete_message(SEIRawMessageList *list,
int position)
{
SEIRawMessage *message;
av_assert0(0 <= position && position < list->nb_messages);
message = &list->messages[position];
av_buffer_unref(&message->payload_ref);
av_buffer_unref(&message->extension_data_ref);
--list->nb_messages;
if (list->nb_messages > 0) {
memmove(list->messages + position,
list->messages + position + 1,
(list->nb_messages - position) * sizeof(*list->messages));
}
}
void ff_cbs_sei_delete_message_type(CodedBitstreamContext *ctx,
CodedBitstreamFragment *au,
uint32_t payload_type)
{
int err, i, j;
for (i = 0; i < au->nb_units; i++) {
CodedBitstreamUnit *unit = &au->units[i];
SEIRawMessageList *list;
err = cbs_sei_get_message_list(ctx, unit, &list);
if (err < 0)
continue;
for (j = list->nb_messages - 1; j >= 0; j--) {
if (list->messages[j].payload_type == payload_type)
cbs_sei_delete_message(list, j);
}
}
}
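/*
 * Illustrative sketch (not part of the original file): attaching a
 * mastering-display SEI message to an access unit. add_mdcv_sei_example()
 * is a hypothetical helper; passing a NULL payload_buf means the payload
 * is not reference counted, as documented in cbs_sei.h.
 */
static int add_mdcv_sei_example(CodedBitstreamContext *ctx,
                                CodedBitstreamFragment *au,
                                SEIRawMasteringDisplayColourVolume *mdcv)
{
    /* mastering display colour volume is carried in a prefix SEI NAL unit */
    return ff_cbs_sei_add_message(ctx, au, 1,
                                  SEI_TYPE_MASTERING_DISPLAY_COLOUR_VOLUME,
                                  mdcv, NULL);
}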

View File

@@ -1,199 +0,0 @@
/*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#ifndef AVCODEC_CBS_SEI_H
#define AVCODEC_CBS_SEI_H
#include <stddef.h>
#include <stdint.h>
#include "libavutil/buffer.h"
#include "cbs.h"
#include "sei.h"
typedef struct SEIRawFillerPayload {
uint32_t payload_size;
} SEIRawFillerPayload;
typedef struct SEIRawUserDataRegistered {
uint8_t itu_t_t35_country_code;
uint8_t itu_t_t35_country_code_extension_byte;
uint8_t *data;
AVBufferRef *data_ref;
size_t data_length;
} SEIRawUserDataRegistered;
typedef struct SEIRawUserDataUnregistered {
uint8_t uuid_iso_iec_11578[16];
uint8_t *data;
AVBufferRef *data_ref;
size_t data_length;
} SEIRawUserDataUnregistered;
typedef struct SEIRawMasteringDisplayColourVolume {
uint16_t display_primaries_x[3];
uint16_t display_primaries_y[3];
uint16_t white_point_x;
uint16_t white_point_y;
uint32_t max_display_mastering_luminance;
uint32_t min_display_mastering_luminance;
} SEIRawMasteringDisplayColourVolume;
typedef struct SEIRawContentLightLevelInfo {
uint16_t max_content_light_level;
uint16_t max_pic_average_light_level;
} SEIRawContentLightLevelInfo;
typedef struct SEIRawAlternativeTransferCharacteristics {
uint8_t preferred_transfer_characteristics;
} SEIRawAlternativeTransferCharacteristics;
typedef struct SEIRawMessage {
uint32_t payload_type;
uint32_t payload_size;
void *payload;
AVBufferRef *payload_ref;
uint8_t *extension_data;
AVBufferRef *extension_data_ref;
size_t extension_bit_length;
} SEIRawMessage;
typedef struct SEIRawMessageList {
SEIRawMessage *messages;
int nb_messages;
int nb_messages_allocated;
} SEIRawMessageList;
typedef struct SEIMessageState {
// The type of the payload being written.
uint32_t payload_type;
// When reading, contains the size of the payload to allow finding the
// end of variable-length fields (such as user_data_payload_byte[]).
// (When writing, the size will be derived from the total number of
// bytes actually written.)
uint32_t payload_size;
// When writing, indicates that payload extension data is present so
// all extended fields must be written. May be updated by the writer
// to indicate that extended fields have been written, so the extension
// end bits must be written too.
uint8_t extension_present;
} SEIMessageState;
struct GetBitContext;
struct PutBitContext;
typedef int (*SEIMessageReadFunction)(CodedBitstreamContext *ctx,
struct GetBitContext *rw,
void *current,
SEIMessageState *sei);
typedef int (*SEIMessageWriteFunction)(CodedBitstreamContext *ctx,
struct PutBitContext *rw,
void *current,
SEIMessageState *sei);
typedef struct SEIMessageTypeDescriptor {
// Payload type for the message. (-1 in this field ends a list.)
int type;
// Valid in a prefix SEI NAL unit (always for H.264).
uint8_t prefix;
// Valid in a suffix SEI NAL unit (never for H.264).
uint8_t suffix;
// Size of the decomposed structure.
size_t size;
// Read bitstream into SEI message.
SEIMessageReadFunction read;
// Write bitstream from SEI message.
SEIMessageWriteFunction write;
} SEIMessageTypeDescriptor;
// Macro for the read/write pair. The clumsy cast is needed because the
// current pointer is typed in all of the read/write functions but has to
// be void here to fit all cases.
#define SEI_MESSAGE_RW(codec, name) \
.read = (SEIMessageReadFunction) cbs_ ## codec ## _read_ ## name, \
.write = (SEIMessageWriteFunction)cbs_ ## codec ## _write_ ## name
// End-of-list sentinel element.
#define SEI_MESSAGE_TYPE_END { .type = -1 }
/**
* Find the type descriptor for the given payload type.
*
* Returns NULL if the payload type is not known.
*/
const SEIMessageTypeDescriptor *ff_cbs_sei_find_type(CodedBitstreamContext *ctx,
int payload_type);
/**
* Allocate a new payload for the given SEI message.
*/
int ff_cbs_sei_alloc_message_payload(SEIRawMessage *message,
const SEIMessageTypeDescriptor *desc);
/**
* Allocate a new empty SEI message in a message list.
*
* The new message is at index nb_messages - 1.
*/
int ff_cbs_sei_list_add(SEIRawMessageList *list);
/**
* Free all SEI messages in a message list.
*/
void ff_cbs_sei_free_message_list(SEIRawMessageList *list);
/**
* Add an SEI message to an access unit.
*
* Will add to an existing SEI NAL unit, or create a new one for the
* message if there is no suitable existing one.
*
* Takes a new reference to payload_buf, if set. If payload_buf is
* NULL then the new message will not be reference counted.
*/
int ff_cbs_sei_add_message(CodedBitstreamContext *ctx,
CodedBitstreamFragment *au,
int prefix,
uint32_t payload_type,
void *payload_data,
AVBufferRef *payload_buf);
/**
* Iterate over messages with the given payload type in an access unit.
*
* Set message to NULL in the first call. Returns 0 while more messages
* are available, AVERROR(ENOENT) when all messages have been found.
*/
int ff_cbs_sei_find_message(CodedBitstreamContext *ctx,
CodedBitstreamFragment *au,
uint32_t payload_type,
SEIRawMessage **message);
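/*
 * Illustrative iteration pattern (not part of the original header):
 *
 *     SEIRawMessage *msg = NULL;
 *     while (ff_cbs_sei_find_message(ctx, au, payload_type, &msg) == 0) {
 *         // msg points at the next message with the given payload type
 *     }
 */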
/**
* Delete all messages with the given payload type from an access unit.
*/
void ff_cbs_sei_delete_message_type(CodedBitstreamContext *ctx,
CodedBitstreamFragment *au,
uint32_t payload_type);
#endif /* AVCODEC_CBS_SEI_H */

View File

@@ -1,322 +0,0 @@
/*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
static int FUNC(filler_payload)
(CodedBitstreamContext *ctx, RWContext *rw,
SEIRawFillerPayload *current, SEIMessageState *state)
{
int err, i;
HEADER("Filler Payload");
#ifdef READ
current->payload_size = state->payload_size;
#endif
for (i = 0; i < current->payload_size; i++)
fixed(8, ff_byte, 0xff);
return 0;
}
static int FUNC(user_data_registered)
(CodedBitstreamContext *ctx, RWContext *rw,
SEIRawUserDataRegistered *current, SEIMessageState *state)
{
int err, i, j;
HEADER("User Data Registered ITU-T T.35");
u(8, itu_t_t35_country_code, 0x00, 0xff);
if (current->itu_t_t35_country_code != 0xff)
i = 1;
else {
u(8, itu_t_t35_country_code_extension_byte, 0x00, 0xff);
i = 2;
}
#ifdef READ
if (state->payload_size < i) {
av_log(ctx->log_ctx, AV_LOG_ERROR,
"Invalid SEI user data registered payload.\n");
return AVERROR_INVALIDDATA;
}
current->data_length = state->payload_size - i;
#endif
allocate(current->data, current->data_length);
for (j = 0; j < current->data_length; j++)
xu(8, itu_t_t35_payload_byte[], current->data[j], 0x00, 0xff, 1, i + j);
return 0;
}
static int FUNC(user_data_unregistered)
(CodedBitstreamContext *ctx, RWContext *rw,
SEIRawUserDataUnregistered *current, SEIMessageState *state)
{
int err, i;
HEADER("User Data Unregistered");
#ifdef READ
if (state->payload_size < 16) {
av_log(ctx->log_ctx, AV_LOG_ERROR,
"Invalid SEI user data unregistered payload.\n");
return AVERROR_INVALIDDATA;
}
current->data_length = state->payload_size - 16;
#endif
for (i = 0; i < 16; i++)
us(8, uuid_iso_iec_11578[i], 0x00, 0xff, 1, i);
allocate(current->data, current->data_length);
for (i = 0; i < current->data_length; i++)
xu(8, user_data_payload_byte[i], current->data[i], 0x00, 0xff, 1, i);
return 0;
}
static int FUNC(mastering_display_colour_volume)
(CodedBitstreamContext *ctx, RWContext *rw,
SEIRawMasteringDisplayColourVolume *current, SEIMessageState *state)
{
int err, c;
HEADER("Mastering Display Colour Volume");
for (c = 0; c < 3; c++) {
ubs(16, display_primaries_x[c], 1, c);
ubs(16, display_primaries_y[c], 1, c);
}
ub(16, white_point_x);
ub(16, white_point_y);
ub(32, max_display_mastering_luminance);
ub(32, min_display_mastering_luminance);
return 0;
}
static int FUNC(content_light_level_info)
(CodedBitstreamContext *ctx, RWContext *rw,
SEIRawContentLightLevelInfo *current, SEIMessageState *state)
{
int err;
HEADER("Content Light Level Information");
ub(16, max_content_light_level);
ub(16, max_pic_average_light_level);
return 0;
}
static int FUNC(alternative_transfer_characteristics)
(CodedBitstreamContext *ctx, RWContext *rw,
SEIRawAlternativeTransferCharacteristics *current,
SEIMessageState *state)
{
int err;
HEADER("Alternative Transfer Characteristics");
ub(8, preferred_transfer_characteristics);
return 0;
}
static int FUNC(message)(CodedBitstreamContext *ctx, RWContext *rw,
SEIRawMessage *current)
{
const SEIMessageTypeDescriptor *desc;
int err, i;
desc = ff_cbs_sei_find_type(ctx, current->payload_type);
if (desc) {
SEIMessageState state = {
.payload_type = current->payload_type,
.payload_size = current->payload_size,
.extension_present = current->extension_bit_length > 0,
};
int start_position, current_position, bits_written;
#ifdef READ
CHECK(ff_cbs_sei_alloc_message_payload(current, desc));
#endif
start_position = bit_position(rw);
CHECK(desc->READWRITE(ctx, rw, current->payload, &state));
current_position = bit_position(rw);
bits_written = current_position - start_position;
if (byte_alignment(rw) || state.extension_present ||
bits_written < 8 * current->payload_size) {
size_t bits_left;
#ifdef READ
GetBitContext tmp = *rw;
int trailing_bits, trailing_zero_bits;
bits_left = 8 * current->payload_size - bits_written;
if (bits_left > 8)
skip_bits_long(&tmp, bits_left - 8);
trailing_bits = get_bits(&tmp, FFMIN(bits_left, 8));
if (trailing_bits == 0) {
// The trailing bits must contain a bit_equal_to_one, so
// they can't all be zero.
return AVERROR_INVALIDDATA;
}
trailing_zero_bits = ff_ctz(trailing_bits);
current->extension_bit_length =
bits_left - 1 - trailing_zero_bits;
#endif
if (current->extension_bit_length > 0) {
allocate(current->extension_data,
(current->extension_bit_length + 7) / 8);
bits_left = current->extension_bit_length;
for (i = 0; bits_left > 0; i++) {
int length = FFMIN(bits_left, 8);
xu(length, reserved_payload_extension_data,
current->extension_data[i],
0, MAX_UINT_BITS(length), 0);
bits_left -= length;
}
}
fixed(1, bit_equal_to_one, 1);
while (byte_alignment(rw))
fixed(1, bit_equal_to_zero, 0);
}
#ifdef WRITE
current->payload_size = (put_bits_count(rw) - start_position) / 8;
#endif
} else {
uint8_t *data;
allocate(current->payload, current->payload_size);
data = current->payload;
for (i = 0; i < current->payload_size; i++)
xu(8, payload_byte[i], data[i], 0, 255, 1, i);
}
return 0;
}
static int FUNC(message_list)(CodedBitstreamContext *ctx, RWContext *rw,
SEIRawMessageList *current, int prefix)
{
SEIRawMessage *message;
int err, k;
#ifdef READ
for (k = 0;; k++) {
uint32_t payload_type = 0;
uint32_t payload_size = 0;
uint32_t tmp;
GetBitContext payload_gbc;
while (show_bits(rw, 8) == 0xff) {
fixed(8, ff_byte, 0xff);
payload_type += 255;
}
xu(8, last_payload_type_byte, tmp, 0, 254, 0);
payload_type += tmp;
while (show_bits(rw, 8) == 0xff) {
fixed(8, ff_byte, 0xff);
payload_size += 255;
}
xu(8, last_payload_size_byte, tmp, 0, 254, 0);
payload_size += tmp;
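// Worked example of the coding above: a payload_size of 300 arrives as
// one 0xff byte (adding 255) followed by last_payload_size_byte == 45,
// since 255 + 45 == 300.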
// There must be space remaining for both the payload and
// the trailing bits on the SEI NAL unit.
if (payload_size + 1 > get_bits_left(rw) / 8) {
av_log(ctx->log_ctx, AV_LOG_ERROR,
"Invalid SEI message: payload_size too large "
"(%"PRIu32" bytes).\n", payload_size);
return AVERROR_INVALIDDATA;
}
CHECK(init_get_bits(&payload_gbc, rw->buffer,
get_bits_count(rw) + 8 * payload_size));
skip_bits_long(&payload_gbc, get_bits_count(rw));
CHECK(ff_cbs_sei_list_add(current));
message = &current->messages[k];
message->payload_type = payload_type;
message->payload_size = payload_size;
CHECK(FUNC(message)(ctx, &payload_gbc, message));
skip_bits_long(rw, 8 * payload_size);
if (!cbs_h2645_read_more_rbsp_data(rw))
break;
}
#else
for (k = 0; k < current->nb_messages; k++) {
PutBitContext start_state;
uint32_t tmp;
int trace, i;
message = &current->messages[k];
// We write the payload twice in order to find the size. Trace
// output is switched off for the first write.
trace = ctx->trace_enable;
ctx->trace_enable = 0;
start_state = *rw;
for (i = 0; i < 2; i++) {
*rw = start_state;
tmp = message->payload_type;
while (tmp >= 255) {
fixed(8, ff_byte, 0xff);
tmp -= 255;
}
xu(8, last_payload_type_byte, tmp, 0, 254, 0);
tmp = message->payload_size;
while (tmp >= 255) {
fixed(8, ff_byte, 0xff);
tmp -= 255;
}
xu(8, last_payload_size_byte, tmp, 0, 254, 0);
err = FUNC(message)(ctx, rw, message);
ctx->trace_enable = trace;
if (err < 0)
return err;
}
}
#endif
return 0;
}

View File

@@ -1,118 +0,0 @@
/*
* Copyright (c) 2015-2016 Kieran Kunhya <kieran@kunhya.com>
*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include "libavutil/attributes.h"
#include "libavutil/common.h"
#include "libavutil/avassert.h"
#include "cfhddsp.h"
static av_always_inline void filter(int16_t *output, ptrdiff_t out_stride,
const int16_t *low, ptrdiff_t low_stride,
const int16_t *high, ptrdiff_t high_stride,
int len, int clip)
{
int16_t tmp;
int i;
tmp = (11*low[0*low_stride] - 4*low[1*low_stride] + low[2*low_stride] + 4) >> 3;
output[(2*0+0)*out_stride] = (tmp + high[0*high_stride]) >> 1;
if (clip)
output[(2*0+0)*out_stride] = av_clip_uintp2_c(output[(2*0+0)*out_stride], clip);
tmp = ( 5*low[0*low_stride] + 4*low[1*low_stride] - low[2*low_stride] + 4) >> 3;
output[(2*0+1)*out_stride] = (tmp - high[0*high_stride]) >> 1;
if (clip)
output[(2*0+1)*out_stride] = av_clip_uintp2_c(output[(2*0+1)*out_stride], clip);
for (i = 1; i < len - 1; i++) {
tmp = (low[(i-1)*low_stride] - low[(i+1)*low_stride] + 4) >> 3;
output[(2*i+0)*out_stride] = (tmp + low[i*low_stride] + high[i*high_stride]) >> 1;
if (clip)
output[(2*i+0)*out_stride] = av_clip_uintp2_c(output[(2*i+0)*out_stride], clip);
tmp = (low[(i+1)*low_stride] - low[(i-1)*low_stride] + 4) >> 3;
output[(2*i+1)*out_stride] = (tmp + low[i*low_stride] - high[i*high_stride]) >> 1;
if (clip)
output[(2*i+1)*out_stride] = av_clip_uintp2_c(output[(2*i+1)*out_stride], clip);
}
tmp = ( 5*low[i*low_stride] + 4*low[(i-1)*low_stride] - low[(i-2)*low_stride] + 4) >> 3;
output[(2*i+0)*out_stride] = (tmp + high[i*high_stride]) >> 1;
if (clip)
output[(2*i+0)*out_stride] = av_clip_uintp2_c(output[(2*i+0)*out_stride], clip);
tmp = (11*low[i*low_stride] - 4*low[(i-1)*low_stride] + low[(i-2)*low_stride] + 4) >> 3;
output[(2*i+1)*out_stride] = (tmp - high[i*high_stride]) >> 1;
if (clip)
output[(2*i+1)*out_stride] = av_clip_uintp2_c(output[(2*i+1)*out_stride], clip);
}
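// Informal note: ignoring rounding and clipping, the expressions above
// exactly invert the forward 2/6 wavelet in cfhdencdsp.c, where
// low = even + odd and high = even - odd + ((low_next - low_prev + 4) >> 3);
// the tmp terms here cancel that neighbour prediction and the final >> 1
// splits the summed low band back into individual samples.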
static void vert_filter(int16_t *output, ptrdiff_t out_stride,
const int16_t *low, ptrdiff_t low_stride,
const int16_t *high, ptrdiff_t high_stride,
int width, int height)
{
for (int i = 0; i < width; i++) {
filter(output, out_stride, low, low_stride, high, high_stride, height, 0);
low++;
high++;
output++;
}
}
static void horiz_filter(int16_t *output, ptrdiff_t ostride,
const int16_t *low, ptrdiff_t lstride,
const int16_t *high, ptrdiff_t hstride,
int width, int height)
{
for (int i = 0; i < height; i++) {
filter(output, 1, low, 1, high, 1, width, 0);
low += lstride;
high += hstride;
output += ostride * 2;
}
}
static void horiz_filter_clip(int16_t *output, const int16_t *low, const int16_t *high,
int width, int clip)
{
filter(output, 1, low, 1, high, 1, width, clip);
}
static void horiz_filter_clip_bayer(int16_t *output, const int16_t *low, const int16_t *high,
int width, int clip)
{
filter(output, 2, low, 1, high, 1, width, clip);
}
av_cold void ff_cfhddsp_init(CFHDDSPContext *c, int depth, int bayer)
{
c->horiz_filter = horiz_filter;
c->vert_filter = vert_filter;
if (bayer)
c->horiz_filter_clip = horiz_filter_clip_bayer;
else
c->horiz_filter_clip = horiz_filter_clip;
if (ARCH_X86)
ff_cfhddsp_init_x86(c, depth, bayer);
}

View File

@@ -1,44 +0,0 @@
/*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#ifndef AVCODEC_CFHDDSP_H
#define AVCODEC_CFHDDSP_H
#include <stddef.h>
#include <stdint.h>
typedef struct CFHDDSPContext {
void (*horiz_filter)(int16_t *output, ptrdiff_t out_stride,
const int16_t *low, ptrdiff_t low_stride,
const int16_t *high, ptrdiff_t high_stride,
int width, int height);
void (*vert_filter)(int16_t *output, ptrdiff_t out_stride,
const int16_t *low, ptrdiff_t low_stride,
const int16_t *high, ptrdiff_t high_stride,
int width, int height);
void (*horiz_filter_clip)(int16_t *output, const int16_t *low, const int16_t *high,
int width, int bpc);
} CFHDDSPContext;
void ff_cfhddsp_init(CFHDDSPContext *c, int format, int bayer);
void ff_cfhddsp_init_x86(CFHDDSPContext *c, int format, int bayer);
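// Informal usage sketch (buffer names and strides are illustrative):
//
//     CFHDDSPContext dsp;
//     ff_cfhddsp_init(&dsp, 10, 0); // 10-bit depth, non-Bayer
//     dsp.vert_filter(out, out_stride, low, low_stride, high, high_stride,
//                     width, height);
//     dsp.horiz_filter_clip(row, low_row, high_row, width, 10);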
#endif /* AVCODEC_CFHDDSP_H */

View File

@@ -1,866 +0,0 @@
/*
* Copyright (c) 2020 Paul B Mahol
*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
/**
* @file
* Cineform HD video encoder
*/
#include <stdlib.h>
#include <string.h>
#include "libavutil/avassert.h"
#include "libavutil/imgutils.h"
#include "libavutil/opt.h"
#include "avcodec.h"
#include "bytestream.h"
#include "cfhd.h"
#include "cfhdencdsp.h"
#include "put_bits.h"
#include "internal.h"
#include "thread.h"
/* Derived from existing tables from decoder */
static const unsigned codebook[256][2] = {
{ 1, 0x00000000 }, { 2, 0x00000002 }, { 3, 0x00000007 }, { 5, 0x00000019 }, { 6, 0x00000030 },
{ 6, 0x00000036 }, { 7, 0x00000063 }, { 7, 0x0000006B }, { 7, 0x0000006F }, { 8, 0x000000D4 },
{ 8, 0x000000DC }, { 9, 0x00000189 }, { 9, 0x000001A0 }, { 9, 0x000001AB }, {10, 0x00000310 },
{10, 0x00000316 }, {10, 0x00000354 }, {10, 0x00000375 }, {10, 0x00000377 }, {11, 0x00000623 },
{11, 0x00000684 }, {11, 0x000006AB }, {11, 0x000006EC }, {12, 0x00000C44 }, {12, 0x00000C5C },
{12, 0x00000C5E }, {12, 0x00000D55 }, {12, 0x00000DD1 }, {12, 0x00000DD3 }, {12, 0x00000DDB },
{13, 0x0000188B }, {13, 0x000018BB }, {13, 0x00001AA8 }, {13, 0x00001BA0 }, {13, 0x00001BA4 },
{13, 0x00001BB5 }, {14, 0x00003115 }, {14, 0x00003175 }, {14, 0x0000317D }, {14, 0x00003553 },
{14, 0x00003768 }, {15, 0x00006228 }, {15, 0x000062E8 }, {15, 0x000062F8 }, {15, 0x00006AA4 },
{15, 0x00006E85 }, {15, 0x00006E87 }, {15, 0x00006ED3 }, {16, 0x0000C453 }, {16, 0x0000C5D3 },
{16, 0x0000C5F3 }, {16, 0x0000DD08 }, {16, 0x0000DD0C }, {16, 0x0000DDA4 }, {17, 0x000188A4 },
{17, 0x00018BA5 }, {17, 0x00018BE5 }, {17, 0x0001AA95 }, {17, 0x0001AA97 }, {17, 0x0001BA13 },
{17, 0x0001BB4A }, {17, 0x0001BB4B }, {18, 0x00031748 }, {18, 0x000317C8 }, {18, 0x00035528 },
{18, 0x0003552C }, {18, 0x00037424 }, {18, 0x00037434 }, {18, 0x00037436 }, {19, 0x00062294 },
{19, 0x00062E92 }, {19, 0x00062F92 }, {19, 0x0006AA52 }, {19, 0x0006AA5A }, {19, 0x0006E84A },
{19, 0x0006E86A }, {19, 0x0006E86E }, {20, 0x000C452A }, {20, 0x000C5D27 }, {20, 0x000C5F26 },
{20, 0x000D54A6 }, {20, 0x000D54B6 }, {20, 0x000DD096 }, {20, 0x000DD0D6 }, {20, 0x000DD0DE },
{21, 0x00188A56 }, {21, 0x0018BA4D }, {21, 0x0018BE4E }, {21, 0x0018BE4F }, {21, 0x001AA96E },
{21, 0x001BA12E }, {21, 0x001BA12F }, {21, 0x001BA1AF }, {21, 0x001BA1BF }, {22, 0x00317498 },
{22, 0x0035529C }, {22, 0x0035529D }, {22, 0x003552DE }, {22, 0x003552DF }, {22, 0x0037435D },
{22, 0x0037437D }, {23, 0x0062295D }, {23, 0x0062E933 }, {23, 0x006AA53D }, {23, 0x006AA53E },
{23, 0x006AA53F }, {23, 0x006E86B9 }, {23, 0x006E86F8 }, {24, 0x00C452B8 }, {24, 0x00C5D265 },
{24, 0x00D54A78 }, {24, 0x00D54A79 }, {24, 0x00DD0D70 }, {24, 0x00DD0D71 }, {24, 0x00DD0DF2 },
{24, 0x00DD0DF3 }, {26, 0x03114BA2 }, {25, 0x0188A5B1 }, {25, 0x0188A58B }, {25, 0x0188A595 },
{25, 0x0188A5D6 }, {25, 0x0188A5D7 }, {25, 0x0188A5A8 }, {25, 0x0188A5AE }, {25, 0x0188A5AF },
{25, 0x0188A5C4 }, {25, 0x0188A5C5 }, {25, 0x0188A587 }, {25, 0x0188A584 }, {25, 0x0188A585 },
{25, 0x0188A5C6 }, {25, 0x0188A5C7 }, {25, 0x0188A5CC }, {25, 0x0188A5CD }, {25, 0x0188A581 },
{25, 0x0188A582 }, {25, 0x0188A583 }, {25, 0x0188A5CE }, {25, 0x0188A5CF }, {25, 0x0188A5C2 },
{25, 0x0188A5C3 }, {25, 0x0188A5C1 }, {25, 0x0188A5B4 }, {25, 0x0188A5B5 }, {25, 0x0188A5E6 },
{25, 0x0188A5E7 }, {25, 0x0188A5E4 }, {25, 0x0188A5E5 }, {25, 0x0188A5AB }, {25, 0x0188A5E0 },
{25, 0x0188A5E1 }, {25, 0x0188A5E2 }, {25, 0x0188A5E3 }, {25, 0x0188A5B6 }, {25, 0x0188A5B7 },
{25, 0x0188A5FD }, {25, 0x0188A57E }, {25, 0x0188A57F }, {25, 0x0188A5EC }, {25, 0x0188A5ED },
{25, 0x0188A5FE }, {25, 0x0188A5FF }, {25, 0x0188A57D }, {25, 0x0188A59C }, {25, 0x0188A59D },
{25, 0x0188A5E8 }, {25, 0x0188A5E9 }, {25, 0x0188A5EA }, {25, 0x0188A5EB }, {25, 0x0188A5EF },
{25, 0x0188A57A }, {25, 0x0188A57B }, {25, 0x0188A578 }, {25, 0x0188A579 }, {25, 0x0188A5BA },
{25, 0x0188A5BB }, {25, 0x0188A5B8 }, {25, 0x0188A5B9 }, {25, 0x0188A588 }, {25, 0x0188A589 },
{25, 0x018BA4C8 }, {25, 0x018BA4C9 }, {25, 0x0188A5FA }, {25, 0x0188A5FB }, {25, 0x0188A5BC },
{25, 0x0188A5BD }, {25, 0x0188A598 }, {25, 0x0188A599 }, {25, 0x0188A5F4 }, {25, 0x0188A5F5 },
{25, 0x0188A59B }, {25, 0x0188A5DE }, {25, 0x0188A5DF }, {25, 0x0188A596 }, {25, 0x0188A597 },
{25, 0x0188A5F8 }, {25, 0x0188A5F9 }, {25, 0x0188A5F1 }, {25, 0x0188A58E }, {25, 0x0188A58F },
{25, 0x0188A5DC }, {25, 0x0188A5DD }, {25, 0x0188A5F2 }, {25, 0x0188A5F3 }, {25, 0x0188A58C },
{25, 0x0188A58D }, {25, 0x0188A5A4 }, {25, 0x0188A5F0 }, {25, 0x0188A5A5 }, {25, 0x0188A5A6 },
{25, 0x0188A5A7 }, {25, 0x0188A59A }, {25, 0x0188A5A2 }, {25, 0x0188A5A3 }, {25, 0x0188A58A },
{25, 0x0188A5B0 }, {25, 0x0188A5A0 }, {25, 0x0188A5A1 }, {25, 0x0188A5DA }, {25, 0x0188A5DB },
{25, 0x0188A59E }, {25, 0x0188A59F }, {25, 0x0188A5D8 }, {25, 0x0188A5EE }, {25, 0x0188A5D9 },
{25, 0x0188A5F6 }, {25, 0x0188A5F7 }, {25, 0x0188A57C }, {25, 0x0188A5C8 }, {25, 0x0188A5C9 },
{25, 0x0188A594 }, {25, 0x0188A5FC }, {25, 0x0188A5CA }, {25, 0x0188A5CB }, {25, 0x0188A5B2 },
{25, 0x0188A5AA }, {25, 0x0188A5B3 }, {25, 0x0188A572 }, {25, 0x0188A573 }, {25, 0x0188A5C0 },
{25, 0x0188A5BE }, {25, 0x0188A5BF }, {25, 0x0188A592 }, {25, 0x0188A580 }, {25, 0x0188A593 },
{25, 0x0188A590 }, {25, 0x0188A591 }, {25, 0x0188A586 }, {25, 0x0188A5A9 }, {25, 0x0188A5D2 },
{25, 0x0188A5D3 }, {25, 0x0188A5D4 }, {25, 0x0188A5D5 }, {25, 0x0188A5AC }, {25, 0x0188A5AD },
{25, 0x0188A5D0 },
};
/* Derived by extracting runcodes from existing tables from decoder */
static const uint16_t runbook[18][3] = {
{1, 0x0000, 1}, {2, 0x0000, 2}, {3, 0x0000, 3}, {4, 0x0000, 4},
{5, 0x0000, 5}, {6, 0x0000, 6}, {7, 0x0000, 7}, {8, 0x0000, 8},
{9, 0x0000, 9}, {10, 0x0000, 10}, {11, 0x0000, 11},
{7, 0x0069, 12}, {8, 0x00D1, 20}, {9, 0x018A, 32},
{10, 0x0343, 60}, {11, 0x0685, 100}, {13, 0x18BF, 180}, {13, 0x1BA5, 320},
};
/*
* Derived by inspecting various quality encodes
* and adding some more from scratch.
*/
static const uint16_t quantization_per_subband[2][3][13][9] = {
{{
{ 16, 16, 8, 4, 4, 2, 6, 6, 9, }, // film3+
{ 16, 16, 8, 4, 4, 2, 6, 6, 9, }, // film3
{ 16, 16, 8, 4, 4, 2, 7, 7, 10, }, // film2+
{ 16, 16, 8, 4, 4, 2, 8, 8, 12, }, // film2
{ 16, 16, 8, 4, 4, 2, 16, 16, 26, }, // film1++
{ 24, 24, 12, 6, 6, 3, 24, 24, 36, }, // film1+
{ 24, 24, 12, 6, 6, 3, 24, 24, 36, }, // film1
{ 32, 32, 24, 8, 8, 6, 32, 32, 48, }, // high+
{ 32, 32, 24, 8, 8, 6, 32, 32, 48, }, // high
{ 48, 48, 32, 12, 12, 8, 64, 64, 96, }, // medium+
{ 48, 48, 32, 12, 12, 8, 64, 64, 96, }, // medium
{ 64, 64, 48, 16, 16, 12, 96, 96, 144, }, // low+
{ 64, 64, 48, 16, 16, 12, 128, 128, 192, }, // low
},
{
{ 16, 16, 8, 4, 4, 2, 6, 6, 9, }, // film3+
{ 16, 16, 8, 4, 4, 2, 6, 6, 12, }, // film3
{ 16, 16, 8, 4, 4, 2, 7, 7, 14, }, // film2+
{ 16, 16, 8, 4, 4, 2, 8, 8, 16, }, // film2
{ 16, 16, 8, 4, 4, 2, 16, 16, 26, }, // film1++
{ 24, 24, 12, 6, 6, 3, 24, 24, 36, }, // film1+
{ 24, 24, 12, 6, 6, 3, 24, 24, 48, }, // film1
{ 32, 32, 24, 8, 8, 6, 32, 32, 48, }, // high+
{ 48, 48, 32, 12, 12, 8, 32, 32, 64, }, // high
{ 48, 48, 32, 12, 12, 8, 64, 64, 96, }, // medium+
{ 48, 48, 32, 12, 12, 8, 64, 64, 128, }, // medium
{ 64, 64, 48, 16, 16, 12, 96, 96, 160, }, // low+
{ 64, 64, 48, 16, 16, 12, 128, 128, 192, }, // low
},
{
{ 16, 16, 8, 4, 4, 2, 6, 6, 9, }, // film3+
{ 16, 16, 8, 4, 4, 2, 6, 6, 12, }, // film3
{ 16, 16, 8, 4, 4, 2, 7, 7, 14, }, // film2+
{ 16, 16, 8, 4, 4, 2, 8, 8, 16, }, // film2
{ 16, 16, 8, 4, 4, 2, 16, 16, 26, }, // film1++
{ 24, 24, 12, 6, 6, 3, 24, 24, 36, }, // film1+
{ 24, 24, 12, 6, 6, 3, 24, 24, 48, }, // film1
{ 32, 32, 24, 8, 8, 6, 32, 32, 48, }, // high+
{ 48, 48, 32, 12, 12, 8, 32, 32, 64, }, // high
{ 48, 48, 32, 12, 12, 8, 64, 64, 96, }, // medium+
{ 48, 48, 32, 12, 12, 8, 64, 64, 128, }, // medium
{ 64, 64, 48, 16, 16, 12, 96, 96, 160, }, // low+
{ 64, 64, 48, 16, 16, 12, 128, 128, 192, }, // low
}},
{{
{ 16, 16, 8, 16, 16, 8, 24, 24, 36, }, // film3+
{ 16, 16, 8, 16, 16, 8, 24, 24, 36, }, // film3
{ 16, 16, 8, 16, 16, 8, 32, 32, 48, }, // film2+
{ 16, 16, 8, 16, 16, 8, 32, 32, 48, }, // film2
{ 16, 16, 8, 20, 20, 10, 80, 80, 128, }, // film1++
{ 24, 24, 12, 24, 24, 12, 96, 96, 144, }, // film1+
{ 24, 24, 12, 24, 24, 12, 96, 96, 144, }, // film1
{ 32, 32, 24, 32, 32, 24, 128, 128, 192, }, // high+
{ 32, 32, 24, 32, 32, 24, 128, 128, 192, }, // high
{ 48, 48, 32, 48, 48, 32, 256, 256, 384, }, // medium+
{ 48, 48, 32, 48, 48, 32, 256, 256, 384, }, // medium
{ 56, 56, 40, 56, 56, 40, 512, 512, 768, }, // low+
{ 64, 64, 48, 64, 64, 48, 512, 512, 768, }, // low
},
{
{ 16, 16, 8, 16, 16, 8, 24, 24, 36, }, // film3+
{ 16, 16, 8, 16, 16, 8, 48, 48, 72, }, // film3
{ 16, 16, 8, 16, 16, 8, 48, 48, 72, }, // film2+
{ 16, 16, 8, 16, 16, 8, 64, 64, 96, }, // film2
{ 16, 16, 8, 20, 20, 10, 80, 80, 128, }, // film1++
{ 24, 24, 12, 24, 24, 12, 96, 96, 144, }, // film1+
{ 24, 24, 12, 24, 24, 12, 192, 192, 288, }, // film1
{ 32, 32, 24, 32, 32, 24, 128, 128, 192, }, // high+
{ 32, 32, 24, 32, 32, 24, 256, 256, 384, }, // high
{ 48, 48, 32, 48, 48, 32, 256, 256, 384, }, // medium+
{ 48, 48, 32, 48, 48, 32, 512, 512, 768, }, // medium
{ 56, 56, 40, 56, 56, 40, 512, 512, 768, }, // low+
{ 64, 64, 48, 64, 64, 48,1024,1024,1536, }, // low
},
{
{ 16, 16, 8, 16, 16, 8, 24, 24, 36, }, // film3+
{ 16, 16, 8, 16, 16, 8, 48, 48, 72, }, // film3
{ 16, 16, 8, 16, 16, 8, 48, 48, 72, }, // film2+
{ 16, 16, 8, 16, 16, 8, 64, 64, 96, }, // film2
{ 16, 16, 10, 20, 20, 10, 80, 80, 128, }, // film1++
{ 24, 24, 12, 24, 24, 12, 96, 96, 144, }, // film1+
{ 24, 24, 12, 24, 24, 12, 192, 192, 288, }, // film1
{ 32, 32, 24, 32, 32, 24, 128, 128, 192, }, // high+
{ 32, 32, 24, 32, 32, 24, 256, 256, 384, }, // high
{ 48, 48, 32, 48, 48, 32, 256, 256, 384, }, // medium+
{ 48, 48, 32, 48, 48, 32, 512, 512, 768, }, // medium
{ 56, 56, 40, 56, 56, 40, 512, 512, 768, }, // low+
{ 64, 64, 48, 64, 64, 48,1024,1024,1536, }, // low
}},
};
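/* Indexed in cfhd_encode_frame() as
 * [avctx->pix_fmt != AV_PIX_FMT_YUV422P10][plane, with alpha reusing
 * plane 0][quality][subband - 1]. */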
typedef struct Codebook {
unsigned bits;
unsigned size;
} Codebook;
typedef struct Runbook {
unsigned size;
unsigned bits;
unsigned run;
} Runbook;
typedef struct PlaneEnc {
unsigned size;
int16_t *dwt_buf;
int16_t *dwt_tmp;
unsigned quantization[SUBBAND_COUNT];
int16_t *subband[SUBBAND_COUNT];
int16_t *l_h[8];
SubBand band[DWT_LEVELS][4];
} PlaneEnc;
typedef struct CFHDEncContext {
const AVClass *class;
PutBitContext pb;
PutByteContext pby;
int quality;
int planes;
int chroma_h_shift;
int chroma_v_shift;
PlaneEnc plane[4];
uint16_t lut[1024];
Runbook rb[321];
Codebook cb[513];
int16_t *alpha;
CFHDEncDSPContext dsp;
} CFHDEncContext;
static av_cold int cfhd_encode_init(AVCodecContext *avctx)
{
CFHDEncContext *s = avctx->priv_data;
const int sign_mask = 256;
const int twos_complement = -sign_mask;
const int mag_mask = sign_mask - 1;
int ret, last = 0;
ret = av_pix_fmt_get_chroma_sub_sample(avctx->pix_fmt,
&s->chroma_h_shift,
&s->chroma_v_shift);
if (ret < 0)
return ret;
if (avctx->width & 15) {
av_log(avctx, AV_LOG_ERROR, "Width must be multiple of 16.\n");
return AVERROR_INVALIDDATA;
}
s->planes = av_pix_fmt_count_planes(avctx->pix_fmt);
for (int i = 0; i < s->planes; i++) {
int w8, h8, w4, h4, w2, h2;
int width = i ? avctx->width >> s->chroma_h_shift : avctx->width;
/* chroma_v_shift is 0 for every supported pixel format, so luma and
 * chroma share the same height computation. */
int height = FFALIGN(avctx->height >> s->chroma_v_shift, 8);
ptrdiff_t stride = (FFALIGN(width / 8, 8) + 64) * 8;
w8 = FFALIGN(width / 8, 8) + 64;
h8 = FFALIGN(height, 8) / 8;
w4 = w8 * 2;
h4 = h8 * 2;
w2 = w4 * 2;
h2 = h4 * 2;
s->plane[i].dwt_buf =
av_mallocz_array(height * stride, sizeof(*s->plane[i].dwt_buf));
s->plane[i].dwt_tmp =
av_malloc_array(height * stride, sizeof(*s->plane[i].dwt_tmp));
if (!s->plane[i].dwt_buf || !s->plane[i].dwt_tmp)
return AVERROR(ENOMEM);
s->plane[i].subband[0] = s->plane[i].dwt_buf;
s->plane[i].subband[1] = s->plane[i].dwt_buf + 2 * w8 * h8;
s->plane[i].subband[2] = s->plane[i].dwt_buf + 1 * w8 * h8;
s->plane[i].subband[3] = s->plane[i].dwt_buf + 3 * w8 * h8;
s->plane[i].subband[4] = s->plane[i].dwt_buf + 2 * w4 * h4;
s->plane[i].subband[5] = s->plane[i].dwt_buf + 1 * w4 * h4;
s->plane[i].subband[6] = s->plane[i].dwt_buf + 3 * w4 * h4;
s->plane[i].subband[7] = s->plane[i].dwt_buf + 2 * w2 * h2;
s->plane[i].subband[8] = s->plane[i].dwt_buf + 1 * w2 * h2;
s->plane[i].subband[9] = s->plane[i].dwt_buf + 3 * w2 * h2;
for (int j = 0; j < DWT_LEVELS; j++) {
for (int k = 0; k < FF_ARRAY_ELEMS(s->plane[i].band[j]); k++) {
s->plane[i].band[j][k].width = (width / 8) << j;
s->plane[i].band[j][k].height = (height / 8) << j;
s->plane[i].band[j][k].a_width = w8 << j;
s->plane[i].band[j][k].a_height = h8 << j;
}
}
/* ll2 and ll1 commented out because they are done in-place */
s->plane[i].l_h[0] = s->plane[i].dwt_tmp;
s->plane[i].l_h[1] = s->plane[i].dwt_tmp + 2 * w8 * h8;
// s->plane[i].l_h[2] = ll2;
s->plane[i].l_h[3] = s->plane[i].dwt_tmp;
s->plane[i].l_h[4] = s->plane[i].dwt_tmp + 2 * w4 * h4;
// s->plane[i].l_h[5] = ll1;
s->plane[i].l_h[6] = s->plane[i].dwt_tmp;
s->plane[i].l_h[7] = s->plane[i].dwt_tmp + 2 * w2 * h2;
}
for (int i = 0; i < 512; i++) {
int value = (i & sign_mask) ? twos_complement + (i & mag_mask): i;
int mag = FFMIN(FFABS(value), 255);
if (mag) {
s->cb[i].bits = (codebook[mag][1] << 1) | (value > 0 ? 0 : 1);
s->cb[i].size = codebook[mag][0] + 1;
} else {
s->cb[i].bits = codebook[mag][1];
s->cb[i].size = codebook[mag][0];
}
}
s->cb[512].bits = 0x3114ba3;
s->cb[512].size = 26;
s->rb[0].run = 0;
for (int i = 1, j = 0; i < 320 && j < 17; j++) {
int run = runbook[j][2];
int end = runbook[j+1][2];
while (i < end) {
s->rb[i].run = run;
s->rb[i].bits = runbook[j][1];
s->rb[i++].size = runbook[j][0];
}
}
s->rb[320].bits = runbook[17][1];
s->rb[320].size = runbook[17][0];
s->rb[320].run = 320;
for (int i = 0; i < 256; i++) {
int idx = i + ((768LL * i * i * i) / (256 * 256 * 256));
s->lut[idx] = i;
}
for (int i = 0; i < 1024; i++) {
if (s->lut[i])
last = s->lut[i];
else
s->lut[i] = last;
}
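/* The two loops above build an inverse companding table: the forward curve
 * idx = i + 768 * i^3 / 2^24 spreads the magnitudes 0..255 over 0..1023,
 * and the fill-with-last pass turns lut[] into a stepwise inverse of that
 * curve for any 10-bit input. */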
ff_cfhdencdsp_init(&s->dsp);
if (s->planes != 4)
return 0;
s->alpha = av_calloc(avctx->width * avctx->height, sizeof(*s->alpha));
if (!s->alpha)
return AVERROR(ENOMEM);
return 0;
}
static void quantize_band(int16_t *input, int width, int a_width,
int height, unsigned quantization)
{
const int16_t factor = (uint32_t)(1U << 15) / quantization;
for (int i = 0; i < height; i++) {
for (int j = 0; j < width; j++)
input[j] = av_clip_intp2(((input[j] * factor + 16384 * FFSIGN(input[j])) / 32768), 10);
input += a_width;
}
}
static int put_runcode(PutBitContext *pb, int count, const Runbook *const rb)
{
while (count > 0) {
const int index = FFMIN(320, count);
put_bits(pb, rb[index].size, rb[index].bits);
count -= rb[index].run;
}
return 0;
}
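/* Worked example: count == 400 is emitted greedily as runs of
 * 320 + 60 + 20, since rb[320].run == 320, rb[80].run == 60 and
 * rb[20].run == 20 in the table built from runbook[]. */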
static void process_alpha(const int16_t *src, int width, int height, ptrdiff_t stride, int16_t *dst)
{
for (int i = 0; i < height; i++) {
for (int j = 0; j < width; j++) {
int alpha = src[j];
if (alpha > 0 && alpha < 4080) {
alpha *= 223;
alpha += 128;
alpha >>= 8;
alpha += 256;
}
dst[j] = av_clip_uintp2(alpha, 12);
}
src += stride;
dst += width;
}
}
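// Informal note: the scaling above remaps interior alpha values into
// roughly 257..3809 while passing fully transparent (0) and fully opaque
// (>= 4080) samples through unchanged, before the 12-bit clip.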
static int cfhd_encode_frame(AVCodecContext *avctx, AVPacket *pkt,
const AVFrame *frame, int *got_packet)
{
CFHDEncContext *s = avctx->priv_data;
CFHDEncDSPContext *dsp = &s->dsp;
PutByteContext *pby = &s->pby;
PutBitContext *pb = &s->pb;
const Codebook *const cb = s->cb;
const Runbook *const rb = s->rb;
const uint16_t *lut = s->lut;
unsigned pos;
int ret;
for (int plane = 0; plane < s->planes; plane++) {
int width = s->plane[plane].band[2][0].width;
int a_width = s->plane[plane].band[2][0].a_width;
int height = s->plane[plane].band[2][0].height;
int act_plane = plane == 1 ? 2 : plane == 2 ? 1 : plane;
int16_t *input = (int16_t *)frame->data[act_plane];
int16_t *low = s->plane[plane].l_h[6];
int16_t *high = s->plane[plane].l_h[7];
ptrdiff_t in_stride = frame->linesize[act_plane] / 2;
int low_stride, high_stride;
if (plane == 3) {
process_alpha(input, avctx->width, avctx->height,
in_stride, s->alpha);
input = s->alpha;
in_stride = avctx->width;
}
dsp->horiz_filter(input, low, high,
in_stride, a_width, a_width,
width * 2, height * 2);
input = s->plane[plane].l_h[7];
low = s->plane[plane].subband[7];
low_stride = s->plane[plane].band[2][0].a_width;
high = s->plane[plane].subband[9];
high_stride = s->plane[plane].band[2][0].a_width;
dsp->vert_filter(input, low, high,
a_width, low_stride, high_stride,
width, height * 2);
input = s->plane[plane].l_h[6];
low = s->plane[plane].l_h[7];
high = s->plane[plane].subband[8];
dsp->vert_filter(input, low, high,
a_width, low_stride, high_stride,
width, height * 2);
a_width = s->plane[plane].band[1][0].a_width;
width = s->plane[plane].band[1][0].width;
height = s->plane[plane].band[1][0].height;
input = s->plane[plane].l_h[7];
low = s->plane[plane].l_h[3];
low_stride = s->plane[plane].band[1][0].a_width;
high = s->plane[plane].l_h[4];
high_stride = s->plane[plane].band[1][0].a_width;
for (int i = 0; i < height * 2; i++) {
for (int j = 0; j < width * 2; j++)
input[j] /= 4;
input += a_width * 2;
}
input = s->plane[plane].l_h[7];
dsp->horiz_filter(input, low, high,
a_width * 2, low_stride, high_stride,
width * 2, height * 2);
input = s->plane[plane].l_h[4];
low = s->plane[plane].subband[4];
high = s->plane[plane].subband[6];
dsp->vert_filter(input, low, high,
a_width, low_stride, high_stride,
width, height * 2);
input = s->plane[plane].l_h[3];
low = s->plane[plane].l_h[4];
high = s->plane[plane].subband[5];
dsp->vert_filter(input, low, high,
a_width, low_stride, high_stride,
width, height * 2);
a_width = s->plane[plane].band[0][0].a_width;
width = s->plane[plane].band[0][0].width;
height = s->plane[plane].band[0][0].height;
input = s->plane[plane].l_h[4];
low = s->plane[plane].l_h[0];
low_stride = s->plane[plane].band[0][0].a_width;
high = s->plane[plane].l_h[1];
high_stride = s->plane[plane].band[0][0].a_width;
if (avctx->pix_fmt != AV_PIX_FMT_YUV422P10) {
for (int i = 0; i < height * 2; i++) {
for (int j = 0; j < width * 2; j++)
input[j] /= 4;
input += a_width * 2;
}
}
input = s->plane[plane].l_h[4];
dsp->horiz_filter(input, low, high,
a_width * 2, low_stride, high_stride,
width * 2, height * 2);
low = s->plane[plane].subband[1];
high = s->plane[plane].subband[3];
input = s->plane[plane].l_h[1];
dsp->vert_filter(input, low, high,
a_width, low_stride, high_stride,
width, height * 2);
low = s->plane[plane].subband[0];
high = s->plane[plane].subband[2];
input = s->plane[plane].l_h[0];
dsp->vert_filter(input, low, high,
a_width, low_stride, high_stride,
width, height * 2);
}
ret = ff_alloc_packet2(avctx, pkt, 64LL + s->planes * (2LL * avctx->width * avctx->height + 1000LL), 0);
if (ret < 0)
return ret;
bytestream2_init_writer(pby, pkt->data, pkt->size);
bytestream2_put_be16(pby, SampleType);
bytestream2_put_be16(pby, 9);
bytestream2_put_be16(pby, SampleIndexTable);
bytestream2_put_be16(pby, s->planes);
for (int i = 0; i < s->planes; i++)
bytestream2_put_be32(pby, 0);
bytestream2_put_be16(pby, TransformType);
bytestream2_put_be16(pby, 0);
bytestream2_put_be16(pby, NumFrames);
bytestream2_put_be16(pby, 1);
bytestream2_put_be16(pby, ChannelCount);
bytestream2_put_be16(pby, s->planes);
bytestream2_put_be16(pby, EncodedFormat);
bytestream2_put_be16(pby, avctx->pix_fmt == AV_PIX_FMT_YUV422P10 ? 1 : 3 + (s->planes == 4));
bytestream2_put_be16(pby, WaveletCount);
bytestream2_put_be16(pby, 3);
bytestream2_put_be16(pby, SubbandCount);
bytestream2_put_be16(pby, SUBBAND_COUNT);
bytestream2_put_be16(pby, NumSpatial);
bytestream2_put_be16(pby, 2);
bytestream2_put_be16(pby, FirstWavelet);
bytestream2_put_be16(pby, 3);
bytestream2_put_be16(pby, ImageWidth);
bytestream2_put_be16(pby, avctx->width);
bytestream2_put_be16(pby, ImageHeight);
bytestream2_put_be16(pby, avctx->height);
bytestream2_put_be16(pby, -FrameNumber);
bytestream2_put_be16(pby, frame->pts & 0xFFFF);
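// FrameNumber is written negated: negative tags appear to mark optional
// fields in the CineForm bitstream, mirroring how the decoder treats them.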
bytestream2_put_be16(pby, Precision);
bytestream2_put_be16(pby, avctx->pix_fmt == AV_PIX_FMT_YUV422P10 ? 10 : 12);
bytestream2_put_be16(pby, PrescaleTable);
bytestream2_put_be16(pby, avctx->pix_fmt == AV_PIX_FMT_YUV422P10 ? 0x2000 : 0x2800);
bytestream2_put_be16(pby, SampleFlags);
bytestream2_put_be16(pby, 1);
for (int p = 0; p < s->planes; p++) {
int width = s->plane[p].band[0][0].width;
int a_width = s->plane[p].band[0][0].a_width;
int height = s->plane[p].band[0][0].height;
int16_t *data = s->plane[p].subband[0];
if (p) {
bytestream2_put_be16(pby, SampleType);
bytestream2_put_be16(pby, 3);
bytestream2_put_be16(pby, ChannelNumber);
bytestream2_put_be16(pby, p);
}
bytestream2_put_be16(pby, BitstreamMarker);
bytestream2_put_be16(pby, 0x1a4a);
pos = bytestream2_tell_p(pby);
bytestream2_put_be16(pby, LowpassSubband);
bytestream2_put_be16(pby, 0);
bytestream2_put_be16(pby, NumLevels);
bytestream2_put_be16(pby, 3);
bytestream2_put_be16(pby, LowpassWidth);
bytestream2_put_be16(pby, width);
bytestream2_put_be16(pby, LowpassHeight);
bytestream2_put_be16(pby, height);
bytestream2_put_be16(pby, PixelOffset);
bytestream2_put_be16(pby, 0);
bytestream2_put_be16(pby, LowpassQuantization);
bytestream2_put_be16(pby, 1);
bytestream2_put_be16(pby, LowpassPrecision);
bytestream2_put_be16(pby, 16);
bytestream2_put_be16(pby, BitstreamMarker);
bytestream2_put_be16(pby, 0x0f0f);
for (int i = 0; i < height; i++) {
for (int j = 0; j < width; j++)
bytestream2_put_be16(pby, data[j]);
data += a_width;
}
bytestream2_put_be16(pby, BitstreamMarker);
bytestream2_put_be16(pby, 0x1b4b);
for (int l = 0; l < 3; l++) {
for (int i = 0; i < 3; i++) {
s->plane[p].quantization[1 + l * 3 + i] = quantization_per_subband[avctx->pix_fmt != AV_PIX_FMT_YUV422P10][p >= 3 ? 0 : p][s->quality][l * 3 + i];
}
}
for (int l = 0; l < 3; l++) {
int a_width = s->plane[p].band[l][0].a_width;
int width = s->plane[p].band[l][0].width;
int stride = FFALIGN(width, 8);
int height = s->plane[p].band[l][0].height;
bytestream2_put_be16(pby, BitstreamMarker);
bytestream2_put_be16(pby, 0x0d0d);
bytestream2_put_be16(pby, WaveletType);
bytestream2_put_be16(pby, 3 + 2 * (l == 2));
bytestream2_put_be16(pby, WaveletNumber);
bytestream2_put_be16(pby, 3 - l);
bytestream2_put_be16(pby, WaveletLevel);
bytestream2_put_be16(pby, 3 - l);
bytestream2_put_be16(pby, NumBands);
bytestream2_put_be16(pby, 4);
bytestream2_put_be16(pby, HighpassWidth);
bytestream2_put_be16(pby, width);
bytestream2_put_be16(pby, HighpassHeight);
bytestream2_put_be16(pby, height);
bytestream2_put_be16(pby, LowpassBorder);
bytestream2_put_be16(pby, 0);
bytestream2_put_be16(pby, HighpassBorder);
bytestream2_put_be16(pby, 0);
bytestream2_put_be16(pby, LowpassScale);
bytestream2_put_be16(pby, 1);
bytestream2_put_be16(pby, LowpassDivisor);
bytestream2_put_be16(pby, 1);
for (int i = 0; i < 3; i++) {
int16_t *data = s->plane[p].subband[1 + l * 3 + i];
int count = 0, padd = 0;
bytestream2_put_be16(pby, BitstreamMarker);
bytestream2_put_be16(pby, 0x0e0e);
bytestream2_put_be16(pby, SubbandNumber);
bytestream2_put_be16(pby, i + 1);
bytestream2_put_be16(pby, BandCodingFlags);
bytestream2_put_be16(pby, 1);
bytestream2_put_be16(pby, BandWidth);
bytestream2_put_be16(pby, width);
bytestream2_put_be16(pby, BandHeight);
bytestream2_put_be16(pby, height);
bytestream2_put_be16(pby, SubbandBand);
bytestream2_put_be16(pby, 1 + l * 3 + i);
bytestream2_put_be16(pby, BandEncoding);
bytestream2_put_be16(pby, 3);
bytestream2_put_be16(pby, Quantization);
bytestream2_put_be16(pby, s->plane[p].quantization[1 + l * 3 + i]);
bytestream2_put_be16(pby, BandScale);
bytestream2_put_be16(pby, 1);
bytestream2_put_be16(pby, BandHeader);
bytestream2_put_be16(pby, 0);
quantize_band(data, width, a_width, height,
s->plane[p].quantization[1 + l * 3 + i]);
init_put_bits(pb, pkt->data + bytestream2_tell_p(pby), bytestream2_get_bytes_left_p(pby));
for (int m = 0; m < height; m++) {
for (int j = 0; j < stride; j++) {
int16_t index = j >= width ? 0 : FFSIGN(data[j]) * lut[FFABS(data[j])];
if (index < 0)
index += 512;
if (index == 0) {
count++;
continue;
} else if (count > 0) {
count = put_runcode(pb, count, rb);
}
put_bits(pb, cb[index].size, cb[index].bits);
}
data += a_width;
}
if (count > 0) {
count = put_runcode(pb, count, rb);
}
put_bits(pb, cb[512].size, cb[512].bits);
flush_put_bits(pb);
bytestream2_skip_p(pby, put_bits_count(pb) >> 3);
padd = (4 - (bytestream2_tell_p(pby) & 3)) & 3;
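/* (4 - x) & 3 pads to the next 4-byte boundary, e.g. tell == 13 -> padd == 3 */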
while (padd--)
bytestream2_put_byte(pby, 0);
bytestream2_put_be16(pby, BandTrailer);
bytestream2_put_be16(pby, 0);
}
bytestream2_put_be16(pby, BitstreamMarker);
bytestream2_put_be16(pby, 0x0c0c);
}
s->plane[p].size = bytestream2_tell_p(pby) - pos;
}
bytestream2_put_be16(pby, GroupTrailer);
bytestream2_put_be16(pby, 0);
av_shrink_packet(pkt, bytestream2_tell_p(pby));
pkt->flags |= AV_PKT_FLAG_KEY;
bytestream2_seek_p(pby, 8, SEEK_SET);
for (int i = 0; i < s->planes; i++)
bytestream2_put_be32(pby, s->plane[i].size);
*got_packet = 1;
return 0;
}
static av_cold int cfhd_encode_close(AVCodecContext *avctx)
{
CFHDEncContext *s = avctx->priv_data;
for (int i = 0; i < s->planes; i++) {
av_freep(&s->plane[i].dwt_buf);
av_freep(&s->plane[i].dwt_tmp);
for (int j = 0; j < SUBBAND_COUNT; j++)
s->plane[i].subband[j] = NULL;
for (int j = 0; j < 8; j++)
s->plane[i].l_h[j] = NULL;
}
av_freep(&s->alpha);
return 0;
}
#define OFFSET(x) offsetof(CFHDEncContext, x)
#define VE AV_OPT_FLAG_VIDEO_PARAM | AV_OPT_FLAG_ENCODING_PARAM
static const AVOption options[] = {
{ "quality", "set quality", OFFSET(quality), AV_OPT_TYPE_INT, {.i64= 0}, 0, 12, VE, "q" },
{ "film3+", NULL, 0, AV_OPT_TYPE_CONST, {.i64= 0}, 0, 0, VE, "q" },
{ "film3", NULL, 0, AV_OPT_TYPE_CONST, {.i64= 1}, 0, 0, VE, "q" },
{ "film2+", NULL, 0, AV_OPT_TYPE_CONST, {.i64= 2}, 0, 0, VE, "q" },
{ "film2", NULL, 0, AV_OPT_TYPE_CONST, {.i64= 3}, 0, 0, VE, "q" },
{ "film1.5", NULL, 0, AV_OPT_TYPE_CONST, {.i64= 4}, 0, 0, VE, "q" },
{ "film1+", NULL, 0, AV_OPT_TYPE_CONST, {.i64= 5}, 0, 0, VE, "q" },
{ "film1", NULL, 0, AV_OPT_TYPE_CONST, {.i64= 6}, 0, 0, VE, "q" },
{ "high+", NULL, 0, AV_OPT_TYPE_CONST, {.i64= 7}, 0, 0, VE, "q" },
{ "high", NULL, 0, AV_OPT_TYPE_CONST, {.i64= 8}, 0, 0, VE, "q" },
{ "medium+", NULL, 0, AV_OPT_TYPE_CONST, {.i64= 9}, 0, 0, VE, "q" },
{ "medium", NULL, 0, AV_OPT_TYPE_CONST, {.i64=10}, 0, 0, VE, "q" },
{ "low+", NULL, 0, AV_OPT_TYPE_CONST, {.i64=11}, 0, 0, VE, "q" },
{ "low", NULL, 0, AV_OPT_TYPE_CONST, {.i64=12}, 0, 0, VE, "q" },
{ NULL},
};
static const AVClass cfhd_class = {
.class_name = "cfhd",
.item_name = av_default_item_name,
.option = options,
.version = LIBAVUTIL_VERSION_INT,
};
AVCodec ff_cfhd_encoder = {
.name = "cfhd",
.long_name = NULL_IF_CONFIG_SMALL("GoPro CineForm HD"),
.type = AVMEDIA_TYPE_VIDEO,
.id = AV_CODEC_ID_CFHD,
.priv_data_size = sizeof(CFHDEncContext),
.priv_class = &cfhd_class,
.init = cfhd_encode_init,
.close = cfhd_encode_close,
.encode2 = cfhd_encode_frame,
.capabilities = AV_CODEC_CAP_FRAME_THREADS,
.pix_fmts = (const enum AVPixelFormat[]) {
AV_PIX_FMT_YUV422P10,
AV_PIX_FMT_GBRP12,
AV_PIX_FMT_GBRAP12,
AV_PIX_FMT_NONE
},
.caps_internal = FF_CODEC_CAP_INIT_CLEANUP,
};

View File

@@ -1,79 +0,0 @@
/*
* Copyright (c) 2015-2016 Kieran Kunhya <kieran@kunhya.com>
*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include "libavutil/attributes.h"
#include "libavutil/common.h"
#include "libavutil/avassert.h"
#include "cfhdencdsp.h"
static av_always_inline void filter(int16_t *input, ptrdiff_t in_stride,
int16_t *low, ptrdiff_t low_stride,
int16_t *high, ptrdiff_t high_stride,
int len)
{
low[(0>>1) * low_stride] = av_clip_int16(input[0*in_stride] + input[1*in_stride]);
high[(0>>1) * high_stride] = av_clip_int16((5 * input[0*in_stride] - 11 * input[1*in_stride] +
4 * input[2*in_stride] + 4 * input[3*in_stride] -
1 * input[4*in_stride] - 1 * input[5*in_stride] + 4) >> 3);
for (int i = 2; i < len - 2; i += 2) {
low[(i>>1) * low_stride] = av_clip_int16(input[i*in_stride] + input[(i+1)*in_stride]);
high[(i>>1) * high_stride] = av_clip_int16(((-input[(i-2)*in_stride] - input[(i-1)*in_stride] +
input[(i+2)*in_stride] + input[(i+3)*in_stride] + 4) >> 3) +
input[(i+0)*in_stride] - input[(i+1)*in_stride]);
}
low[((len-2)>>1) * low_stride] = av_clip_int16(input[((len-2)+0)*in_stride] + input[((len-2)+1)*in_stride]);
high[((len-2)>>1) * high_stride] = av_clip_int16((11* input[((len-2)+0)*in_stride] - 5 * input[((len-2)+1)*in_stride] -
4 * input[((len-2)-1)*in_stride] - 4 * input[((len-2)-2)*in_stride] +
1 * input[((len-2)-3)*in_stride] + 1 * input[((len-2)-4)*in_stride] + 4) >> 3);
}
static void horiz_filter(int16_t *input, int16_t *low, int16_t *high,
ptrdiff_t in_stride, ptrdiff_t low_stride,
ptrdiff_t high_stride,
int width, int height)
{
for (int i = 0; i < height; i++) {
filter(input, 1, low, 1, high, 1, width);
input += in_stride;
low += low_stride;
high += high_stride;
}
}
static void vert_filter(int16_t *input, int16_t *low, int16_t *high,
ptrdiff_t in_stride, ptrdiff_t low_stride,
ptrdiff_t high_stride,
int width, int height)
{
for (int i = 0; i < width; i++)
filter(&input[i], in_stride, &low[i], low_stride, &high[i], high_stride, height);
}
av_cold void ff_cfhdencdsp_init(CFHDEncDSPContext *c)
{
c->horiz_filter = horiz_filter;
c->vert_filter = vert_filter;
if (ARCH_X86)
ff_cfhdencdsp_init_x86(c);
}

View File

@@ -1,41 +0,0 @@
/*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#ifndef AVCODEC_CFHDENCDSP_H
#define AVCODEC_CFHDENCDSP_H
#include <stddef.h>
#include <stdint.h>
typedef struct CFHDEncDSPContext {
void (*horiz_filter)(int16_t *input, int16_t *low, int16_t *high,
ptrdiff_t in_stride, ptrdiff_t low_stride,
ptrdiff_t high_stride,
int width, int height);
void (*vert_filter)(int16_t *input, int16_t *low, int16_t *high,
ptrdiff_t in_stride, ptrdiff_t low_stride,
ptrdiff_t high_stride,
int width, int height);
} CFHDEncDSPContext;
void ff_cfhdencdsp_init(CFHDEncDSPContext *c);
void ff_cfhdencdsp_init_x86(CFHDEncDSPContext *c);
#endif /* AVCODEC_CFHDENCDSP_H */

View File

@@ -1,202 +0,0 @@
/*
* AVCodecParameters functions for libavcodec
*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
/**
* @file
* AVCodecParameters functions for libavcodec.
*/
#include <string.h>
#include "libavutil/mem.h"
#include "avcodec.h"
#include "codec_par.h"
static void codec_parameters_reset(AVCodecParameters *par)
{
av_freep(&par->extradata);
memset(par, 0, sizeof(*par));
par->codec_type = AVMEDIA_TYPE_UNKNOWN;
par->codec_id = AV_CODEC_ID_NONE;
par->format = -1;
par->field_order = AV_FIELD_UNKNOWN;
par->color_range = AVCOL_RANGE_UNSPECIFIED;
par->color_primaries = AVCOL_PRI_UNSPECIFIED;
par->color_trc = AVCOL_TRC_UNSPECIFIED;
par->color_space = AVCOL_SPC_UNSPECIFIED;
par->chroma_location = AVCHROMA_LOC_UNSPECIFIED;
par->sample_aspect_ratio = (AVRational){ 0, 1 };
par->profile = FF_PROFILE_UNKNOWN;
par->level = FF_LEVEL_UNKNOWN;
}
AVCodecParameters *avcodec_parameters_alloc(void)
{
AVCodecParameters *par = av_mallocz(sizeof(*par));
if (!par)
return NULL;
codec_parameters_reset(par);
return par;
}
void avcodec_parameters_free(AVCodecParameters **ppar)
{
AVCodecParameters *par = *ppar;
if (!par)
return;
codec_parameters_reset(par);
av_freep(ppar);
}
int avcodec_parameters_copy(AVCodecParameters *dst, const AVCodecParameters *src)
{
codec_parameters_reset(dst);
memcpy(dst, src, sizeof(*dst));
dst->extradata = NULL;
dst->extradata_size = 0;
if (src->extradata) {
dst->extradata = av_mallocz(src->extradata_size + AV_INPUT_BUFFER_PADDING_SIZE);
if (!dst->extradata)
return AVERROR(ENOMEM);
memcpy(dst->extradata, src->extradata, src->extradata_size);
dst->extradata_size = src->extradata_size;
}
return 0;
}
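/* Minimal lifetime sketch for the functions above:
 *
 *     AVCodecParameters *par = avcodec_parameters_alloc();
 *     if (!par)
 *         return AVERROR(ENOMEM);
 *     ret = avcodec_parameters_copy(par, src); // deep-copies extradata
 *     ...
 *     avcodec_parameters_free(&par);           // frees extradata, NULLs par
 */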
int avcodec_parameters_from_context(AVCodecParameters *par,
const AVCodecContext *codec)
{
codec_parameters_reset(par);
par->codec_type = codec->codec_type;
par->codec_id = codec->codec_id;
par->codec_tag = codec->codec_tag;
par->bit_rate = codec->bit_rate;
par->bits_per_coded_sample = codec->bits_per_coded_sample;
par->bits_per_raw_sample = codec->bits_per_raw_sample;
par->profile = codec->profile;
par->level = codec->level;
switch (par->codec_type) {
case AVMEDIA_TYPE_VIDEO:
par->format = codec->pix_fmt;
par->width = codec->width;
par->height = codec->height;
par->field_order = codec->field_order;
par->color_range = codec->color_range;
par->color_primaries = codec->color_primaries;
par->color_trc = codec->color_trc;
par->color_space = codec->colorspace;
par->chroma_location = codec->chroma_sample_location;
par->sample_aspect_ratio = codec->sample_aspect_ratio;
par->video_delay = codec->has_b_frames;
break;
case AVMEDIA_TYPE_AUDIO:
par->format = codec->sample_fmt;
par->channel_layout = codec->channel_layout;
par->channels = codec->channels;
par->sample_rate = codec->sample_rate;
par->block_align = codec->block_align;
par->frame_size = codec->frame_size;
par->initial_padding = codec->initial_padding;
par->trailing_padding = codec->trailing_padding;
par->seek_preroll = codec->seek_preroll;
break;
case AVMEDIA_TYPE_SUBTITLE:
par->width = codec->width;
par->height = codec->height;
break;
}
if (codec->extradata) {
par->extradata = av_mallocz(codec->extradata_size + AV_INPUT_BUFFER_PADDING_SIZE);
if (!par->extradata)
return AVERROR(ENOMEM);
memcpy(par->extradata, codec->extradata, codec->extradata_size);
par->extradata_size = codec->extradata_size;
}
return 0;
}
int avcodec_parameters_to_context(AVCodecContext *codec,
const AVCodecParameters *par)
{
codec->codec_type = par->codec_type;
codec->codec_id = par->codec_id;
codec->codec_tag = par->codec_tag;
codec->bit_rate = par->bit_rate;
codec->bits_per_coded_sample = par->bits_per_coded_sample;
codec->bits_per_raw_sample = par->bits_per_raw_sample;
codec->profile = par->profile;
codec->level = par->level;
switch (par->codec_type) {
case AVMEDIA_TYPE_VIDEO:
codec->pix_fmt = par->format;
codec->width = par->width;
codec->height = par->height;
codec->field_order = par->field_order;
codec->color_range = par->color_range;
codec->color_primaries = par->color_primaries;
codec->color_trc = par->color_trc;
codec->colorspace = par->color_space;
codec->chroma_sample_location = par->chroma_location;
codec->sample_aspect_ratio = par->sample_aspect_ratio;
codec->has_b_frames = par->video_delay;
break;
case AVMEDIA_TYPE_AUDIO:
codec->sample_fmt = par->format;
codec->channel_layout = par->channel_layout;
codec->channels = par->channels;
codec->sample_rate = par->sample_rate;
codec->block_align = par->block_align;
codec->frame_size = par->frame_size;
codec->delay =
codec->initial_padding = par->initial_padding;
codec->trailing_padding = par->trailing_padding;
codec->seek_preroll = par->seek_preroll;
break;
case AVMEDIA_TYPE_SUBTITLE:
codec->width = par->width;
codec->height = par->height;
break;
}
if (par->extradata) {
av_freep(&codec->extradata);
codec->extradata = av_mallocz(par->extradata_size + AV_INPUT_BUFFER_PADDING_SIZE);
if (!codec->extradata)
return AVERROR(ENOMEM);
memcpy(codec->extradata, par->extradata, par->extradata_size);
codec->extradata_size = par->extradata_size;
}
return 0;
}

View File

@@ -1,438 +0,0 @@
/*
* CRI image decoder
*
* Copyright (c) 2020 Paul B Mahol
*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
/**
* @file
* Cintel RAW image decoder
*/
#define BITSTREAM_READER_LE
#include "libavutil/intfloat.h"
#include "libavutil/display.h"
#include "avcodec.h"
#include "bytestream.h"
#include "get_bits.h"
#include "internal.h"
#include "thread.h"
typedef struct CRIContext {
AVCodecContext *jpeg_avctx; // wrapper context for MJPEG
AVPacket *jpkt; // encoded JPEG tile
AVFrame *jpgframe; // decoded JPEG tile
GetByteContext gb;
int color_model;
const uint8_t *data;
unsigned data_size;
uint64_t tile_size[4];
} CRIContext;
static av_cold int cri_decode_init(AVCodecContext *avctx)
{
CRIContext *s = avctx->priv_data;
const AVCodec *codec;
int ret;
s->jpgframe = av_frame_alloc();
if (!s->jpgframe)
return AVERROR(ENOMEM);
s->jpkt = av_packet_alloc();
if (!s->jpkt)
return AVERROR(ENOMEM);
codec = avcodec_find_decoder(AV_CODEC_ID_MJPEG);
if (!codec)
return AVERROR_BUG;
s->jpeg_avctx = avcodec_alloc_context3(codec);
if (!s->jpeg_avctx)
return AVERROR(ENOMEM);
s->jpeg_avctx->flags = avctx->flags;
s->jpeg_avctx->flags2 = avctx->flags2;
s->jpeg_avctx->dct_algo = avctx->dct_algo;
s->jpeg_avctx->idct_algo = avctx->idct_algo;
ret = avcodec_open2(s->jpeg_avctx, codec, NULL);
if (ret < 0)
return ret;
return 0;
}
static void unpack_10bit(GetByteContext *gb, uint16_t *dst, int shift,
int w, int h, ptrdiff_t stride)
{
int count = w * h;
int pos = 0;
while (count > 0) {
uint32_t a0, a1, a2, a3;
if (bytestream2_get_bytes_left(gb) < 4)
break;
a0 = bytestream2_get_le32(gb);
a1 = bytestream2_get_le32(gb);
a2 = bytestream2_get_le32(gb);
a3 = bytestream2_get_le32(gb);
dst[pos] = (((a0 >> 1) & 0xE00) | (a0 & 0x1FF)) << shift;
pos++;
if (pos >= w) {
if (count == 1)
break;
dst += stride;
pos = 0;
}
dst[pos] = (((a0 >> 13) & 0x3F) | ((a0 >> 14) & 0xFC0)) << shift;
pos++;
if (pos >= w) {
if (count == 2)
break;
dst += stride;
pos = 0;
}
dst[pos] = (((a0 >> 26) & 7) | ((a1 & 0x1FF) << 3)) << shift;
pos++;
if (pos >= w) {
if (count == 3)
break;
dst += stride;
pos = 0;
}
dst[pos] = (((a1 >> 10) & 0x1FF) | ((a1 >> 11) & 0xE00)) << shift;
pos++;
if (pos >= w) {
if (count == 4)
break;
dst += stride;
pos = 0;
}
dst[pos] = (((a1 >> 23) & 0x3F) | ((a2 & 0x3F) << 6)) << shift;
pos++;
if (pos >= w) {
if (count == 5)
break;
dst += stride;
pos = 0;
}
dst[pos] = (((a2 >> 7) & 0xFF8) | ((a2 >> 6) & 7)) << shift;
pos++;
if (pos >= w) {
if (count == 6)
break;
dst += stride;
pos = 0;
}
dst[pos] = (((a3 & 7) << 9) | ((a2 >> 20) & 0x1FF)) << shift;
pos++;
if (pos >= w) {
if (count == 7)
break;
dst += stride;
pos = 0;
}
dst[pos] = (((a3 >> 4) & 0xFC0) | ((a3 >> 3) & 0x3F)) << shift;
pos++;
if (pos >= w) {
if (count == 8)
break;
dst += stride;
pos = 0;
}
dst[pos] = (((a3 >> 16) & 7) | ((a3 >> 17) & 0xFF8)) << shift;
pos++;
if (pos >= w) {
if (count == 9)
break;
dst += stride;
pos = 0;
}
count -= 9;
}
}
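// Informal note: each iteration above consumes four little-endian 32-bit
// words (16 bytes) and reassembles nine 10-bit samples from them; the
// shift-and-mask expressions undo Cintel's bit-shuffled packing before
// the caller's left shift is applied.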
static int cri_decode_frame(AVCodecContext *avctx, void *data,
int *got_frame, AVPacket *avpkt)
{
CRIContext *s = avctx->priv_data;
GetByteContext *gb = &s->gb;
ThreadFrame frame = { .f = data };
int ret, bps, hflip = 0, vflip = 0;
AVFrameSideData *rotation;
int compressed = 0;
AVFrame *p = data;
s->data = NULL;
s->data_size = 0;
bytestream2_init(gb, avpkt->data, avpkt->size);
while (bytestream2_get_bytes_left(gb) > 8) {
char codec_name[1024];
uint32_t key, length;
float framerate;
int width, height;
key = bytestream2_get_le32(gb);
length = bytestream2_get_le32(gb);
switch (key) {
case 1:
if (length != 4)
return AVERROR_INVALIDDATA;
if (bytestream2_get_le32(gb) != MKTAG('D', 'V', 'C', 'C'))
return AVERROR_INVALIDDATA;
break;
case 100:
if (length < 16)
return AVERROR_INVALIDDATA;
width = bytestream2_get_le32(gb);
height = bytestream2_get_le32(gb);
s->color_model = bytestream2_get_le32(gb);
if (bytestream2_get_le32(gb) != 1)
return AVERROR_INVALIDDATA;
ret = ff_set_dimensions(avctx, width, height);
if (ret < 0)
return ret;
length -= 16;
goto skip;
case 101:
if (length != 4)
return AVERROR_INVALIDDATA;
if (bytestream2_get_le32(gb) != 0)
return AVERROR_INVALIDDATA;
break;
case 102:
bytestream2_get_buffer(gb, codec_name, FFMIN(length, sizeof(codec_name) - 1));
length -= FFMIN(length, sizeof(codec_name) - 1);
if (strncmp(codec_name, "cintel_craw", FFMIN(length, sizeof(codec_name) - 1)))
return AVERROR_INVALIDDATA;
compressed = 1;
goto skip;
case 103:
if (bytestream2_get_bytes_left(gb) < length)
return AVERROR_INVALIDDATA;
s->data = gb->buffer;
s->data_size = length;
goto skip;
case 105:
hflip = bytestream2_get_byte(gb) != 0;
length--;
goto skip;
case 106:
vflip = bytestream2_get_byte(gb) != 0;
length--;
goto skip;
case 107:
if (length != 4)
return AVERROR_INVALIDDATA;
framerate = av_int2float(bytestream2_get_le32(gb));
avctx->framerate.num = framerate * 1000;
avctx->framerate.den = 1000;
break;
case 119:
if (length != 32)
return AVERROR_INVALIDDATA;
for (int i = 0; i < 4; i++)
s->tile_size[i] = bytestream2_get_le64(gb);
break;
default:
av_log(avctx, AV_LOG_DEBUG, "skipping unknown key %u of length %u\n", key, length);
skip:
bytestream2_skip(gb, length);
}
}
switch (s->color_model) {
case 76:
case 88:
avctx->pix_fmt = AV_PIX_FMT_BAYER_BGGR16;
break;
case 77:
case 89:
avctx->pix_fmt = AV_PIX_FMT_BAYER_GBRG16;
break;
case 78:
case 90:
avctx->pix_fmt = AV_PIX_FMT_BAYER_RGGB16;
break;
case 45:
case 79:
case 91:
avctx->pix_fmt = AV_PIX_FMT_BAYER_GRBG16;
break;
}
switch (s->color_model) {
case 45:
bps = 10;
break;
case 76:
case 77:
case 78:
case 79:
bps = 12;
break;
case 88:
case 89:
case 90:
case 91:
bps = 16;
break;
default:
return AVERROR_INVALIDDATA;
}
if (compressed) {
for (int i = 0; i < 4; i++) {
if (s->tile_size[i] >= s->data_size)
return AVERROR_INVALIDDATA;
}
if (s->tile_size[0] + s->tile_size[1] + s->tile_size[2] + s->tile_size[3] !=
s->data_size)
return AVERROR_INVALIDDATA;
}
if (!s->data || !s->data_size)
return AVERROR_INVALIDDATA;
if ((ret = ff_thread_get_buffer(avctx, &frame, 0)) < 0)
return ret;
avctx->bits_per_raw_sample = bps;
if (!compressed && s->color_model == 45) {
uint16_t *dst = (uint16_t *)p->data[0];
GetByteContext gb;
bytestream2_init(&gb, s->data, s->data_size);
unpack_10bit(&gb, dst, 4, avctx->width, avctx->height, p->linesize[0] / 2);
} else if (!compressed) {
GetBitContext gbit;
const int shift = 16 - bps;
ret = init_get_bits8(&gbit, s->data, s->data_size);
if (ret < 0)
return ret;
for (int y = 0; y < avctx->height; y++) {
uint16_t *dst = (uint16_t *)(p->data[0] + y * p->linesize[0]);
if (get_bits_left(&gbit) < avctx->width * bps)
break;
for (int x = 0; x < avctx->width; x++)
dst[x] = get_bits(&gbit, bps) << shift;
}
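        /* The shift left-justifies each sample in the 16-bit plane: for
         * bps = 12, shift = 4, so a raw sample 0xABC is stored as 0xABC0;
         * for bps = 16 the samples are stored unchanged. */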
} else {
unsigned offset = 0;
for (int tile = 0; tile < 4; tile++) {
av_packet_unref(s->jpkt);
s->jpkt->data = (uint8_t *)s->data + offset;
s->jpkt->size = s->tile_size[tile];
ret = avcodec_send_packet(s->jpeg_avctx, s->jpkt);
if (ret < 0) {
av_log(avctx, AV_LOG_ERROR, "Error submitting a packet for decoding\n");
return ret;
}
ret = avcodec_receive_frame(s->jpeg_avctx, s->jpgframe);
if (ret < 0 || s->jpgframe->format != AV_PIX_FMT_GRAY16 ||
s->jpeg_avctx->width * 2 != avctx->width ||
s->jpeg_avctx->height * 2 != avctx->height) {
if (ret < 0) {
av_log(avctx, AV_LOG_ERROR,
"JPEG decoding error (%d).\n", ret);
} else {
av_log(avctx, AV_LOG_ERROR,
"JPEG invalid format.\n");
ret = AVERROR_INVALIDDATA;
}
                /* Normally skip the broken frame; error out instead if AV_EF_EXPLODE is set */
if (avctx->err_recognition & AV_EF_EXPLODE)
return ret;
else
return 0;
}
for (int y = 0; y < s->jpeg_avctx->height; y++) {
const int hw = s->jpgframe->width / 2;
uint16_t *dst = (uint16_t *)(p->data[0] + (y * 2) * p->linesize[0] + tile * hw * 2);
const uint16_t *src = (const uint16_t *)(s->jpgframe->data[0] + y * s->jpgframe->linesize[0]);
memcpy(dst, src, hw * 2);
src += hw;
dst += p->linesize[0] / 2;
memcpy(dst, src, hw * 2);
}
av_frame_unref(s->jpgframe);
offset += s->tile_size[tile];
}
}
if (hflip || vflip) {
rotation = av_frame_new_side_data(p, AV_FRAME_DATA_DISPLAYMATRIX,
sizeof(int32_t) * 9);
if (rotation) {
av_display_rotation_set((int32_t *)rotation->data, 0.f);
av_display_matrix_flip((int32_t *)rotation->data, hflip, vflip);
}
}
p->pict_type = AV_PICTURE_TYPE_I;
p->key_frame = 1;
*got_frame = 1;
return 0;
}
static av_cold int cri_decode_close(AVCodecContext *avctx)
{
CRIContext *s = avctx->priv_data;
av_frame_free(&s->jpgframe);
av_packet_free(&s->jpkt);
avcodec_free_context(&s->jpeg_avctx);
return 0;
}
AVCodec ff_cri_decoder = {
.name = "cri",
.type = AVMEDIA_TYPE_VIDEO,
.id = AV_CODEC_ID_CRI,
.priv_data_size = sizeof(CRIContext),
.init = cri_decode_init,
.decode = cri_decode_frame,
.close = cri_decode_close,
.capabilities = AV_CODEC_CAP_DR1 | AV_CODEC_CAP_FRAME_THREADS,
.caps_internal = FF_CODEC_CAP_INIT_THREADSAFE | FF_CODEC_CAP_INIT_CLEANUP,
.long_name = NULL_IF_CONFIG_SMALL("Cintel RAW"),
};

View File

@@ -1,105 +0,0 @@
/*
* CRI parser
* Copyright (c) 2021 Paul B Mahol
*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
/**
* @file
* CRI parser
*/
#include "libavutil/bswap.h"
#include "libavutil/common.h"
#include "parser.h"
typedef struct CRIParser {
ParseContext pc;
int count;
int chunk;
int read_bytes;
int skip_bytes;
} CRIParser;
#define KEY (((uint64_t)'\1' << 56) | ((uint64_t)'\0' << 48) | \
((uint64_t)'\0' << 40) | ((uint64_t)'\0' << 32) | \
((uint64_t)'\4' << 24) | ((uint64_t)'\0' << 16) | \
((uint64_t)'\0' << 8) | ((uint64_t)'\0' << 0))
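/* The parser state is filled MSB-first (state = (state << 8) | buf[i]), so
   KEY matches the byte sequence 01 00 00 00 04 00 00 00: a little-endian
   key of 1 followed by a little-endian length of 4, i.e. the first chunk of
   a CRI frame header (compare the key == 1 / length == 4 check in the
   decoder above). */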
static int cri_parse(AVCodecParserContext *s, AVCodecContext *avctx,
const uint8_t **poutbuf, int *poutbuf_size,
const uint8_t *buf, int buf_size)
{
CRIParser *bpc = s->priv_data;
uint64_t state = bpc->pc.state64;
int next = END_NOT_FOUND, i = 0;
s->pict_type = AV_PICTURE_TYPE_I;
s->key_frame = 1;
s->duration = 1;
*poutbuf_size = 0;
*poutbuf = NULL;
for (; i < buf_size; i++) {
state = (state << 8) | buf[i];
bpc->read_bytes++;
if (bpc->skip_bytes > 0) {
bpc->skip_bytes--;
if (bpc->skip_bytes == 0)
bpc->read_bytes = 0;
} else {
if (state != KEY)
continue;
}
if (bpc->skip_bytes == 0 && bpc->read_bytes >= 8) {
bpc->skip_bytes = av_bswap32(state & 0xFFFFFFFF);
bpc->chunk = state >> 32;
bpc->read_bytes = 0;
bpc->count++;
}
if (bpc->chunk == 0x01000000 && bpc->skip_bytes == 4 &&
bpc->read_bytes == 0 && bpc->count > 1) {
next = i - 7;
break;
}
}
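    /* When the key=1/length=4 chunk is seen again (chunk == 0x01000000,
       skip_bytes == 4) and at least one chunk was already consumed, a new
       frame starts here; the 8-byte pattern began at offset i - 7, so the
       current frame ends just before it. */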
bpc->pc.state64 = state;
if (ff_combine_frame(&bpc->pc, next, &buf, &buf_size) < 0) {
*poutbuf = NULL;
*poutbuf_size = 0;
return buf_size;
}
*poutbuf = buf;
*poutbuf_size = buf_size;
return next;
}
AVCodecParser ff_cri_parser = {
.codec_ids = { AV_CODEC_ID_CRI },
.priv_data_size = sizeof(CRIParser),
.parser_parse = cri_parse,
.parser_close = ff_parse_close,
};

View File

@@ -1,180 +0,0 @@
/*
* Copyright (C) 2017 foo86
*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include "get_bits.h"
#include "put_bits.h"
#include "dolby_e.h"
static const uint8_t nb_programs_tab[MAX_PROG_CONF + 1] = {
2, 3, 2, 3, 4, 5, 4, 5, 6, 7, 8, 1, 2, 3, 3, 4, 5, 6, 1, 2, 3, 4, 1, 1
};
static const uint8_t nb_channels_tab[MAX_PROG_CONF + 1] = {
8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 6, 6, 6, 6, 6, 6, 6, 4, 4, 4, 4, 8, 8
};
static const uint16_t sample_rate_tab[16] = {
0, 42965, 43008, 44800, 53706, 53760
};
static int skip_input(DBEContext *s, int nb_words)
{
if (nb_words > s->input_size) {
return AVERROR_INVALIDDATA;
}
s->input += nb_words * s->word_bytes;
s->input_size -= nb_words;
return 0;
}
static int parse_key(DBEContext *s)
{
if (s->key_present) {
const uint8_t *key = s->input;
int ret = skip_input(s, 1);
if (ret < 0)
return ret;
        return AV_RB24(key) >> (24 - s->word_bits);
}
return 0;
}
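/* The key occupies one word at the start of the payload. AV_RB24() always
   reads three bytes, so the shift drops the bits that do not belong to the
   word: for 16-bit words, bytes 12 34 56 yield 0x123456 >> 8 = 0x1234.
   (A hedged reading; for 24-bit words the shift is zero.) */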
int ff_dolby_e_convert_input(DBEContext *s, int nb_words, int key)
{
const uint8_t *src = s->input;
uint8_t *dst = s->buffer;
PutBitContext pb;
int i;
av_assert0(nb_words <= 1024u);
if (nb_words > s->input_size) {
if (s->avctx)
av_log(s->avctx, AV_LOG_ERROR, "Packet too short\n");
return AVERROR_INVALIDDATA;
}
switch (s->word_bits) {
case 16:
for (i = 0; i < nb_words; i++, src += 2, dst += 2)
AV_WB16(dst, AV_RB16(src) ^ key);
break;
case 20:
init_put_bits(&pb, s->buffer, sizeof(s->buffer));
for (i = 0; i < nb_words; i++, src += 3)
put_bits(&pb, 20, AV_RB24(src) >> 4 ^ key);
flush_put_bits(&pb);
break;
case 24:
for (i = 0; i < nb_words; i++, src += 3, dst += 3)
AV_WB24(dst, AV_RB24(src) ^ key);
break;
default:
av_assert0(0);
}
return init_get_bits(&s->gb, s->buffer, nb_words * s->word_bits);
}
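/* A worked example of the 20-bit descrambling path above (hedged): each
   word is stored left-justified in three bytes, so bytes AB CD Ex give
   AV_RB24() = 0xABCDEx, and (0xABCDEx >> 4) ^ key = 0xABCDE ^ key. Note the
   precedence: ">>" binds tighter than "^", so the word is extracted first
   and XORed with the key afterwards. */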
int ff_dolby_e_parse_header(DBEContext *s, const uint8_t *buf, int buf_size)
{
DolbyEHeaderInfo *const header = &s->metadata;
int hdr, ret, key, mtd_size;
if (buf_size < 3)
return AVERROR_INVALIDDATA;
hdr = AV_RB24(buf);
if ((hdr & 0xfffffe) == 0x7888e) {
s->word_bits = 24;
} else if ((hdr & 0xffffe0) == 0x788e0) {
s->word_bits = 20;
} else if ((hdr & 0xfffe00) == 0x78e00) {
s->word_bits = 16;
} else {
if (s->avctx)
av_log(s->avctx, AV_LOG_ERROR, "Invalid frame header\n");
return AVERROR_INVALIDDATA;
}
    s->word_bytes = (s->word_bits + 7) >> 3;
    s->input = buf + s->word_bytes;
    s->input_size = buf_size / s->word_bytes - 1;
    s->key_present = (hdr >> (24 - s->word_bits)) & 1;
if ((key = parse_key(s)) < 0)
return key;
if ((ret = ff_dolby_e_convert_input(s, 1, key)) < 0)
return ret;
skip_bits(&s->gb, 4);
mtd_size = get_bits(&s->gb, 10);
if (!mtd_size) {
if (s->avctx)
av_log(s->avctx, AV_LOG_ERROR, "Invalid metadata size\n");
return AVERROR_INVALIDDATA;
}
if ((ret = ff_dolby_e_convert_input(s, mtd_size, key)) < 0)
return ret;
skip_bits(&s->gb, 14);
header->prog_conf = get_bits(&s->gb, 6);
if (header->prog_conf > MAX_PROG_CONF) {
if (s->avctx)
av_log(s->avctx, AV_LOG_ERROR, "Invalid program configuration\n");
return AVERROR_INVALIDDATA;
}
header->nb_channels = nb_channels_tab[header->prog_conf];
header->nb_programs = nb_programs_tab[header->prog_conf];
header->fr_code = get_bits(&s->gb, 4);
header->fr_code_orig = get_bits(&s->gb, 4);
if (!(header->sample_rate = sample_rate_tab[header->fr_code]) ||
!sample_rate_tab[header->fr_code_orig]) {
if (s->avctx)
av_log(s->avctx, AV_LOG_ERROR, "Invalid frame rate code\n");
return AVERROR_INVALIDDATA;
}
skip_bits_long(&s->gb, 88);
for (int i = 0; i < header->nb_channels; i++)
header->ch_size[i] = get_bits(&s->gb, 10);
header->mtd_ext_size = get_bits(&s->gb, 8);
header->meter_size = get_bits(&s->gb, 8);
skip_bits_long(&s->gb, 10 * header->nb_programs);
for (int i = 0; i < header->nb_channels; i++) {
header->rev_id[i] = get_bits(&s->gb, 4);
skip_bits1(&s->gb);
header->begin_gain[i] = get_bits(&s->gb, 10);
header->end_gain[i] = get_bits(&s->gb, 10);
}
if (get_bits_left(&s->gb) < 0) {
if (s->avctx)
av_log(s->avctx, AV_LOG_ERROR, "Read past end of metadata\n");
return AVERROR_INVALIDDATA;
}
return skip_input(s, mtd_size + 1);
}

View File

@@ -1,69 +0,0 @@
/*
* Copyright (C) 2017 foo86
*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include "dolby_e.h"
#include "get_bits.h"
#include "put_bits.h"
typedef struct DBEParseContext {
DBEContext dectx;
} DBEParseContext;
static int dolby_e_parse(AVCodecParserContext *s2, AVCodecContext *avctx,
const uint8_t **poutbuf, int *poutbuf_size,
const uint8_t *buf, int buf_size)
{
DBEParseContext *s1 = s2->priv_data;
DBEContext *s = &s1->dectx;
int ret;
if ((ret = ff_dolby_e_parse_header(s, buf, buf_size)) < 0)
goto end;
s2->duration = FRAME_SAMPLES;
switch (s->metadata.nb_channels) {
case 4:
avctx->channel_layout = AV_CH_LAYOUT_4POINT0;
break;
case 6:
avctx->channel_layout = AV_CH_LAYOUT_5POINT1;
break;
case 8:
avctx->channel_layout = AV_CH_LAYOUT_7POINT1;
break;
}
avctx->channels = s->metadata.nb_channels;
avctx->sample_rate = s->metadata.sample_rate;
avctx->sample_fmt = AV_SAMPLE_FMT_FLTP;
end:
    /* Always return the full packet: this parser does no splitting or
       combining, only packet analysis. */
*poutbuf = buf;
*poutbuf_size = buf_size;
return buf_size;
}
AVCodecParser ff_dolby_e_parser = {
.codec_ids = { AV_CODEC_ID_DOLBY_E },
.priv_data_size = sizeof(DBEParseContext),
.parser_parse = dolby_e_parse,
};

View File

@@ -1,515 +0,0 @@
/*
* DVB subtitle encoding
* Copyright (c) 2005 Fabrice Bellard
*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include "avcodec.h"
#include "bytestream.h"
#include "libavutil/colorspace.h"
typedef struct DVBSubtitleContext {
int object_version;
} DVBSubtitleContext;
#define PUTBITS2(val)\
{\
bitbuf |= (val) << bitcnt;\
bitcnt -= 2;\
if (bitcnt < 0) {\
bitcnt = 6;\
*q++ = bitbuf;\
bitbuf = 0;\
}\
}
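/* PUTBITS2() packs 2-bit codes MSB-first, four to a byte: bitcnt starts at
   6 and steps down by 2, so PUTBITS2(2); PUTBITS2(1); PUTBITS2(0);
   PUTBITS2(3); emits the single byte 10 01 00 11 = 0x93. */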
static int dvb_encode_rle2(uint8_t **pq, int buf_size,
const uint8_t *bitmap, int linesize,
int w, int h)
{
uint8_t *q, *line_begin;
unsigned int bitbuf;
int bitcnt;
int x, y, len, x1, v, color;
q = *pq;
for(y = 0; y < h; y++) {
// Worst case line is 3 bits per value + 4 bytes overhead
if (buf_size * 8 < w * 3 + 32)
return AVERROR_BUFFER_TOO_SMALL;
line_begin = q;
*q++ = 0x10;
bitbuf = 0;
bitcnt = 6;
x = 0;
while (x < w) {
x1 = x;
color = bitmap[x1++];
while (x1 < w && bitmap[x1] == color)
x1++;
len = x1 - x;
if (color == 0 && len == 2) {
PUTBITS2(0);
PUTBITS2(0);
PUTBITS2(1);
} else if (len >= 3 && len <= 10) {
v = len - 3;
PUTBITS2(0);
PUTBITS2((v >> 2) | 2);
PUTBITS2(v & 3);
PUTBITS2(color);
} else if (len >= 12 && len <= 27) {
v = len - 12;
PUTBITS2(0);
PUTBITS2(0);
PUTBITS2(2);
PUTBITS2(v >> 2);
PUTBITS2(v & 3);
PUTBITS2(color);
} else if (len >= 29) {
/* length = 29 ... 284 */
if (len > 284)
len = 284;
v = len - 29;
PUTBITS2(0);
PUTBITS2(0);
PUTBITS2(3);
PUTBITS2((v >> 6));
PUTBITS2((v >> 4) & 3);
PUTBITS2((v >> 2) & 3);
PUTBITS2(v & 3);
PUTBITS2(color);
} else {
PUTBITS2(color);
if (color == 0) {
PUTBITS2(1);
}
len = 1;
}
x += len;
}
/* end of line */
PUTBITS2(0);
PUTBITS2(0);
PUTBITS2(0);
if (bitcnt != 6) {
*q++ = bitbuf;
}
*q++ = 0xf0;
bitmap += linesize;
buf_size -= q - line_begin;
}
len = q - *pq;
*pq = q;
return len;
}
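/* Summary of the 2-bit/pixel code strings emitted above (a condensed,
   non-normative reading of the DVB subtitling spec, ETSI EN 300 743):
     cc                        (cc != 00)  one pixel of colour cc
     00 01                                 one pixel of colour 0
     00 00 01                              two pixels of colour 0
     00 1v vv cc                           run of 3-10 pixels (v = len - 3)
     00 00 10 vv vv cc                     run of 12-27 pixels (v = len - 12)
     00 00 11 vv vv vv vv cc               run of 29-284 pixels (v = len - 29)
     00 00 00                              end of line
   Runs of 11 or 28 have no direct code; the fallthrough branch emits a
   single pixel and lets the loop re-encode the remainder. */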
#define PUTBITS4(val)\
{\
bitbuf |= (val) << bitcnt;\
bitcnt -= 4;\
if (bitcnt < 0) {\
bitcnt = 4;\
*q++ = bitbuf;\
bitbuf = 0;\
}\
}
/* some DVB decoders only implement 4 bits/pixel */
static int dvb_encode_rle4(uint8_t **pq, int buf_size,
const uint8_t *bitmap, int linesize,
int w, int h)
{
uint8_t *q, *line_begin;
unsigned int bitbuf;
int bitcnt;
int x, y, len, x1, v, color;
q = *pq;
for(y = 0; y < h; y++) {
// Worst case line is 6 bits per value, + 4 bytes overhead
if (buf_size * 8 < w * 6 + 32)
return AVERROR_BUFFER_TOO_SMALL;
line_begin = q;
*q++ = 0x11;
bitbuf = 0;
bitcnt = 4;
x = 0;
while (x < w) {
x1 = x;
color = bitmap[x1++];
while (x1 < w && bitmap[x1] == color)
x1++;
len = x1 - x;
if (color == 0 && len == 2) {
PUTBITS4(0);
PUTBITS4(0xd);
} else if (color == 0 && (len >= 3 && len <= 9)) {
PUTBITS4(0);
PUTBITS4(len - 2);
} else if (len >= 4 && len <= 7) {
PUTBITS4(0);
PUTBITS4(8 + len - 4);
PUTBITS4(color);
} else if (len >= 9 && len <= 24) {
PUTBITS4(0);
PUTBITS4(0xe);
PUTBITS4(len - 9);
PUTBITS4(color);
} else if (len >= 25) {
if (len > 280)
len = 280;
v = len - 25;
PUTBITS4(0);
PUTBITS4(0xf);
PUTBITS4(v >> 4);
PUTBITS4(v & 0xf);
PUTBITS4(color);
} else {
PUTBITS4(color);
if (color == 0) {
PUTBITS4(0xc);
}
len = 1;
}
x += len;
}
/* end of line */
PUTBITS4(0);
PUTBITS4(0);
if (bitcnt != 4) {
*q++ = bitbuf;
}
*q++ = 0xf0;
bitmap += linesize;
buf_size -= q - line_begin;
}
len = q - *pq;
*pq = q;
return len;
}
static int dvb_encode_rle8(uint8_t **pq, int buf_size,
const uint8_t *bitmap, int linesize,
int w, int h)
{
uint8_t *q, *line_begin;
int x, y, len, x1, color;
q = *pq;
for (y = 0; y < h; y++) {
// Worst case line is 12 bits per value, + 3 bytes overhead
if (buf_size * 8 < w * 12 + 24)
return AVERROR_BUFFER_TOO_SMALL;
line_begin = q;
*q++ = 0x12;
x = 0;
while (x < w) {
x1 = x;
color = bitmap[x1++];
while (x1 < w && bitmap[x1] == color)
x1++;
len = x1 - x;
if (len == 1 && color) {
// 00000001 to 11111111 1 pixel in colour x
*q++ = color;
} else {
if (color == 0x00) {
// 00000000 0LLLLLLL L pixels (1-127) in colour 0 (L > 0)
len = FFMIN(len, 127);
*q++ = 0x00;
*q++ = len;
} else if (len > 2) {
// 00000000 1LLLLLLL CCCCCCCC L pixels (3-127) in colour C (L > 2)
len = FFMIN(len, 127);
*q++ = 0x00;
*q++ = 0x80+len;
*q++ = color;
}
else if (len == 2) {
*q++ = color;
*q++ = color;
} else {
*q++ = color;
len = 1;
}
}
x += len;
}
/* end of line */
// 00000000 end of 8-bit/pixel_code_string
*q++ = 0x00;
*q++ = 0xf0;
bitmap += linesize;
buf_size -= q - line_begin;
}
len = q - *pq;
*pq = q;
return len;
}
static int dvbsub_encode(AVCodecContext *avctx, uint8_t *outbuf, int buf_size,
const AVSubtitle *h)
{
DVBSubtitleContext *s = avctx->priv_data;
uint8_t *q, *pseg_len;
int page_id, region_id, clut_id, object_id, i, bpp_index, page_state;
q = outbuf;
page_id = 1;
if (h->num_rects && !h->rects)
return AVERROR(EINVAL);
if (avctx->width > 0 && avctx->height > 0) {
if (buf_size < 11)
return AVERROR_BUFFER_TOO_SMALL;
/* display definition segment */
*q++ = 0x0f; /* sync_byte */
*q++ = 0x14; /* segment_type */
bytestream_put_be16(&q, page_id);
pseg_len = q;
q += 2; /* segment length */
*q++ = 0x00; /* dds version number & display window flag */
bytestream_put_be16(&q, avctx->width - 1); /* display width */
bytestream_put_be16(&q, avctx->height - 1); /* display height */
bytestream_put_be16(&pseg_len, q - pseg_len - 2);
buf_size -= 11;
}
/* page composition segment */
if (buf_size < 8 + h->num_rects * 6)
return AVERROR_BUFFER_TOO_SMALL;
*q++ = 0x0f; /* sync_byte */
*q++ = 0x10; /* segment_type */
bytestream_put_be16(&q, page_id);
pseg_len = q;
q += 2; /* segment length */
*q++ = 30; /* page_timeout (seconds) */
page_state = 2; /* mode change */
/* page_version = 0 + page_state */
*q++ = (s->object_version << 4) | (page_state << 2) | 3;
for (region_id = 0; region_id < h->num_rects; region_id++) {
*q++ = region_id;
*q++ = 0xff; /* reserved */
bytestream_put_be16(&q, h->rects[region_id]->x); /* left pos */
bytestream_put_be16(&q, h->rects[region_id]->y); /* top pos */
}
bytestream_put_be16(&pseg_len, q - pseg_len - 2);
buf_size -= 8 + h->num_rects * 6;
if (h->num_rects) {
for (clut_id = 0; clut_id < h->num_rects; clut_id++) {
if (buf_size < 6 + h->rects[clut_id]->nb_colors * 6)
return AVERROR_BUFFER_TOO_SMALL;
/* CLUT segment */
if (h->rects[clut_id]->nb_colors <= 4) {
/* 2 bpp, some decoders do not support it correctly */
bpp_index = 0;
} else if (h->rects[clut_id]->nb_colors <= 16) {
/* 4 bpp, standard encoding */
bpp_index = 1;
} else if (h->rects[clut_id]->nb_colors <= 256) {
/* 8 bpp, standard encoding */
bpp_index = 2;
} else {
return AVERROR(EINVAL);
}
/* CLUT segment */
*q++ = 0x0f; /* sync byte */
*q++ = 0x12; /* CLUT definition segment */
bytestream_put_be16(&q, page_id);
pseg_len = q;
q += 2; /* segment length */
*q++ = clut_id;
*q++ = (0 << 4) | 0xf; /* version = 0 */
for(i = 0; i < h->rects[clut_id]->nb_colors; i++) {
*q++ = i; /* clut_entry_id */
*q++ = (1 << (7 - bpp_index)) | (0xf << 1) | 1; /* 2 bits/pixel full range */
{
int a, r, g, b;
uint32_t x= ((uint32_t*)h->rects[clut_id]->data[1])[i];
a = (x >> 24) & 0xff;
r = (x >> 16) & 0xff;
g = (x >> 8) & 0xff;
b = (x >> 0) & 0xff;
*q++ = RGB_TO_Y_CCIR(r, g, b);
*q++ = RGB_TO_V_CCIR(r, g, b, 0);
*q++ = RGB_TO_U_CCIR(r, g, b, 0);
*q++ = 255 - a;
}
}
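            /* The loop above converts each RGBA palette entry to the CLUT's
               Y/Cr/Cb/T form: CCIR (BT.601) limited-range luma/chroma plus a
               transparency byte that is the inverse of alpha. Worked
               example: opaque white (a=255, r=g=b=255) becomes Y=235,
               Cr=Cb=128, T=0. */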
bytestream_put_be16(&pseg_len, q - pseg_len - 2);
buf_size -= 6 + h->rects[clut_id]->nb_colors * 6;
}
if (buf_size < h->num_rects * 22)
return AVERROR_BUFFER_TOO_SMALL;
for (region_id = 0; region_id < h->num_rects; region_id++) {
/* region composition segment */
if (h->rects[region_id]->nb_colors <= 4) {
/* 2 bpp, some decoders do not support it correctly */
bpp_index = 0;
} else if (h->rects[region_id]->nb_colors <= 16) {
/* 4 bpp, standard encoding */
bpp_index = 1;
} else if (h->rects[region_id]->nb_colors <= 256) {
/* 8 bpp, standard encoding */
bpp_index = 2;
} else {
return AVERROR(EINVAL);
}
*q++ = 0x0f; /* sync_byte */
*q++ = 0x11; /* segment_type */
bytestream_put_be16(&q, page_id);
pseg_len = q;
q += 2; /* segment length */
*q++ = region_id;
*q++ = (s->object_version << 4) | (0 << 3) | 0x07; /* version , no fill */
bytestream_put_be16(&q, h->rects[region_id]->w); /* region width */
bytestream_put_be16(&q, h->rects[region_id]->h); /* region height */
*q++ = ((1 + bpp_index) << 5) | ((1 + bpp_index) << 2) | 0x03;
*q++ = region_id; /* clut_id == region_id */
*q++ = 0; /* 8 bit fill colors */
*q++ = 0x03; /* 4 bit and 2 bit fill colors */
bytestream_put_be16(&q, region_id); /* object_id == region_id */
*q++ = (0 << 6) | (0 << 4);
*q++ = 0;
*q++ = 0xf0;
*q++ = 0;
bytestream_put_be16(&pseg_len, q - pseg_len - 2);
}
buf_size -= h->num_rects * 22;
for (object_id = 0; object_id < h->num_rects; object_id++) {
int (*dvb_encode_rle)(uint8_t **pq, int buf_size,
const uint8_t *bitmap, int linesize,
int w, int h);
if (buf_size < 13)
return AVERROR_BUFFER_TOO_SMALL;
/* bpp_index maths */
if (h->rects[object_id]->nb_colors <= 4) {
/* 2 bpp, some decoders do not support it correctly */
dvb_encode_rle = dvb_encode_rle2;
} else if (h->rects[object_id]->nb_colors <= 16) {
/* 4 bpp, standard encoding */
dvb_encode_rle = dvb_encode_rle4;
} else if (h->rects[object_id]->nb_colors <= 256) {
/* 8 bpp, standard encoding */
dvb_encode_rle = dvb_encode_rle8;
} else {
return AVERROR(EINVAL);
}
/* Object Data segment */
*q++ = 0x0f; /* sync byte */
*q++ = 0x13;
bytestream_put_be16(&q, page_id);
pseg_len = q;
q += 2; /* segment length */
bytestream_put_be16(&q, object_id);
*q++ = (s->object_version << 4) | (0 << 2) | (0 << 1) | 1; /* version = 0,
object_coding_method,
non_modifying_color_flag */
{
uint8_t *ptop_field_len, *pbottom_field_len, *top_ptr, *bottom_ptr;
int ret;
ptop_field_len = q;
q += 2;
pbottom_field_len = q;
q += 2;
buf_size -= 13;
top_ptr = q;
ret = dvb_encode_rle(&q, buf_size,
h->rects[object_id]->data[0],
h->rects[object_id]->w * 2,
h->rects[object_id]->w,
h->rects[object_id]->h >> 1);
if (ret < 0)
return ret;
buf_size -= ret;
bottom_ptr = q;
ret = dvb_encode_rle(&q, buf_size,
h->rects[object_id]->data[0] + h->rects[object_id]->w,
h->rects[object_id]->w * 2,
h->rects[object_id]->w,
h->rects[object_id]->h >> 1);
if (ret < 0)
return ret;
buf_size -= ret;
bytestream_put_be16(&ptop_field_len, bottom_ptr - top_ptr);
bytestream_put_be16(&pbottom_field_len, q - bottom_ptr);
}
bytestream_put_be16(&pseg_len, q - pseg_len - 2);
}
}
/* end of display set segment */
if (buf_size < 6)
return AVERROR_BUFFER_TOO_SMALL;
*q++ = 0x0f; /* sync_byte */
*q++ = 0x80; /* segment_type */
bytestream_put_be16(&q, page_id);
pseg_len = q;
q += 2; /* segment length */
bytestream_put_be16(&pseg_len, q - pseg_len - 2);
buf_size -= 6;
s->object_version = (s->object_version + 1) & 0xf;
return q - outbuf;
}
AVCodec ff_dvbsub_encoder = {
.name = "dvbsub",
.long_name = NULL_IF_CONFIG_SMALL("DVB subtitles"),
.type = AVMEDIA_TYPE_SUBTITLE,
.id = AV_CODEC_ID_DVB_SUBTITLE,
.priv_data_size = sizeof(DVBSubtitleContext),
.encode_sub = dvbsub_encode,
};

View File

@@ -1,506 +0,0 @@
/*
* DXVA2 AV1 HW acceleration.
*
* copyright (c) 2020 Hendrik Leppkes
*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include "libavutil/avassert.h"
#include "libavutil/pixdesc.h"
#include "dxva2_internal.h"
#include "av1dec.h"
#define MAX_TILES 256
struct AV1DXVAContext {
FFDXVASharedContext shared;
unsigned int bitstream_allocated;
uint8_t *bitstream_cache;
};
struct av1_dxva2_picture_context {
DXVA_PicParams_AV1 pp;
unsigned tile_count;
DXVA_Tile_AV1 tiles[MAX_TILES];
uint8_t *bitstream;
unsigned bitstream_size;
};
static int get_bit_depth_from_seq(const AV1RawSequenceHeader *seq)
{
if (seq->seq_profile == 2 && seq->color_config.high_bitdepth)
return seq->color_config.twelve_bit ? 12 : 10;
else if (seq->seq_profile <= 2 && seq->color_config.high_bitdepth)
return 10;
else
return 8;
}
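/* This mirrors the bit-depth derivation in the AV1 spec: only profile 2 can
   signal 12 bit (via twelve_bit when high_bitdepth is set); any profile
   with high_bitdepth otherwise means 10 bit, and everything else is 8 bit.
   E.g. seq_profile = 2, high_bitdepth = 1, twelve_bit = 0 selects 10 bit. */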
static int fill_picture_parameters(const AVCodecContext *avctx, AVDXVAContext *ctx, const AV1DecContext *h,
DXVA_PicParams_AV1 *pp)
{
int i,j, uses_lr;
const AV1RawSequenceHeader *seq = h->raw_seq;
const AV1RawFrameHeader *frame_header = h->raw_frame_header;
const AV1RawFilmGrainParams *film_grain = &h->cur_frame.film_grain;
unsigned char remap_lr_type[4] = { AV1_RESTORE_NONE, AV1_RESTORE_SWITCHABLE, AV1_RESTORE_WIENER, AV1_RESTORE_SGRPROJ };
int apply_grain = !(avctx->export_side_data & AV_CODEC_EXPORT_DATA_FILM_GRAIN) && film_grain->apply_grain;
memset(pp, 0, sizeof(*pp));
pp->width = avctx->width;
pp->height = avctx->height;
pp->max_width = seq->max_frame_width_minus_1 + 1;
pp->max_height = seq->max_frame_height_minus_1 + 1;
pp->CurrPicTextureIndex = ff_dxva2_get_surface_index(avctx, ctx, h->cur_frame.tf.f);
pp->superres_denom = frame_header->use_superres ? frame_header->coded_denom : AV1_SUPERRES_NUM;
pp->bitdepth = get_bit_depth_from_seq(seq);
pp->seq_profile = seq->seq_profile;
/* Tiling info */
pp->tiles.cols = frame_header->tile_cols;
pp->tiles.rows = frame_header->tile_rows;
pp->tiles.context_update_id = frame_header->context_update_tile_id;
for (i = 0; i < pp->tiles.cols; i++)
pp->tiles.widths[i] = frame_header->width_in_sbs_minus_1[i] + 1;
for (i = 0; i < pp->tiles.rows; i++)
pp->tiles.heights[i] = frame_header->height_in_sbs_minus_1[i] + 1;
/* Coding tools */
pp->coding.use_128x128_superblock = seq->use_128x128_superblock;
pp->coding.intra_edge_filter = seq->enable_intra_edge_filter;
pp->coding.interintra_compound = seq->enable_interintra_compound;
pp->coding.masked_compound = seq->enable_masked_compound;
pp->coding.warped_motion = frame_header->allow_warped_motion;
pp->coding.dual_filter = seq->enable_dual_filter;
pp->coding.jnt_comp = seq->enable_jnt_comp;
pp->coding.screen_content_tools = frame_header->allow_screen_content_tools;
pp->coding.integer_mv = frame_header->force_integer_mv || !(frame_header->frame_type & 1);
pp->coding.cdef = seq->enable_cdef;
pp->coding.restoration = seq->enable_restoration;
pp->coding.film_grain = seq->film_grain_params_present && !(avctx->export_side_data & AV_CODEC_EXPORT_DATA_FILM_GRAIN);
pp->coding.intrabc = frame_header->allow_intrabc;
pp->coding.high_precision_mv = frame_header->allow_high_precision_mv;
pp->coding.switchable_motion_mode = frame_header->is_motion_mode_switchable;
pp->coding.filter_intra = seq->enable_filter_intra;
pp->coding.disable_frame_end_update_cdf = frame_header->disable_frame_end_update_cdf;
pp->coding.disable_cdf_update = frame_header->disable_cdf_update;
pp->coding.reference_mode = frame_header->reference_select;
pp->coding.skip_mode = frame_header->skip_mode_present;
pp->coding.reduced_tx_set = frame_header->reduced_tx_set;
pp->coding.superres = frame_header->use_superres;
pp->coding.tx_mode = frame_header->tx_mode;
pp->coding.use_ref_frame_mvs = frame_header->use_ref_frame_mvs;
pp->coding.enable_ref_frame_mvs = seq->enable_ref_frame_mvs;
pp->coding.reference_frame_update = 1; // 0 for show_existing_frame with key frames, but those are not passed to the hwaccel
/* Format & Picture Info flags */
pp->format.frame_type = frame_header->frame_type;
pp->format.show_frame = frame_header->show_frame;
pp->format.showable_frame = frame_header->showable_frame;
pp->format.subsampling_x = seq->color_config.subsampling_x;
pp->format.subsampling_y = seq->color_config.subsampling_y;
pp->format.mono_chrome = seq->color_config.mono_chrome;
/* References */
pp->primary_ref_frame = frame_header->primary_ref_frame;
pp->order_hint = frame_header->order_hint;
pp->order_hint_bits = seq->enable_order_hint ? seq->order_hint_bits_minus_1 + 1 : 0;
memset(pp->RefFrameMapTextureIndex, 0xFF, sizeof(pp->RefFrameMapTextureIndex));
for (i = 0; i < AV1_REFS_PER_FRAME; i++) {
int8_t ref_idx = frame_header->ref_frame_idx[i];
AVFrame *ref_frame = h->ref[ref_idx].tf.f;
pp->frame_refs[i].width = ref_frame->width;
pp->frame_refs[i].height = ref_frame->height;
pp->frame_refs[i].Index = ref_frame->buf[0] ? ref_idx : 0xFF;
/* Global Motion */
pp->frame_refs[i].wminvalid = (h->cur_frame.gm_type[AV1_REF_FRAME_LAST + i] == AV1_WARP_MODEL_IDENTITY);
pp->frame_refs[i].wmtype = h->cur_frame.gm_type[AV1_REF_FRAME_LAST + i];
for (j = 0; j < 6; ++j) {
pp->frame_refs[i].wmmat[j] = h->cur_frame.gm_params[AV1_REF_FRAME_LAST + i][j];
}
}
for (i = 0; i < AV1_NUM_REF_FRAMES; i++) {
AVFrame *ref_frame = h->ref[i].tf.f;
if (ref_frame->buf[0])
pp->RefFrameMapTextureIndex[i] = ff_dxva2_get_surface_index(avctx, ctx, ref_frame);
}
/* Loop filter parameters */
pp->loop_filter.filter_level[0] = frame_header->loop_filter_level[0];
pp->loop_filter.filter_level[1] = frame_header->loop_filter_level[1];
pp->loop_filter.filter_level_u = frame_header->loop_filter_level[2];
pp->loop_filter.filter_level_v = frame_header->loop_filter_level[3];
pp->loop_filter.sharpness_level = frame_header->loop_filter_sharpness;
pp->loop_filter.mode_ref_delta_enabled = frame_header->loop_filter_delta_enabled;
pp->loop_filter.mode_ref_delta_update = frame_header->loop_filter_delta_update;
pp->loop_filter.delta_lf_multi = frame_header->delta_lf_multi;
pp->loop_filter.delta_lf_present = frame_header->delta_lf_present;
pp->loop_filter.delta_lf_res = frame_header->delta_lf_res;
for (i = 0; i < AV1_TOTAL_REFS_PER_FRAME; i++) {
pp->loop_filter.ref_deltas[i] = frame_header->loop_filter_ref_deltas[i];
}
pp->loop_filter.mode_deltas[0] = frame_header->loop_filter_mode_deltas[0];
pp->loop_filter.mode_deltas[1] = frame_header->loop_filter_mode_deltas[1];
pp->loop_filter.frame_restoration_type[0] = remap_lr_type[frame_header->lr_type[0]];
pp->loop_filter.frame_restoration_type[1] = remap_lr_type[frame_header->lr_type[1]];
pp->loop_filter.frame_restoration_type[2] = remap_lr_type[frame_header->lr_type[2]];
uses_lr = frame_header->lr_type[0] || frame_header->lr_type[1] || frame_header->lr_type[2];
pp->loop_filter.log2_restoration_unit_size[0] = uses_lr ? (6 + frame_header->lr_unit_shift) : 8;
pp->loop_filter.log2_restoration_unit_size[1] = uses_lr ? (6 + frame_header->lr_unit_shift - frame_header->lr_uv_shift) : 8;
pp->loop_filter.log2_restoration_unit_size[2] = uses_lr ? (6 + frame_header->lr_unit_shift - frame_header->lr_uv_shift) : 8;
/* Quantization */
pp->quantization.delta_q_present = frame_header->delta_q_present;
pp->quantization.delta_q_res = frame_header->delta_q_res;
pp->quantization.base_qindex = frame_header->base_q_idx;
pp->quantization.y_dc_delta_q = frame_header->delta_q_y_dc;
pp->quantization.u_dc_delta_q = frame_header->delta_q_u_dc;
pp->quantization.v_dc_delta_q = frame_header->delta_q_v_dc;
pp->quantization.u_ac_delta_q = frame_header->delta_q_u_ac;
pp->quantization.v_ac_delta_q = frame_header->delta_q_v_ac;
pp->quantization.qm_y = frame_header->using_qmatrix ? frame_header->qm_y : 0xFF;
pp->quantization.qm_u = frame_header->using_qmatrix ? frame_header->qm_u : 0xFF;
pp->quantization.qm_v = frame_header->using_qmatrix ? frame_header->qm_v : 0xFF;
/* Cdef parameters */
pp->cdef.damping = frame_header->cdef_damping_minus_3;
pp->cdef.bits = frame_header->cdef_bits;
for (i = 0; i < 8; i++) {
pp->cdef.y_strengths[i].primary = frame_header->cdef_y_pri_strength[i];
pp->cdef.y_strengths[i].secondary = frame_header->cdef_y_sec_strength[i];
pp->cdef.uv_strengths[i].primary = frame_header->cdef_uv_pri_strength[i];
pp->cdef.uv_strengths[i].secondary = frame_header->cdef_uv_sec_strength[i];
}
/* Misc flags */
pp->interp_filter = frame_header->interpolation_filter;
/* Segmentation */
pp->segmentation.enabled = frame_header->segmentation_enabled;
pp->segmentation.update_map = frame_header->segmentation_update_map;
pp->segmentation.update_data = frame_header->segmentation_update_data;
pp->segmentation.temporal_update = frame_header->segmentation_temporal_update;
for (i = 0; i < AV1_MAX_SEGMENTS; i++) {
for (j = 0; j < AV1_SEG_LVL_MAX; j++) {
pp->segmentation.feature_mask[i].mask |= frame_header->feature_enabled[i][j] << j;
pp->segmentation.feature_data[i][j] = frame_header->feature_value[i][j];
}
}
/* Film grain */
if (apply_grain) {
pp->film_grain.apply_grain = 1;
pp->film_grain.scaling_shift_minus8 = film_grain->grain_scaling_minus_8;
pp->film_grain.chroma_scaling_from_luma = film_grain->chroma_scaling_from_luma;
pp->film_grain.ar_coeff_lag = film_grain->ar_coeff_lag;
pp->film_grain.ar_coeff_shift_minus6 = film_grain->ar_coeff_shift_minus_6;
pp->film_grain.grain_scale_shift = film_grain->grain_scale_shift;
pp->film_grain.overlap_flag = film_grain->overlap_flag;
pp->film_grain.clip_to_restricted_range = film_grain->clip_to_restricted_range;
pp->film_grain.matrix_coeff_is_identity = (seq->color_config.matrix_coefficients == AVCOL_SPC_RGB);
pp->film_grain.grain_seed = film_grain->grain_seed;
pp->film_grain.num_y_points = film_grain->num_y_points;
for (i = 0; i < film_grain->num_y_points; i++) {
pp->film_grain.scaling_points_y[i][0] = film_grain->point_y_value[i];
pp->film_grain.scaling_points_y[i][1] = film_grain->point_y_scaling[i];
}
pp->film_grain.num_cb_points = film_grain->num_cb_points;
for (i = 0; i < film_grain->num_cb_points; i++) {
pp->film_grain.scaling_points_cb[i][0] = film_grain->point_cb_value[i];
pp->film_grain.scaling_points_cb[i][1] = film_grain->point_cb_scaling[i];
}
pp->film_grain.num_cr_points = film_grain->num_cr_points;
for (i = 0; i < film_grain->num_cr_points; i++) {
pp->film_grain.scaling_points_cr[i][0] = film_grain->point_cr_value[i];
pp->film_grain.scaling_points_cr[i][1] = film_grain->point_cr_scaling[i];
}
for (i = 0; i < 24; i++) {
pp->film_grain.ar_coeffs_y[i] = film_grain->ar_coeffs_y_plus_128[i];
}
for (i = 0; i < 25; i++) {
pp->film_grain.ar_coeffs_cb[i] = film_grain->ar_coeffs_cb_plus_128[i];
pp->film_grain.ar_coeffs_cr[i] = film_grain->ar_coeffs_cr_plus_128[i];
}
pp->film_grain.cb_mult = film_grain->cb_mult;
pp->film_grain.cb_luma_mult = film_grain->cb_luma_mult;
pp->film_grain.cr_mult = film_grain->cr_mult;
pp->film_grain.cr_luma_mult = film_grain->cr_luma_mult;
pp->film_grain.cb_offset = film_grain->cb_offset;
pp->film_grain.cr_offset = film_grain->cr_offset;
}
// XXX: Setting the StatusReportFeedbackNumber breaks decoding on some drivers (tested on NVIDIA 457.09)
// Status Reporting is not used by FFmpeg, hence not providing a number does not cause any issues
//pp->StatusReportFeedbackNumber = 1 + DXVA_CONTEXT_REPORT_ID(avctx, ctx)++;
return 0;
}
static int dxva2_av1_start_frame(AVCodecContext *avctx,
av_unused const uint8_t *buffer,
av_unused uint32_t size)
{
const AV1DecContext *h = avctx->priv_data;
AVDXVAContext *ctx = DXVA_CONTEXT(avctx);
struct av1_dxva2_picture_context *ctx_pic = h->cur_frame.hwaccel_picture_private;
if (!DXVA_CONTEXT_VALID(avctx, ctx))
return -1;
av_assert0(ctx_pic);
/* Fill up DXVA_PicParams_AV1 */
if (fill_picture_parameters(avctx, ctx, h, &ctx_pic->pp) < 0)
return -1;
ctx_pic->bitstream_size = 0;
ctx_pic->bitstream = NULL;
return 0;
}
static int dxva2_av1_decode_slice(AVCodecContext *avctx,
const uint8_t *buffer,
uint32_t size)
{
const AV1DecContext *h = avctx->priv_data;
const AV1RawFrameHeader *frame_header = h->raw_frame_header;
struct av1_dxva2_picture_context *ctx_pic = h->cur_frame.hwaccel_picture_private;
struct AV1DXVAContext *ctx = avctx->internal->hwaccel_priv_data;
void *tmp;
ctx_pic->tile_count = frame_header->tile_cols * frame_header->tile_rows;
/* too many tiles, exceeding all defined levels in the AV1 spec */
if (ctx_pic->tile_count > MAX_TILES)
return AVERROR(ENOSYS);
/* Shortcut if all tiles are in the same buffer */
if (ctx_pic->tile_count == h->tg_end - h->tg_start + 1) {
ctx_pic->bitstream = (uint8_t *)buffer;
ctx_pic->bitstream_size = size;
for (uint32_t tile_num = 0; tile_num < ctx_pic->tile_count; tile_num++) {
ctx_pic->tiles[tile_num].DataOffset = h->tile_group_info[tile_num].tile_offset;
ctx_pic->tiles[tile_num].DataSize = h->tile_group_info[tile_num].tile_size;
ctx_pic->tiles[tile_num].row = h->tile_group_info[tile_num].tile_row;
ctx_pic->tiles[tile_num].column = h->tile_group_info[tile_num].tile_column;
ctx_pic->tiles[tile_num].anchor_frame = 0xFF;
}
return 0;
}
/* allocate an internal buffer */
tmp = av_fast_realloc(ctx->bitstream_cache, &ctx->bitstream_allocated,
ctx_pic->bitstream_size + size);
if (!tmp) {
return AVERROR(ENOMEM);
}
ctx_pic->bitstream = ctx->bitstream_cache = tmp;
memcpy(ctx_pic->bitstream + ctx_pic->bitstream_size, buffer, size);
for (uint32_t tile_num = h->tg_start; tile_num <= h->tg_end; tile_num++) {
ctx_pic->tiles[tile_num].DataOffset = ctx_pic->bitstream_size + h->tile_group_info[tile_num].tile_offset;
ctx_pic->tiles[tile_num].DataSize = h->tile_group_info[tile_num].tile_size;
ctx_pic->tiles[tile_num].row = h->tile_group_info[tile_num].tile_row;
ctx_pic->tiles[tile_num].column = h->tile_group_info[tile_num].tile_column;
ctx_pic->tiles[tile_num].anchor_frame = 0xFF;
}
ctx_pic->bitstream_size += size;
return 0;
}
static int commit_bitstream_and_slice_buffer(AVCodecContext *avctx,
DECODER_BUFFER_DESC *bs,
DECODER_BUFFER_DESC *sc)
{
const AV1DecContext *h = avctx->priv_data;
AVDXVAContext *ctx = DXVA_CONTEXT(avctx);
struct av1_dxva2_picture_context *ctx_pic = h->cur_frame.hwaccel_picture_private;
void *dxva_data_ptr;
uint8_t *dxva_data;
unsigned dxva_size;
unsigned padding;
unsigned type;
#if CONFIG_D3D11VA
if (ff_dxva2_is_d3d11(avctx)) {
type = D3D11_VIDEO_DECODER_BUFFER_BITSTREAM;
if (FAILED(ID3D11VideoContext_GetDecoderBuffer(D3D11VA_CONTEXT(ctx)->video_context,
D3D11VA_CONTEXT(ctx)->decoder,
type,
&dxva_size, &dxva_data_ptr)))
return -1;
}
#endif
#if CONFIG_DXVA2
if (avctx->pix_fmt == AV_PIX_FMT_DXVA2_VLD) {
type = DXVA2_BitStreamDateBufferType;
if (FAILED(IDirectXVideoDecoder_GetBuffer(DXVA2_CONTEXT(ctx)->decoder,
type,
&dxva_data_ptr, &dxva_size)))
return -1;
}
#endif
dxva_data = dxva_data_ptr;
if (ctx_pic->bitstream_size > dxva_size) {
av_log(avctx, AV_LOG_ERROR, "Bitstream size exceeds hardware buffer");
return -1;
}
memcpy(dxva_data, ctx_pic->bitstream, ctx_pic->bitstream_size);
padding = FFMIN(128 - ((ctx_pic->bitstream_size) & 127), dxva_size - ctx_pic->bitstream_size);
if (padding > 0) {
memset(dxva_data + ctx_pic->bitstream_size, 0, padding);
ctx_pic->bitstream_size += padding;
}
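    /* The bitstream buffer is zero-padded up to the next 128-byte boundary
       (a hedged reading of the DXVA buffer requirement): e.g. a 1000-byte
       bitstream gets 24 padding bytes for a total of 1024. Note that an
       already-aligned size receives a further full 128 bytes, since
       128 - (size & 127) is then 128. */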
#if CONFIG_D3D11VA
if (ff_dxva2_is_d3d11(avctx))
if (FAILED(ID3D11VideoContext_ReleaseDecoderBuffer(D3D11VA_CONTEXT(ctx)->video_context, D3D11VA_CONTEXT(ctx)->decoder, type)))
return -1;
#endif
#if CONFIG_DXVA2
if (avctx->pix_fmt == AV_PIX_FMT_DXVA2_VLD)
if (FAILED(IDirectXVideoDecoder_ReleaseBuffer(DXVA2_CONTEXT(ctx)->decoder, type)))
return -1;
#endif
#if CONFIG_D3D11VA
if (ff_dxva2_is_d3d11(avctx)) {
D3D11_VIDEO_DECODER_BUFFER_DESC *dsc11 = bs;
memset(dsc11, 0, sizeof(*dsc11));
dsc11->BufferType = type;
dsc11->DataSize = ctx_pic->bitstream_size;
dsc11->NumMBsInBuffer = 0;
type = D3D11_VIDEO_DECODER_BUFFER_SLICE_CONTROL;
}
#endif
#if CONFIG_DXVA2
if (avctx->pix_fmt == AV_PIX_FMT_DXVA2_VLD) {
DXVA2_DecodeBufferDesc *dsc2 = bs;
memset(dsc2, 0, sizeof(*dsc2));
dsc2->CompressedBufferType = type;
dsc2->DataSize = ctx_pic->bitstream_size;
dsc2->NumMBsInBuffer = 0;
type = DXVA2_SliceControlBufferType;
}
#endif
return ff_dxva2_commit_buffer(avctx, ctx, sc, type,
ctx_pic->tiles, sizeof(*ctx_pic->tiles) * ctx_pic->tile_count, 0);
}
static int dxva2_av1_end_frame(AVCodecContext *avctx)
{
const AV1DecContext *h = avctx->priv_data;
struct av1_dxva2_picture_context *ctx_pic = h->cur_frame.hwaccel_picture_private;
int ret;
if (ctx_pic->bitstream_size <= 0)
return -1;
ret = ff_dxva2_common_end_frame(avctx, h->cur_frame.tf.f,
&ctx_pic->pp, sizeof(ctx_pic->pp),
NULL, 0,
commit_bitstream_and_slice_buffer);
return ret;
}
static int dxva2_av1_uninit(AVCodecContext *avctx)
{
struct AV1DXVAContext *ctx = avctx->internal->hwaccel_priv_data;
av_freep(&ctx->bitstream_cache);
ctx->bitstream_allocated = 0;
return ff_dxva2_decode_uninit(avctx);
}
#if CONFIG_AV1_DXVA2_HWACCEL
const AVHWAccel ff_av1_dxva2_hwaccel = {
.name = "av1_dxva2",
.type = AVMEDIA_TYPE_VIDEO,
.id = AV_CODEC_ID_AV1,
.pix_fmt = AV_PIX_FMT_DXVA2_VLD,
.init = ff_dxva2_decode_init,
.uninit = dxva2_av1_uninit,
.start_frame = dxva2_av1_start_frame,
.decode_slice = dxva2_av1_decode_slice,
.end_frame = dxva2_av1_end_frame,
.frame_params = ff_dxva2_common_frame_params,
.frame_priv_data_size = sizeof(struct av1_dxva2_picture_context),
.priv_data_size = sizeof(struct AV1DXVAContext),
};
#endif
#if CONFIG_AV1_D3D11VA_HWACCEL
const AVHWAccel ff_av1_d3d11va_hwaccel = {
.name = "av1_d3d11va",
.type = AVMEDIA_TYPE_VIDEO,
.id = AV_CODEC_ID_AV1,
.pix_fmt = AV_PIX_FMT_D3D11VA_VLD,
.init = ff_dxva2_decode_init,
.uninit = dxva2_av1_uninit,
.start_frame = dxva2_av1_start_frame,
.decode_slice = dxva2_av1_decode_slice,
.end_frame = dxva2_av1_end_frame,
.frame_params = ff_dxva2_common_frame_params,
.frame_priv_data_size = sizeof(struct av1_dxva2_picture_context),
.priv_data_size = sizeof(struct AV1DXVAContext),
};
#endif
#if CONFIG_AV1_D3D11VA2_HWACCEL
const AVHWAccel ff_av1_d3d11va2_hwaccel = {
.name = "av1_d3d11va2",
.type = AVMEDIA_TYPE_VIDEO,
.id = AV_CODEC_ID_AV1,
.pix_fmt = AV_PIX_FMT_D3D11,
.init = ff_dxva2_decode_init,
.uninit = dxva2_av1_uninit,
.start_frame = dxva2_av1_start_frame,
.decode_slice = dxva2_av1_decode_slice,
.end_frame = dxva2_av1_end_frame,
.frame_params = ff_dxva2_common_frame_params,
.frame_priv_data_size = sizeof(struct av1_dxva2_picture_context),
.priv_data_size = sizeof(struct AV1DXVAContext),
};
#endif

View File

@@ -1,198 +0,0 @@
/*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include "dynamic_hdr10_plus.h"
#include "get_bits.h"
static const int64_t luminance_den = 1;
static const int32_t peak_luminance_den = 15;
static const int64_t rgb_den = 100000;
static const int32_t fraction_pixel_den = 1000;
static const int32_t knee_point_den = 4095;
static const int32_t bezier_anchor_den = 1023;
static const int32_t saturation_weight_den = 8;
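/* These denominators turn the raw fixed-point fields into AVRationals: a
   12-bit knee point v is stored as v/4095 in [0,1], a 10-bit Bezier anchor
   as v/1023, a 4-bit peak-luminance cell as v/15, and the 6-bit saturation
   weight as v/8. luminance_den = 1 means the 27-bit luminance value is kept
   as-is, with its unit interpretation left to SMPTE ST 2094-40. */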
int ff_parse_itu_t_t35_to_dynamic_hdr10_plus(AVDynamicHDRPlus *s, const uint8_t *data,
int size)
{
GetBitContext gbc, *gb = &gbc;
int ret;
if (!s)
return AVERROR(ENOMEM);
ret = init_get_bits8(gb, data, size);
if (ret < 0)
return ret;
s->application_version = get_bits(gb, 8);
if (get_bits_left(gb) < 2)
return AVERROR_INVALIDDATA;
s->num_windows = get_bits(gb, 2);
if (s->num_windows < 1 || s->num_windows > 3) {
return AVERROR_INVALIDDATA;
}
if (get_bits_left(gb) < ((19 * 8 + 1) * (s->num_windows - 1)))
return AVERROR_INVALIDDATA;
for (int w = 1; w < s->num_windows; w++) {
// The corners are set to absolute coordinates here. They should be
// converted to the relative coordinates (in [0, 1]) in the decoder.
AVHDRPlusColorTransformParams *params = &s->params[w];
params->window_upper_left_corner_x =
(AVRational){get_bits(gb, 16), 1};
params->window_upper_left_corner_y =
(AVRational){get_bits(gb, 16), 1};
params->window_lower_right_corner_x =
(AVRational){get_bits(gb, 16), 1};
params->window_lower_right_corner_y =
(AVRational){get_bits(gb, 16), 1};
params->center_of_ellipse_x = get_bits(gb, 16);
params->center_of_ellipse_y = get_bits(gb, 16);
params->rotation_angle = get_bits(gb, 8);
params->semimajor_axis_internal_ellipse = get_bits(gb, 16);
params->semimajor_axis_external_ellipse = get_bits(gb, 16);
params->semiminor_axis_external_ellipse = get_bits(gb, 16);
params->overlap_process_option = get_bits1(gb);
}
if (get_bits_left(gb) < 28)
return AVERROR(EINVAL);
s->targeted_system_display_maximum_luminance =
(AVRational){get_bits_long(gb, 27), luminance_den};
s->targeted_system_display_actual_peak_luminance_flag = get_bits1(gb);
if (s->targeted_system_display_actual_peak_luminance_flag) {
int rows, cols;
if (get_bits_left(gb) < 10)
return AVERROR(EINVAL);
rows = get_bits(gb, 5);
cols = get_bits(gb, 5);
if (((rows < 2) || (rows > 25)) || ((cols < 2) || (cols > 25))) {
return AVERROR_INVALIDDATA;
}
s->num_rows_targeted_system_display_actual_peak_luminance = rows;
s->num_cols_targeted_system_display_actual_peak_luminance = cols;
if (get_bits_left(gb) < (rows * cols * 4))
return AVERROR(EINVAL);
for (int i = 0; i < rows; i++) {
for (int j = 0; j < cols; j++) {
s->targeted_system_display_actual_peak_luminance[i][j] =
(AVRational){get_bits(gb, 4), peak_luminance_den};
}
}
}
for (int w = 0; w < s->num_windows; w++) {
AVHDRPlusColorTransformParams *params = &s->params[w];
if (get_bits_left(gb) < (3 * 17 + 17 + 4))
return AVERROR(EINVAL);
for (int i = 0; i < 3; i++) {
params->maxscl[i] =
(AVRational){get_bits(gb, 17), rgb_den};
}
params->average_maxrgb =
(AVRational){get_bits(gb, 17), rgb_den};
params->num_distribution_maxrgb_percentiles = get_bits(gb, 4);
if (get_bits_left(gb) <
(params->num_distribution_maxrgb_percentiles * 24))
return AVERROR(EINVAL);
for (int i = 0; i < params->num_distribution_maxrgb_percentiles; i++) {
params->distribution_maxrgb[i].percentage = get_bits(gb, 7);
params->distribution_maxrgb[i].percentile =
(AVRational){get_bits(gb, 17), rgb_den};
}
if (get_bits_left(gb) < 10)
return AVERROR(EINVAL);
params->fraction_bright_pixels = (AVRational){get_bits(gb, 10), fraction_pixel_den};
}
if (get_bits_left(gb) < 1)
return AVERROR(EINVAL);
s->mastering_display_actual_peak_luminance_flag = get_bits1(gb);
if (s->mastering_display_actual_peak_luminance_flag) {
int rows, cols;
if (get_bits_left(gb) < 10)
return AVERROR(EINVAL);
rows = get_bits(gb, 5);
cols = get_bits(gb, 5);
if (((rows < 2) || (rows > 25)) || ((cols < 2) || (cols > 25))) {
return AVERROR_INVALIDDATA;
}
s->num_rows_mastering_display_actual_peak_luminance = rows;
s->num_cols_mastering_display_actual_peak_luminance = cols;
if (get_bits_left(gb) < (rows * cols * 4))
return AVERROR(EINVAL);
for (int i = 0; i < rows; i++) {
for (int j = 0; j < cols; j++) {
s->mastering_display_actual_peak_luminance[i][j] =
(AVRational){get_bits(gb, 4), peak_luminance_den};
}
}
}
for (int w = 0; w < s->num_windows; w++) {
AVHDRPlusColorTransformParams *params = &s->params[w];
if (get_bits_left(gb) < 1)
return AVERROR(EINVAL);
params->tone_mapping_flag = get_bits1(gb);
if (params->tone_mapping_flag) {
if (get_bits_left(gb) < 28)
return AVERROR(EINVAL);
params->knee_point_x =
(AVRational){get_bits(gb, 12), knee_point_den};
params->knee_point_y =
(AVRational){get_bits(gb, 12), knee_point_den};
params->num_bezier_curve_anchors = get_bits(gb, 4);
if (get_bits_left(gb) < (params->num_bezier_curve_anchors * 10))
return AVERROR(EINVAL);
for (int i = 0; i < params->num_bezier_curve_anchors; i++) {
params->bezier_curve_anchors[i] =
(AVRational){get_bits(gb, 10), bezier_anchor_den};
}
}
if (get_bits_left(gb) < 1)
return AVERROR(EINVAL);
params->color_saturation_mapping_flag = get_bits1(gb);
if (params->color_saturation_mapping_flag) {
if (get_bits_left(gb) < 6)
return AVERROR(EINVAL);
params->color_saturation_weight =
(AVRational){get_bits(gb, 6), saturation_weight_den};
}
}
return 0;
}

View File

@@ -1,35 +0,0 @@
/*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#ifndef AVCODEC_DYNAMIC_HDR10_PLUS_H
#define AVCODEC_DYNAMIC_HDR10_PLUS_H
#include "libavutil/hdr_dynamic_metadata.h"
/**
 * Parse a user-data-registered ITU-T T.35 payload into an AVDynamicHDRPlus structure.
 * @param s Pointer to the AVDynamicHDRPlus structure to be filled.
 * @param data The byte array containing the raw ITU-T T.35 data.
 * @param size Size of the data array in bytes.
 *
 * @return 0 on success. Otherwise, returns the appropriate AVERROR.
*/
int ff_parse_itu_t_t35_to_dynamic_hdr10_plus(AVDynamicHDRPlus *s, const uint8_t *data,
int size);
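/* Typical usage (a hedged sketch; error handling omitted): decoders attach
 * the parsed metadata as frame side data, e.g.
 *
 *     AVDynamicHDRPlus *hdr = av_dynamic_hdr_plus_create_side_data(frame);
 *     if (hdr)
 *         ff_parse_itu_t_t35_to_dynamic_hdr10_plus(hdr, payload, size);
 *
 * where payload points past the ITU-T T.35 country and provider codes. */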
#endif /* AVCODEC_DYNAMIC_HDR10_PLUS_H */

View File

@@ -1,53 +0,0 @@
/*
* generic encoding-related code
*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#ifndef AVCODEC_ENCODE_H
#define AVCODEC_ENCODE_H
#include "libavutil/frame.h"
#include "avcodec.h"
#include "packet.h"
/**
* Called by encoders to get the next frame for encoding.
*
* @param frame An empty frame to be filled with data.
* @return 0 if a new reference has been successfully written to frame
* AVERROR(EAGAIN) if no data is currently available
* AVERROR_EOF if end of stream has been reached, so no more data
* will be available
*/
int ff_encode_get_frame(AVCodecContext *avctx, AVFrame *frame);
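/* A hedged sketch of a receive_packet-style encoder driving this helper
 * (all names below are illustrative, not an existing codec):
 *
 *     static int foo_receive_packet(AVCodecContext *avctx, AVPacket *pkt)
 *     {
 *         FooContext *ctx = avctx->priv_data;
 *         int ret = ff_encode_get_frame(avctx, ctx->frame);
 *         if (ret < 0)
 *             return ret; // propagates AVERROR(EAGAIN) / AVERROR_EOF
 *         ret = encode_one_frame(avctx, ctx->frame, pkt); // hypothetical
 *         av_frame_unref(ctx->frame);
 *         return ret;
 *     }
 */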
/**
* Get a buffer for a packet. This is a wrapper around
 * AVCodecContext.get_encode_buffer() and should be used instead of calling get_encode_buffer()
* directly.
*/
int ff_get_encode_buffer(AVCodecContext *avctx, AVPacket *avpkt, int64_t size, int flags);
/*
* Perform encoder initialization and validation.
* Called when opening the encoder, before the AVCodec.init() call.
*/
int ff_encode_preinit(AVCodecContext *avctx);
#endif /* AVCODEC_ENCODE_H */

View File

@@ -1,552 +0,0 @@
/*
* Copyright (c) 2021 Paul B Mahol
*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
/**
* @file
* OpenEXR encoder
*/
#include <float.h>
#include <zlib.h>
#include "libavutil/avassert.h"
#include "libavutil/opt.h"
#include "libavutil/intreadwrite.h"
#include "libavutil/imgutils.h"
#include "libavutil/pixdesc.h"
#include "avcodec.h"
#include "bytestream.h"
#include "internal.h"
#include "float2half.h"
enum ExrCompr {
EXR_RAW,
EXR_RLE,
EXR_ZIP1,
EXR_ZIP16,
EXR_NBCOMPR,
};
enum ExrPixelType {
EXR_UINT,
EXR_HALF,
EXR_FLOAT,
EXR_UNKNOWN,
};
static const char abgr_chlist[4] = { 'A', 'B', 'G', 'R' };
static const char bgr_chlist[4] = { 'B', 'G', 'R', 'A' };
static const uint8_t gbra_order[4] = { 3, 1, 0, 2 };
static const uint8_t gbr_order[4] = { 1, 0, 2, 0 };
typedef struct EXRScanlineData {
uint8_t *compressed_data;
unsigned int compressed_size;
uint8_t *uncompressed_data;
unsigned int uncompressed_size;
uint8_t *tmp;
unsigned int tmp_size;
int64_t actual_size;
} EXRScanlineData;
typedef struct EXRContext {
const AVClass *class;
int compression;
int pixel_type;
int planes;
int nb_scanlines;
int scanline_height;
float gamma;
const char *ch_names;
const uint8_t *ch_order;
PutByteContext pb;
EXRScanlineData *scanline;
uint16_t basetable[512];
uint8_t shifttable[512];
} EXRContext;
static int encode_init(AVCodecContext *avctx)
{
EXRContext *s = avctx->priv_data;
float2half_tables(s->basetable, s->shifttable);
switch (avctx->pix_fmt) {
case AV_PIX_FMT_GBRPF32:
s->planes = 3;
s->ch_names = bgr_chlist;
s->ch_order = gbr_order;
break;
case AV_PIX_FMT_GBRAPF32:
s->planes = 4;
s->ch_names = abgr_chlist;
s->ch_order = gbra_order;
break;
default:
av_assert0(0);
}
switch (s->compression) {
case EXR_RAW:
case EXR_RLE:
case EXR_ZIP1:
s->scanline_height = 1;
s->nb_scanlines = avctx->height;
break;
case EXR_ZIP16:
s->scanline_height = 16;
s->nb_scanlines = (avctx->height + s->scanline_height - 1) / s->scanline_height;
break;
default:
av_assert0(0);
}
s->scanline = av_calloc(s->nb_scanlines, sizeof(*s->scanline));
if (!s->scanline)
return AVERROR(ENOMEM);
return 0;
}
static int encode_close(AVCodecContext *avctx)
{
EXRContext *s = avctx->priv_data;
for (int y = 0; y < s->nb_scanlines && s->scanline; y++) {
EXRScanlineData *scanline = &s->scanline[y];
av_freep(&scanline->tmp);
av_freep(&scanline->compressed_data);
av_freep(&scanline->uncompressed_data);
}
av_freep(&s->scanline);
return 0;
}
static void reorder_pixels(uint8_t *dst, const uint8_t *src, ptrdiff_t size)
{
const ptrdiff_t half_size = (size + 1) / 2;
uint8_t *t1 = dst;
uint8_t *t2 = dst + half_size;
for (ptrdiff_t i = 0; i < half_size; i++) {
t1[i] = *(src++);
t2[i] = *(src++);
}
}
static void predictor(uint8_t *src, ptrdiff_t size)
{
int p = src[0];
for (ptrdiff_t i = 1; i < size; i++) {
int d = src[i] - p + 384;
p = src[i];
src[i] = d;
}
}
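/* Worked example of the ZIP/RLE preprocessing above (hedged): for input
   bytes 11 22 33 44, reorder_pixels() de-interleaves even/odd bytes into
   two halves, giving 11 33 22 44; predictor() then replaces every byte
   after the first with (delta + 384) & 0xFF, so 10 12 11 becomes 10 82 7F
   (0x12 - 0x10 + 384 = 0x182 -> 0x82, 0x11 - 0x12 + 384 = 0x17F -> 0x7F). */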
static int64_t rle_compress(uint8_t *out, int64_t out_size,
const uint8_t *in, int64_t in_size)
{
int64_t i = 0, o = 0, run = 1, copy = 0;
while (i < in_size) {
while (i + run < in_size && in[i] == in[i + run] && run < 128)
run++;
if (run >= 3) {
if (o + 2 >= out_size)
return -1;
out[o++] = run - 1;
out[o++] = in[i];
i += run;
} else {
if (i + run < in_size)
copy += run;
while (i + copy < in_size && copy < 127 && in[i + copy] != in[i + copy - 1])
copy++;
if (o + 1 + copy >= out_size)
return -1;
out[o++] = -copy;
for (int x = 0; x < copy; x++)
out[o + x] = in[i + x];
o += copy;
i += copy;
copy = 0;
}
run = 1;
}
return o;
}
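/* The EXR RLE scheme emitted above: a run of 3-128 identical bytes becomes
   (run - 1) followed by the byte; a literal stretch of up to 127 distinct
   bytes becomes its negated length (as a signed byte) followed by the
   bytes. Worked example (hedged): "AAAAB" compresses to 03 'A' FF 'B' --
   a run of four 'A's, then a one-byte literal holding 'B'. */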
static int encode_scanline_rle(EXRContext *s, const AVFrame *frame)
{
const int64_t element_size = s->pixel_type == EXR_HALF ? 2LL : 4LL;
for (int y = 0; y < frame->height; y++) {
EXRScanlineData *scanline = &s->scanline[y];
int64_t tmp_size = element_size * s->planes * frame->width;
int64_t max_compressed_size = tmp_size * 3 / 2;
av_fast_padded_malloc(&scanline->uncompressed_data, &scanline->uncompressed_size, tmp_size);
if (!scanline->uncompressed_data)
return AVERROR(ENOMEM);
av_fast_padded_malloc(&scanline->tmp, &scanline->tmp_size, tmp_size);
if (!scanline->tmp)
return AVERROR(ENOMEM);
av_fast_padded_malloc(&scanline->compressed_data, &scanline->compressed_size, max_compressed_size);
if (!scanline->compressed_data)
return AVERROR(ENOMEM);
switch (s->pixel_type) {
case EXR_FLOAT:
for (int p = 0; p < s->planes; p++) {
int ch = s->ch_order[p];
memcpy(scanline->uncompressed_data + frame->width * 4 * p,
frame->data[ch] + y * frame->linesize[ch], frame->width * 4);
}
break;
case EXR_HALF:
for (int p = 0; p < s->planes; p++) {
int ch = s->ch_order[p];
uint16_t *dst = (uint16_t *)(scanline->uncompressed_data + frame->width * 2 * p);
uint32_t *src = (uint32_t *)(frame->data[ch] + y * frame->linesize[ch]);
for (int x = 0; x < frame->width; x++)
dst[x] = float2half(src[x], s->basetable, s->shifttable);
}
break;
}
reorder_pixels(scanline->tmp, scanline->uncompressed_data, tmp_size);
predictor(scanline->tmp, tmp_size);
scanline->actual_size = rle_compress(scanline->compressed_data,
max_compressed_size,
scanline->tmp, tmp_size);
if (scanline->actual_size <= 0 || scanline->actual_size >= tmp_size) {
FFSWAP(uint8_t *, scanline->uncompressed_data, scanline->compressed_data);
FFSWAP(int, scanline->uncompressed_size, scanline->compressed_size);
scanline->actual_size = tmp_size;
}
}
return 0;
}
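/* ZIP path: same preprocessing as RLE, but chunks of 1 (ZIP1) or 16 (ZIP16)
 * scanlines are deflated with zlib's compress(). As with RLE, incompressible
 * chunks are stored raw. */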
static int encode_scanline_zip(EXRContext *s, const AVFrame *frame)
{
const int64_t element_size = s->pixel_type == EXR_HALF ? 2LL : 4LL;
for (int y = 0; y < s->nb_scanlines; y++) {
EXRScanlineData *scanline = &s->scanline[y];
const int scanline_height = FFMIN(s->scanline_height, frame->height - y * s->scanline_height);
int64_t tmp_size = element_size * s->planes * frame->width * scanline_height;
int64_t max_compressed_size = tmp_size * 3 / 2;
unsigned long actual_size, source_size;
av_fast_padded_malloc(&scanline->uncompressed_data, &scanline->uncompressed_size, tmp_size);
if (!scanline->uncompressed_data)
return AVERROR(ENOMEM);
av_fast_padded_malloc(&scanline->tmp, &scanline->tmp_size, tmp_size);
if (!scanline->tmp)
return AVERROR(ENOMEM);
av_fast_padded_malloc(&scanline->compressed_data, &scanline->compressed_size, max_compressed_size);
if (!scanline->compressed_data)
return AVERROR(ENOMEM);
switch (s->pixel_type) {
case EXR_FLOAT:
for (int l = 0; l < scanline_height; l++) {
const int scanline_size = frame->width * 4 * s->planes;
for (int p = 0; p < s->planes; p++) {
int ch = s->ch_order[p];
memcpy(scanline->uncompressed_data + scanline_size * l + p * frame->width * 4,
frame->data[ch] + (y * s->scanline_height + l) * frame->linesize[ch],
frame->width * 4);
}
}
break;
case EXR_HALF:
for (int l = 0; l < scanline_height; l++) {
const int scanline_size = frame->width * 2 * s->planes;
for (int p = 0; p < s->planes; p++) {
int ch = s->ch_order[p];
uint16_t *dst = (uint16_t *)(scanline->uncompressed_data + scanline_size * l + p * frame->width * 2);
uint32_t *src = (uint32_t *)(frame->data[ch] + (y * s->scanline_height + l) * frame->linesize[ch]);
for (int x = 0; x < frame->width; x++)
dst[x] = float2half(src[x], s->basetable, s->shifttable);
}
}
break;
}
reorder_pixels(scanline->tmp, scanline->uncompressed_data, tmp_size);
predictor(scanline->tmp, tmp_size);
source_size = tmp_size;
actual_size = max_compressed_size;
compress(scanline->compressed_data, &actual_size,
scanline->tmp, source_size);
scanline->actual_size = actual_size;
if (scanline->actual_size >= tmp_size) {
FFSWAP(uint8_t *, scanline->uncompressed_data, scanline->compressed_data);
FFSWAP(int, scanline->uncompressed_size, scanline->compressed_size);
scanline->actual_size = tmp_size;
}
}
return 0;
}
static int encode_frame(AVCodecContext *avctx, AVPacket *pkt,
const AVFrame *frame, int *got_packet)
{
EXRContext *s = avctx->priv_data;
PutByteContext *pb = &s->pb;
int64_t offset;
int ret;
int64_t out_size = 2048LL + avctx->height * 16LL +
av_image_get_buffer_size(avctx->pix_fmt,
avctx->width,
avctx->height, 64) * 3LL / 2;
if ((ret = ff_alloc_packet2(avctx, pkt, out_size, out_size)) < 0)
return ret;
bytestream2_init_writer(pb, pkt->data, pkt->size);
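    /* OpenEXR file header: magic number 20000630 (bytes 76 2f 31 01),
     * format version 2, and three zero flag bytes. */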
bytestream2_put_le32(pb, 20000630);
bytestream2_put_byte(pb, 2);
bytestream2_put_le24(pb, 0);
bytestream2_put_buffer(pb, "channels\0chlist\0", 16);
bytestream2_put_le32(pb, s->planes * 18 + 1);
for (int p = 0; p < s->planes; p++) {
bytestream2_put_byte(pb, s->ch_names[p]);
bytestream2_put_byte(pb, 0);
bytestream2_put_le32(pb, s->pixel_type);
bytestream2_put_le32(pb, 0);
bytestream2_put_le32(pb, 1);
bytestream2_put_le32(pb, 1);
}
bytestream2_put_byte(pb, 0);
bytestream2_put_buffer(pb, "compression\0compression\0", 24);
bytestream2_put_le32(pb, 1);
bytestream2_put_byte(pb, s->compression);
bytestream2_put_buffer(pb, "dataWindow\0box2i\0", 17);
bytestream2_put_le32(pb, 16);
bytestream2_put_le32(pb, 0);
bytestream2_put_le32(pb, 0);
bytestream2_put_le32(pb, avctx->width - 1);
bytestream2_put_le32(pb, avctx->height - 1);
bytestream2_put_buffer(pb, "displayWindow\0box2i\0", 20);
bytestream2_put_le32(pb, 16);
bytestream2_put_le32(pb, 0);
bytestream2_put_le32(pb, 0);
bytestream2_put_le32(pb, avctx->width - 1);
bytestream2_put_le32(pb, avctx->height - 1);
bytestream2_put_buffer(pb, "lineOrder\0lineOrder\0", 20);
bytestream2_put_le32(pb, 1);
bytestream2_put_byte(pb, 0);
bytestream2_put_buffer(pb, "screenWindowCenter\0v2f\0", 23);
bytestream2_put_le32(pb, 8);
bytestream2_put_le64(pb, 0);
bytestream2_put_buffer(pb, "screenWindowWidth\0float\0", 24);
bytestream2_put_le32(pb, 4);
bytestream2_put_le32(pb, av_float2int(1.f));
if (avctx->sample_aspect_ratio.num && avctx->sample_aspect_ratio.den) {
bytestream2_put_buffer(pb, "pixelAspectRatio\0float\0", 23);
bytestream2_put_le32(pb, 4);
bytestream2_put_le32(pb, av_float2int(av_q2d(avctx->sample_aspect_ratio)));
}
if (avctx->framerate.num && avctx->framerate.den) {
bytestream2_put_buffer(pb, "framesPerSecond\0rational\0", 25);
bytestream2_put_le32(pb, 8);
bytestream2_put_le32(pb, avctx->framerate.num);
bytestream2_put_le32(pb, avctx->framerate.den);
}
bytestream2_put_buffer(pb, "gamma\0float\0", 12);
bytestream2_put_le32(pb, 4);
bytestream2_put_le32(pb, av_float2int(s->gamma));
bytestream2_put_buffer(pb, "writer\0string\0", 14);
bytestream2_put_le32(pb, 4);
bytestream2_put_buffer(pb, "lavc", 4);
bytestream2_put_byte(pb, 0);
switch (s->compression) {
case EXR_RAW:
/* nothing to do */
break;
    case EXR_RLE:
        ret = encode_scanline_rle(s, frame);
        break;
    case EXR_ZIP16:
    case EXR_ZIP1:
        ret = encode_scanline_zip(s, frame);
        break;
    default:
        av_assert0(0);
    }
    if (ret < 0)
        return ret;
switch (s->compression) {
case EXR_RAW:
offset = bytestream2_tell_p(pb) + avctx->height * 8LL;
if (s->pixel_type == EXR_FLOAT) {
for (int y = 0; y < avctx->height; y++) {
bytestream2_put_le64(pb, offset);
offset += avctx->width * s->planes * 4 + 8;
}
for (int y = 0; y < avctx->height; y++) {
bytestream2_put_le32(pb, y);
bytestream2_put_le32(pb, s->planes * avctx->width * 4);
for (int p = 0; p < s->planes; p++) {
int ch = s->ch_order[p];
bytestream2_put_buffer(pb, frame->data[ch] + y * frame->linesize[ch],
avctx->width * 4);
}
}
} else {
for (int y = 0; y < avctx->height; y++) {
bytestream2_put_le64(pb, offset);
offset += avctx->width * s->planes * 2 + 8;
}
for (int y = 0; y < avctx->height; y++) {
bytestream2_put_le32(pb, y);
bytestream2_put_le32(pb, s->planes * avctx->width * 2);
for (int p = 0; p < s->planes; p++) {
int ch = s->ch_order[p];
uint32_t *src = (uint32_t *)(frame->data[ch] + y * frame->linesize[ch]);
for (int x = 0; x < frame->width; x++)
bytestream2_put_le16(pb, float2half(src[x], s->basetable, s->shifttable));
}
}
}
break;
case EXR_ZIP16:
case EXR_ZIP1:
case EXR_RLE:
offset = bytestream2_tell_p(pb) + s->nb_scanlines * 8LL;
for (int y = 0; y < s->nb_scanlines; y++) {
EXRScanlineData *scanline = &s->scanline[y];
bytestream2_put_le64(pb, offset);
offset += scanline->actual_size + 8;
}
for (int y = 0; y < s->nb_scanlines; y++) {
EXRScanlineData *scanline = &s->scanline[y];
bytestream2_put_le32(pb, y * s->scanline_height);
bytestream2_put_le32(pb, scanline->actual_size);
bytestream2_put_buffer(pb, scanline->compressed_data,
scanline->actual_size);
}
break;
default:
av_assert0(0);
}
av_shrink_packet(pkt, bytestream2_tell_p(pb));
pkt->flags |= AV_PKT_FLAG_KEY;
*got_packet = 1;
return 0;
}
#define OFFSET(x) offsetof(EXRContext, x)
#define VE AV_OPT_FLAG_VIDEO_PARAM | AV_OPT_FLAG_ENCODING_PARAM
static const AVOption options[] = {
{ "compression", "set compression type", OFFSET(compression), AV_OPT_TYPE_INT, {.i64=0}, 0, EXR_NBCOMPR-1, VE, "compr" },
{ "none", "none", 0, AV_OPT_TYPE_CONST, {.i64=EXR_RAW}, 0, 0, VE, "compr" },
{ "rle" , "RLE", 0, AV_OPT_TYPE_CONST, {.i64=EXR_RLE}, 0, 0, VE, "compr" },
{ "zip1", "ZIP1", 0, AV_OPT_TYPE_CONST, {.i64=EXR_ZIP1}, 0, 0, VE, "compr" },
{ "zip16", "ZIP16", 0, AV_OPT_TYPE_CONST, {.i64=EXR_ZIP16}, 0, 0, VE, "compr" },
{ "format", "set pixel type", OFFSET(pixel_type), AV_OPT_TYPE_INT, {.i64=EXR_FLOAT}, EXR_HALF, EXR_UNKNOWN-1, VE, "pixel" },
{ "half" , NULL, 0, AV_OPT_TYPE_CONST, {.i64=EXR_HALF}, 0, 0, VE, "pixel" },
{ "float", NULL, 0, AV_OPT_TYPE_CONST, {.i64=EXR_FLOAT}, 0, 0, VE, "pixel" },
{ "gamma", "set gamma", OFFSET(gamma), AV_OPT_TYPE_FLOAT, {.dbl=1.f}, 0.001, FLT_MAX, VE },
{ NULL},
};
static const AVClass exr_class = {
.class_name = "exr",
.item_name = av_default_item_name,
.option = options,
.version = LIBAVUTIL_VERSION_INT,
};
AVCodec ff_exr_encoder = {
.name = "exr",
.long_name = NULL_IF_CONFIG_SMALL("OpenEXR image"),
.priv_data_size = sizeof(EXRContext),
.priv_class = &exr_class,
.type = AVMEDIA_TYPE_VIDEO,
.id = AV_CODEC_ID_EXR,
.init = encode_init,
.encode2 = encode_frame,
.close = encode_close,
.capabilities = AV_CODEC_CAP_FRAME_THREADS,
.pix_fmts = (const enum AVPixelFormat[]) {
AV_PIX_FMT_GBRPF32,
AV_PIX_FMT_GBRAPF32,
AV_PIX_FMT_NONE },
};

View File

@@ -1,202 +0,0 @@
/*
* MOFLEX Fast Audio decoder
* Copyright (c) 2015-2016 Florian Nouwt
* Copyright (c) 2017 Adib Surani
* Copyright (c) 2020 Paul B Mahol
*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include "libavutil/intreadwrite.h"
#include "avcodec.h"
#include "bytestream.h"
#include "internal.h"
#include "mathops.h"
typedef struct ChannelItems {
float f[8];
float last;
} ChannelItems;
typedef struct FastAudioContext {
float table[8][64];
ChannelItems *ch;
} FastAudioContext;
static av_cold int fastaudio_init(AVCodecContext *avctx)
{
FastAudioContext *s = avctx->priv_data;
avctx->sample_fmt = AV_SAMPLE_FMT_FLTP;
for (int i = 0; i < 8; i++)
s->table[0][i] = (i - 159.5f) / 160.f;
for (int i = 0; i < 11; i++)
s->table[0][i + 8] = (i - 37.5f) / 40.f;
for (int i = 0; i < 27; i++)
s->table[0][i + 8 + 11] = (i - 13.f) / 20.f;
for (int i = 0; i < 11; i++)
s->table[0][i + 8 + 11 + 27] = (i + 27.5f) / 40.f;
for (int i = 0; i < 7; i++)
s->table[0][i + 8 + 11 + 27 + 11] = (i + 152.5f) / 160.f;
memcpy(s->table[1], s->table[0], sizeof(s->table[0]));
for (int i = 0; i < 7; i++)
s->table[2][i] = (i - 33.5f) / 40.f;
for (int i = 0; i < 25; i++)
s->table[2][i + 7] = (i - 13.f) / 20.f;
for (int i = 0; i < 32; i++)
s->table[3][i] = -s->table[2][31 - i];
for (int i = 0; i < 16; i++)
s->table[4][i] = i * 0.22f / 3.f - 0.6f;
for (int i = 0; i < 16; i++)
s->table[5][i] = i * 0.20f / 3.f - 0.3f;
for (int i = 0; i < 8; i++)
s->table[6][i] = i * 0.36f / 3.f - 0.4f;
for (int i = 0; i < 8; i++)
s->table[7][i] = i * 0.34f / 3.f - 0.2f;
s->ch = av_calloc(avctx->channels, sizeof(*s->ch));
if (!s->ch)
return AVERROR(ENOMEM);
return 0;
}
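/* Extract the next `bits` bits, MSB first, from a sequence of 32-bit words;
 * *ppos tracks the absolute bit position within the subframe. */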
static int read_bits(int bits, int *ppos, unsigned *src)
{
int r, pos;
pos = *ppos;
pos += bits;
r = src[(pos - 1) / 32] >> ((-pos) & 31);
*ppos = pos;
return r & ((1 << bits) - 1);
}
static const uint8_t bits[8] = { 6, 6, 5, 5, 4, 0, 3, 3, };
static void set_sample(int i, int j, int v, float *result, int *pads, float value)
{
result[i * 64 + pads[i] + j * 3] = value * (2 * v - 7);
}
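/* Each 40-byte subframe per channel carries quantized filter parameters
 * (looked up in the tables built above) and 3-bit excitation samples; the
 * synthesis below runs an 8-tap adaptive filter (lattice-like coefficient
 * update) followed by a fixed 0.86 de-emphasis, producing 256 samples per
 * subframe. */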
static int fastaudio_decode(AVCodecContext *avctx, void *data,
int *got_frame, AVPacket *pkt)
{
FastAudioContext *s = avctx->priv_data;
GetByteContext gb;
AVFrame *frame = data;
int subframes;
int ret;
subframes = pkt->size / (40 * avctx->channels);
frame->nb_samples = subframes * 256;
if ((ret = ff_get_buffer(avctx, frame, 0)) < 0)
return ret;
bytestream2_init(&gb, pkt->data, pkt->size);
for (int subframe = 0; subframe < subframes; subframe++) {
for (int channel = 0; channel < avctx->channels; channel++) {
ChannelItems *ch = &s->ch[channel];
float result[256] = { 0 };
unsigned src[10];
int inds[4], pads[4];
float m[8];
int pos = 0;
for (int i = 0; i < 10; i++)
src[i] = bytestream2_get_le32(&gb);
for (int i = 0; i < 8; i++)
m[7 - i] = s->table[i][read_bits(bits[i], &pos, src)];
for (int i = 0; i < 4; i++)
inds[3 - i] = read_bits(6, &pos, src);
for (int i = 0; i < 4; i++)
pads[3 - i] = read_bits(2, &pos, src);
for (int i = 0, index5 = 0; i < 4; i++) {
float value = av_int2float((inds[i] + 1) << 20) * powf(2.f, 116.f);
for (int j = 0, tmp = 0; j < 21; j++) {
set_sample(i, j, j == 20 ? tmp / 2 : read_bits(3, &pos, src), result, pads, value);
if (j % 10 == 9)
tmp = 4 * tmp + read_bits(2, &pos, src);
if (j == 20)
index5 = FFMIN(2 * index5 + tmp % 2, 63);
}
m[2] = s->table[5][index5];
}
for (int i = 0; i < 256; i++) {
float x = result[i];
for (int j = 0; j < 8; j++) {
x -= m[j] * ch->f[j];
ch->f[j] += m[j] * x;
}
memmove(&ch->f[0], &ch->f[1], sizeof(float) * 7);
ch->f[7] = x;
ch->last = x + ch->last * 0.86f;
result[i] = ch->last * 2.f;
}
memcpy(frame->extended_data[channel] + 1024 * subframe, result, 256 * sizeof(float));
}
}
*got_frame = 1;
return pkt->size;
}
static av_cold int fastaudio_close(AVCodecContext *avctx)
{
FastAudioContext *s = avctx->priv_data;
av_freep(&s->ch);
return 0;
}
AVCodec ff_fastaudio_decoder = {
.name = "fastaudio",
.long_name = NULL_IF_CONFIG_SMALL("MobiClip FastAudio"),
.type = AVMEDIA_TYPE_AUDIO,
.id = AV_CODEC_ID_FASTAUDIO,
.priv_data_size = sizeof(FastAudioContext),
.init = fastaudio_init,
.decode = fastaudio_decode,
.close = fastaudio_close,
.capabilities = AV_CODEC_CAP_DR1,
.sample_fmts = (const enum AVSampleFormat[]) { AV_SAMPLE_FMT_FLTP,
AV_SAMPLE_FMT_NONE },
};

View File

@@ -1,67 +0,0 @@
/*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#ifndef AVCODEC_FLOAT2HALF_H
#define AVCODEC_FLOAT2HALF_H
#include <stdint.h>
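/* Table-driven float -> half conversion (apparently the scheme from
 * "Fast Half Float Conversions" by Jeroen van der Zijp): the 512-entry
 * tables are indexed by the float's sign bit plus 8-bit exponent.
 * basetable holds the half's sign/exponent pattern, shifttable how far the
 * 23-bit mantissa must be shifted right. */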
static void float2half_tables(uint16_t *basetable, uint8_t *shifttable)
{
for (int i = 0; i < 256; i++) {
int e = i - 127;
if (e < -24) { // Very small numbers map to zero
basetable[i|0x000] = 0x0000;
basetable[i|0x100] = 0x8000;
shifttable[i|0x000] = 24;
shifttable[i|0x100] = 24;
} else if (e < -14) { // Small numbers map to denorms
basetable[i|0x000] = (0x0400>>(-e-14));
basetable[i|0x100] = (0x0400>>(-e-14)) | 0x8000;
shifttable[i|0x000] = -e-1;
shifttable[i|0x100] = -e-1;
} else if (e <= 15) { // Normal numbers just lose precision
basetable[i|0x000] = ((e + 15) << 10);
basetable[i|0x100] = ((e + 15) << 10) | 0x8000;
shifttable[i|0x000] = 13;
shifttable[i|0x100] = 13;
} else if (e < 128) { // Large numbers map to Infinity
basetable[i|0x000] = 0x7C00;
basetable[i|0x100] = 0xFC00;
shifttable[i|0x000] = 24;
shifttable[i|0x100] = 24;
} else { // Infinity and NaN's stay Infinity and NaN's
basetable[i|0x000] = 0x7C00;
basetable[i|0x100] = 0xFC00;
shifttable[i|0x000] = 13;
shifttable[i|0x100] = 13;
}
}
}
static uint16_t float2half(uint32_t f, uint16_t *basetable, uint8_t *shifttable)
{
uint16_t h;
h = basetable[(f >> 23) & 0x1ff] + ((f & 0x007fffff) >> shifttable[(f >> 23) & 0x1ff]);
return h;
}
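#if 0
/* Illustrative usage sketch (not part of the original file): build the
 * tables once, then convert 1.0f. av_float2int() is from
 * libavutil/intfloat.h; float2half_example is a hypothetical name. */
static uint16_t float2half_example(void)
{
    uint16_t basetable[512];
    uint8_t  shifttable[512];

    float2half_tables(basetable, shifttable);
    return float2half(av_float2int(1.0f), basetable, shifttable); /* 0x3C00 */
}
#endif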
#endif /* AVCODEC_FLOAT2HALF_H */

View File

@@ -1,74 +0,0 @@
/*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#ifndef AVCODEC_HALF2FLOAT_H
#define AVCODEC_HALF2FLOAT_H
#include <stdint.h>
static uint32_t convertmantissa(uint32_t i)
{
int32_t m = i << 13; // Zero pad mantissa bits
int32_t e = 0; // Zero exponent
while (!(m & 0x00800000)) { // While not normalized
e -= 0x00800000; // Decrement exponent (1<<23)
m <<= 1; // Shift mantissa
}
m &= ~0x00800000; // Clear leading 1 bit
e += 0x38800000; // Adjust bias ((127-14)<<23)
return m | e; // Return combined number
}
static void half2float_table(uint32_t *mantissatable, uint32_t *exponenttable,
uint16_t *offsettable)
{
mantissatable[0] = 0;
for (int i = 1; i < 1024; i++)
mantissatable[i] = convertmantissa(i);
for (int i = 1024; i < 2048; i++)
mantissatable[i] = 0x38000000UL + ((i - 1024) << 13UL);
exponenttable[0] = 0;
for (int i = 1; i < 31; i++)
exponenttable[i] = i << 23;
for (int i = 33; i < 63; i++)
exponenttable[i] = 0x80000000UL + ((i - 32) << 23UL);
exponenttable[31]= 0x47800000UL;
exponenttable[32]= 0x80000000UL;
exponenttable[63]= 0xC7800000UL;
offsettable[0] = 0;
for (int i = 1; i < 64; i++)
offsettable[i] = 1024;
offsettable[32] = 0;
}
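/* Assemble the float bit pattern from the three tables: the top 6 bits of
 * the half (sign + exponent) select the exponent and an offset into the
 * mantissa table, while the low 10 bits index the (possibly renormalized)
 * mantissa. */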
static uint32_t half2float(uint16_t h, uint32_t *mantissatable, uint32_t *exponenttable,
uint16_t *offsettable)
{
uint32_t f;
f = mantissatable[offsettable[h >> 10] + (h & 0x3ff)] + exponenttable[h >> 10];
return f;
}
#endif /* AVCODEC_HALF2FLOAT_H */

View File

@@ -1,197 +0,0 @@
/*
* Copyright (c) 2021 Paul B Mahol
*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include "libavutil/common.h"
#include "avcodec.h"
#include "bytestream.h"
#include "internal.h"
typedef struct SimbiosisIMXContext {
AVFrame *frame;
uint32_t pal[256];
uint8_t history[32768];
int pos;
} SimbiosisIMXContext;
static av_cold int imx_decode_init(AVCodecContext *avctx)
{
SimbiosisIMXContext *imx = avctx->priv_data;
avctx->pix_fmt = AV_PIX_FMT_PAL8;
avctx->width = 320;
avctx->height = 160;
imx->frame = av_frame_alloc();
if (!imx->frame)
return AVERROR(ENOMEM);
return 0;
}
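/* The IMX bytestream is a sequence of opcodes: the top two bits of each
 * byte select the operation and the low six bits a length. Op 0 skips
 * pixels, op 3 is an extended skip (len * 64 plus the next byte), op 2
 * fills with one byte, and op 1 either copies from the 32 KiB history
 * buffer (when len == 0) or stores literal bytes, which also feed the
 * history. */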
static int imx_decode_frame(AVCodecContext *avctx, void *data,
int *got_frame, AVPacket *avpkt)
{
SimbiosisIMXContext *imx = avctx->priv_data;
int ret, x, y;
buffer_size_t pal_size;
const uint8_t *pal = av_packet_get_side_data(avpkt, AV_PKT_DATA_PALETTE, &pal_size);
AVFrame *frame = imx->frame;
GetByteContext gb;
if ((ret = ff_reget_buffer(avctx, frame, 0)) < 0)
return ret;
if (pal && pal_size == AVPALETTE_SIZE) {
memcpy(imx->pal, pal, pal_size);
frame->palette_has_changed = 1;
frame->key_frame = 1;
} else {
frame->key_frame = 0;
frame->palette_has_changed = 0;
}
bytestream2_init(&gb, avpkt->data, avpkt->size);
memcpy(frame->data[1], imx->pal, AVPALETTE_SIZE);
x = 0, y = 0;
while (bytestream2_get_bytes_left(&gb) > 0 &&
x < 320 && y < 160) {
int b = bytestream2_get_byte(&gb);
int len = b & 0x3f;
int op = b >> 6;
int fill;
switch (op) {
case 3:
len = len * 64 + bytestream2_get_byte(&gb);
case 0:
while (len > 0) {
x++;
len--;
if (x >= 320) {
x = 0;
y++;
}
if (y >= 160)
break;
}
frame->key_frame = 0;
break;
case 1:
if (len == 0) {
int offset = bytestream2_get_le16(&gb);
if (offset < 0 || offset >= 32768)
return AVERROR_INVALIDDATA;
len = bytestream2_get_byte(&gb);
while (len > 0 && offset < 32768) {
frame->data[0][x + y * frame->linesize[0]] = imx->history[offset++];
x++;
len--;
if (x >= 320) {
x = 0;
y++;
}
if (y >= 160)
break;
}
frame->key_frame = 0;
} else {
while (len > 0) {
fill = bytestream2_get_byte(&gb);
frame->data[0][x + y * frame->linesize[0]] = fill;
if (imx->pos < 32768)
imx->history[imx->pos++] = fill;
x++;
len--;
if (x >= 320) {
x = 0;
y++;
}
if (y >= 160)
break;
}
}
break;
case 2:
fill = bytestream2_get_byte(&gb);
while (len > 0) {
frame->data[0][x + y * frame->linesize[0]] = fill;
x++;
len--;
if (x >= 320) {
x = 0;
y++;
}
if (y >= 160)
break;
}
break;
}
}
frame->pict_type = frame->key_frame ? AV_PICTURE_TYPE_I : AV_PICTURE_TYPE_P;
if ((ret = av_frame_ref(data, frame)) < 0)
return ret;
*got_frame = 1;
return avpkt->size;
}
static void imx_decode_flush(AVCodecContext *avctx)
{
SimbiosisIMXContext *imx = avctx->priv_data;
av_frame_unref(imx->frame);
imx->pos = 0;
memset(imx->pal, 0, sizeof(imx->pal));
memset(imx->history, 0, sizeof(imx->history));
}
static int imx_decode_close(AVCodecContext *avctx)
{
SimbiosisIMXContext *imx = avctx->priv_data;
av_frame_free(&imx->frame);
return 0;
}
AVCodec ff_simbiosis_imx_decoder = {
.name = "simbiosis_imx",
.long_name = NULL_IF_CONFIG_SMALL("Simbiosis Interactive IMX Video"),
.type = AVMEDIA_TYPE_VIDEO,
.id = AV_CODEC_ID_SIMBIOSIS_IMX,
.priv_data_size = sizeof(SimbiosisIMXContext),
.init = imx_decode_init,
.decode = imx_decode_frame,
.close = imx_decode_close,
.flush = imx_decode_flush,
.capabilities = AV_CODEC_CAP_DR1,
.caps_internal = FF_CODEC_CAP_INIT_THREADSAFE |
FF_CODEC_CAP_INIT_CLEANUP,
};

View File

@@ -1,77 +0,0 @@
/*
* IPU parser
* Copyright (c) 2020 Paul B Mahol
*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
/**
* @file
* IPU parser
*/
#include "parser.h"
typedef struct IPUParseContext {
ParseContext pc;
} IPUParseContext;
static int ipu_parse(AVCodecParserContext *s, AVCodecContext *avctx,
const uint8_t **poutbuf, int *poutbuf_size,
const uint8_t *buf, int buf_size)
{
IPUParseContext *ipc = s->priv_data;
uint32_t state = ipc->pc.state;
int next = END_NOT_FOUND, i = 0;
s->pict_type = AV_PICTURE_TYPE_NONE;
s->duration = 1;
*poutbuf_size = 0;
*poutbuf = NULL;
if (s->flags & PARSER_FLAG_COMPLETE_FRAMES) {
next = buf_size;
} else {
for (; i < buf_size; i++) {
state = (state << 8) | buf[i];
if (state == 0x1b0) {
next = i + 1;
break;
}
}
ipc->pc.state = state;
if (ff_combine_frame(&ipc->pc, next, &buf, &buf_size) < 0) {
*poutbuf = NULL;
*poutbuf_size = 0;
return buf_size;
}
}
*poutbuf = buf;
*poutbuf_size = buf_size;
return next;
}
AVCodecParser ff_ipu_parser = {
.codec_ids = { AV_CODEC_ID_IPU },
.priv_data_size = sizeof(IPUParseContext),
.parser_parse = ipu_parse,
.parser_close = ff_parse_close,
};

View File

@@ -1,572 +0,0 @@
/*
* Scalable Video Technology for AV1 encoder library plugin
*
* Copyright (c) 2018 Intel Corporation
*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include <stdint.h>
#include <EbSvtAv1ErrorCodes.h>
#include <EbSvtAv1Enc.h>
#include "libavutil/common.h"
#include "libavutil/frame.h"
#include "libavutil/imgutils.h"
#include "libavutil/opt.h"
#include "libavutil/pixdesc.h"
#include "libavutil/avassert.h"
#include "internal.h"
#include "encode.h"
#include "packet_internal.h"
#include "avcodec.h"
#include "profiles.h"
typedef enum eos_status {
EOS_NOT_REACHED = 0,
EOS_SENT,
EOS_RECEIVED
} EOS_STATUS;
typedef struct SvtContext {
const AVClass *class;
EbSvtAv1EncConfiguration enc_params;
EbComponentType *svt_handle;
EbBufferHeaderType *in_buf;
int raw_size;
int max_tu_size;
AVFrame *frame;
AVBufferPool *pool;
EOS_STATUS eos_flag;
// User options.
int hierarchical_level;
int la_depth;
int enc_mode;
int rc_mode;
int scd;
int qp;
int tier;
int tile_columns;
int tile_rows;
} SvtContext;
static const struct {
EbErrorType eb_err;
int av_err;
const char *desc;
} svt_errors[] = {
{ EB_ErrorNone, 0, "success" },
{ EB_ErrorInsufficientResources, AVERROR(ENOMEM), "insufficient resources" },
{ EB_ErrorUndefined, AVERROR(EINVAL), "undefined error" },
{ EB_ErrorInvalidComponent, AVERROR(EINVAL), "invalid component" },
{ EB_ErrorBadParameter, AVERROR(EINVAL), "bad parameter" },
{ EB_ErrorDestroyThreadFailed, AVERROR_EXTERNAL, "failed to destroy thread" },
{ EB_ErrorSemaphoreUnresponsive, AVERROR_EXTERNAL, "semaphore unresponsive" },
{ EB_ErrorDestroySemaphoreFailed, AVERROR_EXTERNAL, "failed to destroy semaphore"},
{ EB_ErrorCreateMutexFailed, AVERROR_EXTERNAL, "failed to create mutex" },
{ EB_ErrorMutexUnresponsive, AVERROR_EXTERNAL, "mutex unresponsive" },
{ EB_ErrorDestroyMutexFailed, AVERROR_EXTERNAL, "failed to destroy mutex" },
{ EB_NoErrorEmptyQueue, AVERROR(EAGAIN), "empty queue" },
};
static int svt_map_error(EbErrorType eb_err, const char **desc)
{
int i;
av_assert0(desc);
for (i = 0; i < FF_ARRAY_ELEMS(svt_errors); i++) {
if (svt_errors[i].eb_err == eb_err) {
*desc = svt_errors[i].desc;
return svt_errors[i].av_err;
}
}
*desc = "unknown error";
return AVERROR_UNKNOWN;
}
static int svt_print_error(void *log_ctx, EbErrorType err,
const char *error_string)
{
const char *desc;
int ret = svt_map_error(err, &desc);
av_log(log_ctx, AV_LOG_ERROR, "%s: %s (0x%x)\n", error_string, desc, err);
return ret;
}
static int alloc_buffer(EbSvtAv1EncConfiguration *config, SvtContext *svt_enc)
{
const int pack_mode_10bit =
(config->encoder_bit_depth > 8) && (config->compressed_ten_bit_format == 0) ? 1 : 0;
const size_t luma_size_8bit =
config->source_width * config->source_height * (1 << pack_mode_10bit);
const size_t luma_size_10bit =
(config->encoder_bit_depth > 8 && pack_mode_10bit == 0) ? luma_size_8bit : 0;
EbSvtIOFormat *in_data;
svt_enc->raw_size = (luma_size_8bit + luma_size_10bit) * 3 / 2;
// allocate buffer for in and out
svt_enc->in_buf = av_mallocz(sizeof(*svt_enc->in_buf));
if (!svt_enc->in_buf)
return AVERROR(ENOMEM);
svt_enc->in_buf->p_buffer = av_mallocz(sizeof(*in_data));
if (!svt_enc->in_buf->p_buffer)
return AVERROR(ENOMEM);
svt_enc->in_buf->size = sizeof(*svt_enc->in_buf);
return 0;
}
static int config_enc_params(EbSvtAv1EncConfiguration *param,
AVCodecContext *avctx)
{
SvtContext *svt_enc = avctx->priv_data;
const AVPixFmtDescriptor *desc;
param->source_width = avctx->width;
param->source_height = avctx->height;
desc = av_pix_fmt_desc_get(avctx->pix_fmt);
param->encoder_bit_depth = desc->comp[0].depth;
if (desc->log2_chroma_w == 1 && desc->log2_chroma_h == 1)
param->encoder_color_format = EB_YUV420;
else if (desc->log2_chroma_w == 1 && desc->log2_chroma_h == 0)
param->encoder_color_format = EB_YUV422;
else if (!desc->log2_chroma_w && !desc->log2_chroma_h)
param->encoder_color_format = EB_YUV444;
else {
        av_log(avctx, AV_LOG_ERROR, "Unsupported pixel format\n");
return AVERROR(EINVAL);
}
if (avctx->profile != FF_PROFILE_UNKNOWN)
param->profile = avctx->profile;
if (avctx->level != FF_LEVEL_UNKNOWN)
param->level = avctx->level;
if ((param->encoder_color_format == EB_YUV422 || param->encoder_bit_depth > 10)
&& param->profile != FF_PROFILE_AV1_PROFESSIONAL ) {
av_log(avctx, AV_LOG_WARNING, "Forcing Professional profile\n");
param->profile = FF_PROFILE_AV1_PROFESSIONAL;
} else if (param->encoder_color_format == EB_YUV444 && param->profile != FF_PROFILE_AV1_HIGH) {
av_log(avctx, AV_LOG_WARNING, "Forcing High profile\n");
param->profile = FF_PROFILE_AV1_HIGH;
}
// Update param from options
param->hierarchical_levels = svt_enc->hierarchical_level;
param->enc_mode = svt_enc->enc_mode;
param->tier = svt_enc->tier;
param->rate_control_mode = svt_enc->rc_mode;
param->scene_change_detection = svt_enc->scd;
param->qp = svt_enc->qp;
param->target_bit_rate = avctx->bit_rate;
if (avctx->gop_size > 0)
param->intra_period_length = avctx->gop_size - 1;
if (avctx->framerate.num > 0 && avctx->framerate.den > 0) {
param->frame_rate_numerator = avctx->framerate.num;
param->frame_rate_denominator = avctx->framerate.den;
} else {
param->frame_rate_numerator = avctx->time_base.den;
param->frame_rate_denominator = avctx->time_base.num * avctx->ticks_per_frame;
}
if (param->rate_control_mode) {
param->max_qp_allowed = avctx->qmax;
param->min_qp_allowed = avctx->qmin;
}
param->intra_refresh_type = 2; /* Real keyframes only */
if (svt_enc->la_depth >= 0)
param->look_ahead_distance = svt_enc->la_depth;
param->tile_columns = svt_enc->tile_columns;
param->tile_rows = svt_enc->tile_rows;
return 0;
}
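/* Hand the AVFrame planes to SVT-AV1 without copying; for >8-bit input the
 * library expects strides in pixels rather than bytes, hence the
 * bytes_shift correction. */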
static int read_in_data(EbSvtAv1EncConfiguration *param, const AVFrame *frame,
EbBufferHeaderType *header_ptr)
{
EbSvtIOFormat *in_data = (EbSvtIOFormat *)header_ptr->p_buffer;
ptrdiff_t linesizes[4];
size_t sizes[4];
int bytes_shift = param->encoder_bit_depth > 8 ? 1 : 0;
int ret, frame_size;
for (int i = 0; i < 4; i++)
linesizes[i] = frame->linesize[i];
ret = av_image_fill_plane_sizes(sizes, frame->format, frame->height,
linesizes);
if (ret < 0)
return ret;
frame_size = 0;
for (int i = 0; i < 4; i++) {
if (sizes[i] > INT_MAX - frame_size)
return AVERROR(EINVAL);
frame_size += sizes[i];
}
in_data->luma = frame->data[0];
in_data->cb = frame->data[1];
in_data->cr = frame->data[2];
in_data->y_stride = AV_CEIL_RSHIFT(frame->linesize[0], bytes_shift);
in_data->cb_stride = AV_CEIL_RSHIFT(frame->linesize[1], bytes_shift);
in_data->cr_stride = AV_CEIL_RSHIFT(frame->linesize[2], bytes_shift);
header_ptr->n_filled_len = frame_size;
return 0;
}
static av_cold int eb_enc_init(AVCodecContext *avctx)
{
SvtContext *svt_enc = avctx->priv_data;
EbErrorType svt_ret;
int ret;
svt_enc->eos_flag = EOS_NOT_REACHED;
svt_ret = svt_av1_enc_init_handle(&svt_enc->svt_handle, svt_enc, &svt_enc->enc_params);
if (svt_ret != EB_ErrorNone) {
return svt_print_error(avctx, svt_ret, "Error initializing encoder handle");
}
ret = config_enc_params(&svt_enc->enc_params, avctx);
if (ret < 0) {
av_log(avctx, AV_LOG_ERROR, "Error configuring encoder parameters\n");
return ret;
}
svt_ret = svt_av1_enc_set_parameter(svt_enc->svt_handle, &svt_enc->enc_params);
if (svt_ret != EB_ErrorNone) {
return svt_print_error(avctx, svt_ret, "Error setting encoder parameters");
}
svt_ret = svt_av1_enc_init(svt_enc->svt_handle);
if (svt_ret != EB_ErrorNone) {
return svt_print_error(avctx, svt_ret, "Error initializing encoder");
}
if (avctx->flags & AV_CODEC_FLAG_GLOBAL_HEADER) {
EbBufferHeaderType *headerPtr = NULL;
svt_ret = svt_av1_enc_stream_header(svt_enc->svt_handle, &headerPtr);
if (svt_ret != EB_ErrorNone) {
return svt_print_error(avctx, svt_ret, "Error building stream header");
}
avctx->extradata_size = headerPtr->n_filled_len;
avctx->extradata = av_mallocz(avctx->extradata_size + AV_INPUT_BUFFER_PADDING_SIZE);
if (!avctx->extradata) {
av_log(avctx, AV_LOG_ERROR,
"Cannot allocate AV1 header of size %d.\n", avctx->extradata_size);
return AVERROR(ENOMEM);
}
memcpy(avctx->extradata, headerPtr->p_buffer, avctx->extradata_size);
svt_ret = svt_av1_enc_stream_header_release(headerPtr);
if (svt_ret != EB_ErrorNone) {
return svt_print_error(avctx, svt_ret, "Error freeing stream header");
}
}
svt_enc->frame = av_frame_alloc();
if (!svt_enc->frame)
return AVERROR(ENOMEM);
return alloc_buffer(&svt_enc->enc_params, svt_enc);
}
static int eb_send_frame(AVCodecContext *avctx, const AVFrame *frame)
{
SvtContext *svt_enc = avctx->priv_data;
EbBufferHeaderType *headerPtr = svt_enc->in_buf;
int ret;
if (!frame) {
EbBufferHeaderType headerPtrLast;
if (svt_enc->eos_flag == EOS_SENT)
return 0;
headerPtrLast.n_alloc_len = 0;
headerPtrLast.n_filled_len = 0;
headerPtrLast.n_tick_count = 0;
headerPtrLast.p_app_private = NULL;
headerPtrLast.p_buffer = NULL;
headerPtrLast.flags = EB_BUFFERFLAG_EOS;
svt_av1_enc_send_picture(svt_enc->svt_handle, &headerPtrLast);
svt_enc->eos_flag = EOS_SENT;
return 0;
}
ret = read_in_data(&svt_enc->enc_params, frame, headerPtr);
if (ret < 0)
return ret;
headerPtr->flags = 0;
headerPtr->p_app_private = NULL;
headerPtr->pts = frame->pts;
svt_av1_enc_send_picture(svt_enc->svt_handle, headerPtr);
return 0;
}
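/* Output packets come from a buffer pool that is regrown to the next power
 * of two whenever a temporal unit exceeds the current buffer size, capped
 * at 8 raw frame sizes. */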
static AVBufferRef *get_output_ref(AVCodecContext *avctx, SvtContext *svt_enc, int filled_len)
{
if (filled_len > svt_enc->max_tu_size) {
const int max_frames = 8;
int max_tu_size;
if (filled_len > svt_enc->raw_size * max_frames) {
av_log(avctx, AV_LOG_ERROR, "TU size > %d raw frame size.\n", max_frames);
return NULL;
}
max_tu_size = 1 << av_ceil_log2(filled_len);
av_buffer_pool_uninit(&svt_enc->pool);
svt_enc->pool = av_buffer_pool_init(max_tu_size + AV_INPUT_BUFFER_PADDING_SIZE, NULL);
if (!svt_enc->pool)
return NULL;
svt_enc->max_tu_size = max_tu_size;
}
av_assert0(svt_enc->pool);
return av_buffer_pool_get(svt_enc->pool);
}
static int eb_receive_packet(AVCodecContext *avctx, AVPacket *pkt)
{
SvtContext *svt_enc = avctx->priv_data;
EbBufferHeaderType *headerPtr;
AVFrame *frame = svt_enc->frame;
EbErrorType svt_ret;
AVBufferRef *ref;
int ret = 0, pict_type;
if (svt_enc->eos_flag == EOS_RECEIVED)
return AVERROR_EOF;
ret = ff_encode_get_frame(avctx, frame);
if (ret < 0 && ret != AVERROR_EOF)
return ret;
if (ret == AVERROR_EOF)
frame = NULL;
ret = eb_send_frame(avctx, frame);
if (ret < 0)
return ret;
av_frame_unref(svt_enc->frame);
svt_ret = svt_av1_enc_get_packet(svt_enc->svt_handle, &headerPtr, svt_enc->eos_flag);
if (svt_ret == EB_NoErrorEmptyQueue)
return AVERROR(EAGAIN);
ref = get_output_ref(avctx, svt_enc, headerPtr->n_filled_len);
if (!ref) {
av_log(avctx, AV_LOG_ERROR, "Failed to allocate output packet.\n");
svt_av1_enc_release_out_buffer(&headerPtr);
return AVERROR(ENOMEM);
}
pkt->buf = ref;
pkt->data = ref->data;
memcpy(pkt->data, headerPtr->p_buffer, headerPtr->n_filled_len);
memset(pkt->data + headerPtr->n_filled_len, 0, AV_INPUT_BUFFER_PADDING_SIZE);
pkt->size = headerPtr->n_filled_len;
pkt->pts = headerPtr->pts;
pkt->dts = headerPtr->dts;
switch (headerPtr->pic_type) {
case EB_AV1_KEY_PICTURE:
pkt->flags |= AV_PKT_FLAG_KEY;
// fall-through
case EB_AV1_INTRA_ONLY_PICTURE:
pict_type = AV_PICTURE_TYPE_I;
break;
case EB_AV1_INVALID_PICTURE:
pict_type = AV_PICTURE_TYPE_NONE;
break;
default:
pict_type = AV_PICTURE_TYPE_P;
break;
}
if (headerPtr->pic_type == EB_AV1_NON_REF_PICTURE)
pkt->flags |= AV_PKT_FLAG_DISPOSABLE;
if (headerPtr->flags & EB_BUFFERFLAG_EOS)
svt_enc->eos_flag = EOS_RECEIVED;
ff_side_data_set_encoder_stats(pkt, headerPtr->qp * FF_QP2LAMBDA, NULL, 0, pict_type);
svt_av1_enc_release_out_buffer(&headerPtr);
return 0;
}
static av_cold int eb_enc_close(AVCodecContext *avctx)
{
SvtContext *svt_enc = avctx->priv_data;
if (svt_enc->svt_handle) {
svt_av1_enc_deinit(svt_enc->svt_handle);
svt_av1_enc_deinit_handle(svt_enc->svt_handle);
}
if (svt_enc->in_buf) {
av_free(svt_enc->in_buf->p_buffer);
av_freep(&svt_enc->in_buf);
}
av_buffer_pool_uninit(&svt_enc->pool);
av_frame_free(&svt_enc->frame);
return 0;
}
#define OFFSET(x) offsetof(SvtContext, x)
#define VE AV_OPT_FLAG_VIDEO_PARAM | AV_OPT_FLAG_ENCODING_PARAM
static const AVOption options[] = {
{ "hielevel", "Hierarchical prediction levels setting", OFFSET(hierarchical_level),
AV_OPT_TYPE_INT, { .i64 = 4 }, 3, 4, VE , "hielevel"},
{ "3level", NULL, 0, AV_OPT_TYPE_CONST, { .i64 = 3 }, INT_MIN, INT_MAX, VE, "hielevel" },
{ "4level", NULL, 0, AV_OPT_TYPE_CONST, { .i64 = 4 }, INT_MIN, INT_MAX, VE, "hielevel" },
{ "la_depth", "Look ahead distance [0, 120]", OFFSET(la_depth),
AV_OPT_TYPE_INT, { .i64 = -1 }, -1, 120, VE },
{ "preset", "Encoding preset [0, 8]",
OFFSET(enc_mode), AV_OPT_TYPE_INT, { .i64 = MAX_ENC_PRESET }, 0, MAX_ENC_PRESET, VE },
{ "tier", "Set operating point tier", OFFSET(tier),
AV_OPT_TYPE_INT, { .i64 = 0 }, 0, 1, VE, "tier" },
{ "main", NULL, 0, AV_OPT_TYPE_CONST, { .i64 = 0 }, 0, 0, VE, "tier" },
{ "high", NULL, 0, AV_OPT_TYPE_CONST, { .i64 = 1 }, 0, 0, VE, "tier" },
FF_AV1_PROFILE_OPTS
#define LEVEL(name, value) name, NULL, 0, AV_OPT_TYPE_CONST, \
{ .i64 = value }, 0, 0, VE, "avctx.level"
{ LEVEL("2.0", 20) },
{ LEVEL("2.1", 21) },
{ LEVEL("2.2", 22) },
{ LEVEL("2.3", 23) },
{ LEVEL("3.0", 30) },
{ LEVEL("3.1", 31) },
{ LEVEL("3.2", 32) },
{ LEVEL("3.3", 33) },
{ LEVEL("4.0", 40) },
{ LEVEL("4.1", 41) },
{ LEVEL("4.2", 42) },
{ LEVEL("4.3", 43) },
{ LEVEL("5.0", 50) },
{ LEVEL("5.1", 51) },
{ LEVEL("5.2", 52) },
{ LEVEL("5.3", 53) },
{ LEVEL("6.0", 60) },
{ LEVEL("6.1", 61) },
{ LEVEL("6.2", 62) },
{ LEVEL("6.3", 63) },
{ LEVEL("7.0", 70) },
{ LEVEL("7.1", 71) },
{ LEVEL("7.2", 72) },
{ LEVEL("7.3", 73) },
#undef LEVEL
{ "rc", "Bit rate control mode", OFFSET(rc_mode),
AV_OPT_TYPE_INT, { .i64 = 0 }, 0, 3, VE , "rc"},
{ "cqp", "Constant quantizer", 0, AV_OPT_TYPE_CONST, { .i64 = 0 }, INT_MIN, INT_MAX, VE, "rc" },
{ "vbr", "Variable Bit Rate, use a target bitrate for the entire stream", 0, AV_OPT_TYPE_CONST, { .i64 = 1 }, INT_MIN, INT_MAX, VE, "rc" },
{ "cvbr", "Constrained Variable Bit Rate, use a target bitrate for each GOP", 0, AV_OPT_TYPE_CONST,{ .i64 = 2 }, INT_MIN, INT_MAX, VE, "rc" },
{ "qp", "Quantizer to use with cqp rate control mode", OFFSET(qp),
AV_OPT_TYPE_INT, { .i64 = 50 }, 0, 63, VE },
{ "sc_detection", "Scene change detection", OFFSET(scd),
AV_OPT_TYPE_BOOL, { .i64 = 0 }, 0, 1, VE },
{ "tile_columns", "Log2 of number of tile columns to use", OFFSET(tile_columns), AV_OPT_TYPE_INT, {.i64 = 0}, 0, 4, VE},
{ "tile_rows", "Log2 of number of tile rows to use", OFFSET(tile_rows), AV_OPT_TYPE_INT, {.i64 = 0}, 0, 6, VE},
{NULL},
};
static const AVClass class = {
.class_name = "libsvtav1",
.item_name = av_default_item_name,
.option = options,
.version = LIBAVUTIL_VERSION_INT,
};
static const AVCodecDefault eb_enc_defaults[] = {
{ "b", "7M" },
{ "g", "-1" },
{ "qmin", "0" },
{ "qmax", "63" },
{ NULL },
};
AVCodec ff_libsvtav1_encoder = {
.name = "libsvtav1",
    .long_name      = NULL_IF_CONFIG_SMALL("SVT-AV1 (Scalable Video Technology for AV1) encoder"),
.priv_data_size = sizeof(SvtContext),
.type = AVMEDIA_TYPE_VIDEO,
.id = AV_CODEC_ID_AV1,
.init = eb_enc_init,
.receive_packet = eb_receive_packet,
.close = eb_enc_close,
.capabilities = AV_CODEC_CAP_DELAY | AV_CODEC_CAP_OTHER_THREADS,
    .caps_internal  = FF_CODEC_CAP_AUTO_THREADS | FF_CODEC_CAP_INIT_CLEANUP,
.pix_fmts = (const enum AVPixelFormat[]){ AV_PIX_FMT_YUV420P,
AV_PIX_FMT_YUV420P10,
AV_PIX_FMT_NONE },
.priv_class = &class,
.defaults = eb_enc_defaults,
.wrapper_name = "libsvtav1",
};

View File

@@ -1,263 +0,0 @@
/*
 * RAW AVS3-P2/IEEE1857.10 video decoder
* Copyright (c) 2020 Zhenyu Wang <wangzhenyu@pkusz.edu.cn>
* Bingjie Han <hanbj@pkusz.edu.cn>
* Huiwen Ren <hwrenx@gmail.com>
*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include "libavutil/avassert.h"
#include "libavutil/avutil.h"
#include "libavutil/common.h"
#include "libavutil/imgutils.h"
#include "libavutil/intreadwrite.h"
#include "libavutil/opt.h"
#include "avcodec.h"
#include "avs3.h"
#include "internal.h"
#include "uavs3d.h"
typedef struct uavs3d_context {
AVCodecContext *avctx;
void *dec_handle;
int frame_threads;
int got_seqhdr;
uavs3d_io_frm_t dec_frame;
} uavs3d_context;
#define UAVS3D_CHECK_START_CODE(data_ptr, PIC_START_CODE) \
(AV_RL32(data_ptr) != (PIC_START_CODE << 24) + AVS3_NAL_START_CODE)
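/* Scan for the next AVS3 start code after the first four bytes; on success
 * *left receives the number of bytes from that start code to the end of
 * the buffer. */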
static int uavs3d_find_next_start_code(const unsigned char *bs_data, int bs_len, int *left)
{
const unsigned char *data_ptr = bs_data + 4;
int count = bs_len - 4;
while (count >= 4 &&
UAVS3D_CHECK_START_CODE(data_ptr, AVS3_INTER_PIC_START_CODE) &&
UAVS3D_CHECK_START_CODE(data_ptr, AVS3_INTRA_PIC_START_CODE) &&
UAVS3D_CHECK_START_CODE(data_ptr, AVS3_SEQ_START_CODE) &&
UAVS3D_CHECK_START_CODE(data_ptr, AVS3_FIRST_SLICE_START_CODE) &&
UAVS3D_CHECK_START_CODE(data_ptr, AVS3_SEQ_END_CODE)) {
data_ptr++;
count--;
}
if (count >= 4) {
*left = count;
return 1;
}
return 0;
}
static void uavs3d_output_callback(uavs3d_io_frm_t *dec_frame)
{
uavs3d_io_frm_t frm_out;
AVFrame *frm = (AVFrame *)dec_frame->priv;
int i;
if (!frm || !frm->data[0]) {
dec_frame->got_pic = 0;
av_log(NULL, AV_LOG_ERROR, "Invalid AVFrame in uavs3d output.\n");
return;
}
frm->pts = dec_frame->pts;
frm->pkt_dts = dec_frame->dts;
frm->pkt_pos = dec_frame->pkt_pos;
frm->pkt_size = dec_frame->pkt_size;
frm->coded_picture_number = dec_frame->dtr;
frm->display_picture_number = dec_frame->ptr;
if (dec_frame->type < 0 || dec_frame->type >= 4) {
av_log(NULL, AV_LOG_WARNING, "Error frame type in uavs3d: %d.\n", dec_frame->type);
}
frm->pict_type = ff_avs3_image_type[dec_frame->type];
frm->key_frame = (frm->pict_type == AV_PICTURE_TYPE_I);
for (i = 0; i < 3; i++) {
frm_out.width [i] = dec_frame->width[i];
frm_out.height[i] = dec_frame->height[i];
frm_out.stride[i] = frm->linesize[i];
frm_out.buffer[i] = frm->data[i];
}
uavs3d_img_cpy_cvt(&frm_out, dec_frame, dec_frame->bit_depth);
}
static av_cold int libuavs3d_init(AVCodecContext *avctx)
{
uavs3d_context *h = avctx->priv_data;
uavs3d_cfg_t cdsc;
cdsc.frm_threads = avctx->thread_count > 0 ? avctx->thread_count : av_cpu_count();
cdsc.check_md5 = 0;
h->dec_handle = uavs3d_create(&cdsc, uavs3d_output_callback, NULL);
h->got_seqhdr = 0;
if (!h->dec_handle) {
return AVERROR(ENOMEM);
}
return 0;
}
static av_cold int libuavs3d_end(AVCodecContext *avctx)
{
uavs3d_context *h = avctx->priv_data;
if (h->dec_handle) {
uavs3d_flush(h->dec_handle, NULL);
uavs3d_delete(h->dec_handle);
h->dec_handle = NULL;
}
h->got_seqhdr = 0;
return 0;
}
static void libuavs3d_flush(AVCodecContext * avctx)
{
uavs3d_context *h = avctx->priv_data;
if (h->dec_handle) {
uavs3d_reset(h->dec_handle);
}
}
#define UAVS3D_CHECK_INVALID_RANGE(v, l, r) ((v)<(l)||(v)>(r))
static int libuavs3d_decode_frame(AVCodecContext *avctx, void *data, int *got_frame, AVPacket *avpkt)
{
uavs3d_context *h = avctx->priv_data;
const uint8_t *buf = avpkt->data;
int buf_size = avpkt->size;
const uint8_t *buf_end;
const uint8_t *buf_ptr;
AVFrame *frm = data;
int left_bytes;
int ret, finish = 0;
*got_frame = 0;
frm->pts = -1;
frm->pict_type = AV_PICTURE_TYPE_NONE;
if (!buf_size) {
if (h->got_seqhdr) {
if (!frm->data[0] && (ret = ff_get_buffer(avctx, frm, 0)) < 0) {
return ret;
}
h->dec_frame.priv = data; // AVFrame
}
do {
ret = uavs3d_flush(h->dec_handle, &h->dec_frame);
} while (ret > 0 && !h->dec_frame.got_pic);
} else {
uavs3d_io_frm_t *frm_dec = &h->dec_frame;
buf_ptr = buf;
buf_end = buf + buf_size;
frm_dec->pkt_pos = avpkt->pos;
frm_dec->pkt_size = avpkt->size;
while (!finish) {
int bs_len;
if (h->got_seqhdr) {
if (!frm->data[0] && (ret = ff_get_buffer(avctx, frm, 0)) < 0) {
return ret;
}
h->dec_frame.priv = data; // AVFrame
}
if (uavs3d_find_next_start_code(buf_ptr, buf_end - buf_ptr, &left_bytes)) {
bs_len = buf_end - buf_ptr - left_bytes;
} else {
bs_len = buf_end - buf_ptr;
finish = 1;
}
frm_dec->bs = (unsigned char *)buf_ptr;
frm_dec->bs_len = bs_len;
frm_dec->pts = avpkt->pts;
frm_dec->dts = avpkt->dts;
uavs3d_decode(h->dec_handle, frm_dec);
buf_ptr += bs_len;
if (frm_dec->nal_type == NAL_SEQ_HEADER) {
struct uavs3d_com_seqh_t *seqh = frm_dec->seqhdr;
if (UAVS3D_CHECK_INVALID_RANGE(seqh->frame_rate_code, 0, 15)) {
av_log(avctx, AV_LOG_ERROR, "Invalid frame rate code: %d.\n", seqh->frame_rate_code);
seqh->frame_rate_code = 3; // default 25 fps
} else {
avctx->framerate.num = ff_avs3_frame_rate_tab[seqh->frame_rate_code].num;
avctx->framerate.den = ff_avs3_frame_rate_tab[seqh->frame_rate_code].den;
}
avctx->has_b_frames = !seqh->low_delay;
avctx->pix_fmt = seqh->bit_depth_internal == 8 ? AV_PIX_FMT_YUV420P : AV_PIX_FMT_YUV420P10LE;
ff_set_dimensions(avctx, seqh->horizontal_size, seqh->vertical_size);
h->got_seqhdr = 1;
if (seqh->colour_description) {
if (UAVS3D_CHECK_INVALID_RANGE(seqh->colour_primaries, 0, 9) ||
UAVS3D_CHECK_INVALID_RANGE(seqh->transfer_characteristics, 0, 14) ||
UAVS3D_CHECK_INVALID_RANGE(seqh->matrix_coefficients, 0, 11)) {
av_log(avctx, AV_LOG_ERROR,
"Invalid colour description: primaries: %d"
"transfer characteristics: %d"
"matrix coefficients: %d.\n",
seqh->colour_primaries,
seqh->transfer_characteristics,
seqh->matrix_coefficients);
} else {
avctx->color_primaries = ff_avs3_color_primaries_tab[seqh->colour_primaries];
avctx->color_trc = ff_avs3_color_transfer_tab [seqh->transfer_characteristics];
avctx->colorspace = ff_avs3_color_matrix_tab [seqh->matrix_coefficients];
}
}
}
if (frm_dec->got_pic) {
break;
}
}
}
*got_frame = h->dec_frame.got_pic;
if (!(*got_frame)) {
av_frame_unref(frm);
}
return buf_ptr - buf;
}
AVCodec ff_libuavs3d_decoder = {
.name = "libuavs3d",
.long_name = NULL_IF_CONFIG_SMALL("libuavs3d AVS3-P2/IEEE1857.10"),
.type = AVMEDIA_TYPE_VIDEO,
.id = AV_CODEC_ID_AVS3,
.priv_data_size = sizeof(uavs3d_context),
.init = libuavs3d_init,
.close = libuavs3d_end,
.decode = libuavs3d_decode_frame,
.capabilities = AV_CODEC_CAP_DR1 | AV_CODEC_CAP_DELAY | AV_CODEC_CAP_OTHER_THREADS,
.caps_internal = FF_CODEC_CAP_AUTO_THREADS,
.flush = libuavs3d_flush,
.pix_fmts = (const enum AVPixelFormat[]) { AV_PIX_FMT_YUV420P,
AV_PIX_FMT_YUV420P10LE,
AV_PIX_FMT_NONE },
.wrapper_name = "libuavs3d",
};

View File

@@ -1,279 +0,0 @@
/*
* Copyright (c) 2019 Paul B Mahol
*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include <stdint.h>
#include <zlib.h>
#include "libavutil/frame.h"
#include "libavutil/error.h"
#include "libavutil/log.h"
#include "avcodec.h"
#include "bytestream.h"
#include "codec.h"
#include "internal.h"
#include "packet.h"
#include "png.h"
#include "pngdsp.h"
typedef struct LSCRContext {
PNGDSPContext dsp;
AVCodecContext *avctx;
AVFrame *last_picture;
uint8_t *buffer;
int buffer_size;
uint8_t *crow_buf;
int crow_size;
uint8_t *last_row;
unsigned int last_row_size;
GetByteContext gb;
uint8_t *image_buf;
int image_linesize;
int row_size;
int cur_h;
int y;
z_stream zstream;
} LSCRContext;
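/* Each LSCR block is a PNG-style zlib stream: every row starts with a
 * filter type byte and is unfiltered against the previous row with the
 * shared PNG code. The image is stored bottom-up, which the negative
 * image_linesize set below accounts for. */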
static void handle_row(LSCRContext *s)
{
uint8_t *ptr, *last_row;
ptr = s->image_buf + s->image_linesize * s->y;
if (s->y == 0)
last_row = s->last_row;
else
last_row = ptr - s->image_linesize;
ff_png_filter_row(&s->dsp, ptr, s->crow_buf[0], s->crow_buf + 1,
last_row, s->row_size, 3);
s->y++;
}
static int decode_idat(LSCRContext *s, int length)
{
int ret;
s->zstream.avail_in = FFMIN(length, bytestream2_get_bytes_left(&s->gb));
s->zstream.next_in = s->gb.buffer;
if (length <= 0)
return AVERROR_INVALIDDATA;
bytestream2_skip(&s->gb, length);
/* decode one line if possible */
while (s->zstream.avail_in > 0) {
ret = inflate(&s->zstream, Z_PARTIAL_FLUSH);
if (ret != Z_OK && ret != Z_STREAM_END) {
av_log(s->avctx, AV_LOG_ERROR, "inflate returned error %d\n", ret);
return AVERROR_EXTERNAL;
}
if (s->zstream.avail_out == 0) {
if (s->y < s->cur_h) {
handle_row(s);
}
s->zstream.avail_out = s->crow_size;
s->zstream.next_out = s->crow_buf;
}
if (ret == Z_STREAM_END && s->zstream.avail_in > 0) {
av_log(s->avctx, AV_LOG_WARNING,
"%d undecompressed bytes left in buffer\n", s->zstream.avail_in);
return 0;
}
}
return 0;
}
static int decode_frame_lscr(AVCodecContext *avctx,
void *data, int *got_frame,
AVPacket *avpkt)
{
LSCRContext *const s = avctx->priv_data;
GetByteContext *gb = &s->gb;
AVFrame *frame = s->last_picture;
int ret, nb_blocks, offset = 0;
if (avpkt->size < 2)
return AVERROR_INVALIDDATA;
if (avpkt->size == 2)
return 0;
bytestream2_init(gb, avpkt->data, avpkt->size);
nb_blocks = bytestream2_get_le16(gb);
if (bytestream2_get_bytes_left(gb) < 2 + nb_blocks * (12 + 8))
return AVERROR_INVALIDDATA;
ret = ff_reget_buffer(avctx, frame,
nb_blocks ? 0 : FF_REGET_BUFFER_FLAG_READONLY);
if (ret < 0)
return ret;
for (int b = 0; b < nb_blocks; b++) {
int x, y, x2, y2, w, h, left;
uint32_t csize, size;
s->zstream.zalloc = ff_png_zalloc;
s->zstream.zfree = ff_png_zfree;
s->zstream.opaque = NULL;
if ((ret = inflateInit(&s->zstream)) != Z_OK) {
av_log(avctx, AV_LOG_ERROR, "inflateInit returned error %d\n", ret);
ret = AVERROR_EXTERNAL;
goto end;
}
bytestream2_seek(gb, 2 + b * 12, SEEK_SET);
x = bytestream2_get_le16(gb);
y = bytestream2_get_le16(gb);
x2 = bytestream2_get_le16(gb);
y2 = bytestream2_get_le16(gb);
        w = x2 - x;
        s->cur_h = h = y2 - y;
if (w <= 0 || x < 0 || x >= avctx->width || w + x > avctx->width ||
h <= 0 || y < 0 || y >= avctx->height || h + y > avctx->height) {
ret = AVERROR_INVALIDDATA;
goto end;
}
size = bytestream2_get_le32(gb);
frame->key_frame = (nb_blocks == 1) &&
(w == avctx->width) &&
(h == avctx->height) &&
(x == 0) && (y == 0);
bytestream2_seek(gb, 2 + nb_blocks * 12 + offset, SEEK_SET);
csize = bytestream2_get_be32(gb);
if (bytestream2_get_le32(gb) != MKTAG('I', 'D', 'A', 'T')) {
ret = AVERROR_INVALIDDATA;
goto end;
}
offset += size;
left = size;
s->y = 0;
s->row_size = w * 3;
av_fast_padded_malloc(&s->buffer, &s->buffer_size, s->row_size + 16);
if (!s->buffer) {
ret = AVERROR(ENOMEM);
goto end;
}
av_fast_padded_malloc(&s->last_row, &s->last_row_size, s->row_size);
if (!s->last_row) {
ret = AVERROR(ENOMEM);
goto end;
}
s->crow_size = w * 3 + 1;
s->crow_buf = s->buffer + 15;
s->zstream.avail_out = s->crow_size;
s->zstream.next_out = s->crow_buf;
s->image_buf = frame->data[0] + (avctx->height - y - 1) * frame->linesize[0] + x * 3;
s->image_linesize =-frame->linesize[0];
while (left > 16) {
ret = decode_idat(s, csize);
if (ret < 0)
goto end;
left -= csize + 16;
if (left > 16) {
bytestream2_skip(gb, 4);
csize = bytestream2_get_be32(gb);
if (bytestream2_get_le32(gb) != MKTAG('I', 'D', 'A', 'T')) {
ret = AVERROR_INVALIDDATA;
goto end;
}
}
}
inflateEnd(&s->zstream);
}
frame->pict_type = frame->key_frame ? AV_PICTURE_TYPE_I : AV_PICTURE_TYPE_P;
if ((ret = av_frame_ref(data, frame)) < 0)
return ret;
*got_frame = 1;
end:
inflateEnd(&s->zstream);
if (ret < 0)
return ret;
return avpkt->size;
}
static int lscr_decode_close(AVCodecContext *avctx)
{
LSCRContext *s = avctx->priv_data;
av_frame_free(&s->last_picture);
av_freep(&s->buffer);
av_freep(&s->last_row);
return 0;
}
static int lscr_decode_init(AVCodecContext *avctx)
{
LSCRContext *s = avctx->priv_data;
avctx->color_range = AVCOL_RANGE_JPEG;
avctx->pix_fmt = AV_PIX_FMT_BGR24;
s->avctx = avctx;
s->last_picture = av_frame_alloc();
if (!s->last_picture)
return AVERROR(ENOMEM);
ff_pngdsp_init(&s->dsp);
return 0;
}
static void lscr_decode_flush(AVCodecContext *avctx)
{
LSCRContext *s = avctx->priv_data;
av_frame_unref(s->last_picture);
}
AVCodec ff_lscr_decoder = {
.name = "lscr",
.long_name = NULL_IF_CONFIG_SMALL("LEAD Screen Capture"),
.type = AVMEDIA_TYPE_VIDEO,
.id = AV_CODEC_ID_LSCR,
.priv_data_size = sizeof(LSCRContext),
.init = lscr_decode_init,
.close = lscr_decode_close,
.decode = decode_frame_lscr,
.flush = lscr_decode_flush,
.capabilities = AV_CODEC_CAP_DR1,
.caps_internal = FF_CODEC_CAP_INIT_THREADSAFE,
};

View File

@@ -1,57 +0,0 @@
/*
* MJPEG decoder VLC code
* Copyright (c) 2000, 2001 Fabrice Bellard
* Copyright (c) 2003 Alex Beregszaszi
* Copyright (c) 2003-2004 Michael Niedermayer
*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include <stdint.h>
#include "libavutil/avassert.h"
#include "mjpegdec.h"
#include "vlc.h"
static int build_huffman_codes(uint8_t *huff_size, const uint8_t *bits_table)
{
int nb_codes = 0;
for (int i = 1, j = 0; i <= 16; i++) {
nb_codes += bits_table[i];
av_assert1(nb_codes <= 256);
for (; j < nb_codes; j++)
huff_size[j] = i;
}
return nb_codes;
}
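/* Build a VLC from a JPEG DHT-style table description (bits_table[i] is the
 * number of codes of length i, val_table the symbols in code order). AC
 * symbols are biased by 16, and the AC end-of-block symbol (value 0) is
 * remapped to 16 * 256, a value outside the normal run/size range, so the
 * decoder can recognize it directly from the VLC result. */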
int ff_mjpeg_build_vlc(VLC *vlc, const uint8_t *bits_table,
const uint8_t *val_table, int is_ac, void *logctx)
{
uint8_t huff_size[256];
uint16_t huff_sym[256];
int nb_codes = build_huffman_codes(huff_size, bits_table);
for (int i = 0; i < nb_codes; i++) {
huff_sym[i] = val_table[i] + 16 * is_ac;
if (is_ac && !val_table[i])
huff_sym[i] = 16 * 256;
}
return ff_init_vlc_from_lengths(vlc, 9, nb_codes, huff_size, 1,
huff_sym, 2, 2, 0, 0, logctx);
}

File diff suppressed because it is too large

View File

@@ -1,482 +0,0 @@
/*
* MPEG Audio decoder
* copyright (c) 2002 Fabrice Bellard
*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
/**
* @file
* mpeg audio layer decoder tables.
*/
#include <stddef.h>
#include <stdint.h>
#include "libavutil/avassert.h"
#include "libavutil/thread.h"
#include "mpegaudiodata.h"
#include "mpegaudiodec_common_tablegen.h"
uint16_t ff_scale_factor_modshift[64];
static int16_t division_tab3[1 << 6 ];
static int16_t division_tab5[1 << 8 ];
static int16_t division_tab9[1 << 11];
int16_t *const ff_division_tabs[4] = {
division_tab3, division_tab5, NULL, division_tab9
};
/*******************************************************/
/* layer 3 tables */
const uint8_t ff_slen_table[2][16] = {
{ 0, 0, 0, 0, 3, 1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4 },
{ 0, 1, 2, 3, 0, 1, 2, 3, 1, 2, 3, 1, 2, 3, 2, 3 },
};
const uint8_t ff_lsf_nsf_table[6][3][4] = {
{ { 6, 5, 5, 5 }, { 9, 9, 9, 9 }, { 6, 9, 9, 9 } },
{ { 6, 5, 7, 3 }, { 9, 9, 12, 6 }, { 6, 9, 12, 6 } },
{ { 11, 10, 0, 0 }, { 18, 18, 0, 0 }, { 15, 18, 0, 0 } },
{ { 7, 7, 7, 0 }, { 12, 12, 12, 0 }, { 6, 15, 12, 0 } },
{ { 6, 6, 6, 3 }, { 12, 9, 9, 6 }, { 6, 12, 9, 6 } },
{ { 8, 8, 5, 0 }, { 15, 12, 9, 0 }, { 6, 18, 9, 0 } },
};
/* mpegaudio layer 3 huffman tables */
VLC ff_huff_vlc[16];
static VLC_TYPE huff_vlc_tables[128 + 128 + 128 + 130 + 128 + 154 + 166 + 142 +
204 + 190 + 170 + 542 + 460 + 662 + 414][2];
VLC ff_huff_quad_vlc[2];
static VLC_TYPE huff_quad_vlc_tables[64 + 16][2];
static const uint8_t mpa_hufflens[] = {
/* Huffman table 1 - 4 entries */
3, 3, 2, 1,
/* Huffman table 2 - 9 entries */
6, 6, 5, 5, 5, 3, 3, 3, 1,
/* Huffman table 3 - 9 entries */
6, 6, 5, 5, 5, 3, 2, 2, 2,
/* Huffman table 5 - 16 entries */
8, 8, 7, 6, 7, 7, 7, 7, 6, 6, 6, 6, 3, 3, 3, 1,
/* Huffman table 6 - 16 entries */
7, 7, 6, 6, 6, 5, 5, 5, 5, 4, 4, 4, 3, 2, 3, 3,
/* Huffman table 7 - 36 entries */
10, 10, 10, 10, 9, 9, 9, 9, 8, 8, 9, 9, 8, 9, 9, 8, 8, 7, 7,
7, 8, 8, 8, 8, 7, 7, 7, 7, 6, 5, 6, 6, 4, 3, 3, 1,
/* Huffman table 8 - 36 entries */
11, 11, 10, 9, 10, 10, 9, 9, 9, 8, 8, 9, 9, 9, 9, 8, 8, 8, 7,
8, 8, 8, 8, 8, 8, 8, 8, 6, 6, 6, 4, 4, 2, 3, 3, 2,
/* Huffman table 9 - 36 entries */
9, 9, 8, 8, 9, 9, 8, 8, 8, 8, 7, 7, 7, 8, 8, 7, 7, 7, 7,
6, 6, 6, 6, 5, 5, 6, 6, 5, 5, 4, 4, 4, 3, 3, 3, 3,
/* Huffman table 10 - 64 entries */
11, 11, 11, 11, 11, 11, 10, 10, 10, 10, 10, 10, 10, 11, 11, 10, 9, 9, 10,
10, 9, 9, 10, 10, 9, 10, 10, 8, 8, 9, 9, 10, 10, 9, 9, 10, 10, 8,
8, 8, 9, 9, 9, 9, 9, 9, 8, 8, 8, 8, 8, 8, 7, 7, 7, 7, 6,
6, 6, 6, 4, 3, 3, 1,
/* Huffman table 11 - 64 entries */
10, 10, 10, 10, 10, 10, 10, 11, 11, 10, 10, 9, 9, 9, 10, 10, 10, 10, 8,
8, 9, 9, 7, 8, 8, 8, 8, 8, 9, 9, 9, 9, 8, 7, 8, 8, 7, 7,
8, 8, 8, 9, 9, 8, 8, 8, 8, 8, 8, 7, 7, 6, 6, 7, 7, 6, 5,
4, 5, 5, 3, 3, 3, 2,
/* Huffman table 12 - 64 entries */
10, 10, 9, 9, 9, 9, 9, 9, 9, 8, 8, 9, 9, 8, 8, 8, 8, 8, 8,
9, 9, 8, 8, 8, 8, 8, 9, 9, 7, 7, 7, 8, 8, 8, 8, 8, 8, 7,
7, 7, 7, 8, 8, 7, 7, 7, 6, 6, 6, 6, 7, 7, 6, 5, 5, 5, 4,
4, 5, 5, 4, 3, 3, 3,
/* Huffman table 13 - 256 entries */
19, 19, 18, 17, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 17, 17, 15, 15, 16,
16, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 16, 16, 15, 16, 16, 14, 14, 15,
15, 15, 15, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 15, 15, 14, 13, 14,
14, 13, 13, 14, 14, 13, 14, 14, 13, 14, 14, 13, 14, 14, 13, 13, 14, 14, 12,
12, 12, 13, 13, 13, 13, 13, 13, 12, 13, 13, 12, 12, 13, 13, 13, 13, 13, 13,
13, 13, 13, 13, 13, 13, 12, 12, 13, 13, 12, 12, 12, 12, 13, 13, 13, 13, 12,
13, 13, 12, 11, 12, 12, 12, 12, 12, 12, 12, 12, 11, 11, 11, 11, 12, 12, 11,
11, 12, 12, 11, 12, 12, 12, 12, 11, 11, 12, 12, 11, 12, 12, 11, 12, 12, 11,
12, 12, 10, 10, 10, 11, 11, 11, 11, 11, 11, 11, 11, 10, 10, 10, 10, 11, 11,
10, 11, 11, 10, 11, 11, 11, 11, 10, 10, 11, 11, 10, 10, 11, 11, 11, 11, 11,
11, 9, 9, 10, 10, 10, 10, 10, 11, 11, 9, 9, 9, 10, 10, 9, 9, 10, 10,
10, 10, 10, 10, 10, 10, 10, 10, 8, 9, 9, 9, 9, 9, 9, 10, 10, 9, 9,
9, 8, 8, 9, 9, 9, 9, 9, 9, 8, 7, 8, 8, 8, 8, 7, 7, 7, 7,
7, 6, 6, 6, 6, 4, 4, 3, 1,
/* Huffman table 15 - 256 entries */
13, 13, 13, 13, 12, 13, 13, 13, 13, 13, 13, 12, 13, 13, 12, 12, 12, 12, 12,
12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 13,
13, 11, 11, 12, 12, 12, 12, 11, 11, 11, 11, 11, 11, 12, 12, 11, 11, 11, 11,
11, 11, 11, 11, 12, 12, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11,
11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 12, 12, 11, 11, 11, 11, 11,
11, 10, 11, 11, 11, 11, 11, 11, 10, 10, 11, 11, 10, 10, 10, 10, 11, 11, 10,
10, 10, 10, 10, 10, 10, 11, 11, 10, 10, 10, 10, 10, 11, 11, 9, 10, 10, 10,
10, 10, 10, 10, 10, 10, 10, 10, 10, 9, 10, 10, 10, 10, 9, 10, 10, 9, 10,
10, 10, 10, 10, 10, 10, 10, 9, 9, 9, 9, 9, 9, 9, 10, 10, 9, 9, 9,
9, 9, 9, 10, 10, 9, 9, 9, 9, 9, 9, 8, 9, 9, 9, 9, 9, 9, 9,
9, 9, 9, 8, 8, 8, 8, 9, 9, 9, 9, 9, 9, 9, 9, 8, 8, 8, 8,
8, 8, 9, 9, 8, 8, 8, 8, 8, 8, 8, 9, 9, 8, 7, 8, 8, 7, 7,
7, 7, 8, 8, 7, 7, 7, 7, 7, 6, 7, 7, 6, 6, 7, 7, 6, 6, 6,
5, 5, 5, 5, 5, 3, 4, 4, 3,
/* Huffman table 16 - 256 entries */
11, 11, 11, 11, 11, 11, 11, 11, 10, 11, 11, 11, 11, 10, 10, 10, 10, 10, 8,
10, 10, 9, 9, 9, 9, 10, 16, 17, 17, 15, 15, 16, 16, 14, 15, 15, 14, 14,
15, 15, 14, 14, 15, 15, 15, 15, 14, 15, 15, 14, 13, 8, 9, 9, 8, 8, 13,
14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 13, 13, 14, 14, 14, 14, 13, 14, 14,
13, 13, 13, 14, 14, 14, 14, 13, 13, 14, 14, 13, 14, 14, 12, 13, 13, 13, 13,
13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 12, 13, 13, 13, 13, 13, 13, 12, 13,
13, 12, 12, 13, 13, 11, 12, 12, 12, 12, 12, 12, 12, 13, 13, 11, 12, 12, 12,
12, 11, 12, 12, 12, 12, 12, 12, 12, 12, 11, 12, 12, 11, 11, 11, 11, 12, 12,
12, 12, 12, 12, 12, 12, 11, 12, 12, 11, 12, 12, 11, 12, 12, 11, 12, 12, 11,
10, 10, 11, 11, 11, 11, 11, 11, 10, 10, 11, 11, 10, 10, 11, 11, 11, 11, 11,
11, 11, 11, 10, 11, 11, 10, 10, 10, 11, 11, 10, 10, 11, 11, 10, 10, 11, 11,
10, 9, 9, 10, 10, 10, 10, 10, 10, 9, 9, 9, 10, 10, 9, 10, 10, 9, 9,
8, 9, 9, 9, 9, 9, 9, 9, 9, 8, 8, 9, 9, 8, 8, 7, 7, 8, 8,
7, 6, 6, 6, 6, 4, 4, 3, 1,
/* Huffman table 24 - 256 entries */
8, 8, 8, 8, 8, 8, 8, 8, 7, 8, 8, 7, 7, 8, 8, 7, 7, 7, 7,
7, 7, 7, 7, 7, 7, 7, 7, 8, 8, 9, 11, 11, 11, 11, 11, 11, 11, 11,
11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11,
11, 4, 11, 11, 11, 11, 12, 12, 11, 10, 11, 11, 10, 10, 10, 10, 11, 11, 10,
10, 10, 10, 11, 11, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10,
10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10,
10, 10, 10, 10, 10, 10, 11, 11, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10,
10, 10, 11, 11, 10, 11, 11, 10, 9, 10, 10, 10, 10, 11, 11, 10, 9, 9, 10,
10, 9, 10, 10, 10, 10, 9, 9, 10, 10, 9, 9, 9, 9, 9, 9, 9, 9, 9,
9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9,
9, 9, 9, 9, 9, 9, 9, 9, 10, 10, 9, 9, 9, 10, 10, 8, 9, 9, 8,
8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 9, 9, 8, 8, 8, 8, 8,
8, 9, 9, 7, 8, 8, 7, 7, 7, 7, 7, 8, 8, 7, 7, 6, 6, 7, 7,
6, 5, 5, 6, 6, 4, 4, 4, 4,
};
static const uint8_t mpa_huffsymbols[] = {
/* Huffman table 1 - 4 entries */
0x11, 0x01, 0x10, 0x00,
/* Huffman table 2 - 9 entries */
0x22, 0x02, 0x12, 0x21, 0x20, 0x11, 0x01, 0x10, 0x00,
/* Huffman table 3 - 9 entries */
0x22, 0x02, 0x12, 0x21, 0x20, 0x10, 0x11, 0x01, 0x00,
/* Huffman table 5 - 16 entries */
0x33, 0x23, 0x32, 0x31, 0x13, 0x03, 0x30, 0x22, 0x12, 0x21, 0x02, 0x20,
0x11, 0x01, 0x10, 0x00,
/* Huffman table 6 - 16 entries */
0x33, 0x03, 0x23, 0x32, 0x30, 0x13, 0x31, 0x22, 0x02, 0x12, 0x21, 0x20,
0x01, 0x11, 0x10, 0x00,
/* Huffman table 7 - 36 entries */
0x55, 0x45, 0x54, 0x53, 0x35, 0x44, 0x25, 0x52, 0x15, 0x51, 0x05, 0x34,
0x50, 0x43, 0x33, 0x24, 0x42, 0x14, 0x41, 0x40, 0x04, 0x23, 0x32, 0x03,
0x13, 0x31, 0x30, 0x22, 0x12, 0x21, 0x02, 0x20, 0x11, 0x01, 0x10, 0x00,
/* Huffman table 8 - 36 entries */
0x55, 0x54, 0x45, 0x53, 0x35, 0x44, 0x25, 0x52, 0x05, 0x15, 0x51, 0x34,
0x43, 0x50, 0x33, 0x24, 0x42, 0x14, 0x41, 0x04, 0x40, 0x23, 0x32, 0x13,
0x31, 0x03, 0x30, 0x22, 0x02, 0x20, 0x12, 0x21, 0x11, 0x01, 0x10, 0x00,
/* Huffman table 9 - 36 entries */
0x55, 0x45, 0x35, 0x53, 0x54, 0x05, 0x44, 0x25, 0x52, 0x15, 0x51, 0x34,
0x43, 0x50, 0x04, 0x24, 0x42, 0x33, 0x40, 0x14, 0x41, 0x23, 0x32, 0x13,
0x31, 0x03, 0x30, 0x22, 0x02, 0x12, 0x21, 0x20, 0x11, 0x01, 0x10, 0x00,
/* Huffman table 10 - 64 entries */
0x77, 0x67, 0x76, 0x57, 0x75, 0x66, 0x47, 0x74, 0x56, 0x65, 0x37, 0x73,
0x46, 0x55, 0x54, 0x63, 0x27, 0x72, 0x64, 0x07, 0x70, 0x62, 0x45, 0x35,
0x06, 0x53, 0x44, 0x17, 0x71, 0x36, 0x26, 0x25, 0x52, 0x15, 0x51, 0x34,
0x43, 0x16, 0x61, 0x60, 0x05, 0x50, 0x24, 0x42, 0x33, 0x04, 0x14, 0x41,
0x40, 0x23, 0x32, 0x03, 0x13, 0x31, 0x30, 0x22, 0x12, 0x21, 0x02, 0x20,
0x11, 0x01, 0x10, 0x00,
/* Huffman table 11 - 64 entries */
0x77, 0x67, 0x76, 0x75, 0x66, 0x47, 0x74, 0x57, 0x55, 0x56, 0x65, 0x37,
0x73, 0x46, 0x45, 0x54, 0x35, 0x53, 0x27, 0x72, 0x64, 0x07, 0x71, 0x17,
0x70, 0x36, 0x63, 0x60, 0x44, 0x25, 0x52, 0x05, 0x15, 0x62, 0x26, 0x06,
0x16, 0x61, 0x51, 0x34, 0x50, 0x43, 0x33, 0x24, 0x42, 0x14, 0x41, 0x04,
0x40, 0x23, 0x32, 0x13, 0x31, 0x03, 0x30, 0x22, 0x21, 0x12, 0x02, 0x20,
0x11, 0x01, 0x10, 0x00,
/* Huffman table 12 - 64 entries */
0x77, 0x67, 0x76, 0x57, 0x75, 0x66, 0x47, 0x74, 0x65, 0x56, 0x37, 0x73,
0x55, 0x27, 0x72, 0x46, 0x64, 0x17, 0x71, 0x07, 0x70, 0x36, 0x63, 0x45,
0x54, 0x44, 0x06, 0x05, 0x26, 0x62, 0x61, 0x16, 0x60, 0x35, 0x53, 0x25,
0x52, 0x15, 0x51, 0x34, 0x43, 0x50, 0x04, 0x24, 0x42, 0x14, 0x33, 0x41,
0x23, 0x32, 0x40, 0x03, 0x30, 0x13, 0x31, 0x22, 0x12, 0x21, 0x02, 0x20,
0x00, 0x11, 0x01, 0x10,
/* Huffman table 13 - 256 entries */
0xFE, 0xFC, 0xFD, 0xED, 0xFF, 0xEF, 0xDF, 0xEE, 0xCF, 0xDE, 0xBF, 0xFB,
0xCE, 0xDC, 0xAF, 0xE9, 0xEC, 0xDD, 0xFA, 0xCD, 0xBE, 0xEB, 0x9F, 0xF9,
0xEA, 0xBD, 0xDB, 0x8F, 0xF8, 0xCC, 0xAE, 0x9E, 0x8E, 0x7F, 0x7E, 0xF7,
0xDA, 0xAD, 0xBC, 0xCB, 0xF6, 0x6F, 0xE8, 0x5F, 0x9D, 0xD9, 0xF5, 0xE7,
0xAC, 0xBB, 0x4F, 0xF4, 0xCA, 0xE6, 0xF3, 0x3F, 0x8D, 0xD8, 0x2F, 0xF2,
0x6E, 0x9C, 0x0F, 0xC9, 0x5E, 0xAB, 0x7D, 0xD7, 0x4E, 0xC8, 0xD6, 0x3E,
0xB9, 0x9B, 0xAA, 0x1F, 0xF1, 0xF0, 0xBA, 0xE5, 0xE4, 0x8C, 0x6D, 0xE3,
0xE2, 0x2E, 0x0E, 0x1E, 0xE1, 0xE0, 0x5D, 0xD5, 0x7C, 0xC7, 0x4D, 0x8B,
0xB8, 0xD4, 0x9A, 0xA9, 0x6C, 0xC6, 0x3D, 0xD3, 0x7B, 0x2D, 0xD2, 0x1D,
0xB7, 0x5C, 0xC5, 0x99, 0x7A, 0xC3, 0xA7, 0x97, 0x4B, 0xD1, 0x0D, 0xD0,
0x8A, 0xA8, 0x4C, 0xC4, 0x6B, 0xB6, 0x3C, 0x2C, 0xC2, 0x5B, 0xB5, 0x89,
0x1C, 0xC1, 0x98, 0x0C, 0xC0, 0xB4, 0x6A, 0xA6, 0x79, 0x3B, 0xB3, 0x88,
0x5A, 0x2B, 0xA5, 0x69, 0xA4, 0x78, 0x87, 0x94, 0x77, 0x76, 0xB2, 0x1B,
0xB1, 0x0B, 0xB0, 0x96, 0x4A, 0x3A, 0xA3, 0x59, 0x95, 0x2A, 0xA2, 0x1A,
0xA1, 0x0A, 0x68, 0xA0, 0x86, 0x49, 0x93, 0x39, 0x58, 0x85, 0x67, 0x29,
0x92, 0x57, 0x75, 0x38, 0x83, 0x66, 0x47, 0x74, 0x56, 0x65, 0x73, 0x19,
0x91, 0x09, 0x90, 0x48, 0x84, 0x72, 0x46, 0x64, 0x28, 0x82, 0x18, 0x37,
0x27, 0x17, 0x71, 0x55, 0x07, 0x70, 0x36, 0x63, 0x45, 0x54, 0x26, 0x62,
0x35, 0x81, 0x08, 0x80, 0x16, 0x61, 0x06, 0x60, 0x53, 0x44, 0x25, 0x52,
0x05, 0x15, 0x51, 0x34, 0x43, 0x50, 0x24, 0x42, 0x33, 0x14, 0x41, 0x04,
0x40, 0x23, 0x32, 0x13, 0x31, 0x03, 0x30, 0x22, 0x12, 0x21, 0x02, 0x20,
0x11, 0x01, 0x10, 0x00,
/* Huffman table 15 - 256 entries */
0xFF, 0xEF, 0xFE, 0xDF, 0xEE, 0xFD, 0xCF, 0xFC, 0xDE, 0xED, 0xBF, 0xFB,
0xCE, 0xEC, 0xDD, 0xAF, 0xFA, 0xBE, 0xEB, 0xCD, 0xDC, 0x9F, 0xF9, 0xEA,
0xBD, 0xDB, 0x8F, 0xF8, 0xCC, 0x9E, 0xE9, 0x7F, 0xF7, 0xAD, 0xDA, 0xBC,
0x6F, 0xAE, 0x0F, 0xCB, 0xF6, 0x8E, 0xE8, 0x5F, 0x9D, 0xF5, 0x7E, 0xE7,
0xAC, 0xCA, 0xBB, 0xD9, 0x8D, 0x4F, 0xF4, 0x3F, 0xF3, 0xD8, 0xE6, 0x2F,
0xF2, 0x6E, 0xF0, 0x1F, 0xF1, 0x9C, 0xC9, 0x5E, 0xAB, 0xBA, 0xE5, 0x7D,
0xD7, 0x4E, 0xE4, 0x8C, 0xC8, 0x3E, 0x6D, 0xD6, 0xE3, 0x9B, 0xB9, 0x2E,
0xAA, 0xE2, 0x1E, 0xE1, 0x0E, 0xE0, 0x5D, 0xD5, 0x7C, 0xC7, 0x4D, 0x8B,
0xD4, 0xB8, 0x9A, 0xA9, 0x6C, 0xC6, 0x3D, 0xD3, 0xD2, 0x2D, 0x0D, 0x1D,
0x7B, 0xB7, 0xD1, 0x5C, 0xD0, 0xC5, 0x8A, 0xA8, 0x4C, 0xC4, 0x6B, 0xB6,
0x99, 0x0C, 0x3C, 0xC3, 0x7A, 0xA7, 0xA6, 0xC0, 0x0B, 0xC2, 0x2C, 0x5B,
0xB5, 0x1C, 0x89, 0x98, 0xC1, 0x4B, 0xB4, 0x6A, 0x3B, 0x79, 0xB3, 0x97,
0x88, 0x2B, 0x5A, 0xB2, 0xA5, 0x1B, 0xB1, 0xB0, 0x69, 0x96, 0x4A, 0xA4,
0x78, 0x87, 0x3A, 0xA3, 0x59, 0x95, 0x2A, 0xA2, 0x1A, 0xA1, 0x0A, 0xA0,
0x68, 0x86, 0x49, 0x94, 0x39, 0x93, 0x77, 0x09, 0x58, 0x85, 0x29, 0x67,
0x76, 0x92, 0x91, 0x19, 0x90, 0x48, 0x84, 0x57, 0x75, 0x38, 0x83, 0x66,
0x47, 0x28, 0x82, 0x18, 0x81, 0x74, 0x08, 0x80, 0x56, 0x65, 0x37, 0x73,
0x46, 0x27, 0x72, 0x64, 0x17, 0x55, 0x71, 0x07, 0x70, 0x36, 0x63, 0x45,
0x54, 0x26, 0x62, 0x16, 0x06, 0x60, 0x35, 0x61, 0x53, 0x44, 0x25, 0x52,
0x15, 0x51, 0x05, 0x50, 0x34, 0x43, 0x24, 0x42, 0x33, 0x41, 0x14, 0x04,
0x23, 0x32, 0x40, 0x03, 0x13, 0x31, 0x30, 0x22, 0x12, 0x21, 0x02, 0x20,
0x11, 0x01, 0x10, 0x00,
/* Huffman table 16 - 256 entries */
0xEF, 0xFE, 0xDF, 0xFD, 0xCF, 0xFC, 0xBF, 0xFB, 0xAF, 0xFA, 0x9F, 0xF9,
0xF8, 0x8F, 0x7F, 0xF7, 0x6F, 0xF6, 0xFF, 0x5F, 0xF5, 0x4F, 0xF4, 0xF3,
0xF0, 0x3F, 0xCE, 0xEC, 0xDD, 0xDE, 0xE9, 0xEA, 0xD9, 0xEE, 0xED, 0xEB,
0xBE, 0xCD, 0xDC, 0xDB, 0xAE, 0xCC, 0xAD, 0xDA, 0x7E, 0xAC, 0xCA, 0xC9,
0x7D, 0x5E, 0xBD, 0xF2, 0x2F, 0x0F, 0x1F, 0xF1, 0x9E, 0xBC, 0xCB, 0x8E,
0xE8, 0x9D, 0xE7, 0xBB, 0x8D, 0xD8, 0x6E, 0xE6, 0x9C, 0xAB, 0xBA, 0xE5,
0xD7, 0x4E, 0xE4, 0x8C, 0xC8, 0x3E, 0x6D, 0xD6, 0x9B, 0xB9, 0xAA, 0xE1,
0xD4, 0xB8, 0xA9, 0x7B, 0xB7, 0xD0, 0xE3, 0x0E, 0xE0, 0x5D, 0xD5, 0x7C,
0xC7, 0x4D, 0x8B, 0x9A, 0x6C, 0xC6, 0x3D, 0x5C, 0xC5, 0x0D, 0x8A, 0xA8,
0x99, 0x4C, 0xB6, 0x7A, 0x3C, 0x5B, 0x89, 0x1C, 0xC0, 0x98, 0x79, 0xE2,
0x2E, 0x1E, 0xD3, 0x2D, 0xD2, 0xD1, 0x3B, 0x97, 0x88, 0x1D, 0xC4, 0x6B,
0xC3, 0xA7, 0x2C, 0xC2, 0xB5, 0xC1, 0x0C, 0x4B, 0xB4, 0x6A, 0xA6, 0xB3,
0x5A, 0xA5, 0x2B, 0xB2, 0x1B, 0xB1, 0x0B, 0xB0, 0x69, 0x96, 0x4A, 0xA4,
0x78, 0x87, 0xA3, 0x3A, 0x59, 0x2A, 0x95, 0x68, 0xA1, 0x86, 0x77, 0x94,
0x49, 0x57, 0x67, 0xA2, 0x1A, 0x0A, 0xA0, 0x39, 0x93, 0x58, 0x85, 0x29,
0x92, 0x76, 0x09, 0x19, 0x91, 0x90, 0x48, 0x84, 0x75, 0x38, 0x83, 0x66,
0x28, 0x82, 0x47, 0x74, 0x18, 0x81, 0x80, 0x08, 0x56, 0x37, 0x73, 0x65,
0x46, 0x27, 0x72, 0x64, 0x55, 0x07, 0x17, 0x71, 0x70, 0x36, 0x63, 0x45,
0x54, 0x26, 0x62, 0x16, 0x61, 0x06, 0x60, 0x53, 0x35, 0x44, 0x25, 0x52,
0x51, 0x15, 0x05, 0x34, 0x43, 0x50, 0x24, 0x42, 0x33, 0x14, 0x41, 0x04,
0x40, 0x23, 0x32, 0x13, 0x31, 0x03, 0x30, 0x22, 0x12, 0x21, 0x02, 0x20,
0x11, 0x01, 0x10, 0x00,
/* Huffman table 24 - 256 entries */
0xEF, 0xFE, 0xDF, 0xFD, 0xCF, 0xFC, 0xBF, 0xFB, 0xFA, 0xAF, 0x9F, 0xF9,
0xF8, 0x8F, 0x7F, 0xF7, 0x6F, 0xF6, 0x5F, 0xF5, 0x4F, 0xF4, 0x3F, 0xF3,
0x2F, 0xF2, 0xF1, 0x1F, 0xF0, 0x0F, 0xEE, 0xDE, 0xED, 0xCE, 0xEC, 0xDD,
0xBE, 0xEB, 0xCD, 0xDC, 0xAE, 0xEA, 0xBD, 0xDB, 0xCC, 0x9E, 0xE9, 0xAD,
0xDA, 0xBC, 0xCB, 0x8E, 0xE8, 0x9D, 0xD9, 0x7E, 0xE7, 0xAC, 0xFF, 0xCA,
0xBB, 0x8D, 0xD8, 0x0E, 0xE0, 0x0D, 0xE6, 0x6E, 0x9C, 0xC9, 0x5E, 0xBA,
0xE5, 0xAB, 0x7D, 0xD7, 0xE4, 0x8C, 0xC8, 0x4E, 0x2E, 0x3E, 0x6D, 0xD6,
0xE3, 0x9B, 0xB9, 0xAA, 0xE2, 0x1E, 0xE1, 0x5D, 0xD5, 0x7C, 0xC7, 0x4D,
0x8B, 0xB8, 0xD4, 0x9A, 0xA9, 0x6C, 0xC6, 0x3D, 0xD3, 0x2D, 0xD2, 0x1D,
0x7B, 0xB7, 0xD1, 0x5C, 0xC5, 0x8A, 0xA8, 0x99, 0x4C, 0xC4, 0x6B, 0xB6,
0xD0, 0x0C, 0x3C, 0xC3, 0x7A, 0xA7, 0x2C, 0xC2, 0x5B, 0xB5, 0x1C, 0x89,
0x98, 0xC1, 0x4B, 0xC0, 0x0B, 0x3B, 0xB0, 0x0A, 0x1A, 0xB4, 0x6A, 0xA6,
0x79, 0x97, 0xA0, 0x09, 0x90, 0xB3, 0x88, 0x2B, 0x5A, 0xB2, 0xA5, 0x1B,
0xB1, 0x69, 0x96, 0xA4, 0x4A, 0x78, 0x87, 0x3A, 0xA3, 0x59, 0x95, 0x2A,
0xA2, 0xA1, 0x68, 0x86, 0x77, 0x49, 0x94, 0x39, 0x93, 0x58, 0x85, 0x29,
0x67, 0x76, 0x92, 0x19, 0x91, 0x48, 0x84, 0x57, 0x75, 0x38, 0x83, 0x66,
0x28, 0x82, 0x18, 0x47, 0x74, 0x81, 0x08, 0x80, 0x56, 0x65, 0x17, 0x07,
0x70, 0x73, 0x37, 0x27, 0x72, 0x46, 0x64, 0x55, 0x71, 0x36, 0x63, 0x45,
0x54, 0x26, 0x62, 0x16, 0x61, 0x06, 0x60, 0x35, 0x53, 0x44, 0x25, 0x52,
0x15, 0x05, 0x50, 0x51, 0x34, 0x43, 0x24, 0x42, 0x33, 0x14, 0x41, 0x04,
0x40, 0x23, 0x32, 0x13, 0x31, 0x03, 0x30, 0x22, 0x12, 0x21, 0x02, 0x20,
0x11, 0x01, 0x10, 0x00,
};
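/* Stored minus one so the 256-entry table sizes fit in a uint8_t. */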
static const uint8_t mpa_huff_sizes_minus_one[] =
{
3, 8, 8, 15, 15, 35, 35, 35, 63, 63, 63, 255, 255, 255, 255
};
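/* One entry per table_select value: { spectral VLC index, linbits }. */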
const uint8_t ff_mpa_huff_data[32][2] = {
{ 0, 0 },
{ 1, 0 },
{ 2, 0 },
{ 3, 0 },
{ 0, 0 },
{ 4, 0 },
{ 5, 0 },
{ 6, 0 },
{ 7, 0 },
{ 8, 0 },
{ 9, 0 },
{ 10, 0 },
{ 11, 0 },
{ 12, 0 },
{ 0, 0 },
{ 13, 0 },
{ 14, 1 },
{ 14, 2 },
{ 14, 3 },
{ 14, 4 },
{ 14, 6 },
{ 14, 8 },
{ 14, 10 },
{ 14, 13 },
{ 15, 4 },
{ 15, 5 },
{ 15, 6 },
{ 15, 7 },
{ 15, 8 },
{ 15, 9 },
{ 15, 11 },
{ 15, 13 },
};
/* huffman tables for quadruples */
static const uint8_t mpa_quad_codes[2][16] = {
{ 1, 5, 4, 5, 6, 5, 4, 4, 7, 3, 6, 0, 7, 2, 3, 1, },
{ 15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0, },
};
static const uint8_t mpa_quad_bits[2][16] = {
{ 1, 4, 4, 5, 4, 6, 5, 6, 4, 5, 5, 6, 5, 6, 6, 6, },
{ 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, },
};
const uint8_t ff_band_size_long[9][22] = {
{ 4, 4, 4, 4, 4, 4, 6, 6, 8, 8, 10,
12, 16, 20, 24, 28, 34, 42, 50, 54, 76, 158, }, /* 44100 */
{ 4, 4, 4, 4, 4, 4, 6, 6, 6, 8, 10,
12, 16, 18, 22, 28, 34, 40, 46, 54, 54, 192, }, /* 48000 */
{ 4, 4, 4, 4, 4, 4, 6, 6, 8, 10, 12,
16, 20, 24, 30, 38, 46, 56, 68, 84, 102, 26, }, /* 32000 */
{ 6, 6, 6, 6, 6, 6, 8, 10, 12, 14, 16,
20, 24, 28, 32, 38, 46, 52, 60, 68, 58, 54, }, /* 22050 */
{ 6, 6, 6, 6, 6, 6, 8, 10, 12, 14, 16,
18, 22, 26, 32, 38, 46, 52, 64, 70, 76, 36, }, /* 24000 */
{ 6, 6, 6, 6, 6, 6, 8, 10, 12, 14, 16,
20, 24, 28, 32, 38, 46, 52, 60, 68, 58, 54, }, /* 16000 */
{ 6, 6, 6, 6, 6, 6, 8, 10, 12, 14, 16,
20, 24, 28, 32, 38, 46, 52, 60, 68, 58, 54, }, /* 11025 */
{ 6, 6, 6, 6, 6, 6, 8, 10, 12, 14, 16,
20, 24, 28, 32, 38, 46, 52, 60, 68, 58, 54, }, /* 12000 */
{ 12, 12, 12, 12, 12, 12, 16, 20, 24, 28, 32,
40, 48, 56, 64, 76, 90, 2, 2, 2, 2, 2, }, /* 8000 */
};
const uint8_t ff_band_size_short[9][13] = {
{ 4, 4, 4, 4, 6, 8, 10, 12, 14, 18, 22, 30, 56, }, /* 44100 */
{ 4, 4, 4, 4, 6, 6, 10, 12, 14, 16, 20, 26, 66, }, /* 48000 */
{ 4, 4, 4, 4, 6, 8, 12, 16, 20, 26, 34, 42, 12, }, /* 32000 */
{ 4, 4, 4, 6, 6, 8, 10, 14, 18, 26, 32, 42, 18, }, /* 22050 */
{ 4, 4, 4, 6, 8, 10, 12, 14, 18, 24, 32, 44, 12, }, /* 24000 */
{ 4, 4, 4, 6, 8, 10, 12, 14, 18, 24, 30, 40, 18, }, /* 16000 */
{ 4, 4, 4, 6, 8, 10, 12, 14, 18, 24, 30, 40, 18, }, /* 11025 */
{ 4, 4, 4, 6, 8, 10, 12, 14, 18, 24, 30, 40, 18, }, /* 12000 */
{ 8, 8, 8, 12, 16, 20, 24, 28, 36, 2, 2, 2, 26, }, /* 8000 */
};
uint16_t ff_band_index_long[9][23];
const uint8_t ff_mpa_pretab[2][22] = {
{ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 },
{ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 3, 3, 3, 2, 0 },
};
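/* One-time setup of the tables shared between the MPEG audio decoders:
 * scale-factor mod/shift pairs, the 15 spectral and 2 quadruple Huffman
 * VLCs, the long-block band indices and the grouped-quantization
 * division tables. */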
static av_cold void mpegaudiodec_common_init_static(void)
{
const uint8_t *huff_sym = mpa_huffsymbols, *huff_lens = mpa_hufflens;
int offset;
/* scale factors table for layer 1/2 */
for (int i = 0; i < 64; i++) {
int shift, mod;
/* 1.0 (i = 3) is normalized to 2 ^ FRAC_BITS */
shift = i / 3;
mod = i % 3;
ff_scale_factor_modshift[i] = mod | (shift << 2);
}
/* huffman decode tables */
offset = 0;
for (int i = 0; i < 15;) {
uint16_t tmp_symbols[256];
int nb_codes_minus_one = mpa_huff_sizes_minus_one[i];
int j;
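/* Repack each symbol: low nibble kept in place, high nibble shifted up
 * one bit, with bit 4 flagging entries where both halves are nonzero. */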
for (j = 0; j <= nb_codes_minus_one; j++) {
uint8_t high = huff_sym[j] & 0xF0, low = huff_sym[j] & 0xF;
tmp_symbols[j] = high << 1 | ((high && low) << 4) | low;
}
ff_huff_vlc[++i].table = huff_vlc_tables + offset;
ff_huff_vlc[i].table_allocated = FF_ARRAY_ELEMS(huff_vlc_tables) - offset;
ff_init_vlc_from_lengths(&ff_huff_vlc[i], 7, j,
huff_lens, 1, tmp_symbols, 2, 2,
0, INIT_VLC_STATIC_OVERLONG, NULL);
offset += ff_huff_vlc[i].table_size;
huff_lens += j;
huff_sym += j;
}
av_assert0(offset == FF_ARRAY_ELEMS(huff_vlc_tables));
offset = 0;
for (int i = 0; i < 2; i++) {
int bits = i == 0 ? 6 : 4;
ff_huff_quad_vlc[i].table = huff_quad_vlc_tables + offset;
ff_huff_quad_vlc[i].table_allocated = 1 << bits;
offset += 1 << bits;
init_vlc(&ff_huff_quad_vlc[i], bits, 16,
mpa_quad_bits[i], 1, 1, mpa_quad_codes[i], 1, 1,
INIT_VLC_USE_NEW_STATIC);
}
av_assert0(offset == FF_ARRAY_ELEMS(huff_quad_vlc_tables));
for (int i = 0; i < 9; i++) {
int k = 0;
for (int j = 0; j < 22; j++) {
ff_band_index_long[i][j] = k;
k += ff_band_size_long[i][j] >> 1;
}
ff_band_index_long[i][22] = k;
}
for (int i = 0; i < 4; i++) {
if (ff_mpa_quant_bits[i] < 0) {
for (int j = 0; j < (1 << (-ff_mpa_quant_bits[i] + 1)); j++) {
int val1, val2, val3, steps;
int val = j;
steps = ff_mpa_quant_steps[i];
val1 = val % steps;
val /= steps;
val2 = val % steps;
val3 = val / steps;
ff_division_tabs[i][j] = val1 + (val2 << 4) + (val3 << 8);
}
}
}
mpegaudiodec_common_tableinit();
}
av_cold void ff_mpegaudiodec_common_init_static(void)
{
static AVOnce init_static_once = AV_ONCE_INIT;
ff_thread_once(&init_static_once, mpegaudiodec_common_init_static);
}

View File

@@ -1,40 +0,0 @@
/*
* Generate a header file for hardcoded shared mpegaudiodec tables
*
* Copyright (c) 2009 Reimar Döffinger <Reimar.Doeffinger@gmx.de>
* Copyright (c) 2020 Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include <stdlib.h>
#define CONFIG_HARDCODED_TABLES 0
#include "libavutil/tablegen.h"
#include "mpegaudiodec_common_tablegen.h"
#include "tableprint.h"
int main(void)
{
mpegaudiodec_common_tableinit();
write_fileheader();
WRITE_ARRAY("const", int8_t, ff_table_4_3_exp);
WRITE_ARRAY("const", uint32_t, ff_table_4_3_value);
return 0;
}

View File

@@ -1,72 +0,0 @@
/*
* Header file for hardcoded shared mpegaudiodec tables
*
* Copyright (c) 2009 Reimar Döffinger <Reimar.Doeffinger@gmx.de>
* Copyright (c) 2020 Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#ifndef AVCODEC_MPEGAUDIODEC_COMMON_TABLEGEN_H
#define AVCODEC_MPEGAUDIODEC_COMMON_TABLEGEN_H
#include <stdint.h>
#define TABLE_4_3_SIZE ((8191 + 16)*4)
#if CONFIG_HARDCODED_TABLES
#define mpegaudiodec_common_tableinit()
#include "libavcodec/mpegaudiodec_common_tables.h"
#else
#include <math.h>
#include "libavutil/attributes.h"
int8_t ff_table_4_3_exp [TABLE_4_3_SIZE];
uint32_t ff_table_4_3_value[TABLE_4_3_SIZE];
#define FRAC_BITS 23
#define IMDCT_SCALAR 1.759
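/* Entry i holds (i/4)^(4/3) * 2^((i&3)/4) / IMDCT_SCALAR, split by frexp()
 * into a 32-bit mantissa and an exponent for layer 3 requantization. */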
static av_cold void mpegaudiodec_common_tableinit(void)
{
static const double exp2_lut[4] = {
1.00000000000000000000, /* 2 ^ (0 * 0.25) */
1.18920711500272106672, /* 2 ^ (1 * 0.25) */
M_SQRT2 , /* 2 ^ (2 * 0.25) */
1.68179283050742908606, /* 2 ^ (3 * 0.25) */
};
double pow43_val = 0;
for (int i = 1; i < TABLE_4_3_SIZE; i++) {
double f, fm;
int e, m;
double value = i / 4;
if ((i & 3) == 0)
pow43_val = value / IMDCT_SCALAR * cbrt(value);
f = pow43_val * exp2_lut[i & 3];
fm = frexp(f, &e);
m = llrint(fm * (1LL << 31));
e += FRAC_BITS - 31 + 5 - 100;
/* normalized to FRAC_BITS */
ff_table_4_3_value[i] = m;
ff_table_4_3_exp [i] = -e;
}
}
#endif /* CONFIG_HARDCODED_TABLES */
#endif /* AVCODEC_MPEGAUDIODEC_COMMON_TABLEGEN_H */

View File

@@ -1,103 +0,0 @@
/*
* Microsoft Paint (MSP) version 2 decoder
* Copyright (c) 2020 Peter Ross (pross@xvid.org)
*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
/**
* @file
* Microsoft Paint (MSP) version 2 decoder
*/
#include "avcodec.h"
#include "bytestream.h"
#include "internal.h"
static int msp2_decode_frame(AVCodecContext *avctx,
void *data, int *got_frame,
AVPacket *avpkt)
{
const uint8_t *buf = avpkt->data;
int buf_size = avpkt->size;
AVFrame *p = data;
int ret;
unsigned int x, y, width = (avctx->width + 7) / 8;
GetByteContext idx, gb;
if (buf_size <= 2 * avctx->height)
return AVERROR_INVALIDDATA;
avctx->pix_fmt = AV_PIX_FMT_MONOBLACK;
if ((ret = ff_get_buffer(avctx, p, 0)) < 0)
return ret;
p->pict_type = AV_PICTURE_TYPE_I;
p->key_frame = 1;
bytestream2_init(&idx, buf, 2 * avctx->height);
buf += 2 * avctx->height;
buf_size -= 2 * avctx->height;
for (y = 0; y < avctx->height; y++) {
unsigned int pkt_size = bytestream2_get_le16(&idx);
if (!pkt_size) {
memset(p->data[0] + y * p->linesize[0], 0xFF, width);
continue;
}
if (pkt_size > buf_size) {
av_log(avctx, AV_LOG_WARNING, "image probably corrupt\n");
pkt_size = buf_size;
}
bytestream2_init(&gb, buf, pkt_size);
x = 0;
while (bytestream2_get_bytes_left(&gb) && x < width) {
int size = bytestream2_get_byte(&gb);
if (size) {
size = FFMIN(size, bytestream2_get_bytes_left(&gb));
memcpy(p->data[0] + y * p->linesize[0] + x, gb.buffer, FFMIN(size, width - x));
bytestream2_skip(&gb, size);
} else {
int value;
size = bytestream2_get_byte(&gb);
if (!size)
avpriv_request_sample(avctx, "escape value");
value = bytestream2_get_byte(&gb);
memset(p->data[0] + y * p->linesize[0] + x, value, FFMIN(size, width - x));
}
x += size;
}
buf += pkt_size;
buf_size -= pkt_size;
}
*got_frame = 1;
return buf_size;
}
AVCodec ff_msp2_decoder = {
.name = "msp2",
.long_name = NULL_IF_CONFIG_SMALL("Microsoft Paint (MSP) version 2"),
.type = AVMEDIA_TYPE_VIDEO,
.id = AV_CODEC_ID_MSP2,
.decode = msp2_decode_frame,
.capabilities = AV_CODEC_CAP_DR1,
};

View File

@@ -1,352 +0,0 @@
/*
* AV1 HW decode acceleration through NVDEC
*
* Copyright (c) 2020 Timo Rothenpieler
*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include "avcodec.h"
#include "nvdec.h"
#include "decode.h"
#include "internal.h"
#include "av1dec.h"
static int get_bit_depth_from_seq(const AV1RawSequenceHeader *seq)
{
if (seq->seq_profile == 2 && seq->color_config.high_bitdepth)
return seq->color_config.twelve_bit ? 12 : 10;
else if (seq->seq_profile <= 2 && seq->color_config.high_bitdepth)
return 10;
else
return 8;
}
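/* Translate the parsed AV1 sequence and frame headers into the
 * CUVIDPICPARAMS structure NVDEC expects, covering tile layout, CDEF,
 * segmentation, loop filter/restoration, reference frames, global motion
 * and film grain parameters. */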
static int nvdec_av1_start_frame(AVCodecContext *avctx, const uint8_t *buffer, uint32_t size)
{
const AV1DecContext *s = avctx->priv_data;
const AV1RawSequenceHeader *seq = s->raw_seq;
const AV1RawFrameHeader *frame_header = s->raw_frame_header;
const AV1RawFilmGrainParams *film_grain = &s->cur_frame.film_grain;
NVDECContext *ctx = avctx->internal->hwaccel_priv_data;
CUVIDPICPARAMS *pp = &ctx->pic_params;
CUVIDAV1PICPARAMS *ppc = &pp->CodecSpecific.av1;
FrameDecodeData *fdd;
NVDECFrame *cf;
AVFrame *cur_frame = s->cur_frame.tf.f;
unsigned char remap_lr_type[4] = { AV1_RESTORE_NONE, AV1_RESTORE_SWITCHABLE, AV1_RESTORE_WIENER, AV1_RESTORE_SGRPROJ };
int apply_grain = !(avctx->export_side_data & AV_CODEC_EXPORT_DATA_FILM_GRAIN) && film_grain->apply_grain;
int ret, i, j;
ret = ff_nvdec_start_frame_sep_ref(avctx, cur_frame, apply_grain);
if (ret < 0)
return ret;
fdd = (FrameDecodeData*)cur_frame->private_ref->data;
cf = (NVDECFrame*)fdd->hwaccel_priv;
*pp = (CUVIDPICPARAMS) {
.PicWidthInMbs = (cur_frame->width + 15) / 16,
.FrameHeightInMbs = (cur_frame->height + 15) / 16,
.CurrPicIdx = cf->idx,
.ref_pic_flag = !!frame_header->refresh_frame_flags,
.intra_pic_flag = frame_header->frame_type == AV1_FRAME_INTRA_ONLY ||
frame_header->frame_type == AV1_FRAME_KEY,
.CodecSpecific.av1 = {
.width = cur_frame->width,
.height = cur_frame->height,
.frame_offset = frame_header->order_hint,
.decodePicIdx = cf->ref_idx,
/* Sequence Header */
.profile = seq->seq_profile,
.use_128x128_superblock = seq->use_128x128_superblock,
.subsampling_x = seq->color_config.subsampling_x,
.subsampling_y = seq->color_config.subsampling_y,
.mono_chrome = seq->color_config.mono_chrome,
.bit_depth_minus8 = get_bit_depth_from_seq(seq) - 8,
.enable_filter_intra = seq->enable_filter_intra,
.enable_intra_edge_filter = seq->enable_intra_edge_filter,
.enable_interintra_compound = seq->enable_interintra_compound,
.enable_masked_compound = seq->enable_masked_compound,
.enable_dual_filter = seq->enable_dual_filter,
.enable_order_hint = seq->enable_order_hint,
.order_hint_bits_minus1 = seq->order_hint_bits_minus_1,
.enable_jnt_comp = seq->enable_jnt_comp,
.enable_superres = seq->enable_superres,
.enable_cdef = seq->enable_cdef,
.enable_restoration = seq->enable_restoration,
.enable_fgs = seq->film_grain_params_present &&
!(avctx->export_side_data & AV_CODEC_EXPORT_DATA_FILM_GRAIN),
/* Frame Header */
.frame_type = frame_header->frame_type,
.show_frame = frame_header->show_frame,
.disable_cdf_update = frame_header->disable_cdf_update,
.allow_screen_content_tools = frame_header->allow_screen_content_tools,
.force_integer_mv = frame_header->force_integer_mv ||
frame_header->frame_type == AV1_FRAME_INTRA_ONLY ||
frame_header->frame_type == AV1_FRAME_KEY,
.coded_denom = frame_header->coded_denom,
.allow_intrabc = frame_header->allow_intrabc,
.allow_high_precision_mv = frame_header->allow_high_precision_mv,
.interp_filter = frame_header->interpolation_filter,
.switchable_motion_mode = frame_header->is_motion_mode_switchable,
.use_ref_frame_mvs = frame_header->use_ref_frame_mvs,
.disable_frame_end_update_cdf = frame_header->disable_frame_end_update_cdf,
.delta_q_present = frame_header->delta_q_present,
.delta_q_res = frame_header->delta_q_res,
.using_qmatrix = frame_header->using_qmatrix,
.coded_lossless = s->cur_frame.coded_lossless,
.use_superres = frame_header->use_superres,
.tx_mode = frame_header->tx_mode,
.reference_mode = frame_header->reference_select,
.allow_warped_motion = frame_header->allow_warped_motion,
.reduced_tx_set = frame_header->reduced_tx_set,
.skip_mode = frame_header->skip_mode_present,
/* Tiling Info */
.num_tile_cols = frame_header->tile_cols,
.num_tile_rows = frame_header->tile_rows,
.context_update_tile_id = frame_header->context_update_tile_id,
/* CDEF */
.cdef_damping_minus_3 = frame_header->cdef_damping_minus_3,
.cdef_bits = frame_header->cdef_bits,
/* SkipModeFrames */
.SkipModeFrame0 = frame_header->skip_mode_present ?
s->cur_frame.skip_mode_frame_idx[0] : 0,
.SkipModeFrame1 = frame_header->skip_mode_present ?
s->cur_frame.skip_mode_frame_idx[1] : 0,
/* QP Information */
.base_qindex = frame_header->base_q_idx,
.qp_y_dc_delta_q = frame_header->delta_q_y_dc,
.qp_u_dc_delta_q = frame_header->delta_q_u_dc,
.qp_v_dc_delta_q = frame_header->delta_q_v_dc,
.qp_u_ac_delta_q = frame_header->delta_q_u_ac,
.qp_v_ac_delta_q = frame_header->delta_q_v_ac,
.qm_y = frame_header->qm_y,
.qm_u = frame_header->qm_u,
.qm_v = frame_header->qm_v,
/* Segmentation */
.segmentation_enabled = frame_header->segmentation_enabled,
.segmentation_update_map = frame_header->segmentation_update_map,
.segmentation_update_data = frame_header->segmentation_update_data,
.segmentation_temporal_update = frame_header->segmentation_temporal_update,
/* Loopfilter */
.loop_filter_level[0] = frame_header->loop_filter_level[0],
.loop_filter_level[1] = frame_header->loop_filter_level[1],
.loop_filter_level_u = frame_header->loop_filter_level[2],
.loop_filter_level_v = frame_header->loop_filter_level[3],
.loop_filter_sharpness = frame_header->loop_filter_sharpness,
.loop_filter_delta_enabled = frame_header->loop_filter_delta_enabled,
.loop_filter_delta_update = frame_header->loop_filter_delta_update,
.loop_filter_mode_deltas[0] = frame_header->loop_filter_mode_deltas[0],
.loop_filter_mode_deltas[1] = frame_header->loop_filter_mode_deltas[1],
.delta_lf_present = frame_header->delta_lf_present,
.delta_lf_res = frame_header->delta_lf_res,
.delta_lf_multi = frame_header->delta_lf_multi,
/* Restoration */
.lr_type[0] = remap_lr_type[frame_header->lr_type[0]],
.lr_type[1] = remap_lr_type[frame_header->lr_type[1]],
.lr_type[2] = remap_lr_type[frame_header->lr_type[2]],
.lr_unit_size[0] = 1 + frame_header->lr_unit_shift,
.lr_unit_size[1] = 1 + frame_header->lr_unit_shift - frame_header->lr_uv_shift,
.lr_unit_size[2] = 1 + frame_header->lr_unit_shift - frame_header->lr_uv_shift,
/* Reference Frames */
.temporal_layer_id = s->cur_frame.temporal_id,
.spatial_layer_id = s->cur_frame.spatial_id,
/* Film Grain Params */
.apply_grain = apply_grain,
.overlap_flag = film_grain->overlap_flag,
.scaling_shift_minus8 = film_grain->grain_scaling_minus_8,
.chroma_scaling_from_luma = film_grain->chroma_scaling_from_luma,
.ar_coeff_lag = film_grain->ar_coeff_lag,
.ar_coeff_shift_minus6 = film_grain->ar_coeff_shift_minus_6,
.grain_scale_shift = film_grain->grain_scale_shift,
.clip_to_restricted_range = film_grain->clip_to_restricted_range,
.num_y_points = film_grain->num_y_points,
.num_cb_points = film_grain->num_cb_points,
.num_cr_points = film_grain->num_cr_points,
.random_seed = film_grain->grain_seed,
.cb_mult = film_grain->cb_mult,
.cb_luma_mult = film_grain->cb_luma_mult,
.cb_offset = film_grain->cb_offset,
.cr_mult = film_grain->cr_mult,
.cr_luma_mult = film_grain->cr_luma_mult,
.cr_offset = film_grain->cr_offset
}
};
/* Tiling Info */
for (i = 0; i < frame_header->tile_cols; ++i) {
ppc->tile_widths[i] = frame_header->width_in_sbs_minus_1[i] + 1;
}
for (i = 0; i < frame_header->tile_rows; ++i) {
ppc->tile_heights[i] = frame_header->height_in_sbs_minus_1[i] + 1;
}
/* CDEF */
for (i = 0; i < (1 << frame_header->cdef_bits); ++i) {
ppc->cdef_y_strength[i] = (frame_header->cdef_y_pri_strength[i] & 0x0F) | (frame_header->cdef_y_sec_strength[i] << 4);
ppc->cdef_uv_strength[i] = (frame_header->cdef_uv_pri_strength[i] & 0x0F) | (frame_header->cdef_uv_sec_strength[i] << 4);
}
/* Segmentation */
for (i = 0; i < AV1_MAX_SEGMENTS; ++i) {
ppc->segmentation_feature_mask[i] = 0;
for (j = 0; j < AV1_SEG_LVL_MAX; ++j) {
ppc->segmentation_feature_mask[i] |= frame_header->feature_enabled[i][j] << j;
ppc->segmentation_feature_data[i][j] = frame_header->feature_value[i][j];
}
}
for (i = 0; i < AV1_NUM_REF_FRAMES; ++i) {
/* Loopfilter */
ppc->loop_filter_ref_deltas[i] = frame_header->loop_filter_ref_deltas[i];
/* Reference Frames */
ppc->ref_frame_map[i] = ff_nvdec_get_ref_idx(s->ref[i].tf.f);
}
if (frame_header->primary_ref_frame == AV1_PRIMARY_REF_NONE) {
ppc->primary_ref_frame = -1;
} else {
int8_t pri_ref_idx = frame_header->ref_frame_idx[frame_header->primary_ref_frame];
ppc->primary_ref_frame = ppc->ref_frame_map[pri_ref_idx];
}
for (i = 0; i < AV1_REFS_PER_FRAME; ++i) {
/* Ref Frame List */
int8_t ref_idx = frame_header->ref_frame_idx[i];
AVFrame *ref_frame = s->ref[ref_idx].tf.f;
ppc->ref_frame[i].index = ppc->ref_frame_map[ref_idx];
ppc->ref_frame[i].width = ref_frame->width;
ppc->ref_frame[i].height = ref_frame->height;
/* Global Motion */
ppc->global_motion[i].invalid = !frame_header->is_global[AV1_REF_FRAME_LAST + i];
ppc->global_motion[i].wmtype = s->cur_frame.gm_type[AV1_REF_FRAME_LAST + i];
for (j = 0; j < 6; ++j) {
ppc->global_motion[i].wmmat[j] = s->cur_frame.gm_params[AV1_REF_FRAME_LAST + i][j];
}
}
/* Film Grain Params */
if (apply_grain) {
for (i = 0; i < 14; ++i) {
ppc->scaling_points_y[i][0] = film_grain->point_y_value[i];
ppc->scaling_points_y[i][1] = film_grain->point_y_scaling[i];
}
for (i = 0; i < 10; ++i) {
ppc->scaling_points_cb[i][0] = film_grain->point_cb_value[i];
ppc->scaling_points_cb[i][1] = film_grain->point_cb_scaling[i];
ppc->scaling_points_cr[i][0] = film_grain->point_cr_value[i];
ppc->scaling_points_cr[i][1] = film_grain->point_cr_scaling[i];
}
for (i = 0; i < 24; ++i) {
ppc->ar_coeffs_y[i] = (short)film_grain->ar_coeffs_y_plus_128[i] - 128;
}
for (i = 0; i < 25; ++i) {
ppc->ar_coeffs_cb[i] = (short)film_grain->ar_coeffs_cb_plus_128[i] - 128;
ppc->ar_coeffs_cr[i] = (short)film_grain->ar_coeffs_cr_plus_128[i] - 128;
}
}
return 0;
}
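/* Collect the tile data for this frame. When all tiles live in the passed
 * buffer it is referenced directly; otherwise the tile groups are appended
 * to an internal buffer and the slice offsets are rebased onto it. */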
static int nvdec_av1_decode_slice(AVCodecContext *avctx, const uint8_t *buffer, uint32_t size)
{
const AV1DecContext *s = avctx->priv_data;
const AV1RawFrameHeader *frame_header = s->raw_frame_header;
NVDECContext *ctx = avctx->internal->hwaccel_priv_data;
void *tmp;
ctx->nb_slices = frame_header->tile_cols * frame_header->tile_rows;
tmp = av_fast_realloc(ctx->slice_offsets, &ctx->slice_offsets_allocated,
ctx->nb_slices * 2 * sizeof(*ctx->slice_offsets));
if (!tmp) {
return AVERROR(ENOMEM);
}
ctx->slice_offsets = tmp;
/* Shortcut if all tiles are in the same buffer */
if (ctx->nb_slices == s->tg_end - s->tg_start + 1) {
ctx->bitstream = (uint8_t*)buffer;
ctx->bitstream_len = size;
for (int i = 0; i < ctx->nb_slices; ++i) {
ctx->slice_offsets[i*2 ] = s->tile_group_info[i].tile_offset;
ctx->slice_offsets[i*2 + 1] = ctx->slice_offsets[i*2] + s->tile_group_info[i].tile_size;
}
return 0;
}
tmp = av_fast_realloc(ctx->bitstream_internal, &ctx->bitstream_allocated,
ctx->bitstream_len + size);
if (!tmp) {
return AVERROR(ENOMEM);
}
ctx->bitstream = ctx->bitstream_internal = tmp;
memcpy(ctx->bitstream + ctx->bitstream_len, buffer, size);
for (uint32_t tile_num = s->tg_start; tile_num <= s->tg_end; ++tile_num) {
ctx->slice_offsets[tile_num*2 ] = ctx->bitstream_len + s->tile_group_info[tile_num].tile_offset;
ctx->slice_offsets[tile_num*2 + 1] = ctx->slice_offsets[tile_num*2] + s->tile_group_info[tile_num].tile_size;
}
ctx->bitstream_len += size;
return 0;
}
static int nvdec_av1_frame_params(AVCodecContext *avctx, AVBufferRef *hw_frames_ctx)
{
/* Maximum of 8 reference frames, but potentially stored twice due to film grain */
return ff_nvdec_frame_params(avctx, hw_frames_ctx, 8 * 2, 0);
}
const AVHWAccel ff_av1_nvdec_hwaccel = {
.name = "av1_nvdec",
.type = AVMEDIA_TYPE_VIDEO,
.id = AV_CODEC_ID_AV1,
.pix_fmt = AV_PIX_FMT_CUDA,
.start_frame = nvdec_av1_start_frame,
.end_frame = ff_nvdec_simple_end_frame,
.decode_slice = nvdec_av1_decode_slice,
.frame_params = nvdec_av1_frame_params,
.init = ff_nvdec_decode_init,
.uninit = ff_nvdec_decode_uninit,
.priv_data_size = sizeof(NVDECContext),
};

View File

@@ -1,168 +0,0 @@
/*
* PGX image format
* Copyright (c) 2020 Gautam Ramakrishnan
*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include "avcodec.h"
#include "internal.h"
#include "bytestream.h"
#include "libavutil/imgutils.h"
static int pgx_get_number(AVCodecContext *avctx, GetByteContext *g, int *number) {
int ret = AVERROR_INVALIDDATA;
char digit;
*number = 0;
while (1) {
uint64_t temp;
if (!bytestream2_get_bytes_left(g))
return AVERROR_INVALIDDATA;
digit = bytestream2_get_byte(g);
if (digit == ' ' || digit == 0xA || digit == 0xD)
break;
else if (digit < '0' || digit > '9')
return AVERROR_INVALIDDATA;
temp = (uint64_t)10 * (*number) + (digit - '0');
if (temp > INT_MAX)
return AVERROR_INVALIDDATA;
*number = temp;
ret = 0;
}
return ret;
}
static int pgx_decode_header(AVCodecContext *avctx, GetByteContext *g,
int *depth, int *width, int *height,
int *sign)
{
int byte;
if (bytestream2_get_bytes_left(g) < 6) {
return AVERROR_INVALIDDATA;
}
bytestream2_skip(g, 6);
// Is the component signed?
byte = bytestream2_peek_byte(g);
if (byte == '+') {
*sign = 0;
bytestream2_skip(g, 1);
} else if (byte == '-') {
*sign = 1;
bytestream2_skip(g, 1);
} else if (byte == 0)
goto error;
byte = bytestream2_peek_byte(g);
if (byte == ' ')
bytestream2_skip(g, 1);
else if (byte == 0)
goto error;
if (pgx_get_number(avctx, g, depth))
goto error;
if (pgx_get_number(avctx, g, width))
goto error;
if (pgx_get_number(avctx, g, height))
goto error;
if (bytestream2_peek_byte(g) == 0xA)
bytestream2_skip(g, 1);
return 0;
error:
av_log(avctx, AV_LOG_ERROR, "Error in decoding header.\n");
return AVERROR_INVALIDDATA;
}
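/* Instantiate per-depth frame writers: samples narrower than the pixel
 * format are shifted up to fill it, and signed samples are rebased to
 * unsigned by adding half the coding range. */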
#define WRITE_FRAME(D, PIXEL, suffix) \
static inline void write_frame_ ##D(AVFrame *frame, GetByteContext *g, \
int width, int height, int sign, int depth) \
{ \
int i, j; \
for (i = 0; i < height; i++) { \
PIXEL *line = (PIXEL*)frame->data[0] + i*frame->linesize[0]/sizeof(PIXEL); \
for (j = 0; j < width; j++) { \
unsigned val; \
if (sign) \
val = (PIXEL)bytestream2_get_ ##suffix(g) + (1 << (depth - 1)); \
else \
val = bytestream2_get_ ##suffix(g); \
val <<= (D - depth); \
*(line + j) = val; \
} \
} \
} \
WRITE_FRAME(8, int8_t, byte)
WRITE_FRAME(16, int16_t, be16)
static int pgx_decode_frame(AVCodecContext *avctx, void *data,
int *got_frame, AVPacket *avpkt)
{
AVFrame *p = data;
int ret;
int bpp;
int width, height, depth;
int sign = 0;
GetByteContext g;
bytestream2_init(&g, avpkt->data, avpkt->size);
if ((ret = pgx_decode_header(avctx, &g, &depth, &width, &height, &sign)) < 0)
return ret;
if ((ret = ff_set_dimensions(avctx, width, height)) < 0)
return ret;
if (depth > 0 && depth <= 8) {
avctx->pix_fmt = AV_PIX_FMT_GRAY8;
bpp = 8;
} else if (depth > 0 && depth <= 16) {
avctx->pix_fmt = AV_PIX_FMT_GRAY16;
bpp = 16;
} else {
av_log(avctx, AV_LOG_ERROR, "depth %d is invalid or unsupported.\n", depth);
return AVERROR_PATCHWELCOME;
}
if (bytestream2_get_bytes_left(&g) < width * height * (bpp >> 3))
return AVERROR_INVALIDDATA;
if ((ret = ff_get_buffer(avctx, p, 0)) < 0)
return ret;
p->pict_type = AV_PICTURE_TYPE_I;
p->key_frame = 1;
avctx->bits_per_raw_sample = depth;
if (bpp == 8)
write_frame_8(p, &g, width, height, sign, depth);
else if (bpp == 16)
write_frame_16(p, &g, width, height, sign, depth);
*got_frame = 1;
return 0;
}
AVCodec ff_pgx_decoder = {
.name = "pgx",
.long_name = NULL_IF_CONFIG_SMALL("PGX (JPEG2000 Test Format)"),
.type = AVMEDIA_TYPE_VIDEO,
.id = AV_CODEC_ID_PGX,
.decode = pgx_decode_frame,
.capabilities = AV_CODEC_CAP_DR1,
};

View File

@@ -1,473 +0,0 @@
/*
* Kodak PhotoCD (a.k.a. ImagePac) image decoder
*
* Copyright (c) 1996-2002 Gerd Knorr
* Copyright (c) 2010 Kenneth Vermeirsch
* Copyright (c) 2020 Paul B Mahol
*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
/**
* @file
* Kodak PhotoCD (a.k.a. ImagePac) image decoder
*
* Supports resolutions up to 3072x2048.
*/
#define CACHED_BITSTREAM_READER !ARCH_X86_32
#include "libavutil/avassert.h"
#include "libavutil/intreadwrite.h"
#include "libavutil/opt.h"
#include "avcodec.h"
#include "bytestream.h"
#include "get_bits.h"
#include "internal.h"
#include "thread.h"
typedef struct PhotoCDContext {
AVClass *class;
int lowres;
GetByteContext gb;
int thumbnails; /* number of thumbnails; 0 for normal image */
int resolution;
int orientation;
int streampos;
uint8_t bits[256];
uint16_t codes[256];
uint8_t syms[256];
VLC vlc[3];
} PhotoCDContext;
typedef struct ImageInfo {
uint32_t start;
uint16_t width, height;
} ImageInfo;
static const ImageInfo img_info[6] = {
{8192, 192, 128},
{47104, 384, 256},
{196608, 768, 512},
{0, 1536, 1024},
{0, 3072, 2048},
{0, 6144, 4096},
};
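/* Expand the 768x512 base image horizontally by two into every other line
 * of the target planes, averaging adjacent source samples for the
 * interpolated pixels; each pass reads two luma rows and one row of each
 * chroma plane. */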
static av_noinline void interp_lowres(PhotoCDContext *s, AVFrame *picture,
int width, int height)
{
GetByteContext *gb = &s->gb;
int start = s->streampos + img_info[2].start;
uint8_t *ptr, *ptr1, *ptr2;
uint8_t *dst;
int fill;
ptr = picture->data[0];
ptr1 = picture->data[1];
ptr2 = picture->data[2];
bytestream2_seek(gb, start, SEEK_SET);
for (int y = 0; y < height; y += 2) {
dst = ptr;
for (int x = 0; x < width - 1; x++) {
fill = bytestream2_get_byte(gb);
*(dst++) = fill;
*(dst++) = (fill + bytestream2_peek_byte(gb) + 1) >> 1;
}
fill = bytestream2_get_byte(gb);
*(dst++) = fill;
*(dst++) = fill;
ptr += picture->linesize[0] << 1;
dst = ptr;
for (int x = 0; x < width - 1; x++) {
fill = bytestream2_get_byte(gb);
*(dst++) = fill;
*(dst++) = (fill + bytestream2_peek_byte(gb) + 1) >> 1;
}
fill = bytestream2_get_byte(gb);
*(dst++) = fill;
*(dst++) = fill;
ptr += picture->linesize[0] << 1;
dst = ptr1;
for (int x = 0; x < (width >> 1) - 1; x++) {
fill = bytestream2_get_byte(gb);
*(dst++) = fill;
*(dst++) = (fill + bytestream2_peek_byte(gb) + 1) >> 1;
}
fill = bytestream2_get_byte(gb);
*(dst++) = fill;
*(dst++) = fill;
ptr1 += picture->linesize[1] << 1;
dst = ptr2;
for (int x = 0; x < (width >> 1) - 1; x++) {
fill = bytestream2_get_byte(gb);
*(dst++) = fill;
*(dst++) = (fill + bytestream2_peek_byte(gb) + 1) >> 1;
}
fill = bytestream2_get_byte(gb);
*(dst++) = fill;
*(dst++) = fill;
ptr2 += picture->linesize[2] << 1;
}
s->streampos += bytestream2_tell(gb) - start;
}
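/* Fill in the missing odd lines of a plane by averaging the lines above and
 * below (plus a horizontal average for odd columns); the final missing line
 * is synthesized from the line above it. */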
static av_noinline void interp_lines(uint8_t *ptr, int linesize,
int width, int height)
{
const uint8_t *src1;
uint8_t *dst;
int x;
for (int y = 0; y < height - 2; y += 2) {
const uint8_t *src1 = ptr;
uint8_t *dst = ptr + linesize;
const uint8_t *src2 = dst + linesize;
for (x = 0; x < width - 2; x += 2) {
dst[x] = (src1[x] + src2[x] + 1) >> 1;
dst[x + 1] = (src1[x] + src2[x] + src1[x + 2] + src2[x + 2] + 2) >> 2;
}
dst[x] = dst[x + 1] = (src1[x] + src2[x] + 1) >> 1;
ptr += linesize << 1;
}
src1 = ptr;
dst = ptr + linesize;
for (x = 0; x < width - 2; x += 2) {
dst[x] = src1[x];
dst[x + 1] = (src1[x] + src1[x + 2] + 1) >> 1;
}
dst[x] = dst[x + 1] = src1[x];
}
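/* In-place 2x upscale of a plane: walking bottom-up so source lines are not
 * overwritten before they are read, each source line is spread to the even
 * line at twice its index, with odd columns interpolated horizontally. */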
static av_noinline void interp_pixels(uint8_t *ptr, int linesize,
int width, int height)
{
for (int y = height - 2; y >= 0; y -= 2) {
const uint8_t *src = ptr + (y >> 1) * linesize;
uint8_t *dst = ptr + y * linesize;
dst[width - 2] = dst[width - 1] = src[(width >> 1) - 1];
for (int x = width - 4; x >= 0; x -= 2) {
dst[x] = src[x >> 1];
dst[x + 1] = (src[x >> 1] + src[(x >> 1) + 1] + 1) >> 1;
}
}
}
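/* Read one Huffman table from the stream: a count byte (stored minus one)
 * followed by <length-1, left-aligned 16-bit code, symbol> records, then
 * (re)build the VLC for the given plane. */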
static av_noinline int read_hufftable(AVCodecContext *avctx, VLC *vlc)
{
PhotoCDContext *s = avctx->priv_data;
GetByteContext *gb = &s->gb;
int start = s->streampos;
int count, ret;
bytestream2_seek(gb, start, SEEK_SET);
count = bytestream2_get_byte(gb) + 1;
if (bytestream2_get_bytes_left(gb) < count * 4)
return AVERROR_INVALIDDATA;
for (int j = 0; j < count; j++) {
const int bit = bytestream2_get_byteu(gb) + 1;
const int code = bytestream2_get_be16u(gb);
const int sym = bytestream2_get_byteu(gb);
if (bit > 16)
return AVERROR_INVALIDDATA;
s->bits[j] = bit;
s->codes[j] = code >> (16 - bit);
s->syms[j] = sym;
}
ff_free_vlc(vlc);
ret = ff_init_vlc_sparse(vlc, 12, count,
s->bits, sizeof(*s->bits), sizeof(*s->bits),
s->codes, sizeof(*s->codes), sizeof(*s->codes),
s->syms, sizeof(*s->syms), sizeof(*s->syms), 0);
s->streampos = bytestream2_tell(gb);
return ret;
}
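/* Decode a Huffman-coded correction pass: resync to the 24-bit 0xfffffe
 * row marker, read the row number and plane type, then add sign-extended
 * VLC deltas to the previously interpolated samples. */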
static av_noinline int decode_huff(AVCodecContext *avctx, AVFrame *frame,
int target_res, int curr_res)
{
PhotoCDContext *s = avctx->priv_data;
GetBitContext g;
GetByteContext *gb = &s->gb;
int ret, y = 0, type, height;
int start = s->streampos;
unsigned shiftreg;
const int scaling = target_res - curr_res;
const uint8_t type2idx[] = { 0, 0xff, 1, 2 };
bytestream2_seek(gb, start, SEEK_SET);
ret = init_get_bits8(&g, gb->buffer, bytestream2_get_bytes_left(gb));
if (ret < 0)
return ret;
height = img_info[curr_res].height;
while (y < height) {
uint8_t *data;
int x2, idx;
for (; get_bits_left(&g) > 0;) {
if (show_bits(&g, 12) == 0xfff)
break;
skip_bits(&g, 8);
}
shiftreg = show_bits(&g, 24);
while (shiftreg != 0xfffffe) {
if (get_bits_left(&g) <= 0)
return AVERROR_INVALIDDATA;
skip_bits(&g, 1);
shiftreg = show_bits(&g, 24);
}
skip_bits(&g, 24);
y = show_bits(&g, 15) & 0x1fff;
if (y >= height)
break;
type = get_bits(&g, 2);
skip_bits(&g, 14);
if (type == 1)
return AVERROR_INVALIDDATA;
idx = type2idx[type];
data = frame->data[idx] + (y >> !!idx) * frame->linesize[idx];
x2 = avctx->width >> (scaling + !!idx);
for (int x = 0; x < x2; x++) {
int m;
if (get_bits_left(&g) <= 0)
return AVERROR_INVALIDDATA;
m = get_vlc2(&g, s->vlc[idx].table, s->vlc[idx].bits, 2);
if (m < 0)
return AVERROR_INVALIDDATA;
m = sign_extend(m, 8);
data[x] = av_clip_uint8(data[x] + m);
}
}
s->streampos += (get_bits_count(&g) + 7) >> 3;
s->streampos = (s->streampos + 0x6000 + 2047) & ~0x7ff;
return 0;
}
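/* Thumbnails and the base resolutions are read directly from the stream;
 * higher resolutions start from the upsampled base image and are refined
 * by one or two Huffman correction passes, after which a fixed bias
 * (-28 on Cb, -9 on Cr) is applied to the chroma planes. */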
static int photocd_decode_frame(AVCodecContext *avctx, void *data,
int *got_frame, AVPacket *avpkt)
{
PhotoCDContext *s = avctx->priv_data;
ThreadFrame frame = { .f = data };
const uint8_t *buf = avpkt->data;
GetByteContext *gb = &s->gb;
AVFrame *p = data;
uint8_t *ptr, *ptr1, *ptr2;
int ret;
if (avpkt->size < img_info[0].start)
return AVERROR_INVALIDDATA;
if (!memcmp("PCD_OPA", buf, 7)) {
s->thumbnails = AV_RL16(buf + 10);
av_log(avctx, AV_LOG_WARNING, "this is a thumbnails file, "
"reading first thumbnail only\n");
} else if (avpkt->size < 786432) {
return AVERROR_INVALIDDATA;
} else if (memcmp("PCD_IPI", buf + 0x800, 7)) {
return AVERROR_INVALIDDATA;
}
s->orientation = s->thumbnails ? buf[12] & 3 : buf[0x48] & 3;
if (s->thumbnails)
s->resolution = 0;
else if (avpkt->size <= 788480)
s->resolution = 2;
else
s->resolution = av_clip(4 - s->lowres, 0, 4);
ret = ff_set_dimensions(avctx, img_info[s->resolution].width, img_info[s->resolution].height);
if (ret < 0)
return ret;
if ((ret = ff_thread_get_buffer(avctx, &frame, 0)) < 0)
return ret;
p->pict_type = AV_PICTURE_TYPE_I;
p->key_frame = 1;
bytestream2_init(gb, avpkt->data, avpkt->size);
if (s->resolution < 3) {
ptr = p->data[0];
ptr1 = p->data[1];
ptr2 = p->data[2];
if (s->thumbnails)
bytestream2_seek(gb, 10240, SEEK_SET);
else
bytestream2_seek(gb, img_info[s->resolution].start, SEEK_SET);
for (int y = 0; y < avctx->height; y += 2) {
bytestream2_get_buffer(gb, ptr, avctx->width);
ptr += p->linesize[0];
bytestream2_get_buffer(gb, ptr, avctx->width);
ptr += p->linesize[0];
bytestream2_get_buffer(gb, ptr1, avctx->width >> 1);
ptr1 += p->linesize[1];
bytestream2_get_buffer(gb, ptr2, avctx->width >> 1);
ptr2 += p->linesize[2];
}
} else {
s->streampos = 0;
ptr = p->data[0];
ptr1 = p->data[1];
ptr2 = p->data[2];
interp_lowres(s, p, img_info[2].width, img_info[2].height);
interp_lines(ptr1, p->linesize[1], img_info[2].width, img_info[2].height);
interp_lines(ptr2, p->linesize[2], img_info[2].width, img_info[2].height);
if (s->resolution == 4) {
interp_pixels(ptr1, p->linesize[1], img_info[3].width, img_info[3].height);
interp_lines (ptr1, p->linesize[1], img_info[3].width, img_info[3].height);
interp_pixels(ptr2, p->linesize[2], img_info[3].width, img_info[3].height);
interp_lines (ptr2, p->linesize[2], img_info[3].width, img_info[3].height);
}
interp_lines(ptr, p->linesize[0], img_info[3].width, img_info[3].height);
s->streampos = 0xc2000;
for (int n = 0; n < 3; n++) {
if ((ret = read_hufftable(avctx, &s->vlc[n])) < 0)
return ret;
}
s->streampos = (s->streampos + 2047) & ~0x3ff;
if (decode_huff(avctx, p, s->resolution, 3) < 0)
return AVERROR_INVALIDDATA;
if (s->resolution == 4) {
interp_pixels(ptr, p->linesize[0], img_info[4].width, img_info[4].height);
interp_lines (ptr, p->linesize[0], img_info[4].width, img_info[4].height);
for (int n = 0; n < 3; n++) {
if ((ret = read_hufftable(avctx, &s->vlc[n])) < 0)
return ret;
}
s->streampos = (s->streampos + 2047) & ~0x3ff;
if (decode_huff(avctx, p, 4, 4) < 0)
return AVERROR_INVALIDDATA;
}
}
{
ptr1 = p->data[1];
ptr2 = p->data[2];
for (int y = 0; y < avctx->height >> 1; y++) {
for (int x = 0; x < avctx->width >> 1; x++) {
ptr1[x] = av_clip_uint8(ptr1[x] - 28);
ptr2[x] = av_clip_uint8(ptr2[x] - 9);
}
ptr1 += p->linesize[1];
ptr2 += p->linesize[2];
}
}
*got_frame = 1;
return 0;
}
static av_cold int photocd_decode_init(AVCodecContext *avctx)
{
avctx->pix_fmt = AV_PIX_FMT_YUV420P;
avctx->colorspace = AVCOL_SPC_BT709;
avctx->color_primaries = AVCOL_PRI_BT709;
avctx->color_trc = AVCOL_TRC_IEC61966_2_1;
avctx->color_range = AVCOL_RANGE_JPEG;
return 0;
}
static av_cold int photocd_decode_close(AVCodecContext *avctx)
{
PhotoCDContext *s = avctx->priv_data;
for (int i = 0; i < 3; i++)
ff_free_vlc(&s->vlc[i]);
return 0;
}
#define OFFSET(x) offsetof(PhotoCDContext, x)
#define VD AV_OPT_FLAG_VIDEO_PARAM | AV_OPT_FLAG_DECODING_PARAM
static const AVOption options[] = {
{ "lowres", "Lower the decoding resolution by a power of two",
OFFSET(lowres), AV_OPT_TYPE_INT, { .i64 = 0 }, 0, 4, VD },
{ NULL },
};
static const AVClass photocd_class = {
.class_name = "photocd",
.item_name = av_default_item_name,
.option = options,
.version = LIBAVUTIL_VERSION_INT,
};
AVCodec ff_photocd_decoder = {
.name = "photocd",
.type = AVMEDIA_TYPE_VIDEO,
.id = AV_CODEC_ID_PHOTOCD,
.priv_data_size = sizeof(PhotoCDContext),
.priv_class = &photocd_class,
.init = photocd_decode_init,
.close = photocd_decode_close,
.decode = photocd_decode_frame,
.capabilities = AV_CODEC_CAP_DR1 | AV_CODEC_CAP_FRAME_THREADS,
.long_name = NULL_IF_CONFIG_SMALL("Kodak Photo CD"),
.caps_internal = FF_CODEC_CAP_INIT_THREADSAFE,
};

View File

@@ -1,858 +0,0 @@
/*
* QuickTime RPZA Video Encoder
*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
/**
* @file rpzaenc.c
* QT RPZA Video Encoder by Todd Kirby <doubleshot@pacbell.net> and David Adler
*/
#include "libavutil/avassert.h"
#include "libavutil/common.h"
#include "libavutil/opt.h"
#include "avcodec.h"
#include "internal.h"
#include "put_bits.h"
typedef struct RpzaContext {
AVClass *avclass;
int skip_frame_thresh;
int start_one_color_thresh;
int continue_one_color_thresh;
int sixteen_color_thresh;
AVFrame *prev_frame; // buffer for previous source frame
PutBitContext pb; // buffer for encoded frame data.
int frame_width; // width in pixels of source frame
int frame_height; // height in pixels of source frame
int first_frame; // flag set to one when the first frame is being processed
// so that comparisons with previous frame data are not attempted
} RpzaContext;
typedef enum channel_offset {
RED = 2,
GREEN = 1,
BLUE = 0,
} channel_offset;
typedef struct rgb {
uint8_t r;
uint8_t g;
uint8_t b;
} rgb;
#define SQR(x) ((x) * (x))
/* 15 bit components */
#define GET_CHAN(color, chan) (((color) >> ((chan) * 5) & 0x1F) * 8)
#define R(color) GET_CHAN(color, RED)
#define G(color) GET_CHAN(color, GREEN)
#define B(color) GET_CHAN(color, BLUE)
typedef struct BlockInfo {
int row;
int col;
int block_width;
int block_height;
int image_width;
int image_height;
int block_index;
uint16_t start;
int rowstride;
int blocks_per_row;
int total_blocks;
} BlockInfo;
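/* Build the 4-color palette for a block: min and max become the endpoints
 * and the two intermediate entries sit a third of the way in from each
 * end, per component. */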
static void get_colors(uint8_t *min, uint8_t *max, uint8_t color4[4][3])
{
uint8_t step;
color4[0][0] = min[0];
color4[0][1] = min[1];
color4[0][2] = min[2];
color4[3][0] = max[0];
color4[3][1] = max[1];
color4[3][2] = max[2];
// red components
step = (color4[3][0] - color4[0][0] + 1) / 3;
color4[1][0] = color4[0][0] + step;
color4[2][0] = color4[3][0] - step;
// green components
step = (color4[3][1] - color4[0][1] + 1) / 3;
color4[1][1] = color4[0][1] + step;
color4[2][1] = color4[3][1] - step;
// blue components
step = (color4[3][2] - color4[0][2] + 1) / 3;
color4[1][2] = color4[0][2] + step;
color4[2][2] = color4[3][2] - step;
}
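/*
 * Example: with min = (0, 0, 0) and max = (90, 120, 60), the red step is
 * (90 - 0 + 1) / 3 = 30, so the four red values are 0, 30, 60, 90; green
 * and blue are interpolated between their endpoints the same way.
 */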
/* Fill BlockInfo struct with information about a 4x4 block of the image */
static int get_block_info(BlockInfo *bi, int block)
{
bi->row = block / bi->blocks_per_row;
bi->col = block % bi->blocks_per_row;
// test for right edge block
if (bi->col == bi->blocks_per_row - 1 && (bi->image_width % 4) != 0) {
bi->block_width = bi->image_width % 4;
} else {
bi->block_width = 4;
}
// test for bottom edge block
if (bi->row == (bi->image_height / 4) && (bi->image_height % 4) != 0) {
bi->block_height = bi->image_height % 4;
} else {
bi->block_height = 4;
}
return block ? (bi->col * 4) + (bi->row * bi->rowstride * 4) : 0;
}
static uint16_t rgb24_to_rgb555(uint8_t *rgb24)
{
uint16_t rgb555 = 0;
uint32_t r, g, b;
r = rgb24[0] >> 3;
g = rgb24[1] >> 3;
b = rgb24[2] >> 3;
rgb555 |= (r << 10);
rgb555 |= (g << 5);
rgb555 |= (b << 0);
return rgb555;
}
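/*
 * Example: rgb24 (255, 128, 64) truncates to the 5-bit components
 * (31, 16, 8) and packs to (31 << 10) | (16 << 5) | 8 = 0x7E08.
 */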
/*
* Returns the sum of squared differences between two 24-bit color values
*/
static int diff_colors(uint8_t *colorA, uint8_t *colorB)
{
int tot;
tot = SQR(colorA[0] - colorB[0]);
tot += SQR(colorA[1] - colorB[1]);
tot += SQR(colorA[2] - colorB[2]);
return tot;
}
/*
* Returns the maximum channel difference
*/
static int max_component_diff(uint16_t *colorA, uint16_t *colorB)
{
int diff, max = 0;
diff = FFABS(R(colorA[0]) - R(colorB[0]));
if (diff > max) {
max = diff;
}
diff = FFABS(G(colorA[0]) - G(colorB[0]));
if (diff > max) {
max = diff;
}
diff = FFABS(B(colorA[0]) - B(colorB[0]));
if (diff > max) {
max = diff;
}
return max * 8;
}
/*
* Find the channel that has the largest difference between minimum and maximum
* color values. Put the minimum value in min, maximum in max and the channel
* in chan.
*/
static void get_max_component_diff(BlockInfo *bi, uint16_t *block_ptr,
uint8_t *min, uint8_t *max, channel_offset *chan)
{
int x, y;
uint8_t min_r, max_r, min_g, max_g, min_b, max_b;
uint8_t r, g, b;
// fix warning about uninitialized vars
min_r = min_g = min_b = UINT8_MAX;
max_r = max_g = max_b = 0;
// loop thru and compare pixels
for (y = 0; y < bi->block_height; y++) {
for (x = 0; x < bi->block_width; x++){
// TODO: optimize
min_r = FFMIN(R(block_ptr[x]), min_r);
min_g = FFMIN(G(block_ptr[x]), min_g);
min_b = FFMIN(B(block_ptr[x]), min_b);
max_r = FFMAX(R(block_ptr[x]), max_r);
max_g = FFMAX(G(block_ptr[x]), max_g);
max_b = FFMAX(B(block_ptr[x]), max_b);
}
block_ptr += bi->rowstride;
}
r = max_r - min_r;
g = max_g - min_g;
b = max_b - min_b;
if (r > g && r > b) {
*max = max_r;
*min = min_r;
*chan = RED;
} else if (g > b && g >= r) {
*max = max_g;
*min = min_g;
*chan = GREEN;
} else {
*max = max_b;
*min = min_b;
*chan = BLUE;
}
}
/*
* Compare two 4x4 blocks to determine if the total difference between the
* blocks is greater than the thresh parameter. Returns -1 if difference
* exceeds threshold or zero otherwise.
*/
static int compare_blocks(uint16_t *block1, uint16_t *block2, BlockInfo *bi, int thresh)
{
int x, y, diff = 0;
for (y = 0; y < bi->block_height; y++) {
for (x = 0; x < bi->block_width; x++) {
diff = max_component_diff(&block1[x], &block2[x]);
if (diff >= thresh) {
return -1;
}
}
block1 += bi->rowstride;
block2 += bi->rowstride;
}
return 0;
}
/*
* Determine the fit of one channel to another within a 4x4 block. This
* is used to determine the best palette choices for 4-color encoding.
*/
static int leastsquares(uint16_t *block_ptr, BlockInfo *bi,
channel_offset xchannel, channel_offset ychannel,
double *slope, double *y_intercept, double *correlation_coef)
{
double sumx = 0, sumy = 0, sumx2 = 0, sumy2 = 0, sumxy = 0,
sumx_sq = 0, sumy_sq = 0, tmp, tmp2;
int i, j, count;
uint8_t x, y;
count = bi->block_height * bi->block_width;
if (count < 2)
return -1;
for (i = 0; i < bi->block_height; i++) {
for (j = 0; j < bi->block_width; j++){
x = GET_CHAN(block_ptr[j], xchannel);
y = GET_CHAN(block_ptr[j], ychannel);
sumx += x;
sumy += y;
sumx2 += x * x;
sumy2 += y * y;
sumxy += x * y;
}
block_ptr += bi->rowstride;
}
sumx_sq = sumx * sumx;
tmp = (count * sumx2 - sumx_sq);
// guard against div/0
if (tmp == 0)
return -2;
sumy_sq = sumy * sumy;
*slope = (count * sumxy - sumx * sumy) / tmp; // standard least-squares slope
*y_intercept = (sumy - (*slope) * sumx) / count;
tmp2 = count * sumy2 - sumy_sq;
if (tmp2 == 0) {
*correlation_coef = 0.0;
} else {
*correlation_coef = (count * sumxy - sumx * sumy) /
sqrt(tmp * tmp2);
}
return 0; // success
}
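/*
 * Sanity-check example: the two points (x, y) = (0, 10) and (8, 26) give
 * count = 2, sumx = 8, sumy = 36, sumx2 = 64, sumxy = 208, hence
 * slope = (2 * 208 - 8 * 36) / (2 * 64 - 64) = 2, y_intercept = 10 and
 * correlation_coef = 1 (a perfect linear fit).
 */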
/*
* Determine the amount of error in the leastsquares fit.
*/
static int calc_lsq_max_fit_error(uint16_t *block_ptr, BlockInfo *bi,
int min, int max, int tmp_min, int tmp_max,
channel_offset xchannel, channel_offset ychannel)
{
int i, j, x, y;
int err;
int max_err = 0;
for (i = 0; i < bi->block_height; i++) {
for (j = 0; j < bi->block_width; j++){
int x_inc, lin_y, lin_x;
x = GET_CHAN(block_ptr[j], xchannel);
y = GET_CHAN(block_ptr[j], ychannel);
/* calculate x_inc as the 4-color index (0..3) */
x_inc = floor( (x - min) * 3.0 / (max - min) + 0.5);
x_inc = FFMAX(FFMIN(3, x_inc), 0);
/* calculate lin_y corresponding to x_inc */
lin_y = (int)(tmp_min + (tmp_max - tmp_min) * x_inc / 3.0 + 0.5);
err = FFABS(lin_y - y);
if (err > max_err)
max_err = err;
/* calculate lin_x corresponding to x_inc */
lin_x = (int)(min + (max - min) * x_inc / 3.0 + 0.5);
err = FFABS(lin_x - x);
if (err > max_err)
max_err = err;
}
block_ptr += bi->rowstride;
}
return max_err;
}
/*
* Find the closest match to a color within the 4-color palette
*/
static int match_color(uint16_t *color, uint8_t colors[4][3])
{
int ret = 0;
int smallest_variance = INT_MAX;
uint8_t dithered_color[3];
for (int channel = 0; channel < 3; channel++) {
dithered_color[channel] = GET_CHAN(color[0], channel);
}
for (int palette_entry = 0; palette_entry < 4; palette_entry++) {
int variance = diff_colors(dithered_color, colors[palette_entry]);
if (variance < smallest_variance) {
smallest_variance = variance;
ret = palette_entry;
}
}
return ret;
}
/*
 * Encode a block using the 4-color opcode and palette. Returns the number
 * of blocks encoded (until multi-block 4-color runs are implemented this
 * will always be 1).
 */
static int encode_four_color_block(uint8_t *min_color, uint8_t *max_color,
PutBitContext *pb, uint16_t *block_ptr, BlockInfo *bi)
{
int x, y, idx;
uint8_t color4[4][3];
uint16_t rounded_max, rounded_min;
// quantize min and max to 15-bit rgb555 endpoints
rounded_min = rgb24_to_rgb555(min_color);
rounded_max = rgb24_to_rgb555(max_color);
// put a and b colors
// encode 4 colors = first 16 bit color with MSB zeroed and...
put_bits(pb, 16, rounded_max & ~0x8000);
// ...second 16 bit color with MSB on.
put_bits(pb, 16, rounded_min | 0x8000);
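// Example: max = (248, 248, 248) packs to 0x7FFF and is written as-is
// (MSB already clear), while min = (0, 0, 0) packs to 0x0000 and is
// written as 0x8000; the set MSB on the second color is what marks this
// as a 4-color block.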
get_colors(min_color, max_color, color4);
for (y = 0; y < 4; y++) {
for (x = 0; x < 4; x++) {
idx = match_color(&block_ptr[x], color4);
put_bits(pb, 2, idx);
}
block_ptr += bi->rowstride;
}
return 1; // num blocks encoded
}
/*
* Copy a 4x4 block from the current frame buffer to the previous frame buffer.
*/
static void update_block_in_prev_frame(const uint16_t *src_pixels,
uint16_t *dest_pixels,
const BlockInfo *bi, int block_counter)
{
for (int y = 0; y < 4; y++) {
memcpy(dest_pixels, src_pixels, 8);
dest_pixels += bi->rowstride;
src_pixels += bi->rowstride;
}
}
/*
 * Update statistics for the specified block. If first_block is set,
 * the statistics are initialized. Otherwise they are updated ONLY IF THIS
 * BLOCK IS SUITABLE TO CONTINUE A 1-COLOR RUN: that is, if the range of
 * colors seen since the routine was first called with first_block != 0
 * is still close enough in intensity to be represented by a single color.
 * Returns 0 if this block is too different to be part of the current run
 * of 1-color blocks, and 1 if it can be part of the same run.
 * When 1 is returned, the output arguments are also updated to include
 * the statistics of this block. Otherwise the stats are unchanged and
 * don't include the current block.
 */
static int update_block_stats(RpzaContext *s, BlockInfo *bi, uint16_t *block,
uint8_t min_color[3], uint8_t max_color[3],
int *total_rgb, int *total_pixels,
uint8_t avg_color[3], int first_block)
{
int x, y;
int is_in_range;
int total_pixels_blk;
int threshold;
uint8_t min_color_blk[3], max_color_blk[3];
int total_rgb_blk[3];
uint8_t avg_color_blk[3];
if (first_block) {
min_color[0] = UINT8_MAX;
min_color[1] = UINT8_MAX;
min_color[2] = UINT8_MAX;
max_color[0] = 0;
max_color[1] = 0;
max_color[2] = 0;
total_rgb[0] = 0;
total_rgb[1] = 0;
total_rgb[2] = 0;
*total_pixels = 0;
threshold = s->start_one_color_thresh;
} else {
threshold = s->continue_one_color_thresh;
}
/*
The *_blk variables will include the current block.
Initialize them based on the blocks so far.
*/
min_color_blk[0] = min_color[0];
min_color_blk[1] = min_color[1];
min_color_blk[2] = min_color[2];
max_color_blk[0] = max_color[0];
max_color_blk[1] = max_color[1];
max_color_blk[2] = max_color[2];
total_rgb_blk[0] = total_rgb[0];
total_rgb_blk[1] = total_rgb[1];
total_rgb_blk[2] = total_rgb[2];
total_pixels_blk = *total_pixels + bi->block_height * bi->block_width;
/*
Update stats for this block's pixels
*/
for (y = 0; y < bi->block_height; y++) {
for (x = 0; x < bi->block_width; x++) {
total_rgb_blk[0] += R(block[x]);
total_rgb_blk[1] += G(block[x]);
total_rgb_blk[2] += B(block[x]);
min_color_blk[0] = FFMIN(R(block[x]), min_color_blk[0]);
min_color_blk[1] = FFMIN(G(block[x]), min_color_blk[1]);
min_color_blk[2] = FFMIN(B(block[x]), min_color_blk[2]);
max_color_blk[0] = FFMAX(R(block[x]), max_color_blk[0]);
max_color_blk[1] = FFMAX(G(block[x]), max_color_blk[1]);
max_color_blk[2] = FFMAX(B(block[x]), max_color_blk[2]);
}
block += bi->rowstride;
}
/*
Calculate average color including current block.
*/
avg_color_blk[0] = total_rgb_blk[0] / total_pixels_blk;
avg_color_blk[1] = total_rgb_blk[1] / total_pixels_blk;
avg_color_blk[2] = total_rgb_blk[2] / total_pixels_blk;
/*
Are all the pixels within threshold of the average color?
*/
is_in_range = (max_color_blk[0] - avg_color_blk[0] <= threshold &&
max_color_blk[1] - avg_color_blk[1] <= threshold &&
max_color_blk[2] - avg_color_blk[2] <= threshold &&
avg_color_blk[0] - min_color_blk[0] <= threshold &&
avg_color_blk[1] - min_color_blk[1] <= threshold &&
avg_color_blk[2] - min_color_blk[2] <= threshold);
if (is_in_range) {
/*
Set the output variables to include this block.
*/
min_color[0] = min_color_blk[0];
min_color[1] = min_color_blk[1];
min_color[2] = min_color_blk[2];
max_color[0] = max_color_blk[0];
max_color[1] = max_color_blk[1];
max_color[2] = max_color_blk[2];
total_rgb[0] = total_rgb_blk[0];
total_rgb[1] = total_rgb_blk[1];
total_rgb[2] = total_rgb_blk[2];
*total_pixels = total_pixels_blk;
avg_color[0] = avg_color_blk[0];
avg_color[1] = avg_color_blk[1];
avg_color[2] = avg_color_blk[2];
}
return is_in_range;
}
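/*
 * Summary of the RPZA opcodes emitted below, as used by this encoder:
 *   0x80 | (n-1)  skip the next n blocks (1..32), reusing the previous frame
 *   0xa0 | (n-1)  paint the next n blocks with a single rgb555 color
 *   color pair    4-color block: colorA (MSB clear) then colorB (MSB set),
 *                 followed by 16 x 2-bit palette indices
 *   raw pixels    16-color block: 16 rgb555 pixels, each with MSB clear
 *                 (the clear MSB on the second word distinguishes this
 *                 from a 4-color block)
 * The whole chunk is prefixed with opcode 0xe1 and a 24-bit length,
 * written in rpza_encode_frame().
 */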
static void rpza_encode_stream(RpzaContext *s, const AVFrame *pict)
{
BlockInfo bi;
int block_counter = 0;
int n_blocks;
int total_blocks;
int prev_block_offset;
int block_offset = 0;
uint8_t min = 0, max = 0;
channel_offset chan;
int i;
int tmp_min, tmp_max;
int total_rgb[3];
uint8_t avg_color[3];
int pixel_count;
uint8_t min_color[3], max_color[3];
double slope, y_intercept, correlation_coef;
uint16_t *src_pixels = (uint16_t *)pict->data[0];
uint16_t *prev_pixels = (uint16_t *)s->prev_frame->data[0];
/* Number of 4x4 blocks in frame. */
total_blocks = ((s->frame_width + 3) / 4) * ((s->frame_height + 3) / 4);
bi.image_width = s->frame_width;
bi.image_height = s->frame_height;
bi.rowstride = pict->linesize[0] / 2;
bi.blocks_per_row = (s->frame_width + 3) / 4;
while (block_counter < total_blocks) {
// SKIP CHECK
// make sure we have a valid previous frame and we're not writing
// a key frame
if (!s->first_frame) {
n_blocks = 0;
prev_block_offset = 0;
while (n_blocks < 32 && block_counter + n_blocks < total_blocks) {
block_offset = get_block_info(&bi, block_counter + n_blocks);
// multi-block opcodes cannot span multiple rows.
// If we're starting a new row, break out and write the opcode
/* TODO: Should eventually use bi.row here to determine when a
row break occurs, but that is currently breaking the
quicktime player. This is probably due to a bug in the
way I'm calculating the current row.
*/
if (prev_block_offset && block_offset - prev_block_offset > 12) {
break;
}
prev_block_offset = block_offset;
if (compare_blocks(&prev_pixels[block_offset],
&src_pixels[block_offset], &bi, s->skip_frame_thresh) != 0) {
// write out skippable blocks
if (n_blocks) {
// write skip opcode
put_bits(&s->pb, 8, 0x80 | (n_blocks - 1));
block_counter += n_blocks;
goto post_skip;
}
break;
}
/*
 * NOTE: we don't update skipped blocks in the previous frame buffer,
 * since a skipped block must always be compared against the first
 * skipped block to avoid artifacts during gradual fade in/outs.
 */
// update_block_in_prev_frame(&src_pixels[block_offset],
// &prev_pixels[block_offset], &bi, block_counter + n_blocks);
n_blocks++;
}
// we're either at the end of the frame or we've reached the maximum
// of 32 blocks in a run. Write out the run.
if (n_blocks) {
// write skip opcode
put_bits(&s->pb, 8, 0x80 | (n_blocks - 1));
block_counter += n_blocks;
continue;
}
} else {
block_offset = get_block_info(&bi, block_counter);
}
post_skip:
// ONE COLOR CHECK
if (update_block_stats(s, &bi, &src_pixels[block_offset],
min_color, max_color,
total_rgb, &pixel_count, avg_color, 1)) {
prev_block_offset = block_offset;
n_blocks = 1;
/* update this block in the previous frame buffer */
update_block_in_prev_frame(&src_pixels[block_offset],
&prev_pixels[block_offset], &bi, block_counter + n_blocks);
// check for subsequent blocks with the same color
while (n_blocks < 32 && block_counter + n_blocks < total_blocks) {
block_offset = get_block_info(&bi, block_counter + n_blocks);
// multi-block opcodes cannot span multiple rows.
// If we've hit end of a row, break out and write the opcode
if (block_offset - prev_block_offset > 12) {
break;
}
if (!update_block_stats(s, &bi, &src_pixels[block_offset],
min_color, max_color,
total_rgb, &pixel_count, avg_color, 0)) {
break;
}
prev_block_offset = block_offset;
/* update this block in the previous frame buffer */
update_block_in_prev_frame(&src_pixels[block_offset],
&prev_pixels[block_offset], &bi, block_counter + n_blocks);
n_blocks++;
}
// write one color opcode.
put_bits(&s->pb, 8, 0xa0 | (n_blocks - 1));
// write color to encode.
put_bits(&s->pb, 16, rgb24_to_rgb555(avg_color));
// skip past the blocks we've just encoded.
block_counter += n_blocks;
} else { // FOUR COLOR CHECK
int err = 0;
// get max component diff for block
get_max_component_diff(&bi, &src_pixels[block_offset], &min, &max, &chan);
min_color[0] = 0;
max_color[0] = 0;
min_color[1] = 0;
max_color[1] = 0;
min_color[2] = 0;
max_color[2] = 0;
// run least squares against other two components
for (i = 0; i < 3; i++) {
if (i == chan) {
min_color[i] = min;
max_color[i] = max;
continue;
}
slope = y_intercept = correlation_coef = 0;
if (leastsquares(&src_pixels[block_offset], &bi, chan, i,
&slope, &y_intercept, &correlation_coef)) {
min_color[i] = GET_CHAN(src_pixels[block_offset], i);
max_color[i] = GET_CHAN(src_pixels[block_offset], i);
} else {
tmp_min = (int)(0.5 + min * slope + y_intercept);
tmp_max = (int)(0.5 + max * slope + y_intercept);
av_assert0(tmp_min <= tmp_max);
// clamp min and max color values
tmp_min = av_clip_uint8(tmp_min);
tmp_max = av_clip_uint8(tmp_max);
err = FFMAX(calc_lsq_max_fit_error(&src_pixels[block_offset], &bi,
min, max, tmp_min, tmp_max, chan, i), err);
min_color[i] = tmp_min;
max_color[i] = tmp_max;
}
}
if (err > s->sixteen_color_thresh) { // DO SIXTEEN COLOR BLOCK
uint16_t *row_ptr;
int rgb555;
block_offset = get_block_info(&bi, block_counter);
row_ptr = &src_pixels[block_offset];
for (int y = 0; y < 4; y++) {
for (int x = 0; x < 4; x++){
rgb555 = row_ptr[x] & ~0x8000;
put_bits(&s->pb, 16, rgb555);
}
row_ptr += bi.rowstride;
}
block_counter++;
} else { // FOUR COLOR BLOCK
block_counter += encode_four_color_block(min_color, max_color,
&s->pb, &src_pixels[block_offset], &bi);
}
/* update this block in the previous frame buffer */
update_block_in_prev_frame(&src_pixels[block_offset],
&prev_pixels[block_offset], &bi, block_counter);
}
}
}
static int rpza_encode_init(AVCodecContext *avctx)
{
RpzaContext *s = avctx->priv_data;
s->frame_width = avctx->width;
s->frame_height = avctx->height;
s->prev_frame = av_frame_alloc();
if (!s->prev_frame)
return AVERROR(ENOMEM);
return 0;
}
static int rpza_encode_frame(AVCodecContext *avctx, AVPacket *pkt,
const AVFrame *frame, int *got_packet)
{
RpzaContext *s = avctx->priv_data;
const AVFrame *pict = frame;
uint8_t *buf;
int ret;
if ((ret = ff_alloc_packet2(avctx, pkt, 6LL * avctx->height * avctx->width, 0)) < 0)
return ret;
init_put_bits(&s->pb, pkt->data, pkt->size);
// skip 4 byte header, write it later once the size of the chunk is known
put_bits32(&s->pb, 0x00);
if (!s->prev_frame->data[0]) {
s->first_frame = 1;
s->prev_frame->format = pict->format;
s->prev_frame->width = pict->width;
s->prev_frame->height = pict->height;
ret = av_frame_get_buffer(s->prev_frame, 0);
if (ret < 0)
return ret;
} else {
s->first_frame = 0;
}
rpza_encode_stream(s, pict);
flush_put_bits(&s->pb);
av_shrink_packet(pkt, put_bits_count(&s->pb) >> 3);
buf = pkt->data;
// write header opcode
buf[0] = 0xe1; // chunk opcode
// write chunk length
AV_WB24(buf + 1, pkt->size);
*got_packet = 1;
return 0;
}
static int rpza_encode_end(AVCodecContext *avctx)
{
RpzaContext *s = (RpzaContext *)avctx->priv_data;
av_frame_free(&s->prev_frame);
return 0;
}
#define OFFSET(x) offsetof(RpzaContext, x)
#define VE AV_OPT_FLAG_VIDEO_PARAM | AV_OPT_FLAG_ENCODING_PARAM
static const AVOption options[] = {
{ "skip_frame_thresh", NULL, OFFSET(skip_frame_thresh), AV_OPT_TYPE_INT, {.i64=1}, 0, 24, VE},
{ "start_one_color_thresh", NULL, OFFSET(start_one_color_thresh), AV_OPT_TYPE_INT, {.i64=1}, 0, 24, VE},
{ "continue_one_color_thresh", NULL, OFFSET(continue_one_color_thresh), AV_OPT_TYPE_INT, {.i64=0}, 0, 24, VE},
{ "sixteen_color_thresh", NULL, OFFSET(sixteen_color_thresh), AV_OPT_TYPE_INT, {.i64=1}, 0, 24, VE},
{ NULL },
};
static const AVClass rpza_class = {
.class_name = "rpza",
.item_name = av_default_item_name,
.option = options,
.version = LIBAVUTIL_VERSION_INT,
};
AVCodec ff_rpza_encoder = {
.name = "rpza",
.long_name = NULL_IF_CONFIG_SMALL("QuickTime video (RPZA)"),
.type = AVMEDIA_TYPE_VIDEO,
.id = AV_CODEC_ID_RPZA,
.priv_data_size = sizeof(RpzaContext),
.priv_class = &rpza_class,
.init = rpza_encode_init,
.encode2 = rpza_encode_frame,
.close = rpza_encode_end,
.caps_internal = FF_CODEC_CAP_INIT_THREADSAFE,
.pix_fmts = (const enum AVPixelFormat[]) { AV_PIX_FMT_RGB555,
AV_PIX_FMT_NONE},
};

View File

@@ -1,140 +0,0 @@
/*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#ifndef AVCODEC_SEI_H
#define AVCODEC_SEI_H
// SEI payload types form a common namespace between the H.264, H.265
// and H.266 standards. A given payload type always has the same
// meaning, but some names have different payload types in different
// standards (e.g. scalable-nesting is 30 in H.264 but 133 in H.265).
// The content of the payload data depends on the standard, though
// many generic parts have the same interpretation everywhere (such as
// mastering-display-colour-volume and user-data-unregistered).
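// A decoder for any of these standards can therefore key shared handling
// on the common values, e.g. (illustrative sketch only; parse_mdcv() is a
// hypothetical helper, not an FFmpeg API):
//     switch (payload_type) {
//     case SEI_TYPE_MASTERING_DISPLAY_COLOUR_VOLUME:
//         parse_mdcv(gb);
//         break;
//     }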
enum {
SEI_TYPE_BUFFERING_PERIOD = 0,
SEI_TYPE_PIC_TIMING = 1,
SEI_TYPE_PAN_SCAN_RECT = 2,
SEI_TYPE_FILLER_PAYLOAD = 3,
SEI_TYPE_USER_DATA_REGISTERED_ITU_T_T35 = 4,
SEI_TYPE_USER_DATA_UNREGISTERED = 5,
SEI_TYPE_RECOVERY_POINT = 6,
SEI_TYPE_DEC_REF_PIC_MARKING_REPETITION = 7,
SEI_TYPE_SPARE_PIC = 8,
SEI_TYPE_SCENE_INFO = 9,
SEI_TYPE_SUB_SEQ_INFO = 10,
SEI_TYPE_SUB_SEQ_LAYER_CHARACTERISTICS = 11,
SEI_TYPE_SUB_SEQ_CHARACTERISTICS = 12,
SEI_TYPE_FULL_FRAME_FREEZE = 13,
SEI_TYPE_FULL_FRAME_FREEZE_RELEASE = 14,
SEI_TYPE_FULL_FRAME_SNAPSHOT = 15,
SEI_TYPE_PROGRESSIVE_REFINEMENT_SEGMENT_START = 16,
SEI_TYPE_PROGRESSIVE_REFINEMENT_SEGMENT_END = 17,
SEI_TYPE_MOTION_CONSTRAINED_SLICE_GROUP_SET = 18,
SEI_TYPE_FILM_GRAIN_CHARACTERISTICS = 19,
SEI_TYPE_DEBLOCKING_FILTER_DISPLAY_PREFERENCE = 20,
SEI_TYPE_STEREO_VIDEO_INFO = 21,
SEI_TYPE_POST_FILTER_HINT = 22,
SEI_TYPE_TONE_MAPPING_INFO = 23,
SEI_TYPE_SCALABILITY_INFO = 24,
SEI_TYPE_SUB_PIC_SCALABLE_LAYER = 25,
SEI_TYPE_NON_REQUIRED_LAYER_REP = 26,
SEI_TYPE_PRIORITY_LAYER_INFO = 27,
SEI_TYPE_LAYERS_NOT_PRESENT_4 = 28,
SEI_TYPE_LAYER_DEPENDENCY_CHANGE = 29,
SEI_TYPE_SCALABLE_NESTING_4 = 30,
SEI_TYPE_BASE_LAYER_TEMPORAL_HRD = 31,
SEI_TYPE_QUALITY_LAYER_INTEGRITY_CHECK = 32,
SEI_TYPE_REDUNDANT_PIC_PROPERTY = 33,
SEI_TYPE_TL0_DEP_REP_INDEX = 34,
SEI_TYPE_TL_SWITCHING_POINT = 35,
SEI_TYPE_PARALLEL_DECODING_INFO = 36,
SEI_TYPE_MVC_SCALABLE_NESTING = 37,
SEI_TYPE_VIEW_SCALABILITY_INFO = 38,
SEI_TYPE_MULTIVIEW_SCENE_INFO_4 = 39,
SEI_TYPE_MULTIVIEW_ACQUISITION_INFO_4 = 40,
SEI_TYPE_NON_REQUIRED_VIEW_COMPONENT = 41,
SEI_TYPE_VIEW_DEPENDENCY_CHANGE = 42,
SEI_TYPE_OPERATION_POINTS_NOT_PRESENT = 43,
SEI_TYPE_BASE_VIEW_TEMPORAL_HRD = 44,
SEI_TYPE_FRAME_PACKING_ARRANGEMENT = 45,
SEI_TYPE_MULTIVIEW_VIEW_POSITION_4 = 46,
SEI_TYPE_DISPLAY_ORIENTATION = 47,
SEI_TYPE_MVCD_SCALABLE_NESTING = 48,
SEI_TYPE_MVCD_VIEW_SCALABILITY_INFO = 49,
SEI_TYPE_DEPTH_REPRESENTATION_INFO_4 = 50,
SEI_TYPE_THREE_DIMENSIONAL_REFERENCE_DISPLAYS_INFO_4 = 51,
SEI_TYPE_DEPTH_TIMING = 52,
SEI_TYPE_DEPTH_SAMPLING_INFO = 53,
SEI_TYPE_CONSTRAINED_DEPTH_PARAMETER_SET_IDENTIFIER = 54,
SEI_TYPE_GREEN_METADATA = 56,
SEI_TYPE_STRUCTURE_OF_PICTURES_INFO = 128,
SEI_TYPE_ACTIVE_PARAMETER_SETS = 129,
SEI_TYPE_PARAMETER_SETS_INCLUSION_INDICATION = SEI_TYPE_ACTIVE_PARAMETER_SETS,
SEI_TYPE_DECODING_UNIT_INFO = 130,
SEI_TYPE_TEMPORAL_SUB_LAYER_ZERO_IDX = 131,
SEI_TYPE_DECODED_PICTURE_HASH = 132,
SEI_TYPE_SCALABLE_NESTING_5 = 133,
SEI_TYPE_REGION_REFRESH_INFO = 134,
SEI_TYPE_NO_DISPLAY = 135,
SEI_TYPE_TIME_CODE = 136,
SEI_TYPE_MASTERING_DISPLAY_COLOUR_VOLUME = 137,
SEI_TYPE_SEGMENTED_RECT_FRAME_PACKING_ARRANGEMENT = 138,
SEI_TYPE_TEMPORAL_MOTION_CONSTRAINED_TILE_SETS = 139,
SEI_TYPE_CHROMA_RESAMPLING_FILTER_HINT = 140,
SEI_TYPE_KNEE_FUNCTION_INFO = 141,
SEI_TYPE_COLOUR_REMAPPING_INFO = 142,
SEI_TYPE_DEINTERLACED_FIELD_IDENTIFICATION = 143,
SEI_TYPE_CONTENT_LIGHT_LEVEL_INFO = 144,
SEI_TYPE_DEPENDENT_RAP_INDICATION = 145,
SEI_TYPE_CODED_REGION_COMPLETION = 146,
SEI_TYPE_ALTERNATIVE_TRANSFER_CHARACTERISTICS = 147,
SEI_TYPE_AMBIENT_VIEWING_ENVIRONMENT = 148,
SEI_TYPE_CONTENT_COLOUR_VOLUME = 149,
SEI_TYPE_EQUIRECTANGULAR_PROJECTION = 150,
SEI_TYPE_CUBEMAP_PROJECTION = 151,
SEI_TYPE_FISHEYE_VIDEO_INFO = 152,
SEI_TYPE_SPHERE_ROTATION = 154,
SEI_TYPE_REGIONWISE_PACKING = 155,
SEI_TYPE_OMNI_VIEWPORT = 156,
SEI_TYPE_REGIONAL_NESTING = 157,
SEI_TYPE_MCTS_EXTRACTION_INFO_SETS = 158,
SEI_TYPE_MCTS_EXTRACTION_INFO_NESTING = 159,
SEI_TYPE_LAYERS_NOT_PRESENT_5 = 160,
SEI_TYPE_INTER_LAYER_CONSTRAINED_TILE_SETS = 161,
SEI_TYPE_BSP_NESTING = 162,
SEI_TYPE_BSP_INITIAL_ARRIVAL_TIME = 163,
SEI_TYPE_SUB_BITSTREAM_PROPERTY = 164,
SEI_TYPE_ALPHA_CHANNEL_INFO = 165,
SEI_TYPE_OVERLAY_INFO = 166,
SEI_TYPE_TEMPORAL_MV_PREDICTION_CONSTRAINTS = 167,
SEI_TYPE_FRAME_FIELD_INFO = 168,
SEI_TYPE_THREE_DIMENSIONAL_REFERENCE_DISPLAYS_INFO = 176,
SEI_TYPE_DEPTH_REPRESENTATION_INFO_5 = 177,
SEI_TYPE_MULTIVIEW_SCENE_INFO_5 = 178,
SEI_TYPE_MULTIVIEW_ACQUISITION_INFO_5 = 179,
SEI_TYPE_MULTIVIEW_VIEW_POSITION_5 = 180,
SEI_TYPE_ALTERNATIVE_DEPTH_INFO = 181,
SEI_TYPE_SEI_MANIFEST = 200,
SEI_TYPE_SEI_PREFIX_INDICATION = 201,
SEI_TYPE_ANNOTATED_REGIONS = 202,
SEI_TYPE_SUBPIC_LEVEL_INFO = 203,
SEI_TYPE_SAMPLE_ASPECT_RATIO_INFO = 204,
};
#endif /* AVCODEC_SEI_H */

View File

@@ -1,221 +0,0 @@
/*
* Copyright (c) 2021 Paul B Mahol
*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
/**
* @file
* Change the PTS/DTS timestamps.
*/
#include "libavutil/opt.h"
#include "libavutil/eval.h"
#include "avcodec.h"
#include "bsf.h"
#include "bsf_internal.h"
static const char *const var_names[] = {
"N", ///< frame number (starting at zero)
"TS",
"POS", ///< original position in the file of the frame
"PREV_INPTS", ///< previous input PTS
"PREV_INDTS", ///< previous input DTS
"PREV_OUTPTS", ///< previous output PTS
"PREV_OUTDTS", ///< previous output DTS
"PTS", ///< original PTS in the file of the frame
"DTS", ///< original DTS in the file of the frame
"STARTPTS", ///< PTS at start of movie
"STARTDTS", ///< DTS at start of movie
"TB", ///< timebase of the stream
"SR", ///< sample rate of the stream
NULL
};
enum var_name {
VAR_N,
VAR_TS,
VAR_POS,
VAR_PREV_INPTS,
VAR_PREV_INDTS,
VAR_PREV_OUTPTS,
VAR_PREV_OUTDTS,
VAR_PTS,
VAR_DTS,
VAR_STARTPTS,
VAR_STARTDTS,
VAR_TB,
VAR_SR,
VAR_VARS_NB
};
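/*
 * Example use (illustration only): shift all timestamps so the stream
 * starts at zero, without re-encoding:
 *     ffmpeg -i in.mkv -c copy -bsf "setts=ts=TS-STARTDTS" out.mkv
 * TS evaluates to the timestamp currently being rewritten, per the
 * variable comment above.
 */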
typedef struct SetTSContext {
const AVClass *class;
char *ts_str;
char *pts_str;
char *dts_str;
int64_t frame_number;
int64_t start_pts;
int64_t start_dts;
int64_t prev_inpts;
int64_t prev_indts;
int64_t prev_outpts;
int64_t prev_outdts;
double var_values[VAR_VARS_NB];
AVExpr *ts_expr;
AVExpr *pts_expr;
AVExpr *dts_expr;
} SetTSContext;
static int setts_init(AVBSFContext *ctx)
{
SetTSContext *s = ctx->priv_data;
int ret;
if ((ret = av_expr_parse(&s->ts_expr, s->ts_str,
var_names, NULL, NULL, NULL, NULL, 0, ctx)) < 0) {
av_log(ctx, AV_LOG_ERROR, "Error while parsing ts expression '%s'\n", s->ts_str);
return ret;
}
if (s->pts_str) {
if ((ret = av_expr_parse(&s->pts_expr, s->pts_str,
var_names, NULL, NULL, NULL, NULL, 0, ctx)) < 0) {
av_log(ctx, AV_LOG_ERROR, "Error while parsing pts expression '%s'\n", s->pts_str);
return ret;
}
}
if (s->dts_str) {
if ((ret = av_expr_parse(&s->dts_expr, s->dts_str,
var_names, NULL, NULL, NULL, NULL, 0, ctx)) < 0) {
av_log(ctx, AV_LOG_ERROR, "Error while parsing dts expression '%s'\n", s->dts_str);
return ret;
}
}
s->frame_number = 0;
s->start_pts = AV_NOPTS_VALUE;
s->start_dts = AV_NOPTS_VALUE;
s->prev_inpts = AV_NOPTS_VALUE;
s->prev_indts = AV_NOPTS_VALUE;
s->prev_outpts = AV_NOPTS_VALUE;
s->prev_outdts = AV_NOPTS_VALUE;
return 0;
}
static int setts_filter(AVBSFContext *ctx, AVPacket *pkt)
{
SetTSContext *s = ctx->priv_data;
int64_t new_ts, new_pts, new_dts;
int ret;
ret = ff_bsf_get_packet_ref(ctx, pkt);
if (ret < 0)
return ret;
if (s->start_pts == AV_NOPTS_VALUE)
s->start_pts = pkt->pts;
if (s->start_dts == AV_NOPTS_VALUE)
s->start_dts = pkt->dts;
s->var_values[VAR_N] = s->frame_number++;
s->var_values[VAR_TS] = pkt->dts;
s->var_values[VAR_POS] = pkt->pos;
s->var_values[VAR_PTS] = pkt->pts;
s->var_values[VAR_DTS] = pkt->dts;
s->var_values[VAR_PREV_INPTS] = s->prev_inpts;
s->var_values[VAR_PREV_INDTS] = s->prev_indts;
s->var_values[VAR_PREV_OUTPTS] = s->prev_outpts;
s->var_values[VAR_PREV_OUTDTS] = s->prev_outdts;
s->var_values[VAR_STARTPTS] = s->start_pts;
s->var_values[VAR_STARTDTS] = s->start_dts;
s->var_values[VAR_TB] = ctx->time_base_out.den ? av_q2d(ctx->time_base_out) : 0;
s->var_values[VAR_SR] = ctx->par_in->sample_rate;
new_ts = llrint(av_expr_eval(s->ts_expr, s->var_values, NULL));
if (s->pts_str) {
s->var_values[VAR_TS] = pkt->pts;
new_pts = llrint(av_expr_eval(s->pts_expr, s->var_values, NULL));
} else {
new_pts = new_ts;
}
if (s->dts_str) {
s->var_values[VAR_TS] = pkt->dts;
new_dts = llrint(av_expr_eval(s->dts_expr, s->var_values, NULL));
} else {
new_dts = new_ts;
}
s->var_values[VAR_PREV_INPTS] = pkt->pts;
s->var_values[VAR_PREV_INDTS] = pkt->dts;
s->var_values[VAR_PREV_OUTPTS] = new_pts;
s->var_values[VAR_PREV_OUTDTS] = new_dts;
pkt->pts = new_pts;
pkt->dts = new_dts;
return ret;
}
static void setts_close(AVBSFContext *bsf)
{
SetTSContext *s = bsf->priv_data;
av_expr_free(s->ts_expr);
s->ts_expr = NULL;
av_expr_free(s->pts_expr);
s->pts_expr = NULL;
av_expr_free(s->dts_expr);
s->dts_expr = NULL;
}
#define OFFSET(x) offsetof(SetTSContext, x)
#define FLAGS (AV_OPT_FLAG_VIDEO_PARAM|AV_OPT_FLAG_AUDIO_PARAM|AV_OPT_FLAG_SUBTITLE_PARAM|AV_OPT_FLAG_BSF_PARAM)
static const AVOption options[] = {
{ "ts", "set expression for packet PTS and DTS", OFFSET(ts_str), AV_OPT_TYPE_STRING, {.str="TS"}, 0, 0, FLAGS },
{ "pts", "set expression for packet PTS", OFFSET(pts_str), AV_OPT_TYPE_STRING, {.str=NULL}, 0, 0, FLAGS },
{ "dts", "set expression for packet DTS", OFFSET(dts_str), AV_OPT_TYPE_STRING, {.str=NULL}, 0, 0, FLAGS },
{ NULL },
};
static const AVClass setts_class = {
.class_name = "setts_bsf",
.item_name = av_default_item_name,
.option = options,
.version = LIBAVUTIL_VERSION_INT,
};
const AVBitStreamFilter ff_setts_bsf = {
.name = "setts",
.priv_data_size = sizeof(SetTSContext),
.priv_class = &setts_class,
.init = setts_init,
.close = setts_close,
.filter = setts_filter,
};

View File

@@ -1,534 +0,0 @@
/*
* Copyright (c) 2021 Paul B Mahol
*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include "libavutil/avassert.h"
#include "libavutil/common.h"
#include "avcodec.h"
#include "get_bits.h"
#include "bytestream.h"
#include "internal.h"
#define PALDATA_FOLLOWS_TILEDATA 4
#define HAVE_COMPRESSED_TILEMAP 32
#define HAVE_TILEMAP 128
typedef struct SGAVideoContext {
GetByteContext gb;
int metadata_size;
int tiledata_size;
int tiledata_offset;
int tilemapdata_size;
int tilemapdata_offset;
int paldata_size;
int paldata_offset;
int palmapdata_offset;
int palmapdata_size;
int flags;
int nb_pal;
int nb_tiles;
int tiles_w, tiles_h;
int shift;
int plus;
int swap;
uint32_t pal[256];
uint8_t *tileindex_data;
unsigned tileindex_size;
uint8_t *palmapindex_data;
unsigned palmapindex_size;
uint8_t uncompressed[65536];
} SGAVideoContext;
static av_cold int sga_decode_init(AVCodecContext *avctx)
{
avctx->pix_fmt = AV_PIX_FMT_PAL8;
return 0;
}
static int decode_palette(GetByteContext *gb, uint32_t *pal)
{
GetBitContext gbit;
if (bytestream2_get_bytes_left(gb) < 18)
return AVERROR_INVALIDDATA;
memset(pal, 0, 16 * sizeof(*pal));
init_get_bits8(&gbit, gb->buffer, 18);
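/*
 * The 18-byte palette block packs 16 colors at 3 bits per channel:
 * three groups of 6 bytes fill the top 3 bits of R, G and B in turn,
 * and within a group each 16-bit plane supplies one bit (LSB plane
 * first) for all 16 entries, stored in reverse entry order. The final
 * pass below widens each 3-bit component by replicating it downwards.
 */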
for (int RGBIndex = 0; RGBIndex < 3; RGBIndex++) {
for (int index = 0; index < 16; index++) {
unsigned color = get_bits1(&gbit) << RGBIndex;
pal[15 - index] |= color << (5 + 16);
}
}
for (int RGBIndex = 0; RGBIndex < 3; RGBIndex++) {
for (int index = 0; index < 16; index++) {
unsigned color = get_bits1(&gbit) << RGBIndex;
pal[15 - index] |= color << (5 + 8);
}
}
for (int RGBIndex = 0; RGBIndex < 3; RGBIndex++) {
for (int index = 0; index < 16; index++) {
unsigned color = get_bits1(&gbit) << RGBIndex;
pal[15 - index] |= color << (5 + 0);
}
}
for (int index = 0; index < 16; index++)
pal[index] = (0xFFU << 24) | pal[index] | (pal[index] >> 3);
bytestream2_skip(gb, 18);
return 0;
}
static int decode_index_palmap(SGAVideoContext *s, AVFrame *frame)
{
const uint8_t *tt = s->tileindex_data;
for (int y = 0; y < s->tiles_h; y++) {
for (int x = 0; x < s->tiles_w; x++) {
int pal_idx = s->palmapindex_data[y * s->tiles_w + x] * 16;
uint8_t *dst = frame->data[0] + y * 8 * frame->linesize[0] + x * 8;
for (int yy = 0; yy < 8; yy++) {
for (int xx = 0; xx < 8; xx++)
dst[xx] = pal_idx + tt[xx];
tt += 8;
dst += frame->linesize[0];
}
}
}
return 0;
}
static int decode_index_tilemap(SGAVideoContext *s, AVFrame *frame)
{
GetByteContext *gb = &s->gb;
GetBitContext pm;
bytestream2_seek(gb, s->tilemapdata_offset, SEEK_SET);
if (bytestream2_get_bytes_left(gb) < s->tilemapdata_size)
return AVERROR_INVALIDDATA;
init_get_bits8(&pm, gb->buffer, s->tilemapdata_size);
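/*
 * Each 16-bit tilemap entry, as consumed below: bits 0-8 hold the tile
 * index (1-based, clipped to the tile count), bit 11 is horizontal flip,
 * bit 12 vertical flip, and bits 13-14 select one of four 16-color
 * palettes.
 */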
for (int y = 0; y < s->tiles_h; y++) {
for (int x = 0; x < s->tiles_w; x++) {
uint8_t tile[64];
int tilemap = get_bits(&pm, 16);
int flip_x = (tilemap >> 11) & 1;
int flip_y = (tilemap >> 12) & 1;
int tindex = av_clip((tilemap & 511) - 1, 0, s->nb_tiles - 1);
const uint8_t *tt = s->tileindex_data + tindex * 64;
int pal_idx = ((tilemap >> 13) & 3) * 16;
uint8_t *dst = frame->data[0] + y * 8 * frame->linesize[0] + x * 8;
if (!flip_x && !flip_y) {
memcpy(tile, tt, 64);
} else if (flip_x && flip_y) {
for (int i = 0; i < 8; i++) {
for (int j = 0; j < 8; j++)
tile[i * 8 + j] = tt[(7 - i) * 8 + 7 - j];
}
} else if (flip_x) {
for (int i = 0; i < 8; i++) {
for (int j = 0; j < 8; j++)
tile[i * 8 + j] = tt[i * 8 + 7 - j];
}
} else {
for (int i = 0; i < 8; i++) {
for (int j = 0; j < 8; j++)
tile[i * 8 + j] = tt[(7 - i) * 8 + j];
}
}
for (int yy = 0; yy < 8; yy++) {
for (int xx = 0; xx < 8; xx++)
dst[xx] = pal_idx + tile[xx + yy * 8];
dst += frame->linesize[0];
}
}
}
return 0;
}
static int decode_index(SGAVideoContext *s, AVFrame *frame)
{
const uint8_t *src = s->tileindex_data;
uint8_t *dst = frame->data[0];
for (int y = 0; y < frame->height; y += 8) {
for (int x = 0; x < frame->width; x += 8) {
for (int yy = 0; yy < 8; yy++) {
for (int xx = 0; xx < 8; xx++)
dst[x + xx + yy * frame->linesize[0]] = src[xx];
src += 8;
}
}
dst += 8 * frame->linesize[0];
}
return 0;
}
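/*
 * LZSS decompression as used by this decoder: a big-endian 16-bit header
 * supplies 16 flag bits, MSB first. A 0 flag copies two literal bytes; a
 * 1 flag reads a 16-bit word split into a count (the high "16 - shift"
 * bits, plus "plus") and a back-reference offset (the low "shift" bits),
 * copying count * 2 bytes from earlier output. An all-zero word flushes
 * the remaining input as literals and ends the stream.
 */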
static int lzss_decompress(AVCodecContext *avctx,
GetByteContext *gb, uint8_t *dst,
int dst_size, int shift, int plus)
{
int oi = 0;
while (bytestream2_get_bytes_left(gb) > 0 && oi < dst_size) {
uint16_t displace, header = bytestream2_get_be16(gb);
int count, offset;
for (int i = 0; i < 16; i++) {
switch (header >> 15) {
case 0:
if (oi + 2 < dst_size) {
dst[oi++] = bytestream2_get_byte(gb);
dst[oi++] = bytestream2_get_byte(gb);
}
break;
case 1:
displace = bytestream2_get_be16(gb);
count = displace >> shift;
offset = displace & ((1 << shift) - 1);
if (displace == 0) {
while (bytestream2_get_bytes_left(gb) > 0 &&
oi < dst_size)
dst[oi++] = bytestream2_get_byte(gb);
return oi;
}
count += plus;
if (offset <= 0)
offset = 1;
if (oi < offset || oi + count * 2 > dst_size)
return AVERROR_INVALIDDATA;
for (int j = 0; j < count * 2; j++) {
dst[oi] = dst[oi - offset];
oi++;
}
break;
}
header <<= 1;
}
}
return AVERROR_INVALIDDATA;
}
static int decode_palmapdata(AVCodecContext *avctx)
{
SGAVideoContext *s = avctx->priv_data;
const int bits = (s->nb_pal + 1) / 2;
GetByteContext *gb = &s->gb;
GetBitContext pm;
bytestream2_seek(gb, s->palmapdata_offset, SEEK_SET);
if (bytestream2_get_bytes_left(gb) < s->palmapdata_size)
return AVERROR_INVALIDDATA;
init_get_bits8(&pm, gb->buffer, s->palmapdata_size);
for (int y = 0; y < s->tiles_h; y++) {
uint8_t *dst = s->palmapindex_data + y * s->tiles_w;
for (int x = 0; x < s->tiles_w; x++)
dst[x] = get_bits(&pm, bits);
dst += s->tiles_w;
}
return 0;
}
static int decode_tiledata(AVCodecContext *avctx)
{
SGAVideoContext *s = avctx->priv_data;
GetByteContext *gb = &s->gb;
GetBitContext tm;
bytestream2_seek(gb, s->tiledata_offset, SEEK_SET);
if (bytestream2_get_bytes_left(gb) < s->tiledata_size)
return AVERROR_INVALIDDATA;
init_get_bits8(&tm, gb->buffer, s->tiledata_size);
for (int n = 0; n < s->nb_tiles; n++) {
uint8_t *dst = s->tileindex_data + n * 64;
for (int yy = 0; yy < 8; yy++) {
for (int xx = 0; xx < 8; xx++)
dst[xx] = get_bits(&tm, 4);
dst += 8;
}
}
for (int i = 0; i < s->nb_tiles && s->swap; i++) {
uint8_t *dst = s->tileindex_data + i * 64;
for (int j = 8; j < 64; j += 16) {
for (int k = 0; k < 8; k += 2)
FFSWAP(uint8_t, dst[j + k], dst[j+k+1]);
}
}
return 0;
}
static int sga_decode_frame(AVCodecContext *avctx, void *data,
int *got_frame, AVPacket *avpkt)
{
SGAVideoContext *s = avctx->priv_data;
GetByteContext *gb = &s->gb;
AVFrame *frame = data;
int ret, type;
if (avpkt->size <= 14)
return AVERROR_INVALIDDATA;
s->flags = avpkt->data[8];
s->nb_pal = avpkt->data[9];
s->tiles_w = avpkt->data[10];
s->tiles_h = avpkt->data[11];
if (s->nb_pal > 4)
return AVERROR_INVALIDDATA;
if ((ret = ff_set_dimensions(avctx,
s->tiles_w * 8,
s->tiles_h * 8)) < 0)
return ret;
av_fast_padded_malloc(&s->tileindex_data, &s->tileindex_size,
avctx->width * avctx->height);
if (!s->tileindex_data)
return AVERROR(ENOMEM);
av_fast_padded_malloc(&s->palmapindex_data, &s->palmapindex_size,
s->tiles_w * s->tiles_h);
if (!s->palmapindex_data)
return AVERROR(ENOMEM);
if ((ret = ff_get_buffer(avctx, frame, 0)) < 0)
return ret;
bytestream2_init(gb, avpkt->data, avpkt->size);
type = bytestream2_get_byte(gb);
s->metadata_size = 12 + ((!!(s->flags & HAVE_TILEMAP)) * 2);
s->nb_tiles = s->flags & HAVE_TILEMAP ? AV_RB16(avpkt->data + 12) : s->tiles_w * s->tiles_h;
if (s->nb_tiles > s->tiles_w * s->tiles_h)
return AVERROR_INVALIDDATA;
av_log(avctx, AV_LOG_DEBUG, "type: %X flags: %X nb_tiles: %d\n", type, s->flags, s->nb_tiles);
switch (type) {
case 0xE7:
case 0xCB:
case 0xCD:
s->swap = 1;
s->shift = 12;
s->plus = 1;
break;
case 0xC9:
s->swap = 1;
s->shift = 13;
s->plus = 1;
break;
case 0xC8:
s->swap = 1;
s->shift = 13;
s->plus = 0;
break;
case 0xC7:
s->swap = 0;
s->shift = 13;
s->plus = 1;
break;
case 0xC6:
s->swap = 0;
s->shift = 13;
s->plus = 0;
break;
}
if (type == 0xE7) {
int offset = s->metadata_size, left;
int sizes[3];
bytestream2_seek(gb, s->metadata_size, SEEK_SET);
for (int i = 0; i < 3; i++)
sizes[i] = bytestream2_get_be16(gb);
for (int i = 0; i < 3; i++) {
int size = sizes[i];
int raw = size >> 15;
size &= (1 << 15) - 1;
if (raw) {
if (bytestream2_get_bytes_left(gb) < size)
return AVERROR_INVALIDDATA;
if (sizeof(s->uncompressed) - offset < size)
return AVERROR_INVALIDDATA;
memcpy(s->uncompressed + offset, gb->buffer, size);
bytestream2_skip(gb, size);
} else {
GetByteContext gb2;
if (bytestream2_get_bytes_left(gb) < size)
return AVERROR_INVALIDDATA;
bytestream2_init(&gb2, gb->buffer, size);
ret = lzss_decompress(avctx, &gb2, s->uncompressed + offset,
sizeof(s->uncompressed) - offset, s->shift, s->plus);
if (ret < 0)
return ret;
bytestream2_skip(gb, size);
size = ret;
}
offset += size;
}
left = bytestream2_get_bytes_left(gb);
if (sizeof(s->uncompressed) - offset < left)
return AVERROR_INVALIDDATA;
bytestream2_get_buffer(gb, s->uncompressed + offset, left);
offset += left;
bytestream2_init(gb, s->uncompressed, offset);
}
switch (type) {
case 0xCD:
case 0xCB:
case 0xC9:
case 0xC8:
case 0xC7:
case 0xC6:
bytestream2_seek(gb, s->metadata_size, SEEK_SET);
ret = lzss_decompress(avctx, gb, s->uncompressed + s->metadata_size,
sizeof(s->uncompressed) - s->metadata_size, s->shift, s->plus);
if (ret < 0)
return ret;
bytestream2_init(gb, s->uncompressed, ret + s->metadata_size);
case 0xE7:
case 0xC1:
s->tiledata_size = s->nb_tiles * 32;
s->paldata_size = s->nb_pal * 18;
s->tiledata_offset = s->flags & PALDATA_FOLLOWS_TILEDATA ? s->metadata_size : s->metadata_size + s->paldata_size;
s->paldata_offset = s->flags & PALDATA_FOLLOWS_TILEDATA ? s->metadata_size + s->tiledata_size : s->metadata_size;
s->palmapdata_offset = (s->flags & HAVE_TILEMAP) ? -1 : s->paldata_offset + s->paldata_size;
s->palmapdata_size = (s->flags & HAVE_TILEMAP) || s->nb_pal < 2 ? 0 : (s->tiles_w * s->tiles_h * ((s->nb_pal + 1) / 2) + 7) / 8;
s->tilemapdata_size = (s->flags & HAVE_TILEMAP) ? s->tiles_w * s->tiles_h * 2 : 0;
s->tilemapdata_offset = (s->flags & HAVE_TILEMAP) ? s->paldata_offset + s->paldata_size: -1;
bytestream2_seek(gb, s->paldata_offset, SEEK_SET);
for (int n = 0; n < s->nb_pal; n++) {
ret = decode_palette(gb, s->pal + 16 * n);
if (ret < 0)
return ret;
}
if (s->tiledata_size > 0) {
ret = decode_tiledata(avctx);
if (ret < 0)
return ret;
}
if (s->palmapdata_size > 0) {
ret = decode_palmapdata(avctx);
if (ret < 0)
return ret;
}
if (s->palmapdata_size > 0 && s->tiledata_size > 0) {
ret = decode_index_palmap(s, frame);
if (ret < 0)
return ret;
} else if (s->tilemapdata_size > 0 && s->tiledata_size > 0) {
ret = decode_index_tilemap(s, frame);
if (ret < 0)
return ret;
} else if (s->tiledata_size > 0) {
ret = decode_index(s, frame);
if (ret < 0)
return ret;
}
break;
default:
av_log(avctx, AV_LOG_ERROR, "Unknown type: %X\n", type);
return AVERROR_INVALIDDATA;
}
memcpy(frame->data[1], s->pal, AVPALETTE_SIZE);
frame->palette_has_changed = 1;
frame->pict_type = AV_PICTURE_TYPE_I;
frame->key_frame = 1;
*got_frame = 1;
return avpkt->size;
}
static av_cold int sga_decode_end(AVCodecContext *avctx)
{
SGAVideoContext *s = avctx->priv_data;
av_freep(&s->tileindex_data);
s->tileindex_size = 0;
av_freep(&s->palmapindex_data);
s->palmapindex_size = 0;
return 0;
}
AVCodec ff_sga_decoder = {
.name = "sga",
.long_name = NULL_IF_CONFIG_SMALL("Digital Pictures SGA Video"),
.type = AVMEDIA_TYPE_VIDEO,
.id = AV_CODEC_ID_SGA_VIDEO,
.priv_data_size = sizeof(SGAVideoContext),
.init = sga_decode_init,
.decode = sga_decode_frame,
.close = sga_decode_end,
.capabilities = AV_CODEC_CAP_DR1,
.caps_internal = FF_CODEC_CAP_INIT_THREADSAFE,
};

View File

@@ -1,67 +0,0 @@
/*
* Header file for hardcoded sine windows
*
* Copyright (c) 2009 Reimar Döffinger <Reimar.Doeffinger@gmx.de>
*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#ifndef AVCODEC_SINEWIN_FIXED_TABLEGEN_H
#define AVCODEC_SINEWIN_FIXED_TABLEGEN_H
#ifdef BUILD_TABLES
#undef DECLARE_ALIGNED
#define DECLARE_ALIGNED(align, type, name) type name
#else
#include "libavutil/mem_internal.h"
#endif
#define SINETABLE(size) \
static SINETABLE_CONST DECLARE_ALIGNED(32, int, sine_##size##_fixed)[size]
#if CONFIG_HARDCODED_TABLES
#define init_sine_windows_fixed()
#define SINETABLE_CONST const
#include "libavcodec/sinewin_fixed_tables.h"
#else
// do not use libavutil/libm.h since this is compiled both
// for the host and the target and config.h is only valid for the target
#include <math.h>
#include "libavutil/attributes.h"
#define SINETABLE_CONST
SINETABLE( 128);
SINETABLE( 512);
SINETABLE(1024);
#define SIN_FIX(a) (int)floor((a) * 0x80000000 + 0.5)
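// SIN_FIX converts to Q31 fixed point (1.0 -> 2^31), rounding to
// nearest; e.g. SIN_FIX(0.25) == 0x20000000.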
// Generate a sine window.
static av_cold void sine_window_init_fixed(int *window, int n)
{
for (int i = 0; i < n; i++)
window[i] = SIN_FIX(sinf((i + 0.5) * (M_PI / (2.0 * n))));
}
static av_cold void init_sine_windows_fixed(void)
{
sine_window_init_fixed(sine_128_fixed, 128);
sine_window_init_fixed(sine_512_fixed, 512);
sine_window_init_fixed(sine_1024_fixed, 1024);
}
#endif /* CONFIG_HARDCODED_TABLES */
#endif /* AVCODEC_SINEWIN_FIXED_TABLEGEN_H */

View File

@@ -1,308 +0,0 @@
/*
* SpeedHQ encoder
* Copyright (c) 2000, 2001 Fabrice Bellard
* Copyright (c) 2003 Alex Beregszaszi
* Copyright (c) 2003-2004 Michael Niedermayer
* Copyright (c) 2020 FFmpeg
*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
/**
* @file
* SpeedHQ encoder.
*/
#include "libavutil/pixdesc.h"
#include "libavutil/thread.h"
#include "avcodec.h"
#include "mpeg12.h"
#include "mpegvideo.h"
#include "speedhqenc.h"
extern RLTable ff_rl_speedhq;
static uint8_t speedhq_static_rl_table_store[2][2*MAX_RUN + MAX_LEVEL + 3];
static uint16_t mpeg12_vlc_dc_lum_code_reversed[12];
static uint16_t mpeg12_vlc_dc_chroma_code_reversed[12];
/* simple include-everything table for DC: the low byte holds the bit
 * count, the upper three bytes hold the code */
static uint32_t speedhq_lum_dc_uni[512];
static uint32_t speedhq_chr_dc_uni[512];
static uint8_t uni_speedhq_ac_vlc_len[64 * 64 * 2];
static uint32_t reverse(uint32_t num, int bits)
{
return bitswap_32(num) >> (32 - bits);
}
static void reverse_code(const uint16_t *code, const uint8_t *bits,
uint16_t *reversed_code, int num_entries)
{
for (int i = 0; i < num_entries; i++)
reversed_code[i] = reverse(code[i], bits[i]);
}
static av_cold void speedhq_init_static_data(void)
{
/* Exactly the same as MPEG-2, except little-endian. */
reverse_code(ff_mpeg12_vlc_dc_lum_code,
ff_mpeg12_vlc_dc_lum_bits,
mpeg12_vlc_dc_lum_code_reversed,
12);
reverse_code(ff_mpeg12_vlc_dc_chroma_code,
ff_mpeg12_vlc_dc_chroma_bits,
mpeg12_vlc_dc_chroma_code_reversed,
12);
ff_rl_init(&ff_rl_speedhq, speedhq_static_rl_table_store);
/* build unified dc encoding tables */
for (int i = -255; i < 256; i++) {
int adiff, index;
int bits, code;
int diff = i;
adiff = FFABS(diff);
if (diff < 0)
diff--;
index = av_log2(2 * adiff);
bits = ff_mpeg12_vlc_dc_lum_bits[index] + index;
code = mpeg12_vlc_dc_lum_code_reversed[index] +
(av_mod_uintp2(diff, index) << ff_mpeg12_vlc_dc_lum_bits[index]);
speedhq_lum_dc_uni[i + 255] = bits + (code << 8);
bits = ff_mpeg12_vlc_dc_chroma_bits[index] + index;
code = mpeg12_vlc_dc_chroma_code_reversed[index] +
(av_mod_uintp2(diff, index) << ff_mpeg12_vlc_dc_chroma_bits[index]);
speedhq_chr_dc_uni[i + 255] = bits + (code << 8);
}
ff_mpeg1_init_uni_ac_vlc(&ff_rl_speedhq, uni_speedhq_ac_vlc_len);
}
av_cold int ff_speedhq_encode_init(MpegEncContext *s)
{
static AVOnce init_static_once = AV_ONCE_INIT;
av_assert0(s->slice_context_count == 1);
if (s->width > 65500 || s->height > 65500) {
av_log(s, AV_LOG_ERROR, "SpeedHQ does not support resolutions above 65500x65500\n");
return AVERROR(EINVAL);
}
s->min_qcoeff = -2048;
s->max_qcoeff = 2047;
ff_thread_once(&init_static_once, speedhq_init_static_data);
s->intra_ac_vlc_length =
s->intra_ac_vlc_last_length =
s->intra_chroma_ac_vlc_length =
s->intra_chroma_ac_vlc_last_length = uni_speedhq_ac_vlc_len;
switch (s->avctx->pix_fmt) {
case AV_PIX_FMT_YUV420P:
s->avctx->codec_tag = MKTAG('S','H','Q','0');
break;
case AV_PIX_FMT_YUV422P:
s->avctx->codec_tag = MKTAG('S','H','Q','2');
break;
case AV_PIX_FMT_YUV444P:
s->avctx->codec_tag = MKTAG('S','H','Q','4');
break;
default:
av_assert0(0);
}
return 0;
}
void ff_speedhq_encode_picture_header(MpegEncContext *s)
{
put_bits_le(&s->pb, 8, 100 - s->qscale * 2); /* FIXME why doubled */
put_bits_le(&s->pb, 24, 4); /* no second field */
/* length of first slice, will be filled out later */
s->slice_start = 4;
put_bits_le(&s->pb, 24, 0);
}
void ff_speedhq_end_slice(MpegEncContext *s)
{
int slice_len;
flush_put_bits_le(&s->pb);
slice_len = s->pb.buf_ptr - (s->pb.buf + s->slice_start);
AV_WL24(s->pb.buf + s->slice_start, slice_len);
/* length of next slice, will be filled out later */
s->slice_start = s->pb.buf_ptr - s->pb.buf;
put_bits_le(&s->pb, 24, 0);
}
static inline void encode_dc(PutBitContext *pb, int diff, int component)
{
unsigned int diff_u = diff + 255;
if (diff_u >= 511) {
int index;
if (diff < 0) {
index = av_log2_16bit(-2 * diff);
diff--;
} else {
index = av_log2_16bit(2 * diff);
}
if (component == 0)
put_bits_le(pb,
ff_mpeg12_vlc_dc_lum_bits[index] + index,
mpeg12_vlc_dc_lum_code_reversed[index] +
(av_mod_uintp2(diff, index) << ff_mpeg12_vlc_dc_lum_bits[index]));
else
put_bits_le(pb,
ff_mpeg12_vlc_dc_chroma_bits[index] + index,
mpeg12_vlc_dc_chroma_code_reversed[index] +
(av_mod_uintp2(diff, index) << ff_mpeg12_vlc_dc_chroma_bits[index]));
} else {
if (component == 0)
put_bits_le(pb,
speedhq_lum_dc_uni[diff + 255] & 0xFF,
speedhq_lum_dc_uni[diff + 255] >> 8);
else
put_bits_le(pb,
speedhq_chr_dc_uni[diff + 255] & 0xFF,
speedhq_chr_dc_uni[diff + 255] >> 8);
}
}
static void encode_block(MpegEncContext *s, int16_t *block, int n)
{
int alevel, level, last_non_zero, dc, i, j, run, last_index, sign;
int code;
int component, val;
/* DC coef */
component = (n <= 3 ? 0 : (n&1) + 1);
dc = block[0]; /* overflow is impossible */
val = s->last_dc[component] - dc; /* opposite of most codecs */
encode_dc(&s->pb, val, component);
s->last_dc[component] = dc;
/* now quantize & encode AC coefficients */
last_non_zero = 0;
last_index = s->block_last_index[n];
for (i = 1; i <= last_index; i++) {
j = s->intra_scantable.permutated[i];
level = block[j];
/* encode using VLC */
if (level != 0) {
run = i - last_non_zero - 1;
alevel = level;
MASK_ABS(sign, alevel);
sign &= 1;
if (alevel <= ff_rl_speedhq.max_level[0][run]) {
code = ff_rl_speedhq.index_run[0][run] + alevel - 1;
/* store the VLC & sign at once */
put_bits_le(&s->pb, ff_rl_speedhq.table_vlc[code][1] + 1,
ff_rl_speedhq.table_vlc[code][0] + (sign << ff_rl_speedhq.table_vlc[code][1]));
} else {
/* escape seems to be pretty rare <5% so I do not optimize it */
put_bits_le(&s->pb, ff_rl_speedhq.table_vlc[121][1], ff_rl_speedhq.table_vlc[121][0]);
/* escape: only clip in this case */
put_bits_le(&s->pb, 6, run);
put_bits_le(&s->pb, 12, level + 2048);
}
last_non_zero = i;
}
}
/* end of block */
put_bits_le(&s->pb, ff_rl_speedhq.table_vlc[122][1], ff_rl_speedhq.table_vlc[122][0]);
}
void ff_speedhq_encode_mb(MpegEncContext *s, int16_t block[12][64])
{
int i;
for (i = 0; i < 6; i++) {
encode_block(s, block[i], i);
}
if (s->chroma_format == CHROMA_444) {
encode_block(s, block[8], 8);
encode_block(s, block[9], 9);
encode_block(s, block[6], 6);
encode_block(s, block[7], 7);
encode_block(s, block[10], 10);
encode_block(s, block[11], 11);
} else if (s->chroma_format == CHROMA_422) {
encode_block(s, block[6], 6);
encode_block(s, block[7], 7);
}
s->i_tex_bits += get_bits_diff(s);
}
static int ff_speedhq_mb_rows_in_slice(int slice_num, int mb_height)
{
return mb_height / 4 + (slice_num < (mb_height % 4));
}
int ff_speedhq_mb_y_order_to_mb(int mb_y_order, int mb_height, int *first_in_slice)
{
int slice_num = 0;
while (mb_y_order >= ff_speedhq_mb_rows_in_slice(slice_num, mb_height)) {
mb_y_order -= ff_speedhq_mb_rows_in_slice(slice_num, mb_height);
slice_num++;
}
*first_in_slice = (mb_y_order == 0);
return mb_y_order * 4 + slice_num;
}
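/*
 * Worked example: with mb_height = 10 the four slices hold 3, 3, 2 and 2
 * macroblock rows, and mb_y_order 0, 1, 2 map to mb_y 0, 4, 8 in slice 0,
 * while mb_y_order 3 starts slice 1 at mb_y 1 - i.e. the slices are
 * interleaved vertically with a stride of 4.
 */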
#if CONFIG_SPEEDHQ_ENCODER
static const AVClass speedhq_class = {
.class_name = "speedhq encoder",
.item_name = av_default_item_name,
.option = ff_mpv_generic_options,
.version = LIBAVUTIL_VERSION_INT,
};
AVCodec ff_speedhq_encoder = {
.name = "speedhq",
.long_name = NULL_IF_CONFIG_SMALL("NewTek SpeedHQ"),
.type = AVMEDIA_TYPE_VIDEO,
.id = AV_CODEC_ID_SPEEDHQ,
.priv_data_size = sizeof(MpegEncContext),
.init = ff_mpv_encode_init,
.encode2 = ff_mpv_encode_picture,
.close = ff_mpv_encode_end,
.caps_internal = FF_CODEC_CAP_INIT_CLEANUP,
.pix_fmts = (const enum AVPixelFormat[]) {
AV_PIX_FMT_YUV420P, AV_PIX_FMT_YUV422P, AV_PIX_FMT_YUV444P,
AV_PIX_FMT_NONE
},
.priv_class = &speedhq_class,
};
#endif

View File

@@ -1,48 +0,0 @@
/*
* SpeedHQ encoder
* Copyright (c) 2000, 2001 Fabrice Bellard
* Copyright (c) 2003 Alex Beregszaszi
* Copyright (c) 2003-2004 Michael Niedermayer
*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
/**
* @file
* SpeedHQ encoder.
*/
#ifndef AVCODEC_SPEEDHQENC_H
#define AVCODEC_SPEEDHQENC_H
#include <stdint.h>
#include "mjpeg.h"
#include "mjpegenc_common.h"
#include "mpegvideo.h"
#include "put_bits.h"
int ff_speedhq_encode_init(MpegEncContext *s);
void ff_speedhq_encode_close(MpegEncContext *s);
void ff_speedhq_encode_mb(MpegEncContext *s, int16_t block[12][64]);
void ff_speedhq_encode_picture_header(MpegEncContext *s);
void ff_speedhq_end_slice(MpegEncContext *s);
int ff_speedhq_mb_y_order_to_mb(int mb_y_order, int mb_height, int *first_in_slice);
#endif /* AVCODEC_SPEEDHQENC_H */

View File

@@ -1,210 +0,0 @@
/*
* TTML subtitle encoder
* Copyright (c) 2020 24i
*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
/**
* @file
* TTML subtitle encoder
* @see https://www.w3.org/TR/ttml1/
* @see https://www.w3.org/TR/ttml2/
* @see https://www.w3.org/TR/ttml-imsc/rec
*/
#include "avcodec.h"
#include "internal.h"
#include "libavutil/avstring.h"
#include "libavutil/bprint.h"
#include "libavutil/internal.h"
#include "ass_split.h"
#include "ass.h"
#include "ttmlenc.h"
typedef struct {
AVCodecContext *avctx;
ASSSplitContext *ass_ctx;
AVBPrint buffer;
} TTMLContext;
static void ttml_text_cb(void *priv, const char *text, int len)
{
TTMLContext *s = priv;
AVBPrint cur_line = { 0 };
AVBPrint *buffer = &s->buffer;
av_bprint_init(&cur_line, len, AV_BPRINT_SIZE_UNLIMITED);
av_bprint_append_data(&cur_line, text, len);
if (!av_bprint_is_complete(&cur_line)) {
av_log(s->avctx, AV_LOG_ERROR,
"Failed to move the current subtitle dialog to AVBPrint!\n");
av_bprint_finalize(&cur_line, NULL);
return;
}
av_bprint_escape(buffer, cur_line.str, NULL, AV_ESCAPE_MODE_XML,
0);
av_bprint_finalize(&cur_line, NULL);
}
static void ttml_new_line_cb(void *priv, int forced)
{
TTMLContext *s = priv;
av_bprintf(&s->buffer, "<br/>");
}
static const ASSCodesCallbacks ttml_callbacks = {
.text = ttml_text_cb,
.new_line = ttml_new_line_cb,
};
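/*
 * For example (illustration): an ASS event whose text is "1 < 2\Nor so"
 * comes out of these callbacks as "1 &lt; 2<br/>or so" in s->buffer,
 * since the text callback XML-escapes each run and the new-line callback
 * emits <br/>.
 */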
static int ttml_encode_frame(AVCodecContext *avctx, uint8_t *buf,
int bufsize, const AVSubtitle *sub)
{
TTMLContext *s = avctx->priv_data;
ASSDialog *dialog;
int i;
av_bprint_clear(&s->buffer);
for (i=0; i<sub->num_rects; i++) {
const char *ass = sub->rects[i]->ass;
if (sub->rects[i]->type != SUBTITLE_ASS) {
av_log(avctx, AV_LOG_ERROR, "Only SUBTITLE_ASS type supported.\n");
return AVERROR(EINVAL);
}
#if FF_API_ASS_TIMING
if (!strncmp(ass, "Dialogue: ", 10)) {
int num;
dialog = ff_ass_split_dialog(s->ass_ctx, ass, 0, &num);
for (; dialog && num--; dialog++) {
int ret = ff_ass_split_override_codes(&ttml_callbacks, s,
dialog->text);
int log_level = (ret != AVERROR_INVALIDDATA ||
avctx->err_recognition & AV_EF_EXPLODE) ?
AV_LOG_ERROR : AV_LOG_WARNING;
if (ret < 0) {
av_log(avctx, log_level,
"Splitting received ASS dialog failed: %s\n",
av_err2str(ret));
if (log_level == AV_LOG_ERROR)
return ret;
}
}
} else {
#endif
dialog = ff_ass_split_dialog2(s->ass_ctx, ass);
if (!dialog)
return AVERROR(ENOMEM);
{
int ret = ff_ass_split_override_codes(&ttml_callbacks, s,
dialog->text);
int log_level = (ret != AVERROR_INVALIDDATA ||
avctx->err_recognition & AV_EF_EXPLODE) ?
AV_LOG_ERROR : AV_LOG_WARNING;
if (ret < 0) {
av_log(avctx, log_level,
"Splitting received ASS dialog text %s failed: %s\n",
dialog->text,
av_err2str(ret));
if (log_level == AV_LOG_ERROR) {
ff_ass_free_dialog(&dialog);
return ret;
}
}
ff_ass_free_dialog(&dialog);
}
#if FF_API_ASS_TIMING
}
#endif
}
if (!av_bprint_is_complete(&s->buffer))
return AVERROR(ENOMEM);
if (!s->buffer.len)
return 0;
// av_strlcpy always null-terminates; a return value larger than
// bufsize - 1 means the event did not fit and was truncated.
if (av_strlcpy(buf, s->buffer.str, bufsize) > bufsize - 1) {
av_log(avctx, AV_LOG_ERROR, "Buffer too small for TTML event.\n");
return AVERROR_BUFFER_TOO_SMALL;
}
return s->buffer.len;
}
static av_cold int ttml_encode_close(AVCodecContext *avctx)
{
TTMLContext *s = avctx->priv_data;
ff_ass_split_free(s->ass_ctx);
av_bprint_finalize(&s->buffer, NULL);
return 0;
}
static av_cold int ttml_encode_init(AVCodecContext *avctx)
{
TTMLContext *s = avctx->priv_data;
s->avctx = avctx;
if (!(s->ass_ctx = ff_ass_split(avctx->subtitle_header))) {
return AVERROR_INVALIDDATA;
}
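/* One byte extra for a NUL plus the usual input padding, so readers of
 * the extradata may safely over-read. */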
if (!(avctx->extradata = av_mallocz(TTMLENC_EXTRADATA_SIGNATURE_SIZE +
1 + AV_INPUT_BUFFER_PADDING_SIZE))) {
return AVERROR(ENOMEM);
}
avctx->extradata_size = TTMLENC_EXTRADATA_SIGNATURE_SIZE;
memcpy(avctx->extradata, TTMLENC_EXTRADATA_SIGNATURE,
TTMLENC_EXTRADATA_SIGNATURE_SIZE);
av_bprint_init(&s->buffer, 0, AV_BPRINT_SIZE_UNLIMITED);
return 0;
}
AVCodec ff_ttml_encoder = {
.name = "ttml",
.long_name = NULL_IF_CONFIG_SMALL("TTML subtitle"),
.type = AVMEDIA_TYPE_SUBTITLE,
.id = AV_CODEC_ID_TTML,
.priv_data_size = sizeof(TTMLContext),
.init = ttml_encode_init,
.encode_sub = ttml_encode_frame,
.close = ttml_encode_close,
.capabilities = FF_CODEC_CAP_INIT_CLEANUP,
};
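/* A minimal usage sketch (illustrative only, standard libavcodec API):
 *
 *   const AVCodec *c = avcodec_find_encoder(AV_CODEC_ID_TTML);
 *   AVCodecContext *enc = avcodec_alloc_context3(c);
 *   // enc->subtitle_header must carry an ASS header, e.g. copied from
 *   // the demuxer; then:
 *   avcodec_open2(enc, c, NULL);
 *   avcodec_encode_subtitle(enc, buf, sizeof(buf), &sub);
 */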


@@ -1,28 +0,0 @@
/*
* TTML subtitle encoder shared functionality
* Copyright (c) 2020 24i
*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#ifndef AVCODEC_TTMLENC_H
#define AVCODEC_TTMLENC_H
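/* Marker written into extradata so downstream code (e.g. muxers) can
 * recognize packets produced by this encoder; sizeof() counts the
 * trailing NUL, hence the -1 below. */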
#define TTMLENC_EXTRADATA_SIGNATURE "lavc-ttmlenc"
#define TTMLENC_EXTRADATA_SIGNATURE_SIZE (sizeof(TTMLENC_EXTRADATA_SIGNATURE) - 1)
#endif /* AVCODEC_TTMLENC_H */


@@ -1,319 +0,0 @@
/*
* AV1 HW decode acceleration through VA API
*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include "libavutil/pixdesc.h"
#include "hwconfig.h"
#include "vaapi_decode.h"
#include "av1dec.h"
static VASurfaceID vaapi_av1_surface_id(AV1Frame *vf)
{
if (vf)
return ff_vaapi_get_surface_id(vf->tf.f);
else
return VA_INVALID_SURFACE;
}
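/* VA-API expresses bit depth as an index: 0 = 8-bit, 1 = 10-bit, 2 = 12-bit. */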
static int8_t vaapi_av1_get_bit_depth_idx(AVCodecContext *avctx)
{
AV1DecContext *s = avctx->priv_data;
const AV1RawSequenceHeader *seq = s->raw_seq;
int8_t bit_depth = 8;
if (seq->seq_profile == 2 && seq->color_config.high_bitdepth)
bit_depth = seq->color_config.twelve_bit ? 12 : 10;
else if (seq->seq_profile <= 2)
bit_depth = seq->color_config.high_bitdepth ? 10 : 8;
else {
av_log(avctx, AV_LOG_ERROR,
"Couldn't get bit depth from profile:%d.\n", seq->seq_profile);
return -1;
}
return bit_depth == 8 ? 0 : bit_depth == 10 ? 1 : 2;
}
static int vaapi_av1_start_frame(AVCodecContext *avctx,
av_unused const uint8_t *buffer,
av_unused uint32_t size)
{
AV1DecContext *s = avctx->priv_data;
const AV1RawSequenceHeader *seq = s->raw_seq;
const AV1RawFrameHeader *frame_header = s->raw_frame_header;
const AV1RawFilmGrainParams *film_grain = &s->cur_frame.film_grain;
VAAPIDecodePicture *pic = s->cur_frame.hwaccel_picture_private;
VADecPictureParameterBufferAV1 pic_param;
int8_t bit_depth_idx;
int err = 0;
int apply_grain = !(avctx->export_side_data & AV_CODEC_EXPORT_DATA_FILM_GRAIN) && film_grain->apply_grain;
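/* Remap the coded lr_type values onto the spec's FrameRestorationType
 * order, which is what VA-API expects. */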
uint8_t remap_lr_type[4] = {AV1_RESTORE_NONE, AV1_RESTORE_SWITCHABLE, AV1_RESTORE_WIENER, AV1_RESTORE_SGRPROJ};
pic->output_surface = vaapi_av1_surface_id(&s->cur_frame);
bit_depth_idx = vaapi_av1_get_bit_depth_idx(avctx);
if (bit_depth_idx < 0)
goto fail;
memset(&pic_param, 0, sizeof(VADecPictureParameterBufferAV1));
pic_param = (VADecPictureParameterBufferAV1) {
.profile = seq->seq_profile,
.order_hint_bits_minus_1 = seq->order_hint_bits_minus_1,
.bit_depth_idx = bit_depth_idx,
.current_frame = pic->output_surface,
.current_display_picture = pic->output_surface,
.frame_width_minus1 = frame_header->frame_width_minus_1,
.frame_height_minus1 = frame_header->frame_height_minus_1,
.primary_ref_frame = frame_header->primary_ref_frame,
.order_hint = frame_header->order_hint,
.tile_cols = frame_header->tile_cols,
.tile_rows = frame_header->tile_rows,
.context_update_tile_id = frame_header->context_update_tile_id,
.interp_filter = frame_header->interpolation_filter,
.filter_level[0] = frame_header->loop_filter_level[0],
.filter_level[1] = frame_header->loop_filter_level[1],
.filter_level_u = frame_header->loop_filter_level[2],
.filter_level_v = frame_header->loop_filter_level[3],
.base_qindex = frame_header->base_q_idx,
.cdef_damping_minus_3 = frame_header->cdef_damping_minus_3,
.cdef_bits = frame_header->cdef_bits,
.seq_info_fields.fields = {
.still_picture = seq->still_picture,
.use_128x128_superblock = seq->use_128x128_superblock,
.enable_filter_intra = seq->enable_filter_intra,
.enable_intra_edge_filter = seq->enable_intra_edge_filter,
.enable_interintra_compound = seq->enable_interintra_compound,
.enable_masked_compound = seq->enable_masked_compound,
.enable_dual_filter = seq->enable_dual_filter,
.enable_order_hint = seq->enable_order_hint,
.enable_jnt_comp = seq->enable_jnt_comp,
.enable_cdef = seq->enable_cdef,
.mono_chrome = seq->color_config.mono_chrome,
.color_range = seq->color_config.color_range,
.subsampling_x = seq->color_config.subsampling_x,
.subsampling_y = seq->color_config.subsampling_y,
.chroma_sample_position = seq->color_config.chroma_sample_position,
.film_grain_params_present = seq->film_grain_params_present &&
!(avctx->export_side_data & AV_CODEC_EXPORT_DATA_FILM_GRAIN),
},
.seg_info.segment_info_fields.bits = {
.enabled = frame_header->segmentation_enabled,
.update_map = frame_header->segmentation_update_map,
.temporal_update = frame_header->segmentation_temporal_update,
.update_data = frame_header->segmentation_update_data,
},
.film_grain_info = {
.film_grain_info_fields.bits = {
.apply_grain = apply_grain,
.chroma_scaling_from_luma = film_grain->chroma_scaling_from_luma,
.grain_scaling_minus_8 = film_grain->grain_scaling_minus_8,
.ar_coeff_lag = film_grain->ar_coeff_lag,
.ar_coeff_shift_minus_6 = film_grain->ar_coeff_shift_minus_6,
.grain_scale_shift = film_grain->grain_scale_shift,
.overlap_flag = film_grain->overlap_flag,
.clip_to_restricted_range = film_grain->clip_to_restricted_range,
},
.grain_seed = film_grain->grain_seed,
.num_y_points = film_grain->num_y_points,
.num_cb_points = film_grain->num_cb_points,
.num_cr_points = film_grain->num_cr_points,
.cb_mult = film_grain->cb_mult,
.cb_luma_mult = film_grain->cb_luma_mult,
.cb_offset = film_grain->cb_offset,
.cr_mult = film_grain->cr_mult,
.cr_luma_mult = film_grain->cr_luma_mult,
.cr_offset = film_grain->cr_offset,
},
.pic_info_fields.bits = {
.frame_type = frame_header->frame_type,
.show_frame = frame_header->show_frame,
.showable_frame = frame_header->showable_frame,
.error_resilient_mode = frame_header->error_resilient_mode,
.disable_cdf_update = frame_header->disable_cdf_update,
.allow_screen_content_tools = frame_header->allow_screen_content_tools,
.force_integer_mv = frame_header->force_integer_mv,
.allow_intrabc = frame_header->allow_intrabc,
.use_superres = frame_header->use_superres,
.allow_high_precision_mv = frame_header->allow_high_precision_mv,
.is_motion_mode_switchable = frame_header->is_motion_mode_switchable,
.use_ref_frame_mvs = frame_header->use_ref_frame_mvs,
.disable_frame_end_update_cdf = frame_header->disable_frame_end_update_cdf,
.uniform_tile_spacing_flag = frame_header->uniform_tile_spacing_flag,
.allow_warped_motion = frame_header->allow_warped_motion,
},
.loop_filter_info_fields.bits = {
.sharpness_level = frame_header->loop_filter_sharpness,
.mode_ref_delta_enabled = frame_header->loop_filter_delta_enabled,
.mode_ref_delta_update = frame_header->loop_filter_delta_update,
},
.mode_control_fields.bits = {
.delta_q_present_flag = frame_header->delta_q_present,
.log2_delta_q_res = frame_header->delta_q_res,
.tx_mode = frame_header->tx_mode,
.reference_select = frame_header->reference_select,
.reduced_tx_set_used = frame_header->reduced_tx_set,
.skip_mode_present = frame_header->skip_mode_present,
},
.loop_restoration_fields.bits = {
.yframe_restoration_type = remap_lr_type[frame_header->lr_type[0]],
.cbframe_restoration_type = remap_lr_type[frame_header->lr_type[1]],
.crframe_restoration_type = remap_lr_type[frame_header->lr_type[2]],
.lr_unit_shift = frame_header->lr_unit_shift,
.lr_uv_shift = frame_header->lr_uv_shift,
},
.qmatrix_fields.bits = {
.using_qmatrix = frame_header->using_qmatrix,
}
};
for (int i = 0; i < AV1_NUM_REF_FRAMES; i++) {
if (pic_param.pic_info_fields.bits.frame_type == AV1_FRAME_KEY)
pic_param.ref_frame_map[i] = VA_INVALID_ID;
else
pic_param.ref_frame_map[i] = vaapi_av1_surface_id(&s->ref[i]);
}
for (int i = 0; i < AV1_REFS_PER_FRAME; i++) {
pic_param.ref_frame_idx[i] = frame_header->ref_frame_idx[i];
}
for (int i = 0; i < AV1_TOTAL_REFS_PER_FRAME; i++) {
pic_param.ref_deltas[i] = frame_header->loop_filter_ref_deltas[i];
}
for (int i = 0; i < 2; i++) {
pic_param.mode_deltas[i] = frame_header->loop_filter_mode_deltas[i];
}
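/* VA-API packs each CDEF strength as (primary << 2) | secondary. */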
for (int i = 0; i < (1 << frame_header->cdef_bits); i++) {
pic_param.cdef_y_strengths[i] =
(frame_header->cdef_y_pri_strength[i] << 2) +
frame_header->cdef_y_sec_strength[i];
pic_param.cdef_uv_strengths[i] =
(frame_header->cdef_uv_pri_strength[i] << 2) +
frame_header->cdef_uv_sec_strength[i];
}
for (int i = 0; i < frame_header->tile_cols; i++) {
pic_param.width_in_sbs_minus_1[i] =
frame_header->width_in_sbs_minus_1[i];
}
for (int i = 0; i < frame_header->tile_rows; i++) {
pic_param.height_in_sbs_minus_1[i] =
frame_header->height_in_sbs_minus_1[i];
}
for (int i = AV1_REF_FRAME_LAST; i <= AV1_REF_FRAME_ALTREF; i++) {
pic_param.wm[i - 1].wmtype = s->cur_frame.gm_type[i];
for (int j = 0; j < 6; j++)
pic_param.wm[i - 1].wmmat[j] = s->cur_frame.gm_params[i][j];
}
if (apply_grain) {
for (int i = 0; i < film_grain->num_y_points; i++) {
pic_param.film_grain_info.point_y_value[i] =
film_grain->point_y_value[i];
pic_param.film_grain_info.point_y_scaling[i] =
film_grain->point_y_scaling[i];
}
for (int i = 0; i < film_grain->num_cb_points; i++) {
pic_param.film_grain_info.point_cb_value[i] =
film_grain->point_cb_value[i];
pic_param.film_grain_info.point_cb_scaling[i] =
film_grain->point_cb_scaling[i];
}
for (int i = 0; i < film_grain->num_cr_points; i++) {
pic_param.film_grain_info.point_cr_value[i] =
film_grain->point_cr_value[i];
pic_param.film_grain_info.point_cr_scaling[i] =
film_grain->point_cr_scaling[i];
}
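/* AR coefficients are coded with a +128 bias; undo it for VA-API. */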
for (int i = 0; i < 24; i++) {
pic_param.film_grain_info.ar_coeffs_y[i] =
film_grain->ar_coeffs_y_plus_128[i] - 128;
}
for (int i = 0; i < 25; i++) {
pic_param.film_grain_info.ar_coeffs_cb[i] =
film_grain->ar_coeffs_cb_plus_128[i] - 128;
pic_param.film_grain_info.ar_coeffs_cr[i] =
film_grain->ar_coeffs_cr_plus_128[i] - 128;
}
}
err = ff_vaapi_decode_make_param_buffer(avctx, pic,
VAPictureParameterBufferType,
&pic_param, sizeof(pic_param));
if (err < 0)
goto fail;
return 0;
fail:
ff_vaapi_decode_cancel(avctx, pic);
return err;
}
static int vaapi_av1_end_frame(AVCodecContext *avctx)
{
const AV1DecContext *s = avctx->priv_data;
VAAPIDecodePicture *pic = s->cur_frame.hwaccel_picture_private;
return ff_vaapi_decode_issue(avctx, pic);
}
static int vaapi_av1_decode_slice(AVCodecContext *avctx,
const uint8_t *buffer,
uint32_t size)
{
const AV1DecContext *s = avctx->priv_data;
VAAPIDecodePicture *pic = s->cur_frame.hwaccel_picture_private;
VASliceParameterBufferAV1 slice_param;
int err = 0;
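/* Submit one slice parameter buffer per tile of the current tile group;
 * tile offsets and sizes are relative to the buffer passed for this
 * tile group. */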
for (int i = s->tg_start; i <= s->tg_end; i++) {
memset(&slice_param, 0, sizeof(VASliceParameterBufferAV1));
slice_param = (VASliceParameterBufferAV1) {
.slice_data_size = s->tile_group_info[i].tile_size,
.slice_data_offset = s->tile_group_info[i].tile_offset,
.slice_data_flag = VA_SLICE_DATA_FLAG_ALL,
.tile_row = s->tile_group_info[i].tile_row,
.tile_column = s->tile_group_info[i].tile_column,
.tg_start = s->tg_start,
.tg_end = s->tg_end,
};
err = ff_vaapi_decode_make_slice_buffer(avctx, pic, &slice_param,
sizeof(VASliceParameterBufferAV1),
buffer,
s->tile_group_info[i].tile_size);
if (err) {
ff_vaapi_decode_cancel(avctx, pic);
return err;
}
}
return 0;
}
const AVHWAccel ff_av1_vaapi_hwaccel = {
.name = "av1_vaapi",
.type = AVMEDIA_TYPE_VIDEO,
.id = AV_CODEC_ID_AV1,
.pix_fmt = AV_PIX_FMT_VAAPI,
.start_frame = vaapi_av1_start_frame,
.end_frame = vaapi_av1_end_frame,
.decode_slice = vaapi_av1_decode_slice,
.frame_priv_data_size = sizeof(VAAPIDecodePicture),
.init = ff_vaapi_decode_init,
.uninit = ff_vaapi_decode_uninit,
.frame_params = ff_vaapi_common_frame_params,
.priv_data_size = sizeof(VAAPIDecodeContext),
.caps_internal = HWACCEL_CAP_ASYNC_SAFE,
};


@@ -1,60 +0,0 @@
/*
* WavPack decoder/encoder common data
* Copyright (c) 2006,2011 Konstantin Shishkov
*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include "wavpack.h"
const uint8_t ff_wp_exp2_table[256] = {
0x00, 0x01, 0x01, 0x02, 0x03, 0x03, 0x04, 0x05, 0x06, 0x06, 0x07, 0x08, 0x08, 0x09, 0x0a, 0x0b,
0x0b, 0x0c, 0x0d, 0x0e, 0x0e, 0x0f, 0x10, 0x10, 0x11, 0x12, 0x13, 0x13, 0x14, 0x15, 0x16, 0x16,
0x17, 0x18, 0x19, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1d, 0x1e, 0x1f, 0x20, 0x20, 0x21, 0x22, 0x23,
0x24, 0x24, 0x25, 0x26, 0x27, 0x28, 0x28, 0x29, 0x2a, 0x2b, 0x2c, 0x2c, 0x2d, 0x2e, 0x2f, 0x30,
0x30, 0x31, 0x32, 0x33, 0x34, 0x35, 0x35, 0x36, 0x37, 0x38, 0x39, 0x3a, 0x3a, 0x3b, 0x3c, 0x3d,
0x3e, 0x3f, 0x40, 0x41, 0x41, 0x42, 0x43, 0x44, 0x45, 0x46, 0x47, 0x48, 0x48, 0x49, 0x4a, 0x4b,
0x4c, 0x4d, 0x4e, 0x4f, 0x50, 0x51, 0x51, 0x52, 0x53, 0x54, 0x55, 0x56, 0x57, 0x58, 0x59, 0x5a,
0x5b, 0x5c, 0x5d, 0x5e, 0x5e, 0x5f, 0x60, 0x61, 0x62, 0x63, 0x64, 0x65, 0x66, 0x67, 0x68, 0x69,
0x6a, 0x6b, 0x6c, 0x6d, 0x6e, 0x6f, 0x70, 0x71, 0x72, 0x73, 0x74, 0x75, 0x76, 0x77, 0x78, 0x79,
0x7a, 0x7b, 0x7c, 0x7d, 0x7e, 0x7f, 0x80, 0x81, 0x82, 0x83, 0x84, 0x85, 0x87, 0x88, 0x89, 0x8a,
0x8b, 0x8c, 0x8d, 0x8e, 0x8f, 0x90, 0x91, 0x92, 0x93, 0x95, 0x96, 0x97, 0x98, 0x99, 0x9a, 0x9b,
0x9c, 0x9d, 0x9f, 0xa0, 0xa1, 0xa2, 0xa3, 0xa4, 0xa5, 0xa6, 0xa8, 0xa9, 0xaa, 0xab, 0xac, 0xad,
0xaf, 0xb0, 0xb1, 0xb2, 0xb3, 0xb4, 0xb6, 0xb7, 0xb8, 0xb9, 0xba, 0xbc, 0xbd, 0xbe, 0xbf, 0xc0,
0xc2, 0xc3, 0xc4, 0xc5, 0xc6, 0xc8, 0xc9, 0xca, 0xcb, 0xcd, 0xce, 0xcf, 0xd0, 0xd2, 0xd3, 0xd4,
0xd6, 0xd7, 0xd8, 0xd9, 0xdb, 0xdc, 0xdd, 0xde, 0xe0, 0xe1, 0xe2, 0xe4, 0xe5, 0xe6, 0xe8, 0xe9,
0xea, 0xec, 0xed, 0xee, 0xf0, 0xf1, 0xf2, 0xf4, 0xf5, 0xf6, 0xf8, 0xf9, 0xfa, 0xfc, 0xfd, 0xff
};
const uint8_t ff_wp_log2_table[256] = {
0x00, 0x01, 0x03, 0x04, 0x06, 0x07, 0x09, 0x0a, 0x0b, 0x0d, 0x0e, 0x10, 0x11, 0x12, 0x14, 0x15,
0x16, 0x18, 0x19, 0x1a, 0x1c, 0x1d, 0x1e, 0x20, 0x21, 0x22, 0x24, 0x25, 0x26, 0x28, 0x29, 0x2a,
0x2c, 0x2d, 0x2e, 0x2f, 0x31, 0x32, 0x33, 0x34, 0x36, 0x37, 0x38, 0x39, 0x3b, 0x3c, 0x3d, 0x3e,
0x3f, 0x41, 0x42, 0x43, 0x44, 0x45, 0x47, 0x48, 0x49, 0x4a, 0x4b, 0x4d, 0x4e, 0x4f, 0x50, 0x51,
0x52, 0x54, 0x55, 0x56, 0x57, 0x58, 0x59, 0x5a, 0x5c, 0x5d, 0x5e, 0x5f, 0x60, 0x61, 0x62, 0x63,
0x64, 0x66, 0x67, 0x68, 0x69, 0x6a, 0x6b, 0x6c, 0x6d, 0x6e, 0x6f, 0x70, 0x71, 0x72, 0x74, 0x75,
0x76, 0x77, 0x78, 0x79, 0x7a, 0x7b, 0x7c, 0x7d, 0x7e, 0x7f, 0x80, 0x81, 0x82, 0x83, 0x84, 0x85,
0x86, 0x87, 0x88, 0x89, 0x8a, 0x8b, 0x8c, 0x8d, 0x8e, 0x8f, 0x90, 0x91, 0x92, 0x93, 0x94, 0x95,
0x96, 0x97, 0x98, 0x99, 0x9a, 0x9b, 0x9b, 0x9c, 0x9d, 0x9e, 0x9f, 0xa0, 0xa1, 0xa2, 0xa3, 0xa4,
0xa5, 0xa6, 0xa7, 0xa8, 0xa9, 0xa9, 0xaa, 0xab, 0xac, 0xad, 0xae, 0xaf, 0xb0, 0xb1, 0xb2, 0xb2,
0xb3, 0xb4, 0xb5, 0xb6, 0xb7, 0xb8, 0xb9, 0xb9, 0xba, 0xbb, 0xbc, 0xbd, 0xbe, 0xbf, 0xc0, 0xc0,
0xc1, 0xc2, 0xc3, 0xc4, 0xc5, 0xc6, 0xc6, 0xc7, 0xc8, 0xc9, 0xca, 0xcb, 0xcb, 0xcc, 0xcd, 0xce,
0xcf, 0xd0, 0xd0, 0xd1, 0xd2, 0xd3, 0xd4, 0xd4, 0xd5, 0xd6, 0xd7, 0xd8, 0xd8, 0xd9, 0xda, 0xdb,
0xdc, 0xdc, 0xdd, 0xde, 0xdf, 0xe0, 0xe0, 0xe1, 0xe2, 0xe3, 0xe4, 0xe4, 0xe5, 0xe6, 0xe7, 0xe7,
0xe8, 0xe9, 0xea, 0xea, 0xeb, 0xec, 0xed, 0xee, 0xee, 0xef, 0xf0, 0xf1, 0xf1, 0xf2, 0xf3, 0xf4,
0xf4, 0xf5, 0xf6, 0xf7, 0xf7, 0xf8, 0xf9, 0xf9, 0xfa, 0xfb, 0xfc, 0xfc, 0xfd, 0xfe, 0xff, 0xff
};


@@ -1,697 +0,0 @@
;******************************************************************************
;* x86-optimized functions for the CFHD decoder
;* Copyright (c) 2020 Paul B Mahol
;*
;* This file is part of FFmpeg.
;*
;* FFmpeg is free software; you can redistribute it and/or
;* modify it under the terms of the GNU Lesser General Public
;* License as published by the Free Software Foundation; either
;* version 2.1 of the License, or (at your option) any later version.
;*
;* FFmpeg is distributed in the hope that it will be useful,
;* but WITHOUT ANY WARRANTY; without even the implied warranty of
;* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
;* Lesser General Public License for more details.
;*
;* You should have received a copy of the GNU Lesser General Public
;* License along with FFmpeg; if not, write to the Free Software
;* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
;******************************************************************************
%include "libavutil/x86/x86util.asm"
SECTION_RODATA
factor_p1_n1: dw 1, -1, 1, -1, 1, -1, 1, -1,
factor_n1_p1: dw -1, 1, -1, 1, -1, 1, -1, 1,
factor_p11_n4: dw 11, -4, 11, -4, 11, -4, 11, -4,
factor_p5_p4: dw 5, 4, 5, 4, 5, 4, 5, 4,
pd_4: times 4 dd 4
pw_1: times 8 dw 1
pw_0: times 8 dw 0
pw_1023: times 8 dw 1023
pw_4095: times 8 dw 4095
SECTION .text
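; Inverse horizontal 2/6 wavelet reconstruction: the interior loop derives
; even/odd output pairs from lowpass sums/differences plus the highpass
; samples, while the first and last two columns use the 11/-4/1 and 5/4/-1
; border taps. The macro argument selects no clipping (0), 10-bit (1023)
; or 12-bit (4095) output clipping.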
%macro CFHD_HORIZ_FILTER 1
%if %1 == 1023
cglobal cfhd_horiz_filter_clip10, 5, 6, 8 + 4 * ARCH_X86_64, output, low, high, width, x, temp
shl widthd, 1
%define ostrideq widthq
%define lwidthq widthq
%define hwidthq widthq
%elif %1 == 4095
cglobal cfhd_horiz_filter_clip12, 5, 6, 8 + 4 * ARCH_X86_64, output, low, high, width, x, temp
shl widthd, 1
%define ostrideq widthq
%define lwidthq widthq
%define hwidthq widthq
%else
%if ARCH_X86_64
cglobal cfhd_horiz_filter, 8, 11, 12, output, ostride, low, lwidth, high, hwidth, width, height, x, y, temp
shl ostrided, 1
shl lwidthd, 1
shl hwidthd, 1
shl widthd, 1
mov yd, heightd
neg yq
%else
cglobal cfhd_horiz_filter, 7, 7, 8, output, x, low, y, high, temp, width, height
shl xd, 1
shl yd, 1
shl tempd, 1
shl widthd, 1
mov xmp, xq
mov ymp, yq
mov tempmp, tempq
mov yd, r7m
neg yq
%define ostrideq xm
%define lwidthq ym
%define hwidthq tempm
%endif
%endif
%if ARCH_X86_64
mova m8, [factor_p1_n1]
mova m9, [factor_n1_p1]
mova m10, [pw_1]
mova m11, [pd_4]
%endif
%if %1 == 0
.looph:
%endif
movsx xq, word [lowq]
imul xq, 11
movsx tempq, word [lowq + 2]
imul tempq, -4
add tempq, xq
movsx xq, word [lowq + 4]
add tempq, xq
add tempq, 4
sar tempq, 3
movsx xq, word [highq]
add tempq, xq
sar tempq, 1
%if %1
movd xm0, tempd
CLIPW m0, [pw_0], [pw_%1]
pextrw tempd, xm0, 0
%endif
mov word [outputq], tempw
movsx xq, word [lowq]
imul xq, 5
movsx tempq, word [lowq + 2]
imul tempq, 4
add tempq, xq
movsx xq, word [lowq + 4]
sub tempq, xq
add tempq, 4
sar tempq, 3
movsx xq, word [highq]
sub tempq, xq
sar tempq, 1
%if %1
movd xm0, tempd
CLIPW m0, [pw_0], [pw_%1]
pextrw tempd, xm0, 0
%endif
mov word [outputq + 2], tempw
mov xq, 0
.loop:
movu m4, [lowq + xq]
movu m1, [lowq + xq + 4]
mova m5, m4
punpcklwd m4, m1
punpckhwd m5, m1
mova m6, m4
mova m7, m5
%if ARCH_X86_64
pmaddwd m4, m8
pmaddwd m5, m8
pmaddwd m6, m9
pmaddwd m7, m9
paddd m4, m11
paddd m5, m11
paddd m6, m11
paddd m7, m11
%else
pmaddwd m4, [factor_p1_n1]
pmaddwd m5, [factor_p1_n1]
pmaddwd m6, [factor_n1_p1]
pmaddwd m7, [factor_n1_p1]
paddd m4, [pd_4]
paddd m5, [pd_4]
paddd m6, [pd_4]
paddd m7, [pd_4]
%endif
psrad m4, 3
psrad m5, 3
psrad m6, 3
psrad m7, 3
movu m2, [lowq + xq + 2]
movu m3, [highq + xq + 2]
mova m0, m2
punpcklwd m2, m3
punpckhwd m0, m3
mova m1, m2
mova m3, m0
%if ARCH_X86_64
pmaddwd m2, m10
pmaddwd m0, m10
pmaddwd m1, m8
pmaddwd m3, m8
%else
pmaddwd m2, [pw_1]
pmaddwd m0, [pw_1]
pmaddwd m1, [factor_p1_n1]
pmaddwd m3, [factor_p1_n1]
%endif
paddd m2, m4
paddd m0, m5
paddd m1, m6
paddd m3, m7
psrad m2, 1
psrad m0, 1
psrad m1, 1
psrad m3, 1
packssdw m2, m0
packssdw m1, m3
mova m0, m2
punpcklwd m2, m1
punpckhwd m0, m1
%if %1
CLIPW m2, [pw_0], [pw_%1]
CLIPW m0, [pw_0], [pw_%1]
%endif
movu [outputq + xq * 2 + 4], m2
movu [outputq + xq * 2 + mmsize + 4], m0
add xq, mmsize
cmp xq, widthq
jl .loop
add lowq, widthq
add highq, widthq
add outputq, widthq
add outputq, widthq
movsx xq, word [lowq - 2]
imul xq, 5
movsx tempq, word [lowq - 4]
imul tempq, 4
add tempq, xq
movsx xq, word [lowq - 6]
sub tempq, xq
add tempq, 4
sar tempq, 3
movsx xq, word [highq - 2]
add tempq, xq
sar tempq, 1
%if %1
movd xm0, tempd
CLIPW m0, [pw_0], [pw_%1]
pextrw tempd, xm0, 0
%endif
mov word [outputq - 4], tempw
movsx xq, word [lowq - 2]
imul xq, 11
movsx tempq, word [lowq - 4]
imul tempq, -4
add tempq, xq
movsx xq, word [lowq - 6]
add tempq, xq
add tempq, 4
sar tempq, 3
movsx xq, word [highq - 2]
sub tempq, xq
sar tempq, 1
%if %1
movd xm0, tempd
CLIPW m0, [pw_0], [pw_%1]
pextrw tempd, xm0, 0
%endif
mov word [outputq - 2], tempw
%if %1 == 0
sub lowq, widthq
sub highq, widthq
sub outputq, widthq
sub outputq, widthq
add lowq, lwidthq
add highq, hwidthq
add outputq, ostrideq
add outputq, ostrideq
add yq, 1
jl .looph
%endif
RET
%endmacro
INIT_XMM sse2
CFHD_HORIZ_FILTER 0
INIT_XMM sse2
CFHD_HORIZ_FILTER 1023
INIT_XMM sse2
CFHD_HORIZ_FILTER 4095
INIT_XMM sse2
%if ARCH_X86_64
cglobal cfhd_vert_filter, 8, 11, 14, output, ostride, low, lwidth, high, hwidth, width, height, x, y, pos
shl ostrided, 1
shl lwidthd, 1
shl hwidthd, 1
shl widthd, 1
dec heightd
mova m8, [factor_p1_n1]
mova m9, [factor_n1_p1]
mova m10, [pw_1]
mova m11, [pd_4]
mova m12, [factor_p11_n4]
mova m13, [factor_p5_p4]
%else
cglobal cfhd_vert_filter, 7, 7, 8, output, x, low, y, high, pos, width, height
shl xd, 1
shl yd, 1
shl posd, 1
shl widthd, 1
mov xmp, xq
mov ymp, yq
mov posmp, posq
mov xq, r7m
dec xq
mov widthmp, xq
%define ostrideq xm
%define lwidthq ym
%define hwidthq posm
%define heightq widthm
%endif
xor xq, xq
.loopw:
xor yq, yq
mov posq, xq
movu m0, [lowq + posq]
add posq, lwidthq
movu m1, [lowq + posq]
mova m2, m0
punpcklwd m0, m1
punpckhwd m2, m1
%if ARCH_X86_64
pmaddwd m0, m12
pmaddwd m2, m12
%else
pmaddwd m0, [factor_p11_n4]
pmaddwd m2, [factor_p11_n4]
%endif
pxor m4, m4
add posq, lwidthq
movu m1, [lowq + posq]
mova m3, m4
punpcklwd m4, m1
punpckhwd m3, m1
psrad m4, 16
psrad m3, 16
paddd m0, m4
paddd m2, m3
paddd m0, [pd_4]
paddd m2, [pd_4]
psrad m0, 3
psrad m2, 3
mov posq, xq
pxor m4, m4
movu m1, [highq + posq]
mova m3, m4
punpcklwd m4, m1
punpckhwd m3, m1
psrad m4, 16
psrad m3, 16
paddd m0, m4
paddd m2, m3
psrad m0, 1
psrad m2, 1
packssdw m0, m2
movu [outputq + posq], m0
movu m0, [lowq + posq]
add posq, lwidthq
movu m1, [lowq + posq]
mova m2, m0
punpcklwd m0, m1
punpckhwd m2, m1
%if ARCH_X86_64
pmaddwd m0, m13
pmaddwd m2, m13
%else
pmaddwd m0, [factor_p5_p4]
pmaddwd m2, [factor_p5_p4]
%endif
pxor m4, m4
add posq, lwidthq
movu m1, [lowq + posq]
mova m3, m4
punpcklwd m4, m1
punpckhwd m3, m1
psrad m4, 16
psrad m3, 16
psubd m0, m4
psubd m2, m3
paddd m0, [pd_4]
paddd m2, [pd_4]
psrad m0, 3
psrad m2, 3
mov posq, xq
pxor m4, m4
movu m1, [highq + posq]
mova m3, m4
punpcklwd m4, m1
punpckhwd m3, m1
psrad m4, 16
psrad m3, 16
psubd m0, m4
psubd m2, m3
psrad m0, 1
psrad m2, 1
packssdw m0, m2
add posq, ostrideq
movu [outputq + posq], m0
add yq, 1
.looph:
mov posq, lwidthq
imul posq, yq
sub posq, lwidthq
add posq, xq
movu m4, [lowq + posq]
add posq, lwidthq
add posq, lwidthq
movu m1, [lowq + posq]
mova m5, m4
punpcklwd m4, m1
punpckhwd m5, m1
mova m6, m4
mova m7, m5
%if ARCH_X86_64
pmaddwd m4, m8
pmaddwd m5, m8
pmaddwd m6, m9
pmaddwd m7, m9
paddd m4, m11
paddd m5, m11
paddd m6, m11
paddd m7, m11
%else
pmaddwd m4, [factor_p1_n1]
pmaddwd m5, [factor_p1_n1]
pmaddwd m6, [factor_n1_p1]
pmaddwd m7, [factor_n1_p1]
paddd m4, [pd_4]
paddd m5, [pd_4]
paddd m6, [pd_4]
paddd m7, [pd_4]
%endif
psrad m4, 3
psrad m5, 3
psrad m6, 3
psrad m7, 3
sub posq, lwidthq
movu m0, [lowq + posq]
mov posq, hwidthq
imul posq, yq
add posq, xq
movu m1, [highq + posq]
mova m2, m0
punpcklwd m0, m1
punpckhwd m2, m1
mova m1, m0
mova m3, m2
%if ARCH_X86_64
pmaddwd m0, m10
pmaddwd m2, m10
pmaddwd m1, m8
pmaddwd m3, m8
%else
pmaddwd m0, [pw_1]
pmaddwd m2, [pw_1]
pmaddwd m1, [factor_p1_n1]
pmaddwd m3, [factor_p1_n1]
%endif
paddd m0, m4
paddd m2, m5
paddd m1, m6
paddd m3, m7
psrad m0, 1
psrad m2, 1
psrad m1, 1
psrad m3, 1
packssdw m0, m2
packssdw m1, m3
mov posq, ostrideq
imul posq, 2
imul posq, yq
add posq, xq
movu [outputq + posq], m0
add posq, ostrideq
movu [outputq + posq], m1
add yq, 1
cmp yq, heightq
jl .looph
mov posq, lwidthq
imul posq, yq
add posq, xq
movu m0, [lowq + posq]
sub posq, lwidthq
movu m1, [lowq + posq]
mova m2, m0
punpcklwd m0, m1
punpckhwd m2, m1
%if ARCH_X86_64
pmaddwd m0, m13
pmaddwd m2, m13
%else
pmaddwd m0, [factor_p5_p4]
pmaddwd m2, [factor_p5_p4]
%endif
pxor m4, m4
sub posq, lwidthq
movu m1, [lowq + posq]
mova m3, m4
punpcklwd m4, m1
punpckhwd m3, m1
psrad m4, 16
psrad m3, 16
psubd m0, m4
psubd m2, m3
%if ARCH_X86_64
paddd m0, m11
paddd m2, m11
%else
paddd m0, [pd_4]
paddd m2, [pd_4]
%endif
psrad m0, 3
psrad m2, 3
mov posq, hwidthq
imul posq, yq
add posq, xq
pxor m4, m4
movu m1, [highq + posq]
mova m3, m4
punpcklwd m4, m1
punpckhwd m3, m1
psrad m4, 16
psrad m3, 16
paddd m0, m4
paddd m2, m3
psrad m0, 1
psrad m2, 1
packssdw m0, m2
mov posq, ostrideq
imul posq, 2
imul posq, yq
add posq, xq
movu [outputq + posq], m0
mov posq, lwidthq
imul posq, yq
add posq, xq
movu m0, [lowq + posq]
sub posq, lwidthq
movu m1, [lowq + posq]
mova m2, m0
punpcklwd m0, m1
punpckhwd m2, m1
%if ARCH_X86_64
pmaddwd m0, m12
pmaddwd m2, m12
%else
pmaddwd m0, [factor_p11_n4]
pmaddwd m2, [factor_p11_n4]
%endif
pxor m4, m4
sub posq, lwidthq
movu m1, [lowq + posq]
mova m3, m4
punpcklwd m4, m1
punpckhwd m3, m1
psrad m4, 16
psrad m3, 16
paddd m0, m4
paddd m2, m3
%if ARCH_X86_64
paddd m0, m11
paddd m2, m11
%else
paddd m0, [pd_4]
paddd m2, [pd_4]
%endif
psrad m0, 3
psrad m2, 3
mov posq, hwidthq
imul posq, yq
add posq, xq
pxor m4, m4
movu m1, [highq + posq]
mova m3, m4
punpcklwd m4, m1
punpckhwd m3, m1
psrad m4, 16
psrad m3, 16
psubd m0, m4
psubd m2, m3
psrad m0, 1
psrad m2, 1
packssdw m0, m2
mov posq, ostrideq
imul posq, 2
imul posq, yq
add posq, ostrideq
add posq, xq
movu [outputq + posq], m0
add xq, mmsize
cmp xq, widthq
jl .loopw
RET


@@ -1,52 +0,0 @@
/*
* Copyright (c) 2020 Paul B Mahol
*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include <stdint.h>
#include "libavutil/attributes.h"
#include "libavutil/cpu.h"
#include "libavutil/x86/cpu.h"
#include "libavcodec/avcodec.h"
#include "libavcodec/cfhddsp.h"
void ff_cfhd_horiz_filter_sse2(int16_t *output, ptrdiff_t out_stride,
const int16_t *low, ptrdiff_t low_stride,
const int16_t *high, ptrdiff_t high_stride,
int width, int height);
void ff_cfhd_vert_filter_sse2(int16_t *output, ptrdiff_t out_stride,
const int16_t *low, ptrdiff_t low_stride,
const int16_t *high, ptrdiff_t high_stride,
int width, int height);
void ff_cfhd_horiz_filter_clip10_sse2(int16_t *output, const int16_t *low, const int16_t *high, int width, int bpc);
void ff_cfhd_horiz_filter_clip12_sse2(int16_t *output, const int16_t *low, const int16_t *high, int width, int bpc);
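/* Install the SSE2 implementations when available; the clipped
 * horizontal variants only apply to non-Bayer 10- and 12-bit content. */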
av_cold void ff_cfhddsp_init_x86(CFHDDSPContext *c, int depth, int bayer)
{
int cpu_flags = av_get_cpu_flags();
if (EXTERNAL_SSE2(cpu_flags)) {
c->horiz_filter = ff_cfhd_horiz_filter_sse2;
c->vert_filter = ff_cfhd_vert_filter_sse2;
if (depth == 10 && !bayer)
c->horiz_filter_clip = ff_cfhd_horiz_filter_clip10_sse2;
if (depth == 12 && !bayer)
c->horiz_filter_clip = ff_cfhd_horiz_filter_clip12_sse2;
}
}


@@ -1,432 +0,0 @@
;******************************************************************************
;* x86-optimized functions for the CFHD encoder
;* Copyright (c) 2021 Paul B Mahol
;*
;* This file is part of FFmpeg.
;*
;* FFmpeg is free software; you can redistribute it and/or
;* modify it under the terms of the GNU Lesser General Public
;* License as published by the Free Software Foundation; either
;* version 2.1 of the License, or (at your option) any later version.
;*
;* FFmpeg is distributed in the hope that it will be useful,
;* but WITHOUT ANY WARRANTY; without even the implied warranty of
;* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
;* Lesser General Public License for more details.
;*
;* You should have received a copy of the GNU Lesser General Public
;* License along with FFmpeg; if not, write to the Free Software
;* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
;******************************************************************************
%include "libavutil/x86/x86util.asm"
SECTION_RODATA
pw_p1_n1: dw 1, -1, 1, -1, 1, -1, 1, -1
pw_n1_p1: dw -1, 1, -1, 1, -1, 1, -1, 1
pw_p5_n11: dw 5, -11, 5, -11, 5, -11, 5, -11
pw_n5_p11: dw -5, 11, -5, 11, -5, 11, -5, 11
pw_p11_n5: dw 11, -5, 11, -5, 11, -5, 11, -5
pw_n11_p5: dw -11, 5, -11, 5, -11, 5, -11, 5
pd_4: times 4 dd 4
pw_n4: times 8 dw -4
cextern pw_m1
cextern pw_1
cextern pw_4
SECTION .text
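; Forward wavelet decomposition: each lowpass sample is the sum of an
; input pair, each highpass sample a 2/6-tap difference, with
; 5/-11/4/4/-1/-1 border taps at the start and a mirrored set at the end.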
%if ARCH_X86_64
INIT_XMM sse2
cglobal cfhdenc_horiz_filter, 8, 10, 11, input, low, high, istride, lwidth, hwidth, width, y, x, temp
shl istrideq, 1
shl lwidthq, 1
shl hwidthq, 1
mova m7, [pd_4]
mova m8, [pw_1]
mova m9, [pw_m1]
mova m10,[pw_p1_n1]
movsxdifnidn yq, yd
movsxdifnidn widthq, widthd
neg yq
.looph:
movsx xq, word [inputq]
movsx tempq, word [inputq + 2]
add tempq, xq
movd xm0, tempd
packssdw m0, m0
movd tempd, m0
mov word [lowq], tempw
movsx xq, word [inputq]
imul xq, 5
movsx tempq, word [inputq + 2]
imul tempq, -11
add tempq, xq
movsx xq, word [inputq + 4]
imul xq, 4
add tempq, xq
movsx xq, word [inputq + 6]
imul xq, 4
add tempq, xq
movsx xq, word [inputq + 8]
imul xq, -1
add tempq, xq
movsx xq, word [inputq + 10]
imul xq, -1
add tempq, xq
add tempq, 4
sar tempq, 3
movd xm0, tempd
packssdw m0, m0
movd tempd, m0
mov word [highq], tempw
mov xq, 2
.loopw:
movu m0, [inputq + xq * 2]
movu m1, [inputq + xq * 2 + mmsize]
pmaddwd m0, m8
pmaddwd m1, m8
packssdw m0, m1
movu [lowq+xq], m0
movu m2, [inputq + xq * 2 - 4]
movu m3, [inputq + xq * 2 - 4 + mmsize]
pmaddwd m2, m9
pmaddwd m3, m9
movu m0, [inputq + xq * 2 + 4]
movu m1, [inputq + xq * 2 + 4 + mmsize]
pmaddwd m0, m8
pmaddwd m1, m8
paddd m0, m2
paddd m1, m3
paddd m0, m7
paddd m1, m7
psrad m0, 3
psrad m1, 3
movu m5, [inputq + xq * 2 + 0]
movu m6, [inputq + xq * 2 + mmsize]
pmaddwd m5, m10
pmaddwd m6, m10
paddd m0, m5
paddd m1, m6
packssdw m0, m1
movu [highq+xq], m0
add xq, mmsize
cmp xq, widthq
jl .loopw
add lowq, widthq
add highq, widthq
lea inputq, [inputq + widthq * 2]
movsx xq, word [inputq - 4]
movsx tempq, word [inputq - 2]
add tempq, xq
movd xm0, tempd
packssdw m0, m0
movd tempd, m0
mov word [lowq-2], tempw
movsx tempq, word [inputq - 4]
imul tempq, 11
movsx xq, word [inputq - 2]
imul xq, -5
add tempq, xq
movsx xq, word [inputq - 6]
imul xq, -4
add tempq, xq
movsx xq, word [inputq - 8]
imul xq, -4
add tempq, xq
movsx xq, word [inputq - 10]
add tempq, xq
movsx xq, word [inputq - 12]
add tempq, xq
add tempq, 4
sar tempq, 3
movd xm0, tempd
packssdw m0, m0
movd tempd, m0
mov word [highq-2], tempw
sub inputq, widthq
sub inputq, widthq
sub highq, widthq
sub lowq, widthq
add lowq, lwidthq
add highq, hwidthq
add inputq, istrideq
add yq, 1
jl .looph
RET
%endif
%if ARCH_X86_64
INIT_XMM sse2
cglobal cfhdenc_vert_filter, 8, 11, 14, input, low, high, istride, lwidth, hwidth, width, height, x, y, pos
shl istrideq, 1
shl widthd, 1
sub heightd, 2
xor xq, xq
mova m7, [pd_4]
mova m8, [pw_1]
mova m9, [pw_m1]
mova m10,[pw_p1_n1]
mova m11,[pw_n1_p1]
mova m12,[pw_4]
mova m13,[pw_n4]
.loopw:
mov yq, 2
mov posq, xq
movu m0, [inputq + posq]
add posq, istrideq
movu m1, [inputq + posq]
paddsw m0, m1
movu [lowq + xq], m0
mov posq, xq
movu m0, [inputq + posq]
add posq, istrideq
movu m1, [inputq + posq]
add posq, istrideq
movu m2, [inputq + posq]
add posq, istrideq
movu m3, [inputq + posq]
add posq, istrideq
movu m4, [inputq + posq]
add posq, istrideq
movu m5, [inputq + posq]
mova m6, m0
punpcklwd m0, m1
punpckhwd m1, m6
mova m6, m2
punpcklwd m2, m3
punpckhwd m3, m6
mova m6, m4
punpcklwd m4, m5
punpckhwd m5, m6
pmaddwd m0, [pw_p5_n11]
pmaddwd m1, [pw_n11_p5]
pmaddwd m2, m12
pmaddwd m3, m12
pmaddwd m4, m9
pmaddwd m5, m9
paddd m0, m2
paddd m1, m3
paddd m0, m4
paddd m1, m5
paddd m0, m7
paddd m1, m7
psrad m0, 3
psrad m1, 3
packssdw m0, m1
movu [highq + xq], m0
.looph:
mov posq, istrideq
imul posq, yq
add posq, xq
movu m0, [inputq + posq]
add posq, istrideq
movu m1, [inputq + posq]
paddsw m0, m1
mov posq, lwidthq
imul posq, yq
add posq, xq
movu [lowq + posq], m0
add yq, -2
mov posq, istrideq
imul posq, yq
add posq, xq
movu m0, [inputq + posq]
add posq, istrideq
movu m1, [inputq + posq]
add posq, istrideq
movu m2, [inputq + posq]
add posq, istrideq
movu m3, [inputq + posq]
add posq, istrideq
movu m4, [inputq + posq]
add posq, istrideq
movu m5, [inputq + posq]
add yq, 2
mova m6, m0
punpcklwd m0, m1
punpckhwd m1, m6
mova m6, m2
punpcklwd m2, m3
punpckhwd m3, m6
mova m6, m4
punpcklwd m4, m5
punpckhwd m5, m6
pmaddwd m0, m9
pmaddwd m1, m9
pmaddwd m2, m10
pmaddwd m3, m11
pmaddwd m4, m8
pmaddwd m5, m8
paddd m0, m4
paddd m1, m5
paddd m0, m7
paddd m1, m7
psrad m0, 3
psrad m1, 3
paddd m0, m2
paddd m1, m3
packssdw m0, m1
mov posq, hwidthq
imul posq, yq
add posq, xq
movu [highq + posq], m0
add yq, 2
cmp yq, heightq
jl .looph
mov posq, istrideq
imul posq, yq
add posq, xq
movu m0, [inputq + posq]
add posq, istrideq
movu m1, [inputq + posq]
paddsw m0, m1
mov posq, lwidthq
imul posq, yq
add posq, xq
movu [lowq + posq], m0
sub yq, 4
mov posq, istrideq
imul posq, yq
add posq, xq
movu m0, [inputq + posq]
add posq, istrideq
movu m1, [inputq + posq]
add posq, istrideq
movu m2, [inputq + posq]
add posq, istrideq
movu m3, [inputq + posq]
add posq, istrideq
movu m4, [inputq + posq]
add posq, istrideq
movu m5, [inputq + posq]
add yq, 4
mova m6, m0
punpcklwd m0, m1
punpckhwd m1, m6
mova m6, m2
punpcklwd m2, m3
punpckhwd m3, m6
mova m6, m4
punpcklwd m4, m5
punpckhwd m5, m6
pmaddwd m0, m8
pmaddwd m1, m8
pmaddwd m2, m13
pmaddwd m3, m13
pmaddwd m4, [pw_p11_n5]
pmaddwd m5, [pw_n5_p11]
paddd m4, m2
paddd m5, m3
paddd m4, m0
paddd m5, m1
paddd m4, m7
paddd m5, m7
psrad m4, 3
psrad m5, 3
packssdw m4, m5
mov posq, hwidthq
imul posq, yq
add posq, xq
movu [highq + posq], m4
add xq, mmsize
cmp xq, widthq
jl .loopw
RET
%endif


@@ -1,48 +0,0 @@
/*
* Copyright (c) 2021 Paul B Mahol
*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include <stdint.h>
#include "libavutil/attributes.h"
#include "libavutil/cpu.h"
#include "libavutil/x86/cpu.h"
#include "libavcodec/avcodec.h"
#include "libavcodec/cfhdencdsp.h"
void ff_cfhdenc_horiz_filter_sse2(int16_t *input, int16_t *low, int16_t *high,
ptrdiff_t in_stride, ptrdiff_t low_stride,
ptrdiff_t high_stride,
int width, int height);
void ff_cfhdenc_vert_filter_sse2(int16_t *input, int16_t *low, int16_t *high,
ptrdiff_t in_stride, ptrdiff_t low_stride,
ptrdiff_t high_stride,
int width, int height);
av_cold void ff_cfhdencdsp_init_x86(CFHDEncDSPContext *c)
{
int cpu_flags = av_get_cpu_flags();
#if ARCH_X86_64
if (EXTERNAL_SSE2(cpu_flags)) {
c->horiz_filter = ff_cfhdenc_horiz_filter_sse2;
c->vert_filter = ff_cfhdenc_vert_filter_sse2;
}
#endif
}


@@ -1,106 +0,0 @@
/*
* XBM parser
* Copyright (c) 2021 Paul B Mahol
*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
/**
* @file
* XBM parser
*/
#include "libavutil/common.h"
#include "parser.h"
typedef struct XBMParseContext {
ParseContext pc;
uint16_t state16;
int count;
} XBMParseContext;
#define KEY (((uint64_t)'\n' << 56) | ((uint64_t)'#' << 48) | \
((uint64_t)'d' << 40) | ((uint64_t)'e' << 32) | \
((uint64_t)'f' << 24) | ('i' << 16) | ('n' << 8) | \
('e' << 0))
#define END ((';' << 8) | ('\n' << 0))
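/* KEY is an 8-byte sliding-window match for "\n#define", END a 2-byte
 * match for ";\n"; a frame runs from one #define to the closing ';'. */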
static int xbm_init(AVCodecParserContext *s)
{
XBMParseContext *bpc = s->priv_data;
bpc->count = 1;
return 0;
}
static int xbm_parse(AVCodecParserContext *s, AVCodecContext *avctx,
const uint8_t **poutbuf, int *poutbuf_size,
const uint8_t *buf, int buf_size)
{
XBMParseContext *bpc = s->priv_data;
uint64_t state = bpc->pc.state64;
uint16_t state16 = bpc->state16;
int next = END_NOT_FOUND, i = 0;
s->pict_type = AV_PICTURE_TYPE_I;
s->key_frame = 1;
s->duration = 1;
*poutbuf_size = 0;
*poutbuf = NULL;
for (; i < buf_size; i++) {
state = (state << 8) | buf[i];
state16 = (state16 << 8) | buf[i];
if (state == KEY)
bpc->count++;
if (state == KEY && bpc->count == 1) {
next = i - 6;
break;
} else if (state16 == END) {
next = i + 1;
bpc->count = 0;
break;
}
}
bpc->pc.state64 = state;
bpc->state16 = state16;
if (ff_combine_frame(&bpc->pc, next, &buf, &buf_size) < 0) {
*poutbuf = NULL;
*poutbuf_size = 0;
return buf_size;
}
*poutbuf = buf;
*poutbuf_size = buf_size;
return next;
}
AVCodecParser ff_xbm_parser = {
.codec_ids = { AV_CODEC_ID_XBM },
.priv_data_size = sizeof(XBMParseContext),
.parser_init = xbm_init,
.parser_parse = xbm_parse,
.parser_close = ff_parse_close,
};


@@ -1,308 +0,0 @@
/*
* AudioToolbox output device
* Copyright (c) 2020 Thilo Borgmann <thilo.borgmann@mail.de>
*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
/**
* @file
* AudioToolbox output device
* @author Thilo Borgmann <thilo.borgmann@mail.de>
*/
#import <AudioToolbox/AudioToolbox.h>
#include <pthread.h>
#include "libavutil/opt.h"
#include "libavformat/internal.h"
#include "libavutil/internal.h"
#include "avdevice.h"
typedef struct
{
AVClass *class;
AudioQueueBufferRef buffer[2];
pthread_mutex_t buffer_lock[2];
int cur_buf;
AudioQueueRef queue;
int list_devices;
int audio_device_index;
} ATContext;
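// Double-buffering: at_write_packet locks a buffer before filling it,
// and queue_callback unlocks it once AudioToolbox has consumed it.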
static int check_status(AVFormatContext *avctx, OSStatus *status, const char *msg)
{
if (*status != noErr) {
av_log(avctx, AV_LOG_ERROR, "Error: %s (%i)\n", msg, *status);
return 1;
} else {
av_log(avctx, AV_LOG_DEBUG, " OK : %s\n", msg);
return 0;
}
}
static void queue_callback(void* atctx, AudioQueueRef inAQ,
AudioQueueBufferRef inBuffer)
{
// unlock the buffer that has just been consumed
ATContext *ctx = (ATContext*)atctx;
for (int i = 0; i < 2; i++) {
if (inBuffer == ctx->buffer[i]) {
pthread_mutex_unlock(&ctx->buffer_lock[i]);
}
}
}
static av_cold int at_write_header(AVFormatContext *avctx)
{
ATContext *ctx = (ATContext*)avctx->priv_data;
OSStatus err = noErr;
CFStringRef device_UID = NULL;
AudioDeviceID *devices;
int num_devices;
// get devices
UInt32 data_size = 0;
AudioObjectPropertyAddress prop;
prop.mSelector = kAudioHardwarePropertyDevices;
prop.mScope = kAudioObjectPropertyScopeGlobal;
prop.mElement = kAudioObjectPropertyElementMaster;
err = AudioObjectGetPropertyDataSize(kAudioObjectSystemObject, &prop, 0, NULL, &data_size);
if (check_status(avctx, &err, "AudioObjectGetPropertyDataSize devices"))
return AVERROR(EINVAL);
num_devices = data_size / sizeof(AudioDeviceID);
devices = (AudioDeviceID*)av_malloc(data_size);
if (!devices)
return AVERROR(ENOMEM);
err = AudioObjectGetPropertyData(kAudioObjectSystemObject, &prop, 0, NULL, &data_size, devices);
if (check_status(avctx, &err, "AudioObjectGetPropertyData devices")) {
av_freep(&devices);
return AVERROR(EINVAL);
}
// list devices
if (ctx->list_devices) {
CFStringRef device_name = NULL;
prop.mScope = kAudioDevicePropertyScopeInput;
av_log(ctx, AV_LOG_INFO, "CoreAudio devices:\n");
for (UInt32 i = 0; i < num_devices; ++i) {
// UID
data_size = sizeof(device_UID);
prop.mSelector = kAudioDevicePropertyDeviceUID;
err = AudioObjectGetPropertyData(devices[i], &prop, 0, NULL, &data_size, &device_UID);
if (check_status(avctx, &err, "AudioObjectGetPropertyData UID"))
continue;
// name
data_size = sizeof(device_name);
prop.mSelector = kAudioDevicePropertyDeviceNameCFString;
err = AudioObjectGetPropertyData(devices[i], &prop, 0, NULL, &data_size, &device_name);
if (check_status(avctx, &err, "AudioObjecTGetPropertyData name"))
continue;
av_log(ctx, AV_LOG_INFO, "[%d] %30s, %s\n", i,
CFStringGetCStringPtr(device_name, kCFStringEncodingMacRoman),
CFStringGetCStringPtr(device_UID, kCFStringEncodingMacRoman));
}
}
// get user-defined device UID or use default device
// -audio_device_index overrides any URL given
const char *stream_name = avctx->url;
if (stream_name && ctx->audio_device_index == -1) {
sscanf(stream_name, "%d", &ctx->audio_device_index);
}
if (ctx->audio_device_index >= 0) {
// get UID of selected device
data_size = sizeof(device_UID);
prop.mSelector = kAudioDevicePropertyDeviceUID;
err = AudioObjectGetPropertyData(devices[ctx->audio_device_index], &prop, 0, NULL, &data_size, &device_UID);
if (check_status(avctx, &err, "AudioObjecTGetPropertyData UID")) {
av_freep(&devices);
return AVERROR(EINVAL);
}
} else {
// use default device
device_UID = NULL;
}
av_log(ctx, AV_LOG_DEBUG, "stream_name: %s\n", stream_name);
av_log(ctx, AV_LOG_DEBUG, "audio_device_idnex: %i\n", ctx->audio_device_index);
av_log(ctx, AV_LOG_DEBUG, "UID: %s\n", CFStringGetCStringPtr(device_UID, kCFStringEncodingMacRoman));
av_freep(&devices);
// check input stream
if (avctx->nb_streams != 1 || avctx->streams[0]->codecpar->codec_type != AVMEDIA_TYPE_AUDIO) {
av_log(ctx, AV_LOG_ERROR, "Only a single audio stream is supported.\n");
return AVERROR(EINVAL);
}
AVCodecParameters *codecpar = avctx->streams[0]->codecpar;
// audio format
AudioStreamBasicDescription device_format = {0};
device_format.mSampleRate = codecpar->sample_rate;
device_format.mFormatID = kAudioFormatLinearPCM;
device_format.mFormatFlags |= (codecpar->format == AV_SAMPLE_FMT_FLT) ? kLinearPCMFormatFlagIsFloat : 0;
device_format.mFormatFlags |= (codecpar->codec_id == AV_CODEC_ID_PCM_S8) ? kLinearPCMFormatFlagIsSignedInteger : 0;
device_format.mFormatFlags |= (codecpar->codec_id == AV_NE(AV_CODEC_ID_PCM_S16BE, AV_CODEC_ID_PCM_S16LE)) ? kLinearPCMFormatFlagIsSignedInteger : 0;
device_format.mFormatFlags |= (codecpar->codec_id == AV_NE(AV_CODEC_ID_PCM_S24BE, AV_CODEC_ID_PCM_S24LE)) ? kLinearPCMFormatFlagIsSignedInteger : 0;
device_format.mFormatFlags |= (codecpar->codec_id == AV_NE(AV_CODEC_ID_PCM_S32BE, AV_CODEC_ID_PCM_S32LE)) ? kLinearPCMFormatFlagIsSignedInteger : 0;
device_format.mFormatFlags |= (av_sample_fmt_is_planar(codecpar->format)) ? kAudioFormatFlagIsNonInterleaved : 0;
device_format.mFormatFlags |= (codecpar->codec_id == AV_CODEC_ID_PCM_F32BE) ? kAudioFormatFlagIsBigEndian : 0;
device_format.mFormatFlags |= (codecpar->codec_id == AV_CODEC_ID_PCM_S16BE) ? kAudioFormatFlagIsBigEndian : 0;
device_format.mFormatFlags |= (codecpar->codec_id == AV_CODEC_ID_PCM_S24BE) ? kAudioFormatFlagIsBigEndian : 0;
device_format.mFormatFlags |= (codecpar->codec_id == AV_CODEC_ID_PCM_S32BE) ? kAudioFormatFlagIsBigEndian : 0;
device_format.mChannelsPerFrame = codecpar->channels;
device_format.mBitsPerChannel = (codecpar->codec_id == AV_NE(AV_CODEC_ID_PCM_S24BE, AV_CODEC_ID_PCM_S24LE)) ? 24 : (av_get_bytes_per_sample(codecpar->format) << 3);
device_format.mBytesPerFrame = (device_format.mBitsPerChannel >> 3) * device_format.mChannelsPerFrame;
device_format.mFramesPerPacket = 1;
device_format.mBytesPerPacket = device_format.mBytesPerFrame * device_format.mFramesPerPacket;
device_format.mReserved = 0;
av_log(ctx, AV_LOG_DEBUG, "device_format.mSampleRate = %i\n", codecpar->sample_rate);
av_log(ctx, AV_LOG_DEBUG, "device_format.mFormatID = %s\n", "kAudioFormatLinearPCM");
av_log(ctx, AV_LOG_DEBUG, "device_format.mFormatFlags |= %s\n", (codecpar->format == AV_SAMPLE_FMT_FLT) ? "kLinearPCMFormatFlagIsFloat" : "0");
av_log(ctx, AV_LOG_DEBUG, "device_format.mFormatFlags |= %s\n", (codecpar->codec_id == AV_CODEC_ID_PCM_S8) ? "kLinearPCMFormatFlagIsSignedInteger" : "0");
av_log(ctx, AV_LOG_DEBUG, "device_format.mFormatFlags |= %s\n", (codecpar->codec_id == AV_NE(AV_CODEC_ID_PCM_S32BE, AV_CODEC_ID_PCM_S32LE)) ? "kLinearPCMFormatFlagIsSignedInteger" : "0");
av_log(ctx, AV_LOG_DEBUG, "device_format.mFormatFlags |= %s\n", (codecpar->codec_id == AV_NE(AV_CODEC_ID_PCM_S16BE, AV_CODEC_ID_PCM_S16LE)) ? "kLinearPCMFormatFlagIsSignedInteger" : "0");
av_log(ctx, AV_LOG_DEBUG, "device_format.mFormatFlags |= %s\n", (codecpar->codec_id == AV_NE(AV_CODEC_ID_PCM_S24BE, AV_CODEC_ID_PCM_S24LE)) ? "kLinearPCMFormatFlagIsSignedInteger" : "0");
av_log(ctx, AV_LOG_DEBUG, "device_format.mFormatFlags |= %s\n", (av_sample_fmt_is_planar(codecpar->format)) ? "kAudioFormatFlagIsNonInterleaved" : "0");
av_log(ctx, AV_LOG_DEBUG, "device_format.mFormatFlags |= %s\n", (codecpar->codec_id == AV_CODEC_ID_PCM_F32BE) ? "kAudioFormatFlagIsBigEndian" : "0");
av_log(ctx, AV_LOG_DEBUG, "device_format.mFormatFlags |= %s\n", (codecpar->codec_id == AV_CODEC_ID_PCM_S16BE) ? "kAudioFormatFlagIsBigEndian" : "0");
av_log(ctx, AV_LOG_DEBUG, "device_format.mFormatFlags |= %s\n", (codecpar->codec_id == AV_CODEC_ID_PCM_S24BE) ? "kAudioFormatFlagIsBigEndian" : "0");
av_log(ctx, AV_LOG_DEBUG, "device_format.mFormatFlags |= %s\n", (codecpar->codec_id == AV_CODEC_ID_PCM_S32BE) ? "kAudioFormatFlagIsBigEndian" : "0");
av_log(ctx, AV_LOG_DEBUG, "device_format.mFormatFlags == %i\n", device_format.mFormatFlags);
av_log(ctx, AV_LOG_DEBUG, "device_format.mChannelsPerFrame = %i\n", codecpar->channels);
av_log(ctx, AV_LOG_DEBUG, "device_format.mBitsPerChannel = %i\n", av_get_bytes_per_sample(codecpar->format) << 3);
av_log(ctx, AV_LOG_DEBUG, "device_format.mBytesPerFrame = %i\n", (device_format.mBitsPerChannel >> 3) * codecpar->channels);
av_log(ctx, AV_LOG_DEBUG, "device_format.mBytesPerPacket = %i\n", device_format.mBytesPerFrame);
av_log(ctx, AV_LOG_DEBUG, "device_format.mFramesPerPacket = %i\n", 1);
av_log(ctx, AV_LOG_DEBUG, "device_format.mReserved = %i\n", 0);
// create new output queue for the device
err = AudioQueueNewOutput(&device_format, queue_callback, ctx,
NULL, kCFRunLoopCommonModes,
0, &ctx->queue);
if (check_status(avctx, &err, "AudioQueueNewOutput")) {
if (err == kAudioFormatUnsupportedDataFormatError)
av_log(ctx, AV_LOG_ERROR, "Unsupported output format.\n");
return AVERROR(EINVAL);
}
// set user-defined device or leave untouched for default
if (device_UID != NULL) {
err = AudioQueueSetProperty(ctx->queue, kAudioQueueProperty_CurrentDevice, &device_UID, sizeof(device_UID));
if (check_status(avctx, &err, "AudioQueueSetProperty output UID"))
return AVERROR(EINVAL);
}
// start the queue
err = AudioQueueStart(ctx->queue, NULL);
if (check_status(avctx, &err, "AudioQueueStart"))
return AVERROR(EINVAL);
// init the mutexes for double-buffering
pthread_mutex_init(&ctx->buffer_lock[0], NULL);
pthread_mutex_init(&ctx->buffer_lock[1], NULL);
return 0;
}
static int at_write_packet(AVFormatContext *avctx, AVPacket *pkt)
{
ATContext *ctx = (ATContext*)avctx->priv_data;
OSStatus err = noErr;
// use the other buffer
ctx->cur_buf = !ctx->cur_buf;
// lock for writing or wait for the buffer to be available
// will be unlocked by queue callback
pthread_mutex_lock(&ctx->buffer_lock[ctx->cur_buf]);
// (re-)allocate the buffer if it does not exist or its size differs
if (!ctx->buffer[ctx->cur_buf] || ctx->buffer[ctx->cur_buf]->mAudioDataBytesCapacity != pkt->size) {
err = AudioQueueAllocateBuffer(ctx->queue, pkt->size, &ctx->buffer[ctx->cur_buf]);
if (check_status(avctx, &err, "AudioQueueAllocateBuffer")) {
pthread_mutex_unlock(&ctx->buffer_lock[ctx->cur_buf]);
return AVERROR(ENOMEM);
}
}
AudioQueueBufferRef buf = ctx->buffer[ctx->cur_buf];
// copy audio data into buffer and enqueue the buffer
memcpy(buf->mAudioData, pkt->data, buf->mAudioDataBytesCapacity);
buf->mAudioDataByteSize = buf->mAudioDataBytesCapacity;
err = AudioQueueEnqueueBuffer(ctx->queue, buf, 0, NULL);
if (check_status(avctx, &err, "AudioQueueEnqueueBuffer")) {
pthread_mutex_unlock(&ctx->buffer_lock[ctx->cur_buf]);
return AVERROR(EINVAL);
}
return 0;
}
static av_cold int at_write_trailer(AVFormatContext *avctx)
{
ATContext *ctx = (ATContext*)avctx->priv_data;
OSStatus err = noErr;
pthread_mutex_destroy(&ctx->buffer_lock[0]);
pthread_mutex_destroy(&ctx->buffer_lock[1]);
err = AudioQueueFlush(ctx->queue);
check_status(avctx, &err, "AudioQueueFlush");
err = AudioQueueDispose(ctx->queue, true);
check_status(avctx, &err, "AudioQueueDispose");
return 0;
}
static const AVOption options[] = {
{ "list_devices", "list available audio devices", offsetof(ATContext, list_devices), AV_OPT_TYPE_BOOL, {.i64=0}, 0, 1, AV_OPT_FLAG_ENCODING_PARAM },
{ "audio_device_index", "select audio device by index (starts at 0)", offsetof(ATContext, audio_device_index), AV_OPT_TYPE_INT, {.i64 = -1}, -1, INT_MAX, AV_OPT_FLAG_ENCODING_PARAM },
{ NULL },
};
static const AVClass at_class = {
.class_name = "AudioToolbox",
.item_name = av_default_item_name,
.option = options,
.version = LIBAVUTIL_VERSION_INT,
.category = AV_CLASS_CATEGORY_DEVICE_AUDIO_OUTPUT,
};
AVOutputFormat ff_audiotoolbox_muxer = {
.name = "audiotoolbox",
.long_name = NULL_IF_CONFIG_SMALL("AudioToolbox output device"),
.priv_data_size = sizeof(ATContext),
.audio_codec = AV_NE(AV_CODEC_ID_PCM_S16BE, AV_CODEC_ID_PCM_S16LE),
.video_codec = AV_CODEC_ID_NONE,
.write_header = at_write_header,
.write_packet = at_write_packet,
.write_trailer = at_write_trailer,
.flags = AVFMT_NOFILE,
.priv_class = &at_class,
};


@@ -1,332 +0,0 @@
/*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include "libavutil/avassert.h"
#include "libavutil/channel_layout.h"
#include "libavutil/opt.h"
#include "audio.h"
#include "avfilter.h"
#include "internal.h"
enum FilterType {
DC_TYPE,
AC_TYPE,
SQ_TYPE,
PS_TYPE,
NB_TYPES,
};
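/* Anti-denormal signal shapes: a constant DC offset, an alternating-sign
 * offset at Nyquist, a slow square wave (sign flips every 256 samples)
 * and a sparse pulse train (one pulse every 256 samples). */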
typedef struct ADenormContext {
const AVClass *class;
double level;
double level_db;
int type;
int64_t in_samples;
void (*filter)(AVFilterContext *ctx, void *dst,
const void *src, int nb_samples);
} ADenormContext;
static int query_formats(AVFilterContext *ctx)
{
AVFilterFormats *formats = NULL;
AVFilterChannelLayouts *layouts = NULL;
static const enum AVSampleFormat sample_fmts[] = {
AV_SAMPLE_FMT_FLTP, AV_SAMPLE_FMT_DBLP,
AV_SAMPLE_FMT_NONE
};
int ret;
formats = ff_make_format_list(sample_fmts);
if (!formats)
return AVERROR(ENOMEM);
ret = ff_set_common_formats(ctx, formats);
if (ret < 0)
return ret;
layouts = ff_all_channel_counts();
if (!layouts)
return AVERROR(ENOMEM);
ret = ff_set_common_channel_layouts(ctx, layouts);
if (ret < 0)
return ret;
formats = ff_all_samplerates();
return ff_set_common_samplerates(ctx, formats);
}
static void dc_denorm_fltp(AVFilterContext *ctx, void *dstp,
const void *srcp, int nb_samples)
{
ADenormContext *s = ctx->priv;
const float *src = (const float *)srcp;
float *dst = (float *)dstp;
const float dc = s->level;
for (int n = 0; n < nb_samples; n++) {
dst[n] = src[n] + dc;
}
}
static void dc_denorm_dblp(AVFilterContext *ctx, void *dstp,
const void *srcp, int nb_samples)
{
ADenormContext *s = ctx->priv;
const double *src = (const double *)srcp;
double *dst = (double *)dstp;
const double dc = s->level;
for (int n = 0; n < nb_samples; n++) {
dst[n] = src[n] + dc;
}
}
static void ac_denorm_fltp(AVFilterContext *ctx, void *dstp,
const void *srcp, int nb_samples)
{
ADenormContext *s = ctx->priv;
const float *src = (const float *)srcp;
float *dst = (float *)dstp;
const float dc = s->level;
const int64_t N = s->in_samples;
for (int n = 0; n < nb_samples; n++) {
dst[n] = src[n] + dc * (((N + n) & 1) ? -1.f : 1.f);
}
}
static void ac_denorm_dblp(AVFilterContext *ctx, void *dstp,
const void *srcp, int nb_samples)
{
ADenormContext *s = ctx->priv;
const double *src = (const double *)srcp;
double *dst = (double *)dstp;
const double dc = s->level;
const int64_t N = s->in_samples;
for (int n = 0; n < nb_samples; n++) {
dst[n] = src[n] + dc * (((N + n) & 1) ? -1. : 1.);
}
}
static void sq_denorm_fltp(AVFilterContext *ctx, void *dstp,
const void *srcp, int nb_samples)
{
ADenormContext *s = ctx->priv;
const float *src = (const float *)srcp;
float *dst = (float *)dstp;
const float dc = s->level;
const int64_t N = s->in_samples;
for (int n = 0; n < nb_samples; n++) {
dst[n] = src[n] + dc * ((((N + n) >> 8) & 1) ? -1.f : 1.f);
}
}
static void sq_denorm_dblp(AVFilterContext *ctx, void *dstp,
const void *srcp, int nb_samples)
{
ADenormContext *s = ctx->priv;
const double *src = (const double *)srcp;
double *dst = (double *)dstp;
const double dc = s->level;
const int64_t N = s->in_samples;
for (int n = 0; n < nb_samples; n++) {
dst[n] = src[n] + dc * ((((N + n) >> 8) & 1) ? -1. : 1.);
}
}
static void ps_denorm_fltp(AVFilterContext *ctx, void *dstp,
const void *srcp, int nb_samples)
{
ADenormContext *s = ctx->priv;
const float *src = (const float *)srcp;
float *dst = (float *)dstp;
const float dc = s->level;
const int64_t N = s->in_samples;
for (int n = 0; n < nb_samples; n++) {
dst[n] = src[n] + dc * (((N + n) & 255) ? 0.f : 1.f);
}
}
static void ps_denorm_dblp(AVFilterContext *ctx, void *dstp,
const void *srcp, int nb_samples)
{
ADenormContext *s = ctx->priv;
const double *src = (const double *)srcp;
double *dst = (double *)dstp;
const double dc = s->level;
const int64_t N = s->in_samples;
for (int n = 0; n < nb_samples; n++) {
dst[n] = src[n] + dc * (((N + n) & 255) ? 0. : 1.);
}
}
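
The four generators above differ only in the multiplier applied to the constant dc offset: DC holds it steady, AC flips sign every sample, "square" flips sign every 256 samples, and "pulse" emits it once per 256 samples. A standalone sketch (not part of the filter) that prints the multipliers makes the patterns easy to inspect:

#include <stdio.h>

int main(void)
{
    const long long idx[] = { 0, 1, 2, 255, 256, 257, 512 };
    for (int k = 0; k < 7; k++) {
        long long i = idx[k];
        double ac = (i & 1) ? -1. : 1.;          /* flips every sample      */
        double sq = ((i >> 8) & 1) ? -1. : 1.;   /* flips every 256 samples */
        double ps = (i & 255) ? 0. : 1.;         /* one pulse per 256       */
        printf("i=%lld ac=%+g sq=%+g ps=%g\n", i, ac, sq, ps);
    }
    return 0;
}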
static int config_output(AVFilterLink *outlink)
{
AVFilterContext *ctx = outlink->src;
ADenormContext *s = ctx->priv;
switch (s->type) {
case DC_TYPE:
switch (outlink->format) {
case AV_SAMPLE_FMT_FLTP: s->filter = dc_denorm_fltp; break;
case AV_SAMPLE_FMT_DBLP: s->filter = dc_denorm_dblp; break;
}
break;
case AC_TYPE:
switch (outlink->format) {
case AV_SAMPLE_FMT_FLTP: s->filter = ac_denorm_fltp; break;
case AV_SAMPLE_FMT_DBLP: s->filter = ac_denorm_dblp; break;
}
break;
case SQ_TYPE:
switch (outlink->format) {
case AV_SAMPLE_FMT_FLTP: s->filter = sq_denorm_fltp; break;
case AV_SAMPLE_FMT_DBLP: s->filter = sq_denorm_dblp; break;
}
break;
case PS_TYPE:
switch (outlink->format) {
case AV_SAMPLE_FMT_FLTP: s->filter = ps_denorm_fltp; break;
case AV_SAMPLE_FMT_DBLP: s->filter = ps_denorm_dblp; break;
}
break;
default:
av_assert0(0);
}
return 0;
}
typedef struct ThreadData {
AVFrame *in, *out;
} ThreadData;
static int filter_channels(AVFilterContext *ctx, void *arg, int jobnr, int nb_jobs)
{
ADenormContext *s = ctx->priv;
ThreadData *td = arg;
AVFrame *out = td->out;
AVFrame *in = td->in;
const int start = (in->channels * jobnr) / nb_jobs;
const int end = (in->channels * (jobnr+1)) / nb_jobs;
for (int ch = start; ch < end; ch++) {
s->filter(ctx, out->extended_data[ch],
in->extended_data[ch],
in->nb_samples);
}
return 0;
}
static int filter_frame(AVFilterLink *inlink, AVFrame *in)
{
AVFilterContext *ctx = inlink->dst;
ADenormContext *s = ctx->priv;
AVFilterLink *outlink = ctx->outputs[0];
ThreadData td;
AVFrame *out;
if (av_frame_is_writable(in)) {
out = in;
} else {
out = ff_get_audio_buffer(outlink, in->nb_samples);
if (!out) {
av_frame_free(&in);
return AVERROR(ENOMEM);
}
av_frame_copy_props(out, in);
}
s->level = exp(s->level_db / 20. * M_LN10);
td.in = in; td.out = out;
ctx->internal->execute(ctx, filter_channels, &td, NULL, FFMIN(inlink->channels,
ff_filter_get_nb_threads(ctx)));
s->in_samples += in->nb_samples;
if (out != in)
av_frame_free(&in);
return ff_filter_frame(outlink, out);
}
static int process_command(AVFilterContext *ctx, const char *cmd, const char *args,
char *res, int res_len, int flags)
{
AVFilterLink *outlink = ctx->outputs[0];
int ret;
ret = ff_filter_process_command(ctx, cmd, args, res, res_len, flags);
if (ret < 0)
return ret;
return config_output(outlink);
}
static const AVFilterPad adenorm_inputs[] = {
{
.name = "default",
.type = AVMEDIA_TYPE_AUDIO,
.filter_frame = filter_frame,
},
{ NULL }
};
static const AVFilterPad adenorm_outputs[] = {
{
.name = "default",
.type = AVMEDIA_TYPE_AUDIO,
.config_props = config_output,
},
{ NULL }
};
#define OFFSET(x) offsetof(ADenormContext, x)
#define FLAGS AV_OPT_FLAG_AUDIO_PARAM|AV_OPT_FLAG_FILTERING_PARAM|AV_OPT_FLAG_RUNTIME_PARAM
static const AVOption adenorm_options[] = {
{ "level", "set level", OFFSET(level_db), AV_OPT_TYPE_DOUBLE, {.dbl=-351}, -451, -90, FLAGS },
{ "type", "set type", OFFSET(type), AV_OPT_TYPE_INT, {.i64=DC_TYPE}, 0, NB_TYPES-1, FLAGS, "type" },
{ "dc", NULL, 0, AV_OPT_TYPE_CONST, {.i64=DC_TYPE}, 0, 0, FLAGS, "type"},
{ "ac", NULL, 0, AV_OPT_TYPE_CONST, {.i64=AC_TYPE}, 0, 0, FLAGS, "type"},
{ "square",NULL, 0, AV_OPT_TYPE_CONST, {.i64=SQ_TYPE}, 0, 0, FLAGS, "type"},
{ "pulse", NULL, 0, AV_OPT_TYPE_CONST, {.i64=PS_TYPE}, 0, 0, FLAGS, "type"},
{ NULL }
};
AVFILTER_DEFINE_CLASS(adenorm);
AVFilter ff_af_adenorm = {
.name = "adenorm",
.description = NULL_IF_CONFIG_SMALL("Remedy denormals by adding extremely low-level noise."),
.query_formats = query_formats,
.priv_size = sizeof(ADenormContext),
.inputs = adenorm_inputs,
.outputs = adenorm_outputs,
.priv_class = &adenorm_class,
.process_command = process_command,
.flags = AVFILTER_FLAG_SUPPORT_TIMELINE_GENERIC |
AVFILTER_FLAG_SLICE_THREADS,
};
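
Background for the filter above: subnormal (denormal) floats are processed far more slowly on many CPUs, and recursive filters fed silence tend to decay into that range. The default offset of -351 dB is about 2.8e-18, inaudible but far above the float subnormal threshold (~1.2e-38). A minimal standalone demo of the idea, assuming a C99 math.h:

#include <math.h>
#include <stdio.h>

int main(void)
{
    float tiny   = 1e-40f;                           /* subnormal: below FLT_MIN */
    float offset = (float)exp(-351. / 20. * M_LN10); /* same dB formula as the filter */
    printf("tiny is %s\n",
           fpclassify(tiny) == FP_SUBNORMAL ? "subnormal" : "normal");
    printf("tiny+offset is %s\n",
           fpclassify(tiny + offset) == FP_SUBNORMAL ? "subnormal" : "normal");
    return 0;
}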

View File

@@ -1,317 +0,0 @@
/*
* Copyright (c) Markus Schmidt and Christian Holschuh
*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include "libavutil/opt.h"
#include "avfilter.h"
#include "internal.h"
#include "audio.h"
typedef struct ChannelParams {
double blend_old, drive_old;
double rdrive, rbdr, kpa, kpb, kna, knb, ap,
an, imr, kc, srct, sq, pwrq;
double prev_med, prev_out;
double hp[5], lp[5];
double hw[4][2], lw[2][2];
} ChannelParams;
typedef struct AExciterContext {
const AVClass *class;
double level_in;
double level_out;
double amount;
double drive;
double blend;
double freq;
double ceil;
int listen;
ChannelParams *cp;
} AExciterContext;
#define OFFSET(x) offsetof(AExciterContext, x)
#define A AV_OPT_FLAG_AUDIO_PARAM|AV_OPT_FLAG_FILTERING_PARAM|AV_OPT_FLAG_RUNTIME_PARAM
static const AVOption aexciter_options[] = {
{ "level_in", "set level in", OFFSET(level_in), AV_OPT_TYPE_DOUBLE, {.dbl=1}, 0, 64, A },
{ "level_out", "set level out", OFFSET(level_out), AV_OPT_TYPE_DOUBLE, {.dbl=1}, 0, 64, A },
{ "amount", "set amount", OFFSET(amount), AV_OPT_TYPE_DOUBLE, {.dbl=1}, 0, 64, A },
{ "drive", "set harmonics", OFFSET(drive), AV_OPT_TYPE_DOUBLE, {.dbl=8.5}, 0.1, 10, A },
{ "blend", "set blend harmonics", OFFSET(blend), AV_OPT_TYPE_DOUBLE, {.dbl=0}, -10, 10, A },
{ "freq", "set scope", OFFSET(freq), AV_OPT_TYPE_DOUBLE, {.dbl=7500}, 2000, 12000, A },
{ "ceil", "set ceiling", OFFSET(ceil), AV_OPT_TYPE_DOUBLE, {.dbl=9999}, 9999, 20000, A },
{ "listen", "enable listen mode", OFFSET(listen), AV_OPT_TYPE_BOOL, {.i64=0}, 0, 1, A },
{ NULL }
};
AVFILTER_DEFINE_CLASS(aexciter);
static inline double M(double x)
{
return (fabs(x) > 0.00000001) ? x : 0.0;
}
static inline double D(double x)
{
x = fabs(x);
return (x > 0.00000001) ? sqrt(x) : 0.0;
}
static void set_params(ChannelParams *p,
double blend, double drive,
double srate, double freq,
double ceil)
{
double a0, a1, a2, b0, b1, b2, w0, alpha;
p->rdrive = 12.0 / drive;
p->rbdr = p->rdrive / (10.5 - blend) * 780.0 / 33.0;
p->kpa = D(2.0 * (p->rdrive*p->rdrive) - 1.0) + 1.0;
p->kpb = (2.0 - p->kpa) / 2.0;
p->ap = ((p->rdrive*p->rdrive) - p->kpa + 1.0) / 2.0;
p->kc = p->kpa / D(2.0 * D(2.0 * (p->rdrive*p->rdrive) - 1.0) - 2.0 * p->rdrive*p->rdrive);
p->srct = (0.1 * srate) / (0.1 * srate + 1.0);
p->sq = p->kc*p->kc + 1.0;
p->knb = -1.0 * p->rbdr / D(p->sq);
p->kna = 2.0 * p->kc * p->rbdr / D(p->sq);
p->an = p->rbdr*p->rbdr / p->sq;
p->imr = 2.0 * p->knb + D(2.0 * p->kna + 4.0 * p->an - 1.0);
p->pwrq = 2.0 / (p->imr + 1.0);
w0 = 2 * M_PI * freq / srate;
alpha = sin(w0) / (2. * 0.707);
a0 = 1 + alpha;
a1 = -2 * cos(w0);
a2 = 1 - alpha;
b0 = (1 + cos(w0)) / 2;
b1 = -(1 + cos(w0));
b2 = (1 + cos(w0)) / 2;
p->hp[0] =-a1 / a0;
p->hp[1] =-a2 / a0;
p->hp[2] = b0 / a0;
p->hp[3] = b1 / a0;
p->hp[4] = b2 / a0;
w0 = 2 * M_PI * ceil / srate;
alpha = sin(w0) / (2. * 0.707);
a0 = 1 + alpha;
a1 = -2 * cos(w0);
a2 = 1 - alpha;
b0 = (1 - cos(w0)) / 2;
b1 = 1 - cos(w0);
b2 = (1 - cos(w0)) / 2;
p->lp[0] =-a1 / a0;
p->lp[1] =-a2 / a0;
p->lp[2] = b0 / a0;
p->lp[3] = b1 / a0;
p->lp[4] = b2 / a0;
}
static double bprocess(double in, const double *const c,
double *w1, double *w2)
{
double out = c[2] * in + *w1;
*w1 = c[3] * in + *w2 + c[0] * out;
*w2 = c[4] * in + c[1] * out;
return out;
}
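
bprocess() is a biquad in transposed direct form II: two state words per section, with the denominator coefficients stored pre-negated by set_params() (hp[0] = -a1/a0), so the recurrence adds c[0]*out instead of subtracting. A standalone sketch checking it against the textbook direct form I difference equation, using arbitrary stable test coefficients:

#include <stdio.h>

static double bprocess_copy(double in, const double *c, double *w1, double *w2)
{
    double out = c[2] * in + *w1;
    *w1 = c[3] * in + *w2 + c[0] * out;
    *w2 = c[4] * in + c[1] * out;
    return out;
}

int main(void)
{
    /* arbitrary stable test coefficients: { -a1, -a2, b0, b1, b2 } */
    const double c[5] = { 1.1, -0.3, 0.2, 0.4, 0.2 };
    double w1 = 0., w2 = 0., x1 = 0., x2 = 0., p1 = 0., p2 = 0.;
    for (int n = 0; n < 5; n++) {
        double x  = n == 0;  /* impulse input */
        double y  = bprocess_copy(x, c, &w1, &w2);
        double yd = c[2]*x + c[3]*x1 + c[4]*x2 + c[0]*p1 + c[1]*p2;
        printf("n=%d tdf2=%g df1=%g\n", n, y, yd);  /* identical columns */
        x2 = x1; x1 = x; p2 = p1; p1 = y;
    }
    return 0;
}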
static double distortion_process(AExciterContext *s, ChannelParams *p, double in)
{
double proc = in, med;
proc = bprocess(proc, p->hp, &p->hw[0][0], &p->hw[0][1]);
proc = bprocess(proc, p->hp, &p->hw[1][0], &p->hw[1][1]);
if (proc >= 0.0) {
med = (D(p->ap + proc * (p->kpa - proc)) + p->kpb) * p->pwrq;
} else {
med = (D(p->an - proc * (p->kna + proc)) + p->knb) * p->pwrq * -1.0;
}
proc = p->srct * (med - p->prev_med + p->prev_out);
p->prev_med = M(med);
p->prev_out = M(proc);
proc = bprocess(proc, p->hp, &p->hw[2][0], &p->hw[2][1]);
proc = bprocess(proc, p->hp, &p->hw[3][0], &p->hw[3][1]);
if (s->ceil >= 10000.) {
proc = bprocess(proc, p->lp, &p->lw[0][0], &p->lw[0][1]);
proc = bprocess(proc, p->lp, &p->lw[1][0], &p->lw[1][1]);
}
return proc;
}
static int filter_frame(AVFilterLink *inlink, AVFrame *in)
{
AVFilterContext *ctx = inlink->dst;
AExciterContext *s = ctx->priv;
AVFilterLink *outlink = ctx->outputs[0];
AVFrame *out;
const double *src = (const double *)in->data[0];
const double level_in = s->level_in;
const double level_out = s->level_out;
const double amount = s->amount;
const double listen = 1.0 - s->listen;
double *dst;
if (av_frame_is_writable(in)) {
out = in;
} else {
out = ff_get_audio_buffer(inlink, in->nb_samples);
if (!out) {
av_frame_free(&in);
return AVERROR(ENOMEM);
}
av_frame_copy_props(out, in);
}
dst = (double *)out->data[0];
for (int n = 0; n < in->nb_samples; n++) {
for (int c = 0; c < inlink->channels; c++) {
double sample = src[c] * level_in;
sample = distortion_process(s, &s->cp[c], sample);
sample = sample * amount + listen * src[c];
sample *= level_out;
if (ctx->is_disabled)
dst[c] = src[c];
else
dst[c] = sample;
}
src += inlink->channels;
dst += inlink->channels;
}
if (in != out)
av_frame_free(&in);
return ff_filter_frame(outlink, out);
}
static int query_formats(AVFilterContext *ctx)
{
AVFilterFormats *formats;
AVFilterChannelLayouts *layouts;
static const enum AVSampleFormat sample_fmts[] = {
AV_SAMPLE_FMT_DBL,
AV_SAMPLE_FMT_NONE
};
int ret;
layouts = ff_all_channel_counts();
if (!layouts)
return AVERROR(ENOMEM);
ret = ff_set_common_channel_layouts(ctx, layouts);
if (ret < 0)
return ret;
formats = ff_make_format_list(sample_fmts);
if (!formats)
return AVERROR(ENOMEM);
ret = ff_set_common_formats(ctx, formats);
if (ret < 0)
return ret;
formats = ff_all_samplerates();
if (!formats)
return AVERROR(ENOMEM);
return ff_set_common_samplerates(ctx, formats);
}
static av_cold void uninit(AVFilterContext *ctx)
{
AExciterContext *s = ctx->priv;
av_freep(&s->cp);
}
static int config_input(AVFilterLink *inlink)
{
AVFilterContext *ctx = inlink->dst;
AExciterContext *s = ctx->priv;
if (!s->cp)
s->cp = av_calloc(inlink->channels, sizeof(*s->cp));
if (!s->cp)
return AVERROR(ENOMEM);
for (int i = 0; i < inlink->channels; i++)
set_params(&s->cp[i], s->blend, s->drive, inlink->sample_rate,
s->freq, s->ceil);
return 0;
}
static int process_command(AVFilterContext *ctx, const char *cmd, const char *args,
char *res, int res_len, int flags)
{
AVFilterLink *inlink = ctx->inputs[0];
int ret;
ret = ff_filter_process_command(ctx, cmd, args, res, res_len, flags);
if (ret < 0)
return ret;
return config_input(inlink);
}
static const AVFilterPad avfilter_af_aexciter_inputs[] = {
{
.name = "default",
.type = AVMEDIA_TYPE_AUDIO,
.config_props = config_input,
.filter_frame = filter_frame,
},
{ NULL }
};
static const AVFilterPad avfilter_af_aexciter_outputs[] = {
{
.name = "default",
.type = AVMEDIA_TYPE_AUDIO,
},
{ NULL }
};
AVFilter ff_af_aexciter = {
.name = "aexciter",
.description = NULL_IF_CONFIG_SMALL("Enhance high frequency part of audio."),
.priv_size = sizeof(AExciterContext),
.priv_class = &aexciter_class,
.uninit = uninit,
.query_formats = query_formats,
.inputs = avfilter_af_aexciter_inputs,
.outputs = avfilter_af_aexciter_outputs,
.process_command = process_command,
.flags = AVFILTER_FLAG_SUPPORT_TIMELINE_INTERNAL,
};

View File

@@ -1,427 +0,0 @@
/*
* Copyright (c) Paul B Mahol
* Copyright (c) Laurent de Soras, 2005
*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include "libavutil/channel_layout.h"
#include "libavutil/ffmath.h"
#include "libavutil/opt.h"
#include "avfilter.h"
#include "audio.h"
#include "formats.h"
#define NB_COEFS 16
typedef struct AFreqShift {
const AVClass *class;
double shift;
double level;
double cd[NB_COEFS];
float cf[NB_COEFS];
int64_t in_samples;
AVFrame *i1, *o1;
AVFrame *i2, *o2;
void (*filter_channel)(AVFilterContext *ctx,
int channel,
AVFrame *in, AVFrame *out);
} AFreqShift;
static int query_formats(AVFilterContext *ctx)
{
AVFilterFormats *formats = NULL;
AVFilterChannelLayouts *layouts = NULL;
static const enum AVSampleFormat sample_fmts[] = {
AV_SAMPLE_FMT_FLTP,
AV_SAMPLE_FMT_DBLP,
AV_SAMPLE_FMT_NONE
};
int ret;
formats = ff_make_format_list(sample_fmts);
if (!formats)
return AVERROR(ENOMEM);
ret = ff_set_common_formats(ctx, formats);
if (ret < 0)
return ret;
layouts = ff_all_channel_counts();
if (!layouts)
return AVERROR(ENOMEM);
ret = ff_set_common_channel_layouts(ctx, layouts);
if (ret < 0)
return ret;
formats = ff_all_samplerates();
return ff_set_common_samplerates(ctx, formats);
}
#define PFILTER(name, type, sin, cos, cc) \
static void pfilter_channel_## name(AVFilterContext *ctx, \
int ch, \
AVFrame *in, AVFrame *out) \
{ \
AFreqShift *s = ctx->priv; \
const int nb_samples = in->nb_samples; \
const type *src = (const type *)in->extended_data[ch]; \
type *dst = (type *)out->extended_data[ch]; \
type *i1 = (type *)s->i1->extended_data[ch]; \
type *o1 = (type *)s->o1->extended_data[ch]; \
type *i2 = (type *)s->i2->extended_data[ch]; \
type *o2 = (type *)s->o2->extended_data[ch]; \
const type *c = s->cc; \
const type level = s->level; \
type shift = s->shift * M_PI; \
type cos_theta = cos(shift); \
type sin_theta = sin(shift); \
\
for (int n = 0; n < nb_samples; n++) { \
type xn1 = src[n], xn2 = src[n]; \
type I, Q; \
\
for (int j = 0; j < NB_COEFS / 2; j++) { \
I = c[j] * (xn1 + o2[j]) - i2[j]; \
i2[j] = i1[j]; \
i1[j] = xn1; \
o2[j] = o1[j]; \
o1[j] = I; \
xn1 = I; \
} \
\
for (int j = NB_COEFS / 2; j < NB_COEFS; j++) { \
Q = c[j] * (xn2 + o2[j]) - i2[j]; \
i2[j] = i1[j]; \
i1[j] = xn2; \
o2[j] = o1[j]; \
o1[j] = Q; \
xn2 = Q; \
} \
Q = o2[NB_COEFS - 1]; \
\
dst[n] = (I * cos_theta - Q * sin_theta) * level; \
} \
}
PFILTER(flt, float, sin, cos, cf)
PFILTER(dbl, double, sin, cos, cd)
#define FFILTER(name, type, sin, cos, fmod, cc) \
static void ffilter_channel_## name(AVFilterContext *ctx, \
int ch, \
AVFrame *in, AVFrame *out) \
{ \
AFreqShift *s = ctx->priv; \
const int nb_samples = in->nb_samples; \
const type *src = (const type *)in->extended_data[ch]; \
type *dst = (type *)out->extended_data[ch]; \
type *i1 = (type *)s->i1->extended_data[ch]; \
type *o1 = (type *)s->o1->extended_data[ch]; \
type *i2 = (type *)s->i2->extended_data[ch]; \
type *o2 = (type *)s->o2->extended_data[ch]; \
const type *c = s->cc; \
const type level = s->level; \
type ts = 1. / in->sample_rate; \
type shift = s->shift; \
int64_t N = s->in_samples; \
\
for (int n = 0; n < nb_samples; n++) { \
type xn1 = src[n], xn2 = src[n]; \
type I, Q, theta; \
\
for (int j = 0; j < NB_COEFS / 2; j++) { \
I = c[j] * (xn1 + o2[j]) - i2[j]; \
i2[j] = i1[j]; \
i1[j] = xn1; \
o2[j] = o1[j]; \
o1[j] = I; \
xn1 = I; \
} \
\
for (int j = NB_COEFS / 2; j < NB_COEFS; j++) { \
Q = c[j] * (xn2 + o2[j]) - i2[j]; \
i2[j] = i1[j]; \
i1[j] = xn2; \
o2[j] = o1[j]; \
o1[j] = Q; \
xn2 = Q; \
} \
Q = o2[NB_COEFS - 1]; \
\
theta = 2. * M_PI * fmod(shift * (N + n) * ts, 1.); \
dst[n] = (I * cos(theta) - Q * sin(theta)) * level; \
} \
}
FFILTER(flt, float, sinf, cosf, fmodf, cf)
FFILTER(dbl, double, sin, cos, fmod, cd)
static void compute_transition_param(double *K, double *Q, double transition)
{
double kksqrt, e, e2, e4, k, q;
k = tan((1. - transition * 2.) * M_PI / 4.);
k *= k;
kksqrt = pow(1 - k * k, 0.25);
e = 0.5 * (1. - kksqrt) / (1. + kksqrt);
e2 = e * e;
e4 = e2 * e2;
q = e * (1. + e4 * (2. + e4 * (15. + 150. * e4)));
*Q = q;
*K = k;
}
static double ipowp(double x, int64_t n)
{
double z = 1.;
while (n != 0) {
if (n & 1)
z *= x;
n >>= 1;
x *= x;
}
return z;
}
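
ipowp() is plain binary (square-and-multiply) exponentiation, so it needs only O(log n) multiplications. A quick check, assuming it is compiled together with the function above:

#include <assert.h>

static void ipowp_check(void)
{
    assert(ipowp(0.5, 10) == 1.0 / 1024.0);  /* exact: powers of two */
    assert(ipowp(3.0, 0)  == 1.0);           /* empty product */
}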
static double compute_acc_num(double q, int order, int c)
{
int64_t i = 0;
int j = 1;
double acc = 0.;
double q_ii1;
do {
q_ii1 = ipowp(q, i * (i + 1));
q_ii1 *= sin((i * 2 + 1) * c * M_PI / order) * j;
acc += q_ii1;
j = -j;
i++;
} while (fabs(q_ii1) > 1e-100);
return acc;
}
static double compute_acc_den(double q, int order, int c)
{
int64_t i = 1;
int j = -1;
double acc = 0.;
double q_i2;
do {
q_i2 = ipowp(q, i * i);
q_i2 *= cos(i * 2 * c * M_PI / order) * j;
acc += q_i2;
j = -j;
i++;
} while (fabs(q_i2) > 1e-100);
return acc;
}
static double compute_coef(int index, double k, double q, int order)
{
const int c = index + 1;
const double num = compute_acc_num(q, order, c) * pow(q, 0.25);
const double den = compute_acc_den(q, order, c) + 0.5;
const double ww = num / den;
const double wwsq = ww * ww;
const double x = sqrt((1 - wwsq * k) * (1 - wwsq / k)) / (1 + wwsq);
const double coef = (1 - x) / (1 + x);
return coef;
}
static void compute_coefs(double *coef_arrd, float *coef_arrf, int nbr_coefs, double transition)
{
const int order = nbr_coefs * 2 + 1;
double k, q;
compute_transition_param(&k, &q, transition);
for (int n = 0; n < nbr_coefs; n++) {
const int idx = (n / 2) + (n & 1) * nbr_coefs / 2;
coef_arrd[idx] = compute_coef(n, k, q, order);
coef_arrf[idx] = coef_arrd[idx];
}
}
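
The index shuffle in compute_coefs() sends even-numbered coefficients to slots 0..NB_COEFS/2-1 (the in-phase allpass chain) and odd-numbered ones to the upper half (the quadrature chain), matching the two loops in the PFILTER/FFILTER macros above. A standalone sketch of the mapping for nbr_coefs = 16:

#include <stdio.h>

int main(void)
{
    const int nbr_coefs = 16;
    for (int n = 0; n < nbr_coefs; n++)  /* even n -> 0..7, odd n -> 8..15 */
        printf("coef %2d -> slot %2d\n", n, (n / 2) + (n & 1) * nbr_coefs / 2);
    return 0;
}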
static int config_input(AVFilterLink *inlink)
{
AVFilterContext *ctx = inlink->dst;
AFreqShift *s = ctx->priv;
compute_coefs(s->cd, s->cf, NB_COEFS, 2. * 20. / inlink->sample_rate);
s->i1 = ff_get_audio_buffer(inlink, NB_COEFS);
s->o1 = ff_get_audio_buffer(inlink, NB_COEFS);
s->i2 = ff_get_audio_buffer(inlink, NB_COEFS);
s->o2 = ff_get_audio_buffer(inlink, NB_COEFS);
if (!s->i1 || !s->o1 || !s->i2 || !s->o2)
return AVERROR(ENOMEM);
if (inlink->format == AV_SAMPLE_FMT_DBLP) {
if (!strcmp(ctx->filter->name, "afreqshift"))
s->filter_channel = ffilter_channel_dbl;
else
s->filter_channel = pfilter_channel_dbl;
} else {
if (!strcmp(ctx->filter->name, "afreqshift"))
s->filter_channel = ffilter_channel_flt;
else
s->filter_channel = pfilter_channel_flt;
}
return 0;
}
typedef struct ThreadData {
AVFrame *in, *out;
} ThreadData;
static int filter_channels(AVFilterContext *ctx, void *arg, int jobnr, int nb_jobs)
{
AFreqShift *s = ctx->priv;
ThreadData *td = arg;
AVFrame *out = td->out;
AVFrame *in = td->in;
const int start = (in->channels * jobnr) / nb_jobs;
const int end = (in->channels * (jobnr+1)) / nb_jobs;
for (int ch = start; ch < end; ch++)
s->filter_channel(ctx, ch, in, out);
return 0;
}
static int filter_frame(AVFilterLink *inlink, AVFrame *in)
{
AVFilterContext *ctx = inlink->dst;
AVFilterLink *outlink = ctx->outputs[0];
AFreqShift *s = ctx->priv;
AVFrame *out;
ThreadData td;
if (av_frame_is_writable(in)) {
out = in;
} else {
out = ff_get_audio_buffer(outlink, in->nb_samples);
if (!out) {
av_frame_free(&in);
return AVERROR(ENOMEM);
}
av_frame_copy_props(out, in);
}
td.in = in; td.out = out;
ctx->internal->execute(ctx, filter_channels, &td, NULL, FFMIN(inlink->channels,
ff_filter_get_nb_threads(ctx)));
s->in_samples += in->nb_samples;
if (out != in)
av_frame_free(&in);
return ff_filter_frame(outlink, out);
}
static av_cold void uninit(AVFilterContext *ctx)
{
AFreqShift *s = ctx->priv;
av_frame_free(&s->i1);
av_frame_free(&s->o1);
av_frame_free(&s->i2);
av_frame_free(&s->o2);
}
#define OFFSET(x) offsetof(AFreqShift, x)
#define FLAGS AV_OPT_FLAG_AUDIO_PARAM|AV_OPT_FLAG_FILTERING_PARAM|AV_OPT_FLAG_RUNTIME_PARAM
static const AVOption afreqshift_options[] = {
{ "shift", "set frequency shift", OFFSET(shift), AV_OPT_TYPE_DOUBLE, {.dbl=0}, -INT_MAX, INT_MAX, FLAGS },
{ "level", "set output level", OFFSET(level), AV_OPT_TYPE_DOUBLE, {.dbl=1}, 0.0, 1.0, FLAGS },
{ NULL }
};
AVFILTER_DEFINE_CLASS(afreqshift);
static const AVFilterPad inputs[] = {
{
.name = "default",
.type = AVMEDIA_TYPE_AUDIO,
.filter_frame = filter_frame,
.config_props = config_input,
},
{ NULL }
};
static const AVFilterPad outputs[] = {
{
.name = "default",
.type = AVMEDIA_TYPE_AUDIO,
},
{ NULL }
};
AVFilter ff_af_afreqshift = {
.name = "afreqshift",
.description = NULL_IF_CONFIG_SMALL("Apply frequency shifting to input audio."),
.query_formats = query_formats,
.priv_size = sizeof(AFreqShift),
.priv_class = &afreqshift_class,
.uninit = uninit,
.inputs = inputs,
.outputs = outputs,
.process_command = ff_filter_process_command,
.flags = AVFILTER_FLAG_SUPPORT_TIMELINE_GENERIC |
AVFILTER_FLAG_SLICE_THREADS,
};
static const AVOption aphaseshift_options[] = {
{ "shift", "set phase shift", OFFSET(shift), AV_OPT_TYPE_DOUBLE, {.dbl=0}, -1.0, 1.0, FLAGS },
{ "level", "set output level",OFFSET(level), AV_OPT_TYPE_DOUBLE, {.dbl=1}, 0.0, 1.0, FLAGS },
{ NULL }
};
AVFILTER_DEFINE_CLASS(aphaseshift);
AVFilter ff_af_aphaseshift = {
.name = "aphaseshift",
.description = NULL_IF_CONFIG_SMALL("Apply phase shifting to input audio."),
.query_formats = query_formats,
.priv_size = sizeof(AFreqShift),
.priv_class = &aphaseshift_class,
.uninit = uninit,
.inputs = inputs,
.outputs = outputs,
.process_command = ff_filter_process_command,
.flags = AVFILTER_FLAG_SUPPORT_TIMELINE_GENERIC |
AVFILTER_FLAG_SLICE_THREADS,
};

View File

@@ -1,448 +0,0 @@
/*
 * Copyright (c) 2005 Boğaç Topaktaş
* Copyright (c) 2020 Paul B Mahol
*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include "libavutil/channel_layout.h"
#include "libavutil/ffmath.h"
#include "libavutil/opt.h"
#include "avfilter.h"
#include "audio.h"
#include "formats.h"
typedef struct BiquadCoeffs {
double a1, a2;
double b0, b1, b2;
} BiquadCoeffs;
typedef struct ASuperCutContext {
const AVClass *class;
double cutoff;
double level;
double qfactor;
int order;
int filter_count;
int bypass;
BiquadCoeffs coeffs[10];
AVFrame *w;
int (*filter_channels)(AVFilterContext *ctx, void *arg, int jobnr, int nb_jobs);
} ASuperCutContext;
static int query_formats(AVFilterContext *ctx)
{
AVFilterFormats *formats = NULL;
AVFilterChannelLayouts *layouts = NULL;
static const enum AVSampleFormat sample_fmts[] = {
AV_SAMPLE_FMT_FLTP,
AV_SAMPLE_FMT_DBLP,
AV_SAMPLE_FMT_NONE
};
int ret;
formats = ff_make_format_list(sample_fmts);
if (!formats)
return AVERROR(ENOMEM);
ret = ff_set_common_formats(ctx, formats);
if (ret < 0)
return ret;
layouts = ff_all_channel_counts();
if (!layouts)
return AVERROR(ENOMEM);
ret = ff_set_common_channel_layouts(ctx, layouts);
if (ret < 0)
return ret;
formats = ff_all_samplerates();
return ff_set_common_samplerates(ctx, formats);
}
static void calc_q_factors(int n, double *q)
{
for (int i = 0; i < n / 2; i++)
q[i] = 1. / (-2. * cos(M_PI * (2. * (i + 1) + n - 1.) / (2. * n)));
}
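
calc_q_factors() computes the section Q values of an order-n Butterworth filter, Q_k = -1/(2 cos(π(2k + n - 1)/(2n))). A standalone check against the standard tables, which give Q ≈ 0.5412 and 1.3066 for order 4:

#include <math.h>
#include <stdio.h>

int main(void)
{
    const int n = 4;
    for (int i = 0; i < n / 2; i++)
        printf("Q[%d] = %f\n", i,
               1. / (-2. * cos(M_PI * (2. * (i + 1) + n - 1.) / (2. * n))));
    return 0;  /* prints ~1.306563 and ~0.541196 */
}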
static int get_coeffs(AVFilterContext *ctx)
{
ASuperCutContext *s = ctx->priv;
AVFilterLink *inlink = ctx->inputs[0];
double w0 = s->cutoff / inlink->sample_rate;
double K = tan(M_PI * w0);
double q[10];
s->bypass = w0 >= 0.5;
if (s->bypass)
return 0;
if (!strcmp(ctx->filter->name, "asubcut")) {
s->filter_count = s->order / 2 + (s->order & 1);
calc_q_factors(s->order, q);
if (s->order & 1) {
BiquadCoeffs *coeffs = &s->coeffs[0];
double omega = 2. * tan(M_PI * w0);
coeffs->b0 = 2. / (2. + omega);
coeffs->b1 = -coeffs->b0;
coeffs->b2 = 0.;
coeffs->a1 = -(omega - 2.) / (2. + omega);
coeffs->a2 = 0.;
}
for (int b = (s->order & 1); b < s->filter_count; b++) {
BiquadCoeffs *coeffs = &s->coeffs[b];
const int idx = b - (s->order & 1);
double norm = 1.0 / (1.0 + K / q[idx] + K * K);
coeffs->b0 = norm;
coeffs->b1 = -2.0 * coeffs->b0;
coeffs->b2 = coeffs->b0;
coeffs->a1 = -2.0 * (K * K - 1.0) * norm;
coeffs->a2 = -(1.0 - K / q[idx] + K * K) * norm;
}
} else if (!strcmp(ctx->filter->name, "asupercut")) {
s->filter_count = s->order / 2 + (s->order & 1);
calc_q_factors(s->order, q);
if (s->order & 1) {
BiquadCoeffs *coeffs = &s->coeffs[0];
double omega = 2. * tan(M_PI * w0);
coeffs->b0 = omega / (2. + omega);
coeffs->b1 = coeffs->b0;
coeffs->b2 = 0.;
coeffs->a1 = -(omega - 2.) / (2. + omega);
coeffs->a2 = 0.;
}
for (int b = (s->order & 1); b < s->filter_count; b++) {
BiquadCoeffs *coeffs = &s->coeffs[b];
const int idx = b - (s->order & 1);
double norm = 1.0 / (1.0 + K / q[idx] + K * K);
coeffs->b0 = K * K * norm;
coeffs->b1 = 2.0 * coeffs->b0;
coeffs->b2 = coeffs->b0;
coeffs->a1 = -2.0 * (K * K - 1.0) * norm;
coeffs->a2 = -(1.0 - K / q[idx] + K * K) * norm;
}
} else if (!strcmp(ctx->filter->name, "asuperpass")) {
double alpha, beta, gamma, theta;
double theta_0 = 2. * M_PI * (s->cutoff / inlink->sample_rate);
double d_E;
s->filter_count = s->order / 2;
d_E = (2. * tan(theta_0 / (2. * s->qfactor))) / sin(theta_0);
for (int b = 0; b < s->filter_count; b += 2) {
double D = 2. * sin(((b + 1) * M_PI) / (2. * s->filter_count));
double A = (1. + pow((d_E / 2.), 2)) / (D * d_E / 2.);
double d = sqrt((d_E * D) / (A + sqrt(A * A - 1.)));
double B = D * (d_E / 2.) / d;
double W = B + sqrt(B * B - 1.);
for (int j = 0; j < 2; j++) {
BiquadCoeffs *coeffs = &s->coeffs[b + j];
if (j == 1)
theta = 2. * atan(tan(theta_0 / 2.) / W);
else
theta = 2. * atan(W * tan(theta_0 / 2.));
beta = 0.5 * ((1. - (d / 2.) * sin(theta)) / (1. + (d / 2.) * sin(theta)));
gamma = (0.5 + beta) * cos(theta);
alpha = 0.5 * (0.5 - beta) * sqrt(1. + pow((W - (1. / W)) / d, 2.));
coeffs->a1 = 2. * gamma;
coeffs->a2 = -2. * beta;
coeffs->b0 = 2. * alpha;
coeffs->b1 = 0.;
coeffs->b2 = -2. * alpha;
}
}
} else if (!strcmp(ctx->filter->name, "asuperstop")) {
double alpha, beta, gamma, theta;
double theta_0 = 2. * M_PI * (s->cutoff / inlink->sample_rate);
double d_E;
s->filter_count = s->order / 2;
d_E = (2. * tan(theta_0 / (2. * s->qfactor))) / sin(theta_0);
for (int b = 0; b < s->filter_count; b += 2) {
double D = 2. * sin(((b + 1) * M_PI) / (2. * s->filter_count));
double A = (1. + pow((d_E / 2.), 2)) / (D * d_E / 2.);
double d = sqrt((d_E * D) / (A + sqrt(A * A - 1.)));
double B = D * (d_E / 2.) / d;
double W = B + sqrt(B * B - 1.);
for (int j = 0; j < 2; j++) {
BiquadCoeffs *coeffs = &s->coeffs[b + j];
if (j == 1)
theta = 2. * atan(tan(theta_0 / 2.) / W);
else
theta = 2. * atan(W * tan(theta_0 / 2.));
beta = 0.5 * ((1. - (d / 2.) * sin(theta)) / (1. + (d / 2.) * sin(theta)));
gamma = (0.5 + beta) * cos(theta);
alpha = 0.5 * (0.5 + beta) * ((1. - cos(theta)) / (1. - cos(theta_0)));
coeffs->a1 = 2. * gamma;
coeffs->a2 = -2. * beta;
coeffs->b0 = 2. * alpha;
coeffs->b1 = -4. * alpha * cos(theta_0);
coeffs->b2 = 2. * alpha;
}
}
}
return 0;
}
typedef struct ThreadData {
AVFrame *in, *out;
} ThreadData;
#define FILTER(name, type) \
static int filter_channels_## name(AVFilterContext *ctx, void *arg, \
int jobnr, int nb_jobs) \
{ \
ASuperCutContext *s = ctx->priv; \
ThreadData *td = arg; \
AVFrame *out = td->out; \
AVFrame *in = td->in; \
const int start = (in->channels * jobnr) / nb_jobs; \
const int end = (in->channels * (jobnr+1)) / nb_jobs; \
const double level = s->level; \
\
for (int ch = start; ch < end; ch++) { \
const type *src = (const type *)in->extended_data[ch]; \
type *dst = (type *)out->extended_data[ch]; \
\
for (int b = 0; b < s->filter_count; b++) { \
BiquadCoeffs *coeffs = &s->coeffs[b]; \
const type a1 = coeffs->a1; \
const type a2 = coeffs->a2; \
const type b0 = coeffs->b0; \
const type b1 = coeffs->b1; \
const type b2 = coeffs->b2; \
type *w = ((type *)s->w->extended_data[ch]) + b * 2; \
\
for (int n = 0; n < in->nb_samples; n++) { \
type sin = b ? dst[n] : src[n] * level; \
type sout = sin * b0 + w[0]; \
\
w[0] = b1 * sin + w[1] + a1 * sout; \
w[1] = b2 * sin + a2 * sout; \
\
dst[n] = sout; \
} \
} \
} \
\
return 0; \
}
FILTER(fltp, float)
FILTER(dblp, double)
static int config_input(AVFilterLink *inlink)
{
AVFilterContext *ctx = inlink->dst;
ASuperCutContext *s = ctx->priv;
switch (inlink->format) {
case AV_SAMPLE_FMT_FLTP: s->filter_channels = filter_channels_fltp; break;
case AV_SAMPLE_FMT_DBLP: s->filter_channels = filter_channels_dblp; break;
}
s->w = ff_get_audio_buffer(inlink, 2 * 10);
if (!s->w)
return AVERROR(ENOMEM);
return get_coeffs(ctx);
}
static int filter_frame(AVFilterLink *inlink, AVFrame *in)
{
AVFilterContext *ctx = inlink->dst;
ASuperCutContext *s = ctx->priv;
AVFilterLink *outlink = ctx->outputs[0];
ThreadData td;
AVFrame *out;
if (s->bypass)
return ff_filter_frame(outlink, in);
if (av_frame_is_writable(in)) {
out = in;
} else {
out = ff_get_audio_buffer(outlink, in->nb_samples);
if (!out) {
av_frame_free(&in);
return AVERROR(ENOMEM);
}
av_frame_copy_props(out, in);
}
td.in = in; td.out = out;
ctx->internal->execute(ctx, s->filter_channels, &td, NULL, FFMIN(inlink->channels,
ff_filter_get_nb_threads(ctx)));
if (out != in)
av_frame_free(&in);
return ff_filter_frame(outlink, out);
}
static int process_command(AVFilterContext *ctx, const char *cmd, const char *args,
char *res, int res_len, int flags)
{
int ret;
ret = ff_filter_process_command(ctx, cmd, args, res, res_len, flags);
if (ret < 0)
return ret;
return get_coeffs(ctx);
}
static av_cold void uninit(AVFilterContext *ctx)
{
ASuperCutContext *s = ctx->priv;
av_frame_free(&s->w);
}
#define OFFSET(x) offsetof(ASuperCutContext, x)
#define FLAGS AV_OPT_FLAG_AUDIO_PARAM|AV_OPT_FLAG_FILTERING_PARAM|AV_OPT_FLAG_RUNTIME_PARAM
static const AVOption asupercut_options[] = {
{ "cutoff", "set cutoff frequency", OFFSET(cutoff), AV_OPT_TYPE_DOUBLE, {.dbl=20000}, 20000, 192000, FLAGS },
{ "order", "set filter order", OFFSET(order), AV_OPT_TYPE_INT, {.i64=10}, 3, 20, FLAGS },
{ "level", "set input level", OFFSET(level), AV_OPT_TYPE_DOUBLE, {.dbl=1.}, 0., 1., FLAGS },
{ NULL }
};
AVFILTER_DEFINE_CLASS(asupercut);
static const AVFilterPad inputs[] = {
{
.name = "default",
.type = AVMEDIA_TYPE_AUDIO,
.filter_frame = filter_frame,
.config_props = config_input,
},
{ NULL }
};
static const AVFilterPad outputs[] = {
{
.name = "default",
.type = AVMEDIA_TYPE_AUDIO,
},
{ NULL }
};
AVFilter ff_af_asupercut = {
.name = "asupercut",
.description = NULL_IF_CONFIG_SMALL("Cut super frequencies."),
.query_formats = query_formats,
.priv_size = sizeof(ASuperCutContext),
.priv_class = &asupercut_class,
.uninit = uninit,
.inputs = inputs,
.outputs = outputs,
.process_command = process_command,
.flags = AVFILTER_FLAG_SUPPORT_TIMELINE_GENERIC |
AVFILTER_FLAG_SLICE_THREADS,
};
static const AVOption asubcut_options[] = {
{ "cutoff", "set cutoff frequency", OFFSET(cutoff), AV_OPT_TYPE_DOUBLE, {.dbl=20}, 2, 200, FLAGS },
{ "order", "set filter order", OFFSET(order), AV_OPT_TYPE_INT, {.i64=10}, 3, 20, FLAGS },
{ "level", "set input level", OFFSET(level), AV_OPT_TYPE_DOUBLE, {.dbl=1.}, 0., 1., FLAGS },
{ NULL }
};
AVFILTER_DEFINE_CLASS(asubcut);
AVFilter ff_af_asubcut = {
.name = "asubcut",
.description = NULL_IF_CONFIG_SMALL("Cut subwoofer frequencies."),
.query_formats = query_formats,
.priv_size = sizeof(ASuperCutContext),
.priv_class = &asubcut_class,
.uninit = uninit,
.inputs = inputs,
.outputs = outputs,
.process_command = process_command,
.flags = AVFILTER_FLAG_SUPPORT_TIMELINE_GENERIC |
AVFILTER_FLAG_SLICE_THREADS,
};
static const AVOption asuperpass_asuperstop_options[] = {
{ "centerf","set center frequency", OFFSET(cutoff), AV_OPT_TYPE_DOUBLE, {.dbl=1000}, 2, 999999, FLAGS },
{ "order", "set filter order", OFFSET(order), AV_OPT_TYPE_INT, {.i64=4}, 4, 20, FLAGS },
{ "qfactor","set Q-factor", OFFSET(qfactor),AV_OPT_TYPE_DOUBLE, {.dbl=1.},0.01, 100., FLAGS },
{ "level", "set input level", OFFSET(level), AV_OPT_TYPE_DOUBLE, {.dbl=1.}, 0., 2., FLAGS },
{ NULL }
};
#define asuperpass_options asuperpass_asuperstop_options
AVFILTER_DEFINE_CLASS(asuperpass);
AVFilter ff_af_asuperpass = {
.name = "asuperpass",
.description = NULL_IF_CONFIG_SMALL("Apply high order Butterworth band-pass filter."),
.query_formats = query_formats,
.priv_size = sizeof(ASuperCutContext),
.priv_class = &asuperpass_class,
.uninit = uninit,
.inputs = inputs,
.outputs = outputs,
.process_command = process_command,
.flags = AVFILTER_FLAG_SUPPORT_TIMELINE_GENERIC |
AVFILTER_FLAG_SLICE_THREADS,
};
#define asuperstop_options asuperpass_asuperstop_options
AVFILTER_DEFINE_CLASS(asuperstop);
AVFilter ff_af_asuperstop = {
.name = "asuperstop",
.description = NULL_IF_CONFIG_SMALL("Apply high order Butterworth band-stop filter."),
.query_formats = query_formats,
.priv_size = sizeof(ASuperCutContext),
.priv_class = &asuperstop_class,
.uninit = uninit,
.inputs = inputs,
.outputs = outputs,
.process_command = process_command,
.flags = AVFILTER_FLAG_SUPPORT_TIMELINE_GENERIC |
AVFILTER_FLAG_SLICE_THREADS,
};

View File

@@ -1,579 +0,0 @@
/*
* Copyright (c) 2020 Paul B Mahol
*
* Speech Normalizer
*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
/**
* @file
* Speech Normalizer
*/
#include <float.h>
#include "libavutil/avassert.h"
#include "libavutil/opt.h"
#define FF_BUFQUEUE_SIZE (1024)
#include "bufferqueue.h"
#include "audio.h"
#include "avfilter.h"
#include "filters.h"
#include "internal.h"
#define MAX_ITEMS 882000
#define MIN_PEAK (1. / 32768.)
typedef struct PeriodItem {
int size;
int type;
double max_peak;
} PeriodItem;
typedef struct ChannelContext {
int state;
int bypass;
PeriodItem pi[MAX_ITEMS];
double gain_state;
double pi_max_peak;
int pi_start;
int pi_end;
int pi_size;
} ChannelContext;
typedef struct SpeechNormalizerContext {
const AVClass *class;
double peak_value;
double max_expansion;
double max_compression;
double threshold_value;
double raise_amount;
double fall_amount;
uint64_t channels;
int invert;
int link;
ChannelContext *cc;
double prev_gain;
int max_period;
int eof;
int64_t pts;
struct FFBufQueue queue;
void (*analyze_channel)(AVFilterContext *ctx, ChannelContext *cc,
const uint8_t *srcp, int nb_samples);
void (*filter_channels[2])(AVFilterContext *ctx,
AVFrame *in, int nb_samples);
} SpeechNormalizerContext;
#define OFFSET(x) offsetof(SpeechNormalizerContext, x)
#define FLAGS AV_OPT_FLAG_AUDIO_PARAM|AV_OPT_FLAG_FILTERING_PARAM|AV_OPT_FLAG_RUNTIME_PARAM
static const AVOption speechnorm_options[] = {
{ "peak", "set the peak value", OFFSET(peak_value), AV_OPT_TYPE_DOUBLE, {.dbl=0.95}, 0.0, 1.0, FLAGS },
{ "p", "set the peak value", OFFSET(peak_value), AV_OPT_TYPE_DOUBLE, {.dbl=0.95}, 0.0, 1.0, FLAGS },
{ "expansion", "set the max expansion factor", OFFSET(max_expansion), AV_OPT_TYPE_DOUBLE, {.dbl=2.0}, 1.0, 50.0, FLAGS },
{ "e", "set the max expansion factor", OFFSET(max_expansion), AV_OPT_TYPE_DOUBLE, {.dbl=2.0}, 1.0, 50.0, FLAGS },
{ "compression", "set the max compression factor", OFFSET(max_compression), AV_OPT_TYPE_DOUBLE, {.dbl=2.0}, 1.0, 50.0, FLAGS },
{ "c", "set the max compression factor", OFFSET(max_compression), AV_OPT_TYPE_DOUBLE, {.dbl=2.0}, 1.0, 50.0, FLAGS },
{ "threshold", "set the threshold value", OFFSET(threshold_value), AV_OPT_TYPE_DOUBLE, {.dbl=0}, 0.0, 1.0, FLAGS },
{ "t", "set the threshold value", OFFSET(threshold_value), AV_OPT_TYPE_DOUBLE, {.dbl=0}, 0.0, 1.0, FLAGS },
{ "raise", "set the expansion raising amount", OFFSET(raise_amount), AV_OPT_TYPE_DOUBLE, {.dbl=0.001}, 0.0, 1.0, FLAGS },
{ "r", "set the expansion raising amount", OFFSET(raise_amount), AV_OPT_TYPE_DOUBLE, {.dbl=0.001}, 0.0, 1.0, FLAGS },
{ "fall", "set the compression raising amount", OFFSET(fall_amount), AV_OPT_TYPE_DOUBLE, {.dbl=0.001}, 0.0, 1.0, FLAGS },
{ "f", "set the compression raising amount", OFFSET(fall_amount), AV_OPT_TYPE_DOUBLE, {.dbl=0.001}, 0.0, 1.0, FLAGS },
{ "channels", "set channels to filter", OFFSET(channels), AV_OPT_TYPE_CHANNEL_LAYOUT, {.i64=-1}, INT64_MIN, INT64_MAX, FLAGS },
{ "h", "set channels to filter", OFFSET(channels), AV_OPT_TYPE_CHANNEL_LAYOUT, {.i64=-1}, INT64_MIN, INT64_MAX, FLAGS },
{ "invert", "set inverted filtering", OFFSET(invert), AV_OPT_TYPE_BOOL, {.i64=0}, 0, 1, FLAGS },
{ "i", "set inverted filtering", OFFSET(invert), AV_OPT_TYPE_BOOL, {.i64=0}, 0, 1, FLAGS },
{ "link", "set linked channels filtering", OFFSET(link), AV_OPT_TYPE_BOOL, {.i64=0}, 0, 1, FLAGS },
{ "l", "set linked channels filtering", OFFSET(link), AV_OPT_TYPE_BOOL, {.i64=0}, 0, 1, FLAGS },
{ NULL }
};
AVFILTER_DEFINE_CLASS(speechnorm);
static int query_formats(AVFilterContext *ctx)
{
AVFilterFormats *formats;
AVFilterChannelLayouts *layouts;
static const enum AVSampleFormat sample_fmts[] = {
AV_SAMPLE_FMT_FLTP, AV_SAMPLE_FMT_DBLP,
AV_SAMPLE_FMT_NONE
};
int ret;
layouts = ff_all_channel_counts();
if (!layouts)
return AVERROR(ENOMEM);
ret = ff_set_common_channel_layouts(ctx, layouts);
if (ret < 0)
return ret;
formats = ff_make_format_list(sample_fmts);
if (!formats)
return AVERROR(ENOMEM);
ret = ff_set_common_formats(ctx, formats);
if (ret < 0)
return ret;
formats = ff_all_samplerates();
if (!formats)
return AVERROR(ENOMEM);
return ff_set_common_samplerates(ctx, formats);
}
static int get_pi_samples(PeriodItem *pi, int start, int end, int remain)
{
int sum;
if (pi[start].type == 0)
return remain;
sum = remain;
while (start != end) {
start++;
if (start >= MAX_ITEMS)
start = 0;
if (pi[start].type == 0)
break;
av_assert0(pi[start].size > 0);
sum += pi[start].size;
}
return sum;
}
static int available_samples(AVFilterContext *ctx)
{
SpeechNormalizerContext *s = ctx->priv;
AVFilterLink *inlink = ctx->inputs[0];
int min_pi_nb_samples;
min_pi_nb_samples = get_pi_samples(s->cc[0].pi, s->cc[0].pi_start, s->cc[0].pi_end, s->cc[0].pi_size);
for (int ch = 1; ch < inlink->channels && min_pi_nb_samples > 0; ch++) {
ChannelContext *cc = &s->cc[ch];
min_pi_nb_samples = FFMIN(min_pi_nb_samples, get_pi_samples(cc->pi, cc->pi_start, cc->pi_end, cc->pi_size));
}
return min_pi_nb_samples;
}
static void consume_pi(ChannelContext *cc, int nb_samples)
{
if (cc->pi_size >= nb_samples) {
cc->pi_size -= nb_samples;
} else {
av_assert0(0);
}
}
static double next_gain(AVFilterContext *ctx, double pi_max_peak, int bypass, double state)
{
SpeechNormalizerContext *s = ctx->priv;
const double expansion = FFMIN(s->max_expansion, s->peak_value / pi_max_peak);
const double compression = 1. / s->max_compression;
const int type = s->invert ? pi_max_peak <= s->threshold_value : pi_max_peak >= s->threshold_value;
if (bypass) {
return 1.;
} else if (type) {
return FFMIN(expansion, state + s->raise_amount);
} else {
return FFMIN(expansion, FFMAX(compression, state - s->fall_amount));
}
}
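
next_gain() caps expansion at min(max_expansion, peak/pi_max_peak) and lets the gain state move only by raise_amount or fall_amount per half-period, which is what makes the normalization gradual. A toy standalone walk with the defaults (peak 0.95, expansion 2.0, raise 0.001), showing the state needs about a thousand detected periods to reach the cap:

#include <stdio.h>

int main(void)
{
    const double peak = 0.95, max_expansion = 2.0, raise = 0.001;
    const double pi_max_peak = 0.1;  /* every half-period peaks at 0.1 */
    double cap   = max_expansion < peak / pi_max_peak
                 ? max_expansion : peak / pi_max_peak;  /* FFMIN(...) */
    double state = 1.0;
    int periods  = 0;
    while (state < cap) {  /* same ramp as the expansion branch above */
        state = (state + raise < cap) ? state + raise : cap;
        periods++;
    }
    printf("reached gain %g after %d periods\n", state, periods);  /* ~1000 */
    return 0;
}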
static void next_pi(AVFilterContext *ctx, ChannelContext *cc, int bypass)
{
av_assert0(cc->pi_size >= 0);
if (cc->pi_size == 0) {
SpeechNormalizerContext *s = ctx->priv;
int start = cc->pi_start;
av_assert0(cc->pi[start].size > 0);
av_assert0(cc->pi[start].type > 0 || s->eof);
cc->pi_size = cc->pi[start].size;
cc->pi_max_peak = cc->pi[start].max_peak;
av_assert0(cc->pi_start != cc->pi_end || s->eof);
start++;
if (start >= MAX_ITEMS)
start = 0;
cc->pi_start = start;
cc->gain_state = next_gain(ctx, cc->pi_max_peak, bypass, cc->gain_state);
}
}
static double min_gain(AVFilterContext *ctx, ChannelContext *cc, int max_size)
{
SpeechNormalizerContext *s = ctx->priv;
double min_gain = s->max_expansion;
double gain_state = cc->gain_state;
int size = cc->pi_size;
int idx = cc->pi_start;
min_gain = FFMIN(min_gain, gain_state);
while (size <= max_size) {
if (idx == cc->pi_end)
break;
gain_state = next_gain(ctx, cc->pi[idx].max_peak, 0, gain_state);
min_gain = FFMIN(min_gain, gain_state);
size += cc->pi[idx].size;
idx++;
if (idx >= MAX_ITEMS)
idx = 0;
}
return min_gain;
}
#define ANALYZE_CHANNEL(name, ptype, zero) \
static void analyze_channel_## name (AVFilterContext *ctx, ChannelContext *cc, \
const uint8_t *srcp, int nb_samples) \
{ \
SpeechNormalizerContext *s = ctx->priv; \
const ptype *src = (const ptype *)srcp; \
int n = 0; \
\
if (cc->state < 0) \
cc->state = src[0] >= zero; \
\
while (n < nb_samples) { \
if ((cc->state != (src[n] >= zero)) || \
(cc->pi[cc->pi_end].size > s->max_period)) { \
double max_peak = cc->pi[cc->pi_end].max_peak; \
int state = cc->state; \
cc->state = src[n] >= zero; \
av_assert0(cc->pi[cc->pi_end].size > 0); \
if (cc->pi[cc->pi_end].max_peak >= MIN_PEAK || \
cc->pi[cc->pi_end].size > s->max_period) { \
cc->pi[cc->pi_end].type = 1; \
cc->pi_end++; \
if (cc->pi_end >= MAX_ITEMS) \
cc->pi_end = 0; \
if (cc->state != state) \
cc->pi[cc->pi_end].max_peak = DBL_MIN; \
else \
cc->pi[cc->pi_end].max_peak = max_peak; \
cc->pi[cc->pi_end].type = 0; \
cc->pi[cc->pi_end].size = 0; \
av_assert0(cc->pi_end != cc->pi_start); \
} \
} \
\
if (cc->state) { \
while (src[n] >= zero) { \
cc->pi[cc->pi_end].max_peak = FFMAX(cc->pi[cc->pi_end].max_peak, src[n]); \
cc->pi[cc->pi_end].size++; \
n++; \
if (n >= nb_samples) \
break; \
} \
} else { \
while (src[n] < zero) { \
cc->pi[cc->pi_end].max_peak = FFMAX(cc->pi[cc->pi_end].max_peak, -src[n]); \
cc->pi[cc->pi_end].size++; \
n++; \
if (n >= nb_samples) \
break; \
} \
} \
} \
}
ANALYZE_CHANNEL(dbl, double, 0.0)
ANALYZE_CHANNEL(flt, float, 0.f)
#define FILTER_CHANNELS(name, ptype) \
static void filter_channels_## name (AVFilterContext *ctx, \
AVFrame *in, int nb_samples) \
{ \
SpeechNormalizerContext *s = ctx->priv; \
AVFilterLink *inlink = ctx->inputs[0]; \
\
for (int ch = 0; ch < inlink->channels; ch++) { \
ChannelContext *cc = &s->cc[ch]; \
ptype *dst = (ptype *)in->extended_data[ch]; \
const int bypass = !(av_channel_layout_extract_channel(inlink->channel_layout, ch) & s->channels); \
int n = 0; \
\
while (n < nb_samples) { \
ptype gain; \
int size; \
\
next_pi(ctx, cc, bypass); \
size = FFMIN(nb_samples - n, cc->pi_size); \
av_assert0(size > 0); \
gain = cc->gain_state; \
consume_pi(cc, size); \
for (int i = n; i < n + size; i++) \
dst[i] *= gain; \
n += size; \
} \
} \
}
FILTER_CHANNELS(dbl, double)
FILTER_CHANNELS(flt, float)
static double lerp(double min, double max, double mix)
{
return min + (max - min) * mix;
}
#define FILTER_LINK_CHANNELS(name, ptype) \
static void filter_link_channels_## name (AVFilterContext *ctx, \
AVFrame *in, int nb_samples) \
{ \
SpeechNormalizerContext *s = ctx->priv; \
AVFilterLink *inlink = ctx->inputs[0]; \
int n = 0; \
\
while (n < nb_samples) { \
int min_size = nb_samples - n; \
int max_size = 1; \
ptype gain = s->max_expansion; \
\
for (int ch = 0; ch < inlink->channels; ch++) { \
ChannelContext *cc = &s->cc[ch]; \
\
cc->bypass = !(av_channel_layout_extract_channel(inlink->channel_layout, ch) & s->channels); \
\
next_pi(ctx, cc, cc->bypass); \
min_size = FFMIN(min_size, cc->pi_size); \
max_size = FFMAX(max_size, cc->pi_size); \
} \
\
av_assert0(min_size > 0); \
for (int ch = 0; ch < inlink->channels; ch++) { \
ChannelContext *cc = &s->cc[ch]; \
\
if (cc->bypass) \
continue; \
gain = FFMIN(gain, min_gain(ctx, cc, max_size)); \
} \
\
for (int ch = 0; ch < inlink->channels; ch++) { \
ChannelContext *cc = &s->cc[ch]; \
ptype *dst = (ptype *)in->extended_data[ch]; \
\
consume_pi(cc, min_size); \
if (cc->bypass) \
continue; \
\
for (int i = n; i < n + min_size; i++) { \
ptype g = lerp(s->prev_gain, gain, (i - n) / (double)min_size); \
dst[i] *= g; \
} \
} \
\
s->prev_gain = gain; \
n += min_size; \
} \
}
FILTER_LINK_CHANNELS(dbl, double)
FILTER_LINK_CHANNELS(flt, float)
static int filter_frame(AVFilterContext *ctx)
{
SpeechNormalizerContext *s = ctx->priv;
AVFilterLink *outlink = ctx->outputs[0];
AVFilterLink *inlink = ctx->inputs[0];
int ret;
while (s->queue.available > 0) {
int min_pi_nb_samples;
AVFrame *in;
in = ff_bufqueue_peek(&s->queue, 0);
if (!in)
break;
min_pi_nb_samples = available_samples(ctx);
if (min_pi_nb_samples < in->nb_samples && !s->eof)
break;
        in = ff_bufqueue_get(&s->queue);
        ret = av_frame_make_writable(in);
        if (ret < 0) {
            av_frame_free(&in);
            return ret;
        }
s->filter_channels[s->link](ctx, in, in->nb_samples);
s->pts = in->pts + in->nb_samples;
return ff_filter_frame(outlink, in);
}
for (int f = 0; f < ff_inlink_queued_frames(inlink); f++) {
AVFrame *in;
ret = ff_inlink_consume_frame(inlink, &in);
if (ret < 0)
return ret;
if (ret == 0)
break;
ff_bufqueue_add(ctx, &s->queue, in);
for (int ch = 0; ch < inlink->channels; ch++) {
ChannelContext *cc = &s->cc[ch];
s->analyze_channel(ctx, cc, in->extended_data[ch], in->nb_samples);
}
}
return 1;
}
static int activate(AVFilterContext *ctx)
{
AVFilterLink *inlink = ctx->inputs[0];
AVFilterLink *outlink = ctx->outputs[0];
SpeechNormalizerContext *s = ctx->priv;
int ret, status;
int64_t pts;
FF_FILTER_FORWARD_STATUS_BACK(outlink, inlink);
ret = filter_frame(ctx);
if (ret <= 0)
return ret;
if (!s->eof && ff_inlink_acknowledge_status(inlink, &status, &pts)) {
if (status == AVERROR_EOF)
s->eof = 1;
}
if (s->eof && ff_inlink_queued_samples(inlink) == 0 &&
s->queue.available == 0) {
ff_outlink_set_status(outlink, AVERROR_EOF, s->pts);
return 0;
}
if (s->queue.available > 0) {
AVFrame *in = ff_bufqueue_peek(&s->queue, 0);
const int nb_samples = available_samples(ctx);
if (nb_samples >= in->nb_samples || s->eof) {
ff_filter_set_ready(ctx, 10);
return 0;
}
}
FF_FILTER_FORWARD_WANTED(outlink, inlink);
return FFERROR_NOT_READY;
}
static int config_input(AVFilterLink *inlink)
{
AVFilterContext *ctx = inlink->dst;
SpeechNormalizerContext *s = ctx->priv;
s->max_period = inlink->sample_rate / 10;
s->prev_gain = 1.;
s->cc = av_calloc(inlink->channels, sizeof(*s->cc));
if (!s->cc)
return AVERROR(ENOMEM);
for (int ch = 0; ch < inlink->channels; ch++) {
ChannelContext *cc = &s->cc[ch];
cc->state = -1;
cc->gain_state = 1.;
}
switch (inlink->format) {
case AV_SAMPLE_FMT_FLTP:
s->analyze_channel = analyze_channel_flt;
s->filter_channels[0] = filter_channels_flt;
s->filter_channels[1] = filter_link_channels_flt;
break;
case AV_SAMPLE_FMT_DBLP:
s->analyze_channel = analyze_channel_dbl;
s->filter_channels[0] = filter_channels_dbl;
s->filter_channels[1] = filter_link_channels_dbl;
break;
default:
av_assert0(0);
}
return 0;
}
static int process_command(AVFilterContext *ctx, const char *cmd, const char *args,
char *res, int res_len, int flags)
{
SpeechNormalizerContext *s = ctx->priv;
int link = s->link;
int ret;
ret = ff_filter_process_command(ctx, cmd, args, res, res_len, flags);
if (ret < 0)
return ret;
if (link != s->link)
s->prev_gain = 1.;
return 0;
}
static av_cold void uninit(AVFilterContext *ctx)
{
SpeechNormalizerContext *s = ctx->priv;
ff_bufqueue_discard_all(&s->queue);
av_freep(&s->cc);
}
static const AVFilterPad inputs[] = {
{
.name = "default",
.type = AVMEDIA_TYPE_AUDIO,
.config_props = config_input,
},
{ NULL }
};
static const AVFilterPad outputs[] = {
{
.name = "default",
.type = AVMEDIA_TYPE_AUDIO,
},
{ NULL }
};
AVFilter ff_af_speechnorm = {
.name = "speechnorm",
.description = NULL_IF_CONFIG_SMALL("Speech Normalizer."),
.query_formats = query_formats,
.priv_size = sizeof(SpeechNormalizerContext),
.priv_class = &speechnorm_class,
.activate = activate,
.uninit = uninit,
.inputs = inputs,
.outputs = outputs,
.process_command = process_command,
};

View File

@@ -1,112 +0,0 @@
/*
* This file is part of FFmpeg.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#ifndef AVFILTER_CUDA_VECTORHELPERS_H
#define AVFILTER_CUDA_VECTORHELPERS_H
typedef unsigned char uchar;
typedef unsigned short ushort;
template<typename T> struct vector_helper { };
template<> struct vector_helper<uchar> { typedef float ftype; typedef int itype; };
template<> struct vector_helper<uchar2> { typedef float2 ftype; typedef int2 itype; };
template<> struct vector_helper<uchar4> { typedef float4 ftype; typedef int4 itype; };
template<> struct vector_helper<ushort> { typedef float ftype; typedef int itype; };
template<> struct vector_helper<ushort2> { typedef float2 ftype; typedef int2 itype; };
template<> struct vector_helper<ushort4> { typedef float4 ftype; typedef int4 itype; };
template<> struct vector_helper<int> { typedef float ftype; typedef int itype; };
template<> struct vector_helper<int2> { typedef float2 ftype; typedef int2 itype; };
template<> struct vector_helper<int4> { typedef float4 ftype; typedef int4 itype; };
#define floatT typename vector_helper<T>::ftype
#define intT typename vector_helper<T>::itype
template<typename T, typename V> inline __device__ V to_floatN(const T &a) { return (V)a; }
template<typename T, typename V> inline __device__ T from_floatN(const V &a) { return (T)a; }
#define OPERATORS2(T) \
template<typename V> inline __device__ T operator+(const T &a, const V &b) { return make_ ## T (a.x + b.x, a.y + b.y); } \
template<typename V> inline __device__ T operator-(const T &a, const V &b) { return make_ ## T (a.x - b.x, a.y - b.y); } \
template<typename V> inline __device__ T operator*(const T &a, V b) { return make_ ## T (a.x * b, a.y * b); } \
template<typename V> inline __device__ T operator/(const T &a, V b) { return make_ ## T (a.x / b, a.y / b); } \
template<typename V> inline __device__ T operator>>(const T &a, V b) { return make_ ## T (a.x >> b, a.y >> b); } \
template<typename V> inline __device__ T operator<<(const T &a, V b) { return make_ ## T (a.x << b, a.y << b); } \
template<typename V> inline __device__ T &operator+=(T &a, const V &b) { a.x += b.x; a.y += b.y; return a; } \
template<typename V> inline __device__ void vec_set(T &a, const V &b) { a.x = b.x; a.y = b.y; } \
template<typename V> inline __device__ void vec_set_scalar(T &a, V b) { a.x = b; a.y = b; } \
template<> inline __device__ float2 to_floatN<T, float2>(const T &a) { return make_float2(a.x, a.y); } \
template<> inline __device__ T from_floatN<T, float2>(const float2 &a) { return make_ ## T(a.x, a.y); }
#define OPERATORS4(T) \
template<typename V> inline __device__ T operator+(const T &a, const V &b) { return make_ ## T (a.x + b.x, a.y + b.y, a.z + b.z, a.w + b.w); } \
template<typename V> inline __device__ T operator-(const T &a, const V &b) { return make_ ## T (a.x - b.x, a.y - b.y, a.z - b.z, a.w - b.w); } \
template<typename V> inline __device__ T operator*(const T &a, V b) { return make_ ## T (a.x * b, a.y * b, a.z * b, a.w * b); } \
template<typename V> inline __device__ T operator/(const T &a, V b) { return make_ ## T (a.x / b, a.y / b, a.z / b, a.w / b); } \
template<typename V> inline __device__ T operator>>(const T &a, V b) { return make_ ## T (a.x >> b, a.y >> b, a.z >> b, a.w >> b); } \
template<typename V> inline __device__ T operator<<(const T &a, V b) { return make_ ## T (a.x << b, a.y << b, a.z << b, a.w << b); } \
template<typename V> inline __device__ T &operator+=(T &a, const V &b) { a.x += b.x; a.y += b.y; a.z += b.z; a.w += b.w; return a; } \
template<typename V> inline __device__ void vec_set(T &a, const V &b) { a.x = b.x; a.y = b.y; a.z = b.z; a.w = b.w; } \
template<typename V> inline __device__ void vec_set_scalar(T &a, V b) { a.x = b; a.y = b; a.z = b; a.w = b; } \
template<> inline __device__ float4 to_floatN<T, float4>(const T &a) { return make_float4(a.x, a.y, a.z, a.w); } \
template<> inline __device__ T from_floatN<T, float4>(const float4 &a) { return make_ ## T(a.x, a.y, a.z, a.w); }
OPERATORS2(int2)
OPERATORS2(uchar2)
OPERATORS2(ushort2)
OPERATORS2(float2)
OPERATORS4(int4)
OPERATORS4(uchar4)
OPERATORS4(ushort4)
OPERATORS4(float4)
template<typename V> inline __device__ void vec_set(int &a, V b) { a = b; }
template<typename V> inline __device__ void vec_set(float &a, V b) { a = b; }
template<typename V> inline __device__ void vec_set(uchar &a, V b) { a = b; }
template<typename V> inline __device__ void vec_set(ushort &a, V b) { a = b; }
template<typename V> inline __device__ void vec_set_scalar(int &a, V b) { a = b; }
template<typename V> inline __device__ void vec_set_scalar(float &a, V b) { a = b; }
template<typename V> inline __device__ void vec_set_scalar(uchar &a, V b) { a = b; }
template<typename V> inline __device__ void vec_set_scalar(ushort &a, V b) { a = b; }
template<typename T>
inline __device__ T lerp_scalar(T v0, T v1, float t) {
return t*v1 + (1.0f - t)*v0;
}
template<>
inline __device__ float2 lerp_scalar<float2>(float2 v0, float2 v1, float t) {
return make_float2(
lerp_scalar(v0.x, v1.x, t),
lerp_scalar(v0.y, v1.y, t)
);
}
template<>
inline __device__ float4 lerp_scalar<float4>(float4 v0, float4 v1, float t) {
return make_float4(
lerp_scalar(v0.x, v1.x, t),
lerp_scalar(v0.y, v1.y, t),
lerp_scalar(v0.z, v1.z, t),
lerp_scalar(v0.w, v1.w, t)
);
}
#endif
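
A host-side C note on the blend above: the t*v1 + (1-t)*v0 form reproduces both endpoints exactly, whereas the alternative v0 + t*(v1 - v0) can miss v1 at t = 1 under rounding. A minimal standalone check:

#include <stdio.h>

static float lerp_scalar_host(float v0, float v1, float t)
{
    return t * v1 + (1.0f - t) * v0;  /* same form as lerp_scalar above */
}

int main(void)
{
    printf("%g %g %g\n",
           lerp_scalar_host(2.f, 8.f, 0.f),    /* 2 exactly  */
           lerp_scalar_host(2.f, 8.f, 1.f),    /* 8 exactly  */
           lerp_scalar_host(2.f, 8.f, 0.25f)); /* 3.5        */
    return 0;
}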

View File

@@ -1,147 +0,0 @@
/*
* Copyright (c) 2020
*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
/**
* @file
* DNN native backend implementation.
*/
#include "libavutil/avassert.h"
#include "dnn_backend_native_layer_avgpool.h"
int ff_dnn_load_layer_avg_pool(Layer *layer, AVIOContext *model_file_context, int file_size, int operands_num)
{
AvgPoolParams *avgpool_params;
int dnn_size = 0;
avgpool_params = av_malloc(sizeof(*avgpool_params));
if(!avgpool_params)
return 0;
avgpool_params->strides = (int32_t)avio_rl32(model_file_context);
avgpool_params->padding_method = (int32_t)avio_rl32(model_file_context);
avgpool_params->kernel_size = (int32_t)avio_rl32(model_file_context);
dnn_size += 12;
if (dnn_size > file_size || avgpool_params->kernel_size <= 0 || avgpool_params->strides <=0){
av_freep(&avgpool_params);
return 0;
}
layer->params = avgpool_params;
layer->input_operand_indexes[0] = (int32_t)avio_rl32(model_file_context);
layer->output_operand_index = (int32_t)avio_rl32(model_file_context);
dnn_size += 8;
if (layer->input_operand_indexes[0] >= operands_num || layer->output_operand_index >= operands_num) {
return 0;
}
return dnn_size;
}
int ff_dnn_execute_layer_avg_pool(DnnOperand *operands, const int32_t *input_operand_indexes,
int32_t output_operand_index, const void *parameters, NativeContext *ctx)
{
float *output;
int height_end, width_end, height_radius, width_radius, output_height, output_width, kernel_area;
int32_t input_operand_index = input_operand_indexes[0];
int number = operands[input_operand_index].dims[0];
int height = operands[input_operand_index].dims[1];
int width = operands[input_operand_index].dims[2];
int channel = operands[input_operand_index].dims[3];
const float *input = operands[input_operand_index].data;
const AvgPoolParams *avgpool_params = parameters;
int kernel_strides = avgpool_params->strides;
int src_linesize = width * channel;
DnnOperand *output_operand = &operands[output_operand_index];
/**
* When padding_method = SAME, TensorFlow only pads half the number of zero pixels,
* excluding the remainder.
* E.g.: assuming the input height = 1080 and strides = 11, the remainder = 1080 % 11 = 2;
* if ksize = 5, it fills (5 - 2) >> 1 = 1 line before the first line of the input image
* and 5 - 2 - 1 = 2 lines after the last line;
* if ksize = 7, it fills (7 - 2) >> 1 = 2 lines before the first line of the input image
* and 7 - 2 - 2 = 3 lines after the last line.
*/
if (avgpool_params->padding_method == SAME) {
height_end = height;
width_end = width;
height_radius = avgpool_params->kernel_size - ((height - 1) % kernel_strides + 1);
width_radius = avgpool_params->kernel_size - ((width - 1) % kernel_strides + 1);
height_radius = height_radius < 0 ? 0 : height_radius >> 1;
width_radius = width_radius < 0 ? 0 : width_radius >> 1;
output_height = ceil(height / (kernel_strides * 1.0));
output_width = ceil(width / (kernel_strides * 1.0));
} else {
av_assert0(avgpool_params->padding_method == VALID);
height_end = height - avgpool_params->kernel_size + 1;
width_end = width - avgpool_params->kernel_size + 1;
height_radius = 0;
width_radius = 0;
output_height = ceil((height - avgpool_params->kernel_size + 1) / (kernel_strides * 1.0));
output_width = ceil((width - avgpool_params->kernel_size + 1) / (kernel_strides * 1.0));
}
output_operand->dims[0] = number;
output_operand->dims[1] = output_height;
output_operand->dims[2] = output_width;
// pooling in the channel dimension is not supported yet
output_operand->dims[3] = channel;
output_operand->data_type = operands[input_operand_index].data_type;
output_operand->length = ff_calculate_operand_data_length(output_operand);
if (output_operand->length <= 0) {
av_log(ctx, AV_LOG_ERROR, "The output data length overflowed\n");
return DNN_ERROR;
}
output_operand->data = av_realloc(output_operand->data, output_operand->length);
if (!output_operand->data) {
av_log(ctx, AV_LOG_ERROR, "Failed to reallocate memory for output\n");
return DNN_ERROR;
}
output = output_operand->data;
for (int y = 0; y < height_end; y += kernel_strides) {
for (int x = 0; x < width_end; x += kernel_strides) {
for (int n_channel = 0; n_channel < channel; ++n_channel) {
output[n_channel] = 0.0;
kernel_area = 0;
for (int kernel_y = 0; kernel_y < avgpool_params->kernel_size; ++kernel_y) {
for (int kernel_x = 0; kernel_x < avgpool_params->kernel_size; ++kernel_x) {
float input_pel;
int y_pos = y + (kernel_y - height_radius);
int x_pos = x + (kernel_x - width_radius);
if (x_pos < 0 || x_pos >= width || y_pos < 0 || y_pos >= height) {
input_pel = 0.0;
} else {
kernel_area++;
input_pel = input[y_pos * src_linesize + x_pos * channel + n_channel];
}
output[n_channel] += input_pel;
}
}
output[n_channel] /= kernel_area;
}
output += channel;
}
}
return 0;
}
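/*
 * Illustrative sketch (not part of the original file): how the SAME-padding
 * radius and output size computed above work out for the numbers in the
 * comment. Only ceil() from <math.h> is assumed; all names are hypothetical.
 */
static void avgpool_same_example(void)
{
    int height = 1080, strides = 11, ksize = 5;
    int rem    = (height - 1) % strides + 1;          /* 1080 % 11 = 2         */
    int pad    = ksize - rem;                         /* 3 lines of padding    */
    int radius = pad < 0 ? 0 : pad >> 1;              /* 1 line before the top */
    int out_h  = (int)ceil(height / (strides * 1.0)); /* 99 output rows        */
    (void)radius;
    (void)out_h;
}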

View File

@@ -1,40 +0,0 @@
/*
* Copyright (c) 2020
*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
/**
* @file
* DNN inference functions interface for native backend.
*/
#ifndef AVFILTER_DNN_DNN_BACKEND_NATIVE_LAYER_AVGPOOL_H
#define AVFILTER_DNN_DNN_BACKEND_NATIVE_LAYER_AVGPOOL_H
#include "dnn_backend_native.h"
typedef struct AvgPoolParams{
int32_t strides, kernel_size;
DNNPaddingParam padding_method;
} AvgPoolParams;
int ff_dnn_load_layer_avg_pool(Layer *layer, AVIOContext *model_file_context, int file_size, int operands_num);
int ff_dnn_execute_layer_avg_pool(DnnOperand *operands, const int32_t *input_operand_indexes,
int32_t output_operand_index, const void *parameters, NativeContext *ctx);
#endif

View File

@@ -1,151 +0,0 @@
/*
* Copyright (c) 2020
*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include "libavutil/avassert.h"
#include "dnn_backend_native_layer_dense.h"
int ff_dnn_load_layer_dense(Layer *layer, AVIOContext *model_file_context, int file_size, int operands_num)
{
DenseParams *dense_params;
int kernel_size;
int dnn_size = 0;
dense_params = av_malloc(sizeof(*dense_params));
if (!dense_params)
return 0;
dense_params->activation = (int32_t)avio_rl32(model_file_context);
dense_params->input_num = (int32_t)avio_rl32(model_file_context);
dense_params->output_num = (int32_t)avio_rl32(model_file_context);
dense_params->has_bias = (int32_t)avio_rl32(model_file_context);
dnn_size += 16;
kernel_size = dense_params->input_num * dense_params->output_num;
dnn_size += kernel_size * 4;
if (dense_params->has_bias)
dnn_size += dense_params->output_num * 4;
if (dnn_size > file_size || dense_params->input_num <= 0 ||
dense_params->output_num <= 0){
av_freep(&dense_params);
return 0;
}
dense_params->kernel = av_malloc(kernel_size * sizeof(float));
if (!dense_params->kernel) {
av_freep(&dense_params);
return 0;
}
for (int i = 0; i < kernel_size; ++i) {
dense_params->kernel[i] = av_int2float(avio_rl32(model_file_context));
}
dense_params->biases = NULL;
if (dense_params->has_bias) {
dense_params->biases = av_malloc(dense_params->output_num * sizeof(float));
if (!dense_params->biases){
av_freep(&dense_params->kernel);
av_freep(&dense_params);
return 0;
}
for (int i = 0; i < dense_params->output_num; ++i){
dense_params->biases[i] = av_int2float(avio_rl32(model_file_context));
}
}
layer->params = dense_params;
layer->input_operand_indexes[0] = (int32_t)avio_rl32(model_file_context);
layer->output_operand_index = (int32_t)avio_rl32(model_file_context);
dnn_size += 8;
if (layer->input_operand_indexes[0] >= operands_num || layer->output_operand_index >= operands_num) {
return 0;
}
return dnn_size;
}
int ff_dnn_execute_layer_dense(DnnOperand *operands, const int32_t *input_operand_indexes,
int32_t output_operand_index, const void *parameters, NativeContext *ctx)
{
float *output;
int32_t input_operand_index = input_operand_indexes[0];
int number = operands[input_operand_index].dims[0];
int height = operands[input_operand_index].dims[1];
int width = operands[input_operand_index].dims[2];
int channel = operands[input_operand_index].dims[3];
const float *input = operands[input_operand_index].data;
const DenseParams *dense_params = parameters;
int src_linesize = width * channel;
DnnOperand *output_operand = &operands[output_operand_index];
output_operand->dims[0] = number;
output_operand->dims[1] = height;
output_operand->dims[2] = width;
output_operand->dims[3] = dense_params->output_num;
output_operand->data_type = operands[input_operand_index].data_type;
output_operand->length = ff_calculate_operand_data_length(output_operand);
if (output_operand->length <= 0) {
av_log(ctx, AV_LOG_ERROR, "The output data length overflowed\n");
return DNN_ERROR;
}
output_operand->data = av_realloc(output_operand->data, output_operand->length);
if (!output_operand->data) {
av_log(ctx, AV_LOG_ERROR, "Failed to reallocate memory for output\n");
return DNN_ERROR;
}
output = output_operand->data;
av_assert0(channel == dense_params->input_num);
for (int y = 0; y < height; ++y) {
for (int x = 0; x < width; ++x) {
for (int n_filter = 0; n_filter < dense_params->output_num; ++n_filter) {
if (dense_params->has_bias)
output[n_filter] = dense_params->biases[n_filter];
else
output[n_filter] = 0.f;
for (int ch = 0; ch < dense_params->input_num; ++ch) {
float input_pel;
input_pel = input[y * src_linesize + x * dense_params->input_num + ch];
output[n_filter] += input_pel * dense_params->kernel[n_filter*dense_params->input_num + ch];
}
switch (dense_params->activation){
case RELU:
output[n_filter] = FFMAX(output[n_filter], 0.0);
break;
case TANH:
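/* tanh(x) expressed via exp: tanh(x) = 2 / (1 + exp(-2x)) - 1 */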
output[n_filter] = 2.0f / (1.0f + exp(-2.0f * output[n_filter])) - 1.0f;
break;
case SIGMOID:
output[n_filter] = 1.0f / (1.0f + exp(-output[n_filter]));
break;
case NONE:
break;
case LEAKY_RELU:
output[n_filter] = FFMAX(output[n_filter], 0.0) + 0.2 * FFMIN(output[n_filter], 0.0);
}
}
output += dense_params->output_num;
}
}
return 0;
}
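/*
 * Illustrative sketch (not part of the original file): each output neuron of
 * the dense layer above is a dot product over the input channels, plus an
 * optional bias, followed by the activation. Names here are hypothetical.
 */
static float dense_neuron_example(const float *input, const float *kernel,
                                  float bias, int input_num)
{
    float sum = bias;                  /* use 0.f when the layer has no bias */
    for (int ch = 0; ch < input_num; ++ch)
        sum += input[ch] * kernel[ch]; /* dot product over input channels */
    return sum > 0.f ? sum : 0.f;      /* RELU, as one example activation */
}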

View File

@@ -1,37 +0,0 @@
/*
* Copyright (c) 2020
*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#ifndef AVFILTER_DNN_DNN_BACKEND_NATIVE_LAYER_DENSE_H
#define AVFILTER_DNN_DNN_BACKEND_NATIVE_LAYER_DENSE_H
#include "dnn_backend_native.h"
typedef struct DenseParams{
int32_t input_num, output_num;
DNNActivationFunc activation;
int32_t has_bias;
float *kernel;
float *biases;
} DenseParams;
int ff_dnn_load_layer_dense(Layer *layer, AVIOContext *model_file_context, int file_size, int operands_num);
int ff_dnn_execute_layer_dense(DnnOperand *operands, const int32_t *input_operand_indexes,
int32_t output_operand_index, const void *parameters, NativeContext *ctx);
#endif

View File

@@ -1,814 +0,0 @@
/*
* Copyright (c) 2020
*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
/**
* @file
* DNN OpenVINO backend implementation.
*/
#include "dnn_backend_openvino.h"
#include "dnn_io_proc.h"
#include "libavformat/avio.h"
#include "libavutil/avassert.h"
#include "libavutil/opt.h"
#include "libavutil/avstring.h"
#include "../internal.h"
#include "queue.h"
#include "safe_queue.h"
#include <c_api/ie_c_api.h>
typedef struct OVOptions{
char *device_type;
int nireq;
int batch_size;
int input_resizable;
} OVOptions;
typedef struct OVContext {
const AVClass *class;
OVOptions options;
} OVContext;
typedef struct OVModel{
OVContext ctx;
DNNModel *model;
ie_core_t *core;
ie_network_t *network;
ie_executable_network_t *exe_network;
ie_infer_request_t *infer_request;
/* for async execution */
SafeQueue *request_queue; // holds RequestItem
Queue *task_queue; // holds TaskItem
} OVModel;
typedef struct TaskItem {
OVModel *ov_model;
const char *input_name;
AVFrame *in_frame;
const char *output_name;
AVFrame *out_frame;
int do_ioproc;
int async;
int done;
} TaskItem;
typedef struct RequestItem {
ie_infer_request_t *infer_request;
TaskItem **tasks;
int task_count;
ie_complete_call_back_t callback;
} RequestItem;
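/* Note: APPEND_STRING below builds a new string with av_asprintf each time and
 * does not free the previous buffer; it is only used on error-reporting paths. */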
#define APPEND_STRING(generated_string, iterate_string) \
generated_string = generated_string ? av_asprintf("%s %s", generated_string, iterate_string) : \
av_asprintf("%s", iterate_string);
#define OFFSET(x) offsetof(OVContext, x)
#define FLAGS AV_OPT_FLAG_FILTERING_PARAM
static const AVOption dnn_openvino_options[] = {
{ "device", "device to run model", OFFSET(options.device_type), AV_OPT_TYPE_STRING, { .str = "CPU" }, 0, 0, FLAGS },
{ "nireq", "number of request", OFFSET(options.nireq), AV_OPT_TYPE_INT, { .i64 = 0 }, 0, INT_MAX, FLAGS },
{ "batch_size", "batch size per request", OFFSET(options.batch_size), AV_OPT_TYPE_INT, { .i64 = 1 }, 1, 1000, FLAGS},
{ "input_resizable", "can input be resizable or not", OFFSET(options.input_resizable), AV_OPT_TYPE_BOOL, { .i64 = 0 }, 0, 1, FLAGS },
{ NULL }
};
AVFILTER_DEFINE_CLASS(dnn_openvino);
static DNNDataType precision_to_datatype(precision_e precision)
{
switch (precision)
{
case FP32:
return DNN_FLOAT;
case U8:
return DNN_UINT8;
default:
av_assert0(!"not supported yet.");
return DNN_FLOAT;
}
}
static int get_datatype_size(DNNDataType dt)
{
switch (dt)
{
case DNN_FLOAT:
return sizeof(float);
case DNN_UINT8:
return sizeof(uint8_t);
default:
av_assert0(!"not supported yet.");
return 1;
}
}
static DNNReturnType fill_model_input_ov(OVModel *ov_model, RequestItem *request)
{
dimensions_t dims;
precision_e precision;
ie_blob_buffer_t blob_buffer;
OVContext *ctx = &ov_model->ctx;
IEStatusCode status;
DNNData input;
ie_blob_t *input_blob = NULL;
TaskItem *task = request->tasks[0];
status = ie_infer_request_get_blob(request->infer_request, task->input_name, &input_blob);
if (status != OK) {
av_log(ctx, AV_LOG_ERROR, "Failed to get input blob with name %s\n", task->input_name);
return DNN_ERROR;
}
status |= ie_blob_get_dims(input_blob, &dims);
status |= ie_blob_get_precision(input_blob, &precision);
if (status != OK) {
ie_blob_free(&input_blob);
av_log(ctx, AV_LOG_ERROR, "Failed to get input blob dims/precision\n");
return DNN_ERROR;
}
status = ie_blob_get_buffer(input_blob, &blob_buffer);
if (status != OK) {
ie_blob_free(&input_blob);
av_log(ctx, AV_LOG_ERROR, "Failed to get input blob buffer\n");
return DNN_ERROR;
}
input.height = dims.dims[2];
input.width = dims.dims[3];
input.channels = dims.dims[1];
input.data = blob_buffer.buffer;
input.dt = precision_to_datatype(precision);
// All models in the OpenVINO Open Model Zoo use BGR as input;
// make this an option when necessary.
input.order = DCO_BGR;
av_assert0(request->task_count <= dims.dims[0]);
for (int i = 0; i < request->task_count; ++i) {
task = request->tasks[i];
if (task->do_ioproc) {
if (ov_model->model->pre_proc != NULL) {
ov_model->model->pre_proc(task->in_frame, &input, ov_model->model->filter_ctx);
} else {
ff_proc_from_frame_to_dnn(task->in_frame, &input, ov_model->model->func_type, ctx);
}
}
input.data = (uint8_t *)input.data
+ input.width * input.height * input.channels * get_datatype_size(input.dt);
}
ie_blob_free(&input_blob);
return DNN_SUCCESS;
}
static void infer_completion_callback(void *args)
{
dimensions_t dims;
precision_e precision;
IEStatusCode status;
RequestItem *request = args;
TaskItem *task = request->tasks[0];
SafeQueue *requestq = task->ov_model->request_queue;
ie_blob_t *output_blob = NULL;
ie_blob_buffer_t blob_buffer;
DNNData output;
OVContext *ctx = &task->ov_model->ctx;
status = ie_infer_request_get_blob(request->infer_request, task->output_name, &output_blob);
if (status != OK) {
//incorrect output name
char *model_output_name = NULL;
char *all_output_names = NULL;
size_t model_output_count = 0;
av_log(ctx, AV_LOG_ERROR, "Failed to get model output data\n");
status = ie_network_get_outputs_number(task->ov_model->network, &model_output_count);
for (size_t i = 0; i < model_output_count; i++) {
status = ie_network_get_output_name(task->ov_model->network, i, &model_output_name);
APPEND_STRING(all_output_names, model_output_name)
}
av_log(ctx, AV_LOG_ERROR,
"output \"%s\" may not be correct; all outputs are: \"%s\"\n",
task->output_name, all_output_names);
return;
}
status = ie_blob_get_buffer(output_blob, &blob_buffer);
if (status != OK) {
ie_blob_free(&output_blob);
av_log(ctx, AV_LOG_ERROR, "Failed to access output memory\n");
return;
}
status |= ie_blob_get_dims(output_blob, &dims);
status |= ie_blob_get_precision(output_blob, &precision);
if (status != OK) {
ie_blob_free(&output_blob);
av_log(ctx, AV_LOG_ERROR, "Failed to get dims or precision of output\n");
return;
}
output.channels = dims.dims[1];
output.height = dims.dims[2];
output.width = dims.dims[3];
output.dt = precision_to_datatype(precision);
output.data = blob_buffer.buffer;
av_assert0(request->task_count <= dims.dims[0]);
av_assert0(request->task_count >= 1);
for (int i = 0; i < request->task_count; ++i) {
task = request->tasks[i];
if (task->do_ioproc) {
if (task->ov_model->model->post_proc != NULL) {
task->ov_model->model->post_proc(task->out_frame, &output, task->ov_model->model->filter_ctx);
} else {
ff_proc_from_dnn_to_frame(task->out_frame, &output, ctx);
}
} else {
task->out_frame->width = output.width;
task->out_frame->height = output.height;
}
task->done = 1;
output.data = (uint8_t *)output.data
+ output.width * output.height * output.channels * get_datatype_size(output.dt);
}
ie_blob_free(&output_blob);
request->task_count = 0;
if (task->async) {
if (ff_safe_queue_push_back(requestq, request) < 0) {
av_log(ctx, AV_LOG_ERROR, "Failed to push back request_queue.\n");
return;
}
}
}
static DNNReturnType init_model_ov(OVModel *ov_model, const char *input_name, const char *output_name)
{
OVContext *ctx = &ov_model->ctx;
IEStatusCode status;
ie_available_devices_t a_dev;
ie_config_t config = {NULL, NULL, NULL};
char *all_dev_names = NULL;
// batch size
if (ctx->options.batch_size <= 0) {
ctx->options.batch_size = 1;
}
if (ctx->options.batch_size > 1) {
input_shapes_t input_shapes;
status = ie_network_get_input_shapes(ov_model->network, &input_shapes);
if (status != OK)
goto err;
for (int i = 0; i < input_shapes.shape_num; i++)
input_shapes.shapes[i].shape.dims[0] = ctx->options.batch_size;
status = ie_network_reshape(ov_model->network, input_shapes);
ie_network_input_shapes_free(&input_shapes);
if (status != OK)
goto err;
}
// The dim order in OpenVINO is fixed: it is always NCHW for 4-D data,
// while we pass NHWC data from FFmpeg to OpenVINO.
status = ie_network_set_input_layout(ov_model->network, input_name, NHWC);
if (status != OK) {
av_log(ctx, AV_LOG_ERROR, "Failed to set layout as NHWC for input %s\n", input_name);
goto err;
}
status = ie_network_set_output_layout(ov_model->network, output_name, NHWC);
if (status != OK) {
av_log(ctx, AV_LOG_ERROR, "Failed to set layout as NHWC for output %s\n", output_name);
goto err;
}
// All models in the OpenVINO Open Model Zoo use BGR with range [0.0f, 255.0f] as input.
// We don't have an AVPixelFormat to describe it, so we'll use AV_PIX_FMT_BGR24 and
// ask OpenVINO to do the conversion internally.
// The currently supported SR model (frame processing) is generated from a TensorFlow
// model, and its input is the Y channel as float with range [0.0f, 1.0f], so do not
// set the input precision in this case.
// TODO: we need a final, clear and general solution with all backends/formats considered.
if (ov_model->model->func_type != DFT_PROCESS_FRAME) {
status = ie_network_set_input_precision(ov_model->network, input_name, U8);
if (status != OK) {
av_log(ctx, AV_LOG_ERROR, "Failed to set input precision as U8 for %s\n", input_name);
goto err;
}
}
status = ie_core_load_network(ov_model->core, ov_model->network, ctx->options.device_type, &config, &ov_model->exe_network);
if (status != OK) {
av_log(ctx, AV_LOG_ERROR, "Failed to load OpenVINO model network\n");
status = ie_core_get_available_devices(ov_model->core, &a_dev);
if (status != OK) {
av_log(ctx, AV_LOG_ERROR, "Failed to get available devices\n");
goto err;
}
for (int i = 0; i < a_dev.num_devices; i++) {
APPEND_STRING(all_dev_names, a_dev.devices[i])
}
av_log(ctx, AV_LOG_ERROR,"device %s may not be supported, all available devices are: \"%s\"\n",
ctx->options.device_type, all_dev_names);
goto err;
}
// create infer_request for sync execution
status = ie_exec_network_create_infer_request(ov_model->exe_network, &ov_model->infer_request);
if (status != OK)
goto err;
// create infer_requests for async execution
if (ctx->options.nireq <= 0) {
// the default value is a rough estimation
ctx->options.nireq = av_cpu_count() / 2 + 1;
}
ov_model->request_queue = ff_safe_queue_create();
if (!ov_model->request_queue) {
goto err;
}
for (int i = 0; i < ctx->options.nireq; i++) {
RequestItem *item = av_mallocz(sizeof(*item));
if (!item) {
goto err;
}
item->callback.completeCallBackFunc = infer_completion_callback;
item->callback.args = item;
if (ff_safe_queue_push_back(ov_model->request_queue, item) < 0) {
av_freep(&item);
goto err;
}
status = ie_exec_network_create_infer_request(ov_model->exe_network, &item->infer_request);
if (status != OK) {
goto err;
}
item->tasks = av_malloc_array(ctx->options.batch_size, sizeof(*item->tasks));
if (!item->tasks) {
goto err;
}
item->task_count = 0;
}
ov_model->task_queue = ff_queue_create();
if (!ov_model->task_queue) {
goto err;
}
return DNN_SUCCESS;
err:
ff_dnn_free_model_ov(&ov_model->model);
return DNN_ERROR;
}
static DNNReturnType execute_model_ov(RequestItem *request)
{
IEStatusCode status;
DNNReturnType ret;
TaskItem *task = request->tasks[0];
OVContext *ctx = &task->ov_model->ctx;
if (task->async) {
if (request->task_count < ctx->options.batch_size) {
if (ff_safe_queue_push_front(task->ov_model->request_queue, request) < 0) {
av_log(ctx, AV_LOG_ERROR, "Failed to push back request_queue.\n");
return DNN_ERROR;
}
return DNN_SUCCESS;
}
ret = fill_model_input_ov(task->ov_model, request);
if (ret != DNN_SUCCESS) {
return ret;
}
status = ie_infer_set_completion_callback(request->infer_request, &request->callback);
if (status != OK) {
av_log(ctx, AV_LOG_ERROR, "Failed to set completion callback for inference\n");
return DNN_ERROR;
}
status = ie_infer_request_infer_async(request->infer_request);
if (status != OK) {
av_log(ctx, AV_LOG_ERROR, "Failed to start async inference\n");
return DNN_ERROR;
}
return DNN_SUCCESS;
} else {
ret = fill_model_input_ov(task->ov_model, request);
if (ret != DNN_SUCCESS) {
return ret;
}
status = ie_infer_request_infer(request->infer_request);
if (status != OK) {
av_log(ctx, AV_LOG_ERROR, "Failed to start synchronous model inference\n");
return DNN_ERROR;
}
infer_completion_callback(request);
return task->done ? DNN_SUCCESS : DNN_ERROR;
}
}
static DNNReturnType get_input_ov(void *model, DNNData *input, const char *input_name)
{
OVModel *ov_model = model;
OVContext *ctx = &ov_model->ctx;
char *model_input_name = NULL;
char *all_input_names = NULL;
IEStatusCode status;
size_t model_input_count = 0;
dimensions_t dims;
precision_e precision;
int input_resizable = ctx->options.input_resizable;
status = ie_network_get_inputs_number(ov_model->network, &model_input_count);
if (status != OK) {
av_log(ctx, AV_LOG_ERROR, "Failed to get input count\n");
return DNN_ERROR;
}
for (size_t i = 0; i < model_input_count; i++) {
status = ie_network_get_input_name(ov_model->network, i, &model_input_name);
if (status != OK) {
av_log(ctx, AV_LOG_ERROR, "Failed to get the name of input #%d\n", (int)i);
return DNN_ERROR;
}
if (strcmp(model_input_name, input_name) == 0) {
ie_network_name_free(&model_input_name);
status |= ie_network_get_input_dims(ov_model->network, input_name, &dims);
status |= ie_network_get_input_precision(ov_model->network, input_name, &precision);
if (status != OK) {
av_log(ctx, AV_LOG_ERROR, "Failed to get the dims or precision of input #%d\n", (int)i);
return DNN_ERROR;
}
input->channels = dims.dims[1];
input->height = input_resizable ? -1 : dims.dims[2];
input->width = input_resizable ? -1 : dims.dims[3];
input->dt = precision_to_datatype(precision);
return DNN_SUCCESS;
} else {
//incorrect input name
APPEND_STRING(all_input_names, model_input_name)
}
ie_network_name_free(&model_input_name);
}
av_log(ctx, AV_LOG_ERROR, "Could not find \"%s\" in model, all input(s) are: \"%s\"\n", input_name, all_input_names);
return DNN_ERROR;
}
static DNNReturnType get_output_ov(void *model, const char *input_name, int input_width, int input_height,
const char *output_name, int *output_width, int *output_height)
{
DNNReturnType ret;
OVModel *ov_model = model;
OVContext *ctx = &ov_model->ctx;
TaskItem task;
RequestItem request;
AVFrame *in_frame = NULL;
AVFrame *out_frame = NULL;
TaskItem *ptask = &task;
IEStatusCode status;
input_shapes_t input_shapes;
if (ctx->options.input_resizable) {
status = ie_network_get_input_shapes(ov_model->network, &input_shapes);
input_shapes.shapes->shape.dims[2] = input_height;
input_shapes.shapes->shape.dims[3] = input_width;
status |= ie_network_reshape(ov_model->network, input_shapes);
ie_network_input_shapes_free(&input_shapes);
if (status != OK) {
av_log(ctx, AV_LOG_ERROR, "Failed to reshape input size for %s\n", input_name);
return DNN_ERROR;
}
}
if (!ov_model->exe_network) {
if (init_model_ov(ov_model, input_name, output_name) != DNN_SUCCESS) {
av_log(ctx, AV_LOG_ERROR, "Failed to init the OpenVINO executable network or inference request\n");
return DNN_ERROR;
}
}
in_frame = av_frame_alloc();
if (!in_frame) {
av_log(ctx, AV_LOG_ERROR, "Failed to allocate memory for input frame\n");
return DNN_ERROR;
}
in_frame->width = input_width;
in_frame->height = input_height;
out_frame = av_frame_alloc();
if (!out_frame) {
av_log(ctx, AV_LOG_ERROR, "Failed to allocate memory for output frame\n");
av_frame_free(&in_frame);
return DNN_ERROR;
}
task.done = 0;
task.do_ioproc = 0;
task.async = 0;
task.input_name = input_name;
task.in_frame = in_frame;
task.output_name = output_name;
task.out_frame = out_frame;
task.ov_model = ov_model;
request.infer_request = ov_model->infer_request;
request.task_count = 1;
request.tasks = &ptask;
ret = execute_model_ov(&request);
*output_width = out_frame->width;
*output_height = out_frame->height;
av_frame_free(&out_frame);
av_frame_free(&in_frame);
return ret;
}
DNNModel *ff_dnn_load_model_ov(const char *model_filename, DNNFunctionType func_type, const char *options, AVFilterContext *filter_ctx)
{
DNNModel *model = NULL;
OVModel *ov_model = NULL;
OVContext *ctx = NULL;
IEStatusCode status;
model = av_mallocz(sizeof(DNNModel));
if (!model){
return NULL;
}
ov_model = av_mallocz(sizeof(OVModel));
if (!ov_model) {
av_freep(&model);
return NULL;
}
model->model = ov_model;
ov_model->model = model;
ov_model->ctx.class = &dnn_openvino_class;
ctx = &ov_model->ctx;
//parse options
av_opt_set_defaults(ctx);
if (av_opt_set_from_string(ctx, options, NULL, "=", "&") < 0) {
av_log(ctx, AV_LOG_ERROR, "Failed to parse options \"%s\"\n", options);
goto err;
}
status = ie_core_create("", &ov_model->core);
if (status != OK)
goto err;
status = ie_core_read_network(ov_model->core, model_filename, NULL, &ov_model->network);
if (status != OK) {
ie_version_t ver;
ver = ie_c_api_version();
av_log(ctx, AV_LOG_ERROR, "Failed to read the network from model file %s,\n"
"Please check if the model version matches the runtime OpenVINO %s\n",
model_filename, ver.api_version);
ie_version_free(&ver);
goto err;
}
model->get_input = &get_input_ov;
model->get_output = &get_output_ov;
model->options = options;
model->filter_ctx = filter_ctx;
model->func_type = func_type;
return model;
err:
ff_dnn_free_model_ov(&model);
return NULL;
}
DNNReturnType ff_dnn_execute_model_ov(const DNNModel *model, const char *input_name, AVFrame *in_frame,
const char **output_names, uint32_t nb_output, AVFrame *out_frame)
{
OVModel *ov_model = model->model;
OVContext *ctx = &ov_model->ctx;
TaskItem task;
RequestItem request;
TaskItem *ptask = &task;
if (!in_frame) {
av_log(ctx, AV_LOG_ERROR, "in frame is NULL when executing the model.\n");
return DNN_ERROR;
}
if (!out_frame && model->func_type == DFT_PROCESS_FRAME) {
av_log(ctx, AV_LOG_ERROR, "out frame is NULL when executing the model.\n");
return DNN_ERROR;
}
if (nb_output != 1) {
// currently, the filter does not need multiple outputs,
// so we postpone the support until we really need it.
avpriv_report_missing_feature(ctx, "multiple outputs");
return DNN_ERROR;
}
if (ctx->options.batch_size > 1) {
avpriv_report_missing_feature(ctx, "batch mode for sync execution");
return DNN_ERROR;
}
if (!ov_model->exe_network) {
if (init_model_ov(ov_model, input_name, output_names[0]) != DNN_SUCCESS) {
av_log(ctx, AV_LOG_ERROR, "Failed to init the OpenVINO executable network or inference request\n");
return DNN_ERROR;
}
}
task.done = 0;
task.do_ioproc = 1;
task.async = 0;
task.input_name = input_name;
task.in_frame = in_frame;
task.output_name = output_names[0];
task.out_frame = out_frame;
task.ov_model = ov_model;
request.infer_request = ov_model->infer_request;
request.task_count = 1;
request.tasks = &ptask;
return execute_model_ov(&request);
}
DNNReturnType ff_dnn_execute_model_async_ov(const DNNModel *model, const char *input_name, AVFrame *in_frame,
const char **output_names, uint32_t nb_output, AVFrame *out_frame)
{
OVModel *ov_model = model->model;
OVContext *ctx = &ov_model->ctx;
RequestItem *request;
TaskItem *task;
if (!in_frame) {
av_log(ctx, AV_LOG_ERROR, "in frame is NULL when executing the model asynchronously.\n");
return DNN_ERROR;
}
if (!out_frame && model->func_type == DFT_PROCESS_FRAME) {
av_log(ctx, AV_LOG_ERROR, "out frame is NULL when executing the model asynchronously.\n");
return DNN_ERROR;
}
if (!ov_model->exe_network) {
if (init_model_ov(ov_model, input_name, output_names[0]) != DNN_SUCCESS) {
av_log(ctx, AV_LOG_ERROR, "Failed to init the OpenVINO executable network or inference request\n");
return DNN_ERROR;
}
}
task = av_malloc(sizeof(*task));
if (!task) {
av_log(ctx, AV_LOG_ERROR, "unable to alloc memory for task item.\n");
return DNN_ERROR;
}
task->done = 0;
task->do_ioproc = 1;
task->async = 1;
task->input_name = input_name;
task->in_frame = in_frame;
task->output_name = output_names[0];
task->out_frame = out_frame;
task->ov_model = ov_model;
if (ff_queue_push_back(ov_model->task_queue, task) < 0) {
av_freep(&task);
av_log(ctx, AV_LOG_ERROR, "unable to push back task_queue.\n");
return DNN_ERROR;
}
request = ff_safe_queue_pop_front(ov_model->request_queue);
if (!request) {
av_log(ctx, AV_LOG_ERROR, "unable to get infer request.\n");
return DNN_ERROR;
}
request->tasks[request->task_count++] = task;
return execute_model_ov(request);
}
DNNAsyncStatusType ff_dnn_get_async_result_ov(const DNNModel *model, AVFrame **in, AVFrame **out)
{
OVModel *ov_model = model->model;
TaskItem *task = ff_queue_peek_front(ov_model->task_queue);
if (!task) {
return DAST_EMPTY_QUEUE;
}
if (!task->done) {
return DAST_NOT_READY;
}
*in = task->in_frame;
*out = task->out_frame;
ff_queue_pop_front(ov_model->task_queue);
av_freep(&task);
return DAST_SUCCESS;
}
DNNReturnType ff_dnn_flush_ov(const DNNModel *model)
{
OVModel *ov_model = model->model;
OVContext *ctx = &ov_model->ctx;
RequestItem *request;
IEStatusCode status;
DNNReturnType ret;
request = ff_safe_queue_pop_front(ov_model->request_queue);
if (!request) {
av_log(ctx, AV_LOG_ERROR, "unable to get infer request.\n");
return DNN_ERROR;
}
if (request->task_count == 0) {
// no pending tasks need to be flushed
if (ff_safe_queue_push_back(ov_model->request_queue, request) < 0) {
av_log(ctx, AV_LOG_ERROR, "Failed to push back request_queue.\n");
return DNN_ERROR;
}
return DNN_SUCCESS;
}
ret = fill_model_input_ov(ov_model, request);
if (ret != DNN_SUCCESS) {
av_log(ctx, AV_LOG_ERROR, "Failed to fill model input.\n");
return ret;
}
status = ie_infer_set_completion_callback(request->infer_request, &request->callback);
if (status != OK) {
av_log(ctx, AV_LOG_ERROR, "Failed to set completion callback for inference\n");
return DNN_ERROR;
}
status = ie_infer_request_infer_async(request->infer_request);
if (status != OK) {
av_log(ctx, AV_LOG_ERROR, "Failed to start async inference\n");
return DNN_ERROR;
}
return DNN_SUCCESS;
}
void ff_dnn_free_model_ov(DNNModel **model)
{
if (*model){
OVModel *ov_model = (*model)->model;
while (ff_safe_queue_size(ov_model->request_queue) != 0) {
RequestItem *item = ff_safe_queue_pop_front(ov_model->request_queue);
if (item && item->infer_request) {
ie_infer_request_free(&item->infer_request);
}
av_freep(&item->tasks);
av_freep(&item);
}
ff_safe_queue_destroy(ov_model->request_queue);
while (ff_queue_size(ov_model->task_queue) != 0) {
TaskItem *item = ff_queue_pop_front(ov_model->task_queue);
av_frame_free(&item->in_frame);
av_frame_free(&item->out_frame);
av_freep(&item);
}
ff_queue_destroy(ov_model->task_queue);
if (ov_model->infer_request)
ie_infer_request_free(&ov_model->infer_request);
if (ov_model->exe_network)
ie_exec_network_free(&ov_model->exe_network);
if (ov_model->network)
ie_network_free(&ov_model->network);
if (ov_model->core)
ie_core_free(&ov_model->core);
av_freep(&ov_model);
av_freep(model);
}
}
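/*
 * Minimal usage sketch of the synchronous path above (illustrative, not part
 * of the original file). The model path and the tensor names "input"/"output"
 * are hypothetical; error handling is trimmed.
 */
static int ov_sync_example(AVFilterContext *filter_ctx, AVFrame *in, AVFrame *out)
{
    const char *output_names[] = { "output" };
    DNNModel *model = ff_dnn_load_model_ov("model.xml", DFT_PROCESS_FRAME,
                                           "device=CPU", filter_ctx);
    if (!model)
        return -1;
    if (ff_dnn_execute_model_ov(model, "input", in, output_names, 1, out) != DNN_SUCCESS) {
        ff_dnn_free_model_ov(&model);
        return -1;
    }
    ff_dnn_free_model_ov(&model);
    return 0;
}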

View File

@@ -1,43 +0,0 @@
/*
* Copyright (c) 2020
*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
/**
* @file
* DNN inference functions interface for OpenVINO backend.
*/
#ifndef AVFILTER_DNN_DNN_BACKEND_OPENVINO_H
#define AVFILTER_DNN_DNN_BACKEND_OPENVINO_H
#include "../dnn_interface.h"
DNNModel *ff_dnn_load_model_ov(const char *model_filename, DNNFunctionType func_type, const char *options, AVFilterContext *filter_ctx);
DNNReturnType ff_dnn_execute_model_ov(const DNNModel *model, const char *input_name, AVFrame *in_frame,
const char **output_names, uint32_t nb_output, AVFrame *out_frame);
DNNReturnType ff_dnn_execute_model_async_ov(const DNNModel *model, const char *input_name, AVFrame *in_frame,
const char **output_names, uint32_t nb_output, AVFrame *out_frame);
DNNAsyncStatusType ff_dnn_get_async_result_ov(const DNNModel *model, AVFrame **in, AVFrame **out);
DNNReturnType ff_dnn_flush_ov(const DNNModel *model);
void ff_dnn_free_model_ov(DNNModel **model);
#endif

View File

@@ -1,219 +0,0 @@
/*
* Copyright (c) 2020
*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include "dnn_io_proc.h"
#include "libavutil/imgutils.h"
#include "libswscale/swscale.h"
#include "libavutil/avassert.h"
DNNReturnType ff_proc_from_dnn_to_frame(AVFrame *frame, DNNData *output, void *log_ctx)
{
struct SwsContext *sws_ctx;
int bytewidth = av_image_get_linesize(frame->format, frame->width, 0);
if (output->dt != DNN_FLOAT) {
avpriv_report_missing_feature(log_ctx, "data type rather than DNN_FLOAT");
return DNN_ERROR;
}
switch (frame->format) {
case AV_PIX_FMT_RGB24:
case AV_PIX_FMT_BGR24:
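/* Packed RGB24/BGR24 is treated as a grayscale image three times as wide,
 * so each interleaved byte channel maps onto one float sample. */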
sws_ctx = sws_getContext(frame->width * 3,
frame->height,
AV_PIX_FMT_GRAYF32,
frame->width * 3,
frame->height,
AV_PIX_FMT_GRAY8,
0, NULL, NULL, NULL);
if (!sws_ctx) {
av_log(log_ctx, AV_LOG_ERROR, "Impossible to create scale context for the conversion "
"fmt:%s s:%dx%d -> fmt:%s s:%dx%d\n",
av_get_pix_fmt_name(AV_PIX_FMT_GRAYF32), frame->width * 3, frame->height,
av_get_pix_fmt_name(AV_PIX_FMT_GRAY8), frame->width * 3, frame->height);
return DNN_ERROR;
}
sws_scale(sws_ctx, (const uint8_t *[4]){(const uint8_t *)output->data, 0, 0, 0},
(const int[4]){frame->width * 3 * sizeof(float), 0, 0, 0}, 0, frame->height,
(uint8_t * const*)frame->data, frame->linesize);
sws_freeContext(sws_ctx);
return DNN_SUCCESS;
case AV_PIX_FMT_GRAYF32:
av_image_copy_plane(frame->data[0], frame->linesize[0],
output->data, bytewidth,
bytewidth, frame->height);
return DNN_SUCCESS;
case AV_PIX_FMT_YUV420P:
case AV_PIX_FMT_YUV422P:
case AV_PIX_FMT_YUV444P:
case AV_PIX_FMT_YUV410P:
case AV_PIX_FMT_YUV411P:
case AV_PIX_FMT_GRAY8:
case AV_PIX_FMT_NV12:
sws_ctx = sws_getContext(frame->width,
frame->height,
AV_PIX_FMT_GRAYF32,
frame->width,
frame->height,
AV_PIX_FMT_GRAY8,
0, NULL, NULL, NULL);
if (!sws_ctx) {
av_log(log_ctx, AV_LOG_ERROR, "Impossible to create scale context for the conversion "
"fmt:%s s:%dx%d -> fmt:%s s:%dx%d\n",
av_get_pix_fmt_name(AV_PIX_FMT_GRAYF32), frame->width, frame->height,
av_get_pix_fmt_name(AV_PIX_FMT_GRAY8), frame->width, frame->height);
return DNN_ERROR;
}
sws_scale(sws_ctx, (const uint8_t *[4]){(const uint8_t *)output->data, 0, 0, 0},
(const int[4]){frame->width * sizeof(float), 0, 0, 0}, 0, frame->height,
(uint8_t * const*)frame->data, frame->linesize);
sws_freeContext(sws_ctx);
return DNN_SUCCESS;
default:
avpriv_report_missing_feature(log_ctx, "%s", av_get_pix_fmt_name(frame->format));
return DNN_ERROR;
}
return DNN_SUCCESS;
}
static DNNReturnType proc_from_frame_to_dnn_frameprocessing(AVFrame *frame, DNNData *input, void *log_ctx)
{
struct SwsContext *sws_ctx;
int bytewidth = av_image_get_linesize(frame->format, frame->width, 0);
if (input->dt != DNN_FLOAT) {
avpriv_report_missing_feature(log_ctx, "data type rather than DNN_FLOAT");
return DNN_ERROR;
}
switch (frame->format) {
case AV_PIX_FMT_RGB24:
case AV_PIX_FMT_BGR24:
sws_ctx = sws_getContext(frame->width * 3,
frame->height,
AV_PIX_FMT_GRAY8,
frame->width * 3,
frame->height,
AV_PIX_FMT_GRAYF32,
0, NULL, NULL, NULL);
if (!sws_ctx) {
av_log(log_ctx, AV_LOG_ERROR, "Impossible to create scale context for the conversion "
"fmt:%s s:%dx%d -> fmt:%s s:%dx%d\n",
av_get_pix_fmt_name(AV_PIX_FMT_GRAY8), frame->width * 3, frame->height,
av_get_pix_fmt_name(AV_PIX_FMT_GRAYF32), frame->width * 3, frame->height);
return DNN_ERROR;
}
sws_scale(sws_ctx, (const uint8_t **)frame->data,
frame->linesize, 0, frame->height,
(uint8_t * const*)(&input->data),
(const int [4]){frame->width * 3 * sizeof(float), 0, 0, 0});
sws_freeContext(sws_ctx);
break;
case AV_PIX_FMT_GRAYF32:
av_image_copy_plane(input->data, bytewidth,
frame->data[0], frame->linesize[0],
bytewidth, frame->height);
break;
case AV_PIX_FMT_YUV420P:
case AV_PIX_FMT_YUV422P:
case AV_PIX_FMT_YUV444P:
case AV_PIX_FMT_YUV410P:
case AV_PIX_FMT_YUV411P:
case AV_PIX_FMT_GRAY8:
case AV_PIX_FMT_NV12:
sws_ctx = sws_getContext(frame->width,
frame->height,
AV_PIX_FMT_GRAY8,
frame->width,
frame->height,
AV_PIX_FMT_GRAYF32,
0, NULL, NULL, NULL);
if (!sws_ctx) {
av_log(log_ctx, AV_LOG_ERROR, "Impossible to create scale context for the conversion "
"fmt:%s s:%dx%d -> fmt:%s s:%dx%d\n",
av_get_pix_fmt_name(AV_PIX_FMT_GRAY8), frame->width, frame->height,
av_get_pix_fmt_name(AV_PIX_FMT_GRAYF32), frame->width, frame->height);
return DNN_ERROR;
}
sws_scale(sws_ctx, (const uint8_t **)frame->data,
frame->linesize, 0, frame->height,
(uint8_t * const*)(&input->data),
(const int [4]){frame->width * sizeof(float), 0, 0, 0});
sws_freeContext(sws_ctx);
break;
default:
avpriv_report_missing_feature(log_ctx, "%s", av_get_pix_fmt_name(frame->format));
return DNN_ERROR;
}
return DNN_SUCCESS;
}
static enum AVPixelFormat get_pixel_format(DNNData *data)
{
if (data->dt == DNN_UINT8 && data->order == DCO_BGR) {
return AV_PIX_FMT_BGR24;
}
av_assert0(!"not supported yet.\n");
return AV_PIX_FMT_BGR24;
}
static DNNReturnType proc_from_frame_to_dnn_analytics(AVFrame *frame, DNNData *input, void *log_ctx)
{
struct SwsContext *sws_ctx;
int linesizes[4];
enum AVPixelFormat fmt = get_pixel_format(input);
sws_ctx = sws_getContext(frame->width, frame->height, frame->format,
input->width, input->height, fmt,
SWS_FAST_BILINEAR, NULL, NULL, NULL);
if (!sws_ctx) {
av_log(log_ctx, AV_LOG_ERROR, "Impossible to create scale context for the conversion "
"fmt:%s s:%dx%d -> fmt:%s s:%dx%d\n",
av_get_pix_fmt_name(frame->format), frame->width, frame->height,
av_get_pix_fmt_name(fmt), input->width, input->height);
return DNN_ERROR;
}
if (av_image_fill_linesizes(linesizes, fmt, input->width) < 0) {
av_log(log_ctx, AV_LOG_ERROR, "unable to get linesizes with av_image_fill_linesizes\n");
sws_freeContext(sws_ctx);
return DNN_ERROR;
}
sws_scale(sws_ctx, (const uint8_t *const *)frame->data, frame->linesize, 0, frame->height,
(uint8_t *const *)(&input->data), linesizes);
sws_freeContext(sws_ctx);
return DNN_SUCCESS;
}
DNNReturnType ff_proc_from_frame_to_dnn(AVFrame *frame, DNNData *input, DNNFunctionType func_type, void *log_ctx)
{
switch (func_type)
{
case DFT_PROCESS_FRAME:
return proc_from_frame_to_dnn_frameprocessing(frame, input, log_ctx);
case DFT_ANALYTICS_DETECT:
return proc_from_frame_to_dnn_analytics(frame, input, log_ctx);
default:
avpriv_report_missing_feature(log_ctx, "model function type %d", func_type);
return DNN_ERROR;
}
}

View File

@@ -1,36 +0,0 @@
/*
* Copyright (c) 2020
*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
/**
* @file
* DNN input&output process between AVFrame and DNNData.
*/
#ifndef AVFILTER_DNN_DNN_IO_PROC_H
#define AVFILTER_DNN_DNN_IO_PROC_H
#include "../dnn_interface.h"
#include "libavutil/frame.h"
DNNReturnType ff_proc_from_frame_to_dnn(AVFrame *frame, DNNData *input, DNNFunctionType func_type, void *log_ctx);
DNNReturnType ff_proc_from_dnn_to_frame(AVFrame *frame, DNNData *output, void *log_ctx);
#endif

View File

@@ -1,192 +0,0 @@
/*
* Copyright (c) 2020
*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include <stdio.h>
#include "queue.h"
#include "libavutil/mem.h"
#include "libavutil/avassert.h"
typedef struct QueueEntry QueueEntry;
struct QueueEntry {
void *value;
QueueEntry *prev;
QueueEntry *next;
};
struct Queue {
QueueEntry *head;
QueueEntry *tail;
size_t length;
};
static inline QueueEntry *create_entry(void *val)
{
QueueEntry *entry = av_malloc(sizeof(*entry));
if (entry)
entry->value = val;
return entry;
}
Queue* ff_queue_create(void)
{
Queue *q = av_malloc(sizeof(*q));
if (!q)
return NULL;
q->head = create_entry(q);
q->tail = create_entry(q);
if (!q->head || !q->tail) {
av_freep(&q->head);
av_freep(&q->tail);
av_freep(&q);
return NULL;
}
q->head->next = q->tail;
q->tail->prev = q->head;
q->head->prev = NULL;
q->tail->next = NULL;
q->length = 0;
return q;
}
void ff_queue_destroy(Queue *q)
{
QueueEntry *entry;
if (!q)
return;
entry = q->head;
while (entry != NULL) {
QueueEntry *temp = entry;
entry = entry->next;
av_freep(&temp);
}
av_freep(&q);
}
size_t ff_queue_size(Queue *q)
{
return q ? q->length : 0;
}
void *ff_queue_peek_front(Queue *q)
{
if (!q || q->length == 0)
return NULL;
return q->head->next->value;
}
void *ff_queue_peek_back(Queue *q)
{
if (!q || q->length == 0)
return NULL;
return q->tail->prev->value;
}
int ff_queue_push_front(Queue *q, void *v)
{
QueueEntry *new_entry;
QueueEntry *original_next;
if (!q)
return 0;
new_entry = create_entry(v);
if (!new_entry)
return -1;
original_next = q->head->next;
q->head->next = new_entry;
original_next->prev = new_entry;
new_entry->prev = q->head;
new_entry->next = original_next;
q->length++;
return q->length;
}
int ff_queue_push_back(Queue *q, void *v)
{
QueueEntry *new_entry;
QueueEntry *original_prev;
if (!q)
return 0;
new_entry = create_entry(v);
if (!new_entry)
return -1;
original_prev = q->tail->prev;
q->tail->prev = new_entry;
original_prev->next = new_entry;
new_entry->next = q->tail;
new_entry->prev = original_prev;
q->length++;
return q->length;
}
void *ff_queue_pop_front(Queue *q)
{
QueueEntry *front;
QueueEntry *new_head_next;
void *ret;
if (!q || q->length == 0)
return NULL;
front = q->head->next;
new_head_next = front->next;
ret = front->value;
q->head->next = new_head_next;
new_head_next->prev = q->head;
av_freep(&front);
q->length--;
return ret;
}
void *ff_queue_pop_back(Queue *q)
{
QueueEntry *back;
QueueEntry *new_tail_prev;
void *ret;
if (!q || q->length == 0)
return NULL;
back = q->tail->prev;
new_tail_prev = back->prev;
ret = back->value;
q->tail->prev = new_tail_prev;
new_tail_prev->next = q->tail;
av_freep(&back);
q->length--;
return ret;
}
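/*
 * Minimal usage sketch (illustrative, not part of the original file): the
 * queue stores opaque pointers, so the caller keeps ownership of the values.
 */
static void queue_example(void)
{
    int a = 1, b = 2;
    Queue *q = ff_queue_create();
    if (!q)
        return;
    ff_queue_push_back(q, &a);              /* q: a    */
    ff_queue_push_back(q, &b);              /* q: a, b */
    int *front = ff_queue_pop_front(q);     /* front == &a, q: b */
    (void)front;
    ff_queue_destroy(q);                    /* frees entries, not the values */
}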

View File

@@ -1,41 +0,0 @@
/*
* Copyright (c) 2020
*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#ifndef AVFILTER_DNN_QUEUE_H
#define AVFILTER_DNN_QUEUE_H
typedef struct Queue Queue;
Queue *ff_queue_create(void);
void ff_queue_destroy(Queue *q);
size_t ff_queue_size(Queue *q);
void *ff_queue_peek_front(Queue *q);
void *ff_queue_peek_back(Queue *q);
int ff_queue_push_front(Queue *q, void *v);
int ff_queue_push_back(Queue *q, void *v);
void *ff_queue_pop_front(Queue *q);
void *ff_queue_pop_back(Queue *q);
#endif

View File

@@ -1,116 +0,0 @@
/*
* Copyright (c) 2020
*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include <stdio.h>
#include "queue.h"
#include "safe_queue.h"
#include "libavutil/mem.h"
#include "libavutil/avassert.h"
#include "libavutil/thread.h"
#if HAVE_PTHREAD_CANCEL
#define DNNCond pthread_cond_t
#define dnn_cond_init pthread_cond_init
#define dnn_cond_destroy pthread_cond_destroy
#define dnn_cond_signal pthread_cond_signal
#define dnn_cond_wait pthread_cond_wait
#else
#define DNNCond char
static inline int dnn_cond_init(DNNCond *cond, const void *attr) { return 0; }
static inline int dnn_cond_destroy(DNNCond *cond) { return 0; }
static inline int dnn_cond_signal(DNNCond *cond) { return 0; }
static inline int dnn_cond_wait(DNNCond *cond, AVMutex *mutex)
{
av_assert0(!"should not reach here");
return 0;
}
#endif
struct SafeQueue {
Queue *q;
AVMutex mutex;
DNNCond cond;
};
SafeQueue *ff_safe_queue_create(void)
{
SafeQueue *sq = av_malloc(sizeof(*sq));
if (!sq)
return NULL;
sq->q = ff_queue_create();
if (!sq->q) {
av_freep(&sq);
return NULL;
}
ff_mutex_init(&sq->mutex, NULL);
dnn_cond_init(&sq->cond, NULL);
return sq;
}
void ff_safe_queue_destroy(SafeQueue *sq)
{
if (!sq)
return;
ff_queue_destroy(sq->q);
ff_mutex_destroy(&sq->mutex);
dnn_cond_destroy(&sq->cond);
av_freep(&sq);
}
size_t ff_safe_queue_size(SafeQueue *sq)
{
return sq ? ff_queue_size(sq->q) : 0;
}
int ff_safe_queue_push_front(SafeQueue *sq, void *v)
{
int ret;
ff_mutex_lock(&sq->mutex);
ret = ff_queue_push_front(sq->q, v);
dnn_cond_signal(&sq->cond);
ff_mutex_unlock(&sq->mutex);
return ret;
}
int ff_safe_queue_push_back(SafeQueue *sq, void *v)
{
int ret;
ff_mutex_lock(&sq->mutex);
ret = ff_queue_push_back(sq->q, v);
dnn_cond_signal(&sq->cond);
ff_mutex_unlock(&sq->mutex);
return ret;
}
void *ff_safe_queue_pop_front(SafeQueue *sq)
{
void *value;
ff_mutex_lock(&sq->mutex);
while (ff_queue_size(sq->q) == 0) {
dnn_cond_wait(&sq->cond, &sq->mutex);
}
value = ff_queue_pop_front(sq->q);
dnn_cond_signal(&sq->cond);
ff_mutex_unlock(&sq->mutex);
return value;
}
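/*
 * Illustrative sketch (not part of the original file): ff_safe_queue_pop_front()
 * blocks on the condition variable while the queue is empty, so a consumer
 * thread can simply loop on it.
 */
static void *consumer_example(void *arg)
{
    SafeQueue *sq = arg;
    void *item = ff_safe_queue_pop_front(sq); /* waits until an item arrives */
    return item;
}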

View File

@@ -1,36 +0,0 @@
/*
* Copyright (c) 2020
*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#ifndef AVFILTER_DNN_SAFE_QUEUE_H
#define AVFILTER_DNN_SAFE_QUEUE_H
typedef struct SafeQueue SafeQueue;
SafeQueue *ff_safe_queue_create(void);
void ff_safe_queue_destroy(SafeQueue *sq);
size_t ff_safe_queue_size(SafeQueue *sq);
int ff_safe_queue_push_front(SafeQueue *sq, void *v);
int ff_safe_queue_push_back(SafeQueue *sq, void *v);
void *ff_safe_queue_pop_front(SafeQueue *sq);
#endif

View File

@@ -1,106 +0,0 @@
/*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include "dnn_filter_common.h"
int ff_dnn_init(DnnContext *ctx, DNNFunctionType func_type, AVFilterContext *filter_ctx)
{
if (!ctx->model_filename) {
av_log(filter_ctx, AV_LOG_ERROR, "model file for network is not specified\n");
return AVERROR(EINVAL);
}
if (!ctx->model_inputname) {
av_log(filter_ctx, AV_LOG_ERROR, "input name of the model network is not specified\n");
return AVERROR(EINVAL);
}
if (!ctx->model_outputname) {
av_log(filter_ctx, AV_LOG_ERROR, "output name of the model network is not specified\n");
return AVERROR(EINVAL);
}
ctx->dnn_module = ff_get_dnn_module(ctx->backend_type);
if (!ctx->dnn_module) {
av_log(filter_ctx, AV_LOG_ERROR, "could not create DNN module for requested backend\n");
return AVERROR(ENOMEM);
}
if (!ctx->dnn_module->load_model) {
av_log(filter_ctx, AV_LOG_ERROR, "load_model for network is not specified\n");
return AVERROR(EINVAL);
}
ctx->model = (ctx->dnn_module->load_model)(ctx->model_filename, func_type, ctx->backend_options, filter_ctx);
if (!ctx->model) {
av_log(filter_ctx, AV_LOG_ERROR, "could not load DNN model\n");
return AVERROR(EINVAL);
}
if (!ctx->dnn_module->execute_model_async && ctx->async) {
ctx->async = 0;
av_log(filter_ctx, AV_LOG_WARNING, "this backend does not support async execution, falling back to sync.\n");
}
#if !HAVE_PTHREAD_CANCEL
if (ctx->async) {
ctx->async = 0;
av_log(filter_ctx, AV_LOG_WARNING, "pthread is not supported, falling back to sync.\n");
}
#endif
return 0;
}
DNNReturnType ff_dnn_get_input(DnnContext *ctx, DNNData *input)
{
return ctx->model->get_input(ctx->model->model, input, ctx->model_inputname);
}
DNNReturnType ff_dnn_get_output(DnnContext *ctx, int input_width, int input_height, int *output_width, int *output_height)
{
return ctx->model->get_output(ctx->model->model, ctx->model_inputname, input_width, input_height,
ctx->model_outputname, output_width, output_height);
}
DNNReturnType ff_dnn_execute_model(DnnContext *ctx, AVFrame *in_frame, AVFrame *out_frame)
{
return (ctx->dnn_module->execute_model)(ctx->model, ctx->model_inputname, in_frame,
(const char **)&ctx->model_outputname, 1, out_frame);
}
DNNReturnType ff_dnn_execute_model_async(DnnContext *ctx, AVFrame *in_frame, AVFrame *out_frame)
{
return (ctx->dnn_module->execute_model_async)(ctx->model, ctx->model_inputname, in_frame,
(const char **)&ctx->model_outputname, 1, out_frame);
}
DNNAsyncStatusType ff_dnn_get_async_result(DnnContext *ctx, AVFrame **in_frame, AVFrame **out_frame)
{
return (ctx->dnn_module->get_async_result)(ctx->model, in_frame, out_frame);
}
DNNReturnType ff_dnn_flush(DnnContext *ctx)
{
return (ctx->dnn_module->flush)(ctx->model);
}
void ff_dnn_uninit(DnnContext *ctx)
{
if (ctx->dnn_module) {
(ctx->dnn_module->free_model)(&ctx->model);
av_freep(&ctx->dnn_module);
}
}

@@ -1,59 +0,0 @@
/*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
/**
* @file
* common functions for the DNN-based filters
*/
#ifndef AVFILTER_DNN_FILTER_COMMON_H
#define AVFILTER_DNN_FILTER_COMMON_H
#include "dnn_interface.h"
typedef struct DnnContext {
char *model_filename;
DNNBackendType backend_type;
char *model_inputname;
char *model_outputname;
char *backend_options;
int async;
DNNModule *dnn_module;
DNNModel *model;
} DnnContext;
#define DNN_COMMON_OPTIONS \
{ "model", "path to model file", OFFSET(model_filename), AV_OPT_TYPE_STRING, { .str = NULL }, 0, 0, FLAGS },\
{ "input", "input name of the model", OFFSET(model_inputname), AV_OPT_TYPE_STRING, { .str = NULL }, 0, 0, FLAGS },\
{ "output", "output name of the model", OFFSET(model_outputname), AV_OPT_TYPE_STRING, { .str = NULL }, 0, 0, FLAGS },\
{ "backend_configs", "backend configs", OFFSET(backend_options), AV_OPT_TYPE_STRING, { .str = NULL }, 0, 0, FLAGS },\
{ "options", "backend configs", OFFSET(backend_options), AV_OPT_TYPE_STRING, { .str = NULL }, 0, 0, FLAGS },\
{ "async", "use DNN async inference", OFFSET(async), AV_OPT_TYPE_BOOL, { .i64 = 1}, 0, 1, FLAGS},
int ff_dnn_init(DnnContext *ctx, DNNFunctionType func_type, AVFilterContext *filter_ctx);
DNNReturnType ff_dnn_get_input(DnnContext *ctx, DNNData *input);
DNNReturnType ff_dnn_get_output(DnnContext *ctx, int input_width, int input_height, int *output_width, int *output_height);
DNNReturnType ff_dnn_execute_model(DnnContext *ctx, AVFrame *in_frame, AVFrame *out_frame);
DNNReturnType ff_dnn_execute_model_async(DnnContext *ctx, AVFrame *in_frame, AVFrame *out_frame);
DNNAsyncStatusType ff_dnn_get_async_result(DnnContext *ctx, AVFrame **in_frame, AVFrame **out_frame);
DNNReturnType ff_dnn_flush(DnnContext *ctx);
void ff_dnn_uninit(DnnContext *ctx);
#endif
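
Taken together, the helpers above imply a fixed call order; the following is a minimal sketch of it (hypothetical dnn_lifecycle_sketch(), assuming the FFmpeg tree, a DnnContext populated via DNN_COMMON_OPTIONS, and the DNN_SUCCESS return convention of dnn_interface.h from this era):

static int dnn_lifecycle_sketch(AVFilterContext *filter_ctx, DnnContext *dnn,
                                AVFrame *in, AVFrame *out)
{
    DNNData model_input;
    int out_w, out_h, ret;

    ret = ff_dnn_init(dnn, DFT_PROCESS_FRAME, filter_ctx);  /* pick backend, load model */
    if (ret < 0)
        return ret;
    if (ff_dnn_get_input(dnn, &model_input) != DNN_SUCCESS ||
        ff_dnn_get_output(dnn, in->width, in->height, &out_w, &out_h) != DNN_SUCCESS ||
        ff_dnn_execute_model(dnn, in, out) != DNN_SUCCESS) {  /* one sync inference */
        ff_dnn_uninit(dnn);
        return AVERROR(EINVAL);
    }
    ff_dnn_uninit(dnn);  /* frees the model and the backend module */
    return 0;
}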

@@ -1,72 +0,0 @@
/*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License along
* with FFmpeg; if not, write to the Free Software Foundation, Inc.,
* 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
*/
#include <stdint.h>
// for FF_QSCALE_TYPE_*
#include "libavcodec/internal.h"
#include "libavutil/frame.h"
#include "libavutil/mem.h"
#include "libavutil/video_enc_params.h"
#include "qp_table.h"
int ff_qp_table_extract(AVFrame *frame, int8_t **table, int *table_w, int *table_h,
int *qscale_type)
{
AVFrameSideData *sd;
AVVideoEncParams *par;
unsigned int mb_h = (frame->height + 15) / 16;
unsigned int mb_w = (frame->width + 15) / 16;
unsigned int nb_mb = mb_h * mb_w;
unsigned int block_idx;
*table = NULL;
sd = av_frame_get_side_data(frame, AV_FRAME_DATA_VIDEO_ENC_PARAMS);
if (!sd)
return 0;
par = (AVVideoEncParams*)sd->data;
if (par->type != AV_VIDEO_ENC_PARAMS_MPEG2 ||
(par->nb_blocks != 0 && par->nb_blocks != nb_mb))
return AVERROR(ENOSYS);
*table = av_malloc(nb_mb);
if (!*table)
return AVERROR(ENOMEM);
if (table_w)
*table_w = mb_w;
if (table_h)
*table_h = mb_h;
if (qscale_type)
*qscale_type = FF_QSCALE_TYPE_MPEG2;
if (par->nb_blocks == 0) {
memset(*table, par->qp, nb_mb);
return 0;
}
for (block_idx = 0; block_idx < nb_mb; block_idx++) {
AVVideoBlockParams *b = av_video_enc_params_block(par, block_idx);
(*table)[block_idx] = par->qp + b->delta_qp;
}
return 0;
}

@@ -1,33 +0,0 @@
/*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License along
* with FFmpeg; if not, write to the Free Software Foundation, Inc.,
* 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
*/
#ifndef AVFILTER_QP_TABLE_H
#define AVFILTER_QP_TABLE_H
#include <stdint.h>
#include "libavutil/frame.h"
/**
* Extract a libpostproc-compatible QP table - an 8-bit QP value per 16x16
* macroblock, stored in raster order - from AVVideoEncParams side data.
*/
int ff_qp_table_extract(AVFrame *frame, int8_t **table, int *table_w, int *table_h,
int *qscale_type);
#endif // AVFILTER_QP_TABLE_H
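
A short usage sketch for the helper (hypothetical log_first_qp(); assumes the FFmpeg tree and a decoded frame that may carry AV_FRAME_DATA_VIDEO_ENC_PARAMS side data):

static void log_first_qp(void *log_ctx, AVFrame *frame)
{
    int8_t *qp_table = NULL;
    int w, h, qscale_type;

    if (ff_qp_table_extract(frame, &qp_table, &w, &h, &qscale_type) < 0)
        return;            /* unsupported params type or allocation failure */
    if (qp_table) {        /* stays NULL when the frame carries no side data */
        av_log(log_ctx, AV_LOG_DEBUG, "first MB QP %d in a %dx%d table\n",
               qp_table[0], w, h);
        av_free(qp_table);
    }
}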

@@ -1,275 +0,0 @@
/*
* Copyright (c) 2020 Paul B Mahol
*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include "libavutil/avstring.h"
#include "libavutil/imgutils.h"
#include "libavutil/opt.h"
#include "libavutil/pixdesc.h"
#include "avfilter.h"
#include "formats.h"
#include "internal.h"
#include "video.h"
typedef struct ChromaNRContext {
const AVClass *class;
float threshold;
float threshold_y;
float threshold_u;
float threshold_v;
int thres;
int thres_y;
int thres_u;
int thres_v;
int sizew;
int sizeh;
int stepw;
int steph;
int depth;
int chroma_w;
int chroma_h;
int nb_planes;
int linesize[4];
int planeheight[4];
int planewidth[4];
AVFrame *out;
int (*filter_slice)(AVFilterContext *ctx, void *arg, int jobnr, int nb_jobs);
} ChromaNRContext;
static int query_formats(AVFilterContext *ctx)
{
static const enum AVPixelFormat pix_fmts[] = {
AV_PIX_FMT_YUV420P, AV_PIX_FMT_YUV422P, AV_PIX_FMT_YUV440P, AV_PIX_FMT_YUV411P, AV_PIX_FMT_YUV444P,
AV_PIX_FMT_YUVA420P, AV_PIX_FMT_YUVA422P, AV_PIX_FMT_YUVA444P,
AV_PIX_FMT_YUVJ444P, AV_PIX_FMT_YUVJ440P, AV_PIX_FMT_YUVJ422P, AV_PIX_FMT_YUVJ420P, AV_PIX_FMT_YUVJ411P,
AV_PIX_FMT_YUV420P9, AV_PIX_FMT_YUV422P9, AV_PIX_FMT_YUV444P9,
AV_PIX_FMT_YUV420P10, AV_PIX_FMT_YUV422P10, AV_PIX_FMT_YUV440P10, AV_PIX_FMT_YUV444P10,
AV_PIX_FMT_YUV444P12, AV_PIX_FMT_YUV422P12, AV_PIX_FMT_YUV440P12, AV_PIX_FMT_YUV420P12,
AV_PIX_FMT_YUV444P14, AV_PIX_FMT_YUV422P14, AV_PIX_FMT_YUV420P14,
AV_PIX_FMT_YUV420P16, AV_PIX_FMT_YUV422P16, AV_PIX_FMT_YUV444P16,
AV_PIX_FMT_YUVA420P9, AV_PIX_FMT_YUVA422P9, AV_PIX_FMT_YUVA444P9,
AV_PIX_FMT_YUVA420P10, AV_PIX_FMT_YUVA422P10, AV_PIX_FMT_YUVA444P10,
AV_PIX_FMT_YUVA422P12, AV_PIX_FMT_YUVA444P12,
AV_PIX_FMT_YUVA420P16, AV_PIX_FMT_YUVA422P16, AV_PIX_FMT_YUVA444P16,
AV_PIX_FMT_NONE
};
AVFilterFormats *fmts_list = ff_make_format_list(pix_fmts);
if (!fmts_list)
return AVERROR(ENOMEM);
return ff_set_common_formats(ctx, fmts_list);
}
#define FILTER_FUNC(name, type) \
static int filter_slice##name(AVFilterContext *ctx, void *arg, int jobnr, int nb_jobs) \
{ \
ChromaNRContext *s = ctx->priv; \
AVFrame *in = arg; \
AVFrame *out = s->out; \
const int in_ylinesize = in->linesize[0]; \
const int in_ulinesize = in->linesize[1]; \
const int in_vlinesize = in->linesize[2]; \
const int out_ulinesize = out->linesize[1]; \
const int out_vlinesize = out->linesize[2]; \
const int chroma_w = s->chroma_w; \
const int chroma_h = s->chroma_h; \
const int stepw = s->stepw; \
const int steph = s->steph; \
const int sizew = s->sizew; \
const int sizeh = s->sizeh; \
const int thres = s->thres; \
const int thres_y = s->thres_y; \
const int thres_u = s->thres_u; \
const int thres_v = s->thres_v; \
const int h = s->planeheight[1]; \
const int w = s->planewidth[1]; \
const int slice_start = (h * jobnr) / nb_jobs; \
const int slice_end = (h * (jobnr+1)) / nb_jobs; \
type *out_uptr = (type *)(out->data[1] + slice_start * out_ulinesize); \
type *out_vptr = (type *)(out->data[2] + slice_start * out_vlinesize); \
\
{ \
const int h = s->planeheight[0]; \
const int slice_start = (h * jobnr) / nb_jobs; \
const int slice_end = (h * (jobnr+1)) / nb_jobs; \
\
av_image_copy_plane(out->data[0] + slice_start * out->linesize[0], \
out->linesize[0], \
in->data[0] + slice_start * in->linesize[0], \
in->linesize[0], \
s->linesize[0], slice_end - slice_start); \
\
if (s->nb_planes == 4) { \
av_image_copy_plane(out->data[3] + slice_start * out->linesize[3], \
out->linesize[3], \
in->data[3] + slice_start * in->linesize[3], \
in->linesize[3], \
s->linesize[3], slice_end - slice_start); \
} \
} \
\
for (int y = slice_start; y < slice_end; y++) { \
const type *in_yptr = (const type *)(in->data[0] + y * chroma_h * in_ylinesize); \
const type *in_uptr = (const type *)(in->data[1] + y * in_ulinesize); \
const type *in_vptr = (const type *)(in->data[2] + y * in_vlinesize); \
\
for (int x = 0; x < w; x++) { \
const int cy = in_yptr[x * chroma_w]; \
const int cu = in_uptr[x]; \
const int cv = in_vptr[x]; \
int su = cu; \
int sv = cv; \
int cn = 1; \
\
for (int yy = FFMAX(0, y - sizeh); yy < FFMIN(y + sizeh, h); yy += steph) { \
const type *in_yptr = (const type *)(in->data[0] + yy * chroma_h * in_ylinesize); \
const type *in_uptr = (const type *)(in->data[1] + yy * in_ulinesize); \
const type *in_vptr = (const type *)(in->data[2] + yy * in_vlinesize); \
\
for (int xx = FFMAX(0, x - sizew); xx < FFMIN(x + sizew, w); xx += stepw) { \
const int Y = in_yptr[xx * chroma_w]; \
const int U = in_uptr[xx]; \
const int V = in_vptr[xx]; \
\
if (FFABS(cu - U) + FFABS(cv - V) + FFABS(cy - Y) < thres && \
FFABS(cu - U) < thres_u && FFABS(cv - V) < thres_v && \
FFABS(cy - Y) < thres_y && \
xx != x && yy != y) { \
su += U; \
sv += V; \
cn++; \
} \
} \
} \
\
out_uptr[x] = su / cn; \
out_vptr[x] = sv / cn; \
} \
\
out_uptr += out_ulinesize / sizeof(type); \
out_vptr += out_vlinesize / sizeof(type); \
} \
\
return 0; \
}
FILTER_FUNC(8, uint8_t)
FILTER_FUNC(16, uint16_t)
static int filter_frame(AVFilterLink *inlink, AVFrame *in)
{
AVFilterContext *ctx = inlink->dst;
AVFilterLink *outlink = ctx->outputs[0];
ChromaNRContext *s = ctx->priv;
AVFrame *out;
s->thres = s->threshold * (1 << (s->depth - 8));
s->thres_y = s->threshold_y * (1 << (s->depth - 8));
s->thres_u = s->threshold_u * (1 << (s->depth - 8));
s->thres_v = s->threshold_v * (1 << (s->depth - 8));
out = ff_get_video_buffer(outlink, outlink->w, outlink->h);
if (!out) {
av_frame_free(&in);
return AVERROR(ENOMEM);
}
av_frame_copy_props(out, in);
s->out = out;
ctx->internal->execute(ctx, s->filter_slice, in, NULL,
FFMIN3(s->planeheight[1],
s->planeheight[2],
ff_filter_get_nb_threads(ctx)));
av_frame_free(&in);
return ff_filter_frame(outlink, out);
}
static int config_input(AVFilterLink *inlink)
{
AVFilterContext *ctx = inlink->dst;
ChromaNRContext *s = ctx->priv;
const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(inlink->format);
int ret;
s->nb_planes = desc->nb_components;
s->depth = desc->comp[0].depth;
s->filter_slice = s->depth <= 8 ? filter_slice8 : filter_slice16;
s->chroma_w = 1 << desc->log2_chroma_w;
s->chroma_h = 1 << desc->log2_chroma_h;
s->planeheight[1] = s->planeheight[2] = AV_CEIL_RSHIFT(inlink->h, desc->log2_chroma_h);
s->planeheight[0] = s->planeheight[3] = inlink->h;
s->planewidth[1] = s->planewidth[2] = AV_CEIL_RSHIFT(inlink->w, desc->log2_chroma_w);
s->planewidth[0] = s->planewidth[3] = inlink->w;
if ((ret = av_image_fill_linesizes(s->linesize, inlink->format, inlink->w)) < 0)
return ret;
return 0;
}
#define OFFSET(x) offsetof(ChromaNRContext, x)
#define VF AV_OPT_FLAG_VIDEO_PARAM | AV_OPT_FLAG_FILTERING_PARAM | AV_OPT_FLAG_RUNTIME_PARAM
static const AVOption chromanr_options[] = {
{ "thres", "set y+u+v threshold", OFFSET(threshold), AV_OPT_TYPE_FLOAT, {.dbl=30}, 1, 200, VF },
{ "sizew", "set horizontal size", OFFSET(sizew), AV_OPT_TYPE_INT, {.i64=5}, 1, 100, VF },
{ "sizeh", "set vertical size", OFFSET(sizeh), AV_OPT_TYPE_INT, {.i64=5}, 1, 100, VF },
{ "stepw", "set horizontal step", OFFSET(stepw), AV_OPT_TYPE_INT, {.i64=1}, 1, 50, VF },
{ "steph", "set vertical step", OFFSET(steph), AV_OPT_TYPE_INT, {.i64=1}, 1, 50, VF },
{ "threy", "set y threshold", OFFSET(threshold_y), AV_OPT_TYPE_FLOAT, {.dbl=200},1, 200, VF },
{ "threu", "set u threshold", OFFSET(threshold_u), AV_OPT_TYPE_FLOAT, {.dbl=200},1, 200, VF },
{ "threv", "set v threshold", OFFSET(threshold_v), AV_OPT_TYPE_FLOAT, {.dbl=200},1, 200, VF },
{ NULL }
};
static const AVFilterPad inputs[] = {
{
.name = "default",
.type = AVMEDIA_TYPE_VIDEO,
.filter_frame = filter_frame,
.config_props = config_input,
},
{ NULL }
};
static const AVFilterPad outputs[] = {
{
.name = "default",
.type = AVMEDIA_TYPE_VIDEO,
},
{ NULL }
};
AVFILTER_DEFINE_CLASS(chromanr);
AVFilter ff_vf_chromanr = {
.name = "chromanr",
.description = NULL_IF_CONFIG_SMALL("Reduce chrominance noise."),
.priv_size = sizeof(ChromaNRContext),
.priv_class = &chromanr_class,
.query_formats = query_formats,
.outputs = outputs,
.inputs = inputs,
.flags = AVFILTER_FLAG_SUPPORT_TIMELINE_GENERIC | AVFILTER_FLAG_SLICE_THREADS,
.process_command = ff_filter_process_command,
};
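
As a worked example of the scaling done at the top of filter_frame(): the threshold options are expressed on an 8-bit scale and shifted up for deeper formats. This standalone snippet (plain C, no FFmpeg headers) prints what the default thres=30 becomes at each depth:

#include <stdio.h>

int main(void)
{
    const float threshold = 30.f;               /* default "thres" option */
    for (int depth = 8; depth <= 16; depth += 2) {
        int thres = threshold * (1 << (depth - 8));
        printf("depth %2d -> integer threshold %d\n", depth, thres);
    }
    return 0;
}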

@@ -1,408 +0,0 @@
/*
* Copyright (c) 2021 Paul B Mahol
*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include <float.h>
#include "libavutil/opt.h"
#include "libavutil/imgutils.h"
#include "avfilter.h"
#include "drawutils.h"
#include "formats.h"
#include "internal.h"
#include "video.h"
#define R 0
#define G 1
#define B 2
typedef struct ColorContrastContext {
const AVClass *class;
float rc, gm, by;
float rcw, gmw, byw;
float preserve;
int step;
int depth;
uint8_t rgba_map[4];
int (*do_slice)(AVFilterContext *s, void *arg,
int jobnr, int nb_jobs);
} ColorContrastContext;
static inline float lerpf(float v0, float v1, float f)
{
return v0 + (v1 - v0) * f;
}
#define PROCESS(max) \
br = (b + r) * 0.5f; \
gb = (g + b) * 0.5f; \
rg = (r + g) * 0.5f; \
\
gd = g - br; \
bd = b - rg; \
rd = r - gb; \
\
g0 = g + gd * gm; \
b0 = b - gd * gm; \
r0 = r - gd * gm; \
\
g1 = g - bd * by; \
b1 = b + bd * by; \
r1 = r - bd * by; \
\
g2 = g - rd * rc; \
b2 = b - rd * rc; \
r2 = r + rd * rc; \
\
ng = av_clipf((g0 * gmw + g1 * byw + g2 * rcw) * scale, 0.f, max); \
nb = av_clipf((b0 * gmw + b1 * byw + b2 * rcw) * scale, 0.f, max); \
nr = av_clipf((r0 * gmw + r1 * byw + r2 * rcw) * scale, 0.f, max); \
\
li = FFMAX3(r, g, b) + FFMIN3(r, g, b); \
lo = FFMAX3(nr, ng, nb) + FFMIN3(nr, ng, nb) + FLT_EPSILON; \
lf = li / lo; \
\
r = nr * lf; \
g = ng * lf; \
b = nb * lf; \
\
nr = lerpf(nr, r, preserve); \
ng = lerpf(ng, g, preserve); \
nb = lerpf(nb, b, preserve);
static int colorcontrast_slice8(AVFilterContext *ctx, void *arg, int jobnr, int nb_jobs)
{
ColorContrastContext *s = ctx->priv;
AVFrame *frame = arg;
const int width = frame->width;
const int height = frame->height;
const int slice_start = (height * jobnr) / nb_jobs;
const int slice_end = (height * (jobnr + 1)) / nb_jobs;
const int glinesize = frame->linesize[0];
const int blinesize = frame->linesize[1];
const int rlinesize = frame->linesize[2];
uint8_t *gptr = frame->data[0] + slice_start * glinesize;
uint8_t *bptr = frame->data[1] + slice_start * blinesize;
uint8_t *rptr = frame->data[2] + slice_start * rlinesize;
const float preserve = s->preserve;
const float gm = s->gm * 0.5f;
const float by = s->by * 0.5f;
const float rc = s->rc * 0.5f;
const float gmw = s->gmw;
const float byw = s->byw;
const float rcw = s->rcw;
const float sum = gmw + byw + rcw;
const float scale = 1.f / sum;
for (int y = slice_start; y < slice_end && sum > FLT_EPSILON; y++) {
for (int x = 0; x < width; x++) {
float g = gptr[x];
float b = bptr[x];
float r = rptr[x];
float g0, g1, g2;
float b0, b1, b2;
float r0, r1, r2;
float gd, bd, rd;
float gb, br, rg;
float nr, ng, nb;
float li, lo, lf;
PROCESS(255.f);
gptr[x] = av_clip_uint8(ng);
bptr[x] = av_clip_uint8(nb);
rptr[x] = av_clip_uint8(nr);
}
gptr += glinesize;
bptr += blinesize;
rptr += rlinesize;
}
return 0;
}
static int colorcontrast_slice16(AVFilterContext *ctx, void *arg, int jobnr, int nb_jobs)
{
ColorContrastContext *s = ctx->priv;
AVFrame *frame = arg;
const int depth = s->depth;
const float max = (1 << depth) - 1;
const int width = frame->width;
const int height = frame->height;
const int slice_start = (height * jobnr) / nb_jobs;
const int slice_end = (height * (jobnr + 1)) / nb_jobs;
const int glinesize = frame->linesize[0] / 2;
const int blinesize = frame->linesize[1] / 2;
const int rlinesize = frame->linesize[2] / 2;
uint16_t *gptr = (uint16_t *)frame->data[0] + slice_start * glinesize;
uint16_t *bptr = (uint16_t *)frame->data[1] + slice_start * blinesize;
uint16_t *rptr = (uint16_t *)frame->data[2] + slice_start * rlinesize;
const float preserve = s->preserve;
const float gm = s->gm * 0.5f;
const float by = s->by * 0.5f;
const float rc = s->rc * 0.5f;
const float gmw = s->gmw;
const float byw = s->byw;
const float rcw = s->rcw;
const float sum = gmw + byw + rcw;
const float scale = 1.f / sum;
for (int y = slice_start; y < slice_end && sum > FLT_EPSILON; y++) {
for (int x = 0; x < width; x++) {
float g = gptr[x];
float b = bptr[x];
float r = rptr[x];
float g0, g1, g2;
float b0, b1, b2;
float r0, r1, r2;
float gd, bd, rd;
float gb, br, rg;
float nr, ng, nb;
float li, lo, lf;
PROCESS(max);
gptr[x] = av_clip_uintp2_c(ng, depth);
bptr[x] = av_clip_uintp2_c(nb, depth);
rptr[x] = av_clip_uintp2_c(nr, depth);
}
gptr += glinesize;
bptr += blinesize;
rptr += rlinesize;
}
return 0;
}
static int colorcontrast_slice8p(AVFilterContext *ctx, void *arg, int jobnr, int nb_jobs)
{
ColorContrastContext *s = ctx->priv;
AVFrame *frame = arg;
const int step = s->step;
const int width = frame->width;
const int height = frame->height;
const int slice_start = (height * jobnr) / nb_jobs;
const int slice_end = (height * (jobnr + 1)) / nb_jobs;
const int linesize = frame->linesize[0];
const uint8_t roffset = s->rgba_map[R];
const uint8_t goffset = s->rgba_map[G];
const uint8_t boffset = s->rgba_map[B];
uint8_t *ptr = frame->data[0] + slice_start * linesize;
const float preserve = s->preserve;
const float gm = s->gm * 0.5f;
const float by = s->by * 0.5f;
const float rc = s->rc * 0.5f;
const float gmw = s->gmw;
const float byw = s->byw;
const float rcw = s->rcw;
const float sum = gmw + byw + rcw;
const float scale = 1.f / sum;
for (int y = slice_start; y < slice_end && sum > FLT_EPSILON; y++) {
for (int x = 0; x < width; x++) {
float g = ptr[x * step + goffset];
float b = ptr[x * step + boffset];
float r = ptr[x * step + roffset];
float g0, g1, g2;
float b0, b1, b2;
float r0, r1, r2;
float gd, bd, rd;
float gb, br, rg;
float nr, ng, nb;
float li, lo, lf;
PROCESS(255.f);
ptr[x * step + goffset] = av_clip_uint8(ng);
ptr[x * step + boffset] = av_clip_uint8(nb);
ptr[x * step + roffset] = av_clip_uint8(nr);
}
ptr += linesize;
}
return 0;
}
static int colorcontrast_slice16p(AVFilterContext *ctx, void *arg, int jobnr, int nb_jobs)
{
ColorContrastContext *s = ctx->priv;
AVFrame *frame = arg;
const int step = s->step;
const int depth = s->depth;
const float max = (1 << depth) - 1;
const int width = frame->width;
const int height = frame->height;
const int slice_start = (height * jobnr) / nb_jobs;
const int slice_end = (height * (jobnr + 1)) / nb_jobs;
const int linesize = frame->linesize[0] / 2;
const uint8_t roffset = s->rgba_map[R];
const uint8_t goffset = s->rgba_map[G];
const uint8_t boffset = s->rgba_map[B];
uint16_t *ptr = (uint16_t *)frame->data[0] + slice_start * linesize;
const float preserve = s->preserve;
const float gm = s->gm * 0.5f;
const float by = s->by * 0.5f;
const float rc = s->rc * 0.5f;
const float gmw = s->gmw;
const float byw = s->byw;
const float rcw = s->rcw;
const float sum = gmw + byw + rcw;
const float scale = 1.f / sum;
for (int y = slice_start; y < slice_end && sum > FLT_EPSILON; y++) {
for (int x = 0; x < width; x++) {
float g = ptr[x * step + goffset];
float b = ptr[x * step + boffset];
float r = ptr[x * step + roffset];
float g0, g1, g2;
float b0, b1, b2;
float r0, r1, r2;
float gd, bd, rd;
float gb, br, rg;
float nr, ng, nb;
float li, lo, lf;
PROCESS(max);
ptr[x * step + goffset] = av_clip_uintp2_c(ng, depth);
ptr[x * step + boffset] = av_clip_uintp2_c(nb, depth);
ptr[x * step + roffset] = av_clip_uintp2_c(nr, depth);
}
ptr += linesize;
}
return 0;
}
static int filter_frame(AVFilterLink *link, AVFrame *frame)
{
AVFilterContext *ctx = link->dst;
ColorContrastContext *s = ctx->priv;
int res;
if ((res = ctx->internal->execute(ctx, s->do_slice, frame, NULL,
FFMIN(frame->height, ff_filter_get_nb_threads(ctx)))))
return res;
return ff_filter_frame(ctx->outputs[0], frame);
}
static av_cold int query_formats(AVFilterContext *ctx)
{
static const enum AVPixelFormat pixel_fmts[] = {
AV_PIX_FMT_RGB24, AV_PIX_FMT_BGR24,
AV_PIX_FMT_RGBA, AV_PIX_FMT_BGRA,
AV_PIX_FMT_ARGB, AV_PIX_FMT_ABGR,
AV_PIX_FMT_0RGB, AV_PIX_FMT_0BGR,
AV_PIX_FMT_RGB0, AV_PIX_FMT_BGR0,
AV_PIX_FMT_GBRP, AV_PIX_FMT_GBRAP,
AV_PIX_FMT_GBRP9, AV_PIX_FMT_GBRP10, AV_PIX_FMT_GBRP12,
AV_PIX_FMT_GBRP14, AV_PIX_FMT_GBRP16,
AV_PIX_FMT_GBRAP10, AV_PIX_FMT_GBRAP12, AV_PIX_FMT_GBRAP16,
AV_PIX_FMT_RGB48, AV_PIX_FMT_BGR48,
AV_PIX_FMT_RGBA64, AV_PIX_FMT_BGRA64,
AV_PIX_FMT_NONE
};
AVFilterFormats *formats = NULL;
formats = ff_make_format_list(pixel_fmts);
if (!formats)
return AVERROR(ENOMEM);
return ff_set_common_formats(ctx, formats);
}
static av_cold int config_input(AVFilterLink *inlink)
{
AVFilterContext *ctx = inlink->dst;
ColorContrastContext *s = ctx->priv;
const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(inlink->format);
int planar = desc->flags & AV_PIX_FMT_FLAG_PLANAR;
s->step = desc->nb_components;
if (inlink->format == AV_PIX_FMT_RGB0 ||
inlink->format == AV_PIX_FMT_0RGB ||
inlink->format == AV_PIX_FMT_BGR0 ||
inlink->format == AV_PIX_FMT_0BGR)
s->step = 4;
s->depth = desc->comp[0].depth;
s->do_slice = s->depth <= 8 ? colorcontrast_slice8 : colorcontrast_slice16;
if (!planar)
s->do_slice = s->depth <= 8 ? colorcontrast_slice8p : colorcontrast_slice16p;
ff_fill_rgba_map(s->rgba_map, inlink->format);
return 0;
}
static const AVFilterPad colorcontrast_inputs[] = {
{
.name = "default",
.type = AVMEDIA_TYPE_VIDEO,
.needs_writable = 1,
.filter_frame = filter_frame,
.config_props = config_input,
},
{ NULL }
};
static const AVFilterPad colorcontrast_outputs[] = {
{
.name = "default",
.type = AVMEDIA_TYPE_VIDEO,
},
{ NULL }
};
#define OFFSET(x) offsetof(ColorContrastContext, x)
#define VF AV_OPT_FLAG_FILTERING_PARAM|AV_OPT_FLAG_VIDEO_PARAM|AV_OPT_FLAG_RUNTIME_PARAM
static const AVOption colorcontrast_options[] = {
{ "rc", "set the red-cyan contrast", OFFSET(rc), AV_OPT_TYPE_FLOAT, {.dbl=0}, -1, 1, VF },
{ "gm", "set the green-magenta contrast", OFFSET(gm), AV_OPT_TYPE_FLOAT, {.dbl=0}, -1, 1, VF },
{ "by", "set the blue-yellow contrast", OFFSET(by), AV_OPT_TYPE_FLOAT, {.dbl=0}, -1, 1, VF },
{ "rcw", "set the red-cyan weight", OFFSET(rcw), AV_OPT_TYPE_FLOAT, {.dbl=0}, 0, 1, VF },
{ "gmw", "set the green-magenta weight", OFFSET(gmw), AV_OPT_TYPE_FLOAT, {.dbl=0}, 0, 1, VF },
{ "byw", "set the blue-yellow weight", OFFSET(byw), AV_OPT_TYPE_FLOAT, {.dbl=0}, 0, 1, VF },
{ "pl", "set the amount of preserving lightness", OFFSET(preserve), AV_OPT_TYPE_FLOAT, {.dbl=0}, 0, 1, VF },
{ NULL }
};
AVFILTER_DEFINE_CLASS(colorcontrast);
AVFilter ff_vf_colorcontrast = {
.name = "colorcontrast",
.description = NULL_IF_CONFIG_SMALL("Adjust color contrast between RGB components."),
.priv_size = sizeof(ColorContrastContext),
.priv_class = &colorcontrast_class,
.query_formats = query_formats,
.inputs = colorcontrast_inputs,
.outputs = colorcontrast_outputs,
.flags = AVFILTER_FLAG_SUPPORT_TIMELINE_GENERIC | AVFILTER_FLAG_SLICE_THREADS,
.process_command = ff_filter_process_command,
};
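
To make the PROCESS() math concrete, here is a standalone single-pixel trace of just its red-cyan branch (rc alone, weights collapsed to 1; 8-bit values, and without the av_clip the filter applies afterwards):

#include <stdio.h>

int main(void)
{
    float r = 200.f, g = 100.f, b = 50.f;  /* an orange-ish pixel */
    float gb = (g + b) * 0.5f;             /* cyan opponent of red */
    float rd = r - gb;                     /* red-cyan difference  */
    float rc = 1.f * 0.5f;                 /* "rc" option, scaled by 0.5 as in the slices */
    float nr = r + rd * rc;                /* push red away from cyan */
    float ng = g - rd * rc;
    float nb = b - rd * rc;
    printf("in  (%g, %g, %g)\nout (%g, %g, %g)\n", r, g, b, nr, ng, nb);
    return 0;                              /* out (262.5, 37.5, -12.5) before clipping */
}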

@@ -1,217 +0,0 @@
/*
* Copyright (c) 2021 Paul B Mahol
*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include <float.h>
#include "libavutil/opt.h"
#include "libavutil/imgutils.h"
#include "avfilter.h"
#include "formats.h"
#include "internal.h"
#include "video.h"
typedef struct ColorCorrectContext {
const AVClass *class;
float rl, bl;
float rh, bh;
float saturation;
int depth;
int (*do_slice)(AVFilterContext *s, void *arg,
int jobnr, int nb_jobs);
} ColorCorrectContext;
#define PROCESS() \
float y = yptr[x] * imax; \
float u = uptr[x] * imax - .5f; \
float v = vptr[x] * imax - .5f; \
float ny, nu, nv; \
\
ny = y; \
nu = saturation * (u + y * bd + bl); \
nv = saturation * (v + y * rd + rl);
static int colorcorrect_slice8(AVFilterContext *ctx, void *arg, int jobnr, int nb_jobs)
{
ColorCorrectContext *s = ctx->priv;
AVFrame *frame = arg;
const int depth = s->depth;
const float max = (1 << depth) - 1;
const float imax = 1.f / max;
const int width = frame->width;
const int height = frame->height;
const int slice_start = (height * jobnr) / nb_jobs;
const int slice_end = (height * (jobnr + 1)) / nb_jobs;
const int ylinesize = frame->linesize[0];
const int ulinesize = frame->linesize[1];
const int vlinesize = frame->linesize[2];
uint8_t *yptr = frame->data[0] + slice_start * ylinesize;
uint8_t *uptr = frame->data[1] + slice_start * ulinesize;
uint8_t *vptr = frame->data[2] + slice_start * vlinesize;
const float saturation = s->saturation;
const float bl = s->bl;
const float rl = s->rl;
const float bd = s->bh - bl;
const float rd = s->rh - rl;
for (int y = slice_start; y < slice_end; y++) {
for (int x = 0; x < width; x++) {
PROCESS()
yptr[x] = av_clip_uint8( ny * max);
uptr[x] = av_clip_uint8((nu + 0.5f) * max);
vptr[x] = av_clip_uint8((nv + 0.5f) * max);
}
yptr += ylinesize;
uptr += ulinesize;
vptr += vlinesize;
}
return 0;
}
static int colorcorrect_slice16(AVFilterContext *ctx, void *arg, int jobnr, int nb_jobs)
{
ColorCorrectContext *s = ctx->priv;
AVFrame *frame = arg;
const int depth = s->depth;
const float max = (1 << depth) - 1;
const float imax = 1.f / max;
const int width = frame->width;
const int height = frame->height;
const int slice_start = (height * jobnr) / nb_jobs;
const int slice_end = (height * (jobnr + 1)) / nb_jobs;
const int ylinesize = frame->linesize[0] / 2;
const int ulinesize = frame->linesize[1] / 2;
const int vlinesize = frame->linesize[2] / 2;
uint16_t *yptr = (uint16_t *)frame->data[0] + slice_start * ylinesize;
uint16_t *uptr = (uint16_t *)frame->data[1] + slice_start * ulinesize;
uint16_t *vptr = (uint16_t *)frame->data[2] + slice_start * vlinesize;
const float saturation = s->saturation;
const float bl = s->bl;
const float rl = s->rl;
const float bd = s->bh - bl;
const float rd = s->rh - rl;
for (int y = slice_start; y < slice_end; y++) {
for (int x = 0; x < width; x++) {
PROCESS()
yptr[x] = av_clip_uintp2_c( ny * max, depth);
uptr[x] = av_clip_uintp2_c((nu + 0.5f) * max, depth);
vptr[x] = av_clip_uintp2_c((nv + 0.5f) * max, depth);
}
yptr += ylinesize;
uptr += ulinesize;
vptr += vlinesize;
}
return 0;
}
static int filter_frame(AVFilterLink *inlink, AVFrame *frame)
{
AVFilterContext *ctx = inlink->dst;
ColorCorrectContext *s = ctx->priv;
ctx->internal->execute(ctx, s->do_slice, frame, NULL,
FFMIN(frame->height, ff_filter_get_nb_threads(ctx)));
return ff_filter_frame(ctx->outputs[0], frame);
}
static av_cold int query_formats(AVFilterContext *ctx)
{
static const enum AVPixelFormat pixel_fmts[] = {
AV_PIX_FMT_YUV444P, AV_PIX_FMT_YUVJ444P,
AV_PIX_FMT_YUV444P9, AV_PIX_FMT_YUV444P10, AV_PIX_FMT_YUV444P12, AV_PIX_FMT_YUV444P14, AV_PIX_FMT_YUV444P16,
AV_PIX_FMT_YUVA444P, AV_PIX_FMT_YUVA444P9, AV_PIX_FMT_YUVA444P10, AV_PIX_FMT_YUVA444P12, AV_PIX_FMT_YUVA444P16,
AV_PIX_FMT_NONE
};
AVFilterFormats *formats = NULL;
formats = ff_make_format_list(pixel_fmts);
if (!formats)
return AVERROR(ENOMEM);
return ff_set_common_formats(ctx, formats);
}
static av_cold int config_input(AVFilterLink *inlink)
{
AVFilterContext *ctx = inlink->dst;
ColorCorrectContext *s = ctx->priv;
const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(inlink->format);
s->depth = desc->comp[0].depth;
s->do_slice = s->depth <= 8 ? colorcorrect_slice8 : colorcorrect_slice16;
return 0;
}
static const AVFilterPad colorcorrect_inputs[] = {
{
.name = "default",
.type = AVMEDIA_TYPE_VIDEO,
.needs_writable = 1,
.filter_frame = filter_frame,
.config_props = config_input,
},
{ NULL }
};
static const AVFilterPad colorcorrect_outputs[] = {
{
.name = "default",
.type = AVMEDIA_TYPE_VIDEO,
},
{ NULL }
};
#define OFFSET(x) offsetof(ColorCorrectContext, x)
#define VF AV_OPT_FLAG_FILTERING_PARAM|AV_OPT_FLAG_VIDEO_PARAM|AV_OPT_FLAG_RUNTIME_PARAM
static const AVOption colorcorrect_options[] = {
{ "rl", "set the red shadow spot", OFFSET(rl), AV_OPT_TYPE_FLOAT, {.dbl=0}, -1, 1, VF },
{ "bl", "set the blue shadow spot", OFFSET(bl), AV_OPT_TYPE_FLOAT, {.dbl=0}, -1, 1, VF },
{ "rh", "set the red highlight spot", OFFSET(rh), AV_OPT_TYPE_FLOAT, {.dbl=0}, -1, 1, VF },
{ "bh", "set the blue highlight spot", OFFSET(bh), AV_OPT_TYPE_FLOAT, {.dbl=0}, -1, 1, VF },
{ "saturation", "set the amount of saturation", OFFSET(saturation), AV_OPT_TYPE_FLOAT, {.dbl=1}, -3, 3, VF },
{ NULL }
};
AVFILTER_DEFINE_CLASS(colorcorrect);
AVFilter ff_vf_colorcorrect = {
.name = "colorcorrect",
.description = NULL_IF_CONFIG_SMALL("Adjust color white balance selectively for blacks and whites."),
.priv_size = sizeof(ColorCorrectContext),
.priv_class = &colorcorrect_class,
.query_formats = query_formats,
.inputs = colorcorrect_inputs,
.outputs = colorcorrect_outputs,
.flags = AVFILTER_FLAG_SUPPORT_TIMELINE_GENERIC | AVFILTER_FLAG_SLICE_THREADS,
.process_command = ff_filter_process_command,
};
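
A standalone trace of PROCESS() above for one pixel: chroma is normalized and recentered around zero, then offset by a ramp that interpolates between the shadow (rl/bl) and highlight (rh/bh) spots as luma rises (parameter values here are purely illustrative):

#include <stdio.h>

int main(void)
{
    float y = 180.f / 255.f;               /* a fairly bright pixel */
    float u = 120.f / 255.f - 0.5f;
    float v = 140.f / 255.f - 0.5f;
    float rl = 0.0f, rh = 0.05f, bl = -0.1f, bh = 0.1f, saturation = 1.f;
    float rd = rh - rl, bd = bh - bl;
    float nu = saturation * (u + y * bd + bl);  /* blue shift grows with luma */
    float nv = saturation * (v + y * rd + rl);  /* red shift grows with luma  */
    printf("u %.4f -> %.4f, v %.4f -> %.4f\n", u, nu, v, nv);
    return 0;
}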

@@ -1,306 +0,0 @@
/*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include "libavutil/opt.h"
#include "libavutil/imgutils.h"
#include "avfilter.h"
#include "formats.h"
#include "internal.h"
#include "video.h"
typedef struct ColorizeContext {
const AVClass *class;
float hue;
float saturation;
float lightness;
float mix;
int depth;
int c[3];
int planewidth[4];
int planeheight[4];
int (*do_plane_slice[2])(AVFilterContext *s, void *arg,
int jobnr, int nb_jobs);
} ColorizeContext;
static inline float lerpf(float v0, float v1, float f)
{
return v0 + (v1 - v0) * f;
}
static int colorizey_slice8(AVFilterContext *ctx, void *arg, int jobnr, int nb_jobs)
{
ColorizeContext *s = ctx->priv;
AVFrame *frame = arg;
const int width = s->planewidth[0];
const int height = s->planeheight[0];
const int slice_start = (height * jobnr) / nb_jobs;
const int slice_end = (height * (jobnr + 1)) / nb_jobs;
const int ylinesize = frame->linesize[0];
uint8_t *yptr = frame->data[0] + slice_start * ylinesize;
const int yv = s->c[0];
const float mix = s->mix;
for (int y = slice_start; y < slice_end; y++) {
for (int x = 0; x < width; x++)
yptr[x] = lerpf(yv, yptr[x], mix);
yptr += ylinesize;
}
return 0;
}
static int colorizey_slice16(AVFilterContext *ctx, void *arg, int jobnr, int nb_jobs)
{
ColorizeContext *s = ctx->priv;
AVFrame *frame = arg;
const int width = s->planewidth[0];
const int height = s->planeheight[0];
const int slice_start = (height * jobnr) / nb_jobs;
const int slice_end = (height * (jobnr + 1)) / nb_jobs;
const int ylinesize = frame->linesize[0] / 2;
uint16_t *yptr = (uint16_t *)frame->data[0] + slice_start * ylinesize;
const int yv = s->c[0];
const float mix = s->mix;
for (int y = slice_start; y < slice_end; y++) {
for (int x = 0; x < width; x++)
yptr[x] = lerpf(yv, yptr[x], mix);
yptr += ylinesize;
}
return 0;
}
static int colorize_slice8(AVFilterContext *ctx, void *arg, int jobnr, int nb_jobs)
{
ColorizeContext *s = ctx->priv;
AVFrame *frame = arg;
const int width = s->planewidth[1];
const int height = s->planeheight[1];
const int slice_start = (height * jobnr) / nb_jobs;
const int slice_end = (height * (jobnr + 1)) / nb_jobs;
const int ulinesize = frame->linesize[1];
const int vlinesize = frame->linesize[2];
uint8_t *uptr = frame->data[1] + slice_start * ulinesize;
uint8_t *vptr = frame->data[2] + slice_start * vlinesize;
const int u = s->c[1];
const int v = s->c[2];
for (int y = slice_start; y < slice_end; y++) {
for (int x = 0; x < width; x++) {
uptr[x] = u;
vptr[x] = v;
}
uptr += ulinesize;
vptr += vlinesize;
}
return 0;
}
static int colorize_slice16(AVFilterContext *ctx, void *arg, int jobnr, int nb_jobs)
{
ColorizeContext *s = ctx->priv;
AVFrame *frame = arg;
const int width = s->planewidth[1];
const int height = s->planeheight[1];
const int slice_start = (height * jobnr) / nb_jobs;
const int slice_end = (height * (jobnr + 1)) / nb_jobs;
const int ulinesize = frame->linesize[1] / 2;
const int vlinesize = frame->linesize[2] / 2;
uint16_t *uptr = (uint16_t *)frame->data[1] + slice_start * ulinesize;
uint16_t *vptr = (uint16_t *)frame->data[2] + slice_start * vlinesize;
const int u = s->c[1];
const int v = s->c[2];
for (int y = slice_start; y < slice_end; y++) {
for (int x = 0; x < width; x++) {
uptr[x] = u;
vptr[x] = v;
}
uptr += ulinesize;
vptr += vlinesize;
}
return 0;
}
static int do_slice(AVFilterContext *ctx, void *arg, int jobnr, int nb_jobs)
{
ColorizeContext *s = ctx->priv;
s->do_plane_slice[0](ctx, arg, jobnr, nb_jobs);
s->do_plane_slice[1](ctx, arg, jobnr, nb_jobs);
return 0;
}
static float hue2rgb(float p, float q, float t)
{
if (t < 0.f) t += 1.f;
if (t > 1.f) t -= 1.f;
if (t < 1.f/6.f) return p + (q - p) * 6.f * t;
if (t < 1.f/2.f) return q;
if (t < 2.f/3.f) return p + (q - p) * (2.f/3.f - t) * 6.f;
return p;
}
static void hsl2rgb(float h, float s, float l, float *r, float *g, float *b)
{
h /= 360.f;
if (s == 0.f) {
*r = *g = *b = l;
} else {
const float q = l < 0.5f ? l * (1.f + s) : l + s - l * s;
const float p = 2.f * l - q;
*r = hue2rgb(p, q, h + 1.f / 3.f);
*g = hue2rgb(p, q, h);
*b = hue2rgb(p, q, h - 1.f / 3.f);
}
}
static void rgb2yuv(float r, float g, float b, int *y, int *u, int *v, int depth)
{
*y = ((0.21260*219.0/255.0) * r + (0.71520*219.0/255.0) * g +
(0.07220*219.0/255.0) * b) * ((1 << depth) - 1);
*u = (-(0.11457*224.0/255.0) * r - (0.38543*224.0/255.0) * g +
(0.50000*224.0/255.0) * b + 0.5) * ((1 << depth) - 1);
*v = ((0.50000*224.0/255.0) * r - (0.45415*224.0/255.0) * g -
(0.04585*224.0/255.0) * b + 0.5) * ((1 << depth) - 1);
}
static int filter_frame(AVFilterLink *inlink, AVFrame *frame)
{
AVFilterContext *ctx = inlink->dst;
ColorizeContext *s = ctx->priv;
float c[3];
hsl2rgb(s->hue, s->saturation, s->lightness, &c[0], &c[1], &c[2]);
rgb2yuv(c[0], c[1], c[2], &s->c[0], &s->c[1], &s->c[2], s->depth);
ctx->internal->execute(ctx, do_slice, frame, NULL,
FFMIN(s->planeheight[1], ff_filter_get_nb_threads(ctx)));
return ff_filter_frame(ctx->outputs[0], frame);
}
static av_cold int query_formats(AVFilterContext *ctx)
{
static const enum AVPixelFormat pixel_fmts[] = {
AV_PIX_FMT_YUV410P, AV_PIX_FMT_YUV411P,
AV_PIX_FMT_YUV420P, AV_PIX_FMT_YUV422P,
AV_PIX_FMT_YUV440P, AV_PIX_FMT_YUV444P,
AV_PIX_FMT_YUVJ420P, AV_PIX_FMT_YUVJ422P,
AV_PIX_FMT_YUVJ440P, AV_PIX_FMT_YUVJ444P,
AV_PIX_FMT_YUVJ411P,
AV_PIX_FMT_YUV420P9, AV_PIX_FMT_YUV422P9, AV_PIX_FMT_YUV444P9,
AV_PIX_FMT_YUV420P10, AV_PIX_FMT_YUV422P10, AV_PIX_FMT_YUV444P10,
AV_PIX_FMT_YUV440P10,
AV_PIX_FMT_YUV444P12, AV_PIX_FMT_YUV422P12, AV_PIX_FMT_YUV420P12,
AV_PIX_FMT_YUV440P12,
AV_PIX_FMT_YUV444P14, AV_PIX_FMT_YUV422P14, AV_PIX_FMT_YUV420P14,
AV_PIX_FMT_YUV420P16, AV_PIX_FMT_YUV422P16, AV_PIX_FMT_YUV444P16,
AV_PIX_FMT_YUVA420P, AV_PIX_FMT_YUVA422P, AV_PIX_FMT_YUVA444P,
AV_PIX_FMT_YUVA444P9, AV_PIX_FMT_YUVA444P10, AV_PIX_FMT_YUVA444P12, AV_PIX_FMT_YUVA444P16,
AV_PIX_FMT_YUVA422P9, AV_PIX_FMT_YUVA422P10, AV_PIX_FMT_YUVA422P12, AV_PIX_FMT_YUVA422P16,
AV_PIX_FMT_YUVA420P9, AV_PIX_FMT_YUVA420P10, AV_PIX_FMT_YUVA420P16,
AV_PIX_FMT_NONE
};
AVFilterFormats *formats = NULL;
formats = ff_make_format_list(pixel_fmts);
if (!formats)
return AVERROR(ENOMEM);
return ff_set_common_formats(ctx, formats);
}
static av_cold int config_input(AVFilterLink *inlink)
{
AVFilterContext *ctx = inlink->dst;
ColorizeContext *s = ctx->priv;
const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(inlink->format);
int depth;
s->depth = depth = desc->comp[0].depth;
s->planewidth[1] = s->planewidth[2] = AV_CEIL_RSHIFT(inlink->w, desc->log2_chroma_w);
s->planewidth[0] = s->planewidth[3] = inlink->w;
s->planeheight[1] = s->planeheight[2] = AV_CEIL_RSHIFT(inlink->h, desc->log2_chroma_h);
s->planeheight[0] = s->planeheight[3] = inlink->h;
s->do_plane_slice[0] = depth <= 8 ? colorizey_slice8 : colorizey_slice16;
s->do_plane_slice[1] = depth <= 8 ? colorize_slice8 : colorize_slice16;
return 0;
}
static const AVFilterPad colorize_inputs[] = {
{
.name = "default",
.type = AVMEDIA_TYPE_VIDEO,
.needs_writable = 1,
.filter_frame = filter_frame,
.config_props = config_input,
},
{ NULL }
};
static const AVFilterPad colorize_outputs[] = {
{
.name = "default",
.type = AVMEDIA_TYPE_VIDEO,
},
{ NULL }
};
#define OFFSET(x) offsetof(ColorizeContext, x)
#define VF AV_OPT_FLAG_FILTERING_PARAM|AV_OPT_FLAG_VIDEO_PARAM|AV_OPT_FLAG_RUNTIME_PARAM
static const AVOption colorize_options[] = {
{ "hue", "set the hue", OFFSET(hue), AV_OPT_TYPE_FLOAT, {.dbl=0}, 0, 360, VF },
{ "saturation", "set the saturation", OFFSET(saturation), AV_OPT_TYPE_FLOAT, {.dbl=0.5},0, 1, VF },
{ "lightness", "set the lightness", OFFSET(lightness), AV_OPT_TYPE_FLOAT, {.dbl=0.5},0, 1, VF },
{ "mix", "set the mix of source lightness", OFFSET(mix), AV_OPT_TYPE_FLOAT, {.dbl=1}, 0, 1, VF },
{ NULL }
};
AVFILTER_DEFINE_CLASS(colorize);
AVFilter ff_vf_colorize = {
.name = "colorize",
.description = NULL_IF_CONFIG_SMALL("Overlay a solid color on the video stream."),
.priv_size = sizeof(ColorizeContext),
.priv_class = &colorize_class,
.query_formats = query_formats,
.inputs = colorize_inputs,
.outputs = colorize_outputs,
.flags = AVFILTER_FLAG_SUPPORT_TIMELINE_GENERIC | AVFILTER_FLAG_SLICE_THREADS,
.process_command = ff_filter_process_command,
};
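
The HSL -> RGB -> limited-range YUV path can be checked in isolation. The snippet below copies hue2rgb(), hsl2rgb() and rgb2yuv() verbatim from the filter (BT.709 coefficients, 219/224 limited-range scaling) and prints the 8-bit triple used for hue=120, saturation=0.5, lightness=0.5:

#include <stdio.h>

static float hue2rgb(float p, float q, float t)
{
    if (t < 0.f) t += 1.f;
    if (t > 1.f) t -= 1.f;
    if (t < 1.f/6.f) return p + (q - p) * 6.f * t;
    if (t < 1.f/2.f) return q;
    if (t < 2.f/3.f) return p + (q - p) * (2.f/3.f - t) * 6.f;
    return p;
}

static void hsl2rgb(float h, float s, float l, float *r, float *g, float *b)
{
    h /= 360.f;
    if (s == 0.f) {
        *r = *g = *b = l;
    } else {
        const float q = l < 0.5f ? l * (1.f + s) : l + s - l * s;
        const float p = 2.f * l - q;
        *r = hue2rgb(p, q, h + 1.f / 3.f);
        *g = hue2rgb(p, q, h);
        *b = hue2rgb(p, q, h - 1.f / 3.f);
    }
}

static void rgb2yuv(float r, float g, float b, int *y, int *u, int *v, int depth)
{
    *y = ((0.21260*219.0/255.0) * r + (0.71520*219.0/255.0) * g +
          (0.07220*219.0/255.0) * b) * ((1 << depth) - 1);
    *u = (-(0.11457*224.0/255.0) * r - (0.38543*224.0/255.0) * g +
           (0.50000*224.0/255.0) * b + 0.5) * ((1 << depth) - 1);
    *v = ((0.50000*224.0/255.0) * r - (0.45415*224.0/255.0) * g -
          (0.04585*224.0/255.0) * b + 0.5) * ((1 << depth) - 1);
}

int main(void)
{
    float r, g, b;
    int y, u, v;
    hsl2rgb(120.f, 0.5f, 0.5f, &r, &g, &b);   /* a medium green */
    rgb2yuv(r, g, b, &y, &u, &v, 8);
    printf("rgb (%.3f, %.3f, %.3f) -> yuv (%d, %d, %d)\n", r, g, b, y, u, v);
    return 0;
}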

@@ -1,370 +0,0 @@
/*
* Copyright (c) 2021 Paul B Mahol
*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include <float.h>
#include "libavutil/opt.h"
#include "libavutil/imgutils.h"
#include "avfilter.h"
#include "drawutils.h"
#include "formats.h"
#include "internal.h"
#include "video.h"
#define R 0
#define G 1
#define B 2
typedef struct ColorTemperatureContext {
const AVClass *class;
float temperature;
float mix;
float preserve;
float color[3];
int step;
int depth;
uint8_t rgba_map[4];
int (*do_slice)(AVFilterContext *s, void *arg,
int jobnr, int nb_jobs);
} ColorTemperatureContext;
static float saturate(float input)
{
return av_clipf(input, 0.f, 1.f);
}
static void kelvin2rgb(float k, float *rgb)
{
float kelvin = k / 100.0f;
if (kelvin <= 66.0f) {
rgb[0] = 1.0f;
rgb[1] = saturate(0.39008157876901960784f * logf(kelvin) - 0.63184144378862745098f);
} else {
const float t = fmaxf(kelvin - 60.0f, 0.0f);
rgb[0] = saturate(1.29293618606274509804f * powf(t, -0.1332047592f));
rgb[1] = saturate(1.12989086089529411765f * powf(t, -0.0755148492f));
}
if (kelvin >= 66.0f)
rgb[2] = 1.0f;
else if (kelvin <= 19.0f)
rgb[2] = 0.0f;
else
rgb[2] = saturate(0.54320678911019607843f * logf(kelvin - 10.0f) - 1.19625408914f);
}
static float lerpf(float v0, float v1, float f)
{
return v0 + (v1 - v0) * f;
}
#define PROCESS() \
nr = r * color[0]; \
ng = g * color[1]; \
nb = b * color[2]; \
\
nr = lerpf(r, nr, mix); \
ng = lerpf(g, ng, mix); \
nb = lerpf(b, nb, mix); \
\
l0 = (FFMAX3(r, g, b) + FFMIN3(r, g, b)) + FLT_EPSILON; \
l1 = (FFMAX3(nr, ng, nb) + FFMIN3(nr, ng, nb)) + FLT_EPSILON; \
l = l0 / l1; \
\
r = nr * l; \
g = ng * l; \
b = nb * l; \
\
nr = lerpf(nr, r, preserve); \
ng = lerpf(ng, g, preserve); \
nb = lerpf(nb, b, preserve);
static int temperature_slice8(AVFilterContext *ctx, void *arg, int jobnr, int nb_jobs)
{
ColorTemperatureContext *s = ctx->priv;
AVFrame *frame = arg;
const int width = frame->width;
const int height = frame->height;
const float mix = s->mix;
const float preserve = s->preserve;
const float *color = s->color;
const int slice_start = (height * jobnr) / nb_jobs;
const int slice_end = (height * (jobnr + 1)) / nb_jobs;
const int glinesize = frame->linesize[0];
const int blinesize = frame->linesize[1];
const int rlinesize = frame->linesize[2];
uint8_t *gptr = frame->data[0] + slice_start * glinesize;
uint8_t *bptr = frame->data[1] + slice_start * blinesize;
uint8_t *rptr = frame->data[2] + slice_start * rlinesize;
for (int y = slice_start; y < slice_end; y++) {
for (int x = 0; x < width; x++) {
float g = gptr[x];
float b = bptr[x];
float r = rptr[x];
float nr, ng, nb;
float l0, l1, l;
PROCESS()
gptr[x] = av_clip_uint8(ng);
bptr[x] = av_clip_uint8(nb);
rptr[x] = av_clip_uint8(nr);
}
gptr += glinesize;
bptr += blinesize;
rptr += rlinesize;
}
return 0;
}
static int temperature_slice16(AVFilterContext *ctx, void *arg, int jobnr, int nb_jobs)
{
ColorTemperatureContext *s = ctx->priv;
AVFrame *frame = arg;
const int depth = s->depth;
const int width = frame->width;
const int height = frame->height;
const float preserve = s->preserve;
const float mix = s->mix;
const float *color = s->color;
const int slice_start = (height * jobnr) / nb_jobs;
const int slice_end = (height * (jobnr + 1)) / nb_jobs;
const int glinesize = frame->linesize[0] / sizeof(uint16_t);
const int blinesize = frame->linesize[1] / sizeof(uint16_t);
const int rlinesize = frame->linesize[2] / sizeof(uint16_t);
uint16_t *gptr = (uint16_t *)frame->data[0] + slice_start * glinesize;
uint16_t *bptr = (uint16_t *)frame->data[1] + slice_start * blinesize;
uint16_t *rptr = (uint16_t *)frame->data[2] + slice_start * rlinesize;
for (int y = slice_start; y < slice_end; y++) {
for (int x = 0; x < width; x++) {
float g = gptr[x];
float b = bptr[x];
float r = rptr[x];
float nr, ng, nb;
float l0, l1, l;
PROCESS()
gptr[x] = av_clip_uintp2_c(ng, depth);
bptr[x] = av_clip_uintp2_c(nb, depth);
rptr[x] = av_clip_uintp2_c(nr, depth);
}
gptr += glinesize;
bptr += blinesize;
rptr += rlinesize;
}
return 0;
}
static int temperature_slice8p(AVFilterContext *ctx, void *arg, int jobnr, int nb_jobs)
{
ColorTemperatureContext *s = ctx->priv;
AVFrame *frame = arg;
const int step = s->step;
const int width = frame->width;
const int height = frame->height;
const float mix = s->mix;
const float preserve = s->preserve;
const float *color = s->color;
const uint8_t roffset = s->rgba_map[R];
const uint8_t goffset = s->rgba_map[G];
const uint8_t boffset = s->rgba_map[B];
const int slice_start = (height * jobnr) / nb_jobs;
const int slice_end = (height * (jobnr + 1)) / nb_jobs;
const int linesize = frame->linesize[0];
uint8_t *ptr = frame->data[0] + slice_start * linesize;
for (int y = slice_start; y < slice_end; y++) {
for (int x = 0; x < width; x++) {
float g = ptr[x * step + goffset];
float b = ptr[x * step + boffset];
float r = ptr[x * step + roffset];
float nr, ng, nb;
float l0, l1, l;
PROCESS()
ptr[x * step + goffset] = av_clip_uint8(ng);
ptr[x * step + boffset] = av_clip_uint8(nb);
ptr[x * step + roffset] = av_clip_uint8(nr);
}
ptr += linesize;
}
return 0;
}
static int temperature_slice16p(AVFilterContext *ctx, void *arg, int jobnr, int nb_jobs)
{
ColorTemperatureContext *s = ctx->priv;
AVFrame *frame = arg;
const int step = s->step;
const int depth = s->depth;
const int width = frame->width;
const int height = frame->height;
const float preserve = s->preserve;
const float mix = s->mix;
const float *color = s->color;
const uint8_t roffset = s->rgba_map[R];
const uint8_t goffset = s->rgba_map[G];
const uint8_t boffset = s->rgba_map[B];
const int slice_start = (height * jobnr) / nb_jobs;
const int slice_end = (height * (jobnr + 1)) / nb_jobs;
const int linesize = frame->linesize[0] / sizeof(uint16_t);
uint16_t *ptr = (uint16_t *)frame->data[0] + slice_start * linesize;
for (int y = slice_start; y < slice_end; y++) {
for (int x = 0; x < width; x++) {
float g = ptr[x * step + goffset];
float b = ptr[x * step + boffset];
float r = ptr[x * step + roffset];
float nr, ng, nb;
float l0, l1, l;
PROCESS()
ptr[x * step + goffset] = av_clip_uintp2_c(ng, depth);
ptr[x * step + boffset] = av_clip_uintp2_c(nb, depth);
ptr[x * step + roffset] = av_clip_uintp2_c(nr, depth);
}
ptr += linesize;
}
return 0;
}
static int filter_frame(AVFilterLink *inlink, AVFrame *frame)
{
AVFilterContext *ctx = inlink->dst;
ColorTemperatureContext *s = ctx->priv;
kelvin2rgb(s->temperature, s->color);
ctx->internal->execute(ctx, s->do_slice, frame, NULL,
FFMIN(frame->height, ff_filter_get_nb_threads(ctx)));
return ff_filter_frame(ctx->outputs[0], frame);
}
static av_cold int query_formats(AVFilterContext *ctx)
{
static const enum AVPixelFormat pixel_fmts[] = {
AV_PIX_FMT_RGB24, AV_PIX_FMT_BGR24,
AV_PIX_FMT_RGBA, AV_PIX_FMT_BGRA,
AV_PIX_FMT_ARGB, AV_PIX_FMT_ABGR,
AV_PIX_FMT_0RGB, AV_PIX_FMT_0BGR,
AV_PIX_FMT_RGB0, AV_PIX_FMT_BGR0,
AV_PIX_FMT_GBRP, AV_PIX_FMT_GBRAP,
AV_PIX_FMT_GBRP9, AV_PIX_FMT_GBRP10, AV_PIX_FMT_GBRP12,
AV_PIX_FMT_GBRP14, AV_PIX_FMT_GBRP16,
AV_PIX_FMT_GBRAP10, AV_PIX_FMT_GBRAP12, AV_PIX_FMT_GBRAP16,
AV_PIX_FMT_RGB48, AV_PIX_FMT_BGR48,
AV_PIX_FMT_RGBA64, AV_PIX_FMT_BGRA64,
AV_PIX_FMT_NONE
};
AVFilterFormats *formats = NULL;
formats = ff_make_format_list(pixel_fmts);
if (!formats)
return AVERROR(ENOMEM);
return ff_set_common_formats(ctx, formats);
}
static av_cold int config_input(AVFilterLink *inlink)
{
AVFilterContext *ctx = inlink->dst;
ColorTemperatureContext *s = ctx->priv;
const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(inlink->format);
int planar = desc->flags & AV_PIX_FMT_FLAG_PLANAR;
s->step = desc->nb_components;
if (inlink->format == AV_PIX_FMT_RGB0 ||
inlink->format == AV_PIX_FMT_0RGB ||
inlink->format == AV_PIX_FMT_BGR0 ||
inlink->format == AV_PIX_FMT_0BGR)
s->step = 4;
s->depth = desc->comp[0].depth;
s->do_slice = s->depth <= 8 ? temperature_slice8 : temperature_slice16;
if (!planar)
s->do_slice = s->depth <= 8 ? temperature_slice8p : temperature_slice16p;
ff_fill_rgba_map(s->rgba_map, inlink->format);
return 0;
}
static const AVFilterPad inputs[] = {
{
.name = "default",
.type = AVMEDIA_TYPE_VIDEO,
.filter_frame = filter_frame,
.config_props = config_input,
.needs_writable = 1,
},
{ NULL }
};
static const AVFilterPad outputs[] = {
{
.name = "default",
.type = AVMEDIA_TYPE_VIDEO,
},
{ NULL }
};
#define OFFSET(x) offsetof(ColorTemperatureContext, x)
#define VF AV_OPT_FLAG_FILTERING_PARAM|AV_OPT_FLAG_VIDEO_PARAM|AV_OPT_FLAG_RUNTIME_PARAM
static const AVOption colortemperature_options[] = {
{ "temperature", "set the temperature in Kelvin", OFFSET(temperature), AV_OPT_TYPE_FLOAT, {.dbl=6500}, 1000, 40000, VF },
{ "mix", "set the mix with filtered output", OFFSET(mix), AV_OPT_TYPE_FLOAT, {.dbl=1}, 0, 1, VF },
{ "pl", "set the amount of preserving lightness", OFFSET(preserve), AV_OPT_TYPE_FLOAT, {.dbl=0}, 0, 1, VF },
{ NULL }
};
AVFILTER_DEFINE_CLASS(colortemperature);
AVFilter ff_vf_colortemperature = {
.name = "colortemperature",
.description = NULL_IF_CONFIG_SMALL("Adjust color temperature of video."),
.priv_size = sizeof(ColorTemperatureContext),
.priv_class = &colortemperature_class,
.query_formats = query_formats,
.inputs = inputs,
.outputs = outputs,
.flags = AVFILTER_FLAG_SUPPORT_TIMELINE_GENERIC | AVFILTER_FLAG_SLICE_THREADS,
.process_command = ff_filter_process_command,
};
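
kelvin2rgb() above is a curve fit from temperature to per-channel gains. The standalone copy below (av_clipf swapped for a local saturate(); build with -lm for logf/powf/fmaxf) prints the gains for a warm 3200K and for the 6500K default, where they come out near unity:

#include <math.h>
#include <stdio.h>

static float saturate(float x)
{
    return x < 0.f ? 0.f : (x > 1.f ? 1.f : x);
}

static void kelvin2rgb(float k, float *rgb)
{
    float kelvin = k / 100.0f;
    if (kelvin <= 66.0f) {
        rgb[0] = 1.0f;
        rgb[1] = saturate(0.39008157876901960784f * logf(kelvin) - 0.63184144378862745098f);
    } else {
        const float t = fmaxf(kelvin - 60.0f, 0.0f);
        rgb[0] = saturate(1.29293618606274509804f * powf(t, -0.1332047592f));
        rgb[1] = saturate(1.12989086089529411765f * powf(t, -0.0755148492f));
    }
    if (kelvin >= 66.0f)
        rgb[2] = 1.0f;
    else if (kelvin <= 19.0f)
        rgb[2] = 0.0f;
    else
        rgb[2] = saturate(0.54320678911019607843f * logf(kelvin - 10.0f) - 1.19625408914f);
}

int main(void)
{
    const float temps[2] = { 3200.f, 6500.f };
    for (int i = 0; i < 2; i++) {
        float rgb[3];
        kelvin2rgb(temps[i], rgb);
        printf("%5.0fK -> gains r=%.3f g=%.3f b=%.3f\n", temps[i], rgb[0], rgb[1], rgb[2]);
    }
    return 0;
}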

@@ -1,287 +0,0 @@
/*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include "libavutil/opt.h"
#include "libavutil/avassert.h"
#include "libavutil/pixdesc.h"
#include "internal.h"
typedef struct EPXContext {
const AVClass *class;
int n;
int (*epx_slice)(AVFilterContext *ctx, void *arg, int jobnr, int nb_jobs);
} EPXContext;
typedef struct ThreadData {
AVFrame *in, *out;
} ThreadData;
#define OFFSET(x) offsetof(EPXContext, x)
#define FLAGS AV_OPT_FLAG_FILTERING_PARAM|AV_OPT_FLAG_VIDEO_PARAM
static const AVOption epx_options[] = {
{ "n", "set scale factor", OFFSET(n), AV_OPT_TYPE_INT, {.i64 = 3}, 2, 3, .flags = FLAGS },
{ NULL }
};
AVFILTER_DEFINE_CLASS(epx);
static int epx2_slice(AVFilterContext *ctx, void *arg, int jobnr, int nb_jobs)
{
ThreadData *td = arg;
const AVFrame *in = td->in;
AVFrame *out = td->out;
const int slice_start = (in->height * jobnr ) / nb_jobs;
const int slice_end = (in->height * (jobnr+1)) / nb_jobs;
for (int p = 0; p < 1; p++) {
const int width = in->width;
const int height = in->height;
const int src_linesize = in->linesize[p] / 4;
const int dst_linesize = out->linesize[p] / 4;
const uint32_t *src = (const uint32_t *)in->data[p];
uint32_t *dst = (uint32_t *)out->data[p];
const uint32_t *src_line[3];
src_line[0] = src + src_linesize * FFMAX(slice_start - 1, 0);
src_line[1] = src + src_linesize * slice_start;
src_line[2] = src + src_linesize * FFMIN(slice_start + 1, height-1);
for (int y = slice_start; y < slice_end; y++) {
uint32_t *dst_line[2];
dst_line[0] = dst + dst_linesize*2*y;
dst_line[1] = dst + dst_linesize*(2*y+1);
for (int x = 0; x < width; x++) {
uint32_t E0, E1, E2, E3;
uint32_t B, D, E, F, H;
B = src_line[0][x];
D = src_line[1][FFMAX(x-1, 0)];
E = src_line[1][x];
F = src_line[1][FFMIN(x+1, width - 1)];
H = src_line[2][x];
if (B != H && D != F) {
E0 = D == B ? D : E;
E1 = B == F ? F : E;
E2 = D == H ? D : E;
E3 = H == F ? F : E;
} else {
E0 = E;
E1 = E;
E2 = E;
E3 = E;
}
dst_line[0][x*2] = E0;
dst_line[0][x*2+1] = E1;
dst_line[1][x*2] = E2;
dst_line[1][x*2+1] = E3;
}
src_line[0] = src_line[1];
src_line[1] = src_line[2];
src_line[2] = src_line[1];
if (y < height - 1)
src_line[2] += src_linesize;
}
}
return 0;
}
static int epx3_slice(AVFilterContext *ctx, void *arg, int jobnr, int nb_jobs)
{
ThreadData *td = arg;
const AVFrame *in = td->in;
AVFrame *out = td->out;
const int slice_start = (in->height * jobnr ) / nb_jobs;
const int slice_end = (in->height * (jobnr+1)) / nb_jobs;
for (int p = 0; p < 1; p++) {
const int width = in->width;
const int height = in->height;
const int src_linesize = in->linesize[p] / 4;
const int dst_linesize = out->linesize[p] / 4;
const uint32_t *src = (const uint32_t *)in->data[p];
uint32_t *dst = (uint32_t *)out->data[p];
const uint32_t *src_line[3];
src_line[0] = src + src_linesize * FFMAX(slice_start - 1, 0);
src_line[1] = src + src_linesize * slice_start;
src_line[2] = src + src_linesize * FFMIN(slice_start + 1, height-1);
for (int y = slice_start; y < slice_end; y++) {
uint32_t *dst_line[3];
dst_line[0] = dst + dst_linesize*3*y;
dst_line[1] = dst + dst_linesize*(3*y+1);
dst_line[2] = dst + dst_linesize*(3*y+2);
for (int x = 0; x < width; x++) {
uint32_t E0, E1, E2, E3, E4, E5, E6, E7, E8;
uint32_t A, B, C, D, E, F, G, H, I;
A = src_line[0][FFMAX(x-1, 0)];
B = src_line[0][x];
C = src_line[0][FFMIN(x+1, width - 1)];
D = src_line[1][FFMAX(x-1, 0)];
E = src_line[1][x];
F = src_line[1][FFMIN(x+1, width - 1)];
G = src_line[2][FFMAX(x-1, 0)];
H = src_line[2][x];
I = src_line[2][FFMIN(x+1, width - 1)];
if (B != H && D != F) {
E0 = D == B ? D : E;
E1 = (D == B && E != C) || (B == F && E != A) ? B : E;
E2 = B == F ? F : E;
E3 = (D == B && E != G) || (D == H && E != A) ? D : E;
E4 = E;
E5 = (B == F && E != I) || (H == F && E != C) ? F : E;
E6 = D == H ? D : E;
E7 = (D == H && E != I) || (H == F && E != G) ? H : E;
E8 = H == F ? F : E;
} else {
E0 = E;
E1 = E;
E2 = E;
E3 = E;
E4 = E;
E5 = E;
E6 = E;
E7 = E;
E8 = E;
}
dst_line[0][x*3] = E0;
dst_line[0][x*3+1] = E1;
dst_line[0][x*3+2] = E2;
dst_line[1][x*3] = E3;
dst_line[1][x*3+1] = E4;
dst_line[1][x*3+2] = E5;
dst_line[2][x*3] = E6;
dst_line[2][x*3+1] = E7;
dst_line[2][x*3+2] = E8;
}
src_line[0] = src_line[1];
src_line[1] = src_line[2];
src_line[2] = src_line[1];
if (y < height - 1)
src_line[2] += src_linesize;
}
}
return 0;
}
static int config_output(AVFilterLink *outlink)
{
AVFilterContext *ctx = outlink->src;
EPXContext *s = ctx->priv;
AVFilterLink *inlink = ctx->inputs[0];
const AVPixFmtDescriptor *desc;
desc = av_pix_fmt_desc_get(outlink->format);
if (!desc)
return AVERROR_BUG;
outlink->w = inlink->w * s->n;
outlink->h = inlink->h * s->n;
switch (s->n) {
case 2:
s->epx_slice = epx2_slice;
break;
case 3:
s->epx_slice = epx3_slice;
break;
}
return 0;
}
static int query_formats(AVFilterContext *ctx)
{
static const enum AVPixelFormat pix_fmts[] = {
AV_PIX_FMT_RGBA, AV_PIX_FMT_BGRA, AV_PIX_FMT_ARGB, AV_PIX_FMT_ABGR,
AV_PIX_FMT_NONE,
};
AVFilterFormats *fmts_list = ff_make_format_list(pix_fmts);
if (!fmts_list)
return AVERROR(ENOMEM);
return ff_set_common_formats(ctx, fmts_list);
}
static int filter_frame(AVFilterLink *inlink, AVFrame *in)
{
AVFilterContext *ctx = inlink->dst;
AVFilterLink *outlink = ctx->outputs[0];
EPXContext *s = ctx->priv;
ThreadData td;
AVFrame *out = ff_get_video_buffer(outlink, outlink->w, outlink->h);
if (!out) {
av_frame_free(&in);
return AVERROR(ENOMEM);
}
av_frame_copy_props(out, in);
td.in = in, td.out = out;
ctx->internal->execute(ctx, s->epx_slice, &td, NULL, FFMIN(inlink->h, ff_filter_get_nb_threads(ctx)));
av_frame_free(&in);
return ff_filter_frame(outlink, out);
}
static const AVFilterPad inputs[] = {
{
.name = "default",
.type = AVMEDIA_TYPE_VIDEO,
.filter_frame = filter_frame,
},
{ NULL }
};
static const AVFilterPad outputs[] = {
{
.name = "default",
.type = AVMEDIA_TYPE_VIDEO,
.config_props = config_output,
},
{ NULL }
};
AVFilter ff_vf_epx = {
.name = "epx",
.description = NULL_IF_CONFIG_SMALL("Scale the input using EPX algorithm."),
.inputs = inputs,
.outputs = outputs,
.query_formats = query_formats,
.priv_size = sizeof(EPXContext),
.priv_class = &epx_class,
.flags = AVFILTER_FLAG_SLICE_THREADS,
};
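/* Example invocation (a sketch; it assumes the scale-factor option
 * registered in epx_class earlier in this file is named "n", with 2 and 3
 * the supported factors):
 *   ffmpeg -i pixel_art.png -vf epx=n=2 out.png
 */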

View File

@@ -1,591 +0,0 @@
/*
* Copyright (c) 2021 Paul B Mahol
*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include "libavutil/common.h"
#include "libavutil/imgutils.h"
#include "libavutil/opt.h"
#include "libavutil/pixdesc.h"
#include "avfilter.h"
#include "formats.h"
#include "internal.h"
#include "video.h"
typedef struct ESTDIFContext {
const AVClass *class;
int mode; ///< 0 is frame, 1 is field
int parity; ///< frame field parity
int deint; ///< which frames to deinterlace
int rslope; ///< best edge slope search radius
int redge; ///< best edge match search radius
int interp; ///< type of interpolation
int linesize[4]; ///< bytes of pixel data per line for each plane
int planewidth[4]; ///< width of each plane
int planeheight[4]; ///< height of each plane
int field; ///< which field are we on, 0 or 1
int eof;
int depth;
int half;
int nb_planes;
int nb_threads;
int64_t pts;
AVFrame *prev;
void (*interpolate)(struct ESTDIFContext *s, uint8_t *dst,
const uint8_t *prev_line, const uint8_t *next_line,
const uint8_t *prev2_line, const uint8_t *next2_line,
const uint8_t *prev3_line, const uint8_t *next3_line,
int x, int width, int rslope, int redge, unsigned half,
int depth, int *K);
unsigned (*mid_8[3])(const uint8_t *const prev,
const uint8_t *const next,
const uint8_t *const prev2,
const uint8_t *const next2,
const uint8_t *const prev3,
const uint8_t *const next3,
int end, int x, int k, int depth);
unsigned (*mid_16[3])(const uint16_t *const prev,
const uint16_t *const next,
const uint16_t *const prev2,
const uint16_t *const next2,
const uint16_t *const prev3,
const uint16_t *const next3,
int end, int x, int k, int depth);
} ESTDIFContext;
#define MAX_R 15
#define S (MAX_R * 2 + 1)
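/* S sizes the per-pixel score tables: one slot per candidate slope offset
 * in -MAX_R..MAX_R. */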
#define OFFSET(x) offsetof(ESTDIFContext, x)
#define FLAGS AV_OPT_FLAG_VIDEO_PARAM|AV_OPT_FLAG_FILTERING_PARAM|AV_OPT_FLAG_RUNTIME_PARAM
#define CONST(name, help, val, unit) { name, help, 0, AV_OPT_TYPE_CONST, {.i64=val}, 0, 0, FLAGS, unit }
static const AVOption estdif_options[] = {
{ "mode", "specify the mode", OFFSET(mode), AV_OPT_TYPE_INT, {.i64=1}, 0, 1, FLAGS, "mode" },
CONST("frame", "send one frame for each frame", 0, "mode"),
CONST("field", "send one frame for each field", 1, "mode"),
{ "parity", "specify the assumed picture field parity", OFFSET(parity), AV_OPT_TYPE_INT, {.i64=-1}, -1, 1, FLAGS, "parity" },
CONST("tff", "assume top field first", 0, "parity"),
CONST("bff", "assume bottom field first", 1, "parity"),
CONST("auto", "auto detect parity", -1, "parity"),
{ "deint", "specify which frames to deinterlace", OFFSET(deint), AV_OPT_TYPE_INT, {.i64=0}, 0, 1, FLAGS, "deint" },
CONST("all", "deinterlace all frames", 0, "deint"),
CONST("interlaced", "only deinterlace frames marked as interlaced", 1, "deint"),
{ "rslope", "specify the search radius for edge slope tracing", OFFSET(rslope), AV_OPT_TYPE_INT, {.i64=1}, 1, MAX_R, FLAGS, },
{ "redge", "specify the search radius for best edge matching", OFFSET(redge), AV_OPT_TYPE_INT, {.i64=2}, 0, MAX_R, FLAGS, },
{ "interp", "specify the type of interpolation", OFFSET(interp), AV_OPT_TYPE_INT, {.i64=1}, 0, 2, FLAGS, "interp" },
CONST("2p", "two-point interpolation", 0, "interp"),
CONST("4p", "four-point interpolation", 1, "interp"),
CONST("6p", "six-point interpolation", 2, "interp"),
{ NULL }
};
AVFILTER_DEFINE_CLASS(estdif);
static int query_formats(AVFilterContext *ctx)
{
static const enum AVPixelFormat pix_fmts[] = {
AV_PIX_FMT_YUV410P, AV_PIX_FMT_YUV411P,
AV_PIX_FMT_YUV420P, AV_PIX_FMT_YUV422P,
AV_PIX_FMT_YUV440P, AV_PIX_FMT_YUV444P,
AV_PIX_FMT_YUVJ444P, AV_PIX_FMT_YUVJ440P,
AV_PIX_FMT_YUVJ422P, AV_PIX_FMT_YUVJ420P,
AV_PIX_FMT_YUVJ411P,
AV_PIX_FMT_YUVA420P, AV_PIX_FMT_YUVA422P, AV_PIX_FMT_YUVA444P,
AV_PIX_FMT_GBRP, AV_PIX_FMT_GBRAP,
AV_PIX_FMT_GRAY8,
AV_PIX_FMT_GRAY9, AV_PIX_FMT_GRAY10, AV_PIX_FMT_GRAY12, AV_PIX_FMT_GRAY14, AV_PIX_FMT_GRAY16,
AV_PIX_FMT_YUV420P9, AV_PIX_FMT_YUV422P9, AV_PIX_FMT_YUV444P9,
AV_PIX_FMT_YUV420P10, AV_PIX_FMT_YUV422P10, AV_PIX_FMT_YUV444P10,
AV_PIX_FMT_YUV440P10,
AV_PIX_FMT_YUV420P12, AV_PIX_FMT_YUV422P12, AV_PIX_FMT_YUV444P12,
AV_PIX_FMT_YUV440P12,
AV_PIX_FMT_YUV420P14, AV_PIX_FMT_YUV422P14, AV_PIX_FMT_YUV444P14,
AV_PIX_FMT_YUV420P16, AV_PIX_FMT_YUV422P16, AV_PIX_FMT_YUV444P16,
AV_PIX_FMT_GBRP9, AV_PIX_FMT_GBRP10, AV_PIX_FMT_GBRP12, AV_PIX_FMT_GBRP14, AV_PIX_FMT_GBRP16,
AV_PIX_FMT_YUVA444P9, AV_PIX_FMT_YUVA444P10, AV_PIX_FMT_YUVA444P12, AV_PIX_FMT_YUVA444P16,
AV_PIX_FMT_YUVA422P9, AV_PIX_FMT_YUVA422P10, AV_PIX_FMT_YUVA422P12, AV_PIX_FMT_YUVA422P16,
AV_PIX_FMT_YUVA420P9, AV_PIX_FMT_YUVA420P10, AV_PIX_FMT_YUVA420P16,
AV_PIX_FMT_GBRAP10, AV_PIX_FMT_GBRAP12, AV_PIX_FMT_GBRAP16,
AV_PIX_FMT_NONE
};
AVFilterFormats *fmts_list = ff_make_format_list(pix_fmts);
if (!fmts_list)
return AVERROR(ENOMEM);
return ff_set_common_formats(ctx, fmts_list);
}
static int config_output(AVFilterLink *outlink)
{
AVFilterContext *ctx = outlink->src;
AVFilterLink *inlink = ctx->inputs[0];
outlink->time_base.num = inlink->time_base.num;
outlink->time_base.den = inlink->time_base.den * 2;
outlink->frame_rate.num = inlink->frame_rate.num * 2;
outlink->frame_rate.den = inlink->frame_rate.den;
return 0;
}
typedef struct ThreadData {
AVFrame *out, *in;
} ThreadData;
#define MIDL(type, ss) \
static unsigned midl_##ss(const type *const prev, \
const type *const next, \
int end, int x, int k) \
{ \
return (prev[av_clip(x + k, 0, end)] + \
next[av_clip(x - k, 0, end)] + 1) >> 1; \
}
MIDL(uint8_t, 8)
MIDL(uint16_t, 16)
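/* midl_*() averages the two surrounding field lines along candidate slope k;
 * cost_*() below scores a slope by how far this average drifts from the
 * pixels directly above and below. */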
#define MID2(type, ss) \
static unsigned mid2_##ss(const type *const prev, \
const type *const next, \
const type *const prev2, \
const type *const next2, \
const type *const prev3, \
const type *const next3, \
int end, int x, int k, int depth) \
{ \
return (prev[av_clip(x + k, 0, end)] + \
next[av_clip(x - k, 0, end)] + 1) >> 1; \
}
MID2(uint8_t, 8)
MID2(uint16_t, 16)
#define MID4(type, ss) \
static unsigned mid4_##ss(const type *const prev, \
const type *const next, \
const type *const prev2, \
const type *const next2, \
const type *const prev3, \
const type *const next3, \
int end, int x, int k, int depth) \
{ \
return av_clip_uintp2_c(( \
9 * (prev[av_clip(x + k, 0, end)] + \
next[av_clip(x - k, 0, end)]) - \
1 * (prev2[av_clip(x + k*3, 0, end)] + \
next2[av_clip(x - k*3, 0, end)]) + 8) >> 4, \
depth); \
}
MID4(uint8_t, 8)
MID4(uint16_t, 16)
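/* mid4_*() is a 4-tap interpolator along the detected slope with weights
 * (-1, 9, 9, -1) / 16, trading the softness of the plain 2-tap average for
 * extra sharpness. */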
#define MID6(type, ss) \
static unsigned mid6_##ss(const type *const prev, \
const type *const next, \
const type *const prev2, \
const type *const next2, \
const type *const prev3, \
const type *const next3, \
int end, int x, int k, int depth) \
{ \
return av_clip_uintp2_c(( \
20 * (prev[av_clip(x + k, 0, end)] + \
next[av_clip(x - k, 0, end)]) - \
5 * (prev2[av_clip(x + k*3, 0, end)] + \
next2[av_clip(x - k*3, 0, end)]) + \
1 * (prev3[av_clip(x + k*5, 0, end)] + \
next3[av_clip(x - k*5, 0, end)]) + 16) >> 5, \
depth); \
}
MID6(uint8_t, 8)
MID6(uint16_t, 16)
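/* mid6_*() extends this to 6 taps with weights (1, -5, 20, 20, -5, 1) / 32,
 * additionally pulling in the field lines five rows away. */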
#define DIFF(type, ss) \
static unsigned diff_##ss(const type *const prev, \
const type *const next, \
int end, int x, int k, int j) \
{ \
return FFABS(prev[av_clip(x + k + j, 0, end)] - \
next[av_clip(x - k + j, 0, end)]); \
}
DIFF(uint8_t, 8)
DIFF(uint16_t, 16)
#define COST(type, ss) \
static unsigned cost_##ss(const type *const prev, \
const type *const next, \
int end, int x, int k) \
{ \
const int m = midl_##ss(prev, next, end, x, k); \
const int p = prev[x]; \
const int n = next[x]; \
\
return FFABS(p - m) + FFABS(n - m); \
}
COST(uint8_t, 8)
COST(uint16_t, 16)
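/* The interpolator below searches for the best edge slope k in two stages:
 * a full scan of -rslope..rslope whenever the k inherited from the previous
 * column lies outside that window, then a refinement over k-rslope..k+rslope.
 * Each candidate's score sums pixel differences along the slope across three
 * line pairs, plus the midpoint cost weighted by f = redge + 2 and a bias of
 * h*|slope| towards near-vertical interpolation. */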
#define INTERPOLATE(type, atype, max, ss) \
static void interpolate_##ss(ESTDIFContext *s, uint8_t *ddst, \
const uint8_t *const pprev_line, \
const uint8_t *const nnext_line, \
const uint8_t *const pprev2_line, \
const uint8_t *const nnext2_line, \
const uint8_t *const pprev3_line, \
const uint8_t *const nnext3_line, \
int x, int width, int rslope, \
int redge, unsigned h, int depth, \
int *K) \
{ \
type *dst = (type *)ddst; \
const type *const prev_line = (const type *const)pprev_line; \
const type *const prev2_line = (const type *const)pprev2_line; \
const type *const prev3_line = (const type *const)pprev3_line; \
const type *const next_line = (const type *const)nnext_line; \
const type *const next2_line = (const type *const)nnext2_line; \
const type *const next3_line = (const type *const)nnext3_line; \
const int interp = s->interp; \
const int end = width - 1; \
const atype f = redge + 2; \
atype sd[S], sD[S], di = 0; \
atype dmin = max; \
int k = *K; \
\
for (int i = -rslope; i <= rslope && abs(k) > rslope; i++) { \
atype sum = 0; \
\
for (int j = -redge; j <= redge; j++) { \
sum += diff_##ss(prev_line, next_line, end, x, i, j); \
sum += diff_##ss(prev2_line, prev_line, end, x, i, j); \
sum += diff_##ss(next_line, next2_line, end, x, i, j); \
} \
\
sD[i + rslope] = sum; \
sD[i + rslope] += f * cost_##ss(prev_line, next_line, end, x, i); \
sD[i + rslope] += h * abs(i); \
\
dmin = FFMIN(sD[i + rslope], dmin); \
} \
\
for (int i = -rslope; i <= rslope; i++) { \
atype sum = 0; \
\
for (int j = -redge; j <= redge; j++) { \
sum += diff_##ss(prev_line, next_line, end, x, k + i, j); \
sum += diff_##ss(prev2_line, prev_line, end, x, k + i, j); \
sum += diff_##ss(next_line, next2_line, end, x, k + i, j); \
} \
\
sd[i + rslope] = sum; \
sd[i + rslope] += f * cost_##ss(prev_line, next_line, end, x, k + i); \
sd[i + rslope] += h * abs(k + i); \
\
dmin = FFMIN(sd[i + rslope], dmin); \
} \
\
for (int i = -rslope; i <= rslope && abs(k) > rslope; i++) { \
if (dmin == sD[i + rslope]) { \
di = 1; \
k = i; \
break; \
} \
} \
\
for (int i = -rslope; i <= rslope && !di; i++) { \
if (dmin == sd[i + rslope]) { \
k += i; \
break; \
} \
} \
\
dst[x] = s->mid_##ss[interp](prev_line, next_line, \
prev2_line, next2_line, \
prev3_line, next3_line, \
end, x, k, depth); \
\
*K = k; \
}
INTERPOLATE(uint8_t, unsigned, UINT_MAX, 8)
INTERPOLATE(uint16_t, uint64_t, UINT64_MAX, 16)
static int deinterlace_slice(AVFilterContext *ctx, void *arg,
int jobnr, int nb_jobs)
{
ESTDIFContext *s = ctx->priv;
ThreadData *td = arg;
AVFrame *out = td->out;
AVFrame *in = td->in;
const int rslope = s->rslope;
const int redge = s->redge;
const int half = s->half;
const int depth = s->depth;
const int interlaced = in->interlaced_frame;
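/* Choose the field to reconstruct: explicit parity wins, otherwise fall
 * back to the frame's top_field_first flag, assuming top-field-first for
 * frames not marked as interlaced. */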
const int tff = (s->field == (s->parity == -1 ? interlaced ? in->top_field_first : 1 :
s->parity ^ 1));
for (int plane = 0; plane < s->nb_planes; plane++) {
const uint8_t *src_data = in->data[plane];
uint8_t *dst_data = out->data[plane];
const int linesize = s->linesize[plane];
const int width = s->planewidth[plane];
const int height = s->planeheight[plane];
const int src_linesize = in->linesize[plane];
const int dst_linesize = out->linesize[plane];
const int start = (height * jobnr) / nb_jobs;
const int end = (height * (jobnr+1)) / nb_jobs;
const uint8_t *prev_line, *prev2_line, *next_line, *next2_line, *in_line;
const uint8_t *prev3_line, *next3_line;
uint8_t *out_line;
int y_out;
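/* First pass: copy the lines of the preserved field verbatim.  The second
 * pass below synthesizes the missing field with the edge-directed
 * interpolator. */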
y_out = start + (tff ^ (start & 1));
in_line = src_data + (y_out * src_linesize);
out_line = dst_data + (y_out * dst_linesize);
while (y_out < end) {
memcpy(out_line, in_line, linesize);
y_out += 2;
in_line += src_linesize * 2;
out_line += dst_linesize * 2;
}
y_out = start + ((!tff) ^ (start & 1));
out_line = dst_data + (y_out * dst_linesize);
for (int y = y_out; y < end; y += 2) {
int y_prev3_in = y - 5;
int y_next3_in = y + 5;
int y_prev2_in = y - 3;
int y_next2_in = y + 3;
int y_prev_in = y - 1;
int y_next_in = y + 1;
int k;
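/* Mirror out-of-range taps back into the frame in steps of two lines so
 * that every tap stays within the same field. */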
while (y_prev3_in < 0)
y_prev3_in += 2;
while (y_next3_in >= height)
y_next3_in -= 2;
while (y_prev2_in < 0)
y_prev2_in += 2;
while (y_next2_in >= height)
y_next2_in -= 2;
while (y_prev_in < 0)
y_prev_in += 2;
while (y_next_in >= height)
y_next_in -= 2;
prev3_line = src_data + (y_prev3_in * src_linesize);
next3_line = src_data + (y_next3_in * src_linesize);
prev2_line = src_data + (y_prev2_in * src_linesize);
next2_line = src_data + (y_next2_in * src_linesize);
prev_line = src_data + (y_prev_in * src_linesize);
next_line = src_data + (y_next_in * src_linesize);
k = 0;
for (int x = 0; x < width; x++) {
s->interpolate(s, out_line,
prev_line, next_line,
prev2_line, next2_line,
prev3_line, next3_line,
x, width, rslope, redge, half, depth, &k);
}
out_line += 2 * dst_linesize;
}
}
return 0;
}
static int filter(AVFilterContext *ctx, int is_second, AVFrame *in)
{
ESTDIFContext *s = ctx->priv;
AVFilterLink *outlink = ctx->outputs[0];
AVFrame *out;
ThreadData td;
out = ff_get_video_buffer(outlink, outlink->w, outlink->h);
if (!out)
return AVERROR(ENOMEM);
av_frame_copy_props(out, in);
out->interlaced_frame = 0;
out->pts = s->pts;
td.out = out; td.in = in;
ctx->internal->execute(ctx, deinterlace_slice, &td, NULL,
FFMIN(s->planeheight[1] / 2, s->nb_threads));
if (s->mode)
s->field = !s->field;
return ff_filter_frame(outlink, out);
}
static int config_input(AVFilterLink *inlink)
{
AVFilterContext *ctx = inlink->dst;
ESTDIFContext *s = ctx->priv;
const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(inlink->format);
int ret;
if ((ret = av_image_fill_linesizes(s->linesize, inlink->format, inlink->w)) < 0)
return ret;
s->planeheight[1] = s->planeheight[2] = AV_CEIL_RSHIFT(inlink->h, desc->log2_chroma_h);
s->planeheight[0] = s->planeheight[3] = inlink->h;
s->planewidth[1] = s->planewidth[2] = AV_CEIL_RSHIFT(inlink->w, desc->log2_chroma_w);
s->planewidth[0] = s->planewidth[3] = inlink->w;
if (inlink->h < 3) {
av_log(ctx, AV_LOG_ERROR, "Video of less than 3 lines is not supported\n");
return AVERROR(EINVAL);
}
s->nb_planes = av_pix_fmt_count_planes(inlink->format);
s->nb_threads = ff_filter_get_nb_threads(ctx);
s->depth = desc->comp[0].depth;
s->interpolate = s->depth <= 8 ? interpolate_8 : interpolate_16;
s->mid_8[0] = mid2_8;
s->mid_8[1] = mid4_8;
s->mid_8[2] = mid6_8;
s->mid_16[0] = mid2_16;
s->mid_16[1] = mid4_16;
s->mid_16[2] = mid6_16;
s->half = 1 << (s->depth - 1);
return 0;
}
static int filter_frame(AVFilterLink *inlink, AVFrame *in)
{
AVFilterContext *ctx = inlink->dst;
ESTDIFContext *s = ctx->priv;
int ret;
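/* The filter keeps one frame of look-ahead: the very first frame is only
 * buffered.  Output pts values are doubled because config_output() halved
 * the output time base. */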
if (!s->prev) {
s->prev = in;
return 0;
}
if ((s->deint && !in->interlaced_frame) || ctx->is_disabled) {
s->prev->pts *= 2;
ret = ff_filter_frame(ctx->outputs[0], s->prev);
s->prev = in;
return ret;
}
s->pts = s->prev->pts * 2;
ret = filter(ctx, 0, s->prev);
if (ret < 0 || s->mode == 0) {
av_frame_free(&s->prev);
s->prev = in;
return ret;
}
s->pts = s->prev->pts + in->pts;
ret = filter(ctx, 1, s->prev);
av_frame_free(&s->prev);
s->prev = in;
return ret;
}
static int request_frame(AVFilterLink *link)
{
AVFilterContext *ctx = link->src;
ESTDIFContext *s = ctx->priv;
int ret;
if (s->eof)
return AVERROR_EOF;
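/* On EOF, flush the buffered frame by feeding a clone of it back through
 * filter_frame(), advanced by one output frame duration. */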
ret = ff_request_frame(ctx->inputs[0]);
if (ret == AVERROR_EOF && s->prev) {
AVFrame *next = av_frame_clone(s->prev);
if (!next)
return AVERROR(ENOMEM);
next->pts = s->prev->pts + av_rescale_q(1, av_inv_q(ctx->outputs[0]->frame_rate),
ctx->outputs[0]->time_base);
s->eof = 1;
ret = filter_frame(ctx->inputs[0], next);
} else if (ret < 0) {
return ret;
}
return ret;
}
static av_cold void uninit(AVFilterContext *ctx)
{
ESTDIFContext *s = ctx->priv;
av_frame_free(&s->prev);
}
static const AVFilterPad estdif_inputs[] = {
{
.name = "default",
.type = AVMEDIA_TYPE_VIDEO,
.filter_frame = filter_frame,
.config_props = config_input,
},
{ NULL }
};
static const AVFilterPad estdif_outputs[] = {
{
.name = "default",
.type = AVMEDIA_TYPE_VIDEO,
.config_props = config_output,
.request_frame = request_frame,
},
{ NULL }
};
AVFilter ff_vf_estdif = {
.name = "estdif",
.description = NULL_IF_CONFIG_SMALL("Apply Edge Slope Tracing deinterlace."),
.priv_size = sizeof(ESTDIFContext),
.priv_class = &estdif_class,
.uninit = uninit,
.query_formats = query_formats,
.inputs = estdif_inputs,
.outputs = estdif_outputs,
.flags = AVFILTER_FLAG_SUPPORT_TIMELINE_INTERNAL | AVFILTER_FLAG_SLICE_THREADS,
.process_command = ff_filter_process_command,
};
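/* Example invocation (a sketch; option names as registered in
 * estdif_options above):
 *   ffmpeg -i interlaced.ts -vf estdif=mode=field:parity=auto out.mkv
 */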

View File

@@ -1,144 +0,0 @@
/*
* Copyright (c) 2021 Paul B Mahol
*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include <float.h>
#include "libavutil/opt.h"
#include "libavutil/imgutils.h"
#include "avfilter.h"
#include "formats.h"
#include "internal.h"
#include "video.h"
typedef struct ExposureContext {
const AVClass *class;
float exposure;
float black;
float scale;
int (*do_slice)(AVFilterContext *s, void *arg,
int jobnr, int nb_jobs);
} ExposureContext;
static int exposure_slice(AVFilterContext *ctx, void *arg, int jobnr, int nb_jobs)
{
ExposureContext *s = ctx->priv;
AVFrame *frame = arg;
const int width = frame->width;
const int height = frame->height;
const int slice_start = (height * jobnr) / nb_jobs;
const int slice_end = (height * (jobnr + 1)) / nb_jobs;
const float black = s->black;
const float scale = s->scale;
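/* Linear exposure/black-level mapping: out = (in - black) * scale, with
 * scale = 1 / (2^-exposure - black) so an input of 2^-exposure maps to 1.0
 * after black subtraction. */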
for (int p = 0; p < 3; p++) {
const int linesize = frame->linesize[p] / 4;
float *ptr = (float *)frame->data[p] + slice_start * linesize;
for (int y = slice_start; y < slice_end; y++) {
for (int x = 0; x < width; x++)
ptr[x] = (ptr[x] - black) * scale;
ptr += linesize;
}
}
return 0;
}
static int filter_frame(AVFilterLink *inlink, AVFrame *frame)
{
AVFilterContext *ctx = inlink->dst;
ExposureContext *s = ctx->priv;
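/* Recomputed per frame so runtime changes to the options take effect. */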
s->scale = 1.f / (exp2f(-s->exposure) - s->black);
ctx->internal->execute(ctx, s->do_slice, frame, NULL,
FFMIN(frame->height, ff_filter_get_nb_threads(ctx)));
return ff_filter_frame(ctx->outputs[0], frame);
}
static av_cold int query_formats(AVFilterContext *ctx)
{
static const enum AVPixelFormat pixel_fmts[] = {
AV_PIX_FMT_GBRPF32, AV_PIX_FMT_GBRAPF32,
AV_PIX_FMT_NONE
};
AVFilterFormats *formats = NULL;
formats = ff_make_format_list(pixel_fmts);
if (!formats)
return AVERROR(ENOMEM);
return ff_set_common_formats(ctx, formats);
}
static av_cold int config_input(AVFilterLink *inlink)
{
AVFilterContext *ctx = inlink->dst;
ExposureContext *s = ctx->priv;
s->do_slice = exposure_slice;
return 0;
}
static const AVFilterPad exposure_inputs[] = {
{
.name = "default",
.type = AVMEDIA_TYPE_VIDEO,
.needs_writable = 1,
.filter_frame = filter_frame,
.config_props = config_input,
},
{ NULL }
};
static const AVFilterPad exposure_outputs[] = {
{
.name = "default",
.type = AVMEDIA_TYPE_VIDEO,
},
{ NULL }
};
#define OFFSET(x) offsetof(ExposureContext, x)
#define VF AV_OPT_FLAG_FILTERING_PARAM|AV_OPT_FLAG_VIDEO_PARAM|AV_OPT_FLAG_RUNTIME_PARAM
static const AVOption exposure_options[] = {
{ "exposure", "set the exposure correction", OFFSET(exposure), AV_OPT_TYPE_FLOAT, {.dbl=0}, -3, 3, VF },
{ "black", "set the black level correction", OFFSET(black), AV_OPT_TYPE_FLOAT, {.dbl=0}, -1, 1, VF },
{ NULL }
};
AVFILTER_DEFINE_CLASS(exposure);
AVFilter ff_vf_exposure = {
.name = "exposure",
.description = NULL_IF_CONFIG_SMALL("Adjust exposure of the video stream."),
.priv_size = sizeof(ExposureContext),
.priv_class = &exposure_class,
.query_formats = query_formats,
.inputs = exposure_inputs,
.outputs = exposure_outputs,
.flags = AVFILTER_FLAG_SUPPORT_TIMELINE_GENERIC | AVFILTER_FLAG_SLICE_THREADS,
.process_command = ff_filter_process_command,
};
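/* Example invocation (a sketch; requires a float pixel format such as
 * GBRPF32, e.g. decoded from EXR input):
 *   ffmpeg -i in.exr -vf exposure=exposure=1.5:black=0.01 out.exr
 */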

Some files were not shown because too many files have changed in this diff.