SkeyeExPlayer (Windows) Development: Framework Overview

SkeyeExPlayer for Windows is a full-featured player built on ffmpeg. During its development we studied many open-source players, such as VLC and ffplay. VLC is by far the most powerful of them, but its framework is very large and still carries quite a few problems, so we set it aside; the others are closer to demo programs and could only serve as partial references. SkeyeExPlayer therefore follows the usual Skeye-series principle of staying small and focused, with simple interfaces and strong functionality, and was redesigned around a new framework that supports multi-threaded use and multiple player instances running concurrently, just like SkeyePlayer. Of course, we also sincerely thank the authors of the major open-source players and of ffmpeg for their selfless contributions.

SkeyeExPlayer consists of four modules: the open module, the stream-reading module, the decode module, and the render module. In detail:

(1) Open module. Opening a stream is straightforward; the calls follow the textbook pattern:

player->avformat_context = avformat_alloc_context();
player->avformat_context->interrupt_callback.callback = interrupt_cb;
player->avformat_context->interrupt_callback.opaque = player;

// open input file
AVDictionary *options = NULL;
//av_dict_set(&options, "rtsp_transport", "udp", 0);
if (avformat_open_input(&player->avformat_context, url, fmt, &options) != 0)
{
    goto error_handler;
}

// find stream info
if (avformat_find_stream_info(player->avformat_context, NULL) < 0)
{
    goto error_handler;
}

// set current audio & video stream
// this selection logic runs once for the audio stream and once for the video stream:
// first locate the stream index (idx) of the requested media type, then (re)open the decoder
for (i=0,idx=-1,cur=-1; i<(int)player->avformat_context->nb_streams; i++) {
    if (player->avformat_context->streams[i]->codec->codec_type == type) {
        idx = i;
        if (++cur == 0) break; // take the first stream of this type
    }
}
if (idx == -1) return -1;

switch (type) {
case AVMEDIA_TYPE_AUDIO:
    // remember the previous codec context, if any
    lastctxt = NULL;
    if (player->acodec_context) {
        lastctxt = player->acodec_context;
    }

    // get new acodec_context & astream_timebase
    player->acodec_context   = player->avformat_context->streams[idx]->codec;
    player->astream_timebase = player->avformat_context->streams[idx]->time_base;

    // reopen codec
    if (lastctxt) avcodec_close(lastctxt);
    decoder = avcodec_find_decoder(player->acodec_context->codec_id);
    if (decoder && avcodec_open2(player->acodec_context, decoder, NULL) == 0) {
        player->astream_index = idx;
    }
    else {
        av_log(NULL, AV_LOG_WARNING, "failed to find or open decoder for audio !\n");
        player->astream_index = -1;
    }
    break;

case AVMEDIA_TYPE_VIDEO:
    // remember the previous codec context, if any
    lastctxt = NULL;
    if (player->vcodec_context) {
        lastctxt = player->vcodec_context;
    }

    // get new vcodec_context & vstream_timebase
    player->vcodec_context   = player->avformat_context->streams[idx]->codec;
    player->vstream_timebase = player->avformat_context->streams[idx]->time_base;

    // reopen codec
    if (lastctxt) avcodec_close(lastctxt);
    decoder = avcodec_find_decoder(player->vcodec_context->codec_id);
    if (decoder && avcodec_open2(player->vcodec_context, decoder, NULL) == 0) {
        player->vstream_index = idx;
    }
    else {
        av_log(NULL, AV_LOG_WARNING, "failed to find or open decoder for video !\n");
        player->vstream_index = -1;
    }
    break;

case AVMEDIA_TYPE_SUBTITLE:
    return -1; // todo...

default:
    break;
}
// for audio
if (player->astream_index != -1)
{
    arate   = player->acodec_context->sample_rate;
    aformat = player->acodec_context->sample_fmt;
    alayout = player->acodec_context->channel_layout;
    //++ fix audio channel layout issue
    if (alayout == 0) {
        alayout = av_get_default_channel_layout(player->acodec_context->channels);
    }
    //-- fix audio channel layout issue
}

// for video
if (player->vstream_index != -1) {
    vrate = player->avformat_context->streams[player->vstream_index]->r_frame_rate;
    if (vrate.num / vrate.den >= 100) { // clamp abnormal frame rates reported by some streams
        vrate.num = 25;
        vrate.den = 1;
    }
    vformat = player->vcodec_context->pix_fmt;
    width   = player->vcodec_context->width;
    height  = player->vcodec_context->height;
}

First, avformat_open_input opens the stream. To avoid blocking the caller while the stream is being opened, we run this step in a dedicated thread, and to guard against persistent blocking inside ffmpeg we register the interrupt callback shown above, which lets us break out of blocking calls when the stream is closed or whenever else it becomes necessary. avformat_find_stream_info then probes the stream's codec information, from which the audio, video, and subtitle decoders are initialized.
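
For reference, the interrupt callback registered above can be as simple as the following sketch; PLAYER stands for the player context structure used throughout these snippets, and abort_request is an assumed flag that the close path would set before tearing the stream down:

// sketch: interrupt callback hooked into avformat_context->interrupt_callback.
// Returning non-zero makes ffmpeg abort the blocking I/O operation in progress.
static int interrupt_cb(void *opaque)
{
    PLAYER *player = (PLAYER*)opaque;
    return player->abort_request ? 1 : 0; // abort_request: assumed "please stop" flag
}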

(2) Stream-reading module

// inside the main loop of the stream-reading thread
retv = av_read_frame(player->avformat_context, packet);
//++ play completed ++//
if (retv < 0)
{
    if (player->avformat_context->pb && player->avformat_context->pb->error)
    {
        // report that the (live) stream has been interrupted
        player->error_flag = 1;
        // create the disconnect/reconnect error-detection thread here
        // [9/4/2017 swordtwelve]
        break;
    }
    player->player_status |= PS_D_PAUSE;
    pktqueue_write_post_i(player->pktqueue, packet);
    usleep(20*1000);
    continue;
}
//-- play completed --//
player->error_flag = 0; // -1 = initializing, 0 = normal, 1..n = error codes

// audio
if (packet->stream_index == player->astream_index)
{
    pktqueue_write_post_a(player->pktqueue, packet);
}

// video
if (packet->stream_index == player->vstream_index)
{
    pktqueue_write_post_v(player->pktqueue, packet);
}

if ( packet->stream_index != player->astream_index
  && packet->stream_index != player->vstream_index )
{
    av_packet_unref(packet); // free the packet data; the slot goes back to the idle queue
    pktqueue_write_post_i(player->pktqueue, packet);
}
} // end of the read thread's while loop

The stream-reading module is extremely simple: a dedicated thread loops on av_read_frame and pushes every packet it reads into a queue. The producer/consumer handling of the queue follows ffplay's blocking approach; this still leaves room for optimization, and a later version will switch to a lock-free circular queue, as in SkeyePlayer.
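
For reference, here is a minimal sketch of this kind of blocking write-post; the structure and function below are simplified assumptions in the spirit of the pktqueue_* calls above, not the actual SkeyeExPlayer implementation:

#include <windows.h>
#include <libavcodec/avcodec.h>

// sketch: a bounded blocking packet queue. sem_free counts empty slots
// (initialized to the queue size), sem_data counts filled slots (initialized to 0).
typedef struct {
    AVPacket         pkts[256];
    int              head, tail;
    HANDLE           sem_free;
    HANDLE           sem_data;
    CRITICAL_SECTION lock;
} PKTQUEUE;

static void pktqueue_write_post(PKTQUEUE *q, AVPacket *pkt)
{
    WaitForSingleObject(q->sem_free, INFINITE); // block while the queue is full
    EnterCriticalSection(&q->lock);
    q->pkts[q->tail] = *pkt;
    q->tail = (q->tail + 1) % 256;
    LeaveCriticalSection(&q->lock);
    ReleaseSemaphore(q->sem_data, 1, NULL);     // wake the consumer waiting on sem_data
}

The decoder side does the mirror image: wait on sem_data, pop a packet under the lock, then release sem_free.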

(3) Decode module. The decode module is split into audio and video decoders, and the two flows are very similar. Each consists of three steps: a. read encoded audio/video data from the packet queue; b. decode it with avcodec_decode_audio4 (audio) or avcodec_decode_video2 (video); c. render the result. Here we focus on what happens to a video frame after decoding, namely processing of the raw decoded image: once a frame has been decoded, we may need to overlay subtitles, images, or other video onto it. With ffmpeg's powerful image conversion and scaling plus the VFX library, this is easy to implement:

consumed = avcodec_decode_video2(player->vcodec_context, vframe, &gotvideo, packet);
if (consumed < 0) {
av_log(NULL, AV_LOG_WARNING, "an error occurred during decoding video.\n");
break;
}

if (gotvideo)
{
// apply effects to the decoded video frame [9/7/2017 dingshuai]
// 1. overlay an image (logo / watermark)
// 2. overlay subtitles
// 3. draw boxes ...
// post-process the decoded frame (text / image overlay, special effects) [Dingshuai 2017/08/07]
#if 1
WaterMarkInfo g_waterMarkInfo = player->vfxConfigInfo.warkMarkInfo;
if (g_waterMarkInfo.bIsUseWaterMark)
{
if (player->vcodec_context->width != vframe->width ||
player->vcodec_context->height != vframe->height ||
player->vfxConfigInfo.warkMarkInfo.bResetWaterMark )
{
// initialize the watermark overlay; sample config entries:
// ; watermark (station logo) position: 1 = top-left, 2 = top-right, 3 = bottom-left, 4 = bottom-right
// eWaterMarkPos = 3

// ; x coordinate of the watermark origin; keep it >= 0 and <= the video width
// nLeftTopX = 0

// ; y coordinate of the watermark origin; keep it >= 0 and <= the video height
// nLeftTopY = 480

// ; watermark style: 0 - 6
// eWatermarkStyle = 3

// ; path of the watermark image file LOGO.png
// strWMFilePath = .\Res\logo.png
switch (g_waterMarkInfo.eWaterMarkPos)
{
case POS_LEFT_TOP:
g_waterMarkInfo.nLeftTopX = 0;
g_waterMarkInfo.nLeftTopY = 0;
break;
case POS_RIGHT_TOP:
g_waterMarkInfo.nLeftTopX = vframe->width;
g_waterMarkInfo.nLeftTopY = 0;
break;
case POS_LEFT_BOTTOM:
g_waterMarkInfo.nLeftTopX = 0;
g_waterMarkInfo.nLeftTopY = vframe->height;
break;
case POS_RIGHT_BOTTOM:
g_waterMarkInfo.nLeftTopX = vframe->width;
g_waterMarkInfo.nLeftTopY = vframe->height;
break;
}

player->vfxHandle->SetVideoInVideoParam( 101, 0, 0, vframe->width,
vframe->height, 100, 100, 100);

player->vfxHandle->SetLogoImage(g_waterMarkInfo.strWMFilePath, g_waterMarkInfo.nLeftTopX,
g_waterMarkInfo.nLeftTopY, g_waterMarkInfo.bIsUseWaterMark, g_waterMarkInfo.eWatermarkStyle);

player->vfxConfigInfo.warkMarkInfo.bResetWaterMark = FALSE;
}
}


// initialize the subtitle (OSD) info
VideoTittleInfo tittleInfo = player->vfxConfigInfo.tittleInfo;
if(tittleInfo.bResetTittleInfo)
{

// --> 1. create the subtitle overlay object and set the output width/height/format (m_pVideoVfxMakerInfo->nDesWidth, m_pVideoVfxMakerInfo->nDesHeight, m_pVideoVfxMakerInfo->strDesBytesType)
player->vfxHandle->CreateOverlayTitle(vframe->width, vframe->height, ("YUY2"));

// --> 2. set the subtitle text attributes
LOGFONTA inFont;
inFont.lfHeight = tittleInfo.nTittleHeight;
inFont.lfWidth = tittleInfo.nTittleWidth;
inFont.lfEscapement = 0;
inFont.lfOrientation = 0;
inFont.lfWeight = tittleInfo.nFontWeight;//FW_NORMAL;
inFont.lfItalic = 0;
inFont.lfUnderline = 0;
inFont.lfStrikeOut = 0;
inFont.lfCharSet =GB2312_CHARSET;// ANSI_CHARSET;//134
inFont.lfOutPrecision =3;// OUT_DEFAULT_PRECIS;
inFont.lfClipPrecision = 2;//CLIP_DEFAULT_PRECIS;
inFont.lfQuality = 1;//PROOF_QUALITY;
inFont.lfPitchAndFamily = 0;//49;//49

strcpy(inFont.lfFaceName, tittleInfo.strFontType); // font name from config, e.g. "华文新魏", "华文隶书", "隶书"

POINT pointTitle;

if(tittleInfo.nMoveType==0)
{
pointTitle= tittleInfo.ptStartPosition;
if(pointTitle.x<=0) pointTitle.x=1;
if(pointTitle.x>=vframe->width) pointTitle.x=vframe->width/2;
}
else if(tittleInfo.nMoveType==1) // scroll from left to right
{

pointTitle.x = -1;
pointTitle.y = tittleInfo.ptStartPosition.y;
}
else if(tittleInfo.nMoveType==2) // start just past the right edge (right-to-left scroll)
{
pointTitle.x = vframe->width+1;
pointTitle.y = tittleInfo.ptStartPosition.y;
}

player->vfxHandle->SetOverlayTitleInfo(tittleInfo.strTittleContent,
inFont, tittleInfo.nColorR, tittleInfo.nColorG,
tittleInfo.nColorB, pointTitle);

// --> 3. set the subtitle running state
player->vfxHandle->SetOverlayTitleState(tittleInfo.nState);

player->vfxConfigInfo.tittleInfo.bResetTittleInfo = FALSE;
}

if (player->vfxHandle && (g_waterMarkInfo.bIsUseWaterMark || tittleInfo.nState)) // logo watermark + subtitles + ???
{
if (player->vcodec_context->width != vframe->width ||
player->vcodec_context->height != vframe->height )
{
if (pVfxBuffer)
{
free(pVfxBuffer);
pVfxBuffer = NULL;
}
}

int nBufSize = vframe->width*vframe->height << 1;
if (!pVfxBuffer)
{
pVfxBuffer = (BYTE*)malloc(nBufSize); // buffer that receives the converted source frame
memset(pVfxBuffer, 0x00, nBufSize);
}

AVFrame src;
av_image_fill_arrays(src.data, src.linesize, pVfxBuffer, outPixelFormat, vframe->width, vframe->height, 1);
//YUV420 -> YUY2
ConvertColorSpace(&src, outPixelFormat, vframe, inPixelFormat, vframe->width, vframe->height);
// av_image_copy_to_buffer(pVfxBuffer, nBufSize,
// vframe->data, vframe->linesize, AV_PIX_FMT_YUYV422, vframe->width, vframe->height, 1);

// apply the watermark overlay
if(g_waterMarkInfo.bIsUseWaterMark)
    player->vfxHandle->AddWaterMask(pVfxBuffer);
// apply the OSD / subtitle overlay
if(tittleInfo.nState)
    player->vfxHandle->DoOverlayTitle(pVfxBuffer);

//YUY2 -> I420
//ConvertColorSpace(vframe, inPixelFormat, &src, outPixelFormat, vframe->width, vframe->height);
av_image_fill_arrays(vframe->data, vframe->linesize, pVfxBuffer, outPixelFormat, vframe->width, vframe->height, 1);
int nPixelFmt = AV_PIX_FMT_YUYV422;
player_setparam(player, PARAM_RENDER_OUTFORMAT, &nPixelFmt);
}
else
{
int nPixelFmt = AV_PIX_FMT_YUV420P;
player_setparam(player, PARAM_RENDER_OUTFORMAT, &nPixelFmt);
}
#endif

Since video rendering takes a certain amount of time, decoded frames are likewise buffered in a queue, which keeps playback smooth.
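
For completeness, the audio branch of step b follows the same pattern as the video code above; a rough sketch (aframe, gotaudio and the resampling step are assumptions, and output handling is omitted):

// sketch: decode one audio packet, mirroring the video path above
consumed = avcodec_decode_audio4(player->acodec_context, aframe, &gotaudio, packet);
if (consumed < 0) {
    av_log(NULL, AV_LOG_WARNING, "an error occurred during decoding audio.\n");
    break;
}
if (gotaudio) {
    // resample to the output format with swr_convert if needed,
    // then hand the PCM buffer to the waveout renderer
}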

(4) Render module. Rendering is split into audio and video. Audio rendering, i.e. playback, is done with the waveout API (waveOutOpen, waveOutWrite, and friends). Video rendering deserves more attention: in plain terms it is image drawing, and on Windows it can be done with D3D, DDraw, GDI, OpenGL, and other backends; this player implements three of them, D3D, GDI, and OpenGL. To keep rendering smooth, a dedicated render thread performs: a. fetch frames from the decoded-frame queue; b. synchronize audio and video by timestamp; c. draw via D3D / GDI / OpenGL.
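
As an illustration of steps a through c, here is a simplified sketch of how the video render thread can pace a frame against the audio clock before drawing it; the apts field, the vpts argument (both in milliseconds) and render_draw are assumptions standing in for the real SkeyeExPlayer internals:

#include <windows.h>
#include <libavutil/frame.h>

// sketch: pace one decoded frame against the audio clock, then draw it.
static void render_video_frame(PLAYER *player, AVFrame *vframe, int64_t vpts)
{
    int64_t diff = vpts - player->apts;   // player->apts: current audio playback position (assumed field)
    if (diff > 5) {
        Sleep((DWORD)diff);               // video is ahead of audio: wait before presenting
    } else if (diff < -100) {
        return;                           // far behind: drop this frame to catch up
    }
    render_draw(player, vframe);          // dispatch to the D3D / GDI / OpenGL backend (assumed helper)
}

Dropping late frames and sleeping on early ones keeps the video locked to the audio clock, the usual master-clock choice, since audio glitches are far more noticeable than a skipped frame.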

