Commit 8b6394d4 by baiyfcu, committed by GitHub

Merge pull request #19 from xia-chu/master

update
parents 90bbdf95 2ae97a66
-Subproject commit 17e82574991134f798ae32f82d48e2d6c6b97b06
+Subproject commit 8611e88c2eda178973662dcfe180691ff1d8ba35
-Subproject commit 43facc343afc2b5b70bbbc3c177f20dfa936f2bf
+Subproject commit d54285e96260e6d56d68929ec9ace402b83ff6b0
@@ -14,3 +14,7 @@ huohuo <913481084@qq.com>
[γ瑞γミ](https://github.com/JerryLinGd)
[茄子](https://github.com/taotaobujue2008)
[好心情](<409257224@qq.com>)
[Xiaofeng Wang](https://github.com/wasphin)
[doodoocoder](https://github.com/doodoocoder)
[qingci](https://github.com/Colibrow)
Zhou Weimin <zhouweimin@supremind.com>
\ No newline at end of file
@@ -151,15 +151,7 @@ endif()
#Add the rtp library, used to convert RTP to PS/TS
if(ENABLE_RTPPROXY AND ENABLE_HLS)
    message(STATUS "ENABLE_RTPPROXY defined")
-   include_directories(${MediaServer_Root}/librtp/include)
-   aux_source_directory(${MediaServer_Root}/librtp/include src_rtp)
-   aux_source_directory(${MediaServer_Root}/librtp/source src_rtp)
-   aux_source_directory(${MediaServer_Root}/librtp/payload src_rtp)
-   add_library(rtp STATIC ${src_rtp})
    add_definitions(-DENABLE_RTPPROXY)
-   list(APPEND LINK_LIB_LIST rtp)
-   list(APPEND CXX_API_TARGETS rtp)
endif()
#Collect source files
......
@@ -2,12 +2,16 @@
# A lightweight, high-performance and stable stream server and client framework based on C++11.

[![license](http://img.shields.io/badge/license-MIT-green.svg)](https://github.com/xia-chu/ZLMediaKit/blob/master/LICENSE)
[![C++](https://img.shields.io/badge/language-c++-red.svg)](https://en.cppreference.com/)
[![platform](https://img.shields.io/badge/platform-linux%20|%20macos%20|%20windows-blue.svg)](https://github.com/xia-chu/ZLMediaKit)
[![PRs Welcome](https://img.shields.io/badge/PRs-welcome-yellow.svg)](https://github.com/xia-chu/ZLMediaKit/pulls)
[![Build Status](https://travis-ci.org/xia-chu/ZLMediaKit.svg?branch=master)](https://travis-ci.org/xia-chu/ZLMediaKit)

## Why ZLMediaKit?
- Developed on C++11: the code is stable and reliable, avoids raw pointers, ports easily across platforms, and stays clear and concise.
- Supports rich streaming protocols (`RTSP/RTMP/HLS/HTTP-FLV/WebSocket-FLV/HTTP-TS/WebSocket-TS/HTTP-fMP4/WebSocket-fMP4/MP4`) and inter-protocol conversion.
- Multiplexed asynchronous network IO based on epoll and multithreading; extreme performance.
- Well tested for performance and stability; can be used commercially.
- Supports Linux, macOS, iOS, Android and Windows platforms.
@@ -20,15 +24,15 @@
- RTSP[S] player and pusher.
- RTP transport: `rtp over udp`, `rtp over tcp`, `rtp over http`, `rtp udp multicast`.
- Basic/Digest/URL authentication.
- H265/H264/AAC/G711/OPUS codecs.
- Recording as MP4.
- VOD of MP4.
- RTMP[S]
- RTMP[S] server, supports player and pusher.
- RTMP[S] player and pusher.
- Supports HTTP-FLV/WebSocket-FLV server.
- H265/H264/AAC/G711/OPUS codecs.
- Recording as FLV or MP4.
- VOD of MP4.
- Supports [RTMP-H265](https://github.com/ksvc/FFmpeg/wiki)
@@ -37,6 +41,12 @@
- RTSP and RTMP can be converted into HLS; built-in HTTP server.
- Play authentication based on cookies.
- Supports HLS player; supports streaming HLS proxy to RTSP / RTMP / MP4.
- TS
- Supports HTTP-TS/WebSocket-TS server.
- fMP4
- Supports HTTP-fMP4/WebSocket-fMP4 server.
- HTTP[S]
- HTTP server, supports directory menu and RESTful HTTP API.
@@ -56,62 +66,6 @@
- Supports TS / PS streaming push through RTP; it can be converted to RTSP / RTMP / HLS / FLV.
- Supports a real-time online screenshot HTTP API.
- Protocol conversion:
| protocol/codec | H264 | H265 | AAC | other |
| :------------------------------: | :--: | :--: | :--: | :---: |
| RTSP[S] --> RTMP/HTTP[S]-FLV/FLV | Y | Y | Y | N |
| RTMP --> RTSP[S] | Y | Y | Y | N |
| RTSP[S] --> HLS | Y | Y | Y | N |
| RTMP --> HLS | Y | Y | Y | N |
| RTSP[S] --> MP4 | Y | Y | Y | N |
| RTMP --> MP4 | Y | Y | Y | N |
| MP4 --> RTSP[S] | Y | Y | Y | N |
| MP4 --> RTMP | Y | Y | Y | N |
| HLS --> RTSP/RTMP/MP4 | Y | Y | Y | N |
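The conversion matrix above can also be checked programmatically; a hedged sketch (the helper and names are illustrative, not part of ZLMediaKit — every listed path supports H264/H265/AAC and rejects other codecs, mirroring the table):

```python
# Conversion paths listed in the table above; all of them support the same codec set.
SUPPORTED_CODECS = {"H264", "H265", "AAC"}

CONVERSION_PATHS = {
    "RTSP[S] --> RTMP/HTTP[S]-FLV/FLV",
    "RTMP --> RTSP[S]",
    "RTSP[S] --> HLS",
    "RTMP --> HLS",
    "RTSP[S] --> MP4",
    "RTMP --> MP4",
    "MP4 --> RTSP[S]",
    "MP4 --> RTMP",
    "HLS --> RTSP/RTMP/MP4",
}

def can_convert(path: str, codec: str) -> bool:
    """True if the conversion path exists and the codec is supported on it."""
    return path in CONVERSION_PATHS and codec in SUPPORTED_CODECS
```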
- Stream generation:
| feature/codec | H264 | H265 | AAC | other |
| :-----------: | :--: | :--: | :--: | :---: |
| RTSP[S] push | Y | Y | Y | Y |
| RTSP proxy | Y | Y | Y | Y |
| RTMP push | Y | Y | Y | Y |
| RTMP proxy | Y | Y | Y | Y |
- RTP transport:
| feature/transport | tcp | udp | http | udp_multicast |
| :-----------------: | :--: | :--: | :--: | :-----------: |
| RTSP[S] Play Server | Y | Y | Y | Y |
| RTSP[S] Push Server | Y | Y | N | N |
| RTSP Player | Y | Y | N | Y |
| RTSP Pusher | Y | Y | N | N |
- Server supported:
| Server | Y/N |
| :-----------------: | :--: |
| RTSP[S] Play Server | Y |
| RTSP[S] Push Server | Y |
| RTMP | Y |
| HTTP[S]/WebSocket[S] | Y |
- Client supported:
| Client | Y/N |
| :---------: | :--: |
| RTSP Player | Y |
| RTSP Pusher | Y |
| RTMP Player | Y |
| RTMP Pusher | Y |
| HTTP[S] | Y |
| WebSocket[S] | Y |
| HLS player | Y |
## System Requirements
- Compiler with C++11 support: GCC 4.8 / Clang 3.3 / VC2015 or above.
@@ -191,8 +145,6 @@ git submodule update --init
```
### Build on Android
Now you can open the Android Studio project in the `Android` folder; it contains an `aar` library and a demo project.
@@ -298,7 +250,7 @@ git submodule update --init
## Docker Image
You can pull a pre-built Docker image from Docker Hub and run it with:
```bash
docker run -id -p 1935:1935 -p 8080:80 -p 8554:554 -p 10000:10000 -p 10000:10000/udp panjjo/zlmediakit
```
A Dockerfile is also supplied to build images on Ubuntu 16.04:
@@ -307,44 +259,6 @@ cd docker
docker build -t zlmediakit .
```
## Mirrors
[ZLToolKit](http://git.oschina.net/xiahcu/ZLToolKit)
[ZLMediaKit](http://git.oschina.net/xiahcu/ZLMediaKit)
## Licence
```
MIT License
Copyright (c) 2016-2019 xiongziliang <771730766@qq.com>
Copyright (c) 2019 Gemfield <gemfield@civilnet.cn>
Copyright (c) 2018 huohuo <913481084@qq.com>
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
```
## Contact
- Email:<1213642868@qq.com>
- QQ chat group:542509000
......
@@ -19,25 +19,25 @@ extern "C" {
///////////////////////////////////////////MP4Info/////////////////////////////////////////////
//C mapping of the MP4Info object
typedef void* mk_mp4_info;
// Start time, GMT, in seconds
API_EXPORT uint64_t API_CALL mk_mp4_info_get_start_time(const mk_mp4_info ctx);
// Recording length, in seconds
API_EXPORT float API_CALL mk_mp4_info_get_time_len(const mk_mp4_info ctx);
// File size, in bytes
API_EXPORT uint64_t API_CALL mk_mp4_info_get_file_size(const mk_mp4_info ctx);
// File path
API_EXPORT const char* API_CALL mk_mp4_info_get_file_path(const mk_mp4_info ctx);
// File name
API_EXPORT const char* API_CALL mk_mp4_info_get_file_name(const mk_mp4_info ctx);
// Folder path
API_EXPORT const char* API_CALL mk_mp4_info_get_folder(const mk_mp4_info ctx);
// Playback URL
API_EXPORT const char* API_CALL mk_mp4_info_get_url(const mk_mp4_info ctx);
// Virtual host
API_EXPORT const char* API_CALL mk_mp4_info_get_vhost(const mk_mp4_info ctx);
// Application name
API_EXPORT const char* API_CALL mk_mp4_info_get_app(const mk_mp4_info ctx);
// Stream ID
API_EXPORT const char* API_CALL mk_mp4_info_get_stream(const mk_mp4_info ctx);

///////////////////////////////////////////Parser/////////////////////////////////////////////
@@ -276,13 +276,11 @@ typedef void* mk_publish_auth_invoker;
/**
 * Execute Broadcast::PublishAuthInvoker
 * @param err_msg empty or null means authentication succeeded
- * @param enable_rtxp when pushing RTMP, whether to convert to RTSP; when pushing RTSP, whether to allow conversion to RTMP
 * @param enable_hls whether to allow conversion to HLS
 * @param enable_mp4 whether to allow MP4 recording
 */
API_EXPORT void API_CALL mk_publish_auth_invoker_do(const mk_publish_auth_invoker ctx,
                                                    const char *err_msg,
-                                                   int enable_rtxp,
                                                    int enable_hls,
                                                    int enable_mp4);
......
@@ -26,14 +26,12 @@ typedef void *mk_media;
 * @param app application name, recommended: live
 * @param stream stream id, e.g. camera
 * @param duration duration in seconds, 0 for live streams
- * @param rtsp_enabled whether to enable the rtsp protocol
- * @param rtmp_enabled whether to enable the rtmp protocol
 * @param hls_enabled whether to generate HLS
 * @param mp4_enabled whether to generate MP4
 * @return object pointer
 */
API_EXPORT mk_media API_CALL mk_media_create(const char *vhost, const char *app, const char *stream,
                                             float duration, int hls_enabled, int mp4_enabled);
/**
 * Destroy the media source
@@ -54,7 +52,7 @@ API_EXPORT void API_CALL mk_media_init_video(mk_media ctx, int track_id, int wid
/**
 * Add an audio track
 * @param ctx object pointer
 * @param track_id 2:CodecAAC/3:CodecG711A/4:CodecG711U/5:OPUS
 * @param channel number of channels
 * @param sample_bit sample bits, only 16 is supported
 * @param sample_rate sample rate
@@ -95,7 +93,7 @@ API_EXPORT void API_CALL mk_media_input_h265(mk_media ctx, void *data, int len,
 * @param data a single AAC frame without the ADTS header
 * @param len byte size of the AAC frame
 * @param dts timestamp in milliseconds
 * @param adts ADTS header, may be null
 */
API_EXPORT void API_CALL mk_media_input_aac(mk_media ctx, void *data, int len, uint32_t dts, void *adts);
@@ -109,13 +107,13 @@ API_EXPORT void API_CALL mk_media_input_aac(mk_media ctx, void *data, int len, u
API_EXPORT void API_CALL mk_media_input_pcm(mk_media ctx, void *data, int len, uint32_t pts);

/**
 * Input a single OPUS/G711 audio frame
 * @param ctx object pointer
 * @param data a single audio frame
 * @param len byte size of the audio frame
 * @param dts timestamp in milliseconds
 */
API_EXPORT void API_CALL mk_media_input_audio(mk_media ctx, void* data, int len, uint32_t dts);

/**
 * MediaSource.close() callback event
......
@@ -22,5 +22,6 @@
#include "mk_tcp.h"
#include "mk_util.h"
#include "mk_thread.h"
#include "mk_rtp_server.h"

#endif /* MK_API_H_ */
@@ -31,7 +31,7 @@ typedef void(API_CALL *on_mk_play_event)(void *user_data,int err_code,const char
 * Callback for received audio/video data
 * @param user_data user data pointer
 * @param track_type 0: video, 1: audio
 * @param codec_id 0:H264, 1:H265, 2:AAC, 3:G711A, 4:G711U, 5:OPUS
 * @param data data pointer
 * @param len data length
 * @param dts decoding timestamp in milliseconds
......
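The numeric track and codec ids documented for these callbacks can be mapped to names on the consumer side. A minimal sketch; the tables mirror the header comments above, and the helper name is illustrative:

```python
# Codec ids as documented for the on_mk_play_event-style data callback.
CODEC_NAMES = {0: "H264", 1: "H265", 2: "AAC", 3: "G711A", 4: "G711U", 5: "OPUS"}
TRACK_TYPES = {0: "video", 1: "audio"}

def describe_frame(track_type: int, codec_id: int) -> str:
    """Human-readable label for a received frame, e.g. 'audio/OPUS'."""
    return f"{TRACK_TYPES.get(track_type, '?')}/{CODEC_NAMES.get(codec_id, '?')}"
```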
/*
* Copyright (c) 2016 The ZLMediaKit project authors. All Rights Reserved.
*
* This file is part of ZLMediaKit(https://github.com/xiongziliang/ZLMediaKit).
*
* Use of this source code is governed by MIT license that can be found in the
* LICENSE file in the root of the source tree. All contributing project authors
* may be found in the AUTHORS file in the root of the source tree.
*/
#include "mk_common.h"
#ifdef __cplusplus
extern "C" {
#endif
typedef void* mk_rtp_server;
/**
 * Create a GB28181 RTP server
 * @param port listening port, 0 for a random port
 * @param enable_tcp whether to also listen on a TCP port when creating the UDP port
 * @param stream_id the stream id bound to this port
 * @return server object
 */
API_EXPORT mk_rtp_server API_CALL mk_rtp_server_create(uint16_t port, int enable_tcp, const char *stream_id);
/**
 * Destroy the GB28181 RTP server
 * @param ctx server object
 */
API_EXPORT void API_CALL mk_rtp_server_release(mk_rtp_server ctx);
/**
 * Get the locally listening port number
 * @param ctx server object
 * @return port number
 */
API_EXPORT uint16_t API_CALL mk_rtp_server_port(mk_rtp_server ctx);
/**
 * Triggered when the GB28181 RTP server times out receiving a stream
 * @param user_data user data pointer
 */
typedef void(API_CALL *on_mk_rtp_server_detach)(void *user_data);
/**
 * Listen for the GB28181 RTP server stream receive timeout event
 * @param ctx server object
 * @param cb callback function
 * @param user_data user data pointer for the callback
 */
API_EXPORT void API_CALL mk_rtp_server_set_on_detach(mk_rtp_server ctx, on_mk_rtp_server_detach cb, void *user_data);
#ifdef __cplusplus
}
#endif
\ No newline at end of file
@@ -101,11 +101,10 @@ API_EXPORT void API_CALL mk_events_listen(const mk_events *events){
        s_events.on_mk_media_publish((mk_media_info) &args,
                                     (mk_publish_auth_invoker) &invoker,
                                     (mk_sock_info) &sender);
    } else {
        GET_CONFIG(bool, toHls, General::kPublishToHls);
        GET_CONFIG(bool, toMP4, General::kPublishToMP4);
        invoker("", toHls, toMP4);
    }
});
......
@@ -18,65 +18,65 @@
#include "Rtsp/RtspSession.h"
using namespace mediakit;

///////////////////////////////////////////RecordInfo/////////////////////////////////////////////
API_EXPORT uint64_t API_CALL mk_mp4_info_get_start_time(const mk_mp4_info ctx){
    assert(ctx);
    RecordInfo *info = (RecordInfo *)ctx;
    return info->start_time;
}

API_EXPORT float API_CALL mk_mp4_info_get_time_len(const mk_mp4_info ctx){
    assert(ctx);
    RecordInfo *info = (RecordInfo *)ctx;
    return info->time_len;
}

API_EXPORT uint64_t API_CALL mk_mp4_info_get_file_size(const mk_mp4_info ctx){
    assert(ctx);
    RecordInfo *info = (RecordInfo *)ctx;
    return info->file_size;
}

API_EXPORT const char* API_CALL mk_mp4_info_get_file_path(const mk_mp4_info ctx){
    assert(ctx);
    RecordInfo *info = (RecordInfo *)ctx;
    return info->file_path.c_str();
}

API_EXPORT const char* API_CALL mk_mp4_info_get_file_name(const mk_mp4_info ctx){
    assert(ctx);
    RecordInfo *info = (RecordInfo *)ctx;
    return info->file_name.c_str();
}

API_EXPORT const char* API_CALL mk_mp4_info_get_folder(const mk_mp4_info ctx){
    assert(ctx);
    RecordInfo *info = (RecordInfo *)ctx;
    return info->folder.c_str();
}

API_EXPORT const char* API_CALL mk_mp4_info_get_url(const mk_mp4_info ctx){
    assert(ctx);
    RecordInfo *info = (RecordInfo *)ctx;
    return info->url.c_str();
}

API_EXPORT const char* API_CALL mk_mp4_info_get_vhost(const mk_mp4_info ctx){
    assert(ctx);
    RecordInfo *info = (RecordInfo *)ctx;
    return info->vhost.c_str();
}

API_EXPORT const char* API_CALL mk_mp4_info_get_app(const mk_mp4_info ctx){
    assert(ctx);
    RecordInfo *info = (RecordInfo *)ctx;
    return info->app.c_str();
}

API_EXPORT const char* API_CALL mk_mp4_info_get_stream(const mk_mp4_info ctx){
    assert(ctx);
    RecordInfo *info = (RecordInfo *)ctx;
    return info->stream.c_str();
}

///////////////////////////////////////////Parser/////////////////////////////////////////////
@@ -256,7 +256,7 @@ static C get_http_header( const char *response_header[]){
        }
        break;
    }
    return header;
}

API_EXPORT mk_http_body API_CALL mk_http_body_from_multi_form(const char *key_val[],const char *file_path){
@@ -382,12 +382,11 @@ API_EXPORT void API_CALL mk_rtsp_auth_invoker_clone_release(const mk_rtsp_auth_i
///////////////////////////////////////////Broadcast::PublishAuthInvoker/////////////////////////////////////////////
API_EXPORT void API_CALL mk_publish_auth_invoker_do(const mk_publish_auth_invoker ctx,
                                                    const char *err_msg,
-                                                   int enable_rtxp,
                                                    int enable_hls,
                                                    int enable_mp4){
    assert(ctx);
    Broadcast::PublishAuthInvoker *invoker = (Broadcast::PublishAuthInvoker *)ctx;
    (*invoker)(err_msg ? err_msg : "", enable_hls, enable_mp4);
}

API_EXPORT mk_publish_auth_invoker API_CALL mk_publish_auth_invoker_clone(const mk_publish_auth_invoker ctx){
......
@@ -78,7 +78,7 @@ static C get_http_header( const char *response_header[]){
        }
        break;
    }
    return header;
}

API_EXPORT void API_CALL mk_http_requester_set_body(mk_http_requester ctx, mk_http_body body){
......
@@ -117,11 +117,10 @@ API_EXPORT int API_CALL mk_media_total_reader_count(mk_media ctx){
    return (*obj)->getChannel()->totalReaderCount();
}

API_EXPORT mk_media API_CALL mk_media_create(const char *vhost, const char *app, const char *stream,
                                             float duration, int hls_enabled, int mp4_enabled) {
    assert(vhost && app && stream);
    MediaHelper::Ptr *obj(new MediaHelper::Ptr(new MediaHelper(vhost, app, stream, duration, hls_enabled, mp4_enabled)));
    (*obj)->attachEvent();
    return (mk_media) obj;
}
@@ -188,8 +187,8 @@ API_EXPORT void API_CALL mk_media_input_pcm(mk_media ctx, void *data , int len,
#endif //ENABLE_FAAC
}

API_EXPORT void API_CALL mk_media_input_audio(mk_media ctx, void* data, int len, uint32_t dts){
    assert(ctx && data && len > 0);
    MediaHelper::Ptr* obj = (MediaHelper::Ptr*) ctx;
    (*obj)->getChannel()->inputAudio((char*)data, len, dts);
}
@@ -80,7 +80,7 @@ public:
            strong_self->onData(frame);
        }
    });
    for (auto &track : _player->getTracks(false)) {
        track->addDelegate(delegate);
    }
}
......
@@ -16,7 +16,7 @@ using namespace mediakit;
API_EXPORT mk_proxy_player API_CALL mk_proxy_player_create(const char *vhost, const char *app, const char *stream, int hls_enabled, int mp4_enabled) {
    assert(vhost && app && stream);
    PlayerProxy::Ptr *obj(new PlayerProxy::Ptr(new PlayerProxy(vhost, app, stream, hls_enabled, mp4_enabled)));
    return (mk_proxy_player) obj;
}
......
/*
* Copyright (c) 2016 The ZLMediaKit project authors. All Rights Reserved.
*
* This file is part of ZLMediaKit(https://github.com/xiongziliang/ZLMediaKit).
*
* Use of this source code is governed by MIT license that can be found in the
* LICENSE file in the root of the source tree. All contributing project authors
* may be found in the AUTHORS file in the root of the source tree.
*/
#include "mk_rtp_server.h"
#include "Rtp/RtpServer.h"
using namespace mediakit;
API_EXPORT mk_rtp_server API_CALL mk_rtp_server_create(uint16_t port, int enable_tcp, const char *stream_id){
    RtpServer::Ptr *server = new RtpServer::Ptr(new RtpServer);
    (*server)->start(port, stream_id, enable_tcp);
    return server;
}

API_EXPORT void API_CALL mk_rtp_server_release(mk_rtp_server ctx){
    RtpServer::Ptr *server = (RtpServer::Ptr *)ctx;
    delete server;
}

API_EXPORT uint16_t API_CALL mk_rtp_server_port(mk_rtp_server ctx){
    RtpServer::Ptr *server = (RtpServer::Ptr *)ctx;
    return (*server)->getPort();
}

API_EXPORT void API_CALL mk_rtp_server_set_on_detach(mk_rtp_server ctx, on_mk_rtp_server_detach cb, void *user_data){
    RtpServer::Ptr *server = (RtpServer::Ptr *) ctx;
    if (cb) {
        (*server)->setOnDetach([cb, user_data]() {
            cb(user_data);
        });
    } else {
        (*server)->setOnDetach(nullptr);
    }
}
@@ -61,8 +61,8 @@ void API_CALL on_mk_media_publish(const mk_media_info url_info,
                                  mk_media_info_get_stream(url_info),
                                  mk_media_info_get_params(url_info));

    //Allow publishing, and allow conversion to HLS/MP4
    mk_publish_auth_invoker_do(invoker, NULL, 1, 1);
}

/**
......
@@ -40,8 +40,6 @@ addMuteAudio=1
#When a pull proxy reconnects after a disconnection, whether to drop the previous media data; if dropped, recording starts over,
#otherwise new data is appended after the previous data (HLS/MP4 recording continues writing to the previous file)
resetWhenRePlay=1
-#Whether publishing converts to rtsp or rtmp by default; the hook interface (on_publish) can override this setting
-publishToRtxp=1
#Whether publishing converts to HLS by default; the hook interface (on_publish) can override this setting
publishToHls=1
#Whether publishing records MP4 by default; the hook interface (on_publish) can override this setting
@@ -68,6 +66,8 @@ segDur=2
segNum=3
#Number of HLS segments kept on disk after they are removed from the m3u8 file
segRetain=5
#Whether to broadcast TS segment completion notifications
broadcastRecordTs=0

[hook]
#When publishing, if the url parameters match admin_params, publishing succeeds without hook authentication; the same applies to playback
@@ -85,6 +85,8 @@ on_play=https://127.0.0.1/index/hook/on_play
on_publish=https://127.0.0.1/index/hook/on_publish
#MP4 recording segment completion event
on_record_mp4=https://127.0.0.1/index/hook/on_record_mp4
#HLS TS segment completion event
on_record_ts=https://127.0.0.1/index/hook/on_record_ts
#rtsp playback authentication event; compares the rtsp username and password
on_rtsp_auth=https://127.0.0.1/index/hook/on_rtsp_auth
#Whether rtsp playback uses the dedicated authentication event; leave empty to disable rtsp authentication. rtsp playback also supports url-based authentication
......
@@ -1048,6 +1048,108 @@
}
},
"response": []
},
{
"name": "Start sending RTP (startSendRtp)",
"request": {
"method": "GET",
"header": [],
"url": {
"raw": "{{ZLMediaKit_URL}}/index/api/startSendRtp?secret={{ZLMediaKit_secret}}&vhost={{defaultVhost}}&app=live&stream=obs&ssrc=1&dst_url=127.0.0.1&dst_port=10000&is_udp=0",
"host": [
"{{ZLMediaKit_URL}}"
],
"path": [
"index",
"api",
"startSendRtp"
],
"query": [
{
"key": "secret",
"value": "{{ZLMediaKit_secret}}",
"description": "API secret (set in the config file); not required when the request comes from 127.0.0.1"
},
{
"key": "vhost",
"value": "{{defaultVhost}}",
"description": "virtual host, e.g. __defaultVhost__"
},
{
"key": "app",
"value": "live",
"description": "application name, e.g. live"
},
{
"key": "stream",
"value": "obs",
"description": "stream id, e.g. obs"
},
{
"key": "ssrc",
"value": "1",
"description": "RTP SSRC"
},
{
"key": "dst_url",
"value": "127.0.0.1",
"description": "destination IP or domain name"
},
{
"key": "dst_port",
"value": "10000",
"description": "destination port"
},
{
"key": "is_udp",
"value": "0",
"description": "whether to use UDP mode; otherwise TCP mode"
}
]
}
},
"response": []
},
{
"name": "Stop sending RTP (stopSendRtp)",
"request": {
"method": "GET",
"header": [],
"url": {
"raw": "{{ZLMediaKit_URL}}/index/api/stopSendRtp?secret={{ZLMediaKit_secret}}&vhost={{defaultVhost}}&app=live&stream=obs",
"host": [
"{{ZLMediaKit_URL}}"
],
"path": [
"index",
"api",
"stopSendRtp"
],
"query": [
{
"key": "secret",
"value": "{{ZLMediaKit_secret}}",
"description": "API secret (set in the config file); not required when the client ip is 127.0.0.1"
},
{
"key": "vhost",
"value": "{{defaultVhost}}",
"description": "virtual host, e.g. __defaultVhost__"
},
{
"key": "app",
"value": "live",
"description": "application name, e.g. live"
},
{
"key": "stream",
"value": "obs",
"description": "stream id, e.g. obs"
}
]
}
},
"response": []
}
],
"event": [
...@@ -1074,17 +1176,17 @@
],
"variable": [
{
"id": "ce426571-eb1e-4067-8901-01978c982fed",
"key": "ZLMediaKit_URL",
"value": "zlmediakit.com:8880"
},
{
"id": "2d3dfd4a-a39c-47d8-a3e9-37d80352ea5f",
"key": "ZLMediaKit_secret",
"value": "035c73f7-bb6b-4889-a715-d9eb2d1925cc"
},
{
"id": "0aacc473-3a2e-4ef9-b415-e86ce71e0c42",
"key": "defaultVhost",
"value": "__defaultVhost__"
}
...
...@@ -14,6 +14,7 @@
#include "Util/File.h"
#include "System.h"
#include "Thread/WorkThreadPool.h"
#include "Network/sockutil.h"
namespace FFmpeg {
#define FFmpeg_FIELD "ffmpeg."
...@@ -45,6 +46,18 @@ FFmpegSource::~FFmpegSource() {
DebugL;
}
static bool is_local_ip(const string &ip){
if (ip == "127.0.0.1" || ip == "localhost") {
return true;
}
auto ips = SockUtil::getInterfaceList();
for (auto &obj : ips) {
if (ip == obj["ip"]) {
return true;
}
}
return false;
}
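The new `is_local_ip()` helper above short-circuits on loopback names and otherwise scans the machine's interface list. A dependency-free variant of the same logic can be sketched as follows, with the interface IPs passed in instead of queried via `SockUtil::getInterfaceList()`:

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Hypothetical, dependency-free variant of is_local_ip(): same decision
// logic, but the interface address list is supplied by the caller.
static bool is_local_ip_sketch(const std::string &ip,
                               const std::vector<std::string> &interface_ips) {
    if (ip == "127.0.0.1" || ip == "localhost") {
        return true; // loopback is always local
    }
    // any address bound to a local interface also counts as "ourselves"
    return std::find(interface_ips.begin(), interface_ips.end(), ip)
           != interface_ips.end();
}
```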
void FFmpegSource::play(const string &src_url,const string &dst_url,int timeout_ms,const onPlay &cb) {
GET_CONFIG(string,ffmpeg_bin,FFmpeg::kBin);
...@@ -60,7 +73,7 @@ void FFmpegSource::play(const string &src_url,const string &dst_url,int timeout_
_process.run(cmd,ffmpeg_log.empty() ? "" : File::absolutePath("",ffmpeg_log));
InfoL << cmd;
if (is_local_ip(_media_info._host)) {
//pushing to ourselves: judge whether it works by checking whether the stream gets registered
if(_media_info._schema != RTSP_SCHEMA && _media_info._schema != RTMP_SCHEMA){
cb(SockException(Err_other,"本服务只支持rtmp/rtsp推流"));
...@@ -179,7 +192,7 @@ void FFmpegSource::startTimer(int timeout_ms) {
//self has already been destroyed
return false;
}
if (is_local_ip(strongSelf->_media_info._host)) {
//pushing to ourselves: we check whether the stream is registered to judge whether FFmpeg is working properly
strongSelf->findAsync(0, [&](const MediaSource::Ptr &src) {
//synchronously find the stream
...@@ -232,33 +245,19 @@ bool FFmpegSource::close(MediaSource &sender, bool force) {
return true;
}
int FFmpegSource::totalReaderCount(MediaSource &sender) {
auto listener = _listener.lock();
if(listener){
return listener->totalReaderCount(sender);
}
return sender.readerCount();
}
void FFmpegSource::onNoneReader(MediaSource &sender){
auto listener = _listener.lock();
if(listener){
listener->onNoneReader(sender);
return;
}
MediaSourceEvent::onNoneReader(sender);
}
void FFmpegSource::onRegist(MediaSource &sender, bool regist){
auto listener = _listener.lock();
if(listener){
listener->onRegist(sender, regist);
}
}
void FFmpegSource::onGetMediaSource(const MediaSource::Ptr &src) {
auto listener = src->getListener();
if (listener.lock().get() != this) {
//guard against repeatedly entering onGetMediaSource, which caused buggy recursive calls
_listener = listener;
src->setListener(shared_from_this());
} else {
WarnL << "多次触发onGetMediaSource事件:"
<< src->getSchema() << "/"
<< src->getVhost() << "/"
<< src->getApp() << "/"
<< src->getId();
}
}
void FFmpegSnap::makeSnap(const string &play_url, const string &save_path, float timeout_sec, const function<void(bool)> &cb) {
...
...@@ -40,7 +40,7 @@ private:
~FFmpegSnap() = delete;
};
class FFmpegSource : public std::enable_shared_from_this<FFmpegSource> , public MediaSourceEventInterceptor{
public:
typedef shared_ptr<FFmpegSource> Ptr;
typedef function<void(const SockException &ex)> onPlay;
...@@ -60,9 +60,6 @@ private:
//MediaSourceEvent override
bool close(MediaSource &sender,bool force) override;
int totalReaderCount(MediaSource &sender) override;
void onNoneReader(MediaSource &sender) override;
void onRegist(MediaSource &sender, bool regist) override;
private:
Process _process;
...@@ -72,7 +69,6 @@ private:
string _src_url;
string _dst_url;
function<void()> _onClose;
std::weak_ptr<MediaSourceEvent> _listener;
Ticker _replay_ticker;
};
...
...@@ -144,7 +144,7 @@ static ApiArgsType getAllArgs(const Parser &parser) {
for (auto &pr : parser.getUrlArgs()) {
allArgs[pr.first] = pr.second;
}
return allArgs;
}
static inline void addHttpListener(){
...@@ -596,8 +596,6 @@ void installWebApi() {
const string &app,
const string &stream,
const string &url,
bool enable_rtsp,
bool enable_rtmp,
bool enable_hls,
bool enable_mp4,
int rtp_type,
...@@ -610,7 +608,7 @@ void installWebApi() {
return;
}
//add the stream pull proxy
PlayerProxy::Ptr player(new PlayerProxy(vhost, app, stream, enable_hls, enable_mp4));
s_proxyMap[key] = player;
//specify RTP over TCP (effective when playing rtsp)
...@@ -636,13 +634,11 @@ void installWebApi() {
//test url http://127.0.0.1/index/api/addStreamProxy?vhost=__defaultVhost__&app=proxy&enable_rtsp=1&enable_rtmp=1&stream=0&url=rtmp://127.0.0.1/live/obs
api_regist2("/index/api/addStreamProxy",[](API_ARGS2){
CHECK_SECRET();
CHECK_ARGS("vhost","app","stream","url");
addStreamProxy(allArgs["vhost"],
allArgs["app"],
allArgs["stream"],
allArgs["url"],
allArgs["enable_rtsp"],/* whether to forward rtsp */
allArgs["enable_rtmp"],/* whether to forward rtmp */
allArgs["enable_hls"],/* whether to forward hls */
allArgs["enable_mp4"],/* whether to record MP4 */
allArgs["rtp_type"],
...@@ -788,7 +784,14 @@ void installWebApi() {
CHECK_ARGS("stream_id");
lock_guard<recursive_mutex> lck(s_rtpServerMapMtx);
auto it = s_rtpServerMap.find(allArgs["stream_id"]);
if(it == s_rtpServerMap.end()){
val["hit"] = 0;
return;
}
auto server = it->second;
s_rtpServerMap.erase(it);
val["hit"] = 1;
});
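The reworked deleteRtpServer handler looks the entry up before erasing it, so a shared_ptr to the server is still held across the erase and the hit count is explicit. A minimal sketch of that pattern, with the map value type reduced to a placeholder:

```cpp
#include <map>
#include <memory>
#include <string>

// Sketch of the revised deleteRtpServer logic: find first, keep the
// shared_ptr alive while the map entry is erased, then report the hit.
// The value type (shared_ptr<int>) stands in for the real RtpServer::Ptr.
static int erase_rtp_server(std::map<std::string, std::shared_ptr<int>> &servers,
                            const std::string &stream_id) {
    auto it = servers.find(stream_id);
    if (it == servers.end()) {
        return 0; // no such server, nothing hit
    }
    auto server = it->second; // keep the object alive until after erase
    servers.erase(it);
    (void) server;            // it is destroyed here, after the map update
    return 1;
}
```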
api_regist1("/index/api/listRtpServer",[](API_ARGS1){
...@@ -803,6 +806,39 @@
}
});
api_regist2("/index/api/startSendRtp",[](API_ARGS2){
CHECK_SECRET();
CHECK_ARGS("vhost", "app", "stream", "ssrc", "dst_url", "dst_port", "is_udp");
auto src = MediaSource::find(allArgs["vhost"], allArgs["app"], allArgs["stream"]);
if (!src) {
throw ApiRetException("该媒体流不存在", API::OtherFailed);
}
src->startSendRtp(allArgs["dst_url"], allArgs["dst_port"], allArgs["ssrc"], allArgs["is_udp"], [val, headerOut, invoker](const SockException &ex){
if (ex) {
const_cast<Value &>(val)["code"] = API::OtherFailed;
const_cast<Value &>(val)["msg"] = ex.what();
}
invoker("200 OK", headerOut, val.toStyledString());
});
});
api_regist1("/index/api/stopSendRtp",[](API_ARGS1){
CHECK_SECRET();
CHECK_ARGS("vhost", "app", "stream");
auto src = MediaSource::find(allArgs["vhost"], allArgs["app"], allArgs["stream"]);
if (!src) {
throw ApiRetException("该媒体流不存在", API::OtherFailed);
}
if (!src->stopSendRtp()) {
throw ApiRetException("尚未开始推流,停止失败", API::OtherFailed);
}
});
#endif//ENABLE_RTPPROXY
// start recording hls or MP4
...@@ -1031,8 +1067,6 @@ void installWebApi() {
allArgs["stream"],
/** supports pulling the stream via rtsp and rtmp; rtsp supports h265/h264/aac, rtmp only h264/aac **/
"rtsp://184.72.239.149/vod/mp4:BigBuckBunny_115k.mov",
true,/* enable rtsp forwarding */
true,/* enable rtmp forwarding */
true,/* enable hls forwarding */
false,/* disable MP4 recording */
0,//pull the stream via rtp over tcp
...
...@@ -53,6 +53,7 @@ const string kOnRtspAuth = HOOK_FIELD"on_rtsp_auth";
const string kOnStreamChanged = HOOK_FIELD"on_stream_changed";
const string kOnStreamNotFound = HOOK_FIELD"on_stream_not_found";
const string kOnRecordMp4 = HOOK_FIELD"on_record_mp4";
const string kOnRecordTs = HOOK_FIELD"on_record_ts";
const string kOnShellLogin = HOOK_FIELD"on_shell_login";
const string kOnStreamNoneReader = HOOK_FIELD"on_stream_none_reader";
const string kOnHttpAccess = HOOK_FIELD"on_http_access";
...@@ -70,6 +71,7 @@ onceToken token([](){
mINI::Instance()[kOnStreamChanged] = "https://127.0.0.1/index/hook/on_stream_changed";
mINI::Instance()[kOnStreamNotFound] = "https://127.0.0.1/index/hook/on_stream_not_found";
mINI::Instance()[kOnRecordMp4] = "https://127.0.0.1/index/hook/on_record_mp4";
mINI::Instance()[kOnRecordTs] = "https://127.0.0.1/index/hook/on_record_ts";
mINI::Instance()[kOnShellLogin] = "https://127.0.0.1/index/hook/on_shell_login";
mINI::Instance()[kOnStreamNoneReader] = "https://127.0.0.1/index/hook/on_stream_none_reader";
mINI::Instance()[kOnHttpAccess] = "https://127.0.0.1/index/hook/on_http_access";
...@@ -161,7 +163,7 @@ static ArgsType make_json(const MediaInfo &args){
body["app"] = args._app;
body["stream"] = args._streamid;
body["params"] = args._param_strs;
return body;
}
static void reportServerStarted(){
...@@ -190,16 +192,16 @@ void installWebHook(){
GET_CONFIG(string,hook_stream_chaned,Hook::kOnStreamChanged);
GET_CONFIG(string,hook_stream_not_found,Hook::kOnStreamNotFound);
GET_CONFIG(string,hook_record_mp4,Hook::kOnRecordMp4);
GET_CONFIG(string,hook_record_ts,Hook::kOnRecordTs);
GET_CONFIG(string,hook_shell_login,Hook::kOnShellLogin);
GET_CONFIG(string,hook_stream_none_reader,Hook::kOnStreamNoneReader);
GET_CONFIG(string,hook_http_access,Hook::kOnHttpAccess);
NoticeCenter::Instance().addListener(nullptr,Broadcast::kBroadcastMediaPublish,[](BroadcastMediaPublishArgs){
GET_CONFIG(bool,toRtxp,General::kPublishToRtxp);
GET_CONFIG(bool,toHls,General::kPublishToHls);
GET_CONFIG(bool,toMP4,General::kPublishToMP4);
if(!hook_enable || args._param_strs == hook_adminparams || hook_publish.empty() || sender.get_peer_ip() == "127.0.0.1"){
invoker("", toHls, toMP4);
return;
}
//run this hook api asynchronously to avoid blocking NoticeCenter
...@@ -211,27 +213,20 @@ void installWebHook(){
do_http_hook(hook_publish,body,[invoker](const Value &obj,const string &err){
if(err.empty()){
//publish authentication succeeded
bool enableRtxp = toRtxp;
bool enableHls = toHls;
bool enableMP4 = toMP4;
//compatibility for users that do not pass the enableHls/enableMP4 parameters
if (obj.isMember("enableHls")) {
enableHls = obj["enableHls"].asBool();
}
if (obj.isMember("enableMP4")) {
enableMP4 = obj["enableMP4"].asBool();
}
invoker(err, enableHls, enableMP4);
} else {
//publish authentication failed
invoker(err, false, false);
}
});
...@@ -336,7 +331,7 @@ void installWebHook(){
//listen for playback failure (stream-not-found) events
NoticeCenter::Instance().addListener(nullptr,Broadcast::kBroadcastNotFoundStream,[](BroadcastNotFoundStreamArgs){
if(!hook_enable || hook_stream_not_found.empty()){
// closePlayer();
return;
}
auto body = make_json(args);
...@@ -347,28 +342,40 @@ void installWebHook(){
do_http_hook(hook_stream_not_found,body, nullptr);
});
static auto getRecordInfo = [](const RecordInfo &info) {
ArgsType body;
body["start_time"] = (Json::UInt64) info.start_time;
body["file_size"] = (Json::UInt64) info.file_size;
body["time_len"] = info.time_len;
body["file_path"] = info.file_path;
body["file_name"] = info.file_name;
body["folder"] = info.folder;
body["url"] = info.url;
body["app"] = info.app;
body["stream"] = info.stream;
body["vhost"] = info.vhost;
return body;
};
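`getRecordInfo()` above flattens a `RecordInfo` into the hook body. A self-contained sketch of that mapping follows; the struct and its field types are assumptions, and jsoncpp is replaced by a plain string map so the sketch stays dependency-free:

```cpp
#include <cstdint>
#include <map>
#include <string>

// Minimal stand-in for the RecordInfo fields serialized by getRecordInfo().
// Field names mirror the hook body keys above; numeric values are rendered
// as strings here only because no JSON library is used in this sketch.
struct RecordInfoSketch {
    uint64_t start_time;
    uint64_t file_size;
    double time_len;
    std::string file_path, file_name, folder, url, app, stream, vhost;
};

static std::map<std::string, std::string> to_hook_body(const RecordInfoSketch &info) {
    return {
        {"start_time", std::to_string(info.start_time)},
        {"file_size",  std::to_string(info.file_size)},
        {"time_len",   std::to_string(info.time_len)},
        {"file_path",  info.file_path},
        {"file_name",  info.file_name},
        {"folder",     info.folder},
        {"url",        info.url},
        {"app",        info.app},
        {"stream",     info.stream},
        {"vhost",      info.vhost},
    };
}
```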
#ifdef ENABLE_MP4
//broadcast after an mp4 file is recorded successfully
NoticeCenter::Instance().addListener(nullptr,Broadcast::kBroadcastRecordMP4,[](BroadcastRecordMP4Args){
if (!hook_enable || hook_record_mp4.empty()) {
return;
}
ArgsType body;
body["start_time"] = (Json::UInt64)info.ui64StartedTime;
body["time_len"] = (Json::UInt64)info.ui64TimeLen;
body["file_size"] = (Json::UInt64)info.ui64FileSize;
body["file_path"] = info.strFilePath;
body["file_name"] = info.strFileName;
body["folder"] = info.strFolder;
body["url"] = info.strUrl;
body["app"] = info.strAppName;
body["stream"] = info.strStreamId;
body["vhost"] = info.strVhost;
//run the hook
do_http_hook(hook_record_mp4, getRecordInfo(info), nullptr);
});
#endif //ENABLE_MP4
NoticeCenter::Instance().addListener(nullptr, Broadcast::kBroadcastRecordTs, [](BroadcastRecordTsArgs) {
if (!hook_enable || hook_record_ts.empty()) {
return;
}
//run the hook
do_http_hook(hook_record_ts, getRecordInfo(info), nullptr);
});
NoticeCenter::Instance().addListener(nullptr,Broadcast::kBroadcastShellLogin,[](BroadcastShellLoginArgs){
if(!hook_enable || hook_shell_login.empty() || sender.get_peer_ip() == "127.0.0.1"){
invoker("");
...@@ -407,7 +414,6 @@ void installWebHook(){
}
strongSrc->close(false);
});
});
/**
...
...@@ -12,23 +12,17 @@
#include "Util/logger.h"
#include "Util/base64.h"
#include "Extension/AAC.h"
#include "Extension/Opus.h"
#include "Extension/G711.h"
#include "Extension/H264.h"
#include "Extension/H265.h"
using namespace toolkit;
namespace mediakit {
DevChannel::DevChannel(const string &vhost, const string &app, const string &stream_id,
float duration, bool enable_hls, bool enable_mp4) :
MultiMediaSourceMuxer(vhost, app, stream_id, duration, true, true, enable_hls, enable_mp4) {}
DevChannel::~DevChannel() {}
...@@ -109,11 +103,12 @@ void DevChannel::inputH265(const char *data, int len, uint32_t dts, uint32_t pts
inputFrame(frame);
}
class FrameAutoDelete : public FrameFromPtr{
public:
template <typename ... ARGS>
FrameAutoDelete(ARGS && ...args) : FrameFromPtr(std::forward<ARGS>(args)...){}
~FrameAutoDelete() override {
delete [] _ptr;
};
...@@ -123,31 +118,32 @@ public:
};
void DevChannel::inputAAC(const char *data_without_adts, int len, uint32_t dts, const char *adts_header){
if (dts == 0) {
dts = (uint32_t) _aTicker[1].elapsedTime();
}
if (adts_header) {
if (adts_header + ADTS_HEADER_LEN == data_without_adts) {
//adts header and frame are contiguous
inputFrame(std::make_shared<FrameFromPtr>(_audio->codecId, (char *) data_without_adts - ADTS_HEADER_LEN, len + ADTS_HEADER_LEN, dts, 0, ADTS_HEADER_LEN));
} else {
//adts header and frame are not contiguous
char *data_with_adts = new char[len + ADTS_HEADER_LEN];
memcpy(data_with_adts, adts_header, ADTS_HEADER_LEN);
memcpy(data_with_adts + ADTS_HEADER_LEN, data_without_adts, len);
inputFrame(std::make_shared<FrameAutoDelete>(_audio->codecId, data_with_adts, len + ADTS_HEADER_LEN, dts, 0, ADTS_HEADER_LEN));
}
} else {
//no adts header
inputFrame(std::make_shared<FrameFromPtr>(_audio->codecId, (char *) data_without_adts, len, dts, 0, 0));
}
}
void DevChannel::inputAudio(const char *data, int len, uint32_t dts){
if (dts == 0) {
dts = (uint32_t) _aTicker[1].elapsedTime();
}
inputFrame(std::make_shared<FrameFromPtr>(_audio->codecId, (char *) data, len, dts, 0));
}
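The "header and frame not contiguous" branch of `inputAAC()` copies the 7-byte ADTS header in front of the raw AAC payload so the muxer receives one contiguous buffer. That copy step can be shown in isolation (a std::vector stands in for the manually managed buffer owned by `FrameAutoDelete`):

```cpp
#include <cstring>
#include <vector>

static const int kAdtsHeaderLen = 7; // matches ADTS_HEADER_LEN in the source

// Sketch of the non-contiguous branch above: prepend the 7-byte ADTS
// header to the raw AAC payload, producing a single buffer.
static std::vector<char> prepend_adts(const char *adts_header,
                                      const char *payload, int len) {
    std::vector<char> buf(kAdtsHeaderLen + len);
    memcpy(buf.data(), adts_header, kAdtsHeaderLen);
    memcpy(buf.data() + kAdtsHeaderLen, payload, len);
    return buf;
}
```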
void DevChannel::initVideo(const VideoInfo &info) {
...@@ -165,6 +161,7 @@ void DevChannel::initAudio(const AudioInfo &info) {
case CodecAAC : addTrack(std::make_shared<AACTrack>()); break;
case CodecG711A :
case CodecG711U : addTrack(std::make_shared<G711Track>(info.codecId, info.iSampleRate, info.iChannel, info.iSampleBit)); break;
case CodecOpus : addTrack(std::make_shared<OpusTrack>()); break;
default: WarnL << "不支持该类型的音频编码类型:" << info.codecId; break;
}
}
...
...@@ -17,11 +17,9 @@
#include "Util/util.h"
#include "Util/TimeTicker.h"
#include "Common/MultiMediaSourceMuxer.h"
using namespace std;
using namespace toolkit;
#ifdef ENABLE_FAAC
#include "Codec/AACEncoder.h"
#endif //ENABLE_FAAC
...@@ -55,16 +53,10 @@ class DevChannel : public MultiMediaSourceMuxer{
public:
typedef std::shared_ptr<DevChannel> Ptr;
//fDuration<=0 means live stream, otherwise video on demand
DevChannel(const string &vhost, const string &app, const string &stream_id,
float duration = 0, bool enable_hls = true, bool enable_mp4 = false);
~DevChannel() override;
/**
* initialize the video Track
...@@ -108,12 +100,12 @@ public:
void inputAAC(const char *data_without_adts, int len, uint32_t dts, const char *adts_header);
/**
* input an OPUS/G711 audio frame
* @param data audio frame
* @param len frame data length
* @param dts timestamp in milliseconds
*/
void inputAudio(const char *data, int len, uint32_t dts);
#ifdef ENABLE_X264
/**
...
...@@ -161,7 +161,7 @@ vector<Track::Ptr> MediaSink::getTracks(bool trackReady) const{
}
ret.emplace_back(pr.second);
}
return ret;
}
...
...@@ -37,6 +37,11 @@ public:
virtual void addTrack(const Track::Ptr & track) = 0;
/**
* all Tracks have been added
*/
virtual void addTrackCompleted() {}
/**
* reset the tracks
*/
virtual void resetTracks() = 0;
...@@ -70,7 +75,7 @@ public:
* this increases the latency of the generated stream; if both audio and video Tracks were added, calling this method is unnecessary
* otherwise, call this method manually to reduce stream registration latency
*/
void addTrackCompleted() override;
/**
* reset the tracks
...
...@@ -44,33 +44,63 @@ public:
virtual ~MediaSourceEvent(){};
// notify seeking on the progress bar
virtual bool seekTo(MediaSource &sender, uint32_t stamp) { return false; }
// notify it to stop producing the stream
virtual bool close(MediaSource &sender, bool force) { return false; }
// get the total number of viewers
virtual int totalReaderCount(MediaSource &sender) = 0;
// notify that the number of viewers changed
virtual void onReaderChanged(MediaSource &sender, int size);
// stream registered/unregistered event
virtual void onRegist(MediaSource &sender, bool regist) {};
//////////////////////// for inheritance by MultiMediaSourceMuxer objects only ////////////////////////
// start or stop recording
virtual bool setupRecord(MediaSource &sender, Recorder::type type, bool start, const string &custom_path) { return false; };
// get the recording status
virtual bool isRecording(MediaSource &sender, Recorder::type type) { return false; };
// get information about all tracks
virtual vector<Track::Ptr> getTracks(MediaSource &sender, bool trackReady = true) const { return vector<Track::Ptr>(); };
// start sending ps-rtp
virtual void startSendRtp(MediaSource &sender, const string &dst_url, uint16_t dst_port, uint32_t ssrc, bool is_udp, const function<void(const SockException &ex)> &cb) { cb(SockException(Err_other, "not implemented"));};
// stop sending ps-rtp
virtual bool stopSendRtp(MediaSource &sender) { return false; }
private:
Timer::Ptr _async_close_timer;
};
// this object intercepts the MediaSourceEvent events it is interested in
class MediaSourceEventInterceptor : public MediaSourceEvent{
public:
MediaSourceEventInterceptor(){}
~MediaSourceEventInterceptor() override {}
bool seekTo(MediaSource &sender, uint32_t stamp) override;
bool close(MediaSource &sender, bool force) override;
int totalReaderCount(MediaSource &sender) override;
void onReaderChanged(MediaSource &sender, int size) override;
void onRegist(MediaSource &sender, bool regist) override;
bool setupRecord(MediaSource &sender, Recorder::type type, bool start, const string &custom_path) override;
bool isRecording(MediaSource &sender, Recorder::type type) override;
vector<Track::Ptr> getTracks(MediaSource &sender, bool trackReady = true) const override;
void startSendRtp(MediaSource &sender, const string &dst_url, uint16_t dst_port, uint32_t ssrc, bool is_udp, const function<void(const SockException &ex)> &cb) override;
bool stopSendRtp(MediaSource &sender) override;
protected:
std::weak_ptr<MediaSourceEvent> _listener;
};
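Each `MediaSourceEventInterceptor` method delegates to the wrapped weak listener when it is still alive and falls back to the `MediaSourceEvent` default otherwise. A toy version of that delegation pattern, with illustrative names rather than the real API:

```cpp
#include <memory>

// Toy sketch of the interceptor pattern used by MediaSourceEventInterceptor:
// forward the call to a weak listener if it is still alive, otherwise fall
// back to the base-class default behavior.
struct Event {
    virtual ~Event() = default;
    virtual int totalReaderCount() { return 0; } // default implementation
};

struct Interceptor : Event {
    std::weak_ptr<Event> _listener;
    int totalReaderCount() override {
        if (auto strong = _listener.lock()) {
            return strong->totalReaderCount(); // delegate while listener lives
        }
        return Event::totalReaderCount();      // default once it expired
    }
};
```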
/**
* parse a url to obtain media-related information
*/
class MediaInfo{
public:
~MediaInfo() {}
MediaInfo() {}
MediaInfo(const string &url) { parse(url); }
void parse(const string &url);
public:
string _schema;
string _host;
...@@ -92,9 +122,11 @@ public:
typedef unordered_map<string, AppStreamMap > VhostAppStreamMap;
typedef unordered_map<string, VhostAppStreamMap > SchemaVhostAppStreamMap;
MediaSource(const string &schema, const string &vhost, const string &app, const string &stream_id);
virtual ~MediaSource();
//////////////// obtain MediaSource-related information ////////////////
// get the schema type
const string& getSchema() const;
// virtual host
...@@ -104,13 +136,18 @@ public:
// stream id
const string& getId() const;
// set the TrackSource
void setTrackSource(const std::weak_ptr<TrackSource> &track_src);
// get all Tracks
vector<Track::Ptr> getTracks(bool ready = true) const override;
// get the current timestamp of the stream
virtual uint32_t getTimeStamp(TrackType type) { return 0; };
// set the timestamp
virtual void setTimeStamp(uint32_t stamp) {};
//////////////// MediaSourceEvent-related interface implementations ////////////////
// set the listener
void setListener(const std::weak_ptr<MediaSourceEvent> &listener);
// get the listener
const std::weak_ptr<MediaSourceEvent>& getListener() const;
...@@ -119,48 +156,52 @@ public:
// number of viewers, including (hls/rtsp/rtmp)
virtual int totalReaderCount();
// seek on the progress bar
bool seekTo(uint32_t stamp);
// close this stream
bool close(bool force);
// the number of viewers of this stream changed
void onReaderChanged(int size);
// start or stop recording
bool setupRecord(Recorder::type type, bool start, const string &custom_path);
// get the recording status
bool isRecording(Recorder::type type);
// start sending ps-rtp
void startSendRtp(const string &dst_url, uint16_t dst_port, uint32_t ssrc, bool is_udp, const function<void(const SockException &ex)> &cb);
// stop sending ps-rtp
bool stopSendRtp();
//////////////// static methods: find or create a MediaSource ////////////////
// synchronously find a stream
static Ptr find(const string &schema, const string &vhost, const string &app, const string &id);
// ignore the schema and synchronously find a stream; may return an rtmp/rtsp/hls source
static Ptr find(const string &vhost, const string &app, const string &stream_id);
// asynchronously find a stream
static void findAsync(const MediaInfo &info, const std::shared_ptr<TcpSession> &session, const function<void(const Ptr &src)> &cb);
// iterate over all streams
static void for_each_media(const function<void(const Ptr &src)> &cb);
// create a MediaSource from an mp4 file
static MediaSource::Ptr createFromMP4(const string &schema, const string &vhost, const string &app, const string &stream, const string &file_path = "", bool check_app = true);
protected:
// media registration
void regist();
private:
// media unregistration
bool unregist();
// fire the media event
void emitEvent(bool regist);
private:
string _schema;
string _vhost;
string _app;
string _stream_id;
std::weak_ptr<MediaSourceEvent> _listener;
weak_ptr<TrackSource> _track_source;
static SchemaVhostAppStreamMap g_mapMediaSrc;
static recursive_mutex g_mtxMediaSrc;
}; };
///缓存刷新策略类 ///缓存刷新策略类
...@@ -174,7 +215,7 @@ public: ...@@ -174,7 +215,7 @@ public:
} }
uint32_t getStamp(const RtmpPacket::Ptr &packet) { uint32_t getStamp(const RtmpPacket::Ptr &packet) {
return packet->timeStamp; return packet->time_stamp;
} }
bool isFlushAble(bool is_video, bool is_key, uint32_t new_stamp, int cache_size); bool isFlushAble(bool is_video, bool is_key, uint32_t new_stamp, int cache_size);
...@@ -208,6 +249,10 @@ public: ...@@ -208,6 +249,10 @@ public:
} }
} }
virtual void clearCache() {
_cache->clear();
}
virtual void onFlush(std::shared_ptr<packet_list> &, bool key_pos) = 0; virtual void onFlush(std::shared_ptr<packet_list> &, bool key_pos) = 0;
private: private:
...@@ -221,9 +266,9 @@ private: ...@@ -221,9 +266,9 @@ private:
} }
private: private:
bool _key_pos = false;
policy _policy; policy _policy;
std::shared_ptr<packet_list> _cache; std::shared_ptr<packet_list> _cache;
bool _key_pos = false;
}; };
} /* namespace mediakit */ } /* namespace mediakit */
......
...@@ -10,14 +10,20 @@
#ifndef ZLMEDIAKIT_MULTIMEDIASOURCEMUXER_H
#define ZLMEDIAKIT_MULTIMEDIASOURCEMUXER_H

#include "Common/Stamp.h"
#include "Rtp/PSRtpSender.h"
#include "Record/Recorder.h"
#include "Record/HlsRecorder.h"
#include "Record/HlsMediaSource.h"
#include "Rtsp/RtspMediaSourceMuxer.h"
#include "Rtmp/RtmpMediaSourceMuxer.h"
#include "TS/TSMediaSourceMuxer.h"
#include "FMP4/FMP4MediaSourceMuxer.h"

namespace mediakit{

class MultiMuxerPrivate : public MediaSink, public std::enable_shared_from_this<MultiMuxerPrivate>{
public:
    friend class MultiMediaSourceMuxer;
    typedef std::shared_ptr<MultiMuxerPrivate> Ptr;
...@@ -27,17 +33,12 @@ public:
        virtual ~Listener() = default;
        virtual void onAllTrackReady() = 0;
    };

    ~MultiMuxerPrivate() override;

private:
    MultiMuxerPrivate(const string &vhost, const string &app, const string &stream, float dur_sec,
                      bool enable_rtsp, bool enable_rtmp, bool enable_hls, bool enable_mp4);
    void resetTracks() override;
    void setMediaListener(const std::weak_ptr<MediaSourceEvent> &listener);
    int totalReaderCount() const;
...@@ -46,83 +47,67 @@ private:
    bool setupRecord(MediaSource &sender, Recorder::type type, bool start, const string &custom_path);
    bool isRecording(MediaSource &sender, Recorder::type type);
    bool isEnabled();
    void onTrackReady(const Track::Ptr &track) override;
    void onTrackFrame(const Frame::Ptr &frame) override;
    void onAllTrackReady() override;
    MediaSource::Ptr getHlsMediaSource() const;

private:
    string _stream_url;
    Listener *_track_listener = nullptr;
    RtmpMediaSourceMuxer::Ptr _rtmp;
    RtspMediaSourceMuxer::Ptr _rtsp;
    HlsRecorder::Ptr _hls;
    MediaSinkInterface::Ptr _mp4;
    TSMediaSourceMuxer::Ptr _ts;
    FMP4MediaSourceMuxer::Ptr _fmp4;
    std::weak_ptr<MediaSourceEvent> _listener;
};
class MultiMediaSourceMuxer : public MediaSourceEventInterceptor, public MediaSinkInterface, public MultiMuxerPrivate::Listener, public std::enable_shared_from_this<MultiMediaSourceMuxer>{
public:
    typedef MultiMuxerPrivate::Listener Listener;
    typedef std::shared_ptr<MultiMediaSourceMuxer> Ptr;

    ~MultiMediaSourceMuxer() override;
    MultiMediaSourceMuxer(const string &vhost, const string &app, const string &stream, float dur_sec = 0.0,
                          bool enable_rtsp = true, bool enable_rtmp = true, bool enable_hls = true, bool enable_mp4 = false);

    /**
     * Set the event listener
     * @param listener the listener
     */
    void setMediaListener(const std::weak_ptr<MediaSourceEvent> &listener);

    /**
     * Set the listener for the all-tracks-ready event
     * @param listener the event listener
     */
    void setTrackListener(const std::weak_ptr<MultiMuxerPrivate::Listener> &listener);

    /**
     * Return the total consumer count
     */
    int totalReaderCount() const;

    /**
     * Whether this muxer is in effect (i.e. converting to other protocols)
     */
    bool isEnabled();

    /**
     * Set the MediaSource timestamp
     * @param stamp the timestamp
     */
    void setTimeStamp(uint32_t stamp);

    /////////////////////////////////MediaSourceEvent override/////////////////////////////////

    /**
     * Get all Tracks
     * @param trackReady whether to filter out tracks that are not ready
     * @return all Tracks
     */
    vector<Track::Ptr> getTracks(MediaSource &sender, bool trackReady = true) const override;

    /**
     * Total viewer count
...@@ -132,19 +117,6 @@ public:
     */
    int totalReaderCount(MediaSource &sender) override;

    /**
     * Set the recording state
     * @param type recording type
     * @param start start or stop
...@@ -161,17 +133,34 @@ public:
     */
    bool isRecording(MediaSource &sender, Recorder::type type) override;

    /**
     * Start sending the ps-rtp stream
     * @param dst_url destination ip or domain
     * @param dst_port destination port
     * @param ssrc rtp ssrc
     * @param is_udp whether to use udp
     * @param cb callback invoked on success or failure
     */
    void startSendRtp(MediaSource &sender, const string &dst_url, uint16_t dst_port, uint32_t ssrc, bool is_udp, const function<void(const SockException &ex)> &cb) override;

    /**
     * Stop sending ps-rtp
     * @return whether it succeeded
     */
    bool stopSendRtp(MediaSource &sender) override;

    /////////////////////////////////MediaSinkInterface override/////////////////////////////////

    /**
     * Add a track; internally calls the Track's clone method,
     * which only clones info such as sps/pps, not the Delegate relationships
     * @param track the audio or video track to add
     */
    void addTrack(const Track::Ptr &track) override;

    /**
     * Called when all tracks have been added
     */
    void addTrackCompleted() override;

    /**
     * Reset the tracks
...@@ -184,14 +173,20 @@ public:
     */
    void inputFrame(const Frame::Ptr &frame) override;

    /////////////////////////////////MultiMuxerPrivate::Listener override/////////////////////////////////

    /**
     * Called when all tracks are ready
     */
    void onAllTrackReady() override;

private:
    Stamp _stamp[2];
    MultiMuxerPrivate::Ptr _muxer;
    std::weak_ptr<MultiMuxerPrivate::Listener> _track_listener;
#if defined(ENABLE_RTPPROXY)
    PSRtpSender::Ptr _ps_rtp_sender;
#endif //ENABLE_RTPPROXY
};

}//namespace mediakit
......
...@@ -44,77 +44,79 @@ void Stamp::setPlayBack(bool playback) {
void Stamp::syncTo(Stamp &other){
    _sync_master = &other;
}

//prevent the output dts from going backwards
void Stamp::revise(int64_t dts, int64_t pts, int64_t &dts_out, int64_t &pts_out, bool modifyStamp) {
    revise_l(dts, pts, dts_out, pts_out, modifyStamp);
    if (_playback) {
        //playback (vod) is allowed to rewind timestamps
        return;
    }

    if (dts_out < _last_dts_out) {
        WarnL << "dts went backwards: " << dts_out << " < " << _last_dts_out;
        dts_out = _last_dts_out;
        pts_out = _last_pts_out;
        return;
    }
    _last_dts_out = dts_out;
    _last_pts_out = pts_out;
}

//audio/video timestamp synchronization
void Stamp::revise_l(int64_t dts, int64_t pts, int64_t &dts_out, int64_t &pts_out, bool modifyStamp) {
    revise_l2(dts, pts, dts_out, pts_out, modifyStamp);
    if (!_sync_master || modifyStamp || _playback) {
        //timestamps are auto-generated, this is playback, or sync has already finished
        return;
    }

    if (_sync_master && _sync_master->_last_dts_in) {
        //current dts gap between audio and video
        int64_t dts_diff = _last_dts_in - _sync_master->_last_dts_in;
        if (ABS(dts_diff) < 5000) {
            //if the gap is under 5 seconds, the start timestamps are consistent, so force-sync them
            _relative_stamp = _sync_master->_relative_stamp + dts_diff;
        }
        //no need to force-sync again next time
        _sync_master = nullptr;
    }
}

//compute the relative timestamp
void Stamp::revise_l2(int64_t dts, int64_t pts, int64_t &dts_out, int64_t &pts_out, bool modifyStamp) {
    if (!pts) {
        //no presentation timestamp; fall back to the decode timestamp
        pts = dts;
    }

    if (_playback) {
        //this is vod playback
        dts_out = dts;
        pts_out = pts;
        _relative_stamp = dts_out;
        _last_dts_in = dts;
        return;
    }

    //difference between pts and dts
    int pts_dts_diff = pts - dts;

    if (_last_dts_in != dts) {
        //the timestamp changed
        if (modifyStamp) {
            //generate the timestamp internally
            _relative_stamp = _ticker.elapsedTime();
        } else {
            _relative_stamp += deltaStamp(dts);
        }
        _last_dts_in = dts;
    }
    dts_out = _relative_stamp;

    //////////////pts computation below//////////////////
    if (ABS(pts_dts_diff) > MAX_CTS) {
        //if the difference is too large, assume a wrap-around corrupted the timestamps
        pts_dts_diff = 0;
    }
...@@ -123,11 +125,11 @@ void Stamp::revise_l(int64_t dts, int64_t pts, int64_t &dts_ou
}

void Stamp::setRelativeStamp(int64_t relativeStamp) {
    _relative_stamp = relativeStamp;
}

int64_t Stamp::getRelativeStamp() const {
    return _relative_stamp;
}
bool DtsGenerator::getDts(uint32_t pts, uint32_t &dts){
......
...@@ -29,6 +29,7 @@ public:
     * @return the timestamp delta
     */
    int64_t deltaStamp(int64_t stamp);

private:
    int64_t _last_stamp = 0;
};
...@@ -41,7 +42,7 @@ public:
    ~Stamp() = default;

    /**
     * Compute the relative timestamp; also implements audio/video sync and prevents dts regression
     * @param dts input dts; if 0, it is generated from the system clock
     * @param pts input pts; if 0, it is set equal to dts
     * @param dts_out output dts
...@@ -75,15 +76,20 @@ public:
    void syncTo(Stamp &other);

private:
    //implements audio/video timestamp synchronization
    void revise_l(int64_t dts, int64_t pts, int64_t &dts_out, int64_t &pts_out, bool modifyStamp = false);
    //computes the relative timestamp
    void revise_l2(int64_t dts, int64_t pts, int64_t &dts_out, int64_t &pts_out, bool modifyStamp = false);

private:
    int64_t _relative_stamp = 0;
    int64_t _last_dts_in = 0;
    int64_t _last_dts_out = 0;
    int64_t _last_pts_out = 0;
    SmoothTicker _ticker;
    bool _playback = false;
    Stamp *_sync_master = nullptr;
};

//dts generator
...@@ -93,8 +99,10 @@ public:
    DtsGenerator() = default;
    ~DtsGenerator() = default;
    bool getDts(uint32_t pts, uint32_t &dts);

private:
    bool getDts_l(uint32_t pts, uint32_t &dts);

private:
    uint32_t _dts_pts_offset = 0;
    uint32_t _last_dts = 0;
......
...@@ -40,6 +40,7 @@ bool loadIniConfig(const char *ini_path){
namespace Broadcast {
const string kBroadcastMediaChanged = "kBroadcastMediaChanged";
const string kBroadcastRecordMP4 = "kBroadcastRecordMP4";
const string kBroadcastRecordTs = "kBroadcastRecoredTs";
const string kBroadcastHttpRequest = "kBroadcastHttpRequest";
const string kBroadcastHttpAccess = "kBroadcastHttpAccess";
const string kBroadcastOnGetRtspRealm = "kBroadcastOnGetRtspRealm";
...@@ -63,7 +64,6 @@ const string kMaxStreamWaitTimeMS = GENERAL_FIELD"maxStreamWaitMS";
const string kEnableVhost = GENERAL_FIELD"enableVhost";
const string kAddMuteAudio = GENERAL_FIELD"addMuteAudio";
const string kResetWhenRePlay = GENERAL_FIELD"resetWhenRePlay";
const string kPublishToHls = GENERAL_FIELD"publishToHls";
const string kPublishToMP4 = GENERAL_FIELD"publishToMP4";
const string kMergeWriteMS = GENERAL_FIELD"mergeWriteMS";
...@@ -76,7 +76,6 @@ onceToken token([](){
mINI::Instance()[kEnableVhost] = 0;
mINI::Instance()[kAddMuteAudio] = 1;
mINI::Instance()[kResetWhenRePlay] = 1;
mINI::Instance()[kPublishToHls] = 1;
mINI::Instance()[kPublishToMP4] = 0;
mINI::Instance()[kMergeWriteMS] = 0;
...@@ -253,6 +252,8 @@ const string kSegmentRetain = HLS_FIELD"segRetain";
const string kFileBufSize = HLS_FIELD"fileBufSize";
//recording file path
const string kFilePath = HLS_FIELD"filePath";
//whether to broadcast the ts-segment-recorded notification
const string kBroadcastRecordTs = HLS_FIELD"broadcastRecordTs";

onceToken token([](){
mINI::Instance()[kSegmentDuration] = 2;
...@@ -260,6 +261,7 @@ onceToken token([](){
mINI::Instance()[kSegmentRetain] = 5;
mINI::Instance()[kFileBufSize] = 64 * 1024;
mINI::Instance()[kFilePath] = "./www";
mINI::Instance()[kBroadcastRecordTs] = false;
},nullptr);
} //namespace Hls
......
...@@ -47,6 +47,8 @@ bool loadIniConfig(const char *ini_path = nullptr);
#define RTSP_SCHEMA "rtsp"
#define RTMP_SCHEMA "rtmp"
#define HLS_SCHEMA "hls"
#define TS_SCHEMA "ts"
#define FMP4_SCHEMA "fmp4"
#define DEFAULT_VHOST "__defaultVhost__"

////////////broadcast names///////////
...@@ -58,7 +60,11 @@ extern const string kBroadcastMediaChanged;
//broadcast after an mp4 file has been recorded successfully
extern const string kBroadcastRecordMP4;
#define BroadcastRecordMP4Args const RecordInfo &info

//broadcast after a ts file has been recorded
extern const string kBroadcastRecordTs;
#define BroadcastRecordTsArgs const RecordInfo &info

//broadcast when an http api request is received
extern const string kBroadcastHttpRequest;
...@@ -86,8 +92,7 @@ extern const string kBroadcastOnRtspAuth;
//if errMessage is empty, authentication succeeded
//enableHls: whether converting to hls is allowed
//enableMP4: whether MP4 recording is allowed
typedef std::function<void(const string &errMessage, bool enableHls, bool enableMP4)> PublishAuthInvoker;

//broadcast of rtsp/rtmp publish events; used to control publish authentication
extern const string kBroadcastMediaPublish;
...@@ -165,8 +170,6 @@ extern const string kAddMuteAudio;
//when a pull proxy reconnects after a disconnect, whether to drop the previous stream data; if dropped, it restarts from scratch,
//otherwise it keeps appending after the previous data (hls/mp4 recording continues in the previous file)
extern const string kResetWhenRePlay;
//whether publishing converts to hls by default; the hook api (on_publish) can override this
extern const string kPublishToHls;
//whether publishing records mp4 by default; the hook api (on_publish) can override this
...@@ -284,6 +287,8 @@ extern const string kSegmentRetain;
extern const string kFileBufSize;
//recording file path
extern const string kFilePath;
//whether to broadcast the ts-segment-recorded notification
extern const string kBroadcastRecordTs;
} //namespace Hls

////////////Rtp proxy related config///////////
......
...@@ -92,6 +92,16 @@ static void parseAacConfig(const string &config, AdtsHeader &adts) {
adts.no_raw_data_blocks_in_frame = 0;
}
int getAacFrameLength(const uint8_t *data, int bytes) {
uint16_t len;
if (bytes < 7) return -1;
if (0xFF != data[0] || 0xF0 != (data[1] & 0xF0)) {
return -1;
}
len = ((uint16_t) (data[3] & 0x03) << 11) | ((uint16_t) data[4] << 3) | ((uint16_t) (data[5] >> 5) & 0x07);
return len;
}
string makeAacConfig(const uint8_t *hex, int length){
#ifndef ENABLE_MP4
if (!(hex[0] == 0xFF && (hex[1] & 0xF0) == 0xF0)) {
...@@ -134,7 +144,7 @@ int dumpAacConfig(const string &config, int length, uint8_t *out, int out_size)
#ifndef ENABLE_MP4
AdtsHeader header;
parseAacConfig(config, header);
header.aac_frame_length = ADTS_HEADER_LEN + length;
dumpAdtsHeader(header, out);
return ADTS_HEADER_LEN;
#else
......
...@@ -18,45 +18,11 @@
namespace mediakit{

string makeAacConfig(const uint8_t *hex, int length);
int getAacFrameLength(const uint8_t *hex, int length);
int dumpAacConfig(const string &config, int length, uint8_t *out, int out_size);
bool parseAacConfig(const string &config, int &samplerate, int &channels);

/**
 * aac audio track
 */
class AACTrack : public AudioTrack{
...@@ -135,6 +101,28 @@ public:
     * @param frame the data frame
     */
    void inputFrame(const Frame::Ptr &frame) override{
        if (frame->prefixSize()) {
            //the frame carries adts headers; try to split it into sub-frames
            auto ptr = frame->data();
            auto end = frame->data() + frame->size();
            while (ptr < end) {
                auto frame_len = getAacFrameLength((uint8_t *) ptr, end - ptr);
                if (frame_len < ADTS_HEADER_LEN) {
                    break;
                }
                auto sub_frame = std::make_shared<FrameInternal<FrameFromPtr> >(frame, (char *) ptr, frame_len, ADTS_HEADER_LEN);
                ptr += frame_len;
                sub_frame->setCodecId(CodecAAC);
                inputFrame_l(sub_frame);
            }
        } else {
            inputFrame_l(frame);
        }
    }

private:
    void inputFrame_l(const Frame::Ptr &frame) {
        if (_cfg.empty()) {
            //aac_cfg has not been obtained yet
            if (frame->prefixSize()) {
...@@ -151,7 +139,6 @@ public:
            AudioTrack::inputFrame(frame);
        }
    }

    /**
     * Parse the 2-byte aac config
     */
......
...@@ -21,29 +21,30 @@ static string getAacCfg(const RtmpPacket &thiz) {
    if (!thiz.isCfgFrame()) {
        return ret;
    }
    if (thiz.buffer.size() < 4) {
        WarnL << "bad aac cfg!";
        return ret;
    }
    ret = thiz.buffer.substr(2);
    return ret;
}

void AACRtmpDecoder::inputRtmp(const RtmpPacket::Ptr &pkt) {
    if (pkt->isCfgFrame()) {
        _aac_cfg = getAacCfg(*pkt);
        onGetAAC(nullptr, 0, 0);
        return;
    }

    if (!_aac_cfg.empty()) {
        onGetAAC(pkt->buffer.data() + 2, pkt->buffer.size() - 2, pkt->time_stamp);
    }
}

void AACRtmpDecoder::onGetAAC(const char* data, int len, uint32_t stamp) {
    auto frame = ResourcePoolHelper<FrameImp>::obtainObj();
    frame->_codec_id = CodecAAC;

    //generate the adts header
    char adts_header[32] = {0};
    auto size = dumpAacConfig(_aac_cfg, len, (uint8_t *) adts_header, sizeof(adts_header));
...@@ -95,43 +96,43 @@ void AACRtmpEncoder::inputFrame(const Frame::Ptr &frame) {
    if(!_aac_cfg.empty()){
        RtmpPacket::Ptr rtmpPkt = ResourcePoolHelper<RtmpPacket>::obtainObj();
        rtmpPkt->buffer.clear();

        //header
        uint8_t is_config = false;
        rtmpPkt->buffer.push_back(_audio_flv_flags);
        rtmpPkt->buffer.push_back(!is_config);

        //aac data
        rtmpPkt->buffer.append(frame->data() + frame->prefixSize(), frame->size() - frame->prefixSize());

        rtmpPkt->body_size = rtmpPkt->buffer.size();
        rtmpPkt->chunk_id = CHUNK_AUDIO;
        rtmpPkt->stream_index = STREAM_MEDIA;
        rtmpPkt->time_stamp = frame->dts();
        rtmpPkt->type_id = MSG_AUDIO;
        RtmpCodec::inputRtmp(rtmpPkt);
    }
}

void AACRtmpEncoder::makeAudioConfigPkt() {
    _audio_flv_flags = getAudioRtmpFlags(std::make_shared<AACTrack>(_aac_cfg));
    RtmpPacket::Ptr rtmpPkt = ResourcePoolHelper<RtmpPacket>::obtainObj();
    rtmpPkt->buffer.clear();

    //header
    uint8_t is_config = true;
    rtmpPkt->buffer.push_back(_audio_flv_flags);
    rtmpPkt->buffer.push_back(!is_config);

    //aac config
    rtmpPkt->buffer.append(_aac_cfg);

    rtmpPkt->body_size = rtmpPkt->buffer.size();
    rtmpPkt->chunk_id = CHUNK_AUDIO;
    rtmpPkt->stream_index = STREAM_MEDIA;
    rtmpPkt->time_stamp = 0;
    rtmpPkt->type_id = MSG_AUDIO;
    RtmpCodec::inputRtmp(rtmpPkt);
}

}//namespace mediakit
\ No newline at end of file
...@@ -19,7 +19,7 @@ namespace mediakit{
/**
 * aac Rtmp-to-adts converter class
 */
class AACRtmpDecoder : public RtmpCodec, public ResourcePoolHelper<FrameImp> {
public:
    typedef std::shared_ptr<AACRtmpDecoder> Ptr;
...@@ -28,10 +28,9 @@ public:
    /**
     * Input Rtmp and decode it
     * @param rtmp the Rtmp packet
     */
    void inputRtmp(const RtmpPacket::Ptr &rtmp) override;

    CodecId getCodecId() const override{
        return CodecAAC;
......
...@@ -9,7 +9,6 @@ ...@@ -9,7 +9,6 @@
 */

#include "AACRtp.h"
#define AAC_MAX_FRAME_SIZE (2 * 1024)
namespace mediakit{
...@@ -68,60 +67,73 @@ AACRtpDecoder::AACRtpDecoder(const Track::Ptr &track) { ...@@ -68,60 +67,73 @@ AACRtpDecoder::AACRtpDecoder(const Track::Ptr &track) {
    } else {
        _aac_cfg = aacTrack->getAacCfg();
    }
    obtainFrame();
}

AACRtpDecoder::AACRtpDecoder() {
    obtainFrame();
}
void AACRtpDecoder::obtainFrame() {
    //re-acquire an object from the pool so we don't overwrite a frame already queued in the ring buffer
    _frame = ResourcePoolHelper<FrameImp>::obtainObj();
    _frame->_prefix_size = 0;
    _frame->_buffer.clear();
    _frame->_codec_id = CodecAAC;
}
bool AACRtpDecoder::inputRtp(const RtpPacket::Ptr &rtppack, bool key_pos) {
    //start of the rtp payload
    uint8_t *ptr = (uint8_t *) rtppack->data() + rtppack->offset;
    //end of the rtp payload
    uint8_t *end = (uint8_t *) rtppack->data() + rtppack->size();
    //the first 2 bytes give the length of the AU-Header section in bits, so divide by 16 to get the AU-Header count
    uint16_t au_header_count = ((ptr[0] << 8) | ptr[1]) >> 4;
    //remember where the AU-Headers start
    uint8_t *au_header_ptr = ptr + 2;
    ptr = au_header_ptr + au_header_count * 2;

    if (!au_header_count || end < ptr) {
        //no AU-Header or not enough data
        return false;
    }

    if (!_last_dts) {
        //record the first timestamp
        _last_dts = rtppack->timeStamp;
    }

    //timestamp increment per audio unit (signed, so the sanity check below can take effect)
    int64_t dts_inc = ((int64_t) rtppack->timeStamp - _last_dts) / au_header_count;
    if (dts_inc < 0 || dts_inc > 100) {
        //abnormal timestamp increment, ignore it
        dts_inc = 0;
    }

    for (int i = 0; i < au_header_count; ++i) {
        //each AU-Header takes 2 bytes; the high 13 bits carry the byte length of one AAC frame, the low 3 bits are unused
        uint16_t size = ((au_header_ptr[0] << 8) | au_header_ptr[1]) >> 3;
        if (ptr + size > end) {
            //not enough data
            break;
        }

        if (size) {
            //copy the aac payload
            _frame->_buffer.assign((char *) ptr, size);
            //timestamp of this audio unit
            _frame->_dts = _last_dts + i * dts_inc;
            ptr += size;
            au_header_ptr += 2;
            flushData();
        }
    }
    //record the last timestamp
    _last_dts = rtppack->timeStamp;
    return false;
}
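The AU-Header layout that inputRtp() relies on (RFC 3640 high-bitrate AAC mode: a 2-byte field giving the header section length in bits, then one 2-byte AU-Header per access unit whose top 13 bits carry the frame size) can be sketched in isolation. This is a minimal illustrative parser, not the project's API; the names `AuInfo` and `parseAuHeaders` are invented for the example.

```cpp
#include <cstdint>
#include <cstddef>
#include <vector>

//one parsed AU-Header entry: the byte length of one AAC access unit
struct AuInfo {
    uint16_t size;
};

//sketch of RFC 3640 AU-Header-Section parsing as used above
static std::vector<AuInfo> parseAuHeaders(const uint8_t *payload, size_t len) {
    std::vector<AuInfo> result;
    if (len < 2) {
        return result;
    }
    //first 2 bytes: header section length in bits -> number of 16-bit AU-Headers
    uint16_t au_count = ((payload[0] << 8) | payload[1]) >> 4;
    size_t headers_end = 2 + au_count * 2;
    if (headers_end > len) {
        return result;
    }
    for (uint16_t i = 0; i < au_count; ++i) {
        const uint8_t *h = payload + 2 + i * 2;
        //high 13 bits: AU size in bytes, low 3 bits: AU index (unused here)
        result.push_back(AuInfo{(uint16_t) (((h[0] << 8) | h[1]) >> 3)});
    }
    return result;
}
```

For a packet announcing two access units of 5 and 7 bytes, the header section is `00 20 00 28 00 38` (32 header bits, then 5<<3 and 7<<3).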
void AACRtpDecoder::flushData() {
    if (_frame->_buffer.empty()) {
        //no valid data
        return;
    }

    //prepend the adts header
    char adts_header[32] = {0};
    auto size = dumpAacConfig(_aac_cfg, _frame->_buffer.size(), (uint8_t *) adts_header, sizeof(adts_header));
...@@ -131,11 +143,7 @@ void AACRtpDecoder::flushData() { ...@@ -131,11 +143,7 @@ void AACRtpDecoder::flushData() {
        _frame->_prefix_size = size;
    }

    RtpCodec::inputFrame(_frame);
    obtainFrame();
}

}//namespace mediakit
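The ADTS header that flushData() prepends via dumpAacConfig is the standard 7-byte MPEG-4 ADTS header (no CRC). The sketch below shows how such a header is packed from the AAC profile, sampling-frequency index, and channel count; `makeAdtsHeader` is an illustrative name, not the project's function, which derives these fields from the stored aac config instead.

```cpp
#include <cstdint>
#include <cstddef>

//sketch of a 7-byte ADTS header (MPEG-4, no CRC);
//frame_len counts the header itself plus the raw AAC payload
static void makeAdtsHeader(uint8_t out[7], int profile, int sample_rate_index,
                           int channels, size_t payload_size) {
    size_t frame_len = payload_size + 7;
    out[0] = 0xFF;  //syncword 0xFFF (high 8 bits)
    out[1] = 0xF1;  //syncword low 4 bits, MPEG-4, layer 0, no CRC
    out[2] = (uint8_t) (((profile - 1) << 6) | (sample_rate_index << 2) | (channels >> 2));
    out[3] = (uint8_t) (((channels & 0x03) << 6) | ((frame_len >> 11) & 0x03));
    out[4] = (uint8_t) ((frame_len >> 3) & 0xFF);
    out[5] = (uint8_t) (((frame_len & 0x07) << 5) | 0x1F);  //buffer fullness = 0x7FF
    out[6] = 0xFC;
}
```

With this header in front of each raw frame, the output becomes a self-describing ADTS stream that players can sync to without out-of-band config.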
...@@ -17,7 +17,7 @@ namespace mediakit{ ...@@ -17,7 +17,7 @@ namespace mediakit{
/**
 * aac rtp to adts class
 */
class AACRtpDecoder : public RtpCodec , public ResourcePoolHelper<FrameImp> {
public:
    typedef std::shared_ptr<AACRtpDecoder> Ptr;
...@@ -39,12 +39,13 @@ protected: ...@@ -39,12 +39,13 @@ protected:
    AACRtpDecoder();

private:
    void obtainFrame();
    void flushData();

private:
    FrameImp::Ptr _frame;
    string _aac_cfg;
    uint32_t _last_dts = 0;
};
......
/*
* Copyright (c) 2016 The ZLMediaKit project authors. All Rights Reserved.
*
* This file is part of ZLMediaKit(https://github.com/xiongziliang/ZLMediaKit).
*
* Use of this source code is governed by MIT license that can be found in the
* LICENSE file in the root of the source tree. All contributing project authors
* may be found in the AUTHORS file in the root of the source tree.
*/
#include "CommonRtmp.h"
namespace mediakit{
CommonRtmpDecoder::CommonRtmpDecoder(CodecId codec) {
_codec = codec;
obtainFrame();
}
CodecId CommonRtmpDecoder::getCodecId() const {
return _codec;
}
void CommonRtmpDecoder::obtainFrame() {
    //re-acquire an object from the pool so we don't overwrite a frame already queued in the ring buffer
_frame = ResourcePoolHelper<FrameImp>::obtainObj();
_frame->_buffer.clear();
_frame->_codec_id = _codec;
_frame->_prefix_size = 0;
}
void CommonRtmpDecoder::inputRtmp(const RtmpPacket::Ptr &rtmp) {
    //copy the payload
_frame->_buffer.assign(rtmp->buffer.data() + 1, rtmp->buffer.size() - 1);
_frame->_dts = rtmp->time_stamp;
    //write it to the ring buffer
RtmpCodec::inputFrame(_frame);
    //prepare the next frame
obtainFrame();
}
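CommonRtmpDecoder::inputRtmp() skips one byte before the payload, and CommonRtmpEncoder pushes `_audio_flv_flags` first: that byte is the FLV audio tag header, packing codec id (4 bits), sample-rate index (2 bits), sample size (1 bit), and channel count (1 bit). A sketch of how such a flags byte is packed (the project's getAudioRtmpFlags derives it from the Track; `packFlvAudioFlags` here is only illustrative):

```cpp
#include <cstdint>

//sketch of the FLV audio tag flags byte:
//  bits 7-4 codec id, bits 3-2 sample-rate index,
//  bit 1 sample size (0=8bit, 1=16bit), bit 0 channels (0=mono, 1=stereo)
static uint8_t packFlvAudioFlags(uint8_t codec_id, uint8_t rate_index,
                                 bool is_16bit, bool is_stereo) {
    return (uint8_t) ((codec_id << 4) | (rate_index << 2) |
                      ((is_16bit ? 1 : 0) << 1) | (is_stereo ? 1 : 0));
}
```

For AAC (FLV codec id 10) the spec fixes the remaining bits to 44.1kHz/16bit/stereo, which is why AAC audio tags always start with 0xAF.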
/////////////////////////////////////////////////////////////////////////////////////
CommonRtmpEncoder::CommonRtmpEncoder(const Track::Ptr &track) : CommonRtmpDecoder(track->getCodecId()) {
_audio_flv_flags = getAudioRtmpFlags(track);
}
void CommonRtmpEncoder::inputFrame(const Frame::Ptr &frame) {
if (!_audio_flv_flags) {
return;
}
RtmpPacket::Ptr rtmp = ResourcePoolHelper<RtmpPacket>::obtainObj();
rtmp->buffer.clear();
//header
rtmp->buffer.push_back(_audio_flv_flags);
//data
rtmp->buffer.append(frame->data() + frame->prefixSize(), frame->size() - frame->prefixSize());
rtmp->body_size = rtmp->buffer.size();
rtmp->chunk_id = CHUNK_AUDIO;
rtmp->stream_index = STREAM_MEDIA;
rtmp->time_stamp = frame->dts();
rtmp->type_id = MSG_AUDIO;
RtmpCodec::inputRtmp(rtmp);
}
}//namespace mediakit
\ No newline at end of file
...@@ -8,59 +8,66 @@ ...@@ -8,59 +8,66 @@
* may be found in the AUTHORS file in the root of the source tree. * may be found in the AUTHORS file in the root of the source tree.
*/ */
#ifndef ZLMEDIAKIT_COMMONRTMP_H
#define ZLMEDIAKIT_COMMONRTMP_H

#include "Frame.h"
#include "Rtmp/RtmpCodec.h"

namespace mediakit{
/**
 * generic rtmp decoder
 */
class CommonRtmpDecoder : public RtmpCodec , public ResourcePoolHelper<FrameImp> {
public:
    typedef std::shared_ptr<CommonRtmpDecoder> Ptr;
~CommonRtmpDecoder() override {}
/**
     * constructor
     * @param codec codec id
*/
CommonRtmpDecoder(CodecId codec);
    /**
     * return the codec id
     */
    CodecId getCodecId() const override;
    /**
     * input Rtmp and decode it
     * @param rtmp the Rtmp packet
     */
    void inputRtmp(const RtmpPacket::Ptr &rtmp) override;
private:
    void obtainFrame();

private:
    CodecId _codec;
    FrameImp::Ptr _frame;
};
/**
 * generic rtmp encoder
 */
class CommonRtmpEncoder : public CommonRtmpDecoder , public ResourcePoolHelper<RtmpPacket> {
public:
    typedef std::shared_ptr<CommonRtmpEncoder> Ptr;

    CommonRtmpEncoder(const Track::Ptr &track);
    ~CommonRtmpEncoder() override{}

    /**
     * input a frame
     */
    void inputFrame(const Frame::Ptr &frame) override;

private:
    uint8_t _audio_flv_flags = 0;
};
}//namespace mediakit

#endif //ZLMEDIAKIT_COMMONRTMP_H
/*
* Copyright (c) 2016 The ZLMediaKit project authors. All Rights Reserved.
*
* This file is part of ZLMediaKit(https://github.com/xiongziliang/ZLMediaKit).
*
* Use of this source code is governed by MIT license that can be found in the
* LICENSE file in the root of the source tree. All contributing project authors
* may be found in the AUTHORS file in the root of the source tree.
*/
#include "CommonRtp.h"
CommonRtpDecoder::CommonRtpDecoder(CodecId codec, int max_frame_size ){
_codec = codec;
_max_frame_size = max_frame_size;
obtainFrame();
}
CodecId CommonRtpDecoder::getCodecId() const {
return _codec;
}
void CommonRtpDecoder::obtainFrame() {
_frame = ResourcePoolHelper<FrameImp>::obtainObj();
_frame->_buffer.clear();
_frame->_prefix_size = 0;
_frame->_dts = 0;
_frame->_codec_id = _codec;
}
bool CommonRtpDecoder::inputRtp(const RtpPacket::Ptr &rtp, bool){
auto payload = rtp->data() + rtp->offset;
auto size = rtp->size() - rtp->offset;
if (size <= 0) {
        //no actual payload
return false;
}
if (_frame->_dts != rtp->timeStamp || _frame->_buffer.size() > _max_frame_size) {
        //the timestamp changed or the buffer exceeds max_frame_size, so flush the previous frame
if (!_frame->_buffer.empty()) {
            //there is a valid frame, output it
RtpCodec::inputFrame(_frame);
}
        //start a new frame
obtainFrame();
_frame->_dts = rtp->timeStamp;
_drop_flag = false;
} else if (_last_seq != 0 && (uint16_t)(_last_seq + 1) != rtp->sequence) {
        //timestamp unchanged but seq is not continuous: rtp packets were lost in between, so the whole frame must be dropped
WarnL << "rtp丢包:" << _last_seq << " -> " << rtp->sequence;
_drop_flag = true;
_frame->_buffer.clear();
}
if (!_drop_flag) {
_frame->_buffer.append(payload, size);
}
_last_seq = rtp->sequence;
return false;
}
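The loss check above compares `(uint16_t)(_last_seq + 1)` against the incoming sequence number so that the 65535 → 0 wraparound still counts as continuous, while `_last_seq != 0` serves as a "no packet seen yet" sentinel. The same logic in isolation (`rtpSeqLost` is an illustrative helper, not the project's API):

```cpp
#include <cstdint>

//sketch of the sequence-gap check in CommonRtpDecoder::inputRtp():
//casting last_seq + 1 back to uint16_t makes the 65535 -> 0 wraparound
//continuous, so only a real gap reports loss
static bool rtpSeqLost(uint16_t last_seq, uint16_t seq) {
    return last_seq != 0 && (uint16_t) (last_seq + 1) != seq;
}
```

When a gap is detected the decoder sets `_drop_flag` and clears the partial frame, so a frame with a missing middle packet is never emitted.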
////////////////////////////////////////////////////////////////
CommonRtpEncoder::CommonRtpEncoder(CodecId codec, uint32_t ssrc, uint32_t mtu_size,
uint32_t sample_rate, uint8_t payload_type, uint8_t interleaved)
: CommonRtpDecoder(codec), RtpInfo(ssrc, mtu_size, sample_rate, payload_type, interleaved) {
}
void CommonRtpEncoder::inputFrame(const Frame::Ptr &frame){
GET_CONFIG(uint32_t, cycleMS, Rtp::kCycleMS);
auto stamp = frame->dts() % cycleMS;
auto ptr = frame->data() + frame->prefixSize();
auto len = frame->size() - frame->prefixSize();
auto remain_size = len;
const auto max_rtp_size = _ui32MtuSize - 20;
while (remain_size > 0) {
auto rtp_size = remain_size > max_rtp_size ? max_rtp_size : remain_size;
RtpCodec::inputRtp(makeRtp(getTrackType(), ptr, rtp_size, false, stamp), false);
ptr += rtp_size;
remain_size -= rtp_size;
}
}
\ No newline at end of file
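CommonRtpEncoder::inputFrame() above cuts the frame payload into pieces of at most `_ui32MtuSize - 20` bytes (20 bytes of headroom reserved for headers), each becoming one rtp packet with the same timestamp. The chunking arithmetic on its own (`splitForMtu` is an illustrative helper, not the project's API):

```cpp
#include <cstddef>
#include <vector>

//sketch of the fragmentation loop in CommonRtpEncoder::inputFrame():
//the payload is split into chunks of at most mtu_size - 20 bytes
static std::vector<size_t> splitForMtu(size_t payload_len, size_t mtu_size) {
    std::vector<size_t> chunks;
    size_t max_rtp_size = mtu_size - 20;
    size_t remain = payload_len;
    while (remain > 0) {
        size_t rtp_size = remain > max_rtp_size ? max_rtp_size : remain;
        chunks.push_back(rtp_size);
        remain -= rtp_size;
    }
    return chunks;
}
```

Only the final fragment is smaller than the limit; for audio codecs one frame usually fits into a single rtp packet.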
/*
* Copyright (c) 2016 The ZLMediaKit project authors. All Rights Reserved.
*
* This file is part of ZLMediaKit(https://github.com/xiongziliang/ZLMediaKit).
*
* Use of this source code is governed by MIT license that can be found in the
* LICENSE file in the root of the source tree. All contributing project authors
* may be found in the AUTHORS file in the root of the source tree.
*/
#ifndef ZLMEDIAKIT_COMMONRTP_H
#define ZLMEDIAKIT_COMMONRTP_H
#include "Frame.h"
#include "Rtsp/RtpCodec.h"
namespace mediakit{
/**
 * generic rtp decoder class
*/
class CommonRtpDecoder : public RtpCodec, public ResourcePoolHelper<FrameImp> {
public:
typedef std::shared_ptr <CommonRtpDecoder> Ptr;
~CommonRtpDecoder() override {}
/**
     * constructor
     * @param codec codec id
     * @param max_frame_size maximum allowed frame size
*/
CommonRtpDecoder(CodecId codec, int max_frame_size = 2 * 1024);
/**
     * return the codec id
*/
CodecId getCodecId() const override;
/**
     * input rtp and decode it
     * @param rtp the rtp packet
     * @param key_pos ignored; forced to false internally
*/
bool inputRtp(const RtpPacket::Ptr &rtp, bool key_pos = false) override;
private:
void obtainFrame();
private:
bool _drop_flag = false;
uint16_t _last_seq = 0;
int _max_frame_size;
CodecId _codec;
FrameImp::Ptr _frame;
};
/**
 * generic rtp encoder class
*/
class CommonRtpEncoder : public CommonRtpDecoder, public RtpInfo {
public:
typedef std::shared_ptr <CommonRtpEncoder> Ptr;
~CommonRtpEncoder() override {}
/**
     * constructor
     * @param codec codec id
     * @param ssrc ssrc
     * @param mtu_size mtu size
     * @param sample_rate sample rate
     * @param payload_type rtp payload type
     * @param interleaved rtsp interleaved channel
*/
CommonRtpEncoder(CodecId codec, uint32_t ssrc, uint32_t mtu_size, uint32_t sample_rate, uint8_t payload_type, uint8_t interleaved);
/**
     * input frame data and encode it into rtp
*/
void inputFrame(const Frame::Ptr &frame) override;
};
}//namespace mediakit
#endif //ZLMEDIAKIT_COMMONRTP_H
...@@ -13,11 +13,13 @@ ...@@ -13,11 +13,13 @@
#include "H264Rtmp.h"
#include "H265Rtmp.h"
#include "AACRtmp.h"
#include "CommonRtmp.h"
#include "H264Rtp.h"
#include "AACRtp.h"
#include "H265Rtp.h"
#include "CommonRtp.h"
#include "Opus.h"
#include "G711.h"
#include "Common/Parser.h"

namespace mediakit{
...@@ -42,6 +44,10 @@ Track::Ptr Factory::getTrackBySdp(const SdpTrack::Ptr &track) { ...@@ -42,6 +44,10 @@ Track::Ptr Factory::getTrackBySdp(const SdpTrack::Ptr &track) {
        return std::make_shared<AACTrack>(aac_cfg);
    }
if (strcasecmp(track->_codec.data(), "opus") == 0) {
return std::make_shared<OpusTrack>();
}
    if (strcasecmp(track->_codec.data(), "PCMA") == 0) {
        return std::make_shared<G711Track>(CodecG711A, track->_samplerate, track->_channel, 16);
    }
...@@ -114,11 +120,12 @@ RtpCodec::Ptr Factory::getRtpEncoderBySdp(const Sdp::Ptr &sdp) { ...@@ -114,11 +120,12 @@ RtpCodec::Ptr Factory::getRtpEncoderBySdp(const Sdp::Ptr &sdp) {
    auto interleaved = sdp->getTrackType() * 2;
    auto codec_id = sdp->getCodecId();
    switch (codec_id){
        case CodecH264 : return std::make_shared<H264RtpEncoder>(ssrc, mtu, sample_rate, pt, interleaved);
        case CodecH265 : return std::make_shared<H265RtpEncoder>(ssrc, mtu, sample_rate, pt, interleaved);
        case CodecAAC : return std::make_shared<AACRtpEncoder>(ssrc, mtu, sample_rate, pt, interleaved);
        case CodecOpus :
        case CodecG711A :
        case CodecG711U : return std::make_shared<CommonRtpEncoder>(codec_id, ssrc, mtu, sample_rate, pt, interleaved);
        default : WarnL << "暂不支持该CodecId:" << codec_id; return nullptr;
    }
}
...@@ -128,8 +135,9 @@ RtpCodec::Ptr Factory::getRtpDecoderByTrack(const Track::Ptr &track) { ...@@ -128,8 +135,9 @@ RtpCodec::Ptr Factory::getRtpDecoderByTrack(const Track::Ptr &track) {
        case CodecH264 : return std::make_shared<H264RtpDecoder>();
        case CodecH265 : return std::make_shared<H265RtpDecoder>();
        case CodecAAC : return std::make_shared<AACRtpDecoder>(track->clone());
        case CodecOpus :
        case CodecG711A :
        case CodecG711U : return std::make_shared<CommonRtpDecoder>(track->getCodecId());
        default : WarnL << "暂不支持该CodecId:" << track->getCodecName(); return nullptr;
    }
}
...@@ -137,40 +145,35 @@ RtpCodec::Ptr Factory::getRtpDecoderByTrack(const Track::Ptr &track) { ...@@ -137,40 +145,35 @@ RtpCodec::Ptr Factory::getRtpDecoderByTrack(const Track::Ptr &track) {
/////////////////////////////rtmp related///////////////////////////////////////////

static CodecId getVideoCodecIdByAmf(const AMFValue &val){
    if (val.type() == AMF_STRING) {
        auto str = val.as_string();
        if (str == "avc1") {
            return CodecH264;
        }
        if (str == "hev1" || str == "hvc1") {
            return CodecH265;
        }
        WarnL << "暂不支持该视频Amf:" << str;
        return CodecInvalid;
    }

    if (val.type() != AMF_NULL) {
        auto type_id = val.as_integer();
        switch (type_id) {
            case FLV_CODEC_H264 : return CodecH264;
            case FLV_CODEC_H265 : return CodecH265;
            default : WarnL << "暂不支持该视频Amf:" << type_id; return CodecInvalid;
        }
    }
    return CodecInvalid;
}
Track::Ptr getTrackByCodecId(CodecId codecId, int sample_rate = 0, int channels = 0, int sample_bit = 0) { Track::Ptr getTrackByCodecId(CodecId codecId, int sample_rate = 0, int channels = 0, int sample_bit = 0) {
switch (codecId){ switch (codecId){
case CodecH264 : return std::make_shared<H264Track>(); case CodecH264 : return std::make_shared<H264Track>();
case CodecH265 : return std::make_shared<H265Track>(); case CodecH265 : return std::make_shared<H265Track>();
case CodecAAC : return std::make_shared<AACTrack>(); case CodecAAC : return std::make_shared<AACTrack>();
case CodecOpus: return std::make_shared<OpusTrack>();
case CodecG711A : case CodecG711A :
case CodecG711U : return (sample_rate && channels && sample_bit) ? std::make_shared<G711Track>(codecId, sample_rate, channels, sample_bit) : nullptr; case CodecG711U : return (sample_rate && channels && sample_bit) ? std::make_shared<G711Track>(codecId, sample_rate, channels, sample_bit) : nullptr;
default : WarnL << "暂不支持该CodecId:" << codecId; return nullptr; default : WarnL << "暂不支持该CodecId:" << codecId; return nullptr;
...@@ -191,7 +194,7 @@ static CodecId getAudioCodecIdByAmf(const AMFValue &val) { ...@@ -191,7 +194,7 @@ static CodecId getAudioCodecIdByAmf(const AMFValue &val) {
    if (str == "mp4a") {
        return CodecAAC;
    }
    WarnL << "暂不支持该音频Amf:" << str;
    return CodecInvalid;
}
...@@ -201,7 +204,8 @@ static CodecId getAudioCodecIdByAmf(const AMFValue &val) { ...@@ -201,7 +204,8 @@ static CodecId getAudioCodecIdByAmf(const AMFValue &val) {
        case FLV_CODEC_AAC : return CodecAAC;
        case FLV_CODEC_G711A : return CodecG711A;
        case FLV_CODEC_G711U : return CodecG711U;
        case FLV_CODEC_OPUS : return CodecOpus;
        default : WarnL << "暂不支持该音频Amf:" << type_id; return CodecInvalid;
    }
}
...@@ -221,6 +225,7 @@ RtmpCodec::Ptr Factory::getRtmpCodecByTrack(const Track::Ptr &track, bool is_enc ...@@ -221,6 +225,7 @@ RtmpCodec::Ptr Factory::getRtmpCodecByTrack(const Track::Ptr &track, bool is_enc
        case CodecH264 : return std::make_shared<H264RtmpEncoder>(track);
        case CodecAAC : return std::make_shared<AACRtmpEncoder>(track);
        case CodecH265 : return std::make_shared<H265RtmpEncoder>(track);
        case CodecOpus : return std::make_shared<CommonRtmpEncoder>(track);
        case CodecG711A :
        case CodecG711U : {
            auto audio_track = dynamic_pointer_cast<AudioTrack>(track);
...@@ -235,7 +240,7 @@ RtmpCodec::Ptr Factory::getRtmpCodecByTrack(const Track::Ptr &track, bool is_enc ...@@ -235,7 +240,7 @@ RtmpCodec::Ptr Factory::getRtmpCodecByTrack(const Track::Ptr &track, bool is_enc
                     << ",该音频已被忽略";
                return nullptr;
            }
            return std::make_shared<CommonRtmpEncoder>(track);
        }
        default : WarnL << "暂不支持该CodecId:" << track->getCodecName(); return nullptr;
    }
...@@ -248,6 +253,7 @@ AMFValue Factory::getAmfByCodecId(CodecId codecId) { ...@@ -248,6 +253,7 @@ AMFValue Factory::getAmfByCodecId(CodecId codecId) {
        case CodecH265: return AMFValue(FLV_CODEC_H265);
        case CodecG711A: return AMFValue(FLV_CODEC_G711A);
        case CodecG711U: return AMFValue(FLV_CODEC_G711U);
        case CodecOpus: return AMFValue(FLV_CODEC_OPUS);
        default: return AMFValue(AMF_NULL);
    }
}
......
...@@ -35,12 +35,12 @@ public: ...@@ -35,12 +35,12 @@ public:
        _dts = frame->dts();
        _pts = frame->pts();
        _prefix_size = frame->prefixSize();
        _codec_id = frame->getCodecId();
        _key = frame->keyFrame();
        _config = frame->configFrame();
    }

    ~FrameCacheAble() override = default;
    /**
     * can be cached
...@@ -49,10 +49,6 @@ public: ...@@ -49,10 +49,6 @@ public:
return true; return true;
} }
    bool keyFrame() const override{
        return _key;
    }
...@@ -60,10 +56,10 @@ public: ...@@ -60,10 +56,10 @@ public:
    bool configFrame() const override{
        return _config;
    }

private:
    Frame::Ptr _frame;
    BufferRaw::Ptr _buffer;
    bool _key;
    bool _config;
};
......
...@@ -148,7 +148,7 @@ public: ...@@ -148,7 +148,7 @@ public:
} }
    CodecId getCodecId() const override{
        return _codec_id;
    }
    bool keyFrame() const override {
...@@ -160,7 +160,7 @@ public: ...@@ -160,7 +160,7 @@ public:
    }

public:
    CodecId _codec_id = CodecInvalid;
    string _buffer;
    uint32_t _dts = 0;
    uint32_t _pts = 0;
...@@ -314,9 +314,24 @@ private: ...@@ -314,9 +314,24 @@ private:
class FrameFromPtr : public Frame{
public:
    typedef std::shared_ptr<FrameFromPtr> Ptr;
FrameFromPtr(CodecId codec_id, char *ptr, uint32_t size, uint32_t dts, uint32_t pts = 0, int prefix_size = 0)
: FrameFromPtr(ptr, size, dts, pts, prefix_size) {
_codec_id = codec_id;
}
FrameFromPtr(char *ptr, uint32_t size, uint32_t dts, uint32_t pts = 0, int prefix_size = 0){
_ptr = ptr;
_size = size;
_dts = dts;
_pts = pts;
_prefix_size = prefix_size;
}
    char *data() const override{
        return _ptr;
    }

    uint32_t size() const override {
        return _size;
    }
...@@ -336,12 +351,80 @@ public: ...@@ -336,12 +351,80 @@ public:
    bool cacheAble() const override {
        return false;
    }
CodecId getCodecId() const override {
if (_codec_id == CodecInvalid) {
throw std::invalid_argument("FrameFromPtr对象未设置codec类型");
}
return _codec_id;
}
void setCodecId(CodecId codec_id) {
_codec_id = codec_id;
}
bool keyFrame() const override {
return false;
}
bool configFrame() const override{
return false;
}
protected:
FrameFromPtr() {}
protected: protected:
char *_ptr; char *_ptr;
uint32_t _size; uint32_t _size;
uint32_t _dts; uint32_t _dts;
uint32_t _pts = 0; uint32_t _pts = 0;
uint32_t _prefix_size; uint32_t _prefix_size;
CodecId _codec_id = CodecInvalid;
};
/**
 * This object turns a Buffer object into a cacheable Frame object
*/
template <typename Parent>
class FrameWrapper : public Parent{
public:
~FrameWrapper() = default;
/**
     * construct a frame
     * @param buf data buffer
     * @param dts decoding timestamp
     * @param pts presentation timestamp
     * @param prefix frame prefix length
     * @param offset offset of the valid data in the buffer
*/
FrameWrapper(const Buffer::Ptr &buf, int64_t dts, int64_t pts, int prefix, int offset) : Parent(buf->data() + offset, buf->size() - offset, dts, pts, prefix){
_buf = buf;
}
/**
     * construct a frame
     * @param buf data buffer
     * @param dts decoding timestamp
     * @param pts presentation timestamp
     * @param prefix frame prefix length
     * @param offset offset of the valid data in the buffer
     * @param codec codec type
*/
FrameWrapper(const Buffer::Ptr &buf, int64_t dts, int64_t pts, int prefix, int offset, CodecId codec) : Parent(codec, buf->data() + offset, buf->size() - offset, dts, pts, prefix){
_buf = buf;
}
/**
     * this frame can be cached
*/
bool cacheAble() const override {
return true;
}
private:
Buffer::Ptr _buf;
}; };
}//namespace mediakit
......
...@@ -17,47 +17,6 @@ ...@@ -17,47 +17,6 @@
namespace mediakit{
/** /**
* G711帧
*/
class G711Frame : public FrameImp {
public:
G711Frame(){
_codecid = CodecG711A;
}
};
class G711FrameNoCacheAble : public FrameFromPtr {
public:
typedef std::shared_ptr<G711FrameNoCacheAble> Ptr;
G711FrameNoCacheAble(char *ptr,uint32_t size,uint32_t dts, uint32_t pts = 0,int prefix_size = 0){
_ptr = ptr;
_size = size;
_dts = dts;
_prefix_size = prefix_size;
}
void setCodec(CodecId codecId){
_codecId = codecId;
}
CodecId getCodecId() const override{
return _codecId;
}
bool keyFrame() const override {
return false;
}
bool configFrame() const override{
return false;
}
private:
CodecId _codecId;
};
/**
 * G711 audio track
 */
class G711Track : public AudioTrackImp{
......
/*
* Copyright (c) 2016 The ZLMediaKit project authors. All Rights Reserved.
*
* This file is part of ZLMediaKit(https://github.com/xiongziliang/ZLMediaKit).
*
* Use of this source code is governed by MIT license that can be found in the
* LICENSE file in the root of the source tree. All contributing project authors
* may be found in the AUTHORS file in the root of the source tree.
*/
#include "G711Rtmp.h"
namespace mediakit{
G711RtmpDecoder::G711RtmpDecoder(CodecId codecId) {
_frame = obtainFrame();
_codecId = codecId;
}
G711Frame::Ptr G711RtmpDecoder::obtainFrame() {
//从缓存池重新申请对象,防止覆盖已经写入环形缓存的对象
auto frame = ResourcePoolHelper<G711Frame>::obtainObj();
frame->_buffer.clear();
frame->_codecid = _codecId;
return frame;
}
bool G711RtmpDecoder::inputRtmp(const RtmpPacket::Ptr &pkt, bool) {
//拷贝G711负载
_frame->_buffer.assign(pkt->strBuf.data() + 1, pkt->strBuf.size() - 1);
_frame->_dts = pkt->timeStamp;
//写入环形缓存
RtmpCodec::inputFrame(_frame);
_frame = obtainFrame();
return false;
}
/////////////////////////////////////////////////////////////////////////////////////
G711RtmpEncoder::G711RtmpEncoder(const Track::Ptr &track) : G711RtmpDecoder(track->getCodecId()) {
_audio_flv_flags = getAudioRtmpFlags(track);
}
void G711RtmpEncoder::inputFrame(const Frame::Ptr &frame) {
if(!_audio_flv_flags){
return;
}
RtmpPacket::Ptr rtmpPkt = ResourcePoolHelper<RtmpPacket>::obtainObj();
rtmpPkt->strBuf.clear();
//header
rtmpPkt->strBuf.push_back(_audio_flv_flags);
//g711 data
rtmpPkt->strBuf.append(frame->data() + frame->prefixSize(), frame->size() - frame->prefixSize());
rtmpPkt->bodySize = rtmpPkt->strBuf.size();
rtmpPkt->chunkId = CHUNK_AUDIO;
rtmpPkt->streamId = STREAM_MEDIA;
rtmpPkt->timeStamp = frame->dts();
rtmpPkt->typeId = MSG_AUDIO;
RtmpCodec::inputRtmp(rtmpPkt, false);
}
}//namespace mediakit
\ No newline at end of file
/*
* Copyright (c) 2016 The ZLMediaKit project authors. All Rights Reserved.
*
* This file is part of ZLMediaKit(https://github.com/xiongziliang/ZLMediaKit).
*
* Use of this source code is governed by MIT license that can be found in the
* LICENSE file in the root of the source tree. All contributing project authors
* may be found in the AUTHORS file in the root of the source tree.
*/
#include "G711Rtp.h"
namespace mediakit{
G711RtpDecoder::G711RtpDecoder(CodecId codecid){
_codecid = codecid;
_frame = obtainFrame();
}
G711Frame::Ptr G711RtpDecoder::obtainFrame() {
//从缓存池重新申请对象,防止覆盖已经写入环形缓存的对象
auto frame = ResourcePoolHelper<G711Frame>::obtainObj();
frame->_buffer.clear();
frame->_codecid = _codecid;
frame->_dts = 0;
return frame;
}
bool G711RtpDecoder::inputRtp(const RtpPacket::Ptr &rtppack, bool) {
// 获取rtp数据长度
int length = rtppack->size() - rtppack->offset;
// 获取rtp数据
const char *rtp_packet_buf = rtppack->data() + rtppack->offset;
if (rtppack->timeStamp != _frame->_dts) {
//时间戳变更,清空上一帧
onGetG711(_frame);
}
//追加数据
_frame->_buffer.append(rtp_packet_buf, length);
//赋值时间戳
_frame->_dts = rtppack->timeStamp;
if (rtppack->mark || _frame->_buffer.size() > 10 * 1024) {
//标记为mark时,或者内存快溢出时,我们认为这是该帧最后一个包
onGetG711(_frame);
}
return false;
}
void G711RtpDecoder::onGetG711(const G711Frame::Ptr &frame) {
if(!frame->_buffer.empty()){
//写入环形缓存
RtpCodec::inputFrame(frame);
_frame = obtainFrame();
}
}
/////////////////////////////////////////////////////////////////////////////////////
G711RtpEncoder::G711RtpEncoder(CodecId codecid, uint32_t ui32Ssrc, uint32_t ui32MtuSize,
uint32_t ui32SampleRate, uint8_t ui8PayloadType, uint8_t ui8Interleaved) :
G711RtpDecoder(codecid),
RtpInfo(ui32Ssrc, ui32MtuSize, ui32SampleRate, ui8PayloadType, ui8Interleaved) {
}
void G711RtpEncoder::inputFrame(const Frame::Ptr &frame) {
GET_CONFIG(uint32_t, cycleMS, Rtp::kCycleMS);
auto uiStamp = frame->dts();
auto pcData = frame->data() + frame->prefixSize();
auto iLen = frame->size() - frame->prefixSize();
uiStamp %= cycleMS;
char *ptr = (char *) pcData;
int iSize = iLen;
while (iSize > 0) {
if (iSize <= _ui32MtuSize - 20) {
makeG711Rtp(ptr, iSize, true, uiStamp);
break;
}
makeG711Rtp(ptr, _ui32MtuSize - 20, false, uiStamp);
ptr += (_ui32MtuSize - 20);
iSize -= (_ui32MtuSize - 20);
}
}
void G711RtpEncoder::makeG711Rtp(const void *data, unsigned int len, bool mark, uint32_t uiStamp) {
RtpCodec::inputRtp(makeRtp(getTrackType(), data, len, mark, uiStamp), false);
}
}//namespace mediakit
/*
* Copyright (c) 2016 The ZLMediaKit project authors. All Rights Reserved.
*
* This file is part of ZLMediaKit(https://github.com/xiongziliang/ZLMediaKit).
*
* Use of this source code is governed by MIT license that can be found in the
* LICENSE file in the root of the source tree. All contributing project authors
* may be found in the AUTHORS file in the root of the source tree.
*/
#ifndef ZLMEDIAKIT_G711RTPCODEC_H
#define ZLMEDIAKIT_G711RTPCODEC_H
#include "Rtsp/RtpCodec.h"
#include "Extension/G711.h"
namespace mediakit{
/**
* rtp转G711类
*/
class G711RtpDecoder : public RtpCodec , public ResourcePoolHelper<G711Frame> {
public:
typedef std::shared_ptr<G711RtpDecoder> Ptr;
G711RtpDecoder(CodecId codecid);
~G711RtpDecoder() {}
/**
* 输入rtp并解码
* @param rtp rtp数据包
* @param key_pos 此参数内部强制转换为false,请忽略之
*/
bool inputRtp(const RtpPacket::Ptr &rtp, bool key_pos = false) override;
CodecId getCodecId() const override{
return _codecid;
}
private:
void onGetG711(const G711Frame::Ptr &frame);
G711Frame::Ptr obtainFrame();
private:
G711Frame::Ptr _frame;
CodecId _codecid;
};
/**
* g711 转rtp类
*/
class G711RtpEncoder : public G711RtpDecoder , public RtpInfo {
public:
typedef std::shared_ptr<G711RtpEncoder> Ptr;
/**
 * @param ui32Ssrc ssrc
 * @param ui32MtuSize MTU size
 * @param ui32SampleRate sample rate
 * @param ui8PayloadType RTP payload type
 * @param ui8Interleaved RTSP interleaved channel id
 */
G711RtpEncoder(CodecId codecid,
uint32_t ui32Ssrc,
uint32_t ui32MtuSize,
uint32_t ui32SampleRate,
uint8_t ui8PayloadType = 0,
uint8_t ui8Interleaved = TrackAudio * 2);
~G711RtpEncoder() {}
/**
 * @param frame G711 data
 */
void inputFrame(const Frame::Ptr &frame) override;
private:
void makeG711Rtp(const void *pData, unsigned int uiLen, bool bMark, uint32_t uiStamp);
};
}//namespace mediakit
#endif //ZLMEDIAKIT_G711RTPCODEC_H
@@ -30,14 +30,16 @@ public:
     typedef std::shared_ptr<H264Frame> Ptr;

     typedef enum {
-        NAL_SPS = 7,
-        NAL_PPS = 8,
         NAL_IDR = 5,
         NAL_SEI = 6,
+        NAL_SPS = 7,
+        NAL_PPS = 8,
+        NAL_AUD = 9,
+        NAL_B_P = 1,
     } NalType;

     H264Frame(){
-        _codecid = CodecH264;
+        _codec_id = CodecH264;
     }

     bool keyFrame() const override {
@@ -68,10 +70,7 @@ public:
         _dts = dts;
         _pts = pts;
         _prefix_size = prefix_size;
-    }
-
-    CodecId getCodecId() const override{
-        return CodecH264;
+        _codec_id = CodecH264;
     }

     bool keyFrame() const override {
@@ -182,8 +181,8 @@ public:
      */
     void inputFrame(const Frame::Ptr &frame) override{
         int type = H264_TYPE(*((uint8_t *)frame->data() + frame->prefixSize()));
-        if(type == H264Frame::NAL_SPS || type == H264Frame::NAL_SEI){
-            //some devices pack SPS, PPS and IDR NAL units into a single frame, so split them
+        if(type != H264Frame::NAL_B_P && type != H264Frame::NAL_IDR){
+            //not an I/B/P frame: split it in case several NAL units are glued together
             splitH264(frame->data(), frame->size(), frame->prefixSize(), [&](const char *ptr, int len, int prefix) {
                 H264FrameInternal::Ptr sub_frame = std::make_shared<H264FrameInternal>(frame, (char *)ptr, len, prefix);
                 inputFrame_l(sub_frame);
@@ -227,6 +226,10 @@ private:
                 VideoTrack::inputFrame(frame);
             }
             break;
+            case H264Frame::NAL_AUD:{
+                //ignore AUD frames
+            }
+            break;
             default:
                 VideoTrack::inputFrame(frame);
......
@@ -23,10 +23,6 @@ H264Frame::Ptr H264RtmpDecoder::obtainFrame() {
     return frame;
 }

-bool H264RtmpDecoder::inputRtmp(const RtmpPacket::Ptr &rtmp, bool key_pos) {
-    return decodeRtmp(rtmp);
-}
-
 /**
  * get the sps without the 0x00 00 00 01 prefix
  * @return
@@ -39,18 +35,18 @@ static string getH264SPS(const RtmpPacket &thiz) {
     if (!thiz.isCfgFrame()) {
         return ret;
     }
-    if (thiz.strBuf.size() < 13) {
+    if (thiz.buffer.size() < 13) {
         WarnL << "bad H264 cfg!";
         return ret;
     }
     uint16_t sps_size ;
-    memcpy(&sps_size, thiz.strBuf.data() + 11,2);
+    memcpy(&sps_size, thiz.buffer.data() + 11, 2);
     sps_size = ntohs(sps_size);
-    if ((int) thiz.strBuf.size() < 13 + sps_size) {
+    if ((int) thiz.buffer.size() < 13 + sps_size) {
         WarnL << "bad H264 cfg!";
         return ret;
     }
-    ret.assign(thiz.strBuf.data() + 13, sps_size);
+    ret.assign(thiz.buffer.data() + 13, sps_size);
     return ret;
 }
@@ -66,60 +62,59 @@ static string getH264PPS(const RtmpPacket &thiz) {
     if (!thiz.isCfgFrame()) {
         return ret;
     }
-    if (thiz.strBuf.size() < 13) {
+    if (thiz.buffer.size() < 13) {
         WarnL << "bad H264 cfg!";
         return ret;
     }
     uint16_t sps_size ;
-    memcpy(&sps_size,thiz.strBuf.data() + 11,2);
+    memcpy(&sps_size, thiz.buffer.data() + 11, 2);
     sps_size = ntohs(sps_size);
-    if ((int) thiz.strBuf.size() < 13 + sps_size + 1 + 2) {
+    if ((int) thiz.buffer.size() < 13 + sps_size + 1 + 2) {
         WarnL << "bad H264 cfg!";
         return ret;
     }
     uint16_t pps_size ;
-    memcpy(&pps_size, thiz.strBuf.data() + 13 + sps_size + 1,2);
+    memcpy(&pps_size, thiz.buffer.data() + 13 + sps_size + 1, 2);
     pps_size = ntohs(pps_size);
-    if ((int) thiz.strBuf.size() < 13 + sps_size + 1 + 2 + pps_size) {
+    if ((int) thiz.buffer.size() < 13 + sps_size + 1 + 2 + pps_size) {
         WarnL << "bad H264 cfg!";
         return ret;
     }
-    ret.assign(thiz.strBuf.data() + 13 + sps_size + 1 + 2, pps_size);
+    ret.assign(thiz.buffer.data() + 13 + sps_size + 1 + 2, pps_size);
     return ret;
 }

-bool H264RtmpDecoder::decodeRtmp(const RtmpPacket::Ptr &pkt) {
+void H264RtmpDecoder::inputRtmp(const RtmpPacket::Ptr &pkt) {
     if (pkt->isCfgFrame()) {
         //cache sps and pps, they will be inserted before the next I-frame
         _sps = getH264SPS(*pkt);
         _pps = getH264PPS(*pkt);
-        onGetH264(_sps.data(), _sps.size(), pkt->timeStamp , pkt->timeStamp);
-        onGetH264(_pps.data(), _pps.size(), pkt->timeStamp , pkt->timeStamp);
-        return false;
+        onGetH264(_sps.data(), _sps.size(), pkt->time_stamp , pkt->time_stamp);
+        onGetH264(_pps.data(), _pps.size(), pkt->time_stamp , pkt->time_stamp);
+        return;
     }

-    if (pkt->strBuf.size() > 9) {
-        uint32_t iTotalLen = pkt->strBuf.size();
+    if (pkt->buffer.size() > 9) {
+        uint32_t iTotalLen = pkt->buffer.size();
         uint32_t iOffset = 5;
-        uint8_t *cts_ptr = (uint8_t *) (pkt->strBuf.data() + 2);
+        uint8_t *cts_ptr = (uint8_t *) (pkt->buffer.data() + 2);
         int32_t cts = (((cts_ptr[0] << 16) | (cts_ptr[1] << 8) | (cts_ptr[2])) + 0xff800000) ^ 0xff800000;
-        auto pts = pkt->timeStamp + cts;
+        auto pts = pkt->time_stamp + cts;
         while(iOffset + 4 < iTotalLen){
             uint32_t iFrameLen;
-            memcpy(&iFrameLen, pkt->strBuf.data() + iOffset, 4);
+            memcpy(&iFrameLen, pkt->buffer.data() + iOffset, 4);
             iFrameLen = ntohl(iFrameLen);
             iOffset += 4;
             if(iFrameLen + iOffset > iTotalLen){
                 break;
             }
-            onGetH264(pkt->strBuf.data() + iOffset, iFrameLen, pkt->timeStamp , pts);
+            onGetH264(pkt->buffer.data() + iOffset, iFrameLen, pkt->time_stamp , pts);
             iOffset += iFrameLen;
         }
     }
-    return pkt->isVideoKeyFrame();
 }
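The decoder above reads the composition-time offset (CTS = PTS - DTS) as a signed 24-bit big-endian field from the FLV video tag and sign-extends it with `(x + 0xff800000) ^ 0xff800000`. The same trick in isolation, as a minimal sketch (the helper name is hypothetical):

```cpp
#include <cassert>
#include <cstdint>

// Sign-extend a 24-bit value to 32 bits using the expression from the
// decoder: adding 0xff800000 makes bit 23 carry into the high byte for
// negative values, and the XOR restores the low bits.
static int32_t sign_extend_cts24(uint32_t u24) {
    return (int32_t)((u24 + 0xff800000u) ^ 0xff800000u);
}
```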
 inline void H264RtmpDecoder::onGetH264(const char* pcData, int iLen, uint32_t dts,uint32_t pts) {
@@ -190,8 +185,8 @@ void H264RtmpEncoder::inputFrame(const Frame::Ptr &frame) {
         }
     }

-    if(_lastPacket && _lastPacket->timeStamp != frame->dts()) {
-        RtmpCodec::inputRtmp(_lastPacket, _lastPacket->isVideoKeyFrame());
+    if(_lastPacket && _lastPacket->time_stamp != frame->dts()) {
+        RtmpCodec::inputRtmp(_lastPacket);
         _lastPacket = nullptr;
     }
@@ -202,23 +197,23 @@ void H264RtmpEncoder::inputFrame(const Frame::Ptr &frame) {
         flags |= (((frame->configFrame() || frame->keyFrame()) ? FLV_KEY_FRAME : FLV_INTER_FRAME) << 4);
         _lastPacket = ResourcePoolHelper<RtmpPacket>::obtainObj();
-        _lastPacket->strBuf.clear();
-        _lastPacket->strBuf.push_back(flags);
-        _lastPacket->strBuf.push_back(!is_config);
+        _lastPacket->buffer.clear();
+        _lastPacket->buffer.push_back(flags);
+        _lastPacket->buffer.push_back(!is_config);
         auto cts = frame->pts() - frame->dts();
         cts = htonl(cts);
-        _lastPacket->strBuf.append((char *)&cts + 1, 3);
+        _lastPacket->buffer.append((char *)&cts + 1, 3);

-        _lastPacket->chunkId = CHUNK_VIDEO;
-        _lastPacket->streamId = STREAM_MEDIA;
-        _lastPacket->timeStamp = frame->dts();
-        _lastPacket->typeId = MSG_VIDEO;
+        _lastPacket->chunk_id = CHUNK_VIDEO;
+        _lastPacket->stream_index = STREAM_MEDIA;
+        _lastPacket->time_stamp = frame->dts();
+        _lastPacket->type_id = MSG_VIDEO;
     }
     auto size = htonl(iLen);
-    _lastPacket->strBuf.append((char *) &size, 4);
-    _lastPacket->strBuf.append(pcData, iLen);
-    _lastPacket->bodySize = _lastPacket->strBuf.size();
+    _lastPacket->buffer.append((char *) &size, 4);
+    _lastPacket->buffer.append(pcData, iLen);
+    _lastPacket->body_size = _lastPacket->buffer.size();
 }

 void H264RtmpEncoder::makeVideoConfigPkt() {
@@ -227,39 +222,39 @@ void H264RtmpEncoder::makeVideoConfigPkt() {
     bool is_config = true;

     RtmpPacket::Ptr rtmpPkt = ResourcePoolHelper<RtmpPacket>::obtainObj();
-    rtmpPkt->strBuf.clear();
+    rtmpPkt->buffer.clear();

     //header
-    rtmpPkt->strBuf.push_back(flags);
-    rtmpPkt->strBuf.push_back(!is_config);
+    rtmpPkt->buffer.push_back(flags);
+    rtmpPkt->buffer.push_back(!is_config);
     //cts
-    rtmpPkt->strBuf.append("\x0\x0\x0", 3);
+    rtmpPkt->buffer.append("\x0\x0\x0", 3);

     //AVCDecoderConfigurationRecord start
-    rtmpPkt->strBuf.push_back(1); // version
-    rtmpPkt->strBuf.push_back(_sps[1]); // profile
-    rtmpPkt->strBuf.push_back(_sps[2]); // compat
-    rtmpPkt->strBuf.push_back(_sps[3]); // level
-    rtmpPkt->strBuf.push_back(0xff); // 6 bits reserved + 2 bits nal size length - 1 (11)
-    rtmpPkt->strBuf.push_back(0xe1); // 3 bits reserved + 5 bits number of sps (00001)
+    rtmpPkt->buffer.push_back(1); // version
+    rtmpPkt->buffer.push_back(_sps[1]); // profile
+    rtmpPkt->buffer.push_back(_sps[2]); // compat
+    rtmpPkt->buffer.push_back(_sps[3]); // level
+    rtmpPkt->buffer.push_back(0xff); // 6 bits reserved + 2 bits nal size length - 1 (11)
+    rtmpPkt->buffer.push_back(0xe1); // 3 bits reserved + 5 bits number of sps (00001)
     //sps
     uint16_t size = _sps.size();
     size = htons(size);
-    rtmpPkt->strBuf.append((char *) &size, 2);
-    rtmpPkt->strBuf.append(_sps);
+    rtmpPkt->buffer.append((char *) &size, 2);
+    rtmpPkt->buffer.append(_sps);
     //pps
-    rtmpPkt->strBuf.push_back(1); // version
+    rtmpPkt->buffer.push_back(1); // version
     size = _pps.size();
     size = htons(size);
-    rtmpPkt->strBuf.append((char *) &size, 2);
-    rtmpPkt->strBuf.append(_pps);
+    rtmpPkt->buffer.append((char *) &size, 2);
+    rtmpPkt->buffer.append(_pps);

-    rtmpPkt->bodySize = rtmpPkt->strBuf.size();
-    rtmpPkt->chunkId = CHUNK_VIDEO;
-    rtmpPkt->streamId = STREAM_MEDIA;
-    rtmpPkt->timeStamp = 0;
-    rtmpPkt->typeId = MSG_VIDEO;
-    RtmpCodec::inputRtmp(rtmpPkt, false);
+    rtmpPkt->body_size = rtmpPkt->buffer.size();
+    rtmpPkt->chunk_id = CHUNK_VIDEO;
+    rtmpPkt->stream_index = STREAM_MEDIA;
+    rtmpPkt->time_stamp = 0;
+    rtmpPkt->type_id = MSG_VIDEO;
+    RtmpCodec::inputRtmp(rtmpPkt);
 }
 }//namespace mediakit
}//namespace mediakit }//namespace mediakit
...@@ -32,17 +32,17 @@ public: ...@@ -32,17 +32,17 @@ public:
/** /**
* 输入264 Rtmp包 * 输入264 Rtmp包
* @param rtmp Rtmp包 * @param rtmp Rtmp包
* @param key_pos 此参数忽略之
*/ */
bool inputRtmp(const RtmpPacket::Ptr &rtmp, bool key_pos = true) override; void inputRtmp(const RtmpPacket::Ptr &rtmp) override;
CodecId getCodecId() const override{ CodecId getCodecId() const override{
return CodecH264; return CodecH264;
} }
protected: protected:
bool decodeRtmp(const RtmpPacket::Ptr &Rtmp);
void onGetH264(const char *pcData, int iLen, uint32_t dts,uint32_t pts); void onGetH264(const char *pcData, int iLen, uint32_t dts,uint32_t pts);
H264Frame::Ptr obtainFrame(); H264Frame::Ptr obtainFrame();
protected: protected:
H264Frame::Ptr _h264frame; H264Frame::Ptr _h264frame;
string _sps; string _sps;
......
@@ -12,13 +12,6 @@
 namespace mediakit{

-typedef struct {
-    unsigned forbidden_zero_bit :1;
-    unsigned nal_ref_idc :2;
-    unsigned type :5;
-} NALU;
-
 typedef struct {
     unsigned S :1;
     unsigned E :1;
@@ -26,15 +19,6 @@ typedef struct {
     unsigned type :5;
 } FU;

-static bool MakeNalu(uint8_t in, NALU &nal) {
-    nal.forbidden_zero_bit = in >> 7;
-    if (nal.forbidden_zero_bit) {
-        return false;
-    }
-    nal.nal_ref_idc = (in & 0x60) >> 5;
-    nal.type = in & 0x1f;
-    return true;
-}
-
 static bool MakeFU(uint8_t in, FU &fu) {
     fu.S = in >> 7;
     fu.E = (in >> 6) & 0x01;
@@ -86,30 +70,28 @@ bool H264RtpDecoder::decodeRtp(const RtpPacket::Ptr &rtppack) {
      28 FU-A Fragmentation unit 5.8
      29 FU-B Fragmentation unit 5.8
      30-31 undefined -
     */
     const uint8_t *frame = (uint8_t *) rtppack->data() + rtppack->offset;
     int length = rtppack->size() - rtppack->offset;
-    NALU nal;
-    MakeNalu(*frame, nal);
+    int nal_type = *frame & 0x1F;
+    int nal_suffix = *frame & (~0x1F);

-    if (nal.type >= 0 && nal.type < 24) {
+    if (nal_type >= 0 && nal_type < 24) {
         //a full frame
         _h264frame->_buffer.assign("\x0\x0\x0\x1", 4);
         _h264frame->_buffer.append((char *) frame, length);
         _h264frame->_pts = rtppack->timeStamp;
         auto key = _h264frame->keyFrame();
         onGetH264(_h264frame);
         return (key); //i frame
     }

-    switch (nal.type){
+    switch (nal_type){
         case 24:{
             // 24 STAP-A single-time aggregation packet
             bool haveIDR = false;
             auto ptr = frame + 1;
             while (true) {
                 int off = ptr - frame;
                 if (off >= length) {
                     break;
@@ -121,14 +103,12 @@ bool H264RtpDecoder::decodeRtp(const RtpPacket::Ptr &rtppack) {
                 if (off + len > length) {
                     break;
                 }
-                if(len >= 10){
-                    //drop frames that are too small
-                    NALU nal;
-                    MakeNalu(ptr[0], nal);
+                if (len > 0) {
+                    //there is valid data
                     _h264frame->_buffer.assign("\x0\x0\x0\x1", 4);
                     _h264frame->_buffer.append((char *) ptr, len);
                     _h264frame->_pts = rtppack->timeStamp;
-                    if(nal.type == H264Frame::NAL_IDR){
+                    if ((ptr[0] & 0x1F) == H264Frame::NAL_IDR) {
                         haveIDR = true;
                     }
                     onGetH264(_h264frame);
@@ -144,10 +124,9 @@ bool H264RtpDecoder::decodeRtp(const RtpPacket::Ptr &rtppack) {
             MakeFU(frame[1], fu);
             if (fu.S) {
                 //first rtp packet of this frame, FU-A start
-                char tmp = (nal.forbidden_zero_bit << 7 | nal.nal_ref_idc << 5 | fu.type);
                 _h264frame->_buffer.assign("\x0\x0\x0\x1", 4);
-                _h264frame->_buffer.push_back(tmp);
-                _h264frame->_buffer.append((char *)frame + 2, length - 2);
+                _h264frame->_buffer.push_back(nal_suffix | fu.type);
+                _h264frame->_buffer.append((char *) frame + 2, length - 2);
                 _h264frame->_pts = rtppack->timeStamp;
                 //before returning, remember the current sequence so the next one can be checked for continuity
                 _lastSeq = rtppack->sequence;
@@ -163,20 +142,20 @@ bool H264RtpDecoder::decodeRtp(const RtpPacket::Ptr &rtppack) {
             if (!fu.E) {
                 //middle rtp packet of this frame, FU-A mid
-                _h264frame->_buffer.append((char *)frame + 2, length - 2);
+                _h264frame->_buffer.append((char *) frame + 2, length - 2);
                 //before returning, remember the current sequence so the next one can be checked for continuity
                 _lastSeq = rtppack->sequence;
                 return false;
             }

             //last rtp packet of this frame, FU-A end
-            _h264frame->_buffer.append((char *)frame + 2, length - 2);
+            _h264frame->_buffer.append((char *) frame + 2, length - 2);
             _h264frame->_pts = rtppack->timeStamp;
             onGetH264(_h264frame);
             return false;
         }

-        default:{
+        default: {
             // 29 FU-B fragmentation unit, mode B
             // 25 STAP-B single-time aggregation packet, mode B
             // 26 MTAP16 multi-time aggregation packet
@@ -184,7 +163,7 @@ bool H264RtpDecoder::decodeRtp(const RtpPacket::Ptr &rtppack) {
             // 0 udef
             // 30 udef
             // 31 udef
-            WarnL << "unsupported rtp type:" << (int)nal.type << " " << rtppack->sequence;
+            WarnL << "unsupported rtp type:" << (int) nal_type << " " << rtppack->sequence;
             return false;
         }
     }
@@ -215,63 +194,62 @@ H264RtpEncoder::H264RtpEncoder(uint32_t ui32Ssrc,
 void H264RtpEncoder::inputFrame(const Frame::Ptr &frame) {
     GET_CONFIG(uint32_t,cycleMS,Rtp::kCycleMS);
-    auto pcData = frame->data() + frame->prefixSize();
-    auto uiStamp = frame->pts();
-    auto iLen = frame->size() - frame->prefixSize();
-    //get the 5-bit NALU frame type
-    unsigned char naluType = H264_TYPE(pcData[0]);
-    uiStamp %= cycleMS;
-    int iSize = _ui32MtuSize - 2;
+    auto ptr = frame->data() + frame->prefixSize();
+    auto pts = frame->pts() % cycleMS;
+    auto len = frame->size() - frame->prefixSize();
+    auto nal_type = H264_TYPE(ptr[0]);
+    auto max_rtp_size = _ui32MtuSize - 2;

     //if the frame exceeds the MTU, pack it in FU-A mode
-    if (iLen > iSize) {
+    if (len > max_rtp_size) {
         //the top bit is forbidden_zero_bit,
         //the next 2 bits are nal_ref_idc (importance: 00 droppable, 11 must not drop),
         //the low 5 bits are the nalu type, fixed to 28 (FU-A)
-        unsigned char f_nri_flags = (*((unsigned char *) pcData) & 0x60) | 28;
+        unsigned char nal_fu_a = (*((unsigned char *) ptr) & (~0x1F)) | 28;
         unsigned char s_e_r_flags;
-        bool bFirst = true;
-        bool mark = false;
-        int nOffset = 1;
-        while (!mark) {
-            if (iLen <= nOffset + iSize) {
+        bool fu_a_start = true;
+        bool mark_bit = false;
+        int offset = 1;
+        while (!mark_bit) {
+            if (len <= offset + max_rtp_size) {
                 //splitting finished
-                iSize = iLen - nOffset;
-                mark = true;
+                max_rtp_size = len - offset;
+                mark_bit = true;
                 //FU-A end
-                s_e_r_flags = (1 << 6) | naluType;
-            } else if (bFirst) {
+                s_e_r_flags = (1 << 6) | nal_type;
+            } else if (fu_a_start) {
                 //FU-A start
-                s_e_r_flags = (1 << 7) | naluType;
+                s_e_r_flags = (1 << 7) | nal_type;
             } else {
                 //FU-A mid
-                s_e_r_flags = naluType;
+                s_e_r_flags = nal_type;
             }

             {
                 //pass nullptr so the payload is not copied yet
-                auto rtp = makeRtp(getTrackType(), nullptr, iSize + 2, mark, uiStamp);
+                auto rtp = makeRtp(getTrackType(), nullptr, max_rtp_size + 2, mark_bit, pts);
                 //rtp payload
                 uint8_t *payload = (uint8_t*)rtp->data() + rtp->offset;
                 //first byte of FU-A
-                payload[0] = f_nri_flags;
+                payload[0] = nal_fu_a;
                 //second byte of FU-A
                 payload[1] = s_e_r_flags;
                 //H264 data
-                memcpy(payload + 2, (unsigned char *) pcData + nOffset, iSize);
+                memcpy(payload + 2, (unsigned char *) ptr + offset, max_rtp_size);
                 //feed into the rtp ring buffer
-                RtpCodec::inputRtp(rtp,bFirst && naluType == H264Frame::NAL_IDR);
+                RtpCodec::inputRtp(rtp, fu_a_start && nal_type == H264Frame::NAL_IDR);
             }
-            nOffset += iSize;
-            bFirst = false;
+            offset += max_rtp_size;
+            fu_a_start = false;
         }
     } else {
-        makeH264Rtp(naluType,pcData, iLen, true, true, uiStamp);
+        //the frame fits in the MTU: pack it as a Single NAL Unit packet
+        makeH264Rtp(ptr, len, false, false, pts);
     }
 }

-void H264RtpEncoder::makeH264Rtp(int nal_type,const void* data, unsigned int len, bool mark, bool first_packet, uint32_t uiStamp) {
-    RtpCodec::inputRtp(makeRtp(getTrackType(),data,len,mark,uiStamp),first_packet && nal_type == H264Frame::NAL_IDR);
+void H264RtpEncoder::makeH264Rtp(const void* data, unsigned int len, bool mark, bool gop_pos, uint32_t uiStamp) {
+    RtpCodec::inputRtp(makeRtp(getTrackType(), data, len, mark, uiStamp), gop_pos);
 }
 }//namespace mediakit
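The FU-A packetization above builds two header bytes per fragment: an FU indicator that keeps the F and NRI bits of the original NAL header and replaces the type with 28, and an FU header carrying the S(start)/E(end) flags plus the real NAL type. A minimal sketch of those two bytes (helper names are illustrative):

```cpp
#include <cassert>
#include <cstdint>

// FU indicator: preserve F + NRI from the original NAL header, type = 28 (FU-A).
static uint8_t fu_indicator(uint8_t nal_header) {
    return (uint8_t)((nal_header & ~0x1F) | 28);
}

// FU header: S (start-of-fragment), E (end-of-fragment), R bit zero,
// then the 5-bit type of the fragmented NAL unit.
static uint8_t fu_header(uint8_t nal_type, bool start, bool end) {
    return (uint8_t)(((start ? 1 : 0) << 7) | ((end ? 1 : 0) << 6) | (nal_type & 0x1F));
}
```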
@@ -78,7 +78,7 @@ public:
      */
     void inputFrame(const Frame::Ptr &frame) override;

 private:
-    void makeH264Rtp(int nal_type,const void *pData, unsigned int uiLen, bool bMark, bool first_packet, uint32_t uiStamp);
+    void makeH264Rtp(const void *pData, unsigned int uiLen, bool bMark, bool gop_pos, uint32_t uiStamp);
 };
 }//namespace mediakit{
......
@@ -61,7 +61,7 @@ public:
     } NaleType;

     H265Frame(){
-        _codecid = CodecH265;
+        _codec_id = CodecH265;
     }

     bool keyFrame() const override {
@@ -92,10 +92,7 @@ public:
         _dts = dts;
         _pts = pts;
         _prefix_size = prefix_size;
-    }
-
-    CodecId getCodecId() const override {
-        return CodecH265;
+        _codec_id = CodecH265;
     }

     bool keyFrame() const override {
......
@@ -27,10 +27,6 @@ H265Frame::Ptr H265RtmpDecoder::obtainFrame() {
     return frame;
 }

-bool H265RtmpDecoder::inputRtmp(const RtmpPacket::Ptr &rtmp, bool key_pos) {
-    return decodeRtmp(rtmp);
-}
-
 #ifdef ENABLE_MP4
 /**
  * get the sps without the 0x00 00 00 01 prefix
@@ -43,61 +39,60 @@ static bool getH265ConfigFrame(const RtmpPacket &thiz,string &frame) {
     if (!thiz.isCfgFrame()) {
         return false;
     }
-    if (thiz.strBuf.size() < 6) {
+    if (thiz.buffer.size() < 6) {
         WarnL << "bad H265 cfg!";
         return false;
     }
-    auto extra = thiz.strBuf.data() + 5;
-    auto bytes = thiz.strBuf.size() - 5;
+    auto extra = thiz.buffer.data() + 5;
+    auto bytes = thiz.buffer.size() - 5;

     struct mpeg4_hevc_t hevc = {0};
     if (mpeg4_hevc_decoder_configuration_record_load((uint8_t *) extra, bytes, &hevc) > 0) {
-        uint8_t config[1024] = {0};
-        int size = mpeg4_hevc_to_nalu(&hevc, config, sizeof(config));
+        uint8_t *config = new uint8_t[bytes * 2];
+        int size = mpeg4_hevc_to_nalu(&hevc, config, bytes * 2);
         if (size > 4) {
             frame.assign((char *) config + 4, size - 4);
-            return true;
         }
+        delete [] config;
+        return size > 4;
     }
     return false;
 }
 #endif

-bool H265RtmpDecoder::decodeRtmp(const RtmpPacket::Ptr &pkt) {
+void H265RtmpDecoder::inputRtmp(const RtmpPacket::Ptr &pkt) {
     if (pkt->isCfgFrame()) {
 #ifdef ENABLE_MP4
         string config;
         if(getH265ConfigFrame(*pkt,config)){
-            onGetH265(config.data(), config.size(), pkt->timeStamp , pkt->timeStamp);
+            onGetH265(config.data(), config.size(), pkt->time_stamp , pkt->time_stamp);
         }
 #else
         WarnL << "please enable the MP4 feature (\"ENABLE_MP4\"), otherwise H265-RTMP support is incomplete";
 #endif
-        return false;
+        return;
     }

-    if (pkt->strBuf.size() > 9) {
-        uint32_t iTotalLen = pkt->strBuf.size();
+    if (pkt->buffer.size() > 9) {
+        uint32_t iTotalLen = pkt->buffer.size();
         uint32_t iOffset = 5;
-        uint8_t *cts_ptr = (uint8_t *) (pkt->strBuf.data() + 2);
+        uint8_t *cts_ptr = (uint8_t *) (pkt->buffer.data() + 2);
         int32_t cts = (((cts_ptr[0] << 16) | (cts_ptr[1] << 8) | (cts_ptr[2])) + 0xff800000) ^ 0xff800000;
-        auto pts = pkt->timeStamp + cts;
+        auto pts = pkt->time_stamp + cts;
         while(iOffset + 4 < iTotalLen){
             uint32_t iFrameLen;
-            memcpy(&iFrameLen, pkt->strBuf.data() + iOffset, 4);
+            memcpy(&iFrameLen, pkt->buffer.data() + iOffset, 4);
             iFrameLen = ntohl(iFrameLen);
             iOffset += 4;
             if(iFrameLen + iOffset > iTotalLen){
                 break;
             }
-            onGetH265(pkt->strBuf.data() + iOffset, iFrameLen, pkt->timeStamp , pts);
+            onGetH265(pkt->buffer.data() + iOffset, iFrameLen, pkt->time_stamp , pts);
             iOffset += iFrameLen;
         }
     }
-    return pkt->isVideoKeyFrame();
 }

 inline void H265RtmpDecoder::onGetH265(const char* pcData, int iLen, uint32_t dts,uint32_t pts) {
@@ -176,8 +171,8 @@ void H265RtmpEncoder::inputFrame(const Frame::Ptr &frame) {
         return;
     }

-    if(_lastPacket && _lastPacket->timeStamp != frame->dts()) {
-        RtmpCodec::inputRtmp(_lastPacket, _lastPacket->isVideoKeyFrame());
+    if(_lastPacket && _lastPacket->time_stamp != frame->dts()) {
+        RtmpCodec::inputRtmp(_lastPacket);
         _lastPacket = nullptr;
     }
@@ -188,23 +183,23 @@ void H265RtmpEncoder::inputFrame(const Frame::Ptr &frame) {
         flags |= (((frame->configFrame() || frame->keyFrame()) ? FLV_KEY_FRAME : FLV_INTER_FRAME) << 4);
         _lastPacket = ResourcePoolHelper<RtmpPacket>::obtainObj();
-        _lastPacket->strBuf.clear();
-        _lastPacket->strBuf.push_back(flags);
-        _lastPacket->strBuf.push_back(!is_config);
+        _lastPacket->buffer.clear();
+        _lastPacket->buffer.push_back(flags);
+        _lastPacket->buffer.push_back(!is_config);
         auto cts = frame->pts() - frame->dts();
         cts = htonl(cts);
-        _lastPacket->strBuf.append((char *)&cts + 1, 3);
+        _lastPacket->buffer.append((char *)&cts + 1, 3);

-        _lastPacket->chunkId = CHUNK_VIDEO;
-        _lastPacket->streamId = STREAM_MEDIA;
-        _lastPacket->timeStamp = frame->dts();
-        _lastPacket->typeId = MSG_VIDEO;
+        _lastPacket->chunk_id = CHUNK_VIDEO;
+        _lastPacket->stream_index = STREAM_MEDIA;
+        _lastPacket->time_stamp = frame->dts();
+        _lastPacket->type_id = MSG_VIDEO;
     }
     auto size = htonl(iLen);
-    _lastPacket->strBuf.append((char *) &size, 4);
-    _lastPacket->strBuf.append(pcData, iLen);
-    _lastPacket->bodySize = _lastPacket->strBuf.size();
+    _lastPacket->buffer.append((char *) &size, 4);
+    _lastPacket->buffer.append(pcData, iLen);
+    _lastPacket->body_size = _lastPacket->buffer.size();
 }

 void H265RtmpEncoder::makeVideoConfigPkt() {
@@ -214,13 +209,13 @@ void H265RtmpEncoder::makeVideoConfigPkt() {
     bool is_config = true;
     RtmpPacket::Ptr rtmpPkt = ResourcePoolHelper<RtmpPacket>::obtainObj();
-    rtmpPkt->strBuf.clear();
+    rtmpPkt->buffer.clear();

     //header
-    rtmpPkt->strBuf.push_back(flags);
-    rtmpPkt->strBuf.push_back(!is_config);
+    rtmpPkt->buffer.push_back(flags);
+    rtmpPkt->buffer.push_back(!is_config);
     //cts
-    rtmpPkt->strBuf.append("\x0\x0\x0", 3);
+    rtmpPkt->buffer.append("\x0\x0\x0", 3);

     struct mpeg4_hevc_t hevc = {0};
     string vps_sps_pps = string("\x00\x00\x00\x01", 4) + _vps +
@@ -235,14 +230,14 @@ void H265RtmpEncoder::makeVideoConfigPkt() {
     }

     //HEVCDecoderConfigurationRecord
-    rtmpPkt->strBuf.append((char *)extra_data, extra_data_size);
+    rtmpPkt->buffer.append((char *)extra_data, extra_data_size);

-    rtmpPkt->bodySize = rtmpPkt->strBuf.size();
-    rtmpPkt->chunkId = CHUNK_VIDEO;
-    rtmpPkt->streamId = STREAM_MEDIA;
-    rtmpPkt->timeStamp = 0;
-    rtmpPkt->typeId = MSG_VIDEO;
-    RtmpCodec::inputRtmp(rtmpPkt, false);
+    rtmpPkt->body_size = rtmpPkt->buffer.size();
+    rtmpPkt->chunk_id = CHUNK_VIDEO;
+    rtmpPkt->stream_index = STREAM_MEDIA;
+    rtmpPkt->time_stamp = 0;
+    rtmpPkt->type_id = MSG_VIDEO;
+    RtmpCodec::inputRtmp(rtmpPkt);
 #else
     WarnL << "please enable the MP4 feature (\"ENABLE_MP4\"), otherwise H265-RTMP support is incomplete";
 #endif
......
@@ -32,17 +32,17 @@ public:
     /**
      * input a H265 RTMP packet
      * @param rtmp RTMP packet
-     * @param key_pos ignore this parameter
     */
-    bool inputRtmp(const RtmpPacket::Ptr &rtmp, bool key_pos = true) override;
+    void inputRtmp(const RtmpPacket::Ptr &rtmp) override;

     CodecId getCodecId() const override{
         return CodecH265;
     }

 protected:
-    bool decodeRtmp(const RtmpPacket::Ptr &Rtmp);
     void onGetH265(const char *pcData, int iLen, uint32_t dts,uint32_t pts);
     H265Frame::Ptr obtainFrame();

 protected:
     H265Frame::Ptr _h265frame;
 };
......
@@ -200,7 +200,7 @@ void H265RtpEncoder::inputFrame(const Frame::Ptr &frame) {
             bFirst = false;
         }
     } else {
-        makeH265Rtp(naluType,pcData, iLen, true, true, uiStamp);
+        makeH265Rtp(naluType,pcData, iLen, false, true, uiStamp);
     }
 }
......
@@ -17,51 +17,12 @@
 namespace mediakit{

-/**
- * Opus frame
- */
-class OpusFrame : public FrameImp {
-public:
-    typedef std::shared_ptr<OpusFrame> Ptr;
-
-    OpusFrame(){
-        _codecid = CodecOpus;
-    }
-};
-
-/**
- * Opus frame that cannot be cached
- */
-class OpusFrameNoCacheAble : public FrameFromPtr {
-public:
-    typedef std::shared_ptr<OpusFrameNoCacheAble> Ptr;
-
-    OpusFrameNoCacheAble(char *ptr,uint32_t size,uint32_t dts, uint32_t pts = 0,int prefix_size = 0){
-        _ptr = ptr;
-        _size = size;
-        _dts = dts;
-        _prefix_size = prefix_size;
-    }
-
-    CodecId getCodecId() const override{
-        return CodecOpus;
-    }
-
-    bool keyFrame() const override {
-        return false;
-    }
-
-    bool configFrame() const override{
-        return false;
-    }
-};
-
 /**
  * Opus audio track
  */
 class OpusTrack : public AudioTrackImp{
 public:
     typedef std::shared_ptr<OpusTrack> Ptr;
-    OpusTrack(int sample_rate, int channels, int sample_bit) : AudioTrackImp(CodecOpus,sample_rate,channels,sample_bit){}
+    OpusTrack() : AudioTrackImp(CodecOpus,48000,2,16){}

 private:
     //clone this Track
......
/*
* Copyright (c) 2016 The ZLMediaKit project authors. All Rights Reserved.
*
* This file is part of ZLMediaKit(https://github.com/xiongziliang/ZLMediaKit).
*
* Use of this source code is governed by MIT license that can be found in the
* LICENSE file in the root of the source tree. All contributing project authors
* may be found in the AUTHORS file in the root of the source tree.
*/
#ifndef ZLMEDIAKIT_FMP4MEDIASOURCE_H
#define ZLMEDIAKIT_FMP4MEDIASOURCE_H
#include "Common/MediaSource.h"
using namespace toolkit;
#define FMP4_GOP_SIZE 512
namespace mediakit {
//FMP4 live packet
class FMP4Packet : public BufferString{
public:
using Ptr = std::shared_ptr<FMP4Packet>;
template<typename ...ARGS>
FMP4Packet(ARGS && ...args) : BufferString(std::forward<ARGS>(args)...) {};
~FMP4Packet() override = default;
public:
uint32_t time_stamp = 0;
};
//FMP4 merged-write (flush) policy
class FMP4FlushPolicy : public FlushPolicy{
public:
FMP4FlushPolicy() = default;
~FMP4FlushPolicy() = default;
uint32_t getStamp(const FMP4Packet::Ptr &packet) {
return packet->time_stamp;
}
};
//FMP4 live media source
class FMP4MediaSource : public MediaSource, public RingDelegate<FMP4Packet::Ptr>, public PacketCache<FMP4Packet, FMP4FlushPolicy>{
public:
using Ptr = std::shared_ptr<FMP4MediaSource>;
using RingDataType = std::shared_ptr<List<FMP4Packet::Ptr> >;
using RingType = RingBuffer<RingDataType>;
FMP4MediaSource(const string &vhost,
const string &app,
const string &stream_id,
int ring_size = FMP4_GOP_SIZE) : MediaSource(FMP4_SCHEMA, vhost, app, stream_id), _ring_size(ring_size) {}
~FMP4MediaSource() override = default;
    /**
     * Get the ring buffer of this media source
     */
const RingType::Ptr &getRing() const {
return _ring;
}
    /**
     * Get the fmp4 init segment
     */
const string &getInitSegment() const{
return _init_segment;
}
    /**
     * Set the fmp4 init segment
     * @param str init segment
     */
void setInitSegment(string str) {
_init_segment = std::move(str);
if (_ring) {
regist();
}
}
    /**
     * Get the number of players (readers)
     */
int readerCount() override {
return _ring ? _ring->readerCount() : 0;
}
    /**
     * Input an FMP4 packet
     * @param packet the FMP4 packet
     * @param key whether this is the first packet of a key frame
     */
void onWrite(const FMP4Packet::Ptr &packet, bool key) override {
if (!_ring) {
createRing();
}
if (key) {
_have_video = true;
}
PacketCache<FMP4Packet, FMP4FlushPolicy>::inputPacket(true, packet, key);
}
    /**
     * Clear the GOP cache
     */
void clearCache() override {
PacketCache<FMP4Packet, FMP4FlushPolicy>::clearCache();
_ring->clearCache();
}
private:
void createRing(){
weak_ptr<FMP4MediaSource> weak_self = dynamic_pointer_cast<FMP4MediaSource>(shared_from_this());
_ring = std::make_shared<RingType>(_ring_size, [weak_self](int size) {
auto strong_self = weak_self.lock();
if (!strong_self) {
return;
}
strong_self->onReaderChanged(size);
});
onReaderChanged(0);
if (!_init_segment.empty()) {
regist();
}
}
    /**
     * Merged-write callback
     * @param packet_list merged-write packet queue
     * @param key_pos whether the list contains a key frame
     */
void onFlush(std::shared_ptr<List<FMP4Packet::Ptr> > &packet_list, bool key_pos) override {
        //Without video there is no point in keeping a GOP cache, so make sure it stays cleared
_ring->write(packet_list, _have_video ? key_pos : true);
}
private:
bool _have_video = false;
int _ring_size;
string _init_segment;
RingType::Ptr _ring;
};
}//namespace mediakit
#endif //ZLMEDIAKIT_FMP4MEDIASOURCE_H
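The `PacketCache`/`FlushPolicy` pair used above implements merged writes: packets accumulate in a list and readers are woken once per batch instead of once per packet. A minimal, self-contained sketch of that idea follows; all names here (`MergedWriter`, `Packet`) are hypothetical stand-ins, not ZLMediaKit APIs, and the flush trigger is simplified to a timestamp window.

```cpp
#include <cassert>
#include <cstdint>
#include <functional>
#include <list>
#include <memory>

//Hypothetical packet carrying only a timestamp, standing in for FMP4Packet.
struct Packet {
    uint32_t time_stamp;
};

//Buffers packets and flushes them as one list once the buffered
//timestamp span exceeds the merge window (the merged-write idea).
class MergedWriter {
public:
    using PacketList = std::list<std::shared_ptr<Packet>>;
    using OnFlush = std::function<void(PacketList &, bool key_pos)>;

    MergedWriter(uint32_t max_ms, OnFlush cb) : _max_ms(max_ms), _on_flush(std::move(cb)) {}

    void inputPacket(std::shared_ptr<Packet> pkt, bool key) {
        _key_pos = _key_pos || key;
        _cache.emplace_back(std::move(pkt));
        //flush once the buffered span exceeds the merge window
        if (_cache.back()->time_stamp - _cache.front()->time_stamp >= _max_ms) {
            flush();
        }
    }

    void flush() {
        if (_cache.empty()) return;
        _on_flush(_cache, _key_pos);
        _cache.clear();
        _key_pos = false;
    }

private:
    uint32_t _max_ms;
    bool _key_pos = false;
    PacketList _cache;
    OnFlush _on_flush;
};
```

The payoff is fewer reader wake-ups and larger socket writes, at the cost of up to one merge window of added latency.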
/*
* Copyright (c) 2016 The ZLMediaKit project authors. All Rights Reserved.
*
* This file is part of ZLMediaKit(https://github.com/xiongziliang/ZLMediaKit).
*
* Use of this source code is governed by MIT license that can be found in the
* LICENSE file in the root of the source tree. All contributing project authors
* may be found in the AUTHORS file in the root of the source tree.
*/
#ifndef ZLMEDIAKIT_FMP4MEDIASOURCEMUXER_H
#define ZLMEDIAKIT_FMP4MEDIASOURCEMUXER_H
#include "FMP4MediaSource.h"
#include "Record/MP4Muxer.h"
namespace mediakit {
class FMP4MediaSourceMuxer : public MP4MuxerMemory, public MediaSourceEventInterceptor,
public std::enable_shared_from_this<FMP4MediaSourceMuxer> {
public:
using Ptr = std::shared_ptr<FMP4MediaSourceMuxer>;
FMP4MediaSourceMuxer(const string &vhost,
const string &app,
const string &stream_id) {
_media_src = std::make_shared<FMP4MediaSource>(vhost, app, stream_id);
}
~FMP4MediaSourceMuxer() override = default;
void setListener(const std::weak_ptr<MediaSourceEvent> &listener){
_listener = listener;
_media_src->setListener(shared_from_this());
}
int readerCount() const{
return _media_src->readerCount();
}
void onReaderChanged(MediaSource &sender, int size) override {
_enabled = size;
if (!size) {
_clear_cache = true;
}
MediaSourceEventInterceptor::onReaderChanged(sender, size);
}
void inputFrame(const Frame::Ptr &frame) override {
if (_clear_cache) {
_clear_cache = false;
_media_src->clearCache();
}
if (_enabled) {
MP4MuxerMemory::inputFrame(frame);
}
}
bool isEnabled() {
        //While the cache has not been cleared yet, inputFrame may still be invoked so the cache can be flushed promptly
return _clear_cache ? true : _enabled;
}
void onAllTrackReady() {
_media_src->setInitSegment(getInitSegment());
}
protected:
    void onSegmentData(const string &data, uint32_t stamp, bool key_frame) override {
        if (data.empty()) {
            return;
        }
        FMP4Packet::Ptr packet = std::make_shared<FMP4Packet>(data);
packet->time_stamp = stamp;
_media_src->onWrite(packet, key_frame);
}
private:
bool _enabled = true;
bool _clear_cache = false;
FMP4MediaSource::Ptr _media_src;
};
}//namespace mediakit
#endif //ZLMEDIAKIT_FMP4MEDIASOURCEMUXER_H
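The `_enabled`/`_clear_cache` pair above gates muxing on the reader count: muxing is suspended while nobody is watching, and the cache is cleared exactly once on the next frame after the last reader leaves. A compilable sketch of that pattern (hypothetical `GatedMuxer`, with counters standing in for the real muxer and media source calls):

```cpp
#include <cassert>

//Mirrors the reader-gating logic of FMP4MediaSourceMuxer with plain counters.
class GatedMuxer {
public:
    void onReaderChanged(int size) {
        _enabled = size != 0;
        if (!size) {
            _clear_cache = true;   //request a one-shot cache clear
        }
    }

    //returns true when the frame was actually muxed
    bool inputFrame() {
        if (_clear_cache) {
            _clear_cache = false;
            ++cache_clears;        //stands in for _media_src->clearCache()
        }
        if (_enabled) {
            ++muxed_frames;        //stands in for MP4MuxerMemory::inputFrame(frame)
            return true;
        }
        return false;
    }

    //keep feeding frames while a cache clear is still pending
    bool isEnabled() const { return _clear_cache ? true : _enabled; }

    int cache_clears = 0;
    int muxed_frames = 0;

private:
    bool _enabled = true;
    bool _clear_cache = false;
};
```

Note why `isEnabled()` reports true while `_clear_cache` is set: the caller must deliver one more frame so the pending clear actually runs, otherwise stale GOP data would linger until a new reader arrives.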
@@ -13,7 +13,7 @@ namespace mediakit {
 HlsPlayer::HlsPlayer(const EventPoller::Ptr &poller){
     _segment.setOnSegment([this](const char *data, uint64_t len) { onPacket(data, len); });
-    _poller = poller ? poller : EventPollerPool::Instance().getPoller();
+    setPoller(poller ? poller : EventPollerPool::Instance().getPoller());
 }

 HlsPlayer::~HlsPlayer() {}
@@ -63,6 +63,15 @@ void HlsPlayer::playNextTs(bool force){
     std::shared_ptr<Ticker> ticker(new Ticker);
     _http_ts_player = std::make_shared<HttpTSPlayer>(getPoller(), false);
+    _http_ts_player->setOnCreateSocket([weakSelf](const EventPoller::Ptr &poller) {
+        auto strongSelf = weakSelf.lock();
+        if (strongSelf) {
+            return strongSelf->createSocket();
+        }
+        return Socket::createSocket(poller, true);
+    });
+
     _http_ts_player->setOnDisconnect([weakSelf, ticker, ts_duration](const SockException &err) {
         auto strongSelf = weakSelf.lock();
         if (!strongSelf) {
@@ -84,6 +93,7 @@ void HlsPlayer::playNextTs(bool force){
             }, strongSelf->getPoller()));
         }
     });
+
     _http_ts_player->setOnPacket([weakSelf](const char *data, uint64_t len) {
         auto strongSelf = weakSelf.lock();
         if (!strongSelf) {
@@ -94,9 +104,10 @@ void HlsPlayer::playNextTs(bool force){
     });
     _http_ts_player->setMethod("GET");
-    if(!(*this)[kNetAdapter].empty()) {
+    if (!(*this)[kNetAdapter].empty()) {
         _http_ts_player->setNetAdapter((*this)[Client::kNetAdapter]);
     }
+
     _http_ts_player->sendRequest(_ts_list.front().url, 2 * _ts_list.front().duration);
     _ts_list.pop_front();
 }
@@ -254,11 +265,15 @@ void HlsPlayerImp::onAllTrackReady() {
 }

 void HlsPlayerImp::onPlayResult(const SockException &ex) {
-    if(ex){
+    if (ex) {
         PlayerImp<HlsPlayer, PlayerBase>::onPlayResult(ex);
-    }else{
+    } else {
+        _frame_cache.clear();
+        _stamp[TrackAudio].setRelativeStamp(0);
+        _stamp[TrackVideo].setRelativeStamp(0);
         _stamp[TrackAudio].syncTo(_stamp[TrackVideo]);
-        _ticker.resetTime();
+        setPlayPosition(0);
         weak_ptr<HlsPlayerImp> weakSelf = dynamic_pointer_cast<HlsPlayerImp>(shared_from_this());
         //runs every 50 ms
         _timer = std::make_shared<Timer>(0.05, [weakSelf]() {
@@ -288,25 +303,47 @@ void HlsPlayerImp::inputFrame(const Frame::Ptr &frame) {
     //cache frames keyed by timestamp
     _frame_cache.emplace(dts, Frame::getCacheAbleFrame(frame));
-    while (!_frame_cache.empty()) {
-        if (_frame_cache.rbegin()->first - _frame_cache.begin()->first > 30 * 1000) {
-            //more than 30 s buffered: force-consume
+    if (getBufferMS() > 30 * 1000) {
+        //more than 30 s buffered: force-consume down to 15 s (reduces latency and memory usage)
+        while (getBufferMS() > 15 * 1000) {
             MediaSink::inputFrame(_frame_cache.begin()->second);
             _frame_cache.erase(_frame_cache.begin());
-            continue;
         }
-        //less than 30 s buffered
-        break;
+        //then resume playback at the earliest cached frame
+        setPlayPosition(_frame_cache.begin()->first);
     }
 }
int64_t HlsPlayerImp::getPlayPosition(){
return _ticker.elapsedTime() + _ticker_offset;
}
int64_t HlsPlayerImp::getBufferMS(){
if(_frame_cache.empty()){
return 0;
}
return _frame_cache.rbegin()->first - _frame_cache.begin()->first;
}
void HlsPlayerImp::setPlayPosition(int64_t pos){
_ticker.resetTime();
_ticker_offset = pos;
}
 void HlsPlayerImp::onTick() {
     auto it = _frame_cache.begin();
     while (it != _frame_cache.end()) {
-        if (it->first > _ticker.elapsedTime()) {
+        if (it->first > getPlayPosition()) {
             //these frames are not yet due for playback
             break;
         }
+
+        if (getBufferMS() < 3 * 1000) {
+            //less than 3 s buffered: slow down timer-driven consumption (drain the remaining data over ~3 s)
+            //this prevents the data from being consumed in an instant after the timer has been idle for a long time
+            setPlayPosition(_frame_cache.begin()->first);
+        }
+
         //consume the frames that are due
         MediaSink::inputFrame(it->second);
         it = _frame_cache.erase(it);
......
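The buffering policy above can be summarized as: cache frames by timestamp, trim the cache from 30 s down to 15 s on overflow, and drain frames against a resettable play position (`_ticker` elapsed time plus `_ticker_offset`). The sketch below is a standalone model of that policy with hypothetical names and an explicit `now_ms` clock in place of `Ticker`; it is not the ZLMediaKit API.

```cpp
#include <cassert>
#include <cstdint>
#include <map>

//Models HlsPlayerImp's frame cache and play-position bookkeeping.
class PacedBuffer {
public:
    void input(int64_t dts) { _cache.emplace(dts, dts); }

    //buffered span in milliseconds (latest dts minus earliest dts)
    int64_t bufferMS() const {
        return _cache.empty() ? 0 : _cache.rbegin()->first - _cache.begin()->first;
    }

    void setPlayPosition(int64_t pos, int64_t now_ms) {
        _start_ms = now_ms;  //mirrors _ticker.resetTime()
        _offset = pos;       //mirrors _ticker_offset
    }

    int64_t playPosition(int64_t now_ms) const { return now_ms - _start_ms + _offset; }

    //force-consume down to 15 s once more than 30 s is buffered,
    //then resume playback at the earliest remaining frame
    int trimOverflow(int64_t now_ms) {
        int consumed = 0;
        if (bufferMS() > 30 * 1000) {
            while (bufferMS() > 15 * 1000) {
                _cache.erase(_cache.begin());
                ++consumed;
            }
            setPlayPosition(_cache.begin()->first, now_ms);
        }
        return consumed;
    }

private:
    std::map<int64_t, int64_t> _cache;
    int64_t _start_ms = 0;
    int64_t _offset = 0;
};
```

Resetting the play position to the earliest cached timestamp after a trim is what keeps playback smooth: the clock restarts exactly where the buffer now begins instead of racing ahead of it.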
@@ -138,13 +138,19 @@ private:
     void inputFrame(const Frame::Ptr &frame) override;
     void onShutdown(const SockException &ex) override;
     void onTick();
+    int64_t getPlayPosition();
+    void setPlayPosition(int64_t pos);
+    int64_t getBufferMS();

 private:
-    TSSegment::onSegment _on_ts;
-    DecoderImp::Ptr _decoder;
-    multimap<int64_t, Frame::Ptr> _frame_cache;
-    Timer::Ptr _timer;
+    int64_t _ticker_offset = 0;
     Ticker _ticker;
     Stamp _stamp[2];
+    Timer::Ptr _timer;
+    DecoderImp::Ptr _decoder;
+    TSSegment::onSegment _on_ts;
+    multimap<int64_t, Frame::Ptr> _frame_cache;
 };

 }//namespace mediakit
......
@@ -135,7 +135,7 @@ Buffer::Ptr HttpFileBody::readData(uint32_t size) {
         //got data
         ret->setSize(iRead);
         _offset += iRead;
-        return std::move(ret);
+        return ret;
     }
     //file read error: the actual file length is shorter than the declared length
     _offset = _max_size;
@@ -146,7 +146,7 @@ Buffer::Ptr HttpFileBody::readData(uint32_t size) {
     //mmap mode
     auto ret = std::make_shared<BufferMmap>(_map_addr,_offset,size);
     _offset += size;
-    return std::move(ret);
+    return ret;
 }

 //////////////////////////////////////////////////////////////////
......
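The `return std::move(ret)` → `return ret` change above is not cosmetic: returning a named local by value is already eligible for NRVO and otherwise falls back to an implicit move, while wrapping it in `std::move()` disables NRVO and can only pessimize. A small move-counting demonstration (illustrative type, not from the codebase):

```cpp
#include <cassert>
#include <utility>

//Counts move constructions so we can observe what the return statement does.
struct Counted {
    static int moves;
    Counted() = default;
    Counted(const Counted &) {}
    Counted(Counted &&) noexcept { ++moves; }
};
int Counted::moves = 0;

//Plain return of a named local: eligible for NRVO, implicit move at worst.
static Counted make_plain() {
    Counted c;
    return c;
}

//std::move on the return value: blocks NRVO, forces a real move.
static Counted make_moved() {
    Counted c;
    return std::move(c);
}
```

On mainstream compilers `make_plain` elides the move entirely, while `make_moved` always pays for at least one move construction; some compilers also warn about this pattern (`-Wpessimizing-move`).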
@@ -100,7 +100,7 @@ void HttpClient::onConnect(const SockException &ex) {
     }

     //assume at first that the http client will receive very little data (http headers only, to save memory)
-    _sock->setReadBuffer(std::make_shared<BufferRaw>(1 * 1024));
+    getSock()->setReadBuffer(std::make_shared<BufferRaw>(1 * 1024));
     _totalBodySize = 0;
     _recvedBodySize = 0;
@@ -157,7 +157,7 @@ int64_t HttpClient::onRecvHeader(const char *data, uint64_t len) {
     if(_parser["Transfer-Encoding"] == "chunked"){
         //we expect a lot of data to follow in this case, so enlarge the receive buffer to improve performance
-        _sock->setReadBuffer(std::make_shared<BufferRaw>(256 * 1024));
+        getSock()->setReadBuffer(std::make_shared<BufferRaw>(256 * 1024));

         //with Transfer-Encoding: chunked, the content that follows is treated as unbounded in length
         _totalBodySize = -1;
@@ -185,9 +185,9 @@ int64_t HttpClient::onRecvHeader(const char *data, uint64_t len) {
     _recvedBodySize = 0;
     if(_totalBodySize > 0){
         //size the receive buffer according to _totalBodySize
-        _sock->setReadBuffer(std::make_shared<BufferRaw>(MIN(_totalBodySize + 1,256 * 1024)));
+        getSock()->setReadBuffer(std::make_shared<BufferRaw>(MIN(_totalBodySize + 1,256 * 1024)));
     }else{
-        _sock->setReadBuffer(std::make_shared<BufferRaw>(256 * 1024));
+        getSock()->setReadBuffer(std::make_shared<BufferRaw>(256 * 1024));
     }
     return -1;
......
@@ -19,6 +19,8 @@
 #include "WebSocketSplitter.h"
 #include "HttpCookieManager.h"
 #include "HttpFileManager.h"
+#include "TS/TSMediaSource.h"
+#include "FMP4/FMP4MediaSource.h"

 using namespace std;
 using namespace toolkit;
@@ -47,6 +49,7 @@ public:
     void onError(const SockException &err) override;
     void onManager() override;
     static string urlDecode(const string &str);
+
 protected:
     //FlvMuxer override
     void onWrite(const Buffer::Ptr &data, bool flush) override ;
@@ -90,35 +93,49 @@ protected:
      * @param buffer websocket protocol data
      */
     void onWebSocketEncodeData(const Buffer::Ptr &buffer) override;
+
+    /**
+     * Called after a complete websocket packet has been received
+     * @param header_in packet header
+     */
+    void onWebSocketDecodeComplete(const WebSocketHeader &header_in) override;

 private:
     void Handle_Req_GET(int64_t &content_len);
     void Handle_Req_GET_l(int64_t &content_len, bool sendBody);
     void Handle_Req_POST(int64_t &content_len);
     void Handle_Req_HEAD(int64_t &content_len);
-    bool checkLiveFlvStream(const function<void()> &cb = nullptr);
+    bool checkLiveStream(const string &schema, const string &url_suffix, const function<void(const MediaSource::Ptr &src)> &cb);
+    bool checkLiveStreamFlv(const function<void()> &cb = nullptr);
+    bool checkLiveStreamTS(const function<void()> &cb = nullptr);
+    bool checkLiveStreamFMP4(const function<void()> &fmp4_list = nullptr);
     bool checkWebSocket();
     bool emitHttpEvent(bool doInvoke);
     void urlDecode(Parser &parser);
     void sendNotFound(bool bClose);
     void sendResponse(const char *pcStatus, bool bClose, const char *pcContentType = nullptr,
                       const HttpSession::KeyValue &header = HttpSession::KeyValue(),
-                      const HttpBody::Ptr &body = nullptr,bool is_http_flv = false);
+                      const HttpBody::Ptr &body = nullptr, bool no_content_length = false);
     //set socket flags
     void setSocketFlags();

 private:
+    bool _is_live_stream = false;
+    bool _live_over_websocket = false;
+    //total bytes of traffic consumed
+    uint64_t _total_bytes_usage = 0;
     string _origin;
     Parser _parser;
     Ticker _ticker;
-    //total bytes of traffic consumed
-    uint64_t _ui64TotalBytes = 0;
-    //flv over http
     MediaInfo _mediaInfo;
+    TSMediaSource::RingType::RingReader::Ptr _ts_reader;
+    FMP4MediaSource::RingType::RingReader::Ptr _fmp4_reader;
     //callback that handles content data
     function<bool (const char *data,uint64_t len) > _contentCallBack;
-    bool _flv_over_websocket = false;
-    bool _is_flv_stream = false;
 };
......
@@ -12,9 +12,9 @@
 namespace mediakit {

 HttpTSPlayer::HttpTSPlayer(const EventPoller::Ptr &poller, bool split_ts){
-    _segment.setOnSegment([this](const char *data, uint64_t len) { onPacket(data, len); });
-    _poller = poller ? poller : EventPollerPool::Instance().getPoller();
     _split_ts = split_ts;
+    _segment.setOnSegment([this](const char *data, uint64_t len) { onPacket(data, len); });
+    setPoller(poller ? poller : EventPollerPool::Instance().getPoller());
 }

 HttpTSPlayer::~HttpTSPlayer() {}
@@ -25,8 +25,8 @@ int64_t HttpTSPlayer::onResponseHeader(const string &status, const HttpClient::H
         shutdown(SockException(Err_other, StrPrinter << "bad http status code:" + status));
         return 0;
     }
-    auto contet_type = const_cast< HttpClient::HttpHeader &>(headers)["Content-Type"];
-    if (contet_type.find("video/mp2t") == 0 || contet_type.find("video/mpeg") == 0) {
+    auto content_type = const_cast< HttpClient::HttpHeader &>(headers)["Content-Type"];
+    if (content_type.find("video/mp2t") == 0 || content_type.find("video/mpeg") == 0) {
         _is_ts_content = true;
     }
......
@@ -38,11 +38,10 @@ public:
     template<typename ...ArgsType>
     ClientTypeImp(ArgsType &&...args): ClientType(std::forward<ArgsType>(args)...){}
     ~ClientTypeImp() override {};

 protected:
     /**
      * Intercept outgoing data and wrap it in the websocket protocol before sending
-     * @param buf
-     * @return
      */
     int send(const Buffer::Ptr &buf) override{
         if(_beforeSendCB){
@@ -50,6 +49,7 @@ protected:
         }
         return ClientType::send(buf);
     }
+
     /**
      * Set the callback that intercepts data before sending
      * @param cb interception callback
@@ -57,6 +57,7 @@ protected:
     void setOnBeforeSendCB(const onBeforeSendCB &cb){
         _beforeSendCB = cb;
     }
+
 private:
     onBeforeSendCB _beforeSendCB;
 };
@@ -73,7 +74,7 @@ public:
     HttpWsClient(ClientTypeImp<ClientType,DataType> &delegate) : _delegate(delegate){
         _Sec_WebSocket_Key = encodeBase64(SHA1::encode_bin(makeRandStr(16, false)));
-        _poller = delegate.getPoller();
+        setPoller(delegate.getPoller());
     }
     ~HttpWsClient(){}
@@ -108,6 +109,7 @@ public:
         header._mask_flag = true;
         WebSocketSplitter::encode(header, nullptr);
     }
+
 protected:
     //HttpClientImp override
@@ -124,6 +126,8 @@ protected:
         if(Sec_WebSocket_Accept == const_cast<HttpHeader &>(headers)["Sec-WebSocket-Accept"]){
             //success
             onWebSocketException(SockException());
+            //guard against the ws server returning a Content-Length header
+            const_cast<HttpHeader &>(headers).erase("Content-Length");
             //everything that follows is websocket payload data
             return -1;
         }
@@ -180,7 +184,6 @@ protected:
     /**
      * tcp connection result
-     * @param ex
      */
     void onConnect(const SockException &ex) override{
         if(ex){
@@ -194,7 +197,6 @@ protected:
     /**
      * tcp connection closed
-     * @param ex
      */
     void onErr(const SockException &ex) override{
         //disconnection caused by tcp close or shutdown
@@ -208,7 +210,7 @@ protected:
      * @param header packet header
      */
     void onWebSocketDecodeHeader(const WebSocketHeader &header) override{
-        _payload.clear();
+        _payload_section.clear();
     }

     /**
@@ -219,10 +221,9 @@ protected:
      * @param recved bytes received so far (including this chunk); reception is complete when it equals header._payload_len
      */
     void onWebSocketDecodePayload(const WebSocketHeader &header, const uint8_t *ptr, uint64_t len, uint64_t recved) override{
-        _payload.append((char *)ptr,len);
+        _payload_section.append((char *)ptr,len);
     }
    /**
     * Called after a complete websocket packet has been received
     * @param header packet header
@@ -238,28 +239,46 @@ protected:
                 //the server closed the connection actively
                 WebSocketSplitter::encode(header,nullptr);
                 shutdown(SockException(Err_eof,"websocket server close the connection"));
-            }
                 break;
+            }
             case WebSocketHeader::PING:{
                 //heartbeat packet
                 header._opcode = WebSocketHeader::PONG;
-                WebSocketSplitter::encode(header,std::make_shared<BufferString>(std::move(_payload)));
-            }
+                WebSocketSplitter::encode(header,std::make_shared<BufferString>(std::move(_payload_section)));
                 break;
-            case WebSocketHeader::CONTINUATION:{
-            }
-                break;
+            }
+            case WebSocketHeader::CONTINUATION:
             case WebSocketHeader::TEXT:
             case WebSocketHeader::BINARY:{
-                //a complete websocket packet has been received; trigger the onRecv event
-                _delegate.onRecv(std::make_shared<BufferString>(std::move(_payload)));
-            }
-                break;
-            default:
+                if (!header._fin) {
+                    //more fragments follow; cache the data and emit it in one piece once all fragments arrive
+                    _payload_cache.append(std::move(_payload_section));
+                    if (_payload_cache.size() < MAX_WS_PACKET) {
+                        //there is still memory available to cache fragments
+                        break;
+                    }
+                    //the fragment cache has grown too large and must be flushed
+                }
+                //the final fragment
+                if (_payload_cache.empty()) {
+                    //this packet is the only fragment
+                    _delegate.onRecv(std::make_shared<WebSocketBuffer>(header._opcode, header._fin, std::move(_payload_section)));
+                    break;
+                }
+                //this packet consists of multiple fragments
+                _payload_cache.append(std::move(_payload_section));
+                _delegate.onRecv(std::make_shared<WebSocketBuffer>(header._opcode, header._fin, std::move(_payload_cache)));
+                _payload_cache.clear();
+                break;
             }
+            default: break;
         }
-        _payload.clear();
+        _payload_section.clear();
         header._mask_flag = flag;
     }
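The fragment-handling rule introduced above is: non-final fragments (`_fin == false`) accumulate in `_payload_cache`, and the delegate sees one contiguous payload only when the FIN fragment arrives (or the cache overflows and is flushed early). A standalone sketch of that reassembly logic; the names `FragmentAssembler` and `MAX_WS_PACKET_SKETCH` are hypothetical, and a `std::vector<std::string>` stands in for `_delegate.onRecv`:

```cpp
#include <cassert>
#include <string>
#include <utility>
#include <vector>

//Assumed cap on the fragment cache (the real code uses MAX_WS_PACKET).
constexpr size_t MAX_WS_PACKET_SKETCH = 1 << 20;

class FragmentAssembler {
public:
    std::vector<std::string> delivered;  //stands in for _delegate.onRecv(...)

    void onFrame(bool fin, std::string payload_section) {
        if (!fin) {
            //more fragments follow: cache them until the final one arrives
            _payload_cache.append(std::move(payload_section));
            if (_payload_cache.size() < MAX_WS_PACKET_SKETCH) {
                return;
            }
            //cache overflow: fall through and flush early, as in the original
        }
        if (_payload_cache.empty()) {
            //single-fragment message
            delivered.emplace_back(std::move(payload_section));
            return;
        }
        //multi-fragment message: append the last section and flush the cache
        _payload_cache.append(std::move(payload_section));
        delivered.emplace_back(std::move(_payload_cache));
        _payload_cache.clear();
    }

private:
    std::string _payload_cache;
};
```

This mirrors RFC 6455 fragmentation: a message is one frame with FIN set, or a TEXT/BINARY frame followed by CONTINUATION frames, the last of which carries FIN.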
@@ -271,6 +290,7 @@ protected:
     void onWebSocketEncodeData(const Buffer::Ptr &buffer) override{
         HttpClientImp::send(buffer);
     }
+
 private:
     void onWebSocketException(const SockException &ex){
         if(!ex){
@@ -292,7 +312,7 @@ private:
         });

         //set the sock, otherwise interfaces such as shutdown have no effect
-        _delegate.setSock(HttpClientImp::_sock);
+        _delegate.setSock(HttpClientImp::getSock());

         //trigger the connection-success event
         _delegate.onConnect(ex);
         //intercept websocket data reception
@@ -319,10 +339,10 @@ private:
     string _Sec_WebSocket_Key;
     function<void(const char *data, int len)> _onRecv;
     ClientTypeImp<ClientType,DataType> &_delegate;
-    string _payload;
+    string _payload_section;
+    string _payload_cache;
 };

 /**
  * Template that turns a Tcp client into a WebSocket client.
  * With this template, developers can quickly wrap a TcpClient-derived class in the WebSocket protocol without modifying any of its code.
@@ -365,6 +385,7 @@ public:
     void startWebSocket(const string &ws_url,float fTimeOutSec = 3){
         _wsClient->startWsClient(ws_url,fTimeOutSec);
     }
+
 private:
     typename HttpWsClient<ClientType,DataType>::Ptr _wsClient;
 };
......