Render video in Unity3D
using FFmpeg (C++) for video decoding.
Currently unstable and under development; a personal project for learning.
- Load the video file with FFmpeg (or another video decoding tool) to get the video information (fps, duration, format type, ...)
- Use FFmpeg (or other tools) to decode the video data into a frame image format (RGB, YUV420, NV12, ...)
- Pass one frame's data buffer from the native player to Unity3D (the normal way)
- On the Unity3D side, convert the native frame data buffer to a byte array (i.e. IntPtr to byte[]; see the sketch after this list)
- On the Unity3D side, create a texture and update it with those bytes
- On the Unity3D side, convert the frame data to RGB if the input frame isn't already in an RGB format (e.g. Y/UV to RGB), since rendering/effects need RGB colors
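For the IntPtr-to-byte-array step, a minimal sketch (assuming the native side has already reported the buffer size; reuse the destination array across frames to avoid per-frame allocations):

```csharp
using System;
using System.Runtime.InteropServices;

public static class FrameCopy
{
    // Copy one decoded frame (or one plane) from the native buffer
    // into a preallocated managed array.
    public static void CopyFrame(IntPtr nativeBuffer, byte[] destination, int sizeInBytes)
    {
        Marshal.Copy(nativeBuffer, destination, 0, sizeInBytes);
    }
}
```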
On the native side we create a very simple FFmpeg C++ player to decode video frame data (audio not included) and export an interface to Unity3D, so we can initialize the player and drive decoding from the Unity3D side.
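From the Unity3D side, the imported interface might look like this sketch (the library and entry-point names here are hypothetical placeholders, not the project's actual exports):

```csharp
using System;
using System.Runtime.InteropServices;

internal static class NativePlayer
{
    // Hypothetical exports of the simple FFmpeg player.
    [DllImport("simpleplayer")] public static extern bool   OpenVideo(string path);
    [DllImport("simpleplayer")] public static extern int    GetWidth();
    [DllImport("simpleplayer")] public static extern int    GetHeight();
    [DllImport("simpleplayer")] public static extern double GetFps();
    // Decode the next frame and return a pointer to its plane data.
    [DllImport("simpleplayer")] public static extern IntPtr DecodeNextFrame();
    [DllImport("simpleplayer")] public static extern void   CloseVideo();
}
```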
The key: we use three YUV textures (TextureFormat.R8) to store the YUV plane values, and convert YUV to RGB with a shader. So the most important part of rendering video in Unity3D is texture updating: we need to update the YUV textures every frame.
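Creating the three plane textures might look like this sketch (assuming YUV420, where the U and V planes are half the video's width and height):

```csharp
using UnityEngine;

public class YuvTextures
{
    public Texture2D Y, U, V;

    public YuvTextures(int width, int height)
    {
        // One single-channel (R8) texture per plane; for YUV420 the
        // chroma planes are subsampled 2x in both dimensions.
        Y = new Texture2D(width,     height,     TextureFormat.R8, false);
        U = new Texture2D(width / 2, height / 2, TextureFormat.R8, false);
        V = new Texture2D(width / 2, height / 2, TextureFormat.R8, false);
    }
}
```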
We have multiple ways to do this texture updating.
The native side decodes the video and passes the frame buffer to Unity3D; each frame we convert the given IntPtr to a byte[], and the Unity3D side either creates a new Texture2D from the bytes or updates an existing Texture2D with them.
There are two ways to update a Texture2D with the input bytes:
- Way 1: every frame, create a new Texture2D from the input buffer with LoadRawTextureData, and destroy the previously created Texture2D
- Way 2: create the Texture2D once; every frame, update its data with SetPixelData
After updating the texture's raw data with the raw bytes, call Texture2D.Apply() to upload the texture to the GPU and finish the update (both ways are sketched below).
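Both ways in a sketch (way 1 pays an allocation and a Destroy per frame; way 2 reuses one texture; SetPixelData needs Unity 2019.3 or newer):

```csharp
using UnityEngine;

public class PlaneUploader
{
    private Texture2D _tex; // reused by way 2

    // Way 1: create a new Texture2D every frame and destroy the previous one.
    public Texture2D UploadByRecreate(Texture2D previous, byte[] bytes, int w, int h)
    {
        if (previous != null) Object.Destroy(previous);
        var tex = new Texture2D(w, h, TextureFormat.R8, false);
        tex.LoadRawTextureData(bytes);
        tex.Apply(false); // upload to the GPU
        return tex;
    }

    // Way 2: create the Texture2D once, overwrite its data every frame.
    public void UploadInPlace(byte[] bytes, int w, int h)
    {
        if (_tex == null) _tex = new Texture2D(w, h, TextureFormat.R8, false);
        _tex.SetPixelData(bytes, 0); // mip level 0
        _tex.Apply(false);           // upload to the GPU
    }
}
```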
Pros:
- Free to use different native output buffer formats (NV12, YUV420, RGB, ...)
Cons:
- Passing the buffer between native code and Unity3D causes GC allocations every frame
- Doesn't fully use the GPU's power
The Unity3D side creates the textures from the input video's width, height, and image format, then uses GetNativeTexturePtr() to get each Texture2D's native pointer and passes it to the native render backend; each frame we use OpenGL or DirectX to update the texture data (see the Unity-side sketch after this list).
OpenGL ways:
1. Update the texture buffer with glTexSubImage2D or glTexImage2D to update Unity3D's texture in the native OpenGL backend
2. If we use SurfaceTexture on Android, we can sample the SurfaceTexture into an FBO -> convert GL_TEXTURE_EXTERNAL_OES to GL_TEXTURE_2D -> update Unity3D's texture
3. PBO (pixel buffer object) uploads
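The Unity side of this approach might look like the sketch below; GL.IssuePluginEvent is the standard way to run plugin code on the render thread, while the SetNativeTexture/GetRenderEventFunc exports are hypothetical placeholders:

```csharp
using System;
using System.Runtime.InteropServices;
using UnityEngine;

public class NativeTextureBinder : MonoBehaviour
{
    // Hypothetical exports of the native render backend.
    [DllImport("simpleplayer")] private static extern void SetNativeTexture(IntPtr tex, int plane);
    [DllImport("simpleplayer")] private static extern IntPtr GetRenderEventFunc();

    public Texture2D yTexture; // TextureFormat.R8, sized to the video

    void Start()
    {
        // Hand the texture's native handle (GL texture id / D3D resource)
        // over to the native render backend.
        SetNativeTexture(yTexture.GetNativeTexturePtr(), 0);
    }

    void Update()
    {
        // Run the native update (e.g. glTexSubImage2D) on the render thread.
        GL.IssuePluginEvent(GetRenderEventFunc(), 1);
    }
}
```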
The multiple RenderAPI backends are built on Unity3D's NativeRenderingPlugin:
https://github.com/Unity-Technologies/NativeRenderingPlugin
There is no buffer passing or conversion between native code and Unity3D, so no GC happens, and it's faster than updating the texture on the CPU.
Alternatively, use CommandBuffer.IssuePluginCustomTextureUpdateV2 to send a texture-update event to a native code plugin; with this there is no need to create a separate render-API backend per graphics API to update the texture, so it's a common path.
Native side:

```cpp
// Texture-update callback registered with IssuePluginCustomTextureUpdateV2.
static void UNITY_INTERFACE_API OnCustomTextureUpdate(int eventID, void* data)
{
    if (eventID == kUnityRenderingExtEventUpdateTextureBeginV2)
    {
        // Get one frame's plane data and hand it to Unity as the texture source.
        UnityRenderingExtTextureUpdateParamsV2* params = (UnityRenderingExtTextureUpdateParamsV2*)data;
        // params->userData is the id passed from C# (0 = Y, 1 = U, 2 = V).
        params->texData = framedata;
    }
    else if (eventID == kUnityRenderingExtEventUpdateTextureEndV2)
    {
        // Release the frame data after Unity has finished the texture update.
        free(framedata);
    }
}
```
Unity side:

```csharp
// Each frame, use the command buffer to update the Y, U, and V textures.
_command.IssuePluginCustomTextureUpdateV2(callBack, mYTexture, 0u);
_command.IssuePluginCustomTextureUpdateV2(callBack, mUTexture, 1u);
_command.IssuePluginCustomTextureUpdateV2(callBack, mVTexture, 2u);
Graphics.ExecuteCommandBuffer(_command);
_command.Clear();
// With the YUV textures, use the shader to output the RGB color.
renderRGBWithShader();
```
We decode the video frames to YUV420; the native player outputs YUV420 frame buffers.
On the Unity3D side we create three textures (Y, U, V), then a shader samples the YUV textures and applies a conversion formula to output the RGB color (a reference sketch of the formula follows the list below).
We store each YUV plane's values in the texture's R channel, so the YUV textures look red when viewed in a debugger.
YTexture : TextureFormat.R8, Y in the R channel
UTexture : TextureFormat.R8, U in the R channel
VTexture : TextureFormat.R8, V in the R channel
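As a reference for the formula, here is a CPU sketch of one common BT.601 full-range YUV-to-RGB conversion (the math the shader performs per pixel; the project's actual shader may use different coefficients):

```csharp
using UnityEngine;

public static class YuvToRgb
{
    // BT.601 full-range conversion; y, u, v are the sampled
    // R-channel values of the three plane textures, in [0, 1].
    public static Color Convert(float y, float u, float v)
    {
        float r = y + 1.402f  * (v - 0.5f);
        float g = y - 0.3441f * (u - 0.5f) - 0.7141f * (v - 0.5f);
        float b = y + 1.772f  * (u - 0.5f);
        return new Color(Mathf.Clamp01(r), Mathf.Clamp01(g), Mathf.Clamp01(b));
    }
}
```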
- Very simple video frame decoding using FFmpeg on Windows (Done)
- CPU: load the buffer and update the YUV textures each frame (Done)
- GPU: using a command buffer (Done)
- RenderAPI backend -> Windows OpenGL Core with glTexSubImage2D (Done)
- RenderAPI backend -> Windows DX11 (Done)
- RenderAPI backend -> Windows DX12 (TODO)
- RenderAPI backend -> macOS OpenGL Core (Done)
If you don't modify the native simple-player C++ code and rebuild the library, just open the scenes in the Unity editor:
- VideoPlayer-CommandBuffer scene (GPU command buffer)
- VideoPlayer-CPU.unity (normal CPU)
- VideoPlayer-OPENGL.unity (RenderAPI OpenGLCore, glTexSubImage2D)
Create an 'ffmpeglib-win' folder at simplevideodemo/FFMpegSources/ and place the FFmpeg C++ Windows libraries into this 'ffmpeglib-win' folder.
```sh
# UNIX Makefile
cmake ..

# Mac OSX
cmake -G "Xcode" ..

# Microsoft Windows (Visual Studio 2017)
cmake -G "Visual Studio 15" ..
cmake -G "Visual Studio 15 Win64" ..
```