[易语言] Looking for an expert to share a YUY2-to-RGB conversion routine, source code wanted

A recent project involved a lot of YUV-related material; here are my notes, collected from various sources.

A computer colour monitor displays colour on the same principle as a colour television: additive mixing of R (Red), G (Green) and B (Blue). Three electron beams of different intensities make the red, green and blue phosphors on the inside of the screen emit light. This representation is called the RGB colour space, and it is also the colour space used most often in multimedia computing. By the tristimulus principle, any colour F can be produced by mixing the three primaries in different amounts:

F = r[R] + g[G] + b[B]

where r, g and b are the mixing coefficients of the three primaries. When all three are 0 (weakest) the mix is black; when all three are at their maximum the mix is white. Varying r, g and b produces every colour in between.

So where does YUV come from? In modern colour television a three-tube or CCD colour camera captures the image; the signal is colour-separated, amplified and corrected to produce RGB, which a matrix circuit converts into a luminance signal Y and two colour-difference signals, R-Y (i.e. V) and B-Y (i.e. U); finally the transmitter encodes the luminance and the two colour-difference signals and sends all three over a single channel. This representation is the YUV colour space. Its important property is that the luminance signal Y and the chrominance signals U, V are separate: with only the Y component and no U, V, the picture is a black-and-white greyscale image. Colour television adopted YUV precisely so that the luminance signal Y keeps colour broadcasts compatible with black-and-white sets, which can still receive a colour signal.

The YUV/RGB conversion formulas are as follows (R, G, B all in the range 0-255):

Y =  0.299R + 0.587G + 0.114B
U = -0.147R - 0.289G + 0.436B
V =  0.615R - 0.515G - 0.100B

R = Y + 1.14V
G = Y - 0.39U - 0.58V
B = Y + 2.03U

In DirectShow the common RGB formats are RGB1, RGB4, RGB8, RGB565, RGB555, RGB24, RGB32, ARGB32, and so on; common YUV formats include YUY2, YUYV, YVYU, UYVY, AYUV, Y41P, Y411, Y211, IF09, IYUV, YV12, YVU9, YUV411, YUV420, and so on. As video media subtypes, each has a corresponding GUID; see Table 2.3.

Table 2.3 Common RGB and YUV format GUIDs

GUID                     Description
MEDIASUBTYPE_RGB1        2 colours, 1 bit per pixel, palette required
MEDIASUBTYPE_RGB4        16 colours, 4 bits per pixel, palette required
MEDIASUBTYPE_RGB8        256 colours, 8 bits per pixel, palette required
MEDIASUBTYPE_RGB565      16 bits per pixel: 5 bits R, 6 bits G, 5 bits B
MEDIASUBTYPE_RGB555      16 bits per pixel: 5 bits each for R, G, B (1 bit unused)
MEDIASUBTYPE_RGB24       24 bits per pixel, 8 bits per component
MEDIASUBTYPE_RGB32       32 bits per pixel, 8 bits per component (8 bits unused)
MEDIASUBTYPE_ARGB32      32 bits per pixel, 8 bits per component (8 bits for the Alpha channel)
MEDIASUBTYPE_YUY2        YUY2, packed 4:2:2
MEDIASUBTYPE_YUYV        YUYV (identical layout to YUY2)
MEDIASUBTYPE_YVYU        YVYU, packed 4:2:2
MEDIASUBTYPE_UYVY        UYVY, packed 4:2:2
MEDIASUBTYPE_AYUV        4:4:4 YUV with an Alpha channel
MEDIASUBTYPE_Y41P        Y41P, packed 4:1:1
MEDIASUBTYPE_Y411        Y411 (identical layout to Y41P)
MEDIASUBTYPE_Y211        Y211
MEDIASUBTYPE_IF09        IF09
MEDIASUBTYPE_IYUV        IYUV
MEDIASUBTYPE_YV12        YV12
MEDIASUBTYPE_YVU9        YVU9
The RGB formats in detail:

RGB1, RGB4 and RGB8 are palettised RGB formats. When these media types are described, the BITMAPINFOHEADER structure is usually followed by a palette (a list of colour definitions). The image data holds not actual colour values but indices into that palette. Take RGB1 (a 2-colour bitmap): if its palette defines the two colours 0x000000 (black) and 0xFFFFFF (white), then the image data (1 bit per pixel) describes the pixel colours as, say: black, black, white, white, black, white, black, white, black, white, white, white, ...

RGB565 uses 16 bits per pixel: 5 bits for R, 6 for G, 5 for B. Code usually handles a pixel as one WORD (a word is two bytes). After a pixel is read into a word, its bits are laid out as:

  high byte          low byte
  R R R R R G G G    G G G B B B B B

The components can be extracted with masks and shifts:

#define RGB565_MASK_RED    0xF800
#define RGB565_MASK_GREEN  0x07E0
#define RGB565_MASK_BLUE   0x001F

R = (wPixel & RGB565_MASK_RED)   >> 11;  // range 0-31
G = (wPixel & RGB565_MASK_GREEN) >> 5;   // range 0-63
B =  wPixel & RGB565_MASK_BLUE;          // range 0-31

RGB555 is another 16-bit RGB format; each component uses 5 bits and the remaining bit is unused. After a pixel is read into a word, its bits are:

  high byte          low byte
  X R R R R R G G    G G G B B B B B    (X is unused and can be ignored)

The components can be extracted with masks and shifts:

#define RGB555_MASK_RED    0x7C00
#define RGB555_MASK_GREEN  0x03E0
#define RGB555_MASK_BLUE   0x001F

R = (wPixel & RGB555_MASK_RED)   >> 10;  // range 0-31
G = (wPixel & RGB555_MASK_GREEN) >> 5;   // range 0-31
B =  wPixel & RGB555_MASK_BLUE;          // range 0-31

RGB24 uses 24 bits per pixel, 8 bits per component, each in the range 0-255. Note that in memory the components are ordered BGR BGR BGR... A pixel is usually handled through the RGBTRIPLE structure, defined as:

typedef struct tagRGBTRIPLE {
    BYTE rgbtBlue;   // blue component
    BYTE rgbtGreen;  // green component
    BYTE rgbtRed;    // red component
} RGBTRIPLE;

RGB32 uses 32 bits per pixel; the RGB components take 8 bits each and the remaining 8 bits are used as an Alpha channel or left unused (ARGB32 is RGB32 with the Alpha channel in use). Note that in memory the order is BGRA BGRA BGRA... A pixel is usually handled through the RGBQUAD structure, defined as:

typedef struct tagRGBQUAD {
    BYTE rgbBlue;      // blue component
    BYTE rgbGreen;     // green component
    BYTE rgbRed;       // red component
    BYTE rgbReserved;  // reserved (Alpha channel, or ignored)
} RGBQUAD;

Now the YUV formats. They fall into two broad families: packed and planar. Packed formats store the Y, U and V samples in a single array, usually with a few neighbouring pixels grouped into a macro-pixel; planar formats store the three components in three separate arrays, like three stacked planes. In Table 2.3, YUY2 through Y211 are packed formats, and IF09 through YVU9 are planar. (Note: in the descriptions below the YUV components carry subscripts; Y0, U0, V0 are the components of the first pixel, Y1, U1, V1 those of the second, and so on.)

YUY2 (and YUYV) keeps a Y sample for every pixel, while U and V are sampled once per two pixels horizontally. A macropixel is 4 bytes and represents 2 pixels. (4:2:2 means 4 Y samples, 2 U samples and 2 V samples per macropixel group.) The component order in the image data is:

Y0 U0 Y1 V0    Y2 U2 Y3 V2 ...
YVYU is similar to YUY2, only the component order in the image data differs:

Y0 V0 Y1 U0    Y2 V2 Y3 U2 ...

UYVY is also similar to YUY2, again with a different component order:

U0 Y0 V0 Y1    U2 Y2 V2 Y3 ...

AYUV carries an Alpha channel and keeps full YUV samples for every pixel. The image data is laid out as:

A0 Y0 U0 V0    A1 Y1 U1 V1 ...

Y41P (and Y411) keeps a Y sample for every pixel, while U and V are sampled once per four pixels horizontally. A macropixel is 12 bytes and represents 8 pixels. The component order in the image data is:

U0 Y0 V0 Y1    U4 Y2 V4 Y3    Y4 Y5 Y6 Y7 ...

Y211 samples Y on every second pixel horizontally, and U and V on every fourth. A macropixel is 4 bytes and represents 4 pixels. The component order is:

Y0 U0 Y2 V0    Y4 U4 Y6 V4 ...

YVU9 keeps a Y sample for every pixel; for chroma, the image is divided into 4x4 blocks and each block yields one U and one V sample. In memory the whole Y plane comes first, followed by the V plane and then the U plane. IF09 is similar to YVU9.

IYUV keeps a Y sample for every pixel; for chroma, the image is divided into 2x2 blocks, each yielding one U and one V sample. YV12 is similar to IYUV.

YUV411 and YUV420 are mostly seen in DV data; the former is used for NTSC, the latter for PAL. YUV411 keeps a Y sample for every pixel and samples U and V once per four pixels horizontally. YUV420 does not mean the V component is sampled zero times; compared with YUV411 it doubles the horizontal chroma sampling rate but halves the vertical chroma resolution by alternating U and V sample lines.

A detailed English description of the YUV formats and their memory layout follows:

YUV formats fall into two distinct groups: the packed formats, where Y, U (Cb) and V (Cr) samples are packed together into macropixels which are stored in a single array; and the planar formats, where each component is stored as a separate array, the final image being a fusing of the three separate planes.

In the diagrams below, the numerical suffix attached to each Y, U or V sample indicates the sampling position across the image line; for example, V0 indicates the leftmost V sample and Yn indicates the Y sample at the (n+1)th pixel from the left.

Subsampling intervals in the horizontal and vertical directions may merit some explanation. The horizontal subsampling interval describes how frequently a sample of that component is taken across a line, while the vertical interval describes on which lines samples are taken. For example, UYVY has a horizontal subsampling period of 2 for both the U and V components, indicating that U and V samples are taken for every second pixel across a line; their vertical subsampling period is 1, indicating that U and V samples are taken on each line of the image. For YVU9, though, the vertical subsampling interval is 4: U and V samples are only taken on every fourth line of the original image. Since the horizontal sampling period is also 4, a single U and a single V sample are taken for each square block of 16 image pixels. Also, if you are interested in YCrCb to RGB conversion, you may find
helpful. People reading this page may be interested in a freeware codec from Drastic Technologies which allegedly handles the vast majority of the YUV formats listed here; I have not tried it myself.

Packed YUV Formats

Label   FOURCC in hex   Bits/pixel   Description
AYUV    -               32    Combined YUV and alpha.
CLJR    0x524A4C43      8     Cirrus Logic format with 4 pixels packed into a u_int32; a form of YUV 4:1:1 with less than 8 bits per Y, U and V sample.
cyuv    -               16    Essentially a copy of UYVY except that the sense of the height is reversed: the image is upside down with respect to the UYVY version.
GREY    -               8     Apparently a duplicate of Y800 (and also, presumably, "Y8").
IRAW    -               ?     Intel uncompressed YUV. I have no information on this format.
IUYV    -               16    Interlaced version of UYVY (line order 0, 2, 4, ..., 1, 3, 5, ...), registered by Silviu Brinzei.
IY41    -               12    Interlaced version of Y41P (line order 0, 2, 4, ..., 1, 3, 5, ...), registered by Silviu Brinzei.
IYU1    -               12    12 bit format used in mode 2 of the IEEE 1394 Digital Camera 1.04 spec.
IYU2    -               24    24 bit format used in mode 0 of the IEEE 1394 Digital Camera 1.04 spec.
HDYC    -               16    YUV 4:2:2 (Y sample at every pixel, U and V sampled at every second pixel horizontally on each line); a macropixel contains 2 pixels in 1 u_int32. A duplicate of UYVY except that the colour components use the BT709 colour space (as used in HD video).
UYNV    0x564E5955      16    A direct copy of UYVY, registered by NVidia to work around problems in some old codecs which did not like hardware offering more than 2 UYVY surfaces.
UYVP    -               24?   YCbCr 4:2:2 extended precision, 10 bits per component, in U0Y0V0Y1 order. Registered by Evans & Sutherland (awaiting confirmation of the component packing structure).
UYVY    -               16    YUV 4:2:2 (Y sample at every pixel, U and V sampled at every second pixel horizontally on each line); a macropixel contains 2 pixels in 1 u_int32.
V210    -               32    10-bit 4:2:2 YCrCb, equivalent to the Quicktime format of the same name.
V422    -               16    I am told that this is an upside down version of UYVY.
V655    -               16?   16 bit YUV 4:2:2 format registered by Vitec Multimedia. I have no information on the component ordering or packing.
VYUY    -               ?     ATI packed YUV data (format unknown).
Y422    -               16    Direct copy of UYVY as used by the ADS Technologies Pyro WebCam firewire camera.
YUY2    -               16    YUV 4:2:2 as for UYVY but with different component ordering within the u_int32 macropixel.
YUYV    -               16    Duplicate of YUY2.
YUNV    0x564E5559      16    A direct copy of YUY2, registered by NVidia to work around problems in some old codecs which did not like hardware offering more than 2 YUY2 surfaces.
YVYU    -               16    YUV 4:2:2 as for UYVY but with different component ordering within the u_int32 macropixel.
Y41P    -               12    YUV 4:1:1 (Y sample at every pixel, U and V sampled at every fourth pixel horizontally on each line); a macropixel contains 8 pixels in 3 u_int32s.
Y411    -               12    YUV 4:1:1 with a packed 6 byte / 4 pixel macroblock structure.
Y211    -               8     Packed YUV format with Y sampled at every second pixel across each line and U and V sampled at every fourth pixel.
Y41T    -               12    Format as for Y41P but the lsb of each Y component is used to signal pixel transparency.
Y42T    -               16    Format as for UYVY but the lsb of each Y component is used to signal pixel transparency.
YUVP    -               24?   YCbCr 4:2:2 extended precision, 10 bits per component, in Y0U0Y1V0 order. Registered by Evans & Sutherland.
Y800    -               8     Simple, single Y plane for monochrome images.
Y8      -               8     Duplicate of Y800 as far as I can see.
Y16     -               16    16-bit uncompressed greyscale image.

AYUV

This is a 4:4:4 YUV format with 8 bit samples for each component, along with an 8 bit alpha blend value per pixel. Component ordering is A Y U V (as the name suggests).

UYVY (and Y422 and UYNV and HDYC)

UYVY is probably the most popular of the various YUV 4:2:2 formats. It is output as the format of choice by the Radius Cinepak codec and is often the second choice of software MPEG codecs after YV12. Y422 and UYNV appear to be direct equivalents of the original UYVY. HDYC is equivalent in layout, but pixels are described using the BT709 colour space as used in HD video systems, rather than the BT470 SD video colour space typically used; see the "Video Formats" section of the DeckLink DirectShow SDK documentation.

                   Horizontal   Vertical
Y sample period    1            1
V sample period    2            1
U sample period    2            1

Effective bits per pixel: 16. Positive biHeight implies a top-down image (top line first).

IUYV

IUYV is basically the same as UYVY except that the data is interlaced: lines are ordered 0, 2, 4, ..., 1, 3, 5, ... instead of 0, 1, 2, 3, 4, 5, ...

cyuv

This FOURCC, allegedly registered by Creative Labs, is essentially a duplicate of UYVY. The only difference is that the image is flipped vertically, the first u_int16 in the buffer representing the bottom line of the viewed image. Note that the FOURCC is composed of lower case characters (so much for the upper case convention!).

                   Horizontal   Vertical
Y sample period    1            1
V sample period    2            1
U sample period    2            1

Effective bits per pixel: 16. Positive biHeight implies a bottom-up image (bottom line first).

YUY2 (and YUNV and V422 and YUYV)

YUY2 is another in the family of YUV 4:2:2 formats and appears to be used by all the same codecs as UYVY.
                   Horizontal   Vertical
Y sample period    1            1
V sample period    2            1
U sample period    2            1

Effective bits per pixel: 16. Positive biHeight implies a top-down image (top line first).
There is a separate page which contains information on playing AVIs that include video stored in YUY2 format.

YVYU

Despite being a simple byte-ordering change from YUY2 or UYVY, YVYU seems to be seen somewhat less often than the other two formats defined above.

                   Horizontal   Vertical
Y sample period    1            1
V sample period    2            1
U sample period    2            1

Effective bits per pixel: 16. Positive biHeight implies a top-down image (top line first).

Y41P

This YUV 4:1:1 format is registered as a PCI standard format. Mediamatics' MPEG 1 engine is the only codec (other than a Brooktree internal one) that I know of that can generate it.

                   Horizontal   Vertical
Y sample period    1            1
V sample period    4            1
U sample period    4            1

Effective bits per pixel: 12. Positive biHeight implies a top-down image (top line first).

Y411

I was originally told that this was a duplicate of Y41P; however, it seems that this is not the case after all. Y411 is a packed YUV 4:1:1 format with a 6 byte macroblock structure containing 4 pixels. Component packing order is:

U2 Y0 Y1 V2 Y2 Y3

I have not been able to find 100% confirmation of the position of the U and V samples; I suspect that the chroma samples are probably both taken at the position of Y2, but this is a guess just now. I have recently been informed that this format is identical to IYU1.

                   Horizontal   Vertical
Y sample period    1            1
V sample period    4            1
U sample period    4            1

Effective bits per pixel: 12. Positive biHeight implies a top-down image (top line first).

IY41

IY41 is basically the same as Y41P except that the data is interlaced: lines are ordered 0, 2, 4, ..., 1, 3, 5, ... instead of 0, 1, 2, 3, 4, 5, ...

Y211

I have yet to find anything that will output Y211! The format looks very much like the missing YUV 4:2:2 ordering, but Y samples are only taken on every second pixel. Think of it as a half-width 4:2:2 image and double the width on display.

                   Horizontal   Vertical
Y sample period    2            1
V sample period    4            1
U sample period    4            1

Effective bits per pixel: 8. Positive biHeight implies a top-down image (top line first).

Y41T

This format is identical to Y41P except that the least significant bit of each Y component forms a chromakey channel: if this bit is set, the YUV image pixel is displayed; if cleared, the pixel is transparent (and the underlying graphics pixel is shown). Positive biHeight implies a top-down image (top line first).

Y42T

This format is identical to UYVY except that the least significant bit of each Y component forms a chromakey channel, exactly as for Y41T. Positive biHeight implies a top-down image (top line first).

CLJR

Cirrus Logic's format packs 4 pixel samples into a single u_int32 by sacrificing precision on each sample: Y samples are truncated to 5 bits each, U and V have 6 bits per sample.

                   Horizontal   Vertical
Y sample period    1            1
V sample period    4            1
U sample period    4            1

Effective bits per pixel: 8. Positive biHeight implies a top-down image (top line first).

IYU1

The IYU1 format is a 12 bit format used in mode 2 of the IEEE 1394 Digital Camera 1.04 spec ("1394-based Digital Camera Specification, Version 1.04, August 9, 1996", page 14). The format, a duplicate of Y411, is YUV 4:1:1 packed according to the following pattern:

Byte:    0       1       2       3       4       5
Sample:  U(K+0)  Y(K+0)  Y(K+1)  V(K+0)  Y(K+2)  Y(K+3)

                   Horizontal   Vertical
Y sample period    1            1
V sample period    4            1
U sample period    4            1

IYU2

The IYU2 format is a 24 bit format used in mode 0 of the IEEE 1394 Digital Camera 1.04 spec (ibid.). The format is YUV 4:4:4 packed according to the following pattern:

Byte:    0       1       2       3       4       5
Sample:  U(K+0)  Y(K+0)  V(K+0)  U(K+1)  Y(K+1)  V(K+1)

                   Horizontal   Vertical
Y sample period    1            1
V sample period    1            1
U sample period    1            1

YUVP

This is another format similar to YUY2 and its aliases. The difference here is that each Y, U and V sample is 10 bits rather than 8. I am still waiting to hear how the samples are packed: is a macropixel just 5 bytes long with all the samples packed together, or is there more to it than this?

V210

This Quicktime format has been implemented for Windows. It is a 10 bit per component YCrCb 4:2:2 format in which samples for 5 pixels are packed into 4 4-byte little-endian words.
Rather than repeat the details here, I suggest looking at the original Quicktime description. Supposedly there are images described as "YUV10" that are formatted similarly to this aside from the byte ordering (a correspondent mentioned having to run ntoh on the pixel data to reformat from YUV10 to V210; presumably a big-endian to little-endian conversion).

Planar YUV Formats

Label   FOURCC in hex   Bits/pixel   Description
YVU9    -            9     8 bit Y plane followed by 8 bit 4x4 subsampled V and U planes. Registered by Intel.
YUV9    -            9?    Registered by Intel; this is the format used internally by the Indeo video codec.
IF09    -            9.5   As YVU9, but an additional 4x4 subsampled plane is appended containing delta information relative to the last frame. (Bpp is reported as 9.)
YV16    -            16    8 bit Y plane followed by 8 bit 2x1 subsampled V and U planes.
YV12    -            12    8 bit Y plane followed by 8 bit 2x2 subsampled V and U planes.
I420    -            12    8 bit Y plane followed by 8 bit 2x2 subsampled U and V planes.
IYUV    -            12    Duplicate FOURCC, identical to I420.
NV12    0x3231564E   12    8-bit Y plane followed by an interleaved U/V plane with 2x2 subsampling.
NV21    0x3132564E   12    As NV12 with U and V reversed in the interleaved plane.
IMC1    0x31434D49   12    As YV12 except the U and V planes each have the same stride as the Y plane.
IMC2    0x32434D49   12    Similar to IMC1 except that the U and V lines are interleaved at half-stride boundaries.
IMC3    0x33434D49   12    As IMC1 except that U and V are swapped.
IMC4    0x34434D49   12    As IMC2 except that U and V are swapped.
CLPL    0x4C504C43   12    Format similar to YV12 but including a level of indirection.
Y41B    -            12?   Weitek format listed as "YUV 4:1:1 planar". I have no other information on this format.
Y42B    -            16?   Weitek format listed as "YUV 4:2:2 planar". I have no other information on this format.
Y800    -            8     Simple, single Y plane for monochrome images.
Y8      -            8     Duplicate of Y800 as far as I can see.
CXY1    -            12    Awaiting clarification of format.
CXY2    -            16    Awaiting clarification of format.

YVU9

This format dates back to the days of the ActionMedia II adapter and comprises an NxN plane of Y samples, 8 bits each, followed by (N/4)x(N/4) V and U planes.

                   Horizontal   Vertical
Y sample period    1            1
V sample period    4            4
U sample period    4            4
Positive biHeight implies a top-down image (top line first). ATI has a codec supporting this format.

YUV9

YUV9 is described as "the color encoding scheme used in Indeo video technology. The YUV9 format stores information in 4x4 pixel blocks. Sixteen bytes of luminance are stored for every 1 byte of chrominance. For example, a 640x480 image will have 307,200 bytes of luminance and 19,200 bytes of chrominance." This sounds exactly the same as YVU9 to me. Anyone know if there is any difference?

IF09

A derivative of YVU9, IF09 contains the basic 3 planes for Y, V and U, followed by an additional (N/4)x(N/4) plane of "skip blocks". This final plane forms a basic delta encoding scheme which a displayer can use to decide which pixels in the image are unchanged from the previously displayed frame. The strange number of bits per pixel listed for the format results from the fact that an NxN image is described using N^2 + 3(N/4)^2 bytes. This format is generated by Intel's Indeo codecs, though users should beware: the original 32 bit Indeo 3.2 shipped with Windows 95 and the beta levels of Indeo 4.1 contain bugs which cause them to generate protection faults when using IF09. Fixed versions of these codecs are available from Intel.

                   Horizontal   Vertical
Y sample period    1            1
V sample period    4            4
U sample period    4            4
Positive biHeight implies a top-down image (top line first).

Delta plane definition: to be completed...

YV12

This is the format of choice for many software MPEG codecs. It comprises an NxM Y plane followed by (N/2)x(M/2) V and U planes.

                   Horizontal   Vertical
Y sample period    1            1
V sample period    2            2
U sample period    2            2

Positive biHeight implies a top-down image (top line first). ATI says they have a codec for this format, but I can't find it on their site.

YV16

This format is basically a version of YV12 with higher chroma resolution. It comprises an NxM Y plane followed by (N/2)xM V and U planes.

                   Horizontal   Vertical
Y sample period    1            1
V sample period    2            1
U sample period    2            1

IYUV and I420

These formats are identical to YV12 except that the U and V plane order is reversed. They comprise an NxN Y plane followed by (N/2)x(N/2) U and V planes. Full marks to Intel for registering the same format twice, and full marks to Microsoft for not picking up on this and rejecting the second registration. (Note: there is some confusion over these formats thanks to the definitions on
one Microsoft page, which tends to suggest that the two FOURCCs are different: one is described as a 4:2:0 format while the other is described as 4:1:1. Later, however, the same page states that YV12 is the same as both of these with the U and V plane order reversed. I would consider 4:2:0 to imply 1 chroma sample for every 2x2 luma block, and 4:1:1 to imply 1 chroma sample for every 4x1 luma block, but it seems the Microsoft writer may have been using the terms interchangeably. If you know these formats, please confirm whether the definition here is correct or whether I need to update one or the other.)

                   Horizontal   Vertical
Y sample period    1            1
V sample period    2            2
U sample period    2            2
Positive biHeight implies a top-down image (top line first).

CLPL

This format introduces an extra level of indirection in the process of accessing YUV pixels in the surface. Locking the DirectDraw or DCI CLPL surface returns a pointer which itself points to three other pointers; these point respectively to an NxN Y plane, an (N/2)x(N/2) U plane and an (N/2)x(N/2) V plane. The Y plane pointer retrieved is (allegedly) valid even when the surface is subsequently unlocked, but the U and V pointers can only be used with a lock held (as you should be doing anyway if adhering to the DirectDraw/DCI spec).

                   Horizontal   Vertical
Y sample period    1            1
V sample period    2            2
U sample period    2            2
Positive biHeight implies a top-down image (top line first).

Y800

This format contains only a single, 8 bit Y plane for monochrome images. Apparent duplicate FOURCCs are "Y8" and "GREY".

                   Horizontal   Vertical
Y sample period    1            1
V sample period    N/A          N/A
U sample period    N/A          N/A

Y16

This format contains only a single, 16 bit Y plane for monochrome images. Each pixel is represented by a 16 bit, little-endian luminance sample.

                   Horizontal   Vertical
Y sample period    1            1
V sample period    N/A          N/A
U sample period    N/A          N/A

NV12

YUV 4:2:0 image with a plane of 8 bit Y samples followed by an interleaved U/V plane containing 8 bit 2x2 subsampled colour-difference samples.

                        Horizontal   Vertical
Y sample period         1            1
V (Cr) sample period    2            2
U (Cb) sample period    2            2

Microsoft defines this format as follows: "A format in which all Y samples are found first in memory as an array of unsigned char with an even number of lines (possibly with a larger stride for memory alignment), followed immediately by an array of unsigned char containing interleaved Cb and Cr samples (such that if addressed as a little-endian WORD type, Cb would be in the LSBs and Cr would be in the MSBs) with the same total stride as the Y samples. This is the preferred 4:2:0 pixel format."

NV21

YUV 4:2:0 image with a plane of 8 bit Y samples followed by an interleaved V/U plane containing 8 bit 2x2 subsampled chroma samples. The same as
NV12 except the interleave order of U and V is reversed.

                        Horizontal   Vertical
Y sample period         1            1
V (Cr) sample period    2            2
U (Cb) sample period    2            2

Microsoft defines this format as follows: "The same as NV12, except that Cb and Cr samples are swapped so that the chroma array of unsigned char would have Cr followed by Cb for each sample (such that if addressed as a little-endian WORD type, Cr would be in the LSBs and Cb would be in the MSBs)."

IMC1

Similar to YV12, this format comprises an NxN Y plane followed by (N/2)x(N/2) U and V planes. The U and V planes have the same stride as the Y plane and are restricted to start on 16-line boundaries.

                        Horizontal   Vertical
Y sample period         1            1
V (Cr) sample period    2            2
U (Cb) sample period    2            2

Microsoft defines this format as follows: "The same as YV12, except that the stride of the Cb and Cr planes is the same as the stride in the Y plane. The Cb and Cr planes are also restricted to fall on memory boundaries that are a multiple of 16 lines (a restriction that has no effect on usage for the standard formats, since the standards all use 16x16 macroblocks)."

IMC2

Similar to IMC1, this format comprises an NxN Y plane followed by "rectangularly adjacent" (N/2)x(N/2) U and V planes. Lines of U and V pixels are interleaved at half-stride boundaries below the Y plane.

                        Horizontal   Vertical
Y sample period         1            1
V (Cr) sample period    2            2
U (Cb) sample period    2            2

Microsoft defines this format as follows: "The same as IMC1, except that Cb and Cr lines are interleaved at half-stride boundaries. In other words, each full-stride line in the chrominance area starts with a line of Cr, followed by a line of Cb that starts at the next half-stride boundary. (This is a more address-space-efficient format than IMC1, cutting the chrominance address space in half, and thus cutting the total address space by 25%.) This runs a close second in preference relative to NV12, but
NV12 appears to be more popular."

IMC3

The same as IMC1 except for swapping the U and V order.

IMC4

The same as IMC2
except for swapping the U and V order.

CXY1

Planar YUV 4:1:1 format registered by Conexant. Awaiting clarification of pixel component ordering.

CXY2

Planar YUV 4:2:2 format registered by Conexant. Awaiting clarification of pixel component ordering.

Here is a worked example. YUY2 is a common output format of TV-standard capture devices and of many cameras, and for processing we often need to convert it to RGB. The relationship between YUY2 (YUV) and RGB is as follows.

YUY2 (YUV) to RGB:

C = Y - 16
D = U - 128
E = V - 128

R = clip(( 298 * C           + 409 * E + 128) >> 8)
G = clip(( 298 * C - 100 * D - 208 * E + 128) >> 8)
B = clip(( 298 * C + 516 * D           + 128) >> 8)

where clip() clamps its argument to the range 0-255.

RGB to YUY2 (YUV):

Y = ( (  66 * R + 129 * G +  25 * B + 128) >> 8) +  16
U = ( ( -38 * R -  74 * G + 112 * B + 128) >> 8) + 128
V = ( ( 112 * R -  94 * G -  18 * B + 128) >> 8) + 128
The two formulas above are applied in the conversion functions

int YUV2RGB(void* pYUV, void* pRGB, int width, int height, bool alphaYUV, bool alphaRGB);
int RGB2YUV(void* pRGB, void* pYUV, int width, int height, bool alphaYUV, bool alphaRGB);

given below.
When capturing data from a source such as a camera, we often want to do some image processing directly in YUY2 (YUV) space, i.e. to do in YUY2 (YUV) what we could otherwise do in RGB. Take blending as an example: overlaying two YUY2 (YUV) images that carry transparency, to get the same result as compositing in RGB space.

When compositing in RGB, the background (BG) is normally opaque while the foreground (FG) carries transparency. In RGB space this is simply:

Rdest = Rfg*alpha + Rbg*(1-alpha);
Gdest = Gfg*alpha + Gbg*(1-alpha);
Bdest = Bfg*alpha + Bbg*(1-alpha);
// Rdest, Gdest, Bdest are the composited pixel values
Substituting the RGB-to-YUV formulas

Y = ( (  66 * R + 129 * G +  25 * B + 128) >> 8) +  16
U = ( ( -38 * R -  74 * G + 112 * B + 128) >> 8) + 128
V = ( ( 112 * R -  94 * G -  18 * B + 128) >> 8) + 128
into the blend, we can derive

(Ydest-16)<<8  = ((Yfg-16)<<8)*alpha  + ((Ybg-16)<<8)*(1-alpha);
(Udest-128)<<8 = ((Ufg-128)<<8)*alpha + ((Ubg-128)<<8)*(1-alpha);
(Vdest-128)<<8 = ((Vfg-128)<<8)*alpha + ((Vbg-128)<<8)*(1-alpha);

which gives

Ydest = (Yfg-16)*alpha  + (Ybg-16)*(1-alpha)  + 16;
Udest = (Ufg-128)*alpha + (Ubg-128)*(1-alpha) + 128;
Vdest = (Vfg-128)*alpha + (Vbg-128)*(1-alpha) + 128;
This blend is performed by the function

int YUVBlending(void* pBGYUV, void* pFGYUV, int width, int height, bool alphaBG, bool alphaFG)

Since this article targets data captured from a camera, the data is in YUY2 format: 4 bytes describe the YUV information of two pixels, laid out as Y1 U1 Y2 V1, so pixel 1 is (Y1, U1, V1) and pixel 2 is (Y2, U1, V1); the two pixels share the U and V samples.

Here we assume that a YUV format carrying alpha transparency uses 6 bytes for two pixels, holding their YUV and alpha information laid out as Y1 U1 Y2 V1 alpha1 alpha2, where pixel 1 is (Y1, U1, V1, alpha1) and pixel 2 is (Y2, U1, V1, alpha2); alpha is the transparency of the corresponding pixel.

For RGB images carrying alpha transparency, we assume a 32-bit BMP image: 4 bytes per pixel, holding B, G, R and alpha.

The concrete implementation of the functions above is:
//////////////////////////////////////////////////////////////////////////
// YUV2RGB
// pYUV         point to the YUV data
// pRGB         point to the RGB data
// width        width of the picture
// height       height of the picture
// alphaYUV     is there an alpha channel in YUV
// alphaRGB     is there an alpha channel in RGB
//////////////////////////////////////////////////////////////////////////
int YUV2RGB(void* pYUV, void* pRGB, int width, int height, bool alphaYUV, bool alphaRGB)
if (NULL == pYUV)
return -1;
unsigned char* pYUVData = (unsigned char *)pYUV;
unsigned char* pRGBData = (unsigned char *)pRGB;
if (NULL == pRGBData)
if (alphaRGB)
pRGBData = new unsigned char[width*height*4];
pRGBData = new unsigned char[width*height*3];
int Y1, U1, V1, Y2, alpha1, alpha2, R1, G1, B1, R2, G2, B2;
int C1, D1, E1, C2;
if (alphaRGB)
if (alphaYUV)
for (int i=0; i& ++i)
for (int j=0; j&width/2; ++j)
Y1 = *(pYUVData+i*width*3+j*6);
//i*width*3 = i*(width/2)*6
U1 = *(pYUVData+i*width*3+j*6+1);
Y2 = *(pYUVData+i*width*3+j*6+2);
V1 = *(pYUVData+i*width*3+j*6+3);
alpha1 = *(pYUVData+i*width*3+j*6+4);
alpha2 = *(pYUVData+i*width*3+j*6+5);
C1 = Y1-16;
C2 = Y2-16;
D1 = U1-128;
E1 = V1-128;
R1 = ((298*C1 + 409*E1 + 128)&&8&255 ? 255 : (298*C1 + 409*E1 + 128)&&8);
G1 = ((298*C1 - 100*D1 - 208*E1 + 128)&&8&255 ? 255 : (298*C1 - 100*D1 - 208*E1 + 128)&&8);
B1 = ((298*C1+516*D1 +128)&&8&255 ? 255 : (298*C1+516*D1 +128)&&8);
R2 = ((298*C2 + 409*E1 + 128)&&8&255 ? 255 : (298*C2 + 409*E1 + 128)&&8);
G2 = ((298*C2 - 100*D1 - 208*E1 + 128)&&8&255 ? 255 : (298*C2 - 100*D1 - 208*E1 + 128)&&8);
B2 = ((298*C2 + 516*D1 +128)&&8&255 ? 255 : (298*C2 + 516*D1 +128)&&8);
*(pRGBData+(height-i-1)*width*4+j*8+2) = R1&0 ? 0 : R1;
*(pRGBData+(height-i-1)*width*4+j*8+1) = G1&0 ? 0 : G1;
*(pRGBData+(height-i-1)*width*4+j*8) = B1&0 ? 0 : B1;
*(pRGBData+(height-i-1)*width*4+j*8+3) = alpha1;
*(pRGBData+(height-i-1)*width*4+j*8+6) = R2&0 ? 0 : R2;
*(pRGBData+(height-i-1)*width*4+j*8+5) = G2&0 ? 0 : G2;
*(pRGBData+(height-i-1)*width*4+j*8+4) = B2&0 ? 0 : B2;
*(pRGBData+(height-i-1)*width*4+j*8+7) = alpha2;
int alpha = 255;
for (int i=0; i& ++i)
for (int j=0; j&width/2; ++j)
Y1 = *(pYUVData+i*width*2+j*4);
U1 = *(pYUVData+i*width*2+j*4+1);
Y2 = *(pYUVData+i*width*2+j*4+2);
V1 = *(pYUVData+i*width*2+j*4+3);
C1 = Y1-16;
C2 = Y2-16;
D1 = U1-128;
E1 = V1-128;
R1 = ((298*C1 + 409*E1 + 128)&&8&255 ? 255 : (298*C1 + 409*E1 + 128)&&8);
G1 = ((298*C1 - 100*D1 - 208*E1 + 128)&&8&255 ? 255 : (298*C1 - 100*D1 - 208*E1 + 128)&&8);
B1 = ((298*C1+516*D1 +128)&&8&255 ? 255 : (298*C1+516*D1 +128)&&8);
R2 = ((298*C2 + 409*E1 + 128)&&8&255 ? 255 : (298*C2 + 409*E1 + 128)&&8);
                G2 = ((298*C2 - 100*D1 - 208*E1 + 128)>>8) > 255 ? 255 : ((298*C2 - 100*D1 - 208*E1 + 128)>>8);
                B2 = ((298*C2 + 516*D1 + 128)>>8) > 255 ? 255 : ((298*C2 + 516*D1 + 128)>>8);
                *(pRGBData+(height-i-1)*width*4+j*8+2) = R1 < 0 ? 0 : R1;
                *(pRGBData+(height-i-1)*width*4+j*8+1) = G1 < 0 ? 0 : G1;
                *(pRGBData+(height-i-1)*width*4+j*8)   = B1 < 0 ? 0 : B1;
                *(pRGBData+(height-i-1)*width*4+j*8+3) = 255;  // alpha; value lost in the scraped post, 255 (opaque) fits a source without alpha
                *(pRGBData+(height-i-1)*width*4+j*8+6) = R2 < 0 ? 0 : R2;
                *(pRGBData+(height-i-1)*width*4+j*8+5) = G2 < 0 ? 0 : G2;
                *(pRGBData+(height-i-1)*width*4+j*8+4) = B2 < 0 ? 0 : B2;
                *(pRGBData+(height-i-1)*width*4+j*8+7) = 255;  // alpha, as above
            }
        }
    }
    else    // no alpha channel in the RGB output (RGB24)
    {
        if (alphaYUV)   // YUV with alpha: 6 bytes per 2 pixels (Y0 U0 Y1 V0 A0 A1)
        {
            for (int i=0; i<height; ++i)
            {
                for (int j=0; j<width/2; ++j)
                {
                    // note: the original post indexed these reads with j*4, which does not match
                    // the 6-byte macropixel layout written by RGB2YUV below; fixed to j*6
                    Y1 = *(pYUVData+i*width*3+j*6);
                    U1 = *(pYUVData+i*width*3+j*6+1);
                    Y2 = *(pYUVData+i*width*3+j*6+2);
                    V1 = *(pYUVData+i*width*3+j*6+3);
                    // the alpha bytes at j*6+4 and j*6+5 are ignored: RGB24 has no alpha channel
                    C1 = Y1 - 16;
                    C2 = Y2 - 16;
                    D1 = U1 - 128;
                    E1 = V1 - 128;
                    R1 = ((298*C1 + 409*E1 + 128)>>8) > 255 ? 255 : ((298*C1 + 409*E1 + 128)>>8);
                    G1 = ((298*C1 - 100*D1 - 208*E1 + 128)>>8) > 255 ? 255 : ((298*C1 - 100*D1 - 208*E1 + 128)>>8);
                    B1 = ((298*C1 + 516*D1 + 128)>>8) > 255 ? 255 : ((298*C1 + 516*D1 + 128)>>8);
                    R2 = ((298*C2 + 409*E1 + 128)>>8) > 255 ? 255 : ((298*C2 + 409*E1 + 128)>>8);
                    G2 = ((298*C2 - 100*D1 - 208*E1 + 128)>>8) > 255 ? 255 : ((298*C2 - 100*D1 - 208*E1 + 128)>>8);
                    B2 = ((298*C2 + 516*D1 + 128)>>8) > 255 ? 255 : ((298*C2 + 516*D1 + 128)>>8);
                    *(pRGBData+(height-i-1)*width*3+j*6+2) = R1 < 0 ? 0 : R1;
                    *(pRGBData+(height-i-1)*width*3+j*6+1) = G1 < 0 ? 0 : G1;
                    *(pRGBData+(height-i-1)*width*3+j*6)   = B1 < 0 ? 0 : B1;
                    *(pRGBData+(height-i-1)*width*3+j*6+5) = R2 < 0 ? 0 : R2;
                    *(pRGBData+(height-i-1)*width*3+j*6+4) = G2 < 0 ? 0 : G2;
                    *(pRGBData+(height-i-1)*width*3+j*6+3) = B2 < 0 ? 0 : B2;
                }
            }
        }
        else    // plain YUY2: 4 bytes per 2 pixels (Y0 U0 Y1 V0)
        {
            for (int i=0; i<height; ++i)
            {
                for (int j=0; j<width/2; ++j)
                {
                    Y1 = *(pYUVData+i*width*2+j*4);
                    U1 = *(pYUVData+i*width*2+j*4+1);
                    Y2 = *(pYUVData+i*width*2+j*4+2);
                    V1 = *(pYUVData+i*width*2+j*4+3);
                    C1 = Y1 - 16;
                    C2 = Y2 - 16;
                    D1 = U1 - 128;
                    E1 = V1 - 128;
                    R1 = ((298*C1 + 409*E1 + 128)>>8) > 255 ? 255 : ((298*C1 + 409*E1 + 128)>>8);
                    G1 = ((298*C1 - 100*D1 - 208*E1 + 128)>>8) > 255 ? 255 : ((298*C1 - 100*D1 - 208*E1 + 128)>>8);
                    B1 = ((298*C1 + 516*D1 + 128)>>8) > 255 ? 255 : ((298*C1 + 516*D1 + 128)>>8);
                    R2 = ((298*C2 + 409*E1 + 128)>>8) > 255 ? 255 : ((298*C2 + 409*E1 + 128)>>8);
                    G2 = ((298*C2 - 100*D1 - 208*E1 + 128)>>8) > 255 ? 255 : ((298*C2 - 100*D1 - 208*E1 + 128)>>8);
                    B2 = ((298*C2 + 516*D1 + 128)>>8) > 255 ? 255 : ((298*C2 + 516*D1 + 128)>>8);
                    *(pRGBData+(height-i-1)*width*3+j*6+2) = R1 < 0 ? 0 : R1;
                    *(pRGBData+(height-i-1)*width*3+j*6+1) = G1 < 0 ? 0 : G1;
                    *(pRGBData+(height-i-1)*width*3+j*6)   = B1 < 0 ? 0 : B1;
                    *(pRGBData+(height-i-1)*width*3+j*6+5) = R2 < 0 ? 0 : R2;
                    *(pRGBData+(height-i-1)*width*3+j*6+4) = G2 < 0 ? 0 : G2;
                    *(pRGBData+(height-i-1)*width*3+j*6+3) = B2 < 0 ? 0 : B2;
                }
            }
        }
    }
    return 0;
}
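To sanity-check the fixed-point conversion used above, here is a minimal standalone sketch of the same per-pixel math (the helper names `clip255` and `yuv_to_rgb_pixel` are mine, not part of the original post): studio-range black (Y=16, U=V=128) must map to RGB (0,0,0) and white (Y=235, U=V=128) to (255,255,255).

```c
#include <assert.h>

/* Clamp an intermediate result into the 0..255 byte range. */
static int clip255(int v) { return v < 0 ? 0 : (v > 255 ? 255 : v); }

/* The same BT.601 fixed-point YUV -> RGB step the function above applies,
   isolated for a single pixel. */
static void yuv_to_rgb_pixel(int y, int u, int v, int* r, int* g, int* b)
{
    int c = y - 16, d = u - 128, e = v - 128;
    *r = clip255((298*c + 409*e + 128) >> 8);
    *g = clip255((298*c - 100*d - 208*e + 128) >> 8);
    *b = clip255((298*c + 516*d + 128) >> 8);
}
```

One caveat worth knowing: right-shifting a negative intermediate with `>> 8` is implementation-defined in C; mainstream compilers perform an arithmetic shift, and the clamp then catches the negative result either way.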
//////////////////////////////////////////////////////////////////////////
// RGB2YUV
// pRGB      point to the RGB data
// pYUV      point to the YUV data
// width     width of the picture
// height    height of the picture
// alphaYUV  is there an alpha channel in YUV
// alphaRGB  is there an alpha channel in RGB
//////////////////////////////////////////////////////////////////////////
int RGB2YUV(void* pRGB, void* pYUV, int width, int height, bool alphaYUV, bool alphaRGB)
{
    if (NULL == pRGB)
        return -1;
    unsigned char* pRGBData = (unsigned char*)pRGB;
    unsigned char* pYUVData = (unsigned char*)pYUV;
    if (NULL == pYUVData)   // note: the caller never receives this buffer, so it leaks; kept as in the original post
    {
        if (alphaYUV)
            pYUVData = new unsigned char[width*height*3];
        else
            pYUVData = new unsigned char[width*height*2];
    }
    int R1, G1, B1, R2, G2, B2, Y1, U1, Y2, V1;
    int alpha1, alpha2;
    if (alphaYUV)   // YUV output carries alpha: 6 bytes per 2 pixels (Y0 U0 Y1 V0 A0 A1)
    {
        if (alphaRGB)   // RGB32 input: copy the source alpha through
        {
            for (int i=0; i<height; ++i)
            {
                for (int j=0; j<width/2; ++j)
                {
                    B1 = *(pRGBData+(height-i-1)*width*4+j*8);
                    G1 = *(pRGBData+(height-i-1)*width*4+j*8+1);
                    R1 = *(pRGBData+(height-i-1)*width*4+j*8+2);
                    alpha1 = *(pRGBData+(height-i-1)*width*4+j*8+3);
                    B2 = *(pRGBData+(height-i-1)*width*4+j*8+4);
                    G2 = *(pRGBData+(height-i-1)*width*4+j*8+5);
                    R2 = *(pRGBData+(height-i-1)*width*4+j*8+6);
                    alpha2 = *(pRGBData+(height-i-1)*width*4+j*8+7);
                    Y1 = (((66*R1+129*G1+25*B1+128)>>8) + 16) > 255 ? 255 : (((66*R1+129*G1+25*B1+128)>>8) + 16);
                    U1 = ((((-38*R1-74*G1+112*B1+128)>>8)+((-38*R2-74*G2+112*B2+128)>>8))/2 + 128) > 255 ? 255 : ((((-38*R1-74*G1+112*B1+128)>>8)+((-38*R2-74*G2+112*B2+128)>>8))/2 + 128);
                    Y2 = (((66*R2+129*G2+25*B2+128)>>8) + 16) > 255 ? 255 : (((66*R2+129*G2+25*B2+128)>>8) + 16);
                    V1 = ((((112*R1-94*G1-18*B1+128)>>8)+((112*R2-94*G2-18*B2+128)>>8))/2 + 128) > 255 ? 255 : ((((112*R1-94*G1-18*B1+128)>>8)+((112*R2-94*G2-18*B2+128)>>8))/2 + 128);
                    *(pYUVData+i*width*3+j*6)   = Y1;
                    *(pYUVData+i*width*3+j*6+1) = U1;
                    *(pYUVData+i*width*3+j*6+2) = Y2;
                    *(pYUVData+i*width*3+j*6+3) = V1;
                    *(pYUVData+i*width*3+j*6+4) = alpha1;
                    *(pYUVData+i*width*3+j*6+5) = alpha2;
                }
            }
        }
        else    // RGB24 input: no source alpha, so write the YUV alpha bytes fully opaque
        {
            unsigned char alpha = 255;
            for (int i=0; i<height; ++i)
            {
                for (int j=0; j<width/2; ++j)
                {
                    B1 = *(pRGBData+(height-i-1)*width*3+j*6);
                    G1 = *(pRGBData+(height-i-1)*width*3+j*6+1);
                    R1 = *(pRGBData+(height-i-1)*width*3+j*6+2);
                    B2 = *(pRGBData+(height-i-1)*width*3+j*6+3);
                    G2 = *(pRGBData+(height-i-1)*width*3+j*6+4);
                    R2 = *(pRGBData+(height-i-1)*width*3+j*6+5);
                    Y1 = (((66*R1+129*G1+25*B1+128)>>8) + 16) > 255 ? 255 : (((66*R1+129*G1+25*B1+128)>>8) + 16);
                    U1 = ((((-38*R1-74*G1+112*B1+128)>>8)+((-38*R2-74*G2+112*B2+128)>>8))/2 + 128) > 255 ? 255 : ((((-38*R1-74*G1+112*B1+128)>>8)+((-38*R2-74*G2+112*B2+128)>>8))/2 + 128);
                    Y2 = (((66*R2+129*G2+25*B2+128)>>8) + 16) > 255 ? 255 : (((66*R2+129*G2+25*B2+128)>>8) + 16);
                    V1 = ((((112*R1-94*G1-18*B1+128)>>8)+((112*R2-94*G2-18*B2+128)>>8))/2 + 128) > 255 ? 255 : ((((112*R1-94*G1-18*B1+128)>>8)+((112*R2-94*G2-18*B2+128)>>8))/2 + 128);
                    *(pYUVData+i*width*3+j*6)   = Y1;
                    *(pYUVData+i*width*3+j*6+1) = U1;
                    *(pYUVData+i*width*3+j*6+2) = Y2;
                    *(pYUVData+i*width*3+j*6+3) = V1;
                    *(pYUVData+i*width*3+j*6+4) = alpha;  // value truncated in the scraped post; alpha (opaque) restored from context
                    *(pYUVData+i*width*3+j*6+5) = alpha;
                }
            }
        }
    }
    else    // YUV output is plain YUY2: 4 bytes per 2 pixels
    {
        if (alphaRGB)   // RGB32 input; the source alpha bytes (+3, +7) are simply discarded
        {
            for (int i=0; i<height; ++i)
            {
                for (int j=0; j<width/2; ++j)
                {
                    B1 = *(pRGBData+(height-i-1)*width*4+j*8);
                    G1 = *(pRGBData+(height-i-1)*width*4+j*8+1);
                    R1 = *(pRGBData+(height-i-1)*width*4+j*8+2);
                    B2 = *(pRGBData+(height-i-1)*width*4+j*8+4);
                    G2 = *(pRGBData+(height-i-1)*width*4+j*8+5);
                    R2 = *(pRGBData+(height-i-1)*width*4+j*8+6);
                    Y1 = (((66*R1+129*G1+25*B1+128)>>8) + 16) > 255 ? 255 : (((66*R1+129*G1+25*B1+128)>>8) + 16);
                    U1 = ((((-38*R1-74*G1+112*B1+128)>>8)+((-38*R2-74*G2+112*B2+128)>>8))/2 + 128) > 255 ? 255 : ((((-38*R1-74*G1+112*B1+128)>>8)+((-38*R2-74*G2+112*B2+128)>>8))/2 + 128);
                    Y2 = (((66*R2+129*G2+25*B2+128)>>8) + 16) > 255 ? 255 : (((66*R2+129*G2+25*B2+128)>>8) + 16);
                    V1 = ((((112*R1-94*G1-18*B1+128)>>8)+((112*R2-94*G2-18*B2+128)>>8))/2 + 128) > 255 ? 255 : ((((112*R1-94*G1-18*B1+128)>>8)+((112*R2-94*G2-18*B2+128)>>8))/2 + 128);
                    *(pYUVData+i*width*2+j*4)   = Y1;
                    *(pYUVData+i*width*2+j*4+1) = U1;
                    *(pYUVData+i*width*2+j*4+2) = Y2;
                    *(pYUVData+i*width*2+j*4+3) = V1;
                }
            }
        }
        else    // RGB24 input
        {
            for (int i=0; i<height; ++i)
            {
                for (int j=0; j<width/2; ++j)
                {
                    B1 = *(pRGBData+(height-i-1)*width*3+j*6);
                    G1 = *(pRGBData+(height-i-1)*width*3+j*6+1);
                    R1 = *(pRGBData+(height-i-1)*width*3+j*6+2);
                    B2 = *(pRGBData+(height-i-1)*width*3+j*6+3);
                    G2 = *(pRGBData+(height-i-1)*width*3+j*6+4);
                    R2 = *(pRGBData+(height-i-1)*width*3+j*6+5);
                    Y1 = (((66*R1+129*G1+25*B1+128)>>8) + 16) > 255 ? 255 : (((66*R1+129*G1+25*B1+128)>>8) + 16);
                    U1 = ((((-38*R1-74*G1+112*B1+128)>>8)+((-38*R2-74*G2+112*B2+128)>>8))/2 + 128) > 255 ? 255 : ((((-38*R1-74*G1+112*B1+128)>>8)+((-38*R2-74*G2+112*B2+128)>>8))/2 + 128);
                    Y2 = (((66*R2+129*G2+25*B2+128)>>8) + 16) > 255 ? 255 : (((66*R2+129*G2+25*B2+128)>>8) + 16);
                    V1 = ((((112*R1-94*G1-18*B1+128)>>8)+((112*R2-94*G2-18*B2+128)>>8))/2 + 128) > 255 ? 255 : ((((112*R1-94*G1-18*B1+128)>>8)+((112*R2-94*G2-18*B2+128)>>8))/2 + 128);
                    *(pYUVData+i*width*2+j*4)   = Y1;
                    *(pYUVData+i*width*2+j*4+1) = U1;
                    *(pYUVData+i*width*2+j*4+2) = Y2;
                    *(pYUVData+i*width*2+j*4+3) = V1;
                }
            }
        }
    }
    return 0;
}
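The inverse direction can be checked the same way. Below is a minimal sketch of the per-pixel RGB -> YUV step with the same coefficients as RGB2YUV above, but before the averaging of two neighbouring pixels' U/V that the function performs for 4:2:2 packing; the helper name `rgb_to_yuv_pixel` is an illustrative name of mine.

```c
#include <assert.h>

/* Clamp an intermediate result into the 0..255 byte range. */
static int clip255(int v) { return v < 0 ? 0 : (v > 255 ? 255 : v); }

/* BT.601 fixed-point RGB -> YUV, same coefficients as RGB2YUV above.
   Y lands in the studio range 16..235; U and V sit around the 128 offset. */
static void rgb_to_yuv_pixel(int r, int g, int b, int* y, int* u, int* v)
{
    *y = clip255(((66*r + 129*g + 25*b + 128) >> 8) + 16);
    *u = clip255(((-38*r - 74*g + 112*b + 128) >> 8) + 128);
    *v = clip255(((-(-112)*r - 94*g - 18*b + 128) >> 8) + 128); /* i.e. 112*r - 94*g - 18*b */
}
```

Black (0,0,0) must produce the neutral triple (16,128,128), and white (255,255,255) must produce (235,128,128), since the U and V coefficient rows each sum to zero.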
//////////////////////////////////////////////////////////////////////////
// YUVBlending
// pBGYUV    point to the background YUV data
// pFGYUV    point to the foreground YUV data
// width     width of the picture
// height    height of the picture
// alphaBG   is there an alpha channel in background YUV data
// alphaFG   is there an alpha channel in foreground YUV data
//////////////////////////////////////////////////////////////////////////
int YUVBlending(void* pBGYUV, void* pFGYUV, int width, int height, bool alphaBG, bool alphaFG)
{
    if (NULL == pBGYUV || NULL == pFGYUV)
        return -1;
    unsigned char* pBGData = (unsigned char*)pBGYUV;
    unsigned char* pFGData = (unsigned char*)pFGYUV;
    if (!alphaFG)   // the foreground is opaque, so it simply replaces the background
    {
        if (!alphaBG)
        {
            memcpy(pBGData, pFGData, width*height*2);
        }
        else
        {
            // background carries alpha (6 bytes per 2 pixels); the original post used the
            // 2-byte-per-pixel stride for both buffers here, which is wrong when alphaBG is
            // true, so the background indices are corrected to the 6-byte macropixel layout
            for (int i=0; i<height; ++i)
            {
                for (int j=0; j<width/2; ++j)
                {
                    *(pBGData+i*width*3+j*6)   = *(pFGData+i*width*2+j*4);
                    *(pBGData+i*width*3+j*6+1) = *(pFGData+i*width*2+j*4+1);
                    *(pBGData+i*width*3+j*6+2) = *(pFGData+i*width*2+j*4+2);
                    *(pBGData+i*width*3+j*6+3) = *(pFGData+i*width*2+j*4+3);
                    // the background alpha bytes (+4, +5) are left unchanged
                }
            }
        }
        return 0;   // nothing left to blend
    }
    int Y11, U11, V11, Y12, Y21, U21, V21, Y22;
    int alpha1, alpha2;
    if (!alphaBG)   // background is plain YUY2, foreground carries alpha
    {
        for (int i=0; i<height; ++i)
        {
            for (int j=0; j<width/2; ++j)
            {
                Y11 = *(pBGData+i*width*2+j*4);
                U11 = *(pBGData+i*width*2+j*4+1);
                Y12 = *(pBGData+i*width*2+j*4+2);
                V11 = *(pBGData+i*width*2+j*4+3);
                Y21 = *(pFGData+i*width*3+j*6);
                U21 = *(pFGData+i*width*3+j*6+1);
                Y22 = *(pFGData+i*width*3+j*6+2);
                V21 = *(pFGData+i*width*3+j*6+3);
                alpha1 = *(pFGData+i*width*3+j*6+4);
                alpha2 = *(pFGData+i*width*3+j*6+5);
                *(pBGData+i*width*2+j*4)   = (Y21-16)*alpha1/255 + (Y11-16)*(255-alpha1)/255 + 16;
                *(pBGData+i*width*2+j*4+1) = ((U21-128)*alpha1/255 + (U11-128)*(255-alpha1)/255 + (U21-128)*alpha2/255 + (U11-128)*(255-alpha2)/255)/2 + 128;
                *(pBGData+i*width*2+j*4+2) = (Y22-16)*alpha2/255 + (Y12-16)*(255-alpha2)/255 + 16;
                *(pBGData+i*width*2+j*4+3) = ((V21-128)*alpha1/255 + (V11-128)*(255-alpha1)/255 + (V21-128)*alpha2/255 + (V11-128)*(255-alpha2)/255)/2 + 128;
            }
        }
    }
    else    // both buffers carry alpha (6 bytes per 2 pixels)
    {
        for (int i=0; i<height; ++i)
        {
            for (int j=0; j<width/2; ++j)
            {
                Y11 = *(pBGData+i*width*3+j*6);
                U11 = *(pBGData+i*width*3+j*6+1);
                Y12 = *(pBGData+i*width*3+j*6+2);
                V11 = *(pBGData+i*width*3+j*6+3);
                Y21 = *(pFGData+i*width*3+j*6);
                U21 = *(pFGData+i*width*3+j*6+1);
                Y22 = *(pFGData+i*width*3+j*6+2);
                V21 = *(pFGData+i*width*3+j*6+3);
                alpha1 = *(pFGData+i*width*3+j*6+4);
                alpha2 = *(pFGData+i*width*3+j*6+5);
                *(pBGData+i*width*3+j*6)   = (Y21-16)*alpha1/255 + (Y11-16)*(255-alpha1)/255 + 16;
                *(pBGData+i*width*3+j*6+1) = ((U21-128)*alpha1/255 + (U11-128)*(255-alpha1)/255 + (U21-128)*alpha2/255 + (U11-128)*(255-alpha2)/255)/2 + 128;
                *(pBGData+i*width*3+j*6+2) = (Y22-16)*alpha2/255 + (Y12-16)*(255-alpha2)/255 + 16;
                *(pBGData+i*width*3+j*6+3) = ((V21-128)*alpha1/255 + (V11-128)*(255-alpha1)/255 + (V21-128)*alpha2/255 + (V11-128)*(255-alpha2)/255)/2 + 128;
                // the background alpha bytes (+4, +5) are left unchanged
            }
        }
    }
    return 0;
}
Reference:
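The per-sample blend that YUVBlending applies can be isolated for checking: both luma and chroma are mixed around their respective offsets (16 for Y, 128 for U/V). A minimal sketch follows; the helper names `blend_luma` and `blend_chroma` are mine, not from the post. With alpha 0 the background sample must survive unchanged, and with alpha 255 the foreground must win outright.

```c
#include <assert.h>

/* Luma blend as in YUVBlending: remove the +16 offset, mix by alpha, restore it. */
static int blend_luma(int y_fg, int y_bg, int alpha)
{
    return (y_fg - 16)*alpha/255 + (y_bg - 16)*(255 - alpha)/255 + 16;
}

/* Chroma blend works the same way around its 128 offset. */
static int blend_chroma(int c_fg, int c_bg, int alpha)
{
    return (c_fg - 128)*alpha/255 + (c_bg - 128)*(255 - alpha)/255 + 128;
}
```

Because the mix divides by 255 with integer arithmetic, partially transparent blends round toward zero; that is the same behaviour as the function body above.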