DNG is a flexible image format built as an extension of TIFF. As sensor technology and post-processing capabilities have improved, we often dump the image data collected by the ISP directly, with no processing applied at all; this is the RAW image. Alongside it we store the information needed to process the RAW data (Bayer pattern, white balance, noise level, and so on), so that all processing can be done afterwards. In short, DNG is a RAW image format that carries a large amount of tag metadata in addition to the image data itself.

The DNG format keeps evolving; the current version of the specification is 1.5.0.0. (If you run into a tag that you cannot find in the spec, the format has probably been updated again; check the version.)

Adobe's document defining the DNG format: https://www.adobe.com/content/dam/acom/en/products/photoshop/pdfs/dng_spec_1.5.0.0.pdf

Two source-code references were consulted:

The second one comes from this blog post: http://forum.xitek.com/thread-1677216-1-1-1.html

The first implementation is fairly concise and does not use the illuminant information stored in the DNG. It handles most images, but occasionally a file carries different illuminant information and the colors come out wrong. The second implementation follows the format's pipeline and its colors are basically correct (though still a bit more saturated than a normal rendering, I feel), at the cost of speed; for MATLAB we will not be demanding about speed.

Adobe also has a dedicated RAW processing tool. If Adobe Photoshop is already installed, installing this plug-in enhances its RAW handling, and we can use Adobe's output as a reference: https://helpx.adobe.com/cn/camera-raw/kb/camera-raw-plug-in-installer.html

For images captured straight from a camera, Adobe DNG Converter can convert them to DNG format. Like the tool above, it is also installed as a plug-in.

As for DNG decoding, the blog post cited above already explains the overall flow quite well, so here is only a brief summary.

1. RAW data processing: linearization, black-level compensation, and normalization of the data to the range 0-1.

2. Demosaicing: interpolate the CFA pattern into three-channel RGB data.

3. Active-area cropping: the RAW file stores data for the whole sensor, but the useful region may be slightly smaller than the full CMOS area.

4. Color conversion: this part is fairly involved. The DNG stores reference color temperatures, and the transformation matrix must be interpolated from the white point and the white-balance color temperature.

5. HSV calibration: if the DNG contains an HSV mapping table, HSV correction is required. Convert XYZ(D50) to ProPhoto RGB, then RGB to HSV, compute the table indices, apply the table correction, and finally convert back to linear ProPhoto RGB or to XYZ(D50) as needed.

6. Exposure calibration: to keep RAW highlights from clipping, most cameras slightly reduce the exposure at capture time and compensate for it during RAW decoding. Given the exposure value EV, the compensation is simply Photo = Photo * 2^EV.

7. HSV color enhancement: the same algorithm as HSV calibration, used here for color enhancement.

8. Tone curve: the tone curve must be applied in the linear ProPhoto RGB space; the curves of different cameras are roughly similar.

9. Gamma correction: human vision operates in a gamma space, so the data is gamma-encoded following sRGB. This is fixed by the sRGB standard; just follow it.

10. Opcode list processing: this step handles bad pixels, noise, lens corrections, and so on.
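To make the numeric core of steps 1, 6, and 9 concrete, here is a minimal NumPy sketch. The function names and the sample black/white levels are illustrative only, not values from the DNG spec:

```python
import numpy as np

def normalize_raw(raw, black, white):
    """Step 1: black-level subtraction and normalization to [0, 1]."""
    lin = (raw.astype(np.float64) - black) / (white - black)
    return np.clip(lin, 0.0, 1.0)

def apply_exposure(img, ev):
    """Step 6: exposure compensation, Photo = Photo * 2^EV, clipped to 1."""
    return np.clip(img * 2.0 ** ev, 0.0, 1.0)

def gamma_encode(img, gamma=1 / 2.2):
    """Step 9: simple power-law gamma, an approximation of the sRGB curve."""
    return img ** gamma

# Toy 2x2 Bayer patch with a 10-bit sensor (black level 64, white level 1023)
raw = np.array([[64, 1023], [512, 300]], dtype=np.uint16)
lin = normalize_raw(raw, black=64, white=1023)
out = gamma_encode(apply_exposure(lin, ev=0.5))
```

Steps 2 to 5 (demosaicing, cropping, color conversion, HSV tables) are covered by the MATLAB code below.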

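For step 4, the DNG format stores two calibration matrices (the ColorMatrix1 and ColorMatrix2 tags) for two reference illuminants, and the matrix for the shot's actual color temperature is interpolated between them on the inverse-temperature (mired) scale. A rough sketch of that interpolation, using placeholder matrices rather than real camera data:

```python
import numpy as np

def interpolate_color_matrix(cm1, cm2, t1, t2, t):
    """Interpolate between two calibration matrices linearly in mired
    (1e6 / Kelvin), clamping to the calibrated temperature range."""
    t = min(max(t, min(t1, t2)), max(t1, t2))
    m1, m2, m = 1e6 / t1, 1e6 / t2, 1e6 / t
    w = (m - m2) / (m1 - m2)          # weight of the first matrix
    return w * cm1 + (1.0 - w) * cm2

# Placeholder matrices for illuminant A (~2850 K) and D65 (~6500 K);
# real values come from the DNG tags of a specific camera.
cm1 = np.eye(3) * 1.2
cm2 = np.eye(3) * 0.8
cm_mid = interpolate_color_matrix(cm1, cm2, 2850, 6500, 5000)
```

At either calibration temperature the interpolation reduces to the corresponding matrix, and temperatures outside the range are clamped.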
 

% Rafael Villamor Lora (Using Rob Sumner 'RAW Guide')
% January 24, 2019 (Last modified: 02/14/19)
% PROCESSING RAW (DNG) IMAGES IN MATLAB
%
% This function follows Rob Sumner's "Processing RAW Images in MATLAB"
% guide. I'm simply copying and pasting his explanations. Unless otherwise
% noted, all the credits belong to Rob Sumner (2014).
% I HIGHLY recommend reading the full document:
% http://www.rcsumner.net/raw_guide/RAWguide.pdf

% RAW FILES
%{
RAW photo files contain the raw sensor data from a digital camera; while
it can be quite scientifically useful, raw data must generally be processed
before it can be displayed. In order to use a DSLR (Digital Single Lens
Reflex camera) as a scientific camera it is vital to know the entire
processing chain that was applied to an image after being captured. If
possible, the best image to deal with is the sensor data straight from the
camera, the raw data.
%}

% HOW TO READ RAW FILES IN MATLAB
%{
'RAW' is a class of computer files which typically contain an uncompressed
image containing the sensor pixel values as well as a large amount of meta-
information about the image generated by the camera (the Exif data). RAW
files themselves come in many proprietary file formats (Nikon's .NEF,
Canon's .CR2, etc) and at least one common open format, .DNG, which stands
for Digital Negative. The latter indicates how these files are supposed
to be thought of by digital photographers: the master originals,
repositories of all the captured information of the scene.

Many proprietary file formats can be converted to DNG using Adobe Digital
Negative Converter. NOTE: Make sure you export your images as uncompressed:
i.e. Open Adobe DNG Converter > Change Preferences... > Compatibility
> Custom... > Linear (unchecked), Uncompressed (checked) > OK > OK
%}

% HOW TO PROCESS RAW FILES
%{
The process coded here is a first-order approximation of the steps that all 
cameras do to take an image and produce a viewable output. It starts with 
the raw sensor data and implements:

1. Linearization
2. White Balancing
3. Demosaicing
4. Color Space Correction
5. Brightness and Contrast Adjustment for Display
%}

%%%%%%%%%%%%%%

function [lin_srgb, lin_rgb, balanced_bayer, lin_bayer, raw, camSettings] = dng2rgb(imagename)
% OUTPUTS
% lin_srgb       [nxmx3 double][0-1] = 16-bit RGB image that has been color corrected and exists in the right color space for display
% lin_rgb        [nxmx3 double][0-1] = RGB image from balanced_bayer using demosaic()                                                
% balanced_bayer [nxm double]  [0-1] = White Balanced-linearized raw data (Bayer array)                                              
% lin_bayer      [nxm double]  [0-1] = Linearized raw data (Bayer array) with black level and saturation level corrected           
% raw            [nxm double]  [0-1] = Raw data (Bayer array) from the camera's sensor
% camSettings    [structure]         = Contains camera settings used for conversion from RAW -> lin_srgb

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%                     READING THE CFA IMAGE INTO MATLAB                   %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%{
The following code will read the DNG file into a MATLAB array called
raw, as well as creating a structure of metadata about the image, meta-info
The code also uses information from the metadata to crop the image to only 
the meaningful area.
%}
warning off MATLAB:tifflib:TIFFReadDirectory:libraryWarning
t         = Tiff(imagename,'r');
offsets   = getTag(t,'SubIFD');
setSubDirectory(t,offsets(1));
raw       = read(t);
close(t);
meta_info = imfinfo(imagename);
% Crop to only valid pixels (http://www.rcsumner.net/raw_guide/RAWguide.pdf)
x_origin  = meta_info.SubIFDs{1}.ActiveArea(2)+1;    % +1 due to MATLAB indexing
width     = meta_info.SubIFDs{1}.DefaultCropSize(1);
y_origin  = meta_info.SubIFDs{1}.ActiveArea(1)+1;
height    = meta_info.SubIFDs{1}.DefaultCropSize(2);
RAW       = raw(y_origin:y_origin+height-1,x_origin:x_origin+width-1);
raw       = double(RAW);
% Save settings
camSettings.x_origin = x_origin;
camSettings.width    = width;
camSettings.y_origin = y_origin;
camSettings.height   = height;

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%                                LINEARIZING                              %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%{
The 2-D array raw is not yet a linear image. It is possible that the
camera applied a non-linear transformation to the sensor data for storage
purposes (e.g., Nikon cameras). If so, the DNG metadata will contain a
table under meta_info.SubIFDs{1}.LinearizationTable. You will need to map
the values of the raw array through this look-up table to the full 10-14
bit values. If this tag is empty (as for Canon cameras), you do not need
to worry about this step.

Even if there is no non-linear compression to invert, the raw image might 
still have an offset and arbitrary scaling.  Find the black level value and
saturation level value as below and do an affine transformation to the
pixels of the image to make it linear and normalized to the range [0,1].
Also, because of sensor noise, it is possible that there exist values in 
the array which are above the theoretical maximum value or below the
black level.  These need to be clipped, as follows.

Note:  There may exist a different black level or saturation level for each
of the four Bayer color channels. The code below assumes they are the same
and uses just one.  You may choose to be more precise.
%}
%If the values are stored non-linearly, undo that mapping
if isfield(meta_info.SubIFDs{1},'LinearizationTable')
    warning('RVL: Stored RAW-values are non-linear')
    ltab = meta_info.SubIFDs{1}.LinearizationTable;
    raw  = ltab(raw+1);
    % Save settings
    camSettings.ltab = ltab;
end
% The black level and saturation level values are stored in the DNG 
% metadata and can be accessed as shown.
black      = meta_info.SubIFDs{1}.BlackLevel(1);
saturation = meta_info.SubIFDs{1}.WhiteLevel;
lin_bayer  = (raw - black) / (saturation - black);
lin_bayer  = max(0, min(lin_bayer, 1));  % Always keep image clipped b/w 0-1
% Save settings
camSettings.black      = black;
camSettings.saturation = saturation;

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%                              WHITE BALANCING                            %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%{
Now we scale each color channel in the CFA by an appropriate amount to
white balance the image.  Since only the ratio of the three colors matters,
we can arbitrarily set one channel's multiplier to 1; this is usually done
for the green pixels.  You may set the other two white balance multipliers 
to any value you want (e.g., the Exif information for the original RAW file
may contain standard multiplier values for different standard illuminants),
but here we use the multipliers the camera calculated at the time of 
shooting. Once the values are found, multiply every red-location pixel in 
the image by the red multiplier and every blue-location  pixel  by  the  
blue  multiplier.   This  can  be  done  by  dot-multiplication  with  a 
mask  of  these scalars, which can be easily created by a function similar 
to the following.
%}
% An array of the inverses of the multiplier values, for [R G B], is found 
% in meta_info.AsShotNeutral. Thus we invert the values and then rescale 
% them all so that the green multiplier is 1.
wb_multipliers = (meta_info.AsShotNeutral) .^ -1;
wb_multipliers = wb_multipliers / wb_multipliers(2);
mask           = wbmask(size(lin_bayer,1),size(lin_bayer,2),wb_multipliers,'rggb');
balanced_bayer = lin_bayer .* mask;
% Save settings
camSettings.wb_multipliers = wb_multipliers;

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%                               DEMOSAICING                               %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%{
Apply your favorite demosaicing algorithm (or MATLAB's built-in one) to
generate the familiar 3-layer RGB image variable. Note that the built-in
demosaic() function requires a uint8 or uint16 input. To get a meaningful
integer image, scale the entire image so that the max value is 65535. Then
scale back to 0-1 for the rest of the process.

Originally, the code looked like this:

temp    = uint16(balanced_bayer / max(balanced_bayer(:)) * (2^16 - 1)); % RVL: I'm not sure why we need  / max(balanced_bayer(:))
if max(balanced_bayer(:)) ~= 1
    warning('RVL: Weird normalization during demosaicing')
    warning(['RVL: Normalization factor: ',num2str(max(balanced_bayer(:)))])
end
lin_rgb = double(demosaic(temp,'rggb')) / (2^16 - 1);
% Save settings
camSettings.max_balanced_layer = max(balanced_bayer(:));

I'm not sure why we need the division '/ max(balanced_bayer(:))'. So I
eliminated it*. Another alternative would be to keep it, and then multiply
each layer of lin_rgb by 'max(balanced_bayer(:))'.

*I compared both scenarios (i.e. with/without the division), and it seems
that as long as the image is not under-/over-exposed, the division doesn't
make any difference.
%}

temp    = uint16(balanced_bayer * (2^16 - 1));
lin_rgb = double(demosaic(temp,'rggb')) / (2^16 - 1);

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%                          COLOR SPACE CONVERSION                         %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%{
The current RGB image is viewable with the standard MATLAB display
functions. However, its pixels will not have coordinates in the correct
RGB space that is expected by the operating system.

Any given pixel's RGB values, which represent a vector in the color basis
defined by the camera's sensors, must be converted to some color basis
which the monitor expects. This is done by a linear transformation, so we
will need to apply a 3x3 matrix transformation to each of the pixels.

The correct matrix to apply can be difficult to find. Some software
packages use matrices (gleaned from Adobe) which transform from the
camera's color space to the XYZ color space, a common standard. Then the
transformation from XYZ to the desired output space, e.g., sRGB, can be
applied. Better yet, these two transformations can be combined first and
then applied once. NOTE: As an added complication, however, these matrices
typically are defined in the direction of sRGB-to-XYZ and XYZ-to-camera
color basis.

One other necessary trick is to first normalize the rows of the sRGB-to-Cam
matrix so that each row sums to 1. Though it may seem arbitrary and
somewhat ad hoc, we can see that this is necessary if we consider what will
happen when a white pixel in the camera color space is transformed to the
output space: we can argue that it should still be white because we have
already applied white balance multipliers in order to make it so. Since
white in both spaces is represented by the RGB coordinates [1 1 1]^T, each
row of the matrix must sum to 1 for white to map to white.

The matrices used for the output-to-XYZ colorspace transformations can be
found at Bruce Lindbloom's comprehensive website. For convenience, the most
commonly desired one, the matrix from sRGB space to XYZ space, is given
here

rgb2xyz = [0.4124564    0.3575761    0.1804375
           0.2126729    0.7151522    0.0721750
           0.0193339    0.1191920    0.9503041];
%}
% You  can  find  the  entries  of  the  XYZ-to-camera  matrix  in  the
% meta_info.ColorMatrix2 array. NOTE: These entries fill the transformation
% matrix in a C row-wise manner, not MATLAB column-wise.
rgb2xyz  = [0.4124564    0.3575761    0.1804375
            0.2126729    0.7151522    0.0721750
            0.0193339    0.1191920    0.9503041];
xyz2cam  = reshape(meta_info.ColorMatrix2,3,3)';  % Transpose is due to row-wise definition of meta_info.ColorMatrix2
rgb2cam  = xyz2cam * rgb2xyz;
rgb2cam  = rgb2cam ./ repmat(sum(rgb2cam,2),1,3); % Normalize rows to 1
cam2rgb  = rgb2cam ^ -1;
lin_srgb = apply_cmatrix(lin_rgb, cam2rgb);
lin_srgb = max(0,min(lin_srgb,1)); % Always keep image clipped b/w 0-1
% Save settings
camSettings.rgb2xyz = rgb2xyz;
camSettings.xyz2cam = xyz2cam;

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%                      BRIGHTNESS AND GAMMA CORRECTION                    %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%{
We now have a 16-bit, RGB image that has been color corrected
and exists in the right color space for display. However, it is
still a linear image with values relating to what was sensed, which may not
be in a range appropriate for being displayed. We can brighten the image
by simply scaling it (adding a constant would just make it look gray), or
something more complicated, e.g., applying a non-linear transformation.
Here we will do both, but be aware that the steps of this subsection are
highly subjective and at this point we are just tweaking the image so it
looks good. It is already 'correct' in some sense, but not necessarily
'pretty.'

As an extremely simple brightening measure, we can find the mean luminance
of the image and then scale it so that the mean luminance is some more
reasonable value. In the following lines, we (fairly arbitrarily) scale
the image so that the mean luminance is 1/4 the maximum. For the
photographically inclined, this is equivalent to scaling the image so that
there are two stops of bright area detail. This is not extremely clever,
but the code is simple.

grayim      = rgb2gray(lin_srgb);
grayscale   = 0.25/mean(grayim(:));
bright_srgb = min(1,lin_srgb * grayscale);
nl_srgb     = bright_srgb.^(1/2.2);

The image is still linear, which will almost certainly not be the best for
display (dark areas will appear too dark, etc). We will apply a 'gamma
correction' power function to this image as a simple way to fix this.
Though the official sRGB compression actually uses a power function with
gamma = 1/2.4 and a small linear toe region for the lowest values, this is
often approximated by the following simple gamma = 1/2.2 compression. Note
that in general you only want to apply such a function to an image that has
been scaled to be in the range [0,1], which we have made sure our input is.

nl_srgb = bright_srgb.^(1/2.2);

Congratulations, you now have a color-corrected, displayable RGB image.  It 
is real valued, ranges from 0-1, and thus is ready for direct display by
imshow()
%}
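The brightening and gamma steps quoted in the comment above are easy to port to other environments. A minimal NumPy sketch (the function name and the synthetic mid-gray test image are mine, not from the guide):

```python
import numpy as np

def brighten_and_gamma(lin_srgb, target_mean=0.25, gamma=1 / 2.2):
    """Scale so the mean grayscale luminance hits target_mean, clip to 1,
    then apply the simple 1/2.2 gamma compression."""
    # Rec. 601 luma weights, approximately what MATLAB's rgb2gray() uses
    gray = lin_srgb @ np.array([0.299, 0.587, 0.114])
    bright = np.minimum(1.0, lin_srgb * (target_mean / gray.mean()))
    return bright ** gamma

img = np.full((4, 4, 3), 0.5)   # synthetic mid-gray image
out = brighten_and_gamma(img)
```

A uniform 0.5 image is scaled down to a mean of 0.25 and then gamma-encoded, matching the two stages of the MATLAB snippet.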


end


function colormask = wbmask(m,n,wbmults,align)
% COLORMASK = wbmask(M,N,WBMULTS,ALIGN)
%
% Makes a white-balance multiplicative mask for an image of size m-by-n
% with RGB white balance multipliers WBMULTS = [R_scale G_scale B_scale].
% ALIGN is string indicating Bayer arrangement: 'rggb','gbrg','grbg','bggr'

colormask = wbmults(2)*ones(m,n); %Initialize to all green values
switch align
    case 'rggb'
        colormask(1:2:end,1:2:end) = wbmults(1);    %r
        colormask(2:2:end,2:2:end) = wbmults(3);    %b
    case 'bggr'
        colormask(2:2:end,2:2:end) = wbmults(1);    %r
        colormask(1:2:end,1:2:end) = wbmults(3);    %b
    case 'grbg'
        colormask(1:2:end,2:2:end) = wbmults(1);    %r
        colormask(2:2:end,1:2:end) = wbmults(3);    %b
    case 'gbrg'
        colormask(2:2:end,1:2:end) = wbmults(1);    %r
        colormask(1:2:end,2:2:end) = wbmults(3);    %b
end
end

function corrected = apply_cmatrix(im,cmatrix)
% CORRECTED = apply_cmatrix(IM,CMATRIX)
%
% Applies CMATRIX to RGB input IM. Finds the appropriate weighting of the
% old color planes to form the new color planes, equivalent to but much
% more efficient than applying a matrix transformation to each pixel.
if size(im,3)~=3
    error('Apply cmatrix to RGB image only.')
end

r = cmatrix(1,1) * im(:,:,1)+cmatrix(1,2) * im(:,:,2) + cmatrix(1,3) * im(:,:,3);
g = cmatrix(2,1) * im(:,:,1)+cmatrix(2,2) * im(:,:,2) + cmatrix(2,3) * im(:,:,3);
b = cmatrix(3,1) * im(:,:,1)+cmatrix(3,2) * im(:,:,2) + cmatrix(3,3) * im(:,:,3);

corrected = cat(3,r,g,b);
end

 

 

If you only need a simple rendering and do not require very accurate colors, you can follow:

Rob Sumner's RAWguide (PDF): http://pcv.oss-cn-shanghai.aliyuncs.com/www.p-chao.com/2019/10/RAWguide.pdf

This is a rough walkthrough; I hope it helps.
