Texture Feature Extraction from PET/CT Images in the Context of Convolutional Neural Networks

Author: Zongwei Zhou 周縱葦

Weibo: @MrGiovanni

Email: zongweiz@asu.edu

Please cite this paper if you find it useful. Thanks!

Wang H, Zhou Z, Li Y, et al. Comparison of machine learning methods for classifying mediastinal lymph node metastasis of non-small cell lung cancer from 18F-FDG PET/CT images[J]. 2017, 7.


Objective

To evaluate how well texture features perform for classifying 3D PET/CT images.

Introduction

Unlike deep learning, traditional classifiers require us to define image features by hand. In fact, the convolution stage of a CNN is just a bank of filters whose purpose is also feature extraction, and when those filters are visualized they typically turn out to respond to textures and edges. So when defining features by hand, it is natural to include texture features. In this experiment we define the texture features of an image mathematically, compute them, and then feed them into four classical classifiers (random forest, support vector machine, AdaBoost, and a BP artificial neural network) for classification.

Tools

I use MATLAB 2014b; a newer version is preferable because it ships with more up-to-date library functions. The experimental procedure is kept as simple as possible: the focus here is to evaluate how texture features perform for PET/CT image classification, so for the routine parts the standard library functions are more than enough.

Reference documentation

PORTS 3D Image Texture Metric Calculation Package

1. Histogram

A histogram describes how the pixel values of an image are distributed; it is simply a statistic computed over the pixels.

For a grayscale image I, each pixel value lies in the range 0-255. We traverse the whole image and count how many times each value 0, 1, 2, 3, ..., 255 occurs. This gives 256 counts (frequencies), which are then converted to relative frequencies by dividing each count by the total:

p(i) = P(i) / Σ_j P(j)

Plotting the pixel value on the horizontal axis and the corresponding relative frequency on the vertical axis gives the histogram of the grayscale image I.

1.1 Example: the histogram of a CT image

Left: the original CT image. Right: its histogram.

1. CT pixel values range from -1000 to 1000, so we would have to count roughly 2000 distinct values. That binning is far too fine, therefore

2. We split the range -1000 to 1000 into 20 equal bins and map every pixel value to one of these 20 levels. As a direct consequence the image looks less rich, but the computation becomes tractable.

3. Count how often each of the 20 levels occurs, then divide each count by the total count to get relative frequencies. These lie in [0,1] and sum to 1.

4. With the 20 levels on the horizontal axis and the corresponding frequencies on the vertical axis, we can draw the histogram of this CT image.

End of example.
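
As a quick illustration of these four steps, here is a minimal sketch of my own (not the PORTS code). The variable ct_img is an assumed name for a matrix of CT values in the range -1000 to 1000:

num_bins = 20;                                      % number of bins (step 2)
% Map each CT value from [-1000, 1000] to an integer bin 1..20:
binned = ceil( (double(ct_img) + 1000) / 2000 * num_bins );
binned = min(max(binned, 1), num_bins);             % clamp edge cases such as exactly -1000
% Step 3: count each bin and convert the counts to relative frequencies:
counts = histc(binned(:), 1:num_bins);
probs  = counts / sum(counts);
% Step 4: plot the histogram:
bar(1:num_bins, probs);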

1.2 Code for computing the histogram

%%%%%
%%%%% Histogram-based computations:
%%%%%

% Compute the histogram of the ROI and probability of each voxel value:
vox_val_hist = zeros(num_img_values,1);
for this_vox_value = 1:num_img_values
    vox_val_hist(this_vox_value) = length(find((img_vol_subvol == this_vox_value) & (mask_vol_subvol == 1) ));
end
% Compute the relative probabilities from the histogram:
vox_val_probs = vox_val_hist / num_ROI_voxels;
% Compute the histogram_based metrics:
texture_metrics(1:6) = compute_histogram_metrics(vox_val_probs,num_img_values);

1.3 Histogram-based PET/CT texture features

Six values are computed:

(1) Mean

(2) Variance

(3) Skewness – set to 0 when σ=0

(4) Kurtosis – set to 0 when σ=0 (NOTE: "Kurtosis" and "Excess Kurtosis" differ in that Excess Kurtosis = Kurtosis – 3).

(5) Energy

(6) Entropy (NOTE: We will differentiate between the various entropy calculations in this document, specifying the distribution from which the entropy is computed)

1.4 Computing the texture features

%%% Overhead:
% The numerical values of each histogram bin:
vox_val_indices = (1:num_img_values)';
% The indices of non-empty histogram bins:
hist_nz_bin_indices = find(vox_val_probs);

%%% (1) Mean
metrics_vect(1) = sum(vox_val_indices .* vox_val_probs);

%%% (2) Variance
metrics_vect(2) = sum( ((vox_val_indices - metrics_vect(1)).^2) .* vox_val_probs );

%%%%% IF standard variance is zero, so are skewness and kurtosis:
if metrics_vect(2) > 0
    %%% (3) Skewness
    metrics_vect(3) = sum( ((vox_val_indices - metrics_vect(1)).^3) .* vox_val_probs ) / (metrics_vect(2)^(3/2));
    %%% (4) Kurtosis
    metrics_vect(4) = sum( ((vox_val_indices - metrics_vect(1)).^4) .* vox_val_probs ) / (metrics_vect(2)^2);
    metrics_vect(4) = metrics_vect(4) - 3;
else
    %%% (3) Skewness
    metrics_vect(3) = 0;
    %%% (4) Kurtosis
    metrics_vect(4) = 0;
end

%%% (5) Energy
metrics_vect(5) = sum( vox_val_probs .^2 );

%%% (6) Entropy (NOTE: 0*log(0) = 0 for entropy calculations)
metrics_vect(6) = -sum( vox_val_probs(hist_nz_bin_indices) .* log(vox_val_probs(hist_nz_bin_indices)) );

Note: vox_val_probs is the vector of probabilities from the histogram, and num_img_values is the number of bins the pixel values were divided into (20 in the example above).

2. Grey-level co-occurrence matrix (GLCM/GTSDM)

Having covered the histogram, let us look at the grey-level co-occurrence matrix, GLCM (also called the grey tone spatial dependence matrix, GTSDM). In plain terms: if the histogram is a simple statistic of single-pixel probabilities, producing a one-dimensional vector, then the GLCM is a statistic of the co-occurrence (joint occurrence) probability of pairs of pixels, producing a two-dimensional matrix.

Hmm, that may not have made much sense yet.

For example, suppose a pixel at position A in an image has value x. If the pixel at a fixed displacement from A has value y, then the entry at coordinate (x, y) of the GLCM is incremented by one. For a grayscale image, x and y both lie in a fixed range (0-255), so the accumulation matrix has a fixed size of 256×256. Its entries are the resulting counts, and converting those counts to relative frequencies gives the GLCM.

In other words, the GLCM describes how pairs of pixel values are distributed across the image.

2.1 An example, in case that was not clear

Left: the original CT image. Right: its grey-level co-occurrence matrix.

1. CT pixel values range from -1000 to 1000, so we would have to count roughly 2000 distinct values. That binning is far too fine, therefore

2. We split the range -1000 to 1000 into 20 equal bins and map every pixel value to one of these 20 levels. As a direct consequence the image looks less rich, but the computation becomes tractable.

The first two steps are the same as for the histogram.

3. Fix a point A in the CT image with coordinates (i, j) and pixel value x. Let B be the point del_i pixels to the right and del_j pixels down from A, i.e. at (i+del_i, j+del_j), with pixel value y. Then the entry at position (x, y) of the GLCM is incremented by one. Note that x and y here are the (quantized) pixel values of the original CT image; do not confuse the roles of i, j, del_i, del_j, x, and y.

4. Traverse every point of the CT image and accumulate counts as in step 3. Note that the offsets del_i and del_j are fixed in advance, i.e. they can be treated as constants.

5. Divide every count in the resulting matrix by the total count to obtain relative frequencies, which lie in [0,1] and sum to 1.

6. With the 20 pixel values on the horizontal axis, the 20 pixel values on the vertical axis, and the frequency as the value of each cell, we obtain a visualization of the GLCM of this CT image.

The GLCM obtained this way describes the co-occurrence probability distribution of pixel-value pairs in the original CT image at the fixed offset (del_i, del_j).

End of example.
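
To make steps 3 to 5 concrete, here is a minimal sketch of my own (not the PORTS code) that builds the GLCM by hand. It assumes img is a 2D image already quantized to the values 1..20 and uses an assumed offset (del_i, del_j); Section 2.2 below shows the equivalent one-line toolbox call.

num_levels = 20;                    % number of quantized gray levels
del_i = 1;  del_j = 1;              % assumed offset, fixed in advance (non-negative for simplicity)
GLCM = zeros(num_levels, num_levels);
[rows, cols] = size(img);
for i = 1:(rows - del_i)
    for j = 1:(cols - del_j)
        x = img(i, j);                       % value at point A
        y = img(i + del_i, j + del_j);       % value at the offset point B
        GLCM(x, y) = GLCM(x, y) + 1;         % step 3: increment the (x, y) count
    end
end
GLCM = GLCM / sum(GLCM(:));                  % step 5: counts -> relative frequencies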

2.2 A simple 2D-image GLCM implementation

GLCM2 = graycomatrix(CTimage, 'Offset', [4 4], 'NumLevels', 20, 'GrayLimits', []);
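
Note that graycomatrix returns raw co-occurrence counts. To match step 5 of the example above, normalize the counts to relative frequencies yourself, for example:

GLCM2 = GLCM2 / sum(GLCM2(:));    % convert counts to relative frequencies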

2.3 Extending from 2D images to 3D images

For a 3D image, the GLCM is computed in much the same way as for a 2D image, and the resulting GLCM is still two-dimensional, because its axes are pixel values and have nothing to do with the dimensionality of the original image; even for a 4D image the GLCM would still be two-dimensional.

Compared with the 2D case, the procedure is the same except that step 3 of the example in Section 2.1 needs one change:

3. Fix a point A in the 3D CT volume with coordinates (i, j, k) and pixel value x. Let B be the point del_i pixels to the right, del_j pixels down, and del_k pixels outward (along the third axis) from A, i.e. at (i+del_i, j+del_j, k+del_k), with pixel value y. Then the entry at position (x, y) of the GLCM is incremented by one. Again, x and y here are the (quantized) pixel values of the original CT volume; do not confuse the roles of i, j, k, del_i, del_j, del_k, x, and y.

You may already have noticed that, for a fixed offset magnitude del, each coordinate of the offset can take the value 0, +del, or -del, i.e. three possibilities, so a 2D image has 3×3-1 = 8 possible offsets, as shown below:

For a 3D image there are 3×3×3-1 = 26 possibilities.
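
As a quick illustration (my own sketch, not the PORTS code), the 26 offset directions for an assumed step size del can be enumerated like this:

del = 1;                               % assumed offset magnitude
offsets = zeros(26, 3);                % one row per (del_i, del_j, del_k)
idx = 0;
for di = [-del 0 del]
    for dj = [-del 0 del]
        for dk = [-del 0 del]
            if ~(di == 0 && dj == 0 && dk == 0)   % skip the zero offset
                idx = idx + 1;
                offsets(idx, :) = [di dj dk];
            end
        end
    end
end
% idx ends at 26; each row of "offsets" is one direction in which a GLCM can be accumulated.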

2.4 GLCM-based PET/CT texture features

There are 19 of them (20 are listed below, but the maximal correlation coefficient is listed without being computed):

% The first 14 entries in the output are from [Haralick, 1973]:

(1) Angular second moment (called "Energy" in Soh 1999)

(2) Contrast

(3) Correlation

(4) Sum of squares variance

(5) Inverse Difference moment (called "Homogeneity" in [Soh, 1999])

(6) Sum average

(7) Sum variance

(8) Sum Entropy

(9) Entropy

(10) Difference Variance

(11) Difference Entropy

(12) Information Correlation 1

(13) Information Correlation 2

(14) Maximal Correlation Coefficient (not computed; always 0)

% The next five entries in the output are from [Soh, 1999]:

(15) Autocorrelation

(16) Dissimilarity

(17) Cluster Shade

(18) Cluster Prominence

(19) Maximum Probability

% The next entries are from [Clausi, 2002]:

(20) Inverse Difference

Computing the intermediate quantities
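
The code in Section 2.5 uses several intermediate quantities (p, ndr, ndc, p_x, p_y, p_xpy, p_xmy, HX, HY, HXY, HXY1, HXY2, SE) that PORTS computes beforehand. The following is a rough sketch of how these are defined in [Haralick, 1973]; it is my reconstruction for readability, not the PORTS code itself:

% p is the normalized GTSDM (N_g x N_g), summing to 1.
% ndr, ndc hold the row and column indices of each entry of p:
[ndc, ndr] = meshgrid(1:N_g, 1:N_g);
% Marginal distributions:
p_x = sum(p, 2)';                      % p_x(i) = sum_j p(i,j)
p_y = sum(p, 1);                       % p_y(j) = sum_i p(i,j)
% Sum and difference distributions:
p_xpy = zeros(1, 2*N_g);               % p_{x+y}(k), k = i + j (so p_xpy(1) stays 0)
p_xmy = zeros(1, N_g);                 % p_{x-y}(k), k = |i - j| (stored at index k+1)
for i = 1:N_g
    for j = 1:N_g
        p_xpy(i + j) = p_xpy(i + j) + p(i, j);
        p_xmy(abs(i - j) + 1) = p_xmy(abs(i - j) + 1) + p(i, j);
    end
end
% Entropies (with the 0*log(0) = 0 convention):
HX  = -sum( p_x(p_x > 0) .* log(p_x(p_x > 0)) );
HY  = -sum( p_y(p_y > 0) .* log(p_y(p_y > 0)) );
HXY = -sum( p(p > 0)     .* log(p(p > 0)) );
pxpy_prod = p_x' * p_y;                % product distribution p_x(i)*p_y(j)
nz  = (p > 0);
HXY1 = -sum( p(nz) .* log(pxpy_prod(nz)) );
nz2  = (pxpy_prod > 0);
HXY2 = -sum( pxpy_prod(nz2) .* log(pxpy_prod(nz2)) );
% Sum entropy:
SE = -sum( p_xpy(p_xpy > 0) .* log(p_xpy(p_xpy > 0)) );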

2.5 Computing the texture features

% (1) Angular second moment
metrics_vect(1) = sum( p(:).^2 );

% (2) Contrast (for some reason, the paper does not explicitly state p_xmy
% here):
metrics_vect(2) = sum( ((0:(N_g-1)) .^2) .* p_xmy );

% (3) Correlation (there is mathematical ambiguity in the nature of the sum as
% stated in the paper; this version has the means subtracted after the sum is
% taken, which is the proper method for computation):
mu_x = sum( (1:N_g) .* p_x );
mu_y = sum( (1:N_g) .* p_y );
sg_x = sqrt( sum( ( ((1:N_g) - mu_x).^2 ) .* p_x ) );
sg_y = sqrt( sum( ( ((1:N_g) - mu_y).^2 ) .* p_y ) );
if (sg_x*sg_y) == 0
    metrics_vect(3) = Inf;
else
    metrics_vect(3) = ( sum( ndr(:) .* ndc(:) .* p(:) ) - (mu_x*mu_y) ) ./ (sg_x*sg_y);
end

% (4) Sum of squares variance (NOTE: mu is not defined in the paper, we will
% take it to describe the mean of the normalized GTSDM):
metrics_vect(4) = sum( (( ndr(:) - mean(p(:)) ) .^2) .* p(:) );

% (5) Inverse Difference moment
metrics_vect(5) = sum( ( 1 ./ (1 + ((ndr(:)-ndc(:)).^2) ) ) .* p(:) );

% (6) Sum average
metrics_vect(6) = sum( (1:(2*N_g))' .* p_xpy(:) );
% NOTE: p_xpy(1) = 0 , so adds nothing.

% (7) Sum variance
metrics_vect(7) = sum( (((1:(2*N_g))' - metrics_vect(6)) .^2) .* p_xpy(:) );

% (8) Sum Entropy (computed above)
metrics_vect(8) = SE;

% (9) Entropy (computed above)
metrics_vect(9) = HXY;

% (10) Difference Variance
mu_xmy = sum( (0:(N_g-1)) .* p_xmy );
metrics_vect(10) = sum( (((0:(N_g-1)) - mu_xmy) .^2) .* p_xmy );

% (11) Difference Entropy
metrics_vect(11) = -sum( p_xmy(p_xmy>0) .* log(p_xmy(p_xmy>0)) );

% (12) and (13) Information Correlations
if (max(HX,HY) == 0)
    metrics_vect(12) = Inf;
else
    metrics_vect(12) = (HXY - HXY1) / max(HX,HY);
end
metrics_vect(13) = sqrt( 1 - exp(-2*(HXY2-HXY)) );

% (14) Maximal Correlation Coefficient
%%% I don't think we use it, so I'll only code it up if needed.

%%%%%
%%%%% The following are from Soh (1999)
%%%%%

% (15) Autocorrelation
metrics_vect(15) = sum( (ndr(:) .* ndc(:)) .* p(:) );

% (16) Dissimilarity
metrics_vect(16) = sum( abs(ndr(:) - ndc(:)) .* p(:) );

% (17) Cluster Shade
metrics_vect(17) = sum( (ndr(:) + ndc(:) - mu_x - mu_y) .^3 .* p(:) );

% (18) Cluster Prominence
metrics_vect(18) = sum( (ndr(:) + ndc(:) - mu_x - mu_y) .^4 .* p(:) );

% (19) Maximum Probability
metrics_vect(19) = max( p(:) );

%%%%%
%%%%% The following are from Clausi (2002)
%%%%%

% (20) Inverse Difference:
metrics_vect(20) = sum( ( 1 ./ (1 + abs( ndr(:)-ndc(:) ) ) ) .* p(:) );

3. NGTDM (neighbourhood grey-tone difference matrix)

The NGTDM characterizes the relationship between a pixel and the values of its neighbouring pixels.

3.1 A 2D-image example

1. CT pixel values range from -1000 to 1000, so we would have to count roughly 2000 distinct values. That binning is far too fine, therefore

2. We split the range -1000 to 1000 into 20 equal bins and map every pixel value to one of these 20 levels. As a direct consequence the image looks less rich, but the computation becomes tractable.

The first two steps are the same as in the previous examples.

3. Fix a point A in the CT image with coordinates (i, j). In a 2D image, A has 8 neighbouring points, namely (i±1, j±1), (i, j±1), and (i±1, j); their pixel values lie in 1-20 (because of step 2). Let A_mean denote the mean of these 8 neighbouring pixel values, and let p_A denote the pixel value of A. Then

NGTDM(p_A) = NGTDM(p_A) + abs(p_A - A_mean);

occur(p_A) = occur(p_A) + 1;

4. Traverse every point of the CT image, accumulating as in step 3. This yields two arrays, NGTDM and occur, each of size 20×1: NGTDM records, for each pixel value, the accumulated difference between pixels of that value and their neighbourhoods, and occur records how many times each pixel value occurs in the CT image.

5. Divide the counts in occur by the total count to convert them to relative frequencies, which lie in [0,1] and sum to 1.

6. With the 20 pixel values on the horizontal axis and their corresponding NGTDM and occur values on the vertical axis, a bar chart visualizes NGTDM and occur.
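
To make steps 3 to 6 concrete, here is a minimal 2D sketch of my own (not the PORTS code; the 3D PORTS implementation follows in Section 3.2). It assumes img has already been quantized to the values 1..num_img_values:

num_img_values = 20;                 % number of quantized gray levels
NGTDM = zeros(num_img_values, 1);
occur = zeros(num_img_values, 1);
[rows, cols] = size(img);
% Only interior pixels have a complete 8-neighbourhood:
for i = 2:(rows-1)
    for j = 2:(cols-1)
        p_A = img(i, j);                          % value of the centre pixel
        ngd = img((i-1):(i+1), (j-1):(j+1));      % 3x3 neighbourhood including the centre
        A_mean = (sum(ngd(:)) - p_A) / 8;         % mean of the 8 neighbours
        NGTDM(p_A) = NGTDM(p_A) + abs(p_A - A_mean);
        occur(p_A) = occur(p_A) + 1;
    end
end
% Step 5: convert occurrence counts to relative frequencies:
occur_prob = occur / sum(occur);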

3.2 3D-NGTDM implementation

function [NGTDM,vox_occurances_NGD26] = compute_3D_NGTDM(ROI_vol,img_vol,binary_dir_connectivity,num_img_values)

% Placeholder for the NGTDM and number of occurances with full NGDs:
NGTDM = zeros(num_img_values,1);
vox_occurances_NGD26 = zeros(num_img_values,1);

% Record the indices of the voxels used in the ROI:
ROI_voxel_indices = find(ROI_vol);

% Loop over each voxel in the ROI sub-volume:
for this_ROI_voxel = 1:length(ROI_voxel_indices)
    % The index of this voxel in the sub-volume:
    this_voxel_index = ROI_voxel_indices(this_ROI_voxel);
    % This voxel must have 26 neighbors (plus itself) to be considered:
    if sum(binary_dir_connectivity{this_ROI_voxel}(:)) == 27
        % Determine the [r,c,s] of this voxel:
        [r,c,s] = ind2sub(size(ROI_vol),this_voxel_index);
        % Compute the mean value around this voxel:
        this_vox_val = img_vol(this_voxel_index);
        vox_ngd = img_vol((r-1):(r+1) , (c-1):(c+1) , (s-1):(s+1));
        vox_ngd_sum = sum(vox_ngd(:)) - this_vox_val;
        vox_ngd_mean = vox_ngd_sum / 26;
        % Add this value to the matrix:
        NGTDM(this_vox_val) = NGTDM(this_vox_val) + abs(this_vox_val-vox_ngd_mean);
        % Increment the number of occurances of this voxel:
        vox_occurances_NGD26(this_vox_val) = vox_occurances_NGD26(this_vox_val) + 1;
    end % Test for full neighborhood
end % Loop over ROI voxel

3.3 NGTDM-based PET/CT texture features

(1) Coarseness

(2) Contrast

(3) Busyness

(4) Complexity

(5) Texture Strength

3.4 Computing the texture features

%%% (1) Coarseness
metrics_vect(1) = sum( vox_val_probs .* NGTDM );
% It's the reciprocal, so test for zero denominator:
if metrics_vect(1) == 0
    metrics_vect(1) = Inf;
else
    metrics_vect(1) = 1/metrics_vect(1);
end

%%% (2) Contrast
if N_g > 1
    % There is some voxel color differences, so perform calculations as normal:
    % The first term in equation (4):
    first_term_mat = (vox_val_probs * vox_val_probs') .* ( (nd_r-nd_c).^2 );
    first_term_val = sum(first_term_mat(:)) / (N_g * (N_g-1) );
    % The second term in equation (4). Note that the 3D computation
    % necessitates normalization by the number of voxels instead of the n^2 that appears in
    % equation (4).
    second_term_val = sum(NGTDM) / sum(vox_occurances_NGD26);
    % Record the value:
    metrics_vect(2) = first_term_val * second_term_val;
else
    % There is only a single color, so no contrast to compute, so set to negative:
    metrics_vect(2) = -1;
end

%%% (3) Busyness
% NOTE: The denominator equals zero in the paper [Amadasun, 1989]. Absolute value inside the
% double-sum is given here, in accordance with
%   Texture Analysis Methods - A Review
%   Andrzej Materka and Michal Strzelecki (1998)
first_term = sum(vox_val_probs .* NGTDM);
second_term_mat = (nd_nz_r .* nd_nz_p_r) - (nd_nz_c .* nd_nz_p_c);
second_term = sum(abs(second_term_mat(:)));
if second_term == 0
    metrics_vect(3) = Inf;
else
    metrics_vect(3) = first_term / second_term;
end

%%% (4) Complexity
first_term_num = abs(nd_nz_r - nd_nz_c);
first_term_den = nd_nz_p_r + nd_nz_p_c;
second_term = (nd_nz_p_r .* nd_nz_NGTDMop_r) + (nd_nz_p_c .* nd_nz_NGTDMop_c);
if second_term == 0
    metrics_vect(4) = Inf;
else
    tmp = first_term_num(:) .* second_term(:);
    tmp = sum(tmp ./ first_term_den(:));
    metrics_vect(4) = tmp / sum(vox_occurances_NGD26);
end

%%% (5) Texture Strength
first_term_mat = (nd_nz_p_r+nd_nz_p_c) .* ( (nd_nz_r - nd_nz_c) .^2 );
first_term = sum(first_term_mat(:));
second_term = sum(NGTDM);
if second_term == 0
    metrics_vect(5) = Inf;
else
    metrics_vect(5) = first_term / second_term;
end

4. Grey-level zone size matrix (GLZSM)

4.1 GLZSM-based PET/CT texture features

(1) Small Zone Size Emphasis

(2) Large Zone Size Emphasis

(3) Low Gray-Level Zone Emphasis

(4) High Gray-Level Zone Emphasis

(5) Small Zone / Low Gray Emphasis

(6) Small Zone / High Gray Emphasis

(7) Large Zone / Low Gray Emphasis

(8) Large Zone / High Gray Emphasis

(9) Gray-Level Non-Uniformity

(10) Zone Size Non-Uniformity

(11) Zone Size Percentage
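
Only the feature names are listed here. As a rough illustration of where the GLZSM itself comes from, below is a minimal sketch of my own (not the PORTS code). It assumes a 2D image img already quantized to 1..N_g, uses 8-connected zones (the PORTS package works on 3D volumes with 26-connectivity), and computes the first two features using their commonly used definitions:

N_g = 20;                                    % assumed number of gray levels
max_zone = numel(img);                       % a zone can be at most the whole image
GLZSM = zeros(N_g, max_zone);
for g = 1:N_g
    cc = bwconncomp(img == g, 8);            % 8-connected zones of gray level g
    for z = 1:cc.NumObjects
        zone_size = numel(cc.PixelIdxList{z});
        GLZSM(g, zone_size) = GLZSM(g, zone_size) + 1;
    end
end
N_z = sum(GLZSM(:));                         % total number of zones
zone_sizes = 1:max_zone;
% (1) Small Zone Size Emphasis and (2) Large Zone Size Emphasis (common definitions):
SZSE = sum( sum(GLZSM, 1) ./ (zone_sizes.^2) ) / N_z;
LZSE = sum( sum(GLZSM, 1) .* (zone_sizes.^2) ) / N_z;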

