
Automated detection of a nonperfusion area caused by retinal vein occlusion in optical coherence tomography angiography images using deep learning

Abstract

We aimed to assess the ability of deep learning (DL) and a support vector machine (SVM) to detect a nonperfusion area (NPA) caused by retinal vein occlusion (RVO) in optical coherence tomography angiography (OCTA) images. The study included 322 OCTA images (normal: 148; NPA owing to RVO: 174 [128 branch RVO images and 46 central RVO images]). The DL model was constructed with a deep convolutional neural network (DNN) algorithm trained on the OCTA images. The SVM used the scikit-learn library with a radial basis function kernel. The area under the curve (AUC), sensitivity and specificity for detecting an NPA were examined. We compared the diagnostic ability (sensitivity, specificity and average required time) between the DNN, the SVM and seven ophthalmologists. Heat maps were generated. With regard to the DNN, the mean AUC, sensitivity, specificity and average required time for distinguishing RVO OCTA images with an NPA from normal OCTA images were 0.986, 93.7%, 97.3% and 176.9 s, respectively. With regard to the SVM, the mean AUC, sensitivity, and specificity were 0.880, 79.3%, and 81.1%, respectively. With regard to the seven ophthalmologists, the mean AUC, sensitivity, specificity and average required time were 0.962, 90.8%, 89.2%, and 700.6 s, respectively. In heat maps, the DNN focused on the foveal avascular zone and the NPA. The performance of the DNN was significantly better than that of the SVM in all parameters (p < 0.01, all) and better than that of the ophthalmologists in AUC and specificity (p < 0.01, all). The combination of DL and OCTA images had high accuracy for the detection of an NPA, and it might be useful in clinical practice and retinal screening.

Introduction

Retinal vein occlusion (RVO) is the second most common retinal vascular disease after diabetic retinopathy. Worldwide, the estimated number of RVO patients is 16.4 million [1], with a prevalence of 2.1% in the general population over 40 years of age [2]; risk factors include hypertension, diabetes and hyperlipidemia. RVO is a common cause of vision loss owing to complications such as macular edema (ME), retinal bleeding and retinal ischemia [3,4]. RVO is divided into two types according to the occlusion site: branch retinal vein occlusion (BRVO) and central retinal vein occlusion (CRVO). Major vein occlusion of the retinal circulation can cause increased intraluminal pressure, hemorrhage and ME [5]. In recent years, intravitreal anti-VEGF agents have become the common clinical therapy for ME associated with BRVO and CRVO. In fact, numerous large-scale studies have reported that intravitreal injections of anti-VEGF agents significantly improve visual and anatomic outcomes for BRVO and CRVO patients with ME [6–12]. Although ME is most commonly associated with vision loss, thrombosis can result in engorged veins frequently accompanied by variable amounts of retinal nonperfusion.

Previously, angiography, including fluorescein angiography (FA), was essential for diagnosing retinal vascular lesions. However, because angiography is an invasive examination, frequent examination is difficult. Additionally, visualizing fine structure at the capillary level is difficult in angiography images. Moreover, these images are two-dimensional and cannot assess the retina and choroid layer by layer. In recent years, optical coherence tomography angiography (OCTA) has been devised, which can noninvasively detect moving parts of the fundus, corresponding to red blood cells in the blood flow, as a flow signal and visualize them as blood vessels [13–16]. OCTA can analyze the retina in detail by dividing it into the superficial capillary plexus (SCP) and deep capillary plexus (DCP) (Fig 1). Additionally, one report considered the foveal avascular zone (FAZ) and vessel density derived from these images as quantitative indices [15]. Furthermore, the area of the FAZ and visual acuity are reportedly inversely correlated in RVO and diabetic retinopathy (Fig 2) [17].

Fig 1. Representative images of the normal macula obtained using optical coherence tomography angiography (OCTA).

The left image is a superficial capillary plexus OCTA image with a normal macula, and the right image is a deep capillary plexus OCTA image with a normal macula. The arrowheads indicate the foveal avascular zone.

https://doi.org/10.1371/journal.pone.0223965.g001

Fig 2. Representative retinal vein occlusion images of the macula obtained using optical coherence tomography angiography (OCTA).

(A) The left image is the superficial capillary plexus (SCP) OCTA image with branch retinal vein occlusion (BRVO), and the right image is the deep capillary plexus (DCP) OCTA image with BRVO. The arrows indicate the foveal avascular zone and nonperfusion area with BRVO. (B) The left image is the SCP OCTA image with central retinal vein occlusion (CRVO), and the right image is the DCP OCTA image with CRVO. In the SCP and DCP OCTA images with CRVO, the foveal avascular zone and the nonperfusion area are observed throughout the cropped images.

https://doi.org/10.1371/journal.pone.0223965.g002

Recently, image processing technologies using deep learning (DL) and the support vector machine (SVM), a machine-learning method, have developed dramatically. According to several studies, image processing technology has very high classification performance in medical imaging [18–28]. In the ophthalmology field, recent investigations have demonstrated the application of image processing technology involving machine-learning algorithms in medical imaging for various retinal diseases, including BRVO and CRVO, using fundus color photographs and ultra-widefield fundus ophthalmoscopy images [21,23–25,29–32]. In a recent investigation, DL segmented the nonperfusion area (NPA) in OCTA images of diabetic retinopathy [33].

However, to the best of our knowledge, no studies have focused on the automated diagnostic accuracy of image processing technology involving DL and SVM for the NPA using OCTA images of RVO.

Thus, the present study aimed to assess the ability of image processing technology involving DL and SVM to detect an NPA owing to RVO using OCTA images. This study was performed at Tsukazaki Hospital and Tokushima University Hospital.

Materials and methods

Data set

The OCTA images of normal eyes and eyes with an NPA caused by RVO were extracted from the clinical databases of the ophthalmology departments of Tsukazaki Hospital and Tokushima University Hospital. A retinal specialist reviewed and confirmed the presence of an NPA by assessing 3 × 3 mm OCTA images of the SCP and DCP. The OCTA images were then registered in a database for analysis. In total, 322 OCTA images were included in the current study. With regard to BRVO, eyes without an NPA in OCTA images were not included; an NPA was present in all CRVO eyes. To assess the accuracy of OCTA image processing with DL for an NPA, we focused on OCTA images of normal eyes and of eyes with an NPA owing to acute RVO. We did not include NPA cases with chronic RVO, in which abnormal conditions such as collateral vessels may exist in addition to the NPA, or NPA cases with diabetic retinopathy, in which other abnormal conditions such as microaneurysms must be considered. These additional abnormalities may be confounders that make it difficult for DL to determine the NPA.

In the current study, we used K-fold cross-validation (K = 8), as previously reported [34,35]. In brief, the OCTA imaging data were divided into K groups. Then, (K−1) groups were used for training, and one group was used for validation. This process was repeated K times until each of the K groups had served once as the validation data set.
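As an illustrative sketch of this splitting scheme (using scikit-learn's `KFold`; the authors do not state which implementation they used):

```python
from sklearn.model_selection import KFold
import numpy as np

# Dummy indices standing in for the 322 OCTA images (illustrative only).
images = np.arange(322)

kf = KFold(n_splits=8, shuffle=True, random_state=0)
folds = list(kf.split(images))

# Each of the 8 splits uses 7 groups for training and 1 for validation,
# and every image appears in the validation set exactly once.
validation_counts = np.zeros(322, dtype=int)
for train_idx, val_idx in folds:
    validation_counts[val_idx] += 1

assert len(folds) == 8
assert (validation_counts == 1).all()
```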

The OCTA images in the training data set were augmented with image transformation processes such as brightness adjustment, gamma correction, histogram equalization, noise addition and inversion. The number of training images thus reached 18 times the number of original training images. A deep convolutional neural network (DNN) model, as described below, was created and trained with the augmented training data. These processes are described in the supplemental files (in data_augment.py).
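A minimal numpy sketch of such augmentation steps follows; the parameters here are illustrative only (the authors' exact transformations are given in data_augment.py), and histogram equalization is omitted for brevity.

```python
import numpy as np

def augment(img, rng):
    """Return transformed copies of one OCTA image.

    Hypothetical parameter choices for the steps named in the text:
    brightness adjustment, gamma correction, noise addition, inversion.
    """
    img = img.astype(np.float32) / 255.0                 # work in [0, 1]
    out = []
    out.append(np.clip(img * 1.2, 0.0, 1.0))             # brightness up
    out.append(np.clip(img * 0.8, 0.0, 1.0))             # brightness down
    out.append(img ** 0.7)                               # gamma correction
    out.append(np.clip(img + rng.normal(0, 0.02, img.shape), 0, 1))  # noise
    out.append(img[:, ::-1])                             # horizontal inversion
    return out

rng = np.random.default_rng(0)
sample = rng.integers(0, 256, size=(192, 256), dtype=np.uint8)
augmented = augment(sample, rng)
assert len(augmented) == 5
assert all(a.shape == sample.shape for a in augmented)
```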

Because of the retrospective and observational nature of the study, the need for written informed consent was waived by the ethics committees. The data acquired in the course of the data analysis were anonymized before we accessed them. This study adhered to the tenets of the Declaration of Helsinki, and it was approved by the local ethics committees of Tsukazaki Hospital and Tokushima University Hospital.

Deep-learning model and training

We implemented a DL model that uses a Visual Geometry Group (VGG)-16 DNN (Fig 3). This DNN automatically learns local features of images and generates a classification model [36–38]. The input in this study was concatenated OCTA images of SCP and DCP images. The size of the concatenated original OCTA images was 640 × 320 pixels; we resized the input images to 256 × 192 pixels to reduce the analysis time. The RGB image input had a range of 0 to 255 and was first normalized to a range of 0 to 1 by dividing the values by 255. The shape of the input tensors used in this study was 256 × 192 × 3.
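The preprocessing described above can be sketched as follows. Nearest-neighbour resizing is an assumption (the interpolation method is not stated), and the array is held height-first, so the paper's 256 × 192 × 3 input corresponds to shape (192, 256, 3).

```python
import numpy as np

def preprocess(scp, dcp):
    """Concatenate SCP and DCP images and scale to the network's input range."""
    concat = np.concatenate([scp, dcp], axis=1)        # 320 tall x 640 wide
    # Nearest-neighbour resize to 192 x 256 (an assumed interpolation choice).
    rows = np.linspace(0, concat.shape[0] - 1, 192).round().astype(int)
    cols = np.linspace(0, concat.shape[1] - 1, 256).round().astype(int)
    resized = concat[np.ix_(rows, cols)]
    rgb = np.repeat(resized[..., None], 3, axis=2)     # 3-channel input
    return rgb.astype(np.float32) / 255.0              # 0..255 -> 0..1

scp = np.zeros((320, 320), dtype=np.uint8)
dcp = np.full((320, 320), 255, dtype=np.uint8)
x = preprocess(scp, dcp)
assert x.shape == (192, 256, 3)
assert x.min() == 0.0 and x.max() == 1.0
```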

Fig 3. Overall architecture of the Visual Geometry Group (VGG)-16 model.

A data set of resized optical coherence tomography angiography images (256 × 192 pixels) is the input. VGG-16 includes five blocks and three fully connected layers. Each block includes several convolutional layers followed by a max-pooling layer. The output of block 5 is flattened and followed by two fully connected layers. The first layer removes spatial information from the extracted feature vectors, and the second layer is a classification layer that uses the feature vectors of the target images acquired in previous layers and the softmax function for binary classification.

https://doi.org/10.1371/journal.pone.0223965.g003

The VGG-16 DNN included five blocks and three fully connected layers. Each block included several convolutional layers followed by a max-pooling layer, which decreased positional sensitivity, improving generic recognition [39]. The convolutional layers capture image features without shrinking the feature maps because their strides were 1 and their padding was “same”. We could avoid the vanishing gradient problem because the activation function of these layers was the ReLU [40]. The strides of the max-pooling layers were 2, so these layers compressed the spatial information of the image. The output of block 5 was flattened and followed by two fully connected layers. The first layer removed spatial information from the extracted feature vectors, and the second layer was a classification layer that used the feature vectors of the target images acquired in previous layers and the softmax function for binary classification. To improve generalization performance, we applied dropout to the first fully connected layer with 25% probability. The output of the neural network was a vector of order 2 representing the probability of each class (non-RVO, RVO).
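The spatial bookkeeping in this paragraph can be verified in a few lines: with stride-1 “same”-padded convolutions, only the five stride-2 max-pooling layers change the spatial size, each halving both dimensions.

```python
def vgg16_feature_shape(width, height):
    """Spatial size after VGG-16's five stride-2 max-pooling blocks.

    Stride-1 'same'-padded convolutions leave the size unchanged,
    so only the five pooling layers (each halving both dimensions) matter.
    """
    for _ in range(5):
        width, height = width // 2, height // 2
    return width, height

# The 256 x 192 input is compressed to an 8 x 6 feature map
# (with 512 channels in standard VGG-16) before flattening.
assert vgg16_feature_shape(256, 192) == (8, 6)
```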

Fine tuning was applied to increase the learning speed for high performance achievement even with limited data [41,42]. We used parameters from ImageNet as initial parameters of blocks 1 to 5.

The layers were updated using momentum stochastic gradient descent optimization (learning rate = 0.0005, momentum coefficient = 0.9) [43,44]. The mini-batch size was 32. Among the 20 DL models obtained in 20 learning cycles, the model with the highest classification accuracy on the available test data was selected as the final DL model in each split. To build and evaluate the model, we ran Keras (https://keras.io/ja/) on TensorFlow (https://www.tensorflow.org/), written in Python.
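For reference, one momentum-SGD update with the stated hyperparameters can be written in a few lines of numpy:

```python
import numpy as np

def sgd_momentum_step(w, v, grad, lr=0.0005, momentum=0.9):
    """One momentum-SGD update with the learning rate and momentum
    coefficient reported in the text."""
    v = momentum * v - lr * grad   # velocity accumulates past gradients
    return w + v, v

w = np.array([1.0, -2.0])
v = np.zeros(2)
grad = np.array([0.5, -0.5])

w, v = sgd_momentum_step(w, v, grad)
# First step: v = -lr * grad
assert np.allclose(v, [-0.00025, 0.00025])
w, v = sgd_momentum_step(w, v, grad)
# Second step adds 0.9 of the previous velocity to the new gradient term.
assert np.allclose(v, [-0.000475, 0.000475])
```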

Support vector machine model

We used the soft-margin SVM implemented in the scikit-learn library with the radial basis function kernel [45]. We reduced each image to a 10-dimensional feature vector. Optimal values of the cost parameter “C” of the SVM algorithm and the parameter “γ” of the radial basis function were determined by grid search with four-fold cross-validation, and the combination with the highest accuracy was selected in each split. The parameter values tested for C were 1, 10, 100, and 1000, and those for γ were 0.0001, 0.001, 0.01, 0.1, and 1. The optimized parameter values of C and γ in each split are described in S1 File.
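A compact scikit-learn sketch of this grid search, with toy 10-dimensional features standing in for the dimensionality-reduced OCTA images (the reduction step itself is not shown):

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Toy 10-dimensional features standing in for the reduced OCTA images.
rng = np.random.default_rng(0)
X = rng.normal(size=(80, 10))
y = (X[:, 0] + 0.1 * rng.normal(size=80) > 0).astype(int)

# Grid of C and gamma values from the text, searched with 4-fold CV.
param_grid = {"C": [1, 10, 100, 1000],
              "gamma": [0.0001, 0.001, 0.01, 0.1, 1]}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=4)
search.fit(X, y)

assert search.best_params_["C"] in param_grid["C"]
assert search.best_params_["gamma"] in param_grid["gamma"]
```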

Outcome

The area under the curve (AUC) of the receiver operating characteristic curve, sensitivity, and specificity were determined from the concatenated OCTA images using the DNN and SVM model described above.

Creation of the test application for ophthalmologist interpretation

We compared the diagnostic accuracy between the DNN and the ophthalmologists. All 322 concatenated OCTA images were included. The sensitivity, specificity, and required time were determined for the DNN and seven ophthalmologists. Details are shown in S2 File.

NPA assessment and required time

The seven ophthalmologists assessed the presence or absence of an NPA by reviewing the 322 concatenated OCTA images as displayed on a computer screen, without other images. Using a Microsoft Excel-based response form, each of the seven ophthalmologists entered the integer 0 or 1 directly into a computer. Details are shown in S3 File.

Statistical analysis

With regard to background demographic data, Student’s t-test was used to compare age, and Fisher’s exact test was used to compare the ratios of gender and left/right affected eyes between patients and normal subjects. These statistical analyses were performed using Python Scipy (https://www.scipy.org/), Python Statsmodels (http://www.statsmodels.org/stable/index.html) and R pROC (https://cran.r-project.org/web/packages/pROC/pROC.pdf). A p value of <0.05 was considered statistically significant.

The 95% confidence interval (CI) of the AUC was obtained as follows. The OCTA images judged to exceed a threshold were considered positive for RVO, and a receiver operating characteristic (ROC) curve was created. For the AUC, the 95% CI was obtained by assuming a normal distribution, AUC ± 1.96 × SE(AUC), where the standard error SE(AUC) was calculated with these equations [46]:

SE(AUC) = √{[AUC(1 − AUC) + (np − 1)(Q1 − AUC²) + (nn − 1)(Q2 − AUC²)] / (np nn)}

Q1 = AUC / (2 − AUC), Q2 = 2AUC² / (1 + AUC)

np … the number of RVO images, 174

nn … the number of normal images, 148
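Assuming ref. [46] is the Hanley–McNeil method (the standard normal-approximation AUC interval using np and nn as defined above), the calculation can be sketched as:

```python
import math

def auc_ci(auc, n_pos, n_neg):
    """95% CI for the AUC via the Hanley-McNeil standard error,
    assuming a normal distribution (an assumed reading of ref. [46])."""
    q1 = auc / (2 - auc)
    q2 = 2 * auc * auc / (1 + auc)
    se = math.sqrt((auc * (1 - auc)
                    + (n_pos - 1) * (q1 - auc ** 2)
                    + (n_neg - 1) * (q2 - auc ** 2)) / (n_pos * n_neg))
    return auc - 1.96 * se, auc + 1.96 * se

# The DNN's AUC with np = 174 RVO and nn = 148 normal images.
low, high = auc_ci(0.986, n_pos=174, n_neg=148)
assert low < 0.986 < high
assert high < 1.0
```

Under these assumptions the interval comes out close to the reported 0.974–0.999.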

In the RVO classification, for the calculation of sensitivity and specificity, an image for which the neural network output was higher than 0.5 was classified as RVO, and an image with an output lower than 0.5 was classified as normal. Additionally, regarding the sensitivity and specificity of the seven ophthalmologists, OCTA images were considered positive if four or more ophthalmologists judged them positive. The 95% CIs of the sensitivity and specificity were calculated assuming a binomial distribution. Fleiss’ kappa coefficient was used to assess the agreement among the seven ophthalmologists for NPA detection [47,48]. Fisher’s exact test was used to compare the sensitivity and specificity between the DNN, the SVM and the ophthalmologists.

Heat map

Overlay heat-map images of the DNN focus sites were created using the gradient-weighted class activation mapping (Grad-CAM) method on the corresponding RVO and non-RVO OCTA images [49]. In the current study, we used the Grad-CAM method to maximize the outputs of the second convolutional layer in block 2. In the backpropagation step, the loss function was modified with a rectified linear unit, which propagated only positive gradients. This process was performed using Python Keras-vis (https://raghakot.github.io/keras-vis/).
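A conceptual numpy sketch of the Grad-CAM computation follows; the study itself used Keras-vis, and the random activations and gradients here are stand-ins for a real network's.

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Grad-CAM heat map from one layer's activations and gradients.

    feature_maps: (H, W, C) activations of the chosen convolutional layer.
    gradients:    (H, W, C) gradients of the class score w.r.t. them.
    Channel weights are the spatially averaged gradients; the weighted
    sum is passed through a ReLU so only positive evidence survives.
    """
    weights = gradients.mean(axis=(0, 1))             # one weight per channel
    cam = np.tensordot(feature_maps, weights, axes=([2], [0]))
    cam = np.maximum(cam, 0)                          # ReLU
    if cam.max() > 0:
        cam = cam / cam.max()                         # normalize to [0, 1]
    return cam

rng = np.random.default_rng(0)
fmap = rng.random((12, 16, 8))       # stand-in layer activations
grad = rng.normal(size=(12, 16, 8))  # stand-in gradients
heat = grad_cam(fmap, grad)
assert heat.shape == (12, 16)
assert heat.min() >= 0.0 and heat.max() <= 1.0
```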

Results

The study included 322 OCTA images. Of these images, 174 were of eyes with NPA owing to RVO [170 patients (mean age: 71.4 ± 10.9 years); 90 eyes from men and 84 from women; 79 left and 95 right eyes; and 128 eyes with BRVO and 46 with CRVO], and 148 images were of normal eyes [147 subjects (mean age: 70.4 ± 10.8 years); 75 eyes from men and 73 from women; and 81 left and 67 right eyes]. No significant differences were detected between these two groups with respect to age, gender ratio, and left-right eye image ratio (p = 0.401, p = 0.911, and p = 0.117, respectively) (Table 1).

Table 1. Comparison of demographic variables between the nonperfusion area owing to retinal vein occlusion and normal groups.

https://doi.org/10.1371/journal.pone.0223965.t001

With regard to the detection of an NPA owing to RVO, the DNN had a sensitivity of 93.7% (95% CI, 89.0–96.8%), specificity of 97.3% (95% CI, 93.2–99.3%), AUC of 0.986 (95% CI, 0.974–0.999) and average required time of 176.9 s (95% CI, 172.4–180.2 s). The SVM had a sensitivity of 79.3% (95% CI, 72.5–85.1%), specificity of 81.1% (95% CI, 73.8–87.0%) and AUC of 0.880 (95% CI, 0.843–0.918) (Fig 4). The ophthalmologists had a sensitivity of 90.8% (95% CI, 85.5–94.7%), specificity of 89.2% (95% CI, 83.0–93.7%), AUC of 0.962 (95% CI, 0.942–0.983) and average required time of 700.6 s (95% CI, 585.2–816.0 s) (Table 2). The mean kappa coefficient among seven ophthalmologists for the detection of an NPA was 0.746 (95% CI, 0.725–0.766).

Fig 4. Receiver operating characteristic curves for the deep-learning model, the support vector machine model and the ophthalmologists.

The area under the curve is 0.986 in the deep-learning model, 0.880 in the support vector machine model and 0.962 in the seven ophthalmologists.

https://doi.org/10.1371/journal.pone.0223965.g004

Table 2. Comparison of the abilities of the deep convolutional neural network, support vector machine and ophthalmologists (n = 7) to detect a nonperfusion area.

https://doi.org/10.1371/journal.pone.0223965.t002

A composite image, comprising the fundus image superimposed with its corresponding heat map, was created by the DNN; these images showed that the DNN could accurately identify crucial areas in the fundus images. A representative composite image is presented in Fig 5. Red indicates the strength of DNN focus: the color intensity was high in the FAZ area and the NPA in the SCP and DCP OCTA images, with accumulation at the focal points. These results imply that the DNN might differentiate RVO eyes from normal eyes by identifying and highlighting the NPA.

Fig 5. Representative images obtained using optical coherence tomography angiography (OCTA) and their heat maps.

(A) Normal superficial capillary plexus (SCP) OCTA image, (B) normal deep capillary plexus (DCP) OCTA image, (C) heat map of the normal SCP OCTA image, (D) heat map of the normal DCP OCTA image, (E) SCP OCTA image with a nonperfusion area (NPA) owing to branch retinal vein occlusion (BRVO), (F) DCP OCTA image with an NPA owing to BRVO, (G) heat map of the SCP OCTA image with BRVO, (H) heat map of the DCP OCTA image with BRVO, (I) SCP OCTA image with an NPA owing to central retinal vein occlusion (CRVO), (J) DCP OCTA image with an NPA owing to CRVO, (K) heat map of the SCP OCTA image with CRVO and (L) heat map of the DCP OCTA image with CRVO. Red is used to indicate the strength of deep convolutional neural network focus. The color intensity is high at the area of the foveal avascular zone and NPA in SCP and DCP OCTA images; accumulation is noted at the focal points. The deep convolutional neural network focused on the foveal avascular zone and NPA.

https://doi.org/10.1371/journal.pone.0223965.g005

Discussion

Generally, FA is considered the gold standard for diagnosing and delineating the extent of retinal ischemia. However, the recent emergence of three-dimensional, noninvasive imaging using OCTA has provided an opportunity to quantify vessel density in a defined retinal area and to gauge its loss over time, either physiologically with aging or through an underlying vascular pathology [50–52]. Recent studies have identified measurable parameters, such as vessel density, capillary length, intercapillary distance and FAZ area, to quantify the degree of retinal ischemia and to longitudinally assess its progression [23,53–55].

With regard to the representation of the NPA and the FAZ area associated with RVO, the boundaries are clearer in OCTA images than in FA images [15,16]. In the present study, the performance of the DNN was significantly better than that of the SVM in all parameters (p < 0.01) and better than that of the ophthalmologists in specificity, AUC and average required time (p < 0.01). The combination of DL and OCTA images had high accuracy for the detection of an NPA, and it might be useful in clinical practice and retinal screening. Recent investigations have demonstrated a high AUC for detecting diabetic retinopathy on retinal fundus photography [21,29] and rhegmatogenous retinal detachment on ultra-widefield fundus ophthalmoscopy [30]. Moreover, in the radiological field, it has been proposed that perfusion image quality is better and perfusion measurement is more accurate with convolutional neural network techniques, such as a DL algorithm, than with the conventional averaging method for the generation of arterial spin labeling images from pairwise subtraction images [56]. In the present study, the AUC and required time for distinguishing between normal eyes and eyes with an NPA owing to RVO were better with the DNN than with ophthalmologist assessment. Guo et al. [33] reported that the NPA in OCTA images of diabetic retinopathy was segmented by DL. However, these authors detected the NPA in OCTA images using a manually segmented nonperfusion binary map. In our study, we did not use manual images to detect the NPA in OCTA images. The NPA in OCTA images was relatively clear because we used images with a narrow angle of view, and DL easily distinguished NPA OCTA images from normal OCTA images.

Retinal ischemia is a key prognostic factor in the management of various retinal diseases, including RVO. Several studies have demonstrated that decreases in both the SCP and DCP vessel density, fractal dimension and skeletal vessel density on OCTA are associated with RVO severity [16,17,57]. In fact, according to the heat maps, the DNN focused on the FAZ area in normal SCP and DCP OCTA images and the FAZ area and NPA in RVO SCP and DCP OCTA images. Our results indicate that the DNN has a classification ability that is equivalent to or greater than the ability of ophthalmologists. Therefore, the identification of an NPA using DL and OCTA is considered highly useful and clinically significant. The ability of DL to distinguish between RVO and normal eyes with high accuracy using automatically segmented OCTA images suggests the possibility of automatic diagnosis of eye disease by artificial intelligence in the future.

The present study had some limitations. First, we compared only OCTA images between normal eyes and RVO eyes and did not include OCTA images of other retinal diseases. Further studies involving other retinal diseases are required to confirm our findings. Additionally, for extensive evaluation of the performance and versatility of DL for the detection of an NPA, it will be necessary to use larger samples and include OCTA images of other retinal diseases. Second, the scan area of 3 × 3 mm was not large enough to detect the entire NPA associated with RVO. Wider ranges of examination areas may provide more conclusive evidence.

Conclusions

In conclusion, the combination of DL and OCTA images detected an NPA with high accuracy, and DL was useful for detecting an NPA in OCTA images. These findings suggest that further investigations are warranted to develop artificial intelligence that detects retinal ischemic disorders.

Supporting information

S1 File. The optimized parameter values of cost and gamma in each split.

https://doi.org/10.1371/journal.pone.0223965.s001

(XLSX)

S2 File. The diagnostic accuracy between the deep convolutional neural network and ophthalmologists.

https://doi.org/10.1371/journal.pone.0223965.s002

(CSV)

S3 File. The seven ophthalmologists’ assessments of the presence or absence of the nonperfusion area.

https://doi.org/10.1371/journal.pone.0223965.s003

(CSV)

Acknowledgments

We thank Masayuki Miki and the orthoptists at Tsukazaki Hospital for support in collecting the data.

References

  1. 1. Rogers S, McIntosh RL, Cheung N, Lim L, Wang JJ, Mitchell P, et al. The prevalence of retinal vein occlusion: pooled data from population studies from the United States, Europe, Asia, and Australia. Ophthalmology. 2010;117: 313–319.e311. pmid:20022117
  2. 2. Yasuda M, Kiyohara Y, Arakawa S, Hata Y, Yonemoto K, Doi Y, et al. Prevalence and systemic risk factors for retinal vein occlusion in a general Japanese population: the Hisayama study. Invest Ophthalmol Vis Sci. 2010;51: 3205–3209. pmid:20071683
  3. 3. Campa C, Alivernini G, Bolletta E, Parodi MB, Perri P. Anti-VEGF therapy for retinal vein occlusions. Curr Drug Targets. 2016;17: 328–336. pmid:26073857
  4. 4. MacDonald D. The ABCs of RVO: a review of retinal venous occlusion. Clin Exp Optom. 2014;97: 311–323. pmid:24256639
  5. 5. Rogers SL, McIntosh RL, Lim L, Mitchell P, Cheung N, Kowalski JW, et al. Natural history of branch retinal vein occlusion: an evidence-based systematic review. Ophthalmology. 2010;117: 1094–1101.e1095. pmid:20430447
  6. 6. Boyer D, Heier J, Brown DM, Clark WL, Vitti R, Berliner AJ, et al. Vascular endothelial growth factor trap-eye for macular edema secondary to central retinal vein occlusion: six-month results of the phase 3 COPERNICUS study. Ophthalmology. 2012;119: 1024–1032. pmid:22440275
  7. 7. Brown DM, Campochiaro PA, Singh RP, Li Z, Gray S, Saroj N, et al. Ranibizumab for macular edema following central retinal vein occlusion: six-month primary end point results of a phase III study. Ophthalmology. 2010;117: 1124–1133.e1121. pmid:20381871
  8. 8. Brown DM, Heier JS, Clark WL, Boyer DS, Vitti R, Berliner AJ, et al. Intravitreal aflibercept injection for macular edema secondary to central retinal vein occlusion: 1-year results from the phase 3 COPERNICUS study. Am J Ophthalmol. 2013;155: 429–437.e427. pmid:23218699
  9. 9. Campochiaro PA, Brown DM, Awh CC, Lee SY, Gray S, Saroj N, et al. Sustained benefits from ranibizumab for macular edema following central retinal vein occlusion: twelve-month outcomes of a phase III study. Ophthalmology. 2011;118: 2041–2049. pmid:21715011
  10. 10. Holz FG, Roider J, Ogura Y, Korobelnik JF, Simader C, Groetzbach G, et al. VEGF trap-eye for macular oedema secondary to central retinal vein occlusion: 6-month results of the phase III GALILEO study. Br J Ophthalmol. 2013;97: 278–284. pmid:23298885
  11. 11. Brown DM, Campochiaro PA, Bhisitkul RB, Ho AC, Gray S, Saroj N, et al. Sustained benefits from ranibizumab for macular edema following branch retinal vein occlusion: 12-month outcomes of a phase III study. Ophthalmology. 2011;118: 1594–1602. pmid:21684606
  12. 12. Heier JS, Campochiaro PA, Yau L, Li Z, Saroj N, Rubio RG, et al. Ranibizumab for macular edema due to retinal vein occlusions: long-term follow-up in the HORIZON trial. Ophthalmology. 2012;119: 802–809. pmid:22301066
  13. 13. Jia Y, Tan O, Tokayer J, Potsaid B, Wang Y, Liu JJ, et al. Split-spectrum amplitude-decorrelation angiography with optical coherence tomography. Opt Express. 2012;20: 4710–4725. pmid:22418228
  14. 14. Takase N, Nozaki M, Kato A, Ozeki H, Yoshida M, Ogura Y. Enlargement of foveal avascular zone in diabetic eyes evaluated by en face optical coherence tomography angiography. Retina. 2015;35: 2377–2383. pmid:26457396
  15. 15. Suzuki N, Hirano Y, Yoshida M, Tomiyasu T, Uemura A, Yasukawa T, et al. Microvascular abnormalities on optical coherence tomography angiography in macular edema associated with branch retinal vein occlusion. Am J Ophthalmol. 2016;161: 126–132.e121. pmid:26454243
  16. 16. Coscas F, Glacet-Bernard A, Miere A, Caillaux V, Uzzan J, Lupidi M, et al. Optical coherence tomography angiography in retinal vein occlusion: evaluation of superficial and deep capillary plexa. Am J Ophthalmol. 2016;161: 160–171.e161-e162. pmid:26476211
  17. 17. Balaratnasingam C, Inoue M, Ahn S, McCann J, Dhrami-Gavazi E, Yannuzzi LA, et al. Visual acuity is correlated with the area of the foveal avascular zone in diabetic retinopathy and retinal vein occlusion. Ophthalmology. 2016;123: 2352–2367. pmid:27523615
  18. 18. LeCun Y, Bengio Y, Hinton G. Deep learning. Nature. 2015;521: 436–444. pmid:26017442
  19. 19. Liu S, Liu S, Cai W, Che H, Pujol S, Kikinis R, et al. Multimodal neuroimaging feature learning for multiclass diagnosis of Alzheimer’s disease. IEEE Trans Biomed Eng. 2015;62: 1132–1140. pmid:25423647
  20. 20. Litjens G, Sanchez CI, Timofeeva N, Hermsen M, Nagtegaal I, Kovacs I, et al. Deep learning as a tool for increased accuracy and efficiency of histopathological diagnosis. Sci Rep. 2016;6: 26286. pmid:27212078
  21. 21. Gulshan V, Peng L, Coram M, Stumpe MC, Wu D, Narayanaswamy A, et al. Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. JAMA. 2016;316: 2402–2410. pmid:27898976
  22. 22. Pinaya WH, Gadelha A, Doyle OM, Noto C, Zugman A, Cordeiro Q, et al. Using deep belief network modelling to characterize differences in brain morphometry in schizophrenia. Sci Rep. 2016;6: 38897. pmid:27941946
  23. 23. Hosny KM, Kassem MA, Foaud MM. Classification of skin lesions using transfer learning and augmentation with Alex-net. PLoS One. 2019;14: e0217293. pmid:31112591
  24. Yu C, Yang S, Kim W, Jung J, Chung KY, Lee SW, et al. Acral melanoma detection using a convolutional neural network for dermoscopy images. PLoS One. 2018;13: e0193321. pmid:29513718
  25. Zielinski B, Plichta A, Misztal K, Spurek P, Brzychczy-Wloch M, Ochonska D. Deep learning approach to bacterial colony classification. PLoS One. 2017;12: e0184554. pmid:28910352
  26. Yu C, Yang S, Kim W, Jung J, Chung KY, Lee SW, et al. Acral melanoma detection using a convolutional neural network for dermoscopy images. PLoS One. 2018;13: e0193321. pmid:29513718
  27. Hosny KM, Kassem MA, Foaud MM. Classification of skin lesions using transfer learning and augmentation with Alex-net. PLoS One. 2019;14: e0217293. pmid:31112591
  28. Zielinski B, Plichta A, Misztal K, Spurek P, Brzychczy-Wloch M, Ochonska D. Deep learning approach to bacterial colony classification. PLoS One. 2017;12: e0184554. pmid:28910352
  29. Gargeya R, Leng T. Automated identification of diabetic retinopathy using deep learning. Ophthalmology. 2017;124: 962–969. pmid:28359545
  30. Ohsugi H, Tabuchi H, Enno H, Ishitobi N. Accuracy of deep learning, a machine-learning technology, using ultra-wide-field fundus ophthalmoscopy for detecting rhegmatogenous retinal detachment. Sci Rep. 2017;7: 9425. pmid:28842613
  31. Nagasato D, Tabuchi H, Ohsugi H, Masumoto H, Enno H, Ishitobi N, et al. Deep-learning classifier with ultrawide-field fundus ophthalmoscopy for detecting branch retinal vein occlusion. Int J Ophthalmol. 2019;12: 94–99. pmid:30662847
  32. Nagasato D, Tabuchi H, Ohsugi H, Masumoto H, Enno H, Ishitobi N, et al. Deep neural network-based method for detecting central retinal vein occlusion using ultrawide-field fundus ophthalmoscopy. J Ophthalmol. 2018;2018: 1875431. pmid:30515316
  33. Guo Y, Camino A, Wang J, Huang D, Hwang TS, Jia Y. MEDnet, a neural network for automated detection of avascular area in OCT angiography. Biomed Opt Express. 2018;9: 5147–5158. pmid:30460119
  34. Mosteller F, Tukey JW. Data analysis, including statistics. In: Lindzey G, Aronson E, editors. Handbook of social psychology. Reading, MA: Addison-Wesley; 1968. pp. 80–203.
  35. Kohavi R. A study of cross-validation and bootstrap for accuracy estimation and model selection. In: Proceedings of the 14th International joint conference on artificial intelligence. Montreal, Quebec, Canada: Morgan Kaufmann Publishers Inc.; 1995. pp. 1137–1143.
  36. Deng J, Dong W, Socher R, Li L, Kai L, Li F-F. ImageNet: a large-scale hierarchical image database. In: 2009 IEEE conference on computer vision and pattern recognition. Miami, FL: IEEE; 2009. pp. 248–255.
  37. Russakovsky O, Deng J, Su H, Krause J, Satheesh S, Ma S, et al. ImageNet large scale visual recognition challenge. Int J Comp Vision. 2015;115: 211–252.
  38. Lee CY, Xie S, Gallagher P, Zhang Z, Tu Z. Deeply-supervised nets. In: Proceedings of the 18th International conference on artificial intelligence and statistics (AISTATS). San Diego, CA, USA: Journal of Machine Learning Research Workshop and Conference Proceedings; 2015. pp. 562–570.
  39. Scherer D, Müller A, Behnke S. Evaluation of pooling operations in convolutional architectures for object recognition. In: Diamantaras K, Duch W, Iliadis LS, editors. Artificial neural networks–ICANN 2010. Berlin, Heidelberg: Springer; 2010. pp. 92–101.
  40. Glorot X, Bordes A, Bengio Y. Deep sparse rectifier neural networks. In: Proceedings of the 14th International conference on artificial intelligence and statistics. Fort Lauderdale, FL: PMLR; 2011. pp. 315–323.
  41. Redmon J, Divvala S, Girshick R, Farhadi A. You only look once: unified, real-time object detection. In: Proceedings of the IEEE conference on computer vision and pattern recognition. Piscataway, NJ: IEEE; 2016. pp. 779–788.
  42. Agrawal P, Girshick R, Malik J. Analyzing the performance of multilayer neural networks for object recognition. In: Fleet D, Pajdla T, Schiele B, Tuytelaars T, editors. Computer vision–ECCV 2014. Cham: Springer International Publishing; 2014. pp. 329–344.
  43. Qian N. On the momentum term in gradient descent learning algorithms. Neural Networks. 1999;12: 145–151. pmid:12662723
  44. Nesterov Y. A method for unconstrained convex minimization problem with the rate of convergence O(1/k^2). Doklady AN USSR. 1983;269: 543–547.
  45. Brereton RG, Lloyd GR. Support vector machines for classification and regression. Analyst. 2010;135: 230–267. pmid:20098757
  46. Hanley JA, McNeil BJ. The meaning and use of the area under a receiver operating characteristic (ROC) curve. Radiology. 1982;143: 29–36. pmid:7063747
  47. Fleiss JL. Measuring nominal scale agreement among many raters. Psychol Bull. 1971;76: 378–382.
  48. Bartko JJ, Carpenter WT Jr. On the methods and theory of reliability. J Nerv Ment Dis. 1976;163: 307–317. pmid:978187
  49. Akobeng AK. Understanding diagnostic tests 3: receiver operating characteristic curves. Acta Paediatr. 2007;96: 644–647. pmid:17376185
  50. Lee J, Rosen R. Optical coherence tomography angiography in diabetes. Curr Diab Rep. 2016;16: 123. pmid:27766583
  51. Hwang TS, Gao SS, Liu L, Lauer AK, Bailey ST, Flaxel CJ, et al. Automated quantification of capillary nonperfusion using optical coherence tomography angiography in diabetic retinopathy. JAMA Ophthalmol. 2016;134: 367–373. pmid:26795548
  52. Shariati MA, Park JH, Liao YJ. Optical coherence tomography study of retinal changes in normal aging and after ischemia. Invest Ophthalmol Vis Sci. 2015;56: 2790–2797. pmid:25414186
  53. Fan W, Wang K, Falavarjani KG, Sagong M, Uji A, Ip M, et al. Distribution of nonperfusion area on ultra-widefield fluorescein angiography in eyes with diabetic macular edema: DAVE study. Am J Ophthalmol. 2017;180: 110–116. pmid:28579062
  54. Tang FY, Ng DS, Lam A, Luk F, Wong R, Chan C, et al. Determinants of quantitative optical coherence tomography angiography metrics in patients with diabetes. Sci Rep. 2017;7: 2575. pmid:28566760
  55. Kim K, Kim ES, Yu SY. Optical coherence tomography angiography analysis of foveal microvascular changes and inner retinal layer thinning in patients with diabetes. Br J Ophthalmol. 2018;102: 1226–1231. pmid:29259019
  56. Kim KH, Choi SH, Park SH. Improving arterial spin labeling by using deep learning. Radiology. 2018;287: 658–666. pmid:29267145
  57. Koulisis N, Kim AY, Chu Z, Shahidzadeh A, Burkemper B, de Koo LCO, et al. Quantitative microvascular analysis of retinal venous occlusions by spectral domain optical coherence tomography angiography. PLoS One. 2017;12: e0176404. pmid:28437483