
Deep learning in photoacoustic imaging

From Wikipedia, the free encyclopedia
Depiction of photoacoustic tomography

Deep learning in photoacoustic imaging combines the hybrid imaging modality of photoacoustic imaging (PA) with the rapidly evolving field of deep learning. Photoacoustic imaging is based on the photoacoustic effect, in which optical absorption causes a rise in temperature, which causes a subsequent rise in pressure via thermo-elastic expansion.[1] This pressure rise propagates through the tissue and is sensed via ultrasonic transducers. Due to the proportionality between the optical absorption, the rise in temperature, and the rise in pressure, the ultrasound pressure wave signal can be used to quantify the original optical energy deposition within the tissue.[2]
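
The proportionality chain described above is commonly summarized as p0 = Γ·μa·F, where Γ is the dimensionless Grüneisen parameter, μa the optical absorption coefficient, and F the local optical fluence. A minimal sketch with illustrative values (the numbers are assumptions for demonstration, not taken from the cited literature):

```python
# Initial pressure rise from the photoacoustic effect: p0 = Gamma * mu_a * F.
# All values below are illustrative assumptions.
gamma = 0.2      # Grueneisen parameter (dimensionless), plausible for soft tissue
mu_a = 0.5       # optical absorption coefficient, 1/cm
fluence = 10.0   # local optical fluence, mJ/cm^2

p0 = gamma * mu_a * fluence   # initial pressure rise, mJ/cm^3
print(p0)                     # 1.0
```

Since 1 mJ/cm³ equals 1 kPa, this toy energy deposition corresponds to an initial pressure rise of about 1 kPa.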

Deep learning has been applied in both major implementations of photoacoustic imaging: photoacoustic computed tomography (PACT) and photoacoustic microscopy (PAM). PACT utilizes wide-field optical excitation and an array of unfocused ultrasound transducers.[1] Similar to other computed tomography methods, the sample is imaged from multiple view angles, and an inverse reconstruction algorithm based on the detection geometry (typically universal backprojection,[3] modified delay-and-sum,[4] or time reversal [5][6]) is used to recover the initial pressure distribution within the tissue. PAM, on the other hand, uses focused ultrasound detection combined with weakly focused optical excitation (acoustic-resolution PAM, or AR-PAM) or tightly focused optical excitation (optical-resolution PAM, or OR-PAM).[7] PAM typically captures images point-by-point via a mechanical raster-scanning pattern. At each scanned point, the acoustic time-of-flight provides axial resolution, while the acoustic focusing yields lateral resolution.[1]
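
The delay-and-sum principle behind PACT beamforming can be sketched in a few lines: for each image pixel, every channel is sampled at the acoustic time-of-flight from that pixel to the corresponding element, and the samples are summed. The toy example below (idealized linear array, single point source, noiseless spike data, all parameters assumed) illustrates the principle only, not the modified delay-and-sum algorithm of the cited work:

```python
import numpy as np

c = 1500.0                                 # assumed speed of sound, m/s
fs = 40e6                                  # assumed channel sampling rate, Hz
elements = np.linspace(-0.01, 0.01, 32)    # x-positions of 32 array elements, m
src = (0.002, 0.015)                       # true point source (x, z), m

# Simulate ideal channel data: a unit spike at each element's time-of-flight.
n_samples = 2048
data = np.zeros((len(elements), n_samples))
for i, ex in enumerate(elements):
    tof = np.hypot(src[0] - ex, src[1]) / c          # one-way time-of-flight, s
    data[i, int(round(tof * fs))] = 1.0

def delay_and_sum(data, px, pz):
    """Coherently sum every channel at the delay predicted for pixel (px, pz)."""
    total = 0.0
    for i, ex in enumerate(elements):
        idx = int(round(np.hypot(px - ex, pz) / c * fs))
        total += data[i, idx]
    return total

print(delay_and_sum(data, 0.002, 0.015))   # 32.0: all 32 channels align on-target
print(delay_and_sum(data, -0.005, 0.010))  # much smaller off-target
```

The beamformed amplitude peaks where the predicted delays match the true arrival times, which is how the initial pressure distribution is mapped back into image space.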

Applications of deep learning in PACT

The first application of deep learning in PACT was by Reiter et al.,[8] in which a deep neural network was trained to learn spatial impulse responses and locate photoacoustic point sources. The resulting mean axial and lateral point-location errors on 2,412 randomly selected test images were 0.28 mm and 0.37 mm, respectively. After this initial implementation, applications of deep learning in PACT have branched out primarily into removing artifacts caused by acoustic reflections,[9] sparse sampling,[10][11][12] limited view,[13][14][15] and limited bandwidth.[16][14][17][18] There has also been some recent work in PACT toward using deep learning for wavefront localization.[19] Fusion-based networks, which combine information from two different reconstructions, have also been used to improve the reconstructed image.[20]

Using deep learning to locate photoacoustic point sources

Traditional photoacoustic beamforming techniques modeled photoacoustic wave propagation using the detector-array geometry and the time-of-flight to account for differences in the PA signal's arrival time. However, this approach failed to account for reverberant acoustic signals caused by acoustic reflection, resulting in reflection artifacts that corrupt the true point-source location information. In Reiter et al.,[8] a convolutional neural network (similar to a simple VGG-16-style [21] architecture) took pre-beamformed photoacoustic data as input and output a classification result specifying the 2-D point-source location.

Deep learning for PA wavefront localization

Johnstonbaugh et al.[19] localized the source of photoacoustic wavefronts with a deep neural network. The network used was an encoder-decoder style convolutional neural network, built from residual convolution, upsampling, and high-field-of-view convolution modules; a Nyquist convolution layer and a differentiable spatial-to-numerical transform layer were also used within the architecture. Simulated PA wavefronts served as the input for training the model. To create the wavefronts, the forward simulation of light propagation was done with the NIRFast toolbox and the light-diffusion approximation, while the forward simulation of sound propagation was done with the k-Wave toolbox. The simulated wavefronts were subjected to different scattering media and Gaussian noise. The output of the network was an artifact-free heat map of the target's axial and lateral position. The network had a mean error of less than 30 microns when localizing targets at depths below 40 mm, and a mean error of 1.06 mm for targets between 40 mm and 60 mm deep.[19] With a slight modification, the model was able to accommodate multi-target localization.[19] In a validation experiment, pencil lead was submerged in an Intralipid solution at a depth of 32 mm, and the network was able to localize the lead's position when the solution had a reduced scattering coefficient of 0, 5, 10, or 15 cm−1.[19] The results of the network show improvements over standard delay-and-sum or frequency-domain beamforming algorithms, and Johnstonbaugh proposes that this technology could be used for optical wavefront shaping, circulating melanoma cell detection, and real-time vascular surgeries.[19]
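
The differentiable spatial-to-numerical transform layer mentioned above turns a heat map into numerical coordinates while keeping the operation differentiable, essentially a probability-weighted average of pixel coordinates (a "soft argmax"). A minimal numpy sketch of the idea, not the authors' implementation:

```python
import numpy as np

def dsnt(heat_map):
    """Soft argmax: expected (lateral, axial) position under the softmax map."""
    h, w = heat_map.shape
    p = np.exp(heat_map - heat_map.max())
    p /= p.sum()                               # normalized probability map
    xs, zs = np.meshgrid(np.arange(w), np.arange(h))
    return (p * xs).sum(), (p * zs).sum()      # expected coordinates

hm = np.zeros((64, 64))
hm[40, 10] = 50.0                              # sharp peak: axial 40, lateral 10
x, z = dsnt(hm)
print(round(x), round(z))                      # 10 40
```

Because the output is a weighted sum rather than a hard index lookup, gradients flow through it during training, which is what makes heat-map-based coordinate regression trainable end-to-end.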

Removing acoustic reflection artifacts (in the presence of multiple sources and channel noise)

Building on the work of Reiter et al.,[8] Allman et al.[9] utilized a full VGG-16 [21] architecture to locate point sources and remove reflection artifacts within raw photoacoustic channel data in the presence of multiple sources and channel noise. The network was trained on simulated data produced with the MATLAB k-Wave library, and its results were later confirmed on experimental data.

Ill-posed PACT reconstruction

In PACT, tomographic reconstruction is performed, in which projections from multiple solid angles are combined to form an image. When reconstruction methods like filtered backprojection or time reversal are applied to data sampled below the Nyquist-Shannon requirement, or acquired with limited bandwidth or a limited view, the inverse problem is ill-posed [22] and the resulting reconstruction contains image artifacts. Traditionally, these artifacts were removed with slow iterative methods like total variation minimization, but the advent of deep learning approaches has opened a new avenue that utilizes a priori knowledge from network training to remove them. In the deep learning methods that seek to remove these sparse-sampling, limited-bandwidth, and limited-view artifacts, the typical workflow first applies the ill-posed reconstruction technique to transform the pre-beamformed data into a 2-D representation of the initial pressure distribution that contains artifacts. A convolutional neural network (CNN) is then trained to remove the artifacts and produce an artifact-free representation of the ground-truth initial pressure distribution.

Using deep learning to remove sparse sampling artifacts

When the density of uniform tomographic view angles is below what is prescribed by the Nyquist-Shannon sampling theorem, the imaging system is said to be performing sparse sampling. Sparse sampling typically occurs as a way of keeping production costs low and improving image-acquisition speed.[10] The typical network architectures used to remove sparse-sampling artifacts are U-net[10][12] and Fully Dense (FD) U-net.[11] Both of these architectures contain a compression and a decompression phase. The compression phase learns to compress the image to a latent representation that lacks the imaging artifacts and other fine details.[23] The decompression phase then combines this representation with information passed by the residual connections in order to add back image details without adding in the details associated with the artifacts.[23] FD U-net modifies the original U-net architecture by including dense blocks that allow layers to utilize information learned by previous layers within the dense block.[11] Another technique was proposed that uses a simple CNN-based architecture to remove artifacts and improve the k-Wave image reconstruction.[17]
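
The compression/decompression structure can be caricatured without any learned weights: downsampling discards fine detail to form the latent representation, upsampling restores the grid, and the skip connection re-injects full-resolution detail. The sketch below is a toy illustration of this data flow only; real U-nets use trained convolutional layers, which is the essential difference:

```python
import numpy as np

def down(x):
    """'Compression': 2x2 average pooling removes fine detail."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def up(x):
    """'Decompression': nearest-neighbour upsampling restores the grid size."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def unet_like(x):
    skip = x                              # skip path carries full-resolution detail
    latent = down(x)                      # latent representation: coarse content only
    return 0.5 * (up(latent) + skip)      # decoder merges both information paths

out = unet_like(np.random.rand(8, 8))
print(out.shape)                          # (8, 8)
```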

Removing limited-view artifacts with deep learning

When a portion of the full solid angle of view is not captured, generally due to geometric limitations, the image acquisition is said to have a limited view.[24] As illustrated by the experiments of Davoudi et al.,[12] limited-view corruptions can be observed directly as missing information in the frequency domain of the reconstructed image. Limited view, like sparse sampling, makes the initial reconstruction problem ill-posed. Prior to deep learning, the limited-view problem was addressed with complex hardware such as acoustic deflectors[25] and full ring-shaped transducer arrays,[12][26] as well as algorithmic solutions such as compressed sensing,[27][28][29][30][31] weighting factors,[32] and iterative filtered backprojection.[33][34] The result of this ill-posed reconstruction is imaging artifacts that can be removed by CNNs. The deep learning algorithms used to remove limited-view artifacts include U-net[12][15][35] and FD U-net,[36] as well as generative adversarial networks (GANs)[14] and volumetric versions of U-net.[13] One notable GAN implementation improved upon U-net by using U-net as the generator and VGG as the discriminator, with the Wasserstein metric and a gradient penalty to stabilize training (WGAN-GP).[14]
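
The WGAN-GP gradient penalty adds a term λ(‖∇x̂ D(x̂)‖₂ − 1)² evaluated at points x̂ interpolated between real and generated samples. For a linear critic D(x) = w·x the gradient with respect to the input is simply w, which lets the formula be written out without an autodiff framework. The numpy sketch below illustrates the penalty only (toy data, linear critic, no training loop; λ = 10 is the value used in the original WGAN-GP paper):

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=16)                # weights of a toy linear critic D(x) = w @ x

real = rng.normal(size=16)             # stand-in for a real sample
fake = rng.normal(size=16)             # stand-in for a generated sample
eps = rng.uniform()
x_hat = eps * real + (1 - eps) * fake  # random interpolate, as in WGAN-GP

grad = w                               # d/dx (w @ x) = w, independent of x_hat
lam = 10.0                             # penalty weight from the WGAN-GP paper
penalty = lam * (np.linalg.norm(grad) - 1.0) ** 2
print(penalty >= 0.0)                  # True: penalty vanishes only at unit norm
```

In a real implementation the critic is a deep network, so ∇x̂ D(x̂) must be computed by automatic differentiation at each interpolated point rather than read off in closed form.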

Pixel-wise interpolation and deep learning for faster reconstruction of limited-view signals

Guan et al.[36] applied an FD U-net to remove artifacts from simulated limited-view reconstructed PA images. PA images reconstructed via time reversal from data collected with 16, 32, or 64 sensors served as the network input, and the ground-truth images served as the desired output. The network was able to remove the artifacts created by the time-reversal process from synthetic, mouse-brain, fundus, and lung-vasculature phantoms.[36] This process was similar to the work on clearing artifacts from sparse and limited-view images done by Davoudi et al.[12] To improve reconstruction speed and to allow the FD U-net to use more of the sensor information, Guan et al. proposed using a pixel-wise interpolation as the network input instead of a reconstructed image.[36] A pixel-wise interpolation removes the need to produce an initial image, which may discard small details or make them unrecoverable by obscuring them with artifacts. To create the pixel-wise interpolation, the time-of-flight for each pixel was calculated using the wave-propagation equation. Next, a reconstruction grid was created from the pressure measurements sampled at the pixels' times-of-flight. Using this reconstruction grid as input, the FD U-net was able to create artifact-free reconstructed images. This pixel-wise interpolation method was faster and achieved better peak signal-to-noise ratios (PSNR) and structural similarity index measures (SSIM) than the artifact-free images created when the time-reversal images served as the FD U-net input.[36] It was also significantly faster than, with PSNR and SSIM comparable to, images reconstructed with the computationally intensive iterative approach.[36] The pixel-wise method proposed in this study was only demonstrated for in silico experiments with a homogeneous medium, but Guan posits that it can be used for real-time PAT rendering.[36]
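
The pixel-wise interpolation input can be sketched as follows: compute each pixel's acoustic time-of-flight to every sensor (here under a constant-speed assumption) and sample each raw pressure trace at that delay, producing a sensors-by-pixels grid with no intermediate image reconstruction. All geometry and parameters below are illustrative assumptions, not those of the cited study:

```python
import numpy as np

c, fs = 1500.0, 40e6                       # assumed speed of sound (m/s) and rate (Hz)
sensors = [(-0.01, 0.0), (0.0, 0.0), (0.01, 0.0)]   # sensor (x, z) positions, m
traces = np.random.rand(len(sensors), 4096)         # stand-in raw channel data

# Image grid of pixel coordinates.
xs = np.linspace(-0.005, 0.005, 16)
zs = np.linspace(0.005, 0.02, 16)
px, pz = np.meshgrid(xs, zs)

# One row per sensor, one column per pixel: that sensor's pressure sample
# at the pixel's time-of-flight.  This grid, not an image, feeds the network.
grid = np.empty((len(sensors), px.size))
for i, (sx, sz) in enumerate(sensors):
    tof = np.hypot(px - sx, pz - sz) / c   # per-pixel time-of-flight, s
    idx = np.round(tof * fs).astype(int)   # nearest raw-data sample per pixel
    grid[i] = traces[i, idx.ravel()]

print(grid.shape)                          # (3, 256)
```

Because no beamforming sum has been performed, no detail has yet been discarded or obscured; the network sees every sensor's raw contribution to every pixel.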

Limited-bandwidth artifact removal with deep neural networks

The limited-bandwidth problem results from the ultrasound transducer array's limited detection-frequency bandwidth. The transducer array acts like a band-pass filter in the frequency domain, attenuating both high and low frequencies within the photoacoustic signal.[15][16] This limited bandwidth can cause artifacts and limit the axial resolution of the imaging system.[14] The primary deep neural network architectures used to remove limited-bandwidth artifacts have been WGAN-GP[14] and a modified U-net.[15][16] Before deep learning, the typical method for removing artifacts and denoising limited-bandwidth reconstructions was Wiener filtering, which helps to expand the PA signal's frequency spectrum.[14] The primary advantage of the deep learning approach over Wiener filtering is that Wiener filtering requires a high initial signal-to-noise ratio (SNR), which is not always available, while the deep learning model has no such restriction.[14]
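
Wiener filtering, the classical baseline named here, deconvolves the transducer's band-pass response in the frequency domain; the 1/SNR term in its denominator is what makes it depend on a high initial SNR. A generic numpy sketch (the three-tap band-limiting response and the SNR value are assumptions for illustration):

```python
import numpy as np

def wiener_deconvolve(signal, h, snr):
    """Frequency-domain Wiener filter: W = H* / (|H|^2 + 1/SNR)."""
    spec = np.fft.fft(signal)
    resp = np.fft.fft(h, n=len(signal))
    w = np.conj(resp) / (np.abs(resp) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft(spec * w))

x = np.zeros(256)
x[100] = 1.0                                # ideal broadband PA spike
h = np.array([0.25, 0.5, 0.25])             # stand-in band-limited response
blurred = np.convolve(x, h)[:256]           # band-limited measurement
restored = wiener_deconvolve(blurred, h, snr=1e4)
print(int(np.argmax(restored)))             # 100: spike recovered in place
```

As the SNR estimate drops, the 1/SNR term grows and the filter suppresses exactly the frequency bands the transducer attenuated, so low-SNR data cannot be fully restored; this is the limitation the deep learning models avoid.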

Fusion of information for improving photoacoustic images with deep neural networks

Fusion-based architectures exploit complementary information from different reconstruction algorithms to improve photoacoustic image reconstruction.[20] Because different reconstruction techniques promote different characteristics in the output, the image quality and characteristics vary with the technique used.[20] A fusion-based architecture was proposed to combine the outputs of two different reconstructions, giving better image quality than either reconstruction alone. It includes weight sharing and fusion of characteristics to achieve the desired improvement in the output image quality.[20]

Deep learning to improve penetration depth of PA images

High-energy lasers allow light to reach deep into tissue, making deep structures visible in PA images; for wavelengths between 690 and 900 nm, they provide roughly 8 mm greater penetration depth than low-energy lasers.[35] The American National Standards Institute has set a maximum permissible exposure (MPE) for different biological tissues, and lasers operated above the MPE can cause mechanical or thermal damage to the tissue being imaged.[35] Manwar et al. increased the penetration depth achievable with low-energy lasers that meet the MPE standard by applying a U-net architecture to the images they produce.[35] The network was trained with images of an ex vivo sheep brain created by a low-energy (20 mJ) laser as the input, and images of the same sheep brain created by a high-energy (100 mJ) laser, 20 mJ above the MPE, as the desired output. A perceptually sensitive loss function was used to train the network to improve the low signal-to-noise ratio of PA images created by the low-energy laser. The trained network increased the peak-to-background ratio by 4.19 dB and the penetration depth by 5.88% for images of an in vivo sheep brain created by the low-energy laser.[35] Manwar suggests that this technology could be beneficial in neonatal brain imaging, where transfontanelle imaging is possible, to look for lesions or injury.

Applications of deep learning in PAM

Depiction of mechanical raster scanning method

Photoacoustic microscopy differs from other forms of photoacoustic tomography in that it uses focused ultrasound detection to acquire images pixel-by-pixel. PAM images are acquired as time-resolved volumetric data that is typically mapped to a 2-D projection via a Hilbert transform and maximum amplitude projection (MAP).[1] The first application of deep learning to PAM took the form of a motion-correction algorithm.[37] It was proposed to correct the PAM artifacts that occur when an in vivo model moves during scanning, movement that creates the appearance of vessel discontinuities.
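
The Hilbert-transform-plus-MAP step that flattens the time-resolved volume into a 2-D image can be sketched with an FFT-based analytic-signal computation (toy sinusoidal A-lines stand in for measured PAM data):

```python
import numpy as np

def envelope(a_line):
    """Amplitude envelope via the analytic signal (FFT-based Hilbert transform)."""
    n = len(a_line)                    # assumed even here for simplicity
    spec = np.fft.fft(a_line)
    h = np.zeros(n)
    h[0], h[n // 2] = 1.0, 1.0         # keep DC and Nyquist bins
    h[1:n // 2] = 2.0                  # double the positive frequencies
    return np.abs(np.fft.ifft(spec * h))

# Toy volume: 4 x 4 scan positions, 256 time samples per A-line.
t = np.arange(256)
volume = np.sin(2 * np.pi * 0.125 * t) * np.ones((4, 4, 1))

env = np.apply_along_axis(envelope, -1, volume)   # per-pixel amplitude envelope
map_image = env.max(axis=-1)                      # maximum amplitude projection
print(map_image.shape)                            # (4, 4): one value per scan point
```

The envelope discards the oscillating acoustic carrier, and the projection keeps each A-line's strongest response, collapsing depth into a single en-face pixel value.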

Deep learning to remove motion artifacts in PAM

The two primary motion-artifact types addressed by deep learning in PAM are displacements in the vertical and tilted directions. Chen et al.[37] used a simple three-layer convolutional neural network, with each layer represented by a weight matrix and a bias vector, to remove the PAM motion artifacts. Two of the convolutional layers use ReLU activation functions, while the last has no activation function.[37] Using this architecture, kernel sizes of 3 × 3, 4 × 4, and 5 × 5 were tested, with the largest kernel size of 5 × 5 yielding the best results.[37] After training, the motion-correction model performed well on both simulated and in vivo data.[37]

Deep learning-assisted frequency-domain PAM

Noisy input, denoised output through U-Net and averaged ground truth frequency-domain PA amplitude images of two label-free Parhyale hawaiensis embryos. Yellow arrows indicate the cell membranes. Scalebars are equal to 100 μm.

Frequency-domain PAM is a powerful, cost-efficient imaging method that uses intensity-modulated laser beams emitted by continuous-wave sources to excite single-frequency PA signals.[38] Nevertheless, this imaging approach generally provides signal-to-noise ratios (SNR) that can be up to two orders of magnitude lower than those of conventional time-domain systems.[39] To overcome the inherent SNR limitation of frequency-domain PAM, a U-Net neural network has been utilized to augment the generated images without the need for excessive averaging or the application of high optical power to the sample. In this way, the accessibility of PAM is improved, as the system's cost is dramatically reduced while image quality remains sufficient for demanding biological observations.[40]
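
The "excessive averaging" that the U-Net is meant to replace scales poorly: averaging N repeated acquisitions improves SNR only by a factor of √N, so closing a two-order-of-magnitude SNR gap by averaging alone would take on the order of 10,000 repetitions per pixel. A quick numpy check of the √N scaling (additive Gaussian noise model assumed):

```python
import numpy as np

rng = np.random.default_rng(1)
n_avg = 100                                  # number of repeated acquisitions

# 10,000 independent pixels, each measured n_avg times with additive noise.
frames = 1.0 + rng.normal(0.0, 0.5, size=(n_avg, 10_000))
single_noise = frames[0].std()               # noise level of one acquisition
averaged_noise = frames.mean(axis=0).std()   # noise level after averaging

print(round(single_noise / averaged_noise))  # 10, i.e. sqrt(100)
```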

See also

Photoacoustic imaging

Photoacoustic microscopy

Photoacoustic effect

References

  1. ^ a b c d Wang, Lihong V. (2009-08-29). "Multiscale photoacoustic microscopy and computed tomography". Nature Photonics. 3 (9): 503–509. Bibcode:2009NaPho...3..503W. doi:10.1038/nphoton.2009.157. ISSN 1749-4885. PMC 2802217. PMID 20161535.
  2. ^ Beard, Paul (2011-08-06). "Biomedical photoacoustic imaging". Interface Focus. 1 (4): 602–631. doi:10.1098/rsfs.2011.0028. ISSN 2042-8898. PMC 3262268. PMID 22866233.
  3. ^ Xu, Minghua; Wang, Lihong V. (2005-01-19). "Universal back-projection algorithm for photoacoustic computed tomography". Physical Review E. 71 (1): 016706. Bibcode:2005PhRvE..71a6706X. doi:10.1103/PhysRevE.71.016706. hdl:1969.1/180492. PMID 15697763.
  4. ^ Kalva, Sandeep Kumar; Pramanik, Manojit (August 2016). "Experimental validation of tangential resolution improvement in photoacoustic tomography using modified delay-and-sum reconstruction algorithm". Journal of Biomedical Optics. 21 (8): 086011. Bibcode:2016JBO....21h6011K. doi:10.1117/1.JBO.21.8.086011. hdl:10356/82178. ISSN 1083-3668. PMID 27548773.
  5. ^ Bossy, Emmanuel; Daoudi, Khalid; Boccara, Albert-Claude; Tanter, Mickael; Aubry, Jean-François; Montaldo, Gabriel; Fink, Mathias (2006-10-30). "Time reversal of photoacoustic waves" (PDF). Applied Physics Letters. 89 (18): 184108. Bibcode:2006ApPhL..89r4108B. doi:10.1063/1.2382732. ISSN 0003-6951. S2CID 121195599.
  6. ^ Treeby, Bradley E; Zhang, Edward Z; Cox, B T (2010-09-24). "Photoacoustic tomography in absorbing acoustic media using time reversal". Inverse Problems. 26 (11): 115003. Bibcode:2010InvPr..26k5003T. doi:10.1088/0266-5611/26/11/115003. ISSN 0266-5611. S2CID 14745088.
  7. ^ Wang, Lihong V.; Yao, Junjie (2016-07-28). "A Practical Guide to Photoacoustic Tomography in the Life Sciences". Nature Methods. 13 (8): 627–638. doi:10.1038/nmeth.3925. ISSN 1548-7091. PMC 4980387. PMID 27467726.
  8. ^ a b c Reiter, Austin; Bell, Muyinatu A Lediju (2017-03-03). Oraevsky, Alexander A; Wang, Lihong V (eds.). "A machine learning approach to identifying point source locations in photoacoustic data". Photons Plus Ultrasound: Imaging and Sensing 2017. 10064. International Society for Optics and Photonics: 100643J. Bibcode:2017SPIE10064E..3JR. doi:10.1117/12.2255098. S2CID 35030143.
  9. ^ a b Allman, Derek; Reiter, Austin; Bell, Muyinatu A. Lediju (June 2018). "Photoacoustic Source Detection and Reflection Artifact Removal Enabled by Deep Learning". IEEE Transactions on Medical Imaging. 37 (6): 1464–1477. doi:10.1109/TMI.2018.2829662. ISSN 1558-254X. PMC 6075868. PMID 29870374.
  10. ^ a b c Antholzer, Stephan; Haltmeier, Markus; Schwab, Johannes (2019-07-03). "Deep learning for photoacoustic tomography from sparse data". Inverse Problems in Science and Engineering. 27 (7): 987–1005. doi:10.1080/17415977.2018.1518444. ISSN 1741-5977. PMC 6474723. PMID 31057659.
  11. ^ a b c Guan, Steven; Khan, Amir A.; Sikdar, Siddhartha; Chitnis, Parag V. (February 2020). "Fully Dense UNet for 2-D Sparse Photoacoustic Tomography Artifact Removal". IEEE Journal of Biomedical and Health Informatics. 24 (2): 568–576. arXiv:1808.10848. doi:10.1109/jbhi.2019.2912935. ISSN 2168-2194. PMID 31021809. S2CID 52143594.
  12. ^ a b c d e f Davoudi, Neda; Deán-Ben, Xosé Luís; Razansky, Daniel (2019-09-16). "Deep learning optoacoustic tomography with sparse data". Nature Machine Intelligence. 1 (10): 453–460. doi:10.1038/s42256-019-0095-3. ISSN 2522-5839. S2CID 202640890.
  13. ^ a b Hauptmann, Andreas; Lucka, Felix; Betcke, Marta; Huynh, Nam; Adler, Jonas; Cox, Ben; Beard, Paul; Ourselin, Sebastien; Arridge, Simon (June 2018). "Model-Based Learning for Accelerated, Limited-View 3-D Photoacoustic Tomography". IEEE Transactions on Medical Imaging. 37 (6): 1382–1393. doi:10.1109/TMI.2018.2820382. ISSN 1558-254X. PMC 7613684. PMID 29870367. S2CID 4321879.
  14. ^ a b c d e f g h Vu, Tri; Li, Mucong; Humayun, Hannah; Zhou, Yuan; Yao, Junjie (2020-03-25). "Feature article: A generative adversarial network for artifact removal in photoacoustic computed tomography with a linear-array transducer". Experimental Biology and Medicine. 245 (7): 597–605. doi:10.1177/1535370220914285. ISSN 1535-3702. PMC 7153213. PMID 32208974.
  15. ^ a b c d Waibel, Dominik; Gröhl, Janek; Isensee, Fabian; Kirchner, Thomas; Maier-Hein, Klaus; Maier-Hein, Lena (2018-02-19). "Reconstruction of initial pressure from limited view photoacoustic images using deep learning". In Wang, Lihong V; Oraevsky, Alexander A (eds.). Photons Plus Ultrasound: Imaging and Sensing 2018. Vol. 10494. International Society for Optics and Photonics. pp. 104942S. Bibcode:2018SPIE10494E..2SW. doi:10.1117/12.2288353. ISBN 9781510614734. S2CID 57745829.
  16. ^ a b c Awasthi, Navchetan (28 February 2020). "Deep Neural Network Based Sinogram Super-resolution and Bandwidth Enhancement for Limited-data Photoacoustic Tomography". IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control. 67 (12): 2660–2673. doi:10.1109/TUFFC.2020.2977210. hdl:10356/146551. PMID 32142429. S2CID 212621872.
  17. ^ a b Awasthi, Navchetan; Pardasani, Rohit; Sandeep Kumar Kalva; Pramanik, Manojit; Yalavarthy, Phaneendra K. (2020). "Sinogram super-resolution and denoising convolutional neural network (SRCN) for limited data photoacoustic tomography". arXiv:2001.06434 [eess.IV].
  18. ^ Gutta, Sreedevi; Kadimesetty, Venkata Suryanarayana; Kalva, Sandeep Kumar; Pramanik, Manojit; Ganapathy, Sriram; Yalavarthy, Phaneendra K. (2017-11-02). "Deep neural network-based bandwidth enhancement of photoacoustic data". Journal of Biomedical Optics. 22 (11): 116001. Bibcode:2017JBO....22k6001G. doi:10.1117/1.jbo.22.11.116001. hdl:10356/86305. ISSN 1083-3668. PMID 29098811.
  19. ^ a b c d e f Johnstonbaugh, Kerrick; Agrawal, Sumit; Durairaj, Deepit Abhishek; Fadden, Christopher; Dangi, Ajay; Karri, Sri Phani Krishna; Kothapalli, Sri-Rajasekhar (December 2020). "A Deep Learning approach to Photoacoustic Wavefront Localization in Deep-Tissue Medium". IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control. 67 (12): 2649–2659. doi:10.1109/tuffc.2020.2964698. ISSN 0885-3010. PMC 7769001. PMID 31944951.
  20. ^ a b c d Awasthi, Navchetan (3 April 2019). "PA-Fuse: deep supervised approach for the fusion of photoacoustic images with distinct reconstruction characteristics". Biomedical Optics Express. 10 (5): 2227–2243. doi:10.1364/BOE.10.002227. PMC 6524595. PMID 31149371.
  21. ^ a b Simonyan, Karen; Zisserman, Andrew (2015-04-10). "Very Deep Convolutional Networks for Large-Scale Image Recognition". arXiv:1409.1556 [cs.CV].
  22. ^ Agranovsky, Mark; Kuchment, Peter (2007-08-28). "Uniqueness of reconstruction and an inversion procedure for thermoacoustic and photoacoustic tomography with variable sound speed". Inverse Problems. 23 (5): 2089–2102. arXiv:0706.0598. Bibcode:2007InvPr..23.2089A. doi:10.1088/0266-5611/23/5/016. ISSN 0266-5611. S2CID 17810059.
  23. ^ a b Ronneberger, Olaf; Fischer, Philipp; Brox, Thomas (2015), "U-Net: Convolutional Networks for Biomedical Image Segmentation", Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015, Lecture Notes in Computer Science, vol. 9351, Springer International Publishing, pp. 234–241, arXiv:1505.04597, Bibcode:2015arXiv150504597R, doi:10.1007/978-3-319-24574-4_28, ISBN 978-3-319-24573-7, S2CID 3719281
  24. ^ Xu, Yuan; Wang, Lihong V.; Ambartsoumian, Gaik; Kuchment, Peter (2004-03-11). "Reconstructions in limited-view thermoacoustic tomography" (PDF). Medical Physics. 31 (4): 724–733. Bibcode:2004MedPh..31..724X. doi:10.1118/1.1644531. ISSN 0094-2405. PMID 15124989.
  25. ^ Huang, Bin; Xia, Jun; Maslov, Konstantin; Wang, Lihong V. (2013-11-27). "Improving limited-view photoacoustic tomography with an acoustic reflector". Journal of Biomedical Optics. 18 (11): 110505. Bibcode:2013JBO....18k0505H. doi:10.1117/1.jbo.18.11.110505. ISSN 1083-3668. PMC 3818029. PMID 24285421.
  26. ^ Xia, Jun; Chatni, Muhammad R.; Maslov, Konstantin; Guo, Zijian; Wang, Kun; Anastasio, Mark; Wang, Lihong V. (2012). "Whole-body ring-shaped confocal photoacoustic computed tomography of small animals in vivo". Journal of Biomedical Optics. 17 (5): 050506. Bibcode:2012JBO....17e0506X. doi:10.1117/1.jbo.17.5.050506. ISSN 1083-3668. PMC 3382342. PMID 22612121.
  27. ^ Sandbichler, M.; Krahmer, F.; Berer, T.; Burgholzer, P.; Haltmeier, M. (January 2015). "A Novel Compressed Sensing Scheme for Photoacoustic Tomography". SIAM Journal on Applied Mathematics. 75 (6): 2475–2494. arXiv:1501.04305. Bibcode:2015arXiv150104305S. doi:10.1137/141001408. ISSN 0036-1399. S2CID 15701831.
  28. ^ Provost, J.; Lesage, F. (April 2009). "The Application of Compressed Sensing for Photo-Acoustic Tomography". IEEE Transactions on Medical Imaging. 28 (4): 585–594. doi:10.1109/tmi.2008.2007825. ISSN 0278-0062. PMID 19272991. S2CID 11398335.
  29. ^ Haltmeier, Markus; Sandbichler, Michael; Berer, Thomas; Bauer-Marschallinger, Johannes; Burgholzer, Peter; Nguyen, Linh (June 2018). "A sparsification and reconstruction strategy for compressed sensing photoacoustic tomography". The Journal of the Acoustical Society of America. 143 (6): 3838–3848. arXiv:1801.00117. Bibcode:2018ASAJ..143.3838H. doi:10.1121/1.5042230. ISSN 0001-4966. PMID 29960458. S2CID 49643233.
  30. ^ Liang, Jinyang; Zhou, Yong; Winkler, Amy W.; Wang, Lidai; Maslov, Konstantin I.; Li, Chiye; Wang, Lihong V. (2013-07-22). "Random-access optical-resolution photoacoustic microscopy using a digital micromirror device". Optics Letters. 38 (15): 2683–6. Bibcode:2013OptL...38.2683L. doi:10.1364/ol.38.002683. ISSN 0146-9592. PMC 3784350. PMID 23903111.
  31. ^ Duarte, Marco F.; Davenport, Mark A.; Takhar, Dharmpal; Laska, Jason N.; Sun, Ting; Kelly, Kevin F.; Baraniuk, Richard G. (March 2008). "Single-pixel imaging via compressive sampling". IEEE Signal Processing Magazine. 25 (2): 83–91. Bibcode:2008ISPM...25...83D. doi:10.1109/msp.2007.914730. hdl:1911/21682. ISSN 1053-5888. S2CID 11454318.
  32. ^ Paltauf, G; Nuster, R; Burgholzer, P (2009-05-08). "Weight factors for limited angle photoacoustic tomography". Physics in Medicine and Biology. 54 (11): 3303–3314. Bibcode:2009PMB....54.3303P. doi:10.1088/0031-9155/54/11/002. ISSN 0031-9155. PMC 3166844. PMID 19430108.
  33. ^ Liu, Xueyan; Peng, Dong; Ma, Xibo; Guo, Wei; Liu, Zhenyu; Han, Dong; Yang, Xin; Tian, Jie (2013-05-14). "Limited-view photoacoustic imaging based on an iterative adaptive weighted filtered backprojection approach". Applied Optics. 52 (15): 3477–83. Bibcode:2013ApOpt..52.3477L. doi:10.1364/ao.52.003477. ISSN 1559-128X. PMID 23736232.
  34. ^ Ma, Songbo; Yang, Sihua; Guo, Hua (2009-12-15). "Limited-view photoacoustic imaging based on linear-array detection and filtered mean-backprojection-iterative reconstruction". Journal of Applied Physics. 106 (12): 123104–123104–6. Bibcode:2009JAP...106l3104M. doi:10.1063/1.3273322. ISSN 0021-8979.
  35. ^ a b c d e Manwar, Rayyan; Li, Xin; Mahmoodkalayeh, Sadreddin; Asano, Eishi; Zhu, Dongxiao; Avanaki, Kamran (2020). "Deep learning protocol for improved photoacoustic brain imaging". Journal of Biophotonics. 13 (10): e202000212. doi:10.1002/jbio.202000212. ISSN 1864-0648. PMC 10906453. PMID 33405275. S2CID 224845812.
  36. ^ a b c d e f g Guan, Steven; Khan, Amir A.; Sikdar, Siddhartha; Chitnis, Parag V. (2020). "Limited View and Sparse Photoacoustic Tomography for Neuroimaging with Deep Learning". Scientific Reports. 10 (1): 8510. arXiv:1911.04357. Bibcode:2020NatSR..10.8510G. doi:10.1038/s41598-020-65235-2. PMC 7244747. PMID 32444649.
  37. ^ a b c d e Chen, Xingxing; Qi, Weizhi; Xi, Lei (2019-10-29). "Deep-learning-based motion-correction algorithm in optical resolution photoacoustic microscopy". Visual Computing for Industry, Biomedicine, and Art. 2 (1): 12. doi:10.1186/s42492-019-0022-9. ISSN 2524-4442. PMC 7099543. PMID 32240397.
  38. ^ Tserevelakis, George J.; Mavrakis, Kostas G.; Kakakios, Nikitas; Zacharakis, Giannis (2021-10-01). "Full image reconstruction in frequency-domain photoacoustic microscopy by means of a low-cost I/Q demodulator". Optics Letters. 46 (19): 4718–4721. Bibcode:2021OptL...46.4718T. doi:10.1364/OL.435146. ISSN 0146-9592. PMID 34598182.
  39. ^ Langer, Gregor; Buchegger, Bianca; Jacak, Jaroslaw; Klar, Thomas A.; Berer, Thomas (2016-07-01). "Frequency domain photoacoustic and fluorescence microscopy". Biomedical Optics Express. 7 (7): 2692–3302. doi:10.1364/BOE.7.002692. ISSN 2156-7085. PMC 4948622. PMID 27446698.
  40. ^ Tserevelakis, George J.; Barmparis, Georgios D.; Kokosalis, Nikolaos; Giosa, Eirini Smaro; Pavlopoulos, Anastasios; Tsironis, Giorgos P.; Zacharakis, Giannis (2023-05-15). "Deep learning-assisted frequency-domain photoacoustic microscopy". Optics Letters. 48 (10): 2720–2723. Bibcode:2023OptL...48.2720T. doi:10.1364/OL.486624. ISSN 1539-4794. PMID 37186749. S2CID 258229033.