Abstract
Microwave image reconstruction based on a deep learning method is investigated in this article. The neural network converts measured microwave signals acquired from a 24 × 24 antenna array at 4 GHz into a 128 × 128 image. To reduce the training difficulty, we first developed an autoencoder by which high-resolution images (128 × 128) are represented with 256 × 1 vectors; we then developed a second neural network that maps the microwave signals to the compressed features (the 256 × 1 vectors). Once both are successfully developed, the two neural networks are combined into a full network to perform reconstructions. The present two-stage training method reduces the difficulty of training deep learning networks (DLNs) for inverse reconstruction. The developed neural network is validated with simulation examples and experimental data using objects of different shapes and sizes, placed at different locations, and with dielectric constants ranging from 2 to 6. Comparisons between the imaging results of the present method and those of two conventional approaches, the distorted Born iterative method (DBIM) and the phase confocal method (PCM), are also provided.
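The abstract describes a two-stage scheme: an autoencoder that compresses 128 × 128 images into 256 × 1 latent vectors, a second network that maps the antenna-array measurements onto those latent features, and a full reconstruction network assembled from the measurement mapper and the autoencoder's decoder. The PyTorch sketch below illustrates that structure only; the layer counts, channel widths, the fully connected mapper, the flattened 24 × 24 measurement input, and the names ImageAutoencoder, SignalToLatent, and LATENT are assumptions made for this example, not the architecture reported in the paper.

```python
import torch
import torch.nn as nn

LATENT = 256  # size of the compressed feature vector (from the abstract)


class ImageAutoencoder(nn.Module):
    """Stage 1: compress 128x128 images to a 256-d latent vector and reconstruct them."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(                                   # 1x128x128 -> 256
            nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.ReLU(),        # -> 16x64x64
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),       # -> 32x32x32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),       # -> 64x16x16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, LATENT),
        )
        self.decoder = nn.Sequential(                                   # 256 -> 1x128x128
            nn.Linear(LATENT, 64 * 16 * 16), nn.ReLU(),
            nn.Unflatten(1, (64, 16, 16)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),  # -> 32x32x32
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),  # -> 16x64x64
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),              # -> 1x128x128
        )

    def forward(self, img):
        return self.decoder(self.encoder(img))


class SignalToLatent(nn.Module):
    """Stage 2: map flattened 24x24 microwave measurements to the 256-d latent vector."""

    def __init__(self, n_measurements=24 * 24):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_measurements, 1024), nn.ReLU(),
            nn.Linear(1024, 512), nn.ReLU(),
            nn.Linear(512, LATENT),
        )

    def forward(self, signals):
        return self.net(signals)


# Full reconstruction network: measurements -> latent -> image,
# assembled after the two stages have been trained separately.
autoencoder = ImageAutoencoder()
mapper = SignalToLatent()
full_network = nn.Sequential(mapper, autoencoder.decoder)

signals = torch.randn(8, 24 * 24)   # placeholder batch of flattened measurements
images = full_network(signals)      # shape: (8, 1, 128, 128)
```

In this arrangement only the mapper and the (frozen or fine-tuned) decoder are needed at reconstruction time, which is what makes the two-stage training easier than learning the full measurement-to-image mapping end to end.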
| Original language | English (US) |
| --- | --- |
| Article number | 9034483 |
| Pages (from-to) | 5626-5635 |
| Number of pages | 10 |
| Journal | IEEE Transactions on Antennas and Propagation |
| Volume | 68 |
| Issue number | 7 |
| DOIs | |
| State | Published - Jul 2020 |
Keywords
- Autoencoder (AE)
- convolutional neural network
- deep learning
- microwave imaging
- scattered fields
ASJC Scopus subject areas
- Electrical and Electronic Engineering