Session Index

S4. Optical Information Processing and Holography

Optical Information Processing and Holography II
Friday, Dec. 2, 2022  15:15-17:00
Presiders: Jung-Ping Liu, Cheng-Chih Hsu
Room: 1F 羅家倫 (Lo Chia-luen)
15:15 - 15:45
Manuscript ID.  0894
Paper No.  2022-FRI-S0402-I001
Invited Speaker:
Wei-Chia Su
Volume Holographic Optical Element on Lightguide for Near-eye Display Application
Wei-Chia Su, National Changhua University of Education (Taiwan)

Near-eye display systems based on a lightguide structure will be presented in this talk.
Volume holographic optical elements (VHOEs) are fabricated and attached to the
lightguide for image in-coupling and out-coupling in a near-eye display system. Different
VHOEs are designed and fabricated for two different image sources. A full-color display with
an FOV of 30° is achievable with a planar display image. In addition, a monochromatic display
with an FOV of 23° is presented when a computer-generated hologram (CGH) is used as the
image source.

15:45 - 16:00 Award Candidate (Paper Competition)
Manuscript ID.  0807
Paper No.  2022-FRI-S0402-O001
Chen-Ming Tsai Fresnel biprism for common-path digital holographic microscopy
Chen-Ming Tsai, Yuan Luo, National Taiwan University (Taiwan)

By using a Fresnel biprism, the object and reference beams in digital holographic microscopy (DHM) can be easily separated at a small angle and propagated along a common path. The common-path DHM system we demonstrated is highly compact and minimizes the influence of environmental fluctuations.

16:00 - 16:15 Award Candidate (Paper Competition)
Manuscript ID.  0293
Paper No.  2022-FRI-S0402-O002
Hen-Wan Chi Deep learning-assisted three-dimensional segmentation of SH-SY5Y cell morphology with holographic tomography
Hen-Wan Chi, Chung-Hsuan Huang, National Taiwan Normal University (Taiwan); Han-Yen Tu, Chinese Culture University (Taiwan); Chau-Jern Cheng, National Taiwan Normal University (Taiwan)

We present a deep learning-assisted three-dimensional segmentation method with holographic tomography and apply it to assess SH-SY5Y cell morphology. The experimental results show that the proposed method achieves high speed and high accuracy in three-dimensional segmentation of the cells.

16:15 - 16:30 Award Candidate (Paper Competition)
Manuscript ID.  0725
Paper No.  2022-FRI-S0402-O003
Fang-Yong Lee A facile fabrication route of silica-azobenzene compounds with efficient surface relief grating for holographic recording
Fang-Yong Lee, Tzu-Chien Hsu, Wei-Hung Su, National Sun Yat-Sen University (Taiwan)

An azobenzene holographic material based on {[4-(dimethylamino)phenyl]diazenyl}benzoic acid (methyl red) as the photo-responsive unit is synthesized. The surface relief grating is fabricated with a 532 nm DPSS laser at a total power intensity of 200 mW/cm². A grating depth of about 1500 nm was obtained on a film with a thickness of 860 nm. Its diffraction efficiency as a function of time is demonstrated as well.

16:30 - 16:45 Award Candidate (Paper Competition)
Manuscript ID.  0731
Paper No.  2022-FRI-S0402-O004
Yu-Hsiang Lin Making an ultra-low-cost single-longitudinal-mode and frequency-tunable 655 nm laser source
Yu-Hsiang Lin, Te-Yuan Chung, National Central University (Taiwan)

A homemade PQ/PMMA VBG is utilized to provide feedback to a 650 nm laser diode, and single-longitudinal-mode operation is achieved. By adding a twisted nematic liquid crystal cell in the laser cavity to serve as a phase modulator, the single-mode laser output frequency can be tuned over a range exceeding 19 GHz.

16:45 - 17:00 Award Candidate (Paper Competition)
Manuscript ID.  0706
Paper No.  2022-FRI-S0402-O005
Ya-Ti Chang Lee Computational Lensless Imaging via Perceptual Loss and End-to-End Training
Ya-Ti Chang Lee, Chung-Hao Tien, National Yang Ming Chiao Tung University (Taiwan)

With the recent evolution of artificial neural networks, computational lensless imaging has been making headway. However, generative models for scene reconstruction face inherent challenges owing to the ill-posed nature of the inverse system, and superior results have been obtained only under constrained conditions. In this work, we propose a deep-neural-network-based lensless imaging system trained end-to-end by exclusively optimizing a perceptual loss, so that reconstructed scenes conform to human preference. In our experiments, the proposed method attains favorable quality, achieving an LPIPS score as low as 0.15.
