  Advance Conference Program:

SD&A 2023

The World's Premier Conference for 3D Innovation

Held as part of:

The IS&T International Symposium on Electronic Imaging: Science and Technology
15-19 January 2023
- Hilton Parc 55 Hotel, San Francisco, USA -

To be published open-access as part of the IS&T Proceedings of Electronic Imaging.


3D Theater Partners: Christie Digital, DepthQ 3D by Lightspeed Design, LA 3-D Movie Festival

Conference Chairs:
Nicolas S. Holliman, King's College London (United Kingdom)
Takashi Kawai, Waseda University (Japan)
Bjorn Sommer, Royal College of Art (United Kingdom)
Andrew J. Woods, Curtin University (Australia)

Program Committee:

Neil A. Dodgson, Victoria University of Wellington (New Zealand)
Gregg Favalora, Quadratic 3D, Inc. (USA)
Justus Ilgner, University Hospital Aachen (Germany)
Eric Kurland, 3-D SPACE Museum (United States)
John D. Stern, Intuitive Surgical, Retired (USA)
Chris Ward, Lightspeed Design, Inc. (USA)
Laurie Wilcox, York University (Canada)

Sunday 15th January 2023

Short Course: Stereoscopic Imaging Fundamentals
More information (separate payment required for the short course)
8:00 am - 12:15 pm

Monday 16th January 2023

Stereoscopic Displays and Applications Session 1
Stereoscopic Displays
Session Chair: Bjorn Sommer, Royal College of Art (United Kingdom)
Mon. 8:45 - 9:50 am

8:45 am: Conference Welcome

8:50 am : Evaluating the angular resolution of a simulated light field display in regards to three-dimensionality, motion parallax and viewing experience, Sophie Kergaßner and Jan Fröhlich, Stuttgart Media University (Germany) [SD&A-383]

9:10 am : CubicSpace for stereo cards: Enabling universal 3D playback on today's display devices, Nicholas Routhier, Mindtrick Innovations Inc. (Canada) [SD&A-384]

9:30 am : Upgrading the Curtin HIVE's stereoscopic projection facilities, Andrew J. Woods, Wesley Lamont, and Daniel Adams, Curtin University (Australia) [SD&A-385]

Session Break Mon. 9:50 - 10:50 am

SD&A Keynote 1 Mon. 10:50 - 11:50 am

The long-awaited arrival of holographic interfaces [SD&A-386]

Shawn Frayne, Looking Glass Factory (United States)  


Holographic or light field generating devices that could enable groups of people to see and interact with genuinely three-dimensional content have long been held as a "holy grail" by the inventors and engineers who work on perfecting the human-computer interface. Now, after decades of work, real-time holographic interfaces are at last commercially available. This talk covers the driving forces behind the emergence of holographic interfaces, the advantages of these headset-free systems over head-mounted displays, and the unique characteristics of the first commercially viable approaches to come to market.


Shawn Frayne is co-founder and CEO of Looking Glass, working between its Brooklyn and Hong Kong offices. A graduate of MIT, he got his start with a classic laser interference pattern holographic studio he built in high school, followed by training in advanced holographic film techniques while at MIT. He is listed on dozens of patents around the world, and inventions he has developed have been covered in Forbes, TIME, Popular Mechanics, and other major publications.

Stereoscopic Displays and Applications Session 2
Stereoscopic Simulation
Session Chair: Andrew Woods, Curtin University (Australia)
Mon. 11:50 am - 12:30 pm

11:50 am : Estimation of motion sickness in automated vehicles using stereoscopic visual simulation (JIST-first), Yoshihiro Banchi and Takashi Kawai, Waseda University (Japan) [SD&A-387]

12:10 pm : Lenny Lipton - In Memoriam, Andrew Woods, Curtin University (Australia)

Session Break Mon. 12:30 - 2:00 pm

EI Plenary 1
Mon. 2:00 - 3:00 pm

IS&T Welcome

PLENARY: Neural Operators for Solving PDEs

Anima Anandkumar,
Bren Professor, California Institute of Technology, and Senior Director of AI Research, NVIDIA Corporation (United States)

Abstract: Deep learning surrogate models have shown promise in modeling complex physical phenomena such as fluid flows, molecular dynamics, and material properties. However, standard neural networks assume finite-dimensional inputs and outputs, and hence cannot withstand a change in resolution or discretization between training and testing. We introduce Fourier neural operators that can learn operators, which are mappings between infinite-dimensional spaces. They are independent of the resolution or grid of the training data and allow for zero-shot generalization to higher-resolution evaluations. When applied to weather forecasting, neural operators capture fine-scale phenomena and show skill comparable to gold-standard numerical weather models for predictions up to a week or longer, while being four to five orders of magnitude faster.
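The resolution independence described in the abstract comes from defining each layer per frequency mode rather than per grid point. The toy sketch below (plain NumPy, with random weights standing in for learned ones) illustrates one Fourier layer in 1D; it is only an illustration of the idea, not the authors' implementation.

```python
import numpy as np

def spectral_conv_1d(u, weights, n_modes):
    """One Fourier layer: transform to frequency space, apply learned
    complex weights to the lowest n_modes, transform back."""
    u_hat = np.fft.rfft(u)                         # real FFT of the input signal
    out_hat = np.zeros_like(u_hat)
    out_hat[:n_modes] = weights * u_hat[:n_modes]  # act only on retained modes
    return np.fft.irfft(out_hat, n=len(u))         # back to physical space

# Because the weights are indexed by frequency mode, not grid point, the same
# layer applies unchanged to a finer sampling of the input (zero-shot
# super-resolution).
rng = np.random.default_rng(0)
w = rng.normal(size=8) + 1j * rng.normal(size=8)   # 8 "learned" modes (random here)
coarse = np.sin(2 * np.pi * np.arange(64) / 64)
fine = np.sin(2 * np.pi * np.arange(256) / 256)
y_coarse = spectral_conv_1d(coarse, w, 8)          # output on the 64-point grid
y_fine = spectral_conv_1d(fine, w, 8)              # same weights, 256-point grid
```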

Biography: Anima Anandkumar is a Bren Professor at Caltech and Senior Director of AI Research at NVIDIA. She is passionate about designing principled AI algorithms and applying them to interdisciplinary domains. She has received several honors, including an IEEE Fellowship, an Alfred P. Sloan Fellowship, an NSF CAREER Award, and faculty fellowships from Microsoft, Google, Facebook, and Adobe. She is part of the World Economic Forum's Expert Network. Anandkumar received her BTech from the Indian Institute of Technology Madras and her PhD from Cornell University, held a postdoctoral position at MIT, and was an assistant professor at the University of California, Irvine.

Session Break Mon. 3:00 - 3:30 pm

Conference Papers Showcase Session
(Two papers have been nominated from each of the conferences. The selected showcase presenters will each give a short showcase talk and then their full presentation in conference.)
Mon. 3:30 - 5:00 pm

Electronic Imaging Symposium Reception
The annual Electronic Imaging All-Conference Reception provides a wonderful opportunity to get to know and interact with new and old SD&A colleagues. Plan to join us for this relaxing and enjoyable event.
Mon. 5:00 - 6:00 pm

SD&A 3D Theatre
Producers: John Stern, retired (United States); Dan Lawrence, Lightspeed Design Group (United States); Eric Kurland, 3-D SPACE Museum (United States); Andrew Woods, Curtin University (Australia).
Mon. 6:00 to 7:30 pm

This ever-popular session of each year's Stereoscopic Displays and Applications Conference showcases the wide variety of 3D content that is being produced and exhibited around the world. All 3D footage screened in the 3D Theater Session is shown in high-quality polarized 3D on a large screen. The final program will be announced at the conference and 3D glasses will be provided.

SD&A Conference Annual Dinner Mon. 7:50 to 10:00 pm

The annual informal dinner for SD&A attendees. An opportunity to meet with colleagues and discuss the latest advances. There is no host for the dinner. Information on venue and cost will be provided on the day at the conference.

Tuesday 17th January 2023

Stereoscopic Displays and Applications Session 3
Stereoscopy in Education and Vergence Accommodation
Session Chair: Takashi Kawai, Waseda University (Japan)
Tue. 8:50 - 10:10 am

8:50 am : A review of the VR workshop at the Grand Challenge 2022, Bjorn Sommer, Kam Raoofi, Robert Daniel Philips, Laura Ferrarello, Ashley Hall, and Paul Anderson, Royal College of Art (United Kingdom) [SD&A-388]

9:10 am : Use of a multi-display, mobile 2D / 3D video wall in surgical operating theatres, Justus F. Ilgner, Thien A. Duong Dinh, Martin Westhofen, and Stephan Hackenberg, RWTH Aachen University (Germany) [SD&A-389]

9:30 am : Investigating the use of spectacle lenses to alleviate vergence-accommodation mismatch in a stereoscopic remote vision system, Eleanor O'Keefe1, Kirk Moffitt2, Eric Seemiller3, Marc Winterbottom4, and Steven Hadley4; 1KBRwyle, 2Moffitt Consulting, 3EIS, and 4United States Air Force (United States) [SD&A-390]

9:50 am : Observed optical resolution of light field display: Empirical and simulation results, Yuta Miyanishi, Qi Zhang, Erdem Sahin, and Atanas Gotchev, Tampere University (Finland) [SD&A-391]

Session Break Tue. 10:10 - 10:50 am

SD&A Keynote 2
Session Chair: Nicolas Holliman, King's College London (United Kingdom)

JOINT SESSION: This session is jointly sponsored by: Stereoscopic Displays and Applications XXXIV
and Engineering Reality of Virtual Reality 2023.

Tue. 10:50 - 11:50 am

Human performance using stereo 3D in a helmet mounted display and association with individual stereo acuity   [SD&A-224]

Bonnie Posselt, RAF Centre of Aviation Medicine (United Kingdom)  


Binocular Helmet Mounted Displays (HMDs) are a critical part of the aircraft system, allowing information to be presented to the aviator with stereoscopic 3D (S3D) depth, potentially enhancing situational awareness and improving performance. The utility of S3D in an HMD may be linked to an individual’s ability to perceive changes in binocular disparity (stereo acuity). Though minimum stereo acuity standards exist for most military aviators, current test methods may be unable to characterise this relationship. This presentation will investigate the effect of S3D on performance when used in a warning alert displayed in an HMD. Furthermore, any effect on performance, ocular symptoms, and cognitive workload shall be evaluated in regard to individual stereo acuity measured with a variety of paper-based and digital stereo tests.


Wing Commander (Dr) Bonnie Posselt is a medical officer in the RAF (UK) specialising in Aviation and Space Medicine. Bonnie is currently based at the RAF Centre of Aviation Medicine in Bedfordshire, UK, having recently returned from a 3.5-year exchange tour to Wright-Patterson Air Force Base in Ohio. While working with the 711th Human Performance Wing and the Air Force Research Laboratory (AFRL) in Ohio, Bonnie undertook a PhD in helmet mounted displays and vision standards in collaboration with the University of Birmingham (UK). Bonnie is a graduate of the University of Manchester, King's College London, and the International Space University. She is an associate fellow of the Aerospace Medical Association and an elected member of the Royal Aeronautical Society.

Stereoscopic Displays and Applications Session 4
Stereoscopy in VR
Session Chair: Nicolas Holliman, King's College London (United Kingdom)

JOINT SESSION: This session is jointly sponsored by: Stereoscopic Displays and Applications XXXIV
and Engineering Reality of Virtual Reality 2023.
Tue. 11:50 am - 12:30 pm

11:50 am : Incidence of stereo-blindness in a recent VR distance perception user study, Michael Wiebrands, Andrew J. Woods, and Hugh Riddell, Curtin University (Australia) [SD&A-225]

12:10 pm : Evaluating requirements for design education in a virtual studio environment, Bjorn Sommer1, Ayn Sayuti2, Zidong Lin1, Shefali Bohra1, Emre Kayganaci1, Caroline Yan Zheng1, Chang Hee Lee3, Ashley Hall1, and Paul Anderson1; 1Royal College of Art (United Kingdom), 2Universiti Teknologi MARA (UiTM) (Malaysia), and 3Korea Advanced Institute of Science and Technology (KAIST) (Republic of Korea) [SD&A-226]

Session Break Tue. 12:30 - 2:00 pm

EI Plenary 2
Tue. 2:00 - 3:00 pm

PLENARY: Embedded Gain Maps for Adaptive Display of High Dynamic Range Images

Eric Chan, Fellow, Adobe Inc., and
Paul M. Hubel, Director of Image Quality in Software Engineering, Apple Inc.


Images optimized for High Dynamic Range (HDR) displays have brighter highlights and more detailed shadows, resulting in an increased sense of realism and greater impact. However, a major issue with HDR content is the lack of consistency in appearance across different devices and viewing environments. There are several reasons, including varying capabilities of HDR displays and the different tone mapping methods implemented across software and platforms. Consequently, HDR content authors can neither control nor predict how their images will appear in other apps. We present a flexible system that provides consistent and adaptive display of HDR images. Conceptually, the method combines both SDR and HDR renditions within a single image and interpolates between the two dynamically at display time. We compute a Gain Map that represents the difference between the two renditions. In the file, we store a Base rendition (either SDR or HDR), the Gain Map, and some associated metadata. At display time, we combine the Base image with a scaled version of the Gain Map, where the scale factor depends on the image metadata, the HDR capacity of the display, and the viewing environment.
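The display-time combination described in the abstract can be sketched in a few lines. This is an illustrative reading only, not Adobe's or Apple's actual file format or math: the function name, parameter names, and the use of a log2 gain map with a clamped interpolation weight are assumptions for the sketch.

```python
import numpy as np

def display_rendition(base_sdr, gain_map, hdr_headroom, map_min=0.0, map_max=2.0):
    """Interpolate between SDR and HDR renditions at display time.

    base_sdr:     linear SDR base image
    gain_map:     per-pixel log2 ratio of the HDR to the SDR rendition
    hdr_headroom: display's available HDR headroom, in stops (log2 units)
    map_min/max:  metadata giving the gain map's log2 range
    """
    # Weight in [0, 1]: 0 -> show the SDR base as-is, 1 -> apply the full gain.
    w = np.clip((hdr_headroom - map_min) / (map_max - map_min), 0.0, 1.0)
    return base_sdr * np.exp2(gain_map * w)

sdr = np.array([0.25, 0.5, 1.0])
gain = np.array([0.0, 1.0, 2.0])   # highlights boosted by up to 2 stops
on_sdr_display = display_rendition(sdr, gain, hdr_headroom=0.0)  # SDR unchanged
on_hdr_display = display_rendition(sdr, gain, hdr_headroom=2.0)  # full boost
```

On a display with intermediate headroom the same file yields an intermediate rendition, which is the consistency property the abstract emphasizes.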


Eric Chan is a Fellow at Adobe, where he develops software for editing photographs. Current projects include Photoshop, Lightroom, Camera Raw, and Digital Negative (DNG). When not writing software, Chan enjoys spending time at his other keyboard, the piano. He is an enthusiastic nature photographer and often combines his photo activities with travel and hiking.

Paul M. Hubel is director of Image Quality in Software Engineering at Apple. He has worked on computational photography and image quality of photographic systems for many years on all aspects of the imaging chain, particularly for iPhone. He trained in optical engineering at University of Rochester, Oxford University, and MIT, and has more than 50 patents on color imaging and camera technology. Hubel is active on the ISO-TC42 committee Digital Photography, where this work is under discussion, and is currently a VP on the IS&T Board. Outside work he enjoys photography, travel, cycling, coffee roasting, and plays trumpet in several bay area ensembles.

Session Break Tue. 3:00 - 3:35 pm

The Engineering Reality of Virtual Reality Session 1
VR Systems and Immersion
Session Chairs: Margaret Dolinsky, Indiana University (United States), and Ian McDowall, Intuitive Surgical / Fakespace Labs (United States)

JOINT SESSION: This session is jointly sponsored by: Engineering Reality of Virtual Reality 2023 and
Stereoscopic Displays and Applications XXXIV.
Tue. 3:35 - 5:45 pm

3:35 pm : Display system sharpness modeling and requirement in VR and AR applications, Jiawei Lu, Trisha Lian, and Jerry Jia, Meta (United States) [ERVR-213]

4:05 pm : Tangible extended reality with sensor fusion, Yang Cai, CMU (United States) [ERVR-214]

4:25 pm : Exploring third orality in VR, Semi Ryu, Virginia Commonwealth University (United States) [ERVR-215]

4:45 pm : Immersion, presence and behavioral validity in virtual and augmented environments, Daniel R. Mestre, CNRS and Aix-Marseille University (France) [ERVR-216]

5:05 pm : Immersive security personnel training module for active shooter events, Sharad Sharma and Nishith Mannuru, University of North Texas (United States) [ERVR-217]

5:25 pm : Mobile augmented reality system for object detection, alert, and safety, Sharad Sharma1, Nishith Mannuru1, and Don Engel2; 1University of North Texas and 2University of Maryland, Baltimore County (United States) [ERVR-218]

Symposium Demonstration Session
Demonstration Chair: Bjorn Sommer, Royal College of Art (United Kingdom)
Tue. 5:30 - 7:30 pm

A symposium-wide demonstration session will be open to attendees 5:30 to 7:30 pm Tuesday evening. Demonstrators will provide interactive, hands-on demonstrations of a wide range of products related to electronic imaging. The demonstration session hosts a vast collection of stereoscopic products, providing a perfect opportunity to witness a wide array of stereoscopic displays with your own two eyes.


Wednesday 18th January 2023

Human Vision and Electronic Imaging Keynote

JOINT SESSION: This session is jointly sponsored by: Human Vision and Electronic Imaging 2023,
Stereoscopic Displays and Applications XXXIV, and Engineering Reality of Virtual Reality 2023.

Wed. 9:05 - 10:10 am

HVEI KEYNOTE: Display Consideration for AR/VR Systems

Ajit Ninan,
Director of Engineering, Display and Optics, Reality Labs at Meta (United States)


AR and VR displays must take into consideration human perception and image quality factors that are required for a product. At Meta, we study these perceptual factors and determine what quality targets and requirements are needed. This talk will discuss some of these aspects and highlight examples of our process that help us set direction.


Ajit Ninan is a display industry veteran who led the way to the industry's adoption of HDR. His inventions and innovations are manifest in millions of shipped HDR TVs and consumer electronics from multiple companies. He holds 400+ granted patents in imaging and display technology and now works on imaging for AR/VR at Meta as Senior Director of Applied Perceptual Science and Image Quality. His work spans displays, imaging, color, video, compression, audio, and networking, and his career spans early start-ups to public companies. Ninan invented the locally dimmed quantum dot TV and led the industry's adoption of quantum dot displays, working with Vizio, Nanosys, and 3M to release the first-of-its-kind R-series QD TV with HDR. He also led the effort with the JPEG committee to standardize JPEG-XT, enabling JPEG HDR images. Ninan was inducted as a SMPTE Fellow for his contributions to imaging and standards. The "Pulsar" display that spurred the industry's adoption of HDR, built by Ninan and his team in 2010 and capable of 4,000 nits down to 0.005 nits with P3 color, enabled the development of Dolby Vision, received many awards including the Advanced Imaging Society's Lumiere Award, and earned Ninan an Emmy.

Session Break Wed. 10:10 - 10:50 am

Human Vision and Electronic Imaging Session 1
AR/VR Special Session

JOINT SESSION: This session is jointly sponsored by: Human Vision and Electronic Imaging 2023,
Stereoscopic Displays and Applications XXXIV, and Engineering Reality of Virtual Reality 2023.
Wed. 10:50 am - 12:30 pm

10:50 am : Comparison of AR and VR memory palace quality in second-language vocabulary acquisition (Invited), Xiaoyang Tian, Nicko Caluya, and Damon M. Chandler, Ritsumeikan University (Japan) [HVEI-220]

11:10 am : Projection mapping for enhancing the perceived deliciousness of food (Invited), Yuichiro Fujimoto, Nara Institute of Science and Technology (Japan) [HVEI-221]

11:30 am : Real-time imaging processing for low-vision users, Yang Cai, CMU (United States) [HVEI-222]

11:50 am : Critical flicker frequency (CFF) at high luminance levels, Alexandre Chapiro1, Nathan Matsuda1, Maliha Ashraf2, and Rafal Mantiuk3; 1Meta (United States), 2University of Liverpool (United Kingdom), and 3University of Cambridge (United Kingdom) [HVEI-223]

12:10 pm : A multichannel LED-based lighting approach to improve color discrimination for low vision people, Linna Yang1, Éric Dinet1, Pichayada Katemake2, Alain Trémeau1, and Philippe Colantoni1; 1University Jean Monnet Saint-Etienne (France) and 2Chulalongkorn University (Thailand) [HVEI-253]

Session Break Wed. 12:30 - 2:00 pm

EI Plenary 3
Wed. 2:00 - 3:00 pm

PLENARY: Bringing Vision Science to Electronic Imaging: The Pyramid of Visibility

Andrew B. Watson,
Chief Vision Scientist, Apple Inc. (United States)

Abstract: Electronic imaging depends fundamentally on the capabilities and limitations of human vision. The challenge for the vision scientist is to describe these limitations to the engineer in a comprehensive, computable, and elegant formulation. Primary among these limitations are the visibility of variations in light intensity over space and time, of variations in color over space and time, and of all of these patterns with position in the visual field. Lastly, we must describe how all these sensitivities vary with adapting light level. We have recently developed a structural description of human visual sensitivity, which we call the Pyramid of Visibility, that accomplishes this synthesis. This talk will show how this structure accommodates all the dimensions described above, and how it can be used to solve a wide variety of problems in display engineering.
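The Pyramid of Visibility models log contrast sensitivity as approximately linear in spatial frequency, temporal frequency, and log luminance; the "pyramid" is the solid bounded by the visibility threshold in those coordinates. A minimal sketch of that linear form, with placeholder coefficients rather than fitted values from the published model:

```python
import math

def log_sensitivity(f_spatial, f_temporal, luminance,
                    c0=2.0, c_f=-0.05, c_w=-0.03, c_l=0.5):
    """Pyramid of Visibility form: log contrast sensitivity is linear in
    spatial frequency (cycles/deg), temporal frequency (Hz), and log
    adapting luminance (cd/m^2). Coefficients here are illustrative
    placeholders, not fitted values."""
    return (c0 + c_f * f_spatial + c_w * f_temporal
            + c_l * math.log10(luminance))

# With negative frequency coefficients and a positive luminance coefficient,
# sensitivity falls as either frequency rises and grows with light level.
s_low_freq = log_sensitivity(2, 5, 100)
s_high_freq = log_sensitivity(10, 5, 100)
s_dim = log_sensitivity(2, 5, 10)
```

A display engineer can invert this kind of model to ask, for a given luminance and frame rate, which spatial frequencies fall below threshold and need not be rendered.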

Biography: Andrew Watson is Chief Vision Scientist at Apple, where he leads the application of vision science to technologies, applications, and displays. His research focuses on computational models of early vision. He is the author of more than 100 scientific papers and 8 patents. He has 21,180 citations and an h-index of 63. Watson founded the Journal of Vision, and served as editor-in-chief 2001-2013 and 2018-2022. Watson has received numerous awards including the Presidential Rank Award from the President of the United States.

Session Break Wed. 3:00 - 3:30 pm

Human Vision and Electronic Imaging Conference Session 2
AR/VR Special Session Panel Discussion

JOINT SESSION: This session is jointly sponsored by: Human Vision and Electronic Imaging 2023,
Stereoscopic Displays and Applications XXXIV, and Engineering Reality of Virtual Reality 2023.
Wed. 3:30 to 4:50 pm

Panelists: Alexandre Chapiro, Yuichiro Fujimoto, Nicolas Holliman, Ajit Ninan.

Human Vision and Electronic Imaging conference
DISCUSSION: Wednesday end of joint sessions discussion

JOINT SESSION: This session is jointly sponsored by: Human Vision and Electronic Imaging 2023,
Stereoscopic Displays and Applications XXXIV, and Engineering Reality of Virtual Reality 2023.

Wed. 4:50 - 5:30 pm
Please join us for a lively discussion of today's presentations. Participate in an interactive, moderated discussion, where key topics and questions are discussed from many perspectives, reflecting the diverse conference community.

Thursday 19th January 2023

New at EI for 2023:
Imaging for XR Workshop

  Thu. 9:00 am to 6:00 pm
Imaging for XR Workshop 2023

See full program here.




Maintained by: Andrew Woods
Last Updated: 14 January, 2023.