Computational photography has become an increasingly active area of research within the computer vision community. In the last few years, the amount of research has grown tremendously, with dozens of papers published per year across a variety of vision, optics, and graphics venues. A similar trend can be seen in the emerging field of computational displays: spurred by the widespread availability of precise optical and material fabrication technologies, the research community has begun to investigate the joint design of display optics and computational processing. Such displays are designed not only for human observers but also for computer vision applications, providing high-dimensional structured illumination that varies in space, time, angle, and the color spectrum. This workshop unites the computational camera and display communities by considering to what degree concepts from computational cameras can inform the design of emerging computational displays, and vice versa, with a focus on applications in computer vision.
The Computational Cameras and Displays (CCD) workshop series serves as an annual gathering place for researchers and practitioners who design, build, and use computational cameras, displays, and imaging systems for a wide variety of applications. The workshop solicits poster and demo submissions on all topics relating to computational imaging systems.
Previous CCD Workshops: CCD2023, CCD2022, CCD2021, CCD2020, CCD2019, CCD2018, CCD2017, CCD2016, CCD2015, CCD2014, CCD2013, CCD2012
Time (Seattle local) | Title | Speaker |
--- | --- | --- |
8:45 - 9:00 | Welcome / Opening Remarks | Organizers |
9:00 - 9:30 | Keynote 1: Advanced Optical Imaging: Scattering and Absorption-Based Internal Structure Analysis with Photoacoustic Technology | Imari Sato |
9:30 - 9:50 | Invited Talk 1: Invisible Fluorescent Markers for Deformable Tracking | Jinwei Ye |
9:50 - 10:10 | Invited Talk 2: Resource-Aware Single-Photon Imaging | Atul Ingle |
10:10 - 10:30 | Morning Break | |
10:30 - 11:00 | Keynote 2: Spatially-Selective Lensing | Aswin C. Sankaranarayanan |
11:00 - 11:15 | Spotlight presentations | |
11:15 - 12:30 | Poster Session (Boards #315-344) | |
12:30 - 13:30 | Lunch break | |
13:30 - 14:00 | Keynote 3: Computational photography at the point of capture on mobile cameras | Marc Levoy |
14:00 - 14:20 | Invited Talk 3: Mobile Time-Lapse | Abe Davis |
14:20 - 14:40 | Invited Talk 4: From Cameras to Displays, End-to-End Optimization Empowers Imaging Fidelity | Evan Peng |
14:40 - 15:00 | Invited Talk 5: Revealing the Invisible with Neural Inverse Light Transport | Akshat Dave |
15:00 - 15:30 | Afternoon Break | |
15:30 - 16:00 | Keynote 4: Seeing Beyond the Blur: Imaging Black Holes with Increasingly Strong Assumptions | Katie Bouman |
16:00 - 16:45 | Panel discussion | |
16:45 - 16:55 | Closing Remarks | Organizers |
ID | Board Number | Title | Presenter |
--- | --- | --- | --- |
1 | #315 | Learning Constrained Binary Color Filter Arrays For Enhanced Demosaicking with Trainable Hard Thresholding | Ali Cafer Gurbuz |
2 | #316 | Physics constrained neural tomography of a black hole | Aviad Levis |
3 | #317 | Single View Refractive Index Tomography with Neural Fields | Brandon Zhao |
4 | #318 | Towards 3D Vision with Low-Cost Single-Photon Cameras | Carter Sifferman |
5 | #319 | Optimized nano optics for 360 Structured light | Eunsue Choi |
6 | #320 | PixRO: Pixel-Distributed Rotational Odometry with Gaussian Belief Propagation | Ignacio Alzugaray |
7 | #321 | Behind the Blurry Background: Practical Synthetic Features To Enable Robust Imaging Through Scattering | Jeffrey Alido |
8 | #322 | Doppler Time-of-Flight Rendering | Juhyeon Kim |
9 | #323 | 3D sensing with single-photon cameras for resource-constrained applications | Kaustubh Sadekar |
10 | #324 | Seeing the World Through Your Eyes | Kevin Zhang |
11 | #325 | WaveMo: Learning Wavefront Modulations to See Through Scattering | Mingyang Xie |
12 | #326 | Domain Expansion via Network Adaptation for Solving Inverse Problems | Nebiyou Tenager Yismaw |
13 | #327 | TurboSL: Dense, Accurate and Fast 3D by Neural Inverse Structured Light | Parsa Mirdehghan |
14 | #328 | Computational multi-aperture camera for wide-field high-resolution imaging | Qianwan Yang |
15 | #329 | Turb-Seg-Res: A Segment-then-Restore Pipeline for Dynamic Videos with Atmospheric Turbulence | Ripon Kumar Saha |
16 | #330 | CodedEvents: Optimal Point-Spread-Function Engineering for 3D-Tracking with Event Cameras | Sachin Shah |
17 | #331 | Snapshot Lidar: Fourier embedding of amplitude and phase for single-image depth reconstruction | Sarah Friday |
18 | #332 | Differentiable Display Photometric Stereo | Seokjun Choi |
19 | #333 | Dispersed Structured Light for Hyperspectral 3D imaging | Suhyun Shin |
20 | #334 | Generalized Event Cameras | Varun Sundar |
21 | #335 | ƒNeRF: High Quality Radiance Fields from Practical Cameras | Yi Hua |
22 | #336 | Spectral and Polarization Vision: Spectro-polarimetric Real-world Dataset | Yujin Jeon |
23 | #337 | Projecting Trackable Thermal Patterns for Dynamic Computer Vision | Mark Sheinin |
24 | #338 | Explicit Neural Fields for 3D Refractive Index Reconstruction using Two-photon Fluorescence Illuminations | Yi Xue |
25 | #339 | Streaming quanta sensors | Tianyi Zhang |
26 | #340 | Textureless Deformable Object Tracking with Invisible Markers | Yubei Tu |
Computational Cameras and Displays Workshop - June 18, 2024