Draft:Kihwan Kim

From Wikipedia, the free encyclopedia


Kihwan Kim
Kihwan at XR Unlocked[1] (New York; 2024)
Born: 7 March 1975 (age 49)
Alma mater: Yonsei University (B.S.); Georgia Institute of Technology (Ph.D.)
Title: Corporate Executive Vice President at Samsung Electronics (MX)
Scientific career
Fields: Computer vision, graphics, machine learning, on-device AI, 3D vision, XR
Korean name
Hangul: 김기환
Hanja: 金基煥
Revised Romanization: Gim Gihwan
McCune–Reischauer: Kim Kihwan

Kihwan Kim (Korean: 김기환; born 7 March 1975)[2] is a computer scientist and the Corporate Executive Vice President of the Mobile eXperience (MX) division at Samsung Electronics.

His specializations include on-device artificial intelligence (AI), multimodal AI, 3D vision, camera pipeline optimization, and extended reality (XR) platforms for Samsung's high-end consumer electronics, notably foldable phones and visual see-through (VST) headsets and augmented reality (AR) glasses. Most recently, he led the team behind Project Moohan, featured prominently on Android XR.

With a track record in both research[3] and commercialization, Kihwan has led cross-functional teams to mass product deployment, delivering over 10 million units annually across phones, watches, and other mobile devices.

Kihwan holds a Ph.D. and an M.S. in computer science from the Georgia Institute of Technology and a B.S. in electrical engineering from Yonsei University.

Recent highlights

SDC 2022 (Oct 2022)

At SDC 2022, Kihwan unveiled a new avatar software development kit (SDK) designed for integration across Samsung devices, including smartphones, smartwatches, and televisions.[4] The announcement was accompanied by demonstrations of the latest iteration of avatars within the Samsung Galaxy ecosystem.

XR Unlocked (Dec 2024)

Kihwan introducing Android XR at XR Unlocked,[1] a new extended reality (XR) operating system based on Android and developed by Samsung, Google, and Qualcomm

On 12 December 2024, Samsung and Google jointly unveiled a new operating system for XR, called Android XR, at Google's developer event "XR Unlocked" in New York City. Two prototype devices were showcased at the event, one of which was Samsung's mixed reality (MR) headset, Project Moohan.

Project Moohan; "Moohan" means 'infinity' in Korean, highlighting the infinite possibilities of XR (from Samsung Newsroom)[5]

Project Moohan, similar to devices such as the Meta Quest and Apple Vision Pro, is a headset designed to enable immersive experiences. The name "Moohan" means "infinity" in Korean, reflecting Samsung's belief in delivering unparalleled, immersive experiences within an infinite space.[5] At the event, the device was shown working seamlessly with Google's AI assistant, Gemini, allowing users to issue voice commands.

In an interview with Wired,[6] Kihwan emphasized the revolutionary nature of the Android XR project, describing it as "not something that could be created by a single team or company, but a project that required an entirely different level of collaboration".

To build the new Android XR platform, we set off on a collaboration at a level that has never been seen before between industry leaders, with experts from computer vision, graphics, and machine learning all working together to co-design, test, and create a new operating system from the ground up.

During his keynote talk,[1] Kihwan introduced the three core values of the Android XR platform, which underpin the vision for Project Moohan. He explained that the platform aims to create meaningful change in everyday life by delivering an immersive visual experience through unlimited visual content, enabling natural interaction through audio-visual elements and movement, and fostering open collaboration with the Android, OpenXR, VR, and mobile AR communities.

Engineering career

Samsung SDS (2001–2005)

Kihwan's professional journey began at Samsung SDS, where he worked as a senior research engineer, focusing on a face recognition system (ViaFace™) and a multimedia collaboration system (Syncbiz™).[2]

Disney Research (2009)

In 2009, he joined Disney Research as a research associate and research intern, where he worked on automated broadcasting systems that estimate player intention and track players in sports games.[7]

NVIDIA (2012–2020)

Kihwan Kim was a principal research scientist at NVIDIA Research from 2012 to 2020. During his tenure, he focused on solutions to various 3D computer vision problems such as SLAM and localization,[8][9] 3D reconstruction,[10][11] point cloud registration,[12][13] hand tracking,[14][15] intrinsic property estimation,[16][17] depth estimation,[18][19] and mid-level vision problems[20][21] for applications in robotics, augmented and virtual reality (AR/VR), and autonomous vehicles. Kihwan led several teams within NVIDIA Research and contributed to the field of 3D deep learning in the 2010s.[3]

Samsung Electronics / Mobile eXperience (MX) (2020–present)

Kihwan returned to Samsung Electronics in 2020 as a Corporate Vice President, spearheading camera and computer vision solutions for Galaxy flagship mobile models. Major AI and camera projects he led during this time include Under Display Camera (UDC), Photo Remaster, and Nightography.

In 2021, his expertise led him to become head of the joint team with Google and Qualcomm behind the VST headset and AR glasses.

Managing cross-functional teams worldwide, he led XR collaboration spanning both software and hardware innovation.

In 2024, he was promoted to Executive Vice President at Samsung, leading both the software and hardware teams.

References

  1. ^ a b c Android Developers (2024-12-18). XR Unlocked '24 in under 8 minutes. Retrieved 2025-01-06 – via YouTube.
  2. ^ a b "Kihwan Kim's page". www.kihwan23.com. Retrieved 2025-01-05.
  3. ^ a b "Kihwan Kim". scholar.google.com. Retrieved 2025-01-06.
  4. ^ Samsung Developers (2022-10-13). [SDC22] AR Emoji: Your avatar, your experience. Retrieved 2025-01-06 – via YouTube.
  5. ^ a b "Unlock the Infinite Possibilities of XR With Galaxy AI". Samsung Newsroom.
  6. ^ Chokkattu, Julian. "Hands On With Android XR and Google's AI-Powered Smart Glasses". Wired.
  7. ^ "Motion Fields to Predict Play Evolution in Dynamic Sport Scenes" (PDF). CVPR 2010.
  8. ^ Herrera, Daniel C.; Kim, Kihwan; Kannala, Juho; Pulli, Kari; Heikkila, Janne (2014). "DT-SLAM: Deferred Triangulation for Robust SLAM". 2014 2nd International Conference on 3D Vision. pp. 609–616. doi:10.1109/3DV.2014.49. ISBN 978-1-4799-7000-1.
  9. ^ Brahmbhatt, Samarth; Gu, Jinwei; Kim, Kihwan; Hays, James; Kautz, Jan (2017). "Geometry-Aware Learning of Maps for Camera Localization". arXiv:1712.03342 [cs.CV].
  10. ^ Maier, Robert; Kim, Kihwan; Cremers, Daniel; Kautz, Jan; Niessner, Matthias (2017). "Intrinsic3D: High-Quality 3D Reconstruction by Joint Appearance and Geometry Optimization with Spatially-Varying Lighting". 2017 IEEE International Conference on Computer Vision (ICCV). pp. 3133–3141. arXiv:1708.01670. doi:10.1109/ICCV.2017.338. ISBN 978-1-5386-1032-9.
  11. ^ Liu, Chen; Kim, Kihwan; Gu, Jinwei; Furukawa, Yasutaka; Kautz, Jan (2018). "PlaneRCNN: 3D Plane Detection and Reconstruction from a Single Image". arXiv:1812.04072 [cs.CV].
  12. ^ Eckart, Ben; Kim, Kihwan; Troccoli, Alejandro; Kelly, Alonzo; Kautz, Jan (2016). "Accelerated Generative Models for 3D Point Cloud Data". 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). pp. 5497–5505. doi:10.1109/CVPR.2016.593. ISBN 978-1-4673-8851-1.
  13. ^ Eckart, Ben; Kim, Kihwan; Kautz, Jan (2018). "Fast and Accurate Point Cloud Registration using Trees of Gaussian Mixtures". arXiv:1807.02587 [cs.CV].
  14. ^ Molchanov, Pavlo; Yang, Xiaodong; Gupta, Shalini; Kim, Kihwan; Tyree, Stephen; Kautz, Jan (2016). "Online Detection and Classification of Dynamic Hand Gestures with Recurrent 3D Convolutional Neural Networks". 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). pp. 4207–4215. doi:10.1109/CVPR.2016.456. ISBN 978-1-4673-8851-1.
  15. ^ Molchanov, Pavlo; Gupta, Shalini; Kim, Kihwan; Kautz, Jan (2015). "Hand gesture recognition with 3D convolutional neural networks". 2015 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). pp. 1–7. doi:10.1109/CVPRW.2015.7301342. ISBN 978-1-4673-6759-2.
  16. ^ Kim, Kihwan; Gu, Jinwei; Tyree, Stephen; Molchanov, Pavlo; Niebner, Matthias; Kautz, Jan (2017). "A Lightweight Approach for On-the-Fly Reflectance Estimation". 2017 IEEE International Conference on Computer Vision (ICCV). pp. 20–28. arXiv:1705.07162. doi:10.1109/ICCV.2017.12. ISBN 978-1-5386-1032-9.
  17. ^ Boss, Mark; Jampani, Varun; Kim, Kihwan; Lensch, Hendrik P.A.; Kautz, Jan (2020). "Two-Shot Spatially-Varying BRDF and Shape Estimation". 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). pp. 3981–3990. arXiv:2004.00403. doi:10.1109/CVPR42600.2020.00404. ISBN 978-1-7281-7168-5.
  18. ^ Ranjan, Anurag; Jampani, Varun; Balles, Lukas; Kim, Kihwan; Sun, Deqing; Wulff, Jonas; Black, Michael J. (2018). "Competitive Collaboration: Joint Unsupervised Learning of Depth, Camera Motion, Optical Flow and Motion Segmentation". arXiv:1805.09806 [cs.CV].
  19. ^ Liu, Chao; Gu, Jinwei; Kim, Kihwan; Narasimhan, Srinivasa; Kautz, Jan (2019). "Neural RGB->D Sensing: Depth and Uncertainty from a Video Camera". arXiv:1901.02571 [cs.CV].
  20. ^ Lv, Zhaoyang; Kim, Kihwan; Troccoli, Alejandro; Sun, Deqing; Rehg, James M.; Kautz, Jan (2018). "Learning Rigidity in Dynamic Scenes with a Moving Camera for 3D Motion Field Estimation". arXiv:1804.04259 [cs.CV].
  21. ^ Golyanik, Vladislav; Kim, Kihwan; Maier, Robert; Nießner, Matthias; Stricker, Didier; Kautz, Jan (2017). "Multiframe Scene Flow with Piecewise Rigid Motion". arXiv:1710.02124 [cs.CV].