If the iPhone had a LiDAR sensor, it could presumably generate a 3D point cloud, which in theory you could use for AR 3D projection/reconstruction of the surroundings. For example, you could build an AR video chat where you see a holographic projection of the person you are talking to in a 3D environment.
However, given that LiDAR has yet to be released on the iPhone, would it be possible to use both cameras at once, e.g. the wide and the normal one, correct for each camera's FOV and optical distortion, and then apply OpenCV's stereo 3D transforms, knowing the optical properties of the setup (distance between the cameras, focal length, etc.), to generate a 3D image of the scene with almost as much precision as LiDAR? I think the depth map in Apple's AVFoundation is certainly a possibility; however, the TrueDepth camera system was only introduced with the iPhone X.
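To make the idea concrete, here is a minimal OpenCV sketch of the pipeline I have in mind (rectify the pair, match, reproject to 3D). It assumes the intrinsics (K1, K2), distortion coefficients (D1, D2) and the relative pose (R, T) between the two lenses are already known from a prior stereo calibration, e.g. cv2.stereoCalibrate on chessboard captures or from AVCameraCalibrationData on the device; the file names and numbers below are placeholders, not real iPhone calibration values.

```python
import cv2
import numpy as np

# Placeholder calibration (replace with your own calibrated values).
K1 = np.array([[1400.0, 0.0, 960.0],
               [0.0, 1400.0, 540.0],
               [0.0, 0.0, 1.0]])          # intrinsics, normal camera
D1 = np.zeros(5)                           # distortion coeffs, normal camera
K2 = np.array([[900.0, 0.0, 960.0],
               [0.0, 900.0, 540.0],
               [0.0, 0.0, 1.0]])           # intrinsics, wide camera (shorter focal length)
D2 = np.zeros(5)                           # distortion coeffs, wide camera
R = np.eye(3)                              # rotation between the two cameras
T = np.array([[-0.012], [0.0], [0.0]])     # baseline ~1.2 cm (rough guess for iPhone lens spacing)

img_l = cv2.imread("normal.png", cv2.IMREAD_GRAYSCALE)   # hypothetical captured frames
img_r = cv2.imread("wide.png", cv2.IMREAD_GRAYSCALE)
size = (img_l.shape[1], img_l.shape[0])

# 1. Rectify: put both images on a common epipolar geometry and get the
#    Q matrix used later to reproject disparity into 3D.
R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K1, D1, K2, D2, size, R, T, alpha=0)
map1x, map1y = cv2.initUndistortRectifyMap(K1, D1, R1, P1, size, cv2.CV_32FC1)
map2x, map2y = cv2.initUndistortRectifyMap(K2, D2, R2, P2, size, cv2.CV_32FC1)
rect_l = cv2.remap(img_l, map1x, map1y, cv2.INTER_LINEAR)
rect_r = cv2.remap(img_r, map2x, map2y, cv2.INTER_LINEAR)

# 2. Match: semi-global block matching gives a disparity map (SGBM output is scaled by 16).
sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
disparity = sgbm.compute(rect_l, rect_r).astype(np.float32) / 16.0

# 3. Reproject: turn disparity into an (H, W, 3) array of XYZ points.
points_3d = cv2.reprojectImageTo3D(disparity, Q)
```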
This Medium article on stereo 3D reconstruction seems to suggest that such a configuration is possible, although I suppose the small baseline distance between the cameras, relative to the distance of the object, might affect the outcome.
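One rough way to quantify that concern is the standard stereo triangulation error bound (not anything specific to the article): depth error grows quadratically with distance and shrinks with baseline, so the ~1 cm lens spacing hurts quickly at room-scale distances. Sketch, assuming a rectified pair and the placeholder numbers from the code above:

```latex
Z = \frac{f\,B}{d}, \qquad
\Delta Z \approx \frac{Z^{2}}{f\,B}\,\Delta d
% With B \approx 0.01\,\text{m}, f \approx 1400\,\text{px} and a matching error
% of \Delta d \approx 0.5\,\text{px}, an object at Z = 2\,\text{m} already has
% \Delta Z \approx 0.14\,\text{m}.
```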
Thanks very much!