However, given that LiDAR has yet to be released on the iPhone, would it be possible to use both cameras at once, e.g. the wide and the normal, correct for each camera's FOV and optical distortion, and then apply OpenCV's stereo 3D reconstruction, knowing the optical properties of the setup (distance between the cameras, focal lengths, etc.), to generate a 3D image of the scene with nearly the precision of LiDAR? I think the depth map in Apple's AVFoundation is certainly a possibility; however, the TrueDepth camera system was only introduced with the iPhone X.
This Medium article on stereo 3D reconstruction seems to suggest that such a configuration is possible, although I suspect the small baseline between the cameras relative to the distance to the object might limit the depth accuracy.