peterCore
The Swift Den
Created by peterCore on 10/18/2023 in #swift-development
Attaching position, rotation, extrinsics, gravity, and intrinsics to an image's metadata
Hi, I'm looking to see if anyone has figured out how to attach position, rotation, extrinsics, gravity, and intrinsics to an image's metadata. Recently I've been using ARKit, RealityKit, and SceneKit to capture my ARFrame data, including the CVPixelBuffers. I noticed that Apple's new Object Capture API for iOS natively saves all of this data, but its capture process is quite restrictive. Instead, I'd rather capture with my own method and attach the metadata manually.

I've been successful with the depth map and the GPS and EXIF dictionaries, but Apple documentation for the remaining parameters doesn't exist. As of iOS 16, Apple introduced new fields like kIIOCameraModel_Intrinsics, but it's not clear which dictionary they belong in, so I don't know how to add them to the metadata. Anyone have a clue?
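Here's a sketch of the kind of write path I mean (simplified, not my exact code). The EXIF/GPS part works for me; the kIIO* part is exactly what I can't figure out, so the placement of kIIOMetadata_CameraModelDict at the top level of the properties dictionary, and the raw-bytes encoding of the intrinsics, are just my guesses:

```swift
import CoreGraphics
import ImageIO
import UniformTypeIdentifiers
import simd

// Sketch: write a HEIC with EXIF/GPS plus the (undocumented) iOS 16
// camera-model dictionary. Key placement and value encoding for the
// kIIO* keys are assumptions, not confirmed behavior.
func writeAnnotatedImage(_ image: CGImage,
                         to url: URL,
                         intrinsics: simd_float3x3) throws {
    guard let dest = CGImageDestinationCreateWithURL(url as CFURL,
                                                     UTType.heic.identifier as CFString,
                                                     1, nil) else {
        throw CocoaError(.fileWriteUnknown)
    }

    // Assumption: the 3x3 intrinsics matrix serialized as raw float bytes.
    var m = intrinsics
    let intrinsicsData = Data(bytes: &m, count: MemoryLayout<simd_float3x3>.size)

    let properties: [CFString: Any] = [
        // These two dictionaries already round-trip correctly for me.
        kCGImagePropertyGPSDictionary: [kCGImagePropertyGPSLatitude: 37.33],
        kCGImagePropertyExifDictionary: [kCGImagePropertyExifUserComment: "ARKit capture"],
        // Assumed placement of the new iOS 16 keys (parent dict name is
        // from the ImageIO headers; whether it goes here is the question):
        kIIOMetadata_CameraModelDict: [
            kIIOCameraModel_Intrinsics: intrinsicsData
        ]
    ]

    CGImageDestinationAddImage(dest, image, properties as CFDictionary)
    guard CGImageDestinationFinalize(dest) else { throw CocoaError(.fileWriteUnknown) }
}
```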
I think that if you can attach all of this information to the images and then create a PhotogrammetrySession, the Object Capture API will run faster and produce 3D models that are true to scale and positioned close to your world-origin transform.
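The plan would then be to feed the annotated images into something like this (a minimal sketch; paths are placeholders, and PhotogrammetrySession needs macOS 12+ or iOS 17+):

```swift
import RealityKit

// Minimal PhotogrammetrySession sketch: folder of annotated HEICs in,
// USDZ model out. Both URLs below are placeholders.
let inputFolder = URL(fileURLWithPath: "/path/to/captures", isDirectory: true)
let outputModel = URL(fileURLWithPath: "/path/to/model.usdz")

let session = try PhotogrammetrySession(input: inputFolder)
let request = PhotogrammetrySession.Request.modelFile(url: outputModel, detail: .reduced)

Task {
    for try await output in session.outputs {
        switch output {
        case .requestComplete(_, .modelFile(let url)):
            print("Model written to \(url)")
        case .requestError(_, let error):
            print("Reconstruction failed: \(error)")
        default:
            break
        }
    }
}
try session.process(requests: [request])
```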
Currently I get good 3D models with just the depth maps attached inside the images I make, but their orientation is wrong due to the missing gravity metadata, etc.
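For anyone searching later, the depth-embedding step that already works for me is roughly this: wrap the depth map in an AVDepthData and attach it as auxiliary data on the same CGImageDestination before finalizing:

```swift
import AVFoundation
import ImageIO

// Attach depth as an auxiliary image. Call this after
// CGImageDestinationAddImage and before CGImageDestinationFinalize.
func addDepth(_ depth: AVDepthData, to destination: CGImageDestination) {
    var auxType: NSString?
    guard let auxInfo = depth.dictionaryRepresentation(forAuxiliaryDataType: &auxType),
          let auxType else { return }
    CGImageDestinationAddAuxiliaryDataInfo(destination,
                                           auxType as CFString,
                                           auxInfo as CFDictionary)
}
```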
#arkit
2 replies