TRANSFORMING ASSET OPERATION VIDEO TO AUGMENTED REALITY GUIDANCE MODEL
BACKGROUND

The present invention relates generally to the field of computing, and more particularly to augmented reality (AR) technologies.

In asset management scenarios, a technician may have to rely on text-based and/or video-based operation guides for installation, uninstallation, and maintenance procedures. However, in many instances, these text-based and video-based operation guides may be hard to follow. For example, it may be difficult for a technician to identify an exact match between the objects/tools called out in the operation guides and the objects/tools in the technician's work space. It may also be difficult for the technician to identify the exact location of a connection point between objects in the technician's work space based on the operation guides. It may further be difficult for the technician to verify the accuracy of a completed procedure and determine the cause of an error based on the operation guides.

SUMMARY

Embodiments of the present invention disclose a method, computer system, and a computer program product for AR guidance. The present invention may include detecting a plurality of objects in a video recording associated with completing a task. The present invention may include generating a plurality of three-dimensional (3D) object models based on scanning a plurality of real objects in a task space. The present invention may include matching the detected plurality of objects in the video recording with the generated plurality of 3D object models representing the plurality of real objects in the task space. The present invention may include generating, based on the video recording, an augmented reality (AR) guidance model for completing the task, wherein the generated AR guidance model replaces the detected plurality of objects in the video recording with the generated plurality of 3D object models representing the plurality of real objects in the task space.

These and other objects, features and advantages of the present invention will become apparent from the following detailed description of illustrative embodiments thereof, which is to be read in connection with the accompanying drawings.

BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS

The various features of the drawings are not to scale as the illustrations are for clarity in facilitating one skilled in the art in understanding the invention in conjunction with the detailed description. In the drawings:

DETAILED DESCRIPTION

Detailed embodiments of the claimed structures and methods are disclosed herein; however, it can be understood that the disclosed embodiments are merely illustrative of the claimed structures and methods that may be embodied in various forms. This invention may, however, be embodied in many different forms and should not be construed as limited to the exemplary embodiments set forth herein. Rather, these exemplary embodiments are provided so that this disclosure will be thorough and complete and will fully convey the scope of this invention to those skilled in the art. In the description, details of well-known features and techniques may be omitted to avoid unnecessarily obscuring the presented embodiments.

The present invention may be a system, a method, and/or a computer program product at any possible technical detail level of integration. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention.
The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire.

Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device.

Computer readable program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, configuration data for integrated circuitry, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, Python, C++, or the like, and procedural programming languages, such as the "C" programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention.

Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions.

These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks.

The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks.

The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the blocks may occur out of the order noted in the Figures. For example, two blocks shown in succession may, in fact, be accomplished as one step, executed concurrently, substantially concurrently, in a partially or wholly temporally overlapping manner, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions.
The following described exemplary embodiments provide a system, method and program product for transforming asset operation videos into AR guidance models. As such, the present embodiment has the capacity to improve the technical field of asset management by implementing AR-assisted guidance generated from an asset operation video, where objects detected in the video are matched with objects found in a real environment.

More specifically, an asset management program may analyze a video to identify one or more objects, tools, and actions involved in each step of a procedure. Then, the asset management program may scan the objects and tools in a real environment (e.g., the work or task space of a technician) using an AR enabled camera to generate three-dimensional (3D) models of the objects and tools scanned in the real environment. Next, the asset management program may match the 3D models of the objects and tools with the objects and tools identified in the video to build a complete AR guidance model of the video. Thereafter, the asset management program may enable the AR guidance model to be applied to guide a user through installation, uninstallation, and/or maintenance steps.

In one embodiment, the asset management program may enable recognizing objects and tools in the real environment as seen through an AR device (e.g., AR headset, AR glasses, and/or AR enabled smart phone cameras) and displaying 3D markings (e.g., arrows, labels, or other annotations) associated with those objects and tools. In some embodiments, the asset management program may also illustrate and demonstrate the operation process via the AR device. In various embodiments, the asset management program may also enable checking the progress of a procedure and monitoring for potential sources of error in the procedure.

As described previously, in asset management scenarios, a technician may have to rely on text-based and/or video-based operation guides for installation, uninstallation, and maintenance procedures. However, in many instances, these text-based and video-based operation guides may be hard to follow. For example, it may be difficult for a technician to identify an exact match between the objects/tools called out in the operation guides and the objects/tools in the technician's work space. It may also be difficult for the technician to identify the exact location of a connection point between objects in the technician's work space based on the operation guides. It may further be difficult for the technician to verify the accuracy of a completed procedure and determine the cause of an error based on the operation guides.

Therefore, it may be advantageous to, among other things, provide a way to automatically transform a source video into an AR guidance model for asset management operations without requiring pre-defined 3D models. According to at least one embodiment, the asset management program may match objects and tools shown in a source video with real objects and tools scanned in a real environment. As noted above, the asset management program may not require pre-defined 3D models of the objects and tools and may instead generate these 3D models based on scanning the real environment.
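As a concrete illustration of this data flow, the following minimal Python sketch shows how detected video objects, scanned 3D models, and AR guidance steps might be represented and matched. All names (DetectedObject, ScannedModel, match, and the sample values) are hypothetical; the disclosure does not prescribe any particular schema or implementation.

```python
"""Minimal sketch of the guidance pipeline's data flow. All names and
values are illustrative assumptions, not taken from the disclosure."""
from dataclasses import dataclass, field


@dataclass
class DetectedObject:
    """An object or tool called out in the source video."""
    object_id: str
    label: str                  # natural-language label from the narration
    images_2d: list[str] = field(default_factory=list)  # frame crops


@dataclass
class ScannedModel:
    """A 3D model generated by scanning a real object in the task space."""
    object_id: str
    mesh_path: str


@dataclass
class GuidanceStep:
    """One AR guidance step corresponding to one video segment."""
    step_id: int
    objects: list[ScannedModel]
    action_label: str


def match(detected: list[DetectedObject],
          scanned: dict[str, ScannedModel]) -> list[ScannedModel]:
    """Replace each 2D detection with its scanned 3D counterpart."""
    return [scanned[d.object_id] for d in detected if d.object_id in scanned]


# Toy usage: one detected object matched to one scanned model.
bolt = DetectedObject("obj-a", "hex bolt", ["frame_012.png"])
scanned_models = {"obj-a": ScannedModel("obj-a", "bolt_scan.obj")}
step = GuidanceStep(1, match([bolt], scanned_models), "fasten bolt to bracket")
print(step)
```

The key design point this sketch illustrates is that the 3D models live in a lookup keyed by object identity, so the guidance model can be rebuilt for any task space simply by rescanning, without pre-defined 3D assets.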
According to one embodiment, if the source video includes multiple similar accessories (e.g., objects and/or tools), the asset management program may identify the correct accessory by analyzing the assembly relationships depicted in the source video and implementing AR measurement techniques to find the correct accessory based on related accessories. According to one embodiment, the asset management program may identify and mark connection anchors on a 3D model, which may be used to mark the connection position on real objects. In some embodiments, the asset management program may compose an action-complete-model, which may be used to compare and judge the progress of an installation, uninstallation, and/or maintenance procedure.

Referring to The client computer 102 may communicate with the server computer 112 via the communications network 116. The communications network 116 may include connections, such as wire, wireless communication links, or fiber optic cables. As will be discussed with reference to According to the present embodiment, a user using a client computer 102 or a server computer 112 may use the asset management program 110

Referring now to According to one embodiment, the asset management environment 200 may include a computer system 202 having a tangible storage device and a processor that is enabled to run the asset management program 110 In one embodiment, the asset management program 110 According to at least one embodiment, the asset management program 110

According to one embodiment, the video analyzer component 204 may receive a video file 212 including a source video for constructing an AR guidance model. In one embodiment, the video file 212 may include a video recording of a subject matter expert (SME) performing an asset operation, such as, for example, an installation, uninstallation, repair, or other maintenance procedure which a technician 214 or other user may want to replicate in their task space 216.

In one embodiment, once the video file 212 is received by the video analyzer component 204, the video analyzer component 204 may scan the video file 212 using an object identifier 218, an anchor identifier 220, an action identifier 222, and a step identifier 224. In at least one embodiment, the step identifier 224 may be implemented to split or separate the video file 212 into multiple video segments 226. In one embodiment, the video segments 226 may represent one or more steps or processes of a procedure and may be determined based on analyzing image similarities in the video data, as illustrated by the sketch below.

According to one embodiment, in each video segment 226 (e.g., step), the video analyzer component 204 may implement the object identifier 218 to identify one or more objects 228 (e.g., object A and object B) and/or one or more tools 230. Further, in each video segment 226, the video analyzer component 204 may implement the anchor identifier 220 to identify one or more anchors 232 (e.g., connection points between objects) associated with the respective objects 228 and may implement the action identifier 222 to identify one or more actions 234 (e.g., action with tools). In one embodiment, the anchors 232 may be further defined as a source anchor or a target anchor of a respective object 228 such that the action 234 may be described as a connection/disconnection between the source anchor and the target anchor. For example, in

According to one embodiment, the objects 228, tools 230, anchors 232, and actions 234 associated with each video segment 226 may be described as two-dimensional (2D) image data.
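By way of illustration only, the sketch below splits a video into step segments wherever the histogram similarity between sampled frames drops, which is one plausible reading of "analyzing image similarities in the video data." The use of OpenCV, the 0.7 threshold, and the 15-frame stride are assumptions, not values taken from the disclosure.

```python
"""Sketch of step segmentation by image similarity. Assumes OpenCV;
threshold and stride are illustrative tuning choices."""
import cv2


def split_into_segments(video_path: str, threshold: float = 0.7,
                        stride: int = 15) -> list[tuple[float, float]]:
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    boundaries = [0.0]          # segment start times in seconds
    prev_hist, frame_idx = None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_idx % stride == 0:
            # Compare coarse hue/saturation histograms of sampled frames.
            hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
            hist = cv2.calcHist([hsv], [0, 1], None, [50, 60],
                                [0, 180, 0, 256])
            cv2.normalize(hist, hist)
            if prev_hist is not None:
                sim = cv2.compareHist(prev_hist, hist, cv2.HISTCMP_CORREL)
                if sim < threshold:          # low similarity => new step
                    boundaries.append(frame_idx / fps)
            prev_hist = hist
        frame_idx += 1
    cap.release()
    boundaries.append(frame_idx / fps)
    # Pair consecutive boundaries into (start, end) segments.
    return list(zip(boundaries, boundaries[1:]))
```

A real system would likely smooth these boundaries with the narration (e.g., "next, attach the bracket") recovered by the voice-to-text and NLP components described below.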
The video analyzer component 204 and video segment 226 will be further described with reference to According to one embodiment, the asset management program 110 According to one embodiment, the asset management program 110

According to one embodiment, the AR guidance step 248 may include one or more objects 250 (e.g., object A and object B) corresponding to the objects 228 in the corresponding video segment 226. Similarly, the AR guidance step 248 may include one or more anchors 252, tools 254, and actions 256 corresponding to the anchors 232, tools 230, and actions 234 associated with each video segment 226. However, unlike the 2D image data found in the corresponding video segment 226, the AR guidance step 248 may include 3D model data for the objects 250, anchors 252, tools 254, and actions 256. Further, the 3D model data associated with the actions 256 may contain information illustrating assembly of the objects 250 using the anchor positions 244 (e.g., connection points in the 3D objects). The AR guidance step 248 will be further described with reference to

According to one embodiment, the asset management program 110 In one embodiment, the AR guidance model player component 210 may display (e.g., via the AR device 238) AR annotations for the objects and tools required for each action 412 in the AR guidance step 248. In one embodiment, the AR guidance model player component 210 may demonstrate the action 412 via the AR device 238 based on the anchor positions 244 for each object. In one embodiment, the AR guidance model player component 210 may monitor and check the progress of each AR guidance step 248 by comparing the real progress of the assembly in the task space 216 with an action complete model for each step. According to one embodiment, the AR guidance model player component 210 may also enable verifying the accuracy of a completed assembly or procedure and identifying causes of potential errors.

Referring now to According to at least one embodiment, the video segmentation process 300 may be implemented by the video analyzer component 204, as described with reference to As described previously with reference to

In one embodiment, object detection 304 may use computer vision and image processing to detect specific objects within an image based on features that help to classify the objects. For example, object detection 304 may be used to determine which objects/tools are being used in each video segment 226. In one embodiment, object tracking 306 may use computer vision and machine learning to locate an object in successive frames of a video. For example, object tracking 306 may be used to determine how the objects/tools are being moved during the assembly/procedure in each video segment 226. In one embodiment, voice-to-text 308 may use speech recognition capabilities to capture and understand the words spoken in the video recording of the video file 212 and output the spoken words as text. In one embodiment, NLP 310 may use machine learning to process and understand human language (e.g., including intent and sentiment) in the form of text or voice data. For example, the voice-to-text 308 and the NLP 310 may be used to process and understand the instructions spoken by the SME in the video recording. According to one embodiment, the object detection 304, object tracking 306, voice-to-text 308, and NLP 310 technologies may be used to generate the video segment data 302 for each step described/illustrated in the video file 212.
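As a minimal, assumption-laden sketch of the detect-then-track idea described above, the following tracks a previously detected object crop across a segment's frames using OpenCV template matching. A production implementation of object detection 304 and object tracking 306 would more likely use learned detectors and trackers; the file names, threshold, and frame limits here are purely illustrative.

```python
"""Sketch of tracking a detected object crop across a segment's frames
with template matching. Assumes OpenCV; not the disclosure's method."""
import cv2


def track_object(video_path: str, template_path: str,
                 start_frame: int = 0, max_frames: int = 300):
    # The template is a 2D crop of the object, e.g., produced by the
    # object detection step for this segment (hypothetical workflow).
    template = cv2.imread(template_path, cv2.IMREAD_GRAYSCALE)
    cap = cv2.VideoCapture(video_path)
    cap.set(cv2.CAP_PROP_POS_FRAMES, start_frame)
    track = []
    for _ in range(max_frames):
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        res = cv2.matchTemplate(gray, template, cv2.TM_CCOEFF_NORMED)
        _, score, _, top_left = cv2.minMaxLoc(res)   # best match location
        if score > 0.6:                              # illustrative threshold
            h, w = template.shape
            track.append((top_left, (top_left[0] + w, top_left[1] + h)))
    cap.release()
    return track   # per-frame bounding boxes showing how the object moves
```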
As shown in According to one embodiment, the objects 228 detected in each video segment 226 may be described in the video segment code snippet 312 (of the video segment data 302) using a unique object identifier 314, a natural language object label 316 (e.g., based on a description by the speaker in the video), and one or more 2D images 318. In one embodiment, the anchors 232 associated with each object 228 may be described in the video segment code snippet 312 using a unique anchor identifier 320 and an anchor image area 322 corresponding to the 2D images 318 of the associated object 228. In one embodiment, the anchor image area 322 may be described using image pixels of the 2D images 318 corresponding to the objects 228.

According to one embodiment, the tools 230 may be described in the video segment code snippet 312 in a manner similar to the objects 228. That is, each tool 230 may include a unique tool identifier 324, a natural language tool label 326 (e.g., based on a description by the speaker in the video), and one or more 2D tool images 328.

According to one embodiment, the actions 234 detected in each video segment 226 may be described in the video segment code snippet 312 using the tools (e.g., tool 230) needed for the action, a source object 330 (e.g., "sourceId"), one or more source object anchors 332 (e.g., "sourceAnchors"), a target object 334 (e.g., "targetId"), one or more target object anchors 336 (e.g., "targetAnchors"), a natural language action label 338 (e.g., based on a description by the speaker in the video), and an action video clip 340 indicating a start time and an end time for a video clip depicting the corresponding action 234 in the video file 212.

Referring now to According to one embodiment, the object matching process 400 may be implemented by the matcher component 208.

At 402, objects from previously analyzed video segments are imported. According to one embodiment, the asset management program 110

At 404, each action in a video segment is checked for 3D data. According to one embodiment, the asset management program 110

Then at 406, the asset management program 110 However, if at 406, the asset management program 110

Returning to 408, the asset management program 110 However, if at 416, the asset management program 110

According to one embodiment, once the asset management program 110 However, if at 420, the asset management program 110

Referring now to According to one embodiment, the matcher component 208 may receive the video segment data 302 (e.g., including the video segment code snippets 312 described with reference to

According to one embodiment, although the AR guidance code snippets 504 may include one or more portions that may be similar to the video segment code snippets 312, the AR guidance step data 502 may differ from the video segment data 302 in that the 2D image data in the video segment data 302 may be supplemented with 3D model data in the AR guidance step data 502. More specifically, the one or more 2D object images 318 corresponding to an object 228 may be replaced with a 3D object model 506 corresponding to that object 228. Similarly, the anchor image area 322 corresponding to the 2D images 318 of the associated object 228 may be replaced with a 3D anchor position 508 (e.g., a 3D position on the 3D model). Further, the one or more 2D tool images 328 corresponding to a tool 230 may be replaced with a 3D tool model 510 corresponding to that tool 230. In addition, the AR guidance step data 502 may also include a 3D action complete model 512 for each action 234.
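Because the exact serialization of the video segment code snippet 312 and the AR guidance code snippet 504 is not reproduced here, the Python literals below sketch one plausible shape using the field names called out above (e.g., "sourceId", "sourceAnchors", "targetId", "targetAnchors"). All values, file names, and the particular 2D-to-3D replacement shown are illustrative assumptions.

```python
"""Illustrative shapes for a video segment snippet and its AR guidance
counterpart. Field names follow the description above; values are made up."""

video_segment_snippet = {
    "objects": [
        {"id": "objA", "label": "mounting bracket",        # object 228
         "images2D": ["seg1_frame03.png"],                  # 2D images 318
         "anchors": [{"id": "objA-anchor1",                 # anchor 232
                      "imageArea": [120, 80, 40, 40]}]},    # pixel region 322
    ],
    "tools": [
        {"id": "tool1", "label": "torque wrench",          # tool 230
         "images2D": ["seg1_frame05.png"]},
    ],
    "actions": [
        {"toolId": "tool1",
         "sourceId": "objA", "sourceAnchors": ["objA-anchor1"],
         "targetId": "objB", "targetAnchors": ["objB-anchor2"],
         "label": "bolt bracket to frame",                  # action label 338
         "videoClip": {"start": 12.0, "end": 31.5}},        # clip 340, seconds
    ],
}

# After matching, 2D fields are replaced with 3D model data to form the
# AR guidance step data (again, an assumed serialization):
ar_guidance_snippet = {
    "objects": [
        {"id": "objA", "label": "mounting bracket",
         "model3D": "objA_scan.glb",                        # 3D object model 506
         "anchors": [{"id": "objA-anchor1",
                      "position3D": [0.04, 0.01, 0.00]}]},  # 3D anchor 508
    ],
    "tools": [{"id": "tool1", "model3D": "wrench_scan.glb"}],  # 3D tool 510
    "actions": [
        {"toolId": "tool1",
         "sourceId": "objA", "sourceAnchors": ["objA-anchor1"],
         "targetId": "objB", "targetAnchors": ["objB-anchor2"],
         "label": "bolt bracket to frame",
         "completeModel3D": "step1_done.glb"},              # complete model 512
    ],
}
```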
According to one embodiment, the 3D action complete model 512 may illustrate an assembly or disassembly of the objects 228 after each step using the 3D object models 506. In at least one embodiment, after each step of a task, the asset management program 110

According to one embodiment, the 3D object model 506, the 3D anchor position 508, the 3D tool model 510, and the 3D action complete model 512 described above may be generated by the matcher component 208 of the asset management program 110 Accordingly, the asset management program 110

It may be appreciated that

Data processing system 902, 904 is representative of any electronic device capable of executing machine-readable program instructions. Data processing system 902, 904 may be representative of a smart phone, a computer system, PDA, or other electronic devices. Examples of computing systems, environments, and/or configurations that may be represented by data processing system 902, 904 include, but are not limited to, personal computer systems, server computer systems, thin clients, thick clients, hand-held or laptop devices, multiprocessor systems, microprocessor-based systems, network PCs, minicomputer systems, and distributed cloud computing environments that include any of the above systems or devices.

User client computer 102 and network server 112 may include respective sets of internal components 902 Each set of internal components 902 Each set of internal components 902 a, b may also include network adapters (or switch port cards) or interfaces 922, such as TCP/IP adapter cards, wireless Wi-Fi interface cards, or 3G or 4G wireless interface cards or other wired or wireless communication links. The software program 108 and the asset management program 110 Each of the sets of external components 904

It is understood in advance that although this disclosure includes a detailed description on cloud computing, implementation of the teachings recited herein is not limited to a cloud computing environment. Rather, embodiments of the present invention are capable of being implemented in conjunction with any other type of computing environment now known or later developed.

Cloud computing is a model of service delivery for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, network bandwidth, servers, processing, memory, storage, applications, virtual machines, and services) that can be rapidly provisioned and released with minimal management effort or interaction with a provider of the service. This cloud model may include at least five characteristics, at least three service models, and at least four deployment models.

Characteristics are as follows:
On-demand self-service: a cloud consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed automatically without requiring human interaction with the service's provider.

Broad network access: capabilities are available over a network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, laptops, and PDAs).

Resource pooling: the provider's computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to demand. There is a sense of location independence in that the consumer generally has no control or knowledge over the exact location of the provided resources but may be able to specify location at a higher level of abstraction (e.g., country, state, or datacenter).

Rapid elasticity: capabilities can be rapidly and elastically provisioned, in some cases automatically, to quickly scale out and rapidly released to quickly scale in. To the consumer, the capabilities available for provisioning often appear to be unlimited and can be purchased in any quantity at any time.

Measured service: cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the utilized service.
Service Models are as follows:
Software as a Service (SaaS): the capability provided to the consumer is to use the provider's applications running on a cloud infrastructure. The applications are accessible from various client devices through a thin client interface such as a web browser (e.g., web-based e-mail). The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings.

Platform as a Service (PaaS): the capability provided to the consumer is to deploy onto the cloud infrastructure consumer-created or acquired applications created using programming languages and tools supported by the provider. The consumer does not manage or control the underlying cloud infrastructure including networks, servers, operating systems, or storage, but has control over the deployed applications and possibly application hosting environment configurations.

Infrastructure as a Service (IaaS): the capability provided to the consumer is to provision processing, storage, networks, and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, deployed applications, and possibly limited control of select networking components (e.g., host firewalls).
Deployment Models are as follows:
Private cloud: the cloud infrastructure is operated solely for an organization. It may be managed by the organization or a third party and may exist on-premises or off-premises.

Community cloud: the cloud infrastructure is shared by several organizations and supports a specific community that has shared concerns (e.g., mission, security requirements, policy, and compliance considerations). It may be managed by the organizations or a third party and may exist on-premises or off-premises.

Public cloud: the cloud infrastructure is made available to the general public or a large industry group and is owned by an organization selling cloud services.

Hybrid cloud: the cloud infrastructure is a composition of two or more clouds (private, community, or public) that remain unique entities but are bound together by standardized or proprietary technology that enables data and application portability (e.g., cloud bursting for load-balancing between clouds).
A cloud computing environment is service oriented with a focus on statelessness, low coupling, modularity, and semantic interoperability. At the heart of cloud computing is an infrastructure comprising a network of interconnected nodes.

Referring now to

Referring now to

Workloads layer 1144 provides examples of functionality for which the cloud computing environment may be utilized. Examples of workloads and functions which may be provided from this layer include: mapping and navigation 1146; software development and lifecycle management 1148; virtual classroom education delivery 1150; data analytics processing 1152; transaction processing 1154; and asset management 1156. An asset management program 110

The descriptions of the various embodiments of the present invention have been presented for purposes of illustration, but are not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application or technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

A method, computer system, and a computer program product for AR guidance are provided. The present invention may include detecting a plurality of objects in a video recording associated with completing a task. The present invention may include generating a plurality of three-dimensional (3D) object models based on scanning a plurality of real objects in a task space. The present invention may include matching the detected plurality of objects in the video recording with the generated plurality of 3D object models representing the plurality of real objects in the task space. The present invention may include generating, based on the video recording, an augmented reality (AR) guidance model for completing the task, wherein the generated AR guidance model replaces the detected plurality of objects in the video recording with the generated plurality of 3D object models representing the plurality of real objects in the task space.

1. A computer-implemented method, comprising:
detecting a plurality of objects in a video recording associated with completing a task;
generating a plurality of three-dimensional (3D) object models based on scanning a plurality of real objects in a task space;
matching the detected plurality of objects in the video recording with the generated plurality of 3D object models representing the plurality of real objects in the task space; and
generating, based on the video recording, an augmented reality (AR) guidance model for completing the task, wherein the generated AR guidance model replaces the detected plurality of objects in the video recording with the generated plurality of 3D object models representing the plurality of real objects in the task space.

2. The method of claim 1, further comprising:
generating at least one two-dimensional (2D) image data for each detected object of the detected plurality of objects;
generating at least one 3D model data for each 3D object model of the generated plurality of 3D object models; and
replacing the generated at least one 2D image data with the generated at least one 3D model data in the generated AR guidance model.

3. (canceled)

4. The method of claim 2, further comprising:
identifying at least one anchor for each detected object of the detected plurality of objects, wherein the identified at least one anchor is configured to enable a connection between a first detected object and a second detected object;
storing the identified at least one anchor for each detected object as an anchor image area associated with the generated at least one 2D image data for a corresponding detected object; and
matching the anchor image area associated with the generated at least one 2D image data with a 3D position on a corresponding 3D object model of the generated plurality of 3D object models to determine a corresponding 3D anchor position.

5. The method of claim 1, further comprising:
in response to determining a plurality of 3D model candidates for a first object of the detected plurality of objects, identifying a related 3D object model corresponding to a second object related to the first object; and
implementing at least one AR measurement of the related 3D object model to identify an exact 3D object model for the first object from the determined plurality of 3D model candidates.

6. (canceled)

7. (canceled)

8. A computer system for AR guidance, comprising:
one or more processors, one or more computer-readable memories, one or more computer-readable tangible storage media, and program instructions stored on at least one of the one or more computer-readable tangible storage media for execution by at least one of the one or more processors via at least one of the one or more memories, wherein the computer system is capable of performing a method comprising:
detecting a plurality of objects in a video recording associated with completing a task;
generating a plurality of three-dimensional (3D) object models based on scanning a plurality of real objects in a task space;
matching the detected plurality of objects in the video recording with the generated plurality of 3D object models representing the plurality of real objects in the task space; and
generating, based on the video recording, an augmented reality (AR) guidance model for completing the task, wherein the generated AR guidance model replaces the detected plurality of objects in the video recording with the generated plurality of 3D object models representing the plurality of real objects in the task space.

9. The computer system of claim 8, wherein the method further comprises:
generating at least one two-dimensional (2D) image data for each detected object of the detected plurality of objects;
generating at least one 3D model data for each 3D object model of the generated plurality of 3D object models; and
replacing the generated at least one 2D image data with the generated at least one 3D model data in the generated AR guidance model.

10. (canceled)

11. The computer system of claim 9, wherein the method further comprises:
identifying at least one anchor for each detected object of the detected plurality of objects, wherein the identified at least one anchor is configured to enable a connection between a first detected object and a second detected object;
storing the identified at least one anchor for each detected object as an anchor image area associated with the generated at least one 2D image data for a corresponding detected object; and
matching the anchor image area associated with the generated at least one 2D image data with a 3D position on a corresponding 3D object model of the generated plurality of 3D object models to determine a corresponding 3D anchor position.

12. The computer system of claim 8, wherein the method further comprises:
in response to determining a plurality of 3D model candidates for a first object of the detected plurality of objects, identifying a related 3D object model corresponding to a second object related to the first object; and
implementing at least one AR measurement of the related 3D object model to identify an exact 3D object model for the first object from the determined plurality of 3D model candidates.

13. (canceled)

14. (canceled)

15. A computer program product for AR guidance, comprising:
one or more computer-readable storage media and program instructions collectively stored on the one or more computer-readable storage media, the program instructions executable by a processor to cause the processor to perform a method comprising:
detecting a plurality of objects in a video recording associated with completing a task;
generating a plurality of three-dimensional (3D) object models based on scanning a plurality of real objects in a task space;
matching the detected plurality of objects in the video recording with the generated plurality of 3D object models representing the plurality of real objects in the task space; and
generating, based on the video recording, an augmented reality (AR) guidance model for completing the task, wherein the generated AR guidance model replaces the detected plurality of objects in the video recording with the generated plurality of 3D object models representing the plurality of real objects in the task space.

16. The computer program product of claim 15, wherein the method further comprises:
generating at least one two-dimensional (2D) image data for each detected object of the detected plurality of objects;
generating at least one 3D model data for each 3D object model of the generated plurality of 3D object models; and
replacing the generated at least one 2D image data with the generated at least one 3D model data in the generated AR guidance model.

17. (canceled)

18. The computer program product of claim 16, wherein the method further comprises:
identifying at least one anchor for each detected object of the detected plurality of objects, wherein the identified at least one anchor is configured to enable a connection between a first detected object and a second detected object;
storing the identified at least one anchor for each detected object as an anchor image area associated with the generated at least one 2D image data for a corresponding detected object; and
matching the anchor image area associated with the generated at least one 2D image data with a 3D position on a corresponding 3D object model of the generated plurality of 3D object models to determine a corresponding 3D anchor position.

19. The computer program product of claim 15, wherein the method further comprises:
in response to determining a plurality of 3D model candidates for a first object of the detected plurality of objects, identifying a related 3D object model corresponding to a second object related to the first object; and
implementing at least one AR measurement of the related 3D object model to identify an exact 3D object model for the first object from the determined plurality of 3D model candidates.

20. (canceled)







