ENDOSCOPIC IMAGE PROCESSING APPARATUS, ENDOSCOPE SYSTEM, AND METHOD OF OPERATING ENDOSCOPIC IMAGE PROCESSING APPARATUS
CROSS REFERENCE TO RELATED APPLICATION

This application is a continuation application of PCT/JP2019/046545 filed on Nov. 28, 2019, the entire contents of which are incorporated herein by this reference.

BACKGROUND OF THE INVENTION

1. Field of the Invention

The present invention relates to an endoscopic image processing apparatus, an endoscope system, and a method of operating an endoscopic image processing apparatus.

2. Description of the Related Art

Endoscopic inspection has been used in the industrial field as a method of nondestructively inspecting an object such as a turbine or an engine. In such industrial endoscopic inspection, adoption of a method of visual simultaneous localization and mapping (SLAM) has been examined in recent years as a method of obtaining information useful for the inspection of the object, such as the size of a defective portion present inside the object, while eliminating from the endoscope, as far as possible, physical components (a motion sensor, etc.) used for distance measurement or attitude detection.

For example, Japanese Patent Application Laid-Open Publication No. 2017-129904 discloses a technique for estimating the size of an object present in a real space by applying the method of visual SLAM (hereinafter also referred to as VSLAM) to an image obtained by picking up an image of the object. In a case where the method of VSLAM is used in industrial endoscopic inspection, for example, information corresponding to a relative positional relationship between an object present in a real space and an image pickup unit provided in an endoscope inserted into the object is acquired, and a three-dimensional shape of the object is sequentially reconstructed based on the acquired information.

SUMMARY OF THE INVENTION

An endoscopic image processing apparatus according to an aspect of the present invention is an endoscopic image processing apparatus configured to create a three-dimensional shape model of an object by performing processing on an endoscopic image group obtained by causing an image pickup device provided at a distal end portion of an elongated insertion portion to pick up images of an inside of the object. The endoscopic image processing apparatus includes a processor. The processor estimates a self-position of the image pickup device based on the endoscopic image group, calculates a first displacement amount corresponding to a displacement amount of the image pickup device based on an estimation result of the self-position of the image pickup device obtained by the estimation, calculates a second displacement amount corresponding to a displacement amount in a direction parallel to a longitudinal axis direction of the insertion portion, based on a detection signal outputted from an insertion/removal state detection device that detects an insertion/removal state of the insertion portion inserted into the object, and generates scale information in which the first displacement amount and the second displacement amount are associated with each other, as information used for processing relating to creation of the three-dimensional shape model of the object.
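Purely as an illustration of the flow just summarized, the apparatus-side processing can be pictured as the following minimal sketch in Python. Every name in the sketch is a hypothetical placeholder; the publication does not prescribe any particular implementation.

    from dataclasses import dataclass

    @dataclass
    class ScaleInformation:
        # first displacement amount: camera displacement estimated by VSLAM (scale unknown)
        first_displacement: float
        # second displacement amount: axial displacement measured by the detection device
        second_displacement: float

    def generate_scale_information(delta_z: float, delta_l: float) -> ScaleInformation:
        """Associate the VSLAM-estimated displacement with the physically
        measured one; the resulting pair is the 'scale information'."""
        return ScaleInformation(first_displacement=delta_z, second_displacement=delta_l)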
An endoscope system according to another aspect of the present invention includes: an endoscope configured to cause an image pickup device provided at a distal end portion of an elongated insertion portion to pick up images of an inside of an object; an insertion/removal state detection device configured to detect an insertion/removal state of the insertion portion inserted into the object and to output a detection signal; and a processor. The processor estimates a self-position of the image pickup device based on an endoscopic image group obtained by picking up images of the inside of the object by the endoscope, calculates a first displacement amount corresponding to a displacement amount of the image pickup device based on an estimation result of the self-position of the image pickup device obtained by the estimation, calculates a second displacement amount corresponding to a displacement amount in a direction parallel to a longitudinal axis direction of the insertion portion, based on the detection signal outputted from the insertion/removal state detection device, and generates scale information in which the first displacement amount and the second displacement amount are associated with each other, as information used for processing relating to creation of a three-dimensional shape model of the object.

A method of operating an endoscopic image processing apparatus according to yet another aspect of the present invention is a method of operating an endoscopic image processing apparatus configured to create a three-dimensional shape model of an object by performing processing on an endoscopic image group obtained by causing an image pickup device provided at a distal end portion of an elongated insertion portion to pick up images of an inside of the object. The method includes: estimating a self-position of the image pickup device based on the endoscopic image group; calculating a first displacement amount corresponding to a displacement amount of the image pickup device based on an estimation result of the self-position of the image pickup device obtained by the estimation; calculating a second displacement amount corresponding to a displacement amount in a direction parallel to a longitudinal axis direction of the insertion portion, based on a detection signal outputted from an insertion/removal state detection device that detects an insertion/removal state of the insertion portion inserted into the object; and generating scale information in which the first displacement amount and the second displacement amount are associated with each other, as information used for processing relating to creation of the three-dimensional shape model of the object.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT

An embodiment of the present invention is described below with reference to the drawings.

For example, as illustrated in the drawings, the endoscope system 1 includes an endoscope 2 and a main body apparatus 3. The endoscope 2 includes an insertion portion 5, an operation portion 6, and a universal cord 7. The insertion portion 5 is formed in an elongated shape insertable into an object such as a turbine or an engine. The operation portion 6 is provided on a proximal end side of the insertion portion 5. The universal cord 7 extends from the operation portion 6. Further, the endoscope 2 is removably connected to the main body apparatus 3 by the universal cord 7.

The insertion portion 5 includes, in order from a distal end side, a distal end portion 11, a bendably formed bending portion 12, and a long flexible tube portion 13 having flexibility. The operation portion 6 includes a bending operator 6.

As illustrated in the drawings, the distal end portion 11 is provided with light source units 21 and an image pickup unit 22. Each of the light source units 21 includes a light emitting element 21 and an illumination optical system 21. The image pickup unit 22 is configured as a camera including an observation optical system 22 and an image pickup device 22.

The bending portion 12 includes, for example, a plurality of bending pieces.
The bending portion 12 is connected to a distal end portion of each of a plurality of bending wires BW inserted through the flexible tube portion 13, the operation portion 6, and the universal cord 7. Further, the bending portion 12 can direct the distal end portion 11 in a direction intersecting a longitudinal axis direction of the insertion portion 5 by bending based on a traction state of each of the plurality of bending wires BW. In other words, the endoscope 2 picks up images of an inside of the object by the image pickup unit 22 provided at the distal end portion 11 of the elongated insertion portion 5.

As illustrated in the drawings, the main body apparatus 3 includes a light source driving unit 31, an image pickup device driving unit 32, a bending driving unit 33, an image generation unit 34, a display unit 35, a storage unit 36, an input I/F unit 37, and a controller 38.

The light source driving unit 31 includes, for example, a light source driving circuit. Further, the light source driving unit 31 generates and outputs the light emitting element driving signal to drive the light emitting elements 21.

The image pickup device driving unit 32 includes, for example, an image pickup device driving circuit. Further, the image pickup device driving unit 32 generates and outputs the image pickup device driving signal to drive the image pickup device 22.

The bending driving unit 33 includes, for example, a motor. The bending driving unit 33 is connected to a proximal end portion of each of the plurality of bending wires BW. Further, the bending driving unit 33 can individually change traction quantities of the plurality of bending wires BW under the control of the controller 38. In other words, the bending driving unit 33 can change the traction state of each of the plurality of bending wires BW under the control of the controller 38.

The image generation unit 34 includes an integrated circuit such as an FPGA (field programmable gate array). Further, the image generation unit 34 generates endoscopic images by performing predetermined signal processing on the image pickup signal outputted from the image pickup device 22, and sequentially outputs the generated endoscopic images.

The display unit 35 includes, for example, a liquid crystal panel. The display unit 35 displays a display image outputted from the controller 38 on a display screen. Further, the display unit 35 includes a touch panel 35.

The storage unit 36 includes a storage medium such as a memory. The storage unit 36 stores various programs corresponding to operation of the controller 38, for example, programs used for control of the units of the endoscope system 1 and programs for performing the processing relating to VSLAM described below. Further, the storage unit 36 can store the endoscopic images and the like used for the processing relating to VSLAM by the controller 38.

The input I/F unit 37 includes switches and the like capable of issuing instructions to the controller 38 in accordance with input operation of a user.

The controller 38 includes one or more processors 38. The controller 38 generates a synchronization signal to synchronize operation of the image pickup unit 22 and operation of an insertion/removal state detection device 41, and outputs the generated synchronization signal to the image pickup device driving unit 32 and the insertion/removal state detection device 41. The controller 38 outputs the above-described synchronization signal and performs the processing relating to VSLAM based on an endoscopic image group including the plurality of endoscopic images sequentially outputted from the image generation unit 34 and on a detection signal outputted from the insertion/removal state detection device 41.
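As one way to picture the role of this synchronization signal, the sketch below pairs each camera frame with the encoder sample taken at (nearly) the same instant. The function name, the timestamp representation, and the tolerance are assumptions introduced for illustration only.

    from typing import List, Tuple

    def pair_synchronized_samples(frame_times: List[float],
                                  encoder_samples: List[Tuple[float, int]],
                                  tolerance: float = 1e-3) -> List[Tuple[float, int]]:
        """For each frame timestamp, pick the encoder count sampled closest in
        time; the shared synchronization signal keeps the two cycles aligned."""
        paired = []
        for t in frame_times:
            ts, count = min(encoder_samples, key=lambda s: abs(s[0] - t))
            if abs(ts - t) <= tolerance:  # accept only well-synchronized pairs
                paired.append((t, count))
        return paired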
Note that, in the present embodiment, description is given by assuming that the processing relating to VSLAM at least includes, for example, processing to extract a plurality of feature points (corresponding points) matching in the endoscopic image group outputted from the image generation unit 34, processing to estimate a self-position of the image pickup unit 22 corresponding to the plurality of feature points and to acquire a result of the estimation, and processing to create the three-dimensional shape model of the object corresponding to the plurality of feature points and the result of the estimation as an environment map. Specific examples of the processing performed by the controller 38 are described below.

In the present embodiment, the processors 38 of the controller 38 execute the programs stored in the storage unit 36. In other words, the main body apparatus 3 includes a function as the endoscopic image processing apparatus, and performs processing on the endoscopic image group that is obtained by picking up images of the inside of the object by the image pickup unit 22 provided at the distal end portion 11 of the insertion portion 5, to create the three-dimensional shape model of the object.

In the present embodiment, in inspection of the object by using the endoscope 2, the insertion/removal state detection device 41, which can detect an insertion/removal state of the insertion portion 5, is used together. The insertion/removal state detection device 41 can transmit and receive signals and the like to/from the main body apparatus 3. The insertion/removal state detection device 41 detects the insertion/removal state of the insertion portion 5 inserted into the object, generates the detection signal representing the detected insertion/removal state of the insertion portion 5, and outputs the detection signal to the main body apparatus 3.

The insertion/removal state detection device 41 includes, for example, a through hole (not illustrated) formed in a shape that allows the insertion portion 5 to be displaced in the longitudinal axis direction while the insertion portion 5 is inserted into the through hole. The insertion/removal state detection device 41 further includes a roller 41 and an encoder 41. The roller 41 rotates in accordance with displacement of the insertion portion 5 inserted into the through hole. The encoder 41 generates the detection signal corresponding to a rotation state of the roller 41. More specifically, at the timing set by the synchronization signal outputted from the controller 38, the encoder 41 outputs the detection signal to the main body apparatus 3.

The insertion/removal state detection device 41 including the above-described configuration can output, to the main body apparatus 3, the detection signal having waveforms that differ between a case where the insertion portion 5 inserted into the object is advanced and a case where the insertion portion 5 inserted into the object is retracted. Further, the insertion/removal state detection device 41 including the above-described configuration can output, to the main body apparatus 3, the detection signal having waveforms that differ depending on the displacement amount when the insertion portion 5 inserted into the object is displaced. In other words, the insertion/removal state detection device 41 including the above-described configuration can detect, as the insertion/removal state of the insertion portion 5 inserted into the object, the displacement amount and the displacement direction of the insertion portion 5 in the longitudinal axis direction.
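A roller-driven encoder of this kind is commonly read as a two-phase (quadrature) signal, whose phase relationship encodes direction and whose count encodes amount. The sketch below decodes such a signal into a signed axial displacement; the two-phase assumption and the millimetres-per-count constant are illustrative, since the publication does not specify the encoder type.

    # Standard quadrature state sequence: 00 -> 01 -> 11 -> 10 -> 00 (forward).
    # States are 2-bit integers built from phases A and B.
    QUAD_STEP = {
        (0, 1): +1, (1, 3): +1, (3, 2): +1, (2, 0): +1,  # advance (insertion)
        (1, 0): -1, (3, 1): -1, (2, 3): -1, (0, 2): -1,  # retract (removal)
    }

    def decode_displacement(states, mm_per_count: float = 0.1) -> float:
        """Accumulate signed counts from successive encoder states and convert
        them to an axial displacement; the sign gives the displacement direction."""
        counts = 0
        for prev, cur in zip(states, states[1:]):
            counts += QUAD_STEP.get((prev, cur), 0)  # 0: no change or invalid jump
        return counts * mm_per_count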
Subsequently, action of the present embodiment is described. Note that, in the following, description is given by assuming that inspection is performed while the insertion portion 5 is inserted into a tube-shaped object such as a conduit.

Before an object inspection using the endoscope 2, the user fixes the insertion/removal state detection device 41 at a predetermined position near an insertion port for insertion of the insertion portion 5 into the object. According to the present embodiment, it is sufficient to dispose the insertion/removal state detection device 41 at a position where its relative position to the object into which the insertion portion 5 is inserted does not change. Therefore, according to the present embodiment, for example, the insertion/removal state detection device 41 may be fixed at a position separated from the above-described insertion port.

After the user turns on the units of the endoscope system 1, the user brings the distal end portion 11 to a desired site inside the object by performing insertion operation to insert the insertion portion 5 into the object. In response to such user operation, the inside of the object is irradiated with the illumination light emitted from the light emitting elements 21. According to the above-described insertion operation by the user, it is possible to make a visual field direction of the image pickup unit 22, which corresponds to a front side of the distal end portion 11, coincident with an insertion direction of the insertion portion 5.

When detecting a state where the signals and the like are transmittable and receivable between the main body apparatus 3 and the insertion/removal state detection device 41, the controller 38 generates the synchronization signal to synchronize the operation of the image pickup unit 22 and the operation of the insertion/removal state detection device 41, and outputs the generated synchronization signal to the image pickup device driving unit 32 and the insertion/removal state detection device 41. More specifically, the controller 38 generates a synchronization signal that aligns a cycle in which the image pickup device 22 picks up images with a cycle in which the insertion/removal state detection device 41 outputs the detection signal.

The controller 38 outputs the above-described synchronization signal during a period when the distal end portion 11 is disposed inside the object, and performs, for example, the following processing.

The controller 38 performs the processing to extract a plurality of feature points CP matching in the endoscopic image group outputted from the image generation unit 34 (step S1). More specifically, the controller 38 extracts the plurality of feature points CP matching in the endoscopic image group outputted from the image generation unit 34 by, for example, applying an algorithm such as ORB (oriented FAST and rotated BRIEF) to the endoscopic image group.

The controller 38 performs the processing to estimate the self-position of the image pickup unit 22 based on the plurality of feature points CP extracted by the processing in step S1 (step S2). More specifically, the controller 38 estimates the self-position of the image pickup unit 22 corresponding to the plurality of feature points CP by, for example, performing processing based on an E matrix (essential matrix) acquired by using a method such as the five-point algorithm. In other words, the controller 38 includes a function as an estimation unit, and estimates the self-position of the image pickup unit 22 based on the endoscopic image group obtained by picking up images of the inside of the object by the endoscope 2.
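The two steps just described map directly onto widely available building blocks. The sketch below uses OpenCV's ORB detector and five-point essential-matrix solver to estimate the relative pose between two endoscopic images; the use of OpenCV and the assumption of calibrated camera intrinsics K are illustrative choices, not something the publication specifies.

    import cv2
    import numpy as np

    def estimate_relative_pose(img1, img2, K):
        # Step S1: extract and match ORB feature points between the two images.
        orb = cv2.ORB_create(nfeatures=1000)
        kp1, des1 = orb.detectAndCompute(img1, None)
        kp2, des2 = orb.detectAndCompute(img2, None)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)  # Hamming suits ORB
        matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
        pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
        pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

        # Step S2: the five-point algorithm (inside findEssentialMat) yields the
        # E matrix; recoverPose decomposes it into a rotation R and a translation t
        # whose direction is known but whose length (scale) is not.
        E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                       prob=0.999, threshold=1.0)
        _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
        return R, t, pts1, pts2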
The controller 38 performs processing to acquire one or more processing target images IG from the endoscopic image group used for extraction of the plurality of feature points CP, based on the plurality of feature points CP extracted by the processing in step S1. More specifically, the controller 38 acquires the one or more processing target images IG based on, for example, the number of feature points CP extracted by the processing in step S1.

The controller 38 performs processing to calculate a displacement amount ΔZ of the image pickup unit 22 based on the estimation result of the self-position of the image pickup unit 22 obtained by the processing in step S2 (step S4). More specifically, the controller 38 performs, as the processing to calculate the displacement amount ΔZ, for example, processing to calculate a distance between the self-position of the image pickup unit 22 obtained as the estimation result at a time point T1 and the self-position of the image pickup unit 22 obtained as the estimation result at a time point T2 after the time point T1. In other words, the controller 38 includes a function as a first displacement amount calculation unit, and calculates a first displacement amount corresponding to the displacement amount of the image pickup unit 22 based on the estimation result of the self-position of the image pickup unit 22 obtained by the processing in step S2.

The controller 38 performs processing to calculate a displacement amount ΔL of the insertion portion 5 based on the detection signal outputted from the encoder 41. More specifically, the controller 38 performs, as the processing to calculate the displacement amount ΔL of the insertion portion 5, for example, processing to calculate a difference value between a displacement amount ΔL1 of the insertion portion 5 at the time point T1 and a displacement amount ΔL2 of the insertion portion 5 at the time point T2, based on the detection signal outputted from the encoder 41. Note that, in the present embodiment, the displacement amount of the insertion portion 5 in the longitudinal axis direction is calculated as the displacement amount ΔL. In other words, the controller 38 includes a function as a second displacement amount calculation unit, and calculates a second displacement amount corresponding to the displacement amount of the insertion portion 5 in the longitudinal axis direction based on the detection signal outputted from the insertion/removal state detection device 41.

The controller 38 performs processing to generate scale information SJ by associating the displacement amount ΔZ calculated by the processing in step S4 with the displacement amount ΔL calculated from the detection signal. In other words, the scale information SJ is generated as information in which the length of the displacement amount ΔL, corresponding to a physical amount measured based on the insertion/removal state of the insertion portion 5, is added to the displacement amount ΔZ having unknown scale. The controller 38 includes a function as a scale information generation unit, and generates the scale information in which the displacement amount ΔZ and the displacement amount ΔL are associated with each other, as information used in the processing to create the three-dimensional shape model of the object.
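Under the condition, established by the insertion operation described earlier, that the visual field direction coincides with the displacement direction, one plausible reading of "associating" ΔZ with ΔL is a scale factor that converts the unitless VSLAM coordinates into physical units. The sketch below makes that reading concrete; the ratio form of SJ is an interpretation, not a formula given in the publication.

    import numpy as np

    def displacement_dz(pos_t1: np.ndarray, pos_t2: np.ndarray) -> float:
        """ΔZ: distance between the self-positions estimated at T1 and T2
        (expressed in the unknown-scale VSLAM coordinate system)."""
        return float(np.linalg.norm(pos_t2 - pos_t1))

    def displacement_dl(l_t1: float, l_t2: float) -> float:
        """ΔL: difference of the encoder-measured insertion lengths, e.g. in mm."""
        return l_t2 - l_t1

    def scale_factor_sj(dz: float, dl: float) -> float:
        """SJ read as a ratio: millimetres per unit of VSLAM displacement."""
        return abs(dl) / dz if dz > 0 else float("nan")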
The controller 38 performs processing to specify a three-dimensional coordinate position of the image pickup unit 22 in a world coordinate system of a three-dimensional space in which the three-dimensional shape model of the object is created, based on the estimation result of the self-position of the image pickup unit 22 obtained by the processing in step S2 and on the scale information SJ (step S7).

The controller 38 performs processing to acquire three-dimensional point group coordinates including the three-dimensional coordinate positions of the world coordinate system corresponding to the plurality of feature points CP in the one or more processing target images IG, based on the three-dimensional coordinate position of the image pickup unit 22 obtained by the processing in step S7. The controller 38 acquires a plurality of three-dimensional point group coordinates by, for example, repeating the series of processing described above.

As described above, according to the present embodiment, the three-dimensional coordinate position of the image pickup unit 22 is specified based on the scale information SJ representing a correspondence relationship between the displacement amount ΔZ and the displacement amount ΔL, and the three-dimensional point group coordinates are acquired based on the specified three-dimensional coordinate position of the image pickup unit 22. Therefore, it is possible to create the three-dimensional shape model of the object with high accuracy by using the acquired three-dimensional point group coordinates. Thus, according to the present embodiment, it is possible to improve inspection efficiency in inspection of an object having unknown scale in a real space.
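One way to realize step S7 and the subsequent point-group acquisition is sketched below: the unit-length translation recovered from the essential matrix is stretched by the scale factor SJ, and the matched feature points are triangulated into metric three-dimensional coordinates. The use of OpenCV triangulation and this particular way of applying the scale build on the earlier sketches and are illustrative assumptions.

    import cv2
    import numpy as np

    def world_position_and_points(R, t, pts1, pts2, K, sj):
        """R, t, pts1, pts2 come from estimate_relative_pose(); sj from
        scale_factor_sj(). Returns the scaled camera translation and point group."""
        t_metric = t * sj  # restore the physical length of the camera translation

        # Projection matrices: first viewpoint placed at the world origin.
        P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
        P2 = K @ np.hstack([R, t_metric])

        # Triangulate matched feature points into homogeneous 4-vectors, then
        # normalize to Euclidean 3D coordinates (now in physical units).
        pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
        pts3d = (pts4d[:3] / pts4d[3]).T
        return t_metric.ravel(), pts3d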
According to the present embodiment, it is sufficient to perform the processing in step S6 in a state where the visual field direction of the image pickup unit 22 and the displacement direction of the insertion portion 5 are coincident with each other. According to the present embodiment, for example, in a case where the controller 38 performs control to direct the distal end portion 11 in the direction intersecting the longitudinal axis direction of the insertion portion 5 by bending the bending portion 12, namely, in a case where the displacement amount ΔL is calculated in a state where the visual field direction of the image pickup unit 22 and the displacement direction (insertion direction or removal direction) of the insertion portion 5 are not coincident with each other, the processing relating to the scale information SJ corresponding to the processing in steps S4 to S7 is not performed.

Note that the processing relating to the scale information SJ includes the processing to generate the scale information SJ and the processing using the scale information SJ. Therefore, in a case where the processing relating to the scale information SJ is not performed, the controller 38 performs the processing to acquire the three-dimensional point group coordinates including the three-dimensional coordinate positions of the world coordinate system corresponding to the plurality of feature points CP in the one or more processing target images IG, based only on the estimation result of the self-position of the image pickup unit 22 obtained by the processing in step S2.

The present embodiment is applicable not only to the endoscope system 1 including the endoscope 2 provided with the soft (flexible) insertion portion 5 but also to other endoscope systems each including an endoscope provided with a rigid (inflexible) insertion portion in substantially the same manner.

In the endoscope 2 provided with the soft insertion portion 5, for example, deflection may occur in the insertion portion 5 inserted into the object. Therefore, in a case where the present embodiment is applied to the endoscope system 1 including the endoscope 2, the processing relating to the scale information SJ is desirably performed while no deflection occurs in the insertion portion 5.

In other words, in the case where the present embodiment is applied to the endoscope system 1 including the endoscope 2, for example, the controller 38 detects the displacement direction of the insertion portion 5 based on the detection signal outputted from the insertion/removal state detection device 41. In a case where the detected displacement direction of the insertion portion 5 is the removal direction, the controller 38 performs the processing relating to the scale information SJ, whereas in a case where the detected displacement direction of the insertion portion 5 is the insertion direction, the controller 38 does not perform the processing relating to the scale information SJ. During the insertion operation, part of the displacement amount ΔL can be absorbed by deflection of the insertion portion 5 without reaching the distal end portion 11, whereas during the removal operation the insertion portion 5 is pulled taut; such processing therefore makes it possible to secure the accuracy of the scale information SJ.

According to the present embodiment, the controller 38 may perform processing to further improve the accuracy of the scale information SJ generated in the case where the displacement direction of the insertion portion 5 is the removal direction. Such processing according to a modification of the present embodiment is described below. Note that, in the following, specific descriptions of portions to which the above-described operation and the like are applicable are appropriately omitted.

After the user turns on the units of the endoscope system 1, the user brings the distal end portion 11 to the deepest site of the object by performing the insertion operation to insert the insertion portion 5 into the object. Further, after the user brings the distal end portion 11 to the deepest site of the object, the user performs the removal operation to remove the insertion portion 5 from the inside of the object.

For example, in a case where the displacement direction of the insertion portion 5 detected based on the detection signal outputted from the insertion/removal state detection device 41 is the removal direction, the controller 38 calculates a displacement speed VZ of the image pickup unit 22 by dividing the displacement amount ΔZ obtained by the processing in step S4 by the time required for the displacement.

In a state where the distal end portion 11 has reached the deepest site of the object by the insertion operation of the user, deflection may remain in the insertion portion 5. In a case where the removal operation is started from such a state, the deflection of the insertion portion 5 is eliminated before the distal end portion 11 starts to move, so that the displacement amount ΔL and the displacement amount of the image pickup unit 22 can be separated from each other. The controller 38 does not perform the processing in step S6 during such a period.

The processing of the controller 38 according to the present modification is applicable not only to the case described above but also to other cases where a displacement amount ΔZr of the image pickup unit 22 in the removal direction and the displacement amount ΔL are separated from each other. More specifically, the controller 38 does not perform the processing in step S6 during a period when the displacement amount ΔZr and the displacement amount ΔL are separated from each other.

As described above, by the processing of the controller 38 according to the present modification, the scale information SJ is generated only during a period when the displacement amount ΔZr and the displacement amount ΔL are hardly separated within the period when the insertion portion 5 is removed from the inside of the object. Therefore, the processing of the controller 38 according to the present modification makes it possible to further improve the accuracy of the scale information SJ generated in the case where the displacement direction of the insertion portion 5 is the removal direction.
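A sketch of this gating logic follows. The running scale estimate used to make the VSLAM displacement comparable with the encoder displacement, and the tolerance value, are assumptions introduced for illustration; the publication only states that SJ is generated while the two displacement amounts are hardly separated.

    def should_generate_scale_info(dl: float, dz: float, sj_running: float,
                                   rel_tolerance: float = 0.2) -> bool:
        """Generate SJ only during removal (dl < 0) and only while the encoder
        displacement and the provisionally scaled VSLAM displacement track each
        other, i.e. no deflection is being taken up by the insertion portion."""
        if dl >= 0 or dz <= 0:
            return False                  # not a removal displacement
        dz_scaled = dz * sj_running       # provisional metric estimate of ΔZr
        return abs(dz_scaled - abs(dl)) <= rel_tolerance * abs(dl)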
The present invention is not limited to the above-described embodiment, and various modifications and applications can of course be made without departing from the gist of the present invention.

ABSTRACT

An endoscopic image processing apparatus is configured to create a three-dimensional shape model of an object by performing processing on an endoscopic image group of an inside of the object, and includes a processor. The processor estimates a self-position of an image pickup device based on the endoscopic image group, calculates a first displacement amount corresponding to a displacement amount of the image pickup device based on an estimation result of the self-position of the image pickup device obtained by the estimation, calculates a second displacement amount corresponding to a displacement amount in a direction parallel to a longitudinal axis direction of an insertion portion, based on a detection signal outputted from an insertion/removal state detection device that detects an insertion/removal state of the insertion portion inserted into the object, and generates scale information in which the first displacement amount and the second displacement amount are associated with each other.

What is claimed is:

1. An endoscopic image processing apparatus configured to create a three-dimensional shape model of an object by performing processing on an endoscopic image group obtained by causing an image pickup device provided at a distal end portion of an elongated insertion portion to pick up images of an inside of the object,
the endoscopic image processing apparatus comprising a processor, the processor estimating a self-position of the image pickup device based on the endoscopic image group, calculating a first displacement amount corresponding to a displacement amount of the image pickup device based on an estimation result of the self-position of the image pickup device obtained by the estimation, calculating a second displacement amount corresponding to a displacement amount in a direction parallel to a longitudinal axis direction of the insertion portion, based on a detection signal outputted from an insertion/removal state detection device that detects an insertion/removal state of the insertion portion inserted into the object, and generating scale information in which the first displacement amount and the second displacement amount are associated with each other, as information used for processing relating to creation of the three-dimensional shape model of the object.

2. The endoscopic image processing apparatus according to claim 1, wherein the processor detects a displacement direction of the insertion portion based on the detection signal, and in a case where the detected displacement direction of the insertion portion is a removal direction, the processor performs processing to generate the scale information, and in a case where the detected displacement direction of the insertion portion is an insertion direction, the processor does not perform the processing to generate the scale information.

3. The endoscopic image processing apparatus according to

4. The endoscopic image processing apparatus according to

5. The endoscopic image processing apparatus according to

6. The endoscopic image processing apparatus according to

7. The endoscopic image processing apparatus according to

8. An endoscope system, comprising:
an endoscope configured to cause an image pickup device provided at a distal end portion of an elongated insertion portion to pick up images of an inside of an object;

an insertion/removal state detection device configured to detect an insertion/removal state of the insertion portion inserted into the object and to output a detection signal; and

a processor,

the processor estimating a self-position of the image pickup device based on an endoscopic image group obtained by picking up images of the inside of the object by the endoscope, calculating a first displacement amount corresponding to a displacement amount of the image pickup device based on an estimation result of the self-position of the image pickup device obtained by the estimation, calculating a second displacement amount corresponding to a displacement amount in a direction parallel to a longitudinal axis direction of the insertion portion, based on the detection signal outputted from the insertion/removal state detection device, and generating scale information in which the first displacement amount and the second displacement amount are associated with each other, as information used for processing relating to creation of a three-dimensional shape model of the object.

9. A method of operating an endoscopic image processing apparatus configured to create a three-dimensional shape model of an object by performing processing on an endoscopic image group obtained by causing an image pickup device provided at a distal end portion of an elongated insertion portion to pick up images of an inside of the object,
the method comprising:

estimating a self-position of the image pickup device based on the endoscopic image group;

calculating a first displacement amount corresponding to a displacement amount of the image pickup device based on an estimation result of the self-position of the image pickup device obtained by the estimation;

calculating a second displacement amount corresponding to a displacement amount in a direction parallel to a longitudinal axis direction of the insertion portion, based on a detection signal outputted from an insertion/removal state detection device that detects an insertion/removal state of the insertion portion inserted into the object; and

generating scale information in which the first displacement amount and the second displacement amount are associated with each other, as information used for processing relating to creation of the three-dimensional shape model of the object.







