BEIJING, Oct. 25, 2023 /PRNewswire/ -- WiMi Hologram Cloud Inc. (NASDAQ: WIMI) ("WiMi" or the "Company"), a leading global Hologram Augmented Reality ("AR") Technology provider, today announced that it has developed a multi-view 3D reconstruction algorithm based on semantic segmentation, which combines semantic segmentation with 3D reconstruction to achieve more accurate reconstruction results. Traditional multi-view 3D reconstruction algorithms usually consider only the geometric information of the images, reconstructing the 3D scene by extracting feature points or matching features across multiple views while neglecting semantic information, so the reconstruction results lack an understanding and interpretation of the scene's semantics. With the rapid development of deep learning, semantic segmentation has gradually become a popular research direction in the field of computer vision. Semantic segmentation technology assigns each pixel in an image to a semantic category, enabling accurate segmentation and semantic understanding of the objects in the image. The multi-view 3D reconstruction algorithm based on semantic segmentation researched by WiMi combines semantic segmentation technology with 3D reconstruction methods to achieve accurate reconstruction and semantic understanding of 3D scenes.
Semantic information provides additional contextual and semantic constraints; by applying semantic segmentation to multi-view 3D reconstruction, more accurate semantic information can be obtained during the reconstruction process, improving the accuracy and comprehensibility of the results. In practical applications, multi-view 3D reconstruction based on semantic segmentation can be applied to 3D scene reconstruction to provide users with a more realistic experience. For example, when reconstructing a building, semantic segmentation can assign different areas to categories such as walls, windows, and doors, so that the reconstruction results more accurately reflect the structure and composition of the building. In addition, the algorithm can be applied to other fields, such as autonomous driving, virtual reality, and augmented reality, to achieve a more accurate understanding and simulation of the scene. The development of multi-view 3D reconstruction algorithms based on semantic segmentation therefore has important research and application value.
In applying WiMi's multi-view 3D reconstruction algorithm based on semantic segmentation, the input multi-view images are first pre-processed and their features extracted; pre-processing mainly includes operations such as image denoising and image enhancement, after which feature points are extracted from each image to obtain a feature map. The feature map is then semantically segmented using a semantic segmentation network to obtain the semantic label of each pixel. Next, according to the semantic labels, matching pixels are found across the different views and correspondences between the feature points are established. Based on the pixel-matching results, a 3D point cloud is reconstructed using a triangulation algorithm. Finally, the reconstructed point cloud is optimized, including operations such as removing outlier points and filling in missing regions, to obtain the final 3D reconstruction results. The algorithm can achieve accurate 3D reconstruction in multi-view scenes, and semantic segmentation provides richer scene information that improves the accuracy of the reconstruction.
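For illustration only, the following is a minimal sketch of such a pipeline using OpenCV and NumPy. The segmentation network is stubbed out with a placeholder (it is not WiMi's model), the projection matrices are assumed to come from prior camera calibration, and none of the function names are taken from WiMi's implementation.

```python
import cv2
import numpy as np

def preprocess(img_bgr):
    """Denoise and enhance a view before feature extraction."""
    denoised = cv2.fastNlMeansDenoisingColored(img_bgr, None, 10, 10, 7, 21)
    gray = cv2.cvtColor(denoised, cv2.COLOR_BGR2GRAY)
    return cv2.equalizeHist(gray)

def segment_semantics(img_bgr):
    """Placeholder for a semantic segmentation network returning a per-pixel label map."""
    # A trained segmentation CNN would go here; this stub returns a single dummy class.
    return np.zeros(img_bgr.shape[:2], dtype=np.int32)

def semantic_guided_matches(gray1, gray2, labels1, labels2):
    """Extract ORB features and keep only matches whose pixels share a semantic label."""
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(gray1, None)
    k2, d2 = orb.detectAndCompute(gray2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    pts1, pts2 = [], []
    for m in matches:
        x1, y1 = map(int, k1[m.queryIdx].pt)
        x2, y2 = map(int, k2[m.trainIdx].pt)
        if labels1[y1, x1] == labels2[y2, x2]:  # the semantic constraint on correspondences
            pts1.append(k1[m.queryIdx].pt)
            pts2.append(k2[m.trainIdx].pt)
    return np.float32(pts1), np.float32(pts2)

def triangulate(P1, P2, pts1, pts2):
    """Triangulate matched pixels into a Euclidean 3D point cloud."""
    X = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)  # P1, P2: 3x4 projection matrices
    return (X[:3] / X[3]).T
```

Outlier removal and hole filling on the resulting point cloud would follow these steps, as described above.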
Compared with traditional 3D reconstruction algorithms, the multi-view 3D reconstruction algorithm based on semantic segmentation can perform image processing and computation more efficiently. By utilizing semantic information, the algorithm can avoid unnecessary computation and processing, increasing its running speed. In addition, semantic segmentation helps the algorithm make better use of parallel computing, further improving its efficiency.
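As a rough illustration of this point (again, not WiMi's implementation), the toy sketch below buckets keypoints by semantic class so that descriptor matching only compares points within the same class, shrinking the candidate search space, and each class bucket becomes an independent task that can run in parallel.

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def bucket_by_class(points, labels):
    """Group keypoint indices by the semantic label at each keypoint's pixel location."""
    buckets = {}
    for i, (x, y) in enumerate(points):
        buckets.setdefault(int(labels[int(y), int(x)]), []).append(i)
    return buckets

def match_one_class(idx1, idx2, desc1, desc2):
    """Toy nearest-neighbour matching restricted to a single semantic class."""
    pairs = []
    for i in idx1:
        # Hamming-style distance against only the candidates carrying the same label.
        d = np.count_nonzero(desc1[i] != desc2[idx2], axis=1)
        pairs.append((i, idx2[int(np.argmin(d))]))
    return pairs

def semantic_parallel_match(pts1, desc1, lab1, pts2, desc2, lab2):
    """Match per semantic class; each class is an independent, parallelizable task."""
    b1, b2 = bucket_by_class(pts1, lab1), bucket_by_class(pts2, lab2)
    shared = [c for c in b1 if c in b2]
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(match_one_class, b1[c], b2[c], desc1, desc2)
                   for c in shared]
        return [p for f in futures for p in f.result()]
```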
By using semantic segmentation, the algorithm can also better understand object boundaries and structural information in the image, improving the accuracy of the 3D reconstruction. By assigning each pixel to its corresponding semantic category, the algorithm can better distinguish the boundaries between different objects and better restore their details. Semantic segmentation also helps the algorithm deal with noise and occlusion in the image: by segmenting the image into semantic regions, occluded objects can be better recognized and handled, improving the robustness of the reconstruction. Likewise, semantic segmentation helps the algorithm cope with lighting variations and poor image quality.
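One hedged example of how semantic labels can aid robustness is to clean the reconstructed point cloud per class rather than globally, so that points belonging to a small object are not discarded merely because they lie far from the dominant structure. The snippet below is an illustrative sketch of that idea, not a description of WiMi's actual filtering step.

```python
import numpy as np

def remove_outliers_per_class(points, point_labels, k=3.0):
    """Keep points within roughly k standard deviations of their own class centroid."""
    keep = np.zeros(len(points), dtype=bool)
    for cls in np.unique(point_labels):
        idx = np.where(point_labels == cls)[0]
        cls_pts = points[idx]
        dist = np.linalg.norm(cls_pts - cls_pts.mean(axis=0), axis=1)
        # Points far from their class centroid, relative to the class spread, are dropped.
        keep[idx] = dist <= dist.mean() + k * dist.std()
    return points[keep], point_labels[keep]

# Usage with synthetic data: 3D points carrying per-point semantic labels.
pts = np.random.randn(1000, 3)
labels = np.random.randint(0, 4, size=1000)
clean_pts, clean_labels = remove_outliers_per_class(pts, labels)
```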
WiMi's multi-view 3D reconstruction algorithm based on semantic segmentation has obvious advantages in accuracy and efficiency. These advantages enable the algorithm to better restore real-world 3D scenes in practical applications and to better cope with the various complex situations found in the real world.
Currently, deep learning has achieved remarkable results in the fields of semantic segmentation and 3D reconstruction. In the future, WiMi will explore how to combine deep learning methods with traditional geometric computation methods, fully utilizing the advantages of both to improve the performance and effectiveness of multi-view 3D reconstruction algorithms based on semantic segmentation.
About WIMI Hologram Cloud
WIMI Hologram Cloud, Inc. (NASDAQ:WIMI) is a holographic cloud comprehensive technical solution provider that focuses on professional areas including holographic AR automotive HUD software, 3D holographic pulse LiDAR, head-mounted light field holographic equipment, holographic semiconductor, holographic cloud software, holographic car navigation and others. Its services and holographic AR technologies include holographic AR automotive application, 3D holographic pulse LiDAR technology, holographic vision semiconductor technology, holographic software development, holographic AR advertising technology, holographic AR entertainment technology, holographic ARSDK payment, interactive holographic communication and other holographic AR technologies.
Safe Harbor Statements
This press release contains "forward-looking statements" within the meaning of the Private Securities Litigation Reform Act of 1995. These forward-looking statements can be identified by terminology such as "will," "expects," "anticipates," "future," "intends," "plans," "believes," "estimates," and similar statements. Statements that are not historical facts, including statements about the Company's beliefs and expectations, are forward-looking statements. Among other things, the business outlook and quotations from management in this press release and the Company's strategic and operational plans contain forward-looking statements. The Company may also make written or oral forward-looking statements in its periodic reports to the US Securities and Exchange Commission ("SEC") on Forms 20-F and 6-K, in its annual report to shareholders, in press releases, and other written materials, and in oral statements made by its officers, directors or employees to third parties. Forward-looking statements involve inherent risks and uncertainties. Several factors could cause actual results to differ materially from those contained in any forward-looking statement, including but not limited to the following: the Company's goals and strategies; the Company's future business development, financial condition, and results of operations; the expected growth of the AR holographic industry; and the Company's expectations regarding demand for and market acceptance of its products and services.
Further information regarding these and other risks is included in the Company's annual report on Form 20-F and the current report on Form 6-K and other documents filed with the SEC. All information provided in this press release is as of the date of this press release. The Company does not undertake any obligation to update any forward-looking statement except as required under applicable laws.