BEIJING, April 20, 2023 /PRNewswire/ -- WiMi Hologram Cloud Inc. (NASDAQ: WIMI) ("WiMi" or the "Company"), a leading global Hologram Augmented Reality ("AR") Technology provider, today announced the development of a method and system for human-computer interaction ("HCI") based on XR technology that enhances the user experience and lets users change their viewpoint at different locations. The convergence of visual interaction technologies enables an immersive experience that transitions seamlessly between the virtual and real worlds.
The XR-based HCI method and system collects the user's location and observation viewpoint and, on that basis, acquires a first-person (primary) perspective image of the user, a secondary perspective image, and the configuration method and fusion mode between the images. The observation viewpoint may be either a first-person or a third-person perspective. From the observation viewpoint and location, the system constructs a multi-view fusion image and, according to the configuration method and fusion mode, provides the user with different viewpoint images, including first-person perspective images and secondary perspective images. While capturing the observation perspective, the system also captures user commands, which include voice commands and action commands, based on the user's location, and derives interaction tasks from them. When acquiring the viewpoint image, the system works from the task type of the interaction task, which indicates the user's imaging requirements for that image; the configuration method and fusion mode are then determined from those imaging requirements.
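The entities named above can be made concrete with a small data model. The following Python sketch is purely illustrative: the class and field names are assumptions chosen for readability, not identifiers from WiMi's actual system.

```python
from dataclasses import dataclass
from enum import Enum, auto

class CommandType(Enum):
    """The two command channels the system captures."""
    VOICE = auto()
    ACTION = auto()

class Viewpoint(Enum):
    """The two observation viewpoints the release describes."""
    FIRST_PERSON = auto()
    THIRD_PERSON = auto()

@dataclass
class UserCommand:
    kind: CommandType
    payload: str                # e.g. a recognized utterance or gesture label

@dataclass
class InteractionTask:
    task_type: str              # encodes the imaging requirements
    location: tuple             # the user's position in the XR scene

@dataclass
class ViewConfig:
    primary: Viewpoint          # which viewpoint leads
    fusion_mode: str            # how primary/secondary images are combined
```

A task derived from a voice command might then be represented as `InteractionTask(task_type="inspect", location=(1.0, 2.0, 0.5))`, with the system selecting a `ViewConfig` from the task type.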
Viewpoint selection depends on the perception requirements of the interaction task. When the task demands high observation and perception ability in near space and low perception ability in far space, the first-person perspective is the primary viewpoint and the multi-view fusion image is the primary-viewpoint image. When the task requires average perception ability in both near and far space, either the first-person or the third-person perspective serves as the primary viewpoint, and the multi-view fusion image is the primary-viewpoint image or the secondary perspective image. When the task requires high perception ability in both near and far space, the first-person perspective is the observation viewpoint, the third-person perspective is the primary viewpoint, and the multi-view fusion image is a fusion of the primary and secondary views. When the task requires low perception ability in near space and high perception ability in far space, the third-person perspective is the primary viewpoint and the multi-view fusion image is the secondary perspective image. For tasks with general perception requirements in both near and far space, the secondary perspective image corresponding to the first edge of the primary perspective image is obtained from that edge and stitched to the primary perspective image to generate the multi-view fusion image; the user can then switch between the primary and secondary perspective images through user commands.
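The case analysis above amounts to a decision table keyed on the near-space and far-space perception requirements. The sketch below encodes it in Python; the function and return labels are illustrative assumptions, not WiMi's implementation.

```python
from enum import Enum

class Level(Enum):
    """Perception requirement of an interaction task for a spatial range."""
    LOW = 0
    AVERAGE = 1
    HIGH = 2

def select_view_config(near: Level, far: Level) -> tuple[str, str]:
    """Map near/far perception requirements to (primary viewpoint,
    fusion-image type), following the cases described above."""
    if near is Level.HIGH and far is Level.LOW:
        # strong near-field focus: stay in first person
        return ("first-person", "primary-view image")
    if near is Level.HIGH and far is Level.HIGH:
        # both ranges matter: fuse the primary and secondary views
        return ("first-and-third-person", "fused primary+secondary image")
    if near is Level.LOW and far is Level.HIGH:
        # far-field focus: the third-person (secondary) view leads
        return ("third-person", "secondary-view image")
    # average/general requirements: either view may lead, and adjacent
    # views are stitched along a shared edge of the primary image
    return ("first- or third-person", "stitched primary/secondary image")
```

For example, an inspection task close to the user would call `select_view_config(Level.HIGH, Level.LOW)` and keep the first-person view as the primary viewpoint.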
When the interaction task requires high observation and perception ability in both near and far space, the system locates, from a first feature in the primary perspective image, the corresponding second feature in the secondary perspective image, and fuses the image region attached to that second feature into the primary perspective image; the user obtains the second feature by selecting the first feature. Likewise, from a third feature the system locates the corresponding fourth feature in the secondary perspective image and fuses the image region attached to the fourth feature into the secondary perspective image. By selecting the first and fourth features, the user obtains the multi-view fusion image corresponding to the second and third features.
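One minimal way to picture this feature-driven fusion is to copy the pixel region around a matched feature from one view into the other at the corresponding location. The NumPy sketch below is a simplified stand-in, assuming feature correspondences are already given as coordinate pairs; it is not WiMi's algorithm.

```python
import numpy as np

def fuse_feature_region(primary: np.ndarray, secondary: np.ndarray,
                        match: tuple[tuple[int, int], tuple[int, int]],
                        half: int = 8) -> np.ndarray:
    """Copy the patch surrounding a matched feature in the secondary image
    into the primary image at the matched location.

    `match` pairs a (row, col) feature in the primary image with its
    correspondence in the secondary image -- a stand-in for the
    first/second feature pairs described above.
    """
    (pr, pc), (sr, sc) = match
    out = primary.copy()
    # shrink the patch so it stays inside both image boundaries
    h = min(half, pr, sr, primary.shape[0] - 1 - pr, secondary.shape[0] - 1 - sr)
    w = min(half, pc, sc, primary.shape[1] - 1 - pc, secondary.shape[1] - 1 - sc)
    out[pr - h:pr + h + 1, pc - w:pc + w + 1] = \
        secondary[sr - h:sr + h + 1, sc - w:sc + w + 1]
    return out
```

Selecting a feature in the primary view would then trigger one such fusion per correspondence, building up the multi-view fusion image region by region.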
WiMi's XR-based HCI system includes an interaction task acquisition module, a data processing module, an image fusion module, a command acquisition module, a data storage module, and a data fusion module.
The interaction task acquisition module acquires user commands and derives interaction tasks based on the user's location; the commands include voice commands and action commands. The data processing module receives the user's location and observation viewpoint and obtains the user's primary viewpoint image, a secondary viewpoint image, and the configuration method and fusion mode between images. The observation viewpoint may be either a first-person or a third-person perspective.
The image fusion module constructs a multi-viewpoint fusion image based on the user's observation perspective to provide the user with different perspective images; these include a primary viewpoint image and a secondary viewpoint image. The command acquisition module captures the user's voice and actions to acquire voice and action commands. The data storage module stores the viewpoint images. The data fusion module fuses images based on features shared across the viewpoint images to generate the multi-viewpoint fusion image.
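The module breakdown above can be sketched as a set of cooperating classes. The following Python outline is a structural illustration only: the class names, stubbed logic, and wiring order are assumptions based on the description, not WiMi's code.

```python
class CommandAcquisition:
    """Captures voice and action commands (stubbed as text parsing here)."""
    def capture(self, raw: str) -> list[str]:
        return [c.strip() for c in raw.split(";") if c.strip()]

class InteractionTaskAcquisition:
    """Derives an interaction task from commands plus the user's location."""
    def acquire(self, commands: list[str], location: tuple) -> dict:
        return {"commands": commands, "location": location,
                "task_type": "inspect" if "inspect" in commands else "browse"}

class DataStorage:
    """Keeps the viewpoint images produced so far."""
    def __init__(self):
        self.images: list = []
    def store(self, image) -> None:
        self.images.append(image)

class DataFusion:
    """Fuses viewpoint images that share common features (stubbed)."""
    def fuse(self, images: list) -> dict:
        return {"fused_from": len(images)}

class XRSystem:
    """Wires the modules together in the order the release describes."""
    def __init__(self):
        self.commands = CommandAcquisition()
        self.tasks = InteractionTaskAcquisition()
        self.storage = DataStorage()
        self.fusion = DataFusion()
    def run(self, raw_commands: str, location: tuple, viewpoint_images: list):
        task = self.tasks.acquire(self.commands.capture(raw_commands), location)
        for img in viewpoint_images:
            self.storage.store(img)
        return task, self.fusion.fuse(self.storage.images)
```

In this layout, the data processing and image fusion modules would sit between task acquisition and data fusion, consuming the stored viewpoint images.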
WiMi's system meets users' need to observe the same object from multiple viewpoints and provides a technical reference for multi-user fusion in intelligent HCI processing. Multi-view fusion images generated with XR technology can be applied in many new scenarios, such as social networking, office work, entertainment, exhibitions, and education, opening broad room for the XR industry to develop. Virtualizing natural objects through big data, combined with WiMi's multi-view fusion solution, will help users make virtual-real fusion comparisons, offering a feasible approach to tasks such as object defect detection and feature observation.
ABOUT WIMI HOLOGRAM CLOUD
WiMi Hologram Cloud Inc. (NASDAQ: WIMI) is a holographic cloud comprehensive technical solution provider that focuses on professional areas including holographic AR automotive HUD software, 3D holographic pulse LiDAR, head-mounted light field holographic equipment, holographic semiconductors, holographic cloud software, holographic car navigation and others. Its services and holographic AR technologies include holographic AR automotive applications, 3D holographic pulse LiDAR technology, holographic vision semiconductor technology, holographic software development, holographic AR advertising technology, holographic AR entertainment technology, holographic AR SDK payment, interactive holographic communication and other holographic AR technologies.
SAFE HARBOR STATEMENTS
This press release contains "forward-looking statements" within the meaning of the Private Securities Litigation Reform Act of 1995. These forward-looking statements can be identified by terminology such as "will," "expects," "anticipates," "future," "intends," "plans," "believes," "estimates," and similar statements. Statements that are not historical facts, including statements about the Company's beliefs and expectations, are forward-looking statements. Among other things, the business outlook and quotations from management in this press release and the Company's strategic and operational plans contain forward-looking statements. The Company may also make written or oral forward-looking statements in its periodic reports to the US Securities and Exchange Commission ("SEC") on Forms 20-F and 6-K, in its annual report to shareholders, in press releases, and other written materials, and in oral statements made by its officers, directors or employees to third parties. Forward-looking statements involve inherent risks and uncertainties. Several factors could cause actual results to differ materially from those contained in any forward-looking statement, including but not limited to the following: the Company's goals and strategies; the Company's future business development, financial condition, and results of operations; the expected growth of the AR holographic industry; and the Company's expectations regarding demand for and market acceptance of its products and services.
Further information regarding these and other risks is included in the Company's annual report on Form 20-F and the current report on Form 6-K and other documents filed with the SEC. All information provided in this press release is as of the date of this press release. The Company does not undertake any obligation to update any forward-looking statement, except as required under applicable laws.