BEIJING, May 10, 2023 /PRNewswire/ — WiMi Hologram Cloud Inc. (NASDAQ: WIMI) (“WiMi” or the “Company”), a leading global Hologram Augmented Reality (“AR”) Technology provider, today announced the development of the LMVO algorithm for improving the efficiency of SLAM. WiMi’s LMVO-SLAM estimates pose by directly matching feature-point image blocks in an image. Unlike the traditional approach of matching across the entire image, the points to be matched are filtered based on the magnitude of the grayscale gradient.
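The gradient-based point selection described above can be sketched as follows. This is a minimal illustration, not WiMi’s actual implementation; the function name and threshold are assumptions:

```python
import numpy as np

def select_high_gradient_points(img, thresh=30.0):
    """Keep only pixels whose grayscale gradient magnitude exceeds a
    threshold, so later direct matching works on a small, informative
    subset of points instead of the whole image."""
    img = img.astype(np.float32)
    gy, gx = np.gradient(img)          # per-axis grayscale gradients
    mag = np.sqrt(gx ** 2 + gy ** 2)   # gradient magnitude per pixel
    ys, xs = np.nonzero(mag > thresh)  # retain high-gradient pixels only
    return list(zip(ys.tolist(), xs.tolist()))
```

On an image with a single sharp edge, only pixels adjacent to the edge survive the filter, which is the kind of sparsification that makes direct matching cheap.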

The key to SLAM is localization and map building. WiMi’s LMVO algorithm uses discrete feature extraction, extracting features directly from keyframes for localization, and ensures a uniform distribution of feature points in the image. The algorithm passes the feature points frame by frame and matches the gray values of the image pixel blocks around each feature point to solve for the camera path. The algorithm then optimizes the projection coordinates of the 3D holographic points to obtain the projection coordinates of the spatial points in the current frame, and transforms the ordinary feature points to obtain image coordinates, from which the camera pose and the 3D coordinates of the feature points are determined. The point cloud is positioned in the 3D holographic space and treated as an ordinary feature comparison between two images, without matching a large amount of data, which can significantly improve efficiency.
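The pixel-block gray-value matching described above can be sketched as a small sum-of-squared-differences (SSD) search. This is an illustrative sketch under simple assumptions (a known predicted position and a small search window), not the Company’s implementation:

```python
import numpy as np

def match_patch_ssd(key_patch, cur_img, center, radius=4):
    """Find the best match for a keyframe patch in the current frame by
    minimizing the sum of squared grayscale differences (SSD) within a
    small search window around a predicted position `center` (the
    patch's top-left corner)."""
    h, w = key_patch.shape
    best_ssd, best_pos = np.inf, center
    cy, cx = center
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = cy + dy, cx + dx
            cand = cur_img[y:y + h, x:x + w]
            if cand.shape != key_patch.shape:
                continue  # candidate window falls outside the image
            ssd = np.sum((cand.astype(np.float32)
                          - key_patch.astype(np.float32)) ** 2)
            if ssd < best_ssd:
                best_ssd, best_pos = ssd, (y, x)
    return best_pos, best_ssd
```

Because only the pixel blocks around selected feature points are compared, the search is local and cheap, which is the efficiency argument made above.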

The map construction of WiMi’s LMVO algorithm includes the depth model calculation of the holographic point cloud data. The depth calculation uses a filtering method: a hypothetical depth is estimated for each point and assumed to follow a specific probability model, and the estimate is refined as new observations are obtained. When the depth variance falls below a certain threshold, the depth estimate of the point is considered reliable; it then participates in the inter-frame transfer and is finally added to the environment map.
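The filtering-based depth estimation can be illustrated with a Gaussian depth filter, where each new observation is fused into the current estimate and the variance shrinks until it crosses a convergence threshold. The probability model and threshold here are assumed for illustration; the press release does not specify them:

```python
def update_depth(mu, sigma2, obs_mu, obs_sigma2):
    """Fuse a new depth observation (obs_mu, obs_sigma2) into the current
    Gaussian depth estimate (mu, sigma2) via the product of two Gaussians:
    the fused variance always shrinks, so repeated observations converge."""
    fused_sigma2 = (sigma2 * obs_sigma2) / (sigma2 + obs_sigma2)
    fused_mu = (obs_sigma2 * mu + sigma2 * obs_mu) / (sigma2 + obs_sigma2)
    return fused_mu, fused_sigma2

def is_converged(sigma2, thresh=0.05):
    """Declare the depth reliable once its variance drops below a threshold,
    at which point the point can be inserted into the map."""
    return sigma2 < thresh
```

Starting from a rough prior and repeatedly fusing consistent observations drives the variance toward zero, modeling the “added to the environment map once reliable” behavior described above.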

The algorithm is divided into two main parts, localization and map building, which are executed in parallel threads to ensure real-time performance. The algorithm minimizes the difference between the pixels corresponding to the projected positions of the same 3D holographic space points to obtain the change in the camera’s pose relative to the previous frame. The algorithm extracts feature points only from keyframes. When a normal frame is observed, the algorithm optimizes the depth values of the feature points and transfers the feature points to the current frame, assuming that the grayscale of a point and its surrounding pixel block in the current frame equals the grayscale of its corresponding point and surrounding pixel block in the keyframe image. The algorithm thereby obtains the spatial points and their corresponding points, with their 3D coordinates, in the camera coordinate system of the current frame. An error arises between the estimated depth of the image feature points and the depth of the 3D holographic point cloud; the algorithm optimizes the projected image coordinates of the feature points in the current frame by locating and matching against this error.
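The pose-from-photometric-difference idea above can be sketched with a pinhole projection and a per-point grayscale residual. This is a generic direct-method illustration, not WiMi’s formulation; the intrinsics and pose conventions below are assumptions:

```python
import numpy as np

def project(K, T, p_world):
    """Project a 3D world point into a camera with intrinsics K (3x3)
    and world-to-camera pose T (4x4), using the pinhole model."""
    p_cam = (T @ np.append(p_world, 1.0))[:3]
    uvw = K @ p_cam
    return uvw[:2] / uvw[2]  # perspective divide -> pixel coordinates

def photometric_error(img_ref, img_cur, uv_ref, uv_cur):
    """Grayscale difference between the pixel a 3D point projects to in
    the reference frame and in the current frame; a direct method
    adjusts the pose estimate to drive this residual toward zero."""
    r = img_ref[int(round(uv_ref[1])), int(round(uv_ref[0]))]
    c = img_cur[int(round(uv_cur[1])), int(round(uv_cur[0]))]
    return float(r) - float(c)
```

Summing such residuals over the selected feature points gives the cost whose minimization yields the inter-frame pose change described above.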

The algorithm finds, for each observable pixel feature point in the 3D holographic space, the nearest keyframe. The error between the feature point block of the keyframe and the matching feature point block of the current frame is constrained to find more accurate image coordinates. The algorithm ensures matching accuracy when comparing 3D holographic space pixel data by creating a new mapping and stretching and rotating the image block. Once the projected image coordinates of the feature points in the current frame are obtained, the algorithm uses a similar feature point method to optimize the localization formula, further refining the 3D point location coordinates.
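The “stretching and rotating the image block” step amounts to warping the keyframe patch by a 2x2 affine map before comparison, so viewpoint changes do not break the grayscale match. A minimal nearest-neighbor version, with illustrative names and an assumed patch size, might look like:

```python
import numpy as np

def warp_patch(img, center, A, size=8):
    """Sample a size x size patch around `center` = (row, col) after
    applying a 2x2 affine map A (rotation/stretch) to the patch offsets,
    using nearest-neighbor lookup. Approximates how a keyframe patch
    appears from the current viewpoint."""
    half = size // 2
    patch = np.zeros((size, size), dtype=img.dtype)
    cy, cx = center
    for i in range(size):
        for j in range(size):
            # map the patch-local offset through the affine transform
            d = A @ np.array([i - half, j - half], dtype=np.float32)
            y, x = int(round(cy + d[0])), int(round(cx + d[1]))
            if 0 <= y < img.shape[0] and 0 <= x < img.shape[1]:
                patch[i, j] = img[y, x]
    return patch
```

With `A` set to the identity the warp reduces to plain patch extraction; a rotation or anisotropic scaling in `A` produces the stretched/rotated block used for accurate matching.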

After obtaining the accurate 3D point cloud trajectory position, the LMVO algorithm determines whether the current frame is a keyframe: if all the nearby observation frames are far from the current frame, the current frame becomes a keyframe. The algorithm then initializes the depth filter, inserts into the map the points that have converged but have not yet been inserted as well as the points that can be observed in the current frame, and performs a new round of feature point detection, constructing a depth variance value for each feature point. If the current frame is a normal frame, the algorithm iteratively estimates the depth of the feature points. When the depth estimation variance falls below a certain threshold, indicating that the depth estimate has converged, the point is inserted into the map, and the points in the reference keyframe are projected to their corresponding epipolar lines in the image of the current frame. The corresponding depth values are calculated by triangulation, based on the matching results obtained from the localization thread.
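The triangulation step at the end can be sketched as a two-view least-squares problem: given the bearing vectors of a matched point in the reference keyframe and the current frame, plus the relative pose between them, solve for the depth along the reference ray. This is the standard construction, shown here as an assumption about what “calculated using triangulation” entails:

```python
import numpy as np

def triangulate_depth(f_ref, f_cur, R, t):
    """Estimate the depth of a point along the reference bearing vector
    f_ref, given the current-frame bearing f_cur and the relative pose
    (R, t) such that p_cur = R @ p_ref + t.

    With p_ref = d_ref * f_ref and p_cur = d_cur * f_cur, the constraint
    d_ref * (R @ f_ref) - d_cur * f_cur = -t is solved for (d_ref, d_cur)
    in the least-squares sense."""
    A = np.stack([R @ f_ref, -f_cur], axis=1)  # 3x2 system matrix
    d, _, _, _ = np.linalg.lstsq(A, -t, rcond=None)
    return d[0]  # depth along the reference ray
```

For a noise-free match the least-squares solution recovers the true depth exactly; with noisy matches it returns the best-fit depth, which the depth filter then refines over subsequent frames.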

The advantages of WiMi’s LMVO algorithm are the even distribution of its key points and its speed: the algorithm guarantees a rate of 500 fps or more, which is very important for security. The algorithm also has some shortcomings, such as limited robustness to illumination changes due to its reliance on grayscale matching. The algorithm will be further optimized to make it commercially available and accessible to developers.

About WIMI Hologram Cloud

WIMI Hologram Cloud, Inc. (NASDAQ:WIMI) is a holographic cloud comprehensive technical solution provider that focuses on professional areas including holographic AR automotive HUD software, 3D holographic pulse LiDAR, head-mounted light field holographic equipment, holographic semiconductors, holographic cloud software, holographic car navigation and others. Its services and holographic AR technologies include holographic AR automotive applications, 3D holographic pulse LiDAR technology, holographic vision semiconductor technology, holographic software development, holographic AR advertising technology, holographic AR entertainment technology, holographic AR SDK payment, interactive holographic communication and other holographic AR technologies.

Safe Harbor Statements

This press release contains “forward-looking statements” within the meaning of the Private Securities Litigation Reform Act of 1995. These forward-looking statements can be identified by terminology such as “will,” “expects,” “anticipates,” “future,” “intends,” “plans,” “believes,” “estimates,” and similar statements. Statements that are not historical facts, including statements about the Company’s beliefs and expectations, are forward-looking statements. Among other things, the business outlook and quotations from management in this press release and the Company’s strategic and operational plans contain forward-looking statements. The Company may also make written or oral forward-looking statements in its periodic reports to the US Securities and Exchange Commission (“SEC”) on Forms 20-F and 6-K, in its annual report to shareholders, in press releases, and other written materials, and in oral statements made by its officers, directors or employees to third parties. Forward-looking statements involve inherent risks and uncertainties. Several factors could cause actual results to differ materially from those contained in any forward-looking statement, including but not limited to the following: the Company’s goals and strategies; the Company’s future business development, financial condition, and results of operations; the expected growth of the AR holographic industry; and the Company’s expectations regarding demand for and market acceptance of its products and services.

Further information regarding these and other risks is included in the Company’s annual report on Form 20-F and the current report on Form 6-K and other documents filed with the SEC. All information provided in this press release is as of the date of this press release. The Company does not undertake any obligation to update any forward-looking statement except as required under applicable laws.