
WiMi Developed Multi-modal EEG-Based Hybrid BCI System with Visual Servo Module

2023-10-16 20:00

BEIJING, Oct. 16, 2023 /PRNewswire/ -- WiMi Hologram Cloud Inc. (NASDAQ: WIMI) ("WiMi" or the "Company"), a leading global Hologram Augmented Reality ("AR") Technology provider, today announced that it has developed a multi-modal EEG-based hybrid brain-computer interface (BCI) system with a visual servo module. The system combines steady-state visual evoked potential (SSVEP) and motor imagery signals and introduces a visual servo module to improve the robot's performance in grasping tasks. By combining different types of EEG signals, users can control the robot more freely and intuitively to perform diverse actions, providing a more satisfying service experience.

The design of the multi-modal EEG-based hybrid BCI system with visual servo module centers on four components: signal acquisition, signal processing, control command generation, and the visual servo module.

1. Signal Acquisition: The system first needs to acquire the user's EEG signal and visual feedback signal. To enable multi-modal control, the system acquires both SSVEP and motor imagery signals.

SSVEP Signal Acquisition: By placing EEG electrodes on the user's scalp, the system can acquire the user's SSVEP signals. SSVEP is a visual evoked potential elicited by flickering stimuli: when the user's visual attention is focused on a stimulus flickering at a specific frequency, the brain generates electrical activity at that same frequency. To enable multi-modal control, the visual interface presents three flicker stimuli at different frequencies, each corresponding to one robot control command, such as move forward, turn left, or turn right.

Motor Imagery Signal Acquisition: In addition to the SSVEP signal, the system needs to acquire the user's motor imagery signal. This is done with EEG electrodes placed over the motor cortex. When the user imagines a grasping movement, the associated motor imagery signals are captured and used to command the robot to perform the grasping action.

2. Signal Processing: After acquisition, the raw EEG signals are processed and analyzed to extract useful information, and feature extraction and classification are performed to recognize the user's intention.

SSVEP Signal Processing: For SSVEP signals, the system first filters and pre-processes the raw signal to remove noise and interference. It then extracts spectral features to identify which stimulus frequency the user's visual attention is focused on, and thus whether the user intends to move forward, turn left, or turn right.
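WiMi has not published implementation details, but the spectral step described above can be sketched as follows. The sampling rate, the three stimulus frequencies, and the frequency-to-command mapping below are illustrative assumptions, not the Company's actual parameters:

```python
import numpy as np

FS = 250  # assumed EEG sampling rate in Hz
# Hypothetical flicker-frequency -> robot-command mapping.
STIM_FREQS = {8.0: "forward", 10.0: "turn_left", 12.0: "turn_right"}

def detect_ssvep_command(eeg, fs=FS, freqs=STIM_FREQS):
    """Return the command whose stimulus frequency carries the most
    spectral power in the (windowed) EEG trial."""
    spectrum = np.abs(np.fft.rfft(eeg * np.hanning(len(eeg))))
    bins = np.fft.rfftfreq(len(eeg), d=1.0 / fs)

    def band_power(f0, bw=0.5):
        # Sum power in a narrow band around one candidate frequency.
        mask = (bins >= f0 - bw) & (bins <= f0 + bw)
        return spectrum[mask].sum()

    return freqs[max(freqs, key=band_power)]

# Synthetic 2-second trial: a 10 Hz SSVEP response buried in noise.
rng = np.random.default_rng(0)
t = np.arange(2 * FS) / FS
trial = np.sin(2 * np.pi * 10.0 * t) + 0.5 * rng.standard_normal(t.size)
print(detect_ssvep_command(trial))  # → turn_left
```

Real systems typically use more robust detectors (e.g., canonical correlation analysis across multiple channels), but the principle is the same: the attended flicker frequency dominates the spectrum.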

Motor Imagery Signal Processing: For motor imagery signals, the system likewise pre-processes the raw signals to remove noise and interference. The user's imagined actions, such as a grasping motion, are then recognized through feature extraction and classification.
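One classic motor imagery feature, which the release does not name but which illustrates the feature-extraction step, is band-power of the mu rhythm (8-13 Hz): imagining a movement suppresses mu power over motor cortex (event-related desynchronization). A minimal single-channel sketch, with an assumed sampling rate:

```python
import numpy as np

FS = 250  # assumed EEG sampling rate in Hz

def mu_band_log_power(trial, fs=FS, band=(8.0, 13.0)):
    """Log band-power of the mu rhythm (8-13 Hz) for one channel.
    Lower values during a trial suggest imagined movement."""
    spectrum = np.abs(np.fft.rfft(trial)) ** 2
    bins = np.fft.rfftfreq(trial.size, d=1.0 / fs)
    mask = (bins >= band[0]) & (bins <= band[1])
    return np.log(spectrum[mask].sum())

# Synthetic 2-second trials: strong 10 Hz rhythm at rest,
# attenuated rhythm during imagined grasping.
rng = np.random.default_rng(0)
t = np.arange(2 * FS) / FS
noise = 0.2 * rng.standard_normal(t.size)
rest = 1.0 * np.sin(2 * np.pi * 10 * t) + noise
imagery = 0.3 * np.sin(2 * np.pi * 10 * t) + noise
print(mu_band_log_power(imagery) < mu_band_log_power(rest))  # → True
```

Production BCIs extract such features over many channels (often with spatial filters such as common spatial patterns) before classification.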

In WiMi's multi-modal EEG-based hybrid BCI system with visual servo module, control command generation is the core of the whole pipeline: the recognized EEG signals are parsed and mapped to the corresponding robot actions.

3. Control Command Generation: After recognizing the user's intention, the system generates the corresponding control commands to drive the robot's actions.

SSVEP Control Command Generation: For SSVEP signals, the system uses spectral analysis to determine the frequency on which the user's visual attention is focused. Each flicker stimulus on the visual interface corresponds to a robot movement, such as forward, turn left, or turn right, so identifying the attended frequency determines the user's intention and the control command to issue.

Motor Imagery Control Command Generation: For motor imagery signals, the system uses feature extraction and classification to recognize the user's imagined movements. When the user imagines a grasping motion, specific motor imagery signals are captured. A trained machine-learning classifier recognizes these features, and the system generates the corresponding control commands to instruct the robot to perform the grasping motion.
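The release does not specify which classifier is trained; as a minimal stand-in, a nearest-centroid classifier over extracted features shows the shape of the "classify, then map label to command" step. The feature values, labels, and command names below are all illustrative:

```python
import numpy as np

# Hypothetical label -> robot-command mapping.
COMMANDS = {0: "idle", 1: "grasp"}

class NearestCentroid:
    """Minimal classifier: assign each sample to the nearest class mean."""

    def fit(self, X, y):
        self.labels_ = np.unique(y)
        self.centroids_ = np.array(
            [X[y == c].mean(axis=0) for c in self.labels_]
        )
        return self

    def predict(self, X):
        # Distance from every sample to every class centroid.
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None], axis=-1)
        return self.labels_[d.argmin(axis=1)]

# Synthetic 2-D feature vectors (e.g., band-powers from two channels).
rng = np.random.default_rng(1)
X_rest = rng.normal([0.0, 0.0], 0.3, size=(50, 2))    # rest trials
X_grasp = rng.normal([2.0, 2.0], 0.3, size=(50, 2))   # imagined grasping
X = np.vstack([X_rest, X_grasp])
y = np.array([0] * 50 + [1] * 50)

clf = NearestCentroid().fit(X, y)
label = clf.predict(np.array([[1.9, 2.1]]))[0]
print(COMMANDS[label])  # → grasp
```

In practice, linear discriminant analysis or support vector machines are common choices for this step, but any trained classifier slots into the same pipeline.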

4. Visual Servo Module Design: The visual servo module is designed to improve the robot's performance and accuracy in grasping tasks. It adjusts the robot's grasping attitude and force in real time, making the grasping action more accurate and reliable. The module captures real-time visual feedback of the robot performing the grasping task through a camera and combines it with the user's motor imagery signal for dynamic adjustment.

Visual Feedback Acquisition: The camera captures real-time visual feedback from the robot as it performs a grasping task. This feedback may include the position of the robot's end-effector (e.g., a mechanical gripper), its attitude, the position and shape of the target object, and so on.

Feature Extraction: Useful features are extracted from the visual feedback, such as the edges, colors, and shape of the target object, as well as the position and attitude of the robot's end-effector.

Control Command Adjustment: The features extracted from the visual feedback are combined with the user's motor imagery signals to make dynamic adjustments. For example, if the user imagines grasping a more distant object, the system can adjust the robot's grasping posture and force accordingly, enabling the robot to better accomplish the grasping task.

Feedback Control: The visual servo module monitors the robot's progress on the grasping task in real time and applies feedback control based on the actual execution. If any error or instability arises during grasping, the system makes timely corrections so that the robot completes the grasping action more accurately.
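The closed loop described above can be sketched as a simple proportional visual servo: re-measure the image-space error between gripper and target each cycle and correct a fraction of it. The gain, positions, and tolerance below are illustrative, not WiMi's actual controller:

```python
import numpy as np

def servo_step(gripper_pos, target_pos, gain=0.5):
    """One proportional visual-servo update: move the gripper a
    fraction of the remaining error toward the target, and report
    the error magnitude before the move."""
    error = target_pos - gripper_pos
    return gripper_pos + gain * error, np.linalg.norm(error)

gripper = np.array([0.0, 0.0, 0.3])    # end-effector position from camera
target = np.array([0.4, -0.2, 0.05])   # detected object position

for _ in range(20):                    # closed loop: measure, correct, repeat
    gripper, err = servo_step(gripper, target)
    if err < 1e-3:                     # close enough to attempt the grasp
        break

print(np.allclose(gripper, target, atol=1e-3))  # → True
```

Because each cycle uses a fresh camera measurement, the same loop also absorbs disturbances (a nudged object, gripper slip) rather than following a pre-planned trajectory open-loop.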

With the visual servo module, WiMi's multi-modal EEG-based hybrid BCI system can adapt more flexibly to different grasping scenarios and user intentions, providing a higher-quality service experience. The module enhances the robot's autonomy and adaptability in grasping tasks, enabling the system to realize more complex and natural multi-modal control.

Traditional brain-computer interface systems usually provide only a limited number of control commands, restricting how the user interacts with the robot. WiMi's multi-modal EEG-based hybrid BCI system with visual servo module combines different types of EEG signals to enable richer and more varied commands: users can imagine different actions or attend to stimuli of different frequencies to achieve more sophisticated robot control, providing a more flexible and natural interaction experience.

By combining multiple EEG signals, WiMi's multi-modal EEG-based hybrid BCI system with visual servo module can recognize the user's intention more accurately. For example, combining SSVEP and motor imagery signals yields higher control accuracy, while the visual servo module enables real-time adjustment of the robot's movements, improving the reliability and accuracy of control commands so the robot responds better to the user. The system is not limited to robot control; it can also be applied to fields such as virtual reality, rehabilitation therapy, and the control of assistive devices. This extensibility opens new possibilities for brain-computer interface applications and promotes the development of human-computer interaction.

WiMi's multi-modal EEG-based hybrid BCI system with visual servo module combines multiple technologies, including EEG signal processing, feature extraction, machine learning algorithms, and visual servo technology. By addressing the coordination and integration of these technologies, it advances BCI technology and lays a foundation for higher-level brain-computer interface applications. The system provides richer and more diversified control commands, improves control precision and reliability, expands the application fields of brain-computer interfaces, and improves the user experience and quality of life. It also drives the development and innovation of BCI technology and is expected to play an important role in intelligent robotics and human-computer interaction in the future.

About WIMI Hologram Cloud

WIMI Hologram Cloud, Inc. (NASDAQ:WIMI) is a holographic cloud comprehensive technical solution provider that focuses on professional areas including holographic AR automotive HUD software, 3D holographic pulse LiDAR, head-mounted light field holographic equipment, holographic semiconductor, holographic cloud software, holographic car navigation and others. Its services and holographic AR technologies include holographic AR automotive application, 3D holographic pulse LiDAR technology, holographic vision semiconductor technology, holographic software development, holographic AR advertising technology, holographic AR entertainment technology, holographic ARSDK payment, interactive holographic communication and other holographic AR technologies.

Safe Harbor Statements

This press release contains "forward-looking statements" within the meaning of the Private Securities Litigation Reform Act of 1995. These forward-looking statements can be identified by terminology such as "will," "expects," "anticipates," "future," "intends," "plans," "believes," "estimates," and similar statements. Statements that are not historical facts, including statements about the Company's beliefs and expectations, are forward-looking statements. Among other things, the business outlook and quotations from management in this press release and the Company's strategic and operational plans contain forward-looking statements. The Company may also make written or oral forward-looking statements in its periodic reports to the US Securities and Exchange Commission ("SEC") on Forms 20-F and 6-K, in its annual report to shareholders, in press releases, and other written materials, and in oral statements made by its officers, directors or employees to third parties. Forward-looking statements involve inherent risks and uncertainties. Several factors could cause actual results to differ materially from those contained in any forward-looking statement, including but not limited to the following: the Company's goals and strategies; the Company's future business development, financial condition, and results of operations; the expected growth of the AR holographic industry; and the Company's expectations regarding demand for and market acceptance of its products and services.

Further information regarding these and other risks is included in the Company's annual report on Form 20-F and the current report on Form 6-K and other documents filed with the SEC. All information provided in this press release is as of the date of this press release. The Company does not undertake any obligation to update any forward-looking statement except as required under applicable laws.

Source: WiMi Hologram Cloud Inc.