WHAT LAB, short for the WeChat-HKUST Joint Lab on Artificial Intelligence Technology, was established in 2015 with the mission of fostering artificial intelligence (AI) and big data research to improve people's lives and advance the frontiers of knowledge. The establishment of WHAT LAB marked a milestone in the collaboration between WeChat and the higher education sector.

Over the years, WeChat and HKUST jointly conducted a series of AI-related research projects and explored the far-reaching frontiers of AI. Research areas of WHAT LAB included intelligent robotic systems, natural language processing, data mining, and speech recognition and understanding.

AI technology has been advancing tremendously, and this advancement depends on talent, problems, and data. WHAT LAB enabled WeChat and HKUST to complement each other in these aspects and develop impactful research and innovation. The lab brought together top researchers to build innovative artificial intelligence applications on WeChat's data, and it successfully accomplished its missions in 2021.

Research areas


Demos


Machine Reading System

Developed by Yuxiang Wu, Prof. Qiang Yang’s Group

Demo Introduction: Machine Reading aims to develop machine learning algorithms that can read and comprehend natural-language documents as humans do. With Machine Reading, natural-language information is converted into a form that computers can process, which can then be used in applications such as summarization, question answering, and dialogue systems.
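
To make the question-answering use case concrete, here is a toy illustration (not the lab's system): a classic word-overlap baseline that returns the document sentence most similar to the question, the kind of simple retrieval heuristic that learned machine-reading models aim to improve upon.

```python
import re

def _words(text):
    """Lowercased word set, ignoring punctuation."""
    return set(re.findall(r"\w+", text.lower()))

def answer(document, question):
    """Return the sentence with the greatest word overlap with the
    question -- a simple retrieval baseline, not a learned reader."""
    q = _words(question)
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", document) if s.strip()]
    return max(sentences, key=lambda s: len(q & _words(s)))
```

For example, `answer("WHAT LAB was established in 2015. It focuses on AI research.", "When was WHAT LAB established?")` returns the first sentence, since it shares the most words with the question.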

Moments Articles Real Time Propagation and Visualization System

Developed by Quan Li, Dongyu Liu, Haiyan Yang, Prof. Huamin Qu’s Group

Demo Introduction: In this project, we visually investigated how official public-account articles propagate on the WeChat platform from different perspectives, including a 3D global overview, a time-varying propagation view, and a community detection view. We also implemented several designs using real propagation data, including the propagation clock, propagation wave, propagation galaxy, and propagation tree. The system, WeSeer, has been deployed at WeChat (Tencent) for daily propagation analysis.
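
As a rough sketch of what underlies a propagation-tree view (hypothetical data model, not WeSeer's implementation): reshare events can be recorded as (sharer, parent) pairs and assembled into a tree, from which metrics such as propagation depth follow directly.

```python
from collections import defaultdict

def build_tree(events):
    """events: list of (sharer, parent) pairs, where parent is None
    for the original post. Returns (roots, children adjacency map)."""
    children = defaultdict(list)
    roots = []
    for sharer, parent in events:
        if parent is None:
            roots.append(sharer)
        else:
            children[parent].append(sharer)
    return roots, children

def depth(node, children):
    """Length of the longest reshare chain starting at `node`."""
    kids = children.get(node, [])
    return 1 + (max(depth(k, children) for k in kids) if kids else 0)
```

A visualization layer would then lay this tree out radially or hierarchically; the depth and branching factor distinguish, for example, viral chain-resharing from broadcast-style spread.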

Model-based Global Localization for Aerial Robots using Edge Alignments

Developed by Kejie Qiu, Prof. Shaojie Shen’s Group

Demo Introduction: The video contains three parts. The first part presents the localization accuracy and global consistency by comparison with the ground truth provided in an indoor environment. Three trajectories (model-only, model+EKF, and ground truth) are shown in different colors, and the 3D model used for localization is shown as a dense point cloud. The second part shows real-time localization results in an outdoor setting. Four trajectories (model-only, model+EKF, VINS, and GPS) are shown in different colors; here the 3D model is shown as a sparse point cloud for display efficiency. A special outdoor case with unstable GPS is also included. Three image views, the raw fisheye image, the electronically stabilized image, and the rendered image, are shown simultaneously. Besides comparing trajectories, the localization performance can be judged intuitively by visually comparing the similarity between the stabilized image and the rendered image. The third part of the video shows closed-loop control in a trajectory-tracking experiment that uses the proposed method for state feedback.
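
To illustrate the basic idea behind the "model+EKF" trajectory, here is a minimal one-dimensional Kalman filter sketch (a simplified stand-in, not the authors' implementation): odometry-style motion predictions are fused with global, model-based position fixes, trading off their respective uncertainties.

```python
def kf_step(x, P, u, z, Q=0.1, R=0.5):
    """One predict/update cycle for a scalar state.
    x, P: current state estimate and its variance
    u: motion increment from odometry (prediction step)
    z: global position measurement, e.g. from model-based localization
    Q, R: process and measurement noise variances (assumed values)."""
    # Predict: propagate the state with odometry and inflate uncertainty
    x_pred = x + u
    P_pred = P + Q
    # Update: blend in the global measurement via the Kalman gain
    K = P_pred / (P_pred + R)
    x_new = x_pred + K * (z - x_pred)
    P_new = (1 - K) * P_pred
    return x_new, P_new
```

In the real system the state is the full 6-DoF pose and the measurement comes from aligning rendered model edges with image edges, but the fusion principle is the same: drift-prone relative motion is anchored by globally consistent fixes.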