
Cohere For AI - Guest Speaker: Hongyu Wang, Ph.D. candidate @ VIPL group


Date: Jun 26, 2024

Time: 4:00 PM - 5:00 PM

Location: Online

About the speaker: "I am Hongyu Wang (王鸿钰 in Chinese), a second-year Ph.D. candidate in the VIPL group at the Chinese Academy of Sciences (CAS), under the supervision of Professor Xilin Chen and Professor Ruiping Wang. I received my B.Eng. degree from the Department of Computer Science and Technology, University of Science and Technology of China, where I was advised by Associate Researcher Chao Qian. From Aug. 2021 to Jan. 2023, I was a research intern in the Natural Language Computing group at MSR-Asia, under the supervision of Dr. Furu Wei and Shuming Ma. My research interests lie in efficient architectures for large-scale foundation models and in multimodal reasoning."

Session Description: Recent research, such as BitNet, is paving the way for a new era of 1-bit Large Language Models (LLMs). In this talk, we will share our latest progress on extremely low-bit large language models, focusing on BitNet b1.58, a 1-bit LLM variant in which every single parameter (or weight) of the LLM is ternary {-1, 0, 1}. It matches the full-precision (i.e., FP16 or BF16) Transformer LLM with the same model size and training tokens in terms of both perplexity and end-task performance, while being significantly more cost-effective in terms of latency, memory, throughput, and energy consumption. More profoundly, the 1.58-bit LLM defines a new scaling law and recipe for training new generations of LLMs that are both high-performance and cost-effective. Furthermore, it enables a new computation paradigm and opens the door to designing specific hardware optimized for 1-bit LLMs.
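To make the ternary idea concrete, here is a minimal NumPy sketch of absmean-style ternary quantization in the spirit of BitNet b1.58: weights are scaled by their mean absolute value, then rounded and clipped to {-1, 0, 1}. This is an illustrative sketch, not the authors' implementation; the function name and the per-tensor (rather than per-group) scaling are assumptions for clarity.

```python
import numpy as np

def absmean_ternary_quantize(W, eps=1e-8):
    """Quantize a weight matrix to ternary values {-1, 0, 1}.

    Illustrative sketch: scale by the mean absolute weight (the
    "absmean" scale), then round and clip to the nearest value
    in {-1, 0, 1}. Returns the ternary weights and the scale, so
    an approximate reconstruction is Wq * gamma.
    """
    gamma = np.mean(np.abs(W)) + eps          # per-tensor absmean scale
    Wq = np.clip(np.round(W / gamma), -1, 1)  # each entry becomes -1, 0, or 1
    return Wq, gamma

# Usage: quantize a small random weight matrix
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4))
Wq, gamma = absmean_ternary_quantize(W)
print(Wq)  # every entry is -1.0, 0.0, or 1.0
```

Because each weight is one of only three values, matrix multiplication reduces to additions and subtractions (no multiplications), which is the source of the latency and energy savings discussed above.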
