TMTPost CEO: Five Major Misconceptions on China's Catchup in the AI Race


AsianFin--It's crucial to assess how many years China lags behind the United States in AI, said TMTPost founder, Chairperson and CEO Zhao Hejuan, in a recent speech delivered at a conference organized by Cheung Kong Graduate School of Business and Shantou University.

In the speech, titled Five Major Misconceptions on China's Catchup in the AI Race, she noted that many argue that after GPT-3 was released in 2020 and ChatGPT came out in 2022, China quickly developed models similar to GPT-3, and that after GPT-4 was released, it took China no more than two years to develop a comparable model. However, that does not mean the gap between Chinese companies and their U.S. peers is only one to two years, said Zhao, who is an alumna of Cheung Kong Graduate School of Business.

“I find it rather misleading to use such time frames to describe the gaps because they are generational innovation timescales, not capability gaps,” she added.

The following is the main content of the speech edited by TMTPost for brevity and clarity:

Dear alumni, the topic of my speech today is “Five Major Misconceptions on China's Catchup in the AI Race.”

At TMTPost, I play two roles in AI: researcher and reporter covering the field, and participant in applying AIGC to the transformation of the content industry.

TMTPost has closely followed the development of AI since the AI 1.0 era. In that era, whether judged by Chinese listed companies or by applications, we seemed to be catching up with the United States. In the AI 2.0 era, however, the era of AIGC, we came to realize that China had fallen behind almost overnight.

I listened carefully to the remarks by each guest yesterday. One of the guests argued that China's quick catchup after ChatGPT went viral actually shows that China is following hard on the heels of the United States in strengths and capability building.

However, I'd like to offer a reality check now. I believe we might be overly optimistic about the immediate future. The optimism isn't confined to the Chinese market; it extends to expectations about the pace of the global AI application boom. I suspect that short-term progress might not be as fast as everyone expects, and that in the long term there is a risk of focusing solely on immediate profitability.

For over a decade, we've been diligently covering developments in this field, closely monitoring AI-related entrepreneurship. However, we find ourselves in a somewhat stagnant position now. It's time to face the reality and strategize our way out of the "pseudo-AI entrepreneurship zone."

Let me explain in detail.

The two most talked-about things in the AI field this year are: the recent release of AlphaFold 3 and the upcoming release of GPT-5.

First, let's talk about the AlphaFold 3 model released by the Google DeepMind team on May 8. TMTPost was the first in China to report on it and offered readers the most comprehensive coverage.

In 2022, an enhanced edition of AlphaFold 2 was launched. Fast forward two years to today, and we witness the unveiling of AlphaFold 3, a groundbreaking tool for predicting protein structures in biology. The pivotal shift in this evolution is the change in the underlying computational method and model algorithm.

AlphaFold 3 combines Transformer-based generative modeling with a diffusion model. This fusion results in a remarkable advance: AlphaFold 3 boasts a 100% improvement in prediction accuracy compared with existing methods.
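To make that architectural shift concrete, here is a minimal, purely illustrative sketch of the general recipe the speech alludes to: a Transformer trunk produces embeddings that condition an iterative, diffusion-style denoiser, which refines random 3D coordinates into a predicted structure. This is not DeepMind's code; every class name, dimension, and step count below is a hypothetical toy written in PyTorch.

```python
# Illustrative toy only, NOT AlphaFold 3's actual implementation.
# It sketches the pattern of a Transformer trunk conditioning an
# iterative, diffusion-style denoiser over 3D coordinates.
import torch
import torch.nn as nn

class ToyStructureDenoiser(nn.Module):
    def __init__(self, embed_dim=64, num_steps=10):
        super().__init__()
        self.num_steps = num_steps
        # Transformer "trunk": turns per-residue features into conditioning embeddings.
        layer = nn.TransformerEncoderLayer(d_model=embed_dim, nhead=4, batch_first=True)
        self.trunk = nn.TransformerEncoder(layer, num_layers=2)
        # Denoiser head: predicts a coordinate update from (embedding + noisy coords).
        self.denoise = nn.Sequential(
            nn.Linear(embed_dim + 3, embed_dim), nn.ReLU(), nn.Linear(embed_dim, 3)
        )

    def forward(self, residue_features):
        # residue_features: (batch, seq_len, embed_dim)
        cond = self.trunk(residue_features)
        # Start from pure noise and iteratively refine toward a structure.
        coords = torch.randn(residue_features.shape[0], residue_features.shape[1], 3)
        for _ in range(self.num_steps):
            # Each pass nudges the noisy coordinates using the trunk's conditioning.
            coords = coords - self.denoise(torch.cat([cond, coords], dim=-1))
        return coords  # predicted 3D coordinates per residue

# Usage: one "protein" of 16 residues with random input features.
model = ToyStructureDenoiser()
print(model(torch.randn(1, 16, 64)).shape)  # torch.Size([1, 16, 3])
```

The real system is far more elaborate, but this division of labor, a representation trunk feeding a generative diffusion module, is the methodological change the speech points to.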

AlphaFold 2's prediction accuracy had already doubled compared with its predecessors, and now it has doubled again. Scientists have made comparisons suggesting that this advance could save biological research the equivalent of hundreds of millions of years of work and potentially tens of trillions of dollars. This underscores the immense impact of AIGC.

However, China's research achievements in this field are relatively scarce. Today, TMTPost published a video clip of a speech Professor Yan Ning gave about two years ago, in which she remarked that accurate prediction of protein-related structures seemed unattainable with AI. Today's AlphaFold 3 release appears to have effectively disproved that assessment.

The second is the upcoming release of GPT-5.

I believe the impact of this event will be as significant as the disruptive technological leap brought by AlphaFold 3, if not greater, just as the shock of GPT-4's release surpassed that of GPT-3.

Why has China been able to develop its own versions of these models so rapidly? I attribute this primarily to open-source practices. Before GPT-3, OpenAI operated on open-source principles, and even Google's Transformer work was openly published. After GPT-3, however, OpenAI shifted to closed source.

This indicates a significant leap from GPT-3 to GPT-4, and the forthcoming GPT-5 is poised to achieve another substantial advancement compared to GPT-4, addressing many existing limitations.

When I met OpenAI co-founder and CEO Sam Altman last September, he mentioned that OpenAI had been laying the groundwork for GPT-5 for some time. If GPT-5 merely offered incremental improvements in capability, it wouldn't require such extensive preparation. One fundamental change expected in GPT-5 is separating the inference model from the related data, and potentially introducing its own search engine.

The AI advancements are remarkable. To put it pessimistically, China is far behind. To put it optimistically, China will have the capacity to catch up.

Next, I would like to explain why we argue that China must recognize its position as a follower in AI, refrain from overestimating its capabilities, and instead dedicate itself to diligent learning. We have to face the reality in front of us, and that means clearing up several misconceptions in order to understand where we stand.

Misconception 1: The Gap between China and the United States in AI is Only 1 to 2 Years.

I believe it's imperative to challenge the prevalent belief that the disparity between China and the U.S. in AI amounts to merely 1 to 2 years. Is it truly such a narrow timeframe? And if so, what substantiates this claim? Many argue that China’s performance after the release of GPT-3 in 2020 and that of ChatGPT in 2022 demonstrates our ability to swiftly develop models akin to U.S. innovative products. With the subsequent release of GPT-4, we promptly produced a model on par with it. But does this imply that our gap is indeed only 1 to 2 years? Is this assertion accurate?

I find it somewhat misleading to characterize the gap in such time frames, because they correspond to generational innovation cycles, not to gaps in our capabilities.

Consider this: as long as GPT-5 remains unreleased, we might not be able to develop a similar model on our own even in a decade. Once it is released, we might need two to three years to catch up. Yet the caliber of GPT-5 merely marks a milestone in their innovation and iteration; it says nothing about our own capability level. This distinction is crucial, because it underscores a fundamental gap.

We must understand that this is truly a gap driven by innovation leadership, not a situation where a single model lets us catch up within two years.

Misconception 2: China is the Largest Market for AI Patents and Talent Globally.

We often made this assertion ourselves: during the AI 1.0 era in particular, Chinese investors and entrepreneurs giving speeches in Silicon Valley would proclaim China's AI superiority over the U.S. A common metric supporting this claim is China's status as the largest market for AI patents and talent worldwide.

This claim rests on the volume of AI-related papers published and AI patents filed in China, both of which rank highest globally. But what is the reality?

Looking at this chart of the new generation of global digital technologies, we can see that the majority of the papers are AI-related. China undeniably holds a prominent position in the quantity of AI-related papers. However, when it comes to top-tier papers and citations, we lag behind.

In essence, while we lead globally in the quantity of papers, we fall behind in top-tier and highly cited papers, not only compared with the U.S. but also with countries like Germany, Canada, and Britain.

Now, let's assess our engineering talent.

China indeed produces a substantial number of engineers and computer science professionals from universities. Many tech giants in Silicon Valley actively recruit computer experts from prestigious Chinese institutions such as Tsinghua and Peking University.

However, as of 2022, although China ranked roughly second globally in top-tier researchers, the number of China's top-tier AI researchers was only about one-fifth that of the U.S. And as of 2024, this gap may have widened even further.

Therefore, the reality doesn't align with the notion that China is the world's AI talent powerhouse.

Misconception 3: The Main Obstacle for China's AI Lies in "Bottlenecked Computing Power".

The primary hurdle for Chinese AI is often identified as "bottlenecked computing power." The prevailing belief is that once we acquire relevant chips through various means, we'll reach the required level.

However, allow me to inject a dose of reality: in this phase of AI 2.0 development, computing power alone isn't sufficient. Model innovation capability and data capability are equally critical. Thus, the current reality is that not only is computing power a bottleneck, but so too are the innovation capabilities of our underlying models and our data capacity.

Let's address data capability first. Many assume that China, being a vast market with abundant consumer and corporate behavior data, must possess ample data resources. But I must be frank: much of this data is either irrelevant or inaccessible.

Earlier this year, during a conversation about meteorological data with a Chinese-American scientist who advises the China Meteorological Administration, I mentioned that some companies are promoting models for meteorological calculations. The scientist bluntly stated that almost all of our meteorological data is useless, because historical meteorological data has not been organized, consolidated, and integrated into computable formats.

Currently, China has a significant deficiency in this regard. In the U.S., the most crucial aspect of the AI ecosystem is the development of the data market; in China, there is as yet no mature data market. This underscores a critical aspect of ecosystem building: establishing a robust data market. Without a mature data market, what meaningful calculations can be made?

Chinese model companies may boast leading computing capabilities domestically, but the entire Chinese data market accounts for less than 1% of the global data market. Moreover, whether you consider research data, user and application data, video, or text, the majority of mainstream global data, particularly research and application data, is in English.

Consequently, if we cannot effectively compute with English data, how can we develop competitive large models of our own? This is a significant challenge. That is why I emphasize that the bottleneck we face isn't solely computing power: it spans the entire ecosystem, from computing power to innovation in underlying models, to data capabilities and the establishment of a robust data market. Unfortunately, we are falling behind in all these respects. Factoring in time, it will be extremely challenging to build up these capabilities adequately within ten years.

Misconception 4: Closed-Source Large Models vs. Open-Source Large Models: Which Is Better?

Recently, entrepreneurs and internet personalities have engaged in a debate regarding the superiority of closed-source versus open-source large models. However, I believe this debate is somewhat irrelevant; what truly matters is which approach is more suitable for a given context.

Both open-source and closed-source models come with their own advantages and disadvantages, much like the comparison between iOS (closed-source) and Android (open-source). At present, especially for large language models, whose computations involve tens of billions or even trillions of data points, closed-source models tend to perform significantly better than their open-source counterparts.

For many applications and specific scenarios, not every model needs to reach the scale of tens of billions, so open-source models remain viable to a certain extent.

For entities like OpenAI, aiming for Artificial General Intelligence (AGI), closed-source models may expedite the concentration of resources and funds towards achieving the AGI goal more swiftly and efficiently.

However, for widespread application and increased iterations, open-source large models are also indispensable. Thus, we should transcend the debate over whether open-source or closed-source large models are superior. Instead, the paramount consideration should be whether we possess the capability for innovation and originality, rather than merely imitating at a basic level.

In discussions about a "hundred-model battle" or a "thousand-model battle," if each of our models harbors its own innovative elements and contributes inventive functions within its respective domains, then the quantity of models ceases to be an issue.

Conversely, in a "hundred-model battle" or "thousand-model battle" where innovation is absent and only low-level imitation and replication prevail, there is little need for so many models. The crux of the matter is whether we can genuinely establish ourselves on the global stage in model innovation capability. This warrants careful consideration.

Misconception 5: The Explosion of AI in Major Vertical Industries Will Happen Quickly.

In China, there's often talk about an imminent explosion in vertical industries propelled by AI, with this year being touted as the inaugural year for large-scale model applications to surge. However, I've been cautioning friends that this year likely won't mark the explosion of AI in vertical industries. While it might signify the start of applications, it's not an explosion. Such transformative shifts don't occur overnight because every progression adheres to certain rules, and industrial development follows a distinct pattern.

The fundamental issue is that our overall infrastructure capability hasn't yet met the threshold for widespread industrial applications.

Consider this: even if our Sora-like or other applications reach 50% efficiency, does that mean we can deploy them in 50% of use cases? Not necessarily. If an industrial application demands a 90% efficiency threshold and you are only at 50%, or even 89%, rapid and widespread adoption in that industry is unattainable.

It's important to realize that the bottleneck isn't just China's computing power; it's a global bottleneck that affects American companies too. That's why, despite OpenAI's advancements toward GPT-5 and GPT-6, progress remains sluggish. At their core, large AI models rely on "brute force": sufficiently vast data, computing power, and energy. Without these resources, they hit bottlenecks and progress only inches forward.

Many companies entertain the idea that since Chinese firms acknowledge they trail the U.S. in technological innovation, yet enjoy a larger market and stronger application capabilities, they should prioritize entrepreneurship and application development in pursuit of quick success.

However, I believe this might hold true in the long term, but not necessarily in the short term.

Even OpenAI CEO Sam Altman stated that 95% of startup companies rely on large models for development, but each major iteration of large models replaces a cohort of startup firms.

AI doesn't operate outside the realm of general business laws. So, even if AI is deployed, it won't automatically supplant existing products until foundational capabilities have reached a certain threshold.

This concern was also echoed by the founder of Pika during our conversation earlier this year. When I asked if she considered Runway as Pika's primary competitor, she pointed to OpenAI as her main concern because of their inevitable development of multimodal technology. So, I believe that until foundational capabilities reach a certain level, newly developed AI applications won't necessarily displace existing ones.

Since the fundamental infrastructure capabilities haven't reached the stage of industry transformation, we can't herald a "booming" new era of AI.

Despite claims that China's mobile internet applications are global frontrunners, our current historical juncture is not comparable to the internet era or the explosion phase of mobile internet applications. The current stage of AI development is more akin to Cisco's early phase than to the later stages of internet development.

Today's NVIDIA is like the Cisco of the past, when Cisco dominated the U.S. market and its stock price rose 60-fold in a year. At that time, were there any noteworthy internet companies? Many of today's internet companies had not yet appeared. Only later, as basic infrastructure improved, communication technology advanced from 2G to 4G, and network technology matured, did the mobile internet and long- and short-video applications emerge.

The current state of AI applications is primarily focused on enhancing industrial efficiency, but achieving a complete transformation of industries will require considerable time and patience.

This is why we refer to it as weak artificial intelligence, and China's advantage in its vast market cannot be fully leveraged at present. In the short term, the primary focus remains on content generation-related auxiliary tools, such as search, question answering, text and image processing, and text-to-audio/video conversion.

So, how should we navigate this landscape?

I believe it's imperative to establish a social consensus regarding our actions in the global arena and during the course of AI development.

First and foremost, we must prioritize enhancing fundamental innovation and fostering long-term capacity building.

This involves building a robust ecosystem, beginning with education. Initiatives such as establishing AI education programs, evaluating university education systems, and implementing frameworks for academic openness and collaboration should revolve around fostering innovative technological capabilities in AI. Additionally, we must enhance the foundational innovation capacities required for large model development. Without this groundwork, all other efforts would be akin to "water without a source."

Second, we must adopt a patient approach to navigate the AI explosion cycle across various industrial application scenarios. Every industry transformed by AI undergoes a cyclical process starting from changes in underlying technology, and this transformation won't occur overnight or in a single leap.

I firmly believe that each industry potentially influenced by AI will experience a bottom-up transformation and initiate a new cycle for the industry. It's not about immediate changes at the application layer. This principle applies to sectors such as media, robotics, manufacturing, biopharmaceuticals, and more. While they will all undergo disruptive effects, the ability of our fundamental research capabilities to keep pace becomes paramount.

Every industry's journey begins with building foundational capabilities and infrastructure from the ground up; that is the real industrial cycle.

Thirdly, we need to adopt a more open mindset to embrace the competition and challenges presented by global AI development without limiting ourselves.

While some may argue that the Americans are holding us back, I believe it's essential that we not hinder our own progress. That is why I advocate against engaging in low-level imitative competition. Instead, we should take a more proactive approach to AI innovation, even if that means easing up for now on AI governance, norms, and ethical frameworks, and embracing a more open attitude toward advancement.

I sincerely hope that our advancements in AI research won't follow the beaten path of new energy vehicles. A decade ago there was genuine innovation in new energy vehicles, in intelligent experiences and battery technology; today, even with Xiaomi's entry, we find ourselves stuck in low-level, repetitive competition that hinders further progress.

So, I hope our basic research capabilities and innovation capabilities can progress faster, and we should maintain patience in our endeavors.

Lastly, I'd like to recommend TMTPost's new product, AGI. TMTPost has been a significant contributor and participant in the AI field, and AGI is its latest information offering. AGI primarily focuses on cutting-edge AI information, aggregating global AI technology trends. Through various content formats centered around in-depth analysis, it explores industry trends, technological innovations, and business applications, delivering the latest and most relevant AI insights to enterprises and users. AGI aims to present a comprehensive and dynamic view of the AI landscape.