The Future of Digital China: Impressions from Beijing
By Florian Schneider
July 2025
LAC Director Florian Schneider has written a new LAC Short after visiting the 22nd Chinese Internet Research Conference in Beijing. He discusses the latest AI developments in China, particularly the use of ‘anthropomorphised’ AI in online marketing and e-commerce on platforms such as JD.com. What do scholars think about AI – including its darker sides? What does the future of Digital China hold?
Packages zip across conveyor-belt systems. Robot arms spin and rotate as they allocate goods. Drones buzz across the sky; automated lifters hover across the warehouse floors. Then, fleets of vans and trucks head out from the warehouses into all parts of the country, where uniformed delivery personnel make their way across hypermodern cityscapes or trek through rugged hinterlands. At the end of the journey await the customers, smiling as they gratefully receive their newest purchases.
This is not an Amazon promotional video, though it could just as well be. It is a PR video by the Chinese corporation JD.com, China’s second-largest online retailer. My colleagues and I are treated to this introduction in a mercifully air-conditioned lecture hall on JD’s campus as the sweltering heat beats down on Beijing’s Silicon Valley outside. But the main attraction is an introduction to the corporation’s online marketing vision. As we learn, JD is collaborating with China’s large social media platforms to analyse user engagement and deliver bespoke personal ads through its ‘multi-dimensional real-time behavioral attribution architecture’. As users click and like their way across social-media platforms like RedNote (Xiaohongshu), the algorithms are ticking away in the background to gather their ‘impressions’ and ‘seed’ brands along the way. The goal is to convert those impressions into sales on JD.com, hook users into the e-commerce ecosystem, and increase their ‘Life Time Value’ (LTV) for the corporation. At least that is the hope.
Similar hopes are shared by platforms and e-commerce giants the world over. In a digital environment where users are bombarded with information, the holy grail for advertisers and retailers everywhere is how to capture those eyeballs and turn them into actual purchases. Meta and Amazon, ByteDance and Alibaba: all of these corporations are crunching the numbers and tweaking their algorithms to funnel users towards real-world sales. But now the game is changing. The future of marketing, so we are told, is powered by artificial intelligence (AI). JD is already integrating AI into all steps of its business: AI now measures and analyses user behaviour on social media at the ‘sub-second’ level; it triangulates user search queries; it helps prime users with brands and products based on those behaviours. JD uses AI to generate cutesy animations and short advertising videos for its products, then tailors those ads to user preferences in real time. AI is optimising JD’s logistics to deliver goods quickly and efficiently. And in the future, these efforts will be further flanked by its AI chatbot ‘In-Joy’, as JD’s marketing manager explains: ‘at some point you’ll use chatbots and you won’t even notice the ads, you’re just talking to a really good sales person’. This salesperson, who comes in the shape of the cute and innocent-looking cartoon dog that is JD’s mascot, will ideally be a conversation partner who you know and trust – and who knows you better than you know yourself. Such is the vision of seamless consumerism, super-charged by AI.
It is not just JD that is innovating. China is at the forefront of AI development. When Jensen Huang, CEO of chip manufacturer Nvidia, visited Beijing in July 2025, he courted his partners by saying that ‘there’s so much opportunity, so much confidence in the China market… it all starts with great students and great developers and researchers and China has 50% of the world’s AI researchers. There’s so much entrepreneurial energy, so many companies being created, and so many amazing companies already formed, and so the energy around AI is incredible.’ Indeed, this enthusiasm for AI is not limited to Jensen Huang, co-founder and head of the world’s largest company. It was also on display at the event that brought me to Beijing in July: the 22nd Chinese Internet Research Conference (CIRC), hosted this year by Peking University’s prestigious School of Journalism. On 8 and 9 July, researchers of digital China came together to discuss the future of China’s internet ‘in the era of AI’, culminating in a field trip to JD’s headquarters on the outskirts of Beijing.
The keynotes and papers of this year’s CIRC provided an intriguing snapshot of AI in China today. They explored everything from the political economy of AI, to its design and usage, to its geopolitical impacts, shining a spotlight on the state of the field. Granted, not all of the research on display at CIRC was about AI. Plenty of papers continued the long-standing interest in China’s platform economy, the activities of its highly diverse social media users, and the way that corporate and state actors collaborate to manage the activities of such users. A particularly intriguing question that has emerged in the past years is how China’s influencer economy is altering life outside of China, for instance when Chinese tourists, students, and expats interact with local German, French, or American cultures through the filters of iconic ‘Instagrammable’ visual culture, for example on Xiaohongshu. Such cases are powerful reminders of why scholars in Chinese Studies increasingly speak of ‘Global China’ to highlight how ‘China’ is much more than party and state power: it includes Chinese people, Chinese expectations, Chinese aesthetics, and much more, all circulating around the globe in mind-bogglingly complex networks driven by capital and ideas. What this means for China, and for the world, is an exciting site of current research.
So not everything about digital China boils down to AI, and yet AI was the prominent concern at CIRC, with a few patterns standing out in particular. The first was the question of how different societies regulate AI. Comparisons between the US and China were particularly central to that discussion, reflecting the geopolitical tensions between the two tech superpowers, but EU regulation also received plenty of attention. A major take-away from these explorations is the degree to which ideas about sovereignty and techno-ethics shape Chinese approaches to AI, and how that approach extends the rationales of China’s developmental state. An open question was how readily such ideas travel, and whether we are witnessing norm diffusion, for instance as Chinese actors collaborate with partners in Central or Southeast Asia. The general sense at CIRC was that research should look beyond the three poles of US-China-EU tech regulation to explore how other actors navigate the complexities of the AI era, especially in the so-called Global South. It was promising to see some papers already taking up this challenge, for instance by homing in on Indian regulations or China-Kazakhstan tech relations.
Aside from these discussions of policies and discourses, a recurring concern at CIRC was algorithmic bias. One group of researchers, for instance, took highly involved technical steps to figure out in what ways different Large Language Models (LLMs) hallucinate, and how exposure to such hallucinations might undermine user confidence in different chatbots. Another team analysed the images of people that different generative AI models create, to show that such image generation is racist: the models overwhelmingly skew towards representations of white professionals, and when they do depict other ethnicities, they are steeped in racial stereotypes. Asian faces are depicted as serene and calm, but white faces as energetic and extroverted. Intriguingly, these effects are not limited to models from the US. Chinese models create similar results. Biases were indeed the focus of other studies as well, ranging from the way chatbots represent environmental issues to the way they reproduce geopolitical understandings, all illustrating in their own ways the dangers of using imperfect data sets to train LLMs. A common refrain at CIRC this year was therefore the old computer-science principle: garbage in, garbage out.
The involved empirics on display in such AI studies already point to another pattern: a concern with methodological issues. A great deal of effort is currently being expended on the question of how to open up the black box of various LLMs, trace their inner workings, and extrapolate their effects. The mathematical and computer-science knowledge necessary to explore these issues is often substantial, which can lead to a highly technical discourse that is difficult to follow, especially for readers from humanities and social-science fields. This makes it challenging to create truly interdisciplinary conversations about AI. It also raises the question of how the methodologically involved aspects of the scholarship relate to the bigger social and political questions we need to ask about AI. As Jack Qiu, co-founder of CIRC, pointed out during his closing remarks, the field needs to connect its empiricism with theoretical concerns about the way these technologies interact with our societies. A few studies already move in this direction, for instance by exploring – empirically – how American and Chinese LLMs reproduce biases about sensitive geopolitical topics, and then asking – in a theoretically informed way – what this might do to public understandings of contemporary issues like the war in Ukraine or the political status of Taiwan. Such studies show how important it is to connect sophisticated analyses with philosophical, psychological, and sociological concerns.
In much of the scholarship I had the pleasure of hearing about in Beijing, such socio-political issues seemed an afterthought. This is unfortunate. Over lunches and coffee breaks and dinners, I had plenty of opportunity to discuss with CIRC attendees the darker sides of AI, such as its staggering environmental and energy footprint, its predatory practices of unethical data mining and intellectual property theft, its devastating impacts on precarious labour, and its worrying effects on how different people may see the world. And yet, even when the papers on AI spoke to one of those issues, they rarely went on to properly unpack them. The sense was often that AI was already a fact of life, and that our concerns as scholars should now be how to make AI more effective: less prone to hallucinate, more equitable in who it represents and how, more ethical, more trustworthy.
In a way, this realism is understandable, certainly in China, where AI is quickly becoming integrated into all aspects of life. This is what Nvidia CEO Jensen Huang alluded to during his Beijing visit: ‘AI is a new infrastructure, like electricity, like the internet.’ He is not wrong. But there is a risk of scholarship on AI, in China or elsewhere, taking at face value the vision that drives these developments. This may inadvertently reproduce the ‘solutionism’ that is so pervasive in the industry: if only the algorithms can be tweaked sufficiently and the data cleaned adequately, if only industry regulations can protect privacy and ensure transparency, then these systems will become a benefit to humanity. But is that the case? Can AI’s problems be solved through technical solutions and industry regulation?
I have my doubts. Much of the scholarship in Science and Technology Studies, for instance Kate Crawford’s excellent work, points to broader issues such as the profoundly instrumentalist attitudes of tech entrepreneurs, the worst authoritarian impulses of politicians, the exploitative rationales embedded in the tech, and more. And yet, these problems are often obscured by the very ways in which we discuss AI, especially the widespread tendency to anthropomorphise these systems. My colleague Hu Yong from Peking University rightly pointed out this fallacy in his keynote at CIRC: if we treat AI like human conversation partners, we risk losing sight of the fact that LLMs are probability engines that serve vested commercial and political interests.
This warning echoes that of media scholars like Simone Natale, whose work challenges us to consider the long history of AI as an inherently ‘deceitful medium’: whether it is early attempts like the ‘mechanical Turk’ that made automation seem more intelligent than it was, or the Turing Test that is meant to establish whether a machine is ‘conscious’, or the advent of voice interfaces that seem to give ‘life’ to digital assistants like Siri or Alexa, the field of digital automation has been rife with invitations to view computers as something they are not. If we are to intervene in the AI developments that are already taking place at break-neck speed around the world, then helping make those invitations more compelling is unlikely to provide a fruitful way forward. We need critical scholarship that dismantles the underlying assumptions about AI and that peels back the layers of what drives AI innovation today, and in what directions.
That is a tall order, and the work on display at CIRC this year certainly starts to scratch the surface of what will be required to move this field into productive new areas. Maybe a way forward is precisely the kind of field trip that Peking University organized for CIRC’s participants. There, at the heart of Beijing’s tech industry, work the strategists, the designers, the practitioners of AI development. Asking them what their visions are for the future of AI may provide a crucial puzzle piece. If I were a young scholar starting out today, I’d probably try to get a job in that industry, to see how the puzzle looks from the inside. Because, ultimately, AI is not a technical issue. It is a human issue. Putting the human factor back into the equation is crucial if we want to truly see what big picture the puzzle pieces form.