Nvidia CEO Jensen Huang has downplayed Washington’s concerns that the Chinese military will use advanced U.S. AI tech to improve its capabilities. Mr. Huang said in an interview with CNN that China’s People’s Liberation Army (PLA) will avoid American tech the same way that the U.S.’s armed forces avoid Chinese products.
These remarks come on the heels of the United States Senate’s open letter [PDF] to the CEO, asking him to “refrain from meeting with representatives of any companies that are working with the PRC’s military or intelligence establishment…or are suspected to have engaged in activities that undermine U.S. export controls.”
“…Depriving someone of technology is not a goal, it’s a tactic — and that tactic was not in service of the goal,” said the Nvidia CEO during the interview. “Just like we want the world to be built on the American dollar, using the American dollar as the global standard, we want the American tech stack to be the global standard.” He also added, “In order for America to have AI leadership, it needs to make sure the American tech stack is available to markets all over the world, so that amazing developers, including the ones in China, are able to build on American tech stack so that AI runs best on the American tech stack.”
When CNN’s Fareed Zakaria asked him about the Chinese PLA’s use of this tech, Huang said that it’s not going to be an issue. “The Chinese military [is] no different [from] the American military: [they] will not seek each other’s technology to be built on top [of each other]. They simply can’t rely on it — it could be, of course, limited at any time,” Huang answered. “Not to mention, there’s plenty of computing capacity in China already. If you just think about the number of supercomputers that are in China, built by amazing Chinese engineers, that are already in operation — they don’t need Nvidia’s chips or American tech stacks in order to build their military.”
Chinese operators of these smuggled AI chips would have a harder time getting firmware updates and likely won’t have access to Nvidia’s advanced cloud tools and enterprise platforms. However, because Nvidia still sells export-compliant GPUs to China, the platform and cloud software can still potentially work with the banned higher-power equipment.
Aside from that, it would probably be difficult for the U.S. to disable these AI GPUs remotely, if it ever came to that. Nvidia would have a harder time selling its chips if a remote kill switch existed, which is partly why the U.S. instead has a bill in the works that could force geo-tracking tech onto high-end hardware. And even if such a capability did exist, China could simply air gap the systems to prevent them from being remotely disabled.
AI’s hype may be fading, but its true impact is only now unfolding. From geopolitics to jobs, five key tensions will define its role in reshaping our world.
International cooperation is vital for responsible AI regulation
Private companies lead AI innovation, raising questions on public oversight
Energy-intensive data centers strain resources, outpacing green solutions
When it comes to truly transformative technologies, history shows that we tend to overestimate their short-term impact but underestimate their long-term effects. Artificial intelligence is no exception. After a surge of unprecedented hype, much of it driven by the same technology giants developing these systems, we are now entering a phase of disillusionment. The public is starting to ask: “Where are the promised breakthroughs?” and “Why hasn’t everything changed already?”
But this perception is misleading. Just as no one in the 1990s could have foreseen that the internet would lead people to spend nearly five hours a day on smartphones, AI’s true impact is still unfolding – profound, unpredictable and irreversible.
This report explores five emerging battlegrounds where AI’s long-term influence will be most critical: geopolitics, governance, the environment, the economy and societal cohesion. These areas are already being reshaped, often in ways that are not yet fully visible. Understanding these issues is the first step in preparing for AI’s role, not just as a technological revolution, but as a broader global transformation.
The global AI race: A new geopolitical order
AI is becoming the central arena of geopolitical competition. The United States and China are engaged in a strategic contest to lead in AI development, deployment and governance. The stakes are high: economic dominance, military superiority and influence over global standards.
While the U.S. currently leads in frontier model development and private investment, China is making rapid progress by leveraging vast datasets, state-backed funding and centralized coordination. The European Union, meanwhile, positions itself as the chief regulator, shaping norms even as it lags in capabilities. In 2024, institutions in the U.S. produced 40 notable AI models, surpassing the 15 from China and the three from Europe. Although the U.S. still leads in the number of models, Chinese developments have quickly narrowed the quality gap.
Smaller innovation hubs such as Israel, Singapore and the United Arab Emirates aim to punch above their weight by focusing on strategic niches and ensuring they remain key players in this global race.
As AI advances toward artificial general intelligence (AGI) – a theoretical system that could match human thinking – key questions emerge: Who will control the most powerful AI? Could an AGI breakthrough lead to a new kind of technological hegemony? Might this AI race even spark real-world conflicts, such as a military clash over Taiwan’s semiconductor industry?
Global guardrails are necessary to prevent a race that leads to instability or misuse, but the current trust deficit between major powers makes such cooperation challenging.
Artificial general intelligence (AGI) is the theoretical capability of a machine to understand or learn any intellectual task that a human can perform. This type of AI seeks to emulate the cognitive functions of the human brain. Although it remains a hypothetical concept at present, there is potential for AGI to replicate human-like skills such as reasoning, problem-solving, perception, learning and language comprehension. Research into AGI has been ongoing since the inception of AI development, yet there remains no consensus among scholars on what exactly qualifies as AGI or the most effective approaches to achieve it.
Who governs intelligence?
The most advanced AI systems today are being developed not by governments but by private companies. In the U.S., OpenAI, Google, Anthropic, Meta and others are in a race to push the boundaries of capability and ambition, often outpacing public regulators.
Unlike earlier general-purpose technologies, such as electricity, computing or the internet, where governments played a major role in their initial development, the current wave of AI in the U.S. is largely driven by private enterprises. In China this is partially true as well, though the Chinese Communist Party directs private entities toward developments that serve centralized state interests. This marks a notable shift, as groundbreaking innovation is taking place primarily outside public institutions. It creates a fundamental conflict: corporations control a technology with far-reaching public implications yet operate based on private interests.
While internet-based services are indeed controlled by tech companies, they primarily function as platforms for communication and distribution. AI, in contrast, is a decision-making technology capable of interpreting data, generating content and even acting autonomously. This distinction brings a heightened risk of bias, misuse and unintended consequences. For instance, once such a system exists, an individual could leverage an open-source AGI to autonomously design and deploy a personalized bioweapon, or to manipulate financial markets by identifying vulnerabilities, executing large-scale trades and destabilizing systems faster than regulators can respond.
Like the adage that “war is too important to be left to the generals,” many argue that AI is too important to be left solely to tech companies. Public oversight is not a bureaucratic burden, but a democratic necessity.
As AI progresses toward AGI, despite the ongoing debate about what this concept truly means, governments may be forced to assert stronger control through measures such as licensing requirements, mandatory audits or even nationalization. With systems capable of outthinking humans in various fields, concerns about private ownership arise. How can we ensure that oversight is both competent and democratic?
Moreover, AI, like any digital technology, knows no borders. Its development, deployment and impact are inherently transnational. This means that effective AI regulation requires cross-border cooperation. International organizations such as the Organisation for Economic Co-operation and Development (OECD), which promotes collaboration among countries on ethical and efficient AI governance, should play a key role in establishing standards, aligning principles and preventing a global race to the bottom in regulation.
Last year, there was a notable increase in global collaboration on AI governance. Key organizations, including the OECD, the EU, the United Nations and the African Union, rolled out frameworks that emphasize transparency, trustworthiness and other fundamental principles of responsible AI.
Intelligence at the cost of sustainability?
Training advanced AI models requires immense resources, and the environmental impact cannot be overlooked. Large data centers, which power these advancements, consume enormous amounts of electricity and water. Scientists estimate that in 2022, global data centers consumed around 460 terawatt-hours (TWh) of electricity, roughly equivalent to France’s total annual electricity consumption.
The International Energy Agency forecasts that global electricity demand from data centers will more than double by 2030, reaching approximately 945 TWh. A significant factor driving this surge will be AI, as electricity demand from AI-optimized data centers is expected to increase more than fourfold by 2030.
As the demand for generative AI and real-time inference continues to rise, so does the environmental cost associated with it. This creates a challenging dilemma between advancing AI technology and protecting the environment. AI has the potential to be a valuable tool in addressing environmental issues, whether by optimizing energy grids, modeling climate change or speeding up R&D in green technologies. However, in its current form, AI could be contributing more to the climate crisis than offering solutions. The expansion of computing infrastructure often outpaces the growth of renewable energy, particularly in regions where major cloud service providers are building massive data centers.
Innovations in energy-efficient chips, model optimization and sustainable cooling are advancing, but the environmental cost of creating increasingly powerful AI remains a major worry.
Job creation, displacement and the future of work
“We are being afflicted with a new disease… technological unemployment.” This warning was not issued by a modern-day tech pessimist but by one of the greatest economists of all time, John Maynard Keynes. Writing in 1930, Keynes speculated that within a century, the “economic problem” might be solved, and human beings would no longer need to work.
While his forecast of mass unemployment due to automation has not yet materialized, as technology created new jobs alongside those it displaced, his second prediction raises questions today. A hundred years later, society may be approaching a world defined more by leisure than labor, where material needs are no longer the primary focus.
Many believe that AI holds this promise, or threat, in which AI-driven abundance clashes with fears of a jobless future. Estimates suggest that hundreds of millions of jobs globally could be partially automated in the coming decades. As models grow more capable, entire categories of work may vanish, and with them, human know-how. Historical precedents, from the Industrial Revolution to the internet age, show that economies adjust, often through painful transitions and rising inequality. If AGI becomes reality, the economic stakes will multiply. New social contracts, such as universal basic income or shorter workweeks, may be necessary. Training workers for jobs that do not yet exist and ensuring AI augments rather than replaces human potential will be critical.
Inequality, inclusion and the fabric of trust
AI could serve as a great equalizer. AI tutors might democratize education. AI diagnostics could provide healthcare to remote areas. AI assistants may help underserved populations navigate bureaucracy, learn new skills and access vital services.
But these outcomes are not guaranteed. Without proactive policies, the opposite is more likely to occur. First, at present, access to AI tools is uneven, limiting most benefits to tech-savvy and wealthier individuals.
Second, data biases can reinforce discrimination. Language models often fail to reflect the diversity of human cultures, voices and needs. Although these models merely mirror human biases, overreliance on them and the psychological reluctance to challenge a machine’s judgment can amplify those biases’ effects.
Third, AI systems deployed by for-profit companies may further erode trust. In the absence of transparency, accountability and inclusivity, public distrust in AI systems – and in the entities that deploy them – may lead to backlash.
As AI assumes roles traditionally held by teachers, doctors, judges and bureaucrats, we must ask: Who is included in the design of these systems? Whose values are encoded in the algorithms? And how do we ensure that technology strengthens, rather than fractures, our social fabric?
From hype to responsibility
We are moving beyond the initial hype cycle of AI and entering a more complex and consequential phase. The five tensions outlined above are not merely abstract dilemmas; they represent the key battlegrounds that will determine the future of the AI age. While the world is not yet prepared for this transition, readiness is not a static condition. It is a choice.
Governments need to adopt a long-term vision that extends beyond the next election cycle. Tech companies should prioritize social responsibility over quarterly earnings. Citizens must stay engaged, rather than passively consuming information. These are all big asks. The international community should view AI not merely as a new technology, but as a significant force shaping the 21st century.
The question is not whether the change will come. It is whether we will shape it, or be shaped by it.
More likely: Global AI race escalates, prioritizing power over equity
In this scenario, the global AI race intensifies to the point where technological dominance becomes the primary strategic objective. The U.S. and China, engaged in intense competition, prioritize speed and power over ethics, collaboration or domestic impacts. Regulations are patchwork, reactive and mostly symbolic. Private companies face few restrictions, and governments invest heavily in AI for military and commercial purposes.
As a result, the societal, economic and environmental applications of AI lack coordinated oversight. Inequality worsens, job displacement speeds up without enough safety nets and environmental costs increase. The world undergoes significant AI-driven change but without a clear plan to ensure stability or fairness.
Less likely: AI technology is used democratically for global progress
Here, Western democracies lead in integrating AI development within inclusive, citizen-focused frameworks. Inspired by economist Daron Acemoglu’s vision, governments pursue “democratic people power” – using public policy, education, labor protections and governance to guide AI toward widespread social benefits. International cooperation grows stronger, with organizations such as the OECD, the UN and the EU establishing global standards. AI is used not just for profit or power, but to improve public services, increase worker productivity and tackle issues like climate change and health equity. Although challenges remain, this coordinated effort helps harness AI for the many, not just the few, ensuring democracy, not technological oligarchy, shapes the future.
Policies must block confidential information from being entered into public generative AI systems and must ban unlawful discrimination via AI programs. Court staff and judicial officers must “take reasonable steps” to confirm the accuracy of material, as per a statement published by Reuters, and must disclose whether they used AI if the final version of any publicized written, visual, or audio work was AI-generated.
Courts must implement their respective policies by September 1.
Task force chair Brad Hill told the council in a statement published by Reuters that the rule “strikes the best balance between uniformity and flexibility.” He explained that the task force steered clear of a rule that would dictate court use of the evolving technology.
Illinois, Delaware, and Arizona have also adopted generative AI rules or policies. New York, Georgia, and Connecticut are presently evaluating generative AI use in court.
California’s court system handles around five million cases across 65 courts, with roughly 1,800 judges. The AI task force was established to address the increasing interest in generative AI as well as public concern about its effect on the judiciary; it oversees the development of AI use policy recommendations for the branch.
Meta Platforms’ recent rally has brought its market cap close to the $2 trillion mark.
The digital advertising giant’s upcoming earnings report could help it hit this milestone.
Meta’s ability to deliver strong returns to advertisers with the help of AI tools could help it grow at a faster pace than the end market in the long run, paving the way for more upside.
Meta Platforms (NASDAQ: META) stock has been rallying impressively of late, gaining more than 32% in the past three months amid the broader rally in technology stocks. As a result, Meta’s market cap has jumped to $1.8 trillion as of this writing on July 14, making it the sixth-largest company in the world.
Meta is slated to release its second-quarter results after the market closes on July 31. The company has been able to grow at a faster pace than the digital ad market thanks to the integration of artificial intelligence (AI) tools into its offerings, which could enable it to deliver another solid set of results later this month.
Given that Meta stock is just 11% away from entering the $2 trillion market cap club as I write this, there is a good chance it could achieve that milestone in July, driven by the tech stock rally and a healthy quarterly report.
Let’s look at the reasons why Meta stock is primed for more upside this month and in the long run.
It is worth noting that Meta’s earnings have been better than consensus expectations in each of the last four quarters. One reason is the increase in spending across its family of applications by advertisers. In the first quarter, for instance, Meta reported an impressive increase of 10% year over year in the average price per ad.
Ad impressions also increased by 5% from the year-ago period, which means the company is delivering more ads. This combination of higher pricing per ad and an increase in impressions delivered enabled Meta to report a 37% year-over-year increase in its earnings to $6.43 per share in Q1. However, investors should also note that the company has been aggressively increasing its capital expenditures (capex) to bolster its AI infrastructure.
It expects to spend $68 billion on capex in 2025, at the midpoint of its guidance range. That would be a massive increase over its 2024 capex of $39 billion. This explains why analysts are expecting Meta’s earnings to increase at a slower year-over-year pace of 13% for the second quarter to $5.84 per share. While the increased investment in AI-focused data center infrastructure is undoubtedly likely to weigh on Meta’s bottom line in the short run, the higher returns its AI investments are generating on the advertising front could help it beat the market’s bottom-line expectations. And beating expectations often sends a stock up, as investors react with excitement and optimism.
Meta management points out that users are now spending more time on its applications thanks to AI-recommended content. In the six months that ended March 31, Meta saw the time spent on Facebook and Instagram increase by 7% and 6%, respectively. The increase in user engagement tells us why it has been able to serve more ads.
Moreover, the gains advertisers have seen on the dollars they are spending on Meta’s applications are also quite solid. A couple of months ago, Meta said it “assessed the impact of [its] new AI-driven advertising tools and found that they drive a 22% improvement in return on ad spend for advertisers. This means that for every dollar U.S. advertisers spend with Meta, they see a $4.52 return when they use [its] new AI-driven advertising tools.”
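Taken at face value, those two figures also imply a baseline return for campaigns that do not use the new tools. The quick calculation below is a back-of-the-envelope inference from the quoted numbers, not a figure Meta reports:

\[
\text{implied baseline ROAS} \approx \frac{\$4.52}{1.22} \approx \$3.70 \ \text{per dollar of ad spend}
\]

In other words, the claimed 22% uplift is measured against an already healthy return, which is what makes the AI tools an easy sell to advertisers.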
Unsurprisingly, Meta saw a 30% increase in the number of advertisers using its AI tools to create campaigns in the first quarter. So, there is indeed a solid possibility that Meta will clock healthy growth in ads delivered and the average price per ad in Q2, which could pave the way for a better-than-expected jump in its bottom line and help the company cross the $2 trillion milestone. I expect it to hit that market cap before Aug. 1.
Looking ahead, Meta expects that it will allow advertisers to completely automate the creation and execution of ad campaigns by the end of next year. As such, there is a good chance that Meta’s earnings growth could accelerate from 2026, following this year’s projected increase of 7%.
However, there is a strong possibility that Meta’s earnings growth will outpace market expectations, thanks to AI. That’s why it won’t be surprising to see its market cap jumping to higher levels in the long run, as the digital ad market is expected to clock a robust annual growth rate of 15% through 2030, and Meta has the ability to keep growing at a faster pace than the end market.