
Brand Stories

Jensen Huang says China’s military will avoid U.S. AI tech — ‘they don’t need Nvidia’s chips or American tech stacks in order to build their military’



Nvidia CEO Jensen Huang has downplayed Washington’s concerns that the Chinese military will use advanced U.S. AI tech to improve its capabilities. In an interview with CNN, Mr. Huang said China’s People’s Liberation Army (PLA) will avoid American tech in much the same way the U.S. armed forces avoid Chinese products.

This announcement comes on the heels of the United States Senate’s open letter [PDF] to the CEO, asking him to “refrain from meeting with representatives of any companies that are working with the PRC’s military or intelligence establishment…or are suspected to have engaged in activities that undermine U.S. export controls.”





Navigating five critical global challenges – GIS Reports



AI’s hype may be fading, but its true impact is only now unfolding. From geopolitics to jobs, five key tensions will define its role in reshaping our world.

June 18: A robot plays chess with an attendee during the Super AI Conference in Singapore. © Getty Images

In a nutshell

  • International cooperation is vital for responsible AI regulation
  • Private companies lead AI innovation, raising questions on public oversight
  • Energy-intensive data centers strain resources, outpacing green solutions

When it comes to truly transformative technologies, history shows that we tend to overestimate their short-term impact but underestimate their long-term effects. Artificial intelligence is no exception. After a surge of unprecedented hype, much of it driven by the same technology giants developing these systems, we are now entering a phase of disillusionment. The public is starting to ask: “Where are the promised breakthroughs?” and “Why hasn’t everything changed already?”

But this is misleading. Just as no one in the 1990s could have foreseen that the internet would lead people to spend nearly five hours a day on smartphones, AI’s true impact is still unfolding – profound, unpredictable and irreversible.

This report explores five emerging battlegrounds where AI’s long-term influence will be most critical: geopolitics, governance, the environment, the economy and societal cohesion. These areas are already being reshaped, often in ways that are not yet fully visible. Understanding these issues is the first step in preparing for AI’s role, not just as a technological revolution, but as a broader global transformation.

The global AI race: A new geopolitical order

AI is becoming the central arena of geopolitical competition. The United States and China are engaged in a strategic contest to lead in AI development, deployment and governance. The stakes are high: economic dominance, military superiority and influence over global standards.

While the U.S. currently leads in frontier model development and private investment, China is making rapid progress by leveraging vast datasets, state-backed funding and centralized coordination. The European Union, meanwhile, positions itself as the chief regulator, shaping norms even as it lags in capabilities. In 2024, institutions in the U.S. produced 40 notable AI models, surpassing the 15 from China and the three from Europe. Although the U.S. still leads in the number of models, Chinese developments have quickly narrowed the quality gap.

Smaller innovation hubs such as Israel, Singapore and the United Arab Emirates aim to punch above their weight by focusing on strategic niches and ensuring they remain key players in this global race.

As AI advances toward artificial general intelligence (AGI) – a theoretical system that could match human thinking – key questions emerge: Who will control the most powerful AI? Could an AGI breakthrough lead to a new kind of technological hegemony? Might this AI race even spark real-world conflicts, such as a military clash over Taiwan’s semiconductor industry?

Global guardrails are necessary to prevent a race that leads to instability or misuse, but the current trust deficit between major powers makes such cooperation challenging.


Facts & figures

What is artificial general intelligence?

Artificial general intelligence (AGI) is the theoretical capability of a machine to understand or learn any intellectual task that a human can perform. This type of AI seeks to emulate the cognitive functions of the human brain. Although it remains a hypothetical concept at present, there is potential for AGI to replicate human-like skills such as reasoning, problem-solving, perception, learning and language comprehension. Research into AGI has been ongoing since the inception of AI development, yet there remains no consensus among scholars on what exactly qualifies as AGI or the most effective approaches to achieve it.

Who governs intelligence?

The most advanced AI systems today are being developed not by governments but by private companies. In the U.S., OpenAI, Google, Anthropic, Meta and others are in a race to push the boundaries of capability and ambition, often outpacing public regulators.

Unlike earlier general-purpose technologies, such as electricity, computing or the internet, where governments played a major role in their initial development, the current wave of AI in the U.S. is largely driven by private enterprises. In China this is partially true, though the Chinese Communist Party is directing private entities toward developments that serve centralized state interests. This marks a notable shift, as groundbreaking innovation is taking place primarily outside public institutions. It creates a fundamental conflict: corporations control a technology with far-reaching public implications yet operate based on private interests.

While internet-based services are indeed controlled by tech companies, they primarily function as platforms for communication and distribution. AI, in contrast, is a decision-making technology capable of interpreting data, generating content and even acting autonomously. This distinction brings a heightened risk of bias, misuse and unintended consequences. For instance, once developed, an individual could leverage an open-source AGI system to autonomously generate and deploy a personalized bioweapon or manipulate financial markets by identifying vulnerabilities, executing large-scale trades and destabilizing systems faster than regulators can respond.

Like the adage that “war is too important to be left to the generals,” many argue that AI is too important to be left solely to tech companies. Public oversight is not a bureaucratic burden, but a democratic necessity.

As AI progresses toward AGI, despite the ongoing debate about what this concept truly means, governments may be forced to assert stronger control through measures such as licensing requirements, mandatory audits or even nationalization. With systems capable of outthinking humans in various fields, concerns about private ownership arise. How can we ensure that oversight is both competent and democratic?

Moreover, AI, like any digital technology, knows no borders. Its development, deployment and impact are inherently transnational. This means that effective AI regulation requires cross-border cooperation. International organizations such as the Organisation for Economic Co-operation and Development (OECD), which promotes collaboration among countries on ethical and efficient AI governance, should play a key role in establishing standards, aligning principles and preventing a global race to the bottom in regulation.

Last year, there was a notable increase in global collaboration on AI governance. Key organizations, including the OECD, the EU, the United Nations and African Union, have rolled out frameworks that emphasize transparency, trustworthiness and other fundamental principles of responsible AI.

Intelligence at the cost of sustainability?

Training advanced AI models requires immense resources, and the environmental impact cannot be overlooked. Large data centers, which power these advancements, consume enormous amounts of electricity and water. Scientists estimate that in 2022, global data centers consumed around 460 terawatt-hours (TWh) of electricity, roughly equivalent to the total annual energy consumption of France.

The International Energy Agency forecasts that global electricity demand from data centers will more than double by 2030, reaching approximately 945 TWh. A significant factor driving this surge will be AI, as electricity demand from AI-optimized data centers is expected to increase more than fourfold by 2030.
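As a rough check on these figures (taking the ~460 TWh figure for 2022 and the ~945 TWh forecast for 2030 as given above), the implied annual growth rate of data center electricity demand can be sketched as:

```python
# Back-of-the-envelope check on data center electricity demand growth,
# using the ~460 TWh (2022) and ~945 TWh (2030 forecast) figures cited
# above. Illustrative only; the real trajectory will not be smooth.

def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between two values over a span of years."""
    return (end / start) ** (1 / years) - 1

growth = cagr(460, 945, 2030 - 2022)
print(f"Total growth by 2030: {945 / 460:.2f}x")  # just over 2x, matching the forecast
print(f"Implied CAGR: {growth:.1%}")
```

A roughly 9-10 percent compound annual rate may look modest, but sustained over a decade it is what turns "more than double" into a structural strain on grids.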

Data Center Alley, a vast area spanning 30 square miles just outside Washington, D.C., is home to over 200 data centers and has an energy consumption comparable to that of the city of Boston. © Getty Images

As the demand for generative AI and real-time inference continues to rise, so does the environmental cost associated with it. This creates a challenging dilemma between advancing AI technology and protecting the environment. AI has the potential to be a valuable tool in addressing environmental issues, whether by optimizing energy grids, modeling climate change or speeding up R&D in green technologies. However, in its current form, AI could be contributing more to the climate crisis than offering solutions. The expansion of computing infrastructure often outpaces the growth of renewable energy, particularly in regions where major cloud service providers are building massive data centers.

Innovations in energy-efficient chips, model optimization and sustainable cooling are advancing, but the environmental cost of creating increasingly powerful AI remains a major worry.

Job creation, displacement and the future of work

“We are being afflicted with a new disease… technological unemployment.” This warning was not issued by a modern-day tech pessimist but by one of the greatest economists of all time, John Maynard Keynes. Writing in 1930, Keynes speculated that within a century, the “economic problem” might be solved, and human beings would no longer need to work.

While his forecast of mass unemployment due to automation has not yet materialized, as technology created new jobs alongside those it displaced, his second prediction raises questions today. A hundred years later, society may be approaching a world defined more by leisure than labor, where material needs are no longer the primary focus.

Many believe that AI holds this promise, or threat, in which AI-driven abundance clashes with fears of a jobless future. Estimates suggest that hundreds of millions of jobs globally could be partially automated in the coming decades. As models grow more capable, entire categories of work may vanish, and with them, human know-how. Historical precedents, from the Industrial Revolution to the internet age, show that economies adjust, often through painful transitions and rising inequality. If AGI becomes reality, the economic stakes will multiply. New social contracts, such as universal basic income or shorter workweeks, may be necessary. Training workers for jobs that do not yet exist and ensuring AI augments rather than replaces human potential will be critical.

Inequality, inclusion and the fabric of trust

AI could serve as a great equalizer. AI tutors might democratize education. AI diagnostics could provide healthcare to remote areas. AI assistants may help underserved populations navigate bureaucracy, learn new skills and access vital services.

But these outcomes are not guaranteed. Without proactive policies, the opposite is more likely to occur. First, at present, access to AI tools is uneven, limiting most benefits to tech-savvy and wealthier individuals.

Second, data biases can reinforce discrimination. Language models often fail to reflect the diversity of human cultures, voices and needs. Because these models mirror human biases, overreliance on them, combined with the psychological reluctance of humans to challenge a machine’s judgment, can amplify those biases.

Read more on artificial intelligence

Third, AI systems deployed by for-profit companies may further erode trust. In the absence of transparency, accountability and inclusivity, public distrust in AI systems – and in the entities that deploy them – may lead to backlash.

As AI assumes roles traditionally held by teachers, doctors, judges and bureaucrats, we must ask: Who is included in the design of these systems? Whose values are encoded in the algorithms? And how do we ensure that technology strengthens, rather than fractures, our social fabric?

From hype to responsibility

We are moving beyond the initial hype cycle of AI and entering a more complex and consequential phase. The five tensions outlined above are not merely abstract dilemmas; they represent the key battlegrounds that will determine the future of the AI age. While the world is not yet prepared for this transition, readiness is not a static condition. It is a choice.

Governments need to adopt a long-term vision that extends beyond the next election cycle. Tech companies should prioritize social responsibility over quarterly earnings. Citizens must stay engaged, rather than passively consuming information. These are all big asks. The international community should view AI not merely as a new technology but as a significant force shaping the 21st century.

The question is not whether the change will come. It is whether we will shape it, or be shaped by it.


Scenarios

More likely: Global AI race escalates, prioritizing power over equity

In this scenario, the global AI race intensifies to the point where technological dominance becomes the primary strategic objective. The U.S. and China, engaged in intense competition, prioritize speed and power over ethics, collaboration or domestic impacts. Regulations are patchwork, reactive and mostly symbolic. Private companies face few restrictions, and governments invest heavily in AI for military and commercial purposes.

As a result, the societal, economic and environmental applications of AI lack coordinated oversight. Inequality worsens, job displacement accelerates without adequate safety nets, and environmental costs mount. The world undergoes significant AI-driven change but without a clear plan to ensure stability or fairness.

Less likely: AI technology is used democratically for global progress

Here, Western democracies lead in integrating AI development within inclusive, citizen-focused frameworks. Inspired by economist Daron Acemoglu’s vision, governments pursue “democratic people power” – using public policy, education, labor protections and governance to guide AI toward widespread social benefits. International cooperation grows stronger, with organizations such as the OECD, the UN and the EU establishing global standards. AI is used not just for profit or power, but to improve public services, increase worker productivity and tackle issues like climate change and health equity. Although challenges remain, this coordinated effort helps harness AI for the many, not just the few, ensuring democracy, not technological oligarchy, shapes the future.






California Judicial Council implements rule for generative artificial intelligence use in court



Policies must bar confidential information from being entered into public generative AI systems and must prohibit unlawful discrimination via AI programs. Court staff and judicial officers must also “take reasonable steps” to confirm the accuracy of AI-produced material, according to a statement reported by Reuters. They must likewise disclose whether AI was used if the final version of any published written, visual, or audio work was AI-generated.

Courts must implement their respective policies by September 1.

Task force chair Brad Hill told the council in a statement published by Reuters that the rule “strikes the best balance between uniformity and flexibility.” He explained that the task force steered clear of a rule that would dictate court use of the evolving technology.

Illinois, Delaware, and Arizona have also taken on generative AI rules or policies. New York, Georgia, and Connecticut are presently evaluating generative AI use in court.

California’s court system handles some five million cases across 65 courts, with around 1,800 judges. The AI task force was established to address the growing interest in generative AI, as well as public concern about its effect on the judiciary; it oversees the development of AI use policy recommendations for the branch.




This Artificial Intelligence (AI) Stock Could Hit a $2 Trillion Valuation by July 31



  • Meta Platforms’ recent rally has brought its market cap close to the $2 trillion mark.

  • The digital advertising giant’s upcoming earnings report could help it hit this milestone.

  • Meta’s ability to deliver strong returns to advertisers with the help of AI tools could help it grow at a faster pace than the end market in the long run, paving the way for more upside.


Meta Platforms (NASDAQ: META) stock has been rallying impressively of late, gaining more than 32% in the past three months amid the broader rally in technology stocks. As a result, Meta’s market cap has jumped to $1.8 trillion as of this writing on July 14, making it the sixth-largest company in the world.

Meta is slated to release its second-quarter results after the market closes on July 31. The company has been able to grow at a faster pace than the digital ad market thanks to the integration of artificial intelligence (AI) tools into its offerings, which could enable it to deliver another solid set of results later this month.

Given that Meta stock is just 11% away from entering the $2 trillion market cap club as I write this, there is a good chance it could achieve that milestone in July, driven by the tech stock rally and a healthy quarterly report.
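The "11% away" figure follows directly from the market-cap numbers cited above; a minimal sketch of the arithmetic, using the article's ~$1.8 trillion valuation as the input:

```python
# Quick check on the market-cap math: the gain Meta stock would need
# to climb from roughly $1.8 trillion to the $2 trillion milestone.
# Figures are as of the article's writing (July 14); illustrative only.

current_cap = 1.8e12  # ~$1.8 trillion market cap
target_cap = 2.0e12   # $2 trillion milestone

required_gain = target_cap / current_cap - 1
print(f"Required gain: {required_gain:.1%}")  # ~11.1%
```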

META data by YCharts. E = earnings reports.

Let’s look at the reasons why Meta stock is primed for more upside this month and in the long run.

It is worth noting that Meta’s earnings have been better than consensus expectations in each of the last four quarters. One reason is the increase in spending across its family of applications by advertisers. In the first quarter, for instance, Meta reported an impressive increase of 10% year over year in the average price per ad.


Ad impressions also increased by 5% from the year-ago period, which means the company is delivering more ads. This combination of higher pricing per ad and an increase in impressions delivered enabled Meta to report a 37% year-over-year increase in its earnings to $6.43 per share in Q1. However, investors should also note that the company has been aggressively increasing its capital expenditures (capex) to bolster its AI infrastructure.

It expects to spend $68 billion on capex in 2025, at the midpoint of its guidance range. That would be a massive increase over its 2024 capex of $39 billion. This explains why analysts expect Meta’s earnings to grow at a slower year-over-year pace of 13% for the second quarter, to $5.84 per share. While the increased investment in AI-focused data center infrastructure is likely to weigh on Meta’s bottom line in the short run, the higher returns its AI investments are generating on the advertising front could help it beat the market’s bottom-line expectations, and an earnings beat would give the stock a timely catalyst.
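The growth figures quoted in the last few paragraphs can be tied together with some quick arithmetic. All inputs below come from the article itself; the combination of price and volume growth into an implied revenue effect is a standard approximation, not a figure Meta reported:

```python
# Back-of-the-envelope checks on the figures cited above.
# Q1 ad metrics reported by Meta (year over year):
price_growth = 0.10       # average price per ad, up 10%
impression_growth = 0.05  # ad impressions delivered, up 5%

# Pricing and volume compound: implied growth in ad revenue from
# these two drivers alone (an approximation, not a reported figure).
implied_revenue_growth = (1 + price_growth) * (1 + impression_growth) - 1
print(f"Implied ad revenue growth: {implied_revenue_growth:.1%}")  # ~15.5%

# Planned capex jump: $68B (2025 guidance midpoint) vs. $39B (2024).
capex_growth = 68 / 39 - 1
print(f"Capex increase: {capex_growth:.0%}")  # ~74%
```

The roughly 74% capex jump dwarfing a mid-teens revenue tailwind is exactly why analysts pencil in slower near-term earnings growth despite healthy ad momentum.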



Copyright © 2025 AISTORIZ. For enquiries email at prompt@travelstoriz.com