Funding & Investment in Travel
Data center tweaks could unlock 76 GW of new power capacity in the US
Tech companies, data center developers, and power utilities have been panicking over the prospect of runaway demand for electricity in the U.S. in the face of unprecedented growth in AI.
Amid all the hand-wringing, a new paper published this week suggests the situation might not be so dire if data center operators and other heavy electricity users curtail their use ever so slightly.
By limiting power drawn from the grid to 90% of the maximum for a couple of hours at a time — for a total of about a day per year — new users could unlock 76 gigawatts of capacity in the United States. That’s more than all data centers use globally, according to Goldman Sachs. To put that number into perspective, it’s about 10% of peak demand in the U.S.
If data centers were to curtail their use more, they could unlock progressively more capacity.
Such programs aren’t exactly new.
For decades, utilities have encouraged big electricity users like shopping malls, universities, and factories to curtail their use when demand peaks, like on hot summer days. Those users might turn down the air conditioning or turn off thirsty machines for a few hours, and in return, the utility gives them a credit on their bill.
Data centers have largely sat on the sidelines, instead opting to maintain uptime and performance levels for their customers. The study argues that data centers could be ideal demand-response participants because they have the potential to be flexible.
There are a few ways that data centers can trim their power use, the study says. One is temporal flexibility, or shifting computing tasks to times of lower demand. AI model training, for example, could easily be rescheduled to accommodate a brief curtailment.
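Temporal flexibility can be sketched in a few lines. The toy scheduler below, using made-up hourly demand figures, moves a deferrable job (such as a training run) into the lowest-demand hour:

```python
# Toy illustration of temporal flexibility: pick the start hour with the
# lowest forecast grid demand for a deferrable computing job.
# The hourly demand values are invented for illustration.

def best_start_hour(forecast_mw: list[float]) -> int:
    """Return the index of the hour with the lowest forecast demand."""
    return min(range(len(forecast_mw)), key=forecast_mw.__getitem__)

forecast = [700, 640, 620, 760, 740, 680]  # hourly grid demand, MW
print(best_start_hour(forecast))  # hour 2 has the lowest demand
```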
Another is spatial flexibility, where companies shift their computational tasks to other regions that aren’t experiencing high demand. Even within a single data center, operators can consolidate loads and shut down a portion of their servers.
And if tasks are mission critical and can’t be delayed or shifted, data center operators can always turn to alternative power sources to make up for any curtailment. Batteries are ideally suited for this since even modestly sized installations can provide several hours of power almost instantaneously.
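The battery math is simple back-of-the-envelope arithmetic. The figures below are illustrative assumptions, not numbers from the study:

```python
# Back-of-the-envelope sizing for a battery that covers a curtailment event.
# All figures are hypothetical, chosen only to illustrate the arithmetic.

def battery_energy_needed_mwh(facility_mw: float,
                              curtail_fraction: float,
                              event_hours: float) -> float:
    """Energy a battery must supply to keep a facility at full load while
    its grid draw is reduced by `curtail_fraction` for `event_hours`."""
    shortfall_mw = facility_mw * curtail_fraction
    return shortfall_mw * event_hours

# A hypothetical 100 MW data center curtailing grid draw by 10% for 2 hours
# needs a battery that can deliver 10 MW for 2 hours:
print(battery_energy_needed_mwh(100, 0.10, 2))  # 20.0 (MWh)
```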
Some companies have already participated in ad hoc versions of these strategies.
Google has used its carbon-aware computing platform, originally developed to trim emissions, to enable demand response. Enel X has worked with data centers to tap into the batteries in their uninterruptible power supplies (UPS) to stabilize the grid. And PG&E is offering to connect data centers to the grid quicker if operators agree to participate in a demand response program.
These tweaks won’t completely eliminate the need for new sources of power. But they might turn a potentially catastrophic situation — in which half of all new AI servers are underpowered — into one that’s more easily solved.
Hyperlocal tourism offers answer for overtourism, sustainability
Ukraine considers easing travel ban for men ages 18-24, parliament speaker says
Ukrainian lawmakers are considering whether to allow men ages 18 to 24 to travel abroad, a move that would ease current wartime restrictions, Chairman of the Verkhovna Rada Ruslan Stefanchuk said on July 19.
According to Suspilne, the Verkhovna Rada (Ukraine’s parliament) is reviewing proposals from both individual members and the parliamentary Committee on National Security, Defense, and Intelligence. Under current martial law, men in that age group are not subject to mobilization but are still barred from leaving the country unless they qualify for exemptions, such as medical reasons or official business.
Stefanchuk emphasized the need to find a legal mechanism to uphold basic rights for young men who are not eligible for conscription. These include the right to pursue education, work opportunities, or reunite with family abroad.
“There are people aged 18 to 25 who are not subject to mobilization, but they cannot exercise their rights,” Stefanchuk said. “We must find a mechanism to enable them to exercise their rights.”
Several proposals are under discussion to liberalize exit rules, including the establishment of clear criteria and permitting certain categories of individuals to travel. However, no final decision has been made. Lawmakers expect the committee to issue its recommendations soon.
Since Russia’s full-scale invasion in 2022, men aged 18 to 60 have been prohibited from leaving Ukraine without special exemptions due to martial law. These include university students studying abroad, humanitarian volunteers, and drivers transporting aid.
Talks about easing restrictions for non-mobilized men to travel outside Ukraine began in 2023 but have yet to produce a comprehensive policy change.
Ukraine war latest: EU agrees on ‘one of its strongest’ Russia sanctions packages after Slovakia lifts veto
Key developments on July 18:
- EU agrees on ‘one of its strongest’ Russia sanctions packages after Slovakia lifts veto
- UK sanctions Russian intelligence units involved in cyberattacks
- Ukrainian drones reportedly attack Moscow for second night in a row
- Ukrainian hackers wipe databases at Russia’s Gazprom in major cyberattack, intelligence source says
- Ukraine raises flags in villages near Dnipropetrovsk Oblast’s borders, refuting Russia’s claims of capture

5 key questions your developers should be asking about MCP
The Model Context Protocol (MCP) has become one of the most talked-about developments in AI integration since its introduction by Anthropic in late 2024. If you’re tuned into the AI space at all, you’ve likely been inundated with developer “hot takes” on the topic. Some think it’s the best thing ever; others are quick to point out its shortcomings. In reality, there’s some truth to both.
One pattern I’ve noticed with MCP adoption is that skepticism typically gives way to recognition: This protocol solves genuine architectural problems that other approaches don’t. I’ve gathered a list of questions below that reflect the conversations I’ve had with fellow builders who are considering bringing MCP to production environments.
1. Why should I use MCP over other alternatives?
Of course, most developers considering MCP are already familiar with implementations like OpenAI’s custom GPTs, vanilla function calling, Responses API with function calling, and hardcoded connections to services like Google Drive. The question isn’t really whether MCP fully replaces these approaches — under the hood, you could absolutely use the Responses API with function calling that still connects to MCP. What matters here is the resulting stack.
Despite all the hype about MCP, here’s the straight truth: It’s not a massive technical leap. MCP essentially “wraps” existing APIs in a way that’s understandable to large language models (LLMs). Sure, a lot of services already have an OpenAPI spec that models can use. For small or personal projects, the objection that MCP “isn’t that big a deal” is pretty fair.
The practical benefit becomes obvious when you’re building something like an analysis tool that needs to connect to data sources across multiple ecosystems. Without MCP, you’re required to write custom integrations for each data source and each LLM you want to support. With MCP, you implement the data source connections once, and any compatible AI client can use them.
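That benefit is just integration arithmetic: bespoke connectors grow multiplicatively, while a shared protocol grows additively. A minimal sketch, with hypothetical counts:

```python
# Integration count: custom connectors vs. a shared protocol like MCP.
# The counts (4 LLM clients, 6 data sources) are hypothetical.

def custom_integrations(n_llms: int, n_sources: int) -> int:
    # One bespoke connector per (LLM, data source) pair.
    return n_llms * n_sources

def mcp_integrations(n_llms: int, n_sources: int) -> int:
    # Each data source is exposed once as an MCP server, and each
    # LLM client implements the protocol once.
    return n_llms + n_sources

print(custom_integrations(4, 6))  # 24 bespoke connectors to maintain
print(mcp_integrations(4, 6))     # 10 protocol implementations
```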
2. Local vs. remote MCP deployment: What are the actual trade-offs in production?
This is where you really start to see the gap between reference servers and reality. Local MCP deployment using the stdio transport is dead simple to get running: Spawn subprocesses for each MCP server and let them talk through stdin/stdout. Great for a technical audience, difficult for everyday users.
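The framing behind that transport is just newline-delimited JSON-RPC 2.0 messages over the pipes. A simplified sketch of the message shape (real MCP servers are spawned as subprocesses; this shows only the encoding):

```python
import json

# Minimal sketch of stdio-transport message framing: host and server
# exchange newline-delimited JSON-RPC 2.0 messages over stdin/stdout.
# The message shape is simplified for illustration.

def encode_message(method: str, params: dict, req_id: int) -> bytes:
    msg = {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}
    return (json.dumps(msg) + "\n").encode()

def decode_message(line: bytes) -> dict:
    return json.loads(line.decode())

wire = encode_message("tools/list", {}, 1)
print(decode_message(wire)["method"])  # tools/list
```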
Remote deployment addresses the scaling problem but opens up a can of worms around transport complexity. The original HTTP+SSE approach was replaced by the March 2025 streamable HTTP update, which tries to reduce complexity by routing everything through a single /messages endpoint. Even so, this isn’t really needed for most companies that are likely to build MCP servers.
But here’s the thing: A few months later, support is spotty at best. Some clients still expect the old HTTP+SSE setup, while others work with the new approach — so, if you’re deploying today, you’re probably going to support both. Protocol detection and dual transport support are a must.
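Dual support usually comes down to routing on what the client does first. A rough sketch — the endpoint paths here are assumptions for illustration, not mandated names:

```python
# Sketch of dual-transport detection for an MCP server.
# Endpoint paths ("/sse", "/messages") are illustrative assumptions.

def pick_transport(method: str, path: str) -> str:
    # Legacy HTTP+SSE clients open a long-lived GET stream on a
    # dedicated SSE endpoint; streamable-HTTP clients send JSON-RPC
    # to a single /messages endpoint.
    if method == "GET" and path == "/sse":
        return "http+sse"
    if path == "/messages":
        return "streamable-http"
    return "unknown"

print(pick_transport("GET", "/sse"))        # http+sse
print(pick_transport("POST", "/messages"))  # streamable-http
```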
Authorization is another variable you’ll need to consider with remote deployments. The OAuth 2.1 integration requires mapping tokens between external identity providers and MCP sessions. While this adds complexity, it’s manageable with proper planning.
3. How can I be sure my MCP server is secure?
This is probably the biggest gap between the MCP hype and what you actually need to tackle for production. Most showcases or examples you’ll see use local connections with no authentication at all, or they handwave the security by saying “it uses OAuth.”
The MCP authorization spec does leverage OAuth 2.1, which is a proven open standard. But there’s always going to be some variability in implementation. For production deployments, focus on the fundamentals:
- Proper scope-based access control that matches your actual tool boundaries
- Direct (local) token validation
- Audit logs and monitoring for tool use
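The first of those fundamentals — scope-based access control matched to tool boundaries — can be sketched as a simple allowlist check. Tool and scope names below are hypothetical:

```python
# Sketch of scope-based tool authorization for an MCP server.
# Tool names and scope strings are hypothetical examples.

TOOL_SCOPES = {
    "search_documents": {"docs:read"},
    "update_record":    {"records:write"},
}

def authorize(tool: str, token_scopes: set) -> bool:
    """Allow a tool call only if the token carries every required scope."""
    required = TOOL_SCOPES.get(tool)
    if required is None:
        return False  # unknown tools are denied by default
    return required <= token_scopes

print(authorize("search_documents", {"docs:read"}))  # True
print(authorize("update_record", {"docs:read"}))     # False
```

Denying unknown tools by default keeps a newly added tool from being callable before anyone has decided which scopes it needs.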
However, the biggest security consideration with MCP is around tool execution itself. Many tools need (or think they need) broad permissions to be useful, which means sweeping scope design (like a blanket “read” or “write”) is inevitable. Even without a heavy-handed approach, your MCP server may access sensitive data or perform privileged operations — so, when in doubt, stick to the best practices recommended in the latest MCP auth draft spec.
4. Is MCP worth investing resources and time into, and will it be around for the long term?
This gets to the heart of any adoption decision: Why should I bother with a flavor-of-the-quarter protocol when everything AI is moving so fast? What guarantee do you have that MCP will be a solid choice (or even around) in a year, or even six months?
Well, look at MCP’s adoption by major players: Google supports it with its Agent2Agent protocol, Microsoft has integrated MCP with Copilot Studio and is even adding built-in MCP features for Windows 11, and Cloudflare is more than happy to help you fire up your first MCP server on their platform. Similarly, the ecosystem growth is encouraging, with hundreds of community-built MCP servers and official integrations from well-known platforms.
In short, the learning curve isn’t terrible, and the implementation burden is manageable for most teams or solo devs. It does what it says on the tin. So, why would I be cautious about buying into the hype?
MCP is fundamentally designed for current-gen AI systems, meaning it assumes you have a human supervising a single-agent interaction. Multi-agent and autonomous tasking are two areas MCP doesn’t really address; in fairness, it doesn’t really need to. But if you’re looking for an evergreen yet still somehow bleeding-edge approach, MCP isn’t it. It’s standardizing something that desperately needs consistency, not pioneering in uncharted territory.
5. Are we about to witness the “AI protocol wars”?
Signs are pointing toward some tension down the line for AI protocols. While MCP has carved out a tidy audience by being early, there’s plenty of evidence it won’t be alone for much longer.
Take Google’s Agent2Agent (A2A) protocol launch with 50-plus industry partners. It’s complementary to MCP, but the timing — just weeks after OpenAI publicly adopted MCP — doesn’t feel coincidental. Was Google cooking up an MCP competitor when they saw the biggest name in LLMs embrace it? Maybe a pivot was the right move. But it’s hardly speculation to think that, with features like multi-LLM sampling soon to be released for MCP, A2A and MCP may become competitors.
Then there’s the sentiment from today’s skeptics about MCP being a “wrapper” rather than a genuine leap forward for API-to-LLM communication. This is another variable that will only become more apparent as consumer-facing applications move from single-agent/single-user interactions and into the realm of multi-tool, multi-user, multi-agent tasking. What MCP and A2A don’t address will become a battleground for another breed of protocol altogether.
For teams bringing AI-powered projects to production today, the smart play is probably hedging protocols. Implement what works now while designing for flexibility. If AI makes a generational leap and leaves MCP behind, your work won’t suffer for it. The investment in standardized tool integration absolutely will pay off immediately, but keep your architecture adaptable for whatever comes next.
Ultimately, the dev community will decide whether MCP stays relevant. It’s MCP projects in production, not specification elegance or market buzz, that will determine if MCP (or something else) stays on top for the next AI hype cycle. And frankly, that’s probably how it should be.
Meir Wahnon is a co-founder at Descope.