Solar notches another win as Microsoft adds 475 MW to power its AI data centers
Microsoft is adding another 475 megawatts to its already considerable renewable-powered portfolio to feed the growing energy appetite of its data centers. The company recently signed a deal with energy provider AES for three solar projects across the Midwest, one each in Illinois, Michigan, and Missouri.
The ramp-up reflects the immediacy of Microsoft’s needs. When it comes to powering data centers, it’s hard to argue with solar: quick to install, inexpensive, and modular, it’s a natural fit for tech companies that need electricity now.
Microsoft has been tapping solar with some regularity. In February, it contracted 389 megawatts from three solar projects across Illinois and Texas. And late last year, the company announced it was anchoring a $9 billion renewable power coalition that’s organized by Acadia. The Redmond-based company’s own renewable portfolio already includes over 34 GW of capacity.
While tech companies have shown increasing interest in nuclear power in recent months, the cost and speed advantages of renewables have kept solar deals flowing.
Though renewable power on its own doesn’t have the same consistency as nuclear or natural gas, developers are increasingly pairing it with battery storage to provide around-the-clock electricity.
The combination is more expensive than solar or wind on its own, but given the rapid declines in the cost of both solar and batteries, so-called hybrid power plants are beginning to encroach on the price of new natural gas generating capacity.
So far, new nuclear prices have remained significantly higher than either renewables or natural gas power plants.
For tech companies and data center developers, time is of the essence. Demand for new computing power has risen at such a rate that up to half of all new AI servers could be underpowered by 2027. Most new natural gas and nuclear power plants aren’t scheduled to come online until several years after that.
But renewables can start supplying power quickly, with utility-scale solar projects starting to produce electrons in about 18 months.
That speed has proven attractive, leading to some massive deals: Microsoft, for example, signed a deal with Brookfield Asset Management last summer for 10.5 gigawatts of renewable capacity in the U.S. and Europe, all of which will be delivered by 2030.
Ukraine considers easing travel ban for men ages 18-24, parliament speaker says
Ukrainian lawmakers are considering whether to allow men ages 18 to 24 to travel abroad, a move that would ease current wartime restrictions, Chairman of the Verkhovna Rada Ruslan Stefanchuk said on July 19.
According to Suspilne, the Verkhovna Rada (Ukraine’s parliament) is reviewing proposals from both individual members and the parliamentary Committee on National Security, Defense, and Intelligence. Under current martial law, men in that age group are not subject to mobilization but are still barred from leaving the country unless they qualify for exemptions, such as medical reasons or official business.
Stefanchuk emphasized the need to find a legal mechanism to uphold basic rights for young men who are not eligible for conscription. These include the right to pursue education, work opportunities, or reunite with family abroad.
“There are people aged 18 to 25 who are not subject to mobilization, but they cannot exercise their rights,” Stefanchuk said. “We must find a mechanism to enable them to exercise their rights.”
Several proposals are under discussion to liberalize exit rules, including the establishment of clear criteria and permitting certain categories of individuals to travel. However, no final decision has been made. Lawmakers expect the committee to issue its recommendations soon.
Since Russia’s full-scale invasion in 2022, men aged 18 to 60 have been prohibited from leaving Ukraine without special exemptions due to martial law. These include university students studying abroad, humanitarian volunteers, and drivers transporting aid.
Talks about easing restrictions for non-mobilized men to travel outside Ukraine began in 2023 but have yet to produce a comprehensive policy change.
Ukraine war latest: EU agrees on ‘one of its strongest’ Russia sanctions packages after Slovakia lifts veto
Key developments on July 18:
- EU agrees on ‘one of its strongest’ Russia sanctions packages after Slovakia lifts veto
- UK sanctions Russian intelligence units involved in cyberattacks
- Ukrainian drones reportedly attack Moscow for second night in a row
- Ukrainian hackers wipe databases at Russia’s Gazprom in major cyberattack, intelligence source says
- Ukraine raises flags in villages near Dnipropetrovsk Oblast’s borders, refuting Russia’s claims of capture

5 key questions your developers should be asking about MCP
The Model Context Protocol (MCP) has become one of the most talked-about developments in AI integration since its introduction by Anthropic in late 2024. If you’re tuned into the AI space at all, you’ve likely been inundated with developer “hot takes” on the topic. Some think it’s the best thing ever; others are quick to point out its shortcomings. In reality, there’s some truth to both.
One pattern I’ve noticed with MCP adoption is that skepticism typically gives way to recognition: This protocol solves genuine architectural problems that other approaches don’t. I’ve gathered a list of questions below that reflect the conversations I’ve had with fellow builders who are considering bringing MCP to production environments.
1. Why should I use MCP over other alternatives?
Of course, most developers considering MCP are already familiar with implementations like OpenAI’s custom GPTs, vanilla function calling, Responses API with function calling, and hardcoded connections to services like Google Drive. The question isn’t really whether MCP fully replaces these approaches — under the hood, you could absolutely use the Responses API with function calling that still connects to MCP. What matters here is the resulting stack.
Despite all the hype about MCP, here’s the straight truth: It’s not a massive technical leap. MCP essentially “wraps” existing APIs in a way that’s understandable to large language models (LLMs). Sure, a lot of services already have an OpenAPI spec that models can use. For small or personal projects, the objection that MCP “isn’t that big a deal” is pretty fair.
The practical benefit becomes obvious when you’re building something like an analysis tool that needs to connect to data sources across multiple ecosystems. Without MCP, you’re required to write custom integrations for each data source and each LLM you want to support. With MCP, you implement the data source connections once, and any compatible AI client can use them.
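To make that concrete, here’s a minimal sketch of exposing one data source as an MCP tool with the MCP Python SDK’s FastMCP helper. Treat the import path and defaults as assumptions about the current SDK; the tool name and the fake revenue lookup are placeholders for a real data source.

```python
# Minimal MCP server sketch: one tool wrapping one (fake) data source.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("analytics")

@mcp.tool()
def monthly_revenue(month: str) -> float:
    """Return revenue for a month like '2025-06' (placeholder data, not a real warehouse)."""
    fake_warehouse = {"2025-05": 1_200_000.0, "2025-06": 1_350_000.0}
    return fake_warehouse.get(month, 0.0)

if __name__ == "__main__":
    mcp.run()  # stdio transport by default in the SDK assumed here; any compatible client can call the tool
```

Once this runs, you don’t rewrite the connection for each model vendor; any MCP-aware client discovers and calls `monthly_revenue` the same way.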
2. Local vs. remote MCP deployment: What are the actual trade-offs in production?
This is where you really start to see the gap between reference servers and reality. Local MCP deployment using the stdio transport is dead simple to get running: Spawn subprocesses for each MCP server and let them talk through stdin/stdout. Great for a technical audience, difficult for everyday users.
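As a rough illustration, the sketch below spawns a hypothetical stdio server and exchanges one newline-delimited JSON-RPC message with it. The server script name and the protocol version string are assumptions, and a real client would normally go through an MCP SDK rather than raw pipes.

```python
import json
import subprocess

# Spawn one MCP server as a child process (script name is a placeholder).
proc = subprocess.Popen(
    ["python", "my_mcp_server.py"],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    text=True,
)

def send(msg: dict) -> dict:
    """Write one newline-delimited JSON-RPC message and read one reply."""
    proc.stdin.write(json.dumps(msg) + "\n")
    proc.stdin.flush()
    return json.loads(proc.stdout.readline())

# The handshake roughly follows MCP's initialize exchange.
print(send({
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2025-03-26",
        "capabilities": {},
        "clientInfo": {"name": "demo-client", "version": "0.1"},
    },
}))
```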
Remote deployment obviously addresses the scaling problem but opens up a can of worms around transport complexity. The original HTTP+SSE approach was replaced by a March 2025 streamable HTTP update, which tries to reduce complexity by routing everything through a single endpoint. Even so, this isn’t really needed for most companies that are likely to build MCP servers.
But here’s the thing: A few months later, support is spotty at best. Some clients still expect the old HTTP+SSE setup, while others work with the new approach — so, if you’re deploying today, you’re probably going to support both. Protocol detection and dual transport support are a must.
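One hedged way to handle that detection from the client side is to try the newer single-endpoint transport first and fall back if it’s rejected. The status-code handling below is an assumption about how a given server signals the older transport, not a guaranteed recipe.

```python
import json
import urllib.error
import urllib.request

INIT = json.dumps({
    "jsonrpc": "2.0", "id": 1, "method": "initialize",
    "params": {"protocolVersion": "2025-03-26", "capabilities": {},
               "clientInfo": {"name": "probe", "version": "0.1"}},
}).encode()

def detect_transport(url: str) -> str:
    """POST an initialize request to the unified endpoint; if the server
    rejects it with a 4xx, fall back to the legacy HTTP+SSE pairing."""
    req = urllib.request.Request(
        url, data=INIT, method="POST",
        headers={"Content-Type": "application/json",
                 "Accept": "application/json, text/event-stream"},
    )
    try:
        urllib.request.urlopen(req, timeout=5)
        return "streamable-http"
    except urllib.error.HTTPError as err:
        if 400 <= err.code < 500:
            return "http+sse"  # older setups: GET an SSE stream, POST messages separately
        raise
```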
Authorization is another variable you’ll need to consider with remote deployments. The OAuth 2.1 integration requires mapping tokens between external identity providers and MCP sessions. While this adds complexity, it’s manageable with proper planning.
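A minimal sketch of that token-to-session mapping, assuming PyJWT and an RS256-signed access token; the issuer, audience, and in-memory session store are placeholders you would swap for your own identity provider and storage.

```python
import uuid
import jwt  # PyJWT

SESSIONS: dict[str, dict] = {}  # session_id -> validated claims (in-memory, for the sketch only)

def start_mcp_session(bearer_token: str, idp_public_key: str) -> str:
    """Validate an external IdP's access token, then bind its claims to a new MCP session."""
    claims = jwt.decode(
        bearer_token,
        idp_public_key,
        algorithms=["RS256"],
        audience="https://mcp.example.com",  # assumption: the audience your server registers
        issuer="https://idp.example.com",    # assumption: your identity provider
    )
    session_id = str(uuid.uuid4())
    SESSIONS[session_id] = claims  # scopes now travel with the session, not the raw token
    return session_id
```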
3. How can I be sure my MCP server is secure?
This is probably the biggest gap between the MCP hype and what you actually need to tackle for production. Most showcases or examples you’ll see use local connections with no authentication at all, or they handwave the security by saying “it uses OAuth.”
The MCP authorization spec does leverage OAuth 2.1, which is a proven open standard. But there’s always going to be some variability in implementation. For production deployments, focus on the fundamentals (a minimal sketch of these checks follows the list):
- Proper scope-based access control that matches your actual tool boundaries
- Direct (local) token validation
- Audit logs and monitoring for tool use
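Here’s one possible shape for the scope checks and audit logging above, using only the standard library. The tool names, scope strings, and the assumption that token claims were already validated upstream are all illustrative.

```python
import logging
from functools import wraps

logging.basicConfig(level=logging.INFO)
audit = logging.getLogger("mcp.audit")

# Illustrative mapping: tool name -> scope required to call it.
TOOL_SCOPES = {"read_report": "reports:read", "delete_report": "reports:write"}

def require_scope(tool_name: str):
    """Wrap a tool handler so it refuses calls whose (already validated)
    token claims lack the mapped scope, and audit-logs every attempt."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(token_claims: dict, *args, **kwargs):
            needed = TOOL_SCOPES[tool_name]
            granted = set(token_claims.get("scope", "").split())
            if needed not in granted:
                audit.warning("denied %s for sub=%s", tool_name, token_claims.get("sub"))
                raise PermissionError(f"missing scope {needed}")
            audit.info("allowed %s for sub=%s", tool_name, token_claims.get("sub"))
            return fn(token_claims, *args, **kwargs)
        return wrapper
    return decorator

@require_scope("read_report")
def read_report(token_claims: dict, report_id: str) -> str:
    return f"contents of {report_id}"  # placeholder for the real data-source call
```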
However, the biggest security consideration with MCP is around tool execution itself. Many tools need (or think they need) broad permissions to be useful, which means sweeping scope design (like a blanket “read” or “write”) is inevitable. Even without a heavy-handed approach, your MCP server may access sensitive data or perform privileged operations — so, when in doubt, stick to the best practices recommended in the latest MCP auth draft spec.
4. Is MCP worth investing resources and time into, and will it be around for the long term?
This gets to the heart of any adoption decision: Why should I bother with a flavor-of-the-quarter protocol when everything in AI is moving so fast? What guarantee do you have that MCP will be a solid choice (or even around) in a year, or even six months?
Well, look at MCP’s adoption by major players: Google supports it with its Agent2Agent protocol, Microsoft has integrated MCP with Copilot Studio and is even adding built-in MCP features for Windows 11, and Cloudflare is more than happy to help you fire up your first MCP server on their platform. Similarly, the ecosystem growth is encouraging, with hundreds of community-built MCP servers and official integrations from well-known platforms.
In short, the learning curve isn’t terrible, and the implementation burden is manageable for most teams or solo devs. It does what it says on the tin. So, why would I be cautious about buying into the hype?
MCP is fundamentally designed for current-gen AI systems, meaning it assumes you have a human supervising a single-agent interaction. Multi-agent and autonomous tasking are two areas MCP doesn’t really address; in fairness, it doesn’t really need to. But if you’re looking for an evergreen yet still somehow bleeding-edge approach, MCP isn’t it. It’s standardizing something that desperately needs consistency, not pioneering in uncharted territory.
5. Are we about to witness the “AI protocol wars?”
Signs are pointing toward some tension down the line for AI protocols. While MCP has carved out a tidy audience by being early, there’s plenty of evidence it won’t be alone for much longer.
Take Google’s Agent2Agent (A2A) protocol launch with 50-plus industry partners. It’s complementary to MCP, but the timing — just weeks after OpenAI publicly adopted MCP — doesn’t feel coincidental. Was Google cooking up an MCP competitor when they saw the biggest name in LLMs embrace it? Maybe a pivot was the right move. But it’s hardly speculation to think that, with features like multi-LLM sampling soon to be released for MCP, A2A and MCP may become competitors.
Then there’s the sentiment from today’s skeptics about MCP being a “wrapper” rather than a genuine leap forward for API-to-LLM communication. This is another variable that will only become more apparent as consumer-facing applications move from single-agent/single-user interactions and into the realm of multi-tool, multi-user, multi-agent tasking. What MCP and A2A don’t address will become a battleground for another breed of protocol altogether.
For teams bringing AI-powered projects to production today, the smart play is probably hedging protocols. Implement what works now while designing for flexibility. If AI makes a generational leap and leaves MCP behind, your work won’t suffer for it. The investment in standardized tool integration absolutely will pay off immediately, but keep your architecture adaptable for whatever comes next.
Ultimately, the dev community will decide whether MCP stays relevant. It’s MCP projects in production, not specification elegance or market buzz, that will determine if MCP (or something else) stays on top for the next AI hype cycle. And frankly, that’s probably how it should be.
Meir Wahnon is a co-founder at Descope.
North Korea’s ‘Benidorm’ resort bans foreign visitors – despite bid to bring tourists | World | News
International visitors have been banned from North Korea’s massive new beach resort following its grand opening. The Wonsan Kalma complex, unveiled by leader Kim Jong-un at the end of June and dubbed the North Korean Benidorm, boasts a capacity for nearly 20,000 guests and includes accommodation, a shoreline, sporting venues, and restaurants.
Kim declared it would be remembered as “one of the greatest successes this year” and hailed the location as “the proud first step” towards advancing tourism. However, only North Koreans can experience the facilities. DPR Korea Tour, a platform operated by the nation’s tourism officials, announced that the resort “is temporarily not receiving foreign tourists”.
No additional information was provided regarding the reasons behind the ban or how long it would last. Shortly after its launch, a limited number of Russians were the only foreign tourists to visit.
North Korea may have stopped international visitor access after a Russian journalist wrote a damning story about the Wonsan Kalma resort.
Accompanying the Russian foreign minister, the journalist suggested the people at the resort were government operatives rather than genuine guests.
Kim has been pushing to make North Korea a tourist destination as part of efforts to revive the isolated country’s struggling economy.
Wonsan Kalma, with a 2.5-mile beach, is one of Kim’s most-discussed tourism projects, and state media reported that North Korea also plans to build large tourism areas in other parts of the country.
Photos shared by state media show the leader taking in the views and watching someone go down a slide.
Despite the dangers Westerners may face if allowed to visit, Brits have voiced their desire to go.
Holiday planner On The Beach opened a registration link for people to express their interest, and it racked up more than 250 sign-ups from Brits within a month.