

Minister demands overhaul of UK’s leading AI institute

The technology secretary has demanded an overhaul of the UK’s leading artificial intelligence institute in a wide-ranging letter that calls for a switch in focus to defence and national security, as well as leadership changes.

Peter Kyle said it was clear further action was needed to ensure the government-backed Alan Turing Institute met its full potential.

In a letter to ATI’s chair, seen by the Guardian, Kyle said the institute should be changed to prioritise defence, national security and “sovereign capabilities” – a reference to nation states being able to control their own AI technology.

The call for new priorities implies a downgrading of ATI’s focus on health and the environment, which are two of three core subjects for the institute, alongside defence and security, under its “Turing 2.0” strategy.

“Moving forward, defence and national security projects should form a core of ATI’s activities, and relationships with the UK’s security, defence, and intelligence communities should be strengthened accordingly,” Kyle wrote.

Making clear that the Turing 2.0 strategy did not meet government requirements, Kyle indicated that he expected leadership changes at ATI.

“To realise this vision, it is imperative that the ATI’s leadership reflects the institute’s reformed focus,” he wrote in a letter first reported by Politico. “While we acknowledge the success of the current leadership in delivering reform at the institute during a difficult period, careful consideration should be given to the importance of an executive team who possesses a relevant background and sector knowledge to lead this transition.”

ATI is chaired by Doug Gurr, the former head of Amazon’s UK operations and interim chair of the UK’s competition watchdog.

The institute is going through a restructuring under its chief executive, Jean Innes, which one in five staff have said puts ATI’s credibility in “serious jeopardy”. At the end of last year, ATI employed 440 staff, but it has since launched a redundancy process.

Although the institute is nominally independent, it recently secured £100m from the government in a five-year funding deal. The letter said ATI’s “longer-term funding arrangement” could be reviewed next year. The government would maintain its current level of research and development funding from national security and defence for the next three years, Kyle wrote, and would increase the number of defence and national security staff embedded in the institute.

Dame Wendy Hall, a professor of computer science at the University of Southampton and the co-chair of a 2017 government AI review, said ATI would cease to be a national institute under the government’s proposed changes.

“If the institute focuses on defence and security it ceases to be a national institute on AI,” Hall said. “It’s not broad enough. If the government wants an AI institute that does defence and security then it should just call it that.”

In February, the government signalled a focus on national security in its AI strategy by renaming the AI Safety Institute, established under Rishi Sunak’s premiership, as the AI Security Institute.


Kyle’s letter also referred to the government’s 50-point AI action plan as a “testament” to the UK’s AI ambitions. The plan’s targets include a 20-fold increase in the amount of AI computing power under public control by 2030, and embedding AI in the public sector.

A spokesperson for ATI said the institute was focused on “high-impact missions” that support the UK, including in defence and national security.

“We share the government’s vision of AI transforming the UK for the better, welcome the recognition of our critical role, and will continue to work closely with the government to support its priorities and deliver science and innovation for the public good,” said the spokesperson.

The Department for Science, Innovation and Technology said the changes would be a “natural next step” for ATI following the safety institute renaming.

“These proposed changes would not only ensure the Alan Turing Institute delivers real value for money – it would see it taking on a key role in safeguarding our national security,” a department spokesperson said.





Identity Crisis: Artificial Intelligence engines ignore black skin tones and African Hair texture – The Tanzania Times



Musk Hints at Kid-Friendly Version of AI Chatbot Grok

Elon Musk’s artificial intelligence (AI) chatbot is about to spawn a new generation.

“We’re going to make Baby Grok, an app dedicated to kid-friendly content,” the billionaire wrote in a post on his X social media platform Saturday (July 19) night without offering further details.

Grok is the name of the AI model used by Musk’s xAI startup, introduced in November 2023 and touted for its sarcastic sense of humor as well as its reasoning capabilities. 

Musk’s comments about a kid-friendly version of the tool came a little more than a week after xAI debuted its newest version of Grok — Grok 4 — which the CEO called “the smartest AI in the world,” adding that in “some ways, it’s terrifying.”

As PYMNTS reported, Musk likened Grok 4 to a “super-genius child” in which the “right values” of truthfulness and a sense of honor must be instilled so society can benefit from its advances. 

Musk said Grok 4 was built to perform at the “post-graduate level” in many topics simultaneously, which no person can do. It can generate realistic visuals and tackle complex analytical tasks.

In addition, Musk said Grok 4 would score perfectly on the SAT and graduate-level exams such as the GRE, even without seeing the questions ahead of time.

Grok also encountered controversy this month when the chatbot praised Adolf Hitler in a conversation on X. xAI has since said it has taken action to ban hate speech.

In other AI news, PYMNTS recently wrote about the wave of funding for AI startups. For example, the AI search company Perplexity saw its valuation reach $18 billion following its latest funding round of $100 million.

“Capital raised by Perplexity, which has tripled its valuation over the past year, points to robust investor interest in the competitive AI search market, especially for leading startups,” that report said. “Apple reportedly was interested in acquiring Perplexity.”

An even bigger funding round last week involved Thinking Machines, founded by former OpenAI CTO Mira Murati. That company achieved a $10 billion valuation after raising $2 billion.

“We’re building multimodal AI that works with how you naturally interact with the world — through conversation, through sight, through the messy way we collaborate,” Murati said in a post on X.

Finally, reports emerged last week that Anthropic had been approached by investors with funding offers that could value the startup at $100 billion. The company’s valuation hit $61.5 billion earlier this year after a $3.5 billion fundraise.

 





India can reframe the Artificial Intelligence debate


Less than three years ago, ChatGPT dragged artificial intelligence (AI) out of research laboratories and into living rooms, classrooms and parliaments. Leaders sensed the shock waves instantly. Despite an already crowded summit calendar, three global gatherings on AI followed in quick succession. When New Delhi hosts the AI Impact Summit in February 2026, it can do more than break attendance records. It can show that governments, not just corporations, can steer AI for the public good.

India can bridge the divide

But the geopolitical climate is far from smooth. War continues in Ukraine. West Asia teeters between flare-ups. Trade walls are rising faster than regulators can respond. Even the Paris AI Summit (February 2025), meant to unify, ended in division. The United States and the United Kingdom rejected the final text. China welcomed it. The very forum meant to protect humanity’s digital future faces the risk of splintering. India has the standing and the credibility to bridge these divides.

India’s Ministry of Electronics and Information Technology began preparations in earnest. In June, it launched a nationwide consultation through the MyGov platform. Students, researchers, startups, and civil society groups submitted ideas.

The brief was simple: show how AI can advance inclusive growth, improve development, and protect the planet. These ideas will shape the agenda and the final declaration. This turned the consultation into capital and gave India a democratic edge no previous host has enjoyed. Here are five suggestions rooted in India’s digital experience. They are modest in cost but can be rich in credibility.

Pledges and report cards

First, measure what matters. India’s digital tools prove that technology can serve everyone. Aadhaar provides secure identity to more than a billion people. The Unified Payments Interface (UPI) moves money in seconds. The Summit in 2026 can borrow that spirit. Each delegation could announce one clear goal to achieve within 12 months. A company might cut its data centre electricity use. A university could offer a free AI course for rural girls. A government might translate essential health advice into local languages using AI. All pledges could be listed on a public website and tracked through a scoreboard a year later. Report cards are more interesting than press releases.

Second, bring the global South to the front row. Half of humanity was missing from the leaders’ photo session at the first summit. That must not happen again. As a leader of the Global South, India must endeavour to have as wide a participation as possible.

India should also push for an AI for Billions Fund, seeded by development banks and Gulf investors, which could pay for cloud credits, fellowships and local language datasets. India could launch a multilingual model challenge for, say, 50 underserved languages and award prizes before the closing dinner. The message is simple: talent is everywhere, not just in California or Beijing.

Third, create a common safety check. Since the Bletchley Summit in 2023 (the AI Safety Summit), experts have urged red-teaming and stress tests. Many national AI safety institutes have sprung up, but no shared checklist exists. India could broker them into a Global AI Safety Collaborative that shares red-team scripts, incident logs and stress tests for any model above an agreed compute threshold. India’s own institute could post an open evaluation kit, with code and datasets, for testing bias and robustness.

Fourth, offer a usable middle road on rules. The United States fears heavy regulation. Europe rolls out its AI Act. China trusts state control. Most nations want something in between. India can voice that balance. It can draft a voluntary frontier AI code of conduct. Base it on the Seoul pledge but add teeth. Publish external red team results within 90 days. Disclose compute once it crosses a line. Provide an accident hotline. Voluntary yet specific.

Fifth, avoid fragmentation. Splintered summits serve no one. The U.S. and China eye each other across the frontier AI race. New Delhi cannot erase that tension but can blunt it. The summit agenda must be broad, inclusive, and focused on global good.

The path for India

India cannot craft a global AI authority in one week and should not try. It can stitch together what exists and make a serious push to share AI capacity with the global majority. If India can turn participation into progress, it will not just be hosting a summit. It will reframe its identity on a cutting-edge issue.

Syed Akbaruddin is a former Indian Permanent Representative to the United Nations and, currently, Dean, Kautilya School of Public Policy, Hyderabad



