Brand Stories

Israel launches NIS 1m. fund for AI regulatory sandboxes – The Jerusalem Post

Brand Stories

Identity Crisis: Artificial Intelligence engines ignore black skin tones and African Hair texture – The Tanzania Times

Brand Stories

Musk Hints at Kid-Friendly Version of AI Chatbot Grok

Elon Musk’s artificial intelligence (AI) chatbot is about to spawn a new generation.

“We’re going to make Baby Grok, an app dedicated to kid-friendly content,” the billionaire wrote in a post on his X social media platform Saturday (July 19) night without offering further details.

Grok is the name of the AI model used by Musk’s xAI startup, introduced in November 2023 and touted for its sarcastic sense of humor as well as its reasoning capabilities. 

Musk’s comments about a kid-friendly version of the tool came a little more than a week after xAI debuted its newest version of Grok — Grok 4 — which the CEO called “the smartest AI in the world,” adding that in “some ways, it’s terrifying.”

As PYMNTS reported, Musk likened Grok 4 to a “super-genius child” in whom the “right values” of truthfulness and a sense of honor must be instilled so society can benefit from its advances.

Musk said Grok 4 was built to perform at the “post-graduate level” in many topics simultaneously, which no person can do. It can generate realistic visuals and tackle complex analytical tasks.

In addition, Musk said Grok 4 would score perfectly on the SAT and on graduate-level exams like the GRE even without seeing the questions ahead of time.

Grok also encountered controversy this month when the chatbot praised Adolf Hitler in a conversation on X. xAI has since said it has taken action to ban hate speech.

In other AI news, PYMNTS recently wrote about the wave of funding for AI startups. For example, the AI search company Perplexity saw its valuation reach $18 billion following its latest funding round of $100 million.

“Capital raised by Perplexity, which has tripled its valuation over the past year, points to robust investor interest in the competitive AI search market, especially for leading startups,” that report said. “Apple reportedly was interested in acquiring Perplexity.”

An even bigger funding round last week involved Thinking Machines, founded by former OpenAI CTO Mira Murati. That company achieved a $10 billion valuation after raising $2 billion.

“We’re building multimodal AI that works with how you naturally interact with the world — through conversation, through sight, through the messy way we collaborate,” Murati said in a post on X.

Finally, reports emerged last week that Anthropic had been approached by investors with funding offers that could value the startup at $100 billion. The company’s valuation hit $61.5 billion earlier this year after a $3.5 billion fundraise.

 




Brand Stories

India can reframe the Artificial Intelligence debate

‘India must make a serious push to share AI capacity with the global majority’ | Photo Credit: Getty Images

Less than three years ago, ChatGPT dragged artificial intelligence (AI) out of research laboratories and into living rooms, classrooms and parliaments. Leaders sensed the shock waves instantly. Despite an already crowded summit calendar, three global gatherings on AI followed in quick succession. When New Delhi hosts the AI Impact Summit in February 2026, it can do more than break attendance records. It can show that governments, not just corporations, can steer AI for the public good.

India can bridge the divide

But the geopolitical climate is far from smooth. War continues in Ukraine. West Asia teeters between flare-ups. Trade walls are rising faster than regulators can respond. Even the Paris AI Summit (February 2025), meant to unify, ended in division. The United States and the United Kingdom rejected the final text. China welcomed it. The very forum meant to protect humanity’s digital future faces the risk of splintering. India has the standing and the credibility to bridge these divides.

India’s Ministry of Electronics and Information Technology began preparations in earnest. In June, it launched a nationwide consultation through the MyGov platform. Students, researchers, startups, and civil society groups submitted ideas.

The brief was simple: show how AI can advance inclusive growth, improve development, and protect the planet. These ideas will shape the agenda and the final declaration, turning the consultation into capital and giving India a democratic edge no previous host has enjoyed. Here are five suggestions rooted in India’s digital experience. They are modest in cost but can be rich in credibility.

Pledges and report cards

First, measure what matters. India’s digital tools prove that technology can serve everyone. Aadhaar provides secure identity to more than a billion people. The Unified Payments Interface (UPI) moves money in seconds. The Summit in 2026 can borrow that spirit. Each delegation could announce one clear goal to achieve within 12 months. A company might cut its data centre electricity use. A university could offer a free AI course for rural girls. A government might translate essential health advice into local languages using AI. All pledges could be listed on a public website and tracked through a scoreboard a year later. Report cards are more interesting than press releases.

Second, bring the global South to the front row. Half of humanity was missing from the leaders’ photo session at the first summit. That must not happen again. As a leader of the Global South, India must endeavour to have as wide a participation as possible.

India should also push for an AI for Billions Fund, seeded by development banks and Gulf investors, which could pay for cloud credits, fellowships and local language datasets. India could launch a multilingual model challenge for, say, 50 underserved languages and award prizes before the closing dinner. The message is simple: talent is everywhere, and not just in California or Beijing.

Third, create a common safety check. Since the Bletchley Summit (the AI Safety Summit, 2023), experts have urged red-teaming and stress tests. Many national AI safety institutes have sprung up, but no shared checklist exists. India could endeavour to broker them into a Global AI Safety Collaborative that shares red-team scripts, incident logs and stress tests on any model above an agreed compute line. India’s own institute can post an open evaluation kit with code and datasets for bias and robustness testing.

Fourth, offer a usable middle road on rules. The United States fears heavy regulation. Europe rolls out its AI Act. China trusts state control. Most nations want something in between. India can voice that balance. It can draft a voluntary frontier AI code of conduct. Base it on the Seoul pledge but add teeth. Publish external red-team results within 90 days. Disclose compute once it crosses a line. Provide an accident hotline. Voluntary yet specific.

Fifth, avoid fragmentation. Splintered summits serve no one. The U.S. and China eye each other across the frontier AI race. New Delhi cannot erase that tension but can blunt it. The summit agenda must be broad, inclusive, and focused on global good.

The path for India

India cannot craft a global AI authority in one week and should not try. It can stitch together what exists and make a serious push to share AI capacity with the global majority. If India can turn participation into progress, it will not just be hosting a summit. It will reframe its identity on a cutting-edge issue.

Syed Akbaruddin is a former Indian Permanent Representative to the United Nations and is currently Dean of the Kautilya School of Public Policy, Hyderabad.


