UK switches on AI supercomputer that will help spot sick cows and skin cancer
Britain’s new £225m national artificial intelligence supercomputer will be used to spot sick dairy cows in Somerset, improve the detection of skin cancer on brown skin and help create wearable AI assistants that could help riot police anticipate danger.

Scientists hope Isambard-AI – named after the 19th-century engineer of groundbreaking bridges and railways, Isambard Kingdom Brunel – will unleash a wave of AI-powered technological, medical and social breakthroughs by allowing academics and public bodies access to the kind of vast computing power previously the preserve of private tech companies.

The supercomputer was formally switched on in Bristol on Thursday by the secretary of state for science and technology, Peter Kyle, who said it gave the UK “the raw computational horsepower that will save lives, create jobs, and help us reach net zero ambitions faster”.

The machine is fitted with 5,400 Nvidia “superchips” and sits inside a black metal cage topped with razor wire north of the city. It will consume almost £1m a month of mostly nuclear-powered electricity and will run 100,000 times faster than an average laptop.

Amid fierce international competition for computing power, it is the largest publicly acknowledged facility in the UK but will be the 11th fastest in the world behind those in the US, Japan, Germany, Italy, Finland and Switzerland. Elon Musk’s new xAI supercomputer in Tennessee already has 20 times its processing power, while Meta’s chief executive, Mark Zuckerberg, is planning a datacentre that “covers a significant part of the footprint of Manhattan”.

The investment is part of the government’s £2bn push to attain “AI sovereignty” so Britain does not have to rely on foreign processing chips to make AI-enabled research progress. But the switch-on could trigger new ethical dilemmas about how far AI should be allowed to steer policy on anything from the control of public protests to the breeding of animals.

One AI model under development by academics at the University of Bristol is an algorithm that learns from thousands of hours of footage of human motion, captured using wearable cameras. The idea is to predict how humans might move next. It could be applied to a wide range of scenarios, including enabling police to anticipate how crowds of protesters may behave, or predicting accidents in an industrial setting such as a construction site.

Dima Damen, a professor of computer vision at the university, said based on patterns in the human behaviours a wearable camera was capturing in real time, the algorithm, trained by Isambard-AI, could even “give an early warning that in the next two minutes, something is likely to happen here”.

Damen added there were “huge ethical implications of AI” and it would be important to always know why a system made a decision.

“One of the fears of AI is that some people will own the technology and the knowhow and others won’t,” she said. “It’s our biggest duty as researchers to make sure that the data and the knowledge is available for everyone.”

Another AI model under development could detect early infections in cows. A herd in Somerset is being filmed around the clock to train a model to predict if an animal is in the early stages of mastitis, which affects milk production and is an animal welfare problem. The scientists at Bristol believe this could be possible based on detecting subtle shifts in cows’ social behaviour.

“The farmer obviously takes a great interest in their herd, but they don’t necessarily have the time to look at all of the cows in their herd continuously day in, day out, so the AI will be there to provide that view,” said Andrew Dowsey, a professor of health data science at the University of Bristol.

A third group of researchers are using the supercomputer to uncover bias in skin cancer detection. James Pope, a senior lecturer in data science at the University of Bristol, has already run “quadrillions if not quintillions of computations” on Isambard-AI and found that current phone apps that check moles and lesions for signs of cancer perform better on lighter-coloured skin. If the finding is confirmed with further testing, the apps could be retuned to avoid bias.

“It would be quite difficult, and frankly impossible to do it with a traditional computer,” he said.


Americans May Have To Pay Much More For Electricity. Reason: Artificial Intelligence
Artificial intelligence is reshaping the future — but not without a cost. A new report by the White House Council of Economic Advisers warns that AI and cloud computing may drive up electricity prices dramatically across the United States unless urgent investments are made in power infrastructure.

The study highlights a significant shift: after decades of minimal electricity demand growth, 2024 alone saw a 2% rise, largely attributed to the surge in AI-powered data centers. The International Energy Agency (IEA) projects that by 2030, data centers in the US could consume more electricity than the combined output of heavy industries such as aluminum, steel, cement, and chemicals.

Productivity Promises vs Power Pressures

Despite the looming challenges, the report does not discount AI’s potential benefits. If half of all US businesses adopt AI by 2034, labor productivity could rise by 1.5 percentage points annually, potentially boosting GDP growth by 0.4% that year. But that promise comes with a price.

To meet the surge in demand, especially when factoring in industrial electrification and efforts to reshore manufacturing, the US would need to invest an estimated $1.4 trillion between 2025 and 2030 in new electricity generation. That figure surpasses the industry’s investment over the past decade. The study cautions that without the emergence of lower-cost power providers, such as renewables or advanced nuclear, electricity bills will rise sharply.


Delaware Firm to Evolve Defense Tech Org With Self-Growing AI

Star26 Capital Inc. is collaborating with Delaware-based Synthetic Darwin to supercharge its defense tech developments through self-growing AI.

The partnership will use Darwinslab, an AI ecosystem inspired by biological evolution, in which digital agents generate, assess, and cultivate other algorithms.

The solution slashes the time needed to build or sustain complex AI systems, shrinking development cycles to days and enabling rapid adaptation to new data and mission needs.

Read the full story on our new publication, Military AI: Delaware Firm to Evolve New York Defense Tech Org Through Self-Growing AI


AI isn’t just for coders: 7 emerging non-tech career paths in artificial intelligence


Artificial intelligence is no longer the future. It’s already shaping how we live, work, and learn. From smart assistants to personalised learning apps and automated hiring tools, AI is now part of everyday life. But here’s something many students still don’t realise: you don’t have to be a computer science genius to build a meaningful career in AI.

In 2025, AI needs more than just coders. It needs people who understand ethics, design, communication, psychology, policy, and human behaviour. Whether you’re studying law, liberal arts, design, economics, or media, there is space for you in this fast-growing field. These emerging roles are all about making AI more responsible, more human, and more useful.

Here are seven exciting non-tech career paths in artificial intelligence that you can start exploring now.

AI ethics specialist

AI systems make decisions that can affect real lives, from who gets hired to who receives a loan. That’s why companies and governments need experts who can guide them on what’s fair, what’s biased, and what crosses a line. Ethics specialists work closely with developers, legal teams, and product leaders to make sure AI is built and used responsibly.

Best suited for: Students from philosophy, sociology, law, or political science backgrounds

Where to work: Tech companies, research institutes, policy think tanks, or digital rights NGOs

AI UX and UI designer

AI tools need to be easy to use, intuitive, and accessible. That’s where design comes in. AI UX and UI designers focus on creating smooth, human-centered experiences, whether it’s a chatbot, a virtual assistant, or a smart home interface. They use design thinking to make sure AI works well for real users.

Best suited for: Students of psychology, graphic design, human-computer interaction, or visual communication

Where to work: Tech startups, health-tech and ed-tech platforms, voice and interface design labs

AI policy analyst

AI raises big questions about privacy, rights, and regulation. Governments and organisations are racing to create smart policies that balance innovation with safety. AI policy analysts study laws, write guidelines, and advise decision-makers on how to manage the impact of AI in sectors like education, defense, healthcare, and finance.

Best suited for: Public policy, law, international relations, or development studies students

Where to work: Government agencies, global institutions, research bodies, and policy units within companies

AI behavioural researcher

AI tools influence human behaviour, from how long we scroll to what we buy. Behavioural researchers look at how people respond to AI and what changes when technology gets smarter. Their insights help companies design better products and understand the social effects of automation and machine learning.

Best suited for: Students of psychology, behavioural economics, sociology, or education

Where to work: Tech companies, research labs, social impact startups, or mental health platforms

AI content strategist and explainer

AI is complex, and most people don’t fully understand it. That’s why companies need writers, educators, and content creators who can break it down. Whether it’s writing onboarding guides for AI apps or creating videos that explain how algorithms work, content strategists make AI easier to understand for everyday users.

Best suited for: Students of journalism, English, media studies, marketing, or communication

Where to work: Ed-tech and SaaS companies, AI product teams, digital agencies, or NGOs

AI program manager

This role is perfect for big-picture thinkers who love connecting people, processes, and purpose. Responsible AI program managers help companies build AI that meets ethical, legal, and user standards. They coordinate between tech, legal, and design teams and ensure that AI development stays aligned with values and global standards.

Best suited for: Business, liberal arts, management, or public administration students

Where to work: Large tech firms, AI consultancies, corporate ethics teams, or international development agencies

AI research associate (non-technical)

Not all AI research is about coding. Many labs focus on the social, psychological, or economic impact of AI. As a research associate, you could be studying how AI affects jobs, education, privacy, or cultural behaviour. Your work might feed into policy, academic papers, or product design.

Best suited for: Students from linguistics, anthropology, education, economics, or communication studies

Where to work: Universities, research labs, global think tanks, or ethics institutes

The world of AI is expanding rapidly, and it’s no longer just about math, code, and machines. It’s also about people, systems, ethics, and storytelling. If you’re a student with curiosity, critical thinking skills, and a passion for meaningful work, there’s a place for you in AI, even if you’ve never opened a programming textbook.

Copyright © 2025 AISTORIZ. For enquiries email at prompt@travelstoriz.com