Brand Stories

‘Workforce crisis’: key takeaways for graduates battling AI in the jobs market

  • 1. The current crisis is as much economic as AI-led

    A shifting graduate labour market is not unusual, said Kirsten Barnes, head of digital platform at Bright Network, which connects graduates and young professionals to employers.

    “Any shifts in the graduate job market this year – which typically fluctuates by 10-15% – appear to be driven by a combination of factors, including wider economic conditions and the usual fluctuations in business demand, rather than a direct impact from AI alone. We’re not seeing a consistent trend across specific sectors,” she said.

    Claire Tyler, head of insights at the Institute for Student Employers (ISE), which represents major graduate employers, said that among companies recruiting fewer graduates “none of them have said it’s down to AI”.

    Some recruitment specialists cited the recent increase in employer national insurance contributions as a factor in slowing down entry-level recruitment.

    Ed Steer, chief executive of Sphere Digital Recruitment, which hires for junior marketer and sales roles in tech and media, said graduate vacancies have fallen from 400 a year in 2021 to an expected 75 this year. He put the drop down to businesses wanting to hire more experienced applicants who can “deliver for their customers on day one”.


  • 2. But AI is definitely a factor

    However, Auria Heanley, co-founder of Oriel Partners, which recruits for personal assistant roles, has seen a 30% drop in entry-level roles this year. She said she had “no doubt” that “AI combined with wider economic uncertainty, is making it much tougher for graduates to find these roles”.

    Felix Mitchell, co-chief executive at Instant Impact, which recruits for mid-sized businesses, said jobs related to Stem [science, technology, engineering and mathematics] were the most disrupted. “I do think that the evidence suggests that AI will likely be a net job creator, but the losses are happening faster than the gains.”


  • 3. The revolution is only going to accelerate

    Major tech companies such as Microsoft are trumpeting the impact of AI agents – systems that perform human-level cognitive tasks autonomously – as tools that can be competent assistants in the workplace, with early adopters including the consultancy McKinsey and the law firm Clifford Chance. Dario Amodei, the boss of the AI developer Anthropic, has warned that the technology could wipe out half of all entry-level office jobs in the next five years.

    James Reed, chief executive of the employment agency Reed, said AI would transform the whole jobs market from now on: “This is the year of AI… lots of businesses are really doubling down on it, investing in it.

    “This feels like the year that AI is really changing and getting embedded – for better or for worse.”

    Sophie O’Brien, chief executive of Pollen Careers, which caters for early-career and entry-level roles, said AI had “accelerated” a decline in graduate recruitment that has been going on for a few years now: “The job market could look vastly different in even a year’s time.”

    She added: “For a lot of professional, desk-based jobs where you are processing information on a laptop it’s entirely obvious that a huge number of those jobs over the next few years are going to be redundant. There’s a workforce crisis that is going to happen and I don’t know if we are ready for this.”


  • 4. Learn AI skills now

    David Bell, at the executive search firm Odgers, said law firms are demanding AI competence from graduates. “As part of the interview process for the graduate intake they are asking them about their understanding and usage of AI,” he said. “Anyone who has not been using ChatGPT or the equivalent will struggle to be taken on board.”

    James Milligan, global head of Stem at recruitment multinational Hays, agreed. “If they do not have that second skill set around how to use AI then they are definitely going to be at a disadvantage,” he said. “Jobs don’t die, they evolve and change. I think we are in a process of evolutionary change at the moment.”

    Chris Morrow, managing director at Digitalent, an agency that specialises in recruiting AI-related roles, said that rather than the technology taking jobs it was creating a new category of AI-adjacent positions: “It is opening windows to jobs that did not exist 12 months ago, like AI ethics and prompt engineering. New roles are being born.” 

    With such demand for expertise, universities are being urged to adapt courses accordingly. Louise Ballard, a co-founder of Atheni.ai, which helps companies adopt AI technology, says there is a problem with “basic AI literacy skills” not being taught in higher education. 

    “Young people are not getting the training they need,” she said. “The skills required to be good at AI are not necessarily the academic skills you have acquired.”

    The real risk, said Morrow, was not that AI takes jobs but that educational institutions and government policy fail to keep up. “Universities need to embed AI learning across all their subjects,” he said. 


  • 5. Graduates are using AI to apply for jobs – but should take care

    AI is an obvious aid for filling out CVs and forms as well as writing cover letters. Many of the organisations contacted by the Guardian reported a surge in applications now that filing one has become easier.

    Bright Network said the number of graduates and undergraduates using AI for their applications has risen from 38% last year to 50%. Teach First, a major graduate employer, said it plans to accelerate use of vetting processes that don’t involve writing to reduce the impact of computer-drafted entries.

    The ISE’s Tyler warned that excessive use of AI in applications could result in employers ending recruitment campaigns early and targeting specific groups with recruitment work. Ending such drives early could also affect under-represented groups, she said.

    Errors that were once seen as red flags might now be seen in a different way, says James Reed. “In the old days we used to screen out CVs that had spelling mistakes because we’d think the person isn’t paying attention to detail or is approaching things with a casual mindset. Now if you see someone’s CV with a spelling mistake you think: ‘Wow, that’s actually written by a person – it’s the real thing.’”


  • 6. Consider applying to smaller businesses

    Small-to-medium-sized enterprises, or businesses that employ fewer than 250 people, were also singled out as an opportunity for graduates.

    Pollen’s O’Brien pointed out that SMEs are the biggest employers in the UK, at 60% of the workforce, and any lack of AI knowledge on their part could present an employment opportunity.

    “A lot of these businesses don’t know how to use AI, they are scared of AI and there is a huge opportunity for young graduates to be bringing those skills into small companies that are still hiring,” she said. “If you bring these skills into a small business you could revolutionise that business.”

    Dan Hawes, co-founder of the Graduate Recruitment Bureau, said there were thousands of “under the radar” employers below the level of big corporates who were “desperate for brainy individuals”.

    “There is this huge, hidden market and it gets rarely reported,” he said.




    Identity Crisis: Artificial Intelligence engines ignore black skin tones and African Hair texture – The Tanzania Times

    Musk Hints at Kid-Friendly Version of AI Chatbot Grok
    Elon Musk’s artificial intelligence (AI) chatbot is about to spawn a new generation.

    “We’re going to make Baby Grok, an app dedicated to kid-friendly content,” the billionaire wrote in a post on his X social media platform Saturday (July 19) night without offering further details.

    Grok is the name of the AI model used by Musk’s xAI startup, introduced in November 2023 and touted for its sarcastic sense of humor as well as its reasoning capabilities. 

    Musk’s comments about a kid-friendly version of the tool came a little more than a week after xAI debuted its newest version of Grok — Grok 4 — which the CEO called “the smartest AI in the world,” adding that in “some ways, it’s terrifying.”

    As PYMNTS reported, Musk likened Grok 4 to a “super-genius child” in which the “right values” of truthfulness and a sense of honor must be instilled so society can benefit from its advances. 

    Musk said Grok 4 was built to perform at the “post-graduate level” in many topics simultaneously, which no person can do. It can generate realistic visuals and tackle complex analytical tasks.

    In addition, Musk said Grok 4 would score perfectly on SAT and graduate-level exams like GRE even without seeing the questions ahead of time.

    Grok also encountered controversy this month when the chatbot praised Adolf Hitler in a conversation on X. xAI has since said it has taken action to ban hate speech.

    In other AI news, PYMNTS wrote recently about the recent wave of funding for AI startups. For example, the AI search company Perplexity saw its valuation reach $18 billion following its latest funding round of $100 million.

    “Capital raised by Perplexity, which has tripled its valuation over the past year, point to robust investor interest in the competitive AI search market especially for leading startups,” that report said. “Apple reportedly was interested in acquiring Perplexity.”

    An even bigger funding round last week involved Thinking Machines, founded by former OpenAI CTO Mira Murati. That company achieved a $10 billion valuation after raising $2 billion.

    “We’re building multimodal AI that works with how you naturally interact with the world — through conversation, through sight, through the messy way we collaborate,” Murati said in a post on X.

    Finally, reports emerged last week that Anthropic had been approached by investors with funding offers that could value the startup at $100 billion. The company’s valuation hit $61.5 billion earlier this year after a $3.5 billion fundraise.

     



    India can reframe the Artificial Intelligence debate
    ‘India must make a serious push to share AI capacity with the global majority’ 
    | Photo Credit: Getty Images

    Less than three years ago, ChatGPT dragged artificial intelligence (AI) out of research laboratories and into living rooms, classrooms and parliaments. Leaders sensed the shock waves instantly. Despite an already crowded summit calendar, three global gatherings on AI followed in quick succession. When New Delhi hosts the AI Impact Summit in February 2026, it can do more than break attendance records. It can show that governments, not just corporations, can steer AI for the public good.

    India can bridge the divide

    But the geopolitical climate is far from smooth. War continues in Ukraine. West Asia teeters between flare-ups. Trade walls are rising faster than regulators can respond. Even the Paris AI Summit (February 2025), meant to unify, ended in division. The United States and the United Kingdom rejected the final text. China welcomed it. The very forum meant to protect humanity’s digital future faces the risk of splintering. India has the standing and the credibility to bridge these divides.

    India’s Ministry of Electronics and Information Technology began preparations in earnest. In June, it launched a nationwide consultation through the MyGov platform. Students, researchers, startups, and civil society groups submitted ideas.

    The brief was simple: show how AI can advance inclusive growth, improve development, and protect the planet. These ideas will shape the agenda and the final declaration. This turned the consultation into capital and gave India a democratic edge no previous host has enjoyed. Here are five suggestions rooted in India’s digital experience. They are modest in cost but can be rich in credibility.

    Pledges and report cards

    First, measure what matters. India’s digital tools prove that technology can serve everyone. Aadhaar provides secure identity to more than a billion people. The Unified Payments Interface (UPI) moves money in seconds. The Summit in 2026 can borrow that spirit. Each delegation could announce one clear goal to achieve within 12 months. A company might cut its data centre electricity use. A university could offer a free AI course for rural girls. A government might translate essential health advice into local languages using AI. All pledges could be listed on a public website and tracked through a scoreboard a year later. Report cards are more interesting than press releases.

    Second, bring the global South to the front row. Half of humanity was missing from the leaders’ photo session at the first summit. That must not happen again. As a leader of the Global South, India must endeavour to have as wide a participation as possible.

    India should also push for an AI for Billions Fund, seeded by development banks and Gulf investors, which could pay for cloud credits, fellowships and local language datasets. India could launch a multilingual model challenge for, say, 50 underserved languages and award prizes before the closing dinner. The message is simple: talent is everywhere, and not just in California or Beijing.

    Third, create a common safety check. Since the Bletchley Summit in 2023 (the AI Safety Summit), experts have urged red-teaming and stress tests. Many national AI safety institutes have sprung up. But no shared checklist exists. India could endeavour to broker these into a Global AI Safety Collaborative which can share red-team scripts, incident logs and stress tests on any model above an agreed compute line. Our own institute can post an open evaluation kit with code and datasets for bias and robustness.

    Fourth, offer a usable middle road on rules. The United States fears heavy regulation. Europe rolls out its AI Act. China trusts state control. Most nations want something in between. India can voice that balance. It can draft a voluntary frontier AI code of conduct. Base it on the Seoul pledge but add teeth. Publish external red team results within 90 days. Disclose compute once it crosses a line. Provide an accident hotline. Voluntary yet specific.

    Fifth, avoid fragmentation. Splintered summits serve no one. The U.S. and China eye each other across the frontier AI race. New Delhi cannot erase that tension but can blunt it. The summit agenda must be broad, inclusive, and focused on global good.

    The path for India

    India cannot craft a global AI authority in one week and should not try. It can stitch together what exists and make a serious push to share AI capacity with the global majority. If India can turn participation into progress, it will not just be hosting a summit. It will reframe its identity on a cutting-edge issue.

    Syed Akbaruddin is a former Indian Permanent Representative to the United Nations and, currently, Dean, Kautilya School of Public Policy, Hyderabad




    Copyright © 2025 AISTORIZ. For enquiries email at prompt@travelstoriz.com