
Companies keep slashing jobs. How worried should workers be about AI replacing them?



Tech companies that are cutting jobs and leaning more on artificial intelligence are also disrupting themselves.

Amazon’s Chief Executive Andy Jassy said last month that he expects the e-commerce giant will shrink its workforce as employees “get efficiency gains from using AI extensively.”

At Salesforce, a software company that helps businesses manage customer relationships, Chief Executive Marc Benioff said last week that AI is already doing 30% to 50% of the company’s work.

Other tech leaders have chimed in before. Earlier this year, Anthropic, an AI startup, flashed a big warning: AI could wipe out more than half of all entry-level white-collar jobs in the next one to five years.

Ready or not, AI is reshaping, displacing and creating roles as the technology’s impact on the job market ripples across multiple sectors. The AI frenzy has fueled anxiety among workers who fear their jobs could be automated. Roughly half of U.S. workers are worried about how AI may be used in the workplace in the future, and few think AI will lead to more job opportunities in the long run, according to a Pew Research Center report.

The heightened fear comes as major tech companies such as Microsoft, Intel, Amazon and Meta cut workers, push for more efficiency and promote their AI tools. Tech companies have rolled out AI-powered features that can generate code, analyze data, develop apps and help complete other tedious tasks.

“AI isn’t just taking jobs. It’s really rewriting the rule book on what work even looks like right now,” said Robert Lucido, senior director of strategic advisory at Magnit, a company based in Folsom, Calif., that helps tech giants and other businesses manage contractors, freelancers and other contingent workers.

Disruption debated

Exactly how big a disruption AI will cause in the job market is still being debated. Executives at OpenAI, the maker of the popular chatbot ChatGPT, have pushed back against predictions that a massive white-collar job bloodbath is coming.

“I do totally get not just the anxiety, but that there is going to be real pain here, in many cases,” said Sam Altman, chief executive of OpenAI, in an interview with “Hard Fork,” the tech podcast from the New York Times. “In many more cases, though, I think we will find that the world is significantly underemployed. The world wants way more code than can get written right now.”

As new economic policies, including those around tariffs, create more unease among businesses, companies are reining in costs while also being pickier about whom they hire.

“They’re trying to find what we call the purple unicorns rather than someone that they can ramp up and train,” Lucido said.

Before the 2022 launch of ChatGPT — a chatbot that can generate text, images, code and more — tech companies were already using AI to curate posts, flag offensive content and power virtual assistants. But the popularity and apparent superpowers of ChatGPT set off a fierce competition among tech companies to release even more powerful generative AI tools. They’re racing ahead, spending hundreds of billions of dollars on data centers, facilities that house computing equipment such as servers used to process the trove of information needed to train and maintain AI systems.

Economists and consultants have been trying to figure out how AI will affect engineers, lawyers, analysts and other professions. Some say the change won’t happen as soon as some tech executives expect.

“There have been many claims about new technologies displacing jobs, and although such displacement has occurred in the past, it tends to take longer than technologists typically expect,” economists for the U.S. Bureau of Labor Statistics said in a February report.

AI can help develop, test and write code, provide financial advice and sift through legal documents. The bureau, though, still projects that employment of software developers, financial advisors, aerospace engineers and lawyers will grow faster than the average for all occupations from 2023 to 2033. Companies will still need software developers to build AI tools for businesses or maintain AI systems.

Worker bots

Tech executives have touted AI’s ability to write code. Meta Chief Executive Mark Zuckerberg has said that he thinks AI will be able to write code like a mid-level engineer in 2025. And Microsoft Chief Executive Satya Nadella has said that as much as 30% of the company’s code is written by AI.

Other roles could grow more slowly or shrink because of AI. The Bureau of Labor Statistics expects employment of paralegals and legal assistants to grow slower than the average for all occupations, while roles for credit analysts, claims adjusters and insurance appraisers are projected to decline.

McKinsey Global Institute, the business and economics research arm of the global management consulting firm McKinsey & Co., predicts that by 2030 “activities that account for up to 30 percent of hours currently worked across the US economy could be automated.”

The institute expects that demand for science, technology, engineering and mathematics roles will grow in the United States and Europe but shrink for customer service and office support.

“A large part of that work involves skills, which are routine, predictable and can be easily done by machines,” said Anu Madgavkar, a partner with the McKinsey Global Institute.

Although generative AI fuels the potential for automation to eliminate jobs, AI can also enhance technical, creative, legal and business roles, the report said. There will be a lot of “noise and volatility” in hiring data, Madgavkar said, but what will separate the “winners and losers” is how people rethink their workflows and the jobs themselves.

Tech companies have announced 74,716 cuts from January to May, up 35% from the same period last year, according to a report from Challenger, Gray & Christmas, a firm that offers job search and career transition coaching.

Tech companies say they’re slashing jobs for various reasons.

Autodesk, which makes software used by architects, designers and engineers, slashed 9% of its workforce, or 1,350 positions, this year. The San Francisco company cited geopolitical and macroeconomic factors, along with its efforts to invest more heavily in AI, as reasons for the cuts, according to a regulatory filing. Other companies, such as Oakland fintech company Block, which slashed 8% of its workforce in March, have told employees that the cuts were strategic, not because they’re “replacing folks with AI.”

Diana Colella, executive vice president of entertainment and media solutions at Autodesk, said that it’s scary when people don’t know what their job will look like in a year. Still, she doesn’t think AI will replace humans or creativity; rather, it will act as an assistant.

Companies are looking for more AI expertise. Autodesk found that mentions of AI in U.S. job listings surged in 2025, and that some of the fastest-growing roles include AI engineer, AI content creator and AI solutions architect. The company partnered with analytics firm GlobalData to examine nearly 3 million job postings over two years across industries such as architecture, engineering and entertainment.

Workers have adapted to technology before. When door-to-door encyclopedia salesmen were displaced by the rise of online search, those workers pivoted to selling other products, Colella said.

“The skills are still key and important,” she said. “They just might be used for a different product or a different service.”




Critical thinking in the age of artificial intelligence



Artificial intelligence is rapidly transforming the business landscape, and we must prepare ourselves to use this technology effectively and thrive in the new future of work. In recent years, we have seen organizations experiment with artificial intelligence tools in many ways to improve efficiency and achieve better results in less time. Yet it can be overwhelming to determine the best way to integrate artificial intelligence into our lives. Critical thinking is essential here, because not everything these tools produce is reliable or truthful; if we accept whatever a program tells us at face value, we can make bad decisions.

Between fear of the unknown and resistance to change, it is natural to feel confused, especially if we are unaware of what these advances make possible. And who can feel completely up to date with technology when it moves this fast? At the center of the vertigo, we sit in the eye of the hurricane of the reconfiguration that artificial intelligence is generating.

The challenge we face is to understand how to approach and incorporate artificial intelligence into our own projects, to promote the appropriate use of technological advances, and to foster critical thinking: the ability to analyze information and form an opinion based on evidence and reasoning. While there have been great advances, it is also true that not all that glitters is gold; when we consult artificial intelligence programs, they may give us answers that are false, misleading, or totally distorted. It is still up to the human mind to discern, rather than swallow every pill we are offered.

The challenge cannot be ignored. Harvard University predicts that more than 80% of companies will have used or implemented artificial intelligence in some form by 2027, only two years away. That makes it essential for businesses to prepare workers to use these technologies effectively and to approach them with critical thinking.

Incorporating artificial intelligence can be intimidating, but losing our fear of these advances, when they are well used and well evaluated, can help us execute our strategies successfully. First, they must be understood. The world’s leading business schools, such as Dartmouth University, have designed and run the sprint model for this purpose.

Sprints are focused, collaborative sessions that take place over a compressed period of time for rapid learning and skill development. In 2022, to encourage experimentation, this format was adopted for a subset of training courses, each consisting of four and a half hours of instruction in one to five sessions and graded as pass/fail. The freedom fostered by this format was ideal for boosting the creativity and hands-on learning that were critical.

The philosophy of these courses was to support decision-making. The objective is that in each session, participants face situations in which they can apply artificial intelligence processes critically:

  1. Reflective prompts expand the creative surface. These are techniques that create opportunities for human ingenuity, which remains an indispensable ingredient. They help participants discover that although the AI tools they were using produced many ideas, the final inspiration came from a human who made a less obvious connection. AI produces many alternatives; the human mind evaluates and chooses.

  2. Iterative integration of tools enables engaging communications. Today it is critical to find compelling ways to communicate ideas: using a combination of AI tools to bring an idea to life with engaging prose, powerful visuals, and catchy video and audio clips. Producing a good result is not difficult, as it can be left to the artificial intelligence; producing a great result still requires the work of a human mind.

  3. People are a powerful way to test ideas. Machines can be very intelligent, but also very stupid. Organizations seek out different perspectives to shape informed decision-making; they need to understand the views of different stakeholders to anticipate rejection or acceptance and to ensure that their message resonates with the customer.

The best way to get comfortable with an AI tool is to play around with it, and the best way to play with it is in the context of a real problem. Perspective is the best ally when playing with these programs. For example, you might ask the tool to take on each of the following roles in turn (a code sketch follows the list):

  1. Criticize a concept as if you were an investor in the company.

  2. Evaluate another concept as if you were the COO who has to bring the idea to market.

  3. Assess the concept as if you were a 30-year-old customer who loves the existing brand.

  4. Critique the concept as if you were Greta Thunberg or another environmentalist.
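
To make this concrete, here is a minimal sketch of how those four perspective prompts could be sent to a chatbot programmatically. It assumes the OpenAI Python SDK with an API key in the environment; the product concept, model choice, and prompt wording are all illustrative, and any chat-capable assistant would serve.

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # Hypothetical concept to critique; substitute your own real problem.
    concept = "A subscription service that rents professional power tools to hobbyists."

    # Each persona reframes the same concept, mirroring the four perspectives above.
    personas = [
        "an investor in the company",
        "the COO who has to bring this idea to market",
        "a 30-year-old customer who loves the existing brand",
        "Greta Thunberg or another environmentalist",
    ]

    for persona in personas:
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # any chat-capable model works
            messages=[
                {"role": "system",
                 "content": f"Critique the following product concept as if you were {persona}."},
                {"role": "user", "content": concept},
            ],
        )
        print(f"--- {persona} ---")
        print(response.choices[0].message.content)

The human still weighs the four critiques against one another and decides which objections matter; the tool only widens the field of view.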

The power of play comes from doing it with a purpose. Artificial intelligence is still an emerging technology, and its impact remains unclear. That said, based on the limited experience humanity has with these technologies, it is necessary to understand the role they will play in our sector and, therefore, in business training, as well as the benefits obtained when they are used effectively.

An experiential activity such as a sprint is ideal for collective experimentation. It combines focus and discipline with space for learning through purposeful play: exploring, discovering, and creating together freely, which leads to more significant results.




AI reshapes ARDS care by predicting risk, guiding ventilation, and personalizing treatment



From early warnings to smarter ventilators, artificial intelligence is helping clinicians outpace ARDS, offering hope for more lives saved through personalized, data-driven care.


In a recent review published in the journal Frontiers in Medicine, a group of authors synthesized recent evidence on how artificial intelligence (AI) and machine learning (ML) enhance prediction, stratification, and treatment of acute respiratory distress syndrome (ARDS) across the patient journey.

Background

Every day, more than one thousand people worldwide enter an intensive care unit (ICU) with ARDS, and 35–45% of those with severe illness still die despite guideline-based ventilation and prone positioning. Conventional care works, yet it remains fundamentally supportive and cannot overcome the syndrome’s striking biological and clinical heterogeneity. Meanwhile, the digital exhaust of modern ICUs, from continuous vital signs and electronic health records (EHRs) to imaging and ventilator waveforms, has outgrown the capabilities of unaided human cognition. AI and ML are increasingly being explored as tools that promise to transform this complexity into actionable insight. However, as the review notes, external validation, generalizability, and proof of real-world benefit remain crucial research needs. Further research is needed to determine whether these algorithms actually improve survival and disability outcomes and reduce costs.

Early Warning: Predicting Trouble Before It Starts

ML algorithms already flag patients likely to develop ARDS hours and sometimes days before clinical criteria are met. Convolutional neural networks (CNNs) trained on chest radiographs and ventilator waveforms, as well as gradient boosting models fed raw EHR data, have been shown to achieve area under the curve (AUC) values up to 0.95 for detection or prediction tasks in specific settings. However, performance varies across cohorts and model types. This shift from reactive diagnosis to proactive screening enables teams to mobilize lung-protective ventilation, fluid stewardship, or transfer to high-acuity centers earlier, a practical advantage during coronavirus disease 2019 (COVID-19) surges when ICU beds are scarce. The review highlights that combining multiple data types (clinical, imaging, waveform, and even unstructured text) generally yields more accurate predictions. Still, real-world accuracy remains dependent on data quality and external validation.
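
The review does not include code, but the tabular half of such an early-warning pipeline is easy to sketch. The minimal illustration below trains a gradient-boosting classifier on synthetic stand-in EHR features and evaluates it by AUC; the feature names and label rule are invented for demonstration only.

    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 5000

    # Synthetic stand-ins for structured EHR features; real models ingest
    # hundreds of variables (vitals, labs, ventilator settings, demographics).
    X = np.column_stack([
        rng.normal(92, 6, n),    # SpO2 (%)
        rng.normal(22, 6, n),    # respiratory rate (breaths/min)
        rng.normal(250, 80, n),  # PaO2/FiO2 ratio
        rng.normal(2, 1, n),     # lactate-like lab value (mmol/L)
    ])
    # Toy label: a lower P/F ratio and a higher respiratory rate raise ARDS risk.
    logit = -0.01 * (X[:, 2] - 250) + 0.15 * (X[:, 1] - 22) - 1.5
    y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
    model = GradientBoostingClassifier().fit(X_train, y_train)
    print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))

In practice, the hard work lies upstream of this snippet: cleaning irregular EHR data, handling missingness, and validating the model on external cohorts.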

Sharper Prognosis: Dynamic Risk Profiles

Once ARDS is established, knowing who is likely to deteriorate guides resource allocation and family counseling. Long short-term memory (LSTM) networks that ingest time-series vitals and laboratory trends outperform conventional Sequential Organ Failure Assessment (SOFA) and Simplified Acute Physiology Score (SAPS II) tools; meta-analysis shows a concordance index of 0.84 versus 0.64–0.70 for traditional scores. By continuously updating risk, these models enable clinicians to decide when to escalate to extracorporeal membrane oxygenation (ECMO) or palliative pathways, rather than relying on “worst value in 24 hours” snapshots. However, the review cautions that most current models focus on mortality risk, and broader outcome prediction (e.g., disability, quality of life) remains underexplored.
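
The reviewed models are not public, but their core idea, re-estimating risk at every timestep rather than producing one static score, can be sketched in a few lines of PyTorch. The dimensions and signals below are invented for illustration.

    import torch
    import torch.nn as nn

    class DynamicRiskLSTM(nn.Module):
        """Emits an updated risk estimate at every timestep."""
        def __init__(self, n_features: int = 12, hidden: int = 64):
            super().__init__()
            self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
            self.head = nn.Linear(hidden, 1)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (batch, time, features), e.g. hourly vitals and labs
            out, _ = self.lstm(x)  # (batch, time, hidden)
            return torch.sigmoid(self.head(out)).squeeze(-1)  # (batch, time)

    model = DynamicRiskLSTM()
    vitals = torch.randn(2, 48, 12)  # 2 patients, 48 hours, 12 signals
    risk_trajectory = model(vitals)  # risk re-estimated every hour
    print(risk_trajectory.shape)     # torch.Size([2, 48])

Unlike a score computed from the worst values in a 24-hour window, the output here is a trajectory, so a rising curve can trigger escalation before any single threshold is crossed.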

Phenotypes and Endotypes

Latent class analysis (LCA) applied to multicenter trial data revealed two reproducible inflammatory phenotypes: hyper-inflammatory, characterized by interleukin 6 surges and a 40–50% mortality rate, and hypo-inflammatory, associated with less organ failure and a roughly 20% mortality rate. Treatment responses diverge; high positive end-expiratory pressure (PEEP) harms the hyper-inflammatory group, yet may aid the hypo-inflammatory group. Supervised gradient boosting models now assign these phenotypes at the bedside using routine labs and vitals with an accuracy of 0.94–0.95, paving the way for phenotype-specific trials of corticosteroids, fluid strategies, or emerging biologics. The review also describes additional ARDS subtypes, such as those based on respiratory mechanics, radiology, or multi-omics data, and emphasizes that real-time bedside subtyping is a critical goal for future precision medicine.
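
For a flavor of the unsupervised subtyping step, the sketch below fits a two-component mixture model to synthetic biomarker data. Note the assumptions: it uses a Gaussian mixture as a stand-in for the latent class analysis described in the review, and the marker values are invented.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(1)

    # Synthetic biomarker panel: a hyper-inflammatory cluster (higher log IL-6,
    # lower bicarbonate) mixed with a larger hypo-inflammatory cluster.
    hyper = rng.normal([8.0, 18.0], [1.0, 2.0], size=(300, 2))
    hypo = rng.normal([4.0, 24.0], [1.0, 2.0], size=(700, 2))
    markers = np.vstack([hyper, hypo])

    gmm = GaussianMixture(n_components=2, random_state=0).fit(markers)
    labels = gmm.predict(markers)

    # The component with the higher mean log IL-6 plays the hyper-inflammatory role.
    hyper_class = int(np.argmax(gmm.means_[:, 0]))
    print("Fraction assigned hyper-inflammatory:", np.mean(labels == hyper_class))

The supervised step the review describes then replaces this with a gradient-boosting classifier trained to reproduce the discovered classes from routine labs and vitals, which is what makes bedside assignment practical.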

Smarter Breathing Support

AI also refines everyday ventilation decisions. A multi-task neural network simulates how oxygenation and compliance will change 45 minutes after a PEEP adjustment, enabling virtual “test drives” instead of trial-and-error titration. Mechanical power (MP) is the energy delivered to the lung each minute and exceeds 12 Joules per minute in patients at the highest risk of ventilator-induced injury. XGBoost models individualize MP thresholds and predict ICU mortality with an AUC of 0.88. For patient-ventilator asynchrony (PVA), deep learning detectors sift through millions of breaths and achieve over 90% accuracy, promising real-time alarms or even closed-loop ventilators that self-correct harmful cycling. The review notes, however, that most PVA detection models remain offline, and real-time actionable systems are still in development.
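
The review reports learned, individualized MP thresholds; for orientation, mechanical power itself can be approximated from routine ventilator readings. The sketch below uses one published simplified surrogate attributed to Gattinoni and colleagues (treat the exact formula as an assumption here; several variants exist) and flags the 12 Joules per minute threshold mentioned above.

    def mechanical_power(rr: float, vt_l: float, p_peak: float,
                         p_plat: float, peep: float) -> float:
        """Simplified mechanical-power surrogate in J/min.

        rr: respiratory rate (breaths/min); vt_l: tidal volume (liters);
        pressures in cmH2O. Illustrative only, not a clinical tool.
        """
        driving_pressure = p_plat - peep
        return 0.098 * rr * vt_l * (p_peak - 0.5 * driving_pressure)

    mp = mechanical_power(rr=20, vt_l=0.42, p_peak=28, p_plat=22, peep=10)
    print(f"MP = {mp:.1f} J/min")  # ~18.1, above the 12 J/min risk threshold

What the XGBoost models add on top of such a formula is context: the MP level a given patient tolerates depends on lung mechanics and illness severity, so a single fixed cutoff is replaced with an individualized one.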

High Stakes Decisions: ECMO and Liberation

ECMO can salvage gas exchange but consumes significant staffing and supply resources. The hierarchical Prediction, Early Monitoring, and Proactive Triage for Extracorporeal Membrane Oxygenation (PreEMPT ECMO) deep network combines demographics, laboratory results, and minute-by-minute vital signs to forecast ECMO need up to 96 hours in advance (AUC = 0.89 at 48 hours), aiding referral timing and equitable resource utilization. At the other end of the journey, AI-based systems are being explored to predict when ventilator weaning will succeed, shortening mechanical ventilation and hospital stay in proof-of-concept studies. However, the review highlights that most studies of AI for weaning and extubation have been conducted in general ICU populations rather than ARDS-specific cohorts, so direct evidence in ARDS remains scarce. Integrating both tools could one day create a complete life-cycle decision platform, but this remains an aspirational goal.

Next Generation Algorithms and Real World Barriers

Graph neural networks (GNNs) model relationships among patients, treatments, and physiologic variables, potentially uncovering hidden risk clusters. Federated learning (FL) trains shared models across hospitals without moving protected health data, improving generalizability. Self-supervised learning (SSL) leverages billions of unlabeled waveforms to pre-train robust representations. Large language models (LLMs) and emerging multimodal variants act as orchestrators, calling specialized image or waveform models and generating human-readable plans. The review additionally highlights causal inference and reinforcement learning (RL) as promising approaches for simulating “what-if” scenarios and for developing AI agents that make sequential decisions in dynamic ICU environments. These techniques promise richer insights but still face hurdles related to data quality, interpretability, and workflow integration that must be addressed before routine clinical adoption.

In the area of drug discovery, the review notes that while AI has enabled target and compound identification in related lung diseases (such as idiopathic pulmonary fibrosis), the application of generative AI for ARDS-specific therapies remains largely conceptual at present.

Conclusions

To summarize, current evidence shows that AI and ML can detect ARDS earlier, stratify risk more precisely, tailor ventilation to individual lung mechanics, and guide costly therapies such as ECMO. Phenotype-aware algorithms already flag patients who benefit from, or suffer from, high PEEP, while neural networks forecast MP-related injury and PVA in real-time. Next-generation GNNs, FL, RL, causal inference, and LLMs may weave disparate data into cohesive bedside recommendations. Rigorous prospective trials, transparent reporting, and clinician-friendly interfaces remain essential to translate these digital advances into lives saved and disabilities prevented.

Journal reference: Artificial intelligence and machine learning in acute respiratory distress syndrome management: recent advances. Frontiers in Medicine.




California Judicial Council implements rule for generative artificial intelligence use in court



Under the new rule, court policies must block confidential information from being entered into public generative AI systems; they must also ban unlawful discrimination via AI programs. Court staff and judicial officers must “take reasonable steps” to confirm the accuracy of AI-produced material, according to a statement published by Reuters. Staff and judicial officers must also disclose whether they used AI if the final version of any publicized written, visual, or audio work was AI-generated.

Courts must implement their respective policies by September 1.

Task force chair Brad Hill told the council in a statement published by Reuters that the rule “strikes the best balance between uniformity and flexibility.” He explained that the task force steered clear of a rule that would dictate court use of the evolving technology.

Illinois, Delaware, and Arizona have also adopted generative AI rules or policies, while New York, Georgia, and Connecticut are currently evaluating generative AI use in court.

California’s court system encompasses some five million cases, 65 courts, and around 1,800 judges. The AI task force was established to address growing interest in generative AI, as well as public concern about its effect on the judiciary; it oversees the development of AI use policy recommendations for the branch.


