

AI revolution: How artificial intelligence is reshaping education and jobs in America



Artificial intelligence has rapidly become a part of Americans’ lives. What was a fringe concept only a few years ago is now an everyday tool.

Its expansive reach affects what and how students study, as well as the job sector, prompting some to question how students and higher education at large should respond.

The best way an undergrad can prepare for an AI-altered workforce is to develop human qualities that machines cannot replicate, such as critical thinking, creativity, and social intelligence, some experts told The College Fix.

While the value of specific majors may diminish, careers in mental health, healthcare, and fields requiring high-level decision-making and management will remain viable, they said.

But make no mistake, the role of humans will increasingly center on collaboration with AI.

AI will be a job killer. It will also be a job creator.

While some jobs will be eliminated, others will be created.

“The amount of work that’s being created and the opportunities to both create and contribute are going to be expanded exponentially,” said corporate advisor Jack Myers, a University of Arizona lecturer in its School of Information Science.

Forecasts predicting the coming obsolescence of countless careers should be viewed “through the prism of not only what’s going to be eliminated, but what’s going to be created,” he told The College Fix in a telephone interview.

Jobs in coding, basic processing, routine bookkeeping, low-complexity customer service and translation will all soon be eliminated, Myers said.

But the opportunities ushered in by AI are going to be exceptional, said Myers, author of the book “The Tao of Leadership: Harmonizing Technological Innovation and Human Creativity in the AI Era.”

“If you look at almost any area of human creation,” Myers said, “it will be enhanced through the same type of collaborative partnership as if the creator was hiring an expert to assist and support in the process.”

Joey Kim, chair of the Department of Engineering and Computer Science at The Master’s University, described AI as “simply a tool.”

“With the advent of new tools, careers do disappear,” Kim said in a telephone interview with The Fix. “There’s also careers that get modified…. It’s not simply binary and careers [either] remain unaffected or [become] obsolete. There is a spectrum.”

But like it or not, AI will be part of many jobs, said Michael Pavlin, an associate professor in the School of Business and Economics at Wilfrid Laurier University, who has been involved in AI research since the early 2000s and serves as the chair of his school’s management analytics program.

“It’s hard to imagine a white collar job where you’re not going to be interacting with AI at some level,” he said in a telephone interview.

However, despite recent AI advances, he said he remains “more on the skeptical side,” later adding he believes “we’re being a little bit oversold.”

Reva Freedman, an associate professor of computer science at Northern Illinois University with expertise in computational linguistics, said AI “is going to have a huge impact on the job market, but not different in kind to the effect that computerization had with the invention of the PC in 1983.”

“In offices, [b]efore the invention of the PC, lots of people had jobs as secretaries and clerks. Secretaries typed memos that other people wrote. Those jobs have been largely replaced by people using word processors themselves,” she said via email. “Clerks did a variety of jobs that have been automated by use of Excel and other software.”

The jobs that will survive are those that require high-level thinking and management skills, or that involve hands-on work, such as medicine, Freedman said.

Gary Clemenceau, a “deep geek” turned chaplain and author who claims 30 years of experience in tech, agreed. He told The Fix that “mental health and healthcare jobs, and anything that requires dealing with humans and higher-order thinking, will still be viable.”

AI and the dumbing-down of higher education

But will there be any higher-order thinking left?

“For teachers, it’s absolutely impossible to give a writing [assignment] today that students can’t cheat on,” Freedman said. “Even for an in-class assignment, you can now get glasses that allow you to look up stuff on the web during an exam.”

Kim said the misuse of AI in the classroom devalues what a degree represents: that its holder has been trained in a program and successfully met its requirements.

Freedman also expressed concerns over the misuse of AI in other segments of society, citing allegations it was used to write a recent MAHA report said to contain made-up citations.

Pavlin told The Fix he is more concerned about less obvious errors that require a greater level of expertise to detect. When he queries AI about esoteric subjects related to his research, for example, he finds subtler mistakes than when he asks it about a well-documented topic such as general relativity.

In that sense, AI is not bulletproof. Kim echoed similar sentiments: “When big important decisions must be made where it’s either life-or-death or costing millions and millions of dollars, you’re going to need something more than ChatGPT.”

Yet, as some of the scholars interviewed by The Fix noted, students’ increasing overuse of AI may erode capacities that matter far beyond proficiency at using ChatGPT.

“I think it’s impacting their learning,” Pavlin said. “Not all students, but [there is] definitely a subset of students where I’m concerned about their critical thinking skills.”

AI and the college student

When asked how students could best prepare for the careers that await them in an AI-altered job market, most of the scholars interviewed recommended they develop their more uniquely human attributes.

“The machines are already smarter than the human brain in many instances,” Myers told The Fix. “[They have] been for a while and that’s just going to continue to become increasingly the norm.”

“So where does the human come in?” Myers asked rhetorically, answering that humans enter through the “collaborative process” and “the unique human qualities of the human brain.” These, he said, are developed in the social sciences and humanities majors.

Clemenceau said students must develop their human qualities.

“Students need to put down their phones and THINK,” Clemenceau wrote in an email to The College Fix. “AI is not very good at being creative.”

Whether majoring in computer science and learning to code is still a wise choice was a point of some disagreement.

“Coding will be irrelevant as a tool or resource to bring to the table,” Myers said. “The AI is doing its own coding going forward. It doesn’t need the human coders anymore.”

In contrast, Freedman noted that people “have been saying ever since I was a beginning programmer (in the 70’s) that programs that can write programs were coming.”

“Is it more true now? Probably. Does that mean the [number] of programmers needed will go down? T[h]at’s a much harder question to answer.”

“I think there will always be room for people who care about the quality of their work, understand the business needs, and can communicate with non-programmers,” she said.

As for choosing a major, though, she added: “I don’t think students’ majors have a lot to do with their success in the work world; their personal qualities are a lot more important. So I don’t think we can tell students what majors will be more useful.”

Kim expressed similar sentiments, saying “I personally believe that with any major, if you’re going to be using your tools to your advantage, and if you’re really going to be motivated enough to not just follow the crowd, you will have a job.”

Clemenceau said the future may be bleaker than his more optimistic peers predict.

“I see two roads,” he said via email. “A small percentage of people will reject AI as inhuman and soulless and empty, and take the ‘human road’ as much as possible, living more spiritual lives.”

However, he added, a “larger percentage of people will fully embrace AI and (sadly) sacrifice part of their humanity, becoming less creative, less able to think critically – and more easily manipulated.”

MORE: Using AI to write essays can impair brain function: MIT study

IMAGE CAPTION AND CREDIT: A graphic showing a laptop user employing AI / Supatman, CanvaPro



Critical thinking in the age of artificial intelligence



Artificial intelligence is rapidly transforming the business landscape, and we must prepare ourselves to use the technology effectively and thrive in the new future of work. In recent years we have seen many ways in which AI tools are being used experimentally to improve efficiency and achieve better results in less time. Yet it can be overwhelming to determine how best to integrate them into our lives. Critical thinking is essential here: not everything these systems produce is reliable or truthful, and if we accept a program’s output uncritically, we can make bad decisions.

Between fear of the unknown and resistance to change, it is natural to feel confused, especially if we are unaware of what these advances make possible. Who can feel fully up to date when technology moves this fast? We sit at the center of the reconfiguration that artificial intelligence is generating.

The challenge is to learn how to approach and incorporate artificial intelligence into our own projects, to promote the appropriate use of these advances, and to foster critical thinking: the ability to analyze information and form opinions based on evidence and reasoning. The advances are real, but not all that glitters is gold; AI programs can hand us false, misleading, or wholly distorted data. It is still up to the human mind to discern, rather than swallow every pill we are offered.

The challenge cannot be ignored. Harvard University predicts that more than eighty percent of companies will have used or implemented artificial intelligence in some form by 2027, only two years away. That makes it essential for businesses to prepare workers to use these technologies effectively and to approach them with critical thinking.

Incorporating artificial intelligence can be intimidating, but losing our fear of these advances, when they are well used and well evaluated, can help us execute our strategies successfully. First, though, they must be understood. Leading business schools, such as Dartmouth’s Tuck School of Business, have designed and run the sprint model for exactly this purpose.

Sprints are focused, collaborative sessions that take place over a compressed period of time for rapid learning and skill development. In 2022, to encourage experimentation, this format was adopted for a subset of training courses, each consisting of four and a half hours of instruction in one to five sessions and graded as pass/fail. The freedom fostered by this format was ideal for boosting the creativity and hands-on learning that were critical.

The philosophy behind these courses was to support decision-making. The objective is for participants in each session to face situations in which they can apply artificial intelligence critically:

  1. Reflective prompts widen the creative surface. These techniques create opportunities for human ingenuity, which remains an indispensable ingredient. They help participants discover that although the AI tools they use produce many ideas, the final inspiration comes from a human who makes a less obvious connection. AI produces the alternatives; the human mind evaluates and chooses.

  2. Iterative integration of tools enables engaging communication. Finding compelling ways to communicate ideas is critical today, and a combination of AI tools can bring an idea to life with engaging prose, powerful visuals, and catchy video and audio clips. Producing a good result is not difficult and can be left to the AI; producing a great result still requires a human mind.

  3. People are a powerful way to test ideas. Machines can be very intelligent, but they can also be very stupid. Organizations seek out different perspectives to shape informed decision-making; they need to understand the views of different stakeholders to anticipate rejection or acceptance and to ensure that their message resonates with the customer.

The best way to get comfortable with an AI tool is to play with it, and the best way to play with it is in the context of a real problem. Adopting a perspective is the best ally in this kind of play, for example (a prompt sketch follows this list):

  1. Critique a concept as if you were an investor in the company.

  2. Evaluate another concept as if you were the COO who has to bring the idea to market.

  3. Assess the concept as if you were a 30-year-old customer who loves the existing brand.

  4. Critique the concept as if you were Greta Thunberg or another environmentalist.
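
To make the exercise concrete, here is a minimal sketch, in Python, of how such persona framings might be turned into reusable prompts. The concept, personas, and prompt wording are hypothetical illustrations, not taken from the article; each generated prompt would be pasted into whichever AI tool you are playing with.

```python
# Illustrative only: the concept, personas, and wording are hypothetical
# examples of perspective-taking prompts, not taken from the article.
CONCEPT = "a subscription service that rents refurbished office furniture"

PERSONAS = [
    "an investor in the company deciding whether to fund it",
    "the COO who has to bring the idea to market",
    "a 30-year-old customer who loves the existing brand",
    "an environmental activist assessing the sustainability claims",
]

def persona_prompt(persona: str, concept: str) -> str:
    """Frame a critique of the same concept from one stakeholder's view."""
    return (
        f"Act as {persona}. Critique the following concept, listing its "
        f"three biggest strengths and three biggest risks:\n\n{concept}"
    )

for persona in PERSONAS:
    # Each prompt goes to whatever AI tool you are experimenting with.
    print(persona_prompt(persona, CONCEPT), end="\n\n")
```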

The power of play comes from playing with a purpose. Artificial intelligence is still an emerging technology, and its impact remains unclear. Even so, given humanity’s limited experience with these technologies, it is necessary to understand the role AI will play in our sector, and therefore in business training, along with the benefits gained when it is used effectively.

An experiential activity such as a sprint is ideal for collective experimentation. It combines focus and discipline with space for learning through purposeful play: exploring, discovering, and creating together freely, which leads to more significant results.





AI reshapes ARDS care by predicting risk, guiding ventilation, and personalizing treatment



From early warnings to smarter ventilators, artificial intelligence is helping clinicians outpace ARDS, offering hope for more lives saved through personalized, data-driven care.

Review: Artificial intelligence and machine learning in acute respiratory distress syndrome management: recent advances. Image Credit: Design_Cells / Shutterstock

In a recent review published in the journal Frontiers in Medicine, a group of authors synthesized recent evidence on how artificial intelligence (AI) and machine learning (ML) enhance prediction, stratification, and treatment of acute respiratory distress syndrome (ARDS) across the patient journey.

Background

Every day, more than one thousand people worldwide enter an intensive care unit (ICU) with ARDS, and 35–45% of those with severe illness still die despite guideline-based ventilation and prone positioning. Conventional care works, yet it remains fundamentally supportive and cannot overcome the syndrome’s striking biological and clinical heterogeneity. Meanwhile, the digital exhaust of modern ICUs, from continuous vital signs and electronic health records (EHRs) to imaging and ventilator waveforms, has outgrown the capabilities of unaided human cognition. AI and ML are increasingly being explored as tools that promise to transform this complexity into actionable insight. However, as the review notes, external validation, generalizability, and proof of real-world benefit remain crucial research needs. Further research is needed to determine whether these algorithms actually improve survival and reduce disability and cost.

Early Warning: Predicting Trouble Before It Starts

ML algorithms already flag patients likely to develop ARDS hours, and sometimes days, before clinical criteria are met. Convolutional neural networks (CNNs) trained on chest radiographs and ventilator waveforms, as well as gradient boosting models fed raw EHR data, have been shown to achieve area under curve (AUC) values up to 0.95 for detection or prediction tasks in specific settings. However, performance varies across cohorts and model types. This shift from reactive diagnosis to proactive screening enables teams to mobilize lung-protective ventilation, fluid stewardship, or transfer to high-acuity centers earlier, a practical advantage during coronavirus disease 2019 (COVID-19) surges when ICU beds are scarce. The review highlights that combining multiple data types (clinical, imaging, waveform, and even unstructured text) generally yields more accurate predictions. Still, real-world accuracy remains dependent on data quality and external validation.
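
As a rough illustration of the tabular side of this pipeline (a sketch only, not any of the published models), the following snippet trains a gradient-boosted classifier on synthetic stand-in “EHR” features and reports a held-out AUC, the metric quoted above. The feature names and data are invented for the example.

```python
# Toy sketch, not the published models: gradient boosting on synthetic
# stand-in "EHR" features, evaluated by AUC as in the studies cited above.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 4))        # pretend columns: SpO2/FiO2, RR, lactate, age
logits = X @ np.array([-1.2, 0.8, 0.9, 0.3])
y = (logits + rng.normal(size=5000) > 1.0).astype(int)  # synthetic ARDS labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = GradientBoostingClassifier().fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"held-out AUC: {auc:.2f}")     # 0.5 is chance, 1.0 is perfect ranking
```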

Sharper Prognosis: Dynamic Risk Profiles

Once ARDS is established, knowing who is likely to deteriorate guides resource allocation and family counseling. Long short-term memory (LSTM) networks that ingest time series vitals and laboratory trends outperform conventional Sequential Organ Failure Assessment (SOFA) and Simplified Acute Physiology Score (SAPS II) tools; meta-analysis shows a concordance index of 0.84 versus 0.64–0.70 for traditional scores. By continuously updating risk, these models enable clinicians to decide when to escalate to extracorporeal membrane oxygenation (ECMO) or palliative pathways, rather than relying on “worst value in 24 hours” snapshots. However, the review cautions that most current models are focused on mortality risk, and broader outcome prediction (e.g., disability, quality of life) remains underexplored.
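
A minimal sketch of the dynamic-risk idea, assuming PyTorch and purely synthetic inputs: an LSTM reads hourly vitals and emits an updated risk estimate at every time step, which is what lets such models replace “worst value in 24 hours” snapshots. The architecture and sizes are illustrative, not those of any published model.

```python
# Toy dynamic-risk model (assumes PyTorch; inputs are random stand-ins).
import torch
import torch.nn as nn

class RiskLSTM(nn.Module):
    """Consumes hourly vitals/labs and emits an updated risk every hour,
    in contrast to a single worst-value-in-24-hours snapshot score."""
    def __init__(self, n_features: int = 8, hidden: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out, _ = self.lstm(x)                 # x: (batch, hours, features)
        return torch.sigmoid(self.head(out)).squeeze(-1)  # (batch, hours)

model = RiskLSTM()
vitals = torch.randn(2, 24, 8)  # two synthetic patients, 24 hourly readings
risk = model(vitals)            # untrained, so outputs are meaningless here
print(risk[:, -1])              # most recent risk estimate per patient
```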

Phenotypes and Endotypes

Latent class analysis (LCA) applied to multicenter trial data revealed two reproducible inflammatory phenotypes: hyper-inflammatory, characterized by interleukin-6 surges and a 40–50% mortality rate, and hypo-inflammatory, associated with less organ failure and a roughly 20% mortality rate. Treatment responses diverge: high positive end-expiratory pressure (PEEP) harms the hyper-inflammatory group, yet may aid the hypo-inflammatory group. Supervised gradient boosting models now assign these phenotypes at the bedside using routine labs and vitals with an accuracy of 0.94–0.95, paving the way for phenotype-specific trials of corticosteroids, fluid strategies, or emerging biologics. The review also describes additional ARDS subtypes, such as those based on respiratory mechanics, radiology, or multi-omics data, and emphasizes that real-time bedside subtyping is a critical goal for future precision medicine.
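
The two-stage pattern described here (unsupervised phenotype discovery, then a supervised model that assigns phenotypes from routine data) can be sketched as follows. The sketch substitutes a Gaussian mixture for latent class analysis, a close relative for continuous data, and all markers and labels are synthetic.

```python
# Toy two-stage sketch: a Gaussian mixture stands in for latent class
# analysis to "discover" two phenotypes from synthetic inflammatory
# markers; gradient boosting then learns to assign them from routine data.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.mixture import GaussianMixture
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
# Synthetic markers (think IL-6, bicarbonate): two overlapping clusters.
hyper = rng.normal(loc=[3.0, -1.0], scale=0.8, size=(400, 2))
hypo = rng.normal(loc=[0.5, 0.5], scale=0.8, size=(800, 2))
markers = np.vstack([hyper, hypo])

# Stage 1: unsupervised phenotype discovery on the research-grade markers.
phenotype = GaussianMixture(n_components=2, random_state=0).fit_predict(markers)

# Stage 2: supervised bedside assignment from noisier routine measurements.
routine = markers + rng.normal(scale=0.5, size=markers.shape)
acc = cross_val_score(GradientBoostingClassifier(), routine, phenotype, cv=5)
print(f"phenotype assignment accuracy: {acc.mean():.2f}")
```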

Smarter Breathing Support

AI also refines everyday ventilation decisions. A multi-task neural network simulates how oxygenation and compliance will change 45 minutes after a PEEP adjustment, enabling virtual “test drives” instead of trial-and-error titration. Mechanical power (MP) is the energy delivered to the lung each minute and exceeds 12 Joules per minute in patients at the highest risk of ventilator-induced injury. XGBoost models individualize MP thresholds and predict ICU mortality with an AUC of 0.88. For patient-ventilator asynchrony (PVA), deep learning detectors sift through millions of breaths and achieve over 90% accuracy, promising real-time alarms or even closed-loop ventilators that self-correct harmful cycling. The review notes, however, that most PVA detection models remain offline, and real-time actionable systems are still in development.
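
For readers who want the arithmetic behind the 12 joules-per-minute threshold, a commonly used simplified formula for mechanical power (the Gattinoni approximation, which the review does not itself spell out) can be sketched as follows, with hypothetical ventilator settings.

```python
def mechanical_power(rr: float, vt_l: float, p_peak: float,
                     p_plat: float, peep: float) -> float:
    """Simplified mechanical power estimate in J/min (Gattinoni approximation):
    MP ~= 0.098 * RR * VT * (Ppeak - 0.5 * (Pplat - PEEP)),
    with RR in breaths/min, VT in liters, and pressures in cmH2O."""
    driving_pressure = p_plat - peep
    return 0.098 * rr * vt_l * (p_peak - 0.5 * driving_pressure)

# Hypothetical settings: RR 20/min, VT 420 mL, Ppeak 30, Pplat 25, PEEP 10.
mp = mechanical_power(rr=20, vt_l=0.42, p_peak=30, p_plat=25, peep=10)
print(f"{mp:.1f} J/min")  # ~18.5 J/min, above the 12 J/min risk threshold
```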

High Stakes Decisions: ECMO and Liberation

ECMO can salvage gas exchange but consumes significant staffing and supply resources. The hierarchical Prediction, Early Monitoring, and Proactive Triage for Extracorporeal Membrane Oxygenation (PreEMPT ECMO) deep network combines demographics, laboratory results, and minute-by-minute vital signs to forecast ECMO need up to 96 hours in advance (AUC = 0.89 at 48 hours), aiding referral timing and equitable resource utilization. At the other end of the journey, AI-based systems are being explored to predict when ventilator weaning will succeed, shortening mechanical ventilation and hospital stays in proof-of-concept studies. However, the review highlights that most studies of AI for weaning and extubation are conducted in general ICU populations rather than ARDS-specific cohorts, so direct evidence in ARDS remains scarce. Integrating both tools could one day create a complete life-cycle decision platform, but this remains an aspirational goal.

Next Generation Algorithms and Real World Barriers

Graph neural networks (GNNs) model relationships among patients, treatments, and physiologic variables, potentially uncovering hidden risk clusters. Federated learning (FL) trains shared models across hospitals without moving protected health data, improving generalizability. Self-supervised learning (SSL) leverages billions of unlabeled waveforms to pre-train robust representations. Large language models (LLMs) and emerging multimodal variants act as orchestrators, calling specialized image or waveform models and generating human-readable plans. The review additionally highlights causal inference and reinforcement learning (RL) as promising approaches for simulating “what-if” scenarios and for developing AI agents that make sequential decisions in dynamic ICU environments. These techniques promise richer insights but still face hurdles related to data quality, interpretability, and workflow integration that must be addressed before routine clinical adoption.

In the area of drug discovery, the review notes that while AI has enabled target and compound identification in related lung diseases (such as idiopathic pulmonary fibrosis), the application of generative AI for ARDS-specific therapies remains largely conceptual at present.

Conclusions

To summarize, current evidence shows that AI and ML can detect ARDS earlier, stratify risk more precisely, tailor ventilation to individual lung mechanics, and guide costly therapies such as ECMO. Phenotype-aware algorithms already flag patients who benefit from, or suffer from, high PEEP, while neural networks forecast MP-related injury and PVA in real-time. Next-generation GNNs, FL, RL, causal inference, and LLMs may weave disparate data into cohesive bedside recommendations. Rigorous prospective trials, transparent reporting, and clinician-friendly interfaces remain essential to translate these digital advances into lives saved and disabilities prevented.



California Judicial Council implements rule for generative artificial intelligence use in court



Policies must block confidential information from being entered into public generative AI systems and must ban unlawful discrimination via AI programs. Court staff and judicial officers must “take reasonable steps” to confirm the accuracy of material, according to a statement published by Reuters, and must disclose whether they used AI if the final version of any publicized written, visual, or audio work was AI-generated.

Courts must implement their respective policies by September 1.

Task force chair Brad Hill told the council in a statement published by Reuters that the rule “strikes the best balance between uniformity and flexibility.” He explained that the task force steered clear of a rule that would dictate court use of the evolving technology.

Illinois, Delaware, and Arizona have also adopted generative AI rules or policies. New York, Georgia, and Connecticut are currently evaluating generative AI use in court.

California’s court system comprises 65 courts and around 1,800 judges handling five million cases. The AI task force was established to address increasing interest in generative AI, as well as public concern about its effect on the judiciary; it oversees the development of policy recommendations for AI use in the judicial branch.



Source link

Continue Reading
