
Better Artificial Intelligence (AI) Stock: SoundHound AI vs. C3.ai


The adoption of artificial intelligence (AI) software is increasing at an incredible pace because of the productivity and efficiency gains this technology is capable of delivering, and the good part is that this niche is likely to sustain a healthy growth rate over the long run.

According to ABI Research, the AI software market is expected to clock a compound annual growth rate (CAGR) of 25% through 2030, generating $467 billion in annual revenue at the end of the decade. That's why now would be a good time to take a closer look at the prospects of SoundHound AI (SOUN) and C3.ai (AI), two pure-play AI companies that could help investors capitalize on a couple of fast-growing niches within the AI software market, and check which one of them is worth buying right now.


The case for SoundHound AI

SoundHound AI provides a voice AI platform on which its customers can create conversational AI assistants and voice-based AI agents that can be deployed across multiple uses, such as order-taking in restaurants, car infotainment systems, and customer service applications.

This particular market is growing at a nice clip because AI-powered voice solutions let companies automate tasks and improve productivity and efficiency. Companies can also significantly improve their customer interaction experiences, thanks to the availability of round-the-clock multilingual AI agents and assistants.

Not surprisingly, SoundHound AI has been witnessing robust growth in demand for its voice AI solutions, which explains its solid revenue growth over the past year.

SOUN revenue (TTM) chart; data by YCharts.

But here’s what investors should look forward to: The conversational AI market could grow at an annual average rate of almost 24% through 2030, generating over $41 billion in annual revenue by the end of the decade. SoundHound AI has been growing at a much faster pace than the overall market, suggesting it is gaining a bigger share of this lucrative space.

SoundHound's 2025 revenue guidance of $167 million at the midpoint is nearly double the revenue it reported last year. Importantly, its cumulative subscriptions and bookings backlog stood at a massive $1.2 billion last year. This metric is a measure of the potential revenue that the company expects to "realize over the coming several years," suggesting it can maintain its healthy growth rates for a long time to come, thanks to the AI-fueled opportunity it is sitting on.

The case for C3.ai

C3.ai is a pure-play enterprise AI software platform provider that enables its customers to build generative AI applications and agentic AI solutions. The company claims that it provides 130 comprehensive enterprise AI applications ready for deployment across industries such as oil and gas, manufacturing, financial services, utilities, chemicals, defense, and others.

It has been in the news of late after its contract with the U.S. Air Force, covering the maintenance of aircraft, ground assets, and weapons systems over the next four years, was expanded to $450 million. However, this is just one of the many contracts the company has been landing lately.

C3.ai’s offerings are used across diverse industries, and its customer base includes the likes of Baker Hughes, which recently expanded its partnership with the company; local and state government bodies across multiple U.S. states; and companies such as Ericsson, Bristol Myers Squibb, Chanel, and others. The company’s fast-expanding customer base and the bigger contracts that it is signing with existing customers explain why there has been an uptick in C3.ai’s growth of late.

C3.ai (AI) revenue (TTM) chart; data by YCharts.

The company finished fiscal 2025 (which ended on April 30) with a 25% increase in revenue to $389 million. Management expects another 20% increase in total revenue in fiscal 2026. Consensus estimates suggest that C3.ai is likely to report similar growth in fiscal 2027, followed by an acceleration in fiscal 2028.

C3.ai (AI) revenue estimates for the current fiscal year; data by YCharts.

There’s a strong possibility, however, that C3.ai will exceed expectations and its own forecast for growth this year. That’s because C3.ai ended the previous fiscal year with 174 pilot projects, which it calls initial production deployments. The good part is that the company has been converting its pilots into contracts at a healthy rate.

C3.ai turned 66 of its initial production deployments into long-term contracts in fiscal 2025. Since the company ended fiscal 2024 with 123 pilot projects, that works out to a conversion rate of more than 50%. Going by that trend, the robust increase in the company's pilot projects last year means it could convert even more initial production deployments into full agreements in the current fiscal year.
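For illustration, that conversion arithmetic can be reproduced in a few lines of Python; the figures are the ones cited above, and the projection is a simple extrapolation rather than company guidance.

```python
# Pilot-to-contract conversion math cited above (figures from the article).
pilots_end_fy2024 = 123       # initial production deployments at the end of fiscal 2024
converted_in_fy2025 = 66      # pilots converted into long-term contracts during fiscal 2025
pilots_end_fy2025 = 174       # initial production deployments at the end of fiscal 2025

conversion_rate = converted_in_fy2025 / pilots_end_fy2024
print(f"Fiscal 2025 conversion rate: {conversion_rate:.0%}")  # roughly 54%

# Naive extrapolation: apply the same rate to the larger fiscal 2025 pilot base.
projected = pilots_end_fy2025 * conversion_rate
print(f"Conversions implied for fiscal 2026 at the same rate: ~{projected:.0f}")
```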

So there is a good chance that C3.ai's growth will exceed Wall Street's expectations, which should be a tailwind for its stock price in the long run.

The verdict

While it is clear both SoundHound and C3.ai are growing at a nice pace because of AI, the former’s growth rate is much higher. However, to buy SoundHound stock, investors will have to pay a handsome price-to-sales ratio of nearly 38. C3.ai, on the other hand, is trading at a much more attractive 8 times sales, which is almost in line with the U.S. technology sector’s average sales multiple.
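For readers who want to reproduce the valuation comparison, the ratio is simply market capitalization divided by trailing-twelve-month sales; the sketch below uses placeholder inputs rather than live market data, so only the method, not the numbers, should be taken from it.

```python
def price_to_sales(market_cap: float, ttm_revenue: float) -> float:
    """Price-to-sales ratio: market capitalization / trailing-twelve-month revenue."""
    return market_cap / ttm_revenue

# Placeholder figures for illustration only (not live quotes). The article cites
# ratios of roughly 38 for SoundHound AI and about 8 for C3.ai.
print(round(price_to_sales(market_cap=3.8e9, ttm_revenue=1.0e8), 1))  # ~38.0
print(round(price_to_sales(market_cap=3.2e9, ttm_revenue=4.0e8), 1))  # ~8.0
```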

So, investors looking for a mix of steady growth and attractive valuation can consider buying shares of C3.ai. However, if you have a higher appetite for risk and are willing to pay for a stock with a richer valuation, then consider buying SoundHound AI, as its faster growth could help it clock more upside, though the expensive valuation also exposes it to more volatility.




Critical thinking in the age of artificial intelligence



Artificial intelligence is rapidly transforming the business landscape, and we must prepare ourselves to use this technology effectively and thrive in the new future of work. In recent years we have seen the many ways in which artificial intelligence tools are being used experimentally to improve efficiency and achieve better results in less time. However, it can be overwhelming to determine the best way to integrate artificial intelligence into our lives. Critical thinking is essential here, since not everything these tools produce is reliable or truthful; if we take a program's output at face value, we could end up making bad decisions.

Between fear of the unknown and resistance to change, it is natural to feel confused, especially if we are unaware of what these advances make possible. Who can feel completely up to date with technology when change is accelerating? And at the center of that vertigo sits the reconfiguration that artificial intelligence is driving.

The challenge is to understand how to approach and incorporate artificial intelligence into our own projects, to promote the appropriate use of technological advances, and to foster critical thinking: the ability to analyze information and form an opinion based on evidence and reasoning. While the advances are real, not all that glitters is gold; artificial intelligence programs may return false, misleading, or completely distorted information. It is still up to the human mind to discern and not swallow everything it is offered.

The challenge cannot be ignored. Harvard University predicts that more than eighty percent of companies will have used or implemented artificial intelligence in some form by 2027, only two years away. It is therefore essential for businesses to prepare workers to use these technologies effectively and to approach them with critical thinking.

Incorporating artificial intelligence can be intimidating, but losing our fear of these advances (when they are well used and well evaluated) can help us execute our strategies successfully. To do that, they must first be understood. Leading business schools, such as Dartmouth College, have designed and run courses around the sprint model.

Sprints are focused, collaborative sessions that take place over a compressed period of time for rapid learning and skill development. In 2022, to encourage experimentation, this format was adopted for a subset of training courses, each consisting of four and a half hours of instruction in one to five sessions and graded as pass/fail. The freedom fostered by this format was ideal for boosting the creativity and hands-on learning that were critical.

The philosophy behind these courses was to support decision-making. The objective is that in each session participants face situations in which they can apply artificial intelligence critically:

  1. Reflective prompts expand the creative surface. These techniques create opportunities for human ingenuity, which remains an indispensable ingredient. Participants discover that, although the AI tools they use produce many ideas, the final inspiration often comes from a human who makes a less obvious connection. AI generates many alternatives; the human mind evaluates and chooses.

  2. Iterative integration of tools enables engaging communication. Today it is critical to find compelling ways to communicate ideas: using a combination of AI tools to bring an idea to life with engaging prose, powerful visuals, and catchy video and audio clips. Creating a good result is not difficult and can be left to the AI, but a great result still requires the work of a human mind.

  3. People remain a powerful way to test ideas. Machines can be very intelligent and also very stupid. Organizations seek out different perspectives to shape informed decision-making; they need to understand the views of different stakeholders to anticipate acceptance or rejection and to ensure that their message resonates with the customer.

The best way to get comfortable with an AI tool is to play around with it, and the best way to play with it is in the context of a real problem. Perspective is the best ally when playing with these programs; for example, you can ask the tool to adopt different roles (a minimal prompt sketch follows the list):

  1. Critique a concept as if you were an investor in the company.

  2. Evaluate another concept as if you were the COO who has to bring the idea to market.

  3. Assess a concept as if you were a 30-year-old customer who loves the existing brand.

  4. Critique a concept as if you were Greta Thunberg or another environmental activist.
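As a minimal illustration of how such perspective prompts could be assembled, the sketch below builds one prompt per role; `ask_model` is a hypothetical placeholder for whatever AI assistant or API is actually being used.

```python
# Minimal sketch: wrap one business concept in several reviewer perspectives.
# `ask_model` is a hypothetical stand-in, not a real API call.
def ask_model(prompt: str) -> str:
    return f"[model response to: {prompt[:60]}...]"

concept = "A subscription service that rents refurbished e-bikes to commuters."

perspectives = [
    "an investor deciding whether to fund the company",
    "the COO who has to bring the idea to market",
    "a 30-year-old customer who loves the existing brand",
    "an environmental activist assessing the idea's sustainability",
]

for role in perspectives:
    prompt = f"Critique the following concept as if you were {role}:\n{concept}"
    print(ask_model(prompt))
```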

The power of play lies in doing it with a purpose. Artificial intelligence is still an emerging technology, and its impact remains unclear. Even so, given the limited experience humanity has with these technologies, it is necessary to understand the role they will play in our sector and, therefore, in business training, along with the benefits that come from using them effectively.

An experiential activity such as a sprint is ideal for collective experimentation. It combines focus and discipline with space for learning through purposeful play: exploring, discovering, and creating together freely, which leads to more meaningful results.




AI reshapes ARDS care by predicting risk, guiding ventilation, and personalizing treatment



From early warnings to smarter ventilators, artificial intelligence is helping clinicians outpace ARDS, offering hope for more lives saved through personalized, data-driven care.

Review: Artificial intelligence and machine learning in acute respiratory distress syndrome management: recent advances.

In a recent review published in the journal Frontiers in Medicine, a group of authors synthesized recent evidence on how artificial intelligence (AI) and machine learning (ML) enhance prediction, stratification, and treatment of acute respiratory distress syndrome (ARDS) across the patient journey.

Background

Every day, more than one thousand people worldwide enter an intensive care unit (ICU) with ARDS, and 35–45% of those with severe illness still die despite guideline-based ventilation and prone positioning. Conventional care works, yet it remains fundamentally supportive and cannot overcome the syndrome's striking biological and clinical heterogeneity. Meanwhile, the digital exhaust of modern ICUs (continuous vital signs, electronic health records (EHRs), imaging, and ventilator waveforms) has outgrown the capabilities of unaided human cognition. AI and ML are increasingly being explored as tools that promise to transform this complexity into actionable insight. However, as the review notes, external validation, generalizability, and proof of real-world benefit remain crucial research needs, and further research is needed to determine whether these algorithms actually improve survival, disability, and cost.

Early Warning: Predicting Trouble Before It Starts

ML algorithms already flag patients likely to develop ARDS hours, and sometimes days, before clinical criteria are met. Convolutional neural networks (CNNs) trained on chest radiographs and ventilator waveforms, as well as gradient boosting models fed raw EHR data, have been shown to achieve area under the curve (AUC) values up to 0.95 for detection or prediction tasks in specific settings, although performance varies across cohorts and model types. This shift from reactive diagnosis to proactive screening enables teams to mobilize lung-protective ventilation, fluid stewardship, or transfer to high-acuity centers earlier, a practical advantage during coronavirus disease 2019 (COVID-19) surges when ICU beds are scarce. The review highlights that combining multiple data types (clinical, imaging, waveform, and even unstructured text) generally yields more accurate predictions, though real-world accuracy still depends on data quality and external validation.
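To make the gradient-boosting idea concrete, here is a minimal sketch that trains a classifier on synthetic, EHR-style tabular features and turns its probabilities into early-warning flags; the data, features, and performance are illustrative only and are not drawn from the studies discussed in the review.

```python
# Illustrative only: an early-warning classifier on synthetic EHR-style features,
# not the published models or data described in the review.
from sklearn.datasets import make_classification
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for routine clinical variables (vitals, labs, ventilator settings).
X, y = make_classification(n_samples=5000, n_features=20, weights=[0.9, 0.1], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

model = HistGradientBoostingClassifier(random_state=0).fit(X_train, y_train)
risk = model.predict_proba(X_test)[:, 1]  # per-patient probability of developing ARDS

print(f"AUC on held-out synthetic data: {roc_auc_score(y_test, risk):.2f}")
alerts = risk > 0.5  # a threshold like this would trigger earlier review or escalation
print(f"Patients flagged: {alerts.sum()} of {len(alerts)}")
```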

Sharper Prognosis: Dynamic Risk Profiles

Once ARDS is established, knowing who is likely to deteriorate guides resource allocation and family counseling. Long short-term memory (LSTM) networks that ingest time series vitals and laboratory trends outperform conventional Sequential Organ Failure Assessment (SOFA) and Simplified Acute Physiology Score (SAPS II) tools; meta-analysis shows a concordance index of 0.84 versus 0.64–0.70 for traditional scores. By continuously updating risk, these models enable clinicians to decide when to escalate to extracorporeal membrane oxygenation (ECMO) or palliative pathways, rather than relying on “worst value in 24 hours” snapshots. However, the review cautions that most current models are focused on mortality risk, and broader outcome prediction (e.g., disability, quality of life) remains underexplored.
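A minimal sketch of the time-series idea, assuming PyTorch and a made-up feature set, is shown below: an LSTM reads a 24-hour window of hourly vitals and labs and outputs an updated deterioration risk. The architecture and dimensions are illustrative assumptions, not the published models.

```python
# Illustrative only: an LSTM mapping a window of hourly vitals/labs to a risk score.
import torch
import torch.nn as nn

class DynamicRiskLSTM(nn.Module):
    def __init__(self, n_features: int = 12, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_features, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time_steps, n_features), e.g. 24 hourly observations per patient
        _, (h_n, _) = self.lstm(x)
        return torch.sigmoid(self.head(h_n[-1]))  # risk in [0, 1], refreshed as new data arrive

model = DynamicRiskLSTM()
batch = torch.randn(8, 24, 12)    # 8 patients, 24 hours, 12 variables each
print(model(batch).squeeze(-1))   # one continuously updatable risk score per patient
```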

Phenotypes and Endotypes

Latent class analysis (LCA) applied to multicenter trial data revealed two reproducible inflammatory phenotypes: hyper-inflammatory, characterized by interleukin 6 surges and a 40–50% mortality rate, and hypo-inflammatory, associated with less organ failure and a roughly 20% mortality rate. Treatment responses diverge; high positive end-expiratory pressure (PEEP) harms the hyper-inflammatory group, yet may aid the hypo-inflammatory group. Supervised gradient boosting models now assign these phenotypes at the bedside using routine labs and vitals with an accuracy of 0.94–0.95, paving the way for phenotype-specific trials of corticosteroids, fluid strategies, or emerging biologics. The review also describes additional ARDS subtypes, such as those based on respiratory mechanics, radiology, or multi-omics data, and emphasizes that real-time bedside subtyping is a critical goal for future precision medicine.
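As a rough sketch of the subtyping idea (not the published latent class models), a two-component mixture model can split patients into two groups from routine biomarker-style inputs; the synthetic data below are invented purely for illustration.

```python
# Illustrative only: unsupervised two-group subtyping on synthetic biomarker-style data,
# loosely mimicking latent-class-style phenotyping (not the published models).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Invented columns standing in for, e.g., an inflammatory marker, bicarbonate, vasopressor dose.
hypo = rng.normal(loc=[1.0, 24.0, 0.1], scale=[0.5, 2.0, 0.1], size=(300, 3))
hyper = rng.normal(loc=[4.0, 18.0, 0.7], scale=[1.0, 2.0, 0.2], size=(150, 3))
X = np.vstack([hypo, hyper])

gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
labels = gmm.predict(X)
print("Patients assigned to each phenotype:", np.bincount(labels))
# A supervised classifier trained on such labels could then assign phenotypes at the bedside.
```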

Smarter Breathing Support

AI also refines everyday ventilation decisions. A multi-task neural network simulates how oxygenation and compliance will change 45 minutes after a PEEP adjustment, enabling virtual “test drives” instead of trial-and-error titration. Mechanical power (MP) is the energy delivered to the lung each minute and exceeds 12 Joules per minute in patients at the highest risk of ventilator-induced injury. XGBoost models individualize MP thresholds and predict ICU mortality with an AUC of 0.88. For patient-ventilator asynchrony (PVA), deep learning detectors sift through millions of breaths and achieve over 90% accuracy, promising real-time alarms or even closed-loop ventilators that self-correct harmful cycling. The review notes, however, that most PVA detection models remain offline, and real-time actionable systems are still in development.
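For context, mechanical power can be approximated directly from routine ventilator readings using a commonly cited simplified formula; the function below is a sketch of that surrogate equation (the inputs and threshold comment are illustrative), not the XGBoost or deep learning models described in the review.

```python
# Sketch of a commonly cited simplified mechanical power surrogate:
#   MP (J/min) ~= 0.098 * RR * VT * (Ppeak - 0.5 * (Pplat - PEEP))
# with respiratory rate RR in breaths/min, tidal volume VT in litres, pressures in cmH2O.
def mechanical_power(rr: float, vt_litres: float, p_peak: float, p_plat: float, peep: float) -> float:
    return 0.098 * rr * vt_litres * (p_peak - 0.5 * (p_plat - peep))

mp = mechanical_power(rr=20, vt_litres=0.42, p_peak=28, p_plat=24, peep=10)
print(f"Mechanical power: {mp:.1f} J/min")  # values above ~12 J/min flag higher injury risk
```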

High Stakes Decisions: ECMO and Liberation

ECMO can salvage gas exchange but consumes significant staffing and supply resources. The hierarchical Prediction, Early Monitoring, and Proactive Triage for Extracorporeal Membrane Oxygenation (PreEMPT ECMO) deep network combines demographics, laboratory results, and minute-by-minute vital signs to forecast ECMO need up to 96 hours in advance (AUC = 0.89 at 48 hours), aiding referral timing and equitable resource utilization. At the other end of the journey, AI-based systems are being explored to predict when ventilator weaning will succeed, shortening mechanical ventilation and hospital stays in proof-of-concept studies. However, the review highlights that most studies of AI for weaning and extubation have been conducted in general ICU populations rather than ARDS-specific cohorts, so direct evidence in ARDS remains scarce. Integrating both tools could one day create a complete life-cycle decision platform, but this remains an aspirational goal.

Next Generation Algorithms and Real World Barriers

Graph neural networks (GNNs) model relationships among patients, treatments, and physiologic variables, potentially uncovering hidden risk clusters. Federated learning (FL) trains shared models across hospitals without moving protected health data, improving generalizability. Self-supervised learning (SSL) leverages billions of unlabeled waveforms to pre-train robust representations. Large language models (LLMs) and emerging multimodal variants act as orchestrators, calling specialized image or waveform models and generating human-readable plans. The review additionally highlights causal inference and reinforcement learning (RL) as promising approaches for simulating “what-if” scenarios and for developing AI agents that make sequential decisions in dynamic ICU environments. These techniques promise richer insights but still face hurdles related to data quality, interpretability, and workflow integration that must be addressed before routine clinical adoption.
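As a toy illustration of the federated idea, the sketch below fits a simple model at several "hospitals" and averages only the learned coefficients, never the underlying records; it is a bare-bones FedAvg-style loop demonstrating the mechanism, not any specific system discussed in the review.

```python
# Toy FedAvg-style sketch: each "hospital" fits a local model on its own data,
# and only the learned weights (never patient records) are averaged centrally.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

def local_update(seed: int) -> np.ndarray:
    X, y = make_classification(n_samples=500, n_features=10, random_state=seed)  # private local data
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    return np.concatenate([clf.coef_.ravel(), clf.intercept_])  # only weights leave the site

site_weights = [local_update(seed) for seed in range(3)]  # three hospitals train locally
global_weights = np.mean(site_weights, axis=0)            # the server averages the weights
print("Shared global coefficients:", np.round(global_weights, 2))
```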

In the area of drug discovery, the review notes that while AI has enabled target and compound identification in related lung diseases (such as idiopathic pulmonary fibrosis), the application of generative AI for ARDS-specific therapies remains largely conceptual at present.

Conclusions

To summarize, current evidence shows that AI and ML can detect ARDS earlier, stratify risk more precisely, tailor ventilation to individual lung mechanics, and guide costly therapies such as ECMO. Phenotype-aware algorithms already flag patients who benefit from, or are harmed by, high PEEP, while neural networks forecast MP-related injury and PVA in real time. Next-generation GNNs, FL, RL, causal inference, and LLMs may weave disparate data into cohesive bedside recommendations. Rigorous prospective trials, transparent reporting, and clinician-friendly interfaces remain essential to translate these digital advances into lives saved and disabilities prevented.


California Judicial Council implements rule for generative artificial intelligence use in court



Under the rule, court policies must block confidential information from being entered into public generative AI systems and must ban unlawful discrimination via AI programs. Court staff and judicial officers must "take reasonable steps" to confirm the accuracy of material, according to a statement published by Reuters, and must disclose whether they used AI if the final version of any publicized written, visual, or audio work was AI-generated.

Courts must implement their respective policies by September 1.

Task force chair Brad Hill told the council in a statement published by Reuters that the rule “strikes the best balance between uniformity and flexibility.” He explained that the task force steered clear of a rule that would dictate court use of the evolving technology.

Illinois, Delaware, and Arizona have also adopted generative AI rules or policies. New York, Georgia, and Connecticut are currently evaluating generative AI use in court.

California's court system handles roughly five million cases across 65 courts and is served by around 1,800 judges. The AI task force was established to address the increasing interest in generative AI, as well as public concern about its effect on the judiciary, and it oversees the development of policy recommendations for AI use in the branch.


