
Prediction: This Artificial Intelligence (AI) and “Magnificent Seven” Stock Will Be the Next Company to Surpass a $3 Trillion Market Cap by the End of 2025


Key Points

  • The artificial intelligence trend will be a huge growth engine for Amazon’s cloud computing division.

  • Efficiency improvements should help expand profit margins for its e-commerce business.

  • Anticipation of the company’s earnings growth could help drive the shares higher in 2025’s second half.

Only three stocks have ever achieved a market capitalization of $3 trillion: Microsoft, Nvidia, and Apple. Tremendous wealth has been created for long-term investors in these companies; for perspective, only two countries (the United States and China) have gross domestic products greater than their combined market value today.

In recent years, artificial intelligence (AI) and other technology tailwinds have driven these stocks to previously inconceivable heights, and it looks like the party is just getting started. So, which stock will be next to reach $3 trillion?

I think it will be Amazon (NASDAQ: AMZN), and it will happen before the year is done. Here’s why.

The next wave of cloud growth

Amazon was positioned perfectly to take advantage of the AI revolution. Over the last two decades, it has built the leading cloud computing infrastructure company, Amazon Web Services (AWS), which as of its last reported quarter had booked more than $110 billion in trailing-12-month revenue. New AI workloads require immense amounts of computing power, which only some of the large cloud providers have the capacity to provide.

AWS’s revenue growth has accelerated in recent quarters, hitting 17% year-over-year growth in Q1 of this year. With spending on AI just getting started, the unit’s revenue growth could stay in the double digits for many years. Its operating margin is also expanding and hit 37.5% over the last 12 months.

Assuming that its double-digit percentage revenue growth continues over the next several years, Amazon Web Services will reach $200 billion in annual revenue within the decade. At its current 37.5% operating margin, that would equate to a cool $75 billion in operating income just from AWS. Investors can anticipate this growth and should start pricing those expected profits into the stock as the second half of 2025 progresses.

Automation and margin expansion

For years, Amazon’s e-commerce platform operated at razor-thin margins. Over the past 12 months, the company’s North America division generated close to $400 billion in revenue but produced just $25.8 billion in operating income, or a 6.3% profit margin.

However, in the last few quarters, the fruits of Amazon’s long-term investments have begun to ripen in the form of profit margin expansion. The company spent billions of dollars to build out a vertically integrated delivery network that will give it operating leverage at increasing scale. It now has an advertising division generating tens of billions of dollars in annual revenue. It’s beginning to roll out more advanced robotics systems at its warehouses, so they will require fewer workers to operate. All of this should lead to long-term profit margin expansion.

Indeed, its North American segment’s operating margin has begun to expand already, but it still has plenty of room to grow. With growing contributions to the top line from high-margin revenue sources like subscriptions, advertising, and third-party seller services combined with a highly efficient and automated logistics network, Amazon could easily expand its North American operating margin to 15% within the next few years. On $500 billion in annual revenue, that would equate to $75 billion in annual operating income from the retail-focused segment.

AMZN Operating Income (TTM) data by YCharts.

The path to $3 trillion

Currently, Amazon’s market cap is in the neighborhood of $2.3 trillion. But over the course of the rest of this year, investors should get a clearer picture of its profit margin expansion story and the earnings growth it can expect due to the AI trend and its ever more efficient e-commerce network.

Today, the AWS and North American (retail) segments combine to produce annual operating income of $72 billion. But based on these projections, within a decade, we can expect that figure to hit $150 billion. And that is assuming that the international segment — which still operates at quite narrow margins — provides zero operating income.

That level of earnings won’t arrive this year, but investors habitually price a company’s future into its stock, and it will become increasingly clear that Amazon still has huge potential to grow its earnings over the next decade.

For a company with $150 billion in annual earnings, a $3 trillion market cap would equate to a price-to-earnings ratio of 20. That’s an entirely reasonable valuation for a business such as Amazon. It’s not guaranteed to reach that market cap in 2025, but I believe investors will grow increasingly optimistic about Amazon’s future earnings potential as we progress through the second half of this year, driving its share price to new heights and keeping its shareholders fat and happy.


John Mackey, former CEO of Whole Foods Market, an Amazon subsidiary, is a member of The Motley Fool’s board of directors. Brett Schafer has positions in Amazon. The Motley Fool has positions in and recommends Amazon, Apple, Microsoft, and Nvidia. The Motley Fool recommends the following options: long January 2026 $395 calls on Microsoft and short January 2026 $405 calls on Microsoft. The Motley Fool has a disclosure policy.




The analysis of learning investment effect for artificial intelligence English translation model based on deep neural network


Datasets collection

This experiment employs two widely recognized standard datasets in MMT: Multi30K and Microsoft Common Objects in Context (MS COCO)27,28. The Multi30K dataset comprises image-text pairs spanning various domains and is commonly used for image caption generation and multimodal translation tasks. The dataset contains three language pairs: English to German (En-De), English to French (En-Fr), and English to Czech (En-Cs). Specifically, the Multi30K training set encompasses 29,000 bilingual parallel sentence pairs, 1000 validation samples, and 1000 test samples. Each sentence is paired with an image to ensure the consistency between the text description and the image content, thus providing high-quality multimodal data for model training. The test16 and test17 datasets are used here. MS COCO is a dataset containing a wide range of images and their descriptions, extensively used in multiple tasks in computer vision and NLP. Beyond its established role as a standard benchmark for image captioning evaluation, the dataset’s rich semantic annotations make it particularly suitable for assessing model performance in cross-domain and cross-lingual translation scenarios.
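To make the data organization concrete, the sketch below shows one way such image-text pairs could be wrapped as a PyTorch dataset for training. The directory layout, file names, and image naming scheme are illustrative assumptions, not the exact structure of the Multi30K release used in this experiment.

```python
# Minimal sketch of an image-text pair dataset in the spirit of Multi30K.
# Assumed layout: <root>/<split>.<lang> holds one sentence per line, and
# <root>/images/<split>/<index>.jpg holds the paired image (both assumed).
from pathlib import Path

from PIL import Image
from torch.utils.data import Dataset


class ImageTextPairDataset(Dataset):
    """Pairs each source/target sentence with its corresponding image."""

    def __init__(self, root, split="train", src_lang="en", tgt_lang="de", transform=None):
        root = Path(root)
        self.src = (root / f"{split}.{src_lang}").read_text(encoding="utf-8").splitlines()
        self.tgt = (root / f"{split}.{tgt_lang}").read_text(encoding="utf-8").splitlines()
        self.image_dir = root / "images" / split
        self.transform = transform
        assert len(self.src) == len(self.tgt), "source/target sentence counts must match"

    def __len__(self):
        return len(self.src)

    def __getitem__(self, idx):
        image = Image.open(self.image_dir / f"{idx}.jpg").convert("RGB")
        if self.transform is not None:
            image = self.transform(image)
        return {"image": image, "source": self.src[idx], "target": self.tgt[idx]}
```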

Experimental environment

This experiment utilizes the Fairseq toolkit built upon the PyTorch framework. Fairseq is an open-source toolkit widely used in NLP tasks, particularly for constructing and training MT models. It supports various model architectures, including RNNs, convolutional neural networks, and Transformers, enabling effective performance enhancement in MT tasks. Based on Fairseq, the experimental model framework can be easily constructed, and the corresponding training tasks can be configured. The toolkit provides efficient parallel computing support and optimized training workflows, enabling effective large-scale model training.
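As a small illustration of how a trained checkpoint produced with this toolkit can be queried from Python, the snippet below loads a Transformer translation model through Fairseq's hub interface and translates one sentence. The checkpoint directory, checkpoint file name, and binarized-data path are placeholders rather than the actual artifacts of this experiment.

```python
# Sketch: load a trained Fairseq translation checkpoint and translate a sentence.
# Paths are illustrative placeholders (a directory produced by fairseq-train and
# a data-bin directory produced by fairseq-preprocess are assumed to exist).
from fairseq.models.transformer import TransformerModel

model = TransformerModel.from_pretrained(
    "checkpoints/en-de",                          # checkpoint directory (assumed)
    checkpoint_file="checkpoint_best.pt",
    data_name_or_path="data-bin/multi30k.en-de",  # binarized data (assumed)
)
model.eval()

print(model.translate("A man is riding a bicycle down the street."))
```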

Parameters setting

Table 1 exhibits the parameter settings for the experiment.

Table 1 Experimental parameter settings.

Two evaluation metrics, Bilingual Evaluation Understudy (BLEU) and Meteor, are used to comprehensively evaluate the performance of the FACT model29,30,31. These two metrics are among the most commonly used and representative automated evaluation tools in the current field of MT research. They have been widely applied in authoritative translation evaluation tasks such as the Workshop on Machine Translation (WMT), and have good universality and reliability. BLEU measures translation quality by calculating the n-gram match between the translated text and the reference answer. Specifically, BLEU calculates the precision of n-grams in the translated text, and its equation is as follows:

$${P}_{n}=\frac{{c}_{n}}{{r}_{n}}$$

(18)

\({P}_{n}\) refers to the n-gram precision; \({c}_{n}\) represents the number of times the n-gram units in the translation match those in the reference answer; \({r}_{n}\) denotes the total number of n-gram units in the translation. The final BLEU score of the translation is the weighted average of the precision for each n-gram unit, which can be written as:

$$BLEU=\text{exp}\left(\sum_{n=1}^{N}{\omega }_{n}\text{log}{P}_{n}\right)$$

(19)

\({\omega }_{n}\) is the weighting factor for each n-gram unit. To avoid giving overly high scores to shorter translations, BLEU introduces a brevity penalty (BP) to adjust the score. The calculation of BP reads:

$$BP=\left\{\begin{array}{l}1,\quad if\, c>r\\ \text{exp}\left(1-\frac{r}{c}\right),\quad if\, c\le r\end{array}\right.$$

(20)

r and c represent the length of the reference and candidate translations. The final BLEU score is obtained by combining the BP of short sentences with the weighted average of n-gram precision, as follows:

$$BLEU=BP\cdot \text{exp}(\sum_{n=1}^{N}{\omega }_{n}\text{log}{P}_{n})$$

(21)

The advantages of BLEU lie in its simplicity and speed of computation, making it suitable for large-scale evaluations. However, it relies solely on lexical-level matching, neglecting linguistic features such as semantic similarity and syntactic variations. As a result, it demonstrates limited effectiveness when handling synonyms, word order changes, or translations that maintain semantic consistency but are expressed differently.
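For illustration, the sketch below implements sentence-level BLEU exactly as laid out in Eqs. (18)–(21), using uniform n-gram weights and no smoothing. It is only a teaching aid; evaluations of the kind reported here would normally rely on standard tooling rather than a hand-rolled scorer.

```python
# Sentence-level BLEU following Eqs. (18)-(21): n-gram precisions P_n,
# a brevity penalty BP, and their weighted geometric combination.
import math
from collections import Counter


def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))


def bleu(candidate, reference, max_n=4):
    cand, ref = candidate.split(), reference.split()
    weights = [1.0 / max_n] * max_n          # uniform omega_n

    log_precisions = []
    for n in range(1, max_n + 1):
        cand_ngrams, ref_ngrams = ngrams(cand, n), ngrams(ref, n)
        matches = sum(min(count, ref_ngrams[g]) for g, count in cand_ngrams.items())  # c_n
        total = max(sum(cand_ngrams.values()), 1)                                     # r_n
        p_n = matches / total                                                          # Eq. (18)
        if p_n == 0:                         # no smoothing: a zero precision zeroes the score
            return 0.0
        log_precisions.append(math.log(p_n))

    c, r = len(cand), len(ref)
    bp = 1.0 if c > r else math.exp(1.0 - r / c)                                       # Eq. (20)
    return bp * math.exp(sum(w * lp for w, lp in zip(weights, log_precisions)))        # Eq. (21)


print(round(bleu("the cat sat on the mat", "the cat sat on the mat today"), 3))  # ~0.846
```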

In contrast to BLEU, Meteor adopts a word alignment-based evaluation method, which better considers semantic information and word order. Meteor establishes a one-to-one correspondence between the words in the candidate translation and the reference translation to calculate precision and recall. The expression is as follows:

$$P=\frac{{m}_{w}}{{M}_{hypothesis}}$$

(22)

$$R=\frac{{m}_{w}}{{N}_{reference}}$$

(23)

P represents the proportion of words in the candidate translation that match words in the reference; \({m}_{w}\) denotes the number of matched words; \({M}_{hypothesis}\) and \({N}_{reference}\) refer to the total number of words in the candidate translation and the reference, respectively. R is the proportion of reference words that are matched by the candidate. Meteor combines precision and recall into an F score that gives higher weight to recall. The equation is as follows:

$${F}_{\beta }=\frac{(1+{\beta }^{2})\cdot P\cdot R}{{\beta }^{2}\cdot P+R}$$

(24)

\(\beta\) controls the weight between precision and recall. To better handle word order issues, Meteor also introduces a chunking mechanism that penalizes translations with word order mismatches, as given in Eq. (25):

$$Penalty=\frac{{C}_{hypothesis}}{{C}_{reference}}$$

(25)

\({C}_{hypothesis}\) and \({C}_{reference}\) represent the number of chunks in the translated text and the reference answer, respectively. The final Meteor score combines the F1 score with the word order penalty, and is calculated using Eq. (26):

$$Meteor \,Score={F}_{\beta }-Penalty$$

(26)

Compared to BLEU, Meteor places greater emphasis on translation fluency, semantic retention, and linguistic naturalness, and thus generally correlates more closely with human evaluation. Employing BLEU and Meteor together allows the FACT model’s translation performance to be evaluated comprehensively along two dimensions, formal accuracy and semantic acceptability, giving a more faithful picture of its practical effectiveness in MMT.
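As a companion illustration, the function below computes a Meteor-style score using exact unigram matches only. Precision and recall follow Eqs. (22)–(23); the recall-weighted mean and the chunk-based penalty follow the standard Meteor recipe (recall weighted 9:1, penalty 0.5·(chunks/matches)³ applied multiplicatively), which Eqs. (24)–(26) present in simplified form. The official tool’s stemming, synonym, and paraphrase matching are omitted, so this is a sketch rather than a drop-in replacement.

```python
# Meteor-style scoring sketch: exact unigram precision/recall, a recall-weighted
# F-score, and a chunk-based word-order penalty (standard Meteor formulation).
def count_chunks(cand, matched):
    """Count maximal contiguous runs of candidate words that are matched."""
    chunks, in_chunk = 0, False
    for w in cand:
        if w in matched:
            if not in_chunk:
                chunks, in_chunk = chunks + 1, True
        else:
            in_chunk = False
    return chunks


def meteor_like(candidate, reference):
    cand, ref = candidate.split(), reference.split()
    matched = set(cand) & set(ref)                               # exact unigram matches
    m_w = sum(1 for w in cand if w in matched)                   # matched words in the hypothesis
    if m_w == 0:
        return 0.0

    precision = m_w / len(cand)                                  # Eq. (22)
    recall = sum(1 for w in ref if w in matched) / len(ref)      # Eq. (23)
    f_mean = 10 * precision * recall / (recall + 9 * precision)  # recall-weighted F-score

    chunks = count_chunks(cand, matched)
    penalty = 0.5 * (chunks / m_w) ** 3                          # word-order penalty
    return f_mean * (1 - penalty)


print(round(meteor_like("the cat sat on the mat", "the cat sat on the mat today"), 3))  # ~0.868
```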

Performance evaluation

(1) Comparison of model performance

Five representative baseline models are selected for comparison to comprehensively evaluate the performance of the proposed FACT model in MNMT tasks: Transformer, Latent Multimodal Machine Translation (LMMT), Dynamic Context-Driven Capsule Network for Multimodal Machine Translation (DMMT), Target-modulated Multimodal Machine Translation (TMMT), and Imagined Representation for Multimodal Machine Translation (IMMT). The Transformer is a classic architecture in MT and, as a pure-text baseline, effectively verifies the performance gains brought by multimodal mechanisms. LMMT uses latent variables to model multimodal interactions, emphasizing the semantic expressive power of image-text fusion in the latent space. DMMT introduces a dynamic context capsule mechanism to enhance semantic coupling between modalities during translation. TMMT guides visual information into the translation generation process through a target-modulation mechanism, improving target alignment between modalities. IMMT uses an “imagination” mechanism to generate intermediate image representations that assist semantic understanding and translation generation. All of these models are representative methods in recent MNMT research and offer strong comparability.

Large multimodal language models such as Generative Pre-trained Transformer 4 omni (GPT-4o) and Large Language and Vision Assistant (LLaVA) are not included for three reasons: (1) these models are closed-source or commercial, making fair comparisons under unified datasets and parameter configurations difficult; (2) their training data and computing resources far exceed those accessible to the FACT model, rendering direct comparison infeasible; and (3) FACT prioritizes structural lightness, training efficiency, and language-learning adaptability over scale advantages. Selecting openly documented, representative multimodal translation models for horizontal comparison ensures fairness under unified datasets and parameter configurations, enabling more objective validation of the FACT model’s advantages in semantic consistency modeling and future context information guidance.

The BLEU and Meteor evaluation results of each model on the En-De translation task are depicted in Fig. 3. To further verify the statistical reliability of FACT’s advantage, a paired significance test is conducted on the performance scores between FACT and each baseline model. The results are outlined in Table 2.

Fig. 3

Comparison of different models on the En-De translation task.

Table 2 Significance test.

In Fig. 3, the proposed FACT model outperforms the comparison models on both BLEU and Meteor. In the En-De translation tasks on the test16, test17, and MS COCO datasets, FACT achieves BLEU scores of 41.3, 32.8, and 29.6, respectively, significantly higher than the baselines. Its Meteor scores of 58.1, 52.6, and 49.6 likewise exceed those of the other models. Although each model’s performance varies across datasets, FACT consistently leads on both metrics, demonstrating its advantages in multimodal machine translation. Combined with Table 2, the p values of FACT against Transformer, LMMT, and DMMT are all below 0.005, indicating highly significant performance differences, while the p values against TMMT and IMMT are 0.015 and 0.028, both below the conventional 0.05 significance level. The statistical results therefore confirm that FACT’s overall translation performance is significantly better than that of all comparison methods, validating its effectiveness in MNMT.

These gains stem from two key innovations in FACT’s structural design and modeling strategy. First, in future context modeling, FACT uses an attention-based future-information guidance module to explicitly model the interaction among future target-side words, the current source sentence, and visual features, improving the directionality and contextual coherence of translation generation in a way existing models have not systematically addressed. Second, in its multimodal consistency mechanism, FACT adds a consistency loss that aligns the semantic-space projections of images and texts, strengthening the collaborative expressive capability of the visual and linguistic modalities and improving the robustness and generalization of image-text semantic fusion. These two mechanisms complement each other, allowing FACT to surpass existing models in the granularity of information modeling and the depth of semantic alignment, which is reflected in its lead on BLEU, Meteor, and related metrics.
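A minimal sketch of such a paired comparison is shown below. The text does not specify which test was used, so a paired t-test over per-test-set scores is taken here purely as an illustration; in practice a per-sentence or bootstrap comparison would give the test more power. The baseline numbers are hypothetical.

```python
# Paired significance test sketch: compare FACT scores against a baseline on the
# same evaluation sets. FACT's BLEU scores are taken from the text; the baseline
# values are hypothetical placeholders.
from scipy.stats import ttest_rel

fact_bleu     = [41.3, 32.8, 29.6]   # test16, test17, MS COCO (from Fig. 3)
baseline_bleu = [39.0, 30.5, 27.9]   # hypothetical baseline scores

t_stat, p_value = ttest_rel(fact_bleu, baseline_bleu)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```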

(2) Ablation experiment

Ablation experiments are conducted on variants of the FACT model to explore how the model integrates visual features to enhance translation performance. Table 3 lists the model variants.

Table 3 Names and descriptions of model variants.

Figure 4 demonstrates the results of ablation experiments on the En-De translation task, including BLEU and Meteor scores for the FACT model, three variant models, and the Transformer model. The “Transformer” in Fig. 4 is a pure text model without any image information or consistency modeling, serving as a baseline control.

Fig. 4

Ablation experiment results on En-De translation task.

Figure 4 reveals that for the En-De translation task, the BLEU and Meteor scores of the FACT model decrease when either the future target context information supervision function \({L}_{fd}\) or the multimodal consistency loss function \({L}_{md}\) is removed. When both \({L}_{fd}\) and \({L}_{md}\) are removed, the FACT model’s performance experiences the largest drop, but it still outperforms the Transformer model. Specifically, the BLEU scores decline by 6.05%, 8.23%, and 9.46% on the test16, test17, and MS COCO datasets, respectively, and the Meteor scores decrease by 4.3%, 5.7%, and 7.86%. These results indicate that the future target context information and the multimodal consistency loss function both contribute substantially to the FACT model’s translation performance.

Ablation experiments are also performed on the En-Fr and En-Cs translation tasks to verify the FACT model’s generalization ability. Figures 5 and 6 show the results.

Fig. 5

Ablation experiment results on En-Fr translation task.

Fig. 6

Ablation experiment results on En-Cs translation task.

The results of the En-Fr translation task exhibit a pattern similar to the En-De findings. When both the future target context information supervision function \({L}_{fd}\) and the multimodal consistency loss function \({L}_{md}\) are deactivated, the FACT model achieves BLEU scores of 60.1, 53.0, and 43.8, and Meteor scores of 74.8, 70.1, and 63.7 on the test16, test17, and MS COCO datasets, respectively. These scores remain higher than those of the Transformer model.

Figure 6 shows that the results of the En-Cs translation task on the test16 dataset are consistent with those of the En-De and En-Fr translation tasks. When the future target context information supervision function \({L}_{fd}\) and the multimodal consistency loss function \({L}_{md}\) are removed, the FACT model achieves BLEU and Meteor scores of 31.7 and 51.8, both still exceeding those of the Transformer model. The results from the En-Fr and En-Cs translation tasks further confirm that the FACT model can leverage multimodal consistency to learn future target context information, thus enhancing the performance of MMT.

(3) Impact of sentence length on model performance

The generated sentence lengths and BLEU scores for the FACT and Transformer models on the En-De translation task across the test16 and test17 datasets under varying source language sentence lengths are compared. Figure 7 presents the results.

Fig. 7

Performance comparison of models at different source sentence lengths.

Figure 7 shows that as the source sentence length increases, the FACT model demonstrates a growing advantage in translation quality over the Transformer model. In the En-De translation task, the FACT model achieves a BLEU score of 44.1 for short sentences (0–10 words), outperforming the Transformer’s 41.0, with the two models producing similarly short outputs (lengths of 8.4 and 8.2, respectively). As the source sentence length grows, the FACT model’s quality advantage becomes more pronounced, and its generated translation lengths scale with the source length, yielding more reasonable output lengths for longer sentences. These findings indicate that the FACT model handles long sentences well and can more effectively predict future context in long-sentence translation, thereby improving translation quality.

(4) Impact of the model on learning investment effect

To explore the effectiveness of the FACT model, experiments are conducted to evaluate its application in language learning. Figure 8 compares the learning process quality, learning efficiency, and learning outcomes between FACT and Transformer models.

Fig. 8

Comparison of model impact on learning investment effect.

Figure 8 suggests that the FACT model exhibits a distinct advantage over the Transformer model in language learning tasks. Specifically, it outperforms Transformer across multiple metrics, including learning efficiency, translation quality, user satisfaction, and understanding improvement. The learning efficiency of FACT is 83.2 words per hour, compared to 74.6 words per hour for the Transformer, highlighting FACT’s potential to accelerate the learning process. Additionally, FACT achieves a translation quality score of 82.7, higher than the Transformer’s 78.9, indicating its superior performance in translation quality. It also scores higher in both user satisfaction and understanding improvement. Overall, the FACT model offers higher efficiency and better learning outcomes in language learning tasks, demonstrating significant application potential.




Artificial Intelligence topic of chamber discussion (VIDEO)


MONTICELLO – The Sullivan County Chamber of Commerce, in partnership with the Orthodox Jewish Chamber of Commerce, the Brooklyn Chamber of Commerce, and the Greater New York Chamber of Commerce, is hosting a cross-chamber mixer and panel to explore the application of AI in business on Wednesday, July 30th.

The event, themed “Keeping Your Business Current in the Age of AI,” will run from 5 p.m. to 7:30 p.m. at The Kartrite Resort in Monticello.

Sullivan Chamber President Ashley Leavitt highlighted the goal of the event, stating, “We’re coming together as one, not only to network and mingle and mix as all business owners and entrepreneurs and whatever else. But also to transparently talk about AI because technology is changing the business game.”

Leavitt emphasized the importance of understanding AI in business, noting, “There are a lot of things that if you’re not keeping up to dating in technology, you’re falling behind, and there’s a lot of things that can be streamlined that’ll save our small businesses a lot of time, energy and money with the new technology. So staying up to date and how they can automate responses or how they can automate some, you know, like QuickBooks processes and stuff of that sort.”

For more information about this event, search for “keeping your business current in the age of AI” or visit catskills.com directly through the Chamber’s website.






Studying a galaxy far, far away could become easier with help from AI, says researcher



Youssef Zaazou graduated with a master’s of science from the Memorial University of Newfoundland in 2025. (Memorial University/Richard Blenkinsopp)

A recent Memorial University of Newfoundland graduate says his research may help astronomers study galaxies more efficiently, with help from artificial intelligence.

As part of his master of science research, Youssef Zaazou developed an AI-based image-processing technique that generates predictions of what certain galaxies may look like in a given wavelength of light.

“Think of it as translating galaxy images across different wavelengths of light,” Zaazou told CBC News over email.

He did this by researching past methods for similar tasks, adapting current AI tools for his specific purposes, finding and curating the right dataset to train the models, along with plenty of trial and error.

“Instead of … having to look at an entire region of sky, we can get predictions for certain regions and figure out, ‘Oh this might be interesting to look at,'” said Zaazou. “So we can then prioritize how we use our telescope resources.”

An excerpt from Zaazou’s research showing green-light inputs to the model, the model’s outputs in red light, the true red-light values the model aims to replicate, and the difference between the outputs and the true values. (Submitted by Youssef Zaazou)

Zaazou recently teamed up with his supervisors Terrence Tricco and Alex Bihlo to co-author a paper on his research in The Astrophysical Journal, which is published by The American Astronomical Society.

Tricco says this research could also help justify the allocation of time on high-demand telescopes like the Hubble Space Telescope, which assigns observing time through a competitive process.

A future for AI in astronomy

Both Tricco and Zaazou emphasised the research does not use AI to replace current methods but to augment them.

Tricco says that Zaazou’s findings have the potential to help guide future telescope development, and predict what astronomers might expect to see, making for more efficient exploration.

Calling The Astrophysical Journal the “gold standard” for astronomy journals in the world, Tricco hopes the wider astronomical community will take notice of Zaazou’s findings.

“We want to have them be aware of this because as I was mentioning, AI, machine learning, and physics, astronomy, it’s still very new for physicists and for astronomers, and they’re a little bit hesitant about these tools,” said Tricco.

Terrence Tricco, an assistant professor in MUN’s Department of Computer Science, says Zaazou’s findings have the potential to help guide future telescope development. (Submitted by Terrence Tricco)

Tricco praised the growing presence of space research in general at Memorial University.

“We are here, we’re doing great research,” he said.

He added growing AI expertise is also transferable to other disciplines.

“I think that builds into our just tech ecosystem here as well.”

‘Only the beginning’

Though Zaazou’s time as a Memorial University student is over, he hopes to see research in this area continue to grow.

“I’m hoping this is the beginning of further research to be done,” he said.

Though Zaazou described his contribution to the field as merely a “pebble,” he’s happy to have been able to do his part.

“I’m an astronomer. And it just feels great to be able to say that and to be able to have that little contribution because I just love the field and I’m fascinated by everything out there,” said Zaazou.



