
AI in Travel

Yandex Türkiye launches AI-powered finance and travel tools, eyes expansion in mobility sector

Photo shows Yandex Türkiye website on the display of a computer, accessed on July 22, 2025. (Adobe Stock Photo)

July 23, 2025 12:06 AM GMT+03:00

Yandex Türkiye unveiled a series of summer updates Friday, introducing new artificial intelligence-driven tools for finance, travel planning, and search functionality, as part of efforts to broaden its digital services and strengthen its presence in Türkiye’s growing tech and mobility markets.

The new offerings, branded Yandex Travel and Yandex Finance, aim to help users plan vacations and business trips more efficiently while providing real-time financial data. These features are integrated into Yandex’s search engine Yazeka and its answer platform Yandex Cevap, which now includes a “Reasoning Mode” to deliver deeper, more structured responses using wider information sources.

Yandex Türkiye CEO and Yandex Search International Chief Executive Alexander Popovskiy emphasized the company’s ambitions to expand in Türkiye’s ride-hailing sector, which he said remains underserved and heavily regulated.

Turkish market ‘significantly underserved’

“We have always been saying that the Turkish market is significantly underserved in terms of ride-hailing, in terms of taxi services,” Popovskiy told Turkish news agency Anadolu. “The current regulation is very strict. Supply is very limited. It is sensible in such cities like Istanbul.”

Popovskiy estimated that Istanbul’s ride-hailing market could support up to ten times more taxis than currently operate and suggested the sector’s financial value could grow fivefold if liberalized. He stressed that liberalization would benefit users, drivers, fleet owners, and platforms alike.

Rear view of a woman sitting at a computer with the Yandex logo on the monitor in Novosibirsk, Russia on September 16, 2020. (Adobe Stock Photo)

As a preparatory step, Yandex is gaining experience in cities such as Ankara, Izmir, and Antalya before fully entering Istanbul’s competitive market. Yandex Go recently received an electronic service license from Istanbul Metropolitan Municipality, a key milestone toward expanding in the city.

In travel, Yandex Travel allows users to compare hotel and flight prices across multiple partners, with plans to add other transportation options like buses. Popovskiy said the company aims to integrate artificial intelligence more deeply to transform travel planning into an interactive, chat-like experience.

AI-powered access to currency exchange rates

Yandex Finance Türkiye provides users with AI-powered access to currency exchange rates, stock prices, cryptocurrency updates, and economic news. Popovskiy said it offers a more comprehensive experience than similar services elsewhere, and the company is working toward integration with Yazeka to potentially deliver personalized financial recommendations.

Yazeka, Yandex’s answer engine, now handles nearly one in four search queries in Türkiye. Its latest feature, Reasoning Mode, enables users to request more detailed answers by drawing from a wider range of verified sources.

“With internet search, you should always remember the balance between quality and speed,” Popovskiy said. “If you want a more comprehensive answer, you just click one button. And then Yazeka starts thinking more deeply about your request.”

Popovskiy also hinted at future plans for Yazeka to become a standalone application, possibly with its own unique personality.

The 2025 updates mark a significant push by Yandex to expand its footprint across Türkiye’s mobility, travel, finance, and AI sectors.



How many Aussies are using AI to plan travel, who’s utilising it & what exactly are they using it for?

Nearly a third of Aussies are now using artificial intelligence (AI) to help plan their holidays, according to new research from Compare the Market.

In a survey of over 1,000 Australian adults, three in ten (28.8%) respondents said they relied on AI tools to lock in travel deals, scout destinations and find activities. 

More than one in ten (11.5%) are specifically using AI for destination recommendations – the most popular use of AI in travel – while a similar number (10.3%) are seeking out deals. 

Meanwhile, nearly one in ten (9.4%) look for recreational activities and accommodation, while one in 11 (9%) use AI to create itineraries, and nearly the same number (8.2%) search for flights and transport. A small percentage (3.2%) use AI to understand currency conversion.

AI can be used for simple flight searches.

“Australians love a good holiday and have never been afraid to ask for help when planning the perfect getaway,” Compare the Market’s Chris Ford says.

“Our latest data highlights a shift in the way travellers are approaching their planning, with convenience, personalisation and speed driving the adoption of innovative AI tools.” 

When it comes to who’s using the technology, the survey reveals a clear generational divide. 

The study found that, unsurprisingly, Gen Z and Millennials are the most likely to engage with AI when planning a trip. 

On the other hand, the vast majority (93%) of Baby Boomers and three-quarters (76%) of Gen Xers said they’ve never used AI tools to help book a holiday.

Interestingly, Gen Z and Gen X lean on AI for destination recommendations, Millennials for recreational activities, and Baby Boomers primarily for accommodation.

Advice, but not an advisor

A good agent can inspire you and do all the legwork.

While AI adoption isn’t surprising, Ford cautions that it should be treated as a tool, not a travel agent – and travellers should always sense-check recommendations. 

“It’s likely that travellers are using these tools in addition to chatting with travel agents, conducting desktop research or seeking ideas and inspiration from social media,” he notes.

Despite being a “great starting point” in the overall journey, Ford says that it’s important to “always ensure you’re crossing your ‘t’s and dotting your ‘i’s” when using AI.

“Many of these tools and services are still in their infancy stage and may not be 100% accurate, so do your own research to ensure you’re equipped with the right tools and information for your trip,” he states. 

“The last thing we want to see is anyone getting themselves into a potentially dangerous or unsafe situation based on the recommendations from AI.”

With this in mind, Ford also reminds travellers not to overlook insurance.

“Travel insurance is designed to protect you against unexpected events when you’re travelling domestically or internationally and AI may not be forthcoming with these types of incidents,” he says. 

“The type of cover offered by insurers can vary, but consider policies that cover scenarios for the kind of holiday you’re booking.” 

Where AI “falls short”

A family on the Mekong, Vietnam. Image Shutterstock

Karryon Features Editor Gaya Avery says while AI handles bookings, great travel agents go further — acting as trusted advisors, curators and problem-solvers.

“They don’t just book travel – they shape it, tailoring experiences to each client’s needs. That’s where artificial intelligence falls short,” she said.

“Travel professionals provide value: personalised service, insider knowledge and human connections that AI simply can’t replicate.”

So does high AI uptake mark the death of the travel agent? Get Gaya’s take on the technology from earlier this year here.






Benefits of Using LiteLLM for Your LLM Apps

Image by Author | ideogram.ai

Introduction
With the surge of large language models (LLMs) in recent years, many LLM-powered applications are emerging. Implementing LLMs has introduced features that previously did not exist.

Over time, many models and products have become available, each with its pros and cons. Unfortunately, there is still no standard way to access all of them, as each company develops its own framework. That is why an open-source tool such as LiteLLM is useful when you need standardized access to your LLM apps without any additional cost.

In this article, we will explore why LiteLLM is beneficial for building LLM applications.

Let’s get into it.

Benefit 1: Unified Access
LiteLLM’s biggest advantage is its compatibility with different model providers. The tool supports over 100 different LLM services through standardized interfaces, allowing us to access them regardless of the model provider we use. It’s especially useful if your applications utilize multiple different models that need to work interchangeably.

A few examples of the major model providers that LiteLLM supports include:

  • OpenAI and Azure OpenAI, with models like GPT-4.
  • Anthropic, with models like Claude.
  • AWS Bedrock & SageMaker, supporting models like Amazon Titan and Claude.
  • Google Vertex AI, with models like Gemini.
  • Hugging Face Hub and Ollama for open-source models like LLaMA and Mistral.

The standardized format follows OpenAI’s framework, using its chat/completions schema. This means that we can switch models easily without needing to understand the original model provider’s schema.

For example, here is the Python code to use Google’s Gemini model with LiteLLM.

from litellm import completion

prompt = "YOUR-PROMPT-FOR-LITELLM"
api_key = "YOUR-API-KEY-FOR-LLM"

# The same OpenAI-style call works for any supported provider;
# only the "provider/model" string changes.
response = completion(
    model="gemini/gemini-1.5-flash-latest",
    messages=[{"content": prompt, "role": "user"}],
    api_key=api_key,
)

# Print the generated answer from the first choice.
print(response['choices'][0]['message']['content'])

 

You only need to obtain the model name and the respective API keys from the model provider to access them. This flexibility makes LiteLLM ideal for applications that use multiple models or for performing model comparisons.
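To illustrate that interchangeability, the hypothetical helper below builds the same request payload for two different providers; only the model string changes. (The helper and the model names are illustrative assumptions, not part of LiteLLM itself.)

```python
# LiteLLM uses the OpenAI chat/completions message schema for every provider,
# so only the "provider/model" string differs between backends.
def build_request(model, prompt):
    return {
        "model": model,
        "messages": [{"content": prompt, "role": "user"}],
    }

openai_req = build_request("gpt-4o", "Summarize LiteLLM in one sentence.")
gemini_req = build_request("gemini/gemini-1.5-flash-latest",
                           "Summarize LiteLLM in one sentence.")

# Only the model field differs; the message payload is identical.
print(openai_req["messages"] == gemini_req["messages"])  # True
```

Swapping providers in an application then becomes a one-line change to the model string, with no changes to the surrounding request-building code.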

 

Benefit 2: Cost Tracking and Optimization

 
When working with LLM applications, it is important to track token usage and spending for every model and provider you integrate, especially in real-time scenarios.

LiteLLM enables users to maintain a detailed log of model API call usage, providing all the necessary information to control costs effectively. For example, the `completion` call above will have information about the token usage, as shown below.

usage=Usage(completion_tokens=10, prompt_tokens=8, total_tokens=18, completion_tokens_details=None, prompt_tokens_details=PromptTokensDetailsWrapper(audio_tokens=None, cached_tokens=None, text_tokens=8, image_tokens=None))

Accessing the response’s hidden parameters (exposed on the response object as `_hidden_params`) provides even more detail, including the cost, with output similar to the following:

{'custom_llm_provider': 'gemini',
 'region_name': None,
 'vertex_ai_grounding_metadata': [],
 'vertex_ai_url_context_metadata': [],
 'vertex_ai_safety_results': [],
 'vertex_ai_citation_metadata': [],
 'optional_params': {},
 'litellm_call_id': '558e4b42-95c3-46de-beb7-9086d6a954c1',
 'api_base': 'https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash-latest:generateContent',
 'model_id': None,
 'response_cost': 4.8e-06,
 'additional_headers': {},
 'litellm_model_name': 'gemini/gemini-1.5-flash-latest'}

 

There is a lot of information here, but the most important field is `response_cost`, which estimates the actual charge for that call (the real charge may still be zero if the provider offers free access). Users can also define custom pricing for models (per token or per second) to calculate costs accurately.

A more advanced cost-tracking implementation will also allow users to set a spending budget and limit, while also connecting the LiteLLM cost usage information to an analytics dashboard to more easily aggregate information. It’s also possible to provide custom label tags to help attribute costs to certain usage or departments.
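As a sketch of that idea, the hypothetical tracker below aggregates per-call `response_cost` values under label tags and enforces a simple overall budget. (The class and the tags are illustrative, not LiteLLM's built-in budget API.)

```python
from collections import defaultdict

# Hypothetical local cost tracker: accumulates response_cost values per tag
# and raises once a simple overall budget is exceeded.
class CostTracker:
    def __init__(self, budget):
        self.budget = budget
        self.spend = defaultdict(float)

    def record(self, tag, response_cost):
        self.spend[tag] += response_cost
        if self.total() > self.budget:
            raise RuntimeError(f"budget of ${self.budget} exceeded")

    def total(self):
        return sum(self.spend.values())

tracker = CostTracker(budget=0.01)
tracker.record("marketing", 4.8e-06)    # response_cost from the call above
tracker.record("engineering", 2.4e-06)  # a second, hypothetical call
print(round(tracker.total(), 8))
```

Feeding each call's `response_cost` into a structure like this (or into an external analytics dashboard) is what makes per-department cost attribution possible.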

By providing detailed cost usage data, LiteLLM helps users and organizations optimize their LLM application costs and budget more effectively. 

 

Benefit 3: Ease of Deployment

 
LiteLLM is designed for easy deployment, whether for local development or a production environment. The Python library installs with modest resources, so we can run LiteLLM on a local laptop or host it in a containerized Docker deployment without complex additional configuration.

Speaking of configuration, we can set up LiteLLM more efficiently using a YAML config file to list all the necessary information, such as the model name, API keys, and any essential custom settings for your LLM Apps. You can also use a backend database such as SQLite or PostgreSQL to store its state.
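For illustration, a minimal config sketch might look like the following (the model alias and environment-variable name are assumptions; consult LiteLLM's documentation for the exact schema your version expects):

```yaml
model_list:
  - model_name: gemini-flash              # alias your application calls
    litellm_params:
      model: gemini/gemini-1.5-flash-latest
      api_key: os.environ/GEMINI_API_KEY  # resolved from the environment
```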

As for data privacy, when you deploy LiteLLM yourself you are responsible for it, but self-hosting is also more secure: data never leaves your controlled environment except when it is sent to the LLM providers. For enterprise users whose applications need a more secure environment, LiteLLM also offers Single Sign-On (SSO), role-based access control, and audit logs.

Overall, LiteLLM provides flexible deployment options and configuration while keeping the data secure.

 

Benefit 4: Resilience Features

 
Resilience is crucial when building LLM Apps, as we want our application to remain operational even in the face of unexpected issues. To promote resilience, LiteLLM provides many features that are useful in application development.

One feature that LiteLLM has is built-in caching, where users can cache LLM prompts and responses so that identical requests don’t incur repeated costs or latency. It is a useful feature if our application frequently receives the same queries. The caching system is flexible, supporting both in-memory and remote caching, such as with a vector database.
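The idea behind prompt-response caching can be sketched in plain Python. (This is an illustration of the concept, not LiteLLM's actual cache API; the function names are made up.)

```python
# Illustrative prompt-response cache: identical requests are served from
# memory instead of re-calling the provider.
cache = {}
provider_calls = 0

def fake_completion(model, prompt):
    # Stand-in for the real provider call; counts how often it is invoked.
    global provider_calls
    provider_calls += 1
    return f"answer from {model}"

def cached_completion(model, prompt):
    key = (model, prompt)
    if key not in cache:
        cache[key] = fake_completion(model, prompt)
    return cache[key]

first = cached_completion("gemini/gemini-1.5-flash-latest", "What is LiteLLM?")
second = cached_completion("gemini/gemini-1.5-flash-latest", "What is LiteLLM?")
print(provider_calls)  # 1: the second identical request hit the cache
```

LiteLLM's real cache works the same way conceptually, but can also store entries in remote backends so that multiple application instances share the savings.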

Another feature of LiteLLM is automatic retries: when a request fails due to errors such as timeouts or rate limits, LiteLLM can automatically retry it. It’s also possible to configure fallback mechanisms, such as switching to another model once the retry limit is reached.

Lastly, we can set rate limiting for defined requests per minute (RPM) or tokens per minute (TPM) to limit the usage level. It’s a great way to cap specific model integrations to prevent failures and respect application infrastructure requirements.
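The retry-and-fallback behaviour described above can be sketched as follows. (A simplified illustration under assumed names; LiteLLM exposes this through its own retry and fallback settings rather than this helper.)

```python
def complete_with_fallbacks(send, models, max_retries=2):
    """Try each model in order, retrying transient failures, before
    falling back to the next model. `send` stands in for the provider call."""
    last_error = None
    for model in models:
        for attempt in range(max_retries + 1):
            try:
                return send(model)
            except TimeoutError as err:
                last_error = err  # transient failure: retry, then fall back
    raise RuntimeError("all models exhausted") from last_error

# Usage sketch: the first model always times out, the second succeeds.
def flaky_send(model):
    if model == "gemini/gemini-1.5-flash-latest":
        raise TimeoutError("provider timed out")
    return f"answer from {model}"

result = complete_with_fallbacks(
    flaky_send,
    ["gemini/gemini-1.5-flash-latest", "gpt-4o"],
)
print(result)  # answer from gpt-4o
```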

 

Conclusion

 
In the era of LLM product growth, it has become much easier to build LLM applications. However, with so many model providers out there, it becomes hard to establish a standard for LLM implementation, especially in the case of multi-model system architectures. This is why LiteLLM can help us build LLM Apps efficiently.

I hope this has helped!
 
 

Cornellius Yudha Wijaya is a data science assistant manager and data writer. While working full-time at Allianz Indonesia, he loves to share Python and data tips via social media and written media. Cornellius writes on a variety of AI and machine learning topics.




Copyright © 2025 AISTORIZ. For enquiries email at prompt@travelstoriz.com