
AI in Travel

Industry leaders on the future of digital identity and AI agents



The European Commission’s digital wallet initiative will take effect next year, but it’s just the first step. During a session at Phocuswright Europe 2025, industry leaders sat down to discuss the impact of these digital identity wallets and how artificial intelligence (AI) will come into play.

Nick Price, founder of Netsys Technology and co-chair of the Decentralized Identity Foundation (DIF) Hospitality & Travel Special Interest Group, said DIF is working on standards and use cases for self-sovereign identity in the travel industry and trying to address a major gap: traveler profile and preferences.

“Anybody who’s in the hospitality industry will know that this isn’t just a name and address and information derived from a passport,” Price said. In reality, this may include preferences about where you sit on a plane, the proximity of your hotel room to the elevator, your allergies, etc.

This information is “deep and personal and meaningful, and it’s constantly changing,” Price said. “It is the grease in the machine of travel that actually enables the surprise and delight moments to happen—and that information is locked away in loyalty systems that are behind the bars of individual travel providers today, and they just simply are out of date, incomplete and unused and unusable in the most part.”

Jamie Smith, founder of Customer Futures, also pointed to traveler preferences and profile history being “locked up” in customer relationship management systems and travel provider technologies.

To achieve a truly seamless journey, that information should be held by the customer.

“The only place to organize this information is the individual. The only 360-degree view of the customer is the customer, and there’s no tech on that side,” Smith said.

The concept of “empowerment tech,” then, involves three things: a digital wallet, a data store and AI agents, like Perplexity, for the customer.

Overall, the digital wallet is set to have a “major impact on the world of travel,” Annet Steenbergen, advisor for the European Union Digital Wallet Consortium, said. And that’s not just for consumers but also for businesses that “will be able to instantaneously verify that you’re dealing with another real business,” she said.

The trio also commented on what the full traveler journey will look like in the future and how digital wallets can add new revenue streams and improve security.

Watch the full session, “Executive Panel: Agents, ID and the Future of Travel,” moderated by Mike Coletta, senior manager of research and innovation at Phocuswright.

 




AI in Travel

OpenAI Rolls Out ChatGPT Agent Combining Deep Research and Operator 



OpenAI has launched the ChatGPT agent, a new feature that allows ChatGPT to act independently using its own virtual computer. The agent can navigate websites, run code, analyse data, and complete tasks such as planning meetings, building slideshows, and updating spreadsheets. 

The feature is now rolling out to Pro, Plus, and Team users, with access for Enterprise and Education users expected in the coming weeks.

The agent integrates previously separate features like Operator and Deep Research, combining their capabilities into a single system. Operator allowed web interaction through clicks and inputs, while deep research focused on synthesis and summarisation. 

The new system allows fluid transition between reasoning and action in a single conversation.

“You can use it to effortlessly plan and book travel itineraries, design and book entire dinner parties, or find specialists and schedule appointments,” OpenAI said in a statement. “ChatGPT requests permission before taking actions of consequence, and you can easily interrupt, take over the browser, or stop tasks at any point.”

Users can activate agent mode via the tools dropdown in ChatGPT’s composer window. The agent uses a suite of tools, including a visual browser, a text-based browser, terminal access, and API integration. It can also work with connectors like Gmail and GitHub, provided users log in via a secure takeover mode.

All tasks are carried out on a virtual machine that preserves state across tool switches. This allows ChatGPT to browse the web, download files, run commands, and review outputs, all within a single session. Users can interrupt or redirect tasks at any time without losing progress.

ChatGPT agent is currently limited to 400 messages per month for Pro users and 40 for Plus and Team users. Additional usage is available through credit-based options. Support for the European Economic Area and Switzerland is in progress.

The standalone Operator research preview will be phased out in the coming weeks. Users who prefer longer-form, slower responses can still access deep research mode via the dropdown menu.

While slideshow generation is available, OpenAI noted that formatting may be inconsistent, and export issues remain. Improvements to this capability are under development.

The system showed strong performance across benchmarks. On Humanity’s Last Exam, it scored a new state-of-the-art pass@1 rate of 41.6%, increasing to 44.4% when using parallel attempts. On DSBench, which tests data science workflows, it reached 89.9% on analysis tasks and 85.5% on modelling, significantly higher than human baselines.

In investment banking modelling tasks, the agent achieved a 71.3% mean accuracy, outperforming OpenAI’s o3 model and the earlier deep research tool. It also scored 68.9% on BrowseComp and 65.4% on WebArena, both benchmarks measuring real-world web navigation and task completion.

However, OpenAI acknowledged new risks with this capability. “This is the first time users can ask ChatGPT to take actions on the live web,” the company said. “We’ve placed a particular emphasis on safeguarding ChatGPT agent against adversarial manipulation through prompt injection.”

To counter these risks, ChatGPT requires explicit confirmation before high-impact actions like purchases, restricts actions such as bank transfers, and offers settings to delete browsing data and log out of sessions. Sensitive inputs entered during takeover sessions are not collected or stored.

The new system is classified under OpenAI’s “High Biological and Chemical” capability tier, triggering additional safeguards. The company has worked with external biosecurity experts and introduced monitoring tools, dual-use refusal training, and threat modelling to prevent misuse.





AI in Travel

Lovable Becomes AI Unicorn with $200 Million Series A Led by Accel in Less than 8 Months



Stockholm-based AI startup Lovable has raised $200 million in a Series A funding round led by Accel, pushing its valuation to $1.8 billion. The announcement comes just eight months after the company’s launch.

Lovable allows users to build websites and apps using natural language prompts, similar to platforms like Cursor. The company claims over 2.3 million active users, with more than 180,000 of them now paying subscribers. 

CEO Anton Osika said the company has reached $75 million in annual recurring revenue within seven months.

“Today, there are 47M developers worldwide. Lovable is going to produce 1B potential builders,” he said in a post on X.

The latest round saw participation from existing backers, including 20VC, byFounders, Creandum, Hummingbird, and Visionaries Club. In February, Creandum led a $15 million pre-Series A investment when Lovable had 30,000 paying customers and $17 million in ARR, having spent only $2 million.

The company currently operates with a team of 45 full-time employees. The Series A round also attracted a long list of angel investors, including Klarna CEO Sebastian Siemiatkowski, Remote CEO Job van der Voort, Slack co-founder Stewart Butterfield, and HubSpot co-founder Dharmesh Shah.

Most of Lovable’s users are non-technical individuals building prototypes that are later developed further with engineering support. According to a press release, more than 10 million projects have been created on the platform to date.

Osika said the company is not targeting existing developers but a new category of users entirely. “99% of the world’s best ideas are trapped in the heads of people who can’t code. They have problems. They know the solutions. They just can’t build them.”

Lovable is also being used by enterprises such as Klarna and HubSpot, and its leadership sees the platform evolving into a tool for building full-scale production applications. 

“Every day, brilliant founders and operators with game-changing ideas hit the same wall: they don’t have a developer to realise their vision quickly and easily,” Osika said in a statement.

Osika also said on X that he has become an angel investor in a software startup built using Lovable. 

In another recent example, Osika noted that a Brazilian edtech company built an app using Lovable that generated $3 million in 48 hours.

Lovable’s growth trajectory suggests increased adoption among both individual users and enterprise customers, positioning it as a significant player in the growing AI-powered software creation market.





AI in Travel

Build Your Own Simple Data Pipeline with Python and Docker



Image by Author | Ideogram

 

Data is the asset that drives our work as data professionals. Without proper data, we cannot perform our tasks, and our business will fail to gain a competitive advantage. Thus, securing suitable data is crucial for any data professional, and data pipelines are the systems designed for this purpose.

Data pipelines are systems designed to move and transform data from a source to a destination. These systems are part of the overall infrastructure for any business that relies on data, as they guarantee that our data is reliable and always ready to use.

Building a data pipeline may sound complex, but a few simple tools are sufficient to create reliable data pipelines with just a few lines of code. In this article, we will explore how to build a straightforward data pipeline using Python and Docker that you can apply in your everyday data work.

Let’s get into it.

 

Building the Data Pipeline

 
Before we build our data pipeline, let’s understand the concept of ETL, which stands for Extract, Transform, and Load. ETL is a process where the data pipeline performs the following actions:

  • Extract data from various sources. 
  • Transform data into a valid format. 
  • Load data into an accessible storage location.

ETL is a standard pattern for data pipelines, so what we build will follow this structure. 

With Python and Docker, we can build a data pipeline around the ETL process with a simple setup. Python is a valuable tool for orchestrating any data flow activity, while Docker is useful for managing the data pipeline application’s environment using containers.

Let’s set up our data pipeline with Python and Docker. 

 

Step 1: Preparation

First, we must ensure that we have Python and Docker installed on our system (we will not cover installation here).

For our example, we will use the heart attack dataset from Kaggle as the data source to develop our ETL process.  

With everything in place, we will prepare the project structure. Overall, the simple data pipeline will have the following skeleton:

simple-data-pipeline/
├── app/
│   └── pipeline.py
├── data/
│   └── Medicaldataset.csv
├── Dockerfile
├── requirements.txt
└── docker-compose.yml

 

There is a main folder called simple-data-pipeline, which contains:

  • An app folder containing the pipeline.py file.
  • A data folder containing the source data (Medicaldataset.csv).
  • The requirements.txt file for environment dependencies.
  • The Dockerfile for the Docker configuration.
  • The docker-compose.yml file to define and run our multi-container Docker application.

We will first fill out the requirements.txt file, which contains the libraries required for our project.

In this case, we will only use the following library:
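pandas

pandas is the only third-party package that pipeline.py imports; the os module ships with Python’s standard library, so it does not need to be listed.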

 

In the next section, we will set up the data pipeline using our sample data.

 

Step 2: Set up the Pipeline

We will set up the Python pipeline.py file for the ETL process. In our case, we will use the following code.

import pandas as pd
import os

input_path = os.path.join("/data", "Medicaldataset.csv")
output_path = os.path.join("/data", "CleanedMedicalData.csv")

def extract_data(path):
    df = pd.read_csv(path)
    print("Data Extraction completed.")
    return df

def transform_data(df):
    df_cleaned = df.dropna()
    df_cleaned.columns = [col.strip().lower().replace(" ", "_") for col in df_cleaned.columns]
    print("Data Transformation completed.")
    return df_cleaned

def load_data(df, output_path):
    df.to_csv(output_path, index=False)
    print("Data Loading completed.")

def run_pipeline():
    df_raw = extract_data(input_path)
    df_cleaned = transform_data(df_raw)
    load_data(df_cleaned, output_path)
    print("Data pipeline completed successfully.")

if __name__ == "__main__":
    run_pipeline()

 

The pipeline follows the ETL process, where we load the CSV file, perform data transformations such as dropping missing data and cleaning the column names, and load the cleaned data into a new CSV file. We wrapped these steps into a single run_pipeline function that executes the entire process.
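To make the column cleanup in transform_data concrete, here is a minimal standalone sketch; the column names below are hypothetical examples, since the actual headers come from the Kaggle CSV:

# Hypothetical column names, used only to illustrate the cleanup step.
raw_columns = ["Age", "Heart Rate", " Blood Sugar"]
cleaned = [col.strip().lower().replace(" ", "_") for col in raw_columns]
print(cleaned)  # ['age', 'heart_rate', 'blood_sugar']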

 

Step 3: Set up the Dockerfile

With the Python pipeline file ready, we will fill in the Dockerfile to set up the configuration for the Docker container using the following code:

FROM python:3.10-slim

WORKDIR /app
COPY ./app /app
COPY requirements.txt .

RUN pip install --no-cache-dir -r requirements.txt

CMD ["python", "pipeline.py"]

 

In the code above, we specify that the container will use Python version 3.10 as its environment. Next, we set the container’s working directory to /app and copy everything from our local app folder into the container’s app directory. We also copy the requirements.txt file and execute the pip installation within the container. Finally, we specify the command to run the Python script when the container starts.

With the Dockerfile ready, we will prepare the docker-compose.yml file to manage the overall execution:

version: '3.9'

services:
  data-pipeline:
    build: .
    container_name: simple_pipeline_container
    volumes:
      - ./data:/data

 

The YAML file above, when executed, will build the Docker image from the current directory using the available Dockerfile. We also mount the local data folder to /data inside the container, which makes the source dataset accessible to our script and lets the output CSV appear in the same folder on the host.

 

Executing the Pipeline

 
With all the files ready, we will execute the data pipeline in Docker. Go to the project root folder and run the following command in your command prompt to build the Docker image and execute the pipeline.

docker compose up --build

 

If you run this successfully, you will see an informational log like the following:

 ✔ data-pipeline                           Built                                                                                   0.0s 
 ✔ Network simple_docker_pipeline_default  Created                                                                                 0.4s 
 ✔ Container simple_pipeline_container     Created                                                                                 0.4s 
Attaching to simple_pipeline_container
simple_pipeline_container  | Data Extraction completed.
simple_pipeline_container  | Data Transformation completed.
simple_pipeline_container  | Data Loading completed.
simple_pipeline_container  | Data pipeline completed successfully.
simple_pipeline_container exited with code 0

 

If everything is executed successfully, you will see a new CleanedMedicalData.csv file in your data folder. 

Congratulations! You have just created a simple data pipeline with Python and Docker. Try using various data sources and ETL processes to see if you can handle a more complex pipeline.
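If you want to confirm the output beyond the container logs, a short pandas check run from the project root on the host is enough; this sketch assumes the pipeline wrote CleanedMedicalData.csv into the mounted data folder as described above:

import pandas as pd

# Read the file produced by the pipeline (path relative to the project root on the host).
df = pd.read_csv("data/CleanedMedicalData.csv")
print(df.shape)             # row and column counts after dropping missing values
print(df.columns.tolist())  # cleaned, lowercase, underscore-separated column names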

 

Conclusion

 
Understanding data pipelines is crucial for every data professional, as they are essential for acquiring the right data for their work. In this article, we explored how to build a simple data pipeline using Python and Docker and learned how to execute it.

I hope this has helped!
 
 

Cornellius Yudha Wijaya is a data science assistant manager and data writer. While working full-time at Allianz Indonesia, he loves to share Python and data tips via social media and written media. Cornellius writes on a variety of AI and machine learning topics.





