OpenAI Rolls Out ChatGPT Agent Combining Deep Research and Operator
OpenAI has launched the ChatGPT agent, a new feature that allows ChatGPT to act independently using its own virtual computer. The agent can navigate websites, run code, analyse data, and complete tasks such as planning meetings, building slideshows, and updating spreadsheets.
The feature is now rolling out to Pro, Plus, and Team users, with access for Enterprise and Education users expected in the coming weeks.
The agent integrates the previously separate Operator and Deep Research features, combining their capabilities into a single system. Operator allowed web interaction through clicks and inputs, while Deep Research focused on synthesis and summarisation.
The new system allows fluid transition between reasoning and action in a single conversation.
“You can use it to effortlessly plan and book travel itineraries, design and book entire dinner parties, or find specialists and schedule appointments,” OpenAI said in a statement. “ChatGPT requests permission before taking actions of consequence, and you can easily interrupt, take over the browser, or stop tasks at any point.”
Users can activate agent mode via the tools dropdown in ChatGPT’s composer window. The agent uses a suite of tools, including a visual browser, a text-based browser, terminal access, and API integration. It can also work with connectors like Gmail and GitHub, provided users log in via a secure takeover mode.
All tasks are carried out on a virtual machine that preserves state across tool switches. This allows ChatGPT to browse the web, download files, run commands, and review outputs, all within a single session. Users can interrupt or redirect tasks at any time without losing progress.
ChatGPT agent is currently limited to 400 messages per month for Pro users and 40 for Plus and Team users. Additional usage is available through credit-based options. Support for the European Economic Area and Switzerland is in progress.
The standalone Operator research preview will be phased out in the coming weeks. Users who prefer longer-form, slower responses can still access Deep Research mode via the dropdown menu.
While slideshow generation is available, OpenAI noted that formatting may be inconsistent, and export issues remain. Improvements to this capability are under development.
The system showed strong performance across benchmarks. On Humanity’s Last Exam, it scored a new state-of-the-art pass@1 rate of 41.6%, increasing to 44.4% when using parallel attempts. On DSBench, which tests data science workflows, it reached 89.9% on analysis tasks and 85.5% on modelling, significantly higher than human baselines.
In investment banking modelling tasks, the agent achieved a 71.3% mean accuracy, outperforming OpenAI’s o3 model and the earlier deep research tool. It also scored 68.9% on BrowseComp and 65.4% on WebArena, both benchmarks measuring real-world web navigation and task completion.
However, OpenAI acknowledged new risks with this capability. “This is the first time users can ask ChatGPT to take actions on the live web,” the company said. “We’ve placed a particular emphasis on safeguarding ChatGPT agent against adversarial manipulation through prompt injection.”
To counter these risks, ChatGPT requires explicit confirmation before high-impact actions like purchases, restricts actions such as bank transfers, and offers settings to delete browsing data and log out of sessions. Sensitive inputs entered during takeover sessions are not collected or stored.
The new system is classified under OpenAI’s “High Biological and Chemical” capability tier, triggering additional safeguards. The company has worked with external biosecurity experts and introduced monitoring tools, dual-use refusal training, and threat modelling to prevent misuse.
Lovable Becomes AI Unicorn with $200 Million Series A Led by Accel in Less than 8 Months
Stockholm-based AI startup Lovable has raised $200 million in a Series A funding round led by Accel, pushing its valuation to $1.8 billion. The announcement comes just eight months after the company’s launch.
Lovable allows users to build websites and apps using natural language prompts, similar to platforms such as Cursor. The company claims over 2.3 million active users, with more than 180,000 of them now paying subscribers.
CEO Anton Osika said the company has reached $75 million in annual recurring revenue within seven months.
“Today, there are 47M developers worldwide. Lovable is going to produce 1B potential builders,” he said in a post on X.
The latest round saw participation from existing backers, including 20VC, byFounders, Creandum, Hummingbird, and Visionaries Club. In February, Creandum led a $15 million pre-Series A investment, at which point Lovable had 30,000 paying customers and $17 million in ARR while having spent only $2 million.
The company currently operates with a team of 45 full-time employees. The Series A round also attracted a long list of angel investors, including Klarna CEO Sebastian Siemiatkowski, Remote CEO Job van der Voort, Slack co-founder Stewart Butterfield, and HubSpot co-founder Dharmesh Shah.
Most of Lovable’s users are non-technical individuals building prototypes that are later developed further with engineering support. According to a press release, more than 10 million projects have been created on the platform to date.
Osika said the company is not targeting existing developers but a new category of users entirely. “99% of the world’s best ideas are trapped in the heads of people who can’t code. They have problems. They know the solutions. They just can’t build them.”
Lovable is also being used by enterprises such as Klarna and HubSpot, and its leadership sees the platform evolving into a tool for building full-scale production applications.
“Every day, brilliant founders and operators with game-changing ideas hit the same wall: they don’t have a developer to realise their vision quickly and easily,” Osika said in a statement.
Osika also said on X that he has become an angel investor in a software startup built using Lovable.
In another recent example, Osika noted that a Brazilian edtech company built an app using Lovable that generated $3 million in 48 hours.
Lovable’s growth trajectory suggests increased adoption among both individual users and enterprise customers, positioning it as a significant player in the growing AI-powered software creation market.
Build Your Own Simple Data Pipeline with Python and Docker
Data is the asset that drives our work as data professionals. Without proper data, we cannot perform our tasks, and our business will fail to gain a competitive advantage. Thus, securing suitable data is crucial for any data professional, and data pipelines are the systems designed for this purpose.
Data pipelines are systems designed to move and transform data from one source to another. These systems are part of the overall infrastructure for any business that relies on data, as they guarantee that our data is reliable and always ready to use.
Building a data pipeline may sound complex, but a few simple tools are sufficient to create reliable data pipelines with just a few lines of code. In this article, we will explore how to build a straightforward data pipeline using Python and Docker that you can apply in your everyday data work.
Let’s get into it.
Building the Data Pipeline
Before we build our data pipeline, let’s understand the concept of ETL, which stands for Extract, Transform, and Load. ETL is a process where the data pipeline performs the following actions:
- Extract data from various sources.
- Transform data into a valid format.
- Load data into an accessible storage location.
ETL is a standard pattern for data pipelines, so what we build will follow this structure.
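Conceptually, ETL is just three functions chained in order. Here is a minimal, purely illustrative sketch of that shape (Step 2 below fills in the real versions):

import pandas as pd

def extract():
    # Pull raw data from a source (file, API, database, ...).
    ...

def transform(raw):
    # Clean and reshape the raw data into a valid format.
    ...

def load(cleaned):
    # Write the cleaned data to an accessible storage location.
    ...

load(transform(extract()))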
With Python and Docker, we can build a data pipeline around the ETL process with a simple setup. Python is a valuable tool for orchestrating any data flow activity, while Docker is useful for managing the data pipeline application’s environment using containers.
Let’s set up our data pipeline with Python and Docker.
Step 1: Preparation
First, we must ensure that we have Python and Docker installed on our system (we will not cover installation here).
For our example, we will use the heart attack dataset from Kaggle as the data source to develop our ETL process.
With everything in place, we will prepare the project structure. Overall, the simple data pipeline will have the following skeleton:
simple-data-pipeline/
├── app/
│   └── pipeline.py
├── data/
│   └── Medicaldataset.csv
├── Dockerfile
├── requirements.txt
└── docker-compose.yml
There is a main folder called simple-data-pipeline, which contains:
- An app folder containing the pipeline.py file.
- A data folder containing the source data (Medicaldataset.csv).
- The requirements.txt file for environment dependencies.
- The Dockerfile for the Docker configuration.
- The docker-compose.yml file to define and run our multi-container Docker application.
We will first fill out the requirements.txt file, which contains the libraries required for our project. In this case, we will only use the following library:
In the next section, we will set up the data pipeline using our sample data.
Step 2: Set up the Pipeline
We will set up the Python pipeline.py file for the ETL process. In our case, we will use the following code:
import pandas as pd
import os

# Paths inside the container; /data is mounted from the local data folder
# (see docker-compose.yml below).
input_path = os.path.join("/data", "Medicaldataset.csv")
output_path = os.path.join("/data", "CleanedMedicalData.csv")

def extract_data(path):
    # Extract: read the raw CSV into a DataFrame.
    df = pd.read_csv(path)
    print("Data Extraction completed.")
    return df

def transform_data(df):
    # Transform: drop rows with missing values and normalise column names
    # (trim whitespace, lowercase, replace spaces with underscores).
    df_cleaned = df.dropna()
    df_cleaned.columns = [col.strip().lower().replace(" ", "_") for col in df_cleaned.columns]
    print("Data Transformation completed.")
    return df_cleaned

def load_data(df, output_path):
    # Load: write the cleaned data to a new CSV file.
    df.to_csv(output_path, index=False)
    print("Data Loading completed.")

def run_pipeline():
    df_raw = extract_data(input_path)
    df_cleaned = transform_data(df_raw)
    load_data(df_cleaned, output_path)
    print("Data pipeline completed successfully.")

if __name__ == "__main__":
    run_pipeline()
The pipeline follows the ETL process: we read the CSV file, perform data transformations such as dropping missing data and cleaning the column names, and write the cleaned data to a new CSV file. We wrapped these steps into a single run_pipeline function that executes the entire process.
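If you want to sanity-check the logic before containerising it, note that the /data path at the top of pipeline.py only exists inside the container. A quick local dry run from the project root could look like the following sketch, which points the same steps at the local data folder:

# Local dry run from the project root; uses ./data instead of the
# container's /data mount.
import os
import pandas as pd

df = pd.read_csv(os.path.join("data", "Medicaldataset.csv")).dropna()
df.columns = [col.strip().lower().replace(" ", "_") for col in df.columns]
df.to_csv(os.path.join("data", "CleanedMedicalData.csv"), index=False)
print("Local dry run completed.")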
Step 3: Set up the Dockerfile
With the Python pipeline file ready, we will fill in the Dockerfile to set up the configuration for the Docker container using the following code:
FROM python:3.10-slim
WORKDIR /app
COPY ./app /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
CMD ["python", "pipeline.py"]
In the code above, we specify that the container will use Python version 3.10 as its environment. Next, we set the container's working directory to /app and copy everything from our local app folder into the container's app directory. We also copy the requirements.txt file and execute the pip installation within the container. Finally, we specify the command to run the Python script when the container starts.
With the Dockerfile ready, we will prepare the docker-compose.yml file to manage the overall execution:
version: '3.9'
services:
  data-pipeline:
    build: .
    container_name: simple_pipeline_container
    volumes:
      - ./data:/data
The YAML file above, when executed, will build the Docker image from the current directory using the available Dockerfile. We also mount the local data folder to the /data folder within the container, making the dataset accessible to our script.
Executing the Pipeline
With all the files ready, we will execute the data pipeline in Docker. Go to the project root folder and run the following command in your command prompt to build the Docker image and execute the pipeline.
docker compose up --build
If you run this successfully, you will see an informational log like the following:
✔ data-pipeline Built 0.0s
✔ Network simple_docker_pipeline_default Created 0.4s
✔ Container simple_pipeline_container Created 0.4s
Attaching to simple_pipeline_container
simple_pipeline_container | Data Extraction completed.
simple_pipeline_container | Data Transformation completed.
simple_pipeline_container | Data Loading completed.
simple_pipeline_container | Data pipeline completed successfully.
simple_pipeline_container exited with code 0
If everything is executed successfully, you will see a new CleanedMedicalData.csv file in your data folder.
Congratulations! You have just created a simple data pipeline with Python and Docker. Try using various data sources and ETL processes to see if you can handle a more complex pipeline.
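As a starting point, the transform step is usually where a more complex pipeline grows first. Here is one possible extension of transform_data; the duplicate removal and numeric coercion are illustrative additions, not part of the tutorial code:

import pandas as pd

def transform_data(df):
    # Same cleaning as before: drop missing rows, normalise column names.
    df_cleaned = df.dropna()
    df_cleaned.columns = [col.strip().lower().replace(" ", "_") for col in df_cleaned.columns]
    # Illustrative additions: remove exact duplicate rows, then convert
    # text columns to numbers wherever every value parses cleanly.
    df_cleaned = df_cleaned.drop_duplicates()
    for col in df_cleaned.select_dtypes(include="object").columns:
        converted = pd.to_numeric(df_cleaned[col], errors="coerce")
        if converted.notna().all():
            df_cleaned[col] = converted
    print("Data Transformation completed.")
    return df_cleaned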
Conclusion
Understanding data pipelines is crucial for every data professional, as they are essential for acquiring the right data for their work. In this article, we explored how to build a simple data pipeline using Python and Docker and learned how to execute it.
I hope this has helped!
Cornellius Yudha Wijaya is a data science assistant manager and data writer. While working full-time at Allianz Indonesia, he shares Python and data tips on social media and in written media. Cornellius writes on a variety of AI and machine learning topics.
CSC Partners with Salesforce to Transform Grievance Redressal in Rural India
Common Services Centres (CSC), the Ministry of Electronics and IT's (MeitY) flagship digital inclusion initiative, has partnered with Salesforce, a leading AI-powered CRM platform.
This collaboration will strengthen support for citizens and Village Level Entrepreneurs (VLEs) in rural and semi-urban areas through an AI-driven grievance redressal system designed to offer unified, intelligent, and scalable service experiences.
Built on Salesforce’s platform, the solution integrates Service Cloud with AI tools such as Einstein Bots for 24×7 self-service and Digital Engagement to consolidate citizen queries from WhatsApp, email, SMS, and the CSC portal.
“By integrating modern tools and AI-led workflows, we are equipping our frontline network with the capability to resolve issues faster, track them transparently, and deliver better experiences to the communities we serve,” Sanjay Kumar Rakesh, MD and CEO at CSC SPV, said.
With a network of over six lakh (600,000) active VLEs, CSC serves as a crucial link between citizens and vital public and private services in remote regions. Integrating Salesforce represents a significant step forward in CSC's digital transformation, improving resolution timelines, empowering VLEs with modern tools, and fostering greater transparency and trust in citizen service.
CSC’s collaboration with Salesforce lays the foundation for a broader digital public infrastructure. Arundhati Bhattacharya, president and CEO of Salesforce South Asia, said, “India’s next leap in digital public infrastructure will be defined by how effectively we can bring citizen services closer to every corner of the country—with speed, scale, and intelligence.”
As CSC expands into new areas such as wallet services, DigiPay, insurance, and telemedicine, the Salesforce platform offers a robust, future-ready foundation to streamline service delivery across these functions. With its scalable architecture, multilingual capabilities, and mobile-first approach, Salesforce is well-suited to support the growth of digital governance across India’s diverse population.