Serve Machine Learning Models via REST APIs in Under 10 Minutes
Image by Author | Canva

 

Building machine learning models and experimenting with new ideas is fun, but a model only becomes useful to others once you make it available to them. For that, you need to serve it: expose it through a web API so that other programs (or humans) can send data and get predictions back. That’s where REST APIs come in.

In this article, you will learn how to go from a simple machine learning model to a production-ready API using FastAPI, one of Python’s fastest and most developer-friendly web frameworks, in under 10 minutes. We won’t stop at a “make it run” demo; we will also add things like:

  • Validating incoming data
  • Logging every request
  • Adding background tasks to avoid slowdowns
  • Gracefully handling errors

Before we move to the code, here is how the project structure will look:

ml-api/
│
├── model/
│   ├── train_model.py        # Script to train and save the model
│   └── iris_model.pkl        # Trained model file
│
├── app/
│   ├── main.py               # FastAPI app
│   └── schema.py             # Input data schema using Pydantic
│
├── requirements.txt          # All dependencies
└── README.md                 # Optional documentation

 

Step 1: Install What You Need

 
We’ll need a few Python packages for this project: FastAPI for the API, Uvicorn to serve it, scikit-learn for the model, and a few helpers like joblib and Pydantic. You can install them using pip:

pip install fastapi uvicorn scikit-learn joblib pydantic

 

And save your environment:

pip freeze > requirements.txt

 

Step 2: Train and Save a Simple Model

 
Let’s keep the machine learning part simple so we can focus on serving the model. We’ll use the famous Iris dataset and train a random forest classifier to predict the type of iris flower based on its petal and sepal measurements.

Here’s the training script. Create a file called train_model.py in a model/ directory:

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
import joblib, os

# Load the data and hold out a test split
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Train on the training split only
clf = RandomForestClassifier(random_state=42)
clf.fit(X_train, y_train)

os.makedirs("model", exist_ok=True)
joblib.dump(clf, "model/iris_model.pkl")
print("✅ Model saved to model/iris_model.pkl")

 

This script loads the data, splits it, trains the model, and saves it using joblib. Run it once to generate the model file:

python model/train_model.py
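Before wiring the saved file into an API, it’s worth a quick sanity check that the model round-trips through joblib. The sketch below is self-contained: it retrains a throwaway model and saves it to a temporary directory rather than touching model/, but it uses the same dump/load calls the training script and the API rely on.

```python
import os, tempfile
import joblib
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)
clf = RandomForestClassifier(random_state=42).fit(X_train, y_train)

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "iris_model.pkl")
    joblib.dump(clf, path)        # same call the training script uses
    restored = joblib.load(path)  # same call the API will use at startup
    acc = restored.score(X_test, y_test)

# The reloaded model should behave exactly like the in-memory one
assert acc == clf.score(X_test, y_test)
print(f"held-out accuracy: {acc:.3f}")
```

If the accuracy printed here looks reasonable, you can trust that the pickle the API loads is the same model you trained.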

 

Step 3: Define What Input Your API Should Expect

 
Now we need to define how users will interact with our API: what should they send, and in what format?

We’ll use Pydantic, the data-validation library that FastAPI is built on, to create a schema that describes and validates incoming data. Specifically, we’ll ensure that users provide four positive float values — for sepal length/width and petal length/width.

In a new file app/schema.py, add:

from pydantic import BaseModel, Field

class IrisInput(BaseModel):
    sepal_length: float = Field(..., gt=0, lt=10)
    sepal_width: float = Field(..., gt=0, lt=10)
    petal_length: float = Field(..., gt=0, lt=10)
    petal_width: float = Field(..., gt=0, lt=10)

 

Here, we’ve added value constraints (greater than 0 and less than 10) to keep our inputs clean and realistic.
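You can see the validation in action without starting a server by instantiating the schema directly. A small sketch (redeclaring IrisInput so it runs standalone):

```python
from pydantic import BaseModel, Field, ValidationError

class IrisInput(BaseModel):
    sepal_length: float = Field(..., gt=0, lt=10)
    sepal_width: float = Field(..., gt=0, lt=10)
    petal_length: float = Field(..., gt=0, lt=10)
    petal_width: float = Field(..., gt=0, lt=10)

# A valid measurement parses cleanly
ok = IrisInput(sepal_length=6.1, sepal_width=2.8, petal_length=4.7, petal_width=1.2)

# An out-of-range value is rejected before it could ever reach the model
try:
    IrisInput(sepal_length=-1.0, sepal_width=2.8, petal_length=4.7, petal_width=1.2)
    rejected = False
except ValidationError as err:
    rejected = True
    bad_field = err.errors()[0]["loc"][0]

print(rejected, bad_field)  # True sepal_length
```

When the same schema is used as an endpoint parameter, FastAPI performs this check for you and turns the ValidationError into a 422 response.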

 

Step 4: Create the API

 
Now it’s time to build the actual API. We’ll use FastAPI to:

  • Load the model
  • Accept JSON input
  • Predict the class and probabilities
  • Log the request in the background
  • Return a clean JSON response

Let’s write the main API code inside app/main.py:

from fastapi import FastAPI, HTTPException, BackgroundTasks
from fastapi.responses import JSONResponse
from app.schema import IrisInput
import numpy as np, joblib, logging

# Load the model
model = joblib.load("model/iris_model.pkl")

# Set up logging
logging.basicConfig(filename="api.log", level=logging.INFO,
                    format="%(asctime)s - %(message)s")

# Create the FastAPI app
app = FastAPI()

@app.post("/predict")
def predict(input_data: IrisInput, background_tasks: BackgroundTasks):
    try:
        # Format the input as a NumPy array
        data = np.array([[input_data.sepal_length,
                          input_data.sepal_width,
                          input_data.petal_length,
                          input_data.petal_width]])
        
        # Run prediction
        pred = model.predict(data)[0]
        proba = model.predict_proba(data)[0]
        species = ["setosa", "versicolor", "virginica"][pred]

        # Log in the background so it doesn’t block response
        background_tasks.add_task(log_request, input_data, species)

        # Return prediction and probabilities
        return {
            "prediction": species,
            "class_index": int(pred),
            "probabilities": {
                "setosa": float(proba[0]),
                "versicolor": float(proba[1]),
                "virginica": float(proba[2])
            }
        }

    except Exception:
        logging.exception("Prediction failed")
        raise HTTPException(status_code=500, detail="Internal error")

# Background logging task
def log_request(data: IrisInput, prediction: str):
    # model_dump() replaces the deprecated .dict() in Pydantic v2
    logging.info(f"Input: {data.model_dump()} | Prediction: {prediction}")

 

Let’s pause and understand what’s happening here.

We load the model once when the app starts. When a user hits the /predict endpoint with valid JSON input, we convert that into a NumPy array, pass it through the model, and return the predicted class and probabilities. If something goes wrong, we log it and return a friendly error.

Notice the BackgroundTasks part — this is a neat FastAPI feature that lets us do work after the response is sent (like saving logs). That keeps the API responsive and avoids delays.

 

Step 5: Run Your API

 
To launch the server, use uvicorn like this:

uvicorn app.main:app --reload

 

Visit: http://127.0.0.1:8000/docs
You’ll see an interactive Swagger UI where you can test the API.
Try this sample input:

{
  "sepal_length": 6.1,
  "sepal_width": 2.8,
  "petal_length": 4.7,
  "petal_width": 1.2
}

 

Or you can use curl to make the request like this:

curl -X POST "http://127.0.0.1:8000/predict" -H "Content-Type: application/json" -d \
'{
  "sepal_length": 6.1,
  "sepal_width": 2.8,
  "petal_length": 4.7,
  "petal_width": 1.2
}'

 

Both of them generate the same response:

{
  "prediction": "versicolor",
  "class_index": 1,
  "probabilities": {
    "setosa": 0.0,
    "versicolor": 1.0,
    "virginica": 0.0
  }
}

 

Optional Step: Deploy Your API

 
You can deploy the FastAPI app on:

  • Render.com (zero config deployment)
  • Railway.app (for continuous integration)
  • Heroku (via Docker)

You can also extend this into a production-ready service by adding authentication (such as API keys or OAuth) to protect your endpoints, monitoring requests with Prometheus and Grafana, and using Redis or Celery for background job queues. You can also refer to my article: Step-by-Step Guide to Deploying Machine Learning Models with Docker.

 

Wrapping Up

 
That’s it, and it’s already better than most demos. What we’ve built is more than just a toy example; it:

  • Validates input data automatically
  • Returns meaningful responses with prediction confidence
  • Logs every request to a file (api.log)
  • Uses background tasks so the API stays fast and responsive
  • Handles failures gracefully

And all of it in under 100 lines of code.
 
 

Kanwal Mehreen Kanwal is a machine learning engineer and a technical writer with a profound passion for data science and the intersection of AI with medicine. She co-authored the ebook “Maximizing Productivity with ChatGPT”. As a Google Generation Scholar 2022 for APAC, she champions diversity and academic excellence. She’s also recognized as a Teradata Diversity in Tech Scholar, Mitacs Globalink Research Scholar, and Harvard WeCode Scholar. Kanwal is an ardent advocate for change, having founded FEMCodes to empower women in STEM fields.




With focus on AI, sustainable travel Arya Niwas organises Openscapes 2025 in New Delhi

The opportunities and challenges that issues like artificial intelligence, sustainability and experiential travel pose to the tourism industry in India and overseas were highlighted at Openscapes 2025, a travel conclave in New Delhi on Saturday.

Organised by Arya Niwas, a hospitality group based in Jaipur, the conclave served as a participative platform to explore transformative ideas for the tourism sector, addressing pressing issues such as sustainability, experiential curation, the role of artificial intelligence (AI), and the integration of responsible practices into the travel experience.

Drawing stakeholders from across India’s hospitality industry, the conclave was organised with the core theme of Projecting India and Rajasthan with a stronger, more meaningful narrative.

“This is the first conclave. It is called Openscapes. We hope that we will be having more such dialogue-based conclaves on travel. There is a need for us to behave as one in the travel industry and to move forward together because the ultimate aim is to serve the guests and make the guests win,” Pooja Bansal, Owner and General Manager, Arya Niwas, told India & You on the sidelines of the event.

The urgency of the issues raised at the meeting was underscored by leading tour operators, who highlighted that Indian tourism, particularly in recent years, “has not been sustainable and things have gone really, really bad.”

The conclave drew stakeholders from across India’s hospitality industry

“When we talk about sustainability with experiential tourism, the experience at the grassroot level, meeting local people with a bit of sustainability, offers eye-opening encounters. Yet, there are challenges,” Navneet Arora, Managing Director, VINString Holidays, a travel agency in New Delhi, told India & You.

The meeting illustrated both obstacles and achievements in rural and urban experiential tourism. Operators cited instances where visitors’ immersion in heritage neighbourhoods and private homes fostered mutual pride among locals and tourists. However, they also warned against approaches that leave rural residents feeling like “monkeys in the zoo,” underscoring the necessity of responsible, respectful interaction, something now addressed by ensuring a share of tour proceeds benefit the communities involved. Sustainability, participants argued, extends well beyond eco-friendly rhetoric.

The conclave highlighted innovative tour formats, slow tourism, creative workshops and direct engagement with artisans, as pathways for deeper, more rewarding guest experiences.

“I think that is the call for the future, because automation has to come in. If we are not doing automation today, we are backwards. AI is important. The event opens up eyes for a lot of people. Difficult, but yes, AI and sustainability are important and doable,” Arora added.

“The interpretation of sustainability has become very cliché. This was a session to break that,” said Bansal.

Participants at Openscapes 2025 called for a sustained dialogue, with suggestions for sector-wide conventions, targeted sessions on marketing and AI, and more collaborative initiatives.




Sabre Corporation’s Strategic Partnership with Christopherson Business Travel and Its Implications for Undervalued Cloud and AI Stocks


Sabre Corporation (NASDAQ: SABR) has long been a cornerstone of the global travel technology sector, but its recent strategic partnership with Christopherson Business Travel marks a pivotal evolution. By leveraging its AI-driven platform and cloud-native infrastructure, Sabre is not only modernizing corporate travel management but also positioning itself as a catalyst for growth in the undervalued travel tech sector. For investors, this collaboration offers a compelling case study in how AI and cloud innovation can unlock long-term value in a niche yet resilient market.

A Strategic Alliance for the Future of Corporate Travel

On July 17, 2025, Sabre announced a multi-year agreement to become Christopherson Business Travel’s primary technology partner. This partnership is more than a transactional arrangement—it’s a strategic alignment of two companies aiming to redefine corporate travel through automation, real-time data, and personalized service. Sabre’s AI-powered tools, including Sabre Red 360, Trip Proposal, and Market Intelligence, will streamline operations for Christopherson, enabling faster decision-making and enhanced client offerings.

The integration of Sabre’s cloud-native infrastructure into Christopherson’s proprietary Andavo platform is particularly noteworthy. This move allows for real-time orchestration of multi-source content (air, hotel, rail, ground) and seamless API-driven integrations, reducing manual effort and improving scalability. As Chad Maughan, CTO of Christopherson, noted, Sabre’s architecture provides the operational flexibility needed to adapt to evolving client demands—a critical advantage in the post-pandemic corporate travel landscape.

Sabre’s Financial Resilience and AI-Driven Growth

Sabre’s financial performance in 2024 underscores its transition from a turnaround story to a growth-oriented entity. Revenue increased to $3 billion, with adjusted EBITDA rising to $517 million—a 54% year-over-year improvement. While IT Solutions revenue dipped due to de-migrations, the Travel Solutions and Distribution segments grew by 4% and 6%, respectively, driven by demand for Sabre’s AI-powered tools.

The company’s market cap of $1.222 billion pales in comparison to AI/cloud giants like Databricks ($62 billion) or Snowflake ($43.6 billion), but this undervaluation reflects Sabre’s niche focus. Its strategic investments in Sabre Mosaic—a modular platform combining AI, cloud, and traditional agent workflows—position it to capture a larger share of the corporate travel market, which is projected to grow as businesses prioritize cost optimization and efficiency.

The AI/Cloud Travel Tech Opportunity

The broader travel tech sector is undergoing a transformation fueled by generative AI. According to Skift Research, AI-driven tools could create a $28 billion+ opportunity for the industry, with applications in personalized itineraries, dynamic pricing, and automated customer service. Sabre’s Automated Exchanges & Refunds and Agency Retailer solutions are already streamlining post-booking processes, reducing manual intervention by up to 70%.

However, Sabre is not alone in the race to monetize AI in travel. Competitors like C3.ai (NYSE: AI), Marvell Technology (NASDAQ: MRVL), and DigitalOcean (DOCN) are also leveraging cloud and AI to drive growth. C3.ai’s predictive analytics tools, for instance, have secured government contracts worth $450 million, while Marvell’s AI-optimized chips are powering data centers for hyperscale providers. Yet, Sabre’s deep vertical integration into travel-specific workflows gives it a unique edge in the corporate travel niche.

Why Sabre Is an Undervalued Investment

Despite its strategic advantages, Sabre remains overlooked by many investors. Its current price-to-earnings ratio (P/E) of 8.5 is significantly lower than the industry average of 18.5, and its hedge fund ownership (11.2%) suggests growing confidence in its AI-driven roadmap. The partnership with Christopherson is a validation of Sabre’s value proposition: it enables the company to scale its AI/Cloud offerings without overhauling existing systems, a critical factor for travel agencies seeking cost-effective modernization.

For investors, the key question is whether Sabre can replicate its success in other verticals. The company’s PowerSuite Cloud platform, which automates operations and integrates NDC content, is already gaining traction among mid-sized travel agencies. If Sabre can expand its footprint in the corporate and leisure travel markets, its revenue could outpace the 10% growth projected by analysts.

Conclusion: A Strategic Bet on AI-Driven Travel

Sabre’s partnership with Christopherson Business Travel is a microcosm of the broader shift toward AI and cloud-native solutions in travel technology. While the company may lack the valuation of tech giants like Microsoft or Google, its focus on vertical-specific innovation and operational efficiency makes it a compelling play for investors seeking exposure to the travel sector’s AI revolution.

For those considering a diversified portfolio, Sabre offers a unique blend of undervaluation and growth potential. However, it should be viewed as a complementary holding to broader AI/cloud stocks like C3.ai or Marvell, rather than a standalone bet. As the travel industry continues to embrace AI-driven automation, Sabre’s ability to deliver scalable, client-centric solutions will likely drive long-term value for both its partners and shareholders.




AI Travel Tricks: Watch Out for the Road to Nowhere – Herald/Review Media




Copyright © 2025 AISTORIZ. For enquiries email at prompt@travelstoriz.com