OCI Data Science Model Deployment Endpoint

OCI Data Science is a fully managed and serverless platform for data science teams to build, train, and manage machine learning models in Oracle Cloud Infrastructure.

For the latest updates, examples and experimental features, please see ADS LangChain Integration.

This notebook goes over how to use an LLM hosted on an OCI Data Science Model Deployment.

For authentication, the oracle-ads library is used to automatically load credentials required for invoking the endpoint.

!pip3 install oracle-ads

Prerequisite

Deploy model

You can easily deploy, fine-tune, and evaluate foundation models using AI Quick Actions on OCI Data Science Model Deployment. For additional deployment examples, please visit the Oracle GitHub samples repository.

Policies

Make sure to have the required policies to access the OCI Data Science Model Deployment endpoint.

Set up

After the model is deployed, you have to set up the following required parameter for the call:

  • endpoint: The model HTTP endpoint from the deployed model, e.g. https://modeldeployment.<region>.oci.customer-oci.com/<md_ocid>/predict.

Authentication

You can set authentication through either ads or environment variables. When you are working in an OCI Data Science Notebook Session, you can leverage the resource principal to access other OCI resources. See the oracle-ads authentication documentation for more options.
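
When you are working outside OCI, for example on a local workstation, you can use API key authentication instead. A minimal sketch, assuming your OCI config file is at the default ~/.oci/config location with a DEFAULT profile:

import ads

# API key authentication for environments without resource principals
# (assumes the default ~/.oci/config location and DEFAULT profile)
ads.set_auth("api_key", oci_config_location="~/.oci/config", profile="DEFAULT")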

Examples

import ads
from langchain_community.llms import OCIModelDeploymentLLM

# Set authentication through ads
# Use resource principal when you are operating within an
# OCI service that has resource principal based
# authentication configured
ads.set_auth("resource_principal")

# Create an instance of OCI Model Deployment Endpoint
# Replace the endpoint uri and model name with your own
# Using generic class as entry point, you will be able
# to pass model parameters through model_kwargs during
# instantiation.
llm = OCIModelDeploymentLLM(
    endpoint="https://modeldeployment.<region>.oci.customer-oci.com/<md_ocid>/predict",
    model="odsc-llm",
)

# Run the LLM
llm.invoke("Who is the first president of United States?")
API Reference: OCIModelDeploymentLLM
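
As mentioned in the comments above, the generic class accepts model parameters through model_kwargs at instantiation. A minimal sketch; the keys used below (temperature, max_tokens) are illustrative and depend on the inference framework serving your model:

llm = OCIModelDeploymentLLM(
    endpoint="https://modeldeployment.<region>.oci.customer-oci.com/<md_ocid>/predict",
    model="odsc-llm",
    # Illustrative generation parameters; adjust keys to match
    # what your deployed inference framework accepts.
    model_kwargs={"temperature": 0.2, "max_tokens": 512},
)
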
import ads
from langchain_community.llms import OCIModelDeploymentVLLM

# Set authentication through ads
# Use resource principal when you are operating within an
# OCI service that has resource principal based
# authentication configured
ads.set_auth("resource_principal")

# Create an instance of OCI Model Deployment Endpoint
# Replace the endpoint uri and model name with your own
# Using framework specific class as entry point, you will
# be able to pass model parameters in constructor.
llm = OCIModelDeploymentVLLM(
    endpoint="https://modeldeployment.<region>.oci.customer-oci.com/<md_ocid>/predict",
)

# Run the LLM
llm.invoke("Who is the first president of United States?")
import os

from langchain_community.llms import OCIModelDeploymentTGI

# Set authentication through environment variables
# Use API Key setup when you are working from a local
# workstation or on platform which does not support
# resource principals.
os.environ["OCI_IAM_TYPE"] = "api_key"
os.environ["OCI_CONFIG_PROFILE"] = "default"
os.environ["OCI_CONFIG_LOCATION"] = "~/.oci"

# Set endpoint through environment variables
# Replace the endpoint uri with your own
os.environ["OCI_LLM_ENDPOINT"] = (
"https://modeldeployment.<region>.oci.customer-oci.com/<md_ocid>/predict"
)

# Create an instance of OCI Model Deployment Endpoint
# Using framework specific class as entry point, you will
# be able to pass model parameters in constructor.
llm = OCIModelDeploymentTGI()

# Run the LLM
llm.invoke("Who is the first president of United States?")
API Reference: OCIModelDeploymentTGI
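
Each of these classes implements the standard LangChain Runnable interface, so the deployed LLM can be composed with a prompt template. A minimal sketch using the llm instance from any of the examples above:

from langchain_core.prompts import PromptTemplate

# Build a simple prompt -> LLM chain backed by the deployed endpoint
prompt = PromptTemplate.from_template("Answer concisely: {question}")
chain = prompt | llm

chain.invoke({"question": "Who is the first president of United States?"})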

Asynchronous calls

await llm.ainvoke("Tell me a joke.")
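
The top-level await above works in a notebook. In a standalone Python script, wrap the call in an async function and run it with asyncio, for example:

import asyncio

async def main():
    # ainvoke is the asynchronous counterpart of invoke
    print(await llm.ainvoke("Tell me a joke."))

asyncio.run(main())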

Streaming calls

for chunk in llm.stream("Tell me a joke."):
    print(chunk, end="", flush=True)
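
Streaming is also available asynchronously via astream, which is part of the same Runnable interface:

async for chunk in llm.astream("Tell me a joke."):
    print(chunk, end="", flush=True)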

API reference

For comprehensive details on all features and configurations, please refer to the API reference documentation for each class:

  • OCIModelDeploymentLLM
  • OCIModelDeploymentVLLM
  • OCIModelDeploymentTGI