Method: projects.locations.reasoningEngines.create

Creates a reasoning engine.

Endpoint

post https://{service-endpoint}/v1beta1/{parent}/reasoningEngines

Where {service-endpoint} is one of the supported service endpoints.

Path parameters

parent string

Required. The resource name of the Location to create the ReasoningEngine in. Format: projects/{project}/locations/{location}
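As a sketch of how the pieces fit together (assuming the regional service endpoint pattern `{location}-aiplatform.googleapis.com`, and hypothetical project and location values), the full request URL can be assembled from the path parameters like this:

```python
# Hypothetical values for illustration only.
project_id = "my-project"
location = "us-central1"

# Regional service endpoints follow the {location}-aiplatform.googleapis.com pattern.
service_endpoint = f"{location}-aiplatform.googleapis.com"

# The required `parent` path parameter.
parent = f"projects/{project_id}/locations/{location}"

# Full URL for the POST request.
url = f"https://{service_endpoint}/v1beta1/{parent}/reasoningEngines"
print(url)
```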

Request body

The request body contains an instance of ReasoningEngine.
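When calling the REST endpoint directly rather than through the SDK, the body is a JSON representation of ReasoningEngine. A minimal sketch, limited to the human-readable fields used in the examples below (the SDK normally fills in the deployment details, such as the packaged application, on your behalf):

```python
import json

# Minimal ReasoningEngine request body. These values mirror the
# "Minimal" SDK example below.
reasoning_engine_body = {
    "displayName": "Demo Addition App",
    "description": "A simple demo addition app",
}

print(json.dumps(reasoning_engine_body, indent=2))
```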

Example request

Minimal

Python

import vertexai
from vertexai.preview import reasoning_engines

# TODO(developer): Update and uncomment the lines below
# project_id = "PROJECT_ID"
# staging_bucket = "gs://YOUR_BUCKET_NAME"

vertexai.init(
    project=project_id, location="us-central1", staging_bucket=staging_bucket
)

class SimpleAdditionApp:
    def query(self, a: int, b: int) -> str:
        """Query the application.

        Args:
            a: The first input number
            b: The second input number

        Returns:
            str: The addition result, formatted as a string.
        """

        return f"{int(a)} + {int(b)} is {int(a + b)}"

# Test the application locally
app = SimpleAdditionApp()
app.query(a=1, b=2)

# Create a remote app with reasoning engine.
# This may take 1-2 minutes to finish.
reasoning_engine = reasoning_engines.ReasoningEngine.create(
    SimpleAdditionApp(),
    display_name="Demo Addition App",
    description="A simple demo addition app",
    requirements=[],
    extra_packages=[],
)

Advanced

Python


from typing import Dict, List, Union

import vertexai
from vertexai.preview import reasoning_engines

# TODO(developer): Update and uncomment the lines below
# project_id = "PROJECT_ID"
# location = "us-central1"
# staging_bucket = "gs://YOUR_BUCKET_NAME"

vertexai.init(project=project_id, location=location, staging_bucket=staging_bucket)

class LangchainApp:
    def __init__(self, project: str, location: str) -> None:
        self.project_id = project
        self.location = location

    def set_up(self) -> None:
        from langchain_core.prompts import ChatPromptTemplate
        from langchain_google_vertexai import ChatVertexAI

        system = (
            "You are a helpful assistant that answers questions "
            "about Google Cloud."
        )
        human = "{text}"
        prompt = ChatPromptTemplate.from_messages(
            [("system", system), ("human", human)]
        )
        chat = ChatVertexAI(project=self.project_id, location=self.location)
        self.chain = prompt | chat

    def query(self, question: str) -> Union[str, List[Union[str, Dict]]]:
        """Query the application.

        Args:
            question: The user prompt.

        Returns:
            The LLM response content, either a string or a list of content parts.
        """
        return self.chain.invoke({"text": question}).content

# Test the application locally
app = LangchainApp(project=project_id, location=location)
app.set_up()
print(app.query("What is Vertex AI?"))

# Create a remote app with reasoning engine
# This may take 1-2 minutes to finish because it builds a container and turns up HTTP servers.
reasoning_engine = reasoning_engines.ReasoningEngine.create(
    LangchainApp(project=project_id, location=location),
    requirements=[
        "google-cloud-aiplatform==1.50.0",
        "langchain-google-vertexai",
        "langchain-core",
    ],
    display_name="Demo LangChain App",
    description="This is a simple LangChain app.",
    # sys_version="3.10",  # Optional
    extra_packages=[],
)

Response body

If successful, the response body contains a newly created instance of Operation.
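The returned Operation follows the standard long-running-operation shape: a name to poll, a done flag, and, once finished, either an error or a response. A minimal sketch of inspecting such a payload, using a hand-written example rather than a real API response (the resource IDs are hypothetical):

```python
# Hypothetical Operation payload; a real response's `name` contains
# actual project, location, engine, and operation IDs.
operation = {
    "name": (
        "projects/my-project/locations/us-central1/"
        "reasoningEngines/1234/operations/5678"
    ),
    "done": False,
}

def is_finished(op: dict) -> bool:
    """A long-running operation is finished when `done` is true."""
    return op.get("done", False)

# While the create call is still running, `done` is false (or absent).
print(is_finished(operation))

# Once it completes, the server sets `done` and attaches either an
# `error` or a `response` field.
operation["done"] = True
operation["response"] = {"name": operation["name"].rsplit("/operations/", 1)[0]}
print(is_finished(operation))
```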