Instrumenting Workflows

1. Create an application in agenta

To start, we need to create an application in agenta. You can do this from the CLI using the following command:

agenta init

This command creates a new application in agenta and a config.toml file with all the information about the application.
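For reference, the generated config.toml looks roughly like the following. The field names here are illustrative; check the file generated in your own project for the exact contents:

```toml
# Illustrative example only -- your generated file may differ
app_name = "my-app"
app_id = "..."
backend_host = "https://cloud.agenta.ai"
```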

2. Initialize agenta

import os

import agenta as ag

# Option 1: pass the credentials directly
ag.init(api_key="", app_id="")

# Option 2: set them as environment variables
os.environ["AGENTA_API_KEY"] = ""
os.environ["AGENTA_APP_ID"] = ""
ag.init()

# Option 3: use the config.toml generated by agenta init
ag.init(config_fname="config.toml")

You can find the API key under the Settings view in agenta.

The app ID can be found in the config.toml file if you created the application from the CLI.

Note that if you are serving your application in the agenta cloud, agenta automatically populates this information in the environment variables, so you only need to call ag.init().

3. Instrument with the decorator

Add the @ag.instrument() decorator to the functions you want to instrument. This decorator traces all input and output information for the function.

caution

Make sure the instrument decorator is the first decorator in the function.

@ag.instrument(spankind="llm")
def myllmcall(country: str):
    prompt = f"What is the capital of {country}"
    response = client.chat.completions.create(
        model='gpt-4',
        messages=[
            {'role': 'user', 'content': prompt},
        ],
    )
    return response.choices[0].message.content

@ag.instrument()
def generate(country: str):
    return myllmcall(country)
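The caution above follows from how Python applies decorators: they are applied bottom-up, so the decorator listed closest to the def wraps the function first, and the topmost decorator wraps everything beneath it. A generic (non-agenta) sketch:

```python
# Decorators apply bottom-up: @outer @inner def f means outer(inner(f)),
# so outer runs first at call time and inner wraps the raw function.
def outer(fn):
    def wrapper(*args, **kwargs):
        return ["outer"] + fn(*args, **kwargs)
    return wrapper

def inner(fn):
    def wrapper(*args, **kwargs):
        return ["inner"] + fn(*args, **kwargs)
    return wrapper

@outer
@inner
def f():
    return ["f"]

print(f())  # ['outer', 'inner', 'f']
```

This is why placing the instrument decorator correctly matters: it determines whether it sees the original function or one already wrapped by another decorator.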

4. Modify a span's metadata

You can modify a span's metadata to add additional information using ag.tracing.set_span_attributes(). This function accesses the active span and adds the key-value pairs to its metadata:

@ag.instrument(spankind="llm")
def myllmcall(country: str):
    prompt = f"What is the capital of {country}"
    response = client.chat.completions.create(
        model='gpt-4',
        messages=[
            {'role': 'user', 'content': prompt},
        ],
    )
    ag.tracing.set_span_attributes({"model": "gpt-4"})
    return response.choices[0].message.content

5. Putting it all together

Here's how the code looks when we combine everything:

import os

import agenta as ag

os.environ["AGENTA_API_KEY"] = ""
os.environ["AGENTA_APP_ID"] = ""
ag.init()

@ag.instrument(spankind="llm")
def myllmcall(country: str):
    prompt = f"What is the capital of {country}"
    response = client.chat.completions.create(
        model='gpt-4',
        messages=[
            {'role': 'user', 'content': prompt},
        ],
    )
    ag.tracing.set_span_attributes({"model": "gpt-4"})
    return response.choices[0].message.content

@ag.instrument()
def generate(country: str):
    return myllmcall(country)

Setting up telemetry for apps hosted in agenta

If you're creating an application to serve in agenta, not much changes. You just need to add the entrypoint decorator, ensuring it comes before the instrument decorator.

import agenta as ag

ag.init()
ag.config.register_default(prompt=ag.TextParam("What is the capital of {country}"))

@ag.instrument(spankind="llm")
def myllmcall(country: str):
    response = client.chat.completions.create(
        model='gpt-4',
        messages=[
            {'role': 'user', 'content': ag.config.prompt.format(country=country)},
        ],
    )
    ag.tracing.set_span_attributes({"model": "gpt-4"})
    return response.choices[0].message.content

@ag.entrypoint
@ag.instrument()
def generate(country: str):
    return myllmcall(country)

The advantage of this approach is that the configuration you use is automatically instrumented along with the other data.