Yesterday we saw ai_handler; today let's look at what ai_handler actually is. When PRAgent is initialized, it takes an ai_handler. The annotation partial[BaseAiHandler,] uses Python's functools.partial as a generic type hint: it declares a partial application that will produce a BaseAiHandler, with LiteLLMAIHandler as the default:
from functools import partial
from pr_agent.algo.ai_handlers.base_ai_handler import BaseAiHandler
from pr_agent.algo.ai_handlers.litellm_ai_handler import LiteLLMAIHandler

class PRAgent:
    def __init__(self, ai_handler: partial[BaseAiHandler,] = LiteLLMAIHandler):
        self.ai_handler = ai_handler  # will be initialized in run_action
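To see what that partial type hint buys us, here is a minimal sketch. DummyHandler and its deployment_id argument are made up purely for illustration; in PR-Agent the class passed here is LiteLLMAIHandler:

```python
from functools import partial

# DummyHandler is a made-up stand-in for a concrete AI handler.
class DummyHandler:
    def __init__(self, deployment_id=None):
        self.deployment_id = deployment_id

# partial pre-binds constructor arguments but delays instantiation,
# which is why PRAgent can store the factory and only call it later
# (in run_action) instead of building the handler up front.
handler_factory = partial(DummyHandler, deployment_id="my-deployment")
handler = handler_factory()
print(handler.deployment_id)  # my-deployment
```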
BaseAiHandler inherits from ABC, which means it is an abstract class: it cannot be instantiated directly, only subclassed. Its methods are all decorated with @abstractmethod, which means every subclass must implement them; these include deployment_id and chat_completion, so any concrete subclass has to provide both. Put simply, this class draws the blueprint that everyone who comes after must follow:
from abc import ABC, abstractmethod
class BaseAiHandler(ABC):
    """
    This class defines the interface for an AI handler to be used by the PR Agents.
    """

    @abstractmethod
    def __init__(self):
        pass

    @property
    @abstractmethod
    def deployment_id(self):
        pass

    @abstractmethod
    async def chat_completion(self, model: str, system: str, user: str, temperature: float = 0.2, img_path: str = None):
        """
        This method should be implemented to return a chat completion from the AI model.
        Args:
            model (str): the name of the model to use for the chat completion
            system (str): the system message string to use for the chat completion
            user (str): the user message string to use for the chat completion
            temperature (float): the temperature to use for the chat completion
        """
        pass
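A quick way to feel what ABC plus @abstractmethod enforce. Demo and EchoDemo below are invented for the demo, and chat_completion is simplified to a sync method for brevity:

```python
from abc import ABC, abstractmethod

class Demo(ABC):
    @abstractmethod
    def chat_completion(self, model: str, system: str, user: str):
        pass

# The abstract class itself cannot be instantiated...
try:
    Demo()
except TypeError:
    print("Demo() raises TypeError")

# ...but a subclass that implements every abstract method can be.
class EchoDemo(Demo):
    def chat_completion(self, model, system, user):
        return f"[{model}] {user}"

print(EchoDemo().chat_completion("gpt-4", "sys", "hello"))  # [gpt-4] hello
```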
LiteLLMAIHandler inherits from BaseAiHandler. It reads the configuration for the various AI providers from get_settings(), and contains one core method, chat_completion, which sends the request to the AI model. Inside chat_completion, the system & user prompts we covered yesterday are first turned into a message list:
messages = [{"role": "system", "content": system}, {"role": "user", "content": user}]
If an image path is provided, image handling kicks in:
if img_path:
    try:
        # check if the image link is alive
        r = requests.head(img_path, allow_redirects=True)
        if r.status_code == 404:
            error_msg = f"The image link is not [alive](img_path).\nPlease repost the original image as a comment, and send the question again with 'quote reply' (see [instructions](https://pr-agent-docs.codium.ai/tools/ask/#ask-on-images-using-the-pr-code-as-context))."
            get_logger().error(error_msg)
            return f"{error_msg}", "error"
    except Exception as e:
        get_logger().error(f"Error fetching image: {img_path}", e)
        return f"Error fetching image: {img_path}", "error"
    messages[1]["content"] = [{"type": "text", "text": messages[1]["content"]},
                              {"type": "image_url", "image_url": {"url": img_path}}]
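Putting the two steps together, the final message list for an image question ends up shaped like this. The prompts and URL below are placeholders; the structure mirrors the OpenAI-style vision message format shown in the snippet above:

```python
system = "You are PR-Agent."                  # placeholder prompt
user = "What does this diagram show?"         # placeholder prompt
img_path = "https://example.com/diagram.png"  # placeholder URL

messages = [{"role": "system", "content": system},
            {"role": "user", "content": user}]

# After the image branch, the user content becomes a list mixing
# a text part and an image_url part.
messages[1]["content"] = [{"type": "text", "text": messages[1]["content"]},
                          {"type": "image_url", "image_url": {"url": img_path}}]

print(messages[1]["content"][0]["type"])  # text
```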
Finally, acompletion sends an asynchronous request to the AI model. LiteLLM is an open-source, high-performance proxy for large language models: it supports multiple providers such as OpenAI, Anthropic, and Cohere behind a single unified interface. For now, just think of it as the layer that asks the LLM on our behalf~
from litellm import acompletion
response = await acompletion(**kwargs)
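Since the real acompletion makes a network call, here is an offline sketch of the same await pattern. fake_acompletion and its canned response shape are invented stand-ins; the real call is litellm.acompletion(**kwargs), and the model name below is just an example:

```python
import asyncio

# Made-up stand-in so the snippet runs offline; it mimics the
# choices/message/content shape of a chat-completion response.
async def fake_acompletion(**kwargs):
    return {"choices": [{"message": {"content": f"reply from {kwargs['model']}"}}]}

async def main():
    kwargs = {
        "model": "gpt-4",  # example model name
        "messages": [{"role": "system", "content": "sys"},
                     {"role": "user", "content": "hi"}],
        "temperature": 0.2,
    }
    # Same call shape as: response = await acompletion(**kwargs)
    response = await fake_acompletion(**kwargs)
    return response["choices"][0]["message"]["content"]

print(asyncio.run(main()))  # reply from gpt-4
```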