Yesterday we followed PRReviewer down to how auto_approve is applied; today we keep going. The next step runs the prediction: it first fetches all the models from the settings, then feeds each model into the prediction. If one succeeds, execution continues; if none do, the initial comment is removed. Let's break it down piece by piece.
await retry_with_fallback_models(self._prepare_prediction)
if not self.prediction:
    self.git_provider.remove_initial_comment()
    return None
retry_with_fallback_models looks like it tries out the models to find one that works. It first collects all the models and their deployments (derived from the models), then iterates over each (model, deployment) pair and runs f() until one succeeds. The f() here is the self._prepare_prediction passed in above, and _prepare_prediction obtains the diff via get_pr_diff.
async def retry_with_fallback_models(f: Callable, model_type: ModelType = ModelType.REGULAR):
    all_models = _get_all_models(model_type)
    all_deployments = _get_all_deployments(all_models)
    # try each (model, deployment_id) pair until one is successful, otherwise raise exception
    for i, (model, deployment_id) in enumerate(zip(all_models, all_deployments)):
        try:
            get_logger().debug(
                f"Generating prediction with {model}"
                f"{(' from deployment ' + deployment_id) if deployment_id else ''}"
            )
            get_settings().set("openai.deployment_id", deployment_id)
            return await f(model)
        except:
            get_logger().warning(
                f"Failed to generate prediction with {model}"
                f"{(' from deployment ' + deployment_id) if deployment_id else ''}: "
                f"{traceback.format_exc()}"
            )
            if i == len(all_models) - 1:  # If it's the last iteration
                raise  # Re-raise the last exception
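To make the fallback behavior concrete, here is a toy sketch of the same pattern (my own example, not pr-agent code; the model names and the flaky_prepare coroutine are made up): try each (model, deployment) pair in order, and only re-raise if the last candidate also fails.

import asyncio
from typing import Awaitable, Callable, List, Optional, Tuple

async def try_models(f: Callable[[str], Awaitable[str]],
                     pairs: List[Tuple[str, Optional[str]]]) -> str:
    # same idea as retry_with_fallback_models, stripped of logging and settings
    for i, (model, deployment_id) in enumerate(pairs):
        try:
            return await f(model)
        except Exception:
            if i == len(pairs) - 1:
                raise  # the last candidate also failed, so propagate

async def flaky_prepare(model: str) -> str:
    if model == "gpt-4":  # pretend the primary model is rate limited
        raise RuntimeError("rate limited")
    return f"prediction from {model}"

print(asyncio.run(try_models(flaky_prepare, [("gpt-4", None), ("gpt-3.5-turbo", None)])))
# -> prediction from gpt-3.5-turbo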
async def _prepare_prediction(self, model: str) -> None:
    self.patches_diff = get_pr_diff(self.git_provider,
                                    self.token_handler,
                                    model,
                                    add_line_numbers_to_hunks=True,
                                    disable_extra_lines=True,)
    if self.patches_diff:
        get_logger().debug(f"PR diff", diff=self.patches_diff)
        self.prediction = await self._get_prediction(model)
    else:
        get_logger().error(f"Error getting PR diff")
        self.prediction = None
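I haven't traced get_pr_diff itself yet, but to picture what add_line_numbers_to_hunks=True might mean, here is a rough sketch of numbering the new-file lines of a unified diff hunk. The exact format pr-agent produces is an assumption on my part; the point is just that the model gets concrete line numbers to refer to.

def number_hunk(hunk_header: str, lines: list[str]) -> str:
    # hunk_header looks like "@@ -10,3 +10,4 @@"; take the new-file start line
    new_start = int(hunk_header.split("+")[1].split(",")[0])
    out, n = [hunk_header], new_start
    for line in lines:
        if line.startswith("-"):
            out.append(line)        # removed lines have no new-file number
        else:
            out.append(f"{n} {line}")
            n += 1
    return "\n".join(out)

print(number_hunk("@@ -10,3 +10,4 @@", [" context", "+added line", " context"]))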
The model, in turn, comes from the settings, which are read from starlette_context's context via context["settings"]:
def _get_all_models(model_type: ModelType = ModelType.REGULAR) -> List[str]:
    if model_type == ModelType.TURBO:
        model = get_settings().config.model_turbo
    else:
        model = get_settings().config.model
    fallback_models = get_settings().config.fallback_models
    if not isinstance(fallback_models, list):
        fallback_models = [m.strip() for m in fallback_models.split(",")]
    all_models = [model] + fallback_models
    return all_models
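So the candidate list is just the primary model followed by the fallbacks, and if fallback_models is a comma-separated string in the config it gets split into a list. A quick illustration (the model names here are only examples, not pr-agent defaults):

# e.g. config has model = "gpt-4" and fallback_models = "gpt-3.5-turbo, claude-3-haiku"
fallback_models = "gpt-3.5-turbo, claude-3-haiku"
if not isinstance(fallback_models, list):
    fallback_models = [m.strip() for m in fallback_models.split(",")]
all_models = ["gpt-4"] + fallback_models
print(all_models)  # ['gpt-4', 'gpt-3.5-turbo', 'claude-3-haiku']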
def get_settings():
    """
    Retrieves the current settings.

    This function attempts to fetch the settings from the starlette_context's context object. If it fails,
    it defaults to the global settings defined outside of this function.

    Returns:
        Dynaconf: The current settings object, either from the context or the global default.
    """
    try:
        return context["settings"]
    except Exception:
        return global_settings
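So inside a request the settings come from starlette_context, and outside one (or if the lookup fails) they fall back to the module-level global_settings. A nice side effect is that a per-request copy can be mutated safely, which is presumably why retry_with_fallback_models can call get_settings().set("openai.deployment_id", ...) without affecting other requests. Here is a hypothetical sketch of putting such a copy into the context (not the actual pr-agent server wiring):

import copy
from starlette_context import context

def install_request_settings():
    # assumes we are running inside a request handled with starlette-context middleware
    context["settings"] = copy.deepcopy(global_settings)
    # from here on, get_settings().set(...) only touches this request's copy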
I'm curious what the prediction here actually is, because I'd expect the next step, pr_review, to be the part that produces the AI answer we usually see. So tomorrow I'll look at the ai handler.
await retry_with_fallback_models(self._prepare_prediction)
if not self.prediction:
    self.git_provider.remove_initial_comment()
    return None
pr_review = self._prepare_pr_review()