
Litellm #1265

Merged: 60 commits, Jul 3, 2024

Changes from 7 commits
af4743a
litellm base version
sfahad1414 Jun 21, 2024
e126bc2
1. added missing test cases
sfahad1414 Jun 24, 2024
90a47aa
test cased fixed
sfahad1414 Jun 25, 2024
3c8bb1b
1. added missing test case
sfahad1414 Jun 25, 2024
76ed000
1. added missing test case
sfahad1414 Jun 25, 2024
16cb58f
1. added missing test case
sfahad1414 Jun 25, 2024
3bfd368
1. added missing test case
sfahad1414 Jun 26, 2024
aba13c8
test cases fixed
sfahad1414 Jun 26, 2024
2dfddd6
removed unused variable
sfahad1414 Jun 26, 2024
ff7588c
fixed unused variable
sfahad1414 Jun 26, 2024
3ec8315
fixed unused variable
sfahad1414 Jun 26, 2024
6af2708
added test cased for fetching logs
sfahad1414 Jun 28, 2024
6da18bb
litellm base version
sfahad1414 Jun 21, 2024
a4b01c0
1. added missing test cases
sfahad1414 Jun 24, 2024
8cc9c97
test cased fixed
sfahad1414 Jun 25, 2024
e3bf9aa
1. added missing test case
sfahad1414 Jun 25, 2024
ab991a5
1. added missing test case
sfahad1414 Jun 25, 2024
2da9cc6
1. added missing test case
sfahad1414 Jun 25, 2024
16104bd
1. added missing test case
sfahad1414 Jun 26, 2024
7246313
test cases fixed
sfahad1414 Jun 26, 2024
ab31dd2
removed unused variable
sfahad1414 Jun 26, 2024
6090cf4
fixed unused variable
sfahad1414 Jun 26, 2024
cdc9dfa
fixed unused variable
sfahad1414 Jun 26, 2024
e04f8d5
added test cased for fetching logs
sfahad1414 Jun 28, 2024
8663846
added test cased for fetching logs
sfahad1414 Jun 28, 2024
5692297
Merge remote-tracking branch 'github/litellm' into litellm
sfahad1414 Jun 28, 2024
bcd7a5e
removed unused import
sfahad1414 Jun 28, 2024
f8545c6
added invocation in metadata for litellm
sfahad1414 Jun 28, 2024
2875c2f
1. changed rasa rule policy to allow max history
sfahad1414 Jul 1, 2024
a710ddd
litellm base version
sfahad1414 Jun 21, 2024
e3d52c1
1. added missing test cases
sfahad1414 Jun 24, 2024
2b4a3bb
test cased fixed
sfahad1414 Jun 25, 2024
f5e1e94
1. added missing test case
sfahad1414 Jun 25, 2024
270b0ab
1. added missing test case
sfahad1414 Jun 25, 2024
db4a710
1. added missing test case
sfahad1414 Jun 25, 2024
5b83405
1. added missing test case
sfahad1414 Jun 26, 2024
e57d183
test cases fixed
sfahad1414 Jun 26, 2024
c75d3ef
removed unused variable
sfahad1414 Jun 26, 2024
423ec46
fixed unused variable
sfahad1414 Jun 26, 2024
e7cd631
fixed unused variable
sfahad1414 Jun 26, 2024
a05ebd1
added test cased for fetching logs
sfahad1414 Jun 28, 2024
c286989
added test cased for fetching logs
sfahad1414 Jun 28, 2024
f48dc2e
litellm base version
sfahad1414 Jun 21, 2024
06693c0
1. added missing test cases
sfahad1414 Jun 24, 2024
887adc1
test cased fixed
sfahad1414 Jun 25, 2024
8252520
1. added missing test case
sfahad1414 Jun 25, 2024
b556412
1. added missing test case
sfahad1414 Jun 25, 2024
70b8c71
1. added missing test case
sfahad1414 Jun 25, 2024
181cfa7
1. added missing test case
sfahad1414 Jun 26, 2024
d0d5b1d
test cases fixed
sfahad1414 Jun 26, 2024
2ed8e06
removed unused variable
sfahad1414 Jun 26, 2024
dd80d84
fixed unused variable
sfahad1414 Jun 26, 2024
e68ba65
fixed unused variable
sfahad1414 Jun 26, 2024
a0156f4
added test cased for fetching logs
sfahad1414 Jun 28, 2024
b357b5e
removed unused import
sfahad1414 Jun 28, 2024
43d3c85
added invocation in metadata for litellm
sfahad1414 Jun 28, 2024
da13b69
1. changed rasa rule policy to allow max history
sfahad1414 Jul 1, 2024
a309505
test cases fixed after merging
sfahad1414 Jul 3, 2024
779e519
Merge remote-tracking branch 'github/litellm' into litellm
sfahad1414 Jul 3, 2024
c3316a9
Merge branch 'master' into litellm
sfahad1414 Jul 3, 2024
7 changes: 4 additions & 3 deletions augmentation/paraphrase/gpt3/gpt.py
@@ -1,7 +1,7 @@
"""Creates the Example and GPT classes for a user to interface with the OpenAI
API."""

import openai
from openai import OpenAI
import uuid


@@ -95,8 +95,9 @@ def submit_request(self, prompt, num_responses, api_key):
"""Calls the OpenAI API with the specified parameters."""
if num_responses < 1:
num_responses = 1
response = openai.Completion.create(api_key=api_key,
engine=self.get_engine(),
client = OpenAI(api_key=api_key)
response = client.completions.create(
model=self.get_engine(),
prompt=self.craft_query(prompt),
max_tokens=self.get_max_tokens(),
temperature=self.get_temperature(),
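The hunk above ports gpt.py from the pre-1.0 `openai.Completion.create(api_key=..., engine=...)` call to the v1 client interface, where a client object carries the key and `engine` becomes `model`. A minimal standalone sketch of the new call shape (the function name, `max_tokens` value, and model handling here are illustrative stand-ins, not gpt.py's exact code):

```python
def submit_completion(prompt: str, api_key: str, model: str, num_responses: int = 1):
    """Sketch of the openai>=1.0 completions call pattern shown in the diff."""
    # Imported lazily so the sketch can be inspected without the package installed.
    from openai import OpenAI

    if num_responses < 1:
        num_responses = 1
    # v1 style: the key lives on a client instance instead of being passed
    # into each openai.Completion.create(...) call.
    client = OpenAI(api_key=api_key)
    return client.completions.create(
        model=model,        # v1 uses model=, replacing the old engine= argument
        prompt=prompt,
        max_tokens=64,
        n=num_responses,
    )
```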
Empty file removed custom/__init__.py
Empty file.
58 changes: 0 additions & 58 deletions custom/fallback.py

This file was deleted.

169 changes: 0 additions & 169 deletions custom/ner.py

This file was deleted.

2 changes: 1 addition & 1 deletion kairon/actions/definitions/database.py
@@ -83,7 +83,7 @@ async def execute(self, dispatcher: CollectingDispatcher, tracker: Tracker, doma
request_body = ActionUtility.get_payload(payload, tracker)
msg_logger.append(request_body)
tracker_data = ActionUtility.build_context(tracker, True)
response = await vector_db.perform_operation(operation_type, request_body)
response = await vector_db.perform_operation(operation_type, request_body, user=tracker.sender_id)
logger.info("response: " + str(response))
response_context = self.__add_user_context_to_http_response(response, tracker_data)
bot_response, bot_resp_log, _ = ActionUtility.compose_response(vector_action_config['response'], response_context)
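The one-line change above threads the Rasa sender id into the vector DB call as a keyword argument. A self-contained stub showing the new call shape (the class body here is invented for illustration; kairon's implementation queries an actual vector store):

```python
import asyncio

class VectorDbStub:
    """Stand-in mirroring the perform_operation signature from the diff."""

    async def perform_operation(self, operation_type: str, request_body: dict, *, user: str):
        # A real implementation would run the vector-store operation; we echo
        # the inputs so the user propagation is visible.
        return {"operation": operation_type, "user": user, "body": request_body}

result = asyncio.run(
    VectorDbStub().perform_operation(
        "payload_search", {"query": "refund policy"}, user="sender-42"
    )
)
```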
30 changes: 11 additions & 19 deletions kairon/actions/definitions/prompt.py
@@ -4,16 +4,15 @@
from rasa_sdk import Tracker
from rasa_sdk.executor import CollectingDispatcher

from kairon import Utility
from kairon.actions.definitions.base import ActionsBase
from kairon.shared.actions.data_objects import ActionServerLogs
from kairon.shared.actions.exception import ActionFailure
from kairon.shared.actions.models import ActionType, UserMessageType
from kairon.shared.actions.utils import ActionUtility
from kairon.shared.constants import FAQ_DISABLED_ERR, KaironSystemSlots, KAIRON_USER_MSG_ENTITY
from kairon.shared.data.constant import DEFAULT_NLU_FALLBACK_RESPONSE
from kairon.shared.llm.factory import LLMFactory
from kairon.shared.models import LlmPromptType, LlmPromptSource
from kairon.shared.llm.processor import LLMProcessor


class ActionPrompt(ActionsBase):
@@ -62,14 +61,17 @@ async def execute(self, dispatcher: CollectingDispatcher, tracker: Tracker, doma
time_taken_slots = 0
final_slots = {"type": "slots_to_fill"}
llm_response_log = {"type": "llm_response"}

llm_processor = None
try:
k_faq_action_config, bot_settings = self.retrieve_config()
user_question = k_faq_action_config.get('user_question')
user_msg = self.__get_user_msg(tracker, user_question)
llm_type = k_faq_action_config['llm_type']
Review comment: Remove unused variable.

The variable llm_type is declared but not used anywhere in the method.

Suggested change:

- llm_type = k_faq_action_config['llm_type']

Ruff 69-69: Local variable llm_type is assigned to but never used (F841).

llm_params = await self.__get_llm_params(k_faq_action_config, dispatcher, tracker, domain)
llm = LLMFactory.get_instance("faq")(self.bot, bot_settings["llm_settings"])
llm_response, time_taken_llm_response = await llm.predict(user_msg, **llm_params)
llm_processor = LLMProcessor(self.bot)
llm_response, time_taken_llm_response = await llm_processor.predict(user_msg,
user=tracker.sender_id,
**llm_params)
status = "FAILURE" if llm_response.get("is_failure", False) is True else status
exception = llm_response.get("exception")
bot_response = llm_response['content']
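The refactor above replaces the LLMFactory lookup with a directly instantiated LLMProcessor and passes the caller's identity into predict. A minimal stub of that calling pattern (the class body is invented for illustration; kairon's LLMProcessor performs real LLM calls and logging):

```python
import asyncio

class LLMProcessorStub:
    """Stand-in with the constructor/predict/logs shape used in the diff."""

    def __init__(self, bot: str):
        self.bot = bot
        self.logs = []

    async def predict(self, user_msg: str, *, user: str, **llm_params):
        # Real code would call the configured LLM; we record and echo instead.
        self.logs.append({"bot": self.bot, "user": user, "message": user_msg})
        response = {"content": f"echo: {user_msg}", "is_failure": False}
        time_taken = 0.0
        return response, time_taken

async def run_action(bot: str, user_msg: str, sender_id: str):
    llm_processor = LLMProcessorStub(bot)
    llm_response, _elapsed = await llm_processor.predict(user_msg, user=sender_id)
    return llm_response, llm_processor.logs

llm_response, llm_logs = asyncio.run(run_action("demo-bot", "hi", "sender-1"))
```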
@@ -93,8 +95,8 @@ async def execute(self, dispatcher: CollectingDispatcher, tracker: Tracker, doma
total_time_elapsed = time_taken_llm_response + time_taken_slots
events_to_extend = [llm_response_log, final_slots]
events.extend(events_to_extend)
if llm:
llm_logs = llm.logs
if llm_processor:
llm_logs = llm_processor.logs
ActionServerLogs(
type=ActionType.prompt_action.value,
intent=tracker.get_intent_of_latest_message(skip_fallback_intent=False),
@@ -119,16 +121,6 @@ async def execute(self, dispatcher: CollectingDispatcher, tracker: Tracker, doma
return slots_to_fill

async def __get_llm_params(self, k_faq_action_config: dict, dispatcher: CollectingDispatcher, tracker: Tracker, domain: Dict[Text, Any]):
implementations = {
"GPT3_FAQ_EMBED": self.__get_gpt_params,
}

llm_type = Utility.environment['llm']["faq"]
if not implementations.get(llm_type):
raise ActionFailure(f'{llm_type} type LLM is not supported')
return await implementations[Utility.environment['llm']["faq"]](k_faq_action_config, dispatcher, tracker, domain)

async def __get_gpt_params(self, k_faq_action_config: dict, dispatcher: CollectingDispatcher, tracker: Tracker, domain: Dict[Text, Any]):
from kairon.actions.definitions.factory import ActionFactory

system_prompt = None
@@ -147,7 +139,7 @@ async def __get_gpt_params(self, k_faq_action_config: dict, dispatcher: Collecti
history_prompt = ActionUtility.prepare_bot_responses(tracker, num_bot_responses)
elif prompt['source'] == LlmPromptSource.bot_content.value and prompt['is_enabled']:
use_similarity_prompt = True
hyperparameters = prompt.get('hyperparameters', {})
hyperparameters = prompt.get("hyperparameters", {})
similarity_prompt.append({'similarity_prompt_name': prompt['name'],
'similarity_prompt_instructions': prompt['instructions'],
'collection': prompt['data'],
@@ -179,7 +171,7 @@ async def __get_gpt_params(self, k_faq_action_config: dict, dispatcher: Collecti
is_query_prompt_enabled = True
query_prompt_dict.update({'query_prompt': query_prompt, 'use_query_prompt': is_query_prompt_enabled})

params["hyperparameters"] = k_faq_action_config.get('hyperparameters', Utility.get_llm_hyperparameters())
params["hyperparameters"] = k_faq_action_config['hyperparameters']
Review comment: Refactor method to reduce complexity.

The __get_llm_params method is quite complex and handles multiple responsibilities. Consider breaking it down into smaller, more focused methods.

- async def __get_llm_params(self, k_faq_action_config: dict, dispatcher: CollectingDispatcher, tracker: Tracker, domain: Dict[Text, Any]):
+ async def __get_llm_params(self, k_faq_action_config: dict, tracker: Tracker):
+     # Simplified method focusing only on necessary parameters

Committable suggestion was skipped due to low confidence.

params["system_prompt"] = system_prompt
params["context_prompt"] = context_prompt
params["query_prompt"] = query_prompt_dict
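Note the behavioral change in the hyperparameters line above: `.get(key, default)` silently falls back when the key is missing, while direct indexing raises KeyError, so a misconfigured action now fails loudly instead of using a hidden default. A toy illustration of that difference:

```python
# Config dict deliberately missing the 'hyperparameters' key.
k_faq_action_config = {"system_prompt": "You are helpful."}

# Old behaviour: silently falls back to a default.
old_style = k_faq_action_config.get("hyperparameters", {"temperature": 0.0})

# New behaviour: a missing key is a hard error.
try:
    k_faq_action_config["hyperparameters"]
    raised = False
except KeyError:
    raised = True
```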