The fewer characters you upload and download, the more tokens you save.
We can ask the AI to answer a question using only a set of data we supply,
but when there is a lot of data, uploading all of it on every request gets expensive,
so instead we can send only the most similar portions of the data to the AI.
The method is:
use embeddings to convert each piece of data into a vector (call these A),
convert the question into a vector as well (call it B), compare B against each A for similarity,
and hand the data with the highest (or top few) similarity scores to the AI to answer the question, as sketched below.
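As a rough illustration of that ranking step, here is a minimal sketch using numpy; the function name, vectors, and top_k value are made up for the example:

import numpy as np

def top_k_chunks(question_vec, chunk_vecs, chunk_texts, top_k=5):
    # cosine similarity between the question and every data chunk
    q = np.array(question_vec)
    sims = [float(np.dot(q, np.array(c)) / (np.linalg.norm(q) * np.linalg.norm(np.array(c))))
            for c in chunk_vecs]
    # indices of the top_k most similar chunks, highest score first
    order = np.argsort(sims)[::-1][:top_k]
    return [(chunk_texts[i], sims[i]) for i in order]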
To store the vectors in Oracle, use a column of type BLOB.
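A minimal sketch of that serialization, assuming the embedding comes back from the API as a plain Python list of floats (the values here are made up):

import pickle
import numpy as np

embedding = [0.01, -0.02, 0.03]                 # hypothetical embedding values
blob_bytes = pickle.dumps(embedding)            # bytes, ready to bind to a BLOB column
restored = np.array(pickle.loads(blob_bytes))   # back to a vector for the similarity math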
Install the tools (pickle ships with the Python standard library, so it does not need to be installed separately):
pip install openai numpy pandas oracledb
Generate the vectors and store them in the DB:
import openai
import oracledb
import pickle

openai.api_key = "your api key"
# TODO: set the connection string and credentials
with oracledb.connect(user=user, password=password, dsn=conn_string) as conn:
    conn.autocommit = True
    with conn.cursor() as cursor:
        # fetch the rows that have not been embedded yet
        for r in cursor.execute("select id, txt from table1 where vct is null"):
            print(r[0])
            text = r[1].replace("\n", " ")
            emb = openai.Embedding.create(input=[text], model="text-embedding-ada-002")
            token = emb["usage"]["total_tokens"]
            # pickle the embedding list so it fits the BLOB column
            data = pickle.dumps(emb['data'][0]['embedding'])
            conn.cursor().execute("update table1 set vct = :dd, token = :tk where id = :id", [data, token, r[0]])
Since I am still experimenting with how much data to retrieve, I store the similarity scores in a separate table (table2) for later analysis; a possible layout is sketched below.
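The column names follow the insert statement in the script after this; the types are assumptions:

import oracledb
# TODO: set the connection string and credentials, as in the scripts
with oracledb.connect(user=user, password=password, dsn=conn_string) as conn:
    # hypothetical one-off setup for table2
    conn.cursor().execute("""
        create table table2 (
            qid   number,  -- id of the question row in table1
            aid   number,  -- id of the reference data row in table1
            score number   -- cosine similarity * 100
        )""")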
The following compares the vectors and calls gpt-3.5 to produce the answer:
import sys
import openai
import oracledb
import numpy as np
import pickle
import time

openai.api_key = "your api key"
# TODO: connection information
with oracledb.connect(user=user, password=password, dsn=conn_string) as conn:
    conn.autocommit = True
    with conn.cursor() as cursor:
        # fetch the questions
        for r in cursor.execute("select id, txt, vct from table1 where ans = 0 order by id"):
            print(r[0])
            #print(r[1])
            question_embedding = np.array(pickle.loads(r[2].read()))
            conn.cursor().execute("delete table2 where qid = %d" % r[0])  # clear scores from the previous run
            # fetch every reference data row
            for ans in conn.cursor().execute("select vct, id from table1 where ans > 0"):
                answer_embedding = np.array(pickle.loads(ans[0].read()))
                # compute the cosine similarity, one pair at a time
                similarity = np.dot(question_embedding, answer_embedding) / (np.linalg.norm(question_embedding) * np.linalg.norm(answer_embedding))
                # store the similarity so it can be queried later
                conn.cursor().execute("insert into table2(qid, aid, score) values (%d, %d, %f)" % (r[0], ans[1], similarity * 100))
            # fetch the top 5 most similar rows for this question, concatenated into one CLOB
            for ans in conn.cursor().execute("""SELECT REPLACE(REPLACE(
                    RTRIM(
                        XMLSERIALIZE(CONTENT XMLAGG(XMLELEMENT(e, a || chr(10) || chr(10)) ORDER BY score DESC) AS CLOB),
                        chr(10) || chr(10)
                    ),
                    '<E>'),
                    '</E>') AS result
                FROM (
                    select c.txt a, score
                    FROM table2 a
                    join table1 c
                    on a.aid = c.id
                    where qid = %d
                    order by score desc
                )
                where rownum < 6""" % r[0]):
                print('question: ' + r[1])
                #print('ref: ' + ans[0].read())
                msg = [
                    {"role": "system", "content": """You must answer the question briefly and precisely in Traditional Chinese based on the context below. Don't respond answers that are irrelevant to the question. If the question is not related to the context below, respond with: "不知". context:
%s""" % ans[0].read()},
                    {"role": "user", "content": "%s" % r[1]}
                ]
                #print(msg)
                for i in range(6):
                    try:
                        # gpt-3.5 and later use the chat-style call, with the prompt split into roles
                        response = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=msg)
                        token = response["usage"]["total_tokens"]
                        print("token " + str(token))
                        data = response["choices"][0]["message"]["content"].strip()
                        print(data)
                        conn.cursor().execute("update table1 set ai = :dd, anstoken = :tk where id = :id", [data, token, r[0]])
                        break
                    except Exception as err:  # named err so it does not clobber the msg message list above
                        print('error on attempt %d: ' % i, err)
                        if i < 5:
                            time.sleep(20)
                        else:
                            sys.exit()
llama-index already packages all of the above functionality and can be used directly; see:
https://gpt-index.readthedocs.io/en/latest/how_to/query_interface.html
https://zhuanlan.zhihu.com/p/613155165
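For comparison, a minimal sketch based on the quickstart in those pages, assuming an early gpt-index / llama-index 0.x era API where GPTSimpleVectorIndex and index.query are still available (newer releases have renamed these interfaces); the "data" folder and the question are made up:

import os
from llama_index import GPTSimpleVectorIndex, SimpleDirectoryReader

os.environ["OPENAI_API_KEY"] = "your api key"
documents = SimpleDirectoryReader("data").load_data()  # load your text files
index = GPTSimpleVectorIndex(documents)                # embeds and indexes the chunks
response = index.query("your question here")           # retrieves similar chunks and asks the LLM
print(response)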