Wrapper around Ali Tongyi large language models that use the Chat endpoint.

To use you should have the ALIBABA_API_KEY environment variable set.

Example

import { ChatAlibabaTongyi } from "@langchain/community/chat_models/alibaba_tongyi";
import { HumanMessage } from "@langchain/core/messages";

// Minimal configuration: only the API key is required.
const qwen = new ChatAlibabaTongyi({
  alibabaApiKey: "YOUR-API-KEY",
});

// Or with an explicit model and sampling temperature.
const qwenTurbo = new ChatAlibabaTongyi({
  modelName: "qwen-turbo",
  temperature: 1,
  alibabaApiKey: "YOUR-API-KEY",
});

const messages = [new HumanMessage("Hello")];

await qwenTurbo.call(messages);

Hierarchy

Implements

  • AlibabaTongyiChatInput

Properties

apiUrl: string
modelName: string & {} | "qwen-turbo" | "qwen-plus" | "qwen-max" | "qwen-max-1201" | "qwen-max-longcontext" | "qwen-7b-chat" | "qwen-14b-chat" | "qwen-72b-chat" | "llama2-7b-chat-v2" | "llama2-13b-chat-v2" | "baichuan-7b-v1" | "baichuan2-13b-chat-v1" | "baichuan2-7b-chat-v1" | "chatglm3-6b" | "chatglm-6b-v2"
streaming: boolean
alibabaApiKey?: string
enableSearch?: boolean
maxTokens?: number
prefixMessages?: TongyiMessage[]
repetitionPenalty?: number
seed?: number
temperature?: number
topK?: number
topP?: number
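
The `string & {}` member of the `modelName` type is a common TypeScript idiom: it widens the union to accept any string while preserving editor autocomplete for the listed literals. A minimal sketch, trimmed to a few of the model ids listed above:

```typescript
// Sketch of the modelName type: `(string & {})` accepts any string,
// but the literal members still drive editor autocomplete.
type TongyiModelName =
  | (string & {})
  | "qwen-turbo"
  | "qwen-plus"
  | "qwen-max";

const known: TongyiModelName = "qwen-plus";      // a listed literal
const custom: TongyiModelName = "my-fine-tune";  // any other string is also allowed
```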

Methods

  • Get the identifying parameters for the model

    Returns {
        enable_search?: null | boolean;
        incremental_output?: null | boolean;
        max_tokens?: null | number;
        repetition_penalty?: null | number;
        result_format?: "text" | "message";
        seed?: null | number;
        stream?: boolean;
        temperature?: null | number;
        top_k?: null | number;
        top_p?: null | number;
    } & Pick<ChatCompletionRequest, "model">
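
The `& Pick<ChatCompletionRequest, "model">` intersection means the identifying parameters are the invocation parameters plus the request's `model` field. A hedged sketch of how that composes (the `ChatCompletionRequest` shape here is illustrative, not the library's exact type):

```typescript
// Illustrative shapes only; the real ChatCompletionRequest has more fields.
interface ChatCompletionRequest {
  model: string;
  input: { messages: Array<{ role: string; content: string }> };
}

type InvocationParams = {
  temperature?: number | null;
  top_p?: number | null;
  stream?: boolean;
};

// Identifying params = invocation params plus the picked "model" field.
type IdentifyingParams = InvocationParams & Pick<ChatCompletionRequest, "model">;

const id: IdentifyingParams = { model: "qwen-turbo", temperature: 1, stream: false };
```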

  • Get the parameters used to invoke the model

    Returns {
        enable_search?: null | boolean;
        incremental_output?: null | boolean;
        max_tokens?: null | number;
        repetition_penalty?: null | number;
        result_format?: "text" | "message";
        seed?: null | number;
        stream?: boolean;
        temperature?: null | number;
        top_k?: null | number;
        top_p?: null | number;
    }

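
The snake_case request fields above line up one-to-one with the camelCase constructor properties. The following is an illustrative sketch of that mapping with a hypothetical helper, not the library's implementation; in particular, tying `incremental_output` to `streaming` is an assumption:

```typescript
// Hypothetical helper showing how camelCase constructor options could
// map onto the snake_case request parameters listed above.
interface TongyiOptions {
  temperature?: number;
  topP?: number;
  topK?: number;
  maxTokens?: number;
  repetitionPenalty?: number;
  seed?: number;
  enableSearch?: boolean;
  streaming?: boolean;
}

interface TongyiRequestParams {
  temperature?: number | null;
  top_p?: number | null;
  top_k?: number | null;
  max_tokens?: number | null;
  repetition_penalty?: number | null;
  seed?: number | null;
  enable_search?: boolean | null;
  stream?: boolean;
  result_format?: "text" | "message";
  incremental_output?: boolean | null;
}

function toInvocationParams(opts: TongyiOptions): TongyiRequestParams {
  return {
    temperature: opts.temperature ?? null,
    top_p: opts.topP ?? null,
    top_k: opts.topK ?? null,
    max_tokens: opts.maxTokens ?? null,
    repetition_penalty: opts.repetitionPenalty ?? null,
    seed: opts.seed ?? null,
    enable_search: opts.enableSearch ?? null,
    stream: opts.streaming ?? false,
    result_format: "message",
    // Assumption: incremental output only makes sense when streaming.
    incremental_output: opts.streaming ? true : null,
  };
}

const params = toInvocationParams({ temperature: 1, topP: 0.8, streaming: true });
```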

Generated using TypeDoc