
AI Large Language Models (LLM)

AI LLMs is an enterprise-grade large-model service gateway. It connects to multiple large model vendors through a unified calling interface that hides the API differences between them, and it handles API key management, load balancing, failover, and retries, with support for structured output and multi-turn conversations.

The AI LLMs element has a hierarchical structure of Meta (llms.Meta) → Type (llms.Bailian, llms.OpenAI, etc.) → Instance. Developers can quickly create AI LLMs instance elements through JitAi's visual development tools.

Developers can also create their own Type elements, or modify the official elements provided by JitAi (llms.Bailian, llms.OpenAI, etc.) in their own App to implement custom encapsulation.

Supported LLM vendors:

| Type Element | fullName | Vendor | Description |
|---|---|---|---|
| Bailian | llms.Bailian | Alibaba Cloud | Alibaba Cloud Bailian platform, integrating mainstream LLMs from domestic and international sources |
| OpenAI | llms.OpenAI | OpenAI | GPT series models, supporting GPT-4, GPT-3.5-turbo, etc. |
| Anthropic | llms.Anthropic | Anthropic | Claude series models, excelling at long-text processing and complex problem analysis |
| Gemini | llms.Gemini | Google | Google's multimodal AI model, supporting text, image, and code understanding |
| Siliconflow | llms.Siliconflow | Siliconflow | Professional AI inference acceleration platform providing efficient large-model inference services |
| Deepseek | llms.Deepseek | DeepSeek | Leading domestic large language model with excellent performance in Chinese understanding and code generation |
| OpenAICompatible | llms.OpenAICompatible | OpenAI Compatible | Services compatible with the OpenAI API protocol, supporting private deployment and third-party vendors |

Quick Start

Creating Instance Elements

The following is a complete example of creating an Alibaba Cloud Bailian AI Large Model instance element:

Directory Structure

myapp/llms/MyBailianLLM/
├── e.json
└── config.json

e.json File

myapp/llms/MyBailianLLM/e.json
{
  "title": "My Bailian Model",
  "type": "llms.Bailian",
  "backendBundleEntry": "."
}

config.json File

myapp/llms/MyBailianLLM/config.json
{
  "api_key": "xxx",
  "api_url": "https://dashscope.aliyuncs.com/compatible-mode/v1",
  "spare_api_keys": ["sk-backup-key-1", "sk-backup-key-2"]
}

Usage Example

# Get AI large model element
LLMProvider = app.getElement("llms.MyBailianLLM")

# Call AI large model
response = LLMProvider.runLlm({
    "dataType": "Ltext",
    "promptList": [
        {"role": "user", "prompt": "Hello, please introduce yourself", "id": "user-1"}
    ],
    "llmConfig": {"model": "qwen-plus"}
}, locals())

print(response)

OpenAI Compatible Instance Elements

The following is a complete example of creating an OpenAI-compatible AI large model instance element, suitable for services such as Ollama and Doubao that support the OpenAI API protocol:

Directory Structure

myapp/llms/MyOpenAICompatibleLLM/
├── e.json
└── config.json

e.json File

myapp/llms/MyOpenAICompatibleLLM/e.json
{
  "title": "My OpenAI Compatible Model",
  "type": "llms.OpenAICompatible",
  "backendBundleEntry": "."
}

config.json File

myapp/llms/MyOpenAICompatibleLLM/config.json
{
  "api_key": "ollama",
  "api_url": "http://127.0.0.1:11434/v1",
  "spare_api_keys": []
}

Usage Example

# Get AI large model element
LLMProvider = app.getElement("llms.MyOpenAICompatibleLLM")

# Call AI large model
response = LLMProvider.runLlm({
    "dataType": "Ltext",
    "promptList": [
        {"role": "user", "prompt": "Hello, please introduce yourself", "id": "user-1"}
    ],
    "llmConfig": {"model": "llama3.1"}  # Use the complete model name
}, locals())

print(response)

Notes:

  • OpenAI compatible mode supports privately deployed model services like Ollama, vLLM, etc.
  • api_key can be set to any value; some services (like Ollama) don't validate keys
  • api_url needs to point to a service endpoint that supports the OpenAI API protocol
  • model parameter needs to use the complete model name supported by the service

Element Configuration

e.json Configuration

| Parameter | Type | Native Type | Required | Description |
|---|---|---|---|---|
| title | Stext | str | Yes | Instance element display name |
| type | Stext | str | Yes | Points to the Type element fullName, such as llms.Bailian |
| backendBundleEntry | Stext | str | Yes | Fixed as "." |
| variables | JitList | list | No | Variable definition list, used for sensitive information such as API keys |

variables Configuration (Optional)

Used when API key management through variable substitution is needed:

| Parameter | Type | Native Type | Required | Description |
|---|---|---|---|---|
| name | Stext | str | Yes | Variable name |
| title | Stext | str | Yes | Variable display name |
| value | Stext | str | Yes | Variable value |
| required | Numeric | int | Yes | Whether required: 1 for required, 0 for optional |
| endpoint | Stext | str | Yes | Fixed as "backend" |

Note: API keys can also be configured directly in config.json without using variables.
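As a hedged illustration of the variables mechanism above, an e.json can declare a variable (the variable name apiKey and its value here are made up for the example) and config.json can then reference it with {{...}} substitution:

```json
{
  "title": "My Bailian Model",
  "type": "llms.Bailian",
  "backendBundleEntry": ".",
  "variables": [
    {
      "name": "apiKey",
      "title": "API Key",
      "value": "sk-your-real-key",
      "required": 1,
      "endpoint": "backend"
    }
  ]
}
```

The matching config.json would then set "api_key": "{{apiKey}}" instead of a literal key.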

config.json Configuration

| Parameter | Type | Native Type | Required | Description |
|---|---|---|---|---|
| api_key | Stext | str | Yes | API key; supports variable substitution via {{variableName}} |
| api_url | Stext | str | Yes | API service address |
| spare_api_keys | JitList | list | No | Backup API key list, used for load balancing |

Methods

runLlm

Core class method of the AI large model element, used to send requests to large model services.

Method Signature

@classmethod
def runLlm(cls, config: Dict[str, Any], context: Dict[str, Any]) -> str

Parameter Details

config parameter (configuration dictionary)

| Parameter | Type | Native Type | Required | Description |
|---|---|---|---|---|
| dataType | Stext | str | Yes | Output data type, such as Ltext, Stext, etc. |
| promptList | JitList | list | Yes | Prompt list; each entry contains role, prompt, and id fields |
| llmConfig | JitDict | dict | No | LLM configuration, containing model and other parameters |
| dataTypeConfig | JitDict | dict | No | Data type configuration |
| outputArgs | JitList | list | No | Output parameter configuration, used for structured output |

context parameter: the context variable dictionary used for variable substitution; usually pass locals().

outputArgs Configuration

outputArgs is used to define structured output format, the system will automatically generate JSON Schema and guide the model to return data in the specified format.

Basic Format

"outputArgs": [
{
"name": "parameter_name",
"title": "parameter_description",
"dataType": "data_type"
}
]

Supported Data Types

| Data Type | Description | Example |
|---|---|---|
| Stext | Single-line text | {"name": "summary", "title": "Summary", "dataType": "Stext"} |
| Ltext | Multi-line text | {"name": "content", "title": "Detailed content", "dataType": "Ltext"} |
| Numeric | Number | {"name": "score", "title": "Score", "dataType": "Numeric"} |
| Integer | Integer | {"name": "count", "title": "Count", "dataType": "Integer"} |
| Date | Date | {"name": "deadline", "title": "Deadline", "dataType": "Date"} |
| DateTime | Date and time | {"name": "createdAt", "title": "Creation time", "dataType": "DateTime"} |

Complex Data Type Examples

  • JitDict: {"name": "userInfo", "dataType": "JitDict", "variableList": [field definitions]}
  • JitList: {"name": "tags", "dataType": "JitList", "variableConfig": {"dataType": "Stext"}}
  • RowData: {"name": "user", "dataType": "RowData", "generic": "models.UserModel"}
  • RowList: {"name": "users", "dataType": "RowList", "generic": "models.UserModel"}

promptList format: a list of dictionaries, each containing role (system/user/assistant), prompt (message content), and id (message identifier).

llmConfig format: a dictionary containing model (model name), max_tokens, temperature, and other configurations.

Note: Different vendors may support different llmConfig parameters, please refer to each vendor's official API documentation.
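Putting the promptList format above into practice, the list below sketches a multi-turn conversation; the prompt contents and id values are made up for the example:

```python
# Illustrative multi-turn promptList following the role/prompt/id format
# described above; prompts and ids are invented for this sketch.
promptList = [
    {"role": "system", "prompt": "You are a helpful assistant.", "id": "sys-1"},
    {"role": "user", "prompt": "What is a service gateway?", "id": "user-1"},
    {"role": "assistant", "prompt": "A gateway routes and unifies API calls.", "id": "assistant-1"},
    {"role": "user", "prompt": "Why is that useful when calling LLM vendors?", "id": "user-2"},
]
```

This list is passed as the promptList field of the config dictionary given to runLlm.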

Return Value

Return Type: returns a string when outputArgs is absent, and a parsed structured data dictionary when outputArgs is present.
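Because the return type depends on whether outputArgs was configured, callers can branch on the value's type. A minimal sketch; the helper name and the "emotion" key are illustrative (the key assumes an outputArgs entry named "emotion"):

```python
# Hedged sketch: dispatch on runLlm's return type.
# The "emotion" key assumes an outputArgs entry with name "emotion";
# both the helper and the key name are invented for this example.
def handle_response(response):
    if isinstance(response, dict):
        # outputArgs was configured: fields are keyed by their "name"
        return response.get("emotion")
    # no outputArgs: the model's reply is returned as plain text
    return response
```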

Usage Examples

Structured Output

LLMProvider = app.getElement("llms.MyBailianLLM")

response = LLMProvider.runLlm({
    "dataType": "JitDict",
    "promptList": [
        {"role": "user", "prompt": "Analyze the sentiment of this sentence: 'The weather is really nice today'", "id": "user-1"}
    ],
    "llmConfig": {"model": "qwen-plus"},
    "outputArgs": [
        {"name": "emotion", "dataType": "Stext", "title": "Emotion type"},
        {"name": "confidence", "dataType": "Numeric", "title": "Confidence"}
    ]
}, locals())

Variable Substitution

userName = "John"
response = LLMProvider.runLlm({
    "dataType": "Ltext",
    "promptList": [
        {"role": "user", "prompt": "Hello {userName}, please introduce yourself", "id": "user-1"}
    ],
    "llmConfig": {"model": "qwen-plus"}
}, locals())

embedDocuments

Class method for document vectorization, used to convert a list of texts into high-dimensional vector representations.

Method Signature

@classmethod
def embedDocuments(cls, config: Dict[str, Any]) -> List[List[float]]

Parameter Details

config parameter (configuration dictionary)

| Parameter | Type | Native Type | Required | Description |
|---|---|---|---|---|
| texts | JitList | list | Yes | List of texts to be vectorized |
| model | Stext | str | Yes | Vectorization model name, such as text-embedding-v3 |

Return Value

Return Type: List[List[float]], a list of vectorization results with one vector per document.

Usage Example

Vectorization

LLMProvider = app.getElement("llms.MyBailianLLM")

response = LLMProvider.embedDocuments({
    "texts": ["This is the first document", "This is the second document"],
    "model": "text-embedding-v3"
})
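The vectors returned by embedDocuments can be compared with cosine similarity, for example in retrieval scenarios. A self-contained sketch using stand-in vectors (a real list would come from the embedDocuments call):

```python
import math

def cosine_similarity(a, b):
    # dot product divided by the product of the vector norms
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# stand-in values shaped like an embedDocuments result (List[List[float]])
doc_vectors = [[0.1, 0.8, 0.3], [0.2, 0.7, 0.4]]
similarity = cosine_similarity(doc_vectors[0], doc_vectors[1])
```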

rerankDocuments

Class method for document reranking, used to reorder candidate documents by relevance to a query text.

Method Signature

@classmethod
def rerankDocuments(cls, config: Dict[str, Any]) -> List[Dict]

Parameter Details

config parameter (configuration dictionary)

| Parameter | Type | Native Type | Required | Description |
|---|---|---|---|---|
| query | Stext | str | Yes | Query text used for relevance comparison with documents |
| documents | JitList | list | Yes | List of document texts to be reranked |
| model | Stext | str | Yes | Reranking model name, such as gte-rerank-v2 |

Return Value

Return Type: List[Dict], reranking result list, each dictionary contains:

  • index: Original document index (int)
  • score: Relevance score (float)
  • document: Document content (str, optional)
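With the index/score fields described above, the result list can be sorted to keep only the most relevant documents. A sketch with made-up scores standing in for real rerank output:

```python
# Stand-in rerank output shaped like the index/score dictionaries above;
# the scores are invented for this example.
results = [
    {"index": 0, "score": 0.42},
    {"index": 2, "score": 0.91},
    {"index": 1, "score": 0.77},
]

# keep the two most relevant documents, highest score first
top2 = sorted(results, key=lambda r: r["score"], reverse=True)[:2]
```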

Usage Examples

Basic Reranking

LLMProvider = app.getElement("llms.MyBailianLLM")

response = LLMProvider.rerankDocuments({
    "query": "What is artificial intelligence?",
    "documents": [
        "Artificial intelligence is a branch of computer science",
        "Machine learning is an important component of artificial intelligence",
        "Deep learning is a subfield of machine learning"
    ],
    "model": "gte-rerank"
})

Combined with Search Scenarios

search_results = [
    "Document 1: AI Technology Development History",
    "Document 2: Machine Learning Algorithm Introduction",
    "Document 3: Deep Learning Framework Comparison",
    "Document 4: Natural Language Processing Applications"
]

response = LLMProvider.rerankDocuments({
    "query": "Deep learning related technologies",
    "documents": search_results,
    "model": "gte-rerank-v2"
})

Properties

None

Advanced Features

API Key Management

Supports primary and backup key configuration; the system automatically performs load balancing and failover across keys.

Retry Mechanism

Built-in exponential backoff retry strategy that automatically handles transient errors.
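For intuition, exponential backoff doubles the wait between retry attempts up to a cap. The sketch below is illustrative only and is not the gateway's actual implementation:

```python
def backoff_delays(retries, base=1.0, cap=30.0):
    # Illustrative exponential backoff schedule (not the gateway's real code):
    # the delay doubles on each attempt and is clamped to `cap` seconds.
    return [min(cap, base * (2 ** i)) for i in range(retries)]
```

Real implementations typically also add random jitter so that many clients retrying at once do not hit the service in lockstep.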

Error Handling

Provides standardized error codes covering common error types such as invalid API keys, quota exceeded, and request rate limiting.
