Eland Sentiment vLLM - Chinese Financial Sentiment Analysis

A merged Qwen3-4B model fine-tuned for Chinese financial sentiment analysis, optimized for vLLM deployment.

This is the full merged model (LoRA weights merged into the base model), intended for high-throughput inference with vLLM.

Performance

Metric                   Score
Reliability (Macro Avg)  89.38%
Overall Sentiment        93.00%
Entity Sentiment         91.18%
Opinion Sentiment        76.67%
Agrees with Text         96.67%

Usage with vLLM

Basic Usage

from vllm import LLM, SamplingParams

# Load model
llm = LLM(model="p988744/eland-sentiment-zh-vllm")

# Define sampling parameters
sampling_params = SamplingParams(
    temperature=0.1,
    top_p=0.9,
    max_tokens=10
)

# Create a prompt in the Qwen chat format. The system prompt (Chinese) says:
# "You are a professional financial-text sentiment analysis assistant. Analyse the
# overall sentiment of the following text and answer 正面 (positive), 負面 (negative)
# or 中立 (neutral)." The user text reads: "TSMC's share price surged today; the
# market is optimistic that AI demand will keep growing."
prompt = """<|im_start|>system
你是一個專業的金融文本情感分析助手。請分析以下文本的整體情感,回答「正面」、「負面」或「中立」。<|im_end|>
<|im_start|>user
台積電今日股價大漲,市場看好AI需求持續成長。<|im_end|>
<|im_start|>assistant
"""

# Generate
outputs = llm.generate([prompt], sampling_params)
print(outputs[0].outputs[0].text)  # Expected: 正面 (positive)

Batch Processing

from vllm import LLM, SamplingParams

llm = LLM(model="p988744/eland-sentiment-zh-vllm")
sampling_params = SamplingParams(temperature=0.1, max_tokens=10)

# Multiple texts (translations: "TSMC revenue hits a record high",
# "Investors take a wait-and-see stance on the market outlook",
# "The company announces large-scale layoffs")
texts = [
    "台積電營收創新高",
    "投資人對後市持觀望態度",
    "公司宣布大幅裁員"
]

prompts = []
for text in texts:
    prompt = f"""<|im_start|>system
你是一個專業的金融文本情感分析助手。請分析以下文本的整體情感,回答「正面」、「負面」或「中立」。<|im_end|>
<|im_start|>user
{text}<|im_end|>
<|im_start|>assistant
"""
    prompts.append(prompt)

outputs = llm.generate(prompts, sampling_params)
for text, output in zip(texts, outputs):
    print(f"{text} -> {output.outputs[0].text}")

OpenAI-Compatible Server

# Start vLLM server
vllm serve p988744/eland-sentiment-zh-vllm --port 8000

# Query with curl
curl http://localhost:8000/v1/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "p988744/eland-sentiment-zh-vllm",
    "prompt": "<|im_start|>system\n你是一個專業的金融文本情感分析助手。請分析以下文本的整體情感,回答「正面」、「負面」或「中立」。<|im_end|>\n<|im_start|>user\n台積電股價大漲<|im_end|>\n<|im_start|>assistant\n",
    "max_tokens": 10,
    "temperature": 0.1
  }'
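
Any OpenAI-compatible client can also query the server started above. Below is a minimal sketch using the official openai Python package (an extra dependency assumed here, not required by this repo); api_key is a placeholder because the server is started without authentication.

from openai import OpenAI

# Point the client at the local vLLM server started above.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.completions.create(
    model="p988744/eland-sentiment-zh-vllm",
    prompt=(
        "<|im_start|>system\n"
        "你是一個專業的金融文本情感分析助手。請分析以下文本的整體情感,回答「正面」、「負面」或「中立」。<|im_end|>\n"
        "<|im_start|>user\n台積電股價大漲<|im_end|>\n<|im_start|>assistant\n"
    ),
    max_tokens=10,
    temperature=0.1,
)
print(response.choices[0].text)  # Expected: 正面 (positive)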

Task Prompts

Overall Sentiment:

System: 你是一個專業的金融文本情感分析助手。請分析以下文本的整體情感,回答「正面」、「負面」或「中立」。
(English: You are a professional financial-text sentiment analysis assistant. Analyse the overall sentiment of the following text and answer 正面 (positive), 負面 (negative) or 中立 (neutral).)
User: [your text]

Entity Sentiment:

System: 你是一個專業的金融文本情感分析助手。請分析以下文本中對「{entity}」的情感,回答「正面」、「負面」或「中立」。
(English: Analyse the sentiment towards "{entity}" in the following text and answer positive, negative or neutral.)
User: [your text]

Opinion Sentiment:

System: 你是一個專業的金融文本情感分析助手。請判斷以下觀點的情感傾向,回答「正面」、「負面」或「中立」。
(English: Judge the sentiment polarity of the following opinion and answer positive, negative or neutral.)
User: 文本:[text]
觀點:[opinion]
(文本 = text, 觀點 = opinion)
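
As a convenience, the sketch below wraps these three prompts in the same Qwen chat format used in the vLLM examples above. SYSTEM_PROMPTS and build_prompt are illustrative names introduced here, not part of the released code.

SYSTEM_PROMPTS = {
    "overall": "你是一個專業的金融文本情感分析助手。請分析以下文本的整體情感,回答「正面」、「負面」或「中立」。",
    "entity": "你是一個專業的金融文本情感分析助手。請分析以下文本中對「{entity}」的情感,回答「正面」、「負面」或「中立」。",
    "opinion": "你是一個專業的金融文本情感分析助手。請判斷以下觀點的情感傾向,回答「正面」、「負面」或「中立」。",
}

def build_prompt(task: str, text: str, entity: str = None, opinion: str = None) -> str:
    # Format one of the three task prompts in the Qwen chat template.
    system = SYSTEM_PROMPTS[task]
    if task == "entity":
        system = system.format(entity=entity)
    user = f"文本:{text}\n觀點:{opinion}" if task == "opinion" else text
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

# Example: build_prompt("entity", "台積電營收創新高", entity="台積電")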

Model Variants

Version       Repository                       Use Case
LoRA Adapter  p988744/eland-sentiment-zh       HuggingFace + PEFT
GGUF          p988744/eland-sentiment-zh-gguf  Ollama / llama.cpp
Full Merged   p988744/eland-sentiment-zh-vllm  vLLM (this repo)

Model Details

Parameter       Value
Base Model      Qwen/Qwen3-4B
Parameters      4.05B
dtype           bfloat16
Model Size      ~8GB
Context Length  2048
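
If you want to pin the engine configuration to the values above rather than relying on what vLLM infers from the checkpoint, both settings can be passed explicitly when loading (a minimal sketch; both are standard vLLM arguments):

from vllm import LLM

llm = LLM(
    model="p988744/eland-sentiment-zh-vllm",
    dtype="bfloat16",     # matches the BF16 weights listed above
    max_model_len=2048,   # context length listed above
)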

Training Details

Parameter      Value
Method         LoRA (merged)
LoRA Rank      32
LoRA Alpha     64
Epochs         8
Learning Rate  1e-5
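
For reference only, a hypothetical peft.LoraConfig matching these hyperparameters might look like the sketch below; target_modules is an assumption (typical Qwen attention projections) and is not stated on this card.

from peft import LoraConfig

lora_config = LoraConfig(
    r=32,             # LoRA rank from the table above
    lora_alpha=64,
    task_type="CAUSAL_LM",
    # Assumed target modules; not documented on this card.
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)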

Dataset

Trained on p988744/eland-sentiment-zh-data (a loading sketch follows the list):

  • 999 training samples
  • 300 test samples
  • Taiwanese stock-market forum posts and news text
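
A minimal loading sketch with the datasets library; the train/test split names are assumed from the sample counts above.

from datasets import load_dataset

dataset = load_dataset("p988744/eland-sentiment-zh-data")
print(dataset)  # expected splits: train (999 rows) and test (300 rows), per the card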

Requirements

pip install "vllm>=0.4.0"

License

Apache 2.0
