🙋 ModelScope community updates this week:
📟 1,361 new models: the GLM-Edge series, Tongyi Qianwen QwQ, Marco-o1, MagicQuill-models, Qwen2vl-Flux, and more;
📁 29 new datasets: DAVIS-Edit, modjumlu, and more;
🎨 44 new demo apps: the handy image-editing tool MagicQuill, logo-on-anything, QwQ-32B-preview, Marco-o1, and more;
📄 5 articles:
- Zhipu open-sources GLM-Edge, a series of on-device language and multimodal models!
- The Qwen team open-sources QwQ, a model that thinks, and it is uncannily good!
- Complex reasoning in large models, viewed through OpenAI o1
- Exploring Marco-o1, a reasoning model for open-ended problems, newly open-sourced by Alibaba International's AI team!
- MagicQuill: the AI image editor that won 1,000 GitHub stars in 4 days and topped the Hugging Face trending chart
01. Featured Models
GLM-Edge series
The GLM-Edge series is Zhipu's open-source push toward real-world on-device deployment. It consists of chat language models and multimodal understanding models in two sizes each (GLM-Edge-1.5B-Chat, GLM-Edge-4B-Chat, GLM-Edge-V-2B, GLM-Edge-V-5B). The 1.5B / 2B models mainly target phones, in-car systems, and similar platforms, while the 4B / 5B models mainly target PCs.
Model collection:
https://modelscope.cn/collections/GLM-Edge-ff0306563d2844
Sample code:
Install transformers from source:
pip install git+https://github.com/huggingface/transformers.git
Language model inference:
from modelscope import AutoModelForCausalLM, AutoTokenizer

MODEL_PATH = "ZhipuAI/glm-edge-4b-chat"

# Load the tokenizer and model; device_map="auto" places weights automatically.
tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
model = AutoModelForCausalLM.from_pretrained(MODEL_PATH, device_map="auto")

# Build the chat prompt and tokenize it in one step.
message = [{"role": "user", "content": "hello!"}]
inputs = tokenizer.apply_chat_template(
    message,
    return_tensors="pt",
    add_generation_prompt=True,
    return_dict=True,
).to(model.device)

generate_kwargs = {
    "input_ids": inputs["input_ids"],
    "attention_mask": inputs["attention_mask"],
    "max_new_tokens": 128,
    "do_sample": False,  # greedy decoding
}
out = model.generate(**generate_kwargs)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
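For an interactive feel on device, you can stream tokens as they are produced instead of waiting for the full reply. A minimal streaming variant of the call above, using transformers' TextStreamer (our addition, not part of the official sample):

from transformers import TextStreamer

# Print tokens to stdout as they are generated; skip echoing the prompt.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
model.generate(**generate_kwargs, streamer=streamer)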
Multimodal model inference:
import torch
from PIL import Image
from modelscope import snapshot_download
from transformers import (
    AutoTokenizer,
    AutoImageProcessor,
    AutoModelForCausalLM,
)

url = "example.png"
messages = [{"role": "user", "content": [{"type": "image"}, {"type": "text", "text": "describe this image"}]}]
image = Image.open(url)

# Download the checkpoint from ModelScope, then load processor, tokenizer, and model.
model_dir = snapshot_download("ZhipuAI/glm-edge-v-5b")
processor = AutoImageProcessor.from_pretrained(model_dir, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_dir, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_dir,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)

# Tokenize the chat template, then attach the preprocessed image tensor.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_dict=True, tokenize=True, return_tensors="pt"
).to(next(model.parameters()).device)
generate_kwargs = {
    **inputs,
    "pixel_values": torch.tensor(processor(image).pixel_values).to(next(model.parameters()).device),
}
output = model.generate(**generate_kwargs, max_new_tokens=100)
# Decode only the generated part of the sequence.
print(tokenizer.decode(output[0][len(inputs["input_ids"][0]):], skip_special_tokens=True))
More inference and fine-tuning examples:
QwQ-32B-Preview
QwQ-32B-Preview is an experimental research model developed by the Qwen team, focused on strengthening AI reasoning. It performs especially well in math and coding:
- GPQA: 65.2%, demonstrating graduate-level scientific reasoning
- AIME: 50.0%, showing strong mathematical problem-solving skills
- MATH-500: 90.6%, reflecting broad command of mathematical topics
- LiveCodeBench: 50.0%, confirming solid performance in realistic programming scenarios
Model link:
https://modelscope.cn/models/Qwen/QwQ-32B-Preview
Sample code:
from modelscope import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/QwQ-32B-Preview"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "How many r in strawberry."
messages = [
    {"role": "system", "content": "You are a helpful and harmless assistant. You are Qwen developed by Alibaba. You should think step-by-step."},
    {"role": "user", "content": prompt}
]

# Render the chat template to text, then tokenize.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
# Strip the prompt tokens so only the model's reply is decoded.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
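For higher-throughput serving of a 32B model you might prefer vLLM; this is our suggestion, not part of the original post, and it assumes vLLM supports this checkpoint and that enough GPU memory is available (or tensor_parallel_size is set across several GPUs). A minimal sketch:

from vllm import LLM, SamplingParams

# Load the checkpoint with vLLM's offline engine.
llm = LLM(model="Qwen/QwQ-32B-Preview")
params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=512)
# `text` is the chat-template string built exactly as in the sample above.
outputs = llm.generate([text], params)
print(outputs[0].outputs[0].text)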
For more ways to try it out, see:
Marco-o1
Alibaba International's AI team has released its latest model, Marco-o1. Marco-o1 covers not only disciplines with standard answers (such as code and math), but puts extra emphasis on solving open-ended problems. The model is fine-tuned on ultra-long chain-of-thought (CoT) data, strengthening its ability to reflect on and correct its own reasoning. It uses Monte Carlo Tree Search (MCTS) to expand the solution space and produce better outputs, and defines mini-steps to search at a finer granularity. The model also excels at translation, applying inference-time scaling to machine translation for the first time.
Model link:
https://modelscope.cn/models/AIDC-AI/Marco-o1
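The search code itself isn't shipped with the post, but the reranking idea can be sketched: sample several candidate answers and keep the one whose tokens the model itself was most confident about. Below is our simplified stand-in for the MCTS confidence reward, not the official implementation; it assumes `model`, `tokenizer`, and a tokenized `model_inputs` prepared as in the sample code that follows, and that the tokenizer defines a pad token (generate otherwise falls back to eos).

import torch

# Sample k candidates, then score each by its mean token log-probability.
gen = model.generate(
    **model_inputs,
    do_sample=True,
    num_return_sequences=4,
    max_new_tokens=256,
    output_scores=True,
    return_dict_in_generate=True,
)
# Log-probability of each generated token, shape (k, new_tokens).
scores = model.compute_transition_scores(
    gen.sequences, gen.scores, normalize_logits=True
)
# Ignore padding after early EOS when averaging.
new_tokens = gen.sequences[:, model_inputs.input_ids.shape[1]:]
mask = new_tokens != tokenizer.pad_token_id
confidence = scores.masked_fill(~mask, 0.0).sum(-1) / mask.sum(-1)
best = gen.sequences[confidence.argmax()]
print(tokenizer.decode(best[model_inputs.input_ids.shape[1]:], skip_special_tokens=True))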
Sample code:
import torch
from typing import List, Dict
from modelscope import AutoModelForCausalLM, AutoTokenizer

def load_model_and_tokenizer(path):
    tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(path, trust_remote_code=True).to('cuda:0')
    model.eval()
    return tokenizer, model

def generate_response(model, tokenizer,
                      input_ids, attention_mask,
                      max_new_tokens=4096):
    # Greedy decoding, one token at a time, streaming each token to stdout.
    generated_ids = input_ids
    with torch.inference_mode():
        for _ in range(max_new_tokens):
            outputs = model(input_ids=generated_ids, attention_mask=attention_mask)
            next_token_id = torch.argmax(outputs.logits[:, -1, :], dim=-1).unsqueeze(-1)
            generated_ids = torch.cat([generated_ids, next_token_id], dim=-1)
            attention_mask = torch.cat([attention_mask, torch.ones_like(next_token_id)], dim=-1)
            new_token = tokenizer.decode(next_token_id.squeeze(), skip_special_tokens=True)
            print(new_token, end='', flush=True)
            if next_token_id.item() == tokenizer.eos_token_id:
                break
    # Return only the newly generated portion, without the prompt.
    return tokenizer.decode(generated_ids[0][input_ids.shape[-1]:],
                            skip_special_tokens=True)

def chat(model, tokenizer):
    history: List[Dict[str, str]] = []
    print("Enter 'q' to quit, 'c' to clear chat history.")
    while True:
        user_input = input("User: ").strip()
        if user_input.lower() == 'q':
            print("Exiting chat.")
            break
        if user_input.lower() == 'c':
            print("Clearing chat history.")
            history.clear()
            continue
        if not user_input:
            print("Input cannot be empty.")
            continue
        # Append the user turn, render the chat template, and generate a reply.
        history.append({"role": "user", "content": user_input})
        text = tokenizer.apply_chat_template(history, tokenize=False, add_generation_prompt=True)
        model_inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=4096).to('cuda:0')
        print('Assistant:', end=' ', flush=True)
        response = generate_response(model, tokenizer, model_inputs.input_ids, model_inputs.attention_mask)
        print()
        history.append({"role": "assistant", "content": response})

def main():
    path = "AIDC-AI/Marco-o1"
    tokenizer, model = load_model_and_tokenizer(path)
    print('Starting chat.')
    chat(model, tokenizer)

if __name__ == "__main__":
    main()
For more details, see the tutorial article:
Exploring Marco-o1, a reasoning model for open-ended problems, newly open-sourced by Alibaba International's AI team!
02. Recommended Datasets
DAVIS-Edit
DAVIS-Edit is a dataset focused on video editing and processing. It likely contains extended or modified video content for training and evaluating video object segmentation, tracking, and editing algorithms, and aims to advance visual AI research and applications in the video domain.
Dataset link:
https://modelscope.cn/datasets/AlonzoLeeeoooo/DAVIS-Edit
modjumlu
MMLU is a comprehensive general-knowledge benchmark for AI models covering 57 categories, from basics up to advanced professional domains. This dataset provides human translations of the question set into 14 languages, including careful translations into low-resource languages, to help improve models' multilingual abilities and make accurate, inclusive AI available to users worldwide.
Dataset link:
https://modelscope.cn/datasets/zhangx123/modjumlu
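Either dataset can be pulled straight from the hub with the ModelScope SDK; a minimal sketch (the split name here is a placeholder; check each dataset card for the actual subsets and splits):

from modelscope.msdatasets import MsDataset

# Load a dataset from the ModelScope hub; 'test' is a hypothetical split.
ds = MsDataset.load('zhangx123/modjumlu', split='test')
print(next(iter(ds)))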
03. Featured Apps
Super-handy image editor MagicQuill
MagicQuill is an interactive AI image-editing tool open-sourced by Ant Research that makes intelligent, fine-grained image editing fast and simple. Even a Photoshop novice can pick it up: brush over a region, add a prompt, and a complex, fiddly edit is done in an instant.
Try it out:
https://modelscope.cn/studios/ant-research/MagicQuill_demo
Logo-on-anything
In-Context LoRA + Image-to-Image + Inpainting: apply your logo to anything.
Try it out:
https://www.modelscope.cn/studios/iic/logo-in-context
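As a rough illustration of that recipe (a LoRA on top of an inpainting pass), here is a sketch with diffusers; the base checkpoint and LoRA path are placeholders, not the weights this demo actually uses:

import torch
from PIL import Image
from diffusers import AutoPipelineForInpainting

# Placeholder checkpoints -- the demo's actual base model and LoRA differ.
pipe = AutoPipelineForInpainting.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("path/to/in-context-lora")  # hypothetical LoRA path

image = Image.open("product.png")  # target photo
mask = Image.open("mask.png")      # white where the logo should go (same size)
result = pipe(prompt="a product photo with the brand logo",
              image=image, mask_image=mask).images[0]
result.save("logo_applied.png")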
QwQ-32B-preview
QwQ-32B-Preview is an experimental research model developed by the Qwen team to advance AI reasoning. As a preview release, it shows promising analytical ability but also has notable limitations, such as code-switching between languages and recursive reasoning loops. This demo supports single-turn queries only.
Try it out:
https://www.modelscope.cn/studios/Qwen/QwQ-32B-preview
Marco-o1
A demo of Marco-o1, the reasoning model for open-ended problems newly open-sourced by Alibaba International's AI team.
Try it out:
https://www.modelscope.cn/studios/modelscope/Marco-o1
GLM-Edge series demos
Demos of the GLM-Edge series, Zhipu AI's open-source push toward real-world on-device deployment, comprising chat language models and multimodal understanding models (GLM-Edge-1.5B-Chat, GLM-Edge-4B-Chat, GLM-Edge-V-2B, GLM-Edge-V-5B).
Try them out:
GLM on-device 1.5B chat model:
https://modelscope.cn/studios/ZhipuAI/GLM-Edge-1.5B-Chat-Demo
GLM on-device multimodal model:
https://modelscope.cn/studios/ZhipuAI/GLM-Edge-V-5B-Demo