Procyon® AI Text Generation Benchmark
Simplifying Local LLM AI Performance Testing
Testing AI LLM performance can be very complicated and time-consuming, with full AI models requiring large amounts of storage space and bandwidth to download. There are also many variables such as quantization, conversion, and variations in input tokens that can reduce a test’s reliability if not configured correctly.
The Procyon AI Text Generation Benchmark provides a more compact and easier way to repeatedly and consistently test AI performance with multiple LLM AI models. We worked closely with many AI software and hardware leaders to ensure our benchmark tests take full advantage of the local AI accelerator hardware in your systems.
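One way to see what "repeatable and consistent" means in practice is to quantify run-to-run spread. The sketch below computes the coefficient of variation over repeated benchmark scores; the sample tokens-per-second values are invented for illustration and are not Procyon results.

```python
import statistics

def repeatability(samples):
    """Coefficient of variation (stdev / mean) across benchmark runs.

    A low value means the workload is producing consistent scores;
    large swings suggest an uncontrolled variable (quantization,
    input tokens, background load) is affecting the test.
    """
    return statistics.stdev(samples) / statistics.mean(samples)

# Hypothetical tokens/second scores from five repeated runs.
runs = [41.8, 42.1, 41.5, 42.3, 41.9]
print(f"mean={statistics.mean(runs):.2f} tok/s, CV={repeatability(runs):.2%}")
```

A CV under a few percent is a common informal threshold for treating a set of runs as stable enough to compare across devices.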
Sample prompt — Prompt 7 (RAG query): "How can benchmarking save time and money for my organization? How do I choose a reference benchmark score for RFPs? Summarize how to efficiently test the performance of PCs for enterprise IT. Answer based on the context provided."
Results and Insights
Built with input from industry leaders
- Built with input from leading AI vendors to take full advantage of next-generation local AI accelerator hardware.
- Seven prompts simulating multiple real-world use cases, with RAG (Retrieval-Augmented Generation) and non-RAG queries.
- Designed to run consistent, repeatable workloads, minimizing common AI LLM workload variables.
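The distinction between the RAG and non-RAG queries above can be sketched in a few lines: a RAG query prepends retrieved context to the question before it reaches the model. This is an illustrative toy, not the benchmark's actual pipeline; the documents are invented and the keyword-overlap ranking stands in for real embedding search.

```python
# Invented document snippets for illustration only.
DOCUMENTS = [
    "Benchmarking lets IT teams compare devices before purchase.",
    "A reference benchmark score can anchor hardware requirements in RFPs.",
    "Consistent workloads make performance results repeatable.",
]

def retrieve(query, docs, k=2):
    """Rank docs by shared lowercase words with the query; return top k."""
    q_words = set(query.lower().split())
    scored = sorted(docs, key=lambda d: -len(q_words & set(d.lower().split())))
    return scored[:k]

def build_prompt(query, use_rag=True):
    """Non-RAG: the query alone. RAG: retrieved context prepended."""
    if not use_rag:
        return query
    context = "\n".join(retrieve(query, DOCUMENTS))
    return (f"Context:\n{context}\n\nQuestion: {query}\n"
            "Answer based on the context provided.")

print(build_prompt("How can benchmarking save money for my organization?"))
```

Because the RAG variant feeds extra input tokens to the model, it exercises prompt processing more heavily than a plain query, which is why a benchmark suite includes both kinds.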
Detailed Results
- Get in-depth reporting on how system resources are used during AI workloads.
- Reduced install size compared to testing with full AI models.
- Easily compare results between devices to help identify the best systems for your use cases.
AI Testing Simplified
- Easily and quickly test using four industry-standard AI models of varying parameter sizes.
- Get a real-time view of responses being generated during the benchmark.
- Test with all supported inference engines in one click, or configure them based on your preference.
Developed with Industry Experts
Procyon benchmarks are designed for industry, enterprise, and press use, with tests and features created specifically for professional users. The Procyon AI Text Generation Benchmark was designed and developed with industry partners through the UL Benchmark Development Program (BDP). The BDP is a UL Solutions initiative that creates relevant, unbiased benchmarks through close cooperation with program members.
Inference Engine Performance
With the Procyon AI Text Generation Benchmark, you can measure the performance of dedicated AI processing hardware and verify inference engine implementation quality with tests based on a heavy AI text generation workload.
Designed for Professionals
We created the Procyon AI Text Generation Benchmark for engineering teams who need an independent, standardized tool for assessing the general AI performance of inference engine implementations and dedicated hardware.
Fast and Easy to Use
The benchmark is easy to install and run, requiring no complicated configuration. Run the benchmark using the Procyon application or via the command line. View benchmark scores and charts, or export detailed result files for further analysis.
System Requirements
All ONNX models
Storage: 18.25 GB
All OpenVINO models
Storage: 15.45 GB
Phi-3.5-mini
ONNX with DirectML
- 6 GB VRAM (discrete GPU)
- 16 GB system RAM (iGPU)
- Storage: 2.15 GB
Intel OpenVINO
- 4 GB VRAM (discrete GPU)
- 16 GB system RAM (iGPU)
- Storage: 1.84 GB
Llama-3.1-8B
ONNX with DirectML
- 8 GB VRAM (discrete GPU)
- 32 GB system RAM (iGPU)
- Storage: 5.37 GB
Intel OpenVINO
- 8 GB VRAM (discrete GPU)
- 32 GB system RAM (iGPU)
- Storage: 3.88 GB
Mistral-7B
ONNX with DirectML
- 8 GB VRAM (discrete GPU)
- 32 GB system RAM (iGPU)
- Storage: 3.69 GB
Intel OpenVINO
- 8 GB VRAM (discrete GPU)
- 32 GB system RAM (iGPU)
- Storage: 3.48 GB
Llama-2-13B
ONNX with DirectML
- 12 GB VRAM (discrete GPU)
- 32 GB system RAM (iGPU)
- Storage: 7.04 GB
Intel OpenVINO
- 10 GB VRAM (discrete GPU)
- 32 GB system RAM (iGPU)
- Storage: 6.25 GB
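As a rough guide, the ONNX-with-DirectML requirements above can be encoded to check which models a given system could run locally. The table values come from this page; the checking function itself is only an illustrative sketch (the benchmark performs its own compatibility checks), and treats the discrete-GPU VRAM and iGPU system-RAM figures as alternative paths.

```python
# ONNX-with-DirectML requirements per model, from the table above:
# VRAM (GB) for a discrete GPU, system RAM (GB) for iGPU use, storage (GB).
ONNX_DIRECTML = {
    "Phi-3.5-mini":  {"vram_gb": 6,  "igpu_ram_gb": 16, "storage_gb": 2.15},
    "Llama-3.1-8B":  {"vram_gb": 8,  "igpu_ram_gb": 32, "storage_gb": 5.37},
    "Mistral-7B":    {"vram_gb": 8,  "igpu_ram_gb": 32, "storage_gb": 3.69},
    "Llama-2-13B":   {"vram_gb": 12, "igpu_ram_gb": 32, "storage_gb": 7.04},
}

def runnable_models(vram_gb=0, system_ram_gb=0, free_storage_gb=0):
    """Return models whose requirements the system meets, via either a
    discrete GPU (VRAM check) or an iGPU (system RAM check), with enough
    free storage for the model files."""
    ok = []
    for name, req in ONNX_DIRECTML.items():
        gpu_ok = vram_gb >= req["vram_gb"] or system_ram_gb >= req["igpu_ram_gb"]
        if gpu_ok and free_storage_gb >= req["storage_gb"]:
            ok.append(name)
    return ok

# An 8 GB discrete GPU with ample storage meets every requirement
# except Llama-2-13B's 12 GB VRAM.
print(runnable_models(vram_gb=8, free_storage_gb=50))
```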
Support
Latest 1.0.73.0 | December 9, 2024
Languages
- English
- German
- Japanese
- Portuguese (Brazilian)
- Simplified Chinese
- Spanish