Measure execution times of BERT models using WebGPU and WASM
WebGPU Embedding Benchmark is a tool designed to measure the execution times of BERT models using WebGPU and WebAssembly (WASM). It helps developers and researchers evaluate the performance of embedding models in web-based environments, leveraging modern graphics technologies for accelerated computations.
• WebGPU Acceleration: Leverages WebGPU for hardware-accelerated computations.
• WASM Execution: Utilizes WebAssembly for efficient model inference.
• Detailed Timing Measurements: Provides precise execution time metrics for model inference.
• Cross-Platform Compatibility: Runs on modern web browsers supporting WebGPU.
• Model Optimization Insights: Offers benchmarks to guide model optimization strategies.
• Performance Comparison: Enables comparison of performance across different hardware setups (see the sketch below).
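To make the WebGPU-vs-WASM comparison concrete, here is a minimal sketch of loading a BERT-style embedding model on either backend. It assumes Transformers.js v3 (`@huggingface/transformers`), which runs ONNX models in the browser via ONNX Runtime Web and accepts a `device` option; the model id shown is an illustrative choice, not necessarily the one this benchmark uses.

```ts
// Minimal sketch (assumption: Transformers.js v3, i.e. @huggingface/transformers,
// whose pipelines accept a `device` option of 'webgpu' or 'wasm').
import { pipeline } from '@huggingface/transformers';

type Device = 'webgpu' | 'wasm';

// Build a feature-extraction pipeline (BERT-style sentence embeddings)
// on the requested execution backend. The model id is an example.
function loadExtractor(device: Device) {
  return pipeline('feature-extraction', 'Xenova/all-MiniLM-L6-v2', { device });
}
```

Loading the same model once with `'webgpu'` and once with `'wasm'`, then feeding both identical inputs, is the simplest way to compare the two execution paths on a given machine.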
What does WebGPU Embedding Benchmark measure?
It measures the execution time of BERT models using WebGPU and WASM, providing insights into performance bottlenecks.
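How the timing itself might be done is sketched below: one warm-up call (so one-time costs such as shader compilation and session initialization are excluded) followed by repeated runs whose median is reported. This is an assumed, generic methodology built on the standard `performance.now()` API, not a description of the benchmark's exact internals; `extractor` refers to the hypothetical pipeline from the earlier sketch.

```ts
// Generic timing helper (assumption: median of repeated runs after one warm-up
// call is a reasonable way to report inference latency in the browser).
async function timeInference(run: () => Promise<unknown>, repeats = 10): Promise<number> {
  await run(); // warm-up: excludes one-time costs such as shader compilation

  const samples: number[] = [];
  for (let i = 0; i < repeats; i++) {
    const start = performance.now();
    await run();
    samples.push(performance.now() - start);
  }

  samples.sort((a, b) => a - b);
  return samples[Math.floor(samples.length / 2)]; // median is robust to outliers
}

// Example usage with the hypothetical extractor from the previous sketch:
// const ms = await timeInference(() => extractor('Hello world', { pooling: 'mean' }));
```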
Which browsers support WebGPU?
At the time of writing, Chromium-based browsers such as Chrome and Edge ship WebGPU by default on desktop, while support in Firefox and Safari is still rolling out via experimental flags or preview builds on some platforms.
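A quick way to check whether the current browser actually exposes a usable WebGPU adapter is shown below; this is standard WebGPU feature detection with `navigator.gpu`, independent of any particular benchmark.

```ts
// WebGPU feature detection. The `any` cast only avoids needing the @webgpu/types
// package for TypeScript; at runtime this is the standard navigator.gpu API.
async function webgpuAvailable(): Promise<boolean> {
  const gpu = (navigator as any).gpu;
  if (!gpu) return false; // API not exposed by this browser / context
  try {
    const adapter = await gpu.requestAdapter();
    return adapter !== null; // the API can exist while no suitable adapter does
  } catch {
    return false;
  }
}
```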
Why is WebGPU combined with WASM for this benchmark?
WebGPU provides GPU hardware acceleration, while WASM offers an efficient, portable CPU execution path that runs in any modern browser. Benchmarking both shows how much the GPU path actually gains on a given machine and gives a reliable fallback where WebGPU is unavailable.
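In practice the two backends are often wired together as "GPU if possible, CPU otherwise". The sketch below shows that pattern using the hypothetical `webgpuAvailable` and `loadExtractor` helpers from the earlier snippets; it illustrates one common arrangement rather than this benchmark's own code.

```ts
// Prefer the WebGPU backend when an adapter is available; otherwise fall back
// to the portable WASM (CPU) backend. Helpers are the hypothetical ones above.
async function loadWithBestBackend() {
  const device = (await webgpuAvailable()) ? 'webgpu' : 'wasm';
  console.log(`Running inference on the ${device} backend`);
  return loadExtractor(device);
}
```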