Measure execution times of BERT models using WebGPU and WASM
WebGPU Embedding Benchmark is a tool designed to measure the execution times of BERT models using WebGPU and WebAssembly (WASM). It helps developers and researchers evaluate the performance of embedding models in web-based environments, leveraging the browser's GPU capabilities for accelerated computation.
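As an illustration of the core measurement, the sketch below times a single embedding pass in the browser with Transformers.js. The library choice, model id (`Xenova/all-MiniLM-L6-v2`), and backend option are assumptions made for this example, not necessarily what the benchmark itself uses.

```ts
// Illustrative sketch only (not the benchmark's source): time one embedding
// pass of a small BERT-style model in the browser.
import { pipeline } from '@huggingface/transformers';

async function timeEmbedding(device: 'webgpu' | 'wasm'): Promise<number> {
  // Model id is an assumed example of a compact sentence-embedding model.
  const extractor = await pipeline(
    'feature-extraction',
    'Xenova/all-MiniLM-L6-v2',
    { device },
  );

  const sentences = [
    'WebGPU embedding benchmark',
    'Measure inference latency in the browser',
  ];

  const start = performance.now();
  await extractor(sentences, { pooling: 'mean', normalize: true });
  const elapsedMs = performance.now() - start;

  console.log(`${device}: ${elapsedMs.toFixed(1)} ms`);
  return elapsedMs;
}
```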
• WebGPU Acceleration: Leverages WebGPU for hardware-accelerated computations.
• WASM Execution: Utilizes WebAssembly for efficient model inference.
• Detailed Timing Measurements: Provides precise execution time metrics for model inference (see the timing sketch after this list).
• Cross-Platform Compatibility: Runs on modern web browsers supporting WebGPU.
• Model Optimization Insights: Offers benchmarks to guide model optimization strategies.
• Performance Comparison: Enables comparison of performance across different hardware setups.
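For the "Detailed Timing Measurements" point, a common pattern is to discard a warm-up pass (which absorbs one-time costs such as shader compilation and weight upload) and then report the median of several timed passes. The helper below is a generic sketch of that pattern; the `Embedder` callable type and the option names mirror a Transformers.js-style feature-extraction pipeline and are assumptions here.

```ts
// Sketch of a repeated-measurement loop (illustrative, not the benchmark's code).
type Embedder = (texts: string[], opts: Record<string, unknown>) => Promise<unknown>;

async function medianLatencyMs(embed: Embedder, runs = 10): Promise<number> {
  const input = ['The quick brown fox jumps over the lazy dog.'];

  // Warm-up: the first call often pays one-time setup costs.
  await embed(input, { pooling: 'mean', normalize: true });

  const timings: number[] = [];
  for (let i = 0; i < runs; i++) {
    const start = performance.now();
    await embed(input, { pooling: 'mean', normalize: true });
    timings.push(performance.now() - start);
  }

  timings.sort((a, b) => a - b);
  return timings[Math.floor(timings.length / 2)]; // median, in milliseconds
}
```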
What does WebGPU Embedding Benchmark measure?
It measures the execution time of BERT models using WebGPU and WASM, providing insights into performance bottlenecks.
Which browsers support WebGPU?
At the time of writing, Chromium-based browsers such as Chrome and Edge ship WebGPU by default on supported platforms, while Firefox and Safari have been rolling out support more gradually, initially behind flags or in preview builds.
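At runtime, support can be detected with the standard `navigator.gpu` API. Below is a minimal sketch (not the benchmark's code) that falls back to WASM when WebGPU is unavailable; in TypeScript, the `@webgpu/types` package provides proper typings for `navigator.gpu`.

```ts
// Detect WebGPU support and fall back to WASM when it is unavailable.
async function pickBackend(): Promise<'webgpu' | 'wasm'> {
  const gpu = (navigator as { gpu?: { requestAdapter(): Promise<unknown | null> } }).gpu;
  if (!gpu) {
    return 'wasm'; // browser does not expose the WebGPU API
  }
  // requestAdapter() resolves to null when no suitable GPU is available.
  const adapter = await gpu.requestAdapter();
  return adapter ? 'webgpu' : 'wasm';
}
```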
Why is WebGPU combined with WASM for this benchmark?
WebGPU provides GPU hardware acceleration, while WASM offers portable, efficient CPU execution. Together they cover the main execution paths for model inference in the browser, so benchmarking both gives a complete picture of web-based performance.
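Putting the pieces together, one way to quantify that combination is to run the same model on both backends and report the speedup. The sketch below assumes Transformers.js and an example model id; it is illustrative, not the benchmark's implementation.

```ts
// Illustrative comparison of the two backends (not the benchmark's actual code):
// load the same embedding model once per backend and report the WebGPU speedup.
import { pipeline } from '@huggingface/transformers';

async function compareBackends(modelId = 'Xenova/all-MiniLM-L6-v2'): Promise<void> {
  const input = ['Compare WebGPU against WASM for embedding inference.'];
  const results: Record<string, number> = {};

  for (const device of ['wasm', 'webgpu'] as const) {
    const extractor = await pipeline('feature-extraction', modelId, { device });
    await extractor(input, { pooling: 'mean', normalize: true }); // warm-up pass

    const start = performance.now();
    await extractor(input, { pooling: 'mean', normalize: true });
    results[device] = performance.now() - start;
  }

  console.log(`WASM: ${results.wasm.toFixed(1)} ms, WebGPU: ${results.webgpu.toFixed(1)} ms`);
  console.log(`WebGPU speedup: ${(results.wasm / results.webgpu).toFixed(2)}x`);
}
```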