Local AI Chat

A chat interface that runs a language model entirely in the browser. Inference is 100% local: the model is downloaded to the device, and no chat data is sent to a server.

Runtime backends:
- wllama (CPU/WASM): CPU inference via WebAssembly
- web-llm (WebGPU): GPU-accelerated inference via WebGPU

The UI provides model, runtime, and backend selectors, a "Download & Start" button with a progress indicator ("Preparing... 0%"), and readouts for load time, token count, generation speed, elapsed time, memory use, and platform.
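The two backends above (wllama on CPU/WASM, web-llm on WebGPU) imply a selection step: WebGPU when the browser supports it, WASM otherwise. A minimal sketch of that logic; the function name and return strings are illustrative assumptions, not the app's actual API:

```typescript
// Hypothetical backend picker: prefer GPU inference (web-llm) when
// WebGPU is available, otherwise fall back to CPU/WASM (wllama).
type Backend = "web-llm (WebGPU)" | "wllama (CPU/WASM)";

function chooseBackend(hasWebGPU: boolean): Backend {
  return hasWebGPU ? "web-llm (WebGPU)" : "wllama (CPU/WASM)";
}

// In a browser, WebGPU support is detected via the `navigator.gpu`
// property, e.g.: chooseBackend("gpu" in navigator)
```

Keeping the check as a pure function of a boolean makes the fallback decision easy to test outside the browser, where `navigator` does not exist.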