# RubyLLM HTTP Bridge Demo
This demo shows outbound HTTP requests from Rails running in browser WASM — powered by the RubyLLM gem and a JavaScript fetch bridge.
## How it works
Browser → Stimulus JS → POST /chat → Rails (WASM) → RubyLLM → Faraday → HTTP bridge (JS fetch) → CORS proxy → OpenAI API

Ruby's Net::HTTP is monkey-patched to route through a JavaScript fetch() bridge, since WASM has no socket support. A CORS proxy forwards requests to the OpenAI API (which doesn't set CORS headers).
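The monkey-patch idea can be sketched as follows. This is a simplified illustration, not the demo's actual bridge code: the `FetchBridge` module and its handler are assumptions, and the handler here is a plain Ruby lambda standing in for the JavaScript fetch() call.

```ruby
require "net/http"

# Simplified sketch: in browser WASM there are no sockets, so Net::HTTP is
# redirected to a pluggable handler that (in the real demo) calls JS fetch().
module FetchBridge
  class << self
    # callable taking (http_method, url, headers, body),
    # returning [status_code, response_body]
    attr_accessor :handler
  end

  module HTTPPatch
    def connect
      # no-op: skip TCP entirely (no sockets in WASM)
    end

    def request(req, body = nil)
      url = "#{use_ssl? ? 'https' : 'http'}://#{address}#{req.path}"
      status, resp_body = FetchBridge.handler.call(req.method, url, req.to_hash, body)
      klass = Net::HTTPResponse::CODE_TO_OBJ[status.to_s]
      res = klass.new("1.1", status.to_s, "")
      res.instance_variable_set(:@body, resp_body)
      res.instance_variable_set(:@read, true)
      res
    end
  end
end

Net::HTTP.prepend(FetchBridge::HTTPPatch)

# Stub handler for illustration; the demo wires this to JS fetch() instead.
FetchBridge.handler = ->(method, url, _headers, _body) { [200, "#{method} #{url}"] }

res = Net::HTTP.get_response(URI("https://api.openai.com/v1/models"))
puts res.code, res.body
```

Because Faraday's default adapter sits on top of Net::HTTP, patching this one class is enough to reroute RubyLLM's traffic without touching the gem itself.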
## Set up your API key
- Open the controller file in the editor (it should already be focused)
- Find the line `RubyLLM.config.openai_api_key = "sk-your-api-key-here"`
- Replace `"sk-your-api-key-here"` with your actual OpenAI API key
Since this is a controller file, Rails automatically picks up your change — no server restart needed. Your key will be used on the very next request.
Your API key is sent through a CORS proxy. This is a demo environment — do not use production keys. Use a temporary or low-limit key.
## Try it
Once the server is running and you’ve set your API key:
- Type a message in the chat input
- Hit Send
- Wait for the response (you’ll see “Thinking…” while the LLM generates)
## Key files
| File | Purpose |
|---|---|
| | Receives messages, configures the API key, calls RubyLLM.chat, returns JSON |
| | Points the HTTP bridge at the CORS proxy for api.openai.com |
| | Chat UI with message bubbles |
| | Stimulus controller for AJAX form submission |
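The bridge-to-proxy mapping can be pictured as a small piece of configuration. This is a hypothetical sketch only: the constant name and proxy URL below are assumptions, not the demo's actual initializer.

```ruby
# Hypothetical sketch (names and URL are assumptions): requests the bridge
# would send to api.openai.com are rewritten to go through the CORS proxy,
# which adds the CORS headers the browser requires and forwards everything
# else (path, body, Authorization header) upstream unchanged.
BRIDGE_PROXY_MAP = {
  "api.openai.com" => "https://cors-proxy.example.com/api.openai.com"
}.freeze
```

The proxy is needed only because fetch() in the browser enforces CORS; a server-side Rails app talking to the same API directly would not use it.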
Responses are not streamed — the full response appears after the LLM finishes generating. This is because the Ruby WASM runtime processes requests synchronously through a single-threaded queue.
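The single-threaded queue behavior can be modeled with a toy example. This is an illustration of the scheduling idea, not the runtime's actual code: each queued request runs to completion before the next one starts, so only whole response bodies ever come back, never interleaved chunks.

```ruby
# Toy model of a single-threaded request queue: one worker drains the
# queue in order, and each handler produces its full body before the
# next request is picked up -- which is why streaming isn't possible.
queue = Queue.new
completed = []

worker = Thread.new do
  while (message = queue.pop) != :shutdown
    # The whole LLM response is produced inside this iteration,
    # so the browser only ever sees complete bodies.
    completed << "reply to #{message.inspect} (full body, no chunks)"
  end
end

%w[hello how-are-you bye].each { |m| queue << m }
queue << :shutdown
worker.join

puts completed
```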