RubyLLM HTTP Bridge Demo

This demo shows outbound HTTP requests from Rails running in the browser on WASM — powered by the RubyLLM gem and a JavaScript fetch bridge.

How it works

Browser → Stimulus JS → POST /chat → Rails (WASM) → RubyLLM → Faraday
→ HTTP bridge (JS fetch) → CORS proxy → OpenAI API

Ruby’s Net::HTTP is monkey-patched to route through a JavaScript fetch() bridge, since WASM has no socket support. A CORS proxy forwards requests to the OpenAI API (which doesn’t set CORS headers).
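The monkey-patch can be pictured as a module prepended onto Net::HTTP with a pluggable handler. Everything below is an illustrative sketch, not the demo's actual code: the module name FetchBridge and the handler signature are assumptions, and the JS fetch() side is stubbed as a plain Ruby callable so the sketch runs outside WASM.

```ruby
require "net/http"
require "uri"

module FetchBridge
  class << self
    # handler: a callable taking (http_method, url, headers, body) and
    # returning [status_code, response_headers, response_body].
    # In the WASM build this would call JS fetch() via ruby.wasm interop.
    attr_accessor :handler
  end

  def request(req, body = nil, &block)
    return super unless FetchBridge.handler # no bridge set: use real sockets

    url = "http://#{address}:#{port}#{req.path}"
    status, headers, resp_body =
      FetchBridge.handler.call(req.method, url, req.to_hash, body || req.body)

    klass = Net::HTTPResponse::CODE_TO_OBJ.fetch(status.to_s, Net::HTTPResponse)
    res = klass.new("1.1", status.to_s, "")
    headers.each { |k, v| res[k] = v }
    # Pre-fill the body so res.body works without ever opening a socket
    res.instance_variable_set(:@body, resp_body)
    res.instance_variable_set(:@read, true)
    res
  end
end

Net::HTTP.prepend(FetchBridge)
```

With a handler installed, Net::HTTP calls return synthesized Net::HTTPResponse objects without touching a socket; the demo hooks the same point to call out to fetch() instead.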

Set up your API key

  1. Open in the editor (it should already be focused)
  2. Find the line RubyLLM.config.openai_api_key = "sk-your-api-key-here"
  3. Replace "sk-your-api-key-here" with your actual OpenAI API key

Since this is a controller file, Rails automatically picks up your change — no server restart needed. Your key will be used on the very next request.
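Outside a throwaway demo, the key would normally come from the environment rather than being hard-coded in a controller. A hedged initializer-style fragment using RubyLLM's configure block (the file path is illustrative):

```ruby
# config/initializers/ruby_llm.rb (illustrative path)
RubyLLM.configure do |config|
  # Read the key from the environment instead of committing it to source
  config.openai_api_key = ENV.fetch("OPENAI_API_KEY")
end
```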

Your API key is sent through a CORS proxy. This is a demo environment — do not use production keys. Use a temporary or low-limit key.

Try it

Once the server is running and you’ve set your API key:

  1. Type a message in the chat input
  2. Hit Send
  3. Wait for the response (you’ll see “Thinking…” while the LLM generates)

Key files

  • Receives messages, configures the API key, calls RubyLLM.chat, and returns JSON
  • Points the HTTP bridge at the CORS proxy for api.openai.com
  • Chat UI with message bubbles
  • Stimulus controller for AJAX form submission
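The proxy hand-off amounts to a URL rewrite before the bridge issues fetch(). A minimal sketch, where the proxy base URL and its path scheme are assumptions rather than the demo's actual values:

```ruby
require "uri"

# Hosts that must be reached through the CORS proxy. The proxy base URL
# below is a placeholder, not the demo's real endpoint.
PROXIED_HOSTS = {
  "api.openai.com" => "https://cors-proxy.example.com",
}.freeze

# Rewrite an outbound URL to go through the proxy when its host needs it;
# pass other URLs through untouched.
def bridge_url(url)
  uri = URI(url)
  proxy_base = PROXIED_HOSTS[uri.host]
  return url unless proxy_base

  "#{proxy_base}/#{uri.host}#{uri.request_uri}"
end
```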

Responses are not streamed — the full response appears after the LLM finishes generating. This is because the Ruby WASM runtime processes requests synchronously through a single-threaded queue.
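The effect of that single-threaded queue can be illustrated with a small sketch (not the runtime's actual code): one worker thread drains a queue, so each request produces its full result before the next starts, and the caller blocks until the whole response exists — which is why partial output can't be streamed.

```ruby
# Illustrative only: a serial queue with a single worker thread.
class SerialRequestQueue
  def initialize
    @jobs = Queue.new
    @worker = Thread.new do
      loop do
        job, result = @jobs.pop
        result << job.call # the full response is produced before handing back
      end
    end
  end

  # Blocks the caller until the single worker has fully processed the job.
  def enqueue(&job)
    result = Queue.new
    @jobs << [job, result]
    result.pop
  end
end
```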

Powered by WebContainers