<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Hayride]]></title><description><![CDATA[Hayride]]></description><link>https://blog.hayride.dev</link><image><url>https://cdn.hashnode.com/res/hashnode/image/upload/v1750355691977/3b6bb668-ae6b-4cad-a08f-c38c424c56bd.png</url><title>Hayride</title><link>https://blog.hayride.dev</link></image><generator>RSS for Node</generator><lastBuildDate>Fri, 17 Apr 2026 04:45:40 GMT</lastBuildDate><atom:link href="https://blog.hayride.dev/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Composable Agents]]></title><description><![CDATA[Overview
In our last post, we announced Hayride, an open-source secure AI runtime for LLMs, sandboxed code execution, and orchestrating agentic workflows.
Hayride leverages the security and portability benefits offered by WebAssembly, making it an id...]]></description><link>https://blog.hayride.dev/composable-agents</link><guid isPermaLink="true">https://blog.hayride.dev/composable-agents</guid><category><![CDATA[golang]]></category><category><![CDATA[WebAssembly]]></category><category><![CDATA[AI]]></category><category><![CDATA[#ai-tools]]></category><category><![CDATA[llm]]></category><dc:creator><![CDATA[Ethan Lewis]]></dc:creator><pubDate>Fri, 01 Aug 2025 16:39:28 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1754066285929/a55de9b4-aa86-400b-91bd-fb0fbe25516e.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-overview">Overview</h1>
<p><a target="_blank" href="https://blog.hayride.dev/sandboxing-ai">In our last post</a>, we announced Hayride, an open-source secure AI runtime for LLMs, sandboxed code execution, and orchestrating agentic workflows.</p>
<p>Hayride leverages the <a target="_blank" href="https://webassembly.org/docs/security/">security</a> and <a target="_blank" href="https://webassembly.org/docs/portability/">portability</a> benefits offered by WebAssembly, making it an ideal platform for developers focused on building composable and reusable AI tooling.</p>
<p>In the series of posts this one kicks off, we will explore building a lightweight command-line interface (CLI) AI agent in <strong>Golang</strong>, with a sprinkle of <strong>Rust</strong>, to demonstrate how quickly AI agents that leverage tools written in multiple languages can be composed using Hayride.</p>
<p>If you are new to WebAssembly and concepts such as WebAssembly Interface Types (WIT), the WebAssembly System Interface (WASI), or the component model, we recommend reviewing the resources below; that said, this post will introduce the various concepts as they come up.</p>
<p>Here are some resources to get you up to speed on WebAssembly:</p>
<ul>
<li><p><a target="_blank" href="https://wasi.dev/">https://wasi.dev/</a></p>
</li>
<li><p><a target="_blank" href="https://component-model.bytecodealliance.org/introduction.html">https://component-model.bytecodealliance.org/introduction.html</a></p>
</li>
<li><p><a target="_blank" href="https://docs.hayride.dev/platform/concepts/wasm">https://docs.hayride.dev/platform/concepts/wasm</a></p>
</li>
</ul>
<p>Let’s dive in!</p>
<h2 id="heading-prerequisites">Prerequisites</h2>
<p>Before we can start implementing our application, several tools are required. Of note, Hayride leverages <a target="_blank" href="https://wasi.dev/interfaces#wasi-02">WASI Preview 2</a>, which is gaining support across various languages.</p>
<p>We’ll use the following tools in this post:</p>
<ul>
<li><p><a target="_blank" href="https://github.com/hayride-dev/releases">Hayride</a></p>
</li>
<li><p><a target="_blank" href="https://github.com/bytecodealliance/go-modules?tab=readme-ov-file#wit-bindgen-go">Wit-bindgen-go</a></p>
</li>
<li><p><a target="_blank" href="https://github.com/bytecodealliance/wit-deps">Wit-deps</a></p>
</li>
<li><p><a target="_blank" href="https://github.com/bytecodealliance/wac">Wac</a></p>
</li>
<li><p><a target="_blank" href="https://go.dev/doc/install">Go</a> version 1.23.0+</p>
</li>
<li><p><a target="_blank" href="https://tinygo.org/">TinyGo</a> version 0.33.0+</p>
</li>
<li><p><a target="_blank" href="https://www.rust-lang.org/tools/install">Rust +nightly</a></p>
</li>
<li><p><a target="_blank" href="https://github.com/bytecodealliance/cargo-component">Cargo component</a></p>
</li>
</ul>
<p>Please refer to the tools’ installation guides to get started.</p>
<h3 id="heading-installing-hayride">Installing Hayride</h3>
<p>The easiest way to install Hayride is through our installation script. Linux and macOS users can execute the following:</p>
<p><code>curl https://raw.githubusercontent.com/hayride-dev/releases/refs/heads/main/install.sh -sSf | bash</code></p>
<p>This downloads a precompiled Hayride binary, places it in <code>$HOME/.hayride</code>, and updates your shell configuration to add that directory to your <code>PATH</code>.</p>
<p>Windows users can visit our releases page to download the <a target="_blank" href="https://github.com/hayride-dev/releases/releases/download/v0.0.3-alpha/hayride-v0.0.3-alpha-x86_64-windows.msi">MSI installer</a> and use it to install Hayride.</p>
<p>After the installation completes, the hayride binary should be located in your path. You can verify the installation by running <code>hayride help</code> from your terminal.</p>
<p>Now that Hayride is installed, we can start developing an agent that can be deployed to Hayride!</p>
<h2 id="heading-building-a-cli-agent">Building a CLI Agent</h2>
<p>Hayride has defined a set of AI interfaces using WebAssembly Interface Types (WIT).</p>
<p>An <strong>interface</strong> describes a single-focused, composable contract through which components can interact with each other and with hosts.</p>
<p>Interfaces are directional. When using an interface, you can indicate whether the interface is available for external code to call (i.e., <strong>export</strong>) or whether external code must fulfill the interface for the component to call (i.e., <strong>import</strong>).</p>
<p>Interfaces are strictly bound to a component. A component cannot interact with anything outside itself except by having its exports called or by calling its imports. These constraints provide rigorous sandboxing.</p>
<p>Here is an example of how Hayride defines an agent runner interface using WIT:</p>
<pre><code class="lang-go"><span class="hljs-keyword">package</span> hayride:ai@<span class="hljs-number">0.0</span><span class="hljs-number">.61</span>;

<span class="hljs-keyword">interface</span> runner {
    use types.{message};
    use agents.{agent};
    use wasi:io/streams@<span class="hljs-number">0.2</span><span class="hljs-number">.0</span>.{output-stream};

    enum error-code {
        invoke-error,
        unknown
    }

    resource error {
        code: <span class="hljs-function"><span class="hljs-keyword">func</span><span class="hljs-params">()</span> -&gt; <span class="hljs-title">error</span>-<span class="hljs-title">code</span>;</span>
        data: <span class="hljs-function"><span class="hljs-keyword">func</span><span class="hljs-params">()</span> -&gt; <span class="hljs-title">string</span>;</span>
    }

    invoke: <span class="hljs-function"><span class="hljs-keyword">func</span><span class="hljs-params">(message: message, agent: borrow&lt;agent&gt;)</span> -&gt; <span class="hljs-title">result</span>&lt;<span class="hljs-title">list</span>&lt;<span class="hljs-title">message</span>&gt;, <span class="hljs-title">error</span>&gt;;</span>
    invoke-stream: <span class="hljs-function"><span class="hljs-keyword">func</span><span class="hljs-params">(message: message, writer: borrow&lt;output-stream&gt;, agent: borrow&lt;agent&gt;)</span> -&gt; <span class="hljs-title">result</span>&lt;_,<span class="hljs-title">error</span>&gt;;</span>
}
</code></pre>
<p>(<a target="_blank" href="https://github.com/hayride-dev/coven/blob/main/ai/wit/runner.wit">https://github.com/hayride-dev/coven/blob/main/ai/wit/runner.wit</a>)</p>
<p>The runner interface is responsible for invoking an agent and supplying a prompt or message.</p>
<p><strong>Runners</strong> define the agent loop as a function that describes how the agent executes.</p>
<p><strong>Agents</strong> are defined as a component that interacts with an AI model, can use tools, and can store the context of any interactions.</p>
<p>Our agent interface in WIT is defined as follows:</p>
<pre><code class="lang-go"><span class="hljs-keyword">package</span> hayride:ai@<span class="hljs-number">0.0</span><span class="hljs-number">.61</span>;

<span class="hljs-keyword">interface</span> agents {
    use types.{message};
    use context.{context};
    use model.{format};
    use hayride:mcp/tools@<span class="hljs-number">0.0</span><span class="hljs-number">.61</span>.{tools};
    use hayride:mcp/types@<span class="hljs-number">0.0</span><span class="hljs-number">.61</span>.{tool, call-tool-params, call-tool-result};
    use graph-stream.{graph-stream};
    use inference-stream.{graph-execution-context-stream};
    use wasi:io/streams@<span class="hljs-number">0.2</span><span class="hljs-number">.0</span>.{output-stream};

    enum error-code {
        capabilities-error,
        context-error,
        compute-error,
        execute-error,
        unknown
    }

    resource error {
        code: <span class="hljs-function"><span class="hljs-keyword">func</span><span class="hljs-params">()</span> -&gt; <span class="hljs-title">error</span>-<span class="hljs-title">code</span>;</span>
        data: <span class="hljs-function"><span class="hljs-keyword">func</span><span class="hljs-params">()</span> -&gt; <span class="hljs-title">string</span>;</span>
    }

    resource agent {
        constructor(name: <span class="hljs-keyword">string</span>, instruction: <span class="hljs-keyword">string</span>, format: format, graph: graph-execution-context-stream, tools: option&lt;tools&gt;, context: option&lt;context&gt;);
        name: <span class="hljs-function"><span class="hljs-keyword">func</span><span class="hljs-params">()</span> -&gt; <span class="hljs-title">string</span>;</span>
        instruction: <span class="hljs-function"><span class="hljs-keyword">func</span><span class="hljs-params">()</span> -&gt; <span class="hljs-title">string</span>;</span>
        capabilities: <span class="hljs-function"><span class="hljs-keyword">func</span><span class="hljs-params">()</span> -&gt; <span class="hljs-title">result</span>&lt;<span class="hljs-title">list</span>&lt;<span class="hljs-title">tool</span>&gt;, <span class="hljs-title">error</span>&gt;;</span>
        context: <span class="hljs-function"><span class="hljs-keyword">func</span><span class="hljs-params">()</span> -&gt; <span class="hljs-title">result</span>&lt;<span class="hljs-title">list</span>&lt;<span class="hljs-title">message</span>&gt;, <span class="hljs-title">error</span>&gt;;</span>
        compute: <span class="hljs-function"><span class="hljs-keyword">func</span><span class="hljs-params">(message: message)</span> -&gt; <span class="hljs-title">result</span>&lt;<span class="hljs-title">message</span>, <span class="hljs-title">error</span>&gt;;</span>
        execute: <span class="hljs-function"><span class="hljs-keyword">func</span><span class="hljs-params">(params: call-tool-params)</span> -&gt; <span class="hljs-title">result</span>&lt;<span class="hljs-title">call</span>-<span class="hljs-title">tool</span>-<span class="hljs-title">result</span>, <span class="hljs-title">error</span>&gt;;</span>
    }
}
</code></pre>
<p>(<a target="_blank" href="https://github.com/hayride-dev/coven/blob/main/ai/wit/agents.wit">https://github.com/hayride-dev/coven/blob/main/ai/wit/agents.wit</a>)</p>
<p>Following the component model, these interfaces can be implemented by external code and imported by our component.</p>
<p>For this post, we use the default runner and agent implementations packaged with Hayride. This lets us focus solely on the CLI portion of our agent, relying on externally available runner and agent components that satisfy our interface contracts. The implementations of these components can be found in our <a target="_blank" href="https://github.com/hayride-dev/morphs/tree/main/components">morphs repository</a>.</p>
<p>In a future post, we will unpack how each of these components works and how you can implement your own component that satisfies the various AI interfaces Hayride supplies.</p>
<h3 id="heading-defining-our-morph">Defining Our Morph</h3>
<p>Hayride Morphs are the fundamental building blocks of applications. They can <strong>import</strong> functions to access external capabilities and can also <strong>export</strong> their capabilities to other morphs.</p>
<p>The term <strong>morph</strong> simply refers to a WebAssembly component that is designed to be composable and portable across different environments.</p>
<p>Our CLI Agent Morph can be described in WIT using <strong>worlds</strong>.</p>
<p>A WIT world is a higher-level contract that describes a component’s capabilities and needs. A world is composed of interfaces. For a component to run, its imports must be fulfilled by a host or by other components.</p>
<p>Connecting up some or all of a component’s imports to other components’ matching exports is called <strong>composition</strong>.</p>
<p>Given this, we can define our component world as follows:</p>
<pre><code class="lang-go"><span class="hljs-keyword">package</span> hayride:example@<span class="hljs-number">0.0</span><span class="hljs-number">.1</span>;

world cli {
    include hayride:wasip2/imports@<span class="hljs-number">0.0</span><span class="hljs-number">.61</span>;
    include hayride:wasip2/exports@<span class="hljs-number">0.0</span><span class="hljs-number">.61</span>;

    <span class="hljs-keyword">import</span> hayride:ai/runner@<span class="hljs-number">0.0</span><span class="hljs-number">.61</span>;
    <span class="hljs-keyword">import</span> hayride:ai/model-repository@<span class="hljs-number">0.0</span><span class="hljs-number">.61</span>;
}
</code></pre>
<p>(<a target="_blank" href="https://github.com/hayride-dev/morphs/blob/main/components/examples/agents/wit/world.wit#L3C1-L9C2">https://github.com/hayride-dev/morphs/blob/main/components/examples/agents/wit/world.wit#L3C1-L9C2</a>)</p>
<p>Now that we have a rough idea of what our world and interfaces look like, we can create our project and see how we use the preceding WIT definitions.</p>
<h3 id="heading-project-setup">Project Setup</h3>
<p>First, we create our project’s directory layout:</p>
<p><code>mkdir hayride-example-agent &amp;&amp; cd hayride-example-agent</code></p>
<p>Since we are building our agent in Go and compiling to WebAssembly using TinyGo, we can use <strong>go mod</strong> to initialize our application and dependencies:</p>
<p><code>go mod init hayride-example-agent</code></p>
<p>Next, we create a directory called wit:</p>
<p><code>mkdir wit</code></p>
<p>We use the world defined above and copy it to a file in our wit directory:</p>
<p><code>touch ./wit/world.wit</code></p>
<p>To use this world, we need to pull down our dependencies. Using Hayride’s WIT repository, we can add three dependencies with <strong>wit-deps</strong>.</p>
<p>Wit-deps requires a <code>deps.toml</code> to track dependencies. We can add it to our wit directory using the following command:</p>
<p><code>touch ./wit/deps.toml</code></p>
<p>In the <code>deps.toml</code> file, add the following dependencies:</p>
<pre><code class="lang-ini"><span class="hljs-attr">wasip2</span> = <span class="hljs-string">"https://github.com/hayride-dev/coven/releases/download/v0.0.61/hayride_wasip2_v0.0.61.tar.gz"</span>
<span class="hljs-attr">ai</span> = <span class="hljs-string">"https://github.com/hayride-dev/coven/releases/download/v0.0.61/hayride_ai_v0.0.61.tar.gz"</span>
<span class="hljs-attr">mcp</span> = <span class="hljs-string">"https://github.com/hayride-dev/coven/releases/download/v0.0.61/hayride_mcp_v0.0.61.tar.gz"</span>
</code></pre>
<p>To pull these dependencies into the project, run <strong>wit-deps</strong> from the project’s root:</p>
<p><code>wit-deps update</code></p>
<p>Next, we create a <code>main.go</code> file and start implementing our CLI application:</p>
<p><code>touch main.go</code></p>
<p>Now that we have the basic project layout and dependencies downloaded, we can move on to implementing our CLI.</p>
<h3 id="heading-cli-application">CLI Application</h3>
<p>Our CLI is responsible for reading a user’s message from STDIN and displaying the agent’s response.</p>
<p>First, let’s start by creating the necessary objects using Hayride’s <a target="_blank" href="https://github.com/hayride-dev/bindings">bindings repository</a>.</p>
<p>In the <code>main.go</code> file, add the following lines of code:</p>
<pre><code class="lang-go"><span class="hljs-keyword">package</span> main

<span class="hljs-keyword">import</span> (
    <span class="hljs-string">"bufio"</span>
    <span class="hljs-string">"fmt"</span>
    <span class="hljs-string">"log"</span>
    <span class="hljs-string">"os"</span>
    <span class="hljs-string">"strings"</span>

    <span class="hljs-string">"github.com/hayride-dev/bindings/go/hayride/ai/agents"</span>
    <span class="hljs-string">"github.com/hayride-dev/bindings/go/hayride/ai/ctx"</span>
    <span class="hljs-string">"github.com/hayride-dev/bindings/go/hayride/ai/graph"</span>
    <span class="hljs-string">"github.com/hayride-dev/bindings/go/hayride/ai/models"</span>
    <span class="hljs-string">"github.com/hayride-dev/bindings/go/hayride/ai/models/repository"</span>
    <span class="hljs-string">"github.com/hayride-dev/bindings/go/hayride/ai/runner"</span>
    <span class="hljs-string">"github.com/hayride-dev/bindings/go/hayride/mcp/tools"</span>
    <span class="hljs-string">"github.com/hayride-dev/bindings/go/hayride/types"</span>
    <span class="hljs-string">"github.com/hayride-dev/bindings/go/wasi/cli"</span>
    <span class="hljs-string">"go.bytecodealliance.org/cm"</span>
)

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">main</span><span class="hljs-params">()</span></span> {
    repo := repository.New()
    path, err := repo.DownloadModel(<span class="hljs-string">"bartowski/Meta-Llama-3.1-8B-Instruct-GGUF/Meta-Llama-3.1-8B-Instruct-Q5_K_M.gguf"</span>)
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
        log.Fatal(<span class="hljs-string">"failed to download model:"</span>, err)
    }

    <span class="hljs-comment">// Initialize the context, tools, and model format</span>
    ctx, err := ctx.New()
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
        log.Fatal(<span class="hljs-string">"failed to create context:"</span>, err)
    }

    tools, err := tools.New()
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
        log.Fatal(<span class="hljs-string">"failed to create tools:"</span>, err)
    }

    format, err := models.New()
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
        log.Fatal(<span class="hljs-string">"failed to create model format:"</span>, err)
    }

    <span class="hljs-comment">// host provides a graph stream</span>
    inferenceStream, err := graph.LoadByName(path)
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
        log.Fatal(<span class="hljs-string">"failed to load graph:"</span>, err)
    }

    graphExecutionCtxStream, err := inferenceStream.InitExecutionContextStream()
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
        log.Fatal(<span class="hljs-string">"failed to initialize graph execution context stream:"</span>, err)
    }

    a, err := agents.New(
        format, graphExecutionCtxStream,
        agents.WithName(<span class="hljs-string">"Helpful Agent"</span>),
        agents.WithInstruction(<span class="hljs-string">"You are a helpful assistant. Answer the user's questions to the best of your ability."</span>),
        agents.WithContext(ctx),
        agents.WithTools(tools),
    )
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
        log.Fatal(<span class="hljs-string">"failed to create agent:"</span>, err)
    }

    <span class="hljs-comment">// create the runner; we use it below to invoke the agent</span>
    runner := runner.New()
}
</code></pre>
<p>This code simply creates the various objects that our runner and agent require to execute:</p>
<ul>
<li><p><strong>Repository:</strong> The repository package provides the ability to download models from a remote repository. Hayride’s host environment provides a Hugging Face implementation for model repositories.</p>
</li>
<li><p><strong>Context</strong>: The context object is a message store for the agent. The agent determines when to store context and when to pull past messages. We’re using Hayride’s in-memory context store for this example.</p>
</li>
<li><p><strong>Tools</strong>: The tools object is used to expose callable tools to the agent. Since our agent doesn’t require tools, we’ll attach an empty tools component.</p>
</li>
<li><p><strong>Format</strong>: The format object is used to encode the user’s message before sending it to the LLM. We also use the format object to decode the response from the LLM. Each model typically requires some form of custom encoding or decoding.</p>
</li>
<li><p><strong>GraphExecutionCtxStream</strong>: The GraphExecutionCtxStream provides access to our host environment and the LLM loaded. This is an extension of <a target="_blank" href="https://github.com/WebAssembly/wasi-nn/releases">wasi-nn</a> to allow for streaming responses.</p>
</li>
</ul>
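<p>To make the format object’s role concrete, here is a minimal, self-contained sketch of encoding a chat conversation into a Llama 3.1-style prompt template. This is an illustrative approximation, not Hayride’s actual format component; the <code>Message</code> type here is a simplified stand-in for the richer message type the bindings define:</p>
<pre><code class="lang-go">package main

import (
    "fmt"
    "strings"
)

// Message is a simplified stand-in for the message type an agent exchanges
// with a model; the real type carries richer content variants.
type Message struct {
    Role    string // e.g. "system", "user", "assistant"
    Content string
}

// encodeLlama3 renders messages into the Llama 3.1 chat template: each turn
// is wrapped in header tokens and terminated with an end-of-turn token.
func encodeLlama3(msgs []Message) string {
    var b strings.Builder
    b.WriteString("&lt;|begin_of_text|&gt;")
    for _, m := range msgs {
        fmt.Fprintf(&amp;b, "&lt;|start_header_id|&gt;%s&lt;|end_header_id|&gt;\n\n%s&lt;|eot_id|&gt;", m.Role, m.Content)
    }
    // Leave the assistant header open so the model generates the reply next.
    b.WriteString("&lt;|start_header_id|&gt;assistant&lt;|end_header_id|&gt;\n\n")
    return b.String()
}

func main() {
    prompt := encodeLlama3([]Message{
        {Role: "system", Content: "You are a helpful assistant."},
        {Role: "user", Content: "Hello!"},
    })
    fmt.Println(prompt)
}
</code></pre>
<p>Decoding works in reverse: the format object strips the template tokens from the raw model output before the response reaches the agent’s context.</p>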
<p>Next, we add the code to read from <strong>STDIN</strong> and create a <strong>STDOUT</strong> writer.</p>
<p>Since we are working with WebAssembly, we leverage WASI to pipe the terminal’s <strong>STDIN/STDOUT</strong> into our application.</p>
<p>While TinyGo supports wasip2, a few limitations come up when composing multiple components. One of them is that, when using the standard library, we cannot access the Wasm resource the host runtime provisions behind an <code>io.Writer</code>. In short, this means we are unable to pass that resource on to another component that needs it.</p>
<p>To avoid this limitation, we have implemented a few WASI helpers in the <a target="_blank" href="https://github.com/hayride-dev/bindings/tree/main/go/wasi">bindings repository</a>. The main helper to leverage is our implementation of the <a target="_blank" href="https://github.com/WebAssembly/wasi-cli/blob/main/wit/stdio.wit"><strong>wasi-cli</strong></a> interface.</p>
<p>Using our bindings, we can create an <code>io.Writer</code> that can be converted into a WASI output stream and passed between components; in our case, we pass the writer created in our CLI application to an AI runner:</p>
<pre><code class="lang-go">writer := cli.GetStdout(<span class="hljs-literal">true</span>)
reader := bufio.NewReader(os.Stdin)
</code></pre>
<p>Lastly, we add a basic loop that allows the user to type a prompt, send the prompt to the agent using our runner, and display the result:</p>
<pre><code class="lang-go">fmt.Println(<span class="hljs-string">"What can I help with?"</span>)
<span class="hljs-keyword">for</span> {
    input, _ := reader.ReadString(<span class="hljs-string">'\n'</span>)
    prompt := strings.TrimSpace(input)
    <span class="hljs-keyword">if</span> strings.ToLower(prompt) == <span class="hljs-string">"exit"</span> {
        fmt.Println(<span class="hljs-string">"Goodbye!"</span>)
        <span class="hljs-keyword">break</span>
    }

    msg := types.Message{
        Role: types.RoleUser,
        Content: cm.ToList([]types.MessageContent{
            types.NewMessageContent(types.Text(input)),
        }),
    }

    err := runner.InvokeStream(msg, writer, a)
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
        fmt.Println(<span class="hljs-string">"error invoking agent:"</span>, err)
        os.Exit(<span class="hljs-number">1</span>)
    }

    fmt.Println(<span class="hljs-string">"\nWhat else can I help with? (type 'exit' to quit)"</span>)
}
</code></pre>
<p>The runner’s <strong>InvokeStream</strong> function is called with the user’s prompt, an output stream, and an agent. The result of the agent is automatically written back to the user. We simply invoke our agent in a loop with the message the user has sent.</p>
<p>There are limitations in WebAssembly’s async capabilities that require us to pass the writer forward to our component so that it can start writing the result as soon as possible. However, async functions are under discussion for wasip3. More information can be found on the <a target="_blank" href="https://wasi.dev/roadmap">WASI roadmap</a>.</p>
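<p>The writer-passing pattern itself can be illustrated without any Wasm machinery. A producer that writes each token to an <code>io.Writer</code> as it arrives lets the caller see output incrementally, whereas returning the finished string would delay everything until completion. In this sketch, the token slice is a hypothetical stand-in for chunks arriving from the model:</p>
<pre><code class="lang-go">package main

import (
    "bytes"
    "fmt"
    "io"
    "os"
)

// streamTokens plays the role of a streaming invoke: instead of accumulating
// the full response and returning it at the end, it writes each token to w as
// soon as the token is available, so the caller sees output incrementally.
func streamTokens(tokens []string, w io.Writer) error {
    for _, tok := range tokens {
        if _, err := io.WriteString(w, tok); err != nil {
            return err
        }
    }
    return nil
}

func main() {
    // tokens stands in for chunks arriving from the model.
    tokens := []string{"Hello", ", ", "world", "!\n"}

    // In the real CLI, the writer is the WASI stdout stream handed to the runner.
    if err := streamTokens(tokens, os.Stdout); err != nil {
        fmt.Fprintln(os.Stderr, "stream error:", err)
    }

    // Any io.Writer works; a buffer captures the same stream in memory.
    var buf bytes.Buffer
    _ = streamTokens(tokens, &amp;buf)
    fmt.Print(buf.String())
}
</code></pre>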
<p>The full code looks like this:</p>
<pre><code class="lang-go"><span class="hljs-keyword">package</span> main

<span class="hljs-keyword">import</span> (
    <span class="hljs-string">"bufio"</span>
    <span class="hljs-string">"fmt"</span>
    <span class="hljs-string">"log"</span>
    <span class="hljs-string">"os"</span>
    <span class="hljs-string">"strings"</span>

    <span class="hljs-string">"github.com/hayride-dev/bindings/go/hayride/ai/agents"</span>
    <span class="hljs-string">"github.com/hayride-dev/bindings/go/hayride/ai/ctx"</span>
    <span class="hljs-string">"github.com/hayride-dev/bindings/go/hayride/ai/graph"</span>
    <span class="hljs-string">"github.com/hayride-dev/bindings/go/hayride/ai/models"</span>
    <span class="hljs-string">"github.com/hayride-dev/bindings/go/hayride/ai/models/repository"</span>
    <span class="hljs-string">"github.com/hayride-dev/bindings/go/hayride/ai/runner"</span>
    <span class="hljs-string">"github.com/hayride-dev/bindings/go/hayride/mcp/tools"</span>
    <span class="hljs-string">"github.com/hayride-dev/bindings/go/hayride/types"</span>
    <span class="hljs-string">"github.com/hayride-dev/bindings/go/wasi/cli"</span>
    <span class="hljs-string">"go.bytecodealliance.org/cm"</span>
)

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">main</span><span class="hljs-params">()</span></span> {
    repo := repository.New()
    path, err := repo.DownloadModel(<span class="hljs-string">"bartowski/Meta-Llama-3.1-8B-Instruct-GGUF/Meta-Llama-3.1-8B-Instruct-Q5_K_M.gguf"</span>)
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
        log.Fatal(<span class="hljs-string">"failed to download model:"</span>, err)
    }

    <span class="hljs-comment">// Initialize the context, tools, and model format</span>
    ctx, err := ctx.New()
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
        log.Fatal(<span class="hljs-string">"failed to create context:"</span>, err)
    }

    tools, err := tools.New()
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
        log.Fatal(<span class="hljs-string">"failed to create tools:"</span>, err)
    }

    format, err := models.New()
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
        log.Fatal(<span class="hljs-string">"failed to create model format:"</span>, err)
    }

    <span class="hljs-comment">// host provides a graph stream</span>
    inferenceStream, err := graph.LoadByName(path)
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
        log.Fatal(<span class="hljs-string">"failed to load graph:"</span>, err)
    }

    graphExecutionCtxStream, err := inferenceStream.InitExecutionContextStream()
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
        log.Fatal(<span class="hljs-string">"failed to initialize graph execution context stream:"</span>, err)
    }

    a, err := agents.New(
        format, graphExecutionCtxStream,
        agents.WithName(<span class="hljs-string">"Helpful Agent"</span>),
        agents.WithInstruction(<span class="hljs-string">"You are a helpful assistant. Answer the user's questions to the best of your ability."</span>),
        agents.WithContext(ctx),
        agents.WithTools(tools),
    )
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
        log.Fatal(<span class="hljs-string">"failed to create agent:"</span>, err)
    }

    runner := runner.New()

    writer := cli.GetStdout(<span class="hljs-literal">true</span>)
    reader := bufio.NewReader(os.Stdin)

    fmt.Println(<span class="hljs-string">"What can I help with?"</span>)
    <span class="hljs-keyword">for</span> {
        input, _ := reader.ReadString(<span class="hljs-string">'\n'</span>)
        prompt := strings.TrimSpace(input)
        <span class="hljs-keyword">if</span> strings.ToLower(prompt) == <span class="hljs-string">"exit"</span> {
            fmt.Println(<span class="hljs-string">"Goodbye!"</span>)
            <span class="hljs-keyword">break</span>
        }

        msg := types.Message{
            Role: types.RoleUser,
            Content: cm.ToList([]types.MessageContent{
                types.NewMessageContent(types.Text(input)),
            }),
        }

        err := runner.InvokeStream(msg, writer, a)
        <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
            fmt.Println(<span class="hljs-string">"error invoking agent:"</span>, err)
            os.Exit(<span class="hljs-number">1</span>)
        }

        fmt.Println(<span class="hljs-string">"\nWhat else can I help with? (type 'exit' to quit)"</span>)
    }
}
</code></pre>
<p>(<a target="_blank" href="https://github.com/hayride-dev/morphs/blob/main/components/examples/agents/cli.go">https://github.com/hayride-dev/morphs/blob/main/components/examples/agents/cli.go</a>)</p>
<p>All that’s left is to build and deploy our agent onto Hayride!</p>
<p>We’ll compile our application, compose it with Hayride’s existing morphs, and deploy our composed morph to Hayride.</p>
<h3 id="heading-build-composition-and-deployment">Build, Composition, and Deployment</h3>
<p>To compose our CLI with the existing Wasm components supplied by Hayride, we use <strong>WAC</strong>, a tool for composing WebAssembly components. The source code for these components can be found in our <a target="_blank" href="https://github.com/hayride-dev/morphs/tree/main/components">morphs repository</a>.</p>
<p>The full language guide for WAC can be found <a target="_blank" href="https://github.com/bytecodealliance/wac/blob/main/LANGUAGE.md">here</a>.</p>
<p>We start by creating a <code>cli.wac</code> with the following content:</p>
<pre><code class="lang-bash">package hayride:example;

<span class="hljs-built_in">let</span> context = new hayride:inmemory@0.0.1 {...}; 
<span class="hljs-built_in">let</span> llama = new hayride:llama31@0.0.1 {...};
<span class="hljs-built_in">let</span> tools = new hayride:default-tools@0.0.1 {...};

<span class="hljs-built_in">let</span> agent = new hayride:default-agent@0.0.1 {
  context: context.context,
  model: llama.model,
  tools: tools.tools,
  ...
};

<span class="hljs-built_in">let</span> runner = new hayride:default-runner@0.0.1 {
  agents: agent.agents,
  ...
};

<span class="hljs-built_in">let</span> cli = new hayride:cli@0.0.1 {
  context: context.context,
  model: llama.model,
  tools: tools.tools,
  agents: agent.agents,
  runner: runner.runner,
  ...
};

<span class="hljs-built_in">export</span> cli...;
</code></pre>
<p>This file is responsible for composing the Wasm components that satisfy the interfaces our runner and agent expect.</p>
<p>In the above file, we are using the following Hayride Morphs:</p>
<ul>
<li><p>hayride:inmemory@0.0.1</p>
</li>
<li><p>hayride:llama31@0.0.1</p>
</li>
<li><p>hayride:default-tools@0.0.1</p>
</li>
<li><p>hayride:default-agent@0.0.1</p>
</li>
<li><p>hayride:default-runner@0.0.1</p>
</li>
</ul>
<p>Using these components, we can compose our CLI. The final result is a single Wasm module that can be deployed on Hayride.</p>
<p>Hayride has built-in support for WAC files, and we can execute our composition with the following command:</p>
<p><code>hayride wac compose --path ./cli.wac --out ./composed-cli-agent.wasm</code></p>
<p>Once we have the <code>composed-cli-agent.wasm</code> file, we can register it with Hayride. This makes the morph available for future composition and direct execution.</p>
<p><code>hayride register --bin ./composed-cli-agent.wasm --package hayride:composed-cli-agent@0.0.1</code></p>
<p>All that’s left is to execute our morph:<br /><code>hayride cast --package hayride:composed-cli-agent@0.0.1 -it</code></p>
<p>This command launches our CLI:</p>
<p><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXcnunFoEQ-rmBxnonyKtdkK2dhB4ZIw_VnxxzpGwFRQVqoGet5Yi9Xxt9JnC1BxmYJ8cVUklDceFXLq8ELxc7zDErb1Ft3T_FbJyXzwz1t9EQa3L09z13qc5pdApF42VzqEFVV-?key=HJZoXeiqMu56XW0ZbuwWHuiK" alt /></p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>In this post, we have demonstrated how to build a CLI application using Hayride’s existing AI morphs. Using WebAssembly’s Component model and various community tools, we composed multiple components together to build and deploy our CLI application on Hayride.</p>
<p>In our next post, we will delve into the Hayride Agent and Runner, exploring how each of these components works.</p>
<p>To stay informed about future developments, follow us on <a target="_blank" href="https://x.com/HayrideDev">X</a> and <a target="_blank" href="https://github.com/hayride-dev">GitHub</a>.</p>
]]></content:encoded></item><item><title><![CDATA[Sandboxing AI]]></title><description><![CDATA[Overview
Artificial intelligence (AI) is not just transforming technology—it’s reshaping how the world builds, thinks, and operates. As large language models (LLMs) become more capable and accessible, we’re seeing a profound shift: Developers can now...]]></description><link>https://blog.hayride.dev/sandboxing-ai</link><guid isPermaLink="true">https://blog.hayride.dev/sandboxing-ai</guid><category><![CDATA[wasm]]></category><category><![CDATA[AI]]></category><category><![CDATA[#ai-tools]]></category><category><![CDATA[wasi]]></category><category><![CDATA[llm]]></category><category><![CDATA[mcp]]></category><dc:creator><![CDATA[Ethan Lewis]]></dc:creator><pubDate>Mon, 14 Jul 2025 15:02:48 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1752501595661/230fa878-20ac-46c3-9006-e0b02c0b4ebc.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-overview">Overview</h1>
<p>Artificial intelligence (AI) is not just transforming technology—it’s reshaping how the world builds, thinks, and operates. As large language models (LLMs) become more capable and accessible, we’re seeing a profound shift: Developers can now create intelligent, dynamic systems that were previously unimaginable. Barriers are dropping. Innovation is accelerating. The edge of what’s possible is being redrawn daily.</p>
<p>At the heart of this revolution lies software. While AI models can be viewed as the engine, software is the infrastructure enabling them to connect, execute, and deliver value.</p>
<p>As teams race to integrate LLMs into real-world systems, one trend is increasingly clear: building, testing, deploying, and securing AI-driven applications demands a new architecture pattern.</p>
<p>Many of today’s applications weren’t built with AI-native execution in mind. Integrations are often brittle, one-off, and tightly coupled to specific models or runtimes. Furthermore, once AI enters the loop, traditional workflows give way to nondeterministic paths, unpredictable behaviors, and execution contexts that defy conventional security assumptions.</p>
<p>It’s a thrilling time to be a developer. A single prompt can unlock functionality that once took months to build. Rigid APIs dissolve. Interfaces fade. However, with this power comes uncertainty. When AI is part of the execution path, much of the system is no longer deterministic—it’s probabilistic. The black box of inference becomes a black hole of potential outcomes. Even when AI responses are predictable, the way users interact with your application and its underlying data is changing drastically.</p>
<p>To address this, developers have turned to tool calling and agent frameworks—ways to let LLMs interact with real systems through structured protocols. These efforts are promising, but today’s approaches often sacrifice security, portability, and interoperability for speed. While LLMs are generating more code than ever, few systems are built to execute that code safely, consistently, or at scale.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1752501883580/eff70f68-fb86-42d2-8977-d43f9737fd16.png" alt class="image--center mx-auto" /></p>
<p>Today, we’re introducing <strong>Hayride</strong>, our approach for standardizing AI agent execution using WebAssembly.</p>
<p>At Hayride, we believe the next era of AI development hinges on how we sandbox AI systems. This means rethinking the infrastructure that powers agentic systems and establishing trusted runtimes that are secure, composable, and built for the multimodal future.</p>
<p>This isn’t just about running code. It’s about establishing an execution layer for AI—one that’s open, safe, and built to scale.</p>
<p>In this post, we’ll walk through the limitations of today’s frameworks, examine emerging standards such as model context protocol (MCP), smolagents, and CodeAct, and share how WebAssembly and various WebAssembly proposals unlock a path to portable, secure, and interoperable agent execution.</p>
<p>Let’s dig in!</p>
<h2 id="heading-protocolsframeworks">Protocols/Frameworks</h2>
<p>The pace of innovation around AI agents is accelerating. Across the ecosystem, we’re seeing three dominant interaction patterns emerge:</p>
<ol>
<li><p>LLMs trained directly to solve end-to-end tasks</p>
</li>
<li><p>LLMs selecting from a set of tools and passing arguments for execution</p>
</li>
<li><p>LLMs generating code that must be executed in a secure runtime</p>
</li>
</ol>
<p>Despite these advances, one thing remains constant: <strong>Language models do not execute anything themselves</strong>. Execution still requires a trusted environment—be it a local runtime, server, or secure sandbox.</p>
<p>To bridge the gap between generative reasoning and real-world action, developers have introduced protocols that allow LLMs to <strong>invoke structured functions</strong>, often referred to as <em>tool calling</em>. These protocols represent a critical layer in AI architecture: the interface between intelligence and execution.</p>
<p>In this section, we’ll explore several influential and emerging open frameworks defining the future of agentic AI, including:</p>
<ul>
<li><p><strong>Meta’s Llama 3.1+ Tool Calling</strong>—prompt-based mechanisms that enable code interpretation and limited tool invocation</p>
</li>
<li><p><strong>Anthropic’s Model Context Protocol (MCP)</strong>—a structured, JSON-RPC-based interface between models and tools</p>
</li>
<li><p><strong>CodeAct</strong>—a research-driven extension of ReAct that emphasizes direct code generation and execution</p>
</li>
</ul>
<p>While other agent frameworks like Google’s Agents-to-Agents (A2A) aim to facilitate interagent messaging and coordination, our priority is the execution layer, where LLMs interact with structured tools. In a future post, we’ll unpack how A2A fits within the Hayride runtime.</p>
<h3 id="heading-meta-llama-31-code-interpretation-and-tool-use"><strong>Meta Llama 3.1+: Code Interpretation and Tool Use</strong></h3>
<p>Meta’s latest LLM, <strong>Llama 3.3</strong>, extends features introduced in <strong>Llama 3.1</strong>, offering code interpretation and various forms of tool use. While not a formal protocol, Meta uses <strong>special prompt tokens</strong> to guide behavior such as code execution. For example, setting <code>Environment: ipython</code> in the system prompt enables the model to emit code responses intended for execution in a Jupyter-style environment.</p>
<pre><code class="lang-plaintext">&lt;|begin_of_text|&gt;&lt;|start_header_id|&gt;system&lt;|end_header_id|&gt;
Environment: ipython&lt;|eot_id|&gt;&lt;|start_header_id|&gt;user&lt;|end_header_id|&gt;

Write code to check if a number is prime...

This causes the model to return code like:

def is_prime(n):

    ...

print(is_prime(7))  # True
</code></pre>
<p>However, <strong>Meta’s models don’t execute the code themselves</strong>—this must be handled by external infrastructure such as <a target="_blank" href="https://github.com/meta-llama/llama-stack-apps">llama-stack-apps</a> or IPython runtimes. While convenient, this execution model lacks isolation and security by default, requiring additional sandboxing layers to run arbitrary code safely.</p>
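<p>To make the gap concrete, the sketch below (our illustration, not part of Meta’s stack) runs model-emitted Python in a separate subprocess with a timeout. A subprocess caps runaway execution time but does not restrict filesystem or network access—exactly why a stronger sandbox is needed underneath.</p>
<pre><code class="lang-python">import subprocess
import sys
import tempfile

def run_untrusted(code: str, timeout: float = 5.0) -> str:
    """Run model-emitted Python in a separate process with a timeout.

    A subprocess is only a minimal isolation layer: it bounds execution
    time, not what the code can touch. Real deployments need a proper
    sandbox (container, VM, or Wasm runtime).
    """
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    result = subprocess.run(
        [sys.executable, path],
        capture_output=True, text=True, timeout=timeout,
    )
    return result.stdout

print(run_untrusted("print(2 + 2)"))  # prints "4"
</code></pre>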
<p>In addition to code interpretation, Llama supports three tool-calling modes:</p>
<ol>
<li><p><strong>Built-In Tools</strong> (e.g., Brave search, Wolfram Alpha)</p>
</li>
<li><p><strong>JSON-Based Tool Calling</strong></p>
</li>
<li><p><strong>User-Defined Custom Tools</strong> via structured prompt instructions</p>
</li>
</ol>
<h4 id="heading-whats-missing"><strong>What’s Missing?</strong></h4>
<p>Llama’s capabilities are embedded deeply in model weights and token streams, creating a brittle integration surface. There’s <strong>no shared protocol or interface</strong> for tool execution across runtimes or models. If you switch from Llama to another model, your tool infrastructure often needs to change as well.</p>
<p>Meta’s system is powerful but lacks <strong>portability, extensibility,</strong> and <strong>language-agnostic standards</strong> for integrating tools or ensuring secure execution at scale.</p>
<h3 id="heading-anthropics-model-context-protocol-mcp"><strong>Anthropic’s Model Context Protocol (MCP)</strong></h3>
<p><strong>Anthropic’s MCP</strong> (Model Context Protocol) is an emerging open standard designed to bring <strong>structured, contextual, and secure</strong> interaction between LLMs and their toolchains.</p>
<p>At its core, MCP is a <strong>JSON-RPC 2.0-based protocol</strong> for bidirectional communication between:</p>
<ul>
<li><p><strong>Hosts</strong>: Apps running the LLM</p>
</li>
<li><p><strong>Clients</strong>: Tool connectors</p>
</li>
<li><p><strong>Servers</strong>: Services that expose tools, resources, and prompts to the model</p>
</li>
</ul>
<p>The protocol formalizes several interaction primitives:</p>
<ul>
<li><p><strong>Prompts</strong>: Customizable templates to guide model output</p>
</li>
<li><p><strong>Resources</strong>: Structured external data for richer context</p>
</li>
<li><p><strong>Tools</strong>: Executable functions that the model can call dynamically</p>
</li>
</ul>
<p>Anthropic’s MCP emphasizes <strong>authentication, validation,</strong> and <strong>portability</strong>. It supports both local execution via STDIO and remote execution via server-sent events (SSE), with transport layer security (TLS) and access control as recommended practices.</p>
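<p>To make the wire format concrete, a single MCP tool invocation is an ordinary JSON-RPC 2.0 request whose method is <code>tools/call</code>. The tool name and arguments below are invented for illustration:</p>
<pre><code class="lang-python">import json

# An MCP tool invocation: a JSON-RPC 2.0 request with the "tools/call"
# method. "get_weather" and its arguments are hypothetical.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_weather",
        "arguments": {"city": "Boston"},
    },
}

wire = json.dumps(request)
print(wire)
</code></pre>
<p>The server executes the named tool and replies with a JSON-RPC result carrying the tool’s output—this round trip is the "middleware" overhead discussed below.</p>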
<p>Projects like Docker’s MCP integration and <a target="_blank" href="https://mcp.run">mcp.run</a> show how containers and WebAssembly plugins can be packaged and exposed via MCP-compatible servers. These projects are pushing AI forward in a meaningful way and are fantastic examples of sandboxed environments.</p>
<h4 id="heading-whats-missing-1"><strong>What’s Missing?</strong></h4>
<p>MCP excels at <strong>interoperability and safety</strong>, but often introduces latency and complexity due to its middleware nature. The need to round-trip tool calls over JSON protocols can increase the number of actions required for an agent to solve a task, especially compared to in-process code generation and execution.</p>
<p>This overhead is highlighted in recent work like <a target="_blank" href="https://arxiv.org/abs/2402.01030"><strong>Executable Code Actions Elicit Better LLM Agents</strong> (CodeAct)</a>.</p>
<h3 id="heading-codeact-direct-code-generation-and-execution"><strong>CodeAct: Direct Code Generation and Execution</strong></h3>
<p><strong>CodeAct</strong> is an extension of the ReAct pattern that allows LLMs to write and execute code inline during reasoning. Instead of describing tools via JSON or expecting the model to reason through text, CodeAct gives the LLM full access to <strong>generate, execute,</strong> and <strong>revise Python code</strong> as part of the task-solving loop.</p>
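<p>The core of the pattern is a small loop: execute the model’s proposed code, capture stdout or the traceback, and feed that observation back so the model can revise its next action. A minimal sketch of one step (using bare <code>exec</code>, which provides no isolation, purely to show the control flow):</p>
<pre><code class="lang-python">import contextlib
import io
import traceback

def codeact_step(code: str) -> str:
    """Execute one model-proposed code action and return the observation.

    In CodeAct, the observation (stdout or a traceback) is appended to the
    conversation so the model can revise its next action. `exec` stands in
    for a real sandbox here.
    """
    buf = io.StringIO()
    try:
        with contextlib.redirect_stdout(buf):
            exec(code, {})
        return buf.getvalue()
    except Exception:
        return traceback.format_exc()

# A failing attempt yields a traceback the model can react to;
# a successful one yields its printed output.
print(codeact_step("print(undefined_var)"))
print(codeact_step("print(sum(range(10)))"))
</code></pre>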
<p>Compared to JSON-based tool calling, CodeAct achieves:</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td><strong>Feature</strong></td><td><strong>CodeAct for LLM Action</strong></td><td><strong>JSON or Text for LLM Action</strong></td></tr>
</thead>
<tbody>
<tr>
<td><strong>Availability of Data</strong></td><td>A large quantity of code is available for pre-training</td><td>Data curation is required for a particular format</td></tr>
<tr>
<td><strong>Complex Operation</strong></td><td>Natively supported via control and data flow</td><td>Requires careful engineering if feasible (e.g., define new tools to mimic if-statement)</td></tr>
<tr>
<td><strong>Availability of Tools</strong></td><td>Can directly use existing software packages</td><td>Requires human effort to curate tools from scratch or existing software</td></tr>
<tr>
<td><strong>Automated Feedback</strong></td><td>Feedback mechanism (e.g., traceback) is already implemented as an infrastructure for most programming languages</td><td>Requires human effort to provide feedback or reroute feedback from the underlying program</td></tr>
</tbody>
</table>
</div><p><em>(Source: <a target="_blank" href="https://arxiv.org/pdf/2402.01030">https://arxiv.org/pdf/2402.01030</a>, Table 1)</em></p>
<p>The source code and datasets are available on <a target="_blank" href="https://github.com/xingyaoww/code-act">GitHub</a>.</p>
<p>As shown in CodeAct’s benchmarks, this approach often reduces the number of steps needed to complete a task while improving task success rate.</p>
<h4 id="heading-whats-missing-2"><strong>What’s Missing?</strong></h4>
<p>While CodeAct increases <strong>performance and flexibility</strong>, it introduces <strong>significant security risks</strong>. Executing arbitrary generated code means trusting both the generation pipeline and its runtime environment.</p>
<p>There are no standardized sandboxing or language-agnostic methods to ensure that this process is secure by default.</p>
<h3 id="heading-hugging-face-smolagents-lightweight-code-agents"><strong>Hugging Face Smolagents: Lightweight Code Agents</strong></h3>
<p><strong>Smolagents</strong>, an open-source library from Hugging Face, implements CodeAct-like behavior in Python. It supports two execution modes:</p>
<ul>
<li><p><strong>ToolCallingAgent</strong>: Traditional structured tool invocation</p>
</li>
<li><p><strong>CodeAgent</strong>: Direct generation and execution of Python code</p>
</li>
</ul>
<p>Smolagents focuses on:</p>
<ul>
<li><p>Code consistency between LLM generation and execution</p>
</li>
<li><p>Seamless tool registration and usage</p>
</li>
<li><p>Sandboxing through platforms like <a target="_blank" href="https://e2b.dev">E2B.dev</a></p>
</li>
</ul>
<p>To define an agent, developers provide:</p>
<pre><code class="lang-python">from smolagents import CodeAgent, DuckDuckGoSearchTool

agent = CodeAgent(
    tools=[DuckDuckGoSearchTool()],
    model=LLMWrapper()  # placeholder for any smolagents-compatible model wrapper
)
</code></pre>
<p>The agent can write Python code, use external tools, and solve tasks by chaining logic and execution without predefined templates.</p>
<h4 id="heading-whats-missing-3"><strong>What’s Missing?</strong></h4>
<p>Smolagents excels at <strong>agile code agent composition</strong>, but it is currently <strong>Python-only</strong> and leaves runtime trust and security concerns to external systems (e.g., E2B or local VMs).</p>
<p>It's a compelling foundation for CodeAct-style agent systems, but <strong>not yet portable, secure,</strong> or <strong>language-independent</strong>—all key criteria for running agents at scale across diverse environments.</p>
<h2 id="heading-purpose-built-for-the-future-why-we-built-hayride"><strong>Purpose-Built for the Future: Why We Built Hayride</strong></h2>
<p>As we’ve explored, modern AI agents rely on a growing constellation of tool-calling frameworks—from JSON-based protocols like MCP to dynamic code-generation libraries like smolagents and CodeAct. Each of these systems brings powerful capabilities to language models, enabling them to interact with tools, generate code, and operate across local and remote environments. However, each also introduces difficult tradeoffs around <strong>security</strong>, <strong>portability</strong>, and <strong>interoperability</strong>.</p>
<p>With <strong>Hayride</strong>, we asked a foundational question:</p>
<p><strong><em>What if AI agent execution could be secure by design, portable by default, and interoperable across tools, languages, and runtimes—without sacrificing performance or control?</em></strong></p>
<p>To answer that, we turned to <strong>WebAssembly</strong>.</p>
<h2 id="heading-why-webassembly"><strong>Why WebAssembly?</strong></h2>
<p><strong>WebAssembly (Wasm)</strong> is a fast, safe, and language-agnostic binary format for executing code across platforms. Originally designed for web browsers, it’s now evolving rapidly into a <strong>general-purpose runtime for secure sandboxed execution</strong>—ideal for the demands of modern AI workloads.</p>
<p>What makes Wasm uniquely suited for agentic systems?</p>
<ul>
<li><p><strong>Secure by default</strong>: Sandboxed execution prevents privilege escalation and system-level access.</p>
</li>
<li><p><strong>Portable</strong>: Wasm modules run consistently across environments—from browsers to edge to cloud.</p>
</li>
<li><p><strong>Composable</strong>: Through the <a target="_blank" href="https://github.com/WebAssembly/component-model">WebAssembly Component Model</a>, modules written in different languages can interoperate seamlessly.</p>
</li>
<li><p><strong>Language-agnostic</strong>: Wasm supports Rust, TinyGo, Python, C/C++, JavaScript, and more.</p>
</li>
</ul>
<p>The power of Wasm is unlocked by <strong>WASI</strong> (WebAssembly System Interface), a standards-based suite of APIs that make Wasm suitable for real-world applications, from filesystem access to machine learning.</p>
<h2 id="heading-enter-wasi-02-and-the-component-model"><strong>Enter WASI 0.2 and the Component Model</strong></h2>
<p><strong>WASI 0.2</strong>, released in 2024, introduces the <strong>Component Model</strong>—a breakthrough architecture that makes WebAssembly truly modular and interoperable. Instead of monolithic binaries, developers can now build reusable <strong>components</strong> that import and export standardized interfaces.</p>
<p>These interfaces are defined using <strong>WIT (WebAssembly Interface Types)</strong>—a simple language for specifying data types and function signatures across languages and runtimes.</p>
<p>Example WIT definition:</p>
<pre><code class="lang-plaintext">package docs:adder@0.1.0;

interface add {
  add: func(a: u32, b: u32) -&gt; u32;
}

world adder {
  export add;
}
</code></pre>
<p>WIT doesn’t implement behavior; it defines contracts. This creates a <strong>shared surface area</strong> between independently developed components, much like MCP describes JSON-RPC tools, but closer to the compiled code and inherently cross-language.</p>
<h2 id="heading-standardizing-ai-inference-with-webassembly"><strong>Standardizing AI Inference with WebAssembly</strong></h2>
<p>The power of the Component Model extends beyond basic function calls. Projects like <a target="_blank" href="https://github.com/WebAssembly/wasi-nn">wasi-nn</a> are defining standardized interfaces for <strong>AI inference</strong>, allowing Wasm components to serve or consume ML models using backends like TensorFlow, ONNX, PyTorch, and Llama.cpp.</p>
<p>Here’s a brief look at the wasi-nn WIT interface:</p>
<ul>
<li><p><strong>tensor</strong>: Typed multidimensional data</p>
</li>
<li><p><strong>graph</strong>: A loaded ML model (e.g., ONNX, TensorFlow)</p>
</li>
<li><p><strong>inference</strong>: Compute APIs for executing models</p>
</li>
<li><p><strong>errors</strong>: Robust handling and diagnostics</p>
</li>
</ul>
<pre><code class="lang-plaintext">world ml {
  import tensor;
  import graph;
  import inference;
  import errors;
}
</code></pre>
<p>This enables any WASI 0.2-compatible language (Rust, TinyGo, Python, C++, etc.) to implement or consume AI workloads in a <strong>sandboxed, modular way</strong>, free from language bindings, OS dependencies, or container bloat.</p>
<h2 id="heading-hayride-the-execution-layer-for-agentic-ai"><strong>Hayride: The Execution Layer for Agentic AI</strong></h2>
<p>Building on these principles, <strong>Hayride</strong> is our proposed execution architecture for next-generation AI agents.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1752500630198/4ac0d561-cf23-40b2-85b8-bb59e139fa33.png" alt class="image--center mx-auto" /></p>
<p>Our thesis is simple: Secure execution is the missing layer of AI agent infrastructure.</p>
<p>Prompting and function calling are not enough. LLMs need a <strong>trusted, deterministic environment</strong> to execute actions, whether tool invocations, API requests, or AI-generated code.</p>
<p>With <strong>Hayride</strong>, we combine:</p>
<ul>
<li><p>🧱 <strong>WASI 0.2 components</strong> for defining, importing, and composing tools</p>
</li>
<li><p>🛡️ <strong>Sandboxed WebAssembly runtimes</strong> for secure execution of agent actions</p>
</li>
<li><p>🔁 <strong>Standard interfaces (via WIT)</strong> for pluggable, reusable functions</p>
</li>
<li><p>🔄 <strong>Language interoperability</strong> so agents can execute in Rust, Go, Python, and more</p>
</li>
<li><p>🔌 <strong>Extensibility</strong> through protocol adapters and LLM orchestration layers</p>
</li>
</ul>
<p>Much of Hayride builds on the existing WebAssembly ecosystem. Using WebAssembly Interface Types, we are releasing our AI Interfaces, which aim to help compose WebAssembly components for agentic workloads.</p>
<p>Using Hayride, agents can execute code compiled into a WebAssembly component. When an LLM instructs agents to execute code, the component is loaded and launched in a new WebAssembly sandbox.</p>
<p>These interface definitions can be found on our <a target="_blank" href="https://github.com/hayride-dev/coven/tree/main/ai">GitHub</a>. In future articles, we will deep dive into each of the AI interfaces.</p>
<h2 id="heading-a-new-standard-for-ai-execution"><strong>A New Standard for AI Execution</strong></h2>
<p>Hayride isn’t just a runtime. It’s a step toward a formal <strong>execution protocol for agentic AI</strong>—one where function calls, tool orchestration, and even model inference can be described, sandboxed, and deployed as modular Wasm components.</p>
<p>This unlocks:</p>
<ul>
<li><p>🔐 <strong>Security</strong> through isolated execution and memory safety</p>
</li>
<li><p>📦 <strong>Portability</strong> across cloud, edge, and local environments</p>
</li>
<li><p>🧩 <strong>Composability</strong> via reusable, language-agnostic toolchains</p>
</li>
<li><p>🚀 <strong>Performance</strong> through near-native execution speed</p>
</li>
<li><p>🤖 <strong>Agent readiness</strong> for LLM orchestration, simulation, and autonomy</p>
</li>
</ul>
<p><strong>Standards-Compatible, Extensible by Design</strong></p>
<p>Hayride is not a closed system. While the architecture is WebAssembly-first, it is protocol-agnostic. We are actively building <strong>adapters and bindings</strong> for:</p>
<ul>
<li><p><strong>Anthropic’s Model Context Protocol (MCP)</strong></p>
</li>
<li><p><strong>Agents2Agents (A2A)</strong> collaborative LLM frameworks</p>
</li>
</ul>
<p>This allows Hayride components to be invoked as part of an existing MCP flow, exposed via A2A, or called directly via language models that support structured tool calling.</p>
<h2 id="heading-call-to-action"><strong>Call to Action</strong></h2>
<p>At Hayride, we’re building a secure AI runtime designed for agentic systems—and we’re betting on WebAssembly. We’re actively exploring:</p>
<ul>
<li><p>Registry support for WASI components</p>
</li>
<li><p>WIT-based agent orchestration</p>
</li>
<li><p>LLM code generation alignment with the WebAssembly Component Model</p>
</li>
<li><p>Sandboxed inference and agent code execution</p>
</li>
</ul>
<p>If you’re building agentic AI, secure runtimes, or composable dev tools, we’d love to talk!</p>
<p>Check out our <a target="_blank" href="https://github.com/hayride-dev">GitHub</a> and <a target="_blank" href="https://docs.hayride.dev/">Developer Docs</a> to get started today!</p>
]]></content:encoded></item></channel></rss>