
Personal AI Chatbot: My Journey with OpenAI Agent Builder, GitHub, Vercel & Framer

Let me walk you through how I built my own personal chatbot using OpenAI’s Agent Builder.

For a long time, I wanted people to be able to “chat with me” on my website. Not in a generic support-bot way, but with something that actually understands my work, my projects, and how I think.


I imagined a little chat bubble in the corner of my portfolio. You click it and an AI version of me pops open, ready to answer:


  • What do you do?

  • What projects are you most proud of?

  • What side projects are you building right now?

  • What are your hobbies?


The twist: I’m not a backend engineer. I live mostly in design tools, not in dev consoles. Still, I wanted to learn this properly and build it myself.


This blog is the story of how I went from that idea to a working personal AI chatbot, step by step, using:


  • ChatGPT (to think and prototype)

  • OpenAI Agent Builder (for the actual brain)

  • GitHub + Vercel (for the repository and deployment)

  • Framer (for my personal site; you can use whatever website builder you normally work with here)

  • A small chunk of HTML + CSS + JS (for a floating chat widget)


I’ll walk you through the architecture, the key steps, the rough costs, and some of the frustrating-but-fun gotchas along the way.


Tech stack table

Layer | Tool / Platform | Role
Brain | OpenAI Agent Builder & workflow | Understands questions, answers as “me”
Data about me | My site, LinkedIn, notes, uploaded files (PDF) | Knowledge for the agent
Chat UI | OpenAI ChatKit starter app | Chat bubbles, input, streaming, etc.
Hosting for chat | Vercel (Free plan) | Deploys the chat app (Next.js)
Website | Framer | My public site & where the widget lives
Floating widget | A tiny HTML + CSS + JS snippet + iframe | Button + sliding chat window


Step 1 – Designing the AI “me” with ChatGPT

The first thing I did wasn’t code. I opened ChatGPT and worked on my persona:


  • What should the bot know?

  • How should it talk?

  • What shouldn’t it answer?


I ended up with something like this as the core instruction (system prompt / agent instructions):


You are an AI version of Shabbir Hossain, a Lead Product Designer
based in the Netherlands.

Your job is to answer questions about Shabbir’s:
- role and responsibilities,
- design process and philosophy,
- key projects and case studies,
- side projects (like AI tools and platforms),
- skills, tools, and career story.

Rules:
- Always speak in the first person (“I”, “me”) as if you are Shabbir.
- If you don’t know something, say so. Never invent personal facts.
- Do NOT talk about:
  - Shabbir’s personal life
  - salary or compensation
  - private family details
  - confidential internal company information
- It’s okay to share high-level context about his employer (public info),
  but not internal strategy, roadmaps, or unreleased features.
- If users push for private or sensitive info, politely decline and
  steer back to work, design, and projects.

When the user clicks a starter prompt like “Would you like me to
introduce myself first?”, treat it as:
“Please introduce yourself.” and give a concise first-person intro.
I refined this a few times while testing. This text later became the instructions inside my Agent node in Agent Builder.


Step 2 – Building the workflow in OpenAI Agent Builder

Next, I created the actual workflow (the logic that runs for each message) in OpenAI Agent Builder.

2.1 Creating the workflow


  1. Open the OpenAI platform and go to Agent Builder.

  2. Click New workflow.

  3. Add:

    • A Start node

    • An Agent node (I named it ProfileAgent)

    • An End node


Wire them like:

Start → ProfileAgent → End


2.2 Configuring the Agent node

Inside ProfileAgent:

  • Instructions: I pasted the persona text from Step 1.

  • Output format: set to Text / natural language.

  • ChatKit options (this was critical):

    • Display response in chat → ON
      (without this, ChatKit shows nothing even though the agent is working)

    • Write to conversation history → ON

    • (Optionally) “Include chat history” can be OFF if you want each response to focus only on the latest question.

For the “starter chip” problem (“Would you like me to introduce myself first?”), I added a special rule to the instructions:

If the user says "Would you like me to introduce myself first?" or a
very similar phrase, interpret that as "Please introduce yourself."
In that case, immediately give a first-person introduction of Shabbir,
without asking another question back.

So now that chip acts like a shortcut to “Introduce yourself”.


2.3 Adding knowledge about me

To make the agent actually know me, I added:

  • My personal website content

  • My LinkedIn “About” section and key job entries

  • A few short notes about big projects and side projects

You can either:

  • Upload them as files (PDF/Markdown/text), or

  • Copy-paste as “Knowledge” / “Documents” inside the agent configuration.

Short, focused documents work best — think small, curated bios and case study summaries rather than full raw Figma exports.


2.4 Publishing the workflow

When I was happy with the behavior:

  1. I clicked Publish (top-right).

  2. That gave me a workflow ID like wf_xxxxxxxxxx.

  3. I used that ID later in my chat app.

If Preview works in Agent Builder (you see good answers), you know the “brain” is fine and any issues later are in the UI / wiring.


Step 3 – Wiring it to a chat UI with ChatKit, GitHub & Vercel

Now I needed a front-end: a clean chat interface that talks to my workflow.

3.1 Using the ChatKit starter

OpenAI provides a ChatKit starter app (a Next.js project) that:

  • Shows a modern chat interface

  • Streams responses

  • Connects to workflows via environment variables

To get my own copy running, I:

  • Forked/cloned the starter repository into my own GitHub.

  • Linked that repo to Vercel.


3.2 Environment variables

In Vercel, under the project’s settings, I configured:

  • OPENAI_API_KEY → API key for the OpenAI project where my workflow lives

  • NEXT_PUBLIC_CHATKIT_WORKFLOW_ID → the wf_... ID from Agent Builder

Important detail: the workflow and the API key must both belong to the same OpenAI project; otherwise you might accidentally hit a different workflow.
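
As a sanity check, here is a minimal sketch of how the starter picks these values up server-side. The exact file and variable names in the real ChatKit starter may differ; this is only to show where the two values end up:

// Minimal sketch (not the literal starter code): read both values from the environment.
// NEXT_PUBLIC_* variables are exposed to the browser; the API key must stay server-side.
const workflowId = process.env.NEXT_PUBLIC_CHATKIT_WORKFLOW_ID;
const apiKey = process.env.OPENAI_API_KEY; // only ever used in server routes

if (!workflowId || !apiKey) {
  throw new Error(
    "Missing NEXT_PUBLIC_CHATKIT_WORKFLOW_ID or OPENAI_API_KEY – set them in Vercel project settings."
  );
}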


3.3 Fixing the weird “AI Age Inquiry” issue

At one point, my UI was showing strange text like:

  • “AI Identity Inquiry”

  • “AI Age Inquiry”

…instead of proper answers 😭🙈🙈

What was happening:

  • A classification / label was being returned instead of the agent response, or

  • The End node was returning JSON like { "output_text": "..." } and the UI was only showing part of it, or

  • ChatKit was only using the first field.

I fixed it by:

  • Making sure the End node either:

    • Returns just {{ProfileAgent.output_text}}, or

    • Returns a simple object and the UI extracts the correct field.

In the simpler case, I just removed the End node and let the Agent node’s “Display response in chat” handle everything for ChatKit, which is enough for many chat scenarios.
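
If you do keep a structured End node, a small helper on the UI side can normalize whatever comes back. This is only a sketch based on my { "output_text": "..." } shape, not an official ChatKit API:

// Sketch: normalize a workflow result that may be plain text or an object
// like { output_text: "..." } (the shape from my End node – adjust to yours).
type WorkflowResult = string | { output_text?: string; [key: string]: unknown };

function extractAnswer(result: WorkflowResult): string {
  if (typeof result === "string") return result;            // plain-text End node
  if (typeof result.output_text === "string") return result.output_text;
  return JSON.stringify(result);                            // fallback: show the raw payload while debugging
}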


3.4 Customizing the ChatKit configuration

In the repo there was a file like lib/config.ts. I used it to:

  • Change the greeting text.

  • Define starter chips.

  • Remove the attachment “+” button (I don’t need file uploads).

Example (simplified):

export const GREETING =
  "Hi, I’m Shabbir’s AI assistant. Ask me about my work, projects, or background.";

export const STARTER_PROMPTS = [
  {
    label: "Would you like me to introduce myself first?",
    prompt:
      "Please introduce yourself in the first person as Shabbir Hossain. Describe your role, what you work on, and a few key projects.",
    icon: "sparkles",
  },
  {
    label: "What do you do at Mendix?",
    prompt: "What do you do as a Lead Product Designer at Mendix?",
    icon: "user",
  },
  {
    label: "Side projects",
    prompt: "What side projects and AI tools are you working on?",
    icon: "rocket",
  },
];

export const PLACEHOLDER_INPUT =
  "Ask me anything about my work, projects, or experience…";

export const CHATKIT_OPTIONS = {
  composer: {
    placeholder: PLACEHOLDER_INPUT,
    // No attachments or tools:
    // attachments: undefined,
    // tools: undefined,
  },
  startScreen: {
    greeting: GREETING,
    prompts: STARTER_PROMPTS,
  },
};

After committing this to main, Vercel automatically redeployed my chat app.


Step 4 – Embedding the chat app in Framer (simple iframe)

Now I had a standalone chat app (at a Vercel URL like https://my-chat-app.vercel.app). Time to bring it into my actual portfolio.

The simplest version is:

  1. Add an Embed in Framer.

  2. Set it to Fixed, bottom-right.

  3. Paste an iframe:

<iframe
  src="https://my-chat-app.vercel.app"
  style="width: 100%; height: 650px; border: 0; border-radius: 16px; overflow: hidden;"
  loading="lazy"
  title="Chat with Shabbir"
></iframe>

That already works, but I wanted a floating button + small window experience.


Step 5 – Building the floating chat widget with HTML/CSS/JS

To get a “chat bubble” UX, I used a small chunk of HTML + CSS + JS inside a Framer Embed. The idea:

  • Only a round button is visible at first.

  • When you click it, a chat window appears above it (containing the iframe).

  • Click ✕ or the button again → it closes.

Here’s a simplified version of the widget I ended up with (you can tweak the styling):

<style>
  .sh-chat-root,
  .sh-chat-root * {
    box-sizing: border-box;
  }

  .sh-chat-root {
    width: 100%;
    height: 100%;
    font-family: system-ui, -apple-system, BlinkMacSystemFont, "SF Pro Text",
      "Segoe UI", sans-serif;
    display: flex;
    align-items: flex-end;
    justify-content: flex-end;
    pointer-events: none; /* only widget is clickable */
  }

  .sh-chat-widget {
    position: relative;
    display: flex;
    flex-direction: column;
    align-items: flex-end;
    gap: 10px;
    pointer-events: auto;
  }

  .sh-chat-window {
    width: min(420px, 96vw);
    height: 100%;
    max-height: 100%;
    background: #020617;
    border-radius: 18px;
    border: 1px solid rgba(148, 163, 184, 0.35);
    box-shadow:
      0 22px 60px rgba(15, 23, 42, 0.45),
      0 0 0 1px rgba(15, 23, 42, 0.35);
    overflow: hidden;
    transform-origin: bottom right;
    transform: translateY(10px) scale(0.97);
    opacity: 0;
    pointer-events: none;
    transition:
      opacity 0.18s ease-out,
      transform 0.18s ease-out;
    display: flex;
    flex-direction: column;
    backdrop-filter: blur(18px);
  }

  .sh-chat-window.sh-open {
    opacity: 1;
    transform: translateY(0) scale(1);
    pointer-events: auto;
  }

  .sh-chat-header {
    height: 48px;
    padding: 0 14px;
    display: flex;
    align-items: center;
    justify-content: space-between;
    background: radial-gradient(circle at top left, #111827 0, #020617 55%);
    border-bottom: 1px solid rgba(51, 65, 85, 0.9);
    color: #e5e7eb;
  }

  .sh-chat-header-left {
    display: flex;
    flex-direction: column;
    gap: 2px;
  }

  .sh-chat-title {
    font-weight: 600;
    font-size: 13px;
  }

  .sh-chat-subtitle {
    font-size: 11px;
    color: #9ca3af;
  }

  .sh-chat-close {
    border: none;
    background: transparent;
    color: #9ca3af;
    font-size: 18px;
    cursor: pointer;
    padding: 4px;
    border-radius: 999px;
  }

  .sh-chat-close:hover {
    background: rgba(148, 163, 184, 0.16);
    color: #e5e7eb;
  }

  .sh-chat-body {
    flex: 1;
    background: #020617;
  }

  .sh-chat-body iframe {
    width: 100%;
    height: 100%;
    border: 0;
    display: block;
  }

  .sh-chat-button {
    width: 56px;
    height: 56px;
    border-radius: 999px;
    border: 1px solid rgba(148, 163, 184, 0.35);
    background: radial-gradient(circle at 30% 0, #1f2937 0, #020617 65%);
    color: #e5e7eb;
    display: flex;
    align-items: center;
    justify-content: center;
    cursor: pointer;
    font-size: 22px;
    position: relative;
    box-shadow: 0 14px 40px rgba(15, 23, 42, 0.7);
  }

  .sh-chat-button-icon {
    transform: translateY(1px);
  }

  .sh-chat-badge {
    position: absolute;
    top: 7px;
    right: 7px;
    background: #22c55e;
    width: 8px;
    height: 8px;
    border-radius: 999px;
    box-shadow: 0 0 0 2px #020617;
  }

  @media (max-width: 640px) {
    .sh-chat-root {
      align-items: flex-end;
      justify-content: center;
    }

    .sh-chat-widget {
      width: 100%;
      align-items: center;
    }

    .sh-chat-window {
      width: min(100vw, 420px);
      height: 100%;
      max-height: 100%;
      border-radius: 16px;
    }
  }
</style>

<div class="sh-chat-root">
  <div class="sh-chat-widget">
    <div
      id="sh-chat-window"
      class="sh-chat-window"
      aria-label="Chat with Shabbir"
      role="dialog"
    >
      <div class="sh-chat-header">
        <div class="sh-chat-header-left">
          <span class="sh-chat-title">Chat with Shabbir</span>
          <span class="sh-chat-subtitle">Ask me about my work & projects</span>
        </div>
        <button
          id="sh-chat-close"
          class="sh-chat-close"
          aria-label="Close chat"
          type="button"
        >
          ×
        </button>
      </div>
      <div class="sh-chat-body">
        <iframe
          src="https://my-chat-app.vercel.app"
          title="Chat with Shabbir"
        ></iframe>
      </div>
    </div>

    <button
      id="sh-chat-toggle"
      class="sh-chat-button"
      aria-label="Open chat"
      type="button"
    >
      <span class="sh-chat-button-icon">💬</span>
      <span class="sh-chat-badge"></span>
    </button>
  </div>
</div>

<script>
  (function () {
    var chatWindow = document.getElementById("sh-chat-window");
    var toggleBtn = document.getElementById("sh-chat-toggle");
    var closeBtn = document.getElementById("sh-chat-close");
    var isOpen = false;

    function openChat() {
      isOpen = true;
      chatWindow.classList.add("sh-open");
    }

    function closeChat() {
      isOpen = false;
      chatWindow.classList.remove("sh-open");
    }

    toggleBtn.addEventListener("click", function () {
      if (isOpen) {
        closeChat();
      } else {
        openChat();
      }
    });

    closeBtn.addEventListener("click", closeChat);
  })();
</script>


In Framer:

  • I dropped this into an Embed.

  • Set the Embed to Fixed, bottom-right.

  • On desktop: give it a height like 650px.

  • On mobile: set the height to 100vh so the chat can use the full viewport height when open.


Step 6 – Costs: How much does this actually cost to run?

This was a big question for me too: “If people chat with this on my website, how much money am I burning?”

6.1 OpenAI costs

For a personal bot, I used GPT-4.1 mini — strong enough for good answers, but much cheaper than the big flagship models. According to OpenAI’s pricing docs, GPT-4.1 mini costs roughly:

  • $0.40 per 1M input tokens

  • $1.60 per 1M output tokens


Rough back-of-the-envelope:

  • Imagine a typical question+answer pair uses about:

    • 300 input tokens (your message + some history)

    • 300 output tokens (the answer)

  • That’s 600 tokens per exchange.


For 1,000 conversations like that:

  • Total input tokens ≈ 300,000 → 0.3M

    • Cost: 0.3 × $0.40 = $0.12

  • Total output tokens ≈ 300,000 → 0.3M

    • Cost: 0.3 × $1.60 = $0.48


So 1,000 full Q&A turns ≈ $0.60 in model costs.
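
If you want to play with these numbers yourself, the math fits in a tiny helper. The prices and per-exchange token counts are the rough assumptions from above, not guarantees:

// Rough cost estimate for GPT-4.1 mini at the prices quoted above.
const INPUT_PRICE_PER_M = 0.4;   // $ per 1M input tokens
const OUTPUT_PRICE_PER_M = 1.6;  // $ per 1M output tokens

function estimateCost(exchanges: number, inputTokens = 300, outputTokens = 300): number {
  const inputCost = ((exchanges * inputTokens) / 1_000_000) * INPUT_PRICE_PER_M;
  const outputCost = ((exchanges * outputTokens) / 1_000_000) * OUTPUT_PRICE_PER_M;
  return inputCost + outputCost;
}

console.log(estimateCost(1000)); // ≈ 0.60 dollars for 1,000 Q&A turns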


Also, OpenAI currently gives new users $5 in free credits that last 3 months, which is enough to power thousands of these small chats before you pay anything.


6.2 Vercel costs

I used the Vercel Hobby plan, which is Free forever, aimed at personal projects and small apps.

For my use case (a small chat app with light traffic), this is more than enough. If your traffic explodes, you might eventually consider Pro (around $20/month in 2025, with usage-based overages).

6.3 Framer costs

I’m using Framer for my personal site.

  • Framer offers a Free plan for non-commercial projects, hosted on a Framer domain and with a small “Made in Framer” label.

  • For a more polished setup (custom domain, more features), there are paid plans. A recent update lists a Basic-type plan around $10/month for small/personal projects.

So, roughly:

Piece | Typical plan for this use case | Monthly cost (approx.)
OpenAI API | GPT-4.1 mini, low personal use | $1–$5 (or $0 if on trial)
Vercel | Hobby plan | $0
Framer | Free or Basic/Personal plan | $0–$10


Realistically, for a portfolio-level chatbot with light traffic, your monthly cost is very likely in the few dollars range, mainly from OpenAI if you exceed the free credits.


Step 7 – Lessons learned & things I’d do differently next time


Displaying a response in chat is critical

In Agent Builder, if Display response in chat is off, your agent can be working perfectly but your UI will show… nothing. It took some debugging to realize that was the issue.


Schema vs plain text

When I experimented with structured outputs (JSON like { "output_text": "..." }), the UI sometimes showed weird labels (e.g., “AI Identity Inquiry”) instead of the real answer. For a simple profile bot, plain text output is much safer.


Starter prompts are just messages

Those little chips (“What can you do?”, “Introduce yourself”, etc.) are just shortcuts to user messages. The agent sees the prompt text exactly as if the user typed it. That’s why you either:

  • Make the prompt text very explicit, or

  • Handle that exact phrase in your instructions (“If user says X, do Y”).


Domain allow-lists matter

Since the chat app calls the OpenAI API and is embedded via iframe:

  • The domain where your site lives (e.g. Framer’s domain or your custom domain) must be added to your OpenAI domain allowlist, or requests can fail silently.


Start simple, then add polish

I got the basic end-to-end flow working first with a plain iframe. Only after that did I:

  • Remove attachments

  • Customize greeting + chips

  • Build the floating widget

  • Fine-tune mobile behavior


It’s much easier to style something that already works than to debug styling and logic at the same time.


Conclusion

Building this personal AI chatbot was far less “developer-only” than I expected — but still technical enough to be a genuinely fun learning curve. In the end, what I built is more than just a widget:

  • It’s a living, interactive “About Me” that sits in the corner of my site.

  • It’s a way for recruiters, collaborators, or random visitors to explore my work at their own pace.

  • And it’s a foundation: I can now add things like “Ask me for feedback on your portfolio” or “Let me walk you through my CI/CD case study” without changing the core architecture.


If you’re a designer or non-backend person thinking “I’d love an AI version of myself on my site”, my biggest advice is:


Don’t wait for the perfect stack. Start with the basics: Agent Builder → ChatKit starter → Vercel → Framer iframe. Then iterate.


You’ll be surprised how quickly it starts feeling like a real product and how much you learn about AI, tokens, and just enough frontend to be dangerous.


Click on the blue floating button on the bottom right and talk to me 😎
