Let me walk you through how I built my own personal chatbot using OpenAI’s Agent Builder.
For a long time, I wanted people to be able to “chat with me” on my website. Not in a generic support-bot way, but with something that actually understands my work, my projects, and how I think.
I imagined a little chat bubble in the corner of my portfolio. You click it and an AI version of me pops open, ready to answer:
What do you do?
What projects are you most proud of?
What side projects are you building right now?
What are your hobbies?
The twist: I’m not a backend engineer. I live mostly in design tools, not in dev consoles. Still, I wanted to learn this properly and build it myself.
This blog is the story of how I went from that idea to a working personal AI chatbot, step by step, using:
ChatGPT (to think and prototype)
OpenAI Agent Builder (for the actual brain)
GitHub + Vercel (for the repository and deployment)
Framer (for my personal site; you can swap in whatever website builder you normally use)
A small chunk of HTML + CSS + JS (for a floating chat widget)
I’ll walk you through the architecture, the key steps, the rough costs, and some of the frustrating-but-fun gotchas along the way.
Tech stack

| Layer | Tool / Platform | Role |
| --- | --- | --- |
| Brain | OpenAI Agent Builder & workflow | Understands questions, answers as “me” |
| Data about me | My site, LinkedIn, notes, files uploaded as PDFs | Knowledge for the agent |
| Chat UI | OpenAI ChatKit starter app | Chat bubbles, input, streaming, etc. |
| Hosting for chat | Vercel (Free plan) | Deploys the chat app (Next.js) |
| Website | Framer | My public site & where the widget lives |
| Floating widget | A tiny HTML + CSS + JS snippet + iframe | Button + sliding chat window |
Step 1 – Designing the AI “me” with ChatGPT
The first thing I did wasn’t code. I opened ChatGPT and worked on my persona:
What should the bot know?
How should it talk?
What shouldn’t it answer?
I ended up with something like this as the core instruction (system prompt / agent instructions):
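My exact wording is personal (and I kept tweaking it), but a condensed sketch of that kind of persona instruction reads roughly like this:

```text
You are the AI version of [my name], a designer. Answer in first person,
in a friendly, concise tone, as if I were chatting with a visitor on my
portfolio site.

You know about: my work, my projects, my side projects, and my background,
based only on the documents provided. If a question is outside that scope,
or is personal or sensitive, politely decline and suggest asking about my
work instead. Never invent facts about me.
```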
I refined this a few times while testing. This text later became the instructions inside my Agent node in Agent Builder.
Step 2 – Building the workflow in OpenAI Agent Builder
Next, I created the actual workflow (the logic that runs for each message) in OpenAI Agent Builder.
2.1 Creating the workflow
Open the OpenAI platform and go to Agent Builder.
Click New workflow.
Add:
A Start node
An Agent node (I named it ProfileAgent)
An End node
Wire them like:
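A simple linear wiring is all this needs:

```text
Start → ProfileAgent → End
```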
2.2 Configuring the Agent node
Inside ProfileAgent:
Instructions: I pasted the persona text from Step 1.
Output format: set to Text / natural language.
ChatKit options (this was critical):
✅ Display response in chat → ON (without this, ChatKit shows nothing even though the agent is working)
✅ Write to conversation history → ON
(Optionally) “Include chat history” can be OFF if you want each response to focus only on the latest question.
For the “starter chip” problem (“Would you like me to introduce myself first?”), I added a special rule to the instructions:
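The rule I added was along these lines (paraphrased, not my exact wording):

```text
If the user message is exactly "Would you like me to introduce myself first?",
treat it as "Introduce yourself" and reply with a short first-person intro.
```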
So now that chip acts like a shortcut to “Introduce yourself”.
2.3 Adding knowledge about me
To make the agent actually know me, I added:
My personal website content
My LinkedIn “About” section and key job entries
A few short notes about big projects and side projects
You can either:
Upload them as files (PDF/Markdown/text), or
Copy-paste as “Knowledge” / “Documents” inside the agent configuration.
Short, focused documents work best — think small, curated bios and case study summaries rather than full raw Figma exports.
2.4 Publishing the workflow
When I was happy with the behavior:
I clicked Publish (top-right).
That gave me a workflow ID like wf_xxxxxxxxxx. I used that ID later in my chat app.
If Preview works in Agent Builder (you see good answers), you know the “brain” is fine and any issues later are in the UI / wiring.
Step 3 – Wiring it to a chat UI with ChatKit, GitHub & Vercel
Now I needed a front-end: a clean chat interface that talks to my workflow.
3.1 Using the ChatKit starter
OpenAI provides a ChatKit starter app (a Next.js project) that:
Shows a modern chat interface
Streams responses
Connects to workflows via environment variables
To set it up, I:
Forked/cloned the starter repository into my own GitHub.
Linked that repo to Vercel.
3.2 Environment variables
In Vercel, under the project’s settings, I configured:
OPENAI_API_KEY → the API key for the OpenAI project where my workflow lives
NEXT_PUBLIC_CHATKIT_WORKFLOW_ID → the wf_... ID from Agent Builder
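For local development, the same two variables go in a `.env.local` file at the project root (the values below are placeholders):

```shell
# Vercel → Project → Settings → Environment Variables
# (or .env.local when running the starter locally)
OPENAI_API_KEY=sk-...                          # key from the OpenAI project that owns the workflow
NEXT_PUBLIC_CHATKIT_WORKFLOW_ID=wf_xxxxxxxxxx  # the ID from Agent Builder's Publish step
```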
3.3 Fixing the weird “AI Age Inquiry” issue
At one point, my UI was showing strange text like:
“AI Identity Inquiry”
“AI Age Inquiry”
…instead of proper answers 😭🙈🙈
What was happening (one of these, depending on the setup):
A classification label was being returned instead of the agent response, or
The End node was returning JSON like { "output_text": "..." } and the UI was only showing part of it, or
ChatKit was only using the first field.
I fixed it by:
Making sure the End node either:
Returns just {{ProfileAgent.output_text}}, or
Returns a simple object and the UI extracts the correct field.
In the simpler case, I just removed the End node and let the Agent node’s “Display response in chat” handle everything for ChatKit, which is enough for many chat scenarios.
3.4 Customizing the ChatKit configuration
In the repo there was a file like lib/config.ts. I used it to:
Change the greeting text.
Define starter chips.
Remove the attachment “+” button (I don’t need file uploads).
Example (simplified):
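The exact shape of lib/config.ts depends on which version of the starter you cloned, so treat the field names below as illustrative (check the real file in your repo):

```typescript
// lib/config.ts (simplified sketch — field names are assumptions,
// match them against the actual ChatKit starter you cloned)
export const chatConfig = {
  // Greeting shown when the chat opens
  greeting: "Hi! I'm the AI version of me. Ask about my work and projects.",

  // Starter chips — each one is sent as a plain user message
  starterPrompts: [
    "Introduce yourself",
    "What projects are you most proud of?",
    "What side projects are you building right now?",
  ],

  // I removed the attachment "+" button since I don't need file uploads
  enableAttachments: false,
};
```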
After committing this to main, Vercel automatically redeployed my chat app.
Step 4 – Embedding the chat app in Framer (simple iframe)
Now I had a standalone chat app (at a Vercel URL like https://my-chat-app.vercel.app). Time to bring it into my actual portfolio.
The simplest version is:
Add an Embed in Framer.
Set it to Fixed, bottom-right.
Paste an iframe:
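A minimal version of that iframe (swap in your own Vercel URL) looks like this:

```html
<!-- Embed the deployed chat app directly in the page -->
<iframe
  src="https://my-chat-app.vercel.app"
  style="width: 100%; height: 100%; border: 0; border-radius: 16px;"
  title="Chat with me"
></iframe>
```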
That already works, but I wanted a floating button + small window experience.
Step 5 – Building the floating chat widget with HTML/CSS/JS
To get a “chat bubble” UX, I used a small chunk of HTML + CSS + JS inside a Framer Embed. The idea:
Only a round button is visible at first.
When you click it, a chat window appears above it (containing the iframe).
Click ✕ or the button again → it closes.
Here’s a simplified version of the widget I ended up with (you can tweak the styling):
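The sketch below shows the idea; the IDs, colors, and sizes are illustrative, and the iframe URL is the Vercel deployment from Step 3:

```html
<!-- Floating chat widget: a round button that toggles an iframe window -->
<div id="chat-widget">
  <div id="chat-window">
    <iframe src="https://my-chat-app.vercel.app" title="Chat with me"></iframe>
  </div>
  <button id="chat-toggle" aria-label="Open chat">💬</button>
</div>

<style>
  #chat-widget { position: fixed; right: 24px; bottom: 24px; z-index: 9999; }
  #chat-toggle {
    width: 56px; height: 56px; border: none; border-radius: 50%;
    background: #2563eb; color: #fff; font-size: 24px; cursor: pointer;
    box-shadow: 0 4px 12px rgba(0, 0, 0, 0.25);
  }
  #chat-window {
    display: none; /* hidden until the button is clicked */
    position: absolute; right: 0; bottom: 72px;
    width: 360px; height: 520px;
    border-radius: 16px; overflow: hidden;
    box-shadow: 0 8px 24px rgba(0, 0, 0, 0.25);
  }
  #chat-window.open { display: block; }
  #chat-window iframe { width: 100%; height: 100%; border: 0; }
</style>

<script>
  // Toggle the chat window open/closed; swap the button icon to ✕ when open.
  const toggle = document.getElementById("chat-toggle");
  const win = document.getElementById("chat-window");
  toggle.addEventListener("click", () => {
    const open = win.classList.toggle("open");
    toggle.textContent = open ? "✕" : "💬";
  });
</script>
```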
In Framer:
I dropped this into an Embed.
Set the Embed to Fixed, bottom-right.
On desktop: give it a height like 650px.
On mobile: set the height to 100vh so the chat can use the full viewport height when open.
Step 6 – Costs: How much does this actually cost to run?
This was a big question for me too: “If people chat with this on my website, how much money am I burning?”
6.1 OpenAI costs
For a personal bot, I used GPT-4.1 mini — strong enough for good answers, but much cheaper than the big flagship models. According to OpenAI’s pricing docs, GPT-4.1 mini costs roughly:
$0.40 per 1M input tokens
$1.60 per 1M output tokens
Rough back-of-the-envelope:
Imagine a typical question+answer pair uses about:
300 input tokens (your message + some history)
300 output tokens (the answer)
That’s 600 tokens per exchange.
For 1,000 conversations like that:
Total input tokens ≈ 300,000 → 0.3M
Cost: 0.3 × $0.40 = $0.12
Total output tokens ≈ 300,000 → 0.3M
Cost: 0.3 × $1.60 = $0.48
So 1,000 full Q&A turns ≈ $0.60 in model costs.
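The arithmetic above can be wrapped in a tiny helper so you can plug in your own traffic assumptions (prices and token counts are the same ones used in this section):

```javascript
// Back-of-the-envelope model-cost estimate for GPT-4.1 mini.
// Prices are per 1M tokens, as quoted above.
const INPUT_PRICE_PER_M = 0.4;   // $ per 1M input tokens
const OUTPUT_PRICE_PER_M = 1.6;  // $ per 1M output tokens

function estimateCost(exchanges, inputTokensEach, outputTokensEach) {
  const inputM = (exchanges * inputTokensEach) / 1_000_000;
  const outputM = (exchanges * outputTokensEach) / 1_000_000;
  return inputM * INPUT_PRICE_PER_M + outputM * OUTPUT_PRICE_PER_M;
}

// 1,000 exchanges at 300 input + 300 output tokens each
console.log(estimateCost(1000, 300, 300)); // ≈ $0.60
```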
Also, OpenAI currently gives new users $5 in free credits that last 3 months, which is enough to power thousands of these small chats before you pay anything.
6.2 Vercel costs
I used the Vercel Hobby plan, which is Free forever, aimed at personal projects and small apps.
For my use case (a small chat app with light traffic), this is more than enough. If your traffic explodes, you might eventually consider Pro (around $20/month in 2025, with usage-based overages).
6.3 Framer costs
I’m using Framer for my personal site.
Framer offers a Free plan for non-commercial projects, hosted on a Framer domain and with a small “Made in Framer” label.
For a more polished setup (custom domain, more features), there are paid plans. A recent update lists a Basic-type plan around $10/month for small/personal projects.
So, roughly:
| Piece | Typical plan for this use case | Monthly cost (approx.) |
| --- | --- | --- |
| OpenAI API | GPT-4.1 mini, low personal use | $1–$5 (or $0 if on trial) |
| Vercel | Hobby plan | $0 |
| Framer | Free or Basic/Personal plan | $0–$10 |
Realistically, for a portfolio-level chatbot with light traffic, your monthly cost is very likely in the few dollars range, mainly from OpenAI if you exceed the free credits.
Step 7 – Lessons learned & things I’d do differently next time
Displaying a response in chat is critical
In Agent Builder, if Display response in chat is off, your agent can be working perfectly but your UI will show… nothing. It took some debugging to realize that was the issue.
Schema vs plain text
When I experimented with structured outputs (JSON like { "output_text": "..." }), the UI sometimes showed weird labels (e.g., “AI Identity Inquiry”) instead of the real answer. For a simple profile bot, plain text output is much safer.
Starter prompts are just messages
Those little chips (“What can you do?”, “Introduce yourself”, etc.) are just shortcuts to user messages. The agent sees the prompt text exactly as if the user typed it. That’s why you either:
Make the prompt text very explicit, or
Handle that exact phrase in your instructions (“If user says X, do Y”).
Domain allow-lists matter
Since the chat app calls the OpenAI API and is embedded via iframe:
The domain where your site lives (e.g. Framer’s domain or your custom domain) must be added to your OpenAI domain allowlist, or requests can fail silently.
Start simple, then add polish
I got the basic end-to-end flow working first with a plain iframe. Only after that did I:
Remove attachments
Customize greeting + chips
Build the floating widget
Fine-tune mobile behavior
It’s much easier to style something that already works than to debug styling and logic at the same time.
Conclusion
Building this personal AI chatbot was far less “developer-only” than I expected — but still technical enough to be a genuinely fun learning curve. In the end, what I built is more than just a widget:
It’s a living, interactive “About Me” that sits in the corner of my site.
It’s a way for recruiters, collaborators, or random visitors to explore my work at their own pace.
And it’s a foundation: I can now add things like “Ask me for feedback on your portfolio” or “Let me walk you through my CI/CD case study” without changing the core architecture.
If you’re a designer or non-backend person thinking “I’d love an AI version of myself on my site”, my biggest advice is: start simple and get the end-to-end flow working before you polish anything.
You’ll be surprised how quickly it starts feeling like a real product and how much you learn about AI, tokens, and just enough frontend to be dangerous.
Click on the blue floating button on the bottom right and talk to me 😎
