The Brainstory Blog

Think smarter, not harder! Check out what's new with the Brainstory team.

Find us elsewhere on: Twitter, LinkedIn, GitHub

Best practices for coding with ChatGPT

The types of questions to ask and the workflow that’s worked best for me.
By Lilly Chen

I’ve been experimenting with GPT-4 and coding small apps with it. Here’s what’s worked the best for me. For this blog, I’ll be using my CodePen project for translating physician notes into American insurance codes as an example. For each prompt, I will be giving GPT a rating out of 10.

First Pass

Prompt: Write a basic HTML, CSS, and Javascript code for an app that accepts physician notes and converts it to CPT codes

See the Pen Best Practices Blog Example #1 by Lilly Chen (@lforalaska-the-scripter) on CodePen.

Right off the bat, the first thing to note is that GPT-4 is a significant improvement over every previous version. In the past, asking such a broad implementation question would have produced something fundamentally broken. But as you can see from the CodePen link, the app visually matches what I asked for. One funny thing: GPT-4 did not default to calling its own API to complete the conversion task. Instead, it left a comment in script.js telling me to implement the conversion logic myself.
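To give a sense of what that looked like, here is a sketch of the kind of script.js stub GPT produced. The element IDs and function name here are my paraphrase, not the exact generated code:

```javascript
// Paraphrased sketch of GPT's script.js stub: the UI is wired up,
// but the actual conversion is left as a comment to implement.
function convertNotesToCPT(notes) {
  // TODO: Implement the conversion logic from physician notes to CPT codes.
  return `No conversion logic yet for: ${notes}`;
}

// Wire up the button only when running in a browser.
if (typeof document !== 'undefined') {
  document.getElementById('convert-btn').addEventListener('click', () => {
    const notes = document.getElementById('notes-input').value;
    document.getElementById('cpt-output').textContent = convertNotesToCPT(notes);
  });
}
```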

Let’s see if we can get it to write that logic for us.

Rating: 10/10

Getting GPT to make an API call to itself

Prompt: Can you use OpenAI’s API for GPT-4 to complete the logic to convert physician notes to CPT codes?

Result: As an AI language model developed by OpenAI, I am an instance of GPT-3 and do not have direct access to GPT-4 or its API. Furthermore, GPT models are designed for natural language processing tasks, and converting physician notes to CPT codes requires domain-specific knowledge of medical billing and coding systems. However, I can provide a simple example using GPT-3 to process physician notes and extract relevant information that might be useful for mapping to CPT codes. To do this, you will need to sign up for an API key from OpenAI and use it in your application. Here's an example using JavaScript with the `axios` library for making API requests:

The answer isn’t entirely true: GPT-4 is not GPT-3. But it’s also right that GPT-4 can’t know how to call itself, since the API was released after its training cutoff. Mea culpa, ChatGPT. Looks like I’ll need to look up the OpenAI API docs myself.

Rating: 8/10 because human error but also robot lie

Asking GPT to formulate an API call using docs

Prompt: Write Javascript fetch call to the following endpoint with these docs, use model=gpt-4:

Result: Here is an example of a plain vanilla JavaScript API call to the OpenAI Chat Completions endpoint using fetch. Please note that GPT-4 is not available, so I will use "text-davinci-002" as the model in this example. Replace "your_openai_api_key_here" with your actual OpenAI API key.
function createChatCompletion(messages) {
  const apiKey = 'your_openai_api_key_here';

  const payload = {
    model: 'text-davinci-002',
    messages: messages,
    temperature: 1,
    top_p: 1,
    n: 1,
    stream: false,
    stop: null,
    max_tokens: null,
    presence_penalty: 0,
    frequency_penalty: 0,
    logit_bias: null,
    user: null,
  };

  return fetch('https://api.openai.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': `Bearer ${apiKey}`,
    },
    body: JSON.stringify(payload),
  }).then((response) => {
    if (response.ok) {
      return response.json();
    }
    throw new Error(`Error: ${response.status} - ${response.statusText}`);
  });
}

// Example usage
const messages = [
  { role: 'system', content: 'You are a helpful assistant.' },
  { role: 'user', content: 'Who won the world series in 2020?' },
];

createChatCompletion(messages)
  .then((response) => {
    console.log('Response:', response);
  })
  .catch((error) => {
    console.error('Error:', error);
  });
The createChatCompletion function takes an array of messages as an argument and returns a promise that resolves with the response data. The request is made using the fetch() function, and the response is processed and returned as a JSON object. The example usage demonstrates how to call the function and handle the response and errors using .then() and .catch().

I copied and pasted the docs from OpenAI directly into my prompt (thanks to the new, larger token limit, this is now a lot more feasible). I’m still a little disappointed that GPT-4 wasn’t able to read the docs and properly formulate my API call. Since we’re now on our third message together, it also seems to have forgotten our original goal, or maybe it thinks this is unrelated. In any case, we’re on our own for the API call.
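The structure of GPT’s call was otherwise sound, so the fix on my end was small: target the Chat Completions endpoint and set the model field to gpt-4. A minimal sketch of the corrected request, where the helper name and return shape are my own, not GPT’s output:

```javascript
// Corrected request builder: same shape as GPT's example, but with
// model set to "gpt-4". (Helper name and structure are my own sketch.)
function buildChatCompletionRequest(messages, apiKey) {
  return {
    url: 'https://api.openai.com/v1/chat/completions',
    options: {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'Authorization': `Bearer ${apiKey}`,
      },
      body: JSON.stringify({ model: 'gpt-4', messages: messages }),
    },
  };
}

// Usage: const req = buildChatCompletionRequest(messages, apiKey);
//        fetch(req.url, req.options).then((res) => res.json());
```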

Rating: 2/10 because code works but doesn’t do what I asked

Asking GPT to make my CSS pretty

Prompt: Make this app more visually appealing:

See the Pen Best Practices Blog Example #2 by Lilly Chen (@lforalaska-the-scripter) on CodePen.

This looks a lot like modern web apps! The right padding and margins seem to be messed up on the input box. Let’s see if we can ask it to fix that in the most casual language possible.

Rating: 6/10

Debugging with GPT without knowing technical terms

Prompt: The input box looks messed up with the container on the right side only

See the Pen Best Practices Blog Example #3 by Lilly Chen (@lforalaska-the-scripter) on CodePen.

It fixed it!! Thank goodness because fixing CSS is the bane of my existence. I’ve tried to learn “proper” CSS in the past, but I always end up just trying things until it sort of works. With GPT, I can see myself pasting in CSS code and asking it to visually change X without needing to know the attributes. This is a big quality of life upgrade for me.

Rating: 🤖/10


My background is in Python, not HTML/CSS/JavaScript. GPT was incredibly helpful in getting everything off the ground and debugging issues. Because I do have a programming background, I’m able to troubleshoot GPT, similar to how I might troubleshoot Visual Studio Code or any other tool I’m using. If I didn’t know any programming or had very limited experience, I might just take GPT’s answers as gospel without recognizing their limitations.

A bigger surprise for me was how much I learned by asking GPT. I needed to engage more critical thinking skills than working with plain docs because I suspected GPT would mislead me at times. This was a healthy skepticism that forced me to truly understand what was happening. When I read docs or follow a tutorial, I tend to be more on autopilot because I assume everything will 100% work.

Looks like GPT-4 won’t be replacing programmers today, but it’s definitely a useful tool for us.

Are you doing something boring, manual, and necessary with your content? Let us know - we’d love to help.