Simplify Feedback Evaluation with ChatGPT and Supabase Edge Functions
A real-world use case: evaluating user feedback
One of the tasks for a side hustle I am building requires evaluating user feedback. Since I am adopting AI agents throughout the development process, I decided to implement ChatGPT evaluations for it. The following is the code snippet for the edge function I use to perform this task.
To connect to Supabase, you need to create a project and get your credentials. For local function development, you must install and set up the CLI on your machine. Here are the official docs for Supabase Edge Functions.
Edge function
// Setup type definitions for built-in Supabase Runtime APIs
import "https://esm.sh/@supabase/functions-js/src/edge-runtime.d.ts";
import OpenAI from "https://deno.land/x/openai@v4.24.0/mod.ts";

Deno.serve(async (req) => {
  try {
    const { type, content } = await req.json();

    if (!type || !content) {
      return new Response(
        JSON.stringify({ error: "Type and content must be defined" }),
        {
          headers: { "Content-Type": "application/json" },
          status: 422,
        },
      );
    }

    const openai = new OpenAI({
      apiKey: Deno.env.get("OPEN_AI_KEY"),
    });

    const chatCompletion = await openai.chat.completions.create({
      messages: [
        {
          role: "system",
          content: `
            You will receive a text that presents a positive
            description of an employee's experience within a company.
            Your task is to review the text for any offensive
            language and ensure that the overall message cannot
            be construed as retaliatory. Additionally, assess
            whether the content can serve as a valuable experience
            for its readers.
            Respond with the above information in JSON format.
            Output example:
            {
              score: number, // text score from 1 to 10
              offensive: boolean // whether it contains offensive language
            }
          `,
        },
        {
          role: "user",
          content: content,
        },
      ],
      model: "gpt-4o-mini",
      stream: false,
      response_format: {
        type: "json_object",
      },
    });

    const reply = chatCompletion.choices[0].message.content;

    return new Response(reply, {
      headers: { "Content-Type": "application/json" },
    });
  } catch (error: any) {
    return new Response(JSON.stringify({ error: error?.message }), {
      headers: { "Content-Type": "application/json" },
      status: 500,
    });
  }
});
Edge Functions use Deno, so you have to import the OpenAI package and the runtime type definitions as shown. It's a pretty small snippet that you can easily run. Let's explain it a bit more:
Here we expect a JSON request that includes a type and the content to be validated; otherwise we respond with a 422 HTTP status code (Unprocessable Entity):
const { type, content } = await req.json();

if (!type || !content) {
  return new Response(
    JSON.stringify({ error: "Type and content must be defined" }),
    {
      headers: { "Content-Type": "application/json" },
      status: 422,
    },
  );
}
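If you serve the function locally with the Supabase CLI, you can sanity-check this validation with a quick Deno script like the one below. It is a minimal sketch: the function name evaluate-feedback, the local port 54321, and the anon key placeholder are assumptions, so adjust them to your setup.
// Quick sanity check against a locally served function (run with Deno).
// Assumed: `supabase functions serve` is running and the function is
// named "evaluate-feedback"; adjust the URL and key for your project.
const FUNCTION_URL = "http://localhost:54321/functions/v1/evaluate-feedback";
const ANON_KEY = Deno.env.get("SUPABASE_ANON_KEY") ?? "<your-anon-key>";

// `content` is intentionally missing, so we expect a 422 back.
const res = await fetch(FUNCTION_URL, {
  method: "POST",
  headers: {
    "Authorization": `Bearer ${ANON_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({ type: "experience" }),
});

console.log(res.status); // 422
console.log(await res.json()); // { error: "Type and content must be defined" }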
Using ChatGPT requires an API key, which you can obtain from the OpenAI dashboard. And this is exactly how you read environment variables in Deno:
const openai = new OpenAI({
  apiKey: Deno.env.get("OPEN_AI_KEY"),
});
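Note that Deno.env.get returns undefined when the variable is not set, so the constructor above would silently receive an undefined key. If you want to fail fast, a small guard like the following (my own addition, meant to sit inside the request handler, not part of the original snippet) makes the error explicit:
// Fail fast if the secret is missing, instead of letting the OpenAI
// call fail later with a less obvious authentication error.
const apiKey = Deno.env.get("OPEN_AI_KEY");
if (!apiKey) {
  return new Response(
    JSON.stringify({ error: "OPEN_AI_KEY is not configured" }),
    { headers: { "Content-Type": "application/json" }, status: 500 },
  );
}

const openai = new OpenAI({ apiKey });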
Chat completions is the API call we use to send the system prompt along with the user content to be evaluated. We get the reply from the first choice, and it should be in the JSON format we specified. The model used here is gpt-4o-mini; feel free to compare different models. Last but not least, we set the response_format to json_object to enforce the JSON output.
const chatCompletion = await openai.chat.completions.create({
  messages: [
    {
      role: "system",
      content: `
        You will receive a text that presents a positive
        description of an employee's experience within a company.
        Your task is to review the text for any offensive
        language and ensure that the overall message cannot
        be construed as retaliatory. Additionally, assess
        whether the content can serve as a valuable experience
        for its readers.
        Respond with the above information in JSON format.
        Output example:
        {
          score: number, // text score from 1 to 10
          offensive: boolean // whether it contains offensive language
        }
      `,
    },
    {
      role: "user",
      content: content,
    },
  ],
  model: "gpt-4o-mini",
  stream: false,
  response_format: {
    type: "json_object",
  },
});

const reply = chatCompletion.choices[0].message.content;
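Keep in mind that response_format only guarantees syntactically valid JSON, not the exact keys we asked for. If you want to be defensive, you can parse and check the reply before returning it. Here is a minimal sketch; the Evaluation interface is my own addition, matching the shape described in the prompt:
// Shape we asked for in the system prompt.
interface Evaluation {
  score: number;      // text score from 1 to 10
  offensive: boolean; // whether it contains offensive language
}

const evaluation: Evaluation = JSON.parse(reply ?? "{}");

// Defensive check: make sure the model actually returned both fields.
if (
  typeof evaluation.score !== "number" ||
  typeof evaluation.offensive !== "boolean"
) {
  return new Response(
    JSON.stringify({ error: "Unexpected response format from the model" }),
    { headers: { "Content-Type": "application/json" }, status: 502 },
  );
}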
Now that we have a response in the format we need, we just send it back:
return new Response(reply, {
  headers: { "Content-Type": "application/json" },
});
That's it: we defined a fully functional edge function that consumes ChatGPT. Now it's your turn to adapt it to your own needs.
Here you will find the steps to create this Supabase function and how to invoke it: https://supabase.com/docs/guides/functions/quickstart
And here are the steps to deploy it and invoke it in production as well: https://supabase.com/docs/guides/functions/deploy
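Once the function is deployed, one convenient way to call it from an application is through supabase-js. A minimal sketch, assuming the function was deployed as evaluate-feedback and that your project URL and anon key are available as environment variables:
import { createClient } from "https://esm.sh/@supabase/supabase-js@2";

// Placeholders: use your own project URL and anon key.
const supabase = createClient(
  Deno.env.get("SUPABASE_URL") ?? "<your-project-url>",
  Deno.env.get("SUPABASE_ANON_KEY") ?? "<your-anon-key>",
);

const { data, error } = await supabase.functions.invoke("evaluate-feedback", {
  body: {
    type: "experience",
    content: "Working on the platform team taught me a lot about ownership.",
  },
});

if (error) {
  console.error("Evaluation failed:", error);
} else {
  console.log(data); // e.g. { score: 8, offensive: false }
}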
Happy coding, folks!