Does AI have a place in form validation? I believe it does and in this post I’ll show you where it can be applied, why it makes sense and how to build it.
Dea is my new startup that helps builders track, plan and ship their next big product idea. An important step in the platform is being able to capture ideas. For this we have a quick but structured form that captures the what, who and outcome of the product.
Each input has some standard validation powered by Zod and Superforms. However, we want to make sure that the initial draft is coherent, legal and safe. This is where AI can help us.
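As a rough sketch, the schema behind that form might look something like this. The field names and constraints here are illustrative only, based on the what/who/outcome structure described above, not Dea's real schema:

```typescript
import { z } from 'zod';

// Illustrative only -- the real ideaSchemaV1 in Dea will differ.
export const ideaSchemaV1 = z.object({
	what: z.string().min(10, 'Describe what you want to build'),
	who: z.string().min(3, 'Who is it for?'),
	outcome: z.string().min(10, 'What outcome should it deliver?')
});

export type Idea = z.infer<typeof ideaSchemaV1>;
```

Superforms then runs this schema against every submission on the server, which handles the structural checks before any AI gets involved.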
We created a simple validation function using Vercel’s AI SDK.
```typescript
import { generateObject } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

async function isValidIdea(draft: string) {
	const { object } = await generateObject({
		model: openai('gpt-4o-mini'),
		system: `You are responsible for checking if an idea is coherent, legal and safe.
Please reject any ideas that contain spam or harmful content.
If you are unsure then allow the idea.`,
		prompt: draft,
		temperature: 0,
		schema: z.object({ isValid: z.boolean() })
	});

	return object.isValid;
}
```
Using generateObject we get a type-safe, structured output from the LLM. Let's break down the three parts of the prompt:

- The role: the model is responsible for checking that an idea is coherent, legal and safe.
- The rejection criteria: anything containing spam or harmful content should be rejected.
- The fallback: if the model is unsure, it should allow the idea rather than block the user.
Then, inside our form action, we call this function in a try/catch block, since network requests can fail for any number of reasons. In our case, if the OpenAI call fails we still allow the validation to pass.
```typescript
const form = await superValidate(request, zod(ideaSchemaV1));
if (!form.valid) {
	return fail(400, { form, error: null });
}

const draft = ideaToDraft(form.data);

try {
	const isValid = await isValidIdea(draft);
	if (!isValid) {
		return fail(400, { form, error: 'Please check your answers' });
	}
} catch (e) {
	console.error(e);
	// Don't prevent submission if the AI validation fails.
}
```
Obviously, if you can validate your inputs confidently without AI, then don't use AI. Relying on an LLM introduces both uncertainty and latency.
However, there are valid use cases where the input is unstructured or you need a deeper analysis of it. A good rule of thumb: reach for AI where you would otherwise need a human or manual approval step.
Use a fast model (gpt-4o-mini rather than o1) to keep the response time short and the UX smooth for the end user. With that in mind, also give the user some indication that the form is being validated, as in the sketch below.
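With Superforms that indication is easy to wire up: superForm exposes a `delayed` store that flips to true once a submission has been in flight for a moment. A minimal sketch (the markup is illustrative, and the import path may vary by Superforms version):

```svelte
<script lang="ts">
	import { superForm } from 'sveltekit-superforms';

	export let data;

	// `delayed` becomes true once the request has been pending for a short
	// while -- exactly when the AI check is adding noticeable latency.
	const { form, errors, enhance, delayed } = superForm(data.form);
</script>

<form method="POST" use:enhance>
	<!-- inputs bound to $form fields go here -->
	<button disabled={$delayed}>
		{#if $delayed}Checking your idea…{:else}Submit{/if}
	</button>
</form>
```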
Unless it's absolutely necessary, we think it's better to make AI validation non-blocking. In our example we let the validation pass if the LLM call fails. You could also nudge the user to check their answers, but still allow them to confirm that they are correct and wish to proceed.
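Here's a sketch of that softer flow. The `confirmed` field is hypothetical: a hidden input the client sets when the user reviews the warning and chooses to proceed anyway.

```typescript
// Sketch only: `confirmed` is a hypothetical hidden input set on resubmit.
const formData = await request.formData();
const form = await superValidate(formData, zod(ideaSchemaV1));
if (!form.valid) {
	return fail(400, { form, error: null });
}

const confirmed = formData.get('confirmed') === 'true';
const draft = ideaToDraft(form.data);

try {
	if (!confirmed && !(await isValidIdea(draft))) {
		// Soft failure: keep the data, show a warning, and let the user
		// either edit their answers or confirm and resubmit.
		return fail(400, { form, error: 'Please check your answers, then confirm to continue' });
	}
} catch (e) {
	console.error(e);
	// Never block submission on an AI failure.
}
```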
Finally, don’t use this for things that LLMs are known to be bad at, such as unit conversion. This could be possible with function calling but be mindful of latency.
I hope this has given you some ideas on how to apply AI validation with your own forms and data capture. If you’re already doing something similar then we’d love to hear the lessons you’ve learned.