I’m having trouble with
A scammer filling out a form, impersonating someone, and asking for money.
Working in
Wix Studio Editor
Site link
What I’m trying to do
I want to add a level of security to these custom forms. I have the sender’s email address, and I’m wondering if I can add some kind of flagging system that denies submissions from particular senders.
When you say “custom form”, is this built with CMS and code, or with datasets?
There is the reCAPTCHA component that can be added, although that only really works against bots. You might be able to add an automation that rejects the submission if a field matches a specific value.
Thanks for your response! @noahlovell
The form itself is built from Wix input field elements and connects to a dataset we created.
Here is the code connected to the form:
import wixWindow from 'wix-window';
import { sendlifegroupleaderemail } from 'backend/lifegroupleaderemail';

$w.onReady(function () {
    // Hide the status elements on load ($w selectors should only be used
    // inside onReady or event handlers)
    $w("#errorbox").hide();
    $w("#successMessage").hide();

    const context = wixWindow.lightbox.getContext();
    if (context && context.formContext) {
        $w("#contextField").value = context.formContext;
        $w("#formHeader").text = `Join ${context.formContext}`;
        $w("#successText").text = `Thanks for signing up for ${context.formContext}. A group leader will reach out soon.`;
    }
    if (context && context.slug) {
        $w("#slug").value = context.slug; // Set the hidden slug field
    }

    $w('#submitLifeGroup').onClick(async function () {
        $w('#submitLifeGroup').disable();

        const requiredFields = [
            $w('#namefield'),
            $w('#phonefield'),
            $w('#emailfield')
        ];
        const hasInvalidFields = requiredFields.some(field => !field.valid);
        if (hasInvalidFields) {
            await $w("#errorbox").show("fade", { duration: 200 });
            $w('#submitLifeGroup').enable();
            return; // stop if validation fails
        }

        // Capture all values BEFORE hiding
        const name = $w('#namefield').value;
        const phone = $w('#phonefield').value;
        const email = $w('#emailfield').value;
        const message = $w('#messagefield').value;
        const context = $w('#contextField').value; // shadows the lightbox context above
        const data = { name, phone, email, message, context };
        console.log("Sending data to backend:", data);

        // Only hide the form and proceed if validation passed
        await $w("#errorbox").hide("fade", { duration: 500 });
        $w("#GroupFields").hide("fade", { duration: 500 });

        try {
            const response = await sendlifegroupleaderemail(data);
            console.log("Email sent successfully:", response);
            await $w("#successMessage").show("fade", { duration: 300 });
        } catch (error) {
            console.error("Error sending email:", error);
            await $w("#GroupFields").show("fade", { duration: 500 });
            await $w("#errorbox").show("fade", { duration: 200 });
        } finally {
            $w('#submitLifeGroup').enable();
        }
    });
    $w('#submitAnotherText').onClick(async () => {
        // Reset the form so another submission can be made
        await $w("#successMessage").hide("fade", { duration: 500 });
        await $w("#errorbox").hide();
        await $w("#GroupFields").show("fade", { duration: 500 });
        $w('#submitLifeGroup').enable();
    });
});
I was wondering if you had any solutions for this?
Awesome - there are probably a couple of ways you could approach this, such as regex validation on the field’s change event.
That said, the best step to take is to run the validation on the backend, so you’re not publicly exposing the logic or which emails are blocked, and then return an error if the submission doesn’t pass validation.
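As a rough sketch of that idea (assuming a hypothetical CMS collection called BlockedSenders with an email field; adapt the names to however you actually store your blocklist), the check could sit at the top of the existing backend module:

// backend/lifegroupleaderemail.jsw (sketch)
import wixData from 'wix-data';

// "BlockedSenders" is a hypothetical collection with an "email" field;
// swap in whatever blocklist storage you actually use.
async function isBlockedSender(email) {
    const results = await wixData.query('BlockedSenders')
        .eq('email', email.toLowerCase().trim())
        .find({ suppressAuth: true });
    return results.items.length > 0;
}

export async function sendlifegroupleaderemail(data) {
    // Reject blocked senders before any email goes out. The thrown error
    // is caught by the frontend's try/catch, which shows the error box.
    if (await isBlockedSender(data.email)) {
        throw new Error('Submission rejected');
    }
    // ...your existing email-sending logic continues here...
}

Because the rejection happens in the backend, a blocked sender just sees the generic error box and never learns why the submission failed.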
Hi, @Benjamin_Sullivan!!
Personally, I think the most effective way to deal with annoying or malicious message submissions is to leverage AI on the backend.
Blocking a sender’s email address directly in the code doesn’t help much, since they can easily switch to a new one. Even blocking by IP address isn’t reliable, because VPNs make it trivial to get around. That’s why it’s far more practical to analyze message content using something like the OpenAI API on the backend, and have the system automatically detect scams or spam. Only messages the AI judges as “safe” would actually be accepted by the form.
At the same time, it’s important to give users feedback based on the AI’s decision. For example, if a message is flagged as a scam, the frontend could display something like: “Your submission has been blocked by our security AI for the following reason: [reason here].” That way, the system stays transparent and user-friendly.
Of course, running every single message through AI can get expensive. A more realistic approach is to pre-filter submissions with an NG word list before sending anything to the AI. Obviously fraudulent messages can be stopped right there, and the AI only gets involved in borderline or ambiguous cases. You can build and maintain your NG word list by having ChatGPT or Claude generate common spam/scam keywords, then customize it with terms specific to your own site.
If you want to go a step further in being considerate toward users, it’s a good idea to include a note below the form that says something like: “This message will be reviewed by an external security AI. Please do not include personal information.” A note like that not only promotes transparency but can also act as a psychological deterrent to potential scammers.
And for even better data protection, you could preprocess messages through an anonymization program running in a closed environment before sending them to the AI. This way, personal information can be masked or removed in advance. The AI can still analyze the message for spam or fraud, but without ever handling sensitive personal data, achieving both strong security and privacy protection.
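As a very rough sketch of that pipeline (the file name, function names, NG words, and model choice below are all illustrative assumptions, not a finished implementation):

// backend/moderation.jsw (sketch; all names here are illustrative)
import { fetch } from 'wix-fetch';
import { getSecret } from 'wix-secrets-backend';

// Step 1: cheap NG-word pre-filter so obvious spam never reaches the AI
const NG_WORDS = ['wire transfer', 'gift card', 'urgent payment']; // examples only

export function failsPreFilter(message) {
    const lower = message.toLowerCase();
    return NG_WORDS.some((word) => lower.includes(word));
}

// Step 2: mask obvious personal data before the text leaves your backend
export function anonymize(message) {
    return message
        .replace(/[\w.+-]+@[\w-]+\.[\w.-]+/g, '[email]')
        .replace(/\+?\d[\d\s().-]{7,}\d/g, '[phone]');
}

// Step 3: ask the AI only about borderline messages
export async function checkWithAI(message) {
    const apiKey = await getSecret('OPENAI_API_KEY'); // stored in the Secrets Manager
    const res = await fetch('https://api.openai.com/v1/chat/completions', {
        method: 'post',
        headers: {
            'Content-Type': 'application/json',
            'Authorization': `Bearer ${apiKey}`
        },
        body: JSON.stringify({
            model: 'gpt-4o-mini', // pick whichever model fits your budget
            response_format: { type: 'json_object' },
            messages: [{
                role: 'user',
                content: 'Reply with JSON {"safe": boolean, "reason": string} for this form message:\n' + anonymize(message)
            }]
        })
    });
    const json = await res.json();
    return JSON.parse(json.choices[0].message.content); // { safe, reason }
}

A submit flow would call failsPreFilter first, send only borderline messages through checkWithAI, and surface the returned reason in the kind of frontend message described above.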
Implementing all of these measures might seem challenging, but aside from the anonymization program mentioned at the end, most of them aren’t too difficult if you understand the basics, so try adding them one step at a time!
In fact, even just including a short note like the following might be enough:
“This message will be processed by an external security AI for spam prevention. The information will be anonymized before being sent to the AI, but please refrain from including any highly sensitive personal details or malicious content.” 