Well, I suppose Wix does this to some extent; after all, AI is a data-driven technology.
I don’t recall ever seeing a pop-up from Wix asking something like “Do you allow us to anonymously send usage data to improve our services?” the way many desktop apps do.
In the AI era, it seems that using behavioral or usage data for training purposes has almost become a kind of unspoken global consensus — a tradeoff for the sake of human progress.
Some see it as a price worth paying, while others see it as theft.
Personally, I’m fine with it as long as the data is fully anonymized and used only to refine the “invisible logic” of systems — not for anything that can directly identify users.
(Perhaps Wix’s “display adjustment” falls into that category?)
However, as seen with OpenAI's latest video-generation models, there's another emerging issue: AI systems have become so advanced that they can now produce works nearly indistinguishable from their training data, such as clips that look exactly like a famous Japanese animation or closely resemble a well-known Hollywood movie.
Different countries and states have different legal perspectives on this.
In my country, it’s apparently fine to use existing works as training data, but if the model outputs something too similar to the original, that becomes a legal issue.
That said, even when it’s not identical, AI can still imitate the style or atmosphere so closely that it feels like an affront to the creators’ efforts.
In short, the law simply hasn’t caught up with the speed of AI’s evolution.
Meanwhile, in Europe — where copyright protection is traditionally strong — AI training regulations are likely stricter, which might make it harder for the AI industry to thrive there.
There’s also the question of who decides what counts as “too similar”, and some argue that human creativity itself is just imitation anyway.
But I don’t think anyone has reached a truly satisfying answer yet — perhaps because AI has already become too convenient for us to give up.
What I find particularly fascinating is that humans seem to strongly reject visible imitation — like copying characters or visual works — while being much more tolerant toward invisible imitation, such as learning structures, systems, or linguistic patterns.
I think that's an emotional response shaped by evolution.
In human society, appearance plays a key role in identity, helping maintain social order.
If everyone had the same face and wore the same clothes, society would collapse.
That’s why people instinctively feel uneasy when AI produces creations that blur those visible boundaries.
On the other hand, when AI learns “invisible” things — systems, logic, or context — people easily accept it as “technology making life easier.”
It’s similar to how voice generation raises copyright issues, while speech recognition does not.
That’s why I’m curious what kind of “AI learning” Christina is concerned about here.
It seems less about media copyrights and more about security or privacy issues, perhaps?
Another interesting question is: when Wix performs such learning, which country’s laws actually apply — the user’s, or the one where Wix is headquartered?
It’s a complex issue, and I imagine Wix must be carefully considering what data is used for learning to avoid legal or ethical complications.
That said, since we’re using Wix, we’ve probably already agreed to its worldwide license terms without even realizing it.
So perhaps, as with other AI systems, as long as the AI's output isn't identical to the user-generated content it learned from, the two are considered legally distinct.
In that sense, Wix may not claim ownership of user content, yet still use it to improve its code-generation AI or tools like Wix Vibe.
If the AI ever produced something visibly identical to user content, that would clearly be an issue — but using it as training data might technically be allowed.
It’s a complicated topic indeed.
But I'd love to hear how others see this, especially regarding the balance between innovation and AI's creative imitation.