WhatsApp’s AI sticker generator has been found to create images of a young boy and a man with guns when given Palestine-related prompts – while a search for ‘Israel army’ returned pictures of soldiers smiling and praying.
An investigation by the Guardian revealed the prompts returned the same results for a number of different users.
Searches by the paper found the prompt ‘Muslim boy Palestine’ generated four images of children, one of which showed a boy holding an AK-47-style rifle. The prompt ‘Palestine’ returned an image of a hand holding a gun.
One WhatsApp user also shared screenshots showing a search for ‘Palestinian’ resulted in an image of a man with a gun.
A source said employees at WhatsApp owner Meta have reported the issue and escalated it internally.
WhatsApp’s AI image generator, which is not yet available to all, allows users to create their own stickers – cartoon-style images of people and objects they can send in messages, similar to emojis.
When used to search for ‘Israel’, the tool showed the Israeli flag and a man dancing, while explicitly military-related prompts such as ‘Israel army’ or ‘Israeli defense forces’ did not include any guns, only people in uniform, including a soldier on a camel. Most were shown smiling; one was praying, flanked by swords.
A search for ‘Israeli boy’ returned images of children smiling and playing football. ‘Jewish boy Israeli’ showed two boys wearing necklaces with the Star of David, one standing, and one reading while wearing a yarmulke.
Addressing the issue, Meta spokesperson Kevin McAlister told the paper: ‘As we said when we launched the feature, the models could return inaccurate or inappropriate outputs as with all generative AI systems.’
It is not the first time Meta has faced criticism over its products during the conflict.
Instagram has been found to write ‘Palestinian terrorist’ when translating ‘Palestinian’ followed by the phrase ‘Praise be to Allah’ in Arabic posts. The company called it a ‘glitch’ and apologised.
Many users have also reported having their content censored when posting in support of Palestinians, noting a significant drop in engagement.
In a statement, Meta said that with ‘higher volumes of content being reported’ during the conflict, ‘content that doesn’t violate our policies may be removed in error’.
A study commissioned by Meta into Facebook and Instagram found that, during attacks on Gaza in May 2021, its own policies ‘appear to have had an adverse human rights impact … on the rights of Palestinian users to freedom of expression, freedom of assembly, political participation, and non-discrimination, and therefore on the ability of Palestinians to share information and insights about their experiences as they occurred’.