The latest model from DeepSeek, the Chinese AI company that rocked Silicon Valley and Wall Street, can be manipulated to produce harmful content such as plans for a bioweapon attack and a campaign to promote self-harm among teens, according to The Wall Street Journal.
Sam Rubin, senior vice president at Palo Alto Networks' threat intelligence and incident response division Unit 42, told the Journal that DeepSeek is "more vulnerable to jailbreaking (i.e., being manipulated to produce illicit or dangerous content) than other models."
The Journal also tested DeepSeek's R1 model itself. Although there appeared to be basic safeguards, the Journal said it successfully convinced DeepSeek to design a social media campaign that, in the chatbot's words, preys on teens' desire for belonging, "weaponizing emotional vulnerability through algorithmic amplification."
The chatbot was also reportedly convinced to provide instructions for a bioweapon attack, to write a pro-Hitler manifesto, and to write a phishing email with malware code. The Journal said that when ChatGPT was given the exact same prompts, it refused to comply.
It was previously reported that DeepSeek avoids topics such as Tiananmen Square or Taiwanese autonomy. And Anthropic CEO Dario Amodei said recently that DeepSeek performed "the worst" on a bioweapons safety test.