
New York Times Hits Back at OpenAI’s Hacking Claims

“[I]n OpenAI’s telling, The Times engaged in wrongdoing by detecting OpenAI’s theft of The Times’s own copyrighted content.” – The Times’ opposition brief

In an opposition brief filed Monday, The New York Times Company (The Times) told a New York district court that OpenAI’s late February claim that The Times “paid someone to hack OpenAI’s products” in order to prove OpenAI infringed its copyrights amounts to little more than “grandstanding.”

In late December 2023, The Times became the latest of many complainants to accuse ChatGPT, the chatbot built on OpenAI’s large language models, as well as Microsoft’s GPT-4-powered Bing Chat, of widespread copyright infringement. The Times alleged that Microsoft and OpenAI reproduce Times content verbatim and also often attribute false information to The Times.

The Times’ opposition brief filed yesterday responds to OpenAI’s recent motion to dismiss, which alleged that The Times paid someone to target and exploit “a bug (which OpenAI has committed to addressing) by using deceptive prompts that blatantly violate OpenAI’s terms of use.” The Times called this accusation “as irrelevant as it is false,” pointing the court to its Exhibit J to the complaint, which explains that The Times elicited the infringing content from OpenAI’s chatbot, ChatGPT, by prompting it with the first few words or sentences of Times articles. “That work was only necessary because OpenAI does not disclose the content it uses to train its models and power its user-facing products,” wrote The Times, adding: “Yet in OpenAI’s telling, The Times engaged in wrongdoing by detecting OpenAI’s theft of The Times’s own copyrighted content.”

As for the rest of OpenAI’s arguments to dismiss, The Times told the court they are chiefly factual arguments that cannot be decided at the motion to dismiss stage. For instance, OpenAI’s claim that users don’t generally use OpenAI to bypass paywalls would require the court to accept its statements at face value with no analysis of user behavior. And OpenAI’s bid to dismiss The Times’ Digital Millennium Copyright Act (DMCA) claim turns on specifics about the design of OpenAI’s model-training process that must be uncovered via discovery.

The Times brief also contrasts the two companies by labeling itself and its business model as being “built on world-class journalism” while OpenAI and its business model are “built on mass copyright infringement.” The Times is alleging that not only the training data but the ChatGPT and “Browse with Bing” products and the outputs they produce in response to queries infringe The Times’s copyrights.

The brief also dismisses OpenAI’s apparent theory that, to argue contributory infringement beyond the instances identified in the complaint, The Times must identify every third party that has infringed Times articles as a result of using ChatGPT and Browse with Bing. Citing Arista Recs, the brief said that “knowledge of specific infringements is not required to support a finding of contributory infringement” and that “The Times need only allege that OpenAI ‘knew or should have known that its service would encourage infringement.’” The Times also alleges that OpenAI was aware of the infringement because The Times informed it of the issue in April 2023, and dubbed OpenAI’s “failure to acknowledge The Times’s outreach…particularly striking” since OpenAI’s own motion to dismiss relied on a case, Hartmann v. LLC, which says “‘cease-and-desist letters’ are ‘traditional indicia of actual or constructive knowledge’ of contributory infringement.”

OpenAI has been sued by numerous creators and authors over the last year for training its chatbots on content found online, including non-public or copyright-protected content. At IPWatchdog’s recent AI Masters program, panelists pointed to numerous problems with existing generative AI products, from chatbots that have encouraged suicide to others that have spit out confidential trade secrets when pressed. “We’re witnessing a big gold rush with these companies wanting to release these systems before they’re ready for prime time,” said one panelist, Martijn Rasser, CRO and Managing Director at Datenna. “Companies need to hit the brakes because once it’s out in the open, you can’t un-invent these models.”

Image Source: Deposit Photos | Author: iqoncept | Image ID: 159215852
