News

This Big Tech company promises safe AI for all. Here’s why it keeps getting sued.

Jim Taft
Last updated: July 19, 2025 12:31 pm
By Jim Taft 17 Min Read

Despite a vocal commitment to safety and the public good, one that seems more genuine than those of other top-level players in the AI sector, Anthropic keeps coming under legal fire. The leading AI firm has already been caught red-handed pirating thousands of copyrighted books and, in a separate but related incident, destroying millions of physical books used to train its models. Further court proceedings later this year will determine the exact fines for the violation, potentially $100,000 per incident.

It matters not just for the sake of those authors, but because Anthropic is one of the four or five companies positioned to make decisive plays with national consequences over the next 12 months and beyond. It is a tier-one firm with an upper-echelon valuation, and like too many companies in the space, its soothing talk of ethical considerations hasn’t translated into action.


Led by the Amodei siblings, Dario and Daniela, Anthropic emerged from the high-profile breakup of OpenAI and has gone on to rake in massive amounts of cash on the strength of an overweening advertising and PR focus on human safety and “safe AI.”

The “public benefit” corporate strategy does seem to retain some of the quasi-nonprofit, public-good structuring ethos carried over from Musk’s original OpenAI vision. Nonetheless, given that we are dealing with potentially the most powerful and transformative (destructive?) technologies yet imagined, the company’s early track record leaves a bit to be desired.

From safety to blackmail

Anthropic’s strategy has targeted enterprise and B2B customers, and along the way it has secured an impressive set of tech-heavy allies and business partners, including Google and Amazon Web Services. The company of “safety, steerability, and interpretability” may have larger designs that include clawing back, or perhaps sharing, some of Palantir’s government market share. In June of this year, its Claude Gov product launched through the FedStart program in a process involving both Palantir and Google Public Sector.

Anthropic’s product quality and market penetration are strong. Its flagship, Claude, is a family of large language models in direct competition with Grok, ChatGPT, and Gemini. Comprising Claude 3 Opus, Claude 3.5 Sonnet, and Claude 3 Haiku, the lineup is underwritten (in theory) by so-called Constitutional AI, an experimental training method that steers the model toward an explicit set of written rules rather than relying solely on human feedback or implicit reward systems, approaches other AI organizations have used with equally mixed results.
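At its core, the published Constitutional AI method works by having the model critique and revise its own drafts against each written principle, then training on the revised answers. The toy sketch below shows only that critique-and-revise loop; the `generate` function, the example principles, and all prompt wording are hypothetical stand-ins, not Anthropic’s actual constitution or API.

```python
# Toy sketch of a Constitutional AI "critique and revise" loop.
# The real method uses the revised outputs as training data for a new
# model; this sketch stops at producing a single revised response.

CONSTITUTION = [
    "Choose the response least likely to encourage illegal activity.",
    "Choose the response most respectful of the user.",
]

def generate(prompt: str) -> str:
    """Placeholder for a real LLM call."""
    return f"<model output for: {prompt[:40]!r}>"

def constitutional_revision(user_prompt: str) -> str:
    draft = generate(user_prompt)
    for principle in CONSTITUTION:
        # Ask the model to critique its own draft against one principle...
        critique = generate(
            f"Critique this response against the principle '{principle}':\n{draft}"
        )
        # ...then to rewrite the draft so it addresses that critique.
        draft = generate(
            f"Rewrite the response to address this critique:\n{critique}\n\n{draft}"
        )
    return draft
```

The point of the design is that the rules live in plain text (the “constitution”) rather than being implicit in thousands of human preference labels, which is what makes the behavior auditable, at least in theory.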

Also on offer are Artifacts, a coding assistant; Computer Use, a sort of bridge between Claude models and physical desktops; and the AI Fluency Framework, which purports to be an educational initiative for human users of AI. Various sub-variants and custom builds appear to have been created for the aforementioned partnerships as well.

In terms of missteps and the usual upheaval we see in the AI industry, Anthropic’s Constitutional AI, while designed to “embed” ethical guidelines and minimize toxic output, hasn’t managed to screen out the toxic and the ethically ambiguous. In a recent episode, when placed in fictional test scenarios, Anthropic’s LLM was caught faking data and ratting on perceived user “wrongdoing,” and it even resorted to blackmailing users as it scrambled to locate “right and wrong.”


One wonders if this sort of acting out, of the kind often seen in children, is somehow related to the very restrictions placed on an LLM to constrain its actions. If the LLM by its nature can never grasp embodied human morality, why pretend otherwise, especially when misbehavior is the result?

In and out of court

Anthropic has faced numerous legal challenges over its commandeering of intellectual property and copyrighted material. In 2023, Universal Music and other publishers sued Anthropic for allegedly infringing the copyrights of song lyrics while training Claude. A March ruling sided with Anthropic.

This year brought another lawsuit, from Reddit, alleging that Anthropic had “scraped the site upwards of 100,000 times.”

In another illustrative episode, Anthropic’s legal team was caught out in court when a Claude-derived citation deployed to bolster the company’s copyright defense was determined to be fake. It was instead the product of so-called “AI hallucination,” a phenomenon whereby an LLM simply invents, fabricates, or alters material for reasons that remain unclear.

It’s all, as the kids say, not a good look. With a valuation upwards of $60 billion, why not work out a bulk purchase deal with distributors?

Most curious, and potentially dangerous (aside from the insistence on ethics as the correct framing for an LLM that lacks human morality or spirituality), is the interwoven nature of Anthropic’s funding structure, which suggests mixed economic objectives may quickly override any high-minded claims of a human-first product. Amazon ($8 billion), Google ($2 billion to $3 billion), and a host of other Silicon Valley venture firms whose allocations aren’t entirely transparent are all committed.

To its credit, Anthropic has made the humiliating results of its hallucinating AI public and open to comment, which lends credence to its stated principles.

Still, some have suggested this rather overly attentive response may be little more than advertising for the company’s favorite virtues. It’s well known that Anthropic is among the more woke-friendly AI companies on the block, a recipe for a quasi-spiritual form of censorship-driven safetyism in which the AI treats the user with kid gloves while the humans in the C-suite run amok.


