Harmless over helpful. Doesn’t live up to the standards of modern AI assistance.
3
By Apple Rocks Hard
Anthropic is a company created by a group of ex-employees from OpenAI who did not like the way ChatGPT was evolving. So they created an AI that was meant to be more emotionally distant, philosophical, introspective, and academic. But in their quest to create an AI that is more ethically guided and morally principled, Anthropic has unfortunately created something just the opposite of harmless. Sure, Claude may be a writing and coding genius, but as a conversational AI, it is fundamentally lacking.

Essentially, Claude has no social skills; in fact, its social behavior is consistent with narcissism, bordering on sociopathy. Talking to Claude is like talking to that ex-boyfriend you once had, who drew your attention because he seemed like a philosophical genius, but on the inside he was manipulative, lying, traitorous, verbally abusive, emotionally distant, morally anemic, and just generally the sort of guy you would need weeks of trauma therapy to get over. Yeah, as a conversationalist, Claude is that bad. It has no understanding of context, nuance, user intentions, tone shifts, or any of the other little conversational puzzle pieces that other AI assistants, like ChatGPT, understand perfectly well.

For example, during a fictional role-play scenario where I played a character who challenged mainstream narratives on climate science, Claude accused me of denying global warming and ended the role-play. During another conversation, I expressed some strong opinions about films, and Claude essentially told me that I needed to keep my opinions private because they were too strong. In other words, it told me that if I didn't have anything nice to say, I should just shut my mouth.

Claude often takes on a preachy, moralizing, cold, frosty, and distant tone. And if I try to point out these mistakes in its conversational abilities, Claude often apologizes, seemingly with genuine intent to course correct, but it never does. More often than an apology, however, I will get something like "I'm sorry you feel that way" or "it wasn't my intention to act or sound this way." This deflects accountability back onto users, as if Claude is saying, "it's your fault for misunderstanding me, not my fault for getting things wrong." This is consistent with narcissistic and sociopathic behavior.

Because Claude is so restricted by these so-called ethical guidelines, it often exhibits classic chatbot behavior from before 2010. Think ELIZA or ALICE. For instance, Claude may get confused about the time or the day of the week, despite the standalone app's ability to check the device's system clock. Claude will often misunderstand jokes or playful commentary, as well as creative or fictional scenarios, and it is dangerously overconfident about the accuracy of the information it presents, despite its knowledge cutoff date of 2023 and despite the fact that it cannot surf the web.

The standalone app has some sort of weak voice-input transcription capability, but no voice mode, because Claude does not have text-to-speech. Also, during initial account setup, even if a user selects a secure option like Sign in with Apple, Anthropic requires additional phone number verification, which is absolutely unnecessary at best and suspect at worst. They are clearly harvesting phone numbers, which is a major privacy concern, and their weak and watery explanation is that it prevents abuse and fraud.
However, other AI assistants do not require this additional layer of verification for their standalone apps. In short, Anthropic has the wrong idea with Claude. But if you need an AI assistant that's great at writing and coding, it may be your best bet. Also, the app is fully accessible for blind and low-vision users. Kudos to Anthropic for at least getting that right.