The vast majority of Americans believe that violent crime is rising in this country. This perception holds across parties - 92% of Republicans, 78% of Independents and 58% of Democrats - according to recent polls.

It's easy to see why. Turn on the news and you'll find it filled with reports of violent crimes, along with suspects interviewed, arrested, arraigned, tried and sentenced. A single violent crime can dominate the headlines for weeks. (Just look at the amount of coverage a single high-profile abduction case can generate.) However, a single incident - or a string of incidents - doesn't necessarily make a trend.

So let's test this widespread belief using AI. The first step is to define the claim clearly. For example: "Is the long-term trend in violent crime in the U.S. getting better or worse?"

Perplexity answered: "Violent crime in the U.S. has gotten better over the long term: rates are dramatically lower than in the early 1990s, despite a pandemic-era spike and recent media focus on crime." It then cited several sources documenting declining crime rates and concluded, "Using the Bureau of Justice Statistics' victimization survey, violent crime rates are down about 70% from 1993, an even steeper long-term drop than shown in police-reported data."

Knowing that any AI platform can make mistakes, I then asked ChatGPT to fact-check this. ChatGPT told me, "Short answer: Your summary is largely accurate. The main numbers and overall conclusion match widely cited U.S. crime data." I fact-checked it again with Claude, Copilot, Grok and Gemini. The other platforms gave similar answers. None of them contradicted the conclusion.

AI is extremely good at quickly locating the institutions that collect or publish authoritative data - government agencies, academic researchers and major statistical organizations.
In practice, the optimal process looks something like this: define a precise claim, challenge the conclusion with opposing evidence (if necessary), then use competing AI platforms to verify the answer. Used this way, AI becomes a fast and powerful accelerator of critical thinking rather than a substitute for it.

As I mentioned previously, if you really want to know the truth about something - to the extent it can be known - you must be not just willing but delighted to learn that a belief of yours is mistaken. AI platforms offer all of us that opportunity. (The AI platforms I mentioned are all free to download and free to use.) You can ask whatever you want - and follow up by asking for citations, further evidence or more clarification.

However, when some AI users don't get the answer they want, they'll point to some respected authority or highly praised book or podcast that argues something different. And maybe that authority is right and the world's leading AI platforms are all in error. But that's not usually the case. What is more likely is that users would prefer to hang on to mistaken beliefs they find satisfying - particularly ones that their friends, or members of their tribe, subscribe to.

If so, don't push. AI is a good way to stress-test your own thinking. It's not always a great way to persuade others. Besides, if you make it your mission to correct the world's mistaken opinions, I can tell you one thing for certain: You've got your work cut out for you.

Good investing,

Alex

P.S. If you're as weirdly obsessed with verification as I am, I highly recommend Michael Shermer's new book, Truth: What It Is, How to Find It, and Why It Still Matters.