Smart Bots

AI is making scam bots smarter, smart enough to fool even some of the more savvy among us.

Gone are the poorly worded messages that easily tipped off authorities as well as the grammar police. The bad guys are now better writers and more convincing conversationalists who can hold a conversation without revealing they are a bot, say the bank and tech investigators who spend their days tracking the latest schemes.

And

AI has enabled scammers to target much larger groups and use more personal information to convince you the scam is real.
Fraud-prevention officials say these tactics are often harder to spot because they bypass traditional indicators of scams, such as malicious links and poor wording and grammar. Criminals today are faking driver’s licenses and other identification in an attempt to open new bank accounts and adding computer-generated faces and graphics to pass identity-verification processes. All of these methods are hard to stave off, say the officials.

That much is on the IT folks at the banks and other institutions, and I’m unsympathetic to them. Yes, this sort of thing is an arms race, and the thieves usually have the initiative of the first move. However, harder, and even hard, still mean possible. There’s no excuse for being slow to respond, and by slow, I mean waiting as long as a day or two to advise the victim and to correct the problem.

Even the late Muammar Gaddafi’s widow is becoming a better writer as she appeals to each of us.

However, the victim and potential victim—you and I—have certain critical responsibilities, too. One of those is to check our accounts frequently for unusual, unexpected, or unknown charges and expenditures. That means checking much more often than the monthly account statement: at least a few times per week. Sure, that takes a bit of time, but what’s the cost of letting a bogus charge go undetected for that long?

There’s a proactive step we can take, too. It will take longer to bring to fruition because it involves our legal system, but it can have broader and more permanent outcomes. The bad guys are now…more convincing conversationalists. Since they’re willing to talk, ask the conversationalist straight out whether it’s a bot or an AI-generated conversationalist. If the answer comes back “Yes,” you can continue, or not, with a better understanding of the risk you’re taking.

If the answer is to hang up the call or otherwise quit the conversation, you’ve gotten an even clearer answer.

If, though, the answer comes back “No,” and something untoward happens to you through that conversation, you now have the programmer who wrote the bot, and likely his employer, too (whether an otherwise legitimate company or a dark net entity), on the hook for any number of frauds, including false advertising and theft. Convicting the programmer and burning the employer will take that longer time, but the outcomes are more permanent.

In the end, though, an old and tritely phrased aphorism is absolutely true: if the arrangement on offer seems too good to be true, it isn’t true.

It Doesn’t Get Any Clearer

A portion of oral argument in Moms for Liberty and Young America’s Foundation, et al. v US Department of Education was relayed to Southeastern Legal Foundation Executive Director Kim Hermann while she was at a Heritage Foundation conference centered on addressing the Biden administration’s general penchant for putting boys into girls’ locker rooms and sports and for prioritiz[ing] gender identity over sex in a broad range of milieus. That portion:

The judge allegedly asked a Justice Department lawyer to explain what expertise the Department of Education has on human biology and sexuality that justifies judicial deference to the feds’ new interpretation of “sex.” The DOJ lawyer replied, “I guess I’m not sure,” according to Hermann’s colleagues.

What a sweeping indictment of Chevron Deference by the Biden administration defendants in the case.