AI Isn’t All That I

And it’s unlikely to be so anytime soon.

A human brain contains 100 billion neurons and over 100 trillion synaptic connections. That’s a thousand, or more, connections per neuron. A human brain’s cortex alone contains approximately 20 billion neocortical neurons, with an average of 7,000 synaptic connections each (primary source). The cerebral cortex has about 0.15 quadrillion synapses—or about a trillion synapses per cubic centimeter of cortex. What’s more, the brain uses all of 20 watts of power to function fully. That works out to a vanishingly tiny amount of wattage per synapse: roughly 2×10⁻¹³ watts, a 2 preceded by twelve zeros after the decimal point.

Intel’s latest AI-supportive chip suite (as of April 2024, anyway) supports up to 1.15 billion neurons and 128 billion synapses distributed over 140,544 neuromorphic processing cores. That’s a bit over 110 “synapses” per “neuron.” The setup uses 2,600 watts at max function. That works out to roughly 2×10⁻⁸ watts per “synapse”—a 2 preceded by seven zeros after the decimal point. Which is five orders of magnitude more power drain per “synapse” for the chip than for our brain.
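The arithmetic above can be checked in a few lines, using only the figures quoted in the text (all of which are rough estimates):

```python
import math

# Figures quoted above; treat them as order-of-magnitude estimates.
brain_watts = 20.0
brain_synapses = 100e12          # ~100 trillion synaptic connections

chip_watts = 2_600.0             # max draw of the Intel system
chip_synapses = 128e9            # 128 billion "synapses"

brain_w_per_synapse = brain_watts / brain_synapses   # ~2e-13 W
chip_w_per_synapse = chip_watts / chip_synapses      # ~2e-8 W

# How many orders of magnitude apart are the two figures?
orders_of_magnitude = math.log10(chip_w_per_synapse / brain_w_per_synapse)

print(f"brain: {brain_w_per_synapse:.1e} W/synapse")
print(f"chip:  {chip_w_per_synapse:.1e} W/synapse")
print(f"gap:   ~{orders_of_magnitude:.0f} orders of magnitude")
```

Running this confirms the roughly five-orders-of-magnitude gap the text cites.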

Artificial Intelligence isn’t all that. It may well get there, but not tomorrow.

An Overblown Concern

Citrini Research wrote a report that’s associated with Monday’s stock market drop. The report centered on the risk of heavy white collar job losses from AI’s alleged ability to do white collar work and completely replace those white collar workers.

For the entirety of modern economic history, human intelligence has been the scarce input. We are now experiencing the unwind of that premium.

And so on.

Not so much, though. It took more mental acumen to run the steam drill than John Henry needed to run his hammer. It takes more mental acumen to work a modern auto production line, with all of that automated equipment, than it did—and does—to work an artisan, unautomated auto production line. The move extends into the white collar milieu, also. It begins with requiring more mental acumen to check AI’s work than it does to work the spreadsheets or do the research oneself. It takes a great deal of mental acumen to ask the right questions and then give AI the tasks of answering them—and then checking AI’s responses. Creativity is something AI cannot do.

AI is good at the artificial part; it’ll be quite some time before AI gets good at the intelligence part. Alan Turing once said that when a computer can answer certain kinds of questions, it will be impossible to distinguish from a human. That doesn’t prove computers’—AI’s—superiority, though. Answering questions isn’t the same as asking them.

Why I’ll Never Do Anything in the Cloud

For as long as I can, nothing of my interactions on the Internet, nothing of my computer storage or backups, will be done in the cloud. The same stricture should, frankly, apply to businesses. Businesses should construct their own cloud—there’s no doubt clouds are deucedly convenient and, when working properly, highly efficient—and isolate that cloud from the Internet.

Here’s an example of why, brought to us by Amazon.

A glitch with an obscure Amazon database disrupted life for millions of people across the US as core internet services failed to function for an array of companies.
Alexa devices couldn’t hear. Corporate Slack messages wouldn’t post. Students couldn’t turn in assignments or access materials from courses. Financial trades were impossible on certain platforms. Users of Zoom, Venmo, Instacart, and a host of other services faced prolonged outages that rippled through homes and businesses.

Some of that is poor design. Alexa needs connectivity with the Internet? To execute an order to buy this, or to give the weather forecast, sure. But to set a timer for cooking? To set a reminder? An alarm clock? Really?

Back to the main subject. That minor database? A minor update affecting it scotched the whole affair.

A minor update to what is called the Domain Name System, or DNS—the kind of software tweak that happens millions of times a day on the internet—sent the well-oiled machine that underpins the modern web careening toward a crash.
DNS acts as a kind of telephone directory for the internet, instructing machines on how to find each other. The faulty update gave the wrong information for DynamoDB, an Amazon Web Services, or AWS, product that has become one of the world’s most important databases.
Suddenly, machines on the East Coast that tried to process trillions of requests were getting the internet’s equivalent of a wrong number.
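The telephone-directory analogy can be made concrete with a toy resolver. This is a minimal sketch, not how AWS’s DNS actually works; the hostname and addresses are made up:

```python
# Toy illustration of DNS as a telephone directory: hostnames map to
# IP addresses, and a bad record sends every caller to the wrong place.
# The hostname and addresses below are hypothetical.

dns_table = {
    "dynamodb.example.internal": "10.0.0.5",   # correct record
}

def resolve(hostname: str) -> str:
    """Look up a hostname, as a stub resolver would."""
    try:
        return dns_table[hostname]
    except KeyError:
        raise LookupError(f"no record for {hostname}")

# Normal operation: clients reach the database.
assert resolve("dynamodb.example.internal") == "10.0.0.5"

# A faulty update replaces the record with a wrong address.
dns_table["dynamodb.example.internal"] = "10.9.9.9"

# Every client now gets the internet's equivalent of a wrong number,
# even though the database itself may be running fine.
print(resolve("dynamodb.example.internal"))  # -> 10.9.9.9
```

The point of the sketch: nothing is wrong with the database machines themselves; one bad directory entry is enough to make them unreachable by name.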

Think about the effects of time-sensitive moves being blocked from execution for hours and hours, often long enough to prevent the move from happening until it’s far too late and, by the delay, far more costly. That happens in the financial world—think in extremis overnight repos needing to be unwound….

That kind of foul-up could have been worse: with computers unable to find each other, the computers hosting the DynamoDB database might have been unreachable by the humans trying, from other computers, to correct the glitch in that database. That didn’t happen this time, but next time? Or it might have this time, except that Amazon had backup systems it could switch to while it took the offending systems offline for correction.

Regardless, it took the better part of a day for Amazon to get its minor update of a minor database corrected and for the effects of the repair to ripple across the Internet.

The article’s subheadline summarizes the situation and explains why I avoid the cloud as much as possible:

Outage offers reminder of fragility of global internet connectivity

Corporate Cybersecurity Training

It isn’t very effective, apparently.

To measure the effectiveness of different methods of cybersecurity training, the authors [of a study] divided employees into four groups. After each attack, each group received a different training method: one received generic tips about avoiding phishing attacks, a second received an interactive Q&A on cybersecurity, a third was informed about the specific methods used in the most recent attack, and the fourth received an interactive Q&A that also included details about the most recent attack. A fifth group was also created, and the employees in that group received no training.
The authors found that on average, employees who received training of any sort had only a 1.7% lower failure rate than employees who had no training.

The authors’ solution?

The study’s takeaway for organizations, says [lead author Grant] Ho, is to rely on measures other than training, like phishing-detection software that automatically eliminates the need for employees to detect phishing attacks.

Software aids are important in this milieu, but the weak link remains the human. Software aids by themselves are insufficient.

There needs to be more to the training than just a slide presentation and some lectures, or in the present case, “interactive” Q&As. The training sessions need to be plussed up, a lot, but that can’t be the end of it. Schools and responsible companies run fire drills that run to completion, with evacuation of the building and head counts and roll calls while the evacuees are gathered at their assigned evacuation points. So it must be with cybersecurity training. Simulated cyberattacks (phishing, social engineering, and the like) should be run against a rotating collection of employees to test their training and their responses to the attacks. Those simulations should be run some weeks after the training and more frequently than those fire drills, and they should not use IT-ginned-up attacks, either; they should use serious real-world attacks, altered only to target the collection of employees being tested.
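The rotating-collection idea can be sketched in a few lines. This is a hypothetical illustration, not any vendor’s tooling; the employee names, cohort fraction, and function name are all made up:

```python
import random

def pick_drill_cohort(employees, fraction=0.1, seed=None):
    """Return a random subset of employees for this round's simulated attack."""
    rng = random.Random(seed)
    k = max(1, int(len(employees) * fraction))
    return rng.sample(employees, k)

# Hypothetical workforce of 200 employees.
employees = [f"emp{i:03d}" for i in range(200)]
tested = set()

# Each drill tests a different random slice, so everyone gets exercised
# over time without testing the whole company at once.
round_no = 0
while len(tested) < len(employees):
    cohort = pick_drill_cohort(employees, fraction=0.1, seed=round_no)
    tested.update(cohort)
    round_no += 1

print(f"full coverage after {round_no} rounds")
```

Sampling with a fresh seed per round keeps each drill unpredictable to the people being tested while the loop guarantees the rotation eventually covers the entire workforce.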

Beyond that, there needs to be teeth attached to the training and to employees’ failure to take the training seriously.

There are three outcomes from this. One is an empirical assessment of the quality of training, its durability, and identification of weaknesses in the training program, which then can be corrected (not given up on). A second results from those teeth: once management is satisfied with the training quality, employees still falling for the attacks should be terminated. They’re too great a risk to the company.

The third outcome is a very great increase in the cyber safety of the company and of its employees (with a follow-on: those employees will be better able to maintain security in their homes’ cyber environment). The added training and testing will incur costs to the company, but the risk of the far greater cost of a cyber breach—both direct and indirect through liability—is too great to ignore.

No, It’s Not

On the matter of an organization’s cybersecurity responsibilities, Kurt Knutsson opened with this in a Fox News article:

When a hospital or nonprofit falls victim to a cyberattack, it’s hard to place blame. Cybersecurity isn’t their strength, and many lack the budget for a dedicated security team, let alone a chief technology officer.

It’s completely straightforward to fix blame in such a case, as in all other cases. Knutsson identified the culprits even as he asserted the difficulty of identifying them. The lack of sufficient budget and (not or) the lack of security-capable IT personnel are directly the fault of the hospital’s or nonprofit’s management team, which refused to provide the budget necessary for proper security against cyberattacks.

Especially for hospitals, which maintain so much personal and personally identifying medical data, such conscious decisions to not perform are inexcusable. Cybersecurity needn’t be an organization’s strength, but cybersecurity—and the personnel and resources needed to achieve and maintain it—most assuredly need to be a serious undertaking.