I recently read an article about the Civil War battle at Antietam. The battle took place on September 17, 1862, and has become known as the single bloodiest day in U.S. history, with 23,000 total casualties, surpassing even the total Allied casualties on D-Day at Normandy.

I believed in and took to heart every word of that article. Why? Partly because my belief was rooted in the knee-jerk reaction most of us grew up with: trusting everything we read in print. But as I thought about it, I realized I also placed a lot of trust in the well-known and accredited professor of history who wrote it.

Even though what I read was printed on paper, it no doubt exists digitally in some data center as well. As such, it becomes food for any generative AI program asked to create an article on anything having to do with the Civil War, D-Day, war casualties, General Robert E. Lee, and so on. The AI program can digest this article in any way it wants.

Right now, there is no way to control how accurately a new article uses the information in the original. It could be modified to support a different version of the battle. It could become just a tiny piece of a larger article on a different aspect of the Civil War, or even of WWII.

In a matter of microseconds, the new article becomes food for the next AI request. Realistically it will take some time, but the technology is there: if thousands of humans requested articles on the Civil War, AI could rewrite the entire history of the war in a matter of seconds.

Ultimately, the body of knowledge written by generative AI will completely overwhelm the body of knowledge written by humans. Then, when a user asks AI to write an article on something, for better or for worse, AI will formulate that article almost entirely from information already created by AI, because AI will have to trust itself.

The invention of generative AI is much like the invention of gunpowder. There is no going back. And like gunpowder, AI can be used to better the human existence and at the same time make it worse.

If you are an optimist, you will believe that as AI constantly rewrites its own output, it will get better and better, and more and more accurate. I tend not to lean that way. However AI plays out, I think there are two constants we can depend on.

I believe that the knee-jerk reaction of trusting what we read will be gone forever. As future generations grow up with AI, they will know that everything they read is generated by a computer program and should be treated with a healthy dose of skepticism.

I also believe that computers will never be programmed to exhibit intuition, and that what Albert Einstein once said, "The only real valuable thing is intuition," will hold true until the end of time.