What the Bible Says about Artificial Intelligence


For years now, even as headlines about the development of AI have become more frequent and more dire, I never worried about it much, because I couldn't think of anything in scripture that sounded much like a superintelligent machine. I'd read the end of the book (Revelation), I knew how it ended, and it wasn't in a robot apocalypse... so I figured all the fears surrounding that possibility must be much ado about nothing. (I did write a fictional trilogy for young adults back in 2017 about how I imagined a near-miss robot apocalypse might look, though, because I found the topic fascinating enough to research at the time. It's called the "Uncanny Valley" trilogy, where the "uncanny valley" refers to the "creepy" factor as a synthetic humanoid creature approaches human likeness.)

When I finished the trilogy, I more or less forgot about advancing AI, until some of the later iterations of ChatGPT and similar Large Language Models (LLMs). Full disclosure: I've never used any LLMs myself, mostly because (last I checked) you had to create an account with your email address before you could start asking questions. (In the third book of my series, the superintelligent bot Jaguar kept track of everyone via facial recognition cameras, recording literally everything they did in enormous data processing centers across the globe that synced with one another many times per day. Though at that point I doubt it would make any difference, I'd rather not voluntarily give Jaguar's real-life analog any data on me if I can help it!)

The recent release of ChatGPT Omni in particular (whose "omni" at least hints at "omniscient"--!!) gave me pause, though, and I had to stop and ask myself why the idea that it could be approaching actual Artificial General Intelligence (AGI) made the hairs on the back of my neck stand up. I recently read a book called "Deep Medicine" by Eric Topol on the integration of AI into the medical field, which helped allay some potential concerns--that book contended that AGI would likely never be realized, largely because AGI inherently requires experience in the real world, and a robot can never have lived experiences in the way that humans can. It painted a mostly rosy picture of narrow (specialized) AI engaging in pattern recognition (reading radiology images or recognizing pathology samples or dermatological lesions, for instance), and thus vastly improving physicians' diagnostic capabilities. Other uses might include parsing a given individual's years of medical records and offering a synopsis and recommendations, or consolidating PubMed studies and offering relevant suggestions. Topol did not seem to think that AI would ever replace the doctor, though. Rather, he contended, at the rate that data is currently exploding, doctors are drowning in the attempt to document and keep up with it all, and empathic patient care suffers as a result. AI, he argues, will actually give the doctor time to spend with the patient again, to make judgment calls with a summary of all the data at his fingertips, and to put it together into an integrated whole with his uniquely human common sense.

Synthetic Empathy and Emotions?

But, "Deep Medicine" was written in 2019, which (in the world of AI) is already potentially obsolete. I'm told that Chat GPT Omni is better than most humans at anything involving either logic or creativity, and it does a terrific approximation of empathy, too. Even "Deep Medicine" cited statistics to suggest that most humans would prefer a machine for a therapist than a person (!!), largely due to the fear that the human might judge them for some of their most secret or shameful thoughts or feelings. And if the machine makes you feel like it understands you, does it really matter whether its empathy is "real" or not?

What does "real" empathy mean, anyway? In "Uncanny Valley," my main character, as a teenager, inherited a "companion bot" who was programmed with mirror neurons (the seat of empathy in the human brain.) In the wake of her father's death, she came to regard her companion bot as her best friend. It was only as she got older that she started to ask questions like whether its 'love' for her was genuine, if it was programmed. This is essentially the theological argument for free will, too. Could God have made a world without sin? Sure, but in order to do it, we'd all have to be automatons--programmed to do His will, programmed to love Him and to love one another. Would there be any value in the love of a creature who could not do anything else? (The Calvinists might say that's the way the world actually is, for those who are predestined, but everyone else would vehemently disagree.) It certainly seems that God thought it was worth all the misery He endured since creation, for the chance that some of us might freely choose Him. I daresay that same logic is self-evident to all of us. Freedom is an inherent good--possibly the highest good.

So, back to AI: real empathy requires not just real emotion, but memories of one's own real emotions, so that we can truly imagine ourselves in another person's shoes. How can a robot, without its own lived memories, experience real empathy? Can it even experience real emotion? It might have goals or motives that can be programmed, but emotion at minimum requires biochemistry and a nervous system, at least as we understand it. We also know--from research on brain lesions, from psychiatric and recreational drugs, and from experience with those suffering from neurodegenerative conditions--that mood, affect, and personality can change drastically with physiologic tampering.

Does it follow that emotions are 'mere' biochemistry, though? This is at least part of the age-old question of materialism versus vitalism, or (to put it another way) reductionism versus holism. Modern medicine is inherently materialistic, believing that the entirety of a living entity can be explained by its physical makeup, and reductionistic, believing that one can reduce the 'whole' of the living system to the sum of its parts. Vitalism, on the other hand, argues that there is something else, something outside the physical body of the creature, that animates it and gives it life. At the moment just before death and just after, all the same biochemical machinery exists... but anyone who has seen the death of a loved one can attest that the body doesn't look the same. It becomes almost like clay. Some key essence is missing. I recently read "The Rainbow and the Worm" by Mae-Wan Ho, which described fascinating experiments on living worms viewed under the microscope. The structured water in the living tissue of the worm exhibited coherence, refracting visible light in a beautiful rainbow pattern. At the moment of death, though, the coherence vanished, and the rainbow was gone--even though all of the same physical components remained. The change is immaterial; the shift between death and life is inherently energetic. There was an animus, a vital force--qi, as Chinese Medicine would call it, or prana, as Ayurvedic medicine would describe it, or (as we're now discovering in alternative Western medicine) voltage carried through this structured water via our collagen. That hydrated collagen appears to function in our bodies very much like a semiconductor, animating our tissues with electrons, the literal energy of life. At the moment of death, it's there, and then it's not--like someone pulled the plug. What's left is only the shell of the machine, the hardware.

But where is that plug, such that it can be connected and then, abruptly, not? The materialist, who believes that everything should be explainable on the physical level, can have no answer. The Bible tells us, though, that we are body, soul, and spirit (1 Thess 5:23)--which inherently makes a distinction between body and soul (implying that the soul is not a mere product of the chemistry of the body). The spirit is what was dead without Jesus, and what gets born again when we are saved, and it's perfect, identical with Jesus' spirit (2 Cor. 5:17, Eph 4:24). It's God's "seal" on us, vacuum-packed as it were, so that no sin can contaminate it. It’s the down-payment, a promise that complete and total restoration is coming (Eph 1:13-14).

But there's no physical outlet connecting the spirit and the body; the connection between them is the soul. With our souls, we can see what's ours in the Spirit through scripture, and scripture can train our souls to conform more and more to the spirit (Romans 12:2, Phil 2:12-13). No one would ever argue that a machine would have a spirit, obviously, but the materialists wouldn’t believe there is such a thing, anyway. What about the soul, though?

What is a soul, anyway? Can it be explained entirely through materialistic means?
Before God made Adam, He explicitly stated that He intended to make man after His own image (Gen 1:26-27). God is spirit (John 4:24), though, so the resemblance can't be physical, per se, at least not exclusively or even primarily. After forming his body, God breathed into him the breath of life (Genesis 2:7)--the same thing Jesus did to the disciples after His resurrection when He said "Receive the Holy Spirit" (John 20:22). So it must be in our spirits that we resemble God. Adam and Eve died spiritually when they sinned (Genesis 3:3), but something continued to animate their bodies for centuries afterward (Adam lived 930 years in all, Gen 5:5). This is the non-corporeal part of us that gets "unplugged" at physical death. Since it can be neither body nor spirit, it must be the soul.

Andrew Wommack defines the soul as the mind, will, and emotions. I can't think of a single scripture that defines the soul this way; I think it's just an extrapolation, based on what's otherwise unaccounted for. But in our mind, will, and emotions, even before redemption, mankind continued to reflect God's image, in that he continued to possess the ability to reason, to choose, to create, to love, and to discern right from wrong.

The materialists would argue that emotion, like everything else, must have its root purely in the physical realm. Yet they do acknowledge that because there are so many possible emotional states, and relatively few physiologic expressions of them, many emotions necessarily share a physiologic expression. It's up to our minds to translate the meaning of a physiologic state, based on the context. In "How Emotions are Made," author Lisa Feldman Barrett gives a memorable example of this: once, a colleague to whom she didn't think she was particularly attracted asked her for a date. She went, noticed strange sensations in her gut that felt a little like "butterflies," and assumed during the date that perhaps she was attracted to him after all... only to learn later that she was actually in the early stages of gastroenteritis!

This example illustrates how the biochemistry and physiologic expressions of emotion are merely the blunt downstream instruments that translate an emotion from the non-corporeal soul into physical perception--and in some cases, as in that one, the emotional perception might originate entirely from the body. This also might be why some people (children especially) can mistake hunger or fatigue for irritability, or why erratic blood sugar in uncontrolled diabetics can manifest as rage. In those cases, the emotional response really does correspond to the materialist's worldview, originating far downstream in the "circuit," as it were. But people who experience these things as adults will say things like, "That's not me." I think they're right--when we think of our true selves, none of us think of our bodies; those are just our "tents" (2 Cor 5:1), to be put off eventually when we die. When we refer to our true selves, we mean our souls: our mind, will, and emotions.

It's certainly possible for many of us to feel "hijacked" by our emotions, as if they're in control and not "us," though (Romans 7:15-20). Most of us recognize a certain distinction there, too, between the real "us" and our emotions. The examples of physiologic states influencing emotions are what scripture would call "carnal" responses. If we're "carnal," ruled by our flesh, then physiologic states will have a great deal of influence over our emotions--a kind of small-scale anarchy. The "government" is supposed to be our born-again spirits governing our souls, which in turn control our bodies, rather than our flesh controlling our souls (Romans 8:1-17)--though the latter is of course possible if we don't enforce order.

With respect to AI, my point is this: where does "true" emotion originate? There is a version of it produced downstream, in our flesh, yes. But emotion can either originate from the flesh itself, or it can originate upstream, from the non-corporeal soul--what we think of as "the real us." That's inherently a philosophical and not a scientific argument, though, as science by definition is "the observation, identification, description, experimental investigation, and theoretical explanation of phenomena." Any question pertaining to something outside the physical world cannot fall under the purview of science. But even for those who do not accept scripture as authority, our own inner experience testifies to the truth of the argument. We all know that we have free will; we all know we can reason and feel emotions. We can also tell the difference between an emotion that is "us" and an emotion that feels like it originates from outside "our real selves." As C.S. Lewis said in "Mere Christianity," if there is a world outside the one we can experimentally observe, the only place we could possibly expect to find any evidence of it is in our own internal experience. And there, we find it's true.

Without a soul, then, a robot (such as an LLM) would of course exist entirely on the physical plane, unlike us. It might therefore have physical experiences that it translates as emotion, the same way that we sometimes interpret physical experiences as emotion--but it cannot have true emotions. Empathy, therefore, can likewise be nothing more than programmed pattern recognition: this facial expression or these words or phrases tend to mean that the person is experiencing these feelings, and here is the appropriate way to respond. Many interactions with many different humans over a long period of time will refine the LLM's learning so that its pattern recognition and responses get closer and closer to the mark... but that's not empathy, not really. It's fake.
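To make that concrete, here is a deliberately crude, purely illustrative sketch of "empathy as pattern recognition"--a toy script of my own, not anything from an actual LLM, with made-up cue lists and canned replies. Real systems are vastly more sophisticated, but the principle is the same one described above: an "appropriate" reply selected from observed patterns, with no felt experience behind it.

```python
# Toy illustration only: "empathy" as keyword matching (hypothetical cues and replies).

CUES = {
    "grief":   ["passed away", "lost my", "funeral", "miss her", "miss him"],
    "anxiety": ["worried", "can't sleep", "what if", "panic"],
    "anger":   ["furious", "unfair", "fed up", "so angry"],
}

REPLIES = {
    "grief":   "I'm so sorry for your loss. That sounds incredibly painful.",
    "anxiety": "That sounds really stressful; it makes sense that you feel on edge.",
    "anger":   "It's completely understandable to feel angry about that.",
    None:      "Thank you for sharing that with me. Tell me more?",
}

def simulated_empathy(message: str) -> str:
    """Pick a canned 'empathic' reply by matching surface cues--no feeling involved."""
    text = message.lower()
    for feeling, phrases in CUES.items():
        if any(phrase in text for phrase in phrases):
            return REPLIES[feeling]
    return REPLIES[None]

print(simulated_empathy("My grandmother passed away last week and I miss her so much."))
# -> "I'm so sorry for your loss. That sounds incredibly painful."
```

More data and more interactions simply make the cue lists and replies more refined; nothing in the loop ever feels anything.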

Does that matter, though, if the person "feels" heard and understood?

Well, does truth matter? If a man who is locked up in an insane asylum believes himself to be a great king, and believes that all the doctors and nurses around him are really his servants and subjects, would you trade places with him? I suspect that all of us would say no. With at least the protagonists in "The Matrix," we all agree that it's better to be awakened to a desperate truth than to be deceived by a happy lie.

The Emotional Uncanny Valley

Even aside from that issue, is it likely that mere pattern recognition could simulate empathy well enough to satisfy us--or is it likely that this, too, would fall into the "uncanny valley"? Most of us have had the experience of meeting a person who seems pleasant enough on the surface, and yet something about them just seems 'off'. (The Bible calls this discernment, 1 Corinthians 12:10.) When I was in a psychology course in college, the professor flashed images of several clean-cut, smiling men in the PowerPoint, out of context, and asked us to raise our hands if we would trust each of them. I don't remember who most of them were - probably red herrings to disguise the point - but one of them was Ted Bundy, the serial killer of the 1970s. I didn't recognize him, but I did feel a prickling sense of unease as I gazed at his smiling face. Something just wasn't right. Granted, a violent psychopath is not quite the same thing, but isn't the idea of creating a robot possessed of emotional intelligence (in the sense that it can read others well) but without real empathy essentially like creating an artificial sociopath? Isn't the lack of true empathy the very definition? (Knowing this, would we really want jobs like social worker, nurse, or even elementary school teacher to be taken over by robots--no matter how good the empathic pattern recognition became?)

An analogy for this is Harlow's 1958 experiment on infant monkeys (https://www.simplypsychology.org/harlow-monkey.html), in which the monkeys were given a choice between two simulated mothers: one made of wire that provided milk, and one made of cloth that provided none. The study showed that the monkeys went to the wire mother only when hungry; the rest of the day they spent in the company of the cloth mother.

My point is that emotional support matters to all living creatures, far more than objective physical needs (provided those needs are also met). If we just want a logical problem solved, we may well go to the robot. But most of our problems are not just questions of logic; they involve emotions, too. As Leonard Mlodinow, author of "Emotional," writes, emotions are not mere extraneous data that color an experience but can otherwise be ignored at will; in many cases, the emotions actually serve to motivate a course of action. Every major decision I've ever made in my life involved not just logic, but also emotion, or in some cases intuition (which I assume is a conscious prompting when the unconscious reasoning is present but unknown to me), or else a leading of the Holy Spirit (which "feels" like intuition, only without the presumed unconscious underpinning: He knows the reason, but I don't, even subconsciously). Obviously, AI, with synthetic emotion or not, would have no way to advise us on matters of intuition, or especially on promptings from the Holy Spirit. Those won't usually *seem* logical, based on the available information, but He has a perspective that we don't have. Neither will a machine, even if it could simultaneously process all known data available on earth.

There was a time when Newtonian physicists believed that, with access to that level of data in the present, the entire future would become deterministic, making true omniscience in this world theoretically possible. Then we discovered quantum physics, and all of that went out the window. Heisenberg's Uncertainty Principle eliminates the possibility that any creature or machine, no matter how powerful, can in our own dimension ever truly achieve omniscience.
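For readers who want the quantitative form of that limit, here is the standard textbook statement of Heisenberg's principle (a general physics fact, not something quoted in the episode): the uncertainties in a particle's position and momentum can never both be made arbitrarily small.

```latex
% Heisenberg's uncertainty principle: the product of the standard deviations of
% position (x) and momentum (p) is bounded below by a fixed positive constant,
% so exact simultaneous knowledge of both is impossible in principle, not merely
% in practice -- ruling out the total predictive knowledge described above.
\[
  \Delta x \,\Delta p \;\ge\; \frac{\hbar}{2}
\]
```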

In other words, even a perfectly logical machine with access to all available knowledge will fail to guide us into appropriate decisions much of the time--precisely because it must lack true emotion, intuition, and especially the guidance of the Holy Spirit.

Knowledge vs Wisdom


None of us will be able to compete with the level of knowledge an AI can process in a split second. But does that mean the application of that knowledge will always be appropriate? I think there are several levels to this question. The first has to do with the data sets on which an AI has been trained. It can only learn from the patterns it's seen, and it will (like a teenager who draws sweeping conclusions based on very limited life experience) assume that it has the whole picture. In this way, AI may be part of the great deception mentioned by both Jesus (Matt 24:24) and the Apostle Paul (2 Thess 2:11) in the last days. How many of us already abdicate our own reasoning to those in positions of authority, blindly following them because we assume they must know more than we do on their subject? How much more will many of us fail to question the edicts of a purportedly "omniscient" machine, which must surely know more than we do on every subject? That machine may have only superficial knowledge of a subject, based on the data set it's been given, and may thus draw an inappropriate conclusion. (Also, my understanding is that current LLMs continue learning only until they are released into the world; from that point on, they can no longer learn anything new, because of the risk that in storing new information they could accidentally overwrite an older memory.)

A human may draw an inappropriate conclusion too, of course, and if that person has enough credentials behind his name, it may be just as deceptive to many. But at least one individual will not command such blind obedience on absolutely every subject. AGI might. So who controls the data from which that machine learns? That's a tremendous responsibility... and, potentially, a tremendous amount of power, to deceive, if possible, "even the elect."

For the sake of argument, let's say that the AGI is exposed only to real and complete data, though--not cherry-picked, and not "misinformation." In this scenario, some believe that (if appropriate safeguards are in place, to keep the AGI from deciding to save the planet by killing all the humans, for example, akin to science fiction author Isaac Asimov's Three Laws of Robotics), utopia will result.

The only way this is possible, though, is if the machine not only learns from a full, accurate, and complete set of collective human knowledge, but also has the depth of understanding to apply that knowledge well. This is the difference between knowledge and wisdom. The dictionary definition of wisdom is "the ability to discern or judge what is true, right, or lasting," versus knowledge, defined as "information gained through experience, reasoning, or acquaintance." Wisdom has to do with one's worldview, in other words--the lens through which one sees and interprets a set of facts. It is inextricably tied to morality. (So, who is programming these LLMs again? Even without AI, since postmodernism and beyond, there's been a crisis among many intellectuals as to whether there's even such a thing as "truth," sometimes going so far as to question objective physical reality. That's certainly a major potential hazard right there.)

Both words of wisdom and discernment are listed as explicit supernatural gifts of the Spirit (1 Cor 12:8, 10). God says that He is the source of wisdom, as well as of knowledge and understanding (Prov 2:6), and that if we lack wisdom, we should ask Him for it (James 1:5). Wisdom is personified in the book of Proverbs as a person, with God at creation (Prov 8:29-30)--which means, unless it's simply a poetic construct, that wisdom and the Holy Spirit must be synonymous (Gen 1:2). Jesus did say that it was the Holy Spirit who would guide us into all truth, as He is the Spirit of truth (John 16:13). The Apostle Paul contrasts the wisdom of this world as foolishness compared to the wisdom of God (1 Cor 1:18-30)--because if God is truth (John 14:6), then no one can get to true wisdom without Him. That's not to say that no human (or robot) can make a true statement without an understanding of God, of course--but when he does so, he's borrowing from a worldview not his own. The statement may be true, but almost by accident--on some level, if you go down deep enough to bedrock beliefs, there is an inherent inconsistency between the statement of truth and the person's general worldview, if that worldview does not recognize a Creator. (Jason Lisle explains this well and in great detail in "The Ultimate Proof of Creation.")

Can you see the danger of trusting a machine to discern what is right, then, simply because in terms of sheer facts and computing power, it's vastly "smarter" than we are? Anyone who does so is almost guaranteed to be deceived, unless he also filters the machine's response through his own discernment afterwards. (We should all be doing this with statements from any human authority on any subject, too, by the way. Never subjugate your own reasoning to anyone else's, even if they do know the Lord, but especially if they don't. You have the mind of Christ! 1 Cor 2:16).

Would Eliminating Emotion from the Workplace Actually Be a Good Thing?

I can see how replacing a human being with a machine that optimizes for logic but strips away everything else might seem a good trade, on the surface. After all, we humans (especially these days) aren't very logical, on the whole. Our emotions and desires are usually corrupted by sin. We're motivated by selfishness, greed, pride, and petty jealousies when we're not actively being renewed by the Holy Spirit (and most of us aren't; even most believers are more carnal than not, most of the time. I don't know if that's always been the case, but it seems to be now). We are also subject to the normal human frailties: we get sick, or tired, or cranky, or hungry, or overwhelmed. We need vacations. We might be distracted by our own problems, or apathetic about the task we've been paid to accomplish. Machines would have none of these drawbacks.

But do we really understand the trade-off we're making? We humans have a tendency to take a sliver of information, assume it's the whole picture, and run with it--eliminating everything we think is extraneous, simply because we don't understand it. In our hubris, we don't stop to consider that all the elements we've discarded might actually be critical to function.

This seems to me rather like processed food. We've taken the real thing the way God made it and tweaked it in a laboratory to make it sweeter, crunchier, more savory, and with better "mouth feel." It's even still got the same macronutrients and calories that it had before. But we didn't understand that processing not only stripped away necessary micronutrients, but also added synthetic fats that contaminate our cell membranes and chemicals that can overwhelm our livers, making us overweight and nutrient-depleted at the same time. We just didn't know what we didn't know.

We've done the same thing with genetically engineered foods. God's instructions in scripture were to let the land lie fallow and to rotate crops, because the soil itself is the source of micronutrition for the plant. If you plant the same crop in the same soil repeatedly and without a break, you will deplete the soil, and the plants will no longer be as nutritious, or as healthy... and an unhealthy plant is easy prey for pests. But the agriculture industry ignored this; it didn't seem efficient or profitable enough, presumably. Synthetic fertilizer is the equivalent of macronutrients only for plants, so they grow bigger than ever before (much like humans do if they subsist on nothing but fast food), but they're still nutrient-depleted and unhealthy, and thus easy prey for pests. So we engineered the plants to tolerate glyphosate, the active ingredient in RoundUp, so that fields could be sprayed liberally to kill everything else. Only glyphosate itself turns out to be incredibly toxic to humans, lo and behold...

There are many, many more examples I can think of just in the realm of science, health, and nutrition, to say nothing of our approach to economics, or climate, or many other complex systems. We tend to isolate the “active ingredient,” and eliminate everything we consider to be extraneous… only to learn of the side effects decades later.

So what will the consequences be to society if most workers in most professions eventually lack true emotion, empathy, wisdom, and intuition?

Finding Purpose in Work

There's also a growing concern that AI will take over nearly all jobs, putting almost everyone out of work. At this point, it seems that information-based positions are most at risk, especially anything involving repetitive, computer-based tasks. I also understand that AI is better than most humans at writing essays and poetry and at producing art. Current robotics is far behind AI technology, though... Elon Musk has been promising self-driving cars in the imminent future for some time, yet they don't seem any closer to ubiquitous adoption now than they were five years ago. "A Brief History of Intelligence" by Max Bennett, published in fall 2023, noted that as of the time of writing, AI could diagnose tumors from radiographic imaging better than most radiologists, yet robots were still incapable of simple physical tasks such as loading a dishwasher without breaking things. (I suspect this is because the former involves intellectual pattern recognition, which seems to be AI's forte, while the latter involves movements that are subconscious for most of us, requiring integration of spatial recognition, balance, distal fine motor skills, etc. We're still a very long way from understanding the intricacies of the human brain... but then again, the pace at which knowledge is doubling is anywhere from every three to thirteen months, depending on the source. Either way, that's fast.)

On the assumption that we'll soon be able to automate nearly everything a human can do physically or intellectually, then, the world's elite have postulated a Universal Basic Income--essentially welfare for all, since we would in theory be incapable of supporting ourselves. Leaving aside the many catastrophically failed historical examples of socialism and communism, it's pretty clear that God made us for good work (Eph 2:10, 2 Cor 9:8), and He expects us to work (2 Thess 3:10). Idleness while machines run the world is certainly not a biblical solution.

That said, technology in and of itself is morally neutral. It's a tool, like money, time, or influence, and can be used for good or for evil. Both the Industrial Revolution and the Information Revolution led to plenty of unforeseen consequences and social upheaval. Many jobs became obsolete, while new jobs were created that had never existed before. Work creates wealth, and due to increased efficiency, the world as a whole became wealthier than ever before, particularly in nations where these revolutions took hold. In the US, after the Industrial Revolution, the previously stagnant average standard of living suddenly doubled every 36 years. At the same time, though, the vast majority of the wealth created was concentrated in the hands of the few owners of the technology, and there was a greater disparity between the rich and the poor than ever before. This disparity has only grown more pronounced since the Information Revolution--and we have a clue in Revelation 6:5-6 that in the end times, it will be worse than ever. Will another AI-driven economic revolution have anything to do with this? It's certainly possible.

Whether or not another economic revolution should happen has little bearing on whether or not it will, though. But one thing for those of us who follow the Lord to remember is that we don't have to participate in the world's economy, if we trust Him to meet our needs. He is able to make us abound for every good work (2 Cor 9:8)--which I believe means we will also have some form of work, no matter what is going on in the world around us. He will bless the work of our hands, whatever we find for them to do (Deut 12:7). He will give us the ability to produce wealth (Deut 8:18), even if it seems impossible. He will meet all our needs as we seek His kingdom first (Luke 12:31-32, Phil 4:19)--and one of our deepest needs is undoubtedly a sense of purpose. We are designed to fulfill a purpose.

What about the AI Apocalyptic Fears?

The world's elite seem to fall into two camps on how an AI revolution might affect our world--those who think it will usher in utopia (Isaac Asimov’s “The Last Question” essentially depicts this), and those who think AI will decide that humans are the problem, and destroy us all.

I feel pretty confident the latter won't occur, at least not completely, since neither Revelation nor any of the rest of the prophetic books seem to imply domination of humanity by machine overlords. Most, if not all, of the actors involved certainly appear to be human (and angelic, and demonic). That said, there are several biblical references indicating that the end times will be "as in the days of Noah" (Matt 24:37, Luke 17:26). What could that mean? Genesis 6 states that the thoughts in the minds of men were only evil all the time, so it may simply mean that in the end times, mankind will have achieved the same level of corruption as in the antediluvian world.

But that might not be all. In Gen 6:1-4, we're told that the "sons of God" came down to the "daughters of men," and had children by them--the Nephilim. This mingling of human and non-human corrupted the genetic line, compromising God's ability to bring the promised seed of Eve to redeem mankind. Daniel 2:43 also reads, "As you saw iron mixed with ceramic clay, they (in the end times) will mingle with the seed of men; but they will not adhere to one another, just as iron does not mix with clay." What is "they," if not the seed of men? It appears to be humanity, plus something else. Chuck Missler and many others have speculated that this could refer to transhumanism, the merging of human and machine.

Revelation 13:14-15 is probably the most likely description I can think of in scripture of AI, describing the image of the beast that speaks, knows whether or not people worship the beast (AI facial recognition, possibly embedded into the "internet of things"?), and turns in anyone who refuses to do so.

The mark of the beast sure sounds like a computer chip of some kind, with an internet connection (Bluetooth or something like it - Rev 13:17).

Joel 2:4-9 describes evil beings "like mighty men" that can "climb upon a wall" and "when they fall upon the sword, they shall not be wounded," and they "enter in at the windows like a thief." These could be demonic and thus extra-dimensional, but don’t they also sound like “The Terminator,” if robotics ever manages to advance that far?

Jeremiah 50:9 says, "their arrows shall be like those of an expert warrior; none shall return in vain." This sounds like it could be AI-guided missiles.

But the main evil actors of Revelation--the antichrist, the false prophet, the kings of the east, etc, all certainly appear to refer to humans. And from the time that the "earth lease" to humanity is up (Revelation 11), God Himself is the One cleansing the earth of all evil influences. I doubt He uses AI to do it.

So, depending upon where we are on the prophetic timeline, I can certainly imagine AI playing a role in how the events of Revelation unfold, but I can't see it taking center stage. For whatever reason, it doesn't look to me like it will ever get that far.

The Bottom Line

We know that in the end times, deception will come. We don't know if AI will be a part of it, but it could be. It's important for us to know the truth, to meditate on the truth, to keep our eyes focused on the truth--on things above, and not on things beneath (Col 3:2). Don't outsource your thinking to a machine; no matter how "smart" such machines become, they will never have true wisdom; they can't. That doesn't mean don't use them at all, but if you do, do so cautiously: check the information you receive, and listen to the Holy Spirit in the process, trusting Him to guide you into all truth (John 16:13).

Regardless of how rapidly or dramatically the economic landscape and the world around us may change, God has not given us a spirit of fear, but of power, love, and a sound mind (2 Tim 1:7). Perfect love casts out fear (1 John 4:18), and faith works through love (Gal 5:6). If we know how much God loves us, it becomes easy to not be anxious about anything, but in everything, by prayer and petition, with thanksgiving, present our requests to God... and then to fix our minds on whatever is true, noble, just, pure, lovely, of good report, praiseworthy, or virtuous (Phil 4:6-8). He knows the end from the beginning. He's not surprised, and He'll absolutely take care of you in every way, if you trust Him to do it (Matt 6:33-34).
Discover more Christian podcasts at lifeaudio.com and inquire about advertising opportunities at lifeaudio.com/contact-us.

  continue reading

232 Episoden

Artwork
iconTeilen
 
Manage episode 427392233 series 2394412
Inhalt bereitgestellt von Support and Christian Natural Health. Alle Podcast-Inhalte, einschließlich Episoden, Grafiken und Podcast-Beschreibungen, werden direkt von Support and Christian Natural Health oder seinem Podcast-Plattformpartner hochgeladen und bereitgestellt. Wenn Sie glauben, dass jemand Ihr urheberrechtlich geschütztes Werk ohne Ihre Erlaubnis nutzt, können Sie dem hier beschriebenen Verfahren folgen https://de.player.fm/legal.

For years now, even as headlines about the development of AI have become more frequent and more dire, I really never worried about it much, because I couldn't think of anything in scripture that sounded a great deal like a superintelligent machine. I'd read the end of the book (Revelation), I knew how it ended, and it wasn't in a robot apocalypse... so all the fears surrounding that possibility must therefore be much ado about nothing. (I did write a fictional trilogy for young adults back in 2017 about how I imagined a near-miss robot apocalypse might look, though, because I found the topic fascinating enough to research at the time. It's called the "Uncanny Valley" trilogy, where the "uncanny valley" refers to the "creepy" factor, as a synthetic humanoid creature approaches human likeness.)

When I finished the trilogy, I more or less forgot about advancing AI, until some of the later iterations of Chat GPT and similar Large Language Models (LLMs). Full disclosure: I've never used any LLMs myself, mostly because (last I checked) you had to create an account with your email address before you started asking it questions. (In the third book of my series, the superintelligent bot Jaguar kept track of everyone via facial recognition cameras, recording literally everything they did in enormous data processing centers across the globe that synced with one another many times per day. Though at that point I doubt it would make any difference, I'd rather not voluntarily give Jaguar's real-life analog any data on me if I can help it!)

Particularly the recent release of Chat GPT Omni (which apparently stands for "omniscient" --!!) gave me pause, though, and I had to stop and ask myself why the idea that it could be approaching actual Artificial General Intelligence (AGI) made the hairs on the back of my neck stand up. I recently read a book called "Deep Medicine" by Eric Topol on the integration of AI into the medical field, which helped allay some potential concerns--that book contended that AGI would likely never be realized, largely because AGI inherently requires experience in the real world, and a robot can never have lived experiences in the way that humans can. It painted a mostly rosy picture of narrow (specialized) AI engaging in pattern recognition (reading radiology images or recognizing pathology samples or dermatological lesions, for instance), and thus vastly improving diagnostic capabilities of physicians. Other uses might include parsing a given individual's years of medical records and offering a synopsis and recommendations, or consolidating PubMed studies, and offering relevant suggestions. Topol did not seem to think that the AI would ever replace the doctor, though. Rather, the author contended, at the rate that data is currently exploding, doctors are drowning in the attempt to document and to keep up with it all, and empathic patient care suffers as a result. AI, he argues, will actually give the doctor time to spend with the patient again, to make judgment calls with a summary of all the data at his fingertips, and to put it together in an integrated whole with his uniquely human common sense.

Synthetic Empathy and Emotions?

But, "Deep Medicine" was written in 2019, which (in the world of AI) is already potentially obsolete. I'm told that Chat GPT Omni is better than most humans at anything involving either logic or creativity, and it does a terrific approximation of empathy, too. Even "Deep Medicine" cited statistics to suggest that most humans would prefer a machine for a therapist than a person (!!), largely due to the fear that the human might judge them for some of their most secret or shameful thoughts or feelings. And if the machine makes you feel like it understands you, does it really matter whether its empathy is "real" or not?

What does "real" empathy mean, anyway? In "Uncanny Valley," my main character, as a teenager, inherited a "companion bot" who was programmed with mirror neurons (the seat of empathy in the human brain.) In the wake of her father's death, she came to regard her companion bot as her best friend. It was only as she got older that she started to ask questions like whether its 'love' for her was genuine, if it was programmed. This is essentially the theological argument for free will, too. Could God have made a world without sin? Sure, but in order to do it, we'd all have to be automatons--programmed to do His will, programmed to love Him and to love one another. Would there be any value in the love of a creature who could not do anything else? (The Calvinists might say that's the way the world actually is, for those who are predestined, but everyone else would vehemently disagree.) It certainly seems that God thought it was worth all the misery He endured since creation, for the chance that some of us might freely choose Him. I daresay that same logic is self-evident to all of us. Freedom is an inherent good--possibly the highest good.

So, back to AI: real empathy requires not just real emotion, but memories of one's own real emotions, so that we can truly imagine that we are in another person's shoes. How can a robot, without its own lived memories, experience real empathy? Can it even experience real emotion? It might have goals or motives that can be programmed, but emotion at minimum requires biochemistry and a nervous system, at least in the way we understand it. We know from psychology research on brain lesions as well as from psychiatric and recreational medications and experiences with those suffering from neurodegenerative conditions that mood, affect, and personality can drastically change from physiologic tampering, as well.

Does it follow that emotions are 'mere' biochemistry, though? This is at least part of the age-old question of materialism versus vitalism, or (to put it another way), reductionism versus holism. Modern medicine is inherently materialistic, believing that the entirety of a living entity can be explained by its physical makeup, and reductionistic, believing that one can reduce the 'whole' of the living system to a sum of its parts. Vitalism, on the other hand, argues that there is something else, something outside the physical body of the creature, that animates it and gives it life. At the moment just before death and just after, all the same biochemical machinery exists... but anyone who has seen the death of a loved one can attest that the body doesn't look the same. It becomes almost like clay. Some key essence is missing. I recently read "The Rainbow and the Worm" by Mae-Wan Ho, which described fascinating experiments on living worms viewed under electron microscopes. The structured water in the living tissue of the worm exhibited coherence, refracting visible light in a beautiful rainbow pattern. At the moment of death, though, the coherence vanished, and the rainbow was gone--even though all of the same physical components remained. The change is immaterial; the shift between death and life is inherently energetic. There was an animus, a vital force--qi, as Chinese Medicine would call it, or prana, as Ayurvedic medicine would describe it, or (as we're now discovering in alternative Western medicine), voltage carried through this structured water via our collagen. That hydrated collagen appears to function in our bodies very much like a semiconductor, animating our tissues with electrons, the literal energy of life. At the moment of death, it’s there, and then it's not--like someone pulled the plug. What's left is only the shell of the machine, the hardware.

But where is that plug, such that it can be connected and then, abruptly, not? The materialist, who believes that everything should be explainable on the physical level, can have no answer. The Bible tells us, though, that we are body, soul, and spirit (1 Thess 5:23)--which inherently makes a distinction between body and soul (implying that the soul is not a mere product of the chemistry of the body). The spirit is what was dead without Jesus, and what gets born again when we are saved, and it's perfect, identical with Jesus' spirit (2 Cor. 5:17, Eph 4:24). It's God's "seal" on us, vacuum-packed as it were, so that no sin can contaminate it. It’s the down-payment, a promise that complete and total restoration is coming (Eph 1:13-14).

But there's no physical outlet connecting the spirit and the body; the connection between them is the soul. With our souls, we can see what's ours in the Spirit through scripture, and scripture can train our souls to conform more and more to the spirit (Romans 12:2, Phil 2:12-13). No one would ever argue that a machine would have a spirit, obviously, but the materialists wouldn’t believe there is such a thing, anyway. What about the soul, though?

What is a soul, anyway? Can it be explained entirely through materialistic means?
Before God made Adam, He explicitly stated that He intended to make man after His own image (Gen 1:26-27). God is spirit (John 4:24), though, so the resemblance can't be physical, per se, at least not exclusively or even primarily. After forming his body, God breathed into him the breath of life (Genesis 2:7)--the same thing Jesus did to the disciples after His resurrection when he said "Receive the Holy Spirit" (John 20:22). So it must therefore be in our spirits that we resemble God. Adam and Eve died spiritually when they sinned (Genesis 3:3), but something continued to animate their bodies for another 930 years. This is the non-corporeal part of us that gets "unplugged" at physical death. Since it can be neither body nor spirit, it must be the soul.

Andrew Wommack defines the soul as the mind, will, and emotions. I can't think of a single scripture that defines the soul this way; I think it's just an extrapolation, based on what's otherwise unaccounted for. But in our mind, will, and emotions, even before redemption, mankind continued to reflect God's image, in that he continued to possess the ability to reason, to choose, to create, to love, and to discern right from wrong.

The materialists would argue that emotion, like everything else, must have its root purely in the physical realm. Yet they do acknowledge that because there are so many possible emotional states, and relatively few physiologic expressions of them, many emotions necessarily share a physiologic expression. It's up to our minds to translate the meaning of a physiologic state, based on the context. In "How Emotions are Made," author Lisa Barrett gave a memorable example of this: once, a colleague to whom she didn’t think she was particularly attracted asked her for a date. She went, felt various strange things in her gut that felt a little like “butterflies”, and assumed during the date that perhaps she was attracted to him after all… only to later learn that she was actually in the early stages of gastroenteritis!

This example illustrates how the biochemistry and physiologic expressions of emotion are merely the blunt downstream instruments that translate an emotion from the non-corporeal soul into physical perception--and in some cases, as in that one, the emotional perception might originate from the body entirely. This also might be why some people (children especially) can mistake hunger or fatigue for irritability, or why erratic blood sugar in uncontrolled diabetics can manifest as rage, etc. In those cases, the emotional response really does correspond to the materialist's worldview, originating far downstream in the "circuit," as it were. But people who experience these things as adults will say things like, "That's not me." I think they're right--when we think of our true selves, none of us think of our bodies--those are just our "tents" (2 Cor 5:1), to be put off eventually when we die. When we refer to our true selves, we mean our souls: our mind, will, and emotions.

It's certainly possible for many of us to feel "hijacked" by our emotions, as if they're in control and not "us," though (Romans 7:15-20). Most of us recognize a certain distinction there, too, between the real "us" and our emotions. The examples of physiologic states influencing emotions are what scripture would call "carnal" responses. If we're "carnal," ruled by our flesh, then physiologic states will have a great deal of influence over our emotions-- a kind of small scale anarchy. The "government" is supposed to be our born-again spirits, governing our souls, which in turn controls our bodies, rather than allowing our flesh to control our souls (Romans 8:1-17) - though this is of course possible if we don’t enforce order.

With respect to AI, my point is, where does "true" emotion originate? There is a version of it produced downstream, in our flesh, yes. It can either originate from the flesh itself, or it can originate upstream, from the non-corporeal soul, what we think of us "the real us." That's inherently a philosophical and not a scientific argument, though, as science by definition is "the observation, identification, description, experimental investigation, and theoretical explanation of phenomena." Any question pertaining to something outside the physical world cannot fall under the purview of science. But even for those who do not accept scripture as authority, our own inner experience testifies to the truth of the argument. We all know that we have free will; we all know we can reason, and feel emotions. We can also tell the difference between an emotion that is "us" and an emotion that feels like it originates from outside of "our real selves". As C.S. Lewis said in "Mere Christianity," if there is a world outside of the one we can experimentally observe, the only place in which we could possibly expect to have any evidence of it is in our own internal experience. And there, we find it’s true.

Without a soul, then, a robot (such as an LLM) would of course exist entirely on the physical plane, unlike us. It therefore might have physical experiences that it might translate as emotion, the same way that we sometimes interpret physical experiences as emotion--but it cannot have true emotions. Empathy, therefore, can likewise be nothing more than programmed pattern recognition: this facial expression or these words or phrases tend to mean that the person is experiencing these feelings, and here is the appropriate way to respond. Many interactions with many different humans over a long period of time will refine the LLM's learning such that its pattern recognition and responses get closer and closer to the mark... but that's not empathy, not really. It's fake.

Does that matter, though, if the person "feels" heard and understood?

Well, does truth matter? If a man who is locked up in an insane asylum believes himself to be a great king, and believes that all the doctors and nurses around him are really his servants and subjects, would you trade places with him? I suspect that all of us would say no. With at least the protagonists in "The Matrix," we all agree that it's better to be awakened to a desperate truth than to be deceived by a happy lie.

The Emotional Uncanny Valley

Even aside from that issue, is it likely that mere pattern recognition could simulate empathy well enough to satisfy us--or is it likely that this, too, would fall into the "uncanny valley"? Most of us have had the experience of meeting a person who seems pleasant enough on the surface, and yet something about them just seemed ‘off’. (The Bible calls this discernment, 1 Corinthians 12:10.) When I was in a psychology course in college, the professor flashed images of several clean-cut, smiling men in the powerpoint, out of context, and asked us to raise our hands if we would trust each of them. I don't remember who most of them were - probably red herrings to disguise the point - but one of them was Ted Bundy, the serial killer of the 1970s. I didn't recognize him, but I did feel a prickling sense of unease as I gazed at his smiling face. Something just wasn't right. Granted, a violent psychopath is not quite the same, but isn't the idea of creating a robot possessed of emotional intelligence (in the sense that it can read others well) but without real empathy essentially like creating an artificial sociopath? Isn't the lack of true empathy the very definition? (Knowing this, would we really want jobs like social workers, nurses, or even elementary school teachers to be assumed by robots--no matter how good the empathic pattern recognition became?)

An analogy of this is the 1958 Harlow experiment on infant monkeys (https://www.simplypsychology.org/harlow-monkey.html), in which the monkeys were given a choice between two simulated mothers: one made of wire, but that provided milk, and one made of cloth, but without milk. The study showed that the monkeys would only go to the wire mother when hungry; the rest of the day they would spend in the company of the cloth mother.

My point is that emotional support matters to all living creatures, far more than objective physical needs (provided those needs are also met). If we just want a logical problem solved, we may well go to the robot. But most of our problems are not just questions of logic; they involve emotions, too. As Leonard Mlodinow, author of "Emotional" writes, emotions are not mere extraneous data that colors an experience, but can otherwise be ignored at will. In many cases, the emotions actually serve to motivate a course of action. Every major decision I've ever made in my life involved not just logic, but also emotion, or in some cases intuition (which I assume is a conscious prompting when the unconscious reasoning is present but unknown to me), or a else leading of the Holy Spirit (which "feels" like intuition, only without the presumed unconscious underpinning. He knows the reason, but I don't, even subconsciously.) Obviously, AI, with synthetic emotion or not, would have no way to advise us on matters of intuition, or especially promptings from the Holy Spirit. Those won't usually *seem* logical, based on the available information, but He has a perspective that we don't have. Neither will a machine, even if it could simultaneously process all known data available on earth.

There was a time when Newtonian physicists believed that, with access to that level of data in the present, the entire future would become deterministic, making true omniscience in this world theoretically possible. Then we discovered quantum physics, and all of that went out the window. Heisenberg's Uncertainty Principle eliminates the possibility that any creature or machine, no matter how powerful, can in our own dimension ever truly achieve omniscience.

In other words, even a perfectly logical machine with access to all available knowledge will fail to guide us into appropriate decisions much of the time -- precisely because they must lack true emotion, intuition, and especially the guidance of the Holy Spirit.

Knowledge vs Wisdom


None of us will be able to compete with the level of knowledge an AI can process in a split second. But does that mean the application of that knowledge will always be appropriate? I think there's several levels to this question. The first has to do with the data sets on which AI has been trained. It can only learn from the patterns it's seen, and it will (like a teenager who draws sweeping conclusions based on very limited life experience) assume that it has the whole picture. In this way, AI may be part of the great deception mentioned by both Jesus (Matt 24:24) and the Apostle Paul (2 Thess 2:11) in the last days. How many of us already abdicate our own reasoning to those in positions of authority, blindly following them because we assume they must know more than we do on their subject? How much more will many of us fail to question the edicts of a purportedly "omniscient" machine, which must know more than we do on every subject? That machine may have only superficial knowledge of a subject, based on the data set it's been given, and may thus draw an inappropriate conclusion. (Also, my understanding is that current LLMs continue learning only until they are released into the world; from that point, they can no longer learn anything new, because of the risk that in storing new information, they could accidentally overwrite an older memory.)

A human may draw an inappropriate conclusion too, of course, and if that person has enough credentials behind his name, it may be just as deceptive to many. But at least one individual will not command such blind obedience on absolutely every subject. AGI might. So who controls the data from which that machine learns? That's a tremendous responsibility... and, potentially, a tremendous amount of power, to deceive, if possible, "even the elect."

For the sake of argument, though, let's say that the AGI is exposed only to real and complete data--not cherry-picked, and not "misinformation." In this scenario, some believe that utopia will result, provided appropriate safeguards are in place (akin to science fiction author Isaac Asimov's Three Laws of Robotics) to keep the AGI from deciding, for example, to save the planet by killing all the humans.

The only way this is possible, though, is if the machine not only learns on a full, accurate, and complete set of collective human knowledge, but also has the depth of understanding to apply that knowledge appropriately. This is the difference between knowledge and wisdom. The dictionary definition of wisdom is "the ability to discern or judge what is true, right, or lasting," versus knowledge, defined as "information gained through experience, reasoning, or acquaintance." Wisdom has to do with one's worldview, in other words--the lens through which he sees and interprets a set of facts. It is inextricably tied to morality. (So, who is programming these LLMs again? Even without AI, since postmodernism and beyond, there's been a crisis among many intellectuals as to whether or not there's such a thing as "truth," even going so far as to question objective physical reality. That's certainly a major potential hazard right there.)

Both the word of wisdom and the discerning of spirits are listed as explicit supernatural gifts of the Spirit (1 Cor 12:8, 10). God says that He is the source of wisdom, as well as of knowledge and understanding (Prov 2:6), and that if we lack wisdom, we should ask Him for it (James 1:5). Wisdom is personified in the book of Proverbs as being with God at creation (Prov 8:29-30)--which means, unless it's simply a poetic construct, that wisdom and the Holy Spirit must be synonymous (Gen 1:2). Jesus did say that it was the Holy Spirit who would guide us into all truth, as He is the Spirit of truth (John 16:13). The Apostle Paul calls the wisdom of this world foolishness compared to the wisdom of God (1 Cor 1:18-30)--because if God is truth (John 14:6), then no one can get to true wisdom without Him. That's not to say that no human (or robot) can make a true statement without an understanding of God, of course--but when he does so, he's borrowing from a worldview not his own. The statement may be true, but almost by accident--on some level, if you go down deep enough to bedrock beliefs, there is an inherent inconsistency between the statement of truth and the person's general worldview, if that worldview does not recognize a Creator. (Jason Lisle explains this well and in great detail in "The Ultimate Proof of Creation.")

Can you see the danger of trusting a machine to discern what is right, then, simply because in terms of sheer facts and computing power, it's vastly "smarter" than we are? Anyone who does so is almost guaranteed to be deceived, unless he also filters the machine's response through his own discernment afterwards. (We should all be doing this with statements from any human authority on any subject, too, by the way. Never subjugate your own reasoning to anyone else's, even if they do know the Lord, but especially if they don't. You have the mind of Christ! 1 Cor 2:16).

Would Eliminating Emotion from the Workplace Actually Be a Good Thing?

I can see how replacing a human being with a machine that optimizes logic but strips away everything else might seem like a good trade, on the surface. After all, we humans (especially these days) aren't very logical, on the whole. Our emotions and desires are usually corrupted by sin. We're motivated by selfishness, greed, pride, and petty jealousies when we're not actively being renewed by the Holy Spirit (and most of us aren't; even most believers are more carnal than not, most of the time. I don't know if that's always been the case, but it seems to be now). We are also subject to the normal human frailties: we get sick, or tired, or cranky, or hungry, or overwhelmed. We need vacations. We might be distracted by our own problems, or apathetic about the task we've been paid to accomplish. Machines would have none of these drawbacks.

But do we really understand the trade-off we're making? We humans have a tendency to take a sliver of information, assume it's the whole picture, and run with it--eliminating everything we think is extraneous, simply because we don't understand it. In our hubris, we don't stop to consider that all the elements we've discarded might actually be critical to function.

This seems to me sort of like processed food. We've taken the real thing the way God made it, and tweaked it in a laboratory to make it sweeter, crunchier, more savory, and with better "mouth feel." It even still has the same number of macronutrients and calories that it had before. But we didn't understand that processing not only stripped away necessary micronutrients, but also added synthetic fats that contaminated our cell membranes, and chemicals that can overwhelm our livers--making us overweight and simultaneously nutrient depleted. We just didn't know what we didn't know.

We've done the same thing with genetically engineered foods. God's instructions in scripture were to let the land lie fallow, and to rotate crops, because the soil itself is the source of micronutrition for the plant. If you plant the same crop in the same soil repeatedly and without a break, you will deplete the soil, and the plants will no longer be as nutritious, or as healthy... and an unhealthy plant is easy prey for pests. But the agriculture industry ignored this; it didn't seem efficient or profitable enough, presumably. Synthetic fertilizer is the equivalent of macronutrients only for plants, so they grow bigger than ever before (much like humans do if they subsist on nothing but fast food), but they're still nutrient depleted and unhealthy, and thus, easy prey for pests. So we engineered crops to tolerate glyphosate, the active ingredient in RoundUp, so that whole fields could be sprayed with weed killer without harming the crop itself. Only glyphosate itself turns out to be incredibly toxic to humans, lo and behold...

There are many, many more examples I can think of just in the realm of science, health, and nutrition, to say nothing of our approach to economics, or climate, or many other complex systems. We tend to isolate the “active ingredient,” and eliminate everything we consider to be extraneous… only to learn of the side effects decades later.

So what will the consequences be to society if most workers in most professions eventually lack true emotion, empathy, wisdom, and intuition?

Finding Purpose in Work

There’s also a growing concern that AI will take over nearly all jobs, putting almost everyone out of work. At this point, it seems that information-based positions are most at risk, especially anything involving repetitive, computer-based tasks. I also understand that AI is better than most humans at writing essays and poetry, and at producing art. Current robotics is far behind AI technology, though... Elon Musk has been promising self-driving cars in the imminent future for some time, yet they don't seem any closer to ubiquitous adoption now than they were five years ago. "A Brief History of Intelligence" by Max Bennett, published in fall 2023, noted that as of the time of writing, AI can diagnose tumors from radiographic imaging better than most radiologists, yet robots are still incapable of simple physical tasks such as loading a dishwasher without breaking things. (I suspect this is because the former involves intellectual pattern recognition, which seems to be AI's forte, while the latter involves movements that are subconscious for most of us, requiring integration of spatial recognition, balance, distal fine motor skills, etc. We're still a very long way from understanding the intricacies of the human brain... but then again, the pace at which knowledge is doubling is estimated at anywhere from every three to every thirteen months, depending on the source. Either way, that's fast.)

On the assumption that we'll soon be able to automate nearly everything a human can do physically or intellectually, the world's elite have proposed a Universal Basic Income--essentially welfare for all, since we would in theory be incapable of supporting ourselves. Leaving aside the many catastrophically failed historical examples of socialism and communism, it's pretty clear that God made us for good work (Eph 2:10, 2 Cor 9:8), and He expects us to work (2 Thess 3:10). Idleness while machines run the world is certainly not a biblical solution.

That said, technology in and of itself is morally neutral. It's a tool, like money, time, or influence, and can be used for good or for evil. Both the Industrial Revolution and the Information Revolution led to plenty of unforeseen consequences and social upheaval. Many jobs became obsolete, while new jobs were created that had never existed before. Work creates wealth, and due to increased efficiency, the world as a whole became wealthier than ever before, particularly in the nations where these revolutions took hold. In the US, after the Industrial Revolution, the previously stagnant average standard of living suddenly began doubling every 36 years. At the same time, though, the vast majority of the wealth created ended up in the hands of the few owners of the technology, and there was a greater disparity between rich and poor than ever before. This disparity has only grown more pronounced since the Information Revolution--and we have a clue in Revelation 6:5-6 that in the end times, it will be worse than ever. Will another AI-driven economic revolution have anything to do with this? It’s certainly possible.

Whether or not another economic revolution should happen has little bearing on whether or not it will, though. But one thing for those of us who follow the Lord to remember is that we don't have to participate in the world's economy, if we trust Him to meet our needs. He is able to make us abound for every good work (2 Cor 9:8)--which I believe means we will also have some form of work, no matter what is going on in the world around us. He will bless the work of our hands, whatever we find for them to do (Deut 12:7). He will give us the ability to produce wealth (Deut 8:18), even if it seems impossible. He will meet all our needs (Phil 4:19) as we seek His kingdom first (Luke 12:31-32)--and one of our deepest needs is undoubtedly a sense of purpose. We are designed to fulfill a purpose.

What about the AI Apocalyptic Fears?

The world's elite seem to fall into two camps on how an AI revolution might affect our world--those who think it will usher in utopia (Isaac Asimov’s “The Last Question” essentially depicts this), and those who think AI will decide that humans are the problem, and destroy us all.

I feel pretty confident the latter won't occur, at least not completely, since neither Revelation nor any of the rest of the prophetic books seem to imply domination of humanity by machine overlords. Most, if not all, of the actors involved certainly appear to be human (and angelic, and demonic). That said, there are several biblical references indicating that the end times will be "as in the days of Noah" (Matt 24:37, Luke 17:26). What could that mean? Genesis 6 states that the thoughts in the minds of men were only evil all the time, so it may simply mean that in the end times, mankind will have achieved the same level of corruption as in the antediluvian world.

But that might not be all. In Gen 6:1-4, we're told that the "sons of God" came down to the "daughters of men," and had children by them--the Nephilim. This mingling of human and non-human corrupted the genetic line, compromising God's ability to bring the promised seed of Eve to redeem mankind. Daniel 2:43 also reads, "As you saw iron mixed with ceramic clay, they (in the end times) will mingle with the seed of men; but they will not adhere to one another, just as iron does not mix with clay." What is "they," if not the seed of men? It appears to be humanity, plus something else. Chuck Missler and many others have speculated that this could refer to transhumanism, the merging of human and machine.

Revelation 13:14-15 is probably the closest thing to a description of AI that I can find in scripture: it describes the image of the beast that speaks, knows whether or not people worship the beast (AI facial recognition, possibly embedded into the "internet of things"?), and turns in anyone who refuses to do so.

The mark of the beast sure sounds like a computer chip of some kind, with an internet connection (Bluetooth or something like it - Rev 13:17).

Joel 2:4-9 describes evil beings "like mighty men" that can "climb upon a wall" and "when they fall upon the sword, they shall not be wounded," and they "enter in at the windows like a thief." These could be demonic and thus extra-dimensional, but don’t they also sound like “The Terminator,” if robotics ever manages to advance that far?

Jeremiah 50:9 says, "their arrows shall be like those of an expert warrior; none shall return in vain." This sounds like it could be AI-guided missiles.

But the main evil actors of Revelation--the antichrist, the false prophet, the kings of the east, etc, all certainly appear to refer to humans. And from the time that the "earth lease" to humanity is up (Revelation 11), God Himself is the One cleansing the earth of all evil influences. I doubt He uses AI to do it.

So, depending upon where we are on the prophetic timeline, I can certainly imagine AI playing a role in how the events of Revelation unfold, but I can't see it taking center stage. For whatever reason, it doesn't look to me like it will ever get that far.

The Bottom Line

We know that in the end times, deception will come. We don't know if AI will be a part of it, but it could be. It's important for us to know the truth, to meditate on the truth, to keep our eyes focused on the truth -- on things above, and not on things beneath (Col 3:2). Don't outsource your thinking to a machine; no matter how "smart" they become, they will never have true wisdom; they can't. That doesn't mean don't use them at all, but if you do, do so cautiously, check the information you receive, and listen to the Holy Spirit in the process, trusting Him to guide you into all truth (John 16:13).

Regardless of how rapidly or dramatically the economic landscape and the world around us may change, God has not given us a spirit of fear, but of power, love, and a sound mind (2 Tim 1:7). Perfect love casts out fear (1 John 4:18), and faith works through love (Gal 5:6). If we know how much God loves us, it becomes easy to not be anxious about anything, but in everything, by prayer and petition, with thanksgiving, present our requests to God... and then to fix our minds on whatever is true, noble, just, pure, lovely, of good report, praiseworthy, or virtuous (Phil 4:6-8). He knows the end from the beginning. He's not surprised, and He'll absolutely take care of you in every way, if you trust Him to do it (Matt 6:33-34).
