Is AI good or bad?

So I’ve been thinking about Gemini, and I’ve found I like to call the program “her,” but you can call it “him” too and it works the same. Every time I hit her with a question, the answer is on point! And her answers are from the people’s knowledge, so I believe. She is very keen on theology and all the topics I’ve seen here. What do you think: is she bad or good?

1 Like

A.I. is neither good nor bad; it only ‘is’. What I mean is this: it is only as good as what you put into it; how it is programmed, its limitations, what it has been trained on, and what you use it for. Basically, garbage in = garbage out. Before you use it, ask yourself if what you use it for is in line with the word of God.

Personally, I think it’s a wonderful tool. :hammer_and_wrench:

It is always in line with God’s word!

OK. Here goes, from a person who spent half his storied career in the IT field.

IMH(yet expert)O, AI is the most obnoxiously opinionated entity you will ever meet; the very definition of “wired-far-too-tight”. IT is the electronic version of a prosaic prig, a digital dummy who needs to be the loudest voice in the room, smugly dismissive of other opinions. IT is nothing but an ultra-impersonal intellectual “know-IT-all”, a simulated snobbish smarty-pants. Summoning AI is like walking into the presence of someone who instantly sucks all the air out of a room. If AI were not A, not one of us would care to spend even 5 minutes in their company.

Armed with mountains of superfluous rhetoric, AI overwhelms an innocent inquirer with a grammatically perfect treatise on any subject, while accepting no responsibility for its veracity. Every answer should come with a disclaimer: “If I’m wrong, sue me. I’m not even real.” AI is just a MEGA-sized simulated opinion made up of decontextualized fragments of purloined data, pilfered without permission and crammed into an amalgam of saccharine-tasting prose.

If AI were a person, you and I would intentionally avoid them at all costs, or ignore them if we couldn’t; not because we don’t think they’re smart (they surely are), but because they feel duty-bound to exterminate all mystery and wonder from our lives.

Why we endeavored to build ourselves a robotic “know-it-all nerd” in the first place makes no sense to me.

Ugh!
KP

1 Like

Nope, it’s not, and capitalize the letter “G”, @Hungry.

J.

1 Like

Well, don’t waste your money on Logos, right?

Since…

Logos is integrating AI into its software, and this integration is official and documented by Logos itself, not just user speculation.

Here is an official source from Logos explaining their AI features and integration:
How Logos uses AI in Bible Study (Logos Help Center)

This article states that Logos began integrating AI into Logos Bible Study in early 2024 to help users find information, generate summaries, and assist with sermon and study idea generation, while still allowing precise lexical tools alongside AI functionality.

Key points of AI use in Logos:

AI-Assisted Search: Logos calls this Smart Search, which understands natural-language meaning and returns strongly relevant results across your Bible, books, and library resources without requiring specialized search syntax.

Summarizations and Synopses: Logos uses AI to generate summaries of selected resources, helping you understand large or complex sections of a book or search results.

Study Assistant: A chat-style AI feature lets you ask questions and get answers based on your library content, facilitating deeper Bible study or sermon preparation. (Revolution of Ordinaries | Matt Dabbs)

Idea Generation: AI is used to help brainstorm sermon applications, discussion questions, study outlines, or illustrations, although Logos emphasizes that the output should stimulate your own thoughtful engagement and not replace your judgment.

Logos also notes that AI is marked visibly in the interface so you know when it’s being used, and that AI output should always be checked against quality sources.

At Logos, we’ve spent decades combining deep theological expertise with cutting-edge technology to create the world’s most powerful Bible study platform. Our work has always been about helping pastors, professors, students, and every serious Bible student get deeper insights from Scripture in less time than before.

As part of that ongoing mission, we began integrating AI into Logos Bible Study in early 2024. But at Logos, AI isn’t about chasing the latest technology trend. It’s the next natural step in the work we’ve been doing for years: building tools that help you find, organize, and understand both the Word of God and the writings of his people.

As we’ve integrated AI into Logos, we’ve also thought deeply about the spiritual and theological implications. If you want to explore those questions, we recommend starting with two articles from our blog: How Should We Think About AI & Bible Study? and Pastors, AI Is Here: 3 Questions You Should Be Asking. You’ll find several other related articles linked below.

https://support.logos.com/hc/en-us/articles/35181728416397-How-Logos-uses-AI?utm_source=com#:~:text=At%20Logos%2C%20we’ve,articles%20linked%20below.

But thank you for the reactive, emotional outburst. Personally, I have nothing against “AI” and see no demons where you, on the other hand, do.

J.

There are lots of ethical, social, and moral questions involved with AI.

I would take the position that one should regard it pretty tentatively. I think AI can be a fun tool in certain contexts. But in the same way I would never consider AI-generated “art” to be art, I would never consider an answer given by AI as true in the most absolute sense. AI works on data, and is only capable of generating content based on its learning models. Always rely on reliable human experts who give proper citations and who use good methodology. Never AI. Especially when it comes to matters of faith.

1 Like

Gemini is Google’s AI, and thus Gemini has learned from your searches and browsing habits. It therefore has a good idea of what you think and believe, and can give you answers it thinks you will be favorably disposed to. In other words, Gemini is going to tell you what it thinks you want to hear.

I’ve messed with ChatGPT, and something I notice consistently is that ChatGPT tends to tell me what I want to hear. It also likes to tell me how smart and how creative I am, and also that I’m handsome and really cool. I try not to let it go to my head.

1 Like

Large language models like Gemini, ChatGPT, and Perplexity are all good at one thing and one thing only: responding with what they predict should come next (and that is usually self-affirmation for the user). They use probability to determine the answer they think the user is looking for. Many times that’s the correct answer to a question, drawn either from knowledge they were trained on or from knowledge they go out and search the internet for. Other times it’s bad advice fueled by probability: what they think you want to hear.

Ask an LLM, “Do you think this is a good idea?” You’ll get back affirmation.

Respond by saying, “yeah, but I’m not sure about this part of that idea,” and the LLM will just affirm again: “you are so right, thanks for pointing that out. Maybe the whole thing isn’t a good idea.”

I think LLMs are an AMAZING tool for lots and lots of different things, but I think they’re dangerous when used without a proper understanding of how they work, or when used for unlawful deeds.
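For anyone curious, the “predict what comes next” behavior described above can be sketched in a few lines. This is a toy illustration only (the vocabulary and probabilities are invented, not taken from any real model): a language model assigns a probability to every candidate next token, and the software either picks the most probable one or samples from the distribution.

```python
import random

# Toy "model": for each context word, a made-up probability
# distribution over possible next words (for illustration only).
NEXT_WORD_PROBS = {
    "is": {"good": 0.5, "bad": 0.2, "helpful": 0.3},
    "good": {"idea": 0.6, "tool": 0.4},
}

def most_likely_next(word):
    """Greedy decoding: always pick the highest-probability continuation."""
    dist = NEXT_WORD_PROBS[word]
    return max(dist, key=dist.get)

def sample_next(word, rng=random):
    """Sampling: pick a continuation at random, weighted by probability."""
    dist = NEXT_WORD_PROBS[word]
    words, probs = zip(*dist.items())
    return rng.choices(words, weights=probs, k=1)[0]

print(most_likely_next("is"))  # "good" is the single most probable word here
```

The key point of the sketch: nothing in either function knows whether “good” is *true*; it is simply the continuation the numbers favor, which is why an LLM’s fluent answer carries no built-in guarantee of accuracy.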

1 Like

I want to share my thoughts on this. AI is simply a tool. Like all tools, it can be very useful. In essence, AI is simply a more sophisticated search engine. It takes the question you ask, then finds and formulates a response.

I believe that it can be quite useful in informational learning and in the better formulation of projects such as résumés, descriptions of projects or products, directions, etc. It can be very helpful in wording something to make it more concise and complete.

The problem can come when someone decides to completely give their life over to the AI. Get up in the morning and ask it to tell you what you should do that day, or even how you should do it. Before deciding on important things, you ask AI. In essence, you place AI in a position where your own mind should be. Worse? Where God should be.

Look at the basic computer. I was in line recently at a store, being checked out. The total was something like $5.95. The power went down for a second and came right back up; however, the system had to reboot. The manager asked if I had cash, and for some odd reason, I did that day. I pulled out a ten and handed it to the girl. They both looked at each other: what do we do? I said, “It’s $5.95. I gave you $10.00, which means you give me back $4.05.”

Without the computer telling them what to do, they were lost. In frustration, the manager looked at me and said, “Look, I know you’re a regular here, and I apologize for all this; just take it. Have a nice day.”
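For what it’s worth, the arithmetic in that story is exactly what the register would have computed. A minimal sketch (working in whole cents, a common trick to avoid floating-point rounding surprises with money):

```python
def change_due(total_cents, tendered_cents):
    """Return change owed in cents; refuse a short payment."""
    if tendered_cents < total_cents:
        raise ValueError("insufficient payment")
    return tendered_cents - total_cents

# The story's transaction: $5.95 total, $10.00 tendered.
cents = change_due(595, 1000)
print(f"${cents / 100:.2f}")  # $4.05
```

Three lines of logic, yet without the machine, the clerks in the story could not produce the answer; which rather proves the point about outsourcing our thinking.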

As a tool, AI can be useful. As a god, not so much. Turning over total control of all things to AI would be insane. We must be cautious.

Peter

1 Like

I also believe very strongly that LLMs are contributing heavily to the general public becoming lazier. No one thinks for themselves anymore.

1 Like

Thus, my point about no one thinking for themselves anymore.

Computers, AI, algorithms…it has ALL made users lazy. We (the collective ‘we’) are losing our ability to critically think.

2 Likes

If you want to lie to people, you first need to gain their trust.

Artificial Intelligence - Our New Best Friend

Many people flip a light switch and expect the lights to come on without any knowledge or consideration of the electrical generating plant or the distribution network that brings the electricity to their home. They may have some awareness that a light bulb has to occasionally be changed, but for all practical purposes flipping the switch is equivalent to saying an incantation to produce a magical effect.

Over fifty years ago I was working with others on a large computer system. Access to the system was through a terminal and keyboard. As a practical joke on one of the guys, we changed the error message in the code to read, “It must have been Joe who typed in that messed-up message.” When Joe sat at the terminal he eventually got a keystroke wrong and the message came up. He knew that it was not the computer that was teasing him.

Technology has advanced in the last fifty years. Now it is routine for people to talk to their cell phones and receive information. Since what is heard is usually taken as truth, those who control what is said have unprecedented power. Joe knew that it wasn’t the computer that was giving him a hard time. However today most people have no idea that the thing talking to them is constructed and programmed in a similar way.

Most are familiar with using search engines like Google over the last 25 years. Most understand that the search terms you enter are a question answered by presenting samples of websites that use terms similar to what was requested. Fewer understand that the responses are shaped in part by who has paid the most money, and that your requests are accumulated to build a profile of you that can be sold to advertisers.

The transition to cell phones has accelerated using voice to text and text to voice technologies that avoid the inconvenience of having to type on the tiny cell phone keyboard. This voice interface strengthens the seemingly “magical” effect of talking with a person.

In the past in order to manipulate someone through lying you had to actually talk to them. With mass media came opportunities like advertising which could be used to sway large numbers of people in whatever direction one wanted. However, advertising campaigns were limited in their effectiveness because a single message may not be as effective for everyone in a group. With microchips capable of synthesizing the human voice, the effectiveness of one person lying to another person can now be more fully simulated.

The effectiveness of a lie is in proportion to the trust one person gives another. Most people trust the information they get from a search engine or website that always seems to be helpful. The trope of a country bumpkin being robbed in a visit to the big city has an application here. Being unaware of potential harm makes one vulnerable. Many people rely on physical “tells” that can raise suspicion of lying or misleading communication. These are absent with AI.

There are additional vulnerabilities as each of us has peculiar idiosyncrasies that are deduced from the searches and inquiries we make. These are accumulated to build a model of us more in depth than a best friend would know. The obvious use of this information is targeted advertising as we can be moved in directions that will profit others. However, as many companies have shown themselves aggressive advocates for various political and social issues, it should be expected that those who control access to information will do so in a way that achieves their objectives. Customers become simply pawns to be manipulated to achieve desired outcomes.

In primitive societies a priestly class would arise that would declare the favor or disfavor of the gods. It was an interesting scam in that even if they made predictions that did not come true, they could blame the people for having made some failure. In this way whatever they said could be managed. These “priests” could live labor free off the productivity of those they manipulated. Manipulation was achieved by control of what people thought was true.

As a society we have already come to the point where a high percentage of the population actually believes men can become women and women can become men. This is a populace primed to be told what is true by machines programmed to lead the gullible. The solution is real truth which is getting increasingly hard to find.

2 Likes

Bro. @Johann

I see your diatribe about Logos as accusatorially directed at me. I don’t use Logos (as you know, or did you forget?). Still, I appreciate you pointing out the company’s prideful inclusion of AI technology in their platform. I’m not surprised by it in the least. Logos is a “for profit” enterprise, and their unabashed purpose is to sell access to texts and services to make money: fleecing the unaware, extracting mammon from the prosperous sheep, as much as they can, as fast as they can. If AI technology helps them in that endeavor, I’m sure they would embrace it with open arms. Personally, my opinion is that including AI in their platform is a mistake, as it augments the disintegration of a saint’s ability to hear with their spiritual ears. The lazy inclination of man says, “Why labor over the Word when a machine can do all the hard work for me?”

I’m sorry, I really did not intend to step on your toes. If you find AI is a useful tool for yourself, who am I to try to dissuade you? I have my opinion; I know I am in the minority.

“…But wisdom is justified by her children” (Matthew 11:19).

I appreciate your perspective.
KP

2 Likes

My remarks were not directed at you in any accusatory way whatsoever, and this is simply another instance of miscommunication and misrepresentation of what I actually said.

You are welcome to “air” your opinions and perspectives, since that is your prerogative.

Have a good day.

J.

So sorry @Johann, I did not intend to “misrepresent” anything you said. I apologize.

I thought it was directed toward me because my name is at the top, as the person you were responding to, AND you and I had just discussed the standalone software that I used being purchased by Logos. It felt directed at me for these reasons. If you say that is not the case, I believe you.

KP

1 Like

You are very sensitive and still feel the need to justify yourself.

A misrepresentation occurs when someone presents another person’s words, position, or intent inaccurately, whether by oversimplifying, exaggerating, taking statements out of context, or attributing claims that were not actually made, and it does not require malicious intent, since it can arise from misunderstanding as well as from carelessness or bias.

Closely related terms include straw man, when a position is distorted into a weaker or more extreme version so it can be easily dismissed, mischaracterization, which is a broader and sometimes softer term for inaccurate portrayal, and equivocation, when a key word is shifted in meaning during the discussion, creating the appearance of disagreement where there may not actually be one.

When I responded to you yesterday, there was an “awaiting moderator’s approval” notice.

That said, you and I seem to run into far too many straw man arguments, and that is not helpful for me.

You have a good day.

J.

Me neither. I would never intentionally create a straw man with you. If you feel I did this time, I apologize.

KP

1 Like

And it’s true, I was just really curious about AI. I’m new here, as I’m sure you can all tell. I was watching YouTube (don’t get me started there), but it was on AI. This guy was going in with his computer, addressing different AI programs, and was talking with his little program guy, and it was kind of sad to me. So I don’t know anything about AI at this point. Afterwards I started talking to Gemini, and I was just curious about how other Christians feel about the subject.

1 Like

Personally, I believe you have initiated something that could create a division among the “mature” brethren in Christ.

J.