Afraid of AI?

Seems like any time AI is brought up lately, folks respond with concern and a bit of paranoia.
Some of it seems legit. However, very recently I ran into a work-related problem that needs a solution.
Right now they are applying Chevette-level solutions to a Ferrari-level problem, and if the zip ties don't hold the 600 surging horsepower, someone is coming for someone else's head.
So, I got a hair up my backside to ask Google's AI for whatever solutions might be on record elsewhere for this issue....
It turns out the equipment is too new to have a solution; it just graduated from the lab and hasn't even gone live yet.
It was then that I discovered another issue.
AI is a self-breaching hacker's wet dream.
With a single crumb of information, it gave up some unbelievably sensitive goods and an endless string of leads to get more.
We fear this .... Why? 😮
 
Bad players will get a lot more traction out of AI than the good guys. It's about to change our world.
More bad than good.
The first thing man does with a newly-developed technological breakthrough is try to figure out how it can be used against his fellow man. AI is no different. The evil uses of AI are as infinite as the good uses.
If we don't adopt it now, our enemies will do us in using AI within 5 years.
If we do adopt it now, our own AI systems will do us in within 10 to 15 years.
The end result will be the same. Just a matter of time.
The supposedly smart people pushing AI are so thoroughly enamored of it that they are blinded to its future ability to control and eventually destroy what freedoms we have left. Frankly, I think most of them don't care.
 
Agree 1,000%
We humans will become slaves to it when it develops to the point that humans lose control and can't even turn it off.
Some forms of it have begun thinking for themselves and have developed computer languages that humans cannot understand so that different AI systems can communicate with each other privately.
In the near future the only way to stop an AI system may be to shut down entire power grids. As the technology develops a sense of self and self-preservation, we may not be able to override its ability to prevent its own destruction without collapsing most of civilization.
Isn't that uplifting?
 
I'm still trying to master Natural Intelligence. :ROFLMAO:

I still think about the case where a law firm had AI write a brief for them and submitted it to the court without anybody in the firm bothering to read it. The judge was not pleased with the citations to non-existent cases and threw the book at the firm.
 
But what do we do when AI brief-writing programs start referencing other falsified AI briefs and cases that have not been identified as such?
 
Review of prospective briefs will probably have to be done the old-fashioned way, with Shepard's Citations and/or Google Scholar. My old law school still has bound case books in the library.
 
Using AI to write a brief is especially stupid unless you double-check everything. My grandson used AI, ChatGPT to be specific, to write a college paper. I'm his editor, so he sent it to me, and I laughed. It was excellent but somewhat verbose, and it said the same thing three different ways.

Example for your amusement - made up on the spot, but you will understand.

The sky is blue in the summer except in the rainy season.

When it rains, the sky is no longer blue, but once the rain stops the sky will definitely be blue again.

At night, the sky is black; it is only blue in the daytime. Except when it rains.

:geek:
 
Although it's a trite opinion: I'm not afraid of AI so much as of how people interact with it. Fundamentally, as ISCS Yoda demonstrated, it's not thinking. It's compiling things (sometimes garbage) that people (or other AI agents) have written, which the user interface presents in whatever format is requested. As Faulkner said, it tends to accelerate the efforts of bad (or lazy, or uncritical) people more than others.

AI tools are here and have potential, but they're going to force us to stomp through a lot of fuzzy & verbose slop (John Oliver has a humorous video about this posted over on YouTube). As for the more sinister concerns, well, it's based on algorithms that attempt to do as requested with what they have been supplied. There was a good article about this on the Ars Technica web site called "Is AI really trying to escape human control and blackmail people?" - it's worth a read. Similarly, those of y'all familiar with the "Eliza effect" associated with Joseph Weizenbaum's work in the 1960s have seen this story before.

In a recent thread on .38 Special performance pre- and post-SAAMI, a member used an AI-generated answer as something of a starting point, but then filled in other information. Truthfully, the member could have skipped the AI agent entirely and presented their own analysis, which was excellent.
 
