Skynet gets another weapon

This is a quote from the second Terminator movie.

In three years, Cyberdyne will become the largest supplier of military computer systems. All stealth bombers are upgraded with Cyberdyne computers, becoming fully unmanned. Afterwards, they fly with a perfect operational record. The Skynet Funding Bill is passed. The system goes online on August 4th, 1997. Human decisions are removed from strategic defense. Skynet begins to learn at a geometric rate. It becomes self-aware at 2:14 AM, Eastern time, August 29th. In a panic, they try to pull the plug.
 
Back in the late '60s I knew a guy who was a professor at a Penn State extension campus. He told me about a computer at the main campus that had a self-diagnosis program and a short list of fixes for common problems. While doing some back-checking, they found an instance where the computer had chained several of those fixes together to resolve an issue that wasn't on the list of problems it was supposed to solve. Much chin scratching and hair pulling ensued. Twern't any code for it to do that.

And, that was in the proverbial stone age so far as computers go........

Maybe 20 years ago I was reading in a computer magazine where one of the big shot code creators was lamenting about the unintended consequences of cheap memory. When memory was expensive code had to be concise and well written. When memory got cheap, code monkeys got sloppy when they wrote code. If there was a problem, they'd just write more code to fix the problem instead of finding the cause of the issue and correcting it.
 
I remember reading a story about when Facebook was working on a system where multiple computers could communicate among themselves. Relatively quickly, the computers developed a new language known only to themselves, and strange and unexplainable things began to happen, so they pulled the plug and terminated the project. I'll see if I can find it.

Here it is: Facebook's artificial intelligence robots shut down after they start talking to each other in their own language | The Independent
 
Maybe 20 years ago I was reading in a computer magazine where one of the big shot code creators was lamenting about the unintended consequences of cheap memory. When memory was expensive code had to be concise and well written. When memory got cheap, code monkeys got sloppy when they wrote code. If there was a problem, they'd just write more code to fix the problem instead of finding the cause of the issue and correcting it.

Just think. Today, anyone can buy a brand new 4 TB drive for less than $100, and a used one for less than $50. Back when I was still working in the late '00s, we needed to buy a 1 TB drive to store some very large graphics files. Everyone thought it was miraculous that a drive could even be that large. I don't remember its price, but I think it was several thousand dollars.

The first HD I bought was in the mid-1980s, and it was relatively tiny: less (maybe much less) than 10 MB. Yet it was over $1,000 back then. At that time, when you bought a PC, EVERYTHING was priced separately. I think the first one I had (an IBM PC) priced out at close to $20K. And that was in 1985 dollars. And it did virtually nothing close to what the simplest $200 netbook from Best Buy can do today. We didn't know how primitive we were back in those pre-Windows days. Can anyone imagine what AI technology will be like 40 years from now? Assuming anyone will be around to use it.
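For fun, here's a quick sanity check on how far storage prices have fallen, using the rough figures above (about $1,000 for a 10 MB drive in the mid-'80s versus about $100 for a 4 TB drive today; these are the approximate prices from the post, not exact historical data):

```python
# Rough cost-per-megabyte comparison, using the approximate
# prices quoted above (not exact historical figures).
mid_80s_price = 1000                  # dollars, ~10 MB hard drive
mid_80s_capacity_mb = 10

today_price = 100                     # dollars, ~4 TB drive
today_capacity_mb = 4 * 1000 * 1000   # 4 TB in MB (decimal units)

cost_then = mid_80s_price / mid_80s_capacity_mb   # $100 per MB
cost_now = today_price / today_capacity_mb        # $0.000025 per MB

print(f"Then: ${cost_then:.2f}/MB, now: ${cost_now:.6f}/MB")
print(f"Price per MB fell by a factor of about {cost_then / cost_now:,.0f}")
```

That works out to storage getting roughly four million times cheaper per megabyte, on these back-of-the-envelope numbers.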

My latest story. After about a year, I decided to list some un-needed items on eBay last night. As anyone familiar with selling on eBay knows, you need to provide a verbal description of the item you are selling. When I got to that part, a little pop-up appeared asking if I wanted eBay's AI to write my item's description. I passed, but I probably should have at least tried it to see how well it would have read my mind.
 
I can see it now: "Uh, General, somewhere, somehow, we have a 7th grader who has hacked our computers and is flying our aircraft."
 
Great. Now when a fighter jet crashes into your house, they'll blame "software error" instead of pilot error.
Good point, but my first thought is...
Does that in some way reduce their liability - or even shift the liability to someone else (the programmers)?
 
Back in the late '60s I knew a guy who was a professor at a Penn State extension campus. He told me about a computer at the main campus that had a self-diagnosis program and a short list of fixes for common problems. While doing some back-checking, they found an instance where the computer had chained several of those fixes together to resolve an issue that wasn't on the list of problems it was supposed to solve. Much chin scratching and hair pulling ensued. Twern't any code for it to do that.

And, that was in the proverbial stone age so far as computers go........

Maybe 20 years ago I was reading in a computer magazine where one of the big shot code creators was lamenting about the unintended consequences of cheap memory. When memory was expensive code had to be concise and well written. When memory got cheap, code monkeys got sloppy when they wrote code. If there was a problem, they'd just write more code to fix the problem instead of finding the cause of the issue and correcting it.

My wife is the Senior Programmer/Analyst for the Student Records Department of a private university. She's been with them for 35 years.

Your description above sounds just like their approach to "fixing" problems in their systems. It is too much work/trouble to find the root cause of the problem, so just add some more code to "fix" it...
 
Just think. Today, anyone can buy a brand new 4 TB drive for less than $100, and a used one for less than $50. Back when I was still working in the late '00s, we needed to buy a 1 TB drive to store some very large graphics files. Everyone thought it was miraculous that a drive could even be that large. I don't remember its price, but I think it was several thousand dollars.

The first HD I bought was in the mid-1980s, and it was relatively tiny: less (maybe much less) than 10 MB. Yet it was over $1,000 back then. At that time, when you bought a PC, EVERYTHING was priced separately. I think the first one I had (an IBM PC) priced out at close to $20K. And that was in 1985 dollars. And it did virtually nothing close to what the simplest $200 netbook from Best Buy can do today. We didn't know how primitive we were back in those pre-Windows days. Can anyone imagine what AI technology will be like 40 years from now? Assuming anyone will be around to use it.

My latest story. After about a year, I decided to list some un-needed items on eBay last night. As anyone familiar with selling on eBay knows, you need to provide a verbal description of the item you are selling. When I got to that part, a little pop-up appeared asking if I wanted eBay's AI to write my item's description. I passed, but I probably should have at least tried it to see how well it would have read my mind.

I remember those days too, DWalt. My first computer was a 286 floppy-based system that I bought used from a company that was holding a going-out-of-business sale.

I upgraded that system by adding a used 2.1 MB hard drive and a memory expansion card that let me install more than 640 KB of memory. We thought we'd NEVER use up 2.1 MB of storage. But that was over 35 years ago.

Moore's Law marches on.

For those unfamiliar with Moore's Law, it says that the number of transistors on a chip (a rough measure of computing power) will DOUBLE about every two years, and that the cost per transistor will fall as the count rises. Basically, as the number of transistors increases, you get ever more computing power for ever less money.

Gordon Moore first made this prediction in 1965. So here we are, in 2024, almost 60 years later, and Moore's prediction has held up remarkably well.

For the mathematicians in the crowd, almost 60 years is roughly 30 doublings, which means today's chips should be on the order of half a billion times more powerful than the chips of 1965.
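The arithmetic is easy to check. A minimal sketch, assuming one clean doubling every two years (a simplification; the actual pace has been lumpier):

```python
# Back-of-the-envelope Moore's Law growth from 1965 to 2024,
# assuming a doubling of transistor count every two years.
years = 2024 - 1965      # 59 years
doublings = years / 2    # 29.5 doublings
growth = 2 ** doublings

print(f"{doublings} doublings -> roughly {growth:,.0f}x more transistors")
```

That lands in the high hundreds of millions, which is why "half a billion times" is the right order of magnitude rather than the "4 million" you'd get from only 22 doublings.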

The computer that Neil Armstrong and his fellow astronauts used to get to the moon in 1969 had on the order of 64 KB of memory. They had to load and run the calculations for their capsule's trajectory, write down the results, clear the computer's memory, and then feed those previous results into the next set of calculations, just to figure out their way back home.

They had to figure it out one-step-at-a-time, because the computers they were using were so limited by the amount of available memory.

Compare that to the computers we all take for granted today.

Even a really cheap Chromebook today has 64 GB of storage and 4 GB of RAM. That 4 GB is more than sixty THOUSAND times the 64 KB of memory that the Apollo astronauts had to work with.

FWIW, most of us carry around a cell phone in our pocket that has 100,000 (or more) times the storage and computing power that our Apollo astronauts used to get to the moon and back.

That is pretty incredible when you think about it. The technological advances that we have made in the last half-century completely DWARF all of the advances that humanity has made over the last 6,000 years of recorded human history....
 