Hakurei ... splitting computer hairs isn't a practical argument. Yes, in any practical sense memory is finite, but a finite machine will still execute most programs (most programmers design for the hardware).

Yes, but Turing machines are not called upon to do such tasks. Turing machines are used to explore the computable functions. There is no problem that a computer with finite memory can solve that a Turing machine can't. Turing machines cannot compute whether an arbitrary Turing machine will halt on an arbitrary input. Therefore, neither can practical computers with finite memory. You can decide it in special cases, but not in the general case. It is a computational analogue of a Gödel statement.
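The classic diagonalization argument behind that claim can be sketched in a few lines. This is only an illustration: `halts` here stands in for a hypothetical halting oracle, and the point is that any concrete candidate for it must misjudge some program.

```python
def make_paradox(halts):
    """Given any claimed halting oracle halts(f) -> bool, build a
    function the oracle must misjudge (diagonalization sketch;
    'halts' is a hypothetical stand-in, not a real library)."""
    def paradox():
        if halts(paradox):
            while True:   # oracle says we halt, so loop forever
                pass
        return "done"     # oracle says we loop, so halt immediately
    return paradox

# Take any concrete candidate oracle, e.g. one that always answers "no":
always_no = lambda f: False
p = make_paradox(always_no)
result = p()   # returns "done": p halts, yet the oracle said it wouldn't
```

Whatever rule `halts` encodes, `make_paradox` turns its own answer against it, so no total, always-correct `halts` can exist.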

And there is a halt, or a loop.

No. The unlimited tape destroys that assumption. There are about 40 Busy Beaver machines of *five* states for which it is unknown whether they halt or run forever. Only five fucking states. Why do you think that is? The tape, man. It's not a precise, one-to-one loop where a single, discrete configuration of machine and tape is recapitulated. "There is a halt, or a loop" is a gross oversimplification of the issue.
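To be fair, the halt-or-loop intuition does hold when memory is strictly bounded: the set of (state, head, tape) configurations is then finite, so a simulator that records every configuration it visits must eventually see a halt or an exact repeat. A toy sketch, assuming a fixed-length wrap-around tape (an illustrative choice; the machines below are made up, not Busy Beaver holdouts):

```python
def run_bounded(transitions, tape, state="A", head=0):
    """Simulate a Turing-style machine on a FIXED-LENGTH tape.
    Because the tape cannot grow, there are finitely many
    (state, head, tape) configurations, so the run must either
    halt or revisit a configuration (a provable loop)."""
    seen = set()
    while state != "HALT":
        config = (state, head, tuple(tape))
        if config in seen:
            return "loops"          # exact configuration repeated
        seen.add(config)
        symbol, move, state = transitions[(state, tape[head])]
        tape[head] = symbol
        head = (head + move) % len(tape)  # bounded tape wraps around
    return "halts"

# A machine that flips every cell forever: provably loops on a bounded tape.
flipper = {
    ("A", 0): (1, 1, "A"),
    ("A", 1): (0, 1, "A"),
}
print(run_bounded(flipper, [0, 0, 0]))  # -> loops
```

With an unbounded tape this detection breaks down, because the machine can keep writing fresh cells and never repeat a configuration. That is exactly why halt-or-loop fails for real Turing machines.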

There's also the fact that just because a Turing machine can compute some function doesn't mean it will be quick about it. Believe it or not, the time taken to compute a solution is a practical concern. Any practical AI will have to be not just correct enough, but also fast enough.
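The correct-but-not-quick point can be made with a toy example: two functions that compute the same value, one of which is hopeless in practice (the names are just illustrative).

```python
from functools import lru_cache

def fib_slow(n):
    """Correct but exponential time: roughly 2**n recursive calls."""
    return n if n < 2 else fib_slow(n - 1) + fib_slow(n - 2)

@lru_cache(maxsize=None)
def fib_fast(n):
    """The same function; memoization makes it linear in n."""
    return n if n < 2 else fib_fast(n - 1) + fib_fast(n - 2)

# Both compute the same (computable) function; only one is usable at scale.
assert fib_slow(20) == fib_fast(20) == 6765
```

Both are trivially Turing-computable; computability says nothing about whether an answer arrives this century.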

Of course sometimes the goal is some output, but sometimes it's more indirect: the side-effects of execution are what's sought.

Bullshit. "Halting" in practice means that a procedure will exit having performed some task, including the "side-effects" of execution. The tape of a Turing machine is exactly this sort of "side-effect". How do you guarantee that a computer has really finished your "side-effect" unless it successfully exits the procedure you bundled up for that task?

Are you simply doing a "must have last comment"? That doesn't address the OP.

I admit that we got off track, but your stupidity and google-scholarship annoys me.

Marketing defines AI, that and the credulity of DARPA or venture capitalists or the completely ignorant computer user.

And yet we get useful applications of AI like Siri and expert systems. Just because the usual vision of a butler robot (a conventional, popular notion of AI) has not materialized doesn't make the investigation of AI something that "marketing" defines. We've had to tackle smaller, more foundational problems of AI (like navigating an unpredictable environment) before moving on to general intelligence. That's just what you have to do when a problem turns out to be harder to crack than you first thought.

If moving ones and zeros is AI, then we have had AI for many decades now. My programmable calculator in the 1970s was worthy of civil rights ;-)) Number 5 Is Alive! Similarly, if random atomic configurations are a life form, then the kitchen garbage I throw out is a life form (not just the mold on the overly old cottage cheese).

And now you just devolve into rhetoric. A life form is not a *random* atomic configuration. It's been through non-random selection. There's no purposeful top-down design involved, but it's not random. We are a general intelligence that evolved because as life forms we needed to solve problems dealing with an arbitrary and often hostile environment. Today's AI products are still very sheltered things, so it's not very surprising that when released into the wild they often fail spectacularly.

Intelligence wasn't something we crafted; it emerged naturally. So of course the definition of intelligence is fuzzy when we try to characterize it after the fact, and artificial intelligence even more so.