The question is not whether this or that piece of software thinks; it is not a simple binary yes/no. It is more like: how much, to what granularity, or through how many levels of indirection of abstract processing is a human, animal, or piece of software capable of operating?
If a human has to think to accomplish X, and a machine can do X, then the machine can be said to think.
And yes, a normal calculator thinks by that standard, albeit in an extremely limited domain. BUT if the fully general-purpose, almost infinitely deep abstraction that the human mind is capable of is, say, 'level 1,000,000,000', then a calculator 'thinks' at level 1, or perhaps 10.
Now add pattern recognition and 'conditional branching', which a computer has and a calculator doesn't, and perhaps you are at level 100 (of 1,000,000,000).
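The jump from fixed arithmetic to conditional branching can be sketched in a few lines of Python (a toy illustration only; the function names are hypothetical, not from any real calculator or computer architecture):

```python
# Calculator-like step: a fixed arithmetic mapping from inputs to output.
def calculate(a, b):
    return a + b

# Computer-like step: the result of one computation decides which
# computation runs next (conditional branching), a primitive form of
# "if this, then do that".
def branch(a, b):
    total = calculate(a, b)
    if total > 10:  # behavior now depends on an intermediate result
        return total * 2
    return total - 1

print(calculate(3, 4))  # 7
print(branch(3, 4))     # 6  (7 is not > 10, so 7 - 1)
print(branch(8, 9))     # 34 (17 > 10, so 17 * 2)
```

The calculator always performs the same operation; the computer can vary its behavior based on intermediate results, which is one small rung up the ladder of abstraction the essay describes.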
'Thinking' is not a binary yes/no; it is a question of what depth, or what level of generality of abstract processing (i.e. thought), an entity, whether man, machine, animal, or alien, is capable of.
Intelligence is directly related to how general or broad the set of problem types an entity can solve. So a modern digital calculator is more intelligent than an abacus, a computer is more intelligent than a calculator, an ape is more intelligent than a computer, and a human is more intelligent than an ape.
People will never see computers as 'thinking', though, until computers can perform 'thinking' (i.e. abstract processing) across an extremely varied set of domains. For that, they will have to understand the world. And to do that, they will have to partake in natural language understanding.
Now, computers won't ever REALLY understand the world the way we do, simply because we have senses that they do not (and, eventually, probably vice versa), but language will be a kind of 'common denominator' that lets us communicate with them and solve real-world problems.