The real-world implications of this are quite limited, though: the Turing test doesn't deal with actual intelligence, only with external behavior. In the case of this AI bot, all it had to do was imitate a 13-year-old boy from Odessa, Ukraine. The test can be passed by following a completely predetermined set of rules, without the bot ever being able to learn or to react to situations it hasn't been programmed for.
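To make that concrete, here is a minimal, purely hypothetical sketch of how a rule-based chatterbot of this kind works (this is not Eugene Goostman's actual code; the patterns and replies are invented for illustration). Every input is matched against a fixed list of patterns, the first match triggers a canned reply, and anything unmatched falls through to a deflecting stock answer:

```python
import re

# Hypothetical rule table: (pattern, canned response). Nothing here is
# learned; the bot can only react to inputs its author anticipated.
RULES = [
    (re.compile(r"\bhow old\b", re.IGNORECASE), "I'm 13 years old."),
    (re.compile(r"\bwhere .*from\b", re.IGNORECASE), "I live in Odessa, Ukraine."),
    (re.compile(r"\bname\b", re.IGNORECASE), "My name is Eugene."),
]

# Deflecting fallback for unrecognized input, a common chatterbot trick.
FALLBACK = "Hmm, that's a tricky question. What do you think?"

def reply(message: str) -> str:
    """Return the first matching canned response, or the fallback."""
    for pattern, response in RULES:
        if pattern.search(message):
            return response
    return FALLBACK
```

A judge asking "How old are you?" gets a plausible answer, while anything off-script is simply deflected, which is exactly why passing such a test says little about genuine intelligence.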
The modern-day Turing Test isn’t quite like that, though. In this case, Eugene Goostman — an AI developed by Vladimir Veselov, Eugene Demchenko, and Sergey Ulasen — is a chatterbot. Eugene is basically just a text box on a website: You type your message into the box, and then Eugene responds. At the event held at the Royal Society in London — organized by the University of Reading to celebrate the 60th anniversary of Turing’s death — a number of judges had a five-minute “conversation” with Eugene. Thirty-three percent of the judges believed him to be human, crossing the 30% threshold mandated by Turing and making Eugene the first AI to pass the Turing Test.

Source: ExtremeTech
By this point, you can probably tell that the Turing Test — especially the modern-day variation — is rather flawed. For a start, the judges already know ahead of time that computers are involved, and thus may be predisposed to give a more optimistic (or cynical) response. We also don’t know what questions the judges asked (were they the right questions?). Presumably the test carried out at the Royal Society was double-blind (the judges didn’t know whether they were talking to a human or a chatterbot), but the official press release doesn’t mention it, so maybe not (in which case, the results are worthless). But most of all, it’s important to note that the Turing Test doesn’t actually deal with intelligence — it’s only concerned with external behavior — how the machine acts — rather than what’s going on inside.