The concept of artificial intelligence has its origins in myths and legends. Ancient civilizations speculated about anthropomorphized creatures animated from various materials by magical means. These included the golems of Jewish folklore, which were made of clay or mud.
Later, Greek, Egyptian, and Chinese mythology offered examples of automatons: artificial men and animals made by craftsmen and engineers. The recovered artifacts are very simple and moved in fixed patterns, but more complex experiments with wind and water power continued through the Middle Ages and Renaissance. None of these automatons, however, had any decision-making ability.
The development of computers began in the 19th century. The first models were steam-powered and used punch cards, which encoded information as holes in pieces of paper that could be read by mechanical means. Later, vacuum tubes and eventually the transistor enabled the construction of reprogrammable devices that could perform various calculations rapidly.
These concepts were pioneered by scientists such as Alan Turing, who first formally explored artificial intelligence in 1950. AI remained suitable only for simple tasks until the 1980s, when computational power began to expand dramatically. History was made in 1997, when IBM's Deep Blue defeated the reigning world chess champion, and artificial intelligence continues to grow more lifelike by the day.
Artificial intelligence works just like any other program: given input X, it produces output Y according to some algorithm it's been given. The only difference is that a typical AI's web of logic is vastly more complicated than most other programs'. Either it must be coded ahead of time to anticipate anything that could happen, or it must be designed to reprogram itself in response to failure, a process called machine learning.
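The distinction can be sketched in a few lines of code. This is a toy illustration, not a real game AI: the bot, its moves, and the update factors are all invented for the example. It "reprograms itself in response to failure" by lowering the weight of moves that lose.

```python
import random

# Minimal sketch of a self-adjusting AI: moves are chosen by weight,
# and weights are updated after each result. All names and numbers
# here are illustrative assumptions.

class TinyLearner:
    def __init__(self, moves):
        self.weights = {m: 1.0 for m in moves}

    def choose(self, rng):
        moves = list(self.weights)
        return rng.choices(moves, weights=[self.weights[m] for m in moves])[0]

    def feedback(self, move, won):
        # Reinforce winning moves, penalize losing ones.
        self.weights[move] *= 1.5 if won else 0.5

bot = TinyLearner(["rock", "paper", "scissors"])
bot.feedback("rock", won=False)  # "rock" is now half as likely to be chosen
```

A pre-coded AI, by contrast, would ship with its weights fixed forever; only the learning version changes its own behavior after deployment.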
Programs this complex are prohibitively expensive to compute on the blockchain using smart contracts. Many popular blockchain gaming platforms avoid this problem entirely by relying on centralization: they use the blockchain only for things like token economics or true ownership, while the AI runs on a server under the developers' control. A decentralized autonomous AI has yet to be deployed.
One could be accomplished, however, by processing the AI's code off-chain. For example, two players in a one-time match could each run the AI code locally on their own computers. Now suppose one of them is a summoner whose minions fight autonomously: the opponent might claim that a minion made a foolish move and failed to kill them only because the summoner tampered with its code. In that case, the AI's code would have been hashed to the blockchain beforehand; whichever player's local copy matched that hash would be vindicated upon comparison, and the minion's decision would stand as computed from that code.
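The dispute-resolution step above amounts to a hash commitment. A minimal sketch, assuming the AI's source is committed as a SHA-256 digest (the function names and sample source string are illustrative, not part of any real platform):

```python
import hashlib

def code_hash(ai_source: str) -> str:
    """Return the SHA-256 digest of the AI's source code."""
    return hashlib.sha256(ai_source.encode("utf-8")).hexdigest()

# Hash committed to the blockchain before the match (illustrative code).
committed_hash = code_hash("def decide(state): return 'attack'")

def verify_local_copy(local_source: str) -> bool:
    """A player's local copy is authentic only if it matches the on-chain hash."""
    return code_hash(local_source) == committed_hash

print(verify_local_copy("def decide(state): return 'attack'"))  # True
print(verify_local_copy("def decide(state): return 'flee'"))    # False: tampered copy
```

Whichever player's copy reproduces the committed hash is running the agreed-upon AI, so its decisions are the canonical ones.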
It gets more complicated when the artificial intelligence is persistent and can have assets of its own. In a decentralized MMORPG, every non-player character would need its code constantly monitored by everyone. Sharding—breaking the blockchain into pieces distributed among players—can solve this problem, allowing an NPC's AI to be processed only by those close enough to be affected by it. We just need to discover a mechanism for preventing collusion and watching empty areas.
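Proximity-based sharding could look something like the following sketch, assuming a simple 2-D grid world where each grid cell is one shard. The shard size, player names, and coordinates are all illustrative assumptions:

```python
# Sketch of region-based sharding: an NPC's AI is processed only by
# players whose position falls in the same shard (grid cell).

SHARD_SIZE = 100  # world units per shard cell (assumed)

def shard_of(x, y):
    """Map a world position to its shard coordinates."""
    return (int(x // SHARD_SIZE), int(y // SHARD_SIZE))

def processors_for_npc(npc_pos, players):
    """Return the players responsible for running this NPC's AI."""
    npc_shard = shard_of(*npc_pos)
    return [name for name, pos in players.items() if shard_of(*pos) == npc_shard]

players = {"alice": (120, 30), "bob": (450, 30), "carol": (150, 80)}
print(processors_for_npc((110, 50), players))  # ['alice', 'carol']
```

The open problems the text mentions show up directly here: if every player in a shard colludes, the NPC's code goes unchecked, and a shard with no players in it runs no AI at all.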
Although not technically "artificial" intelligence, another option is to let humans control the NPCs, making them assets controlled by people's private keys. Since NPCs often work as merchants, fishermen, or in some other profession, they could earn income for their owners. Hashing their programming to the blockchain could force them to stay within normal behavior parameters.
The Turing Test
In addition to figuring out how to deploy artificial intelligence in decentralized worlds, blockchain game developers also have to worry about keeping AI out. AI bots can destroy any blockchain game that utilizes human mining by farming all the available assets, devaluing them on the market, and possibly obstructing humans from playing at all. They can also crash blockchains that use proof-of-play, where the winner of a game gets to mine the next block.
The answer lies in what’s called a Turing Test. Traditionally, this involves an artificial intelligence trying to convince a human it’s one of them by properly answering text questions. Functionally speaking, however, it could be any activity humans do, so long as the AI has the right hardware connections.
Of course, we must accomplish the reverse of this: prove the humanity of human players. Since it would be too time- and resource-intensive to screen them manually, our best bet is to use a Reverse Turing Test, wherein humans try to convince an artificial intelligence that they're not one of them. The most famous of these is the CAPTCHA, which you might remember if you've ever tried to prove you're not a bot by reading words written in funny-looking letters.
The problem is how to decentralize this procedure. Current online games extrapolate from gameplay: players demonstrating superhuman reflexes or the ability to play for long periods without breaks are identified as suspicious by their central servers. Moderators review the situation, the AI is kicked from the game, and whoever was using it gets banned.
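The server-side heuristics described above can be sketched as a simple flagging function. The thresholds below are illustrative assumptions, not values from any real game:

```python
# Minimal sketch of gameplay-based bot detection: flag accounts showing
# superhuman reflexes or implausibly long sessions without breaks.

HUMAN_REACTION_FLOOR_MS = 100  # assumed: consistent reactions faster than this
MAX_SESSION_HOURS = 16         # assumed: playing longer without a break

def is_suspicious(reaction_times_ms, session_hours):
    avg = sum(reaction_times_ms) / len(reaction_times_ms)
    return avg < HUMAN_REACTION_FLOOR_MS or session_hours > MAX_SESSION_HOURS

print(is_suspicious([80, 85, 90], 2))      # True: superhuman average reflexes
print(is_suspicious([250, 300, 280], 20))  # True: marathon session with no breaks
print(is_suspicious([250, 300, 280], 3))   # False: ordinary human play
```

In a centralized game this runs on the server and a moderator reviews the flag; the open question for a decentralized game is who runs it and who reviews.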
Online games without servers will need to devise new methods. If artificial intelligence can be decentralized, it can also be used to detect suspicious players. Without moderators or admins, however, developers may have to resort to a sort of trial by jury for the review process, with jurors randomly selected from the player pool.
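For such a jury to work without a central server, every node must select the same jurors. One way to sketch that is to seed the random selection with a shared on-chain value such as a recent block hash, so the draw is deterministic yet unpredictable in advance. The player names, block hash, and jury size here are illustrative assumptions:

```python
import random

def select_jury(player_pool, block_hash, jury_size=5):
    """Draw a jury deterministically from a shared seed, so every
    node arrives at the same selection independently."""
    rng = random.Random(block_hash)       # shared seed -> shared result
    return rng.sample(sorted(player_pool), jury_size)  # sort for a stable order

pool = {"alice", "bob", "carol", "dave", "erin", "frank", "grace"}
jury = select_jury(pool, "0xabc123")
print(jury)  # the same five jurors on every node, given the same block hash
```

Because the seed comes from the chain rather than from any single player, no one can choose their own jurors, though a real design would also need to exclude the parties to the dispute.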