Google Research posted a short article about some very interesting research by Alex Graves, a Junior Fellow in the Department of Computer Science at the University of Toronto. Graves' work focuses on Long Short-Term Memory (LSTM) recurrent neural networks. Almost immediately after the link appeared on Twitter, both the tweet and the Google+ post were taken down.
Graves' research suggests that by predicting one data point at a time, a Long Short-Term Memory recurrent neural network can generate complex sequences. To demonstrate the idea, he chose handwriting synthesis and built a tool to show it in action. Given a set of writing samples from different people, his program can render any phrase in the selected style of writing. The results aren't perfect, but they are surprisingly accurate.
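For the curious, the core idea is simple to sketch: the network takes in one point of a sequence, predicts the next, then feeds its own prediction back in as the next input, step after step. Below is a minimal, hypothetical illustration in PyTorch. To be clear, this is not Graves' implementation: his model conditions on the text being written and emits mixture-density parameters over pen offsets, whereas this sketch simply regresses the next pen point directly, and every class name and size here is an assumption for illustration.

```python
import torch
import torch.nn as nn

class PointLSTM(nn.Module):
    """Toy LSTM that maps one pen point (dx, dy, pen_up) to the next."""

    def __init__(self, hidden_size=256):
        super().__init__()
        self.lstm = nn.LSTM(input_size=3, hidden_size=hidden_size,
                            batch_first=True)
        self.head = nn.Linear(hidden_size, 3)  # predicted next point

    def forward(self, x, state=None):
        out, state = self.lstm(x, state)       # state carries the pen's "memory"
        return self.head(out), state

@torch.no_grad()
def generate(model, steps=100):
    """One data point at a time: feed each prediction back in as input."""
    point = torch.zeros(1, 1, 3)               # (batch, time, features): resting pen
    state = None
    trajectory = []
    for _ in range(steps):
        point, state = model(point, state)     # predict the next point
        trajectory.append(point.squeeze().tolist())
    return trajectory

model = PointLSTM()                             # untrained here, so output is noise
print(generate(model, steps=5))
```

After training on real pen trajectories, sampling this way is what lets the network reproduce the long-range structure of a person's handwriting rather than just copying strokes.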
If you’d like to see the process in action, you can try it out for yourself here. The tool currently offers five different writing styles and lets you type up to 100 characters to convert.
It’s somewhat unnerving to think about this. We’ve come to a point where computers can not only read cursive writing and decipher what it means with acceptable accuracy, but also replicate handwriting in a convincing manner. I can’t think of many good reasons to use this technique, but I can certainly think of a few that aren’t so good. Maybe that’s why Google pulled the post so quickly.