Remember David Ha? He’s the former Goldman Sachs managing director and co-head of Japanese rates trading who landed a highly sought-after 12-month residency at Google Brain, Google’s program for jump-starting careers in machine learning, based in California.
Ha joined Google in June 2016, which suggests he’s now completed the program at Mountain View. If you’re a senior banker who’s thinking of following Ha’s example, you might like to know what he’s been up to. Helpfully, Ha has written a blog post about his exploits, the deeply summarized version of which is below.
Ha set about teaching machines to draw
Away from Goldman’s Japanese rates desk, Ha devoted himself to developing machines that can draw pictures. He says his goal at Google was to train “a machine to draw and generalize abstract concepts in a manner similar to humans…” based on a “dataset of hand-drawn sketches, each represented as a sequence of motor actions controlling a pen: which direction to move, when to lift the pen up, and when to stop drawing.”
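The pen-action representation Ha describes can be pictured with a toy example. The exact encoding below (an offset pair plus a pen-lift flag per step) is an illustration of the idea, not necessarily the dataset’s actual format:

```python
# Toy illustration of a sketch as a sequence of motor actions:
# each step is (dx, dy, pen_up) -- a pen offset plus a flag set to 1
# when the pen is lifted after the move, ending the stroke.

square = [
    (1, 0, 0),   # move right, pen down
    (0, 1, 0),   # move up
    (-1, 0, 0),  # move left
    (0, -1, 1),  # move down, then lift the pen: stroke finished
]

def trace(actions, start=(0, 0)):
    """Replay the motor actions and return the absolute pen positions."""
    x, y = start
    points = [(x, y)]
    for dx, dy, _pen_up in actions:
        x, y = x + dx, y + dy
        points.append((x, y))
    return points

print(trace(square))  # the pen ends back at the origin: a closed square
```

Storing offsets rather than absolute coordinates is what lets a model treat drawing as a sequence of decisions about where to move the pen next.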
What was the point of this? “We created a model that potentially has many applications,” says Ha, “…from assisting the creative process of an artist, to helping teach students how to draw.”
Ha’s drawing machine produced the following sketches:
Ha’s developed a drawing machine that’s better than the rest
Ha says his machine is novel because it’s focused on low-dimensional drawings, unlike predecessors, which focus on more complex 2D images and often get them wrong, as per the photographs below.
Ha’s machine uses an “encoder”
Ha’s machine involved a complex ‘auto-encoder’ which took an input sequence and encoded it into a vector of “floating point numbers.” This vector was then used to construct an output sequence that matched the input sequence as closely as possible.
To complicate matters, Ha introduced “noise” into the communication with the encoder. This meant that the machine couldn’t simply copy the sketch exactly, but that it needed to learn how to capture the essence of the sketch.
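The idea of corrupting the latent vector with noise so the decoder can’t just memorize its input can be sketched in a few lines of NumPy. This is a bare-bones illustration with random linear weights, not Ha’s actual sequence-to-sequence model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: a flattened input "sketch" and a small latent vector.
input_dim, latent_dim = 16, 4
W_enc = rng.normal(size=(latent_dim, input_dim)) * 0.1  # encoder weights
W_dec = rng.normal(size=(input_dim, latent_dim)) * 0.1  # decoder weights

def encode(x, noise_scale=0.5):
    """Map the input to a latent vector, then add Gaussian noise.

    Because the decoder only ever sees the noisy vector, it cannot
    reproduce the input exactly and is pushed to capture its essence.
    """
    z = W_enc @ x
    return z + noise_scale * rng.normal(size=z.shape)

def decode(z):
    """Reconstruct an output of the original size from the latent vector."""
    return W_dec @ z

x = rng.normal(size=input_dim)
reconstruction = decode(encode(x))
print(reconstruction.shape)  # (16,)
```

In a trained model the encoder and decoder would be recurrent networks over pen actions, but the noisy bottleneck plays the same role.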
The encoder is able to learn how to draw – not just how to copy drawings
This way, Ha’s machine learned how to draw things rather than simply replicating the images it was presented with. Given a cat with three eyes, for example, it produced one with two eyes. Similarly, given an eight-legged pig, it produced one with four legs. Ha says his auto-encoder was therefore able to “encode an input sketch into a set of abstract cat-concepts.”
Ha thinks his model might be able to design things like wallpaper
Ha doesn’t only see his model helping with drawing classes. Because it can extrapolate from a starting image (as per the sketches below), he thinks it could also be used to create similar but unique designs for textiles or wallpaper prints.
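Generating “similar but unique” designs typically works by decoding points between, or near, the latent vectors of existing sketches. A minimal sketch of linear interpolation between two latent vectors (the vectors here are made up for illustration):

```python
import numpy as np

def interpolate(z_a, z_b, steps=5):
    """Return latent vectors spaced evenly between z_a and z_b.

    Feeding each intermediate vector to a trained decoder would yield
    a drawing that blends the two source sketches -- the basis for
    producing families of related designs.
    """
    return [(1 - t) * z_a + t * z_b
            for t in np.linspace(0.0, 1.0, steps)]

z_cat = np.array([1.0, 0.0])  # hypothetical latent vector for one sketch
z_pig = np.array([0.0, 1.0])  # hypothetical latent vector for another
for z in interpolate(z_cat, z_pig):
    print(z)
```

Each intermediate vector decodes to a distinct but related design, which is what makes the approach plausible for textile or wallpaper patterns.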
He also thinks it might be able to finish off drawings for artists
Lastly, Ha suggests his machine could be used to finish off drawings that artists have begun working on. As the sketches below show, the auto-encoder can take a rough diagram and produce all sorts of related sketches based upon it. Illustrators might want to start thinking of back-up careers.
Is this the kind of thing you want to do when leaving the trading floor? If so, applications for 2018 residencies at Google Brain are currently closed, but they’ll be open again in September 2017.
For the full paper exploring Ha’s project, click here.