Futurist Friday: AI Takes Joy in Painting
Posted on Apr 7, 2017

What happens when artificial intelligence takes a deep look at art-teacher-cum-TV-star Bob Ross? Alexander Reben decided to find out, applying machine learning algorithms to the video and audio tracks of Ross’s shows.
The result is the eerie mashup video Deeply Artificial Trees (embedded below). The soundtrack, generated by WaveNet, mimics the tone and rhythm of Ross’s distinctive narration without using any real words. For the video, Reben fed the original footage through “Deep Dream” algorithms, which rely on a neural network trained to recognize objects: the system looks for familiar shapes in each frame (Ross himself, the dog, person, or deer he is painting) and then overlays images of what it “thinks” it sees.
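For the curious, here is a rough sketch of the Deep Dream idea, assuming PyTorch and torchvision are available (the file names, layer choice, and parameters are illustrative, not Reben’s actual pipeline): a pretrained image-recognition network looks at a frame, and the pixels are nudged so that whatever the network already responds to gets amplified into visible hallucinations.

```python
# Sketch of the "Deep Dream" trick (illustrative only, not Reben's code).
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

def dream(frame_path, steps=20, step_size=0.02):
    # Use the early/middle layers of a pretrained VGG16 as the "eye".
    net = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features[:20].eval()
    for p in net.parameters():
        p.requires_grad_(False)

    to_tensor = T.Compose([T.Resize(400), T.ToTensor()])
    img = to_tensor(Image.open(frame_path).convert("RGB")).unsqueeze(0)
    img.requires_grad_(True)

    for _ in range(steps):
        activations = net(img)
        loss = activations.norm()          # how strongly does the net respond?
        loss.backward()
        with torch.no_grad():
            # Gradient *ascent* on the pixels: boost whatever the net "sees".
            img += step_size * img.grad / (img.grad.abs().mean() + 1e-8)
            img.clamp_(0.0, 1.0)
            img.grad.zero_()

    return T.ToPILImage()(img.detach().squeeze(0))

# Hypothetical usage on a single extracted frame:
# dream("bob_ross_frame.jpg").save("dreamed_frame.jpg")
```

Run over every frame of an episode (plus some smoothing between frames), this is roughly the kind of processing that produces the video’s shifting, dreamlike overlays.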
A little glimpse into how machines are teaching themselves to interpret our world…
(I found the video via this story in Engadget.)