Cool Video of Aikon II Robot Drawing a Face With a Pen

[image: Aikon face drawing robot]

The Aikon II project aims to create a machine that can draw with its own artistic style. It's already got me beat.

For several years, Patrick Tresset and Frederic Fol Leymarie of Goldsmiths, University of London have been trying to teach a machine how to draw. The Aikon II project has a camera, some software, and a robotic arm with which to observe and sketch human faces. In that way it uses the same basic tools (eyes, brain, hands) that artists have been relying upon for centuries. Aikon’s sketches are not photo-realistic, but they do look stunning. Far better than anything I could draw, I assure you. At the recent Kinetica Art Fair in London, Aikon was on hand to sketch faces and earn praise. You can see it in action in the video below. Aikon II has funding to keep it going at least through 2011. According to Tresset and Leymarie, not only will Aikon’s skill improve, it may one day be able to draw in its own artistic style.

[image: Aikon II face drawing robot]

Aikon II can interpret the same subject in multiple ways. From the "Lena" set of sketches.

Anybody with a digital camera and a photo printer can produce a more realistic image of a human face than Aikon II. That’s not the point. Aikon is able to provide an artistic interpretation of what it sees. In this way, it’s very different from robots that simply reproduce what they are programmed to draw. This project has much more in common with attempts to get computer programs to compose new pieces of classical music. Tresset and Leymarie have studied many historic sketches (and artists’ notes) to generate an idea of the visualization and interpretation needed to create an artistic rendering of a human face. As artificial intelligence improves, programs like Aikon II could develop into artificial artists, full of the same creative insights, synthesis, and stylistic choices as their human counterparts. The only thing missing would be passion. But maybe we’ll learn how to program that into a robot, too.



[image credits: Aikon II Project]

[source: Aikon Website, Aikon II 2009 Press Release]

Discussion — 15 Responses

  • Leen March 25, 2010 on 5:47 pm

    Dammit they are about to take my job as an artist now! Pretty cool robot though.

  • Six March 25, 2010 on 7:40 pm

    I fail to see how exactly this isn’t just drawing what it is programmed to. It looks to just be running some image processing over a camera image, then using a simple algorithm to try to replicate that image with a pen on paper. Unless there is some level of self learning I’m not aware of, that seems to me to just be drawing as programmed.

    • Aaron Saenz Six March 25, 2010 on 11:08 pm

      @Six. I think the innovation comes from that “simple algorithm,” which doesn’t translate pixel for pixel but rather decides what constitutes a line and selectively chooses which lines to draw.

      “It looks to just be running some image processing over a camera image, then using a simple algorithm to try to replicate that image with a pen on paper.” –This could also be used to describe human drawing to some degree.
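That “deciding what constitutes a line” step can be sketched in a few lines of Python. This is purely an illustrative toy, not Aikon’s actual code: gradient strength stands in for line candidates, and a stroke budget stands in for the selective step, as opposed to a printer reproducing every pixel.

```python
# Illustrative sketch only -- NOT the Aikon II algorithm.
# Idea: instead of copying pixels, rank candidate "lines"
# (here, strong intensity gradients) and draw only a chosen few.

def gradient_magnitude(img):
    """Approximate edge strength at each interior pixel."""
    h, w = len(img), len(img[0])
    edges = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = img[y][x + 1] - img[y][x - 1]  # horizontal change
            gy = img[y + 1][x] - img[y - 1][x]  # vertical change
            edges.append(((x, y), abs(gx) + abs(gy)))
    return edges

def select_lines(edges, budget):
    """Keep only the strongest candidates -- the 'selective' step."""
    ranked = sorted(edges, key=lambda e: e[1], reverse=True)
    return [pos for pos, strength in ranked[:budget] if strength > 0]

# Toy 5x5 "image": a bright blob on a dark background.
image = [
    [0, 0, 0, 0, 0],
    [0, 9, 9, 9, 0],
    [0, 9, 9, 9, 0],
    [0, 9, 9, 9, 0],
    [0, 0, 0, 0, 0],
]

strokes = select_lines(gradient_magnitude(image), budget=4)
print(strokes)  # → [(1, 1), (3, 1), (1, 3), (3, 3)]
```

With a budget of 4, the toy image yields only the four corner points of the blob, while a plotter-style copy would reproduce all 25 pixels; the interesting design question is how to make that selection in a way that reads as style.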

  • Stafford Williams March 26, 2010 on 4:01 am

    this is ridiculous. how is this any different from a printer? a print driver tells the hardware how to produce a copy of a digital image. this looks like an expensive (time and/or money) gimmick.

    • Frederic Fol Leymarie Stafford Williams March 31, 2010 on 4:04 pm

      Hi Stafford,
      have a look at the project’s website please: http://www.aikon-gold.com
      We are trying to better understand how the mind (of the artist) functions by building what amounts to a simulator made of software and hardware parts.
      AIkon is also a subject of study for us, where we learn from it, by trying/implementing/testing various models and ideas on various steps we think are at play when the artist thinks and acts.

      The input is an image, but we do not just process the image and re-present it in a single step. Rather we have defined a system with multiple steps and are now introducing feedback loops. We have tested some form of visual feedback so that AIkon will eventually be directly influenced by what it draws (at the moment it is mainly influenced by what it sees by looking at the sitter only, although there is some uncertainty built in by using a cheap mechanical robot arm and having some noise in the algorithmically created plan, leading to gesture curves for the robot to follow).
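The noise-in-the-plan idea Frederic mentions can be illustrated with a toy sketch (hypothetical code, not the project’s): a planned pen path is perturbed point by point, roughly the way a cheap arm’s imprecision would displace the drawn gesture from the plan.

```python
import random

# Toy illustration (not Aikon's code): perturb a planned gesture
# curve with bounded noise to mimic a cheap robot arm's imprecision.
def noisy_gesture(curve, jitter=0.1, seed=0):
    rng = random.Random(seed)  # seeded so a run is repeatable
    return [(x + rng.uniform(-jitter, jitter),
             y + rng.uniform(-jitter, jitter))
            for x, y in curve]

plan = [(0.0, 0.0), (1.0, 0.5), (2.0, 0.0)]  # planned pen path
drawn = noisy_gesture(plan)
print(drawn)  # same shape, each point slightly displaced
```

Each drawn point stays within `jitter` of the plan, so repeated sittings produce sketches that are similar but never identical, which is part of what makes the output read as hand-drawn rather than printed.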

      LMK if this is helpful or not.

      Regards,
      Frederic

      • Nunya Frederic Fol Leymarie October 5, 2010 on 6:01 pm

        How about focusing on building automated alternatives to jobs people don’t WANT to do, like mopping or cleaning the toilet?

        • PantOpticon Nunya December 10, 2010 on 4:21 am

          That’s TRADITIONAL robot research thinking. This is getting outside of the box.

  • cybotic May 19, 2010 on 12:06 pm

    I hope they’re using Epsom approved ink in that pen ;-)

  • Plasticfrank October 5, 2010 on 6:11 pm

    Man… People are so close minded. Even I can see that this project is more closely related to AI research than automation. Just seems like some smart people trying to understand creativity filtered through digital processes. I thought this was a tech blog.

    • Mgbeers Plasticfrank October 5, 2010 on 6:39 pm

      This is dumb. What kind of ai is this? There have been booths that you can sit in where a CAMERA would capture a face and simulate drawing it with a pen. This is no more robotic innovation than that. Simply put, the “eye” could take a picture and “draw” it out (really just a plotter with a simulated drawing technique) Photoshop-like software can apply filters to an image before printing and the “robot” would just render the image that way. What a joke, this is not futuristic at all!