Neural Aesthetic: Translation Study #1: Folding Flowers


Screen Shot 2019-12-10 at 1.48.23 AM.png


Translation Study #1: Folding Flowers was created during my time in Gene Kogan’s Neural Aesthetic class at ITP NYU. The exploration feels deeply rooted in my experience at SFPC, reminding me of when Gene first introduced us to machine learning and style transfer, and to the idea of collaborating with machines.

I also remember Zach explaining the beautiful and interesting things that can happen with translation. This could be a translation from one medium to another, as we experienced in his class Recreating the Past, or, more conceptually, thinking about how something in the world might be articulated through new types of media. I was additionally inspired by our classmate Robby Kraft, who walked us through a workshop on different methods of origami and, in turn, tactile ways of thinking.

In this experiment I used RunwayML to train a model exploring how computation might “fold” images of flowers into flower origami, and to witness its “thinking” through latent space.

[Origami Image Set: Applying Sam Lavigne / antiboredom’s flickr scraper to gather public images from the Flickr group Flower Origami, then training on the set of flower images via RunwayML.]
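A scraped Flickr set usually needs a cleanup pass before it goes into training. This is a minimal sketch of one such step, removing exact-duplicate downloads by content hash; `dedupe_images` is a hypothetical helper for illustration, not part of the flickr scraper or RunwayML.

```python
import hashlib
from pathlib import Path

def dedupe_images(folder):
    """Delete exact-duplicate files from a scraped image folder.

    Scrapers often pull the same photo more than once; duplicates
    skew the training set, so keep only one copy of each unique file.
    Returns the names of the files that were removed.
    """
    seen = {}
    removed = []
    for path in sorted(Path(folder).iterdir()):
        if not path.is_file():
            continue
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        if digest in seen:
            path.unlink()  # duplicate content: delete this copy
            removed.append(path.name)
        else:
            seen[digest] = path.name
    return removed
```

Hashing file bytes catches exact re-downloads only; near-duplicates (crops, re-encodes) would need a perceptual hash instead.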













The next test this weekend is with Swindell Pottery, using her and her dad’s pottery archive along with an environmental dataset such as landscapes or trees. One thing that seems particularly helpful is that the origami are photographed against a blank backdrop / surface. I really like the movement between what looks like a 2D image scape and its transformation into a 3D object. It almost looks like the AI is learning how to fold and make its own origami inspired by flowers and landscapes.
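That 2D-to-3D “folding” effect comes from walking the model’s latent space: pick two latent codes and render the images at evenly spaced points between them. A minimal sketch of the interpolation step in plain Python; the vectors here are made-up stand-ins for whatever latent codes the trained model actually exposes.

```python
def lerp(a, b, t):
    """Linearly interpolate between latent vectors a and b at t in [0, 1]."""
    return [ai + t * (bi - ai) for ai, bi in zip(a, b)]

def latent_walk(start, end, steps):
    """Return `steps` evenly spaced latent vectors from start to end, inclusive."""
    return [lerp(start, end, i / (steps - 1)) for i in range(steps)]

# Hypothetical latent codes for a flat flower photo and a folded origami
flower = [0.0, 1.0, -0.5]
origami = [1.0, 0.0, 0.5]
frames = latent_walk(flower, origami, 5)  # 5 in-between codes to render
```

Rendering each intermediate vector through the generator is what produces the “folding” animation between the two images.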

If you were to use it as an actual training model, training it for fewer steps might be helpful.



Screen Shot 2019-12-04 at 5.28.48 PM.png



Neural Aesthetic W5: Visualization, deepdream, style & texture synthesis

For week 5 we dove into visualization. The last two weeks we touched on convolutional networks, but not necessarily how they function. This week we lifted the hood a little to break down what is happening. Next week we’ll dive into generative models.



This week:

  • visualizing neural networks
  • optimization-based class synthesis 
  • style transfer and texture analysis 
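Optimization-based class synthesis (and DeepDream) follow the same recipe: hold the trained network fixed and nudge the input by gradient ascent so a chosen score goes up. As a toy sketch of that loop, with a made-up quadratic stand-in for the class score and a numerical gradient in place of backpropagation:

```python
def synthesize(score, x, lr=0.1, steps=200, eps=1e-5):
    """Gradient-ascend the input x to maximize score(x).

    A real version would backpropagate through a trained network;
    here a forward-difference numerical gradient stands in.
    """
    x = list(x)
    for _ in range(steps):
        grad = []
        for i in range(len(x)):
            bumped = x[:]
            bumped[i] += eps
            grad.append((score(bumped) - score(x)) / eps)
        # ascend: move the input in the direction that raises the score
        x = [xi + lr * gi for xi, gi in zip(x, grad)]
    return x

# Stand-in "class score": peaks when the input equals [1.0, -2.0]
target = [1.0, -2.0]
score = lambda v: -sum((vi - ti) ** 2 for vi, ti in zip(v, target))

image = synthesize(score, [0.0, 0.0])  # the "image" climbs toward the peak
```

The key idea the sketch preserves: the network (here, `score`) never changes; only the input does.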


Deep learning frameworks: 

Screen Shot 2019-10-02 at 8.48.58 PM.png

Some of these items are more like the tip of the iceberg: Processing is to Java as Keras is to TensorFlow.


Review of Neural Networks





W1: Neural Aesthetic

In our first week of The Neural Aesthetic w/ Gene Kogan we got a rundown of all the things we’ll be covering over the semester.


I really love Memo’s Learning to See (2017) work ❤, which uses edge detection and pix2pix.


It’s also really interesting to think about the implications of how easy it is to puppeteer footage with machine learning, especially with the Everybody Dance Now video and paper. I had an interesting phone conversation with my brother about what we will need in the future to help ensure that footage is real. What are the implications of this for the future, and what methods might we need to prove when something is fake? ❤