In our first week of The Neural Aesthetic w/ Gene Kogan, we got a rundown of everything we'll be covering over the semester:
- Introduction to machine learning (27:06)
- AI resurgence and deep learning (36:36)
- Characteristics of deep learning (47:11)
- Types of machine learning (51:22)
- Supervised learning (55:14)
- Examples of supervised learning (1:02:42)
- Neural networks and feature extraction (1:15:28)
- Core applications of feature extraction (1:18:20)
- Interactive machine learning (1:24:40)
- Deepdream, style transfer, and texture synthesis (1:31:40)
- Generative models (1:36:19)
- Conditional generative models (Image-to-image) (1:46:18)
- Voice synthesis, language models, and miscellaneous (2:01:30)
- Reinforcement learning and decentralized AI (2:07:35)
I really love Memo's Learning to See (2017) ❤, which uses edge detection and pix2pix.
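For a sense of the preprocessing side of that pipeline: the camera frame is reduced to an edge map before the generative model fills it back in. A minimal sketch of edge detection with Sobel filters in NumPy (Memo's actual project uses its own preprocessing and a trained pix2pix model; this only illustrates the edge-map step):

```python
import numpy as np

def sobel_edges(img):
    """Return an edge-magnitude map for a 2-D grayscale array.

    This is the kind of simplified input a pix2pix-style model can be
    conditioned on. Output is (h-2, w-2) since the 3x3 kernels are
    applied only at fully valid positions.
    """
    # Horizontal-gradient kernel; its transpose gives the vertical one.
    kx = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(patch * kx)
            gy[i, j] = np.sum(patch * ky)
    # Gradient magnitude: strong where intensity changes sharply.
    return np.hypot(gx, gy)

# Tiny demo: an image with a vertical step edge down the middle.
img = np.zeros((10, 10))
img[:, 5:] = 1.0
edges = sobel_edges(img)
```

The edge map responds only near the step (columns straddling the boundary) and is zero in the flat regions, which is exactly the sparse line-drawing-like input these image-to-image models learn to "hallucinate" from.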
It's also really interesting to think about the implications of how easy it is to puppeteer footage with machine learning, especially after seeing the Everybody Dance Now video and paper. I had an interesting phone conversation with my brother about this: what will we need in the future to verify that footage is real? And what are the implications, what methods might we need to prove when something is fake? ❤