Mobile Lab: List app brainstorm


Would like to create a list app that helps me get to know the trees on my block, based on the NYC Street Tree Map. I hope to pull specific info from each tree’s bio & its current ecological benefits to date.




  • Diagram of your Data Model
  • 2-3 screen mockups of your app
  • Develop a ScrollView prototype using a few sample items from your data (follow along with the video demo from class, but use your own structs and data)
  • Any technical questions/requirements
  • Be prepared to present your idea to the class for feedback
  • Post documentation and scrollview demo video to #sp2020-homework channel



Mobile Lab: Midterm – List app

  • Problem solving and debugging
  • @ObservedObject
  • Data Modeling and Structs

Labs 🔬

  • Create/add an original icon and name to your app
    • Apple design resources are here
    • If not using Apple’s Photoshop template, start with a 1024×1024 resolution image and use either App Icon Resizer or IconKit to create different icon sizes

Design Assignment 📐

Midterm: Master-Detail Application

A common app interface pattern displays a master list of items. Selecting an item navigates to a separate view showing more details about that item, e.g. Instagram, Facebook, the Notes app, Spotify, etc. For the midterm you will design and develop your own master-detail application.


  • Data model for app needs to be 2 or more levels deep
  • App needs to show list (vertical or horizontal scrolling) and allow navigation to detail
  • Must be able to input/update a data element
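
A data model that is “2 or more levels deep” might look like the following sketch, using the tree-list idea as an example. All type names and sample values here are hypothetical placeholders, not part of the assignment:

```swift
import Foundation

// Hypothetical two-level model: a block holds a list of trees,
// and each tree nests its own benefits struct (second level).
struct EcoBenefits {
    var stormwaterGallons: Double
    var energySavedKWh: Double
}

struct Tree: Identifiable {
    let id = UUID()
    var species: String
    var bio: String
    var benefits: EcoBenefits   // second level of nesting
}

struct Block {
    var name: String
    var trees: [Tree]           // first level: the master list
}

// Sample data with made-up numbers, just to exercise the shape.
let myBlock = Block(name: "My Block", trees: [
    Tree(species: "Honeylocust",
         bio: "Thornless cultivar, common NYC street tree.",
         benefits: EcoBenefits(stormwaterGallons: 1000, energySavedKWh: 300))
])
print(myBlock.trees[0].species) // prints "Honeylocust"
```

The master list view would iterate over `trees`, and tapping a row would navigate to a detail view backed by that `Tree` value.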

Due next week (March 12)

  • Diagram of your Data Model
  • 2-3 screen mockups of your app
  • Working ScrollView prototype of a few sample items from your data (follow along video demo from class but use your own structs and data)
  • Post documentation and scrollview demo video to #sp2020-homework channel
  • Any technical questions/requirements
  • Be prepared to present your idea to the class for feedback
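
The ScrollView prototype above might be sketched like this. TreeItem and the sample species names are placeholders; the real version would use your own structs and data, following the class demo:

```swift
import SwiftUI

// Hypothetical sample data for the prototype.
struct TreeItem: Identifiable {
    let id = UUID()
    var species: String
}

let sampleTrees = [
    TreeItem(species: "Honeylocust"),
    TreeItem(species: "Ginkgo"),
    TreeItem(species: "Pin Oak")
]

// A vertically scrolling list of the sample items.
struct TreeListView: View {
    var body: some View {
        ScrollView {
            VStack(alignment: .leading, spacing: 12) {
                ForEach(sampleTrees) { tree in
                    Text(tree.species)
                        .font(.headline)
                }
            }
            .padding()
        }
    }
}
```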

Due after Spring Break (March 26):

  • Completed application with original name and icon

Thesis: Thinking about Collections

For my project’s MVP I hope to build an app that lets me create my own tree species image collections from images selected from my camera roll or taken live.

Those images would then be added to a list or dictionary (form to be explored). This collection could then be used as texture wraps viewable in an AR exploration mode (Unity + ARfoundation + Swift/iOS).

Things to also consider are dynamic/mutable libraries, loading resources at runtime, and world saving, or some ability to save changes to collections once the app is closed.
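
One way the “list or dictionary” question could be sketched on the Swift side: a dictionary keyed by species name, each holding a growable list of image identifiers. All names here are hypothetical, and JSON encoding is only one possible answer to the save-on-close question:

```swift
import Foundation

// Hypothetical collection: species name -> list of image identifiers.
var speciesImages: [String: [String]] = [:]

// Append an image to a species, creating the list if needed.
func addImage(_ imageName: String, toSpecies species: String) {
    speciesImages[species, default: []].append(imageName)
}

addImage("IMG_0001", toSpecies: "Bottlebrush Buckeye")
addImage("IMG_0002", toSpecies: "Bottlebrush Buckeye")

// One save-on-close idea: encode the collection to JSON so the
// user's edits survive between app sessions.
let data = try? JSONEncoder().encode(speciesImages)
```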

Other things noted: Coroutines, Quaternions


Brainstorming App build 

Overall AR App brainstorm


AR App3_Flowonly
Flow of interaction



Brainstorming around ways to organize Tree Species image groupings 

  • List
    • is like a dynamically sized array; you don’t need to know how many elements it will have ahead of time
    • is generic, so it can be created for any element type
    • using directives needed:
      • using UnityEngine;
      • using System.Collections;
      • using System.Collections.Generic;
    • call the constructor and then populate your list
    • .Add
      • adds an element to the end of the list
    • .Count
      • same as the Length property of arrays
    • .Remove
    • .Insert
    • .Clear
  • Dictionary
    • the goal is to have each target’s associated collection wrap as image textures around associated prefabs (like planes)


Maybe something like this could work for the MVP? Although it is for sprites, it could work similarly for images.

It contains similar steps to what’s needed.


textiles & togetherness – 3.6.20

Ashley and I kept working alongside each other on our textile projects. Ashley continued weaving and I worked on creating smaller image targets for a soft MVP quilt. I decided to hold off on the bottlebrush buckeye and think of another for the future.

Depending on my ability to build out the collections digitally, I may focus on only 1-3 species for the first stages, but for now I drew out space for 6: 5 drawn and 1 held for a new species. I layered the images in Photoshop to make one printable stencil. I traced on the muslin with the image underneath, first in pencil to mark/brainstorm positioning, and then, once everything was positioned, filled it in with Sharpie.



Magic Windows: Augmenting the urban space

Test w/ cones


Original hello world test – plane detection & point cloud


For this week we were to explore using AR Foundation to augment urban space. I was able to get the class hello world example onto my phone with Rui’s help, plus a couple other tests, but wasn’t quite able to get at my initial ideas. ❤ Maybe after office hours tomorrow : )

Things I looked into:

  • Multiple image trackers, reference libraries, and mutable reference libraries

I wanted to augment the sonic space with subway train logo markers: if the app recognizes the signage, it starts playing various songs written about the NYC trains, like Tom Waits’ “Downtown Train” or the Beastie Boys’ “Stop That Train.”

I was able to get it to the stage where it recognized my images in the reference library; the next step would be to have it trigger audio once an image is tracked.


Mobile Lab W5: HW

Requirements and Tips:
  • Views should be no larger than 400×400
  • If your views require image/audio/video assets, please send those along as well.
  • Be creative with your design. Think about how standard buttons and sliders can be mapped in fun and novel ways.
  • Consider using a timer and/or randomizer to modulate the signal over time.
  • You may or may not require some internal @State for your component views based on their complexity.
Main test code will look something like:

import SwiftUI

struct Signal {
    // Range: 0 - 10
    var intValue: Int
    // Range: 0 - 1.0
    var floatValue: Float
    // True or False
    var toggleValue: Bool
}

struct ContentView: View {
    @State var signal = Signal(intValue: 0, floatValue: 0, toggleValue: false)

    var body: some View {
        VStack {
            NiensController(signal: $signal)
            Spacer()
            NiensVisualizer(signal: signal)
        }
    }
}
  • A controller file with your name. e.g. NiensController.swift
struct NiensController: View {
    @Binding var signal: Signal

    var body: some View {
        // Add your buttons, knobs, etc. here.
        // Update signal appropriately.
        Text("Controller")
    }
}
  • A visualizer file with your name. e.g. NiensVisualizer.swift
struct NiensVisualizer: View {
    var signal: Signal

    var body: some View {
        // Create visuals/animations here.
        Text("Visualizer")
    }
}

3.4.20 – Understanding ARfoundation Image Tracking

After working through some examples / tutorials last night, I decided to sift back through the AR Tracked Image Manager documentation to see about the following:

  • multiple targets via an XRReferenceImageLibrary
    • encountered issues when sifting through the ARfoundation examples via GitHub ❤ mostly worked though! Was having trouble showing the last 3 images I added
  • dynamic + modifiable / mutable libraries in runtime 
    • how to dynamically change the image library live via the iOS camera or camera roll (most likely through a MutableRuntimeReferenceImageLibrary)



Helpful References:



Looking through the ARfoundation Trackables / ImageTracking Documentation: 

  • AR Tracked Image Manager 
    • The tracked image manager will create GameObjects for each detected image in the environment. Before an image can be detected, the manager must be instructed to look for a set of reference images compiled into a reference image library. Only images in this library will be detected.


  • Reference Library  
    • XRReferenceImageLibrary
    • RuntimeReferenceImageLibrary
      • RuntimeReferenceImageLibrary is the runtime representation of an XRReferenceImageLibrary
      • You can create a RuntimeReferenceImageLibrary from an XRReferenceImageLibrary with the ARTrackedImageManager.CreateRuntimeLibrary method

Coding Lab – 3.2.20

Today I went to Coding Lab to troubleshoot the logic roadblocks I was having with last week’s homework. Vince helped walk me through how @State & @Binding were working in the Mobile Lab Game Kit.

We specifically walked through how avatarPosition was working across views, since it’s needed across all layers. He mentioned that part of the magic SwiftUI gives us with this system is that when the code goes to $avatarPosition to update the state, it won’t get stuck in an infinite/recursive loop.

We then took a look at my app concept of navigating over clouds to reveal cloud names. He suggested approaching it in 3 steps:

  1. collisionObject
    1. You would need to change the let to a var to allow flexibility in the target size/shape. With that change, the single size variable would also need to become 2 different variables, sizeHeight & sizeWidth, since in the example the trigger is a square and only needed one dimension repeated twice.
  2. contentView 
    1. Vince then walked me through the simple logic to toggle the background (shifting the color), noting that the logic would be the same for switching out images. He created a helper function, an if/else statement, and then called it in the view. He also reiterated that you cannot declare a function inside a view: the view only wants to deal with things that make instant changes to the canvas. Calling a function in the view shifts the canvas, but declaring one doesn’t return anything, so it would produce an error inside the view.
  3. Thinking about restrictions in position ( > & < ) logic to create rollovers
    1. In the example we did in Coding Lab, we only restricted in 1 direction; to achieve a rollover, we would need to restrict in all directions.
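
The helper-function idea from step 2 could be sketched like this. All names (CloudView, isOverCloud, the cloud name) are hypothetical; the point is that the if/else is declared at the struct level and only called inside the view body:

```swift
import SwiftUI

struct CloudView: View {
    @State var isOverCloud = false

    // Declared outside `body`: the view body only calls it.
    func backgroundColor(for over: Bool) -> Color {
        if over {
            return Color.gray   // revealed state
        } else {
            return Color.blue   // default sky
        }
    }

    var body: some View {
        ZStack {
            backgroundColor(for: isOverCloud) // calling is fine here
                .edgesIgnoringSafeArea(.all)
            Text(isOverCloud ? "Cumulus" : "")
        }
        .onTapGesture { self.isOverCloud.toggle() }
    }
}
```

Swapping images instead of colors would use the same shape: a helper that returns one of two views or image names, called from the body.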

Thesis: Thinking Through Steps

Thinking Through Steps

Our resident Ilana has been so incredibly helpful (see past post). I was able to get on ITP’s shared Apple developer account to help with app build limits, and was able to finally get my personal one sorted as well ❤ Before office hours she mentioned to check out the next few things:

  1. Be able to create an AR app that places an image on an image target.
  2. Take a photo with your camera / access the camera library, and when you use the image target, that image appears on it. (Try using the plugin.)
  3. Take multiple photos or choose multiple images and attach that to your image target.
  4. Add two different image targets. Make it so that when you take or select a photo to add in your app, you have to ‘tag’ it to a specific image target. The user will then be able to see the specific tagged images on each of the targets.



1 & 2 

Targets & Photo to Texture – Here are some links to different tests so far that help answer prompts 1 & 2. Note the ARFoundationExamples on GitHub too.


  • Vuforia W2 tutorial Magic Windows
  • Screen test only / Try on mobile


  • Vuforia W3 tutorial Magic Window
  • Screen test only / Try on mobile


  • ARfoundationPointCloud/ Plane Detection & Place Cube tutorial Magic Windows


  • Exploring how to save and load session data / to not lose when things have been placed previously see post here



Had difficulty executing this tutorial on ARfoundation image tracking: it builds to the device but only flickers briefly when activated.




Speech/ Reading Explorations 

  • Speech Test with Teachable Machines / P5js
  • See blog post exploring words vs phonemes


  • Speech Recognition Test on Mobile
  • Same blog post as above


Additional References: