Magic Windows: W5 – augmenting the urban space

news for the week:

========================================================================
Keiichi Matsuda – Hyper Reality
https://www.youtube.com/watch?v=YJg02ivYzSs
http://km.cx/projects/augmented-hyper-reality

Keiichi Matsuda – Augmented City
https://vimeo.com/14294054

Nexus Studios – HotStepper
https://nexusstudios.com/work/hotstepper/

Tom Armitage, Pan Studio and Gyorgyi Galik – Hello Lamp Post
https://tomarmitage.com/projects/hello-lamp-post/

Janet Cardiff and George Bures Miller – Alter Bahnhof Video Walk
https://www.youtube.com/watch?v=sOkQE7m31Pw

Timo Arnall – Immaterials: Light Painting WiFi
http://yourban.no/2011/03/07/making-immaterials-light-painting-wifi/
https://vimeo.com/20412632

Choy Ka Fai – Crossing Borders: a visualization of private spaces in public photography
https://vimeo.com/7918122

========================================================================

Hands-on:

Eyes-on:

Lev Manovich, The Poetics of Augmented Space
http://manovich.net/content/04-projects/034-the-poetics-of-augmented-space/31_article_2002.pdf

Keiichi Matsuda, ‘Domesti-city’
https://drive.google.com/open?id=1EVBmR_B3NOpnt-d0P7CCU_ldnWa5jOcx
http://km.cx/projects/domest-city

Mark Pesce, The Mixed Reality Service

Magic Windows: HW4 – Augment an Object [ Exploring Teachable Machine: Image & Audio Classification ]

[image: QuiltAR step 1]

Test 2: A quick speech test, filling in the color of a leaf with your voice as you move through the words tulip – poplar. Link to p5.js present mode.

Test 1: A quick speech test, filling in the color of a leaf with your voice as you move through the phonemes: tu-lip, pop-lar.
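For future reference, a minimal sketch of roughly how these tests are wired up in p5.js, assuming the p5.speech library (a wrapper around the browser’s Web Speech API). The leaf here is just a placeholder ellipse, not the actual illustration:

```javascript
// Rough sketch: fill a leaf shape with color as target words are spoken.
// Assumes p5.speech.js is loaded alongside p5.js.
let speechRec;
const targetWords = ['tulip', 'poplar']; // words to move through
let wordIndex = 0; // how many target words have been heard so far

function setup() {
  createCanvas(400, 400);
  speechRec = new p5.SpeechRec('en-US', gotSpeech); // callback fires per result
  speechRec.continuous = true; // keep listening between utterances
  speechRec.start();
}

function gotSpeech() {
  // resultString is the recognizer's best guess at what was said
  const heard = speechRec.resultString.toLowerCase();
  if (wordIndex < targetWords.length && heard.includes(targetWords[wordIndex])) {
    wordIndex++; // advance the fill one step per matched word
  }
}

function draw() {
  background(255);
  // fill deepens as more of the target words are spoken
  const amount = wordIndex / targetWords.length;
  fill(lerpColor(color(235), color(60, 160, 60), amount));
  ellipse(width / 2, height / 2, 150, 220); // placeholder leaf shape
}
```

Matching with includes() is loose on purpose – the recognizer usually returns whole phrases rather than single clean words.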

 

[image: QuiltAR step 2]

 

References 

Teachable Machine 1: Image Classification 

  • how to save the model you train and bring it into a p5.js sketch with the ml5 library (a sketch of the loading pattern is below)
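A sketch of that loading pattern, assuming the ml5 0.x API covered in the video; the model URL is a placeholder for the hosted link Teachable Machine gives you when you upload a trained image model:

```javascript
// Placeholder URL: Teachable Machine hosts an exported image model
// in a folder like this, with a model.json next to the weights.
const modelURL = 'https://teachablemachine.withgoogle.com/models/YOUR_MODEL/';

let classifier;
let video;
let label = 'waiting...';

function preload() {
  classifier = ml5.imageClassifier(modelURL + 'model.json');
}

function setup() {
  createCanvas(320, 260);
  video = createCapture(VIDEO);
  video.size(320, 240);
  video.hide(); // we draw the frames ourselves in draw()
  classifyVideo(); // kick off the classification loop
}

function classifyVideo() {
  classifier.classify(video, gotResult);
}

function gotResult(error, results) {
  if (error) {
    console.error(error);
    return;
  }
  label = results[0].label; // highest-confidence class
  classifyVideo(); // classify the next frame
}

function draw() {
  image(video, 0, 0);
  text(label, 10, height - 6);
}
```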

 

Teachable Machine 2: Snake Game

  • flipping camera footage around so the feed reads as a mirror (see the sketch below)
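The flip itself is just a transform before drawing the frame – a minimal version, assuming a standard p5.js webcam setup:

```javascript
// Mirror the webcam feed so on-screen movement matches your own.
let video;

function setup() {
  createCanvas(320, 240);
  video = createCapture(VIDEO);
  video.size(320, 240);
  video.hide();
}

function draw() {
  push();
  translate(width, 0); // move the origin to the right edge
  scale(-1, 1);        // flip the x axis
  image(video, 0, 0);  // the frame now draws mirrored
  pop();
}
```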

 

 

 

Questions

How to allow for a lower tolerance for speech accuracy, as well as a longer audio sample time? (I had to be pretty rigid with my syllables and record in short words / reads.) Take a look at how the code was working in this test last week to compare and contrast ❤
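One knob that might help (an assumption on my part, not something from class): ml5’s soundClassifier accepts a probabilityThreshold option, so lowering it should make the sketch accept less confident matches. A hedged sketch, with a placeholder model URL:

```javascript
// Loosening the confidence cutoff on an ml5 sound classifier.
// Lower probabilityThreshold = matches it is less sure about get reported.
// (Assumption: Teachable Machine audio models accept this option the
// same way the built-in SpeechCommands model does.)
const options = { probabilityThreshold: 0.5 };
const soundModel =
  'https://teachablemachine.withgoogle.com/models/YOUR_MODEL/model.json';

let classifier;

function preload() {
  classifier = ml5.soundClassifier(soundModel, options);
}

function setup() {
  noCanvas();
  classifier.classify(gotResult); // listens to the mic continuously
}

function gotResult(error, results) {
  if (error) {
    console.error(error);
    return;
  }
  // log label + confidence to see where a workable threshold sits
  console.log(results[0].label, results[0].confidence);
}
```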

How to do voice / speech commands using Vuforia / Unity? See the office hours w/ Rui post. (Thinking about the Unity article & Dialogflow.)

Magic Windows: HW3

Hands on
Write a brief paragraph about your idea on augmenting an object.
This should be the initial concept for next week’s assignment.
Be clear about your idea, inspirations and potential execution plan.

For this assignment I hope to explore an aspect of my thesis project. For thesis I’m working on a memory quilt designed for my nephews (target age 4/5-ish) that uses AR to unlock memories around tree ID-ing and walks they’ve taken with their family (mainly parents). This project is inspired by the tree ID-ing walks they already go on, my nephews’ enjoyment of AR, and quilts made in our family. The quilt will include repurposed squares from my brother’s childhood quilt.

The augmented object will be patches on the quilt, illustrated with contour lines of leaves from local trees in their neighborhood, like a tulip poplar. When the phone (with Vuforia) is held over the patch, the name of the tree will appear, encouraging it to be read aloud. As you read it, the leaf and words will fill in with color (similar to the voice interaction in Wonderscope). When the whole name has been said, it will unlock images of trees they’ve found on past walks / hikes (that have been tagged to that target).

 

 

Brain on
List potential input (sensory) and output (feedback) modes that you can leverage in the real world – translate these into technical possibilities using current technology (you can use your mobile device, but feel free to bring any other technology or platform (Arduino? etc.) that you can implement).

Input / Sensory

  • sound
  • speech
  • peripheral senses – ambient light to signal a shift in weather (thinking of the experiments mentioned in reading Tangible Bits: Towards Seamless Interfaces Between People, Bits and Atoms  by Hiroshi Ishii and Brygg Ullmer)
  • temperature
  • motion
  • human connection / like touch
  • a force / or amount of pressure exerted
  • distance between objects
  • time (between actions) / duration
  • accelerometer / orientation / tilt sensors
  • sentiment analysis / emotional quality of messages & speech
  • arduino + distance sensor
  • FSR

 

Output / Feedback

  • arduino + heating pad + thermo ink (ex: Jingwen Zhu’s Heart on My Dress)
  • animation (play, pause, skip around time codes)
  • air flow like Programmable Air  / expand – contract
  • arduino + a range of things like neopixels
  • animate or unlock something visual on a mobile device

 

Choose one of each (input, output) and create a simple experience to show off their properties, but also their affordances and constraints.

 

Experiments:  (quick prototype – sliding images / illustrations into view as they would be unlocked through audio / speech recognition) 

Would like to do another quick experiment that gets both sides of the interaction; I was only able to get the SFSpeechRecognizer example to work, and would love for specific words to activate the animations shown above. Was also able to get the Unity tutorials from the past couple weeks up and going on my updated Unity again ❤
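As a browser-side stand-in for that idea (p5.speech rather than SFSpeechRecognizer on iOS), a hedged sketch of mapping specific recognized words to animation triggers – playSlideIn() is a hypothetical placeholder for the slide-in illustrations above:

```javascript
// Map specific recognized words to animation triggers.
let speechRec;

const triggers = {
  tulip: () => playSlideIn('tulip'),
  poplar: () => playSlideIn('poplar'),
};

function setup() {
  noCanvas();
  speechRec = new p5.SpeechRec('en-US', gotSpeech);
  speechRec.continuous = true;
  speechRec.start();
}

function gotSpeech() {
  const heard = speechRec.resultString.toLowerCase();
  for (const word in triggers) {
    if (heard.includes(word)) triggers[word](); // fire the matching trigger
  }
}

function playSlideIn(name) {
  // hypothetical: swap in the real slide-in animation here
  console.log('slide in: ' + name);
}
```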

And I loved reading the papers provided, including Tangible Bits: Towards Seamless Interfaces Between People, Bits and Atoms by Hiroshi Ishii and Brygg Ullmer @ MIT Media Lab / Tangible Media Group (post here) and Bricks: Laying the Foundations for Graspable User Interfaces (post here).

Magic Windows: W3 reading – Tangible Bits: Towards Seamless Interfaces between People, Bits and Atoms

Tangible Bits: Towards Seamless Interfaces Between People, Bits and Atoms by Hiroshi Ishii and Brygg Ullmer @ MIT Media Lab / Tangible Media Group. It is not a proposed solution, but hopes to “raise a new set of research questions to go beyond the GUI” (graphical user interface).

In the paper they break down 3 essential ideas of Tangible Bits:

  • interactive surfaces 
  • coupling of bits with graspable physical objects
  • ambient media for background awareness 

Using 3 prototypes for illustration:

  • metaDESK
  • transBOARD
  • ambientROOM

 

Tools throughout history

It begins with a reflection on how, before computers, people created a rich and inspiring range of objects to measure the passage of time, predict the planets’ movements, and compute and draw shapes. A lot of them were made of a range of beautiful materials, from oak to brass (such as items in the Collection of Historical Scientific Instruments).

“We were inspired by the aesthetics and rich affordances of these historical scientific instruments, most of which have disappeared from schools, laboratories, and design studios and have been replaced with the most general of appliances: personal computers. Through grasping and manipulating these instruments, users of the past must have developed rich languages and cultures which valued haptic interaction with real physical objects. Alas, much of this richness has been lost to the rapid flood of digital technologies.”

 

Bits & Atoms 

What has been lost in the advent of personal computing? How can we bring back physicality and its benefits to HCI? This question brought them to the idea of bits & atoms: that “we live between two realms: our physical environment and cyberspace. Despite our dual citizenship, the absence of seamless couplings between these parallel existences leaves a great divide between the worlds of bits and atoms”. Yet we’re expected at times to be in both worlds simultaneously – why did our interfaces (at the time the paper was published) not reflect or provide this bridging?

 

 

 

Haptic & Peripheral Senses 

They mention that we have created ways of processing info by working with physical things, like Post-it notes, or how we might sense a change of weather through a shift in ambient light. However, these methods are not always folded into the development of HCI design. There needs to be a more diverse range of input/output media – currently there is “too much bias towards graphical output at the expense of input from the real world”.

 

From Desktop to Physical Environment: GUI to TUI

The Xerox Star (1981) workstation set the stage for the first generation of GUI, establishing a “desktop metaphor” that simulates a desktop on a bit-mapped screen. It also set several important HCI design principles, such as “seeing and pointing vs remembering and typing” and “what you see is what you get.”

Apple then brought this style of HCI to the public in 1984, and it became pervasive through Windows and more. In 1991 Mark Weiser published his paper on ubiquitous computing.

Weiser’s paper showed a different style of computing / HCI, one that pushes for computers to be invisible. Inspired by it, Ishii and Ullmer seek to establish a new type of HCI called “TUIs”: Tangible User Interfaces, augmenting the real world by coupling digital information to everyday things and spaces.

Keywords: tangible user interface, ambient media, graspable user interface, augmented reality, ubiquitous computing, center and periphery, foreground and background.

 

 


 

Magic Windows: W3 Reading – Bricks: Laying the Foundations for Graspable User Interfaces

An argument for virtual items having physical forms – physical items allow for richer interactions / affordances that “include facilitating two handed interactions, spatial caching, and parallel position and orientation control”.
This paper was published in the ACM proceedings of CHI ’95 by George Fitzmaurice, Hiroshi Ishii, and William Buxton. It introduces the concept of Graspable User Interfaces, which allow for control of electronic or virtual objects via physical control handles. The “bricks” are physical objects that function as input devices, paired to virtual objects for manipulation or expressing an action. They work with a large display surface that they deemed the “active desk”. In the paper they present 4 stages:
    • series of exploratory studies on hand gestures / grasping
    • interaction simulations using mockups / rapid prototyping tool
    • working prototype and sample application called GraspDraw
    • initial integration of graspable UI concepts into a commercial application

 

Space-multiplexed input
“With space-multiplexed input, each function to be controlled has a dedicated transducer, each occupying its own space. For example, an automobile has a brake, clutch, throttle, steering wheel, and gear shift which are distinct, dedicated transducers controlling a single specific task.”

Time-multiplexed input
“[Time multiplexing] uses one device to control different functions at different points in time. For instance, the mouse uses time multiplexing as it controls functions as diverse as menu selection, navigation using the scroll widgets, pointing, and activating ‘buttons.’”

GUIs as an example of dissonance
“…the display output is often space-multiplexed (icons or control widgets occupy their own space and must be made visible to use) while the input is time-multiplexed (i.e., most of our actions are channeled through a single device, a mouse, over time). Only one task, therefore, can be performed at a time, as they all use the same transducer. The resulting interaction techniques are often sequential in nature and mutually exclusive.”
The Graspable UI philosophy as a proposed solution for this dissonance (bullets from paper):
    • “It encourages two handed interactions [3, 7];
    • shifts to more specialized, context sensitive input devices;
    • allows for more parallel input specification by the user, thereby improving the expressiveness or the communication capacity with the computer;
    • leverages off of our well developed, everyday skills of prehensile behaviors [8] for physical object manipulations;
    • externalizes traditionally internal computer representations;
    • facilitates interactions by making interface elements more “direct” and more “manipulable” by using physical artifacts;
    • takes advantage of our keen spatial reasoning [2] skills;
    • offers a space multiplex design with a one to one mapping between control and controller; and finally,
    • affords multi-person, collaborative use.”
KEYWORDS: input devices, graphical user interfaces, graspable user interfaces, haptic input, two-handed interaction, prototyping, computer augmented environments, ubiquitous computing
Basic Concepts 
Additional Word Bank:

 

  • transducer – a device that converts energy from one form to another
  • multiplexing – “method by which multiple analog or digital signals are combined into one signal over a shared medium. The aim is to share a scarce resource. For example, in telecommunications, several telephone calls may be carried using one wire. Multiplexing originated in telegraphy in the 1870s, and is now widely applied in communications. In telephony, George Owen Squier is credited with the development of telephone carrier multiplexing in 1910.”
    • think back to pcomp/icm exploration
  • space-multiplexed 
  • time-multiplexed 
  • paradigm – a typical example or pattern of something; a model.
  • concurrence – the fact of two or more events or circumstances happening or existing at the same time.
  • haptic technology

 

Magic Windows: W3 – Object Augmentation

What does it mean to augment objects, both technically and conceptually?

This week Rui brought up a few thinking points and walked us through a Vuforia tutorial. Class began by thinking about the philosophy of animism as a powerful approach to object augmentation.

“Animism is the belief that objects, places and creatures all possess a distinct spiritual essence. Potentially, animism perceives all things — animals, rocks, rivers, weather systems, human handiwork, and perhaps even words — as animated and alive” (from wiki).

Also thinking about developmental psychology theorist Piaget and how kids are egocentric: for a set number of years they tend to only have themselves as a reference for how they experience the world, and in that thinking they embody objects as a relational exercise. As an example, sometimes instead of “we got lost” we say “the car got lost,” when really it was a combination of us, our GPS, etc.

Rui challenged us to move beyond thinking about projection, past the hologram assistant.

He also talked about how to augment objects as an interface. How do you extend a current tool and its functionality? Can a water bottle also tell you about the amount of water drunk over the day? How to invoke it: look at the form and materiality of things, and blend that form into the AR space so it feels more naturally tied together.

He then brought up the sensation of pareidolia: seeing faces in everyday objects. The brain sometimes reads for easy patterns and fills in the dots – a quick assessment as a signal-to-noise mechanic, reaching conclusions that aren’t always correct, like an outlet being a face. Or Rui’s parallel example of hearing Portuguese vowel sounds when listening to Russian. Think also about how computer vision / facial recognition works.

Then we walked through a bunch of fun examples of augmenting objects.

 

Suwappu – Berg London

 

Garden Friends – Nicole He

Tape Drawing – Bill Buxton

https://www.billbuxton.com/tapeDrawing.htm

 

Smarter Objects

 

Invoked computing

 

MARIO

 

Paper Cubes – Anna Fusté, Judith Amores

 

 

inFORM

 

 

When Objects Dream – ECAL

 

HW for next week:

  1. Eyes on
    Pls read at least one of the following:

    Extra reading: Radical Atoms
    It is a nice pairing with Ivan Sutherland’s ‘The Ultimate Display’.

     

  2. Hands on
    Write a brief paragraph about your idea on augmenting an object.
    This should be the initial concept for next week’s assignment.
    Be clear about your idea, inspirations and potential execution plan.

  3. Brain on
    List potential input (sensory) and output (feedback) modes that you can leverage in the real world – translate these into technical possibilities using current technology (you can use your mobile device, but feel free to bring any other technology or platform (Arduino? etc.) that you can implement).
    Choose one of each (input, output) and create a simple experience to show off their properties, but also their affordances and constraints.

Magic Windows: HW2 – Storytelling through AR targets

For this week we were to use AR targets to explore storytelling / narrative. I thought it would be fun, for this class and for thesis ideation, to do a lotus-style brainstorm around the central question “How to Convey a Narrative Experience using AR targets” (post here).

 

Vuforia & Unity

In class we learned how to use Unity + Vuforia for target tracking; however, I think I learned my lesson about taking extra time when upgrading! I upgraded my Unity and was having a little trouble with it, so I will make sure to sign up for office hours. Originally I was going to explore targets by working on embroidered / quilted targets. I did upload the embroidery successfully to my Vuforia developer portal (post here). It was interesting to see the augmentability rating.

[screenshots: Vuforia target manager – augmentability ratings]

 

 

Eyejack app + Image targets 

[screenshot: Eyejack app]

In office hours, Sarah Rothberg suggested I try the Eyejack AR app as a quick & dirty way to prototype a target-based AR experience. It is a good tool to get you thinking and sketching out the actions you might want to develop later with Vuforia + Unity.

For homework I made a small scavenger-hunt-inspired experience where you try to catch a ladybug and let it back outside. My oldest nephew is obsessed with scavenger hunts right now, and I was thinking about how AR targets could illustrate a story while helping navigate the user through a space. I was also inspired by the CERN Big Bang AR app and how they designed around action prompts (like holding out your hand) that made it feel as if you were directly affecting the animation, even though the animation would move forward without it.

 

 

 

AR target Test – help the ladybug back home

Eyejack was fairly glitchy as a handheld experience, especially for a user who is new to the experience you built and is exploring around for the exact target mark. Maybe that’s a case for treating it as a good sketching tool, with later and final states better built out in another tool.

 

 

Magic Windows: AR Target brainstorm

[photo: lotus brainstorm worksheet]

Lotus-style brainstorm around “How to Convey a Narrative Experience using AR targets”

Center of Lotus: 

  • What are the strengths of AR targets?
  • What are the weaknesses of AR targets generally?
  • How to explore AR targets in thesis quilt project
  • How to break away from a rectilinear target shape / boundary of mobile device?
  • How to further mesh the AR world into the physical environment / tricks to avoid gimmicky pitfalls?
  • Who is telling the story – narrator/reader relationship or perspective?
  • What is the experience “end goal” vibe?
  • What materials do you want to augment / use as targets?

 

Petal pages:

  • What are the strengths of AR targets?
    • How to maximize AR’s ability to be experienced on the move, through a wide range of physical spaces?
    • How to best get at the feeling of discovery?
    • How do you want to “edit”/ manipulate surroundings?
    • What “realities” do you want to question?
    • How to use voice activation as a narrative device?
    • What is not “visible” in physical world that you wish to see or experience?
    • What is “visible” in physical world that you wish to rearrange, remove or re-imagine?
    • How might sound be altered, shifted, or triggered to convey a narrative through target-based AR?

 

  • What are the weaknesses of AR?
    • How to break out of AR’s boundary of the mobile device shape?
    • How to avoid the experience feeling “gimmicky” or derivative?
    • How to make it accessible, both for those who don’t own smart mobile devices and for a range of abilities?
    • how to design for quick data download?
    • how do you minimize the motion-sickness feeling for other participants who are not controlling the device?
    • How to make it a social interaction
    • how to make the mobile screen not the end goal, but a way to better observe the physical world around you
    • What are some onboarding tricks for AR use? Or quick prompts to make the experience feel more “immersive” / interactive?

 

  • What materials do you want to augment / use as targets?
    • objects / moments of a relationship to something or someone?
    • something along the lines of ingredients in the kitchen suggesting recipes with items on hand
    • best ways to use AR to reveal the multiplicity of histories (or untold histories)
    • how to bring to light / manifest stories through objects
    • how to draw with our bodies in space?
    • seeing other human interaction in the AR world, multi-player interaction
    • thinking about wearables, how to create fun narrative interactions through worn objects
    • augmenting symbolic physical or public spaces, ex: Stonewall Forever AR at Christopher Park / or, in a digital humanities way, using the plaques in a botanical garden’s Shakespeare Garden to augment different passages

 

  • What is the experience “end goal” vibe 
    • what design aesthetic? [nostalgic, referential to a specific moment in design history]
    • how do you encourage the user to experience / play w/ several elements if designed as a nonlinear, explorative story experience?
    • Does it celebrate or commemorate something specific?
    • Does the interaction contribute to a larger dialogue?
    • what is the duration of the experience?
    • who is the primary & secondary user?
    • how to design for an immersive educational tool / experience?
    • what emotional tone do you want?

 

  • How to keep AR from feeling gimmicky? To further mesh the physical & AR space together?
    • What are AR techniques that other people feel worn out on?
    • How to have the experience grounded in geolocation?
    • what are optical tricks that AR apps have used to help you suspend disbelief?
    • how to have the animation style or visuals better match the physical environment? (if it elevates the story?)
    • What are new innovations in the AR space?
    • What are techniques /experiences that feel overplayed to you?
    • What are everyday tasks that AR could help make easier, more accessible, beneficial for routine use? think openlab parsons
    • How can sounds best support and round out the experience?

 

  • How to break away from a rectilinear target shape / mobile screen? 
    • how to fold the AR mobile device into an outward-facing screen of a wearable?
    • what are effective playful 3d everyday objects to use as targets?
    • how to creatively design sfx and music in interaction
    • building a physical handheld object around the mobile space
    • how to use textures naturally found in our everyday?
    • are there physical objects in public space that can be used?
    • could the time of day function as creating different targets from the same object?
    • could using prompts through geolocation support a more exciting experience?

 

  • how to effectively / creatively use AR targets in thesis quilt? 
    • how to have multiple pathways / pages in AR experience [magnifier / find leaf mode vs a more info button / return home screen]
    • how to fold in real world collection or experience into the trigger
    • what is the action that brings the user into the ‘magic circle’ as we talked about in Joy of Games?
    • Which AR tools to use in the deliverable time frame?
    • What types of mark making in quilting can be mirrored in the ar animation experiment
    • what size or spacing between targets is most effective
    • how to fold in gamification or badge collection?
    • how to have it feel like it naturally enhances the project / feels integrated, vs a tacked-on buzzwordy tech method?

 

  • who is telling the story?
    • which voice do you want to use?
      • ex:3rd person omniscient
    • if in a digital humanities exercise of “thick” storymapping, who built the map that the stories are being placed on or embedded in?
    • historical / educational?
    • revealing multiple histories & stories around a specific topic
      • oral histories
    • does the augmented target unfold the narrative through onscreen visuals (having to watch) versus visual cues that inspire you to look around the screen?
    • how to develop the experience to be player-input driven, like a choose-your-own-adventure or generative storyline?
    • how to design as a culture jamming activity / experience? who is dictating the culture, and who is creating the jam?
    • what are storytelling, narration, or poetic devices that could be helpful? (like breaking the 4th wall (is there a 4th wall in AR?), or onomatopoeia / metaphors?)

Filtering

Highlighting

Value mapping – Prioritization