
Local Eyes

Research Open House 2020

Jeffrey Anderson, Graduate Architecture and Urban Design; Jason C Vigneri-Beane, Undergraduate Architecture
School of Architecture

Local Eyes is an architectural platform that increases environmental awareness by bridging physical and virtual worlds. As a prototype cladding unit that turns architecture into hardware and software for harvesting data, it integrates sensors and augmented reality triggers that relay environmental information to dynamically updated digital visualizations. Data visualizations can be triggered locally, by pointing a device camera at the unit's machine-readable graphics, or from anywhere via users' gestural inputs. The modular design supports a variety of sensor plug-ins for different contexts, with the ability to harvest sound, light, humidity, temperature, air quality, air pressure, and motion data.
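To make the plug-in idea concrete, the Python sketch below shows one way such a modular sensor interface could be organized. The class and method names here are illustrative assumptions, not the project's actual code: each sensor type implements a common interface, and a cladding unit is simply whichever plug-ins are currently installed.

from abc import ABC, abstractmethod
from dataclasses import dataclass
import time


@dataclass
class Reading:
    channel: str      # e.g. "temperature", "humidity", "sound"
    value: float      # measured value
    unit: str         # e.g. "C", "%RH", "dB"
    timestamp: float  # Unix time of the measurement


class SensorPlugin(ABC):
    """Common interface so a cladding unit can swap sensors in and out."""

    channel: str
    unit: str

    @abstractmethod
    def sample(self) -> float:
        """Return one raw measurement from the hardware."""

    def read(self) -> Reading:
        return Reading(self.channel, self.sample(), self.unit, time.time())


class TemperatureSensor(SensorPlugin):
    channel, unit = "temperature", "C"

    def sample(self) -> float:
        return 21.4  # placeholder; a real driver would query the sensor


class SoundLevelSensor(SensorPlugin):
    channel, unit = "sound", "dB"

    def sample(self) -> float:
        return 52.0  # placeholder


# A unit is simply whichever plug-ins are currently installed.
installed = [TemperatureSensor(), SoundLevelSensor()]
readings = [sensor.read() for sensor in installed]

A unit destined for a different context would install a different set of plug-ins, for instance air quality, air pressure, or motion, without changing the rest of the platform.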

VIDEO TRANSCRIPT

We are interested in supporting environmental awareness and analysis by connecting people to the invisible environments around objects and themselves in the physical world. Born of this interest, Local Eyes is a physical-virtual hybrid of architectural hardware and data-visualization software that treats architectural componentry as a platform for capturing and viewing invisible environmental data.

Local Eyes uses digital design and fabrication to prototype architectural cladding units with sensors integrated into their skins. The sensors harvest environmental data and relay it to a server. An augmented reality platform converts this data into a visual environment, populated with information, graphics, animation, and spatiality, that a user can explore on a phone or tablet.
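As a hedged sketch of this harvest-and-relay loop, the Python below reads a set of stand-in sensor values and posts them to a server over HTTP. The endpoint URL, payload shape, and one-minute interval are assumptions for illustration, not the project's actual protocol.

import json
import random
import time
import urllib.request

SERVER_URL = "https://example.org/local-eyes/readings"  # hypothetical endpoint
UNIT_ID = "cladding-unit-01"


def sample_sensors() -> dict:
    """Stand-in for reading the unit's integrated sensors."""
    return {
        "temperature_c": 20 + random.uniform(-2, 2),
        "humidity_pct": 45 + random.uniform(-5, 5),
        "light_lux": random.uniform(100, 800),
        "sound_db": random.uniform(30, 70),
    }


def relay(readings: dict) -> None:
    """POST one batch of readings to the visualization server."""
    payload = json.dumps({
        "unit": UNIT_ID,
        "timestamp": time.time(),
        "readings": readings,
    }).encode("utf-8")
    req = urllib.request.Request(
        SERVER_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)


while True:
    relay(sample_sensors())  # harvest, then relay
    time.sleep(60)           # e.g. once per minute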

In the Local Eyes V.1 prototype, users can view temperature, humidity, light levels, and sound levels at the surface of an architectural membrane over time. The architectural hardware is industrially designed so that sensors and localized triggers can be swapped in and out, upgraded, and adjusted.
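Viewing these readings "over time" implies some aggregation on the server side. The function below is a minimal, assumed example that buckets relayed readings by hour of day and averages each channel; the field names follow the relay sketch above and are likewise hypothetical.

from collections import defaultdict
from statistics import mean


def hourly_averages(batches: list[dict]) -> dict[tuple[str, int], float]:
    """Map (channel, hour of day) -> mean value, e.g. ("temperature_c", 14) -> 21.3."""
    buckets: dict[tuple[str, int], list[float]] = defaultdict(list)
    for batch in batches:
        hour = int(batch["timestamp"] // 3600) % 24
        for channel, value in batch["readings"].items():
            buckets[(channel, hour)].append(value)
    return {key: mean(values) for key, values in buckets.items()}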

Through the Local Eyes platform, an augmented reality environment is designed to be triggered in two ways: locally, with unit-integrated augmented reality triggers, or globally, with gestural inputs on the screen of a phone or tablet. This allows users to get information about an object and its environment whether they are at its location or someplace else. When the platform is triggered, the physical environment around a user's location becomes virtually populated with information within the screen of a phone or tablet. Users can then navigate around virtual arrays of information clusters to gain insight into what invisible changes are occurring in their environments over time.
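The two trigger paths might be distinguished roughly as in the sketch below, where a trigger recognized by the device camera anchors the data clusters at the physical unit, while a gestural trigger populates the user's current surroundings with a remote unit's data. The names are assumptions for illustration, not the project's AR implementation.

from dataclasses import dataclass


@dataclass
class Trigger:
    kind: str     # "local" (unit-integrated AR trigger) or "global" (on-screen gesture)
    unit_id: str  # which cladding unit's data to visualize


def anchor_for(trigger: Trigger) -> str:
    """Decide where the AR data clusters are anchored, based on the trigger path."""
    if trigger.kind == "local":
        # The device camera recognized the unit's integrated trigger:
        # anchor the visualization at the physical unit itself.
        return f"at unit {trigger.unit_id}"
    # Gestural input on the screen: populate the user's current surroundings
    # with the remote unit's data instead.
    return f"around the user, showing data from unit {trigger.unit_id}"


print(anchor_for(Trigger("local", "cladding-unit-01")))
print(anchor_for(Trigger("global", "cladding-unit-01")))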