We Rep [Ideas]

Mass Innovation: Interface as Infrastructure Pt. 4


Notes to the next cup.

People tinker, build, remix, and repurpose; they reverse and even perverse engineer. They crack open the housing, look under the hood, view the source. We have always made things, and have always been defined by what we make and how it was made. Today the ‘refresh rate’ of most physical objects is in line with the just-in-time business processes responsible for the economic dance that synchronizes and coordinates time, resources, design, engineering, manufacturing and distribution. Emerging manufacturing technologies suggest that much of what we will produce materially in the future will be printed by a mix of industrial and domestic multi-material fabricators and recyclers. Widespread adoption of these processes shortens lead times, speeding up manufacturing and with it the “flow of objects.” This begs new questions: why should something come to be at all? Is hyper-disposability, the endless reproduction of the same matter, a sustainable idea? How might “just-in-time objects,” or a service-like time-share in a “pattern of matter,” change expectations and experiences?

Just as the web moved from static websites to “feeds and flows,” objects too are moving from static things to service-driven production by newly minted ‘manufacturing-as-service’ practices. We may begin to see ‘things’ as instantiations of responsive, relational systems of services, feeds and flows. As objects come to be considered information, they become ever more malleable: subject to change, edits, mixes and blends. This simultaneously gives birth to a new breed of unconventional value propositions and business opportunities. Could what Google did for information be done with matter? Today’s adaptive and flexible digital manufacturing technologies and production processes lend themselves to the varieties of ‘experience data’ and the flow of designs stemming from an interplay of experience-sampling infrastructure, user annotation and user-guided design. Objects can become passports to experience and feedback, a platform for citizen consultants to sketch dream products and interactions, forging a direct interface between people and organizations.

Explorations of object behavior, perception and communication are strange attractors of interest to business, policy and academia. As consumers we ‘vote’ with our wallets, clicks, time and attention. People expect things to be easy to use, to cost little or nothing, to maximize personal freedom, to be customizable, safe and secure, and to learn about our personal preferences. [Lee et al] According to the literature describing the promise of ubiquitous computing artifacts, people expect value from seamless anybody, any-service, any-place, any-time, any-device interactions. [Weiser] It seems people expect things to just get better, to be a breeze, to be almost invisible. Organizations, meanwhile, need sustainable returns on their IP holdings, mobilizing them through product, service and systems solutions in the marketplace. How do we create a common vocabulary for the mass innovation interactions needed to move toward a system that satisfies all of these concerns?

Products and services are behavior ‘enabling and mapping’ technologies; the identification and recording of behaviors at the object level will be critical to mass innovation. Products and spaces, too, can be platforms for recording user annotations and innovations. Labs will spill out into the streets, into objects, and into your hands, boasting a growing, evolving, upgradeable ‘experiential literacy’ that reveals new insights into the climate of intent, motivation and experience contributing to the meanings an object may come to carry. A future Google result for “Sketching User Experiences” may return: “Did you mean: Users sketching experiences?”

Can objects become collectively or individually authored by their users? The relationships between an object and the individual or group experience of it produce a stream of information about the object’s conceptual, emotional and functional roles. Over time that information may show shifts and transitions in a given object’s roles as what it means to people changes. This information is an invaluable and currently under-observed resource that can aid in the design of the next instantiation of a given object. Designing in, through and from the human processes that negotiate meaning through experience is key to sustained relevance, better inspiring and facilitating the habits that accompany new meanings as they manifest around objects. A great deal of strategic value for organizations, institutions and individuals lies in cracking this dilemma, and in creating things and interfaces that operate not just for many people but also because of them: an infrastructure that is an interface to infrastructure. New revenue strategies and, more importantly, pervasive co-prosperity lie dormant in the interplay of the social negotiation of ‘what could be’ and ‘what comes to be.’

‘Non-destructive annotation,’ historically used in design and architecture, where one would lay vellum over a drafting sheet to make suggestions, is beginning to migrate to digital media through startups such as Cozimo, which offer the sharing, discussion and annotation of digital files. Vellum was traditionally the tangible ‘comment medium’ beyond text: one could draw alternatives, suggest changes, write words, measures, dimensions, etc. Cozimo offers direct writing onto digital video and image files, a workflow-friendly way to converse around digital works as conversations become more remote and asynchronous.

One side effect we are beginning to see as hard things gain increasingly soft qualities is that the surface and motion of things are becoming interface. I can navigate a game or control my Roomba vacuum cleaner with the accelerometer in my laptop. I can translate complex real-world actions into in-game behavior with a Wiimote. The iPhone’s ‘screen sprawl’ has almost engulfed the entirety of the device, and Apple’s patents suggest that it eventually will. Ambient Devices boasts objects that are screens, streaming real-world data by way of color saturation and gradient change. Objects indeed are becoming screen, and screen is becoming interactive interface. If we push this to the limit of ‘objects as interface,’ objects become ripe vessels for explicit tangible annotation, where people can voice what they might like to see and feel, stemming from their interpretations and personal experiences. People sketching, scrubbing and manipulating, speaking through tweaking in the context of use, produce a wealth of variants that add up to thousands or millions of nuanced ‘perfect’ products: a ‘mass innovation.’
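The vellum idea can be expressed in code: the base work is immutable, and every comment lives in a separate layer that merely references it. The sketch below is an invented illustration of that pattern, not Cozimo’s actual data model; all class and field names are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class BaseAsset:
    """The original work: never modified, only referenced."""
    name: str
    data: bytes

@dataclass
class Annotation:
    author: str
    note: str
    region: tuple  # (x, y, w, h): the patch of the asset the note points at

@dataclass
class AnnotatedAsset:
    """Annotations stack in layers over the asset, like vellum over a drawing."""
    base: BaseAsset
    layers: list = field(default_factory=list)

    def annotate(self, author, note, region):
        self.layers.append(Annotation(author, note, region))

    def flatten(self):
        """Read every comment without ever touching the base data."""
        return [(a.author, a.note) for a in self.layers]

asset = AnnotatedAsset(BaseAsset("shoe-v2.png", b"<image bytes>"))
asset.annotate("reviewer", "widen the heel strap", (120, 40, 60, 20))
assert asset.base.data == b"<image bytes>"  # the original is untouched
```

Because the base is frozen, any number of people can comment, and the conversation can always be peeled away to recover the original, which is the whole point of the vellum metaphor.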

The majority of people might simply allow objects to accrue metadata, cultivating their ‘aboutness,’ as opposed to actively annotating. The use of products can become a direct interface to manufacturing processes, as objects implicitly harvest suggestions through their journeys in human habitats. What we may see is the collision of user-driven ambient and explicit annotations and designs with the emerging fabrication processes engulfing the production of everyday things. Normal things. A product line of enabled objects can become a networked innovation lab that innovates from within the experience and interactions of a population of prosuming performers. There is opportunity in harnessing the narrative of edge competencies that diverse and driven individuals are expressing. The convergence of hardware as software, new forms of experience sampling, user annotation and design, and manufacturing as service, combined with the behavior of remix cultures, creates ripe conditions for new classes of user-generated content built around read/write matter.

Ambient Co-creation

“BCN Formula is a planning tool in the form of an operational multi-player game that generates building proposals for Barcelona in real time. The existing city is modelled as a two-plus-eleven-dimensional grid that processes its internal states like a cellular automaton, and can be externally influenced by the users through urban interventions. These interventions change the position of blocks within the eleven urban dimensions of Barcelona defined by ESARQ students. The dimensions capture the swarming life, traffic, and commercial activities unfolding in the existing Barcelona grid. The grid influences the movement of a swarming point cloud. As soon as the point cloud reaches critical mass, it generates a sculptural structure informed by the multi-dimensional procedural model of the city. ESARQ students provided feedback to the process by interpreting the structures as buildings in urban contexts, which are placed by the point cloud at locations that fit the criteria. Parallel to the existing grid, a seductive city comes to flourish, respecting the old rationalist city grid but refraining from any mimicry. The new parallel city of Barcelona co-exists with the existing one.

Here, city planning is not thought of as top-down pressure but as a strategy for evolving existing social structures. The workshop participants described the genetic codes of the Barcelona grid and the Barcelona feeling and atmosphere. The students were asked to design flowcharts diagramming the city as an input > processing > output device. They learned to work with the game development programme Virtools, which was then used to build the multi-player planning game. Playing the planning game produces the data [in real time] used for the design of the buildings of the parallel world. Procedures include intuitive acts.

The resulting structures are blobs, in that they are double-curved, twisted, seemingly irrational and not geometrically derivable structures. Yet blobs they are only to those who do not know the genetic code and the procedures that generated these highly informed, context-related building structures.”

The example above describes a new, anomalous form of co-creation, where interacting individuals and existing infrastructure become a stream of entangled, dialogical real-time data and information. This data constitutes a shared vocabulary for further discussion, a shared design material that can be observed, shaped and expressed in a myriad of ways. Here the Hyperbody Research Group from Delft TU in the Netherlands is enabling a system whereby users indirectly partake in generative architecture and city planning. The team of researchers may audit, tweak and further develop these plans for viable implementation. The BCN Formula game, through the media of location, transaction and visualization, fashions a low-cognitive-load sub-activity that generates useful pre-design material for professionals to ingest and design from. BCN is a weak signal that provides an existence proof that we can model, to some degree, products, spaces and experiences as a side effect of living.
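The quoted description names an algorithmic pattern: a grid relaxed like a cellular automaton, a point cloud swarming over it, and structures seeded wherever the cloud reaches critical mass. BCN Formula’s actual rules are not reproduced here, so the following is a toy Python sketch of that general pattern; the grid size, agent count and critical-mass threshold are all invented for illustration.

```python
import random

SIZE = 12           # grid side length
AGENTS = 80         # size of the swarming point cloud
CRITICAL_MASS = 10  # agents on one cell needed to seed a structure

# One "urban dimension": an activity value per cell, relaxed like a
# cellular automaton (each cell moves toward its neighborhood average).
grid = [[random.random() for _ in range(SIZE)] for _ in range(SIZE)]
agents = [(random.randrange(SIZE), random.randrange(SIZE)) for _ in range(AGENTS)]

def step(grid, agents):
    """One tick: relax the grid, then let each agent climb toward activity."""
    new_grid = [[0.0] * SIZE for _ in range(SIZE)]
    for y in range(SIZE):
        for x in range(SIZE):
            nbrs = [grid[(y + dy) % SIZE][(x + dx) % SIZE]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            new_grid[y][x] = sum(nbrs) / len(nbrs)
    # The swarm follows the city: each agent hops to its most active neighbor.
    new_agents = []
    for (x, y) in agents:
        choices = [((x + dx) % SIZE, (y + dy) % SIZE)
                   for dx in (-1, 0, 1) for dy in (-1, 0, 1)]
        new_agents.append(max(choices, key=lambda c: new_grid[c[1]][c[0]]))
    return new_grid, new_agents

def structures(agents):
    """Cells where the point cloud has reached critical mass become building seeds."""
    counts = {}
    for pos in agents:
        counts[pos] = counts.get(pos, 0) + 1
    return [pos for pos, n in counts.items() if n >= CRITICAL_MASS]

for _ in range(20):
    grid, agents = step(grid, agents)
print("building seeds:", structures(agents))
```

Nothing in this sketch “designs” a building; it only produces candidate sites as a side effect of simulated activity, which is exactly the role the essay assigns to the players: pre-design material for professionals to ingest.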

Any product in the hands of a person, or any place proximal to people, represents valuable time that could provide useful information about how products and spaces are used, their unforeseen uses, and how they interact with other objects and spaces. Information like this reveals unknown but existing value that can be amplified to create more moving, useful and compelling experiences. This is a new evolutionary step beyond the focus group or the supergroup: it is unfocused, indirect creation through the ambient ‘sketchings and suggestions’ of people living life. “Everybody” may not be a designer, but indeed, “everybody” can partake in “living co-authorship.”

Direct Annotation, Definition and Design
“The point of the industrial era economy was mass production for mass consumption, the formula created by Henry Ford; but these new forms of mass, creative collaboration announce the arrival of a new kind of society, in which people want to be players, not spectators. This is a huge cultural shift, for in this new economy people want not services and goods delivered to them, but tools so they can take part.”

“We-Think,” Charles Leadbeater

There is another side to this argument: people who want to do. People have always modded, hacked and ‘done it themselves,’ and we still do. Communities such as makezine.com, instructables.com and various DIY cultures in backyards and basements still preserve this resourceful skill. It spans everything from the biotech hobbyist to children who have to make their own toys. Some do it for sport, and some out of necessity.

‘Direct creation’ is the idea that near-future objects will be a medium of digital/tangible sketching, annotation and expression, enabling people to customize, re-design, re-specify, re-render and re-print objects. Objects with this capability will enable a flexibility, hackability and freedom, with digital precision, that former generations of objects just could not match. The difference between direct annotation/creation and ambient co-creation is that direct creation is an individual or social act that takes the time, attentional and cognitive resources of the user/creator. Time and attention are scarce, so the benefits and rewards of participation would have to outweigh the investment.

PalCom, a recent European Union-funded research project, explored the antithesis of Latour’s “black box.” This touches on the difference between knowing how something works and knowing how to work it. PalCom, short for ‘palpable computing,’ supports the autonomy, authorship and access implied by direct creation in their statement:

“…notions like inspection, experimentation, translation, and emergent use become important, as people creatively connect and use ‘assemblies’ of palpable pervasive technologies.”

They continue to explain “palpability”:

“By ‘palpable’ we mean ‘noticeable’ and ‘understandable’. Palpability is not a property of technology itself, but an effect of people’s engagement with technologies, objects, and environments. For designers of pervasive computing, this means that they cannot design palpability into technologies. But they can design for palpability, to support people in making computing palpable.”

I love to look at toys sometimes as indicators of future features. One toy that offers rather unorthodox features of palpability and direct creation is Topobo:

Topobo is a 3D constructive assembly system with kinetic memory, the ability to record and playback physical motion. Unique among modeling systems is Topobo’s coincident physical input and output behaviors. By snapping together a combination of Passive (static) and Active (motorized) components, people can quickly assemble dynamic biomorphic forms like animals and skeletons with Topobo, animate those forms by pushing, pulling, and twisting them, and observe the system repeatedly play back those motions. For example, a dog can be constructed and then taught to gesture and walk by twisting its body and legs. The dog will then repeat those movements and walk repeatedly. The same way people can learn about static structures playing with building blocks, they can learn about dynamic structures playing with Topobo.

Simply put, Topobo is like Lego with gesture memory that you can ‘program through play.’ It is this form of playful programming that could enable many people to engage in design conversations through the object, the very subject of the conversation. As screens and shape-deforming materials begin to merge with object skins and surfaces, form and function are no longer solely determined by material constraints; form becomes programmable, as do function and service. Form in this case no longer follows function. Form follows spontaneity, improvisation, will and desire. Form follows relationships.
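The kinetic-memory idea of record-then-replay is simple enough to state in code. This is a toy model of the behavior the Topobo description above names, not Topobo’s firmware; the class, method names and sampling scheme are all hypothetical.

```python
class KineticJoint:
    """A toy model of kinetic memory: one motorized joint that records
    the angles you push it through, then replays them on demand."""

    def __init__(self):
        self.angle = 0.0
        self.tape = []         # recorded angle samples, in order
        self.recording = False

    def start_recording(self):
        self.tape = []
        self.recording = True

    def push_to(self, angle):
        """The user physically twists the joint; the motion is sampled."""
        self.angle = angle
        if self.recording:
            self.tape.append(angle)

    def stop_recording(self):
        self.recording = False

    def play_back(self, cycles=1):
        """Replay the recorded gesture, like a Topobo dog repeating its walk."""
        trace = []
        for _ in range(cycles):
            for angle in self.tape:
                self.angle = angle   # the motor drives the joint this time
                trace.append(angle)
        return trace

hip = KineticJoint()
hip.start_recording()
for a in (0, 30, 60, 30, 0):   # teach a step by twisting the leg
    hip.push_to(a)
hip.stop_recording()
print(hip.play_back(cycles=2))  # the leg repeats the taught gesture twice
```

The interesting design property is that input and output share one channel: the same joint you twist to teach is the joint that moves to perform, which is what lets the object itself carry the design conversation.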

Chances are ‘thought to thing’ interfaces will not arrive so seamlessly. They will emerge in messy stages and spurts, with many faces. Recently, on a trip to Amsterdam, I walked into a Puma flagship store to find a novel way of buying a shoe. What appeared to be a cafeteria cashier’s station with small containers filled with shoe parts was actually a small factory. The ‘factory’ combined physical pieces of shoes carrying RFID tags to be scanned with a digital touch screen where one could modify the scanned pieces and build a new shoe. It could be paid for immediately and delivered to your door in a couple of weeks. This is a rather crude form of direct creation: long waits, and you have to do it in the store; cool, but no go.

Stephen Intille describes user-based digital design in his research paper ‘Eliciting User Preferences Using Image Based Experience Sampling and Reflection.’ Intille envisions users interacting with images and video from his experience-sampling cameras, manipulating the images with a stylus within an interface on a Palm handheld device. That interface could also include a scanning option where you could generate a 3D model from video or orthographic snapshots, manipulate away, and send to the fabbers. Perhaps near-field data transfer, where each object carries its own mod data ready to be annotated, transformed and paid for, with directions for pick-up or a send-to-home-printer option. Who knows; any, all or none may emerge.

Intel and Carnegie Mellon are working on a version of what J. Storrs Hall first termed “utility fog,” a form of programmable matter. Carnegie Mellon calls it Claytronics; Intel calls it by a more formal term, Dynamic Physical Rendering.

In the Dynamic Physical Rendering Project, researchers at the Intel Pittsburgh Lablet and Carnegie Mellon University are jointly exploring a new form of smart matter which would be composed of myriad tiny robots acting together for telepresence, teleoperation, material handling/manipulation, locomotion, and distributed sensing. “Ensembles” of thousands to millions of robots would form physical analogs of virtual shapes which human senses would accept as real, eliminating cumbersome virtual reality gear and viewing angle limitations now present for most 3D visualization and telepresence applications. Likewise, such ensembles would act as reconfigurable, general-purpose robots capable of many forms of locomotion and object manipulation.

Dynamic Physical Rendering intends to create a new level of ‘directness’ where designers, engineers or users could push and pull masses of nanobots into new configurations to negotiate a new design, create a button or a new handle. Objects would then be reduced to a database, playlist and replay within a context of new tangible media where soft and hard are phase states of desire. Such an extreme should be a starting point for philosophers, sociologists, designers, future MBAs, psychologists and political scientists willing to grapple with the future of symbols, signs, meaning, power, economy and access. How will we read, as well as write, into this new medium? What worlds will we render after having had them rendered for us for so long?


Written by rthomas

April 8, 2009 at 3:12 pm

