Smart objects research explores embedding sensing and computing into everyday objects, augmenting them to serve as a source of information on their identity, state, and context in the physical world. A major challenge in the design of smart objects is to preserve their original appearance, purpose, and function. Consequently, many research projects have focused on adding input capabilities to objects while neglecting the output capability required for a balanced interface. This thesis presents a new approach to adding output capability, in which smart objects cooperate with projector-camera systems. The concept of Cooperative Augmentation embeds the knowledge required for visual detection, tracking, and projection within the object itself. This allows projector-camera systems to provide generic display services that any smart object can use spontaneously to achieve non-invasive, interactive projected displays on its surfaces. Smart objects cooperate by describing their appearance directly to the projector-camera systems and by using embedded sensing to constrain the visual detection process. We investigate natural-appearance vision-based detection methods and perform an experimental study specifically analysing the increase in detection performance achieved with movement sensing in the target object. We find that detection performance increases significantly with sensing, indicating that the combination of different sensing modalities is important and that different objects require different appearance representations and detection methods. These studies inform the design and implementation of a system architecture which serves as the basis for three applications, demonstrating visual detection, integration of sensing, projection, interaction with displays, and knowledge updating.
The displays achieved with Cooperative Augmentation allow any smart object to deliver visual feedback to users from implicit and explicit interaction with information represented or sensed by the physical object, supporting objects as both an input and an output medium simultaneously. This contributes to the central vision of Ubiquitous Computing by enabling users to address tasks in physical space through direct manipulation and to receive feedback on the objects themselves, where it belongs in the real world.