This paper addresses issues of safety in pervasive spaces. We show how pervasive systems differ from traditional computer systems, and how their cyber-physical nature ties them intimately to their users. Errors and conflicts in such spaces can have detrimental, dangerous, or undesired effects on the user, the space, or the devices. No existing support systems or programming models are conscious of the issue of safety: unrestrained programming is the model du jour, and it is inadequate. We need a programming model that encourages and obligates the various roles engaged in developing pervasive spaces to contribute to increasing safety. We propose a model that utilizes role-specific safety knowledge and takes advantage of the rich sensing and actuation capabilities of pervasive systems to detect and handle "conflicting contexts" and to prevent, or detect and avert, "impermissible contexts". We present our model and discuss how it mitigates overall safety risks in the presence of uncertainty arising from multiple independent roles. © 2011 Springer-Verlag.