Comparing distributed and face-to-face meetings for software architecture evaluation: A controlled experiment

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Published

Standard

Comparing distributed and face-to-face meetings for software architecture evaluation: A controlled experiment. / Babar, Muhammad Ali; Kitchenham, Barbara A.; Jeffery, D. Ross.
In: Empirical Software Engineering, Vol. 13, No. 1, 02.2008, p. 39-62.

Vancouver

Babar MA, Kitchenham BA, Jeffery DR. Comparing distributed and face-to-face meetings for software architecture evaluation: A controlled experiment. Empirical Software Engineering. 2008 Feb;13(1):39-62. doi: 10.1007/s10664-007-9052-6

Author

Babar, Muhammad Ali ; Kitchenham, Barbara A. ; Jeffery, D. Ross. / Comparing distributed and face-to-face meetings for software architecture evaluation: A controlled experiment. In: Empirical Software Engineering. 2008 ; Vol. 13, No. 1. pp. 39-62.

BibTeX

@article{5092fc993a3748b79663a9d7e36a7462,
title = "Comparing distributed and face-to-face meetings for software architecture evaluation: A controlled experiment",
abstract = "Scenario-based methods for evaluating software architecture require a large number of stakeholders to be collocated for evaluation meetings. Collocating stakeholders is often an expensive exercise. To reduce this expense, we have proposed a framework for supporting the software architecture evaluation process using groupware systems. This paper presents a controlled experiment that we conducted to assess the effectiveness of developing scenario profiles, one of the key activities of the proposed groupware-supported process of evaluating software architecture. We used a cross-over experiment involving 32 three-person teams of third- and fourth-year undergraduate students. We found that the quality of scenario profiles developed by distributed teams using a groupware tool was significantly better than the quality of scenario profiles developed by face-to-face teams (p < 0.001). However, questionnaires indicated that most participants preferred the face-to-face arrangement (82%) and 60% thought the distributed meetings were less efficient. We conclude that distributed meetings for developing scenario profiles are extremely effective, but that tool support must be of a high standard or participants will not find distributed meetings acceptable.",
keywords = "Architecture evaluation, Process improvement, Controlled experiments, Groupware support, Scenario development",
author = "Babar, {Muhammad Ali} and Kitchenham, {Barbara A.} and Jeffery, {D. Ross}",
year = "2008",
month = feb,
doi = "10.1007/s10664-007-9052-6",
language = "English",
volume = "13",
pages = "39--62",
journal = "Empirical Software Engineering",
issn = "1382-3256",
publisher = "Springer Netherlands",
number = "1",
}

RIS

TY - JOUR

T1 - Comparing distributed and face-to-face meetings for software architecture evaluation: A controlled experiment

AU - Babar, Muhammad Ali

AU - Kitchenham, Barbara A.

AU - Jeffery, D. Ross

PY - 2008/2

Y1 - 2008/2

N2 - Scenario-based methods for evaluating software architecture require a large number of stakeholders to be collocated for evaluation meetings. Collocating stakeholders is often an expensive exercise. To reduce this expense, we have proposed a framework for supporting the software architecture evaluation process using groupware systems. This paper presents a controlled experiment that we conducted to assess the effectiveness of developing scenario profiles, one of the key activities of the proposed groupware-supported process of evaluating software architecture. We used a cross-over experiment involving 32 three-person teams of third- and fourth-year undergraduate students. We found that the quality of scenario profiles developed by distributed teams using a groupware tool was significantly better than the quality of scenario profiles developed by face-to-face teams (p < 0.001). However, questionnaires indicated that most participants preferred the face-to-face arrangement (82%) and 60% thought the distributed meetings were less efficient. We conclude that distributed meetings for developing scenario profiles are extremely effective, but that tool support must be of a high standard or participants will not find distributed meetings acceptable.

AB - Scenario-based methods for evaluating software architecture require a large number of stakeholders to be collocated for evaluation meetings. Collocating stakeholders is often an expensive exercise. To reduce this expense, we have proposed a framework for supporting the software architecture evaluation process using groupware systems. This paper presents a controlled experiment that we conducted to assess the effectiveness of developing scenario profiles, one of the key activities of the proposed groupware-supported process of evaluating software architecture. We used a cross-over experiment involving 32 three-person teams of third- and fourth-year undergraduate students. We found that the quality of scenario profiles developed by distributed teams using a groupware tool was significantly better than the quality of scenario profiles developed by face-to-face teams (p < 0.001). However, questionnaires indicated that most participants preferred the face-to-face arrangement (82%) and 60% thought the distributed meetings were less efficient. We conclude that distributed meetings for developing scenario profiles are extremely effective, but that tool support must be of a high standard or participants will not find distributed meetings acceptable.

KW - Architecture evaluation

KW - Process improvement

KW - Controlled experiments

KW - Groupware support

KW - Scenario development

UR - http://www.scopus.com/inward/record.url?scp=37649012568&partnerID=8YFLogxK

U2 - 10.1007/s10664-007-9052-6

DO - 10.1007/s10664-007-9052-6

M3 - Journal article

VL - 13

SP - 39

EP - 62

JO - Empirical Software Engineering

JF - Empirical Software Engineering

SN - 1382-3256

IS - 1

ER -