
Collaborizer: The sizer of the agile collaboration

Open Access. Published: July 14, 2022. DOI: https://doi.org/10.1016/j.simpa.2022.100371

      Highlights

      • Collaborizer is a tool that facilitates the management of collaborative work.
      • The tool generates benchmarking and visual reports.
      • The system identifies the aspects that affect Computer Supported Cooperative Work (CSCW).
      • The technology uses the principles of Agile management and Human–Computer Interaction (HCI).
      • The software is being used in scientific research and has potential business applications.
      • The code is open-source Python, available for reuse, with verified reproducibility.

      Abstract

      In recent years, some ways of measuring Computer Supported Cooperative Work (CSCW) have been proposed. However, there are few mechanisms for managers to assess the quantitative performance of their teams’ collaboration. This paper presents Collaborizer, a tool designed to benchmark collaborative work based on Agile principles and to generate visual reports. The software’s functionalities involve estimating team collaboration, generating graphs, and providing reports. Among Collaborizer’s contributions, we highlight that its code enables reproduction and that the tool allows managers to identify and improve several aspects of Human–Computer Interaction (HCI).

      Keywords

      Code metadata

      Current code version: V2
      Permanent link to code/repository used for this code version: https://github.com/SoftwareImpacts/SIMPAC-2022-117
      Permanent link to reproducible capsule: https://codeocean.com/capsule/0706732/tree/v2
      Legal code license: GNU General Public License v3.0
      Code versioning system used: git
      Software code languages, tools and services used: Python 3 and Google Colaboratory
      Compilation requirements, operating environments and dependencies: kaleido, matplotlib, pandas, pillow, plotly, reportlab, seaborn, and numpy
      If available, link to developer documentation/manual: (not provided)
      Support email for questions: [email protected]

      1. Introduction

      Human–Computer Interaction (HCI) involves the study of people’s behavior when they interact through technology, in both academic and business work. A variety of systems have been created to support the Computer Supported Cooperative Work (CSCW) area, such as collaborative platforms for councils [Jardim et al., 2019] and tools that facilitate the visualization of information, such as ExplorViz [Hasselbring et al., 2020].
      Mechanisms for measuring collaboration have been proposed [Zhang et al., 2019; Wang and Wang, 2021], and research indicates that managers need tools to help them evaluate teams and the resources used [Nath et al., 2021]. Some tools propose a qualitative method [Vivian et al., 2016]; however, this approach does not allow an empirical comparison.
      To overcome this research challenge, Collaborizer was proposed and developed. Its purpose is to benchmark the collaboration of a team using a given computer system and to generate a visual report. This report enables managers to identify problems related to the interaction between project participants. Thus, the tool enhances situational awareness for decision making based on quantitative data.
      The Waterfall approach is one of the most traditional project management approaches [Jardim et al., 2020]. It supports the interaction of those involved in the project and is widely applied to well-defined scopes. However, we opted for the Agile perspective in this research, as it is better suited to tailoring conditions [Kiv et al., 2018].
      In our previous study [Arce et al., 2013], a collaboration maturity assessment was built based on the 3C Model [Fuks et al., 2008] and the principles of the Agile Manifesto [Kiv et al., 2018]. This model defined the collaboration criteria and maturity levels and proved useful in evaluating a team involved in a real project. Later, the model was consolidated into the Collaboration Observation Framework (COF), with 31 criteria, and was used to evaluate three other organizational teams [Franca et al., 2015]. Table 1 summarizes the criteria defined by the framework.
      COF has been revised and implemented in the open source Collaborizer tool that is presented in this paper. Its code (collaborizer.py) is available for reuse and other applications.
      Table 1. Criteria of the collaboration evaluated (adapted from [Franca et al., 2015]).
      • Cooperation: Goal; Product; Artefact; Geographical distribution; Shared space; Activity/Task; Shared task; Resource; Decision; Commitment/Motivation.
      • Coordination: Monitoring tasks; Problems solutions; Assistance in organization; Tasks definition; Planning change; Methodology; Capacity; Team result; Coordinator role; Tools; Team experience; Adaptability.
      • Communication: Perception/Interpretation; Value; Negotiation; Information change; Communication language; Common sense; Synchronism/Transmission way; Knowledge sharing; Transparency.

      2. Software description

      The Collaborizer tool is designed to take advantage of the Google Forms interface [Google Forms, 2022] and to use the database it generates. Its components are: collaboration calculator, graph generator, and report builder. Fig. 1 shows the software deployment diagram in UML 2 notation [OMG, 2017].
      Collaborizer was developed in Python and tested on Google Colaboratory [Google Colab, 2020]. The software is lightweight and easy to run. As a first step, a questionnaire should be prepared, in a tool such as Google Forms, containing the 31 statements based on the COF. Each statement should use a scale from 1 to 5 to indicate the level of agreement, similar to the Likert scale.
      Fig. 1. Collaborizer software deployment diagram.
      Participants in a collaborative project can then use the questionnaire to evaluate their interaction through technology. A dataset with anonymized answers is generated (the Respostas.csv file included with the code, containing answers from a sample class).
      Collaborizer imports the dataset as a dataframe for processing. The “Collaboration calculator” component performs calculations such as the average score achieved for each criterion. In turn, each criterion contributes to a score from 0 to 100 for its respective constructor.
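      As an illustration of this step, the sketch below computes per-criterion averages and constructor scores with pandas (one of the listed dependencies). It assumes the answers are exported as a Respostas.csv file with one column per criterion and values on the 1-to-5 agreement scale; the column names, the partial criterion-to-constructor mapping, and the linear rescaling from the 1-to-5 range to 0-to-100 are illustrative assumptions, not Collaborizer's actual identifiers or formula.

import pandas as pd

# Partial, illustrative mapping of criteria to constructors; the real tool
# covers all 31 COF criteria listed in Table 1.
CONSTRUCTORS = {
    "Cooperation": ["Goal", "Product", "Artefact"],
    "Coordination": ["Monitoring tasks", "Tasks definition"],
    "Communication": ["Negotiation", "Knowledge sharing"],
}

# Anonymized questionnaire answers: one row per participant,
# one column per criterion, values on the 1-5 agreement scale.
answers = pd.read_csv("Respostas.csv")

# Average score per criterion across all participants.
criterion_means = answers.mean(numeric_only=True)

# Assumed rescaling of each constructor's mean from the 1-5 range to 0-100.
constructor_scores = {
    name: (criterion_means[criteria].mean() - 1) / 4 * 100
    for name, criteria in CONSTRUCTORS.items()
}
collaboration_average = sum(constructor_scores.values()) / len(constructor_scores)
print(constructor_scores, collaboration_average)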
      The “Graph generator” component was developed to make it easier for a manager to visualize and interpret the data. The Collaborizer tool provides four different graphs able to represent the collaboration measurements. Currently it generates a bar chart (Fig. 2) that allows a performance comparison between each constructor (Cooperation, Coordination, and Communication) and the average Collaboration result. This makes it possible to identify the constructor in which teams performed best with the technology used.
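      As a minimal sketch of this type of chart (not the tool’s actual plotting code), the snippet below draws constructor scores and the collaboration average with matplotlib, one of the listed dependencies; the score values are made up for illustration.

import matplotlib.pyplot as plt

# Constructor scores (0-100) and the overall collaboration average;
# the numbers below are illustrative, not real measurements.
scores = {"Cooperation": 72.0, "Coordination": 65.0,
          "Communication": 80.0, "Collaboration (average)": 72.3}

fig, ax = plt.subplots(figsize=(6, 4))
ax.bar(list(scores.keys()), list(scores.values()),
       color=["#4c72b0", "#4c72b0", "#4c72b0", "#55a868"])
ax.set_ylim(0, 100)
ax.set_ylabel("Score (0-100)")
ax.set_title("Scores per constructor and collaboration average")
fig.tight_layout()
fig.savefig("constructor_scores.png")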
      The stacked bar chart (Fig. 3) displays the participants’ agreement variations concerning the collaboration criteria. Each row displays the proportions of responses, ranging from “strongly disagree” to “strongly agree”. Thus, the further to the left (in red), the greater the proportion of participants who disagree with the respective criterion, while the further to the right (in blue), the greater the agreement.
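      One possible way to build such a stacked chart with pandas and matplotlib is sketched below; it assumes the same Respostas.csv layout as in the earlier sketch (only criterion columns with 1-to-5 values), and the red-to-blue palette is chosen here for illustration.

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# One column per criterion, values on the 1-5 agreement scale.
answers = pd.read_csv("Respostas.csv")
levels = [1, 2, 3, 4, 5]

# Proportion of participants at each agreement level, per criterion.
proportions = pd.DataFrame({
    criterion: answers[criterion].value_counts(normalize=True)
                                 .reindex(levels, fill_value=0)
    for criterion in answers.columns
}).T  # rows = criteria, columns = agreement levels

# Disagreement in red on the left, agreement in blue on the right.
colors = ["#ca0020", "#f4a582", "#f7f7f7", "#92c5de", "#0571b0"]
fig, ax = plt.subplots(figsize=(8, 10))
left = np.zeros(len(proportions))
for level, color in zip(levels, colors):
    ax.barh(proportions.index, proportions[level], left=left,
            color=color, label=f"Level {level}")
    left += proportions[level].to_numpy()
ax.set_xlabel("Proportion of responses")
ax.legend(title="Agreement", loc="lower right")
fig.tight_layout()
fig.savefig("criteria_agreement.png")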
      Fig. 2. Scores achieved in each constructor and the average in the collaboration.
      In addition, the “Report builder” component is capable of generating an executive report in PDF format. It indicates the most productive aspects of the collaboration and gives recommendations for improvement on the negative ones.
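      A minimal sketch of such a PDF report, using the reportlab dependency listed in the code metadata, is shown below; the scores, recommendation texts, and page layout are hypothetical and do not reproduce the tool’s actual report.

from reportlab.lib.pagesizes import A4
from reportlab.pdfgen import canvas

# Hypothetical inputs: constructor scores and short recommendation notes.
scores = {"Cooperation": 72.0, "Coordination": 65.0, "Communication": 80.0}
notes = {
    "Communication": "Most productive aspect of the collaboration.",
    "Coordination": "Lowest score: review task definition and monitoring.",
}

pdf = canvas.Canvas("collaboration_report.pdf", pagesize=A4)
width, height = A4
pdf.setFont("Helvetica-Bold", 16)
pdf.drawString(72, height - 72, "Collaboration executive report")
pdf.setFont("Helvetica", 11)
y = height - 110
for name, score in scores.items():
    pdf.drawString(72, y, f"{name}: {score:.1f} / 100")
    y -= 18
y -= 10
for name, note in notes.items():
    pdf.drawString(72, y, f"{name}: {note}")
    y -= 18
pdf.save()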
      Fig. 3. Stacked bar graph with the variation of participants’ agreement on each criterion.

      3. Impact overview

      Collaborizer is a functional prototype. Its code was developed in Python 3, one of the most widespread programming languages [Jean-Baptiste, 2021]. It runs on Google Colaboratory’s cloud servers [Google Colab, 2020], but also in any IDE that supports Python. Thus, other engineers and designers do not need complex configurations to perform their experiments.
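      For reference, a typical Colab setup under these assumptions might look like the cell below: the dependency list comes from the code metadata, and running collaborizer.py as a script (rather than importing it) is an assumption, since the invocation is not documented here.

# Example Google Colab cell: install the dependencies listed in the code
# metadata, then run the tool (assumed to be executed as a script).
!pip install kaleido matplotlib pandas pillow plotly reportlab seaborn numpy
!python collaborizer.py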
      Collaborizer is proving to be a powerful management support tool for executive professionals and researchers. Unlike most other technology solutions, which make qualitative assessments of collaboration, the tool’s distinguishing feature is that it makes quantitative measurements of CSCW and integrates Agile principles into its metrics.
      Some research questions can be addressed with Collaborizer, for example in the development of new algorithms, similar to research that combines communication and experience to form dream teams [Najaflou and Bubendorfer, 2021], or in proposing new methods, as in research that tries to determine the best configuration between humans and intelligent systems [Mackeprang et al., 2019].
      Regarding the dissemination of this tool, it is worth noting that it is currently in use in research at the “Universidade Federal do Rio de Janeiro” (UFRJ). This investigation examines the human and technological factors that positively and negatively affect students’ collaborative work in Virtual Learning Environments (VLEs). This exemplifies that Collaborizer has potential applications in the context of educational technologies [Jardim et al., 2022] as well as in corporate environments.
      The tool is used in three Brazilian university courses. This project supported teachers in recognizing class interaction in Google Classroom, selecting other technological resources, and orienting student engagement. Thus, teachers made both behavioral and technological adjustments. According to preliminary results, the feedback communication system helped educators perceive and act on student participation with the technology adopted. Subsequently, a round of benchmark testing will be carried out and the outcomes will be published.

      4. Conclusion

      In this article, we present the free and open-source software Collaborizer, which is able to run benchmark testing of collaborative work according to Agile principles. It allows a quantitative analysis of work, enabling the manager to compare teams and the technologies adopted. In addition, the tool produces visual reports for decision making aimed at improving HCI.
      A limitation of the current system is that data collection relies on participants actively responding to the survey, although this may not be a deterrent in many contexts. For future work, the authors aim to study collaboration throughout a project, build a graphical interface for the tool, and define and implement a system of performance indicators.

      CRediT authorship contribution statement

      Rafael Jardim: Conceptualization, Methodology, Software, Validation, Formal analysis, Investigation, Resources, Data curation, Writing – original draft, Writing – review & editing, Visualization, Supervision, Project administration, Funding acquisition. Henrique Rodrigues: Software, Validation, Formal analysis, Data curation, Visualization. Lidvaldo Santos: Validation, Resources. Juliana França: Validation, Writing – review & editing. Adriana Vivacqua: Writing – review & editing, Supervision.

      Declaration of Competing Interest

      The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

      Acknowledgments

      The authors of this publication acknowledge the “Universidade Federal do Rio de Janeiro” (UFRJ), Brazil, and the “Coordenação de Aperfeiçoamento de Pessoal de Nível Superior” (CAPES), Brazil, for their support in the development of new technologies.

      References

        • R.R.J. Jardim, et al., Designing a collaboration platform for electricity consumer councils, in: Proceedings of the 2019 IEEE 23rd International Conference on Computer Supported Cooperative Work in Design (CSCWD 2019), 2019, https://doi.org/10.1109/CSCWD.2019.8791909.
        • W. Hasselbring, A. Krause, C. Zirkelbach, ExplorViz: Research on software visualization, comprehension and collaboration, Softw. Impacts 6 (2020) 100034, https://doi.org/10.1016/j.simpa.2020.100034.
        • Z. Zhang, Q. Zhou, H. Zhao, H. Wu, Social regularized collaborative metric learning, in: Proc. 21st IEEE Int. Conf. High Perform. Comput. Commun. / 17th IEEE Int. Conf. Smart City / 5th IEEE Int. Conf. Data Sci. Syst. (HPCC/SmartCity/DSS 2019), 2019, pp. 2839–2846, https://doi.org/10.1109/HPCC/SmartCity/DSS.2019.00398.
        • K. Wang, Z. Wang, Novel metrics for social collaboration processes, in: Proc. 15th IEEE Int. Conf. Serv. Syst. Eng. (SOSE 2021), 2021, pp. 150–154, https://doi.org/10.1109/SOSE52839.2021.00022.
        • D. Nath, V.K. Reja, K. Varghese, A framework to measure collaboration in a construction project, in: World Constr. Symp., 2021, pp. 2–13, http://dx.doi.org/10.31705/WCS.2021.1.
        • R. Vivian, K. Falkner, N. Falkner, H. Tarmazdi, A method to analyze computer science students’ teamwork in online collaborative learning environments, ACM Trans. Comput. Educ. 16 (2016), https://doi.org/10.1145/2793507.
        • R.R.A.J. Jardim, M. Santos, E.C.D.O. Neto, E.D. Da Silva, F.C.M.M. De Barros, Integration of the waterfall model with ISO/IEC/IEEE 29148:2018 for the development of military defense system, IEEE Lat. Am. Trans. 18 (2020) 2096–2103, https://doi.org/10.1109/TLA.2020.9400437.
        • S. Kiv, S. Heng, M. Kolp, Y. Wautelet, Agile manifesto and practices selection for tailoring software development: A systematic literature review, in: Lect. Notes Comput. Sci., vol. 11271, 2018, pp. 12–30, https://doi.org/10.1007/978-3-030-03673-7_2.
        • D.C. Arce, J.B.S. França, L.M. Antunes, M. Roberto, S. Borges, Avaliação da Colaboração em Projeto Fundamentado em Práticas Ágeis, 2013, https://doi.org/10.5555/2542508.
        • H. Fuks, A. Raposo, M.A. Gerosa, M. Pimental, C.J.P. Lucena, The 3C collaboration model, in: Encyclopedia of E-Collaboration, 2008, pp. 637–644, https://doi.org/10.4018/978-1-59904-000-4.ch097.
        • J.B.S. Franca, A.F.S. Dias, M.R.S. Borges, Observations on collaboration in agile software development, in: Proc. 2015 IEEE 19th Int. Conf. Comput. Support. Coop. Work Des. (CSCWD 2015), 2015, pp. 147–152, https://doi.org/10.1109/CSCWD.2015.7230949.
        • Google Forms: Online form creator | Google Workspace, 2022, https://www.google.com/intl/en/forms/about/ (Accessed 19 June 2022).
        • OMG Object Management Group, An OMG® Unified Modeling Language® publication, December 2017.
        • Google Colab, Welcome to Colaboratory: Getting started - Introduction, 2020, https://colab.research.google.com/ (Accessed 16 May 2022).
        • L. Jean-Baptiste, Ontologies with Python, Apress, 2021.
        • Y. Najaflou, K. Bubendorfer, Forming dream teams: A chemistry-oriented approach in social networks, IEEE Trans. Emerg. Top. Comput. 9 (2021) 204–215, https://doi.org/10.1109/TETC.2018.2869377.
        • M. Mackeprang, C. Müller-Birn, M. Stauss, Discovering the sweet spot of human-computer configurations, Proc. ACM Hum.-Comput. Interact. 3 (2019), https://doi.org/10.1145/3359297.
        • R.R.J. Jardim, C. Delgado, M.F. Silva, CLIQ! Intelligent Question Classifier for the elaboration of exams, Softw. Impacts 13 (2022), https://doi.org/10.1016/j.simpa.2022.100345.