What makes BSA's secure software development framework unique?
SAFECode produces really good literature around best practices in relation to secure development. But those documents tend to be more narrative; they don't provide the specific diagnostic statements that we try to include in the framework, and so they don't really provide any basis for assessing the security of an individual product or service.
The best practice literature tends to focus on certain elements of software security.
SAFECode has done a wonderful job over the years building out best practice literature around the secure development lifecycle. But if you look at our framework, one of the things that we're trying to assert is that the secure development lifecycle is one important thing to consider in relation to software security, but it's kind of the input side. It outlines the processes through which organizations should be integrating security into their software development. We also need to think about the output side, when you look at the product or service that comes out of the secure development lifecycle.
What are the characteristics of that product or service that are important to assessing its security? We try to tackle that as well. It is the first framework, guidance, or tool that provides specific, measurable guidance for software-related security considerations. How can software developers prepare to start using the BSA security framework? Ross: We'd like to see it become a basis for organizations to design their secure development lifecycle.
Each of the diagnostic statements and subcategory statements makes assertions about what software developers should be doing; one of the reasons we link to information resources throughout the document is to provide as much information as possible on how they can do it. Ultimately, we'd like to see software development organizations design their software development lifecycles in ways that address each of the diagnostic statements throughout the relevant portions of the framework, and, again, that's the input side.
On the output side, we think it's important that software development organizations be able to account for how they address each of the security considerations, particularly in the secure capabilities function of the framework in relation to each of the products and services they produce.
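As a minimal sketch of the idea described here, a framework of this shape can be represented as machine-readable diagnostic statements grouped under functions, against which a product's coverage can be computed. The function names, statement IDs, and statement texts below are illustrative assumptions, not taken from the actual BSA framework document:

```python
from dataclasses import dataclass

# Hypothetical framework structure: functions mapping to lists of
# diagnostic statements. All names and texts here are invented for
# illustration only.
@dataclass
class Diagnostic:
    statement_id: str
    text: str
    satisfied: bool = False

framework = {
    "Secure Development": [
        Diagnostic("SD-1", "A threat model is produced for each release"),
        Diagnostic("SD-2", "Third-party components are inventoried"),
    ],
    "Secure Capabilities": [
        Diagnostic("SC-1", "All external input is validated"),
    ],
}

def coverage(framework):
    """Fraction of diagnostic statements a product currently satisfies."""
    diags = [d for ds in framework.values() for d in ds]
    return sum(d.satisfied for d in diags) / len(diags)

# Mark one statement as satisfied and report coverage.
framework["Secure Development"][0].satisfied = True
print(f"{coverage(framework):.0%}")  # one of three statements satisfied
```

The point of such a structure is the "output side" the interview describes: an organization can account for how each statement is addressed per product, rather than only describing its development process.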
RQ3: In the second round, the participants supported our categorization of behavior-based and opinion-based evaluation methods. In the case of predictive evaluation methods, too, the majority supported the classification. The focus group was the only method that was specified as opinion-based by six participants.
The participants stated that the focus group can include elements of the category Predictive but also of the category Opinion-based, because it can combine an inspection with a group discussion focused on different viewpoints and opinions about artifacts. The results led us to conclude that the participants were able to apply the classification to the collected evaluation methods. The interaction between the participants, and the possibility to ask them questions, makes it possible to avoid misunderstandings and to verify the identified evaluation methods and artifacts in a discussion round.
In particular, the focus group session gave us the possibility to (a) discuss and verify the results from the literature review (RQ1) and from the expert survey (RQ1, RQ2, and RQ3), but also to (b) collect and identify missing or further evaluation methods (RQ2) resulting from the discussion and interaction between participants in the group. Since we decided on a face-to-face focus group, which allows the exchange of visual and nonverbal cues to enhance communication, we were restricted to inviting people from the local area.
Further criteria for recruiting the participants were (a) that they were familiar with the topic, and (b) that they had the time and interest to attend a focus group session with a duration of about one hour. In addition, we selected experts who had not taken part in the expert survey, in order to avoid participants feeling the need to defend the results we gained from the expert survey. Based on the literature review and expert survey results, we observed a trend toward behavior-based and opinion-based evaluation methods. Since these are well-known methods in Human Computer Interaction, it was important for us to have at least one participant with expertise in this field, to identify further methods that are known in Human Computer Interaction but have not been adopted in PAIS so far.
A further criterion was that the participants covered the key concerns of business process management defined in van der Aalst and had a comprehensive knowledge of evaluation methods in computer science. One of these experts also had additional expertise in Human Computer Interaction. The focus group was conducted in a university meeting room and took about one hour. The session was guided by two skilled moderators and one note taker who helped the moderators.
The focus group session consisted of two steps: First, the participants filled out a questionnaire in which they had to (1) define their level of knowledge, (2) grade the relevance of typical and prospective evaluation methods for theoretical and executable artifacts in PAIS found in the first round of the expert survey, and (3) find future evaluation methods. In the second step, the participants discussed the relevance of evaluation methods and possible future directions for theoretical and executable artifacts.
RQ1: The participants discussed the set of evaluation methods which resulted from the first round of the expert survey. They found it was an arbitrary and fuzzy set of methods, and mentioned that some of the methods did not seem to be evaluation methods but rather artifacts. The reason for this difference between the two groups is that the focus group allows discussions between experts in order to reduce misunderstandings between individual interpretations, which is not feasible in a survey. Moreover, the participants agreed that users should play a more important role in evaluation methods, especially for executable artifacts.
As users work with the system on many levels, their perspective is particularly valuable. RQ2: At the beginning, the experts found it difficult to define future evaluation methods, and only interdisciplinary evaluation methods were mentioned. RQ3: The participants stated that for rating the relevance, additional information would be helpful, such as which artifacts the methods relate to, or a category scheme. A classification of methods would further support the relevance rating of evaluation methods in the questionnaire.
We proposed our category scheme (Behavior-based, Opinion-based, and Predictive), and the participants agreed to it. This section summarizes and outlines the results of the literature review (see Sect. ). Summary of evaluation methods, described by name, category (behavior-based (B), opinion-based (O), or predictive (P)), artifacts (executable (E) or theoretical (T)), and areas (human orientation (Hum), security (Sec), and visualization (Vis)).
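For illustration, the three-way scheme can be expressed as a simple lookup that assigns each collected method to one of the categories. The method names follow examples discussed in this paper, but the individual assignments below are a sketch, not the paper's full Table:

```python
# Sketch of the Behavior-based / Opinion-based / Predictive scheme.
# Assignments are illustrative; e.g., the focus group is treated as
# Opinion-based here even though, as discussed above, it can also
# include Predictive (inspection-like) elements.
CATEGORIES = {
    "experiment": "Behavior-based",
    "usability test": "Behavior-based",
    "questionnaire": "Opinion-based",
    "interview": "Opinion-based",
    "focus group": "Opinion-based",   # may also carry Predictive traits
    "inspection": "Predictive",
    "heuristic evaluation": "Predictive",
}

def categorize(method: str) -> str:
    """Return the category of an evaluation method, or 'unclassified'."""
    return CATEGORIES.get(method.lower(), "unclassified")

print(categorize("Focus Group"))  # Opinion-based
```

Such a mapping is what the participants asked for when they noted that a category scheme would support the relevance rating in the questionnaire.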
The analysis of evaluation methods showed us that different words were used for the same kind of evaluation method. For example, the application method also includes the case, example, scenario, storyboard, and use case methods. All of these methods were used in the reviewed literature, and were also stated by the experts in the expert survey and focus group, to describe the application of an artifact. Furthermore, discussion in groups and focus group, as well as implementation and prototype, were used similarly in the literature and by the experts.
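The consolidation of synonymous method names described above can be sketched as a canonicalization table. The groupings below follow the examples given in the text; any further aliases would be assumptions:

```python
# Sketch: consolidate synonymous evaluation-method names under one
# canonical method, as done when merging e.g. "case", "scenario",
# and "use case" into "application". Groupings follow the paper's
# examples; this is not the complete set of 23 methods.
SYNONYMS = {
    "application": {"application", "case", "example", "scenario",
                    "storyboard", "use case"},
    "focus group": {"focus group", "discussion in groups"},
    "prototype": {"prototype", "implementation"},
}

# Invert the grouping into an alias -> canonical-name lookup.
CANONICAL = {alias: name
             for name, aliases in SYNONYMS.items()
             for alias in aliases}

def canonical(method: str) -> str:
    """Map a method name to its canonical form (identity if unknown)."""
    return CANONICAL.get(method.lower(), method.lower())

print(canonical("Storyboard"))  # application
```

Normalizing names this way is what makes counting method occurrences across heterogeneous publications meaningful.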
The inspection method includes review techniques to detect a large number of basic problems. Therefore, the inspection method also incorporates the review and heuristic methods. To sum up, we identified a set of 23 evaluation methods. Even though we identified some evaluation methods suitable for only certain areas (such as functionality tests in human orientation), most of them can be adapted and applied to other areas such as security and visualization. Based on the previous section, we will discuss results, recommendations, lessons learned, the potential impact on research and practice, as well as limitations of this paper.
In this article, we investigated artifacts as well as evaluation methods in the areas of human orientation in general, security, and visualization in PAIS. This list can be used as a reference to evaluate theoretical and executable artifacts. In the following, we describe four results derived from the literature review, expert survey, and focus group. In the literature review, we noted that behavior-based and opinion-based evaluation methods are less frequently used than predictive evaluation methods. We assume that during the past 30 years, PAIS research has centered on the design and development of core PAIS-relevant features such as implementation, function, and application.
Behavior-based and opinion-based methods focus on users' activities and feedback.
It can be seen from the literature that the use of these methods has not been the main focus so far. Based on these results, we can assume that the technical quality of PAIS has improved while user experience and feedback have been neglected during development. However, user evaluations conducted in past PAIS developments might simply not have been published.

Human Orientation and Security: Correctness evaluations of theoretical artifacts and simulations of executable artifacts were mentioned by the experts and in the literature as evaluation methods for the areas of human orientation and security.
Both methods are also of interest for the area of visualization. These three methods are all behavior-based methods and are primarily used for executable artifacts. The literature review showed that for security, the evaluation of theoretical artifacts played a more important role in recent years than the evaluation of executable artifacts.

Security and Visualization: The results showed that, except for the ten widely used evaluation methods, no distinct evaluation methods overlapping the areas of security and visualization were discovered.
Only questionnaires for theoretical artifacts were found. However, we also identified questionnaires for executable artifacts in the human orientation area. This does not mean that no methods exist which intersect the areas of security and visualization.
In this study, however, based on the results of the literature review, expert survey, and focus group, we were not able to identify them. In recent years, research has studied and analyzed users in PAIS more frequently. This tendency is also reflected in the results of the expert survey and focus group. This increased interest in the human aspects of PAIS is also reflected in current conference calls, as highlighted in the introduction section. Based on the results, we propose the following three general recommendations.
These recommendations provide researchers with an overview of aspects they should consider in their investigations. The selection of an evaluation method depends strongly on the objectives your work is aiming at. Artifacts, data type, time, feasibility, and monetary funds are essential factors to consider when choosing an adequate evaluation method.
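As a toy illustration of weighing such constraints, one might filter candidate methods by the resources available. The candidate methods are drawn from this paper's domain, but the time, cost, and user-availability figures are invented for the sketch:

```python
# Toy sketch: filter candidate evaluation methods by available
# resources (artifact, time, budget, access to users). The numeric
# figures below are invented purely for illustration.
METHODS = {
    "questionnaire": {"time_days": 5,  "cost": 100, "needs_users": True},
    "inspection":    {"time_days": 2,  "cost": 0,   "needs_users": False},
    "experiment":    {"time_days": 20, "cost": 800, "needs_users": True},
}

def feasible(methods, max_days, budget, users_available):
    """Return the methods that fit the given time, money, and user constraints."""
    return [name for name, m in methods.items()
            if m["time_days"] <= max_days
            and m["cost"] <= budget
            and (users_available or not m["needs_users"])]

# With one week, a small budget, and no access to end users,
# only user-free predictive methods remain.
print(feasible(METHODS, max_days=7, budget=200, users_available=False))
```

The point is not the specific numbers but the recommendation itself: the choice of method should be an explicit function of objectives and constraints, not habit.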
For example, a walkthrough can be used to analyze usability issues in software products. Not only the more common evaluation methods but also selectively used ones can be utilized. Expert panels and functionality tests, for instance, can be used in all three areas.
This might be surprising, but during our review of the literature, the artifacts and evaluation methods were often not explicitly stated. We recommend that authors provide a full description of their evaluation methods. This can facilitate the reading of publications and promote systematic reviews of evaluation methods. We observed that experts in the expert survey and in the focus group referred to evaluation methods on different abstraction levels. A possible reason is that it is often difficult to draw a clear boundary between the different granularities of evaluation methods.
Furthermore, multiple definitions of evaluation methods exist in human orientation, security, and visualization in PAIS. For example, case studies were mentioned as evaluation methods by experts in the expert survey and in the focus group. However, according to Yin, a case study is a strategy, and includes methods like interviews and participant observation for data collection. Hence, a framework specifically for PAIS that describes different evaluation strategies, including artifacts and evaluation methods, would be helpful as a common basis. A reason for the different definitions and interpretations between experts could be that the participants came from different domains.
Nevertheless, this diversity of experts had the benefit of collecting typical evaluation methods for PAIS from different viewpoints. Moreover, the usage of interdisciplinary evaluation methods was perceived as gaining importance for future research. However, not only the definitions of evaluation methods varied; the meaning and context also differed between the fields of human orientation, security, and visualization (for example, in process mining). Therefore, a taxonomy providing a common understanding and contextual meaning would support the understanding of common practices, and should be combined with the above-mentioned framework for the different evaluation strategies.
In the literature review, the classification of the publications was performed based on (1) the content of the publication, and (2) the textual definition. By analyzing the content, we ensured that a misuse of definitions would not distort the classification. During the review, we discovered that the studies are often not fully described in the publications. For this reason, we skimmed the text, headings, and captions of figures to identify the artifacts and evaluation methods used in the publications. Often, we had to read the paper in full. Furthermore, we assessed the main idea behind each publication and identified artifacts and evaluation methods based on the course of actions.
Furthermore, the assignment of artifacts as theoretical or executable was not always unambiguous. We noticed that in our study the experts specified an algorithm both as a theoretical and as an executable artifact. Hence, in Table 2, an algorithm is assigned to both a theoretical and an executable artifact. Another example is the prototype. Most publications use various names for this, such as prototype, prototypical implementation, or proof of concept. In visualization, a prototype may also refer to a paper mockup as a theoretical artifact.
However, in human orientation and security, a prototype always refers to an executable artifact. We acknowledge that these ambiguities exist in research. Here, we dealt with this challenge by carefully reading each publication to determine which method was actually used. In this study, it was not possible to identify which of the evaluation methods named by the experts are more or less relevant.
One reason could be that the choice of evaluation method depends strongly on which artifact is going to be evaluated and on the aim of the evaluation. For example, if the aim of the evaluation is to find out how users interact with the system, the information about the time a user needs to complete predefined tasks might not give enough insight into user behavior.
This means that the usefulness and applicability of each evaluation method depends on the investigated artifact. Since the number of options and applications was extremely large, it was not possible to generalize the results. However, an evaluation of the different evaluation methods with regard to their specific application remains open for future work. In order to minimize the different options, a further possible direction for future work is to concentrate on a single category of evaluation methods and compare these methods by means of experiments. The aim of this paper was to assess how research conducts the evaluation of theoretical and executable artifacts for human orientation in general, in security, and in visualization in PAIS.
For this purpose, we provided a list of these artifacts and of the evaluation methods typically used to conduct an evaluation. This collection of artifacts and evaluation methods may serve as a basis for researchers and practitioners who wish to investigate them. Furthermore, researchers and practitioners may use this collection to discover unfamiliar, interdisciplinary evaluation methods. For example, in the area of security, research has neglected the evaluation of security modeling extensions. In addition, the classification of evaluation methods can be used as a guideline for categorizing the evaluation methods researchers utilize.
This paper provides an allocation of artifacts and evaluation methods in PAIS. It may serve as a basis and can be extended by adding new evaluation methods and artifacts that are not listed in this paper. Furthermore, practitioners may use this paper to reassess evaluation methods and to become acquainted with unfamiliar ones. This might lead to an improvement of software, for example, by using new evaluation methods during software development.
The results showed that behavior-based and opinion-based methods were recognized as prospectively relevant. Furthermore, predictive evaluation methods will continue to be of importance. The categorization of evaluation methods in PAIS research in the fields of human orientation in general, security, and visualization could be used for assigning the evaluation methods collected from participants in the expert survey and the focus group. See especially the demand for adequate process support in Industrie 4.0.
Open Access. First Online: 01 March. The type of investigation determines the strategy and artifact selected for the evaluation.