The following tutorials will be held at CSMR 2011, given a minimum of six participants:
- Tutorial 1: Tuesday, March 1, 2011, 9:00 to 12:30, A14 1-115
Elliot J. Chikofsky: IT Portfolio Management: Making the Best Choices of Software Projects and Systems to Invest In, Sustain, or Replace
- Tutorial 2: Tuesday, March 1, 2011, 14:00 to 17:30, A14 0-031
Florian Deißenböck, Benjamin Hummel, Elmar Jürgens: Code Clone Detection in Practice
- Tutorial 3: Tuesday, March 1, 2011, 9:00 to 12:30, A14 1-114
Jens Knodel: Rapid Architecture Evaluation (RATE)
- Tutorial 4: Tuesday, March 1, 2011, 9:00 to 12:30, A14 0-031
Rainer Koschke: Stopping Software Erosion with Static Analysis
- Tutorial 5: Tuesday, March 1, 2011, 14:00 to 17:30, A14 1-115
Harry Sneed: Migrating from PL/I and COBOL to Java — Strategies and Constraints
Tutorial 1: Tuesday, March 1, 2011, 9:00 to 12:30, A14 1-115
IT Portfolio Management: Making the Best Choices of Software Projects and Systems to Invest In, Sustain, or Replace
Elliot J. Chikofsky, Engineering Management & Integration (EM&I), Burlington, MA, USA
IT Portfolio Management is a structured process for the selection, tracking, and accountability of investments in programs, systems, and initiatives. Not every project and idea can be funded. There are enterprise goals and objectives to be met and trade-offs that must be made. What decision criteria are most important? Should systems be sustained and repaired, or replaced? Are existing systems and transformation projects meeting their objectives? In tough economic times, IT Portfolio Management provides an essential framework to enable better strategic and tactical decisions on how best to allocate resources.
This tutorial includes:
- An overview of IT Portfolio Management and how it relates to Enterprise Architecture, to software and systems Reengineering, and to project management accountability
- What constitutes a manageable portfolio of IT investments
- Structuring executive and department-level decisions with an IT Portfolio, including comparison of dissimilar alternatives
- Lessons learned in a variety of commercial enterprises and government organizations.
IT management professionals, commercial systems development managers, and systems-savvy business managers will learn portfolio concepts to bring home and put to use right away in their organizations. Commercial and government agency participants will better understand how to discuss and plan investment decisions. Research and development participants will better understand how to explain the value and justification for their projects in terms of the sponsor's enterprise objectives. Software engineers and R&D leaders will get a better understanding of the way enterprises decide to fund, or not fund, systems investments and software projects.
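One common way to structure the comparison of dissimilar alternatives is a weighted scoring model. The sketch below is a hypothetical illustration of that general technique, not necessarily the specific method taught in the tutorial; the criteria, weights, and alternative names are invented for the example:

```python
def score_portfolio(alternatives, weights):
    """Rank dissimilar IT investment alternatives with a weighted scoring model.

    Each alternative is rated on shared criteria (here 0-10), and the ratings
    are combined with enterprise-level weights; higher totals rank first.
    """
    def total(ratings):
        return round(sum(weights[c] * r for c, r in ratings.items()), 2)
    return sorted(((name, total(r)) for name, r in alternatives.items()),
                  key=lambda kv: kv[1], reverse=True)

# Hypothetical enterprise weights and two competing investment options.
weights = {"strategic_fit": 0.5, "risk_reduction": 0.3, "cost_savings": 0.2}
alternatives = {
    "replace_billing": {"strategic_fit": 8, "risk_reduction": 5, "cost_savings": 7},
    "sustain_billing": {"strategic_fit": 4, "risk_reduction": 7, "cost_savings": 6},
}
print(score_portfolio(alternatives, weights))
# → [('replace_billing', 6.9), ('sustain_billing', 5.3)]
```

The shared criteria make otherwise incomparable options (replace vs. sustain) commensurable, which is the essence of portfolio-level decision structuring.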
With 35 years of experience in software and systems engineering as a software tools developer, R&D leader, research analyst, and IT management consultant, Elliot J. Chikofsky brings a rich and diverse background to the discussion of enterprise systems decisions and investments. He is EM&I Fellow with the Engineering Management and Integration consulting firm, where he advises government and commercial clients on Enterprise Architecture and IT Portfolio Management. He is on the faculty of University of Phoenix's Massachusetts Campus in both IT and business management, and served for 20 years as adjunct faculty of Northeastern University's Graduate School of Engineering in Boston. He is past Chair of the IEEE Technical Council on Software Engineering (TCSE) and its current Vice-Chair for Conferences. He was Associate Editor-in-Chief of IEEE Software magazine, where he founded the column on software management. He chairs the Reengineering Forum.
Tutorial 2: Tuesday, March 1, 2011, 14:00 to 17:30, A14 0-031
Code Clone Detection in Practice
Florian Deißenböck, Benjamin Hummel, Elmar Jürgens, Technische Universität München, Germany
Research in software maintenance has shown that many programs contain a significant amount of duplicated (cloned) code. Such cloned code is considered harmful for two reasons: (1) multiple, possibly unnecessary, duplicates of code increase maintenance costs, and (2) inconsistent changes to cloned code can create faults and, hence, lead to incorrect program behavior.
Consequently, the identification of duplicated code, clone detection, has been a very active area of research in recent years. Although this research led to the development of mature clone detection techniques and tools, they are not commonly applied in practice yet. This tutorial transfers the insights that were gained in the clone detection community, and experiences that we collected during several years of applying clone detection in industry, to practitioners who want to apply clone detection in practice.
Attendees will learn what causes cloning, what its consequences are and how tools for the detection of clones work in principle. The core of the tutorial is a hands-on experience session where attendees learn how to apply a clone detection tool to their source code. This includes the tailoring required to avoid false positives as well as an interpretation of the clone detection results. The practical part will be performed using the clone detector built into ConQAT, which is licensed as open-source and allows attendees to directly apply their newly acquired knowledge to their work environment.
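To give a flavor of how clone detectors work in principle, here is a minimal, hypothetical sketch that finds identical (type-1) line sequences by hashing sliding windows of normalized lines. Production tools such as ConQAT instead work on normalized token streams (so renamed copies are also found) and scale to large systems:

```python
from collections import defaultdict

def find_line_clones(source: str, min_lines: int = 3):
    """Report duplicated runs of at least `min_lines` normalized lines.

    Toy illustration only: real clone detectors tokenize the code and
    normalize identifiers/literals so that renamed (type-2) clones are
    found as well.
    """
    # Normalize: strip whitespace, drop blank lines, keep original line numbers.
    lines = [(i + 1, l.strip()) for i, l in enumerate(source.splitlines()) if l.strip()]
    windows = defaultdict(list)
    for start in range(len(lines) - min_lines + 1):
        chunk = tuple(text for _, text in lines[start:start + min_lines])
        windows[chunk].append(lines[start][0])
    # Every window seen at more than one position is a clone candidate.
    return [positions for positions in windows.values() if len(positions) > 1]

code = """
x = load()
x = clean(x)
save(x)

x = load()
x = clean(x)
save(x)
"""
print(find_line_clones(code))  # → [[2, 6]]: the same 3-line run starts at lines 2 and 6
```

Most of the practical effort lies not in this core algorithm but in the tailoring the tutorial covers: filtering generated code and other false positives before reporting.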
Florian, Benjamin and Elmar work as research assistants in the Software & Systems Engineering group of Prof. M. Broy at the Technische Universität München and as consultants at CQSE GmbH.
Florian finished his PhD thesis about software quality control. His academic interests lie in software maintenance, software product quality and program comprehension. He studied computer science at the TU München and the Asian Institute of Technology, Bangkok.
Benjamin's research interests are modeling and verification of discrete and hybrid systems, and software quality and maintenance. He received a diploma in computer science from the Technische Universität München and is a member of the ACM and the Gesellschaft für Informatik (GI).
Elmar's academic interests include software maintenance, clone detection and usage analysis. He studied computer science at the Technische Universität München and the Universidad Carlos III in Madrid, Spain.
Tutorial 3: Tuesday, March 1, 2011, 9:00 to 12:30, A14 1-114
Rapid Architecture Evaluation (RATE)
Jens Knodel, Fraunhofer IESE, Germany
Making the right decisions is an integral part of any successful software development and evolution. Decision making requires efficiently and effectively evaluating risks and their potential impact on the software system. Such evaluations have to consider qualitative characteristics and the internal structure of the software — information the architecture provides even for very large, heterogeneous, and complex software systems. Hence, critical questions arising in all life cycle phases (e.g., choosing appropriate technologies, selecting subcontractors, deciding whether or not to restructure, or planning quality assurance activities) and sound decision making are mostly architecture-centric.
This tutorial presents RATE (Rapid ArchiTecture Evaluation) — an architecture-centric evaluation method delivering valuable input to decision making. Examples, experiences, and lessons learned from more than 20 architecture evaluation projects illustrate the principles underlying the method. Critical stakeholder concerns serve as input and drive the goal-oriented evaluation of the architectural solutions relevant to those concerns. The architecture-centric evaluation establishes confidence and trust in the solutions and delivers results rapidly. The evaluation results provide input to decision making, serve to mitigate risks, and, of course, improve the architecture of the software system.
The tutorial shows that architecture evaluation results can be achieved rapidly with limited effort, provide fast feedback on stakeholder concerns, and are crucial for decision making in software development and evolution. Examples from industrial projects worldwide give evidence of how architecture-centric evaluation establishes a sound basis for decision making.
Jens Knodel received a Diploma in Computer Science with a focus on Software Engineering from the Technical University of Stuttgart, Germany, in 2002. Since 2002 he has been a scientist at the Fraunhofer Institute for Experimental Software Engineering (IESE) in Kaiserslautern, Germany. As an applied researcher, he works in research and industry projects in the context of product line engineering, software architectures, and software evolution. Jens Knodel is the author of international conference and journal publications in the area of software and system architectures and software maintenance. Since 2006 he has served regularly as a program committee member for international conferences in the area of reengineering, software maintenance, and reverse engineering, and he was the General Chair of the 13th European Conference on Software Maintenance and Reengineering (CSMR 2009).
Tutorial 4: Tuesday, March 1, 2011, 9:00 to 12:30, A14 0-031
Stopping Software Erosion with Static Analysis
Rainer Koschke, University of Bremen, Germany
Note: This tutorial will be held in German.
The external quality of software, such as usability or performance, is quality that can be observed by the end user. The typical means of quality assurance for external quality are various kinds of tests. Internal quality, that is, the inner structure of the software, is at least as important as external quality. If we cannot control internal quality, we can hardly deliver external quality, nor can we fix defects and react to new or changed requirements efficiently and effectively. During evolution, internal quality decreases over time — a phenomenon known as software erosion.
Internal quality is observed primarily by developers. Unfortunately, the internal quality of software is difficult to observe directly due to the nature of software: source code can be vast and complex, and nobody can read and understand it as a whole. Static program analyses help developers measure, estimate, and visualize the internal quality of software.
This tutorial describes static program analyses for assuring the internal quality of software and their integration into the normal development process. It shows how to make internal quality visible and how to maintain and improve it through refactoring. You will learn what kinds of static program analyses exist (metrics, coding rule checkers, detectors for dead code, dependency cycles, bad smells, software clones, architecture violations, and more). In addition, the tutorial demonstrates how quality assurance using static program analysis can be embedded in the daily development process as part of continuous integration. The tutorial will demonstrate tools to support this process.
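To give a flavor of one analysis on this list, the following sketch (a hypothetical illustration, not taken from any particular tool) finds a dependency cycle in a module graph with a depth-first search — cyclic dependencies between modules are a classic symptom of architecture erosion:

```python
def find_cycle(deps):
    """Detect one dependency cycle in a module-dependency graph.

    `deps` maps a module name to the modules it depends on. Returns a
    cycle as a list of modules, or None. Classic three-color DFS:
    unvisited, "gray" (on the current path), "black" (finished).
    """
    color = {}   # missing key = unvisited
    path = []    # current DFS path

    def dfs(node):
        color[node] = "gray"
        path.append(node)
        for dep in deps.get(node, ()):
            if color.get(dep) == "gray":               # back edge: cycle found
                return path[path.index(dep):] + [dep]
            if dep not in color:
                cycle = dfs(dep)
                if cycle:
                    return cycle
        color[node] = "black"
        path.pop()
        return None

    for node in deps:
        if node not in color:
            cycle = dfs(node)
            if cycle:
                return cycle
    return None

# A small (hypothetical) architecture with an unwanted cyclic dependency:
modules = {"ui": ["core"], "core": ["db"], "db": ["core"]}
print(find_cycle(modules))  # → ['core', 'db', 'core']
```

Real analysis tools apply the same graph algorithm to dependencies extracted from the code and report the cycles as architecture findings.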
Tutorial 5: Tuesday, March 1, 2011, 14:00 to 17:30, A14 1-115
Migrating from PL/I and COBOL to Java — Strategies and Constraints
Harry Sneed
The topics of this tutorial include the following:
- Problems with legacy systems:
- inflexibility, complexity, incompatibility, proprietary technology, dependence on key personnel, lack of portability, non-conformance to standards.
- Strategies for migration:
- new development, reimplementation, wrapping, conversion.
- Economics of Migration:
- costs, benefits and risks of each alternative strategy.
- Conversion Strategy:
- Reengineering the legacy code, automated transformation of the code, procedural vs. object-oriented approach, tools for the conversion, sample transformations of PL/I and COBOL.
- Wrapping Strategy:
- stripping out code modules, automated wrapping of code modules, program and procedure wrapping, tools for wrapping, sample wrapping of PL/I and COBOL.
- Testing migrated systems:
- difficulties of proving functional equivalence, automated regression testing, program and system level test, comparing data and paths, test demonstrations.
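To illustrate the idea behind procedure wrapping, the sketch below (with a purely hypothetical record layout, not from the tutorial) marshals a modern call into the kind of fixed-width record a legacy COBOL or PL/I procedure expects, and unmarshals the result — the wrapper hides the legacy data format behind a modern interface:

```python
def to_legacy_record(customer_id: int, amount: float) -> str:
    """Marshal modern parameters into a fixed-width legacy record.

    Hypothetical layout resembling a COBOL copybook:
      CUSTOMER-ID PIC 9(8), AMOUNT PIC 9(7)V99 (implied decimal point).
    """
    return f"{customer_id:08d}{round(amount * 100):09d}"

def from_legacy_record(record: str) -> tuple[int, float]:
    """Unmarshal a legacy record back into modern values."""
    return int(record[:8]), int(record[8:17]) / 100

record = to_legacy_record(42, 1234.5)
print(record)                       # → '00000042000123450'
print(from_legacy_record(record))   # → (42, 1234.5)
```

A real wrapper would pass such records to the unchanged legacy procedure (e.g., via a transaction monitor or message queue); the marshalling layer is exactly what wrapping tools generate from the copybook or PL/I structure declarations.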