- Click here for the conference poster with all details
- Conference start-end time: 9.30-17.00
- Conference room: 303
- Conference Schedule
We present a prototype of a technical framework to perform AFP (Automated Function Point) analysis on software built on a canonical architecture (Java Enterprise with entities modeled using JPA 2.0, the Java Persistence API). The solution is integrated into SonarQube and allows on-the-fly assessment of a new quality metric that we propose, called “Technical Delay”. We define Technical Delay as the Technical Debt of the project expressed in Function Points per hour instead of LOC per hour. Using AFPs as a normalization factor, Technical Delay can be used as an SLA metric on Agile projects (especially in contexts where Continuous Integration is used and the metric needs to be constantly recomputed and monitored). The presentation will show early results from pilot projects applying the methodology.
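As a minimal sketch of the normalization idea (the function name and figures below are illustrative assumptions, not the authors' actual implementation), Technical Delay divides a Function Point count by a remediation-effort figure:

```python
def technical_delay(afp: float, technical_debt_hours: float) -> float:
    """Technical Delay: the project's Automated Function Point count
    normalized by its Technical Debt, expressed in Function Points
    per remediation hour (rather than LOC per hour)."""
    return afp / technical_debt_hours

# Illustrative figures: a project measured at 120 AFP whose static
# analysis reports 40 remediation hours of Technical Debt.
print(technical_delay(120, 40))  # 3.0 FP per hour
```

Because both inputs can be recomputed automatically on every build, the ratio lends itself to the Continuous Integration monitoring scenario described above.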
This presentation, valid for one credit in the IFPUG Certification Extension Program, discusses the issue of properly defining boundaries in modern multi-layered applications, starting from the CPM statements and analyzing them from a different viewpoint, namely that of Asset Management.
- A. Fantechi, Metrics in software verification and validation: some research challenges (Keynote speech)
The complex software applications that nowadays pervade safety-critical domains, such as transportation, raise increasing concerns about the ability of software development, verification, and validation processes to avoid the presence of software faults in such applications. How to measure this ability is a matter of debate, especially since theory and practice both tell us that perfect complex software does not exist. An account of currently applied techniques and of open research challenges will be given.
Agile Software Development suffers from the impossibility of predicting the outcome. Because no fixed plan is followed, the functional size of the finished product is unknown at the beginning. However, agile teams need to know their velocity. Velocity is normally measured in story points; however, story points are an effort metric, not a size metric, while velocity refers to size, not effort. This talk explains how the ISBSG database can be used to determine velocity in terms of size, based on the story points assigned to story cards. No additional metrics are needed except some automated COSMIC counting. The approach uses the powerful concept of transfer functions, as well as the Buglione-Trudel matrix for determining story card priorities.
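A minimal sketch of the calibration step, assuming the simplest possible transfer function: a linear mapping from story points to COSMIC size fitted on historical cards (the names and data below are illustrative, not the talk's actual model or the ISBSG figures):

```python
def fit_transfer(points, sizes):
    """Least-squares slope through the origin: size ~= k * story_points."""
    return sum(p * s for p, s in zip(points, sizes)) / sum(p * p for p in points)

# Illustrative history: completed story cards with automated COSMIC counts.
history_points = [3, 5, 8, 13]    # story points assigned to the cards
history_sizes = [6, 10, 16, 26]   # measured COSMIC Function Points (CFP)

k = fit_transfer(history_points, history_sizes)
size_velocity = k * 20  # a sprint finishing 20 story points, in CFP
```

Once calibrated, the same factor converts a sprint's story-point throughput into a size-based velocity without re-counting every card.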
The speaker introduced a new approach for IT project performance benchmarking at IT Confidence 2014 in Tokyo. Feedback from the audience was enthusiastic and encouraging: both customers and suppliers welcome a new, simple, but effective way to compare IT programs and projects against each other and against the industry's top performers. The Triangle Benchmarking concept is based on very few measurements and reasonably light data collection compared to traditional benchmarking services. It can easily be applied in all kinds of organizations, and works especially well with agile software development projects, whereas the older, heavier benchmarking services were applicable and affordable only for very large corporations. In this presentation Pekka Forselius will show how to use Triangle Benchmarking effectively and what kinds of decisions can be made based on the results.
Officially an estimate is “the most knowledgeable statement one can make regarding cost, schedule, effort, risk and probability.” That is an excellent definition. However, research shows people are generally hardwired to mis-estimate, and those estimates are nearly always on the low, extremely optimistic side. This is a disaster, because viable estimates are core to project success as well as to ROI determination and other decision making. After decades of studying estimation, it has become apparent that most people don’t like to estimate and/or don’t know how; those who do estimate are often optimistic, full of unintentional bias, and sometimes engage in strategic mis-estimation. While many of us spend time on model errors, the biggest source of estimation error usually comes from people, either by accident or by strategy. There is little we can do except provide reality checks via things like parametric models, analogy data, and process rigor. This presentation discusses the issues of estimation bias and strategic mis-estimation, as well as how to mitigate them. Both the results of Nobel Prize-winning work and subsequent discoveries by researchers and the author will be discussed.
In physics, potential energy corresponds to the ‘capacity of doing work’. In software engineering, the potential energy of a development team corresponds to the team’s cumulative intellectual work capacity, in a given development environment, for developing a piece of software during a period of time. Hence, the efficiency (or, in commonly used terms, productivity) of software development can be denoted as the ratio of the amount of ‘output work’ produced to the team’s cumulative intellectual energy input to do this work. So, any waste in development decreases productivity. The focus of this presentation is on the input effort delivered by the team to do the work, rather than on the work output and the factors affecting productivity. We will first revisit and clarify some fundamental concepts such as Team Size and Team Power, and then investigate empirically the nature of the relationship between Average Team Power and Average Team Size, which we would normally expect to move together. The results indicate that, for a large number of projects, Average Team Power increases up to a point and stays around this figure even though the reported Average Team Size increases further. These preliminary findings suggest poor planning, and hence inefficient utilization of people, in projects that might have resulted in longer durations or higher costs.
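The ratios in the abstract can be written down directly; a minimal sketch with hypothetical names and illustrative figures (the presentation's own formalization may differ):

```python
def productivity(output_size_fp: float, effort_person_hours: float) -> float:
    """Efficiency of development: 'output work' produced per unit of the
    team's cumulative effort input."""
    return output_size_fp / effort_person_hours

def average_team_power(total_effort_hours: float, elapsed_hours: float) -> float:
    """Effort delivered per unit of elapsed time (person-hours per hour).
    With full utilization this would track the Average Team Size; the
    abstract's finding is that it plateaus while Team Size keeps growing."""
    return total_effort_hours / elapsed_hours

# Illustrative: 400 FP delivered with 1000 person-hours over 500 elapsed hours.
print(productivity(400, 1000))        # 0.4 FP per person-hour
print(average_team_power(1000, 500))  # 2.0 person-hours per hour
```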
This presentation will present and discuss the particularities of software for the Automotive sector, in order to point out what further data could be collected in a future version of the ISBSG repositories, in particular the Development and Enhancement (D&E) one, e.g. starting from the safety and security issues that are deeply covered by ISO standards such as the 26262 family.
Cost modeling and estimation have a long and interesting history in industry, and many models and methods have been applied to estimate projects; but over time, more and more industry and government professionals are asking for models built or tuned with data that is specific to their industry and their organization. Unfortunately, many organizations do not have the infrastructure, processes, or tools for collecting project data efficiently. And among those that do, some still struggle to find the best way to use their data effectively. This paper chronicles a journey that PRICE Systems has traveled with an Army customer to develop and institutionalize a process for the collection and application of historical cost data for software projects. It will discuss the obstacles encountered and the lessons learned along the way. Attendees will learn how this organization is now armed with the right tools and processes to deliver defendable, credible estimates to its program office.