
Tutorial Program


Tutorial 1
Implementing Comprehensive Application Performance Testing

Igor Entin, Accenture Canada

Monday, April 9th, 09:00 - 17:30
Duration: 6 hours

The objective of this tutorial is to guide participants through the key practices and activities involved in building a comprehensive Performance Testing program for mission-critical enterprise applications. The tutorial will provide an overview of the Applications Performance Foundation aspects, such as the Performance Engineering methodology, the performance testing delivery roadmap, roles and responsibilities, and key stakeholders. Participants will learn how to create a Performance Testing approach, build workload emulation, and define the key tests required to validate performance and availability objectives. Participants will also learn about the activities carried out during the performance testing process and its key artefacts. The tutorial will also include a discussion of performance testing for mobile applications, network and service virtualization, and test data management. Furthermore, participants will learn about leading industry tools used during performance testing. Finally, participants will learn about performance defect management and performance testing governance processes.
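To make the workload-emulation idea concrete, here is a minimal sketch of a load generator in Python, assuming a hypothetical HTTP endpoint (TARGET_URL) and invented user counts; the industry tools covered in the tutorial provide far richer scenario modelling, pacing, and reporting.

    import statistics
    import threading
    import time
    import urllib.request

    TARGET_URL = "http://localhost:8080/health"  # hypothetical endpoint
    VIRTUAL_USERS = 10                           # invented for illustration
    REQUESTS_PER_USER = 20

    latencies = []
    lock = threading.Lock()

    def virtual_user():
        # Each emulated user issues a series of requests and records latency.
        for _ in range(REQUESTS_PER_USER):
            start = time.perf_counter()
            try:
                urllib.request.urlopen(TARGET_URL, timeout=5).read()
            except OSError:
                continue  # this sketch counts only successful requests
            with lock:
                latencies.append(time.perf_counter() - start)

    threads = [threading.Thread(target=virtual_user) for _ in range(VIRTUAL_USERS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    if latencies:
        latencies.sort()
        p90 = latencies[int(0.9 * (len(latencies) - 1))]
        print(f"requests: {len(latencies)}  "
              f"median: {statistics.median(latencies):.3f}s  p90: {p90:.3f}s")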


Tutorial 2
Tools for Declarative Performance Engineering

Jürgen Walter, University of Würzburg (Germany)
Simon Eismann, University of Würzburg (Germany)
Johannes Grohmann, University of Würzburg (Germany)
Dušan Okanović, University of Stuttgart (Germany)
Samuel Kounev, University of Würzburg (Germany)

Monday, April 9th, 14:00 - 17:00
Duration: 3 hours

Performance is of particular relevance to software system design, operation, and evolution. However, applying performance engineering approaches to a given user concern is challenging and requires expert knowledge and experience. In this tutorial, we guide participants step by step through answering performance concerns during the software life-cycle using measurement- and model-based analysis. We explain tools that represent a unified approach to automating large parts of the software performance engineering process, including (i) a performance concern language, for which we provide automated answering using (ii) measurement-based and (iii) model-based analysis. We detail how to derive performance models, covering (iv) the automated extraction of architectural performance models and (v) the modeling of parametric dependencies. We introduce tools available online for answering performance concerns through demonstrations and hands-on sessions.
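As a small taste of model-based concern answering, the sketch below predicts a response time with a textbook M/M/1 queueing formula; the concern wording, service name, and rates are invented, and the tutorial's actual tool chain (a concern language combined with extracted architectural models) goes well beyond this.

    def mm1_response_time(arrival_rate, service_rate):
        """Mean response time of an M/M/1 queue: R = 1 / (mu - lambda)."""
        if arrival_rate >= service_rate:
            raise ValueError("unstable system: arrival rate >= service rate")
        return 1.0 / (service_rate - arrival_rate)

    # Hypothetical concern: "What is the mean response time of the order
    # service at 80 requests/s if one server completes 100 requests/s?"
    print(f"predicted mean response time: "
          f"{mm1_response_time(80.0, 100.0) * 1000:.1f} ms")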


Tutorial 3
Measuring and Benchmarking Power Consumption and Energy Efficiency

Jóakim von Kistowski, University of Würzburg (Germany)
Klaus-Dieter Lange, Hewlett Packard Enterprise (Houston, USA)
Sanjay Sharma, Intel Corporation
Hansfried Block (Paderborn, Germany)

Tuesday, April 10th, 09:00 - 12:30
Duration: 3 hours

Energy efficiency is an important quality of computing systems. Consequently, researchers try to analyse, model, and predict the energy efficiency and power consumption of systems. Such research requires energy-efficiency and power measurements and measurement methodologies. In this tutorial, members of the SPECpower Committee will introduce the methodologies behind the SPEC Power measurement tools, frameworks, and benchmarks. The tutorial will discuss the PTDaemon power measurement tool and how it achieves accuracy in power measurements. It will also discuss the Chauffeur framework, how to use it for custom workloads in research, and the energy-efficiency methodology it implements. The tutorial will introduce the SPEC SERT, the workloads it contains, the energy-efficiency metrics it uses, and how these workloads and metrics can be employed in a general power analysis and modelling setting. Finally, the tutorial will give an introduction to the industry-standard power benchmark SPECpower_ssj2008 and the upcoming SPECpower2018 benchmark.
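For a rough sense of how such an energy-efficiency metric is computed, the sketch below divides measured throughput by measured power at several load levels; all numbers are invented, and SERT's actual aggregation across workloads and load levels is more elaborate.

    # (target load level, throughput in transactions/s, power in watts);
    # all values are invented for illustration.
    measurements = [
        (1.00, 5000.0, 200.0),
        (0.75, 3750.0, 160.0),
        (0.50, 2500.0, 120.0),
        (0.25, 1250.0,  90.0),
    ]

    for load, throughput, power in measurements:
        # transactions/s divided by watts yields transactions per joule
        print(f"load {load:4.0%}: {throughput / power:6.1f} transactions/joule")

    average = sum(t / p for _, t, p in measurements) / len(measurements)
    print(f"average efficiency: {average:.1f} transactions/joule")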


Tutorial 4
Setting up two virtualized environments of Android, 1) Xen VM and 2) container model, and comparing system performance

Kashif Rajput, Intel Corporation
Tushar Gohad, Intel Corporation

Tuesday, April 10th, 14:00 - 17:30
Duration: 3 hours

Given the emerging use cases for and benefits of running embedded OSes such as Android in virtualized environments, it is often necessary to assess how performance compares when virtualized Android runs in a Linux container model versus on a hypervisor such as Xen. As simple as this may sound, setting up embedded OSes to run effectively in virtualized environments is not a trivial task. Beyond the initial setup of the virtualized environments, setting up tools and establishing methods to effectively and accurately measure system performance is also challenging. This tutorial addresses both challenges. Two virtualized environments will be set up on identical embedded hardware reference platforms, and Android Marshmallow (version 6.0) will be configured to run in each: 1. under the open-source Xen hypervisor as a virtual machine (VM), and 2. as a container complying with the open-source Linux container model. Once the virtualized environments are set up, the system parameters to compare in terms of performance will be finalized, and the tool setup and configuration for collecting performance data, along with the actual performance collection, will be demonstrated.
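A minimal sketch of the kind of measurement harness this comparison calls for is shown below: the same command is timed repeatedly in each environment and the summary statistics are compared. The placeholder workload and run count are invented; the tutorial itself uses Android-specific benchmarks and tooling.

    import statistics
    import subprocess
    import time

    # Placeholder CPU-bound workload; in the tutorial's setting this would be
    # an Android benchmark run identically in the Xen VM and the container.
    BENCHMARK_CMD = ["sh", "-c", "seq 1 200000 | sort -n > /dev/null"]
    RUNS = 5  # invented; more runs give tighter confidence intervals

    def measure(cmd, runs):
        # Time each run of the benchmark command with a monotonic clock.
        samples = []
        for _ in range(runs):
            start = time.perf_counter()
            subprocess.run(cmd, check=True)
            samples.append(time.perf_counter() - start)
        return samples

    samples = measure(BENCHMARK_CMD, RUNS)
    print(f"mean: {statistics.mean(samples):.3f}s  "
          f"stdev: {statistics.stdev(samples):.3f}s  over {len(samples)} runs")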