Introduction
Motivation
Contributions
Background
Structure of this document
CAMEL specification and implementation
Requirements
- Models@runtime (R1): CAMEL must support both the type and the instance level, enabling the specification of both provider-independent and provider-specific models. The former drive the deployment reasoning phase, enabling users to define non-functional and deployment requirements in a cloud-provider-agnostic way. The latter make it possible to maintain a cloud-provider-specific model of both the application and the monitoring topology.
- Multiple aspect coverage (R2): CAMEL should cover multiple aspects in order to support all phases of the MCRM lifecycle.
- High expressiveness level (R3): A suitable expressiveness level should be employed to adequately capture the required aspects of the respective domain. This enables users to specify the needed application information and the system to maintain and derive such information at a detailed level, so as to support all application lifecycle management phases.
- Separation of concerns (R4): CAMEL should comprise loosely coupled packages, each covering one aspect of MCRM. This facilitates a faster and more focused specification of models at each phase.
- Reusability (R5): CAMEL should support reusable types for multiple aspects of cross-cloud applications. This eases the evolution of models.
- Suitable integration level (R6): All CAMEL sub-DSLs should be integrated at an appropriate level that supports the consistency of the provided information and minimises overlap across sub-DSLs.
- Textual syntax support (R7): CAMEL targets DevOps users who deal with cloud management and are accustomed to textual/code editing. Thus, CAMEL needs to offer a textual syntax for editing textual models.
- Reuse of DSLs (R8): Existing DSLs from disjoint research activities should be reused and integrated (R6), as also attested in [8], since they provide valuable experience and information on MCRM aspects. This also enables involving the communities of these DSLs in CAMEL's evolution, while reducing the learning curve for DevOps users already familiar with them.
Design and development
Aspect Identification
Aspect | Phase(s) | Rationale
---|---|---
Deployment | All | The PITM and PSTM models drive both application reasoning and deployment, while execution-related activities should be reflected in PSTM models
Requirement | Reasoning, Execution | The user requirements drive application deployment reasoning, while they are also used to restrain the way local scalability can be performed at runtime
Provider | Reasoning | Provider models enable the matchmaking and selection of suitable cloud offerings
Security | Reasoning | High- and low-level security requirements can drive the filtering of the offering space, as well as the optimisation of the application deployment according to security criteria, apart from the quality ones and cost
Metric | Reasoning, Execution | Metrics are used as optimisation criteria for deployment reasoning, while they also explicate how application monitoring can be performed during the execution phase
Scalability | Execution | Scalability rules drive the local application reconfiguration during execution
Organisation | Reasoning, Deployment | An organisation can have accounts on certain providers, which reduces the offering space only to them. The credentials for these providers enable the platform to act on the user's behalf when deploying application components to suitable VMs
Location | Reasoning | Location requirements can be used to filter the offering space during deployment reasoning
Execution | Reasoning, Execution | Previous execution history knowledge can be used to improve application deployment
Unit | All | Auxiliary aspect enabling the association of units of measurement to metrics and thus, indirectly, to the conditions (i.e., SLOs) posed on them
Type | All | Auxiliary aspect enabling the provision of types for language elements like metrics, as well as the definition of the different kinds of values that can be assigned to element properties
Language Selection
Integration
Implementation
Requirements fulfillment
The CAMEL language
CAMEL overview
DSL | Core concepts covered | Role |
---|---|---|
Core (Top-Level) | Top model, Container of other Models, Applications | DevOps, System |
Deployment | Application topology (Internal Components, VMs, Hostings, Communications) | DevOps, System |
Requirement | Hardware, Security, Location, OS, Provider, QoS and Optimisation Requirements | DevOps |
Provider | Provider offerings (in form of a feature-attribute model) | Admin |
Security | Security controls, Attributes and Metrics | DevOps |
Metric | Metrics, Sensors, Attributes, Schedules, (measurement) Windows, Conditions | DevOps, System |
Scalability | Scalability Rules, Event (Patterns), Horizontal and Vertical Scaling Actions | DevOps |
Location | Physical and Cloud-specific Locations | DevOps |
Organisation | Organisations, Users, Roles, Policies, Cloud/platform credentials | Admin |
Execution | Execution contexts, measurements, SLO assessments, adaptation history | System |
Unit | Units of measurement | DevOps |
Type | Value types and Values | DevOps |
CAMEL in the PaaSage workflow
Modelling phase
Deployment phase
Execution phase
Reconfiguration and adaptations
CAMEL metamodel
Deployment Metamodel
Requirement metamodel
Metric metamodel
Scalability metamodel
Other metamodels
CAMEL application: the data farming use case
Scalarm overview
Scalarm architecture
As-is and to-be situation
The Scalarm CAMEL model
The Scalarm deployment model
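The topology described in this subsection could be rendered roughly as follows. This is an illustrative sketch only: the keyword spelling and all identifiers other than ExperimentManager, SimulationManager, CoreIntensiveUbuntuGermany and CPUIntensiveUbuntuGermany approximate CAMEL's textual syntax rather than quote Listing 1.

```
deployment model ScalarmDeploymentModel {
  // Hypothetical rendering; keywords approximate CAMEL's Xtext-based syntax.
  internal component ExperimentManager {
    provided communication EMPort443   { port: 443 }
    required communication EMPort20001 { port: 20001 }
    required communication EMPort11300 { port: 11300 }
    required host CoreIntensiveHostReq        // needs a core-intensive VM
  }
  internal component SimulationManager {
    required communication SMPort11300 { port: 11300 }
    required communication SMPort20001 { port: 20001 }
    required communication SMPort443   { port: 443 }
    required host CPUIntensiveHostReq         // needs a CPU-intensive VM
  }
  vm CoreIntensiveUbuntuGermany { provided host CoreIntensiveHost }
  vm CPUIntensiveUbuntuGermany  { provided host CPUIntensiveHost }
}
```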
ExperimentManager has one provided communication port (443) and two required communication ports (20001 and 11300). It also requires hosting on a core-intensive VM (i.e., a hosting port). SimulationManager has three required communication ports (11300, 20001 and 443) and requires hosting on a CPU-intensive VM (i.e., a hosting port). The two internal components define required hosting ports that need different VM nodes. In particular, the VM nodes must be associated with a 64-bit Ubuntu OS and be located in Germany, i.e., the nearest place to Poland where major cloud providers have data centres (see the requirement model in Listing 2).
The Scalarm requirement model
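The VM requirements detailed in this subsection could be sketched as below. The sketch is illustrative and assumes approximate keywords; only the identifiers CoreIntensiveUbuntuGermany and CPUIntensiveUbuntuGermany come from the paper's Listing 2.

```
requirement model ScalarmRequirementModel {
  // Hypothetical rendering of the hardware, OS and location requirements.
  quantitative hardware CoreIntensiveUbuntuGermany {
    core: 8 .. 32
    ram:  4096 .. 8192     // MB
  }
  quantitative hardware CPUIntensiveUbuntuGermany {
    ram: 8192 .. 16384     // MB
  }
  os Ubuntu64 { os: 'Ubuntu' 64os }
  location requirement GermanyReq {
    locations [DE]         // Germany: nearest major data centres to Poland
  }
}
```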
The core-intensive VM node, named CoreIntensiveUbuntuGermany, is associated with a quantitative requirement to incorporate 8 to 32 cores and to have a memory size from 4096 to 8192 MB, while the CPU-intensive VM node, named CPUIntensiveUbuntuGermany, must support a memory size between 8192 and 16384 MB. These requirements are actually specified (along with others) in the requirement model presented in Listing 2.
The Scalarm scalability model
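The rule described in this subsection could be sketched as below. This is an illustrative approximation of CAMEL's textual syntax, not a verbatim excerpt of Listing 3; only the element names (CPUScalabilityRule, CPUAvgMetricNFEAny, etc.) come from the paper.

```
scalability model ScalarmScalabilityModel {
  // Hypothetical rendering of the CPU-driven scale-out rule.
  non-functional event CPUAvgMetricNFEAny {
    metric condition ScalarmMetricModel.CPUMetricCondition
    violation                                   // fires when the condition is violated
  }
  horizontal scaling action HorizontalScalingSimulationManager {
    scale out
    vm ScalarmDeploymentModel.CPUIntensiveUbuntuGermany
    internal component ScalarmDeploymentModel.SimulationManager
    count: 1                                    // one additional instance
  }
  scalability rule CPUScalabilityRule {
    event CPUAvgMetricNFEAny
    actions [HorizontalScalingSimulationManager]
    scale requirements
      [ScalarmRequirementModel.HorizontalScaleSimulationManager]  // at most 5 instances
  }
}
```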
CPUScalabilityRule maps the CPU-specific event CPUAvgMetricNFEAny to the HorizontalScalingSimulationManager scaling action. It is also associated with the HorizontalScaleSimulationManager scale requirement (see Listing 2), denoting that the number of instances of SimulationManager should be at most 5, thus representing the actual upper scalability limit that must hold for the scalability rule. The HorizontalScalingSimulationManager action indicates that the SimulationManager component, as hosted by the CPUIntensiveUbuntuGermany VM node, should scale out with one additional instance. On the other hand, CPUAvgMetricNFEAny is a single non-functional event directly mapping to the violation of the CPUMetricCondition condition, as indicated in Listing 4.
The Scalarm metric model
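The metric hierarchy described in this subsection could be sketched as below. This is an illustrative approximation of CAMEL's textual syntax rather than a reproduction of Listing 4; only the element names (CPUAverage, CPUSensor, etc.) come from the paper.

```
metric model ScalarmMetricModel {
  // Hypothetical rendering of the CPU metric hierarchy.
  schedule Schedule1Min { interval: 1 min }
  schedule Schedule1Sec { interval: 1 sec }
  window   Win1Min      { sliding, size: 1 min }

  sensor CPUSensor { push }               // provided by the Executionware

  raw metric CPUMetric { }
  composite metric CPUAverage {
    formula Formula_Average: MEAN(CPUMetric)
  }

  raw metric context CPURawMetricContext {
    metric: CPUMetric  sensor: CPUSensor  schedule: Schedule1Sec
  }
  composite metric context CPUAvgMetricContextAny {
    metric: CPUAverage
    component: ScalarmDeploymentModel.SimulationManager
    schedule: Schedule1Min  window: Win1Min
    composing contexts [CPURawMetricContext]
  }

  metric condition CPUMetricCondition {
    context: CPUAvgMetricContextAny
    threshold: 80.0  operator: <          // CPUAverage must stay below 80%
  }
}
```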
The CPUAverage composite metric and its condition can be specified in CAMEL (see Listing 4). This metric condition participates in the CPUMetricSLO, as indicated in Listing 2, and in the CPUAvgMetricNFEAny non-functional event in Listing 3. The CPUAverage composite metric is calculated by the Formula_Average formula, which applies the MEAN function over CPUMetric, a raw metric computed by the push-based CPUSensor sensor, which is part of the PaaSage platform and especially the Executionware module. CPUMetricCondition is a composite metric condition imposing that the metric referred to as CPUAverage should be less than 80%. This condition refers to the CPUAvgMetricContextAny composite metric context. This context explicates the CPUAverage metric's schedule and window, as well as the fact that it is applied over the SimulationManager component. It also refers to the composing metric's raw metric context, named CPURawMetricContext. CPUAverage's Schedule1Min schedule specifies that the metric's measurements will be computed repeatedly every 1 min, according to the metric's Win1Min sliding window. CPURawMetricContext is the raw metric context for CPUMetric. It explicates that CPUSensor will be used to measure this metric and that it is associated with the Schedule1Sec schedule, which means that CPUMetric's measurements will be calculated every 1 s.
Evaluation
Population
Methodology
- Perceived Ease of Use (PEU): the degree to which a user believes that CAMEL reduces the effort in modelling tasks.
- Perceived Usefulness (PU): the degree to which a user believes that using CAMEL enhances the modelling tasks’ performance.
Name | Sector | Use case provider | Organisation type | Relevant application
---|---|---|---|---
Data farming | eScience | AGH University of Science and Technology | research | Scalarm
Automotive simulation | eScience | High Performance Computing Centre, Automotive Simulation Centre Stuttgart | research | HPC systems, e.g. Computer Aided Engineering
Flight scheduling | industrial | Lufthansa Systems | consulting, IT services | NetLine/Sched
ERP | industrial | BeWan | IT services | Multi Tenant
Financial service | industrial | University of Cyprus, IBSCY | research, IT services | Quorum
Human milk bank | public | EVRY Solutions | IT services | Human Milk Bank Project
Reliability analysis
Technology acceptance
Group-based analysis
PU means:

 | Cloud≤3 | Cloud>3 | Mean
---|---|---|---
MDE≤3 | 3.99 | 3.76 | 3.88
MDE>3 | 4.23 | 3.82 | 4.03
Mean | 4.11 | 3.79 |

PEU means:

 | Cloud≤3 | Cloud>3 | Mean
---|---|---|---
MDE≤3 | 3.70 | 3.23 | 3.46
MDE>3 | 4.10 | 3.43 | 3.76
Mean | 3.90 | 3.33 |
Threats to validity
Related work
Comparison criteria
Analysis
Language | Abstract Syntax | Concrete Syntax | Aspect Coverage | Integration Level | Delivery Model Support | Models@run-time Support
---|---|---|---|---|---|---
Reservoir OVF Extension (2009) | XML Schema | XML | low | N/A | IaaS | N/A |
Optimis OVF Extension (2010) | XML Schema | XML | medium | N/A | IaaS | N/A |
Vamp (2011) | XML Schema | XML | low | N/A | IaaS | N/A |
4CaaSt Blueprint Template (2011) | XML Schema | XML | low | N/A | IaaS, PaaS | N/A |
TOSCA (2013) | XML Schema | XML, txt | medium | medium | IaaS, PaaS | N/A |
Provider DSL [40] (2014) | MOF | XML, gra | low | medium | IaaS | N/A |
GENTL (2014) | MOF | gra, XML | low | N/A | IaaS | N/A |
ModaCloudML (2014) | MOF | XML, gra, txt | medium | low | IaaS, PaaS | deployment |
CAML (2014) | MOF | gra | medium | medium | IaaS | N/A |
CAMEL (2014) | MOF | XML, gra, txt | high | high | IaaS, PaaS, SaaS | deployment, metric |
ARCADIA Context Model (2015) | XML Schema | XML | high | medium | IaaS | deployment |
StratusML (2015) | MOF | XML, gra | medium | high | IaaS | deployment |
CloudMF (2018) | MOF | XML, gra | medium | low | IaaS, PaaS | deployment, metric |
- With the exception of the ARCADIA Context Model, the most recent languages rely on MOF for their abstract syntax. This can perhaps be explained partly by the use of these languages within model-driven management frameworks and by the various advanced tools available for MOF-based languages, which assist in their rapid development.
- Coupled with the first finding is the fact that the most recent languages do support the production of graphical/textual models according to the language's concrete syntax. This enables moving from the cumbersome XML-based form to a more human-readable one, which also makes the models more concise and easier to edit and manipulate.
- Most recent DSLs do cater for the models@runtime approach, thus providing better support for the adaptive provisioning of multi-cloud applications, with CAMEL and CloudMF being the only ones that support it for both deployment and metric models. This means that they not only support the adaptation of the application and VM instances in the deployment model based on scalability rules, but also ensure that the adaptation is reflected in the monitoring infrastructure.