OPENSTACK CLOUD COMPUTING GUIDE
Summary by Damian Ndunda © 2020
TABLE OF CONTENTS
OPENSTACK CLOUD COMPUTING GUIDE. 1
COMPONENTS OF OPENSTACK DEPLOYMENT. 6
Figure: Components of a two-system OpenStack deployment with Cinder block storage. 7
SAMPLE CONFIGURATION: WEB SERVICES AND ECOMMERCE. 9
DEPLOYMENT SCENARIO RECOMMENDATIONS FOR POC AND PILOT PHASES. 11
CHAPTER: OPENSTACK GUI AND CLI 12
Login then create a keypair (dashboard) 13
How to create a keypair (nova CLI) 14
How to import a keypair (dashboard) 15
How to create instance (dashboard) 17
How to download credentials for nova/EC2 API 20
How to download credentials for nova CLI 20
How to create instance (nova CLI) 22
How to list available instances (nova CLI) 26
How to list available instances (nova CLI) 27
How to get instance details (dashboard) 28
How to get instance details (nova CLI) 29
How to get console log (dashboard) 31
How to get console log (nova CLI) 32
How to interact with console (dashboard) 33
How to interact with console (nova CLI) 34
How to delete instance (dashboard) 43
How to delete instance (nova CLI) 44
How to hard reboot instance (dashboard) 45
How to hard reboot instance (nova CLI) 45
How to resize instance (dashboard) 47
How to resize instance (nova CLI) 48
Create instance image snapshot (dashboard) 51
Create instance image snapshot (nova CLI) 53
How to upload image file (dashboard) 55
How to upload image file (glance CLI) 57
List all available images using glance CLI with: 58
How to delete image (glance CLI) 59
Share images between tenants (glance CLI) 60
List all shared images with: 60
Share images between tenants (dashboard) 60
CHAPTER: OPENSTACK COMPUTE SERVICE (NOVA) 62
Root (and ephemeral) disks. 62
3. THEORY OF OPERATION & DEPLOYMENT CHOICES. 63
Root Disk Choices When Booting Nova Instances. 63
Instance Snapshots vs. Cinder Snapshots. 64
Instance Storage Options at the Hypervisor. 64
4 TROUBLESHOOTING COMMON PROBLEMS. 64
CHAPTER: CONTAINERS AND OPENSTACK. 71
Figure: Containers vs. VMs. 72
Administrators And Developers Are Interested In Containers For Two Major Reasons. 73
THIRD-PARTY ECOSYSTEM TOOLS. 74
2 VALUE OF CONTAINERS WITHIN AN OPENSTACK INFRASTRUCTURE. 75
Organizations Could Use Containers For The Following Reasons: 75
WHAT ARE THE USE CASES IN OPENSTACK?. 76
3 CONTAINERS WITH OPENSTACK TODAY. 77
Building a Container Hosting Environment with OpenStack Compute. 77
4 CONTAINERS WITH OPENSTACK TOMORROW. 77
The top three areas of focus are: 77
OpenStack Container-as-a-Service Support Architecture. 79
Magnum Security and Multi-tenancy. 81
FOREWORD
OpenStack is an open source software platform for cloud computing, mostly deployed as infrastructure-as-a-service (IaaS), whereby virtual servers and other resources are made available to customers. It is managed through a web-based dashboard, command-line tools, or RESTful web services.
BLOCK STORAGE CLOUD
Joseph E. K., Fischer M. (2017)
OpenStack provides two popular mechanisms for storage: object and block storage. Block storage is traditionally what you’d mount as a file system on your server. Object storage instead hosts individual files that are then referenced from within your application.
USES
Cinder provides an abstraction layer through the volume manager that hooks into over 70 different proprietary and open source storage solutions. Additionally, it can be an interface to multiple back ends at once, enabling you to not only diversify your back ends across vendors but also change them out and perform a planned migration as your organization sees fit.
Some Uses of Block Cloud Storage are:
· Cloud User services
· Data Processing
· Keeping Backups
COMPONENTS OF OPENSTACK DEPLOYMENT
· Compute (Nova)
· Identity (Keystone)
· Networking (Neutron)
· Image service (Glance)
· Dashboard (Horizon)
· Block Storage (Cinder)
ARCHITECTURE OVERVIEW
When a user request comes in, either from the OpenStack dashboard (Horizon), the OpenStack Client (OSC), or through a Software Development Kit (SDK), it interfaces with the API for Cinder. The API first records the request in a database, setting its status to "creating" and reserving quota. The API also places a message on the messaging queue, which passes the request on to the Cinder scheduler; the scheduler decides where the change will be made.
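The request flow above can be sketched as a toy model. The backend names, capacities, and placement policy below are invented for illustration and are not Cinder's actual scheduler logic:

```python
from collections import deque

# Hypothetical, simplified model of the Cinder create-volume flow:
# API -> database (status "creating") -> message queue -> scheduler.
db = {}                                  # stands in for the Cinder database
queue = deque()                          # stands in for the messaging queue
backends = {"lvm": 500, "ceph": 2000}    # backend -> free GB (made-up numbers)

def api_create_volume(volume_id, size_gb):
    """API step: record the request and set its status to 'creating'."""
    db[volume_id] = {"size": size_gb, "status": "creating"}  # quota reserved here
    queue.append(volume_id)                                  # hand off via the queue

def scheduler_run():
    """Scheduler step: decide where each pending volume will be placed."""
    while queue:
        volume_id = queue.popleft()
        size = db[volume_id]["size"]
        # Trivial placement policy: pick the backend with the most free space.
        backend = max(backends, key=backends.get)
        backends[backend] -= size
        db[volume_id].update(status="available", backend=backend)

api_create_volume("vol-1", 100)
scheduler_run()
print(db["vol-1"])   # placed on the roomiest backend, status "available"
```

The point of the sketch is the separation of concerns: the API only records and enqueues; placement decisions belong entirely to the scheduler.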
OPENSTACK FLAVORS
FLAVOR NAME | VCPU | MEMORY | EPHEMERAL DISK
m1.tiny | 1 | 512 MB | 1 GB
m1.small | 1 | 2048 MB | 20 GB
m1.medium | 2 | 4096 MB | 40 GB
m1.large | 4 | 8192 MB | 80 GB
m1.xlarge | 8 | 16384 MB | 160 GB
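The flavor table can also be expressed as data. The helper below picks the smallest default flavor that satisfies a request; it is illustrative only, as real placement in Nova is far more involved:

```python
# The five default flavors from the table above, as a dictionary.
FLAVORS = {
    "m1.tiny":   {"vcpus": 1, "ram_mb": 512,   "disk_gb": 1},
    "m1.small":  {"vcpus": 1, "ram_mb": 2048,  "disk_gb": 20},
    "m1.medium": {"vcpus": 2, "ram_mb": 4096,  "disk_gb": 40},
    "m1.large":  {"vcpus": 4, "ram_mb": 8192,  "disk_gb": 80},
    "m1.xlarge": {"vcpus": 8, "ram_mb": 16384, "disk_gb": 160},
}

def smallest_fit(vcpus, ram_mb, disk_gb):
    """Return the name of the smallest flavor meeting all three minimums."""
    candidates = [
        name for name, f in FLAVORS.items()
        if f["vcpus"] >= vcpus and f["ram_mb"] >= ram_mb and f["disk_gb"] >= disk_gb
    ]
    # In this table RAM is a reasonable proxy for overall flavor size.
    return min(candidates, key=lambda n: FLAVORS[n]["ram_mb"]) if candidates else None

print(smallest_fit(2, 3000, 30))   # m1.medium
```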
DEPLOYMENT SCENARIO RECOMMENDATIONS FOR POC AND PILOT PHASES
DEPLOYMENT SIZE | BENEFITS | RECOMMENDED USE | RECOMMENDED OPENSTACK
1 node, all-in-one | Easy to start | Gaining familiarity with OpenStack | Nova, Neutron, Keystone
4 nodes, multi-node | Small cloud | PoC or pilot cloud | Nova, Neutron, Keystone
12 nodes, multi-node | High availability | PoC or small production | Nova, Neutron, Keystone
CHAPTER: OPENSTACK COMPUTE SERVICE (NOVA)
1. OVERVIEW
The OpenStack Compute Service (Nova) is a cloud computing fabric controller, which is the main part of an IaaS system. Nova is typically deployed in conjunction with other OpenStack services (e.g. Block Storage, Object Storage, Image) as part of a larger, more comprehensive cloud infrastructure.
2. KEY CONCEPTS
Instance
An instance is the fundamental resource unit allocated by the OpenStack Compute service. It represents an allocation of compute capability (most commonly but not exclusively a virtual machine), along with optional ephemeral storage utilized in support of the provisioned compute capacity.
Unless a root disk is sourced from Cinder (see "Root Disk Choices When Booting Nova Instances" below), the disks associated with VMs are "ephemeral," meaning that (from the user's point of view) they effectively disappear when a virtual machine is terminated. Instances are identified uniquely by a UUID assigned by the Nova service at the time of instance creation. An instance may also optionally be referred to by a human-readable name.
Flavor
Virtual hardware templates are called "flavors" in OpenStack, defining sizes for RAM, disk, number of cores, and so on. The default installation provides five flavors, which are configurable by admin users. Flavors define a number of parameters, giving the user a choice of what type of virtual machine to run.
Root (and ephemeral) disks
Each instance needs at least one root disk (containing the bootloader and core operating system files), and may have an optional ephemeral disk (per the definition of the flavor selected at instance creation time). The content for the root disk either comes from an image stored within the Glance repository (and copied to storage attached to the destination hypervisor) or from a persistent block storage volume (via Cinder).
3. THEORY OF OPERATION & DEPLOYMENT CHOICES
Root Disk Choices When Booting Nova Instances
BOOT OPTION | DESCRIPTION
Boot from image | This option allows a user to specify an image from the Glance repository to copy into an ephemeral disk.
Boot from snapshot | This option allows a user to specify an instance snapshot to use as the root disk; the snapshot is copied into an ephemeral disk.
Boot from volume | This option allows a user to specify a Cinder volume (by name or UUID) that should be directly attached to the instance as the root disk; no copy is made into an ephemeral disk, and any content stored in the volume is persistent.
Boot from image (create new volume) | This option allows a user to specify an image from the Glance repository to be copied into a persistent Cinder volume, which is subsequently attached as the root disk for the instance.
Boot from volume snapshot (create new volume) | This option allows a user to specify a Cinder volume snapshot (by name or UUID) that should be used as the root disk; the snapshot is copied into a new, persistent Cinder volume, which is subsequently attached as the root disk for the instance.
One can select the "delete on terminate" option in combination with any of the aforementioned options to create an ephemeral volume while still leveraging the enhanced instance creation capabilities. This can provide a significantly faster provisioning and boot sequence than the normal way that ephemeral disks are provisioned, where a copy of the disk image is made from Glance to local storage on the hypervisor node where the instance resides.
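The five boot options above can be restated compactly: what varies is whether the root disk survives instance termination and whether content is copied at boot time. The table below is purely illustrative:

```python
# Compact restatement of the boot options: is the root disk persistent,
# and is content copied at boot? (Illustrative model, not Nova code.)
BOOT_OPTIONS = {
    "image":                               {"persistent": False, "copied": True},
    "snapshot":                            {"persistent": False, "copied": True},
    "volume":                              {"persistent": True,  "copied": False},
    "image (create new volume)":           {"persistent": True,  "copied": True},
    "volume snapshot (create new volume)": {"persistent": True,  "copied": True},
}

def root_disk_survives(option, delete_on_terminate=False):
    """Does the root disk outlive the instance?"""
    props = BOOT_OPTIONS[option]
    # "delete on terminate" effectively turns a persistent volume
    # into an ephemeral one.
    return props["persistent"] and not delete_on_terminate

print(root_disk_survives("volume"))                            # True
print(root_disk_survives("volume", delete_on_terminate=True))  # False
```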
Instance Snapshots vs. Cinder Snapshots
Instance snapshots allow you to take a point-in-time snapshot of the content of an instance's disk. Instance snapshots can subsequently be used to create an image that can be stored in Glance and referenced in later boot requests. While Cinder snapshots also allow you to take a point-in-time snapshot of the content of a disk, they are more flexible than instance snapshots.
Instance Storage Options at the Hypervisor
The Nova configuration option instances_path specifies where instances are stored on the hypervisor's disk. While this may normally point to locally attached storage (which could be desirable from a performance perspective), it prevents live migration of instances between hypervisors. By specifying a directory that is a mounted NFS export (from a NetApp FlexVol volume), it is possible to support live migration of instances because their root disks are on shared storage which can be accessed from multiple hypervisor nodes concurrently.
There are several other requirements that must be met in order to fully support live migration scenarios. More information can be found at http://docs.openstack.org/trunk/openstack-ops/content/compute_nodes.html
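A minimal nova.conf sketch of the shared-storage setup described above; the mount point shown is illustrative, and assumes the NFS export has already been mounted on every hypervisor node:

```ini
# /etc/nova/nova.conf -- sketch only; the mount point below is an example
[DEFAULT]
# Point instance storage at a directory backed by a shared NFS mount so
# that multiple hypervisor nodes can reach the same root disks, enabling
# live migration.
instances_path = /mnt/nova-instances
```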
CHAPTER: CONTAINERS AND OPENSTACK
Rather than create new vertical silos to manage containers in their data centers, IT organizations find value in OpenStack providing a cross-platform API to manage virtual machines, containers and bare metal.
Trevor Pott, writing for The Register, provides perspective.
“OpenStack is not a cloud. It is not a project or a product. It is not a virtualization system or an API or a user interface or a set of standards. OpenStack is all of these things and more: it is a framework for doing IT infrastructure – all IT infrastructure – in as interchangeable and interoperable a way as we are ever likely to know how.”
http://www.theregister.co.uk/2015/07/09/openstack_overview/
1 WHAT ARE CONTAINERS?
Containers are isolated, portable environments where you can run applications along with all the libraries and dependencies they need. All containers on the same host share the same OS kernel, and keep applications, runtimes, and various other services separated from each other using kernel features known as namespaces and cgroups. Docker added the concept of a container image, which allows containers to be used on any host with a modern Linux kernel.
The container image allows for much more rapid deployment of applications than if they were packaged in a virtual machine image.
Containers are a way of bundling and running applications in a more portable way. They can be used to break down and isolate parts of applications, called microservices, which allow for more granular scaling, simplified management, superior security configurations, and solving a class of problems previously addressed with configuration management (CM) tools.
A developer can put an application or service inside a container, along with the runtime requisites and services the application requires, without having to include a full operating system. This allows container images to be small, usually just a few megabytes in size compared to virtual machine images which can be orders of magnitude larger.
The container file system is arranged in layers, much as a series of commits is arranged in a git repository. This allows a container image to indicate which parent image it is derived from, letting it be very small by comparison: all it needs are the bits that differ from its parent.
Container images allow tools like Docker to simplify container creation and deployment, using a single command to launch the app with all its requisites.
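The layering described above can be modeled as a union of per-layer file maps, with child layers overriding the parent. The image names and file contents below are invented for illustration:

```python
# Toy model of layered container images: each layer stores only what
# differs from its parent; the effective filesystem is the union of the
# chain, child entries overriding the parent's. (Illustrative only.)
IMAGES = {
    "base-linux":     {"parent": None,             "files": {"/bin/sh": "shell", "/etc/os-release": "v1"}},
    "python-runtime": {"parent": "base-linux",     "files": {"/usr/bin/python3": "interpreter"}},
    "my-app":         {"parent": "python-runtime", "files": {"/app/main.py": "app code"}},
}

def effective_fs(image):
    """Union the layer chain, child entries overriding the parent's."""
    layer = IMAGES[image]
    parent = effective_fs(layer["parent"]) if layer["parent"] else {}
    return {**parent, **layer["files"]}

fs = effective_fs("my-app")
print(sorted(fs))   # the app sees base + runtime + its own files
```

Note that the "my-app" layer itself holds a single file; everything else comes from its ancestry, which is why derived images stay small.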
Administrators And Developers Are Interested In Containers For Two Major Reasons.
1. Application containers, compared with virtual machines, are very lightweight – minimizing compute, storage, and bandwidth requirements. Since multiple containers leverage the same kernel (Linux today, with Windows soon), containers can be smaller and may require less processing, RAM, and storage than virtual machines because they can be used without any hardware virtualization. They allow more dynamic systems than virtual machines allow, because the chunks of data that need to be moved around to use containers are so much smaller than virtual machine images.
2. The other advantage is that containers are portable, effectively running on any hardware that runs the relevant operating system. That means developers can run a container on a workstation, create an app in that container, save it in a container image, and then deploy the app on any virtual or physical server running the same operating system - and expect the application to work.
Containers offer deployment speed advantages over virtual machines because they're smaller: megabytes instead of gigabytes. Typical application containers can be started in seconds, whereas virtual machines often take minutes. Containers also allow direct access to device drivers through the kernel, which makes I/O operations faster than with a hypervisor approach where those operations must be virtualized.
Containers create a proliferation of compute units, and without robust monitoring, management, and orchestration, IT administrators will be coping with “container sprawl”, where containers are left running, mislocated or forgotten.
THIRD-PARTY ECOSYSTEM TOOLS
The three most common third-party orchestration tools are Docker Swarm, Kubernetes, and Mesos.
DOCKER
Docker popularized the idea of the container image. They provide a straightforward way for developers to package an application and its dependencies in a container image that can run on any modern Linux, and soon Windows, server. Docker also has additional tools for container deployments, including Docker Machine, Docker Compose, and Docker Swarm. At the highest level, Machine makes it easy to spin up Docker hosts, Compose makes it easier to deploy complex distributed apps on Docker, and Swarm enables native clustering for Docker.
https://opensource.com/resources/whatdocker
KUBERNETES
Kubernetes (originally by Google, now contributed to the Cloud Native Computing Foundation) is an open source orchestration system for Docker containers. It handles scheduling onto nodes in a compute cluster and actively manages workloads to ensure that their state matches the user's declared intentions.
APACHE MESOS
Apache Mesos can be used to deploy and manage application containers in large-scale clustered environments. It allows developers to conceptualize their applications as jobs and tasks. Mesos, in combination with a job system like Marathon, takes care of scheduling and running jobs and tasks.
http://opensource.com/business/14/9/opensourcedatacentercomputingapachemesos
OpenStack refers to these three options as Container Orchestration Engines (COE). All three of these COE systems are supported in OpenStack Magnum.
Docker is donating its container format and runtime, runC, to the Open Container Initiative (OCI): https://www.opencontainers.org/
2 VALUE OF CONTAINERS WITHIN AN OPENSTACK INFRASTRUCTURE
OpenStack includes multi-tenant security and isolation, management and monitoring, storage and networking, and more. Operators must be aware that containers don't have the same security isolation capabilities as virtual machines, which means that containers cannot be viewed as a direct substitute for virtual machines. As an example, service providers often run containers in VMs in order to provide robust protection of one tenant's processes from poorly behaved or malicious code in other containers. Another approach to this risk is to use a bay in OpenStack Magnum to arrange a group of virtual machines or bare metal (Ironic) instances that are used by only one tenant. OpenStack supports all of these configurations in the role of the overall data center manager: virtual machines deliver compute resources and containers aid application deployment and management.
https://cloud.google.com/compute/docs/containers/container_vms
Organizations Could Use Containers For The Following Reasons:
• Containers provide deterministic software packaging and fit nicely with an immutable infrastructure model.
• Containers are excellent for encapsulation of microservices.
• Containers are portable across OpenStack virtual machines as well as bare metal servers (Ironic), using a single, lightweight image.
One of the benefits of using an orchestration framework with containers is that it allows switching between OpenStack and bare metal environments at any given point in time, abstracting the application away from the infrastructure. The Kubernetes orchestration engine is integrated with OpenStack as well. In fact, with OpenStack Magnum containers-as-a-service, the default bay type is a Kubernetes bay.
http://www.openstack.org/blog/2015/07/google-bringing-container-expertise-to-openstack/
WHAT ARE THE USE CASES IN OPENSTACK?
A developer can create an application container, containing the app, runtimes, libraries, etc., and move it to any machine - physical or virtual. Since containers can be stateless, developers don't have to worry about compatibility, and containers can be used as easily provisioned, immediately disposable development environments on any kind of IT infrastructure. This speeds up ramp time for new developers as well as increasing overall development productivity.
In build/continuous integration environments, containers enable organizations to rapidly test more system permutations as well as deliver increased parallelism, increasing innovation and feature velocity.
For quality assurance, containers enable better black box testing as well as help organizations shift from governance to compliance. Because containers can be stateless, they also contribute to the shift toward immutable infrastructure. Thousands of containers can be created using a single consistent container image. Changes to the image can immediately be layered upon all the container instances. Old container images can be discarded as needed. Stateless containers also facilitate high availability. Containers can be run on different underlying hardware, so if one host goes down, administrators can route traffic to live application containers running elsewhere.
Administrators can create and destroy container resources in their data center without worrying about costs. With typical data center utilization at 30%, it is easy to bump up that number by deploying additional containers on the same hardware.
Containers also enable density improvements. Instead of running a dozen or two dozen virtual machines per server, it is possible to run hundreds of application containers per server. There are a few implications to this possibility. One is that enterprises might be able to make use of older, or lower performing, hardware - thereby reducing costs. Another implication is that an enterprise might be able to use fewer servers, or smaller cloud instances.
3 CONTAINERS WITH OPENSTACK TODAY
OpenStack supports LXC and Virtuozzo system containers. Docker application containers, along with Docker Swarm, Kubernetes, and Mesos container orchestration, are available with the Liberty release of Magnum.
http://lists.openstack.org/pipermail/openstackdev/2015March/058714.html
Building a Container Hosting Environment with OpenStack Compute
OpenStack Compute (Nova) manages the compute resources for an OpenStack cloud. Those resources may be virtual machines (VMs) from hypervisors such as KVM, Xen, VMware® vSphere® and Hyper-V®, or from container technology like LXC and OpenVZ (Virtuozzo).
http://docs.openstack.org/developer/nova/supportmatrix.html
Rackspace Private Cloud uses LXC containers in production for all infrastructure components of an OpenStack-powered cloud.
4 CONTAINERS WITH OPENSTACK TOMORROW
As new technologies like containers emerge and become relevant, the community will work on supporting them, taking a consistent and open approach.
The top three areas of focus are:
I. Provide comprehensive support for running containerized workloads on OpenStack.
II. Simplify the setup needed to run a production multi-tenant container service.
III. Offer modular choice to OpenStack cloud operators who have not yet established a definitive containers strategy
There are multiple OpenStack projects leveraging container technology to make OpenStack better: Magnum, Kolla and Murano. Basically:
• Magnum is designed to offer container-specific APIs for multi-tenant containers-as-a-service with OpenStack. The figure below shows how Magnum integrates with other OpenStack components.
• Kolla is designed to offer a dynamic OpenStack control plane where each OpenStack service runs in a Docker container.
• Murano is an application catalog solution that allows for packaged applications to be deployed on OpenStack, including single-tenant installations of Kubernetes.
REFERENCES
Cacciatore K., Czarkowski P., Dake S., Garbutt J., Hemphill B., et al. (2015). Exploring Opportunities: Containers and OpenStack.
Joseph E. K., Fischer M. (2017). Common OpenStack Deployments. Pearson Education, Inc.
Moreira B. (2014). OpenStack training, CERN.
NetApp, Inc. (September 2017). OpenStack Deployment and Operations Guide.
OpenStack is a registered trademark of the OpenStack Foundation in the United States, other countries or both.
Linux is a registered trademark of Linus Torvalds in the United States, other countries, or both.
Microsoft, Windows and Hyper-V are trademarks of Microsoft Corporation in the United States, other countries, or both.
VMware and VMware vSphere are trademarks and registered trademarks of VMware, Inc. in the United States and certain other countries.
Other product and service names might be trademarks of other companies.