Safdar Mirza @ Cloud

Virtual Machines vs Containers:

Which is better, containers or VMs, depends on what you are trying to accomplish. Virtualization enables workloads to be run in environments that are separated from their underlying hardware by a layer of abstraction. This abstraction allows servers to be broken up into virtual machines (VMs) that can run different operating systems.

Container technology offers an alternative approach to virtualization, in which a single operating system on a host can run many different applications. One way to think of containers vs. VMs is that while VMs run several different operating systems on one compute node, container technology virtualizes the operating system itself.
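
A quick way to see this distinction in practice, assuming Docker is installed on a Linux host: a container reports the host's own kernel, because only the operating system, not the hardware, is being virtualized.

$ uname -r                          # kernel version of the host itself
$ docker run --rm alpine uname -r   # the container reports that very same kernel

A VM, by contrast, would report the kernel of whatever guest operating system was installed inside it.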


Virtual Machines:
A VM is a software-based environment geared to simulate a hardware-based environment, for the sake of the applications it will host. Conventional applications are designed to be managed by an operating system and executed by a set of processor cores. Such applications can run within a VM without any re-architecture.

With VMs, a software component called a hypervisor acts as an agent between the VM environment and the underlying hardware, providing the necessary layer of abstraction. A hypervisor, such as VMware ESXi, is responsible for executing the virtual machines assigned to it and can execute several simultaneously. Other popular hypervisors include KVM, Citrix Xen, and Microsoft Hyper-V. In the most recent VM environments, modern processors can interact with hypervisors directly, providing channels for pipelining instructions from the VM in a manner that is completely transparent to the applications running inside the VM. These environments also include sophisticated network virtualization models such as VMware NSX.
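
As a small sketch of that hardware support, on a Linux host you can check whether the processor exposes the virtualization extensions (Intel VT-x or AMD-V) that hypervisors such as KVM rely on; vmx and svm are the standard /proc/cpuinfo flag names:

$ grep -c -E 'vmx|svm' /proc/cpuinfo   # a non-zero count means VT-x (vmx) or AMD-V (svm) is present
$ lsmod | grep kvm                     # shows whether the KVM hypervisor modules are loaded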

The scalability of a VM server workload is achieved in much the same way it is achieved on bare metal: with a Web server or a database server, the programs responsible for delivering service are distributed among multiple hosts, and load balancers are inserted in front of those hosts to distribute traffic among them evenly. Automated procedures within VM environments make such load balancing sensitive to changes in traffic patterns across data centers.

Containers: 
The concept of containerization was originally developed not as an alternative to VM environments, but as a way to segregate namespaces in a Linux operating system for security purposes. The first Linux environments resembling modern container systems produced partitions (sometimes called “jails”) within which applications of questionable security or authenticity could be executed without risk to the kernel. The kernel was still responsible for execution, though a layer of abstraction was inserted between the kernel and the workload.

The idea of making the contents of these partitions portable came later, once the environment within them had been minimized for efficiency’s sake. The first true container system, LXC, was developed as a part of Linux, and was not yet a complete environment. Docker originated as an experiment for easily deploying LXC containers onto a PaaS platform operated by Docker Inc.’s original parent company, dotCloud.

Workloads within containers such as Docker are virtualized. However, within Docker’s native environment, there is no hypervisor. Instead, the Linux kernel (or, more recently, the Windows Server kernel) is supplemented by a daemon that maintains the compartmentalization between containers while connecting their workloads to the kernel. Modern containers often do include minimal operating systems such as CoreOS and VMware’s Photon OS, whose only purpose is to maintain basic, local services for the programs they host, not to project the image of a complete processor space.
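
A minimal sketch of that compartmentalization on a Linux host with Docker installed (the container name "demo" is just an illustration): each container gets its own kernel namespaces, which you can compare against the host's.

$ docker run -d --name demo alpine sleep 300
$ docker inspect --format '{{.State.Pid}}' demo   # the container's process ID as seen by the host
$ sudo readlink /proc/<pid>/ns/pid                # PID namespace the daemon created for the container
$ readlink /proc/$$/ns/pid                        # the host shell lives in a different namespace
$ docker rm -f demo                               # clean up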

 

 

Webinar: AWS Services for Data Migration

A key part of moving applications to the cloud is migrating data. AWS now offers several simple services for data migration at a petabyte scale. Whether the destination is Amazon S3, Amazon Glacier, Amazon EFS or Amazon EBS, you can move large volumes of data from your facilities to the cloud with AWS. This webinar will explain how you can do this with online or offline transfer services including AWS Storage Gateway, AWS Snowball and Snowball Edge, or EFS File Sync.

 

Watch this webinar to:

  • Learn about AWS services to migrate data sets into AWS file, block, and object storage services
  • Determine which AWS service option fits your requirements for data migration
  • See how you can get started. For instance, the webinar demonstrates how you can move block volumes of application data to Amazon EBS by using the AWS Storage Gateway with EBS snapshots (see the CLI sketch after this list).
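
As a hedged sketch of that last step: once a Volume Gateway volume has produced an EBS snapshot, the standard AWS CLI can turn it into a volume for an EC2 instance. The snapshot, volume, and instance IDs below are placeholders:

$ aws ec2 describe-snapshots --owner-ids self     # find the snapshot created from the gateway volume
$ aws ec2 create-volume --snapshot-id snap-0123456789abcdef0 --availability-zone us-east-1a
$ aws ec2 attach-volume --volume-id vol-0123456789abcdef0 --instance-id i-0123456789abcdef0 --device /dev/sdf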

Table of Contents

(0:00) Introduction

(1:23) AWS Direct Connect

(2:31) Considerations for moving to the cloud

(5:12) 5 key questions for migrating data to the cloud

(8:47) AWS Snowball (Snow*) family

(11:48) Amazon EFS File Sync

(13:19) Amazon S3 Transfer Acceleration

(16:04) AWS Storage Gateway family introduction

(17:24) File Gateway for moving file data to S3 and for hybrid cloud workloads

(22:38) Tape Gateway for migrating tape backup processes to AWS

(27:50) Volume Gateway for moving block data & creating hybrid cloud storage

(31:55) Demonstration: Migrating block data volumes to Amazon EBS for Amazon EC2 applications using Volume Gateway

(37:34) AWS Partner Network (APN) for Migration & Storage

(38:24) AWS Storage Training

https://www.youtube.com/watch?v=hgmFoRf33uA

Original Link: 

https://aws.amazon.com/storagegateway/developer-resources/data-migration-webinar/

 

Getting Started With Concourse on macOS

 

Original Post:
https://medium.com/concourse-ci/getting-started-with-concourse-ci-on-macos-fb3a49a8e6b4

Docker Machine vs Docker for Mac
https://stories.amazee.io/docker-on-mac-performance-docker-machine-vs-docker-for-mac-4c64c0afdf99

Concourse Github Repository:
https://github.com/concourse/concourse-docker

 

Linux: Docker uses the host's Linux kernel directly; no virtual machine is needed.


Windows/Mac: There is no Linux kernel, so Docker starts a virtual machine with a small Linux installed and runs Docker containers in there. File system mounts are also not possible natively and need a helper system in between, which both Docker for Mac and Cachalot provide.
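
You can see that hidden VM for yourself, assuming Docker for Mac is installed: asking Docker for its kernel on macOS reports a Linux (LinuxKit) kernel rather than anything from the Mac host.

$ docker info --format '{{.KernelVersion}} / {{.OperatingSystem}}'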

Docker Machine:
A set of CLI tools that start a boot2docker virtual machine inside a provided hypervisor (like VirtualBox, Parallels, and VMware).
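
A minimal Docker Machine workflow looks like this (the machine name "default" is just a convention); docker-machine env prints the variables that point the local Docker CLI at the VM:

$ docker-machine create --driver virtualbox default
$ docker-machine env default
$ eval $(docker-machine env default)
$ docker-machine ls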

Cachalot:
A helper tool that, like Docker Machine, runs Docker inside a virtual machine on macOS and provides the file-mount helper system mentioned above (see the amazee.io article linked earlier for a performance comparison).

Prerequisites:
Docker for Mac (HyperKit-based): install it from the following link:
https://store.docker.com/


Check the Docker versions:


$ docker --version
Docker version 18.03, build c97c6d6

$ docker-compose --version
docker-compose version 1.21.0, build 8dd22a9

$ docker-machine --version
docker-machine version 0.14.0, build 9ba6da9

 

$ docker info

$ wget -nv -O docker-compose.yml https://raw.githubusercontent.com/concourse/concourse-docker/master/docker-compose-quickstart.yml

$ docker-compose up -d

$ docker ps

Check it at: http://127.0.0.1:8080/

Install the Concourse CLI (fly):
https://concourse-ci.org/download.html

The file may download as fly.dms; make sure to rename it to fly (mv fly.dms fly).


$ install fly /usr/local/bin

$ which fly

$ fly -v

$ fly login -t hello -c http://localhost:8080

Note: The -t flag specifies a target alias and is required for almost every command. The alias comes in very handy when you’re targeting multiple Concourse installations.
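
For example, with several installations you might log in to each one under its own alias and then list them; the "prod" target and its URL below are hypothetical:

$ fly -t prod login -c https://ci.example.com
$ fly targets   # lists every saved target alias with its URL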

Setting up a Pipeline:


$ wget -nv https://raw.githubusercontent.com/concourse/testflight/master/pipelines/fixtures/simple.yml

Find more example pipelines at the following link:


https://github.com/search?l=YAML&p=1&q=org%3Aconcourse+platform%3A&type=Code

$ fly -t hello set-pipeline -p hello-world -c simple.yml
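
To confirm the pipeline was created (newly set pipelines start out paused), you can list the pipelines on the target:

$ fly -t hello pipelines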


To unpause the pipeline:


$ fly -t hello unpause-pipeline -p hello-world

Trigger the pipeline manually:


$ fly -t hello trigger-job -j hello-world/simple
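
To follow the build output of the job you just triggered, fly can stream it to the terminal:

$ fly -t hello watch -j hello-world/simple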