Safdar Mirza @ Cloud

https://www.cnbc.com/2018/06/01/vmware-looks-to-expand-cloud-to-azure-and-google-due-to-customer-demand.html

VMware looks to expand Cloud to Azure and Google, due to customer demand

  • VMware, the cloud computing subsidiary of Dell, initially launched VMware Cloud on AWS last year in the U.S.
  • Now CEO Patrick Gelsinger says customer demand has warranted adapting VMware services to Microsoft Azure and Google Cloud, too.
  • "We have interest from our customers to expand our relationships with Google, Microsoft and others," Gelsinger said.

canary (canary test, canary deployment)

In software testing, a canary is a push of programming code changes to a small group of end users who are unaware that they are receiving new code. Because the canary is only distributed to a small number of users, its impact is relatively small and changes can be reversed quickly should the new code prove to be buggy. Canary tests, which are often automated, are run after testing in a sandbox environment has been completed.


For incremental code changes, a canary approach to delivering functionality allows the development team to quickly evaluate whether the code release produces the desired outcome. The word canary was chosen to describe a code push to a subset of users because canaries were once used in coal mining to alert miners when toxic gases reached dangerous levels. Like the canary in a coal mine, the end user selected to receive new code in a canary test is unaware that they are providing an early warning.
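As a rough illustration (not tied to any particular product), the routing decision behind a canary rollout can be as simple as hashing a stable user identifier and sending a small, configurable percentage of users to the new build. The percentage and user IDs below are just examples:

    import hashlib

    CANARY_PERCENT = 5  # send roughly 5% of users to the new code (illustrative value)

    def bucket_for(user_id: str) -> int:
        """Map a user ID to a stable bucket in the range 0-99."""
        digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
        return int(digest, 16) % 100

    def route(user_id: str) -> str:
        """Decide which release this user should be served."""
        return "canary" if bucket_for(user_id) < CANARY_PERCENT else "stable"

    # The same user always lands in the same bucket, so the canary group
    # stays consistent for the duration of the test.
    for uid in ("alice", "bob", "carol"):
        print(uid, "->", route(uid))

Because the hash is deterministic, the canary group does not change between requests, which keeps the early-warning signal clean and makes it easy to roll everyone back to the stable release if problems appear.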

Virtual Machines vs Containers:

Which is better, containers or VMs, depends on what you are trying to accomplish. Virtualization enables workloads to run in environments that are separated from the underlying hardware by a layer of abstraction. This abstraction allows a server to be divided into virtual machines (VMs) that can each run a different operating system.

Container technology offers an alternative method of virtualization, in which a single operating system on a host can run many different applications. One way to think of containers vs. VMs is that while VMs run several different operating systems on one compute node, container technology virtualizes the operating system itself.


Virtual Machines:
A VM is a software-based environment geared to simulate a hardware-based environment, for the sake of the applications it will host. Conventional applications are designed to be managed by an operating system and executed by a set of processor cores. Such applications can run within a VM without any re-architecture.

With VMs, a software component called a hypervisor acts as an agent between the VM environment and the underlying hardware, providing the necessary layer of abstraction. A hypervisor, such as VMware ESXi, is responsible for executing the virtual machines assigned to it and can execute several simultaneously. Other popular hypervisors include KVM, Citrix Xen, and Microsoft Hyper-V. In the most recent VM environments, modern processors are capable of interacting with hypervisors directly, providing them with channels for pipelining instructions from the VM in a manner that is invisible to the applications running inside the VM. Modern VM environments also include sophisticated network virtualization models such as VMware NSX.
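As a small sketch of how management software typically talks to a hypervisor, the following uses the libvirt Python bindings (a common front end for KVM, Xen, and other hypervisors) to list the virtual machines a host is running. It assumes libvirt is installed and a local KVM/QEMU hypervisor is available; the connection URI is illustrative:

    # Minimal sketch using the libvirt Python bindings (pip install libvirt-python).
    # Assumes a local KVM/QEMU hypervisor; the connection URI is illustrative.
    import libvirt

    conn = libvirt.open("qemu:///system")    # connect to the local hypervisor
    try:
        for dom in conn.listAllDomains():    # every VM the hypervisor manages
            state = "running" if dom.isActive() else "stopped"
            print(f"{dom.name()}: {state}")
    finally:
        conn.close()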

The scalability of a VM server workload is achieved in much the same way it is achieved on bare metal: with a Web server or a database server, the programs responsible for delivering the service are distributed among multiple hosts, and load balancers are inserted in front of those hosts to direct traffic among them evenly. Automated procedures within VM environments make such load balancing sensitive to changes in traffic patterns across data centers.
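As a toy illustration of the load-balancing idea (not any specific product's algorithm), a round-robin balancer simply cycles incoming requests across a list of backend hosts. The host names here are made up:

    from itertools import cycle

    # Hypothetical pool of identical web-server VMs sitting behind one balancer.
    backends = ["web-vm-01", "web-vm-02", "web-vm-03"]
    next_backend = cycle(backends)            # round-robin iterator over the pool

    def dispatch(request_id: int) -> str:
        """Pick the next backend in rotation for this request."""
        host = next(next_backend)
        print(f"request {request_id} -> {host}")
        return host

    for i in range(6):                        # six requests spread evenly across three hosts
        dispatch(i)

Real load balancers add health checks and weighting on top of this, which is what lets VM environments react automatically to shifting traffic patterns.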

Containers: 
The concept of containerization was originally developed not as an alternative to VM environments, but as a way to segregate namespaces in a Linux operating system for security purposes. The earliest Linux environments resembling modern container systems produced partitions (sometimes called “jails”) within which applications of questionable security or authenticity could be executed without risk to the kernel. The kernel was still responsible for execution, but a layer of abstraction was inserted between the kernel and the workload.

The idea of making the contents of these partitions portable came later, once the environment within them had been minimized for efficiency’s sake. The first true container system, LXC, was developed as part of Linux rather than as a complete environment of its own. Docker originated as an experiment in easily deploying LXC containers onto a PaaS platform operated by Docker Inc.’s original parent company, dotCloud.

Workloads within containers such as Docker are virtualized. However, within Docker’s native environment there is no hypervisor. Instead, the Linux kernel (or, more recently, the Windows Server kernel) is supplemented by a daemon that maintains the compartmentalization between containers while connecting their workloads to the kernel. Modern containers often do include minimal operating systems such as CoreOS and VMware’s Photon OS, but their only purpose is to maintain basic, local services for the programs they host, not to project the image of a complete processor space.
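To make the daemon-based model concrete, here is a small sketch using the Docker SDK for Python (the docker package). It asks the Docker daemon, rather than a hypervisor, to start a container from a public image; the image and command are just examples and assume a local daemon is running:

    # Minimal sketch using the Docker SDK for Python (pip install docker).
    # Assumes a local Docker daemon is running; the image choice is illustrative.
    import docker

    client = docker.from_env()                # talk to the local Docker daemon
    output = client.containers.run(
        "alpine:latest",                      # small example image
        ["echo", "hello from a container"],
        remove=True,                          # clean up the container when it exits
    )
    print(output.decode().strip())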