Originally published on CloudOps’ blog.

The IT industry is in a state of deep confusion when it comes to multi-cloud. Against pushback from cloud vendors who sought to monopolize the industry, most large organizations are now deliberately pursuing multi-cloud approaches. In a recent report, 93% of enterprise respondents reported having multi-cloud strategies. Multi-cloud is no longer a strategy; it’s the reality.

Cloud native has become the standard, yet the confusion around multi-cloud persists. We’ve compiled a few of the most common myths surrounding it.

1. Any organization that has containerized its applications is cloud native by default and therefore prepared for multi-cloud.

It is true that containers provide…

Most applications require resources from the environment they run in: memory, CPU, storage, networking, and so on. Most of those resources can be consumed easily and transparently; some cannot, depending on the application. Most applications also require configuration steps before being deployed, along with a few, or perhaps many, special maintenance tasks related to backups, restores, file compression, high-availability checks, log maintenance, database growth, sanity routines, and so on. …
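To make the resource requirements above concrete: in Kubernetes, an application declares the memory and CPU it needs directly in its Pod spec. This is a minimal sketch; the names and image are hypothetical placeholders, not from the original post:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sample-app          # hypothetical name for illustration
spec:
  containers:
    - name: app-container
      image: nginx:1.25     # stands in for any containerized application
      resources:
        requests:           # minimum resources the scheduler reserves
          memory: "128Mi"
          cpu: "250m"
        limits:             # hard caps enforced at runtime
          memory: "256Mi"
          cpu: "500m"
```

Requests drive scheduling decisions, while limits cap what the container may actually consume.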

Continuous delivery is a software engineering approach where teams produce software in short cycles, ensuring it can be reliably released at any time. It relates to continuous integration, which is the practice of merging all developer working copies to a shared mainline several times a day. With the rise of microservices and cloud native architectures, continuous delivery is increasingly becoming a necessity and open source communities are coming together to drive its adoption.

“Software delivery is an exercise in continuous improvement, and our research shows that year over year the best keep getting better, and…

Cloud native applications take full advantage of the cloud’s operational model, driving business value through automated provisioning, scaling, and redundancy. By breaking down monolithic applications into independent but connected containers, developers create applications that scale seamlessly with demand. At its core, cloud native computing lets you write and deploy code anywhere: in any one of, and most likely several, private, hybrid, and public cloud environments.

While the cloud native landscape is becoming more vast and complex each day, Kubernetes and other foundational tools have crossed the chasm and reached a size and…

Microservices have made applications more scalable, portable, and resilient. They rely on volatile, ephemeral environments that enable accelerated software delivery pipelines.

Their adoption has skyrocketed in the past few years, but their complexity can make their networking problematic. It can be difficult to establish trust between microservices. Canary deployments, which roll out releases to a subset of users or servers, can be complicated. Likewise, rollbacks, attribute-based routing, end-to-end encryption, metrics collection, and rate limiting can all be difficult. These challenges with microservices must still be ironed out.
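To make the canary idea concrete: with a service mesh such as Istio, a weighted traffic split can be declared rather than scripted. This is a sketch under the assumption that a `reviews` service has `v1` and `v2` subsets defined in a companion DestinationRule; all names here are hypothetical:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews-canary      # hypothetical name
spec:
  hosts:
    - reviews
  http:
    - route:
        - destination:
            host: reviews
            subset: v1      # stable version
          weight: 90
        - destination:
            host: reviews
            subset: v2      # canary version
          weight: 10        # 10% of traffic goes to the canary
```

Shifting the weights gradually, and reverting them if metrics degrade, gives you the canary rollouts and rollbacks described above without touching application code.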

Service meshes have become…

The effects of cloud computing on climate change are complex. Data centres consume energy and emit greenhouse gases in the process. The information and communications technology industry is projected to account for 30% of global energy demand by 2030. Currently consuming 7% of the world’s energy supply, internet technology has the same carbon footprint as the aviation industry.

The source of 7% of the global energy demand will not make or break the fight against climate change. This has been hammered home recently as people stayed home to fight the spread of COVID-19. Never…

Networking is central to microservice-based architectures, and Kubernetes provides first-class support for a range of networking configurations. Essentially, it gives you a simple, abstracted cluster-wide network. Behind the scenes, Kubernetes networking can be quite complex due to its range of networking plugins. It helps to keep the simpler concepts in mind before trying to trace the flow of individual network packets.

A good understanding of Kubernetes’ range of service types and ingresses should help you choose appropriate configurations for your clusters. …
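As an example of how service types and ingresses fit together, a common configuration exposes Pods internally through a ClusterIP Service and routes external HTTP traffic to it through an Ingress. The names and host below are hypothetical:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-svc               # hypothetical name
spec:
  type: ClusterIP             # cluster-internal virtual IP (the default type)
  selector:
    app: web                  # forwards traffic to Pods with this label
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress           # hypothetical name
spec:
  rules:
    - host: web.example.com   # hypothetical host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-svc
                port:
                  number: 80
```

Other Service types, NodePort and LoadBalancer, expose the same Pods in progressively more external ways, while the Ingress layers HTTP routing rules on top.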

Earlier this year, I moved from leading the marketing group to HR. While I am still happily driving CloudOps’ community participation, I am now also responsible for ongoing learning for our teams. My very first training mission was to research, curate, and present best practices for working from home (including practical and wellness tips on how to optimize remote work, pandemic or not) from the plethora of articles that exist on the subject. I am summarizing the presentation here for all of you — hopefully, you will find something inspiring.

The session ultimately centred around…

People leverage the cloud for many reasons, and more often than not, the need to cut costs ranks high among them. While the cloud does offer significant benefits, including cost-efficiency, some organizations have learned the hard way that those savings don’t necessarily hold as they scale. Cloud native infrastructures are complex entities that must be managed properly to scale cost-effectively. In this blog, we identify eight ways to help your organization optimize its cloud usage.

1. Have a strategy for business continuity

It’s important to always maintain the ability to quickly and cost-effectively move your business-critical applications to new deployments…

If you’re looking to get started with Kubernetes, this blog post will teach you the basics of deployments.

What is a Kubernetes deployment?

A deployment is one of the many Kubernetes objects. In technical terms, it currently encapsulates the following (which we will be covering below):

  • Pod specification
  • Replica count
  • Deployment strategy

In practical terms, you can think of a deployment as an instance of an application with its associated configuration. If you have two deployments, one could be a “production” environment and the other a “staging” environment.

There are a few important concepts to know about Kubernetes deployments.
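The three parts listed above map directly onto fields in a Deployment manifest. Here is a minimal sketch; the name, labels, and image are hypothetical placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deploy          # hypothetical name
spec:
  replicas: 3                 # replica count
  strategy:
    type: RollingUpdate       # deployment strategy
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  selector:
    matchLabels:
      app: hello
  template:                   # Pod specification
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: hello
          image: nginx:1.25   # stands in for any containerized application
          ports:
            - containerPort: 80
```

Applying this manifest with `kubectl apply -f` asks Kubernetes to keep three identical Pods running and to replace them gradually whenever the Pod template changes.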



Leader in #cloud solutions, focused on open source, cloud platforms, networking, and DevOps. Experts in Kubernetes, OpenStack, CloudStack, and more.
