When it comes to the public cloud, watch out New Zealand, Google’s coming! I was fortunate enough to attend the Google Cloud Platform (GCP) conference in San Francisco last week. I don’t know what the scientifically measured half-life of Kool-Aid is, but I think I’ve let it pass through my hype-system by now. Those Googlers are a passionate and excitable bunch!
Not the Blue Oyster Bar, but actually the Day 1 Keynote at Pier 48 in San Francisco.
So what makes Google Cloud Platform relevant and interesting? Size, network, and bringing a different approach to cloud from what we have become accustomed to.
You don’t need me to tell you Google are huge. They have seven products with over a billion users (think Search, Gmail and YouTube). But the reason their size is relevant is not because our companies will need the scaling capacity of Google, or even that of flagship customers like Spotify and Snapchat. Rather, what Google have done with GCP is expose the same high-performance computing that runs in their ultra-efficient data centres (where only 10% of power goes on overheads other than running computers), delivered through publicly facing and often open-sourced versions of their own internal systems (e.g. Kubernetes and TensorFlow). This means we can scale from dev into the enterprise with a high-quality experience the whole way.
With regards to what GCP can achieve, some of the headline highlights are an average VM start-up time of 43 seconds, and 1,000 VMs starting up in 5 minutes! Cloud Dataflow can be used for both batch and streaming data processing. Archival storage is available at low cost, high durability, high availability AND low access times: around 3 seconds to respond with Cloud Storage Nearline (as opposed to hours with AWS Glacier). But perhaps most amazing of all is that they can do live migrations on GCP, because the cloud services are in fact underpinned by containers. This means no downtime for platform upgrades, because the apps don’t know they are running on a containerised GCP.
The Google network has to be high performance, low latency and have a global reach to be able to support all the Google consumer services that came before GCP. Now GCP gets to leverage that network of 77 Points of Presence (PoPs) around the world. This means they can deliver global load balancing, presenting a single Virtual IP (VIP) across the world (rather than having to have one per region). This translates to low latency for your users since they will hit a Google Front End (GFE) close to them and then run a TCP session over Google’s high quality network.
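The effect of that single global VIP can be pictured with a toy model: the user’s connection terminates at whichever Google Front End answers with the lowest round-trip time, and the rest of the journey rides Google’s backbone. This is a deliberate simplification of anycast routing, and the PoP names and RTT figures below are illustrative, not real measurements.

```python
# Toy model of global load balancing behind a single anycast VIP:
# the user lands on the nearest Google Front End (GFE), then traffic
# traverses Google's private backbone to the serving region.
# PoP names and RTT values are illustrative placeholders only.

def nearest_front_end(rtts_ms):
    """Pick the front end with the lowest round-trip time to the user."""
    return min(rtts_ms, key=rtts_ms.get)

# Hypothetical RTTs as seen from a New Zealand user:
user_rtts = {"sydney-pop": 38, "taiwan-pop": 120, "oregon-pop": 150}
print(nearest_front_end(user_rtts))  # -> sydney-pop
```

The point is that the client never has to choose a regional endpoint itself; the network makes that decision per connection.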
Inside their data centres, the network for compute and applications is also very high quality with up to 14Gbps throughput and only 100µs latency between VMs in the same zone.
A different approach
The Google approach is different in a number of ways. In terms of pricing model, rather than asking you to predict your usage and commit to a term in order to be rewarded with discounts, GCP calculates and automatically applies the best pricing model that matches your consumption pattern – and they charge per minute, rather than per hour.
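To make the "automatic discount" idea concrete, here is a rough sketch of sustained-use style pricing: each successive quartile of the month you keep a VM running is billed at a lower rate, with no upfront commitment. The tier boundaries and rates below are illustrative placeholders, not Google’s published schedule; the point is that the discount is computed from actual per-minute usage after the fact.

```python
# Sketch of sustained-use discounting: usage in each quartile of the
# month is billed at a progressively lower rate, applied automatically.
# Tier rates here are illustrative, not Google's published schedule.

TIERS = [(0.25, 1.00), (0.50, 0.80), (0.75, 0.60), (1.00, 0.40)]

def monthly_charge(minutes_used, minutes_in_month, price_per_minute):
    """Bill per minute, discounting each successive usage quartile."""
    fraction = minutes_used / minutes_in_month
    charge, prev = 0.0, 0.0
    for upper, rate in TIERS:
        band = min(fraction, upper) - prev
        if band <= 0:
            break
        charge += band * minutes_in_month * price_per_minute * rate
        prev = upper
    return charge

# A VM left running for a whole 30-day month (43,200 minutes):
full = monthly_charge(43200, 43200, 0.001)
print(round(full, 2))  # 30.24 -> ~30% off the undiscounted 43.20
```

Contrast this with reserved-instance models, where you pay (or commit) up front and it is your job to predict the usage pattern correctly.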
There are also some nice little touches, such as presenting a command-line interface (CLI) to the cloud services, via the gcloud SDK, directly inside the web console – they call this Cloud Shell. They achieve this by spinning up a very small compute node, dedicated to you, each time you log in.
Cloud Shell spins up small VMs so you can run commands from the web console
For me the stand-out innovation from Google is the Kubernetes container cluster manager and orchestration system (offered as a hosted service called Google Container Engine). Whether you are a container hobbyist, evangelist or despot who plans to overthrow the world, by now we all recognise the tremendous power containers – and Docker in particular – bring in terms of speed, isolation and portability. This topic needs more time than this single paragraph, but what Google deliver with Kubernetes is truly amazing, including: workload portability (avoiding vendor lock-in, write-once run-anywhere, avoiding coupling); rolling updates and blue/green deployments for no downtime; autoscaling; persistent storage; multi-zone clusters; managing secrets (keys etc.); scalability; and performance.
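The rolling-update idea in particular is easy to picture: replace pods a few at a time, so the number of pods serving traffic never drops below a floor. The toy below simulates only that scheduling invariant – it is not the Kubernetes API, and the version labels and batch logic are my own illustration.

```python
# Toy simulation of a Kubernetes-style rolling update: old pods are
# replaced in batches of at most `max_unavailable`, so available
# capacity never drops below (replicas - max_unavailable).
# This mimics the scheduling logic only; it is not the Kubernetes API.

def rolling_update(replicas, max_unavailable=1):
    pods = ["v1"] * replicas
    min_available = replicas
    while "v1" in pods:
        batch = [i for i, v in enumerate(pods) if v == "v1"][:max_unavailable]
        for i in batch:
            pods[i] = None                          # old pod terminated
        min_available = min(min_available,
                            sum(p is not None for p in pods))
        for i in batch:
            pods[i] = "v2"                          # replacement pod ready
    return pods, min_available

pods, floor = rolling_update(4, max_unavailable=1)
print(pods, floor)  # ['v2', 'v2', 'v2', 'v2'] 3 -> never below 3 serving
```

That floor on available pods is exactly why the update causes no downtime: clients always have healthy instances to hit while the fleet is swapped over underneath them.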
Kubernetes abstracts your containers from provider specific concepts
So what about the shortcomings? Well, for us in New Zealand the most obvious one is proximity. The DC in Taiwan appears the most responsive at around 330ms for compute (GCE) and 200ms for storage (GCS). These are anecdotal figures returned from CloudHarmony, so don’t get too fixated on the numbers, but bear in mind AWS is around 60ms and 40ms respectively for similar services out of Sydney. However, two new regions have been announced for Tokyo and Oregon, with 10 more locations to come through 2017. It seems highly plausible they will deliver on this, given they say they invested USD$9.9bn in data centre projects (for context, annual AWS revenue is estimated at USD$6-8bn). We can wait and see if one of the new 2017 locations will be in Australia (fingers crossed).
Apart from reach, breadth is also a challenge at the moment, since not every region or zone has the whole portfolio of services yet. Maturity in certain areas is also something they are focussing on. For example, Stackdriver is a good start in the monitoring, logging and diagnostics space for GCP and AWS, but needs work to expand into on-premises environments and attain the deep integration of a suite like VMware vRealize Operations. When it comes to Identity and Access Management (IAM), they have made it easy to enable Single Sign-On (SSO) if you want to use your own authentication mechanism, but the fact that 23 new IAM roles with granular permissions are still in beta illustrates how they are still catching up in some areas.
Another area that requires work is Google’s ability and appeal to sell into the enterprise. There has been some press focus on this, and Google are not unaware of the problem themselves. Until recently, it has mainly been developers of cloud-native applications who have been drawn to GCP, because they can just code and have App Engine (GAE) take care of the scaling and management of the components (a driver beyond DevOps into NoOps). However, in my opinion, Google will need the business of the suited and booted enterprises to back their market-share growth and global data centre expansion plans. The way to appeal to the enterprise is to comprehensively address security, a complete portfolio, a proven track record, consistency in delivery and offerings and, perhaps most importantly, making it easy to migrate into the cloud (for an archetypal example of how to do this well, look at the AWS Database Migration Service). Part of the strategy to address these challenges has been to appoint Diane Greene, co-founder and ex-CEO of VMware. She received a nice little sign-on bonus when they bought her start-up Bebop for USD$380m (of which she donated over USD$140m to a donor-advised fund). In her presentation during the keynote at GCP Next 2016, she focussed on the three core propositions of value, lower risk, and access to innovation. She believes they know how to sell well and sell differently, since the cloud is not like traditional software/hardware models.
So, who's interested?
Spotify are a big win for Google, and spoke about how they came to GCP for the data platform. With a billion concurrent music streams and 700 thousand events per second, they are proving they can do things at scale.
Snapchat were happy to talk about their use of the new IAM capabilities, and how they are helping drive further development in these areas with Google.
From our neck of the woods, Telstra in Australia spoke about their experiences with BigQuery, analysing masses of click-stream data with a very small team of engineers – and then expanding into big data analysis with Dataflow.
Coke were paraded for the classic use case of spinning up a global scale media campaign around the FIFA World Cup, which certainly shows off the agility and scale angles.
I also met and spoke with a very smart and switched-on CTO from a South African mobile payments company called nomanini, who have bet their business on Google. They do everything from building the Point of Sale (POS) circuit boards to building the transaction and data analysis systems on GCP. I can see the parallels to ambitious, self-starting New Zealand companies. And then of course there are the rumours of Apple making the Great Escape.
The partner ecosystem is relatively small compared to those of the major players in cloud, but it is growing fast and it is very development and Business Intelligence (BI) focussed. Some of the notable names there were Hashicorp (best known for Vagrant), Saltstack, fastly (CDN), CoreOS, MongoDB, NGINX, Chef, Puppet Labs, Splunk, Tableau and PagerDuty.
Relevance for New Zealand organisations
In reality, for nearly all use cases, we will have to wait until they are closer to our shores, which could mean Australia (no official announcement) or even the US west coast (Oregon is officially announced), since us-central1 appears to be no better than Taiwan in terms of responsiveness.
Google have tech passion and pride, and I personally rate these qualities very highly, even when they run ahead of an organisation’s execution to date. With interest, we’ll be following the Diane Greene initiatives as Google appeals to the corporate base.
And we’ll be exploring Kubernetes much more deeply, starting right now. You will probably want to as well if you are getting into Docker. It might just be the last molecules of the Kool-Aid talking, but the Kubernetes management and orchestration system does seem like the holy grail of containers to me.
Stay tuned for more value and insight from the Google NEXT conference in the coming weeks.
What are your thoughts on Google Cloud Platform being the next serious contender for your cloud dollar? Let us know in the comments below.