
Modernizing Oracle operations with Kubernetes and El Carro

Thursday, May 13, 2021

Google Cloud is releasing El Carro, an open source tool to help you transform and modernize your Oracle database operations. El Carro implements the Kubernetes operator pattern to deliver automation for provisioning and for ongoing operations like backups, patching, and high availability for databases running in hybrid and multi-cloud environments. And it does so using the same declarative syntax that DevOps teams already use to manage applications. With El Carro, users can modernize and transform their database operations in place and benefit from a consistent management experience and hybrid and multi-cloud portability. El Carro is released under the Apache License 2.0, so you are free to use it in any Kubernetes environment—you are in control.

Containers and Kubernetes deliver portability on standardized infrastructure, and today Oracle supports databases running in containers; Oracle has also released container build files, images, and Helm charts to simplify provisioning. What is missing for the next level of integration is support for lifecycle operations and an extension of the Kubernetes API to the primitives needed for database management.

In addition, fully managed or autonomous services for Oracle may not offer all the required features (such as Active Data Guard, Multitenant, and In-Memory) or the specific parameters/flags, versions, and patch levels you need. DBAs also find themselves locked out of many roles, including sysadmin and root. These restrictions lead many cloud architects to fall back to lifting and shifting Oracle databases onto infrastructure-as-a-service offerings, missing out on opportunities to modernize and transform database operations. And with transactional databases growing in number and criticality, organizations are struggling to deliver innovation and modernization; engineers are already busy keeping up with sprawl and mundane operational tasks while adhering to strict change management processes.

How do we solve this database operations gap?

El Carro solves this. It is built with scalability in mind, using the same container orchestration infrastructure, Kubernetes, that powers many businesses and is a top choice for modern architectures. Its open API allows you to manage your database configurations as declarative code, enabling CI/CD or GitOps workflows for auditability and control. El Carro automates many database lifecycle operations, like backups, replication, and patching. And when it distributes databases across the nodes of a cluster, it is aware of the priority and resource requirements of each database, so it can pack them tightly while respecting quality of service. Lastly, it helps DBAs by delivering automation without restrictions, leaving them in full control over their systems. You can choose to let the operator drive for you, but you can also take over the steering wheel yourself at any time.

Kubernetes is now the standard for portable infrastructure automation and orchestration, and engineers appreciate how it abstracts complex problems into manageable infrastructure as code. Kubernetes can scale from small projects to the infrastructure that powers Google products and services for billions of users around the world. Moreover, Google is pioneering the next generation of infrastructure as code, which we refer to as Configuration as Data, to declaratively establish a contract between developer intent and runtime operation. According to the Cloud Native Survey 2020, two-thirds of respondents were either already running stateful workloads in production or considering doing so within the next 12 months. We expect that datastores will drive the next wave of enterprise Kubernetes adoption.

A number of open source operators for databases, such as PostgreSQL, MySQL, and many others, have been released, are actively maintained by the community, and are popular among developers and architects looking for a hands-off approach to manage databases with their applications. El Carro extends the list of database operators to include Oracle.

What are we building with El Carro for Oracle databases?

The operator pattern emerged in late 2016 as an extension of the Kubernetes API and control loop aimed at automating more complicated and application-specific tasks that are beyond the native Kubernetes objects.

El Carro implements a custom resource definition (CRD) tailored to database management. Users set and change attributes of the custom resource using the Kubernetes API the same way they do for built-in objects such as pods, deployments, or services. The El Carro controller observes changes to the custom resources, compares the declared state with the current reality in the cluster, and then makes the necessary changes. Those changes may affect the Kubernetes resources used by the database, such as persistent volumes or the pod itself, or they may result in calls to the database via SQL or command-line tools to create and modify users or other database objects.
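To make that concrete, here is a minimal sketch of what driving a database custom resource through the Kubernetes API could look like from Python, using the official kubernetes client. The API group, version, plural, and spec field below are illustrative placeholders rather than El Carro's actual CRD schema; consult the El Carro documentation for the real resource definitions.

# Hedged sketch: declaratively update a database custom resource via the
# Kubernetes API. Group/version/plural and the spec field are hypothetical
# placeholders, not El Carro's actual schema.
from kubernetes import client, config

def update_database_spec(name, namespace="db"):
    config.load_kube_config()  # or config.load_incluster_config() inside a pod
    api = client.CustomObjectsApi()

    # Declare the desired state; the controller reconciles the cluster toward it.
    patch = {"spec": {"memoryPercent": 40}}

    api.patch_namespaced_custom_object(
        group="oracle.example.com",  # hypothetical API group
        version="v1alpha1",
        namespace=namespace,
        plural="instances",
        name=name,
        body=patch,
    )

if __name__ == "__main__":
    update_database_spec("mydb")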

Here’s a look at how this works:
El Carro Architecture

The diagram above shows how the major components of a database managed by the El Carro Operator interact with each other. The controller monitors the CRD for any changes made by admins. It creates and manages the cluster resources that make up the actual database deployment: persistent volumes for filesystems and data, a pod to run containers with the actual database, and a daemon that allows the controller to securely run SQL commands on the database. And lastly, a service makes sqlnet connections available to applications and end users that can either run in the same Kubernetes cluster or outside of it.

At release time, the El Carro Operator can provision Oracle 12c Enterprise Edition and 18c Express Edition databases. It manages instance parameters, pluggable databases, and users. You can take and restore backups using either RMAN or storage snapshots, and we are working to add additional features.

How can you get involved with El Carro?

During development, we collaborated with users and partners in the Oracle community to validate the approach. “Pythian has helped Oracle users to automate and optimize the operations of their mission-critical systems for over 20 years,” says Simon Pane, principal consultant at Pythian. “We are excited about the possibilities that El Carro brings to users on their cloud modernization journeys. We are proud to work with the community on a vision for the future of database management.”

Sean Scott covers Docker for databases on his blog oraclesean.com, and says: "There are many benefits to running Oracle databases in containers. Adding Kubernetes orchestration introduces new opportunities to bring the DevOps and Oracle communities together."

You can try out El Carro today. Follow the quick start guide and try provisioning instances, databases, and users. Import data via Data Pump, manage instance parameters, choose between different methods for backups, and try out a restore. Have a look at how we integrate with external logging and monitoring solutions. Reach out via our Google group and leave feedback about which features you would like to see next, or even create your own patch and send a pull request on GitHub.

By Bjoern Rost - Product Manager and Boris Dali - Team Lead, Engineering

W3C Trace Context Specification: What it Means for You

Wednesday, December 11, 2019

Since the first days of Google Cloud Platform (GCP), Google has been at the forefront of making your applications more observable. Beyond Stackdriver, our most visible impact in this space is OpenTelemetry, which we initiated in 2017 (as OpenCensus) and which has grown into a huge community that includes the majority of APM / monitoring vendors and cloud platforms.

While OpenTelemetry allows developers to easily capture distributed traces and metrics from their own services, there’s also a need to trace requests as they propagate through components that developers don’t directly control, like managed services, load balancers, network hardware, etc. To solve this we co-defined a prototype HTTP header that these components can rely on, gathered partners, and moved the work into the W3C.

This work is now complete, and the W3C Trace Context format is now an official standard. Once implemented in GCP, this will make our services even easier to manage, both with Stackdriver and other third party distributed tracing tools. We explain more in the official post on the W3C blog, which I’ve copied below:

The W3C Distributed Tracing working group has moved the Trace Context specification to the next maturity level. The specification is already being adopted and implemented by many platforms and SDKs. This article describes the Trace Context specification and how it improves troubleshooting and monitoring of modern distributed apps.

The W3C Trace Context specification defines the format for propagating distributed tracing context between services. Distributed tracing makes it easy for developers to find the causes of issues in highly distributed microservices applications by tracking how a single interaction was processed across multiple services. Each step of a trace is correlated through an ID that is passed between services, and W3C Trace Context now defines a standard for these context propagation headers.
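To make the header format concrete, here is a minimal sketch in Python of what generating and propagating a traceparent header might look like; the IDs and downstream URL are illustrative, and in practice a tracing SDK such as OpenTelemetry handles this for you.

# Hedged sketch of the W3C traceparent header: version-traceid-parentid-flags,
# e.g. 00-0af7651916cd43dd8448eb211c80319c-b7ad6b7169203331-01.
import secrets
import urllib.request

def make_traceparent(trace_id=None):
    trace_id = trace_id or secrets.token_hex(16)  # 16 random bytes -> 32 hex chars
    parent_id = secrets.token_hex(8)              # 8 random bytes -> 16 hex chars
    return "00-{}-{}-01".format(trace_id, parent_id)  # version 00, sampled flag 01

def call_downstream(incoming_traceparent=None):
    # Reuse the incoming trace-id so the downstream span joins the same trace.
    trace_id = incoming_traceparent.split("-")[1] if incoming_traceparent else None
    req = urllib.request.Request(
        "https://downstream.example.com/work",  # hypothetical downstream service
        headers={"traceparent": make_traceparent(trace_id)},
    )
    return urllib.request.urlopen(req)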

Until now, different tracing systems have defined their own headers. Examples include Zipkin’s B3 format and X-Google-Cloud-Trace. Adopting a common context propagation format has been long desired by developers, APM vendors, and cloud platform hosts, as compatibility provides numerous benefits:
  • Web and RPC frameworks that use this standard to provide context propagation out of the box will also offer cross-service log correlation, even for developers who haven’t set up distributed tracing.
  • API producers can record the trace IDs of requests from API consumers and provide additional spans or metadata to their customers for a given traced request. Producers can also correlate customer trace IDs to internal traces when debugging technical issues raised by consumers.
  • Networking infrastructure (proxies, load balancers, routers, etc.) can both ensure that context propagation headers are not removed from requests passing through them, and can record spans or logs for a given trace, without having to support multiple vendor-specific formats. Potential examples of these include router appliances, cloud load balancers, and sidecar proxies like Envoy.
  • Instrumentation can be further decoupled from a developer’s choice of APM vendor. For example, using both OpenTelemetry and a given vendor’s agents, a developer can instrument different services in an application, and traces will flow through the system and be processed correctly by the vendor’s backend.
  • Web browsers and other clients can use these identifiers to correlate their telemetry with traces collected from backend services. This functionality is currently being defined.
To address this, a group of cloud providers, open source contributors, and APM vendors started defining a standard HTTP context propagation header to replace their homegrown formats. This specification has been discussed and iterated on over the past two years, and the group working on it has grown significantly over that time. Sponsors include Google, Microsoft, Dynatrace, and New Relic (W3C members), and the group was officially moved into the W3C in 2018 for the work to proceed under the guidance of an official standards body and to spur even greater adoption.

TraceContext has since been adopted by OpenTelemetry (which enables it by default and also serves as the reference implementation), Azure services, Dynatrace, Elastic, Google Cloud Platform, Lightstep, and New Relic. We are tracking adoption in this list.

This first phase of work has focused on HTTP, as it is commonly used and has no built-in affordances for trace context propagation (gRPC and some newer RPC systems do). The same group of committee members are also working to define trace context propagation in other formats, starting with AMQP and MQTT for IoT; other upcoming topics include context propagation from clients and web browsers.

By Morgan McLean, OpenTelemetry + Stackdriver

How we brought the latest version of Python to App Engine and Cloud Functions

Monday, August 13, 2018

At Cloud Next 2018, we added Python 3.7 support to Cloud Functions, and now we’ve announced Python 3.7 support for the App Engine standard environment. These new runtimes allow you to write Python functions and apps using the latest version of Python and the rich ecosystem of packages available on the Python Package Index (PyPI).

This new runtime marks a significant update to App Engine and was enabled by new open source software that we recently released: gVisor and FTL.

Python, straight from the source

Running Python 3.7 on App Engine and Cloud Functions required us to fundamentally rethink our infrastructure. Traditionally, meeting Google Cloud’s security requirements meant that we had to run a modified version of the Python interpreter. However, using a modified interpreter constrained some language features and only allowed us to support a limited set of whitelisted Python libraries.

Thanks to gVisor, a container sandbox that provides improved security and process isolation, we can now run the unmodified Python 3.7.0 interpreter. We’ve done extensive testing to make sure Python 3.7 is compatible with gVisor. As part of our compatibility testing, we run Python’s full suite of language tests, and tests for Python packages that are popular on PyPI. We’re committed to ensuring that everything you’ve come to know and love about Python is supported on our platform.

Seamless deployments

Most importantly, this change in our infrastructure makes it easier to take advantage of Python’s vast ecosystem. As a developer, you just add project dependencies to a requirements.txt file and deploy.
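As a hypothetical sketch (the package names and handler below are illustrative, not part of the announcement), a requirements.txt containing just Flask and requests, paired with a main.py like the following, is all you need; an app.yaml declaring runtime: python37 selects the new runtime, and gcloud app deploy does the rest.

# main.py -- minimal sketch of a Python 3.7 App Engine app whose dependencies
# (Flask, requests) are installed automatically from requirements.txt.
import requests
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    # Any PyPI package listed in requirements.txt is available at runtime.
    resp = requests.get("https://www.example.com")
    return "Fetched example.com with status {}".format(resp.status_code)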

During deployment, FTL, a tool for building containers, fetches the dependencies listed in your requirements.txt file and installs them alongside your app or function. FTL also includes a short-lived dependency cache, which speeds up repeated deployments if no changes are detected in your requirements.txt file. This is particularly useful if you just need to re-deploy because you found a typo.

Keeping up with the Pythonistas

In making these changes, we also decided to expand the list of system packages that are included with each runtime’s Ubuntu 18.04 distribution. We think that will make life just a little bit easier for developers working with the latest release of Python.

Looking forward, we’re excited about how these changes will allow us to keep up with the Python community’s progress as they release new versions and libraries. Please let us know what you think and if you run into any challenges.

You can learn more about how to get started with it on App Engine and Cloud Functions in our documentation. We can’t wait to see what you build with Python 3.7.

By Stewart Reichling, Product Manager

Authenticating to HashiCorp Vault using Google Cloud IAM

Wednesday, August 16, 2017

Applications often require access to small pieces of sensitive data at build or run time, referred to as secrets. Secrets are generally more sensitive than other environment variables or parts of your repository as they may grant access to additional data, such as user data.

HashiCorp Vault is a popular open source tool for secret management that allows a developer to store, manage, and control access to tokens, passwords, certificates, API keys, and other secrets. Vault has many options for authentication, called authentication backends. These allow developers to use many kinds of identities to access Vault, including tokens or usernames and passwords. As the number of developers on a team grows, these kinds of authentication options become impractical, and in enterprise scenarios, managing and auditing these identities becomes burdensome.

Today, we are pleased to announce a Google Cloud Platform IAM authentication backend for Vault. This allows a developer to use an existing IAM identity to authenticate to Vault. Using a service account, you can sign a JWT to show it came from a particular account, and use that to authenticate to Vault. Learn more in the documentation.


The following example in Go shows how a user can authenticate with Vault using this backend. This example assumes the Vault server has already been mounted at auth/gcp and configured.
package main

import (
 "encoding/json"
 "fmt"
 "io/ioutil"
 "log"
 "os"
 "time"

 vaultapi "github.com/hashicorp/vault/api"
 "golang.org/x/oauth2"
 "golang.org/x/oauth2/google"
 "google.golang.org/api/iam/v1"
)

func main() {
 // Start [PARAMS]
 project := "project-123456"
 serviceAccount := "myserviceaccount@project-123456.iam.gserviceaccount.com"
 credsPath := "path/to/creds.json"

 os.Setenv("VAULT_ADDR", "https://vault.mycompany.com")
 defer os.Setenv("VAULT_ADDR", "")
 // End [PARAMS]

 // Start [GCP IAM Setup]
 jsonBytes, err := ioutil.ReadFile(credsPath)
 if err != nil {
  log.Fatal(err)
 }
 config, err := google.JWTConfigFromJSON(jsonBytes, iam.CloudPlatformScope)
 if err != nil {
  log.Fatal(err)
 }

 httpClient := config.Client(oauth2.NoContext)
 iamClient, err := iam.New(httpClient)
 if err != nil {
  log.Fatal(err)
 }
 // End [GCP IAM Setup]
 
 // 1. Generate signed JWT using IAM.
 resourceName := fmt.Sprintf("projects/%s/serviceAccounts/%s", project, serviceAccount)
 jwtPayload := map[string]interface{}{
  "aud": "auth/gcp/login",
  "sub": serviceAccount,
  "exp": time.Now().Add(time.Minute * 10).Unix(),
 }

 payloadBytes, err := json.Marshal(jwtPayload)
 if err != nil {
  log.Fatal(err)
 }
 signJwtReq := &iam.SignJwtRequest{
  Payload: string(payloadBytes),
 }

 resp, err := iamClient.Projects.ServiceAccounts.SignJwt(resourceName, signJwtReq).Do()
 if err != nil {
  log.Fatal(err)
 }

 // 2. Send signed JWT in login request to Vault.
 vaultClient, err := vaultapi.NewClient(vaultapi.DefaultConfig())
 if err != nil {
  log.Fatal(err)
 }

 vaultResp, err := vaultClient.Logical().Write("auth/gcp/login", map[string]interface{}{
  "role": "test",
  "jwt":  resp.SignedJwt,
 })

 if err != nil {
  log.Fatal(err)
 }

 // 3. Use auth token from response.
 log.Printf("Access token: %s", vaultResp.Auth.ClientToken)
 vaultClient.SetToken(vaultResp.Auth.ClientToken)
 // ...
}

Vault is just one way of managing secrets in development. For further reading on choosing a solution that’s right for you, see Google Cloud Platform’s documentation on Secret Management.

By Emily Ye, Software Engineer

.NET and PowerShell tooling for the Google Cloud Platform

Thursday, September 29, 2016

Last month Google made an announcement unveiling support for Visual Studio, C#, PowerShell, Microsoft SQL Server and more on the Google Cloud Platform. With so many new features, it is easy to gloss over some of the technical aspects of the announcement, especially the fact that all of the developer tooling and libraries are open source and available on GitHub.

This post will go into some of the details behind the new C# libraries, PowerShell cmdlets, and Visual Studio extension. All three products are open source, have an exciting roadmap for the future and are hungry for your feedback.

C# bindings for Google APIs

Source: https://github.com/googlecloudplatform/google-cloud-dotnet
Docs: https://cloud.google.com/dotnet/

For years, Google has had innovative technologies powering its data centers; unfortunately, Google’s internal APIs and technology couldn’t directly benefit you and your software. That was the case until the Google Cloud Platform started exposing public APIs for things like machine learning, storage, and logging. With these APIs publicly available, you can add powerful capabilities to your apps without needing to manage complex infrastructure.

There have been C# bindings for Google APIs for years. In fact, Google receives hundreds of millions of API calls from C# clients every day. But newer APIs, especially those from the Google Cloud Platform, require more advanced features like bidirectional streaming. That’s why, rather than using HTTP/REST, many newer Google APIs are built on top of gRPC, a high-performance, open source universal RPC framework.

But don’t worry, we have C# bindings for those gRPC-based APIs too; all of it open source and on GitHub.

In both cases, the client library is the result of a C# code generator. We take the API’s discovery document (analogous to a WSDL) and generate C# code. gRPC APIs require more careful design than other APIs, but the end product is the same. Once built, the API libraries are published to NuGet.

C# code generators for Google APIs aren’t the entire story.

Source code generated from tools can look foreign at times. So for libraries where the codegen isn’t good enough, we have hand-written wrappers to provide a better, more idiomatic experience. In some cases -- such as CRUD operations using the Datastore API -- the hand-written library cuts down on the required lines of code by half.

Finally, support for C# doesn’t just mean code. We are also working to ensure Google APIs are supported on different runtimes too. Most Google APIs work on the cross-platform .NET Core runtime and we are continuing to expand support.

PowerShell support

Source: https://github.com/googlecloudplatform/google-cloud-powershell
Docs: http://googlecloudplatform.github.io/google-cloud-powershell/

C# support is great when you are writing full applications, but for DevOps, scripting is more typical. The Cloud SDK provides command-line tools (gcloud, gsutil) for managing cloud resources, but when running on Windows, Windows PowerShell is a dramatically more productive environment. Google Cloud Tools for PowerShell is a set of cmdlets for managing your Google Cloud resources. They are strongly typed and integrate seamlessly with other PowerShell tools. For example, to learn more about a cmdlet, just use Get-Help.

In designing the PowerShell cmdlets, the main goal was to be idiomatic. We wanted to follow the best practices and guidelines so PowerShell novices and pros alike could use our cmdlets. Of course, if we have anything wrong, please log an issue on the GitHub repository. Pull requests are also welcome.

Visual Studio

Source: https://github.com/googlecloudplatform/google-cloud-visualstudio
Docs: https://cloud.google.com/visual-studio/

The C# and PowerShell features should help developers using Google services. But the biggest impact on developer productivity comes from being inside the Visual Studio IDE.

From within Visual Studio you can search for new extensions and find the Google Cloud Platform Extension for Visual Studio. It provides tools for viewing/managing data stored in Google Cloud Storage and Google Cloud SQL. It also provides support for deploying ASP.NET 4.x applications to Google Compute Engine.

It is only the first release and we have some big plans for the future. You can see a lot of the short-term features we have planned by looking at the issues list on GitHub. Features like making Google APIs light up for the new .NET Core runtime and being able to deploy ASP.NET Core applications to Google App Engine or Google Container Engine will be huge. Stay tuned for a future blog post about how to run C# on the Google App Engine Flexible Environment, as well.

We’re just getting started

Hopefully you share my enthusiasm for Google’s ongoing development in .NET tooling. Not only is it exciting to be able to take advantage of Google Cloud Platform technologies, but also to see a future where .NET Core enables C# code to run cross-platform.

But to be successful we need your help.

If you have questions, be sure to ask on Stack Overflow (e.g. the google-cloud-visualstudio or google-cloud-powershell tags). If you have problems, please open issues on GitHub (libraries, VS, PowerShell). If you still have trouble, participate in the google-cloud-dev group.

The team here at Google is thrilled to be working with the .NET stack and your feedback is immensely helpful in prioritizing things.

By Chris Smith, Software Engineer

A sizzling open source release for the Australian Election site

Wednesday, September 28, 2016

Originally posted on the Geo Developers Blog

One of the best parts of my job at Google is 20 percent time. While I was hired to help developers use Google’s APIs, I value the time I'm afforded to be a student myself—to learn new technologies and solve real-world problems. A few weeks prior to the recent Australian election an opportunity presented itself. A small team in Sydney set their sights on helping the 15 million voters stay informed of how to participate, track real-time results, and (of course) find the closest election sausage sizzle!


Our team of designers, engineers and product managers didn't have an immediate sense of how to attack the problem. What we did have was the power of Google’s APIs, programming languages, and Cloud hosting with Firebase and Google Cloud Platform.



The result is a mish-mash of some technologies we'd been wanting to learn more about. We're open sourcing the ausvotes.withgoogle.com repository to give developers a sense of what happens when you get a handful of engineers in a room with a clear goal and an immovable deadline.

The Election AU 2016 repository uses:

  • Go from Google App Engine instances to serve the appropriate level of detail for users' viewport queries from memory at very low latency, and
  • Dart to render the live result maps on top of the Google Maps JavaScript API using Firebase Realtime Database updates.

A product is only as good as the attention and usage it receives. Our team was really happy with the results of our work:

  • 406,000 people used our maps, including 217,000 on election day.
  • We had 139 stories in the media.
  • Our map was also embedded in major news websites, such as Sky News.

Complete setup and installation instructions are available in the GitHub README.

By Brett Morgan, Developer Programs Engineer

Making Rubyists more comfortable on Google Cloud Platform

Friday, August 5, 2016

One of the many open source efforts at Google is the Google Cloud Platform (GCP) native libraries for our most popular languages. One of these libraries is the gcloud-ruby project on GitHub, which is released as the gcloud gem on rubygems.org. There are several gems for accessing Google Cloud Platform resources from Ruby, but this gem is different: it is hand-coded by Rubyists for Rubyists, and that has some distinct advantages.

Many of us have had experience working with libraries that are clearly ported from another language. I usually talk about them as Ruby with a Java accent or Python with a Perl accent. Generally they work just fine but you can run into some low level friction — sometimes things just don’t feel right. Native gems written by members of the community solve this problem. In the case of gcloud-ruby there are some really concrete examples.

First, gcloud-ruby uses syntax that is similar to other popular Ruby libraries. For example, the syntax for specifying a table schema in BigQuery (Google Cloud Platform's very large scale data warehouse) looks like this:

table = dataset.create_table "baby_names" do |schema|
  schema.string "name"
  schema.string "sex"
  schema.integer "number"
end

Creating the same table in popular Ruby on Rails looks like this:

create_table "baby_names" do |schema|
  schema.string "name"
  schema.string "sex"
  schema.integer "number
end

The two are nearly identical. That makes getting up to speed on BigQuery easier and quicker than it would be if the Ruby library didn't use patterns that are already known to the majority of Rubyists. 

Another way the gcloud-ruby library meets the community where it is: it embraces the community's fondness for doing things several different ways. In Ruby, there are often several correct ways to do a given task.

The gcloud-ruby library is no exception. There are a few different ways to authenticate and to create the objects you use to interact with the API. Ruby also has many common methods that have aliases; in the standard library, Enumerable#map and Enumerable#collect actually run the same code path, for example. In gcloud-ruby, the vision API uses aliases. Google Cloud Vision provides a single endpoint: annotate. gcloud-ruby has an annotate method but also aliases it as mark and detect if those make more sense to you (detect is the method that makes the most sense to my brain, so that's the one I use). By providing a couple of different aliases, the library makes it more likely that the first thing you try will work. This speeds up development time and makes learning the library easier.

The last way the gcloud-ruby gem makes Rubyists feel at home is by having comprehensive tests, a common value and popular discussion topic in the Ruby community. gcloud-ruby uses minitest-spec for testing, a popular choice that most Rubyists can easily read. When I was learning the storage API, I looked at the tests for storage to learn how to use the library. There is outstanding documentation as well for those who prefer learning that way, but I'm so used to looking at tests that I really appreciated that gcloud-ruby has well-written and easily accessible tests.

Above are three examples of how hand-coded libraries from within the community can improve the user experience when learning to use tools. Of course, doing all the development on GitHub in the open also helps. Users can easily see what bugs people have run into and what features are next up in the production queue. And if a user has a feature request (like the previously mentioned Cloud Vision support) they can create a GitHub issue.

If you’re a Rubyist, give gcloud-ruby a shot and let us know what you think!

By Aja Hammerly, Developer Advocate

An update on container support on Google Cloud Platform

Wednesday, June 11, 2014

Cross posted from the Google Cloud Platform Blog

Everything at Google, from Search to Gmail, is packaged and run in a Linux container. Each week we launch more than 2 billion container instances across our global data centers, and the power of containers has enabled both more reliable services and higher, more-efficient scalability. Now we’re taking another step toward making those capabilities available to developers everywhere.

Support for Docker images in Google App Engine
Last month we released improved Docker image support in Compute Engine. Today, we’re building on that work and adding a set of extensions that allow App Engine developers to build and deploy Docker images in Managed VMs. Developers can use these extensions to easily access the large and growing library of Docker images, and the Docker community can easily deploy containers into a completely managed environment with access to services such as Cloud Datastore. If you want to try it, sign up via this form.

Kubernetes—an open source container manager
Based on our experience running Linux containers within Google, we know how important it is to be able to efficiently schedule containers at Internet scale. We use Omega within Google, but many developers have more modest needs. To that end, we’re announcing Kubernetes, a lean yet powerful open-source container manager that deploys containers into a fleet of machines, provides health management and replication capabilities, and makes it easy for containers to connect to one another and the outside world. (For the curious, Kubernetes (koo-ber-nay'-tace) is Greek for “helmsman” of a ship.) Kubernetes was developed from the outset to be an extensible, community-supported project. Take a look at the source and documentation on GitHub and let us know what you think via our mailing list. We’ll continue to build out the feature set, while collaborating with the Docker community to incorporate the best ideas from Kubernetes into Docker.

Container stack improvements
We’ve released an open-source tool called cAdvisor that enables fine-grained statistics on resource usage for containers. It tracks both instantaneous and historical stats for a wide variety of resources, handles nested containers, and supports both LMCTFY and Docker’s libcontainer. It’s written in Go, with the hope that we can move some of these tools into libcontainer directly if people find them useful (as we have).

A commitment to open container standards
Finally, I'm happy that I've been nominated to Docker's Governance Committee to continue working with the Docker community toward better open container standards. Containers have been a great building block for Google and by working together we can make them the key building block for “cloud native” applications.

-Posted by Eric Brewer, VP of Infrastructure

A better way to explore and learn on GitHub

Monday, January 13, 2014

Cross posted from the Google Cloud Platform Blog

Almost one year ago, Google Cloud Platform launched our GitHub organization, with repositories ranging from tutorials to samples to utilities. This is where developers could find all resources relating to the platform, and get started developing quickly. We started with 36 repositories, with lofty plans to add more over time in response to requests from you, our developers. Many product releases, feature launches, and one logo redesign later, we are now up to 123 repositories illustrating how to use all parts of our platform!

Despite some clever naming schemes, it was becoming difficult to find exactly the code that you wanted amongst all of our repositories. Idly browsing through over 100 options wasn’t productive. The repository names gave you an idea of what stacks they used, but not what problems they solved.

Today, we are making it easier to browse our repositories and search for sample code with our landing page at googlecloudplatform.github.io. Whether you want to find all Compute Engine resources, locate all samples that are available in your particular stack, or find examples that fit your particular area of interest, you can find it with the new GitHub page. We’ll be rotating the repositories in the featured section, so make sure to wander that way from time to time.

We are very committed to open source at Google Cloud Platform. Please let us know what kinds of samples and tools you’d like to see from the team. We’re looking forward to many more commits ahead!

By Julia Ferraioli, Developer Advocate

Get coding faster thanks to little green buttons

Monday, June 24, 2013


Cross-posted from the Google Cloud Platform Blog

On the Google Cloud Platform team we're always looking for ways to make developers' lives easier, so you can focus on building interesting applications instead of worrying about managing infrastructure. We also want you to be as productive as possible when you're busy writing code. We provide an SDK which offers access to production APIs, in a way that's compatible with a local development environment.
 

But sometimes you just want to dip your toes in the water, and the prospect of setting up a local development environment seems daunting. What if you just want to try out some sample code? What if you want to see how the actual production APIs will behave? What if you could share a code snippet with a colleague and your entire environment came along for the ride? What if there was a playground where you could try out APIs, all from within your web browser?

We asked ourselves these same questions and decided to try an experiment: we created a Cloud Playground, a place for you to quickly test production APIs you're interested in using. Note: the Cloud Playground is currently limited to Python 2.7 App Engine apps.

To get you started, we added little green buttons to our getting started documentation, which take you straight to the Cloud Playground where you can edit and run the guestbook sample code as it appears in the documentation. In addition, the main Cloud Playground page offers easy access to many more samples. There's even an option to clone other open source App Engine Python 2.7 template projects from Github.

How does it work? The Cloud Playground is itself an open source project and consists of two modules:
  • mimic is a regular Python App Engine app, which serves as a development server (similar to the App Engine SDK "dev_appserver"), but which runs in the production App Engine environment, providing you access to the production APIs and environment while still offering a quick and easy way to test out bits of code.
  • bliss is a trivial browser-based code editor which lets you edit code in the mimic virtual file system (backed by the App Engine datastore), providing you with a user interface so you can see what the mimic app can do for you.
We previously blogged about DevTable which also uses mimic to speed up refresh cycles for their App Engine developers.

We look forward to seeing what you're able to build.

By Fred Sauer, Developer Advocate

Find sample code and more for Google Cloud Platform, now on GitHub

Tuesday, January 22, 2013


Today, we’re announcing that you can now find Google Cloud Platform on GitHub! The GitHub organization for the Google Cloud Platform is your destination for samples and tools relating to App Engine, BigQuery, Compute Engine, Cloud SQL, and Cloud Storage. Most existing Google Cloud Platform open source tools will be migrated to the organization over time. You can quickly get your app running by forking any of our repositories and diving into the code.

Currently, the GitHub organization for the Google Cloud Platform has 36 public repositories, some of which are still undergoing their initial code reviews, which you can follow on the repo. The Google Cloud Platform Developer Relations Team will be using GitHub to maintain our starter projects, which show how to get started with our APIs using different stacks. We will continue to add repositories that illustrate solutions, such as the classic guest book app on Google App Engine. For good measure, you will also see some tools that will make your life easier, such as an OAuth 2.0 helper.

From getting started with Python on Google Cloud Storage to monitoring your Google Compute Engine instances with App Engine, our GitHub organization is home to it all.

Trick of the trade: to find samples relating to a specific platform, try filtering on the name in the “Find a Repository” text field.

We set up this organization not only to give you an easy way to find and follow our samples, but also to give you a way to get involved and start hacking alongside us. We’ll be monitoring our repositories for any reported issues as well as for pull requests. If you’re interested in seeing what a code review looks like for Google’s open source code, you can follow along with the discussion happening right on the commits.

Let us know about your suggestions for samples. We look forward to seeing what you create!

By Julia Ferraioli, Developer Advocate, Google Compute Engine