Thursday, October 29, 2015

New Course on Developing Android Apps for Google Cast and Android TV

Posted by Josh Gordon, Developer Advocate

Go where your users are: the living room! Google Cast lets users stream their favorite apps from Android, iOS and the Web right onto their TV. Android TV turns a TV into an Android device, only bigger!

We've partnered with Udacity to launch a new online course - Google Cast and Android TV Development. This course teaches you how to extend your existing Android app to work with these technologies. It’s packed with practical advice, code snippets, and deep dives into sample code.

You can take advantage of both, without having to rewrite your app. Android TV is just Android on a new form factor, and the Leanback library makes it easy to add a big-screen, cinematic UI to your app. Google Cast comes with great samples and guides to help you get started. Google also provides the Cast Companion Library, which makes it faster and easier to add Cast support to your Android app.

This class is part of our larger series on Ubiquitous Computing across other Google platforms, including Android Wear and Android Auto. Designed as short, standalone courses, you can take any on its own, or take them all!

Get started now and try it out at no cost. Your users are waiting!


Wednesday, October 28, 2015

Learn top tips from Kongregate to achieve success with Store Listing Experiments

Posted by Lily Sheringham, Developer Marketing at Google Play

Editor’s note: This is another post in our series featuring tips from developers finding success on Google Play. We recently spoke to games developer Kongregate, to find out how they use Store Listing Experiments successfully. - Ed.

With Store Listing Experiments in the Google Play Developer Console, you can conduct A/B tests on the content of your store listing pages. Test versions of the text and graphics to see which ones perform best, based on install data.

Kongregate increases installs by 45 percent with Store Listing Experiments

Founded in 2006 by brother and sister Jim and Emily Greer, Kongregate is a leading mobile games publisher specializing in free-to-play games. Kongregate used Store Listing Experiments to test new content for the Global Assault listing page on Google Play. By testing with different audience sizes, they found a new icon that drove 92 percent more installs, while variant screenshots achieved an impressive 14 percent improvement. By picking the icons, screenshots, and text descriptions that resonated most with users, Kongregate saw installs increase by 45 percent on the improved page.

Kongregate’s Mike Gordon, VP of Publishing; Peter Eykemans, Senior Producer; and Tammy Levy, Director of Product for Mobile Games, talk about how to successfully optimise mobile game listings with Store Listing Experiments.



Kongregate’s tips for success with Store Listing Experiments

Jeff Gurian, Sr. Director of Marketing at Kongregate, also shares his do’s and don’ts on how to use experiments to convert more of your visitors, thereby increasing installs. Check them out below:

Do’s

  • Do start by testing your game’s icon. Icons can have the greatest impact (positive or negative) on installs — so test early!
  • Do have a question or objective in mind when designing an experiment. For example, does artwork visualizing gameplay drive more installs than artwork that doesn’t?
  • Do run experiments long enough to achieve statistical significance. How long it takes to get a result can vary due to changes in traffic sources, location of users, and other factors during testing.
  • Do pay attention to the banner, which tells you if your experiment is still “in progress.” When it has collected enough data, the banner will clearly tell you which variant won or if it was a tie.

Don’ts

  • Don’t test too many variables at once. It makes it harder to determine what drove results. The more variables you test, the more installs (and time) you’ll need to identify a winner.
  • Don’t test artwork only. Also test screenshot ordering, videos, and text to find what combinations increase installs.
  • Don’t target too small an audience with your experiment variants. The more users you expose to your variants, the more data you collect, and the faster you get results!
  • Don’t interpret a test where the control attribute performs better than the variants as a waste. You can still learn valuable lessons from what “didn’t work.” Iterate and try again!

Learn more about how Kongregate optimized their Play Store listing with Store Listing Experiments. Learn more about Google Play products and best practices to help you grow your business globally.

Tuesday, October 27, 2015

Introducing a New Course on Developing Android Apps for Auto

Posted by Wayne Piekarski, Developer Advocate

Android Auto brings the Android platform to the car in a way that’s optimized for the driving experience, allowing the user to keep their hands on the wheel, and their eyes on the road. To learn how to extend your existing media and messaging apps to work within a car, we collaborated with Udacity to introduce a new course on Ubiquitous Computing with Android Auto.



Designed by Developer Advocates from Google, the course shows you how to take advantage of your existing Android knowledge to work on this new platform. The best part is that Android Auto is based on extensions to the regular Android framework, so you don't need to rewrite your existing apps to support it. You'll learn how to implement messaging apps by using notification extensions. You'll also learn how audio players just work on Android Auto when you use the Android media APIs. In both cases, we work through some simple Android samples, and then show what changes are needed to extend them for Android Auto. Finally, we show a complete music-playing sample, and how it works across other platforms like Android Wear.

If you have an interest in Android-based messaging or media apps, then you need to learn about Android Auto. Users want to be able to take their experience to other places, such as their cars, not just their phones. Having Auto support will allow you to differentiate your app, and give users another reason to try it.

This class is part of our larger series on Ubiquitous Computing across Google platforms, such as Android Wear, Android Auto, Android TV, and Google Cast. Designed as short, standalone courses, you can take any course on its own, or take them all! The Android Auto platform is a great opportunity to add functionality that will distinguish your app from others. This Udacity course will get you up to speed quickly with everything you need to get started.

Get started now and try it out at no cost. Your users are waiting!

Monday, October 26, 2015

New in Android Samples: Authenticating to remote servers using the Fingerprint API

Posted by Takeshi Hagikura and Yuichi Araki, Developer Programs Engineers

As we announced in the previous blog post, Android 6.0 Marshmallow is now publicly available to users. Along the way, we’ve been updating our samples collection to highlight exciting new features available to developers.

This week, we’re releasing AsymmetricFingerprintDialog, a new sample demonstrating how to securely integrate with compatible fingerprint readers (like Nexus Imprint) in a client/server environment.

Let’s take a closer look at how this sample works, and talk about how it complements the FingerprintDialog sample we released earlier during the public preview.

Symmetric vs Asymmetric Keys

The Android Fingerprint API protects user privacy by keeping users’ fingerprint features carefully contained within secure hardware on the device. This guards against malicious actors, ensuring that users can safely use their fingerprint, even in untrusted applications.

Android also provides protection for application developers, providing assurances that a user’s fingerprint has been positively identified before providing access to secure data or resources. This protects against tampered applications, providing cryptographic-level security for both offline data and online interactions.

When a user activates their fingerprint reader, they’re unlocking a hardware-backed cryptographic vault. As a developer, you can choose what type of key material is stored in that vault, depending on the needs of your application:

  • Symmetric keys: Similar to a password, symmetric keys let you encrypt local data. This is a good choice for securing access to databases or offline files.
  • Asymmetric keys: Provide a key pair, consisting of a public key and a private key. The public key can be safely sent across the internet and stored on a remote server. The private key can later be used to sign data, such that the signature can be verified using the public key. Signed data cannot be tampered with, and it positively identifies the original author of that data. In this way, asymmetric keys can be used for network login and authenticating online transactions. Similarly, the public key can be used to encrypt data, such that the data can only be decrypted with the private key.

This sample demonstrates how to use an asymmetric key, in the context of authenticating an online purchase. If you’re curious about using symmetric keys instead, take a look at the FingerprintDialog sample that was published earlier.
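
For comparison, here is a minimal sketch of the symmetric case (the key alias is an arbitrary placeholder, not a name from the sample): generating a hardware-backed AES key that likewise requires fingerprint authentication for each use:

KeyGenerator keyGenerator = KeyGenerator.getInstance(
        KeyProperties.KEY_ALGORITHM_AES, "AndroidKeyStore");
keyGenerator.init(new KeyGenParameterSpec.Builder("my_symmetric_key",
        KeyProperties.PURPOSE_ENCRYPT | KeyProperties.PURPOSE_DECRYPT)
        .setBlockModes(KeyProperties.BLOCK_MODE_CBC)
        .setEncryptionPaddings(KeyProperties.ENCRYPTION_PADDING_PKCS7)
        .setUserAuthenticationRequired(true)
        .build());
SecretKey secretKey = keyGenerator.generateKey();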

Here is a visual explanation of how the Android app, the user, and the backend fit together using the asymmetric key flow:

1. Setting Up: Creating an asymmetric keypair

First you need to create an asymmetric key pair as follows:

KeyPairGenerator keyPairGenerator = KeyPairGenerator.getInstance(
        KeyProperties.KEY_ALGORITHM_EC, "AndroidKeyStore");
keyPairGenerator.initialize(
        new KeyGenParameterSpec.Builder(KEY_NAME,
                KeyProperties.PURPOSE_SIGN)
                .setDigests(KeyProperties.DIGEST_SHA256)
                .setAlgorithmParameterSpec(new ECGenParameterSpec("secp256r1"))
                .setUserAuthenticationRequired(true)
                .build());
keyPairGenerator.generateKeyPair();

Note that .setUserAuthenticationRequired(true) requires that the user authenticate with a registered fingerprint to authorize every use of the private key.

Then you can retrieve the private and public keys from the keystore as follows:

KeyStore keyStore = KeyStore.getInstance("AndroidKeyStore");
keyStore.load(null);
PublicKey publicKey =
        keyStore.getCertificate(KEY_NAME).getPublicKey();
PrivateKey privateKey = (PrivateKey) keyStore.getKey(KEY_NAME, null);

2. Registering: Enrolling the public key with your server

Second, you need to transmit the public key to your backend so that in the future the backend can verify that transactions were authorized by the user (i.e. signed by the private key corresponding to this public key). This sample uses a fake backend implementation for reference, so it only mimics the transmission of the public key; in real life you need to send the public key over the network.

boolean enroll(String userId, String password, PublicKey publicKey);
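
How you serialize the key for that network call is up to you. One straightforward approach (our suggestion, not something the sample prescribes) is to Base64-encode the key’s X.509 representation:

String encodedPublicKey =
        Base64.encodeToString(publicKey.getEncoded(), Base64.NO_WRAP);
// The backend can rebuild the key from these bytes using
// X509EncodedKeySpec and KeyFactory.getInstance("EC")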

3. Let’s Go: Signing transactions with a fingerprint

To allow the user to authenticate the transaction, e.g. to purchase an item, prompt the user to touch the fingerprint sensor.

Then start listening for a fingerprint as follows:

Signature signature = Signature.getInstance("SHA256withECDSA");
KeyStore keyStore = KeyStore.getInstance("AndroidKeyStore");
keyStore.load(null);
PrivateKey key = (PrivateKey) keyStore.getKey(KEY_NAME, null);
signature.initSign(key);
FingerprintManager.CryptoObject cryptoObject =
        new FingerprintManager.CryptoObject(signature);

CancellationSignal cancellationSignal = new CancellationSignal();
FingerprintManager fingerprintManager =
        context.getSystemService(FingerprintManager.class);
fingerprintManager.authenticate(cryptoObject, cancellationSignal, 0, this, null);
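
In the call above, this refers to a FingerprintManager.AuthenticationCallback. As a simplified sketch (the method body here is ours, not the sample’s exact code), a successful authentication hands you back the Signature, now authorized for a single use:

@Override
public void onAuthenticationSucceeded(FingerprintManager.AuthenticationResult result) {
    // The Signature inside the CryptoObject can now sign exactly one payload
    Signature signature = result.getCryptoObject().getSignature();
    // Sign and send the transaction, as shown in step 4 below
}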

4. Finishing Up: Sending the data to your backend and verifying

After successful authentication, send the signed piece of data (in this sample, the contents of a purchase transaction) to the backend, like so:

Signature signature = cryptoObject.getSignature();
// Include a client nonce in the transaction so that the nonce is also signed
// by the private key; the backend can then reject any transaction whose nonce
// it has already seen, preventing replay attacks.
Transaction transaction = new Transaction("user", 1, new SecureRandom().nextLong());
try {
    signature.update(transaction.toByteArray());
    byte[] sigBytes = signature.sign();
    // Send the transaction and its signature to the fake backend
    if (mStoreBackend.verify(transaction, sigBytes)) {
        mActivity.onPurchased(sigBytes);
        dismiss();
    } else {
        mActivity.onPurchaseFailed();
        dismiss();
    }
} catch (SignatureException e) {
    throw new RuntimeException(e);
}

Last, verify the signed data in the backend using the public key enrolled in step 2:

@Override
public boolean verify(Transaction transaction, byte[] transactionSignature) {
    try {
        if (mReceivedTransactions.contains(transaction)) {
            // Verify that this transaction (including its client nonce) hasn't
            // been seen before, so attackers can't replay an old one.
            return false;
        }
        mReceivedTransactions.add(transaction);
        PublicKey publicKey = mPublicKeys.get(transaction.getUserId());
        Signature verificationFunction = Signature.getInstance("SHA256withECDSA");
        verificationFunction.initVerify(publicKey);
        verificationFunction.update(transaction.toByteArray());
        if (verificationFunction.verify(transactionSignature)) {
            // Transaction is verified with the public key associated with the user
            // Do some post purchase processing in the server
            return true;
        }
    } catch (NoSuchAlgorithmException | InvalidKeyException | SignatureException e) {
        // In the real world, you would want to send an error message to the user
    }
    return false;
}

At this point, you can assume that the user is correctly authenticated with their fingerprint because, as noted in step 1, user authentication is required before every use of the private key. Let’s do the post-purchase processing in the backend and tell the user that the transaction was successful!

Other updated samples

We also have a couple of Marshmallow-related updates to our Android for Work samples this month for you to peruse:

  • AppRestrictionEnforcer and AppRestrictionSchema: These samples were originally released when the App Restriction feature was introduced as part of the Android for Work API in Android 5.0 Lollipop. AppRestrictionEnforcer demonstrates how to set restrictions on other apps as a profile owner, while AppRestrictionSchema defines restrictions that AppRestrictionEnforcer can control. This update shows how to use the two additional restriction types introduced in Android 6.0; a rough sketch of the enforcement side follows below.
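
As a loose illustration (the admin receiver, package name, and restriction key below are hypothetical placeholders, not the sample’s actual names), a profile owner applies restrictions to another app through DevicePolicyManager:

DevicePolicyManager dpm =
        (DevicePolicyManager) context.getSystemService(Context.DEVICE_POLICY_SERVICE);
ComponentName admin = new ComponentName(context, MyAdminReceiver.class);

// The keys must match the restrictions the target app declares
// in its res/xml restrictions file
Bundle restrictions = new Bundle();
restrictions.putBoolean("can_share", false);
dpm.setApplicationRestrictions(admin, "com.example.restrictedapp", restrictions);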

We hope you enjoy the updated samples. If you have any questions regarding the samples, please visit us on our GitHub page and file issues or send us pull requests.

Friday, October 23, 2015

Coffee with a Googler: Bullet Time with the Cloud Spin Team

Posted by Laurence Moroney, Developer Advocate

As part of Google Cloud Platform’s Next roadshow, the team decided to make a demo that anybody could get involved in, and the concept of Google Cloud Spin was born. The idea, influenced by the “bullet time” scenes from The Matrix, was simple -- build a rig of cameras, have them take a number of pictures, and stitch them together into an animated GIF that looks like this:

Coffee with a Googler caught up with the team responsible for this demo to talk about how they built it, what technical challenges they faced, and how they used the cloud to manage the cameras and create an animated GIF like the one above.

There was so much great information for developers in the exploration of this project that we’ve split our interview into two episodes. The first is with Ray Tsang, Developer Advocate, who tells us about the ins and outs of setting up a number of phones to take pictures to create a spin. From the original prototype (lots of Googlers holding up selfie sticks) to the final version (shooting videos synchronized on an audio signal, controlled by events in Firebase), it’s a fascinating conversation. Part Two, covering how the right frame was extracted from each video and how the frames were assembled into a cloud spin, is coming soon!

To learn more about this project, visit the Google Cloud Platform blog.

Virtual currency: Sources and Sinks

Posted by Damien Mabin, Developer Advocate

More and more mobile games base their economic model on virtual currencies and free-to-play mechanics, yet there are plenty of pitfalls to be aware of while developing your game. One of these pitfalls is having an unbalanced economy. Sources and Sinks, a handy feature included in the Play Games Services toolset, is specifically designed to help measure the balance of your game’s virtual economy.

It helps you visualise, in one simple diagram, the state of your current in-game economy. Diagram 1 (below) plots time on the x-axis and the amount of virtual currency on the y-axis, showing two curves:

  • One showing the amount of virtual currency earned by players (orange line)
  • The other showing the amount spent by players (green line)

Diagram 1: Poorly Monetizing Economy

What do the curves in the diagram tell us? In this case, that our game is likely not going to monetize well. Users are spending less currency than they are earning, resulting in a surplus. There is no sense of scarcity for the user, which may indicate that your players do not understand how they can spend currency, or do not see the value in doing so. It would be a good idea in this case to re-evaluate how much content is available to spend virtual currency on and how discoverable that content is to your users. Alternatively, you may want to consider decreasing the amount of currency that can be earned in game (inflation can be a bad thing). Ultimately, you want your curves to change as demonstrated in diagram 2 (below).

Diagram 2: Balancing Economy

That’s a lot better! Now your users are spending more than they earn… Wait! How is that possible? Two reasons. First, players are spending the stock of currency they accumulated before your changes. Second, an important point not to forget: these diagrams should not include the virtual currency that users buy through in-app purchases. If you wait a few more days, you should see the two curves converge a bit, with the delta between them being the amount of virtual currency users purchase through IAP:

Diagram 3: Stabilised Economy

With Play Games Services you can get this visualisation with two lines of code! It works on iOS and Android and doesn’t require the user to sign in to Play Games. What you will have in your Android or iOS app is something like this:
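
// A sketch of the Android side; the event IDs below are placeholders for the
// IDs you create in the Developer Console when defining your events.
Games.Events.increment(mGoogleApiClient, COINS_EARNED_EVENT_ID, amountEarned);
Games.Events.increment(mGoogleApiClient, COINS_SPENT_EVENT_ID, amountSpent);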

You can find more information about the integration here.

Once the client integration is done, you can go into the Google Play Developer Console to visualise the curves. Go into the “Game services” section and click “Player analytics -> Overview.”

Thursday, October 22, 2015

Google Developers teams up with General Assembly to launch Android Development Immersive training course

Posted by Peter Lubbers, Senior Program Manager, Google Developer Training

Today at the Big Android BBQ we announced that we have teamed up with General Assembly (GA), a global education institution transforming thinkers into creators, to create a new Android Development Immersive training course. This 12-week, full-time course will be offered beginning in January 2016 at GA’s New York campus and in February at GA’s San Francisco campus, and it will roll out to additional campuses over the course of the next year. It is the first in-person training program of its kind that Google Developers has designed and built.

The Google Developer Relations team worked with General Assembly to ensure the Android Development Immersive bootcamp provides developers with access to the best instructors and the latest and greatest hands-on material to create successful app experiences and businesses. To effectively reach over a billion Android users globally, it’s important for developers to build high-quality apps that are beautifully designed, performant, and delightful to use.

“We are constantly looking at the economy and job market for what skills are most in-demand. Demand for developers who can address this market and build new applications is tremendous,” said Jake Schwartz, co-founder and CEO, General Assembly. “Developing this course in partnership with Google Developers allows us to provide students with the most relevant skills, ensuring a reliable pipeline of talented developers ready to meet the urgent demand of companies in the Android ecosystem, a key component of GA’s education-to-employment model.”

Registration in the Android Development Immersive includes access to GA’s career preparation services and support, also known as Outcomes, which includes assistance in creating portfolio-ready projects, access to career development workshops and networking events, and coaching and support in the job search process. Through in-person hiring events, mock interviews, and GA’s online job search platform, graduates connect with GA’s hiring partners, a network of close to 2,000 employers globally.

One of these employers is Vice Media. "I'm really excited to see the candidates coming out of the GA Android course. The fact that they're working with both Google and potential employers to shape the curriculum around real-world problems will make a huge difference. Textbook learning is one thing, but classroom learning with practitioners is a level we have all been waiting for. In fact, Vice Media is going to be hiring an apprentice right out of this course," said Ben Jackson, Director of Mobile Apps for Vice Media.

Learn more and sign up here.

Wednesday, October 21, 2015

Get your bibs ready for Big Android BBQ!

Posted by Colt McAnlis, Senior Texas-Based Developer Advocate

We’re excited to be involved in the Big Android BBQ (BABBQ) this year because of one thing: passion! Just like BBQ, Android development is much better when passionate people obsess over it. This year’s event is no exception.

Take +Ian Lake for example. His passion for Android development runs so deep, he was willing to chug a whole bottle of BBQ sauce just so we’d let him represent Android Development Patterns at the conference this year. Or even +Chet Haase, who suffered a humiliating defeat during the Speechless session last year (at the hands of this charming bald guy). He loves BBQ so much that he’s willing to come back and lose again this year, just so he can convince you all that #perfmatters. Let’s not forget +Reto Meier. That mustache was stuck on his face for days. DAYS! All because he loves Android development so much.

When you see passion like this, you just have to be part of it. Which is why this year’s BABBQ is jam-packed with awesome Google Developers content. We’re going to be talking about performance, new APIs in Android 6.0 Marshmallow, NDK tricks, and Wear optimization. We even have a new set of code labs so that folks can get their hands on new code to use in their apps.

Finally, we haven’t even mentioned our BABBQ attendees, yet. We’re talking about people who are so passionate about an Android development conference that they are willing to travel to Texas to be a part of it!

If BBQ isn’t your thing, or you won’t be able to make the event in person, the Android Developers and Google Developers YouTube channels will be there in full force. We’ll be recording the sessions and posting them to Twitter and Google+ throughout the event.

So, whether you are planning to attend in person or watch online, we want you to remain passionate about your Android development.

New Courses -- Developing Watch Apps for Android Wear

Posted by Wayne Piekarski, Developer Advocate

We recently released a new Udacity course on Ubiquitous Computing with Android Wear, built as a collaboration between Udacity and Google. Android Wear watches allow users to get access to their information quickly, with just a glance, using an always-on display. By taking this course, you will learn everything you need to know to reach your users with notifications, custom watch faces, and even full apps.

Designed by Developer Advocates from Google, the course is a practical approach to getting started with Android Wear. It takes you through code snippets and deep dives into sample code, showing you how easy it is to extend your existing Android apps to work on Android Wear. It also covers how to design user interfaces and great experiences for this unique wearable platform, which is important because the interface of the watch needs to be glanceable and unobtrusive for all-day use.

This class is part of our larger series on Ubiquitous Computing across Google platforms, such as Android Wear, Android Auto, Android TV, and Google Cast. Designed as short, standalone courses, you can take any course on its own, or take them all! The Android Wear platform is a great opportunity to add functionality that will distinguish your app from others, and this Udacity course will get you up to speed quickly and easily.

Get started now and try it out at no cost. Your users are waiting!

Tuesday, October 20, 2015

Android Developer Story: RogerVoice takes advantage of beta testing to launch its app on Android first

Posted by Lily Sheringham, Google Play team

RogerVoice is an app which enables people who are hearing impaired to make phone calls through voice recognition and text captions. Founded by Olivier Jeannel, who grew up with more than 80 percent hearing loss, the company successfully raised $35,000 through Kickstarter to get off the ground. Today the team publicly released the app on the Android platform first.

The team behind RogerVoice talk about how material design and beta testing helped them create an interface which is accessible and intuitive to navigate for users.



Learn more about how RogerVoice built its app with the help of Google Play features:

  • Material Design: How Material Design helps you create beautiful, engaging apps.
  • Beta testing: Learn more about using beta testing on Google Play for your app.
  • Developer Console: Make the most of the Google Play Developer Console to publish your apps and grow and engage your user base.

Monday, October 19, 2015

Introducing the Tech Entrepreneur Nanodegree

Originally posted on Google Developers Blog

Posted by Shanea King-Roberson, Program Manager

As a developer, writing your app is important. But even more important is getting it into the hands of users. Ideally millions of users. To that end, you can now learn what it takes to design, validate, prototype, monetize, and market app ideas from the ground up and grow them into a scalable business with the new Tech Entrepreneur Nanodegree.

Designed by Google in partnership with Udacity, the Tech Entrepreneur Nanodegree takes 4-7 months to complete. We have teamed up with some of the most successful thought leaders in this space to provide students with a unique and battle-tested perspective. You’ll meet Geoffrey Moore, author of “Crossing the Chasm”; Pete Koomen, co-founder of Optimizely; Aaron Harris and Kevin Hale, Partners at Y Combinator; Nir Eyal, author of the book “Hooked: How to Build Habit Forming Products” and co-founder of Product Hunt; Steve Chen, co-founder of YouTube; and many more.

All of the content that makes up this Nanodegree is available online for free at udacity.com/google. In addition, Udacity provides paid services, including access to coaches, guidance on your project, help staying on track, career counseling, and a certificate when you complete the Nanodegree.

The Tech Entrepreneur offering will consist of the following courses:

  • Product Design: Learn Google’s Design Sprint methodology, Ideation & Validation, UI/UX design and gathering the right metrics.
  • Prototyping: Experiment with rapid low- and high-fidelity prototyping on mobile and the web using online tools.
  • Monetization: Learn how to monetize your app and how to set up an effective payment funnel.
  • App Marketing: Understand your market, analyze competition, position your product, prepare for launch, acquire customers and learn growth hacks.
  • How to get your startup started: Find out whether you really need venture capital funding, evaluate build vs. buy, and learn simple ways to monitor and maintain your startup business effectively.

Pitch your ideas in front of Venture Capitalists

Upon completion, students will receive a joint certificate from Udacity and Google. The top graduates will also be invited to an exclusive pitch event, where they will have the opportunity to pitch their final product to venture capitalists at Google.

Friday, October 16, 2015

Game Performance: Vertex Array Objects

Posted by Shanee Nishry

Previously, we showed how you can use vertex layout qualifiers to increase the performance and determinism of your OpenGL application. In this post, we’ll show another useful technique that will help you increase performance and write cleaner code when drawing objects.

Binding the vertex buffer

Before drawing onto the screen, you need to bind your vertex data (e.g. positions, normals, UVs) to the corresponding vertex shader attributes. To do that, you need to bind the vertex buffer, enable the generic vertex attribute, and use glVertexAttribPointer to describe the layout of the buffer.

Therefore, a draw call might look like this:

const GLuint ATTRIBUTE_LOCATION_POSITIONS   = 0;
const GLuint ATTRIBUTE_LOCATION_TEXTUREUV = 1;
const GLuint ATTRIBUTE_LOCATION_NORMALS     = 2;

// Bind shader program, uniforms and textures
// ...

// Bind the vertex buffer
glBindBuffer( GL_ARRAY_BUFFER, vertex_buffer_object );

// Set the vertex attributes
glEnableVertexAttribArray( ATTRIBUTE_LOCATION_POSITIONS );
glVertexAttribPointer( ATTRIBUTE_LOCATION_POSITIONS, 3, GL_FLOAT, GL_FALSE, 32, (const GLvoid*) 0 );

glEnableVertexAttribArray( ATTRIBUTE_LOCATION_TEXTUREUV );
glVertexAttribPointer( ATTRIBUTE_LOCATION_TEXTUREUV, 2, GL_FLOAT, GL_FALSE, 32, (const GLvoid*) 12 );

glEnableVertexAttribArray( ATTRIBUTE_LOCATION_NORMALS );
glVertexAttribPointer( ATTRIBUTE_LOCATION_NORMALS, 3, GL_FLOAT, GL_FALSE, 32, (const GLvoid*) 20 );

// Draw elements
glDrawElements( GL_TRIANGLES, count, GL_UNSIGNED_SHORT, 0 );

There are several reasons why we might not like this code very much. The first is that we need to cache the layout of the vertex buffer to enable and disable the right attributes before drawing. This means we are either hard-coding or saving some amount of data for a nearly meaningless task.

The second reason is performance. Having to tell the drivers which attributes to individually activate is suboptimal. It would be best if we could precompile this information and deliver it all at once.

Lastly, and purely for aesthetics, our draw call is cluttered by long boilerplate code. It would be nice to get rid of it.


Did you know there is another reason why someone might frown on this code? The code is making use of layout qualifiers, which is great! But, since it’s already using OpenGL ES 3+, it would be even better if the code also used Geometry Instancing. By batching many instances of a mesh into a single draw call, you can really boost performance.
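
As a rough sketch of that idea using Android’s GLES30 bindings (the attribute location, element count, and instance count here are illustrative assumptions), a per-instance attribute plus glDrawElementsInstanced batches many copies of a mesh into one call:

// Advance this attribute once per instance instead of once per vertex,
// e.g. to feed a per-instance offset or transform column
GLES30.glVertexAttribDivisor(ATTRIBUTE_LOCATION_INSTANCE_OFFSET, 1);

// Draw instanceCount copies of the mesh in a single call
GLES30.glDrawElementsInstanced(GLES30.GL_TRIANGLES, count,
        GLES30.GL_UNSIGNED_SHORT, 0, instanceCount);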

So how can we improve on the above code?

Vertex Array Objects (VAOs)

If you are using OpenGL ES 3 or higher, you should use Vertex Array Objects (or "VAOs") to store your vertex attribute state.

Using a VAO allows the drivers to compile the vertex description format for repeated use. In addition, this frees you from having to cache the vertex format needed for glVertexAttribPointer, and it also results in less per-draw boilerplate code.

Creating Vertex Array Objects

The first thing you need to do is create your VAO. It is created once per mesh, alongside the vertex buffer object, like this:

const GLuint ATTRIBUTE_LOCATION_POSITIONS   = 0;
const GLuint ATTRIBUTE_LOCATION_TEXTUREUV = 1;
const GLuint ATTRIBUTE_LOCATION_NORMALS     = 2;

// Bind the vertex buffer object
glBindBuffer( GL_ARRAY_BUFFER, vertex_buffer_object );

// Create a VAO
GLuint vao;
glGenVertexArrays( 1, &vao );
glBindVertexArray( vao );
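// Note: if you draw with glDrawElements, bind your GL_ELEMENT_ARRAY_BUFFER
// here as well; the element array binding is recorded as part of the VAO state.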

// Set the vertex attributes as usual
glEnableVertexAttribArray( ATTRIBUTE_LOCATION_POSITIONS );
glVertexAttribPointer( ATTRIBUTE_LOCATION_POSITIONS, 3, GL_FLOAT, GL_FALSE, 32, (const GLvoid*) 0 );

glEnableVertexAttribArray( ATTRIBUTE_LOCATION_TEXTUREUV );
glVertexAttribPointer( ATTRIBUTE_LOCATION_TEXTUREUV, 2, GL_FLOAT, GL_FALSE, 32, (const GLvoid*) 12 );

glEnableVertexAttribArray( ATTRIBUTE_LOCATION_NORMALS );
glVertexAttribPointer( ATTRIBUTE_LOCATION_NORMALS, 3, GL_FLOAT, GL_FALSE, 32, (const GLvoid*) 20 );

// Unbind the VAO to avoid accidentally overwriting the state
// Skip this if you are confident your code will not do so
glBindVertexArray( 0 );

You have probably noticed that this is very similar to our previous code section except that we now have the addition of:

// Create a vertex array object
GLuint vao;
glGenVertexArrays( 1, &vao );
glBindVertexArray( vao );

These lines create and bind the VAO. All glEnableVertexAttribArray and glVertexAttribPointer calls after that are recorded in the currently bound VAO, and that greatly simplifies our per-draw procedure as all you need to do is use the newly created VAO.

Using the Vertex Array Object

The next time you want to draw using this mesh, all you need to do is bind the VAO using glBindVertexArray.

// Bind shader program, uniforms and textures
// ...

// Bind Vertex Array Object
glBindVertexArray( vao );

// Draw elements
glDrawElements( GL_TRIANGLES, count, GL_UNSIGNED_SHORT, 0 );

You no longer need to go through all the vertex attributes. This makes your code cleaner, makes per-frame calls shorter and more efficient, and allows the drivers to optimize the binding stage to increase performance.


Did you notice we are no longer calling glBindBuffer? This is because each glVertexAttribPointer call made while the VAO was bound captured the buffer that was bound at that moment; the VAO itself does not record glBindBuffer calls.

Want to learn more how to improve your game performance? Check out our Game Performance article series. If you are building on Android you might also be interested in the Android Performance Patterns.
