Writing Google Glass Apps With the GDK Sneak Peek

Since the beginning of time, humans have wanted to mount small computers on their heads. In 2013 we got our wish, as Google started releasing Glass to Explorers and developers alike. In November of 2013, Google released a “Sneak Peek” of the GDK, a way to write native code for Google Glass.

While the GDK is still under development and will probably change quite a bit before being finalized, those on the cutting edge will want to familiarize themselves with it.

In this quick start guide, I’ll go through the steps of writing your first GDK app, “Where Am I?” This app will ensure you are never lost. By simply shouting “Okay Glass, where am I?” the app will find your latitude and longitude, and try to find the nearest street address.


In order to start writing native apps for Google Glass, here’s what you’ll need:

  • Google Glass! There is currently no way to emulate the device and its sensors, so getting a real device is a must. You can request an invite directly from Google here, or search on Twitter for someone with a spare invite to share.
  • ADT – Android Development Tools is basically a plugin for Eclipse that lets you compile and load your native code onto Glass.
  • Source Code (optional) – I’ve included the source code for this app on Github here. If you want to follow along without copying and pasting a bunch of code, you can simply skip back and forth by running git commands that I’ll describe later on.

Getting Started

Google Glass runs on a special version of Android, the “Glass Development Kit Sneak Peek”: a tricked-out version of API 15. You’ll need to download this version with the Android SDK Manager within ADT (open it by clicking the icon of the Android with a blue box and an arrow on it).

Glass Sneak Peek

Once you’ve done that, you’ll need to create a new Android project with the correct SDK settings. Use API 15 for Minimum and Target SDK, and set “Compile With” to the GDK Sneak Peek.

GDK Settings

Congratulations! You’ve created a project that should run on Google Glass! If you try running the app on your device, it may or may not work, since it won’t have a voice trigger associated with it yet. We’ll add a voice trigger in the next step.

Adding a Voice Trigger

If you’re following along with git, you can type the following command to fill out all the code for this step:

Because Google Glass is such a new paradigm for user input, we’ll need to set up a voice trigger to allow the user to start our app. To do this, we’ll add some strings to our strings.xml and add an XML resource that describes the voice trigger. Finally, we’ll add an intent filter to our AndroidManifest.xml to register the voice command.

1. In res/values/strings.xml:

2. In res/xml/whereami.xml (you can name this file anything, as long as you refer to the same file in AndroidManifest.xml):

3. In AndroidManifest.xml:
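The original snippets for these three steps aren’t shown above, so here’s roughly what the GDK Sneak Peek’s voice-trigger wiring looks like. The string names and the trigger phrase are my placeholders, not necessarily what the original post used:

```xml
<!-- 1. res/values/strings.xml: the phrase shown in the "ok glass" menu. -->
<resources>
    <string name="app_name">Where Am I?</string>
    <string name="say_where_am_i">where am i</string>
</resources>

<!-- 2. res/xml/whereami.xml: the voice trigger resource. -->
<trigger keyword="@string/say_where_am_i" />

<!-- 3. AndroidManifest.xml: register the activity for the VOICE_TRIGGER
     intent and point the meta-data at the XML resource above. -->
<activity android:name=".MainActivity" android:label="@string/app_name">
    <intent-filter>
        <action android:name="com.google.android.glass.action.VOICE_TRIGGER" />
    </intent-filter>
    <meta-data
        android:name="com.google.android.glass.VoiceTrigger"
        android:resource="@xml/whereami" />
</activity>
```

The meta-data entry is what ties the activity to the trigger resource, so the `@xml/whereami` reference has to match whatever filename you chose in step 2.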

Now, if you run your code on Glass, you should be able to find the “Where Am I?” command in the “ok glass” menu!

Glass Voice Trigger

Using Glass Cards

We’ve gotten voice commands working, and that’s half the battle. Next we’ll learn how to use Cards, the native user interface for Glass. We’ll need to remove the existing style, then instantiate a Card object, set up its text and set it as the content view.

1. Remove the line in AndroidManifest.xml that reads:

2. Import Card from the GDK and create a new Card instance in onCreate():
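The original code isn’t preserved above, so here’s a sketch of both steps. The line to remove in step 1 is typically the `android:theme="@style/AppTheme"` attribute that the project wizard generates on the `<application>` tag, so Glass’s default theme applies. The Card text is a placeholder, and the method names follow the Sneak Peek API as I remember it, so treat the details as approximate:

```java
import android.app.Activity;
import android.os.Bundle;

import com.google.android.glass.app.Card;

public class MainActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        // Build a Card, give it some placeholder text, and use its
        // rendered view as the activity's content view.
        Card card = new Card(this);
        card.setText("Finding your location...");
        setContentView(card.toView());
    }
}
```

Keep a reference to the card as a field if you plan to update its text later, which is exactly what the location step below needs.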

If you run the app, you should see something like this:

Glass Card

Getting Location and Updating the Card

If you’re following along with git, you can type the following command to fill out all the code for this step:

Finally, we want to get Glass’s location and update the card with the appropriate info. Our lost Glass Explorer will be found and all will be well. We’ll do this by requesting location permissions, setting up a LocationManager, and handling its callbacks by updating the card with the latitude and longitude, plus a geocoded address if one is available.

1. Ask for permissions for user location and internet in AndroidManifest.xml

2. Create a LocationManager that uses certain criteria to figure out which location services it can use on Glass in MainActivity.java

3. Implement callbacks that will update the card when you get a location update in MainActivity.java
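The original code for these steps isn’t shown above, so here’s a condensed sketch. It assumes MainActivity implements LocationListener and keeps the Card from the previous step in a `card` field; the update intervals and message text are mine:

```java
// Step 1: in AndroidManifest.xml, before <application>:
//   <uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />
//   <uses-permission android:name="android.permission.INTERNET" />

// Step 2: Glass has no standalone GPS, so ask for every enabled provider
// that can deliver a fine fix and listen to all of them.
private void startLocationUpdates() {
    LocationManager manager =
            (LocationManager) getSystemService(Context.LOCATION_SERVICE);

    Criteria criteria = new Criteria();
    criteria.setAccuracy(Criteria.ACCURACY_FINE);

    for (String provider : manager.getProviders(criteria, /* enabledOnly */ true)) {
        manager.requestLocationUpdates(provider, 1000 /* ms */, 1 /* m */, this);
    }
}

// Step 3: update the card whenever a fix comes in.
@Override
public void onLocationChanged(Location location) {
    double lat = location.getLatitude();
    double lng = location.getLongitude();
    String text = String.format("Latitude: %.4f%nLongitude: %.4f", lat, lng);

    // Try to geocode a street address; this needs the INTERNET permission
    // and can fail, so fall back to the raw coordinates.
    try {
        Geocoder geocoder = new Geocoder(this);
        List<Address> addresses = geocoder.getFromLocation(lat, lng, 1);
        if (!addresses.isEmpty()) {
            text += "\n" + addresses.get(0).getAddressLine(0);
        }
    } catch (IOException e) {
        // No address available; keep the coordinates.
    }

    card.setText(text);
    setContentView(card.toView());
}
```

LocationListener also requires onStatusChanged(), onProviderEnabled(), and onProviderDisabled(); empty implementations are fine here.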

The app should now start getting location updates on the Glass (it can take a while to get a good fix) and update the Card once it finds you. The result should look something like this:

Glass Card Complete

Congratulations! You’ve finished your first Glass app. Future generations will wonder how they ever knew where they were without a head mounted computer printing out their exact location on the earth into a glowing prism directly above their field of vision at all times!

For more information about the GDK, check out Google’s official documentation! Glass on!


Google Glass Megapost (+ win an invite!)

Here’s a giant post of stuff I’ve been doing with Google Glass! I made a survey, wrote an app and am running a contest, so that’s all bundled into this blog post.


I got a Google Glass invite after signing up at Google’s official waitlist. After deciding I would go ahead and get one, I was paralyzed by the choice of which color to order. I made a survey that asked participants to rank the five colors in order of awesomeness. The results surprised me. I was leaning towards white (cotton) and sky blue, but everyone seemed to really like tangerine (orange).

glass survey

Tangerine wins by a landslide!

I was going to go against the popular vote and get the white one, but both the white and black ones were out of stock at the time, so I went for the fan favorite. I ended up getting the Tangerine device the day after I ordered it (next day shipping FTW).


My intention with getting the Google Glass is to try it out and see if I can make anything really cool with the additional sensors and the fact that the device is worn on a person’s head. I was really struggling with what to start working on, and then it hit me: a fart app. I was actually working on a Dragonball Z Scouter app, but that was going to be too complicated. I found that you can insert your own “Okay Glass” prompts to start an app, so I figured I’d make a hello world fart app.

The history of fart apps on advanced mobile operating systems is a long one, so I won’t go over all that. Suffice it to say I am proud to have written the first (as far as I know) native fart app for Google Glass. Here’s a video of me demonstrating it:

Having played around with the GDK a bit, and the device a bit more, I am still not sure if I will keep it. The $1500 price tag is one factor. The public acceptance of the device is another. My opinions on my previous Glass post still hold. If anything, the opportunity is definitely worth the $1500 “loan” to Google plus return shipping.


Before I even got my first Google Glass invite, I went to a GDG Android meeting in Ann Arbor. The organizer had some spare Glass invites, so I asked for one. I hadn’t heard back about it, but after ordering my Glass with my original invite, I got another one from the GDG group’s pool. I figured I’d give it away, and to drum up interest in the fart app, I’m going to post a tweet and ask users to retweet it for a chance to win.

Here are some rules:

  • Enter the contest by retweeting this tweet.
  • You can only enter once per Twitter account.
  • Don’t make multiple Twitter accounts to enter or you’ll be disqualified.
  • The contest is for the invite code. You still have to buy the Glass yourself.
  • The contest will end on Sunday, Dec 8th at 11:59PM ET.
  • The contest winner will be chosen at random and will get a DM with the code (you should also follow me to make sure I can DM you).
  • The invite code expires some time on Dec 10th, so you’ll have a couple of days to order it.
  • I’m not liable for anything. Like if you’re mad about getting an invite and want to sue me or something.
  • If no one enters I’ll probably just tweet out the code and the first person to use it gets it!

Here are Google’s eligibility rules; the winner must:

  • be U.S. residents
  • be 18 years or older
  • purchase Glass
  • provide a U.S. shipping address OR pick up their Glass at one of our locations in New York, San Francisco or Los Angeles

<3s Threadless for Android Launch!


For the past few weeks I’ve been learning Android development in order to diversify my skill set a bit. Plus I needed to justify buying a Nexus 7 tablet somehow. I decided to learn a bit using the Big Nerd Ranch book, and once I went through enough examples I was confident enough to attempt recreating my Threadless app for Android.

I was planning on reaching feature parity with the iOS app before releasing it, but I decided that a more frequent update cadence is probably better when it’s possible. The Google Play store makes it really easy to alpha and beta test, as well as promote beta builds to production. Rather than taking a week to find out that your app does not meet the requirements of Apple, you can simply push out a build and it’ll be ready in a few hours. This is probably one of the best “features” of being an Android developer.

While Android has its fair share of “WTF” features, I actually kind of like it. I think it’s quickly getting to the point where Android’s advantages (amazing integration with very high quality Google products) will outweigh Apple’s (super awesome third-party applications, albeit running in a tightly confined sandbox).

Design Patterns: iOS and Android – Dispatch Queues and AsyncTask

For the past few weeks, I’ve been learning Android and porting one of my iOS apps to the platform. While there are many differences between the two platforms, it’s also kind of interesting to see common design patterns between them. I figured I should write up the common design patterns that I notice and maybe help out other iOS developers who, like me, are also learning Android.

Today I’ll look at doing asynchronous tasks in each platform. While there are many ways of doing this in both platforms, I’ll take a look at Dispatch Queues for iOS and AsyncTask in Android, since that’s what I’ve been using lately.

In iOS, you can use the dispatch_async call to run code on a background dispatch queue. Say we get an NSArray of JSON objects and want to save them to a Core Data store. We can call dispatch_async on a dispatch queue, process all of the objects and then update the UI by using dispatch_async again on the main queue:
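The original snippet didn’t survive here, but the shape of the pattern looks like this. The queue choice, the `jsonObjects` array, and the `saveObjectToCoreData:` method are placeholders of mine:

```objectivec
// Hop onto a background queue, do the slow Core Data work there,
// then hop back to the main queue before touching the UI.
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    for (NSDictionary *json in jsonObjects) {
        [self saveObjectToCoreData:json]; // placeholder method
    }
    dispatch_async(dispatch_get_main_queue(), ^{
        [self.tableView reloadData]; // UI updates belong on the main queue
    });
});
```

The nice part is that both hops are just blocks, so there’s no subclassing involved.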

In Android, performing a lightweight asynchronous task requires you to subclass AsyncTask. I guess I’m using the term “lightweight” loosely because creating a subclass just to do something asynchronously seems a bit heavy, but at least you get to reuse your code!

You must define three generic types that describe the input type (in this example, a String), the progress type (an Integer), and the result type (a Boolean).
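Here’s a sketch of what such a subclass looks like. The class name and the body are placeholders; only the `<String, Integer, Boolean>` parameters follow the types above:

```java
import android.os.AsyncTask;

// <Params, Progress, Result> = <String, Integer, Boolean>
public class SaveJsonTask extends AsyncTask<String, Integer, Boolean> {

    @Override
    protected Boolean doInBackground(String... urls) {
        // Runs on a background thread. The varargs parameters arrive
        // as an array, so loop over them.
        for (int i = 0; i < urls.length; i++) {
            // ... fetch and save urls[i] here ...
            publishProgress(i + 1); // report progress as an Integer
        }
        return true; // the Boolean result handed to onPostExecute()
    }

    @Override
    protected void onProgressUpdate(Integer... progress) {
        // Runs on the UI thread; update a progress indicator here.
    }

    @Override
    protected void onPostExecute(Boolean success) {
        // Runs on the UI thread once doInBackground() returns.
    }
}
```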

Once you have that AsyncTask set up, you can call its execute() method to run your asynchronous task. One tricky thing to remember is that execute takes a variable number of arguments and hands them to doInBackground as an array. I’m not BFF with Java, so the syntax threw me a bit, but apparently it’s using a feature called varargs that’s been in Java since version 5.
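The varargs behavior is plain Java, so it’s easy to see outside of Android. This toy example is mine, not from the app:

```java
public class VarargsDemo {
    // A String... parameter arrives inside the method as a String[].
    public static int count(String... params) {
        return params.length;
    }

    public static void main(String[] args) {
        System.out.println(count("http://example.com")); // prints 1
        System.out.println(count("a", "b", "c"));        // prints 3
    }
}
```

This is exactly how `execute("http://example.com")` ends up as a one-element array inside doInBackground.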

That’s all for today. I hope this blog post was useful. I certainly found it useful, since I had to do some research to really understand what the heck I was writing about. I’ll probably write about UITableViewDelegate/Datasource vs. ListAdapter next, unless there’s something else that seems more timely.

Impressions on Google Glass

Obligatory double-glasses Glasshole shot.


I had a chance to play around with Google Glass Explorer Edition via my employer. I was able to successfully hook it up to my personal Google account, contrary to stuff I’ve read about the Explorer program not allowing loans, etc. If there’s a specific policy behind that, it doesn’t seem to be enforced on a technical level.

Anyway, I figured I should write down some of my initial impressions on the thing. It’s always interesting to look back and see how well I did in my predictions, like when I thought Twitter was just for narcissists (not sure I was wrong on that one).

The two strong feelings I have from Google Glass are that I wish it was less visible (to others) and I think it will greatly improve on video and photo sharing.

While I understand that technology can work as a fashion accessory (see anyone who owns an iPhone), I also feel like it shouldn’t burden the user with its outward appearance. Everyone writes about how Google Glass will create some kind of panopticon state, but the one wearing them is really the one who feels watched. I once tried walking to get my mail while wearing the gadget, and felt super awkward as I said hi to a neighbor. The awkwardness could have also had something to do with the fact that I have to wear the Google Glass over my normal glasses, which looks super dumb.

On a positive note, I think the ability to take first person videos is going to be the killer feature of Google Glass, if one ends up existing. I took a few videos of myself making dinner, which aren’t really that interesting now, but I can see a sort of lifestream genre bubble up from taking short videos of doing really mundane stuff day to day. Here’s a sample video I took:

As far as developing for the Glass, I found that the Google Mirror API is a bit lacking if you don’t already have some existing app you’d like to integrate. It’s basically glorified push notifications with a few extra location features built in. As an Android noob, I haven’t really pushed anything interesting to the Glass device yet in terms of native apps (just the Hello World one and a few samples). I’d wait for the official GDK to start developing in earnest, and maybe in the meantime, learn Android.

I had some mixed experiences with the Glass, overall. While the technology is neat, I feel there are many social hurdles that the device must pass before the thing can take off. Remember those Bluetooth headsets that you can wear on your ear? Google Glass is basically twice as useful, yet also twice as awkward. I think that a lot of the awkwardness will go away once sub-vocal microphone technology advances to the consumer level (think Metal Gear Solid). Then all Glass will need is a makeover to disguise the camera and screen into a normal set of glasses. Once the technology becomes outwardly invisible, the technology will be able to speak for itself.

Until either the technology makes itself less conspicuous or society decides that it’s socially acceptable, Google Glass wearers will all look like this guy.