Tag Archives: Android

Impressions of Android Wear (LG G Watch) with my Nexus 5

About a week ago, I finally received the LG G Watch that I won in a contest. I replaced my Pebble smartwatch with the Android Wear device and swapped my iPhone for my spare Nexus 5. It’s been about four days since I made the switch, so I thought I’d write up my impressions.

The Watch

LG G

The LG G watch is one of the two Android Wear devices currently available for purchase. My coworker Jason, who also won a watch in the same contest, got the Samsung model. We both would have rather had the Motorola watch, but it isn’t available until later this summer, so Google sent us these instead.

The LG watch is not particularly fashionable. It looks like a generic black rectangle with rounded corners. It comes with a pretty neat charging base that the watch attaches to magnetically. I would’ve preferred some kind of wireless charging, but this gets the job done, and the watch actually charges pretty quickly. The quick charge time is good, because the watch only lasts about 24 hours anyway.

The Software

Android Wear Software

The software for the LG watch is basically an enhanced version of the notification dock in Android. This actually mirrors Google Glass a bit, but the execution is less awkward because you’re not wearing a watch on your face. Each notification that lives on your phone will also appear on the watch. If the notification is enhanced, you might be able to take action on it through the watch. For example, you can reply to a text message or Google Hangout by dictating a message. This is probably the killer app of the watch, though I feel like talking to your watch is still going to be considered antisocial behavior.

Aside from the notifications, you can also tell the watch to take notes, send e-mails, and set alarms. The watch still relies pretty heavily on your Android phone, though. Notes get saved to Google Keep (which I’m not sure is an app that will stick around), and GPS navigation simply opens Google Maps on your phone. I’m guessing that the watch will get smarter in the future, but for now I think that’s a pretty decent feature set.

I’ve been toying around with the idea of writing an app that creates a persistent notification that just shows me how late the bus is going to be in the morning. Every morning I check the AATA website to see how late my bus is going to be. The information is contextual and time sensitive, and I only need it at that one time of day, so it would make sense to make it something that shows up on my watch when I need it and goes away when I don’t. Google Now could probably do this to some extent, but it’s not smart enough to know that I want to leave 5 minutes before my bus actually comes around, not when it’s “scheduled” to arrive.
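As a very rough sketch of the phone-side piece (none of this is a real AATA integration – the icon, the status text, and the BUS_NOTIFICATION_ID constant are all made up), an ongoing notification would do the trick, and Android Wear would mirror it to the watch automatically:

```java
import android.app.Notification;
import android.support.v4.app.NotificationCompat;
import android.support.v4.app.NotificationManagerCompat;

// Somewhere inside an Activity or Service, after fetching the bus status:
Notification notification = new NotificationCompat.Builder(this)
        .setSmallIcon(R.drawable.ic_bus)                     // hypothetical icon resource
        .setContentTitle("Route 4: running 6 minutes late")  // hypothetical status text
        .setOngoing(true)                                    // persistent until the app dismisses it
        .build();
NotificationManagerCompat.from(this).notify(BUS_NOTIFICATION_ID, notification);
```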

Android

In order to fully test out the Android Wear watch, I took my SIM card out of my iPhone 5S and put it into my Nexus 5. So far the transition hasn’t been too difficult as I mostly rely on Google services anyway. There aren’t really any apps on the iPhone that I haven’t been able to use on my Android phone.

I may have said this in a previous blog post, but I really think the quality of Google’s services is catching up to the polish of the iPhone. Consider how bad Apple’s online services are, and how good Google’s are. I can live without iCloud and Apple’s email service (does anyone use their .me address as a real email address?), but I am locked into Gmail, both personally and professionally.

I’m guessing that Google could sharply increase Android’s market share simply by withdrawing support for all of its services on iOS. Meanwhile, Apple is relying more and more on third parties to fill in the expertise that it lacks (I mean, really, who the fuck uses WebObjects?). I can’t really see a long-term scenario where Apple beats Google on services, either directly or by convincing third-party developers to do it for them. As for whether I’m actually switching to Android as my main device, I’d need to see what Apple has in store for its next generation. (Sadly, this LG G watch is not nearly awesome enough to make me switch on its own.)

On top of Android’s awesome built-in services, the material design work coming out of this past I/O is looking pretty sweet. If Android can give iOS a run for its money in terms of look and feel, and in how easy it is to create really nice custom views and animations, then developers might just defect en masse. I guess the only thing Android needs to do now is get away from Java (if Apple can do it with Swift, I’m sure Google can figure it out), because developing in Java really sucks the joy out of being a developer.

Anyway, enough reverse-fanboying. I think it’s time for the…

Conclusion

While I think Android Wear does a lot of things right, and it has a lot of potential, the watch really isn’t all that revolutionary. To put it in terms of the HBO show Silicon Valley, it’s not disrupting or making the world a better place through compression algorithms. It also isn’t a particularly stylish piece of metal and glass, so I wouldn’t wear it if it didn’t serve a purpose. Having said that, Google’s stuff is all about the integrated services, so I’m bullish: Google Now keeps getting smarter, and developers should eventually be able to hook into it, which would give the watch a bit more life than it has right now. Would I buy it if I hadn’t won it? Maybe if I were already an Android user anyway, but I tend to buy too many gadgets as it is.

Android Wear Design Contest!

Last week I went to a GDG Ann Arbor Android meetup on Android Wear. The presentation was put together by Google and covered a bunch of information on the upcoming Android Wear SDK. At the end, we learned that a few vouchers for Android Wear devices would be available: a design contest was announced, with a deadline of Sunday.

Being naturally competitive and really into new technology, I made a design mockup for a home automation app that I thought would be useful to have on a wearable device (the most common form factor happens to be a watch). Here’s my submission that I originally posted to Google Plus:

Home Arrival

The app uses the user’s location as a way to provide feedback only when it’s necessary, and to stay out of the way when it isn’t. In this example, the app can detect when a user is arriving home and can ask if they want to switch the lights on.

Voice Controls

Instead of switching individual lights on and off, the user can create preset groups and activate them with voice commands. Users can set commonly used groups like “living room” and “kitchen.”

Sleepytime Reminder

Night owl users can set reminders to go to bed, and the app can turn off lights if the user decides to stay up late.

Motion Sensor

The app can also use sensors to detect when motion is occurring in the home and alert the user if no one is supposed to be at home. The user can decide whether the motion is a false alarm or if further action should be taken.

There is obviously much more that can be done with a home automation app, but these are just a few scenarios that would work well on an Android Wear device. I’m looking forward to experimenting with Android Wear when it becomes available!

I heard this week that my submission was a winning entry, so I’ll get an Android Wear device as soon as they go on sale sometime this summer. Thanks to Google and the GDG group here in Ann Arbor for setting up the event!

Writing Google Glass Apps With the GDK Sneak Peek

Since the beginning of time, humans have wanted to mount small computers on their heads. In 2013 we got our wish, as Google started releasing Glass to Explorers and developers alike. In November of 2013, Google released a “Sneak Peek” of the GDK, a way to write native code for Google Glass.

While the GDK is still under development and will probably change quite a bit before being finalized, those on the cutting edge will want to familiarize themselves with it.

In this quick start guide, I’ll go through the steps of writing your first GDK app, “Where Am I?” This app will ensure you are never lost: simply say “Okay Glass, where am I?” and the app will find your latitude and longitude and try to find the nearest street address.

Prerequisites

In order to start writing native apps for Google Glass, here’s what you’ll need:

  • Google Glass! There is currently no way to emulate the device and its sensors, so getting a real device is a must. You can request an invite directly from Google here, or search on Twitter for someone with a spare invite to share.
  • ADT – Android Development Tools is basically a plugin for Eclipse that lets you compile and load your native code onto Glass.
  • Source Code (optional) – I’ve included the source code for this app on Github here. If you want to follow along without copying and pasting a bunch of code, you can simply skip back and forth by running git commands that I’ll describe later on.

Getting Started

Google Glass runs on a special version of Android, the “Glass Development Kit Sneak Peek” – a tricked-out version of API 15. You’ll need to download this version with the Android SDK Manager in ADT (you can open it by clicking the icon of the Android with a blue box and an arrow on it).

Glass Sneak Peek

Once you’ve done that, you’ll need to create a new Android project with the correct SDK settings. Use API 15 for the Minimum and Target SDK, and set “Compile With” to the GDK Sneak Peek.

GDK Settings

Congratulations! You’ve created a project that should run on Google Glass! If you try running the app on your device, it may or may not work, since it won’t have a voice trigger associated with it yet. We’ll add a voice trigger in the next step.

Adding a Voice Trigger

If you’re following along with git, you can type the following command to fill out all the code for this step:
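Something along these lines – the tag name below is just a placeholder, so use whichever tag or branch the repo actually defines for this step:

```bash
git checkout step-voice-trigger   # placeholder tag name; check the repo's tags/branches
```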

Because Google Glass is such a new paradigm for user input, we’ll need to set up a voice trigger to allow the user to start our app. To do this, we’ll add some strings to our strings.xml, add an XML resource that describes the voice trigger, and finally add an intent filter to our AndroidManifest.xml to register the voice command.

1. In res/values/strings.xml:
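Here’s roughly what that might look like – the string names are up to you, as long as you keep them consistent with the XML resource and manifest below:

```xml
<?xml version="1.0" encoding="utf-8"?>
<resources>
    <string name="app_name">Where Am I?</string>
    <!-- The phrase that will show up in the "Ok, Glass" menu -->
    <string name="voice_trigger_where_am_i">where am I?</string>
</resources>
```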

2. In res/xml/whereami.xml (you can name this file anything, as long as you refer to the same file in AndroidManifest.xml):
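This resource just points at the trigger phrase defined above (again, assuming the string name from the previous step):

```xml
<?xml version="1.0" encoding="utf-8"?>
<trigger keyword="@string/voice_trigger_where_am_i" />
```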

3. In AndroidManifest.xml:
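Assuming your activity is called MainActivity and your trigger resource lives at res/xml/whereami.xml, the relevant part of the manifest looks something like this:

```xml
<activity
    android:name=".MainActivity"
    android:label="@string/app_name">
    <!-- Launch this activity from the "Ok, Glass" voice menu -->
    <intent-filter>
        <action android:name="com.google.android.glass.action.VOICE_TRIGGER" />
    </intent-filter>
    <!-- Point the voice trigger at the XML resource from step 2 -->
    <meta-data
        android:name="com.google.android.glass.VoiceTrigger"
        android:resource="@xml/whereami" />
</activity>
```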

Now, if you run your code on Glass, you should be able to find the “Where Am I?” command in the Ok, Glass menu!

Glass Voice Trigger

Using Glass Cards

We’ve gotten voice commands working, and that’s half the battle. Next we’ll learn how to use Cards, the native user interface for Glass. We’ll need to remove the existing style, then instantiate a Card object, set up its text and set it as the content view.

1. Remove the line in AndroidManifest.xml that reads:
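In a freshly generated ADT project this is usually the theme attribute on the application element – something like the following (your generated theme name may differ):

```xml
android:theme="@style/AppTheme"
```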

2. Import Card from the GDK and create a new Card instance in onCreate():
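A minimal sketch, assuming the project’s generated activity is MainActivity and using a throwaway placeholder string:

```java
import com.google.android.glass.app.Card;

import android.app.Activity;
import android.os.Bundle;

public class MainActivity extends Activity {

    private Card mCard;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        // Build a Glass-native Card and use it as this activity's content view
        mCard = new Card(this);
        mCard.setText("Looking up your location...");
        setContentView(mCard.toView()); // toView() renders the card as a regular View
    }
}
```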

If you run the app, you should see something like this:

Glass Card

Getting Location and Updating the Card

If you’re following along with git, you can type the following command to fill out all the code for this step:
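As before, the tag name here is just a placeholder – use whichever tag or branch the repo defines for this step:

```bash
git checkout step-location   # placeholder tag name; check the repo's tags/branches
```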

Finally, we want to get Glass’s location and update the card with the appropriate info. Our lost Glass Explorer will be found and all will be well. We’ll do this by requesting location permissions, setting up a LocationManager, and handling its callbacks by updating the card with the latitude, longitude, and a geocoded address, if available.

1. Ask for permissions for user location and internet in AndroidManifest.xml
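These go at the top level of AndroidManifest.xml, as siblings of the application element:

```xml
<uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />
<uses-permission android:name="android.permission.ACCESS_COARSE_LOCATION" />
<uses-permission android:name="android.permission.INTERNET" />
```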

2. Create a LocationManager that uses certain criteria to figure out which location services it can use on Glass in MainActivity.java
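On Glass there isn’t one obvious provider (location often comes through the paired phone), so a common approach is to request updates from every enabled provider that matches your criteria. A sketch, assuming MainActivity implements LocationListener:

```java
import java.util.List;

import android.content.Context;
import android.location.Criteria;
import android.location.LocationManager;

// ... inside MainActivity, e.g. at the end of onCreate():
LocationManager locationManager =
        (LocationManager) getSystemService(Context.LOCATION_SERVICE);

Criteria criteria = new Criteria();
criteria.setAccuracy(Criteria.ACCURACY_FINE);

// Ask every enabled provider that satisfies the criteria for updates
List<String> providers = locationManager.getProviders(criteria, true /* enabledOnly */);
for (String provider : providers) {
    locationManager.requestLocationUpdates(provider, 1000 /* ms */, 1 /* meters */, this);
}
```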

3. Implement callbacks that will update the card when you get a location update in MainActivity.java
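Here’s one way the LocationListener callbacks might look, reusing the mCard field from the earlier sketch. The geocoding is best-effort (it needs a network connection), so the card falls back to plain latitude and longitude:

```java
import java.io.IOException;
import java.util.List;

import android.location.Address;
import android.location.Geocoder;
import android.location.Location;
import android.os.Bundle;

@Override
public void onLocationChanged(Location location) {
    double latitude = location.getLatitude();
    double longitude = location.getLongitude();
    String text = String.format("Latitude: %f\nLongitude: %f", latitude, longitude);

    // Try to turn the coordinates into a street address
    try {
        Geocoder geocoder = new Geocoder(this);
        List<Address> addresses = geocoder.getFromLocation(latitude, longitude, 1);
        if (addresses != null && !addresses.isEmpty()) {
            text += "\n" + addresses.get(0).getAddressLine(0);
        }
    } catch (IOException e) {
        // No network or no geocoder backend; latitude/longitude alone will have to do
    }

    mCard.setText(text);
    setContentView(mCard.toView());
}

@Override public void onStatusChanged(String provider, int status, Bundle extras) { }
@Override public void onProviderEnabled(String provider) { }
@Override public void onProviderDisabled(String provider) { }
```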

The app should now start getting location updates on the Glass (it can take a while to get a good fix) and update the Card once it finds you. The result should look something like this:

Glass Card Complete

Congratulations! You’ve finished your first Glass app. Future generations will wonder how they ever knew where they were without a head mounted computer printing out their exact location on the earth into a glowing prism directly above their field of vision at all times!

For more information about the GDK, check out Google’s official documentation! Glass on!


<3s Threadless for Android Launch!

Panda

For the past few weeks I’ve been learning Android development in order to diversify my skill set a bit. Plus I needed to justify buying a Nexus 7 tablet somehow. I decided to learn a bit using the Big Nerd Ranch book, and once I went through enough examples I was confident enough to attempt recreating my Threadless app for Android.

I was planning on reaching feature parity with the iOS app before releasing it, but I decided that a more frequent update cadence is probably better when it’s possible. The Google Play store makes it really easy to alpha and beta test, as well as promote beta builds to production. Rather than taking a week to find out that your app does not meet the requirements of Apple, you can simply push out a build and it’ll be ready in a few hours. This is probably one of the best “features” of being an Android developer.

While Android has its fair share of “WTF” features, I actually kind of like it. I think it’s quickly getting to the point where Android’s advantages (amazing integration with very high-quality Google products) will outweigh Apple’s (super awesome third-party applications, albeit running in a tightly confined sandbox).

Design Patterns: iOS and Android – Dispatch Queues and AsyncTask

For the past few weeks, I’ve been learning Android and porting one of my iOS apps to the platform. While there are many differences between the two platforms, it’s also kind of interesting to see common design patterns between them. I figured I should write up the common design patterns that I notice and maybe help out some iOS developers who are also learning Android like me.

Today I’ll look at doing asynchronous tasks on each platform. While there are many ways of doing this on both platforms, I’ll take a look at dispatch queues on iOS and AsyncTask on Android, since that’s what I’ve been using lately.

In iOS, you can use the dispatch_async call to run code on a background dispatch queue. Say we get an NSArray of JSON objects and want to save them to a Core Data store. We can call dispatch_async on a dispatch queue, process all of the objects and then update the UI by using dispatch_async again on the main queue:
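A minimal sketch of that pattern – jsonObjects and saveManagedObject: are placeholders for whatever your app actually does, and a real implementation would use a background managed object context rather than touching Core Data from an arbitrary queue:

```objc
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    // Heavy lifting off the main thread
    for (NSDictionary *json in jsonObjects) {
        [self saveManagedObject:json]; // placeholder for your Core Data insert/update
    }

    dispatch_async(dispatch_get_main_queue(), ^{
        // Back on the main queue before touching the UI
        [self.tableView reloadData];
    });
});
```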

In Android, performing a lightweight asynchronous task requires you to subclass AsyncTask. I guess I’m using the term “lightweight” loosely because creating a subclass just to do something asynchronously seems a bit heavy, but at least you get to reuse your code!

You must define three generic types, which describe the input (in this example, a String), the progress type (an Integer), and the result (a Boolean):
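Here’s a sketch of such a subclass, declared inside an Activity – the class name and the work it pretends to do are placeholders:

```java
import android.os.AsyncTask;

// AsyncTask<Params, Progress, Result>
private class SaveDesignsTask extends AsyncTask<String, Integer, Boolean> {

    @Override
    protected Boolean doInBackground(String... urls) {
        // execute("a", "b", "c") arrives here as a String[] thanks to varargs
        for (int i = 0; i < urls.length; i++) {
            // ... download and save each item (placeholder) ...
            publishProgress((int) (((i + 1) / (float) urls.length) * 100));
        }
        return true;
    }

    @Override
    protected void onProgressUpdate(Integer... progress) {
        // Runs on the UI thread; safe to update a progress bar here
    }

    @Override
    protected void onPostExecute(Boolean success) {
        // Runs on the UI thread once doInBackground returns
    }
}
```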

Once you have that AsyncTask set up, you can call
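```java
// using the hypothetical SaveDesignsTask from the sketch above
new SaveDesignsTask().execute("http://example.com/a.json", "http://example.com/b.json");
```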

to run your asynchronous task. One tricky thing to remember is that execute takes a list of objects and sends it to doInBackground as an array. I’m not BFF with Java so the syntax threw me a bit, but apparently it’s using a feature called varargs that’s been in Java for a while now.

That’s all for today. I hope this blog post was useful. I certainly found it useful, since I had to do some research to really understand what the heck I was writing about. I’ll probably write about UITableViewDelegate/Datasource vs. ListAdapter next, unless there’s something else that seems more timely.