If I were developing for Google Glass, I would be on a Delivery Courier’s Bike

The term ‘Google Glass’ actually refers to the Glass project Google announced recently. The ‘Glass’, or ‘glasses’ as they really are, is a way to use augmented reality data without having to walk around holding your smartphone up in the air and pointing its camera at things. Instead, the data is presented to the wearer like a fighter jet’s heads-up display, with numbers and information floating over the area you are viewing. Look at a storefront, and whatever information is available for that location appears: name, address, contact information, hours, and other bits. A movie theater might even show a list of the movies playing; the user tips their head a bit to scroll through the list and see show times.
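Just to make the idea concrete, here is a guess at what the overlay record for one location might look like. The shape and the field names are mine alone, not anything Google has published:

```java
import java.util.List;

// Hypothetical record for one point of interest shown in the heads-up view.
// Every field name here is an illustrative guess, not a published Glass API.
public class PlaceOverlay {
    String name;            // e.g. the theater's name
    String address;
    String phone;
    String hoursToday;      // e.g. "10am - 11pm"
    List<String> extras;    // e.g. movie titles plus show times for a theater

    // A small head tilt would scroll this index through the extras list.
    int scrollIndex = 0;

    String visibleExtra() {
        if (extras == null || extras.isEmpty()) return "";
        return extras.get(scrollIndex % extras.size());
    }
}
```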

If you have not seen the Project Glass eyewear, it is not a traditional pair of eyeglasses. There is a single bar along the right side of the wearer’s head and a nose piece to hold everything in place. Above the right eye, just over the center of the wearer’s eyeball, sits what looks to others like a small prism.

The demo Google did at its developer conference was a few folks jumping out of an airplane with the glasses on. While jumping out of a perfectly good plane was very exciting, it isn’t something many of us do, so the power of Project Glass was lost; who can relate?

The current users of heads-up display glasses and augmented reality are folks doing specialized wiring, who can see information overlaid on the loom they are looking at and what should be there to test or repair. Those ‘glasses’ look more like very large goggles, so they aren’t realistic to wear outside a controlled environment. Everyone else is holding a smartphone up in the air as they walk around town, hoping to stumble upon some bit of information about the things they are looking at, so they feel rewarded for all the staring they attract.

OK, on to the reason for this post:

With all of the power of Google that can be brought to a person wearing the new device, the data isn’t the issue; it is the delivery of the data. A person sitting at a desk with a nice big Internet pipe is the easy case, and only covers people that are at the office all of the time. Then there is the walking person: they are moving slowly enough that data can reach them without their noticing that they slowed their pace a bit for it to arrive. The walker can change the direction they are heading, which the system can forecast when they hesitate somewhere they shouldn’t along the projected path. Any system smart enough to be making decisions about what information is important to that walker will also be watching for alerts coming in through the data feed, like key words in an email or text message that would cause a person to alter their original plan.
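That last idea, scanning the feed for words that might change a person’s plan, is easy to sketch. This keyword watcher is a toy version with made-up trigger words, nothing from any real Glass API:

```java
import java.util.Set;

// Toy sketch: scan an incoming email or text for words that might cause the
// wearer to alter their plan. The trigger list is a placeholder I invented.
public class AlertWatcher {
    private static final Set<String> TRIGGERS =
            Set.of("cancelled", "urgent", "reschedule", "meet now");

    public static boolean shouldInterrupt(String message) {
        String lower = message.toLowerCase();
        for (String word : TRIGGERS) {
            if (lower.contains(word)) return true;
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(shouldInterrupt("Lunch is cancelled, meet at 2?")); // true
    }
}
```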

Enter the bike mounted messenger/delivery person.

The rider needs to receive mapping information very quickly, as a bike has a speed multiplier over walking. While a bike can weave its way through a traffic jam, a packed road still slows the ability to make money. The course can change a block at a time, so the data must update just as fast, all while cutting around tall buildings.
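One way to handle that update problem is to prefetch map data a few blocks ahead along the projected path, so a reroute only has to fill in the edges. A rough sketch; the Tile type, the route as a list of tiles, and the lookahead size are all my own stand-ins for whatever a real mapping service provides:

```java
import java.util.ArrayDeque;
import java.util.List;
import java.util.Queue;

// Rough sketch of block-at-a-time prefetching. Tile and the lookahead
// constant are invented stand-ins, not any real mapping service's API.
public class RoutePrefetcher {
    static final int LOOKAHEAD_BLOCKS = 3; // tune for bike speed vs. bandwidth

    record Tile(int x, int y) {}

    // Given the rider's position along the route, queue the next few tiles
    // so they are already on the device when the rider reaches them.
    static Queue<Tile> tilesToFetch(List<Tile> route, int currentIndex) {
        Queue<Tile> pending = new ArrayDeque<>();
        int end = Math.min(route.size(), currentIndex + 1 + LOOKAHEAD_BLOCKS);
        for (int i = currentIndex + 1; i < end; i++) {
            pending.add(route.get(i));
        }
        return pending;
    }
}
```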

A delivery person being paid per drop can’t wait to finish one job to get the next. Incoming pickup jobs and locations need to appear as they happen; the rider will decide which jobs to take without slowing down or taking their hands off the handlebars.
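How might those offers be ordered before they float into the rider’s view? Here is one invented scoring idea, payout per mile of detour, purely to show the shape of the decision:

```java
import java.util.Comparator;
import java.util.List;

// Sketch: rank incoming pickup offers by payout per extra mile of detour,
// so the best candidate surfaces in the display first. The scoring is
// entirely my invention, not any courier dispatch system's logic.
public class JobRanker {
    record Job(String pickupAddress, double payoutDollars, double detourMiles) {
        double score() {
            return payoutDollars / Math.max(detourMiles, 0.1); // avoid divide-by-zero
        }
    }

    static List<Job> ranked(List<Job> offers) {
        return offers.stream()
                .sorted(Comparator.comparingDouble(Job::score).reversed())
                .toList();
    }
}
```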

Between the work, there have to be the social and personal aspects of a person’s life: email and social site updates.

Data communications should flow in both directions: progress, location, confirmation of pickup and delivery. There is no need to carry a scanner when an image of the package or package ID will have a Google Goggles record the office can search and reference later.
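The rider-to-office half of that conversation could be as small as one status message per milestone, with a photo of the label standing in for a barcode scan. A sketch; every field here is a placeholder of mine:

```java
// Sketch of the rider-to-office half of the conversation: one small status
// message per milestone. The Status values and the photo reference are
// placeholders; the photo itself would be matched to a package record by
// image search back at the office later.
public class DeliveryUpdate {
    enum Status { EN_ROUTE, PICKED_UP, DELIVERED }

    String jobId;
    Status status;
    double lat, lon;        // rider's position when the event fired
    long timestampMillis;
    String labelPhotoUri;   // snapshot of the package label, searchable later
}
```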

All of these features are available on Android-based devices now. The test is being able to interact with the environment fast enough to keep a bike on schedule. Let’s not forget to throw in a sign scanned with a glance, then reported on later when options for the evening are being explored.

Rethinking User Interfaces with Augmented Reality

A few years ago, an Apple OS technician mentioned that we needed to rethink the way we organize our electronic lives. The talk wasn’t about cloud computing’s centrally located mass storage; it was about how we interact with the information. There is so much info available to us, we need to think about how we find and access it.

The speech talked about stepping ‘through’ windows and folders rather than opening them. This immediately made me think of a software add-on from Aaron back in the Mac OS 8 days. It allowed us to tap and hold on a folder, which would open on top of the current folder as if we were stepping through the folders, deeper and deeper, each opening giving access to a folder at the next level down. Releasing the mouse left that folder open. File movement was also possible this way: click, drag, hold over a folder, drilling down to the final location.
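From memory, the behavior boiled down to a hover timer during a drag: linger over a folder long enough and it pops open beneath the cursor. A toy reconstruction, certainly not the add-on’s actual code; the delay and the types are invented:

```java
// Toy reconstruction of "spring-loaded" drill-down during a drag: hover over
// a folder long enough and it opens beneath the cursor. The threshold and
// types are invented; this is not the original add-on's implementation.
public class SpringLoadedFolders {
    static final long OPEN_DELAY_MS = 600;

    private String hoveredFolder;
    private long hoverStartMs;

    // Called on every drag movement with whatever folder is under the cursor.
    // Returns the folder to open, or null if the hover hasn't lasted long enough.
    String onDragOver(String folder, long nowMs) {
        if (!folder.equals(hoveredFolder)) {
            hoveredFolder = folder;   // moved to a new folder: restart the timer
            hoverStartMs = nowMs;
            return null;
        }
        if (nowMs - hoverStartMs >= OPEN_DELAY_MS) {
            return folder;            // lingered long enough: open this one
        }
        return null;
    }
}
```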

The phrase I used, ‘drilling down’, makes me think that this is not the future of reaching ‘through’ our file management. It has been said that most of the old file-and-folder thinkers might have an issue jumping into this sort of information management, so maybe I’m just not getting it yet.

Apple’s new Snow Leopard has introduced a better way of managing open windows and app screens. You can now have many open files (Word documents, Web sites, images…) and hide them down in the Dock inside the application’s icon. Clicking on the application then presents all of those files as images side by side on the screen, easy to jump to. This takes us away from always having to select your Web browser and getting many browser windows stacked one on top of the other, requiring you to cycle through them to get to the page you needed to reference.

Microsoft is attempting a different direction where there are no folders within folders. Instead, all files are stored in one giant area and you access them via search and keywords. Having played for years with finding files from the text within them, or trying to do a good enough job of keywording for later reference, this has been a big ‘fail’ for me. I still use a pad and pen a lot, and each of those pages is scanned and dated for later reference. My handwriting, or actually printing, isn’t clean enough for OCR. As well, I use a lot of arrows and quick scribbled pictures to remember better. These do not allow a program to automatically know I’m looking for that page later. Instead, I have to keyword the page… will I ever get all the keywords each of those pages matters to? Nah. And info on a page may be thought of differently later, as the conversation changes and it becomes important in a different way.
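The mechanism behind the no-folders approach is basically an inverted index: every keyword maps to the set of files tagged with it. A bare-bones sketch of the generic technique (not Microsoft’s implementation), which also shows why manual tagging is the weak link: a file you never keyworded can never come back in a search.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Bare-bones inverted index: keyword -> set of files. Generic technique only,
// not Microsoft's implementation. A file that was never tagged is invisible
// to search, which is exactly the problem with scanned handwritten pages.
public class KeywordIndex {
    private final Map<String, Set<String>> index = new HashMap<>();

    void tag(String file, String... keywords) {
        for (String k : keywords) {
            index.computeIfAbsent(k.toLowerCase(), x -> new HashSet<>()).add(file);
        }
    }

    Set<String> search(String keyword) {
        return index.getOrDefault(keyword.toLowerCase(), Set.of());
    }
}
```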

Finally, we have a ‘name’ for one rethink of data display: Augmented Reality (Wikipedia says: “A term for a live direct or indirect view of a physical real-world environment whose elements are merged with (or augmented by) virtual computer-generated imagery – creating a mixed reality”).

The general idea of interactive TV, where you’re watching a sporting event and can click on the players for more info and stats, isn’t quite there yet. One place multiple companies have jumped in is navigation. When GPS units were first introduced, they were electronic paper maps: flat. Then they started to change a bit, becoming more 3D so it looked like you were traveling down a road. The latest versions show the road and signs you expect to see, with a moving arrow just ahead of you so you know where and when to take the path the device recommends. These images are still lower-resolution artist creations; you can imagine Google’s Street View group is working hard to get their real images into one of the mobile GPS directions app providers.

Getting back to providing information in our new 3D world, one developer has created an Augmented Reality iPhone app. With RobotVision, you look at the world through the device’s camera, so you see the real world, not a drawing or a previously photographed version. RobotVision overlays information on the screen. Currently, you can choose to see a variety of different business types. Stand on a street corner and turn 360 degrees to see what is down the road from where you stand. There is also a feature for seeing tweets and Flickr images uploaded in the area. Just tap to get more info… let your imagination really go with this. Tie your computer’s UI to where you are: documents previously created in the area, or found by words within those documents. You won’t be searching at all; everything is presented to you as an option, basically assisting you with more info before you thought to look for it.
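Under the hood, an app like this mostly has to answer one question per frame: which nearby points fall inside the camera’s field of view? A simplified sketch using GPS position and compass heading; the bearing math is the standard great-circle formula, while the POI type and the 60-degree field of view are my assumptions, not RobotVision’s actual internals:

```java
// Simplified core of a camera-overlay AR view: keep only the points whose
// bearing from the user falls within the camera's field of view. The Poi type
// and the 60-degree FOV are assumptions, not RobotVision's actual internals.
public class ArFilter {
    static final double FOV_DEGREES = 60.0;

    record Poi(String name, double lat, double lon) {}

    // Standard initial great-circle bearing from (lat1,lon1) to (lat2,lon2), in degrees.
    static double bearing(double lat1, double lon1, double lat2, double lon2) {
        double p1 = Math.toRadians(lat1), p2 = Math.toRadians(lat2);
        double dl = Math.toRadians(lon2 - lon1);
        double y = Math.sin(dl) * Math.cos(p2);
        double x = Math.cos(p1) * Math.sin(p2) - Math.sin(p1) * Math.cos(p2) * Math.cos(dl);
        return (Math.toDegrees(Math.atan2(y, x)) + 360.0) % 360.0;
    }

    // True if the point sits within half the field of view of where the
    // compass says the user is facing, wrapping correctly around north.
    static boolean inView(double userLat, double userLon, double headingDeg, Poi p) {
        double diff = Math.abs(bearing(userLat, userLon, p.lat(), p.lon()) - headingDeg);
        diff = Math.min(diff, 360.0 - diff);
        return diff <= FOV_DEGREES / 2.0;
    }
}
```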