The term ‘Google Glass’ actually refers to Google’s recently announced Project Glass. The ‘Glass’ (or ‘Glasses’, as they really are) is a way to use Augmented Reality data without having to walk around holding your smartphone up in the air and pointing its camera at things. Instead, the data is presented to the wearer like a fighter jet’s heads-up display: numbers and information floating over the area you are viewing. Look at a storefront, and any information available for the location appears, like its name, address, contact information, and hours. A movie theater might even show a list of movies playing; the user tips their head a bit to scroll through the list and see show times.
If you have not seen the Project Glass eyewear, it is not a traditional pair of eyeglasses. There is a single bar along the right side of the head and a nose piece to hold everything in place. Just above the center of the wearer’s right eye sits what looks to others like a small prism.
The demo Google did at its developer conference was a few folks jumping out of an airplane while wearing the glasses. While jumping out of a perfectly good plane was very exciting, it isn’t something many of us do, so the power of Project Glass was lost. Who can relate?
The current users of heads-up display glasses and augmented reality are people doing specialized wiring, who can see information overlaid on the loom they are looking at and what should be there to test or repair. Those ‘glasses’ look more like very large goggles, so walking around outside a controlled environment with them isn’t realistic. Meanwhile, non-‘glasses’ wearers hold their smartphones up in the air as they walk around town, hoping to stumble upon some bit of information about the things they are looking at so they feel rewarded for everyone staring at them.
OK, onto the reason for this post:
With all of the power of Google that can be brought to a person wearing the new device, the data isn’t the issue; it is the delivery of the data. A person sitting at a desk with a nice big Internet pipe is only handy when dealing with people that are at the office all of the time. Then there is the walking person: they move slowly enough that data can reach them without their noticing that they slowed their pace a bit for it to arrive. The walker can change the direction they are heading, which the system can forecast when they hesitate somewhere they shouldn’t along the projected path. Any system smart enough to be making decisions about what information is important to that walker will also be watching for alerts coming in through the data feed, like keywords in an email or text message that would cause a person to alter their original plan.
Enter the bike mounted messenger/delivery person.
The rider needs to receive mapping information very quickly, as a bike has a speed multiplier over walking. While a bike will weave its way through a traffic jam, a packed road can still slow the ability to make money. As the course can change a block at a time, the data must update just as fast, all while cutting around tall buildings.
A delivery person being paid per drop can’t wait to finish one job before getting the next. Incoming pickup jobs and locations need to appear as they happen, and the rider will decide which jobs to take without slowing down or taking their hands off the handlebars.
Between the work, there are the social and personal aspects of a person’s life: email and social site updates.
Data communications should flow in both directions: progress, location, confirmation of pickup and delivery. There is no need to carry a scanner when an image of the package or package ID will have a Google Goggles record that an office can search and reference later.
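As a rough sketch of what that two-way traffic might look like (every name and field here is hypothetical, not any real dispatch API), a rider-to-office status update could be nothing more than a small JSON message:

```python
import json
import time

def status_update(rider_id, event, lat, lon, package_id=None):
    """Build a hypothetical rider-to-office status message.

    `event` might be something like "pickup_confirmed" or "delivered";
    the location lets the office track progress without asking.
    """
    msg = {
        "rider": rider_id,
        "event": event,
        "location": {"lat": lat, "lon": lon},
        "timestamp": time.time(),
    }
    if package_id is not None:
        # The package ID (or a photo reference) is what the office
        # would search on later instead of a scanner record.
        msg["package"] = package_id
    return json.dumps(msg)

# Example: confirming a pickup at a given street corner.
print(status_update("rider-42", "pickup_confirmed", 40.7128, -74.0060,
                    package_id="PKG-1001"))
```

The same shape works in the other direction for incoming job offers, which is what keeps the rider’s hands on the handlebars: the device just renders the fields.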
All of these features are available on Android-based devices now. The test is being able to interact with the environment fast enough to keep a bike on schedule. Let’s not forget to throw in a sign scanned at a glance, then reported on later when options for the evening are being explored.